If you’re ever in the situation of needing to pipe a large amount of data into a program, where you’d usually use cat or just redirect from a file, but would like some idea of how long it may take, then may I recommend to you the “pv” command (packaged in Debian/Ubuntu/RHEL/etc)?
For instance, here is restoring a 9GB MySQL dump into a MariaDB database:
root@db3:/var/tmp# pv db4.sql | mysql
570MB 0:02:06 [5.01MB/s] [> ] 5% ETA 0:34:28
Suddenly you’ve got the data rate, the percentage complete and an ETA so you can go off and get a coffee whilst it works..
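It composes nicely into longer pipelines too; for instance something like this (a hypothetical compressed dump, with pv reporting progress on the compressed side since that’s the file whose size it knows):

pv db4.sql.gz | gunzip | mysql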
I realised I had over 60 episodes of Get Smart recorded which I was never going to get around to watching, so I wanted to delete them quickly. I had a quick poke at MythWeb but that didn’t seem to have the functionality, however a quick google revealed this forum post which says:
When in the “select recording to watch” screen, mark the recording with a slash (“/”).
Mark all that you want to delete.
Press M to bring up the Recordings list menu.
Select playlist options
Works like a charm!
There’s also Craig’s set of command line tools that can assist with this: http://taz.net.au/mythtv-tools/.
Last August PGI announced an update to its “PGI OpenCL Compiler for ARM” (PGCL 12.7), but if you go looking for that on the PGI news page you won’t find it. In fact, if you go to their products page and follow the link for the “PGI OpenCL Compiler for ARM” you’ll find it’s gone too..
For the record that part of the products page currently looks like:
PGI Compilers and Tools for Mobile and Embedded Platforms
PGI OpenCL Compiler for ARM
PGCL™ is an OpenCL™ framework for compiling and running OpenCL 1.1 embedded profile
applications on the ST-Ericsson NovaThor™ U8500 and follow-on platforms using a single
ARM core as the OpenCL host and multiple ARM cores as an OpenCL computing device
The interesting thing is that this has happened recently: Google’s cache of the news page (dated July 25th 2013) still has the announcement listed:
The Portland Group Updates its OpenCL Compiler for Multi-core ARM
August 21, 2012
Latest PGCL includes automatic generation of NEON/SIMD instructions
The Portland Group® (PGI), a wholly-owned subsidiary of STMicroelectronics and the leading
independent supplier of compilers and tools for high-performance computing, today announced
the release of PGCL 12.7. PGCL™ is the PGI OpenCL framework for multi-core ARM-based
Systems-on-Chips (SoCs), currently available on ST-Ericsson NovaThor™ platforms. PGCL includes
a PGI OpenCL compiler for multi-core ARM CPUs as a compute device and complements OpenCL…
So something changed in the last week, oddly enough around the same time that NVIDIA announced it was buying PGI..
If you’ve been using SpamAssassin and have been reporting to SpamCop then you’ll have found overnight that you got a heap of bounces back saying things like:
<email@example.com> (expanded from
<firstname.lastname@example.org>): unknown user: "devnull"
It turns out that the email@example.com address appears to be something that the SpamAssassin developers set without consulting SpamCop, and SpamCop have just been blackholing those reports for an unknown amount of time. Last night that blackholing went away, and so now IronPort are rejecting them, which was how I learnt of this. I’m not impressed by what the SA developers did here; it should have required you to put in a registered SpamCop address and not reported anything if that wasn’t set.
I’ve disabled my SpamCop reporting by commenting out this line in
/etc/mail/spamassassin/v310.pre on my Debian mailserver:
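#loadplugin Mail::SpamAssassin::Plugin::SpamCop

(that loadplugin line is the stock one that pulls in the SpamCop reporting plugin, shown here already commented out)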
If you use SpamAssassin and don’t have a registered SpamCop account you’ll want to do the same.
It’d been a while since I’d last told Digikam to scan my collection for faces, and having just upgraded to 3.2.0 I thought it was about time to have another shot at it. However, I’d noticed it was taking an awfully long time and seemed to only be using one of the eight cores on this system (Ivy Bridge i7-3770K running Kubuntu 13.04), so I thought I’d see if simply taking advantage of OpenMP could improve things with multithreading.
To do that I just started a new konsole and (as a first step) told OpenMP to use all the cores with:
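export OMP_NUM_THREADS=8   # assumed value: the standard OpenMP thread-count variable, 8 matching the cores this box presents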
Running digikam from that session and starting a face scan showed that yes, it was using all 8 cores, but not to any great extent. Running iotop showed it doing about 5MB/s in reads, and latencytop showed that it was spending most of its time in fsync(). Now that’s good, because it’s making sure that the data has really hit the rust to ensure everything is consistent.
However, in this case I can rebuild the entire face database should I need to, and I have about 66GB of photos to scan, plus I wanted to see just how fast this could go. 😉 So now it’s time to get a little dangerous and try Stewart Smith’s wonderfully named “libeatmydata” library, which gives you a library (surprise surprise) and helper program that lets you preload an fsync() function that really only does return(0); (which, you may be interested to know, is still POSIX compliant).
So to test that out I just needed to do:
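eatmydata digikam   # the eatmydata wrapper preloads the library before running the given command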
and suddenly I had 8 cores running flat out. iotop showed that Digikam was now doing about 25-30MB/s in reads, and latencytop showed that most of its waiting was now on user space lock contention, i.e. locks protecting shared data structures to stop threads from stomping on each other and going off into the weeds. Interestingly the disks are a lot quieter than before too. Oh, and it’s screaming through the photos now.
WARNING: Do not use eatmydata for anything you care about; it will do just what it says in the name should your power die, system hang, universe end, etc..
No, not where I work for once, but a friend of mine is looking for an HPC sysadmin in his group in the Victorian State Government:
This role requires advanced skills in system and network administration and scripting, clustered computer systems, security, virtualisation and Petabyte-scale storage. It is highly desirable that you have acquired these skills in a Life sciences environment. The heterogeneous environment requires both Linux and Windows skills. You should have the ability to design and implement solutions for automated transfer of data within and between systems and to ensure the security of both internal and Internet-facing systems. In this complex environment, working closely in teams of multi-disciplinary scientists to deliver computing solutions, including advanced troubleshooting and diagnostic skills, will be required. Supervision of other members of the team will also be necessary.
They’ve got a 1500+ core Linux cluster.. 😉
For those WordPress admins who are lucky enough to only ever connect from certain defined IP addresses (IPv4 or IPv6), you can lock down access to the wp-admin and wp-login.php URLs in your Apache configuration with just a pair of blocks like these (Apache 2.2 syntax; the Location containers here are an illustration, adjust to match how your site is laid out):

<Location /wp-admin>
    Order deny,allow
    Deny from all
    Allow from 127.0.0.0/255.0.0.0 ::1/128 10.1.2.3/32 1234:5678:90ab:cdef::/64
</Location>

<Location /wp-login.php>
    Order deny,allow
    Deny from all
    Allow from 127.0.0.0/255.0.0.0 ::1/128 10.1.2.3/32 1234:5678:90ab:cdef::/64
</Location>
Hopefully that helps someone!
From 1992 through to 1994 I was working at the Computer Unit at the University of Wales (well, I wrangled an “Employment Training” position there on my own initiative) as a sysadmin, and was running Linux on an IBM XT (from very dodgy memory). A friend of mine, Piercarlo Grandi, suggested to me (semi-seriously I suspect) that you could now build a PC large enough to support quite a number of users, and that the Computer Unit could use it as a central server (they were running DEC 5830s with Ultrix), so I knocked up a text file and discussed it with my colleagues. They didn’t take it very seriously – little did any of us suspect how much that would change.
Well tonight I indulged in a bit of computer archaeology and managed to get the data off my Amiga hard disk (from a GVP A530 expansion unit), and browsing around I happened to stumble over that text file, dated 8:20pm on the 8th August 1993. It’s quite touchingly naive in places, and my numbers are pretty ropey..
Preliminary Hardware Configuration for a Main Service Linux Machine
Item                        Each  Number  Total
Case                         100       1    100
Keyboard + Mouse             100       1    100
Floppies                     100       1    100
DAT Drives                   750       4   3000
EISA SCSI Controllers        300       2    600
Memory (Mb)                   25     256   6400
Pentium EISA Motherboard    1000       1   1000
3.5Gb SCSI-II Disks         1800       5   7000
Screen+SVGA Card            1000       1   1000
EISA Ethernet Card           200       2    400
CD-ROM Drive                 300       1    300
Projected to be able to support between 200-400 users running Linux 0.99.p12
(Alpha release kernel with patched IP - appears stable)
(1) I've seen reports that the ethernet driver code may suffer from a
memory leak, but I've not seen any evidence for this yet as my
machine hasn't been turned on for a long enough period for it to
cause any problems.
(2) As it is so new there is very little commercial software available
for it, but there is a quite sizeable free software base with many
of the GNU packages already ported for it, and this is generally of
high quality.
(3) The Linux kernel is well thought out, and includes support for shared
libraries (which Ultrix sadly never picked up) which significantly
reduces the amount of memory applications need.
(4) A Linux box of the size proposed for the service machine has not
been attempted yet (as far as I know), but ones of the size of the
proposed testbed machine are already in usage on the Internet. I
believe that Linux can handle this scaling up with no problem.
(5) There are apparently companies within the UK who sell support services
for Linux, I will investigate further.
(6) There is already a large amount of Linux expertise on the Internet,
including the comp.os.linux newsgroup, the linux-activists mailing
list and even an IRC channel dedicated to Linux users.
This post is dedicated to Rob Ash, my then boss, who took a chance taking me on after my time as a student mucking around on computers when I was meant to be doing my Physics degree, and who was a great mentor for me.
For almost a year now I’ve been a member of the Mount Burnett Observatory, a community project at the old Monash University astronomical observatory at Mount Burnett in the Dandenong Ranges. It’s great fun with both the original 18″ telescope and new 6″ and 8″ Dobsonian telescopes (some thoughtfully sponsored by the Bendigo Bank for education and outreach purposes).
It’s had a Facebook presence for a while, but nothing on Twitter, so after speaking to the webmaster and the president I’ve now set up a Twitter presence as @MBObservatory.
So if you’re into astronomy and around Melbourne (especially the south-eastern suburbs, though we do have people travelling in from quite a way) and use Twitter please do follow us!
Greg Kroah-Hartman, the maintainer of the stable releases of the Linux kernel (the point releases after a 3.x release, e.g. 3.6.5, etc) is looking for help for about 6 months as he’s getting overwhelmed.
I’m looking for someone to help me out with the stable Linux kernel release process. Right now I’m drowning in trees and patches, and could use some one to help me sanity-check the releases I’m doing.
Specifically, I’m looking for someone to help with:
- test boot the -rc stable kernels to make sure I didn’t do anything foolish.
- dig through the Linux kernel distro trees and send me the git commit ids, or the backported patches, of things they are shipping that are not in the stable and longterm kernel releases.
- do code review of the patches going into the stable releases.
If you can help out with this, I’d really appreciate it.
You’ll need to show you’ve had kernel patches accepted, are running the latest stable release candidate kernel and can find distro patches (details at his website). You’ve got until November 7th to apply!
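If you want to see what’s queued up for the next stable releases so you can test it, Greg’s stable-queue tree is one place to look; for example (assuming you have git installed):

git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git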