On 20 February 2015 at 09:47, Mike Hodson <myst...@gmail.com> wrote:
> Hello Didier et al,
>
> On Fri, Feb 20, 2015 at 12:55 AM, didier chavaroche
> <didier.chavaro...@distri-service.com> wrote:
>
>> The way I use the dd command is the following:
>>
>> I developed a JAVA application which, in a thread class, calls a
>> script with two arguments, SOURCE & TARGET.
>>
>> In this script I use the command "sudo dd if=SOURCE of=TARGET
>> bs=4096 conv=notrunc,noerror &"
>>
>> The script then identifies the PID of the dd command and sends it
>> the USR1 signal through kill every 5 seconds to print out the
>> progress.
>
> This indeed sounds like concurrent processes, definitely more complex
> and involved than a simple 'dd &' repeated 22 more times, but the
> same I/O activity should take place regardless.
>
> As for the artificial 'wall' you seemed to hit around 11 or so
> processes: while I can't provide proof, perhaps you can look into
> this as a possibility if you are so inclined. I think it is the disk
> cache that even allows you to reach 100 in the first place.
>
> I can't help thinking that if your processes aren't exactly lock-step
> matched in input vs output, you will likely have one task that starts
> streaming the file and populating it into your system's disk cache.
> The rest of the processes will start writing to their outputs and
> potentially slow down / become bogged down / no longer be requesting
> the same bytes that already exist in the disk cache by about the time
> the 12th process starts.
>
> At which point the disk is thrashing and the cache is having a very
> bad day...
>
> I would be _very_ interested to know whether this same behavior
> occurs if you have your source file in a ramdisk.
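For reference, a minimal sketch of the kind of wrapper script described
above (assuming GNU dd, which prints its transfer statistics to stderr
on SIGUSR1; the script and variable names here are made up, and the
5-second interval is the one from Didier's mail):

  #!/bin/sh
  # copy-with-progress.sh SOURCE TARGET   (hypothetical name)
  SRC=$1
  DST=$2

  # start the copy in the background, as in the original script
  dd if="$SRC" of="$DST" bs=4096 conv=notrunc,noerror &
  DD_PID=$!

  # ask dd to report progress every 5 seconds until it exits
  while kill -USR1 "$DD_PID" 2>/dev/null; do
      sleep 5
  done
  wait "$DD_PID"

Pointing SRC at a copy of the file on a tmpfs mount would be one simple
way to test the disk-cache theory above.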
Is there a reason why

  $ dd if=/dev/zero count=1 of=/tmp/a of=/tmp/b

could not be made to write to two, or any number of, of= destinations
in a single execution?

--
Sami Kerola
http://www.iki.fi/kerolasa/
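For what it's worth, the closest workaround I'm aware of is to read the
input once and duplicate it with tee, e.g.

  $ dd if=/dev/zero count=1 | tee /tmp/a > /tmp/b

but that loses dd's block-size and conv= handling on the output side,
which is part of why a native multi-of= could be useful.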