On 3/11/2010 4:31 PM, Peter Zenge wrote:
> Following up on my own post, I had a little free time the other day and
> decided to investigate whether this was feasible. Setting up the
> necessary services on Amazon was trivial, including access control and
> block storage. I tried s3fs first, and it worked, but it felt like there
> was way too much I/O going on for that kind of data (which is pretty
> much what I expected). Then I tried putting my bacula-sd on an EC2 node,
> writing to files on EBS, and it worked great (spooling first to the
> "local" drive on EC2). Throughput, however, was somewhat less than I was
> hoping for: approx. 25% of what I get locally to spool and then to tape.
> However, I found that there was NO performance penalty for running two
> jobs concurrently. I didn't try larger numbers, but my guess is you can
> run a large number of concurrent jobs to get a pretty good effective
> throughput, assuming you have lots of clients with similar data sizes.
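For anyone wanting to try the setup Peter describes, a minimal sketch of the storage-daemon side might look like the following. Note the resource names and mount points (/mnt/ebs/bacula, /mnt/spool) are illustrative assumptions, not taken from Peter's post; adjust them to your own EBS and instance-store mounts.

```
# bacula-sd.conf (sketch) -- file-based Device writing volumes to an
# EBS mount, with spooling to fast instance-local storage first.
Device {
  Name = EBS-FileStorage            # assumed name, referenced from the Director
  Media Type = File
  Archive Device = /mnt/ebs/bacula  # EBS volume mounted here (assumption)
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Spool Directory = /mnt/spool      # "local" EC2 instance storage (assumption)
}
```

To exploit the concurrency Peter observed, you would also raise Maximum Concurrent Jobs in the Director's Storage resource (and in the SD's Storage resource) above the default of 1, e.g. to 2 or more, and enable spooling per job with SpoolData = yes in the Job resource.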
Would you care to add the steps to the wiki, then post the URL here, please?

--
Dan Langille - http://langille.org/

_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users