>> For LTO, the Spool disk MUST be at least one SSD, preferably a
>> stripe of them on as fast a controller as you can afford. Standard disks
>> simply can't keep up with tape drives.

> I had not considered that. In my case, I back up to local HDD (ZFS
> array) for long term storage. Right after those jobs finish, I copy to
> tape. Sounds like I need to implement spooling now. Fortunately, my full
> backups are only about 400GB. I think I can get away with one SSD
> feeding my LTO-4.
You should also consider setting the following in the bacula-sd tape drive stanzas:

  Maximum File Size = 16G
  Maximum Network Buffer Size = 262144
  Maximum Block Size = 2M

The tape drive stops every time it writes a file mark, and the default
1G file size is far too small for LTO. The constant start/stop drops average
throughput dramatically.

I would make the block size larger, but 2MB is the maximum supported by
Bacula.
(LTO supports up to 16MB - Kern - Big hint here!)
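
If you do change block sizes, it's worth re-running btape's speed test
against the drive to confirm the difference (invocation from memory, so
check btape's help; the device path is whatever your drive is):

  btape -c /etc/bacula/bacula-sd.conf /dev/nst0
  *speed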

These values are specific to my setup (my spool has up to 20 jobs feeding
into 280GB of available space); if you have more space or fewer jobs, I'd
consider bumping the job spool size to anything up to 100GB:

  Maximum Spool Size     = 120G
  Maximum Job Spool Size = 30G
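
Pulled together, the relevant parts of the Device resource end up looking
something like this (names and paths are placeholders; the rest of your
existing stanza stays as it is):

  Device {
    Name            = LTO4-0
    Media Type      = LTO-4
    Archive Device  = /dev/nst0
    AutomaticMount  = yes
    AlwaysOpen      = yes
    RemovableMedia  = yes
    RandomAccess    = no
    Maximum File Size           = 16G
    Maximum Network Buffer Size = 262144
    Maximum Block Size          = 2M
    Spool Directory             = /mnt/ssd-spool
    Maximum Spool Size          = 120G
    Maximum Job Spool Size      = 30G
  }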

The current spool area is a stripe of 5 Intel X25-E drives on an Adaptec
51245 RAID controller. These are old, but being SLC they are still extremely
fast, and despite having handled several PB of throughput they are still only
a couple of percent worn. The controller itself is the limiting factor and
I'm looking at dumping it in favour of Linux RAID0 on an 8-port 12Gb/s HBA.

Whatever you use, you _MUST_ benchmark the drives and pay particular
attention to "steady state" write speeds - that is, write and delete several
times the SSD's capacity without issuing any trim commands to precondition
it, then measure how fast it continues to write.

What this tests is how well the drive works out that data blocks need to
be reallocated, rearranges data, erases the underlying flash chunks and
makes them available again. On a lot of MLC drives you can see a 95-98%
drop-off in write speed under these conditions.
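
A quick way to approximate that test is a few full-file write passes with
fio (the flags below are standard fio options; the path and sizes are
placeholders - make size x loops add up to several times the drive's
capacity, and don't run fstrim in between):

  fio --name=steadystate --filename=/mnt/ssd-spool/fio.test \
      --rw=write --bs=1M --direct=1 --size=100G --loops=6

Compare the bandwidth of the first run against an immediate re-run; on a
drive that handles garbage collection badly, the second run falls off a
cliff.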

The worst case I've seen so far is 500GB Samsung 840 Pros, which go from
300-500MB/s steady-state write speeds down to 5-10 second pauses with
7-8MB write bursts (or worse). I was using these as the database drives
on another Adaptec 51245 controller (which doesn't support trim commands)
and it completely trashed backup performance: despooling attributes to the
database ended up taking an hour for full backups of 2-3 million files
(1TB of data). Moving those same drives off the RAID controller onto
trim-supporting interfaces dropped despooling time down to 15 seconds.

By contrast, the X25-Es - which on paper look bad against "modern" drives
at only 3000 write IOPS - simply don't slow down at all (they don't
support trim anyway) and keep on writing at 200-300MB/s apiece. More
importantly for my use, when there are 10 write and 5 read streams going
on, they don't slow down in either direction.

If I were speccing a machine now, I'd use a PCIe SSD such as an Intel DC
P3700 for the spool, or even consider using a ramdisk if I could get enough
memory past management (it's a lot cheaper than it used to be).
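
If you do go the ramdisk route there's nothing special needed on the Bacula
side; you'd just point the spool at a tmpfs mount (path illustrative):

  Spool Directory = /dev/shm/bacula-spool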


Dan - in your case (ZFS), you need to pay attention to steady-state
write speeds because ZFS on most platforms doesn't trim yet (it's in the
OpenZFS devel trees), and because of that I've found similar problems
apply to L2ARC/SLOG drives, which can severely hobble ZFS performance.
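
An easy way to see whether the L2ARC/SLOG devices are the ones holding
things up is to watch per-device I/O while jobs are running (pool name is
a placeholder):

  zpool iostat -v tank 5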



