OK, it seems to me there is some confusion about which kind of dedupe I'm 
referring to.

What I'm talking about is variable-length, block-based target dedupe: the 
DXi or DataDomain box receives all the data from the client via the backup 
server (the Bacula SD) and then handles the deduplication and compression of 
that data itself. There is no dedupe or compression of any sort on the client 
or on the backup server.
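For anyone unfamiliar with the variable-length part: these appliances typically use content-defined chunking, where a fingerprint of a small sliding window decides chunk boundaries, so identical data dedupes even when it is shifted in the stream. This toy Python sketch (my own illustration, not how the DXi or DataDomain actually implement it) shows the idea:

```python
import hashlib
import random

def chunks(data, window=16, mask=0x3F):
    """Toy content-defined chunking: cut wherever a fingerprint of the
    last `window` bytes matches a boundary condition, so cut points
    depend only on local content, not on byte offsets."""
    out, start = [], 0
    for i in range(window, len(data)):
        fp = int.from_bytes(hashlib.sha1(data[i - window:i]).digest()[:4], "big")
        if i - start >= window and (fp & mask) == 0:  # ~1-in-64 positions
            out.append(data[start:i])
            start = i
    out.append(data[start:])
    return out

def stored_bytes(streams):
    """Bytes a dedupe store would keep: one copy per distinct chunk."""
    store = {hashlib.sha1(c).digest(): len(c) for s in streams for c in chunks(s)}
    return sum(store.values())

rng = random.Random(0)
base = bytes(rng.randrange(256) for _ in range(16384))  # 16 KiB of sample data
shifted = b"XYZ" + base                                 # same data, shifted 3 bytes

# Chunk boundaries re-synchronise right after the inserted prefix, so nearly
# everything dedupes despite the shift (close to len(base), not 2x):
print(stored_bytes([base, shifted]))
```

With fixed-size blocks the 3-byte shift would make every block unique; with content-defined boundaries only the first chunk differs.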

This does not work in practice right now: Bacula does something to the 
backup stream that makes most of the blocks or segments unique.

I've tried this on a DXi and on a fileserver running SDFS, and both achieved 
only about 5% data reduction after running three 6GB backups of the same 
filesystem on a client. In theory that should give at least a 66% data 
reduction ratio, since the second and third runs write data the first run 
has already stored.
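For reference, the 66% figure is just the ideal-case arithmetic for three identical full backups, ignoring metadata overhead and any extra compression:

```python
def reduction_ratio(logical_gb, stored_gb):
    """Fraction of the logical backup data that is not physically stored."""
    return 1 - stored_gb / logical_gb

logical = 3 * 6   # three 6 GB fulls = 18 GB written by the backup jobs
ideal_stored = 6  # a perfect dedupe engine keeps roughly one copy
print(f"{reduction_ratio(logical, ideal_stored):.0%}")  # 67%
```

So a 5% result means the appliance is finding almost no duplicate segments at all across identical runs.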

Please bear in mind that target dedupe is pretty much the industry standard, 
since RTO plays a huge role in deciding what type of backup and storage to use.

/tony

+----------------------------------------------------------------------
|This was sent by tony.alb...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+----------------------------------------------------------------------



_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
