Hi,

Les Mikesell wrote on 2009-06-03 10:11:20 -0500 [Re: [BackupPC-users] Backing up a BackupPC server]:
> Craig Barratt wrote:
> > [...]
> > I recently heard about lessfs, which runs on top of FUSE to provide
> > a file system that does block-level de-duplication. [...]
> >
> > Yes, taking this approach would require a very substantial rewrite.
> > BackupPC would become a lot simpler. But it also creates a significant
> > issue of backward compatibility. The only solution would be to provide
> > tools that import the old BackupPC store into a new one. That is
> > possible, but would likely be very slow.
> > [...]
> How hard would it be to simply make the links for pooling an option
> that you could disable if the filesystem handles it better - as you
> can already do with compression?
I believe you would simply have to turn off BackupPC_link. No files end
up in the pool directories; each file is created anew (because the
relevant pool file doesn't exist).

The point is, you *can* simplify BackupPC a lot. You can get rid of the
code that determines whether, and which, pool file matches an incoming
file. You can probably handle a lot of things differently (and more
simply) than they are handled right now - file attributes, for one
thing. In the long run, you would *want* to do that.

Craig, when you write of an import tool, you must be thinking of
something along these lines. If you just wanted to drop pooling and keep
everything else as it is, your import tool would be named 'cp'. But it
does seem rather simple to make pooling optional for a start, if you
just want to get rid of hardlink problems and happen to have a file
system where you don't gain anything from pooling anyway.

> I'm not sure why you would need any other format change - and if you
> had a tool to reconstruct the pool links you could switch back if you
> wanted - at some cost in time and CPU.

Here we are back at the original problem. If you have a tool to do that,
you can also copy a pool without handling hardlinks and then
re-establish them (yes, I know, supposing you have enough space). And
that *is* feasible right now; it just comes at a significant cost,
because you need to hash and link every single file in every pc/
directory. For small amounts of data, you won't mind doing that, but
then, for small amounts of data, you don't need to, because
'rsync -H'/'cp -d' will simply work.

Regards,
Holger
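P.S.: For illustration, the re-linking pass described above - hash every
file, then replace duplicate content with hardlinks to the first copy -
could be sketched roughly like this in Python. This is a simplified,
hypothetical sketch, not BackupPC's actual pooling code: the real pool
uses its own partial-hash directory layout and collision chains, all of
which this ignores.

```python
import hashlib
import os

def relink_duplicates(root):
    """Walk `root`, hash every regular file, and replace files whose
    content was already seen with a hardlink to the first occurrence.

    Simplified stand-in for re-establishing pool links; assumes one
    filesystem and ignores BackupPC's real pool layout."""
    seen = {}  # content digest -> path of first file with that content
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.islink(path):
                continue  # skip symlinks; only regular files are pooled
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 16), b""):
                    h.update(chunk)
            digest = h.hexdigest()
            if digest in seen:
                os.remove(path)               # drop the duplicate copy...
                os.link(seen[digest], path)   # ...and hardlink to the original
            else:
                seen[digest] = path
```

Note that this is exactly the expensive part mentioned above: every
single file must be read and hashed once, which is why it only hurts
little on small pools.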
_______________________________________________
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/