I've been doing some ZFS on Linux vs. XFS benchmarking, and I'm seeing that
ZFS performs slightly better than XFS on reads and writes but is much
slower on deletes. If you're not going to be doing lots of deletes and need
the ability to expand (e.g., you're thinking of using LVM), then ZFS may be
a nice option.
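For what it's worth, the delete gap shows up readily in a crude metadata micro-benchmark; a minimal sketch (the file count and temp location are arbitrary choices — point the directory at the filesystem under test):

```shell
# Create a pile of small files, then time their deletion.
# Run once per filesystem and compare the wall-clock times.
dir=$(mktemp -d)                 # use mktemp -d -p <mountpoint> to target a specific fs
for i in $(seq 1 1000); do
    : > "$dir/f$i"               # create an empty file
done
time rm -rf "$dir"               # unlink-heavy: this is where deletes dominate
```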
Does your MySQL DB live on a Unix system? If so, why not use
automysqlbackup and just have it dump to your backup system over NFS?
On Thu, Apr 25, 2013 at 4:45 PM, Lord Sporkton lordspork...@gmail.com wrote:
I'm currently backing up MySQL by way of dumping the DB to a flat file, then
backing up
could get it to work. Tar is of course capable of accepting either a stream
or a file as input, and mysqldump is capable of outputting to either a
stream or a file. I suppose I'll just have to play around with it more.
Please show an example of where you can stream data directly into tar
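Strictly speaking, tar archives files rather than raw stdin, so the usual way to skip the intermediate flat file is to compress the dump stream directly. A minimal sketch — the `backup_stream` helper name and the commented `mysqldump` flags are illustrative assumptions, not anyone's actual setup:

```shell
# backup_stream DUMP_CMD OUTFILE: run DUMP_CMD and gzip its stdout straight
# to OUTFILE, with no intermediate flat file touching disk.
backup_stream() {
    dump_cmd=$1
    outfile=$2
    $dump_cmd | gzip -c > "$outfile"   # dump_cmd is intentionally word-split
}

# Hypothetical usage (database name, flags, and path are placeholders):
# backup_stream "mysqldump --single-transaction mydb" /backups/mydb.sql.gz
```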
Yes. First, list all the backups for a host:
orca pc # /usr/local/BackupPC/bin/BackupPC_deleteBackup.sh -l -c orca
If you're keeping n fulls and then doing m incrementals, just delete
all but the last full. From the command above, it's showing:
BackupNumber 1066 - full-Backup from
it is time-consuming
to delete all 50+ of them per host.
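One way to avoid deleting 50+ backups by hand would be to generate the commands in a loop. Caveat: the `-d <number>` delete flag below is an assumption — check `BackupPC_deleteBackup.sh`'s own usage output for the real option before piping anything to `sh`:

```shell
# delete_cmds HOST FIRST LAST: print one (assumed) delete command per backup number.
delete_cmds() {
    host=$1 first=$2 last=$3
    for num in $(seq "$first" "$last"); do
        # -d is a HYPOTHETICAL flag; verify against the script's usage text first.
        echo "/usr/local/BackupPC/bin/BackupPC_deleteBackup.sh -c $host -d $num"
    done
}

# Review the output first, then run it:
# delete_cmds orca 1000 1049 | sh
```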
On Mon, Sep 24, 2012 at 3:18 PM, Sabuj Pattanayek sab...@gmail.com wrote:
Yes. First, list all the backups for a host:
orca pc # /usr/local/BackupPC/bin/BackupPC_deleteBackup.sh -l -c orca
If you're keeping n fulls and then doing m
fatter (and possibly lower-latency) pipes like 10Gb Ethernet, Myrinet,
or InfiniBand, which I think is cheaper than either of the above.
The newer crop of network storage such as GlusterFS (Gluster being
purchased by Red Hat) is nice for several reasons: it scales nearly
linearly in I/O and is highly available.
tar is faster since it doesn't spend hours building a file list should
there be thousands or millions of files involved.
http://everything2.com/title/Filesystem+performance+tweaking+with+XFS+on+Linux
Basically, when creating the XFS filesystem, use:
mkfs.xfs -l size=64m device
and when mounting use:
mount -t xfs -o noatime,nodiratime,logbufs=8 device mountPoint
in fstab:
device  mountPoint  xfs  noatime,nodiratime,logbufs=8  0  0
Hi,
I just host this on Google Code:
http://code.google.com/p/nfsspeedtest/ . I created it several years ago
and have been adding options to it ever since. It's an easy-to-use Perl
script wrapper for dd with several options.
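In spirit, the underlying dd calls look something like this (the target directory, block size, and 64 MB test size are placeholder choices, not the script's actual defaults):

```shell
# Rough sequential write/read throughput check over a directory, dd-style.
TESTDIR=${TESTDIR:-/tmp}               # set TESTDIR to the NFS mount to test it
TESTFILE="$TESTDIR/ddspeed.$$"
# Write test: conv=fsync (GNU dd) flushes to disk so the timing is honest.
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fsync 2>&1 | tail -n 1
# Read test (note: may be served from page cache on a warm run).
dd if="$TESTFILE" of=/dev/null bs=1M 2>&1 | tail -n 1
rm -f "$TESTFILE"
```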
HTH,
Sabuj
On Tue, Apr 19, 2011 at 2:42 PM, comfi
On Sat, Oct 9, 2010 at 10:43 AM, Xuo x...@free.fr wrote:
Hi,
It is probably a stupid question, but what are graphs?
There seems to have been some sort of patch that uses rrdtool to
generate graphs of pool sizes?
Once I went back to 1 window and 1 tab, I was also
able to open the host summary page.
I wonder if there's a bug for this in FF bugzilla?
Thanks,
Sabuj Pattanayek
--
___
BackupPC-users
Hi,
On one of my BackupPC servers, when I click the host summary, the browser
just spins and spins while it tries to load the host summary page, but
it never completes. I can, however, go to each of the hosts that are
backed up and even browse the backups. I've tried completely turning
off BackupPC
OpenSCManager: Win32 error 5:
Access is denied
for the cygrunsrv service registration command. I also tried it
manually, using the sc create rsyncd binpath=... command, but it gave
me the same error. Any ideas?
Thanks,
Sabuj Pattanayek
c:\cygwin\bin\cygrunsrv.exe -I rsyncd -e CYGWIN=nontsec -p
c:/cygwin/bin/rsync.exe -a "--config=c:/cygwin/etc/rsyncd.conf --daemon
--no-detach"
Right, that's the same command and basically the same cygwin rsync,
without having to download the entire cygwin suite. But I still get the
error as
On Fri, Dec 18, 2009 at 1:27 PM, David Young randomf...@gmail.com wrote:
Just curious, what's the max sustained throughput anyone has seen with
their system? 2-3 Mbit/s is only ~250-375 KB/s, which seems really slow. I'm
in the process of setting up a local BackupPC server and am now concerned about
Hi,
Added some debugging code to Storage/Text.pm sub TextFileWrite. It
used to look like this:
rename("$file.new", $file) if ( -f "$file.new" );
I changed it to:
if ( -f "$file.new" ) {
    my $renRet = rename("$file.new", $file);
    if ($renRet) {