[BackupPC-users] Backing up little by little or throttling the backup?

2011-04-12 Thread Jake Wilson
We have a production server on the network with several terabytes of data
that needs to be backed up onto our BackupPC server.  The problem is that
the production server is in use most of the day.  There is a lot of normal
network traffic going in and out.

I'm wondering what options there are for backing up the production server in
a way that will hinder the performance and network access as little as
possible.  I'm not too worried about the incremental backups because those
won't take that long and will happen at night.  But the first, initial big
full backup is going to take quite a while and I don't want the production
server borderline-unresponsive during the backup process.

Here are some options I've been thinking about:

   - Backing up / but have most of the large directories and subdirectories
   excluded and slowly "unexclude" them one by one in-between full backups
   (see the config sketch after this list).
   - rsync bitrate limit throttling?
   - Instead of backing up /, specify specific big directories one at a
   time, adding more and more in-between full backups.
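
For concreteness, here is roughly what option 1 would look like in a
host's config.pl -- a minimal sketch, assuming the rsync transfer method
and hypothetical directory names:

   # Start with the big trees excluded; remove entries one at a time
   # between successive full backups so the pool grows gradually.
   $Conf{BackupFilesExclude} = {
       '/' => [
           '/data/archive',     # hypothetical multi-TB tree
           '/data/media',       # hypothetical multi-TB tree
       ],
   };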

Anyone have any ideas or direction for this?  Or are there any built-in
config options for throttling the backup process that I'm unaware of?

Jake Wilson


Re: [BackupPC-users] Backing up little by little or throttling the backup?

2011-04-12 Thread Jeffrey J. Kosowsky
Jake Wilson wrote at about 17:06:55 -0600 on Tuesday, April 12, 2011:
 > We have a production server on the network with several terabytes of data
 > that needs to be backed up onto our BackupPC server.  The problem is that
 > the production server is in use most of the day.  There is a lot of normal
 > network traffic going in and out.
 > 
 > I'm wondering what options there are for backing up the production server in
 > a way that will hinder the performance and network access as little as
 > possible.  I'm not too worried about the incremental backups because those
 > won't take that long and will happen at night.  But the first, initial big
 > full backup is going to take quite a while and I don't want the production
 > server borderline-unresponsive during the backup process.

I would find it hard to believe that BackupPC would throttle a
production server... Linux usually does a pretty good job of sharing
cpu, disk access, network access. And if one process throttles your
production server, then you probably have more fundamental issues you
need to deal with...

 > Here are some options I've been thinking about:
 > 
 >- Backing up / but have most of the large directories and subdirectories
 >excluded and slowly "unexclude" them one by one in-between full backups.
Too much hassle, and it will still "throttle" the server while each
smaller piece is backed up.
 >- rsync bitrate limit throttling?
Given that network bandwidth is probably the rate-limiting factor, this
should help if your network is getting slammed.

 >- Instead of backing up /, specify specific big directories one at a
 >time, adding more and more in-between full backups.
Too much hassle, and it will still "throttle" the server while each
smaller piece is backed up.

 > Anyone have any ideas or direction for this?  Or are there any built-in
 > config options for throttling the backup process that I'm unaware of?

Have you tried it and run into bottlenecks or are you just worrying in
advance? :P



Re: [BackupPC-users] Backing up little by little or throttling the backup?

2011-04-12 Thread Timothy J Massey
"Jeffrey J. Kosowsky" wrote on 04/12/2011 07:41:15 PM:

> Jake Wilson wrote at about 17:06:55 -0600 on Tuesday, April 12, 2011:
>  > We have a production server on the network with several terabytes of data
>  > that needs to be backed up onto our BackupPC server.  The problem is that
>  > the production server is in use most of the day.  There is a lot of normal
>  > network traffic going in and out.
>  > 
>  > I'm wondering what options there are for backing up the production server in
>  > a way that will hinder the performance and network access as little as
>  > possible.  I'm not too worried about the incremental backups because those
>  > won't take that long and will happen at night.  But the first, initial big
>  > full backup is going to take quite a while and I don't want the production
>  > server borderline-unresponsive during the backup process.
> 
> I would find it hard to believe that BackupPC would throttle a
> production server... Linux usually does a pretty good job of sharing
> cpu, disk access, network access. And if one process throttles your
> production server, then you probably have more fundamental issues you
> need to deal with...

I pretty much agree with Jeffrey (if doing the full backup makes your 
server "borderline-unresponsive", how on *earth* does it actually do its 
job as a file server?!?).  But I will say this:  I have found that some 
(sensitive or very heavy) users can tell when I'm running a full backup on 
a server.  However, it's usually relatively subtle.  It's not 
"borderline-unresponsive"; it's more like, "Hmm, that's taking a little 
longer than I'm used to..."

>  > Here are some options I've been thinking about:
>  > 
>  >- Backing up / but have most of the large directories and subdirectories
>  >excluded and slowly "unexclude" them one by one in-between full backups.
> Too much hassle, and it will still "throttle" the server while each
> smaller piece is backed up.
>  >- rsync bitrate limit throttling?
> Given that network bandwidth is probably the rate-limiting factor, this
> should help if your network is getting slammed.

I would say that you first have to determine A) if it will be a problem (I 
think it won't), and B) *what* is causing the problem:  not enough RAM, 
not enough I/O throughput or not enough network bandwidth.  Then you can 
tackle the problem from there.

Of course, using the exclude trick (exclude, say, every root directory but 
one, then add them back one at a time or some such) will certainly work, 
though it's quite possibly unnecessary and will cause your first backup to 
take many (extra) days to complete.  And if a backup truly does cause your 
server to become "borderline-unresponsive", you may not have much choice 
other than to keep the deltas small enough to be completed during your 
backup window.

But give it a try first:  unless that production server is a 600MHz 
machine with 512MB RAM and a single SATA spindle, you will most likely be 
fine (and if you *are* running like that, well, you have other problems! 
:) ).  (Actually, I have one client with servers that are dual-processor 
600MHz with 1GB RAM that I back up during the day and the users at this 
location almost *never* notice.)

Remember that, while the BackupPC process moves a *lot* of data, it's just 
a single thread.  It's no more resource-intensive per second than any 
*other* I/O user on the system.  Its I/O pattern is also relatively 
linear (modulo filesystem fragmentation), which should help, too.

Unless you're talking billions of tiny files.  Then you're on your own! :) 
Gobs and gobs of RAM (to cache both the file list and the dentries and 
inodes) will help in that case.

> 
>  >- Instead of backing up /, specify specific big directories one at a
>  >time, adding more and more in-between full backups.
> Too much hassle, and it will still "throttle" the server while each
> smaller piece is backed up.

This is a significantly worse variation of #1 (using excludes):  it's 
going to be a problem with pooling and buys you nothing over #1, so ignore 
it.

> 
>  > Anyone have any ideas or direction for this?  Or are there any built-in
>  > config options for throttling the backup process that I'm unaware of?
> 
> Have you tried it and run into bottlenecks or are you just worrying in
> advance? :P

To quote Michael Abrash:  Profile before you optimize.

Timothy J. Massey

 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 

Re: [BackupPC-users] Backing up little by little or throttling the backup?

2011-04-12 Thread Timothy J Massey
Timothy J Massey  wrote on 04/12/2011 10:13:11 PM:

> But give it a try first:  unless that production server is a 600MHz 
> machine with 512MB RAM and a single SATA spindle, you will most 
> likely be fine (and if you *are* running like that, well, you have 
> other problems!  :) ).  (Actually, I have one client with servers 
> that are dual-processor 600MHz with 1GB RAM that I back up during 
> the day and the users at this location almost *never* notice.) 

To clarify and expand this:  they are IBM Netfinity 5600 servers.  2 x 
600MHz Intel P3 processors, 1GB RAM, and 6 x 18GB SCSI-160 10,000 RPM 
drives in a RAID 5 array with an IBM ServeRAID hardware RAID controller. 
The systems are old and (processor) slow, but the disk performance is 
really pretty good, even today:  it'll easily saturate GigE.

My point for this:  CPU power matters little on the client side.  RAM 
matters, but only once you have enough:  depending on the number of files, 
the amount of RAM you truly need is literally in the hundreds of 
megabytes.  What *really* matters is I/O and network throughput--and on 
any halfway-decent server with multiple high-RPM hard drives, you will be 
limited by network bandwidth more than anything else.

Timothy J. Massey

 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] Backing up little by little or throttling the backup?

2011-04-12 Thread Jeffrey J. Kosowsky
Timothy J Massey wrote at about 22:46:39 -0400 on Tuesday, April 12, 2011:
 > Timothy J Massey  wrote on 04/12/2011 10:13:11 PM:
 > 
 > > But give it a try first:  unless that production server is a 600MHz 
 > > machine with 512MB RAM and a single SATA spindle, you will most 
 > > likely be fine (and if you *are* running like that, well, you have 
 > > other problems!  :) ).  (Actually, I have one client with servers 
 > > that are dual-processor 600MHz with 1GB RAM that I back up during 
 > > the day and the users at this location almost *never* notice.) 
 > 
 > To clarify and expand this:  they are IBM Netfinity 5600 servers.  2 x 
 > 600MHz Intel P3 processors, 1GB RAM, and 6 x 18GB SCSI-160 10,000 RPM 
 > drives in a RAID 5 array with an IBM ServeRAID hardware RAID controller. 
 > The systems are old and (processor) slow, but the disk performance is 
 > really pretty good, even today:  it'll easily saturate GigE.
 > 

 > My point for this:  CPU power matters little on the client side.  RAM 
 > matters, but only once you have enough:  depending on the number of files, 
 > the amount of RAM you truly need is literally in the hundreds of 
 > megabytes.  What *really* matters is I/O and network throughput--and on 
 > any halfway-decent server with multiple high-RPM hard drives, you will be 
 > limited by network bandwidth more than anything else.

As I have mentioned before, I have succeeded with just 64MB of RAM, of
which only about 20MB is free. I do have swap, but it really doesn't
use much swap. Now I am just backing up "normal" Linux and Windows
workstations and laptops where each backup has maybe a couple of
hundred thousand files and 20-50GB... but it does work... I use the
rsync/rsyncd transfer method, and rsync 3.0 and later is pretty memory
efficient as long as you have a "normal" filesystem without "humongous"
numbers of hard links or "insanely" large numbers of files per directory.
On small systems, I still find the primary limitations are bandwidth
(I back up many machines over 802.11g wireless) and CPU power for
compression (when I use a 500MHz ARM processor).



Re: [BackupPC-users] Backing up little by little or throttling the backup?

2011-04-13 Thread Jake Wilson
Thanks for the replies, everyone.  I have not tried the backup yet, so I
don't know what to expect.  I just didn't want to try it on a whim without
researching it a bit more.  The servers are not just file servers.  They run
Oracle and proprietary data modeling software that do a lot of data
crunching throughout the day.  Just needed to make sure that rsyncing stuff
during the process was not going to hinder performance too badly.

Jake Wilson


On Tue, Apr 12, 2011 at 11:30 PM, Jeffrey J. Kosowsky wrote:

> Timothy J Massey wrote at about 22:46:39 -0400 on Tuesday, April 12, 2011:
>  > Timothy J Massey  wrote on 04/12/2011 10:13:11 PM:
>  >
>  > > But give it a try first:  unless that production server is a 600MHz
>  > > machine with 512MB RAM and a single SATA spindle, you will most
>  > > likely be fine (and if you *are* running like that, well, you have
>  > > other problems!  :) ).  (Actually, I have one client with servers
>  > > that are dual-processor 600MHz with 1GB RAM that I back up during
>  > > the day and the users at this location almost *never* notice.)
>  >
>  > To clarify and expand this:  they are IBM Netfinity 5600 servers.  2 x
>  > 600MHz Intel P3 processors, 1GB RAM, and 6 x 18GB SCSI-160 10,000 RPM
>  > drives in a RAID 5 array with an IBM ServeRAID hardware RAID controller.
>  > The systems are old and (processor) slow, but the disk performance is
>  > really pretty good, even today:  it'll easily saturate GigE.
>  >
>
>  > My point for this:  CPU power matters little on the client side.  RAM
>  > matters, but only once you have enough:  depending on the number of files,
>  > the amount of RAM you truly need is literally in the hundreds of
>  > megabytes.  What *really* matters is I/O and network throughput--and on
>  > any halfway-decent server with multiple high-RPM hard drives, you will be
>  > limited by network bandwidth more than anything else.
>
> As I have mentioned before, I have succeeded with just 64MB of RAM, of
> which only about 20MB is free. I do have swap, but it really doesn't
> use much swap. Now I am just backing up "normal" Linux and Windows
> workstations and laptops where each backup has maybe a couple of
> hundred thousand files and 20-50GB... but it does work... I use the
> rsync/rsyncd transfer method, and rsync 3.0 and later is pretty memory
> efficient as long as you have a "normal" filesystem without "humongous"
> numbers of hard links or "insanely" large numbers of files per directory.
> On small systems, I still find the primary limitations are bandwidth
> (I back up many machines over 802.11g wireless) and CPU power for
> compression (when I use a 500MHz ARM processor).


Re: [BackupPC-users] Backing up little by little or throttling the backup?

2011-04-13 Thread Bowie Bailey
On 4/13/2011 12:14 PM, Jake Wilson wrote:
> Thanks for the replies, everyone.  I have not tried the backup yet, so
> I don't know what to expect.  I just didn't want to try it on a whim
> without researching it a bit more.  The servers are not just file
> servers.  They run Oracle and proprietary data modeling software that
> do a lot of data crunching throughout the day.  Just needed to make
> sure that rsyncing stuff during the process was not going to hinder
> performance too badly.

My advice would be "Try it and see."  If it causes a slowdown, stop the
backup and try something else.

-- 
Bowie



Re: [BackupPC-users] Backing up little by little or throttling the backup?

2011-04-13 Thread Les Mikesell
On 4/13/2011 11:14 AM, Jake Wilson wrote:
> Thanks for the replies, everyone.  I have not tried the backup yet, so I
> don't know what to expect.  I just didn't want to try it on a whim
> without researching it a bit more.  The servers are not just file
> servers.  They run Oracle and proprietary data modeling software that do
> a lot of data crunching throughout the day.  Just needed to make sure
> that rsyncing stuff during the process was not going to hinder
> performance too badly.

I usually just start new/big runs near the end of a business day on 
Friday so if it bleeds into the next morning it doesn't matter that 
much.  Unless your backup server is extremely fast, the rsync-in-perl 
will throttle things enough to not bother other systems much.  If you 
are concerned about saturating a WAN link, the rsync --bwlimit option 
can be added.
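
For reference, a sketch of where that flag could live in the host's 
config.pl, assuming the rsync XferMethod (the argument list mirrors the 
stock 3.x defaults -- check your own config.pl -- and the 10000 KB/s 
figure is just a placeholder):

   $Conf{RsyncArgs} = [
       '--numeric-ids', '--perms', '--owner', '--group', '-D',
       '--links', '--hard-links', '--times', '--block-size=2048',
       '--recursive',
       '--bwlimit=10000',   # rsync's own rate limit, in KBytes/s
   ];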

-- 
   Les Mikesell
lesmikes...@gmail.com




Re: [BackupPC-users] Backing up little by little or throttling the backup?

2011-04-13 Thread John Rouillard
On Wed, Apr 13, 2011 at 10:14:13AM -0600, Jake Wilson wrote:
> Thanks for the replies, everyone.  I have not tried the backup yet, so I
> don't know what to expect.  I just didn't want to try it on a whim without
> researching it a bit more.  The servers are not just file servers.  They run
> Oracle and proprietary data modeling software that do a lot of data
> crunching throughout the day.  Just needed to make sure that rsyncing stuff
> during the process was not going to hinder performance too badly.

You said you were expecting a lot of data and are running Oracle; I
just want to make sure you understand that you CAN NOT just back up
Oracle's files while Oracle is running and expect to be able to restore
them.

You have to tell Oracle you are doing the backup, copy the data files,
then tell Oracle you are done, and then copy the journal
files. It's been a few years since I have worked with Oracle, so I am
sure my explanation is too simplistic (and may be incomplete/wrong),
but you can't just rsync Oracle's files and get a restorable Oracle
setup.
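
In BackupPC terms that usually means pre- and post-dump hooks.  A
sketch, assuming hypothetical helper scripts on the client that issue
Oracle's BEGIN/END BACKUP statements (verify the exact SQL and
archived-log handling with your DBA):

   # Both script paths are hypothetical; they would run something like
   # "ALTER DATABASE BEGIN BACKUP;" / "ALTER DATABASE END BACKUP;" via
   # sqlplus, plus whatever archived-log copying you need.
   $Conf{DumpPreUserCmd}  = '$sshPath -q -x -l root $host /usr/local/sbin/ora-begin-backup';
   $Conf{DumpPostUserCmd} = '$sshPath -q -x -l root $host /usr/local/sbin/ora-end-backup';
   $Conf{UserCmdCheckStatus} = 1;   # abort the backup if the pre-command fails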

Also, if your "proprietary data modeling software" dumps state files so
that the application can crash and the data modeling can restart from
the saved state, you may have to interact with that software as well
to make sure that it has a consistent dump state on disk that is
useful as a restore. Nothing like finding out that an analysis that ran
for 60 days before the disks crashed hard only had valid checkpoints
in the backup from day 2 8-(.

-- 
-- rouilj

John Rouillard   System Administrator
Renesys Corporation  603-244-9084 (cell)  603-643-9300 x 111



Re: [BackupPC-users] Backing up little by little or throttling the backup?

2011-04-13 Thread Jake Wilson
Yes, I'm planning on doing cold Oracle backups before BackupPC does its
thing.  Thanks

Jake Wilson


On Wed, Apr 13, 2011 at 3:39 PM, John Rouillard wrote:

> On Wed, Apr 13, 2011 at 10:14:13AM -0600, Jake Wilson wrote:
> > Thanks for the replies, everyone.  I have not tried the backup yet, so I
> > don't know what to expect.  I just didn't want to try it on a whim without
> > researching it a bit more.  The servers are not just file servers.  They run
> > Oracle and proprietary data modeling software that do a lot of data
> > crunching throughout the day.  Just needed to make sure that rsyncing stuff
> > during the process was not going to hinder performance too badly.
>
> You said you were expecting a lot of data and are running Oracle; I
> just want to make sure you understand that you CAN NOT just back up
> Oracle's files while Oracle is running and expect to be able to restore
> them.
>
> You have to tell Oracle you are doing the backup, copy the data files,
> then tell Oracle you are done, and then copy the journal
> files. It's been a few years since I have worked with Oracle, so I am
> sure my explanation is too simplistic (and may be incomplete/wrong),
> but you can't just rsync Oracle's files and get a restorable Oracle
> setup.
>
> Also, if your "proprietary data modeling software" dumps state files so
> that the application can crash and the data modeling can restart from
> the saved state, you may have to interact with that software as well
> to make sure that it has a consistent dump state on disk that is
> useful as a restore. Nothing like finding out that an analysis that ran
> for 60 days before the disks crashed hard only had valid checkpoints
> in the backup from day 2 8-(.
>
> --
>-- rouilj
>
> John Rouillard   System Administrator
> Renesys Corporation  603-244-9084 (cell)  603-643-9300 x 111


Re: [BackupPC-users] Backing up little by little or throttling the backup?

2011-04-14 Thread Pavel Hofman
Jake Wilson wrote:
> 
> On Wed, Apr 13, 2011 at 3:39 PM, John Rouillard wrote:
> 
> > On Wed, Apr 13, 2011 at 10:14:13AM -0600, Jake Wilson wrote:
> > > Thanks for the replies, everyone.  I have not tried the backup yet, so I
> > > don't know what to expect.  I just didn't want to try it on a whim without
> > > researching it a bit more.  The servers are not just file servers.  They run
> > > Oracle and proprietary data modeling software that do a lot of data
> > > crunching throughout the day.  Just needed to make sure that rsyncing stuff
> > > during the process was not going to hinder performance too badly.
> 

Hi,

We are using the trickle bandwidth shaper
(http://linux.die.net/man/1/trickle):

# throttle uploads to 300 kB/s and downloads to 2 MB/s (trickle takes KB/s)
$Conf{TarClientCmd} = '/usr/bin/trickle -s -u 300 -d 2000'
    . ' $sshPath -C -q -x -n -l root $host'
    . ' /usr/bin/env LC_ALL=C nice $tarPath --one-file-system'
    . ' -c -v -f - -C $shareName+ --totals';
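
The same wrapper should carry over to the rsync transfer method; a
sketch we have not tried, assuming the stock RsyncClientCmd from
config.pl:

# hypothetical equivalent for XferMethod = rsync
$Conf{RsyncClientCmd} = '/usr/bin/trickle -s -u 300 -d 2000'
    . ' $sshPath -q -x -l root $host $rsyncPath $argList+';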

Regards,

Pavel.





Re: [BackupPC-users] Backing up little by little or throttling the backup?

2011-04-14 Thread Carl Wilhelm Soderstrom
On 04/14 09:42, Pavel Hofman wrote:
> We are using the trickle bandwidth shaper
> (http://linux.die.net/man/1/trickle):

Interesting. Thanks for posting that.

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com
