Markus unive...@truemetal.org wrote on 11/19/2012 04:03:03 PM:
For fun, here's the output of find / | wc -l:
24478753
real 490m35.602s
user 0m21.013s
sys 1m23.305s
25 million files! OMG. find took 8 hours to complete. Nice, hm? :-)
Wow. If a simple find took 8 hours to complete,
On Wed, Dec 5, 2012 at 8:12 AM, Timothy J Massey tmas...@obscorp.com wrote:
Wow. 25 *million* files saved in home directories? That kind of defeats the
purpose of shared data! I thought my users were bad about that... :)
Probably mostly browser-cache files that don't need to be backed
Am 05.12.2012 17:47, schrieb Les Mikesell:
On Wed, Dec 5, 2012 at 8:12 AM, Timothy J Massey tmas...@obscorp.com wrote:
Wow. 25 *million* files saved in home directories? That kind of defeats
the purpose of shared data! I thought my users were bad about that... :)
Probably mostly
On Wed, Dec 5, 2012 at 1:06 PM, Markus unive...@truemetal.org wrote:
For every share, one defunct process (two, actually: rsync+ssh) is
created once the backup of that share is done, while the overall
backup is still running (i.e. while other shares are still being backed up).
I've tried it
Am 20.11.2012 22:31, schrieb Bowie Bailey:
You're right. I wasn't considering possible characters existing between
c and d. And your suggestion appears to be a good workaround.
Allow me to jump back to my original 25-million-files-problem: I came
up with another strategy: I created a shell
On Sun, Nov 25, 2012 at 10:06 AM, Markus unive...@truemetal.org wrote:
I realized that for every completed rsync run on a share, two defunct
processes remain:
[BackupPC_dump] defunct
[ssh] defunct
And after I had about 750 defunct processes (after about 375 rsync runs
on 375 different
On 11/19/2012 4:35 PM, John Rouillard wrote:
What may also work is to use excludes to do your sharding. I have 4
hosts now with different excludes. All of them back up the same share:
$Conf{RsyncShareName} = [ '/home1', ];
Then have different exclusion lists:
# Use exclusion of
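For context, such per-pseudo-host exclude lists might look roughly like the following in BackupPC's per-PC config files. This is a sketch only: the hostnames, character ranges, and patterns are illustrative assumptions, not taken from the original posts, though `$Conf{BackupFilesExclude}` (keyed by share name) is the standard BackupPC setting for this.

```perl
# pc/home1-a-m/config.pl -- illustrative shard covering names a-m / A-M.
# Everything outside this shard's range is excluded at the top level.
$Conf{RsyncShareName}     = [ '/home1' ];
$Conf{BackupFilesExclude} = {
    '/home1' => [ '/[n-zN-Z0-9]*' ],
};

# pc/home1-n-z/config.pl -- the complementary shard.
$Conf{RsyncShareName}     = [ '/home1' ];
$Conf{BackupFilesExclude} = {
    '/home1' => [ '/[a-mA-M0-9]*' ],
};
```

Note that digits and dotfiles fall through both shards in this sketch; the real point of the scheme is only that the union of all shards' non-excluded patterns must cover /home1.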
On Tue, Nov 20, 2012 at 09:46:33AM -0500, Bowie Bailey wrote:
On 11/19/2012 4:35 PM, John Rouillard wrote:
What may also work is to use excludes to do your sharding. I have 4
hosts now with different excludes. All of them back up the same share:
That seems a bit overly complex. Wouldn't
On 11/20/2012 3:13 PM, John Rouillard wrote:
On Tue, Nov 20, 2012 at 09:46:33AM -0500, Bowie Bailey wrote:
On 11/19/2012 4:35 PM, John Rouillard wrote:
What may also work is to use excludes to do your sharding. I have 4
hosts now with different excludes. All of them back up the same share:
On Tue, Nov 20, 2012 at 03:34:02PM -0500, Bowie Bailey wrote:
On 11/20/2012 3:13 PM, John Rouillard wrote:
On Tue, Nov 20, 2012 at 09:46:33AM -0500, Bowie Bailey wrote:
On 11/19/2012 4:35 PM, John Rouillard wrote:
What may also work is to use excludes to do your sharding. I have 4
hosts
On 11/20/2012 4:07 PM, John Rouillard wrote:
You are assuming that [A-Za-z] is the same as [A-Ca-cD-Md-mN-Zn-z].
You are correct AFAIK in the C locale. I don't feel comfortable making
the same claim in any other locale. E.g. there could be a C-caret (Ĉ)
that collates after C and before D and is included in the
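The locale point can be demonstrated directly. POSIX leaves bracket-expression ranges undefined outside the C/POSIX locale, so pinning LC_ALL=C is the only way to be sure what [A-C] matches. A minimal sketch:

```shell
# In the C locale, [A-C] is exactly the code points A, B, C.
# Under another locale, a range may pull in characters that collate
# between its endpoints (such as a C-circumflex between C and D),
# which is why case-based sharding should force LC_ALL=C.
printf 'C\nĈ\nD\nc\n' | LC_ALL=C grep '^[A-C]$'
# prints only: C
```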
Am 15.11.2012 19:20, schrieb Les Mikesell:
If there are top-level directories segregating the files sensibly you
could split it into multiple 'shares'. Otherwise, you could switch
the xfer method to tar. Also, I would try something like 'time
find / |wc -l' on the target system just to
On Mon, Nov 19, 2012 at 3:03 PM, Markus unive...@truemetal.org wrote:
Another box of the same customer has 2.5 million files and took 29 hours
for the first full backup. That means the 25-million-files box's full backup
should be done within 12 days. :-) But if I understand correctly all future full
On Mon, Nov 19, 2012 at 10:03:03PM +0100, Markus wrote:
Am 15.11.2012 19:20, schrieb Les Mikesell:
If there are top-level directories segregating the files sensibly you
could split it into multiple 'shares'. Otherwise, you could switch
[...]
If that shouldn't work for some reason I will
Hi list,
I'm new here. At first, thank you for BackupPC! :)
I'm trying to backup a new client. The problem is: rsync never starts to
transfer files, not even after 12 hours of waiting. rsync is doing
something, though. More on that below.
The client is a quad core 2.8 GHz CPU, 8 GB RAM and
On Thu, Nov 15, 2012 at 12:22 PM, Markus unive...@truemetal.org wrote:
Any suggestions on what I could do or what could go wrong here?
Since your things are working well on all other machines, try a backup
on the trouble machine of just one directory (or a few) and see if
that works normally.
Hi Steve,
Am 15.11.2012 19:07, schrieb Steve:
On Thu, Nov 15, 2012 at 12:22 PM, Markus unive...@truemetal.org wrote:
Any suggestions on what I could do or what could go wrong here?
Since your things are working well on all other machines, try a backup
on the trouble machine of just one
On Thu, Nov 15, 2012 at 11:22 AM, Markus unive...@truemetal.org wrote:
The client is a quad core 2.8 GHz CPU, 8 GB RAM and 1.6 TB of many many
small files in a RAID0. CPUs 75-95% idle most of the time, load around
0.3. No swap used.
rsync 3.0.7 on the client, rsync 3.0.6 on the server.
On Thu, Nov 15, 2012 at 1:14 PM, Markus unive...@truemetal.org wrote:
Your suggestion sounds great. I just found this small how-to on a forum.
Is this how it works or is there another/better way?
Create as many client names as you like, eg: client-share1,
client-share2, client-share3,
On Thu, Nov 15, 2012 at 12:28 PM, Steve lepe...@gmail.com wrote:
Create as many client names as you like, eg: client-share1,
client-share2, client-share3, client-share4 (replace client with the
real host name and share with the share names). In each
pc/client-xxx/config.pl file, use;
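For context: splitting one physical client into several pseudo-hosts is normally done with `$Conf{ClientNameAlias}`, which points each pseudo-host at the same real machine. A minimal sketch of one such file follows; the hostname and share name are illustrative, and this is not a reconstruction of the original poster's truncated text.

```perl
# pc/client-share1/config.pl -- pseudo-host for one share of the real
# client. All pseudo-hosts alias to the same physical machine, so each
# runs as a separate, smaller backup job.
$Conf{ClientNameAlias} = 'client.example.com';
$Conf{RsyncShareName}  = [ '/share1' ];
```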
Hi,
On Thursday 15 November 2012 19:14:53 Markus wrote:
Your suggestion sounds great. I just found this small how-to on a forum.
Is this how it works or is there another/better way?
Create as many client names as you like, eg: client-share1,
client-share2, client-share3, client-share4