Johan writes:
> The user who reported the error to me on Bugzilla also did that;
> the error is:
> Can't locate object method "tell" via package "IO::Handle" at
> /usr/lib/perl5/vendor_perl/5.10.0/Archive/Zip/Member.pm line 746.
Interesting. It appears Archive::Zip is trying to seek() the
I am using BackupPC to back up two CentOS servers and one Solaris server.
The backup runs OK on CentOS, but on Solaris I get errors
when accessing folders with the character : in their name,
e.g.:
/etc/svc/volatile/svc:/
/etc/openwin/etc/devdata/xf86/boards/1:3:0/
The errors appear even if I exclude those directories.
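For reference, per-host excludes in BackupPC are expressed in config.pl roughly like this; a sketch only, since the poster notes the error persists even with excludes. The share name '/' is an assumption, and the paths are the problem directories quoted above:

```perl
# Sketch of a per-host config.pl fragment (BackupPC 3.x); the '/' share
# name is an assumption. Paths are the problem directories from the post.
our %Conf;
$Conf{BackupFilesExclude} = {
    '/' => [
        '/etc/svc/volatile/svc:',
        '/etc/openwin/etc/devdata/xf86/boards/1:3:0',
    ],
};
```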
Hi Craig,
On 25/01/2010 at 01:28, Craig Barratt wrote:
> Johan,
>
> I looked at the small broken zip file but couldn't determine
> why it wasn't written correctly.
>
> Just to confirm - BackupPC_zipCreate does work correctly when
> you run it manually with compression turned on?
>
> Next, I would s
On Jan 21, 2010, at 3:10 PM, Don Krause wrote:
> Will do..
>
> On Jan 21, 2010, at 2:59 PM, Shawn Perry wrote:
>
>> and touch a file in there with the current date/time while you're at it.
>>
>> On Thu, Jan 21, 2010 at 2:06 PM, Tino Schwarze
>> wrote:
>>> Hi Don,
>>>
>>> On Thu, Jan 21, 201
There are two points at which you'll run into problems:
1) At the 2GB barrier
2) At the point at which you exceed smbclient's timeout, which naturally
depends on the speed of your network and disks, and on the timeout value
itself
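On the timeout side, the knob BackupPC exposes is $Conf{ClientTimeout} in config.pl; a sketch, assuming BackupPC 3.x (72000 seconds is, to my recollection, the shipped default; raise it for slow links or very large shares):

```perl
# Sketch, BackupPC 3.x config.pl. ClientTimeout is the number of seconds
# the server waits for output from the client transfer before aborting
# the backup; 72000 (20 hours) is believed to be the shipped default.
our %Conf;
$Conf{ClientTimeout} = 72000;
```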
> thank you for that!
>
> any experiences with SMB and
The links in the BackupPC web interface that point to various explanations in
the built-in manual aren't working with a plain-vanilla installation.
Creating a symlink in /usr, so that /usr/doc points to
/usr/share/doc/backuppc-3.1.0/ takes care of the problem.
This is on CentOS and BackupPC insta
>-Original Message-
>From: Carl Wilhelm Soderstrom [mailto:chr...@real-time.com]
>Sent: Monday, January 25, 2010 4:58 PM
>To: backuppc-users@lists.sourceforge.net
>Subject: Re: [BackupPC-users] Max concurrent jobs
>
>On 01/25 01:52 , Sorin Srbu wrote:
>> What kind of hardware performance is
On 01/25 01:52 , Sorin Srbu wrote:
> What kind of hardware performance is the "max concurrent jobs" value of four
> based on?
>
> I'm thinking of how much backup-load I can throw at our dual-xeon machine,
> like maybe increasing the value from four to six, or more.
I usually turn it down to 2 b
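The value being discussed is $Conf{MaxBackups} in config.pl; a sketch assuming BackupPC 3.x, where $Conf{MaxUserBackups} governs additional user-requested backups on top of the scheduled ones:

```perl
# Sketch, BackupPC 3.x config.pl.
our %Conf;
$Conf{MaxBackups}     = 4;  # scheduled backups allowed to run at once
$Conf{MaxUserBackups} = 4;  # extra user-requested backups on top of that
```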
thank you for that!
any experiences with SMB and large files? with updated smbclients, when are
we likely to run into problems?
thanks a lot!
On Sun, Jan 24, 2010 at 11:17 AM, Pedro M. S. Oliveira <
pmsolive...@gmail.com> wrote:
>
> Currently I have some vmware machine files with over 120GB an
Ok, then I'll just find some way to get that machine backed up one time. Too
bad we can't just piece together an initial full.
--- On Mon, 1/25/10, Tyler J. Wagner wrote:
> From: Tyler J. Wagner
> Subject: Re: [BackupPC-users] Waiting for a full
> To: backuppc-users@lists.sourceforge.net
>
The very first full that you run on a new host is usually the worst. Once most
of the files are checksummed it's fine. I always run that first full on a
wired connection, and then switch to wireless after that. If you still can't
complete it in that time, consider using includes/excludes to
Hi all,
What kind of hardware performance is the "max concurrent jobs" value of four
based on?
I'm thinking of how much backup-load I can throw at our dual-xeon machine,
like maybe increasing the value from four to six, or more.
Thanks.
--
BW,
Sorin
---
>-Original Message-
>From: Timothy Murphy [mailto:gayle...@eircom.net]
>Sent: Monday, January 25, 2010 4:21 AM
>To: backuppc-users@lists.sourceforge.net
>Subject: [BackupPC-users] "Unable to read 4 bytes" - most useless error
>message ever?
>
>Is there anywhere a worse error message
>than "