Re: [BackupPC-users] achieving 3-2-1 backup strategy with backuppc

2022-06-03 Thread Sharuzzaman Ahmat Raslan
On Thu, Jun 2, 2022 at 2:29 AM Libor Klepáč  wrote:
>
> Hi,
> we use backuppc in containers (systemd-nspawn), each instance on
> separate btrfs drive.
> Then we do snapshots of said drives using btrbk.
> We pull those snapshots from remote machines, also using btrbk.
>
> If we need to spin up a container in the remote location (we have longer
> retention in the remote location), we just create a read-write copy of
> the snapshot and spin it up to extract files.
>
> With backuppc4, we also tried using btrfs compression with zstd
> instead of backuppc's internal compression (you don't need compression,
> because you don't use checksum-seed anymore).
> It seems to work nicely too.
>
>
> Libor
>

Interesting implementation.

How do you manage the configuration files? Are they inside the snapshot
as well? Do you launch a new container at the remote location and have it
read the configuration from the snapshot?

If you have documented this implementation in a blog or on Medium, I'd be
interested to read more about it.
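
For anyone curious to try something similar, here is a minimal sketch of
the snapshot-and-pull idea using plain btrfs commands rather than btrbk
(all paths and host names are illustrative, not taken from Libor's setup):

    # read-only snapshot of the BackupPC data subvolume
    btrfs subvolume snapshot -r /srv/backuppc /srv/snapshots/backuppc.new

    # ship it to the remote site, incrementally against the last common snapshot
    btrfs send -p /srv/snapshots/backuppc.prev /srv/snapshots/backuppc.new \
        | ssh remote-host btrfs receive /srv/snapshots

    # at the remote site, a writable copy can then be spun up to extract files
    btrfs subvolume snapshot /srv/snapshots/backuppc.new /srv/restore/backuppc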


-- 
Sharuzzaman Ahmat Raslan


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] achieving 3-2-1 backup strategy with backuppc

2022-06-03 Thread Sharuzzaman Ahmat Raslan
On Wed, Jun 1, 2022 at 11:32 PM Ray Frush  wrote:
>
> I have always interpreted the 3-2-1 strategy as applying to copies of your data,
> not the number of backups
> (https://www.backblaze.com/blog/the-3-2-1-backup-strategy/).
>
> As such, I’ve used two strategies over time.
> 1) Use BackupPC to back up local devices in the same building/LAN, and have a
> second BackupPC instance in a separate space also running backups of the same
> devices. (3 copies of the data: the source, one on local backup, one on
> remote backup. Requires good network speeds between your local site and your
> remote site.)
>
> 2) Use BackupPC to back up local devices to a NAS. Use NAS replication to
> push a copy of the BackupPC data to a remote device.
>

Strategy no. 1, running two BackupPC systems, is interesting. I will run
some tests to figure out whether our ISP upload bandwidth of just 10 Mbps
(on a 30 Mbps fibre subscription) is good enough to run a BackupPC
system in the cloud.
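
As a rough back-of-the-envelope check: 10 Mbps is about 1.25 MB/s, so an
initial full of, say, 100 GB would need roughly 100,000 MB / 1.25 MB/s =
80,000 s, or about 22 hours. An rsync-based transfer after that only moves
changed data, so the steady state is far cheaper than the first pass.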

-- 
Sharuzzaman Ahmat Raslan




Re: [BackupPC-users] achieving 3-2-1 backup strategy with backuppc

2022-06-03 Thread Sharuzzaman Ahmat Raslan
On Wed, Jun 1, 2022 at 11:09 PM  wrote:
>
> Sharuzzaman Ahmat Raslan wrote at about 14:46:52 +0800 on Wednesday, June 1, 
> 2022:
>  > Hello,
>  >
>  > I have been using BackupPC for a long time, and have even implemented it
>  > successfully for several clients.
>  >
>  > Recently I came across several articles about the 3-2-1 backup
>  > strategy and tried to rethink my previous implementation and how to
>  > achieve it with BackupPC.
>  >
>  > For anyone who is not familiar with the 3-2-1 backup strategy, the
>  > idea is that you should have 3 copies of your backups: 2 copies
>  > locally on different media or servers, and 1 copy remotely in the
>  > cloud or on a remote server.
>  >
>  > I have previously implemented BackupPC + NAS, where I created a Bash
>  > script to copy the backup data to the NAS. That fulfils the 2
>  > local copies, and I could extend it further by having another Bash
>  > script copy from the NAS to cloud storage (e.g. an S3 bucket).
>  >
>  > My concern right now is that the experience is not seamless for the
>  > user: they have no indicator/report about the status of the backup
>  > copies on the NAS or in the S3 bucket.
>  >
>  > Restoring from the NAS or S3 is also manual and not simple for the user.
>  >
>  > Has anyone come across a similar implementation of the 3-2-1 backup
>  > strategy using BackupPC?
>  >
> Sounds interesting...
>  > Is there any plan from the developers to extend BackupPC to cover this
>  > strategy?
>
> I don't think this is on the roadmap...
> But it is open source and easily extendable given that the code is
> mostly perl.
> Feel free to add this!
>
I'm not well versed in Perl, but I can try to create a POC in Python,
keeping to the spirit of BackupPC as much as possible.



-- 
Sharuzzaman Ahmat Raslan




[BackupPC-users] achieving 3-2-1 backup strategy with backuppc

2022-06-01 Thread Sharuzzaman Ahmat Raslan
Hello,

I have been using BackupPC for a long time, and have even implemented it
successfully for several clients.

Recently I came across several articles about the 3-2-1 backup
strategy and tried to rethink my previous implementation and how to
achieve it with BackupPC.

For anyone who is not familiar with the 3-2-1 backup strategy, the
idea is that you should have 3 copies of your backups: 2 copies locally
on different media or servers, and 1 copy remotely in the cloud or on a
remote server.

I have previously implemented BackupPC + NAS, where I created a Bash
script to copy the backup data to the NAS. That fulfils the 2 local
copies, and I could extend it further by having another Bash script
copy from the NAS to cloud storage (e.g. an S3 bucket).
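
For illustration, the two-hop copy can be as small as this sketch
(hypothetical paths and bucket name; it assumes the NAS is mounted at
/mnt/nas and the AWS CLI is configured):

    #!/bin/sh
    # nightly job: BackupPC pool -> NAS, then NAS -> S3
    rsync -aH --delete /var/lib/backuppc/ /mnt/nas/backuppc/ || exit 1
    aws s3 sync /mnt/nas/backuppc/ s3://example-bucket/backuppc/

One caveat on the S3 leg: the BackupPC pool relies heavily on hardlinks,
which object storage does not preserve, so the S3 copy can end up much
larger than the pool it came from.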

My concern right now is that the experience is not seamless for the user:
they have no indicator/report about the status of the backup copies on
the NAS or in the S3 bucket.

Restoring from the NAS or S3 is also manual and not simple for the user.

Has anyone come across a similar implementation of the 3-2-1 backup
strategy using BackupPC?

Is there any plan from the developers to extend BackupPC to cover this strategy?

Thank you.

-- 
Sharuzzaman Ahmat Raslan




Re: [BackupPC-users] Backup-PC Project

2014-09-26 Thread Sharuzzaman Ahmat Raslan
Hi,

Instead of reading directly from files and directories, I think people
should start by creating a proper reporting API for BackupPC.

Once this reporting API is in place, a lot of other useful plugins can
be created simply by querying it.

An example reporting API could expose:
- number of pending backups
- number of pending user-requested backups
- pool size
- total files in the pool
- repeated files
- longest chain
- etc.

This information is already calculated by BackupPC, but it is not exposed
via a proper API.

See http://backuppc.sourceforge.net/BackupPCServerStatus.html
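
Some of these numbers can already be pulled from a running server with the
stock BackupPC_serverMesg utility, which would be a natural thing for such
an API to wrap (the install path varies by distribution):

    # run as the backuppc user; prints the server's status variables
    su -s /bin/sh backuppc -c "/usr/share/backuppc/bin/BackupPC_serverMesg status info"
    su -s /bin/sh backuppc -c "/usr/share/backuppc/bin/BackupPC_serverMesg status hosts"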


On Fri, Sep 26, 2014 at 10:27 PM, Dick Tump (Shock Media B.V.) 
d...@shockmedia.nl wrote:

 Hello Heuzé,

 For my work I have created a few scripts that collect BackupPC
 information from the text files with backup details and then store it
 in the MySQL database of our server administration. This is very
 simple to do, for example with a small Perl script. Unfortunately I
 can't share those scripts (and they're not very useful for others), but to
 help you a bit further: what I did was just read the 'backup' files
 in the pc/servername directory. To find out what each value means,
 I compared them to what the web interface lists.

 Of course you can do the same with all other information, or make
 scripts to extract files, send you notifications by e-mail, or put
 details in some kind of database which makes it easy for you to search
 for information.

 Met vriendelijke groet / Kind regards,
 Dick Tump | Shock Media B.V.

 Tel: +31 (0)546 - 714360
 Fax: +31 (0)546 - 714361
 Web: http://www.shockmedia.nl/

 Connect to me @ LinkedIn:
 http://nl.linkedin.com/in/dicktump

 On 09/26/2014 03:48 PM, Heuzé Florent wrote:
  Hello everyone,
  As part of a research project, I want to develop a plugin for
  Backup-PC.

  Currently, in my business, backup integrity checks are carried out by
  humans using the Backup-PC interface (last backup age, xfer errors,
  average backup sizes, average backup durations, random file extraction,
  etc.).

  This is a repetitive, lengthy and inefficient task.

  The aim is therefore to create a plugin that quickly and efficiently
  gathers the information needed for these manual checks.

  I am sharing this project with the Backup-PC community to collect your
  opinions, advice, support and ideas. I also invite you to share your
  experience if you have the same problem; the next step is a
  functional analysis to determine the real need.

  Kind regards,
 
 
  Heuzé Florent
  FIREWALL-SERVICES SARL.
  Société de Services en Logiciels Libres
  Technopôle Montesquieu
  33650 MARTILLAC
  Visio: https://vroom.im/heuzef
  Web: http://www.firewall-services.com
 
 
 







-- 
Sharuzzaman Ahmat Raslan


Re: [BackupPC-users] Has anybody here configured BackupPC to back up/image linux

2014-09-26 Thread Sharuzzaman Ahmat Raslan
Hi,

How about Clonezilla + Ceph cluster + btrfs?

Clonezilla - should be able to back up the whole hard disk as an image.
Ceph cluster - block-based distributed storage, with object, file and block
access for clients.
btrfs - to use the deduplication feature to save space.

For Ceph, you can start with 2 nodes to get redundancy, then slowly add
nodes to increase the storage size and improve redundancy.

Thanks.

On Sat, Sep 27, 2014 at 12:49 AM, xpac backuppc-fo...@backupcentral.com
wrote:

 So, I took this from their Wikipedia page: "BackupPC is not a block-level
 backup system like Ghost4Linux but performs file-based backup and restore.
 Thus it is not suitable for backup of disk images or raw disk partitions."

 Hmmm, just thinking out loud here, but is there something open source that
 is better suited for server backups/restores/disaster recovery?

 Or do those that use it find that it is relatively easy in the event of
 a disaster? Right now I'm working on implementing something in our small
 production environment (consisting entirely of CentOS servers) and I have
 BackupPC up and running; I just want to see if maybe I should reconsider
 before we get too far along and switching to something else becomes much
 more painful.

 Any/all opinions are welcome :D









-- 
Sharuzzaman Ahmat Raslan


Re: [BackupPC-users] Backup-PC Project

2014-09-26 Thread Sharuzzaman Ahmat Raslan
Nice!

But BackupPC::Lib, I believe, is Perl-specific. People might have
applications in other languages, so with a proper API that returns XML-RPC
or JSON, other people could use their familiar programming language to
interface with BackupPC without having to understand Perl.

Thanks.

On Sat, Sep 27, 2014 at 12:59 AM, Alexander Moisseev mois...@mezonplus.ru
wrote:

 On 26.09.2014 20:09, Sharuzzaman Ahmat Raslan wrote:
 
  Instead of reading directly from files and directories, I think people
  should start by creating a proper reporting API for BackupPC.
 

 Retrieving information with BackupPC::Lib is relatively simple.

 Here are a couple of examples:
 https://github.com/moisseev/BackupPC_report
 https://github.com/moisseev/BackupPC_Timeline







-- 
Sharuzzaman Ahmat Raslan


Re: [BackupPC-users] Mounting synology NAS breaks Backuppc

2014-09-24 Thread Sharuzzaman Ahmat Raslan
Hi,

Which account is used to mount the NAS?

backuppc or root?
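
One quick way to check, assuming the distribution's usual backuppc user
(the commands are illustrative):

    # with the NAS mounted, see who owns the tree and its mode
    stat -c '%U:%G %a' /var/lib/backuppc

    # and confirm the backuppc user itself can write there
    sudo -u backuppc touch /var/lib/backuppc/.write-test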

Thanks.

On Thu, Sep 25, 2014 at 12:16 AM, Tom Fallon tom.r.fal...@gmail.com wrote:

  Recently I had to rebuild a backuppc server I’ve inherited from the previous
 admin. It runs Ubuntu 14.04 server and Backuppc version 3.2.1 installed
 from the repositories. The backups are stored on a Synology NAS (RS3413XS+
 running DSM 4.3) which I’ve mounted using NFS.



 If I mount the NAS (mount NAS IP:/volume5/LinuxBackups
 /var/lib/backuppc) I can see the backups on the NAS and can do a “touch
 test.txt” and modify contents, so permissions seem OK. The backuppc web
 GUI is accessible; however, I get the following error.



 Error: Unable to connect to BackupPC server

 This CGI script (/backuppc/index.cgi) is unable to connect to the BackupPC
 server on Servername port -1.
 The error was: unix connect: Connection refused.
 Perhaps the BackupPC server is not running or there is a configuration
 error. Please report this to your Sys Admin.

 If I try to unmount the NAS I get an error that the mount is in use (makes
 sense), but when I try to stop backuppc I get another error



 No process in pidfile '/var/run/backuppc/BackupPC.pid' found running; none
 killed.



 A ps -ef | grep backuppc shows



 username@servername~$ ps -ef | grep backuppc

 backuppc  1874 1  0 16:26 ?00:00:00 /usr/bin/perl
 /usr/share/backuppc/bin/BackupPC -d

 backuppc  1877  1386  0 16:27 ?00:00:00 /usr/bin/perl
 /usr/share/backuppc/lib/realindex.cgi

 backuppc  1878  1386  0 16:27 ?00:00:00 /usr/bin/perl
 /usr/share/backuppc/lib/realindex.cgi

 backuppc  1884  1386  0 16:31 ?00:00:00 /usr/bin/perl
 /usr/share/backuppc/lib/realindex.cgi

 username  1947  1629  0 16:33 pts/000:00:00 grep --color=auto backuppc



 If I kill all backuppc services, then unmount the NAS and restart the
 backuppc service, the GUI works again as expected.



 I have tried searching the backuppc documentation and googled fairly
 extensively; however, the only hit I turned up for the above error was someone
 using a QNAP NAS on which the firmware had been updated. I am also seeing
 several pointers to the Backuppc wiki, but it seems to only contain a
 couple of screenshots.



 So my question is – what is the correct procedure to mount an external NAS
 to the /var/lib/backuppc directory so that backuppc works as expected? Am I
 missing something obvious here?



 Cheers Kiweegie






-- 
Sharuzzaman Ahmat Raslan


Re: [BackupPC-users] Long-Term Backups/Rotation

2014-09-23 Thread Sharuzzaman Ahmat Raslan
I would upvote this answer if it were on Stack Overflow :)
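
For readers skimming, the settings under discussion, with the decimal
commas fixed, would look roughly like this (a sketch; the retention
arithmetic is spelled out in Bowie's reply quoted below):

    $Conf{FullPeriod}  = 6.97;
    $Conf{IncPeriod}   = 0.97;
    $Conf{FullKeepCnt} = [4, 0, 13, 0, 0, 0, 10];  # 13 four-weeklies covers a full year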

On Tue, Sep 23, 2014 at 1:19 AM, Bowie Bailey bowie_bai...@buc.com wrote:

 On 9/22/2014 2:33 AM, Sorin Srbu wrote:
  -Original Message-
  From: str...@hasnoname.de [mailto:str...@hasnoname.de]
  Sent: den 11 september 2014 12:14
  To: General list for user discussion, questions and support
  Subject: Re: [BackupPC-users] Long-Term Backups/Rotation
 
  wouldn't it be possible to do this with the exponential backup expiry
  like this:

  $Conf{FullPeriod} = 6.97;
  $Conf{IncPeriod} = 0.97;
  $Conf{FullKeepCnt} = [4,0,12,0,0,0,10];

  Would that give me:
  daily incrementals for a week,
  4 weeks of full backups (1 full each week),
  12 months (4*12 weeks) of full backups (1 full each month),
  ~10 years (10*64 weeks) of backups (1 full each year)?
 
  Can anyone clarify this for me?
  Was there ever a conclusion to this, or did I miss it?

 That looks pretty conclusive to me. What needs to be clarified?

 [4,0,12,0,0,0,10]

 4 backups at 1 week
 0 backups at 2 weeks
 12 backups at 4 weeks
 0 backups at 8 weeks
 0 backups at 16 weeks
 0 backups at 32 weeks
 10 backups at 64 weeks

 The monthly backups are actually slightly less than a month apart. If you
 want a full year, you should do 13 of them.

 The yearly backups at 64 weeks are actually about 14.7 months apart, so
 they're not quite yearly. You could increase the number of monthly
 backups if you want full coverage, or switch to 32-week backups instead.

 Keep in mind that due to pooling, backups do not take up nearly as much
 space as you expect (only enough to account for the changed files), so
 you will be able to keep many more backups than you think you can.

 --
 Bowie






-- 
Sharuzzaman Ahmat Raslan


Re: [BackupPC-users] very slow backup on windows

2014-09-17 Thread Sharuzzaman Ahmat Raslan
Try disabling the compression and see if it improves.

Personally, I have seen Windows machines with big shares (more than 1 GB)
transfer very slowly. This could be because samba has to read all the
files, generate the tarmode stream, transfer it over to the server,
extract it, and then compare which files should be saved.

Try transferring a 1 GB file from your Windows machine to your Linux machine
to see how long it takes to finish.

For my solution, I changed all Windows machines with big shares to use
rsyncd. I set up rsyncd on my BackupPC server, then installed the rsync
client on the Windows machine.

After configuring the rsync client, the first backup will be slow, but
subsequent backups will be fast because the rsync algorithm checks for
changed files and only transfers those.

Your mileage may vary.



On Wed, Sep 17, 2014 at 5:18 PM, Nicola Scattolin n...@ser-tec.org wrote:

 Hi,
 i have backuppc backing up a couple of linux machines of 30/40 GB each
 and a windows shared folder of 1.1 TB.
 Usually it takes a day and a half to back up the windows folder at a
 speed of around 7 Mbit/s, but now the speed has decreased to 5 and it takes
 up to 3 days to make a full backup.
 I have already tried to reboot the server but the speed remains the same.
 Some details: the backup server is proxmox-virtualized, 2.5 GB of RAM
 and 2 processors (seems enough, since it never reaches more than 80% when
 making backups), 10/100 integrated ethernet port.
 The windows system is a windows server 2003, also proxmox-virtualized, 1
 processor.
 Backups run on the local network, at night, so LAN traffic is very low
 or null.
 The host configuration in backuppc is:
 XferMethod: smb
 ClientCharsetLegacy: iso-8859-1
 SmbClientFullCmd: $smbClientPath \\$host\$shareName $I_option -U
 $userName -E -d 1 -c tarmode\ full -Tc$X_option - $fileList
 SmbClientIncrCmd: $smbClientPath \\$host\$shareName $I_option -U
 $userName -E -d 1 -c tarmode\ full -TcN$X_option $timeStampFile - $fileList
 CompressLevel: 3 (maybe I can reduce compression to save time, IF the
 problem is backuppc server speed)

 How can I speed up my backups? Also on the linux machines the max speed
 is 11 Mbit/s, so not that much.
 Thank you






-- 
Sharuzzaman Ahmat Raslan


Re: [BackupPC-users] very slow backup on windows

2014-09-17 Thread Sharuzzaman Ahmat Raslan
I'm not sure about that link.

Let me correct myself based on my previous email.

You should install and configure rsyncd on the Windows machines that show
the problem. If a machine is not problematic, the smb method is simpler.

You should install the rsync client on your backuppc Linux server.

How to install and configure rsyncd on Windows? I believe this
reference is better:
http://gerwick.ucsd.edu/backuppc_manual/backuppc_winxp.html

Then, using the backuppc GUI, edit the settings for that machine so the
rsync client on the server connects to rsyncd on the Windows machine.
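
For reference, a minimal rsyncd.conf on the Windows side might look like
this (module name, path and secrets file are illustrative, assuming a
cygwin-based rsync as in the guides above):

    use chroot = false
    [cDrive]
        path = /cygdrive/c
        auth users = backuppc
        secrets file = /cygdrive/c/rsyncd/rsyncd.secrets
        read only = true

A quick test from the BackupPC server would then be:

    rsync --list-only backuppc@windows-host::cDrive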








On Wed, Sep 17, 2014 at 10:24 PM, Nicola Scattolin n...@ser-tec.org wrote:

 I'm trying to set up an rsyncd backup for a windows machine following this
 guide:

 http://www.michaelstowe.com/backuppc/

 but the script preusercmd.sh tries to open another script located at
 /etc/backuppc/scripts/auth.sh, and I don't have that script. Is that script
 somewhere else, or has it disappeared in version 3.1.0?
 I also have trouble with winexe, but that is O.S.-related.


 On 17/09/2014 12:22, Sharuzzaman Ahmat Raslan wrote:

Try disabling the compression and see if it improves.

 Personally, I have seen Windows machines with big shares (more than
 1 GB) transfer very slowly. This could be because samba has to read all
 the files, generate the tarmode stream, transfer it over to the server,
 extract it, and then compare which files should be saved.

 Try transferring a 1 GB file from your Windows machine to your Linux
 machine to see how long it takes to finish.

 For my solution, I changed all Windows machines with big shares to use
 rsyncd. I set up rsyncd on my BackupPC server, then installed the rsync
 client on the Windows machine.

 After configuring the rsync client, the first backup will be slow, but
 subsequent backups will be fast because the rsync algorithm checks for
 changed files and only transfers those.

 Your mileage may vary.



 On Wed, Sep 17, 2014 at 5:18 PM, Nicola Scattolin n...@ser-tec.org
 wrote:

 Hi,
 i have backuppc backing up a couple of linux machines of 30/40 GB each
 and a windows shared folder of 1.1 TB.
 Usually it takes a day and a half to back up the windows folder at a
 speed of around 7 Mbit/s, but now the speed has decreased to 5 and it takes
 up to 3 days to make a full backup.
 I have already tried to reboot the server but the speed remains the same.
 Some details: the backup server is proxmox-virtualized, 2.5 GB of RAM
 and 2 processors (seems enough, since it never reaches more than 80% when
 making backups), 10/100 integrated ethernet port.
 The windows system is a windows server 2003, also proxmox-virtualized, 1
 processor.
 Backups run on the local network, at night, so LAN traffic is very low
 or null.
 The host configuration in backuppc is:
 XferMethod: smb
 ClientCharsetLegacy: iso-8859-1
 SmbClientFullCmd: $smbClientPath \\$host\$shareName $I_option -U
 $userName -E -d 1 -c tarmode\ full -Tc$X_option - $fileList
 SmbClientIncrCmd: $smbClientPath \\$host\$shareName $I_option -U
 $userName -E -d 1 -c tarmode\ full -TcN$X_option $timeStampFile -
 $fileList
 CompressLevel: 3 (maybe I can reduce compression to save time, IF the
 problem is backuppc server speed)

 How can I speed up my backups? Also on the linux machines the max speed
 is 11 Mbit/s, so not that much.
 Thank you






 --
 Sharuzzaman Ahmat Raslan




 --
 Nicola
 Ser.Tec s.r.l.
 Via E. Salgari 14/E
 31056 Roncade, Treviso




Re: [BackupPC-users] Backup destination is another server

2014-05-20 Thread Sharuzzaman Ahmat Raslan
Backing up over VPN is a bad idea.

It is bad because your bandwidth will be your limitation. In my common
encounter, 3 Mbps is the fastest that we can get. That is way more slower
than writing to a SATA disk, usually at 3Gbps

Your backup will never finish.

What you should do is having a local BackupPC server, that backs up all
machine, then run rsync to copy the backup data to remote server. You may
not have latest and complete data, because new data keep overwriting the
old one, but at least, you get a copy somewhere remote. If your machine did
not change data much, your rsync will finish in 2 or 3 days.

This is how I implement a secondary backup for my client.

One machine as BackupPC, then I run rsync to copy /var/lib/backuppc to
another NAS. I made a script to check if the rsync is running every hour.
If there are no rsync running, start a new session. With nearly 1TB data,
usually the rsync finish copying within 24 hours (that is in same network).
If on VPN, I think it could take 1+ month (100 Mbps network = 24 hours or 1
day, 3 Mbps VPN = 100/3 = 33 days)
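
The hourly check can be as small as a lock-guarded cron entry (a sketch;
the paths and the NAS target are illustrative):

    # /etc/cron.d/backuppc-offsite: start a copy only if one isn't already running
    0 * * * * root flock -n /var/run/offsite.lock rsync -aH --delete /var/lib/backuppc/ nas:/volume1/backuppc/ >> /var/log/offsite.log 2>&1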




On Tue, May 20, 2014 at 3:33 PM, brononius backuppc-fo...@backupcentral.com
 wrote:

 Okay...

 The idea was that i've got several machines:
 - File server
 - SQL server
 - Webserver
 - Monitoring server
 - Management server (ldap, dns, proxy...)
 - 1 remote server (connected by vpn)

 And I was thinking of putting the backup software on the management server.
 This way, everything stays 'centralized'.
 But if I understand correctly, it would be best to install it on the
 remote server...









-- 
Sharuzzaman Ahmat Raslan


Re: [BackupPC-users] Backup destination is another server

2014-05-20 Thread Sharuzzaman Ahmat Raslan
Wait!

RAID is not a backup. Search Google for the phrase "raid is not backup"
and try to understand why RAID is not a backup.

You can do an rsync pre-seed as you mentioned: rsync your BackupPC server
to a machine, move that machine to the remote location, and repeat the
rsync later. That will help with the initial backup.

If you really want a remote replication system, I would suggest exploring
GlusterFS geo-replication.

But you have to set up your GlusterFS cluster filesystem before you can use
geo-replication. Setting up GlusterFS is another long story and requires
extra machines and storage, but in the end it should be able to solve your
problem.
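
The rough shape of it, once both volumes exist (volume and host names are
illustrative; the SSH key setup is covered in the Gluster documentation):

    gluster volume geo-replication backupvol remotehost::backupvol create push-pem
    gluster volume geo-replication backupvol remotehost::backupvol start
    gluster volume geo-replication backupvol remotehost::backupvol status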






On Tue, May 20, 2014 at 5:15 PM, brononius backuppc-fo...@backupcentral.com
 wrote:

 My 'problem' is that the fileserver on site 1 has about 4TB of space, and
 about 2TB is used.
 In total 6x1TB disks, which are configured in RAID. So we've got a kind of
 'backup' already onsite with the RAID.

 And the idea is to have a remote backup, preferably daily.

 I was thinking of syncing the data locally the first time, then moving the
 backup server to the remote site.
 This way, the only things that need to be copied later are the changed
 files, and not the complete 2TB.

 What do you think? Any suggestions?









-- 
Sharuzzaman Ahmat Raslan


Re: [BackupPC-users] BackupPC_nightly runs Out of Memory

2014-01-23 Thread Sharuzzaman Ahmat Raslan
On Thu, Jan 16, 2014 at 6:59 PM, Remi Verchere rverch...@gmail.com wrote:

 Pool is 2.15GB comprising 45680 files and 36 directories (as of
 2014-01-15 20:07),
 Pool hashing gives 1 repeated files with longest chain 1,


Are all the files you are backing up unique?

There seems to be only 1 repeated file out of the 45680 files available.




-- 
Sharuzzaman Ahmat Raslan


Re: [BackupPC-users] Using external USB disk for BackupPC Data directory

2014-01-10 Thread Sharuzzaman Ahmat Raslan
Hi Dayo,

It is possible. BackupPC does not care where your storage is, as long as it
is mountable and writable.

But your backup speed could be slow, depending on the speed of your USB
connection.

Thanks.




On Fri, Jan 10, 2014 at 5:09 PM, jargon cowtux...@gmail.com wrote:

 Hi

 I was wondering if it's possible to have the data directory on an
 external USB drive connected to the box where BackupPC is installed?

 Thanks

 Dayo






-- 
Sharuzzaman Ahmat Raslan


Re: [BackupPC-users] Using external USB disk for BackupPC Data directory

2014-01-10 Thread Sharuzzaman Ahmat Raslan
Yes, if the external disk is missing (or the NFS share is not mounted
properly), the mount point will just be a directory on your local disk, and
BackupPC will fill that disk to the max.

If you suddenly find that your OS disk is full, that indicates something is
wrong with your mounting.

It happened to a customer of mine who used a second disk for the BackupPC
data: at one point the disk did not mount properly, and the first disk was
filled to the max.
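
A cheap guard against that failure mode is to refuse to run when the mount
is missing, for example in a wrapper around the nightly copy job (a sketch,
assuming the data directory is /var/lib/backuppc):

    # abort unless the backup volume is really mounted
    mountpoint -q /var/lib/backuppc || { echo "backup volume not mounted" >&2; exit 1; }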

Thanks.




On Fri, Jan 10, 2014 at 11:33 PM, Carl Wilhelm Soderstrom 
chr...@real-time.com wrote:

 On 01/10 02:09 , Adam Hardy wrote:
  I have just started using this setup and I don't know what's going
  to happen. I assume when the drive isn't mounted, backuppc will
  complain but just try again later.

 Best you test that. :)
 I don't know the details of how you have it set up, but one of the possible
 (though I think unlikely) failure modes is that BackupPC fills up your
 filesystem and runs you out of space. (It depends on exactly where that device
 is mounted - if BPC is expecting a certain directory tree, it may in fact
 fail to run, but I don't know how gracefully it fails.)

 --
 Carl Soderstrom
 Systems Administrator
 Real-Time Enterprises
 www.real-time.com






-- 
Sharuzzaman Ahmat Raslan


Re: [BackupPC-users] Mystery firewall on XP Pro

2014-01-08 Thread Sharuzzaman Ahmat Raslan
Hi Kenneth,

Have you tried booting in safe mode with networking to see if the netbios
name can be queried?

Is the machine properly configured to use netbios over tcp/ip? See the
reference here:
https://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/sag_tcpip_pro_usewinsconfig.mspx?mfr=true
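
A few quick checks on the XP box itself might also help narrow it down
(illustrative, XP-era commands):

    netsh firewall show state        # confirm the built-in firewall really is off
    netstat -an | findstr ":137"     # is anything listening on the NetBIOS name port?

And from the Linux side, confirm that 137/udp is reachable at all:

    nmap -sU -p 137 10.169.6.220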

Thanks.




On Wed, Jan 8, 2014 at 11:53 PM, Kenneth Porter sh...@sewingwitch.com wrote:

 I removed AVG anti-virus from my company systems this year and replaced it
 with the free Immunet. I'm using rsyncd to back them up to my Linux box.
 This all works fine with all but one machine. It's mysteriously blocking
 netbios access so the probe for the correct address at the start of the
 backup fails. Windows Firewall is disabled. I do see an avgagent.exe
 running in services and I'm wondering if there's some vestige of AVG
 keeping a firewall up. I ran AVG's cleaner utility and it didn't remove
 this.

 Any ideas on how to figure out what's blocking the netbios address query on
 this one machine? I tried nbtstat from the problem machine and even that
 fails.

 The command that fails with no response:

 /usr/bin/nmblookup -A 10.169.6.220







-- 
Sharuzzaman Ahmat Raslan


Re: [BackupPC-users] Error when backing up 1.4TB files

2013-12-25 Thread Sharuzzaman Ahmat Raslan
Hi Prem,

MYD and MYI files belong to MySQL. I believe the files keep changing, so
BackupPC cannot confirm that the size and hash are correct during and
after the transfer.

If you really want to back up these binary database files, I would suggest
creating a static snapshot of the folder/mount point, perhaps using the
LVM tools, provided the files live inside an LVM logical volume.

Or you could perform a daily dump of the MySQL tables (which produces a
text file) and use BackupPC to back up that file. You should exclude the
MYD and MYI files if you use this method.
Thanks.




On Thu, Dec 26, 2013 at 9:17 AM, Prem squirr...@yahoo.com wrote:

 Hi,

 I am trying to back up a 1.4TB folder in Linux, but somehow I am getting
 some errors and the backup only completes partially.

 Appreciate any help on this:

 Here are the errors:


 attribSet: dir=f%2fmnt%2fdata/mysql/kbmatrix_snomed_10 exists
 attribSet(dir=f%2fmnt%2fdata/mysql/kbmatrix_snomed_10, file=matrixipi5.MYD, 
 size=4488979624, placeholder=)
 Starting file 3429 (mysql/kbmatrix_snomed_10/matrixipi5.MYI), blkCnt=0, 
 blkSize=2048, remainder=0
 mysql/kbmatrix_snomed_10/matrixipi5.MYI: size doesn't match (12861890560 vs 0)
 mysql/kbmatrix_snomed_10/matrixipi5.MYI got digests 
 6137db179868cdc52c7407c56fafaa5e vs 6137db179868cdc52c7407c56fafaa5e
 [ skipped 1 lines ]
 attribSet: dir=f%2fmnt%2fdata/mysql/kbmatrix_snomed_10 exists
 attribSet(dir=f%2fmnt%2fdata/mysql/kbmatrix_snomed_10, file=matrixipi5.MYI, 
 size=12861890560, placeholder=)
 Starting file 3430 (mysql/kbmatrix_snomed_10/matrixipi5.frm), blkCnt=0, 
 blkSize=2048, remainder=0
 mysql/kbmatrix_snomed_10/matrixipi5.frm: size doesn't match (8782 vs 0)
 mysql/kbmatrix_snomed_10/matrixipi5.frm got digests 
 8d7348ccff2108423f12fca06c39b994 vs 8d7348ccff2108423f12fca06c39b994
 [ skipped 1 lines ]
 attribSet: dir=f%2fmnt%2fdata/mysql/kbmatrix_snomed_10 exists
 attribSet(dir=f%2fmnt%2fdata/mysql/kbmatrix_snomed_10, file=matrixipi5.frm, 
 size=8782, placeholder=)
 Starting file 3431 (mysql/kbmatrix_snomed_10/matrixipi6.MYD), blkCnt=0, 
 blkSize=2048, remainder=0
 mysql/kbmatrix_snomed_10/matrixipi6.MYD: size doesn't match (4488768644 vs 0)
 mysql/kbmatrix_snomed_10/matrixipi6.MYD got digests 
 bb526c0f4a40072b51c09ac9dec23fa8 vs bb526c0f4a40072b51c09ac9dec23fa8
 [ skipped 1 lines ]
 attribSet: dir=f%2fmnt%2fdata/mysql/kbmatrix_snomed_10 exists
 attribSet(dir=f%2fmnt%2fdata/mysql/kbmatrix_snomed_10, file=matrixipi6.MYD, 
 size=4488768644, placeholder=)
 Starting file 3432 (mysql/kbmatrix_snomed_10/matrixipi6.MYI), blkCnt=0, 
 blkSize=2048, remainder=0
 mysql/kbmatrix_snomed_10/matrixipi6.MYI: size doesn't match (12866581504 vs 0)
 Child is sending done
 Got done from child
 Got stats: 16821515 1936615698 1634885492 0 ('errorCnt' = 0,'ExistFileSize' 
 = 0,'ExistFileCnt' = 31,'TotalFileCnt' = 3383,'ExistFileCompSize' = 
 0,'TotalFileSize' = '964645288896')
 finish: removing in-process file mysql/kbmatrix_snomed_10/matrixipi6.MYI
 attribWrite(dir=f%2fmnt%2fdata/mysql/kbmatrix_snomed_10) - 
 /var/lib/backuppc/pc/indexing-vm/new/f%2fmnt%2fdata/fmysql/fkbmatrix_snomed_10/attrib
 attribWrite(dir=f%2fmnt%2fdata/mysql) - 
 /var/lib/backuppc/pc/indexing-vm/new/f%2fmnt%2fdata/fmysql/attrib
 attribWrite(dir=f%2fmnt%2fdata) - 
 /var/lib/backuppc/pc/indexing-vm/new/f%2fmnt%2fdata/attrib
 attribWrite(dir=) - /var/lib/backuppc/pc/indexing-vm/new//attrib
 Child is aborting
 Got exit from child
 Done: 3383 files, 964645288896 bytes
 Executing DumpPostUserCmd: /usr/share/backuppc/bin/endbkpemail.sh prem.kumar 
 0 indexing-vm full DumpPostUserCmd
 Got fatal error during xfer (aborted by signal=ALRM)
 Backup aborted by user signal
 Saving this as a partial backup, replacing the prior one (got 3383 and 3383 
 files versus 0)







-- 
Sharuzzaman Ahmat Raslan

Re: [BackupPC-users] Error when backing up 1.4TB files

2013-12-25 Thread Sharuzzaman Ahmat Raslan
Hi Prem,

rsync only works well with static files.

Even though you mentioned that the file (table) is not being used, MySQL
usually keeps refreshing its files, perhaps doing indexing or something
similar.

That is why proprietary backup software has agents for MySQL that read the
binary data and dump it as a static file for the backup.




On Thu, Dec 26, 2013 at 11:55 AM, Prem squirr...@yahoo.com wrote:

 Hi Sharuzzaman,

 Thank you for the details, but isn't rsync clever enough to dump all the
 files? The thing is, the files are not being used, hence there should
 likely be no activity.

 As for the error below: I checked the troubleshooting page and it's about
 the $Conf{ClientTimeout} setting, which is at the default of 72000
 (around 20 hrs). I have changed it to 20 but have yet to try. I hope no
 other implications arise from doing so.

 (aborted by signal=ALRM)



   --
  *From:* Sharuzzaman Ahmat Raslan sharuzza...@gmail.com
 *To:* Prem squirr...@yahoo.com; General list for user discussion,
 questions and support backuppc-users@lists.sourceforge.net
 *Sent:* Thursday, December 26, 2013 10:54 AM
 *Subject:* Re: [BackupPC-users] Error when backing up 1.4TB files

 Hi Prem,

 MYD and MYI files belong to MySQL. I believe the files keep changing, so
 BackupPC cannot confirm that the size and hash are correct during and
 after the transfer.

 If you really want to back up these binary database files, I would suggest
 creating a static snapshot of the folder/mount point, perhaps using the
 LVM tools, provided the files live inside an LVM logical volume.

 Or you could perform a daily dump of the MySQL tables (which produces a
 text file) and use BackupPC to back up that file. You should exclude the
 MYD and MYI files if you use this method.

 Thanks.




 On Thu, Dec 26, 2013 at 9:17 AM, Prem squirr...@yahoo.com wrote:

  Hi,

 I am trying to backup a 1.4TB folder in Linux but somehow I am getting
 some errors and the backup only completes partially.

 Appreciate any help on this:

  Here are the errors:


 attribSet: dir=f%2fmnt%2fdata/mysql/kbmatrix_snomed_10 exists
 attribSet(dir=f%2fmnt%2fdata/mysql/kbmatrix_snomed_10, file=matrixipi5.MYD, 
 size=4488979624, placeholder=)
 Starting file 3429 (mysql/kbmatrix_snomed_10/matrixipi5.MYI), blkCnt=0, 
 blkSize=2048, remainder=0
 mysql/kbmatrix_snomed_10/matrixipi5.MYI: size doesn't match (12861890560 vs 0)
 mysql/kbmatrix_snomed_10/matrixipi5.MYI got digests 
 6137db179868cdc52c7407c56fafaa5e vs 6137db179868cdc52c7407c56fafaa5e
 [ skipped 1 lines ]
 attribSet: dir=f%2fmnt%2fdata/mysql/kbmatrix_snomed_10 exists
 attribSet(dir=f%2fmnt%2fdata/mysql/kbmatrix_snomed_10, file=matrixipi5.MYI, 
 size=12861890560, placeholder=)
 Starting file 3430 (mysql/kbmatrix_snomed_10/matrixipi5.frm), blkCnt=0, 
 blkSize=2048, remainder=0
 mysql/kbmatrix_snomed_10/matrixipi5.frm: size doesn't match (8782 vs 0)
 mysql/kbmatrix_snomed_10/matrixipi5.frm got digests 
 8d7348ccff2108423f12fca06c39b994 vs 8d7348ccff2108423f12fca06c39b994
 [ skipped 1 lines ]
 attribSet: dir=f%2fmnt%2fdata/mysql/kbmatrix_snomed_10 exists
 attribSet(dir=f%2fmnt%2fdata/mysql/kbmatrix_snomed_10, file=matrixipi5.frm, 
 size=8782, placeholder=)
 Starting file 3431 (mysql/kbmatrix_snomed_10/matrixipi6.MYD), blkCnt=0, 
 blkSize=2048, remainder=0
 mysql/kbmatrix_snomed_10/matrixipi6.MYD: size doesn't match (4488768644 vs 0)
 mysql/kbmatrix_snomed_10/matrixipi6.MYD got digests 
 bb526c0f4a40072b51c09ac9dec23fa8 vs bb526c0f4a40072b51c09ac9dec23fa8
 [ skipped 1 lines ]
 attribSet: dir=f%2fmnt%2fdata/mysql/kbmatrix_snomed_10 exists
 attribSet(dir=f%2fmnt%2fdata/mysql/kbmatrix_snomed_10, file=matrixipi6.MYD, 
 size=4488768644, placeholder=)
 Starting file 3432 (mysql/kbmatrix_snomed_10/matrixipi6.MYI), blkCnt=0, 
 blkSize=2048, remainder=0
 mysql/kbmatrix_snomed_10/matrixipi6.MYI: size doesn't match (12866581504 vs 0)
 Child is sending done
 Got done from child
 Got stats: 16821515 1936615698 1634885492 0 ('errorCnt' = 0,'ExistFileSize' 
 = 0,'ExistFileCnt' = 31,'TotalFileCnt' = 3383,'ExistFileCompSize' = 
 0,'TotalFileSize' = '964645288896')
 finish: removing in-process file mysql/kbmatrix_snomed_10/matrixipi6.MYI
 attribWrite(dir=f%2fmnt%2fdata/mysql/kbmatrix_snomed_10) - 
 /var/lib/backuppc/pc/indexing-vm/new/f%2fmnt%2fdata/fmysql/fkbmatrix_snomed_10/attrib
 attribWrite(dir=f%2fmnt%2fdata/mysql) - 
 /var/lib/backuppc/pc/indexing-vm/new/f%2fmnt%2fdata/fmysql/attrib
 attribWrite(dir=f%2fmnt%2fdata) - 
 /var/lib/backuppc/pc/indexing-vm/new/f%2fmnt%2fdata/attrib
 attribWrite(dir=) - /var/lib/backuppc/pc/indexing-vm/new//attrib
 Child is aborting
 Got exit from child
 Done: 3383 files, 964645288896 bytes
 Executing DumpPostUserCmd: /usr/share/backuppc/bin/endbkpemail.sh prem.kumar 
 0 indexing-vm full DumpPostUserCmd
 Got fatal error during xfer (aborted by signal=ALRM)
 Backup aborted by user signal
 Saving this as a partial backup, replacing the prior one (got 3383 and 3383 
 files versus 0

Re: [BackupPC-users] Error when backing up 1.4TB files

2013-12-25 Thread Sharuzzaman Ahmat Raslan
Hi Prem,

If your apps are not running any DB queries, you can stop the MySQL server
and the files will be static.

BackupPC will be able to finish after that.

Thanks.




On Thu, Dec 26, 2013 at 2:50 PM, Prem squirr...@yahoo.com wrote:

 Hi Sharuzzaman,

 As per my checking, the apps are not really running any DBs. Is there
 anything else to look at in this case?


   --
  *From:* Sharuzzaman Ahmat Raslan sharuzza...@gmail.com
 *To:* Prem squirr...@yahoo.com
 *Cc:* General list for user discussion, questions and support 
 backuppc-users@lists.sourceforge.net
 *Sent:* Thursday, December 26, 2013 12:03 PM

 *Subject:* Re: [BackupPC-users] Error when backing up 1.4TB files

 Hi Prem,

 rsync only work best with static file.

 Even though you mentioned that the file (table) is not being used, but
 usually MySQL always refreshing the file, maybe doing indexing or something.

 That is why proprietary backup software have agent for MySQL to read the
 binary data, and dump is as static file for the backup.

 Thanks.




 On Thu, Dec 26, 2013 at 11:55 AM, Prem squirr...@yahoo.com wrote:

 Hi Sharuzzaman,

 Thank you for the details, but isn't rsync clever enough to dump all the
 files? The thing is, the files are not being used, hence there should
 likely be no activity.

 As for the error below: I checked the troubleshooting page and it's about
 the $Conf{ClientTimeout} setting, which is at the default of 72000
 (around 20 hrs). I have changed it to 20 but have yet to try. I hope no
 other implications arise from doing so.

 (aborted by signal=ALRM)



   --
  *From:* Sharuzzaman Ahmat Raslan sharuzza...@gmail.com
 *To:* Prem squirr...@yahoo.com; General list for user discussion,
 questions and support backuppc-users@lists.sourceforge.net
 *Sent:* Thursday, December 26, 2013 10:54 AM
 *Subject:* Re: [BackupPC-users] Error when backing up 1.4TB files

 Hi Prem,

 MYD and MYI files belong to MySQL. I believe the files keep changing, so
 BackupPC cannot confirm that the size and hash are correct during and
 after the transfer.

 If you really want to back up these binary database files, I would suggest
 creating a static snapshot of the folder/mount point, perhaps using the
 LVM tools, provided the files live inside an LVM logical volume.

 Or you could perform a daily dump of the MySQL tables (which produces a
 text file) and use BackupPC to back up that file. You should exclude the
 MYD and MYI files if you use this method.

 Thanks.




 On Thu, Dec 26, 2013 at 9:17 AM, Prem squirr...@yahoo.com wrote:

  Hi,

 I am trying to backup a 1.4TB folder in Linux but somehow I am getting
 some errors and the backup only completes partially.

 Appreciate any help on this:

  Here are the errors:


 attribSet: dir=f%2fmnt%2fdata/mysql/kbmatrix_snomed_10 exists
 attribSet(dir=f%2fmnt%2fdata/mysql/kbmatrix_snomed_10, file=matrixipi5.MYD, 
 size=4488979624, placeholder=)
 Starting file 3429 (mysql/kbmatrix_snomed_10/matrixipi5.MYI), blkCnt=0, 
 blkSize=2048, remainder=0
 mysql/kbmatrix_snomed_10/matrixipi5.MYI: size doesn't match (12861890560 vs 0)
 mysql/kbmatrix_snomed_10/matrixipi5.MYI got digests 
 6137db179868cdc52c7407c56fafaa5e vs 6137db179868cdc52c7407c56fafaa5e
 [ skipped 1 lines ]
 attribSet: dir=f%2fmnt%2fdata/mysql/kbmatrix_snomed_10 exists
 attribSet(dir=f%2fmnt%2fdata/mysql/kbmatrix_snomed_10, file=matrixipi5.MYI, 
 size=12861890560, placeholder=)
 Starting file 3430 (mysql/kbmatrix_snomed_10/matrixipi5.frm), blkCnt=0, 
 blkSize=2048, remainder=0
 mysql/kbmatrix_snomed_10/matrixipi5.frm: size doesn't match (8782 vs 0)
 mysql/kbmatrix_snomed_10/matrixipi5.frm got digests 
 8d7348ccff2108423f12fca06c39b994 vs 8d7348ccff2108423f12fca06c39b994
 [ skipped 1 lines ]
 attribSet: dir=f%2fmnt%2fdata/mysql/kbmatrix_snomed_10 exists
 attribSet(dir=f%2fmnt%2fdata/mysql/kbmatrix_snomed_10, file=matrixipi5.frm, 
 size=8782, placeholder=)
 Starting file 3431 (mysql/kbmatrix_snomed_10/matrixipi6.MYD), blkCnt=0, 
 blkSize=2048, remainder=0
 mysql/kbmatrix_snomed_10/matrixipi6.MYD: size doesn't match (4488768644 vs 0)
 mysql/kbmatrix_snomed_10/matrixipi6.MYD got digests 
 bb526c0f4a40072b51c09ac9dec23fa8 vs bb526c0f4a40072b51c09ac9dec23fa8
 [ skipped 1 lines ]
 attribSet: dir=f%2fmnt%2fdata/mysql/kbmatrix_snomed_10 exists
 attribSet(dir=f%2fmnt%2fdata/mysql/kbmatrix_snomed_10, file=matrixipi6.MYD, 
 size=4488768644, placeholder=)
 Starting file 3432 (mysql/kbmatrix_snomed_10/matrixipi6.MYI), blkCnt=0, 
 blkSize=2048, remainder=0
 mysql/kbmatrix_snomed_10/matrixipi6.MYI: size doesn't match (12866581504 vs 0)
 Child is sending done
 Got done from child
 Got stats: 16821515 1936615698 1634885492 0 ('errorCnt' = 0,'ExistFileSize' 
 = 0,'ExistFileCnt' = 31,'TotalFileCnt' = 3383,'ExistFileCompSize' = 
 0,'TotalFileSize' = '964645288896')
 finish: removing in-process file mysql/kbmatrix_snomed_10/matrixipi6.MYI
 attribWrite(dir=f%2fmnt%2fdata/mysql/kbmatrix_snomed_10) - 
 /var/lib/backuppc/pc

Re: [BackupPC-users] Keeping a large number of backups? Any drawbacks?

2013-12-05 Thread Sharuzzaman Ahmat Raslan
Hi David,

My concern is that your backup volume is a single point of failure, because
you merged the two disks together like RAID 0. If either disk fails, your
whole volume will be gone.

If you can, make it RAID 1, so that you get 3TB of space together with peace
of mind. You can lose either disk and still have your data.

Regarding how much to retain, I usually set my customers' BackupPC to
4,0,1,1: four weekly full backups, none at the bi-weekly interval, one copy
every 4 weeks (1 month), and one copy every 8 weeks (2 months).

You can try increasing the 4-week count gradually from 1 up to 12, which
should cover every month for a year, and see how much data your system can
handle. Or, if you are daring enough, increase the weekly count up to 52,
which would cover every week of the whole year.
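
In config.pl that retention maps to something like this (a sketch; it
assumes $Conf{FullPeriod} is about 7 days, since each element of the list
keeps fulls at double the interval of the previous one):

$Conf{FullKeepCnt} = [4, 0, 1, 1];
# 4 fulls at 1 x FullPeriod (weekly), 0 at 2 x (bi-weekly),
# 1 at 4 x (about monthly), 1 at 8 x (about every two months)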



On Fri, Dec 6, 2013 at 1:30 AM, David Nelson david.nelso...@gmail.com wrote:

 Hello,

 I built a BackupPC server with good older hardware and installed two WD
 RED 3 TB drives and used  LVM to make them one big ~6 TB volume for
 backupPC. The system boots from a separate disk BTW. I am backing up
 teacher and student files for a small school district. The total backup
 before pooling and compression is around 800GB. My pool size is around 1TB
 with a months worth of daily incremental and full backups on Fridays. I did
 not realize when I built the server how amazingly efficiently BackupPC
 stores the backups. I did not anticipate being able to keep so many
 backups!

 Is there any reason not to just increase the full and incremental keep
  count to something like the whole school year? Another thought is to keep
  the fulls and incrementals for maybe 3 months, then let the incrementals go
 and just keep the weekly fulls? So that's the question, if I have plenty of
 space is there any reason not to just keep a ton of backups?

 Thank you!
 David Nelson


 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/




-- 
Sharuzzaman Ahmat Raslan
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Filesystem?

2013-12-02 Thread Sharuzzaman Ahmat Raslan
My customer is storing BackupPC data on an ext3 filesystem. No known issues
exist for this customer, though I have not performed a performance
comparison with other filesystems.




On Mon, Dec 2, 2013 at 11:00 PM, absolutely_f...@libero.it wrote:

 Hi,
 I'm using BackupPC 3.2.1-4 (official Debian 7 package).
  I'm going to configure external storage (Coraid) in order to back up
  several servers (mostly Linux).
 What kind of file system do you suggest?
 Array is 7 TB large (raid6).
 Thank you very much



 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/




-- 
Sharuzzaman Ahmat Raslan
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC schedule to skip Sat & Sunday from running any backups

2013-11-19 Thread Sharuzzaman Ahmat Raslan
Hi Prem,

Click Edit Config, then click Schedule

Set the blackout period weekdays to 0, 6

Click the link at BlackoutPeriods to learn more


On Wed, Nov 20, 2013 at 11:31 AM, Prem squirr...@yahoo.com wrote:

 Hi,

  I would like to skip running any backups on weekends. How do I set
  this in the blackout or other options?

  I tried 0 & 0 in the start/end of the blackout option but it did not work.

 Appreciate the assistance.


 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/




-- 
Sharuzzaman Ahmat Raslan
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC schedule to skip Sat & Sunday from running any backups

2013-11-19 Thread Sharuzzaman Ahmat Raslan
Hi Prem,

0.1 and 23.9 maybe?

Equivalent to 12:06 AM and 11:54 PM. You will have a gap of about 12
minutes there.
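
Put together in config.pl it would look something like this (a sketch;
weekDays 0 and 6 are Sunday and Saturday):

$Conf{BlackoutPeriods} = [
    {
        hourBegin => 0.1,     # 12:06 AM
        hourEnd   => 23.9,    # 11:54 PM
        weekDays  => [0, 6],  # Sunday and Saturday
    },
];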


On Wed, Nov 20, 2013 at 11:57 AM, Prem squirr...@yahoo.com wrote:

 Hi Sharuzzaman ,

  What value do I put for hourBegin & hourEnd then?

 From: Sharuzzaman Ahmat Raslan sharuzza...@gmail.com
 To: Prem squirr...@yahoo.com; General list for user discussion, questions and support backuppc-users@lists.sourceforge.net
 Sent: Wednesday, November 20, 2013 11:53 AM
 Subject: Re: [BackupPC-users] BackupPC schedule to skip Sat & Sunday from running any backups

 Hi Prem,

 Click Edit Config, then click Schedule

 Set the blackout period weekdays to 0, 6

 Click the link at BlackoutPeriods to learn more


 On Wed, Nov 20, 2013 at 11:31 AM, Prem squirr...@yahoo.com wrote:

 Hi,

  I would like to skip running any backups on weekends. How do I set
  this in the blackout or other options?

  I tried 0 & 0 in the start/end of the blackout option but it did not work.

 Appreciate the assistance.


 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/




 --
 Sharuzzaman Ahmat Raslan





-- 
Sharuzzaman Ahmat Raslan
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] NT_STATUS_ACCOUNT_LOCKED_OUT

2013-11-15 Thread Sharuzzaman Ahmat Raslan
How about unlocking the user in the domain, and setting its password to
never expire?

That's how I configure it in my customers' environments.


On Fri, Nov 15, 2013 at 9:32 PM, Joerg Hollandmoritz
jh...@bgc-jena.mpg.de wrote:

 Hi,

  A problem with one of the latest samba packages:
  The permanent error "samba authentication FAILED with error
  NT_STATUS_ACCOUNT_LOCKED_OUT" occurs if two conditions are present:
  1) a local Windows 7 account has the same name as a domain account,
  but a different password
  2) the Windows 7 computer came up with a fresh boot.

  The reason may be an update of the samba package:
  "It appears there is something wrong with the latest samba package...
  the account I was using for this windows backup client was local and the
  one on the domain with the same name really had expired."

 Does anybody know a solution or a fix for this bug ?


 Best,

 Joerg Hollandmoritz



 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/




-- 
Sharuzzaman Ahmat Raslan
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How to choose destination when download backup to my pc?

2013-11-03 Thread Sharuzzaman Ahmat Raslan
I believe that depends on your web browser settings.

What browser are you using?

Thanks.



On Mon, Nov 4, 2013 at 11:49 AM, 杨华杰 yhj...@gmail.com wrote:

  As per the title:

  I was downloading a backup to my Ubuntu desktop from BackupPC and found the
  downloaded file always goes to my Download folder. Shouldn't it prompt me
  to choose a folder to store it in?



 Regards,
 Hua Jie


 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/




-- 
Sharuzzaman Ahmat Raslan
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Disk space used far higher than reported pool size

2013-10-31 Thread Sharuzzaman Ahmat Raslan
On Fri, Nov 1, 2013 at 1:33 AM, Craig O'Brien cobr...@fishman.com wrote:

 messages-20131006:Sep 30 13:53:24 servername kernel: BackupPC_dump[15365]:
 segfault at a80 ip 00310f695002 sp 7fff438c9770 error 4 in
 libperl.so[310f60+162000]


This error shows a BackupPC_dump segfault pointing to libperl.so.

How did you install your BackupPC? From source or from RPM?

If from RPM, which repo did you use?

Thanks
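
To check where the packages came from, something like this should work (the
libperl.so path is a typical CentOS/RHEL location and may differ on your
system):

rpm -qa | grep -i backuppc                  # is BackupPC from a package?
rpm -qf /usr/lib64/perl5/CORE/libperl.so    # which package owns libperl.so
yum list installed BackupPC perl            # repo column shows @base, @epel, ...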

-- 
Sharuzzaman Ahmat Raslan
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] rsyncd full backup

2013-10-31 Thread Sharuzzaman Ahmat Raslan
Hi Timothy,

I got the number by observing the output of iotop while the file transfer is
running. Also, on the BackupPC host summary page, the average transfer rate
for a full backup is around 3MB/s.

It could also be a network bottleneck, as the customer is using a 100Mbps
switch with around 80 PCs, not including network printers and servers. All
in, there should be around 100 network devices.

Any idea how to properly troubleshoot a network bottleneck? My skills are a
little lacking in that area.

Thanks.
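
One way to separate the network from the disk (a sketch, assuming iperf and
ethtool are installed; the host name is an example):

ethtool eth0 | grep -E 'Speed|Duplex'   # did the NIC negotiate 100Mb full duplex?

iperf -s                        # on the BackupPC server
iperf -c backuppc.example.com   # on the client; measures raw TCP throughput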



On Fri, Nov 1, 2013 at 2:12 AM, Timothy J Massey tmas...@obscorp.com wrote:

 Sharuzzaman Ahmat Raslan sharuzza...@gmail.com wrote on 10/30/2013
 10:06:18 PM:

  Hi Holger,

  Based on short session of troubleshooting, I believe the machine
  actually suffer from low I/O speed to the disk. Average read is
  about 3 MB/s, which I considered slow for a SATA disk in IDE emulation.

 *REAL* slow:  I consider anything under 20MB/s slow.

 But where did that number come from?  The pattern of reads will make a
 *huge* difference...

  I'm planning to suggest to the customer to have a RAID 1 setup to
  increase the I/O speed. I'm looking at possibilities to speed things
  up by not having to change the overall setup.

 I think you might want to have a better idea of what is going on first
 before you just start throwing hardware at it.  If your numbers were
 correct but still too slow I'd say sure.  But your numbers are *broken*
 wrong.  You *might* fix your problem (by accident!) by throwing away some
 pieces and adding others, but you might not, too.  Then you've got a client
 that just spent a bunch of money for nothing...

 Tim Massey

 Out of the Box Solutions, Inc.
 Creative IT Solutions Made Simple!
 http://www.OutOfTheBoxSolutions.com
 tmas...@obscorp.com
 22108 Harper Ave.
 St. Clair Shores, MI 48080
 Office: (800)750-4OBS (4627)
 Cell: (586)945-8796


 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/




-- 
Sharuzzaman Ahmat Raslan
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Disk space used far higher than reported pool size

2013-10-31 Thread Sharuzzaman Ahmat Raslan
In my experience, segfaults in libraries are usually caused by installing
them from mixed sources.

For example, when I install BackupPC on CentOS, I use the one in the EPEL
repo.

I make sure that all the libraries (Perl and others) come only from the
CentOS base repo and nowhere else, as installing them from somewhere else
might cause incompatibilities.

In fact, sometimes the EPEL repo also provides Perl libraries that conflict
with the CentOS base repo, but I just ignore them and stick to the base repo.
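
One way to enforce that is to exclude Perl packages from EPEL in its repo
definition (a sketch; the file name matches a typical EPEL install):

# /etc/yum.repos.d/epel.repo (excerpt)
[epel]
name=Extra Packages for Enterprise Linux
enabled=1
exclude=perl*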




On Fri, Nov 1, 2013 at 3:57 AM, Les Mikesell lesmikes...@gmail.com wrote:

 On Thu, Oct 31, 2013 at 2:20 PM, Holger Parplies wb...@parplies.de
 wrote:
 
  That doesn't explain your situation, but it still might be something to
 think
  about (and we might be seeing one problem on top of and as result of
 another).
  I agree with Jeffrey - an Unable to read ... error *without* a
  preceding
  Can't write len=... to .../RStmp sounds like a mismatch between file
 length
  according to attrib file and result of decompression of compressed file -
  probably caused by corruption of the compressed file (or the attrib file,
  though unlikely, because the size is not way off).

 I think that segfault in a perl process needs to be tracked down
 before expecting anything else to make sense.  Either bad RAM or
 mismatching perl libs could break about anything else.

 --
Les Mikesell
 lesmikes...@gmail.com


 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/




-- 
Sharuzzaman Ahmat Raslan
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] rsyncd full backup

2013-10-30 Thread Sharuzzaman Ahmat Raslan
Hi Holger,

Based on a short troubleshooting session, I believe the machine actually
suffers from low I/O speed to the disk. The average read is about 3 MB/s,
which I consider slow for a SATA disk in IDE emulation.

I'm planning to suggest that the customer move to a RAID 1 setup to increase
the I/O speed. I'm also looking at possibilities to speed things up without
changing the overall setup.

Thank you for providing new insights to me regarding rsync. Glad to learn
new things :)

Thanks.
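
To put a number on the disk itself, independent of BackupPC (a sketch; the
device and path are examples, and the dd test writes a 1 GB file you should
remove afterwards):

hdparm -t /dev/sda    # sequential read speed of the raw device

# sequential write to the pool filesystem, bypassing the page cache
dd if=/dev/zero of=/var/lib/backuppc/ddtest bs=1M count=1024 oflag=direct
rm /var/lib/backuppc/ddtest

iostat -x 5           # live per-device utilization (from the sysstat package)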


On Thu, Oct 31, 2013 at 5:15 AM, Holger Parplies wb...@parplies.de wrote:

 Hi,

 Adam Goryachev wrote on 2013-10-29 15:29:42 +1100 [Re: [BackupPC-users]
 rsyncd full backup]:
  On 29/10/13 15:14, Sharuzzaman Ahmat Raslan wrote:
   [...]
  On Tue, Oct 29, 2013 at 11:33 AM, Les Mikesell lesmikes...@gmail.com wrote:
  On Mon, Oct 28, 2013 at 10:08 PM, Sharuzzaman Ahmat Raslan
  sharuzza...@gmail.com wrote:
   [...]
   Initially, the backup transport is SMB, but recently, I noticed
   a lot of machine backup (full and incremental) is not able to
   complete in 8 hours, due to large number of file, and big file
 size.
  
   Last week, I installed DeltaCopy (rsycnd server for Windows) on
   one machine, and change the backup transport to rysncd. The backup
   runs well.
  
   But today, I noticed, when BackupPC is running a full backup on
   the machine that have rsyncd, it still takes 8 hours to do full
   backup. [...]
  Rsync will only transfer the changed data, but in full runs the
  contents of the files are read at both ends and compared with block
  checksums, so it takes some time. [...]
  
   In essence, if I enable --checksum-seed=32761, then the rsync full
   backup will be faster?
 
  Yes, the third full backup after you enable that option will be faster
  *IF* the slow speed is due to the backup server needing to decompress
  the file and check the content.

 let me stress that again: don't expect a speedup on the *first* full backup
 after you enable that option. In my limited opinion (I haven't compared
 speeds
 because I don't have any issues with slow backups), the *second* full
 backup
 should be faster, as you have pre-existing full backups, i.e. the next full
 can add the checksums. In any case, the *third* full backup should
 hopefully
 be faster :-).

  In the case that your backup client has really slow disk, then there is
  nothing you can do, except maybe modify backuppc for full backups to not
  send the ignore-times option to rsync (ie, every backup is an
  incremental). Or, of course, upgrade the client to improve performance.

 Actually, it is worth noting that aside from a possible speed improvement
 the
 switch from smb to rsync(d) gives you far more precise *incremental*
 backups,
 so it might be an option to increase FullPeriod. This may transfer more
 data
 (because the delta is always relative to the reference backup - normally
 the
 previous full backup - and not to the previous incremental backup), but you
 can always explore the IncrLevels setting. So, while you might not speed up
 the full runs, you might get away with doing them less often. I would not
 recommend patching the ignore-times option away altogether.

  But Adam's point is correct: you need to find out where the problem is,
 before
 you can fix it. While you might be able to find the problem by trying out
 fixes, that might not be the most efficient way :-).

 Regards,
 Holger


 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/




-- 
Sharuzzaman Ahmat Raslan
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Disk space used far higher than reported pool size

2013-10-29 Thread Sharuzzaman Ahmat Raslan
Have you removed some PCs from the backup list?

If you have, the folder for that PC is still present under
/backup/pc/<pc name>. You have to remove the folder manually.

I believe that can cause high disk usage, as those files are not linked
into the pool.

Note at the bottom of Edit Hosts:

To delete a host, hit the Delete button. For Add, Delete, and configuration
copy, changes don't take effect until you select Save. None of the deleted
host's backups will be removed, so if you accidentally delete a host, simply
re-add it. To completely remove a host's backups, you need to manually
remove the files below /var/lib/backuppc/pc/HOST.


Thanks.
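
To reclaim the space (the host name is an example; run as the backuppc user
and double-check the path before deleting, since the exact location varies
by distribution):

rm -rf /var/lib/backuppc/pc/oldhost

# ask the server to run the nightly job, which removes pool files that no
# longer have any references
/usr/share/BackupPC/bin/BackupPC_serverMesg BackupPC_nightly run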




On Wed, Oct 30, 2013 at 8:21 AM, Craig O'Brien cobr...@fishman.com wrote:

 The folder /backup is the root of the disk. I mounted the disk there,
 doing the ls -l /backup showed all the root folders on the disk. Perhaps
 there is something going on with the PC folders, as the lost+found and
 trash folders are both empty.

 I'm not sure how I can go about determining if a particular backup is
 using the pool or just storing the files in the PC folder. What's the best
 way to check if a given backup set is represented in the pool or not? Would
 knowing the size of all the pc folders help narrow it down?

 I'm not sure if this is the best way to check the hard linking, but here's
 a test I thought might be helpful. I did this command to see if a common
 file in these backups are pointing to the same inodes.

 bash-4.1$ ls -i /backup/pc/*/*/ffileshare/fWindows/fexplorer.exe

 The output is long so I'll give a snippet:

 bash-4.1$ ls -i /backup/pc/*/*/ffileshare/fWindows/fexplorer.exe
 635979167 /backup/pc/120p1m1/75/ffileshare/fWindows/fexplorer.exe
  646452561 /backup/pc/7qk56d1/79/ffileshare/fWindows/fexplorer.exe
 635979167 /backup/pc/120p1m1/76/ffileshare/fWindows/fexplorer.exe
  646452561 /backup/pc/7qk56d1/80/ffileshare/fWindows/fexplorer.exe
 635979167 /backup/pc/327kkn1/87/ffileshare/fWindows/fexplorer.exe
  646452561 /backup/pc/7qk56d1/81/ffileshare/fWindows/fexplorer.exe
 635979167 /backup/pc/327kkn1/88/ffileshare/fWindows/fexplorer.exe
  646452561 /backup/pc/7qk56d1/82/ffileshare/fWindows/fexplorer.exe

  And it continued like that, which shows me that a common file points to
  the same inodes across these backups, which tells me the pool should be
  working in theory. (I'm assuming the 2 variants account for different
  versions of Windows.)

 So I'm pretty stumped at how to figure out what happened to it.


 Regards,
 Craig


 On Tue, Oct 29, 2013 at 6:07 PM, backu...@kosowsky.org wrote:

 Les Mikesell wrote at about 16:51:12 -0500 on Tuesday, October 29, 2013:
   On Tue, Oct 29, 2013 at 4:30 PM, Timothy J Massey tmas...@obscorp.com
 wrote:
   
   
Check lost+found and trash while you're at it and see what's in
 there.  They should both be empty.
   
I'm with Jeff:  I think that you have multiple PC trees that are not
 part of the pool.  How you managed that I'm not sure.  But you need to find
 those files and clean them up.  Start with Jeff's command and go from there.
  
   This could happen if the backups were originally on a different
   filesystem and were copied over without preserving the pool hardlinks.
For example if you rsync an individual pc directory into place,
   subsequent rsync runs will link against those copies for existing
   files but will only make the pool links for new/changed files.
  
   --

 It also can happen if you have filesystems with flaky hard linking --
 I once had that issue with a bad user-space nfs module.


 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/




 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/




-- 
Sharuzzaman Ahmat Raslan

[BackupPC-users] rsyncd full backup

2013-10-28 Thread Sharuzzaman Ahmat Raslan
Hi,

I have implemented BackupPC for my customer.

Initially, the backup transport was SMB, but recently I noticed that a lot
of machine backups (full and incremental) were not able to complete in 8
hours, due to the large number of files and big file sizes.

Last week, I installed DeltaCopy (an rsyncd server for Windows) on one
machine and changed the backup transport to rsyncd. The backup runs well.

But today I noticed that when BackupPC runs a full backup on the machine
that has rsyncd, it still takes 8 hours.

I find this weird, because rsync is supposed to compare the full backup with
the previous full backup (or previous full + incrementals), so that only
modified files are transferred.

That was my expectation when I planned to use rsyncd.

Any explanation for why BackupPC is not behaving this way? Any configuration
I can change to make it work the way I expect?

Thanks.
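
As the replies above discuss, checksum caching is the relevant knob. In
config.pl it looks something like this (a sketch based on the BackupPC 3.x
documentation; keep the existing default arguments and just append the
option to both lists):

$Conf{RsyncArgs} = [
    # ... the existing default arguments ...
    '--checksum-seed=32761',
];
$Conf{RsyncRestoreArgs} = [
    # ... the existing default arguments ...
    '--checksum-seed=32761',
];

Note the speedup only shows up from the second or third full backup after
enabling it, once the cached checksums have been written.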

-- 
Sharuzzaman Ahmat Raslan
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/