Re: [BackupPC-users] What's wrong - missing files and folders

2021-06-12 Thread Alexander Moisseev via BackupPC-users

On 12.06.2021 1:20, Taste-Of-IT wrote:

Hi,
I have BPC 4.4.0 running under Debian. I noticed that some files and folders on
one Debian machine weren't backed up. I checked the config, but I can't find the
error. Perhaps someone can help.

I want to back up all files and folders under /etc/pve/, but the directory
/etc/pve is empty in the backup.

I use rsync as the root user. Do you have any idea why files and folders are
missing under /etc/pve?


Is there any chance that you have a file system mounted at /etc/pve/ and are
calling rsync with the --one-file-system option?
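
A quick way to check (the command and config below are a hypothetical sketch):

mount | grep /etc/pve

On Proxmox hosts /etc/pve is typically a FUSE mount, so if --one-file-system is
in $Conf{RsyncArgs}, either remove it or back the directory up as its own
share, e.g.:

$Conf{RsyncShareName} = [
  '/',
  '/etc/pve'
];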




Re: [BackupPC-users] ZFS very slow with BackupPC_refCountUpdate

2021-04-27 Thread Alexander Moisseev via BackupPC-users

On 27.04.2021 11:31, Ghislain Adnet wrote:


so we have lots of z_rd_int, but it's
/usr/share/backuppc/bin/BackupPC_refCountUpdate that does all the I/O.



Do you still have a BackupPC v3 pool? This may be related to hardlinks.




Re: [BackupPC-users] cygwin-rsyncd 'rsyncd.conf' syntax for Windows Second Drive

2021-04-03 Thread Alexander Moisseev via BackupPC-users

On 03.04.2021 17:32, Tim Evans wrote:

The cygwin-rsyncd package includes an example rsyncd.conf file for the entire
Windows C: drive (i.e., "/cygdrive/c/").

What is the syntax for a second Windows drive? Apparently "/cygdrive/e/" isn't
right; nor is plain "/e/".


In Windows, by default (an administrator can reassign drive letters):
  the 1st logical drive is C:
  the 2nd logical drive is D:
  the 3rd logical drive is E:

I guess that if the drive letters are assigned by default, the path for the
second logical drive should be "/cygdrive/d".




Re: [BackupPC-users] Adding a max and warning line to the backup pool size?

2021-03-14 Thread Alexander Moisseev via BackupPC-users

On 14.03.2021 15:19, Sorin Srbu wrote:

On Sat, 2021-03-13 at 21:20 +0300, Alexander Moisseev via BackupPC-users
wrote:

On 13.03.2021 19:24, Sorin Srbu wrote:

Is it possible to add a red max line and a yellow warning line to the BackupPC
pool size chart, read from df or the OS partition size?


It is easy to draw horizontal lines on the chart, but the file system size is
hard to guess for every OS and file system type.


I realize partition sizes rarely change.

Could the size be set statically instead, and changed as needed when the
disk arrays, pools, or partitions are increased?


Another option for getting the file system size would be a user-configurable command.


What are some keywords to look for; "rrd add static lines" or some such? BPC
does use rrd for this, right?


You need to add an HRULE instruction.

https://oss.oetiker.ch/rrdtool/doc/rrdgraph_graph.en.html
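
A rough sketch of what that could look like in an rrdtool graph command (the
RRD file, DS name, and threshold values are made up for illustration; HRULE
values must be in the same units as the plotted data, GB here):

rrdtool graph pool.png \
    DEF:pool=pool.rrd:poolGB:AVERAGE \
    AREA:pool#95B8DB:"Pool size (GB)" \
    HRULE:12000#FFFF00:"warning (12 TB)" \
    HRULE:14000#FF0000:"max (14 TB)"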





Re: [BackupPC-users] Adding a max and warning line to the backup pool size?

2021-03-13 Thread Alexander Moisseev via BackupPC-users

On 13.03.2021 19:24, Sorin Srbu wrote:

Is it possible to add a red max line and a yellow warning line to the BackupPC
pool size chart, read from df or the OS partition size?


It is easy to draw horizontal lines on the chart, but the file system size is
hard to guess for every OS and file system type.




Re: [BackupPC-users] btrfs questions

2021-03-06 Thread Alexander Moisseev via BackupPC-users

On 06.03.2021 19:28, Paul Leyland wrote:

But BackupPC works just fine on a BSD-licensed mainline kernel.

On 06/03/2021 14:46, Richard Shaw wrote:

On Sat, Mar 6, 2021 at 8:26 AM Paul Leyland <paul.leyl...@gmail.com> wrote:


Very happy with ZFS myself. YMMV.


If only they would move to a FOSS license instead of CDDL, it could be included
in the mainline kernel.


Moreover, BackupPC works fine on FreeBSD, which has had in-kernel ZFS support
for years.




Re: [BackupPC-users] Vanished file

2021-01-11 Thread Alexander Moisseev via BackupPC-users

On 11.01.2021 3:07, backu...@kosowsky.org wrote:

Pete Geenhuizen wrote at about 14:38:17 -0500 on Sunday, January 10, 2021:
  >
  > /usr/share/BackupPC/bin/BackupPC_zcat
  > ./attrib_d4c95788f1e2e67ddadd2e2ff26e0fc6 |wc
  >    0   0   0
  >
  > /usr/share/BackupPC/bin/BackupPC_attribPrint
  > /etc/alternatives/froot/fetc/falternatives/attrib
  > $attrib = {
  > };
  > All the attrib_ files that I find are 0 length.

They all should be 0 length - they just reference a pool file where
the data is stored.



You should use the BackupPC_attribPrint utility to get the metadata:

# su -m backuppc -c 'BackupPC_attribPrint ./attrib_d4c95788f1e2e67ddadd2e2ff26e0fc6' | grep -A 6 /attrib




Re: [BackupPC-users] Vanished file

2021-01-09 Thread Alexander Moisseev via BackupPC-users

This seems to be the same issue as https://github.com/backuppc/rsync-bpc/issues/18




Re: [BackupPC-users] Return to BackupPC

2021-01-07 Thread Alexander Moisseev via BackupPC-users

On 07.01.2021 14:39, Sorin Srbu wrote:

The pretty pool graphs from BPC 3.3 are missing in BPC 4.4.0.
Is this expected or will they show up when a few backups have been done?


They are generated by BackupPC_nightly; just wait a couple of days.




Re: [BackupPC-users] Incorrect reported pool size, and confusing queue reporting

2020-02-29 Thread Alexander Moisseev via BackupPC-users

On 29.02.2020 1:13, Matthew Pounsett wrote:



On Fri, 28 Feb 2020 at 10:54, Alexander Moisseev <mois...@mezonplus.ru> wrote:

 >
 >   It's different, but still way off.  Reporting 269G when actual usage 
is over a terabyte. But if it's not updating very often that makes a bit more 
sense.  The fact that the graph resolution is hourly is I think part of what makes 
this confusing.  It's just routinely updated with out-of-date information?
Why do you think the resolution is hourly? It is daily. The RRD step is 
exactly 86400 seconds.




The data may be updated daily, but the resolution of the 4-week graph is 
hourly; the graph is updated hourly,

No. The images are generated when the page loads.


and each horizontal pixel of the area graph is an hour of time.

No. Four weeks is 672 hours, but the graph is only 600 pixels wide.


If it were daily I'd only have a one or two pixel wide graph, instead of what I 
have.

Your assumption that the pixel resolution always matches the data resolution is wrong.


The lower (52 week) graph has a resolution of one day.

Actually, both graphs have the same data resolution of one day; they just have
different pixel resolutions.

I am not trying to convince you that this is good or bad, just explaining what
you see on the graphs.




Re: [BackupPC-users] Incorrect reported pool size, and confusing queue reporting

2020-02-28 Thread Alexander Moisseev via BackupPC-users

On 28.02.2020 16:47, Matthew Pounsett wrote:



On Fri, 28 Feb 2020 at 01:55, Alexander Moisseev via BackupPC-users
<backuppc-users@lists.sourceforge.net> wrote:

 >       o Pool is 1.92GB comprising 133570 files and 4369 directories (as 
of 2020-02-27 01:00),
Probably at 01:00 BackupPC was running only for 20 minutes. That is why it 
was only 1.92GB at that moment.


Ah, I missed that.  So is it only updating its idea of how big the pool is once 
a day?

Yes, as part of the nightly job, after pool cleanup.



 >
 > I have no idea if this is related to the misreported pool size, but it 
suggests to me even more that something somewhere is stuck.  What should I be 
looking at?
 >
You should be looking at the time stamps. Wait for the next nightly run (01:00
tomorrow) and recheck the pool usage statistics.


  It's different, but still way off.  Reporting 269G when actual usage is over 
a terabyte. But if it's not updating very often that makes a bit more sense.  
The fact that the graph resolution is hourly is I think part of what makes this 
confusing.  It's just routinely updated with out-of-date information?

Why do you think the resolution is hourly? It is daily. The RRD step is exactly 
86400 seconds.

You are trying to compare the pool size and file system usage, but these are
two different things. If you want the pool size, you should use du instead of
df.

df output should match these numbers:
o Pool file system was recently at 4% (2020-02-27 21:24), today's max 
is 7% (2020-02-27 11:30) and yesterday's max was 1%.
but as you can see, they change during the day. 7% is about a terabyte, btw.





Re: [BackupPC-users] Incorrect reported pool size, and confusing queue reporting

2020-02-27 Thread Alexander Moisseev via BackupPC-users

On 28.02.2020 0:32, Matthew Pounsett wrote:


I've got a new install of BackupPC 3.3.2 from the Debian 10 apt repository. My
/var/lib/backuppc volume is 15TB, and BackupPC currently has ~480GB of data on
it. However, the front page graph and "Other info" summary both say that the
pool is only 1.93G, and have done since the server came online about 16 hours ago.

How does it calculate what's currently in the pool?  I'm wondering if something 
somewhere has got stuck.

% df -h /var/lib/backuppc
Filesystem      Size  Used Avail Use% Mounted on
backups          15T  481G   14T   4% /var/lib/backuppc

  * The servers PID is 31493, on host bk, version 3.3.2, started at 2020-02-27 
00:40.
  * This status was generated at 2020-02-27 21:28.
  * The configuration was last loaded at 2020-02-27 19:00.
  * PCs will be next queued at 2020-02-27 22:00.
  * Other info:
  o 22 pending backup requests from last scheduled wakeup,
  o 0 pending user backup requests,
  o 0 pending command requests,
  o Pool is 1.92GB comprising 133570 files and 4369 directories (as of 
2020-02-27 01:00),

Probably BackupPC had been running for only 20 minutes at 01:00. That is why
the pool was only 1.92GB at that moment.


  o Pool hashing gives 4 repeated files with longest chain 1,
  o Nightly cleanup removed 15 files of size 0.00GB (around 2020-02-27 
01:00),
  o Pool file system was recently at 4% (2020-02-27 21:24), today's max is 
7% (2020-02-27 11:30) and yesterday's max was 1%.

15TB * 0.04 = 0.6TB, so it is likely that the number was correct at that moment.






I have no idea if this is related to the misreported pool size, but it suggests 
to me even more that something somewhere is stuck.  What should I be looking at?


You should be looking at the time stamps. Wait for the next nightly run (01:00
tomorrow) and recheck the pool usage statistics.




Re: [BackupPC-users] Backup aborted (No files dumped for share....)

2020-02-07 Thread Alexander Moisseev via BackupPC-users

On 07.02.2020 14:16, Angelo Machils wrote:

tarExtract: Done: 0 errors, 4 filesExist, 34564096 sizeExist, 4888585 
sizeExistComp, 4 filesTotal, 34564096 sizeTotal
Got fatal error during xfer (No files dumped for share DatabaseBackup)
Backup aborted (No files dumped for share DatabaseBackup)
Not saving this as a partial backup since it has fewer files than the prior one 
(got 4 and 0 files versus 4)

I’m running version 3.3.1

Searching for this error came up with quite a few results, but none seem to 
apply to my case and it seems that the files have been transferred.



You are likely using this ancient BackupPC version with a modern smbclient.
You need to upgrade BackupPC (or if it's got sentimental value, downgrade 
smbclient).

https://github.com/backuppc/backuppc/issues/169





Re: [BackupPC-users] Recommended settings for BackupPC v4 on ZFS

2019-09-11 Thread Alexander Moisseev via BackupPC-users

On 11.09.2019 18:19, Robert Trevellyan wrote:

I'm letting ZFS do the compression (using the default of LZ4) with BackupPC 
handling deduplication. I think you'll find a reasonable consensus that ZFS 
compression is always a win for storage space (it will store un-compressible 
data unmodified), whereas ZFS deduplication is best avoided in most cases, 
mostly due to its high memory usage. It's possible that BackupPC compression 
would be tighter than LZ4,


Actually, on ZFS you are not limited to LZ4, but ZFS compresses each file block
independently; that is why in most cases BackupPC compression is tighter,
though it depends on the data.

We recently moved from a 77.96G cpool to an uncompressed pool on a compressed
file system. Now it consumes 81.2G, so there is not much difference.

# zfs get compression,compressratio,recordsize,referenced zroot/bpc/pool
NAME            PROPERTY       VALUE   SOURCE
zroot/bpc/pool  compression    gzip-3  local
zroot/bpc/pool  compressratio  3.87x   -
zroot/bpc/pool  recordsize     128K    default
zroot/bpc/pool  referenced     81.2G   -
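
For reference, the compression setting shown above is enabled with a single
command (note that ZFS only compresses blocks written after the property is
set):

# zfs set compression=gzip-3 zroot/bpc/pool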




Re: [BackupPC-users] Admin groups

2018-11-08 Thread Alexander Moisseev via BackupPC-users

On 07.11.18 20:52, Jaime Fenton wrote:

Thanks for your question Alexander.

I agree, one group makes sense and I'm working towards that goal. However, my
company is large and it may take some time before approval of a single group
goes through, and I have a large number of users who need to be admins to
control BackupPC. For now I can add them individually to the named users, but
that's why I was asking whether more than one user group could be in the admin
user group section.

I'll muddle through until the permission comes through, thanks for clarifying.



I've opened a PR that allows using a space-separated list of multiple groups.
https://github.com/backuppc/backuppc/pull/235




Re: [BackupPC-users] Admin groups

2018-11-06 Thread Alexander Moisseev via BackupPC-users

On 07.11.18 1:35, Jaime Fenton wrote:

Hi there,

Is there a way to have multiple security groups have admin access (through 
$Conf{CgiAdminUserGroup}) or is it limited to only one group?



$Conf{CgiAdminUserGroup} is limited to a single group. That could be changed,
but why do you need multiple groups?
Usually you do not have many privileged users, so creating a separate Unix
group should not be a problem, and it seems safer.




Re: [BackupPC-users] How to distinguish from which BPC-server administrative emails are coming (Was: BackupPC administrative attention needed)

2018-11-01 Thread Alexander Moisseev via BackupPC-users

On 01.11.18 10:16, Sorin Srbu wrote:

Hi and thanks for the feedback!

Tagging $Conf{EMailAdminUserName} won't work for me; unfortunately, the central
SMTP server bounces mail if I choose anything other than a real, existing mail
address.

I've been thinking about the headers a bit since posting; I'll look into that
for starters.

In the long run, making the subject line a configurable parameter sounds like a 
very good idea though!
Could you please?



@picklesrein has proposed adding $Conf{CgiURL} to the report body:
https://github.com/moisseev/BackupPC_report/pull/2
I think it would be convenient to have a link to the CGI in admin emails as well.





Re: [BackupPC-users] Host summary page, Sorting is weird

2018-10-24 Thread Alexander Moisseev via BackupPC-users

On 24.10.18 9:42, Sorin Srbu wrote:

Hmm... I don't see any difference.

Am in the right spot; /usr/share/BackupPC/html/sorttable.js?
I can't find any other sorttable.js.

The Chrome I use is Version 70.0.3538.67 (Official Build) (64-bit)


That is strange. It works for me on exactly the same Chrome version.
The file may be cached on the server or client side. Can you restart the HTTP
daemon and reload the page with Shift+F5 or Ctrl+Shift+R?




Re: [BackupPC-users] Host summary page, Sorting is weird

2018-10-23 Thread Alexander Moisseev via BackupPC-users

On 23.10.2018 9:19, Sorin Srbu wrote:

Ah, I might have found the problem.

I discovered the below using Google Chrome.
On a hunch I tried Firefox.

Firefox sorts properly. Seems this is a web browser problem then.



Sorin, would you test the fix?

https://github.com/backuppc/backuppc/commit/a81dd75ceb498ffc9b1ba19a955a5b14c675c4f0




Re: [BackupPC-users] Copying backups to other host

2018-08-24 Thread Alexander Moisseev via BackupPC-users

On 22.08.2018 21:15, Nino Bosteels wrote:


I read that instead of dd you could use dump, which actually is aware of free
disk space for ext3/ext4. Any ideas?


Dump is not an option for you. You will be able to make the dump, since it
copies data at the block level, but restore works at the file level, so it
will take ages to restore such a large pool.



Re: [BackupPC-users] How to change GUI appearance (was: Backup stopped working)

2018-06-11 Thread Alexander Moisseev via BackupPC-users

On 11.06.2018 17:26, Sriram ARUN EXCELLO wrote:

Hi, this is the first time I am going to use BackupPC in my office. I need to
change GUI settings such as the logo, font, labels, and background colour.
Kindly guide me, please.


Please do not hijack threads.

The logo is here: 
https://github.com/backuppc/backuppc/blob/master/images/logo.gif

Link to it is here:
https://github.com/backuppc/backuppc/blob/d7588f3a5e67be7bfb6867758c3335bde09cdf4c/lib/BackupPC/CGI/Lib.pm#L474

Everything else you can change in the CSS files:
https://github.com/backuppc/backuppc/tree/master/conf
Create your own CSS file (copy and modify a stock one) and change
$Conf{CgiCSSFile} accordingly.
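
A minimal sketch (the file name is hypothetical; put the file where the stock
CSS files linked above are installed):

$Conf{CgiCSSFile} = 'myCompany.css';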



Re: [BackupPC-users] Serious error: last backup ... directory doesn't exist!!! - reason found

2018-03-08 Thread Alexander Moisseev via BackupPC-users

On 3/8/2018 6:59 PM, f...@igh.de wrote:

Craig,

again I return to my issue "No space left on device".

Meanwhile I found the reason: the partition ran out of inodes. As you
wrote under "How much disk space do I need?" one has to have "plenty
of inodes". But what does that mean?

May I ask the following:

- in the "General Server Information" you give some statistical
   information about disk usage; would it be a good idea also to give
   information about inode consumption?



It is a really good idea, but obtaining inode consumption with the df command
is complicated, since df returns different sets of columns on different OSes.
I think the simplest way is to replace the CheckFileSystemUsage subroutine with
the Filesys::DiskSpace module.
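
A minimal sketch of the idea, assuming the df interface documented for
Filesys::DiskSpace (untested against BackupPC itself):

use Filesys::DiskSpace;

# df here is Filesys::DiskSpace::df, not the external df utility
my ($fsType, $fsDesc, $used, $avail, $fUsed, $fAvail) = df('/var/lib/backuppc');
printf "inode usage: %.0f%%\n", 100 * $fUsed / ($fUsed + $fAvail);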

Craig, is it ok to introduce another dependency?



Re: [BackupPC-users] No files dumped for share for localhost

2018-02-06 Thread Alexander Moisseev via BackupPC-users

On 06.02.2018 9:26, RAKOTONDRAINIBE Harimino Lalatiana wrote:

Hi Alexander ,

I did what you said, so I created three modules in rsyncd.conf:

[boot]
path = /boot
comment = all boot files to be backupc

[var]
path = /var
comment = all files from var

[all]
path = /
comment = it's a test


And for my RsyncShareName :

$Conf{RsyncShareName} = [
   'all', 'boot', 'var'
];


But the issue still persists and I don't know what to do next.



Hi Harimino,

Run BackupPC_dump again, check its output, and check the logs.
I guess that error is gone and you have a different one now.



Re: [BackupPC-users] Handle Backup as inconsistent on xfer error

2018-02-05 Thread Alexander Moisseev via BackupPC-users

On 29.01.2018 12:32, Zielke, Julian, NLI wrote:


Talking about the point of not saving the rest of the archives: there is AFAIK
no current way to tell BackupPC to stop the backup on this share, execute
post-share with xferOK = 0, and continue with the rest of the backups. So we'll
keep it that way. Maybe this could be a nice feature to implement: skip a share
on xfer errors and return xferOK = 0. That would be a perfect way to have a
solution with both partial backups AND notification.


On second thought, skipping certain shares is overkill. If we want to treat
shares differently, we should probably just create different hosts for them.


I suggest creating an option in the configuration of the tar xfer method (bool
/ checkbox) so the user can decide whether to abort on xfer errors or not.


I agree with you. It would be nice to add such a configuration option, but for
every xfer type, not only tar.


The options could be something like:

$Conf{xferErrsAreFatal}: if any share had xfer errors, consider the dump bad
and mark it as partial
  0 - no (default)
  1 - yes
  n - yes, if $stat{xferErrCnt} >= n

$Conf{abortOnXferErrs}: abort the dump instead of marking it partial if the
$Conf{xferErrsAreFatal} condition is met.



Re: [BackupPC-users] No files dumped for share for localhost

2018-02-05 Thread Alexander Moisseev via BackupPC-users

On 05.02.2018 14:38, RAKOTONDRAINIBE Harimino Lalatiana wrote:

Hi Alexander,

Thank you for your answer. I am a newbie, so it didn't occur to me to check why
the XferMethod was rsync instead of rsyncd for localhost.

So I changed it to rsyncd. It still won't run, though.

I set up rsyncd.conf and created one module:

[boot]
path = /boot
comment = all boot files to be backupc

I didn't set the auth user and password, because after some tries it said
"auth failure", and I prefer to troubleshoot that after resolving this issue.

I am not sure what to put in $Conf{RsyncShareName}, but when I run
BackupPC_dump -i localhost there are some errors in the output, as you can see
below. I printed all the output and underlined all the errors for clarity.




ERROR: The remote path must start with a module name not a /





I already tried to run rsyncd and it works, but I don't know where the error in
my configuration is.

Regards,

Hari




For $Conf{RsyncShareName} you need to use an rsync module name as it is
specified in rsyncd.conf in square brackets, e.g. [boot]:

$Conf{RsyncShareName} = [
  'boot'
];



Re: [BackupPC-users] Best way to copy backuppc data

2018-02-03 Thread Alexander Moisseev via BackupPC-users

On 2/3/2018 2:50 PM, Adam Pribyl wrote:

On Fri, 2 Feb 2018, Iturriaga Woelfel, Markus wrote:


I tried the "dump | restore" way,

dump -0 -f - /var/lib/backuppc | restore -r -f -

after a few minutes:
restore: cannot write to file /tmp//rstdir1517604355: No space left on
device
   DUMP: Broken pipe
   DUMP: The ENTIRE dump is aborted.

seems restore is first writing some files to /tmp.. ok used
dump -0 -f - /var/lib/backuppc | restore -r -T /copy -f -

when it gets to the
DUMP: dumping (Pass IV) [regular files]

it just stays there for 12h with almost nothing copied.



Dump is fast because it copies file system blocks. Restore writes files, so it
is slow.
You can take dumps relatively fast if you store the dump in a file instead of
piping it to restore.
A dump of a 60G v3 cpool takes about 2 hours, but restoring that dump takes
about 1 day.
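
For example (the paths are hypothetical), writing the dump to a file first and
restoring from it separately keeps the fast step and the slow step independent:

# dump -0 -f /mnt/scratch/backuppc.dump /var/lib/backuppc
# cd /copy && restore -r -f /mnt/scratch/backuppc.dump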

tar -C /var/lib/backuppc --one-file-system --acls --xattrs -cf - . | tar -C /copy -xvf -

How does this deal with hardlinks?

I am running short on ideas for how to copy this.


To get rid of the hardlinks, consider upgrading from BackupPC v3 to v4.


This seems to be yet another reason why I want to move this to RAID1 now


Another option is to move to ZFS and use `zfs send` and `zfs receive`.
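
A rough sketch (the pool/dataset names and target host are hypothetical),
assuming the target machine also runs ZFS:

# zfs snapshot tank/backuppc@copy
# zfs send tank/backuppc@copy | ssh newhost zfs receive tank/backuppc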



Re: [BackupPC-users] When (and how) Can I remove the V3 cpool?

2018-01-31 Thread Alexander Moisseev via BackupPC-users

On 1/31/2018 8:54 PM, Clay Jackson wrote:

Hi – I upgraded to V4 late in 2017 and have been stable now for almost 2
months. But my Status page is still showing 4.14G in the V3 cpool.

Can I get rid of this, and if so, how?



Probably you still have some V3 backups. That's fine; they will be gradually
removed one by one during the expiration process.
If you really need to get rid of them sooner, you can convert them to V4
backups with the BackupPC_migrateV3toV4 utility [1].

[1]: http://backuppc.github.io/backuppc/BackupPC.html#Other-Command-Line-Utilities
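
For example, to migrate all hosts in one pass (run as the backuppc user; see
the documentation above to confirm the options for your version):

# su -m backuppc -c 'BackupPC_migrateV3toV4 -a'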





Re: [BackupPC-users] No files dumped for share for localhost

2018-01-30 Thread Alexander Moisseev via BackupPC-users

On 30.01.2018 9:38, RAKOTONDRAINIBE Harimino Lalatiana wrote:

So it seems that the issue is the "DNS lookup error: general failure" message
when I run ssh.

backuppc@backup:/ % /usr/bin/ssh -v -x -l backuppc localhost /bin/foobar
OpenSSH_7.2p2, OpenSSL 1.0.2k-freebsd  26 Jan 2017
debug1: Reading configuration data /etc/ssh/ssh_config



debug1: Server host key: ecdsa-sha2-nistp256 
SHA256:7IBJcg+uURyKkFB/9QuGJoU9tgm4m+gBiiVG44+nvyY
DNS lookup error: general failure


I think it is normal. The SSH client attempts to verify host keys using DNS.
See VerifyHostKeyDNS in man ssh_config.

[...]

debug1: Authentications that can continue: publickey
debug1: Trying private key: /home/backuppc/.ssh/id_dsa
debug1: Trying private key: /home/backuppc/.ssh/id_ecdsa
debug1: Trying private key: /home/backuppc/.ssh/id_ed25519
debug1: No more authentication methods to try.


Here is the problem: it can't authenticate.


Permission denied (publickey).

But it's a bit confusing, because as I understand it the server connects to
itself when it tries to back up localhost, so why can't the connection be
established?

Regards,

Hari


Why do you need SSH to back up localhost in the first place? You can connect
directly to rsyncd.



Re: [BackupPC-users] No files dumped for share for localhost

2018-01-29 Thread Alexander Moisseev via BackupPC-users

On 1/29/2018 2:53 PM, RAKOTONDRAINIBE Harimino Lalatiana wrote:

BackupPC is installed on FreeBSD 11.1.
All hosts seem to back up well except localhost.
I did as the forum said, so I built rsync-bpc 3.0.9 from current git.


BTW, you could use the ports:

net/rsync-bpc
sysutils/backuppc4
sysutils/p5-BackupPC-XS



Re: [BackupPC-users] Handle Backup as inconsistent on xfer error

2018-01-28 Thread Alexander Moisseev via BackupPC-users

On 1/26/2018 5:31 PM, Zielke, Julian, NLI wrote:

It will be marked as partial, but without continuing and/or doing rotation.
Also, the xfer status won't be 1, because there were actual problems saving all
files within the share. This is just to prevent BackupPC from running backups
with xfer errors and doing rotation while some files were always open during
every backup in the series.



Julian, if I understood you correctly, you are trying to solve the following
problem:
If xfer errors occurred during a backup (files were open or whatever), the
backup is inconsistent, as it misses some files.
In the current implementation such a backup is considered successful despite
the xfer errors.
The expiration algorithm doesn't take xfer errors into account either.
Here is the problem: under certain circumstances BackupPC expiration can keep
errored backups but expire consistent ones, so we would be unable to restore
some files.

That is the reason why I do daily checks for xfer errors and remove backups 
with xfer errors.

Sure, the patch you are proposing will do it automatically, but it seems not a
good solution to me.
What if something happens to the host's data? In many cases a recent partial
backup is better than nothing. But you've aborted it!
Moreover, you haven't saved the rest of the shares on this host!
https://github.com/backuppc/backuppc/blob/master/bin/BackupPC_dump#L1230
If some files are always open during backup, every backup will be aborted, so
you will end up with no backups at all.

If we mark the backup as partial instead of aborting it, the partial backup
will be removed when the next successful backup completes, or when another full
backup fails, resulting in a newer partial backup.
Advantages: files from the partial backup can be restored, and on the next
backup we don't need to retransfer files that are already in the partial backup
(for rsync, not for tar).

In general, the situation where some files are always open during backup should
be avoided. If there are just a few such files and they are always the same, it
is probably a good idea to create an exclusion list. Then, if all xfer errors
happen while attempting to transfer files that are on the exclusion list, the
backup should not be marked as partial.



Re: [BackupPC-users] Handle Backup as inconsistent on xfer error

2018-01-26 Thread Alexander Moisseev via BackupPC-users

On 26.01.2018 12:31, Zielke, Julian, NLI wrote:

I've decided to rewrite the code of BackupPC_dump for this case.

Here's the patch: https://pastebin.com/wv1DFVbV

I suggest creating an option in the configuration of the tar xfer method (bool
/ checkbox) so the user can decide whether to abort on xfer errors or not.


Why do you want to abort the backup? I think we should just mark it as partial.



Re: [BackupPC-users] Restore backup to diferent host

2018-01-11 Thread Alexander Moisseev via BackupPC-users

On 1/12/2018 4:11 AM, Egis K. wrote:

Helo,


I'm trying to restore data to a different host than the one the backup was made
from, with no success (version 4.1.5).
I got an error while restoring; here is the output:

2018-01-12 01:07:54 restore started below directory  to host 192.168.1.112
2018-01-12 01:07:56 restore failed (rsync error: error starting client-server 
protocol (code 5) at main.c(1541) [sender=3.0.9.11])


I see that there is a double slash in the path, but if I try to restore a file
to the same host, the slash is single.


  Are you sure?

You are about to start a restore directly to the machine 192.168.1.112. The
following files will be restored to share , from backup number 22:

Original file/dir   Will be restored to
192.168.1.115:/cDrive/Users/user/.fog_user.log  192.168.1.112://Users/useris/.fog_user.log

Do you really want to do this?


Restoring to a UNC path (Windows share) using rsync is not possible.
With the rsyncd transfer method, the "Restore the files to share" form field
expects a module (area) name like "cDrive", not a Windows share name.
You can find the module name in rsyncd.conf on the target host.
If you do not have rsyncd on the target host you can restore to a UNC path, but
you need to use the SMB transfer method in that case.



Re: [BackupPC-users] Backup error with no error

2017-10-11 Thread Alexander Moisseev via BackupPC-users

On 10/11/2017 11:09 AM, Philippe Maladjian wrote:

Hello,

The job reports an error when running a full or incremental backup, but I do
not see one.

When I created it, I first ran a test with /media/netapp_www, started a full
backup, and let the jobs run for several days. Then I added the other
directories. Since then I get the message that there is an error during the
backup, but I cannot see one in the logs.

-

File /media/backuppc/pc/svbackup04/LOG.102017

Contents of /media/backuppc/pc/svbackup04/LOG.102017, modified 2017-10-11 09:46:50

[...]

2017-10-11 09:44:01 unexpected empty share name skipped
2017-10-11 09:44:06 Backup aborted ()


Maybe you need to specify $Conf{RsyncShareName}.
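
For instance, a minimal sketch based on the path from your description:

$Conf{RsyncShareName} = [
  '/media/netapp_www'
];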



Re: [BackupPC-users] Graph in dashboard

2017-10-05 Thread Alexander Moisseev via BackupPC-users

On 10/5/2017 1:16 PM, Gandalf Corvotempesta wrote:

2017-10-05 11:56 GMT+02:00, Alexander Moisseev via BackupPC-users wrote:

In BackupPC v4 if you set "$Conf{PoolSizeNightlyUpdatePeriod}" to N (default
is 16) nightly will process 1/N of the pool every night.
Pool graphs will be updated on every nightly run (i.e. every night).


I'm referring to BackupPCNightlyPeriod, which is set to 1 by default.
Now I've set it to 4. This shouldn't affect the graph updates, right?
Because I've seen that when nightly is interrupted, the graphs are not updated.



It shouldn't. You'll know for sure in the morning ;)
With BackupPCNightlyPeriod > 1 the nightly job is not interrupted; it finishes
normally. It just processes a subset of the pool directories each night instead
of the whole pool.




Re: [BackupPC-users] Graph in dashboard

2017-10-05 Thread Alexander Moisseev via BackupPC-users

On 10/5/2017 12:14 PM, Gandalf Corvotempesta wrote:

Are the graphs in the dashboard updated only after the nightly process has
processed the whole pool?


They are updated on every nightly run. It doesn't matter that some files in the
pool are unprocessed; they still occupy pool space.


For example, if I set the nightly process to scan the pool over 2 or 3 days,
would the graphs still be updated every day, or only every 2 or 3 days?

Asking this because I want daily graphs, but as nightly is a very heavy process
(the load is more than 10 when it is running) I would like to split it over 4
days or so.



In BackupPC v4, if you set $Conf{PoolSizeNightlyUpdatePeriod} to N (default is
16), nightly will process 1/N of the pool every night.
The pool graphs will be updated on every nightly run (i.e. every night).

http://backuppc.github.io/backuppc/BackupPC.html#_conf_poolsizenightlyupdateperiod_
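
For your case, a sketch that spreads the pool traversal over four nights:

$Conf{PoolSizeNightlyUpdatePeriod} = 4;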



Re: [BackupPC-users] Email in backup success/failure

2017-09-03 Thread Alexander Moisseev via BackupPC-users

On 9/3/2017 10:20 PM, Gandalf Corvotempesta wrote:

So, what's the meaning for the email feature configurable in bpc settings?

Which kind of emails are sent?


BackupPC sends notifications if a host has never been backed up or the most
recent backup is too old.
That means you will get an email if a backup has not finished within the
configured number of days, but BackupPC remains silent if a backup finished
with a lot of errors.
Some people think it's a good thing that BackupPC doesn't bother them if some
files were locked or vanished or whatever during a backup.



Re: [BackupPC-users] Email in backup success/failure

2017-09-03 Thread Alexander Moisseev via BackupPC-users

On 9/3/2017 7:52 PM, Gandalf Corvotempesta wrote:

Is it possible to have, every night, an email with the results of all backups?

I would like to get a report every night, after backup completion.



There is no such functionality in BackupPC, but you can run an external script
from cron during the blackout period.

https://github.com/moisseev/BackupPC_report
https://github.com/moisseev/BackupPC_report/blob/master/BackupPC_report
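
A hypothetical system crontab entry (/etc/crontab format; the install path and
time are assumptions — pick a time inside your blackout period):

30 8 * * * backuppc /usr/local/bin/BackupPC_report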



Re: [BackupPC-users] Backing up the BackupPC pool

2017-08-09 Thread Alexander Moisseev via BackupPC-users

On 8/9/2017 11:47 PM, Hannes Elvemyr wrote:


Sounds great, but how do I know that BackupPC is not reading/writing the pool
during the copying process (maybe some backup is running, or BackupPC_nightly
could start doing some cleaning)? Copying a large pool over a bad Internet
connection could take hours…


Option 1. Stop BackupPC.
Option 2. Make a snapshot of the file system.
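
For option 2, an LVM sketch (the volume group and names are hypothetical); the
snapshot gives a frozen, consistent view you can copy while BackupPC keeps
running:

# lvcreate --snapshot --size 10G --name bpc-snap vg0/backuppc
# mount -o ro /dev/vg0/bpc-snap /mnt/snap
  ... copy from /mnt/snap over the slow link ...
# umount /mnt/snap
# lvremove -f vg0/bpc-snap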




Re: [BackupPC-users] restore .tar / .zip files empty #128

2017-07-30 Thread Alexander Moisseev via BackupPC-users

On 7/30/2017 10:49 PM, Doug Lytle via BackupPC-users wrote:

On 07/30/2017 01:43 PM, Craig Barratt wrote:

Doug,

I pushed a fix that should fix the problem.



I couldn't figure out how to download a patch from the interface, so I selected 
view/raw



Doug, you can just add ".diff" or ".patch" at the end of the commit URIs, like
this:
https://github.com/backuppc/backuppc/commit/6f01264005310cffe55d5258436d38609a1ac99d.diff
https://github.com/backuppc/backuppc/commit/0439411cc17757e5f15ebfbff902aacc3d4510b6.diff

Or do the same with "compare" to get a patch that includes both commits:
https://github.com/backuppc/backuppc/compare/2c09f85...0439411.diff




Re: [BackupPC-users] status graph

2017-06-06 Thread Alexander Moisseev via BackupPC-users

On 6/6/2017 4:24 AM, Alexey Safonov wrote:

Alexander, Craig

Any ideas what I can check, and how can I find this V3 pool?


I don't know exactly how to do this. I guess you need to find files with a
hardlink count > 1 in the pool and cpool directories.
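
Something like this might work (the pool path is hypothetical, and this is
untested):

# find /var/lib/backuppc/pool /var/lib/backuppc/cpool -type f -links +1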

Probably it's not necessary, you just need to wait.

Make sure "$Conf{PoolV3Enabled} = 1;" until the last file deleted from v3 pool.
By default "$Conf{PoolSizeNightlyUpdatePeriod} = 16;". This means you need to 
wait 16 days while v3 pool will be emptied.

See the details in the BackupPC_migrateV3toV4 section of
https://backuppc.github.io/backuppc/BackupPC.html#Other-Command-Line-Utilities
BTW, this section says "You could do this manually". Craig, how would you
suggest finding v3 files in the pool and cpool?

Since your pool is not big, you can set $Conf{PoolSizeNightlyUpdatePeriod} = 1;

Also, it's not necessary to wait for the next BackupPC_nightly run. You can
start it at any time:
# su -m backuppc -c 'BackupPC_serverMesg BackupPC_nightly run'

