[Bacula-users] Fwd: storage daemon resets connections....

2007-10-01 Thread IEM - network operating center (IOhannes m zmoelnig)
(by accident, i originally sent this email from the wrong account;
i want to apologize beforehand if it gets through twice)



hi all.

i added a new host to my bacula backup setup, but unfortunately the
backup fails because the storage daemon resets the connection after
some time (that is: after >20min have elapsed and >4GB have been written).

unfortunately i don't know _why_ the connection is reset. the 2 messages
i get are "Error reading data header from FD. ERR=No data available" and
"Packet size too big".

a quick google for the 1st error revealed zero results.
however the "packet size too big" error can be found in the bacula FAQ:
http://www.bacula.org/dev-manual/Bacula_Freque_Asked_Questi.html#SECTION003733
but the problem sources described there do not seem to apply to my setup:
- no other application is using the port (else i wouldn't be able to
establish _any_ connection, would i?)
- i am not using a w32 client, but bacula 2.0.2-1 on debian/linux

furthermore: i haven't (yet) found anything indicating an error in the 
logs of the involved hosts (e.g. no disk failures,...)



my setup:
- file-daemon:
debian/sarge with bacula-2.0.2-1
CPU: AMD XP1900+
memory: 1GB
the disk on this machine is NOT super fast

- storage-daemon=director:
debian/etch with bacula-2.0.2-1
CPU: 4*Intel Xeon 2.0GHz
memory: 1GB

- storage-backend:
Quantum SuperLoader3 equipped with 15 DLT-S4 tapes


- connection:
100MBit

i noticed that in the sd-configuration, the "Spool Directory" pointed to
a non-existent directory.
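
for reference, the relevant bit of a bacula-sd.conf Device resource would
look roughly like the sketch below; the device name, paths and sizes here
are made up and not taken from my actual config:

Device {
  Name = "SuperLoader-Drive-0"          # made-up name
  Media Type = DLT-S4
  Archive Device = /dev/nst0
  Autochanger = yes
  # the spool directory has to exist and be writable by the SD,
  # otherwise data spooling cannot work
  Spool Directory = /var/spool/bacula
  Maximum Spool Size = 20G
}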

another host in the same bacula setup does a weekly differential backup
with >60GB (this is the size of the backup, not of the data to be
backed up) and does not have any problems.


any ideas what could be the source of my problem?
any way to fix it?



mfgasd.r
IOhannes


PS: this is an excerpt of the error-report:




01-Oct 10:02 backup-sd: Ready to append to end of Volume "AAB129S4" at
file=7.
01-Oct 10:26 backup-sd: zope.2007-10-01_09.58.21 Fatal error:
append.c:158 Error reading data header from FD. ERR=No data available
01-Oct 10:26 backup-sd: Job write elapsed time = 00:24:05, Transfer rate
= 3.201 M bytes/second
01-Oct 10:26 backup-sd: zope.2007-10-01_09.58.21 Fatal error: bnet.c:241
Packet size too big from "client:192.168.7.5:36643. Terminating connection.
01-Oct 10:26 zope-fd: zope.2007-10-01_09.58.21 Fatal error: backup.c:860
Network send error to SD. ERR=Die Verbindung wurde vom
Kommunikationspartner zurückgesetzt (connection reset by peer)
01-Oct 10:26 backup-dir: zope.2007-10-01_09.58.21 Error: Bacula 2.0.2
(28Jan07): 01-Oct-2007 10:26:32
   JobId:                  318
   Job:                    zope.2007-10-01_09.58.21
   Backup Level:           Full (upgraded from Incremental)
   Client:                 "zope-fd" 2.0.2 (28Jan07) i386-pc-linux-gnu,debian,3.1
   FileSet:                "FullLinux" 2007-05-04 10:07:19
   Pool:                   "FullPool" (From Job FullPool override)
   Storage:                "SuperLoader" (From Job resource)
   Scheduled time:         01-Oct-2007 09:58:18
   Start time:             01-Oct-2007 09:58:24
   End time:               01-Oct-2007 10:26:32
   Elapsed time:           28 mins 8 secs
   Priority:               10
   FD Files Written:       74,886
   SD Files Written:       74,886
   FD Bytes Written:       4,616,552,688 (4.616 GB)
   SD Bytes Written:       4,625,674,952 (4.625 GB)
   Rate:                   2734.9 KB/s
   Software Compression:   None
   VSS:                    no
   Encryption:             no
   Volume name(s):         AAB129S4
   Volume Session Id:      33
   Volume Session Time:    1190215557
   Last Volume Bytes:      11,307,405,312 (11.30 GB)
   Non-fatal FD errors:    0
   SD Errors:              0
   FD termination status:  Error
   SD termination status:  Error
   Termination:            *** Backup Error ***



-- 
IEM - network operation center
mailto:[EMAIL PROTECTED]



Re: [Bacula-users] how to split the bacula-dir.conf

2007-10-02 Thread IEM - network operating center (IOhannes m zmoelnig)
hi


Michal Medvecký wrote:
> luyigui loholhlki wrote:
>> hi
>> i want to split the bacula-dir.conf file to different files (one file
>> by client)
>> is that realizable if yes : how??
>> thanks in advance
>>
>>
> @ stands for include
> 
> example:
> 
> @/etc/bacula/hosts/test.host
> 
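
to spell out the suggestion with a minimal sketch (paths, client and
resource names are made up):

# in bacula-dir.conf, include one file per client:
@/etc/bacula/hosts/alice.example.com.conf
@/etc/bacula/hosts/bob.example.com.conf

# /etc/bacula/hosts/alice.example.com.conf then carries everything
# specific to that client, e.g.:
Client {
  Name = alice-fd
  Address = alice.example.com
  FDPort = 9102
  Catalog = MyCatalog
  Password = "secret"
}
Job {
  Name = "backup-alice"
  Client = alice-fd
  JobDefs = "DefaultJob"
}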


i have a similar problem (so i am hijacking this thread)...

i have a general FileSet resource which i use for all my clients.
however, i want to adjust some client-specific excludes on a per-host basis.
currently i do something like


# List of files to be backed up
FileSet {
  Name = "FullFileSet"
  Include {
    Options {
      fstype=ext2,jfs,ntfs,reiserfs,xfs
      onefs=no
      signature = MD5
      wilddir="*/tmp"
      exclude = yes
    }
    File = /
  }

  Exclude {
    File = "\\


this works ok; the "File" directive is fed from a file on the
client machine.

however, i would prefer to have all these files on the director host, 
and include them based on the job running.

something like:

  File = "mailto:[EMAIL PROTECTED]



Re: [Bacula-users] Fwd: storage daemon resets connections....

2007-10-08 Thread IEM - network operating center (IOhannes m zmoelnig)
IEM - network operating center (IOhannes m zmoelnig) wrote:
> hi all.
> 
> i added a new host to my bacula backup setup, but unfortunately the
> backup fails because the storage daemon resets the connection after
> some time (that is: after >20min have elapsed and >4GB have been written).
> 
> unfortunately i don't know _why_ the connection is reset. the 2 messages
> i get are "Error reading data header from FD. ERR=No data available" and
> "Packet size too big".

an upgrade to 2.2.4 seems to have fixed this.


fmads.r
IOhannes

-- 
IEM - network operation center
mailto:[EMAIL PROTECTED]



[Bacula-users] understanding "status" of several daemons.

2007-10-08 Thread IEM - network operating center (IOhannes m zmoelnig)
hi.

i am currently having trouble understanding why the different daemons (fd,
sd, dir) show different statuses.

my "problem": why does a job that ran on all daemons (dir, sd, fd) not
show up in the "status" listings of all of them?
i haven't found much in-depth information in the docs, probably i was 
looking in the wrong places...


my setup:
1 director
1 storage daemon
several file daemons
director and SD are running on the same machine; the SQL backend is 
postgresql-8.1
all machines have recently been upgraded from 2.0.2 to 2.2.4, and are 
running debian (mostly sarge)




that's what i get when i do a "status dir"::

Terminated Jobs:
 JobId  Level     Files      Bytes   Status   Finished         Name
====================================================================
   324  Incr         13    6.790 M   OK       02-Oct-07 03:06  client1
   325  Full    298,323    392.4 G   OK       02-Oct-07 23:13  client2
   326  Incr      4,118    218.9 M   OK       03-Oct-07 00:15  client3_0
   327  Incr        186    26.89 M   OK       03-Oct-07 03:05  client3_1
   328  Incr        301    117.1 M   OK       03-Oct-07 03:06  client4
   329  Incr         39    9.478 M   OK       03-Oct-07 03:06  client1
   330  Incr      2,365    134.6 M   OK       04-Oct-07 00:11  client3_0
   331  Incr         47    31.37 M   OK       04-Oct-07 03:05  client3_1
   332  Incr        170    11.31 M   OK       04-Oct-07 03:06  client4
   333  Incr         32    20.62 M   OK       04-Oct-07 03:06  client1



that's what i get when i do a "status storage"::

Terminated Jobs:
 JobId  Level     Files      Bytes   Status   Finished         Name
====================================================================
   309  Incr      3,159    162.3 M   OK       29-Sep-07 00:11  client3_0
   310  Incr         31    22.05 M   OK       29-Sep-07 03:05  client3_1
   311  Incr        177    63.22 M   OK       29-Sep-07 03:06  client4
   312  Diff    160,896    60.45 G   OK       30-Sep-07 01:53  client3_0
   313  Diff      2,924    201.5 M   OK       30-Sep-07 03:05  client3_1
   314  Diff        852    127.4 M   OK       30-Sep-07 03:06  client4
   315  Incr        866    24.39 M   OK       01-Oct-07 00:15  client3_0
   316  Incr         33    22.73 M   OK       01-Oct-07 03:05  client3_1
   317  Incr        143    62.49 M   OK       01-Oct-07 03:05  client4
   318  Full     74,886    4.625 G   Error    01-Oct-07 10:26  client1



and that's what i get when i do a "status client=client1-fd"::

Terminated Jobs:
 JobId  Level     Files      Bytes   Status   Finished         Name
====================================================================
   318  Full     74,886    4.616 G   Error    01-Okt-07 10:26  client1
   320  Full     79,028    16.16 G   OK       01-Okt-07 16:34  client1
   324  Incr         13    6.790 M   OK       02-Okt-07 03:06  client1
   329  Incr         39    9.478 M   OK       03-Okt-07 03:06  client1
   333  Incr         32    20.62 M   OK       04-Okt-07 03:06  client1
   334  Incr         29    17.59 M   OK       04-Okt-07 13:06  client1
   340  Incr          9    5.811 M   OK       05-Okt-07 03:07  client1
   346  Incr         32    24.97 M   OK       06-Okt-07 03:07  client1



mfds.
IOhannes m zmölnig


-- 
IEM - network operation center
mailto:[EMAIL PROTECTED]



Re: [Bacula-users] understanding "status" of several daemons.

2007-10-09 Thread IEM - network operating center (IOhannes m zmoelnig)
hi

mysteriously, the lists of terminated jobs have been synced during the
backup cycle last night. so i am happy again :-)


Arno Lehmann wrote:
> Hi,
> 
> You're looking at the state of terminated jobs. If the daemon in 
> question didn't save its state information before shutdown or restart, 
> that information will not be updated.

thanks for your answer.

however i am not aware of any shutdown/restarts or crashes of the daemons.

> Actually, I don't think this "terminated jobs" listing is useful 
> except in situations where you have more than one DIR, as more 
> detailed and *by definition* complete and correct information is 
> available in the catalog.

i see. i was under the wrong impression that the "status" command would
actually query the catalog and would therefore be complete by definition, too.
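
for the record, the catalog itself can of course be queried directly from
bconsole, e.g. (the jobid is taken from the listings above):

*list jobs client=client1-fd
*llist jobid=318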

mgs.dft
IOhannes




-- 
IEM - network operation center
mailto:[EMAIL PROTECTED]



[Bacula-users] differential backups with xfs/x86-64?

2012-01-10 Thread IEM - network operating center (IOhannes m zmoelnig)
(sorry if this comes thru as a dupe; i first sent this mail from an
unsubscribed account)


hi all,

i have a problem with bacula and incremental/differential backups
from an XFS filesystem.
to put it simply: bacula always creates full backups.


the long story:
we deployed bacula to back up our users' directories a while ago (that was
bacula-2.2.4 or so) and have had good success so far (thanks, btw).
back then, the home directory on the fileserver (running debian/i386 with
bacula-fd) was a 2TB partition using ext3.
the backupserver (running debian/i386 with bacula-dir (postgresql) and
bacula-sd) stores all data on a DLT-S4 autochanger with a capacity of 12TB.
incremental backups of the users' home directories typically took less
than 30GB each day.
unfortunately our users filled up the homedirectory, so we decided to
upgrade the fileserver, and it now has a capacity of 20TB. since there
are no ext4 userland tools to handle such a capacity, we are now using
xfs as the filesystem. the fileserver is now also running an x86_64
system (still debian).
the new fileserver runs bacula-5.0.2 (as packaged for debian), so we had
to upgrade the backup-server from the originally used bacula-2.4.4 to
bacula-5.0.2; the upgrade had some problems (regarding the postgresql db
update), but it seems to have succeeded.
at least all the other servers that are being backed up are doing fine.
these servers are running bacula-2.4.4 and bacula-2.2.8

unfortunately, the backup of the new fileserver is causing problems:
there are 2 jobs for this server, one doing a system backup, and the
other one backing up the users' homes.
the system backup seems to work fine, but the homes job fails to do
incremental backups.
after an initial full backup started last wednesday (which consumed
1.33TB), an incremental backup was started on friday (which also
consumed 1.33TB, and is virtually identical to the full backup just
made); another incremental backup just started and has currently
consumed 280GB, indicating that it is running yet _another_ backup of
the entire fileset.

i wonder what went wrong?
i suspect that the problem is related to the use of XFS and/or to the
fileserver now being 64bit.

below you can find some technical info.

any help is appreciated.
esp. since i simply cannot afford to spend a DLT tape each day


mfgasdr
IOhannes


bacula specific information
===
according to [2] i collected the director's configuration and the output
of "list jobs" and "llist" for the Full job and the Incr job (that
wrongly did a full backup again). find them in the attached tgz.


operating systems
=
backupserver$ uname -a
Linux backup.server 2.6.32-5-686 #1 SMP Thu Nov 3 04:23:54 UTC 2011 i686
GNU/Linux
fileserver$ uname -a
Linux file.server 2.6.32-5-amd64 #1 SMP Thu Nov 3 03:41:26 UTC 2011
x86_64 GNU/Linux

inode size
==========
i compiled the following little program as found on [1] on both my
backupserver and my fileserver:

#include <stdio.h>
#include <sys/stat.h>

int main(void) {
    struct stat s;
    /* st_ino is the inode number field of struct stat; its width
       differs between 32bit and 64bit userlands */
    printf("Size of inode is %zu bytes\n\n", sizeof(s.st_ino));
    return 0;
}


results:
fileserver  : Size of inode is 8 bytes
backupserver: Size of inode is 4 bytes

XFS on the fileserver
=
root@fileserver# xfs_info /dev/vda
meta-data=/dev/vda               isize=256    agcount=20, agsize=268435455 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=511991, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0



links
=


[1]
http://www.backupcentral.com/phpBB2/two-way-mirrors-of-external-mailing-lists-3/bacula-25/bacula-xfs-and-inode64-98549/

[2]
http://www.bacula.org/5.2.x-manuals/en/problems/problems/Bacula_Frequently_Asked_Que.html#SECTION00228



bacula-info.tgz
Description: GNU Unix tar archive


Re: [Bacula-users] differential backups with xfs/x86-64?

2012-01-11 Thread IEM - network operating center (IOhannes m zmoelnig)
On 2012-01-10 14:19, Bruno Friedmann wrote:
>>
> 
> First of all, I'm using also xfs on data source and/or backup media.
> There's no special trouble on them to get incremental/differential backup.

thanks for your detailed answer.
i'm glad to hear that no troubles are to be expected in general. so not
all is lost...

> What I suspect could be : an antivirus or any other scripts running on the 
> xfs partition that change attribute or acl on files

good guess but unfortunately unlikely...or rather: the only script that
_is_ run and might update attributes is "mlocate.update"

> It will be better for you have 5.0.3 or wait until 5.2.4 release version.

if i knew this would help, it might be an option.
until i know the cause of the problem, i'd rather stick with stock
debian packages (even if it means slightly outdated packages). i'm aware
that this is usually not what people want to hear on application-specific
mailing lists, where a lot of work is put into making the latest
release as stable and bug-free as possible :-)

> 
> Check also the parameter used for your incremental (did you use Accurate 
> Backup?)

i might have to re-read the manual to understand what you mean by
"Accurate Backup". obviously i have read the manual to configure bacula
correctly for my needs back then (using an old version of bacula & OS
and a different filesystem). since truths keep shifting, i might have to
refresh my knowledge...
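
from a quick look at the manual, "Accurate" is a Job-resource directive;
a sketch of what i would try (resource names partly made up, and untested
on my 5.0.2 setup):

Job {
  Name = "fileserver-homes"             # made-up job name
  Type = Backup
  Client = fileserver-fd                # made-up client name
  FileSet = "_Net_data"
  Storage = SuperLoader
  Pool = FullPool
  Schedule = "WeeklyCycle"              # made-up schedule name
  Messages = Standard
  # with Accurate = yes the FD checks each file against the catalog's
  # list from the previous backups, so renamed/moved files are caught
  # and level selection does not depend on ctime/mtime alone;
  # it costs extra memory on the FD.
  Accurate = yes
}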

> If the filesystem xfs is not mounted with the noatime option, check to have 
> noatime option in the bacula job.
> so it will not change each file during the full backup, then creating a new 
> fool backup for the incremental.

that is a very good point.
# cat /proc/mounts
/dev/vda /Net/data xfs rw,relatime,attr2,nobarrier,noquota 0 0

so the xfs-attribute "noatime" is not set (though "relatime" is).
and the bacula fileset misses the "noatime=yes" as well.

i will add "noatime=yes" to the fileset and see whether this helps
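
for completeness, the fragment i intend to add lives in the Options block
of the FileSet (everything else stays as it is):

Options {
  signature = MD5
  # open files with O_NOATIME where supported, so that reading them for
  # the backup does not modify their access time
  noatime = yes
  # (the existing wildfile/wilddir/exclude directives remain unchanged)
}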


mfgasdr
IOhannes



-- 
IEM - network operation center
mailto:n...@iem.at



Re: [Bacula-users] differential backups with xfs/x86-64?

2012-01-11 Thread IEM - network operating center (IOhannes m zmoelnig)
On 2012-01-11 12:10, Martin Simmons wrote:
>> On Wed, 11 Jan 2012 10:02:00 +0100, "IEM - network operating center (IOhannes m zmoelnig)" said:
>>
>> On 2012-01-10 14:19, Bruno Friedmann wrote:
>>>
>>> If the filesystem xfs is not mounted with the noatime option, check to have 
>>> noatime option in the bacula job.
>>> so it will not change each file during the full backup, then creating a new 
>>> fool backup for the incremental.
>>
>> that is a very good point.
>> # cat /proc/mounts
>> /dev/vda /Net/data xfs rw,relatime,attr2,nobarrier,noquota 0 0
>>
>> so the xfs-attribute "noatime" is not set (though "relatime" is).
>> and the bacula fileset misses the "noatime=yes" as well.
>>
>> i will add "noatime=yes" to the fileset and see whether this helps
> 
> This should make no difference -- incremental backup is based on ctime and
> mtime, not atime.

darn.

> 
> Please post your fileset definition.


i already included it in the director configuration.
anyhow, here it is again:

FileSet {
  Name = "_Net_data"
  Include {
Options {
  signature = MD5
  wildfile="*.o"
  wildfile="*.pd_linux_o"
  wildfile="*~"
  wilddir="*/tmp"
  wilddir="*/temp"
  wilddir="*/Temp"
  wilddir="*/TEMP"
  wilddir="*/.svn"
  wilddir="*/.imap"
  wilddir="*/Maildir/.SPAM*"
  wilddir="*/Trash"
  wilddir="*/.Trash"
  wilddir="*/RECYCLER"
  wilddir="*/Temporary Internet Files"
  wilddir="*/.DS_Store"
  wilddir="*/.AppleDouble"
  exclude="yes"
}
File = /Net/data
  }
  Exclude {
File = /Net/data/Benutzer.rsync
  }
}


> 
> Also, what is the output of
> 
> stat /path/to/some/file
> 
> where /path/to/some/file is a file that you didn't expect to see in the
> incremental backup?

# stat /Net/data/Sound/ablinger/Altar/05KStudie2.wav
  File: `/Net/data/Sound/ablinger/Altar/05KStudie2.wav'
  Size: 53315180        Blocks: 104136     IO Block: 4096   regular file
Device: fe00h/65024d    Inode: 4133733     Links: 1
Access: (0744/-rwxr--r--)  Uid: (10144/ablinger)   Gid: (  100/   users)
Access: 2012-01-11 00:35:17.963002882 +0100
Modify: 2003-12-09 14:12:50.000000000 +0100
Change: 2011-10-04 21:35:44.745746088 +0200

so the times seem to be fine (the ctime is when i copied the old files
to the new fileserver)


alas.
btw, is there a way to "estimate" the next backup?
"estimate" seems to only estimate the fileset (that is: the "full" set)


fmgasr
IOhannes

-- 
IEM - network operation center
mailto:n...@iem.at



Re: [Bacula-users] differential backups with xfs/x86-64?

2012-01-12 Thread IEM - network operating center (IOhannes m zmoelnig)
On 2012-01-11 19:45, Radosław Korzeniewski wrote:
> 
> Did you try a level option of estimate command?
> 
> * estimate job="MyJob" level=Incremental

doh!
that indeed works, thanks a lot.
i couldn't find the option with "help" in bconsole/bat

nevertheless, it's in the online manual and i seem to have missed it...
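
for the archive, the variant with the "listing" keyword (which, according
to the console manual, also prints the files that would be backed up):

* estimate job="MyJob" level=Incremental
* estimate job="MyJob" level=Incremental listing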


fgmasdr
IOhannes

-- 
IEM - network operation center
mailto:n...@iem.at



Re: [Bacula-users] differential backups with xfs/x86-64?

2012-01-17 Thread IEM - network operating center (IOhannes m zmoelnig)
On 2012-01-12 17:32, Martin Simmons wrote:
>> On Wed, 11 Jan 2012 13:12:56 +0100, "IEM - network operating center (IOhannes m zmoelnig)" said:
>>
> 
>> # stat /Net/data/Sound/ablinger/Altar/05KStudie2.wav
>>   File: `/Net/data/Sound/ablinger/Altar/05KStudie2.wav'
>>   Size: 53315180        Blocks: 104136     IO Block: 4096   regular file
>> Device: fe00h/65024d    Inode: 4133733     Links: 1
>> Access: (0744/-rwxr--r--)  Uid: (10144/ablinger)   Gid: (  100/   users)
>> Access: 2012-01-11 00:35:17.963002882 +0100
>> Modify: 2003-12-09 14:12:50.000000000 +0100
>> Change: 2011-10-04 21:35:44.745746088 +0200
>>
>> so the times seem to be fine (the ctime is when i copied the old files
>> to the new fileserver)
> 
> That is very strange, because the Modify and Change times are both old.
> Without accurate backup, that is all that Bacula uses to detect changed files.

it seems the problem solved itself (well, i certainly didn't solve it)...

after i found out how to do a job estimation for an incremental backup,
and that estimation appeared to be sane, i re-ran the job, and since then
the backup volume is as expected (1.1GB rather than 1.3TB).

my fileset now includes the "noatime = yes" directive, since i don't see
how this could do any harm and maybe (just maybe) it did indeed fix the
problem.


thanks for your support.

fgmasdr
IOhannes



[Bacula-users] spurious tracebacks

2007-10-23 Thread IEM - network operating center (IOhannes m zmoelnig)
hi all


i am currently running bacula-2.1.5 on my backup-server (and 2.1.4 on 
the clients), on debian/etch (using the debian-packages from packman)

every now and then (the last one was today; then 1 month before; then 10 
days before that,...) i get traceback emails from my backup-server, each 
for all of the 3 daemons (dir, file, storage) running there.

the emails are pretty non-descriptive, a la:

Subject: Bacula GDB traceback of bacula-sd
(no debugging symbols found)
Using host libthread_db library "/lib/libthread_db.so.1".
ptrace: Operation not permitted.
/var/lib/bacula/6067: No such file or directory.

the emails usually are sent when nothing interesting is happening (my 
backups are all finished at night; the emails are sent somewhen during 
the day).

however, receiving tracebacks gives me an uneasy feeling.

any idea why they might occur? (ok, my information i give here is rather 
sparse, so i should ask: how should i investigate why they occur?)


any hints?

fgmasdr
IOhannes


-- 
IEM - network operation center
mailto:[EMAIL PROTECTED]



Re: [Bacula-users] spurious tracebacks

2007-10-23 Thread IEM - network operating center (IOhannes m zmoelnig)
IEM - network operating center (IOhannes m zmoelnig) wrote:
> hi all
> 
> 
> i am currently running bacula-2.1.5 on my backup-server (and 2.1.4 on 
> the clients), on debian/etch (using the debian-packages from packman)

oops, this should read: 2.2.5 (and 2.2.4 resp.)

mfga.sr
IOhannes


-- 
IEM - network operation center
mailto:[EMAIL PROTECTED]



Re: [Bacula-users] spurious tracebacks

2007-10-24 Thread IEM - network operating center (IOhannes m zmoelnig)
Brian A Seklecki (Mobile) wrote:

>> every now and then (the last one was today; then 1 month before; then 10 
>> days before that,...) i get traceback emails from my backup-server, each 
>> for all of the 3 daemons (dir, file, storage) running there.
> 
> *) Run the SD in foreground mode with debug level setup


i will try that.


> *) Run the SD in ktrace/ptrace

i try to avoid this as long as possible: i don't feel (yet) like running
the SD in ptrace for an entire month to see something in the end.

> Is something possibly sending the process a signal?  A log rotation
> script etc?

i was thinking along these lines too.
but then the event happens at undetermined times (at least for me :-)),
so it shouldn't be related to a cron-job (there are no cron-entries
regarding bacula that i could find; nor does the exact time when
such an event occurs relate to anything found in the cron-settings).

logrotate is in operation and it is running a monthly rotation, which
does not correspond to the tracebacks.


anyhow, thanks for the suggestions.

fdamrd
IOhannes


-- 
IEM - network operation center
mailto:[EMAIL PROTECTED]



Re: [Bacula-users] spurious tracebacks

2007-10-24 Thread IEM - network operating center (IOhannes m zmoelnig)
IEM - network operating center (IOhannes m zmoelnig) wrote:
> hi all
> 
> 
> the emails usually are sent when nothing interesting is happening (my 
> backups are all finished at night; the emails are sent somewhen during 
> the day).
> 
> however, receiving tracebacks gives me an uneasy feeling.
> 
> any idea why they might occur? (ok, my information i give here is rather 
> sparse, so i should ask: how should i investigate why they occur?)

seems i forgot something crucial (at least i think it is important):

all 3 affected daemons are still running after that.
most likely they are not the same processes (PIDs) as before the tracebacks,
but bacula is doing backups before and after the tracebacks occurred
(and no backups are scheduled at the moment a traceback occurs).


fma.dsr
IOhannes

-- 
IEM - network operation center
mailto:[EMAIL PROTECTED]



[Bacula-users] [SOLVED] Re: spurious tracebacks

2007-10-24 Thread IEM - network operating center (IOhannes m zmoelnig)
IEM - network operating center (IOhannes m zmoelnig) wrote:
> IEM - network operating center (IOhannes m zmoelnig) wrote:
>> hi all
>>
>>
>> the emails usually are sent when nothing interesting is happening (my 
>> backups are all finished at night; the emails are sent somewhen during 
>> the day).
>>
>> however, receiving tracebacks gives me an uneasy feeling.
>>
>> any idea why they might occur? (ok, my information i give here is rather 
>> sparse, so i should ask: how should i investigate why they occur?)
> 
> seems i forgot something crucial (at least i think it is important):
> 
> all 3 affected daemons are still running after that.
> most likely they are not the same processes (PIDs) as before the tracebacks,
> but bacula is doing backups before and after the tracebacks occurred
> (and no backups are scheduled at the moment a traceback occurs).
> 

it turned out to be rather simple:
after a good look at the headers of the traceback emails, i noticed that
they had not been sent from my backup machine at all, but from the
machine where i did my initial tests (before deploying bacula).

so the tracebacks were totally unrelated to my running backup system,
and instead might be related to my everyday updating and fuzzing around
on the test machine...

sorry for the noise

fasd.r
IOhannes



-- 
IEM - network operation center
mailto:[EMAIL PROTECTED]



[Bacula-users] appending to a "full" tape?

2010-04-27 Thread IEM - network operating center (IOhannes m zmoelnig)
hi all.

i'm running bacula 2.4.4 in a debian/etch environment (i know that it is
a bit outdated; but there is even one debian/sarge host that i cannot
really update...)

bacula is doing a nightly backup ("full" every 3 months or so, "diff"
every week, and "incr" every night) onto an autochanger (Quantum
SuperLoader3, btw)


due to some scsi resets, 2 of my daily pool tapes have been flagged as
"Full" even though they are virtually empty (e.g. 4% fill state).


my tape pool has settled nicely into using (almost) all tapes in the
changer, and having to add a new tape just because two empty tapes are
flagged as "full" seems to be a waste of resources.

since the tapes in question are part of the "incremental" strategy, i'd
rather not "purge" them.

is there a way to manually force a tape to be "appendable" again?

in "bat" i see that i could change the volum status to anything, even
"Append", but i want to make sure that this is a good idea.


thanks in advance.




-- 
IEM - network operation center
mailto:n...@iem.at
