[Bacula-users] Backups too big, and other questions

2006-04-24 Thread Scott Ruckh
Output from df -h looks like the following:

df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda6             129G   23G  100G  19% /
/dev/sda1              99M   27M   68M  29% /boot
none                 1006M     0 1006M   0% /dev/shm
/dev/sda3              49G  109M   46G   1% /home
/dev/sda2              97G  2.5G   89G   3% /var
/dev/sdb1             459G  131G  305G  31% /BACKUPS

FileSet looks like the following
# List of files to be backed up
FileSet {
  Name = "aname"
  Include {
    Options {
      signature = MD5
    }
    File = /
    File = /boot
    File = /var
    File = /home
  }

  Exclude {
    File = /proc
    File = /tmp
    File = /sys
    File = dev
    File = /.journal
    File = /.fsck
    File = /mnt
    File = /var/spool/squid
    File = /BACKUPS
  }
}

The backup is to disk for this single system, and the backup is well over
143GB in size.  The actual data being backed up is less than 30GB.  Why
is this backup so big?

With compression, this same backup (with locally attached storage) takes
up around 30GB of space and takes over 8 hours to run.  8 hours is way
too long to back up 30GB of data.

Similarly, a 10GB NT client backing up to the same storage and director
with compression enabled across the network takes about 48 minutes and
takes up far less space.  What is going on?

Also, I have used the Pool override values in my schedule.  If an
Incremental backup kicks off on schedule but is later promoted to a Full
backup because no existing Full backup is found, the job will still use
the Incremental Pool.  If the job was changed to Full, why wasn't the
Pool changed to Full?

Thanks.
Scott


Re: [Bacula-users] Backups too big, and other questions

2006-04-24 Thread Dan Langille
On 24 Apr 2006 at 16:21, Scott Ruckh wrote:

> Output from df -h looks like the following:
> 
> df -h
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/sda6             129G   23G  100G  19% /
> /dev/sda1              99M   27M   68M  29% /boot
> none                 1006M     0 1006M   0% /dev/shm
> /dev/sda3              49G  109M   46G   1% /home
> /dev/sda2              97G  2.5G   89G   3% /var
> /dev/sdb1             459G  131G  305G  31% /BACKUPS
> 
> FileSet looks like the following
> # List of files to be backed up
> FileSet {
>   Name = "aname"
>   Include {
>     Options {
>       signature = MD5
>     }
>     File = /
>     File = /boot
>     File = /var
>     File = /home
>   }
> 
>   Exclude {
>     File = /proc
>     File = /tmp
>     File = /sys
>     File = dev
>     File = /.journal
>     File = /.fsck
>     File = /mnt
>     File = /var/spool/squid
>     File = /BACKUPS
>   }
> }
> 
> The backup is to disk for this single system, and the backup is well over
> 143GB in size.  The actual data being backed up is less than 30GB.  Why
> is this backup so big?

Run the estimate command.  Something is taking up the space.  Are you
backing up to disk and also backing up the Bacula Volumes, i.e.,
backing up your backups?

-- 
Dan Langille : Software Developer looking for work
my resume: http://www.freebsddiary.org/dan_langille.php






Re: [Bacula-users] Backups too big, and other questions

2006-04-24 Thread Jason Martin
On Mon, Apr 24, 2006 at 07:31:43PM -0400, Dan Langille wrote:
> > The backup is to disk for this single system, and the backup is well over
> > 143GB in size.  The actual data being backed up is less than 30GB.  Why
> > is this backup so big?
> 
> Run the estimate command.  Something is taking up the space.  Are you 
> backing up to disk and also backing up the Bacula Volumes?  i.e 
> backing up your backups.
Also, are you spooling and including the spool directory in the
backup?

-Jason Martin
-- 
I float like an anchor and sting like a moth.
This message is PGP/MIME signed.




Re: [Bacula-users] Backups too big, and other questions

2006-04-24 Thread Scott Ruckh
This is what you said Jason Martin
> On Mon, Apr 24, 2006 at 07:31:43PM -0400, Dan Langille wrote:
>> > The backup is to disk for this single system, and the backup is
>> > well over 143GB in size.  The actual data being backed up is less
>> > than 30GB.  Why is this backup so big?
>>
>> Run the estimate command.  Something is taking up the space.  Are you
>> backing up to disk and also backing up the Bacula Volumes?  i.e
>> backing up your backups.
> Also, are you spooling and including the spool directory in the
> backup?
>
> -Jason Martin

You can see from my earlier post that the total used space on the
filesystems listed in my FileSet was about 25.3GB (per the output from
df -h).  My backup volume (on disk) was already 10 times the size of the
total used disk space before I cancelled the job.

I am excluding /BACKUPS in my file list, so you can see I am not backing
up my backups.

Also, I am going straight to disk, so I am not spooling first.

I think I have found the culprit: /var/log/lastlog.  It is a sparse file
and appears to be 1.2TB, which is way larger than the total space of the
filesystem.  In reality, this file only uses 64K of actual disk space,
but I am guessing Bacula sees it as a 1.2TB file.

I am guessing I can exclude this file, but is there a more graceful way
of handling it?  Now that I believe I have found the troublemaker, I
will go back through the Bacula archives to see if there is a solution.
If not, has anyone else had to deal with this file?

Thanks for your help.
Scott




Re: [Bacula-users] Backups too big, and other questions

2006-04-24 Thread Jason Martin
On Mon, Apr 24, 2006 at 06:50:33PM -0700, Scott Ruckh wrote:
> I think I have found the culprit: /var/log/lastlog.  It is a sparse file
> and appears to be 1.2TB, which is way larger than the total space of the
> filesystem.  In reality, this file only uses 64K of actual disk space,
> but I am guessing Bacula sees it as a 1.2TB file.
Wow. I don't think that file is particularly important and it
can probably be deleted. You might want to run an fsck on that
filesystem to make sure all is well.

What is the ls -l output on that file?

-Jason Martin
-- 
Just don't tell the asylum you saw me here
This message is PGP/MIME signed.




Re: [Bacula-users] Backups too big, and other questions

2006-04-24 Thread Dan Langille
If you'd CC'd me on your post to the list, I'd have seen it sooner, and
you'd have gotten your reply sooner too.  :)

On 24 Apr 2006 at 18:50, Scott Ruckh wrote:

> This is what you said Jason Martin
> > On Mon, Apr 24, 2006 at 07:31:43PM -0400, Dan Langille wrote:
> >> > The backup is to disk for this single system, and the backup is
> >> > well over 143GB in size.  The actual data being backed up is less
> >> > than 30GB.  Why is this backup so big?
> >>
> >> Run the estimate command.  Something is taking up the space.  Are you
> >> backing up to disk and also backing up the Bacula Volumes?  i.e
> >> backing up your backups.
> > Also, are you spooling and including the spool directory in the
> > backup?
> >
> > -Jason Martin
> 
> You can see from my earlier post that the total used space on the
> filesystems listed in my FileSet was about 25.3GB (per the output from
> df -h).  My backup volume (on disk) was already 10 times the size of the
> total used disk space before I cancelled the job.
> 
> I am excluding /BACKUPS in my file list so you can see I am not
> backing up my backups. 
> 
> Also, I am going straight to disk so I am not spooling first.

Thanks.  None of this was obvious to us.  We're good, but we're not 
*that* good.

> I think I have found the culprit: /var/log/lastlog.  It is a sparse file
> and appears to be 1.2TB, which is way larger than the total space of the
> filesystem.  In reality, this file only uses 64K of actual disk space,
> but I am guessing Bacula sees it as a 1.2TB file.

Bacula handles sparse files:  

http://www.bacula.org/rel-manual/Configuring_Director.html


sparse=yes|no
Enable special code that checks for sparse files such as those created by
ndbm. The default is no, so no checks are made for sparse files. You
may specify sparse=yes even on files that are not sparse files. No
harm will be done, but there will be a small additional overhead to
check for buffers of all zeros, and a small additional amount of space
on the output archive will be used to save the seek address of each
non-zero record read.

> I am guessing I can exclude this file, but is there a more graceful way
> of handling it?  Now that I believe I have found the troublemaker, I
> will go back through the Bacula archives to see if there is a solution.
> If not, has anyone else had to deal with this file?

Perhaps the above is for you.

-- 
Dan Langille : Software Developer looking for work
my resume: http://www.freebsddiary.org/dan_langille.php






Re: [Bacula-users] Backups too big, and other questions

2006-04-24 Thread Scott Ruckh
This is what you said Jason Martin
> On Mon, Apr 24, 2006 at 06:50:33PM -0700, Scott Ruckh wrote:
>> I think I have found the culprit: /var/log/lastlog.  It is a sparse
>> file and appears to be 1.2TB, which is way larger than the total space
>> of the filesystem.  In reality, this file only uses 64K of actual disk
>> space, but I am guessing Bacula sees it as a 1.2TB file.
> Wow. I don't think that file is particularly important and it
> can probably be deleted. You might want to run an fsck on that
> filesystem to make sure all is well.
>
> What is the ls -l output on that file?

I think I am going to try Dan's suggestion of setting the sparse
configuration parameter.

/var/log/lastlog is the file that holds the data reported by the lastlog
command (it displays the last time each user logged into the system).  I
am running CentOS 4.3, which is a RHEL4 clone.  I do not know whether it
is part of other distros.  A du on the file shows that in reality it is
only 64K.

Anyway, thanks for everyone's help.

Scott




Re: [Bacula-users] Backups too big, and other questions

2006-04-25 Thread Alan Brown

On Mon, 24 Apr 2006, Scott Ruckh wrote:

> I am excluding /BACKUPS in my file list, so you can see I am not backing
> up my backups.

VERIFY that this is happening. It is very easy to get the syntax wrong and
have /BACKUPS backed up unintentionally.

> I think I have found the culprit: /var/log/lastlog.  It is a sparse file
> and appears to be 1.2TB, which is way larger than the total space of the
> filesystem.

Wow, do you have that many users???

Lastlog is a database file containing (as its name suggests) information
on each user's last successful login.

> I am guessing I can exclude this file, but is there a more graceful way
> of handling it?

Yes, use the appropriate Options flags for sparse file handling in
bacula-dir.conf.

Note that you'd have had the same problem using tar unless you used its
sparse file handling options too.

AB









Re: [Bacula-users] Backups too big, and other questions

2006-04-25 Thread John Kodis
On Mon, Apr 24, 2006 at 06:50:33PM -0700, Scott Ruckh wrote:

> I think I have found the culprit: /var/log/lastlog.  It is a sparse
> file and appears to be 1.2TB, which is way larger than the total
> space of the filesystem.  In reality, this file only uses 64K of
> actual disk space, but I am guessing Bacula sees it as a 1.2TB file.

In the olden days, Unix user IDs were 16-bit integers, and the lastlog
file had a fixed-size entry for each one.  There were few enough that the
bit of wasted space didn't matter.  User IDs are now 32-bit, and on their
64-bit platforms Red Hat assigned the nfsnobody account the UID
4294967294 (-2 interpreted as an unsigned 32-bit value).  Since lastlog
still keeps an entry for every UID up to the largest one in use, that
single account bloats the file to the 1.2TB you've noted.  They've
corrected this problem in recent releases of their distribution.

$ uname -p
x86_64
$ ls -l /var/log/lastlog
-r  1 root root 11390920 Apr 25 07:49 /var/log/lastlog

Just FYI.

-- John Kodis.




Re: [Bacula-users] Backups too big, and other questions

2006-04-25 Thread Scott Ruckh

This is what you said John Kodis
> On Mon, Apr 24, 2006 at 06:50:33PM -0700, Scott Ruckh wrote:
>
>> I think I have found the culprit: /var/log/lastlog.  It is a sparse
>> file and appears to be 1.2TB, which is way larger than the total
>> space of the filesystem.  In reality, this file only uses 64K of
>> actual disk space, but I am guessing Bacula sees it as a 1.2TB file.
>
> In the olden days, Unix user IDs were 16-bit integers, and the lastlog
> file had a fixed-size entry for each one.  There were few enough that
> the bit of wasted space didn't matter.  User IDs are now 32-bit, and on
> their 64-bit platforms Red Hat assigned the nfsnobody account the UID
> 4294967294, so lastlog bloats to the 1.2TB you've noted.  They've
> corrected this problem in recent releases of their distribution.
>
> $ uname -p
> x86_64
> $ ls -l /var/log/lastlog
> -r  1 root root 11390920 Apr 25 07:49 /var/log/lastlog


I am not sure my FileSet is correct (see below), but it "sort of" worked.
The backup still took 3 hours longer than it does when I completely
exclude /var/log/lastlog.

FileSet {
  Name = "Firewall Full"

  Include {
Options {
  signature = MD5
  sparse = yes
  compression = GZIP
}
File = /var/log/lastlog
  }

  Include {
Options {
  compression = GZIP
  signature = MD5
}
File = /
File = /boot
File = /home
  }

  Include {
Options {
  compression = GZIP
  signature = MD5
  wildfile = "/var/log/lastlog"
  Exclude = yes
}
File = /var
  }

  Exclude {
File = /proc
File = /tmp
File = /sys
File = dev
File = /.journal
File = /.fsck
File = /mnt
File = /var/spool/squid
File = /BACKUPS
  }
}

The good news is the backup only took about 13GB of space, as opposed to
the 205GB it had gobbled up before I had to cancel the job.  The bad news
is that this backup takes 3 hours longer than running the backup with
/var/log/lastlog simply excluded.

Does the above FileSet look correct for what I am trying to accomplish?

Thanks for the help.

Scott




Re: [Bacula-users] Backups too big, and other questions

2006-04-25 Thread Pieter (NL)

I use the following FileSets; the excludes go in Options first, and the
includes after that:

FileSet {
  Name = "bsdserver1 files"
  Include {
    Options {
      signature   = MD5
      compression = GZIP
      Exclude = yes
      WildDir = "/proc"
      WildDir = "/dev"
    }
    File = "/etc"
    File = "/usr/local/etc"
    File = "/usr/image/part2/home"
    File = "/usr/image/part2/profiles"
    File = "/usr/image/part2/oldproj"
    File = "/usr/image/part1/mysql.bak"
  }
}


Maybe for you something like this will work:

FileSet {
  Name = "Firewall Full"

  Include {
    Options {
      compression = GZIP
      signature = MD5
      Exclude = yes
      WildFile = "/var/log/lastlog"
      WildDir = /proc
      WildDir = /tmp
      WildDir = /sys
      WildDir = /dev
      WildDir = /.journal
      WildDir = /.fsck
      WildDir = /mnt
      WildDir = /var/spool/squid
      WildDir = /BACKUPS
    }
    File = /
    File = /boot
    File = /home
    File = /var
  }
}

Pieter


Re: [Bacula-users] Backups too big, and other questions

2006-04-25 Thread Scott Ruckh


This is what you said Pieter (NL)
>
> I use the following FileSets; the excludes go in Options first, and the
> includes after that:
>
> [first FileSet snipped]
>
> Maybe for you something like this will work:
>
> FileSet {
>   Name = "Firewall Full"
>
>   Include {
>     Options {
>       compression = GZIP
>       signature = MD5
>       Exclude = yes
>       WildFile = "/var/log/lastlog"
>       WildDir = /proc
>       WildDir = /tmp
>       WildDir = /sys
>       WildDir = /dev
>       WildDir = /.journal
>       WildDir = /.fsck
>       WildDir = /mnt
>       WildDir = /var/spool/squid
>       WildDir = /BACKUPS
>     }
>     File = /
>     File = /boot
>     File = /home
>     File = /var
>   }
> }
>
> Pieter

Thanks for the reply, but I believe you have excluded the /var/log/lastlog
file altogether.  This is not the desired result.

Scott




Re: [Bacula-users] Backups too big, and other questions

2006-04-25 Thread Pieter (NL)

Did you check, with for example wx-console, whether the files you didn't
want backed up are indeed excluded, to make sure your FileSet is or isn't
the problem?

Pieter




Re: [Bacula-users] Backups too big, and other questions

2006-04-26 Thread Scott Ruckh

This is what you said Pieter (NL)
>
> Did you check, with for example wx-console, whether the files you
> didn't want backed up are indeed excluded, to make sure your FileSet is
> or isn't the problem?
>
> Pieter

Backing up the sparse file (1.2TB apparent size, 64K actual size), with
the sparse option enabled for the file, still takes about 2.5 hours.
Does this sound correct?

That is an improvement on the 7 hours without sparse enabled, but I would
have guessed that backing up a sparse file whose actual size is 64K would
only take a matter of seconds.

Here is the new FileSet with the sparse option enabled:

FileSet {
  Name = "Firewall Full"

  Include {
Options {
  signature = MD5
  sparse = yes
  compression = GZIP
}
File = /var/log/lastlog
  }

  Include {
Options {
  compression = GZIP
  signature = MD5
  wildfile = "/var/log/lastlog"
  wildfile = "/.journal"
  wildfile = "/.fsck"
  wilddir = /proc
  wilddir = /tmp
  wilddir = /sys
  wilddir = /dev
  wilddir = /mnt
  wilddir = /BACKUPS
  wilddir = /var/spool/squid
  Exclude = yes
}
File = /
File = /boot
File = /home
File = /var
  }
}

I could probably remove wilddir = /BACKUPS, as /BACKUPS is its own
filesystem, but I included it for readability.

If this is still not the correct way to handle the sparse file, please
let me know.

Thanks.
Scott





Re: [Bacula-users] Backups too big, and other questions

2006-04-26 Thread Pieter (NL)

From what I understand of the manual and fd source code, Bacula is
actually reading all 1.2TB of data to scan for blocks that are all zeros.
I could imagine this takes time.
What is actually stored in /var/log/lastlog?  Is it worth being backed up?

Pieter


Scott Ruckh wrote:
> 
> Backing up the sparse file (1.2TB apparent size, 64K actual size), with
> the sparse option enabled for the file, still takes about 2.5 hours.
> Does this sound correct?
> 
> That is an improvement on the 7 hours without sparse enabled, but I
> would have guessed that backing up a sparse file whose actual size is
> 64K would only take a matter of seconds.
> 
> [FileSet snipped]
> 


Re: [Bacula-users] Backups too big, and other questions

2006-04-26 Thread Kern Sibbald
On Wednesday 26 April 2006 15:44, Scott Ruckh wrote:
> This is what you said Pieter (NL)
>
> > Did you check, with for example wx-console, whether the files you
> > didn't want backed up are indeed excluded, to make sure your FileSet
> > is or isn't the problem?
> >
> > Pieter
>
> Backing up the sparse file (1.2TB apparent size, 64K actual size), with
> the sparse option enabled for the file, still takes about 2.5 hours.
> Does this sound correct?

Yes, due to the nature of the problem, Bacula is forced to read 1.2TB of
data.  The OS doesn't simply tell Bacula that a block doesn't exist; it
returns zeros for every non-existent block, and Bacula has to look at all
those stupid zeros.  Wading through 1.2TB of non-existent zeros takes
time.  Sparse files are not very backup friendly.


-- 
Best regards,

Kern

  (">
  /\
  V_V




Re: [Bacula-users] Backups too big, and other questions

2006-04-26 Thread Ryan Novosielski
lastlog contains the last logins to the system. While many hackers are
smarter than to leave it around, some are not, and if your machine is
brought down, it can be forensic evidence (or at least clues) if you
have that file. I'd back it up if it were my system.

Really, the thing that needs to be done is to find out why on this
system it is 1.2TB. Chances are, restarting whatever process is currently
holding lastlog open (or rotating the file) may take care of the
problem. I doubt this persists across a reboot (unless the file is made
this large because of a different problem).

Try running lsof on that file, if you have it.

Pieter (NL) wrote:

> From what I understand of the manual and fd source code, Bacula is
> actually reading all 1.2TB of data to scan for blocks that are all
> zeros.  I could imagine this takes time.
> What is actually stored in /var/log/lastlog?  Is it worth being backed
> up?
>
> Pieter
>




Re: [Bacula-users] Backups too big, and other questions

2006-04-26 Thread Pieter (NL)

Can't you use the last command (/var/log/wtmp) for that?
The size is that big, according to a Google search, because a userid
exists which is probably -1 or really big.  That is why the file is only
64K with all the nulls left out: one record for each userid who has
logged in.  The index into this file is the userid, so 1.2TB divided by
the record size gives the big userid.


Ryan Novosielski wrote:
> 
> lastlog contains the last logins to the system. While many hackers are
> smarter than to leave it around, some are not, and if your machine is
> brought down, it can be forensic evidence (or at least clues) if you
> have that file. I'd back it up if it were my system.
> 
> Really, the thing that needs to be done is to find out why on this
> system it is 1.2TB. Chances are, restarting whatever process is
> currently holding lastlog open (or rotating the file) may take care of
> the problem. I doubt this persists across a reboot (unless the file is
> made this large because of a different problem).
> 
> Try running lsof on that file, if you have it.
> 


Re: [Bacula-users] Backups too big, and other questions

2006-04-26 Thread Ryan Novosielski
A correction to what I said: it appears to be "last login" data. I'm
assuming this is used for things like finger, etc. I would probably want
to keep as much of that type of data as possible, again, if it were my
system.

Recall that the individual says the file is REALLY only 64K, and only
APPEARS to be 1.2TB.

Pieter (NL) wrote:

> Can't you use the last command (/var/log/wtmp) for that?
> The size is that big, according to a Google search, because a userid
> exists which is probably -1 or really big.  That is why the file is
> only 64K with all the nulls left out: one record for each userid who
> has logged in.
>




Re: [Bacula-users] Backups too big, and other questions

2006-04-26 Thread Scott Ruckh
This is what you said Eric Warnke
>
> Oops, tar -Scf: a lowercase s is something else.
>
> Cheers,
> Eric

Good idea.

I tried:
tar -Sc -f /var/log/lastlog.tar /var/log/lastlog

But it appears to do much the same as Bacula: it too must read through
the entire file, so it does not speed things up.

Thanks.
Scott




Re: [Bacula-users] Backups too big, and other questions

2006-04-26 Thread Bill Moran
"Scott Ruckh" <[EMAIL PROTECTED]> wrote:

> This is what you said Eric Warnke
> >
> > Oops, tar -Scf: a lowercase s is something else.
> >
> > Cheers,
> > Eric
> 
> Good idea.
> 
> I tried:
> tar -Sc -f /var/log/lastlog.tar /var/log/lastlog
> 
> But it appears to do much the same as Bacula: it too must read through
> the entire file, so it does not speed things up.

The only program I'm aware of that will back up sparse files efficiently
is dump.  But dump isn't cross-platform - not even a little.

-- 
Bill Moran
Potential Technologies
http://www.potentialtech.com




Re: [Bacula-users] Backups too big, and other questions

2006-04-26 Thread Eric Warnke
Unfortunately I have looked high and low; there is just no good way to
read sparse files intelligently.  Whoever thought of providing the
functionality without an API to step through the block mapping was a
moron.  It is truly a brain-dead technology.  At least there is an fcntl
under Win32 to deal with it with some intelligence.

Cheers,
Eric

On 4/26/06, Bill Moran <[EMAIL PROTECTED]> wrote:
> "Scott Ruckh" <[EMAIL PROTECTED]> wrote:
> > I tried:
> > tar -Sc -f /var/log/lastlog.tar /var/log/lastlog
> >
> > But it appears to do much the same as Bacula: it too must read through
> > the entire file, so it does not speed things up.
>
> The only program I'm aware of that will back up sparse files efficiently
> is dump.  But dump isn't cross-platform - not even a little.
>
> --
> Bill Moran
> Potential Technologies
> http://www.potentialtech.com


Re: [Bacula-users] Backups too big, and other questions

2006-04-26 Thread Scott Ruckh
This is what you said Eric Warnke
> Unfortunately I have looked high and low; there is just no good way to
> read sparse files intelligently.  Whoever thought of providing the
> functionality without an API to step through the block mapping was a
> moron.  It is truly a brain-dead technology.  At least there is an
> fcntl under Win32 to deal with it with some intelligence.
>
> Cheers,
> Eric
>
>
>
> On 4/26/06, Bill Moran <[EMAIL PROTECTED]> wrote:
>>
>> "Scott Ruckh" <[EMAIL PROTECTED]> wrote:
>>
>> > This is what you said Eric Warnke
>> > >
>> > > Oops, tar -Scf: a lowercase s is something else.
>> > >
>> > > Cheers,
>> > > Eric
>> >
>> > Good idea.
>> >
>> > I tried:
>> > tar -Sc -f /var/log/lastlog.tar /var/log/lastlog
>> >
>> > But it appears to do much the same as Bacula: it too must read
>> > through the entire file, so it does not speed things up.
>>
>> The only program I'm aware of that will back up sparse files efficiently
>> is dump.  But dump isn't cross-platform - not even a little.
>>

I want to thank everyone for their help.

I am not even sure why this file is created this way; it seems like a
strange implementation from someone on the outside looking in.

I will just learn to deal with /var/log/lastlog until something better
comes up.

Thanks.
Scott




Re: [Bacula-users] Backups too big, and other questions

2006-04-26 Thread Eric Warnke
SOLUTIONTurns out there is a way to not only identify sparse files, but also read the map as root!  FIBMAP ioctl will do the job quite nicely and there is even a utility to read it ( as root ) filefrag -v 
This sure as heck beats scanning the file for zeros manually!Cheers,EricOn 4/26/06, Scott Ruckh <
[EMAIL PROTECTED]> wrote:This is what you said Eric Warnke> Unfortunately I have looked high and low, there is just no good way to
> read> sparse files intelligently.  Whoever though of providing the functionality> without an API to step through the block mapping was a moron.  It is truly> a> brain dead technology.  At least there is a fcntl under Win32 to deal with
> it with some intelligence.>> Cheers,> Eric On 4/26/06, Bill Moran <[EMAIL PROTECTED]> wrote:
 "Scott Ruckh" <[EMAIL PROTECTED]> wrote: > This is what you said Eric Warnke>> > >>> > > Oops tar -Scf lowercase s is somthing else.
>> > >>> > > Cheers,>> > > Eric>> >>> > Good idea.>> >>> > I tried:>> > tar -Sc -f /var/log/lastlog.tar /var/log/lastlog
>> >>> > But it appears to do much the same as bacula.  It too must read>> through>> > the entire file, so it does not speed things up. The only program I'm aware of that will back up sparse files efficiently
>> is dump.  But dump isn't cross-platform - not even a little.>>I want to thank everyone for their help.I am not even sure why this file is created this way, seems like a strangeimplementation from someone on the outside looking in.
I will just learn to deal with /var/log/lastlog until something bettercome up.Thanks.Scott


Re: [Bacula-users] Backups too big, and other questions

2006-04-26 Thread Kern Sibbald
On Thursday 27 April 2006 04:42, Eric Warnke wrote:
> Unfortunately I have looked high and low; there is just no good way to
> read sparse files intelligently.  Whoever thought of providing the
> functionality without an API to step through the block mapping was a
> moron.  It is truly a brain-dead technology.  At least there is an
> fcntl under Win32 to deal with it with some intelligence.

Yes, the SCSI tape interface and accessing sparse files on Unix were two
horrible design decisions (or perhaps a lack of design).  Quite a
contrast to the rest of Unix.


-- 
Best regards,

Kern

  (">
  /\
  V_V




Re: [Bacula-users] Backups too big, and other questions

2006-04-26 Thread Kern Sibbald
On Thursday 27 April 2006 06:04, Eric Warnke wrote:
> SOLUTION

I haven't found the FIBMAP documentation, but are you really sure this
will let a program properly read the file?  I ask because I pointed
filefrag at a non-sparse file, and it seems to report a discontinuity.
Perhaps the underlying ioctl() gives the information a user program
needs, but filefrag seems to report fragmentation rather than "holes" in
the file.


> Turns out there is a way to not only identify sparse files, but also
> read the map as root!  The FIBMAP ioctl will do the job quite nicely,
> and there is even a utility to read it (as root): filefrag -v
>
> This sure as heck beats scanning the file for zeros manually!
>
> Cheers,
> Eric

-- 
Best regards,

Kern

  (">
  /\
  V_V




Re: [Bacula-users] Backups too big, and other questions

2006-05-08 Thread Russell Howe
Eric Warnke wrote, sometime around 27/04/06 03:42:
> Unfortunately I have looked high and low; there is just no good way to
> read sparse files intelligently.  Whoever thought of providing the
> functionality without an API to step through the block mapping was a
> moron.  It is truly a brain-dead technology.  At least there is an
> fcntl under Win32 to deal with it with some intelligence.

As far as I know, there is no cross-platform way to do it, and in Linux,
no filesystem-agnostic way to do it.

XFS has a special ioctl, XFS_BMAP, or something like that. It returns a
list of extents occupied by the file, and from there you can work out
where the gaps are.

I think there were whispers on the linux-xfs/l-k mailing lists about
genericising this across all file systems, but I don't know if it
actually got taken up as a serious proposition.

Anyway, at least in XFS, it's possible.

# dd if=/dev/zero seek=50 of=foo count=100
100+0 records in
100+0 records out
51200 bytes (51 kB) copied, 0.000916 seconds, 55.9 MB/s
# xfs_bmap foo
foo:
0: [0..47]: hole
1: [48..151]: 20688..20791
#

Here we can see that the first 48 blocks are a hole, and the next 104
blocks contain data (in this case, zeros). Presumably Bacula would have
read this entire file as one big hole, had it been backed up with
sparse turned on? If so, this could lead to a substantial disk space
discrepancy on a system if you back up with sparse turned on and then
restore (you'll have 'gained' any disk space previously occupied by
zeros).

And no, you don't need to be root to perform the ioctl.

-- 
Russell Howe
[EMAIL PROTECTED]

