Re: [Bacula-users] Bacula Autochanger Questions

2010-08-05 Thread Gary R. Schmidt
On Thu, August 5, 2010 20:21, sid009 wrote:
>
>>
>> To the OP - Have you put the barcode stickers (from HP) *on the tape
>> cartridges the correct way up?*
>>
>
>
> Didn't receive any barcodes from HP unfortunately, so I have printed off
> some Code 39 ("3 of 9") barcodes (ending with L1, L2, ...) to stick on the
> tapes - thanks for the tip, I will make sure the labels are the correct way
> up. I will replace a tape and try re-labelling it to see how it goes (I am
> glad the tapes are in 10's, not 100's).
What does "mtx status" return?

Does "btape test" pass?

Cheers,
GaryB-)



--
This SF.net email is sponsored by 

Make an app they can't live without
Enter the BlackBerry Developer Challenge
http://p.sf.net/sfu/RIM-dev2dev 
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Slow backups under XenServer with Windows2003 SP1/R2guests

2010-08-05 Thread James Harper
> Hello,
> 
>   So I am trying to back up my file server with Bacula and I am having
> a difficult time getting it to work. I am able to back up desktops at
> 7MB/s, non-virtualized Windows servers at 4.7MB/s, virtualized
> desktops at 4.7MB/s and virtualized Windows servers at 100KB/s.
> 
>   The servers are all Windows 2003 SP1 or Windows 2003 R2; the
> desktops (virtualized or not) are Windows XP SP3/SP2. My host servers
> all run Citrix XenServer Enterprise on HP blades with 8GB
> of RAM and dual quad-core CPUs on SAS HDDs. The VM storage resides
> in an HP EVA4400 connected via fibre.
> 
>   If I mount server1's (virtualized Windows 2003) share on the Bacula
> server and copy the same amount of data (3.1GB) it gets done in about
> 6 min. If I copy the data via "scp" it gets done in about 5 min at a
> rate of 11MB/s.
> 
>   But when I run Bacula on any virtualized Windows 2003 server the
> data rate doesn't go higher than 200KB/s.
> 
>   NOTE: I am backing all this up to disk.

FWIW, I'm getting at least 1MByte/second backing up Windows machines
running under Xen (not Citrix XenServer, though), so I don't think there
is anything inherently broken in Bacula, although you didn't mention which
version you are running.

Try turning off the TCP offload features in Dom0 and Windows and see if
that makes a difference. I suspect it won't, if scp is working over the
exact same network link, but many strange network bugs in the TCP offload
code start to surface once you start playing with virtualisation.

Otherwise, fire up wireshark in the Windows DomU and look for packet
retransmits etc that would indicate network problems.

James



Re: [Bacula-users] labeled volume doesnt exist in db.

2010-08-05 Thread John Drescher
2010/8/5 Jeremiah D. Jester :
> I'm having problems labeling a couple of new volumes. I tried entering 'label
> barcodes', which works on most tapes, but there are several that it doesn't
> return a status, pool or media type for.  See volume '000107', which I singled
> out below.
>
>
>
> I then try to ‘label’ the volume but bacula complains that it is already
> labeled. When I attempt to change the volume status I get a sql error
> stating that no volume exists.
>
>
>
> Does anyone know what is going on? See bacula output below for more info.
>
No, but you can get the media record for any volume into the database
using bscan; see the manual for details on its operation.

The following link will pull the volume into the catalog:

http://www.bacula.org/manuals/en/utility/utility/Volume_Utility_Tools.html#SECTION00272000

If you did not write any jobs to the volume, an alternate method is to
erase the volume (assuming it's already loaded in /dev/nst0):

mt -f /dev/nst0 rewind
mt -f /dev/nst0 weof

Then relabel with label barcodes.

John



[Bacula-users] labeled volume doesnt exist in db.

2010-08-05 Thread Jeremiah D. Jester
I'm having problems labeling a couple of new volumes. I tried entering 'label
barcodes', which works on most tapes, but there are several that it doesn't
return a status, pool or media type for.  See volume '000107', which I singled
out below.

I then try to 'label' the volume but Bacula complains that it is already
labeled. When I attempt to change the volume status I get an SQL error
stating that no volume exists.

Does anyone know what is going on? See bacula output below for more info.

Thanks,

JJ


*status slots
 Slot |  Volume Name  |  Status  |  Media Type  |  Pool  |
------+---------------+----------+--------------+--------+
  ...
  18* |    000107     |    ?     |      ?       |   ?    |
  ...


*label
The defined Storage resources are:
 1: File
 2: Tape
Select Storage resource (1-2): 2
Enter autochanger drive[0]: 1
Enter new Volume name: 000107
Enter slot (0 or Enter for none): 18
Defined Pools:
 1: Onsite
 2: Catalog
 3: Offsite
 4: Scratch
Select the Pool (1-4): 4
Connecting to Storage daemon Tape at bacula.microslu.washington.edu:9103 ...
Sending label command for Volume "000107" Slot 18 ...
3307 Issuing autochanger "unload slot 15, drive 1" command.
3304 Issuing autochanger "load slot 18, drive 1" command.
3305 Autochanger "load slot 18, drive 1", status is OK.
3920 Cannot label Volume because it is already labeled: "000107"
Label command failed for Volume 000107.
Do not forget to mount the drive!!!

*mount
The defined Storage resources are:
 1: File
 2: Tape
Select Storage resource (1-2): 2
Enter autochanger drive[0]: 1
Enter autochanger slot: 18
3001 Device "Drive2" (/dev/nst1) is mounted with Volume "000107"


*update volume=000107
Parameters to modify:
 1: Volume Status
 2: Volume Retention Period
 3: Volume Use Duration
 4: Maximum Volume Jobs
 5: Maximum Volume Files
 6: Maximum Volume Bytes
 7: Recycle Flag
 8: Slot
 9: InChanger Flag
10: Volume Files
11: Pool
12: Volume from Pool
13: All Volumes from Pool
14: All Volumes from all Pools
15: Enabled
16: RecyclePool
17: Action On Purge
18: Done
Select parameter to modify (1-18): 1
sql_get.c:1062 Media record for Volume "000107" not found.



Jeremiah Jester
Senior Informatics Specialist
UW Microbiology - Katze Lab
P: 206-732-6185




[Bacula-users] Problems with multiple drives in changer

2010-08-05 Thread Jon Schewe
 So I've got 2 drives in my tape library. Up until now I've only had
bacula using the first drive. Right now that drive is giving me some
trouble, so I want to tell bacula to switch to the second drive, but
bacula won't do that. When I try to mount a volume with the tape library
I get the following output from bacula-sd (with debug=100):
mn-server-sd: stored.c:552-0 Could not open device "Drive-1" (/dev/nst1)
mn-server-sd: dircmd.c:599-0 Try changer device Drive-0
mn-server-sd: dircmd.c:611-0 Device DLT-changer not autoselect skipped.
mn-server-sd: dircmd.c:599-0 Try changer device Drive-1
mn-server-sd: dircmd.c:620-0 Device DLT-changer drive wrong: want=0
got=1 skipping

Here are my device sections; note that Autoselect is set to "no" on Drive-0.
Autochanger {
  Name = DLT-changer
  Device = Drive-0
  Device = Drive-1
  Changer Command = "/usr/lib/bacula/mtx-changer %c %o %S %a %d"
  Changer Device = /dev/tape_lib_changer0
}

Device {
  Name = Drive-0
  Drive Index = 0
  Media Type = DLT-7000
  Archive Device = /dev/nst0
  AutomaticMount = yes   # when device opened, read it
  AlwaysOpen = yes
  RemovableMedia = yes
  RandomAccess = no
  AutoChanger = yes
  Autoselect = no
  # Enable the Alert command only if you have the mtx package loaded
  #Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
}
Device {
  Name = Drive-1
  Drive Index = 1
  Media Type = DLT-7000
  Archive Device = /dev/nst1
  AutomaticMount = yes   # when device opened, read it
  AlwaysOpen = yes
  RemovableMedia = yes
  RandomAccess = no
  AutoChanger = yes
  # Enable the Alert command only if you have the mtx package loaded
  #Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
}

I saw someone with a similar problem a while back on the list, but they
never got an answer.
http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg29543.html

-- 
Jon Schewe | http://mtu.net/~jpschewe
If you see an attachment named signature.asc, this is my digital
signature. See http://www.gnupg.org for more information.




Re: [Bacula-users] Quantum Scalar i500 slow write speed

2010-08-05 Thread Henry Yen
On Thu, Aug 05, 2010 at 17:17:39PM +0200, Christian Gaul wrote:
> Am 05.08.2010 16:57, schrieb Henry Yen:

First, I welcome this discussion, however arcane (as long as the
List permits it, of course) -- I am happy to discover if I'm wrong
in my thinking.  That said, I'm not (yet) convinced.

This part in particular I stand by, as a response to the notion of
using *either* /dev/random *or* /dev/urandom:

> > Again, on Linux, you generally can't use /dev/random at all -- it will
> > block after reading just a few dozen bytes.  /dev/urandom won't block,
> > but your suggestion of creating a large file from it is very sensible.

For this part, however, I don't agree with your assertion, for two reasons:

> > /dev/urandom seems to measure about 3MB/sec or thereabouts, so creating
> > a large "uncompressible" file could be done sort of like:
> >
> >dd if=/dev/urandom of=tempchunk count=1048576
> >cat tempchunk tempchunk tempchunk tempchunk tempchunk tempchunk > bigfile
> >   
> cat-ting random data a couple of times to make one big random file won't
> really work unless the size of the chunks is way bigger than the
> "library" (window) size of the compression algorithm.

Reason 1: the example I gave yields a file size for "tempchunk" of 512MB,
not 1MB as given in your counter-example.  I agree that (at least nowadays)
catting 1MB chunks into a 6MB chunk is likely (although not assured)
to lead to greatly reduced size during later compression, but I disagree
that catting 512MB chunks into a 3GB chunk is likely to be compressible
by any general-purpose compressor.

On a 32-bit 3GB machine, I created "chunk" from the above "dd" command,
and then ran gzip/bzip2/lzma, all with the "-9" flag, resulting in:

  536870912 chunk
  536957165 chunk.gz
  544157933 chunk.lzma
  539244413 chunk.bz2
 3221225472 bigchunk
 3221746982 bigchunk.gz
 3235476896 bigchunk.bz2
 3265163180 bigchunk.lzma

> your example will probably lead to a 5:1 compression ratio :-)
> so this will more than likely not be a really good test.

> Also, afaik tar
> has an "optimization" when outputting to /dev/null, better output to
> /dev/zero instead if using tar to check possible speeds.

(Yes, although there is considerable disagreement over this (mis)feature;
 my take is that the consensus is "yes, probably bad, definitely
 under-documented (the behavior does match the "info" document), but
 too late to change now".)

> P.S.: Checked to make sure.. depends on the compression algorithm of course:
> 
> $ dd if=/dev/urandom of=chunk bs=1M count=1
> 1+0 records in
> 1+0 records out
> 1048576 bytes (1.0 MB) copied, 0.154357 s, 6.8 MB/s
> 
> $ gzip chunk
> $ ls -al chunk*
> -rw-r--r-- 1 christian christian 1048576  5. Aug 17:06 chunk
> -rw-r--r-- 1 christian christian 1048764  5. Aug 17:06 chunk.gz
> 
> $ cat chunk chunk chunk chunk chunk chunk > bigchunk
> $ gzip bigchunk
> $ ls -al bigchunk*
> -rw-r--r-- 1 christian christian 6291456  5. Aug 17:07 bigchunk
> -rw-r--r-- 1 christian christian 6292442  5. Aug 17:07 bigchunk.gz
> 
> $ lzma bigchunk
> $ ls -al bigchunk*
> -rw-r--r-- 1 christian christian 6291456  5. Aug 17:07 bigchunk
> -rw-r--r-- 1 christian christian 1063718  5. Aug 17:07 bigchunk.lzma

Reason 2: Although the compression of data on a general-purpose machine
will certainly get faster and more capable of detecting duplication inside
larger and larger chunks, I daresay that this ability with regards to hardware
compression is unlikely to increase dramatically.  For instance, the lzma
of that 3GB file as shown above ran for about 30 minutes.  By contrast,
with a 27MB/sec write physical write speed, that same 3GB would only take
about 2 minutes to actually write.  Even at a 6:1 compression ratio
(necessarily limited to that exact number because of this example), it would
still take more than twice as long just to analyze the data to yield
that compression as to write the uncompressed stream.  Put another way,
I don't see tape drives (currently in the several-hundred-gigabyte range)
increasing their compression buffer sizes or CPU capabilities to analyze
more than a couple dozens of megabytes, at most, anytime in the near future.
That is why I think that generating a "test" file for write-throughput
testing that's a few GB's or so in length, made up from chunks that are 
a few hundreds of MB's (larger chunks take more and more time to create),
is quite sufficient.
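The window-size argument is easy to demonstrate in software. As a stand-in for a drive's hardware compressor, zlib's deflate has a 32 KiB history window, so repeats spaced 1 MiB apart are invisible to it while repeats spaced 1 KiB apart compress away almost entirely. A minimal sketch (zlib standing in for the tape hardware, which is only an analogy):

```python
import os
import zlib

CHUNK_BIG = os.urandom(1 << 20)    # 1 MiB of random data
CHUNK_SMALL = os.urandom(1 << 10)  # 1 KiB of random data

# Six copies of the 1 MiB chunk: repeats are 1 MiB apart, far beyond
# deflate's 32 KiB window, so the result is essentially incompressible.
big = CHUNK_BIG * 6
ratio_big = len(big) / len(zlib.compress(big, 9))

# Thousands of copies of the 1 KiB chunk: repeats fall well inside
# the window, so the result compresses dramatically.
small = CHUNK_SMALL * 6144
ratio_small = len(small) / len(zlib.compress(small, 9))

print(f"1 MiB chunks: {ratio_big:.2f}:1, 1 KiB chunks: {ratio_small:.0f}:1")
```

The same logic is why catting a few 512MB random chunks is a safe way to build a write-throughput test file: no plausible hardware buffer sees repeats that far apart.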

-- 
Henry Yen   Aegis Information Systems, Inc.
Senior Systems Programmer   Hicksville, New York



Re: [Bacula-users] bacula confused about slots on autochanger

2010-08-05 Thread Josh Fisher

On 8/4/2010 5:43 PM, Maria Mckinley wrote:
> On 8/3/2010 9:20 PM, Maria McKinley wrote:
> >> It seems that part of the problem I have been having with backup is that
> >> bacula is having communication problems with the autochanger. Bacula
> >> tells the autochanger to load slot 3, but instead slot 2 is loaded:
> On some systems, (BSD?) slot numbers are expected to begin at zero.
> Bacula (and Linux) expect slot numbers to begin at one. When Bacula
> sends a command to the autochanger to load slot 1, it is expecting the
> first slot to be loaded. But on systems that number from zero, when you
> use mtx to load slot 1 it will actually load the second slot. This can
> be worked around by changing your autochanger script to subtract 1 from
> the slot number on a load command and add 1 to slot numbers for the
> loaded and list commands.
>
> I am running linux, but this sounds like it would fix the problem. It
> isn't clear to me what to change. Here is the relevant bit from my
> script (3 is the input argument for slots). I don't think I can just
> change the setup argument to slot=$3+1, but not entirely sure why not:
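On the `slot=$3+1` aside: plain shell assignment does no arithmetic, so that would store the literal string "3+1". The hypothetical off-by-one shim (only relevant on changers that number slots from zero, which Linux mtx does not) would need POSIX arithmetic expansion, sketched here:

```shell
slot=3

# Plain assignment does no math: this stores the literal string "3+1".
literal="$slot+1"

# POSIX arithmetic expansion is what the workaround would actually need:
mtx_slot=$((slot - 1))          # value passed to mtx on 'load'/'unload'
bacula_slot=$((mtx_slot + 1))   # value reported back for 'loaded'/'list'

echo "$literal $mtx_slot $bacula_slot"
```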

It is not the problem if you are running Linux. But it does look like 
the output of the mtx-changer script is not correct for the 'list' 
command. The excerpt from your mtx-changer script looks quite different 
from the one installed on my Linux system by Bacula 5.0.2. What version 
of Bacula are you using? What is the actual output of:

billie:~#  /usr/lib64/bacula/mtx-changer /dev/sg2 list
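For reference, Bacula expects the 'list' command to print one `slot:volume` line per occupied slot. A rough sketch of the parsing the script performs, with the `mtx status` output format assumed (verify against your own changer):

```python
import re

# Hypothetical `mtx status` output (format assumed, not from this thread)
sample = """  Storage Changer /dev/sg2:2 Drives, 8 Slots ( 0 Import/Export )
Data Transfer Element 0:Empty
Data Transfer Element 1:Full (Storage Element 3 Loaded):VolumeTag = 000105
      Storage Element 1:Full :VolumeTag=000101
      Storage Element 2:Empty
      Storage Element 4:Full :VolumeTag=000104"""

# Occupied storage slots become "slot:volume" lines, as Bacula expects
entries = []
for line in sample.splitlines():
    m = re.match(r"\s+Storage Element (\d+):Full(?: :VolumeTag=(\S+))?", line)
    if m:
        entries.append(f"{m.group(1)}:{m.group(2) or ''}")
print("\n".join(entries))
```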


> # Setup arguments
> ctl=$1
> cmd="$2"
> slot=$3
> device=$4
> drive=$5
>
> debug "Parms: $ctl $cmd $slot $device $drive"
>
> case $cmd in
>  unload)
> debug "Doing mtx -f $ctl unload $slot $drive"
> #
> # enable the following line if you need to eject the cartridge
> # mt -f $device offline
> # sleep 10
> ${MTX} -f $ctl unload $slot $drive
> ;;
>
>  load)
> debug "Doing mtx -f $ctl load $slot $drive"
> ${MTX} -f $ctl load $slot $drive
> rtn=$?
> #
> # Increase the sleep time if you have a slow device
> # or remove the sleep and add the following:
> # sleep 15
> wait_for_drive $device
> exit $rtn
> ;;
>
>  list)
> debug "Doing mtx -f $ctl -- to list volumes"
> make_temp_file
> # Enable the following if you are using barcodes and need an inventory
> # ${MTX} -f $ctl inventory
> ${MTX} -f $ctl status >${TMPFILE}
> rtn=$?
> cat ${TMPFILE} | grep " Storage Element [0-9]*:.*Full" | awk "{print \$3 \$4}" | sed "s/Full *\(:VolumeTag=\)*//"
> #
> # If you have a VXA PacketLoader and the above does not work, try
> # turning it off and enabling the following line.
> # cat ${TMPFILE} | grep " *Storage Element [0-9]*:.*Full" | sed "s/ Storage Element //" | sed "s/Full :VolumeTag=//"
> #
> cat ${TMPFILE} | grep "^Data Transfer Element [0-9]*:Full (Storage Element [0-9]" | awk '{printf "%s:%s\n",$7,$10}'
> rm -f ${TMPFILE} >/dev/null 2>&1
> exit $rtn
> ;;
>
>  loaded)
> debug "Doing mtx -f $ctl $drive -- to find what is loaded"
> make_temp_file
> ${MTX} -f $ctl status >${TMPFILE}
> rtn=$?
> cat ${TMPFILE} | grep "^Data Transfer Element $drive:Full" | awk "{print \$7}"
> cat ${TMPFILE} | grep "^Data Transfer Element $drive:Empty" | awk "{print 0}"
> rm -f ${TMPFILE} >/dev/null 2>&1
> exit $rtn
> ;;
>
>  slots)
> debug "Doing mtx -f $ctl -- to get count of slots"
> ${MTX} -f $ctl status | grep " *Storage Changer" | awk "{print \$5}"
> ;;
> esac
>
> thanks,
> maria
>

--
The Palm PDK Hot Apps Program offers developers who use the
Plug-In Development Kit to bring their C/C++ apps to Palm for a share
of $1 Million in cash or HP Products. Visit us here for more details:
http://p.sf.net/sfu/dev2dev-palm
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Slow backups under XenServer with Windows2003 SP1/R2 guests

2010-08-05 Thread Romer Ventura
Hello,

So I am trying to back up my file server with Bacula and I am having
a difficult time getting it to work. I am able to back up desktops at
7MB/s, non-virtualized Windows servers at 4.7MB/s, virtualized
desktops at 4.7MB/s and virtualized Windows servers at 100KB/s.

The servers are all Windows 2003 SP1 or Windows 2003 R2; the
desktops (virtualized or not) are Windows XP SP3/SP2. My host servers
all run Citrix XenServer Enterprise on HP blades with 8GB
of RAM and dual quad-core CPUs on SAS HDDs. The VM storage resides
in an HP EVA4400 connected via fibre.

If I mount server1's (virtualized Windows 2003) share on the Bacula
server and copy the same amount of data (3.1GB) it gets done in about
6 min. If I copy the data via "scp" it gets done in about 5 min at a
rate of 11MB/s.

But when I run Bacula on any virtualized Windows 2003 server the
data rate doesn't go higher than 200KB/s.

NOTE: I am backing all this up to disk.

Here are the relevant config files:

bacula-dir.conf:
Job {
   Name = test1
   Type = Backup
   Level = Full
   Client = 
   FileSet = "wintest"
   Schedule = "WeeklyCycle"
   Storage = File
   Messages = Standard
   Pool = Default
   Write Bootstrap = "/var/lib/bacula/%c.bsr"
   Priority = 10
}

Client {
   Name = vir-server1-fd
   Address = vir-server1.DOMAIN.COM
   FDPort = 9102
   Catalog = MyCatalog
   Password = "asdfasdfsadfaf"  # password for FileDaemon
   File Retention = 14 days
   Job Retention = 1 month
   AutoPrune = yes # Prune expired Jobs/Files
}

# Client (File Services) to backup

Client {
   Name = ph-desktop1-fd
   Address = ph-desktop1.DOMAIN.COM
   FDPort = 9102
   Catalog = MyCatalog
   Password = "asdfsadfdsafsdafasd"  # password for FileDaemon
   File Retention = 30 days# 30 days
   Job Retention = 6 months# six months
   AutoPrune = yes # Prune expired Jobs/Files
}

# Client (File Services) to backup

Client {
   Name = vir-desktop2-fd
   Address = vir-desktop2.DOMAIN.COM
   FDPort = 9102
   Catalog = MyCatalog
   Password = "asdfsdfdsfsadfa/"  # password for FileDaemon
   File Retention = 30 days# 30 days
   Job Retention = 6 months# six months
   AutoPrune = yes # Prune expired Jobs/Files
}

Pool {
   Name = Default
   Pool Type = Backup
   Recycle = yes           # Bacula can automatically recycle Volumes
   AutoPrune = yes         # Prune expired volumes
   Volume Retention = 3 days
   Maximum Volume Jobs = 1
   LabelFormat = File-
}

bacula-sd.conf:
Device {
   Name = FileStorage
   Media Type = File
   Archive Device = /bacula-staging/backups
   LabelMedia = yes        # lets Bacula label unlabeled media
   Random Access = Yes;
   AutomaticMount = yes;   # when device opened, read it
   RemovableMedia = no;
   Maximum Network Buffer Size = 13107200
   AlwaysOpen = no;
}

Thanks
--
Romer Ventura




Re: [Bacula-users] Quantum Scalar i500 slow write speed

2010-08-05 Thread Christian Gaul
Am 05.08.2010 16:57, schrieb Henry Yen:
> On Thu, Aug 05, 2010 at 10:09:06AM -0400, John Drescher wrote:
>   
>> On Thu, Aug 5, 2010 at 8:57 AM, Henry Yen  wrote:
>> 
>   
>>> On (at least) Linux, /dev/random will quickly block - use /dev/urandom 
>>> instead.
>>>   
>> Since these tend to be slow I would just create a large file from one of 
>> these.
>> 
> Well, for this case you shouldn't simply pick one or the other.
>
> Again, on Linux, you generally can't use /dev/random at all -- it will
> block after reading just a few dozen bytes.  /dev/urandom won't block,
> but your suggestion of creating a large file from it is very sensible.
> /dev/urandom seems to measure about 3MB/sec or thereabouts, so creating
> a large "uncompressible" file could be done sort of like:
>
>dd if=/dev/urandom of=tempchunk count=1048576
>cat tempchunk tempchunk tempchunk tempchunk tempchunk tempchunk > bigfile
>
>   
cat-ting random data a couple of times to make one big random file won't
really work unless the size of the chunks is way bigger than the
"library" (window) size of the compression algorithm.

Your example will probably lead to a 5:1 compression ratio :-)
so this will more than likely not be a really good test. Also, AFAIK tar
has an "optimization" when outputting to /dev/null; better to output to
/dev/zero instead if using tar to check possible speeds.

Calculating the checksums for the data can be CPU-bound.


P.S.: Checked to make sure.. depends on the compression algorithm of course:

$ dd if=/dev/urandom of=chunk bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.154357 s, 6.8 MB/s

$ gzip chunk

$ ls -al chunk*
-rw-r--r-- 1 christian christian 1048576  5. Aug 17:06 chunk
-rw-r--r-- 1 christian christian 1048764  5. Aug 17:06 chunk.gz

$ cat chunk chunk chunk chunk chunk chunk > bigchunk

$ gzip bigchunk
$ ls -al bigchunk*
-rw-r--r-- 1 christian christian 6291456  5. Aug 17:07 bigchunk
-rw-r--r-- 1 christian christian 6292442  5. Aug 17:07 bigchunk.gz

$ lzma bigchunk
$ ls -al bigchunk*
-rw-r--r-- 1 christian christian 6291456  5. Aug 17:07 bigchunk
-rw-r--r-- 1 christian christian 1063718  5. Aug 17:07 bigchunk.lzma


(edited out some lines..)

-- 
Christian Gaul
otop AG
D-55116 Mainz
Rheinstraße 105-107
Fon: 06131.5763.330
Fax: 06131.5763.500
E-Mail: christian.g...@otop.de
Internet: www.otop.de





Re: [Bacula-users] Quantum Scalar i500 slow write speed

2010-08-05 Thread Henry Yen
On Thu, Aug 05, 2010 at 10:09:06AM -0400, John Drescher wrote:
> On Thu, Aug 5, 2010 at 8:57 AM, Henry Yen  wrote:

> > On (at least) Linux, /dev/random will quickly block - use /dev/urandom 
> > instead.
> 
> Since these tend to be slow I would just create a large file from one of 
> these.

Well, for this case you shouldn't simply pick one or the other.

Again, on Linux, you generally can't use /dev/random at all -- it will
block after reading just a few dozen bytes.  /dev/urandom won't block,
but your suggestion of creating a large file from it is very sensible.
/dev/urandom seems to measure about 3MB/sec or thereabouts, so creating
a large "uncompressible" file could be done sort of like:

   dd if=/dev/urandom of=tempchunk count=1048576
   cat tempchunk tempchunk tempchunk tempchunk tempchunk tempchunk > bigfile

-- 
Henry Yen   Aegis Information Systems, Inc.
Senior Systems Programmer   Hicksville, New York




Re: [Bacula-users] Quantum Scalar i500 slow write speed

2010-08-05 Thread John Drescher
On Thu, Aug 5, 2010 at 8:57 AM, Henry Yen  wrote:
> On Thu, Aug 05, 2010 at 12:46:49PM +0100, Alan Brown wrote:
>
>> Tape speed testing while writing a repetitive file is useless as
>> hardware compression makes it go a lot faster than naturally.
>>
>> For tape tests use /dev/random
>
> On (at least) Linux, /dev/random will quickly block - use /dev/urandom 
> instead.

Since these tend to be slow I would just create a large file from one of these.

John



Re: [Bacula-users] Bacula restore runs forever - bootstrap creation [solved]

2010-08-05 Thread Martin Simmons
> On Thu, 5 Aug 2010 11:46:27 +0200, news only said:
> 
> Martin Simmons schrieb:
> 
> > What is the size of the job in bytes?
> 
> exactly 28321715406 B (approx 27 GB)

OK, that is very close to the total number of 64512-byte blocks recorded in
VolIndex 1 to 6, so I think those rows represent the whole job (nothing
missing or extra).


> +------------+-----------+----------+------------+------------+-----------+-----------+---------+------------+----------+
> | VolumeName | MediaType | VolIndex | JobMediaId | FirstIndex | LastIndex | StartFile | EndFile | StartBlock | EndBlock |
> +------------+-----------+----------+------------+------------+-----------+-----------+---------+------------+----------+
> | KYE713L4   | LTO-4     |        1 |        356 |          1 |    101610 |       246 |     246 |          0 |    77503 |
> | KYE713L4   | LTO-4     |        2 |        357 |     101610 |    116651 |       247 |     247 |          0 |    77503 |
> | KYE713L4   | LTO-4     |        3 |        358 |     116651 |    126690 |       248 |     248 |          0 |    77503 |
> | KYE713L4   | LTO-4     |        4 |        359 |     126690 |    151467 |       249 |     249 |          0 |    77503 |
> | KYE713L4   | LTO-4     |        5 |        360 |     151467 |    169844 |       250 |     250 |          0 |    77503 |
> | KYE713L4   | LTO-4     |        6 |        361 |     169844 |    292514 |       251 |     251 |          0 |    51961 |
> | KYE713L4   | LTO-4     |        7 |        913 |          1 |    292514 |         0 |     251 |          0 |    51961 |
> +------------+-----------+----------+------------+------------+-----------+-----------+---------+------------+----------+
> 
> Well! Does the big gap in JobMediaId indicate that in fact this session
> was "bscanned" even though it wasn't already purged?

Yes, it is very likely.  It looks like bscan creates only one JobMedia record
per job and its StartFile is always 0.


> Deleting the extra entries won't be a problem. Thanks a lot for putting
> me in the right direction! My problem is solved, but there seems to be
> an error somewhere in the bscan routines

Bscan has some limitations, but they are not all documented (or known!).

__Martin



Re: [Bacula-users] Quantum Scalar i500 slow write speed

2010-08-05 Thread Henry Yen
On Thu, Aug 05, 2010 at 12:46:49PM +0100, Alan Brown wrote:

> Tape speed testing while writing a repetitive file is useless as
> hardware compression makes it go a lot faster than naturally.
> 
> For tape tests use /dev/random

On (at least) Linux, /dev/random will quickly block - use /dev/urandom instead.

-- 
Henry Yen   Aegis Information Systems, Inc.
Senior Systems Programmer   Hicksville, New York



Re: [Bacula-users] best way to write backup to DVD

2010-08-05 Thread Carlo Filippetto
How long do you have to store that DVD?

Set a suitable retention period for the volumes, and use volumes of the
DVD's size. When the jobs 'close' the volumes you can burn them to DVD and
remove them from the HD. When you need to restore the data you only need to
copy the correct volumes from the DVD back to disk.

Your DB will grow over the retention period, and then the records will be
purged/pruned from the catalog.
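A Pool along those lines might look like the following sketch; the name and all the numbers are assumptions (4400M leaves some headroom on a 4.7GB single-layer DVD):

```
Pool {
  Name = DVD-Pool                 # hypothetical name
  Pool Type = Backup
  Maximum Volume Bytes = 4400M    # so one closed volume fits on one DVD
  Volume Retention = 2 months     # match how long you keep the discs
  AutoPrune = yes
  Recycle = no                    # volumes leave the HD, so don't reuse them
}
```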





2010/8/2 Phil Stracchino 

> On 08/01/10 19:46, john fish wrote:
> > But I am doing full (monthly) and incremental (daily) backups and they
> > are all over volumes.
> > Plus when I write volume(s) to DVD dont I need bootstrap files and mysql
> > data to restore?
>
> Not necessarily, no.  You certainly don't need to save bootstrap files;
> they are generated at restore time.  Having the catalog data is not
> necessary either, as it can be reloaded into the database via bscan; but
> bscan can take a while.  You could include a catalog dump with each
> dataset, but there is no specific provision in Bacula to take an
> external catalog dump and cleanly merge it into the running catalog.
> I've never tried doing so manually myself, and could not vouch for the
> safety of such a process.  If you were going to do that, the SAFE way
> would be to extract the catalog dump to a new location, stop Bacula,
> restart it pointing at the alternate catalog, then perform your restore.
>  Whether this is feasible is obviously going to be highly dependent upon
> how busy your Bacula installation is.
>
> You COULD, of course, hypothetically speaking, run a second Director
> specifically to do such restores, starting it up only when needed.
>
>
> --
>  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
>  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
> Renaissance Man, Unix ronin, Perl hacker, Free Stater
> It's not the years, it's the mileage.
>
>
>


Re: [Bacula-users] Bacula to Backup EMC Clariion cx4

2010-08-05 Thread Carlo Filippetto
Mount a LUN on the server that you want to use as the Storage Daemon, and
then use that mount point for your backups.


CIAO

---
Carlo Filippetto

2010/8/4 Heitor Medrado de Faria 

> Guys,
>
> Does anyone know, briefly, how to back up data on the EMC Clariion cx4
> storage using Bacula?
>
> Regards,
>
> --
> Heitor Medrado de Faria
> www.bacula.com.br
> Msn: hei...@bacula.com.br
> Gtalk: heitorfa...@gmail.com
> Skype: neocodeheitor
> + 55 71 9132-3349
> +55 71 3381-6869
>
>
>
>
>


Re: [Bacula-users] Quantum Scalar i500 slow write speed

2010-08-05 Thread Cejka Rudolf
ekke85 wrote (2010/08/05):
> slow. It writes at 22 MB/s. The drives should be able to do a lot more
> than that. I have to back up 11TB that takes a couple of days to

Try "tar -cf /dev/null /data-with-11-tb" and you will see whether the
bottleneck is the data source or something else. How many files do
you have there?
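That read-speed test, sketched with a placeholder path (defaulting to /etc here just so it runs anywhere):

```shell
# Stream the source tree into the bit bucket: if this is slow, the
# bottleneck is the data source, not the tape. DATA is a placeholder.
DATA=${DATA:-/etc}
time tar -cf /dev/null "$DATA" 2>/dev/null || true
# Count the files too -- many small files hurt throughput far more
# than total size does:
find "$DATA" 2>/dev/null | wc -l
```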

> The spooling attribute was not enabled, I have enabled it now.

Yes, attribute spooling is a good thing too. However, data spooling
is more important, as I said before.

> ]# dd if=/dev/zero of=/home/bigfile bs=1M count=1
> 1+0 records in
> 1+0 records out
> 1048576 bytes (10 GB) copied, 189.551 seconds, 55.3 MB/s

Very slow. My spooling system manages 372 MB/s, which covers spooling
4 parallel backups plus one thread writing to an LTO-3 tape. At 55 MB/s
you can do at most one backup at a time, and the tape is still
"undernourished" (assuming the write speed equals the read speed).

> This is writing that 10gb file with tar to tape:
> ]# time tar -czf /dev/Drive1 /home/bigfile
> tar: Removing leading `/' from member names

No, that was not tar, but a gzipped tar whose source was just zeros...
I expect you wrote only about 10 MB to the tape, so you cannot say
what you actually tested: read speed, processor speed, or
tape write speed.

> The 11Tb I have to backup is on a NetApp, the NetApp is mounted via
> NFS on the backup host and is getting the data from there to write to disk.

What role does the NetApp play here? NFS is not a very good ticket to speed...

-- 
Rudolf Cejka  http://www.fit.vutbr.cz/~cejkar
Brno University of Technology, Faculty of Information Technology
Bozetechova 2, 612 66  Brno, Czech Republic



Re: [Bacula-users] Quantum Scalar i500 slow write speed

2010-08-05 Thread Cejka Rudolf
ekke85 wrote (2010/08/05):
> I do not have spooling on and I don't have software compression on.

It seems that you have LTO-3 drive(s). If you cannot sustain a backup
rate of at least 27 MB/s (HP LTO-3) or 40 MB/s (IBM LTO-3), you need
spooling - it is a must. Note that sustaining such rates from loaded
nodes or with small files is almost impossible.
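For reference, a minimal sketch of what enabling data spooling looks like in the Bacula configuration (the resource names, path, and sizes are hypothetical):

```
# bacula-dir.conf -- stage job data to disk, then despool to tape at full speed
Job {
  Name = "BigBackup"
  Spool Data = yes
  Spool Attributes = yes
}

# bacula-sd.conf -- the spool area should live on fast, dedicated disk
Device {
  Name = "Drive1"
  Spool Directory = /var/spool/bacula
  Maximum Spool Size = 200G
}
```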

-- 
Rudolf Cejka  http://www.fit.vutbr.cz/~cejkar
Brno University of Technology, Faculty of Information Technology
Bozetechova 2, 612 66  Brno, Czech Republic



Re: [Bacula-users] Quantum Scalar i500 slow write speed

2010-08-05 Thread Alan Brown
ekke85 wrote:

> The spooling attribute was not enabled, I have enabled it now. This is what I 
> get writing to disk and then also writing that file to tape with tar:

Tape speed testing while writing a repetitive file is useless, as
hardware compression makes it go a lot faster than it naturally would.

For tape tests use /dev/random

NFS is a dreadful protocol and usually very slow (highly inefficient).
If you were streaming off the NetApp at 22 MB/s then you are doing well.

There are mount parameters you can apply to NFS to speed things up, as
well as moving to 9000-byte packets on a Gb network (don't do this in a
mixed-speed environment; there are always problems).
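As a rough illustration of both tweaks (the mount options and interface name are only starting points, not tuned values):

```shell
# inspect the options currently in effect for NFS mounts (read-only check):
grep nfs /proc/mounts || true
# typical throughput tuning, applied at mount time:
#   mount -o rsize=65536,wsize=65536,tcp,hard netapp:/vol/data /mnt/data
# jumbo frames on a dedicated Gb segment (every host and switch must agree):
#   ip link set eth1 mtu 9000
```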


Concentrate on getting the spooling working well - if you are using
anything faster than LTO-2, your spool area will have to be a dedicated
striped disk array (or an SLC SSD), as LTO-3 and onwards can easily outrun
a single hard drive's ability to stream data (if the drive has to do any
seeking, forget it).

Bottlenecks appear all over the place with high speed tape. The only way 
to find them is to test every component for throughput and latency as 
well as testing the complete chain.

Don't forget to test the source speed of /dev/zero and /dev/random, and
check throughputs when piping to /dev/null. Ditto for the despooling area.
If the tape speed comes anywhere near these speeds, then your
results will be distorted.
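Per-component dd tests along those lines might look like this (/tmp stands in for the real spool area; conv=fdatasync keeps the page cache from inflating the write number):

```shell
# sequential write speed of the spool area:
dd if=/dev/zero of=/tmp/spooltest bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
# sequential read speed back out of the same area
# (note: served partly from cache; use a file larger than RAM for a truthful number):
dd if=/tmp/spooltest of=/dev/null bs=1M 2>&1 | tail -n 1
rm -f /tmp/spooltest
```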







Re: [Bacula-users] Quantum Scalar i500 slow write speed

2010-08-05 Thread Thomas Mueller

> 
> Hi Thomas
> 
> The spooling attribute was not enabled, I have enabled it now. This is
> what I get writing to disk and then also writing that file to tape with
> tar:
> 
> 
> This is a 10gb file to disk:
> ]# dd if=/dev/zero of=/home/bigfile bs=1M count=1 1+0
> records in
> 1+0 records out
> 1048576 bytes (10 GB) copied, 189.551 seconds, 55.3 MB/s
>

Reading/writing one large file is not the same as reading/writing
thousands of small files/directories.

Are there just large files on the NetApp? If not, do the tar test with a
directory holding an average sample of the 11 TB.
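That sampling test, with a placeholder path:

```shell
# time only a representative subtree instead of the whole 11 TB; the
# per-file overhead over NFS shows up immediately. SAMPLE is a placeholder
# (point it at a directory that mirrors the real file-size mix).
SAMPLE=${SAMPLE:-/etc}
time tar -cf /dev/null "$SAMPLE" 2>/dev/null || true
```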

- Thomas




[Bacula-users] Quantum Scalar i500 slow write speed

2010-08-05 Thread ekke85


Quote:
Hi

I have a Quantum Scalar i500 and it works in Bacula, but it is very
slow. It writes at 22 MB/s. The drives should be able to do a lot more
than that. I have to back up 11TB that takes a couple of days to
complete; I don't want to think how long it would take to restore :(
This is the output I get from btape without hardware compression:


* did you enable (at least attribute) spooling?
* at what speed can you actually read the data? (try making a tar of
a large directory: "time tar -cf /dev/null " )
* is the storage daemon on the same machine as the 11tb data?

- Thomas


Hi Thomas

The spooling attribute was not enabled, I have enabled it now. This is what I 
get writing to disk and then also writing that file to tape with tar:


This is a 10gb file to disk:
]# dd if=/dev/zero of=/home/bigfile bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (10 GB) copied, 189.551 seconds, 55.3 MB/s

This is writing that 10gb file with tar to tape:
]# time tar -czf /dev/Drive1 /home/bigfile
tar: Removing leading `/' from member names

real   2m53.655s
user   2m39.219s
sys   0m28.415s
]# 



The 11 TB I have to back up is on a NetApp; the NetApp is mounted via NFS on the
backup host, which reads the data from there and writes it to disk.

ekke85

+--
|This was sent by ekk...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--





Re: [Bacula-users] Fwd: Problem with Volumes

2010-08-05 Thread Carlo Filippetto
Perfect!

Thank you

Ciao


---
Carlo Filippetto


2010/8/4 John Drescher 

> -- Forwarded message --
> From: John Drescher 
> Date: Wed, Aug 4, 2010 at 8:50 AM
> Subject: Re: [Bacula-users] Problem with Volumes
> To: Carlo Filippetto 
>
>
> On Wed, Aug 4, 2010 at 8:46 AM, Carlo Filippetto
>  wrote:
> > AZZZ
> > I created the volumes and later added the 'Duration'!!!
> >
> > Thanks very much!!
> >
> > Now what do I have to do?
> >
>
>
> http://old.nabble.com/Re:-How-does-a-change-to-Volume-Use-Duration-take-effect--p15765197.html
>
> John
>
>
>
> --
> John M. Drescher
>
>
>


[Bacula-users] Bacula Autochanger Questions

2010-08-05 Thread sid009

> 
> To the OP - Have you put the barcode stickers (from HP) *on the tape 
> cartridges the correct way up?* 
> 


I didn't receive any barcodes from HP unfortunately, so I have printed off some
3-of-9 barcodes (ending with L1, L2, ...) to stick on the tapes - thanks for the
tip, I will make sure the labels are the correct way up. I will replace a tape
and try to re-label it to see how it goes (I am glad the tapes are in 10's, not
100's).

Once the tapes have been labelled correctly - are there any specific config
parameters I need to set to ensure the tapes are rotated on a daily basis? At
present all tapes belong to one default pool and are all set to append.

Thanks for your help

+--
|This was sent by s...@logicscope.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--





Re: [Bacula-users] Quantum Scalar i500 slow write speed

2010-08-05 Thread Thomas Mueller
On Thu, 05 Aug 2010 05:57:06 -0400, ekke85 wrote:

> Hi
> 
> I have a Quantum Scalar i500 and it works in Bacula, but it is very
> slow. It writes at 22 MB/s. The drives should be able to do a lot more
> than that. I have to back up 11TB that takes a couple of days to
> complete, I don't want to think how long it would take to restore :(
> This is the output I get from btape without hardware compression:
> 

* did you enable (at least attribute) spooling?
* at what speed can you actually read the data? (try making a tar of
a large directory: "time tar -cf /dev/null " )
* is the storage daemon on the same machine as the 11tb data?

- Thomas




[Bacula-users] Quantum Scalar i500 slow write speed

2010-08-05 Thread ekke85
Hi

I have a Quantum Scalar i500 and it works in Bacula, but it is very slow. It
writes at 22 MB/s. The drives should be able to do a lot more than that. I
have to back up 11TB, which takes a couple of days to complete; I don't want to
think how long it would take to restore :(
This is the output I get from btape without hardware compression:

speed file_size=1GB nb_file=3
btape: btape.c:1056 Test with zero data, should give the maximum 
throughput.
btape: btape.c:905 Begin writing 3 files of 1.073 GB with raw blocks of 
64512 bytes.
++
btape: btape.c:608 Wrote 1 EOF to "Drive3" (/dev/Drive3)
btape: btape.c:410 Volume bytes=1.073 GB. Write rate = 63.16 MB/s
+
btape: btape.c:608 Wrote 1 EOF to "Drive3" (/dev/Drive3)
btape: btape.c:410 Volume bytes=1.073 GB. Write rate = 71.58 MB/s
+
btape: btape.c:608 Wrote 1 EOF to "Drive3" (/dev/Drive3)
btape: btape.c:410 Volume bytes=1.073 GB. Write rate = 76.70 MB/s
btape: btape.c:384 Total Volume bytes=3.221 GB. Total Write rate = 
70.03 MB/s

btape: btape.c:1068 Test with random data, should give the minimum 
throughput.
btape: btape.c:905 Begin writing 3 files of 1.073 GB with raw blocks of 
64512 bytes.
++
btape: btape.c:608 Wrote 1 EOF to "Drive3" (/dev/Drive3)
btape: btape.c:410 Volume bytes=1.073 GB. Write rate = 76.70 MB/s
+
btape: btape.c:608 Wrote 1 EOF to "Drive3" (/dev/Drive3)
btape: btape.c:410 Volume bytes=1.073 GB. Write rate = 76.70 MB/s
+
btape: btape.c:608 Wrote 1 EOF to "Drive3" (/dev/Drive3)
btape: btape.c:410 Volume bytes=1.073 GB. Write rate = 82.60 MB/s
btape: btape.c:384 Total Volume bytes=3.221 GB. Total Write rate = 
78.57 MB/s

btape: btape.c:1082 Test with zero data and bacula block structure.
btape: btape.c:960 Begin writing 3 files of 1.073 GB with blocks of 
64512 bytes.
+
btape: btape.c:608 Wrote 1 EOF to "Drive3" (/dev/Drive3)
btape: btape.c:410 Volume bytes=1.073 GB. Write rate = 56.51 MB/s
+
btape: btape.c:608 Wrote 1 EOF to "Drive3" (/dev/Drive3)
btape: btape.c:410 Volume bytes=1.073 GB. Write rate = 53.69 MB/s
+
btape: btape.c:608 Wrote 1 EOF to "Drive3" (/dev/Drive3)
btape: btape.c:410 Volume bytes=1.073 GB. Write rate = 53.69 MB/s
btape: btape.c:384 Total Volume bytes=3.221 GB. Total Write rate = 
54.60 MB/s

btape: btape.c:1094 Test with random data, should give the minimum 
throughput.
btape: btape.c:960 Begin writing 3 files of 1.073 GB with blocks of 
64512 bytes.
++
btape: btape.c:608 Wrote 1 EOF to "Drive3" (/dev/Drive3)
btape: btape.c:410 Volume bytes=1.073 GB. Write rate = 56.51 MB/s
+
btape: btape.c:608 Wrote 1 EOF to "Drive3" (/dev/Drive3)
btape: btape.c:410 Volume bytes=1.073 GB. Write rate = 59.65 MB/s
+
btape: btape.c:608 Wrote 1 EOF to "Drive3" (/dev/Drive3)
btape: btape.c:410 Volume bytes=1.073 GB. Write rate = 59.65 MB/s
btape: btape.c:384 Total Volume bytes=3.221 GB. Total Write rate = 
58.57 MB/s


and this is what I get with hardware compression:


speed file_size=1GB nb_file=3
btape: btape.c:1056 Test with zero data, should give the maximum 
throughput.
btape: btape.c:905 Begin writing 3 files of 1.073 GB with raw blocks of 
64512 bytes.
++
btape: btape.c:608 Wrote 1 EOF to "Drive0" (/dev/Drive0)
btape: btape.c:410 Volume bytes=1.073 GB. Write rate = 107.3 MB/s
+
btape: btape.c:608 Wrote 1 EOF to "Drive0" (/dev/Drive0)
btape: btape.c:410 Volume bytes=1.073 GB. Write rate = 107.3 MB/s
+
btape: btape.c:608 Wrote 1 EOF to "Drive0" (/dev/Drive0)
btape: btape.c:410 Volume bytes=1.073 GB. Write rate = 107.3 MB/s
btape: btape.c:384 Total Volume bytes=3.221 GB. Total Write rate = 
107.3 MB/s

btape: btape.c:1068 Test with random data, should give the minimum 
throughput.
btape: btape.c:905 Begin writing 3 files of 1.073 GB with raw blocks of 
64512 bytes.
++
btape: btape.c:608 Wrote 1 EOF to "Drive0" (/dev/Drive0)
btape: btape.c:410 Volume bytes=1.073 GB. Write rate = 76.70 MB/s
+
btape: btape.c:608 Wrote 1 EOF to "Drive0" (/dev/Drive0)
btape: btape.c:410 Volume bytes=1.073 GB. Write rate = 76.70 MB/s
+
btape: btape.c:608 Wrote 1 EOF to "Drive0" (/dev/Drive0)
btape: btape.c:410 Volume bytes=1.073 GB. Write rate = 82.60 MB/s
btape: btape.c:384 Total Volume bytes=3.221 GB. Total Write rate = 
78.57 MB/s

btape: btape.c:1082 Test with zero data and bacula block structure.
btape: btape.c:960 Begin writing 3 files of 1.073 GB with blocks of 
64512 bytes.
+
btape: btape.c:608 Wrote 1 EOF to "Drive0" (/dev/Drive0)
btape: btape.c:

Re: [Bacula-users] Bacula restore runs forever - bootstrap creation [solved]

2010-08-05 Thread news . only
Martin Simmons wrote:

> Ok, that explains the duplicate bootstrap rows.
> 
> What is the size of the job in bytes?

exactly 28321715406 B (approx 27 GB)

> Did it complete without errors?

Yes, it did.

> Have you run any tools like bscan with this volume to update the
> catalog?

I did! I still was in the phase of evaluating and I tried a lot of
things. Several older sessions where already purged.

But I can't say for sure WHEN I tried bscan.

> What does the following slightly modified query print for that jobid?
> 
> SELECT  VolumeName, MediaType, VolIndex, JobMediaId,
> FirstIndex, LastIndex, StartFile,
> JobMedia.EndFile, StartBlock, JobMedia.EndBlock
> FROMJobMedia, Media
> WHERE   JobMedia.JobId=NNN
> AND JobMedia.MediaId=Media.MediaId
> ORDER BY VolIndex,JobMediaId;

+------------+-----------+----------+------------+------------+-----------+-----------+---------+------------+----------+
| VolumeName | MediaType | VolIndex | JobMediaId | FirstIndex | LastIndex | StartFile | EndFile | StartBlock | EndBlock |
+------------+-----------+----------+------------+------------+-----------+-----------+---------+------------+----------+
| KYE713L4   | LTO-4     |        1 |        356 |          1 |    101610 |       246 |     246 |          0 |    77503 |
| KYE713L4   | LTO-4     |        2 |        357 |     101610 |    116651 |       247 |     247 |          0 |    77503 |
| KYE713L4   | LTO-4     |        3 |        358 |     116651 |    126690 |       248 |     248 |          0 |    77503 |
| KYE713L4   | LTO-4     |        4 |        359 |     126690 |    151467 |       249 |     249 |          0 |    77503 |
| KYE713L4   | LTO-4     |        5 |        360 |     151467 |    169844 |       250 |     250 |          0 |    77503 |
| KYE713L4   | LTO-4     |        6 |        361 |     169844 |    292514 |       251 |     251 |          0 |    51961 |
| KYE713L4   | LTO-4     |        7 |        913 |          1 |    292514 |         0 |     251 |          0 |    51961 |
+------------+-----------+----------+------------+------------+-----------+-----------+---------+------------+----------+

Well! Does the big gap in JobMediaId indicate that this session was in
fact "bscanned" even though it hadn't been purged yet?

I tried this one:

SELECT VolumeName, FirstIndex, JobId, count(*)
FROM  JobMedia, Media
WHERE JobMedia.MediaId=Media.MediaId
GROUP BY VolumeName, FirstIndex, JobId
HAVING count(*) > 1;

which gave the following result:
+++---+--+
| VolumeName | FirstIndex | JobId | count(*) |
+++---+--+
| KYE713L4   |  1 |10 |2 |
| KYE713L4   |  1 |13 |2 |
| KYE713L4   |  1 |14 |2 |
| KYE713L4   |  1 |15 |2 |
[...]
| KYE713L4   |  1 |   269 |2 |
| KYE713L4   |  1 |   270 |2 |
| KYE713L4   |  1 |   271 |2 |
| KYE714L4   |  1 |   275 |2 |
| KYE714L4   |  1 |   276 |2 |
| KYE714L4   |  1 |   277 |2 |
[...]
| KYE714L4   |  1 |   361 |2 |
| KYE714L4   |  1 |   362 |2 |
| KYE714L4   |  1 |   363 |2 |
| KYE714L4   |  1 |   364 |2 |
| KYE714L4   |  1 |   368 |2 |
| KYE715L4   |   1357 |   346 |   12 |
| KYE715L4   |   1358 |   346 |5 |
| KYE715L4   |   1767 |   353 |   17 |

When I try your query with one of the JobId between 10 and 368 I always
get something like this:
+------------+-----------+----------+------------+------------+-----------+-----------+---------+------------+----------+
| VolumeName | MediaType | VolIndex | JobMediaId | FirstIndex | LastIndex | StartFile | EndFile | StartBlock | EndBlock |
+------------+-----------+----------+------------+------------+-----------+-----------+---------+------------+----------+
| KYE714L4   | LTO-4     |        1 |        738 |          1 |         1 |        67 |      67 |          0 |     4982 |
| KYE714L4   | LTO-4     |        2 |       1007 |          1 |         1 |         0 |      67 |          0 |     4982 |
+------------+-----------+----------+------------+------------+-----------+-----------+---------+------------+----------+

Deleting the extra entries won't be a problem. Thanks a lot for pointing
me in the right direction! My problem is solved, but there seems to be
an error somewhere in the bscan routines.

IIRC I used
bscan -msS /dev/nst1 -V KYE713L4\|KYE714L4

Greetings,
Michael


Re: [Bacula-users] DisktoCatalog difference verification

2010-08-05 Thread Ceejay Cervantes
On Wed, Aug 4, 2010 at 6:48 PM, Martin Simmons  wrote:

> > On Wed, 4 Aug 2010 11:12:14 +0800, Ceejay Cervantes said:
> >
> > I'm using Bacula 5.0.2 on CentOS 5.5. I've created a DisktoCatalog level
> > verify job as a way to check things. I'm getting the warning below even
> > though they are on disk:
> >
> > Warning: The following files are in the Catalog but not on disk:
> > /usr/
> > /var/
> > /var/log/
> > /var/log/audit/
> > /home/
> > /boot/
> > ...
> > I tried listing files of the job using the bconsole and I saw two
> > entries of "/usr" and the rest mentioned on the warning. Is it safe to
> > ignore the warning or can it be removed?
>
> Are these mountpoints?  If so, you can safely ignore the warnings because
> the
> directories are listed in the FileSet.
>
> I suggest reporting the warnings as a bug (see
> http://www.bacula.org/en/?page=bugs).
>
> __Martin
>
>
>

Thanks Martin.