[Bacula-users] Missing full backup

2013-03-22 Thread Mike Seda
Hi All,
I performed a manual Full backup on 2/6/13.

However, when I recently attempted to do a restore (via "Select backup 
for a client before a specified time" with "2013-02-23 00:00:00" as the 
argument), I was presented with the following:
+--------+-------+-----------+-------------------+---------------------+------------+
| JobId  | Level | JobFiles  | JobBytes          | StartTime           | VolumeName |
+--------+-------+-----------+-------------------+---------------------+------------+
| 14,391 | F     | 2,128,189 | 1,918,767,865,530 | 2013-01-18 08:49:52 | 02L4       |
| 14,391 | F     | 2,128,189 | 1,918,767,865,530 | 2013-01-18 08:49:52 | 63L4       |
| 15,122 | D     |   331,747 |   177,583,386,852 | 2013-02-16 02:21:41 | 43L4       |
| 15,147 | I     |       157 |       246,885,602 | 2013-02-17 01:05:54 | 35L4       |
| 15,172 | I     |        29 |       233,494,619 | 2013-02-18 01:09:47 | 35L4       |
| 15,197 | I     |       107 |     6,423,634,284 | 2013-02-19 01:10:54 | 35L4       |
| 15,222 | I     |   117,469 |    53,195,102,406 | 2013-02-20 05:20:26 | 35L4       |
| 15,247 | I     |    11,065 |     5,000,910,468 | 2013-02-21 01:15:32 | 35L4       |
| 15,272 | I     |     2,850 |    24,396,269,258 | 2013-02-22 01:06:24 | 35L4       |
+--------+-------+-----------+-------------------+---------------------+------------+

As you can see, the 2/6/13 Full is missing, and Bacula seems to be 
referring to a previous Full - even the Differential on 2/16/13 seems to 
have been done against the 1/18/13 Full.

Does anyone have any idea what could have caused this? Most importantly 
though, does anyone have any ideas on how to retrieve the data from the 
2/6/13 backup?

I suppose I could bscan in the two tapes that were used for my 2/6/13
Full - since I was able to retrieve the VolumeNames (and JobId) from my
bacula.log file.
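[For later readers: a bscan run to rebuild catalog records from those tapes would look roughly like the sketch below. The volume names and device path here are placeholders (the real names are the ones recovered from bacula.log), so treat this as an outline rather than an exact recipe.]

```
# Make sure the drive is idle (stop the SD or unmount the drive in
# bconsole), load the first tape, then scan both volumes:
#   -v  verbose output
#   -m  update media (volume) records in the catalog
#   -V  pipe-separated list of volume names to accept
#   -c  path to the storage daemon's config file
bscan -v -m -c /etc/bacula/bacula-sd.conf -V "VOL001|VOL002" /dev/nst0
```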

BTW, restoring based on the aforementioned JobId fails with:
Unable to get Job record for JobId=14872: ERR=sql_get.c:306 No Job found 
for JobId 14872

The only thing I could think of is that the following bit me:
Allow Duplicate Jobs = "no"

My theory is that my manual job was still running when the scheduled 
version of the job started. Then, somehow the JobId of the running 
manual job was changed. I've seen similar behavior with Cancel Queued 
Duplicates and Cancel Running Duplicates before and it has puzzled me. I 
do not have Cancel Queued Duplicates or Cancel Running Duplicates 
enabled though.
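[For reference, the duplicate-control knobs discussed above are Job resource directives in the Director config. A minimal sketch of that combination follows; the job name is hypothetical, and defaults vary between Bacula versions, so verify against your manual:]

```
Job {
  Name = "backup1-job"              # hypothetical name
  Allow Duplicate Jobs = no
  Cancel Queued Duplicates = yes    # drop the duplicate still waiting in the queue
  Cancel Running Duplicates = no    # never kill the job already writing
  # ... plus the usual Client/FileSet/Schedule/Storage/Pool directives
}
```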

Please advise.

Thanks,
Mike


--
Everyone hates slow websites. So do we.
Make your web apps faster with AppDynamics
Download AppDynamics Lite for free today:
http://p.sf.net/sfu/appdyn_d2d_mar
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Huge Backups

2012-10-22 Thread Mike Seda
Hi,
I'm not using onefs=no.

What would be another cause of a loop?

This is my fileset:
FileSet {
   Name = "backup1"
   Include {
 Options { signature = MD5 }
 File = /
 File = /boot
   }

   Exclude {
 File = /proc
 File = /tmp
 File = /var/spool/bacula
   }
}

And here is my df:
[maseda@backup1 ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb2  20G  3.2G   16G  18% /
tmpfs  12G 0   12G   0% /dev/shm
/dev/sdb1 985M   44M  892M   5% /boot
/dev/sdb5 247G  188M  234G   1% /scratch
/dev/sda1  61T   42T   19T  70% /data

Any thoughts?

Mike
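[One way to narrow down where the extra ~230 GB comes from is to compare on-disk usage, restricted to the included filesystems, against what the FD itself reports. A rough sketch, assuming a client named backup1-fd (hypothetical):]

```
# -x keeps du from crossing filesystem boundaries (like onefs=yes)
du -shx / /boot

# sparse files report a huge apparent size; the first column (from -s)
# shows the blocks actually allocated
ls -ls /var/log/lastlog

# compare with what Bacula itself would send
echo "estimate client=backup1-fd listing" | bconsole
```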


On 10/22/2012 03:37 PM, Jérôme Blion wrote:
> Hello,
>
> Are you sure there is no loop?
> Typically, it can happen with onefs=no.
>
> HTH.
> Jérôme Blion.
>
>
> On 22/10/2012 22:33, Mike Seda wrote:
>> On 10/22/2012 01:15 PM, John Drescher wrote:
>>>> I currently have a machine with ~3 GB of data.
>>>>
>>>> However, ~230 GB is being backed up by Bacula.
>>>>
>>>> I performed a "bconsole -> estimate client=blah listing", and it doesn't
>>>> look like any files beyond what I specified in the fileset are being
>>>> backed up.
>>>>
>>>> I even set sparse=yes in the fileset options, but it didn't help.
>>>>
>>>> Please let me know what I'm missing here.
>>>>
>>> Are you sure that 230GB was backed up or are you looking at the size
>>> of your disk volumes expecting them to reset each backup or something
>>> like that?
>> I'm sure that ~230 GB was backed up. It's very strange.
>>
>>> John




Re: [Bacula-users] Huge Backups

2012-10-22 Thread Mike Seda

On 10/22/2012 01:15 PM, John Drescher wrote:
>> I currently have a machine with ~3 GB of data.
>>
>> However, ~230 GB is being backed up by Bacula.
>>
>> I performed a "bconsole -> estimate client=blah listing", and it doesn't
>> look like any files beyond what I specified in the fileset are being
>> backed up.
>>
>> I even set sparse=yes in the fileset options, but it didn't help.
>>
>> Please let me know what I'm missing here.
>>
> Are you sure that 230GB was backed up or are you looking at the size
> of your disk volumes expecting them to reset each backup or something
> like that?

I'm sure that ~230 GB was backed up. It's very strange.

>
> John




[Bacula-users] Huge Backups

2012-10-22 Thread Mike Seda
Hi All,
I currently have a machine with ~3 GB of data.

However, ~230 GB is being backed up by Bacula.

I performed a "bconsole -> estimate client=blah listing", and it doesn't 
look like any files beyond what I specified in the fileset are being 
backed up.

I even set sparse=yes in the fileset options, but it didn't help.

Please let me know what I'm missing here.

Thanks,
Mike



Re: [Bacula-users] Win32 FD / Write error sending N bytes to Storage daemon

2011-06-15 Thread Mike Seda
I just wanted to add that my similar problem was also related to network 
gear (hardware firewall). I resolved it by following the document below:

http://wiki.bacula.org/doku.php?id=faq#my_backup_starts_but_dies_after_a_while_with_connection_reset_by_peer_error

I did have Heartbeat Interval set several days back, but I did not 
complete (or know about) step 2 at the aforementioned link. I went ahead 
and added the change from step 2, and then added Heartbeat Interval back 
into my configs (FD, SD, DIR). It seems to have worked. :-)


BTW, the FD that was having issues (connection reset after 2 hours) is 
in front of the hardware firewall and the Bacula DIR is behind it. Plus, 
the SD lives in a totally different VLAN (behind *another* firewall), 
but will be moved to the same VLAN as the DIR in the next week or so.
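[To spell out the two-part fix from that FAQ entry for anyone landing here later: enable Bacula's application-level heartbeat and shorten the kernel's TCP keepalive timer so the firewall never sees an idle connection. Values below are illustrative, not tuned recommendations:]

```
# In the bacula-dir.conf (Director), bacula-fd.conf (FileDaemon) and
# bacula-sd.conf (Storage) resources:
Heartbeat Interval = 60

# Step 2 of the FAQ: on Linux, lower the TCP keepalive timer (the
# default of 7200 s is usually longer than a firewall's idle timeout):
#   sysctl -w net.ipv4.tcp_keepalive_time=60
# (persist it in /etc/sysctl.conf)
```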



On 06/15/2011 02:43 AM, Yann Cézard wrote:

On 13/06/2011 14:32, Josh Fisher wrote:

On 6/13/2011 2:15 AM, Mike Seda wrote:
I forgot to mention that during my debugging, I did have "Heartbeat 
Interval" set to 10 on the Client, Storage, and Director resources. 
The same error still occurred... Very odd.




I have encountered similar situations with clients. Everything but
Bacula would appear to work over the network, but Bacula would fail.
In one case it was a bad switch, and 2 or 3 other times it was a bad
NIC in the client. My conclusion is that Bacula is very sensitive to
network problems, and since it is network heavy during a backup, it
tends to reveal network problems when nothing else does. If the
client has been working in the past and then suddenly began failing
jobs, the problem is not likely the config. The procedure I now
go through to diagnose client problems is something like:

1) If a win32 client, then disable OS power management (it can turn off
the NIC's PHY inappropriately)
2) Swap connections with an existing, known working client (if possible)
3) Replace Ethernet patch cable
4) Connect client to a different switch (if possible)
5) Replace client's NIC
6) Try different plenum cabling or bypass plenum cabling if possible
7) Physically move client and directly connect to the switch the SD is
connected to

For me, this error has always thus far ended up being a hardware problem.



I totally second that.
This is exactly what we are observing here, even if the clues were
saying something else:

- Bacula is the only application that has the problem.
- More precisely, Windows clients are the only ones to have problems.
=> But the real problem is the network!

After some more tests (the day after my last tests, the network team
told me they had rebooted one of the network devices, which made the
problem disappear for a day or two...), I can now say that the problem
is on the network side of our infrastructure, with no doubt!
Having a DIR/SD in a VM running on one side or the other of the
problematic device makes the problem appear/disappear, so it is obvious
now that the problem is on our network path, not in Bacula.

My 2 cents.
--
Yann Cézard  -  infrastructures - administrateur systèmes serveurs
Centre de ressources informatiques-http://cri.univ-pau.fr
Université de Pau et des pays de l'Adour -http://www.univ-pau.fr
bâtiment d'Alembert (anciennement IFR), rue Jules Ferry, 64000 Pau
Téléphone : +33 (0)5 59 40 77 94


--
EditLive Enterprise is the world's most technically advanced content
authoring tool. Experience the power of Track Changes, Inline Image
Editing and ensure content is compliant with Accessibility Checking.
http://p.sf.net/sfu/ephox-dev2dev


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Win32 FD / Write error sending N bytes to Storage daemon

2011-06-12 Thread Mike Seda
I forgot to mention that during my debugging, I did have "Heartbeat 
Interval" set to 10 on the Client, Storage, and Director resources. The 
same error still occurred... Very odd.



On 06/10/2011 07:02 PM, Blake Dunlap wrote:
$20 you have the other bacula comm channel failing due to timeout of 
the state on a forwarding device. Dropping spool sizes is only 
increasing the frequency of communication across that path. You will 
likely see this problem solved completely by setting a short duration 
keepalive in your bacula configs.


-Blake

On Fri, Jun 10, 2011 at 20:48, Mike Seda <mas...@stanford.edu> wrote:


I just encountered a similar error in RHEL 6 using 5.0.3 (on the server
and client) with Data Spooling enabled:
10-Jun 02:06 srv084 JobId 43: Error: bsock.c:393 Write error sending
65536 bytes to Storage daemon:srv010.nowhere.us:9103: ERR=Broken pipe
10-Jun 02:06 srv084 JobId 43: Fatal error: backup.c:1024 Network send
error to SD. ERR=Broken pipe

The way that I made it go away was to decrease "Maximum Spool Size" from
200G to 2G. I also received the same error at 100G and 50G. I ended up
just disabling data spooling completely on this box since small spool
sizes almost defeat the point of spooling at all.

I've also been seeing some sporadic tape drive errors recently, too. So
that may be part of the problem. I will be running the vendor-suggested
diags on the library (Dell TL4000 with 2 x LTO-4 FC drives) in the next
couple of days.

Plus, this is a temporary SD instance that I will eventually migrate to
new hardware and add large/fast SAN disk to for spooling. This should
explain the reason for the small spool size settings... This box only
has a 2 x 300 GB drive SAS 10K RAID 1.

It'd be nice to see if anyone else has received this error on a similar
HW/SW configuration.

Mike


On 06/07/2011 09:48 AM, Yann Cézard wrote:
> On 07/06/2011 18:10, Josh Fisher wrote:
>> Another problem I see with Windows 7 clients is too aggressive power
>> management turning off the Ethernet interface even though it is in use
>> by bacula-fd. Apparently there is some Windows system call that a
>> service (daemon) must make to tell Windows not to do power management
>> while it is busy. I don't know what versions of Windows do that, other
>> than 7 and Vista, but it is a potential problem.
> There is no power management on our servers :-D
>
> I just ran some tests this afternoon; I created a new Bacula server
> with lenny / bacula 2.4.4, and downgraded the client to 2.4.4, to
> be sure that all was fine with the same fileset, etc.
> The test was OK, no problem, the job ran fine.
> Then I tested again with our production server (5.0.3) and
> the 2.4.4 client => network error, failed job.
> I upgraded the test bacula server to squeeze / bacula 5.0.2,
> and still the 2.4.4 fd on the client => no problem!
>
> So it seems that the problem is clearly in "network hardware" on the
> server side.
>
> We will do some more tests on the network side (change
> switch port, change wire, see if a firmware update is available...),
> but now I really doubt that the problem is in Bacula, nor can it be
> resolved in it.
>
> The strange thing is that the problems are only observed with win32
> clients. Perhaps the Windows (2003) TCP/IP stack is less fault tolerant
> than the Linux one in some very special case?
>
> Regards,
>
>






Re: [Bacula-users] Win32 FD / Write error sending N bytes to Storage daemon

2011-06-10 Thread Mike Seda
I just encountered a similar error in RHEL 6 using 5.0.3 (on the server 
and client) with Data Spooling enabled:
10-Jun 02:06 srv084 JobId 43: Error: bsock.c:393 Write error sending 
65536 bytes to Storage daemon:srv010.nowhere.us:9103: ERR=Broken pipe
10-Jun 02:06 srv084 JobId 43: Fatal error: backup.c:1024 Network send 
error to SD. ERR=Broken pipe

The way that I made it go away was to decrease "Maximum Spool Size" from 
200G to 2G. I also received the same error at 100G and 50G. I ended up 
just disabling data spooling completely on this box since small spool 
sizes almost defeat the point of spooling at all.
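[For context, the spool limits being tuned here are Storage Daemon Device directives. A minimal sketch of where they live; the device name is hypothetical and the sizes merely illustrative:]

```
Device {
  Name = "LTO4-Drive-0"                 # hypothetical device name
  Spool Directory = /var/spool/bacula
  Maximum Spool Size = 200G             # total spool space for this device
  # Maximum Job Spool Size = 50G        # optional per-job cap
}
```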

I've also been seeing some sporadic tape drive errors recently, too. So 
that may be part of the problem. I will be running the vendor-suggested 
diags on the library (Dell TL4000 with 2 x LTO-4 FC drives) in the next 
couple of days.

Plus, this is a temporary SD instance that I will eventually migrate to 
new hardware and add large/fast SAN disk to for spooling. This should 
explain the reason for the small spool size settings... This box only 
has a 2 x 300 GB drive SAS 10K RAID 1.

It'd be nice to see if anyone else has received this error on a similar 
HW/SW configuration.

Mike


On 06/07/2011 09:48 AM, Yann Cézard wrote:
> On 07/06/2011 18:10, Josh Fisher wrote:
>> Another problem I see with Windows 7 clients is too aggressive power
>> management turning off the Ethernet interface even though it is in use
>> by bacula-fd. Apparently there is some Windows system call that a
>> service (daemon) must make to tell Windows not to do power management
>> while it is busy. I don't know what versions of Windows do that, other
>> than 7 and Vista, but it is a potential problem.
> There is no power management on our servers :-D
>
> I just ran some tests this afternoon; I created a new Bacula server
> with lenny / bacula 2.4.4, and downgraded the client to 2.4.4, to
> be sure that all was fine with the same fileset, etc.
> The test was OK, no problem, the job ran fine.
> Then I tested again with our production server (5.0.3) and
> the 2.4.4 client =>  network error, failed job.
> I upgraded the test bacula server to squeeze / bacula 5.0.2,
> and still the 2.4.4 fd on the client =>  no problem!
>
> So it seems that the problem is clearly in "network hardware" on the
> server side.
>
> We will do some more tests on the network side (change
> switch port, change wire, see if a firmware update is available...),
> but now I really doubt that the problem is in Bacula, nor can it be
> resolved in it.
>
> The strange thing is that the problems are only observed with win32
> clients. Perhaps the Windows (2003) TCP/IP stack is less fault tolerant than
> the Linux one in some very special case?
>
> Regards,
>



[Bacula-users] MySQL versus Postgres

2011-06-06 Thread Mike Seda
All,
I'm still doing some testing with Bacula in my new environment. After 
one week of backups, Bacula is storing approximately 25,000,000 files 
(10 TB of data). Our other 5 TB of data is not in Bacula yet, but will 
be soon. Our 15 TB of total data will also grow by 50% each year.

Postgres seems to be handling the current amount of data decently for us 
thus far. I just want to make sure that Postgres is the right choice 
(performance-wise) before I push forward with this solution.

I recently read that Postgres is recommended over MySQL for larger 
Bacula installations. I just wanted to get your thoughts on that.

BTW, the current environment lives inside ESX, but the new environment 
will live on a dedicated box with the following specs:
- Dell PE 1950 III
- 32 GB of RAM
- 2 x 300 GB SAS 15K RAID 1
- 2-drive FC LTO-4 TL4000 library for copying File storage to tape
- SAN-attached LUNS for spooling and File storage

Thanks,
Mike

--
Simplify data backup and recovery for your virtual environment with vRanger.
Installation's a snap, and flexible recovery options mean your data is safe,
secure and there when you need it. Discover what all the cheering's about.
Get your free trial download today. 
http://p.sf.net/sfu/quest-dev2dev2 
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Disk-Based Autochanger

2011-06-02 Thread Mike Seda
Doh. I meant to say "If I'm *not* pointing to removable media"...


On 06/02/2011 02:59 PM, Mike Seda wrote:
> Hi All,
> I'm currently tweaking a Bacula D2D setup, and am wondering if I 
> should be writing to a disk-based Autochanger versus directly to a 
> disk-based Device. If I'm pointing to removable media, I should just 
> write directly to one or more Devices (w/o an Autochanger), right?
>
> Mike



[Bacula-users] Disk-Based Autochanger

2011-06-02 Thread Mike Seda
Hi All,
I'm currently tweaking a Bacula D2D setup, and am wondering if I should 
be writing to a disk-based Autochanger versus directly to a disk-based 
Device. If I'm pointing to removable media, I should just write directly 
to one or more Devices (w/o an Autochanger), right?

Mike
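[For later readers: a disk-based "autochanger" in Bacula is just an Autochanger resource grouping several File devices, so that multiple jobs can write concurrently while sharing one pool of volumes. A rough SD-side sketch; all names and paths are hypothetical:]

```
Autochanger {
  Name = "FileChgr1"
  Device = FileChgr1-Dev1, FileChgr1-Dev2
  Changer Device = /dev/null       # no physical changer for disk
  Changer Command = ""
}

Device {
  Name = FileChgr1-Dev1
  Device Type = File
  Media Type = File
  Archive Device = /backup/volumes # hypothetical path
  Automatic Mount = yes
  Label Media = yes
  Random Access = yes
  Removable Media = no
}
# FileChgr1-Dev2 would be identical apart from its Name
```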



Re: [Bacula-users] Maximum Volume Bytes

2011-05-20 Thread Mike Seda
All,
Nevermind about dedup with Bacula. It seems that the current block 
format doesn't work too well with it:
http://changelog.complete.org/archives/5547-research-on-deduplicating-disk-based-and-cloud-backups

I'm getting decent compression rates though with LZJB (compression=on), 
which makes it compelling enough to stick with ZFS.

My original question about the recommended "Maximum Volume Bytes" size 
still stands though. The documentation seems to recommend 50 GB, but we 
need to back up 15 TB of data. I'm just wondering if that changes things or not.

Cheers,
Mike
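[For reference, both limits under discussion are Pool resource directives. A sketch with the manual's suggested 50 GB volume size; all values are illustrative (at 50 GB per volume, 400 volumes caps the pool at roughly 20 TB):]

```
Pool {
  Name = "File"                  # hypothetical pool name
  Pool Type = Backup
  Maximum Volume Bytes = 50G
  Maximum Volumes = 400          # ~20 TB ceiling at 50G per volume
  Label Format = "Vol-"          # yields Vol-0001, Vol-0002, ...
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 365 days
}
```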


On 05/20/2011 10:48 AM, Mike Seda wrote:
> Hi All,
> I'm currently setting up a disk-based storage pool in Bacula and am
> wondering what I should set "Maximum Volume Bytes" to. I was thinking of
> setting it to "100G", but am just wondering if this is sane.
>
> FYI, the total data of our clients is 15 TB, but we are told that this
> data should at least double each year.
>
> I noticed that there seems to be a limit on the number of disk-based
> volumes in a pool due to the suffix having 4 digits, i.e. 0001. This
> adds up to about 10,000 possible volumes per pool. So 10,000 volumes x
> 100 GB is 1 PB. That seems like overkill. Perhaps setting "Maximum
> Volume Bytes = 10G" would be more reasonable since this would add up to
> 100 TB.
>
> I'm also storing these file volumes on ZFS (v28 w/ dedup=on), and am
> wondering if smaller volumes will dedup better than larger ones. I'm
> curious to see what others are doing to take advantage of dedup-enabled
> ZFS storage w/ Bacula.
>
> Thanks,
> Mike
>
> --
> What Every C/C++ and Fortran developer Should Know!
> Read this article and learn how Intel has extended the reach of its
> next-generation tools to help Windows* and Linux* C/C++ and Fortran
> developers boost performance applications - including clusters.
> http://p.sf.net/sfu/intel-dev2devmay
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users



[Bacula-users] Maximum Volume Bytes

2011-05-20 Thread Mike Seda
Hi All,
I'm currently setting up a disk-based storage pool in Bacula and am 
wondering what I should set "Maximum Volume Bytes" to. I was thinking of 
setting it to "100G", but am just wondering if this is sane.

FYI, the total data of our clients is 15 TB, but we are told that this 
data should at least double each year.

I noticed that there seems to be a limit on the number of disk-based 
volumes in a pool due to the suffix having 4 digits, i.e. 0001. This 
adds up to about 10,000 possible volumes per pool. So 10,000 volumes x 
100 GB is 1 PB. That seems like overkill. Perhaps setting "Maximum 
Volume Bytes = 10G" would be more reasonable since this would add up to 
100 TB.

I'm also storing these file volumes on ZFS (v28 w/ dedup=on), and am 
wondering if smaller volumes will dedup better than larger ones. I'm 
curious to see what others are doing to take advantage of dedup-enabled 
ZFS storage w/ Bacula.

Thanks,
Mike



Re: [Bacula-users] Postgres Error

2011-05-18 Thread Mike Seda
All,
OK. It looks like this issue has shown up before and is not critical:
http://old.nabble.com/regression-tests%3A-jobhisto_jobid_seq-td28277084.html

If anyone else runs into this in the future, the relevant commit is at 
the following link:
http://www.bacula.org/git/cgit.cgi/bacula/commit/?id=fe35ad03e0bff911ab8c7fcea9684bba83e3e9b9

I'm not sure why this commit never made it into bacula-client-5.0.3 from 
FreeBSD Ports though.

Mike


On 05/18/2011 11:16 AM, Mike Seda wrote:
> Hi Martin,
> It turns out that I do have a jobhisto table after all:
>
> bacula=> SELECT table_name FROM information_schema.tables WHERE 
> table_schema = 'public';
>table_name
> 
>  location
>  filename
>  path
>  file
>  mediatype
>  pool
>  storage
>  log
>  fileset
>  media
>  locationlog
>  jobhisto
>  pathhierarchy
>  unsavedfiles
>  basefiles
>  jobmedia
>  job
>  client
>  counters
>  version
>  cdimages
>  device
>  status
>  pathvisibility
> (24 rows)
>
> bacula=>
>
> Mike
>
>
> On 05/18/2011 10:46 AM, Mike Seda wrote:
>> Hi Martin,
>> It looks like make_bacula_tables succeeded. There were some notices (not
>> errors) though, which are provided below:
>>
>> [pgsql@bmir-backup-dir /usr/local]$ share/bacula/make_bacula_tables
>> Making PostgreSQL tables
>> psql::7: NOTICE:  CREATE TABLE will create implicit sequence
>> "filename_filenameid_seq" for serial column "filename.filenameid"
>> psql::7: NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit
>> index "filename_pkey" for table "filename"
>> CREATE TABLE
>> ALTER TABLE
>> CREATE INDEX
>> psql::17: NOTICE:  CREATE TABLE will create implicit sequence
>> "path_pathid_seq" for serial column "path.pathid"
>> psql::17: NOTICE:  CREATE TABLE / PRIMARY KEY will create
>> implicit index "path_pkey" for table "path"
>> CREATE TABLE
>> ALTER TABLE
>> CREATE INDEX
>> psql::33: NOTICE:  CREATE TABLE will create implicit sequence
>> "file_fileid_seq" for serial column "file.fileid"
>> psql::33: NOTICE:  CREATE TABLE / PRIMARY KEY will create
>> implicit index "file_pkey" for table "file"
>> CREATE TABLE
>> CREATE INDEX
>> CREATE INDEX
>> psql::84: NOTICE:  CREATE TABLE will create implicit sequence
>> "job_jobid_seq" for serial column "job.jobid"
>> psql::84: NOTICE:  CREATE TABLE / PRIMARY KEY will create
>> implicit index "job_pkey" for table "job"
>> CREATE TABLE
>> CREATE INDEX
>> CREATE TABLE
>> CREATE INDEX
>> psql::99: NOTICE:  CREATE TABLE will create implicit sequence
>> "location_locationid_seq" for serial column "location.locationid"
>> psql::99: NOTICE:  CREATE TABLE / PRIMARY KEY will create
>> implicit index "location_pkey" for table "location"
>> CREATE TABLE
>> psql::109: NOTICE:  CREATE TABLE will create implicit sequence
>> "fileset_filesetid_seq" for serial column "fileset.filesetid"
>> psql::109: NOTICE:  CREATE TABLE / PRIMARY KEY will create
>> implicit index "fileset_pkey" for table "fileset"
>> CREATE TABLE
>> CREATE INDEX
>> psql::126: NOTICE:  CREATE TABLE will create implicit sequence
>> "jobmedia_jobmediaid_seq" for serial column "jobmedia.jobmediaid"
>> psql::126: NOTICE:  CREATE TABLE / PRIMARY KEY will create
>> implicit index "jobmedia_pkey" for table "jobmedia"
>> CREATE TABLE
>> CREATE INDEX
>> psql::178: NOTICE:  CREATE TABLE will create implicit sequence
>> "media_mediaid_seq" for serial column "media.mediaid"
>> psql::178: NOTICE:  CREATE TABLE / PRIMARY KEY will create
>> implicit index "media_pkey" for table "media"
>> CREATE TABLE
>> CREATE INDEX
>> psql::188: NOTICE:  CREATE TABLE will create implicit sequence
>> "mediatype_mediatypeid_seq" for serial column "mediatype.mediatypeid"
>> psql::188: NOTICE:  CREATE TABLE / PRIMARY KEY will create
>> implicit index "mediatype_pkey" for table "mediatype"
>> CREATE TABLE
>> psql::195: NOTICE:  CREATE TABLE will create implicit sequence
>> "storage_storageid_seq" for serial column "storage.storageid"
>> psql::195: NOTICE:  CREATE TABLE / PRIMARY KEY will create
>> implicit index "storage_pkey" for table "storage"
>> CREATE TABLE
>> ps

Re: [Bacula-users] Postgres Error

2011-05-18 Thread Mike Seda
Hi Martin,
It turns out that I do have a jobhisto table after all:

bacula=> SELECT table_name FROM information_schema.tables WHERE 
table_schema = 'public';
table_name

  location
  filename
  path
  file
  mediatype
  pool
  storage
  log
  fileset
  media
  locationlog
  jobhisto
  pathhierarchy
  unsavedfiles
  basefiles
  jobmedia
  job
  client
  counters
  version
  cdimages
  device
  status
  pathvisibility
(24 rows)

bacula=>

Mike


On 05/18/2011 10:46 AM, Mike Seda wrote:
> Hi Martin,
> It looks like make_bacula_tables succeeded. There were some notices (not
> errors) though, which are provided below:
>
> [pgsql@bmir-backup-dir /usr/local]$ share/bacula/make_bacula_tables
> Making PostgreSQL tables
> psql::7: NOTICE:  CREATE TABLE will create implicit sequence
> "filename_filenameid_seq" for serial column "filename.filenameid"
> psql::7: NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit
> index "filename_pkey" for table "filename"
> CREATE TABLE
> ALTER TABLE
> CREATE INDEX
> psql::17: NOTICE:  CREATE TABLE will create implicit sequence
> "path_pathid_seq" for serial column "path.pathid"
> psql::17: NOTICE:  CREATE TABLE / PRIMARY KEY will create
> implicit index "path_pkey" for table "path"
> CREATE TABLE
> ALTER TABLE
> CREATE INDEX
> psql::33: NOTICE:  CREATE TABLE will create implicit sequence
> "file_fileid_seq" for serial column "file.fileid"
> psql::33: NOTICE:  CREATE TABLE / PRIMARY KEY will create
> implicit index "file_pkey" for table "file"
> CREATE TABLE
> CREATE INDEX
> CREATE INDEX
> psql::84: NOTICE:  CREATE TABLE will create implicit sequence
> "job_jobid_seq" for serial column "job.jobid"
> psql::84: NOTICE:  CREATE TABLE / PRIMARY KEY will create
> implicit index "job_pkey" for table "job"
> CREATE TABLE
> CREATE INDEX
> CREATE TABLE
> CREATE INDEX
> psql::99: NOTICE:  CREATE TABLE will create implicit sequence
> "location_locationid_seq" for serial column "location.locationid"
> psql::99: NOTICE:  CREATE TABLE / PRIMARY KEY will create
> implicit index "location_pkey" for table "location"
> CREATE TABLE
> psql::109: NOTICE:  CREATE TABLE will create implicit sequence
> "fileset_filesetid_seq" for serial column "fileset.filesetid"
> psql::109: NOTICE:  CREATE TABLE / PRIMARY KEY will create
> implicit index "fileset_pkey" for table "fileset"
> CREATE TABLE
> CREATE INDEX
> psql::126: NOTICE:  CREATE TABLE will create implicit sequence
> "jobmedia_jobmediaid_seq" for serial column "jobmedia.jobmediaid"
> psql::126: NOTICE:  CREATE TABLE / PRIMARY KEY will create
> implicit index "jobmedia_pkey" for table "jobmedia"
> CREATE TABLE
> CREATE INDEX
> psql::178: NOTICE:  CREATE TABLE will create implicit sequence
> "media_mediaid_seq" for serial column "media.mediaid"
> psql::178: NOTICE:  CREATE TABLE / PRIMARY KEY will create
> implicit index "media_pkey" for table "media"
> CREATE TABLE
> CREATE INDEX
> psql::188: NOTICE:  CREATE TABLE will create implicit sequence
> "mediatype_mediatypeid_seq" for serial column "mediatype.mediatypeid"
> psql::188: NOTICE:  CREATE TABLE / PRIMARY KEY will create
> implicit index "mediatype_pkey" for table "mediatype"
> CREATE TABLE
> psql::195: NOTICE:  CREATE TABLE will create implicit sequence
> "storage_storageid_seq" for serial column "storage.storageid"
> psql::195: NOTICE:  CREATE TABLE / PRIMARY KEY will create
> implicit index "storage_pkey" for table "storage"
> CREATE TABLE
> psql::214: NOTICE:  CREATE TABLE will create implicit sequence
> "device_deviceid_seq" for serial column "device.deviceid"
> psql::214: NOTICE:  CREATE TABLE / PRIMARY KEY will create
> implicit index "device_pkey" for table "device"
> CREATE TABLE
> psql::246: NOTICE:  CREATE TABLE will create implicit sequence
> "pool_poolid_seq" for serial column "pool.poolid"
> psql::246: NOTICE:  CREATE TABLE / PRIMARY KEY will create
> implicit index "pool_pkey" for table "pool"
> CREATE TABLE
> CREATE INDEX
> psql::259: NOTICE:  CREATE TABLE will create implicit sequence
> "client_clientid_seq" for serial column "client.clientid"
> psql::259: NOTICE:  CREATE TABLE / PRIMARY KEY will create
> implicit index "client_pkey" for table "client"
> CREATE TABLE
> CREATE INDEX

Re: [Bacula-users] Postgres Error

2011-05-18 Thread Mike Seda
ll create 
implicit index "basefiles_pkey" for table "basefiles"
CREATE TABLE
CREATE INDEX
psql::320: NOTICE:  CREATE TABLE / PRIMARY KEY will create 
implicit index "unsavedfiles_pkey" for table "unsavedfiles"
CREATE TABLE
psql::327: NOTICE:  CREATE TABLE / PRIMARY KEY will create 
implicit index "cdimages_pkey" for table "cdimages"
CREATE TABLE
psql::335: NOTICE:  CREATE TABLE / PRIMARY KEY will create 
implicit index "pathhierarchy_pkey" for table "pathhierarchy"
CREATE TABLE
CREATE INDEX
psql::347: NOTICE:  CREATE TABLE / PRIMARY KEY will create 
implicit index "pathvisibility_pkey" for table "pathvisibility"
CREATE TABLE
CREATE INDEX
CREATE TABLE
psql::361: NOTICE:  CREATE TABLE / PRIMARY KEY will create 
implicit index "status_pkey" for table "status"
CREATE TABLE
INSERT 0 1
INSERT 0 1
.
.
INSERT 0 1
INSERT 0 1
Creation of Bacula PostgreSQL tables succeeded.

BTW, it does not look like there was a jobhisto table created.
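(A quick way to confirm whether the table exists, sketched here on the assumption that the catalog database is named "bacula", as in the stock scripts:)

```
$ psql -d bacula -c "\dt jobhisto"
```

If the table really is missing, re-running make_bacula_tables -- or applying just its jobhisto portion -- before grant_bacula_privileges should let the later GRANT on jobhisto_jobid_seq succeed.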

Mike


On 05/18/2011 04:26 AM, Martin Simmons wrote:
>>>>>> On Tue, 17 May 2011 20:40:27 -0700, Mike Seda said:
>> Hi All,
>> I'm currently attempting to stand up a Bacula Director on FreeBSD 8.2.
>>
>> I installed the following packages from FreeBSD Ports:
>> bacula-client-5.0.3
>> bacula-server-5.0.3
>> postgresql-client-8.3.14,1
>> postgresql-server-8.3.14
>>
>> Everything has gone pretty well so far, but I just ran into the error below:
>> [pgsql@bmir-backup-dir /usr/local]$ share/bacula/grant_bacula_privileges
>> Granting PostgreSQL privileges
>> CREATE ROLE
>> GRANT
>> GRANT
>> .
>> .
>> GRANT
>> GRANT
>> psql::38: ERROR:  relation "jobhisto_jobid_seq" does not exist
>> GRANT
>> GRANT
>> .
>> .
>> GRANT
>> GRANT
>> Privileges for user bacula granted on database bacula.
>>
>> Any thoughts?
> Was make_bacula_tables 100% successful?  In particular, did it create the
> jobhisto table?
>
> __Martin
>
> --
> What Every C/C++ and Fortran developer Should Know!
> Read this article and learn how Intel has extended the reach of its
> next-generation tools to help Windows* and Linux* C/C++ and Fortran
> developers boost performance applications - including clusters.
> http://p.sf.net/sfu/intel-dev2devmay
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users



[Bacula-users] Postgres Error

2011-05-17 Thread Mike Seda
Hi All,
I'm currently attempting to stand up a Bacula Director on FreeBSD 8.2.

I installed the following packages from FreeBSD Ports:
bacula-client-5.0.3
bacula-server-5.0.3
postgresql-client-8.3.14,1
postgresql-server-8.3.14

Everything has gone pretty well so far, but I just ran into the error below:
[pgsql@bmir-backup-dir /usr/local]$ share/bacula/grant_bacula_privileges
Granting PostgreSQL privileges
CREATE ROLE
GRANT
GRANT
.
.
GRANT
GRANT
psql::38: ERROR:  relation "jobhisto_jobid_seq" does not exist
GRANT
GRANT
.
.
GRANT
GRANT
Privileges for user bacula granted on database bacula.

Any thoughts?

Thanks,
Mike



[Bacula-users] Multiple Storage (and Director) Daemons

2011-05-11 Thread Mike Seda
Hi All,
I'm currently attempting to architect a Bacula solution for an 
environment of 100+ clients with 15+ TB of total data (5,000,000+ files).

During my research on the above solution, I read that it's possible to 
run multiple storage (and director) daemons. This was very interesting 
to me, but I'm just wondering what the reasons would be for doing so.

Thanks In Advance,
Mike

--
Achieve unprecedented app performance and reliability
What every C/C++ and Fortran developer should know.
Learn how Intel has extended the reach of its next-generation tools
to help boost performance applications - including clusters.
http://p.sf.net/sfu/intel-dev2devmay
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Post-upgrade, from 2.4.4 to 3.0.2, bconsole is hung up.

2010-02-02 Thread Mike Seda
All,
I'm currently experiencing the hung bconsole issue, as well.

No modifications were made to my system recently.

After a couple of days, bconsole just doesn't respond anymore.

I just get the following...

[ms...@backup0 ~]$ sudo /usr/sbin/bconsole -d 200
Password:
Connecting to Director localhost:9101
bconsole: bsock.c:221-0 Current host[ipv4:127.0.0.1:9101] All 
host[ipv4:127.0.0.1:9101]
bconsole: bsock.c:155-0 who=Director daemon host=localhost port=9101
Director authorization problem.
Most likely the passwords do not agree.
If you are using TLS, there may have been a certificate validation error 
during the TLS handshake.
Please see 
http://www.bacula.org/en/rel-manual/Bacula_Freque_Asked_Questi.html#SECTION00376
 
for help.

Restarting bacula-dir fixes the problem, but this is not an acceptable 
solution.

Has anyone out there resolved this issue w/o downgrading to a 2.x.x 
version? And could upgrading to 5.0.0 help?

My server setup is:
Bacula 3.0.3 (x86_64, RPMs, MySQL Catalog)
CentOS 5.4 (x86_64)
HP MSL2024 (SAS, 1 HP LTO-4 Drive)

Mike

--
The Planet: dedicated and managed hosting, cloud storage, colocation
Stay online with enterprise data centers and the best network in the business
Choose flexible plans and management services without long-term contracts
Personal 24x7 support from experienced hosting pros just a phone call away.
http://p.sf.net/sfu/theplanet-com
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula 2.2 Faster?

2008-01-14 Thread Mike Seda
felix,
i too have noticed a severe performance degradation due to high load 
when i upgraded the backups server from 2.0.1 to 2.2.7. both sets of 
rpms came from the rpms-contrib-fschwarz repository on sourceforge.net.

were your rpms based on an --enable-batch-insert option to ./configure ?

also, is there anything else that may be contributing to the severe 
performance hit?

regards,
mike


Michael Nelson wrote:
> Dan Langille wrote:
>   
>> On 15 Aug 2007 at 15:27, Michael Nelson wrote:
>>
>>   
>> 
>>> H.
>>>
>>> I got 2.2 installed here from source, and was expecting to see the big 
>>> performance gains I had heard about.  Strangely, at least in my 
>>> application, 2.2 seems to be /significantly slower/ than 2.02. 
>>> 
>>>   
>> Did you specify --enable-batch-insert on your configuration?  I found 
>> that in ReleaseNotes.
>>   
>> 
>
> Arrgggh.  No.  I have recompiled it now with that switch but it's off 
> running a long job so I will install the new version with batch-insert 
> when it is done.
>
> Thanks
> Michael
>
>   


-
Check out the new SourceForge.net Marketplace.
It's the best place to buy or sell services for
just about anything Open Source.
http://ad.doubleclick.net/clk;164216239;13503038;w?http://sf.net/marketplace
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] autochanger with two drives

2008-01-13 Thread Mike Seda
hi arno,
please see inline responses:

Arno Lehmann wrote:
> Hi,
>
> 13.01.2008 00:06, Mike Seda wrote:
>   
>> hi arno,
>> i have matched up my config to the example that you gave. so now i have:
>> FD: 20
>> SD: 20
>> DIR/Jobs: 1
>> DIR/Director: 20
>> DIR/Storage1(Autochanger): 4
>>
>> but, jobs still seem to run in serial... any thoughts?
>> 
>
> Have you reloaded the configurations? 
i just restarted the sd and dir daemons after the config change
> In this case, a 'reload' command 
> in bconsole should be sufficient. Apart from that, I can only 
> recommend to use the 'show' commands - 'show dir' 
for 'show dir', i get:
Director: name=uwharrie-dir MaxJobs=20 FDtimeout=1800 SDtimeout=1800
   query_file=/etc/bacula/query.sql
  --> Messages: name=Daemon
  mailcmd=/usr/sbin/bsmtp -h localhost -f "(Bacula) %r" -s "Bacula 
daemon message" %r
> and 'show storage' 
>   
for 'show storage', i get:
Storage: name=Autochanger address=uwharrie SDport=9103 MaxJobs=4
  DeviceName=PX502 MediaType=LTO-3 StorageId=3
> should tell you about the current MaxJobs setting.
>
> If that's more than one, you should check the job priorities - Bacula 
> will only run jobs with identical priority simultaneously.
>   
wow... i think you nailed it... i am using different job priorities for 
each job... and that would explain the fact that bacula has only been 
running simultaneous jobs of the same priority level for me... i will 
test this asap...

so, i should set *all* jobs to the same priority? or should i just set 
the backup jobs as "Priority = 1" and restore jobs as "Priority = 2"?
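(In bacula-dir.conf terms, the simplest sketch is to give every backup Job the same Priority value; the resource name below is hypothetical:)

```
JobDefs {
  Name = "DefaultJob"
  Type = Backup
  Priority = 10   # identical on all backup jobs, so they can run together;
                  # jobs with a different Priority will not run alongside these
}
```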

thx in advance,
mike
> Arno
>   
>> thx,
>> mike
>>
>>
>> Arno Lehmann wrote:
>> 
>>> Hi,
>>>
>>> 10.01.2008 20:34, Mike Seda wrote:
>>>   
>>>   
>>>> hi arno,
>>>> i forgot to mention that i have:
>>>> Maximum Concurrent Jobs = 2
>>>> 
>>>> 
>>> Where do you have that setting?
>>>
>>>   
>>>   
>>>> should i increase that number? fyi, i just noticed that two concurrent 
>>>> backup jobs do in fact run, but only for the _same_ host. it is weird.
>>>> 
>>>> 
>>> I'm not so sure yet...
>>>
>>> Let me explain what I usually set up regarding maximum concurrent jobs:
>>> In the FD: Whatever I think is reasonable for that machine and its 
>>> network link, usually one or two
>>> In the SD: Whatever you think it can manage. Depends on number of 
>>> drives, available spool space, and settings in the DIR. Usually rather 
>>> high - up to 20 or upwards even.
>>> In the DIR/Job resource: Typically one.
>>> In the DIR/Director resource: What you think is a reasonable upper 
>>> limit for the jobs running simultaneously. This applies to all 
>>> configured storage devices, so you'll need at least as many jobs as 
>>> you have storage devices, usually.
>>> DIR/Storage resource: Here goes the real number of jobs you want to go 
>>> to that device in parallel. This is where I actually manage how many 
>>> jobs run in parallel.
>>>
>>> Short example:
>>> Let's assume we have 20 clients, each with one job, all scheduled at 
>>> the same time.
>>> FD: 20
>>> SD: 20
>>> DIR/Jobs: 1
>>> DIR/Director: 20
>>> DIR/Storage1: 1
>>> DIR/Storage2: 4
>>> DIR/Storage3: 2
>>>
>>> In this case, Bacula could run up to 7 jobs in parallel, but would 
>>> never run more than 1/4/2 jobs to Storage1/2/3 at the same time.
>>>
>>> Provided you've got your storage devices assigned to the pools, and 
>>> the different job levels go to separate pools - which is what I prefer 
>>> - and Storage2 is an autochanger with more than one drive and tapes 
>>> from all pools loaded such a setup can help you using your tape drives 
>>> most efficiently.
>>>
>>> Arno
>>>
>>>
>>>   
>>>   
>>>> regards,
>>>> mike
>>>>
>>>>
>>>> Arno Lehmann wrote:
>>>> 
>>>> 
>>>>> Hi,
>>>>>
>>>>> 08.01.2008 17:34, John Drescher wrote:
>>>>>   
>>>>>   
>>>>>   
>>>>>> On Jan 8, 2008 11:21 AM, Mike Seda <[EMAIL P

Re: [Bacula-users] autochanger with two drives

2008-01-12 Thread Mike Seda
alan,
a few weeks ago, i did in fact read the pertinent sections of the manual 
regarding "Maximum Concurrent Jobs". i changed what it told me to change 
in the configs, but i was still getting the same result as now.

btw, i did restart the daemons after the config change in case someone 
was going to propose that. :-P

best,
mike


Alan Brown wrote:
> On Thu, 10 Jan 2008, Mike Seda wrote:
>
>> hi arno,
>> i forgot to mention that i have:
>> Maximum Concurrent Jobs = 2
>>
>> should i increase that number?
>
> Yes - and read the fine manual, as the Maximum Concurrent Jobs 
> directive needs to be tweaked in several locations in several 
> different config files in order to handle multiple drives and multiple 
> clients.
>
> (Text search on "Maximum Concurrent Jobs" will find what you need)
>




Re: [Bacula-users] autochanger with two drives

2008-01-12 Thread Mike Seda
hi arno,
i have matched up my config to the example that you gave. so now i have:
FD: 20
SD: 20
DIR/Jobs: 1
DIR/Director: 20
DIR/Storage1(Autochanger): 4

...but, jobs still seem to run in serial... any thoughts?
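(The numbers above correspond to the "Maximum Concurrent Jobs" directive in each resource; a sketch with hypothetical resource names:)

```
# bacula-fd.conf
FileDaemon {
  Name = client-fd
  Maximum Concurrent Jobs = 20
}

# bacula-sd.conf
Storage {
  Name = backup-sd
  Maximum Concurrent Jobs = 20
}

# bacula-dir.conf
Director {
  Name = backup-dir
  Maximum Concurrent Jobs = 20
}
Storage {
  Name = Autochanger
  Maximum Concurrent Jobs = 4   # per-device ceiling
}
```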

thx,
mike


Arno Lehmann wrote:
> Hi,
>
> 10.01.2008 20:34, Mike Seda wrote:
>   
>> hi arno,
>> i forgot to mention that i have:
>> Maximum Concurrent Jobs = 2
>> 
>
> Where do you have that setting?
>
>   
>> should i increase that number? fyi, i just noticed that two concurrent 
>> backup jobs do in fact run, but only for the _same_ host. it is weird.
>> 
>
> I'm not so sure yet...
>
> Let me explain what I usually set up regarding maximum concurrent jobs:
> In the FD: Whatever I think is reasonable for that machine and its 
> network link, usually one or two
> In the SD: Whatever you think it can manage. Depends on number of 
> drives, available spool space, and settings in the DIR. Usually rather 
> high - up to 20 or upwards even.
> In the DIR/Job resource: Typically one.
> In the DIR/Director resource: What you think is a reasonable upper 
> limit for the jobs running simultaneously. This applies to all 
> configured storage devices, so you'll need at least as many jobs as 
> you have storage devices, usually.
> DIR/Storage resource: Here goes the real number of jobs you want to go 
> to that device in parallel. This is where I actually manage how many 
> jobs run in parallel.
>
> Short example:
> Let's assume we have 20 clients, each with one job, all scheduled at 
> the same time.
> FD: 20
> SD: 20
> DIR/Jobs: 1
> DIR/Director: 20
> DIR/Storage1: 1
> DIR/Storage2: 4
> DIR/Storage3: 2
>
> In this case, Bacula could run up to 7 jobs in parallel, but would 
> never run more than 1/4/2 jobs to Storage1/2/3 at the same time.
>
> Provided you've got your storage devices assigned to the pools, and 
> the different job levels go to separate pools - which is what I prefer 
> - and Storage2 is an autochanger with more than one drive and tapes 
> from all pools loaded such a setup can help you using your tape drives 
> most efficiently.
>
> Arno
>
>
>   
>> regards,
>> mike
>>
>>
>> Arno Lehmann wrote:
>> 
>>> Hi,
>>>
>>> 08.01.2008 17:34, John Drescher wrote:
>>>   
>>>   
>>>> On Jan 8, 2008 11:21 AM, Mike Seda <[EMAIL PROTECTED]> wrote:
>>>> 
>>>> 
>>>>> all,
>>>>> i am frustrated beyond belief.
>>>>>
>>>>> i recently spent $8K for a second tape drive for my library.
>>>>>
>>>>> all i want is for two bacula backup jobs (one for each tape drive) to
>>>>> run concurrently. i upgraded the backup server to 2.2.7 (to take
>>>>> advantage of improved code for handling multiple drives). the clients
>>>>> are still running 2.0.x.
>>>>>
>>>>> i have looked at *so* many postings on concurrent jobs, and have tried
>>>>> multiple configuration changes but to no avail. please tell me that
>>>>> there is a way to make two concurrent backup jobs work. is there anyway
>>>>> to put an end to my suffering other than buying a gun? :-P
>>>>>   
>>>>>   
>>> Sure...
>>>
>>>   
>>>   
>>>>> My server setup is:
>>>>> Bacula 2.2.7
>>>>> RHEL 4 AS
>>>>> Quantum PX502 (FC, 2 HP LTO-3 Drives)
>>>>>
>>>>> i can send my *.conf files if needed.
>>>>>   
>>>>>   
>>> Probably not needed.
>>>
>>>   
>>>   
>>>>> mike
>>>>>
>>>>>   
>>>>>   
>>>> First advice:
>>>> Put the two jobs you want to run concurrently on different tape drives
>>>> in different pools.
>>>> 
>>>> 
>>> Also, read the manual chapter on configuring the DIR, especially the 
>>> Job section, and even more especially this part:
>>>
>>> Prefer Mounted Volumes = 
>>>  If the Prefer Mounted Volumes directive is set to yes (default 
>>> yes), the Storage daemon is requested to select either an Autochanger 
>>> or a drive with a valid Volume already mounted in preference to a 
>>> drive that is not ready. This means that all jobs will attempt to 
>>> append to the same Volume (providing the Volume is appropriate -- 
>>

Re: [Bacula-users] autochanger with two drives

2008-01-10 Thread Mike Seda
hi arno,
i forgot to mention that i have:
Maximum Concurrent Jobs = 2

should i increase that number? fyi, i just noticed that two concurrent 
backup jobs do in fact run, but only for the _same_ host. it is weird.

regards,
mike


Arno Lehmann wrote:
> Hi,
>
> 08.01.2008 17:34, John Drescher wrote:
>   
>> On Jan 8, 2008 11:21 AM, Mike Seda <[EMAIL PROTECTED]> wrote:
>> 
>>> all,
>>> i am frustrated beyond belief.
>>>
>>> i recently spent $8K for a second tape drive for my library.
>>>
>>> all i want is for two bacula backup jobs (one for each tape drive) to
>>> run concurrently. i upgraded the backup server to 2.2.7 (to take
>>> advantage of improved code for handling multiple drives). the clients
>>> are still running 2.0.x.
>>>
>>> i have looked at *so* many postings on concurrent jobs, and have tried
>>> multiple configuration changes but to no avail. please tell me that
>>> there is a way to make two concurrent backup jobs work. is there anyway
>>> to put an end to my suffering other than buying a gun? :-P
>>>   
>
> Sure...
>
>   
>>> My server setup is:
>>> Bacula 2.2.7
>>> RHEL 4 AS
>>> Quantum PX502 (FC, 2 HP LTO-3 Drives)
>>>
>>> i can send my *.conf files if needed.
>>>   
>
> Probably not needed.
>
>   
>>> mike
>>>
>>>   
>> First advice:
>> Put the two jobs you want to run concurrently on different tape drives
>> in different pools.
>> 
>
> Also, read the manual chapter on configuring the DIR, especially the 
> Job section, and even more especially this part:
>
> Prefer Mounted Volumes = 
>  If the Prefer Mounted Volumes directive is set to yes (default 
> yes), the Storage daemon is requested to select either an Autochanger 
> or a drive with a valid Volume already mounted in preference to a 
> drive that is not ready. This means that all jobs will attempt to 
> append to the same Volume (providing the Volume is appropriate -- 
> right Pool, ... for that job). If no drive with a suitable Volume is 
> available, it will select the first available drive. Note, any Volume 
> that has been requested to be mounted, will be considered valid as a 
> mounted volume by another job. Thus, if multiple jobs start at the same 
> time and they all prefer mounted volumes, the first job will request 
> the mount, and the other jobs will use the same volume.
>
>  If the directive is set to no, the Storage daemon will prefer 
> finding an unused drive, otherwise, each job started will append to 
> the same Volume (assuming the Pool is the same for all jobs). Setting 
> Prefer Mounted Volumes to no can be useful for those sites with 
> multiple drive autochangers that prefer to maximize backup throughput 
> at the expense of using additional drives and Volumes. This means that 
> the job will prefer to use an unused drive rather than use a drive 
> that is already in use.
>
> Of course, you do this in addition to enabling basic Job Concurrency...
>
> Arno
>
>   
>> John
>>
>>
>> 
>
>   




Re: [Bacula-users] autochanger with two drives

2008-01-09 Thread Mike Seda
Michael Short wrote:
> I meant for this to hit the list:
>
> On Jan 8, 2008 3:22 PM, Michael Short <[EMAIL PROTECTED]> wrote:
>   
>> Mike,
>>
>> I recommend that you upgrade your FD agents, I have had some trouble
>> with the volumes produced by 2.0FD->2.2SD/DIR
>> 

yikes... i will upgrade the FDs after the current backups finish... i 
was wondering about compatibility. i was just lazy i guess. i did notice 
disgustingly horrible performance due to very high load on the server 
during backups after the 2.0.1 to 2.2.7 upgrade.

>> Sincerely,
>> -Michael
>> 
>
>   




Re: [Bacula-users] autochanger with two drives

2008-01-09 Thread Mike Seda
Arno Lehmann wrote:
> Hi,
>
> 08.01.2008 17:34, John Drescher wrote:
>   
>> On Jan 8, 2008 11:21 AM, Mike Seda <[EMAIL PROTECTED]> wrote:
>> 
>>> all,
>>> i am frustrated beyond belief.
>>>
>>> i recently spent $8K for a second tape drive for my library.
>>>
>>> all i want is for two bacula backup jobs (one for each tape drive) to
>>> run concurrently. i upgraded the backup server to 2.2.7 (to take
>>> advantage of improved code for handling multiple drives). the clients
>>> are still running 2.0.x.
>>>
>>> i have looked at *so* many postings on concurrent jobs, and have tried
>>> multiple configuration changes but to no avail. please tell me that
>>> there is a way to make two concurrent backup jobs work. is there anyway
>>> to put an end to my suffering other than buying a gun? :-P
>>>   
>
> Sure...
>
>   
>>> My server setup is:
>>> Bacula 2.2.7
>>> RHEL 4 AS
>>> Quantum PX502 (FC, 2 HP LTO-3 Drives)
>>>
>>> i can send my *.conf files if needed.
>>>   
>
> Probably not needed.
>
>   
>>> mike
>>>
>>>   
>> First advice:
>> Put the two jobs you want to run concurrently on different tape drives
>> in different pools.
>> 
>
> Also, read the manual chapter on configuring the DIR, especially the 
> Job section, and even more especially this part:
>
> Prefer Mounted Volumes = 
>   
i already have "Prefer Mounted Volumes = no" set in bacula-dir.conf
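(For context, that directive lives in each Job -- or a shared JobDefs -- resource of bacula-dir.conf; a sketch with hypothetical names:)

```
Job {
  Name = "client-backup"
  JobDefs = "DefaultJob"
  Prefer Mounted Volumes = no   # prefer an idle drive over an in-use volume
}
```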
>  If the Prefer Mounted Volumes directive is set to yes (default 
> yes), the Storage daemon is requested to select either an Autochanger 
> or a drive with a valid Volume already mounted in preference to a 
> drive that is not ready. This means that all jobs will attempt to 
> append to the same Volume (providing the Volume is appropriate -- 
> right Pool, ... for that job). If no drive with a suitable Volume is 
> available, it will select the first available drive. Note, any Volume 
> that has been requested to be mounted, will be considered valid as a 
> mounted volume by another job. Thus, if multiple jobs start at the same 
> time and they all prefer mounted volumes, the first job will request 
> the mount, and the other jobs will use the same volume.
>
>  If the directive is set to no, the Storage daemon will prefer 
> finding an unused drive, otherwise, each job started will append to 
> the same Volume (assuming the Pool is the same for all jobs). Setting 
> Prefer Mounted Volumes to no can be useful for those sites with 
> multiple drive autochangers that prefer to maximize backup throughput 
> at the expense of using additional drives and Volumes. This means that 
> the job will prefer to use an unused drive rather than use a drive 
> that is already in use.
>
> Of course, you do this in addition to enabling basic Job Concurrency...
>
> Arno
>
>   
>> John
>>
>>
>> 
>
>   




Re: [Bacula-users] autochanger with two drives

2008-01-09 Thread Mike Seda
John Drescher wrote:
> On Jan 8, 2008 11:21 AM, Mike Seda <[EMAIL PROTECTED]> wrote:
>   
>> all,
>> i am frustrated beyond belief.
>>
>> i recently spent $8K for a second tape drive for my library.
>>
>> all i want is for two bacula backup jobs (one for each tape drive) to
>> run concurrently. i upgraded the backup server to 2.2.7 (to take
>> advantage of improved code for handling multiple drives). the clients
>> are still running 2.0.x.
>>
>> i have looked at *so* many postings on concurrent jobs, and have tried
>> multiple configuration changes but to no avail. please tell me that
>> there is a way to make two concurrent backup jobs work. is there anyway
>> to put an end to my suffering other than buying a gun? :-P
>>
>> My server setup is:
>> Bacula 2.2.7
>> RHEL 4 AS
>> Quantum PX502 (FC, 2 HP LTO-3 Drives)
>>
>> i can send my *.conf files if needed.
>>
>> mike
>>
>> 
> First advice:
> Put the two jobs you want to run concurrently on different tape drives
> in different pools.
>   
different pools would be extremely undesirable for me. i currently have 
the following pools:
1: Weekly
2: Scratch
3: Migrate
4: Archive
5: Monthly
6: Daily
7: Clone

the aforementioned pools are very well organized, and i would rather 
leave them be.

i am willing to put the two jobs i want to run concurrently on different 
tape drives, but *not* different pools. will just the different tape 
drives help?

btw, concurrent jobs seem to be working with the same host, but not 
dissimilar hosts. that may help in debugging.
> John
>   




[Bacula-users] autochanger with two drives

2008-01-08 Thread Mike Seda
all,
i am frustrated beyond belief.

i recently spent $8K for a second tape drive for my library.

all i want is for two bacula backup jobs (one for each tape drive) to 
run concurrently. i upgraded the backup server to 2.2.7 (to take 
advantage of improved code for handling multiple drives). the clients 
are still running 2.0.x.

i have looked at *so* many postings on concurrent jobs, and have tried 
multiple configuration changes but to no avail. please tell me that 
there is a way to make two concurrent backup jobs work. is there any way 
to put an end to my suffering other than buying a gun? :-P

My server setup is:
Bacula 2.2.7
RHEL 4 AS
Quantum PX502 (FC, 2 HP LTO-3 Drives)

i can send my *.conf files if needed.

mike




[Bacula-users] upgrade from version 2.0.1 to 2.2.6

2008-01-04 Thread Mike Seda
all,
i wish to upgrade from version 2.0.1 to 2.2.6. the purpose of the 
upgrade is so that bacula will better handle my library which has two 
tape drives.

can i do the upgrade just on my bacula server and not the clients? and, 
is there a mysql db upgrade step involved? if so, what is involved?
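(For the catalog step, the server package ships an update script; a sketch -- the script path and init-script names vary by package:)

```
# stop the director before touching the catalog schema
/etc/init.d/bacula-dir stop
/usr/libexec/bacula/update_mysql_tables   # path varies by package
/etc/init.d/bacula-dir start
```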

also, should i go with version 2.2.6 or 2.2.7?

btw, i am using the "rpms-contrib-fschwarz" el4 packages from 
sourceforge.net.

thx,
mike

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] using clone jobs

2007-11-08 Thread Mike Seda
All,
I will soon purchase a second (identical) tape drive for my autochanger, 
which will be used primarily for cloning jobs.

After reading the documentation, it seems that the following (with 
site-specific modifications of job-name and media-type) is all that 
needs to be added to each Job (or perhaps JobDefs) resource to make 
cloning happen:
run = " level=%l since=\"%s\" storage="

Is this correct?

I suppose that I can have a job cloned to a different pool using the 
"storage" keyword in that line, but I would much rather say:
run = " level=%l since=\"%s\" *pool*=<*pool-name*>"

Are there any plans to implement "pool" as a cloning keyword in the run 
directive? I think that it would be much cleaner to refer to the 
pool-name as opposed to media-type since some folks (like yours truly) 
have one media-type, but multiple pools.

Also, how does Bacula handle cloned jobs when "Spool Data = yes"? 
Basically, does Bacula then spool once or twice?

Please advise.

Cheers,
Mike


Michael Proto wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> I neglected to mention that I have a 2nd tape drive that I plan to use
> with this clone job, so there shouldn't be an issue with both running
> simultaneously.
>
> Michael Proto wrote:
>   
>> As part of my monthly backup cycle, I am required to take 2 full backups
>> to 2 different pools at the beginning of each month, one to store
>> locally and one to store off-site. Up until now I have been running 2
>> independent schedules to take care of these:
>>
>> bacula-dir.conf:
>> Schedule {
>>   Name = "WeeklyCycle"
>>   Run = Level=Full Pool=MonthlyOffsite Priority=9 1st sat at 1:05
>>   Run = Level=Full Pool=Monthly Priority=9 1st sun at 1:05
>>   Run = Level=Incremental Pool=Daily FullPool=Monthly mon-sat at 1:05
>>
>> ...
>>
>> I see in the Director's Job resource that there is a "Run" directive
>> (not to be confused with the Run directive in the Schedule resource
>> above) that can be used to clone jobs. If possible I'd like to use this
>> to clone the "Level=Full Pool=Monthly" backup job shown above to the
>> MonthlyOffsite pool. Unfortunately I'm unable to wrap my head around
>> this directive based on the single example given in the documentation.
>>
>> Here's a sample of one of my Job and JobDef resources:
>>
>> JobDefs {
>>   Name = "DefaultJob"
>>   Type = Backup
>>   Level = Incremental
>>   FileSet = "Full Set"
>>   Schedule = "WeeklyCycle"
>>   Storage = ADIC-Library1
>>   Messages = Standard
>>   Pool = Daily
>>   Full Backup Pool = Monthly
>>   Priority = 10
>>   Maximum Concurrent Jobs = 1
>>   Spool Data = yes
>> }
>>
>> Job {
>>   Name = "archive2"
>>   JobDefs = DefaultJob
>>   Client = "archive2-fd"
>>   FileSet = "archive2"
>> }
>>
>>
>> I imagine I'd need a line like this in the archive2 job:
>>
>>   Run = "archive2 level=Full since=\"%s\" pool=MonthlyOffsite"
>>
>> The way I see it, if I put that in the archive2 job resource it will run
>> every time a scheduled (incremental or full) job runs for this Job
>> resource. Is there a way to have a "Run" job ONLY run when the other
>> monthly full backup is scheduled to run? Or would I need to create
>> another Job resource for all of my clients, under another schedule, and
>> remove these full backups from my WeeklyCycle schedule?
>>
>>
>>
>> Thanks,
>> Michael Proto
>> 
>
>
> - --
> Michael Proto| SecureWorks
> Unix Administrator   |
> PGP ID: 5D575BBE | [EMAIL PROTECTED]
> ***
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v1.4.7 (FreeBSD)
>
> iD8DBQFGCox/OLq/wl1XW74RAqGCAJ9Ukre8Fs9L/3/fKf1n4u0E6Xlj0gCeLM6q
> jEdKRB10O4pzsTTPolo8rVk=
> =qaww
> -END PGP SIGNATURE-
>
>   



Re: [Bacula-users] MySQL (Innodb) - "table is full" ???

2007-11-08 Thread Mike Seda
Hi All,
My bacula database (with MyISAM tables) is currently 5.3 GB in size 
after only 10 months of use.

Last weekend my File table filled up, which was easily fixed by running 
the following, as recommended at 
http://www.bacula.org/dev-manual/Catalog_Maintenance.html#SECTION00244 :
ALTER TABLE File MAX_ROWS=281474976710656;

But, the above command made me wonder if I will fill the File table 
again in the future. It also made me consider migrating my tables from 
MyISAM to InnoDB. Do you think the migration is worth the hassle? I 
should mention that I do AutoPrune my normal backups, but I must keep my 
archival backups indefinitely. These archival backups total over 2 TB 
per month.
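
Before committing to the migration, it may be worth measuring first. A sketch of the relevant MySQL statements, assuming shell access to the catalog database (table name from the stock Bacula schema; dump the catalog before any engine change):

```
-- How big is the File table, and which engine is it on?
SHOW TABLE STATUS LIKE 'File';

-- Hypothetical migration of the one huge table to InnoDB
-- (slow on a multi-GB table; take a dump first).
ALTER TABLE File ENGINE=InnoDB;
```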

Btw, with the rate at which my users generate data, it is conceivable 
that the normal and archival backups will continue to grow in size. Fyi, 
my autochanger is stackable, which means that I can just buy another 
unit and have 38 new slots (and possibly 2 more drives) instantly 
available within the same storage resource. I mention this to denote 
that I am only worried about the limitations of my *database* storage, 
not tape storage.

Any thoughts?

Regards,
Mike


Drew Bentley wrote:
> On 8/17/07, Alan Brown <[EMAIL PROTECTED]> wrote:
>   
>> On Fri, 17 Aug 2007, Drew Bentley wrote:
>>
>> 
>>> Yeah, autoextend for InnoDB seems to have bitten you. I usually never
>>> do this and have monitors to tell me if it's reaching a certain
>>> threshold, as you're probably not even using all of the InnoDB space
>>> allocated, as it's not particularly nice in giving back space that was
>>> once used, at least in my experience.
>>>   
>> Is there any way to see how much it's actually using?
>>
>>
>> 
>
> Not that I'm aware of, only show table status and or innodb status
> will print out the usage. If you perform a full dump and reinsert,
> you're always going to gain usage in space.
>
> -drew
>
>
>   




Re: [Bacula-users] differential backups not required?

2007-11-08 Thread Mike Seda
Hi All,
I wish to save space on my tapes by changing backup levels in my 
"Monthly" Schedule resource.

My current "Monthly" Schedule resource is:
Schedule {
  Name = "Monthly"
  Run = Level=Full Pool=Monthly 1st fri at 23:05
  Run = Level=Full Pool=Weekly 2nd-5th fri at 23:05
  Run = Level=Differential Pool=Weekly sat-thu at 23:05
}

I wish to change the aforementioned resource to:
Schedule {
  Name = "Monthly"
  Run = Level=Full Pool=Monthly 1st fri at 23:05
  Run = Level=Differential Pool=Weekly 2nd-5th fri at 23:05
  Run = Level=Incremental Pool=Daily sat-thu at 23:05
}

Btw, with my proposed changes to the backup levels, I plan to use the 
following Volume Retention times while having "Recycle = yes" and 
"AutoPrune = yes" :
Monthly = 372 days
Weekly = 37 days
Daily = 14 days

Do the proposed Schedule and Volume Retention times seem reasonable? I 
added 7 days to each time as a fudge-factor. Any critiques and/or advice 
is welcome.
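
For reference, a sketch of how those retention times might map onto the Pool resources; only the retention-related directives are shown, and the pool names are assumed from the schedule above:

```
# Hypothetical Pool resources matching the proposed retention times.
Pool {
  Name = Monthly
  Pool Type = Backup
  Volume Retention = 372 days
  Recycle = yes
  AutoPrune = yes
}
Pool {
  Name = Weekly
  Pool Type = Backup
  Volume Retention = 37 days
  Recycle = yes
  AutoPrune = yes
}
Pool {
  Name = Daily
  Pool Type = Backup
  Volume Retention = 14 days
  Recycle = yes
  AutoPrune = yes
}
```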

Also, I have read that Bacula will not back up files that have a ctime 
less than the last backup (of any level, i.e. Full, Differential, 
Incremental). Is that still true with Bacula 2.0.1? If so, is there 
any way to circumvent this? Basically, will changing the 2nd, 3rd, 4th, 
and 5th Friday backups from Full to Differential put my data at more 
risk if my users perform moves and untars? I feel that if I tell my 
users (one of whom is my boss) to touch all files/dirs after a move or 
untar, he will then tell me to go get a quote for some new backup 
software. This is not an option for me, as I am a bigtime open-source 
evangelist, and Bacula has served me well thus far. Please advise.
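
Rather than asking users to touch things by hand, the workaround can be scripted. A minimal sketch; the demo tree here is created on the fly (a stand-in for a just-untarred directory) so the effect is visible:

```shell
#!/bin/sh
# Sketch of the "touch after a move/untar" workaround: refresh mtimes under
# a tree so a timestamp-based incremental backup picks the files up again.
set -e

tree=$(mktemp -d)                       # stand-in for a just-untarred tree
echo "data" > "$tree/old_file"
touch -t 200001010000 "$tree/old_file"  # simulate a stale, pre-backup mtime

find "$tree" -exec touch {} +           # the actual workaround

# confirm the mtime is now current (GNU stat, with BSD fallback)
mtime=$(stat -c %Y "$tree/old_file" 2>/dev/null || stat -f %m "$tree/old_file")
now=$(date +%s)
if [ $((now - mtime)) -lt 60 ]; then echo "mtime refreshed"; fi
rm -rf "$tree"
```

A verify job could generate the file list instead of `find` walking the whole tree, as Jaap suggests below.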

Thx,
Mike


Kern Sibbald wrote:
> On Friday 03 November 2006 11:27, Jaap Stolk wrote:
>   
>> On 11/3/06, Jaap Stolk <[EMAIL PROTECTED]> wrote:
>> 
>>> I was wondering if i could do without the differential backups 
>>>   
> altogether ?
>   
>> (in reply to my own post)
>> I did some more reading and found that the differential backup only
>> looks at the file date/time, exactly like the incremental backup, so
>> there is no reason to use a differential backup.
>> 
>
> Except that doing Differential backups allows you to restore faster and to 
> recycle your Incremental backups faster.
>
>   
>> The other thing is a differential backup would reduce the number of
>> incremental backups i need to scan when restoring files, but since i
>> backup to a file this doesn't involve manual tape changes, so this is
>> also not a problem in my case.
>>
>> I think i can detect files that are missed in the incremental backup
>> (because of an old file timestamp) using a verify job, and either
>> manual or automatically touch these files, so they will be backed up
>> in the next incremental backup.
>> 
>
> That's an interesting idea ...
>
>   
>> so i have no further questions, unless someone sees a big problem in my 
>> 
> setup.
>   
>> Kind regards,
>> Jaap Stolk
>>
>   




Re: [Bacula-users] multiple autochangers

2007-11-06 Thread Mike Seda
All,
I am interested in purchasing another autochanger.

I am under the impression that the only way to conveniently (without the 
annoyance of manual shuffling) use two autochangers is to have different 
MediaTypes assigned to them. Is that correct?
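
For the record, the "different MediaTypes" arrangement would look roughly like this in bacula-sd.conf; the device names, paths, and MediaType strings below are hypothetical:

```
# Hypothetical bacula-sd.conf fragment: one Device per changer, each with
# its own Media Type, so the Director never asks one changer to mount a
# volume that lives in the other.
Device {
  Name = Lib1-Drive-1
  Media Type = LTO3-Lib1
  Archive Device = /dev/nst0
  Autochanger = yes
}
Device {
  Name = Lib2-Drive-1
  Media Type = LTO3-Lib2
  Archive Device = /dev/nst1
  Autochanger = yes
}
```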

Thx,
Mike


Arno Lehmann wrote:
> Hi,
>
> On 12/19/2006 12:13 AM, Jake Goerzen wrote:
>   
>> Hello,  I have 2 Sun L9 autochangers up and running on a single host.  
>> Is it possible to configure bacula to use both of these autochangers 
>> with 2 concurrent backup jobs?
>> 
>
> Yes.
>
> You define two storage devices in the SD, the corresponding two storage 
> devices in the DIR, and use one for each job. No automatic load 
> balancing, though...
>
> And if you want to minimize manual tape shuffling, make sure the two 
> devices have different MediaTypes assigned to them.
>
> Arno
>
>   
>> Thanks, Jake
>>
>>
>>
>> 
>
>   




Re: [Bacula-users] Autochangers over Fiber Channel

2007-11-06 Thread Mike Seda
Hi Flak,
I use mtx all the time with my Quantum PX502 Autochanger (LTO-3, FC, One 
Drive) on RHEL 4 AS (32-bit). The only gotcha is that FC-AL (arbitrated 
loop) does not work with the autochanger, so I was forced to use FC 
Point-to-Point.

If you are doing FC Point-to-Point, you need either an FC switch or a 
multi-port FC HBA. My FC HBA only has one port, but I was lucky enough to 
have extra ports on my FC switch. I used three of those extra ports to 
make the connections between my HBA, the autochanger robotics, and the 
one drive.

I hope that helps.

Best,
Mike


Flak Magnet wrote:
> I currently have a Quantum-SL3 autochanger working with Bacula running on 
> Solaris with a LVD-SCSI connection.
>
> We're looking at acquiring a 2nd one and attaching it via fiber channel to a 
> Linux host and "driving" that one with Bacula too. 
>
> The MTX compatibility page doesn't have any indication of what kind of 
> connection (SCSI vs Fiber) is being used, and I think that SCSI is probably 
> more prevalent than fiber.
>
> Does anyone on the list have any experience with autochangers and fiber 
> channel connections working with MTX (and therefore bacula)?
>
> I have found the following:
> http://article.gmane.org/gmane.comp.bacula.user/30699/match=fiber+channel
> So apparently Mike Seda got a FC connection working
> and:
> http://article.gmane.org/gmane.comp.bacula.user/38666/match=fiber+channel
> So Arno indicates that fiber channel "looks like SCSI" to the OS, therefore 
> to 
> MTX.  
>
> So I'm pretty confident that we'll be all-set, right?  Anyone care to burst 
> my 
> bubble on that?
>
> Thanks in advance for any responses. 
>
>   




Re: [Bacula-users] MTEOM

2007-09-26 Thread Mike Seda
Kern,
I think I have found the cause of the MTEOM errors that I have been 
experiencing from time to time. These MTEOM errors are not caused by Bacula.

In the past few months, I have rebooted my Quantum PX502 library a few 
times (once every two months or so) while a tape was mounted in the 
drive. This caused the HP (LTO-3 nFC) tape drive in the library to write 
an MTEOM marker to the mounted tape, which made it impossible for Bacula 
to write beyond that point (although reading was possible).

Basically, Quantum told me to unmount the tape before I ever 
powercycle/reboot the library again.

The tapes can be recovered by backing them up and reformatting them (to 
get rid of the MTEOM marker).

Basically, my tapes that I thought were bad are actually recoverable.

Cheers,
Mike


Mike Seda wrote:
> I should mention that all of my other tapes are working flawlessly with 
> my current setup... I just have this one tape that gives MTEOM errors... 
> There was nothing in the system log, but I think I know what caused the 
> problem... I accidentally rebooted my bacula server while bacula was 
> writing to tape... The reboot operation just hung, and I hard-powered 
> the bacula server off since I figured I was screwed anyway... The tape 
> clearly didn't like this... I think the tape is slightly damaged... I 
> can restore from it, but cannot append to it... How can I move the data 
> from this tape onto another? Probably migration, huh?
>
>
> Kern Sibbald wrote:
>   
>> On Saturday 31 March 2007 20:38, Mike Seda wrote:
>>   
>> 
>>> I have a similar MTEOM error with one of my tapes. My drive and other 
>>> tapes are totally fine. I can even restore from this tape, but just 
>>> cannot append to it. The exact error is below:
>>>
>>> 18-Mar 02:17 uwharrie-sd: Volume "MSR122L3" previously written, moving 
>>> to end of data.
>>> 18-Mar 02:35 uwharrie-sd: tsali.2007-03-17_23.05.00 Error: Unable to 
>>> position to end of data on device "Drive-1" (/dev/nst0): ERR=dev.c:922 
>>> ioctl MTEOM error on "Drive-1" (/dev/nst0). ERR=Input/output error.
>>> 18-Mar 02:35 uwharrie-sd: Marking Volume "MSR122L3" in Error in Catalog.
>>>
>>> My setup is:
>>> Bacula 2.0.1
>>> RHEL 4 AS
>>> Quantum PX502 (FC, 1 HP LTO-3 Drive)
>>>
>>> Any thoughts?
>>> 
>>>   
>> Anyone who gets MTEOM errors on a tape that has been written with no error 
>> messages, has a serious problem -- either improperly configured (tape drive 
>> parameters/modes don't agree with those in Bacula's device resource), or 
>> hardware problems.
>>
>> Start by looking at your system log and Bacula output for errors.  If there 
>> are, then they need to be resolved.  If not, then it could well be 
>> configuration so re-run the 9 (or 10) testing steps documented in the tape 
>> testing chapter of the manual.
>>
>>   
>> 
>>> Julien Cigar wrote:
>>> 
>>>   
>>>> Bill Moran wrote:
>>>>   
>>>>   
>>>> 
>>>>> In response to Julien Cigar <[EMAIL PROTECTED]>:
>>>>>   
>>>>> 
>>>>> 
>>>>>   
>>>>>> We've bought a new tape drive here, a Sony SDX-700C (AIT-3), with new 
>>>>>> tapes. I'm running Bacula 1.38.11 under (Debian) Linux (2.6.18)
>>>>>> I've run the btape test / btape fill with success. The tape is 
>>>>>> initialized correctly (variable block mode + ...) :
>>>>>>
>>>>>> phoenix:/home/jcigar# cat /etc/stinit.def
>>>>>> {buffer-writes read-ahead async-writes}
>>>>>>
>>>>>> manufacturer=SONY model = "SDX-700C" {
>>>>>> mode1 blocksize=0 compression=1
>>>>>> }
>>>>>>
>>>>>> phoenix:/home/jcigar# mt -f /dev/nst0 status
>>>>>> SCSI 2 tape drive:
>>>>>> File number=0, block number=0, partition=0.
>>>>>> Tape block size 0 bytes. Density code 0x32 (AIT-3).
>>>>>> Soft error count since last status=0
>>>>>> General status bits on (4101):
>>>>>>  BOT ONLINE IM_REP_EN
>>>>>>
>>>>>> Unfortunately I got an error this morning :
>>>>>>
>>>>>> 01-Feb 03:27 phoenix-sd: Volume "Full-Tape-0001" previously written, 
>>>>>> moving to end of data.
>>>

[Bacula-users] weird warnings after library firmware upgrade

2007-09-18 Thread Mike Seda
Hi Arno,
I activated the wait_for_drive function in /etc/bacula/mtx-changer, and 
the warnings went away!

Yippeee!! You rule!! 8-)

Cheers,
Mike


Hi,
18.09.2007 20:30, Mike Seda wrote:
 > > Hi All,
 > > I just conducted a successful restore job via bconsole, but received
 > > some weird warning messages. Suspiciously, these warnings started to
 > > appear after a library (robotics and drive) firmware upgrade.
 > >
 > > The warning that I receive is:
 > > Warning: acquire.c:200 Read open device "Drive-1" (/dev/nst0) Volume
 > > "MSR115L3" failed: ERR=dev.c:424 Unable to open device "Drive-1"
 > > (/dev/nst0): ERR=Input/output error

Is it possible that, in your mtx-changer script, you use a hard-coded
wait time after loading a tape? If that's the case, I suspect that the
new firmware made autochanger operations take longer, and thus your
timeout is overrun. If that's the case, you should activate the
wait_for_drive function and use that.
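
The wait_for_drive idea generalizes to any "poll until the device settles" loop. A sketch with the readiness check as a parameter (in mtx-changer the check is roughly `mt -f $device status | grep ONLINE`; here it is replaced by a stub so the loop itself can be demonstrated):

```shell
#!/bin/sh
# Generic "poll until ready" loop in the spirit of mtx-changer's
# wait_for_drive: instead of sleeping a hard-coded time after a load,
# keep checking a readiness command, so slower firmware just means a few
# more polls rather than a blown fixed timeout.
wait_for_ready() {
  cmd=$1
  tries=${2:-300}   # ~5 minutes at one poll per second
  i=0
  while [ "$i" -lt "$tries" ]; do
    if eval "$cmd" >/dev/null 2>&1; then
      echo "ready after $i polls"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out after $tries polls" >&2
  return 1
}

# demo with a stub "drive": it becomes ready once a marker file appears
marker=$(mktemp -u)
( sleep 2; : > "$marker" ) &
if wait_for_ready "[ -e \"$marker\" ]" 10; then ok=yes; else ok=no; fi
rm -f "$marker"
echo "result: $ok"
```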

If you already use that, I'd suggest to add a hard-coded wait time
after autochanger operations, in case the new firmware makes mtx
return even before the actual operation is finished.

Arno

 > > The complete restore job output is provided below:
 > > 18-Sep 14:02 uwharrie-dir: Start Restore Job
 > > RestoreFiles.2007-09-18_14.02.42
 > > 18-Sep 14:02 uwharrie-sd: 3307 Issuing autochanger "unload slot 1, 
drive
 > > 0" command.
 > > 18-Sep 14:03 uwharrie-sd: 3304 Issuing autochanger "load slot 11, 
drive
 > > 0" command.
 > > 18-Sep 14:03 uwharrie-sd: 3305 Autochanger "load slot 11, drive 0",
 > > status is OK.
 > > 18-Sep 14:03 uwharrie-sd: 3301 Issuing autochanger "loaded? drive 0"
 > > command.
 > > 18-Sep 14:03 uwharrie-sd: 3302 Autochanger "loaded? drive 0", 
result is
 > > Slot 11.
 > > 18-Sep 14:03 uwharrie-sd: RestoreFiles.2007-09-18_14.02.42 Warning:
 > > acquire.c:200 Read open device "Drive-1" (/dev/nst0) Volume "MSR115L3"
 > > failed: ERR=dev.c:424 Unable to open device "Drive-1" (/dev/nst0):
 > > ERR=Input/output error
 > >
 > > 18-Sep 14:03 uwharrie-sd: 3301 Issuing autochanger "loaded? drive 0"
 > > command.
 > > 18-Sep 14:03 uwharrie-sd: 3302 Autochanger "loaded? drive 0", 
result is
 > > Slot 11.
 > > 18-Sep 14:03 uwharrie-sd: RestoreFiles.2007-09-18_14.02.42 Warning:
 > > acquire.c:200 Read open device "Drive-1" (/dev/nst0) Volume "MSR115L3"
 > > failed: ERR=dev.c:424 Unable to open device "Drive-1" (/dev/nst0):
 > > ERR=Input/output error
 > >
 > > 18-Sep 14:03 uwharrie-sd: Please mount Volume "MSR115L3" on Storage
 > > Device "Drive-1" (/dev/nst0) for Job RestoreFiles.2007-09-18_14.02.42
 > > 18-Sep 14:06 uwharrie-sd: 3301 Issuing autochanger "loaded? drive 0"
 > > command.
 > > 18-Sep 14:06 uwharrie-sd: 3302 Autochanger "loaded? drive 0", 
result is
 > > Slot 11.
 > > 18-Sep 14:06 uwharrie-sd: Ready to read from volume "MSR115L3" on 
device
 > > "Drive-1" (/dev/nst0).
 > > 18-Sep 14:06 uwharrie-sd: Forward spacing Volume "MSR115L3" to
 > > file:block 692:0.
 > > 18-Sep 14:08 uwharrie-sd: End of Volume at file 692 on device 
"Drive-1"
 > > (/dev/nst0), Volume "MSR115L3"
 > > 18-Sep 14:08 uwharrie-sd: End of all volumes.
 > > 18-Sep 14:08 uwharrie-dir: Bacula 2.0.1 (12Jan07): 18-Sep-2007 14:08:34
 > >   JobId:  2306
 > >   Job:RestoreFiles.2007-09-18_14.02.42
 > >   Client: uwharrie-fd
 > >   Start time: 18-Sep-2007 14:02:44
 > >   End time:   18-Sep-2007 14:08:34
 > >   Files Expected: 1
 > >   Files Restored: 1
 > >   Bytes Restored: 32,813
 > >   Rate:   0.1 KB/s
 > >   FD Errors:  0
 > >   FD termination status:  OK
 > >   SD termination status:  OK
 > >   Termination:Restore OK
 > >
 > > I also received a similar warning message at bconsole by simply 
mounting
 > > a different tape:
 > > *mount
 > > The defined Storage resources are:
 > >  1: Tape
 > >  2: File
 > > Select Storage resource (1-2): 1
 > > Connecting to Storage daemon Tape at uwharrie:9103 ...
 > > Enter autochanger slot: 1
 > > 3301 Issuing autochanger "loaded? drive 0" command.
 > > 3302 Autochanger "loaded? drive 0", result: nothing loaded.
 > > 3304 Issuing autochanger "load slot 1, drive 0"

[Bacula-users] weird warnings after library firmware upgrade

2007-09-18 Thread Mike Seda
Hi All,
I just conducted a successful restore job via bconsole, but received 
some weird warning messages. Suspiciously, these warnings started to 
appear after a library (robotics and drive) firmware upgrade.

The warning that I receive is:
Warning: acquire.c:200 Read open device "Drive-1" (/dev/nst0) Volume 
"MSR115L3" failed: ERR=dev.c:424 Unable to open device "Drive-1" 
(/dev/nst0): ERR=Input/output error

The complete restore job output is provided below:
18-Sep 14:02 uwharrie-dir: Start Restore Job 
RestoreFiles.2007-09-18_14.02.42
18-Sep 14:02 uwharrie-sd: 3307 Issuing autochanger "unload slot 1, drive 
0" command.
18-Sep 14:03 uwharrie-sd: 3304 Issuing autochanger "load slot 11, drive 
0" command.
18-Sep 14:03 uwharrie-sd: 3305 Autochanger "load slot 11, drive 0", 
status is OK.
18-Sep 14:03 uwharrie-sd: 3301 Issuing autochanger "loaded? drive 0" 
command.
18-Sep 14:03 uwharrie-sd: 3302 Autochanger "loaded? drive 0", result is 
Slot 11.
18-Sep 14:03 uwharrie-sd: RestoreFiles.2007-09-18_14.02.42 Warning: 
acquire.c:200 Read open device "Drive-1" (/dev/nst0) Volume "MSR115L3" 
failed: ERR=dev.c:424 Unable to open device "Drive-1" (/dev/nst0): 
ERR=Input/output error

18-Sep 14:03 uwharrie-sd: 3301 Issuing autochanger "loaded? drive 0" 
command.
18-Sep 14:03 uwharrie-sd: 3302 Autochanger "loaded? drive 0", result is 
Slot 11.
18-Sep 14:03 uwharrie-sd: RestoreFiles.2007-09-18_14.02.42 Warning: 
acquire.c:200 Read open device "Drive-1" (/dev/nst0) Volume "MSR115L3" 
failed: ERR=dev.c:424 Unable to open device "Drive-1" (/dev/nst0): 
ERR=Input/output error

18-Sep 14:03 uwharrie-sd: Please mount Volume "MSR115L3" on Storage 
Device "Drive-1" (/dev/nst0) for Job RestoreFiles.2007-09-18_14.02.42
18-Sep 14:06 uwharrie-sd: 3301 Issuing autochanger "loaded? drive 0" 
command.
18-Sep 14:06 uwharrie-sd: 3302 Autochanger "loaded? drive 0", result is 
Slot 11.
18-Sep 14:06 uwharrie-sd: Ready to read from volume "MSR115L3" on device 
"Drive-1" (/dev/nst0).
18-Sep 14:06 uwharrie-sd: Forward spacing Volume "MSR115L3" to 
file:block 692:0.
18-Sep 14:08 uwharrie-sd: End of Volume at file 692 on device "Drive-1" 
(/dev/nst0), Volume "MSR115L3"
18-Sep 14:08 uwharrie-sd: End of all volumes.
18-Sep 14:08 uwharrie-dir: Bacula 2.0.1 (12Jan07): 18-Sep-2007 14:08:34
  JobId:  2306
  Job:RestoreFiles.2007-09-18_14.02.42
  Client: uwharrie-fd
  Start time: 18-Sep-2007 14:02:44
  End time:   18-Sep-2007 14:08:34
  Files Expected: 1
  Files Restored: 1
  Bytes Restored: 32,813
  Rate:   0.1 KB/s
  FD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Restore OK

I also received a similar warning message at bconsole by simply mounting 
a different tape:
*mount
The defined Storage resources are:
 1: Tape
 2: File
Select Storage resource (1-2): 1
Connecting to Storage daemon Tape at uwharrie:9103 ...
Enter autochanger slot: 1
3301 Issuing autochanger "loaded? drive 0" command.
3302 Autochanger "loaded? drive 0", result: nothing loaded.
3304 Issuing autochanger "load slot 1, drive 0" command.
3305 Autochanger "load slot 1, drive 0", status is OK.
3301 Issuing autochanger "loaded? drive 0" command.
3302 Autochanger "loaded? drive 0", result is Slot 1.
3901 open device failed: ERR=dev.c:424 Unable to open device "Drive-1" 
(/dev/nst0): ERR=Input/output error

My hardware/software configuration is:
- Dell PE 2650 (1 x 73 GB RAID 1 (OS), 1 x 1.3 TB RAID 0 (data spooling 
partition) , 4 GB RAM, 2 GB swap)
- Quantum PX502 (LTO-3, FC, 1-Drive)
- RHEL 4 AS
- Bacula 2.0.1 (Pools=Weekly, Monthly, Scratch, Migrate, Archive)

Any thoughts?

Regards,
Mike




Re: [Bacula-users] Using bcopy

2007-05-02 Thread Mike Seda
I tried Arno's suggestions, but to no avail. I ended up doing the following:
- bcopy the hosed tape (MSR133L3) to another file (FILE0003)
- rewind, weof, and relabel MSR133L3
- compared file indices of FILE0003 and FILE0002 by using bls ... There 
were duplicate file indices... Hence the errors
- wrote a bootstrap file to only get certain file indices from FILE0003, 
and ran a bcopy -b with this bootstrap file to copy FILE0003 to MSR133L3
- then, I just continued with the original command that I was trying to 
run before the infamous Ctrl-C heard around the world:
bcopy -c /etc/bacula/bacula-sd.conf -i FILE0002 -o MSR133L3 -v -w 
/var/bacula/spool VG1-LV0 Drive-1

This seemed to work fine... Bacula seemed to like this... No more 
errors... 8-)


Martin Simmons wrote:
>>>>>> On Wed, 02 May 2007 12:01:08 -0400, Mike Seda said:
>>>>>> 
>> Hi All,
>> Basically, the error that I received during a bcopy from disk to tape 
>> was the following:
>>
>> 01-May 22:48 bcopy: Volume "MSR133L3" previously written, moving to end 
>> of data.
>> 01-May 22:48 bcopy: bcopy Error: I cannot write on Volume "MSR133L3" 
>> because:
>> The number of files mismatch! Volume=4 Catalog=0
>> 01-May 22:48 bcopy: Marking Volume "MSR133L3" in Error in Catalog.
>>
>> Updating the number of volume files in the catalog does not make this 
>> error go away. Does anyone know how I can proceed? I would like to 
>> continue appending to this tape since it has around 500 GB free.
>> 
>
> I assume you get a different error in that case (i.e. it doesn't still say
> Catalog=0).
>
> Since you interrupted bcopy while it was writing, that volume will be in a
> funny state, so the only safe option is to purge it to make it start again.
>
> __Martin
>
>
>   
>> Thx,
>> Mike
>>
>>
>> Mike Seda wrote:
>> 
>>> Hi All,
>>> I just have one more question... I executed the following command, but 
>>> (stupidly) hit Ctrl-C shortly thereafter:
>>> bcopy -c /etc/bacula/bacula-sd.conf -i FILE0002 -o MSR133L3 -v -w 
>>> /var/bacula/spool VG1-LV0 Drive-1
>>>
>>> Now, when I try to execute the aforementioned command again, I get the 
>>> following error:
>>> [EMAIL PROTECTED] spool]# bcopy -c /etc/bacula/bacula-sd.conf -i FILE0002 
>>> -o MSR133L3 -v -w /var/bacula/spool VG1-LV0 Drive-1
>>> bcopy: butil.c:283 Using device: "VG1-LV0" for reading.
>>> 01-May 22:48 bcopy: Ready to read from volume "FILE0002" on device 
>>> "VG1-LV0" (/var/bacula/spool).
>>> bcopy: butil.c:286 Using device: "Drive-1" for writing.
>>> 01-May 22:48 bcopy: 3301 Issuing autochanger "loaded? drive 0" command.
>>> 01-May 22:48 bcopy: 3302 Autochanger "loaded? drive 0", result is Slot 30.
>>> 01-May 22:48 bcopy: 3301 Issuing autochanger "loaded? drive 0" command.
>>> 01-May 22:48 bcopy: 3302 Autochanger "loaded? drive 0", result is Slot 30.
>>> 01-May 22:48 bcopy: Invalid slot=0 defined, cannot autoload Volume.
>>> 01-May 22:48 bcopy: Volume "MSR133L3" previously written, moving to end 
>>> of data.
>>> 01-May 22:48 bcopy: bcopy Error: I cannot write on Volume "MSR133L3" 
>>> because:
>>> The number of files mismatch! Volume=4 Catalog=0
>>> 01-May 22:48 bcopy: Marking Volume "MSR133L3" in Error in Catalog.
>>> 01-May 22:49 bcopy: Invalid slot=0 defined, cannot autoload Volume.
>>> Mount Volume "" on device "Drive-1" (/dev/nst0) and press return when ready:
>>>
>>> Any thoughts?
>>>
>>>
>>> Martin Simmons wrote:
>>>   
>>>   
>>>>>>>>> On Mon, 30 Apr 2007 17:54:48 -0400, Mike Seda said:
>>>>>>>>> 
>>>>>>>>>   
>>>>>>>>>   
>>>>> Hi All,
>>>>> I successfully bcopied a tape to disk. Then, during a subsequent bcopy 
>>>>> from disk to tape, I got the following output:
>>>>>
>>>>> [EMAIL PROTECTED] ~]# bcopy -c /etc/bacula/bacula-sd.conf -i FILE0002 -o 
>>>>> MSR131L3 -v -w /var/bacula/spool VG1-LV0 Drive-1
>>>>> bcopy: butil.c:283 Using device: "VG1-LV0" for reading.
>>>>> 30-Apr 13:36 bcopy: Ready to read from volume "FILE0002" on device 
>>>>> "VG1-LV0" (/var/bacula/spool).
>>>>> bcopy: butil.c:286 Usi

Re: [Bacula-users] Using bcopy

2007-05-02 Thread Mike Seda
Hi All,
Basically, the error that I received during a bcopy from disk to tape 
was the following:

01-May 22:48 bcopy: Volume "MSR133L3" previously written, moving to end 
of data.
01-May 22:48 bcopy: bcopy Error: I cannot write on Volume "MSR133L3" 
because:
The number of files mismatch! Volume=4 Catalog=0
01-May 22:48 bcopy: Marking Volume "MSR133L3" in Error in Catalog.

Updating the number of volume files in the catalog does not make this 
error go away. Does anyone know how I can proceed? I would like to 
continue appending to this tape since it has around 500 GB free.

Thx,
Mike


Mike Seda wrote:
> Hi All,
> I just have one more question... I executed the following command, but 
> (stupidly) hit Ctrl-C shortly thereafter:
> bcopy -c /etc/bacula/bacula-sd.conf -i FILE0002 -o MSR133L3 -v -w 
> /var/bacula/spool VG1-LV0 Drive-1
>
> Now, when I try to execute the aforementioned command again, I get the 
> following error:
> [EMAIL PROTECTED] spool]# bcopy -c /etc/bacula/bacula-sd.conf -i FILE0002 
> -o MSR133L3 -v -w /var/bacula/spool VG1-LV0 Drive-1
> bcopy: butil.c:283 Using device: "VG1-LV0" for reading.
> 01-May 22:48 bcopy: Ready to read from volume "FILE0002" on device 
> "VG1-LV0" (/var/bacula/spool).
> bcopy: butil.c:286 Using device: "Drive-1" for writing.
> 01-May 22:48 bcopy: 3301 Issuing autochanger "loaded? drive 0" command.
> 01-May 22:48 bcopy: 3302 Autochanger "loaded? drive 0", result is Slot 30.
> 01-May 22:48 bcopy: 3301 Issuing autochanger "loaded? drive 0" command.
> 01-May 22:48 bcopy: 3302 Autochanger "loaded? drive 0", result is Slot 30.
> 01-May 22:48 bcopy: Invalid slot=0 defined, cannot autoload Volume.
> 01-May 22:48 bcopy: Volume "MSR133L3" previously written, moving to end 
> of data.
> 01-May 22:48 bcopy: bcopy Error: I cannot write on Volume "MSR133L3" 
> because:
> The number of files mismatch! Volume=4 Catalog=0
> 01-May 22:48 bcopy: Marking Volume "MSR133L3" in Error in Catalog.
> 01-May 22:49 bcopy: Invalid slot=0 defined, cannot autoload Volume.
> Mount Volume "" on device "Drive-1" (/dev/nst0) and press return when ready:
>
> Any thoughts?
>
>
> Martin Simmons wrote:
>   
>>>>>>> On Mon, 30 Apr 2007 17:54:48 -0400, Mike Seda said:
>>>>>>> 
>>>>>>>   
>>> Hi All,
>>> I successfully bcopied a tape to disk. Then, during a subsequent bcopy 
>>> from disk to tape, I got the following output:
>>>
>>> [EMAIL PROTECTED] ~]# bcopy -c /etc/bacula/bacula-sd.conf -i FILE0002 -o 
>>> MSR131L3 -v -w /var/bacula/spool VG1-LV0 Drive-1
>>> bcopy: butil.c:283 Using device: "VG1-LV0" for reading.
>>> 30-Apr 13:36 bcopy: Ready to read from volume "FILE0002" on device 
>>> "VG1-LV0" (/var/bacula/spool).
>>> bcopy: butil.c:286 Using device: "Drive-1" for writing.
>>> 30-Apr 13:36 bcopy: 3301 Issuing autochanger "loaded? drive 0" command.
>>> 30-Apr 13:36 bcopy: 3302 Autochanger "loaded? drive 0", result is Slot 28.
>>> 30-Apr 13:36 bcopy: 3301 Issuing autochanger "loaded? drive 0" command.
>>> 30-Apr 13:36 bcopy: 3302 Autochanger "loaded? drive 0", result is Slot 28.
>>> 30-Apr 13:36 bcopy: Invalid slot=0 defined, cannot autoload Volume.
>>> 30-Apr 13:36 bcopy: Wrote label to prelabeled Volume "MSR131L3" on 
>>> device "Drive-1" (/dev/nst0)
>>> bcopy: bcopy.c:242 Volume label not copied.
>>> 30-Apr 16:26 bcopy: End of Volume "" at 523:8456 on device "Drive-1" 
>>> (/dev/nst0). Write of 64512 bytes got -1.
>>> 
>>>   
>> This message usually means that the tape is full.
>>
>>
>>   
>> 
>>> 30-Apr 16:26 bcopy: Re-read of last block succeeded.
>>> 
>>>   
>> This is good.  It means that Bacula is able to read the block before the one
>> that filled the tape.
>>
>>
>>   
>> 
>>> 30-Apr 16:26 bcopy: End of medium on Volume "" Bytes=523,512,105,984 
>>> Blocks=8,114,956 at 30-Apr-2007 16:26.
>>> 30-Apr 16:26 bcopy: Invalid slot=0 defined, cannot autoload Volume.
>>> Mount Volume "" on device "Drive-1" (/dev/nst0) and press return when 
>>> ready:
>>>
>>> What does this mean? I am inclined to think that bcopy is done. My 
>>> paranoid self thinks that I forgot to rewind tape volume "MSR131L3", 
>>> which caused this error.

Re: [Bacula-users] Using bcopy

2007-05-01 Thread Mike Seda
Hi All,
I just have one more question... I executed the following command, but 
(stupidly) hit  shortly thereafter:
bcopy -c /etc/bacula/bacula-sd.conf -i FILE0002 -o MSR133L3 -v -w 
/var/bacula/spool VG1-LV0 Drive-1

Now, when I try to execute the aforementioned command again, I get the 
following error:
[EMAIL PROTECTED] spool]# bcopy -c /etc/bacula/bacula-sd.conf -i FILE0002 
-o MSR133L3 -v -w /var/bacula/spool VG1-LV0 Drive-1
bcopy: butil.c:283 Using device: "VG1-LV0" for reading.
01-May 22:48 bcopy: Ready to read from volume "FILE0002" on device 
"VG1-LV0" (/var/bacula/spool).
bcopy: butil.c:286 Using device: "Drive-1" for writing.
01-May 22:48 bcopy: 3301 Issuing autochanger "loaded? drive 0" command.
01-May 22:48 bcopy: 3302 Autochanger "loaded? drive 0", result is Slot 30.
01-May 22:48 bcopy: 3301 Issuing autochanger "loaded? drive 0" command.
01-May 22:48 bcopy: 3302 Autochanger "loaded? drive 0", result is Slot 30.
01-May 22:48 bcopy: Invalid slot=0 defined, cannot autoload Volume.
01-May 22:48 bcopy: Volume "MSR133L3" previously written, moving to end 
of data.
01-May 22:48 bcopy: bcopy Error: I cannot write on Volume "MSR133L3" 
because:
The number of files mismatch! Volume=4 Catalog=0
01-May 22:48 bcopy: Marking Volume "MSR133L3" in Error in Catalog.
01-May 22:49 bcopy: Invalid slot=0 defined, cannot autoload Volume.
Mount Volume "" on device "Drive-1" (/dev/nst0) and press return when ready:

Any thoughts?


Martin Simmons wrote:
>>>>>> On Mon, 30 Apr 2007 17:54:48 -0400, Mike Seda said:
>>>>>> 
>> Hi All,
>> I successfully bcopied a tape to disk. Then, during a subsequent bcopy 
>> from disk to tape, I got the following output:
>>
>> [EMAIL PROTECTED] ~]# bcopy -c /etc/bacula/bacula-sd.conf -i FILE0002 -o 
>> MSR131L3 -v -w /var/bacula/spool VG1-LV0 Drive-1
>> bcopy: butil.c:283 Using device: "VG1-LV0" for reading.
>> 30-Apr 13:36 bcopy: Ready to read from volume "FILE0002" on device 
>> "VG1-LV0" (/var/bacula/spool).
>> bcopy: butil.c:286 Using device: "Drive-1" for writing.
>> 30-Apr 13:36 bcopy: 3301 Issuing autochanger "loaded? drive 0" command.
>> 30-Apr 13:36 bcopy: 3302 Autochanger "loaded? drive 0", result is Slot 28.
>> 30-Apr 13:36 bcopy: 3301 Issuing autochanger "loaded? drive 0" command.
>> 30-Apr 13:36 bcopy: 3302 Autochanger "loaded? drive 0", result is Slot 28.
>> 30-Apr 13:36 bcopy: Invalid slot=0 defined, cannot autoload Volume.
>> 30-Apr 13:36 bcopy: Wrote label to prelabeled Volume "MSR131L3" on 
>> device "Drive-1" (/dev/nst0)
>> bcopy: bcopy.c:242 Volume label not copied.
>> 30-Apr 16:26 bcopy: End of Volume "" at 523:8456 on device "Drive-1" 
>> (/dev/nst0). Write of 64512 bytes got -1.
>> 
>
> This message usually means that the tape is full.
>
>
>   
>> 30-Apr 16:26 bcopy: Re-read of last block succeeded.
>> 
>
> This is good.  It means that Bacula is able to read the block before the one
> that filled the tape.
>
>
>   
>> 30-Apr 16:26 bcopy: End of medium on Volume "" Bytes=523,512,105,984 
>> Blocks=8,114,956 at 30-Apr-2007 16:26.
>> 30-Apr 16:26 bcopy: Invalid slot=0 defined, cannot autoload Volume.
>> Mount Volume "" on device "Drive-1" (/dev/nst0) and press return when 
>> ready:
>>
>> What does this mean? I am inclined to think that bcopy is done. My 
>> paranoid self thinks that I forgot to rewind tape volume "MSR131L3", 
>> which caused this error. Please advise what to do next.
>> 
>
> I would expect Bacula to rewind the tape before copying to it.  Could the tape
> be full?  Did the tape contain something already when you started copying to
> it?
>
> Beware also that tapes are not all exactly the same length, so Bacula can't
> guarantee to copy a full tape to another one.
>
> __Martin
>
> -
> This SF.net email is sponsored by DB2 Express
> Download DB2 Express C - the FREE version of DB2 express and take
> control of your XML. No limits. Just data. Click to get it now.
> http://sourceforge.net/powerbar/db2/
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>   


-
This SF.net email is sponsored by DB2 Express
Download DB2 Express C - the FREE version of DB2 express and take
control of your XML. No limits. Just data. Click to get it now.
http://sourceforge.net/powerbar/db2/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Using bcopy

2007-04-30 Thread Mike Seda
Hi All,
I successfully bcopied a tape to disk. Then, during a subsequent bcopy 
from disk to tape, I got the following output:

[EMAIL PROTECTED] ~]# bcopy -c /etc/bacula/bacula-sd.conf -i FILE0002 -o 
MSR131L3 -v -w /var/bacula/spool VG1-LV0 Drive-1
bcopy: butil.c:283 Using device: "VG1-LV0" for reading.
30-Apr 13:36 bcopy: Ready to read from volume "FILE0002" on device 
"VG1-LV0" (/var/bacula/spool).
bcopy: butil.c:286 Using device: "Drive-1" for writing.
30-Apr 13:36 bcopy: 3301 Issuing autochanger "loaded? drive 0" command.
30-Apr 13:36 bcopy: 3302 Autochanger "loaded? drive 0", result is Slot 28.
30-Apr 13:36 bcopy: 3301 Issuing autochanger "loaded? drive 0" command.
30-Apr 13:36 bcopy: 3302 Autochanger "loaded? drive 0", result is Slot 28.
30-Apr 13:36 bcopy: Invalid slot=0 defined, cannot autoload Volume.
30-Apr 13:36 bcopy: Wrote label to prelabeled Volume "MSR131L3" on 
device "Drive-1" (/dev/nst0)
bcopy: bcopy.c:242 Volume label not copied.
30-Apr 16:26 bcopy: End of Volume "" at 523:8456 on device "Drive-1" 
(/dev/nst0). Write of 64512 bytes got -1.
30-Apr 16:26 bcopy: Re-read of last block succeeded.
30-Apr 16:26 bcopy: End of medium on Volume "" Bytes=523,512,105,984 
Blocks=8,114,956 at 30-Apr-2007 16:26.
30-Apr 16:26 bcopy: Invalid slot=0 defined, cannot autoload Volume.
Mount Volume "" on device "Drive-1" (/dev/nst0) and press return when 
ready:

What does this mean? I am inclined to think that bcopy is done. My 
paranoid self thinks that I forgot to rewind tape volume "MSR131L3", 
which caused this error. Please advise what to do next.
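For what it's worth, the end-of-medium figures in that output are internally consistent with a full tape rather than a positioning error — a quick check (64512 bytes is Bacula's default maximum block size, matching the 64512-byte write that failed):

```shell
# Bytes reported at end of medium is an exact multiple of the 64512-byte
# block size, i.e. the drive stopped cleanly at end of tape, not mid-block.
bytes=523512105984
echo $(( bytes % 64512 ))       # prints 0
echo $(( bytes / 1000000000 ))  # prints 523 (about 523 GB written)
```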
Regards,
Mike


Martin Simmons wrote:
>>>>>> On Mon, 30 Apr 2007 10:29:18 -0400, Ryan Novosielski said:
>>>>>> 
>> Deleting a disk volume will not mean that it will then reuse that
>> number, so you will have a blank spot if that media no longer exists. I
>> think this answers your question, but I'm not 100% certain what it was
>> to begin with. :)
>> 
>
> Right.  The only way to make the MediaIds contiguous would be to hack the
> counter back down to its previous value in the catalog.
>
> __Martin
>
>
>   
>> Mike Seda wrote:
>> 
>>> Thank you very much for the quick/great responses.  ;-)  The disk 
>>> volume (required due to my lack of a second tape drive) introduces one 
>>> more question... After I am done bcopying from the disk volume to the 
>>> final duplicate tape volume, is it better to delete or disable the disk 
>>> volume? I ask this since I would prefer my MediaIds to be contiguous.
>>>
>>>
>>> Martin Simmons wrote:
>>>   
>>>>>>>>> On Sun, 29 Apr 2007 21:22:20 -0400, Mike Seda said:
>>>>>>>>> 
>>>>>>>>>   
>>>>> All,
>>>>> I want to duplicate a tape volume to another tape volume. Since I only 
>>>>> have one tape drive, I must copy the tape volume to a disk volume and 
>>>>> then copy the disk volume to a duplicate tape.
>>>>>
>>>>> Question # 1
>>>>> If I ever need to bscan in the duplicate tape, I want the volume name to 
>>>>> be that of the original input tape volume. Will bcopy accomplish this 
>>>>> for me, or will the resulting duplicate tape volume name be that of the 
>>>>> disk volume?
>>>>> 
>>>>>   
>>>> Neither -- bcopy never copies the volume label, so the label on the new 
>>>> tape
>>>> will remain.  The label on the disk volume is irrelevant.
>>>>
>>>>
>>>>   
>>>> 
>>>>> Question # 2
>>>>> The one thing that I do not like about bcopy is that it wants you to 
>>>>> actually create the duplicate volumes in the catalog even though there 
>>>>> will be no job records associated with these volumes. Since I find this 
>>>>> unnecessary and confusing, I am tempted to just use dd to accomplish my 
>>>>> tape duplication task. Since bacula reads files in 32K buffers, I came 
>>>>> up with the following dd command:
>>>>> dd if=/dev/nst0 of=.img ibs=32k
>>>>> Will the aforementioned dd command provide me a good dd image that I can 
>>>>> subsequently write to a duplicate tape?
>>>>> 
>>>>>   
>>>> Maybe.  A couple of things that will affect it:
>>>>
>>>> 1.

Re: [Bacula-users] Using bcopy

2007-04-30 Thread Mike Seda
Thank you very much for the quick/great responses.  ;-)  The disk 
volume (required due to my lack of a second tape drive) introduces one 
more question... After I am done bcopying from the disk volume to the 
final duplicate tape volume, is it better to delete or disable the disk 
volume? I ask this since I would prefer my MediaIds to be contiguous.


Martin Simmons wrote:
>>>>>> On Sun, 29 Apr 2007 21:22:20 -0400, Mike Seda said:
>>>>>> 
>> All,
>> I want to duplicate a tape volume to another tape volume. Since I only 
>> have one tape drive, I must copy the tape volume to a disk volume and 
>> then copy the disk volume to a duplicate tape.
>>
>> Question # 1
>> If I ever need to bscan in the duplicate tape, I want the volume name to 
>> be that of the original input tape volume. Will bcopy accomplish this 
>> for me, or will the resulting duplicate tape volume name be that of the 
>> disk volume?
>> 
>
> Neither -- bcopy never copies the volume label, so the label on the new tape
> will remain.  The label on the disk volume is irrelevant.
>
>
>   
>> Question # 2
>> The one thing that I do not like about bcopy is that it wants you to 
>> actually create the duplicate volumes in the catalog even though there 
>> will be no job records associated with these volumes. Since I find this 
>> unnecessary and confusing, I am tempted to just use dd to accomplish my 
>> tape duplication task. Since bacula reads files in 32K buffers, I came 
>> up with the following dd command:
>> dd if=/dev/nst0 of=.img ibs=32k
>> Will the aforementioned dd command provide me a good dd image that I can 
>> subsequently write to a duplicate tape?
>> 
>
> Maybe.  A couple of things that will affect it:
>
> 1. The dd will stop at an eof marker on the tape.  Bacula puts eof markers
>between jobs and also every 1GB (by default).  You'll probably need to run
>dd several times to different img file until it reports end of tape and
>then the same number of calls to dd to write the new tape.
>
> 2. The default block size is variable, not 32K, unless you override that in
>the configuration.  I don't think you can duplicate variable block sizes
>with dd.
>
> __Martin
>   
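Martin's first point — that dd stops at every EOF marker Bacula writes — means a whole-tape copy needs a loop over file segments, roughly like this (an untested sketch; the device path and segment names are assumptions, and it still does not solve the variable-block-size problem from point 2):

```
mt -f /dev/nst0 rewind
n=0
while dd if=/dev/nst0 of=seg.$n ibs=64k; do
    # a zero-length segment means two EOF marks in a row: end of data
    [ -s seg.$n ] || break
    n=$((n + 1))
done
# then write the segments back in order, issuing "mt weof" between them
```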


-
This SF.net email is sponsored by DB2 Express
Download DB2 Express C - the FREE version of DB2 express and take
control of your XML. No limits. Just data. Click to get it now.
http://sourceforge.net/powerbar/db2/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Using bcopy

2007-04-29 Thread Mike Seda
All,
I want to duplicate a tape volume to another tape volume. Since I only 
have one tape drive, I must copy the tape volume to a disk volume and 
then copy the disk volume to a duplicate tape.

Question # 1
If I ever need to bscan in the duplicate tape, I want the volume name to 
be that of the original input tape volume. Will bcopy accomplish this 
for me, or will the resulting duplicate tape volume name be that of the 
disk volume?

Question # 2
The one thing that I do not like about bcopy is that it wants you to 
actually create the duplicate volumes in the catalog even though there 
will be no job records associated with these volumes. Since I find this 
unnecessary and confusing, I am tempted to just use dd to accomplish my 
tape duplication task. Since bacula reads files in 32K buffers, I came 
up with the following dd command:
dd if=/dev/nst0 of=.img ibs=32k
Will the aforementioned dd command provide me a good dd image that I can 
subsequently write to a duplicate tape?

Thanks,
Mike


Flak Magnet wrote:
> I'm having a hard time figuring out the syntax for using bcopy.  I've 
> searched the documentation and the bacula-users archives, not to mention 
> google for something that would give me an "Aha!".  No joy.
>
> I want to duplicate a tape volume to another tape volume.  Since I only 
> have one tape drive for this task, I need to copy the tape volume to a 
> disk volume, then the disk volume to the duplicate tape.
>
> An example of a good bcopy command syntax for each operation would be 
> appreciated.
>
>
> TIA,
>
> Tim
>
> -
> Take Surveys. Earn Cash. Influence the Future of IT
> Join SourceForge.net's Techsay panel and you'll get the chance to share your
> opinions on IT & business topics through brief surveys - and earn cash
> http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>   


-
This SF.net email is sponsored by DB2 Express
Download DB2 Express C - the FREE version of DB2 express and take
control of your XML. No limits. Just data. Click to get it now.
http://sourceforge.net/powerbar/db2/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] delete a single job

2007-04-14 Thread Mike Seda
All,
Is it possible to delete a single job in bacula 2.0.1?
Thx,
Mike

-
This SF.net email is sponsored by DB2 Express
Download DB2 Express C - the FREE version of DB2 express and take
control of your XML. No limits. Just data. Click to get it now.
http://sourceforge.net/powerbar/db2/
___
Bacula-users mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] MTEOM

2007-04-01 Thread Mike Seda
I should mention that all of my other tapes are working flawlessly with 
my current setup... I just have this one tape that gives MTEOM errors... 
There was nothing in the system log, but I think I know what caused the 
problem... I accidentally rebooted my bacula server while bacula was 
writing to tape... The reboot operation just hung, and I hard-powered 
the bacula server off since I figured I was screwed anyway... The tape 
clearly didn't like this... I think the tape is slightly damaged... I 
can restore from it, but cannot append to it... How can I move the data 
from this tape onto another? Probably migration, huh?


Kern Sibbald wrote:
> On Saturday 31 March 2007 20:38, Mike Seda wrote:
>   
>> I have a similar MTEOM error with one of my tapes. My drive and other 
>> tapes are totally fine. I can even restore from this tape, but just 
>> cannot append to it. The exact error is below:
>>
>> 18-Mar 02:17 uwharrie-sd: Volume "MSR122L3" previously written, moving 
>> to end of data.
>> 18-Mar 02:35 uwharrie-sd: tsali.2007-03-17_23.05.00 Error: Unable to 
>> position to end of data on device "Drive-1" (/dev/nst0): ERR=dev.c:922 
>> ioctl MTEOM error on "Drive-1" (/dev/nst0). ERR=Input/output error.
>> 18-Mar 02:35 uwharrie-sd: Marking Volume "MSR122L3" in Error in Catalog.
>>
>> My setup is:
>> Bacula 2.0.1
>> RHEL 4 AS
>> Quantum PX502 (FC, 1 HP LTO-3 Drive)
>>
>> Any thoughts?
>> 
>
> Anyone who gets MTEOM errors on a tape that has been written with no error 
> messages, has a serious problem -- either improperly configured (tape drive 
> parameters/modes don't agree with those in Bacula's device resource), or 
> hardware problems.
>
> Start by looking at your system log and Bacula output for errors.  If there 
> are, then they need to be resolved.  If not, then it could well be 
> configuration so re-run the 9 (or 10) testing steps documented in the tape 
> testing chapter of the manual.
>
>   
>> Julien Cigar wrote:
>> 
>>> Bill Moran wrote:
>>>   
>>>   
>>>> In response to Julien Cigar <[EMAIL PROTECTED]>:
>>>>   
>>>> 
>>>> 
>>>>> We've bought a new tape drive here, a Sony SDX-700C (AIT-3), with new 
>>>>> tapes. I'm running Bacula 1.38.11 under (Debian) Linux (2.6.18)
>>>>> I've run the btape test / btape fill with success. The tape is 
>>>>> initialized correctly (variable block mode + ...) :
>>>>>
>>>>> phoenix:/home/jcigar# cat /etc/stinit.def
>>>>> {buffer-writes read-ahead async-writes}
>>>>>
>>>>> manufacturer=SONY model = "SDX-700C" {
>>>>> mode1 blocksize=0 compression=1
>>>>> }
>>>>>
>>>>> phoenix:/home/jcigar# mt -f /dev/nst0 status
>>>>> SCSI 2 tape drive:
>>>>> File number=0, block number=0, partition=0.
>>>>> Tape block size 0 bytes. Density code 0x32 (AIT-3).
>>>>> Soft error count since last status=0
>>>>> General status bits on (4101):
>>>>>  BOT ONLINE IM_REP_EN
>>>>>
>>>>> Unfortunately I got an error this morning :
>>>>>
>>>>> 01-Feb 03:27 phoenix-sd: Volume "Full-Tape-0001" previously written, 
>>>>> moving to end of data.
>>>>> 01-Feb 03:27 phoenix-sd: canis-job.2007-02-01_03.00.00 Error: Unable to 
>>>>> position to end of data on device "sony SDX-700C" (/dev/nst0): 
>>>>> ERR=dev.c:839 ioctl MTEOM error on "sony SDX-700C" (/dev/nst0). 
>>>>> ERR=Input/output error.
>>>>>
>>>>> 01-Feb 03:27 phoenix-sd: Marking Volume "Full-Tape-0001" in Error in 
>>>>> Catalog.
>>>>> 
>>>>>   
>>>>>   
>>>> I'm a little unclear on certain aspects of this ...
>>>>
>>>> Have you made other backups successfully?  Did you try a different tape?
>>>> Sometimes tapes ship from the factory with problems.  Is this the same
>>>> tape that you used for the tests?
>>>>
>>>>   
>>>> 
>>>> 
>>> Yes, it runs fine since one month, and it's the same tape I've used for 
>>> the tests (after the test I made an mt -f /dev/nst0 rewind ; mt -f 
>>> /dev/nst0 weof ; mt -f /dev/nst0 rewind ; $> bconsole ; *label).
>>>
>>>

Re: [Bacula-users] MTEOM

2007-03-31 Thread Mike Seda
I have a similar MTEOM error with one of my tapes. My drive and other 
tapes are totally fine. I can even restore from this tape, but just 
cannot append to it. The exact error is below:

18-Mar 02:17 uwharrie-sd: Volume "MSR122L3" previously written, moving 
to end of data.
18-Mar 02:35 uwharrie-sd: tsali.2007-03-17_23.05.00 Error: Unable to 
position to end of data on device "Drive-1" (/dev/nst0): ERR=dev.c:922 
ioctl MTEOM error on "Drive-1" (/dev/nst0). ERR=Input/output error.
18-Mar 02:35 uwharrie-sd: Marking Volume "MSR122L3" in Error in Catalog.

My setup is:
Bacula 2.0.1
RHEL 4 AS
Quantum PX502 (FC, 1 HP LTO-3 Drive)

Any thoughts?


Julien Cigar wrote:
> Bill Moran wrote:
>   
>> In response to Julien Cigar <[EMAIL PROTECTED]>:
>>   
>> 
>>> We've bought a new tape drive here, a Sony SDX-700C (AIT-3), with new 
>>> tapes. I'm running Bacula 1.38.11 under (Debian) Linux (2.6.18)
>>> I've run the btape test / btape fill with success. The tape is 
>>> initialized correctly (variable block mode + ...) :
>>>
>>> phoenix:/home/jcigar# cat /etc/stinit.def
>>> {buffer-writes read-ahead async-writes}
>>>
>>> manufacturer=SONY model = "SDX-700C" {
>>> mode1 blocksize=0 compression=1
>>> }
>>>
>>> phoenix:/home/jcigar# mt -f /dev/nst0 status
>>> SCSI 2 tape drive:
>>> File number=0, block number=0, partition=0.
>>> Tape block size 0 bytes. Density code 0x32 (AIT-3).
>>> Soft error count since last status=0
>>> General status bits on (4101):
>>>  BOT ONLINE IM_REP_EN
>>>
>>> Unfortunately I got an error this morning :
>>>
>>> 01-Feb 03:27 phoenix-sd: Volume "Full-Tape-0001" previously written, 
>>> moving to end of data.
>>> 01-Feb 03:27 phoenix-sd: canis-job.2007-02-01_03.00.00 Error: Unable to 
>>> position to end of data on device "sony SDX-700C" (/dev/nst0): 
>>> ERR=dev.c:839 ioctl MTEOM error on "sony SDX-700C" (/dev/nst0). 
>>> ERR=Input/output error.
>>>
>>> 01-Feb 03:27 phoenix-sd: Marking Volume "Full-Tape-0001" in Error in 
>>> Catalog.
>>> 
>>>   
>> I'm a little unclear on certain aspects of this ...
>>
>> Have you made other backups successfully?  Did you try a different tape?
>> Sometimes tapes ship from the factory with problems.  Is this the same
>> tape that you used for the tests?
>>
>>   
>> 
>
> Yes, it has run fine for a month, and it's the same tape I've used for 
> the tests (after the test I made an mt -f /dev/nst0 rewind ; mt -f 
> /dev/nst0 weof ; mt -f /dev/nst0 rewind ; $> bconsole ; *label).
>
> I will try with another tape to see ...
>
>   
>> We had a drive go bad on us and the errors that Bacula was spewing out
>> were the most cryptic things you could imagine.  It became obvious when
>> we visited the datacenter, as the drive had a flashing error code on
>> the front.
>>
>>   
>> 
>
>
>   


-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula on FC6 ?

2007-03-30 Thread Mike Seda
Hi All,
Does anyone know if the bacula-client FC6 x86_64 rpms work with RHEL5?
Best,
Mike


Felix Schwarz wrote:
> Arnaud Mombrial wrote:
>   
>> Does anyone knows if there would be (or is there already ??) an fc6 package 
>> for bacula-client ?
>> 
>
> Sorry for the delay, I was *extremely* busy last month. I will build an fc6 
> x86 package this week.
> x86_64 as soon as Xen is working on my new, shiny Dell build monster ;-)
>
> fs
>
> PS: fc5 x64 will be built next week if Xen works till then.
>
>
> -
> Take Surveys. Earn Cash. Influence the Future of IT
> Join SourceForge.net's Techsay panel and you'll get the chance to share your
> opinions on IT & business topics through brief surveys - and earn cash
> http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>   


-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] backup rates

2007-03-20 Thread Mike Seda
Hi Sven,
I wouldn't NFS-mount anything on my backup server. I would just back up 
the NFS share on the server side, i.e. run bacula-fd on the NFS server 
and add the NFS shared dir to the NFS server's FileSet.
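A minimal sketch of that server-side FileSet (the export path and resource name are assumptions):

```
FileSet {
  Name = "nfs-server-data"
  Include {
    Options {
      signature = MD5
    }
    File = /export/data   # hypothetical: the directory exported over NFS
  }
}
```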

Btw, last night my clients got the following rates in MB/s: 
4, 7, 14, 21, 10
These rates vary due to different network connectivity, i.e. WAN vs LAN, 
quality of uplinks between switches, backplane bandwidth of switches...

Fyi, my backup server is:
Dell PE 2650 (1 x 73 GB RAID 1, 1 x 450 GB RAID 0 , 4 GB RAM, 2 GB swap)
Bacula 2.0.1 (400 GB of Data-Spooling configured, Current Pools=Weekly, 
Monthly, Scratch, Migrate, Archive)
RHEL 4 AS
Quantum PX502 (LTO-3, FC, 1-Drive)

Mike


[EMAIL PROTECTED] wrote:
> Hi,
>
> I would like to use bacula for backup, but the observed perfomance is
> not good enough for our use case.
> The backup-server running Director, SD and FD (version 1.38.11 on 
> Debian unstable) is an AMD Athlon(tm) XP 1500+ with 1 GB of ram.
> It reads the files via Gbit-ethernet from an NFS-share.
> The storage device is an autochanger-library with an HP Ultrium-3 
> tape drive, which is attached to the server via an Adaptec 
> AIC-7892A U160/m-SCSI-card.
>
> The problem is that Bacula in a test run gives a rate of up to 18 MB/s 
> with a single large file served from NFS. Backing up approx. 7.5 
> million files with a total size of 920GB brings down the rate to
> approx. 6MB/s (md5 signatures and no spooling).
> I can hardly estimate if this is good or bad.
>
> Well, I know that a rate of 6 MB/s is too slow by a factor of 3 to 4 
> for our goal of backing up 1-1.5TB in less than 20hours.
>
> I'm sure that I should upgrade the hardware to get more performance,
> but before doing so, I would like to hear from others, 
> what hardware they use for backup, how their setup is and which 
> backup rates they achieve.
>
>
> Bye,
> Sven
>
>
>
>
>
> -
> Take Surveys. Earn Cash. Influence the Future of IT
> Join SourceForge.net's Techsay panel and you'll get the chance to share your
> opinions on IT & business topics through brief surveys-and earn cash
> http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>   


-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] damaged database (was Re: possible bug found)

2007-03-17 Thread Mike Seda
Hi Kern,
Thanks for these tips... I didn't see them until just now... I actually 
fixed the database already based on documented changes that I had stored 
in my Wiki and bacula emails (containing JobId and VolumeName)... I 
verified the changes by doing a few restores and the aforementioned 
double migration jobs (that seemed to be the ultimate test)... I learned 
my lesson... Your support has been above and beyond the call of duty... 
You (and the entire bacula community) are great... 8-) Btw, I have 
turned on Binary Logging in MySQL so that I can replay the logs in the 
future.
Thx,
Mike


Kern Sibbald wrote:
> On Thursday 15 March 2007 16:17, Mike Seda wrote:
>   
>> Ok.. I have found all references to MediaId... The following sql
>> statements were executed:
>> select MediaId, JobId from JobMedia;
>> select MediaId from Media;
>> select MediaId from CDImages;
>> select MediaId from LocationLog;
>>
>> Since I also changed PoolId, I did:
>> select PoolId from Pool;
>> select PoolId from Job;
>> select PoolId from Media;
>> select PoolId,JobId from Job;
>>
>> I will search through the output of these commands and see if I can
>> bring the database back to a sane state... Am I on the right track?
>> 
>
> Judging by your previous email, the database is probably in a very 
> inconsistent state, as you have mentioned.  Unless you remember *exactly* all 
> the changes that you made, it will be very difficult to properly undo them.
>
> My personal approach would probably be to go the conservative route:
>
> 1. Make a listing of all the Volumes and Jobs in the current database for
> future reference.
> 2. Save an "archival" copy of the current database.
> 3. Create a new clean database, 
> 4. Add new Volumes to the new database
> 5. Do a Full backup of everything.  
>
> Then hope you don't need anything from the old database.  
>
> If you do need something from the old database, there are several 
> possibilities: 
> 1. Temporarily restore the old database, and attempt a restore crossing your 
> fingers.
> 2. If the above fails, attempt to manually reset the mediaIds in the old 
> temporarily restored database as you propose doing (I personally wouldn't do 
> this).
> 3. Scan in the needed volumes into the new "clean" database -- terribly time 
> consuming.
>
> At some point manually recycle the old Volumes when they would have been 
> automatically recycled, and start re-using them in the new catalog.
>
> Good luck.
>
> Kern
>
>
>
>   
>> Thx,
>> M
>>
>> Mike Seda wrote:
>> 
>>> Hi All,
>>> Kern is right... I should have never changed those MediaIds... I
>>> actually remember changing a few other things such as renaming the
>>> Default pool to Weekly and resetting the auto_increment value on Media
>>> and Pool tables. I also remember changing something about PoolId. In
>>> hind-sight, I don't know what I was thinking (probably just being too
>>> picky as always)... It sounds like I hosed my database. I am so worried
>>> about this... Is there anyway to restore my database? Btw, these changes
>>> were made when I originally setup bacula... So, I cannot just restore
>>> the database from a previous dumpfile... The only way that I can forsee
>>> restoring the database is to do the following:
>>>
>>> 1) backup the current database to a dumpfile
>>> 2) drop the current database
>>> 3) recreate the database using the initial bacula sql scripts provided
>>> with the distribution
>>> 4) bscan in all of my tapes
>>>
>>> ... Am I wrong? I hope so, because this sounds like a grueling
>>> process... Basically, is there a better way to fix my database, such as
>>> using some sql-hackery?
>>>
>>> My setup is:
>>> MySQL 4.1.20-1
>>> Bacula 2.0.1 (Current Pools=Weekly, Scratch, Migrate, Archive)
>>> RHEL 4 AS
>>>
>>> Regards,
>>> Mike
>>>
>>> Kern Sibbald wrote:
>>>   
>>>> On Thursday 15 March 2007 04:38, [EMAIL PROTECTED] wrote:
>>>> 
>>>>> Upon running a migration job, bacula asked me to load one of my
>>>>> cleaning tapes... Then, I updated the MediaId in the Media table to
>>>>> correspond with the tape that bacula should have been looking for...
>>>>> This caused bacula to successfully complete the migration job...I think
>>>>> bacula should have been looking for a specific VolumeName *not*
>>>>> MediaId... I like bacula, but thi
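Step 1 of Kern's conservative route above can be captured with a few catalog queries. A minimal sketch, assuming the stock Bacula MySQL schema (column names may differ slightly between versions):

```sql
-- Snapshot what the catalog currently believes, for future reference.
SELECT MediaId, VolumeName, VolStatus, PoolId FROM Media;
SELECT JobId, Name, Level, StartTime, JobStatus FROM Job;
SELECT JobId, MediaId, FirstIndex, LastIndex FROM JobMedia;
```

Redirecting the output of these to a file alongside the database dump gives a human-readable record even if the dump itself later proves unusable.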

Re: [Bacula-users] Migration: last full backup to archive?

2007-03-17 Thread Mike Seda
Right. After the double migration, I verified the job with a small 
restore... Everything seemed fine...


Kern Sibbald wrote:
> On Thursday 15 March 2007 20:44, Mike Seda wrote:
>   
>> Fyi, my double migration was successful... First I ran a job called T2D
>> and then ran a job called D2T... Basically, I used a disk array as
>> temporary storage between the two tape pools... This was necessary since
>> I only have one tape drive
>> 
>
> It is nice to hear that migration is actually being used.  You might try 
> a "test" restore from the final tape just to be sure.  I don't expect any 
> problems, but with something new like this, it is always comforting to have a 
> confirmation.
>
>   
>> Kern Sibbald wrote:
>> 
>>> On Saturday 10 March 2007 19:25, Mike Seda wrote:
>>>   
>>>> Hi Kern,
>>>> My proposed method may be feasible, but do you recommend a more elegant
>>>> solution to accomplish a migration job between two different tape pools
>>>> based on the storage resources at my disposal? I just want to make sure
>>>> before I make the necessary changes to my SAN fabric.
>>>> 
>>> I'm not the best person to make such recommendations since I am not using
>>> Migration in production as hopefully some people are.  Perhaps someone
>>> else from the list will have a better idea.
>>>
>>>   
>>>> Regards,
>>>> Mike
>>>>
>>>> Kern Sibbald wrote:
>>>> 
>>>>> On Friday 09 March 2007 02:33, Mike Seda wrote:
>>>>>   
>>>>>> Hi All,
>>>>>> I too have switched off a few machines lately, and wish to save their
>>>>>> last complete backup to an Archive pool. Basically, I want to migrate
>>>>>> a few jobs from my "Weekly" pool to my "Archive" pool. Fortunately, I
>>>>>> too have one last full backup in the system, but since I only have one
>>>>>> tape drive migration is a bit tricky... I just wanted to run my
>>>>>> temporary solution by you before I implement it (btw, a second tape
>>>>>> drive is in our budget for next fiscal year):
>>>>>>
>>>>>> 1) Mount a 1 TB FC disk array (currently not being utilized) on the
>>>>>> bacula server
>>>>>> 2) Tell bacula the array is of Media Type "File" (my tape library is
>>>>>> Media Type "Tape"), and put it in the "Disk" pool.
>>>>>> 3) Do a migrate job from my "Weekly" pool to the "Disk" Pool.
>>>>>> 4) Do a migrate job from my "Disk" pool to the "Archive" Pool.
>>>>>>
>>>>>> Does this sound like a reasonable solution until I can spring for
>>>>>> another tape drive?...
>>>>>> 
>>>>> Obviously migrating with only one tape drive is not optimal, but what
>>>>> you describe sounds feasible.  Be sure to mark the tapes you are taking
>>>>> offsite as Archive.
>>>>>
>>>>> We will be very interested to hear how this double migration works for
>>>>> you ...
>>>>>
>>>>>   
>>>>>> Fyi, the array is fast (12 x 15 K FC disks in
>>>>>> RAID 5)...
>>>>>>
>>>>>> Fyi, my setup is:
>>>>>> Dell PE 2650 (1 x 73 GB RAID 1, 1 x 450 GB RAID 0 )
>>>>>> Bacula 2.0.1 (400 GB of Data-Spooling configured to mitigate tape
>>>>>> shoe-shine, Current Pools=Weekly, Scratch)
>>>>>> RHEL 4 AS
>>>>>> Quantum PX502 (LTO-3, FC, 1-Drive)
>>>>>>
>>>>>> Regards,
>>>>>> Mike
>>>>>>
>>>>>> Alan Brown wrote:
>>>>>> 
>>>>>>> I have a number of machines which have been switched off in the last
>>>>>>> 6 months and want to save their last complete backup to an archive
>>>>>>> pool in order to both free up the main pool and to ensure there's one
>>>>>>> last copy of everything.
>>>>>>>
>>>>>>> In some cases, there is one last full backup in the system, so all
>>>>>>> that'd be required is a single jobID migration
>>>>>

Re: [Bacula-users] Migration: last full backup to archive?

2007-03-15 Thread Mike Seda
Fyi, my double migration was successful... First I ran a job called T2D 
and then ran a job called D2T... Basically, I used a disk array as 
temporary storage between the two tape pools... This was necessary since 
I only have one tape drive


Kern Sibbald wrote:
> On Saturday 10 March 2007 19:25, Mike Seda wrote:
>   
>> Hi Kern,
>> My proposed method may be feasible, but do you recommend a more elegant
>> solution to accomplish a migration job between two different tape pools
>> based on the storage resources at my disposal? I just want to make sure
>> before I make the necessary changes to my SAN fabric.
>> 
>
> I'm not the best person to make such recommendations since I am not using 
> Migration in production as hopefully some people are.  Perhaps someone else 
> from the list will have a better idea.
>
>   
>> Regards,
>> Mike
>>
>> Kern Sibbald wrote:
>> 
>>> On Friday 09 March 2007 02:33, Mike Seda wrote:
>>>   
>>>> Hi All,
>>>> I too have switched off a few machines lately, and wish to save their
>>>> last complete backup to an Archive pool. Basically, I want to migrate a
>>>> few jobs from my "Weekly" pool to my "Archive" pool. Fortunately, I too
>>>> have one last full backup in the system, but since I only have one tape
>>>> drive migration is a bit tricky... I just wanted to run my temporary
>>>> solution by you before I implement it (btw, a second tape drive is in
>>>> our budget for next fiscal year):
>>>>
>>>> 1) Mount a 1 TB FC disk array (currently not being utilized) on the
>>>> bacula server
>>>> 2) Tell bacula the array is of Media Type "File" (my tape library is
>>>> Media Type "Tape"), and put it in the "Disk" pool.
>>>> 3) Do a migrate job from my "Weekly" pool to the "Disk" Pool.
>>>> 4) Do a migrate job from my "Disk" pool to the "Archive" Pool.
>>>>
>>>> Does this sound like a reasonable solution until I can spring for
>>>> another tape drive?...
>>>> 
>>> Obviously migrating with only one tape drive is not optimal, but what you
>>> describe sounds feasible.  Be sure to mark the tapes you are taking
>>> offsite as Archive.
>>>
>>> We will be very interested to hear how this double migration works for
>>> you ...
>>>
>>>   
>>>> Fyi, the array is fast (12 x 15 K FC disks in
>>>> RAID 5)...
>>>>
>>>> Fyi, my setup is:
>>>> Dell PE 2650 (1 x 73 GB RAID 1, 1 x 450 GB RAID 0 )
>>>> Bacula 2.0.1 (400 GB of Data-Spooling configured to mitigate tape
>>>> shoe-shine, Current Pools=Weekly, Scratch)
>>>> RHEL 4 AS
>>>> Quantum PX502 (LTO-3, FC, 1-Drive)
>>>>
>>>> Regards,
>>>> Mike
>>>>
>>>> Alan Brown wrote:
>>>> 
>>>>> I have a number of machines which have been switched off in the last 6
>>>>> months and want to save their last complete backup to an archive pool
>>>>> in order to both free up the main pool and to ensure there's one last
>>>>> copy of everything.
>>>>>
>>>>> In some cases, there is one last full backup in the system, so all
>>>>> that'd be required is a single jobID migration
>>>>>
>>>>> In other cases, the migrated last full backup would have to be made up
>>>>> of full+diff+inc
>>>>>
>>>>> Does anyone have cookiecutter instructions for this?
>>>>>
>>>>> If so, they should be added to the manual's Migration chapter.
>>>>>
>>>>> AB
>>>>>
>>>>>
>>>>> ---
>>>>> -- Take Surveys. Earn Cash. Influence the Future of IT
>>>>> Join SourceForge.net's Techsay panel and you'll get the chance to share
>>>>> your opinions on IT & business topics through brief surveys-and earn
>>>>> cash
>>>>> http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVD
>>>>> EV ___
>>>>> Bacula-users mailing list
>>>>> Bacula-users@lists.sourceforge.net
>>>>> https://lists.sourceforge.net/lists/listinfo/bacula-u

Re: [Bacula-users] damaged database (was Re: possible bug found)

2007-03-15 Thread Mike Seda
Ok... I have found all references to MediaId... The following SQL 
statements were executed:
select MediaId, JobId from JobMedia;
select MediaId from Media;
select MediaId from CDImages;
select MediaId from LocationLog;

Since I also changed PoolId, I did:
select PoolId from Pool;
select PoolId from Job;
select PoolId from Media;
select PoolId,JobId from Job;

I will search through the output of these commands and see if I can 
bring the database back to a sane state... Am I on the right track?
Thx,
M
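As a cross-check while searching that output, one way to spot dangling references left by the edited MediaIds and PoolIds is a LEFT JOIN against the parent tables. This is a hedged sketch, assuming the stock Bacula schema; any rows returned point at broken links:

```sql
-- JobMedia rows whose MediaId no longer matches any Media row.
SELECT jm.JobId, jm.MediaId
  FROM JobMedia jm LEFT JOIN Media m ON jm.MediaId = m.MediaId
 WHERE m.MediaId IS NULL;

-- Media rows pointing at a PoolId that no longer exists.
SELECT mv.MediaId, mv.VolumeName, mv.PoolId
  FROM Media mv LEFT JOIN Pool p ON mv.PoolId = p.PoolId
 WHERE p.PoolId IS NULL;
```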


Mike Seda wrote:
> Hi All,
> Kern is right... I should have never changed those MediaIds... I 
> actually remember changing a few other things such as renaming the 
> Default pool to Weekly and resetting the auto_increment value on Media 
> and Pool tables. I also remember changing something about PoolId. In 
> hindsight, I don't know what I was thinking (probably just being too 
> picky as always)... It sounds like I hosed my database. I am so worried 
> about this... Is there any way to restore my database? Btw, these changes 
> were made when I originally set up Bacula... So, I cannot just restore 
> the database from a previous dumpfile... The only way that I can foresee 
> restoring the database is to do the following:
>
> 1) backup the current database to a dumpfile
> 2) drop the current database
> 3) recreate the database using the initial bacula sql scripts provided 
> with the distribution
> 4) bscan in all of my tapes
>
> ... Am I wrong? I hope so, because this sounds like a grueling 
> process... Basically, is there a better way to fix my database, such as 
> using some sql-hackery?
>
> My setup is:
> MySQL 4.1.20-1
> Bacula 2.0.1 (Current Pools=Weekly, Scratch, Migrate, Archive)
> RHEL 4 AS
>
> Regards,
> Mike
>
>
> Kern Sibbald wrote:
>   
>> On Thursday 15 March 2007 04:38, [EMAIL PROTECTED] wrote:
>>   
>> 
>>> Upon running a migration job, bacula asked me to load one of my
>>> cleaning tapes... Then, I updated the MediaId in the Media table to
>>> correspond with the tape that bacula should have been looking for...
>>> This caused bacula to successfully complete the migration job...I think
>>> bacula should have been looking for a specific VolumeName *not*
>>> MediaId... I like bacula, but this is just wrong... Btw, this problem
>>> occurred because I changed some of my MediaIds a while back (I can't
>>> help it, SQL is fun and I'm a control freak :-D)... But still, I think
>>> bacula should have asked me to load tape "MSR100L3" or whatever... Ya
>>> know?
>>> 
>>>   
>> If you modified a MediaId in the Media table and you don't understand the 
>> full 
>> details of how the database is organized and linked together (e.g. where to 
>> find *all* references to the MediaId), you have probably damaged your 
>> database.
>>
>> While certain fields can be modified directly via SQL, the MediaId is not one of them. 
>> For the record, I discourage all users from doing similar things, and I can 
>> assure you, it is not something that I would personally do.
>>
>> To the best of my knowledge the only place Bacula ever asks for a MediaId is 
>> when it asks you to select a particular Volume during the update command 
>> (and 
>> possibly some other ones).  
>>
>> If you decide to respond to this with a bit more concrete information, 
>> please 
>> read the Support page on the bacula web site (www.bacula.org -> Support) 
>> first.
>>
>>   
>> 
>
>
>   




[Bacula-users] damaged database (was Re: possible bug found)

2007-03-15 Thread Mike Seda
Hi All,
Kern is right... I should have never changed those MediaIds... I 
actually remember changing a few other things such as renaming the 
Default pool to Weekly and resetting the auto_increment value on Media 
and Pool tables. I also remember changing something about PoolId. In 
hindsight, I don't know what I was thinking (probably just being too 
picky as always)... It sounds like I hosed my database. I am so worried 
about this... Is there any way to restore my database? Btw, these changes 
were made when I originally set up Bacula... So, I cannot just restore 
the database from a previous dumpfile... The only way that I can foresee 
restoring the database is to do the following:

1) backup the current database to a dumpfile
2) drop the current database
3) recreate the database using the initial bacula sql scripts provided 
with the distribution
4) bscan in all of my tapes

... Am I wrong? I hope so, because this sounds like a grueling 
process... Basically, is there a better way to fix my database, such as 
using some sql-hackery?
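The four numbered steps above might look roughly like the following. The script locations, database name, volume name, and tape device path are assumptions for a typical source install, not tested commands:

```sh
# Hedged sketch of the rebuild; adjust paths/names for your installation.
mysqldump -f --opt bacula > bacula_catalog_backup.sql    # 1) dump current db
mysql -e "DROP DATABASE bacula;"                          # 2) drop it
./create_mysql_database                                   # 3) recreate with the
./make_mysql_tables                                       #    SQL scripts shipped
./grant_mysql_privileges                                  #    with Bacula
bscan -v -s -m -c bacula-sd.conf -V Vol001 /dev/nst0      # 4) repeat per tape
```

Step 4 is the grueling part: bscan has to read every tape end to end, so expect it to take on the order of a full backup cycle.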

My setup is:
MySQL 4.1.20-1
Bacula 2.0.1 (Current Pools=Weekly, Scratch, Migrate, Archive)
RHEL 4 AS

Regards,
Mike


Kern Sibbald wrote:
> On Thursday 15 March 2007 04:38, [EMAIL PROTECTED] wrote:
>   
>> Upon running a migration job, bacula asked me to load one of my
>> cleaning tapes... Then, I updated the MediaId in the Media table to
>> correspond with the tape that bacula should have been looking for...
>> This caused bacula to successfully complete the migration job...I think
>> bacula should have been looking for a specific VolumeName *not*
>> MediaId... I like bacula, but this is just wrong... Btw, this problem
>> occurred because I changed some of my MediaIds a while back (I can't 
>> help it, SQL is fun and I'm a control freak :-D)... But still, I think 
>> bacula should have asked me to load tape "MSR100L3" or whatever... Ya
>> know?
>> 
>
> If you modified a MediaId in the Media table and you don't understand the 
> full 
> details of how the database is organized and linked together (e.g. where to 
> find *all* references to the MediaId), you have probably damaged your 
> database.
>
> While certain fields can be modified directly via SQL, the MediaId is not one of them. 
> For the record, I discourage all users from doing similar things, and I can 
> assure you, it is not something that I would personally do.
>
> To the best of my knowledge the only place Bacula ever asks for a MediaId is 
> when it asks you to select a particular Volume during the update command (and 
> possibly some other ones).  
>
> If you decide to respond to this with a bit more concrete information, please 
> read the Support page on the bacula web site (www.bacula.org -> Support) 
> first.
>
>   




[Bacula-users] legal "Selection Pattern" for migration job

2007-03-14 Thread Mike Seda
Is the following legal syntax for a migration job?

Selection Type = SQLQuery
Selection Pattern = "
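For reference, with Selection Type = SQLQuery, Bacula expects the pattern to be an SQL query that returns a list of JobIds. A hedged illustration (the pool name and status values here are only examples, not the poster's actual query):

```conf
Selection Type = SQLQuery
Selection Pattern = "SELECT DISTINCT Job.JobId FROM Job,Pool WHERE Job.PoolId=Pool.PoolId AND Pool.Name='Weekly' AND Job.Type='B' AND Job.JobStatus='T'"
```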


Re: [Bacula-users] Difference Between "Archive" and "Read-Only" Volume Status?

2007-03-13 Thread Mike Seda
Well, I think VolStatus=Archive is implemented. The latest 
rel-bacula.pdf says:

"If the Volume has another status, such as Archive, Read-Only, Disabled, 
Busy, or
Cleaning, the Volume status will not be changed to Purged."

And, Kern told me to "Be sure to mark the tapes you are taking offsite 
as Archive."


Jason King wrote:
> Sure :)
>
> Ryan Novosielski wrote:
>   
>>
>> I could be wrong, but I don't believe either one is implemented. :-P Do
>> I win? :)
>>
>> =R
>>
>> Jason King wrote:
>>   
>> 
>>> Not 100% sure, but archived volumes probably stay archived until their 
>>> retention period is up, then they would go into recycle which would 
>>> then allow bacula to use them. Read-Only is set by the user and bacula 
>>> will never change that attribute.
>>>
>>> Mike Seda wrote:
>>> 
>>>   
>>>> I have the same question:
>>>>
>>>> "Could someone tell me if there is a difference in the way that Bacula 
>>>> handles volumes with the status "Archive" and the status "Read-Only"?"
>>>>
>>>> Did this ever get answered?
>>>>
>>>>
>>>> Neal Gamradt wrote:
>>>>   
>>>>   
>>>> 
>>>>> All,
>>>>>
>>>>> Could someone tell me if there is a difference in the way that Bacula 
>>>>> handles volumes with the status "Archive" and the status "Read-Only"?  
>>>>> Does any data get removed from the database with "Archive"?  I have 
>>>>> some files I want to permanently archive on a volume that I am going 
>>>>> to take out of rotation, but I don't want to remove them from the 
>>>>> database, that way I can restore quickly if I need to.
>>>>>
>>>>> Also, does "Disabled" remove the information for that volume from the 
>>>>> database?  Thanks!
>>>>>
>>>>> Running Redhat Enterprise 4.0 with Bacula 1.38.11 and a Superloader 3 
>>>>> with an S4 tape drive.
>>>>>
>>>>> Neal
>>>>> 
>>>>>   
>> - --
>>   _  _ _  _ ___  _  _  _
>>  |Y#| |  | |\/| |  \ |\ |  | |Ryan Novosielski - Systems Programmer III
>>  |$&| |__| |  | |__/ | \| _| |[EMAIL PROTECTED] - 973/972.0922 (2-0922)
>>  \__/ Univ. of Med. and Dent.|IST/AST - NJMS Medical Science Bldg - C630
>>
>>   
>> 
>
>
>   




Re: [Bacula-users] Difference Between "Archive" and "Read-Only" Volume Status?

2007-03-13 Thread Mike Seda
I have the same question:

"Could someone tell me if there is a difference in the way that Bacula 
handles volumes with the status "Archive" and the status "Read-Only"?"

Did this ever get answered?


Neal Gamradt wrote:
> All,
>
> Could someone tell me if there is a difference in the way that Bacula 
> handles volumes with the status "Archive" and the status "Read-Only"?  
> Does any data get removed from the database with "Archive"?  I have 
> some files I want to permanently archive on a volume that I am going 
> to take out of rotation, but I don't want to remove them from the 
> database, that way I can restore quickly if I need to.
>
> Also, does "Disabled" remove the information for that volume from the 
> database?  Thanks!
>
> Running Redhat Enterprise 4.0 with Bacula 1.38.11 and a Superloader 3 
> with an S4 tape drive.
>
> Neal
>
>
>   




Re: [Bacula-users] Migration: last full backup to archive?

2007-03-10 Thread Mike Seda
Hi Kern,
My proposed method may be feasible, but do you recommend a more elegant 
solution to accomplish a migration job between two different tape pools 
based on the storage resources at my disposal? I just want to make sure 
before I make the necessary changes to my SAN fabric.
Regards,
Mike


Kern Sibbald wrote:
> On Friday 09 March 2007 02:33, Mike Seda wrote:
>   
>> Hi All,
>> I too have switched off a few machines lately, and wish to save their
>> last complete backup to an Archive pool. Basically, I want to migrate a
>> few jobs from my "Weekly" pool to my "Archive" pool. Fortunately, I too
>> have one last full backup in the system, but since I only have one tape
>> drive migration is a bit tricky... I just wanted to run my temporary
>> solution by you before I implement it (btw, a second tape drive is in
>> our budget for next fiscal year):
>>
>> 1) Mount a 1 TB FC disk array (currently not being utilized) on the
>> bacula server
>> 2) Tell bacula the array is of Media Type "File" (my tape library is
>> Media Type "Tape"), and put it in the "Disk" pool.
>> 3) Do a migrate job from my "Weekly" pool to the "Disk" Pool.
>> 4) Do a migrate job from my "Disk" pool to the "Archive" Pool.
>>
>> Does this sound like a reasonable solution until I can spring for
>> another tape drive?... 
>> 
>
> Obviously migrating with only one tape drive is not optimal, but what you 
> describe sounds feasible.  Be sure to mark the tapes you are taking offsite 
> as Archive.  
>
> We will be very interested to hear how this double migration works for you ...
>
>
>
>   
>> Fyi, the array is fast (12 x 15 K FC disks in 
>> RAID 5)...
>>
>> Fyi, my setup is:
>> Dell PE 2650 (1 x 73 GB RAID 1, 1 x 450 GB RAID 0 )
>> Bacula 2.0.1 (400 GB of Data-Spooling configured to mitigate tape
>> shoe-shine, Current Pools=Weekly, Scratch)
>> RHEL 4 AS
>> Quantum PX502 (LTO-3, FC, 1-Drive)
>>
>> Regards,
>> Mike
>>
>> Alan Brown wrote:
>> 
>>> I have a number of machines which have been switched off in the last 6
>>> months and want to save their last complete backup to an archive pool in
>>> order to both free up the main pool and to ensure there's one last copy
>>> of everything.
>>>
>>> In some cases, there is one last full backup in the system, so all that'd
>>> be required is a single jobID migration
>>>
>>> In other cases, the migrated last full backup would have to be made up of
>>> full+diff+inc
>>>
>>> Does anyone have cookiecutter instructions for this?
>>>
>>> If so, they should be added to the manual's Migration chapter.
>>>
>>> AB
>>>
>>>
>>>   
>> 




Re: [Bacula-users] Migration: last full backup to archive?

2007-03-08 Thread Mike Seda
Hi All,
I too have switched off a few machines lately, and wish to save their 
last complete backup to an Archive pool. Basically, I want to migrate a 
few jobs from my "Weekly" pool to my "Archive" pool. Fortunately, I too 
have one last full backup in the system, but since I only have one tape 
drive migration is a bit tricky... I just wanted to run my temporary 
solution by you before I implement it (btw, a second tape drive is in 
our budget for next fiscal year):

1) Mount a 1 TB FC disk array (currently not being utilized) on the 
bacula server
2) Tell bacula the array is of Media Type "File" (my tape library is 
Media Type "Tape"), and put it in the "Disk" pool.
3) Do a migrate job from my "Weekly" pool to the "Disk" Pool.
4) Do a migrate job from my "Disk" pool to the "Archive" Pool.

Does this sound like a reasonable solution until I can spring for 
another tape drive?... Fyi, the array is fast (12 x 15 K FC disks in 
RAID 5)...
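The plan above might translate into bacula-dir.conf roughly as follows. The "T2D" job name follows the thread, but every other resource name here is an assumption and this is an untested sketch, not a working configuration:

```conf
# Untested sketch: the source Pool's "Next Pool" names the migration target.
Pool {
  Name = Weekly
  Pool Type = Backup
  Storage = LTO3-Library     # assumed tape storage resource name
  Next Pool = Disk           # T2D migrates Weekly -> Disk
}
Job {
  Name = "T2D"
  Type = Migrate
  Pool = Weekly              # jobs are selected from this pool
  Selection Type = Job
  Selection Pattern = ".*"   # or a narrower job-name regex
  Client = bacula-fd         # assumed; required by the parser
  FileSet = "Full Set"       # assumed
  Messages = Standard
}
# A second Pool/Job pair (Disk with Next Pool = Archive, job "D2T")
# performs the second hop back onto tape.
```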

Fyi, my setup is:
Dell PE 2650 (1 x 73 GB RAID 1, 1 x 450 GB RAID 0 )
Bacula 2.0.1 (400 GB of Data-Spooling configured to mitigate tape 
shoe-shine, Current Pools=Weekly, Scratch)
RHEL 4 AS
Quantum PX502 (LTO-3, FC, 1-Drive)

Regards,
Mike


Alan Brown wrote:
> I have a number of machines which have been switched off in the last 6 
> months and want to save their last complete backup to an archive pool in 
> order to both free up the main pool and to ensure there's one last copy of 
> everything.
>
> In some cases, there is one last full backup in the system, so all that'd 
> be required is a single jobID migration
>
> In other cases, the migrated last full backup would have to be made up of 
> full+diff+inc
>
> Does anyone have cookiecutter instructions for this?
>
> If so, they should be added to the manual's Migration chapter.
>
> AB
>
>
>   




Re: [Bacula-users] Bacula Rescue - Solaris 10 - getdiskinfo errors - error on route command

2007-02-08 Thread Mike Seda
Oops... yeah... I have U2 (06/06)... I am in production and would rather not 
go to U3 just yet


Ryan Novosielski wrote:
>
> I haven't had the time to work on it, speaking for myself. I almost did
> (and had the hardware I was planning to do it on repaired), but didn't
> actually get it out the door.
>
> FYI, though, your versions are screwed up. U2 was 06/06, U3 is 11/06.
> Not sure which one you mean, though, I would use U3 if I were you.
>
> Mike Seda wrote:
>   
>> Hi All,
>> Has there been any progress regarding the creation of a Solaris 10 
>> (sparc) rescue cd? I am very interested in testing for you if there is 
>> any code available. I have a SunFire V440 (sparc) box reserved for 
>> testing. My plan is to test a bare metal restore of my production 
>> SunFire V440 (Solaris 10 U2 (10/06)) onto this test box.
>> Thx,
>> Mike
>>
>>
>> Ryan Novosielski wrote:
>> I think I had your e-mail address wrong anyway -- I was reading out of
>> the top of that CD file but it seemed to be missing something.
>>
>> I did mess with your CD instructions some, just like you I've been
>> swamped. The machine I was attempting to test restores on subsequently
>> had a controller failure. Sun dispatched the wrong part and the holiday
>> was upon us.
>>
>> Anyway, sorry if I added any chatter to your mailbox! :)
>>
>> [EMAIL PROTECTED] wrote:
>>   
>> 
>>>>> Sorry everyone I have just been swamped at work and haven't had much
>>>>> time for anything else. I messaged Kern the other day with basically
>>>>> this.
>>>>>
>>>>> The restore CD project has just been setting on the back burner
>>>>> because I had no response or feedback from any of the people that
>>>>> requested a copy off list. I just assumed that there was no interest
>>>>> in the project.
>>>>>
>>>>> I will be more then happy to complete the project if there is
>>>>> interest other then myself. Basically the CD works fine but it's just
>>>>> not refined to an acceptable point for inclusion to the bacula
>>>>> project.
>>>>>
>>>>> Over the next few days I will do an outline of the current status and
>>>>> what I would like to include in the CD and post to the list for
>>>>> feedback.
>>>>>
>>>>> -- Robert
>>>>>
>>>>>
>>>>> 
>>>>>   
>>>   
>
>   
>> -
>> Using Tomcat but need to do more? Need to support web services, security?
>> Get stuff done quickly with pre-integrated technology to make your job 
>> easier.
>> Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
>> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
>> ___
>> Bacula-users mailing list
>> Bacula-users@lists.sourceforge.net
>> https://lists.sourceforge.net/lists/listinfo/bacula-users
>> 
>
> - --
>   _  _ _  _ ___  _  _  _
>  |Y#| |  | |\/| |  \ |\ |  | |Ryan Novosielski - Systems Programmer III
>  |$&| |__| |  | |__/ | \| _| |[EMAIL PROTECTED] - 973/972.0922 (2-0922)
>  \__/ Univ. of Med. and Dent.|IST/AST - NJMS Medical Science Bldg - C630
>
>   


-
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier.
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula Rescue - Solaris 10 - getdiskinfo errors - error on route command

2007-02-08 Thread Mike Seda
Hi All,
Has there been any progress regarding the creation of a Solaris 10 
(sparc) rescue cd? I am very interested in testing for you if there is 
any code available. I have a SunFire V440 (sparc) box reserved for 
testing. My plan is to test a bare metal restore of my production 
SunFire V440 (Solaris 10 U2 (10/06)) onto this test box.
Thx,
Mike


Ryan Novosielski wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> I think I had your e-mail address wrong anyway -- I was reading out of
> the top of that CD file but it seemed to be missing something.
>
> I did mess with your CD instructions some; just like you, I've been
> swamped. The machine I was attempting to test restores on subsequently
> had a controller failure. Sun dispatched the wrong part and the holiday
> was upon us.
>
> Anyway, sorry if I added any chatter to your mailbox! :)
>
> [EMAIL PROTECTED] wrote:
>   
>> Sorry everyone I have just been swamped at work and haven't had much
>> time for anything else. I messaged Kern the other day with basically
>> this.
>>
>> The restore CD project has just been sitting on the back burner
>> because I had no response or feedback from any of the people that
>> requested a copy off list. I just assumed that there was no interest
>> in the project.
>>
>> I will be more than happy to complete the project if there is
>> interest other than myself. Basically the CD works fine, but it's just
>> not refined to an acceptable point for inclusion in the Bacula
>> project.
>>
>> Over the next few days I will do an outline of the current status and
>> what I would like to include in the CD and post to the list for
>> feedback.
>>
>> -- Robert
>>
>>
>> 
>
> - --
>   _  _ _  _ ___  _  _  _
>  |Y#| |  | |\/| |  \ |\ |  | |Ryan Novosielski - Systems Programmer III
>  |$&| |__| |  | |__/ | \| _| |[EMAIL PROTECTED] - 973/972.0922 (2-0922)
>  \__/ Univ. of Med. and Dent.|IST/AST - NJMS Medical Science Bldg - C630
>
>




Re: [Bacula-users] high load average

2007-02-06 Thread Mike Seda
Aaron Knister wrote:
> Is it a software raid 0?
hardware raid (Dell PowerEdge Expandable RAID Controller 3/Di (rev 
01))... it's an Adaptec under the covers
>
> Honestly I wouldn't worry about the load, especially if this system is 
> dedicated to bacula backup functions.
good point
>
> -Aaron
>
> Mike Seda wrote:
>> Aaron Knister wrote:
>>> During the job run an "iostat -k 2" and tell me what the iowait 
>>> percentage is. 
>> I've been using top, and the elevated load does positively correlate 
>> with iowait
>>> If it's over 20% 
>> it is
>>> you have a disk bottleneck and the CPU is waiting on the disk to 
>>> write/read data which will jack up your load. 
>> I have a RAID 0 with write cache enabled... I may have to find 
>> another solution... should I just disable data/attribute spooling?
>>> It's also possible that the calculation of md5 sigs is adding a bit 
>>> of CPU overhead. 
>> Yeah... I thought so, but the docs recommend using MD5 sigs, so I left 
>> it in the config
>>> What is the average speed of your jobs (in terms of MB/s) during 
>>> this high load?
>> 30 MB/s
>>>
>>> -Aaron
>>>
>>> Mike Seda wrote:
>>>> Wow... On my server running bacula-sd and bacula-dir, the load 
>>>> average can get as high as 9.2 when doing full backups of certain 
>>>> machines... Any thoughts as to why?
>>>> Server specs:
>>>> Dell PE 2650 (2 x 3.06 GHz Xeon 32-bit single-core)
>>>> PX502 LTO-3 FC AutoChanger
>>>> RHEL 4 AS
>>>> bacula 2.0.1 i386 (via rpm)
>>>> data spooling (350 GB) to RAID 0 (3 x 146 GB SCSI)
>>>> md5 sig option in FileSet
>>>>
>>>>
>>>
>>
>>
>
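For anyone following along: rather than eyeballing top, the %iowait averaged over the run can be pulled straight out of `iostat -k 2` output. A minimal sketch (the sample output below is made up for illustration, not from the poster's box):

```python
# Rough sketch: average the %iowait column from `iostat -k 2` output.
# SAMPLE is fabricated iostat output; in practice you would capture the
# real output, e.g. iostat -k 2 30 > iostat.log, and feed that in.

SAMPLE = """\
avg-cpu:  %user   %nice    %sys %iowait   %idle
           4.10    0.00   11.30   27.50   57.10
avg-cpu:  %user   %nice    %sys %iowait   %idle
           3.80    0.00   10.90   31.20   54.10
"""

def parse_iowait(text):
    """Return the %iowait value that follows each avg-cpu header line."""
    values = []
    lines = text.splitlines()
    for i, line in enumerate(lines):
        if line.startswith("avg-cpu:"):
            cols = line.split()          # ['avg-cpu:', '%user', ..., '%iowait', '%idle']
            data = lines[i + 1].split()  # the matching data row (one fewer column)
            values.append(float(data[cols.index("%iowait") - 1]))
    return values

waits = parse_iowait(SAMPLE)
print(sum(waits) / len(waits))  # sustained values over ~20% suggest a disk bottleneck
```

This just automates the "is it over 20%" check discussed above; the 20% threshold is Aaron's rule of thumb, not a hard limit.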




Re: [Bacula-users] high load average

2007-02-06 Thread Mike Seda
Aaron Knister wrote:
> During the job run an "iostat -k 2" and tell me what the iowait 
> percentage is. 
I've been using top, and the elevated load does positively correlate with 
iowait
> If it's over 20% 
it is
> you have a disk bottleneck and the CPU is waiting on the disk to 
> write/read data which will jack up your load. 
I have a RAID 0 with write cache enabled... I may have to find another 
solution... should I just disable data/attribute spooling?
> It's also possible that the calculation of md5 sigs is adding a bit of 
> CPU overhead. 
Yeah... I thought so, but the docs recommend using MD5 sigs, so I left it 
in the config
> What is the average speed of your jobs (in terms of MB/s) during this 
> high load?
30 MB/s
>
> -Aaron
>
> Mike Seda wrote:
>> Wow... On my server running bacula-sd and bacula-dir, the load 
>> average can get as high as 9.2 when doing full backups of certain 
>> machines... Any thoughts as to why?
>> Server specs:
>> Dell PE 2650 (2 x 3.06 GHz Xeon 32-bit single-core)
>> PX502 LTO-3 FC AutoChanger
>> RHEL 4 AS
>> bacula 2.0.1 i386 (via rpm)
>> data spooling (350 GB) to RAID 0 (3 x 146 GB SCSI)
>> md5 sig option in FileSet
>>
>>
>




[Bacula-users] high load average

2007-02-05 Thread Mike Seda
Wow... On my server running bacula-sd and bacula-dir, the load average 
can get as high as 9.2 when doing full backups of certain machines... 
Any thoughts as to why?
Server specs:
Dell PE 2650 (2 x 3.06 GHz Xeon 32-bit single-core)
PX502 LTO-3 FC AutoChanger
RHEL 4 AS
bacula 2.0.1 i386 (via rpm)
data spooling (350 GB) to RAID 0 (3 x 146 GB SCSI)
md5 sig option in FileSet
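For reference, the "md5 sig option" above is the FileSet signature directive; a minimal sketch of what that looks like in bacula-dir.conf (resource names here are made up, not the poster's actual config):

```conf
# bacula-dir.conf fragment (illustrative only)
FileSet {
  Name = "Example-FileSet"
  Include {
    Options {
      signature = MD5   # per-file MD5 checksum; adds some CPU overhead per file
    }
    File = /
  }
}
```

The per-file checksumming is one plausible contributor to CPU load during fulls, alongside the iowait from spooling to the RAID 0.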




[Bacula-users] [Fwd: [lbg-admin] Bacula: Backup Canceled of perou-big-fd Full]

2007-02-04 Thread Mike Seda

Hi All,
I have a Bacula client with about 283 GB of used disk space, yet Bacula 
seemed to back up around 1.398 TB, and would have backed up more had I 
not canceled the job. The machine's specs are listed below:


2-way Opteron 846
RHEL 4 WS
Bacula 2.0.1 x86_64 client (via rpm)

Any thoughts?
Best,
Mike
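(A backup far larger than the used disk space is commonly caused by sparse files, which read back at their full apparent size, or by descending into other mounted filesystems. Two FileSet options worth checking, sketched below; the resource names are illustrative, not the actual config:)

```conf
# bacula-dir.conf fragment (illustrative only)
FileSet {
  Name = "Example-FileSet"
  Include {
    Options {
      sparse = yes   # back up sparse files (e.g. /var/log/lastlog) efficiently
      onefs = yes    # do not descend into other mounted filesystems
    }
    File = /
  }
}
```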
--- Begin Message ---
03-Feb 23:03 uwharrie-dir: Start Backup JobId 93, 
Job=perou-big.2007-02-02_23.05.09
03-Feb 23:03 uwharrie-sd: Spooling data ...
04-Feb 00:31 uwharrie-sd: User specified spool size reached.
04-Feb 00:31 uwharrie-sd: Writing spooled data to Volume. Despooling 
350,000,049,627 bytes ...
04-Feb 02:15 uwharrie-sd: Despooling elapsed time = 01:43:39, Transfer rate = 
56.27 M bytes/second
04-Feb 02:22 uwharrie-sd: Spooling data again ...
04-Feb 03:55 uwharrie-sd: User specified spool size reached.
04-Feb 03:55 uwharrie-sd: Writing spooled data to Volume. Despooling 
350,000,049,636 bytes ...
04-Feb 05:47 uwharrie-sd: Despooling elapsed time = 01:52:43, Transfer rate = 
51.75 M bytes/second
04-Feb 05:54 uwharrie-sd: Spooling data again ...
04-Feb 07:30 uwharrie-sd: User specified spool size reached.
04-Feb 07:30 uwharrie-sd: Writing spooled data to Volume. Despooling 
350,000,049,636 bytes ...
04-Feb 09:22 uwharrie-sd: Despooling elapsed time = 01:52:16, Transfer rate = 
51.95 M bytes/second
04-Feb 09:29 uwharrie-sd: Spooling data again ...
perou-big-fd:  /var/lib/nfs/rpc_pipefs is a different filesystem. Will not 
descend from / into /var/lib/nfs/rpc_pipefs
04-Feb 11:39 uwharrie-sd: User specified spool size reached.
04-Feb 11:39 uwharrie-sd: Writing spooled data to Volume. Despooling 
350,000,047,444 bytes ...
04-Feb 11:53 perou-big-fd: perou-big.2007-02-02_23.05.09 Fatal error: 
backup.c:860 Network send error to SD. ERR=Input/output error
04-Feb 11:53 uwharrie-sd: Job marked to be canceled.
04-Feb 11:53 uwharrie-sd: Despooling elapsed time = 00:14:12, Transfer rate = 
410.7 M bytes/second
04-Feb 11:53 uwharrie-dir: Bacula 2.0.1 (12Jan07): 04-Feb-2007 11:53:43
  JobId:  93
  Job:perou-big.2007-02-02_23.05.09
  Backup Level:   Full
  Client: "perou-big-fd" 2.0.1 (12Jan07) 
x86_64-redhat-linux-gnu,redhat,
  FileSet:"perou-big" 2007-02-02 23:05:09
  Pool:   "Weekly" (From Job resource)
  Storage:"Tape" (From Job resource)
  Scheduled time: 02-Feb-2007 23:05:08
  Start time: 03-Feb-2007 23:03:37
  End time:   04-Feb-2007 11:53:43
  Elapsed time:   12 hours 50 mins 6 secs
  Priority:   98
  FD Files Written:   92,841
  SD Files Written:   0
  FD Bytes Written:   1,398,683,194,291 (1.398 TB)
  SD Bytes Written:   0 (0 B)
  Rate:   30270.6 KB/s
  Software Compression:   None
  VSS:no
  Encryption: no
  Volume name(s): MSR106L3
  Volume Session Id:  10
  Volume Session Time:1170468396
  Last Volume Bytes:  1,125,809,620,992 (1.125 TB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  Canceled
  SD termination status:  Error
  Termination:Backup Canceled



---
You are currently subscribed to lbg-admin axsd: [EMAIL PROTECTED]
To unsubscribe send a blank email to [EMAIL PROTECTED]
--- End Message ---
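As a sanity check, the despool figures in the forwarded log are internally consistent; for example, the first despool pass (350,000,049,627 bytes in 01:43:39) works out to the logged ~56.27 M bytes/second:

```python
# Verify the first despool pass from the job log above:
# 350,000,049,627 bytes in 01:43:39 -> ~56.28 M bytes/second
# (Bacula's log shows 56.27, which agrees to within rounding).

bytes_despooled = 350_000_049_627
h, m, s = 1, 43, 39
elapsed = h * 3600 + m * 60 + s   # 6219 seconds

rate_mbs = bytes_despooled / elapsed / 1e6
print(round(rate_mbs, 2))  # → 56.28
```

So the SD was genuinely moving data at the reported rate; the anomaly is in how much data the FD read, not in the transfer accounting.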


[Bacula-users] ERR=Input/output error

2007-01-28 Thread Mike Seda
Hi All,
I received "ERR=Input/output error" upon executing "label barcodes". 
However, "update slots" was successful. Any ideas? Verbose bconsole 
output provided below:

Connecting to Storage daemon Tape at uwharrie:9103 ...
Sending label command for Volume "MSR100L3" Slot 1 ...
3301 Issuing autochanger "loaded? drive 0" command.
3302 Autochanger "loaded? drive 0", result: nothing loaded.
3304 Issuing autochanger "load slot 1, drive 0" command.
3305 Autochanger "load slot 1, drive 0", status is OK.
3301 Issuing autochanger "loaded? drive 0" command.
3302 Autochanger "loaded? drive 0", result is Slot 1.
block.c:994 Read error on fd=5 at file:blk 0:0 on device "Drive-1" 
(/dev/nst0). ERR=Input/output error.
3000 OK label. VolBytes=64512 DVD=0 Volume="MSR100L3" Device="Drive-1" 
(/dev/nst0)
Catalog record for Volume "MSR100L3", Slot 1  successfully created.

Specs:
Dell PE 2650
RHEL 4 AS
Quantum PX502 Autochanger (1 drive, LTO-3, FC, barcode reader)
Bacula 2.0.1 (via rpm)

Regards,
Mike




Re: [Bacula-users] Quantum PX502 Working..

2007-01-16 Thread Mike Seda
Hi James,
You were right... I got my Fibre Channel PX502 working with Bacula. 8-) 
The robotics and drive both appear as SCSI devices. Btw, can someone 
update http://bacula.org/dev-manual/Supported_Autochangers.html to 
reflect the following:
Linux Quantum LTO-3 PX502 38 400/800 GB
Cheers,
Mike


James Ray wrote:
> Mike Seda wrote:
>   
>> Hi All,
>> I noticed from the following threads that the Quantum PX502 Library 
>> (LTO-3 SCSI) seems to work with bacula:
>> http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg13744.html 
>>  
>> https://mail.dvs1.informatik.tu-darmstadt.de/lurker/message/20051214.230617.b5f41ed5.en.html
>>  
>>
>> But, has anyone gotten the fibre channel version of the PX502 to work 
>> with bacula as well?
>> 
>
> I can't imagine there would be much difference. To your system they
> should appear the same way.
>
>   




Re: [Bacula-users] bacula-2.0.0 rpm release

2007-01-11 Thread Mike Seda
Hi All,
I wish to install bacula 2.0.0 on my el4 system. I just have a 
question... What is the difference between the "rpms" and 
"rpms-contrib-fschwarz" links at 
http://sourceforge.net/project/showfiles.php?group_id=50727 ? Is there a 
reason why these links are separated? Basically, which is the 
recommended link for downloading bacula 2.0.0 el4 rpms? If "rpms" is the 
answer, then when will the el4 rpms appear there?
Thx,
Mike


Felix Schwarz wrote:
> Alan Brown schrieb:
>   
>> One of the other posters has commented on the updatedb problem if a mysql 
>> root/bacula password is set.
>>
>> As locking down mysql access is essential for security, I think it would 
>> be best if the script asked for login/pass before touching mysql
>> 
>
> This is one of the problems, and it's why I don't like automatic 
> conversions in RPMs (their use is strongly discouraged in Fedora/RHEL, 
> AFAIK).
>
> Regarding your proposal: I don't think this is a good idea, because 
> installing/updating an RPM should never require user interaction. Just 
> think of unattended, automated update solutions, which would hang forever 
> in this situation. Even worse, maybe some services were shut down during 
> the update, so essentially the system would remain in a non-functional 
> state, requiring manual intervention.
>
> Debian has debconf for that, but this mechanism bit me several times. 
> IMHO one should not do any conversion, but provide user scripts which 
> may be invoked with the correct credentials etc.
>
> fs
>
>




Re: [Bacula-users] Spooling of backup to disk

2007-01-05 Thread Mike Seda
Hi All,
I plan to install Bacula next week. I am just curious as to what 
"Maximum Spool Size" should be set to if my goal is to back up a total of 
3 TB over gigE. I am still going for a D2T solution, but I just wanted 
to see if you guys think that leveraging 100 to 300 GB of unused disk 
for spooling would help avoid shoe-shine.
Thx in advance,
Mike


Per Andreas Buer wrote:
> Hi.
>
> The documentation says it does not make sense to spool data to disk when 
> doing backup to disk. I'm using migration, and I believe it actually 
> makes sense to spool the data first. I plan to run between 10 and 20 
> jobs in parallel, so as the migration jobs move the data from disk to 
> tape, the data will probably be read between 10 and 20 times. If the 
> data are spooled first, I believe the migration should run a lot quicker, 
> because the data won't be intermingled. Backup might actually be a bit 
> slower, but I'm planning on using a couple of rather fast disks, 
> striped, in order to mitigate this.
>
> Does this make sense? I haven't tried this yet, as I am only backing up 
> one client at the moment.
>
> Happy new year, people. Thank you for making this excellent piece of 
> software.
>
> Per.
>
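A back-of-the-envelope check for the 3 TB over gigE question: shoe-shine is avoided as long as despooling can feed the drive at its streaming rate, so the spool size mostly determines how long each fill/despool cycle lasts. The rates below are assumptions (realistic single-stream gigE and the LTO-3 native rate), not measurements:

```python
# Spool cycle arithmetic for a 300 GB spool (assumed, not measured, rates).

GIGE_MBS = 110   # MB/s, practical single-stream gigE throughput (assumption)
LTO3_MBS = 80    # MB/s, LTO-3 native streaming rate (assumption)
SPOOL_GB = 300

fill_min    = SPOOL_GB * 1000 / GIGE_MBS / 60   # minutes to fill the spool from the network
despool_min = SPOOL_GB * 1000 / LTO3_MBS / 60   # minutes to write the spool to tape

print(round(fill_min, 1), round(despool_min, 1))  # → 45.5 62.5
```

Since the network can deliver faster than the drive streams, even a spool much smaller than 300 GB keeps the drive fed during each despool; the spool earns its keep when individual clients are slower than the drive or many jobs interleave.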




[Bacula-users] Solaris 10 clients with ZFS filesystems

2006-11-08 Thread Mike Seda
Hi All,
Does bacula work for Solaris 10 clients with ZFS filesystems?
Best,
Mike





[Bacula-users] Quantum PX502 Working..

2006-10-31 Thread Mike Seda
Hi All,
I noticed from the following threads that the Quantum PX502 Library 
(LTO-3 SCSI) seems to work with bacula:
http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg13744.html  
https://mail.dvs1.informatik.tu-darmstadt.de/lurker/message/20051214.230617.b5f41ed5.en.html
 

But, has anyone gotten the fibre channel version of the PX502 to work 
with bacula as well?
Thx,
Mike

