[Bacula-users] [EMAIL PROTECTED]: Problem with *ACL and restore]

2005-05-18 Thread Dmitry Sivachenko
Hello!

Just to note that this problem still exists in bacula-1.36.3...

Is anybody working on console ACL support?

Thanks!



- Forwarded message from Dmitry Sivachenko [EMAIL PROTECTED] -

From: Dmitry Sivachenko [EMAIL PROTECTED]
To: bacula-users@lists.sourceforge.net
Date: Thu, 7 Apr 2005 14:46:43 +0400
Subject: Problem with *ACL and restore

Hello!

I am running bacula-1.36.2.  I wish to back up several machines and set up
ACLs so that each machine can restore only its own files (and ideally cannot
even get any information about what other machines are served by this
bacula server).

Server is running on machine named m0.  Clients are m0 (server itself), m4
and m5 at this time.  I have a single Storage named File, a
single Pool named RAIDPool and a single Catalog named MyCatalog.

Clients are named m0-fd, m4-fd and m5-fd.  Their FileSets are m0-FileSet,
m4-FileSet and m5-FileSet and Backup jobs are m0-Job, m4-Job and m5-Job.

Below are the relevant excerpts from bacula-dir.conf:

Job {
  Name = RestoreFiles
  Type = Restore
  Storage = File
  Pool = RAIDPool
  Client = m0-fd
  FileSet = m0-FileSet
  Messages = Standard
  Where = /tmp/bacula-restores
}

Console {
  Name = m4
  Password = ValidPass
  JobACL = m4-Job, RestoreFiles
  ClientACL = m4-fd
  StorageACL = *all*
  ScheduleACL = *all*
  PoolACL = *all*
  FileSetACL = m4-FileSet
  CatalogACL = *all*
  CommandACL = help, restore, run
}


Now please take a look at the restore session I am having problems with:

m4# bconsole
Connecting to Director m0:9101
1000 OK: m0-dir Version: 1.36.2 (28 February 2005)
Enter a period to cancel a command.
*restore
Using default Catalog name=MyCatalog DB=bacula

First you select one or more JobIds that contain files
to be restored. You will be presented several methods
of specifying the JobIds. Then you will be allowed to
select which files from those JobIds are to be restored.

To select the JobIds, you have the following choices:
 1: List last 20 Jobs run
 2: List Jobs where a given File is saved
 3: Enter list of comma separated JobIds to select
 4: Enter SQL list command
 5: Select the most recent backup for a client
 6: Select backup for a client before a specified time
 7: Enter a list of files to restore
 8: Enter a list of files to restore before a specified time
 9: Cancel
Select item:  (1-9): 5
Automatically selected Client: m4-fd
Automatically selected FileSet: m4-FileSet

-COMMENT
Here bacula correctly selected the only allowed FileSet for this
client -- m4-FileSet.  Let's go on...
-COMMENT

+-------+-------+----------+---------------------+------------+-----------+--------------+----------------+
| JobId | Level | JobFiles | StartTime           | VolumeName | StartFile | VolSessionId | VolSessionTime |
+-------+-------+----------+---------------------+------------+-----------+--------------+----------------+
| 31    | F     | 105210   | 2005-04-07 01:27:18 | raid-0041  | 0         | 3            | 1112797662     |
+-------+-------+----------+---------------------+------------+-----------+--------------+----------------+
You have selected the following JobId: 31
No authorization. Job  not selected.

Building directory tree for JobId 31 ...  ++

1 Job, 98,730 files inserted into the tree.

You are now entering file selection mode where you add (mark) and
remove (unmark) files to be restored. No files are initially added, unless
you used the "all" keyword on the command line.
Enter "done" to leave this mode.

cwd is: /
$ cd /var/log
cwd is: /var/log/
$ mark messages
1 files marked.
$ done
Bootstrap records written to /raid/bacula/restore.bsr

The job will require the following Volumes:

   raid-0041


1 file selected to be restored.

No authorization. FileSet m0-FileSet.
You have messages.
*

Here the server complains about an unauthorized FileSet for that client: m0-FileSet.
It was taken from the RestoreFiles job definition (see above).  Why?  Is it a bug?
How can I force bacula to use m4-FileSet, which it already picked up?
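
A possible workaround (just an untested sketch on my side; the resource names are
taken from the configuration above) would be a per-client restore Job whose Client
and FileSet already match the Console ACLs:

Job {
  Name = RestoreFiles-m4          # hypothetical name, one such job per client
  Type = Restore
  Storage = File
  Pool = RAIDPool
  Client = m4-fd
  FileSet = m4-FileSet
  Messages = Standard
  Where = /tmp/bacula-restores
}

# and in the m4 Console resource:
#   JobACL = m4-Job, RestoreFiles-m4

I do not know whether the ACL check then passes, but at least the restore job's
FileSet would be consistent with the FileSetACL.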

Thanks in advance!

PS: I am more than willing to provide additional info if needed.

- End forwarded message -





Re: [Bacula-users] Concurrent Job Behaviour

2005-05-18 Thread Ludovic Strappazon
Hi Sean,
It is possible to have concurrent jobs running and spooling in the
second scenario :-)
Please send more details.
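
For reference, the spooling side of this is controlled by just a few directives;
a minimal sketch (the path and size below are only illustrative, not taken from
your configuration):

# bacula-dir.conf, in the Job or JobDefs resource:
  SpoolData = yes

# bacula-sd.conf, in the Device resource:
  Spool Directory = /data/bacula/spool
  Maximum Spool Size = 14G

Whether several jobs may actually spool at the same time is then governed by the
various Maximum Concurrent Jobs settings in the Director, Storage and File daemon
configurations.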

Ludovic

Sean O'Grady wrote:
Hello,
I'm trying to get a better understanding of Concurrent Job behaviour 
and how it relates to multiple jobs going to a single Storage Device.

The basics of my setup are multiple clients and a single Storage 
device. I specify that all Jobs will be spooled and that there is a 
Maximum Concurrent Jobs value of 20.

What I would like to have happen is this: if 5 Jobs start @ 23:00, the first 
one started spools its data and then writes to tape when it has finished 
spooling. The additional 4 Jobs meanwhile start spooling their data 
from the clients while the first job is running, and then write to tape 
when the Storage Device becomes available. The order of Job completion 
can be FIFO, as long as the data can be spooled concurrently from all 
clients (assuming there is enough disk space).

As an alternative, which would be even better: all 5 Jobs start spooling 
data from their clients @ 23:00; the first Job to complete its 
spooling starts writing to the Storage Device, and the remaining Jobs 
queue for the Storage Device as it becomes available and as their 
spooling completes.

Instead, what I'm seeing is that while the first job executes, the additional 
jobs all have a status of "is waiting on max Storage jobs" and will 
not begin spooling their data until that first Job has 
spooled, despooled, and written to the Storage Device.

My question, of course, is whether it is possible to have Concurrent Jobs 
running and spooling in one of the scenarios above (or another I'm 
missing).

If so I'll send out more details of my config to see if anyone can 
point out what I'm doing wrong.

Thanks,
Sean
--
Sean O'Grady
System Administrator
Sheridan College
Oakville, Ontario





Re: [Bacula-users] Spam on this list

2005-05-18 Thread Matthew Hawkins
Kern Sibbald ([EMAIL PROTECTED]) wrote:
 What I don't like about this is that some users (such as myself) don't want to 
 subscribe to lists even to get help

If you got software for free, and you can't even be bothered to do something as
simple as subscribe to a free mailing list to receive free help, then IMO you
don't deserve to get that help.  If you're hungry, and you can't be bothered
getting off your fat butt to go get something to eat - guess what?  You starve.
No magical genie will pop out of the computer and feed you.

It appears the lists are run by Mailman; subscribers can simply tell the list
not to send them any mail if they don't want it, or unsubscribe even more
easily than they subscribed (the unsubscribe link comes in every
message).

-- 
Matt
PS: please do not break the list by munging the reply-to headers.  Ta.




Re: [Bacula-users] Bacula in the news

2005-05-18 Thread Michel Meyers
Dan Langille wrote:
Sorry, bad URL, try this instead, with & instead of &amp;
http://techrepublic.com.com/5208-6230-0.html?forumID=90&threadID=173955&start=0
[quote]I've looked at the Administration/Backup page on linux.org, and
I've looked closely at Bacula, DAR and backup2l.[/quote]
Did I mention that I put it on linux.org (and that I'm actively
maintaining the entries on icewalkers.com, freshmeat.net and tucows when
new releases come out)? ;)
Unfortunately I haven't been able to update the linux.org entry for
quite a while now, since for the last few months it has come back with 'We
are currently updating this section of the site. Change submissions are
temporarily disabled.'
If anybody else knows some more of those listing sites where I can
easily submit and maintain a Bacula entry (some of these sites make you
go through an incomparably complicated and annoying procedure just to
get a listing), please let me know.
Greetings,
   Michel


[Bacula-users] Upgraded to 1.36.3 , still Restores everything

2005-05-18 Thread Danie Theron
Hi,
OK, I upgraded to 1.36.3 (thanks go to Arno and Andrew). I did a test 
restore of a user's directory, but it still seems bacula wants to 
restore everything. I'm running the following setup:

Fedora Core 3 with RAID 5 (4 x 300GB Seagate Barracudas), running 
/usr/bin/mysqladmin Ver 8.23 Distrib 3.23.58, for redhat-linux-gnu on i386

The job will require the following Volumes:
  apollousersfull-0005
9 files selected to be restored.
The defined Restore Job resources are:
1: RestoreFiles
2: RestoreApolloUsersIncr
Select Restore Job (1-2): 2
Defined Clients:
1: mailx3-fd
2: danie-fd
3: rock-fd
4: venus-fd
5: apollo-fd
6: mercury-fd
7: tyrone-fd
8: errol-fd
Select the Client (1-8): 2
Run Restore job
JobName:RestoreApolloUsersIncr
Bootstrap:  /var/bacula/restore.bsr
Where:  /e/tmp/bacula-restores
Replace:always
FileSet:apollousers Set
Client: danie-fd
Storage:verpaktshareFile
When:   2005-05-18 10:03:42
Catalog:MyCatalog
Priority:   10
OK to run? (yes/mod/no): mod
Parameters to modify:
1: Level
2: Storage
3: Job
4: FileSet
5: Client
6: When
7: Priority
8: Bootstrap
9: Where
   10: Replace
   11: JobId
Select parameter to modify (1-11): 8
Please enter the Bootstrap file name: /var/bacula/apollousers.bsr
Run Restore job
JobName:RestoreApolloUsersIncr
Bootstrap:  /var/bacula/apollousers.bsr
Where:  /e/tmp/bacula-restores
Replace:always
FileSet:apollousers Set
Client: danie-fd
Storage:verpaktshareFile
When:   2005-05-18 10:03:42
Catalog:MyCatalog
Priority:   10
OK to run? (yes/mod/no): mod
Parameters to modify:
1: Level
2: Storage
3: Job
4: FileSet
5: Client
6: When
7: Priority
8: Bootstrap
9: Where
   10: Replace
   11: JobId
Select parameter to modify (1-11): 2
The defined Storage resources are:
1: verpaktshareFile
2: mailx3File
3: apolloprofFile
4: apollosqlFile
5: apollousersFile
6: rocksqlFile
7: rockprofFile
8: sqlFile
9: winFile
   10: File
Select Storage resource (1-10): 5
Run Restore job
JobName:RestoreApolloUsersIncr
Bootstrap:  /var/bacula/apollousers.bsr
Where:  /e/tmp/bacula-restores
Replace:always
FileSet:apollousers Set
Client: danie-fd
Storage:apollousersFile
When:   2005-05-18 10:03:42
Catalog:MyCatalog
Priority:   10
OK to run? (yes/mod/no): mod
Parameters to modify:
1: Level
2: Storage
3: Job
4: FileSet
5: Client
6: When
7: Priority
8: Bootstrap
9: Where
   10: Replace
   11: JobId
Select parameter to modify (1-11): 4
The defined FileSet resources are:
1: verpaktshare Set
2: mailx3 Set
3: apollosql Set
4: apolloprof Set
5: apollousers Set
6: apollodesign Set
7: te_hdrive Set
8: rocksql Set
9: rockprof Set
   10: sql Set
   11: Windows 2000 Set
   12: Full Set
   13: Catalog
Select FileSet resource (1-13): 5
Run Restore job
JobName:RestoreApolloUsersIncr
Bootstrap:  /var/bacula/apollousers.bsr
Where:  /e/tmp/bacula-restores
Replace:always
FileSet:apollousers Set
Client: danie-fd
Storage:apollousersFile
When:   2005-05-18 10:03:42
Catalog:MyCatalog
Priority:   10
OK to run? (yes/mod/no): yes
Job started. JobId=627
*messages
18-May 10:04 venus-dir: Start Restore Job 
RestoreApolloUsersIncr.2005-05-18_10.04.23
*messages
You have no messages.
*messages
You have no messages.
*messages
You have no messages.
*messages
You have no messages.
*messages
18-May 10:04 venus-sd: Ready to read from volume apollousersfull-0005 
on device /arch/apollo/users.
danie-fd: drwxrwxrwx   1 00  0 2003-12-10 17:05:17  
/e/tmp/bacula-restores/d//Users/Admin/
danie-fd: -rwxrwxrwx   1 00   45103616 2004-08-16 08:02:00  
/e/tmp/bacula-restores/d//Users/antoinette/Personal Folders(1).pst
*messages
You have no messages.
*messages
You have no messages.
*messages
You have no messages.
*messages
You have no messages.
*messages
You have no messages.
*messages
You have no messages.
*messages
You have no messages.
*cancel
Automatically selected Job: RestoreApolloUsersIncr.2005-05-18_10.04.23
Confirm cancel (yes/no): yes
2001 Job RestoreApolloUsersIncr.2005-05-18_10.04.23 marked to be canceled.
3000 Job RestoreApolloUsersIncr.2005-05-18_10.04.23 marked to be canceled.
You have messages.
*messages
18-May 10:04 venus-sd: RestoreApolloUsersIncr.2005-05-18_10.04.23 Fatal 
error: read.c:132 Error sending to File daemon. ERR=Input/output error
18-May 10:04 venus-sd: RestoreApolloUsersIncr.2005-05-18_10.04.23 Error: 
bnet.c:411 Wrote -4 bytes to client:192.168.135.153:36643, but only 
19279 accepted.
18-May 10:06 danie-fd: RestoreApolloUsersIncr.2005-05-18_10.04.23 Fatal 
error: ..\filed\../../filed/restore.c:125 Data record error. ERR=The 
operation completed successfully.

18-May 10:04 venus-dir: Bacula 1.36.3 (22Apr05): 18-May-2005 10:04:47
 JobId:

Re: [Bacula-users] Spam on this list

2005-05-18 Thread Kern Sibbald
Hello,

On Wednesday 18 May 2005 09:37, Matthew Hawkins wrote:
 Kern Sibbald ([EMAIL PROTECTED]) wrote:
  What I don't like about this is that some users (such as myself) don't
  want to subscribe to lists even to get help

 If you got software for free, and you can't even be bothered to do
 something as simple as subscribe to a free mailing list to receive free
 help, then IMO you don't deserve to get that help.  If you're hungry, and
 you can't be bothered getting off your fat butt to go get something to eat
 - guess what?  You starve. No magical genie will pop out of the computer
 and feed you.

Well, everyone is entitled to his opinion.  

In my case, it is not that I cannot be bothered to subscribe, as you seem to 
suggest. This should be obvious from the amount of time and effort I put into 
Bacula. Rather, what holds me back from subscribing to other lists is 
overloading myself with even more email -- so I appreciate open lists, and 
would like to keep Bacula operating that way.


 It appears the lists are run by mailman, subscribers can simply tell the
 list to not send them any mail if they don't want it, or unsubscribe even
 more easily than they easily subscribed (as the unsubscribe link comes in
 every message).

There are a lot of users out there who are struggling to learn Linux, and it 
is not always so obvious to them how to subscribe/unsubscribe -- at least 
judging by the number of ill-fated attempts by some to remove themselves from 
the lists I maintain (9 or 10).

-- 
Best regards,

Kern

  (
  /\
  V_V




Re: [Bacula-users] Bacula in the news

2005-05-18 Thread Kern Sibbald
On Wednesday 18 May 2005 09:59, Michel Meyers wrote:
 Dan Langille wrote:
  Sorry, bad URL, try this instead, with & instead of &amp;
 
  http://techrepublic.com.com/5208-6230-0.html?forumID=90&threadID=173955&start=0

 [quote]I've looked at the Administration/Backup page on linux.org, and
 I've looked closely at Bacula, DAR and backup2l.[/quote]

 Did I mention that I put it on linux.org (and that I'm actively
 maintaining the entries on icewalkers.com, freshmeat.net and tucows when
 new releases come out)? ;)

Thanks. :-)


 Unfortunately I haven't been able to update the linux.org entry for
 quite a while now since it came back with 'We are currently updating
 this section of the site. Change submissions are temporarily disabled.'
 for the last months.

 If anybody else knows some more of those listing sites where I can
 easily submit and maintain a Bacula entry (some of these sites make you
 go through an uncomparably complicated and annoying procedure just to
 get a listing), please let me know.

 Greetings,
 Michel



-- 
Best regards,

Kern

  (
  /\
  V_V




Re: [Bacula-users] Re: [Bacula-(users|devel)] Spam on this list

2005-05-18 Thread Alan Brown
On Tue, 17 May 2005, Ivan Petrovich wrote:
My subscription goes to address A, where mail gets forwarded to address
B or C or ... depending on where I am at the time. If I need to make a
posting, I would do it from, say, B, adding a reply-to line pointing
to address A. But that fails to work with many mailing lists (with
restrictions similar to what's proposed here), so I started spoofing
my return address to say 'A'. That works well for some mailing lists,
but not for the ones that are dead serious about blocking spam. (They
would detect that the address is spoofed and reject my mail.)
In such a case you subscribe your other addresses and set them to NOMAIL.



Re: [Bacula-users] Bacula in the news

2005-05-18 Thread Jo
Dan Langille wrote:
On 17 May 2005 at 22:42, Dan Langille wrote:
 

Well, not really in the news, but here's someone talking about it:
http://techrepublic.com.com/5208-6230-0.html?forumID=90&amp;threadID=173955&amp;start=0
   

Sorry, bad URL, try this instead, with & instead of &amp;
http://techrepublic.com.com/5208-6230-0.html?forumID=90&threadID=173955&start=0
 

http://techrepublic.com.com/5208-6230-0.html?forumID=90&threadID=173955&start=0
Let's see if this one makes it to the list without being broken in two.
Jo


Re: [Bacula-users] Spam on this list

2005-05-18 Thread Matthew Hawkins
Kern Sibbald ([EMAIL PROTECTED]) wrote:
 Well, everyone is entitled to his opinion.  

And thanks to the internet, we can all express it ;)

 In my case, it is not that I cannot be bothered to subscribe as you seem 
 suggest. This should be obvious from the amount of time and effort I put into 
 Bacula.

It was a general 'you' (aka 'someone') and not a personal 'you' (ie, 'Kern') -
sorry I didn't make that clearer.  I certainly very much appreciate all the
time and effort you have placed into bacula.  Thank you very much.

-- 
Matt




Re: [Bacula-users] Upgraded to 1.36.3 , still Restores everything

2005-05-18 Thread Kern Sibbald
Hello,

It appears that you are *vastly* over-complicating things.  First, you only 
need one (the default) RestoreFiles job. Second, once you select the files 
using the restore command and menu item 5 (if I remember right), which I 
don't see in the listing below, there should be little or no need to use 
mod. If you are using mod to change the bootstrap file, you are surely 
doing something wrong unless you are a *super* expert, and if you use mod to 
change other parameters, you are probably also doing something wrong.

If you do the restore correctly, the bootstrap will be generated for you and 
it will be named /some-path/restore.bsr.  There is no need to change it.

I'd suggest you bring up a test Bacula someplace and run through the example 
in the Tutorial chapter. It shows you the easy way to restore files.
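
For illustration, a typical session along those lines looks roughly like this
(the client name and path are only examples, not taken from your setup):

*restore client=apollo-fd
...
Select item:  (1-9): 5
$ cd /e/Users/someuser
$ mark somefile.doc
$ done
Bootstrap records written to /var/bacula/restore.bsr
...
OK to run? (yes/mod/no): yes

No mod is needed; the generated restore.bsr already restricts the job to the
files you marked.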



On Wednesday 18 May 2005 10:36, Danie Theron wrote:
 Hi ,

 OK , upgraded to 1.36.3 ( thanks goes to Arno and Andrew). Did a test
 restore of a users directory , but still it seems bacula wants to
 restore everything. I'm running the following setup :

 Fedora Core 3 with RAID 5 (4 x 300GB Seagate Barracudas) , running
 /usr/bin/mysqladmin  Ver 8.23 Distrib 3.23.58, for redhat-linux-gnu on i386


 The job will require the following Volumes:

apollousersfull-0005


 9 files selected to be restored.

 The defined Restore Job resources are:
  1: RestoreFiles
  2: RestoreApolloUsersIncr
 Select Restore Job (1-2): 2
 Defined Clients:
  1: mailx3-fd
  2: danie-fd
  3: rock-fd
  4: venus-fd
  5: apollo-fd
  6: mercury-fd
  7: tyrone-fd
  8: errol-fd
 Select the Client (1-8): 2
 Run Restore job
 JobName:RestoreApolloUsersIncr
 Bootstrap:  /var/bacula/restore.bsr
 Where:  /e/tmp/bacula-restores
 Replace:always
 FileSet:apollousers Set
 Client: danie-fd
 Storage:verpaktshareFile
 When:   2005-05-18 10:03:42
 Catalog:MyCatalog
 Priority:   10
 OK to run? (yes/mod/no): mod
 Parameters to modify:
  1: Level
  2: Storage
  3: Job
  4: FileSet
  5: Client
  6: When
  7: Priority
  8: Bootstrap
  9: Where
 10: Replace
 11: JobId
 Select parameter to modify (1-11): 8
 Please enter the Bootstrap file name: /var/bacula/apollousers.bsr
 Run Restore job
 JobName:RestoreApolloUsersIncr
 Bootstrap:  /var/bacula/apollousers.bsr
 Where:  /e/tmp/bacula-restores
 Replace:always
 FileSet:apollousers Set
 Client: danie-fd
 Storage:verpaktshareFile
 When:   2005-05-18 10:03:42
 Catalog:MyCatalog
 Priority:   10
 OK to run? (yes/mod/no): mod
 Parameters to modify:
  1: Level
  2: Storage
  3: Job
  4: FileSet
  5: Client
  6: When
  7: Priority
  8: Bootstrap
  9: Where
 10: Replace
 11: JobId
 Select parameter to modify (1-11): 2
 The defined Storage resources are:
  1: verpaktshareFile
  2: mailx3File
  3: apolloprofFile
  4: apollosqlFile
  5: apollousersFile
  6: rocksqlFile
  7: rockprofFile
  8: sqlFile
  9: winFile
 10: File
 Select Storage resource (1-10): 5
 Run Restore job
 JobName:RestoreApolloUsersIncr
 Bootstrap:  /var/bacula/apollousers.bsr
 Where:  /e/tmp/bacula-restores
 Replace:always
 FileSet:apollousers Set
 Client: danie-fd
 Storage:apollousersFile
 When:   2005-05-18 10:03:42
 Catalog:MyCatalog
 Priority:   10
 OK to run? (yes/mod/no): mod
 Parameters to modify:
  1: Level
  2: Storage
  3: Job
  4: FileSet
  5: Client
  6: When
  7: Priority
  8: Bootstrap
  9: Where
 10: Replace
 11: JobId
 Select parameter to modify (1-11): 4
 The defined FileSet resources are:
  1: verpaktshare Set
  2: mailx3 Set
  3: apollosql Set
  4: apolloprof Set
  5: apollousers Set
  6: apollodesign Set
  7: te_hdrive Set
  8: rocksql Set
  9: rockprof Set
 10: sql Set
 11: Windows 2000 Set
 12: Full Set
 13: Catalog
 Select FileSet resource (1-13): 5
 Run Restore job
 JobName:RestoreApolloUsersIncr
 Bootstrap:  /var/bacula/apollousers.bsr
 Where:  /e/tmp/bacula-restores
 Replace:always
 FileSet:apollousers Set
 Client: danie-fd
 Storage:apollousersFile
 When:   2005-05-18 10:03:42
 Catalog:MyCatalog
 Priority:   10
 OK to run? (yes/mod/no): yes
 Job started. JobId=627
 *messages
 18-May 10:04 venus-dir: Start Restore Job
 RestoreApolloUsersIncr.2005-05-18_10.04.23
 *messages
 You have no messages.
 *messages
 You have no messages.
 *messages
 You have no messages.
 *messages
 You have no messages.
 *messages
 18-May 10:04 venus-sd: Ready to read from volume apollousersfull-0005
 on device /arch/apollo/users.
 danie-fd: drwxrwxrwx   1 00  0 2003-12-10 17:05:17
 /e/tmp/bacula-restores/d//Users/Admin/
 danie-fd: -rwxrwxrwx   1 00   45103616 2004-08-16 08:02:00
 

Re: [Bacula-users] Upgraded to 1.36.3 , still Restores everything

2005-05-18 Thread Arno Lehmann
Hi,
Kern Sibbald wrote:
Hello,
It appears that you are *vastly* over complicating things.  First, you only 
need one (the default) RestoreFiles job. Second, once you select the files, 
using the restore command and menu item 5 (if I remember right), which I 
don't see in the listing below, there should be little or no need to use 
mod, and if you are using mod to change the bootstrap file, you are surely 
doing something wrong unless you are a *super* expert. If you use mod to 
change other parameters, probably you are doing something wrong.
I noticed Danie changing the bootstrap file name just now.
And, while I agree that this is not really necessary in the normal case 
(and you should make sure that the bootstrap file contains the correct 
information gathered from the catalog), I see one good reason to do it:
when you have multiple restores running simultaneously that reference 
the same restore job template.

Imagine two users from different consoles starting a restore... the 
director will probably screw up one of them.

If you do the restore correctly, the bootstrap will be generated for you and 
it will be named /some-path/restore.bsr.  There is no need to change it.
Considering the above scenario - wouldn't it be good to allow a macro in 
the bootstrap file name which makes it unique? Client and timestamp 
sound about right.

Now, I don't know if this is possible at the moment - I haven't tried it yet - 
but if it is, it would be good to include it in the sample 
configuration as well as the documentation.

Arno
I'd suggest you bring up a test Bacula someplace and run through the example 
in the Tutorial chapter. It shows you the easy way to restore files.


On Wednesday 18 May 2005 10:36, Danie Theron wrote:
Hi ,
OK , upgraded to 1.36.3 ( thanks goes to Arno and Andrew). Did a test
restore of a users directory , but still it seems bacula wants to
restore everything. I'm running the following setup :
Fedora Core 3 with RAID 5 (4 x 300GB Seagate Barracudas) , running
/usr/bin/mysqladmin  Ver 8.23 Distrib 3.23.58, for redhat-linux-gnu on i386
The job will require the following Volumes:
  apollousersfull-0005
9 files selected to be restored.
The defined Restore Job resources are:
1: RestoreFiles
2: RestoreApolloUsersIncr
Select Restore Job (1-2): 2
Defined Clients:
1: mailx3-fd
2: danie-fd
3: rock-fd
4: venus-fd
5: apollo-fd
6: mercury-fd
7: tyrone-fd
8: errol-fd
Select the Client (1-8): 2
Run Restore job
JobName:RestoreApolloUsersIncr
Bootstrap:  /var/bacula/restore.bsr
Where:  /e/tmp/bacula-restores
Replace:always
FileSet:apollousers Set
Client: danie-fd
Storage:verpaktshareFile
When:   2005-05-18 10:03:42
Catalog:MyCatalog
Priority:   10
OK to run? (yes/mod/no): mod
Parameters to modify:
1: Level
2: Storage
3: Job
4: FileSet
5: Client
6: When
7: Priority
8: Bootstrap
9: Where
   10: Replace
   11: JobId
Select parameter to modify (1-11): 8
Please enter the Bootstrap file name: /var/bacula/apollousers.bsr
Run Restore job
JobName:RestoreApolloUsersIncr
Bootstrap:  /var/bacula/apollousers.bsr
Where:  /e/tmp/bacula-restores
Replace:always
FileSet:apollousers Set
Client: danie-fd
Storage:verpaktshareFile
When:   2005-05-18 10:03:42
Catalog:MyCatalog
Priority:   10
OK to run? (yes/mod/no): mod
Parameters to modify:
1: Level
2: Storage
3: Job
4: FileSet
5: Client
6: When
7: Priority
8: Bootstrap
9: Where
   10: Replace
   11: JobId
Select parameter to modify (1-11): 2
The defined Storage resources are:
1: verpaktshareFile
2: mailx3File
3: apolloprofFile
4: apollosqlFile
5: apollousersFile
6: rocksqlFile
7: rockprofFile
8: sqlFile
9: winFile
   10: File
Select Storage resource (1-10): 5
Run Restore job
JobName:RestoreApolloUsersIncr
Bootstrap:  /var/bacula/apollousers.bsr
Where:  /e/tmp/bacula-restores
Replace:always
FileSet:apollousers Set
Client: danie-fd
Storage:apollousersFile
When:   2005-05-18 10:03:42
Catalog:MyCatalog
Priority:   10
OK to run? (yes/mod/no): mod
Parameters to modify:
1: Level
2: Storage
3: Job
4: FileSet
5: Client
6: When
7: Priority
8: Bootstrap
9: Where
   10: Replace
   11: JobId
Select parameter to modify (1-11): 4
The defined FileSet resources are:
1: verpaktshare Set
2: mailx3 Set
3: apollosql Set
4: apolloprof Set
5: apollousers Set
6: apollodesign Set
7: te_hdrive Set
8: rocksql Set
9: rockprof Set
   10: sql Set
   11: Windows 2000 Set
   12: Full Set
   13: Catalog
Select FileSet resource (1-13): 5
Run Restore job
JobName:RestoreApolloUsersIncr
Bootstrap:  /var/bacula/apollousers.bsr
Where:  /e/tmp/bacula-restores
Replace:always
FileSet:apollousers Set
Client: danie-fd
Storage:apollousersFile
When:   2005-05-18 10:03:42
Catalog:MyCatalog

Re: [Bacula-users] [SOLVED] sqlite crash in bacula-dir-1.36.2-1mdk

2005-05-18 Thread Kern Sibbald
Hello,

You need to be a bit more explicit about what is going on here. To the best of 
my knowledge Bacula does not use any temporary files other than what it 
writes in the Working Directory.  When Bacula is pruning, and during certain 
other operations, it will create temporary tables.  It is my understanding 
that those temporary tables should go in the same file/directory as the 
catalog database itself.

It is possible that SQLite tries to write some other files, but if that is the 
case, you should specify which file/files.

Bacula should *never* attempt to write in the directory from which it is 
started unless it is incorrectly configured, so making that directory 
writable is not a good idea.

Now, if the user or the package creator makes the serious error of pointing 
the Working Directory to the same place where Bacula is stored, then you will 
definitely have a problem.

On Wednesday 18 May 2005 11:26, [EMAIL PROTECTED] wrote:
 Hello all,

 I experienced a crash (described hereafter) after editing the
 bacula-dir.conf file.

 It seems that bacula-dir needs to write some temporary files when
 examining/modifying its (sqlite) database.

 It tries to do so in the directory that was the current directory when the
 bacula-dir daemon started. If that directory is not writable by Bacula, a
 crash results.

 The scripts in my configuration (Mandrake) do not make sure that this is
 the case: so if I start bacula-dir with the usual
 service bacula-dir start
 while I am in my home directory (not writable by Bacula), it won't work.

 cd'ing to a directory writable by bacula and restarting bacula-dir
 solves the problem.

 It would probably be a good idea to ensure that the temporary files
 bacula-dir needs are written in the same directory as the sqlite bacula.db
 itself.

 Should I submit a bug report?

 Cheers
 -- Jean Marc



  Original message 
 Subject: [Bacula-users] sqlite crash in bacula-dir-1.36.2-1mdk
 From:    [EMAIL PROTECTED] [EMAIL PROTECTED]
 Date:    Tue 17 May 2005 16:30
 To:      bacula-users@lists.sourceforge.net
 --

 Hello all,

 On a fresh install of bacula, the following leads to a crash :

 - do some backup
 - edit bacula-dir.conf to add some files
 - restart bacula-dir

 results in this error (seen from bconsole):

 *status dir
 Using default Catalog name=MyCatalog DB=bacula
 Could not open database bacula.
 sqlite.c:151 Unable to open Database=/var/lib/bacula/bacula.db.
 ERR=malformed database schema - unable to open a temporary database file
 for storing temporary tables

 The only way out is to recreate the database.

 I don't see any permissions problems. The directory /var/lib/bacula
 belongs to the user bacula. /tmp and /var/tmp are writable by bacula.

 I cannot setdebug without getting that same error.

 If I use sqlite to see what's in the bacula.db, I see the database main
 which is in the normal place (/var/lib/bacula) and
 1temp /var/tmp/sqlite_qVfam9PKLi9Sfu1

 /var/tmp has this permissions (same as /tmp)
 drwxrwx-wt  2 root adm
 and there is no file /var/tmp/sqlite_qVfam9PKLi9Sfu1

 Where on earth is bacula trying to create this temporary database ? And
 which one ?

 My install :

 Linux Mandrake 10.1
 Bacula 1.36.2-1mdk from cooker

 What beats me is that as long as I do not change bacula-dir.conf, I can do
 as many backups as I want. I can stop/restart bacula-dir, all is OK.

 As soon as I change bacula-dir.conf, kaboom.

 Any hints ?
 Thanks in advance

 -- Jean-Marc









-- 
Best regards,

Kern

  (
  /\
  V_V




Re: [Bacula-users] Upgraded to 1.36.3 , still Restores everything

2005-05-18 Thread Kern Sibbald
On Wednesday 18 May 2005 12:21, Arno Lehmann wrote:
 Hi,

 Kern Sibbald wrote:
  Hello,
 
  It appears that you are *vastly* over complicating things.  First, you
  only need one (the default) RestoreFiles job. Second, once you select the
  files, using the restore command and menu item 5 (if I remember right),
  which I don't see in the listing below, there should be little or no need
  to use mod, and if you are using mod to change the bootstrap file, you
  are surely doing something wrong unless you are a *super* expert. If you
  use mod to change other parameters, probably you are doing something
  wrong.

 I noticed Danie changing the bootstrap file name just now.

 And, while I agree that this is not really necessary in a normal case
 (and you should make sure that the bootstrap file contains the correct
 information gathered from the catalog) I see one good reason to do it.

 In case you have multiple restores running simultaneously, referencing
 the same restore job template.

 Imagine two users from different consoles starting a restore... the
 director will probably screw up one of them.

This is a situation that I would like to fix sometime, and it has been on the 
todo list for quite some time.

Fortunately, it is normally one person who runs the restores from a single 
console, and in that case, even if he runs multiple restores there are 
unlikely to be any problems as the restore.bsr file is immediately sent off 
to the FD when the restore job starts.  If you queue them for running later, 
there will, of course, be a problem.


  If you do the restore correctly, the bootstrap will be generated for you
  and it will be named /some-path/restore.bsr.  There is no need to change
  it.

 Considering the above scenario - wouldn't it be good to allow a macro in
 the bootstrap file name which makes it unique? Client and timestamp
 sound about right.

 Now, I don't know if this possible now, I haven't tried it yet, but if
 it is possible it would be good to include that in the sample
 configuraton as well as the documentation.

 Arno

  I'd suggest you bring up a test Bacula someplace and run through the
  example in the Tutorial chapter. It shows you the easy way to restore
  files.
 
  On Wednesday 18 May 2005 10:36, Danie Theron wrote:
 Hi ,
 
 OK , upgraded to 1.36.3 ( thanks goes to Arno and Andrew). Did a test
 restore of a users directory , but still it seems bacula wants to
 restore everything. I'm running the following setup :
 
 Fedora Core 3 with RAID 5 (4 x 300GB Seagate Barracudas) , running
 /usr/bin/mysqladmin  Ver 8.23 Distrib 3.23.58, for redhat-linux-gnu on
  i386
 
 
 The job will require the following Volumes:
 
apollousersfull-0005
 
 
 9 files selected to be restored.
 
 The defined Restore Job resources are:
  1: RestoreFiles
  2: RestoreApolloUsersIncr
 Select Restore Job (1-2): 2
 Defined Clients:
  1: mailx3-fd
  2: danie-fd
  3: rock-fd
  4: venus-fd
  5: apollo-fd
  6: mercury-fd
  7: tyrone-fd
  8: errol-fd
 Select the Client (1-8): 2
 Run Restore job
 JobName:RestoreApolloUsersIncr
 Bootstrap:  /var/bacula/restore.bsr
 Where:  /e/tmp/bacula-restores
 Replace:always
 FileSet:apollousers Set
 Client: danie-fd
 Storage:verpaktshareFile
 When:   2005-05-18 10:03:42
 Catalog:MyCatalog
 Priority:   10
 OK to run? (yes/mod/no): mod
 Parameters to modify:
  1: Level
  2: Storage
  3: Job
  4: FileSet
  5: Client
  6: When
  7: Priority
  8: Bootstrap
  9: Where
 10: Replace
 11: JobId
 Select parameter to modify (1-11): 8
 Please enter the Bootstrap file name: /var/bacula/apollousers.bsr
 Run Restore job
 JobName:RestoreApolloUsersIncr
 Bootstrap:  /var/bacula/apollousers.bsr
 Where:  /e/tmp/bacula-restores
 Replace:always
 FileSet:apollousers Set
 Client: danie-fd
 Storage:verpaktshareFile
 When:   2005-05-18 10:03:42
 Catalog:MyCatalog
 Priority:   10
 OK to run? (yes/mod/no): mod
 Parameters to modify:
  1: Level
  2: Storage
  3: Job
  4: FileSet
  5: Client
  6: When
  7: Priority
  8: Bootstrap
  9: Where
 10: Replace
 11: JobId
 Select parameter to modify (1-11): 2
 The defined Storage resources are:
  1: verpaktshareFile
  2: mailx3File
  3: apolloprofFile
  4: apollosqlFile
  5: apollousersFile
  6: rocksqlFile
  7: rockprofFile
  8: sqlFile
  9: winFile
 10: File
 Select Storage resource (1-10): 5
 Run Restore job
 JobName:RestoreApolloUsersIncr
 Bootstrap:  /var/bacula/apollousers.bsr
 Where:  /e/tmp/bacula-restores
 Replace:always
 FileSet:apollousers Set
 Client: danie-fd
 Storage:apollousersFile
 When:   2005-05-18 10:03:42
 Catalog:MyCatalog
 Priority:   10
 OK to run? (yes/mod/no): mod
 Parameters to modify:
  1: Level
  2: Storage
  3: Job
  4: FileSet
  5: Client
  6: When
 

Re: [Bacula-users] [SOLVED] sqlite crash in bacula-dir-1.36.2-1mdk

2005-05-18 Thread Luca Berra
Kern Sibbald wrote:
Now, if the user or the package creator makes the serious error of pointing 
the Working Directory to the same place where Bacula is stored, then you will 
definitely have a problem.
The working directory points to /var/lib/bacula on default installs.



[Bacula-users] urgent windows recover problem

2005-05-18 Thread Gerd Mueller
Hi list,

we've got an urgent recovery problem! While restoring files from a Windows
backup we always get the following errors:

backup-sd: Got EOF at file 3  on device /var/backups/bacula/File, Volume
diff0003
backup-sd: End of Volume at file 3 on device /var/backups/bacula/File,
Volume diff0003
backup-sd: Ready to read from volume full0019 on device
/var/backups/bacula/File.
kliniken-data-fd: -rwxrwxrwx   1 00  19456 2005-02-24
09:21:41  /tmp/bacula-restores/e//Meddok/2004 - Entbindungsf[1].
Günzburg.doc
kliniken-data-fd: -rwxrwxrwx   1 002507776 2005-02-24
09:21:27  /tmp/bacula-restores/e//Meddok/2004 - Entb[1].Fälle - GZ.xls
kliniken-data-fd: RestoreFiles.2005-05-18_12.52.33 Error: Cannot chdir to
directory, /tmp/bacula-restores/e//Meddok: ERR=Zugriff verweigert (access denied)
kliniken-data-fd: RestoreFiles.2005-05-18_12.52.33 Error:
..\findlib\../../findlib/create_file.c:182 Could not create
/tmp/bacula-restores/e//Meddok/Akten
Kontrolle/Kontrolle_Entlasscodierung.mdb: ERR=Das System kann den
angegebenen Pfad nicht finden. (The system cannot find the specified path.)
kliniken-data-fd: RestoreFiles.2005-05-18_12.52.33 Error: Cannot chdir
to
directory, /tmp/bacula-restores/e//Meddok: ERR=Zugriff verweigert
kliniken-data-fd: RestoreFiles.2005-05-18_12.52.33 Error:
..\findlib\../../findlib/create_file.c:182 Could not create
/tmp/bacula-restores/e//Meddok/Akten Kontrolle/Schulung
(Einlernen)/CareCenter.ppt: ERR=Das System kann den angegebenen Pfad
nicht
finden.
kliniken-data-fd: RestoreFiles.2005-05-18_12.52.33 Error:
..\findlib\../../findlib/create_file.c:182 Could not create
/tmp/bacula-restores/e//Meddok/Akten Kontrolle/Schulung
(Einlernen)/Checkliste.doc: ERR=Das System kann den angegebenen Pfad
nicht
finden.

Does anybody have any idea? Right now we are using the 1.36.3 client.

Thank you

Gerd

-- 
Gerd Müller   NETWAYS GmbH
Senior Systems Engineer   Deutschherrnstr. 47a
Fon.0911/92885-0  D-90429 Nürnberg
Fax.0911/92885-33
[EMAIL PROTECTED]   http://www.netways.de




Re: [Bacula-users] urgent windows recover problem

2005-05-18 Thread Simon Weller
Tell us a little more about your machines. Are they Active Directory? Do
you have similar problems restoring to linux machines?

- Si

On Wed, 2005-05-18 at 14:22 +0200, Gerd Mueller wrote:
 Hi list,
 
 we've got a urgent recover problem! While restoring files from a windows
 backup we always get the following problems:
 
 backup-sd: Got EOF at file 3  on device /var/backups/bacula/File, Volume
 diff0003
 backup-sd: End of Volume at file 3 on device /var/backups/bacula/File,
 Volume diff0003
 backup-sd: Ready to read from volume full0019 on device
 /var/backups/bacula/File.
 kliniken-data-fd: -rwxrwxrwx   1 00  19456 2005-02-24
 09:21:41  /tmp/bacula-restores/e//Meddok/2004 - Entbindungsf[1].
 Günzburg.doc
 kliniken-data-fd: -rwxrwxrwx   1 002507776 2005-02-24
 09:21:27  /tmp/bacula-restores/e//Meddok/2004 - Entb[1].Fälle - GZ.xls
 kliniken-data-fd: RestoreFiles.2005-05-18_12.52.33 Error: Cannot chdir
 to
 directory, /tmp/bacula-restores/e//Meddok: ERR=Zugriff verweigert
 kliniken-data-fd: RestoreFiles.2005-05-18_12.52.33 Error:
 ..\findlib\../../findlib/create_file.c:182 Could not create
 /tmp/bacula-restores/e//Meddok/Akten
 Kontrolle/Kontrolle_Entlasscodierung.mdb: ERR=Das System kann den
 angegebenen Pfad nicht finden.
 kliniken-data-fd: RestoreFiles.2005-05-18_12.52.33 Error: Cannot chdir
 to
 directory, /tmp/bacula-restores/e//Meddok: ERR=Zugriff verweigert
 kliniken-data-fd: RestoreFiles.2005-05-18_12.52.33 Error:
 ..\findlib\../../findlib/create_file.c:182 Could not create
 /tmp/bacula-restores/e//Meddok/Akten Kontrolle/Schulung
 (Einlernen)/CareCenter.ppt: ERR=Das System kann den angegebenen Pfad
 nicht
 finden.
 kliniken-data-fd: RestoreFiles.2005-05-18_12.52.33 Error:
 ..\findlib\../../findlib/create_file.c:182 Could not create
 /tmp/bacula-restores/e//Meddok/Akten Kontrolle/Schulung
 (Einlernen)/Checkliste.doc: ERR=Das System kann den angegebenen Pfad
 nicht
 finden.
 
 Anybody any idea? Right now we are using the 1.3.6.3 client
 
 Thank you
 
 Gerd
 
-- 
Simon Weller 
Systems Engineer, LPIC-2
Education Networks of America
1101 McGavock St.
Nashville TN 37203
Direct Line:  615.312.6068
Network Operations Center: 1.800.836.4357





Re: [Bacula-users] urgent windows recover problem

2005-05-18 Thread Gerd Mueller
I'll try... All machines are running with AD, but bacula runs as a system
service. We are only making backups of Windows machines :-( Recovery to
different machines produces the same error. It also looks like the
problem only happens with full backups :-( 

Best regards
Gerd

On Wed, 2005-05-18 at 07:32 -0500, Simon Weller wrote:
 Tell us a little more about your machines. Are they Active Directory? Do
 you have similar problems restoring to linux machines?
 
 - Si
 
 On Wed, 2005-05-18 at 14:22 +0200, Gerd Mueller wrote:
  Hi list,
  
  we've got a urgent recover problem! While restoring files from a windows
  backup we always get the following problems:
  
  backup-sd: Got EOF at file 3  on device /var/backups/bacula/File, Volume
  diff0003
  backup-sd: End of Volume at file 3 on device /var/backups/bacula/File,
  Volume diff0003
  backup-sd: Ready to read from volume full0019 on device
  /var/backups/bacula/File.
  kliniken-data-fd: -rwxrwxrwx   1 00  19456 2005-02-24
  09:21:41  /tmp/bacula-restores/e//Meddok/2004 - Entbindungsf[1].
  Gnzburg.doc
  kliniken-data-fd: -rwxrwxrwx   1 002507776 2005-02-24
  09:21:27  /tmp/bacula-restores/e//Meddok/2004 - Entb[1].Flle - GZ.xls
  kliniken-data-fd: RestoreFiles.2005-05-18_12.52.33 Error: Cannot chdir
  to
  directory, /tmp/bacula-restores/e//Meddok: ERR=Zugriff verweigert
  kliniken-data-fd: RestoreFiles.2005-05-18_12.52.33 Error:
  ..\findlib\../../findlib/create_file.c:182 Could not create
  /tmp/bacula-restores/e//Meddok/Akten
  Kontrolle/Kontrolle_Entlasscodierung.mdb: ERR=Das System kann den
  angegebenen Pfad nicht finden.
  kliniken-data-fd: RestoreFiles.2005-05-18_12.52.33 Error: Cannot chdir
  to
  directory, /tmp/bacula-restores/e//Meddok: ERR=Zugriff verweigert
  kliniken-data-fd: RestoreFiles.2005-05-18_12.52.33 Error:
  ..\findlib\../../findlib/create_file.c:182 Could not create
  /tmp/bacula-restores/e//Meddok/Akten Kontrolle/Schulung
  (Einlernen)/CareCenter.ppt: ERR=Das System kann den angegebenen Pfad
  nicht
  finden.
  kliniken-data-fd: RestoreFiles.2005-05-18_12.52.33 Error:
  ..\findlib\../../findlib/create_file.c:182 Could not create
  /tmp/bacula-restores/e//Meddok/Akten Kontrolle/Schulung
  (Einlernen)/Checkliste.doc: ERR=Das System kann den angegebenen Pfad
  nicht
  finden.
  
  Anybody any idea? Right now we are using the 1.3.6.3 client
  
  Thank you
  
  Gerd
  
-- 
Gerd Müller   NETWAYS GmbH
Senior Systems Engineer   Deutschherrnstr. 47a
Fon.0911/92885-0  D-90429 Nürnberg
Fax.0911/92885-33
[EMAIL PROTECTED]   http://www.netways.de




[Bacula-users] Backup job is always running for the director. It is finish for the client

2005-05-18 Thread Evelyne Cangini
Hello,
I ran a backup job.
The size of the backup on the appendable tape matches what I expected, but 
the backup has not written anything more to this tape for a 
long time.
The drive is ready. Status storage:
   Device /dev/st0 is mounted with Volume prod002
   Total Bytes=21,228,045,983 Blocks=329,056 Bytes/block=64,511
   Positioned at File=22 Block=0

But status dir shows that my job is still running and 4 other 
jobs are waiting execution.
And status client shows No Jobs running.
I have no log and no message announcing that the job is finished 
or that there is an error.

Does anyone have an idea of what is happening? And how can I get out of this situation safely?
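
I suppose one way out, if the job really is hung, would be to cancel it from the
console (the JobId below is only an example, taken from status dir):

*status dir
*cancel jobid=123
Confirm cancel (yes/no): yes

but I am not sure whether that is safe, or why the job stalled in the first place.
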
Thanks,
Evelyne



Re: [Bacula-users] [SOLVED] sqlite crash in bacula-dir-1.36.2-1mdk

2005-05-18 Thread [EMAIL PROTECTED]
Hello,

(details at the end)

I did an strace on bacula-dir, both from a directory not writable by
bacula and from a directory writable by bacula (I attach the results), and
bacula/sqlite does try to write a temp file in the current dir.

I don't know whether this is a packaging problem, a Bacula problem, or an SQLite problem...

Here is part of the diff between the strace results.

 open(./sqlite_OtfWKTMWWYwQT2u, O_RDWR|O_CREAT|O_EXCL|O_LARGEFILE,
0600) = 8

 open(./sqlite_DL1LQlVjc3m7Mtm, O_RDWR|O_CREAT|O_EXCL|O_LARGEFILE,
0600) = -1 EACCES (Permission
denied)

Then it seems (I'm not an expert) that bacula/sqlite successively tries

/var/tmp
/usr/tmp
/tmp

but for some reason decides they are not writable (they are, as far as I can
tell), and then it gives up.

 stat64(/var/tmp, {st_mode=S_IFDIR|S_ISVTX|0773, st_size=4096, ...}) = 0
 access(/var/tmp, R_OK|W_OK|X_OK) = -1 EACCES (Permission denied)
 stat64(/usr/tmp, {st_mode=S_IFDIR|S_ISVTX|0773, st_size=4096, ...}) = 0
 access(/usr/tmp, R_OK|W_OK|X_OK) = -1 EACCES (Permission denied)
 stat64(/tmp, {st_mode=S_IFDIR|S_ISVTX|0773, st_size=176128, ...}) = 0
 access(/tmp, R_OK|W_OK|X_OK)= -1 EACCES (Permission denied)
 stat64(., {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
 access(., R_OK|W_OK|X_OK)   = -1 EACCES (Permission denied)
 access(./sqlite_jF1otGrWX1AOsYg, F_OK) = -1 ENOENT (No such file or
directory)
 access(./sqlite_jF1otGrWX1AOsYg, F_OK) = -1 ENOENT (No such file or
directory)
 open(./sqlite_jF1otGrWX1AOsYg, O_RDWR|O_CREAT|O_EXCL|O_LARGEFILE,
0600) = -1 EACCES (Permission
denied)

I see at the beginning of the trace diff that:
 getcwd(/var/lib/bacula, 1024)   = 16
---
 getcwd(/etc/bacula, 1024)   = 12

I don't know whether this comes from Bacula or sqlite, but one of them
seems to need some info about the current directory.

Well, I have a workaround, so it's OK for me, but I guess that some others
may have the problem too. I guess it's good to at least know it's here, so
they don't spend the time understanding it again.
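
Concretely, the workaround is just to start the director from a directory it can
write to; a sketch using the same paths and options as in the strace run below:

cd /var/lib/bacula
bacula-dir -u bacula -g bacula -c /etc/bacula/bacula-dir.conf

(or the equivalent cd added to the init script before it launches bacula-dir).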

Hope this helps. Thanks anyway for all the work you put into Bacula!

-- Jean Marc


More details


- I removed all the packages I had installed from Cooker, removed
/var/lib/bacula and /usr/lib/bacula so that I had no database.

- I reinstalled them from LimitedEdition2005 (contrib). That made no
difference, since they were binary equal anyway (I don't know why Cooker
and official packages are the same) :

bacula-common-1.36.2-1mdk.i586
bacula-console-1.36.2-1mdk.i586
bacula-dir-1.36.2-1mdk.i586
bacula-fd-1.36.2-1mdk.i586
bacula-sd-1.36.2-1mdk.i586
libsqlite0-2.8.16-1mdk.i586
sqlite-tools-2.8.16-1mdk.i586

- I made a change in the bacula-dir.conf file (added a file) to force
bacula to modify its catalog database

- I started bacula-fd and bacula-sd

- I cd'ed to a directory not writable by the user bacula and started strace

strace -f -e trace=file -o bacula_trace_not_writeble_current_dir.txt
bacula-dir -f -u bacula -g bacula -c /etc/bacula/bacula-dir.conf

- I started  bconsole in an other console and typed
status monitor
and got the error.

- I stopped the strace, cd'ed to a directory writable by bacula and did
the same, and this time I got no error.

To make sure that /var/tmp, /usr/tmp and /tmp are readable/writable by the
user bacula, I did this :

[EMAIL PROTECTED] bacula]# ls -lad /var/tmp
drwxrwx-wt  2 root adm 4096 mai 18 14:21 /var/tmp/
[EMAIL PROTECTED] bacula]# ls -lad /usr/tmp
lrwxrwxrwx  1 root root 10 jun 13  2004 /usr/tmp -> ../var/tmp/
[EMAIL PROTECTED] bacula]# ls -lad /tmp
drwxrwx-wt  6 root adm 176128 mai 18 14:58 /tmp/
[EMAIL PROTECTED] bacula]# su - bacula
-bash-2.05b$ echo this is a test > /var/tmp/test.txt
-bash-2.05b$ cat /var/tmp/test.txt
this is a test
-bash-2.05b$ echo this is an other test > /tmp/test.txt
-bash-2.05b$ cat /tmp/test.txt
this is an other test


execve(/usr/sbin/bacula-dir, [bacula-dir, -f, -u, bacula, -g, 
bacula, -c, /etc/bacula/bacula-dir.conf], [/* 46 vars */]) = 0
open(/etc/ld.so.preload, O_RDONLY) = -1 ENOENT (No such file or directory)
open(/etc/ld.so.cache, O_RDONLY) = 3
fstat64(3, {st_mode=S_IFREG|0644, st_size=23119, ...}) = 0
open(/usr/lib/libsqlite.so.0, O_RDONLY) = 3
fstat64(3, {st_mode=S_IFREG|0755, st_size=282272, ...}) = 0
open(/lib/tls/libpthread.so.0, O_RDONLY) = 3
fstat64(3, {st_mode=S_IFREG|0755, st_size=83255, ...}) = 0
open(/lib/libnsl.so.1, O_RDONLY) = 3
fstat64(3, {st_mode=S_IFREG|0755, st_size=69216, ...}) = 0
open(/usr/lib/libstdc++.so.6, O_RDONLY) = 3
fstat64(3, {st_mode=S_IFREG|0755, st_size=833176, ...}) = 0
open(/lib/tls/libm.so.6, O_RDONLY) = 3
fstat64(3, {st_mode=S_IFREG|0755, st_size=139908, ...}) = 0
open(/lib/libgcc_s.so.1, O_RDONLY) = 3
fstat64(3, {st_mode=S_IFREG|0755, st_size=31396, ...}) = 0
open(/lib/tls/libc.so.6, O_RDONLY) = 3
fstat64(3, {st_mode=S_IFREG|0755, st_size=1165108, ...}) = 0
getcwd(/etc/bacula, 1024)   = 12
open(/dev/null, O_RDONLY|O_LARGEFILE) = 3
open(/etc/bacula/bacula-dir.conf, O_RDONLY|O_LARGEFILE) = 3
fstat64(3, 

Re: [Bacula-users] Concurrent Job Behaviour

2005-05-18 Thread Sean O'Grady
Well, it's good to know that Bacula will do what I need!
I guess now I need to determine what I've done wrong in my configs ...
I'm short-forming all the config information to reduce the size of the 
e-mail, but I can post my full configs if necessary. Anywhere I 
have Maximum Concurrent Jobs, I've posted that section of the config. 
If there is something else besides Maximum Concurrent Jobs needed in 
the configs to get this behaviour to happen and I'm missing it, please 
let me know.

Any suggestions appreciated!
Sean
In bacula-dir.conf ...
Director {
 Name = mobinet-dir1
 DIRport = 9101# where we listen for UA connections
 QueryFile = /etc/bacula/query.sql
 WorkingDirectory = /data/bacula/working
 PidDirectory = /var/run
 Maximum Concurrent Jobs = 10
 Password =  # Console password
 Messages = Daemon
}
JobDefs {
  Name = MobinetDef
  Storage = polaris-sd
  Schedule = Mobinet-Cycle
  Type = Backup
  Max Start Delay = 32400 # 9 hours
  Max Run Time = 14400 # 4 hours
  Rerun Failed Levels = yes
  Maximum Concurrent Jobs = 5
  Reschedule On Error = yes
  Reschedule Interval = 3600
  Reschedule Times = 2
  Priority = 10
  Messages = Standard
  Pool = Default
  Incremental Backup Pool = MobinetDailyPool
  Differential Backup Pool = MobinetWeeklyPool
  Full Backup Pool = MobinetMonthlyPool
  SpoolData = yes
}
JobDefs {
  Name = SiriusWebDef
  Storage = polaris-sd
  Schedule = SiriusWeb-Cycle
  Type = Backup
  Max Start Delay = 32400 # 9 hours
  Max Run Time = 14400 # 4 hours
  Rerun Failed Levels = yes
  Maximum Concurrent Jobs = 5
  Reschedule On Error = yes
  Reschedule Interval = 3600
  Reschedule Times = 2
  Priority = 10
  Messages = Standard
  Pool = Default
  Incremental Backup Pool = MobinetDailyPool
  Differential Backup Pool = MobinetWeeklyPool
  Full Backup Pool = MobinetMonthlyPool
  SpoolData = yes
}
Storage {
 Name = polaris-sd
 Address = 
 SDPort = 9103
 Password = 
 Device = PowerVault 122T VS80
 Media Type = DLTIV
 Maximum Concurrent Jobs = 10
}
In bacula-sd.conf
Storage { # definition of myself
 Name = polaris-sd
 SDPort = 9103  # Director's port 
 WorkingDirectory = /data/bacula/working
 Pid Directory = /var/run
 Maximum Concurrent Jobs = 10
}

Device {
  Name = PowerVault 122T VS80
  Media Type = DLTIV
  Archive Device = /dev/nst0
  Changer Device = /dev/sg1
  Changer Command = /etc/bacula/mtx-changer %c %o %S %a
  AutoChanger = yes
  AutomaticMount = yes   # when device opened, read it
  AlwaysOpen = yes
  LabelMedia = no
  Spool Directory = /data/bacula/spool
  Maximum Spool Size = 14G
}
In bacula-fd.conf on all the clients
FileDaemon {  # this is me
 Name = polaris-mobinet-ca
 FDport = 9102  # where we listen for the director
 WorkingDirectory = /data/bacula/working
 Pid Directory = /var/run
 Maximum Concurrent Jobs = 10
}
Arno Lehmann wrote:
Hello,
Sean O'Grady wrote:
...
As an alternative which would be even better - All 5 Jobs start @ 
23:00 spooling data from the client, the first Job to complete the 
spooling from the client starts writing to the Storage Device. 
Remaining Jobs queue for the Storage Device as it becomes available 
and as their spooling completes.

Instead what I'm seeing is while the first job executes the 
additional jobs all have a status of "is waiting on max Storage jobs" 
and will not begin spooling their data until that first Job has 
spooled-despooled-written to the Storage Device.

My question is of course is this possible to have Concurrent Jobs 
running and spooling in one of the scenarios above (or another I'm 
missing).

Well, I guess that this must be a setup problem on your side - after 
all, this is what I'm doing here and it works (apart from very few 
cases where jobs are held that *could* start, but I couldn't find out 
why yet).

From your description, I assume that you forgot to set Maximum 
Concurrent Jobs in all the necessary places, namely in the storage 
definitions.

I noticed that the same message is printed when the director has to 
wait for a client, though. (This is not yet confirmed, noticed it only 
yesterday and couldn't verify it yet).

If so I'll send out more details of my config to see if anyone can 
point out what I'm doing wrong.

First, verify the settings you have - there are directives in the 
client's config, the sd config, and the director configuration where 
you need to apply the right settings for your setup.

Arno

Thanks,
Sean
--
Sean O'Grady
System Administrator
Sheridan College
Oakville, Ontario

Re: [Bacula-users] Concurrent Job Behaviour

2005-05-18 Thread Wilson Guerrero C.
On Tuesday 17 May 2005 23:46, Sean O'Grady wrote:
 Hello,

 I'm trying to get a better understanding of Concurrent Job behaviour and
 how it relates to multiple jobs going to a single Storage Device.

 The basics of my setup are multiple clients and a single Storage device.
 I specify that all Jobs will be spooled and that there is a Maximum
 Concurrent Job number of 20.

 What I would like to have happen is if 5 Jobs start @ 23:00 the first
 one started spools its data and then writes to tapes when its finished
 spooling. The additional 4 Jobs meanwhile start spooling their data from
 the client while the first job is running and then write to tape when
 the Storage Device becomes available. The order of Job completion can be
 FIFO as long as the data can be spooled concurrently from all clients
 (assuming there is enough disk space).

 As an alternative which would be even better - All 5 Jobs start @ 23:00
 spooling data from the client, the first Job to complete the spooling
 from the client starts writing to the Storage Device. Remaining Jobs
 queue for the Storage Device as it becomes available and as their
 spooling completes.

 Instead what I'm seeing is while the first job executes the additional
 jobs all have a status of "is waiting on max Storage jobs" and will not
 begin spooling their data until that first Job has
 spooled-despooled-written to the Storage Device.

 My question is of course is this possible to have Concurrent Jobs
 running and spooling in one of the scenarios above (or another I'm
 missing).

Yes. It works here flawlessly.
Make sure you enable concurrent jobs in:

1-.
bacula-dir.conf
Director {# define myself
...
  Maximum Concurrent Jobs = 15
}
Storage {
...
  Maximum Concurrent Jobs = 20
}

2-.
bacula-sd.conf
Storage { # definition of myself
...
  Maximum Concurrent Jobs = 20
}

and don't forget

Job {
  ...
  SpoolData = yes
}

in the job definitions.

 If so I'll send out more details of my config to see if anyone can point
 out what I'm doing wrong.

 Thanks,
 Sean






[Bacula-users] Re: urgent windows recover problem

2005-05-18 Thread Felix Schwarz
Hi,

Gerd Mueller wrote:
 backup-sd: Ready to read from volume full0019 on device
 /var/backups/bacula/File.
 kliniken-data-fd: -rwxrwxrwx   1 00  19456 2005-02-24
 09:21:41  /tmp/bacula-restores/e//Meddok/2004 - Entbindungsf[1].
 Gnzburg.doc

Are you sure that /tmp/bacula-restores exists on the machine you want
to restore to? (That one hit me once.) Maybe the /e//Meddok is causing
problems?

-- 
Felix





Re: [Bacula-users] Upgraded to 1.36.3 , still Restores everything

2005-05-18 Thread Kern Sibbald
Hello,

On Wednesday 18 May 2005 15:47, Danie Theron wrote:
 Kern Sibbald wrote:
 Hello,
 
 It appears that you are *vastly* over complicating things.  First, you
  only need one (the default) RestoreFiles job. Second, once you select the
  files, using the restore command and menu item 5 (if I remember right),
  which I don't see in the listing below, there should be little or no need
  to use mod, and if you are using mod to change the bootstrap file, you
  are surely doing something wrong unless you are a *super* expert. If you
  use mod to change other parameters, probably you are doing something
  wrong.
 
 If you do the restore correctly, the bootstrap will be generated for you
  and it will be named /some-path/restore.bsr.  There is no need to change
  it.
 
 I'd suggest you bring up a test Bacula someplace and run through the
  example in the Tutorial chapter. It shows you the easy way to restore
  files.

 Hi ,

 Yes, you are absolutely right, it was an id10t error *blush*; I ran
 through the steps as you suggested and it restored the files in a flash!
 Thanks for simplifying things for my over-paranoid mind.

Ah, nice. That is a relief -- probably for you as well.


 I did however get a few of these errors; anything to be alarmed about?

No. These are informational messages. When there is a warning Bacula puts WARNING 
in the message, and when there is an error Bacula puts ERROR in the message.
What counts the most is the Job summary. If it says it got all the files back, 
you can be 99.9% sure things are OK.


 18-May 15:33 venus-sd: Got EOF at file 1 on device /arch/rock/sql, Volume rocksqlfull-0004
 18-May 15:33 venus-sd: End of Volume at file 1 on device /arch/rock/sql, Volume rocksqlfull-0004
 danie-fd: drwxrwxrwx   1 0   0  0 2001-09-10 15:55:02  /tmp/bacula-restores/d//PTS Shares/
 18-May 15:33 venus-sd: Ready to read from volume rocksqlincr-0002 on device /arch/rock/sql.
 18-May 15:33 venus-sd: Got EOF at file 1 on device /arch/rock/sql, Volume rocksqlincr-0002
 18-May 15:33 venus-sd: End of Volume at file 1 on device /arch/rock/sql, Volume rocksqlincr-0002
 18-May 15:33 venus-sd: Ready to read from volume rocksqlincr-0003 on device /arch/rock/sql.
 18-May 15:33 venus-sd: Got EOF at file 1 on device /arch/rock/sql, Volume rocksqlincr-0003
 18-May 15:33 venus-sd: End of Volume at file 1 on device /arch/rock/sql, Volume rocksqlincr-0003
 18-May 15:33 venus-sd: End of all volumes.

 Also, last really dumb question: how do I, in restore mode, cd to a dir
 with spaces in it?

cd "file with spaces in it"

or use wx-console.  You point and click. It takes a bit to learn where to 
click, but a bit of fiddling gets you there ... :-)
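
For what it's worth, here is a rough sketch of how that looks inside the 
restore file-selection prompt (the directory and file names below are 
made-up examples, and the exact prompt text may vary a little between 
versions):

$ cd "Program Files"
cwd is: /c/Program Files/
$ mark "My Document.doc"
1 files marked.
$ done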



 Once again , thanks for your patience and help.


-- 
Best regards,

Kern

  (
  /\
  V_V




Re: [Bacula-users] Backup job is always running for the director. It is finish for the client

2005-05-18 Thread Arno Lehmann
Hi,
Evelyne Cangini wrote:
Hello,
I ran a backup job.
The size of the backup on the appendable tape is as expected, yet I have 
waited and the backup has not written anything more to this tape for a 
long time.
The drive is ready. Status storage:
   Device /dev/st0 is mounted with Volume prod002
   Total Bytes=21,228,045,983 Blocks=329,056 Bytes/block=64,511
   Positioned at File=22 Block=0

But the status dir shows that my job is still running and 4 other 
jobs are waiting execution.
And the status client shows No Jobs running.
I have no log and no message announcing that the job has finished 
or that there was an error.

Does anyone have an idea of what is happening, and how to get out of this situation safely?
I'm not really sure what might happen, but there might be some things to 
check.

First, did you activate notifications in case of success? What does 
status dir in the console tell about running and completed jobs?

Second, has the client been restarted in between? In that case, 
everything could seem fine, but there are situations when the director 
waits for it, and those timeouts can be rather long. After 2 or 4 hours 
(not sure, I remember both numbers somehow...) you would get an error

Perhaps (but unlikely, I think) the director is still despooling job 
data... this can take quite a while, but status storage would report it. 
And, of course, you would have spooling turned on in the configuration.

Then, you can always use top or ps to see if the bacula processes are 
running or hang. I did have a few cases where the director hung, but 
then the console would not work as well. Still, perhaps worth checking.

Finally, concerning recovering - you can try to cancel the stuck job. 
Usually you would get two messages about it being cancelled. If this 
doesn't work, you can *try* to kill the SD process and restart it 
immediately, but I'd prefer simply shutting down all of bacula and 
restarting afterwards and then starting the lost jobs manually. In any 
case, the tape currently in use might end up marked as having errors, 
usually because of a mismatch in the number of files. In that case, simply 
mark the volume as used and it will be recycled normally.
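
As a rough sketch of the console side of that (the job id is made up, 
and the exact menus may differ slightly in your version):

*cancel jobid=123
*update
  (choose "Volume parameters", pick the volume, e.g. prod002,
   then choose "Volume Status" and enter: Used)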

Arno
Thanks,
Evelyne

--
IT-Service Lehmann[EMAIL PROTECTED]
Arno Lehmann  http://www.its-lehmann.de


Re: [Bacula-users] Concurrent Job Behaviour

2005-05-18 Thread Arno Lehmann
Hi.
Sean O'Grady wrote:
Well it's good to know that Bacula will do what I need!
Guess now I need to determine what I've done wrong in my configs ...
I'm short-forming all the config information to reduce the size of the 
e-mail, but I can post my full configs if necessary. Anywhere I 
have Maximum Concurrent Jobs I've posted that section of the config. 
If there is something else besides Maximum Concurrent Jobs needed in 
the configs to get this behaviour to happen and I'm missing it, please 
let me know.
The short form is ok :-)
Now, after reading through it I actually don't see any reason why only 
one job at a time is run.

Perhaps someone else can...
Still, I have some questions.
First, which version of bacula do you use?
Then, do you perhaps use job overrides concerning the pools or the 
priorities in your schedule?
And, finally, are all the jobs scheduled to run at the same level, e.g. 
full, and do they actually do so? Perhaps you have a job running at Full 
level, and the others are scheduled to run incremental, so they have to 
wait for the right media (of pool DailyPool).

Arno
Any suggestions appreciated!
Sean
In bacula-dir.conf ...
Director {
 Name = mobinet-dir1
 DIRport = 9101# where we listen for UA connections
 QueryFile = /etc/bacula/query.sql
 WorkingDirectory = /data/bacula/working
 PidDirectory = /var/run
 Maximum Concurrent Jobs = 10
 Password =  # Console password
 Messages = Daemon
}
JobDefs {
  Name = MobinetDef
  Storage = polaris-sd
  Schedule = Mobinet-Cycle
  Type = Backup
  Max Start Delay = 32400 # 9 hours
  Max Run Time = 14400 # 4 hours
  Rerun Failed Levels = yes
  Maximum Concurrent Jobs = 5
  Reschedule On Error = yes
  Reschedule Interval = 3600
  Reschedule Times = 2
  Priority = 10
  Messages = Standard
  Pool = Default
  Incremental Backup Pool = MobinetDailyPool
  Differential Backup Pool = MobinetWeeklyPool
  Full Backup Pool = MobinetMonthlyPool
  SpoolData = yes
}
JobDefs {
  Name = SiriusWebDef
  Storage = polaris-sd
  Schedule = SiriusWeb-Cycle
  Type = Backup
  Max Start Delay = 32400 # 9 hours
  Max Run Time = 14400 # 4 hours
  Rerun Failed Levels = yes
  Maximum Concurrent Jobs = 5
  Reschedule On Error = yes
  Reschedule Interval = 3600
  Reschedule Times = 2
  Priority = 10
  Messages = Standard
  Pool = Default
  Incremental Backup Pool = MobinetDailyPool
  Differential Backup Pool = MobinetWeeklyPool
  Full Backup Pool = MobinetMonthlyPool
  SpoolData = yes
}
Storage {
 Name = polaris-sd
 Address = 
 SDPort = 9103
 Password = 
 Device = PowerVault 122T VS80
 Media Type = DLTIV
 Maximum Concurrent Jobs = 10
}
In bacula-sd.conf
Storage { # definition of myself
 Name = polaris-sd
 SDPort = 9103  # Director's port
 WorkingDirectory = /data/bacula/working
 Pid Directory = /var/run
 Maximum Concurrent Jobs = 10
}

Device {
  Name = PowerVault 122T VS80
  Media Type = DLTIV
  Archive Device = /dev/nst0
  Changer Device = /dev/sg1
  Changer Command = /etc/bacula/mtx-changer %c %o %S %a
  AutoChanger = yes
  AutomaticMount = yes   # when device opened, read it
  AlwaysOpen = yes
  LabelMedia = no
  Spool Directory = /data/bacula/spool
  Maximum Spool Size = 14G
}
In bacula-fd.conf on all the clients
FileDaemon {  # this is me
 Name = polaris-mobinet-ca
 FDport = 9102  # where we listen for the director
 WorkingDirectory = /data/bacula/working
 Pid Directory = /var/run
 Maximum Concurrent Jobs = 10
}
Arno Lehmann wrote:
Hello,
Sean O'Grady wrote:
...
As an alternative which would be even better - All 5 Jobs start @ 
23:00 spooling data from the client, the first Job to complete the 
spooling from the client starts writing to the Storage Device. 
Remaining Jobs queue for the Storage Device as it becomes available 
and as their spooling completes.

Instead what I'm seeing is while the first job executes the 
additional jobs all have a status of "is waiting on max Storage jobs" 
and will not begin spooling their data until that first Job has 
spooled-despooled-written to the Storage Device.

My question is of course is this possible to have Concurrent Jobs 
running and spooling in one of the scenarios above (or another I'm 
missing).

Well, I guess that this must be a setup problem on your side - after 
all, this is what I'm doing here and it works (apart from very few 
cases where jobs are held that *could* start, but I couldn't find out 
why yet).

From your description, I assume that you forgot to set Maximum 
Concurrent Jobs in all the necessary places, namely in the storage 
definitions.

I noticed that the same message is printed when the director has to 
wait for a client, though. (This is not yet confirmed, noticed it only 
yesterday and couldn't verify it yet).

If so I'll send out more details of my config to see if anyone can 
point out what I'm doing wrong.

First, verify the settings you have - there are directives in 

Re: [Bacula-users] Concurrent Job Behaviour

2005-05-18 Thread Sean O'Grady
Hi,
After seeing two people respond saying that this was feasible and 
checking what Wilson had in his config against mine I did a little more 
digging. (Thanks Arno, your e-mail came in as I was writing this and 
confirmed the info about Pools. Also I'm running 1.36.3. )

I believe I have sorted out what my issue with this is. As I didn't post 
my complete configs and only the ones that I thought would be relevant I 
ended up only giving half the picture. What was missing was that there 
is another set of Pool tapes and different Jobs that run using these 
Pools (that also do data spooling) at the same time as the Jobs I showed 
before.

Looking at src/dird/jobq.c I see the following which hopefully Kern or 
someone else in touch with the code can enlighten a bit more for me.

SNIP
if (njcr->store == jcr->store && njcr->pool != jcr->pool) {
   skip_this_jcr = true;
   break;
}
SNIP
This says to me that as long as the Pools of the Jobs being queued 
match, the Jobs will all run concurrently. Jobs however that have 
mismatching Pools will instead queue and wait for the storage device to 
free when previous jobs complete.

It's probably not this simple but some behaviour equivalent to ...
if (njcr->store == jcr->store && njcr->pool != jcr->pool &&
    njcr->spool_data != true) {
	skip_this_jcr = true;
	break;
}

... should allow Jobs with different Pools that have spooling on to 
queue. To ensure that Jobs complete, some further checks by the 
storage daemon and the director would be needed:

1) when spooling from the client completes, is the Storage device 
available for append?
2) if the Storage device is available, is a Pool object suitable for this 
Job currently loaded (if not, load it)
3) when the Job completes, check the status of the queued Jobs, grab the 
next Job whose spooling is complete, and go to 2) again

My question now changes to: Is there a way for Jobs that use different 
Pools to run concurrently, as long as the Job Definitions are set to Spool 
Data, as outlined in the example above (or something similar)?

Or of course maybe Bacula can already handle this and I'm just missing it :)
Thanks,
Sean
Arno Lehmann wrote:
Hi.
Sean O'Grady wrote:
Well it's good to know that Bacula will do what I need!
Guess now I need to determine what I've done wrong in my configs ...
I'm short-forming all the config information to reduce the size of the 
e-mail, but I can post my full configs if necessary. Anywhere I 
have Maximum Concurrent Jobs I've posted that section of the config. 
If there is something else besides Maximum Concurrent Jobs needed in 
the configs to get this behaviour to happen and I'm missing it, please 
let me know.

The short form is ok :-)
Now, after reading through it I actually don't see any reason why only 
one job at a time is run.

Perhaps someone else can...
Still, I have some questions.
First, which version of bacula do you use?
Then, do you perhaps use job overrides concerning the pools or the 
priorities in your schedule?
And, finally, are all the jobs scheduled to run at the same level, e.g. 
full, and do they actually do so? Perhaps you have a job running at Full 
level, and the others are scheduled to run incremental, so they have to 
wait for the right media (of pool DailyPool).

Arno
Any suggestions appreciated!
Sean
In bacula-dir.conf ...
Director {
 Name = mobinet-dir1
 DIRport = 9101# where we listen for UA connections
 QueryFile = /etc/bacula/query.sql
 WorkingDirectory = /data/bacula/working
 PidDirectory = /var/run
 Maximum Concurrent Jobs = 10
 Password =  # Console password
 Messages = Daemon
}
JobDefs {
  Name = MobinetDef
  Storage = polaris-sd
  Schedule = Mobinet-Cycle
  Type = Backup
  Max Start Delay = 32400 # 9 hours
  Max Run Time = 14400 # 4 hours
  Rerun Failed Levels = yes
  Maximum Concurrent Jobs = 5
  Reschedule On Error = yes
  Reschedule Interval = 3600
  Reschedule Times = 2
  Priority = 10
  Messages = Standard
  Pool = Default
  Incremental Backup Pool = MobinetDailyPool
  Differential Backup Pool = MobinetWeeklyPool
  Full Backup Pool = MobinetMonthlyPool
  SpoolData = yes
}
JobDefs {
  Name = SiriusWebDef
  Storage = polaris-sd
  Schedule = SiriusWeb-Cycle
  Type = Backup
  Max Start Delay = 32400 # 9 hours
  Max Run Time = 14400 # 4 hours
  Rerun Failed Levels = yes
  Maximum Concurrent Jobs = 5
  Reschedule On Error = yes
  Reschedule Interval = 3600
  Reschedule Times = 2
  Priority = 10
  Messages = Standard
  Pool = Default
  Incremental Backup Pool = MobinetDailyPool
  Differential Backup Pool = MobinetWeeklyPool
  Full Backup Pool = MobinetMonthlyPool
  SpoolData = yes
}
Storage {
 Name = polaris-sd
 Address = 
 SDPort = 9103
 Password = 
 Device = PowerVault 122T VS80
 Media Type = DLTIV
 Maximum Concurrent Jobs = 10
}
In bacula-sd.conf
Storage { # definition of myself
 Name = polaris-sd
 SDPort = 9103  # Director's port  
WorkingDirectory = /data/bacula/working
 Pid 

[Bacula-users] Re: Bacula-users -- confirmation of subscription -- request 339739

2005-05-18 Thread Mike Baroukh
On Wednesday, 18 May 2005, at 21:57, [EMAIL PROTECTED] wrote:
 Bacula-users -- confirmation of subscription -- request 339739

 We have received a request from 82.235.218.191 for subscription of
 your email address, [EMAIL PROTECTED], to the
 bacula-users@lists.sourceforge.net mailing list.  To confirm the
 request, please send a message to
 [EMAIL PROTECTED], and either:

 - maintain the subject line as is (the reply's additional Re: is
 ok),

 - or include the following line - and only the following line - in the
 message body:

 confirm 339739

 (Simply sending a 'reply' to this message should work from most email
 interfaces, since that usually leaves the subject line in the right
 form.)

 If you do not wish to subscribe to this list, please simply disregard
 this message.  Send questions to
 [EMAIL PROTECTED]




Re: [Bacula-users] Speed of Windows FD

2005-05-18 Thread Martin Simmons
 On Tue, 17 May 2005 18:54:01 -0400, Matthew Butt [EMAIL PROTECTED] 
 said:

  Matt I have two identical Win 2k3 servers (Dell PowerEdge 2800, U320 RAID5,
  Matt dual P4 Xeon 2.8) that I need to backup data onto an FC3 server running
  Matt Bacula (P4 2.8GHz, USB2 HDD).  All three machines have Gigabit cards
  Matt running on a Gigabit switch with appropriate Cat5e cables.

  Matt Server1 has two files totaling 5Gb.  Bacula grabs these files at about
  Matt 25MB/sec - total backup time is under 4mins.
  Matt Server2 has ~4000 files totaling 100Gb.  The problem is that Bacula is
  Matt only running at about 50Kb/sec for this server - simple maths tells us
  Matt that it's going to take 3.5 weeks to backup the entire server!

  Matt There's obviously something awry here - can anyone give me any ideas
  Matt what to look into? The networking between the machines all appears to be
  Matt fine (I can pull off large files between the Win 2k3 servers at around
  Matt 30MB/sec) so it seems to be something to do with the Bacula FD.

I would start by looking at the Processes tab of the Task Manager on Server2
during the backup to see what % of the CPU bacula-fd is getting and if
anything else is running.

__Martin




RE: [Bacula-users] Speed of Windows FD

2005-05-18 Thread Matthew Butt
   Matt I have two identical Win 2k3 servers (Dell PowerEdge 2800,
U320
 RAID5,
   Matt dual P4 Xeon 2.8) that I need to backup data onto an FC3
server
 running
   Matt Bacula (P4 2.8GHz, USB2 HDD).  All three machines have Gigabit
 cards
   Matt running on a Gigabit switch with appropriate Cat5e cables.
 
   Matt Server1 has two files totaling 5Gb.  Bacula grabs these files
at
 about
   Matt 25MB/sec - total backup time is under 4mins.
   Matt Server2 has ~4000 files totaling 100Gb.  The problem is that
 Bacula is
   Matt only running at about 50Kb/sec for this server - simple maths
 tells us
   Matt that it's going to take 3.5 weeks to backup the entire server!
 
   Matt There's obviously something awry here - can anyone give me any
 ideas
   Matt what to look into? The networking between the machines all
appears
 to be
   Matt fine (I can pull off large files between the Win 2k3 servers
at
 around
   Matt 30MB/sec) so it seems to be something to do with the Bacula
FD.
 
 I would start by looking at the Processes tab of the Task Manager on
 Server2
 during the backup to see what % of the CPU bacula-fd is getting and if
 anything else is running.

The bacula-fd process on Server2 is using at most about 2%.  Mem usage
is very low (3Mb).  Nothing else is using the processor or disks
intensively on that machine at the moment and bconsole is reporting
about 3MB/sec transfer (speed changes wildly it seems, but never very
fast!)

The bacula server is running around 5-15% for bacula-sd.  Again, nothing
much else is happening on that server, load average: 0.21, 0.12, 0.09.

Is there any profiling I can run on the Windows FD client, or at least
see a file-by-file progress?
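
One partial answer, sketched here with made-up job numbers and paths 
(the exact fields vary by version): bconsole's status client reports 
running byte counts and the file currently being processed, roughly like:

*status client=server2-fd
 JobId 123 Job Server2.2005-05-18_23.05.01 is running.
     Files=1,234 Bytes=567,890,123 Bytes/sec=51,200
     Processing file: e:/data/somefile.dat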





RE: [Bacula-users] Speed of Windows FD

2005-05-18 Thread Simon Weller
Have you checked network speed and duplex?
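
On the Linux side that can be checked with ethtool, assuming it is 
installed - a rough sketch, interface name made up:

# ethtool eth0
        Speed: 1000Mb/s
        Duplex: Full
        Auto-negotiation: on

On the Windows side it is usually visible in the NIC driver properties.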

- Si

On Wed, 2005-05-18 at 17:09 -0400, Matthew Butt wrote:
Matt I have two identical Win 2k3 servers (Dell PowerEdge 2800,
 U320
  RAID5,
Matt dual P4 Xeon 2.8) that I need to backup data onto an FC3
 server
  running
Matt Bacula (P4 2.8GHz, USB2 HDD).  All three machines have Gigabit
  cards
Matt running on a Gigabit switch with appropriate Cat5e cables.
  
Matt Server1 has two files totaling 5Gb.  Bacula grabs these files
 at
  about
Matt 25MB/sec - total backup time is under 4mins.
Matt Server2 has ~4000 files totaling 100Gb.  The problem is that
  Bacula is
Matt only running at about 50Kb/sec for this server - simple maths
  tells us
Matt that it's going to take 3.5 weeks to backup the entire server!
  
Matt There's obviously something awry here - can anyone give me any
  ideas
Matt what to look into? The networking between the machines all
 appears
  to be
Matt fine (I can pull off large files between the Win 2k3 servers
 at
  around
Matt 30MB/sec) so it seems to be something to do with the Bacula
 FD.
  
  I would start by looking at the Processes tab of the Task Manager on
  Server2
  during the backup to see what % of the CPU bacula-fd is getting and if
  anything else is running.
 
 The bacula-fd process on Server2 is using at most about 2%.  Mem usage
 is very low (3Mb).  Nothing else is using the processor or disks
 intensively on that machine at the moment and bconsole is reporting
 about 3MB/sec transfer (speed changes wildly it seems, but never very
 fast!)
 
 The bacula server is running around 5-15% for bacula-sd.  Again, nothing
 much else is happening on that server, load average: 0.21, 0.12, 0.09.
 
 Is there any profiling I can run on the Windows FD client, or at least
 see a file-by-file progress?
 
 
 
-- 
Simon Weller 
Systems Engineer, LPIC-2
Education Networks of America
1101 McGavock St.
Nashville TN 37203
Direct Line:  615.312.6068
Network Operations Center: 1.800.836.4357





Re: [Bacula-users] Concurrent Job Behaviour

2005-05-18 Thread Sean O'Grady
Hi,
Good points on a number of things but a few comments need to be made.
1) I'm not attempting to use spooling as a backup method that I want to 
restore from. I'm using spooling as it's intended, to avoid 
shoe-shining. I back up a number of client servers at remote sites and 
their network connections can sometimes be saturated while backing up. 
With the spooling the tape is only moving when it needs to be, which is 
good for the wear and tear :) There are multiple clients and I'm trying 
to use different pools of tapes for them, which is what's got me into this 
predicament. In terms of spool size and running out of space, that is a 
consideration even in the current version, so with some careful 
management this problem could be avoided.

2) I don't believe writing to a disk-based volume and then migrating to 
tape would work for me. For restores, wouldn't that require me to first 
restore the disk-based volume from tape to disk and then restore the 
files I need from that disk volume (which is really its own Storage 
Device)? I haven't really looked into this scenario but the bits that I 
have read led me to believe that the restore scenario would be like that.

3) In terms of waiting for a Volume to be inserted I have the luxury of 
having a tape auto-loader doing the work for me. In my proposed scenario 
Bacula could check to see what Volume it requires  as the Job finishes 
its spooling and if the tape is not in the drive it could issue an 
mtx-changer command and have the autoloader load it. I see some 
potential issues here with timing and deadlocking for the drive between 
jobs but some careful queue management could ensure this works.

4) You're absolutely right about Bacula positioning the tape before the 
Job starts. Looking at the code, I see that data spooling begins 
only after Bacula acquires the Storage Device, which wouldn't work so 
well with Jobs needing Volumes from multiple Pools. With how the checks 
are currently working in terms of getting the OK to start, the job 
wouldn't end up being too much different (I say with a wink), possibly 
some shuffling around in the order of the sub-routines?? In reality I 
think this is where the major part of the work would be needed, since 
there is potential for some major failure here.

With all this being said I'll definitely bring it up next time Kern asks 
for wish-list suggestions. In the meantime I can simply do away with 
the multiple pools and make sure that same-Level Jobs happen in the same 
time frames, and I should have the behaviour that I want minus the 
separate Pools.

Thanks everyone for your help!
Sean
Arno Lehmann wrote:
Hello,
Sean O'Grady wrote:
Hi,
...
I believe I have sorted out what my issue with this is. As I didn't 
post my complete configs and only the ones that I thought would be 
relevant I ended up only giving half the picture. What was missing 
was that there is another set of Pool tapes and different Jobs that 
run using these Pools (that also do data spooling) at the same time 
as the Jobs I showed before.

Ok, so this explains it.
Looking at src/dird/jobq.c I see the following which hopefully Kern 
or someone else in touch with the code can enlighten a bit more for me.

Well, I'm not in touch with the code, but still...
 SNIP
if (njcr->store == jcr->store && njcr->pool != jcr->pool) {
   skip_this_jcr = true;
   break;
}
 SNIP
This says to me that as long as the Pools of the Jobs being queued 
match, the Jobs will all run concurrently. Jobs however that have 
mismatching Pools will instead queue and wait for the storage device 
to free when previous jobs complete.

That's about it, I'd say.
It's probably not this simple but some behaviour equivalent to ...
if (njcr->store == jcr->store && njcr->pool != jcr->pool &&
    njcr->spool_data != true) {
   skip_this_jcr = true;
   break;
}

... should allow for Jobs with different Pools that have 
spooling on to queue.

Your idea might be possible, but there are some other things to consider.
One is that Bacula positions the tape *before* starting a job, i.e. 
before starting to spool data.

I was wondering about this, but I can see some good reason as well. I 
guess that Kern's idea was that a job should only run when everything 
indicates that it can run at all.

So, making sure tape space is available is one important preparation.
To ensure that Jobs complete, some further checks by the storage 
daemon and the director would be needed:

1) when spooling from the client completes, is the Storage device 
available for append?
2) if the Storage device is available, is a Pool object suitable for 
this Job currently loaded (if not, load it)
3) when the Job completes, check the status of the queued Jobs, grab 
the next Job whose spooling is complete, and go to 2) again

Although I can see advantages in your scenario I also see some 
disadvantages.

Spool space is one important thing - allowing jobs to spool without 
being sure when they will be despooled can use up much or even all of 
your disk space, thus 

Re: [Bacula-users] Concurrent Job Behaviour

2005-05-18 Thread Kern Sibbald
On Wednesday 18 May 2005 22:01, Sean O'Grady wrote:
 Hi,

 After seeing two people respond saying that this was feasible and
 checking what Wilson had in his config against mine I did a little more
 digging. (Thanks Arno, your e-mail came in as I was writing this and
 confirmed the info about Pools. Also I'm running 1.36.3. )

 I believe I have sorted out what my issue with this is. As I didn't post
 my complete configs and only the ones that I thought would be relevant I
 ended up only giving half the picture. What was missing was that there
 is another set of Pool tapes and different Jobs that run using these
 Pools (that also do data spooling) at the same time as the Jobs I showed
 before.

 Looking at src/dird/jobq.c I see the following which hopefully Kern or
 someone else in touch with the code can enlighten a bit more for me.

  SNIP

 if (njcr->store == jcr->store && njcr->pool != jcr->pool) {
   skip_this_jcr = true;
   break;
 }

Two jobs with the same Storage resource and different Pools cannot run 
simultaneously because (with the 1.36 implementation) this would imply that 
one tape drive could mount two different Volumes.


  SNIP

 This says to me that as long as the Pools of the Jobs being queued
 match, the Jobs will all run concurrently. Jobs however that have
 mismatching Pools will instead queue and wait for the storage device to
 free when previous jobs complete.

 It's probably not this simple but some behaviour equivalent to ...

 if (njcr->store == jcr->store && njcr->pool != jcr->pool &&
     njcr->spool_data != true) {
   skip_this_jcr = true;
   break;
 }

 ... should allow for Jobs with different Pools that have
 spooling on to queue. To ensure that Jobs complete, some further checks by the
 storage daemon and the director would be needed:

 1) when spooling from the client completes, is the Storage device
 available for append?
 2) if the Storage device is available, is a Pool object suitable for this
 Job currently loaded (if not, load it)
 3) when the Job completes, check the status of the queued Jobs, grab the
 next Job whose spooling is complete, and go to 2) again

 My question now changes to: Is there a way for Jobs that use different
 Pools to run concurrently, as long as the Job Definitions are set to Spool
 Data, as outlined in the example above (or something similar)?

Use two different Storage resources, so that each one can have a different 
Volume from a different Pool mounted.
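
As a minimal sketch of what that means in the Director config (the names 
here are invented, and this assumes you really have two devices defined 
in the SD, e.g. two drives or a tape drive plus a file device):

Storage {
  Name = polaris-sd-drive0
  Address = your.sd.host
  SDPort = 9103
  Password = "xxx"
  Device = Drive-0
  Media Type = DLTIV
  Maximum Concurrent Jobs = 10
}
Storage {
  Name = polaris-sd-drive1
  Address = your.sd.host
  SDPort = 9103
  Password = "xxx"
  Device = Drive-1
  Media Type = DLTIV
  Maximum Concurrent Jobs = 10
}

with matching Device resources named Drive-0 and Drive-1 in 
bacula-sd.conf, and each Job pointed at the Storage resource whose Pool 
it uses, so each Storage resource can keep its own Volume mounted.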


 Or of course maybe Bacula can already handle this and I'm just missing it
 :)

I believe that you are just missing it, but it is not so obvious.


 Thanks,
 Sean

 Arno Lehmann wrote:
  Hi.
 
  Sean O'Grady wrote:
  Well it's good to know that Bacula will do what I need!
 
  Guess now I need to determine what I've done wrong in my configs ...
 
  I'm short-forming all the config information to reduce the size of the
  e-mail, but I can post my full configs if necessary. Anywhere I
  have Maximum Concurrent Jobs I've posted that section of the config.
  If there is something else besides Maximum Concurrent Jobs needed in
  the configs to get this behaviour to happen and I'm missing it, please
  let me know.
 
  The short form is ok :-)
 
  Now, after reading through it I actually don't see any reason why only
  one job at a time is run.
 
  Perhaps someone else can...
 
  Still, I have some questions.
  First, which version of bacula do you use?
  Then, do you perhaps use job overrides concerning the pools or the
  priorities in your schedule?
  And, finally, are all the jobs scheduled to run at the same level, e.g.
  full, and do they actually do so? Perhaps you have a job running at Full
  level, and the others are scheduled to run incremental, so they have to
  wait for the right media (of pool DailyPool).
 
  Arno
 
  Any suggestions appreciated!
 
  Sean
 
  In bacula-dir.conf ...
 
  Director {
   Name = mobinet-dir1
   DIRport = 9101# where we listen for UA connections
   QueryFile = /etc/bacula/query.sql
   WorkingDirectory = /data/bacula/working
   PidDirectory = /var/run
   Maximum Concurrent Jobs = 10
   Password =  # Console password
   Messages = Daemon
  }
 
  JobDefs {
Name = MobinetDef
Storage = polaris-sd
Schedule = Mobinet-Cycle
Type = Backup
Max Start Delay = 32400 # 9 hours
Max Run Time = 14400 # 4 hours
Rerun Failed Levels = yes
Maximum Concurrent Jobs = 5
Reschedule On Error = yes
Reschedule Interval = 3600
Reschedule Times = 2
Priority = 10
Messages = Standard
Pool = Default
Incremental Backup Pool = MobinetDailyPool
Differential Backup Pool = MobinetWeeklyPool
Full Backup Pool = MobinetMonthlyPool
SpoolData = yes
  }
 
  JobDefs {
Name = SiriusWebDef
Storage = polaris-sd
Schedule = SiriusWeb-Cycle
Type = Backup
Max Start Delay = 32400 # 9 hours
Max Run Time = 14400 # 4 hours
Rerun Failed Levels = yes
Maximum Concurrent Jobs = 5
Reschedule On Error = yes
  

Re: [Bacula-users] Concurrent Job Behaviour

2005-05-18 Thread Kern Sibbald
On Wednesday 18 May 2005 22:37, Arno Lehmann wrote:
 Hello,

 Sean O'Grady wrote:
  Hi,

 ...

  I believe I have sorted out what my issue with this is. As I didn't post
  my complete configs and only the ones that I thought would be relevant I
  ended up only giving half the picture. What was missing was that there
  is another set of Pool tapes and different Jobs that run using these
  Pools (that also do data spooling) at the same time as the Jobs I showed
  before.

 Ok, so this explains it.

  Looking at src/dird/jobq.c I see the following which hopefully Kern or
  someone else in touch with the code can enlighten a bit more for me.

 Well, I'm not in touch with the code, but still...

   SNIP
 
  if (njcr->store == jcr->store && njcr->pool != jcr->pool) {
     skip_this_jcr = true;
     break;
  }
 
   SNIP
 
  This says to me that as long as the Pools of the Jobs being queued
  match, the Jobs will all run concurrently. Jobs however that have
  mismatching Pools will instead queue and wait for the storage device to
  free when previous jobs complete.

 That's about it, I'd say.

  It's probably not this simple but some behaviour equivalent to ...
 
  if (njcr->store == jcr->store && njcr->pool != jcr->pool &&
      njcr->spool_data != true) {
     skip_this_jcr = true;
     break;
  }
 
  ... should allow for Jobs with different Pools that have
  spooling on to queue.

 Your idea might be possible, but there are some other things to consider.
 One is that Bacula positions the tape *before* starting a job, i.e.
 before starting to spool data.

 I was wondering about this, but I can see some good reason as well. I
 guess that Kern's idea was that a job should only run when everything
 indicates that it can run at all.

 So, making sure tape space is available is one important preparation.

  To ensure that Jobs complete, some further checks by the
  storage daemon and the director would be needed:
 
  1) when spooling from the client completes, is the Storage device
  available for append?
  2) if the Storage device is available, is a Pool object suitable for this
  Job currently loaded (if not, load it)
  3) when the Job completes, check the status of the queued Jobs, grab the
  next Job whose spooling is complete, and go to 2) again

 Although I can see advantages in your scenario I also see some
 disadvantages.

 Spool space is one important thing - allowing jobs to spool without
 being sure when they will be despooled can use up much or even all of
 your disk space, thus preventing jobs from running smooth that otherwise
 could run fine.

 Then, I think it's a good idea to have jobs finish as soon as possible,
 with would not be the case if they started, spooled data, and then had
 to wait for someone to insert the right volume. Bacula keeps open some
 network connections with each job, so it even wastes resources (although
 this should not be a serious problem).

 Finally, I think spooling as bacula does it now is not the best approach
 to your needs. A spooled job is not available for restore and not
 considered done, so it's not yet a useful backup. A better approach
 would be to backup to a disk based volume first, and later migrate the
 job to tape.

  My question now changes to: Is there a way for Jobs that use different
  Pools to run concurrently, as long as the Job Definitions are set to Spool
  Data, as outlined in the example above (or something similar)?
 
  Or of course maybe Bacula can already handle this and I'm just missing
  it :)

 This time you're not :-)

 But, considering that Kern seems to have the development version in a
 state that approaches beta stability, I assume he will release the
 version 1.38 in the next few months.

Well, everyone is free to modify the code to suit his own needs, but my view 
is not to start jobs until the Volume can be mounted on the drive. If the 
drive is busy with another Volume, spooling or no spooling, my view is that 
it is not a good idea to start the job as it could lead to a deadlock or 
enormous consumption of disk space for long periods of time.


 After that, he will probably ask for feature requests and suggestions.
 This would be the best time to present your ideas once more.

 Anyway, I'd vote for job migration :-)

 Arno

  Thanks,
  Sean
 
  Arno Lehmann wrote:
  Hi.
 
  Sean O'Grady wrote:
  Well it's good to know that Bacula will do what I need!
 
  Guess now I need to determine what I've done wrong in my configs ...
 
  I'm short-forming all the config information to reduce the size of the
  e-mail, but I can post my full configs if necessary. Anywhere I
  have Maximum Concurrent Jobs I've posted that section of the
  config. If there is something else besides Maximum Concurrent Jobs
  needed in the configs to get this behaviour to happen and I'm missing
  it, please let me know.
 
  The short form is ok :-)
 
  Now, after reading through it I actually don't see any reason why only
  one job at a time is run.
 
  Perhaps someone else can...
 
  Still, I have 

[Bacula-users] Incremental Backups and 'new' old files

2005-05-18 Thread Ryan LeBlanc
We are running tests with Bacula to see if it will work in our
environment.  So far, we are very impressed!

We have, however, run into a small problem.  We do a full backup of a
folder, and all files are copied as expected.  We then put a file into
this folder.  It, however is an old file with a create/modified date
older than the latest full backup.  However, this file is new to the
folder.  Bacula ignores this file in the incremental backup.  The next
full backup to come along backs the file up as expected, along with
everything else in the folder.

Is this a bug, by design, or a configuration problem on our end?

Ryan




Re: [Bacula-users] Concurrent Job Behaviour

2005-05-18 Thread Kern Sibbald
On Wednesday 18 May 2005 23:24, Sean O'Grady wrote:
 Hi,

 Good points on a number of things but a few comments need to be made.

  1) I'm not attempting to use spooling as a backup method that I want to
  restore from. I'm using spooling as it's intended, to avoid
  shoe-shining. I back up a number of client servers at remote sites and
  their network connections can sometimes be saturated while backing up.
  With the spooling the tape is only moving when it needs to be, which is
  good for the wear and tear :) There are multiple clients and I'm trying
  to use different pools of tapes for them, which is what's got me into this
  predicament. In terms of spool size and running out of space, that is a
  consideration even in the current version, so with some careful
  management this problem could be avoided.

  2) I don't believe writing to a disk-based volume and then migrating to
  tape would work for me. For restores, wouldn't that require me to first
  restore the disk-based volume from tape to disk and then restore the
  files I need from that disk volume (which is really its own Storage
  Device)? I haven't really looked into this scenario but the bits that I
  have read led me to believe that the restore scenario would be like that.

Actually this would probably work quite well, because while the data is on 
disk Bacula would restore it from disk, and when it is on tape Bacula would 
restore it from tape. 

However, this is not currently implemented and thus is not possible.


 3) In terms of waiting for a Volume to be inserted I have the luxury of
 having a tape auto-loader doing the work for me. In my proposed scenario
 Bacula could check to see what Volume it requires  as the Job finishes
 its spooling and if the tape is not in the drive it could issue an
 mtx-changer command and have the autoloader load it. I see some
 potential issues here with timing and deadlocking for the drive between
 jobs but some careful queue management could ensure this works.


  4) You're absolutely right about Bacula positioning the tape before the
  Job starts. Looking at the code, I see that data spooling begins
  only after Bacula acquires the Storage Device, which wouldn't work so
  well with Jobs needing Volumes from multiple Pools. With how the checks
  are currently working in terms of getting the OK to start, the job
  wouldn't end up being too much different (I say with a wink), possibly
  some shuffling around in the order of the sub-routines?? In reality I
  think this is where the major part of the work would be needed, since
  there is potential for some major failure here.

Well, the idea of obtaining a tape drive at the last minute is interesting, 
and I'm going to think about it carefully, but my intuition tells me it is  
dangerous.  You could have 10 jobs partially completed all waiting for one 
tape drive.  This could bring your server to its knees in terms of resource 
usage (especially disk space).


  With all this being said I'll definitely bring it up next time Kern asks
  for wish-list suggestions. In the meantime I can simply do away with
  the multiple pools and make sure that same-Level Jobs happen in the same
  time frames, and I should have the behaviour that I want minus the
  separate Pools.

Yes, or get more tape drives. :-)


 Thanks everyone for your help!

 Sean

 Arno Lehmann wrote:
  Hello,
 
  Sean O'Grady wrote:
  Hi,
 
  ...
 
  I believe I have sorted out what my issue with this is. As I didn't
  post my complete configs and only the ones that I thought would be
  relevant I ended up only giving half the picture. What was missing
  was that there is another set of Pool tapes and different Jobs that
  run using these Pools (that also do data spooling) at the same time
  as the Jobs I showed before.
 
  Ok, so this explains it.
 
  Looking at src/dird/jobq.c I see the following which hopefully Kern
  or someone else in touch with the code can enlighten a bit more for me.
 
  Well, I'm not in touch with the code, but still...
 
   SNIP
 
   if (njcr->store == jcr->store && njcr->pool != jcr->pool) {
      skip_this_jcr = true;
      break;
   }
 
   SNIP
 
  This says to me that as long as the Pools of the Jobs being queued
  match, the Jobs will all run concurrently. Jobs however that have
  mismatching Pools will instead queue and wait for the storage device
  to free when previous jobs complete.
 
  That's about it, I'd say.
 
   It's probably not this simple but some behaviour equivalent to ...
  
   if (njcr->store == jcr->store && njcr->pool != jcr->pool &&
       njcr->spool_data != true) {
      skip_this_jcr = true;
      break;
   }
  
   ... should allow for Jobs with different Pools that have
   spooling on to queue.
 
   Your idea might be possible, but there are some other things to consider.
   One is that Bacula positions the tape *before* starting a job, i.e.
   before starting to spool data.
 
  I was wondering about this, but I can see some good reason as well. I
  guess that Kern's idea was that a job should only run when 

RE: [Bacula-users] Speed of Windows FD

2005-05-18 Thread Matthew Butt
Hi Simon,

The bacula server is running at 1000Mbps, full duplex:

tg3: eth0: Link is up at 1000 Mbps, full duplex.
tg3: eth0: Flow control is on for TX and on for RX.

I'm trying to figure out at what speed/duplex the Windows server is running, but
the switch it's plugged into shows that it's also 1000Mbps full
duplex.  Cabling is all Cat5e.

Matthew Butt + T R I C Y C L E

 -Original Message-
 From: Simon Weller [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, May 18, 2005 5:14 PM
 To: Matthew Butt
 Cc: bacula-users@lists.sourceforge.net
 Subject: RE: [Bacula-users] Speed of Windows FD
 
 Have you checked network speed and duplex?
 
 - Si
 
 On Wed, 2005-05-18 at 17:09 -0400, Matthew Butt wrote:
 Matt I have two identical Win 2k3 servers (Dell PowerEdge 2800,
  U320
   RAID5,
 Matt dual P4 Xeon 2.8) that I need to backup data onto an FC3
  server
   running
 Matt Bacula (P4 2.8GHz, USB2 HDD).  All three machines have
Gigabit
   cards
 Matt running on a Gigabit switch with appropriate Cat5e cables.
  
 Matt Server1 has two files totaling 5Gb.  Bacula grabs these
files
  at
   about
 Matt 25MB/sec - total backup time is under 4mins.
 Matt Server2 has ~4000 files totaling 100Gb.  The problem is
that
   Bacula is
 Matt only running at about 50Kb/sec for this server - simple
maths
   tells us
 Matt that it's going to take 3.5 weeks to backup the entire
server!
  
 Matt There's obviously something awry here - can anyone give me
any
   ideas
 Matt what to look into? The networking between the machines all
  appears
   to be
 Matt fine (I can pull off large files between the Win 2k3
servers
  at
   around
 Matt 30MB/sec) so it seems to be something to do with the
Bacula
  FD.
  
   I would start by looking at the Processes tab of the Task Manager
on
   Server2
   during the backup to see what % of the CPU bacula-fd is getting
and if
   anything else is running.
 
  The bacula-fd process on Server2 is using at most about 2%.  Mem
usage
  is very low (3Mb).  Nothing else is using the processor or disks
  intensively on that machine at the moment and bconsole is reporting
  about 3MB/sec transfer (speed changes wildly it seems, but never
very
  fast!)
 
  The bacula server is running around 5-15% for bacula-sd.  Again,
nothing
  much else is happening on that server, load average: 0.21, 0.12,
0.09.
 
  Is there any profiling I can run on the Windows FD client, or at
least
  see a file-by-file progress?
 
 
 
 --
 Simon Weller
 Systems Engineer, LPIC-2
 Education Networks of America
 1101 McGavock St.
 Nashville TN 37203
 Direct Line:  615.312.6068
 Network Operations Center: 1.800.836.4357
 






Re: [Bacula-users] Incremental Backups and 'new' old files

2005-05-18 Thread Ryan LeBlanc
Arno, thank you for your response.

Here are our details:

Bacula version 1.36.3 server running on Linux kernel 2.4.26.  It has
ext2 partitions mounted (rw)

The client is running Windows XP, no special mount options, just windows
default.  NTFS format on the partition


Arno Lehmann wrote:

 Hello,

 Ryan LeBlanc wrote:

 We are running tests with Bacula to see if it will work in our
 environment.  So far, we are very impressed!

 We have, however, run into a small problem.  We do a full backup of a
 folder, and all files are copied as expected.  We then put a file into
 this folder.  It, however is an old file with a create/modified date
 older than the latest full backup.  However, this file is new to the
 folder.  Bacula ignores this file in the incremental backup.  The next
 full backup to come along backs the file up as expected, along with
 everything else in the folder.

 Is this a bug, by design, or a configuration problem on our end?


 The behaviour you observe might depend on the file system. According
 to the manual, (under unix and linux systems) Bacula uses the timestamps of the
 last modification or attribute change. Usually, what you describe
 should result in a new attribute change timestamp. However, that
 might depend on your filesystem and its mount options.

 So, most probably, it's a configuration problem or happens by using
 the wrong operating system. Well, I didn't check this here, but at
 least that's what the manual says.

 You might want to tell us which client operating system, file system
 and mount options you use - perhaps someone can tell more then.

 Arno


 Ryan








Re: [Bacula-users] Incremental Backups and 'new' old files

2005-05-18 Thread Arno Lehmann
Ryan LeBlanc wrote:
Arno, thank you for your response.
Here are our details:
Bacula version 1.36.3 server running on Linux kernel 2.4.26.  It has
ext2 partitions mounted (rw)
Ok, the server doesn't matter here, I think.
The client is running Windows XP, no special mount options, just windows
default.  NTFS format on the partition
As far as I know, NTFS has timestamps - atime, mtime and ctime - 
similar to those of normal unix file systems. I'm not sure, but I think 
I remember reading somewhere that under Windows you can avoid changing 
them when you modify a file.
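
If I remember the directive right, the relevant FileSet option on the 
Bacula side is mtimeonly - a rough sketch with an invented FileSet name 
and path; by default it is off, so Bacula looks at both st_mtime and 
st_ctime when deciding what goes into an Incremental:

FileSet {
  Name = "xp-client-set"
  Include {
    Options {
      signature = MD5
      mtimeonly = no   # default: compare both mtime and ctime against the last backup
    }
    File = "c:/data"
  }
}

So if the copy operation on the client preserved both timestamps, the 
file would be skipped until the next Full.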

Let's see what others say...
Arno
Arno Lehmann wrote:

Hello,
Ryan LeBlanc wrote:

We are running tests with Bacula to see if it will work in our
environment.  So far, we are very impressed!
We have, however, run into a small problem.  We do a full backup of a
folder, and all files are copied as expected.  We then put a file into
this folder.  It, however is an old file with a create/modified date
older than the latest full backup.  However, this file is new to the
folder.  Bacula ignores this file in the incremental backup.  The next
full backup to come along backs the file up as expected, along with
everything else in the folder.
Is this a bug, by design, or a configuration problem on our end?

The behaviour you observe might depend on the file system. According
to the manual, (under unix and linux systems) Bacula uses the timestamps of the
last modification or attribute change. Usually, what you describe
should result in a new attribute change timestamp. However, that
might depend on your filesystem and its mount options.
So, most probably, it's a configuration problem or happens by using
the wrong operating system. Well, I didn't check this here, but at
least that's what the manual says.
You might want to tell us which client operating system, file system
and mount options you use - perhaps someone can tell more then.
Arno

Ryan

--
IT-Service Lehmann[EMAIL PROTECTED]
Arno Lehmann  http://www.its-lehmann.de