Re: [Bacula-users] Job is waiting on Storage

2017-07-21 Thread Ana Emília M. Arruda
= 1
>
> }
>
>
>
> Device {
>
>   Name = MonthlyDevice1
>
>   Media Type = MonthlyDisk
>
>   Archive Device = /backup/bacula/monthly
>
>   Autochanger = yes;
>
>   LabelMedia = yes;   # lets Bacula label unlabeled media
>
>   Random Access = Yes;
>
>   AutomaticMount = yes;   # when device opened, read it
>
>   RemovableMedia = no;
>
>   AlwaysOpen = no;
>
>   Maximum Concurrent Jobs = 1
>
> }
>
>
>
> Device {
>
>   Name = MonthlyDevice2
>
>   Media Type = MonthlyDisk
>
>   Archive Device = /backup/bacula/monthly
>
>   Autochanger = yes;
>
>   LabelMedia = yes;   # lets Bacula label unlabeled media
>
>   Random Access = Yes;
>
>   AutomaticMount = yes;   # when device opened, read it
>
>   RemovableMedia = no;
>
>   AlwaysOpen = no;
>
>   Maximum Concurrent Jobs = 1
>
> }
>
>
>
> Device {
>
>   Name = MonthlyDevice3
>
>   Media Type = MonthlyDisk
>
>   Archive Device = /backup/bacula/monthly
>
>   Autochanger = yes;
>
>   LabelMedia = yes;   # lets Bacula label unlabeled media
>
>   Random Access = Yes;
>
>   AutomaticMount = yes;   # when device opened, read it
>
>   RemovableMedia = no;
>
>   AlwaysOpen = no;
>
>   Maximum Concurrent Jobs = 1
>
> }
>
>
>
> *Jim Richardson*
>
> CISSP CISA
>
>
> Secur*IT*360
>
>
>
> *From:* Ana Emília M. Arruda [mailto:emiliaarr...@gmail.com]
> *Sent:* Sunday, July 16, 2017 8:40 PM
> *To:* Jim Richardson <j...@securit360.com>
> *Cc:* Bill Arlofski <waa-bac...@revpol.com>; Bacula-users@lists.
> sourceforge.net
>
> *Subject:* Re: [Bacula-users] Job is waiting on Storage
>
>
>
> Hi Jim,
>
>
>
> I will try to help here.
>
>
>
> It seems to me your C2T-Data backup job is reading from disk and writing
> to tape.
>
>
>
> The disk autochanger used by this job for reading is "FileChgr", and it has
> three devices, each with a different media type (DailyDisk, WeeklyDisk and
> MonthlyDisk).
>
>
>
> In this case, only one drive will be able to use the "DailyDisk" media type.
>
>
>
> Since jobid=934 is using the DailyDevice for reading, you do not have any
> other device to use for writing DailyDisk media, and this is why
> jobids=936-939 are waiting.
>
>
>
> Please note this kind of disk autochanger configuration is not
> recommended. Instead, all drives configured for one disk autochanger should
> use the same media type.
>
>
>
> I would recommend reviewing your current settings so that each
> autochanger deals with only one specific media type.
>
>
>
> In your case, you will need at least one drive to be used by the C2T-Data
> backup job for reading and another drive to be used by any other backup job
> for writing.
>
>
>
> Hope this helps.
>
>
>
> Best,
>
> Ana
>
>
>
>
>
> On 14 Jul 2017 18:39, "Jim Richardson" <j...@securit360.com> wrote:
>
> Bill, thank you for your response.  The C2T "Cycle to Tape" jobs are
> actually functioning properly.  The first job takes longer, and I have one
> tape drive.  I am using Priority to ensure that the C2T-Data job completes
> before the C2T-Archive job.  The D2D "Daily to Disk" jobs use a different
> set of devices.  But if this could be the root of my problem, I will
> investigate.  To complete the picture: the priority of the C2T-Data job is
> 10, the C2T-Archive job is 11, and the D2D jobs are 9, except for the
> D2D-Bacula post-backup job, which is 99 because I want a clean backup after
> all other jobs are complete.
>
>
>
> This is the behavior I am looking for: *from the 7.4.6 manual*: "Note
> that only higher priority jobs will start early. Suppose the director will
> allow two concurrent jobs, and that two jobs with priority 10 are running,
> with two more in the queue. If a job with priority 5 is added to the queue,
> it will be run as soon as one of the running jobs finishes. However, new
> priority 10 jobs will not be run until the priority 5 job has finished."
>
>
>
> It seems I am limited to only 2 connections to my Storage, but I can’t see
> where that is configured improperly.
>
>
>
> As a quick rationale:
>
> Concurrency:
>
> My DIR allows for up to 20 concurrent
>
> My SD allows for up to 20 concurrent
>
> My FD allows for up to 20 concurrent
>
> My Clients allow for up to 2 concurrent (by schedule, this will only happen
> on Sundays)
>
> My Bacula Client allows for up to 10 concurrent (just in ca

Re: [Bacula-users] Job is waiting on Storage

2017-07-16 Thread Ana Emília M. Arruda
nger Command = ""

  Changer Device = /dev/null

}



Device {

  Name = DailyDevice

  Media Type = DailyDisk

  Archive Device = /backup/bacula/daily

  Autochanger = yes;

  LabelMedia = yes;

  Random Access = Yes;

  AutomaticMount = yes;

  RemovableMedia = no;

  AlwaysOpen = no;

  Maximum Concurrent Jobs = 10

}



Device {

  Name = WeeklyDevice

  Media Type = WeeklyDisk

  Archive Device = /backup/bacula/weekly

  Autochanger = yes;

  LabelMedia = yes;

  Random Access = Yes;

  AutomaticMount = yes;

  RemovableMedia = no;

  AlwaysOpen = no;

  Maximum Concurrent Jobs = 10

}



Device {

  Name = MonthlyDevice

  Media Type = MonthlyDisk

  Archive Device = /backup/bacula/monthly

  Autochanger = yes;

  LabelMedia = yes;

  Random Access = Yes;

  AutomaticMount = yes;

  RemovableMedia = no;

  AlwaysOpen = no;

  Maximum Concurrent Jobs = 10

}



Autochanger {

  Name  = "Dell-TL1000"

  Device= ULT3580

  Description   = "Dell TL1000 (model IBM 3572-TL)"

  Changer Device= /dev/sg5

  Changer Command   = "/usr/local/sbin/mtx-changer %c %o %S %a %d"

}



Device {

  Name  = ULT3580

  Description   = "IBM ULT3580-HH7"

  Media Type= LTO-7

  Archive Device= /dev/nst0

  Label Media   = yes

#  Label Type   = IBM;

  AutomaticMount= yes;

  AlwaysOpen= yes;

  RemovableMedia= yes;

  RandomAccess  = no;

  AutoChanger   = yes;

  Changer Device= /dev/sg5

  Drive Index   = 0

  Spool Directory   = /backup/bacula/spool

  Changer Command   = "/usr/libexec/bacula/mtx-changer %c %o %S %a %d"

  # Enable the Alert command only if you have the mtx package loaded

  Alert Command     = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"

  Maximum Concurrent Jobs = 1

}



/etc/bacula/bacula-fd.conf



FileDaemon {

  Name = bacula-fd

  FDport = 9102

  WorkingDirectory = /var/spool/bacula

  Pid Directory = /var/run

  Maximum Concurrent Jobs = 20

  Plugin Directory = /usr/lib64/bacula

}



Thank you again, and I hope we can find a resolution.







Jim Richardson




Re: [Bacula-users] Job is waiting on Storage

2017-07-14 Thread Jim Richardson


Re: [Bacula-users] Job is waiting on Storage

2017-07-14 Thread Bill Arlofski
On 07/14/2017 12:17 AM, Darold Lucus wrote:
> I have the same issue: I have four different storage devices, and only one
> job per storage device can run concurrently. Different jobs on different
> storage devices can run at the same time (4 jobs on 4 separate storage
> devices will run at once), but never two jobs on the same storage device.
> I am not sure if this is just typical behavior for the Bacula storage
> daemon, or if it is a setting that can be adjusted to make multiple jobs on
> the same storage device run concurrently. If someone has any extra insight
> on this, I would greatly appreciate it as well.
> 
>  
> 
> Sincerely,
> 
> Darold Lucus

Hi Darold,   (I am posting this reply to the list)

To help with this, I would ask you to post all of the resource configs,
like Jim did, and also post the bconsole output of:

* s dir

when you see only one job running while expecting multiple concurrent jobs.

The 's dir' (status director) output will tell us exactly what is preventing
a job from running...
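(For reference, the same commands typed out in full look like this in
bconsole; the storage name here is the one from Jim's config, so substitute
your own:)

* status dir
* status storage=Storage_Daily2Disk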

Best regards,

Bill



-- 
Bill Arlofski
http://www.revpol.com/bacula
-- Not responsible for anything below this line --

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Job is waiting on Storage

2017-07-13 Thread Bill Arlofski
On 07/13/2017 06:50 PM, Jim Richardson wrote:
> I can’t seem to get Bacula to run simultaneous jobs when using the same
> storage device.  Can anyone offer advice?
> 
>  
> 
> Running Jobs:
> 
> Console connected at 13-Jul-17 19:37
> 
> JobId  Type Level      Files     Bytes   Name            Status
> 
> ==
> 
>    934  Back Diff    106,028   537.4 G  C2T-Data is running
> 
>    935  Back Diff          0         0  C2T-Archive is waiting for higher
> priority jobs to finish
> 
>    936  Back Full     19,943   13.58 G  D2D-DC02-Application is running
> 
>    937  Back Full          0         0  D2D-HRMS-Application is waiting on
> Storage "Storage_Daily2Disk"
> 
>    938  Back Full          0         0  D2D-Fish-Application is waiting on
> Storage "Storage_Daily2Disk"
> 
>    939  Back Full          0         0  D2D-SPR01-Application is waiting on
> Storage "Storage_Daily2Disk"

Hi Jim,

To me, it looks like your settings are correct regarding MaximumConcurrentjobs
(MCJ)...

What I think is going on here is that jobid 935 is holding everything else up
due to it having a different priority.

Notice that its status is: "waiting for higher priority jobs to finish"

Unless you have set "AllowMixedPriority" in your Job resources, then the
other jobs will wait until this one is finished. Personally, I do not
recommend that this be set, as it causes more confusion than clarity in my
opinion.

Just an FYI: The status "is waiting for higher priority jobs to finish", in my
humble opinion is not really 100% correct. It could be that it "is waiting on
LOWER priority jobs to finish", but the same message is printed in both cases.
I think this message could be more specific to the actual case, or made more
generic to say "waiting on jobs of different priorities to finish, and
'AllowMixedPriority' not enabled..."  (something like this)

I wonder why though, that jobid  936 (after 935) is listed as running...
Perhaps check its priority to see if it is the same as jobid 934 "C2T-Data"

If you set the "C2T-Archive" job's priority to the same priority as the other
backup jobs, then it will not be held up, and it will not hold up any other
queued jobs.

You can investigate the "AllowMixedPriority" option, but I think it may not do
what you want (exactly).

Another option is to set up some schedules to try to make sure this "Archive"
job is run when no other normal backup jobs are running.
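A minimal sketch of that schedule-based approach; the schedule name and the
time here are hypothetical, and the Job resource is abbreviated, so adjust
both to match your own job definitions:

Schedule {
  Name = "ArchiveWindow"
  # Pick a time well clear of the normal nightly backup window
  Run = Level=Full sun at 04:30
}

Job {
  Name = "C2T-Archive"
  Schedule = "ArchiveWindow"
  ...
}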


Best regards,

Bill

-- 
Bill Arlofski
http://www.revpol.com/bacula
-- Not responsible for anything below this line --



[Bacula-users] Job is waiting on Storage

2017-07-13 Thread Jim Richardson
I can't seem to get Bacula to run simultaneous jobs when using the same storage 
device.  Can anyone offer advice?

Running Jobs:
Console connected at 13-Jul-17 19:37
JobId  Type Level      Files     Bytes   Name            Status
==
   934  Back Diff    106,028   537.4 G  C2T-Data is running
   935  Back Diff          0         0  C2T-Archive is waiting for higher
priority jobs to finish
   936  Back Full     19,943   13.58 G  D2D-DC02-Application is running
   937  Back Full          0         0  D2D-HRMS-Application is waiting on
Storage "Storage_Daily2Disk"
   938  Back Full          0         0  D2D-Fish-Application is waiting on
Storage "Storage_Daily2Disk"
   939  Back Full          0         0  D2D-SPR01-Application is waiting on
Storage "Storage_Daily2Disk"

Relevant configuration settings:

bacula-dir.conf
Director {
 Maximum Concurrent Jobs = 20
}

Storage {
  Name = Storage_Daily2Disk
  Maximum Concurrent Jobs = 10   # run up to 10 jobs at the same time
}

Client { /* All clients */
  Maximum Concurrent Jobs = 2
}

bacula-sd.conf
Storage {
  Name = bacula-sd
  Maximum Concurrent Jobs = 20
}

Autochanger {
  Name = FileChgr
  Device = DailyDevice, WeeklyDevice, MonthlyDevice
  Changer Command = ""
  Changer Device = /dev/null
}

Device {
  Name = DailyDevice
  Media Type = DailyDisk
  Archive Device = /backup/bacula/daily
  Autochanger = yes;
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 10
}

bacula-fd.conf
FileDaemon {  # this is me
  Name = bacula-fd
  Maximum Concurrent Jobs = 20
}


Jim Richardson

CONFIDENTIALITY: This email (including any attachments) may contain 
confidential, proprietary and privileged information, and unauthorized 
disclosure or use is prohibited. If you received this email in error, please 
notify the sender and delete this email from your system. Thank you.


[Bacula-users] Job is waiting on Storage

2014-10-02 Thread Alexei Babich
Hello.
I am trying to use Bacula 5.2.6 on Debian Linux.
I have one storage device (a hard disk), one pool, a job limit on the
storage of 10, and 20 clients. Volumes are created automatically, and each
job uses one volume.
The schedule starts all 20 jobs for all 20 clients simultaneously, but only
one actually runs: some jobs are in "waiting on Storage" status, and some in
"waiting on max Storage jobs".
I want to configure Bacula as follows: the schedule starts all 20 jobs, but
no more than 10 of them (the storage job limit) can be in the running state
at once, and when one job finishes, the next job goes into the running state.
I hope I have outlined the basic idea clearly - I do not quite understand
the English language :)

So, can I configure Bacula to do what I need?

Thank you.

--
Meet PCI DSS 3.0 Compliance Requirements with EventLog Analyzer
Achieve PCI DSS 3.0 Compliant Status with Out-of-the-box PCI DSS Reports
Are you Audit-Ready for PCI DSS 3.0 Compliance? Download White paper
Comply to PCI DSS 3.0 Requirement 10 and 11.5 with EventLog Analyzer
http://pubads.g.doubleclick.net/gampad/clk?id=154622311iu=/4140/ostg.clktrk
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Job is waiting on Storage

2014-10-02 Thread John Drescher


Remember that Bacula storage devices operate like a tape drive, meaning only
one volume can be loaded at a time in a storage device. If you need more
volumes loaded, you must add more storage devices and adjust your jobs to go
to the different storage resources.
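As a sketch of what that looks like in bacula-sd.conf (the device names,
media type, and archive path here are hypothetical), two file devices
sharing one directory and one media type let two volumes be loaded, and
therefore two jobs write, at the same time:

Autochanger {
  Name = FileChgr1
  Device = FileDev1, FileDev2
  Changer Command = ""
  Changer Device = /dev/null
}

Device {
  Name = FileDev1
  Media Type = FileMedia
  Archive Device = /backup/bacula
  Autochanger = yes
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}

Device {
  Name = FileDev2
  Media Type = FileMedia
  Archive Device = /backup/bacula
  Autochanger = yes
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}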

John



[Bacula-users] Job is waiting on Storage

2014-10-02 Thread eudald
Hello Alexei,

Yes, you can configure Bacula to do so.

Just a few things:
In bacula-dir.conf, inside the Director resource braces, insert or modify
the directive:
Maximum Concurrent Jobs = 10

In bacula-sd.conf, inside the Storage resource braces, insert or modify the
directive:
Maximum Concurrent Jobs = 10

Check the config and restart Bacula.
That should do the work.
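(In context, and assuming the stock resource names from the default config,
which may differ in your setup, the two changes look like this:)

# In /etc/bacula/bacula-dir.conf
Director {
  Name = bacula-dir
  ...
  Maximum Concurrent Jobs = 10
}

# In /etc/bacula/bacula-sd.conf
Storage {
  Name = bacula-sd
  ...
  Maximum Concurrent Jobs = 10
}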

+--
|This was sent by eud...@leadiance.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--





[Bacula-users] Job keeps waiting on storage daemon

2014-06-28 Thread altiris
I have discovered the problem, and it had to do with my /etc/hosts file. I had
changed it slightly a while ago, and I recently remembered that Bacula
started acting up when I changed it (I was working with DNS and a mail server
at the time). Basically, the issue was that I had "192.168.12.13
mycomputer-mydomain.com mycomputer" instead of "192.168.12.13
mycomputer.mydomain.com mycomputer".

I am really upset that that was what's wrong... I make a lot of these silly
mistakes simply because the times I work on my server are late at night.

+--
|This was sent by altiris28...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--



--
Open source business process management suite built on Java and Eclipse
Turn processes into business applications with Bonita BPM Community Edition
Quickly connect people, data, and systems into organized workflows
Winner of BOSSIE, CODIE, OW2 and Gartner awards
http://p.sf.net/sfu/Bonitasoft
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Job keeps waiting on storage daemon

2014-06-28 Thread Kern Sibbald
I wouldn't take it so hard.  At least you know that it is not a Bacula
bug, which might have been even more painful :-)

Best regards,
Kern






[Bacula-users] Job keeps waiting on storage daemon

2014-06-27 Thread altiris
I have checked the config files, and there appear to be no 127.0.0.1 entries
in them.

+--
|This was sent by altiris28...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--





[Bacula-users] Job keeps waiting on storage daemon

2014-06-24 Thread altiris
I think the problem I am having is that the director is not able to connect
to the storage daemons for some odd reason. I went into the bconsole tool
(/usr/sbin/bconsole), typed "status", and selected Storage; bconsole would
hang at "Connecting to Storage daemon Game Servers at
mycomputer.mydomain.com:9103", where mycomputer and mydomain are replaced
with their respective names. I have checked that port 9103 is open, and it
is; my fqdn returns the name the director is trying to connect to (listed
above). I don't understand why this is happening; it used to work fine.

+--
|This was sent by altiris28...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--





Re: [Bacula-users] Job keeps waiting on storage daemon

2014-06-24 Thread John Drescher

Make sure you do not have 127.0.0.1 or localhost in any of your bacula
configuration files.

John



Re: [Bacula-users] Job keeps waiting on storage daemon

2014-06-20 Thread John Drescher
On Thu, Jun 19, 2014 at 10:51 PM, altiris
bacula-fo...@backupcentral.com wrote:
 Well...I already know I am doing multiple things wrong. I just made a new 
 daemon for each storage device, if I added a new storage device I would add a 
 new storage daemon and make them match such as the Media type name. I have 
 the two storage daemons on the same IP address and same port. This is all 
 being done on one machine. What confuses me though is even with just having 
 one backup job enabled and trying to run it I get that same error message as 
 when having two backup jobs enabled with only one job going.

That is because the second daemon fails to run.

John

--
HPCC Systems Open Source Big Data Platform from LexisNexis Risk Solutions
Find What Matters Most in Your Big Data with HPCC Systems
Open Source. Fast. Scalable. Simple. Ideal for Dirty Data.
Leverages Graph Analysis for Fast Processing  Easy Data Exploration
http://p.sf.net/sfu/hpccsystems
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Job keeps waiting on storage daemon

2014-06-19 Thread altiris
I think I am understanding better, but as you said, "Also there is nothing
stopping you from having more than 1 storage device with volumes on the same
filesystem." I have actually made everything separate per job (a separate
volume pool, storage device, and daemon, as you can see in the OP), because
to me this makes it easier to keep things organized and to track down future
errors (while it seems this is causing problems?). But I am still getting
problems. I have removed the entire second job from my config files, and
now, even with the one backup job, the director reports a status of waiting
on the Storage Daemon Game_Servers. Why is it doing this? This is the only
backup job that is currently enabled and set to run.

+--
|This was sent by altiris28...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--





Re: [Bacula-users] Job keeps waiting on storage daemon

2014-06-19 Thread John Drescher
On Thu, Jun 19, 2014 at 10:19 PM, altiris
bacula-fo...@backupcentral.com wrote:
 I think I am understanding better, but as you said Also there is nothing 
 stopping you from having more than 1 storage
 device with volumes on the same filesystem.  I have actually made everything 
 separate per job, so separate volume pool, storage device and daemon (as you 
 can see in the OP)because to me this makes it easier for things to be 
 organized and track down future errors (while it seems this is causing 
 problems?) but I am still getting problems. I have removed the entire second 
 job from my config files and now even with the one backup job I am getting a 
 status from the director that is waiting on the Storage Daemon Game_Servers   
   why is it doing this, this is the only backup job that is currently enabled 
 and set to run.

This sounds complicated. How did you create a new daemon for each SD? Do you
have each on a different port, or are you using a VM? You cannot have more
than one storage daemon listening on the same IP address and port.
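The usual layout (a sketch, not a tested config; it reuses the device names
from the original post) is a single storage daemon on port 9103 with one
Device resource per backup target, and one Director Storage resource per
device:

```conf
# bacula-sd.conf -- ONE storage daemon, two file devices (sketch)
Device {
  Name = Game_Servers_Backup
  Media Type = GameServerBackup
  Archive Device = /media/496GB_Filesystem/Backups/server1/Game_Servers_Backups
  Device Type = File
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}

Device {
  Name = Root_Backup
  Media Type = RootBackup
  Archive Device = /media/496GB_Filesystem/Backups/server1/Full_System_Backups
  Device Type = File
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}
```

Both Director Storage resources then point at the same Address and
SDPort = 9103 but name different Device/Media Type pairs, so no second daemon
(and no port clash) is needed.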

John



[Bacula-users] Job keeps waiting on storage daemon

2014-06-19 Thread altiris
Well... I already know I am doing multiple things wrong. I just made a new
daemon for each storage device; if I added a new storage device, I would add a
new storage daemon and make them match, for example on the Media Type name. I
have the two storage daemons on the same IP address and the same port. This is
all being done on one machine. What confuses me, though, is that even with
just one backup job enabled, trying to run it gives the same error message as
when two backup jobs are enabled with only one job going.






[Bacula-users] Job keeps waiting on storage daemon

2014-06-17 Thread altiris
I have been trying to set up Bacula so that it will perform 2 backup jobs:
one to back up a folder in a user's home directory, and another to back up an
entire partition. Each time I try to run one of the two backup jobs (via
Webmin), it says it has started, but when I go to the director status it
outputs "BackupGame_Servers 114 Incremental is waiting on Storage
Game_Servers", where BackupGame_Servers is the job name, 114 I believe is the
job number, Incremental is the backup level, and Game_Servers is the Storage
resource it is using. When I try to run the other backup job, I also get "is
waiting on Storage daemon _". The backups are set to be stored on a separate
partition, in separate folders on that partition. I don't see why it is doing
this. I use a different storage device, storage daemon, volume pool, and
backup schedule to keep everything organized. I posted my config files to
better show/explain and to eliminate the possibility of errors.


Backup job A (BackupGame_Servers)

/etc/bacula/bacula-dir.conf
Client {
  Name = bacula-server1
  Password = mypasswordhere
  Address = sysdomain.server1.com
  FDPort = 9102
  Catalog = MyCatalog
  File Retention = 30 days
  Job Retention = 6 months
}
FileSet {
  Name = Game_Servers
  Include {
File = /home/games/Servers
Options {
  signature = MD5
}
  }
}
Storage {
  Name = Game_Servers
  Password = mypasswordhere
  Address = sysdomain.server1.com
  SDPort = 9103
  Device = Game_Servers_Backup
  Media Type = GameServerBackup
  Maximum Concurrent Jobs = 20
}
Job {
  Name = BackupGame_Servers
  Type = Backup
  Client = bacula-server1
  FileSet = Game_Servers
  Schedule = WeeklyCycle_Daily
  Storage = Game_Servers
  Pool = File_Game_Servers
  Messages = Standard
  Level = Incremental
}
Job {
  Name = RestoreGame_Servers
  Type = Restore
  Client = bacula-server1
  FileSet = Game_Servers
  Storage = Game_Servers
  Pool = File_Game_Servers
  Messages = Standard
}
Schedule {
  Name = WeeklyCycle_Daily
  Run = Level=Incremental Pool=File_Game_Servers sun-sat at 2:00
}
Pool {
  Name = File_Game_Servers
  Pool Type = Backup
  Volume Retention = 365 days
  Recycle = yes
  AutoPrune = yes
}


/etc/bacula/bacula-sd.conf
Device {
  Name = Game_Servers_Backup
  Archive Device = /media/496GB_Filesystem/Backups/server1/Game_Servers_Backups
  Media Type = GameServerBackup
  Device Type = File
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  LabelMedia = yes
}


Backup Job B (BackupRoot)


/etc/bacula/bacula-dir.conf
FileSet {
  Name = Root
  Include {
File = /
Options {
  signature = MD5
}
  }
  Exclude {
File = /tmp
File = /media
  }
}
Pool {
  Name = File_Root
  Pool Type = Backup
  Volume Retention = 365 days
  Recycle = yes
  AutoPrune = yes
}
Storage {
  Name = Root
  Password = mypasswordhere
  Address = sysdomain.server1.com
  SDPort = 9103
  Device = Root_Backup
  Media Type = RootBackup
  Maximum Concurrent Jobs = 20
}
Job {
  Name = BackupRoot
  Type = Backup
  Level = Incremental
  Client = bacula-server1
  FileSet = Root
  Schedule = WeeklyCycle_Sundays
  Storage = Root
  Pool = File_Root
  Messages = Standard
  Enabled = No
}

Schedule {
  Name = WeeklyCycle_Sundays
  Run = Level=Incremental Pool=File_Root 1st-5th mon at 2:00
}

/etc/bacula/bacula-sd.conf

Device {
  Name = Root_Backup
  Archive Device = /media/496GB_Filesystem/Backups/server1/Full_System_Backups
  Media Type = RootBackup
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}






Re: [Bacula-users] Job keeps waiting on storage daemon

2014-06-17 Thread John Drescher
On Mon, Jun 16, 2014 at 12:32 PM, altiris
bacula-fo...@backupcentral.com wrote:
 I have been trying to setup bacula so that it will perform 2 backup jobs. One 
 to backup a folder in a user's home folder and another to backup an entire 
 partition. Each time I try to run one of the two backup jobs (via webmin) it 
 says it has started but when I go to the director status it outputs 
 BackupGame_Servers   114 Incremental is waiting on Storage 
 Game_Servers. Where BackupGame_Servers is the job name, 114 I believe is the 
 job number, Incremental is the type of backup, and Storage Game_Servers is 
 the Storage Daemon it is using. When i try to run the other backup job I also 
 get a is waiting on Storage daemon _. The backups are set to be stored 
 on on separate partition and in separate folders in that partition. I don't 
 see why this is doing this. I use a different storage device, storage daemon, 
 volume pool, and backup schedule to keep everything organized. I posted my 
 config files to better show/explain and eliminate the possibilities of errors.


Remember that a disk device acts just like a tape drive, meaning that a
single device cannot load 2 volumes at the same time. If you want more than 1
job to run concurrently, you need more than 1 storage device. If you plan on
adding additional clients and want them to run more simultaneous jobs, I
recommend that you search the archives for bacula vchanger. This will make
disk storage work like a virtual tape autochanger with multiple drives, so
you can have concurrency.
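For a rough idea of the shape such a setup takes, here is a sketch of a disk
"autochanger" with two drives in bacula-sd.conf (names and paths are
illustrative; the vchanger tool mentioned above manages its own device
definitions):

```conf
# bacula-sd.conf -- disk storage presented as an autochanger (sketch)
Autochanger {
  Name = FileChgr
  Device = FileDevice1, FileDevice2
  Changer Device = /dev/null
  Changer Command = ""
}

Device {
  Name = FileDevice1
  Media Type = FileStorage
  Archive Device = /media/496GB_Filesystem/Backups
  Device Type = File
  Autochanger = yes
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 1
}

Device {
  Name = FileDevice2
  Media Type = FileStorage
  Archive Device = /media/496GB_Filesystem/Backups
  Device Type = File
  Autochanger = yes
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 1
}
```

With two drives, two jobs writing to different volumes can run at once; the
Director's Storage resource would then reference the FileChgr autochanger
rather than a single device.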

John



[Bacula-users] Job keeps waiting on storage daemon

2014-06-17 Thread altiris
Thank you, John. So just to be sure: I can only perform one job on each
storage device at a time? In this case, I can only perform one backup to the
partition on my HDD? Even if the jobs are set to be performed at different
times, I still can't perform more than one job?






Re: [Bacula-users] Job keeps waiting on storage daemon

2014-06-17 Thread John Drescher
 Thank you John. So just to be sure, I can only perform one job on each 
 storage device. In this case, I can only perform one backup on the partition 
 in my HDD? Even if the jobs are to set to be performed at different times, I 
 still can't perform more than one job?


No, you can run multiple jobs to the same device at the same time. The jobs,
however, have to go to the same volume, and therefore the same pool; a
storage device can only access 1 volume at a time.

Also, this has nothing to do with your filesystem or partition. You can, and
are expected to, have more than 1 volume on your filesystem.
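For several jobs to actually interleave on one device, the concurrency
limits also have to line up in the Director and the SD. A fragment along
these lines (the directives are real Bacula directives; the values are
illustrative, not the poster's config, and defaults vary by version):

```conf
# bacula-dir.conf (fragment)
Director {
  Maximum Concurrent Jobs = 20
}
Storage {
  Name = Game_Servers
  Maximum Concurrent Jobs = 20   # already present in the posted config
}

# bacula-sd.conf (fragment)
Storage {
  Name = bacula-sd
  Maximum Concurrent Jobs = 20
}
```

If any of these is left at 1, jobs will queue even though the device could
otherwise accept them.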

John



Re: [Bacula-users] Job keeps waiting on storage daemon

2014-06-17 Thread John Drescher
On Tue, Jun 17, 2014 at 11:57 PM, John Drescher dresche...@gmail.com wrote:
 Thank you John. So just to be sure, I can only perform one job on each 
 storage device. In this case, I can only perform one backup on the partition 
 in my HDD? Even if the jobs are to set to be performed at different times, I 
 still can't perform more than one job?


 No you can run multiple jobs to the same device at the same time. The
 jobs however have to go to the same volume so the same pool. A storage
 device can only access 1 volume at a time.

 Also this has nothing to do with your filesystem or partition. You can
 and are expected to have more than 1 volume on your filesystem.

Also there is nothing stopping you from having more than 1 storage
device with volumes on the same filesystem.
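On the Director side, that looks like two Storage resources sharing one SD
address; a sketch reusing the names from earlier in the thread (illustrative,
untested):

```conf
# bacula-dir.conf -- two Storage resources, one SD, one filesystem (sketch)
Storage {
  Name = Game_Servers
  Password = mypasswordhere
  Address = sysdomain.server1.com
  SDPort = 9103
  Device = Game_Servers_Backup
  Media Type = GameServerBackup
}

Storage {
  Name = Root
  Password = mypasswordhere
  Address = sysdomain.server1.com
  SDPort = 9103
  Device = Root_Backup
  Media Type = RootBackup
}
```

Both devices' Archive Device paths can live on the same filesystem; the
devices, not the filesystem, are what limit concurrency.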

John



Re: [Bacula-users] Job is waiting on Storage

2010-09-01 Thread Marco Lertora

On 31/08/2010 21.00, Marco Lertora wrote:

Marco Lertora wrote:

On 31/08/2010 17.27, Bill Arlofski wrote:

On 08/31/10 08:44, Marco Lertora wrote:

Hi!

I've the same problem! Has anyone found a solution?

I have 3 concurrent jobs, which back up from different FDs to the same
device on the SD.
All jobs use the same pool, and the pool uses Maximum Volume Bytes as the
volume-splitting policy, as suggested in the docs.
All jobs have the same priority.

Everything starts well, but after some volume changes (because they reach
the max volume size) the storage loses the pool information of the
mounted volume.
So, jobs started after that wait on the SD for a mounted volume with the
same pool as the one wanted by the job.

Regards
Marco Lertora

Sorry for a me too post... But:


I have been noticing the same thing here.  I just have not been able to
monitor it and accurately document it.

Basically it appears to be exactly what you have stated above. I am also using
only disk storage with my file tapes configured to be a maximum of 10GB each.

I have seen a "status dir" show me "job xxx waiting on storage" and have
noted that the job(s) waiting are of the same priority as the job(s) currently
running and are configured to use the same device and pool.

I have also noticed exactly what Lukas Kolbe described here where the job
wants one pool, but thinks it has a null named pool:

 

3608 JobId=308 wants Pool="dp" but have Pool=""
   

and here where the device is mounted, the volume name is known but the pool is
unknown:

 

Device "dp1" (/var/bacula/diskpool/fs1) is mounted with:
   Volume:  Vol0349
   Pool:*unknown*
   Media type:  File
   Total Bytes=11,726,668,867 Blocks=181,775 Bytes/block=64,512
   Positioned at File=2 Block=3,136,734,274
   

So by all indications the job(s) that are waiting on storage should be
running but are instead needlessly waiting.


Initially, my thought was that I had the Pool in the jobs defined like:

Pool = Default

and the Default pool had no tapes in it - Bacula requires a Pool to be defined
in a Job definition - Which is why I used Default, but I was overriding the
Pool in the Schedule like so:

Schedule {
Name = WeeklyToOffsiteDisk
  Run = Full  pool=Offsite-eSATA  sun at 20:30
  Run = Incremental   pool=Offsite-eSATA-Inc  mon-fri at 20:30
  Run = Differential  pool=Offsite-eSATA-Diff sat at 20:30
}


I have recently reconfigured my system to use one pool Offsite-eSATA and
have set:

Pool = Offsite-eSATA

directly in all of the the Job definitions instead of using the Schedule
override, but I am still seeing what you both have described.
 


Hi,
I've tried to increase the SD log verbosity with the setdebug option, but no
luck. I've tried to look at the source, but it is quite complex, so no luck.

this is the code where the match fails:

static int is_pool_ok(DCR *dcr)
{
   DEVICE *dev = dcr->dev;
   JCR *jcr = dcr->jcr;

   /* Now check if we want the same Pool and pool type */
   if (strcmp(dev->pool_name, dcr->pool_name) == 0 &&
       strcmp(dev->pool_type, dcr->pool_type) == 0) {
      /* OK, compatible device */
      Dmsg1(dbglvl, "OK dev: %s num_writers=0, reserved, pool matches\n",
            dev->print_name());
      return 1;
   } else {
      /* Drive Pool not suitable for us */
      Mmsg(jcr->errmsg, _(
"3608 JobId=%u wants Pool=\"%s\" but have Pool=\"%s\" nreserve=%d on drive %s.\n"),
           (uint32_t)jcr->JobId, dcr->pool_name, dev->pool_name,
           dev->num_reserved(), dev->print_name());
      queue_reserve_message(jcr);
      Dmsg2(dbglvl, "failed: busy num_writers=0, reserved, pool=%s wanted=%s\n",
            dev->pool_name, dcr->pool_name);
   }
   return 0;
}
 


I suppose dev->pool_name was empty. This is confirmed by the code where the
status message is built:

   if (dev->is_labeled()) {
      len = Mmsg(msg, _("Device %s is mounted with:\n"
                        "    Volume:      %s\n"
                        "    Pool:        %s\n"
                        "    Media type:  %s\n"),
                 dev->print_name(),
                 dev->VolHdr.VolumeName,
                 dev->pool_name[0] ? dev->pool_name : "*unknown*",
                 dev->device->media_type);
      sendit(msg, len, sp);
   } else {
 


but I can't find where this property is set. It happens on some, but not
all, volume changes; I think it happens when the storage, or probably a
device, ends all of its running jobs.

Can any Bacula guru or developer hear us?
   


I forgot:
my Bacula version is 5.0.2.
This issue should be linked to bug 1541:
http://bugs.bacula.org/view.php?id=1541

Hi,

this is my workaround:

In file src/stored/vol_mgr.c, in function bool free_volume(DEVICE *dev),
close to line 625, replace:

if (!dev->num_reserved()) {

with:

if (!dev->num_reserved() && !dev->num_writers) {

This works for me, but it should be tested.
But the most important is that in ver. 5.0.3 the problems was 

Re: [Bacula-users] Job is waiting on Storage

2010-08-31 Thread Marco Lertora
Hi!

I've the same problem! Has anyone found a solution?

I have 3 concurrent jobs, which back up from different FDs to the same
device on the SD.
All jobs use the same pool, and the pool uses Maximum Volume Bytes as the
volume-splitting policy, as suggested in the docs.
All jobs have the same priority.

Everything starts well, but after some volume changes (because they reach
the max volume size) the storage loses the pool information of the
mounted volume.
So, jobs started after that wait on the SD for a mounted volume with the
same pool as the one wanted by the job.

Regards
Marco Lertora


On 04/06/2010 7.57, Lukas Kolbe wrote:
 Hi!

 I have the following pools:

 *list pools
 ++-+-+-+--+-+
 | poolid | name| numvols | maxvols | pooltype | labelformat |
 ++-+-+-+--+-+
 |  1 | Default |   0 |   0 | Backup   | *   |
 |  2 | lib1|   1 |   0 | Backup   | *   |
 |  3 | dp  | 348 | 555 | Backup   | Vol |
 |  4 | Scratch |   0 |   0 | Backup   | *   |
 ++-+-+-+--+-+

 dp is the diskpool, containing 32GiB-Volumes. It is configured as
 follows:
 Pool {
  Name= dp
  Pool Type   = Backup
  Recycle = yes
  Recycle Pool= dp
  Recycle Oldest Volume   = yes
  AutoPrune   = yes
  Volume Retention= 365 Days
  Storage = dp1
  Next Pool   = lib1
  LabelFormat = Vol
  Maximum Volume Bytes= 32G
  Maximum Volumes = 555 # 17.76TB
 }

 Now the problem is that many of the scheduled jobs are often ... is
 waiting on Storage dp1, and status sd=dp1 says:

 Jobs waiting to reserve a drive:
 3602 JobId=290 device "dp1" (/var/bacula/diskpool/fs1) is busy (already
 reading/writing).
 3608 JobId=309 wants Pool="dp" but have Pool="" nreserve=0 on drive "dp1"
 (/var/bacula/diskpool/fs1).
 3608 JobId=308 wants Pool="dp" but have Pool="" nreserve=0 on drive "dp1"
 (/var/bacula/diskpool/fs1).
 3608 JobId=310 wants Pool="dp" but have Pool="" nreserve=0 on drive "dp1"
 (/var/bacula/diskpool/fs1).
 3608 JobId=249 wants Pool="dp" but have Pool="" nreserve=0 on drive "dp1"
 (/var/bacula/diskpool/fs1).
 3608 JobId=311 wants Pool="dp" but have Pool="" nreserve=0 on drive "dp1"
 (/var/bacula/diskpool/fs1).
 3608 JobId=267 wants Pool="dp" but have Pool="" nreserve=0 on drive "dp1"
 (/var/bacula/diskpool/fs1).
 3608 JobId=269 wants Pool="dp" but have Pool="" nreserve=0 on drive "dp1"
 (/var/bacula/diskpool/fs1).
 3608 JobId=312 wants Pool="dp" but have Pool="" nreserve=0 on drive "dp1"
 (/var/bacula/diskpool/fs1).
 Device "dp1" (/var/bacula/diskpool/fs1) is mounted with:
    Volume:      Vol0349
    Pool:        *unknown*
    Media type:  File
    Total Bytes=11,726,668,867 Blocks=181,775 Bytes/block=64,512
    Positioned at File=2 Block=3,136,734,274

 So, it seems to be unhappy that the SD has a Volume from an *unknown*
 pool, whereas the jobs require a volume from the "dp" pool. "list media
 pool=dp" verifies that Vol0349 actually belongs to the dp pool. What's
 interesting is that when I manually start all my backup jobs while no
 other job is running, all of them get going. Only the scheduled jobs
 suffer from this.

 Kind regards,



--
This SF.net Dev2Dev email is sponsored by:

Show off your parallel programming skills.
Enter the Intel(R) Threading Challenge 2010.
http://p.sf.net/sfu/intel-thread-sfd
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Job is waiting on Storage

2010-08-31 Thread Steve Ellis
  On 8/31/2010 5:44 AM, Marco Lertora wrote:
Hi!

 I've the same problem! anyone found a solution?

 I have 3 concurrent jobs, which backup from different fd to the same
 device on sd.
 All jobs use the same pool and the pool use Maximum Volume Bytes as
 volume splitting policy, as suggested in docs.
 All job has the same priority.

 Everything starts good, but after some volumes changes (becouse they
 reach the max volume size) the storage lost the pool information of the
 mounted volume
 So, the jobs started after that, wait on sd for a mounted volume with
 the same pool as the one wanted by the job.

 Regards
 Marco Lertora


I have seen something very much like this issue, except with tape 
drives.  I was trying to document it more fully before sending it in.

It seems that for me, after a tape change during a backup, the SD doesn't
discover the pool of the mounted tape until after all currently running jobs
complete, so no new jobs can start. Once all running jobs finish, the
currently mounted volume's pool is discovered by the SD, and any jobs stuck
because the pool wasn't known can then start. I didn't know a similar (or the
same) issue affected file volumes; it is relatively rare that I hit the tape
version of this problem, since not very many of my backups span tapes.

If this is easily reproducible with tape volumes, someone should file a 
bug report.

-se





[Bacula-users] Job is waiting on Storage

2010-08-31 Thread Bill Arlofski
On 08/31/10 08:44, Marco Lertora wrote:
   Hi!
 
 I've the same problem! anyone found a solution?
 
 I have 3 concurrent jobs, which backup from different fd to the same 
 device on sd.
 All jobs use the same pool and the pool use Maximum Volume Bytes as 
 volume splitting policy, as suggested in docs.
 All job has the same priority.
 
 Everything starts good, but after some volumes changes (becouse they 
 reach the max volume size) the storage lost the pool information of the 
 mounted volume
 So, the jobs started after that, wait on sd for a mounted volume with 
 the same pool as the one wanted by the job.
 
 Regards
 Marco Lertora


Sorry for a me too post... But:


I have been noticing the same thing here.  I just have not been able to
monitor it and accurately document it.

Basically it appears to be exactly what you have stated above. I am also using
only disk storage with my file tapes configured to be a maximum of 10GB each.

I have seen a "status dir" show me "job xxx waiting on storage" and have
noted that the job(s) waiting are of the same priority as the job(s) currently
running and are configured to use the same device and pool.

I have also noticed exactly what Lukas Kolbe described here where the job
wants one pool, but thinks it has a null named pool:

 3608 JobId=308 wants Pool="dp" but have Pool=""

and here where the device is mounted, the volume name is known but the pool is
unknown:

Device "dp1" (/var/bacula/diskpool/fs1) is mounted with:
  Volume:  Vol0349
  Pool:*unknown*
  Media type:  File
  Total Bytes=11,726,668,867 Blocks=181,775 Bytes/block=64,512
  Positioned at File=2 Block=3,136,734,274



So by all indications the job(s) that are waiting on storage should be
running but are instead needlessly waiting.


Initially, my thought was that I had the Pool in the jobs defined like:

Pool = Default

and the Default pool had no tapes in it - Bacula requires a Pool to be defined
in a Job definition - Which is why I used Default, but I was overriding the
Pool in the Schedule like so:

Schedule {
  Name = WeeklyToOffsiteDisk
Run = Full  pool=Offsite-eSATA  sun at 20:30
Run = Incremental   pool=Offsite-eSATA-Inc  mon-fri at 20:30
Run = Differential  pool=Offsite-eSATA-Diff sat at 20:30
}


I have recently reconfigured my system to use one pool Offsite-eSATA and
have set:

Pool = Offsite-eSATA

directly in all of the the Job definitions instead of using the Schedule
override, but I am still seeing what you both have described.


--
Bill Arlofski
Reverse Polarity, LLC



Re: [Bacula-users] Job is waiting on Storage

2010-08-31 Thread Marco Lertora
On 31/08/2010 17.27, Bill Arlofski wrote:
 On 08/31/10 08:44, Marco Lertora wrote:
Hi!

 I've the same problem! anyone found a solution?

 I have 3 concurrent jobs, which backup from different fd to the same
 device on sd.
 All jobs use the same pool and the pool use Maximum Volume Bytes as
 volume splitting policy, as suggested in docs.
 All job has the same priority.

 Everything starts good, but after some volumes changes (becouse they
 reach the max volume size) the storage lost the pool information of the
 mounted volume
 So, the jobs started after that, wait on sd for a mounted volume with
 the same pool as the one wanted by the job.

 Regards
 Marco Lertora

 Sorry for a me too post... But:


 I have been noticing the same thing here.  I just have not been able to
 monitor it and accurately document it.

 Basically it appears to be exactly what you have stated above. I am also using
 only disk storage with my file tapes configured to be a maximum of 10GB 
 each.

 I have seen a "status dir" show me "job xxx waiting on storage" and have
 noted that the job(s) waiting are of the same priority as the job(s) currently
 running and are configured to use the same device and pool.

 I have also noticed exactly what Lukas Kolbe described here where the job
 wants one pool, but thinks it has a null named pool:

 3608 JobId=308 wants Pool="dp" but have Pool=""
 and here where the device is mounted, the volume name is known but the pool is
 unknown:

 Device "dp1" (/var/bacula/diskpool/fs1) is mounted with:
   Volume:  Vol0349
   Pool:*unknown*
   Media type:  File
   Total Bytes=11,726,668,867 Blocks=181,775 Bytes/block=64,512
   Positioned at File=2 Block=3,136,734,274


 So by all indications the job(s) that are waiting on storage should be
 running but are instead needlessly waiting.


 Initially, my thought was that I had the Pool in the jobs defined like:

 Pool = Default

 and the Default pool had no tapes in it - Bacula requires a Pool to be defined
 in a Job definition - Which is why I used Default, but I was overriding the
 Pool in the Schedule like so:

 Schedule {
Name = WeeklyToOffsiteDisk
  Run = Full  pool=Offsite-eSATA  sun at 20:30
  Run = Incremental   pool=Offsite-eSATA-Inc  mon-fri at 20:30
  Run = Differential  pool=Offsite-eSATA-Diff sat at 20:30
 }


 I have recently reconfigured my system to use one pool Offsite-eSATA and
 have set:

 Pool = Offsite-eSATA

 directly in all of the the Job definitions instead of using the Schedule
 override, but I am still seeing what you both have described.

Hi,
I've tried to increase the SD log verbosity with the setdebug option, but no
luck. I've tried to look at the source, but it is quite complex, so no luck.

this is the code where the match fails:

static int is_pool_ok(DCR *dcr)
{
   DEVICE *dev = dcr->dev;
   JCR *jcr = dcr->jcr;

   /* Now check if we want the same Pool and pool type */
   if (strcmp(dev->pool_name, dcr->pool_name) == 0 &&
       strcmp(dev->pool_type, dcr->pool_type) == 0) {
      /* OK, compatible device */
      Dmsg1(dbglvl, "OK dev: %s num_writers=0, reserved, pool matches\n",
            dev->print_name());
      return 1;
   } else {
      /* Drive Pool not suitable for us */
      Mmsg(jcr->errmsg, _(
"3608 JobId=%u wants Pool=\"%s\" but have Pool=\"%s\" nreserve=%d on drive %s.\n"),
           (uint32_t)jcr->JobId, dcr->pool_name, dev->pool_name,
           dev->num_reserved(), dev->print_name());
      queue_reserve_message(jcr);
      Dmsg2(dbglvl, "failed: busy num_writers=0, reserved, pool=%s wanted=%s\n",
            dev->pool_name, dcr->pool_name);
   }
   return 0;
}

I suppose dev->pool_name was empty. This is confirmed by the code where the
status message is built:

   if (dev->is_labeled()) {
      len = Mmsg(msg, _("Device %s is mounted with:\n"
                        "    Volume:      %s\n"
                        "    Pool:        %s\n"
                        "    Media type:  %s\n"),
                 dev->print_name(),
                 dev->VolHdr.VolumeName,
                 dev->pool_name[0] ? dev->pool_name : "*unknown*",
                 dev->device->media_type);
      sendit(msg, len, sp);
   } else {

but I can't find where this property is set. It happens on some, but not
all, volume changes; I think it happens when the storage, or probably a
device, ends all of its running jobs.

Can any Bacula guru or developer hear us?

Marco


 --
 Bill Arlofski
 Reverse Polarity, LLC




Re: [Bacula-users] Job is waiting on Storage

2010-08-31 Thread Marco Lertora

Marco Lertora wrote:

On 31/08/2010 17.27, Bill Arlofski wrote:

On 08/31/10 08:44, Marco Lertora wrote:

Hi!

I've the same problem! anyone found a solution?

I have 3 concurrent jobs, which backup from different fd to the same
device on sd.
All jobs use the same pool and the pool use Maximum Volume Bytes as
volume splitting policy, as suggested in docs.
All job has the same priority.

Everything starts good, but after some volumes changes (becouse they
reach the max volume size) the storage lost the pool information of the
mounted volume
So, the jobs started after that, wait on sd for a mounted volume with
the same pool as the one wanted by the job.

Regards
Marco Lertora
  

Sorry for a me too post... But:


I have been noticing the same thing here.  I just have not been able to
monitor it and accurately document it.

Basically it appears to be exactly what you have stated above. I am also using
only disk storage with my file tapes configured to be a maximum of 10GB each.

I have seen a "status dir" show me "job xxx waiting on storage" and have
noted that the job(s) waiting are of the same priority as the job(s) currently
running and are configured to use the same device and pool.

I have also noticed exactly what Lukas Kolbe described here where the job
wants one pool, but thinks it has a null named pool:



3608 JobId=308 wants Pool="dp" but have Pool=""
  

and here where the device is mounted, the volume name is known but the pool is
unknown:



Device "dp1" (/var/bacula/diskpool/fs1) is mounted with:
  Volume:  Vol0349
  Pool:*unknown*
  Media type:  File
  Total Bytes=11,726,668,867 Blocks=181,775 Bytes/block=64,512
  Positioned at File=2 Block=3,136,734,274
  

So by all indications the job(s) that are waiting on storage should be
running but are instead needlessly waiting.


Initially, my thought was that I had the Pool in the jobs defined like:

Pool = Default

and the Default pool had no tapes in it - Bacula requires a Pool to be defined
in a Job definition - Which is why I used Default, but I was overriding the
Pool in the Schedule like so:

Schedule {
  Name = WeeklyToOffsiteDisk
  Run = Full          pool=Offsite-eSATA       sun at 20:30
  Run = Incremental   pool=Offsite-eSATA-Inc   mon-fri at 20:30
  Run = Differential  pool=Offsite-eSATA-Diff  sat at 20:30
}


I have recently reconfigured my system to use one pool Offsite-eSATA and
have set:

Pool = Offsite-eSATA

directly in all of the Job definitions instead of using the Schedule
override, but I am still seeing what you both have described.
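
For reference, the reworked Job definitions now look roughly like this (the
client, fileset, and storage names here are placeholders, not my real config):

Job {
  Name = "host1-offsite"          # placeholder name
  Client = host1-fd               # placeholder client
  FileSet = "Full Set"            # placeholder fileset
  Schedule = WeeklyToOffsiteDisk
  Storage = Offsite-eSATA-sd      # placeholder storage resource
  Pool = Offsite-eSATA            # set directly; no pool override in the Schedule
  Messages = Standard
}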



Hi,
I've tried to increase the SD debug level with the setdebug option but, no luck.
I've tried to look in the source, but it is quite complex so, no luck.

This is the code where the match fails:

static int is_pool_ok(DCR *dcr)
{
   DEVICE *dev = dcr->dev;
   JCR *jcr = dcr->jcr;

   /* Now check if we want the same Pool and pool type */
   if (strcmp(dev->pool_name, dcr->pool_name) == 0 &&
       strcmp(dev->pool_type, dcr->pool_type) == 0) {
      /* OK, compatible device */
      Dmsg1(dbglvl, "OK dev: %s num_writers=0, reserved, pool matches\n",
            dev->print_name());
      return 1;
   } else {
      /* Drive Pool not suitable for us */
      Mmsg(jcr->errmsg, _(
           "3608 JobId=%u wants Pool=\"%s\" but have Pool=\"%s\" nreserve=%d on drive %s.\n"),
           (uint32_t)jcr->JobId, dcr->pool_name, dev->pool_name,
           dev->num_reserved(), dev->print_name());
      queue_reserve_message(jcr);
      Dmsg2(dbglvl, "failed: busy num_writers=0, reserved, pool=%s wanted=%s\n",
            dev->pool_name, dcr->pool_name);
   }
   return 0;
}



I suppose dev->pool_name was empty. This is confirmed by the code where the
status message is built:

 if (dev->is_labeled()) {
    len = Mmsg(msg, _("Device %s is mounted with:\n"
                      "    Volume:      %s\n"
                      "    Pool:        %s\n"
                      "    Media type:  %s\n"),
               dev->print_name(),
               dev->VolHdr.VolumeName,
               dev->pool_name[0] ? dev->pool_name : "*unknown*",
               dev->device->media_type);
    sendit(msg, len, sp);
 } else {



but I can't find where this property is set.
It happens on some volume changes but not all, and I think when the storage
(or probably a device) ends all running jobs.

Can any Bacula guru or developer hear us?


I forgot:
my Bacula version is 5.0.2.
This issue should be linked to bug 1541:
http://bugs.bacula.org/view.php?id=1541


Marco

  

--
Bill Arlofski
Reverse Polarity, LLC

--
This SF.net Dev2Dev email is sponsored by:

Show off your parallel programming skills.
Enter the Intel(R) Threading Challenge 2010.
http://p.sf.net/sfu/intel-thread-sfd
___
Bacula-users mailing list

[Bacula-users] Job stalling waiting for Storage, but jobs are on two different autoloaders

2010-06-12 Thread bhjfax
Hello 

 

JobId Level   Name   Status

==

  3673 Differe  trac_full.2010-06-12_07.59.48_11 is running

  3675 Differe  nitro_full.2010-06-12_08.06.47_13 is running

  3676 Increme  trac_full.2010-06-12_08.16.55_15 is waiting on Storage
PV132T

 

Above is the output from the director on my system.

 

I have two different autoloaders - job 3673 is running on a DELL TL4000 and
when I run another job for the same client using the other autoloader
(PV132T), the job stalls waiting for the first job to finish.

 

Anyone got any clues what I may be doing wrong which is stopping the two
jobs running concurrently?

 

The reasons I need this - say for example I run out of tapes on an
Incremental Backup - I don't want the Differential to be blocked on the
other autoloader waiting for the incremental to finish.

 

Thanks

 

Brian.

 



My config files:

 

In bacula-sd.conf:

Storage {
  Maximum Concurrent Jobs = 100
  etc etc
}

 

In bacula-dir.conf:

Director {
  Maximum Concurrent Jobs = 20
  etc etc
}



Client {
  Name = trac
  Maximum Concurrent Jobs = 3
  etc etc
}

Storage {
  Name = PV132T
  Maximum Concurrent Jobs = 20
}
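
One more thing I have not ruled out (a sketch of a possible cause, not a
confirmed diagnosis): jobs 3673 and 3676 are both runs of the same Job
resource (trac_full), and the Job-level Maximum Concurrent Jobs defaults
to 1, which would serialize them regardless of the Storage and Client
limits above:

Job {
  Name = trac_full
  Client = trac
  # The Job-level limit defaults to 1, so a Differential and an
  # Incremental of the same Job serialize even when they target
  # different autoloaders:
  Maximum Concurrent Jobs = 2
  etc etc
}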

 


This e-mail is confidential and may be privileged. It may be read, copied and 
used only by the intended recipient. No communication sent by e-mail to or from 
Eutechnyx is intended to give rise to contractual or other legal liability, 
apart from liability which cannot be excluded under English law. 

This email has been scanned for all known viruses by the Email Protection 
Agency. http://www.epagency.net


www.eutechnyx.com Eutechnyx Limited. Registered in England No: 2172322

--
ThinkGeek and WIRED's GeekDad team up for the Ultimate 
GeekDad Father's Day Giveaway. ONE MASSIVE PRIZE to the 
lucky parental unit.  See the prize list and enter to win: 
http://p.sf.net/sfu/thinkgeek-promo___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Job stalling waiting for Storage, but jobs are on two different autoloaders

2010-06-12 Thread bhjfax
As a slight follow up. I have got this to work by running two different
storage directors - one for each autoloader (on separate ports).

 

Let me know if anyone knows an easier solution.

 

-Brian

 

From: bhjfax [mailto:bhj...@eutechnyx.com] 
Sent: 12 June 2010 08:32
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Job stalling waiting for Storage, but jobs are on
two different autoloaders

 



[Bacula-users] Job is waiting on Storage

2010-06-04 Thread Lukas Kolbe
Hi!

I have the following pools:

*list pools
++-+-+-+--+-+
| poolid | name| numvols | maxvols | pooltype | labelformat |
++-+-+-+--+-+
|  1 | Default |   0 |   0 | Backup   | *   |
|  2 | lib1|   1 |   0 | Backup   | *   |
|  3 | dp  | 348 | 555 | Backup   | Vol |
|  4 | Scratch |   0 |   0 | Backup   | *   |
++-+-+-+--+-+

dp is the diskpool, containing 32GiB-Volumes. It is configured as
follows:
Pool {
Name= dp
Pool Type   = Backup
Recycle = yes
Recycle Pool= dp
Recycle Oldest Volume   = yes
AutoPrune   = yes
Volume Retention= 365 Days
Storage = dp1
Next Pool   = lib1
LabelFormat = Vol
Maximum Volume Bytes= 32G
Maximum Volumes = 555 # 17.76TB
}

Now the problem is that many of the scheduled jobs are often "... is
waiting on Storage dp1", and "status sd=dp1" says:

Jobs waiting to reserve a drive:
   3602 JobId=290 device dp1 (/var/bacula/diskpool/fs1) is busy (already 
reading/writing).
   3608 JobId=309 wants Pool=dp but have Pool= nreserve=0 on drive dp1 
(/var/bacula/diskpool/fs1).
   3608 JobId=308 wants Pool=dp but have Pool= nreserve=0 on drive dp1 
(/var/bacula/diskpool/fs1).
   3608 JobId=310 wants Pool=dp but have Pool= nreserve=0 on drive dp1 
(/var/bacula/diskpool/fs1).
   3608 JobId=249 wants Pool=dp but have Pool= nreserve=0 on drive dp1 
(/var/bacula/diskpool/fs1).
   3608 JobId=311 wants Pool=dp but have Pool= nreserve=0 on drive dp1 
(/var/bacula/diskpool/fs1).
   3608 JobId=267 wants Pool=dp but have Pool= nreserve=0 on drive dp1 
(/var/bacula/diskpool/fs1).
   3608 JobId=269 wants Pool=dp but have Pool= nreserve=0 on drive dp1 
(/var/bacula/diskpool/fs1).
   3608 JobId=312 wants Pool=dp but have Pool= nreserve=0 on drive dp1 
(/var/bacula/diskpool/fs1).
Device dp1 (/var/bacula/diskpool/fs1) is mounted with:
Volume:  Vol0349
Pool:*unknown*
Media type:  File
Total Bytes=11,726,668,867 Blocks=181,775 Bytes/block=64,512
Positioned at File=2 Block=3,136,734,274

So, Bacula seems to be unhappy that the SD has a volume from an *unknown*
pool, whereas the jobs require a volume from the dp pool. "list media
pool=dp" verifies that Vol0349 actually belongs to the dp pool. What's
interesting is that when I manually start all my backup jobs while no
other job is running, all of them get going. Only the scheduled jobs
suffer from this.
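
A workaround sketch I am considering (it does not fix whatever clears the
pool name in the SD): check and re-assert the volume's pool in the catalog
from bconsole, then remount the device so the SD re-reads the label:

*llist volume=Vol0349            # shows the PoolId the catalog holds
*update volume=Vol0349 pool=dp   # re-assert the pool assignment
*unmount storage=dp1
*mount storage=dp1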

Kind regards,

-- 
Lukas



