Re: [Bacula-users] sql to add tape in pool.

2010-08-31 Thread Phil Stracchino
On 08/31/10 16:30, Prashant Ramhit wrote:
> Hi All,
> 
> I want to add 52 volumes to a pool.
> 
> Could someone please send me the SQL to add a volume with the following
> parameters:

Why would you want to use SQL to do this?

> Volume name = "week40" (this is a variable and will be auto-generated;
> for the time being, please consider week40)
> Catalogue=weekly_offsite
> Pool=weekly
> Slot=6
> InChanger=yes
> 
> And are there any more commands to type in bconsole after I've applied
> this to the MySQL DB?

You shouldn't be touching the SQL DB to do this at all.  If you're going
to go doing stuff in Bacula's catalog database behind its back, don't be
surprised if things break in unexpected ways later.  You can (and
should) perform this entire operation using bconsole.

There are two steps you need to perform.

The first, which is unfortunately time-consuming, is to label the volumes
using the 'label' command.  Tapes need to be in the drive to be labeled, so
you can't practically automate the process unless you have a robotic
tape library, in which case you should already have barcode labels on
your tapes and you can just load them into the library and issue a
'label barcodes' command.

The second step is to add your new volumes to your weekly pool using the
'add' command.  This is very simple as you can tell it the base volume
name, how many volumes you want to create, and the starting number.
Give it 'week' for base name, 1 for starting number, and 52 for how many
volumes to add, and you should be all set.
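In bconsole, the two steps might look something like this (a sketch only;
prompts and defaults vary by version, and the slot range and pool name here
are assumptions):

```
* label barcodes slots=1-52 pool=weekly
* add pool=weekly
  (at the prompts, give 52 for the number of volumes,
   "week" for the base volume name, and 1 for the starting number)
```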

Remember when labeling your tapes that if doing a bulk addition in this
manner, Bacula will default to using four digits plus the specified base
name to generate the label.  This behavior can be overridden in the Pool
specification by using the Label Format directive.
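For example, with a base name set in the Pool resource like the sketch below
(resource names are illustrative), bulk-added volumes would be labeled
week0001, week0002, and so on:

```
Pool {
  Name = weekly
  Pool Type = Backup
  # Base name only: bulk-added volumes default to the base
  # name plus four digits (week0001, week0002, ...)
  Label Format = "week"
}
```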


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
 Renaissance Man, Unix ronin, Perl hacker, Free Stater
 It's not the years, it's the mileage.

--
This SF.net Dev2Dev email is sponsored by:

Show off your parallel programming skills.
Enter the Intel(R) Threading Challenge 2010.
http://p.sf.net/sfu/intel-thread-sfd
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] sql to add tape in pool.

2010-08-31 Thread Prashant Ramhit
Hi All,

I want to add 52 volumes to a pool.

Could someone please send me the SQL to add a volume with the following
parameters:

Volume name = "week40" (this is a variable and will be auto-generated;
for the time being, please consider week40)
Catalogue=weekly_offsite
Pool=weekly
Slot=6
InChanger=yes

And are there any more commands to type in bconsole after I've applied
this to the MySQL DB?
Bacula version is 5.0.1-1ubuntu1

Many thanks and Kind regards,
Prashant



Re: [Bacula-users] Job is waiting on Storage

2010-08-31 Thread Marco Lertora

I forgot:
My Bacula version is 5.0.2.
This issue should be linked to bug 1541:
http://bugs.bacula.org/view.php?id=1541

Marco

Re: [Bacula-users] Job is waiting on Storage

2010-08-31 Thread Marco Lertora
  On 31/08/2010 17:27, Bill Arlofski wrote:
> On 08/31/10 08:44, Marco Lertora wrote:
>>Hi!
>>
>> I've the same problem! anyone found a solution?
>>
>> I have 3 concurrent jobs, which backup from different fd to the same
>> device on sd.
>> All jobs use the same pool and the pool use "Maximum Volume Bytes" as
>> volume splitting policy, as suggested in docs.
>> All job has the same priority.
>>
> >> Everything starts well, but after some volume changes (because they
> >> reach the max volume size) the storage loses the pool information of the
> >> mounted volume.
> >> So, the jobs started after that wait on the SD for a mounted volume with
> >> the same pool as the one wanted by the job.
>>
>> Regards
>> Marco Lertora
>
> Sorry for a "me too" post... But:
>
>
> I have been noticing the same thing here.  I just have not been able to
> monitor it and accurately document it.
>
> Basically it appears to be exactly what you have stated above. I am also using
> only disk storage with my "file tapes" configured to be a maximum of 10GB 
> each.
>
> I have seen a "status dir"  show me  "job xxx waiting on storage" and have
> noted that the job(s) waiting are of the same priority as the job(s) currently
> running and are configured to use the same device and pool.
>
> I have also noticed exactly what Lukas Kolbe described here where the job
> wants one pool, but thinks it has a "null named pool":
>
>> 3608 JobId=308 wants Pool="dp" but have Pool=""
> and here where the device is mounted, the volume name is known but the pool is
> unknown:
>
>> Device "dp1" (/var/bacula/diskpool/fs1) is mounted with:
>>   Volume:  Vol0349
>>   Pool:*unknown*
>>   Media type:  File
>>   Total Bytes=11,726,668,867 Blocks=181,775 Bytes/block=64,512
>>   Positioned at File=2 Block=3,136,734,274
>
>
> So by all indications the job(s) that are "waiting on storage" should be
> running but are instead needlessly waiting.
>
>
> Initially, my thought was that I had the Pool in the jobs defined like:
>
> Pool = Default
>
> and the Default pool had no tapes in it - Bacula requires a Pool to be defined
> in a Job definition - Which is why I used "Default", but I was overriding the
> Pool in the Schedule like so:
>
> Schedule {
>Name = WeeklyToOffsiteDisk
>  Run = Full  pool=Offsite-eSATA  sun at 20:30
>  Run = Incremental   pool=Offsite-eSATA-Inc  mon-fri at 20:30
>  Run = Differential  pool=Offsite-eSATA-Diff sat at 20:30
> }
>
>
> I have recently reconfigured my system to use one pool "Offsite-eSATA" and
> have set:
>
> Pool = Offsite-eSATA
>
> directly in all of the Job definitions instead of using the Schedule
> override, but I am still seeing what you both have described.

Hi,
I've tried to increase the SD log level with the setdebug option, but no luck.
I've tried to look at the source, but it's quite complex, so no luck.

This is the code where the match fails:

> static int is_pool_ok(DCR *dcr)
> {
>    DEVICE *dev = dcr->dev;
>    JCR *jcr = dcr->jcr;
>
>    /* Now check if we want the same Pool and pool type */
>    if (strcmp(dev->pool_name, dcr->pool_name) == 0 &&
>        strcmp(dev->pool_type, dcr->pool_type) == 0) {
>       /* OK, compatible device */
>       Dmsg1(dbglvl, "OK dev: %s num_writers=0, reserved, pool matches\n",
>             dev->print_name());
>       return 1;
>    } else {
>       /* Drive Pool not suitable for us */
>       Mmsg(jcr->errmsg, _("3608 JobId=%u wants Pool=\"%s\" but have "
>            "Pool=\"%s\" nreserve=%d on drive %s.\n"),
>            (uint32_t)jcr->JobId, dcr->pool_name, dev->pool_name,
>            dev->num_reserved(), dev->print_name());
>       queue_reserve_message(jcr);
>       Dmsg2(dbglvl, "failed: busy num_writers=0, reserved, pool=%s wanted=%s\n",
>             dev->pool_name, dcr->pool_name);
>    }
>    return 0;
> }

I suppose dev->pool_name was empty. This is confirmed by the code where
the status message is built:

>    if (dev->is_labeled()) {
>       len = Mmsg(msg, _("Device %s is mounted with:\n"
>                         "Volume:  %s\n"
>                         "Pool:    %s\n"
>                         "Media type:  %s\n"),
>                  dev->print_name(),
>                  dev->VolHdr.VolumeName,
>                  dev->pool_name[0] ? dev->pool_name : "*unknown*",
>                  dev->device->media_type);
>       sendit(msg, len, sp);
>    } else {

But I can't find where this property is set.
It happens on some but not all volume changes, and I think it occurs when the
storage, or probably a device, ends all running jobs.

Can any Bacula guru or developer hear us?

Marco

>
> --
> Bill Arlofski
> Reverse Polarity, LLC
>

[Bacula-users] Job is waiting on Storage

2010-08-31 Thread Bill Arlofski
On 08/31/10 08:44, Marco Lertora wrote:
>   Hi!
> 
> I've the same problem! anyone found a solution?
> 
> I have 3 concurrent jobs, which backup from different fd to the same 
> device on sd.
> All jobs use the same pool and the pool use "Maximum Volume Bytes" as 
> volume splitting policy, as suggested in docs.
> All job has the same priority.
> 
> Everything starts well, but after some volume changes (because they 
> reach the max volume size) the storage loses the pool information of the 
> mounted volume.
> So, the jobs started after that wait on the SD for a mounted volume with 
> the same pool as the one wanted by the job.
> 
> Regards
> Marco Lertora


Sorry for a "me too" post... But:


I have been noticing the same thing here.  I just have not been able to
monitor it and accurately document it.

Basically it appears to be exactly what you have stated above. I am also using
only disk storage with my "file tapes" configured to be a maximum of 10GB each.

I have seen a "status dir"  show me  "job xxx waiting on storage" and have
noted that the job(s) waiting are of the same priority as the job(s) currently
running and are configured to use the same device and pool.

I have also noticed exactly what Lukas Kolbe described here where the job
wants one pool, but thinks it has a "null named pool":

> 3608 JobId=308 wants Pool="dp" but have Pool=""

and here where the device is mounted, the volume name is known but the pool is
unknown:

>Device "dp1" (/var/bacula/diskpool/fs1) is mounted with:
>  Volume:  Vol0349
>  Pool:*unknown*
>  Media type:  File
>  Total Bytes=11,726,668,867 Blocks=181,775 Bytes/block=64,512
>  Positioned at File=2 Block=3,136,734,274



So by all indications the job(s) that are "waiting on storage" should be
running but are instead needlessly waiting.


Initially, my thought was that I had the Pool in the jobs defined like:

Pool = Default

and the Default pool had no tapes in it - Bacula requires a Pool to be defined
in a Job definition - Which is why I used "Default", but I was overriding the
Pool in the Schedule like so:

Schedule {
  Name = WeeklyToOffsiteDisk
Run = Full  pool=Offsite-eSATA  sun at 20:30
Run = Incremental   pool=Offsite-eSATA-Inc  mon-fri at 20:30
Run = Differential  pool=Offsite-eSATA-Diff sat at 20:30
}


I have recently reconfigured my system to use one pool "Offsite-eSATA" and
have set:

Pool = Offsite-eSATA

directly in all of the Job definitions instead of using the Schedule
override, but I am still seeing what you both have described.


--
Bill Arlofski
Reverse Polarity, LLC



Re: [Bacula-users] Job is waiting on Storage

2010-08-31 Thread Steve Ellis
  On 8/31/2010 5:44 AM, Marco Lertora wrote:
>Hi!
>
> I've the same problem! anyone found a solution?
>
> I have 3 concurrent jobs, which backup from different fd to the same
> device on sd.
> All jobs use the same pool and the pool use "Maximum Volume Bytes" as
> volume splitting policy, as suggested in docs.
> All job has the same priority.
>
> Everything starts well, but after some volume changes (because they
> reach the max volume size) the storage loses the pool information of the
> mounted volume.
> So, the jobs started after that wait on the SD for a mounted volume with
> the same pool as the one wanted by the job.
>
> Regards
> Marco Lertora
>
>
I have seen something very much like this issue, except with tape 
drives.  I was trying to document it more fully before sending it in.

It seems that for me, after a tape change during a backup, the SD 
doesn't discover the pool of the mounted tape until after all currently 
running jobs complete, so no new jobs can start--once all running jobs 
finish, the currently mounted volume's pool is discovered by the SD, 
then any jobs stuck because the pool wasn't known can start.  I didn't 
know a similar or the same issue affected file volumes--it is relatively 
rare that I hit the tape volume version of this problem, since not very 
many of my backups span tapes.

If this is easily reproducible with tape volumes, someone should file a 
bug report.

-se





Re: [Bacula-users] Job is waiting on Storage

2010-08-31 Thread Marco Lertora
  Hi!

I have the same problem! Has anyone found a solution?

I have 3 concurrent jobs, which backup from different fd to the same 
device on sd.
All jobs use the same pool and the pool use "Maximum Volume Bytes" as 
volume splitting policy, as suggested in docs.
All job has the same priority.

Everything starts well, but after some volume changes (because they 
reach the max volume size) the storage loses the pool information of the 
mounted volume.
So, the jobs started after that wait on the SD for a mounted volume with 
the same pool as the one wanted by the job.

Regards
Marco Lertora


On 04/06/2010 07:57, Lukas Kolbe wrote:
> Hi!
>
> I have the following pools:
>
> *list pools
> ++-+-+-+--+-+
> | poolid | name| numvols | maxvols | pooltype | labelformat |
> ++-+-+-+--+-+
> |  1 | Default |   0 |   0 | Backup   | *   |
> |  2 | lib1|   1 |   0 | Backup   | *   |
> |  3 | dp  | 348 | 555 | Backup   | Vol |
> |  4 | Scratch |   0 |   0 | Backup   | *   |
> ++-+-+-+--+-+
>
> dp is the diskpool, containing 32GiB-Volumes. It is configured as
> follows:
> Pool {
>  Name= dp
>  Pool Type   = Backup
>  Recycle = yes
>  Recycle Pool= dp
>  Recycle Oldest Volume   = yes
>  AutoPrune   = yes
>  Volume Retention= 365 Days
>  Storage = dp1
>  Next Pool   = lib1
>  LabelFormat = "Vol"
>  Maximum Volume Bytes= 32G
>  Maximum Volumes = 555 # 17.76TB
> }
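A quick sanity check on the capacity comment in that Pool resource (555
volumes of 32 GB each, taking the comment's units as decimal):

```python
# Verify "Maximum Volumes = 555  # 17.76TB" for 32 GB volumes
max_volumes = 555
volume_gb = 32
total_tb = max_volumes * volume_gb / 1000  # GB -> TB, decimal units
print(total_tb)  # 17.76
```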
>
> Now the problem is that many of the scheduled jobs are often "... is
> waiting on Storage dp1", and status sd=dp1 says:
>
> Jobs waiting to reserve a drive:
> 3602 JobId=290 device "dp1" (/var/bacula/diskpool/fs1) is busy (already 
> reading/writing).
> 3608 JobId=309 wants Pool="dp" but have Pool="" nreserve=0 on drive "dp1" 
> (/var/bacula/diskpool/fs1).
> 3608 JobId=308 wants Pool="dp" but have Pool="" nreserve=0 on drive "dp1" 
> (/var/bacula/diskpool/fs1).
> 3608 JobId=310 wants Pool="dp" but have Pool="" nreserve=0 on drive "dp1" 
> (/var/bacula/diskpool/fs1).
> 3608 JobId=249 wants Pool="dp" but have Pool="" nreserve=0 on drive "dp1" 
> (/var/bacula/diskpool/fs1).
> 3608 JobId=311 wants Pool="dp" but have Pool="" nreserve=0 on drive "dp1" 
> (/var/bacula/diskpool/fs1).
> 3608 JobId=267 wants Pool="dp" but have Pool="" nreserve=0 on drive "dp1" 
> (/var/bacula/diskpool/fs1).
> 3608 JobId=269 wants Pool="dp" but have Pool="" nreserve=0 on drive "dp1" 
> (/var/bacula/diskpool/fs1).
> 3608 JobId=312 wants Pool="dp" but have Pool="" nreserve=0 on drive "dp1" 
> (/var/bacula/diskpool/fs1).
> Device "dp1" (/var/bacula/diskpool/fs1) is mounted with:
>  Volume:  Vol0349
>  Pool:*unknown*
>  Media type:  File
>  Total Bytes=11,726,668,867 Blocks=181,775 Bytes/block=64,512
>  Positioned at File=2 Block=3,136,734,274
>
> So, it seems to be unhappy that the SD has a Volume from an *unknown*
> pool, whereas the jobs require a volume from the "dp" pool. "list media
> pool=dp" verifies that Vol0349 actually belongs to the "dp" pool. What's
> interesting is that when I manually start all my backup jobs while no
> other job is running, all of them get going. Only the scheduled jobs
> suffer from this.
>
> Kind regards,
>




Re: [Bacula-users] Error: Watchdog sending kill after 518424 seconds to thread stalled reading Storage daemon

2010-08-31 Thread Suhail Akhtar
A bit frustrating at the moment, as I don't have access to the host. We are
backing up Windows and Linux clients. The Windows hosts seem to have no
issues; very curious.

-Original Message-
From: m...@free-minds.net [mailto:m...@free-minds.net] 
Sent: 31 August 2010 11:15
To: Suhail Akhtar
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Error: Watchdog sending kill after 518424 seconds 
to thread stalled reading Storage daemon

Hmm, no, but 144h of run time sounds like the process was hanging and was
then killed...

On Tue, 31 Aug 2010 05:49:39 -0400, "Suhail Akhtar"
 wrote:
> Hi, 
> 
> I was wondering if anyone has come across the following error message. 
> 
> 'Error: Watchdog sending kill after 518424 seconds to thread stalled
> reading Storage daemon' 
> 
> I don't have access to the host generating the message as yet, but hope
> to soon so I can have a snoop around. Not quite sure if this relates to
> 'job duration' or possible priority issue. Any light shed on this error
> message will be much appreciated.  
> 
> Thanks, 
> 
> Suha


Re: [Bacula-users] Error: Watchdog sending kill after 518424 seconds to thread stalled reading Storage daemon

2010-08-31 Thread me
Hmm, no, but 144h of run time sounds like the process was hanging and was
then killed...
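For reference, the 518424 seconds in the error message do work out to about
144 hours:

```python
# Convert the watchdog timeout from the error message into hours
seconds = 518424
hours = seconds / 3600
print(round(hours, 2))  # 144.01
```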

On Tue, 31 Aug 2010 05:49:39 -0400, "Suhail Akhtar"
 wrote:
> Hi, 
> 
> I was wondering if anyone has come across the following error message. 
> 
> 'Error: Watchdog sending kill after 518424 seconds to thread stalled
> reading Storage daemon' 
> 
> I don't have access to the host generating the message as yet, but hope
> to soon so I can have a snoop around. Not quite sure if this relates to
> 'job duration' or possible priority issue. Any light shed on this error
> message will be much appreciated.  
> 
> Thanks, 
> 
> Suha



[Bacula-users] Error: Watchdog sending kill after 518424 seconds to thread stalled reading Storage daemon

2010-08-31 Thread Suhail Akhtar
Hi,

I was wondering if anyone has come across the following error message:

'Error: Watchdog sending kill after 518424 seconds to thread stalled
reading Storage daemon'

I don't have access to the host generating the message as yet, but hope
to soon so I can have a snoop around. Not quite sure if this relates to
'job duration' or a possible priority issue. Any light shed on this error
message will be much appreciated.

Thanks,

Suha
