Re: [Bacula-users] Virtual Full - Set NextPool for the virtual job only

2012-02-09 Thread Jan Lentfer
On 06.02.2012 12:51, Martin Simmons wrote:
>
> Not quite what you are asking, but you could set the NextPool as necessary for
> the Virtual Fulls and then use this hack for the Copy jobs:
>
> http://thread.gmane.org/gmane.comp.sysutils.backup.bacula.devel/14084

Thanks for the input, everyone. I followed that approach (almost). I 
still have NextPool set for the copy-to-tape devices, and added "dummy" 
pools for the Virtual Fulls plus a CopyToArchive job, i.e. a Copy job 
driven by a SQL statement.


Jan



Re: [Bacula-users] Virtual Full - Set NextPool for the virtual job only

2012-02-05 Thread Jan Lentfer
On 05.02.2012 11:26, Jan Lentfer wrote:
> Currently I have a setup where I use 3 file pools for Incr, Diff and
> Full. I use copy jobs to copy the incr jobs to one tape drive (DLT) and
> the full and diff jobs to another tape drive (LTO-1).
>
> Now I thought about doing Virtual Full Backups from the file pools to a
> 3rd drive every 2nd month or so with a retention of 5 years to establish
> a long term archive.
>
> Of course the NextPool is already configured in the File Pool
> Definitions for the Copy jobs.
>
> Is there a possibility to run Virtual Fulls from the File Pools to
> another Pool? E.g. is it possible to set the NextPool in the Virtual
> Full's job definition or on the schedule definition?

I think I have solved/worked around this using a "Dummy" Pool like this:

Pool {
   Name = "ArchivePool"
   Pool Type = Backup
   Storage = "DLT-Tape II"
   Recycle = yes               # Bacula can automatically recycle Volumes
   AutoPrune = yes             # Prune expired volumes
   Recycle Oldest Volume = yes # Use oldest Volume
   Volume Retention = 5 Years
   Label Format = "Archive-"
#  Maximum Volumes = 15
}
Pool {
   Name = "ArchiveDummyPool"
   Pool Type = Backup
   Storage = "FileStorageDev7"
   Recycle = yes               # Bacula can automatically recycle Volumes
   AutoPrune = yes             # Prune expired volumes
   Recycle Oldest Volume = yes # Use oldest Volume
   Volume Retention = 1 week
   Next Pool = ArchivePool
   Label Format = "Dummy-"
#  Maximum Volumes = 15
}

The Dummy pool has a File Storage device that points to the same 
directory the file volumes are stored in.

And then I use this "archive" job:
Job {
  Name = "Archive_epia"
  Enabled = Yes
  JobDefs = "DefaultJob"
  Level = VirtualFull
  Pool = ArchiveDummyPool
  Client = "epia-fd"
  FileSet = "epia-fs"
  Write Bootstrap = "/home/bacula/file-dev/epia.bsr"
}

Now I just have to figure out how to prevent restore from taking the 
ArchivePool into account for "standard" restores (because that would be 
slow).

Jan





[Bacula-users] Virtual Full - Set NextPool for the virtual job only

2012-02-05 Thread Jan Lentfer
Currently I have a setup where I use 3 file pools for Incr, Diff and 
Full. I use copy jobs to copy the incr jobs to one tape drive (DLT) and 
the full and diff jobs to another tape drive (LTO-1).

Now I thought about doing Virtual Full Backups from the file pools to a 
3rd drive every 2nd month or so with a retention of 5 years to establish 
a long term archive.

Of course the NextPool is already configured in the File Pool 
Definitions for the Copy jobs.

Is there a possibility to run Virtual Fulls from the File Pools to 
another Pool? E.g. is it possible to set the NextPool in the Virtual 
Full's job definition or on the schedule definition?

Many thanks for any hints and pointers.


Jan



Re: [Bacula-users] Roadwarriors break volumes

2011-03-22 Thread Jan Lentfer
On 18.03.2011 17:22, Phil Stracchino wrote:
> On 03/18/11 11:43, Jacek Bilski wrote:
>> Is there any way to prevent this? Or at least to mark this volume as
>> recycle? How do you deal with such situations?
> The best strategy for mobile users, really, is to rsync to a local disk
> pool and then back up the local disk pool.
>
I think you could also spool those clients (data and attributes), so if 
the client goes offline only the spooled data gets discarded; nothing 
goes onto the volumes or into the database until the job has completely 
finished.
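
A minimal sketch of the two relevant Job directives (untested here, and
the job name is invented):

Job {
  Name = "roadwarrior-backup"   # hypothetical
  JobDefs = "DefaultJob"
  Spool Data = yes              # the SD buffers file data in its spool area first
  Spool Attributes = yes        # catalog records are held back until despooling
}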

Jan

-- 
professional: http://www.oscar-consult.de
private: http://neslonek.homeunix.org/drupal/




Re: [Bacula-users] Copy/Move Jobs to another volume?

2011-03-03 Thread Jan Lentfer

On 03.03.2011 17:43, Dan Schaefer wrote:

> I had Bacula misconfigured and now I have multiple jobs (Full Backups)
> written to a single File Volume. Can someone advise on how I can move a
> certain job into another volume so that I can purge and delete the
> oversized volume? I hope this is possible. I suppose what I could do is
> restore that job and then back it up again into a different pool? Thanks
> in advance.

You should look into 
http://www.bacula.org/manuals/en/concepts/concepts/Migration_Copy.html.

Declare the pool you want to move the jobs to as "Next Pool" in the 
source pool's definition and migrate the jobs over.
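
Roughly like this (an untested sketch; the resource names are
placeholders, not from your config):

Job {
  Name = "MigrateToNewVolume"
  Type = Migrate
  JobDefs = "DefaultJob"     # Client/FileSet/Messages are required syntactically but ignored
  Pool = OldFilePool         # source pool, whose "Next Pool" names the target
  Selection Type = Job
  Selection Pattern = ".*"   # regex against Job names; this matches every job in the pool
}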


hth

Jan

--
professional: http://www.oscar-consult.de
private: http://neslonek.homeunix.org/drupal/



Re: [Bacula-users] Extremely long restore times.

2011-02-24 Thread Jan Lentfer
On Wed, 23 Feb 2011 12:08:43 +0100, "Erik P. Olsen" wrote:
> Since I moved to version 5.0.3 I have experienced extremely long
> restore times. The time is mostly spent in the building of directory
> trees. The time used to be measured in minutes, now it is hours.
>
> My system is Fedora 14, bacula 5.0.3, mysql 5.1.55, backup media is a
> 1 TB external HD with an e-sata interface, only used with bacula.
>
> Backup is swift, so I don't understand why building the directory
> trees should take so long. When the directory trees have been built,
> the restore is as swift as the backup.
>
> I suspect problems with the mysql database but have no idea what I
> could/should do to correct the problem.
>
> Thanks in advance for any advice that might lead me to a solution.

From which version of bacula have you upgraded? Check the updates to the
database schema; maybe some index was not created?
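
For MySQL, the index that matters most for building the restore tree is
the composite one on the File table that the Bacula docs recommend. A
quick check, assuming a missing index really is the culprit (the index
name below is arbitrary):

SHOW INDEX FROM File;
CREATE INDEX file_jpfid_idx ON File (JobId, PathId, FilenameId);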

Jan

-- 
professional: http://www.oscar-consult.de
private: http://neslonek.homeunix.org/drupal/



Re: [Bacula-users] how to backup only the latest file of a daily sql backup

2011-02-12 Thread Jan Lentfer
On 12.02.2011 09:11, J. Echter wrote:
> On 12.02.2011 08:57, Jan Lentfer wrote:
>> machine is a windows 7 x64.
>> I don't exactly get what you mean. If you dump daily to the same
>> directory and run incremental over that directory, only the dump of
>> the current day should be backed up?!?!
> the problem is, I can't do a Full Backup of this. The reason is the
> first tape would be full.
>
> I do full backups to disk, and also have fulls on the server. So I
> would need only the latest file on tape, just to be sure :)
Hm, without thinking about it too much: couldn't you dump into some 
temp directory under a generic name (not including timestamps or such), e.g.

/var/temp_dumps/sqldump.file

back up that directory, and then move the file to your usual directory 
including the timestamp, e.g.

/var/dumps/sqldump_20110212.file

using a Client Run After Job script?

You would then need to exclude /var/dumps from the usual backup.
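
As a sketch (untested, and the script path is made up; on Windows the
move itself would live in a batch file):

Job {
  Name = "sql-dump-to-tape"                            # hypothetical
  JobDefs = "DefaultJob"
  Client Run After Job = "C:/scripts/rotate_dump.bat"  # moves sqldump.file to the timestamped name
}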

Just a quick thought :)

Which DBMS is this anyway?

-- 
professional: http://www.oscar-consult.de
private: http://neslonek.homeunix.org/drupal/




Re: [Bacula-users] how to backup only the latest file of a daily sql backup

2011-02-12 Thread Jan Lentfer
Hi Jürgen,

On 12.02.2011 08:29, J. Echter wrote:
> can anybody tell me how to handle this?
>
> what i have:
>
> - a daily backup of a sql db
> - the backup files are kept 14 days on the server itself
> - i want to backup only the newest file to tape
>
> files are called, for example:
>
> sql_backup_2011_02_07_230003_9327174.bak
>
> the reason for this is, my tapes are running out of space... :)
>
> backups are 35 gb, i could save ~30 gb.
>
> machine is a windows 7 x64.
I don't exactly get what you mean. If you dump daily to the same 
directory and run incremental over that directory, only the dump of the 
current day should be backed up?!?!

Jan

-- 
professional: http://www.oscar-consult.de
private: http://neslonek.homeunix.org/drupal/




[Bacula-users] Retrieving Client Address from bacula-dir.conf

2011-02-09 Thread Jan Lentfer
Hi,

we had a discussion here lately on how to retrieve the client's address
from bacula-dir.conf for RunAfter/Before scripts. I got some awk that
works for me that I'd like to share:

BEGIN { RS = "}\n"; FS = "\n" }   # one resource block per record
/Client[[:space:]]+{/ {           # only look at Client resources
    found = 0;
    for (i = 1; i <= NF; i++) {
        if ($i ~ /Name/) {
            split($i, nameinfo, "[[:space:]]*=[[:space:]]*");
            if (nameinfo[2] == name) {        # "name" is passed in via -v
                found = 1;
            }
        }
        if ($i ~ /Address/ && found == 1) {
            split($i, addressinfo, "[[:space:]]*=[[:space:]]*");
            address = addressinfo[2];
            gsub(/^[[:space:]]+/, "", address);   # trim leading blanks
            gsub(/[[:space:]]+$/, "", address);   # trim trailing blanks
            break;
        }
    }
    if (found)
        printf("%s\n", address);
}

It can be called like this (with placeholders for the client name and
the awk script file):

awk -v name=<client-name> -f <script.awk> /path/to/bacula-dir.conf

I use this together with %c to retrieve the Client's address.

Credits for this need to go to my fellow DragonFly BSD colleague Joe
Talbott, who put this together for me, as my awk capabilities are a
little limited :)

Hope this is of use to others, too.

Jan

--- 
professional: http://www.oscar-consult.de
private: http://neslonek.homeunix.org/drupal/



Re: [Bacula-users] Job copy/migration: do it in bulk

2011-02-07 Thread Jan Lentfer
On Sun, 06 Feb 2011 23:02:13 -0500, Dan Langille wrote:
> I recently changed how I do job copy.  I used to do them all at once,
> and frequently.  Now I've moved to doing them in batches, once a day:
> all my incr, all my diff, and all the fulls.
> 
> http://www.freebsddiary.org/bacula-disk-to-tape-via-sql.php
> 
> Why?  Less tape churn.  Previously, the copy-all-from-pool approach was 
> mixing up different backup levels.  Thus, you might have an inc, diff, 
> inc, full, inc, diff... resulting in several tape changes.  By running 
> one copy job for just inc, then another copy job for diff, etc, the 
> tapes churn less.

I am doing something similar on my home setup. I have 3 tape drives (1
LTO-1 for Full, 1 DLTIIIXT for Incremental and 1 DLTIII for
Differential). I also have 1 file pool each for Full, Incr and Diff,
with the tape drives as "Next Pool". I have jobs running every day at
noon that migrate jobs onto the tapes based on time. The times are set
so that I always have the latest Full (and Differential) and the
necessary Incremental backups on file storage for fast restore.
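
One way to do that time-based selection is PoolTime; a rough sketch
(names and durations invented, not my actual config):

Pool {
  Name = "FileFullPool"
  Migration Time = 10 days      # jobs older than this become migration candidates
  Next Pool = "LTO-Full"
  # ... the usual pool directives ...
}
Job {
  Name = "MigrateFulls"
  Type = Migrate
  JobDefs = "DefaultJob"
  Pool = FileFullPool
  Selection Type = PoolTime     # select jobs older than the pool's Migration Time
}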

The next thing I want to try out is putting the file pools on DragonFly
BSD with the HAMMER filesystem. HAMMER supports data de-dup as a batch
job, and also in live mode on the development branch.

Jan

-- 
professional: http://www.oscar-consult.de
private: http://neslonek.homeunix.org/drupal/



Re: [Bacula-users] client address in RunAfter Script

2011-02-06 Thread Jan Lentfer
On 06.02.2011 12:51, Randy Katz wrote:
> Almost the same question was asked in a previous post, I believe 
> yesterday titled: Re: [Bacula-users] character substitution / client ip
> So you are not alone! Be well,

Uh, missed that one :( Shame on me. Thanks for the hint.

Jan

-- 
professional: http://www.oscar-consult.de
private: http://neslonek.homeunix.org/drupal/




Re: [Bacula-users] client address in RunAfter Script

2011-02-06 Thread Jan Lentfer
On 06.02.2011 12:22, Randy Katz wrote:
> On 2/6/2011 2:03 AM, Jan Lentfer wrote:
>> Hi,
>>
>> I need to know the client address / hostname (as configured in the
>> Client definition) in a "Run After Job" script. According to the
>> manual, there are only these placeholders:
>> [...]
> You should be able to parse the client address from the job name/desc 
> and/or grep from the .conf files, right?
>
Yeah, sure. I just thought maybe there was a more straightforward way of 
doing it :)

Thanks
Jan


-- 
professional: http://www.oscar-consult.de
private: http://neslonek.homeunix.org/drupal/




[Bacula-users] client address in RunAfter Script

2011-02-06 Thread Jan Lentfer
Hi,

I need to know the client address / hostname (as configured in the 
Client definition) in a "Run After Job" script. According to the manual, 
there are only these placeholders:

%c = Client's name
%d = Director's name
%e = Job Exit Status
%i = JobId
%j = Unique Job id
%l = Job Level
%n = Job name
%s = Since time
%t = Job type (Backup, ...)
%v = Volume name (Only on director side)
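
These get substituted on the command line, so a call could look like
this (the script path is hypothetical):

Run After Job = "/usr/local/bin/report.sh %c %i %e"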


Has anyone already done something like that?

Many thanks in advance.

Jan



Re: [Bacula-users] Incremental jobs upgraded to full backups

2011-02-03 Thread Jan Lentfer
On Thu, 3 Feb 2011 08:48:51 -0500, Win Htin wrote:
> Hi folks,
> 
> Looking at last evening's backup reports, I noticed 3 incremental jobs
> ran as full backups.
> Since those 3 jobs are not newly created client jobs I am a bit
> perplexed with the sudden change.

Maybe your full backups were pruned from the database before? Bacula
upgrades an Incremental to a Full when it cannot find a prior Full for
that job in the catalog. Or did you change the fileset of these jobs?
(A fileset change also forces an upgrade to Full.)

Jan


-- 
professional: http://www.oscar-consult.de
private: http://neslonek.homeunix.org/drupal/



Re: [Bacula-users] How would I 'nuke' my bacula instance - Start afresh so to speak.

2011-01-05 Thread Jan Lentfer
On Wed, 05 Jan 2011 10:15:10, Mister IT Guru wrote:

> Thanks for the link, most useful! But I'm going to assume that all disk 
> based volumes already have an EOF? I can understand having to mark 
> tapes, so can I assume that the disk based volumes that are already in 
> existence will just get reused?

Just rm-ing them and starting fresh should be ok.

Jan

-- 
professional: http://www.oscar-consult.de
private: http://neslonek.homeunix.org/drupal/



Re: [Bacula-users] How would I 'nuke' my bacula instance - Start afresh so to speak.

2011-01-05 Thread Jan Lentfer
On Wed, 05 Jan 2011 09:38:14, Mister IT Guru wrote:
> I've run lots of test jobs, and I have a lot of backup data that I
> don't really need, around 2TB or so! (we have a few servers!) I would
> like to know if it's possible to remove all of those jobs from the
> bacula database. Personally, I would have cut this config out and
> dropped it on a previous backup I have, but then I don't learn about
> how bacula works.
>
> My main fear is that I rsync my disk backend offsite, and I've
> currently suspended that because of all these test jobs that I'm
> running. Also, I've reset the bacula-dir and sd during backups, and
> I have a feeling that some of them are not viable.
>
> I guess what I'm asking is: is it possible to wipe the slate clean, but
> keep my working configuration from within bacula?

For each database type there is a description in the manual of how to
wipe the database clean after initial setup and testing, e.g.

http://www.bacula.org/5.0.x-manuals/en/main/main/Installing_Configuring_Post.html#SECTION00433
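
In essence it boils down to something like this (script names and
locations vary by version and packaging; stop the director first, and
rm the disk volumes as well):

cd /usr/local/share/bacula    # or wherever the catalog scripts were installed
./drop_bacula_tables
./make_bacula_tables
./grant_bacula_privileges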

hth

Jan

-- 
professional: http://www.oscar-consult.de
private: http://neslonek.homeunix.org/drupal/



[Bacula-users] Migration Pools and Volume recycling

2010-12-04 Thread Jan Lentfer
Dear all,

I am a long time bacula user (home network), but I have just subscribed 
to the lists because I have a question regarding Volume Migration I 
could not find any information for in the internet.

I have 4 pools configured in my system:

Pool {
   Name = DLT-Pool
   Pool Type = Backup
   Storage = DLT-Tape
   Recycle = yes               # Bacula can automatically recycle Volumes
   AutoPrune = yes             # Prune expired volumes
   Recycle Oldest Volume = yes # Use oldest Volume
   Volume Retention = 1 Year
   Label Format = "TAPE-"
   Maximum Volumes = 15
}

Pool {
   Name = "DLT-Pool II"
   Pool Type = Backup
   Storage = "DLT-Tape II"
   Recycle = yes               # Bacula can automatically recycle Volumes
   AutoPrune = yes             # Prune expired volumes
   Recycle Oldest Volume = yes # Use oldest Volume
   Volume Retention = 1 Year
   Label Format = "TAPEII-"
   Maximum Volumes = 15
}

Pool {
   Name = "FileFullPool"
   Pool Type = Backup
   Storage = "FileStorage"
   Recycle = yes               # Bacula can automatically recycle Volumes
   AutoPrune = yes             # Prune expired volumes
   Recycle Oldest Volume = yes # Use oldest Volume
   Volume Retention = 1 Year
   Label Format = "FileFull-"
   Maximum Volume Bytes = 5G
   Maximum Volumes = 14
   Next Pool = "DLT-Pool"
   Migration High Bytes = 52G
   Migration Low Bytes = 47G
}

Pool {
   Name = "FileIncPool"
   Pool Type = Backup
   Storage = "FileStorage"
   Recycle = yes               # Bacula can automatically recycle Volumes
   AutoPrune = yes             # Prune expired volumes
   Recycle Oldest Volume = yes # Use oldest Volume
   Volume Retention = 1 Year
   Label Format = "FileInc-"
   Maximum Volume Bytes = 2G
   Maximum Volumes = 18
   Next Pool = "DLT-Pool II"
   Migration High Bytes = 34G
   Migration Low Bytes = 30G
}

This works great so far, and running the migration jobs manually with 
"Migration High Bytes" actually migrates jobs onto the tape pools. So 
the setup is working, technically.

Now, my question: the FileFullPool, for example, is set to 14 volumes of 
5 GB each, with Migration High Bytes set to 52 GB. So once the 10th 
volume is about half full, the daily migration job should put jobs onto 
the tape pool. But what if several volumes in FileFullPool are in state 
"Error", so that the full 5 GB of those volumes is never used? Volume 
recycling will happen once the 14th volume is full; will the 1st volume 
then be recycled even if it has not yet been migrated to tape, because 
"Migration High Bytes" was never reached?

I hope I have made myself clear; if not, please don't hesitate to ask.

Many thanks in advance

Jan Lentfer

-- 
professional: http://www.oscar-consult.de

