On Thu, Apr 28, 2011 at 11:25 AM, Dennis Hoppe
wrote:
> Hello John,
>
> On 28.04.2011 16:07, John Drescher wrote:
>> 2011/4/28 Dennis Hoppe :
>>> this is my first attempt with bacula and I need some advice about my
>>> configs. I am running a file based bac
> No, there are just a "normal" number of files from a shared folder of
> my fileserver with spreadsheets, documents, images, PDFs, just
> information of final users.
>
The performance problem is probably filesystem performance. A single
hard drive will only hit 100 MB/s if you are backing up files
On Thu, Apr 28, 2011 at 11:08 AM, Jason Voorhees wrote:
> Hi:
>
> I'm running Bacula 5.0.3 on RHEL 6.0 x86_64 with an IBM TS3100 tape
> library with hardware compression enabled and software (Bacula)
> compression disabled, using LTO-5 tapes. I have a Gigabit Ethernet
> network and iperf tests repo
2011/4/28 Dennis Hoppe :
> Hello,
>
> this is my first attempt with bacula and I need some advice about my
> configs. I am running a file based backup with an extra device for each
> client.
>
> I thought this would support parallel jobs, but if I start two backup
> jobs like "run job=bserver" and
> Hello,
>
> have anyone an idea?
>
2.2.6 came out in November 2007. It is unlikely that many are still
using a version of Bacula that old.
From memory (since I have not used that version since sometime in
2008), 2.2.6 does not support 64-bit Windows clients and may have
problems with shadow c
On Tue, Apr 26, 2011 at 1:24 PM, Ralf Gross wrote:
> Krysztofiak wrote:
>> Hello,
>> is there a possibility to run Verify Job for a certain Backup Job (not the
>> last one) with Level=VolumeToCatalog?
>> For example I run two Backup Jobs with ids 1 and 2 and then I want to Verify
>> the first
On Tue, Apr 26, 2011 at 12:46 PM, John Drescher wrote:
>> is there a possibility to run Verify Job for a certain Backup Job (not the
>> last one) with Level=VolumeToCatalog?
>> For example I run two Backup Jobs with ids 1 and 2 and then I want to Verify
>> the first one.
> is there a possibility to run Verify Job for a certain Backup Job (not the
> last one) with Level=VolumeToCatalog?
> For example I run two Backup Jobs with ids 1 and 2 and then I want to Verify
> the first one.
>
I have seen this myself and I believe this is a bug.
John
> Be *very* aware of the vagaries of email clients, the line above from
> John appears to contain the phrase "always = client", when his original
> post contained "always > = client".
>
> This is because somewhere in the chain something wrapped the
> line at "> =" and
> so the "> " was seen as bei
> This rule is not entirely accurate.
> I'm backing up a 2.4.4 (Debian Lenny) client on a 5.0.2 (Debian Squeeze)
> Director (and Storage)
That does not violate the rule I gave.
> Does the Bacula team plan to provide such compatibility matrix ?
>
This rule has been given to us by the developers cou
On Wed, Apr 20, 2011 at 5:52 PM, Jérôme Blion wrote:
> On 20/04/2011 16:01, Jeremy Maes wrote:
>> On 20/04/2011 15:29, Ben Schmidt wrote:
>>> I'm running Bacula 2.2.6 on an old server that's just working. One of
>>> its clients was replaced by a new server with Debian 6.0 today and I
>>> ca
On Wed, Apr 20, 2011 at 10:31 AM, Jason Voorhees wrote:
> On Wed, Apr 20, 2011 at 9:22 AM, Paul Mather wrote:
>> On Apr 20, 2011, at 10:17 AM, Jason Voorhees wrote:
>>
>>> Hi people:
>>>
>>> I'm running Bacula 5.0.3 with an IBM TS3100 tape library using LTO-5
>>> tapes of 1.5 TB capacity each.
> My name is Dan, and it's been 16 days since I last touched my bacula-dir.conf
> file.
>
Probably the same for me, although in my case it would have been an
include off of the bacula-dir.conf.
John
> I am new to bacula and I have run into a problem running any job in
> bconsole.
>
> I am getting the following error.
>
> 15-Apr 11:14 KITSrv01-sd JobId 9: Fatal error: Device reservation failed for
> JobId=9:
> 15-Apr 11:14 KITSrv01-dir JobId 9: Fatal error:
> Storage daemon didn't accept D
> So one of my machines has a few zillion tiny little files.
>
> My full backup took 44 hours. I can deal with that if I have to.
> My incremental backup has been running for 10 hours now.
> Files=71,560 Bytes=273,397,510 Bytes/sec=7,666 Errors=0
> Files Examined=14,675,372
>
> I know that b
2011/4/19 Carlo Filippetto :
> Hi,
> I have a problem with the catalog backup.
> Wherever I put the DB dump, Bacula doesn't work and I get the message in
> the title.
>
> I tried modifying /etc/bacula/make_catalog_backup; I added chmod and chown
> commands for the file and the directory. In the end I tried
> I'm researching using this software for a lab and was wondering if it
> allows admins to assign quotas and user accounts.
No quotas exist. However, I guess you can assign each user a pool and give
that pool a maximum number of fixed-size volumes. As long as they use their
pool you would have the same effect a
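As a rough sketch of that pool-per-user idea (the pool name, label format,
and limits below are made-up examples, not anything from the original post):

Pool {
  Name = "alice-pool"             # one pool per user; placeholder name
  Pool Type = Backup
  Maximum Volumes = 10            # cap the number of volumes...
  Maximum Volume Bytes = 5G       # ...and their size: roughly a 50 GB quota
  LabelFormat = "alice-"          # auto-label volumes into this pool
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 30 days
}

Point the user's jobs at this pool and the volume limits act as an effective
quota.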
On Thu, Apr 14, 2011 at 5:37 PM, John Drescher wrote:
>> That's my first message to the bacula-users mailing list.
>>
>> I searched in all the Bacula's documentation and didn't find the answer:
>>
>> In the bacula-dir.conf file there is a variable name
-- Forwarded message --
From: John Drescher
Date: Thu, Apr 14, 2011 at 5:37 PM
Subject: Re: [Bacula-users] Bacula-dir.conf - the DirAddress variable
To: Wagner Pereira
> That's my first message to the bacula-users mailing list.
>
> I searched in all the Bacula
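For reference, a minimal sketch of where DirAddress sits in the Director
resource (all names, paths, and the IP below are placeholders, not taken from
the original post):

Director {
  Name = mydir-dir
  DirPort = 9101
  DirAddress = 192.168.1.10        # the address the Director listens on
  QueryFile = "/etc/bacula/query.sql"
  WorkingDirectory = "/var/lib/bacula"
  PidDirectory = "/var/run"
  Maximum Concurrent Jobs = 20
  Password = "changeme"            # console password; placeholder
  Messages = Daemon
}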
> When testing bacula I notice restores always expect the client machine
> to be up so you can restore directly to it. What would you do if the
> client machine was dead or just not alive on the network? Is there a
> way to specify a restore point other than the machine itself that the
> data was
-- Forwarded message --
From: John Drescher
Date: Wed, Apr 13, 2011 at 12:38 PM
Subject: Re: [Bacula-users] Dell PowerVault 124T tape drive
To: Steffen Fritz
On Wed, Apr 13, 2011 at 12:37 PM, John Drescher wrote:
> On Wed, Apr 13, 2011 at 7:48 AM, Steffen Fritz wrote:
>
> I have successfully installed Bacula on an Ubuntu 10.10 server and gotten
> through the first part of the tutorial (pg 95-104) in the documentation.
>
>
>
> I have added a second remote client which is running RHEL 6.0. The daemon
> runs on the client.
>
>
>
> When I run the new job I get the fol
> I haven't gotten this to work yet, that is my goal. The two options I
> have gotten are to turn my JBOD into a RAID or use "vchanger". I'm all
> for using RAID, but my co-workers don't want to; they are worried
> about the array getting corrupted and then we lose all our backups,
> where a
>> "The same pool is no problem at all. What is the problem is bacula
>> does not normally move a job from one device to a second and it will
>> never continue a job from one media type to a different. The first
>> problem can possibly be overcome with what is called as a virtual
>> autochanger"
>
> still recommend using bacula vchanger in this situation.
>
Somehow the "I" in that sentence disappeared while I was typing.
--
John M. Drescher
On Wed, Apr 6, 2011 at 11:54 AM, Mike Hobbs wrote:
> I just found this old archived messages..
>
> http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg29377.html
>
> The end of this email says:
>
> "The same pool is no problem at all. What is the problem is bacula
> does not normal
2011/4/4 Sarder Kamal :
> Dear List Members
>
> I am trying to configure bacula to back up nearly 1.5TB of data, which is
> always timing out since the backup does not complete before the next backup.
> Ideally, I would prefer the backup to begin after everyone is out of office
> and finish before a
On Sun, Apr 3, 2011 at 9:06 AM, hymie! wrote:
>
> Greetings. A couple of questions so that I can hopefully better
> understand what I'm doing with my Bacula setup.
>
> Q1:
> Is this statement correct?
> "A Storage may contain several Pools, but a Pool lives on one and only one
> Storage"
>
No. Thi
On Thu, Mar 31, 2011 at 9:06 AM, Mike Hendrie wrote:
> Thank you John.
>
> Can you please explain the bootstrap file to me? I am new to Linux.
>
The bootstrap file is part of bacula. In the case of the catalog I
found it very important to manually extract the catalog I needed from
a disk volume
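For what that's worth, a hedged sketch of that kind of manual extraction with
bextract, assuming the disk volume is served by a device named FileStorage and
the bootstrap file sits at /tmp/catalog.bsr (both placeholders):

# Extract only the files listed in the bootstrap file from the volume(s)
# handled by the "FileStorage" device, writing them under /tmp/restore.
bextract -b /tmp/catalog.bsr -c /etc/bacula/bacula-sd.conf FileStorage /tmp/restore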
> Question: In the event of my entire bacula system going down. What is the
> best method for recovery? I looked in the online manual, but it still was
> not very clear to me.
>
> If I have my backup of catalog files, all incremental and full backups, and
> I reinstall bacula on a new server. How d
> Using cron is not a good solution; I prefer to keep job management
> and scheduling inside Bacula.
You could also schedule an Admin job inside Bacula; when the Admin job
runs, it can run your script.
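For illustration, a minimal sketch of that approach (the job name, schedule,
and script path are placeholders; "DefaultJob" is the JobDefs from the stock
sample config):

Job {
  Name = "run-my-script"              # placeholder name
  Type = Admin
  JobDefs = "DefaultJob"              # reuse an existing JobDefs for the required resources
  Schedule = "WeeklyCycleAfterBackup" # any Schedule resource you already have
  RunScript {
    RunsWhen = Before
    RunsOnClient = no                 # execute on the Director host, not a client
    Command = "/usr/local/bin/my-maintenance.sh"   # placeholder script path
  }
}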
John
> My first question is: how can I tell Bacula to choose the client dynamically
> according to the result of a script?
>
You can script that by echoing commands to bconsole. Then execute your
script as a cron job.
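A hedged sketch of that approach (the paths, job name, client names, and the
selection test are all placeholders):

#!/bin/sh
# Pick a client based on some local test, then submit the job through bconsole.
if /usr/local/bin/pick-host-a.sh; then
    CLIENT="hostA-fd"
else
    CLIENT="hostB-fd"
fi
echo "run job=MyBackup client=${CLIENT} yes" \
    | /usr/sbin/bconsole -c /etc/bacula/bconsole.conf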
> My second question is: can I safely/often change the client of a job
> without
On Wed, Mar 30, 2011 at 5:00 AM, Laurent HENRY wrote:
> On Tuesday 15 March 2011 13:40:10, John Drescher wrote:
>> > Fine, I understand it now. Is there a way to just spool attributes and
>> > not the backed-up files themselves?
>>
>> SpoolAttributes=ye
>> I had recently changed the bacula-sd.conf from using " Maximum Block
>> Size = 262144", to whatever the default value is. I didn't realize that
>> by undoing this, all previous backups would no longer be readable (since
>> I guess the SD is expecting a different block size?). So, I added that
>
On Mon, Mar 28, 2011 at 2:11 PM, Ondrej PLANKA (Ignum profile)
wrote:
>
>> On 28.3.2011 16:43, Josh Fisher wrote:
>> On 3/27/2011 11:31 AM, Ondrej PLANKA (Ignum profile) wrote:
>>> >> Ciao!
>>> >>
>>> >> same behavior in 5.0.3 version. File based Volumes are not truncated
>>> >
On Sun, Mar 27, 2011 at 11:31 AM, Ondrej PLANKA (Ignum profile)
wrote:
> >> Ciao!
> >>
> >> same behavior in 5.0.3 version. File based Volumes are not truncated
> >> after console command "purge volume action=all allpools storage=File "
> >>
> >> Any hints?
> >
> > Does it work with pool=p
On Sat, Mar 26, 2011 at 10:00 AM, hymie! wrote:
>
> Greetings.
>
> I have two 1TB disks that I'd like to use for my Bacula volumes.
>
> My bacula-sd.conf says:
> ===
> Device {
> Name = FileStorage
> Media Type = File
> Archive Device = /storage
> LabelMedia = yes; # lets Bac
> I'm looking to do remote backups for several clients. I currently have
> a bacula server running (which has saved my job more times than I care
> to admit) which backs up all my internal servers.
>
> I'm wondering what I need to know to sell a service like this, and use
> bacula. Do I need to p
> I haven't had as many die as you have (Do your users kick their computers
> around the room?) but my experience matches yours when looking at changes in
> the raw data. The problem is I haven't had enough die to put 100% certainty
> on it so I tend to rely on smartd's output.
>
I have between 10
>> Well, a good start is to use something like SMART monitoring set up to
>> alert you when any drive enters what it considers a pre-fail state.
>> (Which can be simple age, increasing numbers of hard errors, increasing
>> variation in spindle speed, increasing slow starts, etc, etc...)
>
> FWIW: N
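For anyone wanting to try that, a minimal smartd sketch (the test schedule and
the mail address are placeholder examples; smartd ships with smartmontools):

# /etc/smartd.conf -- monitor all disks, run a short self-test daily at 02:00,
# a long self-test Saturdays at 03:00, and mail on any problem.
DEVICESCAN -a -o on -S on -s (S/../.././02|L/../../6/03) -m admin@example.com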
> Thanks for the fast response John. Where can I verify that each volume is set
> to recycle?
>
look at the output of
list media pool=WhateverPoolYouCreated
in bconsole
> Kern's docs also state that you should make your volumes no bigger than 5
> gigabytes. I have set mine to 20 GB at present.
> My Question:
>
> I want to know if Bacula is smart enough, once it goes past the 10-day
> pruning point, to start reusing the oldest volumes again and to create
> additional volumes only when absolutely necessary.
That is the way bacula works.
Remember the following rul
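As a hedged illustration of the pool settings behind that behaviour (the pool
name and limits below are placeholders, with the 10-day retention from the
question):

Pool {
  Name = "FilePool"                 # placeholder
  Pool Type = Backup
  Volume Retention = 10 days        # jobs older than this can be pruned
  AutoPrune = yes                   # prune expired jobs automatically
  Recycle = yes                     # reuse the oldest purged volume...
  Maximum Volumes = 20              # ...rather than growing past this cap
  Maximum Volume Bytes = 20G
}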
2011/3/16 Mike Hendrie :
> I will check that.
>
> I was using the commented out default client in the bacula-dir.conf, but
> updated the content with the new windows computer name. Then on the windows
> computer, I updated the director username and password, and the monitor
> username and password.
On Wed, Mar 16, 2011 at 1:29 PM, Mike Hobbs wrote:
> On 03/16/2011 01:12 PM, Robison, Dave wrote:
>> Just curious, why not put that jbod into a RAID array? I believe you'd
>> get far better performance with the additional spools and you'd get
>> redundancy as well.
>>
>> Personally I'd set that u
> Fine, I understand it now. Is there a way to just spool attributes and not
> the backed-up files themselves?
>
SpoolAttributes=yes in the Job resource.
http://www.bacula.org/5.0.x-manuals/en/main/main/Configuring_Director.html#SECTION00183
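A minimal illustration of where that directive goes (the job name is a
placeholder; only the spooling directives matter here):

Job {
  Name = "nightly-backup"           # placeholder
  JobDefs = "DefaultJob"            # client, fileset, storage, pool, etc.
  SpoolAttributes = yes             # batch-insert file attributes at end of job
  SpoolData = no                    # spool only attributes, not the file data
}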
--
John M. Drescher
On Tue, Mar 15, 2011 at 5:31 AM, Soul wrote:
> So... I have gotten a little bit further.
> Some paths in the configs were wrong. Also the magazines weren't owned by
> the bacula user.
>
> I changed that, so now I am able to load the disks and let Bacula
> write on them.
>
> But only manually!
>
> If I t
On Fri, Mar 11, 2011 at 12:28 PM, Mike Hendrie wrote:
> Ubuntu firewall is: inactive
> XP firewall is: disabled
>
> No localhost or 127.0.0.1 settings in the following files:
>
> bacula-dir.conf
> bacula-sd.conf
> bacula-fd.conf
>
>
> In the files I am using "bacula" in place of the server DNS: 10
On Fri, Mar 11, 2011 at 11:49 AM, Mike Hendrie wrote:
> Sorry for the delay. I needed a break.
> Thank you for your responses.
>
> I have DNS working:
> Bacula server: 10.2.1.98
> XP: 10.2.1.97
>
> Is there something special that needs to be set up for the filestore? I am
> using the same file sto
> I'm using Bacula 5.2 with 5 USB 500 GB disks, one for each working day of
> the week.
> Now my problem is whenever I try to label a volume I get the error "3920
> Cannot label Volume because it is already labeled:".
> Initially I'd labeled them but due to some issues like waiting for an
> append-able
>> Attribute spooling makes sense for either though.
>
> Attribute spooling ?
>
That enables spooling of the database entries: they are inserted at the end
of the job in a batch instead of one at a time as the files are being
processed. It's quite a bit faster to enable attribute spooling unless you
have your database o
On Wed, Mar 9, 2011 at 4:52 PM, Mike Eggleston wrote:
> On Wed, 09 Mar 2011, John Drescher might have said:
>
>> 2011/3/9 James Woodward :
>> > Hello Mike,
>> > I'd have to say no based on the tables defined
>> > here http://bacula.org/5.0.x-manuals/e
2011/3/9 James Woodward :
> Hello Mike,
> I'd have to say no based on the tables defined
> here http://bacula.org/5.0.x-manuals/en/developers/developers/Database_Tables.html.
> The file size does not appear to be stored in the database. I don't think
> any size is mentioned until you get to the job
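For what it's worth, sizes do show up at the job level; a quick sketch of a
catalog query using the standard Job table columns (works against a PostgreSQL
or MySQL catalog):

-- Total files and bytes per job, newest first.
SELECT JobId, Name, JobFiles, JobBytes
  FROM Job
 ORDER BY JobId DESC
 LIMIT 10;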
> I tried to enable spooling (a 500 GB spool) for the tape backup. The
> performance was worse.
>
BTW, I use a 5 GB spool.
I assume then you do not run concurrent jobs? For a single job the
time will be longer because bacula does not concurrently spool and
despool but with concurrent jobs and spooling with a
On Tue, Mar 8, 2011 at 8:24 AM, Laurent HENRY wrote:
> On Wednesday 02 March 2011 20:50:18, Mark wrote:
>> > btw, i actually experience it with disk backups.
>> > Does this mean software compression limits deeply backup speed ?
>>
>> In my case, software compression was a _massive_ slow
On Mon, Mar 7, 2011 at 1:24 PM, Win Htin wrote:
> Thanks John. Do you recollect which actual database I have to look into?
>
http://bacula.org/5.0.x-manuals/en/problems/problems/Tips_Suggestions.html#SECTION0036
John
On Mon, Mar 7, 2011 at 2:11 PM, Michael Edwards
wrote:
> I canceled a job about an hour ago as it wasn't going to run. It is the
> only job that would be writing to the volume mounted in drive 1 of our 2
> drive Dell Powervault. Drive 0 is in use doing backups from another
> pool. After about a
> I noticed that every time I have to cancel a running job, the tape/volume
> goes into an error state. I assumed the job would terminate gracefully
> with an EOD mark, but apparently not. Is there a way I can write an EOD
> mark and put the tape back into append mode? I am running version 2.2.6 on
> RHEL 4.6 server
>> Well, depending on what you consider "mainline".
What I meant by "mainline" was a filesystem that was considered stable
in the mainline kernel.
John
On Thu, Mar 3, 2011 at 5:57 PM, Sean Clark wrote:
> On 03/03/2011 03:21 PM, John Drescher wrote:
>> On Thu, Mar 3, 2011 at 4:11 PM, Fabio Napoleoni - ZENIT
>> wrote:
> [...]
>>> So the poor throughput is given by software compression. I don't know what
>
On Thu, Mar 3, 2011 at 4:11 PM, Fabio Napoleoni - ZENIT wrote:
>
>> On 3 March 2011, at 20:53, Alan Brown wrote:
>
>> Fabio Napoleoni - ZENIT wrote:
>>
>>> Thank you for your analysis, after that I think that the problem is not the
>>> nfs overhead, because the despooling phase (o
On Thu, Mar 3, 2011 at 9:52 AM, Fabio Napoleoni - ZENIT wrote:
>
> On 3 March 2011, at 04:52, Fabio Napoleoni - ZENIT wrote:
>
>> I have my director using an external RAID device which exports its
>> filesystem via NFS as storage. I just finished my Bacula
>> configuratio
> Maybe it's a newbie question, but I just could not find any answer: does
> BAT have a graphical way of browsing filesets on clients? If not, does any
> other interface provide this functionality? Thanks for your help.
>
You can browse what was included in your backup job but it will not
browse what cu
On Wed, Mar 2, 2011 at 5:36 AM, Soul wrote:
> Hi,
>
> I am trying to use 4 separate USB disks as storage media for backups.
>
> So... I set it up so that the backup disks are automatically mounted with autofs.
>
> Every client reaches the Director and the Storage Daemon...
>
> So far so good... but when the b
On Tue, Mar 1, 2011 at 8:40 AM, Laurent HENRY wrote:
> Hi all,
> Maybe not a direct question about bacula but i am looking for some feedback
> of bacula users.
>
> My Debian 5 bacula server is directly connected to a Cisco 6500 Switch.
> Either the bacula server, the switch, the servers backed up
> John -- I'd be grateful for your comments on how best to do a dd based
> read and write test for LTO3 and LTO4 tapes. This seems like a fairly
> good way of narrowing down a drive or interconnect problem.
>
> Is something like the following correct?
>
> mt -f /dev/nst0 rewind
> dd if=/dev/s
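A hedged sketch of the sort of raw write/read test being asked about, assuming
the drive is /dev/nst0 and a scratch tape is loaded (block size and count are
arbitrary examples, about 10 GB of data):

#!/bin/sh
# Raw write test: stream zeros to the tape and let dd report the throughput.
mt -f /dev/nst0 rewind
dd if=/dev/zero of=/dev/nst0 bs=256k count=40960

# Raw read test: rewind and read the same data back.
mt -f /dev/nst0 rewind
dd if=/dev/nst0 of=/dev/null bs=256k

Note that all-zero data compresses extremely well, so with hardware
compression enabled this measures little more than the interface speed.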
On Mon, Feb 28, 2011 at 5:33 PM, John Drescher wrote:
> On Mon, Feb 28, 2011 at 5:22 PM, Jeremiah D. Jester
> wrote:
>> Here's my output..
>>
>> #test write.
>> [root@scrappy bacula]# mt -f /dev/nst0 rewind
>> [root@scrappy bacula]# mt -f /dev/nst0 weof
On Mon, Feb 28, 2011 at 5:22 PM, Jeremiah D. Jester
wrote:
> Here's my output..
>
> #test write.
> [root@scrappy bacula]# mt -f /dev/nst0 rewind
> [root@scrappy bacula]# mt -f /dev/nst0 weof
> [root@scrappy bacula]# dd if=/dev/nst0 of=/dev/null bs=64512
> dd: reading `/dev/nst0': Input/output erro
> I am trying to migrate from Backup Exec to Bacula, and I want to know if
> anyone has successfully read Symantec Backup Exec tapes with Bacula, or
> how they migrated. Thanks in advance.
>
Bacula will not read Backup Exec tapes (or any format other than
Bacula's own). You will have to re
> I'd like to know if anyone could point me some directions about how I could
> create a routine in Bacula to make a Full backup of my main servers and then
> manually remove these tapes to take them to a safe place (in case of a
> catastrophe).
>
> I've a few doubts about how I could make this wor
2011/2/27 Igor Zinovik :
> On Feb 25, John Drescher wrote:
>> > device {
>> > name = backup-disk-device
>> > media type = File
>> > archive device = /var/backup
>> > label media = yes # lets Bacula label unlabeled media
>
On Fri, Feb 25, 2011 at 6:39 PM, John Drescher wrote:
>> I have noticed in my install of Bacula there is a WeeklyCycle schedule
>> configured as follows:
>> Full: 1st Sunday
>> Differential: 2nd-5th Sunday
>> Incremental: Mon-Sat
>> Does bacula delete incremen
> I have noticed in my install of Bacula there is a WeeklyCycle schedule
> configured as follows:
> Full: 1st Sunday
> Differential: 2nd-5th Sunday
> Incremental: Mon-Sat
> Does Bacula delete incrementals between Fulls and Differentials?
No
> If not, what
> is the purpose of mixing Differentials
2011/2/25 Igor Zinovik :
> Hello.
>
> I have been using Bacula 5.0.3 since November 2010. Now I'm facing a
> problem with my backup jobs. We have a mail server that
> also runs bacula-fd and I have to back it up, but this
> computer has very large file system partitions. Bacula has to back up
> about 310
-- Forwarded message --
From: John Drescher
Date: Thu, Feb 24, 2011 at 8:08 PM
Subject: Re: [Bacula-users] Tape label errors from dmesg
To: jerry lowry
> hello, I had an interesting development today. I labeled a dlt IV tape (
> standalone tape drive) for a new backup
2011/2/24 João Alberto Kuchnier :
> Hi everyone!
>
> I need some help to enable a new backup procedure here.
>
> Every month, I have to store a backup tape externally with data from all
> my 30 servers. Today, I would have to manually run each backup without
> scheduling. Is there any way to create
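One hedged way to approach the scheduling side, assuming a dedicated offsite
pool (the resource names are placeholders). A Job references a single Schedule,
but a Schedule may carry several Run lines, so a monthly full can be added to
the schedule the jobs already use:

Schedule {
  Name = "WeeklyCyclePlusOffsite"                      # placeholder
  Run = Level=Full Pool=OffsitePool 1st sat at 22:00   # monthly full to removable tapes
  Run = Level=Incremental mon-sat at 23:05             # existing daily runs stay as they are
}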
On Mon, Feb 21, 2011 at 5:41 PM, John Drescher wrote:
>>> It appears the problem is with the catalog. I am using 64 bit gentoo
>>> linux and bacula-5.0.3 although the problem has persisted for years. I
>>> only use disk volumes for my catalog so I have not spent much e
>> It appears the problem is with the catalog. I am using 64 bit gentoo
>> linux and bacula-5.0.3 although the problem has persisted for years. I
>> only use disk volumes for my catalog so I have not spent much effort
>> investigating the cause.
>
> What datatype is media.endblock on your system?
At work I periodically have append failures for my disk based volumes.
The following is an example of the error:
21-Feb 14:20 fileserver-dir JobId 24345: Error: sql_create.c:152
sql_create.c:152 update UPDATE Media SET EndFile=0,
EndBlock=3294226434 WHERE MediaId=275 failed:
ERROR: integer out of
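For context, 3294226434 is larger than the 32-bit signed integer maximum
(2147483647), which is consistent with media.endblock being a 32-bit integer
column. A quick check, assuming a PostgreSQL catalog:

-- Show the declared type of the column the failing UPDATE writes to.
SELECT data_type
  FROM information_schema.columns
 WHERE table_name = 'media' AND column_name = 'endblock';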
On Mon, Feb 21, 2011 at 12:31 PM, wrote:
>>>
>>> How can I get this library to look for the next tape (04)?
>>>
>>
>>I am not sure exactly why you want this but you could always mark the
>>other two tapes Full, Used, or Archive so that bacula ignores them.
>>
>>John
>
> It won't run a backup,
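That marking can be done non-interactively from a script or the console; a
sketch with a placeholder volume name:

echo "update volume=SomeTape01 volstatus=Used" | bconsole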
> The following is on a CentOS 5.5 x86_64 system with bacula 2.0.3-10 installed
> and working very well except for the following:
You should really update to a more recent version. 2.0.3 is over 4
years old and has many bugs that were fixed in future versions. The
current bacula version is 5.0.3.
> I am fairly new to Bacula and am very pleased with it. I have also seen a
> number of questions regarding remote client backups. As an IT admin I have
> found a number of products now have the ability to perform one full backup
> and then incrementals from that point forward. They also have the ab
On Mon, Feb 21, 2011 at 11:19 AM, John Drescher wrote:
>> I am fairly new to Bacula and am very pleased with it. I have also seen a
>> number of questions regarding remote client backups. As an IT admin I have
>> found a number of products now have the ability to perform one
I tried a few other jobs and it seems that regardless of the verify
fileset you choose, the VolumeToCatalog verify always picks the last
backup that was performed on the client and not the last full backup
for a given fileset or similar. The job I want to verify uses LTO2
media to the LTO2-Archived
2011/2/18 Mike Hendrie :
> I simplified the FileSet to include one directory, c:\home.
> I then ran the estimate from bconsole to verify that all the files
> were accounted for.
> When I run the job I get this from the bconsole director status:
> 28 Full il93mdec-fd.2011-02-18_06.51.37
> Anyone in the Seattle area who would be willing to do some Bacula training
> or support? I sent a similar post a few weeks ago regarding third-party
> support contracts. My goal is to overcome common problems and to be able to
> better administer our bacula systems.
>
>
> Have you checked with Bacula Sys
2011/2/11 Jeremiah D. Jester :
> I’m trying to determine the status of some of my tapes and I’m a little
> confused by the output. Anyone have any insight?
>
>
>
> Thanks,JJ
>
>
>
> [root@scrappy bacula]# ./bin/btape -c bacula-sd.conf /dev/nst0
>
> Tape block granularity is 1024 bytes.
>
> btape:
> Thanks for the reply. These are new tapes. When I try to do a ‘label
> barcodes’ on the new tapes I get errors so I’ve been manually wiping them
> with the following commands.
>
Getting a read error on the first use of a new tape is expected and not
harmful at all. Just ignore the read error 0:0
> So auto-labeling doesn't start 0 or 1 per pool, it uses the mediaid then.
This was a design decision to prevent a bug that occurs when you
delete intermediate volumes.
> What about using a counter variable to do it?
You can do this using your script or modifying the bacula source code
and bui
> I've got one of my pools set up to auto-label volumes. However instead of
> starting the number at 0 or 1, it's using the mediaid of the volume. Let me
> know what can be done to have the numbering start at 1 and continue from
> there. Thanks.
If you want this, you must implement it in a script.
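A rough sketch of such a script, keeping its own counter and driving bconsole
(the paths, pool, and storage names are placeholders):

#!/bin/sh
# Label the next volume using a locally maintained counter instead of the MediaId.
COUNTER_FILE=/var/lib/bacula/next-volume-number
NEXT=$(cat "$COUNTER_FILE" 2>/dev/null || echo 1)

echo "label volume=MyPool-$(printf '%04d' "$NEXT") pool=MyPool storage=File" \
    | /usr/sbin/bconsole -c /etc/bacula/bconsole.conf

echo $((NEXT + 1)) > "$COUNTER_FILE"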
On Tue, Feb 15, 2011 at 8:39 PM, Chris Geegan wrote:
> OK, I can get those. In general though, do I have the right idea? If
> there are unused volumes and a new pool is configured to auto-label then the
> pool will automatically label unused volumes for backups as needed.
>
> Sound about r
> My bat 5.0.3 running on Vista is having trouble getting information from
> my 5.0.3 Director. It sees only the default pool.
>
> The Pools pane shows only the Default pool. However, if I enable
> Settings | Preferences | Debug | Display all messages in console, and
> then restart Bat, all pools
2011/2/15 Chris Geegan :
> Got a question, and probably a simple one.
>
> I have a 702 gigabyte volume where all of my backups will be stored. Going
> with the default settings in the volume pools of 50 gigabytes per volume I
> have created 14 volumes for a total of 700 gigabytes. The default file poo
For some reason Verify VolumeToCatalog is not working as expected. I
ran a verify on a 1.86 TB job to check that the data on the 6 LTO2
tapes is correct, using fileset ImageData-ds3-fs, but it appears that
bacula is mixed up and is trying to verify the catalog backup job instead.
Here is the log:
14-Feb 10:
On Wed, Feb 9, 2011 at 12:27 PM, Jeremiah D. Jester
wrote:
> Is it possible to 'pause' a job and then resume at a later time?
>
No
John
On Wed, Feb 9, 2011 at 11:09 AM, Valerio Pachera wrote:
> Hi all, I have a storage server where I defined two Devices of type
> file (two different folders).
> I have this storage definition:
>
> Storage {
> Name = control-station-sd
> # Do not use "localhost" here
> Address = 1
On Wed, Feb 9, 2011 at 9:49 AM, John Drescher wrote:
>> (To Ferdinando) The Maximum concurrent jobs is set to 20.
>>
>> (To Matias) My storages are all "file" media type. I think that is not the
>> same as your case. But I can try to use two catalogs. How can I d
> (To Ferdinando) The Maximum concurrent jobs is set to 20.
>
> (To Matias) My storages are all "file" media type. I think that is not the
> same as your case. But I can try to use two catalogs. How can I do it?
>
Two catalogs will not give you concurrency. If you do not want
separate catalogs I w
> John,
> sorry for the probably obvious question: if I have one storage
> daemon with two storage devices defined, one File and one Tape, can I
> have two backup jobs running simultaneously, provided one goes on the
> File device and the other one on the Tape device?
>
You can have at least that
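A hedged sketch of the storage-daemon side of that setup (names, paths, and
the media type are placeholders; Maximum Concurrent Jobs must also be raised
in the matching Director-side Job, Client, and Storage resources):

# bacula-sd.conf fragment: one storage daemon serving two devices.
Storage {
  Name = mysd
  Maximum Concurrent Jobs = 20
  # addresses, ports, and directories as in your existing config
}

Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /backup/disk        # placeholder path
  LabelMedia = yes; Random Access = yes; AutomaticMount = yes
}

Device {
  Name = LTODrive
  Media Type = LTO-4                   # placeholder media type
  Archive Device = /dev/nst0
  AutomaticMount = yes; AlwaysOpen = yes; RemovableMedia = yes
}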
> I have one iSCSI drive with more than one storage device defined, but when I
> run backup jobs only one runs and the others are waiting. I want them to run
> concurrently. I think that is possible by running more than one storage
> daemon. I tried to execute bacula with more than one bacula-sd.conf
> I have been running version 2.4.3 for windows and decided it was time
> to upgrade. Now I can't find a version that includes the director for
> windows. Can anyone point me to a version that contains the director?
>
I do not think there is one any more since the server is not supported
on windo