alog.
>
> http://wiki.douglasqsantos.com.br/doku.php/using_bscan_to_recreate_a_catalog_from_a_volume_en
>
> Best Regards
>
>
> 2018-07-13 10:30 GMT+01:00 Gi Dot :
>
>> Hi,
>>
>> I wanted to restore a backup from a tape. Job is listed in the catalog,
>> vol
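For reference, a minimal bscan invocation along the lines of that wiki page might look like this (config path, volume name and device are examples; depending on the setup you may also need to pass catalog credentials with -n/-u/-P):

  bscan -v -s -m -c /opt/bacula/etc/bacula-sd.conf -V Vol0001 /dev/nst0

Here -s stores the scanned file records in the catalog and -m updates the Media record; run it while no job is writing to that volume.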
Hi,
I wanted to restore a backup from a tape. The job is listed in the catalog,
and the volume status is 'Append'. However, when I tried to restore, it says:
For one or more of the JobIds selected, no files were found,
so file selection is not possible.
Most likely your retention policy pruned the files.
Do
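The pruning in question is driven by the per-Client retention directives; a minimal sketch, with example names and values only:

  Client {
    Name = client1-fd
    Address = client1.example.com
    Password = "secret"
    AutoPrune = yes
    File Retention = 60 days    # File records are what file selection needs during restore
    Job Retention = 6 months    # pruning a Job record also removes its File records
  }

Once the File records are gone, you can still restore the whole job, or rebuild the records from the volume with bscan as discussed above.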
Hi,
I have 2 catalog backups; can I merge some jobs from the second catalog
into the first one? The reason is that I accidentally purged jobs from a
client. I have a backup of the catalog from 3 days ago, but if I restore that
catalog, I'll lose the list of jobs from the last 3 days.
It seems feasible but some op
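One low-risk way to start is to load the older catalog dump into a scratch database next to the live one, so the purged client's job records can be inspected without touching the current catalog. A sketch, assuming a MySQL catalog and a dump file named bacula.sql (both names are hypothetical):

  mysql -u bacula -p -e "CREATE DATABASE bacula_old"
  mysql -u bacula -p bacula_old < bacula.sql
  mysql -u bacula -p bacula_old -e "SELECT JobId, Name, Level, StartTime
      FROM Job WHERE ClientId =
      (SELECT ClientId FROM Client WHERE Name = 'client1-fd');"

Copying rows back into the live catalog is harder because JobId/MediaId values clash, so bscan against the original volumes is usually the safer route for the purged jobs.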
That should be it then. Thanks for the help.
On Wed, Oct 25, 2017 at 7:28 PM, Gary R. Schmidt
wrote:
> On 25/10/2017 21:34, Davide Franco wrote:
>
>> Hi,
>>
>> As far as I know, both the SD and the Director need to be on the same version.
>>
>> I can’t find it in the Bacula documentation yet, but feel free to
On Wed, Oct 25, 2017 at 5:27 PM, Heitor Faria wrote:
>
> Hi,
>
>
> Hello,
>
>
>
> I've set up bacula-dir on Ubuntu, and bacula-sd on FreeNAS.
>
>
> Why? Isn't iSCSI easier?
>
I'm not familiar with FreeNAS (or any NAS). I will look into it.
>
> Problem is the director can't connect to the storage.
>
>
>
s
>
> Davide
>
> On Wed, 25 Oct 2017 at 11:11, Gi Dot wrote:
>
>> Hi,
>>
>> I've set up bacula-dir on Ubuntu and bacula-sd on FreeNAS. The problem is
>> that the director can't connect to the storage. Error below:
>>
>> *message
>> 25-Oct 16:5
Hi,
I've set up bacula-dir on Ubuntu and bacula-sd on FreeNAS. The problem is that
the director can't connect to the storage. Error below:
*message
25-Oct 16:52 foo-dir JobId 0: Fatal error: authenticate.c:122 Director
unable to authenticate with Storage daemon at "xx.xx.xx.yy:9103". Possible
causes:
Pass
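This error is almost always a password or name mismatch between the two daemons: the Password in the Director's Storage resource must match the Password in the SD's Director resource, and the SD's Director resource must carry the Director's name. A sketch with example values:

  # bacula-dir.conf (Ubuntu host)
  Storage {
    Name = freenas-sd
    Address = xx.xx.xx.yy
    SDPort = 9103
    Password = "sd-secret"     # must match the SD side
    Device = FileStorage
    Media Type = File
  }

  # bacula-sd.conf (FreeNAS host)
  Director {
    Name = foo-dir             # the Director's own Name from bacula-dir.conf
    Password = "sd-secret"
  }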
.
> The job for client A would use storage A, client B would use storage B,
> etc. Then you could use Copy jobs to copy the jobs from each site's tape
> storage to the freenas storage.
>
> -Jonathan Hankins
>
> On Sun, Oct 8, 2017 at 11:47 PM Gi Dot wrote:
>
>> Thi
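A hedged sketch of that Copy-job idea (all resource names are made up): the destination comes from the source pool's Next Pool, and PoolUncopiedJobs selects anything not yet copied:

  Pool {
    Name = SiteA-Tape
    Pool Type = Backup
    Storage = SiteA-TapeStorage
    Next Pool = Freenas-File      # where the copies land
  }

  Job {
    Name = CopySiteAToFreenas
    Type = Copy
    Selection Type = PoolUncopiedJobs
    Pool = SiteA-Tape
    Client = siteA-fd             # placeholder; not used for selection
    FileSet = "Full Set"
    Messages = Standard
  }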
Think I'll go ahead with setting up a director on the administrative
district. Thanks a lot for your tips and advice. You have been a great help.
On Mon, Oct 9, 2017 at 12:37 PM, Phil Stracchino
wrote:
> On 10/09/17 00:13, Gi Dot wrote:
> > Well I have the freedom to test out any
ementation.
Thanks for the explanation on bacula with a cluster. I've never set up bacula
with a cluster before; it's something to look at.
On Mon, Oct 9, 2017 at 11:54 AM, Phil Stracchino
wrote:
> On 10/08/17 23:41, Gi Dot wrote:
> > Ah, I didn't think of that. Ma
mewhere else, then you
wouldn't need to bscan the backup media for restoration.
On Mon, Oct 9, 2017 at 11:07 AM, Phil Stracchino
wrote:
> On 10/08/17 22:39, Gi Dot wrote:
> > Your understanding is correct. I am not aware that this setup is not
> > supported/possible. I suppose
at 10:28 AM, Phil Stracchino
wrote:
> On 10/08/17 22:17, Gi Dot wrote:
> > Hi,
> >
> > Does Bacula storage have a limit on how many jobs it accepts at one time?
> >
> > I have configured 7 servers to back up to a FreeNAS. Each server runs
> > bacula-dir pointing to its own b
Hi,
Does Bacula storage have a limit on how many jobs it accepts at one time?
I have configured 7 servers to back up to a FreeNAS. Each server runs
bacula-dir pointing to its own local Bacula database. Each server has 2
storages configured: local, and the FreeNAS, which is geographically remote.
Jobs were configured
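On the limit question: the storage accepts as many simultaneous jobs as Maximum Concurrent Jobs allows, and it usually has to be raised in more than one place (the defaults are fairly low). A sketch with example values; newer versions also accept the directive per Device in the SD:

  # bacula-dir.conf, in the Storage resource pointing at the FreeNAS SD
  Maximum Concurrent Jobs = 10

  # bacula-sd.conf, in the Storage resource
  Maximum Concurrent Jobs = 10

Note that several jobs written concurrently to one device end up interleaved on the volume, which slows restores; multiple Device resources avoid that.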
Hi,
I have 3 sets of tapes, pooled by week:
Set1Pool:
mon-set1 - mon
tue-set1 - tue
wed-set1 - wed
thu-set1 - thu
fri-set1 - fri/sat/sun
Set2Pool:
mon-set2 - mon
tue-set2 - tue
wed-set2 - wed
thu-set2 - thu
fri-set2 - fri/sat/sun
Set3Pool:
mon-set3 - mon
tue-set3 - tue
wed-set3 - wed
thu-set
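With a three-set rotation like this, the main thing the pools need is a Volume Retention shorter than the three weeks it takes for a set to come around again, so the tapes are prunable when re-inserted. A sketch for one pool (the retention value is an example):

  Pool {
    Name = Set1Pool
    Pool Type = Backup
    Recycle = yes
    AutoPrune = yes
    Volume Retention = 20 days   # just under the 3-week cycle, counted from the last write
  }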
read_attr to interrogate the chip.
>
>
> You should also install the IBM or HP drive management tools (even if this
> means installing windows) and interrogate drive health.
>
>
> tapeinfo and loaderinfo utilities are useful but incomplete for this kind
> of diagnosis.
>
>
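For the low-level checks mentioned above, something along these lines (device paths are examples; sg_read_attr ships with sg3_utils):

  tapeinfo -f /dev/sg3      # drive and cartridge status via the SCSI generic device
  loaderinfo -f /dev/sg4    # autochanger status
  sg_read_attr /dev/nst0    # MAM attributes stored on the cartridge chip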
: 2,106
endblock: 0
volparts: 0
labeltype: 0
storageid: 1
deviceid: 0
locationid: 0
recyclecount: 0
initialwrite:
scratchpoolid: 0
recyclepoolid: 0
comment:
On Mon, Jan 9, 2017 at 11:29 AM, Gi Dot wrote:
> Hi all,
>
Hi all,
At the data centre we are using IBM LTO tapes - 3.0TB compressed, 1.5TB
uncompressed. For the last 2 nights a backup was running; it stopped at about
150GB and Bacula marked the tape as full.
Since the total amount of backed-up data can sometimes be huge, I have
purged the volume straigh
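A tape marked Full far below its capacity is usually a write error rather than real end-of-tape; the catalog record dumped above looks like llist output, and the relevant counters can be checked from bconsole (volume and storage names are examples):

  *llist volume=IBM001      # check volbytes, volerrors and volstatus
  *status storage=Tape      # what the SD currently reports for the drive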
Yes, I understand that. But for a setup like recycling the same tape
every night, where the backup takes several hours to complete, the hours
calculation is pretty important.
On 2 Jan 2017 17:32, "Radosław Korzeniewski"
wrote:
Hello,
2016-12-29 8:10 GMT+01:00 Gi Dot :
> Hell
h retention period just
a few hours extra.
On 1 Jan 2017 03:00, "Dan Langille" wrote:
> On Dec 29, 2016, at 2:10 AM, Gi Dot wrote:
>
> Hello,
>
> I have always calculated my volume retention period based on its last
> written date and time. This means when I want to pla
Hello,
I have always calculated my volume retention period based on its last
written date and time. This means when I want to plan a new
configuration, I will consider how long the backup takes to complete
- say, if it takes 2 hours to complete and I need a retention period of 1
day, I
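In practice that means padding the retention by the expected run time, since Volume Retention is counted from the volume's last-written time. A sketch (the value is an example):

  Pool {
    Name = NightlyTape
    Pool Type = Backup
    Recycle = yes
    AutoPrune = yes
    Volume Retention = 26 hours   # ~24 h of retention plus ~2 h of job run time
  }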
ntally purged one can be recovered before reuse by using the
> bscan utility.
>
>
> >
> > Purging won't make you lose other backups. My point was just that
> purging
> > overrides the retention period, so you can lose backups that were
> supposed to
> > b
Thanks, I've stopped my habit of setting the volstatus to Recycle
manually. Now I just purge the volumes when I don't need them.
On Fri, Dec 9, 2016 at 11:29 PM, Alan Brown wrote:
> On 06/12/16 20:26, Gi Dot wrote:
>
> Well I've actually updated the status of al
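The difference between the three approaches, in bconsole terms (volume name is an example):

  *prune volume=TAPE01                      # honours retention; removes only expired Job/File records
  *purge volume=TAPE01                      # ignores retention; removes all records for that volume
  *update volume=TAPE01 volstatus=Recycle   # manual override; records are purged when the volume is relabelled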
are running more than 500 jobs a day, you need to take the Bacula
> Admin I course.
>
> - If you are running more than 500 jobs and not a Bacula expert, you most
> likely need a professional service contract.
>
> Best regards,
> Kern
>
>
> On 12/06/2016 09:55 PM, Gi D
ycled
it will skip all the pruning part.
On Wed, Dec 7, 2016 at 4:41 AM, Gi Dot wrote:
> All of the volumes were full at the time, and had passed the retention
> period. Bacula prunes the oldest volume, and gets stuck on it.
>
> Sorry if this question is trivial, but I don't really un
On Mon, 5 Dec 2016 17:40:08 +0800, Gi Dot said:
> >
> > > What exactly do you mean by "too long"? Does bacula encounter a
> > > timeout during the pruning from a database error?
> > Like it runs overnight and still not done with it. Sample log:
> >
Not that I noticed. Purging will remove the jobs associated with the volume
from the catalog, right?
On Mon, Dec 5, 2016 at 6:34 PM, Uwe Schuerkamp
wrote:
> Are you seeing any high loads on the server while pruning job is
> running? It looks like the pruning job is stuck in some sort of
> loop. Gi
t;
>
>
> Best Regards
>
> *Wanderlei Hüttel*
> http://www.huttel.com.br
>
> 2016-12-05 7:40 GMT-02:00 Gi Dot :
>
>> - Hardware specs of the director (assuming all components run on a
>> single machine)
>> >> Backup runs on a database server. There
you mean by "too long"? Does bacula encounter a
timeout during the pruning from a database error?
>> Like it runs overnight and still not done with it. Sample log:
https://dpaste.de/wetb/raw
Thanks.
On Mon, Dec 5, 2016 at 5:02 PM, Uwe Schuerkamp
wrote:
> On Mon, Dec 05, 2016 a
True, I think so too.
On Fri, Nov 25, 2016 at 9:18 PM, Martin Simmons
wrote:
> >>>>> On Wed, 23 Nov 2016 04:51:57 +0800, Gi Dot said:
> >
> > Hi all,
> >
> > The other day I was running a full and differential backup back to back
> on
> > a d
Hello,
I have this problem with one of my clients: pruning of a volume is
taking too long (and in the end I recycled it manually by
updating the volume status). I have googled this, and from what I
understand it is mostly due to the database indexing (to be honest I
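The usual fix for pruning that runs for hours is to make sure the File table has the composite index recommended in the Bacula catalog-maintenance documentation. A sketch for an older (pre-9.x) schema that still has FilenameId, shown for MySQL:

  SHOW INDEX FROM File;
  CREATE INDEX file_jpfid_idx ON File (JobId, PathId, FilenameId);

Building the index on a large File table takes a while, but it typically speeds up pruning considerably.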
Hi all,
The other day I was running a full and a differential backup back to back on
a database server, and I was surprised to see that the sizes of the two
backups were not that far apart. The full backup was about 37G, and the
differential backup that ran after it was about 30G. Excerpt from the log:
ip Deanovic
wrote:
> On Wednesday 2016-11-02 03:32:39 Gi Dot wrote:
> > Yes, both parameters will honor the retention period.
> >
> > Sorry, I think I worded it wrongly in my first mail. What I meant was,
> > for both parameters, bacula won't recycle either curre
ng.
As for 'Purge Oldest Volume', I'm not keen on using this parameter, as it will
remove the job list from the catalog as well.
Thanks.
--
On Mon, Oct 31, 2016 at 5:58 PM, Josip Deanovic
wrote:
> On Monday 2016-10-31 10:21:37 Gi Dot wrote:
> > Hi all,
> >
> >
Hi all,
I'd like to know if I understand these 2 parameters correctly:
Recycle Current Volume = yes
Recycle current mounted volume in pool.
Recycle Oldest Volume = yes
Recycle oldest volume in pool.
But, for both parameters, bacula still won't recycle either the current or the
oldest volume if there is
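A Pool sketch showing both directives in context; as discussed, neither one bypasses Volume Retention (names and values are examples):

  Pool {
    Name = TapePool
    Pool Type = Backup
    Recycle = yes
    AutoPrune = yes
    Volume Retention = 7 days
    Recycle Current Volume = yes   # prune and reuse the mounted volume if its retention has expired
    Recycle Oldest Volume = yes    # otherwise prune the oldest volume in the pool
  }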
Hi all,
It happens quite frequently that my backup job ends up with the "The number of
files mismatch!" error. What could be the root cause of this?
Full error from log:
17-Oct 17:05 srv2-dir JobId 54: shell command: run BeforeJob
"/usr/lib64/bacula/make_catalog_backup.pl MyCatalog"
17-Oct 17:05 srv2-
et as well and that's why I needed the mount
script.
Thanks.
On 23 Sep 2016 19:54, "Marcin Haba" wrote:
> Hello Gi Dot,
>
> OK, fine.
>
> Please note that mount and umount scripts that you mentioned are
> defined on Director side, not on Storage Daemon side. You also
onfig. Actually
I don't even need to use the scripts anymore, but maybe I'll just make a
test for the fun of it.
Will let you know on how it goes. Thanks a lot for your help and advice.
On 23 Sep 2016 18:57, "Marcin Haba" wrote:
> Hello Gi Dot,
>
> Thanks for this o
Oh, by the way, I'm using the mt command.
On Fri, Sep 23, 2016 at 5:51 PM, Gi Dot wrote:
> Hi,
>
> Thanks for your reply. I have changed my bacula-sd.conf configuration
> since a couple of weeks, and removed the mount.sh and umount.sh scripts. So
> far I have not received th
0
ActivePartition: 0
EarlyWarningSize: 0
NumPartitions: 0
MaxPartitions: 1
Thanks.
On Fri, Sep 23, 2016 at 11:39 AM, Marcin Haba wrote:
> Hello Gi Dot,
>
> Could you try to run the test command in the btape tool and paste here the
> output from this test?
>
> The output text from the test co
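For reference, the btape test referred to here is run against the SD's configuration and the tape device, with the Storage daemon stopped or the drive otherwise idle (paths are examples):

  btape -c /etc/bacula/bacula-sd.conf /dev/nst0
  *test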
regardless of whether a
wrong tape is inserted, or if a tape is not changed for some time, which will
lead to it being fully utilised but not recyclable because there are other
tapes that are still appendable.
On 12 Sep 2016 22:06, "Phil Stracchino" wrote:
On 09/12/16 06:14, Gi Dot wrote:
> Hi l
Hi list,
If I have a list of tapes with status Used, and the tape that I inserted is
also Used and the retention period is already exceeded, bacula won't
recycle the tape if it is not the oldest volume, right?
If the prior statement is true, is it possible for me to get bacula to
recycle a tape t
Hi all,
I am using IBM DAT 160 media with Bacula, and I always get a tape error
after a successful backup job. There have been a lot of times when the tapes
snapped as well. At first I thought the tapes were just defective, but the
error occurred so frequently that I started to doubt my Bacula configuration.
Hi,
I added the directive "Purge Oldest Volume = yes" and reloaded through
bconsole, but it still doesn't work. Do I need to restart the bacula-dir
service? I have tried running the update command from bconsole as well.
Thanks.
On Mon, Aug 18, 2014 at 8:39 PM, Heitor Faria wrote:
> From bacula doc
>From bacula documentation:
- Purge the oldest Volume if PurgeOldestVolume=yes (the Volume with the
oldest LastWritten date and VolStatus equal to Full, Recycle, Purged, Used,
or Append is chosen). We strongly recommend against the use of *
PurgeOldestVolume* as it can quite easily lea
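After changing a Pool directive it is worth doing both of the following in bconsole: reload re-reads the configuration, and update pool pushes the edited resource into the catalog's Pool record for the directives that are stored there (the pool name is an example):

  *reload
  *update pool=TapePool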
Hi,
I configured bacula to write backups to one tape for every 2 days (1 tape
for Monday & Tuesday, another tape for Wednesday & Thursday, and so on).
These tapes will be recycled every 2 weeks.
Example:
Week 1, Monday & Tuesday = Tape A
Week 2, Monday & Tuesday = Tape B
Week 3, Monday & Tuesday =
That is a good idea. Thanks!
On Fri, Aug 8, 2014 at 10:17 PM, J. Echter <
j.ech...@echter-kuechen-elektro.de> wrote:
> Am 07.08.2014 09:22, schrieb Gi Dot:
>
> Hi,
>
> Due to holiday and having no one around to switch tapes for me, I have
> more than 10 jobs create
Hi,
Due to the holidays and having no one around to switch tapes for me, I have
more than 10 jobs created and pending backup. Normally when a situation like
this occurs (though with a smaller number of jobs), I just put in the
tapes one by one and let the jobs run in sequence, one after another.
Thi
most always -- contain a
> different number of members?
>
> you could reduce the conflict to once a year
> by defining odd/even weeks of the year.
>
>
> Gi Dot wrote:
>
>> Hi,
>>
>> I'm having a bit of a problem with my backup schedule. The requ
Hi,
I'm having a bit of a problem with my backup schedule. The requirement is
to use 1 tape for 2 days of incremental backups and 1 tape for 1 day of full
backup. The tapes will be classified into 2 groups: the odd weeks (1st, 3rd,
5th week of the month) and the even weeks (2nd and 4th week of the month).
L
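A Schedule sketch for the odd/even-week grouping described above (pool names and times are examples; months with a 5th week are what cause the conflict mentioned earlier, and the suggested odd/even weeks of the year would use the wNN week-of-year keywords instead):

  Schedule {
    Name = "OddEvenWeeks"
    # if your version rejects the comma lists, split each Run line per week keyword
    Run = Level=Full Pool=OddWeekFull 1st,3rd,5th sat at 21:00
    Run = Level=Incremental Pool=OddWeekIncr 1st,3rd,5th mon-fri at 21:00
    Run = Level=Full Pool=EvenWeekFull 2nd,4th sat at 21:00
    Run = Level=Incremental Pool=EvenWeekIncr 2nd,4th mon-fri at 21:00
  }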