aid to just dump the entire catalog and all the
volumes and make a clean start once you have everything working. But at
the moment, if you're trying to sort good data from bad while half your
backups are still failing and you don't yet know why, it's a bit like
trying to bail a lak
want to replicate only the good jobs, your best bet is
probably to migrate the good jobs to a new Pool and then replicate only
those.
--
Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
ala...@caerllewys.net ala...@metrocast.net p...@co.ordinate.org
Renaissance Man, Unix ronin, Perl hacker, Free Stater
ll the state
files in and under Bacula's working directory as well.
--
On 07/09/10 11:55, Prashant Ramhit wrote:
> Hi,
> Your solution worked perfectly.
> Storage can be defined in either the Job or Pool.
> And you should specify the level in the Job, and Bacula figures out
> which pool to back up to, and hence the storage and the drive.
That's it e
TO4-pool with volumes of type
LTO4.)
3. Use the Full Pool, Differential Pool and Incremental Pool directives
in your JobDefs to specify the correct Pool for each level. Bacula will
figure out the correct storage device to use based on the media type
specified in the Pool.
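As a hedged sketch (pool, storage, and resource names here are invented for illustration), those per-level directives sit in the JobDefs like so; note that the full directive names are Full Backup Pool, Differential Backup Pool, and Incremental Backup Pool:

```conf
JobDefs {
  Name = "DefaultJob"                    # illustrative name
  Type = Backup
  Full Backup Pool = Full-Pool           # e.g. the LTO4-pool mentioned above
  Differential Backup Pool = Diff-Pool
  Incremental Backup Pool = Inc-Pool
}
```

Bacula then picks the storage device whose Media Type matches the Media Type declared in whichever Pool the job's level selects.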
--
> days etc..
>
> The problem is: if I put a tape in the bank, 19 days later Bacula will ask
> me for THIS specific tape, which is not available (because it's in the bank..).
>
> Is there a solution for this problem, or is it inherent to the way Bacula
> currently works?
This sounds at
On 07/08/10 06:59, Koray AGAYA wrote:
> Thanks for your help I have a Question. How to flow Bacula on Sun
> Solaris JAVA Desktop
I'm sorry, I don't understand the question. Could you try rephrasing it?
--
100% certain.
There exists a console function to list the jobs contained on a volume,
grasshopper. It is readily accessible from the Pools listing in BAT, if
you don't want to do it from bconsole.
--
levels.
- Most of this is covered in the Getting Started section of the Bacula
manual.
- You're probably going to want to either look into the
truncate-on-purge feature, or do some external scripting to delete
purged Volumes as an admin job. Possibly both.
--
, you still show only a small number of your clients
listed or still cannot access your old volumes, you probably want to
restore your previous bacula-dir.conf and bacula-sd.conf and copy the
appropriate data, then restart Bacula.
If you're still having problems after that, then we can figure
why an LTO5 should not
also work.
--
ughts so far!
It sounds like your best bet is to write a Perl or expect script that
performs the necessary bconsole interaction to run your restore job as
you need it run, then execute that script from an admin job.
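One way to sketch that (the script name, bconsole path, and job parameters are all assumptions, not from the thread) is a small Python wrapper that assembles the command sequence and feeds it to bconsole on stdin, which an Admin job's RunScript could then invoke:

```python
import subprocess

def build_restore_commands(client, jobid, where):
    """Assemble a non-interactive bconsole restore command sequence.
    The keyword options mirror bconsole's restore command parameters."""
    return "\n".join([
        f"restore client={client} jobid={jobid} where={where} all done yes",
        "quit",
        "",
    ])

def run_restore(client, jobid, where, bconsole="/usr/sbin/bconsole"):
    """Feed the assembled commands to bconsole and return its output."""
    cmds = build_restore_commands(client, jobid, where)
    result = subprocess.run([bconsole], input=cmds, text=True,
                            capture_output=True)
    return result.stdout
```

Keeping command assembly separate from execution makes the script easy to test without a running Director.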
--
n I'm just testing.
However, if your purpose is simply to mirror, rather than to continually
test the latest backup, then it's probably considerably more *efficient*
to use rsync. In fact, *unless* you want to do the daily restore as a
test to make sure the backups are good, you might w
regards.
I would try something like:
Schedule {
Name = "Semestral"
Run = Level=Full jan 1st mon at 09:00
Run = Level=Full jul 1st mon at 09:00
}
--
apply in each config concerned file ..
That is used simply to generate a random password at installation time.
While it does generate a good strong random password, it's by no means
a requirement to use that method.
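For instance, an equally strong random password can be generated with Python's secrets module (the byte count here is an arbitrary choice for illustration, not what Bacula's installer uses):

```python
import secrets

def random_console_password(nbytes=33):
    # 33 random bytes encode to 44 URL-safe base64 characters
    return secrets.token_urlsafe(nbytes)

print(random_console_password())
```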
--
it's a fairly simple task to create a
script that uses the console to generate a regular report of purged
volumes. If it keeps a state log of what it has reported in the past,
it could equally easily report only newly-pruned volumes. I don't know
whether that would meet your reporting needs.
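A minimal sketch of that idea (the file path and input format are assumptions; in practice the status map would be parsed from bconsole's "list volumes" output):

```python
import os

def newly_purged(volume_status, state_file="purged_state.txt"):
    """Return purged volumes not yet reported, updating the state log.
    volume_status maps volume name -> status string (e.g. "Purged")."""
    already = set()
    if os.path.exists(state_file):
        with open(state_file) as f:
            already = {line.strip() for line in f if line.strip()}
    purged = {v for v, s in volume_status.items() if s == "Purged"}
    new = sorted(purged - already)
    # remember everything reported so far, so repeats are suppressed
    with open(state_file, "w") as f:
        f.write("\n".join(sorted(purged | already)))
    return new
```

Run from cron or an Admin job, each invocation reports only volumes purged since the previous run.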
. As
best I recall, it is sent across the network as an MD5 hash, not in clear.
--
, the primary purposes of
spooling are to allow a fast dump to disk of small backup sets followed
by a slower write out to tape (in order to free up clients as fast as
possible), or to prevent shoeshining when clients and/or the network
cannot transfer data fast enough to keep a high-speed tape drive streaming.
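The relevant directives, as a hedged sketch (names of the Job and Device are illustrative; Spool Data belongs in the Director's Job resource, the spool limits in the Storage Daemon's Device resource):

```conf
# bacula-dir.conf
Job {
  Name = "nightly-to-tape"
  Spool Data = yes               # stage the backup to disk, then despool to tape
}

# bacula-sd.conf
Device {
  Name = "LTO4-drive"
  Spool Directory = /var/spool/bacula
  Maximum Spool Size = 200G      # cap total spool usage for this device
}
```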
In Bacula 3.x, since volumes can have numeric "names", when
deleting volumes by media ID one must precede the ID with an asterisk,
as the prompt above tries to remind you. "5" will be interpreted as a
volume name; to specify a media ID, you need "*5".
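The exchange looks roughly like this (prompt wording reconstructed from memory, not verbatim):

```
*delete volume
Enter *MediaId or Volume name: *5
```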
--
and make sure it *REALLY IS*
putting out enough power *under full load* to drive everything in the
system. In this case, based on my calculations, the original power
supply had to be falling short of its rated power output by almost 17%.
--
> Write Bootstrap = "/var/bacula/%c.bsr"
> Allow Duplicate Jobs = no
> Cancel Queued Duplicates = yes
> }
You need to add "Rerun Failed Levels = yes" in this resource.
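In context, the resource would then look something like this (names besides the quoted directives are illustrative):

```conf
Job {
  Name = "client1-backup"
  Write Bootstrap = "/var/bacula/%c.bsr"
  Allow Duplicate Jobs = no
  Cancel Queued Duplicates = yes
  Rerun Failed Levels = yes    # if a prior Full/Diff failed, upgrade the next run
}
```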
--
run first the v10-to-v11 update
script, then the v11-to-v12 script. If Ubuntu didn't install them, you
should be able to get them from Sourceforge.
--
Pool settings.
--
for HBA has installed successfully.
What is this lin-tape you speak of?
If your Linux kernel is properly configured (i.e., SCSI HBA support, SCSI
tape support, and the PERC6 low-level SCSI driver enabled), it should
Just Work. If those aren't compiled into your kernel, see if you have
them presen
andled just like any
other incremental change.
--
s in the class, you back up only user data plus any
base system files that are different from those on the reference machine.
Once I have all of my Windows boxes on the same version of Windows again
(right now, half are XP Pro and half are 2K Pro), I'm planning to set up
a base job for them
a23 \
> --with-db-name=bacula \
> --with-db-user=bacula \
>
> exit 0
It told you exactly what the problem is. You're building a full install
of all components, but you didn't tell it which database to build against.
You're clearly using MySQL; try adding --with-mysql to
> Is there a cross-compile manual? I will be very glad to compile the
> Windows Bacula Server version.
I personally can't help you with that, not having done it myself, but
one or another of the other Bacula folks who monitor this list should be
able to tell you how to do it.
> [truncated "list volumes" output showing volume "file_0021" in Append status]
>
> Am I good to go?
The volumes look fine. If it still uses multiple volumes, I don't think
it's a media problem.
--
way) to make sure they get recycled (cycle over and over). Do I just go into
> bat and mark each volume 'recycle' ?
New volumes will be in state 'Append' and do not need to be marked in
any way.
--
ror.
Dermot,
There is no command logging facility in the Bacula console. It might
make a good feature request.
--
dd
That's very curious behavior. At the moment, I couldn't make a guess as
to what's causing it.
--
On 05/25/10 13:18, Joseph Spenner wrote:
> --- On Tue, 5/25/10, Phil Stracchino wrote:
>
>
>>
>> The 'Storage' section of bacula-dir.conf does not call out
>> individual
>> devices, just storage daemons. You should have the
>> Storage device
>
. Any ideas?
First of all, check that all your Maximum Concurrent Jobs settings are
correct in ALL applicable resources, in both the bacula-sd.conf and
bacula-dir.conf files.
Also check that you haven't set the "Use Volume Once" preference (which
is usually a mistake made due to mi
On 05/25/10 12:36, Joseph Spenner wrote:
> --- On Tue, 5/25/10, Phil Stracchino wrote:
>> If you enter the console command 'update' with no
>> arguments, it will
>> give you a number of choices of what to update. 'Pool
>> from resource'
>> w
he console command 'update' with no arguments, it will
give you a number of choices of what to update. 'Pool from resource'
will be one of these (it should be the second, I think).
--
rking"
> Maximum Concurrent Jobs = 20
> }
>
> Device {
> Name = FileStorage
> Media Type = File
> Archive Device = /opt/bacula/volumes
> LabelMedia = yes;
> Random Access = Yes;
> AutomaticMount = yes;
> RemovableMedia = no;
> AlwaysOpen = no;
>
to first update the Pool from the resource, then update all
the Volumes from the newly-updated Pool, in order to propagate all the
new settings to the Pool and all the volumes it contains.)
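In bconsole, that two-step sequence looks roughly like this (menu labels quoted from memory, not verbatim):

```
*update
  ... choose "Pool from resource", then select the Pool ...
*update volume
  ... choose "All Volumes from Pool" to push the new settings to every volume ...
```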
--
> Anyone have any ideas?
This may be a silly question, but ... how much data is being backed up
each night?
How many clients do you have and what are your various concurrency
settings? In particular, does every storage device have a concurrency
setting greater than the number of clients?
you do not try to use any new functionality not supported by
the 2.4 clients (VSS, for example). Older server, newer client is
strongly discouraged.
--
good idea", but I do it myself
without any problem. It simply requires having a method to regularly
remove purged Volumes from disk and from the Catalog.
--
the new server.
>
> This would let us do the upgrade at a relative calm pace, while
> keeping our double backup strategy in place.
>
> Does anyone have experience with this situation?
There should be no problem backing up an older client with a newer server.
--
On 05/18/10 10:37, Joseph Spenner wrote:
> --- On Mon, 5/17/10, Phil Stracchino wrote:
>> As previously mentioned, staggering backups on your clients is
>> fairly easy. A six-day backup cycle is *unusual*, because it
>> doesn't fit neatly into either weekly or monthly
Run = Level=Full sat at 1:05
> Run = Incremental sun-fri at 1:05
> }
>
> Schedule {
> Name = "WeeklyCycleAfterBackup"
> Run = Full sun-sat at 1:10
> }
>
> ...
>
> Then, as I add clients, I decide which Schedule to use such that the Full
> b
o with Bacula,
because Bacula is not designed to do scheduling that way. I'm really
not certain how one would schedule a calendar-independent six-day
rotating schedule in Bacula, and I'm not convinced it can be done
without enumerating the entire schedule a year at a time. If you r
lso consider using Pool specifications in the Job/JobDefs record rather
than using Schedule-based Pool overrides. Schedule-based overrides have
been deprecated because it is not possible to make them work properly
with automatic job level promotion. At this time, they are still
supported for backwards com
a/bacula-5.0.2.ebuild manifest
# emerge -av app-backup/bacula
The ebuild manifest step is *important*. If you do not update the
manifest after patching, the ebuild checksum will be wrong, and portage
will helpfully re-download the ebuild for you, undoing your work.
--
correctly. What you could do is configure the source appropriately,
then do a make in just the subdirectory containing the scripts to make
the update scripts you need.
--
$# Microsoft...
Hey, if this stuff made any logical sense, Microsoft wouldn't be able to
sell MCSE training.
--
On 05/06/10 02:57, Vlamsdoem wrote:
> On 05/05/10 15:12, Phil Stracchino wrote:
>> On 05/05/10 08:38, John Drescher wrote:
>>
>>>> Sorry my servers are on gigabit links.
>>>> How do you come to 9MB/s with a 100Mb link, is it not equals
ibly have only one backup job/location for those
> files.
This doesn't necessarily follow unless you're using accurate backup.
But it sounds as though the best approach here is to ensure that the
shared storage is mounted at a specific node when your backup runs.
--
remember that the data transfer rates specified on disk
interfaces are the maximum burst transfer rate FROM a full disk cache or
TO an empty one. The actual sustained rates at which the physical
mechanism can read or write data to and from the platters are FAR lower.
--
On 05/04/10 13:29, Joseph Spenner wrote:
> --- On Tue, 5/4/10, Phil Stracchino wrote:
>
>>> I have mysql running on my Suse 11.2 64bit
>> system. How does mysql get populated with the bacula
>> database/tables? When I compiled from source earlier,
>> I fo
processes run during the
> RPM installation. I logged into mysql and did 'show tables', but saw no
> bacula data.
Having never installed from RPM, I can't help you there, sorry. But the
RPMs SHOULD contain the scripts you reference above.
--
t work??
Can you be a little more specific about "don't work"? What doesn't
work? Compression? Backups?
If you're not getting compression and you're asking about that, from the
fragmentary bits of configuration you've posted above you appear to have
compressi
Name = FS-test-windows
> Enable VSS = yes
> Ignore FileSet Changes = no
> Include {
> Options {
> compression = GZIP
> signature = MD5
> }
> }
> File = C:/Programmi/Test
> }
Actually, THIS FileSet will not back up anything, bec
te everything.
If it's taking forever to keep the machines mirrored with rsync, it'll
take forever and a day with Bacula. Use the right tool for the job.
Bacula is a backup suite, not a synchronization tool. Rsync is, in
fact, the best tool for this job.
--
>> already?
>>
>>
>> Please post the output of
>>
>> list media pool=POOL-FUL-bkp13
>>
>> John
>>
>
> No I do not, all the volumes were created from File12 and not from File12e.
> Should I edit the pool?
Then you need to either ma
wo" but have Pool="POOL-
>> FULL-vps-serverone" nreserve=0 on drive "bkp12-disk" (/bacula).
>>
>>
>
> The problem is each job uses a different pool and only 1 volume can be
> loaded in a device at a time so the second job has to wait.
If it
piled for MySQL.
> bacula-postgresql-5.0.1-1.su112.x86_64.rpm
All the server daemons, compiled for PostgreSQL.
> bacula-sqlite-5.0.1-1.su112.x86_64.rpm
All the server daemons, compiled for SQLite.
--
bles already built?
No. Bacula-mysql will be a package of all the bacula server tools,
built to use mysql databases, as compared to bacula-sqlite or
bacula-postgresql.
--
-1.su112.x86_64.rpm
> bacula-mysql-5.0.1-1.su112.x86_64.rpm
If I were you, I would wait a day or two and grab 5.0.2, which was
released today.
--
r that job, and set Maximum Volume Jobs
= 1 for that Pool.
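A hedged sketch of such a Pool (names other than the quoted directive are invented for illustration):

```conf
Pool {
  Name = "one-job-per-volume"
  Pool Type = Backup
  Maximum Volume Jobs = 1      # start a fresh volume for every job
  Label Format = "vol-"        # auto-label the new volumes
}
```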
--
'portable'
format.) Use the job parameter to select which client's backup you're
restoring from; use the client parameter to select which client to
restore files to; use the where parameter to specify where on that
client to restore to.
--
gt; Pool: Weekly
> Media type: ULTRIUM-LTO-4
[...]
Have you tried putting a 'Maximum Volumes = 13' directive in your Pool
resource to tell it not to create any more new volumes besides the ones
you already have?
(Though if your retention is 14 days, you probably
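As a sketch, the directive in question would sit in the Pool resource like this (other names and values are illustrative):

```conf
Pool {
  Name = "Weekly"
  Pool Type = Backup
  Maximum Volumes = 13    # never label or create a 14th volume
  Recycle = yes           # reuse the oldest purged volume instead
}
```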
On 04/22/10 17:47, Joseph Spenner wrote:
> --- On Wed, 4/14/10, Phil Stracchino wrote:
>
>>
>> Joseph,
>>
>> That looks much better. Now your volumes have the
>> correct nine days
>> retention time for your desired ten-day cycle, instead of
>> ze
disable job="Your Job Name Here"
You can do this either by typing the command directly at a console, or
from the right-click menu from the Job list in BAT. Then, tomorrow,
just undo it:
enable job="Your Job Name Here"
And you're all set.
--
b or JobDefs is
both cleaner and more reliable, and also allows Jobs using the same
Schedule to use different override Pools.
(In this case, the Incremental Pool and Differential Pool directives are
redundant and should be unnecessary. But it won't hurt anything to have
them there, and the ex
ytes exactly as they should be.
Still, knowing how to determine exactly what got backed up by a specific
job is a useful thing.
--
Another
alternative would be to start up BAT, go to Jobs Run, select job 183,
right-click, and select "List Files On Job"; Bacula will tell you
precisely which files were backed up. (You can accomplish the same
thing using bconsole, but it'll require a manual SQL query.) Then you
the VARIABLE
information in the Job resource. Putting the variable information in
the JobDefs resource completely defeats the purpose of JobDefs.
- Use your Schedule to run each client's Job at different levels.
- Create different Pools for different Storage devices, unless the two
devices
the bacula.org site as a bug. It should be a pretty simple fix.
--
been on the
> forum
I hear you. Just be aware that it could result in purging a backup that
you really needed. (If, say, someone does a restore that requires a
tape from a full backup, and forgets to change the tape afterward.)
--
ript that will purge the current
> tape in the tape drive so that bacula uses this tape to do the
> nightly backup.
This is potentially dangerous. If you're having to resort to things
like this, you're doing your Bacula volume management wrong.
--
backup data appended
> to that volume. Is there a way to make it wipe the volume clean before
> writing?
What are the full completion messages for the job?
--
Once purged, you can either let Bacula recycle them itself, or you can
use the Update Volume command to manually change the volume status to
Recycle. This is a particularly straightforward operation if you are
using BAT,
d as soon as
they complete. (That's why you were always seeing "No prior job found".)
I suggest you now purge and recycle all of your existing volumes so that
you're starting over from a clean slate, let it run, and see how it
goes. Right now, you have only one appendable volum
}
>
> Could it be the "Volume Use Retention"? Currently, my backups run fast since
> there are only 2 clients. So, if they're done within an hour, maybe this
> setting isn't doing what it would normally do when backups take a few hours
> to run?
Did you Updat
at I can't install the bacula client.
Ah, so yet another situation where policies written by the ignorant
actively get in the way of getting critical work done.
I feel your pain. :p
--
cross your network
twice, to and from the client mounting the CIFS share, and therefore
limiting your backup speed for that share to AT MOST half the client's
available bandwidth.
--
On 04/12/10 11:58, Joseph Spenner wrote:
> --- On Mon, 4/12/10, Phil Stracchino wrote:
>>
>> Did you fix the retention period yet? If it's
>> immediately reusing the
>> first volume, it probably means your retention is too
>> short. If you're
>&
and your retention
period needs to be set such that the first volume becomes available
again just after the last volume is used (for a ten-day rotation, nine
days should be right).
Make sure that after you update the Pool resource, you FIRST update the
Pool from the resource, THEN update ALL
l want to fix that Volume Retention directive, too, unless you
really want the volume to be pruned the day after the backup. If your
plan is to use the volumes in a 10-day rotation, try setting the Volume
Retention to 9 days.
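Put together as a hedged sketch (the Pool name and the pruning directives around the retention value are illustrative):

```conf
Pool {
  Name = "ten-day-rotation"
  Pool Type = Backup
  Volume Retention = 9 days   # first volume frees up just before it's needed again
  AutoPrune = yes
  Recycle = yes
}
```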
--
very client
> on every run?
>
> I suspect it is at least 1 of the following:
>
> File Retention
> Job Retention
> AutoPrune
> Recycle
> Volume Retention
I strongly doubt it has anything to do with any of those settings. Why
not post your actual Pool and Storage d
On 04/08/10 09:13, Craig Ringer wrote:
> Phil Stracchino wrote:
>> I'll be interested to see those results. Which filesystems are you testing?
>
> I'm interested in ext3, ext4 and xfs. I should probably look at zfs too,
> but don't have any hosts that it runs
On 04/08/10 03:53, Craig Ringer wrote:
> BTW, When I suggested that greater write concurrency would be desirable
> and should be easier, Phil Stracchino raised some concerns about
> concurrent writes to a file system increasing fragmentation and hurting
> overall performance. Rather th
job?
That certainly ought to work. A properly written script, passed the
client name and level, should even be able to make an informed guess at
roughly how much disk the job should require.
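A sketch of such a guess (the client names, baseline sizes, and per-level ratios are invented placeholders; a real script would derive them from past job history in the catalog):

```python
# assumed historical full-backup sizes per client, in GB (hypothetical data)
FULL_SIZE_GB = {"web1-fd": 120, "db1-fd": 400}
# rough fraction of a full backup each level typically produces
LEVEL_RATIO = {"Full": 1.0, "Differential": 0.3, "Incremental": 0.05}

def estimate_gb(client, level, safety=1.2):
    """Informed guess at disk needed for a job, padded by a safety factor."""
    base = FULL_SIZE_GB.get(client, 100)   # fallback if the client is unknown
    return base * LEVEL_RATIO[level] * safety

print(f"{estimate_gb('db1-fd', 'Incremental'):.1f} GB")
```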
--
t;
> It's a trivial change that would enable Bacula to use any builtin
> hardware crypto engine supported by OpenSSL. Worth making, so that by
> the time the new Intel hardware hits Bacula supports it?
>
> --
> Craig Ringer
Sounds like a good idea to me. Want to write up
estore all of it or
> none of it, you don't get to pick individual files.
Well, actually, you can still restore individual files. You just need
to bscan the volume back into the catalog first.
Looks like your retention times need some tuning, though.
--
ly to another SD,
something for which at present there is no provision. With that out of
the way, implementing cross-SD copy-and-migrate would probably be fairly
trivial.
--
e new disk-only SD and volumes of any kind on the
traditional SD, because Bacula does not yet support copy or migration
between different SDs. At this time, both source and destination
devices are required to be on the same SD.
--
ime.
Oh, sure. But as best I can understand the OP, it seems to me that
verify is what he's looking for. However, I could easily be
misunderstanding.
--
about crypto in the archives
> in case I don't have any luck following it up with an actually parallel
> implementation and others are looking into it later.
>
> The next step is to try to spawn worker threads to encrypt chunks in
> parallel. Hopefully this will be possible with
ries in Bacula (and many
> other things) for parallel crypto and many other parallel tasks, as
> they're excellent even without special hardware. Unfortunately they're
> rather GPL-incompatible and are only "free" for non-commercial use.
It would indeed be very ni
This just doesn't mean that we do baremetal or something like that
> automated :):)
This sounds as though you should look into the Verify feature and see if
it does what you need.
--
was less than [directive value] free space on the device. I'm not
sure whether this has actively gone anywhere yet.
--
ed; I haven't
> noticed any significant write performance drops over time.
>
> It may slow restores a little, but again with a many-spindle array I'm
> not sure how much practical effect it'll have. Is fragmentation
> avoidance worth all this complexity?
chedule = "Monthly Rotation"
> [...]
> }
>
> ... or something like that, so other things may be overridden based on
> level.
This is a good suggestion, I think. I'd go ahead and write it up and
submit it. :)
--
On 04/07/10 00:42, Craig Ringer wrote:
> Phil Stracchino wrote:
>> I can confirm that it still works in 5.x as well. I use this for disk
>> volumes:
>>
>> Label Format =
>> "FULL-$Year${Month:p/2/0/r}${Day:p/2/0/r}-${Hour:p/2/0/r}:${Minute:p/2/0/r}"
&
On 04/07/10 01:05, Craig Ringer wrote:
> Phil Stracchino wrote:
>> It is possible right now to open more than one file-based volume at a
>> time.
>> You simply need to define multiple storage devices under the same
>> storage daemon; each device can have one volume o