Bacula on FreeBSD is at 2.4.2. Are there any short-term prospects of a version
3 port or package?
Yudhvir
--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://
Thanks Martin,
You have put a good closure on the quest for knowledge. If I upgrade Bacula,
will I have to upgrade the database? Meaning, do I have to run those update
table scripts? I am on PostgreSQL version 8.2.9.
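Yes, when the catalog schema version changes between releases you run the
update script that ships with the new Bacula. A hedged sketch of the usual
sequence (paths, rc script name, and backup location are assumptions for a
FreeBSD ports install; adjust to yours):

```shell
# Stop the director first so nothing writes to the catalog
# (rc.d path is an assumption -- check your install)
/usr/local/etc/rc.d/bacula-dir stop

# Dump the catalog before touching the schema, just in case
pg_dump bacula > /var/backups/bacula-catalog-$(date +%Y%m%d).sql

# Run the schema update script shipped with the new Bacula version
# (install path varies; often under the share/ or scripts/ directory)
/usr/local/share/bacula/update_postgresql_tables

/usr/local/etc/rc.d/bacula-dir start
```

If the schema is already current the script is harmless, but take the dump
anyway.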
Yudhvir
OK, this shows why it is slow. The algorithm in add_findex is only
> eff
Thanks for all your help you guys. I am impressed with the level of
expertise here!
> Error accessing memory address 0x7fbff000: Bad address.
> > #0 0x0040c043 in add_findex ()
>
> The function add_findex is interesting, but I think your bacula-dir
> was
>
> Try the following gdb
(gdb) thread apply all bt
Thread 4 (Thread 0x801902180 (LWP 100350)):
#0 0x0008016f98cc in nanosleep () from /lib/libc.so.7
#1 0x0008009078c5 in nanosleep () from /lib/libthr.so.3
#2 0x0044e21e in bmicrosleep ()
#3 0x0042408d in wait_for_next_job ()
#4 0x00408a
I got into gdb but know very little how to move around in there. I tried:
[r...@lucifer ~]# gdb /usr/local/sbin/bacula-dir 27410
GNU gdb 6.1.1 [FreeBSD]
This GDB was configured as "amd64-marcel-freebsd"...(no debugging symbols
found)...
Attaching to program: /usr/local/sbin/bacula-dir, process 27410
>
> Did you wait till the cpu went back to low cpu usage?
No, it stays high overnight and my patience runs out before cpu pegging
does.
> Depending on
> your configuration and optimization of your database this could take
> anywhere from a few minutes to a few hours to finish.
>
> I assume the d
Although the cpu is pegged at 100%
Yudhvir
>
> The dir and database are on the same machine and memory is not a problem.
>
>
John,
The dir and database are on the same machine and memory is not a problem. I
tried a partial restore - it restores files but not recursively, meaning no
subdirectories. Then I tried restoring the subdirectory; it gets that too, but
no sub-subdirectories.
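For what it's worth, inside the restore file-selection tree the mark command
is normally recursive. A sketch of a bconsole session (2.4.x behavior is what
I have in mind; the paths are hypothetical):

```text
* restore client=client1-fd fileset=Client1-Fileset current
...
cwd is: /
$ cd /export/home
$ mark *         # marks everything under the cwd, recursively
$ estimate       # reports how many files are now marked
$ done
```

If mark * only picks up the top-level files for you, that would point at the
tree itself being incomplete rather than at the marking.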
Yudhvir
--
Trying to restore files using bconsole:
* restore client=client1-fd fileset=Client1-Fileset select current all done
It does the 'select', 'current', and 'all' but sits there on the 'done' part.
I have left it like this overnight with no change in status. My setup is
Bacula 2.4.4 DIR and SD on a F
HELP
How do I actually restore 11.6 million files from a backup job?
SETUP
Bacula 2.4.4 DIR and SD on FreeBSD 7.1; backed up 11.6 million files
compressed into 372 GB. I am trying to restore them onto a different
system. I use bconsole to say:
BCONSOLE COMMAND
* restore client=client1-fd
Does the compression happen at the fd side or at the sd side?
* My fd side is Solaris and gzip version is: gzip 1.3.5 (2002-09-30)
* And another fd side is Solaris with the same version 1.3.5 - and it does
compression fine.
* My sd side is FreeBSD and gzip version is: FreeBSD gzip 20070711
Yudh
PROBLEM
The dir, sd, and catalog are on a FreeBSD/ZFS machine and the clients are on
Sun Solaris ZFS. One client does compression fine and the other does no
compression. Both client configuration FileSets have the same compression
lines.
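As I understand it, Bacula's software compression is done by the File daemon
on the client (the SD only ever sees already-compressed data), and the fd uses
the zlib library it was linked against rather than the system gzip binary, so
the installed gzip versions shouldn't matter here. It is enabled per-FileSet
in the director config - a sketch with hypothetical names and paths:

```conf
FileSet {
  Name = "Client1-Fileset"       # hypothetical name
  Include {
    Options {
      signature = MD5
      compression = GZIP         # performed by the client fd, via zlib
    }
    File = /export/home          # hypothetical path
  }
}
```

If one client isn't compressing despite an identical FileSet, I would check
whether that fd was built with zlib at all (its startup log usually says).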
BACKING UP
a zfs partition:
05-Jun 18:08 titanic-dir JobId 134
bconsole 'status jobs' shows:
Terminated Jobs:
 JobId  Level   Files      Bytes  Status  Finished          Name
====================================================================
     2  Full        0          0  OK      12-Apr-09 01:41   BackupCatalog
30
PROBLEM - I have run multiple fulls and have NEVER been able to complete a
PG insert operation. Granted my fileset is large - about 9.8 million files,
~700GB. What am I doing wrong?
SYSTEM
FreeBSD 7.1-RELEASE system, 8 GB RAM, 4-core AMD64 processor, Sun Ultra 40
DATABASE
Postgresql version 8.2
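With roughly ten million file records per job, the stock postgresql.conf is
far too conservative. A hedged starting point for an 8 GB machine on 8.2 -
these values are assumptions to tune from, not gospel:

```conf
# postgresql.conf -- suggested starting values for an 8 GB box (8.2.x)
shared_buffers = 1GB             # stock default is only a few MB
work_mem = 64MB                  # per-sort memory; helps the big sorts/joins
maintenance_work_mem = 256MB     # index builds and vacuum
checkpoint_segments = 32         # fewer checkpoint stalls during bulk insert
wal_buffers = 8MB
```

Restart PostgreSQL after changing shared_buffers, and watch the logs for
checkpoint warnings during the insert phase.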
Michael,
This is a work in progress and I'll keep everyone posted on what my configs
are once I know something works. I have re-compiled bacula with batch mode
turned on.
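For anyone following along, batch insert is a compile-time option. A sketch of
the rebuild - --enable-batch-insert is the relevant flag; the others just
mirror a typical PostgreSQL build and are assumptions:

```shell
# Rebuild Bacula with batch-mode catalog inserts enabled.
# Note: batch insert needs a thread-safe libpq, so check how your
# PostgreSQL client library was built.
./configure --with-postgresql --enable-batch-insert --prefix=/usr/local
make && make install
```

You can confirm it took effect afterwards: the dir version banner reports
whether batch insert is enabled.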
Yudhvir
===
On Sun, Nov 30, 2008 at 9:18 AM, Michael Galloway wrote:
> On Thu, Nov 27, 2008 at 04:03:50PM +0100, Daniel Betz
SWAGGER
Yes, I mean to swagger here. I am migrating to Bacula and just the mail and
user directories are about 690 GB. I expect our full dataset to back up will
end up in the 30 TB range.
REASON TO SWAGGER
I am looking for anyone else in the same boat. Looking for hardware which
will support this
MY SITUATION
I can take your megabytes and shame you with my 9,868,868 mostly Maildir
files and the 690.8 GB of space they take up. Take that! It took 25 hours to
transfer and is currently "indexing." Before I ramble on, here is some
configuration info:
CONFIGURATION
dir Version: 2.4.2 (26 July 2008), a
I have over 200 users with about 20 TB of data that we back up. And I have
a problem...
THE PROBLEM
The concept of backup is starting to lose its meaning with desktops
sporting 1.5 TB drives. How can we stay abreast of backing up ever
increasing drive sizes in workstations?
CENTRALIZED BA
AWESOME: Total newbie. Great software! I just came up with an alternative to
"it comes in the night..." - "Works great, less filling"
VERSIONS: On FreeBSD 7.0, installed bacula 2.2.5 and trying to veer away
from Networker.
225 CLIENTS: About 100 of my users are on Linux and 100 on OS X with a
coup