Re: [Bacula-users] purge recycle used, somethings went wrong

2007-04-14 Thread Manuel Stächele
Hi Arno,

Now I have actual, real data, and I still do not understand what is going wrong.

I just checked the backup and found the following messages:

14-Apr 01:18 fileserver-dir: Pruned 6 Jobs on Volume 
Freitag_103_ab_30.1.06 from catalog.
14-Apr 01:18 fileserver-sd: ntserver.2007-04-14_01.05.00 Warning: 
Director wanted Volume Donnerstag_303_ab_30.1.06 for device SDLT 
(/dev/nst0).
 Current Volume Freitag_103_ab_30.1.06 not acceptable because:
 1998 Volume Freitag_103_ab_30.1.06 status is Used, but should be 
Append, Purged or Recycle (cannot automatically recycle current volume, 
as it still contains unpruned data or the Volume Retention time has not 
expired.).
14-Apr 01:18 fileserver-sd: Please mount Volume 
Donnerstag_303_ab_30.1.06 on Storage Device SDLT (/dev/nst0) for Job 
ntserver.2007-04-14_01.05.00
14-Apr 02:18 fileserver-sd: Please mount Volume 
Donnerstag_303_ab_30.1.06 on Storage Device SDLT (/dev/nst0) for Job 
ntserver.2007-04-14_01.05.00
14-Apr 04:18 fileserver-sd: Please mount Volume 
Donnerstag_303_ab_30.1.06 on Storage Device SDLT (/dev/nst0) for Job 
ntserver.2007-04-14_01.05.00


OK, Bacula cannot prune the volume, but why?
The whole last week it worked fine, but now Bacula can't purge again.

Here is the llist command output for the Freitag volume:


         MediaId: 20
      VolumeName: Freitag_103_ab_30.1.06
            Slot: 0
          PoolId: 1
       MediaType: SDLT
    FirstWritten: 2007-03-23 09:44:52
     LastWritten: 2007-03-24 04:04:30
       LabelDate: 2007-03-23 09:44:52
         VolJobs: 6
        VolFiles: 145
       VolBlocks: 2,206,529
       VolMounts: 82
        VolBytes: 142,347,429,166
       VolErrors: 0
       VolWrites: 28,399,307
VolCapacityBytes: 0
       VolStatus: Used
         Recycle: 1
    VolRetention: 432,000
  VolUseDuration: 82,800
      MaxVolJobs: 0
     MaxVolFiles: 0
     MaxVolBytes: 0
       InChanger: 1
         EndFile: 144
        EndBlock: 14,861
        VolParts: 0
       LabelType: 0
       StorageId: 2

And the List Volumes output:

| MediaId | VolumeName             | VolStatus | VolBytes        | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType | LastWritten         |
+---------+------------------------+-----------+-----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
|      20 | Freitag_103_ab_30.1.06 | Used      | 142,347,429,166 |      145 |      432,000 |       1 |    0 |         1 | SDLT      | 2007-03-24 04:04:30 |
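
If I work out the retention myself (my arithmetic, not Bacula output):

   VolRetention   = 432,000 s = 5 days
   VolUseDuration =  82,800 s = 23 hours

   earliest recycle = LastWritten + VolRetention
                    = 2007-03-24 04:04:30 + 5 days
                    = 2007-03-29 04:04:30

so by 14-Apr the retention time had long expired.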

There are no messages in between ... then I entered the mount command and 
got the following output:


14-Apr 07:52 fileserver.issler.de-dir: Recycled current volume 
Freitag_103_ab_30.1.06
14-Apr 07:52 fileserver.issler.de-sd: Recycled volume 
Freitag_103_ab_30.1.06 on device SDLT (/dev/nst0), all previous data 
lost.


Why now? I cannot understand this.
After I mounted the volume manually, Bacula recycled the volume and the 
jobs started to run. But why does this not happen automatically?

Any hints?

Thanks,

Manuel


Arno Lehmann wrote:
 Hello,
 
 On 4/3/2007 3:14 PM, Manuel Stächele wrote:
 hi,

 something is going wrong, so first I'll give you the pool definition from 
 my configs, so you can follow what I'm talking about.

 Pool {
    Name = Woche
    Pool Type = Backup
    Recycle = yes                    # allow purged volumes to be reused
    AutoPrune = yes                  # prune expired jobs when a volume is needed
    Volume Retention = 5 days        # = 432,000 seconds
    Volume Use Duration = 23 hours   # = 82,800 seconds; volume goes to Used after this
    Accept Any Volume = yes
    Recycle Current Volume = yes     # may prune/recycle the volume in the drive
 }

 We have a daily cycle which runs every job as a Full backup every day ;-)
 We have the following volumes:

 mon1, tue1, wed1, thu1, fri1
 mon2, tue2, wed2, thu2, fri2
 mon3, tue3, wed3, thu3, fri3

 It works fine: after the Volume Use Duration the volume status changes to 
 Used, no problem!

 The problem: when Bacula wants to reuse a volume, it looks at and tries to 
 prune the volume in the drive. But in the last days it always says:

 could not prune all data, could not recycle because there are still jobs 
 on the volume.

 - It worked perfectly for about 3-4 weeks; now it's broken, and I do not 
 know why.
 - If I enter mount in bconsole after Bacula sends me the 'could not 
 prune' error, then it often works: it prunes the outstanding jobs and 
 recycles the volume.

 Can you tell me what's wrong?
 
 The jobs might 'overtake' your retention periods.
 
 Use the llist command to find out when the volume requested was last 
 written to, add the retention time, and then you see when the tape can 
 be recycled.
 
 I guess that the job requiring this tape is scheduled to a time before that.
 
 This can happen when your jobs wait a little for the tape each day, and 
 this time will add up, because the retention time only starts after the - 
 now postponed - jobs finish.
 
 After some time, the retention delay might become greater than the spare 
 time you have set up.
 
 In the end, Bacula can't prune jobs because they are still in the 
 retention period when the jobs start.
 
 
 I know my English is pretty poor, I am sorry; please ask me if you don't 
 understand my problem.
 
 Oh, I hope I understood your problem right. A more detailed explanation 
 might be possible with some real data. Post the actual messages you get, 
 along with the llist output for the volume requested, and also add the 
 query results for this 

[Bacula-users] bacula-users email list

2007-04-14 Thread Kern Sibbald
Hello,

This email is to let you know that I will be unsubscribing from the 
bacula-users email list, and thus will no longer read or respond to those 
emails, unless I am either directly copied, or a copy is sent to the 
bacula-devel list.

This decision is motivated by the fact that the number of emails has grown 
quite large, and reading them all requires a good deal of time. In 
addition, I believe that the list is very much self-sustaining; in fact, my 
attempts to clarify certain points seem to me to be more and more contested, 
possibly because I don't follow the list as closely as is necessary to give 
appropriate responses.

I would appreciate it if users would not copy me unless it is a question for 
development, or in the case of something that is *very* likely a bug.  I 
would appreciate it if some of the experienced list participants would bring 
to my attention anything that seems important (copy me or the devel list).  
However, please ensure that the appropriate basic information is provided 
(e.g. Bacula version, DB type, sample output showing the error/problem, ...).

The downside of this is that I often make lots of minor improvements to the 
manual based on comments/misunderstandings on the bacula-users list.  

Best regards,

Kern



Re: [Bacula-users] Checking my understanding of Maximum Concurrent Jobs

2007-04-14 Thread Johan Ehnberg
Flak Magnet wrote:
 A couple of days ago I added a 2nd computer with the file and storage 
 daemons to what was previously a single-system bacula config.
 
 They're working just fine.  Yay!
 
 I'd like help, tips, and caveats for concurrent jobs.  My setup is like this:
 
 server8: Solaris8, Bacula v2.0.3, MySQL, DIR/SD/FD, Quantum-SL3 w/ 1 LT03.
 
 server57: Solaris8, Bacula v 2.0.2, SD/FD, Qualstar TLS4210 w/ 1 AIT3.
 
 All jobs from server8 go to the storage daemon on server8.
 Likewise, all jobs from server57 go to the SD on server57.
 
 I want to allow one job to run concurrently on each SD/FD pair so that 
 if one of my libraries is waiting for a tape or is just misconfuggered, 
 the jobs for the other server will still execute.  I believe this is to 
 be done with Max Concurrent Jobs.
 
 Each SD/FD pair runs one job for data.  THEN I back up the bacula 
 database on server8 to the server8-sd.
 
 So, if i understand the directive properly, in their respective config 
 files I set the Maximum Concurrent Jobs to equal:
 
 Director:  2 or even higher.
 
 Job:  1 (For the data-related jobs because I don't run the same job more 
 than one at a time.)
 
 Client:  1 (I only run 1 job at a time.)
 
 Storage: 1 (Because I don't want/need to interleave jobs on tapes.)
 
 I'm pretty confident that I have it right up to this point.  BUT!  I 
 don't want to back up the database in the middle of server57's 
 data-backup.  I don't think there is a directive that I can use to 
 ensure that the database job doesn't run until ALL jobs for the day are 
 done running.  So I have to accomplish that by scheduling it long enough 
 after all my other jobs that they should be done, assuming they're not 
 waiting on a tape, etc...  Correct?
 
 Thanks!
 
 --Tim

The logic of your Max Concurrent Jobs settings looks fine.
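
For reference, a minimal sketch of where those settings live (the resource 
names are placeholders, and the SD has its own Maximum Concurrent Jobs 
setting in its configuration as well):

# bacula-dir.conf
Director {
  ...
  Maximum Concurrent Jobs = 2   # let both SD/FD pairs work in parallel
}
Client {
  Name = server8-fd
  ...
  Maximum Concurrent Jobs = 1   # one job per client at a time
}
Storage {
  Name = server8-sd
  ...
  Maximum Concurrent Jobs = 1   # no interleaving of jobs on the tape
}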

The default config shows how to schedule the catalog backup: you can 
schedule it a bit later than the main backups (just 5 minutes is enough) 
so it won't start _before_ them. To have it wait for the others to 
finish, just give it a lower priority (a higher priority number).
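
A minimal sketch, along the lines of the default bacula-dir.conf (the 
names are those of the default config):

Job {
  Name = "BackupCatalog"
  ...
  Schedule = "WeeklyCycleAfterBackup"   # a few minutes after the data jobs
  Priority = 11                         # data jobs run at Priority = 10
}

Since a job with a higher priority number does not start while 
lower-numbered jobs are still running, the catalog backup waits for them all.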

Cheers,
Johan

-- 
Johan Ehnberg

Email: [EMAIL PROTECTED]
GSM:   +358503209688
WWW:   http://www.ehnberg.net/johan/



Re: [Bacula-users] Two tiered bacula setup?

2007-04-14 Thread Johan Ehnberg
Josh Kuo wrote:
 Hi all:
 
 I am new to Bacula. I have read through the documentation and a
 portion of the mail archive, but perhaps I am searching using the
 wrong keywords; I could not find what I am looking for, and I was
 hoping that someone could point me in the right direction. Apologies
 if this has been brought up and answered before, but I could not find the
 answers myself.
 
 Here is the problem I need to solve:
 
 - Client site has low bandwidth (upload) to the Internet (less than 0.5Mbps).
 - Client site has several PCs with combined data of 30GB that needs
 to be backed up. Assume new data accrues at about 1GB per week.
 - I am bringing a server on-site to serve as on-site backup, with
 bacula installed. I install bacula on all the client boxes, and they
 back up at least once a day to the on-site bacula box. Let's call this
 the LAN backup tier.
 - I also need to replicate some of the data off-site, for emergency
 recovery in case of catastrophic event (say fire burned down the
 office).  I need to do this at least once a week (maybe even more
 frequently, nightly would be preferred). Let's call this the WAN
 backup tier.
 
 My plan is to use bacula for the LAN backup at least once a day (maybe
 more frequently depending on our needs), and use some sort of 'rsync'
 mechanism for the WAN backup, so each remote site's data is replicated
 in our data center.
 
 Question:
 Can I use bacula for my WAN backup? I did not think it would work too
 well because the LAN backup would leave me with some binary
 tarball-ish files every night, in my example, if I do a full backup
 (not incremental), then I need to re-transmit 30GB of data across the
 WAN link, which would take literally days to complete.  I only want to
 send changes, and only do the 30GB massive dump the very first time.
 In fact, my plan is to send a guy on-site to manually carry the 30GB
 (maybe more) data back to our data center (where possible).
 
 My current thought is to do my LAN backup (this part is easy with
 bacula), and then immediately after each successful backup, do a
 restore locally somewhere, so it is decompressed back out to
 directories and files; then I can use rsync (or bacula client) to do
 my WAN level backup, which should only transmit changes over the said
 0.5Mbps link to the data center.  Hopefully, since I am only
 transmitting changes, it will only take a few hours every night,
 instead of several days if I am transmitting the whole thing.
 
 So another question:
 How do I set up Bacula to automatically restore somewhere? The docs I
 have read so far talk about using the command-line console to
 restore files, and I don't want to write an expect script to do the
 restore...
 
 I am hoping that someone out there has already solved this problem
 before, and came up with a much more elegant solution than mine.  If so,
 I would love to hear what others have done in this situation.  If I am
 way off on my solution, please tell me, as I am still fairly new to
 the whole large scale backup thing.
 
 Thanks in advance for any help, and again, my apologies if this has
 been answered in the past.  I will keep reading the docs and mailing
 list archive in search of my answers.
 
 -Josh
 

For the WAN link I would suggest the following model, adapted from what I 
developed for roaming clients with slow links:

Schedule a restore job (or many, if you want to use bootstraps) on the SD 
machine. Then in that job, after dumping the files, run the rsync 
script. The script should call rsync with something like this:

-azv --delete-after --bwlimit=40

The bwlimit is good to ensure that the site is not crippled by the job. 
Also, running it every night is a good idea to split up the work.

The WAN side can then run jobs on the imported rsync files if you want more 
than just an off-site repository. Before the Full job, run a second 
restore job with otherwise the same rsync invocation, but with '-c' added. 
This verifies every file's contents to be sure nothing has been corrupted. 
It's way faster than sending every file, but takes a lot of resources 
while running.

To schedule a restore job, and to run scripts, see my example:
Job {
   Name = JOBNAME
   Type = Restore                           # a scheduled restore acts as the push job
   Client = LAN-SD
   Fileset = FILESET
   Storage = LAN-SD
   Schedule = JOBSCHEDULE
   Priority = 10
   Pool = Default
   Messages = Standard
   Where = /tmp/PUSHJOB                     # restore target; rsync reads from here
   Bootstrap = /var/lib/bacula/CLIENT.bsr
   ClientRunAfterJob = /etc/bacula/SCRIPT   # runs the rsync script after the restore
}

Schedule {
   Name = JOBSCHEDULE
   Run = daily at 21:05
}
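
The SCRIPT referenced above could be as simple as this sketch (the remote 
host and target path are placeholders):

#!/bin/sh
# /etc/bacula/SCRIPT - push the freshly restored tree off-site,
# sending only changes, capped at 40 kB/s.
rsync -azv --delete-after --bwlimit=40 /tmp/PUSHJOB/ datacenter:/backup/CLIENT/

# Before the off-site Full, run the same transfer with -c added, so
# every file is checksummed instead of compared by time and size:
# rsync -azvc --delete-after --bwlimit=40 /tmp/PUSHJOB/ datacenter:/backup/CLIENT/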

If 

[Bacula-users] can not connect to mysql socket

2007-04-14 Thread Dave
Hello,
I'm trying to get Bacula 2.0.3 working on FreeBSD. I'm using MySQL 5 as 
the database server, and I'm getting an error that a connection cannot be 
made to /tmp/mysql.sock. I've confirmed the file does exist, and using the 
mysql command I can log in to the bacula database. For my MySQL install I've 
added the skip-networking and wait_timeout options to my /etc/my.cnf file, 
and I've also added a password for the root user. I've checked the manual, 
and in the Catalog resource there's a section for defining a DB socket; 
Bacula is using the right one. Here's my Catalog resource definition:

# Generic catalog service
Catalog {
  Name = MyCatalog
  dbname = bacula; user = bacula; password = 
}
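
For what it's worth, the socket can also be set explicitly in the Catalog 
resource; a sketch (the path must match what mysqld actually creates):

Catalog {
  Name = MyCatalog
  dbname = bacula; user = bacula; password = ...
  DB Socket = /tmp/mysql.sock   # must match the socket= line in /etc/my.cnf
}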

The full error is below, but the base problem seems to be that Bacula can't 
talk to MySQL over the socket.
Thanks.
Dave.

#bconsole
Connecting to Director bacula.davemehler.com:9101
1000 OK: zeus-dir Version: 2.0.3 (06 March 2007)
Enter a period to cancel a command.
*run
Automatically selected Catalog: MyCatalog
Using Catalog MyCatalog
A job name must be specified.
The defined Job resources are:
 1: BackupCatalog
 2: RestoreFiles
 3: zeus_Root_Usr
 4: isos-dvd
Select Job resource (1-4): 4
Run Backup job
JobName:  isos-dvd
Level:    Incremental
Client:   zeus-fd
FileSet:  Iso Files
Pool: DVD-Pool (From Job resource)
Storage:  DVD-Writer (From Pool resource)
When: 2007-04-14 06:17:12
Priority: 10
OK to run? (yes/mod/no): y
Job queued. JobId=16
You have messages.
*
*messages
14-Apr 06:10 zeus-dir: isos-dvd.2007-04-14_02.17.47 Fatal error: Network error with FD during Backup: ERR=Broken pipe
14-Apr 06:10 zeus-dir: isos-dvd.2007-04-14_02.17.47 Fatal error: No Job status returned from FD.
14-Apr 06:10 zeus-dir: isos-dvd.2007-04-14_02.17.47 Error: sql_update.c:194 sql_update.c:194 update UPDATE Job SET JobStatus='f',EndTime='2007-04-14 06:10:00',ClientId=1,JobBytes=0,JobFiles=0,JobErrors=3,VolSessionId=1,VolSessionTime=1176530713,PoolId=7,FileSetId=1,JobTDate=1176545400,RealEndTime='2007-04-14 06:10:00',PriorJobId=0 WHERE JobId=14 failed:
Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)
14-Apr 06:10 zeus-dir: sql_update.c:194 UPDATE Job SET JobStatus='f',EndTime='2007-04-14 06:10:00',ClientId=1,JobBytes=0,JobFiles=0,JobErrors=3,VolSessionId=1,VolSessionTime=1176530713,PoolId=7,FileSetId=1,JobTDate=1176545400,RealEndTime='2007-04-14 06:10:00',PriorJobId=0 WHERE JobId=14
14-Apr 06:10 zeus-dir: isos-dvd.2007-04-14_02.17.47 Warning: Error updating job record. sql_update.c:194 update UPDATE Job SET JobStatus='f',EndTime='2007-04-14 06:10:00',ClientId=1,JobBytes=0,JobFiles=0,JobErrors=3,VolSessionId=1,VolSessionTime=1176530713,PoolId=7,FileSetId=1,JobTDate=1176545400,RealEndTime='2007-04-14 06:10:00',PriorJobId=0 WHERE JobId=14 failed:
Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)
14-Apr 06:10 zeus-dir: isos-dvd.2007-04-14_02.17.47 Fatal error: sql_get.c:288 sql_get.c:288 query SELECT VolSessionId,VolSessionTime,PoolId,StartTime,EndTime,JobFiles,JobBytes,JobTDate,Job,JobStatus,Type,Level,ClientId,Name,PriorJobId,RealEndTime,JobId FROM Job WHERE JobId=14 failed:
Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)
14-Apr 06:10 zeus-dir: sql_get.c:288 SELECT VolSessionId,VolSessionTime,PoolId,StartTime,EndTime,JobFiles,JobBytes,JobTDate,Job,JobStatus,Type,Level,ClientId,Name,PriorJobId,RealEndTime,JobId FROM Job WHERE JobId=14
14-Apr 06:10 zeus-dir: isos-dvd.2007-04-14_02.17.47 Warning: Error getting job record for stats: sql_get.c:288 query SELECT VolSessionId,VolSessionTime,PoolId,StartTime,EndTime,JobFiles,JobBytes,JobTDate,Job,JobStatus,Type,Level,ClientId,Name,PriorJobId,RealEndTime,JobId FROM Job WHERE JobId=14 failed:
Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)
14-Apr 06:10 zeus-dir: isos-dvd.2007-04-14_02.17.47 Fatal error: sql_get.c:663 sql_get.c:663 query SELECT ClientId,Name,Uname,AutoPrune,FileRetention,JobRetention FROM Client WHERE Client.Name='zeus-fd' failed:
Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)
14-Apr 06:10 zeus-dir: sql_get.c:663 SELECT ClientId,Name,Uname,AutoPrune,FileRetention,JobRetention FROM Client WHERE Client.Name='zeus-fd'
14-Apr 06:10 zeus-dir: isos-dvd.2007-04-14_02.17.47 Warning: Error getting client record for stats: Client record not found in Catalog.
14-Apr 06:10 zeus-dir: isos-dvd.2007-04-14_02.17.47 Fatal error: sql_get.c:893 sql_get.c:893 query SELECT MediaId,VolumeName,VolJobs,VolFiles,VolBlocks,VolBytes,VolMounts,VolErrors,VolWrites,MaxVolBytes,VolCapacityBytes,MediaType,VolStatus,PoolId,VolRetention,VolUseDuration,MaxVolJobs,MaxVolFiles,Recycle,Slot,FirstWritten,LastWritten,InChanger,EndFile,EndBlock,VolParts,LabelType,LabelDate,StorageId,Enabled,LocationId,RecycleCount,InitialWrite,ScratchPoolId,RecyclePoolId FROM Media WHERE VolumeName='DVD-0001' failed:
Can't connect to local MySQL server through 

Re: [Bacula-users] [Bacula-devel] Feature Request: Layered Jobdefs

2007-04-14 Thread Kern Sibbald
Hello,

I've added this to the pending-projects list, which means it is in limbo; 
it is not forgotten, but it stays there unless sufficient Bacula community 
interest develops for it.

On Tuesday 10 April 2007 21:29, Darien Hager wrote:
 Item 1:   Allow Jobdefs to inherit from other Jobdefs
Origin: Darien Hager [EMAIL PROTECTED]
Date:   Date submitted (e.g. 28 October 2005)
Status: Initial Request
 
What:   Allow JobDefs to inherit/modify settings from other JobDefs
 
Why:Makes setting up jobs much easier for situations with many  
 clients doing similar work
 
Notes:
 
 Example: User has several JobDefs which all need Messages=standard,  
 Type=Backup, and settings for Rerun Failed Levels and Max * Time.  
 This feature would allow those common properties to be within a  
 single JobDef which each child JobDef inherits from, before the  
 final Job definition sets further specifics such as Client.
 
 Currently the documentation leaves open the possibility that this can  
 be done, but tests with Bacula 2.0.1 suggest that JobDefs entries  
 cannot themselves have a JobDefs property.
 
 Technical caveat: Should probably include rudimentary checks against  
 a cyclic relationship, such as a limit to the number of allowed layers.
 
 See also: Job Groups or hierarchy Feb 6 2007
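 
 To make the requested semantics concrete, a hypothetical sketch (this  
 syntax does NOT work in 2.0.x):
 
 JobDefs {
   Name = CommonDefaults
   Type = Backup
   Messages = Standard
 }
 
 JobDefs {
   Name = NightlyDefaults
   JobDefs = CommonDefaults   # proposed: inherit common settings, then override
   Schedule = Nightly
 }
 
 Job {
   Name = client1-nightly
   JobDefs = NightlyDefaults
   Client = client1-fd        # the final Job adds only client specifics
 }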
 
 
 
 --
 --Darien A. Hager
 [EMAIL PROTECTED]
 
 
 
 



Re: [Bacula-users] [Bacula-devel] Feature Request: Per-Client priorities

2007-04-14 Thread Kern Sibbald
Hello,

This Feature Request is a bit confusing to me.  I think I get it, but I am not 
sure.  I will present what I think you mean, and if that is the case, would you 
re-write the Feature Request using my terminology and resubmit it?  Also, 
please clarify the first part of your example (the current system with one 
priority) by showing the priorities and hypothetical start and stop wall-clock 
times for each job, and extend the second part of your example by showing the 
two priorities for each job and hypothetical wall-clock times when each job 
starts and stops under the proposed two-priority system.

As I see it, you are asking to implement a Client Priority = nn directive in 
addition to the current Priority.  

The current Priority would be applied to jobs just as it is now; in 
addition, if two Jobs from the same Client have the same Priority but 
different Client Priority values, the job with the higher Client Priority 
(lower number) would run first.
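
In configuration terms, I understand the proposal as something like this 
(hypothetical syntax, not implemented):

Job {
  Name = 1A
  Client = client1-fd
  Priority = 10
  Client Priority = 1   # proposed directive: 1A runs before 1B ...
}
Job {
  Name = 1B
  Client = client1-fd
  Priority = 10
  Client Priority = 2   # ... but the ordering applies only within client1
}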


On Tuesday 10 April 2007 21:29, Darien Hager wrote:
 Item 1:   Allow per-client job priority ordering
Origin: Darien Hager [EMAIL PROTECTED]
Date:   Date submitted (e.g. 28 October 2005)
Status: Initial Request
 
What:
 
 Allow an optional per-client priority setting which is applied AFTER  
 the current priority scheduling algorithm.
 
 Applies only within the group of waiting jobs (ergo with the same  
 global priority) that are also associated with the same Client. The  
 higher priority (lower priority number) job will always be run first.  
 Applying it only after the global priorities should make  
 implementation easier.
 
Why:Like the current global priority, allows ordering of jobs  
 to be preserved for clients.  Unlike the current system, allows other  
 clients to proceed independently of one another so that a single  
 client will not hold up others.
 
Notes:
 
 Example:
 
 Two clients (1 and 2), with two jobs per client, referred to as job  
 types A and B. Under the current system, you can set A's at priority  
 10 and B's at priority 11, and have A scheduled to run 10 minutes  
 before B. This completes the given requirement that A should always  
 precede B on either client, perhaps because they have a run-script  
 dependency or significant relative importance.
 
 Let's call the client/job combinations 1A, 1B, 2A, 2B, and assume  
 that due to differences between the clients, the times taken for each  
 job vary, perhaps one client has more of one kind of files than the  
 other does, etc. Job times required:
 
 1A: 30 minutes
 1B: 10 minutes
 2A: 10 minutes
 2B: 30 minutes
 
 Under the current system, all clients will be done in 60 minutes,  
 since all A jobs are one block, and all B jobs are a separate block,  
 and each block must take as long as its longest job.
 
 With per-client priorities (and all four jobs under the same global  
 priority level) all clients complete within half the time at 30  
 minutes, due to removing unnecessary idle time.
 
 
 --
 --Darien A. Hager
 [EMAIL PROTECTED]
 
 
 
  
 



[Bacula-users] delete a single job

2007-04-14 Thread Mike Seda
All,
Is it possible to delete a single job in Bacula 2.0.1?
Thx,
Mike
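
For reference, bconsole's delete command can remove a single job's catalog 
records (a sketch; it does not touch the data already written to the volume):

*delete jobid=123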



Re: [Bacula-users] SD crash (Didn't accept device)

2007-04-14 Thread Arno Lehmann
Hi,

On 4/13/2007 10:26 PM, Darien Hager wrote:
...
 13-Apr 02:46 spath-store: ABORTING due to ERROR in smartall.c:144
 Out of memory
 The above says that your machine is out of memory.
 
 It happened before on the previous server, and I toned down the  
 concurrent jobs (to no effect)... I don't know what it could be doing  
 which would exhaust it. The machine in question has 2GB of memory and  
 4GB of swap. Furthermore, it's a recent addition to the flock that  
 still needs more stuff installed, so the only thing it's doing now is  
 SD tasks.

I experienced a similar thing recently at a customer's installation... 
the server ran out of memory, and got restarted automatically. Cursory 
watching of memory usage showed me the SD used about 300 MB (on a 
machine with 512 MB). Afterwards, I set up some memory monitoring, but 
the problem hasn't happened again.
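
The monitoring itself is nothing fancy; a sketch of the idea (Linux procps 
syntax, and the log path is made up):

# log the SD's memory use once a minute
while true; do
  printf '%s ' "$(date '+%F %T')" >> /var/log/bacula-sd-mem.log
  ps -C bacula-sd -o rss=,vsz= >> /var/log/bacula-sd-mem.log
  sleep 60
done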

 Do you have any suggestions of things to check which may be causing  
 excessive memory usage? I can't think of anything off the top of my  
 head that the SD would be doing which could necessitate using that  
 much, assuming that its main task is network and disk I/O bound  
 rather than manipulating any large complex data structures, etc.

I've got no idea why the SD would need that much memory... usually I 
don't notice any notable memory consumption by the SD.

Can you reproduce the problem?

 Thanks,
 --
 --Darien A. Hager
 [EMAIL PROTECTED]
 
 

-- 
IT-Service Lehmann[EMAIL PROTECTED]
Arno Lehmann  http://www.its-lehmann.de



[Bacula-users] adding to can not connect to mysql socket message

2007-04-14 Thread Dave
Hello,
I've done some more checking of my Bacula situation. To recap: Bacula is 
giving me an error that it cannot connect to the MySQL socket 
/tmp/mysql.sock. This is Bacula 2.0.3 and MySQL 5 on FreeBSD 6. I commented 
out the skip-networking line in /etc/my.cnf (I didn't want MySQL to listen 
on TCP/IP, but to have interactions only through sockets), restarted MySQL, 
restarted the Bacula Director, and it worked. This says that skip-networking 
is the issue. I was wondering if anyone has Bacula talking to a MySQL 
database using sockets only?
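
For reference, the socket-only setup I am trying to get working would look 
something like this in /etc/my.cnf (a sketch; the socket path must match 
what the Director expects):

[mysqld]
skip-networking            # no TCP listener; connections via the socket only
socket = /tmp/mysql.sock   # the path Bacula's Director must also use
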
Thanks.
Dave.

