Is it still loading the driver from
/usr/lib64/bacula-sd-cloud-driver-9.6.3.so?  It is a little strange that you
have bacula-sd in /opt/bacula/bin but the plugins are in /usr/lib64.

Please also post the output from:

objdump -t /usr/lib64/bacula-sd-cloud-driver-9.6.3.so | grep _driver

Do you also have /opt/bacula/plugins/bacula-sd-cloud-driver-9.6.3.so?
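To rule out a stale copy of the driver being picked up, it can also help to check which file the running daemon has actually mapped. A minimal sketch, assuming Linux /proc is available (substitute the bacula-sd PID, e.g. from pidof, for $$ below):

```shell
# List every shared object mapped into a process's address space.
# $$ is this shell's own PID; replace it with the bacula-sd PID
# (19313 in the traceback) to see which cloud-driver .so it loaded.
grep -oE '/[^ ]+\.so[.0-9]*' "/proc/$$/maps" | sort -u
```

If copies exist in both /usr/lib64 and /opt/bacula/plugins, this shows which one is actually in use.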

__Martin


>>>>> On Thu, 14 May 2020 08:49:14 +0200, Phillip Dale said:
> 
> I could not get much information out of that traceback. Hopefully this 
> helps; here is the traceback file I got:
> 
> [New LWP 19474]
> [New LWP 19470]
> [New LWP 19315]
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib64/libthread_db.so.1".
> 0x00007feb40a409a3 in select () from /usr/lib64/libc.so.6
> $1 = "12-May-2020 22:44:03\000\000\000\000\000\000\000\000\000"
> $2 = '\000' <repeats 127 times>
> $3 = 0x231aeb8 "bacula-sd"
> $4 = 0x231aef8 "/opt/bacula/bin/bacula-sd"
> $5 = 0x0
> $6 = '\000' <repeats 49 times>
> $7 = 0x7feb420b443f "9.6.3 (09 March 2020)"
> $8 = 0x7feb420b4463 "x86_64-pc-linux-gnu"
> $9 = 0x7feb420b4477 "redhat"
> $10 = 0x7feb420b447e "(Core)"
> $11 = "backup.novalocal", '\000' <repeats 33 times>
> $12 = 0x7feb420b4455 "redhat (Core)"
> Environment variable "TestName" not defined.
> #0  0x00007feb40a409a3 in select () from /usr/lib64/libc.so.6
> #1  0x00007feb4204e6cc in bnet_thread_server (addrs=0x231f6f8, 
> max_clients=41, client_wq=0x630d80 <dird_workq>, 
> handle_client_request=0x40ebd8 <handle_connection_request(void*)>) at 
> bnet_server.c:166
> #2  0x000000000040a347 in main (argc=0, argv=0x7fffaaa99890) at stored.c:327
> 
> Thread 4 (Thread 0x7feb387e3700 (LWP 19315)):
> #0  0x00007feb41e1cde2 in pthread_cond_timedwait@@GLIBC_2.3.2 () from 
> /usr/lib64/libpthread.so.0
> #1  0x00007feb420a1ae1 in watchdog_thread (arg=0x0) at watchdog.c:299
> #2  0x00007feb41e18ea5 in start_thread () from /usr/lib64/libpthread.so.0
> #3  0x00007feb40a498dd in clone () from /usr/lib64/libc.so.6
> 
> Thread 3 (Thread 0x7feb38fe4700 (LWP 19470)):
> #0  0x00007feb41e201d9 in waitpid () from /usr/lib64/libpthread.so.0
> #1  0x00007feb42093f88 in signal_handler (sig=11) at signal.c:233
> #2  <signal handler called>
> #3  0x00007feb41e1ad00 in pthread_mutex_lock () from 
> /usr/lib64/libpthread.so.0
> #4  0x00007feb420abf31 in lmgr_p (m=0x10) at lockmgr.c:106
> #5  0x00007feb420ae7b8 in lock_guard::lock_guard (this=0x7feb38fe3510, 
> mutex=...) at ../lib/lockmgr.h:289
> #6  0x00007feb37dd8296 in cloud_proxy::volume_lookup (this=0x0, 
> volume=0x7feb3000acc8 "Vol-0003") at cloud_parts.c:229
> #7  0x00007feb37dd1658 in cloud_dev::probe_cloud_proxy (this=0x7feb3000a878, 
> dcr=0x7feb300146d8, VolName=0x7feb3000acc8 "Vol-0003", force=false) at 
> cloud_dev.c:1217
> #8  0x00007feb37dd0593 in cloud_dev::open_device (this=0x7feb3000a878, 
> dcr=0x7feb300146d8, omode=2) at cloud_dev.c:1025
> #9  0x00007feb4251ce8a in DCR::mount_next_write_volume (this=0x7feb300146d8) 
> at mount.c:191
> #10 0x00007feb424f6d32 in acquire_device_for_append (dcr=0x7feb300146d8) at 
> acquire.c:420
> #11 0x000000000040c325 in do_append_data (jcr=0x7feb300008e8) at append.c:102
> #12 0x0000000000416ed7 in append_data_cmd (jcr=0x7feb300008e8) at 
> fd_cmds.c:263
> #13 0x0000000000416b68 in do_client_commands (jcr=0x7feb300008e8) at 
> fd_cmds.c:218
> #14 0x000000000041688a in run_job (jcr=0x7feb300008e8) at fd_cmds.c:167
> #15 0x0000000000418c58 in run_cmd (jcr=0x7feb300008e8) at job.c:240
> #16 0x000000000040f196 in handle_connection_request (arg=0x2340ef8) at 
> dircmd.c:242
> #17 0x00007feb420a2b54 in workq_server (arg=0x630d80 <dird_workq>) at 
> workq.c:372
> #18 0x00007feb41e18ea5 in start_thread () from /usr/lib64/libpthread.so.0
> #19 0x00007feb40a498dd in clone () from /usr/lib64/libc.so.6
> 
> Thread 2 (Thread 0x7feb368e2700 (LWP 19474)):
> #0  0x00007feb41e1cde2 in pthread_cond_timedwait@@GLIBC_2.3.2 () from 
> /usr/lib64/libpthread.so.0
> #1  0x00007feb420a294b in workq_server (arg=0x630d80 <dird_workq>) at 
> workq.c:349
> #2  0x00007feb41e18ea5 in start_thread () from /usr/lib64/libpthread.so.0
> #3  0x00007feb40a498dd in clone () from /usr/lib64/libc.so.6
> 
> Thread 1 (Thread 0x7feb42b90880 (LWP 19313)):
> #0  0x00007feb40a409a3 in select () from /usr/lib64/libc.so.6
> #1  0x00007feb4204e6cc in bnet_thread_server (addrs=0x231f6f8, 
> max_clients=41, client_wq=0x630d80 <dird_workq>, 
> handle_client_request=0x40ebd8 <handle_connection_request(void*)>) at 
> bnet_server.c:166
> #2  0x000000000040a347 in main (argc=0, argv=0x7fffaaa99890) at stored.c:327
> #0  0x00007feb40a409a3 in select () from /usr/lib64/libc.so.6
> No symbol table info available.
> #1  0x00007feb4204e6cc in bnet_thread_server (addrs=0x231f6f8, 
> max_clients=41, client_wq=0x630d80 <dird_workq>, 
> handle_client_request=0x40ebd8 <handle_connection_request(void*)>) at 
> bnet_server.c:166
> 166           if ((stat = select(maxfd + 1, &sockset, NULL, NULL, NULL)) < 0) {
> maxfd = 5
> sockset = {fds_bits = {32, 0 <repeats 15 times>}}
> clilen = 16
> turnon = 1
> buf = "188.95.226.225", '\000' <repeats 113 times>
> allbuf = "0.0.0.0:9103 \000\000\000\060\215\251\252\377\177\000\000 
> \215\251\252\377\177\000\000!\000\000\000\000\000\000\000$\274UA\353\177\000\000\000\000\000\000\000\000\000\000hG\271B\353\177\000\000\000\200\271B\353\177\000\000U\301UA\353\177\000\000\320\277\225@\353\177\000\000\360\027TA\353\177\000\000\000\000\000\000\001\000\000\000t\004\000\000\001\000\000\000\334\367\063\002\000\000\000\000\350\215\251\252\377\177\000\000\300\215\251\252\377\177\000\000\001\000\000\000\000\000\000\000hG\271B\353\177\000\000\370\254\271B\353\177\000\000\230\251\271B\353\177\000\000\217`\230B\353\177\000\000\000\000\000\000\000\000\000\000hG\271B\353\177\000\000\001\000\000\000\377\177\000\000\000\000\000\000\000\000\000\000"...
> stat = 0
> tlog = 0
> fd_ptr = 0x0
> sockfds = {<SMARTALLOC> = {<No data fields>}, head = 0x7fffaaa989a0, tail = 
> 0x7fffaaa989a0, loffset = 0, num_items = 1}
> newsockfd = 7
> clientaddr = {ss_family = 2, __ss_padding = 
> "\220\020\274_\342\341\000\000\000\000\000\000\000\000\001\000\000\000\000\000\000\000\027",
>  '\000' <repeats 15 times>, 
> "\253]\230B\353\177\000\000\000\000\000\000\320\022\023\000`\212\251\252\377\177\000\000\200\226\230\000\000\000\000\000\360(\341A\353\177\000\000p\213\251\252\377\177\000\000`\213\251\252\377\177\000\000\004\000\000\000\000\000\000\000\253]\230B\353\177\000\000\000\000\000\000\000\000\000",
>  __ss_align = 140648411321972}
> addr = 0x0
> #2  0x000000000040a347 in main (argc=0, argv=0x7fffaaa99890) at stored.c:327
> 327                           &dird_workq, handle_connection_request);
> ch = -1
> no_signals = false
> thid = 140648250230528
> uid = 0x0
> gid = 0x0
> #0  0x0000000000000000 in ?? ()
> No symbol table info available.
> #0  0x0000000000000000 in ?? ()
> No symbol table info available.
> #0  0x0000000000000000 in ?? ()
> No symbol table info available.
> #0  0x0000000000000000 in ?? ()
> No symbol table info available.
> #0  0x0000000000000000 in ?? ()
> No symbol table info available.
> [Inferior 1 (process 19313) detached]
> Attempt to dump current JCRs. njcrs=1
> threadid=0x7feb38fe4700 JobId=8 JobStatus=R jcr=0x7feb300008e8 
> name=BackupClient1.2020-05-12_22.44.00_03
>         use_count=1 killable=1
>         JobType=B JobLevel=F
>         sched_time=12-May-2020 22:44 start_time=12-May-2020 22:44
>         end_time=01-Jan-1970 00:00 wait_time=01-Jan-1970 00:00
>         db=(nil) db_batch=(nil) batch_started=0
>         dcr=0x7feb300146d8 volumename=Vol-0003 dev=0x7feb3000a878 newvol=1 
> reserved=1 locked=0
> List plugins. Hook count=0
> 
> /Phillip
> 
> 
> > On 13 May 2020, at 18:48, Martin Simmons <mar...@lispworks.com> wrote:
> > 
> >>>>>> On Wed, 13 May 2020 15:39:56 +0200, Phillip Dale said:
> >> 
> >> Hi all,
> >> 
> >> I just joined this list, so I am not sure if this should go here or on the 
> >> development list. I have the same issue that Rick Tuk reported in his post 
> >> on May 07.
> >> 
> >> I am running on CentOS 7 and everything works fine until I try to use Ceph 
> >> S3 or Amazon S3 storage, at which point bacula-sd crashes. My setup is 
> >> very similar to the one in his post.
> >> Not sure where to go from here. Hoping for some help.
> >> 
> >> Here is the traceback from running bacula-sd with -q20:
> >> 
> >> backup.novalocal-sd: init_dev.c:437-0 Open SD driver at 
> >> /usr/lib64/bacula-sd-cloud-driver-9.6.3.so
> >> backup.novalocal-sd: init_dev.c:442-0 Lookup "BaculaSDdriver" in 
> >> driver=cloud
> >> backup.novalocal-sd: init_dev.c:444-0 Driver=cloud entry point=7feb37dcc907
> >> backup.novalocal-sd: stored.c:615-0 SD init done CephStorage 
> >> (0x7feb30008818)
> >> backup.novalocal-sd: init_dev.c:469-0 SD driver=cloud is already loaded.
> >> backup.novalocal-sd: stored.c:615-0 SD init done S3CloudStorage 
> >> (0x7feb3000a878)
> >> backup.novalocal-sd: stored.c:615-0 SD init done TmpFileStorage 
> >> (0x7feb3000c928)
> >> backup.novalocal-sd: bnet_server.c:86-0 Addresses 0.0.0.0:9103
> >> List plugins. Hook count=0
> >> Bacula interrupted by signal 11: Segmentation violation
> >> Kaboom! bacula-sd, backup.novalocal-sd got signal 11 - Segmentation 
> >> violation at 12-May-2020 22:44:03. Attempting traceback.
> >> Kaboom! exepath=/opt/bacula/bin/
> >> Calling: /opt/bacula/bin/btraceback /opt/bacula/bin/bacula-sd 19313 
> >> /opt/bacula/working
> >> It looks like the traceback worked...
> >> LockDump: /opt/bacula/working/bacula.19313.traceback
> > 
> > Did it send you an email with the traceback?  That might contain more
> > information.
> > 
> > If you can't find the email, then look in
> > /opt/bacula/working/bacula.19313.traceback.
> > 
> > __Martin
> 
> 
> 
> _______________________________________________
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
> 

