Re: [Bacula-users] Errors migrating from mysql to postgres

2017-12-01 Thread Michel Figgins
Yes please on the explanation.
When I use mysqldump --hex-blob, the RestoreObject field/column has the null 
byte equivalents (i.e. it starts with 0x3C0042004100, which is "<BA" with 
null bytes between the characters).

From: Wanderlei Huttel [mailto:wanderleihut...@gmail.com]
Sent: Thursday, November 30, 2017 8:14 AM
To: Radosław Korzeniewski
Cc: Martin Simmons; bacula-users
Subject: Re: [Bacula-users] Errors migrating from mysql to postgres

Hello Radoslaw

Could you explain a bit more about "mysqldump --hex-blob"?

Maybe I could include it in the script and improve it so that no errors happen.

Best regards

Wanderlei Hüttel
http://www.huttel.com.br

2017-11-30 9:10 GMT-02:00 Radosław Korzeniewski <rados...@korzeniewski.net>:
Hello,

2017-11-28 11:55 GMT+01:00 Wanderlei Huttel <wanderleihut...@gmail.com>:
Hello Michel

I've created some scripts to do this job.
https://github.com/wanderleihuttel/bacula-utils/tree/master/convert_mysql_to_postgresql

I only had some trouble with the Log table, but I changed some records 
manually and everything worked fine.
A guy in Brazil with a 6 GB database used these scripts and had no problems.

I have migrated Bacula 5.x to 7.x and the catalog from MySQL to PostgreSQL, 
and I had the same problem with the RestoreObject table. The solution was to 
use the --hex-blob parameter with mysqldump and the decode() function in 
PostgreSQL.
The RestoreObject table is populated when Bacula runs a Windows VSS backup. If 
you back up everything except Windows VSS you will have no problem with the 
above script. :)
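
Roughly, the dump-and-convert step can look like this. This is only a sketch 
of the idea, not the exact commands from my migration; the database name, user 
and file names are placeholders, and depending on the column type in your 
PostgreSQL schema you may still need convert_from() or a cast on top of 
decode():

# Dump only the RestoreObject table, writing BLOB columns as hex literals
# (0x3C0042...) so the embedded null bytes survive the text dump.
# (user/database "bacula" and the file names are placeholders)
mysqldump --hex-blob --no-create-info --compatible=postgresql \
    -u bacula -p bacula RestoreObject > restoreobject_mysql.sql

# MySQL emits 0xDEADBEEF... literals; PostgreSQL wants decode('DEADBEEF','hex').
# A naive rewrite of the hex literals (check the result before loading it):
sed -E "s/0x([0-9A-Fa-f]+)/decode('\1','hex')/g" \
    restoreobject_mysql.sql > restoreobject_pgsql.sql

# Load the converted INSERT statements into the PostgreSQL catalog.
psql -U bacula -d bacula -f restoreobject_pgsql.sql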

The next problem I encountered was an ' character in some filenames, which 
mysqldump is unable to escape properly, even with the --compatible=postgresql 
option. So I had to escape it manually (with an additional script).
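
For reference, the manual escaping was basically a single substitution over 
the dump file. Again only a sketch with placeholder file names: MySQL escapes 
a quote inside a string as \' while PostgreSQL expects it doubled as '', and 
a blind replace like this can also touch legitimate backslash sequences in 
your data, so check the result before loading it:

# Rewrite MySQL-style \' escapes into PostgreSQL-style '' escapes.
sed "s/\\\\'/''/g" bacula_mysql.sql > bacula_pgsql.sql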

Finally the migration was successful, but due to the above errors it took 
more time than expected.

best regards
--
Radosław Korzeniewski
rados...@korzeniewski.net

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Problems with version 9.0.3 failing since the upgrade

2017-12-01 Thread Jerry Lowry
list,

I am reiterating my problem, due to no response.  As a side note, the
storage server and director are running on the same server.  This happens
on a copy job from one disk to another disk on the same system.
It is a definite problem, as I am losing backup data!

bacula 9.0.3
mariadb 10.2.8
centos 6.9

I upgraded Bacula from 5.2.13, which worked very well, to version 9.0.3.
Basically I installed the new version from source and then upgraded the
database structure.  The source was compiled with the following:

./configure --sbindir=/usr/bacula/bin --sysconfdir=/usr/bacula/bin
--with-pid-dir=/var/run/bacula --with-subsys-dir=/var/run/bacula/working
--enable-smartalloc --with-mysql --with-working-dir=/usr/bacula/bin/working
--with-dump-email=u...@domain.com --with-job-email=u...@domain.com
--with-smtp-host=smtp.googlemail.com --enable-bat

The problem started with my offsite backups. I will get the following error:

13-Nov 01:18 distress JobId 33429: Fatal error: Socket error on Data
received command: ERR=No data available
13-Nov 01:18 distress JobId 33429: Fatal error: fd_cmds.c:157 Read
data not accepted

And the backup fails. Most of the time it is on a backup that spans
multiple disks.  So I chatted with the ATTO RAID support folks and they
suggested that I use a different hot-swap RAID enclosure because, in their
opinion, the one I was using was not very reliable.  Although this
enclosure had worked very reliably for well over 10 years without a
problem!  So I moved everything to a completely new system (Supermicro
with ATTO RAID).  The problem still persists!  I have rebuilt the RAID
disk structure and changed the workings of the backups, to no avail!
My backups worked flawlessly before the upgrade!  Since going to v9 I
cannot count how many offsite backups have failed to complete with this
type of error.  I also get

13-Nov 01:23 distress JobId 33430: Warning: mount.c:210 Open of File
device "BottomSwap" (/BottomSwap) Volume "dcBS-104" failed:
ERR=file_dev.c:190 Could not
open(/BottomSwap/dcBS-104,OPEN_READ_WRITE,0640): ERR=No such file or
directory
and
15-Nov 17:20 kilchis JobId 35825: Error: bsock.c:849 Read error from
Storage daemon:kilchis:9103: ERR=Connection reset by peer
15-Nov 17:20 kilchis JobId 35825: Fatal error: append.c:271 Network
error reading from FD. ERR=Connection reset by peer

All of this happens on one storage server (well, actually two storage
servers, but they service two different subnets/domains). It all started
with the upgrade!

Please, tell me that you have fixed this with a new version!

thanks



Re: [Bacula-users] LTO-7 library question (planning purchase)

2017-12-01 Thread itlinux_igtp

Will do; excluding the maintenance costs, the library works fine.

After watching the technicians replace the robot arm twice, I suspect that 
the enclosure might be at fault, as installing the new arm never seemed 
smooth. I agree that, given the history of this library, I'm not keen on 
keeping it as the main backup solution. It will boil down to maintenance 
costs vs. new hardware price.


Thanks for the input!

Regards,
Iñaki.
On 11/30/2017 05:45 PM, Alan Brown wrote:

On 30/11/17 16:18, itlinux_igtp wrote:

Hi Alan,

That was my initial plan, to just add two LTO-7 drives to the current 
setup. However, the providers I contacted said the drive cost was about 
90% of the cost of the library, the maintenance costs for the i80 are 
currently over 2500 euros a year, and I have already had to replace two 
robotic arms, so running without a maintenance contract would be 
extremely risky. Then there is the issue of where I can buy, as I work 
for a public institution in Spain and there are several restrictions on 
where and how we can purchase :(


I'll check again with the providers I have to get an exact quote; 
maybe this will be cheaper.


Talk directly with Quantum Europe.

There's a large profit margin for the reseller selling you a new 
changer+maintenance, so many providers add substantial charges.


I have _never_ had to replace a robotic arm in any robot unit, and the 
service managers of both Quantum and Overland have said that failures of 
robot components are vanishingly rare (Overland expect their Neo2/4000 
and 8000 range to last at least a decade).


The fact that you've had to replace one twice is rather alarming and 
indicates that something's not set up correctly. Don't assume that the 
maintenance people know what they're doing, because they frequently 
don't, and any kind of misalignment will cause problems.


Given that history with that particular unit, I'd replace it, but 
don't assume it's the design that's at fault. There's a nice YouTube 
video at https://www.youtube.com/watch?v=HG6cP8Tncgw which shows how 
simple they are.





Running new hardware tech is something I always fear a bit. LTO-8 
should be fine, but I'd rather use something well tested; the 
increased speed and capacity, though, is tempting.


Regards,
Iñaki

On 11/30/2017 05:06 PM, Alan Brown wrote:

On 30/11/17 15:13, itlinux_igtp wrote:

Dear List,

We've been using Bacula with a Quantum i80 dual-drive LTO-5 
library for a while now, and it has been working great since we 
deployed it a bit more than three years ago. I have to thank the 
developers for creating and releasing such a great tool. It has 
been rock solid, has improved the speed of our backups 
considerably, and is also way nicer to manage than the previous 
proprietary solution.


Due to an increase in the volume of data we have to back up, the 
current library is no longer enough, so the plan is to replace it 
with a dual-drive LTO-7 library, most likely an HP MSL4048. I was 
wondering if anyone on the list has any experience running Bacula 
on one of these (good or bad)? I think there shouldn't be any 
issue, but I'd rather ask first, just in case :)


All of HP's changers are rebadged Quantum or Overland/Tandberg 
devices, and both makers' libraries work fine with Bacula (I've 
used both, and HP-branded units too).


However, you could just as easily add LTO-7 drives to your i80 (or 
wait a couple of months and install LTO-8). Talk to Quantum and see 
what they'll charge you for this option. The design lifespan of the 
changer itself is _much_ longer than that of the drives.






Regards,

Iñaki








--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users