Re: [bareos-users] Rados Storage Backend

2020-06-05 Thread Rick Tuk
Hi Sylvain,

Thanks for the reply.
I would like to use the droplet implementation. It was the first thing I 
looked at; however, we run Ubuntu 18.04 exclusively, and the packages are 
not available for that distro.

Are there any plans from the Bareos project to provide these packages for Ubuntu?

Met vriendelijke groet / With kind regards,
Rick Tuk 

> On Jun 5, 2020, at 7:49 PM, Sylvain Donnet  wrote:
> 
> Hi Rick,
> 
> Sorry for the late answer, but I have only just discovered your question.
> 
> We were using the same configuration, but we had a lot of trouble with it 
> (striper problems, repeated crashes, slowness, ...).
> 
> We have reconfigured Bareos to use an S3 approach on our Ceph cluster.
> So you have to:
> - create an S3 gateway on Ceph, and create a bucket,
> - switch the Bareos storages to droplet instead of rados.
> 
> It is well documented in the Bareos docs, and it just works!
> 
> Watch out for your Linux distribution: the droplet libraries are 
> not available on every distro.
> 
> My two cents
> 
> Sylvain
> 

-- 
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to bareos-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/E00A9905-8518-4C8E-A77D-9C8A174C6A98%40mostwanted.io.


[bareos-users] Re: Rados Storage Backend

2020-06-05 Thread Sylvain Donnet
Hi Rick,

Sorry for the late answer, but I have only just discovered your question.

We were using the same configuration, but we had a lot of trouble with it 
(striper problems, repeated crashes, slowness, ...).

We have reconfigured Bareos to use an S3 approach on our Ceph cluster.
So you have to:
- create an S3 gateway on Ceph, and create a bucket,
- switch the Bareos storages to droplet instead of rados.

It is well documented in the Bareos docs, and it just works!

Watch out for your Linux distribution: the droplet libraries are 
not available on every distro.
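Such a droplet device is roughly a drop-in replacement for the rados one. A sketch only: the bucket name, profile path and chunk size here are placeholders, so check the Bareos droplet documentation for your version:

```conf
Device {
  Name = S3ObjectStorage
  Media Type = S3File
  Archive Device = "S3 Object Storage"
  # droplet.profile holds the S3 endpoint and credentials, e.g.
  #   host = rgw.example.com:7480
  #   use_https = false
  #   plus access_key / secret_key for the RGW S3 user
  Device Options = "profile=/etc/bareos/bareos-sd.d/device/droplet/droplet.profile,bucket=bareos-backup,chunksize=100M"
  Device Type = droplet
  Label Media = yes
  Random Access = yes
  Automatic Mount = yes
  Removable Media = no
  Always Open = no
}
```

The Media Type of the new device should differ from the old RadosFile one, so that volumes from the two backends are never mixed during restores.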

My two cents

Sylvain

On Wednesday, May 20, 2020 at 10:43:49 UTC+2, Rick Tuk wrote:
>
> LS,
>
> My settings are based on a post from Alexander Kushnirenko (
> https://groups.google.com/forum/#!topic/bareos-users/hnLJrH60GHU).
> If there is anything I am missing, I would very much appreciate the help.
>
> Rick
>
> On Tuesday, May 12, 2020 at 12:21:45 PM UTC+2, Rick Tuk wrote:
>>
>> LS, 
>>
>> I’m running Bareos 19.2.7 on Ubuntu 18.04 with the Rados storage backend. 
>> My device configuration looks like this: 
>>
>> Device { 
>> Name = RadosDevice 
>> Archive Device = "Rados Device" 
>> Device Options = 
>> "conffile=/etc/ceph/ceph.conf,poolname=bareos,clustername=ceph,username=client.bareos,striped,stripe_unit=4194304,object_size=67108864,stripe_count=12"
>>
>> Maximum Block Size = 4194304 
>> Media Type = RadosFile 
>> Device Type = rados 
>> Label Media = yes 
>> Random Access = yes 
>> Automatic Mount = yes 
>> Removable Media = no 
>> Always Open = no 
>> } 
>>
>> In my pool I have set Maximum Volume Bytes = 10G. 
>>
>> When I list the files on rados with "rados --id bareos --keyring 
>> /etc/ceph/ceph.client.bareos.keyring -p bareos ls --striper" I expect to see 
>> my volumes, but it does not show anything. 
>> When I list the files on rados without the --striper option, I see a lot of 
>> volumes with high volume numbers. 
>>
>> Also, the highest throughput I’m currently seeing is about 6 MB/s, even though 
>> everything is connected via gigabit Ethernet. 
>>
>> Installed relevant packages: 
>> bareos-common/unknown,now 19.2.7-2 amd64 [installed,automatic] 
>> bareos-filedaemon/unknown,now 19.2.7-2 amd64 [installed] 
>> bareos-storage/unknown,now 19.2.7-2 amd64 [installed,automatic] 
>> bareos-storage-ceph/unknown,now 19.2.7-2 amd64 [installed] 
>> ceph-common/stable,now 14.2.9-1bionic amd64 [installed] 
>> libradosstriper1/stable,now 14.2.9-1bionic amd64 [installed,automatic] 
>>
>> Any help would be greatly appreciated 
>>
>>
>> Met vriendelijke groet / With kind regards, 
>> Rick Tuk 
>>
>>



Re: [bareos-users] Re: Bareos high availability

2020-06-05 Thread Spadajspadaj
Probably. That's why I'd prefer a duplicated, but not 
necessarily HA, setup. I'm always a bit suspicious of HA-clustering 
non-HA-capable solutions.


But YMMV

On 05.06.2020 14:47, Oleg Volkov wrote:

Jobs in flight will obviously be aborted and fail.
Then you have to handle them manually, as with any failed job.

K.O.




[bareos-users] Messages with webhooks or api call instead of emails

2020-06-05 Thread Giacomo Gorgellino

Hi,

it would be highly appreciated if Bareos could send notifications via 
webhooks or a web API. AFAIK the "Messages" directive only supports email.

Can "Mail Command" be customized to use e.g. cURL?

Let's say, something similar to:

Messages {
  Name = webhook_ERM
  Mail Command = "curl -vs -H \"Content-Type: application/json\" -X 
POST -d '{\"JobName\": \"%n\", \"JobId\": \"%i\", \"ExitStatus\": 
\"%e\", \"Message\": \"Job %n - id %i ended with exit code %e\"}' 
\"http://mywebhookserver/invoke?token=supertokenhere\""

  mail on error = bac...@risorsa.com = all, !skipped, !audit
  console = all, !skipped, !saved, !audit
  append = "/var/log/bareos/bareos.log" = all, !skipped, !saved, !audit
  catalog = all, !skipped, !saved, !audit
}

Can it work?
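Or perhaps a small wrapper script would be easier to quote correctly than a one-line command. An untested sketch (the script name, payload fields, and dry-run fallback are just an idea; note that Bareos also pipes the full job report to the command's stdin, which this sketch ignores):

```shell
#!/bin/sh
# Hypothetical wrapper for a Bareos "Mail Command": Bareos expands
# %n (job name), %i (job id) and %e (exit status) on the command line
# before invoking it.
post_report() {
    job_name="$1"; job_id="$2"; exit_status="$3"
    # Build a minimal JSON payload; a real script should JSON-escape
    # the values.
    payload=$(printf '{"JobName":"%s","JobId":"%s","ExitStatus":"%s"}' \
        "$job_name" "$job_id" "$exit_status")
    if [ -n "$WEBHOOK_URL" ]; then
        # Real invocation: POST the payload to the webhook endpoint.
        curl -fsS -H 'Content-Type: application/json' -X POST \
             -d "$payload" "$WEBHOOK_URL"
    else
        # Dry run (no URL configured): just print the payload.
        printf '%s\n' "$payload"
    fi
}

# Dry-run demo with made-up values:
post_report "backup-client1" "1234" "OK"
```

It would then presumably be referenced as Mail Command = "/usr/local/bin/bareos-webhook.sh '%n' '%i' '%e'", together with a mail-type destination (e.g. mail on error) to trigger it.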



Re: [bareos-users] Re: Bareos high availability

2020-06-05 Thread Oleg Volkov
Jobs in flight will obviously be aborted and fail.
Then you have to handle them manually, as with any failed job.

K.O.


On Friday, June 5, 2020 at 1:32:40 PM UTC+3, Spadajspadaj wrote:
>
> I would be, however, cautious about possible scenarios where a node breaks 
> and fails over to the other server - for example, in the middle of a 
> backup job. Such scenarios would need some testing so you know what to 
> expect and how to handle such a situation.



Re: [bareos-users] Howto Correctly Configure Full Pool For Catalog Backup For an Archiving System?

2020-06-05 Thread 'DUCARROZ Birgit' via bareos-users

Thank you very much, this helps me!
Kind regards,
Birgit
On 05/06/20 09:25, Spadajspadaj wrote:
If you don't specify a retention period, the defaults will be applied, 
so that's not a proper solution. I'd rather set it to some 
insanely huge value.


But of course it will result in an ever-growing catalog 
database, since no jobs/files/volumes will ever be purged from 
the catalog.






Re: [bareos-users] Re: Bareos high availability

2020-06-05 Thread Spadajspadaj
I would be, however, cautious about possible scenarios where a node 
breaks and fails over to the other server - for example, in the middle 
of a backup job. Such scenarios would need some testing so you know what 
to expect and how to handle such a situation.


On 05.06.2020 09:00, Oleg Volkov wrote:

I do not see any problem. It is just a service.
Make Postgres HA, put /etc/bareos and /var/lib/bareos on a shared 
disk, create a VIP, and colocate it with the Bareos services.


I have never tried this with Bareos, but I have built a lot of clusters - 
there should be no problem.

Just follow any active-standby scenario.





Re: [bareos-users] Howto Correctly Configure Full Pool For Catalog Backup For an Archiving System?

2020-06-05 Thread Spadajspadaj
If you don't specify a retention period, the defaults will be applied, 
so that's not a proper solution. I'd rather set it to some 
insanely huge value.


But of course it will result in an ever-growing catalog 
database, since no jobs/files/volumes will ever be purged from 
the catalog.
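For an archive pool that could look like the following sketch (the 100-year figure is only an arbitrary "huge value", not a recommendation):

```conf
Pool {
  Name = WormArchive
  Pool Type = Backup
  AutoPrune = no                # never prune automatically
  Volume Retention = 100 years  # effectively "keep forever"
  Job Retention = 100 years
  File Retention = 100 years
}
```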



On 04.06.2020 20:35, 'DUCARROZ Birgit' via bareos-users wrote:

Hi list,

I have some WORM tapes which I will use for an eternal archive.
My catalog will be backed up to an external share.

Actually I'm not sure how to configure my Full pool for the catalog backup.


If I enable AutoPrune and also configure Volume / Job / File Retention, 
my Full pool will be overwritten after 180 days (so, for example, also 
the catalog information for the WORM data). Will I then lose the catalog 
information needed to restore from a WORM tape if that tape must be 
used some years from now?


Pool {
  Name = Full
  ...
  AutoPrune = yes
  Volume Retention = 180 days
  Job  Retention = 179 days
  File Retention = 178 days
}

For archiving, must I set AutoPrune = no and set no 
Volume/Job/File Retention at all?

But in that case the external share will grow endlessly.

How do you configure Pool information for archives?


Thank you for some hints!
Kind regards,
Birgit





[bareos-users] Re: Bareos high availability

2020-06-05 Thread Oleg Volkov
I do not see any problem. It is just a service.
Make Postgres HA, put /etc/bareos and /var/lib/bareos on a shared disk, 
create a VIP, and colocate it with the Bareos services.

I have never tried this with Bareos, but I have built a lot of clusters - 
there should be no problem. 
Just follow any active-standby scenario.
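With Pacemaker, for instance, the colocation could be sketched like this (crm configure syntax; the IP, device path and resource names are placeholders I made up):

```conf
# Active-standby group: filesystem, VIP and the three Bareos daemons
# always move together to whichever node is active.
primitive bareos_fs ocf:heartbeat:Filesystem \
    params device=/dev/shared/bareos directory=/var/lib/bareos fstype=xfs
primitive bareos_vip ocf:heartbeat:IPaddr2 \
    params ip=192.0.2.10 cidr_netmask=24
primitive bareos_dir systemd:bareos-dir
primitive bareos_sd systemd:bareos-sd
primitive bareos_fd systemd:bareos-fd
group bareos_grp bareos_fs bareos_vip bareos_dir bareos_sd bareos_fd
```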

On Tuesday, June 20, 2017 at 8:47:27 AM UTC+3, Chanaka Madushan wrote:
>
> Hi,
>
> I got a requirement to install Bareos as a high-availability cluster on 
> CentOS with a PostgreSQL database. But in this case I am limited to two 
> servers + shared storage (maybe a SAN or a NAS). So I have to install 
> bareos-fd, bareos-sd and bareos-dir on these two servers as highly 
> available services. 
>
> I hope to make PostgreSQL highly available with transaction log shipping. 
>
> But I do not have an idea how to make the Bareos services highly available.
>
> Is there anyone who has deployed a high-availability cluster for Bareos?
>
>
