And I can write to AWS now, with use_https set to false:
host = s3.amazonaws.com
use_https = false
backend = s3
aws_region = us-east-2
aws_auth_sign_version = 4
access_key = "REDACTED"
secret_key = "REDACTED"
pricing_dir = ""
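For what it's worth, the droplet profile above is a flat key = value file. A minimal parser sketch (illustrative only, not libdroplet's actual parser) shows how those values would be read:

```python
def parse_droplet_profile(text: str) -> dict:
    """Parse a flat 'key = value' droplet-style profile (illustrative only)."""
    profile = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line or "=" not in line:
            continue
        key, _, value = line.partition("=")
        profile[key.strip()] = value.strip().strip('"')  # unquote values
    return profile

profile = parse_droplet_profile("""
host = s3.amazonaws.com
use_https = false
backend = s3
aws_region = us-east-2
aws_auth_sign_version = 4
access_key = "REDACTED"
""")
```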
--
You received this message because you are subscribed to the Google Groups "bareos-users" group.
Progress! With
aws_auth_sign_version = 4 # must be 4 for AWS S3, must be 2 for CEPH Object
I can write to our ceph store.
$ mc ls ceph/cfull0009
[2020-11-30 07:34:27 CST] 42KiB
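As background on the aws_auth_sign_version setting: v2 signs the request with a single HMAC over a string-to-sign, while v4 derives a scoped signing key by chaining HMAC-SHA256 over date, region, and service. A minimal sketch of the v4 key derivation, following the public AWS spec (function names are mine, not libdroplet's):

```python
import hashlib
import hmac

def sigv4_signing_key(secret_key: str, date: str, region: str, service: str = "s3") -> bytes:
    """Derive the AWS Signature Version 4 signing key (per the public spec)."""
    def hmac_sha256(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    k_date = hmac_sha256(("AWS4" + secret_key).encode("utf-8"), date)  # e.g. "20201130"
    k_region = hmac_sha256(k_date, region)                             # e.g. "us-east-2"
    k_service = hmac_sha256(k_region, service)
    return hmac_sha256(k_service, "aws4_request")

key = sigv4_signing_key("EXAMPLE", "20201130", "us-east-2")
```

Note that the key is scoped to date, region, and service, which is one reason a v2-expecting endpoint and a v4-signing client disagree outright rather than partially.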
On Monday, November 30, 2020 at 3:06:04 AM UTC-6 Frank Ueberschar wrote:
Both logs show an HTTP authentication error. From my remote vantage point, I would suggest double-checking this line in the droplet config:
aws_auth_sign_version = 2
On 27.11.20 at 18:23, 'JAMES BELLINGER' via bareos-users wrote:
Our ceph server is back. A section of the bareos-sd.trace and the matching
section of the ceph log follow:
bareos-sd (150): stored/mount.cc:205-224 autoLoadDev returns 0
bareos-sd (150): stored/mount.cc:240-224 want vol=cfull0009 devvol=
dev="uwcephS3" (bareos-bucket)
bareos-sd (100): stored/de
I installed the binaries and tried the AWS test (yesterday's wasn't a good
test; our test bucket is in AWS us-east-1):
bareos-sd (150): stored/mount.cc:240-218 want vol=cfull0010 devvol=
dev="awstest" (bareos-test-uw)
bareos-sd (100): stored/dev.cc:619-218 open dev: type=6 dev_name="awstest"
(bareos-test-uw)
Feel free to check the binaries:
https://download.bareos.org/bareos/experimental/CD/PR-674/
Best, Frank
On 20.11.20 at 16:44, Frank Ueberschar wrote:
At the beginning of each job the droplet-sd-backend probes the s3-host
by accessing the attributes of a bucket with name "/".
Now, both your descriptions probably describe two issues: …
For me it fails whether use_https is true or false.
On Monday, November 23, 2020 at 6:41:11 AM UTC-6 andr...@gmail.com wrote:
I've experienced this too with AWS S3 and bareos-sd 18.2.5; it stopped working
in the same pattern Dmitry described. What I do find a bit odd is that it
only works with HTTP set in the droplet profile:
use_https = false
--
Andrei
On Friday, 20 November 2020 at 17:40:14 UTC+1 Dmitry Ponkin wrote:
Yes, I didn't update anything. The issue started presenting itself a month
ago, and after several attempts the jobs tended to finally work. Last night
was the first time none of them worked.
No idea what might've caused it on the Amazon side. They usually warn about any
sort of breaking changes months in advance.
On 20.11.20 at 17:11, Dmitry Ponkin wrote:
> I just patched 19.2.7 to use marker.bareos and put that file into every
> bucket. It's a subpar solution at best and should not be merged.
If you didn't upgrade Bareos or libdroplet and it suddenly broke, I
guess it was the other end breaking compatibility.
I just patched 19.2.7 to use marker.bareos and put that file into every
bucket. It's a subpar solution at best and should not be merged.
I suggest finding another way to check the connection. libdroplet can
output the list of accessible buckets; this could be used instead, as it
doesn't require a …
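A sketch of the two alternative connection checks discussed here, the marker-object workaround and a service-level bucket listing, with stand-in callables since the real calls go through libdroplet (all names here are mine, not the patch's):

```python
def connection_ok_via_marker(head_object, bucket: str, marker: str = "marker.bareos") -> bool:
    """Workaround: HEAD a known object that was placed in every bucket."""
    try:
        head_object(bucket, marker)
        return True
    except OSError:
        return False

def connection_ok_via_listing(list_buckets) -> bool:
    """Alternative: a service-level bucket listing needs no bucket name at all."""
    try:
        return list_buckets() is not None
    except OSError:
        return False

# Stub standing in for a real S3 HEAD-object call:
def fake_head(bucket, key):
    if (bucket, key) != ("cfull0009", "marker.bareos"):
        raise OSError("404 Not Found")

ok = connection_ok_via_marker(fake_head, "cfull0009")
```

The listing variant matches the suggestion above: it sidesteps the bucket-name problem entirely, at the cost of requiring list permissions on the account.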
At the beginning of each job the droplet-sd-backend probes the s3-host
by accessing the attributes of a bucket with name "/".
Now, both your descriptions probably describe two issues:
1. It cannot establish a connection with a TLS handshake?
2. Requesting the attributes of a bucket "/" does not succeed?
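To see why probing a bucket named "/" is fragile, here is a toy path-style request builder (not droplet's actual code): a bucket name of "/" collapses into a double slash, a different resource from any real bucket, which fits the DPL_FAILURE seen in the dplsh session.

```python
def head_bucket_request_line(bucket: str) -> str:
    """Build a path-style HEAD request line (toy illustration only)."""
    return f"HEAD /{bucket} HTTP/1.1"

normal = head_bucket_request_line("cfull0009")  # a real bucket
probe = head_bucket_request_line("/")           # the backend's "/" probe
```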
https://github.com/bareos/bareos/blob/9c59c6460c7ddcdc361c6fb03fa01a2fd2aaa28e/core/src/stored/backends/droplet_device.cc#L385
And here's where the backend shoots itself in the foot. I'm considering
putting an empty file in every bucket I use for backups and using its name
to check if the bucket is accessible.
I used dplsh to test it out and, sure enough, passing / to getattr yields
DPL_FAILURE:
bucket_name:
bucket_name:/> getattr /
status: DPL_FAILURE (-1)
bucket_name:/> getattr -r test.file
last-modified=
accept-ranges=bytes
etag=
date=
server=AmazonS3
content-type=
x-amz-request-id=
content-length=
I'm having the same issue with AWS S3 at the moment, though with no SSL errors.
The HEAD request shows up in my logs, too. This is how the droplet backend
checks whether the bucket exists and whether a connection can be established:
https://github.com/bareos/bareos/blob/9c59c6460c7ddcdc361c6fb03fa01a2fd2aaa28e/core/src/stored/backends/
I checked SSL access directly using the MinIO Client on the bareos-sd
server. That works.
I have to assume that I'm doing something wrong, and would appreciate extra
eyes on the configuration.
James Bellinger
The central part of the trace follows. The message about the SSL
connection seems ambiguous:
ERROR: error: src/conn.c:392: init_ssl_conn: SSL certificate verification
status: 0: ok
I take this to mean that somehow it is not managing the credentials
properly.
bareos-sd (150): stored/mount.
You may want to switch on the trace file on the storage daemon; that gives
you more debugging output, e.g.:
"setdebug trace=1 level=200 storage=awstest"
Additionally, you may want to try the current nightly build, because we
added some further improvements to the droplet backend device:
https://down
I am testing bareos-19.2.7-2 on CentOS Linux release 7.4.1708, including
the bareos-storage-droplet RPM.
We want to evaluate how well we can back up to the cloud, or to a Ceph
server of our own. I have tried both, and both fail.
The credentials are valid, and were tested independently. (Unless …