att
>
> Thanks for your answer.
>
> Should I open a bug report then?
>
> How would I be able to read more from it? Should I have multiple threads access
> it and read from it simultaneously?
>
> Marc
>
> On 1/25/24 20:25, Matt Benjamin wrote:
> > Hi Marc,
> >
>
ld)
> --- pthread ID / name mapping for recent threads ---
>7f2472a89b00 / safe_timer
>7f2472cadb00 / radosgw
>...
>log_file
>
> /var/lib/ceph/crash/2024-01-25T13:10:13.909546Z_01ee6e6a-e946-4006-9d32-e17ef2f9df74/log
> --- end dump of recent events ---
>
30303:f3fec4b6-a248-4f3f-be75-b8055e61233a.33081.14",
> "started": "Wed, 06 Dec 2023 10:44:40 GMT",
> "status": "COMPLETE"
> },
> {
> "bucket":
> ":ec3201cam02:f3fec4b6-a248-4f3f-be75-b8055e612
ommand.
> >>
> >> Have I missed a pagination limit for listing user buckets in the rados
> >> gateway?
> >>
> >> Thanks,
> >> Tom
hange? Is this
> effective immediately with strong consistency, or is there some propagation
> delay (hopefully with some upper bound)?
>
>
> Best regards
> Matthias
basically an extension
> of zone-level redirect_zone. I found it helpful in realizing CopyObject
> with (x-copy-source) in multisite environments where bucket content doesn't
> exist in all zones. This feature is similar to what Matt Benjamin suggested
> about the concept of "bucke
d
> works well for bucket migration.
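(For reference, that kind of server-side copy can be exercised with the aws CLI; the endpoint, bucket and key names below are placeholders:

  aws s3api copy-object \
      --endpoint-url https://rgw.example.com \
      --copy-source source-bucket/object-key \
      --bucket destination-bucket --key object-key

With a bucket-level redirect in place the local zone can satisfy such a copy even though the source bucket's objects live in another zone, which is the migration use case described above.)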
>
> Cheers,
> Yixin
erim
> workaround to sync existing objects is to either
>
> * create new objects (or)
>
> * execute "bucket sync run"
>
> after creating/enabling the bucket policy.
>
> Please note that this issue is specific to only bucket policy but
> doesn't exist for sync-policy set at zonegroup level.
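For completeness, the "bucket sync run" workaround mentioned above is a one-off radosgw-admin call on the zone that should catch up; the bucket name is a placeholder:

  radosgw-admin bucket sync run --bucket=<bucket-name>

Progress can then be checked with "radosgw-admin bucket sync status --bucket=<bucket-name>".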
>
>
> Thanks,
>
> Chris
delete the export:
> ceph nfs export delete nfs4rgw /bucketexport
>
> Ganesha servers go back to normal:
> rook-ceph-nfs-nfs1-a-679fdb795-82tcx 2/2 Running
>0 4h30m
> rook-ceph-nfs-nfs4rgw-a-5c594d67dc-nlr42 2/2 Running
>
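(As a side note while debugging this, the exports that remain on the cluster can be listed with the cluster id used above:

  ceph nfs export ls nfs4rgw

which makes it easy to confirm the offending export is really gone.)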
And to clarify, too, this Aquarium work is the first attempt by folks to
build a file-backed storage setup; it's great to see innovation around this.
Matt
On Thu, Oct 20, 2022 at 1:50 PM Joao Eduardo Luis wrote:
> On 2022-10-20 17:46, Matt Benjamin wrote:
> > The ability to run a
> [1] https://github.com/aquarist-labs/s3gw-charts
> [2] https://longhorn.io
> "ceph version 16.2.10 (45fa1a083152e41a408d15505f594ec5f1b4fe17)
> pacific (stable)": 108
> },
> "mds": {
> "ceph version 16.2.10 (45fa1a083152e41a408d15505f594ec5f1b4fe17)
> pacific (stable)": 2
> },
> "rgw"
> If I can't manage the number of versions, then sooner or later the versions
> will kill the entire cluster:(
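One common way to keep old versions from piling up is a lifecycle rule with NoncurrentVersionExpiration; a minimal sketch, assuming placeholder bucket/endpoint names and an arbitrary 30-day retention:

  cat > lc.json <<'EOF'
  {
    "Rules": [
      {
        "ID": "expire-noncurrent",
        "Filter": {"Prefix": ""},
        "Status": "Enabled",
        "NoncurrentVersionExpiration": {"NoncurrentDays": 30}
      }
    ]
  }
  EOF
  aws s3api put-bucket-lifecycle-configuration --bucket <bucket> \
      --endpoint-url <rgw-endpoint> --lifecycle-configuration file://lc.json

RGW's lifecycle thread then expires noncurrent versions older than the configured number of days.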
t05
> > 2~1nElF0c3uq5FnZ9cKlsnGlXKATvjr0g
> > ...
>
>
>
> On the latest master, I see that these objects are deleted immediately
> post abortmp. I believe this issue may have been fixed as part of [1],
> backported to v16.2.7 [2]. Maybe you could try upgrading your c
imilar experiment
> in the past.
Not sure, good question.
Matt
>
> Thanks for any help you can provide!
>
> Jorge
be a better option.
>
> David
>
> On Fri, May 7, 2021 at 4:21 PM Matt Benjamin wrote:
> >
> > Hi David,
> >
> > I think the solution is most likely the ops log. It is called for
> > every op, and has the transaction id.
> >
> > Matt
are emitted like the request logs above by beast, so that we can
> >> handle it using journald. If there's an alternative that would
> >> accomplish the same thing, we're very open to suggestions.
> >>
> >> Thank you,
> >> David
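If it helps, a rough sketch of pointing the ops log (which carries the transaction id mentioned above) at a unix socket instead of RADOS, with a placeholder socket path; a small reader attached to that socket could then feed the JSON records to journald:

  rgw_enable_ops_log = true
  rgw_ops_log_rados = false
  rgw_ops_log_socket_path = /var/run/ceph/rgw-ops-log.sock

Newer releases can also write the ops log to a plain file, which may be simpler to tail.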
;UTF-8-Probleme" trifft sich diesmal abweichend im
> groüen Saal.
5a441452710 op status=0 http_status=200 latency=0.022s
> ==
> 2021-04-22 10:27:55.445 7f2d85fd4700 1 beast: 0x55a441452710:
> 10.151.101.15 - - [2021-04-22 10:27:55.0.44549s] "GET
> /descript/2020/01/17/1b819bd9-5036-4ca4-98f7-b0308e1e3017 HTTP/1.1"
> 200 0 - "aws
able to "source" its data from the OSDs and
> sync that way, then I'd be up for setting up a skeleton implementation, but
> it sounds like RGW Metadata is only going to record things which are flowing
> through the gateway. (Is that correct?)
maintain?
ry cluster is way behind the master cluster because of the relatively
> slow speed.
> - Is there anything else I can do to optimize replication speed ?
>
> Thanks for your comments !
>
> Nicolas
>
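Not a tuning answer, but it helps to quantify the backlog first; both of these are standard commands (bucket name is a placeholder):

  radosgw-admin sync status
  radosgw-admin bucket sync status --bucket=<bucket-name>

The per-shard progress they report makes it easier to tell whether the lag is across the board or concentrated in a few busy buckets.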
ies?
> Should we manually reshard this bucket again?
>
> Thanks!
>
> Dan
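If manual resharding does turn out to be the answer, the usual invocation and its follow-up check look roughly like this (shard count and bucket name are placeholders):

  radosgw-admin bucket reshard --bucket=<bucket-name> --num-shards=<N>
  radosgw-admin reshard status --bucket=<bucket-name>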
during a failed multipart upload:
> > root@jump:~# aws --no-verify-ssl --endpoint-url
> > http://lab-object.cancercollaboratory.org:7480 s3 cp 4GBfile
> > s3://troubleshooting
> > upload failed: ./4GBfile to s3://troubleshooting/4GBfile An error
> > occurred (UnknownError) when callin
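For anyone cleaning up after such a failure, the incomplete upload can be found and aborted by hand; the endpoint below is a placeholder, the bucket and key names are taken from the example above:

  aws s3api list-multipart-uploads --endpoint-url <rgw-endpoint> --bucket troubleshooting
  aws s3api abort-multipart-upload --endpoint-url <rgw-endpoint> \
      --bucket troubleshooting --key 4GBfile --upload-id <UploadId>

The abort itself only removes the parts from the upload; RGW's garbage collection reclaims the space afterwards.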
ete the index file of the bucket.
>
> Pray to god it doesn't happen again.
>
> The new experimental tool to find orphans in RGW is still pending a backport
> to Nautilus.
>
> Maybe @Matt Benjamin can give us an ETA for getting that tool backported...
>
> Regards
>
>
The lifecycle changes in question do not change the semantics or any
API of lifecycle. The behavior change was a regression.
regards,
Matt
On Wed, Aug 5, 2020 at 12:12 PM Daniel Poelzleithner wrote:
>
> On 2020-08-05 15:23, Matt Benjamin wrote:
>
> > There is new lifecycle p
in identifying the issue.
Matt
On Wed, Aug 5, 2020 at 9:23 AM Matt Benjamin wrote:
>
> Hi Chris,
>
> There is new lifecycle processing logic backported to Octopus, it
> looks like, in 15.2.3. I'm looking at the non-current calculation to
> see if it could incorrectly rely on a
>> "ID": "Expiration & incomplete uploads",
> >> "Prefix": "",
> >> "Status": "Enabled",
> >> "NoncurrentVersionExpiration": {
> >> "NoncurrentDays": 18
15 0.00402s
> s3:multi_object_delete http status=403
> 2020-07-11T17:55:54.038+0100 7f45adad7700 1 == req done
> req=0x7f45adaced50 op status=0 http_status=403 latency=0.00402s ==
> 2020-07-11T17:55:54.038+0100 7f45adad7700 20 process_request() returned -13
> 20
luster got heavy uploads and deletes.
>
> Are those params usable? For us it doesn't make sense to keep deleted objects
> in the GC for 2 hours.
>
> Regards
> Manuel
>
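For context, the two-hour figure is the default of rgw_gc_obj_min_wait (7200 seconds). It and the related GC knobs can be lowered, e.g. (values only illustrative, and the config section may differ depending on how the RGW daemons are named):

  ceph config set client.rgw rgw_gc_obj_min_wait 1800
  ceph config set client.rgw rgw_gc_processor_period 1800

Shorter waits reclaim space sooner at the cost of more frequent GC work, so it is worth testing under the same upload/delete load.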
it treat the full object "paths" as
> a completely flat namespace?
ve any such field in the
>>>> introspection result and I can't seem to figure out how to make this all
>>>> work.
>>>>
>>>> I cranked up the logging to 20/20 and still did not see any hints as to
>>>> what part of the policy is causing t
just let it run automatically?
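In case it is useful: lifecycle normally runs on its own inside the configured work window, but it can be inspected and kicked manually:

  radosgw-admin lc list
  radosgw-admin lc process

lc list shows each bucket's last run and status, and lc process forces a pass immediately.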
An issue presenting exactly like this was fixed in spring of last year, for
certain on nautilus and higher.
Matt
On Sat, Apr 11, 2020, 12:04 PM <346415...@qq.com> wrote:
> Ceph Version : Mimic 13.2.4
>
> The cluster has been running steadily for more than a year, recently I
> found cluster
ze of the PR -- 22 commits and
> 32 files altered -- my guess is that it will not be backported to Nautilus.
> However I'll invite the principals to weigh in.
>
> Best,
>
> Eric
>
> --
> J. Eric Ivancich
> he/him/his
> Red Hat Storage
> Ann Arbor, Michigan, USA
15-090436_1254x522_scrubbed.png
> ce2fc9ee-edc8-4dc7-a3fe-b1458c67168b.5805.1_kanariepiet.jpg
>
> root@node1:~# rados -p tier2-hdd ls
> ce2fc9ee-edc8-4dc7-a3fe-b1458c67168b.5805.1__shadow_.FEruUOZaVJXJcOG-e2tO1xcInNzoEvN_0
>
> $ s3cmd info s3://bucket/kanariepiet.jpg
> [snip]
>
t to us too.
>
>
> Robert LeBlanc
oded request classes. It's not especially
> > useful in its current form, but we do have plans to further elaborate
> > the classes and eventually pass the information down to osds for
> > integrated QOS.
> >
> > As of nautilus, though, the thread pool size is the only ef
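For anyone tuning this today, the knob being referred to is rgw_thread_pool_size; a sketch of raising it (the value is only illustrative, and the RGW daemons need a restart to pick it up):

  ceph config set client.rgw rgw_thread_pool_size 1024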