On Mon, Jan 20, 2020 at 12:57:51PM +0000, EDH - Manuel Rios wrote:
> Hi Cephs
>
> Several nodes of our Ceph 14.2.5 are fully dedicated to host cold storage /
> backups information.
>
> Today checking the data usage with a customer found that rgw-admin is
> reporting:
...
> That's near 5TB used
On Wed, Oct 02, 2019 at 01:48:40PM +0200, Christian Pedersen wrote:
> Hi Martin,
>
> Even before adding cold storage on HDD, I had the cluster with SSD only. That
> also could not keep up with deleting the files.
> I am nowhere near I/O exhaustion on the SSDs or even the HDDs.
Please see my presentation
On Wed, May 15, 2019 at 10:59:38AM +0000, Guoyong wrote:
> Does anybody know whether S3 encryption of Ceph is ready for production?
SSE-C I can say I have used & offered in production; I cannot speak for the
SSE-S3 & SSE-KMS.
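For anyone wanting to try it: SSE-C is driven entirely by the client, which
supplies the AES-256 key on every request. A minimal boto3 sketch (endpoint,
bucket, and the key are placeholders, not a tested recipe):

  import boto3

  # SSE-C: RGW encrypts with the caller's key but never stores the key.
  s3 = boto3.client('s3', endpoint_url='http://rgw.example.com')
  key = b'0' * 32  # placeholder 256-bit key; use real random key material

  s3.put_object(Bucket='mybucket', Key='doc.txt', Body=b'data',
                SSECustomerAlgorithm='AES256', SSECustomerKey=key)
  obj = s3.get_object(Bucket='mybucket', Key='doc.txt',
                      SSECustomerAlgorithm='AES256', SSECustomerKey=key)

The same key must be presented on every GET; lose the key and the object is
unrecoverable.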
--
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation Treasurer
On Fri, May 10, 2019 at 02:27:11PM +0000, Sage Weil wrote:
> If you are a Ceph developer who has contributed code to Ceph and object to
> this change of license, please let us know, either by replying to this
> message or by commenting on that pull request.
Am I correct in reading the diff that o
On Sun, Apr 21, 2019 at 03:11:44PM +0200, Marc Roos wrote:
> Double thanks for the on-topic reply. The other two responses were
> making me doubt if my Chinese (which I didn't study) is better than my
> English.
They were almost on topic, but not that useful. Please don't imply
language failings
On Fri, Apr 19, 2019 at 12:10:02PM +0200, Marc Roos wrote:
> I am a bit curious about how production Ceph clusters are being used. I am
> reading here that the block storage is used a lot with OpenStack and
> Proxmox, and via iSCSI with VMware.
Have you looked at the Ceph User Surveys/Census?
https:
On Mon, Apr 08, 2019 at 06:38:59PM +0800, 黄明友 wrote:
>
> hi,all
>
>    I had tested the cloud sync module in radosgw. Ceph version is
>    13.2.5, git commit id is
>    cbff874f9007f1869bfd3821b7e33b2a6ffd4988;
Reading src/rgw/rgw_rest_client.cc
shows that it only generates v2 signatures.
On Wed, Feb 06, 2019 at 11:49:28AM +0200, Maged Mokhtar wrote:
> It could be used for sending cluster maps or other configuration in a
> push model, i believe corosync uses this by default. For use in sending
> actual data during write ops, a primary osd can send to its replicas,
> they do not h
On Sun, Jan 20, 2019 at 09:05:10PM +0000, Max Krasilnikov wrote:
> > Just checking, since it isn't mentioned here: Did you explicitly add
> > public_network+cluster_network as empty variables?
> >
> > Trace the code in the source file I mentioned, specific to your Ceph
> > version, as it has changed
On Sun, Jan 20, 2019 at 08:54:57PM +0000, Max Krasilnikov wrote:
> Good day!
>
> Fri, Jan 18, 2019 at 11:02:51PM +0000, robbat2 wrote:
>
> > On Fri, Jan 18, 2019 at 12:21:07PM +0000, Max Krasilnikov wrote:
> > > Dear colleagues,
> > >
> > > we built an L3 topology for use with Ceph, which is
On Fri, Jan 18, 2019 at 12:21:07PM +0000, Max Krasilnikov wrote:
> Dear colleagues,
>
> we built an L3 topology for use with Ceph, which is based on OSPF routing
> between loopbacks, in order to get a reliable and ECMPed topology, like this:
...
> Ceph is configured in the way
You have a minor misconfiguration
On Sun, Nov 25, 2018 at 07:43:30AM +0700, Lazuardi Nasution wrote:
> Hi Robin,
>
> Do you mean that the Cumulus quagga fork is FRRouting (https://frrouting.org/)?
> As far as I know Cumulus is using it now.
I started this before Cumulus was fully shipping FRRouting; and used
their binaries.
Earlier vers
On Fri, Nov 23, 2018 at 04:03:25AM +0700, Lazuardi Nasution wrote:
> I'm looking for an example Ceph configuration and topology for a full layer 3
> networking deployment. Maybe all daemons can use a loopback alias address in
> this case. But how to set the cluster network and public network configuration,
> using
On Tue, Oct 02, 2018 at 12:37:02PM -0400, Ryan Leimenstoll wrote:
> I was hoping to get some clarification on what "rgw relaxed s3 bucket
> names = false" is intended to filter.
Yes, it SHOULD have caught this case, but does not.
Are you sure it rejects the uppercase? My test also showed that it
On Fri, Sep 21, 2018 at 04:17:35PM -0400, Jin Mao wrote:
> I am looking for an API equivalent of 'radosgw-admin log list' and
> 'radosgw-admin log show'. Existing /usage API only reports bucket level
> numbers like 'radosgw-admin usage show' does. Does anyone know if this is
> possible from the REST API?
On Mon, Apr 30, 2018 at 11:39:11PM -0300, Leonardo Vaz wrote:
> Hey Cephers!
>
> We just announced the 2018 edition of Ceph user Survey:
>
> https://www.surveymonkey.com/r/ceph2018
>
> It will be accepting answers until May 15th and the results will be
> published on the project website.
>
> P
On Tue, Apr 10, 2018 at 10:06:57PM -0500, Robert Stanford wrote:
> I used this command to purge my rgw data:
>
> rados purge default.rgw.buckets.data --yes-i-really-really-mean-it
>
> Now, when I list the buckets with s3cmd, I still see the buckets (s3cmd ls
> shows a listing of them.) When I
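The listing you see does not live in the data pool: bucket entries come from
the bucket index and the user/bucket metadata, which a purge of
default.rgw.buckets.data leaves untouched. A sketch of where to look, assuming
default pool names:

  radosgw-admin metadata list bucket      # bucket entries known to RGW
  rados -p default.rgw.buckets.index ls   # bucket index objects

For removing individual buckets along with their objects,
'radosgw-admin bucket rm --bucket=NAME --purge-objects' is the cleaner path.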
On Tue, Mar 06, 2018 at 02:40:11PM -0500, Ryan Leimenstoll wrote:
> Hi all,
>
> We are trying to move a bucket in radosgw from one user to another in an
> effort both change ownership and attribute the storage usage of the data to
> the receiving user’s quota.
>
> I have unlinked the bucket a
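For reference, the usual sequence is along these lines (a sketch; uid and
bucket names are placeholders, and the exact flags vary by release):

  radosgw-admin bucket unlink --uid=olduser --bucket=thebucket
  radosgw-admin bucket link --uid=newuser --bucket=thebucket
  radosgw-admin bucket chown --uid=newuser --bucket=thebucket  # newer releases

Note that link/unlink only re-home the bucket; the objects keep their original
owner in ACLs unless they are rewritten, which is what chown addresses.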
On Wed, Feb 28, 2018 at 10:51:29PM +0000, Sage Weil wrote:
> On Wed, 28 Feb 2018, Dan Mick wrote:
> > Would anyone else appreciate a Google Calendar invitation for the CDMs?
> > Seems like a natural.
>
> Funny you should mention it! I was just talking to Leo this morning about
> creating a publi
On Wed, Feb 21, 2018 at 10:19:58AM +0000, Dave Holland wrote:
> Hi,
>
> We would like to scan our users' buckets to identify those which are
> publicly-accessible, to avoid potential embarrassment (or worse), e.g.
> http://www.bbc.co.uk/news/technology-42839462
>
> I didn't find a way to use rado
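As a starting point from the S3 side, a sketch that walks bucket ACLs looking
for global grants (endpoint and credentials are placeholders, and you would
need each user's keys, or to iterate users via radosgw-admin):

  import boto3

  s3 = boto3.client('s3', endpoint_url='http://rgw.example.com')
  for b in s3.list_buckets()['Buckets']:
      acl = s3.get_bucket_acl(Bucket=b['Name'])
      for grant in acl['Grants']:
          # Grants to the AllUsers/AuthenticatedUsers groups are what
          # make a bucket publicly accessible.
          uri = grant['Grantee'].get('URI', '')
          if uri.endswith(('AllUsers', 'AuthenticatedUsers')):
              print(b['Name'], grant['Permission'])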
On Mon, Feb 19, 2018 at 07:57:18PM -0600, Graham Allan wrote:
> Sorry to send another long followup, but actually... I'm not sure how to
> change the placement_rule for a bucket - or at least what I tried does
> not seem to work. Using a different (more disposable) bucket, my attempt
> went like
On Fri, Feb 16, 2018 at 07:06:21PM -0600, Graham Allan wrote:
[snip great debugging]
This seems similar to two open issues, could be either of them depending
on how old that bucket is.
http://tracker.ceph.com/issues/22756
http://tracker.ceph.com/issues/22928
- I have a mitigation posted to 22756.
On Tue, Jan 30, 2018 at 10:32:04AM +0100, Ingo Reimann wrote:
> What could be the problem,and how may I solve that?
For anybody else tracking this, the logs & debugging info are filed at
http://tracker.ceph.com/issues/22928
--
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation Treasurer
On Wed, Jan 31, 2018 at 07:39:02AM +0100, Ingo Reimann wrote:
> Hi Robin,
>
> thanks for your reply.
>
> Concerning "https://tracker.ceph.com/issues/22756 - buckets showing as
> empty": Our cluster is rather old - argonaut, but the affected bucket and
> user are created under jewel.
>
> If you ne
On Tue, Jan 30, 2018 at 10:32:04AM +0100, Ingo Reimann wrote:
> The problem:
> Some Buckets are not accessible from the luminous gateway. The metadata
> for that buckets seemed ok, but listing was not possible. A local s3cmd
> got "404 NoSuchKey". I exported and imported the metadata for one instan
On Mon, Dec 25, 2017 at 11:52:36AM +0800, QR wrote:
> Does anyone know the reason that ERR_BUCKET_EXISTS is modified to zero?
> Thanks.
This comes down to arguing about AWS S3 CreateBucket behavior if the
bucket already existed and was owned by you (plus which region it is in
vs where the request was
On Fri, Dec 15, 2017 at 05:21:37PM +0000, David Turner wrote:
> We're trying to build an auditing system for when a user key pair performs
> an operation on a bucket (put, delete, creating a bucket, etc) and so far
> were only able to find this information in the level 10 debug logging in
> the rgw
If you use 'radosgw-admin bi list', you can get a listing of the raw bucket
index. I'll bet that the objects aren't being shown at the S3 layer
because something is wrong with them. But since they are in the bi-list,
you'll get 409 BucketNotEmpty.
At this point, I've found two different approaches
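For anyone following along, the index listing mentioned above is simply
(bucket name is a placeholder):

  radosgw-admin bi list --bucket=thebucket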
On Mon, Oct 30, 2017 at 04:42:00PM +0000, alastair.dewhu...@stfc.ac.uk wrote:
> Hello
...
> We have tested that individually both the IPv4 and IPv6 works (the
> service starts and transfers work), so we believe the problem is with
> how ceph parses the port setting. We did consider the possibility
On Sun, Oct 22, 2017 at 01:31:03PM +0000, Rudenko Aleksandr wrote:
> In the past we rewrote the HTTP response headers with Apache rules for our
> web interface and passed the CORS check. But now it's impossible to solve at
> the balancer level.
You CAN modify the CORS responses at the load-balancer level.
Find below the general idea.
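As a sketch of what this can look like in HAProxy, inside the relevant
frontend/backend (origin, methods, and headers are placeholders, not a
drop-in config):

  http-response set-header Access-Control-Allow-Origin "https://app.example.com"
  http-response set-header Access-Control-Allow-Methods "GET, HEAD, PUT, POST, DELETE"
  http-response set-header Access-Control-Allow-Headers "Authorization, Content-Type"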
On Thu, Sep 07, 2017 at 08:24:04PM +0000, Robin H. Johnson wrote:
> pg 5.3d40 is active+clean+inconsistent, acting [1322,990,655]
> pg 5.f1c0 is active+clean+inconsistent, acting [631,1327,91]
Here is the output of 'rados list-inconsistent-obj' for the PGs:
$ sudo rados list-inconsistent-obj
Hi,
Our clusters were upgraded to v10.2.9, from ~v10.2.7 (actually a local
git snapshot that was not quite 10.2.7), and since then, we're seeing a
LOT more scrub errors than previously.
The digest logging on the scrub errors, in some cases, is also now maddeningly
short: it doesn't contain ANY information
On Wed, Sep 06, 2017 at 02:08:14PM +0000, Engelmann Florian wrote:
> we are running a luminous cluster and three radosgw to serve an S3-compatible
> object store. As we are (currently) not using OpenStack we have to use the
> RadosGW Admin API to get our billing data. I tried to access the API with
I just hit this too, and found it was fixed in master, so generated a
backport issue & PR:
http://tracker.ceph.com/issues/20966
https://github.com/ceph/ceph/pull/16952
--
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation Trustee & Treasurer
E-Mail : robb...@gentoo.org
GnuPG FP : 11AC
(Trim lots of good related content).
The upcoming HAProxy 1.8 has landed further patches for improving hot
restarts/reloads of HAProxy, which previously led to a brief gap period
when new connections were not serviced. Lots of other approaches had
been seen, including delaying TCP SYN momentarily
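For the curious, the 1.8 mechanism is master-worker mode handing the listening
sockets to the new process over the stats socket; a minimal sketch (socket
path is a placeholder):

  global
      master-worker
      stats socket /var/run/haproxy.sock mode 600 level admin expose-fd listeners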
On Wed, May 31, 2017 at 05:02:14PM +0100, Dave Holland wrote:
> I put a radosgw debug=20 log of the successful OPTIONS call and failing
> POST call here:
> https://docs.google.com/document/d/1i3exJSil1xj14ZrDOF_oM9eZC238gnNVAsnaZ-Pkvzo/edit?usp=sharing
>
> Happy to provide other debug info if nece
On Fri, May 19, 2017 at 01:55:50PM +0000, george.vasilaka...@stfc.ac.uk wrote:
> Anyone seen this before who can point me in the right direction to start
> digging?
Your RGW buckets: how many objects are in them, and do they have the index
sharded?
I know we have some very large & old buckets (10M+
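Both are quick to check (a sketch; the bucket name and instance id are
placeholders, and where num_shards is reported varies by version):

  radosgw-admin bucket stats --bucket=thebucket   # per-bucket object counts
  radosgw-admin metadata get bucket.instance:thebucket:INSTANCE_ID  # num_shards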
On Thu, May 04, 2017 at 04:35:21PM +0800, hrchu wrote:
> Thanks for reply.
>
> tc can only apply limits on interfaces or given IPs, but what I am talking
> about is "per connection", e.g., each put object could be 5MB/s, get
> object could be 1MB/s.
To achieve your required level of control, you need
On Thu, Mar 16, 2017 at 02:22:08AM +0000, Rich Rocque wrote:
> Has anyone else run into this or have any suggestions on how to remedy it?
We need a LOT more info.
> After a couple months of almost no issues, our Ceph cluster has
> started to have frequent failures. Just this week it's failed about
On Fri, Mar 03, 2017 at 10:55:06AM +1100, Blair Bethwaite wrote:
> Does anyone have any recommendations for good tools to perform
> file-system/tree backups and restores to/from a RGW object store (Swift or
> S3 APIs)? Happy to hear about both FOSS and commercial options please.
This isn't Ceph spe
On Thu, Feb 23, 2017 at 10:40:31PM +0000, Scottix wrote:
> Ya the ceph-mon.$ID.log
>
> I was running ceph -w when one of them occurred too and it never output
> anything.
>
> Here is a snippet for the 5:11AM occurrence.
Yep, I don't see anything in there that should have triggered
HEALTH_WARN
On Thu, Feb 23, 2017 at 09:49:21PM +0000, Scottix wrote:
> ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
>
> We are seeing a weird behavior and are not sure how to diagnose what could be
> going on. We started monitoring the overall_status from the json query and
> every once in a whil
On Fri, Feb 10, 2017 at 05:14:51PM +0100, Uwe Mesecke wrote:
> Hey,
>
> just to keep you updated: I was able to add a lifecycle using s3cmd (version
> 1.6.1). I think I can live with that because for me changing lifecycles is
> some one-time setup task.
s3cmd does fallback to V2 signatures.
Can
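For anyone else doing the same, the s3cmd 1.6.x invocation is roughly this
(bucket and XML file are placeholders; --signature-v2 just makes the fallback
explicit):

  s3cmd --signature-v2 setlifecycle lifecycle.xml s3://thebucket
  s3cmd --signature-v2 getlifecycle s3://thebucket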
On Fri, Jan 20, 2017 at 11:37:47AM +0100, Wido den Hollander wrote:
> Maybe the dev didn't want to write docs, he/she forgot or just didn't get to
> it yet.
>
> It would be very much appreciated if you would send a PR with the updated
> documentation :)
As the dev, I did write docs, and have pos
On Wed, Dec 28, 2016 at 09:31:57PM +0100, Marc Roos wrote:
> Is it possible to rsync to the ceph object store with something like
> this tool of amazon?
> https://aws.amazon.com/customerapps/1771
That's a service built on top of AWS EC2 that just happens to back
storage into AWS S3.
There's no fu
On Wed, Oct 26, 2016 at 11:43:15AM +0200, Trygve Vea wrote:
> Hi!
>
> I'm trying to get s3website working on one of our Rados Gateway
> installations, and I'm having some problems finding out what needs to
> be done for this to work. It looks like this is a halfway secret
> feature, as I can only
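The short version, as a sketch of the relevant ceph.conf options (section name
and domains are placeholders; exact option availability depends on your
release):

  [client.rgw.gateway]
  rgw enable static website = true
  rgw dns name = s3.example.com
  rgw dns s3website name = s3-website.example.com

The second hostname is the key part: requests arriving for the s3website name
get the website behavior rather than the plain S3 API.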
On Mon, Jul 18, 2016 at 10:48:16AM +0300, Victor Efimov wrote:
> <... xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>someowner</ID><DisplayName>SOMEOWNER</DisplayName></Owner>
>
> note the "someowner" is used as the ID.
> The problem is that the S3-compatible library I use crashes on this; it
> expects a 64-character hex string.
>
> According to S3
On Fri, Jun 03, 2016 at 11:34:35AM +0700, Khang Nguyễn Nhật wrote:
> s3 = boto3.client(service_name='s3', region_name='', use_ssl=False,
> endpoint_url='http://192.168.1.10:', aws_access_key_id=access_key,
> aws_secret_access_key= secret_key,
> config=Config(
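For context, a completed form of that client setup might look like the
following (the port, credentials, and Config contents are assumptions on my
part; the original snippet was cut off at config=Config):

  import boto3
  from botocore.client import Config

  access_key = 'ACCESS'  # placeholder credentials
  secret_key = 'SECRET'

  s3 = boto3.client(service_name='s3', region_name='', use_ssl=False,
                    endpoint_url='http://192.168.1.10:7480',  # port assumed
                    aws_access_key_id=access_key,
                    aws_secret_access_key=secret_key,
                    config=Config(signature_version='s3'))  # 's3' = V2 signing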
On Sun, May 29, 2016 at 05:17:14PM +0530, Gaurav Bafna wrote:
> Hi Cephers,
>
> I am unable to create a bucket hosting a website in my vstart cluster.
>
> When I do this in boto :
>
> website_bucket.configure_website('index.html','error.html')
>
> I get :
>
> boto.exception.S3ResponseError: S3R
On Sun, May 01, 2016 at 08:46:36PM +1000, Stuart Longland wrote:
> Hi all,
>
> This evening I was in the process of deploying a ceph cluster by hand.
> I did it by hand because to my knowledge, ceph-deploy doesn't support
> Gentoo, and my cluster here runs that.
You'll want the ceph-disk & ceph-de
On Mon, Apr 11, 2016 at 06:49:09PM -0400, Shinobu Kinjo wrote:
> Just to clarify to prevent any confusion.
>
> Honestly I've never used ext4 as underlying filesystem for the Ceph cluster,
> but according to wiki [1], ext4 is recommended -;
>
> [1] https://en.wikipedia.org/wiki/Ceph_%28software%29
On Wed, Mar 16, 2016 at 06:36:33AM +0000, Pavan Rallabhandi wrote:
> I found this discussed here before, but couldn't find any solution,
> hence the mail. In RGW, for a bucket holding objects in the range of ~
> millions, one can find it takes forever to delete the bucket (via
> radosgw-admin
On Thu, Mar 03, 2016 at 01:55:13PM +0100, Ritter Sławomir wrote:
> Hi,
>
> I think this is a really serious problem - again:
>
> - we silently lost S3/RGW objects in clusters
>
> Moreover, our situation looks very similar to the one described in
> uncorrected bug #13764 (Hammer) and in corrected #8
On Mon, Feb 29, 2016 at 04:58:07PM +0000, Luis Periquito wrote:
> Hi all,
>
> I have a biggish ceph environment and currently creating a bucket in
> radosgw can take as long as 20s.
>
> What affects the time a bucket takes to be created? How can I improve that
> time?
>
> I've tried to create i
On Tue, Feb 16, 2016 at 04:16:49PM -0600, Justin Restivo wrote:
> I verified that this issue is on Amazon's side -- I watched it populate to
> 101 and it refused to let me create buckets past that. I just submitted a new
> ticket as I should have had a bucket limit of 500. Thank you for your
> response
On Tue, Feb 16, 2016 at 10:08:38AM -0600, Justin Restivo wrote:
> Hi all,
>
> I am attempting to run the Ceph S3 tests and am really struggling. Any help
> at all would be appreciated.
>
> I have my credentials pointing at my AWS environment, which has a 500
> bucket limit. When I run the tests,
On Wed, Jan 27, 2016 at 12:08:36AM +0100, Florian Haas wrote:
> Agreed, but you don't necessarily need haproxy to do load balancing
> (round-robin DNS CNAME with short TTLs is another option), and Wido
> started the discussion around an option to ditch HAProxy for radosgw
> altogether. ;)
There's a
On Tue, Jan 26, 2016 at 11:51:51PM +0100, Florian Haas wrote:
> Hey, slick. Thanks! Out of curiosity, does the wip branch correctly
> handle Accept-Encoding: gzip?
No, Accept-Encoding is NOT presently implemented in RGW; regardless of
static-website.
It's pretty low priority for the use-cases I n
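In the meantime, compression can be handled in front of RGW instead; an
HAProxy sketch (my suggested workaround, not something RGW itself does):

  compression algo gzip
  compression type text/html text/css application/json application/xml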
(I'm on the list, no need to respond directly to either my addresses,
robb...@gentoo.org or robin.john...@dreamhost.com).
On Tue, Jan 26, 2016 at 02:46:00PM -0800, Yehuda Sadeh-Weinraub wrote:
> > The moment this lands in a release, we'll be more than happy to ditch
> > the HAProxy request/respons
On Sat, Oct 03, 2015 at 11:07:22AM +0200, Loic Dachary wrote:
> Hi Ceph,
>
> TL;DR: If you have one day a week to work on the next Ceph stable releases
> [1] your help would be most welcome.
I'd like to throw my name in.
As of August, I work on Ceph development for Dreamhost. Most of my work
foc
On Thu, Oct 01, 2015 at 10:01:03PM -0400, J David wrote:
> So, do medium-sized IT organizations (i.e. those without the resources
> to have a Ceph developer on staff) run Hammer-based deployments in
> production successfully?
I'm not sure if I count, given that I'm now working at DreamHost as the
i
On Thu, Sep 17, 2015 at 11:19:28AM -0700, Sage Weil wrote:
> > Please revoke the old keys, so that if they were taken by the attacker,
> > they cannot be used (you can't un-revoke a key generally).
> Done:
> http://pgp.mit.edu/pks/lookup?search=ceph&op=index
Thank you!
--
Robin Hugh Johnson
On Thu, Sep 17, 2015 at 09:29:35AM -0700, Sage Weil wrote:
> Last week, Red Hat investigated an intrusion on the sites of both the Ceph
> community project (ceph.com) and Inktank (download.inktank.com), which
> were hosted on a computer system outside of Red Hat infrastructure.
>
> Ceph.com pro
On Wed, Sep 09, 2015 at 05:28:26PM +0000, Chang, Fangzhe (Fangzhe) wrote:
> I noticed that S3 Java SDK for getContentType() no longer works in
> Ceph/Radosgw v0.94 (Hammer). It seems that S3 SDK expects the metadata
> “Content-Type” whereas ceph responds with “Content-type”.
> Does anyone know h
On Wed, Aug 26, 2015 at 11:52:02AM +0100, Luis Periquito wrote:
> On Mon, Feb 23, 2015 at 10:18 PM, Yehuda Sadeh-Weinraub wrote:
> >
> > --
> > From: "Shinji Nakamoto"
> > To: ceph-us...@ceph.com
> > Sent: Friday, February 20, 2015 3:58:39 PM
> > Subject:
Override 'rgw frontends' for each instance as well.
Eg:
rgw frontends = civetweb port=7480
rgw frontends = civetweb port=7481
The default value is:
rgw frontends = fastcgi, civetweb port=7480
On Wed, Jun 24, 2015 at 07:52:36PM -0300, Italo Santos wrote:
> Hello Somnath,
>
> I’ve created the fil
I'm trying to get better performance out of exporting RBD volumes via
tgt for iSCSI consumers...
By terrible, I mean I'm getting <5MB/sec reads, <50 IOPS. I'm pretty sure neither RBD
nor iSCSI itself is the problem, as they individually perform well:
iSCSI to RAM-backed: >60MB/sec, >500IOPS
iSCSI to
On Wed, Sep 24, 2014 at 11:31:29AM -0700, Yehuda Sadeh wrote:
> On Wed, Sep 24, 2014 at 11:17 AM, Craig Lewis
> wrote:
> > Yehuda, are there any potential problems there? I'm wondering if duplicate
> > bucket names that don't have the same contents might cause problems? Would
> > the second clu
On Tue, Sep 23, 2014 at 03:12:53PM -0600, John Nielsen wrote:
> Keep Cluster A intact and migrate it to your new hardware. You can do
> this with no downtime, assuming you have enough IOPS to support data
> migration and normal usage simultaneously. Bring up the new OSDs and
> let everything rebala
On Sun, Sep 21, 2014 at 02:33:09PM +0900, Christian Balzer wrote:
> > For a variety of reasons, none good anymore, we have two separate Ceph
> > clusters.
> >
> > I would like to merge them onto the newer hardware, with as little
> > downtime and data loss as possible; then discard the old hardwar
For a variety of reasons, none good anymore, we have two separate Ceph
clusters.
I would like to merge them onto the newer hardware, with as little
downtime and data loss as possible; then discard the old hardware.
Cluster A (2 hosts):
- 3TB of S3 content, >100k files, file mtimes important
- <50