[ceph-users] Intel SSD (DC S3700) Power_Loss_Cap_Test failure

2016-08-02 Thread Christian Balzer
Hello, not a Ceph-specific issue, but this is probably the largest sample size of SSD users I'm familiar with. ^o^ This morning I was woken at 4:30 by Nagios, one of our Ceph nodes having a religious experience. It turns out that the SMART check plugin I run to mostly get an early wearout warni
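A minimal sketch (not the actual Nagios plugin referenced above) of how such a SMART check can be scripted; it assumes smartmontools is installed and that the drive reports the Intel attribute Power_Loss_Cap_Test (attribute 175 on the DC S3700) in the usual smartctl -A column layout:

    #!/usr/bin/env python3
    """Illustrative SMART check for the Intel power-loss capacitor self-test.
    Assumes smartmontools is installed and the standard 'smartctl -A' table
    layout: ID NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW."""
    import subprocess
    import sys

    def check_power_loss_cap(device):
        # 'smartctl -A' prints the vendor attribute table for the device.
        out = subprocess.run(["smartctl", "-A", device],
                             capture_output=True, text=True, check=False).stdout
        for line in out.splitlines():
            if "Power_Loss_Cap_Test" in line:
                fields = line.split()
                value, thresh, when_failed = fields[3], fields[5], fields[8]
                if when_failed != "-" or int(value) <= int(thresh):
                    print(f"CRITICAL: {device} Power_Loss_Cap_Test failing ({line.strip()})")
                    return 2
                print(f"OK: {device} Power_Loss_Cap_Test value={value} thresh={thresh}")
                return 0
        print(f"UNKNOWN: {device} does not report Power_Loss_Cap_Test")
        return 3

    if __name__ == "__main__":
        sys.exit(check_power_loss_cap(sys.argv[1] if len(sys.argv) > 1 else "/dev/sda"))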

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-02 Thread Ric Wheeler
On 08/02/2016 07:26 PM, Ilya Dryomov wrote: This seems to reflect the granularity (4194304), which matches the 8192 pages (8192 x 512 = 4194304). However, there is no alignment value. Can discard_alignment be specified with RBD? It's exported as a read-only sysfs attribute, just like disca
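For reference, a small sketch (the device name rbd0 is an assumption) that reads the discard limits the kernel exports for a mapped RBD device, including the read-only discard_alignment attribute mentioned above:

    """Read the discard-related limits the kernel exports for a block device.
    discard_alignment lives at the device level; granularity and max bytes
    live under queue/. The device name below is a placeholder."""
    from pathlib import Path

    def discard_limits(dev="rbd0"):
        base = Path("/sys/block") / dev
        attrs = {
            "discard_alignment": base / "discard_alignment",
            "discard_granularity": base / "queue" / "discard_granularity",
            "discard_max_bytes": base / "queue" / "discard_max_bytes",
        }
        return {name: int(path.read_text()) for name, path in attrs.items()}

    if __name__ == "__main__":
        for name, value in discard_limits().items():
            print(f"{name}: {value}")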

[ceph-users] Ceph RGW issue.

2016-08-02 Thread Khang Nguyễn Nhật
Hi, I have seen an error when using Ceph RGW v10.2.2 with the S3 API. I have three S3 users: A, B, and C. Each of A, B, and C has some buckets and objects. When I use A or C to PUT or GET an object to RGW, I see "decode_policy Read AccessControlPolicy2BFULL_CONTROL" in ceph-cli
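A hypothetical reproduction sketch using boto3 against RGW's S3 API; the endpoint, credentials, bucket and key below are placeholders, not details from the original report:

    """Put and get an object as one user, then dump the ACL that RGW decodes,
    which is where the AccessControlPolicy / FULL_CONTROL entries appear."""
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://rgw.example.com:7480",   # placeholder RGW endpoint
        aws_access_key_id="USER_A_ACCESS_KEY",        # placeholder
        aws_secret_access_key="USER_A_SECRET_KEY",    # placeholder
    )

    s3.put_object(Bucket="bucket-a", Key="test.txt", Body=b"hello")
    obj = s3.get_object(Bucket="bucket-a", Key="test.txt")
    acl = s3.get_object_acl(Bucket="bucket-a", Key="test.txt")
    for grant in acl["Grants"]:
        print(grant["Grantee"].get("ID"), grant["Permission"])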

Re: [ceph-users] Read Stalls with Multiple OSD Servers

2016-08-02 Thread Helander, Thomas
Hi David, There’s a good amount of backstory to our configuration, but I’m happy to report I found the source of my problem. We were applying some “optimizations” for our 10GbE via sysctl, including disabling net.ipv4.tcp_sack. Re-enabling net.ipv4.tcp_sack resolved the issue. Thanks, Tom Fro
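A small sketch (not Thomas's actual tooling) of how to verify the setting that caused the stalls; it reads the procfs entry behind net.ipv4.tcp_sack:

    """Check whether TCP selective acknowledgements are enabled, since
    disabling net.ipv4.tcp_sack was what caused the read stalls above."""
    from pathlib import Path

    def tcp_sack_enabled():
        # sysctl net.ipv4.tcp_sack maps to this procfs entry
        return Path("/proc/sys/net/ipv4/tcp_sack").read_text().strip() == "1"

    if __name__ == "__main__":
        if tcp_sack_enabled():
            print("OK: net.ipv4.tcp_sack = 1")
        else:
            # Re-enable with: sysctl -w net.ipv4.tcp_sack=1 (and persist it in sysctl.conf)
            print("WARNING: net.ipv4.tcp_sack is disabled")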

Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

2016-08-02 Thread Gaurav Goyal
Hello David, Thanks a lot for detailed information! This is going to help me. Regards Gaurav Goyal On Tue, Aug 2, 2016 at 11:46 AM, David Turner wrote: > I'm going to assume you know how to add and remove storage > http://docs.ceph.com/docs/hammer/rados/operations/add-or-rm-osds/. The > onl

Re: [ceph-users] ONE pg deep-scrub blocks cluster

2016-08-02 Thread c
On 2016-08-02 13:30, c wrote: Hello Guys, this time without the original acting-set osd.4, 16 and 28. The issue still exists... [...] For the record, this ONLY happens with this PG and no others that share the same OSDs, right? Yes, right. [...] When doing the deep-scrub, monitor (atop,

Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

2016-08-02 Thread David Turner
I'm going to assume you know how to add and remove storage http://docs.ceph.com/docs/hammer/rados/operations/add-or-rm-osds/. The only other part of this process is reweighting the crush map for the old osds to a new weight of 0.0 http://docs.ceph.com/docs/master/rados/operations/crush-map/. I
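A sketch of that reweighting step, assuming the ids of the OSDs being retired are known (the ids below are placeholders); it drives each old OSD's CRUSH weight to 0.0 with the standard ceph CLI:

    """Drain the old OSDs by setting their CRUSH weight to 0.0, which
    triggers backfill away from them without marking them out."""
    import subprocess

    OLD_OSDS = [0, 1, 2]  # placeholder ids of the OSDs being retired

    for osd_id in OLD_OSDS:
        subprocess.run(
            ["ceph", "osd", "crush", "reweight", f"osd.{osd_id}", "0.0"],
            check=True,
        )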

Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

2016-08-02 Thread David Turner
Just add the new storage and weight the old storage to 0.0 so all data will move off of the old storage to the new storage. It's not unique to migrating from SANs to Local Disks. You would do the same any time you wanted to migrate to newer servers and retire old servers. After the backfillin
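A rough sketch of waiting for that backfilling to finish before the old storage is removed; it polls the human-readable 'ceph pg stat' output, which is fragile but keeps the example short (parsing JSON output would be more robust):

    """Poll until no PGs report backfill/recovery/degraded states, then it is
    safe to proceed with removing the drained OSDs."""
    import subprocess
    import time

    def cluster_settled():
        out = subprocess.run(["ceph", "pg", "stat"],
                             capture_output=True, text=True, check=True).stdout
        return not any(state in out for state in ("backfill", "recover", "degraded"))

    while not cluster_settled():
        time.sleep(60)
    print("Backfill complete; old OSDs can now be removed per the add-or-rm-osds doc.")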

Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

2016-08-02 Thread Gaurav Goyal
Hi David, Thanks for your comments! Could you please share the procedure/document if one is available? Regards Gaurav Goyal On Tue, Aug 2, 2016 at 11:24 AM, David Turner wrote: > Just add the new storage and weight the old storage to 0.0 so all data > will move off of the old storage to the new

[ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

2016-08-02 Thread Gaurav Goyal
Dear Ceph Team, I need your guidance on this. Regards Gaurav Goyal On Wed, Jul 27, 2016 at 4:03 PM, Gaurav Goyal wrote: > Dear Team, > > I have ceph storage installed on SAN storage which is connected to > Openstack Hosts via iSCSI LUNs. > Now we want to get rid of SAN storage and move over c

Re: [ceph-users] Fwd: Re: (no subject)

2016-08-02 Thread Gaurav Goyal
Hello Jason/Kees, I am trying to take a snapshot of my instance. The image was stuck in the Queued state and the instance is stuck in the Image Pending Upload state. I had to manually cancel the job as it had not been working for the last hour; my instance is still in the Image Pending Upload state. Is it something

[ceph-users] Reminder: CDM tomorrow

2016-08-02 Thread Patrick McGarry
Hey cephers, Just a reminder that our Ceph Developer Monthly discussion is happening tomorrow at 12:30p EDT on bluejeans. Please, if you are working on something in the Ceph code base currently, just drop a quick note on the CDM page so that we’re able to get it on the agenda. Thanks! http://wiki

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-02 Thread Alex Gorbachev
On Tue, Aug 2, 2016 at 9:56 AM, Ilya Dryomov wrote: > On Tue, Aug 2, 2016 at 3:49 PM, Alex Gorbachev > wrote: >> On Mon, Aug 1, 2016 at 11:03 PM, Vladislav Bolkhovitin wrote: >>> Alex Gorbachev wrote on 08/01/2016 04:05 PM: Hi Ilya, On Mon, Aug 1, 2016 at 3:07 PM, Ilya Dryomov w

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-02 Thread Ilya Dryomov
On Tue, Aug 2, 2016 at 3:49 PM, Alex Gorbachev wrote: > On Mon, Aug 1, 2016 at 11:03 PM, Vladislav Bolkhovitin wrote: >> Alex Gorbachev wrote on 08/01/2016 04:05 PM: >>> Hi Ilya, >>> >>> On Mon, Aug 1, 2016 at 3:07 PM, Ilya Dryomov wrote: On Mon, Aug 1, 2016 at 7:55 PM, Alex Gorbachev >>>

Re: [ceph-users] Cleaning Up Failed Multipart Uploads

2016-08-02 Thread Tyler Bishop
We're having the same issues. I have a 1200 TB pool at 90% utilization; however, disk utilization is only 40%. Tyler Bishop, Chief Technical Officer, 513-299-7108 x10, tyler.bis...@beyondhosting.net

Re: [ceph-users] Cleaning Up Failed Multipart Uploads

2016-08-02 Thread Brian Felton
I am actively working through the code and debugging everything. I figure the issue is with how RGW is listing the parts of a multipart upload when it completes or aborts the upload (read: it's not getting *all* the parts, just those that are either most recent or tagged with the upload id). As s
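For anyone wanting to clean up from the client side while the RGW listing behaviour is investigated, a sketch (endpoint, credentials and bucket are placeholders; pagination omitted) that lists in-progress multipart uploads over the S3 API and aborts them:

    """List in-progress multipart uploads on a bucket and abort them, which
    asks RGW to remove the parts it knows about for each upload id."""
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://rgw.example.com:7480",  # placeholder
        aws_access_key_id="ACCESS_KEY",              # placeholder
        aws_secret_access_key="SECRET_KEY",          # placeholder
    )

    bucket = "my-bucket"  # placeholder
    resp = s3.list_multipart_uploads(Bucket=bucket)
    for upload in resp.get("Uploads", []):
        print("aborting", upload["Key"], upload["UploadId"])
        s3.abort_multipart_upload(Bucket=bucket, Key=upload["Key"],
                                  UploadId=upload["UploadId"])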

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-02 Thread Alex Gorbachev
On Mon, Aug 1, 2016 at 11:03 PM, Vladislav Bolkhovitin wrote: > Alex Gorbachev wrote on 08/01/2016 04:05 PM: >> Hi Ilya, >> >> On Mon, Aug 1, 2016 at 3:07 PM, Ilya Dryomov wrote: >>> On Mon, Aug 1, 2016 at 7:55 PM, Alex Gorbachev >>> wrote: RBD illustration showing RBD ignoring discard unt

Re: [ceph-users] ONE pg deep-scrub blocks cluster

2016-08-02 Thread c
Hello Guys, this time without the original acting-set osd.4, 16 and 28. The issue still exists... [...] For the record, this ONLY happens with this PG and no others that share the same OSDs, right? Yes, right. [...] When doing the deep-scrub, monitor (atop, etc) all 3 nodes and see if a p
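A sketch of the reproduction step being discussed, assuming the problematic PG id is known (the id below is a placeholder): trigger a deep-scrub of just that PG and watch cluster output while atop runs on the three OSD nodes:

    """Kick off a deep-scrub on a single PG and watch for slow/blocked
    requests while it runs."""
    import subprocess

    PG_ID = "3.af"  # placeholder: the one PG whose deep-scrub blocks the cluster

    # Ask the primary OSD to deep-scrub just this PG.
    subprocess.run(["ceph", "pg", "deep-scrub", PG_ID], check=True)

    # Stream cluster events (blocks until interrupted) to spot blocked requests.
    subprocess.run(["ceph", "-w"])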

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-02 Thread Ilya Dryomov
On Tue, Aug 2, 2016 at 1:05 AM, Alex Gorbachev wrote: > Hi Ilya, > > On Mon, Aug 1, 2016 at 3:07 PM, Ilya Dryomov wrote: >> On Mon, Aug 1, 2016 at 7:55 PM, Alex Gorbachev >> wrote: >>> RBD illustration showing RBD ignoring discard until a certain >>> threshold - why is that? This behavior is u
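To illustrate the threshold behaviour being asked about, a sketch of the general alignment arithmetic (not krbd's exact code path): a discard can only free whole allocation units, so the requested range is effectively clipped to granularity boundaries, and requests smaller than one unit free nothing. The numbers use the 4194304-byte granularity mentioned earlier in the thread:

    """Clip a discard request to the granularity grid; if the clipped range
    is empty, the discard frees no space."""

    def clipped_discard(offset, length, granularity=4 * 1024 * 1024, alignment=0):
        # Round the start up and the end down to the granularity grid.
        start = ((offset - alignment + granularity - 1) // granularity) * granularity + alignment
        end = ((offset + length - alignment) // granularity) * granularity + alignment
        return (start, end - start) if end > start else (None, 0)

    # A 1 MiB discard inside a 4 MiB allocation unit frees nothing...
    print(clipped_discard(0, 1 * 1024 * 1024))   # (None, 0)
    # ...while a discard spanning a full unit frees that unit.
    print(clipped_discard(0, 5 * 1024 * 1024))   # (0, 4194304)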