Re: [ceph-users] PG stuck peering after host reboot

2017-02-24 Thread Wido den Hollander
> On 23 February 2017 at 19:09, george.vasilaka...@stfc.ac.uk wrote: > > > Since we need this pool to work again, we decided to accept the data loss and > try to move on. > > So far, no luck. We tried a force create but, as expected, with a PG that is > not peering this did absolutely nothing.
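The sequence being discussed above can be sketched as follows. This is a minimal, hedged outline of the usual Jewel-era commands for inspecting and force-recreating a stuck PG; the PG id `1.2f3` is a placeholder, not taken from the thread, and `force_create_pg` only takes effect once the PG's OSDs can actually be reached.

```shell
# Inspect why the PG is stuck (1.2f3 is a placeholder PG id)
ceph pg 1.2f3 query

# List all PGs stuck inactive/peering
ceph pg dump_stuck inactive

# Force recreation of the PG, accepting the data loss;
# as noted in the thread, this does nothing while the PG cannot peer
ceph pg force_create_pg 1.2f3
```

These commands require a live cluster and admin keyring; they are shown only to illustrate the steps the poster describes.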

[ceph-users] ceph-disk and mkfs.xfs are hanging on SAS SSD

2017-02-24 Thread Rajesh Kumar
Hi, I am using Ceph Jewel on Ubuntu 16.04 Xenial, with SAS SSDs and driver=megaraid_sas. "/usr/bin/python /usr/sbin/ceph-disk prepare --osd-uuid --fs-type xfs /dev/sda3" is hanging. This command starts "mkfs.xfs -f -i size=2048 -- /dev/sda3", which is what is actually hanging. The system becomes very
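A hang like this is usually the process stuck in uninterruptible sleep (D state) inside the controller driver rather than a ceph-disk problem. A minimal diagnostic sketch, assuming a standard Linux host (the `<pid>` below is a placeholder you substitute with the mkfs.xfs PID):

```shell
# Kernel hung-task warnings often name the driver involved (here megaraid_sas)
dmesg | grep -i -E 'hung task|blocked for more than|megaraid'

# Find processes in uninterruptible sleep and their kernel wait channel
ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /^D/'

# Inspect the kernel stack of the stuck mkfs.xfs (replace <pid>)
cat /proc/<pid>/stack
```

If the stack trace sits inside the megaraid_sas or SCSI layers, the issue is below Ceph: firmware, controller cache settings, or the driver itself.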

[ceph-users] Fwd: Ceph configuration suggestions

2017-02-24 Thread Karthik Nayak
Hello, We are using Ceph as distributed block storage for our OpenStack setup. Details: Ceph version: 10.2.5 OpenStack: Kilo We have 7 nodes, with each node's configuration as below: HDD: 9 x 1.8T SSD: 3 x 447G CPU: Intel® Xeon® Processor E5-2620 v4 (8 cores, 16 threads) RAM: 64G Network: 2 x

Re: [ceph-users] S3 Radosgw : how to grant a user within a tenant

2017-02-24 Thread Vincent Godin
>On 02/17/2017 06:25 PM, Vincent Godin wrote: >> I created 2 users: jack & bob inside tenant_A >> jack created a bucket named BUCKET_A and wants to give read access to the >> user bob >> >> with s3cmd, I can grant a user without a tenant easily: s3cmd setacl >> --acl-grant=read:user s3://BUCKET_A

Re: [ceph-users] rgw multisite resync only one bucket

2017-02-24 Thread Marius Vaitiekunas
On Wed, Feb 22, 2017 at 8:33 PM, Yehuda Sadeh-Weinraub wrote: > On Wed, Feb 22, 2017 at 6:19 AM, Marius Vaitiekunas > wrote: > > Hi Cephers, > > > > We are testing an rgw multisite solution between two DCs. We have one > zonegroup > > and two zones. At the moment all writes/deletes are done only to pr

Re: [ceph-users] ceph-disk and mkfs.xfs are hanging on SAS SSD

2017-02-24 Thread Wido den Hollander
> On 24 February 2017 at 9:12, Rajesh Kumar wrote: > > > Hi, > > I am using Ceph Jewel on Ubuntu 16.04 Xenial, with SAS SSDs and > driver=megaraid_sas > > > "/usr/bin/python /usr/sbin/ceph-disk prepare --osd-uuid --fs-type xfs > /dev/sda3" is hanging. This command starts "mkfs.xfs -f -i

[ceph-users] How to prevent blocked requests?

2017-02-24 Thread Mehmet
Hey friends, a month ago I had an issue with a few blocked requests where some of my VMs froze while this happened. I guessed the culprit was a spinning disk with a lot of "delayed ECC" (shown via smartctl: 48701). So we decided to take this OSD down/out to do some checks. After this blo

[ceph-users] Recovery ceph cluster down OS corruption

2017-02-24 Thread Iban Cabrillo
Hi, We have a serious issue. We have a mini cluster (Jewel version) with two servers (Dell RX730), with 16 bays and the OS installed on dual 8 GB SD cards, but this configuration is working really, really badly. The replication is 2, but yesterday one server crashed and this morning the other one, thi

Re: [ceph-users] Recovery ceph cluster down OS corruption

2017-02-24 Thread Eneko Lacunza
Hi Iban, Is the monitor data safe? If it is, just install Jewel on other servers and plug in the OSD disks; it should work. On 24/02/17 at 14:41, Iban Cabrillo wrote: Hi, We have a serious issue. We have a mini cluster (Jewel version) with two servers (Dell RX730), with 16 bays and the

Re: [ceph-users] Fwd: Upgrade Woes on suse leap with OBS ceph.

2017-02-24 Thread David Disseldorp
Hi, On Thu, 23 Feb 2017 21:07:41 -0800, Schlacta, Christ wrote: > So hopefully when the suse ceph team get 11.2 released it should fix this, > yes? Please raise a bug at bugzilla.opensuse.org, so that we can track this for the next openSUSE maintenance update. Cheers, David

[ceph-users] Ceph on XenServer

2017-02-24 Thread Massimiliano Cuttini
Dear all, even though Ceph has supposedly been officially supported by Xen for four years: * http://xenserver.org/blog/entry/tech-preview-of-xenserver-libvirt-ceph.html * https://ceph.com/geen-categorie/xenserver-support-for-rbd/ there is still no actual support. At this point there are only some self-made pl

Re: [ceph-users] rgw leaking data, orphan search loop

2017-02-24 Thread George Mihaiescu
Hi, I updated http://tracker.ceph.com/issues/18331 with my own issue, and I am hoping Orit or Yehuda could give their opinion on what to do next. What was the purpose of the "orphan find" tool and how to actually clean up these files? Thank you, George On Fri, Jan 13, 2017 at 2:22 PM, Wido den

Re: [ceph-users] rgw multisite resync only one bucket

2017-02-24 Thread Yehuda Sadeh-Weinraub
On Fri, Feb 24, 2017 at 3:59 AM, Marius Vaitiekunas wrote: > > > On Wed, Feb 22, 2017 at 8:33 PM, Yehuda Sadeh-Weinraub > wrote: >> >> On Wed, Feb 22, 2017 at 6:19 AM, Marius Vaitiekunas >> wrote: >> > Hi Cephers, >> > >> > We are testing an rgw multisite solution between two DCs. We have one >> > zo

Re: [ceph-users] rgw leaking data, orphan search loop

2017-02-24 Thread Yehuda Sadeh-Weinraub
Hi, we wanted to have more confidence in the orphans search tool before providing functionality that actually removes the objects. One thing that you can do is create a new pool, copy these objects to the new pool (as a backup, rados -p --target-pool= cp ), and remove these objects (rados -p r
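The backup-then-remove approach described above can be sketched like this. All names here are assumptions for illustration: `default.rgw.buckets.data` as the data pool, `${POOL}.backup` as the backup pool, `orphans.txt` as a file holding the object ids produced by the orphan search, and the PG count of 64 is arbitrary.

```shell
POOL=default.rgw.buckets.data

# Create a backup pool (PG count is a placeholder; size it for your cluster)
ceph osd pool create ${POOL}.backup 64

# Copy each orphan object into the backup pool, then remove the original
while read -r oid; do
    rados -p "$POOL" --target-pool="${POOL}.backup" cp "$oid"
    rados -p "$POOL" rm "$oid"
done < orphans.txt
```

Keeping the backup pool around until the cluster has been verified healthy gives a cheap undo path: the objects can be copied back the same way.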

Re: [ceph-users] Ceph on XenServer

2017-02-24 Thread Andrei Mikhailovsky
Hi Max, I played around with Ceph on XenServer about 2-3 years ago. I made it work, but it was all hackish and a lot of manual work. It didn't play well with the cloud orchestrator, and I gave up hoping that either the Citrix or Ceph team would make it work. Currently, I would not recommend usin

Re: [ceph-users] Recovery ceph cluster down OS corruption

2017-02-24 Thread Iban Cabrillo
Hi Eneko, yes, the three mons are up and running. I do not have any other servers to plug these disks into, but could I reinstall the server and somehow mount the OSD disks again? I do not know the steps to do this. Regards, I 2017-02-24 14:52 GMT+01:00 Eneko Lacunza : > Hi Iban, > > Is
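The recovery path being asked about here is roughly the following. This is a hedged sketch for Jewel-era ceph-disk OSDs on GPT partitions; the paths are the standard defaults, and `/dev/sdb1` is a placeholder for an actual OSD data partition.

```shell
# After reinstalling the OS and the same Jewel ceph packages, restore the
# cluster identity files (copy them from a surviving monitor node):
#   /etc/ceph/ceph.conf
#   /var/lib/ceph/bootstrap-osd/ceph.keyring

# GPT-labelled ceph-disk OSDs are normally activated by udev at boot;
# to trigger activation of all detected OSD partitions manually:
ceph-disk activate-all

# Or activate a single OSD data partition (placeholder device):
ceph-disk activate /dev/sdb1
```

The OSDs keep their ids and keys on the data disks themselves, so as long as the monitors are intact they should rejoin the cluster under their old identities.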

Re: [ceph-users] Ceph on XenServer

2017-02-24 Thread Iban Cabrillo
Hi Massimiliano, We are running Ceph against our OpenStack instance running Xen: ii xen-hypervisor-4.6-amd64 4.6.0-1ubuntu4.3 amd64 Xen Hypervisor on AMD64 ii xen-system-amd64 4.6.0-1ubuntu4.1 amd64 Xen System on AMD64 (meta-package) ii xen-utils

Re: [ceph-users] rgw leaking data, orphan search loop

2017-02-24 Thread George Mihaiescu
Hi Yehuda, Thank you for the quick reply. What is the <oid> you're referring to that I should back up and then delete? I extracted the files from the ".log" pool where the "orphan find" tool stored the results, but they are zero-byte files. -rw-r--r-- 1 root root 0 Feb 24 12:45 orphan.scan.orphans.r

Re: [ceph-users] rgw leaking data, orphan search loop

2017-02-24 Thread Yehuda Sadeh-Weinraub
oid is the object id. The orphan find command generates a list of objects that need to be removed at the end of the run (if it finishes successfully). If you didn't catch that, you should still be able to run the same scan (using the same scan id) and retrieve that info again. Yehuda On Fri, Feb 24, 20
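Re-running the scan as suggested above looks roughly like this; the pool name and job id are placeholders standing in for the values used in the original scan.

```shell
# Re-run the orphan scan with the same job id to regenerate the object list
radosgw-admin orphans find --pool=default.rgw.buckets.data --job-id=orphans-scan-1

# Once the orphan list is no longer needed, clean up the scan state
radosgw-admin orphans finish --job-id=orphans-scan-1
```

Reusing the original `--job-id` is what lets radosgw-admin pick up the existing scan state rather than starting from scratch.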

[ceph-users] Can Cloudstack really be HA when using CEPH?

2017-02-24 Thread Adam Carheden
From the docs for each project: "When a primary storage outage occurs the hypervisor immediately stops all VMs stored on that storage device" http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/4.8/reliability.html "CloudStack will only bind to one monitor (You can however cre
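The single-monitor limitation quoted above is commonly worked around with a round-robin DNS record that resolves to all monitors, so the one address CloudStack binds to is never a single point of failure. A hypothetical zone-file fragment (all names and IPs are made up for illustration):

```
; round-robin A record covering all three monitors
ceph-mon.example.com.  IN A 10.0.0.1
ceph-mon.example.com.  IN A 10.0.0.2
ceph-mon.example.com.  IN A 10.0.0.3
```

CloudStack would then be given `ceph-mon.example.com` as the RBD monitor address; librbd learns the full monitor map from whichever monitor answers first.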