[ceph-users] Ceph Developers Required - Bangalore

2016-11-25 Thread Thangaraj Vinayagamoorthy
Hi, We are looking for a strong Ceph Developer for our organization. Kindly share your contact number if you are interested in solving good data problems. Regards, Thangaraj V 7899706889

Re: [ceph-users] CEPH mirror down again

2016-11-25 Thread Wido den Hollander
> On 26 November 2016 at 5:13, "Andrus, Brian Contractor" wrote: > > > Hmm. Apparently download.ceph.com = us-west.ceph.com > And there is no repomd.xml on us-east.ceph.com > You could check http://us-east.ceph.com/timestamp to see how far behind it is on
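A quick way to compare a mirror against the primary, assuming the /timestamp endpoint mentioned above simply reports the time of the mirror's last sync (the exact output format is an assumption here):

  # Last sync time reported by the mirror
  curl -s http://us-east.ceph.com/timestamp
  # Current UTC time, for comparison
  date -u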

Re: [ceph-users] CEPH mirror down again

2016-11-25 Thread Andrus, Brian Contractor
Hmm. Apparently download.ceph.com = us-west.ceph.com, and there is no repomd.xml on us-east.ceph.com. This seems to happen a little too often for something that is stable and released. Makes it seem like the old BBS days of “I want to play DOOM, so I’m shutting the services down”. Brian Andrus

Re: [ceph-users] CEPH mirror down again

2016-11-25 Thread Vy Nguyen Tan
Hi Matt and Joao, Thank you for your information. I am installing Ceph from an alternative mirror (ceph-deploy install --repo-url http://hk.ceph.com/rpm-jewel/el7/ --gpg-url http://hk.ceph.com/keys/release.asc {host}) and everything works again. On Sat, Nov 26, 2016 at 10:12 AM, Joao Eduardo Luis
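For reference, the workaround described above takes roughly this shape; the mirror URL matches the message, while the host names are placeholders:

  ceph-deploy install \
      --repo-url http://hk.ceph.com/rpm-jewel/el7/ \
      --gpg-url http://hk.ceph.com/keys/release.asc \
      node1 node2 node3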

Re: [ceph-users] CEPH mirror down again

2016-11-25 Thread Joao Eduardo Luis
On 11/26/2016 03:05 AM, Vy Nguyen Tan wrote: Hello, I want to install CEPH on new nodes but I can't reach the CEPH repo; it seems the repo is broken. I am using CentOS 7.2 and ceph-deploy 1.5.36. Patrick sent an email to the list warning that this would happen back on Nov 18th; quote: Due to

Re: [ceph-users] CEPH mirror down again

2016-11-25 Thread Matt Taylor
Hey, There are many alternate mirrors available: http://docs.ceph.com/docs/jewel/install/mirrors/ Pick the closest one to you. :) Cheers, Matt. On 26/11/16 14:05, Vy Nguyen Tan wrote: Hello, I want to install CEPH on new nodes but I can't reach the CEPH repo; it seems the repo is broken. I am

[ceph-users] CEPH mirror down again

2016-11-25 Thread Vy Nguyen Tan
Hello, I want to install CEPH on new nodes but I can't reach the CEPH repo; it seems the repo is broken. I am using CentOS 7.2 and ceph-deploy 1.5.36. [root@cp ~]# ping -c 3 download.ceph.com PING download.ceph.com (173.236.253.173) 56(84)

[ceph-users] docker storage driver

2016-11-25 Thread Pedro Benites
Hi, I want to configure a registry with the "Ceph rados storage driver", but after starting the registry with "docker run -d -p 5000:5000 --restart=always --name registry -v `pwd`/config.yml:/etc/docker/registry/config.yml registry:2" I got this error in the docker logs: "panic: StorageDriver
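For reference, a minimal config.yml sketch for the rados driver, assuming the registry image was built with rados support; the parameter names follow the docker distribution 2.x documentation, and the pool, user, and chunk size below are placeholders:

  version: 0.1
  storage:
    rados:
      poolname: docker-registry    # existing Ceph pool for registry objects
      username: admin              # cephx user, without the "client." prefix
      chunksize: 4194304           # bytes per RADOS object
  http:
    addr: :5000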

Re: [ceph-users] general ceph cluster design

2016-11-25 Thread Maxime Guyot
Hi Nick, See inline comments. Cheers, Maxime On 25/11/16 16:01, "ceph-users on behalf of nick" wrote: >Hi, >we are currently planning a new ceph cluster which will be used for >virtualization (providing RBD storage

Re: [ceph-users] Introducing DeepSea: A tool for deploying Ceph using Salt

2016-11-25 Thread Lenz Grimmer
Hi Swami, On 11/25/2016 11:04 AM, M Ranga Swami Reddy wrote: > Can you please confirm, if the DeepSea works on Ubuntu also? Not yet, as far as I can tell, but testing/feedback/patches are very welcome ;) One of the benefits of using Salt is that it supports multiple distributions. However,

Re: [ceph-users] Ceph performance laggy (requests blocked > 32) on OpenStack

2016-11-25 Thread Thomas Danan
Hi Kévin, I am currently having a similar issue. In my env I have around 16 Linux VMs (VMware), more or less equally loaded, accessing a 1PB Ceph hammer cluster (40 dn, 800 OSDs) through RBD. Very often we have IO freezes on the VM xfs FS and we also continuously have slow requests on OSDs (up to
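For anyone hitting the same symptoms, the usual starting points are the blocked-request reports and the per-OSD in-flight ops; a sketch, with osd.12 standing in for whichever OSD the health output names:

  # Which OSDs are reporting slow/blocked requests
  ceph health detail
  # In-flight ops on a suspect OSD (run on the host carrying that OSD)
  ceph daemon osd.12 dump_ops_in_flight
  # Live cluster event stream
  ceph -w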

[ceph-users] general ceph cluster design

2016-11-25 Thread nick
Hi, we are currently planning a new ceph cluster which will be used for virtualization (providing RBD storage for KVM machines) and we have some general questions. * Is it advisable to have one ceph cluster spread over multiple datacenters (latency is low, as they are not so far from each

[ceph-users] CoW clone performance

2016-11-25 Thread Kees Meijs
Hi list, We're using CoW clones (using OpenStack via Glance and Cinder) to store virtual machine images. For example: > # rbd info cinder-volumes/volume-a09bd74b-f100-4043-a422-5e6be20d26b2 > rbd image 'volume-a09bd74b-f100-4043-a422-5e6be20d26b2': > size 25600 MB in 3200 objects >
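If clone chaining turns out to be the bottleneck, one common check is whether a volume still references its parent snapshot; flattening removes that dependency at the cost of extra space. A sketch (not a recommendation from the thread) using the volume from the output above:

  # Does the clone still have a parent, i.e. is it still copy-on-write?
  rbd info cinder-volumes/volume-a09bd74b-f100-4043-a422-5e6be20d26b2 | grep parent
  # Copy all parent objects into the clone, detaching it from the parent
  rbd flatten cinder-volumes/volume-a09bd74b-f100-4043-a422-5e6be20d26b2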

Re: [ceph-users] Ceph strange issue after adding a cache OSD.

2016-11-25 Thread Nick Fisk
It might be worth trying to raise a ticket with those errors and say that you believe they occurred after PG splitting on the cache tier and also include the asserts you originally posted. > -Original Message- > From: Daznis [mailto:daz...@gmail.com] > Sent: 25 November 2016 13:59 > To:

Re: [ceph-users] about using SSD in cephfs, attached with some quantified benchmarks

2016-11-25 Thread John Spray
On Fri, Nov 25, 2016 at 8:16 AM, JiaJia Zhong wrote: > confusing questions: (ceph 0.94) > > 1. Is there any way to cache all of the metadata in the MDS's memory? > (metadata OSD data --async--> MDS memory) > > I don't know if I misunderstand the role of the MDS :(,
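The MDS does not hold the whole metadata pool in memory, but the size of its inode cache can be raised. A sketch for hammer, where the relevant ceph.conf option is "mds cache size" (counted in inodes, default 100000); the value below is only an example:

  [mds]
      mds cache size = 1000000   # inode count; restart the MDS after changing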

Re: [ceph-users] Ceph strange issue after adding a cache OSD.

2016-11-25 Thread Daznis
I think it's because of these errors: 2016-11-25 14:51:25.644495 7fb73eef8700 -1 log_channel(cluster) log [ERR] : 14.28 deep-scrub stat mismatch, got 145/144 objects, 0/0 clones, 57/57 dirty, 0/0 omap, 54/53 hit_set_archive, 0/0 whiteouts, 365399477/365399252 bytes,51328/51103 hit_set_archive
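For a deep-scrub stat mismatch like the one above, the usual next step is to re-scrub and, if the inconsistency persists, let Ceph repair the PG; a sketch using the PG id from the log (repair rewrites replicas, so it is worth confirming which copy is authoritative first, especially on a cache tier):

  ceph pg deep-scrub 14.28
  ceph pg repair 14.28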

Re: [ceph-users] Ceph performance laggy (requests blocked > 32) on OpenStack

2016-11-25 Thread RDS
If I use slow HDDs, I can get the same outcome. Placing journals on fast SAS or NVMe SSDs will make a difference. If you are using SATA SSDs, those SSDs are much slower. Instead of guessing why Ceph is lagging, have you looked at ceph -w and iostat and vmstat reports during your tests? iostat will
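A minimal monitoring loop along the lines suggested above; sustained high %util and await on the journal devices in iostat, or swapping in vmstat, usually point at the bottleneck:

  # Cluster events and blocked-request warnings
  ceph -w
  # Per-device utilization and latency on the OSD hosts, 1-second samples
  iostat -x 1
  # Memory pressure and swap activity
  vmstat 1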

[ceph-users] Ceph performance laggy (requests blocked > 32) on OpenStack

2016-11-25 Thread Kevin Olbrich
Hi, we are running 80 VMs using KVM in OpenStack via RBD in Ceph Jewel on a total of 53 disks (RAID parity already excluded). Our nodes are using Intel P3700 DC-SSDs for journaling. Most VMs are Linux-based and load is low to medium. There are also about 10 VMs running Windows 2012R2, two of

Re: [ceph-users] Ceph OSDs cause kernel unresponsive

2016-11-25 Thread Nick Fisk
Hi, I didn’t do the maths, so maybe 7GB isn’t worth tuning for, although every little helps ;-) I don’t believe peering or recovery should affect this value, but other things will consume memory during recovery, and I’m not aware if this can be limited or tuned. Yes, the write and
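Recovery memory itself is not directly capped, but recovery concurrency can be throttled, which indirectly limits the extra load during recovery; a sketch with example values only:

  # Lower backfill/recovery parallelism on all OSDs at runtime
  ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'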

Re: [ceph-users] Ceph strange issue after adding a cache OSD.

2016-11-25 Thread Nick Fisk
Possibly, do you know the exact steps to reproduce? I'm guessing the PG splitting was the cause, but not knowing whether this on its own would cause the problem, or whether it also needs the introduction of new OSDs at the same time, might make tracing the cause hard. > -Original Message- > From: Daznis [mailto:daz...@gmail.com] > Sent: 25 November 2016 13:59 > To:

Re: [ceph-users] Introducing DeepSea: A tool for deploying Ceph using Salt

2016-11-25 Thread M Ranga Swami Reddy
Hello Tim, Can you please confirm, if the DeepSea works on Ubuntu also? Thanks Swami On Thu, Nov 3, 2016 at 11:22 AM, Tim Serong wrote: > Hi All, > > I thought I should make a little noise about a project some of us at > SUSE have been working on, called DeepSea. It's a

[ceph-users] Assertion "needs_recovery" fails when balance_read reaches a replica OSD where the target object is not recovered yet.

2016-11-25 Thread xxhdx1985126
Hi, everyone. In our online system, some OSDs always fail due to the following error: 2016-10-25 19:00:00.626567 7f9a63bff700 -1 error_msg osd/ReplicatedPG.cc: In function 'void ReplicatedPG::wait_for_unreadable_object(const hobject_t&, OpRequestRef)' thread 7f9a63bff700 time 2016-10-25

[ceph-users] about using SSD in cephfs, attached with some quantified benchmarks

2016-11-25 Thread JiaJia Zhong
Confusing questions: (ceph 0.94) 1. Is there any way to cache all of the metadata in the MDS's memory? (metadata OSD data --async--> MDS memory) I don't know if I misunderstand the role of the MDS :(; so many threads advise using SSD OSDs for metadata. The metadata stores