[ceph-users] Quick short survey which SSDs

2016-07-05 Thread Götz Reinicke - IT Koordinator
Hi, we have offers for ceph storage nodes with different SSD types; some are already mentioned as a very good choice, but some are totally new to me. Maybe you could give some feedback on the SSDs in question, or just a short note on which ones you primarily use? Regarding the three disk in

Re: [ceph-users] Can't create bucket (ERROR: endpoints not configured for upstream zone)

2016-07-05 Thread Micha Krause
*bump* On 01.07.2016 at 13:00, Micha Krause wrote: Hi, > In Infernalis there was this command: radosgw-admin regions list But this is missing in Jewel. OK, I just found out that this was renamed to zonegroup list: root@rgw01:~ # radosgw-admin --id radosgw.rgw zonegroup list read_default
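For anyone hitting the same rename, a minimal sketch of the two invocations mentioned above (the --id value is taken from the quoted command and will differ per deployment):

    # Infernalis and earlier:
    radosgw-admin regions list
    # Jewel onwards (subcommand renamed):
    radosgw-admin --id radosgw.rgw zonegroup list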

[ceph-users] Antw: Re: Mounting Ceph RBD image to XenServer 7 as SR

2016-07-05 Thread Steffen Weißgerber
>>> Jake Young wrote on Thursday, 30 June 2016 at 00:28: > On Wednesday, June 29, 2016, Mike Jacobacci wrote: > Hi, >> Hi all, >> >> Is there anyone using rbd for xenserver vm storage? I have XenServer 7 >> and the latest Ceph, I am looking for the best way to mount the rbd >> volu

Re: [ceph-users] Quick short survey which SSDs

2016-07-05 Thread Dan van der Ster
Hi, On Tue, Jul 5, 2016 at 9:23 AM, Götz Reinicke - IT Koordinator wrote: > Hi, > > we have offers for ceph storage nodes with different SSD types and some > are already mentioned as a very good choice but some are total new to me. > > May be you could give some feedback on the SSDs in question o

Re: [ceph-users] Quick short survey which SSDs

2016-07-05 Thread Christian Balzer
Hello, On Tue, 5 Jul 2016 09:23:27 +0200 Götz Reinicke - IT Koordinator wrote: > Hi, > > we have offers for ceph storage nodes with different SSD types and some > are already mentioned as a very good choice but some are total new to me. > > May be you could give some feedback on the SSDs in qu

Re: [ceph-users] Quick short survey which SSDs

2016-07-05 Thread Dan van der Ster
On Tue, Jul 5, 2016 at 9:53 AM, Christian Balzer wrote: >> Unfamiliar: Samsung SM863 >> > You might want to read the thread here: > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-February/007871.html > > And google "ceph SM863". > > However I'm still waiting for somebody to confirm that
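Threads like this usually judge journal SSDs by their single-threaded synchronous (O_DSYNC) write performance. A sketch of the commonly used fio test, assuming the device path, block size and runtime are placeholders and the disk holds no data you care about:

    # WARNING: destructive, writes directly to the raw device
    fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based \
        --group_reporting --name=journal-test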

Re: [ceph-users] Is anyone seeing iissues with task_numa_find_cpu?

2016-07-05 Thread Brad Hubbard
On Sun, Jul 3, 2016 at 7:51 AM, Alex Gorbachev wrote: >> Thank you Stefan and Campbell for the info - hope 4.7rc5 resolves this >> for us - please note that my workload is purely RBD, no QEMU/KVM. >> Also, we do not have CFQ turned on, neither scsi-mq and blk-mq, so I >> am surmising ceph-osd must

Re: [ceph-users] Quick short survey which SSDs

2016-07-05 Thread Dan van der Ster
On Tue, Jul 5, 2016 at 10:04 AM, Dan van der Ster wrote: > On Tue, Jul 5, 2016 at 9:53 AM, Christian Balzer wrote: >>> Unfamiliar: Samsung SM863 >>> >> You might want to read the thread here: >> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-February/007871.html >> >> And google "ceph S

Re: [ceph-users] Is anyone seeing iissues with task_numa_find_cpu?

2016-07-05 Thread Nick Fisk
> -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Alex Gorbachev > Sent: 04 July 2016 20:50 > To: Campbell Steven > Cc: ceph-users ; Tim Bishop li...@bishnet.net> > Subject: Re: [ceph-users] Is anyone seeing iissues with > task_numa_find_cpu

Re: [ceph-users] suse_enterprise_storage3_rbd_LIO_vmware_performance_bad

2016-07-05 Thread Nick Fisk
> -Original Message- > From: Alex Gorbachev [mailto:a...@iss-integration.com] > Sent: 04 July 2016 22:00 > To: Nick Fisk > Cc: Oliver Dzombic ; ceph-users us...@lists.ceph.com>; mq ; Christian Balzer > > Subject: Re: [ceph-users] > suse_enterprise_storage3_rbd_LIO_vmware_performance_bad

Re: [ceph-users] Quick short survey which SSDs

2016-07-05 Thread Christian Balzer
On Tue, 5 Jul 2016 10:22:37 +0200 Dan van der Ster wrote: > On Tue, Jul 5, 2016 at 10:04 AM, Dan van der Ster > wrote: > > On Tue, Jul 5, 2016 at 9:53 AM, Christian Balzer wrote: > >>> Unfamiliar: Samsung SM863 > >>> > >> You might want to read the thread here: > >> http://lists.ceph.com/piperma

[ceph-users] Antw: Re: Running ceph in docker

2016-07-05 Thread Steffen Weißgerber
>>> Josef Johansson wrote on Thursday, 30 June 2016 at 15:23: > Hi, > Hi, > You could actually manage every osd and mon and mds through docker swarm; > since it's all just software, it makes sense to deploy it through docker where you > add the disk that is needed. > > Mons does not need perm

Re: [ceph-users] mds0: Behind on trimming (58621/30)

2016-07-05 Thread Kenneth Waegeman
On 04/07/16 11:22, Kenneth Waegeman wrote: On 01/07/16 16:01, Yan, Zheng wrote: On Fri, Jul 1, 2016 at 6:59 PM, John Spray wrote: On Fri, Jul 1, 2016 at 11:35 AM, Kenneth Waegeman wrote: Hi all, While syncing a lot of files to cephfs, our mds cluster got haywire: the mdss have a lot o

Re: [ceph-users] rbd cache command thru admin socket

2016-07-05 Thread Jason Dillaman
Yes, the socket will be created and will remain as long as librbd is running -- in the case of QEMU, it should be available for as long as the VM is running. On Fri, Jul 1, 2016 at 2:42 PM, Deneau, Tom wrote: > Thanks, Jason-- > > Turns out AppArmor was indeed enabled (I was not aware of that). >
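A sketch of querying that socket, assuming the default admin_socket directory and a placeholder socket name:

    # On the hypervisor, find the socket librbd created for the VM:
    ls /var/run/ceph/
    # Inspect rbd cache settings and statistics through it:
    ceph --admin-daemon /var/run/ceph/<client>.asok config show | grep rbd_cache
    ceph --admin-daemon /var/run/ceph/<client>.asok perf dump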

Re: [ceph-users] mds0: Behind on trimming (58621/30)

2016-07-05 Thread Yan, Zheng
On Tue, Jul 5, 2016 at 7:56 PM, Kenneth Waegeman wrote: > > > On 04/07/16 11:22, Kenneth Waegeman wrote: >> >> >> >> On 01/07/16 16:01, Yan, Zheng wrote: >>> >>> On Fri, Jul 1, 2016 at 6:59 PM, John Spray wrote: On Fri, Jul 1, 2016 at 11:35 AM, Kenneth Waegeman wrote: > >

Re: [ceph-users] mds0: Behind on trimming (58621/30)

2016-07-05 Thread xiaoxi chen
> From: uker...@gmail.com > Date: Tue, 5 Jul 2016 21:14:12 +0800 > To: kenneth.waege...@ugent.be > CC: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] mds0: Behind on trimming (58621/30) > > On Tue, Jul 5, 2016 at 7:56 PM, Kenneth Waegeman > wrote: > > > > > > On 04/07/16 11:22, Kenneth W
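The mitigation usually discussed for "Behind on trimming" is letting the MDS hold and expire more journal segments. A sketch via the MDS admin socket, where the values are assumptions rather than recommendations from this thread:

    ceph daemon mds.<name> config set mds_log_max_segments 200
    ceph daemon mds.<name> config set mds_log_max_expiring 200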

Re: [ceph-users] cluster failing to recover

2016-07-05 Thread Matyas Koszik
Should you be interested, the solution to this was ceph pg $pg mark_unfound_lost delete for all pgs that had unfound objects, now the cluster is back in a healthy state. I think this is very counter-intuitive (why should totally unrelated pgs be affected by this?!) but at least the solution was s
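A sketch of the sequence implied above, with a hypothetical pg id; note that mark_unfound_lost delete permanently gives up on the unfound objects:

    # List the pgs that still report unfound objects:
    ceph health detail | grep unfound
    # For each affected pg (2.5 is only an example id):
    ceph pg 2.5 mark_unfound_lost delete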

Re: [ceph-users] cluster failing to recover

2016-07-05 Thread Sean Redmond
Hi, What happened to the 2 missing OSDs? 53 osds: 51 up, 51 in Thanks On Tue, Jul 5, 2016 at 4:04 PM, Matyas Koszik wrote: > > Should you be interested, the solution to this was > ceph pg $pg mark_unfound_lost delete > for all pgs that had unfound objects, now the cluster is back in a health

Re: [ceph-users] cluster failing to recover

2016-07-05 Thread Matyas Koszik
Hi, The disks died, and were removed by: ceph osd out $osd ceph osd lost $osd ceph osd crush remove $osd ceph auth del $osd ceph osd rm $osd When writing my mails it was after the 'lost' or 'crush remove' step, not sure. But even the last step didn't fix the issue. It was like this: http://paste

[ceph-users] Ceph Developer Monthly

2016-07-05 Thread Patrick McGarry
Hey cephers, Just a reminder that this month's Ceph Developer Monthly (originally scheduled for tomorrow) will not be happening due to holidays, travel, and other conflicts. If you have questions feel free to send them my way. Thanks! -- Best Regards, Patrick McGarry Director Ceph Community |

Re: [ceph-users] Active MON aborts on Jewel 10.2.2 with FAILED assert(info.state == MDSMap::STATE_STANDBY

2016-07-05 Thread Gregory Farnum
Thanks for the report; created a ticket and somebody will get on it shortly. http://tracker.ceph.com/issues/16592 -Greg On Sun, Jul 3, 2016 at 5:55 PM, Bill Sharer wrote: > I was working on a rolling upgrade on Gentoo to Jewel 10.2.2 from 10.2.0. > However now I can't get a monitor quorum going

Re: [ceph-users] mds standby + standby-reply upgrade

2016-07-05 Thread Gregory Farnum
On Mon, Jul 4, 2016 at 12:38 PM, Dzianis Kahanovich wrote: > Gregory Farnum writes: >> On Thu, Jun 30, 2016 at 1:03 PM, Dzianis Kahanovich wrote: >>> Upgraded infernalis->jewel (git, Gentoo). Upgrade passed over global >>> stop/restart everything oneshot. >>> >>> Infernalis: e5165: 1/1/1 up {0=c=u

Re: [ceph-users] Running ceph in docker

2016-07-05 Thread Josef Johansson
Hi, The docker image is a new bootstrapped system with all the binaries included. However, it's possible to give the container a whole device by specifying the --device parameter, and then it will survive a reboot or even a rebuild. Regards, Josef On Tue, 5 Jul 2016, 12:28 Steffen Weißgerber, wro
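A sketch of what passing a whole disk to an OSD container can look like, assuming the ceph/daemon image from the ceph-docker project; the image name, the OSD_DEVICE variable and the entrypoint argument are assumptions, not details from this thread:

    docker run -d --net=host --privileged=true \
      --device=/dev/sdb \
      -v /etc/ceph:/etc/ceph \
      -v /var/lib/ceph:/var/lib/ceph \
      -e OSD_DEVICE=/dev/sdb \
      ceph/daemon osd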

Re: [ceph-users] Running ceph in docker

2016-07-05 Thread Vasu Kulkarni
On Wed, Jun 29, 2016 at 11:05 PM, F21 wrote: > Hey all, > > I am interested in running ceph in docker containers. This is extremely > attractive given the recent integration of swarm into the docker engine, > making it really easy to set up a docker cluster. > > When running ceph in docker, should

Re: [ceph-users] Quick short survey which SSDs

2016-07-05 Thread Christian Balzer
On Tue, 5 Jul 2016 09:23:27 +0200 Götz Reinicke - IT Koordinator wrote: > Hi, > > we have offers for ceph storage nodes with different SSD types and some > are already mentioned as a very good choice but some are total new to me. > > May be you could give some feedback on the SSDs in question or

Re: [ceph-users] Active MON aborts on Jewel 10.2.2 with FAILED assert(info.state == MDSMap::STATE_STANDBY

2016-07-05 Thread Bill Sharer
Relevant USE flags FWIW # emerge -pv ceph These are the packages that would be merged, in order: Calculating dependencies... done! [ebuild R ~] sys-cluster/ceph-10.2.2::gentoo USE="fuse gtk jemalloc ldap libaio libatomic nss radosgw static-libs xfs -babeltrace -cephfs -cryptopp -debu

[ceph-users] Should I restart VMs when I upgrade ceph client version

2016-07-05 Thread 한승진
Hi Cephers, I implemented Ceph with OpenStack. Recently, I upgraded the Ceph servers from Hammer to Jewel. I also plan to upgrade the ceph clients, which are the OpenStack nodes. There are a lot of VMs running on the Compute Nodes. Should I restart the VMs after upgrading the Compute Nodes?

[ceph-users] Snap delete performance impact

2016-07-05 Thread Adrian Saul
I recently started using rbd snapshots to set up a backup regime for a few file systems contained in RBD images. While this generally works well, at the time of the snapshots there is a massive increase in latency (from 10ms to multiple seconds of rbd device latency) across the entire cl
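One knob commonly mentioned for throttling snapshot trimming on pre-Luminous clusters is osd_snap_trim_sleep. A sketch, where the value is an assumption and whether injection takes effect without a restart varies by release:

    # Pause briefly between snap trim operations on every OSD:
    ceph tell osd.* injectargs '--osd_snap_trim_sleep 0.1'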

Re: [ceph-users] Should I restart VMs when I upgrade ceph client version

2016-07-05 Thread Brad Hubbard
On Wed, Jul 6, 2016 at 3:28 PM, 한승진 wrote: > Hi Cephers, > > I implemented Ceph with OpenStack. > > Recently, I upgrade Ceph server from Hammer to Jewel. > > Also, I plan to upgrade ceph clients that are OpenStack Nodes. > > There are a lot of VMs running in Compute Nodes. > > Should I restart the

Re: [ceph-users] ceph-fuse segfaults ( jewel 10.2.2)

2016-07-05 Thread Goncalo Borges
Hi All... Just to confirm that, after applying the patch and recompiling, we are no longer seeing segfaults. I just tested with a user application which would kill ceph-fuse almost instantaneously. Now it is running for quite some time, reading and updating the files that it should. I sho

Re: [ceph-users] Should I restart VMs when I upgrade ceph client version

2016-07-05 Thread Alexandre DERUMIER
you can do a live migration to an upgraded host, a new qemu process (with new librbd version) will be generated. - Mail original - De: "한승진" À: "ceph-users" Envoyé: Mercredi 6 Juillet 2016 07:28:02 Objet: [ceph-users] Should I restart VMs when I upgrade ceph client version Hi Cephers