Re: [ceph-users] Does ceph pg scrub error affect all of I/O in ceph cluster?

2017-08-03 Thread David Turner
HEALTH_ERR only indicates that a request might block, not that it will. In the case of a scrub error, only requests made to the objects flagged as inconsistent in the failed PG will block. The rest of the objects in that PG will work fine even though the PG has a scrub error. On Fri, Aug 4, 2017, …
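A minimal sketch of how such a scrub error is usually inspected and, if appropriate, repaired (the PG id 2.1f below is hypothetical; only ceph health detail tells you the real one):
$ ceph health detail                                      # names the PG(s) with inconsistent objects
$ rados list-inconsistent-obj 2.1f --format=json-pretty   # shows which objects/shards disagree
$ ceph pg repair 2.1f                                     # only after understanding why they disagree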

Re: [ceph-users] Does ceph pg scrub error affect all of I/O in ceph cluster?

2017-08-03 Thread Adrian Saul
Depends on the error case – usually you will see blocked IO messages as well if there is a condition causing OSDs to be unresponsive. From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 한승진 Sent: Friday, 4 August 2017 1:34 PM To: ceph-users@lists.ceph.com Subject: [ceph-use…

[ceph-users] Does ceph pg scrub error affect all of I/O in ceph cluster?

2017-08-03 Thread 한승진
Hi cephers, I experienced ceph status going into HEALTH_ERR because of a pg scrub error. I thought all I/O was blocked when the status of ceph is Error. However, ceph could operate normally even though ceph was in error status. There are two pools in the ceph cluster, which include separate nodes. (volume…

Re: [ceph-users] Is erasure-code-pool’s pg num calculation same as common pool?

2017-08-03 Thread Zhao Damon
Thanks! On 3 August 2017, 21:50 +0800, David Turner wrote: Yes. The only "difference" is that the number of replicas is k+n combined. So if you have 6+2, then each PG will reside on 8 osds. The limitation is how many PGs an osd daemon is responsible for, which directly impacts its memory requirements…

Re: [ceph-users] Luminous scrub catch-22

2017-08-03 Thread Roger Brown
Thank you. That was it. They were installed, but hadn't been restarted since upgrading. Solved with: sudo systemctl restart ceph-mgr.target On Thu, Aug 3, 2017 at 1:56 PM Gregory Farnum wrote: > I believe that command should still work, but it looks like it requires a > working manager daemon.

Re: [ceph-users] CEPH bluestore space consumption with small objects

2017-08-03 Thread Gregory Farnum
Don't forget that at those sizes the internal journals and rocksdb size tunings are likely to be a significant fixed cost. On Thu, Aug 3, 2017 at 3:13 AM Wido den Hollander wrote: > > > On 2 August 2017 at 17:55, Marcus Haarmann < > marcus.haarm...@midoco.de> wrote: > > > > > > Hi, > > we are…

Re: [ceph-users] Luminous scrub catch-22

2017-08-03 Thread Gregory Farnum
I believe that command should still work, but it looks like it requires a working manager daemon. Did you set one up yet? On Thu, Aug 3, 2017 at 7:31 AM Roger Brown wrote: > I'm running Luminous 12.1.2 and I seem to be in a catch-22. I've got pgs > that report they need to be scrubbed, however t
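A minimal sketch of the resulting workaround, assuming the mgr package is installed (the pg id 1.7 is hypothetical; Roger's follow-up above confirms that restarting the mgr was the fix):
$ ceph -s | grep mgr                      # should show an active mgr
$ sudo systemctl restart ceph-mgr.target  # if none is active
$ ceph pg deep-scrub 1.7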

[ceph-users] expanding cluster with minimal impact

2017-08-03 Thread Laszlo Budai
Dear all, I need to expand a ceph cluster with minimal impact. Reading previous threads on this topic from the list I've found the ceph-gentle-reweight script (https://github.com/cernceph/ceph-scripts/blob/master/tools/ceph-gentle-reweight) created by Dan van der Ster (Thank you Dan for sharin
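The idea behind a gentle expansion is to bring the new OSDs in at a low CRUSH weight and raise it in small steps, so only a little data rebalances at a time. A minimal manual sketch of one such step (osd.42 and the weights are hypothetical; the ceph-gentle-reweight script linked above automates this loop and checks cluster health between steps):
$ ceph osd crush reweight osd.42 0.2
# wait for backfill to finish and the cluster to return to HEALTH_OK, then
$ ceph osd crush reweight osd.42 0.4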

Re: [ceph-users] "Zombie" ceph-osd@xx.service remain fromoldinstallation

2017-08-03 Thread c . monty
3 August 2017 16:37, "Burkhard Linke" wrote: > Hi, > > On 03.08.2017 16:31, c.mo...@web.de wrote: > >> Hello! >> >> I have purged my ceph and reinstalled it. >> ceph-deploy purge node1 node2 node3 >> ceph-deploy purgedata node1 node2 node3 >> ceph-deploy forgetkeys >> >> All disks configu…

Re: [ceph-users] "Zombie" ceph-osd@xx.service remain fromoldinstallation

2017-08-03 Thread Burkhard Linke
Hi, On 03.08.2017 16:31, c.mo...@web.de wrote: Hello! I have purged my ceph and reinstalled it. ceph-deploy purge node1 node2 node3 ceph-deploy purgedata node1 node2 node3 ceph-deploy forgetkeys All disks configured as OSDs are physically in two servers. Due to some restrictions I needed to m
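A hedged sketch of one way such leftover instantiated units are typically cleaned up (the concrete advice in this reply is truncated above; osd id 12 is a hypothetical example):
$ sudo systemctl disable ceph-osd@12.service
$ sudo systemctl reset-failed ceph-osd@12.service
$ sudo systemctl daemon-reload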

[ceph-users] "Zombie" ceph-osd@xx.service remain from old installation

2017-08-03 Thread c . monty
Hello! I have purged my ceph and reinstalled it. ceph-deploy purge node1 node2 node3 ceph-deploy purgedata node1 node2 node3 ceph-deploy forgetkeys All disks configured as OSDs are physically in two servers. Due to some restrictions I needed to modify the total number of disks usable as OSD, thi

[ceph-users] Luminous scrub catch-22

2017-08-03 Thread Roger Brown
I'm running Luminous 12.1.2 and I seem to be in a catch-22. I've got pgs that report they need to be scrubbed, however the command to scrub them seems to have gone away. The flapping OSD is an issue for another thread. Please advise. Example: roger@desktop:~$ ceph --version ceph version 12.1.2 (b

Re: [ceph-users] Is erasure-code-pool’s pg num calculation same as common pool?

2017-08-03 Thread David Turner
Yes. The only "difference" is that the number of replicas is k+n combined. So if you have 6+2, then each PG will reside on 8 osds. The limitation is how many PGs an osd daemon is responsible for which directly impacts its memory requirements. On Thu, Aug 3, 2017, 6:32 AM Zhao Damon wrote: > Hi e
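A minimal worked example of the resulting math, assuming the common target of roughly 100 PGs per OSD (the OSD count and target are illustrative, not from the thread): with 40 OSDs and a 6+2 profile, pg_num ≈ (40 × 100) / (6 + 2) = 500, rounded to a power of two gives 512:
$ ceph osd pool create ecpool 512 512 erasure myprofile   # 'myprofile' is a hypothetical EC profile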

[ceph-users] All flash ceph witch NVMe and SPDK

2017-08-03 Thread Mike A
Hello Our goal is to make the storage as fast as possible. By now our configuration of 6 servers looks like this: * 2 x CPU Intel Gold 6150 20 core 2.4Ghz * 2 x 16 Gb NVDIMM DDR4 DIMM * 6 x 16 Gb RAM DDR4 * 6 x Intel DC P4500 4Tb NVMe 2.5" * 2 x Mellanox ConnectX-4 EN Lx 25Gb dualport What is the status in c…
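A heavily hedged sketch of how SPDK is wired into bluestore per the docs of that era: instead of a device path, bluestore_block_path is given an spdk: prefix followed by the NVMe serial number (the serial below is a placeholder, and this assumes a Ceph build compiled with SPDK support):
[osd]
bluestore_block_path = spdk:<nvme-serial-number>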

Re: [ceph-users] Gracefully reboot OSD node

2017-08-03 Thread Wido den Hollander
> On 3 August 2017 at 14:14, Hans van den Bogert > wrote: > > > Thanks for answering even before I asked the questions :) > > So bottom line, HEALTH_ERR state is simply part of taking a (bunch of) OSD > down? Is a HEALTH_ERR period of 2-4 seconds within normal bounds? For > context, CPUs are…

Re: [ceph-users] Gracefully reboot OSD node

2017-08-03 Thread Hans van den Bogert
Thanks for answering even before I asked the questions:) So bottom line, HEALTH_ERR state is simply part of taking a (bunch of) OSD down? Is HEALTH_ERR period of 2-4 seconds within normal bounds? For context, CPUs are 2609v3 per 4 OSDs. (I know; they're far from the fastest CPUs) On Thu, Aug 3,

Re: [ceph-users] Gracefully reboot OSD node

2017-08-03 Thread Hans van den Bogert
What are the implications of this? Because I can see a lot of blocked requests piling up when using 'noout' and 'nodown'. That probably makes sense though. Another thing, now when the OSDs come back online, I again see multiple periods of HEALTH_ERR state. Is that to be expected? On Thu, Aug 3, 201…

Re: [ceph-users] Gracefully reboot OSD node

2017-08-03 Thread Wido den Hollander
> On 3 August 2017 at 13:36, linghucongsong wrote: > > > > > set the osd noout nodown > While noout is correct and might help in some situations, never set nodown unless you really need that. It will block I/O since you are taking down OSDs which aren't marked as down. In Hans's case…
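A minimal sketch of the planned-reboot sequence Wido describes (noout only, no nodown):
$ ceph osd set noout
# reboot the OSD node and wait for all of its OSDs to rejoin the cluster
$ ceph osd unset noout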

Re: [ceph-users] Gracefully reboot OSD node

2017-08-03 Thread linghucongsong
set the osd noout nodown. At 2017-08-03 18:29:47, "Hans van den Bogert" wrote: Hi all, One thing which has bothered me since the beginning of using ceph is that a reboot of a single OSD causes a HEALTH_ERR state for the cluster for at least a couple of seconds. In the case of planned reb…

Re: [ceph-users] "rbd create" hangs for specific pool

2017-08-03 Thread linghucongsong
root ssds {
    id -9           # do not change unnecessarily
    # weight 0.000
    alg straw
    hash 0          # rjenkins1
}

It is empty in ssds!

rule ssdpool {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take ssds
    step chooseleaf firstn …
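A hedged sketch of how the empty ssds root can be populated so the rule has somewhere to place data (the bucket name, OSD id, and weight are hypothetical):
$ ceph osd crush add-bucket node1-ssd host
$ ceph osd crush move node1-ssd root=ssds
$ ceph osd crush set osd.10 0.5 root=ssds host=node1-ssd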

[ceph-users] Is erasure-code-pool’s pg num calculation same as common pool?

2017-08-03 Thread Zhao Damon
Hi everyone: I just wonder, is the erasure-code-pool’s pg num calculation rule the same as for a common pool?

[ceph-users] Gracefully reboot OSD node

2017-08-03 Thread Hans van den Bogert
Hi all, One thing which has bothered me since the beginning of using ceph is that a reboot of a single OSD causes a HEALTH_ERR state for the cluster for at least a couple of seconds. In the case of a planned reboot of an OSD node, should I run some extra commands in order not to go into HEALTH_ERR state?

Re: [ceph-users] CEPH bluestore space consumption with small objects

2017-08-03 Thread Wido den Hollander
> On 2 August 2017 at 17:55, Marcus Haarmann > wrote: > > > Hi, > we are doing some tests here with a Kraken setup using the bluestore backend (on > Ubuntu 64 bit). > We are trying to store > 10 million very small objects using RADOS. > (no fs, no rbd, only osd and monitors) > > The setup was…
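When objects are much smaller than bluestore's minimum allocation unit, each one still consumes at least that much space, which can account for the blow-up. A minimal sketch of checking the relevant settings on one OSD (assumes you run it on the OSD host and that osd.0 is local):
$ ceph daemon osd.0 config show | grep bluestore_min_alloc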

[ceph-users] "rbd create" hangs for specific pool

2017-08-03 Thread Stanislav Kopp
Hello, I was running a ceph cluster with hdds for OSDs; now I've created a new dedicated SSD pool within the same cluster. Everything looks fine and the cluster is "healthy", but if I try to create a new rbd image in this new ssd pool it just hangs. I've tried both the "rbd" command and the proxmox gui; "rbd" just…
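A hedged sketch of narrowing this down: if the pool's CRUSH rule cannot place any PGs (see linghucongsong's reply above about the empty ssds root), the PGs never go active and rbd create blocks waiting for them (the pool name ssdpool is taken from the thread context):
$ ceph osd dump | grep ssdpool            # note which crush rule the pool uses
$ ceph pg ls-by-pool ssdpool              # PGs stuck inactive/creating point at a placement problem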

Re: [ceph-users] ceph osd safe to remove

2017-08-03 Thread Dan van der Ster
On Thu, Aug 3, 2017 at 11:42 AM, Peter Maloney wrote: > On 08/03/17 11:05, Dan van der Ster wrote: > > On Fri, Jul 28, 2017 at 9:42 PM, Peter Maloney > wrote: > > Hello Dan, > > Based on what I know and what people told me on IRC, this means basically the > condition that the osd is not acting nor…

Re: [ceph-users] ceph osd safe to remove

2017-08-03 Thread Peter Maloney
On 08/03/17 11:05, Dan van der Ster wrote: > On Fri, Jul 28, 2017 at 9:42 PM, Peter Maloney > wrote: >> Hello Dan, >> >> Based on what I know and what people told me on IRC, this means basically the >> condition that the osd is not acting nor up for any pg. And for one person >> (fusl on irc) that…

[ceph-users] Definition when setting up pool for Ceph Filesystem

2017-08-03 Thread c . monty
Hello! When setting up Ceph Filesystem, at least two RADOS pools, one for data and one for metadata, are required. Example: $ ceph osd pool create cephfs_data <pg_num> $ ceph osd pool create cephfs_metadata <pg_num> My question is regarding the value <pg_num>: Should this value be equal for data and metadata? Is my assump…
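A minimal sketch under the usual guidance that the metadata pool needs far fewer PGs than the data pool (the values 128 and 32 are illustrative assumptions, not from the thread):
$ ceph osd pool create cephfs_data 128
$ ceph osd pool create cephfs_metadata 32
$ ceph fs new cephfs cephfs_metadata cephfs_data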

Re: [ceph-users] ceph osd safe to remove

2017-08-03 Thread Dan van der Ster
On Fri, Jul 28, 2017 at 9:42 PM, Peter Maloney wrote: > Hello Dan, > > Based on what I know and what people told me on IRC, this means basically the > condition that the osd is not acting nor up for any pg. And one person > (fusl on irc) said there was an unfound objects bug when he had siz…
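A minimal sketch of checking that condition by hand before removal (osd.23 is hypothetical; the script discussed in this thread automates the same check, and newer releases also offer 'ceph osd safe-to-destroy'):
$ ceph osd out osd.23                 # start draining it
$ ceph pg ls-by-osd osd.23            # once this lists nothing, no PG is up or acting on it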

Re: [ceph-users] ceph osd safe to remove

2017-08-03 Thread Dan van der Ster
Thanks for this -- it is indeed pretty close to what I was looking for. I'll look at its heuristic in more detail to confirm it's correctly telling you which OSDs are safe to remove or not. BTW, I had to update all the maps to i64 from i32 to make this work -- I'll be sending a pull req. -- Dan

Re: [ceph-users] librados for MacOS

2017-08-03 Thread Willem Jan Withagen
On 03/08/2017 09:36, Brad Hubbard wrote: > On Thu, Aug 3, 2017 at 5:21 PM, Martin Palma wrote: >> Hello, >> >> is there a way to get librados for MacOS? Has anybody tried to build >> librados for MacOS? Is this even possible? > > Yes, it is eminently possible, but would require a dedicated effort

Re: [ceph-users] librados for MacOS

2017-08-03 Thread Brad Hubbard
On Thu, Aug 3, 2017 at 5:21 PM, Martin Palma wrote: > Hello, > > is there a way to get librados for MacOS? Has anybody tried to build > librados for MacOS? Is this even possible? Yes, it is eminently possible, but would require a dedicated effort. As far as I know there is no one working on this

[ceph-users] librados for MacOS

2017-08-03 Thread Martin Palma
Hello, is there a way to get librados for MacOS? Has anybody tried to build librados for MacOS? Is this even possible? Best, Martin