[ceph-users] osd down

2017-04-17 Thread ??????
Hi, all: I am installing ceph on 2 nodes using ceph-deploy. node1: monitor and osd.0, ip: 192.168.1.11; node2: osd.1, ip: 192.168.1.12. After I configured node1 as the monitor and osd.0, it was OK. But when I added node
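A minimal sketch of the usual second-node OSD workflow with ceph-deploy (Jewel-era syntax; the device /dev/sdb and the activation partition are assumptions, not taken from the original message):

    ceph-deploy install node2
    ceph-deploy osd prepare node2:/dev/sdb      # /dev/sdb is a placeholder device
    ceph-deploy osd activate node2:/dev/sdb1    # data partition created by prepare
    ceph osd tree                               # osd.1 should report "up"

With only two OSDs and the default pool size of 3, placement groups will also stay degraded regardless of OSD state, which is worth checking separately from the "osd down" symptom.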

[ceph-users] RadosGW and Openstack Keystone revoked tokens

2017-04-17 Thread magicb...@gmail.com
Hi, is it possible to configure radosGW (10.2.6-0ubuntu0.16.04.1) to work with Openstack Keystone UUID-based tokens? RadosGW expects a list of revoked tokens, but that option only works in Keystone deployments based on PKI tokens (not uuid/fernet tokens). Error log: 2017-04-17 10:40:43.75
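For reference, a hedged ceph.conf sketch of the Keystone-related options involved (the section name and values are placeholders; whether a revocation interval of 0 fully silences the error on 10.2.6 is an assumption to verify):

    [client.radosgw.gateway]
    rgw keystone url = http://keystone.example.com:35357
    rgw keystone admin token = <keystone-admin-token>
    rgw keystone accepted roles = Member, admin
    rgw keystone token cache size = 500
    # the revocation list only exists for PKI tokens; 0 stops radosgw from polling for it
    rgw keystone revocation interval = 0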

Re: [ceph-users] Socket errors, CRC, lossy con messages

2017-04-17 Thread Alex Gorbachev
On Thu, Apr 13, 2017 at 4:24 AM, Ilya Dryomov wrote: > On Thu, Apr 13, 2017 at 5:39 AM, Alex Gorbachev > wrote: >> On Wed, Apr 12, 2017 at 10:51 AM, Ilya Dryomov wrote: >>> On Wed, Apr 12, 2017 at 4:28 PM, Alex Gorbachev >>> wrote: Hi Ilya, On Wed, Apr 12, 2017 at 4:58 AM Ilya

Re: [ceph-users] Creating journal on needed partition

2017-04-17 Thread Nikita Shalnov
Hi all. Is there any way to create an osd manually which would use a designated partition of the journal disk (without using ceph-ansible)? I have journals on SSD disks and each journal disk contains 3 partitions for 3 osds. Example: one of the osds crashed. I changed a disk (sdaa) and want to pr
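One way to do this without ceph-ansible is plain ceph-disk, passing the existing journal partition explicitly (a sketch; /dev/sdc3 as the surviving journal partition is an assumption):

    ceph-disk prepare --fs-type xfs /dev/sdaa /dev/sdc3   # new data disk, existing journal partition
    ceph-disk activate /dev/sdaa1                          # data partition created by prepare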

Re: [ceph-users] Creating journal on needed partition

2017-04-17 Thread Chris Apsey
Nikita, take a look at https://git.cybbh.space/vta/saltstack/tree/master/apps/ceph, particularly files/init-journal.sh and files/osd-bootstrap.sh. We use salt to do some of the legwork (templatizing the bootstrap process), but for the most part it is all just a bunch of shell scripts with som

Re: [ceph-users] IO pausing during failures

2017-04-17 Thread Matthew Stroud
Any updates here?

Re: [ceph-users] Extremely high OSD memory utilization on Kraken 11.2.0 (with XFS -or- bluestore)

2017-04-17 Thread Sage Weil
On Sat, 15 Apr 2017, Aaron Ten Clay wrote: > Hi all, > > Our cluster is experiencing a very odd issue and I'm hoping for some > guidance on troubleshooting steps and/or suggestions to mitigate the issue. > tl;dr: Individual ceph-osd processes try to allocate > 90GiB of RAM and are > eventually nuk
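Commands commonly used to narrow this kind of growth down, assuming the OSDs are built against tcmalloc (osd.12 is a placeholder id):

    ceph tell osd.12 heap stats            # current tcmalloc usage
    ceph tell osd.12 heap start_profiler   # begin heap profiling
    ceph tell osd.12 heap dump             # write a profile for later pprof analysis
    ceph tell osd.12 heap release          # return freed pages to the OS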

[ceph-users] bluestore object overhead

2017-04-17 Thread Pavel Shub
Hey All, I'm running a test of bluestore in a small VM and seeing 2x overhead for each object. Here's the output of df detail:

GLOBAL:
    SIZE     AVAIL    RAW USED   %RAW USED   OBJECTS
    20378M   14469M   5909M      29.00       772k
POOLS:
    NAME   ID   CA
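One likely factor is the bluestore allocation unit: every object consumes at least bluestore_min_alloc_size of raw space, so many small objects inflate RAW USED. A hedged ceph.conf sketch for a small-object test (the value is an assumption, and it only takes effect for OSDs created after it is set):

    [osd]
    # lower the allocation unit for a small-object test; the shipped defaults
    # per media type are larger and are an assumption worth checking
    bluestore min alloc size = 4096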

Re: [ceph-users] Ceph with Clos IP fabric

2017-04-17 Thread Richard Hesse
A couple of questions: 1) What is your rack topology? Are all ceph nodes in the same rack communicating with the same top of rack switch? 2) Why did you choose to run the ceph nodes on loopback interfaces as opposed to the /24 for the "public" interface? 3) Are you planning on using RGW at all?
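Regarding question 2, a hedged sketch of what the loopback pattern usually looks like: each node gets a /32 loopback advertised into the fabric by BGP, and Ceph is pointed at it (addresses are placeholders, not from the original thread):

    [global]
    public network = 10.10.0.0/24      # range covering the per-node /32 loopbacks
    [osd]
    public addr = 10.10.0.11           # this node's loopback address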

Re: [ceph-users] Adding a new rack to crush map without pain?

2017-04-17 Thread Richard Hesse
I'm just spitballing here, but what if you set osd crush update on start = false? Ansible would activate the OSDs but not place them in any particular rack, working around the ceph.conf problem you mentioned. Then you could place them in your CRUSH map by hand. I know you wanted to avoid editing
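A sketch of the manual placement that would follow (the bucket and host names are placeholders):

    # ceph.conf on the OSD nodes
    [osd]
    osd crush update on start = false

    ceph osd crush add-bucket rack4 rack
    ceph osd crush move rack4 root=default
    ceph osd crush move node07 rack=rack4      # re-parents the host bucket and its OSDs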

[ceph-users] SSD Primary Affinity

2017-04-17 Thread Reed Dier
Hi all, I am looking at a way to scale performance and usable space using something like Primary Affinity to effectively use 3x replication across 1 primary SSD OSD and 2 replicated HDD OSDs. Assuming production level, we would keep a pretty close 1:2 SSD:HDD ratio, but we are looking to experiment
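A sketch of how that is usually wired up (the osd ids are placeholders; on Jewel the mon flag below has to be enabled before primary-affinity can be set):

    # ceph.conf, [mon] section (or via injectargs on a running cluster)
    mon osd allow primary affinity = true

    ceph osd primary-affinity osd.0 1.0    # SSD OSD: preferred primary
    ceph osd primary-affinity osd.3 0      # HDD OSD: avoid being primary
    ceph osd primary-affinity osd.4 0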

Re: [ceph-users] Ceph OSD network with IPv6 SLAAC networks?

2017-04-17 Thread Félix Barbeira
We are implementing an IPv6-native ceph cluster using SLAAC. We have some legacy machines that are not capable of using IPv6, only IPv4, for various reasons (yeah, I know). I'm wondering what would happen if I add an IPv4 address on the radosgw in addition to the IPv6 address that is already in use. The
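For what it's worth, a heavily hedged sketch of a dual-stack civetweb bind for radosgw (the addresses and the "+"-joined port list are assumptions to verify against your version's documentation):

    [client.rgw.gateway]
    # listen on the existing IPv6 address plus an additional IPv4 address
    rgw frontends = civetweb port=[2001:db8::20]:7480+192.0.2.20:7480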