Re: [ceph-users] Fwd: High IOWait Issue

2018-03-24 Thread Budai Laszlo
Could you post the result of "ceph -s"? Besides the health status there are 
other details that could help, like the status of your PGs. The result of 
"ceph-disk list" would also be useful to understand how your disks are organized. 
For instance, with 1 SSD journal for 7 HDDs, the SSD could be the bottleneck.
From the outputs you gave us we don't know which are the spinning disks and 
which is the SSD (looking at the numbers I suspect that sdi is your SSD). We 
also don't know what parameters you were using when you ran the iostat 
command.

Unfortunately it's difficult to help you without knowing more about your system.
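For reference, a minimal sketch of the commands being asked for above (assuming a 
Jewel-era, ceph-disk based deployment):

ceph -s           # overall health, PG states, client I/O
ceph df           # pool-level utilization
ceph osd tree     # how OSDs map to hosts
ceph-disk list    # which partitions hold data and which hold journals
iostat -xk 5      # per-device utilization; include the flags used when sharing output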

Kind regards,
Laszlo

On 24.03.2018 20:19, Sam Huracan wrote:
> This is from iostat:
>
> I'm using Ceph Jewel, with no HW errors.
> Ceph health is OK; we've used just 50% of the total volume.
>
>
> 2018-03-24 22:20 GMT+07:00:
>
> I would also check the utilization of your disks with tools like atop. 
> Perhaps something related shows up in dmesg or the like?
>
> - Mehmet
>
> On 24 March 2018 08:17:44 CET, Sam Huracan wrote:
>
>
> Hi guys,
> We are running a production OpenStack backed by Ceph.
>
> At present, we are facing an issue with high iowait in VMs: in 
> some MySQL VMs we see IOwait sometimes reach abnormally high peaks, which lead 
> to an increase in slow queries even though the load is stable (we test with a 
> script that simulates real load), as you can see in the graph.
> https://prnt.sc/ivndni
>
> MySQL VMs are placed on a Ceph HDD cluster, with 1 SSD journal for 7 HDDs. 
> In this cluster, IOwait on each Ceph host is about 20%.
> https://prnt.sc/ivne08
>
>
> Can you guys help me find the root cause of this issue, and how to 
> eliminate this high iowait?
>
> Thanks in advance.
>
>


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Fwd: High IOWait Issue

2018-03-24 Thread ceph
Perhaps unbalanced OSDs?
Could you send us the output of "ceph osd tree"?
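To illustrate, the outputs that would show an imbalance (assuming a Jewel-era cluster):

ceph osd tree     # CRUSH hierarchy and weights
ceph osd df       # per-OSD utilization; a large spread in the VAR column points to imbalance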

- Mehmet 

On 24 March 2018 19:46:44 CET, "da...@visions.se" wrote:
>You have 2 drives at almost 100% util, which means they are maxed out. So
>you need more disks or better drives to fix your I/O issues (SSDs for
>MySQL are a no-brainer, really).
>-- Original message --
>From: Sam Huracan
>Date: Sat 24 March 2018 19:20
>To: c...@elchaka.de
>Cc: ceph-users@lists.ceph.com
>Subject: Re: [ceph-users] Fwd: High IOWait Issue
>This is from iostat:
>I'm using Ceph Jewel, with no HW errors. Ceph health is OK; we've used
>just 50% of the total volume.
>
>
>2018-03-24 22:20 GMT+07:00:
>I would also check the utilization of your disks with tools like atop.
>Perhaps something related shows up in dmesg or the like?
>
>
>
>- Mehmet   
>
>On 24 March 2018 08:17:44 CET, Sam Huracan wrote:
>
>Hi guys,
>We are running a production OpenStack backed by Ceph.
>At present, we are facing an issue with high iowait in VMs: in
>some MySQL VMs we see IOwait sometimes reach abnormally high peaks,
>which lead to an increase in slow queries even though the load is
>stable (we test with a script that simulates real load), as you can
>see in the graph: https://prnt.sc/ivndni
>
>MySQL VMs are placed on a Ceph HDD cluster, with 1 SSD journal for 7 HDDs.
>In this cluster, IOwait on each Ceph host is about 20%:
>https://prnt.sc/ivne08
>
>Can you guys help me find the root cause of this issue, and how to
>eliminate this high iowait?
>Thanks in advance.
>
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Fwd: High IOWait Issue

2018-03-24 Thread da...@visions.se
You have 2 drives at almost 100% util, which means they are maxed out. So you need 
more disks or better drives to fix your I/O issues (SSDs for MySQL are a no-brainer, 
really).
-- Original message --
From: Sam Huracan
Date: Sat 24 March 2018 19:20
To: c...@elchaka.de
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Fwd: High IOWait Issue
This is from iostat:
I'm using Ceph Jewel, with no HW errors. Ceph health is OK; we've used just 50% of 
the total volume.


2018-03-24 22:20 GMT+07:00:
I would also check the utilization of your disks with tools like atop. Perhaps 
something related shows up in dmesg or the like?



- Mehmet   

On 24 March 2018 08:17:44 CET, Sam Huracan wrote:

Hi guys,
We are running a production OpenStack backed by Ceph.
At present, we are facing an issue with high iowait in VMs: in some 
MySQL VMs we see IOwait sometimes reach abnormally high peaks, which lead to 
an increase in slow queries even though the load is stable (we test with a script 
that simulates real load), as you can see in the graph: https://prnt.sc/ivndni

MySQL VMs are placed on a Ceph HDD cluster, with 1 SSD journal for 7 HDDs. In this 
cluster, IOwait on each Ceph host is about 20%: https://prnt.sc/ivne08

Can you guys help me find the root cause of this issue, and how to eliminate 
this high iowait?
Thanks in advance.







___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Fwd: High IOWait Issue

2018-03-24 Thread Sam Huracan
This is from iostat:

I'm using Ceph Jewel, with no HW errors.
Ceph health is OK; we've used just 50% of the total volume.


2018-03-24 22:20 GMT+07:00:

> I would also check the utilization of your disks with tools like atop.
> Perhaps something related shows up in dmesg or the like?
>
> - Mehmet
>
> On 24 March 2018 08:17:44 CET, Sam Huracan <nowitzki.sa...@gmail.com> wrote:
>>
>>
>> Hi guys,
>> We are running a production OpenStack backed by Ceph.
>>
>> At present, we are facing an issue with high iowait in VMs: in
>> some MySQL VMs we see IOwait sometimes reach abnormally high peaks, which
>> lead to an increase in slow queries even though the load is stable (we test
>> with a script that simulates real load), as you can see in the graph.
>> https://prnt.sc/ivndni
>>
>> MySQL VMs are placed on a Ceph HDD cluster, with 1 SSD journal for 7 HDDs. In
>> this cluster, IOwait on each Ceph host is about 20%.
>> https://prnt.sc/ivne08
>>
>>
>> Can you guys help me find the root cause of this issue, and how to
>> eliminate this high iowait?
>>
>> Thanks in advance.
>>
>>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Enable object map kernel module

2018-03-24 Thread ceph


On 24 March 2018 00:05:12 CET, Thiago Gonzaga wrote:
>Hi All,
>
>I'm starting with ceph and faced a problem while using object-map
>
>root@ceph-mon-1:/home/tgonzaga# rbd create test -s 1024 --image-format 2 --image-feature exclusive-lock
>root@ceph-mon-1:/home/tgonzaga# rbd feature enable test object-map
>root@ceph-mon-1:/home/tgonzaga# rbd list
>test
>root@ceph-mon-1:/home/tgonzaga# rbd map test
>rbd: sysfs write failed
>RBD image feature set mismatch. You can disable features unsupported by
>the
>kernel with "rbd feature disable test object-map".
>In some cases useful info is found in syslog - try "dmesg | tail".
>rbd: map failed: (6) No such device or address
>
>How can we deal with that? I have seen comments that large images
>without an object map may be very slow to delete.

I guess your issue is not related to the feature directly... you are trying 
to map the image with a kernel that does not support the feature you enabled.

Perhaps rbd-nbd could be helpful; if I understand right, it uses librbd.
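A rough sketch of both options (image name taken from the commands above; depending 
on the kernel, other features may need disabling too):

# Option 1: drop the feature the kernel client does not support, then map with krbd
rbd feature disable test object-map
rbd map test

# Option 2: keep the feature and map through librbd instead
rbd-nbd map test     # needs the rbd-nbd package and the nbd kernel module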

- Mehmet   
>
>Regards,
>
>*Thiago Gonzaga*
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Fwd: High IOWait Issue

2018-03-24 Thread ceph
I would also check the utilization of your disks with tools like atop. Perhaps 
something related shows up in dmesg or the like?

- Mehmet   

On 24 March 2018 08:17:44 CET, Sam Huracan wrote:
>Hi guys,
>We are running a production OpenStack backed by Ceph.
>
>At present, we are facing an issue with high iowait in VMs: in some
>MySQL VMs we see IOwait sometimes reach abnormally high peaks, which
>lead to an increase in slow queries even though the load is stable
>(we test with a script that simulates real load), as you can see in
>the graph:
>https://prnt.sc/ivndni
>
>MySQL VMs are placed on a Ceph HDD cluster, with 1 SSD journal for
>7 HDDs. In this cluster, IOwait on each Ceph host is about 20%:
>https://prnt.sc/ivne08
>
>
>Can you guys help me find the root cause of this issue, and how to
>eliminate this high iowait?
>
>Thanks in advance.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Radosgw ldap info

2018-03-24 Thread Marc Roos


To clarify, if I understand correctly:

It is NOT POSSIBLE to use an S3 client like e.g. Cyberduck/Mountain Duck 
and supply a user with an 'Access key' and a 'Password', regardless of 
whether the user is defined in LDAP or locally?

I honestly cannot see how this LDAP integration is supposed to work 
without a proper LDAP schema for auth caps being available. Nor do I 
understand where these auth caps are currently set, nor what use the 
current LDAP functionality has.

It would be nice to update this on these pages:

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html-single/ceph_object_gateway_with_ldapad_guide/index
http://docs.ceph.com/docs/master/radosgw/ldap-auth/
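For what it's worth, a rough sketch of how the LDAP integration is meant to be 
driven according to the pages above (the user name and password are placeholders; 
the generated token is what the S3 client then presents as its access key):

export RGW_ACCESS_KEY_ID="ldap-username"       # placeholder LDAP uid
export RGW_SECRET_ACCESS_KEY="ldap-password"   # placeholder LDAP password
radosgw-token --encode --ttype=ldap            # prints a base64 token
# configure the S3 client with that token as the access key; per the docs above,
# the secret key field is not meaningfully used for LDAP-authenticated requests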


Maybe it would be good to give some 'beginners' access to the docs pages, 
so that as they are learning Ceph (and notice missing info in the docs) 
they can add it. I have the impression that many things asked here could 
be added to the docs.





-Original Message-
From: Konstantin Shalygin [mailto:k0...@k0ste.ru] 
Sent: Sunday 18 March 2018 5:04
To: ceph-users@lists.ceph.com
Cc: Marc Roos; Yehuda Sadeh-Weinraub
Subject: Re: [ceph-users] Radosgw ldap user authentication issues

Hi Marc


> looks like no search is being done there.

> rgw::auth::s3::AWSAuthStrategy denied with reason=-13


The same for me, http://tracker.ceph.com/issues/23091


But Yehuda closed this.




k



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] HA for Vms with Ceph and KVM

2018-03-24 Thread Xavier Trilla
Hi,

Looks like there is some misinformation about the exclusive-lock feature; here is 
some information already posted on the mailing list:


The naming of the "exclusive-lock" feature probably implies too much compared 
to what it actually does.  In reality, when you enable the "exclusive-lock" 
feature, only one RBD client is able to modify the image while the lock is 
held.  However, that won't stop other RBD clients from *requesting* that 
maintenance operations be performed on the image (e.g. snapshot, resize).

Behind the scenes, librbd will detect that another client currently owns the 
lock and will proxy its request over to the current watch owner.  This ensures 
that we only have one client modifying the image while at the same time not 
crippling other use cases.  librbd also supports cooperative exclusive lock 
transfer, which is used in the case of qemu VM migrations where the image needs 
to be opened R/W by two clients at the same time.

--

Jason Dillaman
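To see this in practice, a small sketch (the image name "vm-disk" is a placeholder; 
the commands are available since Jewel):

rbd create vm-disk --size 10G --image-feature exclusive-lock
rbd info vm-disk | grep features    # confirm exclusive-lock is enabled
rbd status vm-disk                  # lists the current watchers of the image
rbd lock ls vm-disk                 # the managed lock shows up here while a client holds it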


Kind regards,
Xavier Trilla P.
SiliconHosting

A cloud server with SSDs, redundant
and available in less than 30 seconds?

Try it now at Clouding.io!

On 19 Mar 2018, at 9:38, Gregory Farnum wrote:

You can explore the rbd exclusive-lock functionality if you want to do this, 
but it's not typically advised, because using it makes moving live VMs across 
hosts harder, IIUC.
-Greg

On Sat, Mar 17, 2018 at 7:47 PM, Egoitz Aurrekoetxea wrote:

Good morning,


Does some kind of config parameter exist in Ceph to prevent two hosts from accessing 
the same VM pool, or at least the same image inside it? Can it be done at pool or image level?


Best regards,

--


[sarenet]
Egoitz Aurrekoetxea
Systems department
944 209 470
Parque Tecnológico. Edificio 103
48170 Zamudio (Bizkaia)
ego...@sarenet.es
www.sarenet.es

Before printing this email, please consider whether it is really necessary.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Shell / curl test script for rgw

2018-03-24 Thread Marc Roos
 
This one downloads a file from the bucket (or accesses the admin API) without 
further modifications.

#!/bin/bash
#

file="bucket"
bucket="admin"
file="test.txt"
bucket="test"

key=""
secret=""

host="192.168.1.114:7480"
resource="/${bucket}/${file}"

contentType="application/x-compressed-tar"
contentType="text/plain"
dateValue=`date -R -u`
method="GET"
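# AWS v2 string-to-sign built below: VERB \n Content-MD5 \n Content-Type \n Date \n CanonicalizedResource
# (the empty line inside the quoted string stands in for the unused Content-MD5 field)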


stringToSign="${method}

${contentType}
${dateValue}
${resource}"

signature=`echo -en "$stringToSign" | openssl sha1 -hmac ${secret} -binary | base64`

curl -X ${method} -H "Content-Type: ${contentType}" -H "Date: ${dateValue}" -H "Authorization: AWS ${key}:${signature}" -H "Host: ${host}" "https://${host}${resource}?format=json=True" --insecure

echo



-Original Message-
From: Konstantin Shalygin [mailto:k0...@k0ste.ru] 
Sent: Saturday 24 March 2018 4:03
To: Marc Roos; ceph-users
Subject: Re: [ceph-users] Shell / curl test script for rgw

On 03/24/2018 07:22 AM, Marc Roos wrote:
>   
> Thanks! I got it working, although I had to change the date to "date 
> -R -u", because I got the "RequestTimeTooSkewed" error.
>
> I also had to enable buckets=read on the account that was already able 
> to read and write via Cyberduck; I don't get that.
>
> radosgw-admin caps add --uid='test$test1' --caps "buckets=read"
>

Please post your version, because I had to tune the date for the same reason 
("RequestTimeTooSkewed").



k


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Fwd: High IOWait Issue

2018-03-24 Thread da...@visions.se
Also, you only posted the total I/O wait from top. Please use iostat to check 
the utilization of each backend disk.
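Something along these lines would show it (device names are examples):

iostat -xmt 5
# watch the %util and await columns for each backend device (the HDD OSDs and the
# journal SSD); any device sitting near 100% util is the saturation point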
-- Original message --
From: Budai Laszlo
Date: Sat 24 March 2018 08:57
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Fwd: High IOWait Issue
Hi,

What version of Ceph are you using? What is the HW config of your OSD nodes?
Have you checked your disks for errors (dmesg, smartctl)?
What status is Ceph reporting? (ceph -s)
What is the saturation level of your cluster? (ceph df)

Kind regards,
Laszlo

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] where is it possible download CentOS 7.5

2018-03-24 Thread Marc Roos
 
I am glad to have made your day! It took me a bit to come up with a fitting 
answer to your question ;)
Have a nice weekend






-Original Message-
From: Max Cuttins [mailto:m...@phoenixweb.it] 
Sent: Saturday 24 March 2018 13:18
To: Marc Roos; ceph-users
Subject: Re: [ceph-users] where is it possible download CentOS 7.5

Thanks Marc,

your answer is so illuminating.
If it were that easy, I would have downloaded it two months ago.
But it's not on the official channel and there is no mention of this 
release anywhere (sorry, not even on Google).

Well... except in the Ceph documentation, of course.
So I posted here: I guess somebody read the docs before me, and somebody 
"maybe" had already solved this X-file for everybody.

But thank you for your ridiculous answer.
You made my day.



On 24/03/2018 12:47, Marc Roos wrote:
>   
>
> https://www.google.pl/search?dcr=0=hp=where+can+i+download+ce
> ntos+7.5=where+can+i+download+centos+7.5
>
>
>
> -Original Message-
> From: Max Cuttins [mailto:m...@phoenixweb.it]
> Sent: Saturday 24 March 2018 12:36
> To: ceph-users@lists.ceph.com
> Subject: [ceph-users] where is it possible download CentOS 7.5
>
> As stated in the documentation, in order to use iSCSI you need to use 
> CentOS 7.5.
> Where can I download it?
>
>
>
>
> Thanks
>
>
>
>
>
>
> iSCSI Targets
>
>
> Traditionally, block-level access to a Ceph storage cluster has been 
> limited to QEMU and librbd, which is a key enabler for adoption within 

> OpenStack environments. Starting with the Ceph Luminous release, 
> block-level access is expanding to offer standard iSCSI support 
> allowing wider platform usage, and potentially opening new use cases.
>
> * RHEL/CentOS 7.5; Linux kernel v4.16 or newer; or the Ceph iSCSI
> client test kernel
> 
> * A working Ceph Storage cluster, deployed with ceph-ansible or
> using the command-line interface
> * iSCSI gateways nodes, which can either be colocated with OSD 
nodes
> or on dedicated nodes
> * Separate network subnets for iSCSI front-end traffic and Ceph
> back-end traffic
>
>
>



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] where is it possible download CentOS 7.5

2018-03-24 Thread Max Cuttins

Thanks Marc,

your answer is so illuminating.
If it were that easy, I would have downloaded it two months ago.
But it's not on the official channel and there is no mention of this 
release anywhere (sorry, not even on Google).


Well... except in the Ceph documentation, of course.
So I posted here: I guess somebody read the docs before me, and somebody 
"maybe" had already solved this X-file for everybody.


But thank you for your ridiculous answer.
You made my day.



On 24/03/2018 12:47, Marc Roos wrote:
  


https://www.google.pl/search?dcr=0=hp=where+can+i+download+centos+7.5=where+can+i+download+centos+7.5



-Original Message-
From: Max Cuttins [mailto:m...@phoenixweb.it]
Sent: Saturday 24 March 2018 12:36
To: ceph-users@lists.ceph.com
Subject: [ceph-users] where is it possible download CentOS 7.5

As stated in the documentation, in order to use iSCSI you need to use
CentOS 7.5.
Where can I download it?




Thanks






iSCSI Targets


Traditionally, block-level access to a Ceph storage cluster has been
limited to QEMU and librbd, which is a key enabler for adoption within
OpenStack environments. Starting with the Ceph Luminous release,
block-level access is expanding to offer standard iSCSI support allowing
wider platform usage, and potentially opening new use cases.

*   RHEL/CentOS 7.5; Linux kernel v4.16 or newer; or the Ceph iSCSI
client test kernel

*   A working Ceph Storage cluster, deployed with ceph-ansible or
using the command-line interface
*   iSCSI gateways nodes, which can either be colocated with OSD nodes
or on dedicated nodes
*   Separate network subnets for iSCSI front-end traffic and Ceph
back-end traffic





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] where is it possible download CentOS 7.5

2018-03-24 Thread Marc Roos
 

https://www.google.pl/search?dcr=0=hp=where+can+i+download+centos+7.5=where+can+i+download+centos+7.5



-Original Message-
From: Max Cuttins [mailto:m...@phoenixweb.it] 
Sent: Saturday 24 March 2018 12:36
To: ceph-users@lists.ceph.com
Subject: [ceph-users] where is it possible download CentOS 7.5

As stated in the documentation, in order to use iSCSI you need to use 
CentOS 7.5.
Where can I download it?




Thanks






iSCSI Targets


Traditionally, block-level access to a Ceph storage cluster has been 
limited to QEMU and librbd, which is a key enabler for adoption within 
OpenStack environments. Starting with the Ceph Luminous release, 
block-level access is expanding to offer standard iSCSI support allowing 
wider platform usage, and potentially opening new use cases.

*   RHEL/CentOS 7.5; Linux kernel v4.16 or newer; or the Ceph iSCSI 
client test kernel 
 
*   A working Ceph Storage cluster, deployed with ceph-ansible or 
using the command-line interface
*   iSCSI gateways nodes, which can either be colocated with OSD nodes 
or on dedicated nodes
*   Separate network subnets for iSCSI front-end traffic and Ceph 
back-end traffic


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] where is it possible download CentOS 7.5

2018-03-24 Thread Max Cuttins
As stated in the documentation, in order to use iSCSI you need to use 
CentOS 7.5.

Where can I download it?


Thanks


 iSCSI Targets

Traditionally, block-level access to a Ceph storage cluster has been 
limited to QEMU and librbd, which is a key enabler for adoption within 
OpenStack environments. Starting with the Ceph Luminous release, 
block-level access is expanding to offer standard iSCSI support allowing 
wider platform usage, and potentially opening new use cases.


 * RHEL/CentOS 7.5; Linux kernel v4.16 or newer; or the Ceph iSCSI
   client test kernel
   
 * A working Ceph Storage cluster, deployed with ceph-ansible or
   using the command-line interface
 * iSCSI gateways nodes, which can either be colocated with OSD nodes
   or on dedicated nodes
 * Separate network subnets for iSCSI front-end traffic and Ceph
   back-end traffic

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Shell / curl test script for rgw

2018-03-24 Thread Marc Roos
 
This is working, but I want to modify it to download some file; I am not 
that interested in testing admin caps at this time.




#!/bin/bash
#
# radosgw-admin caps add --uid='' --caps "buckets=read"

file=1MB.bin
bucket=test

key=""
secret=""

host="192.168.1.114:7480"
resource="/${bucket}/${file}"
resource="/admin/bucket"

contentType="application/x-compressed-tar"
dateValue=`date -R -u`
method="GET"


function hmacsha256 {
  local key="$1"
  local data="$2"
  echo -n "$data" | openssl dgst -sha256 -mac HMAC -macopt "key:$key" | sed 's/^.* //'
}
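# AWS v2 string-to-sign built below: VERB \n Content-MD5 \n Content-Type \n Date \n CanonicalizedResource
# (the two empty lines inside the quoted string stand in for Content-MD5 and
#  Content-Type, which this request does not send)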

stringToSign="${method}


${dateValue}
${resource}"



signature=`echo -en "$stringToSign" | openssl sha1 -hmac ${secret} -binary | base64`

curl -X ${method} -H "Date: ${dateValue}" -H "Authorization: AWS ${key}:${signature}" -H "Host: ${host}" "https://${host}${resource}?format=json=True" --insecure

echo


-Original Message-
From: Konstantin Shalygin [mailto:k0...@k0ste.ru] 
Sent: Saturday 24 March 2018 4:03
To: Marc Roos; ceph-users
Subject: *SPAM* Re: [ceph-users] Shell / curl test script for 
rgw

On 03/24/2018 07:22 AM, Marc Roos wrote:
>   
> Thanks! I got it working, although I had to change the date to "date 
> -R -u", because I got the "RequestTimeTooSkewed" error.
>
> I also had to enable buckets=read on the account that was already able 
> to read and write via Cyberduck; I don't get that.
>
> radosgw-admin caps add --uid='test$test1' --caps "buckets=read"
>

Please post your version, because I had to tune the date for the same reason 
("RequestTimeTooSkewed").



k


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] MDS Bug/Problem

2018-03-24 Thread Yan, Zheng
On Fri, Mar 23, 2018 at 7:45 PM, Perrin, Christopher (zimkop1)
 wrote:
> Hi,
>
> Last week our MDSs started failing one after another, and could not be 
> started anymore. After a lot of tinkering I found out that the MDSs crashed after 
> trying to rejoin the cluster. The only solution I found that let them start 
> again was resetting the journal via cephfs-journal-tool. Now I have broken 
> files all over the cluster.
>
> Before the crash the OSDs blocked tens of thousands of slow requests.
>
> Can I somehow restore the broken files (I still have a backup of the journal), 
> and how can I make sure that this doesn't happen again? I am still not sure 
> why this even happened.
>
> This happened on ceph version 12.2.3.
>
> This is the log of one MDS:
>   -224> 2018-03-22 15:52:47.310437 7fd5798fd700  1 -- x.x.1.17:6803/122963511 
> <== mon.0 x.x.1.17:6789/0 2  auth_reply(proto 2 0 (0) Success) v1  
> 33+0+0 (3611581813 0 0) 0x555883df2780 con 0x555883eb5000
>   -223> 2018-03-22 15:52:47.310482 7fd5798fd700 10 monclient(hunting): my 
> global_id is 745317
>   -222> 2018-03-22 15:52:47.310634 7fd5798fd700  1 -- x.x.1.17:6803/122963511 
> --> x.x.1.17:6789/0 -- auth(proto 2 32 bytes epoch 0) v1 -- 0x555883df2f00 
> con 0
>   -221> 2018-03-22 15:52:47.311096 7fd57c09f700  5 -- x.x.1.17:6803/122963511 
> >> x.x.1.17:6789/0 conn(0x555883eb5000 :-1 
> s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=793748 cs=1 l=1). rx mon.0 
> seq 3 0x555883df2f00 auth_reply(proto 2 0 (0) Success) v1
>   -220> 2018-03-22 15:52:47.311178 7fd5798fd700  1 -- x.x.1.17:6803/122963511 
> <== mon.0 x.x.1.17:6789/0 3  auth_reply(proto 2 0 (0) Success) v1  
> 222+0+0 (1789869469 0 0) 0x555883df2f00 con 0x555883eb5000
>   -219> 2018-03-22 15:52:47.311319 7fd5798fd700  1 -- x.x.1.17:6803/122963511 
> --> x.x.1.17:6789/0 -- auth(proto 2 181 bytes epoch 0) v1 -- 0x555883df2780 
> con 0
>   -218> 2018-03-22 15:52:47.312122 7fd57c09f700  5 -- x.x.1.17:6803/122963511 
> >> x.x.1.17:6789/0 conn(0x555883eb5000 :-1 
> s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=793748 cs=1 l=1). rx mon.0 
> seq 4 0x555883df2780 auth_reply(proto 2 0 (0) Success) v1
>   -217> 2018-03-22 15:52:47.312208 7fd5798fd700  1 -- x.x.1.17:6803/122963511 
> <== mon.0 x.x.1.17:6789/0 4  auth_reply(proto 2 0 (0) Success) v1  
> 799+0+0 (4156877078 0 0) 0x555883df2780 con 0x555883eb5000
>   -216> 2018-03-22 15:52:47.312393 7fd5798fd700  1 monclient: found mon.filer1
>   -215> 2018-03-22 15:52:47.312416 7fd5798fd700 10 monclient: 
> _send_mon_message to mon.filer1 at x.x.1.17:6789/0
>   -214> 2018-03-22 15:52:47.312427 7fd5798fd700  1 -- x.x.1.17:6803/122963511 
> --> x.x.1.17:6789/0 -- mon_subscribe({monmap=0+}) v2 -- 0x555883c8ed80 con 0
>   -213> 2018-03-22 15:52:47.312461 7fd5798fd700 10 monclient: 
> _check_auth_rotating renewing rotating keys (they expired before 2018-03-22 
> 15:52:17.312460)
>   -212> 2018-03-22 15:52:47.312477 7fd5798fd700 10 monclient: 
> _send_mon_message to mon.filer1 at x.x.1.17:6789/0
>   -211> 2018-03-22 15:52:47.312482 7fd5798fd700  1 -- x.x.1.17:6803/122963511 
> --> x.x.1.17:6789/0 -- auth(proto 2 2 bytes epoch 0) v1 -- 0x555883df2f00 con > 0
>   -210> 2018-03-22 15:52:47.312552 7fd580637200  5 monclient: authenticate 
> success, global_id 745317
>   -209> 2018-03-22 15:52:47.312570 7fd580637200 10 monclient: 
> wait_auth_rotating waiting (until 2018-03-22 15:53:17.312568)
>   -208> 2018-03-22 15:52:47.312776 7fd57c09f700  5 -- x.x.1.17:6803/122963511 
> >> x.x.1.17:6789/0 conn(0x555883eb5000 :-1 
> s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=793748 cs=1 l=1). rx mon.0 
> seq 5 0x555883c8f8c0 mon_map magic: 0 v1
>   -207> 2018-03-22 15:52:47.312841 7fd5798fd700  1 -- x.x.1.17:6803/122963511 
> <== mon.0 x.x.1.17:6789/0 5  mon_map magic: 0 v1  433+0+0 (493202164 
> 0 0) 0x555883c8f8c0 con 0x555883eb5000
>   -206> 2018-03-22 15:52:47.312868 7fd5798fd700 10 monclient: handle_monmap 
> mon_map magic: 0 v1
>   -205> 2018-03-22 15:52:47.312892 7fd5798fd700 10 monclient:  got monmap 7, 
> mon.filer1 is now rank 0
>   -204> 2018-03-22 15:52:47.312901 7fd57c09f700  5 -- x.x.1.17:6803/122963511 
> >> x.x.1.17:6789/0 conn(0x555883eb5000 :-1 
> s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=793748 cs=1 l=1). rx mon.0 
> seq 6 0x555883df2f00 auth_reply(proto 2 0 (0) Success) v1
>   -203> 2018-03-22 15:52:47.312900 7fd5798fd700 10 monclient: dump:
> epoch 7
> fsid a5473adc-cfb8-4672-883e-40f5f6541a36
> last_changed 2017-12-08 10:38:51.267030
> created 2017-01-20 17:05:29.092109
> 0: x.x.1.17:6789/0 mon.filer1
> 1: x.x.1.18:6789/0 mon.filer2
> 2: x.x.1.21:6789/0 mon.master1
>
>   -202> 2018-03-22 15:52:47.312950 7fd5798fd700  1 -- x.x.1.17:6803/122963511 
> <== mon.0 x.x.1.17:6789/0 6  auth_reply(proto 2 0 (0) Success) v1  
> 194+0+0 (1424514407 0 0) 0x555883df2f00 con 0x555883eb5000
>   -201> 2018-03-22 15:52:47.313072 7fd5798fd700 10 monclient: 
> _check_auth_rotating have uptodate 
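For reference, the usual recovery sequence being referred to above (per the CephFS 
disaster-recovery documentation; the MDS id "mds.a" and the scrub path are 
placeholders) looks roughly like this:

cephfs-journal-tool journal export backup.bin        # keep a copy before touching anything
cephfs-journal-tool event recover_dentries summary   # salvage what the journal still holds
cephfs-journal-tool journal reset
cephfs-table-tool all reset session
# once an MDS is active again, run an online scrub/repair of the damaged tree:
ceph daemon mds.a scrub_path / recursive repair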

Re: [ceph-users] Group-based permissions issue when using ACLs on CephFS

2018-03-24 Thread Yan, Zheng
On Sat, Mar 24, 2018 at 11:34 AM, Josh Haft  wrote:
>
>
> On Fri, Mar 23, 2018 at 8:49 PM, Yan, Zheng  wrote:
>>
>> On Fri, Mar 23, 2018 at 9:50 PM, Josh Haft  wrote:
>> > On Fri, Mar 23, 2018 at 12:14 AM, Yan, Zheng  wrote:
>> >>
>> >> On Fri, Mar 23, 2018 at 5:14 AM, Josh Haft  wrote:
>> >> > Hello!
>> >> >
>> >> > I'm running Ceph 12.2.2 with one primary and one standby MDS.
>> >> > Mounting
>> >> > CephFS via ceph-fuse (to leverage quotas), and enabled ACLs by adding
>> >> > fuse_default_permissions=0 and client_acl_type=posix_acl to the mount
>> >> > options. I then export this mount via NFS and the clients mount
>> >> > NFS4.1.
>> >> >
>> >> does fuse_default_permissions=0 work?
>> >
>> > Yes, ACLs work as expected when I set fuse_default_permissions=0.
>> >
>> >> > After doing some in-depth testing it seems I'm unable to allow access
>> >> > from
>> >> > the NFS clients to a directory/file based on group membership when
>> >> > the
>> >> > underlying CephFS was mounted with ACL support. This issue appears
>> >> > using
>> >> > both filesystem permissions (e.g. chgrp) and NFSv4 ACLs. However,
>> >> > ACLs do
>> >> > work if the principal is a user instead of a group. If I disable ACL
>> >> > support
>> >> > on the ceph-fuse mount, things work as expected using fs permissions;
>> >> > obviously I don't get ACL support.
>> >> >
>> >> > As an intermediate step I did check whether this works directly on
>> >> > the
>> >> > CephFS filesystem - on the NFS server - and it does. So it appears to
>> >> > be an
>> >> > issue re-exporting it via NFS.
>> >> >
>> >> > I do not see this issue when mounting CephFS via the kernel,
>> >> > exporting via
>> >> > NFS, and re-running these tests.
>> >> >
>> >> > I searched the ML and bug reports but only found this -
>> >> > http://tracker.ceph.com/issues/12617 - which seems close to the issue
>> >> > I'm
>> >> > running into, but was closed as resolved 2+ years ago.
>> >> >
>> >> > Has anyone else run into this? Am I missing something obvious?
>> >> >
>> >>
>> >> ceph-fuse does the permission check according to the local host's view of
>> >> supplementary groups. That's why you see this behavior.
>> >
>> > You're saying both the NFS client and server (where ceph-fuse is
>> > running) need to use the same directory backend? (they are)
>> > I should have mentioned I'm using LDAP/AD on client and server, so I
>> > don't think that is the problem.
>> >
>> > Either way, I would not expect the behavior to change simply by
>> > enabling ACLs, especially when I'm using filesystem permissions, and
>> > ACLs aren't part of the equation.
>>
>> More specifically, ceph-fuse finds which groups the request initiator is
>> in via the function fuse_req_getgroups(). This function does tricks with
>> "/proc/%lu/task/%lu/status". It only works when the NFS client and
>> ceph-fuse are running on the same machine.
>>
> So why does this work when I'm using ceph-fuse but ACLs are disabled?
>>

Really?

Please check whether supplementary groups work for inodes without ACLs (mount
ceph-fuse with the config option fuse_default_permissions=0).
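A quick way to see what ceph-fuse can learn about a caller on its own host (it only
reflects local processes, which is why an NFS re-export breaks group-based checks):

grep '^Groups:' /proc/self/status   # supplementary group IDs of the calling process
id -G                               # the same list via coreutils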


>>
>> >> Yan, Zheng
>> >>
>> >> > Thanks!
>> >> > Josh
>> >> >
>> >> >
>> >> >
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Fwd: High IOWait Issue

2018-03-24 Thread Budai Laszlo
Hi,

What version of Ceph are you using? What is the HW config of your OSD nodes?
Have you checked your disks for errors (dmesg, smartctl)?
What status is Ceph reporting? (ceph -s)
What is the saturation level of your cluster? (ceph df)

Kind regards,
Laszlo

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Fwd: High IOWait Issue

2018-03-24 Thread Sam Huracan
Hi guys,
We are running a production OpenStack backed by Ceph.

At present, we are facing an issue with high iowait in VMs: in some
MySQL VMs we see IOwait sometimes reach abnormally high peaks, which lead to
an increase in slow queries even though the load is stable (we test with a script
that simulates real load), as you can see in the graph:
https://prnt.sc/ivndni

MySQL VMs are placed on a Ceph HDD cluster, with 1 SSD journal for 7 HDDs. In
this cluster, IOwait on each Ceph host is about 20%:
https://prnt.sc/ivne08


Can you guys help me find the root cause of this issue, and how to eliminate
this high iowait?

Thanks in advance.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com