Re: [ceph-users] jewel blocked requests

2016-09-12 Thread shiva rkreddy
By saying "old clients"  did you mean, (a) Client VMs running old Operating
System (b)  Client VMs/Volumes that are in-use for a long time and across
ceph releases ? Was there any tuning done to fix it?
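
For what it's worth, a rough way to see which clients are connecting (a
sketch, assuming shell access to a monitor host and that the monitor is
named after its short hostname) would be:

  # list the sessions currently open on a monitor; the output includes the
  # client addresses, so old clients can be tracked down host by host
  ceph daemon mon.$(hostname -s) sessions

  # then, on those client hosts, check the installed client library versions
  rpm -q librbd1 librados2        # Debian/Ubuntu: dpkg -l librbd1 librados2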

Thanks,

On Mon, Sep 12, 2016 at 3:05 PM, Wido den Hollander  wrote:

>
> > Op 12 september 2016 om 18:47 schreef "WRIGHT, JON R (JON R)" <
> jonrodwri...@gmail.com>:
> >
> >
> > Since upgrading to Jewel from Hammer, we're started to see HEALTH_WARN
> > because of 'blocked requests > 32 sec'.   Seems to be related to writes.
> >
> > Has anyone else seen this?  Or can anyone suggest what the problem might
> be?
> >
>
> Do you by any chance have old clients connecting? I saw this after a Jewel
> upgrade as well and it was because of very old clients still connecting to
> the cluster.
>
> Wido
>
> > Thanks!
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Adding a subnet

2016-02-27 Thread shiva rkreddy
Hi,

Use case:
 I have a Hammer-based cluster. There are no IP addresses left on the
cluster_network to add new OSD servers. The new subnet is reachable from the
current one.

Question:
After adding the new subnet to ceph.conf, is it required to restart the Ceph
services on all nodes in the current cluster to pick up the change?
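
For illustration, the change I have in mind is simply listing both subnets on
the cluster_network line (the subnets below are made up; my understanding is
that the option accepts a comma-separated list):

  [global]
  # existing cluster subnet plus the newly added one
  cluster_network = 10.10.1.0/24, 10.10.2.0/24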

Thanks,
Shiva
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] rbd/rados packages in python virtual environment

2015-10-02 Thread shiva rkreddy
Thanks Ken.
Does that mean we are going to have a pip package anytime soon? Do Red Hat
or Ubuntu ship anything currently?


On Fri, Oct 2, 2015 at 11:33 AM, Ken Dreyer  wrote:

> On Thu, Oct 1, 2015 at 9:32 PM, shiva rkreddy 
> wrote:
> > Hi,
> > Any one has tried installing python-rbd and python-rados packages in
> python
> > virtual environment?
> > We are planning to have openstack services(cinder/glance) run in the
> virtual
> > environment. There are no pip install packages available for python-rbd
> and
> > python-rados, atleast on pypi.python.org.
> >
> > Alternate is to copy the files manually or make own package.
>
> This occasionally comes up in the context of openstack.
>
> There is a Redmine ticket for it, at http://tracker.ceph.com/issues/5900
>
> - Ken
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] rbd/rados packages in python virtual environment

2015-10-01 Thread shiva rkreddy
Hi,
Has anyone tried installing the python-rbd and python-rados packages in a
Python virtual environment?
We are planning to have the OpenStack services (cinder/glance) run in the
virtual environment. There are no pip-installable packages available for
python-rbd and python-rados, at least on pypi.python.org.

The alternative is to copy the files manually or build our own package; a
rough sketch of what I have in mind is below.
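
A minimal sketch of the manual route, assuming the distro bindings are
installed system-wide and the virtualenv lives at /opt/venv (the paths and
Python version are assumptions):

  # the Hammer-era bindings are ctypes wrappers, so librados2/librbd1 must
  # also be installed on the host
  yum install python-rados python-rbd     # Debian/Ubuntu: apt-get install python-rados python-rbd
  ln -s /usr/lib/python2.6/site-packages/rados.py \
        /usr/lib/python2.6/site-packages/rbd.py \
        /opt/venv/lib/python2.6/site-packages/

  # or simply build the virtualenv so it can see the system site-packages
  virtualenv --system-site-packages /opt/venv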

Thanks,
Shiva
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] download.ceph.com down

2015-09-26 Thread shiva rkreddy
Hi,
Did anyone notice that download.ceph.com is down today?
I've been trying to get the latest Hammer packages (v0.94.3) for Ubuntu. Are
there any alternatives?

Here is the error shown in the browser:

  This webpage is not available
  ERR_CONNECTION_REFUSED

Appreciate your help!

Thanks,
Siva
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] v0.80.8 and librbd performance

2015-04-14 Thread shiva rkreddy
Retried the test by setting rbd_concurrent_management_ops (I also tried the
rbd-concurrent-management-ops spelling) to 20 (default 10?) and didn't see
any difference in the delete time.

Steps:
1. Create 20 volumes of 500GB each.
2. Run: rbd -n clientkey -p cindervols rm $volumeId &
3. Run rbd ls in a loop with a 1 second sleep and capture the output:
   rbd -n clientkey -p cindervols ls

It took the same amount of time to remove all entries in the pool as it did
with the ops setting at its default.
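
For reference, this is roughly how I set it in ceph.conf on the cinder node
(a sketch; as far as I know underscores, dashes and spaces in option names
are all treated the same, and the option is read client-side by librbd):

  [client]
  # number of objects librbd removes in parallel during an rbd rm (default 10)
  rbd concurrent management ops = 20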

Thanks

On Tue, Apr 14, 2015 at 10:01 PM, shiva rkreddy 
wrote:

> The clusters are in test environment, so its a new deployment of 0.80.9.
> OS on the cluster nodes is reinstalled as well, so there shouldn't be any
> fs aging unless the disks are slowing down.
>
> The perf measurement is done initiating multiple cinder create/delete
> commands and tracking the volume to be in available or completely gone from
> "cinder list" output.
> Even running  "rbd rm " command from cinder node results in similar
> behaviour.
>
> I'll try with  increasing  rbd_concurrent_management in ceph.conf.
>  Is the param name rbd_concurrent_management or rbd-concurrent-management ?
>
>
> On Tue, Apr 14, 2015 at 12:36 PM, Josh Durgin  wrote:
>
>> I don't see any commits that would be likely to affect that between
>> 0.80.7 and 0.80.9.
>>
>> Is this after upgrading an existing cluster?
>> Could this be due to fs aging beneath your osds?
>>
>> How are you measuring create/delete performance?
>>
>> You can try increasing rbd concurrent management ops in ceph.conf on the
>> cinder node. This affects delete speed, since rbd tries to delete each
>> object in a volume.
>>
>> Josh
>>
>> *From:* shiva rkreddy 
>> *Sent:* Apr 14, 2015 5:53 AM
>> *To:* Josh Durgin
>> *Cc:* Ken Dreyer; Sage Weil; Ceph Development; ceph-us...@ceph.com
>> *Subject:* Re: v0.80.8 and librbd performance
>>
>> Hi Josh,
>>
>> We are using firefly 0.80.9 and see both cinder create/delete numbers
>> slow down compared 0.80.7.
>> I don't see any specific tuning requirements and our cluster is run
>> pretty much on default configuration.
>> Do you recommend any tuning or can you please suggest some log signatures
>> we need to be looking at?
>>
>> Thanks
>> shiva
>>
>> On Wed, Mar 4, 2015 at 1:53 PM, Josh Durgin  wrote:
>>
>>> On 03/03/2015 03:28 PM, Ken Dreyer wrote:
>>>
>>>> On 03/03/2015 04:19 PM, Sage Weil wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> This is just a heads up that we've identified a performance regression
>>>>> in
>>>>> v0.80.8 from previous firefly releases.  A v0.80.9 is working it's way
>>>>> through QA and should be out in a few days.  If you haven't upgraded
>>>>> yet
>>>>> you may want to wait.
>>>>>
>>>>> Thanks!
>>>>> sage
>>>>>
>>>>
>>>> Hi Sage,
>>>>
>>>> I've seen a couple Redmine tickets on this (eg
>>>> http://tracker.ceph.com/issues/9854 ,
>>>> http://tracker.ceph.com/issues/10956). It's not totally clear to me
>>>> which of the 70+ unreleased commits on the firefly branch fix this
>>>> librbd issue.  Is it only the three commits in
>>>> https://github.com/ceph/ceph/pull/3410 , or are there more?
>>>>
>>>
>>> Those are the only ones needed to fix the librbd performance
>>> regression, yes.
>>>
>>> Josh
>>>
>>> --
>>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>>> the body of a message to majord...@vger.kernel.org
>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>>
>>
>>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] v0.80.8 and librbd performance

2015-04-14 Thread shiva rkreddy
The clusters are in a test environment, so it's a new deployment of 0.80.9.
The OS on the cluster nodes was reinstalled as well, so there shouldn't be
any fs aging unless the disks are slowing down.

The perf measurement is done by initiating multiple cinder create/delete
commands and tracking when the volume becomes available or is completely
gone from the "cinder list" output.

Even running the "rbd rm" command from the cinder node results in similar
behaviour.

I'll try increasing rbd_concurrent_management_ops in ceph.conf.
Is the param name spelled rbd_concurrent_management_ops or
rbd-concurrent-management-ops?


On Tue, Apr 14, 2015 at 12:36 PM, Josh Durgin  wrote:

> I don't see any commits that would be likely to affect that between 0.80.7
> and 0.80.9.
>
> Is this after upgrading an existing cluster?
> Could this be due to fs aging beneath your osds?
>
> How are you measuring create/delete performance?
>
> You can try increasing rbd concurrent management ops in ceph.conf on the
> cinder node. This affects delete speed, since rbd tries to delete each
> object in a volume.
>
> Josh
>
> *From:* shiva rkreddy 
> *Sent:* Apr 14, 2015 5:53 AM
> *To:* Josh Durgin
> *Cc:* Ken Dreyer; Sage Weil; Ceph Development; ceph-us...@ceph.com
> *Subject:* Re: v0.80.8 and librbd performance
>
> Hi Josh,
>
> We are using firefly 0.80.9 and see both cinder create/delete numbers slow
> down compared 0.80.7.
> I don't see any specific tuning requirements and our cluster is run pretty
> much on default configuration.
> Do you recommend any tuning or can you please suggest some log signatures
> we need to be looking at?
>
> Thanks
> shiva
>
> On Wed, Mar 4, 2015 at 1:53 PM, Josh Durgin  wrote:
>
>> On 03/03/2015 03:28 PM, Ken Dreyer wrote:
>>
>>> On 03/03/2015 04:19 PM, Sage Weil wrote:
>>>
>>>> Hi,
>>>>
>>>> This is just a heads up that we've identified a performance regression
>>>> in
>>>> v0.80.8 from previous firefly releases.  A v0.80.9 is working it's way
>>>> through QA and should be out in a few days.  If you haven't upgraded yet
>>>> you may want to wait.
>>>>
>>>> Thanks!
>>>> sage
>>>>
>>>
>>> Hi Sage,
>>>
>>> I've seen a couple Redmine tickets on this (eg
>>> http://tracker.ceph.com/issues/9854 ,
>>> http://tracker.ceph.com/issues/10956). It's not totally clear to me
>>> which of the 70+ unreleased commits on the firefly branch fix this
>>> librbd issue.  Is it only the three commits in
>>> https://github.com/ceph/ceph/pull/3410 , or are there more?
>>>
>>
>> Those are the only ones needed to fix the librbd performance
>> regression, yes.
>>
>> Josh
>>
>> --
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>> the body of a message to majord...@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] v0.80.8 and librbd performance

2015-04-14 Thread shiva rkreddy
Hi Josh,

We are using firefly 0.80.9 and see both cinder create and delete numbers
slow down compared to 0.80.7.
I don't see any specific tuning requirements, and our cluster runs pretty
much on the default configuration.
Do you recommend any tuning, or can you suggest some log signatures
we should be looking at?

Thanks
shiva

On Wed, Mar 4, 2015 at 1:53 PM, Josh Durgin  wrote:

> On 03/03/2015 03:28 PM, Ken Dreyer wrote:
>
>> On 03/03/2015 04:19 PM, Sage Weil wrote:
>>
>>> Hi,
>>>
>>> This is just a heads up that we've identified a performance regression in
>>> v0.80.8 from previous firefly releases.  A v0.80.9 is working it's way
>>> through QA and should be out in a few days.  If you haven't upgraded yet
>>> you may want to wait.
>>>
>>> Thanks!
>>> sage
>>>
>>
>> Hi Sage,
>>
>> I've seen a couple Redmine tickets on this (eg
>> http://tracker.ceph.com/issues/9854 ,
>> http://tracker.ceph.com/issues/10956). It's not totally clear to me
>> which of the 70+ unreleased commits on the firefly branch fix this
>> librbd issue.  Is it only the three commits in
>> https://github.com/ceph/ceph/pull/3410 , or are there more?
>>
>
> Those are the only ones needed to fix the librbd performance
> regression, yes.
>
> Josh
>
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majord...@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] OSD auto-mount after server reboot

2015-04-05 Thread shiva rkreddy
We currently have two OSDs configured on this system, running RHEL 6.5,
sharing an SSD drive for their journal devices.

Both udevadm trigger --sysname-match=sdb and udevadm trigger
--sysname-match=/dev/sdb return without any output. The same thing happens
on ceph 0.80.7, where the mounts and services do get started automatically.

Output paste from ceph 0.80.9: http://pastebin.com/1Yqntadi
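
As a stop-gap I can bring the OSDs up by hand after a reboot with something
along these lines (a sketch; /dev/sdb1 and /dev/sdc1 stand in for our data
partitions):

  # mount and start one OSD from its data partition
  ceph-disk activate /dev/sdb1
  ceph-disk activate /dev/sdc1
  # or, if this ceph-disk version has the subcommand, activate everything it finds
  ceph-disk activate-all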


On Sun, Apr 5, 2015 at 11:22 AM, Loic Dachary  wrote:

>
>
> On 04/04/2015 22:09, shiva rkreddy wrote:
> > HI,
> > I'm currently testing Firefly 0.80.9 and noticed that OSD are not
> auto-mounted after server reboot.
> > It used to mount auto with Firefly 0.80.7.  OS is RHEL 6.5.
> >
> > There was another thread earlier on this topic with v0.80.8, suggestion
> was to add mount points to /etc/fstab.
> >
> > Question is whether the 0.80.7 behaviour could return or its needs to be
> done via /etc/fstab or something else?
>
> It should work without adding lines in /etc/fstab. Could you give more
> details about your setup ? Could you try
>
> udevadm trigger --sysname-match=sdb
>
> if an osd is managing /dev/sdb. Does that mount the osd ? It would also be
> useful to have the output of ls -l /dev/disk/by-partuuid
>
> Cheers
>
> --
> Loïc Dachary, Artisan Logiciel Libre
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] OSD auto-mount after server reboot

2015-04-04 Thread shiva rkreddy
Hi,
I'm currently testing Firefly 0.80.9 and noticed that the OSDs are not
auto-mounted after a server reboot.
They used to mount automatically with Firefly 0.80.7. The OS is RHEL 6.5.

There was an earlier thread on this topic with v0.80.8; the suggestion was
to add the mount points to /etc/fstab.

The question is whether the 0.80.7 behaviour can be restored, or whether it
needs to be done via /etc/fstab or something else?
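
If /etc/fstab turns out to be the answer, I assume the entries would look
roughly like this (the UUID and mount point below are made up for
illustration; we'd use UUID= or by-partuuid paths rather than sdX names):

  # /etc/fstab -- one line per OSD data partition
  UUID=0b3f9a2e-1111-2222-3333-444455556666  /var/lib/ceph/osd/ceph-0  xfs  defaults,noatime  0 0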

Thanks,
Shiva
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] osd troubleshooting

2014-11-04 Thread shiva rkreddy
Hi,
I'm trying to run OSD troubleshooting commands.

Use case: stopping an OSD without re-balancing.

# ceph osd set noout   // this command works
But neither of the following works:
# stop ceph-osd id=1
(Error message: stop: Unknown job: ceph-osd)
or
# ceph osd stop osd.1
(Error message: no valid command found; 10 closest matches: ...)

Environment:
ceph: 0.80.7
OS: RHEL6.5
upstart-0.6.5-13.el6_5.3.x86_64
ceph-0.80.7-0.el6.x86_64
ceph-common-0.80.7-0.el6.x86_64
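
For the record, the sequence I'm trying to reproduce is the usual maintenance
one; on RHEL 6 with the sysvinit script I would expect it to look like this
(the "stop ceph-osd id=1" form appears to be the Upstart syntax from the
Ubuntu docs, which is probably why it fails here):

  # ceph osd set noout          // keep the cluster from re-balancing
  # service ceph stop osd.1     // sysvinit: stop just osd.1 on this node
  ... do the maintenance ...
  # service ceph start osd.1
  # ceph osd unset noout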

Thanks,
shiva
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] iptables

2014-09-25 Thread shiva rkreddy
Hello,
On my Ceph cluster's OSD nodes there is a rule to REJECT all traffic.
As per the documentation, I added a rule to allow traffic on the full
range of OSD ports, but the cluster will not come into a clean state.
Can you please share your experience with the iptables configuration?

Following are the INPUT rules:

5    ACCEPT   tcp  --  10.108.240.192/26   0.0.0.0/0   multiport dports 6800:7100
6    REJECT   all  --  0.0.0.0/0           0.0.0.0/0   reject-with icmp-host-prohibited
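
For comparison, the rules I would expect to need are an ACCEPT inserted above
the REJECT for both the monitor and OSD ports (a sketch; the source subnet is
ours, and the OSD port range depends on how many daemons run per host):

  # monitor nodes: allow the mon port from the cluster/client subnet
  iptables -I INPUT 5 -p tcp -s 10.108.240.192/26 --dport 6789 -j ACCEPT
  # OSD nodes: allow the OSD port range, inserted before the REJECT rule
  iptables -I INPUT 5 -p tcp -s 10.108.240.192/26 -m multiport --dports 6800:7100 -j ACCEPT
  service iptables save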

Thanks,
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph health related message

2014-09-18 Thread shiva rkreddy
Hi,

I've set up a cluster with 3 monitors and 2 OSD nodes with 2 disks each.
The cluster is in the active+clean state, but "ceph -s" keeps throwing the
following message every other time it is run.

 # ceph -s
2014-09-19 04:13:07.116662 7fc88c3f9700  0 -- :/1011833 >> 192.168.240.200:6789/0 pipe(0x7fc890021200 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7fc890021470).fault

If ceph -s is run from the same IP that is listed above, ceph -s doesn't
throw the message, not even once.
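
For anyone looking at this, two quick checks that should narrow it down
(assuming nc is installed; the address is taken from the log line above):

  # can this node reach the listed monitor at all?
  nc -z -w 3 192.168.240.200 6789 && echo reachable || echo unreachable
  # is that monitor actually part of the quorum?
  ceph quorum_status --format json-pretty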

Appreciate your suggestions.

Thanks
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph general configuration questions

2014-09-16 Thread shiva rkreddy
Thanks Dan. Is there a preferred filesystem for the leveldb files? I
understand that the filesystem should be of the same type on both the /var
and SSD partitions.
Should it be ext4, xfs, or something else, or does it not matter?
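
Just to make sure I understand the suggestion, I assume the setup looks
roughly like this (the device name is made up, and the filesystem choice is
exactly what I'm asking about):

  mkfs.xfs /dev/sdb1                 # or mkfs.ext4 /dev/sdb1
  mount /dev/sdb1 /var/lib/ceph      # the mon store lives under /var/lib/ceph/mon/<name>
  echo '/dev/sdb1 /var/lib/ceph xfs defaults,noatime 0 0' >> /etc/fstab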

On Tue, Sep 16, 2014 at 10:15 AM, Dan Van Der Ster <
daniel.vanders...@cern.ch> wrote:

>  Hi,
>
>  On 16 Sep 2014, at 16:46, shiva rkreddy  wrote:
>
> 2. Has any one used SSD devices for Monitors. If so, can you please share
> the details ? Any specific changes to the configuration files?
>
>
>  We use SSDs on our monitors — a spinning disk was not fast enough for
> leveldb and we observed monitor elections during heavy backfilling.
> There’s no special configuration required, just mount the SSD on
> /var/lib/ceph
>
>  Cheers, Dan
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph general configuration questions

2014-09-16 Thread shiva rkreddy
Hi.
I'm new to Ceph and have been going through the setup phase. I was able to
set up a couple of proof-of-concept clusters. I have some general questions
that I thought the community would be able to clarify.

1. I've been using ceph-deploy for deployment. In a 3-monitor and 3-OSD
configuration, one of the VMs is used as the admin node. I see some
configurations with a separate admin node. The question is, is there any
purpose for the admin node other than cluster administration?
2. Has anyone used SSD devices for monitors? If so, can you please share
the details? Any specific changes to the configuration files?

Appreciate your response.

Best,
Shiva,
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com