Re: [ceph-users] Crushmap ruleset for rack aware PG placement

2014-09-15 Thread Amit Vijairania
Thanks Sage!  We will test this and share our observations..

Regards,
Amit

Amit Vijairania  |  415.610.9908
--*--


On Mon, Sep 15, 2014 at 8:28 AM, Sage Weil  wrote:
> Hi Amit,
>
> On Mon, 15 Sep 2014, Amit Vijairania wrote:
>> Hello!
>>
>> In a two (2) rack Ceph cluster, with 15 hosts per rack (10 OSDs per
>> host / 150 OSDs per rack), is it possible to create a ruleset for a
>> pool such that the primary and secondary replicas of a PG are placed
>> in one rack and the tertiary replica is placed in the other rack?
>>
>> root standard {
>>   id -1 # do not change unnecessarily
>>   # weight 734.400
>>   alg straw
>>   hash 0 # rjenkins1
>>   item rack1 weight 367.200
>>   item rack2 weight 367.200
>> }
>>
>> Given there are only two (2) rack buckets but three (3) replicas, is it
>> even possible?
>
> Yes:
>
> rule myrule {
> ruleset 1
> type replicated
> min_size 1
> max_size 10
> step take default
> step choose firstn 2 type rack
> step chooseleaf firstn 2 type host
> step emit
> }
>
> That will give you 4 OSDs, spread across 2 hosts in each rack.  The pool
> size (replication factor) is 3, so RADOS will just use the first three (2
> hosts in the first rack, 1 host in the second rack).
>
> sage
>
>
>
>
>> I think the following Giant blueprint is trying to address the scenario
>> I described above. Is this blueprint targeted for the Giant release?
>> http://wiki.ceph.com/Planning/Blueprints/Giant/crush_extension_for_more_flexible_object_placement
>>
>>
>> Regards,
>> Amit Vijairania  |  Cisco Systems, Inc.
>> --*--
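A rough sketch of how we plan to plug that rule in and point a pool at it
(hypothetical file and pool names; since our root bucket is named "standard"
rather than "default", the "step take" line will reference "standard" in our
map):

$ ceph osd getcrushmap -o crushmap.bin        # export the current CRUSH map
$ crushtool -d crushmap.bin -o crushmap.txt   # decompile, then paste in the rule above
$ crushtool -c crushmap.txt -o crushmap.new   # recompile after editing
$ ceph osd setcrushmap -i crushmap.new        # inject the updated map
$ ceph osd pool set rbd crush_ruleset 1       # point an existing pool at ruleset 1
$ ceph osd pool set rbd size 3                # 3 replicas: 2 in one rack, 1 in the other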


[ceph-users] Crushmap ruleset for rack aware PG placement

2014-09-15 Thread Amit Vijairania
Hello!

In a two (2) rack Ceph cluster, with 15 hosts per rack (10 OSDs per
host / 150 OSDs per rack), is it possible to create a ruleset for a
pool such that the primary and secondary replicas of a PG are placed
in one rack and the tertiary replica is placed in the other rack?

root standard {
  id -1 # do not change unnecessarily
  # weight 734.400
  alg straw
  hash 0 # rjenkins1
  item rack1 weight 367.200
  item rack2 weight 367.200
}

Given there are only two (2) rack buckets but three (3) replicas, is it
even possible?

I think the following Giant blueprint is trying to address the scenario I
described above. Is this blueprint targeted for the Giant release?
http://wiki.ceph.com/Planning/Blueprints/Giant/crush_extension_for_more_flexible_object_placement


Regards,
Amit Vijairania  |  Cisco Systems, Inc.
--*--


[ceph-users] Ceph RBD kernel module support for Cache Tiering

2014-09-15 Thread Amit Vijairania
Hello!

We are using the Ceph RBD kernel module on RHEL 7.0 with Ceph "Firefly" 0.80.5.

Does the RBD kernel module support Cache Tiering in Firefly?
If not, when will the RBD kernel module support Cache Tiering (and in
which Linux kernel version and Ceph version)?
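For context, the cache tier itself is configured on the cluster side; a
minimal sketch, assuming a hypothetical SSD-backed pool named "rbd-cache"
layered over our existing "rbd" pool, would look like:

$ ceph osd tier add rbd rbd-cache                 # attach the cache pool to the base pool
$ ceph osd tier cache-mode rbd-cache writeback    # serve reads and writes from the cache
$ ceph osd tier set-overlay rbd rbd-cache         # redirect client traffic to the cache pool
$ ceph osd pool set rbd-cache hit_set_type bloom  # hit-set tracking for the tiering agent

The open question above is whether the krbd client in a given kernel can
talk to a pool tiered this way.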

Regards,
Amit Vijairania  |  Cisco Systems, Inc.
--*--


Re: [ceph-users] Does CEPH rely on any multicasting?

2014-05-15 Thread Amit Vijairania
Thanks Greg!


Amit Vijairania  |  415.610.9908
--*--


On Thu, May 15, 2014 at 9:55 AM, Gregory Farnum  wrote:

> On Thu, May 15, 2014 at 9:52 AM, Amit Vijairania
>  wrote:
> > Hello!
> >
> > Does CEPH rely on any multicasting?  Appreciate the feedback..
>
> Nope! All networking is point-to-point.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>


[ceph-users] Does CEPH rely on any multicasting?

2014-05-15 Thread Amit Vijairania
Hello!

Does CEPH rely on any multicasting?  Appreciate the feedback..

Thanks!
Amit


Re: [ceph-users] [OFF TOPIC] Deep Intellect - Inside the mind of the octopus

2014-05-11 Thread Amit Vijairania
Everyone involved with Ceph must be curious about cephalopods.  Very
interesting article:
http://www.orionmagazine.org/index.php/articles/article/6474/

Amit Vijairania
--*--


[ceph-users] Fscache and Ceph

2014-02-25 Thread Amit Vijairania
After reading the following post on fscache integration with CephFS, I would
like to know which version of the Linux kernel has all of the fscache patches
available:

http://ceph.com/community/first-impressions-through-fscache-and-ceph/

Do we know when these patches will be available in a future release of Ubuntu
or RHEL (CentOS)?
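For anyone experimenting, a minimal sketch of enabling the cache on a client,
assuming a kernel built with CONFIG_CEPH_FSCACHE, cachefilesd installed, and
hypothetical monitor address and paths:

$ sudo systemctl start cachefilesd            # local cache daemon backing fscache
$ sudo mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret,fsc   # "fsc" enables fscache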

Thanks!
Amit

Amit Vijairania  |  415.610.9908
--*--


[ceph-users] OpenStack Grizzly Authentication (Keystone PKI) with RADOS Gateway

2013-09-27 Thread Amit Vijairania
Hello!

Does RADOS Gateway support or integrate with OpenStack (Grizzly)
authentication (Keystone PKI)?

Can RADOS Gateway use PKI tokens to verify user tokens without explicit
calls to Keystone?
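For reference, a minimal sketch of the Keystone-related radosgw settings in
ceph.conf (hypothetical values; as I understand it, the NSS database holding
Keystone's signing certificates is what allows PKI tokens to be verified
without calling back to Keystone):

[client.radosgw.gateway]
  rgw keystone url = http://keystone.example.com:35357
  rgw keystone admin token = ADMIN_TOKEN
  rgw keystone accepted roles = Member, admin
  rgw keystone token cache size = 500
  rgw s3 auth use keystone = true
  nss db path = /var/ceph/nss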

Thanks!
Amit

Amit Vijairania  |  978.319.3684
--*--


Re: [ceph-users] NFS vs. CephFS for /var/lib/nova/instances

2013-08-23 Thread Amit Vijairania
Hi Greg,

We are using RBD for most of our VM images and volumes. But if you spin
up an instance from a Glance image without specifying a boot volume, the
image is cached (/var/lib/nova/instances/_base) on the Nova node where
the instance is scheduled. You can use a shared file system for that
image cache and avoid caching (or copying over the network) the image on
every Nova node.

The /var/lib/nova/instances/ directory is also used to store the
libvirt.xml for each instance, and during Live Migration or Nova Evacuate
it is helpful if this directory is shared between Nova nodes.

Thanks!
Amit

Reference:
https://blueprints.launchpad.net/nova/+spec/bring-rbd-support-libvirt-images-type
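For completeness, a rough sketch of the nova.conf settings that blueprint adds
for RBD-backed ephemeral disks (option names as I recall them from the
Havana-era patches, with a hypothetical pool name -- treat these as
assumptions):

# nova.conf on each compute node
libvirt_images_type = rbd
libvirt_images_rbd_pool = vms
libvirt_images_rbd_ceph_conf = /etc/ceph/ceph.conf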



Amit Vijairania  |  978.319.3684
--*--


On Thu, Aug 22, 2013 at 9:05 AM, Gregory Farnum  wrote:

> On Thursday, August 22, 2013, Amit Vijairania wrote:
>
>> Hello!
>>
>> We, in our environment, need a shared file system for
>> /var/lib/nova/instances and Glance image cache (_base)..
>>
>> Is anyone using CephFS for this purpose?
>> When folks say CephFS is not production ready, is the primary concern
>> stability/data-integrity or performance?
>> Is NFS (with NFS-Ganesha) a better solution?  Is anyone using it today?
>>
>
> Our primary concern about CephFS is its stability; there are a
> couple of important known bugs and it has yet to see the strong QA that
> would qualify it for general production use. Storing VM images is one of
> the use cases it might be okay for, but:
> Why not use RBD? It sounds like that's what you want, and RBD is
> purpose-built for managing VM images and volumes!
> -Greg
>
>
>
>>
>> Please let us know..
>>
>> Thanks!
>> Amit
>>
>> Amit Vijairania  |  978.319.3684
>> --*--
>>
>
>
> --
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>


[ceph-users] NFS vs. CephFS for /var/lib/nova/instances

2013-08-22 Thread Amit Vijairania
Hello!

We, in our environment, need a shared file system for
/var/lib/nova/instances and Glance image cache (_base)..

Is anyone using CephFS for this purpose?
When folks say CephFS is not production ready, is the primary concern
stability/data-integrity or performance?
Is NFS (with NFS-Ganesha) a better solution?  Is anyone using it today?

Please let us know..

Thanks!
Amit

Amit Vijairania  |  978.319.3684
--*--


[ceph-users] Ceph auth get-or-create

2013-06-17 Thread Amit Vijairania
Hello!

How do you add access to a new pool for an existing Ceph client?

e.g.
First, create a new user -- openstack-volumes:

ceph auth get-or-create client.openstack-volumes mon 'allow r' osd 'allow
class-read object_prefix rbd_children, allow rwx pool=openstack-volumes,
allow rx pool=openstack-images'

Then add another pool -- openstack-volumes-2 -- for this user to access:

ceph auth get-or-create client.openstack-volumes mon 'allow r' osd 'allow
class-read object_prefix rbd_children, allow rwx pool=openstack-volumes,
allow rwx pool=openstack-volumes-2, allow rx pool=openstack-images'
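A note on that second step: running get-or-create again may not update the
caps on a key that already exists; a minimal sketch of the way I believe
existing capabilities are usually modified (same pool names as above):

ceph auth caps client.openstack-volumes mon 'allow r' osd 'allow
class-read object_prefix rbd_children, allow rwx pool=openstack-volumes,
allow rwx pool=openstack-volumes-2, allow rx pool=openstack-images'

ceph auth get client.openstack-volumes    # verify the updated caps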

Thanks!

Amit Vijairania  |  978.319.3684
--*--


[ceph-users] Live Migration: KVM-Libvirt & Shared-storage

2013-05-29 Thread Amit Vijairania
We are currently testing Ceph with the OpenStack Grizzly release and looking
for some insight on Live Migration [1]. Based on the documentation, there are
two shared-storage options for the Nova instances directory
(/var/lib/nova/instances): NFS and the OpenStack Gluster Connector.

Do you know if anyone is using or has tested CephFS for the Nova instances
directory (console.log, libvirt.xml, ...)?

[1]
http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-migrations.html
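If it helps anyone testing the same thing, a minimal sketch of mounting CephFS
at the instances path on each compute node (hypothetical monitor address and
key file):

$ sudo mount -t ceph 192.168.0.1:6789:/ /var/lib/nova/instances \
      -o name=admin,secretfile=/etc/ceph/admin.secret
$ sudo chown nova:nova /var/lib/nova/instances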


Thanks!
Amit


[ceph-users] pg_num (pgp_num) for default pools

2013-04-14 Thread Amit Vijairania
Hello!

How is pg_num (pgp_num) for the default pools (data, metadata and rbd)
calculated?

According to following:
http://ceph.com/docs/master/rados/operations/placement-groups

If my cluster has 30 OSDs and 3 replicas per object (osd pool default size =
3), I should be creating new pools with pg_num = pgp_num = 1000.  But the
default pools on my cluster are:

$ ceph osd pool get data pg_num
pg_num: 1984
$ ceph osd pool get metadata pg_num
pg_num: 1984
$ ceph osd pool get rbd pg_num
pg_num: 1984
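For comparison, the guideline from that page works out roughly like this for
my cluster (a sketch of the arithmetic, assuming the commonly cited ~100 PGs
per OSD, rounded to a power of two):

(30 OSDs x 100) / 3 replicas = 1000  ->  round up to pg_num = pgp_num = 1024

$ ceph osd pool create testpool 1024 1024    # hypothetical new pool: pg_num and pgp_num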

Is there a different formula used for default pools?

Thanks!
Amit