Re: [ceph-users] Designing ceph cluster

2016-09-15 Thread Gaurav Goyal
Dear Ceph users,

Any suggestions on this, please?


Regards
Gaurav Goyal

On Wed, Sep 14, 2016 at 2:50 PM, Gaurav Goyal 
wrote:

> Dear Ceph Users,
>
> I need your help to sort out the following issue with my Cinder volume.
>
> I have created Ceph as the backend for Cinder. Since I was using SAN storage
> for Ceph and wanted to get rid of it, I completely uninstalled Ceph from
> my OpenStack environment.
>
> Right now we are in a situation where we have ordered local disks to create
> Ceph storage on them, but prior to configuring Ceph we want to create a
> Cinder volume using LVM on one of the local disks.
>
> I could create the Cinder volume but am unable to attach it to an
> instance.
>
> *Volume Overview*
>
> Information
> Name: test123
> ID: e13d0ffc-3ed4-4a22-b270-987e81b1ca8f
> Status: Available
>
> Specs
> Size: 1 GB
> Created: Sept. 13, 2016, 7:12 p.m.
>
> Attachments
> Attached To: *Not attached*
>
> [root@OSKVM1 ~]# fdisk -l
>
> Disk /dev/sda: 599.6 GB, 599550590976 bytes, 1170997248 sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 512 bytes / 512 bytes
>
> Disk label type: dos
>
> Disk identifier: 0x0002a631
>
>Device Boot  Start End  Blocks   Id  System
>
> /dev/sda1   *2048 1026047  512000   83  Linux
>
> /dev/sda2 1026048  1170997247   584985600   8e  Linux LVM
>
> Disk /dev/mapper/centos-root: 53.7 GB, 53687091200 bytes, 104857600 sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 512 bytes / 512 bytes
>
> Disk /dev/mapper/centos-swap: 4294 MB, 4294967296 bytes, 8388608 sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 512 bytes / 512 byte
>
> Disk /dev/mapper/centos-home: 541.0 GB, 540977135616 bytes, 1056595968
> sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 512 bytes / 512 bytes
>
> Disk /dev/sdb: 1099.5 GB, 1099526307840 bytes, 2147512320 sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 512 bytes / 512 byte
>
> *Disk
> /dev/mapper/cinder--volumes-volume--e13d0ffc--3ed4--4a22--b270--987e81b1ca8f:
> 1073 MB, 1073741824 bytes, 2097152 sectors*
>
> *Units = sectors of 1 * 512 = 512 bytes*
>
> *Sector size (logical/physical): 512 bytes / 512 bytes*
>
> *I/O size (minimum/optimal): 512 bytes / 512 bytes*
>
>
> I am getting the following error while attaching a new volume to my new
> instance. Please suggest a way forward.
>
> 2016-09-13 16:48:18.335 55367 INFO nova.compute.manager
> [req-d19d0eb4-7ecc-4baa-8733-9c0f07f8890b dff16cdb3bea43a199ec4b29d2ba3309
> 9ef033cefb684be68105e30ef2b3b651 - - -] [instance:
> 8115ad54-dd36-47ba-bbd1-5c1df9989bf7] Attaching volume
> d90e4835-58f5-45a8-869e-fc3f30f0eaf3 to /dev/vdb
>
> 2016-09-13 16:48:20.548 55367 WARNING os_brick.initiator.connector
> [req-d19d0eb4-7ecc-4baa-8733-9c0f07f8890b dff16cdb3bea43a199ec4b29d2ba3309
> 9ef033cefb684be68105e30ef2b3b651 - - -] ISCSI volume not yet found at:
> [u'/dev/disk/by-path/ip-10.24.0.4:3260-iscsi-iqn.2010-10.
> org.openstack:volume-d90e4835-58f5-45a8-869e-fc3f30f0eaf3-lun-0']. Will
> rescan & retry.  Try number: 0.
>
> 2016-09-13 16:48:21.656 55367 WARNING os_brick.initiator.connector
> [req-d19d0eb4-7ecc-4baa-8733-9c0f07f8890b dff16cdb3bea43a199ec4b29d2ba3309
> 9ef033cefb684be68105e30ef2b3b651 - - -] ISCSI volume not yet found at:
> [u'/dev/disk/by-path/ip-10.24.0.4:3260-iscsi-iqn.2010-10.
> org.openstack:volume-d90e4835-58f5-45a8-869e-fc3f30f0eaf3-lun-0']. Will
> rescan & retry.  Try number: 1.
>
> 2016-09-13 16:48:25.772 55367 WARNING os_brick.initiator.connector
> [req-d19d0eb4-7ecc-4baa-8733-9c0f07f8890b dff16cdb3bea43a199ec4b29d2ba3309
> 9ef033cefb684be68105e30ef2b3b651 - - -] ISCSI volume not yet found at:
> [u'/dev/disk/by-path/ip-10.24.0.4:3260-iscsi-iqn.2010-10.
> org.openstack:volume-d90e4835-58f5-45a8-869e-fc3f30f0eaf3-lun-0']. Will
> rescan & retry.  Try number: 2.
>
> 2016-09-13 16:48:34.875 55367 WARNING os_brick.initiator.connector
> [req-d19d0eb4-7ecc-4baa-8733-9c0f07f8890b dff16cdb3bea43a199ec4b29d2ba3309
> 9ef033cefb684be68105e30ef2b3b651 - - -] ISCSI volume not yet found at:
> [u&
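The repeated "ISCSI volume not yet found" warnings above typically mean the
LVM volume exists but was never exported as an iSCSI target from the
cinder-volume host. A few checks that may help, assuming the stock LVM/iSCSI
backend (the volume group, IP and service names below are taken from the
output above or are examples):

# On the cinder-volume host
sudo lvs cinder-volumes                          # the LV backing the volume should be listed
sudo tgtadm --lld iscsi --mode target --op show  # or 'targetcli ls' if using LIO
grep -E 'volume_driver|iscsi_helper|iscsi_ip_address' /etc/cinder/cinder.conf

# From the compute node
sudo iscsiadm -m discovery -t sendtargets -p 10.24.0.4:3260

If no target shows up, restarting openstack-cinder-volume and the target
service (tgtd or target.service) before retrying the attach is a reasonable
next step.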

Re: [ceph-users] Designing ceph cluster

2016-09-14 Thread Gaurav Goyal
10.org.openstack:volume-e13d0ffc-3ed4-4a22-b270-987e81b1ca8f,t,0x1
/dev/sde


On Thu, Aug 18, 2016 at 12:39 PM, Vasu Kulkarni  wrote:

> Also most of the terminology looks like from Openstack and SAN, Here
> are the right terminology that should be used for Ceph
> http://docs.ceph.com/docs/master/glossary/
>
>
> On Thu, Aug 18, 2016 at 8:57 AM, Gaurav Goyal 
> wrote:
> > Hello Mart,
> >
> > My Apologies for that!
> >
> > We are couple of office colleagues using the common gmail account. That
> has
> > caused the nuisance.
> >
> > Thanks for your response!
> >
> > On Thu, Aug 18, 2016 at 6:00 AM, Mart van Santen 
> wrote:
> >>
> >> Dear Guarav,
> >>
> >> Please respect everyones time & timezone differences. Flooding the
> >> mail-list won't help
> >>
> >> see below,
> >>
> >>
> >>
> >> On 08/18/2016 01:39 AM, Gaurav Goyal wrote:
> >>
> >> Dear Ceph Users,
> >>
> >> Awaiting some suggestion please!
> >>
> >>
> >>
> >> On Wed, Aug 17, 2016 at 11:15 AM, Gaurav Goyal <
> er.gauravgo...@gmail.com>
> >> wrote:
> >>>
> >>> Hello Mart,
> >>>
> >>> Thanks a lot for the detailed information!
> >>> Please find my response inline and help me to get more knowledge on it
> >>>
> >>>
> >>> Ceph works best with more hardware. It is not really designed for small
> >>> scale setups. Of course small setups can work for a PoC or testing,
> but I
> >>> would not advise this for production.
> >>>
> >>> [Gaurav] : We need this setup for PoC or testing.
> >>>
> >>> If you want to proceed however, have a good look the manuals or this
> >>> mailinglist archive and do invest some time to understand the logic and
> >>> workings of ceph before working or ordering hardware
> >>>
> >>> At least you want:
> >>> - 3 monitors, preferable on dedicated servers
> >>> [Gaurav] : With my current setup, can i install MON on Host 1 -->
> >>> Controller + Compute1, Host 2 and Host 3
> >>>
> >>> - Per disk you will be running an ceph-osd instance. So a host with 2
> >>> disks will run 2 osd instances. More OSD process is better
> performance, but
> >>> also more memory and cpu usage.
> >>>
> >>> [Gaurav] : Understood, That means having 1T x 4 would be better than
> 2T x
> >>> 2.
> >>
> >> Yes, more disks will do more IO
> >>>
> >>>
> >>> - Per default ceph uses a replication factor of 3 (it is possible to
> set
> >>> this to 2, but is not advised)
> >>> - You can not fill up disks to 100%, also data will not distribute even
> >>> over all disks, expect disks to be filled up (on average) maximum to
> 60-70%.
> >>> You want to add more disks once you reach this limit.
> >>>
> >>> All on all, with a setup of 3 hosts, with 2x2TB disks, this will result
> >>> in a net data availablity of (3x2x2TBx0.6)/3 = 2.4 TB
> >>>
> >>> [Gaurav] : As this is going to be a test lab environment, can we change
> >>> the configuration to have more capacity rather than redundancy? How
> can we
> >>> achieve it?
> >>
> >>
> >> Ceph has an excellent documentation. This is easy to find and search for
> >> "the number of replicas", you want to set both "size" and "min_size" to
> 1 on
> >> this case
> >>
> >>> If speed is required, consider SSD's (for data & journals, or only
> >>> journals).
> >>>
> >>> In you email you mention "compute1/2/3", please note, if you use the
> rbd
> >>> kernel driver, this can interfere with the OSD process and is not
> advised to
> >>> run OSD and Kernel driver on the same hardware. If you still want to do
> >>> that, split it up using VMs (we have a small testing cluster where we
> do mix
> >>> compute and storage, there we have the OSDs running in VMs)
> >>>
> >>> [Gaurav] : within my mentioned environment, How can we split rbd kernel
> >>> driver and OSD process? Should it be like rbd kernel driver on
> controller
> >>> and OSD processes on compute hosts?
> >>>
> >>> Since my host 1 is controller + Compute1, Can you please share the
> steps
> >

Re: [ceph-users] Designing ceph cluster

2016-08-18 Thread Gaurav Goyal
Hello Mart,

My apologies for that!

We are a couple of office colleagues sharing a common Gmail account, which has
caused the nuisance.

Thanks for your response!

On Thu, Aug 18, 2016 at 6:00 AM, Mart van Santen  wrote:

> Dear Guarav,
>
> Please respect everyones time & timezone differences. Flooding the
> mail-list won't help
>
> see below,
>
>
>
> On 08/18/2016 01:39 AM, Gaurav Goyal wrote:
>
> Dear Ceph Users,
>
> Awaiting some suggestion please!
>
>
>
> On Wed, Aug 17, 2016 at 11:15 AM, Gaurav Goyal 
> wrote:
>
>> Hello Mart,
>>
>> Thanks a lot for the detailed information!
>> Please find my response inline and help me to get more knowledge on it
>>
>>
>> Ceph works best with more hardware. It is not really designed for small
>> scale setups. Of course small setups can work for a PoC or testing, but I
>> would not advise this for production.
>>
>> [Gaurav] : We need this setup for PoC or testing.
>>
>> If you want to proceed however, have a good look the manuals or this
>> mailinglist archive and do invest some time to understand the logic and
>> workings of ceph before working or ordering hardware
>>
>> At least you want:
>> - 3 monitors, preferable on dedicated servers
>> [Gaurav] : With my current setup, can i install MON on Host 1 -->
>> Controller + Compute1, Host 2 and Host 3
>>
>> - Per disk you will be running an ceph-osd instance. So a host with 2
>> disks will run 2 osd instances. More OSD process is better performance, but
>> also more memory and cpu usage.
>>
>> [Gaurav] : Understood, That means having 1T x 4 would be better than 2T x
>> 2.
>>
> Yes, more disks will do more IO
>
>
>> - Per default ceph uses a replication factor of 3 (it is possible to set
>> this to 2, but is not advised)
>> - You can not fill up disks to 100%, also data will not distribute even
>> over all disks, expect disks to be filled up (on average) maximum to
>> 60-70%. You want to add more disks once you reach this limit.
>>
>> All on all, with a setup of 3 hosts, with 2x2TB disks, this will result
>> in a net data availablity of (3x2x2TBx0.6)/3 = 2.4 TB
>>
>> [Gaurav] : As this is going to be a test lab environment, can we change
>> the configuration to have more capacity rather than redundancy? How can we
>> achieve it?
>>
>
> Ceph has an excellent documentation. This is easy to find and search for
> "the number of replicas", you want to set both "size" and "min_size" to 1
> on this case
>
> If speed is required, consider SSD's (for data & journals, or only
>> journals).
>>
>> In you email you mention "compute1/2/3", please note, if you use the rbd
>> kernel driver, this can interfere with the OSD process and is not advised
>> to run OSD and Kernel driver on the same hardware. If you still want to do
>> that, split it up using VMs (we have a small testing cluster where we do
>> mix compute and storage, there we have the OSDs running in VMs)
>>
>> [Gaurav] : within my mentioned environment, How can we split rbd kernel
>> driver and OSD process? Should it be like rbd kernel driver on controller
>> and OSD processes on compute hosts?
>>
>> Since my host 1 is controller + Compute1, Can you please share the steps
>> to split it up using VMs and suggested by you.
>>
>
> We are running kernel rbd on dom0 and osd's in domu, as well a monitor in
> domu.
>
> Regards,
>
> Mart
>
>
>
>
>
>> Regards
>> Gaurav Goyal
>>
>>
>> On Wed, Aug 17, 2016 at 9:28 AM, Mart van Santen < 
>> m...@greenhost.nl> wrote:
>>
>>>
>>> Dear Gaurav,
>>>
>>> Ceph works best with more hardware. It is not really designed for small
>>> scale setups. Of course small setups can work for a PoC or testing, but I
>>> would not advise this for production.
>>>
>>> If you want to proceed however, have a good look the manuals or this
>>> mailinglist archive and do invest some time to understand the logic and
>>> workings of ceph before working or ordering hardware
>>>
>>> At least you want:
>>> - 3 monitors, preferable on dedicated servers
>>> - Per disk you will be running an ceph-osd instance. So a host with 2
>>> disks will run 2 osd instances. More OSD process is better performance, but
>>> also more memory and cpu usage.
>>> - Per default ceph uses a replication factor of 3 (it is possible to set
>>> this to 2, but is not advised

Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

2016-08-17 Thread Gaurav Goyal
As it is a lab environment, can I set the cluster up to favour capacity over
redundancy (i.e. a lower replication factor)?

How can I achieve that?
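As suggested elsewhere in this thread, the way to trade redundancy for
capacity is to lower the pool's replica count. A minimal sketch, assuming a
pool named "volumes" (an example name); note that size 1 means any single
disk failure loses data, so this is only defensible for a throwaway lab:

ceph osd pool set volumes size 1
ceph osd pool set volumes min_size 1
ceph osd dump | grep 'replicated size'   # confirm the new settings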




On Wed, Aug 17, 2016 at 7:47 PM, Gaurav Goyal 
wrote:

> Hello,
>
> Awaiting any suggestion please!
>
>
>
>
> Regards
>
> On Wed, Aug 17, 2016 at 9:59 AM, Gaurav Goyal 
> wrote:
>
>> Hello Brian,
>>
>> Thanks for your response!
>>
>> Can you please elaborate on this.
>>
>> Do you mean i must use
>>
>> 4 x 1TB HDD on each nodes rather than 2 x 2TB?
>>
>> This is going to be a lab environment. Can you please suggest to have
>> best possible design for my lab environment.
>>
>>
>>
>> On Wed, Aug 17, 2016 at 9:54 AM, Brian ::  wrote:
>>
>>> You're going to see pretty slow performance on a cluster this size
>>> with spinning disks...
>>>
>>> Ceph scales very very well but at this type of size cluster it can be
>>> challenging to get nice throughput and iops..
>>>
>>> for something small like this either use all ssd osds or consider
>>> having more spinning osds per node backed by nvme or ssd journals..
>>>
>>>
>>>
>>> On Wed, Aug 17, 2016 at 1:14 PM, Gaurav Goyal 
>>> wrote:
>>> > Dear Ceph Users,
>>> >
>>> > Can you please address my scenario and suggest me a solution.
>>> >
>>> > Regards
>>> > Gaurav Goyal
>>> >
>>> > On Tue, Aug 16, 2016 at 11:13 AM, Gaurav Goyal <
>>> er.gauravgo...@gmail.com>
>>> > wrote:
>>> >>
>>> >> Hello
>>> >>
>>> >>
>>> >> I need your help to redesign my ceph storage network.
>>> >>
>>> >> As suggested in earlier discussions, i must not use SAN storage. So we
>>> >> have decided to removed it.
>>> >>
>>> >> Now we are ordering Local HDDs.
>>> >>
>>> >> My Network would be
>>> >>
>>> >> Host1 --> Controller + COmpute --> Local Disk 600GB Host 2-->
>>> Compute2 -->
>>> >> Local Disk 600GB Host 3 --> Compute2
>>> >>
>>> >> Is it right setup for ceph network? For Host1 and Host2 , we are
>>> using 1
>>> >> 600GB disk for basic filesystem.
>>> >>
>>> >> Should we use same size storage disks for ceph environment or i can
>>> order
>>> >> Disks in size of 2TB for ceph cluster?
>>> >>
>>> >> Making it
>>> >>
>>> >> 2T X 2 on Host1 2T X 2 on Host 2 2T X 2 on Host 3
>>> >>
>>> >> 12TB in total. replication factor 2 should make it 6 TB?
>>> >>
>>> >>
>>> >> Regards
>>> >>
>>> >> Gaurav Goyal
>>> >>
>>> >>
>>> >> On Thu, Aug 4, 2016 at 1:52 AM, Bharath Krishna <
>>> bkris...@walmartlabs.com>
>>> >> wrote:
>>> >>>
>>> >>> Hi Gaurav,
>>> >>>
>>> >>> There are several ways to do it depending on how you deployed your
>>> ceph
>>> >>> cluster. Easiest way to do it is using ceph-ansible with
>>> purge-cluster yaml
>>> >>> ready made to wipe off CEPH.
>>> >>>
>>> >>> https://github.com/ceph/ceph-ansible/blob/master/purge-cluster.yml
>>> >>>
>>> >>> You may need to configure ansible inventory with ceph hosts.
>>> >>>
>>> >>> Else if you want to purge manually, you can do it using:
>>> >>> http://docs.ceph.com/docs/hammer/rados/deployment/ceph-deploy-purge/
>>> >>>
>>> >>>
>>> >>> Thanks
>>> >>> Bharath
>>> >>>
>>> >>> From: ceph-users  on behalf of
>>> Gaurav
>>> >>> Goyal 
>>> >>> Date: Thursday, August 4, 2016 at 8:19 AM
>>> >>> To: David Turner 
>>> >>> Cc: ceph-users 
>>> >>> Subject: Re: [ceph-users] Fwd: Ceph Storage Migration from SAN
>>> storage to
>>> >>> Local Disks
>>> >>>
>>> >>> Please suggest a procedure for this uninstallation process?
>>> >>>
>>> >>>
>>> >>> Regards
>>> >>> Gaurav Goyal

Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

2016-08-17 Thread Gaurav Goyal
Hello,

Awaiting any suggestions, please!




Regards

On Wed, Aug 17, 2016 at 9:59 AM, Gaurav Goyal 
wrote:

> Hello Brian,
>
> Thanks for your response!
>
> Can you please elaborate on this.
>
> Do you mean i must use
>
> 4 x 1TB HDD on each nodes rather than 2 x 2TB?
>
> This is going to be a lab environment. Can you please suggest to have best
> possible design for my lab environment.
>
>
>
> On Wed, Aug 17, 2016 at 9:54 AM, Brian ::  wrote:
>
>> You're going to see pretty slow performance on a cluster this size
>> with spinning disks...
>>
>> Ceph scales very very well but at this type of size cluster it can be
>> challenging to get nice throughput and iops..
>>
>> for something small like this either use all ssd osds or consider
>> having more spinning osds per node backed by nvme or ssd journals..
>>
>>
>>
>> On Wed, Aug 17, 2016 at 1:14 PM, Gaurav Goyal 
>> wrote:
>> > Dear Ceph Users,
>> >
>> > Can you please address my scenario and suggest me a solution.
>> >
>> > Regards
>> > Gaurav Goyal
>> >
>> > On Tue, Aug 16, 2016 at 11:13 AM, Gaurav Goyal <
>> er.gauravgo...@gmail.com>
>> > wrote:
>> >>
>> >> Hello
>> >>
>> >>
>> >> I need your help to redesign my ceph storage network.
>> >>
>> >> As suggested in earlier discussions, i must not use SAN storage. So we
>> >> have decided to removed it.
>> >>
>> >> Now we are ordering Local HDDs.
>> >>
>> >> My Network would be
>> >>
>> >> Host1 --> Controller + COmpute --> Local Disk 600GB Host 2--> Compute2
>> -->
>> >> Local Disk 600GB Host 3 --> Compute2
>> >>
>> >> Is it right setup for ceph network? For Host1 and Host2 , we are using
>> 1
>> >> 600GB disk for basic filesystem.
>> >>
>> >> Should we use same size storage disks for ceph environment or i can
>> order
>> >> Disks in size of 2TB for ceph cluster?
>> >>
>> >> Making it
>> >>
>> >> 2T X 2 on Host1 2T X 2 on Host 2 2T X 2 on Host 3
>> >>
>> >> 12TB in total. replication factor 2 should make it 6 TB?
>> >>
>> >>
>> >> Regards
>> >>
>> >> Gaurav Goyal
>> >>
>> >>
>> >> On Thu, Aug 4, 2016 at 1:52 AM, Bharath Krishna <
>> bkris...@walmartlabs.com>
>> >> wrote:
>> >>>
>> >>> Hi Gaurav,
>> >>>
>> >>> There are several ways to do it depending on how you deployed your
>> ceph
>> >>> cluster. Easiest way to do it is using ceph-ansible with
>> purge-cluster yaml
>> >>> ready made to wipe off CEPH.
>> >>>
>> >>> https://github.com/ceph/ceph-ansible/blob/master/purge-cluster.yml
>> >>>
>> >>> You may need to configure ansible inventory with ceph hosts.
>> >>>
>> >>> Else if you want to purge manually, you can do it using:
>> >>> http://docs.ceph.com/docs/hammer/rados/deployment/ceph-deploy-purge/
>> >>>
>> >>>
>> >>> Thanks
>> >>> Bharath
>> >>>
>> >>> From: ceph-users  on behalf of
>> Gaurav
>> >>> Goyal 
>> >>> Date: Thursday, August 4, 2016 at 8:19 AM
>> >>> To: David Turner 
>> >>> Cc: ceph-users 
>> >>> Subject: Re: [ceph-users] Fwd: Ceph Storage Migration from SAN
>> storage to
>> >>> Local Disks
>> >>>
>> >>> Please suggest a procedure for this uninstallation process?
>> >>>
>> >>>
>> >>> Regards
>> >>> Gaurav Goyal
>> >>>
>> >>> On Wed, Aug 3, 2016 at 5:58 PM, Gaurav Goyal
>> >>> mailto:er.gauravgo...@gmail.com>> wrote:
>> >>>
>> >>> Thanks for your  prompt
>> >>> response!
>> >>>
>> >>> Situation is bit different now. Customer want us to remove the ceph
>> >>> storage configuration from scratch. Let is openstack system work
>> without
>> >>> ceph. Later on install ceph with local disks.
>> >>>
>> >>> So I need to know a procedure to uninstall ceph and unconfigure it
>> from
>> >>> openstack.
>> 

Re: [ceph-users] Designing ceph cluster

2016-08-17 Thread Gaurav Goyal
Dear Ceph Users,

Awaiting some suggestions, please!



On Wed, Aug 17, 2016 at 11:15 AM, Gaurav Goyal 
wrote:

> Hello Mart,
>
> Thanks a lot for the detailed information!
> Please find my response inline and help me to get more knowledge on it
>
>
> Ceph works best with more hardware. It is not really designed for small
> scale setups. Of course small setups can work for a PoC or testing, but I
> would not advise this for production.
>
> [Gaurav] : We need this setup for PoC or testing.
>
> If you want to proceed however, have a good look the manuals or this
> mailinglist archive and do invest some time to understand the logic and
> workings of ceph before working or ordering hardware
>
> At least you want:
> - 3 monitors, preferable on dedicated servers
> [Gaurav] : With my current setup, can i install MON on Host 1 -->
> Controller + Compute1, Host 2 and Host 3
>
> - Per disk you will be running an ceph-osd instance. So a host with 2
> disks will run 2 osd instances. More OSD process is better performance, but
> also more memory and cpu usage.
>
> [Gaurav] : Understood, That means having 1T x 4 would be better than 2T x
> 2.
>
> - Per default ceph uses a replication factor of 3 (it is possible to set
> this to 2, but is not advised)
> - You can not fill up disks to 100%, also data will not distribute even
> over all disks, expect disks to be filled up (on average) maximum to
> 60-70%. You want to add more disks once you reach this limit.
>
> All on all, with a setup of 3 hosts, with 2x2TB disks, this will result in
> a net data availablity of (3x2x2TBx0.6)/3 = 2.4 TB
>
> [Gaurav] : As this is going to be a test lab environment, can we change
> the configuration to have more capacity rather than redundancy? How can we
> achieve it?
>
> If speed is required, consider SSD's (for data & journals, or only
> journals).
>
> In you email you mention "compute1/2/3", please note, if you use the rbd
> kernel driver, this can interfere with the OSD process and is not advised
> to run OSD and Kernel driver on the same hardware. If you still want to do
> that, split it up using VMs (we have a small testing cluster where we do
> mix compute and storage, there we have the OSDs running in VMs)
>
> [Gaurav] : within my mentioned environment, How can we split rbd kernel
> driver and OSD process? Should it be like rbd kernel driver on controller
> and OSD processes on compute hosts?
>
> Since my host 1 is controller + Compute1, Can you please share the steps
> to split it up using VMs and suggested by you.
>
> Regards
> Gaurav Goyal
>
>
> On Wed, Aug 17, 2016 at 9:28 AM, Mart van Santen 
> wrote:
>
>>
>> Dear Gaurav,
>>
>> Ceph works best with more hardware. It is not really designed for small
>> scale setups. Of course small setups can work for a PoC or testing, but I
>> would not advise this for production.
>>
>> If you want to proceed however, have a good look the manuals or this
>> mailinglist archive and do invest some time to understand the logic and
>> workings of ceph before working or ordering hardware
>>
>> At least you want:
>> - 3 monitors, preferable on dedicated servers
>> - Per disk you will be running an ceph-osd instance. So a host with 2
>> disks will run 2 osd instances. More OSD process is better performance, but
>> also more memory and cpu usage.
>> - Per default ceph uses a replication factor of 3 (it is possible to set
>> this to 2, but is not advised)
>> - You can not fill up disks to 100%, also data will not distribute even
>> over all disks, expect disks to be filled up (on average) maximum to
>> 60-70%. You want to add more disks once you reach this limit.
>>
>> All on all, with a setup of 3 hosts, with 2x2TB disks, this will result
>> in a net data availablity of (3x2x2TBx0.6)/3 = 2.4 TB
>>
>>
>> If speed is required, consider SSD's (for data & journals, or only
>> journals).
>>
>> In you email you mention "compute1/2/3", please note, if you use the rbd
>> kernel driver, this can interfere with the OSD process and is not advised
>> to run OSD and Kernel driver on the same hardware. If you still want to do
>> that, split it up using VMs (we have a small testing cluster where we do
>> mix compute and storage, there we have the OSDs running in VMs)
>>
>> Hope this helps,
>>
>> regards,
>>
>> mart
>>
>>
>>
>>
>> On 08/17/2016 02:21 PM, Gaurav Goyal wrote:
>>
>> Dear Ceph Users,
>>
>> I need your help to redesign my ceph storage network.
>>
>> As suggested in earli

Re: [ceph-users] Designing ceph cluster

2016-08-17 Thread Gaurav Goyal
Hello Mart,

Thanks a lot for the detailed information!
Please find my response inline and help me to get more knowledge on it


Ceph works best with more hardware. It is not really designed for small
scale setups. Of course small setups can work for a PoC or testing, but I
would not advise this for production.

[Gaurav] : We need this setup for PoC or testing.

If you want to proceed, however, have a good look at the manuals or this
mailing list archive, and invest some time to understand the logic and
workings of Ceph before ordering hardware.

At least you want:
- 3 monitors, preferably on dedicated servers
[Gaurav] : With my current setup, can I install a MON on Host 1 (Controller +
Compute1), Host 2 and Host 3? (A deployment sketch follows further below.)

- Per disk you will be running a ceph-osd instance, so a host with 2 disks
will run 2 OSD instances. More OSD processes means better performance, but
also more memory and CPU usage.

[Gaurav] : Understood. That means having 4 x 1 TB would be better than
2 x 2 TB.

- By default Ceph uses a replication factor of 3 (it is possible to set
this to 2, but that is not advised).
- You cannot fill disks up to 100%; data will also not distribute evenly
over all disks, so expect disks to be filled (on average) to a maximum of
60-70%. You want to add more disks once you reach this limit.

All in all, a setup of 3 hosts with 2 x 2 TB disks each will result in
a net data availability of (3 x 2 x 2 TB x 0.6) / 3 = 2.4 TB.

[Gaurav] : As this is going to be a test lab environment, can we change the
configuration to have more capacity rather than redundancy? How can we
achieve it?

If speed is required, consider SSDs (for data & journals, or only
journals).

In you email you mention "compute1/2/3", please note, if you use the rbd
kernel driver, this can interfere with the OSD process and is not advised
to run OSD and Kernel driver on the same hardware. If you still want to do
that, split it up using VMs (we have a small testing cluster where we do
mix compute and storage, there we have the OSDs running in VMs)

[Gaurav] : Within my environment, how can we split the rbd kernel driver and
the OSD processes? Should it be the rbd kernel driver on the controller and
the OSD processes on the compute hosts?

Since my Host 1 is Controller + Compute1, can you please share the steps to
split it up using VMs, as suggested by you.

Regards
Gaurav Goyal
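Regarding the MON placement question above: co-locating one monitor on each
of the three hosts is workable for a PoC. A minimal sketch using ceph-deploy
(the host names are placeholders for the three hosts discussed):

ceph-deploy new host1 host2 host3     # writes ceph.conf with all three hosts in the initial MON quorum
ceph-deploy mon create-initial        # deploys and starts the monitors, then gathers keys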


On Wed, Aug 17, 2016 at 9:28 AM, Mart van Santen  wrote:

>
> Dear Gaurav,
>
> Ceph works best with more hardware. It is not really designed for small
> scale setups. Of course small setups can work for a PoC or testing, but I
> would not advise this for production.
>
> If you want to proceed however, have a good look the manuals or this
> mailinglist archive and do invest some time to understand the logic and
> workings of ceph before working or ordering hardware
>
> At least you want:
> - 3 monitors, preferable on dedicated servers
> - Per disk you will be running an ceph-osd instance. So a host with 2
> disks will run 2 osd instances. More OSD process is better performance, but
> also more memory and cpu usage.
> - Per default ceph uses a replication factor of 3 (it is possible to set
> this to 2, but is not advised)
> - You can not fill up disks to 100%, also data will not distribute even
> over all disks, expect disks to be filled up (on average) maximum to
> 60-70%. You want to add more disks once you reach this limit.
>
> All on all, with a setup of 3 hosts, with 2x2TB disks, this will result in
> a net data availablity of (3x2x2TBx0.6)/3 = 2.4 TB
>
>
> If speed is required, consider SSD's (for data & journals, or only
> journals).
>
> In you email you mention "compute1/2/3", please note, if you use the rbd
> kernel driver, this can interfere with the OSD process and is not advised
> to run OSD and Kernel driver on the same hardware. If you still want to do
> that, split it up using VMs (we have a small testing cluster where we do
> mix compute and storage, there we have the OSDs running in VMs)
>
> Hope this helps,
>
> regards,
>
> mart
>
>
>
>
> On 08/17/2016 02:21 PM, Gaurav Goyal wrote:
>
> Dear Ceph Users,
>
> I need your help to redesign my ceph storage network.
>
> As suggested in earlier discussions, i must not use SAN storage. So we
> have decided to removed it.
>
> Now we are ordering Local HDDs.
>
> My Network would be
>
> Host1 --> Controller + Compute1 Host 2--> Compute2 Host 3 --> Compute3
>
> Is it right setup for ceph network? For Host1 and Host2 , we are using 1
> 500GB disk for OS on each host .
>
> Should we use same size storage disks 500GB *8 for ceph environment or i
> can order Disks in size of 2TB for ceph cluster?
>
> Making it
>
> 2T X 2 on Host1 2T X 2 on Host 2 2T X 2 on Host 3
>
> 12TB in total. replication fa

Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

2016-08-17 Thread Gaurav Goyal
Hello Brian,

Thanks for your response!

Can you please elaborate on this?

Do you mean I must use

4 x 1 TB HDDs on each node rather than 2 x 2 TB?

This is going to be a lab environment. Can you please suggest the best
possible design for my lab environment?



On Wed, Aug 17, 2016 at 9:54 AM, Brian ::  wrote:

> You're going to see pretty slow performance on a cluster this size
> with spinning disks...
>
> Ceph scales very very well but at this type of size cluster it can be
> challenging to get nice throughput and iops..
>
> for something small like this either use all ssd osds or consider
> having more spinning osds per node backed by nvme or ssd journals..
>
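As an illustration of the journal-on-SSD layout mentioned above, a sketch
from the pre-BlueStore (ceph-disk) era; the device paths are examples:

ceph-disk prepare /dev/sdc /dev/nvme0n1   # data disk followed by the journal device
ceph-disk activate /dev/sdc1              # activate the OSD on the newly created data partition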
>
>
> On Wed, Aug 17, 2016 at 1:14 PM, Gaurav Goyal 
> wrote:
> > Dear Ceph Users,
> >
> > Can you please address my scenario and suggest me a solution.
> >
> > Regards
> > Gaurav Goyal
> >
> > On Tue, Aug 16, 2016 at 11:13 AM, Gaurav Goyal  >
> > wrote:
> >>
> >> Hello
> >>
> >>
> >> I need your help to redesign my ceph storage network.
> >>
> >> As suggested in earlier discussions, i must not use SAN storage. So we
> >> have decided to removed it.
> >>
> >> Now we are ordering Local HDDs.
> >>
> >> My Network would be
> >>
> >> Host1 --> Controller + COmpute --> Local Disk 600GB Host 2--> Compute2
> -->
> >> Local Disk 600GB Host 3 --> Compute2
> >>
> >> Is it right setup for ceph network? For Host1 and Host2 , we are using 1
> >> 600GB disk for basic filesystem.
> >>
> >> Should we use same size storage disks for ceph environment or i can
> order
> >> Disks in size of 2TB for ceph cluster?
> >>
> >> Making it
> >>
> >> 2T X 2 on Host1 2T X 2 on Host 2 2T X 2 on Host 3
> >>
> >> 12TB in total. replication factor 2 should make it 6 TB?
> >>
> >>
> >> Regards
> >>
> >> Gaurav Goyal
> >>
> >>
> >> On Thu, Aug 4, 2016 at 1:52 AM, Bharath Krishna <
> bkris...@walmartlabs.com>
> >> wrote:
> >>>
> >>> Hi Gaurav,
> >>>
> >>> There are several ways to do it depending on how you deployed your ceph
> >>> cluster. Easiest way to do it is using ceph-ansible with purge-cluster
> yaml
> >>> ready made to wipe off CEPH.
> >>>
> >>> https://github.com/ceph/ceph-ansible/blob/master/purge-cluster.yml
> >>>
> >>> You may need to configure ansible inventory with ceph hosts.
> >>>
> >>> Else if you want to purge manually, you can do it using:
> >>> http://docs.ceph.com/docs/hammer/rados/deployment/ceph-deploy-purge/
> >>>
> >>>
> >>> Thanks
> >>> Bharath
> >>>
> >>> From: ceph-users  on behalf of
> Gaurav
> >>> Goyal 
> >>> Date: Thursday, August 4, 2016 at 8:19 AM
> >>> To: David Turner 
> >>> Cc: ceph-users 
> >>> Subject: Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage
> to
> >>> Local Disks
> >>>
> >>> Please suggest a procedure for this uninstallation process?
> >>>
> >>>
> >>> Regards
> >>> Gaurav Goyal
> >>>
> >>> On Wed, Aug 3, 2016 at 5:58 PM, Gaurav Goyal
> >>> mailto:er.gauravgo...@gmail.com>> wrote:
> >>>
> >>> Thanks for your  prompt
> >>> response!
> >>>
> >>> Situation is bit different now. Customer want us to remove the ceph
> >>> storage configuration from scratch. Let is openstack system work
> without
> >>> ceph. Later on install ceph with local disks.
> >>>
> >>> So I need to know a procedure to uninstall ceph and unconfigure it from
> >>> openstack.
> >>>
> >>> Regards
> >>> Gaurav Goyal
> >>> On 03-Aug-2016 4:59 pm, "David Turner"
> >>> mailto:david.tur...@storagecraft.com>>
> wrote:
> >>> If I'm understanding your question correctly that you're asking how to
> >>> actually remove the SAN osds from ceph, then it doesn't matter what is
> using
> >>> the storage (ie openstack, cephfs, krbd, etc) as the steps are the
> same.
> >>>
> >>> I'm going to assume that you've already added the new storage/osds to
> the
> >>> cluster, weighted the SAN osds to 0.0 and that the

[ceph-users] Designing ceph cluster

2016-08-17 Thread Gaurav Goyal
Dear Ceph Users,


I need your help to redesign my ceph storage network.

As suggested in earlier discussions, I must not use SAN storage, so we have
decided to remove it.

Now we are ordering local HDDs.

My network would be:

Host 1 --> Controller + Compute1
Host 2 --> Compute2
Host 3 --> Compute3

Is this the right setup for a Ceph cluster? For Host 1 and Host 2, we are
using one 500 GB disk for the OS on each host.

Should we use the same size of storage disks (500 GB x 8) for the Ceph
environment, or can I order disks of 2 TB each for the Ceph cluster?

Making it

2 TB x 2 on Host 1, 2 TB x 2 on Host 2, 2 TB x 2 on Host 3

12 TB in total; a replication factor of 2 should make it 6 TB?
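A rough sanity check of that arithmetic, using the assumptions cited earlier
in this thread (replication size 2, and adding disks once the average fill
reaches roughly 60-70%):

  raw capacity:        3 hosts x 2 x 2 TB  = 12 TB
  logical (size = 2):  12 TB / 2           =  6 TB
  practically usable:  6 TB x 0.6-0.7      ≈ 3.6-4.2 TB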
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Fwd: Re: (no subject)

2016-08-17 Thread Gaurav Goyal
Dear Ceph Users,


I need your help to redesign my ceph storage network.

As suggested in earlier discussions, I must not use SAN storage, so we have
decided to remove it.

Now we are ordering local HDDs.

My network would be:

Host 1 --> Controller + Compute1
Host 2 --> Compute2
Host 3 --> Compute3

Is this the right setup for a Ceph cluster? For Host 1 and Host 2, we are
using one 500 GB disk for the OS on each host.

Should we use the same size of storage disks (500 GB x 8) for the Ceph
environment, or can I order disks of 2 TB each for the Ceph cluster?

Making it

2 TB x 2 on Host 1, 2 TB x 2 on Host 2, 2 TB x 2 on Host 3

12 TB in total; a replication factor of 2 should make it 6 TB?


Regards

On Tue, Aug 2, 2016 at 11:16 AM, Gaurav Goyal 
wrote:

>
> Hello Jason/Kees,
>
> I am trying to take snapshot of my instance.
>
> Image was stuck up in Queued state and instance is stuck up in Image
> Pending Upload state.
>
> I had to manually quit the job as it was not working since last 1 hour ..
> my instance is still in Image Pending Upload state.
>
> Is it something wrong with my ceph configuration?
> can i take snapshots with ceph storage? How?
>
> Regards
> Gaurav Goyal
>
> On Wed, Jul 13, 2016 at 9:44 AM, Jason Dillaman 
> wrote:
>
>> The RAW file will appear to be the exact image size but the filesystem
>> will know about the holes in the image and it will be sparsely
>> allocated on disk.  For example:
>>
>> # dd if=/dev/zero of=sparse-file bs=1 count=1 seek=2GiB
>> # ll sparse-file
>> -rw-rw-r--. 1 jdillaman jdillaman 2147483649 Jul 13 09:20 sparse-file
>> # du -sh sparse-file
>> 4.0K sparse-file
>>
>> Now, running qemu-img to copy the image into the backing RBD pool:
>>
>> # qemu-img convert -f raw -O raw ~/sparse-file rbd:rbd/sparse-file
>> # rbd disk-usage sparse-file
>> NAMEPROVISIONED USED
>> sparse-file   2048M0
>>
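Following the same idea, the qcow2-to-raw conversion discussed further down
can write straight into RBD, avoiding the oversized intermediate raw file. A
sketch under the assumption of a Glance pool named "images" (example names):

# qemu-img convert -f qcow2 -O raw image.qcow2 rbd:images/myimage
# rbd disk-usage images/myimage

The image stays thin-provisioned, so USED remains far below PROVISIONED even
though the raw format reports the full virtual size.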
>>
>> On Wed, Jul 13, 2016 at 3:31 AM, Fran Barrera 
>> wrote:
>> > Yes, but is the same problem isn't? The image will be too large because
>> the
>> > format is raw.
>> >
>> > Thanks.
>> >
>> > 2016-07-13 9:24 GMT+02:00 Kees Meijs :
>> >>
>> >> Hi Fran,
>> >>
>> >> Fortunately, qemu-img(1) is able to directly utilise RBD (supporting
>> >> sparse block devices)!
>> >>
>> >> Please refer to http://docs.ceph.com/docs/hammer/rbd/qemu-rbd/ for
>> >> examples.
>> >>
>> >> Cheers,
>> >> Kees
>> >>
>> >> On 13-07-16 09:18, Fran Barrera wrote:
>> >> > Can you explain how you do this procedure? I have the same problem
>> >> > with the large images and snapshots.
>> >> >
>> >> > This is what I do:
>> >> >
>> >> > # qemu-img convert -f qcow2 -O raw image.qcow2 image.img
>> >> > # openstack image create image.img
>> >> >
>> >> > But the image.img is too large.
>> >>
>> >> ___
>> >> ceph-users mailing list
>> >> ceph-users@lists.ceph.com
>> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >
>> >
>> >
>> > ___
>> > ceph-users mailing list
>> > ceph-users@lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >
>>
>>
>>
>> --
>> Jason
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

2016-08-17 Thread Gaurav Goyal
Dear Ceph Users,

Can you please address my scenario and suggest me a solution.

Regards
Gaurav Goyal

On Tue, Aug 16, 2016 at 11:13 AM, Gaurav Goyal 
wrote:

> Hello
>
>
> I need your help to redesign my ceph storage network.
>
> As suggested in earlier discussions, i must not use SAN storage. So we
> have decided to removed it.
>
> Now we are ordering Local HDDs.
>
> My Network would be
>
> Host1 --> Controller + COmpute --> Local Disk 600GB Host 2--> Compute2 -->
> Local Disk 600GB Host 3 --> Compute2
>
> Is it right setup for ceph network? For Host1 and Host2 , we are using 1
> 600GB disk for basic filesystem.
>
> Should we use same size storage disks for ceph environment or i can order
> Disks in size of 2TB for ceph cluster?
>
> Making it
>
> 2T X 2 on Host1 2T X 2 on Host 2 2T X 2 on Host 3
>
> 12TB in total. replication factor 2 should make it 6 TB?
>
>
> Regards
>
> Gaurav Goyal
>
> On Thu, Aug 4, 2016 at 1:52 AM, Bharath Krishna 
> wrote:
>
>> Hi Gaurav,
>>
>> There are several ways to do it depending on how you deployed your ceph
>> cluster. Easiest way to do it is using ceph-ansible with purge-cluster yaml
>> ready made to wipe off CEPH.
>>
>> https://github.com/ceph/ceph-ansible/blob/master/purge-cluster.yml
>>
>> You may need to configure ansible inventory with ceph hosts.
>>
>> Else if you want to purge manually, you can do it using:
>> http://docs.ceph.com/docs/hammer/rados/deployment/ceph-deploy-purge/
>>
>>
>> Thanks
>> Bharath
>>
>> From: ceph-users  on behalf of Gaurav
>> Goyal 
>> Date: Thursday, August 4, 2016 at 8:19 AM
>> To: David Turner 
>> Cc: ceph-users 
>> Subject: Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to
>> Local Disks
>>
>> Please suggest a procedure for this uninstallation process?
>>
>>
>> Regards
>> Gaurav Goyal
>>
>> On Wed, Aug 3, 2016 at 5:58 PM, Gaurav Goyal > <mailto:er.gauravgo...@gmail.com>> wrote:
>>
>> Thanks for your  prompt
>> response!
>>
>> Situation is bit different now. Customer want us to remove the ceph
>> storage configuration from scratch. Let is openstack system work without
>> ceph. Later on install ceph with local disks.
>>
>> So I need to know a procedure to uninstall ceph and unconfigure it from
>> openstack.
>>
>> Regards
>> Gaurav Goyal
>> On 03-Aug-2016 4:59 pm, "David Turner" > <mailto:david.tur...@storagecraft.com>> wrote:
>> If I'm understanding your question correctly that you're asking how to
>> actually remove the SAN osds from ceph, then it doesn't matter what is
>> using the storage (ie openstack, cephfs, krbd, etc) as the steps are the
>> same.
>>
>> I'm going to assume that you've already added the new storage/osds to the
>> cluster, weighted the SAN osds to 0.0 and that the backfilling has
>> finished.  If that is true, then your disk used space on the SAN's should
>> be basically empty while the new osds on the local disks should have a fair
>> amount of data.  If that is the case, then for every SAN osd, you just run
>> the following commands replacing OSD_ID with the osd's id:
>>
>> # On the server with the osd being removed
>> sudo stop ceph-osd id=OSD_ID
>> ceph osd down OSD_ID
>> ceph osd out OSD_ID
>> ceph osd crush remove osd.OSD_ID
>> ceph auth del osd.OSD_ID
>> ceph osd rm OSD_ID
>>
>> Test running those commands on a test osd and if you had set the weight
>> of the osd to 0.0 previously and if the backfilling had finished, then what
>> you should see is that your cluster has 1 less osd than it used to, and no
>> pgs should be backfilling.
>>
>> HOWEVER, if my assumptions above are incorrect, please provide the output
>> of the following commands and try to clarify your question.
>>
>> ceph status
>> ceph osd tree
>>
>> I hope this helps.
>>
>> > Hello David,
>> >
>> > Can you help me with steps/Procedure to uninstall Ceph storage from
>> openstack environment?
>> >
>> >
>> > Regards
>> > Gaurav Goyal
>> 
>>
>> David Turner | Cloud Operations Engineer | StorageCraft Technology
>> Corporation<https://storagecraft.com>
>> 380 Data Drive Suite 300 | Draper | Utah | 84020
>> Office: 801.871.2760 | Mobile: 385.224.2943
>>
>> 
>> If you are not the intended recipient of this message or received it
>> erroneously, please notify the sender and delete it, together with any
>> attachments, and be advised that any dissemination or copying of this
>> message is prohibited.
>>
>> 
>>
>>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

2016-08-16 Thread Gaurav Goyal
Hello


I need your help to redesign my ceph storage network.

As suggested in earlier discussions, I must not use SAN storage, so we have
decided to remove it.

Now we are ordering local HDDs.

My network would be:

Host 1 --> Controller + Compute1 --> Local Disk 600 GB
Host 2 --> Compute2 --> Local Disk 600 GB
Host 3 --> Compute3

Is this the right setup for a Ceph cluster? For Host 1 and Host 2, we are
using one 600 GB disk for the basic filesystem.

Should we use the same size of storage disks for the Ceph environment, or can
I order disks of 2 TB each for the Ceph cluster?

Making it

2 TB x 2 on Host 1, 2 TB x 2 on Host 2, 2 TB x 2 on Host 3

12 TB in total; a replication factor of 2 should make it 6 TB?


Regards

Gaurav Goyal

On Thu, Aug 4, 2016 at 1:52 AM, Bharath Krishna 
wrote:

> Hi Gaurav,
>
> There are several ways to do it depending on how you deployed your ceph
> cluster. Easiest way to do it is using ceph-ansible with purge-cluster yaml
> ready made to wipe off CEPH.
>
> https://github.com/ceph/ceph-ansible/blob/master/purge-cluster.yml
>
> You may need to configure ansible inventory with ceph hosts.
>
> Else if you want to purge manually, you can do it using:
> http://docs.ceph.com/docs/hammer/rados/deployment/ceph-deploy-purge/
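For reference, the manual route in that document typically boils down to a
few ceph-deploy commands; a sketch with placeholder host names:

ceph-deploy purge host1 host2 host3        # remove the Ceph packages from the hosts
ceph-deploy purgedata host1 host2 host3    # wipe /var/lib/ceph and /etc/ceph
ceph-deploy forgetkeys                     # discard keyrings from the local working directory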
>
>
> Thanks
> Bharath
>
> From: ceph-users  on behalf of Gaurav
> Goyal 
> Date: Thursday, August 4, 2016 at 8:19 AM
> To: David Turner 
> Cc: ceph-users 
> Subject: Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to
> Local Disks
>
> Please suggest a procedure for this uninstallation process?
>
>
> Regards
> Gaurav Goyal
>
> On Wed, Aug 3, 2016 at 5:58 PM, Gaurav Goyal  mailto:er.gauravgo...@gmail.com>> wrote:
>
> Thanks for your  prompt
> response!
>
> Situation is bit different now. Customer want us to remove the ceph
> storage configuration from scratch. Let is openstack system work without
> ceph. Later on install ceph with local disks.
>
> So I need to know a procedure to uninstall ceph and unconfigure it from
> openstack.
>
> Regards
> Gaurav Goyal
> On 03-Aug-2016 4:59 pm, "David Turner"  <mailto:david.tur...@storagecraft.com>> wrote:
> If I'm understanding your question correctly that you're asking how to
> actually remove the SAN osds from ceph, then it doesn't matter what is
> using the storage (ie openstack, cephfs, krbd, etc) as the steps are the
> same.
>
> I'm going to assume that you've already added the new storage/osds to the
> cluster, weighted the SAN osds to 0.0 and that the backfilling has
> finished.  If that is true, then your disk used space on the SAN's should
> be basically empty while the new osds on the local disks should have a fair
> amount of data.  If that is the case, then for every SAN osd, you just run
> the following commands replacing OSD_ID with the osd's id:
>
> # On the server with the osd being removed
> sudo stop ceph-osd id=OSD_ID
> ceph osd down OSD_ID
> ceph osd out OSD_ID
> ceph osd crush remove osd.OSD_ID
> ceph auth del osd.OSD_ID
> ceph osd rm OSD_ID
>
> Test running those commands on a test osd and if you had set the weight of
> the osd to 0.0 previously and if the backfilling had finished, then what
> you should see is that your cluster has 1 less osd than it used to, and no
> pgs should be backfilling.
>
> HOWEVER, if my assumptions above are incorrect, please provide the output
> of the following commands and try to clarify your question.
>
> ceph status
> ceph osd tree
>
> I hope this helps.
>
> > Hello David,
> >
> > Can you help me with steps/Procedure to uninstall Ceph storage from
> openstack environment?
> >
> >
> > Regards
> > Gaurav Goyal
> 
>
> David Turner | Cloud Operations Engineer | StorageCraft Technology
> Corporation<https://storagecraft.com>
> 380 Data Drive Suite 300 | Draper | Utah | 84020
> Office: 801.871.2760 | Mobile: 385.224.2943
>
> 
> If you are not the intended recipient of this message or received it
> erroneously, please notify the sender and delete it, together with any
> attachments, and be advised that any dissemination or copying of this
> message is prohibited.
>
> 
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

2016-08-03 Thread Gaurav Goyal
Please suggest a procedure for this uninstallation process?


Regards
Gaurav Goyal

On Wed, Aug 3, 2016 at 5:58 PM, Gaurav Goyal 
wrote:

> Thanks for your  prompt
> response!
>
> Situation is bit different now. Customer want us to remove the ceph
> storage configuration from scratch. Let is openstack system work without
> ceph. Later on install ceph with local disks.
>
> So I need to know a procedure to uninstall ceph and unconfigure it from
> openstack.
>
> Regards
> Gaurav Goyal
> On 03-Aug-2016 4:59 pm, "David Turner" 
> wrote:
>
>> If I'm understanding your question correctly that you're asking how to
>> actually remove the SAN osds from ceph, then it doesn't matter what is
>> using the storage (ie openstack, cephfs, krbd, etc) as the steps are the
>> same.
>>
>> I'm going to assume that you've already added the new storage/osds to the
>> cluster, weighted the SAN osds to 0.0 and that the backfilling has
>> finished.  If that is true, then your disk used space on the SAN's should
>> be basically empty while the new osds on the local disks should have a fair
>> amount of data.  If that is the case, then for every SAN osd, you just run
>> the following commands replacing OSD_ID with the osd's id:
>>
>> # On the server with the osd being removed
>> sudo stop ceph-osd id=OSD_ID
>> ceph osd down OSD_ID
>> ceph osd out OSD_ID
>> ceph osd crush remove osd.OSD_ID
>> ceph auth del osd.OSD_ID
>> ceph osd rm OSD_ID
>>
>> Test running those commands on a test osd and if you had set the weight
>> of the osd to 0.0 previously and if the backfilling had finished, then what
>> you should see is that your cluster has 1 less osd than it used to, and no
>> pgs should be backfilling.
>>
>> HOWEVER, if my assumptions above are incorrect, please provide the output
>> of the following commands and try to clarify your question.
>>
>> ceph status
>> ceph osd tree
>>
>> I hope this helps.
>>
>> > Hello David,
>> >
>> > Can you help me with steps/Procedure to uninstall Ceph storage from
>> openstack environment?
>> >
>> >
>> > Regards
>> > Gaurav Goyal
>>
>> --
>>
>> <https://storagecraft.com> David Turner | Cloud Operations Engineer | 
>> StorageCraft
>> Technology Corporation <https://storagecraft.com>
>> 380 Data Drive Suite 300 | Draper | Utah | 84020
>> Office: 801.871.2760 | Mobile: 385.224.2943
>>
>> --
>>
>> If you are not the intended recipient of this message or received it
>> erroneously, please notify the sender and delete it, together with any
>> attachments, and be advised that any dissemination or copying of this
>> message is prohibited.
>>
>> --
>>
>>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

2016-08-03 Thread Gaurav Goyal
Thanks for your prompt response!

The situation is a bit different now. The customer wants us to remove the Ceph
storage configuration from scratch, let the OpenStack system work without
Ceph, and later on install Ceph with local disks.

So I need to know a procedure to uninstall Ceph and unconfigure it from
OpenStack.

Regards
Gaurav Goyal
On 03-Aug-2016 4:59 pm, "David Turner" 
wrote:

> If I'm understanding your question correctly that you're asking how to
> actually remove the SAN osds from ceph, then it doesn't matter what is
> using the storage (ie openstack, cephfs, krbd, etc) as the steps are the
> same.
>
> I'm going to assume that you've already added the new storage/osds to the
> cluster, weighted the SAN osds to 0.0 and that the backfilling has
> finished.  If that is true, then your disk used space on the SAN's should
> be basically empty while the new osds on the local disks should have a fair
> amount of data.  If that is the case, then for every SAN osd, you just run
> the following commands replacing OSD_ID with the osd's id:
>
> # On the server with the osd being removed
> sudo stop ceph-osd id=OSD_ID
> ceph osd down OSD_ID
> ceph osd out OSD_ID
> ceph osd crush remove osd.OSD_ID
> ceph auth del osd.OSD_ID
> ceph osd rm OSD_ID
>
> Test running those commands on a test osd and if you had set the weight of
> the osd to 0.0 previously and if the backfilling had finished, then what
> you should see is that your cluster has 1 less osd than it used to, and no
> pgs should be backfilling.
>
> HOWEVER, if my assumptions above are incorrect, please provide the output
> of the following commands and try to clarify your question.
>
> ceph status
> ceph osd tree
>
> I hope this helps.
>
> > Hello David,
> >
> > Can you help me with steps/Procedure to uninstall Ceph storage from
> openstack environment?
> >
> >
> > Regards
> > Gaurav Goyal
>
> --
>
> <https://storagecraft.com> David Turner | Cloud Operations Engineer | 
> StorageCraft
> Technology Corporation <https://storagecraft.com>
> 380 Data Drive Suite 300 | Draper | Utah | 84020
> Office: 801.871.2760 | Mobile: 385.224.2943
>
> --
>
> If you are not the intended recipient of this message or received it
> erroneously, please notify the sender and delete it, together with any
> attachments, and be advised that any dissemination or copying of this
> message is prohibited.
> --
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

2016-08-03 Thread Gaurav Goyal
Hello David,

Can you help me with the steps/procedure to uninstall Ceph storage from the
OpenStack environment?


Regards
Gaurav Goyal

On Tue, Aug 2, 2016 at 11:57 AM, Gaurav Goyal 
wrote:

> Hello David,
>
> Thanks a lot for detailed information!
>
> This is going to help me.
>
>
> Regards
> Gaurav Goyal
>
> On Tue, Aug 2, 2016 at 11:46 AM, David Turner <
> david.tur...@storagecraft.com> wrote:
>
>> I'm going to assume you know how to add and remove storage
>> http://docs.ceph.com/docs/hammer/rados/operations/add-or-rm-osds/.  The
>> only other part of this process is reweighting the crush map for the old
>> osds to a new weight of 0.0
>> http://docs.ceph.com/docs/master/rados/operations/crush-map/.
>>
>> I would recommend setting the nobackfill and norecover flags.
>>
>> ceph osd set nobackfill
>> ceph osd set norecover
>>
>> Next you would add all of the new osds according to the ceph docs and
>> then reweight the old osds to 0.0.
>>
>> ceph osd crush reweight osd.1 0.0
>>
>> Once you have all of that set, unset nobackfill and norecover.
>>
>> ceph osd unset nobackfill
>> ceph osd unset norecover
>>
>> Wait until all of the backfilling finishes and then remove the old SAN
>> osds as per the ceph docs.
>>
>>
>> There is a thread from this mailing list about the benefits of weighting
>> osds to 0.0 instead of just removing them.  The best thing that you gain
>> from doing it this way is that you can remove multiple nodes/osds at the
>> same time without having degraded objects and especially without losing
>> objects.
>>
>> --
>>
>> <https://storagecraft.com> David Turner | Cloud Operations Engineer | 
>> StorageCraft
>> Technology Corporation <https://storagecraft.com>
>> 380 Data Drive Suite 300 | Draper | Utah | 84020
>> Office: 801.871.2760 | Mobile: 385.224.2943
>>
>> --
>>
>> If you are not the intended recipient of this message or received it
>> erroneously, please notify the sender and delete it, together with any
>> attachments, and be advised that any dissemination or copying of this
>> message is prohibited.
>>
>> --
>>
>>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

2016-08-02 Thread Gaurav Goyal
Hello David,

Thanks a lot for the detailed information!

This is going to help me.


Regards
Gaurav Goyal

On Tue, Aug 2, 2016 at 11:46 AM, David Turner  wrote:

> I'm going to assume you know how to add and remove storage
> http://docs.ceph.com/docs/hammer/rados/operations/add-or-rm-osds/.  The
> only other part of this process is reweighting the crush map for the old
> osds to a new weight of 0.0
> http://docs.ceph.com/docs/master/rados/operations/crush-map/.
>
> I would recommend setting the nobackfill and norecover flags.
>
> ceph osd set nobackfill
> ceph osd set norecover
>
> Next you would add all of the new osds according to the ceph docs and then
> reweight the old osds to 0.0.
>
> ceph osd crush reweight osd.1 0.0
>
> Once you have all of that set, unset nobackfill and norecover.
>
> ceph osd unset nobackfill
> ceph osd unset norecover
>
> Wait until all of the backfilling finishes and then remove the old SAN
> osds as per the ceph docs.
>
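Put together, the drain step described above can be scripted; a sketch in
which the OSD IDs are placeholders for the SAN-backed OSDs:

ceph osd set nobackfill && ceph osd set norecover
for id in 0 1 2 3; do                 # example IDs; substitute the SAN OSD IDs
    ceph osd crush reweight osd.$id 0.0
done
ceph osd unset nobackfill && ceph osd unset norecover
ceph -w                               # watch until no PGs are backfilling or degraded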
>
> There is a thread from this mailing list about the benefits of weighting
> osds to 0.0 instead of just removing them.  The best thing that you gain
> from doing it this way is that you can remove multiple nodes/osds at the
> same time without having degraded objects and especially without losing
> objects.
>
> --
>
> <https://storagecraft.com> David Turner | Cloud Operations Engineer | 
> StorageCraft
> Technology Corporation <https://storagecraft.com>
> 380 Data Drive Suite 300 | Draper | Utah | 84020
> Office: 801.871.2760 | Mobile: 385.224.2943
>
> --
>
> If you are not the intended recipient of this message or received it
> erroneously, please notify the sender and delete it, together with any
> attachments, and be advised that any dissemination or copying of this
> message is prohibited.
>
> --
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

2016-08-02 Thread Gaurav Goyal
Hi David,

Thanks for your comments!
Can you please share the procedure/document, if available?

Regards
Gaurav Goyal

On Tue, Aug 2, 2016 at 11:24 AM, David Turner  wrote:

> Just add the new storage and weight the old storage to 0.0 so all data
> will move off of the old storage to the new storage.  It's not unique to
> migrating from SANs to Local Disks.  You would do the same any time you
> wanted to migrate to newer servers and retire old servers.  After the
> backfilling is done, you can just remove the old osds from the cluster and
> no more backfilling will happen.
>
> --
>
> <https://storagecraft.com> David Turner | Cloud Operations Engineer | 
> StorageCraft
> Technology Corporation <https://storagecraft.com>
> 380 Data Drive Suite 300 | Draper | Utah | 84020
> Office: 801.871.2760 | Mobile: 385.224.2943
>
> --
>
> If you are not the intended recipient of this message or received it
> erroneously, please notify the sender and delete it, together with any
> attachments, and be advised that any dissemination or copying of this
> message is prohibited.
>
> --
>
> ----------
> *From:* ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of
> Gaurav Goyal [er.gauravgo...@gmail.com]
> *Sent:* Tuesday, August 02, 2016 9:19 AM
> *To:* ceph-users
> *Subject:* [ceph-users] Fwd: Ceph Storage Migration from SAN storage to
> Local Disks
>
> Dear Ceph Team,
>
> I need your guidance on this.
>
>
> Regards
> Gaurav Goyal
>
> On Wed, Jul 27, 2016 at 4:03 PM, Gaurav Goyal 
> wrote:
>
>> Dear Team,
>>
>> I have ceph storage installed on SAN storage which is connected to
>> Openstack Hosts via iSCSI LUNs.
>> Now we want to get rid of SAN storage and move over ceph to LOCAL disks.
>>
>> Can i add new local disks as new OSDs and remove the old osds ?
>> or
>>
>> I will have to remove the ceph from scratch and install it freshly with
>> Local disks?
>>
>>
>> Regards
>> Gaurav Goyal
>>
>>
>>
>>
>>
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

2016-08-02 Thread Gaurav Goyal
Dear Ceph Team,

I need your guidance on this.


Regards
Gaurav Goyal

On Wed, Jul 27, 2016 at 4:03 PM, Gaurav Goyal 
wrote:

> Dear Team,
>
> I have ceph storage installed on SAN storage which is connected to
> Openstack Hosts via iSCSI LUNs.
> Now we want to get rid of SAN storage and move over ceph to LOCAL disks.
>
> Can i add new local disks as new OSDs and remove the old osds ?
> or
>
> I will have to remove the ceph from scratch and install it freshly with
> Local disks?
>
>
> Regards
> Gaurav Goyal
>
>
>
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Fwd: Re: (no subject)

2016-08-02 Thread Gaurav Goyal
Hello Jason/Kees,

I am trying to take a snapshot of my instance.

The image was stuck in the Queued state and the instance is stuck in the
Image Pending Upload state.

I had to cancel the job manually because it had made no progress for over an
hour; my instance is still in the Image Pending Upload state.

Is something wrong with my Ceph configuration?
Can I take snapshots with Ceph storage? If so, how?
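Snapshots do work with Ceph-backed storage. As a minimal sketch of checking
things at the RBD layer (volume and snapshot names are placeholders), separate
from the Nova/Glance path that produces the Queued / Image Pending Upload
states:

rbd -p volumes ls                         # volumes backed by Ceph
rbd snap create volumes/<volume-name>@snap1
rbd snap ls volumes/<volume-name>
# the Horizon instance-snapshot action corresponds roughly to:
nova image-create <instance-name> <snapshot-name>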

Regards
Gaurav Goyal

On Wed, Jul 13, 2016 at 9:44 AM, Jason Dillaman  wrote:

> The RAW file will appear to be the exact image size but the filesystem
> will know about the holes in the image and it will be sparsely
> allocated on disk.  For example:
>
> # dd if=/dev/zero of=sparse-file bs=1 count=1 seek=2GiB
> # ll sparse-file
> -rw-rw-r--. 1 jdillaman jdillaman 2147483649 Jul 13 09:20 sparse-file
> # du -sh sparse-file
> 4.0K sparse-file
>
> Now, running qemu-img to copy the image into the backing RBD pool:
>
> # qemu-img convert -f raw -O raw ~/sparse-file rbd:rbd/sparse-file
> # rbd disk-usage sparse-file
> NAMEPROVISIONED USED
> sparse-file   2048M0
>
>
> On Wed, Jul 13, 2016 at 3:31 AM, Fran Barrera 
> wrote:
> > Yes, but is the same problem isn't? The image will be too large because
> the
> > format is raw.
> >
> > Thanks.
> >
> > 2016-07-13 9:24 GMT+02:00 Kees Meijs :
> >>
> >> Hi Fran,
> >>
> >> Fortunately, qemu-img(1) is able to directly utilise RBD (supporting
> >> sparse block devices)!
> >>
> >> Please refer to http://docs.ceph.com/docs/hammer/rbd/qemu-rbd/ for
> >> examples.
> >>
> >> Cheers,
> >> Kees
> >>
> >> On 13-07-16 09:18, Fran Barrera wrote:
> >> > Can you explain how you do this procedure? I have the same problem
> >> > with the large images and snapshots.
> >> >
> >> > This is what I do:
> >> >
> >> > # qemu-img convert -f qcow2 -O raw image.qcow2 image.img
> >> > # openstack image create image.img
> >> >
> >> > But the image.img is too large.
> >>
> >> ___
> >> ceph-users mailing list
> >> ceph-users@lists.ceph.com
> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> >
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
>
>
>
> --
> Jason
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Error with instance snapshot in ceph storage : Image Pending Upload state.

2016-07-27 Thread Gaurav Goyal
Dear Ceph Team,

I am trying to take a snapshot of my instance.

The image was stuck in the Queued state and the instance is stuck in the
Image Pending Upload state.

I had to cancel the job manually because it had made no progress for over an
hour; my instance is still in the Image Pending Upload state.

Is something wrong with my Ceph configuration?
Can I take snapshots with Ceph storage?

Regards
Gaurav Goyal
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Try to install ceph hammer on CentOS7

2016-07-22 Thread Gaurav Goyal
It should be a smooth installation; I recently installed Hammer on CentOS 7.


Regards
Gaurav Goyal

On Fri, Jul 22, 2016 at 7:22 AM, Ruben Kerkhof 
wrote:

> On Thu, Jul 21, 2016 at 7:26 PM, Manuel Lausch 
> wrote:
> > Hi,
>
> Hi,
> >
> > I try to install ceph hammer on centos7 but something with the RPM
> > Repository seems to be wrong.
> >
> > In my yum.repos.d/ceph.repo file I have the following configuration:
> >
> > [ceph]
> > name=Ceph packages for $basearch
> > baseurl=baseurl=http://download.ceph.com/rpm-hammer/el7/$basearch
>
> There's your issue. Remove the second baseurl=
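A corrected stanza would look roughly like this (the enabled/gpgcheck/gpgkey
lines are the usual defaults from the Ceph install docs, not taken from the
original file):

[ceph]
name=Ceph packages for $basearch
baseurl=http://download.ceph.com/rpm-hammer/el7/$basearch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc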
>
> Kind regards,
>
> Ruben
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph : Generic Query : Raw Format of images

2016-07-20 Thread Gaurav Goyal
Dear Ceph User,

I want to ask a very generic question about Ceph.

Ceph uses the raw image format, but almost every vendor provides qcow2
images, and converting those images to raw takes a lot of time.

Is this something everyone else is dealing with, or am I not doing
something right?

Especially since organizations know how Ceph works, why don't they publish
raw images alongside qcow2?


Regards
Gaurav Goyal
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] (no subject)

2016-07-11 Thread Gaurav Goyal
Thanks!

I need to create a VM whose qcow2 image file is 6.7 GB but whose raw image
is 600 GB, which is too big.
Is there a way to avoid converting the qcow2 file to raw while still having
it work well with RBD?
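The conversion itself is hard to avoid because RBD stores raw data, but as a
sketch (pool and image names are placeholders), qemu-img can convert straight
into RBD, so the 600 GB raw file never has to exist on the local filesystem
and unwritten space stays thin-provisioned:

qemu-img convert -f qcow2 -O raw image.qcow2 rbd:images/<image-name>
rbd disk-usage images/<image-name>    # provisioned vs. actually used space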


Regards
Gaurav Goyal

On Mon, Jul 11, 2016 at 11:46 AM, Kees Meijs  wrote:

> Glad to hear it works now! Good luck with your setup.
>
> Regards,
> Kees
>
> On 11-07-16 17:29, Gaurav Goyal wrote:
> > Hello it worked for me after removing the following parameter from
> > /etc/nova/nova.conf file
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] (no subject)

2016-07-11 Thread Gaurav Goyal
Hello, it worked for me after removing the following parameter from the
/etc/nova/nova.conf file:

[root@OSKVM1 ~]# cat /etc/nova/nova.conf|grep hw_disk_discard

#hw_disk_discard=unmap


Though, as per the Ceph documentation, for the Kilo release we must set this
parameter. I am using Liberty, but I am not sure whether this parameter was
removed in Liberty. If that is the case, please update the documentation.


KILO

Enable discard support for virtual machine ephemeral root disk:

[libvirt]

...

hw_disk_discard = unmap # enable discard support (be careful of performance)


Regards

Gaurav Goyal

On Mon, Jul 11, 2016 at 4:38 AM, Kees Meijs  wrote:

> Hi,
>
> I think there's still something misconfigured:
>
> Invalid: 400 Bad Request: Unknown scheme 'file' found in URI (HTTP 400)
>
>
> It seems the RBD backend is not used as expected.
>
> Have you configured both Cinder *and* Glance to use Ceph?
>
> Regards,
> Kees
>
> On 08-07-16 17:33, Gaurav Goyal wrote:
>
>
> I regenerated the UUID as per your suggestion.
> Now i have same UUID in host1 and host2.
> I could create volumes and attach them to existing VMs.
>
> I could create new glance images.
>
> But still finding the same error while instance launch via GUI.
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Design for Ceph Storage integration with openstack

2016-07-11 Thread Gaurav Goyal
Dear Ceph users,

I need your suggestions for my Ceph design.

The situation: I have installed an OpenStack setup (Liberty) for my lab.

I have

Host 1 --> Controller + Compute1
Host 2  --> Compute 2

DELL SAN storage is attached to both hosts as

[root@OSKVM1 ~]# iscsiadm -m node

10.35.0.3:3260,1
iqn.2001-05.com.equallogic:0-1cb196-07a83c107-4770018575af-vol1

10.35.0.8:3260,1
iqn.2001-05.com.equallogic:0-1cb196-07a83c107-4770018575af-vol1

10.35.0.*:3260,-1
iqn.2001-05.com.equallogic:0-1cb196-20d83c107-729002157606-vol2

10.35.0.8:3260,1
iqn.2001-05.com.equallogic:0-1cb196-20d83c107-729002157606-vol2

10.35.0.*:3260,-1
iqn.2001-05.com.equallogic:0-1cb196-f0783c107-70a00245761a-vol3

10.35.0.8:3260,1
iqn.2001-05.com.equallogic:0-1cb196-f0783c107-70a00245761a-vol3

10.35.0.*:3260,-1
iqn.2001-05.com.equallogic:0-1cb196-fda83c107-92700275761a-vol4
10.35.0.8:3260,1
iqn.2001-05.com.equallogic:0-1cb196-fda83c107-92700275761a-vol4

With fdisk -l, these LUNs show up as
sdc, sdd, sde and sdf on host 1, and
sdb, sdc, sdd and sde on host 2.

I need to configure this SAN storage for Ceph.

I am thinking of:
osd0 with sdc on host1
osd1 with sdd on host1

osd2 with sdd on host2
osd3 with sde on host2

So as to have

[root@host1 ~]# ceph osd tree

ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 7.95996 root default
-2 3.97998     host host1
 0 1.98999         osd.0        up      1.0              1.0
 1 1.98999         osd.1        up      1.0              1.0
-3 3.97998     host host2
 2 1.98999         osd.2        up      1.0              1.0
 3 1.98999         osd.3        up      1.0              1.0
Is this OK, or must I change my Ceph design?
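That layout looks reasonable. One thing worth double-checking with only two
hosts (a hedged suggestion; the pool name is a placeholder) is that each
pool's replica count fits a host-level failure domain, since the default
CRUSH rule places replicas on different hosts:

ceph osd pool get <pool> size     # replica count per pool
ceph osd pool set <pool> size 2   # e.g. two copies, one per host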


Regards
Gaurav Goyal
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Drive letters shuffled on reboot

2016-07-10 Thread Gaurav Goyal
Hello,

This is an interesting topic and I would like to know a solution to this
problem. Does that mean we should never use Dell storage as a Ceph storage
device? I have a similar setup, with 4 Dell iSCSI LUNs attached to the
OpenStack controller and compute node in an active-active configuration.

As they were active-active, I selected the first 2 LUNs as OSDs on node 1
and the last 2 as OSDs on node 2.

Is it OK to have this configuration, especially when a node goes down, or
considering live migration?

Regards
Gaurav Goyal
On 10-Jul-2016 9:02 pm, "Christian Balzer"  wrote:

>
> Hello,
>
> On Sun, 10 Jul 2016 12:46:39 + (UTC) William Josefsson wrote:
>
> > Hi everyone,
> >
> > I have problem with swapping drive and partition names on reboot. My
> > Ceph is Hammer on CentOS7, Dell R730 6xSSD (2xSSD OS RAID1 PERC,
> > 4xSSD=Journal drives), 18x1.8T SAS for OSDs.
> >
> > Whenever I reboot, drives randomly seem to change names. This is
> > extremely dangerous and frustrating when I've initially setup CEPH with
> > ceph-deploy, zap, prepare and activate. It has happened that I've
> > accidentally erased wrong disk too when e.g. /dev/sdX had
> > become /dev/sdY.
> >
> This isn't a Ceph specific question per se and you could probably keep
> things from moving around by enforcing module loads in a particular order.
>
> But that of course still wouldn't help if something else changed or a
> drive totally failed.
>
> So in the context of Ceph, it doesn't (shouldn't) care if the OSD (HDD)
> changes names, especially since you did set it up with ceph-deploy.
>
> And to avoid the journals getting jumbled up, do what everybody does
> (outside of Ceph as well), use /dev/disk/by-id or uuid.
>
> Like:
> ---
> # ls -la /var/lib/ceph/osd/ceph-28/
>
>  journal -> /dev/disk/by-id/wwn-0x55cd2e404b73d569-part3
> ---
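For example, a sketch of preparing an OSD with a stable by-id journal path
(the host name, data disk and wwn identifier below are placeholders):

ls -l /dev/disk/by-id/ | grep wwn      # find the stable identifiers
ceph-deploy osd prepare cephnode3:/dev/sde:/dev/disk/by-id/wwn-0x55cd2e404b73d569-part3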
>
> Christian
> > Please see an output below of how this drive swapping below appears SDC
> > is shifted, indexes and drive names got shuffled. Ceph OSDs didn't come
> > up properly.
> >
> > Please advice on how to get this corrected, with no more drive name
> > shuffling. Can this be due to the PERC HW raid? thx will
> >
> >
> >
> > POST REBOOT 2 (expected outcome.. with sda,sdb,sdc,sdd as journal. sdw
> > is a perc raid1)
> >
> >
> > [cephnode3][INFO  ] Running command: sudo /usr/sbin/ceph-disk list
> > [cephnode3][DEBUG ] /dev/sda :
> > [cephnode3][DEBUG ]  /dev/sda1 other,
> > ebd0a0a2-b9e5-4433-87c0-68b6b72699c7 [cephnode3][DEBUG ]  /dev/sda2
> > other, ebd0a0a2-b9e5-4433-87c0-68b6b72699c7 [cephnode3][DEBUG
> > ]  /dev/sda3 other, ebd0a0a2-b9e5-4433-87c0-68b6b72699c7
> > [cephnode3][DEBUG ]  /dev/sda4 other,
> > ebd0a0a2-b9e5-4433-87c0-68b6b72699c7 [cephnode3][DEBUG ]  /dev/sda5
> > other, ebd0a0a2-b9e5-4433-87c0-68b6b72699c7 [cephnode3][DEBUG
> > ] /dev/sdb : [cephnode3][DEBUG ]  /dev/sdb1 other,
> > ebd0a0a2-b9e5-4433-87c0-68b6b72699c7 [cephnode3][DEBUG ]  /dev/sdb2
> > other, ebd0a0a2-b9e5-4433-87c0-68b6b72699c7 [cephnode3][DEBUG
> > ]  /dev/sdb3 other, ebd0a0a2-b9e5-4433-87c0-68b6b72699c7
> > [cephnode3][DEBUG ]  /dev/sdb4 other,
> > ebd0a0a2-b9e5-4433-87c0-68b6b72699c7 [cephnode3][DEBUG ]  /dev/sdb5
> > other, ebd0a0a2-b9e5-4433-87c0-68b6b72699c7 [cephnode3][DEBUG
> > ] /dev/sdc : [cephnode3][DEBUG ]  /dev/sdc1 other,
> > ebd0a0a2-b9e5-4433-87c0-68b6b72699c7 [cephnode3][DEBUG ]  /dev/sdc2
> > other, ebd0a0a2-b9e5-4433-87c0-68b6b72699c7 [cephnode3][DEBUG
> > ]  /dev/sdc3 other, ebd0a0a2-b9e5-4433-87c0-68b6b72699c7
> > [cephnode3][DEBUG ]  /dev/sdc4 other,
> > ebd0a0a2-b9e5-4433-87c0-68b6b72699c7 [cephnode3][DEBUG ]  /dev/sdc5
> > other, ebd0a0a2-b9e5-4433-87c0-68b6b72699c7 [cephnode3][DEBUG
> > ] /dev/sdd : [cephnode3][DEBUG ]  /dev/sdd1 other,
> > ebd0a0a2-b9e5-4433-87c0-68b6b72699c7 [cephnode3][DEBUG ]  /dev/sdd2
> > other, ebd0a0a2-b9e5-4433-87c0-68b6b72699c7 [cephnode3][DEBUG
> > ]  /dev/sdd3 other, ebd0a0a2-b9e5-4433-87c0-68b6b72699c7
> > [cephnode3][DEBUG ]  /dev/sdd4 other,
> > ebd0a0a2-b9e5-4433-87c0-68b6b72699c7 [cephnode3][DEBUG ]  /dev/sdd5
> > other, ebd0a0a2-b9e5-4433-87c0-68b6b72699c7 [cephnode3][DEBUG
> > ] /dev/sde : [cephnode3][DEBUG ]  /dev/sde1 ceph data, active, cluster
> > ceph, osd.0 [cephnode3][DEBUG ] /dev/sdf : [cephnode3][DEBUG
> > ]  /dev/sdf1 ceph data, active, cluster ceph, osd.1 [cephnode3][DEBUG
> > ] /dev/sdg : [cephnode3][DEBUG ]  /dev/sdg1 ceph data, active, cluster
> > ceph, osd.2 [cephnode3][DEBUG ] /dev/sdh : [cephnode3][DEBUG
> > ]  /dev/sdh1 ceph data, active, cluster ceph, osd.3 [cephnode3][DEB

Re: [ceph-users] (no subject)

2016-07-08 Thread Gaurav Goyal
 instance

2016-07-08 16:29:42.267 86007 INFO nova.virt.libvirt.driver [-] [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] During wait destroy, instance
disappeared.

2016-07-08 16:29:42.508 86007 INFO nova.virt.libvirt.driver
[req-b43bbec9-c875-4f4b-ad2c-0d87a02bc7e1 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] Deleting instance files
/var/lib/nova/instances/cb6056a8-1bb9-4475-a702-9a2b0a7dca01_del

2016-07-08 16:29:42.509 86007 INFO nova.virt.libvirt.driver
[req-b43bbec9-c875-4f4b-ad2c-0d87a02bc7e1 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] Deletion of
/var/lib/nova/instances/cb6056a8-1bb9-4475-a702-9a2b0a7dca01_del complete

2016-07-08 16:30:16.167 86007 INFO nova.compute.resource_tracker
[req-4b7eccc8-0bf5-4f55-a941-4c93e97ef5df - - - - -] Auditing locally
available compute resources for node controller

2016-07-08 16:30:16.654 86007 INFO nova.compute.resource_tracker
[req-4b7eccc8-0bf5-4f55-a941-4c93e97ef5df - - - - -] Total usable vcpus:
40, total allocated vcpus: 0

2016-07-08 16:30:16.655 86007 INFO nova.compute.resource_tracker
[req-4b7eccc8-0bf5-4f55-a941-4c93e97ef5df - - - - -] Final resource view:
name=controller phys_ram=193168MB used_ram=1024MB phys_disk=8168GB
used_disk=1GB total_vcpus=40 used_vcpus=0 pci_stats=None

2016-07-08 16:30:16.692 86007 INFO nova.compute.resource_tracker
[req-4b7eccc8-0bf5-4f55-a941-4c93e97ef5df - - - - -] Compute_service record
updated for OSKVM1:controller

[root@OSKVM1 nova]# rpm -qa|grep libvirt

*libvirt*-client-1.2.17-13.el7_2.5.x86_64

*libvirt*-daemon-config-network-1.2.17-13.el7_2.5.x86_64

*libvirt*-daemon-driver-network-1.2.17-13.el7_2.5.x86_64

*libvirt*-daemon-driver-storage-1.2.17-13.el7_2.5.x86_64

*libvirt*-1.2.17-13.el7_2.5.x86_64

*libvirt*-gobject-0.1.9-1.el7.x86_64

*libvirt*-daemon-driver-interface-1.2.17-13.el7_2.5.x86_64

*libvirt*-daemon-driver-qemu-1.2.17-13.el7_2.5.x86_64

*libvirt*-daemon-config-nwfilter-1.2.17-13.el7_2.5.x86_64

*libvirt*-python-1.2.17-2.el7.x86_64

*libvirt*-daemon-1.2.17-13.el7_2.5.x86_64

*libvirt*-daemon-driver-nodedev-1.2.17-13.el7_2.5.x86_64

*libvirt*-daemon-driver-lxc-1.2.17-13.el7_2.5.x86_64

*libvirt*-daemon-driver-nwfilter-1.2.17-13.el7_2.5.x86_64

*libvirt*-daemon-kvm-1.2.17-13.el7_2.5.x86_64

*libvirt*-glib-0.1.9-1.el7.x86_64

*libvirt*-daemon-driver-secret-1.2.17-13.el7_2.5.x86_64

*libvirt*-gconfig-0.1.9-1.el7.x86_64

[root@OSKVM1 nova]#

[root@OSKVM1 nova]#

[root@OSKVM1 nova]#

[root@OSKVM1 nova]# rpm -qa|grep qemu

ipxe-roms-*qemu*-20160127-1.git6366fa7a.el7.noarch

*qemu*-kvm-1.5.3-105.el7_2.4.x86_64

libvirt-daemon-driver-*qemu*-1.2.17-13.el7_2.5.x86_64

*qemu*-img-1.5.3-105.el7_2.4.x86_64

*qemu*-kvm-common-1.5.3-105.el7_2.4.x86_64

[root@OSKVM1 ~]# glance image-create --name "Fedora" --file
/root/Fedora-Cloud-Base-24-1.2.x86_64.raw --disk-format raw
--container-format bare --visibility public --progress

[=>] 100%

+------------------+---------------------------------------------------------------------------------------------+
| Property         | Value                                                                                       |
+------------------+---------------------------------------------------------------------------------------------+
| checksum         | dc5e336d5ba97e092a5fdfd999501c25                                                            |
| container_format | bare                                                                                        |
| created_at       | 2016-07-08T20:19:22Z                                                                        |
| direct_url       | rbd://9f923089-a6c0-4169-ace8-ad8cc4cca116/images/ef181b81-64ec-47e5-9850-10cd40698284/snap |
| disk_format      | raw                                                                                         |
| id               | ef181b81-64ec-47e5-9850-10cd40698284                                                        |
| min_disk         | 0                                                                                           |
| min_ram          | 0                                                                                           |
| name             | Fedora                                                                                      |
| owner            | 7ee74061ac7c421a824a6e04e445d3d0                                                            |
| protected        | False                                                                                       |
| size             | 3221225472                                                                                  |
| status           | active                                                                                      |
| tags             | []                                                                                          |
| updated_at       | 2016-07-08T20:27:10Z                                                                        |
+------------------+---------------------------------------------------------------------------------------------+

On Fri, Jul 8, 2016 at 12:25 PM, Gaurav Goyal 
wrote:

> [root@OSKVM1 ~]# grep -v "^#" /etc/nova/nova.conf|grep -v ^$
>
> [DEFAULT]
>
> instance_usage_audit = True
>
> instance_usage_audit_period = hour
>
> notify_on_state_change = vm_and_task_state
>
> notification_driver = messagingv2
>
> rbd_user=cinder
>
> rbd_secret_uuid=1989f7a6-4ecb-4738-abbf-2962c29b2bbb
>
> rpc_backend = rabbit
>
> auth_strategy = keystone
>
> my_ip = 10.1.0.4
>
> network_api_class = nova.network.neutronv2.api.API
>
> security_gr

Re: [ceph-users] (no subject)

2016-07-08 Thread Gaurav Goyal
[root@OSKVM1 ~]# grep -v "^#" /etc/nova/nova.conf|grep -v ^$

[DEFAULT]

instance_usage_audit = True

instance_usage_audit_period = hour

notify_on_state_change = vm_and_task_state

notification_driver = messagingv2

rbd_user=cinder

rbd_secret_uuid=1989f7a6-4ecb-4738-abbf-2962c29b2bbb

rpc_backend = rabbit

auth_strategy = keystone

my_ip = 10.1.0.4

network_api_class = nova.network.neutronv2.api.API

security_group_api = neutron

linuxnet_interface_driver =
nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver

firewall_driver = nova.virt.firewall.NoopFirewallDriver

enabled_apis=osapi_compute,metadata

[api_database]

connection = mysql://nova:nova@controller/nova

[barbican]

[cells]

[cinder]

os_region_name = RegionOne

[conductor]

[cors]

[cors.subdomain]

[database]

[ephemeral_storage_encryption]

[glance]

host = controller

[guestfs]

[hyperv]

[image_file_url]

[ironic]

[keymgr]

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = nova

password = nova

[libvirt]

inject_password=false

inject_key=false

inject_partition=-2

live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER,
VIR_MIGRATE_LIVE, VIR_MIGRATE_PERSIST_DEST, VIR_MIGRATE_TUNNELLED

disk_cachemodes ="network=writeback"

images_type=rbd

images_rbd_pool=vms

images_rbd_ceph_conf =/etc/ceph/ceph.conf

rbd_user=cinder

rbd_secret_uuid=1989f7a6-4ecb-4738-abbf-2962c29b2bbb

hw_disk_discard=unmap

[matchmaker_redis]

[matchmaker_ring]

[metrics]

[neutron]

url = http://controller:9696

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

region_name = RegionOne

project_name = service

username = neutron

password = neutron

service_metadata_proxy = True

metadata_proxy_shared_secret = X

[osapi_v21]

[oslo_concurrency]

lock_path = /var/lib/nova/tmp

[oslo_messaging_amqp]

[oslo_messaging_qpid]

[oslo_messaging_rabbit]

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = X

[oslo_middleware]

[rdp]

[serial_console]

[spice]

[ssl]

[trusted_computing]

[upgrade_levels]

[vmware]

[vnc]

enabled = True

vncserver_listen = 0.0.0.0

novncproxy_base_url = http://controller:6080/vnc_auto.html

vncserver_proxyclient_address = $my_ip

[workarounds]

[xenserver]

[zookeeper]


[root@OSKVM1 ceph]# ls -ltr

total 24

-rwxr-xr-x 1 root   root92 May 10 12:58 rbdmap

-rw--- 1 root   root 0 Jun 28 11:05 tmpfDt6jw

-rw-r--r-- 1 root   root63 Jul  5 12:59 ceph.client.admin.keyring

-rw-r--r-- 1 glance glance  64 Jul  5 14:51 ceph.client.glance.keyring

-rw-r--r-- 1 cinder cinder  64 Jul  5 14:53 ceph.client.cinder.keyring

-rw-r--r-- 1 cinder cinder  71 Jul  5 14:54
ceph.client.cinder-backup.keyring

-rwxrwxrwx 1 root   root   438 Jul  7 14:19 ceph.conf

[root@OSKVM1 ceph]# more ceph.client.cinder.keyring

[client.cinder]

key = AQCIAHxX9ga8LxAAU+S3Vybdu+Cm2bP3lplGnA==

[root@OSKVM1 ~]# rados lspools

rbd

volumes

images

backups

vms

[root@OSKVM1 ~]# rbd -p rbd ls

[root@OSKVM1 ~]# rbd -p volumes ls

volume-27717a88-3c80-420f-8887-4ca5c5b94023

volume-3bd22868-cb2a-4881-b9fb-ae91a6f79cb9

volume-b9cf7b94-cfb6-4b55-816c-10c442b23519

[root@OSKVM1 ~]# rbd -p images ls

9aee6c4e-3b60-49d5-8e17-33953e384a00

a8b45c8a-a5c8-49d8-a529-1e4088bdbf3f

[root@OSKVM1 ~]# rbd -p vms ls

[root@OSKVM1 ~]# rbd -p backup


*I could create cinder and  attach it to one of already built nova
instance.*

[root@OSKVM1 ceph]# nova volume-list

WARNING: Command volume-list is deprecated and will be removed after Nova
13.0.0 is released. Use python-cinderclient or openstackclient instead.

+--------------------------------------+-----------+------------------+------+-------------+--------------------------------------+
| ID                                   | Status    | Display Name     | Size | Volume Type | Attached to                          |
+--------------------------------------+-----------+------------------+------+-------------+--------------------------------------+
| 14a572d0-2834-40d6-9650-cb3e18271963 | available | nova-vol_gg      | 10   | -           |                                      |
| 3bd22868-cb2a-4881-b9fb-ae91a6f79cb9 | in-use    | nova-vol_1       | 2    | -           | d06f7c3b-5bbd-4597-99ce-fa981d2e10db |
| 27717a88-3c80-420f-8887-4ca5c5b94023 | available | cinder-ceph-vol1 | 10   | -           |                                      |
+--------------------------------------+-----------+------------------+------+-------------+--------------------------------------+

On Fri, Jul 8, 2016 at 11:33 AM, Gaurav Goyal 
wrote:

> Hi Kees,
>
> I regenerated the UUID as per your suggestion.
> Now i have same UUID in host1 and host2.
> I could create volumes and attach them to existing VMs.
>
> I could create new glance images.
&

Re: [ceph-users] (no subject)

2016-07-08 Thread Gaurav Goyal
ova.compute.manager [instance:
bf4839c8-2af6-4959-9158-fe411e1cfae7]   File
"/usr/lib/python2.7/site-packages/nova/image/glance.py", line 365, in
download

2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
bf4839c8-2af6-4959-9158-fe411e1cfae7] image_chunks =
self._client.call(context, 1, 'data', image_id)

2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
bf4839c8-2af6-4959-9158-fe411e1cfae7]   File
"/usr/lib/python2.7/site-packages/nova/image/glance.py", line 231, in call

2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
bf4839c8-2af6-4959-9158-fe411e1cfae7] result = getattr(client.images,
method)(*args, **kwargs)

2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
bf4839c8-2af6-4959-9158-fe411e1cfae7]   File
"/usr/lib/python2.7/site-packages/glanceclient/v1/images.py", line 148, in
data

2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
bf4839c8-2af6-4959-9158-fe411e1cfae7] % urlparse.quote(str(image_id)))

2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
bf4839c8-2af6-4959-9158-fe411e1cfae7]   File
"/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 280,
in get

2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
bf4839c8-2af6-4959-9158-fe411e1cfae7] return self._request('GET', url,
**kwargs)

2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
bf4839c8-2af6-4959-9158-fe411e1cfae7]   File
"/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 272,
in _request

2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
bf4839c8-2af6-4959-9158-fe411e1cfae7] resp, body_iter =
self._handle_response(resp)

2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
bf4839c8-2af6-4959-9158-fe411e1cfae7]   File
"/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 93, in
_handle_response

2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
bf4839c8-2af6-4959-9158-fe411e1cfae7] raise exc.from_response(resp,
resp.content)

2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
bf4839c8-2af6-4959-9158-fe411e1cfae7] Invalid: 400 Bad Request: Unknown
scheme 'file' found in URI (HTTP 400)

2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
bf4839c8-2af6-4959-9158-fe411e1cfae7]

2016-07-08 11:25:19.575 86007 INFO nova.compute.manager
[req-3173f5b7-fa02-420c-954b-e21c3ce8d183 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
bf4839c8-2af6-4959-9158-fe411e1cfae7] Terminating instance

2016-07-08 11:25:19.583 86007 INFO nova.virt.libvirt.driver [-] [instance:
bf4839c8-2af6-4959-9158-fe411e1cfae7] During wait destroy, instance
disappeared.

2016-07-08 11:25:19.665 86007 INFO nova.virt.libvirt.driver
[req-3173f5b7-fa02-420c-954b-e21c3ce8d183 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
bf4839c8-2af6-4959-9158-fe411e1cfae7] Deleting instance files
/var/lib/nova/instances/bf4839c8-2af6-4959-9158-fe411e1cfae7_del

2016-07-08 11:25:19.666 86007 INFO nova.virt.libvirt.driver
[req-3173f5b7-fa02-420c-954b-e21c3ce8d183 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
bf4839c8-2af6-4959-9158-fe411e1cfae7] Deletion of
/var/lib/nova/instances/bf4839c8-2af6-4959-9158-fe411e1cfae7_del complete

2016-07-08 11:25:26.073 86007 INFO nova.compute.resource_tracker
[req-4b7eccc8-0bf5-4f55-a941-4c93e97ef5df - - - - -] Auditing locally
available compute resources for node controller

2016-07-08 11:25:26.477 86007 INFO nova.compute.resource_tracker
[req-4b7eccc8-0bf5-4f55-a941-4c93e97ef5df - - - - -] Total usable vcpus:
40, total allocated vcpus: 0

2016-07-08 11:25:26.478 86007 INFO nova.compute.resource_tracker
[req-4b7eccc8-0bf5-4f55-a941-4c93e97ef5df - - - - -] Final resource view:
name=controller phys_ram=193168MB used_ram=1024MB phys_disk=8168GB
used_disk=1GB total_vcpus=40 used_vcpus=0 pci_stats=None


Regards
Gaurav Goyal

On Fri, Jul 8, 2016 at 10:17 AM, Kees Meijs  wrote:

> Hi,
>
> I'd recommend generating an UUID and use it for all your compute nodes.
> This way, you can keep your configuration in libvirt constant.
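In other words, generate the UUID once and reuse the same value everywhere it
is referenced (a sketch; the UUID shown is just an example):

uuidgen        # run once, e.g. 457eb676-33da-42ec-9a8c-9293d545c337
# reuse that same value on every compute node:
#   - in secret.xml for virsh secret-define / secret-set-value
#   - as rbd_secret_uuid in cinder.conf and in nova.conf [libvirt]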
>
> Regards,
> Kees
>
> On 08-07-16 16:15, Gaurav Goyal wrote:
> >
> > For below section, should i generate separate UUID for both compte hosts?
> >
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] (no subject)

2016-07-08 Thread Gaurav Goyal
Hi Kees,

Thanks for your help!

Node 1 controller + compute

-rw-r--r-- 1 root   root63 Jul  5 12:59 ceph.client.admin.keyring

-rw-r--r-- 1 glance glance  64 Jul  5 14:51 ceph.client.glance.keyring

-rw-r--r-- 1 cinder cinder  64 Jul  5 14:53 ceph.client.cinder.keyring

-rw-r--r-- 1 cinder cinder  71 Jul  5 14:54
ceph.client.cinder-backup.keyring

Node 2 compute2

-rw-r--r--  1 root root  63 Jul  5 12:59 ceph.client.admin.keyring

-rw-r--r--  1 root root  64 Jul  5 14:57 ceph.client.cinder.keyring

[root@OSKVM2 ceph]# chown cinder:cinder ceph.client.cinder.keyring

chown: invalid user: ‘cinder:cinder’


For the section below, should I generate a separate UUID for each compute host?

I executed uuidgen on host 1 and put the same value on the second one. I need
your help to get rid of this problem.

Then, on the compute nodes, add the secret key to libvirt and remove the
temporary copy of the key:

uuidgen
457eb676-33da-42ec-9a8c-9293d545c337

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
sudo virsh secret-define --file secret.xml
Secret 457eb676-33da-42ec-9a8c-9293d545c337 created
sudo virsh secret-set-value --secret
457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key)
&& rm client.cinder.key secret.xml

Moreover, I do not find the libvirtd group.

[root@OSKVM1 ceph]# chown qemu:libvirtd /var/run/ceph/guests/

chown: invalid group: ‘qemu:libvirtd’


Regards

Gaurav Goyal

On Fri, Jul 8, 2016 at 9:40 AM, Kees Meijs  wrote:

> Hi Gaurav,
>
> Have you distributed your Ceph authentication keys to your compute
> nodes? And, do they have the correct permissions in terms of Ceph?
>
> K.
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] (no subject)

2016-07-08 Thread Gaurav Goyal
2.573 31909 ERROR nova.compute.manager [instance:
ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88] resp, body_iter =
self._handle_response(resp)

2016-07-08 09:28:32.573 31909 ERROR nova.compute.manager [instance:
ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88]   File
"/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 93, in
_handle_response

2016-07-08 09:28:32.573 31909 ERROR nova.compute.manager [instance:
ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88] raise exc.from_response(resp,
resp.content)

2016-07-08 09:28:32.573 31909 ERROR nova.compute.manager [instance:
ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88] Invalid: 400 Bad Request: Unknown
scheme 'file' found in URI (HTTP 400)

2016-07-08 09:28:32.573 31909 ERROR nova.compute.manager [instance:
ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88]

2016-07-08 09:28:32.574 31909 INFO nova.compute.manager
[req-c56770a7-5bab-426b-b763-7473254c6410 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88] Terminating instance

2016-07-08 09:28:32.579 31909 INFO nova.virt.libvirt.driver [-] [instance:
ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88] During wait destroy, instance
disappeared.

2016-07-08 09:28:32.646 31909 INFO nova.virt.libvirt.driver
[req-c56770a7-5bab-426b-b763-7473254c6410 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88] Deleting instance files
/var/lib/nova/instances/ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88_del

2016-07-08 09:28:32.647 31909 INFO nova.virt.libvirt.driver
[req-c56770a7-5bab-426b-b763-7473254c6410 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88] Deletion of
/var/lib/nova/instances/ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88_del complete

2016-07-08 09:29:15.793 31909 INFO nova.compute.resource_tracker
[req-05a653d9-d629-497c-a4cd-d240c3e6c225 - - - - -] Auditing locally
available compute resources for node controller

2016-07-08 09:29:16.295 31909 INFO nova.compute.resource_tracker
[req-05a653d9-d629-497c-a4cd-d240c3e6c225 - - - - -] Total usable vcpus:
40, total allocated vcpus: 0

2016-07-08 09:29:16.296 31909 INFO nova.compute.resource_tracker
[req-05a653d9-d629-497c-a4cd-d240c3e6c225 - - - - -] Final resource view:
name=controller phys_ram=193168MB used_ram=1024MB phys_disk=8168GB
used_disk=1GB total_vcpus=40 used_vcpus=0 pci_stats=None

2016-07-08 09:29:16.337 31909 INFO nova.compute.resource_tracker
[req-05a653d9-d629-497c-a4cd-d240c3e6c225 - - - - -] Compute_service record
updated for OSKVM1:controller



[root@OSKVM1 nova]# openstack image list

+--+---+

| ID   | Name  |

+--+---+

| a8b45c8a-a5c8-49d8-a529-1e4088bdbf3f | SevOne20k |

| 5ab4e28c-f92c-428c-a520-61d8a937a70f | SevOner2q |

| b7757ffd-0965-4c96-86cb-dbf3094a2e66 | Sevone vmdk image |

| 025f8a67-6796-439f-92a7-87f35c57cd66 | SevOne Compressed |

| 8b0f5432-74e6-48c2-bd20-0db80fb3c07b | SevOne|

| 5c089b72-87c1-4181-a567-a71f59f83c99 | cirros|

+--+---+

[root@OSKVM1 nova]# glance image-show a8b45c8a-a5c8-49d8-a529-1e4088bdbf3f

+--+--+

| Property | Value|

+--+--+

| checksum | 21995fe7ab30f9b12eb6bea182304522 |

| container_format | bare |

| created_at   | 2016-07-06T17:26:31Z |

| disk_format  | qcow2|

| id   | a8b45c8a-a5c8-49d8-a529-1e4088bdbf3f |

| min_disk | 0|

| min_ram  | 0|

| name | SevOne20k|

| owner| 7ee74061ac7c421a824a6e04e445d3d0 |

| protected| False|

| size | 7181697024   |

| status   | active   |

| tags | []   |

| updated_at   | 2016-07-06T17:44:13Z |

| virtual_size | None |

| visibility   | public   |

+--+--+


Regards

Gaurav Goyal


On Fri, Jul 8, 2016 at 3:15 AM, Fran Barrera  wrote:

> Hello,
>
> You only need a create a pool and authentication in Ceph for cinder.
>
> Your configuration should be like this (This is an example configuration
> with Ceph Jewel and Openstack Mitaka):
>
>
> [DEFAULT]
> enabled_backends = ceph
> [ceph]
> volume_driver = cinder.volume.drivers.rbd.RBDDriver

Re: [ceph-users] (no subject)

2016-07-07 Thread Gaurav Goyal
Thanks for the verification!

Yes, I did not find an additional [ceph] section in my cinder.conf file.
Should I create it manually?
Since I did not find a [ceph] section, I set the same parameters in the
[DEFAULT] section instead. I will change that as per your suggestion.

Moreover, while checking some other links I learned that I must configure the
following additional parameters. Should I do that and install the tgtadm
package?

rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes

Do I need to execute the following commands?

"pvcreate /dev/rbd1" &
"vgcreate cinder-volumes /dev/rbd1"


Regards

Gaurav Goyal



On Thu, Jul 7, 2016 at 10:02 PM, Jason Dillaman  wrote:

> These lines from your log output indicates you are configured to use LVM
> as a cinder backend.
>
> > 2016-07-07 16:20:31.966 32549 INFO cinder.volume.manager
> [req-f9371a24-bb2b-42fb-ad4e-e2cfc271fe10 - - - - -] Starting volume
> driver LVMVolumeDriver (3.0.0)
> > 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager Command: sudo
> cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C vgs --noheadings -o
> name cinder-volumes
>
> Looking at your provided configuration, I don't see a "[ceph]"
> configuration section. Here is a configuration example [1] for Cinder.
>
> [1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/#configuring-cinder
>
> On Thu, Jul 7, 2016 at 9:35 PM, Gaurav Goyal 
> wrote:
>
>> Hi Kees/Fran,
>>
>>
>> Do you find any issue in my cinder.conf file?
>>
>> it says Volume group "cinder-volumes" not found. When to configure this
>> volume group?
>>
>> I have done ceph configuration for nova creation.
>> But i am still facing the same error .
>>
>>
>>
>> */var/log/cinder/volume.log*
>>
>> 2016-07-07 16:20:13.765 136259 ERROR cinder.service [-] Manager for
>> service cinder-volume OSKVM1@ceph is reporting problems, not sending
>> heartbeat. Service will appear "down".
>>
>> 2016-07-07 16:20:23.770 136259 ERROR cinder.service [-] Manager for
>> service cinder-volume OSKVM1@ceph is reporting problems, not sending
>> heartbeat. Service will appear "down".
>>
>> 2016-07-07 16:20:30.789 136259 WARNING oslo_messaging.server [-]
>> start/stop/wait must be called in the same thread
>>
>> 2016-07-07 16:20:30.791 136259 WARNING oslo_messaging.server
>> [req-f62eb1bb-6883-457f-9f63-b5556342eca7 - - - - -] start/stop/wait must
>> be called in the same thread
>>
>> 2016-07-07 16:20:30.794 136247 INFO oslo_service.service
>> [req-f62eb1bb-6883-457f-9f63-b5556342eca7 - - - - -] Caught SIGTERM,
>> stopping children
>>
>> 2016-07-07 16:20:30.799 136247 INFO oslo_service.service
>> [req-f62eb1bb-6883-457f-9f63-b5556342eca7 - - - - -] Waiting on 1 children
>> to exit
>>
>> 2016-07-07 16:20:30.806 136247 INFO oslo_service.service
>> [req-f62eb1bb-6883-457f-9f63-b5556342eca7 - - - - -] Child 136259 killed by
>> signal 15
>>
>> 2016-07-07 16:20:31.950 32537 INFO cinder.volume.manager
>> [req-cef7baaa-b0ef-4365-89d9-4379eb1c104c - - - - -] Determined volume DB
>> was not empty at startup.
>>
>> 2016-07-07 16:20:31.956 32537 INFO cinder.volume.manager
>> [req-cef7baaa-b0ef-4365-89d9-4379eb1c104c - - - - -] Image-volume cache
>> disabled for host OSKVM1@ceph.
>>
>> 2016-07-07 16:20:31.957 32537 INFO oslo_service.service
>> [req-cef7baaa-b0ef-4365-89d9-4379eb1c104c - - - - -] Starting 1 workers
>>
>> 2016-07-07 16:20:31.960 32537 INFO oslo_service.service
>> [req-cef7baaa-b0ef-4365-89d9-4379eb1c104c - - - - -] Started child 32549
>>
>> 2016-07-07 16:20:31.963 32549 INFO cinder.service [-] Starting
>> cinder-volume node (version 7.0.1)
>>
>> 2016-07-07 16:20:31.966 32549 INFO cinder.volume.manager
>> [req-f9371a24-bb2b-42fb-ad4e-e2cfc271fe10 - - - - -] Starting volume driver
>> LVMVolumeDriver (3.0.0)
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager
>> [req-f9371a24-bb2b-42fb-ad4e-e2cfc271fe10 - - - - -] Failed to initialize
>> driver.
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager Traceback (most
>> recent call last):
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager   File
>> "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 368, in
>> init_host
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager
>> self.driver.check_for_setup_error()
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cin

Re: [ceph-users] (no subject)

2016-07-07 Thread Gaurav Goyal
09 ERROR nova.compute.manager [instance:
39c047a0-4554-4160-a3fe-0943b3eed4a7] six.reraise(new_exc, None,
exc_trace)

2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
39c047a0-4554-4160-a3fe-0943b3eed4a7]   File
"/usr/lib/python2.7/site-packages/nova/image/glance.py", line 365, in
download

2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
39c047a0-4554-4160-a3fe-0943b3eed4a7] image_chunks =
self._client.call(context, 1, 'data', image_id)

2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
39c047a0-4554-4160-a3fe-0943b3eed4a7]   File
"/usr/lib/python2.7/site-packages/nova/image/glance.py", line 231, in call

2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
39c047a0-4554-4160-a3fe-0943b3eed4a7] result = getattr(client.images,
method)(*args, **kwargs)

2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
39c047a0-4554-4160-a3fe-0943b3eed4a7]   File
"/usr/lib/python2.7/site-packages/glanceclient/v1/images.py", line 148, in
data

2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
39c047a0-4554-4160-a3fe-0943b3eed4a7] % urlparse.quote(str(image_id)))

2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
39c047a0-4554-4160-a3fe-0943b3eed4a7]   File
"/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 280,
in get

2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
39c047a0-4554-4160-a3fe-0943b3eed4a7] return self._request('GET', url,
**kwargs)

2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
39c047a0-4554-4160-a3fe-0943b3eed4a7]   File
"/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 272,
in _request

2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
39c047a0-4554-4160-a3fe-0943b3eed4a7] resp, body_iter =
self._handle_response(resp)

2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
39c047a0-4554-4160-a3fe-0943b3eed4a7]   File
"/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 93, in
_handle_response

2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
39c047a0-4554-4160-a3fe-0943b3eed4a7] raise exc.from_response(resp,
resp.content)

2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
39c047a0-4554-4160-a3fe-0943b3eed4a7] Invalid: 400 Bad Request: Unknown
scheme 'file' found in URI (HTTP 400)

2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
39c047a0-4554-4160-a3fe-0943b3eed4a7]

2016-07-07 16:22:19.041 31909 INFO nova.compute.manager
[req-94c123a2-b768-4e9e-a98e-e7bc10c3e592 db68bdf363ea4358a3d3c22bcfe18d13
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
39c047a0-4554-4160-a3fe-0943b3eed4a7] Terminating instance

2016-07-07 16:22:19.050 31909 INFO nova.virt.libvirt.driver [-] [instance:
39c047a0-4554-4160-a3fe-0943b3eed4a7] During wait destroy, instance
disappeared.

2016-07-07 16:22:19.120 31909 INFO nova.virt.libvirt.driver
[req-94c123a2-b768-4e9e-a98e-e7bc10c3e592 db68bdf363ea4358a3d3c22bcfe18d13
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
39c047a0-4554-4160-a3fe-0943b3eed4a7] Deleting instance files
/var/lib/nova/instances/39c047a0-4554-4160-a3fe-0943b3eed4a7_del

2016-07-07 16:22:19.121 31909 INFO nova.virt.libvirt.driver
[req-94c123a2-b768-4e9e-a98e-e7bc10c3e592 db68bdf363ea4358a3d3c22bcfe18d13
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
39c047a0-4554-4160-a3fe-0943b3eed4a7] Deletion of
/var/lib/nova/instances/39c047a0-4554-4160-a3fe-0943b3eed4a7_del complete

2016-07-07 16:22:53.779 31909 INFO nova.compute.resource_tracker
[req-05a653d9-d629-497c-a4cd-d240c3e6c225 - - - - -] Auditing locally
available compute resources for node controller

2016-07-07 16:22:53.868 31909 WARNING nova.virt.libvirt.driver
[req-05a653d9-d629-497c-a4cd-d240c3e6c225 - - - - -] Periodic task is
updating the host stat, it is trying to get disk instance-0006, but
disk file was removed by concurrent operations such as resize.

2016-07-07 16:22:54.236 31909 INFO nova.compute.resource_tracker
[req-05a653d9-d629-497c-a4cd-d240c3e6c225 - - - - -] Total usable vcpus:
40, total allocated vcpus: 1

2016-07-07 16:22:54.237 31909 INFO nova.compute.resource_tracker
[req-05a653d9-d629-497c-a4cd-d240c3e6c225 - - - - -] Final resource view:
name=controller phys_ram=193168MB used_ram=1024MB phys_disk=8168GB
used_disk=1GB total_vcpus=40 used_vcpus=1 pci_stats=None



Regards

Gaurav Goyal


On Thu, Jul 7, 2016 at 12:06 PM, Gaurav Goyal 
wrote:

> Hi Fran,
>
> Here is my cinder.conf file. Please help to analyze it.
>
> Do i need to create volume group as mentioned in this link
>
> http://docs.openstack.org/liberty/install-guide-rdo/cinder-storage-install.html
>
>
> [root@OSKVM1 ~]# grep -v "^#" /etc/cinder/cinder.conf|grep -v ^$
>
> [DEFAULT]
>
> rpc_backend = rabbi

Re: [ceph-users] (no subject)

2016-07-07 Thread Gaurav Goyal
Hi Fran,

Here is my cinder.conf file. Please help to analyze it.

Do I need to create a volume group, as mentioned in this link:
http://docs.openstack.org/liberty/install-guide-rdo/cinder-storage-install.html


[root@OSKVM1 ~]# grep -v "^#" /etc/cinder/cinder.conf|grep -v ^$

[DEFAULT]

rpc_backend = rabbit

auth_strategy = keystone

my_ip = 10.24.0.4

notification_driver = messagingv2

backup_ceph_conf = /etc/ceph/ceph.conf

backup_ceph_user = cinder-backup

backup_ceph_chunk_size = 134217728

backup_ceph_pool = backups

backup_ceph_stripe_unit = 0

backup_ceph_stripe_count = 0

restore_discard_excess_bytes = true

backup_driver = cinder.backup.drivers.ceph

glance_api_version = 2

enabled_backends = ceph

rbd_pool = volumes

rbd_user = cinder

rbd_ceph_conf = /etc/ceph/ceph.conf

rbd_flatten_volume_from_snapshot = false

rbd_secret_uuid = a536c85f-d660-4c25-a840-e321c09e7941

rbd_max_clone_depth = 5

rbd_store_chunk_size = 4

rados_connect_timeout = -1

volume_driver = cinder.volume.drivers.rbd.RBDDriver

[BRCD_FABRIC_EXAMPLE]

[CISCO_FABRIC_EXAMPLE]

[cors]

[cors.subdomain]

[database]

connection = mysql://cinder:cinder@controller/cinder

[fc-zone-manager]

[keymgr]

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = cinder

password = cinder

[matchmaker_redis]

[matchmaker_ring]

[oslo_concurrency]

lock_path = /var/lib/cinder/tmp

[oslo_messaging_amqp]

[oslo_messaging_qpid]

[oslo_messaging_rabbit]

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = 

[oslo_middleware]

[oslo_policy]

[oslo_reports]

[profiler]

On Thu, Jul 7, 2016 at 11:38 AM, Fran Barrera 
wrote:

> Hello,
>
> Have you configured these two parameters in cinder.conf?
>
> rbd_user
> rbd_secret_uuid
>
> Regards.
>
> 2016-07-07 15:39 GMT+02:00 Gaurav Goyal :
>
>> Hello Mr. Kees,
>>
>> Thanks for your response!
>>
>> My setup is
>>
>> Openstack Node 1 -> controller + network + compute1 (Liberty Version)
>> Openstack node 2 --> Compute2
>>
>> Ceph version Hammer
>>
>> I am using dell storage with following status
>>
>> DELL SAN storage is attached to both hosts as
>>
>> [root@OSKVM1 ~]# iscsiadm -m node
>>
>> 10.35.0.3:3260,1
>> iqn.2001-05.com.equallogic:0-1cb196-07a83c107-4770018575af-vol1
>>
>> 10.35.0.8:3260,1
>> iqn.2001-05.com.equallogic:0-1cb196-07a83c107-4770018575af-vol1
>>
>> 10.35.0.*:3260,-1
>> iqn.2001-05.com.equallogic:0-1cb196-20d83c107-729002157606-vol2
>>
>> 10.35.0.8:3260,1
>> iqn.2001-05.com.equallogic:0-1cb196-20d83c107-729002157606-vol2
>>
>> 10.35.0.*:3260,-1
>> iqn.2001-05.com.equallogic:0-1cb196-f0783c107-70a00245761a-vol3
>>
>> 10.35.0.8:3260,1
>> iqn.2001-05.com.equallogic:0-1cb196-f0783c107-70a00245761a-vol3
>>
>> 10.35.0.*:3260,-1
>> iqn.2001-05.com.equallogic:0-1cb196-fda83c107-92700275761a-vol4
>> 10.35.0.8:3260,1
>> iqn.2001-05.com.equallogic:0-1cb196-fda83c107-92700275761a-vol4
>>
>>
>> Since in my setup same LUNs are MAPPED to both hosts
>>
>> i choose 2 LUNS on Openstack Node 1 and 2 on Openstack Node 2
>>
>>
>> *Node1 has *
>>
>> /dev/sdc12.0T  3.1G  2.0T   1% /var/lib/ceph/osd/ceph-0
>>
>> /dev/sdd12.0T  3.8G  2.0T   1% /var/lib/ceph/osd/ceph-1
>>
>> *Node 2 has *
>>
>> /dev/sdd12.0T  3.4G  2.0T   1% /var/lib/ceph/osd/ceph-2
>>
>> /dev/sde12.0T  3.5G  2.0T   1% /var/lib/ceph/osd/ceph-3
>>
>> [root@OSKVM1 ~]# ceph status
>>
>> cluster 9f923089-a6c0-4169-ace8-ad8cc4cca116
>>
>>  health HEALTH_WARN
>>
>> mon.OSKVM1 low disk space
>>
>>  monmap e1: 1 mons at {OSKVM1=10.24.0.4:6789/0}
>>
>> election epoch 1, quorum 0 OSKVM1
>>
>>  osdmap e40: 4 osds: 4 up, 4 in
>>
>>   pgmap v1154: 576 pgs, 5 pools, 6849 MB data, 860 objects
>>
>> 13857 MB used, 8154 GB / 8168 GB avail
>>
>>  576 active+clean
>>
>> *Can you please help me to know if it is correct configuration as per my
>> setup?*
>>
>> After this setup, i am trying to configure Cinder and Glance to use RBD
>> for a backend.
>> Glance image is already stored in RBD.
>> Following this link http://docs.ceph.com/docs/master/rbd/rbd-openstack/
>>
>> I have managed to install glance image in rbd. But i am finding some
>> issue in cinder co

Re: [ceph-users] (no subject)

2016-07-07 Thread Gaurav Goyal
 mds

caps: [osd] allow rwx

osd.0

key: AQAB4HtX7q27KBAAEqcuJXwXAJyD6a1Qu/MXqA==

caps: [mon] allow profile osd

caps: [osd] allow *

osd.1

key: AQC/4ntXFJGdFBAAADYH03iQTF4jWI1LnBZeJg==

caps: [mon] allow profile osd

caps: [osd] allow *

osd.2

key: AQCa43tXr12fDhAAzbq6FO2+8m9qg1B12/99Og==

caps: [mon] allow profile osd

caps: [osd] allow *

osd.3

key: AQA/5HtXDNfcLxAAJWawgxc1nd8CB+4uH/8fdQ==

caps: [mon] allow profile osd

caps: [osd] allow *

client.admin

key: AQBNknJXE/I2FRAA+caW02eje7GZ/uv1O6aUgA==

caps: [mds] allow

caps: [mon] allow *

caps: [osd] allow *

client.bootstrap-mds

key: AQBOknJXjLloExAAGjMRfjp5okI1honz9Nx4wg==

caps: [mon] allow profile bootstrap-mds

client.bootstrap-osd

key: AQBNknJXDUMFKBAAZ8/TfDkS0N7Q6CbaOG3DyQ==

caps: [mon] allow profile bootstrap-osd

client.bootstrap-rgw

key: AQBOknJXQAUiABAA6IB4p4RyUmrsxXk+pv4u7g==

caps: [mon] allow profile bootstrap-rgw

client.cinder

key: AQCIAHxX9ga8LxAAU+S3Vybdu+Cm2bP3lplGnA==

caps: [mon] allow r

caps: [osd] allow class-read object_prefix rbd_children, allow rwx
pool=volumes, allow rwx pool=vms, allow rx pool=images

client.cinder-backup

key: AQCXAHxXAVSNKhAAV1d/ZRMsrriDOt+7pYgJIg==

caps: [mon] allow r

caps: [osd] allow class-read object_prefix rbd_children, allow rwx
pool=backups

client.glance

key: AQCVAHxXupPdLBAA7hh1TJZnvSmFSDWbQiaiEQ==

caps: [mon] allow r

caps: [osd] allow class-read object_prefix rbd_children, allow rwx
pool=images


Regards

Gaurav Goyal

On Thu, Jul 7, 2016 at 2:54 AM, Kees Meijs  wrote:

> Hi Gaurav,
>
> Unfortunately I'm not completely sure about your setup, but I guess it
> makes sense to configure Cinder and Glance to use RBD for a backend. It
> seems to me, you're trying to store VM images directly on an OSD
> filesystem.
>
> Please refer to http://docs.ceph.com/docs/master/rbd/rbd-openstack/ for
> details.
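Concretely, the nova.conf pieces from that guide look roughly like this (the
values mirror the nova.conf excerpts that appear elsewhere in these threads;
the secret UUID is a placeholder), so ephemeral disks land in RBD rather than
under an OSD mount point:

[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <uuid of the libvirt secret>
disk_cachemodes = "network=writeback"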
>
> Regards,
> Kees
>
> On 06-07-16 23:03, Gaurav Goyal wrote:
> >
> > I am installing ceph hammer and integrating it with openstack Liberty
> > for the first time.
> >
> > My local disk has only 500 GB but i need to create 600 GB VM. SO i
> > have created a soft link to ceph filesystem as
> >
> > lrwxrwxrwx 1 root root 34 Jul 6 13:02 instances ->
> > /var/lib/ceph/osd/ceph-0/instances [root@OSKVM1 nova]# pwd
> > /var/lib/nova [root@OSKVM1 nova]#
> >
> > now when i am trying to create an instance it is giving the following
> > error as checked from nova-compute.log
> > I need your help to fix this issue.
> >
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] (no subject)

2016-07-06 Thread Gaurav Goyal
Hi,

I am installing ceph hammer and integrating it with openstack Liberty for
the first time.

My local disk has only 500 GB but I need to create a 600 GB VM, so I have
created a soft link to the Ceph OSD filesystem:

lrwxrwxrwx 1 root root 34 Jul 6 13:02 instances -> /var/lib/ceph/osd/ceph-0/instances
[root@OSKVM1 nova]# pwd
/var/lib/nova
[root@OSKVM1 nova]#

Now, when I try to create an instance, it fails with the following error,
as seen in nova-compute.log. I need your help to fix this issue.

2016-07-06 15:49:31.554 136121 INFO nova.compute.manager
[req-f24ce706-c846-4bae-bb35-9cfeef522acf db68bdf363ea4358a3d3c22bcfe18d13
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
27fa3fa0-b290-4a84-8172-8db03764dd67] Starting instance... 2016-07-06
15:49:31.655 136121 INFO nova.compute.claims
[req-f24ce706-c846-4bae-bb35-9cfeef522acf db68bdf363ea4358a3d3c22bcfe18d13
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
27fa3fa0-b290-4a84-8172-8db03764dd67] Attempting claim: memory 512 MB, disk
1 GB 2016-07-06 15:49:31.656 136121 INFO nova.compute.claims
[req-f24ce706-c846-4bae-bb35-9cfeef522acf db68bdf363ea4358a3d3c22bcfe18d13
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
27fa3fa0-b290-4a84-8172-8db03764dd67] Total memory: 193168 MB, used:
1024.00 MB 2016-07-06 15:49:31.656 136121 INFO nova.compute.claims
[req-f24ce706-c846-4bae-bb35-9cfeef522acf db68bdf363ea4358a3d3c22bcfe18d13
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
27fa3fa0-b290-4a84-8172-8db03764dd67] memory limit: 289752.00 MB, free:
288728.00 MB 2016-07-06 15:49:31.657 136121 INFO nova.compute.claims
[req-f24ce706-c846-4bae-bb35-9cfeef522acf db68bdf363ea4358a3d3c22bcfe18d13
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
27fa3fa0-b290-4a84-8172-8db03764dd67] Total disk: 2042 GB, used: 1.00 GB
2016-07-06 15:49:31.657 136121 INFO nova.compute.claims
[req-f24ce706-c846-4bae-bb35-9cfeef522acf db68bdf363ea4358a3d3c22bcfe18d13
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
27fa3fa0-b290-4a84-8172-8db03764dd67] disk limit: 2042.00 GB, free: 2041.00
GB 2016-07-06 15:49:31.673 136121 INFO nova.compute.claims
[req-f24ce706-c846-4bae-bb35-9cfeef522acf db68bdf363ea4358a3d3c22bcfe18d13
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
27fa3fa0-b290-4a84-8172-8db03764dd67] Claim successful 2016-07-06
15:49:32.154 136121 INFO nova.virt.libvirt.driver
[req-f24ce706-c846-4bae-bb35-9cfeef522acf db68bdf363ea4358a3d3c22bcfe18d13
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
27fa3fa0-b290-4a84-8172-8db03764dd67] Creating image 2016-07-06
15:49:32.343 136121 ERROR nova.compute.manager
[req-f24ce706-c846-4bae-bb35-9cfeef522acf db68bdf363ea4358a3d3c22bcfe18d13
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
27fa3fa0-b290-4a84-8172-8db03764dd67] Instance failed to spawn 2016-07-06
15:49:32.343 136121 ERROR nova.compute.manager [instance:
27fa3fa0-b290-4a84-8172-8db03764dd67] Traceback (most recent call last):
2016-07-06 15:49:32.343 136121 ERROR nova.compute.manager [instance:
27fa3fa0-b290-4a84-8172-8db03764dd67] File
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2156, in
_build_resources 2016-07-06 15:49:32.343 136121 ERROR nova.compute.manager
[instance: 27fa3fa0-b290-4a84-8172-8db03764dd67] yield resources 2016-07-06
15:49:32.343 136121 ERROR nova.compute.manager [instance:
27fa3fa0-b290-4a84-8172-8db03764dd67] File
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2009, in
_build_and_run_instance 2016-07-06 15:49:32.343 136121 ERROR
nova.compute.manager [instance: 27fa3fa0-b290-4a84-8172-8db03764dd67]
block_device_info=block_device_info) 2016-07-06 15:49:32.343 136121 ERROR
nova.compute.manager [instance: 27fa3fa0-b290-4a84-8172-8db03764dd67] File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2527,
in spawn 2016-07-06 15:49:32.343 136121 ERROR nova.compute.manager
[instance: 27fa3fa0-b290-4a84-8172-8db03764dd67] admin_pass=admin_password)
2016-07-06 15:49:32.343 136121 ERROR nova.compute.manager [instance:
27fa3fa0-b290-4a84-8172-8db03764dd67] File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2953,
in _create_image 2016-07-06 15:49:32.343 136121 ERROR nova.compute.manager
[instance: 27fa3fa0-b290-4a84-8172-8db03764dd67] instance, size,
fallback_from_host) 2016-07-06 15:49:32.343 136121 ERROR
nova.compute.manager [instance: 27fa3fa0-b290-4a84-8172-8db03764dd67] File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6406,
in _try_fetch_image_cache 2016-07-06 15:49:32.343 136121 ERROR
nova.compute.manager [instance: 27fa3fa0-b290-4a84-8172-8db03764dd67]
size=size) 2016-07-06 15:49:32.343 136121 ERROR nova.compute.manager
[instance: 27fa3fa0-b290-4a84-8172-8db03764dd67] File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line
240, in cache 2016-07-06 15:49:32.343 136121 ERROR nova.compute.manager
[instance: 27fa3fa0-b290-4a84-8172-8db03764dd67] *args, **kwargs)
2016-07-06 15:49:32.343 136121 ERROR nova.compute.manager [instance:

[ceph-users] Fwd: Ceph installation and integration with Openstack

2016-07-03 Thread Gaurav Goyal
Dear Ceph Users,

I am very new to Ceph and want to gain some knowledge for my lab setup.

Situation is --> I have installed openstack setup (Liberty) for my lab.

Host 1 --> Controller + Compute1
Host 2  --> Compute 2

DELL SAN storage is attached to both hosts as

[root@OSKVM1 ~]# iscsiadm -m node

10.35.0.3:3260,1
iqn.2001-05.com.equallogic:0-1cb196-07a83c107-4770018575af-vol1

10.35.0.8:3260,1
iqn.2001-05.com.equallogic:0-1cb196-07a83c107-4770018575af-vol1

10.35.0.*:3260,-1
iqn.2001-05.com.equallogic:0-1cb196-20d83c107-729002157606-vol2

10.35.0.8:3260,1
iqn.2001-05.com.equallogic:0-1cb196-20d83c107-729002157606-vol2

10.35.0.*:3260,-1
iqn.2001-05.com.equallogic:0-1cb196-f0783c107-70a00245761a-vol3

10.35.0.8:3260,1
iqn.2001-05.com.equallogic:0-1cb196-f0783c107-70a00245761a-vol3

10.35.0.*:3260,-1
iqn.2001-05.com.equallogic:0-1cb196-fda83c107-92700275761a-vol4
10.35.0.8:3260,1
iqn.2001-05.com.equallogic:0-1cb196-fda83c107-92700275761a-vol4

I need to configure this SAN storage as Ceph, so I want to know how to
install the Ceph Hammer release. I need to use Ceph as block storage for my
OpenStack environment.

I was following this link for the Ceph installation:
http://docs.ceph.com/docs/master/start/quick-ceph-deploy/
Please point me to the correct links for Ceph installation and for integration
with OpenStack.
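
For reference, a minimal two-node ceph-deploy sequence in the spirit of that
quick-start guide might look like the sketch below. It is only an illustration
under assumptions: the hostnames host1/host2 and the spare data disk sdb are
placeholders rather than values from this thread, and the commands follow the
Hammer-era ceph-deploy syntax described in the upstream documentation.

# Run on the admin node (Host 1) from a working directory, e.g. ~/my-cluster
ceph-deploy new host1                               # generate ceph.conf with host1 as the initial monitor
# With only two OSDs, add "osd pool default size = 2" to ceph.conf before
# continuing; the default replica count of 3 would never reach active+clean.
ceph-deploy install --release hammer host1 host2    # install the Hammer packages on both hosts
ceph-deploy mon create-initial                      # deploy the monitor and gather keys
ceph-deploy osd create host1:sdb                    # prepare and activate osd.0 on Host 1's spare disk
ceph-deploy osd create host2:sdb                    # prepare and activate osd.1 on Host 2
ceph-deploy admin host1 host2                       # push ceph.conf and the admin keyring to both hosts
ceph -s                                             # confirm the cluster reports HEALTH_OK

Regarding the filesystem question below: at the Hammer release the upstream
documentation recommended XFS for FileStore OSDs, and ceph-deploy formats the
OSD data disk as XFS by default, so normally nothing extra needs to be chosen.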

1. I think ceph-deploy, mon0, and osd0 would go on Host 1, and osd1 on Host 2.
   Is that OK?
2. What should the filesystem for Ceph be?
3. Basically, this is my first time, so I would like detailed
   information/guidance to build confidence.
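
On the OpenStack side, the usual wiring from the upstream "Block Devices and
OpenStack" guide is to create dedicated RBD pools and a restricted Cinder
client, then point cinder.conf at them. The sketch below is illustrative only:
the pool names (volumes, images, vms), the client.cinder user, and the
placement-group count of 128 are the documentation's conventional examples,
not values taken from this environment.

# Run on a node that has the admin keyring
ceph osd pool create volumes 128
ceph osd pool create images 128
ceph osd pool create vms 128
ceph auth get-or-create client.cinder \
    mon 'allow r' \
    osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'

# Copy ceph.conf and the client.cinder keyring to the controller and compute
# nodes, then enable an RBD backend in /etc/cinder/cinder.conf, roughly:
#   volume_driver   = cinder.volume.drivers.rbd.RBDDriver
#   rbd_pool        = volumes
#   rbd_user        = cinder
#   rbd_secret_uuid = <UUID of the libvirt secret holding the client.cinder key>
# Each compute node also needs that libvirt secret defined so nova/libvirt can
# attach the RBD volumes.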


Regards
Gaurav Goyal
+1647-685-3000
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Fwd: Ceph installation and integration with Openstack

2016-07-03 Thread Gaurav Goyal
Dear All,

I need your kind help, please. I am new to Ceph and want to understand the
installation concept for my lab setup.

Regards
Gaurav Goyal
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

