Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

2016-08-17 Thread Gaurav Goyal
As it is a lab environment, can I set it up with less redundancy (a lower
replication factor) and more capacity?

How can I achieve that?
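
Would something roughly like the following be the right direction? (Just an
untested sketch on my side; "volumes" and "images" are example pool names,
and I assume the defaults for new pools could also be lowered via
osd_pool_default_size / osd_pool_default_min_size in ceph.conf.)

# lab use only: size 2 / min_size 1 trades durability for capacity
ceph osd pool set volumes size 2
ceph osd pool set volumes min_size 1
ceph osd pool set images size 2
ceph osd pool set images min_size 1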




On Wed, Aug 17, 2016 at 7:47 PM, Gaurav Goyal <er.gauravgo...@gmail.com>
wrote:

> Hello,
>
> Awaiting any suggestion please!
>
>
>
>
> Regards
>
> On Wed, Aug 17, 2016 at 9:59 AM, Gaurav Goyal <er.gauravgo...@gmail.com>
> wrote:
>
>> Hello Brian,
>>
>> Thanks for your response!
>>
>> Can you please elaborate on this.
>>
>> Do you mean i must use
>>
>> 4 x 1TB HDD on each nodes rather than 2 x 2TB?
>>
>> This is going to be a lab environment. Can you please suggest to have
>> best possible design for my lab environment.
>>
>>
>>
>> On Wed, Aug 17, 2016 at 9:54 AM, Brian :: <b...@iptel.co> wrote:
>>
>>> You're going to see pretty slow performance on a cluster this size
>>> with spinning disks...
>>>
>>> Ceph scales very very well but at this type of size cluster it can be
>>> challenging to get nice throughput and iops..
>>>
>>> for something small like this either use all ssd osds or consider
>>> having more spinning osds per node backed by nvme or ssd journals..
>>>
>>>
>>>
>>> On Wed, Aug 17, 2016 at 1:14 PM, Gaurav Goyal <er.gauravgo...@gmail.com>
>>> wrote:
>>> > Dear Ceph Users,
>>> >
>>> > Can you please address my scenario and suggest me a solution.
>>> >
>>> > Regards
>>> > Gaurav Goyal
>>> >
>>> > On Tue, Aug 16, 2016 at 11:13 AM, Gaurav Goyal <
>>> er.gauravgo...@gmail.com>
>>> > wrote:
>>> >>
>>> >> Hello
>>> >>
>>> >>
>>> >> I need your help to redesign my ceph storage network.
>>> >>
>>> >> As suggested in earlier discussions, i must not use SAN storage. So we
>>> >> have decided to removed it.
>>> >>
>>> >> Now we are ordering Local HDDs.
>>> >>
>>> >> My Network would be
>>> >>
>>> >> Host1 --> Controller + COmpute --> Local Disk 600GB Host 2-->
>>> Compute2 -->
>>> >> Local Disk 600GB Host 3 --> Compute2
>>> >>
>>> >> Is it right setup for ceph network? For Host1 and Host2 , we are
>>> using 1
>>> >> 600GB disk for basic filesystem.
>>> >>
>>> >> Should we use same size storage disks for ceph environment or i can
>>> order
>>> >> Disks in size of 2TB for ceph cluster?
>>> >>
>>> >> Making it
>>> >>
>>> >> 2T X 2 on Host1 2T X 2 on Host 2 2T X 2 on Host 3
>>> >>
>>> >> 12TB in total. replication factor 2 should make it 6 TB?
>>> >>
>>> >>
>>> >> Regards
>>> >>
>>> >> Gaurav Goyal
>>> >>
>>> >>
>>> >> On Thu, Aug 4, 2016 at 1:52 AM, Bharath Krishna <
>>> bkris...@walmartlabs.com>
>>> >> wrote:
>>> >>>
>>> >>> Hi Gaurav,
>>> >>>
>>> >>> There are several ways to do it depending on how you deployed your
>>> ceph
>>> >>> cluster. Easiest way to do it is using ceph-ansible with
>>> purge-cluster yaml
>>> >>> ready made to wipe off CEPH.
>>> >>>
>>> >>> https://github.com/ceph/ceph-ansible/blob/master/purge-cluster.yml
>>> >>>
>>> >>> You may need to configure ansible inventory with ceph hosts.
>>> >>>
>>> >>> Else if you want to purge manually, you can do it using:
>>> >>> http://docs.ceph.com/docs/hammer/rados/deployment/ceph-deploy-purge/
>>> >>>
>>> >>>
>>> >>> Thanks
>>> >>> Bharath
>>> >>>
>>> >>> From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of
>>> Gaurav
>>> >>> Goyal <er.gauravgo...@gmail.com>
>>> >>> Date: Thursday, August 4, 2016 at 8:19 AM
>>> >>> To: David Turner <david.tur...@storagecraft.com>
>>> >>> Cc: ceph-users <ceph-users@lists.ceph.com>
>>> >>> Subject: Re: [ceph-users] Fwd: Ceph Storage Migration from SAN
>>> storage to Local Disks

Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

2016-08-17 Thread Gaurav Goyal
Hello,

Awaiting any suggestions, please!




Regards

On Wed, Aug 17, 2016 at 9:59 AM, Gaurav Goyal <er.gauravgo...@gmail.com>
wrote:

> Hello Brian,
>
> Thanks for your response!
>
> Can you please elaborate on this.
>
> Do you mean i must use
>
> 4 x 1TB HDD on each nodes rather than 2 x 2TB?
>
> This is going to be a lab environment. Can you please suggest to have best
> possible design for my lab environment.
>
>
>
> On Wed, Aug 17, 2016 at 9:54 AM, Brian :: <b...@iptel.co> wrote:
>
>> You're going to see pretty slow performance on a cluster this size
>> with spinning disks...
>>
>> Ceph scales very very well but at this type of size cluster it can be
>> challenging to get nice throughput and iops..
>>
>> for something small like this either use all ssd osds or consider
>> having more spinning osds per node backed by nvme or ssd journals..
>>
>>
>>
>> On Wed, Aug 17, 2016 at 1:14 PM, Gaurav Goyal <er.gauravgo...@gmail.com>
>> wrote:
>> > Dear Ceph Users,
>> >
>> > Can you please address my scenario and suggest me a solution.
>> >
>> > Regards
>> > Gaurav Goyal
>> >
>> > On Tue, Aug 16, 2016 at 11:13 AM, Gaurav Goyal <
>> er.gauravgo...@gmail.com>
>> > wrote:
>> >>
>> >> Hello
>> >>
>> >>
>> >> I need your help to redesign my ceph storage network.
>> >>
>> >> As suggested in earlier discussions, i must not use SAN storage. So we
>> >> have decided to removed it.
>> >>
>> >> Now we are ordering Local HDDs.
>> >>
>> >> My Network would be
>> >>
>> >> Host1 --> Controller + COmpute --> Local Disk 600GB Host 2--> Compute2
>> -->
>> >> Local Disk 600GB Host 3 --> Compute2
>> >>
>> >> Is it right setup for ceph network? For Host1 and Host2 , we are using
>> 1
>> >> 600GB disk for basic filesystem.
>> >>
>> >> Should we use same size storage disks for ceph environment or i can
>> order
>> >> Disks in size of 2TB for ceph cluster?
>> >>
>> >> Making it
>> >>
>> >> 2T X 2 on Host1 2T X 2 on Host 2 2T X 2 on Host 3
>> >>
>> >> 12TB in total. replication factor 2 should make it 6 TB?
>> >>
>> >>
>> >> Regards
>> >>
>> >> Gaurav Goyal
>> >>
>> >>
>> >> On Thu, Aug 4, 2016 at 1:52 AM, Bharath Krishna <
>> bkris...@walmartlabs.com>
>> >> wrote:
>> >>>
>> >>> Hi Gaurav,
>> >>>
>> >>> There are several ways to do it depending on how you deployed your
>> ceph
>> >>> cluster. Easiest way to do it is using ceph-ansible with
>> purge-cluster yaml
>> >>> ready made to wipe off CEPH.
>> >>>
>> >>> https://github.com/ceph/ceph-ansible/blob/master/purge-cluster.yml
>> >>>
>> >>> You may need to configure ansible inventory with ceph hosts.
>> >>>
>> >>> Else if you want to purge manually, you can do it using:
>> >>> http://docs.ceph.com/docs/hammer/rados/deployment/ceph-deploy-purge/
>> >>>
>> >>>
>> >>> Thanks
>> >>> Bharath
>> >>>
>> >>> From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of
>> Gaurav
>> >>> Goyal <er.gauravgo...@gmail.com>
>> >>> Date: Thursday, August 4, 2016 at 8:19 AM
>> >>> To: David Turner <david.tur...@storagecraft.com>
>> >>> Cc: ceph-users <ceph-users@lists.ceph.com>
>> >>> Subject: Re: [ceph-users] Fwd: Ceph Storage Migration from SAN
>> storage to
>> >>> Local Disks
>> >>>
>> >>> Please suggest a procedure for this uninstallation process?
>> >>>
>> >>>
>> >>> Regards
>> >>> Gaurav Goyal
>> >>>
>> >>> On Wed, Aug 3, 2016 at 5:58 PM, Gaurav Goyal
>> >>> <er.gauravgo...@gmail.com<mailto:er.gauravgo...@gmail.com>> wrote:
>> >>>
>> >>> Thanks for your  prompt
>> >>> response!
>> >>>
>> >>> Situation is bit different now. Customer want us to remove the ceph
>> >>> storage configuration from scratch. Let is openstack system work
>> without ceph. Later on install ceph with local disks.

Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

2016-08-17 Thread Gaurav Goyal
Hello Brian,

Thanks for your response!

Can you please elaborate on this?

Do you mean I must use

4 x 1TB HDDs on each node rather than 2 x 2TB?

This is going to be a lab environment; can you please suggest the best
possible design for it?



On Wed, Aug 17, 2016 at 9:54 AM, Brian :: <b...@iptel.co> wrote:

> You're going to see pretty slow performance on a cluster this size
> with spinning disks...
>
> Ceph scales very very well but at this type of size cluster it can be
> challenging to get nice throughput and iops..
>
> for something small like this either use all ssd osds or consider
> having more spinning osds per node backed by nvme or ssd journals..
>
>
>
> On Wed, Aug 17, 2016 at 1:14 PM, Gaurav Goyal <er.gauravgo...@gmail.com>
> wrote:
> > Dear Ceph Users,
> >
> > Can you please address my scenario and suggest me a solution.
> >
> > Regards
> > Gaurav Goyal
> >
> > On Tue, Aug 16, 2016 at 11:13 AM, Gaurav Goyal <er.gauravgo...@gmail.com
> >
> > wrote:
> >>
> >> Hello
> >>
> >>
> >> I need your help to redesign my ceph storage network.
> >>
> >> As suggested in earlier discussions, i must not use SAN storage. So we
> >> have decided to removed it.
> >>
> >> Now we are ordering Local HDDs.
> >>
> >> My Network would be
> >>
> >> Host1 --> Controller + COmpute --> Local Disk 600GB Host 2--> Compute2
> -->
> >> Local Disk 600GB Host 3 --> Compute2
> >>
> >> Is it right setup for ceph network? For Host1 and Host2 , we are using 1
> >> 600GB disk for basic filesystem.
> >>
> >> Should we use same size storage disks for ceph environment or i can
> order
> >> Disks in size of 2TB for ceph cluster?
> >>
> >> Making it
> >>
> >> 2T X 2 on Host1 2T X 2 on Host 2 2T X 2 on Host 3
> >>
> >> 12TB in total. replication factor 2 should make it 6 TB?
> >>
> >>
> >> Regards
> >>
> >> Gaurav Goyal
> >>
> >>
> >> On Thu, Aug 4, 2016 at 1:52 AM, Bharath Krishna <
> bkris...@walmartlabs.com>
> >> wrote:
> >>>
> >>> Hi Gaurav,
> >>>
> >>> There are several ways to do it depending on how you deployed your ceph
> >>> cluster. Easiest way to do it is using ceph-ansible with purge-cluster
> yaml
> >>> ready made to wipe off CEPH.
> >>>
> >>> https://github.com/ceph/ceph-ansible/blob/master/purge-cluster.yml
> >>>
> >>> You may need to configure ansible inventory with ceph hosts.
> >>>
> >>> Else if you want to purge manually, you can do it using:
> >>> http://docs.ceph.com/docs/hammer/rados/deployment/ceph-deploy-purge/
> >>>
> >>>
> >>> Thanks
> >>> Bharath
> >>>
> >>> From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of
> Gaurav
> >>> Goyal <er.gauravgo...@gmail.com>
> >>> Date: Thursday, August 4, 2016 at 8:19 AM
> >>> To: David Turner <david.tur...@storagecraft.com>
> >>> Cc: ceph-users <ceph-users@lists.ceph.com>
> >>> Subject: Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage
> to
> >>> Local Disks
> >>>
> >>> Please suggest a procedure for this uninstallation process?
> >>>
> >>>
> >>> Regards
> >>> Gaurav Goyal
> >>>
> >>> On Wed, Aug 3, 2016 at 5:58 PM, Gaurav Goyal
> >>> <er.gauravgo...@gmail.com<mailto:er.gauravgo...@gmail.com>> wrote:
> >>>
> >>> Thanks for your  prompt
> >>> response!
> >>>
> >>> Situation is bit different now. Customer want us to remove the ceph
> >>> storage configuration from scratch. Let is openstack system work
> without
> >>> ceph. Later on install ceph with local disks.
> >>>
> >>> So I need to know a procedure to uninstall ceph and unconfigure it from
> >>> openstack.
> >>>
> >>> Regards
> >>> Gaurav Goyal
> >>> On 03-Aug-2016 4:59 pm, "David Turner"
> >>> <david.tur...@storagecraft.com<mailto:david.tur...@storagecraft.com>>
> wrote:
> >>> If I'm understanding your question correctly that you're asking how to
> >>> actually remove the SAN osds from ceph, then it doesn't matter what is
> using the storage (ie openstack, cephfs, krbd, etc) as the steps are the same.

Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

2016-08-17 Thread Brian ::
You're going to see pretty slow performance on a cluster this size
with spinning disks.

Ceph scales very well, but at this sort of cluster size it can be
challenging to get good throughput and IOPS.

For something small like this, either use all-SSD osds or consider
having more spinning osds per node backed by nvme or ssd journals.
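
If you do go the journal route, each spinning osd would be prepared with its
journal on the ssd/nvme device, along these lines with ceph-deploy (a rough
sketch only; the hostname and device names are placeholders for your own):

ceph-deploy osd create node1:sdb:/dev/nvme0n1
ceph-deploy osd create node1:sdc:/dev/nvme0n1
ceph-deploy osd create node1:sdd:/dev/nvme0n1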



On Wed, Aug 17, 2016 at 1:14 PM, Gaurav Goyal <er.gauravgo...@gmail.com> wrote:
> Dear Ceph Users,
>
> Can you please address my scenario and suggest me a solution.
>
> Regards
> Gaurav Goyal
>
> On Tue, Aug 16, 2016 at 11:13 AM, Gaurav Goyal <er.gauravgo...@gmail.com>
> wrote:
>>
>> Hello
>>
>>
>> I need your help to redesign my ceph storage network.
>>
>> As suggested in earlier discussions, i must not use SAN storage. So we
>> have decided to removed it.
>>
>> Now we are ordering Local HDDs.
>>
>> My Network would be
>>
>> Host1 --> Controller + COmpute --> Local Disk 600GB Host 2--> Compute2 -->
>> Local Disk 600GB Host 3 --> Compute2
>>
>> Is it right setup for ceph network? For Host1 and Host2 , we are using 1
>> 600GB disk for basic filesystem.
>>
>> Should we use same size storage disks for ceph environment or i can order
>> Disks in size of 2TB for ceph cluster?
>>
>> Making it
>>
>> 2T X 2 on Host1 2T X 2 on Host 2 2T X 2 on Host 3
>>
>> 12TB in total. replication factor 2 should make it 6 TB?
>>
>>
>> Regards
>>
>> Gaurav Goyal
>>
>>
>> On Thu, Aug 4, 2016 at 1:52 AM, Bharath Krishna <bkris...@walmartlabs.com>
>> wrote:
>>>
>>> Hi Gaurav,
>>>
>>> There are several ways to do it depending on how you deployed your ceph
>>> cluster. Easiest way to do it is using ceph-ansible with purge-cluster yaml
>>> ready made to wipe off CEPH.
>>>
>>> https://github.com/ceph/ceph-ansible/blob/master/purge-cluster.yml
>>>
>>> You may need to configure ansible inventory with ceph hosts.
>>>
>>> Else if you want to purge manually, you can do it using:
>>> http://docs.ceph.com/docs/hammer/rados/deployment/ceph-deploy-purge/
>>>
>>>
>>> Thanks
>>> Bharath
>>>
>>> From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of Gaurav
>>> Goyal <er.gauravgo...@gmail.com>
>>> Date: Thursday, August 4, 2016 at 8:19 AM
>>> To: David Turner <david.tur...@storagecraft.com>
>>> Cc: ceph-users <ceph-users@lists.ceph.com>
>>> Subject: Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to
>>> Local Disks
>>>
>>> Please suggest a procedure for this uninstallation process?
>>>
>>>
>>> Regards
>>> Gaurav Goyal
>>>
>>> On Wed, Aug 3, 2016 at 5:58 PM, Gaurav Goyal
>>> <er.gauravgo...@gmail.com<mailto:er.gauravgo...@gmail.com>> wrote:
>>>
>>> Thanks for your  prompt
>>> response!
>>>
>>> Situation is bit different now. Customer want us to remove the ceph
>>> storage configuration from scratch. Let is openstack system work without
>>> ceph. Later on install ceph with local disks.
>>>
>>> So I need to know a procedure to uninstall ceph and unconfigure it from
>>> openstack.
>>>
>>> Regards
>>> Gaurav Goyal
>>> On 03-Aug-2016 4:59 pm, "David Turner"
>>> <david.tur...@storagecraft.com<mailto:david.tur...@storagecraft.com>> wrote:
>>> If I'm understanding your question correctly that you're asking how to
>>> actually remove the SAN osds from ceph, then it doesn't matter what is using
>>> the storage (ie openstack, cephfs, krbd, etc) as the steps are the same.
>>>
>>> I'm going to assume that you've already added the new storage/osds to the
>>> cluster, weighted the SAN osds to 0.0 and that the backfilling has finished.
>>> If that is true, then your disk used space on the SAN's should be basically
>>> empty while the new osds on the local disks should have a fair amount of
>>> data.  If that is the case, then for every SAN osd, you just run the
>>> following commands replacing OSD_ID with the osd's id:
>>>
>>> # On the server with the osd being removed
>>> sudo stop ceph-osd id=OSD_ID
>>> ceph osd down OSD_ID
>>> ceph osd out OSD_ID
>>> ceph osd crush remove osd.OSD_ID
>>> ceph auth del osd.OSD_ID
>>> ceph osd rm OSD_ID
>>>
>>> Test running those commands on a test osd and if you had set the weight
>>> of the osd to 0.0 previously and if the backfilling had finished, then what
>>> you should see is that your cluster has 1 less osd than it used to, and no
>>> pgs should be backfilling.

Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

2016-08-17 Thread Gaurav Goyal
Dear Ceph Users,

Can you please address my scenario and suggest a solution?

Regards
Gaurav Goyal

On Tue, Aug 16, 2016 at 11:13 AM, Gaurav Goyal <er.gauravgo...@gmail.com>
wrote:

> Hello
>
>
> I need your help to redesign my ceph storage network.
>
> As suggested in earlier discussions, i must not use SAN storage. So we
> have decided to removed it.
>
> Now we are ordering Local HDDs.
>
> My Network would be
>
> Host1 --> Controller + COmpute --> Local Disk 600GB Host 2--> Compute2 -->
> Local Disk 600GB Host 3 --> Compute2
>
> Is it right setup for ceph network? For Host1 and Host2 , we are using 1
> 600GB disk for basic filesystem.
>
> Should we use same size storage disks for ceph environment or i can order
> Disks in size of 2TB for ceph cluster?
>
> Making it
>
> 2T X 2 on Host1 2T X 2 on Host 2 2T X 2 on Host 3
>
> 12TB in total. replication factor 2 should make it 6 TB?
>
>
> Regards
>
> Gaurav Goyal
>
> On Thu, Aug 4, 2016 at 1:52 AM, Bharath Krishna <bkris...@walmartlabs.com>
> wrote:
>
>> Hi Gaurav,
>>
>> There are several ways to do it depending on how you deployed your ceph
>> cluster. Easiest way to do it is using ceph-ansible with purge-cluster yaml
>> ready made to wipe off CEPH.
>>
>> https://github.com/ceph/ceph-ansible/blob/master/purge-cluster.yml
>>
>> You may need to configure ansible inventory with ceph hosts.
>>
>> Else if you want to purge manually, you can do it using:
>> http://docs.ceph.com/docs/hammer/rados/deployment/ceph-deploy-purge/
>>
>>
>> Thanks
>> Bharath
>>
>> From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of Gaurav
>> Goyal <er.gauravgo...@gmail.com>
>> Date: Thursday, August 4, 2016 at 8:19 AM
>> To: David Turner <david.tur...@storagecraft.com>
>> Cc: ceph-users <ceph-users@lists.ceph.com>
>> Subject: Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to
>> Local Disks
>>
>> Please suggest a procedure for this uninstallation process?
>>
>>
>> Regards
>> Gaurav Goyal
>>
>> On Wed, Aug 3, 2016 at 5:58 PM, Gaurav Goyal <er.gauravgo...@gmail.com
>> <mailto:er.gauravgo...@gmail.com>> wrote:
>>
>> Thanks for your  prompt
>> response!
>>
>> Situation is bit different now. Customer want us to remove the ceph
>> storage configuration from scratch. Let is openstack system work without
>> ceph. Later on install ceph with local disks.
>>
>> So I need to know a procedure to uninstall ceph and unconfigure it from
>> openstack.
>>
>> Regards
>> Gaurav Goyal
>> On 03-Aug-2016 4:59 pm, "David Turner" <david.tur...@storagecraft.com
>> <mailto:david.tur...@storagecraft.com>> wrote:
>> If I'm understanding your question correctly that you're asking how to
>> actually remove the SAN osds from ceph, then it doesn't matter what is
>> using the storage (ie openstack, cephfs, krbd, etc) as the steps are the
>> same.
>>
>> I'm going to assume that you've already added the new storage/osds to the
>> cluster, weighted the SAN osds to 0.0 and that the backfilling has
>> finished.  If that is true, then your disk used space on the SAN's should
>> be basically empty while the new osds on the local disks should have a fair
>> amount of data.  If that is the case, then for every SAN osd, you just run
>> the following commands replacing OSD_ID with the osd's id:
>>
>> # On the server with the osd being removed
>> sudo stop ceph-osd id=OSD_ID
>> ceph osd down OSD_ID
>> ceph osd out OSD_ID
>> ceph osd crush remove osd.OSD_ID
>> ceph auth del osd.OSD_ID
>> ceph osd rm OSD_ID
>>
>> Test running those commands on a test osd and if you had set the weight
>> of the osd to 0.0 previously and if the backfilling had finished, then what
>> you should see is that your cluster has 1 less osd than it used to, and no
>> pgs should be backfilling.
>>
>> HOWEVER, if my assumptions above are incorrect, please provide the output
>> of the following commands and try to clarify your question.
>>
>> ceph status
>> ceph osd tree
>>
>> I hope this helps.
>>
>> > Hello David,
>> >
>> > Can you help me with steps/Procedure to uninstall Ceph storage from
>> openstack environment?
>> >
>> >
>> > Regards
>> > Gaurav Goyal
>> 
>>
>> David Turner | Cloud Operations Engineer | StorageCraft Technology
>> Corporation<https://storagecraft.com>
>> 380 Data Drive Suite 300 | Draper | Utah | 84020
>> Office: 801.871.2760 | Mobile: 385.224.2943
>>
>> 
>> If you are not the intended recipient of this message or received it
>> erroneously, please notify the sender and delete it, together with any
>> attachments, and be advised that any dissemination or copying of this
>> message is prohibited.
>>
>> 
>>
>>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

2016-08-16 Thread Gaurav Goyal
Hello


I need your help to redesign my ceph storage network.

As suggested in earlier discussions, I must not use SAN storage, so we have
decided to remove it.

Now we are ordering local HDDs.

My layout would be:

Host1 --> Controller + Compute --> Local Disk 600GB
Host2 --> Compute2 --> Local Disk 600GB
Host3 --> Compute3

Is this the right setup for a ceph network? On Host1 and Host2 we are using
one 600GB disk for the base filesystem.

Should we use the same size of storage disks for the ceph environment, or
can I order 2TB disks for the ceph cluster?

That would make it:

2TB x 2 on Host1, 2TB x 2 on Host2, 2TB x 2 on Host3

12TB in total; a replication factor of 2 should make it 6TB usable?
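
(My rough math: 3 hosts x 2 x 2TB = 12TB raw; dividing by the replication
factor of 2 gives about 6TB usable, before Ceph/filesystem overhead and the
space kept free for the near-full ratio.)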


Regards

Gaurav Goyal

On Thu, Aug 4, 2016 at 1:52 AM, Bharath Krishna <bkris...@walmartlabs.com>
wrote:

> Hi Gaurav,
>
> There are several ways to do it depending on how you deployed your ceph
> cluster. Easiest way to do it is using ceph-ansible with purge-cluster yaml
> ready made to wipe off CEPH.
>
> https://github.com/ceph/ceph-ansible/blob/master/purge-cluster.yml
>
> You may need to configure ansible inventory with ceph hosts.
>
> Else if you want to purge manually, you can do it using:
> http://docs.ceph.com/docs/hammer/rados/deployment/ceph-deploy-purge/
>
>
> Thanks
> Bharath
>
> From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of Gaurav
> Goyal <er.gauravgo...@gmail.com>
> Date: Thursday, August 4, 2016 at 8:19 AM
> To: David Turner <david.tur...@storagecraft.com>
> Cc: ceph-users <ceph-users@lists.ceph.com>
> Subject: Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to
> Local Disks
>
> Please suggest a procedure for this uninstallation process?
>
>
> Regards
> Gaurav Goyal
>
> On Wed, Aug 3, 2016 at 5:58 PM, Gaurav Goyal <er.gauravgo...@gmail.com<
> mailto:er.gauravgo...@gmail.com>> wrote:
>
> Thanks for your  prompt
> response!
>
> Situation is bit different now. Customer want us to remove the ceph
> storage configuration from scratch. Let is openstack system work without
> ceph. Later on install ceph with local disks.
>
> So I need to know a procedure to uninstall ceph and unconfigure it from
> openstack.
>
> Regards
> Gaurav Goyal
> On 03-Aug-2016 4:59 pm, "David Turner" <david.tur...@storagecraft.com
> <mailto:david.tur...@storagecraft.com>> wrote:
> If I'm understanding your question correctly that you're asking how to
> actually remove the SAN osds from ceph, then it doesn't matter what is
> using the storage (ie openstack, cephfs, krbd, etc) as the steps are the
> same.
>
> I'm going to assume that you've already added the new storage/osds to the
> cluster, weighted the SAN osds to 0.0 and that the backfilling has
> finished.  If that is true, then your disk used space on the SAN's should
> be basically empty while the new osds on the local disks should have a fair
> amount of data.  If that is the case, then for every SAN osd, you just run
> the following commands replacing OSD_ID with the osd's id:
>
> # On the server with the osd being removed
> sudo stop ceph-osd id=OSD_ID
> ceph osd down OSD_ID
> ceph osd out OSD_ID
> ceph osd crush remove osd.OSD_ID
> ceph auth del osd.OSD_ID
> ceph osd rm OSD_ID
>
> Test running those commands on a test osd and if you had set the weight of
> the osd to 0.0 previously and if the backfilling had finished, then what
> you should see is that your cluster has 1 less osd than it used to, and no
> pgs should be backfilling.
>
> HOWEVER, if my assumptions above are incorrect, please provide the output
> of the following commands and try to clarify your question.
>
> ceph status
> ceph osd tree
>
> I hope this helps.
>
> > Hello David,
> >
> > Can you help me with steps/Procedure to uninstall Ceph storage from
> openstack environment?
> >
> >
> > Regards
> > Gaurav Goyal
> 
>
> David Turner | Cloud Operations Engineer | StorageCraft Technology
> Corporation<https://storagecraft.com>
> 380 Data Drive Suite 300 | Draper | Utah | 84020
> Office: 801.871.2760 | Mobile: 385.224.2943
>
> 
> If you are not the intended recipient of this message or received it
> erroneously, please notify the sender and delete it, together with any
> attachments, and be advised that any dissemination or copying of this
> message is prohibited.
>
> 
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

2016-08-03 Thread Bharath Krishna
Hi Gaurav,

There are several ways to do it, depending on how you deployed your Ceph
cluster. The easiest is to use ceph-ansible with its ready-made purge-cluster
playbook to wipe Ceph off:

https://github.com/ceph/ceph-ansible/blob/master/purge-cluster.yml

You may need to configure the Ansible inventory with your Ceph hosts.

Or, if you want to purge manually, you can follow:
http://docs.ceph.com/docs/hammer/rados/deployment/ceph-deploy-purge/
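
In practice that is roughly the following (a sketch; the inventory file name
and host names are placeholders):

# ceph-ansible route, run from your ceph-ansible checkout
ansible-playbook -i ceph-hosts purge-cluster.yml

# manual route with ceph-deploy, run from the admin node
ceph-deploy purge node1 node2 node3
ceph-deploy purgedata node1 node2 node3
ceph-deploy forgetkeys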


Thanks
Bharath

From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of Gaurav Goyal 
<er.gauravgo...@gmail.com>
Date: Thursday, August 4, 2016 at 8:19 AM
To: David Turner <david.tur...@storagecraft.com>
Cc: ceph-users <ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local 
Disks

Please suggest a procedure for this uninstallation process?


Regards
Gaurav Goyal

On Wed, Aug 3, 2016 at 5:58 PM, Gaurav Goyal 
<er.gauravgo...@gmail.com<mailto:er.gauravgo...@gmail.com>> wrote:

Thanks for your  prompt
response!

Situation is bit different now. Customer want us to remove the ceph storage 
configuration from scratch. Let is openstack system work without ceph. Later on 
install ceph with local disks.

So I need to know a procedure to uninstall ceph and unconfigure it from  
openstack.

Regards
Gaurav Goyal
On 03-Aug-2016 4:59 pm, "David Turner" 
<david.tur...@storagecraft.com<mailto:david.tur...@storagecraft.com>> wrote:
If I'm understanding your question correctly that you're asking how to actually 
remove the SAN osds from ceph, then it doesn't matter what is using the storage 
(ie openstack, cephfs, krbd, etc) as the steps are the same.

I'm going to assume that you've already added the new storage/osds to the 
cluster, weighted the SAN osds to 0.0 and that the backfilling has finished.  
If that is true, then your disk used space on the SAN's should be basically 
empty while the new osds on the local disks should have a fair amount of data.  
If that is the case, then for every SAN osd, you just run the following 
commands replacing OSD_ID with the osd's id:

# On the server with the osd being removed
sudo stop ceph-osd id=OSD_ID
ceph osd down OSD_ID
ceph osd out OSD_ID
ceph osd crush remove osd.OSD_ID
ceph auth del osd.OSD_ID
ceph osd rm OSD_ID

Test running those commands on a test osd and if you had set the weight of the 
osd to 0.0 previously and if the backfilling had finished, then what you should 
see is that your cluster has 1 less osd than it used to, and no pgs should be 
backfilling.

HOWEVER, if my assumptions above are incorrect, please provide the output of 
the following commands and try to clarify your question.

ceph status
ceph osd tree

I hope this helps.

> Hello David,
>
> Can you help me with steps/Procedure to uninstall Ceph storage from openstack 
> environment?
>
>
> Regards
> Gaurav Goyal


David Turner | Cloud Operations Engineer | StorageCraft Technology 
Corporation<https://storagecraft.com>
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2760 | Mobile: 385.224.2943


If you are not the intended recipient of this message or received it 
erroneously, please notify the sender and delete it, together with any 
attachments, and be advised that any dissemination or copying of this message 
is prohibited.



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

2016-08-03 Thread Gaurav Goyal
Could you please suggest a procedure for this uninstallation process?


Regards
Gaurav Goyal

On Wed, Aug 3, 2016 at 5:58 PM, Gaurav Goyal 
wrote:

> Thanks for your  prompt
> response!
>
> Situation is bit different now. Customer want us to remove the ceph
> storage configuration from scratch. Let is openstack system work without
> ceph. Later on install ceph with local disks.
>
> So I need to know a procedure to uninstall ceph and unconfigure it from
> openstack.
>
> Regards
> Gaurav Goyal
> On 03-Aug-2016 4:59 pm, "David Turner" 
> wrote:
>
>> If I'm understanding your question correctly that you're asking how to
>> actually remove the SAN osds from ceph, then it doesn't matter what is
>> using the storage (ie openstack, cephfs, krbd, etc) as the steps are the
>> same.
>>
>> I'm going to assume that you've already added the new storage/osds to the
>> cluster, weighted the SAN osds to 0.0 and that the backfilling has
>> finished.  If that is true, then your disk used space on the SAN's should
>> be basically empty while the new osds on the local disks should have a fair
>> amount of data.  If that is the case, then for every SAN osd, you just run
>> the following commands replacing OSD_ID with the osd's id:
>>
>> # On the server with the osd being removed
>> sudo stop ceph-osd id=OSD_ID
>> ceph osd down OSD_ID
>> ceph osd out OSD_ID
>> ceph osd crush remove osd.OSD_ID
>> ceph auth del osd.OSD_ID
>> ceph osd rm OSD_ID
>>
>> Test running those commands on a test osd and if you had set the weight
>> of the osd to 0.0 previously and if the backfilling had finished, then what
>> you should see is that your cluster has 1 less osd than it used to, and no
>> pgs should be backfilling.
>>
>> HOWEVER, if my assumptions above are incorrect, please provide the output
>> of the following commands and try to clarify your question.
>>
>> ceph status
>> ceph osd tree
>>
>> I hope this helps.
>>
>> > Hello David,
>> >
>> > Can you help me with steps/Procedure to uninstall Ceph storage from
>> openstack environment?
>> >
>> >
>> > Regards
>> > Gaurav Goyal
>>
>> --
>>
>>  David Turner | Cloud Operations Engineer | 
>> StorageCraft
>> Technology Corporation 
>> 380 Data Drive Suite 300 | Draper | Utah | 84020
>> Office: 801.871.2760 | Mobile: 385.224.2943
>>
>> --
>>
>> If you are not the intended recipient of this message or received it
>> erroneously, please notify the sender and delete it, together with any
>> attachments, and be advised that any dissemination or copying of this
>> message is prohibited.
>>
>> --
>>
>>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

2016-08-03 Thread Gaurav Goyal
Thanks for your prompt response!

The situation is a bit different now. The customer wants us to remove the
Ceph storage configuration from scratch, let the OpenStack system work
without Ceph, and later install Ceph with local disks.

So I need to know a procedure to uninstall Ceph and unconfigure it from
OpenStack.

Regards
Gaurav Goyal
On 03-Aug-2016 4:59 pm, "David Turner" 
wrote:

> If I'm understanding your question correctly that you're asking how to
> actually remove the SAN osds from ceph, then it doesn't matter what is
> using the storage (ie openstack, cephfs, krbd, etc) as the steps are the
> same.
>
> I'm going to assume that you've already added the new storage/osds to the
> cluster, weighted the SAN osds to 0.0 and that the backfilling has
> finished.  If that is true, then your disk used space on the SAN's should
> be basically empty while the new osds on the local disks should have a fair
> amount of data.  If that is the case, then for every SAN osd, you just run
> the following commands replacing OSD_ID with the osd's id:
>
> # On the server with the osd being removed
> sudo stop ceph-osd id=OSD_ID
> ceph osd down OSD_ID
> ceph osd out OSD_ID
> ceph osd crush remove osd.OSD_ID
> ceph auth del osd.OSD_ID
> ceph osd rm OSD_ID
>
> Test running those commands on a test osd and if you had set the weight of
> the osd to 0.0 previously and if the backfilling had finished, then what
> you should see is that your cluster has 1 less osd than it used to, and no
> pgs should be backfilling.
>
> HOWEVER, if my assumptions above are incorrect, please provide the output
> of the following commands and try to clarify your question.
>
> ceph status
> ceph osd tree
>
> I hope this helps.
>
> > Hello David,
> >
> > Can you help me with steps/Procedure to uninstall Ceph storage from
> openstack environment?
> >
> >
> > Regards
> > Gaurav Goyal
>
> --
>
>  David Turner | Cloud Operations Engineer | 
> StorageCraft
> Technology Corporation 
> 380 Data Drive Suite 300 | Draper | Utah | 84020
> Office: 801.871.2760 | Mobile: 385.224.2943
>
> --
>
> If you are not the intended recipient of this message or received it
> erroneously, please notify the sender and delete it, together with any
> attachments, and be advised that any dissemination or copying of this
> message is prohibited.
> --
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

2016-08-03 Thread David Turner
If I'm understanding your question correctly, that you're asking how to
actually remove the SAN osds from ceph, then it doesn't matter what is using
the storage (ie openstack, cephfs, krbd, etc); the steps are the same.

I'm going to assume that you've already added the new storage/osds to the
cluster, weighted the SAN osds to 0.0, and that the backfilling has finished.
If that is true, then the used disk space on the SANs should be basically
empty while the new osds on the local disks should have a fair amount of
data.  If that is the case, then for every SAN osd, you just run the
following commands, replacing OSD_ID with the osd's id:

# On the server with the osd being removed
sudo stop ceph-osd id=OSD_ID
ceph osd down OSD_ID
ceph osd out OSD_ID
ceph osd crush remove osd.OSD_ID
ceph auth del osd.OSD_ID
ceph osd rm OSD_ID
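
If you have several SAN osds, the same sequence can be wrapped in a small
loop (a sketch; replace the id list with your actual SAN osd ids, and run the
stop command on each osd's own host first):

for OSD_ID in 10 11 12; do
    ceph osd down $OSD_ID
    ceph osd out $OSD_ID
    ceph osd crush remove osd.$OSD_ID
    ceph auth del osd.$OSD_ID
    ceph osd rm $OSD_ID
done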

Test running those commands on a test osd and if you had set the weight of the 
osd to 0.0 previously and if the backfilling had finished, then what you should 
see is that your cluster has 1 less osd than it used to, and no pgs should be 
backfilling.

HOWEVER, if my assumptions above are incorrect, please provide the output of 
the following commands and try to clarify your question.

ceph status
ceph osd tree

I hope this helps.

> Hello David,
>
> Can you help me with steps/Procedure to uninstall Ceph storage from openstack 
> environment?
>
>
> Regards
> Gaurav Goyal



David Turner | Cloud Operations Engineer | StorageCraft Technology Corporation
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2760 | Mobile: 385.224.2943



If you are not the intended recipient of this message or received it 
erroneously, please notify the sender and delete it, together with any 
attachments, and be advised that any dissemination or copying of this message 
is prohibited.


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

2016-08-03 Thread Gaurav Goyal
Hello David,

Can you help me with the steps/procedure to uninstall Ceph storage from the
OpenStack environment?


Regards
Gaurav Goyal

On Tue, Aug 2, 2016 at 11:57 AM, Gaurav Goyal 
wrote:

> Hello David,
>
> Thanks a lot for detailed information!
>
> This is going to help me.
>
>
> Regards
> Gaurav Goyal
>
> On Tue, Aug 2, 2016 at 11:46 AM, David Turner <
> david.tur...@storagecraft.com> wrote:
>
>> I'm going to assume you know how to add and remove storage
>> http://docs.ceph.com/docs/hammer/rados/operations/add-or-rm-osds/.  The
>> only other part of this process is reweighting the crush map for the old
>> osds to a new weight of 0.0
>> http://docs.ceph.com/docs/master/rados/operations/crush-map/.
>>
>> I would recommend setting the nobackfill and norecover flags.
>>
>> ceph osd set nobackfill
>> ceph osd set norecover
>>
>> Next you would add all of the new osds according to the ceph docs and
>> then reweight the old osds to 0.0.
>>
>> ceph osd crush reweight osd.1 0.0
>>
>> Once you have all of that set, unset nobackfill and norecover.
>>
>> ceph osd unset nobackfill
>> ceph osd unset norecover
>>
>> Wait until all of the backfilling finishes and then remove the old SAN
>> osds as per the ceph docs.
>>
>>
>> There is a thread from this mailing list about the benefits of weighting
>> osds to 0.0 instead of just removing them.  The best thing that you gain
>> from doing it this way is that you can remove multiple nodes/osds at the
>> same time without having degraded objects and especially without losing
>> objects.
>>
>> --
>>
>>  David Turner | Cloud Operations Engineer | 
>> StorageCraft
>> Technology Corporation 
>> 380 Data Drive Suite 300 | Draper | Utah | 84020
>> Office: 801.871.2760 | Mobile: 385.224.2943
>>
>> --
>>
>> If you are not the intended recipient of this message or received it
>> erroneously, please notify the sender and delete it, together with any
>> attachments, and be advised that any dissemination or copying of this
>> message is prohibited.
>>
>> --
>>
>>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

2016-08-02 Thread Gaurav Goyal
Hello David,

Thanks a lot for the detailed information!

This is going to help me.


Regards
Gaurav Goyal

On Tue, Aug 2, 2016 at 11:46 AM, David Turner  wrote:

> I'm going to assume you know how to add and remove storage
> http://docs.ceph.com/docs/hammer/rados/operations/add-or-rm-osds/.  The
> only other part of this process is reweighting the crush map for the old
> osds to a new weight of 0.0
> http://docs.ceph.com/docs/master/rados/operations/crush-map/.
>
> I would recommend setting the nobackfill and norecover flags.
>
> ceph osd set nobackfill
> ceph osd set norecover
>
> Next you would add all of the new osds according to the ceph docs and then
> reweight the old osds to 0.0.
>
> ceph osd crush reweight osd.1 0.0
>
> Once you have all of that set, unset nobackfill and norecover.
>
> ceph osd unset nobackfill
> ceph osd unset norecover
>
> Wait until all of the backfilling finishes and then remove the old SAN
> osds as per the ceph docs.
>
>
> There is a thread from this mailing list about the benefits of weighting
> osds to 0.0 instead of just removing them.  The best thing that you gain
> from doing it this way is that you can remove multiple nodes/osds at the
> same time without having degraded objects and especially without losing
> objects.
>
> --
>
>  David Turner | Cloud Operations Engineer | 
> StorageCraft
> Technology Corporation 
> 380 Data Drive Suite 300 | Draper | Utah | 84020
> Office: 801.871.2760 | Mobile: 385.224.2943
>
> --
>
> If you are not the intended recipient of this message or received it
> erroneously, please notify the sender and delete it, together with any
> attachments, and be advised that any dissemination or copying of this
> message is prohibited.
>
> --
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

2016-08-02 Thread David Turner
I'm going to assume you know how to add and remove storage
(http://docs.ceph.com/docs/hammer/rados/operations/add-or-rm-osds/).  The only
other part of this process is reweighting the old osds in the crush map to a
new weight of 0.0 (http://docs.ceph.com/docs/master/rados/operations/crush-map/).

I would recommend setting the nobackfill and norecover flags.

ceph osd set nobackfill
ceph osd set norecover

Next you would add all of the new osds according to the ceph docs and then 
reweight the old osds to 0.0.

ceph osd crush reweight osd.1 0.0

Once you have all of that set, unset nobackfill and norecover.

ceph osd unset nobackfill
ceph osd unset norecover

Wait until all of the backfilling finishes and then remove the old SAN osds as 
per the ceph docs.
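
Put together, the whole sequence looks roughly like this (a sketch; osd ids
1, 2 and 3 stand in for your old SAN osds):

ceph osd set nobackfill
ceph osd set norecover
# add the new local-disk osds here, per the add-or-rm-osds doc
for OSD_ID in 1 2 3; do
    ceph osd crush reweight osd.$OSD_ID 0.0
done
ceph osd unset nobackfill
ceph osd unset norecover
# watch "ceph -s" until backfilling finishes, then remove the old SAN osds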


There is a thread from this mailing list about the benefits of weighting osds 
to 0.0 instead of just removing them.  The best thing that you gain from doing 
it this way is that you can remove multiple nodes/osds at the same time without 
having degraded objects and especially without losing objects.



David Turner | Cloud Operations Engineer | StorageCraft Technology Corporation
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2760 | Mobile: 385.224.2943



If you are not the intended recipient of this message or received it 
erroneously, please notify the sender and delete it, together with any 
attachments, and be advised that any dissemination or copying of this message 
is prohibited.


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

2016-08-02 Thread David Turner
Just add the new storage and weight the old storage to 0.0 so all data will 
move off of the old storage to the new storage.  It's not unique to migrating 
from SANs to Local Disks.  You would do the same any time you wanted to migrate 
to newer servers and retire old servers.  After the backfilling is done, you 
can just remove the old osds from the cluster and no more backfilling will 
happen.



David Turner | Cloud Operations Engineer | StorageCraft Technology Corporation <https://storagecraft.com>
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2760 | Mobile: 385.224.2943



If you are not the intended recipient of this message or received it 
erroneously, please notify the sender and delete it, together with any 
attachments, and be advised that any dissemination or copying of this message 
is prohibited.




From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Gaurav Goyal 
[er.gauravgo...@gmail.com]
Sent: Tuesday, August 02, 2016 9:19 AM
To: ceph-users
Subject: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local 
Disks

Dear Ceph Team,

I need your guidance on this.


Regards
Gaurav Goyal

On Wed, Jul 27, 2016 at 4:03 PM, Gaurav Goyal 
<er.gauravgo...@gmail.com<mailto:er.gauravgo...@gmail.com>> wrote:
Dear Team,

I have Ceph storage installed on SAN storage, which is connected to the
OpenStack hosts via iSCSI LUNs.
Now we want to get rid of the SAN storage and move Ceph over to local disks.

Can I add new local disks as new OSDs and remove the old OSDs?
or

Will I have to remove Ceph from scratch and install it freshly with local
disks?


Regards
Gaurav Goyal






___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

2016-08-02 Thread Gaurav Goyal
Hi David,

Thanks for your comments!
Can you please share the procedure/document, if available?

Regards
Gaurav Goyal

On Tue, Aug 2, 2016 at 11:24 AM, David Turner <david.tur...@storagecraft.com
> wrote:

> Just add the new storage and weight the old storage to 0.0 so all data
> will move off of the old storage to the new storage.  It's not unique to
> migrating from SANs to Local Disks.  You would do the same any time you
> wanted to migrate to newer servers and retire old servers.  After the
> backfilling is done, you can just remove the old osds from the cluster and
> no more backfilling will happen.
>
> --
>
> <https://storagecraft.com> David Turner | Cloud Operations Engineer | 
> StorageCraft
> Technology Corporation <https://storagecraft.com>
> 380 Data Drive Suite 300 | Draper | Utah | 84020
> Office: 801.871.2760 | Mobile: 385.224.2943
>
> --
>
> If you are not the intended recipient of this message or received it
> erroneously, please notify the sender and delete it, together with any
> attachments, and be advised that any dissemination or copying of this
> message is prohibited.
>
> --
>
> --
> *From:* ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of
> Gaurav Goyal [er.gauravgo...@gmail.com]
> *Sent:* Tuesday, August 02, 2016 9:19 AM
> *To:* ceph-users
> *Subject:* [ceph-users] Fwd: Ceph Storage Migration from SAN storage to
> Local Disks
>
> Dear Ceph Team,
>
> I need your guidance on this.
>
>
> Regards
> Gaurav Goyal
>
> On Wed, Jul 27, 2016 at 4:03 PM, Gaurav Goyal <er.gauravgo...@gmail.com>
> wrote:
>
>> Dear Team,
>>
>> I have ceph storage installed on SAN storage which is connected to
>> Openstack Hosts via iSCSI LUNs.
>> Now we want to get rid of SAN storage and move over ceph to LOCAL disks.
>>
>> Can i add new local disks as new OSDs and remove the old osds ?
>> or
>>
>> I will have to remove the ceph from scratch and install it freshly with
>> Local disks?
>>
>>
>> Regards
>> Gaurav Goyal
>>
>>
>>
>>
>>
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Fwd: Ceph Storage Migration from SAN storage to Local Disks

2016-08-02 Thread Gaurav Goyal
Dear Ceph Team,

I need your guidance on this.


Regards
Gaurav Goyal

On Wed, Jul 27, 2016 at 4:03 PM, Gaurav Goyal 
wrote:

> Dear Team,
>
> I have ceph storage installed on SAN storage which is connected to
> Openstack Hosts via iSCSI LUNs.
> Now we want to get rid of SAN storage and move over ceph to LOCAL disks.
>
> Can i add new local disks as new OSDs and remove the old osds ?
> or
>
> I will have to remove the ceph from scratch and install it freshly with
> Local disks?
>
>
> Regards
> Gaurav Goyal
>
>
>
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com