Re: [ceph-users] how do I destroy cephfs? (interested in cephfs + tiering + erasure coding)

2015-03-26 Thread Gregory Farnum
There have been bugs here in the recent past which have been fixed for
hammer, at least...it's possible we didn't backport it for the giant
point release. :(

But for users going forward that procedure should be good!
-Greg
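
For reference, the procedure being described amounts to something like the
following on giant/hammer; the filesystem name and single-MDS layout are taken
from the thread below, so treat this as an illustrative sketch rather than a
verbatim transcript:

ceph mds set_max_mds 0                     # stop standbys from re-activating the rank
ceph mds stop 0                            # ask rank 0 to deactivate (it passes through up:stopping)
ceph mds stat                              # repeat until rank 0 is no longer active or stopping
ceph fs rm cephfs2 --yes-i-really-mean-it  # only succeeds once all ranks are inactive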

On Thu, Mar 26, 2015 at 11:26 AM, Kyle Hutson  wrote:
> For what it's worth, I don't think "being patient" was the answer. I was
> having the same problem a couple of weeks ago, and I waited from before 5pm
> one day until after 8am the next, and still got the same errors. I ended up
> adding a "new" cephfs pool with a newly-created small pool, but was never
> able to actually remove cephfs altogether.
>
> On Thu, Mar 26, 2015 at 12:45 PM, Jake Grimmett 
> wrote:
>>
>> On 03/25/2015 05:44 PM, Gregory Farnum wrote:
>>>
>>> On Wed, Mar 25, 2015 at 10:36 AM, Jake Grimmett 
>>> wrote:

 Dear All,

 Please forgive this post if it's naive, I'm trying to familiarise myself
 with cephfs!

 I'm using Scientific Linux 6.6 with Ceph 0.87.1

 My first steps with cephfs using a replicated pool worked OK.

 Now trying to test cephfs via a replicated caching tier on top of an
 erasure pool. I've created an erasure pool, but cannot put it under the
 existing replicated pool.

 My thoughts were to delete the existing cephfs and start again; however,
 I cannot delete the existing cephfs:

 errors are as follows:

 [root@ceph1 ~]# ceph fs rm cephfs2
 Error EINVAL: all MDS daemons must be inactive before removing
 filesystem

 I've tried killing the ceph-mds process, but this does not prevent the
 above
 error.

 I've also tried this, which also errors:

 [root@ceph1 ~]# ceph mds stop 0
 Error EBUSY: must decrease max_mds or else MDS will immediately
 reactivate
>>>
>>>
>>> Right, so did you run "ceph mds set_max_mds 0" and then repeat the
>>> stop command? :)
>>>

 This also fails...

 [root@ceph1 ~]# ceph-deploy mds destroy
 [ceph_deploy.conf][DEBUG ] found configuration file at:
 /root/.cephdeploy.conf
 [ceph_deploy.cli][INFO  ] Invoked (1.5.21): /usr/bin/ceph-deploy mds
 destroy
 [ceph_deploy.mds][ERROR ] subcommand destroy not implemented

 Am I doing the right thing in trying to wipe the original cephfs config
 before attempting to use an erasure cold tier? Or can I just redefine
 the
 cephfs?
>>>
>>>
>>> Yeah, unfortunately you need to recreate it if you want to try and use
>>> an EC pool with cache tiering, because CephFS knows what pools it
>>> expects data to belong to. Things are unlikely to behave correctly if
>>> you try and stick an EC pool under an existing one. :(
>>>
>>> Sounds like this is all just testing, which is good because the
>>> suitability of EC+cache is very dependent on how much hot data you
>>> have, etc...good luck!
>>> -Greg
>>>

 many thanks,

 Jake Grimmett
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>> Thanks for your help - much appreciated.
>>
>> The "set_max_mds 0" command worked, but only after I rebooted the server
>> and restarted ceph twice. Before this I still got an
>> "mds active" error, and so was unable to destroy the cephfs.
>>
>> Possibly I was being impatient, and needed to let mds go inactive? There
>> were ~1 million files on the system.
>>
>> [root@ceph1 ~]# ceph mds set_max_mds 0
>> max_mds = 0
>>
>> [root@ceph1 ~]# ceph mds stop 0
>> telling mds.0 10.1.0.86:6811/3249 to deactivate
>>
>> [root@ceph1 ~]# ceph mds stop 0
>> Error EEXIST: mds.0 not active (up:stopping)
>>
>> [root@ceph1 ~]# ceph fs rm cephfs2
>> Error EINVAL: all MDS daemons must be inactive before removing filesystem
>>
>> There shouldn't be any other mds servers running.
>> [root@ceph1 ~]# ceph mds stop 1
>> Error EEXIST: mds.1 not active (down:dne)
>>
>> At this point I rebooted the server and did a "service ceph restart" twice,
>> then shut down and restarted ceph before this command worked:
>>
>> [root@ceph1 ~]# ceph fs rm cephfs2 --yes-i-really-mean-it
>>
>> Anyhow, I've now been able to create an erasure coded pool, with a
>> replicated tier which cephfs is running on :)
>>
>> *Lots* of testing to go!
>>
>> Again, many thanks
>>
>> Jake
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] how do I destroy cephfs? (interested in cephfs + tiering + erasure coding)

2015-03-26 Thread Kyle Hutson
For what it's worth, I don't think "being patient" was the answer. I was
having the same problem a couple of weeks ago, and I waited from before 5pm
one day until after 8am the next, and still got the same errors. I ended up
adding a "new" cephfs pool with a newly-created small pool, but was never
able to actually remove cephfs altogether.
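
If the aim is only to point CephFS at a freshly created data pool rather than
to remove the filesystem outright, the giant-era CLI offers something along
these lines; this is a guess at what "adding a new cephfs pool" involved, and
the pool name, PG count and mount path are placeholders:

ceph osd pool create newdata 64   # small replicated pool to hold new data
ceph mds add_data_pool newdata    # let the filesystem use it as an additional data pool
# on a client with the filesystem mounted, direct a directory at the new pool:
setfattr -n ceph.dir.layout.pool -v newdata /mnt/cephfs/newdir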

On Thu, Mar 26, 2015 at 12:45 PM, Jake Grimmett 
wrote:

> On 03/25/2015 05:44 PM, Gregory Farnum wrote:
>
>> On Wed, Mar 25, 2015 at 10:36 AM, Jake Grimmett 
>> wrote:
>>
>>> Dear All,
>>>
>>> Please forgive this post if it's naive, I'm trying to familiarise myself
>>> with cephfs!
>>>
>>> I'm using Scientific Linux 6.6 with Ceph 0.87.1
>>>
>>> My first steps with cephfs using a replicated pool worked OK.
>>>
>>> Now trying to test cephfs via a replicated caching tier on top of an
>>> erasure pool. I've created an erasure pool, but cannot put it under the
>>> existing replicated pool.
>>>
>>> My thoughts were to delete the existing cephfs and start again; however,
>>> I cannot delete the existing cephfs:
>>>
>>> errors are as follows:
>>>
>>> [root@ceph1 ~]# ceph fs rm cephfs2
>>> Error EINVAL: all MDS daemons must be inactive before removing filesystem
>>>
>>> I've tried killing the ceph-mds process, but this does not prevent the
>>> above
>>> error.
>>>
>>> I've also tried this, which also errors:
>>>
>>> [root@ceph1 ~]# ceph mds stop 0
>>> Error EBUSY: must decrease max_mds or else MDS will immediately
>>> reactivate
>>>
>>
>> Right, so did you run "ceph mds set_max_mds 0" and then repeat the
>> stop command? :)
>>
>>
>>> This also fails...
>>>
>>> [root@ceph1 ~]# ceph-deploy mds destroy
>>> [ceph_deploy.conf][DEBUG ] found configuration file at:
>>> /root/.cephdeploy.conf
>>> [ceph_deploy.cli][INFO  ] Invoked (1.5.21): /usr/bin/ceph-deploy mds
>>> destroy
>>> [ceph_deploy.mds][ERROR ] subcommand destroy not implemented
>>>
>>> Am I doing the right thing in trying to wipe the original cephfs config
>>> before attempting to use an erasure cold tier? Or can I just redefine the
>>> cephfs?
>>>
>>
>> Yeah, unfortunately you need to recreate it if you want to try and use
>> an EC pool with cache tiering, because CephFS knows what pools it
>> expects data to belong to. Things are unlikely to behave correctly if
>> you try and stick an EC pool under an existing one. :(
>>
>> Sounds like this is all just testing, which is good because the
>> suitability of EC+cache is very dependent on how much hot data you
>> have, etc...good luck!
>> -Greg
>>
>>
>>> many thanks,
>>>
>>> Jake Grimmett
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>
> Thanks for your help - much appreciated.
>
> The "set_max_mds 0" command worked, but only after I rebooted the server
> and restarted ceph twice. Before this I still got an
> "mds active" error, and so was unable to destroy the cephfs.
>
> Possibly I was being impatient, and needed to let mds go inactive? There
> were ~1 million files on the system.
>
> [root@ceph1 ~]# ceph mds set_max_mds 0
> max_mds = 0
>
> [root@ceph1 ~]# ceph mds stop 0
> telling mds.0 10.1.0.86:6811/3249 to deactivate
>
> [root@ceph1 ~]# ceph mds stop 0
> Error EEXIST: mds.0 not active (up:stopping)
>
> [root@ceph1 ~]# ceph fs rm cephfs2
> Error EINVAL: all MDS daemons must be inactive before removing filesystem
>
> There shouldn't be any other mds servers running.
> [root@ceph1 ~]# ceph mds stop 1
> Error EEXIST: mds.1 not active (down:dne)
>
> At this point I rebooted the server and did a "service ceph restart" twice,
> then shut down and restarted ceph before this command worked:
>
> [root@ceph1 ~]# ceph fs rm cephfs2 --yes-i-really-mean-it
>
> Anyhow, I've now been able to create an erasure coded pool, with a
> replicated tier which cephfs is running on :)
>
> *Lots* of testing to go!
>
> Again, many thanks
>
> Jake
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] how do I destroy cephfs? (interested in cephfs + tiering + erasure coding)

2015-03-26 Thread Jake Grimmett

On 03/25/2015 05:44 PM, Gregory Farnum wrote:

On Wed, Mar 25, 2015 at 10:36 AM, Jake Grimmett  wrote:

Dear All,

Please forgive this post if it's naive, I'm trying to familiarise myself
with cephfs!

I'm using Scientific Linux 6.6 with Ceph 0.87.1

My first steps with cephfs using a replicated pool worked OK.

Now trying to test cephfs via a replicated caching tier on top of an
erasure pool. I've created an erasure pool, but cannot put it under the
existing replicated pool.

My thoughts were to delete the existing cephfs and start again; however, I
cannot delete the existing cephfs:

errors are as follows:

[root@ceph1 ~]# ceph fs rm cephfs2
Error EINVAL: all MDS daemons must be inactive before removing filesystem

I've tried killing the ceph-mds process, but this does not prevent the above
error.

I've also tried this, which also errors:

[root@ceph1 ~]# ceph mds stop 0
Error EBUSY: must decrease max_mds or else MDS will immediately reactivate


Right, so did you run "ceph mds set_max_mds 0" and then repeat the
stop command? :)



This also fails...

[root@ceph1 ~]# ceph-deploy mds destroy
[ceph_deploy.conf][DEBUG ] found configuration file at:
/root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.21): /usr/bin/ceph-deploy mds destroy
[ceph_deploy.mds][ERROR ] subcommand destroy not implemented

Am I doing the right thing in trying to wipe the original cephfs config
before attempting to use an erasure cold tier? Or can I just redefine the
cephfs?


Yeah, unfortunately you need to recreate it if you want to try and use
an EC pool with cache tiering, because CephFS knows what pools it
expects data to belong to. Things are unlikely to behave correctly if
you try and stick an EC pool under an existing one. :(

Sounds like this is all just testing, which is good because the
suitability of EC+cache is very dependent on how much hot data you
have, etc...good luck!
-Greg



many thanks,

Jake Grimmett
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Thanks for your help - much appreciated.

The "set_max_mds 0" command worked, but only after I rebooted the
server and restarted ceph twice. Before this I still got an
"mds active" error, and so was unable to destroy the cephfs.

Possibly I was being impatient, and needed to let mds go inactive? 
There were ~1 million files on the system.
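
One way to take the guesswork out of this is to watch the MDS map while the
rank winds down; with the stock CLI of this vintage, roughly:

ceph mds stat   # one-line summary of the mds map (shows states such as up:stopping)
ceph mds dump   # full mds map, including which rank is in which state
ceph -w         # or leave this running and watch for the state transitions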


[root@ceph1 ~]# ceph mds set_max_mds 0
max_mds = 0

[root@ceph1 ~]# ceph mds stop 0
telling mds.0 10.1.0.86:6811/3249 to deactivate

[root@ceph1 ~]# ceph mds stop 0
Error EEXIST: mds.0 not active (up:stopping)

[root@ceph1 ~]# ceph fs rm cephfs2
Error EINVAL: all MDS daemons must be inactive before removing filesystem

There shouldn't be any other mds servers running.
[root@ceph1 ~]# ceph mds stop 1
Error EEXIST: mds.1 not active (down:dne)

At this point I rebooted the server and did a "service ceph restart" twice,
then shut down and restarted ceph before this command worked:


[root@ceph1 ~]# ceph fs rm cephfs2 --yes-i-really-mean-it

Anyhow, I've now been able to create an erasure coded pool, with a 
replicated tier which cephfs is running on :)
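
For anyone wanting to reproduce this, the general shape of such a setup on
giant looks roughly like the following. Pool names, PG counts and the EC
profile are placeholders, and the data-pool argument to "ceph fs new" may
differ by release, so treat it as a sketch rather than the exact commands
used here:

# erasure-coded base ("cold") pool plus a replicated cache pool
ceph osd erasure-code-profile set ecprofile k=2 m=1
ceph osd pool create ecdata 128 128 erasure ecprofile
ceph osd pool create cachedata 128 128
# put the replicated pool in front of the EC pool as a writeback cache tier
ceph osd tier add ecdata cachedata
ceph osd tier cache-mode cachedata writeback
ceph osd tier set-overlay ecdata cachedata
# metadata stays on a plain replicated pool; the filesystem is then created on top
ceph osd pool create cephfs_metadata 128 128
ceph fs new cephfs2 cephfs_metadata ecdata

Writeback mode is what lets clients write through the replicated tier while
the tiering agent flushes cold objects down to the erasure-coded pool.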


*Lots* of testing to go!

Again, many thanks

Jake
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] how do I destroy cephfs? (interested in cephfs + tiering + erasure coding)

2015-03-25 Thread Gregory Farnum
On Wed, Mar 25, 2015 at 10:36 AM, Jake Grimmett  wrote:
> Dear All,
>
> Please forgive this post if it's naive, I'm trying to familiarise myself
> with cephfs!
>
> I'm using Scientific Linux 6.6 with Ceph 0.87.1
>
> My first steps with cephfs using a replicated pool worked OK.
>
> Now trying to test cephfs via a replicated caching tier on top of an
> erasure pool. I've created an erasure pool, but cannot put it under the
> existing replicated pool.
>
> My thoughts were to delete the existing cephfs and start again; however, I
> cannot delete the existing cephfs:
>
> errors are as follows:
>
> [root@ceph1 ~]# ceph fs rm cephfs2
> Error EINVAL: all MDS daemons must be inactive before removing filesystem
>
> I've tried killing the ceph-mds process, but this does not prevent the above
> error.
>
> I've also tried this, which also errors:
>
> [root@ceph1 ~]# ceph mds stop 0
> Error EBUSY: must decrease max_mds or else MDS will immediately reactivate

Right, so did you run "ceph mds set_max_mds 0" and then repeat the
stop command? :)

>
> This also fails...
>
> [root@ceph1 ~]# ceph-deploy mds destroy
> [ceph_deploy.conf][DEBUG ] found configuration file at:
> /root/.cephdeploy.conf
> [ceph_deploy.cli][INFO  ] Invoked (1.5.21): /usr/bin/ceph-deploy mds destroy
> [ceph_deploy.mds][ERROR ] subcommand destroy not implemented
>
> Am I doing the right thing in trying to wipe the original cephfs config
> before attempting to use an erasure cold tier? Or can I just redefine the
> cephfs?

Yeah, unfortunately you need to recreate it if you want to try and use
an EC pool with cache tiering, because CephFS knows what pools it
expects data to belong to. Things are unlikely to behave correctly if
you try and stick an EC pool under an existing one. :(

Sounds like this is all just testing, which is good because the
suitability of EC+cache is very dependent on how much hot data you
have, etc...good luck!
-Greg
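
The knobs that decide how much of the hot set the cache can hold, and when the
tiering agent starts flushing and evicting, are per-pool settings on the cache
pool; the option names are as in the giant-era docs, and the pool name and
values below are only placeholders:

ceph osd pool set cachedata hit_set_type bloom
ceph osd pool set cachedata hit_set_count 1
ceph osd pool set cachedata hit_set_period 3600
ceph osd pool set cachedata target_max_bytes 107374182400   # ~100 GB cap on the cache
ceph osd pool set cachedata cache_target_dirty_ratio 0.4    # start flushing dirty objects at 40%
ceph osd pool set cachedata cache_target_full_ratio 0.8     # start evicting clean objects at 80%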

>
> many thanks,
>
> Jake Grimmett
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] how do I destroy cephfs? (interested in cephfs + tiering + erasure coding)

2015-03-25 Thread Jake Grimmett

Dear All,

Please forgive this post if it's naive, I'm trying to familiarise myself 
with cephfs!


I'm using Scientific Linux 6.6 with Ceph 0.87.1

My first steps with cephfs using a replicated pool worked OK.

Now trying to test cephfs via a replicated caching tier on top of an
erasure pool. I've created an erasure pool, but cannot put it under the
existing replicated pool.


My thoughts were to delete the existing cephfs and start again; however,
I cannot delete the existing cephfs:


errors are as follows:

[root@ceph1 ~]# ceph fs rm cephfs2
Error EINVAL: all MDS daemons must be inactive before removing filesystem

I've tried killing the ceph-mds process, but this does not prevent the 
above error.


I've also tried this, which also errors:

[root@ceph1 ~]# ceph mds stop 0
Error EBUSY: must decrease max_mds or else MDS will immediately reactivate

This also fails...

[root@ceph1 ~]# ceph-deploy mds destroy
[ceph_deploy.conf][DEBUG ] found configuration file at: 
/root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.21): /usr/bin/ceph-deploy mds destroy
[ceph_deploy.mds][ERROR ] subcommand destroy not implemented

Am I doing the right thing in trying to wipe the original cephfs config 
before attempting to use an erasure cold tier? Or can I just redefine 
the cephfs?
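
Before wiping anything, it can help to see exactly which pools the existing
filesystem is tied to and what state the MDS is in; a few read-only commands
from the giant CLI:

ceph fs ls        # filesystems and the metadata/data pools they use
ceph mds stat     # current MDS map summary
ceph osd lspools  # all pools in the cluster
ceph df           # per-pool usage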


many thanks,

Jake Grimmett
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com