Re: [ceph-users] cephfs survey results

2014-12-08 Thread Yan, Zheng
On Sat, Dec 6, 2014 at 10:40 AM, Lorieri lori...@gmail.com wrote:
 Hi,


 if I have a situation where each node in a cluster writes its own
 files in cephfs, is it safe to use multiple MDSes?
 I mean, is the problem with using multiple MDSes related to nodes writing
 the same files?


It's not a problem. Each file has an authoritative MDS.

Yan, Zheng
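
To make the answer above concrete: every inode in CephFS is managed by
exactly one authoritative MDS, so clients that each write only their own
files never contend for the same inode, however many active MDSes exist.
A minimal sketch of that access pattern, assuming the python-cephfs
(libcephfs) bindings and purely illustrative path names:

# Each node writes only a file named after itself, so no two clients
# (and no two MDSes) ever touch the same inode. Assumes python-cephfs
# and a keyring that can mount the filesystem; paths are illustrative.
import os
import socket

import cephfs

fs = cephfs.LibCephFS()
fs.conf_read_file()        # defaults to /etc/ceph/ceph.conf
fs.mount()

try:
    fs.mkdir(b'/per-node', 0o755)
except cephfs.ObjectExists:
    pass                   # another node created it first; that's fine

path = '/per-node/{0}.log'.format(socket.gethostname()).encode()
fd = fs.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
fs.write(fd, b'hello from this node\n', 0)
fs.close(fd)
fs.shutdown()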

 thanks,

 -lorieri



 On Tue, Nov 4, 2014 at 9:47 PM, Shain Miley smi...@npr.org wrote:
 +1 for fsck and snapshots; being able to have snapshot backups and protection
 against accidental deletion, etc., is something we are really looking forward
 to.

 Thanks,

 Shain



 On 11/04/2014 04:02 AM, Sage Weil wrote:

 On Tue, 4 Nov 2014, Blair Bethwaite wrote:

 On 4 November 2014 01:50, Sage Weil s...@newdream.net wrote:

 In the Ceph session at the OpenStack summit someone asked what the
 CephFS
 survey results looked like.

 Thanks Sage, that was me!

   Here's the link:

  https://www.surveymonkey.com/results/SM-L5JV7WXL/

 In short, people want

 fsck
 multimds
 snapshots
 quotas

 TBH I'm a bit surprised by a couple of these and hope maybe you guys
 will apply a certain amount of filtering on this...

 fsck and quotas were there for me, but multimds and snapshots are what
 I'd consider icing features - they're nice to have but not on the
 critical path to using cephfs instead of e.g. nfs in a production
 setting. I'd have thought stuff like small file performance and
 gateway support was much more relevant to uptake and
 positive/pain-free UX. Interested to hear others' rationale here.

 Yeah, I agree, and am taking the results with a grain of salt.  I
 think the results are heavily influenced by the order they were
 originally listed (I wish surveymonkey would randomize it for each
 person or something).

 fsck is a clear #1.  Everybody wants multimds, but I think very few
 actually need it at this point.  We'll be merging a soft quota patch
 shortly, and things like performance (adding the inline data support to
 the kernel client, for instance) will probably compete with getting
 snapshots working (as part of a larger subvolume infrastructure).  That's
 my guess at least; for now, we're really focused on fsck and hard
 usability edges and haven't set priorities beyond that.

 We're definitely interested in hearing feedback on this strategy, and on
 people's experiences with Giant so far...

 sage
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


 --
 Shain Miley | Manager of Systems and Infrastructure, Digital Media |
 smi...@npr.org | 202.513.3649

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] cephfs survey results

2014-12-05 Thread Lorieri
Hi,


if I have a situation where each node in a cluster writes its own
files in cephfs, is it safe to use multiple MDSes?
I mean, is the problem with using multiple MDSes related to nodes writing the same files?

thanks,

-lorieri



On Tue, Nov 4, 2014 at 9:47 PM, Shain Miley smi...@npr.org wrote:
 +1 for fsck and snapshots; being able to have snapshot backups and protection
 against accidental deletion, etc., is something we are really looking forward
 to.

 Thanks,

 Shain



 On 11/04/2014 04:02 AM, Sage Weil wrote:

 On Tue, 4 Nov 2014, Blair Bethwaite wrote:

 On 4 November 2014 01:50, Sage Weil s...@newdream.net wrote:

 In the Ceph session at the OpenStack summit someone asked what the
 CephFS
 survey results looked like.

 Thanks Sage, that was me!

   Here's the link:

  https://www.surveymonkey.com/results/SM-L5JV7WXL/

 In short, people want

 fsck
 multimds
 snapshots
 quotas

 TBH I'm a bit surprised by a couple of these and hope maybe you guys
 will apply a certain amount of filtering on this...

 fsck and quotas were there for me, but multimds and snapshots are what
 I'd consider icing features - they're nice to have but not on the
 critical path to using cephfs instead of e.g. nfs in a production
 setting. I'd have thought stuff like small file performance and
 gateway support was much more relevant to uptake and
 positive/pain-free UX. Interested to hear others' rationale here.

 Yeah, I agree, and am taking the results with a grain of salt.  I
 think the results are heavily influenced by the order they were
 originally listed (I wish surveymonkey would randomize it for each
 person or something).

 fsck is a clear #1.  Everybody wants multimds, but I think very few
 actually need it at this point.  We'll be merging a soft quota patch
 shortly, and things like performance (adding the inline data support to
 the kernel client, for instance) will probably compete with getting
 snapshots working (as part of a larger subvolume infrastructure).  That's
 my guess at least; for now, we're really focused on fsck and hard
 usability edges and haven't set priorities beyond that.

 We're definitely interested in hearing feedback on this strategy, and on
 people's experiences with Giant so far...

 sage
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


 --
 Shain Miley | Manager of Systems and Infrastructure, Digital Media |
 smi...@npr.org | 202.513.3649

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] cephfs survey results

2014-11-09 Thread Patrick Hahn
On Mon, Nov 3, 2014 at 6:36 PM, Blair Bethwaite
blair.bethwa...@gmail.com wrote:
 On 4 November 2014 01:50, Sage Weil s...@newdream.net wrote:
 In the Ceph session at the OpenStack summit someone asked what the CephFS
 survey results looked like.

 Thanks Sage, that was me!

  Here's the link:

 https://www.surveymonkey.com/results/SM-L5JV7WXL/

 In short, people want

 fsck
 multimds
 snapshots
 quotas

 TBH I'm a bit surprised by a couple of these and hope maybe you guys
 will apply a certain amount of filtering on this...

 fsck and quotas were there for me, but multimds and snapshots are what
 I'd consider icing features - they're nice to have but not on the
 critical path to using cephfs instead of e.g. nfs in a production
 setting. I'd have thought stuff like small file performance and
 gateway support was much more relevant to uptake and
 positive/pain-free UX. Interested to hear others' rationale here.

For the use case we're evaluating cephfs for at $dayjob, we really need
snapshots. I think anyone building a cheap-and-deep cluster for
archival storage would like to be more than one errant rm -rf away
from a *very* long weekend.

Thanks,
-- 
Patrick Hahn
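
For context: CephFS snapshots (still flagged experimental at this point)
hang off the hidden .snap directory - a mkdir there takes a read-only
snapshot of the tree, an rmdir drops it. A sketch of guarding against
exactly that errant rm -rf, assuming a kernel mount at the illustrative
path /mnt/cephfs:

# Snapshot sketch against a kernel-mounted CephFS. /mnt/cephfs/archive
# is illustrative; snapshots were still experimental at the time.
import os

tree = '/mnt/cephfs/archive'

# mkdir under .snap takes a read-only snapshot of the whole tree...
os.mkdir(os.path.join(tree, '.snap', 'before-cleanup'))

# ...so an errant rm -rf is recoverable by copying back out of it, e.g.
#   cp -a /mnt/cephfs/archive/.snap/before-cleanup/. /mnt/cephfs/archive/

# rmdir on the snapshot directory drops the snapshot again.
os.rmdir(os.path.join(tree, '.snap', 'before-cleanup'))
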
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] cephfs survey results

2014-11-04 Thread Sage Weil
On Tue, 4 Nov 2014, Blair Bethwaite wrote:
 On 4 November 2014 01:50, Sage Weil s...@newdream.net wrote:
  In the Ceph session at the OpenStack summit someone asked what the CephFS
  survey results looked like.
 
 Thanks Sage, that was me!
 
   Here's the link:
 
  https://www.surveymonkey.com/results/SM-L5JV7WXL/
 
  In short, people want
 
  fsck
  multimds
  snapshots
  quotas
 
 TBH I'm a bit surprised by a couple of these and hope maybe you guys
 will apply a certain amount of filtering on this...
 
 fsck and quotas were there for me, but multimds and snapshots are what
 I'd consider icing features - they're nice to have but not on the
 critical path to using cephfs instead of e.g. nfs in a production
 setting. I'd have thought stuff like small file performance and
 gateway support was much more relevant to uptake and
 positive/pain-free UX. Interested to hear others' rationale here.

Yeah, I agree, and am taking the results with a grain of salt.  I 
think the results are heavily influenced by the order they were 
originally listed (I wish surveymonkey would randomize it for each 
person or something).

fsck is a clear #1.  Everybody wants multimds, but I think very few 
actually need it at this point.  We'll be merging a soft quota patch 
shortly, and things like performance (adding the inline data support to 
the kernel client, for instance) will probably compete with getting 
snapshots working (as part of a larger subvolume infrastructure).  That's 
my guess at least; for now, we're really focused on fsck and hard 
usability edges and haven't set priorities beyond that.

We're definitely interested in hearing feedback on this strategy, and on 
people's experiences with Giant so far...

sage
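
For reference, the quota support being discussed here is exposed as
virtual extended attributes on directories and is enforced cooperatively
by clients - which is what makes it "soft". A sketch of the interface as
it eventually landed, assuming the python-cephfs bindings and an
illustrative /home/alice path:

# Directory quotas as virtual xattrs; enforcement is client-side and
# cooperative ("soft"). Assumes python-cephfs; the path is illustrative.
import cephfs

fs = cephfs.LibCephFS()
fs.conf_read_file()
fs.mount()

# Cap /home/alice at 10 GiB and 100k files.
fs.setxattr(b'/home/alice', 'ceph.quota.max_bytes',
            str(10 * 1024 ** 3).encode(), 0)
fs.setxattr(b'/home/alice', 'ceph.quota.max_files', b'100000', 0)

# Setting a limit back to 0 removes it.
fs.setxattr(b'/home/alice', 'ceph.quota.max_bytes', b'0', 0)
fs.shutdown()
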
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] cephfs survey results

2014-11-04 Thread Wido den Hollander
On 11/04/2014 10:02 AM, Sage Weil wrote:
 On Tue, 4 Nov 2014, Blair Bethwaite wrote:
 On 4 November 2014 01:50, Sage Weil s...@newdream.net wrote:
 In the Ceph session at the OpenStack summit someone asked what the CephFS
 survey results looked like.

 Thanks Sage, that was me!

  Here's the link:

 https://www.surveymonkey.com/results/SM-L5JV7WXL/

 In short, people want

 fsck
 multimds
 snapshots
 quotas

 TBH I'm a bit surprised by a couple of these and hope maybe you guys
 will apply a certain amount of filtering on this...

 fsck and quotas were there for me, but multimds and snapshots are what
 I'd consider icing features - they're nice to have but not on the
 critical path to using cephfs instead of e.g. nfs in a production
 setting. I'd have thought stuff like small file performance and
 gateway support was much more relevant to uptake and
 positive/pain-free UX. Interested to hear others' rationale here.
 
 Yeah, I agree, and am taking the results with a grain of salt.  I 
 think the results are heavily influenced by the order they were 
 originally listed (I wish surveymonkey would randomize it for each 
 person or something).
 
 fsck is a clear #1.  Everybody wants multimds, but I think very few 
 actually need it at this point.  We'll be merging a soft quota patch 
 shortly, and things like performance (adding the inline data support to 
 the kernel client, for instance) will probably compete with getting 
 snapshots working (as part of a larger subvolume infrastructure).  That's 
 my guess at least; for now, we're really focused on fsck and hard 
 usability edges and haven't set priorities beyond that.
 
 We're definitely interested in hearing feedback on this strategy, and on 
 people's experiences with Giant so far... 
 

I think the approach is correct. Everybody I talk to wants to kick out
their NFS server, but you don't need multi-MDS for that. Active/Standby
is just fine.

Wido

 sage
 --
 To unsubscribe from this list: send the line unsubscribe ceph-devel in
 the body of a message to majord...@vger.kernel.org
 More majordomo info at  http://vger.kernel.org/majordomo-info.html
 


-- 
Wido den Hollander
42on B.V.
Ceph trainer and consultant

Phone: +31 (0)20 700 9902
Skype: contact42on
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] cephfs survey results

2014-11-04 Thread Mariusz Gronczewski
On Tue, 4 Nov 2014 10:36:07 +1100, Blair Bethwaite
blair.bethwa...@gmail.com wrote:

 
 TBH I'm a bit surprised by a couple of these and hope maybe you guys
 will apply a certain amount of filtering on this...
 
 fsck and quotas were there for me, but multimds and snapshots are what
 I'd consider icing features - they're nice to have but not on the
 critical path to using cephfs instead of e.g. nfs in a production
 setting. I'd have thought stuff like small file performance and
 gateway support was much more relevant to uptake and
 positive/pain-free UX. Interested to hear others' rationale here.
 

Those are related; if small-file performance is good enough for one
MDS to handle a high load with a lot of small files (the typical case
for a webserver), then having multiple active MDSes will be less of a
priority.

And someone who currently has OSDs on a bunch of relatively weak
nodes will find an active-active MDS setup more interesting than
someone who can just buy a new fast machine for it.


-- 
Mariusz Gronczewski, Administrator

Efigence S. A.
ul. Wołoska 9a, 02-583 Warszawa
T: [+48] 22 380 13 13
F: [+48] 22 380 13 14
E: mariusz.gronczew...@efigence.com
mailto:mariusz.gronczew...@efigence.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] cephfs survey results

2014-11-04 Thread Scottix
Agreed, multi-MDS is a nice-to-have but not required for full production use.
TBH, stability and recovery will win over any IT person dealing with filesystems.

On Tue, Nov 4, 2014 at 7:33 AM, Mariusz Gronczewski
mariusz.gronczew...@efigence.com wrote:
 On Tue, 4 Nov 2014 10:36:07 +1100, Blair Bethwaite
 blair.bethwa...@gmail.com wrote:


 TBH I'm a bit surprised by a couple of these and hope maybe you guys
 will apply a certain amount of filtering on this...

 fsck and quotas were there for me, but multimds and snapshots are what
 I'd consider icing features - they're nice to have but not on the
 critical path to using cephfs instead of e.g. nfs in a production
 setting. I'd have thought stuff like small file performance and
 gateway support was much more relevant to uptake and
 positive/pain-free UX. Interested to hear others' rationale here.


 Those are related; if small-file performance is good enough for one
 MDS to handle a high load with a lot of small files (the typical case
 for a webserver), then having multiple active MDSes will be less of a
 priority.

 And someone who currently has OSDs on a bunch of relatively weak
 nodes will find an active-active MDS setup more interesting than
 someone who can just buy a new fast machine for it.


 --
 Mariusz Gronczewski, Administrator

 Efigence S. A.
 ul. Wołoska 9a, 02-583 Warszawa
 T: [+48] 22 380 13 13
 F: [+48] 22 380 13 14
 E: mariusz.gronczew...@efigence.com
 mailto:mariusz.gronczew...@efigence.com

 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




-- 
Follow Me: @Scottix
http://about.me/scottix
scot...@gmail.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] cephfs survey results

2014-11-04 Thread Mark Kirkwood

On 04/11/14 22:02, Sage Weil wrote:

On Tue, 4 Nov 2014, Blair Bethwaite wrote:

On 4 November 2014 01:50, Sage Weil s...@newdream.net wrote:

In the Ceph session at the OpenStack summit someone asked what the CephFS
survey results looked like.


Thanks Sage, that was me!


  Here's the link:

 https://www.surveymonkey.com/results/SM-L5JV7WXL/

In short, people want

fsck
multimds
snapshots
quotas


TBH I'm a bit surprised by a couple of these and hope maybe you guys
will apply a certain amount of filtering on this...

fsck and quotas were there for me, but multimds and snapshots are what
I'd consider icing features - they're nice to have but not on the
critical path to using cephfs instead of e.g. nfs in a production
setting. I'd have thought stuff like small file performance and
gateway support was much more relevant to uptake and
positive/pain-free UX. Interested to hear others' rationale here.


Yeah, I agree, and am taking the results with a grain of salt.  I
think the results are heavily influenced by the order they were
originally listed (I wish surveymonkey would randomize it for each
person or something).

fsck is a clear #1.  Everybody wants multimds, but I think very few
actually need it at this point.  We'll be merging a soft quota patch
shortly, and things like performance (adding the inline data support to
the kernel client, for instance) will probably compete with getting
snapshots working (as part of a larger subvolume infrastructure).  That's
my guess at least; for now, we're really focused on fsck and hard
usability edges and haven't set priorities beyond that.

We're definitely interested in hearing feedback on this strategy, and on
people's experiences with Giant so far...



Heh, not necessarily - I put multi mds in there, as we want the cephfs
part to be similar to the rest of ceph in its availability.


Maybe it's because we are looking at plugging it into an OpenStack
setup, and for that you want everything to 'just look after itself'. If
on the other hand we merely wanted an nfs replacement, then sure,
multi mds is not so important there.


regards

Mark

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] cephfs survey results

2014-11-04 Thread Mark Nelson

On 11/04/2014 03:11 PM, Mark Kirkwood wrote:

On 04/11/14 22:02, Sage Weil wrote:

On Tue, 4 Nov 2014, Blair Bethwaite wrote:

On 4 November 2014 01:50, Sage Weil s...@newdream.net wrote:

In the Ceph session at the OpenStack summit someone asked what the
CephFS
survey results looked like.


Thanks Sage, that was me!


  Here's the link:

 https://www.surveymonkey.com/results/SM-L5JV7WXL/

In short, people want

fsck
multimds
snapshots
quotas


TBH I'm a bit surprised by a couple of these and hope maybe you guys
will apply a certain amount of filtering on this...

fsck and quotas were there for me, but multimds and snapshots are what
I'd consider icing features - they're nice to have but not on the
critical path to using cephfs instead of e.g. nfs in a production
setting. I'd have thought stuff like small file performance and
gateway support was much more relevant to uptake and
positive/pain-free UX. Interested to hear others' rationale here.


Yeah, I agree, and am taking the results with a grain of salt.  I
think the results are heavily influenced by the order they were
originally listed (I wish surveymonkey would randomize it for each
person or something).

fsck is a clear #1.  Everybody wants multimds, but I think very few
actually need it at this point.  We'll be merging a soft quota patch
shortly, and things like performance (adding the inline data support to
the kernel client, for instance) will probably compete with getting
snapshots working (as part of a larger subvolume infrastructure).  That's
my guess at least; for now, we're really focused on fsck and hard
usability edges and haven't set priorities beyond that.

We're definitely interested in hearing feedback on this strategy, and on
people's experiences with Giant so far...



Heh, not necessarily - I put multi mds in there, as we want the cephfs
part to be similar to the rest of ceph in its availability.

Maybe it's because we are looking at plugging it into an OpenStack
setup, and for that you want everything to 'just look after itself'. If
on the other hand we merely wanted an nfs replacement, then sure,
multi mds is not so important there.


Do you need active/active or is active/passive good enough?



regards

Mark

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] cephfs survey results

2014-11-04 Thread Mark Kirkwood

On 05/11/14 10:58, Mark Nelson wrote:

On 11/04/2014 03:11 PM, Mark Kirkwood wrote:

Heh, not necessarily - I put multi mds in there, as we want the cephfs
part to be similar to the rest of ceph in its availability.

Maybe it's because we are looking at plugging it into an OpenStack
setup, and for that you want everything to 'just look after itself'. If
on the other hand we merely wanted an nfs replacement, then sure,
multi mds is not so important there.


Do you need active/active or is active/passive good enough?



That is of course a good question. We are certainly seeing active/active 
as much better - essentially because all the other bits are, and it 
avoids the need to wake people up to change things. Does that make it 
essential? I'm not 100% sure; it might just be a nice-to-have that is so 
nice that we'll wait for it to be there!


Cheers

Mark
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] cephfs survey results

2014-11-04 Thread Sage Weil
On Wed, 5 Nov 2014, Mark Kirkwood wrote:
 On 04/11/14 22:02, Sage Weil wrote:
  On Tue, 4 Nov 2014, Blair Bethwaite wrote:
   On 4 November 2014 01:50, Sage Weil s...@newdream.net wrote:
In the Ceph session at the OpenStack summit someone asked what the
CephFS
survey results looked like.
   
   Thanks Sage, that was me!
   
  Here's the link:

 https://www.surveymonkey.com/results/SM-L5JV7WXL/

In short, people want

fsck
multimds
snapshots
quotas
   
   TBH I'm a bit surprised by a couple of these and hope maybe you guys
   will apply a certain amount of filtering on this...
   
   fsck and quotas were there for me, but multimds and snapshots are what
   I'd consider icing features - they're nice to have but not on the
   critical path to using cephfs instead of e.g. nfs in a production
   setting. I'd have thought stuff like small file performance and
   gateway support was much more relevant to uptake and
   positive/pain-free UX. Interested to hear others' rationale here.
  
  Yeah, I agree, and am taking the results with a grain of salt.  I
  think the results are heavily influenced by the order they were
  originally listed (I wish surveymonkey would randomize it for each
  person or something).
  
  fsck is a clear #1.  Everybody wants multimds, but I think very few
  actually need it at this point.  We'll be merging a soft quota patch
  shortly, and things like performance (adding the inline data support to
  the kernel client, for instance) will probably compete with getting
  snapshots working (as part of a larger subvolume infrastructure).  That's
  my guess at least; for now, we're really focused on fsck and hard
  usability edges and haven't set priorities beyond that.
  
  We're definitely interested in hearing feedback on this strategy, and on
  people's experiences with Giant so far...
  
 
 Heh, not necessarily - I put multi mds in there, as we want the cephfs part to
 be similar to the rest of ceph in its availability.

 Maybe it's because we are looking at plugging it into an OpenStack setup, and
 for that you want everything to 'just look after itself'. If on the other hand
 we merely wanted an nfs replacement, then sure, multi mds is not so important
 there.

Important clarification: multimds == multiple *active* MDSes.  Single 
mds means 1 active MDS and N standbys.  One perfectly valid strategy, 
for example, is to run a ceph-mds on *every* node and let the mon pick 
whichever one is active.  (That works as long as you have sufficient 
memory on all nodes.)

sage
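
A sketch of that strategy from the admin side: start a ceph-mds on every
node and the extras simply register as standbys; making more of them
active is one command later. The CLI spellings are assumptions for the
era ('ceph mds set_max_mds' then; 'ceph fs set <fs> max_mds' in later
releases):

# Run a ceph-mds everywhere; the mons elect one active, the rest show
# up as standbys with no extra configuration.
import subprocess

# Anything beyond the active count appears as "up:standby" here.
print(subprocess.check_output(['ceph', 'mds', 'stat']).decode())

# Only if multiple *active* MDSes are actually wanted is the active
# count raised (assumed era-appropriate spelling; later releases use
# 'ceph fs set <fsname> max_mds 2').
subprocess.check_call(['ceph', 'mds', 'set_max_mds', '2'])
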
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] cephfs survey results

2014-11-04 Thread Mark Kirkwood

On 05/11/14 11:47, Sage Weil wrote:

On Wed, 5 Nov 2014, Mark Kirkwood wrote:

On 04/11/14 22:02, Sage Weil wrote:

On Tue, 4 Nov 2014, Blair Bethwaite wrote:

On 4 November 2014 01:50, Sage Weil s...@newdream.net wrote:

In the Ceph session at the OpenStack summit someone asked what the
CephFS
survey results looked like.


Thanks Sage, that was me!


   Here's the link:

  https://www.surveymonkey.com/results/SM-L5JV7WXL/

In short, people want

fsck
multimds
snapshots
quotas


TBH I'm a bit surprised by a couple of these and hope maybe you guys
will apply a certain amount of filtering on this...

fsck and quotas were there for me, but multimds and snapshots are what
I'd consider icing features - they're nice to have but not on the
critical path to using cephfs instead of e.g. nfs in a production
setting. I'd have thought stuff like small file performance and
gateway support was much more relevant to uptake and
positive/pain-free UX. Interested to hear others' rationale here.


Yeah, I agree, and am taking the results with a grain of salt.  I
think the results are heavily influenced by the order they were
originally listed (I wish surveymonkey would randomize it for each
person or something).

fsck is a clear #1.  Everybody wants multimds, but I think very few
actually need it at this point.  We'll be merging a soft quota patch
shortly, and things like performance (adding the inline data support to
the kernel client, for instance) will probably compete with getting
snapshots working (as part of a larger subvolume infrastructure).  That's
my guess at least; for now, we're really focused on fsck and hard
usability edges and haven't set priorities beyond that.

We're definitely interested in hearing feedback on this strategy, and on
people's experiences with Giant so far...



Heh, not necessarily - I put multi mds in there, as we want the cephfs part to
be similar to the rest of ceph in its availability.

Maybe it's because we are looking at plugging it into an OpenStack setup, and
for that you want everything to 'just look after itself'. If on the other hand
we merely wanted an nfs replacement, then sure, multi mds is not so important
there.


Important clarification: multimds == multiple *active* MDSes.  Single
mds means 1 active MDS and N standbys.  One perfectly valid strategy,
for example, is to run a ceph-mds on *every* node and let the mon pick
whichever one is active.  (That works as long as you have sufficient
memory on all nodes.)



Righty, so I think I (plus a few others perhaps) misunderstood the 
nature of the 'promotion mechanism' for the 1-active-several-standbys 
design - I was under the (possibly wrong) impression that you needed to 
'do something' to make a standby active. If not, then yeah, it would be 
fine, sorry!
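
Indeed, no manual step is needed: the monitors declare the active MDS
failed once its beacons stop arriving (after mds_beacon_grace, 15
seconds by default) and promote a standby on their own. A small sketch
of watching a failover happen, assuming the ceph CLI is on the PATH:

# Poll the MDS map while killing the active daemon elsewhere; a standby
# is promoted automatically once the beacon grace period expires.
import subprocess
import time

for _ in range(10):
    # e.g. "e42: 1/1/1 up {0=a=up:active}, 1 up:standby"
    print(subprocess.check_output(['ceph', 'mds', 'stat']).decode().strip())
    time.sleep(5)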


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] cephfs survey results

2014-11-04 Thread Shain Miley
+1 for fsck and snapshots; being able to have snapshot backups and 
protection against accidental deletion, etc., is something we are really 
looking forward to.


Thanks,

Shain


On 11/04/2014 04:02 AM, Sage Weil wrote:

On Tue, 4 Nov 2014, Blair Bethwaite wrote:

On 4 November 2014 01:50, Sage Weil s...@newdream.net wrote:

In the Ceph session at the OpenStack summit someone asked what the CephFS
survey results looked like.

Thanks Sage, that was me!


  Here's the link:

 https://www.surveymonkey.com/results/SM-L5JV7WXL/

In short, people want

fsck
multimds
snapshots
quotas

TBH I'm a bit surprised by a couple of these and hope maybe you guys
will apply a certain amount of filtering on this...

fsck and quotas were there for me, but multimds and snapshots are what
I'd consider icing features - they're nice to have but not on the
critical path to using cephfs instead of e.g. nfs in a production
setting. I'd have thought stuff like small file performance and
gateway support was much more relevant to uptake and
positive/pain-free UX. Interested to hear others' rationale here.

Yeah, I agree, and am taking the results with a grain of salt.  I
think the results are heavily influenced by the order they were
originally listed (I wish surveymonkey would randomize it for each
person or something).

fsck is a clear #1.  Everybody wants multimds, but I think very few
actually need it at this point.  We'll be merging a soft quota patch
shortly, and things like performance (adding the inline data support to
the kernel client, for instance) will probably compete with getting
snapshots working (as part of a larger subvolume infrastructure).  That's
my guess at least; for now, we're really focused on fsck and hard
usability edges and haven't set priorities beyond that.

We're definitely interested in hearing feedback on this strategy, and on
people's experiences with Giant so far...

sage
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


--
Shain Miley | Manager of Systems and Infrastructure, Digital Media | 
smi...@npr.org | 202.513.3649

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] cephfs survey results

2014-11-03 Thread Sage Weil
In the Ceph session at the OpenStack summit someone asked what the CephFS 
survey results looked like.  Here's the link:

https://www.surveymonkey.com/results/SM-L5JV7WXL/

In short, people want

fsck
multimds
snapshots
quotas

sage
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] cephfs survey results

2014-11-03 Thread Blair Bethwaite
On 4 November 2014 01:50, Sage Weil s...@newdream.net wrote:
 In the Ceph session at the OpenStack summit someone asked what the CephFS
 survey results looked like.

Thanks Sage, that was me!

  Here's the link:

 https://www.surveymonkey.com/results/SM-L5JV7WXL/

 In short, people want

 fsck
 multimds
 snapshots
 quotas

TBH I'm a bit surprised by a couple of these and hope maybe you guys
will apply a certain amount of filtering on this...

fsck and quotas were there for me, but multimds and snapshots are what
I'd consider icing features - they're nice to have but not on the
critical path to using cephfs instead of e.g. nfs in a production
setting. I'd have thought stuff like small file performance and
gateway support was much more relevant to uptake and
positive/pain-free UX. Interested to hear others' rationale here.

-- 
Cheers,
~Blairo
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com