On Mon, 6 Jun 2016 14:14:17 -0500 Brady Deetz wrote:

This is an interesting idea that I hadn't yet considered testing.

My test cluster is also looking like 2K per object.

It looks like our hardware purchase for a one-half sized pilot is getting
approved and I don't really want to modify it when we're this close to
moving forward. So, using spare NV
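
As a back-of-envelope check on that ~2K-per-object figure, a sketch along these
lines gives the raw capacity the metadata pool would need; the file count and
replication factor below are made-up placeholders, not numbers from Brady's cluster:

# Rough CephFS metadata pool sizing estimate.
# num_files and replication are hypothetical placeholders; substitute your own.

def metadata_pool_bytes(num_files, bytes_per_object=2048, replication=3):
    """Raw capacity the metadata pool needs, in bytes."""
    return num_files * bytes_per_object * replication

# Example: 500 million files at ~2 KB of metadata each, 3x replicated.
raw = metadata_pool_bytes(500_000_000)
print("~%.1f TB raw" % (raw / 1e12))   # ~3.1 TB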
On Fri, 3 Jun 2016 15:43:11 +0100 David wrote:

I'm hoping to implement CephFS in production at some point this year, so I'd
be interested to hear your progress on this.

Have you considered SSD for your metadata pool? You wouldn't need loads of
capacity, although even with reliable SSDs I'd probably still do 3x
replication for metadata. I've been
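
For a sense of what a separate SSD-backed, 3x-replicated metadata pool could
look like on a Jewel-era cluster, here is a hypothetical sketch; the CRUSH root
"ssd", the rule id, the pool names and the PG counts are all assumptions for
illustration, not details from David's setup:

# Hypothetical sketch: a 3x-replicated CephFS metadata pool pinned to SSDs.
# Assumes a CRUSH root named "ssd" already exists; names, ids and PG counts
# below are placeholders.
import subprocess

def ceph(*args):
    subprocess.run(["ceph", *args], check=True)

# CRUSH rule that only picks OSDs under the "ssd" root, one replica per host.
ceph("osd", "crush", "rule", "create-simple", "ssd_rule", "ssd", "host")

# Small, 3x-replicated metadata pool; the data pool stays on the default rule.
ceph("osd", "pool", "create", "cephfs_metadata", "128")
ceph("osd", "pool", "set", "cephfs_metadata", "size", "3")
ceph("osd", "pool", "set", "cephfs_metadata", "crush_ruleset", "1")  # id of ssd_rule

ceph("osd", "pool", "create", "cephfs_data", "2048")

# Tie the two pools together as a filesystem.
ceph("fs", "new", "cephfs", "cephfs_metadata", "cephfs_data")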
I have three comments on our CephFS deployment. Some background first: we
have been using CephFS since Giant with some not-so-important data, and we
are using it more heavily now in Infernalis. We have our own raw data storage
using POSIX semantics and keep everything as basic as possible.
Basicall
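
"Keeping everything as basic as possible" presumably means applications see
CephFS as an ordinary POSIX filesystem and never touch a Ceph-specific API;
a minimal illustration, with /mnt/cephfs as a hypothetical kernel-client mount point:

# Plain POSIX I/O against a CephFS mount; no Ceph-specific API involved.
# /mnt/cephfs is a hypothetical mount point.
import os

base = "/mnt/cephfs/raw/2016/06"
os.makedirs(base, exist_ok=True)

path = os.path.join(base, "sample.dat")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)
    f.flush()
    os.fsync(f.fileno())      # the usual POSIX durability contract

print(os.stat(path).st_size)  # metadata comes back through stat(), as usual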
On Wed, 1 Jun 2016 15:50:19 -0500 Brady Deetz wrote:

Question:
I'm curious if there is anybody else out there running CephFS at the scale
I'm planning for. I'd like to know some of the issues you didn't expect
that I should be looking out for. I'd also like to simply see when CephFS
hasn't worked out and why. Basically, give me your war stories.
Pr