Hi everyone! I would like to share the schedule and some logistical
details for next week's Ceph Developer Summit.
The Summit is happening next Tuesday, May 7, from 8:00am to 2:00pm. That
ends up being 16:00-22:00 in Europe and (unfortunately) the middle of the
night in Asia.
The schedule is a
Hi all,
I have two 4U servers; each server has 20 x 3TB HDDs, one RAID card, and 16 GB of RAM.
I want to deploy Ceph storage on them.
What would be a reasonable way to implement this?
Many thanks.
___
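In case it helps frame answers: with a RAID card it is common to expose the disks individually (JBOD/pass-through mode) and run one OSD per disk rather than one OSD on a big RAID volume. A sketch of what the OSD sections might look like under that assumption (hostnames and device paths below are invented placeholders):

```ini
; one OSD per physical disk, RAID card in JBOD/pass-through mode
; hostnames and devices are placeholders, not recommendations
[osd.0]
    host = server1
    devs = /dev/sdb

[osd.1]
    host = server1
    devs = /dev/sdc
```

With 20 disks per server that means 20 OSD sections per host; note that 16 GB of RAM is on the low side for that many OSDs, which may be worth discussing.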
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Hi Guys,
Any additional thoughts on this? There was a bit of information shared
off-list I wanted to bring back:
Sam mentioned that the metadata looked odd, and suspected "some form of
32bit shenanigans in the key name construction".
However, that might not have been the case, because later cam
Thank you, Gandalf and Igor. My intuition is that building one cluster on
top of another is not appropriate. Maybe I should give RadosGW a try first.
On Thu, May 2, 2013 at 3:00 AM, Igor Laskovy wrote:
> Or maybe in case the hosting purposes easier implement RadosGW.
>
--
Yudong Guang
guangyudong
Hello,
Speaking of the rotating-media-under-filestore case (which must be the most
common in Ceph deployments): can peering be made less greedy with disk
operations, without slowing down the entire 'blackhole' timeout, e.g. when
it blocks client operations? I'm suffering from a very long and very
disk-intensive peering process
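Not a fix for peering itself, but the recovery/backfill throttles can reduce disk pressure around it; a hedged sketch of the relevant [osd] options (the values are purely illustrative, not recommendations):

```ini
[osd]
    ; throttle concurrent backfill/recovery work per OSD
    osd max backfills = 1
    osd recovery max active = 1
    ; deprioritize recovery ops relative to client I/O
    osd recovery op priority = 1
```

These govern recovery and backfill rather than the peering state machine, so they may only help with the disk-intensive aftermath, not the blocking itself.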
Sorry, I forgot to hit reply all.
That did it, I'm getting a "HEALTH_OK"!! Now I can move on with the
process! Thanks guys, hopefully you won't see me back here too much ;)
On Wed, May 1, 2013 at 5:43 PM, Gregory Farnum wrote:
> [ Please keep all discussions on the list. :) ]
>
> Okay, so you'
On 01/05/2013 18:23, Wyatt Gorman wrote:
Here is my ceph.conf. I just figured out that the second "host =" line isn't
necessary, though it is like that in the 5-minute quick start guide...
(Perhaps I'll submit the couple of fixes that I've had to implement so
far). That fixes the "redefined host" is
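For reference, in a quick-start-style ceph.conf each daemon section carries a single "host =" line of its own; a hypothetical minimal example (the hostname and address are made up):

```ini
; hypothetical minimal layout; one "host =" per daemon section
[mon.a]
    host = cephserver
    mon addr = 192.168.0.10:6789

[osd.0]
    host = cephserver
```

A duplicated "host =" inside one section is what typically triggers the "redefined host" warning.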
On May 1, 2013, at 11:44 PM, Sage Weil wrote:
> I added a blueprint for extending the crush rule language. If there are
> interesting or strange placement policies you'd like to do and aren't able
> to currently express using CRUSH, please help us out by enumerating them
> on that blueprint
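For context when enumerating policies on the blueprint, a typical rule in the current CRUSH language looks like this (the `default` bucket and `host` type names are from a stock map):

```
rule replicated_rule {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
```

Placement policies that can't be phrased as a sequence of take/choose/chooseleaf/emit steps are exactly the kind of examples the blueprint is asking for.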
Or maybe, for hosting purposes, it is easier to implement RadosGW.