Oh nasty typo in those release notes. RDB module :)
Good thing nonetheless!
--
David Moreau Simard
On 2014-09-02, 8:57 PM, « Lorieri » wrote:
>it is added officially now
>
>https://coreos.com/releases/#423.0.0
>
>cheers,
>-lorieri
>
>On Mon, Aug 11, 2014 at 12:28 AM, Lorieri wrote:
>> H
it is added officially now
https://coreos.com/releases/#423.0.0
cheers,
-lorieri
On Mon, Aug 11, 2014 at 12:28 AM, Lorieri wrote:
> Hi,
>
> I've been playing with CoreOS and got it (dirty) running with Ceph.
> No big deal, but it can save some time.
>
> 1 - An image of docker-registry that stores on
On Tue, Sep 2, 2014 at 3:47 PM, Alfredo Deza wrote:
> This is an actual issue, so I created:
>
> http://tracker.ceph.com/issues/9319
>
> And should be fixing it soon.
Thank you!
On Tue, Sep 2, 2014 at 3:40 PM, Konrad Gutkowski
wrote:
> It's just a text file; you can change it/create it on all your nodes before
> you run ceph-deploy.
>
> On 02.09.2014 at 21:37, J David wrote:
>
>
>> On Tue, Sep 2, 2014 at 2:50 PM, Konrad Gutkowski
>> wrote:
>>>
>>> You need to set highe
It's just a text file; you can change it/create it on all your nodes before
you run ceph-deploy.
On 02.09.2014 at 21:37, J David wrote:
On Tue, Sep 2, 2014 at 2:50 PM, Konrad Gutkowski
wrote:
You need to set higher priority for ceph repo, check "ceph-deploy with
--release (--stable) for d
On Tue, Sep 2, 2014 at 2:50 PM, Konrad Gutkowski
wrote:
> You need to set higher priority for ceph repo, check "ceph-deploy with
> --release (--stable) for dumpling?" thread.
Right, this is the same issue as that. It looks like the 0.80.1
packages are coming from Ubuntu; this is the first time w
Hi,
You need to set higher priority for ceph repo, check "ceph-deploy with
--release (--stable) for dumpling?" thread.
On 02.09.2014 at 19:18, J David wrote:
On Tue, Sep 2, 2014 at 1:00 PM, Alfredo Deza
wrote:
correct, if you don't specify what release you want/need, ceph-deploy
will
On Tue, Sep 2, 2014 at 1:00 PM, Alfredo Deza wrote:
> correct, if you don't specify what release you want/need, ceph-deploy
> will use the latest stable release (firefly as of this writing)
So, ceph-deploy set up emperor repositories in
/etc/apt/sources.list.d/ceph.list and then didn't use them?
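For reference, the "higher priority" fix discussed in this thread is usually an apt
preferences pin; a minimal sketch, assuming the packages come from ceph.com (the file
name and pin values here are examples, check `apt-cache policy ceph` on the node for
the real origin):

  # /etc/apt/preferences.d/ceph.pref  (example file name)
  Package: *
  Pin: origin "ceph.com"
  Pin-Priority: 1001

  # then verify which candidate apt would now install:
  # apt-cache policy ceph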
On Sat, Aug 30, 2014 at 11:35 PM, Christian Balzer wrote:
>
> Hello,
>
> On Sat, 30 Aug 2014 20:24:00 -0400 J David wrote:
>
>> While adding some nodes to a ceph emperor cluster using ceph-deploy,
>> the new nodes somehow wound up with 0.80.1, which I think is a Firefly
>> release.
>>
> This was a
Hi Sebastien,
Something I didn't see in the thread so far, did you secure erase the SSDs
before they got used? I assume these were probably repurposed for this test. We
have seen some pretty significant garbage collection issues on various SSDs and
other forms of solid state storage to the point
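For reference, a secure erase on a SATA SSD is usually done with hdparm's ATA security
erase; a rough sketch, with /dev/sdX as a placeholder (this destroys all data, and the
drive must not be frozen or mounted):

  hdparm --user-master u --security-set-pass p /dev/sdX   # set a throwaway password
  hdparm --user-master u --security-erase p /dev/sdX      # issue the erase
  blkdiscard /dev/sdX                                      # alternative: discard every block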
It would be nice if you could post the results :)
Yup gitbuilder is available on debian 7.6 wheezy.
On 02 Sep 2014, at 17:55, Alexandre DERUMIER wrote:
> I'm going to install a small 3-node test ssd cluster next week,
>
> I have some intel s3500 and crucial m550.
> I'll try to bench them with fi
Loic, My comments are inline.
On Tue, Sep 2, 2014 at 8:54 AM, Jakes John wrote:
> Thanks Loic.
>
>
>
> On Mon, Sep 1, 2014 at 11:31 PM, Loic Dachary wrote:
>
>> Hi John,
>>
>> On 02/09/2014 05:29, Jakes John wrote: > Hi,
>> > I have some general questions regarding the crush map. It would be
According to http://ceph.com/docs/master/rados/operations/crush-map/, you
should be able to construct a clever use of 'step take' and 'step choose'
rules in your CRUSH map to force one copy to a particular bucket and allow
the other two copies to be chosen elsewhere. I was looking for a way to
have
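A sketch of such a rule, modelled on the ssd-primary style example in those docs; the
bucket names 'special' and 'default' and the ruleset number are placeholders:

  rule one-copy-in-special {
          ruleset 4
          type replicated
          min_size 3
          max_size 3
          step take special
          step chooseleaf firstn 1 type host
          step emit
          step take default
          step chooseleaf firstn -1 type host
          step emit
  }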
I'm going to install a small 3-node test ssd cluster next week,
I have some intel s3500 and crucial m550.
I'll try to bench them with firefly and master.
Is a debian wheezy gitbuilder repository available? (I'm a bit lazy to compile
all packages)
- Original Mail -
From: "Sebastien Ha
Thanks Loic.
On Mon, Sep 1, 2014 at 11:31 PM, Loic Dachary wrote:
> Hi John,
>
> On 02/09/2014 05:29, Jakes John wrote: > Hi,
> > I have some general questions regarding the crush map. It would be
> helpful if someone can help me out by clarifying them.
> >
> > 1. I saw that a bucket 'host'
We've chosen to use the gitbuilder site to make sure we get the same version
when we rebuild nodes, etc.
http://gitbuilder.ceph.com/ceph-deb-precise-x86_64-basic/
So our sources list looks like:
deb http://gitbuilder.ceph.com/ceph-deb-precise-x86_64-basic/ref/v0.80.5
precise main
Warren
-
FYI it is a known issue: http://tracker.ceph.com/issues/6109
On 01/09/2014 00:02, Loic Dachary wrote:
> Hi Ceph,
>
> In a mixed dumpling / emperor cluster, because osd 2 has been removed but is
> still in
>
> "might_have_unfound": [
> { "osd": 2,
> "
Well the last time I ran two processes in parallel I got half the total amount
available, so 1.7k per client.
On 02 Sep 2014, at 15:19, Alexandre DERUMIER wrote:
>
> Do you get the same results if you launch 2 fio benchmarks in parallel on 2
> different rbd volumes?
>
>
> - Original Mail ---
Do you get the same results if you launch 2 fio benchmarks in parallel on 2
different rbd volumes?
- Original Mail -
From: "Sebastien Han"
To: "Cédric Lemarchand"
Cc: "Alexandre DERUMIER", ceph-users@lists.ceph.com
Sent: Tuesday, 2 September 2014 13:59:13
Subject: Re: [ceph-users] [Singl
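For anyone reproducing the two-clients-in-parallel test, a sketch assuming the two
volumes are mapped as /dev/rbd0 and /dev/rbd1 (device names and job parameters are
examples only):

  fio --name=vol0 --filename=/dev/rbd0 --rw=randwrite --bs=4k --iodepth=32 \
      --ioengine=libaio --direct=1 --runtime=60 --time_based &
  fio --name=vol1 --filename=/dev/rbd1 --rw=randwrite --bs=4k --iodepth=32 \
      --ioengine=libaio --direct=1 --runtime=60 --time_based &
  wait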
@Dan, oops, my bad, I forgot to use these settings. I'll try again and see how
much I can get on the read performance side.
@Mark, thanks again and yes I believe that due to some hardware variance we
have different results. I won't say that the deviation is decent, but the results
are close enough to s
Hi Sebastien,
> On 2 Sep 2014, at 10:41, Sebastien Han wrote:
>
> Hey,
>
> Well I ran an fio job that simulates (more or less) what ceph is doing
> (journal writes with dsync and o_direct) and the ssd gave me 29K IOPS too.
> I could do this, but for me it definitely looks like a major w
On 02/09/14 19:38, Alexandre DERUMIER wrote:
Hi Sebastien,
I got 6340 IOPS on a single OSD SSD. (journal and data on the same partition).
Shouldn't it be better to have 2 partitions, 1 for the journal and 1 for data?
(I'm thinking about filesystem write syncs)
Oddly enough, it does not seem to
Hi Sebastien,
That sounds promising. Did you enable the sharded ops to get this result?
Cheers, Dan
> On 02 Sep 2014, at 02:19, Sebastien Han wrote:
>
> Mark and all, Ceph IOPS performance has definitely improved with Giant.
> With this version: ceph version 0.84-940-g3215c52
> (3215c520e1306
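If "sharded ops" here means the new sharded OSD op worker queue in Giant, the settings
involved should look roughly like the following in ceph.conf (names and values are my
assumption, not something confirmed in this thread):

  [osd]
      osd op num shards = 10
      osd op num threads per shard = 2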
Hey,
Well I ran an fio job that simulates (more or less) what ceph is doing
(journal writes with dsync and o_direct) and the ssd gave me 29K IOPS too.
I could do this, but for me it definitely looks like a major waste since we
don’t even get a third of the ssd performance.
On 02 Sep 2014, a
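The journal-simulation job described above is typically along the lines of the
well-known SSD journal test; a sketch, where /dev/sdX is a placeholder and will be
overwritten:

  fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
      --numjobs=1 --iodepth=1 --runtime=60 --time_based \
      --group_reporting --name=journal-test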
Hi Sebastien,
>>I got 6340 IOPS on a single OSD SSD. (journal and data on the same
>>partition).
Shouldn't it be better to have 2 partitions, 1 for the journal and 1 for data?
(I'm thinking about filesystem write syncs)
- Original Mail -
From: "Sebastien Han"
To: "Somnath Roy"
Cc: ce
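To actually test the two-partition layout asked about above, ceph-deploy takes an
optional journal device in its osd prepare syntax; a sketch with placeholder host and
device names:

  ceph-deploy osd prepare node1:sdb:/dev/sdc1
  # data on sdb, journal on the /dev/sdc1 partition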
Hi,
We have 4 NIC controllers on our ceph servers. Each server has a few OSDs
and one monitor installed. How should we set up networking on these hosts,
with a split between the frontend network (10.20.8.0/22) and the backend
network (10.20.4.0/22)?
At the moment we are using this network configuration:
auto
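For the ceph side of that split, only the two subnets need to be declared in ceph.conf;
a minimal sketch using the subnets from the mail above (NIC bonding/addressing on the
hosts is a separate matter):

  [global]
      public network  = 10.20.8.0/22
      cluster network = 10.20.4.0/22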