On 10-1-2017 20:35, Lionel Bouton wrote:
> Hi,

I usually don't top-post, but this time it is just to agree wholeheartedly
with what you wrote. And you have given more arguments as to why:
using SSDs that don't work right is a certain recipe for losing data.
--WjW
Hi,

On 10/01/2017 19:32, Brian Andrus wrote:
> [...]
>
> I think the main point I'm trying to address is: as long as the
> backing OSD isn't handling egregiously large amounts of writes, and it
> has a good journal in front of it (one that properly handles O_DSYNC [not
> D_SYNC, as Sebastien's
On 9-1-2017 23:58, Brian Andrus wrote:
> Sorry for spam... I meant D_SYNC.

That term does not turn up anything in Google...
So I would expect it has to be O_DSYNC.
(https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/)
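For reference, the test in that blog post boils down to timing small synchronous writes. A minimal sketch of the same idea in Python, writing to a temporary file rather than a raw device (the dd/fio invocations in the post are the real test):

```python
# Time 4 KiB O_DSYNC writes and report IOPS. Against a file on a
# filesystem this mostly exercises the underlying device's sync-write
# path; a drive that "lies" about O_DSYNC will report implausibly
# high numbers here.
import os
import time
import tempfile

def dsync_write_iops(path, block_size=4096, count=200):
    # O_DSYNC: each write() returns only once the data is on stable storage.
    # Fall back to O_SYNC on platforms without O_DSYNC.
    flags = os.O_WRONLY | os.O_CREAT | getattr(os, "O_DSYNC", os.O_SYNC)
    fd = os.open(path, flags, 0o600)
    buf = b"\0" * block_size
    try:
        start = time.perf_counter()
        for _ in range(count):
            os.write(fd, buf)
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return count / elapsed

with tempfile.TemporaryDirectory() as d:
    iops = dsync_write_iops(os.path.join(d, "journal-test"))
    print(f"4k O_DSYNC write IOPS: {iops:.0f}")
```

Enterprise journal SSDs with power-loss protection typically sustain thousands of such IOPS; consumer drives often manage only a few hundred, and a drive that acknowledges O_DSYNC writes before they are durable will report numbers too good to be true.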
Sorry for spam... I meant D_SYNC.

On Mon, Jan 9, 2017 at 2:56 PM, Brian Andrus wrote:
Hi Willem, the SSDs are probably fine for backing OSDs; it's the O_DSYNC
writes they tend to lie about.
They may have a higher failure rate than enterprise-grade SSDs, but are
otherwise suitable for use as OSDs if journals are placed elsewhere.
On Mon, Jan 9, 2017 at 2:39 PM, Willem Jan Withagen wrote:
On 9-1-2017 18:46, Oliver Humpage wrote:
>
>> Why would you still be using journals when running OSDs fully on
>> SSDs?
>
> In our case, we use cheaper large SSDs for the data (Samsung 850 Pro
> 2TB), whose performance is excellent in the cluster, but as has been
> pointed out in this thread can
From: kevin parrikar <kevin.parker...@gmail.com>
Date: 07/01/2017 19:56 (GMT+02:00)
To: Lionel Bouton <lionel-subscript...@bouton.name>
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Analysing ceph performance with SSD journal, 10gbe
NIC and 2 replicas -Hammer release

Wow, that's a lot of good information. I wish I kn
On 07/01/2017 14:11, kevin parrikar wrote:

Thanks for your valuable input.
We were using these SSDs in our NAS box (Synology) and it was giving 13k
IOPS for our fileserver in RAID1. We had a few spare disks which we added to
our Ceph nodes hoping that they would give the same good performance as that
of the NAS box. (I am not comparing NAS with Ceph
From: kevin parrikar <kevin.parker...@gmail.com>
Date: 07/01/2017 05:48 (GMT+02:00)
To: Christian Balzer <ch...@gol.com>
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Analysing ceph performance with SSD journal, 10gbe
NIC and 2 replicas -Hammer release
I really need some help here :(
I replaced all the 7.2k rpm SAS disks with new Samsung 840 EVO 512 GB SSDs
with no separate journal disk. Now both OSD nodes have 2 SSD disks with a
replica of *2*.
The total number of OSD processes in the cluster is *4*, all on SSD.
But throughput has gone down from 1.4
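For what it's worth, with FileStore every client byte is written twice per replica when the journal shares the data device, so the write-path arithmetic alone predicts a large drop when the SSDs also handle O_DSYNC writes poorly. A rough sketch (all per-SSD throughput numbers are illustrative assumptions, not measurements from this cluster):

```python
# Rough write-path arithmetic for FileStore (Hammer) with journals
# collocated on the data SSDs: each client byte hits the journal and
# then the data partition, on every replica.

def client_write_ceiling(per_ssd_mbs, num_ssds, replicas, collocated_journal=True):
    # Total device writes per client byte: replicas * (journal + data).
    amplification = replicas * (2 if collocated_journal else 1)
    return per_ssd_mbs * num_ssds / amplification

# 4 OSDs, replica 2, collocated journals:
print(client_write_ceiling(300, 4, 2))  # SSD honoring O_DSYNC at ~300 MB/s
print(client_write_ceiling(2, 4, 2))    # consumer SSD managing ~2 MB/s O_DSYNC
```

The second case shows why a consumer SSD with very slow O_DSYNC writes can drag the whole cluster down to a few MB/s, regardless of its headline sequential speed.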
Thanks Christian for your valuable comments; each comment is a new learning
for me.
Please see inline.

On Fri, Jan 6, 2017 at 9:32 AM, Christian Balzer wrote:
> [...]
Thanks Zhong.
We got 5 servers for testing; two are already configured as OSD nodes, and
per the storage requirement we need at least 5 OSD nodes. Let me try to get
more servers to try a cache tier, but I am not hopeful though :(.
I will try bcache and see how it improves performance. Thanks.
Hello,
On Fri, 6 Jan 2017 08:40:36 +0530 kevin parrikar wrote:
> [...]
Hello All,
I have set up a Ceph cluster based on the 0.94.6 release on 2 servers, each
with an 80 GB Intel S3510 and 2x 3 TB 7.2k SATA disks, 16 CPUs and 24 GB RAM,
connected to a 10G switch with a replica of 2 [I will add 3 more servers to
the cluster], and 3 separate monitor nodes which are VMs.
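A quick back-of-envelope on that layout (the S3510 write figure below is an assumed spec-sheet value, not a measurement):

```python
# Sizing sketch for the 2-node cluster described above.
servers, disks_per_server, disk_tb = 2, 2, 3
replicas = 2

raw_tb = servers * disks_per_server * disk_tb
usable_tb = raw_tb / replicas
print(f"raw: {raw_tb} TB, usable at replica {replicas}: {usable_tb} TB")

# One 80 GB S3510 journal SSD per server, shared by both OSDs on that
# server; assume ~110 MB/s sequential write for it (assumption).
journal_mbs = 110
max_client_mbs = journal_mbs * servers / replicas
print(f"journal-limited client write ceiling: ~{max_client_mbs:.0f} MB/s")
```

With replica 2 every client write lands on both servers, so the single shared journal SSD per server caps aggregate client write throughput well below what the 10G network could carry.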