Re: [ceph-users] SSD Journal

2016-07-14 Thread Christian Balzer
coalesced. All my other MONs (co-located on OSD storage nodes) are on S37x0 SSDs. OTOH, the Supermicro DoMs look nice enough on paper with 1 DWPD: https://www.supermicro.com/products/nfo/SATADOM.cfm The 64GB model should do the trick in most scenarios. Christian

Re: [ceph-users] SSD Journal

2016-07-13 Thread Christian Balzer
you were to run a MON on those machines as well. Christian

Re: [ceph-users] SSD Journal

2016-07-13 Thread Kees Meijs
Hi, This is an OSD box running Hammer on Ubuntu 14.04 LTS with additional systems administration tools:
> $ df -h | grep -v /var/lib/ceph/osd
> Filesystem      Size  Used Avail Use% Mounted on
> udev            5,9G  4,0K  5,9G   1% /dev
> tmpfs           1,2G  892K  1,2G   1% /run
> /dev/dm-1

Re: [ceph-users] SSD Journal

2016-07-13 Thread Wido den Hollander

Re: [ceph-users] SSD Journal

2016-07-13 Thread Ashley Merrick

Re: [ceph-users] SSD Journal

2016-07-13 Thread Wido den Hollander

Re: [ceph-users] SSD Journal

2016-07-13 Thread Ashley Merrick

Re: [ceph-users] SSD Journal

2016-07-12 Thread Christian Balzer
Hello, On Tue, 12 Jul 2016 19:14:14 +0200 (CEST) Wido den Hollander wrote: > On 12 July 2016 at 15:31, Ashley Merrick wrote: > > Hello, > > Looking at final stages of planning / setup for a CEPH Cluster. > > Per a Storage node looking @

Re: [ceph-users] SSD Journal

2016-07-12 Thread Wido den Hollander
> On 12 July 2016 at 15:31, Ashley Merrick wrote:
> Hello,
> Looking at final stages of planning / setup for a CEPH Cluster.
> Per a Storage node looking @
> 2 x SSD OS / Journal
> 10 x SATA Disk
> Will have a small Raid 1 Partition for the OS, however

[ceph-users] SSD Journal

2016-07-12 Thread Ashley Merrick
Hello, Looking at final stages of planning / setup for a CEPH Cluster. Per a Storage node looking @
2 x SSD OS / Journal
10 x SATA Disk
Will have a small Raid 1 Partition for the OS, however not sure if best to do:
5 x Journal Per a SSD
10 x Journal on Raid 1 of two SSD's
Is the
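A hedged sketch of the first option (five journal partitions per SSD, one per OSD); the device names, the 10G partition size and the node name are assumptions, not from the thread:

  # create five journal partitions on each SSD (the GUID is the ceph-disk journal partition type)
  for ssd in /dev/sda /dev/sdb; do
    for i in 1 2 3 4 5; do
      sgdisk --new=0:0:+10G --typecode=0:45b0969e-9b03-4f30-b4c6-b4b80ceff106 "$ssd"
    done
  done
  # then map each SATA data disk to one journal partition, e.g.
  # ceph-deploy osd create node1:sdc:/dev/sda1 node1:sdd:/dev/sda2 ... node1:sdl:/dev/sdb5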

Re: [ceph-users] SSD Journal Performance Priorties

2016-02-26 Thread Somnath Roy

[ceph-users] SSD Journal Performance Priorties

2016-02-26 Thread Lindsay Mathieson
Ignoring the durability and network issues for now :) Are there any aspects of a journal's performance that matter most for overall ceph performance? i.e. my initial thought is if I want to improve ceph write performance journal seq write speed is what matters. Does random write speed factor
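One common way to measure exactly that workload (not from this thread; /dev/sdX is a placeholder and the run overwrites the device): the journal sees small synchronous sequential writes, which something like the following fio job approximates:

  fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 \
      --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based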

Re: [ceph-users] SSD Journal

2016-02-01 Thread Jan Schermer
Hi, unfortunately I'm not a dev, so it's gonna be someone else ripping the journal out and trying. But I came to understand that getting rid of journal is not that easy of a task. To me, more important would be if the devs understood what I'm trying to say :-) because without that any new

Re: [ceph-users] SSD Journal

2016-01-31 Thread Alex Gorbachev

Re: [ceph-users] SSD Journal

2016-01-29 Thread Lionel Bouton
On 29/01/2016 01:12, Jan Schermer wrote: > [...] >> Second I'm not familiar with Ceph internals but OSDs must make sure that >> their PGs are synced so I was under the impression that the OSD content for >> a PG on the filesystem should always be guaranteed to be on all the other >> active

Re: [ceph-users] SSD Journal

2016-01-29 Thread Jan Schermer
es, because you > mirrored that to a filesystem that mirrors that to a hard drive... No need of > journals on top of filesystems with journals with data on filesystems with > journals... My databases are not that fond of the multi-ms committing limbo > while data falls down th

Re: [ceph-users] SSD Journal

2016-01-29 Thread Lionel Bouton
On 29/01/2016 16:25, Jan Schermer wrote: > > [...] > > > But if I understand correctly, there is indeed a log of the recent > modifications in the filestore which is used when a PG is recovering > because another OSD is lagging behind (not when Ceph reports a full > backfill

Re: [ceph-users] SSD Journal

2016-01-29 Thread Jan Schermer
> On 29 Jan 2016, at 16:00, Lionel Bouton > wrote: > > On 29/01/2016 01:12, Jan Schermer wrote: >> [...] >>> Second I'm not familiar with Ceph internals but OSDs must make sure that >>> their PGs are synced so I was under the impression that the OSD content

Re: [ceph-users] SSD Journal

2016-01-29 Thread Robert LeBlanc
Jan, I know that Sage has worked through a lot of this and spent a lot of time on it, so I'm somewhat inclined to say that if he says it needs to be there, then it needs to be there. I, however, have been known to stare at the trees so much that I

Re: [ceph-users] SSD Journal

2016-01-29 Thread Anthony D'Atri
> Right now we run the journal as a partition on the data disk. I've build > drives without journals and the write performance seems okay but random io > performance is poor in comparison to what it should be. Co-located journals have multiple issues: o The disks are presented with double
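The double-write point above works out roughly as follows (the 150 MB/s streaming figure is an assumed example):

  # every client byte hits the co-located journal and then the filestore on the
  # same spindle, so a ~150 MB/s disk caps out near 150/2 = 75 MB/s of client
  # throughput, before counting the seeks between the two regions
  echo $((150 / 2))   # MB/s upper bound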

Re: [ceph-users] SSD Journal

2016-01-28 Thread Tyler Bishop
: "Bill WONG" <wongahsh...@gmail.com> To: "ceph-users" <ceph-users@lists.ceph.com> Sent: Thursday, January 28, 2016 1:36:01 PM Subject: [ceph-users] SSD Journal Hi, i have tested with SSD Journal with SATA, it works perfectly.. now, i am testing with full SSD ceph

Re: [ceph-users] SSD Journal

2016-01-28 Thread Somnath Roy
Tyler Bishop wrote: What approach did sandisk take with this for jewel?

Re: [ceph-users] SSD Journal

2016-01-28 Thread Lionel Bouton
On 28/01/2016 22:32, Jan Schermer wrote: > P.S. I feel very strongly that this whole concept is broken > fundamentally. We already have a journal for the filesystem which is > time proven, well behaved and above all fast. Instead there's this > reinvented wheel which supposedly does it better in

Re: [ceph-users] SSD Journal

2016-01-28 Thread Tyler Bishop
You can't run Ceph OSD without a journal. The journal is always there. If you don't have a journal partition then there's a "journal" file on the OSD filesystem that does the same thing. If it's a partition then th
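A quick way to see which form a given OSD's journal takes (path assumes a default filestore layout; the OSD id is a placeholder):

  ls -l /var/lib/ceph/osd/ceph-0/journal
  # a regular file if the journal lives on the OSD filesystem, or a symlink to a
  # partition (e.g. /dev/sdb1) when a separate journal device is used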

Re: [ceph-users] SSD Journal

2016-01-28 Thread Jan Schermer

Re: [ceph-users] SSD Journal

2016-01-28 Thread Jan Schermer
s down through those dream layers :P I really don't know how to explain that more. I bet if you ask on LKML, someone like Theodore Ts'o would say "you're doing completely superfluous work" in more technical terms. Jan

Re: [ceph-users] SSD Journal

2016-01-28 Thread Jan Schermer
> On 28 Jan 2016, at 23:19, Lionel Bouton > wrote: > > On 28/01/2016 22:32, Jan Schermer wrote: >> P.S. I feel very strongly that this whole concept is broken fundamentally. We >> already have a journal for the filesystem which is time proven, well behaved

[ceph-users] SSD Journal

2016-01-28 Thread Bill WONG
Hi, i have tested with SSD Journal with SATA, it works perfectly. Now i am testing with a full SSD ceph cluster: do i still need to have an SSD as journal disk? [assumed i do not have PCIe SSD Flash, which is better performance than a normal SSD disk] please give some

Re: [ceph-users] SSD Journal

2016-01-28 Thread Somnath Roy
Tyler Bishop wrote: What approach did sandisk take with this for jewel?

[ceph-users] SSD Journal Best Practice

2015-01-12 Thread lidc...@redhat.com
Hi everyone: I plan to use an SSD Journal to improve performance. I have one 1.2T SSD disk per server. What is the best practice for the SSD Journal? There are three choices to deploy the SSD Journal: 1. all OSDs use the same ssd partition ceph-deploy osd create

Re: [ceph-users] SSD Journal Best Practice

2015-01-12 Thread lidc...@redhat.com
For the first choice: ceph-deploy osd create ceph-node:sdb:/dev/ssd ceph-node:sdc:/dev/ssd i find ceph-deploy will create partitions automatically, and each partition is 5G by default. So the first choice and the second choice are almost the same. Compared to a filesystem, I prefer a block device to get
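For reference, the size of those automatically created partitions comes from the osd journal size setting; a hedged ceph.conf fragment (the 10 GB value is just an example) would be:

  [osd]
  # size in MB used when ceph-disk creates a journal partition;
  # 5120 (the 5G default mentioned above) applies when unset
  osd journal size = 10240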

Re: [ceph-users] SSD journal deployment experiences

2014-09-10 Thread Christian Balzer
On Tue, 9 Sep 2014 10:57:26 -0700 Craig Lewis wrote: On Sat, Sep 6, 2014 at 9:27 AM, Christian Balzer ch...@gol.com wrote: On Sat, 06 Sep 2014 16:06:56 + Scott Laird wrote: Backing up slightly, have you considered RAID 5 over your SSDs? Practically speaking, there's no

Re: [ceph-users] SSD journal deployment experiences

2014-09-09 Thread Craig Lewis
On Sat, Sep 6, 2014 at 7:50 AM, Dan van der Ster daniel.vanders...@cern.ch wrote: BTW, do you happen to know, _if_ we re-use an OSD after the journal has failed, are any object inconsistencies going to be found by a scrub/deep-scrub? I haven't tested this, but I did something I *think* is
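For what it's worth, the check being discussed can be forced by hand (the OSD and PG ids below are placeholders):

  ceph osd deep-scrub osd.12    # deep-scrub every PG hosted on that OSD
  ceph pg deep-scrub 3.1a       # or a single placement group
  ceph pg repair 3.1a           # repair after reviewing what the scrub reports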

Re: [ceph-users] SSD journal deployment experiences

2014-09-09 Thread Craig Lewis
On Sat, Sep 6, 2014 at 9:27 AM, Christian Balzer ch...@gol.com wrote: On Sat, 06 Sep 2014 16:06:56 + Scott Laird wrote: Backing up slightly, have you considered RAID 5 over your SSDs? Practically speaking, there's no performance downside to RAID 5 when your devices aren't IOPS-bound.

Re: [ceph-users] SSD journal deployment experiences

2014-09-08 Thread Quenten Grasso

Re: [ceph-users] SSD journal deployment experiences

2014-09-08 Thread Christian Balzer

Re: [ceph-users] SSD journal deployment experiences

2014-09-06 Thread Christian Balzer

Re: [ceph-users] SSD journal deployment experiences

2014-09-06 Thread Dan van der Ster
Hi Christian, Let's keep debating until a dev corrects us ;)

Re: [ceph-users] SSD journal deployment experiences

2014-09-06 Thread Christian Balzer
On Sat, 6 Sep 2014 13:07:27 + Dan van der Ster wrote: Hi Christian, Let's keep debating until a dev corrects us ;) For the time being, I give the recent: https://www.mail-archive.com/ceph-users@lists.ceph.com/msg12203.html And not so recent:

Re: [ceph-users] SSD journal deployment experiences

2014-09-06 Thread Dan van der Ster
September 6 2014 4:01 PM, Christian Balzer ch...@gol.com wrote: On Sat, 6 Sep 2014 13:07:27 + Dan van der Ster wrote: Hi Christian, Let's keep debating until a dev corrects us ;) For the time being, I give the recent:

Re: [ceph-users] SSD journal deployment experiences

2014-09-06 Thread Scott Laird
Backing up slightly, have you considered RAID 5 over your SSDs? Practically speaking, there's no performance downside to RAID 5 when your devices aren't IOPS-bound. On Sat Sep 06 2014 at 8:37:56 AM Christian Balzer ch...@gol.com wrote: On Sat, 6 Sep 2014 14:50:20 + Dan van der Ster wrote:

Re: [ceph-users] SSD journal deployment experiences

2014-09-06 Thread Christian Balzer
On Sat, 06 Sep 2014 16:06:56 + Scott Laird wrote: Backing up slightly, have you considered RAID 5 over your SSDs? Practically speaking, there's no performance downside to RAID 5 when your devices aren't IOPS-bound. Well... For starters with RAID5 you would lose 25% throughput in both

Re: [ceph-users] SSD journal deployment experiences

2014-09-06 Thread Dan Van Der Ster
RAID5... Hadn't considered it due to the IOPS penalty (it would get 1/4th of the IOPS of separated journal devices, according to some online raid calc). Compared to RAID10, I guess we'd get 50% more capacity, but lower performance. After the anecdotes that the DCS3700 is very rarely failing,
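The 1/4th figure follows from the classic RAID5 small-write penalty; a hedged back-of-envelope (35k write IOPS per SSD is an assumed number):

  # each random write costs 4 device I/Os (read data, read parity, write data,
  # write parity): 4 SSDs as independent journals ~ 4 x 35000 = 140000 write IOPS,
  # while the same 4 SSDs in RAID5 ~ 140000 / 4 = 35000
  echo $((4 * 35000 / 4))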

Re: [ceph-users] SSD journal deployment experiences

2014-09-05 Thread Dan Van Der Ster
Hi Christian, On 05 Sep 2014, at 03:09, Christian Balzer ch...@gol.com wrote: Hello, On Thu, 4 Sep 2014 14:49:39 -0700 Craig Lewis wrote: On Thu, Sep 4, 2014 at 9:21 AM, Dan Van Der Ster daniel.vanders...@cern.ch wrote: 1) How often are DC S3700's failing in your deployments?

Re: [ceph-users] SSD journal deployment experiences

2014-09-05 Thread Nigel Williams
On Fri, Sep 5, 2014 at 5:46 PM, Dan Van Der Ster daniel.vanders...@cern.ch wrote: On 05 Sep 2014, at 03:09, Christian Balzer ch...@gol.com wrote: You might want to look into cache pools (and dedicated SSD servers with fast controllers and CPUs) in your test cluster and for the future. Right

Re: [ceph-users] SSD journal deployment experiences

2014-09-05 Thread Dan Van Der Ster
On 05 Sep 2014, at 10:30, Nigel Williams nigel.d.willi...@gmail.com wrote: On Fri, Sep 5, 2014 at 5:46 PM, Dan Van Der Ster daniel.vanders...@cern.ch wrote: On 05 Sep 2014, at 03:09, Christian Balzer ch...@gol.com wrote: You might want to look into cache pools (and dedicated SSD servers

Re: [ceph-users] SSD journal deployment experiences

2014-09-05 Thread Dan Van Der Ster

[ceph-users] SSD journal deployment experiences

2014-09-04 Thread Dan Van Der Ster
Dear Cephalopods, In a few weeks we will receive a batch of 200GB Intel DC S3700’s to augment our cluster, and I’d like to hear your practical experience and discuss options how best to deploy these. We’ll be able to equip each of our 24-disk OSD servers with 4 SSDs, so they will become 20

Re: [ceph-users] SSD journal deployment experiences

2014-09-04 Thread Robert LeBlanc
We are still pretty early on in our testing of how to best use SSDs as well. What we are trying right now, for some of the reasons you mentioned already, is to use bcache as a cache for both journal and data. We have 10 spindles in our boxes with 2 SSDs. We created two bcaches (one for each SSD)
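For anyone wanting to try a similar layout, a rough bcache sketch (device names, cache mode and per-device mapping are assumptions, not Robert's exact commands):

  make-bcache -C /dev/sdk                      # create a cache set on one SSD
  make-bcache -B /dev/sda /dev/sdb /dev/sdc    # register spindles as backing devices
  # attach each backing device (bcache0, bcache1, ...) to the cache set;
  # the UUID comes from bcache-super-show /dev/sdk
  echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach
  echo writeback > /sys/block/bcache0/bcache/cache_mode
  # then mkfs and mount /dev/bcache0 as the OSD data disk as usual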

Re: [ceph-users] SSD journal deployment experiences

2014-09-04 Thread Dan Van Der Ster
I've just been reading the bcache docs. It's a pity the mirrored writes aren't implemented yet. Do you know if you can use an md RAID1 as a cache dev? And is the graceful failover from wb to writethrough actually working without data loss? Also, write behind sure would help the filestore,

Re: [ceph-users] SSD journal deployment experiences

2014-09-04 Thread Robert LeBlanc
You should be able to use any block device in a bcache device. Right now, we are OK losing one SSD and it takes out 5 OSDs. We would rather have twice the cache. Our opinion may change in the future. We wanted to keep overhead as low as possible. I think we may spend the extra on heavier

Re: [ceph-users] SSD journal deployment experiences

2014-09-04 Thread Dan van der Ster
Thanks again for all of your input. I agree with your assessment -- in our cluster we avg 40ms for a 4k write. That's why we're adding the SSDs -- you just can't run a proportioned RBD service without them. I'll definitely give bcache a try in my test setup, but more reading has kinda

Re: [ceph-users] SSD journal deployment experiences

2014-09-04 Thread Stefan Priebe
Hi Dan, hi Robert, On 04.09.2014 21:09, Dan van der Ster wrote: Thanks again for all of your input. I agree with your assessment -- in our cluster we avg 3ms for a random (hot) 4k read already, but 40ms for a 4k write. That's why we're adding the SSDs -- you just can't run a proportioned RBD

Re: [ceph-users] SSD journal deployment experiences

2014-09-04 Thread Robert LeBlanc
This is good to know. I just recompiled the CentOS7 3.10 kernel to enable bcache (I doubt they patched bcache since they don't compile/enable it). I've seen when I ran Ceph in VMs on my workstation that there were oops with bcache, but doing the bcache device and the backend device even with two

Re: [ceph-users] SSD journal deployment experiences

2014-09-04 Thread Dan van der Ster
Hi Stefan, September 4 2014 9:13 PM, Stefan Priebe s.pri...@profihost.ag wrote: Hi Dan, hi Robert, On 04.09.2014 21:09, Dan van der Ster wrote: Thanks again for all of your input. I agree with your assessment -- in our cluster we avg 3ms for a random (hot) 4k read already, but 40ms

Re: [ceph-users] SSD journal deployment experiences

2014-09-04 Thread Martin B Nielsen
Hi Dan, We took a different approach (and our cluster is tiny compared to many others) - we have two pools; normal and ssd. We use 14 disks in each osd-server; 8 platter and 4 ssd for ceph, and 2 ssd for OS/journals. We partitioned the two OS ssd as raid1 using about half the space for the OS
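A hedged sketch of how such a platter/ssd pool split is usually wired into CRUSH on a pre-Luminous cluster (bucket, rule and pool names are made up here):

  ceph osd crush add-bucket ssd-root root
  # move the SSD-backed OSDs/hosts under ssd-root, then:
  ceph osd crush rule create-simple ssd-rule ssd-root host
  ceph osd pool create ssd 512 512
  ceph osd pool set ssd crush_ruleset 1    # rule id as shown by 'ceph osd crush rule dump'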

Re: [ceph-users] SSD journal deployment experiences

2014-09-04 Thread Dan van der Ster
Hi Craig, September 4 2014 11:50 PM, Craig Lewis cle...@centraldesktop.com wrote: On Thu, Sep 4, 2014 at 9:21 AM, Dan Van Der Ster daniel.vanders...@cern.ch wrote: 1) How often are DC S3700's failing in your deployments? None of mine have failed yet. I am planning to monitor the wear
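One hedged way to do that wear monitoring (attribute names vary by vendor; Media_Wearout_Indicator is what the Intel DC series exposes, and /dev/sdf is a placeholder):

  smartctl -A /dev/sdf | grep -i -e wearout -e wear_leveling -e reallocated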

Re: [ceph-users] SSD journal deployment experiences

2014-09-04 Thread Mark Kirkwood
On 05/09/14 10:05, Dan van der Ster wrote: That's good to know. I would plan similarly for the wear out. But I want to also prepare for catastrophic failures -- in the past we've had SSDs just disappear like a device unplug. Those were older OCZ's though... Yes - the Intel dc style drives

Re: [ceph-users] SSD journal deployment experiences

2014-09-04 Thread Christian Balzer
Hello, On Thu, 4 Sep 2014 14:49:39 -0700 Craig Lewis wrote: On Thu, Sep 4, 2014 at 9:21 AM, Dan Van Der Ster daniel.vanders...@cern.ch wrote: 1) How often are DC S3700's failing in your deployments? None of mine have failed yet. I am planning to monitor the wear level

Re: [ceph-users] SSD journal overload?

2014-04-30 Thread Indra Pramana
Hi Irek, Good day to you. Any updates/comments on below? Looking forward to your reply, thank you. Cheers. On Tue, Apr 29, 2014 at 12:47 PM, Indra Pramana in...@sg.or.id wrote: Hi Irek, Good day to you, and thank you for your e-mail. Is there a better way other than patching the

Re: [ceph-users] SSD journal overload?

2014-04-28 Thread Udo Lembke
Hi, perhaps due to IOs from the journal? You can test with iostat (like iostat -dm 5 sdg). On debian iostat is in the sysstat package. Udo On 28.04.2014 07:38, Indra Pramana wrote: Hi Craig, Good day to you, and thank you for your enquiry. As per your suggestion, I have created a 3rd

Re: [ceph-users] SSD journal overload?

2014-04-28 Thread Irek Fasikhov
What model SSD do you have? Which version of the kernel? 2014-04-28 12:35 GMT+04:00 Udo Lembke ulem...@polarzone.de: Hi, perhaps due to IOs from the journal? You can test with iostat (like iostat -dm 5 sdg). On debian iostat is in the sysstat package. Udo On 28.04.2014 07:38, Indra

Re: [ceph-users] SSD journal overload?

2014-04-28 Thread Indra Pramana
Hi Udo and Irek, Good day to you, and thank you for your emails. perhaps due to IOs from the journal? You can test with iostat (like iostat -dm 5 sdg). Yes, I have shared the iostat result earlier on this same thread. At times the utilisation of the 2 journal drives will hit 100%, especially when

Re: [ceph-users] SSD journal overload?

2014-04-28 Thread Irek Fasikhov
Most likely you need to apply a patch to the kernel. http://www.theirek.com/blog/2014/02/16/patch-dlia-raboty-s-enierghoniezavisimym-keshiem-ssd-diskov 2014-04-28 15:20 GMT+04:00 Indra Pramana in...@sg.or.id: Hi Udo and Irek, Good day to you, and thank you for your emails. perhaps due

Re: [ceph-users] SSD journal overload?

2014-04-28 Thread Indra Pramana
Hi Irek, Thanks for the article. Do you have any other web sources pertaining to the same issue, which is in English? Looking forward to your reply, thank you. Cheers. On Mon, Apr 28, 2014 at 7:40 PM, Irek Fasikhov malm...@gmail.com wrote: Most likely you need to apply a patch to the

Re: [ceph-users] SSD journal overload?

2014-04-28 Thread Irek Fasikhov
This is my article :). Apply the patch to the kernel (http://www.theirek.com/downloads/code/CMD_FLUSH.diff). After rebooting, run the following command: echo "temporary write through" > /sys/class/scsi_disk/<disk>/cache_type 2014-04-28 15:44 GMT+04:00 Indra Pramana in...@sg.or.id: Hi Irek, Thanks for

Re: [ceph-users] SSD journal overload?

2014-04-28 Thread Indra Pramana
Hi Irek, Good day to you, and thank you for your e-mail. Is there a better way other than patching the kernel? I would like to avoid having to compile a custom kernel for my OS. I read that I can disable write-caching on the drive using hdparm:
hdparm -W0 /dev/sdf
hdparm -W0 /dev/sdg
I tested

Re: [ceph-users] SSD journal overload?

2014-04-27 Thread Indra Pramana
Hi Craig, Good day to you, and thank you for your enquiry. As per your suggestion, I have created a 3rd partition on the SSDs and did the dd test directly into the device, and the result is very slow. root@ceph-osd-08:/mnt# dd bs=1M count=128 if=/dev/zero of=/dev/sdg3 conv=fdatasync

Re: [ceph-users] SSD journal overload?

2014-04-25 Thread Craig Lewis
I am not able to do a dd test on the SSDs since it's not mounted as filesystem, but dd on the OSD (non-SSD) drives gives normal result. Since you have free space on the SSDs, you could add a 3rd 10G partition to one of the SSDs. Then you could put a filesystem on that partition, or just dd

Re: [ceph-users] SSD journal partition and trim

2013-12-03 Thread James Pearce
Since the journal partitions are generally small, it shouldn't need to be. For example implement with substantial under-provisioning, either via HPA or simple partitions. On 2013-12-03 12:18, Loic Dachary wrote: Hi Ceph, When an SSD partition is used to store a journal

Re: [ceph-users] SSD journal partition and trim

2013-12-03 Thread Emmanuel Lacour
On Tue, Dec 03, 2013 at 12:38:54PM +, James Pearce wrote: Since the journal partitions are generally small, it shouldn't need to be. here with 2 journals (2 osds) on two ssd (samsung 850 pro, soft raid1 + lvm + xfs) trim is just obligatory. We forgot to set it at cluster setup and one

Re: [ceph-users] SSD journal partition and trim

2013-12-03 Thread James Pearce
How much (%) is left unprovisioned on those (840s?) ? And were they trim'd/secure erased before deployment? On 2013-12-03 12:45, Emmanuel Lacour wrote: On Tue, Dec 03, 2013 at 12:38:54PM +, James Pearce wrote: Since the journal partitions are generally small, it shouldn't need to be.

Re: [ceph-users] SSD journal partition and trim

2013-12-03 Thread Emmanuel Lacour
On Tue, Dec 03, 2013 at 12:48:21PM +, James Pearce wrote: How much (%) is left unprovisioned on those (840s?) ? And were they trim'd/secure erased before deployment? unfortunately, everything was provisioned (though there is free space in the VG) due to lack of knowledge. Nothing

Re: [ceph-users] SSD journal partition and trim

2013-12-03 Thread James Pearce
Most likely. When fully provisioned the device has a much smaller pool of cells to manage (i.e. charge) in the background, hence once that pool is exhausted the device has no option but to stall whilst it clears (re-charges) a cell, which takes something like 2-5ms. Daily cron task is though

Re: [ceph-users] SSD journal partition and trim

2013-12-03 Thread Emmanuel Lacour
On Tue, Dec 03, 2013 at 01:22:50PM +, James Pearce wrote: Daily cron task is though still a good idea - enabling the discard mount option is generally counter-productive since trim is issued way too often, destroying performance (in my testing). yes that's why we are using cron here.
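A minimal example of that cron approach (the mountpoint is a placeholder for wherever the SSD-backed journal filesystem is mounted):

  #!/bin/sh
  # /etc/cron.daily/fstrim -- daily trim instead of the discard mount option
  fstrim -v /srv/ceph/journals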

Re: [ceph-users] SSD journal partition and trim

2013-12-03 Thread Loic Dachary
On 03/12/2013 13:38, James Pearce wrote: Since the journal partitions are generally small, it shouldn't need to be. For example implement with substantial under-provisioning, either via HPA or simple partitions. Does that mean the problem will happen much later or that it will never

Re: [ceph-users] SSD journal partition and trim

2013-12-03 Thread James Pearce
It shouldn't happen, provided the sustained write rate doesn't exceed the sustained erase capabilities of the device I guess. Daily fstrim will not hurt though. Essentially the mapping between LBAs and physical cells isn't persistent in SSDs (unlike LBA and physical sectors on an HDD).

Re: [ceph-users] SSD journal partition and trim

2013-12-03 Thread Loic Dachary
Crystal clear, thanks for the tutorial :-) On 03/12/2013 16:52, James Pearce wrote: It shouldn't happen, provided the sustained write rate doesn't exceed the sustained erase capabilities of the device I guess. Daily fstrim will not hurt though. Essentially the mapping between LBAs and