Re: [ceph-users] pros/cons of multiple OSD's per host

2017-08-20 Thread Nick Tan
On Mon, Aug 21, 2017 at 1:58 PM, Christian Balzer  wrote:

> On Mon, 21 Aug 2017 13:40:29 +0800 Nick Tan wrote:
>
> > Hi all,
> >
> > I'm in the process of building a ceph cluster, primarily to use cephFS.
> At
> > this stage I'm in the planning phase and doing a lot of reading on best
> > practices for building the cluster, however there's one question that I
> > haven't been able to find an answer to.
> >
> > Is it better to use many hosts with single OSD's, or fewer hosts with
> > multiple OSD's?  I'm looking at using 8 or 10TB HDD's as OSD's and hosts
> > with up to 12 HDD's.  If a host dies, that means up to 120TB of data will
> > need to be recovered if the host has 12 x 10TB HDD's.  But if smaller
> hosts
> > with single HDD's are used then a single host failure will result in
> only a
> > maximum of 10TB to be recovered, so in this case it looks better to use
> > smaller hosts with single OSD's if the failure domain is the host.
> >
> > Are there other benefits or drawbacks of using many small servers with
> > single OSD's vs fewer large servers with lots of OSD's?
> >
>
> Ideally you'll have smallish hosts with smallish disks (not 10TB monsters),
> both to reduce the impact an OSD or host loss would have as well as
> improving IOPS (more spindles).
>
> With larger hosts you'll also want to make sure that a single host failure
> is not going to create a "full" (and thus unusable) cluster, in addition to
> the I/O strain that recovery will cause.
> 5 or 10 hosts are a common, typical starting point.
>
> Also important to remember is the configuration parameter
> "mon_osd_down_out_subtree_limit = host"
> since repairing a large host is likely to be faster than replicating all
> the data it held.
>
> Of course "ideally" tends to mean "expensive" in most cases and this is
> no exception.
>
> Smaller hosts are more expensive in terms of space and parts (a NIC for
> each OSD instead of one per 12, etc).
> And before you mention really small hosts with 1GbE NICs: the latency
> penalty there is significant, and the limitation to about 100MB/s is more
> of an issue with reads than writes.
>
> Penultimately you need to balance your budget, rack space and needs.
>


Thanks Christian.  The tip about the "mon_osd_down_out_subtree_limit =
host" setting is very useful.  If we go down the path of large servers (12+
disks), my intention is to have a spare empty chassis, so in the case of a
server failure I could move the disks into the spare chassis and bring it
back online, which would be much faster than trying to recover 12 OSD's.
That was my main concern with the large servers, which this helps
alleviate.  Thanks!
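
For reference, a rough sketch of that swap procedure (assuming the OSDs were
deployed with ceph-disk and the spare chassis comes up with the same hostname,
so the OSDs land back in the same CRUSH host bucket):

  # ceph osd set noout          # keep the down OSDs from being marked out while the host is off
  # (power the failed server down, move the 12 disks to the spare chassis, boot it)
  # ceph-disk activate-all      # re-activate the OSD partitions if udev hasn't already done so
  # ceph osd tree               # confirm the OSDs are back up under the host
  # ceph osd unset noout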
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] pros/cons of multiple OSD's per host

2017-08-20 Thread Christian Balzer
On Mon, 21 Aug 2017 13:40:29 +0800 Nick Tan wrote:

> Hi all,
> 
> I'm in the process of building a ceph cluster, primarily to use cephFS.  At
> this stage I'm in the planning phase and doing a lot of reading on best
> practices for building the cluster, however there's one question that I
> haven't been able to find an answer to.
> 
> Is it better to use many hosts with single OSD's, or fewer hosts with
> multiple OSD's?  I'm looking at using 8 or 10TB HDD's as OSD's and hosts
> with up to 12 HDD's.  If a host dies, that means up to 120TB of data will
> need to be recovered if the host has 12 x 10TB HDD's.  But if smaller hosts
> with single HDD's are used then a single host failure will result in only a
> maximum of 10TB to be recovered, so in this case it looks better to use
> smaller hosts with single OSD's if the failure domain is the host.
> 
> Are there other benefits or drawbacks of using many small servers with
> single OSD's vs fewer large servers with lots of OSD's?
> 

Ideally you'll have smallish hosts with smallish disks (not 10TB monsters),
both to reduce the impact an OSD or host loss would have as well as
improving IOPS (more spindles).

With larger hosts you'll also want to make sure that a single host failure
is not going to create a "full" (and thus unusable) cluster, in addition to
the I/O strain that recovery will cause.
5 or 10 hosts are a common, typical starting point.
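
To put rough numbers on the "full" point (the host count, disk sizes and the
default 0.85/0.95 nearfull/full ratios below are assumptions for illustration):

  echo $(( 5 * 12 * 10 ))   # 600 TB raw with 5 hosts of 12 x 10TB all up
  echo $(( 4 * 12 * 10 ))   # 480 TB raw left once one host is out
  # everything stored (including replicas) has to stay well under
  # 0.85 * 480 TB after recovery re-replicates the lost host, or the
  # cluster goes nearfull/full during recovery.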

Also important to remember is the configuration parameter
"mon_osd_down_out_subtree_limit = host"
since repairing a large host is likely to be faster than replicating all
the data it held.
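
As a sketch, that would look like this in ceph.conf on the monitors (the
interval line is optional and its value only illustrative):

  [mon]
  mon osd down out subtree limit = host
  # optionally, more grace time before individual down OSDs get marked out:
  # mon osd down out interval = 600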

Of course "ideally" tends to mean "expensive" in most cases and this is
no exception. 

Smaller hosts are more expensive in terms of space and parts (a NIC for
each OSD instead of one per 12, etc).
And before you mention really small hosts with 1GbE NICs: the latency
penalty there is significant, and the limitation to about 100MB/s is more
of an issue with reads than writes.

Penultimately you need to balance your budget, rack space and needs.

Christian
-- 
Christian Balzer        Network/Systems Engineer
ch...@gol.com   Rakuten Communications
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] pros/cons of multiple OSD's per host

2017-08-20 Thread Nick Tan
Hi all,

I'm in the process of building a ceph cluster, primarily to use cephFS.  At
this stage I'm in the planning phase and doing a lot of reading on best
practices for building the cluster, however there's one question that I
haven't been able to find an answer to.

Is it better to use many hosts with single OSD's, or fewer hosts with
multiple OSD's?  I'm looking at using 8 or 10TB HDD's as OSD's and hosts
with up to 12 HDD's.  If a host dies, that means up to 120TB of data will
need to be recovered if the host has 12 x 10TB HDD's.  But if smaller hosts
with single HDD's are used then a single host failure will result in only a
maximum of 10TB to be recovered, so in this case it looks better to use
smaller hosts with single OSD's if the failure domain is the host.

Are there other benefits or drawbacks of using many small servers with
single OSD's vs fewer large servers with lots of OSD's?

Thanks,
Nick
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph pgs state forever stale+active+clean

2017-08-20 Thread Hyun Ha
Hi, thank you for the response.

Details of my pool is below:
pool 2 'volumes' replicated size 2 min_size 1 crush_ruleset 0 object_hash
rjenkins pg_num 128 pgp_num 128 last_change 627 flags hashpspool
stripe_width 0
removed_snaps [1~3]
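
For comparison, moving a pool like the one above to the more common size 3 /
min_size 2 layout would look like this (a sketch; it triggers data movement
and uses more space):

  # ceph osd pool set volumes size 3
  # ceph osd pool set volumes min_size 2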

My test case is about a disaster scenario. I think a situation where all
copies of the data are deleted can occur in production (in my test I deleted
all copies of the data myself to simulate a disaster).

When all copies of the data are deleted, the ceph cluster never gets back to
clean. How can I recover from this situation?
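
From what I have read so far, the last-resort sketch on jewel when the data on
the acting OSDs is genuinely gone and the loss is accepted seems to be the
following (OSD ids and the pgid are just taken from my test output below, and
the force_create_pg step is repeated per stuck PG):

  # ceph osd lost 2 --yes-i-really-mean-it
  # ceph osd lost 6 --yes-i-really-mean-it
  # ceph pg force_create_pg 2.62      # recreates the stale PG as an empty PG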

Thank you.


2017-08-18 21:28 GMT+09:00 David Turner :

> What were the settings for your pool? What was the size?  It looks like
> the size was 2 and that the PGs only existed on osds 2 and 6. If that's the
> case, it's like having a 4 disk raid 1+0, removing 2 disks of the same
> mirror, and complaining that the other mirror didn't pick up the data...
> Don't delete all copies of your data.  If your replica size is 2, you
> cannot lose 2 disks at the same time.
>
> On Fri, Aug 18, 2017, 1:28 AM Hyun Ha  wrote:
>
>> Hi, Cephers!
>>
>> I'm currently testing the situation of double failure for ceph cluster.
>> But, I faced that pgs are in stale state forever.
>>
>> reproduce steps)
>> 0. ceph version : jewel 10.2.3 (ecc23778eb545d8dd55e2e4735b53cc93f92e65b)
>> 1. Pool create : exp-volumes (size = 2, min_size = 1)
>> 2. rbd create : testvol01
>> 3. rbd map and create a filesystem with mkfs.xfs
>> 4. mount and create file
>> 5. list rados object
>> 6. check osd map of each object
>>  # ceph osd map exp-volumes rbd_data.4a41f238e1f29.017a
>>osdmap e199 pool 'exp-volumes' (2) object 
>> 'rbd_data.4a41f238e1f29.017a'
>> -> pg 2.3f04d6e2 (2.62) -> up ([2,6], p2) acting ([2,6], p2)
>> 7. stop primary osd.2 and secondary osd.6 of above object at the same
>> time
>> 8. check ceph status
>> health HEALTH_ERR
>> 16 pgs are stuck inactive for more than 300 seconds
>> 16 pgs stale
>> 16 pgs stuck stale
>>  monmap e11: 3 mons at {10.105.176.85=10.105.176.85:
>> 6789/0,10.110.248.154=10.110.248.154:6789/0,10.110.249.153=
>> 10.110.249.153:6789/0}
>> election epoch 84, quorum 0,1,2 10.105.176.85,10.110.248.154,
>> 10.110.249.153
>>  osdmap e248: 6 osds: 4 up, 4 in; 16 remapped pgs
>> flags sortbitwise,require_jewel_osds
>>   pgmap v112095: 128 pgs, 1 pools, 14659 kB data, 17 objects
>> 165 MB used, 159 GB / 160 GB avail
>>  112 active+clean
>>   16 stale+active+clean
>>
>> # ceph health detail
>> HEALTH_ERR 16 pgs are stuck inactive for more than 300 seconds; 16 pgs
>> stale; 16 pgs stuck stale
>> pg 2.67 is stuck stale for 689.171742, current state stale+active+clean,
>> last acting [2,6]
>> pg 2.5a is stuck stale for 689.171748, current state stale+active+clean,
>> last acting [6,2]
>> pg 2.52 is stuck stale for 689.171753, current state stale+active+clean,
>> last acting [2,6]
>> pg 2.4d is stuck stale for 689.171757, current state stale+active+clean,
>> last acting [2,6]
>> pg 2.56 is stuck stale for 689.171755, current state stale+active+clean,
>> last acting [6,2]
>> pg 2.d is stuck stale for 689.171811, current state stale+active+clean,
>> last acting [6,2]
>> pg 2.79 is stuck stale for 689.171808, current state stale+active+clean,
>> last acting [2,6]
>> pg 2.1f is stuck stale for 689.171782, current state stale+active+clean,
>> last acting [6,2]
>> pg 2.76 is stuck stale for 689.171809, current state stale+active+clean,
>> last acting [6,2]
>> pg 2.17 is stuck stale for 689.171794, current state stale+active+clean,
>> last acting [6,2]
>> pg 2.63 is stuck stale for 689.171794, current state stale+active+clean,
>> last acting [2,6]
>> pg 2.77 is stuck stale for 689.171816, current state stale+active+clean,
>> last acting [2,6]
>> pg 2.1b is stuck stale for 689.171793, current state stale+active+clean,
>> last acting [6,2]
>> pg 2.62 is stuck stale for 689.171765, current state stale+active+clean,
>> last acting [2,6]
>> pg 2.30 is stuck stale for 689.171799, current state stale+active+clean,
>> last acting [2,6]
>> pg 2.19 is stuck stale for 689.171798, current state stale+active+clean,
>> last acting [6,2]
>>
>>  # ceph pg dump_stuck stale
>> ok
>> pg_stat state   up  up_primary  acting  acting_primary
>> 2.67   stale+active+clean  [2,6]   2   [2,6]   2
>> 2.5a   stale+active+clean  [6,2]   6   [6,2]   6
>> 2.52   stale+active+clean  [2,6]   2   [2,6]   2
>> 2.4d   stale+active+clean  [2,6]   2   [2,6]   2
>> 2.56   stale+active+clean  [6,2]   6   [6,2]   6
>> 2.d    stale+active+clean  [6,2]   6   [6,2]   6
>> 2.79   stale+active+clean  [2,6]   2   [2,6]   2
>> 2.1f   stale+active+clean  [6,2]   6   [6,2]   6
>> 2.76   stale+active+clean  [6,2]   6   [6,2]   6
>> 2.17   stale+active+clean  [6,2]   6   [6,2]   6
>> 2.63   stale+

Re: [ceph-users] Cephfs fsal + nfs-ganesha + el7/centos7

2017-08-20 Thread TYLin
You can get the rpm from here:

https://download.gluster.org/pub/gluster/glusterfs/nfs-ganesha/old/2.3.0/CentOS/nfs-ganesha.repo

You have to fix the path mismatch error in the repo file manually.
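
Roughly, the install then looks like this (a sketch; the baseurl in the repo
file still needs the manual fix mentioned above):

  # curl -o /etc/yum.repos.d/nfs-ganesha.repo \
      https://download.gluster.org/pub/gluster/glusterfs/nfs-ganesha/old/2.3.0/CentOS/nfs-ganesha.repo
  # vi /etc/yum.repos.d/nfs-ganesha.repo      # fix the path mismatch in the baseurl
  # yum install nfs-ganesha nfs-ganesha-ceph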

> On Aug 20, 2017, at 5:38 AM, Marc Roos  wrote:
> 
> 
> 
> Where can you get the nfs-ganesha-ceph rpm? Is there a repository that 
> has these?
> 
> 
> 
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph cluster with SSDs

2017-08-20 Thread Christian Balzer
On Mon, 21 Aug 2017 01:48:49 + Adrian Saul wrote:

> > SSD make details : SSD 850 EVO 2.5" SATA III 4TB Memory & Storage - MZ-
> > 75E4T0B/AM | Samsung  
> 
> The performance difference between these and the SM or PM863 range is night 
> and day.  I would not use these for anything you care about with performance, 
> particularly IOPS or latency.
> Their write latency is highly variable and even at best is still 5x higher 
> than what the SM863 range does.  When we compared them we could not get them 
> below 6ms and they frequently spiked to much higher values (25-30ms).  With 
> the SM863s they were a constant sub 1ms and didn't fluctuate.  I believe it 
> was the garbage collection on the Evos that causes the issue.  Here was the 
> difference in average latencies from a pool made of half Evo and half SM863:
> 
> Write latency - Evo 7.64ms - SM863 0.55ms
> Read Latency - Evo 2.56ms - SM863  0.16ms
> 
Yup, you get this unpredictable (and thus unsuitable) randomness and
generally higher latency with nearly all consumer SSDs.
And yes, typically GC related.

The reason they're so slow with sync writes is almost certainly that their
large DRAM cache is useless here, as that cache isn't protected against
power failure and thus has to be bypassed.
Other consumer SSDs (IIRC Intel 510s amongst them) used to blatantly lie
about sync writes and thus appeared fast while putting your data at
significant risk.
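
The easy way to see this for yourself is a single-threaded sync write test
against the bare device with fio (a sketch; destructive to anything on the
disk, so only run it on an empty drive, /dev/sdX being a placeholder):

  fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 \
      --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based

DC-grade drives tend to stay fast and flat here, while consumer drives with
volatile caches collapse to a fraction of their advertised IOPS.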

Christian

> Add to that Christian's remarks on the write endurance and they are only good 
> for desktops that won't exercise them that much.  You are far better 
> investing in DC/Enterprise grade devices.
> 
> 
> 
> 
> >
> > On Sat, Aug 19, 2017 at 10:44 PM, M Ranga Swami Reddy
> >  wrote:  
> > > Yes, it's in production and uses the PG count as per the PG calculator @  
> > ceph.com.  
> > >
> > > On Fri, Aug 18, 2017 at 3:30 AM, Mehmet  wrote:  
> > >> Which ssds are used? Are they in production? If so how is your PG Count?
> > >>
> > >> Am 17. August 2017 20:04:25 MESZ schrieb M Ranga Swami Reddy
> > >> :  
> > >>>
> > >>> Hello,
> > >>> I am using the Ceph cluster with HDDs and SSDs. Created separate
> > >>> pool for each.
> > >>> Now, when I ran the "ceph osd bench", HDD's OSDs show around 500
> > >>> MB/s and SSD's OSD show around 280MB/s.
> > >>>
> > >>> Ideally, what I expected was that the SSD OSDs should be at least 40%
> > >>> higher as compared with the HDD OSD bench.
> > >>>
> > >>> Did I miss anything here? Any hint is appreciated.
> > >>>
> > >>> Thanks
> > >>> Swami
> > >>> 
> > >>>
> > >>> ceph-users mailing list
> > >>> ceph-users@lists.ceph.com
> > >>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com  
> > >>
> > >>
> > >> ___
> > >> ceph-users mailing list
> > >> ceph-users@lists.ceph.com
> > >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > >>  
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com  
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 


-- 
Christian Balzer        Network/Systems Engineer
ch...@gol.com   Rakuten Communications
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph cluster with SSDs

2017-08-20 Thread Adrian Saul
> SSD make details : SSD 850 EVO 2.5" SATA III 4TB Memory & Storage - MZ-
> 75E4T0B/AM | Samsung

The performance difference between these and the SM or PM863 range is night and 
day.  I would not use these for anything you care about with performance, 
particularly IOPS or latency.
Their write latency is highly variable and even at best is still 5x higher than 
what the SM863 range does.  When we compared them we could not get them below 
6ms and they frequently spiked to much higher values (25-30ms).  With the 
SM863s they were a constant sub 1ms and didn't fluctuate.  I believe it was the 
garbage collection on the Evos that causes the issue.  Here was the difference 
in average latencies from a pool made of half Evo and half SM863:

Write latency - Evo 7.64ms - SM863 0.55ms
Read Latency - Evo 2.56ms - SM863  0.16ms

Add to that Christian's remarks on the write endurance and they are only good 
for desktops that won't exercise them that much.  You are far better investing 
in DC/Enterprise grade devices.




>
> On Sat, Aug 19, 2017 at 10:44 PM, M Ranga Swami Reddy
>  wrote:
> > Yes, it's in production and uses the PG count as per the PG calculator @
> ceph.com.
> >
> > On Fri, Aug 18, 2017 at 3:30 AM, Mehmet  wrote:
> >> Which ssds are used? Are they in production? If so how is your PG Count?
> >>
> >> Am 17. August 2017 20:04:25 MESZ schrieb M Ranga Swami Reddy
> >> :
> >>>
> >>> Hello,
> >>> I am using the Ceph cluster with HDDs and SSDs. Created separate
> >>> pool for each.
> >>> Now, when I ran the "ceph osd bench", HDD's OSDs show around 500
> >>> MB/s and SSD's OSD show around 280MB/s.
> >>>
> >>> Ideally, what I expected was that the SSD OSDs should be at least 40%
> >>> higher as compared with the HDD OSD bench.
> >>>
> >>> Did I miss anything here? Any hint is appreciated.
> >>>
> >>> Thanks
> >>> Swami
> >>> 
> >>>
> >>> ceph-users mailing list
> >>> ceph-users@lists.ceph.com
> >>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >>
> >>
> >> ___
> >> ceph-users mailing list
> >> ceph-users@lists.ceph.com
> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Confidentiality: This email and any attachments are confidential and may be 
subject to copyright, legal or some other professional privilege. They are 
intended solely for the attention and use of the named addressee(s). They may 
only be copied, distributed or disclosed with the consent of the copyright 
owner. If you have received this email by mistake or by breach of the 
confidentiality clause, please notify the sender immediately by return email 
and delete or destroy all copies of the email. Any confidentiality, privilege 
or copyright is not waived or lost because this email has been sent to you by 
mistake.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Random Read Write Performance

2017-08-20 Thread Christian Balzer

Hello,

On Sun, 20 Aug 2017 18:07:09 +0700 Sam Huracan wrote:

> Hi,
> 
> I have a question about Ceph's performance
You really, really want to do yourself a favor and research things (i.e.
google the archives of this ML).
Hardly a week goes by without somebody asking this question.

> I've built a Ceph cluster with 3 OSD hosts, each host's configuration:
>  - CPU: 1 x Intel Xeon E5-2620 v4 2.1GHz
You will want higher clock speeds and fewer cores in general with SSDs and
NVMe.
Analyze your systems with atop, etc, during these benchmarks and see if
you're CPU bound, which you might very well be.
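
For example, something as simple as this on each OSD host while the benchmark
runs (a sketch, assuming atop and sysstat are installed):

  # atop 2                                   # overall CPU/disk/network pressure, 2s interval
  # pidstat -u -p $(pgrep -d, ceph-osd) 2    # per ceph-osd daemon CPU usage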

>  - Memory: 2 x 16GB RDIMM
While sufficient, that doesn't leave much room for pagecache and SLAB.

>  - Disk: 2 x 300GB 15K RPM SAS 12Gbps (RAID 1 for OS)
Unless these are behind a HW cache RAID controller (and assuming your MONs
are also on these 3 hosts), SSDs for the leveldb activity would be better.


> 4 x 800GB Solid State Drive SATA (non-RAID for OSD)(Intel SSD
> DC S3610)
Good luck getting any more of these.

>  - NIC: 1 x 10Gbps (bonding for both public and replicate network).
> 
1x doesn't sound like bonding; one assumes 4x 10Gbps?
Also, splitting the network only makes _real_ performance sense in far fewer
scenarios than most people think.

> My ceph.conf: https://pastebin.com/r4pJ3P45
> We use this cluster for OpenStack cinder's backend.
> 
> We have benchmarked this cluster with 6 VMs, using vdbench.
> Our vdbench script:  https://pastebin.com/9sxhrjie
> 
> After testing, we got these results:
>  - 100% random read: 100,000 IOPS
>  - 100% random write: 20,000 IOPS
>  - 75% RR - 25% RW: 80,000 IOPS
> 
> Those results are very low, because we calculated the performance of this
> cluster as 112,000 IOPS write and 1,000,000 IOPS read.
> 

You're making the same mistake as all the people before you (see above,
google) and expecting the local performance of your SSDs when you're in
fact dealing with the much more complex and involved Ceph stack.

Firstly, for writes, these happen twice, once to the journal and once to
the actual OSD storage space. 
So just 56K write IOPS, _if_ they were local. 
Which they AREN'T. 
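
Spelled out with rough numbers (assuming ~28K random 4K write IOPS per S3610
per the spec sheet and a replicated pool of size 3, both assumptions):

  echo $(( 12 * 28000 ))          # ~336K raw write IOPS over 12 OSDs
  echo $(( 12 * 28000 / 3 ))      # ~112K after 3x replication -- your calculated figure
  echo $(( 12 * 28000 / 3 / 2 ))  # ~56K once the co-located filestore journal doubles every write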

And now meet Mr. Latency, both in the network and the Ceph software stack.
This is where the rest of your performance gets lost.
The network part is unavoidable (a local SAS/SATA link is not the same as
a bonded 10Gbps link), though 25Gbps, IB etc can help.
The Ceph stack will benefit from faster CPUs as mentioned above.

> We are using Ceph Jewel 10.2.5-1trusty, kernel 4.4.0-31-generic, Ubuntu
> 14.04
> 
Not exactly the latest crop of kernels, as a general comment.


Christian
-- 
Christian Balzer        Network/Systems Engineer
ch...@gol.com   Rakuten Communications
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph Random Read Write Performance

2017-08-20 Thread Sam Huracan
Hi,

I have a question about Ceph's performance.
I've built a Ceph cluster with 3 OSD hosts, each host's configuration:
 - CPU: 1 x Intel Xeon E5-2620 v4 2.1GHz
 - Memory: 2 x 16GB RDIMM
 - Disk: 2 x 300GB 15K RPM SAS 12Gbps (RAID 1 for OS)
4 x 800GB Solid State Drive SATA (non-RAID for OSD)(Intel SSD
DC S3610)
 - NIC: 1 x 10Gbps (bonding for both public and replication network).

My ceph.conf: https://pastebin.com/r4pJ3P45
We use this cluster for OpenStack cinder's backend.

We have benchmarked this cluster with 6 VMs, using vdbench.
Our vdbench script:  https://pastebin.com/9sxhrjie

After testing, we got these results:
 - 100% random read: 100,000 IOPS
 - 100% random write: 20,000 IOPS
 - 75% RR - 25% RW: 80,000 IOPS

Those results are very low, because we calculated the performance of this
cluster as 112,000 IOPS write and 1,000,000 IOPS read.

We are using Ceph Jewel 10.2.5-1trusty, kernel 4.4.0-31-generic, Ubuntu
14.04


Could you help me solve this issue?

Thanks in advance
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph cluster with SSDs

2017-08-20 Thread Christian Balzer
On Sun, 20 Aug 2017 08:38:54 +0200 Sinan Polat wrote:

> What has DWPD to do with performance / IOPS? The SSD will just fail earlier, 
> but it should not have any effect on the performance, right?
> 
Nothing, I listed BOTH reasons why these are unsuitable.

You don't buy something huge like a 4TB SSD and expect to write just
40GB/day to it.
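
A back-of-envelope for the endurance side, DWPD = TBW / (warranty days *
capacity); the 300 TBW figure below is an assumed consumer-class rating,
check the actual datasheet:

  echo "scale=3; 300 / (5 * 365 * 4)" | bc    # ~0.04 DWPD for an assumed 300 TBW over 5 years on 4TB
  # versus the 1-3+ DWPD typical of DC/enterprise SATA SSDs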

> Correct me if I am wrong, just want to learn.
>
Learning is easy if you're willing to make a little effort.
Just like the OP, you failed to search for "Samsung Evo Ceph" and find all
the bad news, like in this result:

https://forum.proxmox.com/threads/slow-ceph-journal-on-samsung-850-pro.27733/
 
Christian
> 
> > On 20 Aug 2017, at 06:03, Christian Balzer wrote:
> > 
> > DWPD  
> 
> 


-- 
Christian Balzer        Network/Systems Engineer
ch...@gol.com   Rakuten Communications
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com