Re: [ceph-users] xfs corruption

2016-03-06 Thread Ferhat Ozkasgarli
Ric: you mean a RAID 0 environment, right? If you use RAID 5, RAID 10, or some other more complex RAID configuration, most of the physical disks' abilities (TRIM, discard, etc.) vanish. Only a handful of hardware RAID cards are able to pass TRIM and discard commands through to the physical disks if the RAID
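A quick way to check whether discard/TRIM actually survives the controller is to ask the block layer what it sees for the exported RAID volume; a minimal sketch, with /dev/sda standing in for the RAID device (device name illustrative):

    # Show the discard limits the kernel reports for the device;
    # all-zero DISC-GRAN/DISC-MAX columns mean discard will not be passed down.
    lsblk --discard /dev/sda

    # The same information straight from sysfs
    cat /sys/block/sda/queue/discard_max_bytes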

Re: [ceph-users] xfs corruption

2016-03-06 Thread Ric Wheeler
It is perfectly reasonable and common to use hardware RAID cards in writeback mode under XFS (and under Ceph) if you configure them properly. The key thing is that, with the writeback cache enabled, you need to make sure the SATA drives' own write cache is disabled. Also make sure that
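For reference, the drive-level write cache can be checked and turned off with hdparm; a hedged sketch, assuming the member disks are visible to the OS as /dev/sdb and so on (device name illustrative; many RAID controllers hide the disks and require this to be done through the controller's own CLI instead):

    # Report whether the drive's volatile write cache is currently enabled
    hdparm -W /dev/sdb

    # Disable the drive's write cache, leaving the controller's
    # battery/flash-backed writeback cache in charge
    hdparm -W 0 /dev/sdb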

[ceph-users] osd up_from, up_thru

2016-03-06 Thread min fang
Dear all, I used 'osd dump' to extract the OSD map and found the up_from and up_thru fields. What is the difference between up_from and up_thru? osd.0 up in weight 1 up_from 673 up_thru 673 down_at 670 last_clean_interval [637,669) Thanks.
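Broadly, up_from is the map epoch at which the OSD was most recently marked up, while up_thru is the latest epoch the OSD has confirmed to the monitors it was serving through, which peering uses when deciding whether a past interval could have accepted writes. The line quoted above comes from the OSD map; a minimal way to reproduce it (osd.0 chosen to match the question):

    # Dump the current OSD map and show the per-OSD epoch fields
    # (up_from, up_thru, down_at, last_clean_interval)
    ceph osd dump | grep '^osd.0 '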

Re: [ceph-users] Ceph RBD latencies

2016-03-06 Thread Christian Balzer
Hello,

On Mon, 7 Mar 2016 00:38:46 + Adrian Saul wrote:
> > > The Samsungs are the 850 2TB (MZ-75E2T0BW). Chosen primarily on price.
> >
> > These are spec'ed at 150TBW, or an amazingly low 0.04 DWPD (over 5 years).
> > Unless you have a read-only cluster, you will wind up spending
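The 0.04 DWPD figure follows directly from the quoted 150TBW endurance rating; as a worked check, taking the warranty period as 5 years as stated above:

\[
\text{DWPD} \;=\; \frac{\text{TBW}}{\text{capacity} \times \text{days}}
\;=\; \frac{150\ \text{TB}}{2\ \text{TB} \times 5 \times 365}
\;\approx\; 0.041
\]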

[ceph-users] how to downgrade when upgrade from firefly to hammer fail

2016-03-06 Thread Dong Wu
Hi cephers, I want to upgrade my Ceph cluster from Firefly (0.80.11) to Hammer. I successfully installed the Hammer deb packages on all my hosts, then upgraded the monitors first, and that succeeded. But when I restarted the OSDs on one host to upgrade them, it failed: the OSDs cannot start up. Now I want to
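Before attempting any rollback it is worth confirming which version each daemon is actually running and what the failing OSDs log at startup; a hedged sketch (the OSD id and the /var/log/ceph path are the defaults and purely illustrative):

    # Package version installed on this host
    ceph --version

    # Version a running OSD reports (repeat per OSD id)
    ceph tell osd.0 version

    # Startup failure details on the host whose OSDs will not come up
    tail -n 100 /var/log/ceph/ceph-osd.0.log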

Re: [ceph-users] Cache tier operation clarifications

2016-03-06 Thread Christian Balzer
Hello, I'd like to get some insights and confirmations from people here who are either familiar with the code or have tested this more empirically than me (the VM/client node of my test cluster is currently pining for the fjords). When it comes to flushing/evicting, we already established that
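For readers following along, the flushing/eviction behaviour discussed in this thread is driven by a handful of per-pool settings on the cache pool; a minimal way to inspect them, assuming the cache pool is called 'cachepool' (name illustrative):

    # Relative thresholds the tiering agent works against
    ceph osd pool get cachepool cache_target_dirty_ratio
    ceph osd pool get cachepool cache_target_full_ratio

    # Absolute sizing those ratios are applied to
    ceph osd pool get cachepool target_max_bytes
    ceph osd pool get cachepool target_max_objects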

Re: [ceph-users] Cache tier operation clarifications

2016-03-06 Thread Christian Balzer
On Sat, 5 Mar 2016 06:08:49 +0100 Francois Lafont wrote:
> Hello,
>
> On 04/03/2016 09:17, Christian Balzer wrote:
>
> > Unlike the subject may suggest, I'm mostly going to try and explain how
> > things work with cache tiers, as far as I understand them.
> > Something of a reference to point

Re: [ceph-users] Ceph RBD latencies

2016-03-06 Thread Adrian Saul
> > The Samsungs are the 850 2TB (MZ-75E2T0BW). Chosen primarily on price.
>
> These are spec'ed at 150TBW, or an amazingly low 0.04 DWPD (over 5 years).
> Unless you have a read-only cluster, you will wind up spending MORE on
> replacing them (and/or losing data when 2 fail at the same time)

Re: [ceph-users] Cache Pool and EC: objects didn't flush to a cold EC storage

2016-03-06 Thread Christian Balzer
On Sun, 6 Mar 2016 12:17:48 +0300 Mike Almateia wrote:
> Hello Cephers!
>
> When my cluster hit "full ratio" settings, objects from the cache pool
> didn't flush to cold storage.

As always, versions of everything, Ceph foremost.

> 1. Hit the 'full ratio':
>
> 2016-03-06 11:35:23.838401
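When the tiering agent is not keeping up, a manual flush/evict is one way to confirm whether flushing works at all; a hedged example, again with an illustrative cache pool name:

    # Overall and per-pool utilisation, to see how close the cluster is to full
    ceph df

    # Ask the cache tier to flush dirty objects and evict clean ones
    rados -p cachepool cache-flush-evict-all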

[ceph-users] Cache Pool and EC: objects didn't flush to a cold EC storage

2016-03-06 Thread Mike Almateia
Hello Cephers! When my cluster hit "full ratio" settings, objects from the cache pool didn't flush to cold storage.

1. Hit the 'full ratio':

2016-03-06 11:35:23.838401 osd.64 10.22.11.21:6824/31423 4327 : cluster [WRN] OSD near full (90%)
2016-03-06 11:35:55.447205 osd.64
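One common reason a cache pool never flushes before the cluster hits the full ratio is that the tiering agent has no absolute target to work against; a hedged sketch of setting one, with both the pool name and the values purely illustrative:

    # Give the agent an absolute ceiling; the cache_target_* ratios
    # are interpreted relative to this value
    ceph osd pool set cachepool target_max_bytes 500000000000

    # Optionally cap the object count as well
    ceph osd pool set cachepool target_max_objects 1000000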