Re: [ceph-users] RHEL7/HAMMER cache tier doesn't flush or evict?

2015-04-30 Thread Don Doerner
Hi Nick,

For brevity, I didn't detail all of the commands I issued.  Looking back 
through my command history, I can confirm that I did explicitly set cache-mode 
to writeback (and later reset it to forward to try flush-and-evict).  Question: 
how did you determine that your cache-mode was not writeback?  I'll do that, 
just to  confirm that this is the problem, then reestablish the cache-mode.
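
For reference, the commands in question would have been along these lines (ctpool being the cache-tier pool named later in the thread; a sketch of the standard cache-tiering commands, not my exact history):

  # put the cache tier into writeback mode
  ceph osd tier cache-mode ctpool writeback
  # switch to forward so new I/O bypasses the cache...
  ceph osd tier cache-mode ctpool forward
  # ...then flush and evict whatever is still sitting in the cache pool
  rados -p ctpool cache-flush-evict-all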

Thank you very much for your assistance!

-don-

-Original Message-
From: Nick Fisk [mailto:n...@fisk.me.uk] 
Sent: 30 April, 2015 10:38
To: Don Doerner; ceph-users@lists.ceph.com
Subject: RE: RHEL7/HAMMER cache tier doesn't flush or evict?
Sensitivity: Personal

Hi Don,

I experienced the same thing a couple of days ago on Hammer. On investigation 
the cache mode wasn't set to writeback even though I'm sure it accepted the 
command successfully when I set the pool up.

Could you reapply the cache mode writeback command and see if that makes a 
difference?

Nick



Re: [ceph-users] RHEL7/HAMMER cache tier doesn't flush or evict?

2015-04-30 Thread Mohamed Pakkeer
Hi Don,

Did you configure target_dirty_ratio, target_full_ratio and
target_max_bytes?


K.Mohamed Pakkeer

On Thu, Apr 30, 2015 at 10:26 PM, Don Doerner don.doer...@quantum.com
wrote:

  All,



 Synopsis: I can’t get cache tiering to work in HAMMER on RHEL7.



 Process:

 1.  Fresh install of HAMMER on RHEL7 went well.

 2.  Crush map adapted to provide two “root” level resources

 a.   “ctstorage”, to use as a cache tier based on very
 high-performance, high IOPS SSD (intrinsic journal).  2 OSDs.

 b.  “ecstorage”, to use as an erasure-coded pool based on
 low-performance, cost-effective storage (extrinsic journal).  12 OSDs.

 3.  Established a pool “ctpool”, 32 PGs on ctstorage (pool size =
 min_size = 1).  Ran a quick RADOS bench test, all worked as expected.  Cleaned
 up.

 4.  Established a pool “ecpool”, 256 PGs on ecstorage (5+3 profile).  Ran
 a quick RADOS bench test, all worked as expected.  Cleaned up.

 5.  Ensured that both pools were empty (i.e., “rados ls” shows no
 objects)

 6.  Put the cache tier on the erasure coded storage (one Bloom hit
 set, interval 900 seconds), set up the overlay.  Used defaults for
 flushing and eviction.  No errors.

 7.  Started a 3600-second write test to ecpool.
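
 The tiering setup in step 6 would have been done with commands roughly like the following (pool names as above, everything else left at defaults; a sketch rather than the exact command history):

   # attach ctpool as a cache tier in front of ecpool, in writeback mode
   ceph osd tier add ecpool ctpool
   ceph osd tier cache-mode ctpool writeback
   # route client I/O aimed at ecpool through the cache tier
   ceph osd tier set-overlay ecpool ctpool
   # one Bloom-filter hit set with a 900-second period
   ceph osd pool set ctpool hit_set_type bloom
   ceph osd pool set ctpool hit_set_count 1
   ceph osd pool set ctpool hit_set_period 900
   # step 7: hour-long write test against the base pool
   rados -p ecpool bench 3600 write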



 Objects piled up in ctpool (as expected) – went past the 40% mark (as
 expected), then past the 80% mark (unexpected), then ran into the wall (95%
 full – *very* unexpected).  Using “rados df”, I can see that the cache
 tier is full (duh!) but not one single object lives in the ecpool.  Nothing
 was ever flushed, nothing was ever evicted.  Thought I might be
 misreading that, so I went back to SAR data that I captured during the
 test: the SSDs were the only [ceph] devices that sustained I/O.



 I based this experiment on another (much more successful) experiment that
 I performed using GIANT (.1) on RHEL7 a couple of weeks ago (wherein I used
 RAM as a cache tier); that all worked.  It seems there are at least three
 possibilities…

 ·  I forgot a critical step this time around.

 ·  The steps needed for a cache tier in HAMMER are different than
 the steps needed in GIANT (and different than the documentation online).

 ·  There is a problem with HAMMER in the area of cache tier.



 Has anyone successfully deployed cache-tiering in HAMMER?  Did you have
 to do anything unusual?  Do you see any steps that I missed?



 Regards,



 -don-






-- 
Thanks & Regards
K.Mohamed Pakkeer
Mobile- 0091-8754410114


Re: [ceph-users] RHEL7/HAMMER cache tier doesn't flush or evict?

2015-04-30 Thread Don Doerner
OK...  When I hit the wall, Ceph got pretty unusable; I haven't figured out how 
to restore it to health.  So to do as you suggest, I am going to have to scrape 
everything into the trash and try again (3rd time's the charm, right?) - so let 
me get started on that.  I will pause before I run the big test that can 
overflow the cache and consult with you on what specific steps you might 
recommend.
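
If the cluster becomes responsive enough to take commands again, one way to unwind the tier before deleting the pools would be roughly the following (a sketch; behaviour against a 95%-full cache pool may vary):

  # drain whatever can still be flushed and evicted
  rados -p ctpool cache-flush-evict-all
  # detach the cache tier from the base pool
  ceph osd tier remove-overlay ecpool
  ceph osd tier remove ecpool ctpool
  # then recreate the pools from scratch
  ceph osd pool delete ctpool ctpool --yes-i-really-really-mean-it
  ceph osd pool delete ecpool ecpool --yes-i-really-really-mean-it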

-don-


-Original Message-
From: Nick Fisk [mailto:n...@fisk.me.uk] 
Sent: 30 April, 2015 10:58
To: Don Doerner; ceph-users@lists.ceph.com
Subject: RE: RHEL7/HAMMER cache tier doesn't flush or evict?
Sensitivity: Personal

I'm using Inkscope to monitor my cluster and looking at the pool details I saw 
that mode was set to none. I'm pretty sure there must be a ceph cmd line to get 
the option state but I couldn't find anything obvious when I was looking for it.


Re: [ceph-users] RHEL7/HAMMER cache tier doesn't flush or evict?

2015-04-30 Thread Nick Fisk
I'm using Inkscope to monitor my cluster and looking at the pool details I
saw that mode was set to none. I'm pretty sure there must be a ceph cmd line
to get the option state but I couldn't find anything obvious when I was
looking for it.

Re: [ceph-users] RHEL7/HAMMER cache tier doesn't flush or evict?

2015-04-30 Thread Nick Fisk
Hi Don,

I experienced the same thing a couple of days ago on Hammer. On
investigation the cache mode wasn't set to writeback even though I'm sure it
accepted the command successfully when I set the pool up.

Could you reapply the cache mode writeback command and see if that makes a
difference?

Nick



Re: [ceph-users] RHEL7/HAMMER cache tier doesn't flush or evict?

2015-04-30 Thread Don Doerner
Hi Mohamed,

I did not.  But:

·  I confirmed that (by default) cache_target_dirty_ratio was set to 0.4
(40%) and cache_target_full_ratio was set to 0.8 (80%).

·  I did not set target_max_bytes (I felt that the simple, pure relative
sizing controls were sufficient for my experiment).

·  I confirmed that (by default) cache_min_flush_age and
cache_min_evict_age were set to 0 (so no required delay for either flushing or
eviction).
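
These settings can be read back per pool -- assuming the usual cache options are exposed through "ceph osd pool get" on this release -- with something like:

  ceph osd pool get ctpool cache_target_dirty_ratio
  ceph osd pool get ctpool cache_target_full_ratio
  ceph osd pool get ctpool cache_min_flush_age
  ceph osd pool get ctpool cache_min_evict_age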

Given these settings, I expected to see:

·  Flushing begin to happen at 40% of my cache tier size (~200 GB, as it
happened), or about 80 GB.  Or earlier.

·  Eviction begin to happen at 80% of my cache tier size, or about 160
GB.  Or earlier.

·  Cache tier capacity would exceed 80% only if the flushing process
couldn’t keep up with the ingest process for fairly long periods of time (at
the observed ingest rate of ~400 MB/sec, a few hundred seconds).

Am I misunderstanding something?

Thank you very much for your assistance!

-don-

From: Mohamed Pakkeer [mailto:mdfakk...@gmail.com]
Sent: 30 April, 2015 10:52
To: Don Doerner
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] RHEL7/HAMMER cache tier doesn't flush or evict?

Hi Don,

Did you configure target_dirty_ratio, target_full_ratio and target_max_bytes?


K.Mohamed Pakkeer


Re: [ceph-users] RHEL7/HAMMER cache tier doesn't flush or evict?

2015-04-30 Thread Gregory Farnum
On Thu, Apr 30, 2015 at 10:57 AM, Nick Fisk n...@fisk.me.uk wrote:
 I'm using Inkscope to monitor my cluster and looking at the pool details I
 saw that mode was set to none. I'm pretty sure there must be a ceph cmd line
 to get the option state but I couldn't find anything obvious when I was
 looking for it.

If you run ceph osd dump then any cache parameters that are set will
be output. I just set one up on a vstart cluster and the output
includes this line:
 pool 3 'ctp' replicated size 3 min_size 1 crush_ruleset 0 object_hash 
 rjenkins pg_num 16 pgp_num 16 last_change 10 flags 
 hashpspool,incomplete_clones tier_of 1 cache_mode writeback stripe_width 0


If you dump it with --format=json[-pretty] it will include everything
unconditionally and you can inspect that way as well.
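
In practice, grepping the dump for the pool in question is a quick way to spot a missing cache mode (cache pool name from this thread assumed):

  ceph osd dump | grep ctpool
  # or dump everything, including unset fields, as JSON
  ceph osd dump --format=json-pretty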
-Greg


Re: [ceph-users] RHEL7/HAMMER cache tier doesn't flush or evict?

2015-04-30 Thread Mohamed Pakkeer
Hi Don,

You have to provide the target size through target_max_bytes.
target_dirty_ratio and target_full_ratio values are based upon
target_max_bytes. If you provide target_max_bytes as 200 GB and
target_dirty_ratio as 0.4, Ceph will start flushing when the cache
tier has 80 GB of dirty objects.
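
For a ~200 GB cache pool that means setting something like the following on the cache pool (values illustrative, matching the figures above):

  # absolute ceiling the ratios are computed against (200 GiB)
  ceph osd pool set ctpool target_max_bytes 214748364800
  # start flushing dirty objects at 40% of that ceiling (~80 GB)
  ceph osd pool set ctpool cache_target_dirty_ratio 0.4
  # start evicting clean objects at 80% of that ceiling (~160 GB)
  ceph osd pool set ctpool cache_target_full_ratio 0.8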

Mohamed


Re: [ceph-users] RHEL7/HAMMER cache tier doesn't flush or evict?

2015-04-30 Thread Don Doerner
Mohamed,

Well, that’s interesting… and in direct conflict with what is written in the
documentation (http://ceph.com/docs/master/rados/operations/cache-tiering/),
which describes relative sizing as proportional to the cache pool’s capacity.
I am presently reinstalling, so I’ll give that a try.  Thanks very much.

-don-
