[ceph-users] (no subject)

2018-05-18 Thread Don Doerner
unsubscribe ceph-users

[ceph-users] RHEL7/HAMMER cache tier doesn't flush or evict?

2015-04-30 Thread Don Doerner
All, Synopsis: I can't get cache tiering to work in HAMMER on RHEL7. Process: 1. Fresh install of HAMMER on RHEL7 went well. 2. Crush map adapted to provide two root-level resources: a. ctstorage, to use as a cache tier based on very high-performance, high-IOPS SSD (intrinsic
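For context, wiring up a cache tier in Hammer took roughly the following; a minimal sketch, using the ctstorage pool named above as the cache and a hypothetical ecstorage pool as the backing store:

    ceph osd tier add ecstorage ctstorage          # attach the cache pool to the base pool
    ceph osd tier cache-mode ctstorage writeback   # flushing/eviction applies to writeback caches
    ceph osd tier set-overlay ecstorage ctstorage  # route client I/O through the cache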

Re: [ceph-users] RHEL7/HAMMER cache tier doesn't flush or evict?

2015-04-30 Thread Don Doerner
was not writeback? I'll do that, just to confirm that this is the problem, then reestablish the cache-mode. Thank you very much for your assistance! -don- -Original Message- From: Nick Fisk [mailto:n...@fisk.me.uk] Sent: 30 April, 2015 10:38 To: Don Doerner; ceph-users@lists.ceph.com Subject

Re: [ceph-users] RHEL7/HAMMER cache tier doesn't flush or evict?

2015-04-30 Thread Don Doerner
the big test that can overflow the cache and consult with you on what specific steps you might recommend. -don- -Original Message- From: Nick Fisk [mailto:n...@fisk.me.uk] Sent: 30 April, 2015 10:58 To: Don Doerner; ceph-users@lists.ceph.com Subject: RE: RHEL7/HAMMER cache tier doesn't

Re: [ceph-users] RHEL7/HAMMER cache tier doesn't flush or evict?

2015-04-30 Thread Don Doerner
rate of ~400 MB/sec, a few hundred seconds). Am I misunderstanding something? Thank you very much for your assistance! -don- From: Mohamed Pakkeer [mailto:mdfakk...@gmail.com] Sent: 30 April, 2015 10:52 To: Don Doerner Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] RHEL7/HAMMER cache tier

Re: [ceph-users] RHEL7/HAMMER cache tier doesn't flush or evict?

2015-04-30 Thread Don Doerner
that a try. Thanks very much. -don- From: Mohamed Pakkeer [mailto:mdfakk...@gmail.com] Sent: 30 April, 2015 11:45 To: Don Doerner Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] RHEL7/HAMMER cache tier doesn't flush or evict? Hi Don, You have to provide the target size through
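The target size the thread converges on is set per pool; a sketch with hypothetical values (without target_max_bytes or target_max_objects, the Hammer cache agent never flushes or evicts):

    ceph osd pool set ctstorage target_max_bytes 1099511627776   # cap the cache at ~1 TB
    ceph osd pool set ctstorage target_max_objects 1000000
    ceph osd pool set ctstorage cache_target_dirty_ratio 0.4     # begin flushing at 40% dirty
    ceph osd pool set ctstorage cache_target_full_ratio 0.8      # begin evicting at 80% full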

[ceph-users] Install problems GIANT on RHEL7

2015-04-04 Thread Don Doerner
Folks, I am having a hard time setting up a fresh install of GIANT on a fresh install of RHEL7 - which you would think would be about the easiest of all situations... 1. Using ceph-deploy 1.5.22 - for some reason it never updates the /etc/yum.repos.d to include all of the various ceph

Re: [ceph-users] Install problems GIANT on RHEL7

2015-04-04 Thread Don Doerner
OK, apparently it's also a good idea to install EPEL, not just copy over the repo configuration from another installation. That resolved the key error, and it appears that I have it all installed. -don- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Don Doerner Sent
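A sketch of that fix, with a hypothetical node name (installing epel-release registers the repo and its GPG key, which hand-copied repo files miss):

    yum install -y epel-release
    ceph-deploy install --release giant node1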

Re: [ceph-users] Understanding High Availability - iSCSI/CIFS/NFS

2015-04-04 Thread Don Doerner
Hi Justin, Ceph proper does not provide those services. Ceph does provide Linux block devices (look for RADOS Block Devices, aka RBD) and a filesystem, CephFS. I don’t know much about the filesystem, but the block devices are present on an RBD client that you set up, following the
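A minimal sketch of carving out a block device on an RBD client (pool and image names hypothetical; --size is in MB on these releases):

    ceph osd pool create rbdpool 128           # replicated pool to back the images
    rbd create rbdpool/blockdev --size 10240   # 10 GB image
    rbd map rbdpool/blockdev                   # kernel client exposes it as /dev/rbd0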

Re: [ceph-users] Install problems GIANT on RHEL7

2015-04-04 Thread Don Doerner
Key problem resolved by actually installing (as opposed to simply configuring) the EPEL repo. And with that, the cluster became viable. Thanks all. -don- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Don Doerner Sent: 04 April, 2015 09:47 To: ceph-us...@ceph.com

Re: [ceph-users] Strange osd in PG with new EC-Pool - pgs: 2 active+undersized+degraded

2015-03-25 Thread Don Doerner
...@lists.ceph.com] On Behalf Of Don Doerner Sent: 25 March, 2015 08:01 To: Udo Lembke; ceph-us...@ceph.com Subject: Re: [ceph-users] Strange osd in PG with new EC-Pool - pgs: 2 active+undersized+degraded Assuming you've calculated the number of PGs reasonably, see here (http://tracker.ceph.com/issues/10350)

Re: [ceph-users] Strange osd in PG with new EC-Pool - pgs: 2 active+undersized+degraded

2015-03-25 Thread Don Doerner
Assuming you've calculated the number of PGs reasonably, see here (http://tracker.ceph.com/issues/10350) and here (http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/#crush-gives-up-too-soon). I'm guessing these
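The remedy the linked troubleshooting page describes is raising the retry count in the pool's CRUSH rule; a sketch:

    ceph osd getcrushmap -o /tmp/crushmap
    crushtool -d /tmp/crushmap -o /tmp/crushmap.txt
    # in the erasure rule, add or raise:  step set_choose_tries 100
    crushtool -c /tmp/crushmap.txt -o /tmp/crushmap.new
    ceph osd setcrushmap -i /tmp/crushmap.new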

[ceph-users] Reliable OSD

2015-03-17 Thread Don Doerner
Situation: I need to use EC pools (for the economics/power/cooling) for the storage of data, but my use case requires a block device. Ergo, I require a cache tier. I have tried using a 3x replicated pool as a cache tier - the throughput was poor, mostly due to latency, mostly due to device

Re: [ceph-users] New EC pool undersized

2015-03-04 Thread Don Doerner
Oh duh… OK, then given a 4+4 erasure coding scheme, 14400/8 is 1800, so try 2048. -don- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Don Doerner Sent: 04 March, 2015 12:14 To: Kyle Hutson; Ceph Users Subject: Re: [ceph-users] New EC pool undersized In this case
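The arithmetic behind that suggestion, assuming ~144 OSDs at the usual target of ~100 PGs per OSD and 8 shards for 4+4 (the profile name below is hypothetical):

    # (OSDs * 100) / (k + m) = 14400 / 8 = 1800, round up to a power of two
    ceph osd pool create ec44pool 2048 2048 erasure ec44profile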

Re: [ceph-users] New EC pool undersized

2015-03-04 Thread Don Doerner
issue, I believe. -don- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Don Doerner Sent: 04 March, 2015 12:49 To: Kyle Hutson Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] New EC pool undersized Hmmm, I just struggled through this myself. How many racks do you

Re: [ceph-users] New EC pool undersized

2015-03-04 Thread Don Doerner
12:43 To: Don Doerner Cc: Ceph Users Subject: Re: [ceph-users] New EC pool undersized It wouldn't let me simply change the pg_num, giving Error EEXIST: specified pg_num 2048 &lt;= current 8192 But that's not a big deal, I just deleted the pool and recreated with 'ceph osd pool create ec44pool 2048

Re: [ceph-users] New EC pool undersized

2015-03-04 Thread Don Doerner
, 2015 13:15 To: Don Doerner Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] New EC pool undersized So it sounds like I should figure out at 'how many nodes' do I need to increase pg_num to 4096, and again for 8192, and increase those incrementally as I add more hosts, correct? On Wed

Re: [ceph-users] EC configuration questions...

2015-03-03 Thread Don Doerner
Loic, Thank you, I got it created. One of these days, I am going to have to try to understand some of the crush map details... Anyway, on to the next step! -don-

Re: [ceph-users] EC configuration questions...

2015-03-02 Thread Don Doerner
Update: the attempt to define a traditional replicated pool was successful; it's online and ready to go. So the cluster basics appear sound... -don- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Don Doerner Sent: 02 March, 2015 16:18 To: ceph-users@lists.ceph.com

[ceph-users] EC configuration questions...

2015-03-02 Thread Don Doerner
Hello, I am trying to set up to measure erasure coding performance and overhead. My Ceph cluster-of-one has 27 disks, hence 27 OSDs, all empty. I have lots of memory, and I am using osd crush chooseleaf type = 0 in my config file, so my OSDs should be able to peer with others on the same
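A sketch of that single-node arrangement; the 21+6 profile is a hypothetical fit for 27 OSDs, and the failure domain is dropped to the OSD so all shards can land on one host:

    # ceph.conf
    [global]
    osd crush chooseleaf type = 0    # let PGs peer across OSDs on the same host

    ceph osd erasure-code-profile set ecprofile k=21 m=6 ruleset-failure-domain=osd
    ceph osd pool create ecpool 256 256 erasure ecprofile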

[ceph-users] Fresh install of GIANT failing?

2015-03-02 Thread Don Doerner
All, Using ceph-deploy, I see a failure to install ceph on a node. At the beginning of the ceph-deploy output, it says it is installing stable version giant. The last few lines are... [192.168.167.192][DEBUG ] -- Finished Dependency Resolution [192.168.167.192][WARNIN] Error: Package:

Re: [ceph-users] Fresh install of GIANT failing?

2015-03-02 Thread Don Doerner
Problem solved, I've been pointed at a repository problem and an existing Ceph issue (http://tracker.ceph.com/issues/10476) by a couple of helpful folks. Thanks, -don- From: Don Doerner Sent: 02 March, 2015 10:20 To: Don Doerner; ceph-users@lists.ceph.com Subject: RE: Fresh install of GIANT

[ceph-users] RBD deprecated?

2015-02-05 Thread Don Doerner
All, I have been using Ceph to provide block devices for various, nefarious purposes (mostly testing ;-). But as I have worked with various Linux distributions (RHEL7, CentOS6, CentOS7) and various Ceph releases (firefly, giant), I notice that the only combination for which I seem able to find

Re: [ceph-users] RBD deprecated?

2015-02-05 Thread Don Doerner
, February 5, 2015 10:05 AM, Ken Dreyer kdre...@redhat.com wrote: On 02/05/2015 08:55 AM, Don Doerner wrote: I have been using Ceph to provide block devices for various, nefarious purposes (mostly testing ;-). But as I have worked with various Linux distributions (RHEL7, CentOS6, CentOS7

[ceph-users] Different flavors of storage?

2015-01-21 Thread Don Doerner
OK, I've set up 'giant' in a single-node cluster, played with a replicated pool and an EC pool. All goes well so far. Question: I have two different kinds of HDD in my server - some fast, 15K RPM SAS drives and some big, slow (5400 RPM!) SATA drives. Right now, I have OSDs on all, and when I
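Pre-Luminous, the usual answer was two CRUSH roots with a rule per disk type; a sketch of the decompiled-crushmap fragments (names and weights hypothetical):

    root fast {
        id -10
        alg straw
        hash 0
        item osd.0 weight 1.000   # the 15K SAS drives go here
    }

    rule fastrule {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take fast
        step chooseleaf firstn 0 type osd
        step emit
    }

Then pin a pool to the rule:

    ceph osd pool set fastpool crush_ruleset 1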

Re: [ceph-users] erasure coded pool why ever k1?

2015-01-21 Thread Don Doerner
Well, look at it this way: with 3X replication, for each TB of data you need 3 TB disk. With (for example) 10+3 EC, you get better protection, and for each TB of data you need 1.3 TB disk. -don- -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf
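Worked out, the raw capacity needed is stored * (k + m) / k:

    3x replication : 1 TB data -> 3.0 TB raw, survives 2 OSD failures
    10+3 EC        : 1 TB data -> 1.3 TB raw, survives 3 OSD failures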

[ceph-users] unsubscribe

2015-01-12 Thread Don Doerner
unsubscribe Regards, -don-

[ceph-users] Ceph erasure-coded pool

2015-01-12 Thread Don Doerner
200 (i.e., (24*100)/12) placement groups? 4. As I add OSDs, can I adjust the number of PGs? Thanks in advance... ___ Don Doerner Quantum Corporation
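On that last question: pg_num can be raised (never lowered) on a live pool, with pgp_num raised afterward to actually rebalance data; a sketch with hypothetical numbers:

    ceph osd pool set ecpool pg_num 512
    ceph osd pool set ecpool pgp_num 512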