All,
Synopsis: I can't get cache tiering to work in HAMMER on RHEL7.
Process:
1. Fresh install of HAMMER on RHEL7 went well.
2. Crush map adapted to provide two root level resources
a. ctstorage, to use as a cache tier based on very high-performance,
high IOPS SSD (intrinsic
was not writeback? I'll do that,
just to confirm that this is the problem, then reestablish the cache-mode.
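For reference, checking and resetting the mode is straightforward; a minimal sketch, assuming the cache pool is the ctstorage pool described above:
# show the pool's current cache-mode (listed in the pool dump)
ceph osd dump | grep ctstorage
# set the cache tier to writeback so the tiering agent will flush and evict
ceph osd tier cache-mode ctstorage writeback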
Thank you very much for your assistance!
-don-
-Original Message-
From: Nick Fisk [mailto:n...@fisk.me.uk]
Sent: 30 April, 2015 10:38
To: Don Doerner; ceph-users@lists.ceph.com
Subject: RE: RHEL7/HAMMER cache tier doesn't flush or evict?
the big test that can
overflow the cache and consult with you on what specific steps you might
recommend.
-don-
-Original Message-
From: Nick Fisk [mailto:n...@fisk.me.uk]
Sent: 30 April, 2015 10:58
To: Don Doerner; ceph-users@lists.ceph.com
Subject: RE: RHEL7/HAMMER cache tier doesn't flush or evict?
rate of ~400 MB/sec, a few hundred seconds).
Am I misunderstanding something?
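For the archives: if the agent never flushes on its own, a manual flush/evict at least confirms the tiering is wired up correctly. The cache pool name below is an assumption on my part:
# flush dirty objects and evict clean ones from the cache pool
rados -p ctstorage cache-flush-evict-all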
Thank you very much for your assistance!
-don-
From: Mohamed Pakkeer [mailto:mdfakk...@gmail.com]
Sent: 30 April, 2015 10:52
To: Don Doerner
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] RHEL7/HAMMER cache tier doesn't flush or evict?
that a try. Thanks very
much.
-don-
From: Mohamed Pakkeer [mailto:mdfakk...@gmail.com]
Sent: 30 April, 2015 11:45
To: Don Doerner
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] RHEL7/HAMMER cache tier doesn't flush or evict?
Hi Don,
You have to provide the target size through
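Presumably target_max_bytes / target_max_objects; a minimal sketch, with the pool name and sizes as placeholder examples:
# tell the tiering agent how large the cache pool may grow
ceph osd pool set ctstorage target_max_bytes 1099511627776
ceph osd pool set ctstorage target_max_objects 1000000
# start flushing dirty objects at 40% of target, evicting at 80%
ceph osd pool set ctstorage cache_target_dirty_ratio 0.4
ceph osd pool set ctstorage cache_target_full_ratio 0.8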
Folks,
I am having a hard time setting up a fresh install of GIANT on a fresh install
of RHEL7 - which you would think would be about the easiest of all situations...
1. Using ceph-deploy 1.5.22 - for some reason it never updates the
/etc/yum.repos.d to include all of the various ceph
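For comparison, a hand-written repo file along these lines can stand in until ceph-deploy writes one; the baseurl and key URL reflect the usual giant/el7 layout and are my assumptions, not ceph-deploy output:
# /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph packages for giant
baseurl=http://ceph.com/rpm-giant/el7/x86_64/
enabled=1
gpgcheck=1
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc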
OK, apparently it's also a good idea to install EPEL, not just copy over the
repo configuration from another installation.
That resolved the key error, and it appears that I have it all installed.
-don-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Don
Doerner
Sent
Hi Justin,
Ceph, proper, does not provide those services. Ceph does provide Linux block
devices (look for Rados Block Devices, aka, RBD) and a filesystem, CephFS.
I don’t know much about the filesystem, but the block devices are present on an
RBD client that you set up, following the
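As a rough sketch of the client side (pool and image names are my own examples, not Justin's setup):
# create a 100 GB image in the rbd pool, map it, and put a filesystem on it
rbd create --size 102400 rbd/testimage
rbd map rbd/testimage          # appears as e.g. /dev/rbd0
mkfs.xfs /dev/rbd0
mkdir -p /mnt/testimage && mount /dev/rbd0 /mnt/testimage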
Key problem resolved by actually installing (as opposed to simply configuring)
the EPEL repo. And with that, the cluster became viable. Thanks all.
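For anyone hitting the same thing: installing EPEL (rather than copying a repo file from elsewhere) is a one-liner on RHEL 7; the URL is the standard EPEL location, added here for completeness:
# install the EPEL release package, which also pulls in the signing key
sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm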
-don-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Don
Doerner
Sent: 04 April, 2015 09:47
To: ceph-us...@ceph.com
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Don
Doerner
Sent: 25 March, 2015 08:01
To: Udo Lembke; ceph-us...@ceph.com
Subject: Re: [ceph-users] Strange osd in PG with new EC-Pool - pgs: 2
active+undersized+degraded
Assuming you've calculated the number of PGs reasonably, see here:
http://tracker.ceph.com/issues/10350 and here:
http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/#crush-gives-up-too-soon.
I'm guessing these
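The crush-gives-up-too-soon fix boils down to letting the rule retry more placements; roughly, with file names as examples:
# extract and decompile the current crush map
ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt
# in the erasure-coded rule, add a retry step, e.g.:  step set_choose_tries 100
# then recompile and inject the edited map
crushtool -c crush.txt -o crush.new
ceph osd setcrushmap -i crush.new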
Situation: I need to use EC pools (for the economics/power/cooling) for the
storage of data, but my use case requires a block device. Ergo, I require a
cache tier. I have tried using a 3x replicated pool as a cache tier - the
throughput was poor, mostly due to latency, mostly due to device
Oh duh… OK, then given a 4+4 erasure coding scheme, 14400/8 is 1800, so try
2048.
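That is the usual (number of OSDs x 100) / (k+m) rule of thumb, rounded up to a power of two. The create then looks something like this, with the profile name as a placeholder:
# 1800 rounds up to 2048 PGs for a 4+4 erasure-coded pool
ceph osd pool create ec44pool 2048 2048 erasure ec44profile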
-don-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Don
Doerner
Sent: 04 March, 2015 12:14
To: Kyle Hutson; Ceph Users
Subject: Re: [ceph-users] New EC pool undersized
In this case
issue, I believe.
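If the underlying problem is the crush failure domain (a guess on my part, since the message is truncated here), one way to make a 4+4 pool placeable on a small test cluster is to build the profile with an osd-level failure domain; names are examples:
# hammer/giant syntax; later releases renamed this to crush-failure-domain
ceph osd erasure-code-profile set ec44profile k=4 m=4 ruleset-failure-domain=osd
ceph osd pool create ec44pool 2048 2048 erasure ec44profile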
-don-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Don
Doerner
Sent: 04 March, 2015 12:49
To: Kyle Hutson
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] New EC pool undersized
Hmmm, I just struggled through this myself. How many racks do you
12:43
To: Don Doerner
Cc: Ceph Users
Subject: Re: [ceph-users] New EC pool undersized
It wouldn't let me simply change the pg_num, giving
Error EEXIST: specified pg_num 2048 <= current 8192
But that's not a big deal, I just deleted the pool and recreated with 'ceph osd
pool create ec44pool 2048
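For completeness, the drop-and-recreate sequence is roughly the following (the profile name is an assumption):
# pg_num can only grow on an existing pool, hence the delete/recreate
ceph osd pool delete ec44pool ec44pool --yes-i-really-really-mean-it
ceph osd pool create ec44pool 2048 2048 erasure ec44profile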
, 2015 13:15
To: Don Doerner
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] New EC pool undersized
So it sounds like I should figure out at how many nodes I need to increase
pg_num to 4096, and again for 8192, and increase those incrementally as I add
more hosts, correct?
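For reference, the increase itself is just a pair of pool settings (example numbers):
# raise pg_num first, then let pgp_num follow so data actually rebalances
ceph osd pool set ec44pool pg_num 4096
ceph osd pool set ec44pool pgp_num 4096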
On Wed
Loic,
Thank you, I got it created. One of these days, I am going to have to try to
understand some of the crush map details... Anyway, on to the next step!
-don-
Update: the attempt to define a traditional replicated pool was successful;
it's online and ready to go. So the cluster basics appear sound...
-don-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Don
Doerner
Sent: 02 March, 2015 16:18
To: ceph-users@lists.ceph.com
Hello,
I am trying to set up to measure erasure coding performance and overhead. My
Ceph cluster-of-one has 27 disks, hence 27 OSDs, all empty. I have lots of
memory, and I am using osd crush chooseleaf type = 0 in my config file, so my
OSDs should be able to peer with others on the same host.
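The relevant ceph.conf fragment, for reference (everything else about the cluster omitted):
[global]
# let CRUSH pick OSDs directly, so replicas/shards may land on the same host
osd crush chooseleaf type = 0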
All,
Using ceph-deploy, I see a failure to install ceph on a node. At the beginning
of the ceph-deploy output, it says it is installing stable version giant.
The last few lines are...
[192.168.167.192][DEBUG ] -- Finished Dependency Resolution
[192.168.167.192][WARNIN] Error: Package:
Problem solved, I've been pointed at repository problem and an existing Ceph
issue (http://tracker.ceph.com/issues/10476) by a couple of helpful folks.
Thanks,
-don-
From: Don Doerner
Sent: 02 March, 2015 10:20
To: Don Doerner; ceph-users@lists.ceph.com
Subject: RE: Fresh install of GIANT
All,
I have been using Ceph to provide block devices for various, nefarious purposes
(mostly testing ;-). But as I have worked with various Linux distributions
(RHEL7, CentOS6, CentOS7) and various Ceph releases (firefly, giant), I notice
that the only combination for which I seem able to find
On Thursday, February 5, 2015 10:05 AM, Ken Dreyer kdre...@redhat.com wrote:
On 02/05/2015 08:55 AM, Don Doerner wrote:
I have been using Ceph to provide block devices for various, nefarious
purposes (mostly testing ;-). But as I have worked with various Linux
distributions (RHEL7, CentOS6, CentOS7
OK, I've set up 'giant' in a single-node cluster, played with a replicated pool
and an EC pool. All goes well so far. Question: I have two different kinds of
HDD in my server - some fast, 15K RPM SAS drives and some big, slow (5400 RPM!)
SATA drives.
Right now, I have OSDs on all, and when I
Well, look at it this way: with 3X replication, for each TB of data you need 3
TB disk. With (for example) 10+3 EC, you get better protection, and for each
TB of data you need 1.3 TB disk.
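The overhead is just (k+m)/k, i.e. 13/10 = 1.3 for a 10+3 profile. Creating and inspecting such a profile looks roughly like this (the profile name is an example):
# raw-space overhead = (k+m)/k = 1.3x
ceph osd erasure-code-profile set ec103profile k=10 m=3
ceph osd erasure-code-profile get ec103profile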
-don-
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf
Regards,
-don-
200 (i.e., (24*100)/12) placement groups?
4. As I add OSDs, can I adjust the number of PGs?
Thanks in advance...
Don Doerner
Quantum Corporation