Maybe someone can shed some new light on this:
1. Only SSD-cache OSDs are affected by this issue
2. Total cache OSD count is 12x60GiB, backend filesystem is ext4
3. I have created 2 cache tier pools with replica size=3 on those OSDs,
both with pg_num:400, pgp_num:400
4. There was a crush ruleset:
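(The ruleset itself is cut off in the archive. Purely as a stand-in, a
replicated rule pinned to an SSD-only branch would look roughly like this;
the root name "ssd" and the ruleset id are hypothetical, not from the
original message:)

rule ssd-cache {
        ruleset 4                            # hypothetical id
        type replicated
        min_size 1
        max_size 10
        step take ssd                        # assumes a crush root holding the cache OSDs
        step chooseleaf firstn 0 type host
        step emit
}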
Hello there,
I started to push data into my ceph cluster. There is something I
cannot understand in the output of ceph -w.
When I run ceph -w I get this kind of output:
2015-03-25 09:11:36.785909 mon.0 [INF] pgmap v278788: 26056 pgs: 26056
active+clean; 2379 MB data, 19788 MB used, 33497 GB /
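(If it helps, a minimal sketch of breaking those numbers down further --
standard commands, nothing specific to this cluster:)

ceph df          # raw usage vs per-pool data
ceph df detail   # adds per-pool object counts and read/write totals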
Hi,
after adding two more hosts (now 7 storage nodes) I want to create a new
ec-pool and I'm seeing a strange effect:
ceph@admin:~$ ceph health detail
HEALTH_WARN 2 pgs degraded; 2 pgs stuck degraded; 2 pgs stuck unclean; 2
pgs stuck undersized; 2 pgs undersized
pg 22.3e5 is stuck unclean since forever,
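(A minimal sketch of digging into one of these, using the pg id above:)

ceph pg dump_stuck unclean   # list every stuck-unclean pg with its acting set
ceph pg 22.3e5 query         # full peering/recovery state for that one pg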
Hi, Jesus
I encountered a similar problem.
1. Shut down one of the nodes, but none of the OSDs on that node came back up after reboot.
2. Ran service ceph restart manually and got the same error message:
[root@storage4 ~]# /etc/init.d/ceph start
=== osd.15 ===
2015-03-23 14:43:32.399811 7fed0fcf4700 -1
Hi guys,
We've got a very small Ceph cluster (3 hosts, 5 OSDs each for cold data)
that we intend to grow later on as more storage is needed. We would very
much like to use Erasure Coding for some pools but are facing some
challenges regarding the optimal initial profile “replication” settings
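(For concreteness, the kind of profile being discussed -- a sketch with a
hypothetical profile name and Firefly-era option names; on 3 hosts with
failure domain = host, k=2/m=1 is about the only layout that still puts
exactly one chunk per host:)

ceph osd erasure-code-profile set ec3hosts k=2 m=1 ruleset-failure-domain=host
ceph osd erasure-code-profile get ec3hosts
ceph osd pool create ecpool 256 256 erasure ec3hosts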
Hello,
I have a few questions regarding snapshots and fstrim with cache tiers.
In the cache tier and erasure coding FAQ related to ICE 1.2 (based on
Firefly), Inktank says "Snapshots are not supported in conjunction with cache
tiers."
What are the risks of using snapshots with cache
Hi Tom,
On 25/03/2015 11:31, Tom Verdaat wrote: Hi guys,
We've got a very small Ceph cluster (3 hosts, 5 OSDs each for cold data)
that we intend to grow later on as more storage is needed. We would very much
like to use Erasure Coding for some pools but are facing some challenges
It doesn't look like your OSD is mounted. What do you have when you run
mount? How did you create your OSDs?
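A quick sketch of what I'd look at (paths assume the default layout, and
osd.15 is just the example from the earlier message):

mount | grep ceph                # the osd data dirs should show up here
ls /var/lib/ceph/osd/ceph-15/    # an empty dir usually means nothing mounted
ceph-disk list                   # which partitions ceph-disk knows about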
Robert LeBlanc
Sent from a mobile device please excuse any typos.
On Mar 25, 2015 1:31 AM, oyym...@gmail.com wrote:
Hi, Jesus
I encountered a similar problem.
1. shut
On Wed, Mar 25, 2015 at 1:20 AM, Udo Lembke ulem...@polarzone.de wrote:
Hi,
after adding two more hosts (now 7 storage nodes) I want to create a new
ec-pool and I'm seeing a strange effect:
ceph@admin:~$ ceph health detail
HEALTH_WARN 2 pgs degraded; 2 pgs stuck degraded; 2 pgs stuck unclean; 2
pgs
Yes.
On Wed, Mar 25, 2015 at 4:13 AM, Frédéric Nass
frederic.n...@univ-lorraine.fr wrote:
Hi Greg,
Thank you for this clarification. It helps a lot.
Does this "can't think of any issues" apply to both rbd and pool snapshots?
Frederic.
On Tue, Mar 24,
On Wed, Mar 25, 2015 at 1:24 AM, Saverio Proto ziopr...@gmail.com wrote:
Hello there,
I started to push data into my ceph cluster. There is something I
cannot understand in the output of ceph -w.
When I run ceph -w I get this kind of output:
2015-03-25 09:11:36.785909 mon.0 [INF] pgmap
- Original Message -
From: Neville neville.tay...@hotmail.co.uk
To: ceph-users@lists.ceph.com
Sent: Wednesday, March 25, 2015 8:16:39 AM
Subject: [ceph-users] Radosgw authorization failed
Hi all,
I'm testing a backup product which supports Amazon S3 as a target for Archive
storage
Sorry all: my company's e-mail security got in the way there. Try these
references...
* http://tracker.ceph.com/issues/10350
* http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/#crush-gives-up-too-soon
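For reference, a minimal sketch of the edit cycle those links describe
(file names arbitrary):

ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt
# edit crush.txt: raise "step set_choose_tries" inside the ec rule
crushtool -c crush.txt -o crush-new.bin
ceph osd setcrushmap -i crush-new.bin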
-don-
From: ceph-users
Hi Gregory,
thanks for the answer!
I have looked at which storage nodes are missing, and it's two different ones:
pg 22.240 is stuck undersized for 24437.862139, current state
active+undersized+degraded, last acting
[38,85,17,74,2147483647,10,58]
Hi all,
I'm testing a backup product which supports Amazon S3 as a target for Archive
storage, and I'm trying to set up a Ceph cluster configured with the S3 API to
use as an internal target for backup archives instead of AWS.
I've followed the online guide for setting up Radosgw and created a
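(The first thing worth checking for authorization failures -- a sketch, the
uid is just an example:)

radosgw-admin user create --uid=backup --display-name="backup user"
radosgw-admin user info --uid=backup   # confirm access/secret keys match the client config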
Hi Don,
thanks for the info!
looks like setting choose_tries to 200 does the trick.
But the setcrushmap takes a long, long time (alarming, but the clients still
have IO)... hope it finishes soon ;-)
Udo
Am 25.03.2015 16:00, schrieb Don Doerner:
Assuming you've calculated the number of PGs
Hi Somnath,
Thanks, the tcmalloc env variable trick definitely had an impact on
FetchFromSpans calls.
export TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=1310851072;
/etc/init.d/ceph stop; /etc/init.d/ceph start
Nevertheless, if this FetchFromSpans library call activity is now even
on all
Assuming you've calculated the number of PGs reasonably, see here
(http://tracker.ceph.com/issues/10350) and here
(http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/#crush-gives-up-too-soon).
I'm guessing these
Hi Fredrick,
See my response inline.
Thanks & Regards
Somnath
From: f...@univ-lr.fr [mailto:f...@univ-lr.fr]
Sent: Wednesday, March 25, 2015 8:07 AM
To: Somnath Roy
Cc: Ceph Users
Subject: Re: [ceph-users] Uneven CPU usage on OSD nodes
Hi Somnath,
Thanks, the tcmalloc env variable trick
On Wed, Mar 25, 2015 at 10:36 AM, Jake Grimmett j...@mrc-lmb.cam.ac.uk wrote:
Dear All,
Please forgive this post if it's naive, I'm trying to familiarise myself
with cephfs!
I'm using Scientific Linux 6.6 with Ceph 0.87.1
My first steps with cephfs using a replicated pool worked OK.
Now
I don't know much about ceph-deploy, but I know that ceph-disk has
problems automatically adding an SSD OSD when there are journals of
other disks already on it. I've had to partition the disk ahead of
time and pass in the partitions to make ceph-disk work.
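Roughly what that looks like (a sketch; the size and the journal type GUID
are from memory, double-check before use):

# carve a journal partition and tag it with the ceph journal type code
sgdisk --new=1:0:+10G --typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdf
# then hand both devices to ceph-disk explicitly
ceph-disk prepare /dev/sdb /dev/sdf1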
Also, unless you are sure that the dev
On Wed, Mar 25, 2015 at 6:06 PM, Robert LeBlanc rob...@leblancnet.us wrote:
I don't know much about ceph-deploy, but I know that ceph-disk has
problems automatically adding an SSD OSD when there are journals of
other disks already on it. I've had to partition the disk ahead of
time and pass
Probably a case of trying to read too fast. Sorry about that.
As far as your theory on the cache pool, I haven't tried that, but my
gut feeling is that it won't help as much as having the journal on the
SSD. The cache tier isn't trying to collate writes the way the
journal does. Then on the
Hi,
due to PG trouble with an EC pool I modified the crushmap (adding step
set_choose_tries 200) from
rule ec7archiv {
        ruleset 6
        type erasure
        min_size 3
        max_size 20
        step set_chooseleaf_tries 5
        step take default
        step chooseleaf indep 0 type host
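(A sketch of sanity-checking such a change before injecting the map,
assuming ruleset 6 and 7 chunks as above:)

crushtool -c crush.txt -o crush.bin
crushtool -i crush.bin --test --rule 6 --num-rep 7 --show-bad-mappings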
Dear All,
Please forgive this post if it's naive, I'm trying to familiarise myself
with cephfs!
I'm using Scientific Linux 6.6 with Ceph 0.87.1
My first steps with cephfs using a replicated pool worked OK.
Now trying to test cephfs via a replicated caching tier on top of an
erasure
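(For reference, the standard sequence for layering a replicated cache over
an EC pool -- pool names and pg counts here are examples:)

ceph osd pool create ecpool 128 128 erasure
ceph osd pool create cachepool 128 128
ceph osd tier add ecpool cachepool
ceph osd tier cache-mode cachepool writeback
ceph osd tier set-overlay ecpool cachepool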
Hi all,
I'm trying to install ceph on a 7-nodes preproduction cluster. Each
node has 24x 4TB SAS disks (2x Dell MD1400 enclosures) and 6x 800GB
SSDs (for cache tiering, not journals). I'm using Ubuntu 14.04 and
ceph-deploy to install the cluster, I've been trying both Firefly and
Giant and
Great info! Many thanks!
Tom
2015-03-25 13:30 GMT+01:00 Loic Dachary l...@dachary.org:
Hi Tom,
On 25/03/2015 11:31, Tom Verdaat wrote: Hi guys,
We've got a very small Ceph cluster (3 hosts, 5 OSDs each for cold
data) that we intend to grow later on as more storage is needed. We would
On Wed, 25 Mar 2015, Deneau, Tom wrote:
A couple of client-monitor questions:
1) When a client contacts a monitor to get the cluster map, how does it
decide which monitor to try to contact?
It picks a random monitor from the information it's seeded with at startup
(via ceph.conf or the
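(i.e. the seed list in ceph.conf looks like this -- addresses are examples:)

[global]
mon_initial_members = mon1, mon2, mon3
mon_host = 192.168.0.1, 192.168.0.2, 192.168.0.3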
Hey cephers,
Just a reminder that the monthly Ceph Tech Talk tomorrow at 1p EDT
will be by Yehuda on the RADOS Gateway. Make sure you stop by to get a
deeper technical understanding of RGW if you're interested. It's an
open virtual meeting for those that wish to attend, and will also be
recorded
A couple of client-monitor questions:
1) When a client contacts a monitor to get the cluster map, how does it
decide which monitor to try to contact?
2) Having gotten the cluster map, assuming a client wants to do multiple reads
and writes,
does the client have to re-contact the monitor
Hi Greg,
Thank you for this clarification. It helps a lot.
Does this "can't think of any issues" apply to both rbd and pool snapshots?
Frederic.
- Original Message -
On Tue, Mar 24, 2015 at 12:09 PM, Brendan Moloney molo...@ohsu.edu wrote:
Hi Loic and Markus,
By the way,