Thanks, Robert, for the more specific explanation.
Rgds,
Shinobu
----- Original Message -----
From: "Robert LeBlanc"
To: "Shinobu Kinjo"
Cc: "ceph-users"
Sent: Thursday, February 25, 2016 2:56:15 PM
Subject: Re: [ceph-users]
With my S3500 drives in my test cluster, the latest master branch gave me
an almost 2x increase in performance compared to just a month or two ago.
There look to be some really nice things coming in Jewel around SSD
performance. My drives are now 80-85% busy doing about 10-12K IOPS when
doing 4K
We are moving to the Intel S3610; from our testing it is a good balance
between price, performance, and longevity. But as with all things, do your
testing ahead of time. This will be our third model of SSD for our
cluster. The S3500s didn't have enough life, and performance tapers off as
it gets
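(As an aside, not from the message above: one way to watch wear on these Intel
DC drives is the SMART wearout attribute; the device name below is hypothetical.)
$ smartctl -A /dev/sdb | grep -i wear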
Try using more OSDs.
I was encountering this scenario when my OSDs were equal to k+m.
The errors went away when I used k+m+2.
So in your case, try with 8 or 10 OSDs.
On Thu, Feb 25, 2016 at 11:18 AM, Daleep Singh Bais
wrote:
> hi All,
>
> Any help in this regard will be
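(To illustrate the k+m arithmetic above with a sketch; the profile and pool
names here are hypothetical, not from the thread:)
$ ceph osd erasure-code-profile set testprofile k=4 m=2
$ ceph osd pool create ecpool 128 128 erasure testprofile
With k=4/m=2, this pool would want at least k+m+2 = 8 OSDs per the advice above.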
hi All,
Any help in this regard will be appreciated.
Thanks..
Daleep Singh Bais
-------- Forwarded Message --------
Subject: Erasure code Plugins
Date: Fri, 19 Feb 2016 12:13:36 +0530
From: Daleep Singh Bais
To: ceph-users
Hi All,
We have not seen this issue, but we don't run EC pools yet (we are waiting
for multiple layers to be available). We are not running 0.94.6 in
production yet either. We have adopted the policy to only run released
versions in production unless there is a really pressing need to have a
patch. We are
Hello,
For posterity and of course to ask some questions, here are my experiences
with a pure SSD pool.
SW: Debian Jessie, Ceph Hammer 0.94.5.
HW:
2 nodes (thus replication of 2), each with:
2x E5-2623 CPUs
64GB RAM
4x DC S3610 800GB SSDs
Infiniband (IPoIB) network
Ceph: no tuning or
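(A sketch of a common sanity check for such SSDs, not part of the original
message: a single-threaded O_DSYNC 4K write test with fio. The device path is
hypothetical, and the test overwrites data on it.)
$ fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 \
  --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based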
Yes. download.ceph.com does not currently support IPv6 access.
On 02/14/2016 11:53 PM, Artem Fokin wrote:
> Hi
>
> It seems like download.ceph.com has some outdated IPv6 address
>
> ~ curl -v -s download.ceph.com > /dev/null
> * About to connect() to download.ceph.com port 80 (#0)
> * Trying
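(Until that is fixed, forcing IPv4 should work around it:)
$ curl -4 -v -s download.ceph.com > /dev/null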
Thanks for the pointer.
That's perfect atm.
Rgds,
Shinobu
----- Original Message -----
From: "Christian Balzer"
To: "ceph-users"
Cc: "Shinobu Kinjo"
Sent: Thursday, February 25, 2016 10:49:02 AM
Subject: Re: [ceph-users] List of
On Wed, 24 Feb 2016 20:37:07 -0500 (EST) Shinobu Kinjo wrote:
> Hello,
>
> There has been a bunch of discussion about using SSD.
> Does anyone have any list of SSDs describing which SSD is highly
> recommended, which SSD is not.
>
The answer to that is of course in all those threads and the
Any idea what is going on here? I get these intermittently, especially with
very large files.
The client is doing RANGE requests on this >51 GB file, incrementally
fetching later chunks.
2016-02-24 16:30:59.669561 7fd33b7fe700 1 ====== starting new request
req=0x7fd32c0879c0 ======
2016-02-24
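(For reference, such a ranged GET can be reproduced with curl; the host,
bucket, and object names here are hypothetical:)
$ curl -v -r 1073741824-2147483647 http://rgw.example.com/bucket/bigfile -o /dev/null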
Hello,
There has been a bunch of discussion about using SSD.
Does anyone have any list of SSDs describing which SSD is highly recommended,
which SSD is not.
Rgds,
Shinobu
I'll speak to what I can answer off the top of my head. The most important
point is that this issue is only related to EC pool base tiers, not replicated
pools.
> Hello Jason (Ceph devs et al),
>
> On Wed, 24 Feb 2016 13:15:34 -0500 (EST) Jason Dillaman wrote:
>
> > If you run "rados -p ls
Hello Jason (Ceph devs et al),
On Wed, 24 Feb 2016 13:15:34 -0500 (EST) Jason Dillaman wrote:
> If you run "rados -p ls | grep "rbd_id." and
> don't see that object, you are experiencing that issue [1].
>
> You can attempt to work around this issue by running "rados -p irfu-virt
> setomapval
Hello,
On Wed, 24 Feb 2016 16:59:33 -0700 Robert LeBlanc wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> Let's start from the top. Where are you stuck with [1]? I have noticed
> that after evicting all the objects with RBD, one object for each
> active RBD is still left, I
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
Let's start from the top. Where are you stuck with [1]? I have noticed
that after evicting all the objects with RBD, one object for each
active RBD is still left; I think this is the head object. We haven't
tried this, but our planned procedure
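(For context, flushing and evicting a cache tier is normally done with the
rados tool; the pool name here is hypothetical:)
$ rados -p cachepool cache-flush-evict-all
The per-RBD objects described above are what remains after that completes.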
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
I think I saw someone say that they had issues with "step take" when
it was not a "root" node. Otherwise it looks good to me. The "step
chooseleaf firstn 0 type chassis" says to pick one OSD from different
chassis, where 0 says to take as many as the
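(A sketch of the rule under discussion, with a hypothetical ruleset number and
root bucket name; per the caveat above, the bucket named in "step take" should
be a root node:)
rule replicated_chassis {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type chassis
        step emit
}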
Hey cephers,
Just wanted to update you all on some of the upcoming important dates
in the Ceph community. We have a lot going on in the near future, so I
figured it would be good to get it all in one place:
25 Feb - 1p EST - Ceph Tech Talk: CephFS
(http://ceph.com/ceph-tech-talks) [Tomorrow!]
If you run "rados -p ls | grep "rbd_id." and don't see
that object, you are experiencing that issue [1].
You can attempt to work around this issue by running "rados -p irfu-virt
setomapval rbd_id. dummy value" to force-promote the object to the
cache pool. I haven't tested / verified that
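(Filled in with a hypothetical image id, since the original elides it, the two
commands would look like:)
$ rados -p irfu-virt ls | grep "rbd_id.myimage"
$ rados -p irfu-virt setomapval rbd_id.myimage dummy value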
Hi,
I just started testing VMs inside ceph this week, ceph-hammer 0.94-5 here.
I built several pools, using pool tiering:
- A small replicated SSD pool (5 SSDs only, but I thought it'd be
better for IOPS, I intend to test the difference with disks only)
- Overlaying a larger
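(For reference, wiring up such a tier usually follows this sequence; the pool
names here are hypothetical:)
$ ceph osd tier add ec-base ssd-cache
$ ceph osd tier cache-mode ssd-cache writeback
$ ceph osd tier set-overlay ec-base ssd-cache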
Try running:
$ radosgw-admin --name client.rgw.servergw001 metadata list user
Yehuda
On Wed, Feb 24, 2016 at 8:41 AM, Andrea Annoè wrote:
> I don’t see any user created in RGW
> sudo radosgw-admin metadata list user
> [
> ]
> sudo radosgw-admin user
I don't see any user created in RGW
sudo radosgw-admin metadata list user
[
]
sudo radosgw-admin user create --uid="user1site1" --display-name="User test
replica site1" --name client.rgw.servergw001 --access-key=user1site1
--secret=pwd1
{
"user_id": "user1site1",
"display_name": "User
Hello to all,
I have created an async replica from 2 zones in a region.
I have a problem with users and permissions: HTTP error code 403, content Forbidden.
I have created a user to manage the replica but don't see the info.
sudo radosgw-admin user create --uid="site2" --display-name="Zone site2" --name
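(A hedged suggestion, not from the original message: check whether the user
actually exists in that zone, assuming the instance name used earlier in the
thread:)
$ radosgw-admin user info --uid=site2 --name client.rgw.servergw001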
Hm. It seems that the cache pool quotas have not been set. At least I'm
sure I didn't set them.
# ceph osd pool get-quota cache
quotas for pool 'cache':
max objects: N/A
max bytes : N/A
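(If quotas were wanted, they would be set along these lines; the values are
hypothetical:)
$ ceph osd pool set-quota cache max_objects 1000000
$ ceph osd pool set-quota cache max_bytes 1099511627776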
Hmm. It seems that the cache pool quotas have not been set. At least I'm
sure I didn't set them. Maybe it
On Wed, Feb 24, 2016 at 4:31 AM, Dan van der Ster wrote:
> Thanks Sage, looking forward to some scrub randomization.
>
> Were binaries built for el6? http://download.ceph.com/rpm-hammer/el6/x86_64/
We are no longer building binaries for el6, just for CentOS 7, Ubuntu
Trusty,
Thanks for the responses John.
--Scott
On Wed, Feb 24, 2016 at 3:07 AM John Spray wrote:
> On Tue, Feb 23, 2016 at 5:36 PM, Scottix wrote:
> > I had a weird thing happen when I was testing an upgrade in a dev
> > environment where I have removed an MDS
Hi Esta,
How do you know that it's still active?
--
Mit freundlichen Gruessen / Best regards
Oliver Dzombic
IP-Interactive
mailto:i...@ip-interactive.de
Anschrift:
IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen
HRB 93402 beim Amtsgericht Hanau
Hi,
I want to disable the RBD cache in my Ceph cluster. I've set rbd cache to
false in the [client] section of ceph.conf and rebooted the cluster, but the
caching system was still working. How can I disable the RBD caching system?
Any help?
best regards.
2016-02-24
Esta
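(For reference, the relevant ceph.conf stanza is below. Note, as a hedged
assumption, that a hypervisor such as QEMU may override this with its own
drive cache setting, and the value can be checked through a client admin
socket if one is configured; the socket path here is hypothetical.)
[client]
rbd cache = false
$ ceph --admin-daemon /var/run/ceph/ceph-client.admin.asok config show | grep rbd_cache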
Hello Geeks
Can someone please review and comment on my custom CRUSH maps? I would
really appreciate your help.
My setup: 1 rack, 4 chassis, 3 storage nodes per chassis (so 12 storage
nodes in total), pool size = 3.
What I want to achieve is:
- Survive chassis failures, even if I lose 2
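(One way, sketched with a hypothetical map file and rule number, to check such
a map before deploying it is to simulate mappings with crushtool:)
$ crushtool -i crushmap.bin --test --rule 1 --num-rep 3 --show-mappings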
On Tue, Feb 23, 2016 at 5:36 PM, Scottix wrote:
> I had a weird thing happen when I was testing an upgrade in a dev
> environment where I have removed an MDS from a machine a while back.
>
> I upgraded to 0.94.6 and lo and behold the mds daemon started up on the
> machine
My 0.02: there are two kinds of balance, one for space utilization and another
for performance.
It seems you will be fine on space utilization, but you might suffer a bit on
performance as the disk density increases. The new rack will hold 1/3 of the
data on 1/5 of the disks, if we assume the
Thanks Sage, looking forward to some scrub randomization.
Were binaries built for el6? http://download.ceph.com/rpm-hammer/el6/x86_64/
Cheers, Dan
On Tue, Feb 23, 2016 at 5:01 PM, Sage Weil wrote:
> This Hammer point release fixes a range of bugs, most notably a fix for
>
I think this happened because of the wrongly removed OSD...
A bug maybe ?
Do you think that "ceph pg repair" will force the removal of the PG from the
missing OSD?
I am concerned about executing "pg repair" or "osd lost" because maybe it will
decide that the stuck one is the right data and try
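(Before running either, inspecting the PG first seems prudent; the PG id here
is hypothetical:)
$ ceph pg 2.5 query
$ ceph health detail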
On Wed, Feb 24, 2016 at 4:57 PM, wrote:
> >So you don't find any slow request
> Yes, exactly.
> >we may have some problems with the poll call. The only potentially related
> PR is https://github.com/ceph/ceph/pull/6971
> How can we clarify this hypothesis?
> Obviously, we can't
On 2016-02-23 20:48, Gregory Farnum wrote:
On Saturday, February 20, 2016, Sorin Manolache wrote:
Hello,
I can set a watch on an object in librados. Does this object have to
exist already at the moment I'm setting the watch on it? What
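(The behaviour can be probed from the command line with the rados tool's
watch/notify subcommands; the pool and object names are hypothetical. Run the
watch in one terminal and the notify in another:)
$ rados -p rbd create watched-obj
$ rados -p rbd watch watched-obj
$ rados -p rbd notify watched-obj hello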
Hi,
> 0> 2016-02-24 04:51:45.884445 7fd994825700 -1 osd/ReplicatedPG.cc: In
> function 'int ReplicatedPG::fill_in_copy_get(ReplicatedPG::OpContext*,
> ceph::buffer::list::iterator&, OSDOp&, ObjectContextRef&, bool)' thread
> 7fd994825700 time 2016-02-24 04:51:45.870995
osd/ReplicatedPG.cc: