> -Original Message-
> From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
> ow...@vger.kernel.org] On Behalf Of Deneau, Tom
> Sent: Friday, May 29, 2015 1:10 AM
> To: ceph-devel
> Subject: rados bench throughput with no disk or network activity
>
> I've noticed that
> * with a s
On Thu, May 28, 2015 at 4:50 PM, Deneau, Tom wrote:
>
>
>> -Original Message-
>> From: Gregory Farnum [mailto:g...@gregs42.com]
>> Sent: Thursday, May 28, 2015 6:18 PM
>> To: Deneau, Tom
>> Cc: ceph-devel
>> Subject: Re: rados bench throughput with no disk or network activity
>>
>> On Thu,
On Thu, May 28, 2015 at 7:50 PM, Deneau, Tom wrote:
>
>
>> -Original Message-
>> From: Gregory Farnum [mailto:g...@gregs42.com]
>> Sent: Thursday, May 28, 2015 6:18 PM
>> To: Deneau, Tom
>> Cc: ceph-devel
>> Subject: Re: rados bench throughput with no disk or network activity
>>
>> On Thu,
> -Original Message-
> From: Gregory Farnum [mailto:g...@gregs42.com]
> Sent: Thursday, May 28, 2015 6:18 PM
> To: Deneau, Tom
> Cc: ceph-devel
> Subject: Re: rados bench throughput with no disk or network activity
>
> On Thu, May 28, 2015 at 4:09 PM, Deneau, Tom wrote:
> > I've noticed
On Thu, May 28, 2015 at 4:09 PM, Deneau, Tom wrote:
> I've noticed that
> * with a single node cluster with 4 osds
> * and running rados bench rand on that same node so no network traffic
> * with a number of objects small enough so that everything is in the cache so no disk traffic
>
I've noticed that
* with a single-node cluster with 4 OSDs
* and running rados bench rand on that same node, so no network traffic
* with a number of objects small enough that everything is in the cache, so no disk traffic
we still peak out at about 1600 MB/sec, and the CPU is 40% idle.
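A minimal sketch of the kind of run described above; the pool name, runtime, object size, and thread count are illustrative placeholders, not values from the report:
$ # populate a small object set, then read it back at random
$ rados bench -p testpool 60 write -b 4194304 -t 16 --no-cleanup
$ rados bench -p testpool 60 rand -t 16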
Hi Shylesh,
On 28/05/2015 21:25, shylesh kumar wrote:
> Hi,
>
> I created a LRC ec pool with the configuration
>
> # ceph osd erasure-code-profile get mylrc
> directory=/usr/lib64/ceph/erasure-code
> k=4
> l=3
> m=2
> plugin=lrc
> ruleset-failure-domain=osd
>
>
>
>
> One of the pg mapping l
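For context, a hedged sketch of how a profile like the one above is typically created and attached to an erasure-coded pool; the pool name and PG count are placeholders, only the k/m/l values and failure domain come from the message, and the directory shown above is the default plugin path:
$ ceph osd erasure-code-profile set mylrc plugin=lrc k=4 m=2 l=3 ruleset-failure-domain=osd
$ ceph osd pool create lrcpool 128 128 erasure mylrc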
Hi Andrew,
I'm copying Milan Broz, who has looked at this some. There was some
subsequent off-list discussion in Red Hat about using Petera[1] for the
key management, but this'll require a bit more effort than what was
described in that blueprint.
On Thu, 28 May 2015, Andrew Bartlett wrote:
> D
On Thu, May 28, 2015 at 11:32 AM, Sage Weil wrote:
>> If for instance a directory is shared between tenant A and B, and A
>> can write and B can't, then when B tries to write because the perms
>> are correct for the UID/GID on the client side, the M
On Thu, May 28, 2015 at 2:32 AM, Loic Dachary wrote:
> Hi,
>
> This morning I'll schedule a job with priority 50, assuming nobody will get
> mad at me for using such a low priority because the associated bug fix blocks
> the release of v0.94.2 (http://tracker.ceph.com/issues/11546) and also
> a
On Thu, 28 May 2015, Robert LeBlanc wrote:
> On Thu, May 28, 2015 at 11:02 AM, Sage Weil wrote:
>
> >> > The MDS could combine a tenant ID and a UID/GID to store unique
> >> > UID/GIDs on the back end and just strip off the tenant ID when
> >> > presented to the client so there are no collisions
On Thu, May 28, 2015 at 11:02 AM, Sage Weil wrote:
>> > The MDS could combine a tenant ID and a UID/GID to store unique
>> > UID/GIDs on the back end and just strip off the tenant ID when
>> > presented to the client so there are no collisions of UID/GIDs between
>> > tenants in the MDS.
>>
>> Hm
On Thu, May 28, 2015 at 12:59 AM, kefu chai wrote:
> On Wed, May 27, 2015 at 3:36 AM, Patrick McGarry wrote:
>> Due to popular demand we are expanding the Ceph lists to include a
>> Chinese-language list to allow for direct communications for all of
>> our friends in China.
>>
>> ceph...@lists.ce
I think if there is a way to store the tenant ID with the UID/GID,
then a lot of the challenges could be resolved.
On Thu, May 28, 2015 at 10:42 AM, Gregory Farnum wrote:
> Right, this is basically what we're planning. The sticky bits are about
>
On Thu, 28 May 2015, Gregory Farnum wrote:
> On Thu, May 28, 2015 at 9:20 AM, Robert LeBlanc wrote:
> > I've been trying to follow this and I've been lost many times, but I'd
> > like to put in my $0.02. In my mind any multi-tenant syste
On 28/05/2015 17:41, Robert LeBlanc wrote:
Let me see if I understand this... Your idea is to have a progress bar
that shows (active+clean + active+scrub + active+deep-scrub) / pgs and
then estimate time remaining?
Not quite: it's not about doing
On Thu, May 28, 2015 at 9:20 AM, Robert LeBlanc wrote:
> I've been trying to follow this and I've been lost many times, but I'd
> like to put in my $0.02. In my mind any multi-tenant system that
> relies on the client to specify UID/GID as aut
Hi Li,
Reviewing this now! See comments on the PR.
Just FYI, the current convention is to send kernel patches to the list,
and to use github for the userland stuff. Emails like this are helpful to
get people's attention but not strictly needed--we'll notice the PR either
way!
Thanks-
sage
Let me see if I understand this... Your idea is to have a progress bar
that shows (active+clean + active+scrub + active+deep-scrub) / pgs and
then estimate time remaining?
So if PGs are split the numbers change and the progress bar goes
backwards, is t
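As a rough illustration of the ratio being discussed (not code from the thread; the pgs_brief column layout is assumed, and the scrub states in the formula are approximated by an active+clean prefix match):
$ ceph pg dump pgs_brief 2>/dev/null | awk '
      NR > 1 && $2 ~ /^active\+clean/ { done++ }
      NR > 1                          { total++ }
      END { printf "%d of %d PGs (%.1f%%)\n", done, total, 100 * done / total }'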
I usually use:
priority [90,100]
for point release validations.
This is a good thread to bring up for open approval/disapproval.
Does that sound reasonable?
Thx
YuriW
- Original Message -
From: "Loic Dachary"
To: "Ceph Development"
Sent: Thursday, May 28, 2015 2:32:29 AM
Subject:
I've been trying to follow this and I've been lost many times, but I'd
like to put in my $0.02. In my mind any multi-tenant system that
relies on the client to specify UID/GID as authoritative is
fundamentally flawed. The server needs to be authorit
Hi Dan,
Thanks for the pointer. I've added Milan Broz as a watcher to that ticket,
since Milan's working on SELinux integration with Ceph.
- Ken
- Original Message -
> From: "Dan van der Ster"
> To: "Ken Dreyer"
> Cc: ceph-devel@vger.kernel.org
> Sent: Thursday, May 28, 2015 6:30:31 A
For the record:
[28.05 18:09] loicd: you have my ack
On 22/05/2015 21:55, Loic Dachary wrote:
> Hi Sam,
>
> The next firefly release as found at
> https://github.com/ceph/ceph/tree/firefly
> (68211f695941ee128eb9a7fd0d80b615c0ded6cf) passed the rados suite
> (http://tracker.ceph.com/issues/1
I've been trying to debug this issue, and it's why I haven't pushed that
epel-testing package to stable yet. Your email is helping to illuminate a bit
more what is happening. Today I've unpushed -0.5 from epel-testing in Bodhi
because it's clear -0.5 doesn't resolve the situation and just makes
I've got some more tests running right now. Once those are done, I'll
find a couple of tests that had extreme difference and gather some
perf data for them.
-
Robert LeBlanc
GPG Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA
Hi,
On Thu, 28 May 2015, Ugis wrote:
> Hi!
>
> I have been watching changes in "ceph -s" output for a while and
> noticed that in this line:
> 3324/7888981 objects degraded (0.042%); 1995972/7888981 objects
> misplaced (25.301%)
> the misplaced object count drops constantly, but the degraded obje
On Thu, May 28, 2015 at 3:42 AM, John Spray wrote:
>
>
> On 28/05/2015 06:37, Gregory Farnum wrote:
>>
>> On Tue, May 12, 2015 at 5:42 PM, Josh Durgin wrote:
>>> Parallelism
>>> ^^^
>>>
>>> Mirroring many images is embarrassingly parallel. A simple unit of
>>> work is an image (more speci
Hi Ken,
I'm having trouble installing ceph cleanly on CentOS 7 -- I guess this
is related to https://bugzilla.redhat.com/1193182.
(1) If I disable epel-testing, and have check_obsoletes = 1, then the
install works [ http://pastebin.com/gvHbRJ3T ].
(2) With epel-testing enabled -- to get your new
This patch does write-back throttling for cache tiering,
similar to what the Linux kernel does for page cache
write-back. The motivation and original idea were proposed
by Nick Fisk, as detailed in his email below. In our
implementation, we introduce a parameter 'cache_target_dirty_hig
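For orientation, the pre-existing per-pool dirty/full targets that this throttling interacts with look like the following; the pool name and ratios are examples only, and the new high-water parameter the patch introduces is truncated above, so it is not shown:
$ ceph osd pool set cachepool cache_target_dirty_ratio 0.4
$ ceph osd pool set cachepool cache_target_full_ratio 0.8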
Hi Ken,
I had forgotten about this issue:
http://tracker.ceph.com/issues/9927
(as you see, it's similar to the updatedb indexing issue you recently
inquired about)
I didn't check, but I suspect this still affects major version
upgrades in RHEL7 as well. Do you think this should also be sent
upst
On 28/05/2015 06:37, Gregory Farnum wrote:
On Tue, May 12, 2015 at 5:42 PM, Josh Durgin wrote:
It will need some metadata regarding positions in the journal. These
could be stored as omap values in a 'journal header' object in a
replicated pool, for rbd perhaps the same pool as the image for
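As a purely hypothetical illustration of keeping journal position metadata as omap values on a header object (the object name, key, and value format below are invented for the example):
$ rados -p rbd setomapval journal_header.image1 commit_position '{"object": 12, "offset": 4096}'
$ rados -p rbd listomapvals journal_header.image1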
Hi Ken,
The commits with a + are found in v0.94.1.2 and are not in hammer
$ git rev-parse ceph/hammer
eb69cf758eb25e7ac71e36c754b9b959edb67cee
$ git --no-pager cherry -v ceph/hammer tags/v0.94.1.2
- 46e85f72a26186963836ee9071b93417ebc41af2 Dencoder should never be built with
tcmalloc
- e6911ec07
On 28/05/2015 06:47, Gregory Farnum wrote:
Thread necromancy! (Is it still necromancy if it's been waiting in my
inbox the whole time?)
Brains.
On Tue, Apr 7, 2015 at 5:54 AM, John Spray wrote:
Hi all,
[this is a re-send of a mail from yesterday that didn't make it, probably
due to
Would it be possible to backport this as well to 0.80.11:
http://tracker.ceph.com/issues/9792#change-46498
And I think this commit would be the easiest to backport:
https://github.com/ceph/ceph/commit/6b982e4cc00f9f201d7fbffa0282f8f3295f2309
This way we add a simple safeguard against pool remov
Gregory Farnum writes:
> On Wed, May 27, 2015 at 1:39 AM, Marcel Lauhoff wrote:
>> Hi,
>>
>> I wrote a prototype for an OSD-based object stub feature. An object stub
>> being an object with its data moved /elsewhere/. I hope to get some
>> feedback, especially whether I'm on the right path her
I updated the release notes and sent a pull request.
Thanks,
Yehuda
- Original Message -
> From: "Loic Dachary"
> To: "Yehuda Sadeh"
> Cc: "Ceph Development"
> Sent: Wednesday, May 27, 2015 2:35:35 PM
> Subject: rgw release notes for hammer v0.94.2 (issue 11570)
>
> Hi Yehuda,
>
> Re
Robert LeBlanc writes:
> At first I thought this was to allow the OSDs to stub the location of
> the real data after a CRUSH map change so that it didn't have to
> relocate the data right away (or at all) and reduce the number of map
> chang
Hi!
I have been watching changes in "ceph -s" output for a while and
noticed that in this line:
3324/7888981 objects degraded (0.042%); 1995972/7888981 objects
misplaced (25.301%)
the misplaced object count drops constantly, but the degraded object
count drops only occasionally.
Quick googling di
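(Those percentages are simply each count over the total object count, e.g.:)
$ awk 'BEGIN { printf "%.3f%% degraded, %.3f%% misplaced\n", 100*3324/7888981, 100*1995972/7888981 }'
0.042% degraded, 25.301% misplaced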
Hi,
This morning I'll schedule a job with priority 50, assuming nobody will get mad
at me for using such a low priority because the associated bug fix blocks the
release of v0.94.2 (http://tracker.ceph.com/issues/11546) and also assuming
no one uses a priority lower than 100 just to get in front