Hi David,
You're right; now I see that adding --run-name "" will clean all benchmark data
from the specified namespace, so you only need to run the command once.
rados -p <pool> -N <namespace> cleanup --prefix "" --run-name ""
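For anyone trying this, a minimal end-to-end sequence might look like the
following (the pool and namespace names are illustrative, not from this thread):

rados -p testpool -N bench-ns bench 10 write --no-cleanup
rados -p testpool -N bench-ns cleanup --prefix "" --run-name ""

The first command leaves its benchmark objects behind; the second removes all
benchmark objects in that namespace in one pass.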
Regards,
Igor.
-----Original Message-----
From: David Zafman [mailto:dzaf...@redhat.com]
Sent: Friday
On Fri, Jun 26, 2015 at 6:01 AM, James (Fei) Liu-SSI
wrote:
> Hi Cephers,
> It is not easy to ask when Ceph is going to support inline
> dedup/compression across OSDs in RADOS because it is not easy task and
> answered. Ceph is providing replication and EC for performance and failure
> reco
If you have rados bench data around, you'll need to run cleanup a second
time, because on the first run the "benchmark_last_metadata" object
will be consulted to find which objects to remove.
Also, using cleanup this way will only remove objects from the default
namespace unless a namespace is specified.
On Fri, Jun 26, 2015 at 00:01, James (Fei) Liu-SSI wrote:
> Hi Cephers,
> It is not easy to ask when Ceph is going to support inline
> dedup/compression across OSDs in RADOS.
disclaimer: I am not a Cepher.
This would mean some kind of distributed key value store that is fast
enough
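To make that idea concrete, here is a toy, single-node sketch of the
fingerprint index that inline dedup implies; every name below is made up for
illustration and none of it is Ceph code. In RADOS, the map below would itself
have to be a distributed, low-latency key-value store consulted on every
write, which is exactly the hard part:

#include <stdio.h>

#define INDEX_SIZE 1024

struct fp_entry {
	unsigned long fp;    /* chunk fingerprint (content hash) */
	unsigned long loc;   /* where the deduplicated chunk lives */
	unsigned int refcnt; /* how many logical writes share it */
	int used;
};

static struct fp_entry index_tbl[INDEX_SIZE];

/* djb2, standing in for a strong content hash such as SHA-256 */
static unsigned long fingerprint(const char *data, unsigned long len)
{
	unsigned long h = 5381;

	while (len--)
		h = ((h << 5) + h) + (unsigned char)*data++;
	return h;
}

/* Write path: look the chunk up by content; only store it if unseen.
 * A colliding slot is simply evicted in this toy; a real index must not. */
static unsigned long dedup_write(const char *data, unsigned long len,
				 unsigned long new_loc)
{
	unsigned long fp = fingerprint(data, len);
	struct fp_entry *e = &index_tbl[fp % INDEX_SIZE];

	if (e->used && e->fp == fp) {	/* duplicate: share, don't store */
		e->refcnt++;
		return e->loc;
	}
	e->fp = fp;
	e->loc = new_loc;
	e->refcnt = 1;
	e->used = 1;
	return new_loc;
}

int main(void)
{
	const char *chunk = "same bytes";

	/* a second write of identical bytes is served from the index */
	printf("first  -> loc %lu\n", dedup_write(chunk, 10, 100));
	printf("second -> loc %lu\n", dedup_write(chunk, 10, 200));
	return 0;
}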
Hi Cephers,
It is not easy to ask when Ceph is going to support inline
dedup/compression across OSDs in RADOS, because it is not an easy task
and not easily answered. Ceph provides replication and EC for performance
and failure recovery. But we also lose storage efficiency and bear the associated cost
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
+1
-
Robert LeBlanc
GPG Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Thu, Jun 25, 2015 at 10:39 AM, Alexandre DERUMIER wrote:
> Thanks Mark !
>
> ----- Original Message -----
> From: "Mark Nelson"
> To: "aderumier"
On 06/25/2015 11:56 AM, Benoît Canet wrote:
Spotted while hunting http://tracker.ceph.com/issues/10905.
From struct ceph_msg_data_cursor in include/linux/ceph/messenger.h:
bool last_piece; /* current is last piece */
In ceph_msg_data_next():
*last_piece = cursor->last_piece;
A call t
ceph_msgr_slab_init may fail due to a temporary ENOMEM.
Delay the initialization of zero_page a bit in ceph_msgr_init and
reorder its cleanup in _ceph_msgr_exit so it's done in reverse
order of setup.
The BUG_ON() does not mind being postponed: it will still trigger if its condition is ever met.
Signed-off-by: Benoît Canet
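For illustration only, here is the ordering rule the patch enforces, as a
plain userspace C sketch; the names mirror the roles of the slabs and the
zero page, and this is not the actual net/ceph/messenger.c code:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

static void *msg_cache; /* stands in for the message slab caches */
static void *zero_page; /* stands in for the shared zero page */

/* Acquire in order: slabs first, zero page second; on failure of a
 * later step, roll back the earlier steps before returning. */
static int msgr_init_sketch(void)
{
	msg_cache = malloc(64);
	if (!msg_cache)
		return -ENOMEM;
	zero_page = malloc(4096);
	if (!zero_page) {
		free(msg_cache); /* undo step 1 */
		msg_cache = NULL;
		return -ENOMEM;
	}
	return 0;
}

/* Release strictly in reverse order of acquisition. */
static void msgr_exit_sketch(void)
{
	free(zero_page);
	zero_page = NULL;
	free(msg_cache);
	msg_cache = NULL;
}

int main(void)
{
	if (msgr_init_sketch()) {
		fprintf(stderr, "init failed\n");
		return 1;
	}
	msgr_exit_sketch();
	return 0;
}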
On Thu, Jun 25, 2015 at 7:56 PM, Benoît Canet wrote:
> Spotted while hunting http://tracker.ceph.com/issues/10905.
>
> From struct ceph_msg_data_cursor in include/linux/ceph/messenger.h:
>
> bool last_piece; /* current is last piece */
>
> In ceph_msg_data_next():
>
> *last_piece = cursor->
Spotted while hunting http://tracker.ceph.com/issues/10905.
From struct ceph_msg_data_cursor in include/linux/ceph/messenger.h:
bool last_piece; /* current is last piece */
In ceph_msg_data_next():
*last_piece = cursor->last_piece;
A call to ceph_msg_data_next() is followed by:
ret =
Thanks Mark !
----- Original Message -----
From: "Mark Nelson"
To: "aderumier" , "Robert LeBlanc"
Cc: "ceph-devel"
Sent: Thursday, June 25, 2015 18:36:45
Subject: Re: 06/24/2015 Weekly Ceph Performance Meeting IS ON!
Hi Guys,
I've updated the etherpad with links to the new recordings. Sorry these
get
Hi Guys,
I've updated the etherpad with links to the new recordings. Sorry these
get backlogged. It takes a little while for the recording to become
available and then I have to manually go in and mark each of them public
and available for download. There doesn't seem to be any way to set
I would like to have them too ;)
I missed yesterday's session,
and I would like some info about this:
>>Fujitsu presenting on bufferlist tuning
>> - about 2X savings in overall CPU Time with new code.
----- Original Message -----
From: "Robert LeBlanc"
To: "Mark Nelson"
Cc: "ceph-devel"
Sent:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
Any update on the recorded sessions? Last I see is 10 Jun 2015 in the Etherpad.
Thanks,
-
Robert LeBlanc
GPG Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Wed, Jun 24, 2015 at 7:28 AM, Mark Nelson wrote:
8AM P
Igor --
Good command to know, but this is still very slow on an erasure pool.
For example, on my cluster it took 10 seconds with rados bench to write 10,000
40K objects to an ecpool.
And it took almost 6 minutes to delete them using the command below.
-- Tom
> -----Original Message-----
>
On 06/25/2015 04:01 AM, Ilya Dryomov wrote:
The default queue_limits::max_segments value (BLK_MAX_SEGMENTS = 128)
unnecessarily limits bio sizes to 512k (assuming 4k pages). rbd, being
a virtual block device, doesn't have any restrictions on the number of
physical segments, so bump max_segments
On Thu, Jun 25, 2015 at 5:24 PM, Alex Elder wrote:
> On 06/25/2015 04:11 AM, Ilya Dryomov wrote:
>>
>> nr_requests (/sys/block/rbd/queue/nr_requests) is pretty much
>> irrelevant in the blk-mq case because each driver sets its own max depth
>> that it can handle and that's the number of tags that gets
On 06/25/2015 04:11 AM, Ilya Dryomov wrote:
nr_requests (/sys/block/rbd/queue/nr_requests) is pretty much
irrelevant in the blk-mq case because each driver sets its own max depth
that it can handle and that's the number of tags that gets preallocated
on setup. Users can't increase the queue depth beyond
On 06/25/2015 04:11 AM, Ilya Dryomov wrote:
Signed-off-by: Ilya Dryomov
Now that you need it when initializing the disk, this
makes sense.
Reviewed-by: Alex Elder
---
drivers/block/rbd.c | 18 +++---
1 file changed, 11 insertions(+), 7 deletions(-)
diff --git a/drivers/bloc
On 06/25/2015 04:11 AM, Ilya Dryomov wrote:
Also nuke useless Opt_last_bool and don't break lines unnecessarily.
Signed-off-by: Ilya Dryomov
Good idea.
Reviewed-by: Alex Elder
---
drivers/block/rbd.c | 24
1 file changed, 8 insertions(+), 16 deletions(-)
diff
If I get it to happen again I will send you the kernel message.
Thanks again Zheng!
On Wed, Jun 24, 2015 at 8:48 AM, Yan, Zheng wrote:
> Could you please run "echo 1 > /proc/sys/kernel/sysrq; echo t >
> /proc/sysrq-trigger" when this warning happens again. then send the
> kernel message to us.
On 06/24/2015 08:27 PM, Benoît Canet wrote:
Spotted by visual inspection.
Applies on top of "libceph: Remove spurious kunmap() of the zero page".
Benoît Canet (1):
libceph: Avoid holding the zero page on ceph_msgr_slab_init errors
net/ceph/messenger.c | 10 +-
1 file changed, 5 insertions
On 06/24/2015 08:27 PM, Benoît Canet wrote:
ceph_msgr_slab_init may fail due to a temporary ENOMEM.
Looks good.
Delay the initialization of zero_page a bit in ceph_msgr_init and
reorder its cleanup in _ceph_msgr_exit for readability's sake.
I'd say it's not readability, but a proper ordering
On Thu, Jun 25, 2015 at 3:35 AM, Alex Elder wrote:
> On 06/24/2015 04:18 PM, Benoît Canet wrote:
>>
>> ceph_tcp_sendpage already does the work of mapping/unmapping
>> the zero page if needed.
>>
>> Signed-off-by: Benoît Canet
>
>
> This looks good.
>
> Reviewed-by: Alex Elder
Applied.
Thanks,
nr_requests (/sys/block/rbd/queue/nr_requests) is pretty much
irrelevant in the blk-mq case, because each driver sets its own max depth
that it can handle, and that's the number of tags that get preallocated
on setup. Users can't increase the queue depth beyond that value by
writing to nr_requests.
For r
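As a toy illustration of that constraint, here is a userspace C sketch of the
tag-pool idea; this is not blk-mq code, but it shows why the depth is fixed
once the tags are preallocated and a request cannot be issued without a tag:

#include <stdio.h>

#define QUEUE_DEPTH 64 /* fixed by the driver when the tag set is allocated */

static int tags[QUEUE_DEPTH]; /* 0 = free, 1 = in flight */

/* Grab a free tag, or fail: an issuer without a tag has to wait. */
static int get_tag(void)
{
	for (int i = 0; i < QUEUE_DEPTH; i++) {
		if (!tags[i]) {
			tags[i] = 1;
			return i;
		}
	}
	return -1;
}

static void put_tag(int t)
{
	tags[t] = 0;
}

int main(void)
{
	int issued = 0;

	while (get_tag() >= 0) /* issue until the pool is exhausted */
		issued++;
	printf("in flight capped at %d\n", issued); /* == QUEUE_DEPTH */
	put_tag(0); /* completing a request frees its tag... */
	printf("next tag after completion: %d\n", get_tag()); /* ...so 0 again */
	return 0;
}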
Signed-off-by: Ilya Dryomov
---
drivers/block/rbd.c | 18 +++---
1 file changed, 11 insertions(+), 7 deletions(-)
diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index 4de8c9167c4b..e502bce02d2c 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -346,6 +346,7 @@ str
Also nuke useless Opt_last_bool and don't break lines unnecessarily.
Signed-off-by: Ilya Dryomov
---
drivers/block/rbd.c | 24
1 file changed, 8 insertions(+), 16 deletions(-)
diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index bc88fbcb9715..4de8c9167c4b 100644
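To show the pattern the patch title refers to, here is a sentinel-terminated
token table in plain userspace C; it mimics the shape of rbd_opts_tokens but
is not the kernel match_token machinery, and the option names are illustrative:

#include <stdio.h>
#include <string.h>

enum { Opt_queue_depth, Opt_read_only, Opt_err };

static const struct {
	int token;
	const char *pattern;
} opt_tokens[] = {
	{ Opt_queue_depth, "queue_depth" },
	{ Opt_read_only,   "ro" },
	{ Opt_err,         NULL } /* sentinel terminates the table */
};

/* Scan until the sentinel; no separate "last" marker per option class. */
static int match_opt(const char *s)
{
	for (int i = 0; opt_tokens[i].pattern; i++)
		if (strcmp(s, opt_tokens[i].pattern) == 0)
			return opt_tokens[i].token;
	return Opt_err;
}

int main(void)
{
	printf("%d %d\n", match_opt("ro"), match_opt("bogus")); /* 1 2 */
	return 0;
}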
Hi,
See 3/3. I'll patch rbd cli tool once this is in.
Thanks,
Ilya
Ilya Dryomov (3):
rbd: terminate rbd_opts_tokens with Opt_err
rbd: store rbd_options in rbd_device
rbd: queue_depth map option
drivers/block/rbd.c | 59 ++
The default queue_limits::max_segments value (BLK_MAX_SEGMENTS = 128)
unnecessarily limits bio sizes to 512k (assuming 4k pages). rbd, being
a virtual block device, doesn't have any restrictions on the number of
physical segments, so bump max_segments to max_hw_sectors, in theory
allowing a sector
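As a quick sanity check of the 512k figure, assuming 4k pages and one page
per physical segment:

#include <stdio.h>

int main(void)
{
	const unsigned page_size = 4096;       /* 4k pages */
	const unsigned blk_max_segments = 128; /* the old default cap */

	/* one page per segment => bio size cap = segments * page size */
	printf("default bio cap: %u KiB\n",
	       blk_max_segments * page_size / 1024); /* prints 512 */
	return 0;
}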