+1
-
Robert LeBlanc
GPG Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Thu, Jun 25, 2015 at 10:39 AM, Alexandre DERUMIER wrote:
Thanks Mark !
- Original Message -
From: Mark Nelson
To: aderumier , Robert
Hi David,
You're right, now I see that adding --run-name will clean all benchmark data from
the specified namespace, so you can run the command only once.
rados -p poolname -N namespace cleanup --prefix --run-name
Regards,
Igor.
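Put together, a full bench-then-cleanup cycle might look like this (pool, namespace, and run names here are made up for illustration; the flags are the ones discussed in this thread, but exact behavior may vary by Ceph release):

```shell
# Write benchmark objects into a namespace, keeping them afterwards
# so the cleanup step has something to remove:
rados -p poolname -N ns1 bench 10 write --run-name myrun --no-cleanup

# One-shot cleanup of that run's objects in the same namespace:
rados -p poolname -N ns1 cleanup --prefix benchmark_data --run-name myrun
```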
-Original Message-
From: David Zafman [mailto:dzaf...@redhat.com]
On 06/25/2015 11:56 AM, Benoît Canet wrote:
Spotted while hunting http://tracker.ceph.com/issues/10905.
From struct ceph_msg_data_cursor in include/linux/ceph/messenger.h:
bool last_piece; /* current is last piece */
In ceph_msg_data_next():
*last_piece = cursor->last_piece;
A call
On Thu, Jun 25, 2015 at 5:24 PM, Alex Elder el...@ieee.org wrote:
On 06/25/2015 04:11 AM, Ilya Dryomov wrote:
nr_requests (/sys/block/rbd<id>/queue/nr_requests) is pretty much
irrelevant in blk-mq case because each driver sets its own max depth
that it can handle and that's the number of tags
On 06/25/2015 04:01 AM, Ilya Dryomov wrote:
The default queue_limits::max_segments value (BLK_MAX_SEGMENTS = 128)
unnecessarily limits bio sizes to 512k (assuming 4k pages). rbd, being
a virtual block device, doesn't have any restrictions on the number of
physical segments, so bump max_segments
On 06/25/2015 04:11 AM, Ilya Dryomov wrote:
nr_requests (/sys/block/rbd<id>/queue/nr_requests) is pretty much
irrelevant in blk-mq case because each driver sets its own max depth
that it can handle and that's the number of tags that gets preallocated
on setup. Users can't increase queue depth
If I get it to happen again I will send you the kernel message.
Thanks again Zheng!
On Wed, Jun 24, 2015 at 8:48 AM, Yan, Zheng uker...@gmail.com wrote:
Could you please run echo 1 > /proc/sys/kernel/sysrq; echo t >
/proc/sysrq-trigger when this warning happens again. Then send the
kernel
On 06/24/2015 08:27 PM, Benoît Canet wrote:
Spotted by visual inspection.
Applies on top of "libceph: Remove spurious kunmap() of the zero page".
Benoît Canet (1):
libceph: Avoid holding the zero page on ceph_msgr_slab_init errors
net/ceph/messenger.c | 10 +-
1 file changed, 5
On 06/25/2015 04:11 AM, Ilya Dryomov wrote:
Also nuke useless Opt_last_bool and don't break lines unnecessarily.
Signed-off-by: Ilya Dryomov idryo...@gmail.com
Good idea.
Reviewed-by: Alex Elder el...@linaro.org
---
drivers/block/rbd.c | 24
1 file changed, 8
On 06/25/2015 04:11 AM, Ilya Dryomov wrote:
Signed-off-by: Ilya Dryomov idryo...@gmail.com
Now that you need it when initializing the disk, this
makes sense.
Reviewed-by: Alex Elder el...@linaro.org
---
drivers/block/rbd.c | 18 +++---
1 file changed, 11 insertions(+), 7
On 06/24/2015 08:27 PM, Benoît Canet wrote:
ceph_msgr_slab_init may fail due to a temporary ENOMEM.
Looks good.
Delay the initialization of zero_page in ceph_msgr_init a bit and
reorder its cleanup in _ceph_msgr_exit for readability's sake.
I'd say it's not readability, but a proper
Thanks Mark !
- Original Message -
From: Mark Nelson mnel...@redhat.com
To: aderumier aderum...@odiso.com, Robert LeBlanc rob...@leblancnet.us
Cc: ceph-devel ceph-devel@vger.kernel.org
Sent: Thursday, 25 June 2015 18:36:45
Subject: Re: 06/24/2015 Weekly Ceph Performance Meeting IS ON!
Hi Guys,
Any update on the recorded sessions? Last I see is 10 Jun 2015 in the Etherpad.
Thanks,
-
Robert LeBlanc
GPG Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Wed, Jun 24, 2015 at 7:28 AM, Mark Nelson wrote:
8AM
I would like to have them too ;)
I have missed the yesterday session,
I would like to have infos about this
Fujitsu presenting on bufferlist tuning
- about 2X savings in overall CPU Time with new code.
- Original Message -
From: Robert LeBlanc rob...@leblancnet.us
To: Mark Nelson
Spotted while hunting http://tracker.ceph.com/issues/10905.
From struct ceph_msg_data_cursor in include/linux/ceph/messenger.h:
bool last_piece; /* current is last piece */
In ceph_msg_data_next():
*last_piece = cursor->last_piece;
A call to ceph_msg_data_next() is followed by:
ret =
On Thu, Jun 25, 2015 at 7:56 PM, Benoît Canet benoit.ca...@nodalink.com wrote:
Spotted while hunting http://tracker.ceph.com/issues/10905.
From struct ceph_msg_data_cursor in include/linux/ceph/messenger.h:
bool last_piece; /* current is last piece */
In ceph_msg_data_next():
Hi Guys,
I've updated the etherpad with links to the new recordings. Sorry these
get backlogged. It takes a little while for the recording to become
available and then I have to manually go in and mark each of them public
and available for download. There doesn't seem to be any way to set
Igor --
Good command to know, but this is still very slow on an erasure pool.
For example, on my cluster it took 10 seconds with rados bench to write 10,000
40K size objects to an ecpool.
And it took almost 6 minutes to delete them using the command below.
-- Tom
-Original Message-
ceph_msgr_slab_init may fail due to a temporary ENOMEM.
Delay the initialization of zero_page in ceph_msgr_init a bit and
reorder its cleanup in _ceph_msgr_exit so it's done in reverse
order of setup.
The BUG_ON() will not suffer from being postponed, should it ever trigger.
Signed-off-by: Benoît Canet
On Fri, 26 Jun 2015 at 00:01, James (Fei) Liu-SSI wrote:
Hi Cephers,
It is not easy to ask when Ceph is going to support inline
dedup/compression across OSDs in RADOS.
disclaimer: I am not a Cepher.
This would mean some kind of distributed key value store that is fast
enough
to
On Fri, Jun 26, 2015 at 6:01 AM, James (Fei) Liu-SSI
james@ssi.samsung.com wrote:
Hi Cephers,
It is not easy to ask when Ceph is going to support inline
dedup/compression across OSDs in RADOS because it is not an easy task and not easily
answered. Ceph is providing replication and EC for
If you have rados bench data around, you'll need to run cleanup a second
time because the first time the benchmark_last_metadata object
will be consulted to find what objects to remove.
Also, using cleanup this way will only remove objects from the default
namespace unless a namespace is
Hi,
It appears that cleanup can be used as a purge:
rados -p poolname cleanup --prefix
Regards,
Igor.
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Deneau, Tom
Sent: Wednesday, June 24, 2015 10:22 PM
To: Dałek,
On Thu, Jun 25, 2015 at 3:35 AM, Alex Elder el...@ieee.org wrote:
On 06/24/2015 04:18 PM, Benoît Canet wrote:
ceph_tcp_sendpage already does the work of mapping/unmapping
the zero page if needed.
Signed-off-by: Benoît Canet benoit.ca...@nodalink.com
This looks good.
Reviewed-by: Alex
nr_requests (/sys/block/rbd<id>/queue/nr_requests) is pretty much
irrelevant in blk-mq case because each driver sets its own max depth
that it can handle and that's the number of tags that gets preallocated
on setup. Users can't increase queue depth beyond that value via
writing to nr_requests.
The default queue_limits::max_segments value (BLK_MAX_SEGMENTS = 128)
unnecessarily limits bio sizes to 512k (assuming 4k pages). rbd, being
a virtual block device, doesn't have any restrictions on the number of
physical segments, so bump max_segments to max_hw_sectors, in theory
allowing a
Hi,
See 3/3. I'll patch rbd cli tool once this is in.
Thanks,
Ilya
Ilya Dryomov (3):
rbd: terminate rbd_opts_tokens with Opt_err
rbd: store rbd_options in rbd_device
rbd: queue_depth map option
drivers/block/rbd.c | 59
Signed-off-by: Ilya Dryomov idryo...@gmail.com
---
drivers/block/rbd.c | 18 +++---
1 file changed, 11 insertions(+), 7 deletions(-)
diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index 4de8c9167c4b..e502bce02d2c 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@
Also nuke useless Opt_last_bool and don't break lines unnecessarily.
Signed-off-by: Ilya Dryomov idryo...@gmail.com
---
drivers/block/rbd.c | 24
1 file changed, 8 insertions(+), 16 deletions(-)
diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index