Hi Yehuda,
I traced the code and the read op seems to happen in RGWRados::raw_obj_stat()? This
looks unnecessary if the range does not fall on the head object.
I tried setting rgw_max_chunk_size = 0 and it's not working on the PUT side.
Do you have any ideas on how to avoid this?
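(To illustrate what I mean by the range not falling on the head object, assuming the head holds
the first rgw_max_chunk_size bytes, 512 KB in my build: for a 4 MB object, a GET with
Range: bytes=1048576-2097151 lies entirely in tail objects, yet the head is still read;
presumably that read is only needed for the manifest kept in the head's xattrs.)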
On Fri, Jul 10, 2015 at 10:45 PM, Deneau, Tom tom.den...@amd.com wrote:
I have an osd log file from an osd that hit a suicide timeout (with the
previous 1 events logged).
(On this node I have also seen this suicide timeout happen once before and
also a sync_entry timeout.
I can see
-- All Branches --
Abhishek Lekshmanan abhishek.lekshma...@ril.com
2015-06-13 10:30:09 +0530 hammer-backports
Adam Crume adamcr...@gmail.com
2014-12-01 20:45:58 -0800 wip-doc-rbd-replay
Alfredo Deza ad...@redhat.com
2015-03-23 16:39:48 -0400 wip-11212
I was originally thinking that you were just proposing to have librbd write to
the eventfd descriptor when your AIO op completed so that you could hook librbd
callbacks into an existing app poll loop. If librbd is doing the polling via
poll_io_events, I guess I don't see why you would even
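To make the first approach concrete, here is a minimal sketch (mine, not from this thread) of
"librbd writes to an eventfd from its AIO completion callback, and the app just adds that fd to
its existing epoll loop". The pool/image names are placeholders and error handling is stripped:

  /* Sketch: surface a librbd AIO completion to an epoll loop via eventfd.
   * Assumes default ceph.conf, pool "rbd", existing image "test". */
  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/epoll.h>
  #include <sys/eventfd.h>
  #include <unistd.h>
  #include <rados/librados.h>
  #include <rbd/librbd.h>

  static void aio_done(rbd_completion_t c, void *arg)
  {
      (void)c;                       /* runs in a librbd finisher thread */
      uint64_t one = 1;
      if (write(*(int *)arg, &one, sizeof(one)) != sizeof(one))
          perror("eventfd write");   /* just poke the fd; nothing else here */
  }

  int main(void)
  {
      rados_t cluster; rados_ioctx_t ioctx; rbd_image_t image;
      rbd_completion_t comp;
      char buf[4096];
      memset(buf, 0xab, sizeof(buf));

      rados_create(&cluster, NULL);
      rados_conf_read_file(cluster, NULL);
      rados_connect(cluster);
      rados_ioctx_create(cluster, "rbd", &ioctx);
      rbd_open(ioctx, "test", &image, NULL);

      int evfd = eventfd(0, EFD_NONBLOCK);
      int epfd = epoll_create1(0);
      struct epoll_event ev = { .events = EPOLLIN, .data.fd = evfd };
      epoll_ctl(epfd, EPOLL_CTL_ADD, evfd, &ev);

      rbd_aio_create_completion(&evfd, aio_done, &comp);
      rbd_aio_write(image, 0, sizeof(buf), buf, comp);

      /* The application's existing poll loop; the eventfd is just one more fd. */
      struct epoll_event out;
      if (epoll_wait(epfd, &out, 1, -1) == 1 && out.data.fd == evfd) {
          uint64_t n;
          if (read(evfd, &n, sizeof(n)) == sizeof(n))   /* reset the counter */
              printf("aio_write returned %zd\n", rbd_aio_get_return_value(comp));
          rbd_aio_release(comp);
      }

      rbd_close(image);
      rados_ioctx_destroy(ioctx);
      rados_shutdown(cluster);
      close(evfd); close(epfd);
      return 0;
  }

The open question in the thread is only who owns this glue: the application (as above) or librbd
itself via something like the proposed poll_io_events.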
Lots of new tickets in tracker related to forward scrub, this is a
handy list (mainly for Greg and myself) that maps to the notes from our
design discussion. They're all under the 'fsck' category in tracker.
John
Tagging
prerequisite:
- forward scrub (traverse tree, at least)
#12255
On Fri, 10 Jul 2015, Wido den Hollander wrote:
On 07/10/2015 11:08 PM, Sage Weil wrote:
It's official! We have a new port number for the monitor:
3300 (that's CE4h, or 0xCE4).
Sometime in the next cycle we'll need to make a transition plan to move
from 6789.
Very
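(Quick check on the mnemonic: 0xCE4 = 12*256 + 14*16 + 4 = 3300, so the hex spelling does line up.)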
On Mon, 13 Jul 2015, Jason Dillaman wrote:
But it doesn't provide an easily composable way
of integrating waiting on other events in the application. eventfd is
easy to embed in your (e)poll loop or any kind of event library
(libev).
Agreed -- which is why I asked about the proposed
On Mon, Jul 13, 2015 at 2:39 PM, Jason Dillaman dilla...@redhat.com wrote:
But it doesn't provide an easily composable way
of integrating waiting on other events in the application. eventfd is
easy to embed in your (e)poll loop or any kind of event library
(libev).
Agreed -- which is why I
But it doesn't provide an easily composable way
of integrating waiting on other events in the application. eventfd is
easy to embed in your (e)poll loop or any kind of event library
(libev).
Agreed -- which is why I asked about the proposed design since it appears (to
me) that everything is
On Mon, Jul 13, 2015 at 1:32 PM, Jason Dillaman dilla...@redhat.com wrote:
Sorry, I'm not following. Even if we have poll_io_events, we need to know when
to call poll_io_events.
I guess you mean we could notify the user's fd in the rbd callback.
Yes, we could do this. But an extra rbd callback could be
FWIW,
It would be very interesting to see the output of:
https://github.com/ceph/cbt/blob/master/tools/readpgdump.py
If you see something that looks anomalous, I'd like to make sure that
I'm detecting issues like this.
Mark
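(If it helps, my recollection is that the script just parses a 'ceph pg dump' on stdin, i.e.
something like 'ceph pg dump | python readpgdump.py', but treat that exact invocation as an
assumption.)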
On 07/09/2015 06:03 PM, Samuel Just wrote:
I've seen some odd
Greg --
Thanks. I put the osd.log file at
https://drive.google.com/file/d/0B_rfwWh40kPwQjZ3OXdjLUZNRVU/view?usp=sharing
I noticed the following from journalctl output around that time, so other nodes
were complaining they could not reach osd.8.
Jul 09 15:53:04 seattle-04-ausisv bash[8486]:
On Mon, Jul 13, 2015 at 9:52 PM, Jason Dillaman dilla...@redhat.com wrote:
I was originally thinking that you were just proposing to have librbd write
to the eventfd descriptor when your AIO op completed so that you could hook
librbd callbacks into an existing app poll loop. If librbd is
Sorry, I'm not following. Even if we have poll_io_events, we need to know when
to call poll_io_events.
I guess you mean we could notify the user's fd in the rbd callback.
Yes, we could do this. But an extra rbd callback could be omitted if we
embed standard notification methods; we can get performance
Hi Konstantin,
I'm definitely interested in looking at your tools and seeing if we can
merge them into cbt! One of the things we lack right now in cbt is any
kind of real openstack integration. Right now CBT basically just
assumes you've already launched VMs and specified them as clients in
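For anyone unfamiliar with cbt, the static client/OSD description it expects looks roughly like
the sketch below; the key names are from memory of the cbt example configs, so treat them as
approximate rather than authoritative:

  cluster:
    user: ceph
    head: "client-01"
    clients: ["client-01", "client-02"]
    osds: ["osd-01", "osd-02"]
    use_existing: true

The point is that nothing here provisions the VMs; cbt assumes they already exist.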
- Original Message -
From: Yuan Zhou yuan.z...@intel.com
To: Ceph Devel ceph-devel@vger.kernel.org, Yehuda Sadeh-Weinraub
yeh...@redhat.com
Sent: Monday, July 13, 2015 4:46:20 AM
Subject: head object read with range get on RGW
Hi Yehuda,
I trace the code and the read op seems
heartbeat_map reset_timeout 'OSD::osd_op_tp thread 0x3ff6eb0efd0' had
suicide timed out after 150
So that's the OSD's op thread, which is the one that does most of
the work. You often see the FileStore::op_tp when it's the disk or
filesystem breaking, but I do see the line
waiting 51 > 50 ops ||
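(That line looks like the FileStore op-queue throttle: with the default
filestore_queue_max_ops = 50, '51 > 50 ops' means the op thread is blocked waiting for
journal/filestore queue space, i.e. the backing store isn't keeping up.)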
Looking at the output, it looks like even pool 19 has a pretty small
number of PGs for that many OSDs:
+--------------+
| Pool ID: 19  |
+--------------+
| Participating
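(As a rough sanity check, the usual rule of thumb is about (number of OSDs x 100) / replica count
PGs for a busy pool, rounded up to a power of two; e.g. a hypothetical 60 OSDs at 3x replication
gives 60 * 100 / 3 = 2000, so 2048 PGs.)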
On 07/13/2015 11:42 AM, Sage Weil wrote:
On Mon, 13 Jul 2015, Jason Dillaman wrote:
But it doesn't provide an easily composable way
of integrating waiting on other events in the application. eventfd is
easy to embed in your (e)poll loop or any kind of event library
(libev).
Agreed -- which
Greg --
Not sure how to tell whether rebalancing occurred at that time.
I do see in other osd logs complaints that they do not get a reply from
osd.8 starting around 15:52 on that day.
I see the deep-scrub of pool 14 but that was almost 30 minutes earlier.
-- Tom
-Original Message-
Absolutely!
On 07/10/2015 03:00 AM, Konstantin Danilov wrote:
Can I propose a topic for meeting by adding it to etherpad?
On Wed, Jul 8, 2015 at 5:30 PM, Mark Nelson mnel...@redhat.com wrote:
8AM PST as usual! Let's discuss the topics that we didn't cover
Hi Deneau,
We set filestore_max_queue_ops=1000, but unfortunately we still hit the OSD suicide.
We test random 4K writes; the osd_op_tp thread suicides even if we set
osd_op_thread_timeout=120.
The threads in osd_op_tp will reset the timeout handle after submitting an op
to FileJournal's workqueue,
so we have
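For reference, and assuming "filestore_max_queue_ops" above means the option actually spelled
filestore_queue_max_ops, the knobs being discussed would sit in ceph.conf roughly like this; the
values are just the ones quoted in this thread, not recommendations:

  [osd]
      filestore queue max ops = 1000
      osd op thread timeout = 120
      # suicide threshold for the same thread pool; the '150' in the log above
      osd op thread suicide timeout = 150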
Hello Cepher,
I am working on porting the Google Cloud Storage API to rados gateway and I notice
that Google Cloud Storage supports parallel uploads as below:
Object composition enables a simple mechanism for uploading an object in
parallel: simply divide your data into as many chunks as required to
On 13-07-15 08:07, 王 健 wrote:
Hello Cepher,
I am working on porting the Google Cloud Storage API to rados gateway and I
notice that Google Cloud Storage supports parallel uploads as below:
Object composition enables a simple mechanism for uploading an object in
parallel: simply divide your
Does rados gateway support object composition? Like Google Cloud Storage, where a
user can divide a large object into several objects, upload them in parallel, then
combine them as a single object.
Thanks
Jian
On Jul 13, 2015, at 2:16 AM, Wido den Hollander w...@42on.com wrote:
On 13-07-15
On 13-07-15 08:32, 王 健 wrote:
Does rados gateway support object composition? Like Google Cloud Storage, where
a user can divide a large object into several objects, upload them in parallel,
then combine them as a single object.
Thanks
Jian
Yes, that is how multipart uploads work. The RADOS
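For completeness, the flow RGW implements here is the standard S3 multipart upload API; a sketch
with a hypothetical bucket/key, path-style:

  POST /mybucket/large-object?uploads                      -> returns an UploadId
  PUT  /mybucket/large-object?partNumber=N&uploadId=<id>   -> one request per part, parts may go in parallel
  POST /mybucket/large-object?uploadId=<id>                -> XML body listing each partNumber and its ETag

RGW then presents the parts as a single logical object, which is the same end result as GCS
object composition.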
Got it.
Thanks
Jian
On Jul 13, 2015, at 2:36 AM, Wido den Hollander w...@42on.com wrote:
On 13-07-15 08:32, 王 健 wrote:
Does rados gateway support object composition? Like Google Cloud Storage, where
a user can divide a large object into several objects, upload them in parallel,
then