--
Jens Axboe
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
On Wed, Jul 18 2007, Tejun Heo wrote:
Jens Axboe wrote:
On Wed, Jul 18 2007, Tejun Heo wrote:
End of device check is done twice in __generic_make_request() and it's
fully inlined each time. Factor out bio_check_eod().
Tejun, yeah I should separate the cleanups and put them
On Sat, Jun 02 2007, Tejun Heo wrote:
Hello,
Jens Axboe wrote:
Would that be very different from issuing a barrier and not waiting for
its completion? For ATA and SCSI, we'll have to flush write back cache
anyway, so I don't see how we can get performance advantage by
implementing
On Fri, Jun 01 2007, Bill Davidsen wrote:
Jens Axboe wrote:
On Thu, May 31 2007, Bill Davidsen wrote:
Jens Axboe wrote:
On Thu, May 31 2007, David Chinner wrote:
On Thu, May 31, 2007 at 08:26:45AM +0200, Jens Axboe wrote:
On Thu, May 31 2007, David Chinner
On Thu, May 31 2007, David Chinner wrote:
On Thu, May 31, 2007 at 08:26:45AM +0200, Jens Axboe wrote:
On Thu, May 31 2007, David Chinner wrote:
IOWs, there are two parts to the problem:
1 - guaranteeing I/O ordering
2 - guaranteeing blocks are on persistent storage
On Thu, May 31 2007, Phillip Susi wrote:
Jens Axboe wrote:
No, Stephan is right, the barrier is both an ordering and integrity
constraint. If a driver completes a barrier request before that request
and previously submitted requests are on STABLE storage, then it
violates that principle. Look
support or not), it may be
nearly as slow as a real barrier write.
On Thu, May 31 2007, [EMAIL PROTECTED] wrote:
On Thu, 31 May 2007, Jens Axboe wrote:
On Thu, May 31 2007, Phillip Susi wrote:
David Chinner wrote:
That sounds like a good idea - we can leave the existing
WRITE_BARRIER behaviour unchanged and introduce a new WRITE_ORDERED
behaviour
the highest pfn in the system -- in that case,
* don't waste time iterating over bio segments
() would never be
needed (unless you want to do a data-less barrier, and we should
probably add that specific functionality with an empty bio instead of
providing an alternate way of doing that).
On Thu, May 10 2007, Jan Engelhardt wrote:
On May 9 2007 15:38, Jens Axboe wrote:
I am an mdadm/disk/hard drive fanatic, I was curious:
On i386, we can at most fit 256 scatterlist elements into a page,
and on x86-64 we are stuck with 128. So that puts us somewhere
between 512kb
and abysmal performance.
I have an mdadm raid5 of 10 raptors and get 434MB/s write and 622MB/s
read, would I see an increase in performance with this patch?
Perhaps, depends on a lot of factors.
+0100
@@ -1602,6 +1602,8 @@
**/
void generic_unplug_device(request_queue_t *q)
{
+ WARN_ON(irqs_disabled());
+
spin_lock_irq(q->queue_lock);
__generic_unplug_device(q);
spin_unlock_irq(q->queue_lock);
backport. Neil, are you OK with that?
Yes, I'm OK with that, thanks.
Ack from me as well, it's really a quite nasty bug from a performance
POV. Not just for DRBD, but for io schedulers as well.
the problem more
obvious.
Agree, that would be a good plan to enable. Other questions: are you
seeing timeouts at any point? The ide timeout code has some request/bio
resetting code which might be worrisome.
NeilBrown
testing with all the debugging
options enabled, that should make us a little wiser.
->request_fn is set. And in that case, you must have an io
scheduler attached.
= NULL;
del_gendisk(mddev->gendisk);
mddev->gendisk = NULL;
That's the wrong order, isn't it. :-(
Yep, you want to reverse that :-)
later in the function.
Cc: Jens Axboe [EMAIL PROTECTED]
Signed-off-by: Neil Brown [EMAIL PROTECTED]
Code looks good to me, but for some reason your comment exceeds 80
chars. Can you please fix that up?
Acked-by: Jens Axboe [EMAIL PROTECTED]
developers really should take this more seriously...
On Mon, May 22 2006, NeilBrown wrote:
Else a subsequent bio_clone might make a mess.
Signed-off-by: Neil Brown [EMAIL PROTECTED]
Cc: Don Dupuis [EMAIL PROTECTED]
Cc: Jens Axboe [EMAIL PROTECTED]
### Diffstat output
./fs/bio.c | 3 +++
1 file changed, 3 insertions(+)
diff ./fs
resides _below_ the raid
personality? So if you want to balance what goes to what io scheduler
(and thus, disk), you'd want to mess with the raid personality.
they
support NCQ). Does the binary promise driver support NCQ?
Jeff likely knows a lot more.
support for my nforce system in libata:-)
Don't hold your breath, it's unlikely to get supported as nvidia won't
open the specs. ahci is a really really nice controller, if you want ncq
I suggest going with that. sil is probably the next in line for ncq
support.
--
Jens Axboe
-
To unsubscribe from
-ops->elevator_completed_req_fn)
-	e->ops->elevator_completed_req_fn(q, rq);
}
}
to
this list.
,
feels like the problem is elsewhere (driver, most likely).
If we still can't get closer to this, it would be interesting to try my
block tracing stuff so we can see what is going on at the queue level.
But let's gather some more info first, since it requires testing -mm.
in the system!
Please just do
# echo 512 > /sys/block/<dev>/queue/nr_requests
after boot for each device whose queue size you want to increase. 512
should be enough with the 3ware.
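A sketch of applying that suggestion to every device in one go. It runs against a mock tree here so it works unprivileged; on a real box you would point SYSBLOCK at /sys/block and run it as root:

```shell
#!/bin/sh
# Build a throwaway mock of the /sys/block layout (two fake devices).
SYSBLOCK="${SYSBLOCK:-$(mktemp -d)}"
mkdir -p "$SYSBLOCK/sda/queue" "$SYSBLOCK/sdb/queue"
echo 128 > "$SYSBLOCK/sda/queue/nr_requests"
echo 128 > "$SYSBLOCK/sdb/queue/nr_requests"

# Bump nr_requests to 512 on every device under the tree.
for q in "$SYSBLOCK"/*/queue/nr_requests; do
    echo 512 > "$q"
done

cat "$SYSBLOCK/sda/queue/nr_requests"
```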
, the reading
would be much faster if it was. If the fusion is using a large queue
depth, increasing nr_requests would likely help the writes (but not to
the extent of where it would suddenly be as fast as it should).
less system time.
And you are still 1/3 off the target data rate, hmmm...
With the reads, how does the aggregate bandwidth look when you add
'clients'? Same as with writes, gradually decreasing per-device
throughput?