Re: Linux disk performance.

2006-12-31 Thread Bill Davidsen

Phillip Susi wrote:

Bill Davidsen wrote:
Quite honestly, the main place I have found O_DIRECT useful is in 
keeping programs doing large i/o quantities from blowing the buffers 
and making the other applications run like crap. If your application is 
running alone, unless you are very short of CPU or memory avoiding the 
copy to an o/s buffer will be down in the measurement noise.


I had a news (usenet) server which normally did 120 art/sec (~480 
tps), which dropped to about 50 tps when doing large file copies even 
at low priority. By using O_DIRECT the impact essentially vanished, at 
the cost of the copy running about 10-15% slower. Changing various 
programs to use O_DIRECT only helped when really large blocks of data 
were involved, and only when i/o could be done in a way to satisfy 
the alignment and size requirements of O_DIRECT.
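
As a point of reference, a minimal sketch of a write that satisfies those
requirements (the 4 KiB alignment, file name and chunk size are illustrative,
not taken from this thread):

/* Minimal sketch: one 128 KiB write that satisfies typical O_DIRECT
 * alignment rules (buffer address, file offset and length all aligned).
 * The 4096-byte alignment and the file name are illustrative only. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const size_t align = 4096;          /* assume 4 KiB covers the device's requirement */
    const size_t chunk = 128 * 1024;    /* 128 KiB, a multiple of the alignment */
    void *buf = NULL;

    if (posix_memalign(&buf, align, chunk) != 0) {
        perror("posix_memalign");
        return 1;
    }
    memset(buf, 0xab, chunk);

    int fd = open("testfile", O_WRONLY | O_CREAT | O_DIRECT, 0644);
    if (fd < 0) {
        perror("open(O_DIRECT)");
        return 1;
    }

    /* Offset 0 is aligned; subsequent writes advance by whole chunks. */
    ssize_t n = pwrite(fd, buf, chunk, 0);
    if (n != (ssize_t)chunk)
        perror("pwrite");

    close(fd);
    free(buf);
    return 0;
}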


If you upgrade to a newer kernel you can try other i/o scheduler 
options, default cfq or even deadline might be helpful.


I would point out that if you are looking for optimal throughput and 
reduced cpu overhead, and want to avoid blowing out the kernel fs cache, you 
need to couple aio with O_DIRECT.  By itself O_DIRECT will lower 
throughput because there will be brief pauses between each IO while the 
application prepares the next buffer.  You can overcome this by posting 
a few pending buffers concurrently with aio, allowing the kernel to 
always have a buffer ready for the next io as soon as the previous one 
completes.


A good point, but in this case there was no particular urgency, other 
than not to stop the application while doing background data moves. The 
best way to do it would have been to put it where it belonged in the 
first place :-(


--
bill davidsen <[EMAIL PROTECTED]>
  CTO TMR Associates, Inc
  Doing interesting things with small computers since 1979
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Linux disk performance.

2006-12-27 Thread Phillip Susi

Bill Davidsen wrote:
Quite honestly, the main place I have found O_DIRECT useful is in 
keeping programs doing large i/o quantities from blowing the buffers and 
making the other applications run like crap. If your application is 
running alone, unless you are very short of CPU or memory avoiding the 
copy to an o/s buffer will be down in the measurement noise.


I had a news (usenet) server which normally did 120 art/sec (~480 tps), 
which dropped to about 50 tps when doing large file copies even at low 
priority. By using O_DIRECT the impact essentially vanished, at the cost 
of the copy running about 10-15% slower. Changing various programs to 
use O_DIRECT only helped when really large blocks of data were involved, 
and only when i/o could be done in a way to satisfy the alignment and 
size requirements of O_DIRECT.


If you upgrade to a newer kernel you can try other i/o scheduler 
options, default cfq or even deadline might be helpful.


I would point out that if you are looking for optimal throughput and 
reduced cpu overhead, and want to avoid blowing out the kernel fs cache, you 
need to couple aio with O_DIRECT.  By itself O_DIRECT will lower 
throughput because there will be brief pauses between each IO while the 
application prepares the next buffer.  You can overcome this by posting 
a few pending buffers concurrently with aio, allowing the kernel to 
always have a buffer ready for the next io as soon as the previous one 
completes.
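
A minimal sketch of that approach, using libaio to keep a few O_DIRECT writes
in flight (queue depth, chunk size, alignment and file name are illustrative;
build with -laio, error handling abbreviated):

/* Sketch only: keep a few O_DIRECT writes in flight with libaio so the
 * device never waits for the application to prepare the next buffer. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define QDEPTH 4
#define CHUNK  (128 * 1024)

int main(void)
{
    int fd = open("stream.dat", O_WRONLY | O_CREAT | O_DIRECT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    io_context_t ctx = 0;
    if (io_setup(QDEPTH, &ctx) != 0) {
        fprintf(stderr, "io_setup failed\n");
        return 1;
    }

    struct iocb cbs[QDEPTH];
    void *bufs[QDEPTH];
    long long off = 0;

    /* Prime the queue: submit QDEPTH aligned writes up front. */
    for (int i = 0; i < QDEPTH; i++) {
        posix_memalign(&bufs[i], 4096, CHUNK);
        memset(bufs[i], i, CHUNK);
        io_prep_pwrite(&cbs[i], fd, bufs[i], CHUNK, off);
        struct iocb *p = &cbs[i];
        io_submit(ctx, 1, &p);
        off += CHUNK;
    }

    /* Steady state: as each write completes, refill its buffer and resubmit. */
    for (int done = 0; done < 64; done++) {
        struct io_event ev;
        io_getevents(ctx, 1, 1, &ev, NULL);       /* wait for one completion */
        struct iocb *cb = ev.obj;
        memset(cb->u.c.buf, done, CHUNK);         /* "prepare" the next chunk */
        io_prep_pwrite(cb, fd, cb->u.c.buf, CHUNK, off);
        io_submit(ctx, 1, &cb);
        off += CHUNK;
    }

    io_destroy(ctx);
    close(fd);
    return 0;
}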



-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Linux disk performance.

2006-12-21 Thread Manish Regmi

On 12/22/06, Bhanu Kalyan Chetlapalli <[EMAIL PROTECTED]> wrote:

>
> Thanks  for the suggestion but the performance was terrible when write
> cache was disabled.

Performance degradation is expected. But the point is - did the
anomaly that you pointed out go away? Because if it did, then
it is the disk cache which is causing the issue, and you will have to
live with it. Else you will have to look elsewhere.


Oops, sorry for the incomplete answer.
Actually I did not test thoroughly, but my initial tests showed some
bumps and serious performance degradation. Anyway, there were still
some bumps... :(

(sequence)  (channel)  (write time in microseconds)
0           0          6366
0           1          9949
0           2          10125
0           3          10165
0           4          11043
0           5          10129
0           6          10089
0           7          10165
0           8          71572
0           9          9882
0           10         8105
0           11         10085


--
---
regards
Manish Regmi

---
UNIX without a C Compiler is like eating Spaghetti with your mouth
sewn shut. It just doesn't make sense.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Linux disk performance.

2006-12-21 Thread Bhanu Kalyan Chetlapalli

On 12/22/06, Manish Regmi <[EMAIL PROTECTED]> wrote:

On 12/22/06, Bhanu Kalyan Chetlapalli <[EMAIL PROTECTED]> wrote:
>
> I am assuming that your program is not seeking in between writes.
>
> Try disabling the disk cache; nowadays some disks can have as much
> as 8MB of write cache, so the disk might be buffering as much as it can,
> and trying to write only when it can no longer buffer. Since you have
> an app which continuously writes copious amounts of data, in order,
> disabling the write cache might make some sense.
>

Thanks  for the suggestion but the performance was terrible when write
cache was disabled.


Performance degradation is expected. But the point is - did the
anomaly that you pointed out go away? Because if it did, then
it is the disk cache which is causing the issue, and you will have to
live with it. Else you will have to look elsewhere.


--
---
regards
Manish Regmi

---
UNIX without a C Compiler is like eating Spaghetti with your mouth
sewn shut. It just doesn't make sense.




--
There is only one success - to be able to spend your life in your own way.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Linux disk performance.

2006-12-21 Thread Manish Regmi

On 12/21/06, Erik Mouw <[EMAIL PROTECTED]> wrote:

Bursty video traffic is really an application that could take advantage
from the kernel buffering. Unless you want to reinvent the wheel and do
the buffering yourself (it is possible though, I've done it on IRIX).


But in my test O_DIRECT gave slightly better performance. Also the CPU
usage decreased.



BTW, why are you so keen on smooth-at-the-microlevel writeout? With
real time video applications it's only important not to drop frames.
How fast those frames will go to the disk isn't really an issue, as
long as you don't overflow the intermediate buffer.


Actually I don't require smooth-at-the-microlevel writeout, but the
timing bumps are overflowing the intermediate buffers. I was just
wondering if I could decrease the 20 ms bumps to 3 ms as in the other
writes.



Erik

--
They're all fools. Don't worry. Darwin may be slow, but he'll
eventually get them. -- Matthew Lammers in alt.sysadmin.recovery




--
---
regards
Manish Regmi

---
UNIX without a C Compiler is like eating Spaghetti with your mouth
sewn shut. It just doesn't make sense.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Linux disk performance.

2006-12-21 Thread Manish Regmi

On 12/22/06, Bhanu Kalyan Chetlapalli <[EMAIL PROTECTED]> wrote:


I am assuming that your program is not seeking in between writes.

Try disabling the disk cache; nowadays some disks can have as much
as 8MB of write cache, so the disk might be buffering as much as it can,
and trying to write only when it can no longer buffer. Since you have
an app which continuously writes copious amounts of data, in order,
disabling the write cache might make some sense.



Thanks  for the suggestion but the performance was terrible when write
cache was disabled.

--
---
regards
Manish Regmi

---
UNIX without a C Compiler is like eating Spaghetti with your mouth
sewn shut. It just doesn't make sense.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Linux disk performance.

2006-12-21 Thread Bhanu Kalyan Chetlapalli

On 12/20/06, Manish Regmi <[EMAIL PROTECTED]> wrote:

On 12/19/06, Nick Piggin <[EMAIL PROTECTED]> wrote:
> When you submit a request to an empty block device queue, it can
> get "plugged" for a number of timer ticks before any IO is actually
> started. This is done for efficiency reasons and is independent of
> the IO scheduler used.
>

Thanks for the information..

> Use the noop IO scheduler, as well as the attached patch, and let's
> see what your numbers look like.
>

Unfortunately I got the same results even after applying your patch. I
also tried putting
q->unplug_delay = 1;
but it did not work. The result was similar.


I am assuming that your program is not seeking in between writes.

Try disabling the disk cache; nowadays some disks can have as much
as 8MB of write cache, so the disk might be buffering as much as it can,
and trying to write only when it can no longer buffer. Since you have
an app which continuously writes copious amounts of data, in order,
disabling the write cache might make some sense.


--
---
regards
Manish Regmi


Bhanu

---
UNIX without a C Compiler is like eating Spaghetti with your mouth
sewn shut. It just doesn't make sense.

--
Kernelnewbies: Help each other learn about the Linux kernel.
Archive:   http://mail.nl.linux.org/kernelnewbies/
FAQ:   http://kernelnewbies.org/faq/





--
There is only one success - to be able to spend your life in your own way.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Linux disk performance.

2006-12-21 Thread Erik Mouw
On Thu, Dec 21, 2006 at 11:48:42AM +0545, Manish Regmi wrote:
> Yes... my application does a large amount of I/O. It actually writes
> video data received from Ethernet (an IP camera) to the disk in 128 KB
> chunks.

Bursty video traffic is really an application that could take advantage
of the kernel buffering, unless you want to reinvent the wheel and do
the buffering yourself (it is possible though, I've done it on IRIX).

BTW, why are you so keen on smooth-at-the-microlevel writeout? With
real time video applications it's only important not to drop frames.
How fast those frames will go to the disk isn't really an issue, as
long as you don't overflow the intermediate buffer.
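
For what it's worth, "the buffering yourself" can be as simple as a bounded
ring of chunk-sized slots between the capture thread and the disk-writer
thread. A sketch follows; the slot count and chunk size are illustrative, and
the ring only has to be deep enough to cover the worst write stall times the
ingest rate:

/* Sketch, not code from this thread: a bounded ring of 128 KiB slots
 * between the network (producer) thread and the disk-writer (consumer)
 * thread.  32 slots (4 MiB) here is an arbitrary illustration. */
#include <pthread.h>
#include <string.h>

#define SLOTS 32
#define CHUNK (128 * 1024)

static char ring[SLOTS][CHUNK];
static int head, tail, count;                 /* guarded by lock */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

/* Producer: called from the network thread for every received chunk. */
void put_chunk(const char *data)
{
    pthread_mutex_lock(&lock);
    while (count == SLOTS)                    /* buffer full: block (or drop) */
        pthread_cond_wait(&not_full, &lock);
    memcpy(ring[head], data, CHUNK);
    head = (head + 1) % SLOTS;
    count++;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&lock);
}

/* Consumer: the disk-writer thread copies the oldest slot out and writes it. */
void get_chunk(char *out)
{
    pthread_mutex_lock(&lock);
    while (count == 0)
        pthread_cond_wait(&not_empty, &lock);
    memcpy(out, ring[tail], CHUNK);
    tail = (tail + 1) % SLOTS;
    count--;
    pthread_cond_signal(&not_full);
    pthread_mutex_unlock(&lock);
}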


Erik

-- 
They're all fools. Don't worry. Darwin may be slow, but he'll
eventually get them. -- Matthew Lammers in alt.sysadmin.recovery
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Linux disk performance.

2006-12-20 Thread Daniel Cheng

Manish Regmi wrote:
[...]


If you upgrade to a newer kernel you can try other i/o scheduler
options, default cfq or even deadline might be helpful.


I tried all disk schedulers but all had timing bumps. :(



Did you try to disable the on disk write cache?

man hdparm(8)

  -W Disable/enable the IDE drive's write-caching
     feature


--

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Linux disk performance.

2006-12-20 Thread Manish Regmi

On 12/21/06, Bill Davidsen <[EMAIL PROTECTED]> wrote:

>
> But isn't O_DIRECT supposed to bypass buffering in Kernel?
That's correct. But it doesn't put your write at the head of any queue,
it just doesn't buffer it for you.

> Doesn't it directly write to disk?
Also correct, when it's your turn to write to disk...


But the only process accessing that disk is my application.


> I tried to put fdatasync() at regular intervals but there was no
> visible effect.
>
Quite honestly, the main place I have found O_DIRECT useful is in
keeping programs doing large i/o quantities from blowing the buffers and
making the other applications run like crap. If your application is
running alone, unless you are very short of CPU or memory avoiding the
copy to an o/s buffer will be down in the measurement noise.


Yes... my application does a large amount of I/O. It actually writes
video data received from Ethernet (an IP camera) to the disk in 128 KB
chunks.


I had a news (usenet) server which normally did 120 art/sec (~480 tps),
which dropped to about 50 tps when doing large file copies even at low
priority. By using O_DIRECT the impact essentially vanished, at the cost
of the copy running about 10-15% slower. Changing various programs to
use O_DIRECT only helped when really large blocks of data were involved,
and only when i/o could be done in a way to satisfy the alignment and
size requirements of O_DIRECT.

If you upgrade to a newer kernel you can try other i/o scheduler
options, default cfq or even deadline might be helpful.


I tried all disk schedulers but all had timing bumps. :(


--
bill davidsen <[EMAIL PROTECTED]>
   CTO TMR Associates, Inc
   Doing interesting things with small computers since 1979




--
---
regards
Manish Regmi

---
UNIX without a C Compiler is like eating Spaghetti with your mouth
sewn shut. It just doesn't make sense.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Linux disk performance.

2006-12-20 Thread Bill Davidsen

Manish Regmi wrote:

On 12/18/06, Arjan van de Ven <[EMAIL PROTECTED]> wrote:

if you want truly really smooth writes you'll have to work for it,
since "bumpy" writes tend to be better for performance so naturally the
kernel will favor those.

to get smooth writes you'll need to do a threaded setup where you do an
msync/fdatasync/sync_file_range on a frequent-but-regular interval from
a thread. Be aware that this is quite likely to give you lower maximum
performance than the batching behavior though.



Thanks...


Just to say it another way.


But isn't O_DIRECT supposed to bypass buffering in Kernel?
That's correct. But it doesn't put your write at the head of any queue, 
it just doesn't buffer it for you.



Doesn't it directly write to disk?

Also correct, when it's your turn to write to disk...


I tried to put fdatasync() at regular intervals but there was no
visible effect.

Quite honestly, the main place I have found O_DIRECT useful is in 
keeping programs doing large i/o quantities from blowing the buffers and 
making the other applications run like crap. If your application is 
running alone, unless you are very short of CPU or memory avoiding the 
copy to an o/s buffer will be down in the measurement noise.


I had a news (usenet) server which normally did 120 art/sec (~480 tps), 
which dropped to about 50 tps when doing large file copies even at low 
priority. By using O_DIRECT the impact essentially vanished, at the cost 
of the copy running about 10-15% slower. Changing various programs to 
use O_DIRECT only helped when really large blocks of data were involved, 
and only when i/o could be done in a way to satisfy the alignment and 
size requirements of O_DIRECT.


If you upgrade to a newer kernel you can try other i/o scheduler 
options, default cfq or even deadline might be helpful.


--
bill davidsen <[EMAIL PROTECTED]>
  CTO TMR Associates, Inc
  Doing interesting things with small computers since 1979
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Linux disk performance.

2006-12-20 Thread Manish Regmi

On 12/19/06, Nick Piggin <[EMAIL PROTECTED]> wrote:

When you submit a request to an empty block device queue, it can
get "plugged" for a number of timer ticks before any IO is actually
started. This is done for efficiency reasons and is independent of
the IO scheduler used.



Thanks for the information..


Use the noop IO scheduler, as well as the attached patch, and let's
see what your numbers look like.



Unfortunately I got the same results even after applying your patch. I
also tried putting
q->unplug_delay = 1;
but it did not work. The result was similar.

--
---
regards
Manish Regmi

---
UNIX without a C Compiler is like eating Spaghetti with your mouth
sewn shut. It just doesn't make sense.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Linux disk performance.

2006-12-19 Thread Arjan van de Ven
On Tue, 2006-12-19 at 17:38 +1100, Nick Piggin wrote:
> Manish Regmi wrote:
> 
> > Nick Piggin:
> > 
> >> but
> >> they look like they might be a (HZ quantised) delay coming from
> >> block layer plugging.
> > 
> > 
> > Sorry I didn't understand what you mean.
> 
> When you submit a request to an empty block device queue, it can
> get "plugged" for a number of timer ticks before any IO is actually
> started. This is done for efficiency reasons and is independent of
> the IO scheduler used.

however the O_DIRECT codepath always unplugs the queues immediately...



-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Linux disk performance.

2006-12-18 Thread Nick Piggin

Manish Regmi wrote:


Nick Piggin:


but
they look like they might be a (HZ quantised) delay coming from
block layer plugging.



Sorry I didn't understand what you mean.


When you submit a request to an empty block device queue, it can
get "plugged" for a number of timer ticks before any IO is actually
started. This is done for efficiency reasons and is independent of
the IO scheduler used.

Use the noop IO scheduler, as well as the attached patch, and let's
see what your numbers look like.
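
(For reference, on kernels that expose the scheduler attribute, the elevator
can also be switched per disk at runtime through sysfs; a minimal sketch, with
the device name purely illustrative:

/* Sketch: select the noop elevator for one disk at runtime via sysfs,
 * on kernels that expose /sys/block/<dev>/queue/scheduler.  This can
 * equally be done with a shell echo. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/sys/block/hda/queue/scheduler", "w");
    if (!f) { perror("open scheduler attribute"); return 1; }
    fputs("noop", f);     /* kernel parses the name and switches elevators */
    fclose(f);
    return 0;
}
)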

Thanks,
Nick

--
SUSE Labs, Novell Inc.
Index: linux-2.6/block/ll_rw_blk.c
===
--- linux-2.6.orig/block/ll_rw_blk.c2006-12-19 17:35:00.0 +1100
+++ linux-2.6/block/ll_rw_blk.c 2006-12-19 17:35:53.0 +1100
@@ -226,6 +226,8 @@ void blk_queue_make_request(request_queu
q->unplug_delay = (3 * HZ) / 1000;  /* 3 milliseconds */
if (q->unplug_delay == 0)
q->unplug_delay = 1;
+   q->unplug_delay = 0;
+   q->unplug_thresh = 0;
 
INIT_WORK(&q->unplug_work, blk_unplug_work, q);
 


Re: Linux disk performance.

2006-12-18 Thread Manish Regmi

On 12/18/06, Erik Mouw <[EMAIL PROTECTED]> wrote:
<...snip...>

>
> But isn't O_DIRECT supposed to bypass buffering in Kernel?

It is.

> Doesn't it directly write to disk?

Yes, but it still uses an IO scheduler.



OK, but I also tried with noop to turn off disk scheduling effects.
There were still timing differences. Usually I get 3100 microseconds,
but up to 2 microseconds at certain intervals. I am just using
gettimeofday between two writes to read the timing.
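
For reference, a minimal sketch of that measurement (path, chunk size and
iteration count are illustrative):

/* Sketch of the measurement described above: time each 128 KiB write
 * with gettimeofday() and print the per-write latency in microseconds. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
    const size_t chunk = 128 * 1024;
    void *buf;
    posix_memalign(&buf, 4096, chunk);        /* aligned so O_DIRECT also works */
    memset(buf, 0, chunk);

    int fd = open("testfile", O_WRONLY | O_CREAT /* | O_DIRECT */, 0644);
    if (fd < 0) { perror("open"); return 1; }

    for (int i = 0; i < 64; i++) {
        struct timeval t0, t1;
        gettimeofday(&t0, NULL);
        if (write(fd, buf, chunk) != (ssize_t)chunk)
            perror("write");
        gettimeofday(&t1, NULL);
        long us = (t1.tv_sec - t0.tv_sec) * 1000000L + (t1.tv_usec - t0.tv_usec);
        printf("%d %ld\n", i, us);            /* sequence, write time in microseconds */
    }
    close(fd);
    free(buf);
    return 0;
}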




In your first message you mentioned you were using an ancient 2.6.10
kernel. That kernel uses the anticipatory IO scheduler. Update to the
latest stable kernel (2.6.19.1 at time of writing) and it will default
to the CFQ scheduler which has a smoother writeout, plus you can give
your process a different IO scheduling class and level (see
Documentation/block/ioprio.txt).


Thanks... I will try with CFQ.



Nick Piggin:

but
they look like they might be a (HZ quantised) delay coming from
block layer plugging.


Sorry I didn't understand what you mean.

To minimise scheduling effects I tried giving it maximum priority.


--
---
regards
Manish Regmi

---
UNIX without a C Compiler is like eating Spaghetti with your mouth
sewn shut. It just doesn't make sense.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Linux disk performance.

2006-12-18 Thread Erik Mouw
On Mon, Dec 18, 2006 at 06:24:39PM +0545, Manish Regmi wrote:
> On 12/18/06, Arjan van de Ven <[EMAIL PROTECTED]> wrote:
> >if you want truly really smooth writes you'll have to work for it,
> >since "bumpy" writes tend to be better for performance so naturally the
> >kernel will favor those.
> >
> >to get smooth writes you'll need to do a threaded setup where you do an
> >msync/fdatasync/sync_file_range on a frequent-but-regular interval from
> >a thread. Be aware that this is quite likely to give you lower maximum
> >performance than the batching behavior though.
> >
> 
> Thanks...
> 
> But isn't O_DIRECT supposed to bypass buffering in Kernel?

It is.

> Doesn't it directly write to disk?

Yes, but it still uses an IO scheduler.

> I tried to put fdatasync() at regular intervals but there was no
> visible effect.

In your first message you mentioned you were using an ancient 2.6.10
kernel. That kernel uses the anticipatory IO scheduler. Update to the
latest stable kernel (2.6.19.1 at time of writing) and it will default
to the CFQ scheduler which has a smoother writeout, plus you can give
your process a different IO scheduling class and level (see
Documentation/block/ioprio.txt).
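
A minimal sketch of setting the IO class from C, along the lines of the sample
in Documentation/block/ioprio.txt (the constants are copied from that document;
glibc has no wrapper, so the raw syscall is used and SYS_ioprio_set must exist
in your headers):

#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#define IOPRIO_CLASS_SHIFT 13
#define IOPRIO_PRIO_VALUE(class, data) (((class) << IOPRIO_CLASS_SHIFT) | (data))

enum { IOPRIO_CLASS_NONE, IOPRIO_CLASS_RT, IOPRIO_CLASS_BE, IOPRIO_CLASS_IDLE };
enum { IOPRIO_WHO_PROCESS = 1, IOPRIO_WHO_PGRP, IOPRIO_WHO_USER };

int main(void)
{
    /* Best-effort class, highest priority level (0), for this process. */
    if (syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0,
                IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, 0)) < 0) {
        perror("ioprio_set");
        return 1;
    }
    /* ... do the timed writes here ... */
    return 0;
}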


Erik
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Linux disk performance.

2006-12-18 Thread Nick Piggin

Manish Regmi wrote:

On 12/18/06, Arjan van de Ven <[EMAIL PROTECTED]> wrote:


if you want truly really smooth writes you'll have to work for it,
since "bumpy" writes tend to be better for performance so naturally the
kernel will favor those.

to get smooth writes you'll need to do a threaded setup where you do an
msync/fdatasync/sync_file_range on a frequent-but-regular interval from
a thread. Be aware that this is quite likely to give you lower maximum
performance than the batching behavior though.



Thanks...

But isn't O_DIRECT supposed to bypass buffering in Kernel?
Doesn't it directly write to disk?
I tried to put fdatasync() at regular intervals but there was no
visible effect.



I don't know exactly how to interpret the numbers you gave, but
they look like they might be a (HZ quantised) delay coming from
block layer plugging.

O_DIRECT bypasses caching, but not (all) buffering.

Not sure whether the block layer can handle an unplug_delay set
to 0, but that might be something to try (see block/ll_rw_blk.c).

--
SUSE Labs, Novell Inc.
Send instant messages to your online friends http://au.messenger.yahoo.com 


-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Linux disk performance.

2006-12-18 Thread Manish Regmi

On 12/18/06, Arjan van de Ven <[EMAIL PROTECTED]> wrote:

if you want truly really smooth writes you'll have to work for it,
since "bumpy" writes tend to be better for performance so naturally the
kernel will favor those.

to get smooth writes you'll need to do a threaded setup where you do an
msync/fdatasync/sync_file_range on a frequent-but-regular interval from
a thread. Be aware that this is quite likely to give you lower maximum
performance than the batching behavior though.



Thanks...

But isn't O_DIRECT supposed to bypass buffering in Kernel?
Doesn't it directly write to disk?
I tried to put fdatasync() at regular intervals but there was no
visible effect.

--
---
regards
Manish Regmi

---
UNIX without a C Compiler is like eating Spaghetti with your mouth
sewn shut. It just doesn't make sense.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Linux disk performance.

2006-12-18 Thread Arjan van de Ven

> 
> Can we achieve smooth write times in Linux?

if you want truly really smooth writes you'll have to work for it,
since "bumpy" writes tend to be better for performance so naturally the
kernel will favor those.

to get smooth writes you'll need to do a threaded setup where you do an
msync/fdatasync/sync_file_range on a frequent-but-regular interval from
a thread. Be aware that this is quite likely to give you lower maximum
performance than the batching behavior though.
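
A minimal sketch of such a setup, with a helper thread issuing fdatasync() on
the data file at a fixed interval (the 50 ms period and the shared-fd handling
are illustrative only):

/* Sketch of the "threaded setup" suggested above: a helper thread pushes
 * dirty pages out in small, regular batches instead of one big burst. */
#include <pthread.h>
#include <unistd.h>

static volatile int flusher_fd = -1;          /* fd of the file being written */
static volatile int flusher_stop;

static void *flusher(void *arg)
{
    (void)arg;
    while (!flusher_stop) {
        usleep(50 * 1000);                    /* frequent-but-regular interval */
        if (flusher_fd >= 0)
            fdatasync(flusher_fd);            /* push out whatever is dirty */
    }
    return NULL;
}

/* In the writer: start the thread once, then just keep write()ing. */
void start_flusher(int fd)
{
    pthread_t tid;
    flusher_fd = fd;
    pthread_create(&tid, NULL, flusher, NULL);
}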

-- 
if you want to mail me at work (you don't), use arjan (at) linux.intel.com
Test the interaction between Linux and your BIOS via 
http://www.linuxfirmwarekit.org

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Linux disk performance.

2006-12-17 Thread Manish Regmi

Hi all,
 I was working on an application that requires heavy disk
writes, and I noticed some inconsistencies in the write timing.
We are using raw device access to bypass filesystem overhead.

First I tried open("/dev/hda", O_RDWR), i.e. without the O_DIRECT option.
I saw that after some writes, one write took too much time.

The results are for writing 128 KB of data on a 400 MHz MIPS board:

sequence  channel  time (in microseconds)
0         1        1675
0         2        1625
0         3        1836
...
0         16       3398
0         63       1678
1         0        1702
1         1        1845
...
3         46       17875   // large value
...
4         13       17142   // large value
...
4         44       18711   // large value

Is this behaviour OK?
I believe this is due to a deep request queue.

But when I used O_DIRECT I got slightly higher write times, but it
also had such time bumps, at a smaller rate:

sequence  channel  time (in microseconds)
0         0        3184
0         1        3165
0         2        3126
...
0         52       10613   // large value
0         60       19004   // large value

Results were similar with O_DIRECT|O_SYNC.


Can we achieve smooth write times in Linux?

I am using 2.6.10; the results are more or less the same (I don't mean
numerically the same, but I am getting the timing differences) on both a
P4 3 GHz with 512 MB RAM and the MIPS board. The disk is working in UDMA 5.

--
---
regards
Manish Regmi

---
UNIX without a C Compiler is like eating Spaghetti with your mouth
sewn shut. It just doesn't make sense.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Linux Disk Performance/File IO per process

2001-01-29 Thread List User

It depends on what the performance hit is 'after coding'. If the code is,
say, less than 5% overhead, I honestly don't see a problem with just
compiling it into the kernel and keeping it active all the time. Only
people who would need it would compile it in, and from experience 5% or
less for the systems that would be keeping this data is negligible
considering the functionality/statistics gained.


Steve
- Original Message -
From: "James Sutherland" <[EMAIL PROTECTED]>
To: "List User" <[EMAIL PROTECTED]>
Cc: "Chris Evans" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>;
<[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Monday, January 29, 2001 20:18
Subject: Re: Linux Disk Performance/File IO per process


> On Mon, 29 Jan 2001, List User wrote:
>
> > Just wanted to 'chime' in here.  Yes this would be noisy and will have
> > an effect on system performance however these statistics are what are
> > used in conjunction with several others to size systems as well as to
> > plan on growth.  If Linux is to be put into an enterprise environment
> > these types of statistics will be needed.
> >
> > When you start hooking up 100's of 'physical volumes' (be it real
> > disks or raided logical drives) this data helps you pin-point
> > problems.  I think the idea of having the ability to turn such
> > accounting on/off via /proc entry a very nice method of doing things.
>
> Question: how will the extra overhead of checking this configuration
> compare with just doing it anyway?
>
> If the code ends up as:
>
> if (stats_enabled)
>   counter++;
>
> then you'd be better off keeping stats enabled all the time...
>
> Obviously it'll be a bit more complex, but will the stats code be able to
> remove itself completely when disabled, even at runtime??
>
> Might be possible with IBM's dprobes, perhaps...?
>
> > That way you can leave it off for normal run-time but when users
> > complain or DBA's et al you can turn it on get some stats for a couple
> > hours/days whatever, then turn it back off and plan an upgrade or
> > re-create a logical volume or striping set.
>
> NT allows boot-time (en|dis)abling of stats; they quote a percentage for
> the performance hit caused - 4%, or something like that?? Of course, they
> don't say whether that's a 486 on a RAID array or a quad Xeon on IDE, so
> the accuracy of that figure is a bit questionable...
>
>
> James.
>
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to [EMAIL PROTECTED]
> Please read the FAQ at http://www.tux.org/lkml/
>

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



Re: Linux Disk Performance/File IO per process

2001-01-29 Thread James Sutherland

On Mon, 29 Jan 2001, List User wrote:

> Just wanted to 'chime' in here.  Yes this would be noisy and will have
> an effect on system performance; however, these statistics are what are
> used in conjunction with several others to size systems as well as to
> plan on growth.  If Linux is to be put into an enterprise environment
> these types of statistics will be needed.
> 
> When you start hooking up 100's of 'physical volumes' (be it real
> disks or raided logical drives) this data helps you pin-point
> problems.  I think the idea of having the ability to turn such
> accounting on/off via /proc entry a very nice method of doing things.

Question: how will the extra overhead of checking this configuration
compare with just doing it anyway?

If the code ends up as:

if (stats_enabled)
  counter++;

then you'd be better off keeping stats enabled all the time...

Obviously it'll be a bit more complex, but will the stats code be able to
remove itself completely when disabled, even at runtime??

Might be possible with IBM's dprobes, perhaps...?
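
To make the comparison concrete, here is a rough sketch of the two variants
being discussed: a run-time flag that is tested on every I/O, versus a
compile-time switch under which the accounting code disappears entirely.
CONFIG_DISK_STATS is an invented name for the purpose of the example.

/* Run-time toggle: the test is executed even when statistics are off. */
extern int stats_enabled;

static inline void account_io_runtime(unsigned long *counter)
{
        if (stats_enabled)
                (*counter)++;
}

/* Compile-time toggle: with the option off, calls compile away to nothing. */
#ifdef CONFIG_DISK_STATS
static inline void account_io(unsigned long *counter)
{
        (*counter)++;
}
#else
static inline void account_io(unsigned long *counter)
{
        (void)counter;          /* no accounting code is emitted at all */
}
#endif

In the run-time version the branch costs about as much as the increment it
guards, which is the point above: you may as well leave the counting on.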

> That way you can leave it off for normal run-time but when users
> complain or DBA's et al you can turn it on get some stats for a couple
> hours/days whatever, then turn it back off and plan an upgrade or
> re-create a logical volume or striping set.

NT allows boot-time (en|dis)abling of stats; they quote a percentage for
the performance hit caused - 4%, or something like that?? Of course, they
don't say whether that's a 486 on a RAID array or a quad Xeon on IDE, so
the accuracy of that figure is a bit questionable...


James.

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



Re: Linux Disk Performance/File IO per process

2001-01-29 Thread List User

Just wanted to 'chime' in here.  Yes, this would be noisy and will have an
effect on system performance; however, these statistics are what are used,
in conjunction with several others, to size systems as well as to plan for
growth.  If Linux is to be put into an enterprise environment, these types
of statistics will be needed.

When you start hooking up 100's of 'physical volumes' (be it real disks or
raided logical drives), this data helps you pin-point problems.  I think the
idea of having the ability to turn such accounting on/off via a /proc entry
is a very nice method of doing things.

That way you can leave it off for normal run-time, but when users complain,
or DBA's et al, you can turn it on, get some stats for a couple of
hours/days/whatever, then turn it back off and plan an upgrade or re-create
a logical volume or striping set.


Steve
- Original Message -
From: "Chris Evans" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>;
<[EMAIL PROTECTED]>
Sent: Monday, January 29, 2001 07:26
Subject: RE: Linux Disk Performance/File IO per process


>
> On Mon, 29 Jan 2001 [EMAIL PROTECTED] wrote:
>
> > Thanks to both Jens and Chris - this provides the information I need to
> > obtain our busy rate
> > It's unfortunate that the kernel needs to be patched to provide this
> > information - hopefully it will become part of the kernel soon.
> >
> > I had a response saying that this shouldn't become part of the kernel due to
> > the performance cost that obtaining such data will involve. I agree that a
> > cost is involved here, however I think it's up to the user to decide which
> > cost is more expensive to them - getting the data, or not being able to see
> > how busy their disks are. My feeling here is that this support could be user
> > configurable at run time - eg 'cat 1 > /proc/getdiskperf'.
>
> Hi,
>
> I disagree with this runtime variable. It is unnecessary complexity.
> Maintaining a few counts is total noise compared with the time I/O takes.
>
> Cheers
> Chris
>
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to [EMAIL PROTECTED]
> Please read the FAQ at http://www.tux.org/lkml/
>

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



[SLUG] Re: Linux Disk Performance/File IO per process

2001-01-29 Thread Mike Galbraith

On Mon, 29 Jan 2001, Szabolcs Szakacsits wrote:

> On Mon, 29 Jan 2001, Chris Evans wrote:
> 
> > Stephen Tweedie has a rather funky i/o stats enhancement patch which
> > should provide what you need. It comes with RedHat7.0 and gives decent
> > disk statistics in /proc/partitions.
> 
> Monitoring via /proc [not just IO but close to anything] has the
> features:
>  - slow, not atomic, not scalable
>  - if kernel decides explicitly or due to a "bug" to refuse doing
>    IO, you get something like this [even using a mlocked, RT monitor],
>    procs          memory        swap        io      system      cpu
>  r  b  w   swpd  free  buff  cache   si   so    bi    bo    in    cs  us  sy  id
>  0  1  1  27116  1048   736 152832 128 1972 2544   869   44  1812   2  43  55
>  5  0  2  27768  1048   744 153372  52 1308 2668   777   43  1772   2  61  37
>  0  2  1  28360  1048   752 153900 332 564  2311   955   49  2081   1  68  31
> frozen
>  1  7  2  28356  1048   752 153708 3936  0  2175 29091  494 27348   0   1  99
>  1  0  2  28356  1048   792 153656 172   0  7166 0  144   838   4  17  80
> 
> In short, monitoring via /proc is unreliable.

Not really unreliable, but definitely with _serious_ latency issues :)
due to taking the mmap_sem.  Acquiring the mmap_sem semaphore can take
a really long time under load.. and sys_brk downs this semaphore first
thing, as does task_mem() and proc_pid_stat()...  If someone has the
mmap_sem you want, and is pushing disk I/O when that disk is saturated,
you are in for a long wait.  This I think is what you see with your
mlocked RT monitor (pretty similar to my mlocked RT monitor I suspect)

In fact, that darn monitor can have a decidedly negative impact on system
performance because it can take an arbitrary task's mana connection and
then fault while throttling it... I think ;-)
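
One way to see this from userspace is simply to time each read of a /proc
file while the box is under heavy memory and disk load, and watch for the
occasional very long read.  A minimal sketch of such a measurement (the
target pid, iteration count and sampling interval are arbitrary):

#include <fcntl.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "/proc/1/stat";
    char buf[4096];
    long worst = 0;
    int i;

    for (i = 0; i < 1000; i++) {
        struct timeval t1, t2;
        long usec;
        int fd = open(path, O_RDONLY);

        if (fd < 0)
            return 1;
        gettimeofday(&t1, NULL);
        if (read(fd, buf, sizeof(buf)) < 0)
            perror("read");
        gettimeofday(&t2, NULL);
        close(fd);

        usec = (t2.tv_sec - t1.tv_sec) * 1000000L + (t2.tv_usec - t1.tv_usec);
        if (usec > worst)
            worst = usec;
        usleep(10000);                  /* sample every 10 ms */
    }
    printf("worst /proc read latency: %ld us\n", worst);
    return 0;
}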

-Mike



-- 
SLUG - Sydney Linux User Group Mailing List - http://slug.org.au/
More Info: http://slug.org.au/lists/listinfo/slug



Re: Linux Disk Performance/File IO per process

2001-01-29 Thread Szabolcs Szakacsits


On Mon, 29 Jan 2001, Chris Evans wrote:

> Stephen Tweedie has a rather funky i/o stats enhancement patch which
> should provide what you need. It comes with RedHat7.0 and gives decent
> disk statistics in /proc/partitions.

Monitoring via /proc [not just IO but close to anything] has the
features:
 - slow, not atomic, not scalable
 - if kernel decides explicitly or due to a "bug" to refuse doing
   IO, you get something like this [even using a mlocked, RT monitor],
   procs          memory        swap        io      system      cpu
 r  b  w   swpd  free  buff  cache   si   so    bi    bo    in    cs  us  sy  id
 0  1  1  27116  1048   736 152832 128 1972 2544   869   44  1812   2  43  55
 5  0  2  27768  1048   744 153372  52 1308 2668   777   43  1772   2  61  37
 0  2  1  28360  1048   752 153900 332 564  2311   955   49  2081   1  68  31
frozen
 1  7  2  28356  1048   752 153708 3936  0  2175 29091  494 27348   0   1  99
 1  0  2  28356  1048   792 153656 172   0  7166 0  144   838   4  17  80

In short, monitoring via /proc is unreliable.

Szaka

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



[SLUG] RE: Linux Disk Performance/File IO per process

2001-01-29 Thread Chris Evans


On Mon, 29 Jan 2001 [EMAIL PROTECTED] wrote:

> Thanks to both Jens and Chris - this provides the information I need to
> obtain our busy rate
> It's unfortunate that the kernel needs to be patched to provide this
> information - hopefully it will become part of the kernel soon.
>
> I had a response saying that this shouldn't become part of the kernel due to
> the performance cost that obtaining such data will involve. I agree that a
> cost is involved here, however I think it's up to the user to decide which
> cost is more expensive to them - getting the data, or not being able to see
> how busy their disks are. My feeling here is that this support could be user
> configurable at run time - eg 'cat 1 > /proc/getdiskperf'.

Hi,

I disagree with this runtime variable. It is unnecessary complexity.
Maintaining a few counts is total noise compared with the time I/O takes.

Cheers
Chris



-- 
SLUG - Sydney Linux User Group Mailing List - http://slug.org.au/
More Info: http://slug.org.au/lists/listinfo/slug



[SLUG] RE: Linux Disk Performance/File IO per process

2001-01-28 Thread Tony . Young



> -Original Message-
> From: Chris Evans [mailto:[EMAIL PROTECTED]]
> Sent: Monday, 29 January 2001 13:04
> To: Tony Young
> Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED]
> Subject: Re: Linux Disk Performance/File IO per process
> 
> 
> 
> On Mon, 29 Jan 2001 [EMAIL PROTECTED] wrote:
> 
> > All,
> >
> > I work for a company that develops a systems and performance management
> > product for Unix (as well as PC and TANDEM) called PROGNOSIS. Currently we
> > support AIX, HP, Solaris, UnixWare, IRIX, and Linux.
> >
> > I've hit a bit of a wall trying to expand the data provided by our Linux
> > solution - I can't seem to find anywhere that provides the metrics needed to
> > calculate disk busy in the kernel! This is a major piece of information that
> > any mission critical system administrator needs to successfully monitor
> > their systems.
> 
> Stephen Tweedie has a rather funky i/o stats enhancement patch which
> should provide what you need. It comes with RedHat7.0 and gives decent
> disk statistics in /proc/partitions.
> 
> Unfortunately this patch is not yet in the 2.2 or 2.4 kernel. I'd like to
> see it make the kernel as a 2.4.x item. Failing that, it'll probably make
> the 2.5 kernel.
> 
> Cheers
> Chris
>

Thanks to both Jens and Chris - this provides the information I need to
obtain our busy rate
It's unfortunate that the kernel needs to be patched to provide this
information - hopefully it will become part of the kernel soon.

I had a response saying that this shouldn't become part of the kernel due to
the performance cost that obtaining such data will involve. I agree that a
cost is involved here, however I think it's up to the user to decide which
cost is more expensive to them - getting the data, or not being able to see
how busy their disks are. My feeling here is that this support could be user
configurable at run time - eg 'cat 1 > /proc/getdiskperf'.

Thanks for your quick responses.

Tony...


-- 
SLUG - Sydney Linux User Group Mailing List - http://slug.org.au/
More Info: http://slug.org.au/lists/listinfo/slug



Re: Linux Disk Performance/File IO per process

2001-01-28 Thread Chris Evans


On Mon, 29 Jan 2001 [EMAIL PROTECTED] wrote:

> All,
>
> I work for a company that develops a systems and performance management
> product for Unix (as well as PC and TANDEM) called PROGNOSIS. Currently we
> support AIX, HP, Solaris, UnixWare, IRIX, and Linux.
>
> I've hit a bit of a wall trying to expand the data provided by our Linux
> solution - I can't seem to find anywhere that provides the metrics needed to
> calculate disk busy in the kernel! This is a major piece of information that
> any mission critical system administrator needs to successfully monitor
> their systems.

Stephen Tweedie has a rather funky i/o stats enhancement patch which
should provide what you need. It comes with RedHat7.0 and gives decent
disk statistics in /proc/partitions.
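
By way of illustration, here is a small userspace sketch that samples
/proc/partitions twice and prints the per-device counter deltas over the
interval.  The number and meaning of the extra columns depend on the kernel
and on the patch (the stock format stops after the device name), so the
sketch just treats everything after the name as opaque counters:

#include <stdio.h>
#include <unistd.h>

#define MAX_DEV 64
#define MAX_CTR 16

struct sample {
    char name[32];
    int nctr;
    unsigned long long ctr[MAX_CTR];
};

/* Parse /proc/partitions: "major minor #blocks name <counters...>". */
static int read_partitions(struct sample *s, int max)
{
    char line[512];
    int n = 0;
    FILE *f = fopen("/proc/partitions", "r");

    if (!f)
        return 0;
    while (fgets(line, sizeof(line), f) && n < max) {
        unsigned major, minor;
        unsigned long long blocks;
        int pos = 0;
        char *p;

        if (sscanf(line, " %u %u %llu %31s %n",
                   &major, &minor, &blocks, s[n].name, &pos) < 4)
            continue;                   /* header or blank line */
        p = line + pos;
        s[n].nctr = 0;
        while (s[n].nctr < MAX_CTR &&
               sscanf(p, "%llu %n", &s[n].ctr[s[n].nctr], &pos) == 1) {
            s[n].nctr++;
            p += pos;
        }
        n++;
    }
    fclose(f);
    return n;
}

int main(void)
{
    struct sample a[MAX_DEV], b[MAX_DEV];
    int na, nb, i, j;

    na = read_partitions(a, MAX_DEV);
    sleep(5);                           /* measurement interval */
    nb = read_partitions(b, MAX_DEV);

    /* Assumes the device list does not change between the two samples. */
    for (i = 0; i < na && i < nb; i++) {
        printf("%-10s", a[i].name);
        for (j = 0; j < a[i].nctr && j < b[i].nctr; j++)
            printf(" %llu", b[i].ctr[j] - a[i].ctr[j]);
        printf("\n");
    }
    return 0;
}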

Unfortunately this patch is not yet in the 2.2 or 2.4 kernel. I'd like to
see it make the kernel as a 2.4.x item. Failing that, it'll probably make
the 2.5 kernel.

Cheers
Chris

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/



[SLUG] Re: Linux Disk Performance/File IO per process

2001-01-28 Thread Jens Axboe

On Mon, Jan 29 2001, [EMAIL PROTECTED] wrote:
> All,
> 
> I work for a company that develops a systems and performance management
> product for Unix (as well as PC and TANDEM) called PROGNOSIS. Currently we
> support AIX, HP, Solaris, UnixWare, IRIX, and Linux.
> 
> I've hit a bit of a wall trying to expand the data provided by our Linux
> solution - I can't seem to find anywhere that provides the metrics needed to
> calculate disk busy in the kernel! This is a major piece of information that
> any mission critical system administrator needs to successfully monitor
> their systems.

The stock kernel doesn't provide either, but at least with Stephen's
sard patches you can get system wide I/O metrics.

ftp.linux.org.uk/pub/linux/sct/fs/profiling

-- 
Jens Axboe



-- 
SLUG - Sydney Linux User Group Mailing List - http://slug.org.au/
More Info: http://slug.org.au/lists/listinfo/slug



[SLUG] Linux Disk Performance/File IO per process

2001-01-28 Thread Tony . Young

All,

I work for a company that develops a systems and performance management
product for Unix (as well as PC and TANDEM) called PROGNOSIS. Currently we
support AIX, HP, Solaris, UnixWare, IRIX, and Linux.

I've hit a bit of a wall trying to expand the data provided by our Linux
solution - I can't seem to find anywhere that provides the metrics needed to
calculate disk busy in the kernel! This is a major piece of information that
any mission critical system administrator needs to successfully monitor
their systems.

I've looked in /proc - it provides I/O rates, but no time-related
information (which is required to calculate busy%).
I've looked in the 2.4 kernel source
(drivers/block/ll_rw_blk.c, include/linux/kernel_stat.h - the dk_drive* arrays) -
but can only see those /proc I/O rates being calculated.

Is this data provided somewhere that I haven't looked? Or does the kernel
really not provide the data necessary to calculate a busy rate?
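
For what it's worth, the missing piece is a counter of time spent with I/O
outstanding on the device, which is exactly what the stock kernel does not
export.  Given such a counter in milliseconds (call it io_ticks, a made-up
name here), the busy percentage over a sampling interval is simply:

    busy_percent = 100.0 * (io_ticks_t2 - io_ticks_t1) / interval_ms;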

I'm also interested in finding out file I/O metrics on a per process basis.
The CSA project run by SGI (http://oss.sgi.com/projects/csa) seems to
provide summarised I/O metrics per process using a loadable kernel module.
That is, it provides I/O rates for a process, but not for each file open by
that process.

Are there any existing methods to obtain this data? If so, can someone point
me in the right direction?
If not, what is the possibility of 'people-in-the-know' working towards
making these sorts of metrics available from the kernel?
Could some of these metrics be added to the CSA project? (directed at the
CSA people of course.)

I'm more than willing to put in time to get these metrics into the kernel.
However, I'm new to kernel development, so it would take longer for me than
for someone who knows the code. But if none of the above questions can
really be answered I'd appreciate some direction as to where in the kernel
would be a good place to calculate/extract these metrics.

I believe that the lack of these metrics will make it difficult for Linux to
move into the mission critical server market. For this reason I'm keen to
see this information made available.

Thank you all for any help you may be able to provide.

I'm not actually subscribed to either the CSA or the linux-kernel mailing
lists, so I'd appreciate being CC'ed on any replies.
Thanks.

Tony...
--
Tony Young
Senior Software Engineer
Integrated Research Limited
Level 10, 168 Walker St
North Sydney NSW 2060, Australia
Ph: +61 2 9966 1066
Fax: +61 2 9966 1042
Mob: 0414 649942
[EMAIL PROTECTED]
www.ir.com


-- 
SLUG - Sydney Linux User Group Mailing List - http://slug.org.au/
More Info: http://slug.org.au/lists/listinfo/slug


