Phillip Susi wrote:
Bill Davidsen wrote:
Quite honestly, the main place I have found O_DIRECT useful is in
keeping programs doing large i/o quantities from blowing the buffers
and making the other applications run like crap. If your application is
running alone, unless you are very short of CPU
Bill Davidsen wrote:
Quite honestly, the main place I have found O_DIRECT useful is in
keeping programs doing large i/o quantities from blowing the buffers and
making the other applications run like crap. If your application is
running alone, unless you are very short of CPU or memory avoiding t
On 12/22/06, Bhanu Kalyan Chetlapalli <[EMAIL PROTECTED]> wrote:
>
> Thanks for the suggestion but the performance was terrible when write
> cache was disabled.
Performance degradation is expected. But the point is: did the
anomaly that you pointed out go away? Because if it did, then
i
On 12/22/06, Manish Regmi <[EMAIL PROTECTED]> wrote:
On 12/22/06, Bhanu Kalyan Chetlapalli <[EMAIL PROTECTED]> wrote:
>
> I am assuming that your program is not seeking in between writes.
>
> Try disabling the disk cache; nowadays some disks can have as much
> as 8 MB of write cache, so the disk might be buffering as much as it can,
> and trying to write only w
On 12/21/06, Erik Mouw <[EMAIL PROTECTED]> wrote:
Bursty video traffic is really an application that could take advantage
of kernel buffering. Unless you want to reinvent the wheel and do
the buffering yourself (it is possible though, I've done it on IRIX).
But in my test O_DIRECT gave a
On 12/22/06, Bhanu Kalyan Chetlapalli <[EMAIL PROTECTED]> wrote:
I am assuming that your program is not seeking in between writes.
Try disabling the disk cache; nowadays some disks can have as much
as 8 MB of write cache, so the disk might be buffering as much as it can,
and trying to write only w
On 12/20/06, Manish Regmi <[EMAIL PROTECTED]> wrote:
On 12/19/06, Nick Piggin <[EMAIL PROTECTED]> wrote:
> When you submit a request to an empty block device queue, it can
> get "plugged" for a number of timer ticks before any IO is actually
> started. This is done for efficiency reasons and is independent of
> the IO scheduler used.
On Thu, Dec 21, 2006 at 11:48:42AM +0545, Manish Regmi wrote:
> Yes... my application does a large amount of I/O. It actually writes
> video data received from Ethernet (IP camera) to the disk using 128 K
> chunks.
Bursty video traffic is really an application that could take advantage
of kernel buffering.
Manish Regmi wrote:
[...]
If you upgrade to a newer kernel you can try other i/o scheduler
options; the default cfq or even deadline might be helpful.
I tried all disk schedulers but all had timing bumps. :(
Did you try disabling the on-disk write cache?
man hdparm(8)
-W Disable/enable
On 12/21/06, Bill Davidsen <[EMAIL PROTECTED]> wrote:
>
> But isn't O_DIRECT supposed to bypass buffering in Kernel?
That's correct. But it doesn't put your write at the head of any queue,
it just doesn't buffer it for you.
> Doesn't it directly write to disk?
Also correct, when it's your turn t
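The point above is easy to demonstrate. Below is a minimal sketch of a direct write on Linux; the file name video.raw, the 4 KiB alignment, and the fallback helper are illustrative, not from the thread. O_DIRECT requires the user buffer, file offset, and transfer length all to be block-aligned, which is why the buffer comes from an anonymous mmap rather than a plain bytes object:

```python
import mmap
import os

ALIGN = 4096          # typical logical block size; a safe alignment
CHUNK = 128 * 1024    # 128 K chunks, as in the original poster's setup

def write_direct(path, data):
    """Try a direct (page-cache-bypassing) write; fall back to a
    buffered write if the filesystem rejects O_DIRECT."""
    try:
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o644)
        try:
            return os.write(fd, data), True
        finally:
            os.close(fd)
    except OSError:
        # filesystem refused direct I/O; use the buffered path
        fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
        try:
            return os.write(fd, data), False
        finally:
            os.close(fd)

# An anonymous mmap is page-aligned, satisfying O_DIRECT's buffer
# alignment requirement; CHUNK is already a multiple of ALIGN.
buf = mmap.mmap(-1, CHUNK)
buf.write(b"\xab" * CHUNK)

written, direct = write_direct("video.raw", buf)
print(f"wrote {written} bytes (direct={direct})")
```

Note that even when the direct path succeeds, the request still passes through the I/O scheduler's queue, exactly as stated above.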
Manish Regmi wrote:
On 12/18/06, Arjan van de Ven <[EMAIL PROTECTED]> wrote:
if you want truly smooth writes you'll have to work for it,
since "bumpy" writes tend to be better for performance, so naturally the
kernel will favor those.
to get smooth writes you'll need to do a threaded setup
On 12/19/06, Nick Piggin <[EMAIL PROTECTED]> wrote:
When you submit a request to an empty block device queue, it can
get "plugged" for a number of timer ticks before any IO is actually
started. This is done for efficiency reasons and is independent of
the IO scheduler used.
Thanks for the info
On Tue, 2006-12-19 at 17:38 +1100, Nick Piggin wrote:
> Manish Regmi wrote:
>
> > Nick Piggin:
> >
> >> but
> >> they look like they might be a (HZ quantised) delay coming from
> >> block layer plugging.
> >
> >
> > Sorry, I didn't understand what you mean.
>
> When you submit a request to an empty block device queue, it can
Manish Regmi wrote:
Nick Piggin:
but
they look like they might be a (HZ quantised) delay coming from
block layer plugging.
Sorry, I didn't understand what you mean.
When you submit a request to an empty block device queue, it can
get "plugged" for a number of timer ticks before any IO is actually
started.
On 12/18/06, Erik Mouw <[EMAIL PROTECTED]> wrote:
<...snip...>
>
> But isn't O_DIRECT supposed to bypass buffering in Kernel?
It is.
> Doesn't it directly write to disk?
Yes, but it still uses an IO scheduler.
OK, but I also tried noop to turn off disk scheduling effects.
There was still
On Mon, Dec 18, 2006 at 06:24:39PM +0545, Manish Regmi wrote:
> On 12/18/06, Arjan van de Ven <[EMAIL PROTECTED]> wrote:
> >if you want truly smooth writes you'll have to work for it,
> >since "bumpy" writes tend to be better for performance, so naturally the
> >kernel will favor those.
> >
Manish Regmi wrote:
On 12/18/06, Arjan van de Ven <[EMAIL PROTECTED]> wrote:
if you want truly smooth writes you'll have to work for it,
since "bumpy" writes tend to be better for performance, so naturally the
kernel will favor those.
to get smooth writes you'll need to do a threaded setup
On 12/18/06, Arjan van de Ven <[EMAIL PROTECTED]> wrote:
if you want truly smooth writes you'll have to work for it,
since "bumpy" writes tend to be better for performance, so naturally the
kernel will favor those.
to get smooth writes you'll need to do a threaded setup where you do an
msync/fdatasync
>
> Can we achieve smooth write times in Linux?
if you want truly smooth writes you'll have to work for it,
since "bumpy" writes tend to be better for performance, so naturally the
kernel will favor those.
to get smooth writes you'll need to do a threaded setup where you do an
msync/fdatasync
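A minimal sketch of the threaded setup described above, assuming the intent is to fdatasync() from a second thread on a frequent-but-regular interval so dirty data gets flushed in small, even doses rather than one large burst (the file name, 50 ms interval, and loop count are illustrative, not from the thread):

```python
import os
import threading
import time

CHUNK = b"\x00" * (128 * 1024)   # 128 K per write, as in the thread
INTERVAL = 0.05                  # 50 ms: frequent but regular (illustrative)

def flusher(fd, stop):
    """Flush dirty data on a fixed interval so the kernel never
    accumulates one large burst to write back at once."""
    while not stop.is_set():
        time.sleep(INTERVAL)
        os.fdatasync(fd)

fd = os.open("stream.dat", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
stop = threading.Event()
t = threading.Thread(target=flusher, args=(fd, stop))
t.start()

total = 0
for _ in range(16):              # stand-in for the capture loop
    total += os.write(fd, CHUNK)

stop.set()
t.join()
os.fdatasync(fd)                 # final flush before close
os.close(fd)
print(f"wrote {total} bytes")
```

Keeping the sync calls in a separate thread means the capture loop itself never stalls on a flush; the flusher absorbs that latency.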
Hi all,
I was working on an application that requires heavy disk
writes, and I noticed some inconsistencies in the write timing.
We are using raw reads to bypass filesystem overhead.
First I tried open("/dev/hda", O_RDWR), i.e. without the O_DIRECT option.
I saw that after some writes, one write took t
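One way to make such bumps visible is simply to timestamp every write. A sketch against an ordinary file (the file name and iteration count are illustrative; the original post wrote to /dev/hda directly, which needs root): most buffered writes return in microseconds, and the occasional outlier is the writeback or queue-plugging stall being discussed.

```python
import os
import time

CHUNK = b"\x00" * (128 * 1024)   # 128 K per write, matching the post

fd = os.open("timing.dat", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)

# Time each write; stalls show up as latencies far above the
# typical cost of a memcpy into the page cache.
samples_ms = []
for _ in range(32):
    t0 = time.monotonic()
    os.write(fd, CHUNK)
    samples_ms.append((time.monotonic() - t0) * 1000.0)
os.close(fd)

print(f"min {min(samples_ms):.3f} ms  max {max(samples_ms):.3f} ms")
```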
Sent: Monday, January 29, 2001 20:18
Subject: Re: Linux Disk Performance/File IO per process
> On Mon, 29 Jan 2001, List User wrote:
>
> > Just wanted to 'chime' in here.
On Mon, 29 Jan 2001, List User wrote:
> Just wanted to 'chime' in here. Yes this would be noisy and will have
> an affect on system performance however these statistics are what are
> used in conjunction with several others to size systems as well as to
> plan on growth. If Linux is to be put i
----- Original Message -----
From: "Chris Evans" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>;
<[EMAIL PROTECTED]>
Sent: Monday, January 29, 2001 07:26
Subject: RE: Linux Disk Performance/File IO per process
On Mon, 29 Jan 2001, Szabolcs Szakacsits wrote:
> On Mon, 29 Jan 2001, Chris Evans wrote:
>
> > Stephen Tweedie has a rather funky i/o stats enhancement patch which
> > should provide what you need. It comes with RedHat7.0 and gives decent
> > disk statistics in /proc/partitions.
>
> Monitoring
On Mon, 29 Jan 2001, Chris Evans wrote:
> Stephen Tweedie has a rather funky i/o stats enhancement patch which
> should provide what you need. It comes with RedHat7.0 and gives decent
> disk statistics in /proc/partitions.
Monitoring via /proc [not just IO but close to anything] has the
feature
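For reference, current kernels export these per-device counters in /proc/diskstats rather than a patched /proc/partitions. A minimal parser sketch (field positions follow the kernel's iostats documentation; the dictionary key names are my own):

```python
def parse_diskstats(text):
    """Parse /proc/diskstats content into {device: counters}.
    Layout: major, minor, device name, then the I/O counters."""
    stats = {}
    for line in text.splitlines():
        f = line.split()
        if len(f) < 14:        # skip malformed/short lines
            continue
        stats[f[2]] = {
            "reads_completed": int(f[3]),
            "sectors_read": int(f[5]),
            "writes_completed": int(f[7]),
            "sectors_written": int(f[9]),
            "ms_doing_io": int(f[12]),   # busy time -> utilization
        }
    return stats

try:
    with open("/proc/diskstats") as fh:
        for dev, s in parse_diskstats(fh.read()).items():
            print(dev, s["reads_completed"], s["writes_completed"])
except FileNotFoundError:
    print("/proc/diskstats not available here")
```

Sampling these counters twice and differencing gives the per-interval busy rate the original poster was after.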
On Mon, 29 Jan 2001 [EMAIL PROTECTED] wrote:
> Thanks to both Jens and Chris - this provides the information I need to
> obtain our busy rate.
> It's unfortunate that the kernel needs to be patched to provide this
> information - hopefully it will become part of the kernel soon.
>
> I had a respo
> -----Original Message-----
> From: Chris Evans [mailto:[EMAIL PROTECTED]]
> Sent: Monday, 29 January 2001 13:04
> To: Tony Young
> Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED]
> Subject: Re: Linux Disk Performance/File IO per process
>
>
>
On Mon, 29 Jan 2001 [EMAIL PROTECTED] wrote:
> All,
>
> I work for a company that develops a systems and performance management
> product for Unix (as well as PC and TANDEM) called PROGNOSIS. Currently we
> support AIX, HP, Solaris, UnixWare, IRIX, and Linux.
>
> I've hit a bit of a wall trying
On Mon, Jan 29 2001, [EMAIL PROTECTED] wrote:
> All,
>
> I work for a company that develops a systems and performance management
> product for Unix (as well as PC and TANDEM) called PROGNOSIS. Currently we
> support AIX, HP, Solaris, UnixWare, IRIX, and Linux.
>
> I've hit a bit of a wall trying
All,
I work for a company that develops a systems and performance management
product for Unix (as well as PC and TANDEM) called PROGNOSIS. Currently we
support AIX, HP, Solaris, UnixWare, IRIX, and Linux.
I've hit a bit of a wall trying to expand the data provided by our Linux
solution - I can't