Joerg Schilling wrote:
Bill Davidsen [EMAIL PROTECTED] wrote:
I did try it recently with creating a SchilliX ISO image (~ 2 files,
700 MB) and it did not help.
Did you do the test on a machine with limited memory? I may have asked
this before, but I don't see a reply, sorry if this is a repeated
question. I
Joerg Schilling wrote:
Bill Davidsen [EMAIL PROTECTED] wrote:
But now I see where you like to use O_DIRECT.
If you use O_DIRECT for writing, it makes sense and in the same case
it makes sense to use directio(fd, DIRECTIO_ON); on Solaris.
By not buffering the output of mkisofs, for example, buffer space is
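For concreteness, a minimal sketch (mine, not code from the thread) of a helper that requests direct I/O both ways: O_DIRECT in the open() flags where the platform defines it, and Solaris's directio(fd, DIRECTIO_ON) advisory call otherwise. The helper name is made up:

    /* open_direct() is a hypothetical name; error handling is minimal. */
    #define _GNU_SOURCE              /* exposes O_DIRECT on Linux/glibc */
    #include <fcntl.h>
    #ifdef __sun
    #include <sys/fcntl.h>           /* declares directio() on Solaris */
    #endif

    int open_direct(const char *path)
    {
    #ifdef O_DIRECT
        return open(path, O_WRONLY | O_CREAT | O_DIRECT, 0666);
    #else
        int fd = open(path, O_WRONLY | O_CREAT, 0666);
    #ifdef __sun
        if (fd >= 0)
            directio(fd, DIRECTIO_ON);   /* advisory: may be ignored */
    #endif
        return fd;
    #endif
    }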
Joerg Schilling wrote:
Rob Bogus [EMAIL PROTECTED] wrote:
I'm attaching a tiny program to show that isn't the case. It's for
zeroing out large files. If you have a disk intensive application you
might run this to zero out say 50GB or so, and compare the impact on the
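Rob's attachment does not survive in this excerpt; the following is my own sketch of the kind of load generator he describes, writing some gigabytes of zeros so you can watch what it does to the page cache and to other applications (the file name and default size are made up):

    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        long long mb = (argc > 1) ? atoll(argv[1]) : 50 * 1024; /* ~50 GB */
        static char buf[1024 * 1024];   /* 1 MB block, zeroed by 'static' */
        int fd = open("zero.dat", O_WRONLY | O_CREAT | O_TRUNC, 0666);

        if (fd < 0) { perror("open"); return 1; }
        while (mb-- > 0)
            if (write(fd, buf, sizeof buf) != (ssize_t)sizeof buf) {
                perror("write"); return 1;
            }
        close(fd);
        return 0;
    }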
Bill Davidsen [EMAIL PROTECTED] wrote:
the data should be sent to the drive NOW. This eliminated writing a full
CD to the buffer, with data from cache, then closing the file and having
the drive go dead busy for a minute.
This is a problem arising from the bad caching strategies on Linux.
Solaris
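One way to approximate "send it to the drive NOW" without O_DIRECT, sketched under my own assumptions (the 8 MB threshold is arbitrary), is to flush and discard dirty pages as you go rather than letting a CD-sized blob pile up in the cache:

    #include <fcntl.h>
    #include <unistd.h>

    /* Call after each write(); flushes roughly every 8 MB. */
    static void drain(int fd, off_t *pending, off_t just_written)
    {
        *pending += just_written;
        if (*pending >= 8 * 1024 * 1024) {
            fdatasync(fd);                    /* force data to the device */
    #ifdef POSIX_FADV_DONTNEED
            posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);  /* drop cache */
    #endif
            *pending = 0;
        }
    }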
Joerg Schilling wrote on 2006-01-30:
Matthias Andree [EMAIL PROTECTED] wrote:
2. scheduling writes is a complex matter in itself, and there is not a
single answer. The later you start writing, the more data you have
available, if the input rate is low for a while, to write a large blob of
data (reducing seeks), the more
Bill Davidsen [EMAIL PROTECTED] wrote:
This sounds a bit confused. Are you able to describe your concern?
You said star doesn't run as fast on Linux as Solaris, you can probably
fix that problem by using O_DIRECT, so the writes and reads don't
compete for the same buffer memory.
And I
Bill Davidsen [EMAIL PROTECTED] wrote:
My belief is that if O_DIRECT is in the kernel headers it works. I would
love to have time to test timing of O_DIRECT on (a) disk, (b) partition,
(c) file on disk, and alignment on minimal vs. page size boundaries.
Doing O_DIRECT writes of anything
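A timing harness for the comparison Bill describes could be as small as this (a skeleton under my own assumptions, not a benchmark anyone in the thread posted):

    #include <stdio.h>
    #include <time.h>

    static double seconds(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    /* double t0 = seconds();
     * ... write `total` bytes to the disk, partition or file ...
     * printf("%.1f MB/s\n", total / (seconds() - t0) / 1e6);
     */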
Matthias Andree [EMAIL PROTECTED] wrote:
And I already mentioned that it is highly improbable that O_DIRECT will
Andy Polyakov wrote:
as you still have to pull a lot of data from disk, you still put
quite a lot of pressure on the VM subsystem, so direct I/O can still help,
But how to talk afio, star or mkisofs into that?
I was thinking that a simple wrapper to open() which adds
O_DIRECT might be sufficient,
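The wrapper idea can be prototyped without touching the tools at all, e.g. as an LD_PRELOAD interposer. This sketch is my own guess at what was tried and, as the follow-ups point out, it fails in practice because the programs' buffers are not aligned for O_DIRECT:

    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <fcntl.h>
    #include <stdarg.h>
    #include <sys/types.h>

    int open(const char *path, int flags, ...)
    {
        static int (*real_open)(const char *, int, ...);
        mode_t mode = 0;

        if (!real_open)
            real_open = (int (*)(const char *, int, ...))
                        dlsym(RTLD_NEXT, "open");
        if (flags & O_CREAT) {          /* mode argument exists only then */
            va_list ap;
            va_start(ap, flags);
            mode = (mode_t)va_arg(ap, int);
            va_end(ap);
        }
        return real_open(path, flags | O_DIRECT, mode);
    }

    /* Build and use (illustrative):
     *   cc -shared -fPIC -o odirect.so odirect.c -ldl
     *   LD_PRELOAD=./odirect.so mkisofs ...
     */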
Bill Davidsen [EMAIL PROTECTED] wrote:
From reading your last mail, it seems that you are a novice.
Let me give you the advice that it helps a lot to shorten quotings
in order to get accepted by knowledgeable people.
I did not test O_DIRECT because I believe it is not worth testing
Bill Davidsen [EMAIL PROTECTED] wrote:
O_DIRECT is not a standard and it seems to be just a reimplementation of the
very old DG/UX idea of O_DG_UNBUFFERED.
The idea of doing unbuffered i/o is hardly that young, I believe you
can trace it back to MULTICS and GECOS in the late
Bill Davidsen [EMAIL PROTECTED] wrote:
I was thinking that a simple wrapper to open() which adds
O_DIRECT might be sufficient, but it turned out that this
alone is not sufficient: the buffers used by the programs
must have a certain alignment. This is not guaranteed
without modifying the way
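What "a certain alignment" means in practice (my summary, not text from the thread): with O_DIRECT the buffer address, and usually the transfer length and file offset, must be multiples of the sector or page size. Page-size alignment is the safe choice:

    #include <stdlib.h>
    #include <unistd.h>

    /* Returns a buffer usable with O_DIRECT, or NULL on failure. */
    void *alloc_aligned(size_t size)
    {
        void *buf = NULL;
        size_t align = (size_t)sysconf(_SC_PAGESIZE);  /* e.g. 4096 */
        if (posix_memalign(&buf, align, size) != 0)
            return NULL;
        return buf;
    }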
Joerg Schilling [EMAIL PROTECTED] writes:
For something else:
In
http://lists.debian.org/cdwrite/2006/01/msg00057.html
I ask whether the current behavior of cdrecord with padsize= and
multiple tracks will be upheld or whether it will be changed to
comply with the man page.
A
Hi,
A FIFO allows you to survive a period with low input data rates.
If everything goes faster, you need to increase the size of the FIFO
in proportion to those improvements.
But if *everything* goes faster, why not that period
with low data rates too ?
It's the absolute speed that
Hi,
How come that the time granularity of the backup processing chain
does not get finer as the systems get faster ?
What do you understand by time granularity?
I see a fifo as a method to smooth out peaks and gaps in an
input function and to bring the output nearer to the input
average.
Hi,
How come that the time granularity of the backup processing chain
does not get finer as the systems get faster ?
What effect changed the shape of our input functions?
I think the main problem is that hard disk seek times have not
improved anywhere nearly as fast as the
[EMAIL PROTECTED] wrote:
(The jump between 64 and 128 MB might be a bit coarse.)
My standpoint is that if you get into a situation where you consider more
than 64MB, it's likely that the bottleneck is abnormal
Or the user is an aberrated personality like me who
streams compressed archives on the fly. :))
Hi,
Joerg Schilling wrote:
Guess why I recommend using more than 128MB for the star FIFO
in order to keep the tape streaming.
With current I/O speed, you need current RAM sizes for buffering.
Googling for contemporary speeds ... HP ... 36 MB/s DLT ... 80 MB/s LTO
... well, I'd need a new
Would it be possible to get a ring buffer size option ?
-use-the-force-luke=bufsize:8m or whatever larger than 1m you consider
appropriate.
Embarrassingly, I looked for new Lukes but did not scroll
far down enough.
Do I get it right that the minimum buffer size is 1 MB
Yes.
and that
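For reference, the knob as it would appear on a command line (the -Z form is standard growisofs usage; the 8m value is just the example quoted above):

    growisofs -use-the-force-luke=bufsize:8m -Z /dev/dvd=image.iso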
Hi,
Keep in mind that the ring buffer harmonizes *variations* in input
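To make the "harmonizes variations" point concrete, here is a minimal sketch of such a ring buffer (my own illustration, not growisofs source): the producer blocks only when the buffer is full, the consumer only when it is empty, so short stalls on either side are absorbed as long as the average rates match.

    #include <pthread.h>
    #include <stddef.h>

    #define RING_SIZE (32 * 1024 * 1024)    /* the contested 32 MB */

    static char ring[RING_SIZE];
    static size_t head, tail, fill;         /* all guarded by 'lock' */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

    void ring_put(const char *data, size_t n)   /* producer (reader) side */
    {
        pthread_mutex_lock(&lock);
        while (RING_SIZE - fill < n)            /* buffer full: wait */
            pthread_cond_wait(&not_full, &lock);
        for (size_t i = 0; i < n; i++)          /* byte loop keeps it short */
            ring[(head + i) % RING_SIZE] = data[i];
        head = (head + n) % RING_SIZE;
        fill += n;
        pthread_cond_signal(&not_empty);
        pthread_mutex_unlock(&lock);
    }

    void ring_get(char *data, size_t n)         /* consumer (writer) side */
    {
        pthread_mutex_lock(&lock);
        while (fill < n)                        /* buffer empty: wait */
            pthread_cond_wait(&not_empty, &lock);
        for (size_t i = 0; i < n; i++)
            data[i] = ring[(tail + i) % RING_SIZE];
        tail = (tail + n) % RING_SIZE;
        fill -= n;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&lock);
    }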
AP> dvd+rw-tools 6.0 are available for download at usual location,
This version gives me the following error:
:-( unable to anonymously mmap 33554432: Resource temporarily unavailable
Ouch! The problem is caused by the ridiculously low default locked-memory
(RLIMIT_MEMLOCK) resource limit in newer Linux kernels.
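33554432 bytes is exactly the new 32 MB ring buffer, which growisofs apparently tries to lock in RAM. A quick check of the limit in question (`ulimit -l` from a shell reports the same thing in kilobytes):

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;
        if (getrlimit(RLIMIT_MEMLOCK, &rl) == 0)
            printf("locked-memory limit: %llu bytes\n",
                   (unsigned long long)rl.rlim_cur);
        return 0;
    }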
dvd+rw-tools 6.0 are available for download at usual location,
http://fy.chalmers.se/~appro/linux/DVD+RW/. In addition to bug fixes
[most notably for Pioneer units] this version features:
- DVD-R Dual Layer support(*);
- internal ring buffer to harmonize the rates at which data is produced
and written(**);
Hi,
I nearly gave up hope of seeing more growisofs releases.
The release of version 6.0 is good news.
top reports growisofs gained 32 MB of weight. (The difference
between RBU 92.2% and RBU 37.0% is 18513920 bytes, i.e. 55.2% of the
33554432-byte buffer. That matches.)
Isn't 32 MB a bit greedy as a default buffer size?
Well, it's definitely wasteful