Simon Riggs [EMAIL PROTECTED] writes:
On Mon, 2008-09-22 at 16:46 +0100, Gregory Stark wrote:
Simon Riggs [EMAIL PROTECTED] writes:
I'd prefer to set this as a tablespace level storage parameter.
Sounds like a good idea, except... what's a tablespace-level storage
parameter?
A
Ron Mayer wrote:
Even more often on the systems I see these days, spindles are an
implementation detail for which the DBA has no way to know
the correct value.
For example, on our sites hosted with Amazon's compute cloud (a great
place to host web sites), I know nothing about spindles, but
Gregory Stark wrote:
Ron Mayer [EMAIL PROTECTED] writes:
For example, on our sites hosted with Amazon's compute cloud (a great
place to host web sites), I know nothing about spindles, but know
about Amazon Elastic Block Store[2]'s and Instance Store's[1]. I
have some specs and am
Bruce Momjian wrote:
Ron Mayer wrote:
Even more often on the systems I see these days, spindles are an
implementation detail for which the DBA has no way to know
the correct value.
For example, on our sites hosted with Amazon's compute cloud (a great
place to host web sites), I know nothing
On Wed, 2008-09-24 at 17:42 +0300, Heikki Linnakangas wrote:
Yeah. Nevertheless I like the way effective_spindle_count works, as
opposed to an unintuitive number of blocks to prefetch (assuming the
formula we use to turn the former into the latter works). Perhaps we should
keep the meaning
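For illustration, here is a sketch of one such spindle-count-to-prefetch-depth formula. The harmonic-sum heuristic below is an assumption on my part (it resembles the mapping PostgreSQL later shipped for effective_io_concurrency), not a quote from the patch under discussion:

```python
# Hedged sketch: turn an effective_spindle_count-style setting into a
# number of blocks to prefetch. The heuristic: to keep n drives busy on
# average, keep roughly n * H(n) requests outstanding, where H(n) is the
# nth harmonic number (1 + 1/2 + ... + 1/n).

def prefetch_depth(spindles: int) -> int:
    """Blocks to prefetch so that ~`spindles` requests stay in flight."""
    return round(sum(spindles / i for i in range(1, spindles + 1)))

# One spindle degenerates to prefetching a single block ahead; the
# depth then grows slightly faster than linearly in the spindle count.
for n in (1, 2, 4, 8):
    print(n, prefetch_depth(n))
```

This is the kind of formula the quoted message alludes to: the DBA states an intuitive quantity (drives) and the system derives the unintuitive one (blocks).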
[resending due to the attachment being too large for the -hackers list --
weren't we going to raise it when we killed -patches?]
Greg Smith [EMAIL PROTECTED] writes:
Using the maximum prefetch working set tested, 8192, here's the speedup
multiplier on this benchmark for both sorted and
Greg Smith wrote:
On Mon, 22 Sep 2008, Gregory Stark wrote:
I'm quite surprised Solaris doesn't support posix_fadvise -- perhaps
it's in some other version of Solaris?
Solaris has only a fake variant of posix_fadvise. See
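For reference, the call being discussed looks like this. This is a minimal sketch using Python's wrapper around the same syscall (my choice for illustration, not anything from the patch); on a platform with a stubbed posix_fadvise, as described above for Solaris, the call reports success without starting any read-ahead:

```python
import os
import tempfile

# Minimal sketch of WILLNEED-style prefetching via os.posix_fadvise,
# Python's thin wrapper over the posix_fadvise(2) syscall. On Linux this
# kicks off asynchronous read-ahead for the byte range; on systems where
# posix_fadvise is a no-op stub it succeeds without prefetching anything.

def prefetch_range(fd: int, offset: int, length: int) -> None:
    """Hint to the kernel that we will soon read fd[offset:offset+length]."""
    os.posix_fadvise(fd, offset, length, os.POSIX_FADV_WILLNEED)

with tempfile.NamedTemporaryFile() as f:
    f.write(b"x" * 8192)
    f.flush()
    prefetch_range(f.fileno(), 0, 8192)  # advise, then read later
```

Note the call is only a hint either way: a successful return never guarantees the data will actually be resident, which is what makes the fake Solaris variant conform while doing nothing.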
Greg Smith [EMAIL PROTECTED] writes:
On Mon, 22 Sep 2008, Gregory Stark wrote:
Hm, I'm disappointed with the 48-drive array here. I wonder why it maxed out
at only 10x the bandwidth of one drive. I would expect more like 24x or more.
The ZFS RAID-Z implementation doesn't really scale that
Gregory Stark [EMAIL PROTECTED] writes:
Perhaps access paths which expect to be able to prefetch most of their
accesses should use random_page_cost / effective_spindle_count for their i/o
costs?
But then if people don't set random_page_cost high enough they could easily
find themselves with
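Concretely, the costing tweak suggested above could look like the sketch below. The parameter names mirror this thread's proposal (effective_spindle_count was never a shipped GUC), so this is an illustration of the idea, not a real PostgreSQL API:

```python
# Sketch of the suggestion quoted above: access paths that expect to
# prefetch most of their reads could divide the random I/O cost by the
# spindle count, since independent drives can overlap those reads.

def prefetched_page_cost(random_page_cost: float,
                         effective_spindle_count: int) -> float:
    """Per-page I/O cost for a path that overlaps its random reads."""
    return random_page_cost / max(effective_spindle_count, 1)

# The caveat raised in the reply: with enough spindles, the prefetched
# random cost drops below a sequential page cost of 1.0, so an
# underestimated random_page_cost makes such paths look nearly free.
print(prefetched_page_cost(4.0, 1))   # 4.0 (no overlap, default cost)
print(prefetched_page_cost(4.0, 8))   # 0.5 (cheaper than sequential)
```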
Greg Smith [EMAIL PROTECTED] writes:
I have an updated patch I'll be sending along shortly. You might want to test
with that?
Obviously I've got everything set up to test right now, am currently analyzing
your earlier patch and the sequential scan fork that derived from it. If
you've got a
On Tue, 23 Sep 2008, Gregory Stark wrote:
I have *not* been able to observe any significant effect from
POSIX_FADV_SEQUENTIAL, but I'm not sure under what circumstances it was a
problem. It sounds like a peculiar situation that is not easy to reliably reproduce.
Zoltan, Hans-Juergen: would it
The complicated patch I've been working with for a while now is labeled
sequential scan posix fadvise in the CommitFest queue. There are a lot
of parts to that, going back to last December, and I've added the
most relevant links to the September CommitFest page.
The first message there
On Mon, 2008-09-22 at 04:57 -0400, Greg Smith wrote:
-As Greg Stark suggested, the larger the spindle count the larger the
speedup, and the larger the prefetch size that might make sense. His
suggestion to model the user GUC as effective_spindle_count looks like a
good one. The
On Sep 22, 2008, at 12:02 PM, Simon Riggs wrote:
On Mon, 2008-09-22 at 04:57 -0400, Greg Smith wrote:
-As Greg Stark suggested, the larger the spindle count the larger the
speedup, and the larger the prefetch size that might make sense. His
suggestion to model the user GUC as
Simon Riggs [EMAIL PROTECTED] writes:
On Mon, 2008-09-22 at 04:57 -0400, Greg Smith wrote:
-As Greg Stark suggested, the larger the spindle count the larger the
speedup, and the larger the prefetch size that might make sense. His
suggestion to model the user GUC as
On Mon, 2008-09-22 at 16:46 +0100, Gregory Stark wrote:
Simon Riggs [EMAIL PROTECTED] writes:
On Mon, 2008-09-22 at 04:57 -0400, Greg Smith wrote:
-As Greg Stark suggested, the larger the spindle count the larger the
speedup, and the larger the prefetch size that might make sense.
On Mon, 22 Sep 2008, Simon Riggs wrote:
I'd prefer to set this as a tablespace level storage parameter.
That seems reasonable, but I'm not working at that level yet. There are
still larger open questions about how the buffer manager interaction
will work here, and I'd like to have a better
On Mon, 2008-09-22 at 13:06 -0400, Greg Smith wrote:
prefetch_... is a much better name since it's an existing industry term.
I'm not in favour of introducing the concept of spindles, since I can
almost hear the questions about ramdisks and memory-based storage.
It's possible to make a
Gregory Stark wrote:
Simon Riggs [EMAIL PROTECTED] writes:
I'm not in favour of introducing the concept of spindles
In principle I quite strongly disagree with this
Number of blocks to prefetch is an internal implementation detail that the DBA
has absolutely no way to know what the
On Mon, 22 Sep 2008, Gregory Stark wrote:
Hm, I'm disappointed with the 48-drive array here. I wonder why it maxed out
at only 10x the bandwidth of one drive. I would expect more like 24x or more.
The ZFS RAID-Z implementation doesn't really scale that linearly. It's
rather hard to get the
Ron Mayer [EMAIL PROTECTED] writes:
For example, on our sites hosted with Amazon's compute cloud (a great
place to host web sites), I know nothing about spindles, but know
about Amazon Elastic Block Store[2]'s and Instance Store's[1]. I
have some specs and am able to run benchmarks on
Gregory Stark wrote:
Ron Mayer [EMAIL PROTECTED] writes:
I'd rather a parameter that expressed things more in terms of
measurable quantities [...]
...What we're
dealing with now is an entirely orthogonal property of your system: how many
concurrent requests the system can handle.
Really?