Gerhard Wiesinger writes:
> I've one idea, which is not ideal, but may work and shouldn't be much
> effort to implement:
> As in the example above we read B1-B5 and B7-B10 on a higher level outside
> of normal buffer management with large request sizes (e.g. where hash
> index scans and sequent
On Fri, 9 Oct 2009, Greg Smith wrote:
On Sat, 3 Oct 2009, Gerhard Wiesinger wrote:
I wouldn't read 128k blocks all the time. I would do the following:
When e.g. B0, B127, B256 should be read, I would read them as 8k random block
I/O.
When B1, B2, B3, B4, B5, B7, B8, B9, B10 are needed I would mak
On Sat, 3 Oct 2009, Gerhard Wiesinger wrote:
I wouldn't read 128k blocks all the time. I would do the following:
When e.g. B0, B127, B256 should be read, I would read them as 8k random block I/O.
When B1, B2, B3, B4, B5, B7, B8, B9, B10 are needed, I would make 2 requests
with the largest possible bl
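The strategy described here (issue single 8k reads for scattered blocks, but coalesce runs of adjacent blocks like B1-B5 and B7-B10 into one large request each) can be sketched roughly as follows. This is a minimal illustration, not PostgreSQL code; the function name and the idea of returning (offset, length) pairs are my own assumptions:

```python
BLOCK_SIZE = 8192  # PostgreSQL's default block size

def coalesce_requests(block_numbers, block_size=BLOCK_SIZE):
    """Group block numbers into (byte offset, length) read requests,
    merging each run of adjacent blocks into one large request."""
    blocks = sorted(set(block_numbers))
    if not blocks:
        return []
    requests = []
    run_start = prev = blocks[0]
    for b in blocks[1:]:
        if b == prev + 1:          # still contiguous: extend the current run
            prev = b
            continue
        requests.append((run_start * block_size,
                         (prev - run_start + 1) * block_size))
        run_start = prev = b
    requests.append((run_start * block_size,
                     (prev - run_start + 1) * block_size))
    return requests

# B1-B5 and B7-B10 become two large requests (40k and 32k) ...
print(coalesce_requests([1, 2, 3, 4, 5, 7, 8, 9, 10]))
# ... while scattered blocks like B0, B127, B256 stay as three 8k reads.
print(coalesce_requests([0, 127, 256]))
```

With this grouping, the contiguous example collapses from nine 8k reads into two requests, while the random example stays as three 8k reads, matching the behavior proposed above.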
On Fri, 2 Oct 2009, Simon Riggs wrote:
On Sun, 2009-09-27 at 18:05 +0200, Gerhard Wiesinger wrote:
So I saw that even on sequential reads (and also on bitmap heap scan access)
PostgreSQL uses only 8k blocks. I think that's a major I/O bottleneck.
A commercial software database vendor solved
On Fri, 2 Oct 2009, Greg Smith wrote:
On Fri, 2 Oct 2009, Gerhard Wiesinger wrote:
Larger blocksizes also reduce IOPS (I/Os per second), which might be a
critical threshold on storage systems (e.g. Fibre Channel systems).
True to some extent, but don't forget that IOPS is always relative to a
On Fri, 2 Oct 2009, Gerhard Wiesinger wrote:
Larger blocksizes also reduce IOPS (I/Os per second), which might be a critical
threshold on storage systems (e.g. Fibre Channel systems).
True to some extent, but don't forget that IOPS is always relative to a
block size in the first place. If you'
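The point that IOPS is always relative to a block size can be illustrated with a quick back-of-the-envelope calculation (the throughput figure below is a made-up example, not from the thread): at a fixed throughput, making requests 16x larger cuts the operations per second needed by the same factor.

```python
def iops_needed(throughput_bytes_per_s, request_size_bytes):
    """I/O operations per second required to sustain a given
    throughput at a given request size (illustrative only)."""
    return throughput_bytes_per_s / request_size_bytes

MB = 1024 * 1024
# Sustaining a hypothetical 100 MB/s with 8k vs. 128k requests:
print(iops_needed(100 * MB, 8 * 1024))    # 12800.0 IOPS
print(iops_needed(100 * MB, 128 * 1024))  # 800.0 IOPS
```

So an "IOPS limit" on a storage system only constrains throughput once you fix the request size, which is why the two numbers have to be discussed together.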
On Sun, 2009-09-27 at 18:05 +0200, Gerhard Wiesinger wrote:
> So I saw that even on sequential reads (and also on bitmap heap scan access)
> PostgreSQL uses only 8k blocks. I think that's a major I/O bottleneck.
>
> A commercial software database vendor solved the problem by reading multiple
>
On Fri, 2 Oct 2009, Greg Smith wrote:
On Sun, 27 Sep 2009, Gerhard Wiesinger wrote:
I think this is one of the most critical performance showstoppers of
PostgreSQL on the I/O side.
I wish; this is an easy problem compared to the really important ones that need
to be resolved. Situations where
On Sun, 27 Sep 2009, Sam Mason wrote:
The first run of:
select count(*) from benchmark;
Will cause the "hint" bits to get set and will cause a lot of writing to
happen. Subsequent runs will be testing read performance.
You just need to be careful of caching effects here. Either stop the
da
On Sun, 27 Sep 2009, Gerhard Wiesinger wrote:
I think this is one of the most critical performance showstoppers of
PostgreSQL on the I/O side.
I wish; this is an easy problem compared to the really important ones that
need to be resolved. Situations where the OS is capable of faster
sequential
On Mon, Sep 28, 2009 at 5:53 AM, Sam Mason wrote:
> On Sun, Sep 27, 2009 at 07:22:47PM -0600, Scott Marlowe wrote:
>> >> dd if=/dev/zero of=test.txt bs=8192 count=1310720 conv=fdatasync
>> >> 10737418240 bytes (11 GB) copied, 169.482 s, 63.4 MB/s
>> >>
>> >> dd if=test.txt of=/dev/null bs=8192
>>
On Sun, Sep 27, 2009 at 07:22:47PM -0600, Scott Marlowe wrote:
> >> dd if=/dev/zero of=test.txt bs=8192 count=1310720 conv=fdatasync
> >> 10737418240 bytes (11 GB) copied, 169.482 s, 63.4 MB/s
> >>
> >> dd if=test.txt of=/dev/null bs=8192
> >> 10737418240 bytes (11 GB) copied, 86.4457 s, 124 MB/s
>
>
>> dd if=/dev/zero of=test.txt bs=8192 count=1310720 conv=fdatasync
>> 10737418240 bytes (11 GB) copied, 169.482 s, 63.4 MB/s
>>
>> dd if=test.txt of=/dev/null bs=8192
>> 10737418240 bytes (11 GB) copied, 86.4457 s, 124 MB/s
>
> These look slow. RAID5 isn't going to be amazing, but it should be
On Sun, Sep 27, 2009 at 10:01:27PM +0100, Sam Mason wrote:
> Tested by turning on "\timing" mode in psql, dropping
> caches and running:
>
> SELECT 715833344 / 7597.216 / 1024;
Help, I can't do maths! This is overestimating the performance and
should be:
SELECT 715833344 / 7597.216 / 1024 /
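The exact correction is truncated in the archive, but the arithmetic being fixed can be checked directly. Assuming the figures are 715833344 bytes read in 7597.216 ms (psql `\timing` reports milliseconds), dividing bytes by milliseconds and then by 1024 alone mixes decimal and binary units and overstates the rate:

```python
bytes_read = 715833344
elapsed_ms = 7597.216

# Original formula: bytes per millisecond, scaled by 1024 only.
naive = bytes_read / elapsed_ms / 1024

# Consistent conversion: bytes per second, then to MiB/s.
mib_per_s = bytes_read / (elapsed_ms / 1000) / (1024 * 1024)

print(round(naive, 1))      # ~92.0 -- the overestimate
print(round(mib_per_s, 1))  # ~89.9 MiB/s
```

The gap between the two results is the 1000-vs-1024 factor, which is presumably what the truncated correction above is adding.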
On Sun, Sep 27, 2009 at 09:04:31PM +0200, Gerhard Wiesinger wrote:
> I'm talking about 2 cases
> 1.) Sequential scans
> 2.) Bitmap index scans
> which both hopefully end up physically in consecutive blocks,
> where larger block sizes can benefit.
Unfortunately it's all a bit more co
On Sun, 27 Sep 2009, Sam Mason wrote:
On Sun, Sep 27, 2009 at 06:05:51PM +0200, Gerhard Wiesinger wrote:
A Google search showed that Gregory Stark has already worked on this issue
(see references below), but as far as I saw only on bitmap heap scans.
Greg Stark's patches are about giving the
On Sun, Sep 27, 2009 at 06:05:51PM +0200, Gerhard Wiesinger wrote:
> A Google search showed that Gregory Stark has already worked on this issue
> (see references below), but as far as I saw only on bitmap heap scans.
Greg Stark's patches are about giving the IO subsystem enough
information about
Hello,
As blocksizes, random I/O and linear I/O are critical I/O performance
parameters, I had a look at PostgreSQL and a commercial software vendor.
Therefore I enhanced the SystemTap script:
http://www.wiesinger.com/opensource/systemtap/disktop_gw.stp
Output per 5 seconds on a sequence sc
20 matches