On 22 Sep 2004 at 15:59, Tom Lane wrote:
> Scott Kirkwood <[EMAIL PROTECTED]> writes:
> > What do you think?
>
> I think this would allow the problems of cached plans to bite
> applications that were previously not subject to them :-(.
> An app that wants plan re-use can use PREPARE to identify the
> queries that are going to be re-executed.
On Mon, 2004-09-20 at 17:57, Guy Thornley wrote:
> According to the manpage, O_DIRECT implies O_SYNC:
>
> File I/O is done directly to/from user space buffers. The I/O is
> synchronous, i.e., at the completion of the read(2) or write(2)
> system call, data is guaranteed to
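A minimal Python sketch of the flag semantics described above: the write returns only once the data is on stable storage because of O_SYNC. O_DIRECT is Linux-specific and imposes buffer-alignment requirements that plain Python writes do not satisfy, so the sketch only looks the flag up without using it.

```python
import os
import tempfile

# O_SYNC makes each write() return only after the data has reached stable
# storage. O_DIRECT (Linux-only) additionally bypasses the page cache, but
# needs block-aligned buffers, so here we only check whether it exists.
direct_flag = getattr(os, "O_DIRECT", 0)  # 0 on platforms without it

path = os.path.join(tempfile.mkdtemp(), "sync_demo.dat")
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
try:
    written = os.write(fd, b"durable bytes")  # returns after data is on disk
finally:
    os.close(fd)

with open(path, "rb") as f:
    data = f.read()
```

This is only a demonstration of the flag combination; benchmarking the actual cache-bypass behaviour would require aligned I/O against a real device.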
Andrew Hammond <[EMAIL PROTECTED]> writes:
> My goal is to tune the disk / filesystem on our prototype system. It's
> an EMC disk array, so sectors on disk are 512 bytes of usable space.
> We've decided to go with RAID 10 since the goal is to maximize
> performance. Currently the raid element size
On Thu, 2004-09-23 at 05:59, Tom Lane wrote:
> I think this would allow the problems of cached plans to bite
> applications that were previously not subject to them :-(.
> An app that wants plan re-use can use PREPARE to identify the
> queries that are going to be re-executed.
I agree; if you want
Steven Rosenstein <[EMAIL PROTECTED]> writes:
> Environment: Red Hat Enterprise Linux 3 Workstation, PostgreSQL V7.3.6
> vsa=# explain
> SELECT dev.name, dev.vss_site_id, tbl.log_type, tbl.severity, tbl.count
> FROM vsa.tbl_device AS dev
> LEFT OUTER JOIN
> (SELECT stbl.device_id, stbl.log_type,
Hello,
I’ll be moving a DB from internal RAID-10 SCSI storage
to an EMC CX300 FC RAID-10 LUN, bound to the host. I’ve setup a test host
machine and a test LUN. The /var/lib/pgsql/data folder is sym-linked to a partition
on the LUN.
Other than the shared_buffers, effective cache size
> Rod Taylor wrote:
> | I've used both a NetApp and Hitachi based SANs with PostgreSQL. Both
> | work as well as expected, but do require some tweeking as they normally
> | are not optimized for the datablock size that PostgreSQL likes to deal
> | with (8k by default) -- this can make as much as a
I just realized in my haste to send this email out I provided the wrong
table in my example. Below is the same email, but with
vsa.dtbl_logged_event_20040922 substituted for vsa.tbl_sad_event.
Sorry for the inconvenience.
--- Steve
Gentlefolk,
I'm not sure if this is the proper forum for
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Rod Taylor wrote:
| I've used both a NetApp and Hitachi based SANs with PostgreSQL. Both
| work as well as expected, but do require some tweeking as they normally
| are not optimized for the datablock size that PostgreSQL likes to deal
| with (8k by default) -- this can make as much as a
Gentlefolk,
I'm not sure if this is the proper forum for this question, and it might
have been answered in a previous thread, but I'm new to PostgreSQL and the
research I did in the archives did not turn up anything addressing this
issue. Please direct me to the proper forum if this is not the
Scott Kirkwood <[EMAIL PROTECTED]> writes:
> What do you think?
I think this would allow the problems of cached plans to bite
applications that were previously not subject to them :-(.
An app that wants plan re-use can use PREPARE to identify the
queries that are going to be re-executed.
I couldn't find anything in the docs or in the mailing list on this,
but it is something that Oracle appears to do as does MySQL.
The idea, I believe, is to do a quick (hash) string lookup of the
query and if it's exactly the same as another query that has been done
recently to re-use the old parse
Greg Stark <[EMAIL PROTECTED]> writes:
>> It would also be interesting to prefetch one row from the outer table and fall
>> out immediately (without building the hash table) if the outer table is
>> empty. This seems to require some contortion of the code though :-(
> Why is it any more complicated
Tom Lane <[EMAIL PROTECTED]> writes:
> Yeah, I was just looking at doing that.
Well I imagine it takes you as long to read my patch as it would for you to
write it. But anyway, it's still useful to me as an exercise.
> It would also be interesting to prefetch one row from the outer table and fall
Greg Stark <[EMAIL PROTECTED]> writes:
> No, postgres didn't do things in reverse order. It hashed the empty table and
> then went ahead and checked every record of the non-empty table against the
> empty hash table.
> Reading the code there's no check for this, and it seems like it would be a
> u
Dennis Bjorklund <[EMAIL PROTECTED]> writes:
> On 22 Sep 2004, Greg Stark wrote:
>
> > Actually this looks like it's arguably a bug to me. Why does the hash
> > join execute the sequential scan at all? Shouldn't it also like the
> > merge join recognize that the other hashed relation is empty and
> > skip the sequential scan entirely?
On 22 Sep 2004, Greg Stark wrote:
> Actually this looks like it's arguably a bug to me. Why does the hash
> join execute the sequential scan at all? Shouldn't it also like the
> merge join recognize that the other hashed relation is empty and skip
> the sequential scan entirely?
I'm not sure you
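A toy hash join makes the short-circuit being debated concrete: if hashing the inner (build) relation produces an empty table, no probe row can ever match, so the scan of the outer relation can be skipped entirely. The function names are illustrative, not PostgreSQL's executor API.

```python
def hash_join(build_rows, probe_rows, build_key, probe_key):
    """Toy hash join; returns (matched pairs, number of probe rows scanned)."""
    table = {}
    for row in build_rows:                 # build phase: hash inner relation
        table.setdefault(build_key(row), []).append(row)

    scanned = 0
    out = []
    if not table:                          # empty build side: nothing can
        return out, scanned                # match, so skip the probe scan

    for row in probe_rows:                 # probe phase
        scanned += 1
        for match in table.get(probe_key(row), []):
            out.append((match, row))
    return out, scanned

# Empty inner relation: zero probe rows are scanned.
result, scanned = hash_join([], [(1, "a"), (2, "b")],
                            lambda r: r[0], lambda r: r[0])
```

Without the `if not table` check, the loop would still walk every probe row against an empty dictionary, which is the behaviour reported above.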
Gaetano Mendola <[EMAIL PROTECTED]> writes:
> hash_join = on
> -> Seq Scan on lookup_tipo_evento le (cost=0.00..1.16 rows=16
> width=32) (actual time=0.017..0.038 rows=16 loops=1)
>
> hash_join = off
> -> Seq Scan on lookup_tipo_evento le (cost=0.00..1.1
Dennis Bjorklund wrote:
> On Wed, 22 Sep 2004, Gaetano Mendola wrote:
>
>
>> Limit (cost=10.21..10.21 rows=1 width=24) (actual time=0.885..0.885 rows=0 loops=1)
>> Limit (cost=10.28..10.28 rows=1 width=24) (actual time=0.429..0.429 rows=0 loops=1)
>
>
> These estimated costs are almost the same, but the runtime differs
On Wed, 22 Sep 2004, Gaetano Mendola wrote:
> Limit (cost=10.21..10.21 rows=1 width=24) (actual time=0.885..0.885 rows=0 loops=1)
> Limit (cost=10.28..10.28 rows=1 width=24) (actual time=0.429..0.429 rows=0 loops=1)
These estimated costs are almost the same, but the runtime differs a
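The two Limit nodes quoted in this thread have nearly identical estimated costs (10.21 vs 10.28) yet actual times differing by roughly 2x. A small sketch that pulls both numbers out of an EXPLAIN ANALYZE plan line makes that comparison mechanical; the regex is a simplified stand-in, not a full plan parser.

```python
import re

# Matches the "cost=a..b" estimate and "actual time=a..b" measurement
# that appear on each node line of EXPLAIN ANALYZE output.
LINE_RE = re.compile(
    r"cost=(?P<c0>[\d.]+)\.\.(?P<c1>[\d.]+).*?"
    r"actual time=(?P<t0>[\d.]+)\.\.(?P<t1>[\d.]+)"
)

def cost_and_time(line):
    """Return (total estimated cost, total actual time in ms) for one line."""
    m = LINE_RE.search(line)
    return float(m.group("c1")), float(m.group("t1"))

a = cost_and_time("Limit (cost=10.21..10.21 rows=1 width=24) "
                  "(actual time=0.885..0.885 rows=0 loops=1)")
b = cost_and_time("Limit (cost=10.28..10.28 rows=1 width=24) "
                  "(actual time=0.429..0.429 rows=0 loops=1)")
```

For these two lines the cost estimates differ by under 1% while the measured times differ by more than 2x, which is exactly why the planner, going by cost alone, cannot distinguish the two plans.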
Dennis Bjorklund wrote:
On Wed, 22 Sep 2004, Gaetano Mendola wrote:
Now my question is why does 7.4 choose the hash join? :-(
It looks to me that the merge join is faster because there wasn't really
anything to merge, it resulted in 0 rows. Maybe the hash join that is
chosen in 7.4 would have
21 matches