You can use the V2 protocol (use the URL option protocolVersion=2). This does
remove some functionality of the driver that is only available with the V3
protocol, but it will work just fine for query execution.
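For reference, a minimal sketch of what that looks like on the connection URL; the host, database, and credentials below are placeholders, not anything from this thread:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class V2ProtocolExample {
    public static void main(String[] args) throws Exception {
        // protocolVersion=2 asks the driver to use the older V2 protocol.
        String url = "jdbc:postgresql://localhost:5432/testdb?protocolVersion=2";
        try (Connection conn = DriverManager.getConnection(url, "user", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT version()")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}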
Kris Jurka
On Wed, 21 Apr 2010, Robert Haas wrote:
On Tue, Apr 20, 2010 at 5:05 PM, Kris Jurka wrote:
b) Using the parameter values for statistics, but not making any stronger
guarantees about them. So the parameters will be used for evaluating the
selectivity, but not to perform other optimizations
With cursor-based fetching the server isn't constantly spewing out rows that
the driver must deal with; the driver only gets the rows it asks for. Once the
ResultSet is closed, it won't ask for any more.
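Roughly what that looks like in code (a sketch; the table name and connection details are made up for illustration):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CursorFetchExample {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://localhost:5432/testdb"; // placeholder
        try (Connection conn = DriverManager.getConnection(url, "user", "secret")) {
            conn.setAutoCommit(false);            // cursor fetching needs autocommit off
            try (Statement stmt = conn.createStatement()) {
                stmt.setFetchSize(100);           // fetch 100 rows per roundtrip
                try (ResultSet rs = stmt.executeQuery("SELECT * FROM big_table")) {
                    while (rs.next()) {
                        // process the row; further batches are requested only as needed
                    }
                }   // once the ResultSet is closed, no further rows are asked for
            }
            conn.commit();
        }
    }
}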
Kris Jurka
On Tue, 20 Apr 2010, Nikolas Everett wrote:
You can absolutely use COPY if you like, but you need to use a non-standard
JDBC driver: kato.iki.fi/sw/db/postgresql/jdbc/copy/. I've used it in the
past and it worked.
COPY support has been added to the 8.4 driver.
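A short sketch of using that support via the driver's CopyManager API; the table and data here are hypothetical:

import java.io.StringReader;
import java.sql.Connection;
import java.sql.DriverManager;

import org.postgresql.PGConnection;
import org.postgresql.copy.CopyManager;

public class CopyInExample {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://localhost:5432/testdb"; // placeholder
        try (Connection conn = DriverManager.getConnection(url, "user", "secret")) {
            // The COPY API is exposed on the driver-specific PGConnection interface.
            CopyManager copy = ((PGConnection) conn).getCopyAPI();
            long rows = copy.copyIn(
                "COPY items (id, name) FROM STDIN WITH CSV",
                new StringReader("1,apple\n2,banana\n"));
            System.out.println("Copied " + rows + " rows");
        }
    }
}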
Kris Jurka
The close is put into a queue so that the next time a message is sent to
the backend we'll also send the cursor close message. This avoids an
extra network roundtrip for the close action.
In any case Statement.close isn't helping you here either. It's really
Connection.commit/rollback that's releasing the server-side resources.
You can disable the named statement by adding the parameter
prepareThreshold=0 to your connection URL.
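For illustration, a sketch of both ways to control that setting (connection details and the table are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

import org.postgresql.PGStatement;

public class PrepareThresholdExample {
    public static void main(String[] args) throws Exception {
        // prepareThreshold=0 on the URL disables named statements connection-wide.
        String url = "jdbc:postgresql://localhost:5432/testdb?prepareThreshold=0";
        try (Connection conn = DriverManager.getConnection(url, "user", "secret");
             PreparedStatement ps = conn.prepareStatement("SELECT name FROM items WHERE id = ?")) {
            // It can also be set per statement via the driver's PGStatement interface
            // (works when the statement isn't wrapped by a connection pool).
            ((PGStatement) ps).setPrepareThreshold(0);
            ps.setInt(1, 1);
            ps.executeQuery().close();
        }
    }
}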
Kris Jurka
Scott Marlowe wrote:
On Sun, Apr 26, 2009 at 11:07 AM, Kris Jurka wrote:
As a note for non-JDBC users, the JDBC driver's batch interface allows
executing multiple statements in a single network roundtrip. This is
something you can't get in libpq, so beware of this for comparisons.
It could handle this better and send the full batch size, but at the moment
that's not possible and we're hoping the gains beyond this size aren't too
large.
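To make the batch interface concrete, here's a small sketch (the table and row data are invented for the example):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchInsertExample {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://localhost:5432/testdb"; // placeholder
        try (Connection conn = DriverManager.getConnection(url, "user", "secret")) {
            conn.setAutoCommit(false);
            try (PreparedStatement ps =
                     conn.prepareStatement("INSERT INTO items (id, name) VALUES (?, ?)")) {
                for (int i = 0; i < 1000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "item-" + i);
                    ps.addBatch();      // queued locally, nothing sent yet
                }
                // The driver pipelines the queued statements over the wire in
                // far fewer network roundtrips than 1000 individual executes.
                ps.executeBatch();
            }
            conn.commit();
        }
    }
}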
Kris Jurka
Tom Lane wrote:
Kris Jurka writes:
The hash join takes less than twenty seconds; the other two joins I
killed after five minutes. I can try to collect explain analyze results
later today if you'd like.
Attached are the explain analyze results. The analyze part hits the
hash join
Kris Jurka
On Thu, 16 Apr 2009, Kris Jurka wrote:
Perhaps the cost estimates for the real data are so high because of this
bogus row count that the fudge factor to disable mergejoin isn't enough?
Indeed, I get these cost estimates on 8.4b1 with an increased
disable_cost value:
On Thu, 16 Apr 2009, Tom Lane wrote:
Kris Jurka writes:
PG (8.3.7) doesn't seem to want to do a hash join across two partitioned
tables.
Could we see the whole declaration of these tables? (pg_dump -s output
would be convenient)
The attached table definition with no data wan
2797.96 rows=18450796 width=21)
-> Seq Scan on liens l (cost=0.00..14.00 rows=400
width=21)
-> [Seq Scans on other partitions]
Disabling mergejoin pushes it back to a nestloop join. Why can't it hash
join these two together?
Kris Jurka
On Thu, 20 Mar 2008, Albe Laurenz wrote:
PostgreSQL doesn't write into the table files when it SELECTs data.
It could easily be hint bit updates, which are set even by SELECTs, getting
written out.
Kris Jurka
A better solution to this situation would be to implement per-column
permissions, as the SQL spec has, so that you could revoke SELECT on just the
prosrc column and allow clients to retrieve the metadata they need.
Kris Jurka
to non-zero at some later point. I believe prepareThreshold=0
should work. Do you have a test case showing it doesn't?
Kris Jurka
Do I have to use createStatement(resultSetType, resultSetConcurrency) or,
respectively, prepareStatement(resultSetType, resultSetConcurrency) to achieve
the cursor behaviour?
http://jdbc.postgresql.org/documentation/81/query.html#query-with-cursor
Kris Jurka
On Fri, 24 Mar 2006, Jim C. Nasby wrote:
On Wed, Mar 22, 2006 at 02:37:28PM -0500, Kris Jurka wrote:
On Wed, 22 Mar 2006, Jim C. Nasby wrote:
Ok, I saw disk activity on the base directory and assumed it was pg_xlog
stuff. Turns out that both SELECT INTO and CREATE TABLE AS ignore
this is on 8.1.2, btw).
This has been fixed in CVS HEAD as part of a patch to allow additional
options to CREATE TABLE AS.
http://archives.postgresql.org/pgsql-patches/2006-02/msg00211.php
Kris Jurka
On Wed, 27 Jul 2005, Josh Berkus wrote:
> b) you can't index a temp table.
>
jurka# create temp table t (a int);
CREATE
jurka# create index myi on t(a);
CREATE
On Wed, 4 May 2005, Mischa Sandberg wrote:
> Quoting Kris Jurka <[EMAIL PROTECTED]>:
>
> > Not true. A client may send any number of Bind/Execute messages on
> > a prepared statement before a Sync message.
> Hunh. Interesting optimization in the JDBC driver.
There is the potential to deadlock if the network buffers on both sides fill
up and each side is blocked waiting on a write. The JDBC driver has
conservatively selected 256 as the maximum number of queries to send at once.
Kris Jurka
error handling) and will meet the same objections
that kept the original patch out of the driver in the first place (we want
a friendlier API than just a data stream).
Kris Jurka
s returned false.
I guess the question is why are you calling these methods if they didn't
work previously?
Kris Jurka
I looked at caching them at a higher level, but couldn't find a way to know
when to flush that cache.
Kris Jurka
really applicable to you. The other method
retrieves all results at once and stashes them in a Vector. This makes
next, absolute, and relative positioning all equal cost.
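As a sketch of that scrollable case (the statement type and table are illustrative, not from the original message):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ScrollableResultExample {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://localhost:5432/testdb"; // placeholder
        try (Connection conn = DriverManager.getConnection(url, "user", "secret");
             Statement stmt = conn.createStatement(
                 ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
             ResultSet rs = stmt.executeQuery("SELECT id, name FROM items")) {
            // All rows are held client-side, so moving around is uniformly cheap.
            // (Assumes the hypothetical table has more than 50 rows.)
            rs.absolute(50);     // jump straight to row 50
            rs.relative(-10);    // back up to row 40
            while (rs.next()) {  // continue forward from there
                System.out.println(rs.getInt("id") + " " + rs.getString("name"));
            }
        }
    }
}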
Kris Jurka
Without cross-column statistics there is no way it could expect all of the
rows to match.
Thanks for the analysis.
Kris Jurka
0..90.88 rows=3288 width=54) (actual
time=0.118..12.126 rows=3288 loops=1)
Kris Jurka
(cost=1.07..5.15 rows=83 width=12) (actual time=1.937..2.865 rows=83 loops=1)
      Hash Cond: ("outer".salesperson = "inner".salesperson)
      ->  Seq Scan on customer  (cost=0.00..2.83 rows=83 width=20) (actual time=0.137..0.437 rows=83 loops=1)
      ->  Hash  (cost=1.06..1.06 rows=6 width=24) (actual time=0.152..0.152 rows=0 loops=1)
            ->  Seq Scan on shd_salesperson  (cost=0.00..1.06 rows=6 width=24) (actual time=0.045..0.064 rows=6 loops=1)
Total runtime: 39974.236 ms
(27 rows)
Given better row estimates, the resulting plan runs more than ten times
faster. Why is the planner doing so poorly at estimating the number of
rows returned? I tried:
SET default_statistics_target = 1000;
VACUUM FULL ANALYZE;
but the results were the same. This is on 8.0beta4. Any ideas?
Kris Jurka
You can only do this against a 7.4 or 8.0 database and not older versions.
Kris Jurka
The ranges' intersection is actually tiny. The planner assumes a complete or
nearly complete overlap, so it thinks it will need to fetch 10% of the rows
from both the index and the heap and chooses a seqscan.
Kris Jurka
data from a scrollable one.
Kris Jurka
you think that is something that is really required.
Kris Jurka
to spit this data back to the client.
I would agree with Dave's suggestion to use log_duration and compare the
values for the first and subsequent fetches.
Kris Jurka
> if that's how COPY works? (For that matter, would that
> also be true of a transaction consisting of a set of
> inserts?)
The table is not locked in either the COPY case or the INSERT case.
Kris Jurka
perhaps the initial patch is sufficient.
http://archives.postgresql.org/pgsql-jdbc/2003-12/msg00186.php
Kris Jurka
the heap and end up fetching the same page multiple times, because table rows
that are on the same page were found in different places in the index.
Kris Jurka
I sent this message to the list and although it shows up in the archives,
I did not receive a copy of it through the list, so I'm resending as I
suspect others did not see it either.
-- Forwarded message --
Date: Sat, 13 Mar 2004 22:48:01 -0500 (EST)
From: Kris Jurka