Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread Greg Smith

David Kerr wrote:

I don't think it's anything in the DB that's causing it. (I drop and re-create
the DB between tests.) I actually suspect a hardware issue somewhere.
  


You might find my "Database Hardware Benchmarking" talk, available at 
http://projects.2ndquadrant.com/talks , useful to help sort out what's 
good and bad on each server, and correspondingly what's different
between the two.  Many of the ideas there came from fighting with SAN 
hardware that didn't do what I expected.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] performance change from 8.3.1 to later releases

2010-04-20 Thread Tom Lane
Scott Marlowe  writes:
> On Tue, Apr 20, 2010 at 12:38 PM, Roger Ging  wrote:
>> I have access to servers running 8.3.1, 8.3.8, 8.4.2 and 8.4.3.  I have
>> noticed that on the 8.4.* versions, a lot of our code is either taking much
>> longer to complete, or never completing.  I think I have isolated the
>> problem to queries using in(), not in() or not exists().  I've put together
>> a test case with one particular query that demonstrates the problem.

> We get a Seq Scan with a huge cost, and no hash agg or quick sort.  Is
> the work_mem the same or similar?

It looks to me like it's not.  The 8.4 plan is showing sorts spilling to
disk for amounts of data that the 8.3 plan is perfectly willing to hold
in memory.  I'm also wondering if the 8.4 server is on comparable
hardware, because it seems to be only about half as fast for the plain
seqscan steps, which surely ought to be no worse than before.

regards, tom lane

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] significant slow down with various LIMIT

2010-04-20 Thread Kevin Grittner
norn  wrote:
 
>> (1) Try it without the ORDER BY clause and the LIMIT.
> W/o the 'order by' it works instantly (about 1ms!)
 
> W/o the limit it takes 1.4 seconds
 
>>(2) Temporarily take that top index out of consideration
> It works nice! Query takes about 0.6 seconds as expected!
 
> So, as we can see, dropping index may help, but why? What shall I
> do in my particular situation? Probably analyzing my tests will help
> you give some recommendations, I hope so! :)
 
The combination of the ORDER BY DESC and the LIMIT causes it to
think it can get the right data most quickly by scanning backwards
on the index.  It's wrong about that.  With the information from the
additional plans, it seems that this bad estimate might be why it's
not recognizing the plan which is actually four orders of magnitude
faster:
 
Index Scan using plugins_guide_address_city_id
  on plugins_guide_address
  Index Cond: (city_id = 4535)
  estimated rows=27673
  actual rows=5
 
Try this:
 
ALTER TABLE plugins_guide_address
  ALTER COLUMN city_id SET STATISTICS 1000;
ANALYZE plugins_guide_address;
 
Then try your query.
 
I have one more diagnostic query to test, if the above doesn't work:
 
explain analyze
SELECT id FROM
  (
SELECT core_object.id
  FROM "core_object"
  JOIN "plugins_plugin_addr"
ON ("core_object"."id" = "plugins_plugin_addr"."oid_id")
  JOIN "plugins_guide_address"
ON ("plugins_plugin_addr"."address_id" =
"plugins_guide_address"."id")
  WHERE "plugins_guide_address"."city_id" = 4535
  ) x
  ORDER BY id DESC
  LIMIT 4;
 
-Kevin

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [JDBC] SOLVED ... Re: Getting rid of a cursor from JDBC .... Re: [PERFORM] Re: HELP: How to tame the 8.3.x JDBC driver with a big query result set

2010-04-20 Thread Kris Jurka



On Tue, 20 Apr 2010, Dave Crooke wrote:


a. Make setFetchSize(1) the default


The reason this is not done is that the mechanism used for fetching a 
piece of the results at a time can change the query plan used if using a 
PreparedStatement.  There are three ways to plan a PreparedStatement:


a) Using the exact parameter values by substituting them directly into the 
query.  This isn't really "planned" as you can't re-use it at all.  This 
is only available using the V2 protocol.


b) Using the parameter values for statistics, but not making any stronger
guarantees about them.  So the parameters will be used for evaluating the 
selectivity, but not to perform other optimizations like 
constraint_exclusion or transforming a LIKE operation to a range query. 
This is the default plan type the JDBC driver uses.


c) Planning the query with no regard for the parameters passed to it. 
This is the plan type the JDBC driver uses when it sees the same 
PreparedStatement being re-used multiple times or when it is respecting 
setFetchSize and allowing for partial results.


We must use (c) for partial results instead of (b) because of some 
limitations of the server.  Currently you cannot have two statements of 
type (b) open on the same connection.  So since the driver can't know if 
the user will issue another query before fetching the remainder of the 
first query's results, it must set up the first query to be of type (c) so 
that multiple statements can exist simultaneously.


Switching the default plan type to (c) will cause a significant number of 
complaints as performance on some queries will go into the tank.  Perhaps 
we could have a default fetchSize for plain Statements as it won't affect 
the plan.  I could also see making this a URL parameter though so it could 
be set as the default with only a configuration, not a code change.
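
For reference, opting in to the partial-results path today looks roughly like
the sketch below (the connection details, table name and fetch size are
placeholders, not anything from this thread):

import java.sql.*;

public class FetchDemo {
    public static void main(String[] args) throws SQLException {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "secret");
        conn.setAutoCommit(false);       // the portal only lives inside a transaction
        Statement st = conn.createStatement();
        st.setFetchSize(1000);           // fetch 1000 rows per round trip, not everything
        ResultSet rs = st.executeQuery("SELECT * FROM huge_table");
        while (rs.next()) {
            // process one row; the driver asks the server for the next batch on demand
        }
        rs.close();
        st.close();
        conn.commit();
    }
}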



b. If someone does call rs.close() before the end of the ResultSet, and has
not created an explicit cursor at the JDBC level, flag the query / lock /
virtual transaction in some way in the JDBC driver that tells it that it can
just dump the cursor on a subsequent stmt.close(), conn.commit() or
conn.close() call without sucking down the rest of the data.


This is already true.  The JDBC driver only asks the server for more of 
the ResultSet when a next() call requires it.  So the server isn't 
constantly spewing out rows that the driver must deal with, the driver 
only gets the rows it asks for.  Once the ResultSet is closed, it won't 
ask for any more.


Kris Jurka


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [JDBC] SOLVED ... Re: Getting rid of a cursor from JDBC .... Re: [PERFORM] Re: HELP: How to tame the 8.3.x JDBC driver with a big query result set

2010-04-20 Thread Dave Crooke
I digest this down to "this is the best that can be achieved on a connection
that's single-threaded."

I think the big difference with Oracle is this:

i. in Oracle, a SELECT does not have to be a transaction, in the sense that
PG's SELECT does ... but in Oracle, a SELECT can fail mid-stream if you wait
too long and the UNDO tablespace wraps (ORA-01555, snapshot too old), i.e.
Oracle does not lock on SELECT. Oracle is optimized for lots of small
transactions that typically commit; PG supports arbitrary transaction mixes
of any size, but is less efficient at the workload for which Oracle is
specialized.

ii. SELECT always creates an implicit cursor in Oracle, but access to these
cursors can be interleaved arbitrarily on one connection, both with each
other and with transactions (writes).

After considering the context you offered, I'd recommend the following two
minor changes to the PG driver:

a. Make setFetchSize(1) the default

b. If someone does call rs.close() before the end of the ResultSet, and has
not created an explicit cursor at the JDBC level, flag the query / lock /
virtual transaction in some way in the JDBC driver that tells it that it can
just dump the cursor on a subsequent stmt.close(), conn.commit() or
conn.close() call without sucking down the rest of the data.

AFAICT, this will make the behaviour more like other DBs without sacrificing
anything, but I don't know what default behaviour expectations might be out
there in PG land.

Cheers
Dave

On Tue, Apr 20, 2010 at 3:22 PM, Kevin Grittner  wrote:
(Lots of good explanatory stuff)


Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread David Kerr
On Tue, Apr 20, 2010 at 04:26:52PM -0400, Greg Smith wrote:
- David Kerr wrote:
- >the db, xlog and logs are all on separate areas of the SAN.
- >separate I/O controllers, etc on the SAN. it's set up well, I wouldn't 
- >expect
- >contention there.
- >  
- 
- Just because you don't expect it doesn't mean it's not there.  
- Particularly something as complicated as a SAN setup, presuming anything 
- without actually benchmarking it is a recipe for fuzzy diagnostics when 
- problems pop up.  If you took anyone's word that your SAN has good 
- performance without confirming it yourself, that's a path that's led 
- many to trouble.

that's actually what I'm doing, performance testing this environment.
everything's on the table for me at this point. 

- Anyway, as Robert already stated, effective_cache_size only impacts how 
- some very specific types of queries are executed; that's it.  If there's 
- some sort of query behavior involved in your load, maybe that has 
- something to do with your slowdown, but it doesn't explain general slow 
- performance.  Other possibilities include that something else changed 
- when you reloaded the server as part of that, or it's a complete 
- coincidence--perhaps autoanalyze happened to finish at around the same 
- time and it led to new plans.

Ok that's good to know. I didn't think it would have any impact, and was
surprised when it appeared to.

I just finished running the test on another machine and wasn't able to 
reproduce the problem, so that's good news in some ways. But now i'm back 
to the drawing board.

I don't think it's anything in the DB that's causing it. (I drop and re-create
the DB between tests.) I actually suspect a hardware issue somewhere.

Dave

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread Greg Smith

David Kerr wrote:

the db, xlog and logs are all on separate areas of the SAN.
separate I/O controllers, etc on the SAN. it's set up well, I wouldn't expect
contention there.
  


Just because you don't expect it doesn't mean it's not there.  
Particularly something as complicated as a SAN setup, presuming anything 
without actually benchmarking it is a recipe for fuzzy diagnostics when 
problems pop up.  If you took anyone's word that your SAN has good 
performance without confirming it yourself, that's a path that's led 
many to trouble.


Anyway, as Robert already stated, effective_cache_size only impacts how 
some very specific types of queries are executed; that's it.  If there's 
some sort of query behavior involved in your load, maybe that has 
something to do with your slowdown, but it doesn't explain general slow 
performance.  Other possibilities include that something else changed 
when you reloaded the server as part of that, or it's a complete 
coincidence--perhaps autoanalyze happened to finish at around the same 
time and it led to new plans.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [JDBC] SOLVED ... Re: Getting rid of a cursor from JDBC .... Re: [PERFORM] Re: HELP: How to tame the 8.3.x JDBC driver with a big query result set

2010-04-20 Thread Kevin Grittner
Dave Crooke  wrote:
 
> a. the fact that Statement.executeQuery("select * from
> huge_table") works out of the box with every one of those
> databases, but results in java.lang.OutOfMemoryError with PG without
> special setup. Again, this is to the letter of the standard, it's
> just not very user friendly.
 
The way I read it, it's *allowed* by the standard, but not
*required* by the standard.  I agree it's not very friendly
behavior.  I made some noise about it early in my use of PostgreSQL,
but let it go once I had it covered for my own shop.  I agree it's a
barrier to conversion -- it often comes up here with new PostgreSQL
users, and who knows how many people give up on PostgreSQL without
coming here when they hit it?
 
It's not just an issue in JDBC, either; it's generally the default
in PostgreSQL interfaces.  That seems to be by design, with the
rationale that it prevents returning some part of a result set and
then throwing an error.  Anyone coming from another database
probably already handles that, so they won't tend to be impressed by
that argument, but it would be hard to change that as a default
behavior in PostgreSQL without breaking a lot of existing code for
PostgreSQL users at this point.  :-(
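 
One workaround is an explicit server-side cursor, which works through any
interface.  A minimal sketch from the JDBC side, given an open
java.sql.Connection conn (the cursor and table names are made up):

conn.setAutoCommit(false);                    // DECLARE only works inside a transaction
Statement st = conn.createStatement();
st.execute("DECLARE big_cur NO SCROLL CURSOR FOR SELECT * FROM huge_table");
while (true) {
    ResultSet rs = st.executeQuery("FETCH FORWARD 1000 FROM big_cur");
    int n = 0;
    while (rs.next()) {
        n++;                                  // process one row here
    }
    rs.close();
    if (n == 0) {
        break;                                // FETCH returned nothing; we're done
    }
}
st.execute("CLOSE big_cur");
conn.commit();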
 
> b. The fact that with enterprise grade commercial databases, you
> can mix reads and writes on the same Connection, whereas with PG
> Connection.commit() kills open cursors.
 
Well, I know that with Sybase ASE (and therefore it's probably also
true of Microsoft SQL Server, since last I saw they both use TDS
protocol), unless you're using a cursor, if you execute another
statement through JDBC on the same connection which has a pending
ResultSet, it reads the rest of the ResultSet into RAM (the behavior
you don't like), before executing the new statement.  So at least
for those databases you can't really claim *both* a and b as points.
 
Oops -- I just noticed you said "enterprise grade".  ;-)
 
> The fact that I've been using JDBC for 12 years with half a dozen
> database products, in blissful ignorance of these fine
> distinctions in the standard until I had to deal with them with
> PG, is kinda what my point is :-)
 
OK, point taken.
 
> I understand the reasons for some of these limitations, but by no
> means all of them.
 
Well, one of the cool things about open source is that users have
the opportunity to "scratch their own itches".  The JDBC
implementation is 100% Java, so if changing something there would be
helpful to you, you can do so.  If you're careful about it, you may
be able to contribute it back to the community to save others the
pain.  If you want to take a shot at some of this, I'd be willing to
help a bit.  If nothing else, the attempt may give you better
perspective on the reasons for some of the limitations.  ;-)
 
>> (1)  Needing to setFetchSize to avoid materializing the entire
>> result set in RAM on the client.
> 
> I don't understand the rationale for why PG, unlike every other
> database, doesn't make this a sensible default, e.g., 10,000 rows
 
I took a bit of a look at this, years ago.  My recollection is that,
based on the nature of the data stream, you would need to do
something similar to databases using TDS -- you could read as you go
as long as no other statement is executed on the connection; but
you'd need to add code to recognize the exceptional circumstance and
suck the rest of the result set down the wire to RAM should it be
necessary to "clear the way" for another statement.
 
If you give it a shot, you might want to see whether it's possible
to avoid an irritating implementation artifact of the TDS JDBC
drivers: if you close a ResultSet or a Statement with an open
ResultSet without first invoking Statement.cancel, they would suck
back the rest of the results (and ignore them) -- making for a big
delay sometimes on a close invocation.  As I recall, the
justification was that for executions involving multiple result
sets, they needed to do this to get at the next one cleanly;
although some forms of execute don't support multiple results, and
it doesn't do you a lot of good on Statement close, so you'd think
these could have been optimized.
 
> I find Oracle's JDBC implementation to be both user friendly and
> (largely) standards compliant.
 
Where there are issues with usability or standards compliance with
PostgreSQL, especially for something which works well for you in
other products, I hope you raise them on these lists.  Perhaps there
are already ways to deal with them, perhaps we need to better
document something, and perhaps some change can be made to
accommodate the issue.  Even if no action is taken at the time it is
helpful to the project, because the number of people raising an
issue is often taken into consideration when deciding whether to
change something.  Also, someone running into the issue later may
find the discussion on a search and gain helpful information.
 
> I hope this can be taken in the amicable spirit of gentlemanly
> debate in which it

Re: [JDBC] SOLVED ... Re: Getting rid of a cursor from JDBC .... Re: [PERFORM] Re: HELP: How to tame the 8.3.x JDBC driver with a big query result set

2010-04-20 Thread Nikolas Everett
On Tue, Apr 20, 2010 at 3:29 PM, Dave Crooke  wrote:
>
> I wouldn't hold MySQL up to be a particularly good implementation of
> anything, other than speed (MyISAM) and usability (the CLI) ... I find
> Oracle's JDBC implementation to be both user friendly and (largely) standards
> compliant.
>

Dave,

I've been following along at home and agree with you right up until you
mention the MySQL CLI being usable.  I work with the thing every day.  The
plain, vanilla install on my Ubuntu laptop lacks proper readline support.
 Hitting ctrl-c will sometimes kill the running query and sometimes kill the
CLI.  It's far from a paragon of usability.  The last time I used psql it
didn't have any of those issues.

Full disclosure:  mysql does have proper readline support on a Centos
machine I have access to.  ctrl-c still kills the shell.

Your other points are good though.

--Nik


Re: [PERFORM] performance change from 8.3.1 to later releases

2010-04-20 Thread Scott Marlowe
On Tue, Apr 20, 2010 at 12:38 PM, Roger Ging  wrote:
> Hi,
>
> I have access to servers running 8.3.1, 8.3.8, 8.4.2 and 8.4.3.  I have
> noticed that on the 8.4.* versions, a lot of our code is either taking much
> longer to complete, or never completing.  I think I have isolated the
> problem to queries using in(), not in() or not exists().  I've put together
> a test case with one particular query that demonstrates the problem.
>
> select count(*) from traderhank.vendor_catalog = 147,352
>
> select count(*) from traderhank.xc_products = 8,610
>
> The sub query (select vc1.th_sku from traderhank.vendor_catalog vc1
> join traderhank.vendor_catalog vc2 on vc2.short_desc_75 = vc1.short_desc_75
> and vc2.th_sku != vc1.th_sku
> where vc1.cutoff_date is null and vc2.cutoff_date is null
> group by vc1.th_sku
> )  yields 54,390 rows
>
> The sub query (select vc_th_Sku from traderhank.xc_products where vc_th_sku
> is not null) yields 5,132 rows
>
> These 2 tables have been loaded from a pg_dump on all servers, vacuum
> analyze run after load.
>
> 1st case: pg 8.3.1 using left join finishes the query in about 3.5 seconds
>
> explain analyze
> select vc.* from traderhank.vendor_catalog vc
> left join
> (
> select vc1.th_sku from traderhank.vendor_catalog vc1
> join traderhank.vendor_catalog vc2 on vc2.short_desc_75 = vc1.short_desc_75
> and vc2.th_sku != vc1.th_sku
> where vc1.cutoff_date is null and vc2.cutoff_date is null
> group by vc1.th_sku
> ) vcj on vcj.th_sku = vc.th_sku
> left join traderhank.xc_products xc on xc.vc_th_sku = vc.th_sku
> where vcj.th_sku is null
> and xc.vc_th_sku is null
>
> "Merge Left Join  (cost=71001.53..72899.35 rows=36838 width=310) (actual
> time=9190.446..10703.509 rows=78426 loops=1)"
> "  Merge Cond: ((vc.th_sku)::text = (vc1.th_sku)::text)"
> "  Filter: (vc1.th_sku IS NULL)"
> "  ->  Merge Left Join  (cost=19362.72..20201.46 rows=73676 width=310)
> (actual time=917.947..1784.593 rows=141962 loops=1)"
> "        Merge Cond: ((vc.th_sku)::text = (xc.vc_th_sku)::text)"
> "        Filter: (xc.vc_th_sku IS NULL)"
> "        ->  Sort  (cost=17630.88..17999.26 rows=147352 width=310) (actual
> time=871.130..1114.453 rows=147352 loops=1)"
> "              Sort Key: vc.th_sku"
> "              Sort Method:  quicksort  Memory: 45285kB"
> "              ->  Seq Scan on vendor_catalog vc  (cost=0.00..4981.52
> rows=147352 width=310) (actual time=0.020..254.023 rows=147352 loops=1)"
> "        ->  Sort  (cost=1731.84..1753.37 rows=8610 width=8) (actual
> time=46.783..62.347 rows=9689 loops=1)"
> "              Sort Key: xc.vc_th_sku"
> "              Sort Method:  quicksort  Memory: 734kB"
> "              ->  Seq Scan on xc_products xc  (cost=0.00..1169.10 rows=8610
> width=8) (actual time=0.013..25.490 rows=8610 loops=1)"
> "  ->  Sort  (cost=51638.80..51814.57 rows=70309 width=32) (actual
> time=8272.483..8382.258 rows=66097 loops=1)"
> "        Sort Key: vc1.th_sku"
> "        Sort Method:  quicksort  Memory: 4086kB"

So here we get a hash agg in ~4M memory:

> "        ->  HashAggregate  (cost=44572.25..45275.34 rows=70309 width=8)
> (actual time=7978.928..8080.317 rows=54390 loops=1)"

And the row estimate is similar.

(much deleted)

> on any version from 8.3.8 on, this query has never returned, and explain
> analyze never returns, so I am only posting explain output

We get a Seq Scan with a huge cost, and no hash agg or quick sort.  Is
the work_mem the same or similar?  I'd crank it up for testing just to
see if it helps.  16Meg is pretty safe on a low traffic machine.
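
If it's easier to test from the application side, a session-local setting is
enough; a rough sketch via JDBC, where conn is an open java.sql.Connection and
problemQuery is a placeholder for the query text above:

Statement st = conn.createStatement();
st.execute("SET work_mem = '16MB'");                        // session-local, no restart needed
ResultSet rs = st.executeQuery("EXPLAIN " + problemQuery);  // check whether the plan changes
while (rs.next()) {
    System.out.println(rs.getString(1));                    // each plan line comes back as a row
}
rs.close();
st.close();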

> "Seq Scan on vendor_catalog vc  (cost=140413.05..91527264.28 rows=36838
> width=309)"
> "  Filter: ((NOT (hashed SubPlan 2)) AND (NOT (SubPlan 1)))"
> "  SubPlan 2"
> "    ->  Seq Scan on xc_products  (cost=0.00..1716.99 rows=5132 width=8)"
> "          Filter: (vc_th_sku IS NOT NULL)"
> "  SubPlan 1"
> "    ->  Materialize  (cost=138683.23..139734.64 rows=75541 width=8)"
> "          ->  Group  (cost=134997.43..138311.69 rows=75541 width=8)"
> "                ->  Sort  (cost=134997.43..136654.56 rows=662853 width=8)"
> "                      Sort Key: vc1.th_sku"
> "                      ->  Merge Join  (cost=39600.73..52775.08 rows=662853
> width=8)"
> "                            Merge Cond: ((vc1.short_desc_75)::text =
> (vc2.short_desc_75)::text)"
> "                            Join Filter: ((vc2.th_sku)::text <>
> (vc1.th_sku)::text)"
> "                            ->  Sort  (cost=19800.37..20062.75 rows=104954
> width=27)"
> "                                  Sort Key: vc1.short_desc_75"
> "                                  ->  Seq Scan on vendor_catalog vc1
>  (cost=0.00..8534.52 rows=104954 width=27)"
> "                                        Filter: ((cutoff_date IS NULL) AND
> (th_sku IS NOT NULL))"
> "                            ->  Materialize  (cost=19800.37..21112.29
> rows=104954 width=27)"
> "                                  ->  Sort  (cost=19800.37..20062.75
> rows=104954 width=27)"
> "          

Re: [JDBC] SOLVED ... Re: Getting rid of a cursor from JDBC .... Re: [PERFORM] Re: HELP: How to tame the 8.3.x JDBC driver with a big query result set

2010-04-20 Thread Dave Crooke
I don't want to get into a big debate about standards, but I will clarify a
couple of things inline below.

My key point is that the PG JDBC driver resets the expectations of people who
have used JDBC with other databases, and that is going to reflect negatively on
Postgres if Postgres is in the minority, standards notwithstanding, and I
feel bad about that, because PG rocks!

Cheers
Dave

On Tue, Apr 20, 2010 at 11:32 AM, Kevin Grittner <
kevin.gritt...@wicourts.gov> wrote:

> Dave Crooke  wrote:
>
> > AFAICT from the Java end, ResultSet.close() is supposed to be
> > final.
>
> For that ResultSet.  That doesn't mean a ResultSet defines a cursor.
> Such methods as setCursorName, setFetchSize, and setFetchDirection
> are associated with a Statement.  Think of the ResultSet as the
> result of a cursor *scan* generated by opening the cursor defined by
> the Statement.
>
> http://java.sun.com/javase/6/docs/api/java/sql/ResultSet.html#close%28%29
>
> Notice that the ResultSet is automatically closed if the Statement
> that generated it is re-executed.  That is very much consistent with
> Statement as the equivalent of a cursor, and not very consistent
> with a ResultSet as the equivalent of a cursor.
>

True, but mechanically there is no other choice - the ResultSet is created
by Statement.executeQuery() and by then it's already in motion ... in the
case of Postgres with default settings, the JVM blows out before that call
returns.

I am not explicitly creating any cursors, all I'm doing is running a query
with a very large ResultSet.


> Again, you're talking about the *results* from *opening* the cursor.
>
> > At a pragmatic level, the PGSQL JDBC driver has a lot of odd
> > behaviours which, while they may or may not be in strict
> > compliance with the letter of the standard, are very different
> > from any other mainstream database that I have accessed from Java ...
> > what I'd consider as normative behaviour, using regular JDBC
> > calls without the need to jump through all these weird hoops, is
> > exhibited by all of the following: Oracle, SQL Server, DB2, MySQL,
> > Apache Derby and JET (MS-Access file-based back end, the .mdb
> > format)
>
> Are you talking about treating the Statement object as representing
> a cursor and the ResultSet representing the results from opening
> the cursor, or are you thinking of something else here?
>

Specific examples:

a. the fact that Statement.executeQuery("select * from huge_table") works
out of the box with every one of those databases, but results in
java.lang.OutOfMemoryError with PG without special setup. Again, this is to the
letter of the standard, it's just not very user friendly.

b. The fact that with enterprise grade commercial databases, you can mix
reads and writes on the same Connection, whereas with PG Connection.commit()
kills open cursors.

The fact that I've been using JDBC for 12 years with half a dozen database
products, in blissful ignorance of these fine distinctions in the standard
until I had to deal with them with PG, is kinda what my point is :-)

I understand the reasons for some of these limitations, but by no means all
of them.


> Huh.  I dropped PostgreSQL into an environment with hundreds of
> databases, and the applications pretty much "just worked" for us.
> Of course, we were careful to write to the SQL standard and the JDBC
> API, not to some other product's implementation of them.
>

True, but not everyone can hire every developer to be a JDBC / SQL language
lawyer. All of our SQL is either ANSI or created by the Hibernate PGSQL
adapter, with the exception of a daily "VACUUM ANALYSE" which I added ;-)

I do believe that when there are two ways to implement a standard, the "it
just works" way is far preferable to the "well, I know you probably think
this is a bug, because 90% of the client code out there chokes on it, but
actually we are standards compliant, it's everyone else who is doing it
wrong" way.

I used to work at a storage startup that did exactly the latter, using an
obscure HTTP/1.1 standard feature that absolutely none of the current
browsers or HTTP libraries supports, and so it was a constant source of
frustration for customers and tech support alike. I no longer work there ;-)

It's kinda like making stuff that has to work with Windows - you know
Microsoft doesn't follow its own standards, but you gotta make your code
work with theirs, so you play ball with their rules.


> (1)  Needing to setFetchSize to avoid materializing the entire
> result set in RAM on the client.
>

I don't understand the rationale for why PG, unlike every other database,
doesn't make this a sensible default, e.g., 10,000 rows ... maybe because the
locks stay in place until you call Connection.close() or Connection.commit()
? ;-)


>
> (2)  Fixing a race condition in our software which was benign in
> other products, but clearly my own bug.
>

Been there and done that with code developed on single-threaded DB's (JET /
Derby) ... not what I'm gripin

Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread Scott Marlowe
On Tue, Apr 20, 2010 at 12:28 PM, David Kerr  wrote:
> On Tue, Apr 20, 2010 at 12:23:51PM -0600, Scott Marlowe wrote:
> - So are you logging to the same drive that has pg_xlog and your
> - data/base directory on this machine?
> -
>
> the db, xlog and logs are all on separate areas of the SAN.
>
> separate I/O controllers, etc on the SAN. it's set up well, I wouldn't expect
> contention there.

Same xkb/s gigabit connection?

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread Scott Marlowe
On Tue, Apr 20, 2010 at 12:47 PM, David Kerr  wrote:
> On Tue, Apr 20, 2010 at 02:15:19PM -0400, Robert Haas wrote:
> - On Tue, Apr 20, 2010 at 2:03 PM, David Kerr  wrote:
- > that thought occurred to me while I was testing this. I ran a vacuumdb -z
> - > on my database during the load and it didn't impact performance at all.
> -
> - The window to run ANALYZE usefully is pretty short.  If you run it
> - before the load is complete, your stats will be wrong.  If you run it
> - after the select statements that hit the table are planned, the
> - updated stats won't arrive in time to do any good.
>
> right, but I'm loading 20 million records in 1000-record increments, so
> the analyze should affect all subsequent increments, no?

I keep thinking the FK checks are taking a long time because the referenced
rows aren't cached (during the import they went through the ring buffer in
pg, or for some other reason aren't in a buffer), but a large
effective_cache_size tells the planner there's a 99.99% or better chance
they're in cache, so it chooses a poor plan to look them up.  Just a guess.

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] performance change from 8.3.1 to later releases

2010-04-20 Thread Roger Ging

Hi,

I have access to servers running 8.3.1, 8.3.8, 8.4.2 and 8.4.3.  I have 
noticed that on the 8.4.* versions, a lot of our code is either taking 
much longer to complete, or never completing.  I think I have isolated 
the problem to queries using in(), not in() or not exists().  I've put 
together a test case with one particular query that demonstrates the 
problem.


select count(*) from traderhank.vendor_catalog = 147,352

select count(*) from traderhank.xc_products = 8,610

The sub query (select vc1.th_sku from traderhank.vendor_catalog vc1
join traderhank.vendor_catalog vc2 on vc2.short_desc_75 = vc1.short_desc_75
and vc2.th_sku != vc1.th_sku
where vc1.cutoff_date is null and vc2.cutoff_date is null
group by vc1.th_sku
)  yields 54,390 rows

The sub query (select vc_th_Sku from traderhank.xc_products where 
vc_th_sku is not null) yields 5,132 rows


These 2 tables have been loaded from a pg_dump on all servers, vacuum 
analyze run after load.


1st case: pg 8.3.1 using left join finishes the query in about 3.5 seconds

explain analyze
select vc.* from traderhank.vendor_catalog vc
left join
(
select vc1.th_sku from traderhank.vendor_catalog vc1
join traderhank.vendor_catalog vc2 on vc2.short_desc_75 = vc1.short_desc_75
and vc2.th_sku != vc1.th_sku
where vc1.cutoff_date is null and vc2.cutoff_date is null
group by vc1.th_sku
) vcj on vcj.th_sku = vc.th_sku
left join traderhank.xc_products xc on xc.vc_th_sku = vc.th_sku
where vcj.th_sku is null
and xc.vc_th_sku is null

"Merge Left Join  (cost=71001.53..72899.35 rows=36838 width=310) (actual 
time=9190.446..10703.509 rows=78426 loops=1)"

"  Merge Cond: ((vc.th_sku)::text = (vc1.th_sku)::text)"
"  Filter: (vc1.th_sku IS NULL)"
"  ->  Merge Left Join  (cost=19362.72..20201.46 rows=73676 width=310) 
(actual time=917.947..1784.593 rows=141962 loops=1)"

"Merge Cond: ((vc.th_sku)::text = (xc.vc_th_sku)::text)"
"Filter: (xc.vc_th_sku IS NULL)"
"->  Sort  (cost=17630.88..17999.26 rows=147352 width=310) 
(actual time=871.130..1114.453 rows=147352 loops=1)"

"  Sort Key: vc.th_sku"
"  Sort Method:  quicksort  Memory: 45285kB"
"  ->  Seq Scan on vendor_catalog vc  (cost=0.00..4981.52 
rows=147352 width=310) (actual time=0.020..254.023 rows=147352 loops=1)"
"->  Sort  (cost=1731.84..1753.37 rows=8610 width=8) (actual 
time=46.783..62.347 rows=9689 loops=1)"

"  Sort Key: xc.vc_th_sku"
"  Sort Method:  quicksort  Memory: 734kB"
"  ->  Seq Scan on xc_products xc  (cost=0.00..1169.10 
rows=8610 width=8) (actual time=0.013..25.490 rows=8610 loops=1)"
"  ->  Sort  (cost=51638.80..51814.57 rows=70309 width=32) (actual 
time=8272.483..8382.258 rows=66097 loops=1)"

"Sort Key: vc1.th_sku"
"Sort Method:  quicksort  Memory: 4086kB"
"->  HashAggregate  (cost=44572.25..45275.34 rows=70309 width=8) 
(actual time=7978.928..8080.317 rows=54390 loops=1)"
"  ->  Merge Join  (cost=27417.09..42493.30 rows=831580 
width=8) (actual time=1317.874..6380.928 rows=810012 loops=1)"
"Merge Cond: ((vc1.short_desc_75)::text = 
(vc2.short_desc_75)::text)"
"Join Filter: ((vc2.th_sku)::text <> 
(vc1.th_sku)::text)"
"->  Sort  (cost=13708.55..13970.22 rows=104669 
width=27) (actual time=661.319..834.131 rows=104624 loops=1)"

"  Sort Key: vc1.short_desc_75"
"  Sort Method:  quicksort  Memory: 11235kB"
"  ->  Seq Scan on vendor_catalog vc1  
(cost=0.00..4981.52 rows=104669 width=27) (actual time=0.010..268.552 
rows=104624 loops=1)"

"Filter: (cutoff_date IS NULL)"
"->  Sort  (cost=13708.55..13970.22 rows=104669 
width=27) (actual time=656.447..2130.290 rows=914636 loops=1)"

"  Sort Key: vc2.short_desc_75"
"  Sort Method:  quicksort  Memory: 11235kB"
"  ->  Seq Scan on vendor_catalog vc2  
(cost=0.00..4981.52 rows=104669 width=27) (actual time=0.015..266.926 
rows=104624 loops=1)"

"Filter: (cutoff_date IS NULL)"
"Total runtime: 10837.005 ms"


This query returns the same set of rows in about 2.8 seconds:

explain analyze
select vc.* from traderhank.vendor_catalog vc
where vc.th_sku not in
(
select vc1.th_sku from traderhank.vendor_catalog vc1
join traderhank.vendor_catalog vc2 on vc2.short_desc_75 = vc1.short_desc_75
and vc2.th_sku != vc1.th_sku
where vc1.cutoff_date is null and vc2.cutoff_date is null and vc1.th_sku 
is not null

group by vc1.th_sku
)
and vc.th_sku not in
(select vc_th_Sku from traderhank.xc_products where vc_th_sku is not null)


"Seq Scan on vendor_catalog vc  (cost=46633.03..52351.31 rows=36838 
width=310) (actual time=8216.197..8506.825 rows=78426 loops=1)"

"  Filter: ((NOT (hashed subplan)) AND (NOT (hashed subplan)))"
"  SubPlan"
"->  Seq Scan on

Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread David Kerr
On Tue, Apr 20, 2010 at 02:15:19PM -0400, Robert Haas wrote:
- On Tue, Apr 20, 2010 at 2:03 PM, David Kerr  wrote:
- > that thought occurred to me while I was testing this. I ran a vacuumdb -z
- > on my database during the load and it didn't impact performance at all.
- 
- The window to run ANALYZE usefully is pretty short.  If you run it
- before the load is complete, your stats will be wrong.  If you run it
- after the select statements that hit the table are planned, the
- updated stats won't arrive in time to do any good.

right, but I'm loading 20 million records in 1000-record increments, so
the analyze should affect all subsequent increments, no?

- > I did turn on log_min_duration_statement but that caused performance to be
- > unbearable, but I could turn it on again if it would help.
- 
- I think you need to find a way to identify exactly which query is
- running slowly.  You could sit there and run "select * from
- pg_stat_activity", or turn on log_min_duration_statement, or have your
- application print out timestamps at key points, or some other
- method...

I'm on it.

Dave

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread David Kerr
On Tue, Apr 20, 2010 at 01:17:02PM -0500, Kevin Grittner wrote:
- David Kerr  wrote:
-  
- > Incidentally the code is written to work like this :
- > 
- > while (read X lines in file){
- > Process those lines.
- > write lines to DB.
- > }
-  
- Unless you're selecting from multiple database tables in one query,
- effective_cache_size shouldn't make any difference.  There's
- probably some other reason for the difference.
-  
- A couple wild shots in the dark:
-  
- Any chance the source files were cached the second time, but not the
- first?
-  
- Do you have a large checkpoint_segments setting, and did the second
- run without a new initdb?
-  
- -Kevin

No, I don't think the files would be cached the 2nd time. I ran it multiple times
and got the same performance each time. It wasn't until I changed the parameter
that performance got better.

I've got checkpoint_segments = 300

Dave

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread David Kerr
On Tue, Apr 20, 2010 at 12:30:14PM -0600, Scott Marlowe wrote:
- On Tue, Apr 20, 2010 at 12:28 PM, David Kerr  wrote:
- >
- > I'm logging via syslog, I've had trouble with that before. When I moved to
- > syslog-ng on my dev environments that mostly resolved the problem for me, but
- > these machines still have vanilla syslog.
- 
- Yea, I almost always log directly via stdout on production machines
- because of that.
- 

Ah well good to know i'm not the only one =)

I'll get the query info. I've got a twin system that I can use and abuse.

Dave

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread Scott Marlowe
On Tue, Apr 20, 2010 at 12:28 PM, David Kerr  wrote:
>
> I'm logging via syslog, I've had trouble with that before. When I moved to
> syslog-ng on my dev environments that mostly resolved the problem for me, but
> these machines still have vanilla syslog.

Yea, I almost always log directly via stdout on production machines
because of that.

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread David Kerr
On Tue, Apr 20, 2010 at 12:23:51PM -0600, Scott Marlowe wrote:
- On Tue, Apr 20, 2010 at 12:20 PM, David Kerr  wrote:
- > On Tue, Apr 20, 2010 at 02:12:15PM -0400, Nikolas Everett wrote:
- > - On Tue, Apr 20, 2010 at 2:03 PM, David Kerr  wrote:
- > -
- > - You can absolutely use copy if you like but you need to use a non-standard
- > - jdbc driver:  kato.iki.fi/sw/db/postgresql/jdbc/copy/.  I've used it in
- > - the past and it worked.
- > -
- > - Is the whole thing going in in one transaction?  I'm reasonably sure
- > - statistics aren't kept for uncommitted transactions.
- > -
- > - For inserts the prepared statements can only help.  For selects they can
- > - hurt because eventually the JDBC driver will turn them into back end
- > - prepared statements that are only planned once.  The price here is that
- > - that plan may not be the best plan for the data that you throw at it.
- > -
- > - What was log_min_duration_statement logging that it killed performance?
- > -
- > - --Nik
- >
- > Good to know about the jdbc-copy. but this is a huge project and the load is
- > just one very very tiny component, I don't think we could introduce anything
- > new to assist that.
- >
- > It's not all in one tx. I don't have visibility to the code to determine how
- > it's broken down, but most likely each while loop is a tx.
- >
- > I set it to log all statements (i.e., = 0). That doubled the load time from
- > ~15 to ~30 hours. I could, of course, be more granular if it would be
- > helpful.
- 
- So are you logging to the same drive that has pg_xlog and your
- data/base directory on this machine?
- 

the db, xlog and logs are all on separate areas of the SAN.

separate I/O controllers, etc on the SAN. it's set up well, I wouldn't expect
contention there.

I'm logging via syslog, I've had trouble with that before. When I moved to
syslog-ng on my dev environments that mostly resolved the problem for me, but
these machines still have vanilla syslog.

Dave

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread Scott Marlowe
On Tue, Apr 20, 2010 at 12:20 PM, David Kerr  wrote:
> On Tue, Apr 20, 2010 at 02:12:15PM -0400, Nikolas Everett wrote:
> - On Tue, Apr 20, 2010 at 2:03 PM, David Kerr  wrote:
> -
> - > that thought occurred to me while I was testing this. I ran a vacuumdb -z
> - > on my database during the load and it didn't impact performance at all.
> - >
> - > Incidentally the code is written to work like this :
> - >
> - > while (read X lines in file){
> - > Process those lines.
> - > write lines to DB.
> - > }
> - >
> - > So I would generally expect to get the benefits of the updated statistics
> - > once the loop ended, no?  (would prepared statements affect that
> - > possibly?)
> - >
> - > Also, while I was debugging the problem, I did load a 2nd file into the DB
> - > on top of one that had been loaded. So the statistics almost certainly
> - > should
> - > have been decent at that point.
> - >
> - > I did turn on log_min_duration_statement but that caused performance to be
> - > unbearable,
> - > but i could turn it on again if it would help.
> - >
> - > Dave
> -
> -
> - You can absolutely use copy if you like but you need to use a non-standard
> - jdbc driver:  kato.iki.fi/sw/db/postgresql/jdbc/copy/.  I've used it in the
> - past and it worked.
> -
> - Is the whole thing going in in one transaction?  I'm reasonably sure
> - statistics aren't kept for uncommitted transactions.
> -
> - For inserts the prepared statements can only help.  For selects they can
> - hurt because eventually the JDBC driver will turn them into back end
> - prepared statements that are only planned once.  The price here is that that
> - plan may not be the best plan for the data that you throw at it.
> -
> - What was log_min_duration_statement logging that it killed performance?
> -
> - --Nik
>
> Good to know about the jdbc-copy. but this is a huge project and the load is
> just one very very tiny component, I don't think we could introduce anything
> new to assist that.
>
> It's not all in one tx. I don't have visibility to the code to determine how
> it's broken down, but most likely each while loop is a tx.
>
> I set it to log all statements (i.e., = 0.). that doubled the load time from
> ~15 to ~30 hours. I could, of course, be more granular if it would be helpful.

So are you logging to the same drive that has pg_xlog and your
data/base directory on this machine?

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread David Kerr
On Tue, Apr 20, 2010 at 02:12:15PM -0400, Nikolas Everett wrote:
- On Tue, Apr 20, 2010 at 2:03 PM, David Kerr  wrote:
- 
- > that thought occurred to me while I was testing this. I ran a vacuumdb -z
- > on my database during the load and it didn't impact performance at all.
- >
- > Incidentally the code is written to work like this :
- >
- > while (read X lines in file){
- > Process those lines.
- > write lines to DB.
- > }
- >
- > So I would generally expect to get the benefits of the updated statistics
- > once the loop ended. no?  (would prepared statements affect that possibly?)
- >
- > Also, while I was debugging the problem, I did load a 2nd file into the DB
- > on top of one that had been loaded. So the statistics almost certainly
- > should
- > have been decent at that point.
- >
- > I did turn on log_min_duration_statement but that caused performance to be
- > unbearable,
- > but i could turn it on again if it would help.
- >
- > Dave
- 
- 
- You can absolutely use copy if you like but you need to use a non-standard
- jdbc driver:  kato.iki.fi/sw/db/postgresql/jdbc/copy/.  I've used it in the
- past and it worked.
- 
- Is the whole thing going in in one transaction?  I'm reasonably sure
- statistics aren't kept for uncommitted transactions.
- 
- For inserts the prepared statements can only help.  For selects they can
- hurt because eventually the JDBC driver will turn them into back end
- prepared statements that are only planned once.  The price here is that that
- plan may not be the best plan for the data that you throw at it.
- 
- What was log_min_duration_statement logging that it killed performance?
- 
- --Nik

Good to know about the jdbc-copy, but this is a huge project and the load is
just one very, very tiny component; I don't think we could introduce anything
new to assist with that.

It's not all in one tx. I don't have visibility to the code to determine how 
it's broken down, but most likely each while loop is a tx.

I set it to log all statements (i.e., = 0). That doubled the load time from
~15 to ~30 hours. I could, of course, be more granular if it would be helpful.

Dave

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread Scott Marlowe
On Tue, Apr 20, 2010 at 12:15 PM, Scott Marlowe  wrote:
> On Tue, Apr 20, 2010 at 11:39 AM, David Kerr  wrote:
>> Howdy all,
>>
>> I've got a huge server running just postgres. It's got 48 cores and 256GB of 
>> ram. Redhat 5.4, Postgres 8.3.9.
>> 64bit OS. No users currently.
>
> What's your IO subsystem look like?  What did vmstat actually say?

Note that on a 48-core machine, if vmstat shows 2% wait and 98% idle
you could still be 100% I/O bound, because vmstat reports a percentage of
total CPU (one core stuck in I/O wait out of 48 is only about 2%).  iostat -x
10 will give a better view of how hard your disks are working, and whether
they're the issue.

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread Kris Jurka



On Tue, 20 Apr 2010, Nikolas Everett wrote:


You can absolutely use copy if you like but you need to use a non-standard
jdbc driver:  kato.iki.fi/sw/db/postgresql/jdbc/copy/.  I've used it in the
past and it worked.


Copy support has been added to the 8.4 driver.
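
Roughly, using it looks like the sketch below (the file, table and connection
details are just placeholders; the entry point is the CopyManager in
org.postgresql.copy):

import java.io.FileReader;
import java.sql.Connection;
import java.sql.DriverManager;
import org.postgresql.PGConnection;
import org.postgresql.copy.CopyManager;

public class CopyLoad {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "secret");
        CopyManager copy = ((PGConnection) conn).getCopyAPI();
        // Stream a CSV file straight into a table via COPY ... FROM STDIN.
        long rows = copy.copyIn("COPY staging_table FROM STDIN WITH CSV",
                                new FileReader("/tmp/load.csv"));
        System.out.println("loaded " + rows + " rows");
        conn.close();
    }
}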

Kris Jurka

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread Kevin Grittner
David Kerr  wrote:
 
> Incidentally the code is written to work like this :
> 
> while (read X lines in file){
> Process those lines.
> write lines to DB.
> }
 
Unless you're selecting from multiple database tables in one query,
effective_cache_size shouldn't make any difference.  There's
probably some other reason for the difference.
 
A couple wild shots in the dark:
 
Any chance the source files were cached the second time, but not the
first?
 
Do you have a large checkpoint_segments setting, and did the second
run without a new initdb?
 
-Kevin

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread Scott Marlowe
On Tue, Apr 20, 2010 at 11:39 AM, David Kerr  wrote:
> Howdy all,
>
> I've got a huge server running just postgres. It's got 48 cores and 256GB of 
> ram. Redhat 5.4, Postgres 8.3.9.
> 64bit OS. No users currently.

What's your IO subsystem look like?  What did vmstat actually say?

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread Robert Haas
On Tue, Apr 20, 2010 at 2:03 PM, David Kerr  wrote:
> that thought occurred to me while I was testing this. I ran a vacuumdb -z
> on my database during the load and it didn't impact performance at all.

The window to run ANALYZE usefully is pretty short.  If you run it
before the load is complete, your stats will be wrong.  If you run it
after the select statements that hit the table are planned, the
updated stats won't arrive in time to do any good.

> I did turn on log_min_duration_statement but that caused performance to be 
> unbearable,
> but i could turn it on again if it would help.

I think you need to find a way to identify exactly which query is
running slowly.  You could sit there and run "select * from
pg_stat_activity", or turn on log_min_duration_statement, or have your
application print out timestamps at key points, or some other
method...
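
For example, a quick way to watch from a second connection (a rough sketch;
monitorConn is just another open java.sql.Connection, and the column names are
the 8.3 ones):

Statement st = monitorConn.createStatement();
ResultSet rs = st.executeQuery(
        "SELECT procpid, now() - query_start AS runtime, current_query " +
        "FROM pg_stat_activity " +
        "WHERE current_query <> '<IDLE>' " +
        "ORDER BY query_start");
while (rs.next()) {
    System.out.println(rs.getString("runtime") + "  " + rs.getString("current_query"));
}
rs.close();
st.close();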

...Robert

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread Nikolas Everett
On Tue, Apr 20, 2010 at 2:03 PM, David Kerr  wrote:

> that thought occurred to me while I was testing this. I ran a vacuumdb -z
> on my database during the load and it didn't impact performance at all.
>
> Incidentally the code is written to work like this :
>
> while (read X lines in file){
> Process those lines.
> write lines to DB.
> }
>
> So i would generally expect to get the benefits of the updated staticis
> once the loop ended. no?  (would prepared statements affect that possibly?)
>
> Also, while I was debugging the problem, I did load a 2nd file into the DB
> ontop of one that had been loaded. So the statistics almost certinaly
> should
> have been decent at that point.
>
> I did turn on log_min_duration_statement but that caused performance to be
> unbearable,
> but i could turn it on again if it would help.
>
> Dave


You can absolutely use copy if you like but you need to use a non-standard
jdbc driver:  kato.iki.fi/sw/db/postgresql/jdbc/copy/.  I've used it in the
past and it worked.

Is the whole thing going in in one transaction?  I'm reasonably sure
statistics aren't kept for uncommitted transactions.

For inserts the prepared statements can only help.  For selects they can
hurt because eventually the JDBC driver will turn them into back end
prepared statements that are only planned once.  The price here is that that
plan may not be the best plan for the data that you throw at it.
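
If the switch to a generic plan does hurt, the driver-specific knob is the
prepare threshold; a sketch, assuming an open java.sql.Connection conn and the
stock PostgreSQL JDBC driver (the query text is a placeholder):

PreparedStatement ps = conn.prepareStatement(
        "SELECT * FROM some_table WHERE id = ?");          // placeholder query
// By default the driver switches to a named server-side statement after the
// same PreparedStatement has been executed a few times; with a threshold of 0
// it should keep planning with the actual parameter values every time.
((org.postgresql.PGStatement) ps).setPrepareThreshold(0);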

What was log_min_duration_statement logging that it killed performance?

--Nik


Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread David Kerr
On Tue, Apr 20, 2010 at 01:44:18PM -0400, Robert Haas wrote:
- On Tue, Apr 20, 2010 at 1:39 PM, David Kerr  wrote:
- > My initial thought was hardware issues so we got sar, vmstat, etc all
- > running on the box and they didn't give
- > any indication that we had resource issues.
- >
- > So I decided to just make the 2 PG config files look the same. (the only
- > change was dropping effective_cache_size
- > from 128GB to 2GB).
- >
- > Now the large box performs the same as the smaller box. (which is fine).
- >
- > incidentally, both tests were starting from a blank database.
- >
- > Is this expected?
- 
- Lowering effective_cache_size tends to discourage the planner from
- using a nested-loop-with-inner-indexscan plan - that's it.
- 
- What may be happening is that you may be loading data into some tables
- and then running a query against those tables before the autovacuum
- daemon has a chance to analyze them.  I suspect that if you enable
- some logging you'll find that one of those queries is really, really
- slow, and that (by happy coincidence) discouraging it from using the
- index it thinks it should use happens to produce a better plan.  What
- you should probably do is, for each table that you bulk load and then
- query, insert a manual ANALYZE between the two.
- 
- ...Robert
- 

that thought occurred to me while I was testing this. I ran a vacuumdb -z
on my database during the load and it didn't impact performance at all.

Incidentally the code is written to work like this :

while (read X lines in file){
Process those lines.
write lines to DB.
}

So I would generally expect to get the benefits of the updated statistics
once the loop ended, no?  (would prepared statements affect that possibly?)

Also, while I was debugging the problem, I did load a 2nd file into the DB
on top of one that had been loaded. So the statistics almost certainly should
have been decent at that point.

I did turn on log_min_duration_statement but that caused performance to be
unbearable, but I could turn it on again if it would help.

Dave

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread Robert Haas
On Tue, Apr 20, 2010 at 1:39 PM, David Kerr  wrote:
> Howdy all,
>
> I've got a huge server running just postgres. It's got 48 cores and 256GB of 
> ram. Redhat 5.4, Postgres 8.3.9.
> 64bit OS. No users currently.
>
> I've got a J2EE app that loads data into the DB, it's got logic behind it so 
> it's not a simple bulk load, so
> i don't think we can use copy.
>
> Based on the tuning guides, I set my effective_cache_size to 128GB (1/2 the 
> available memory) on the box.
>
> When I ran my load, it took approximately 15 hours to load 20 million 
> records. I thought this was odd because
> on a much smaller machine I was able to do that same amount of records in 6 
> hours.
>
> My initial thought was hardware issues so we got sar, vmstat, etc all running 
> on the box and they didn't give
> any indication that we had resource issues.
>
> So I decided to just make the 2 PG config files look the same. (the only 
> change was dropping effective_cache_size
> from 128GB to 2GB).
>
> Now the large box performs the same as the smaller box. (which is fine).
>
> incidentally, both tests were starting from a blank database.
>
> Is this expected?

Lowering effective_cache_size tends to discourage the planner from
using a nested-loop-with-inner-indexscan plan - that's it.

What may be happening is that you may be loading data into some tables
and then running a query against those tables before the autovacuum
daemon has a chance to analyze them.  I suspect that if you enable
some logging you'll find that one of those queries is really, really
slow, and that (by happy coincidence) discouraging it from using the
index it thinks it should use happens to produce a better plan.  What
you should probably do is, for each table that you bulk load and then
query, insert a manual ANALYZE between the two.
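
In loop form that's something like the sketch below; the table, columns and the
readNextChunk() helper are placeholders for whatever the J2EE app actually does,
and conn is an open java.sql.Connection with autocommit off:

PreparedStatement ins = conn.prepareStatement(
        "INSERT INTO staging (col1, col2) VALUES (?, ?)");   // placeholder table/columns
List<String[]> chunk;
while (!(chunk = readNextChunk()).isEmpty()) {               // "read X lines in file"
    for (String[] row : chunk) {                             // "Process those lines"
        ins.setString(1, row[0]);
        ins.setString(2, row[1]);
        ins.addBatch();
    }
    ins.executeBatch();                                      // "write lines to DB"
    conn.commit();
}
Statement st = conn.createStatement();
st.execute("ANALYZE staging");          // manual ANALYZE before anything queries the table
st.close();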

...Robert

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread Joshua D. Drake
On Tue, 2010-04-20 at 10:39 -0700, David Kerr wrote:
> Howdy all,
> 
> I've got a huge server running just postgres. It's got 48 cores and 256GB of 
> ram. Redhat 5.4, Postgres 8.3.9.
> 64bit OS. No users currently.
> 
> I've got a J2EE app that loads data into the DB, it's got logic behind it so 
> it's not a simple bulk load, so
> i don't think we can use copy.
> 
> Based on the tuning guides, I set my effective_cache_size to 128GB (1/2 the
> available memory) on the box.
> 
> When I ran my load, it took approximately 15 hours to load 20 million
> records. I thought this was odd because
> on a much smaller machine I was able to do that same amount of records in 6 
> hours.
> 
> My initial thought was hardware issues so we got sar, vmstat, etc all running 
> on the box and they didn't give
> any indication that we had resource issues.
> 
> So I decided to just make the 2 PG config files look the same. (the only 
> change was dropping effective_cache_size 
> from 128GB to 2GB).
> 
> Now the large box performs the same as the smaller box. (which is fine).
> 
> Incidentally, both tests were starting from a blank database.
> 
> Is this expected? 

Without a more complete picture of the configuration, this post doesn't
mean a whole lot. Further, effective_cache_size is not likely to affect a
bulk load at all.

Joshua D. Drake



> 
> Thanks!
> 
> Dave
> 


-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564
Consulting, Training, Support, Custom Development, Engineering



-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread David Kerr
Howdy all,

I've got a huge server running just postgres. It's got 48 cores and 256GB of 
ram. Redhat 5.4, Postgres 8.3.9.
64bit OS. No users currently.

I've got a J2EE app that loads data into the DB, it's got logic behind it so 
it's not a simple bulk load, so
I don't think we can use COPY.

Based on the tuning guides, I set my effective_cache_size to 128GB (1/2 the
available memory) on the box.

When I ran my load, it took approximately 15 hours to load 20 million
records. I thought this was odd because
on a much smaller machine I was able to do that same amount of records in 6 
hours.

My initial thought was hardware issues so we got sar, vmstat, etc all running 
on the box and they didn't give
any indication that we had resource issues.

So I decided to just make the 2 PG config files look the same. (the only change 
was dropping effective_cache_size 
from 128GB to 2GB).

Now the large box performs the same as the smaller box. (which is fine).

Incidentally, both tests were starting from a blank database.

Is this expected? 

Thanks!

Dave

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [JDBC] SOLVED ... Re: Getting rid of a cursor from JDBC .... Re: [PERFORM] Re: HELP: How to tame the 8.3.x JDBC driver with a big query result set

2010-04-20 Thread Kevin Grittner
Dave Crooke  wrote:
 
> AFAICT from the Java end, ResultSet.close() is supposed to be
> final.
 
For that ResultSet.  That doesn't mean a ResultSet defines a cursor.
Such methods as setCursorName, setFetchSize, and setFetchDirection
are associated with a Statement.  Think of the ResultSet as the
result of a cursor *scan* generated by opening the cursor defined by
the Statement.
 
http://java.sun.com/javase/6/docs/api/java/sql/ResultSet.html#close%28%29
 
Notice that the ResultSet is automatically closed if the Statement
that generated it is re-executed.  That is very much consistent with
Statement as the equivalent of a cursor, and not very consistent
with a ResultSet as the equivalent of a cursor.
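
A small, hypothetical sketch of that documented behaviour (connection details
invented):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Re-executing a Statement implicitly closes its current ResultSet, which is
// consistent with treating the Statement as the cursor definition and each
// ResultSet as one scan produced by opening it.
public class ReExecuteClosesResultSet {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "test", "secret");
        Statement st = conn.createStatement();
        ResultSet first = st.executeQuery("SELECT 1");
        ResultSet second = st.executeQuery("SELECT 2");   // re-execute the same Statement
        System.out.println("first ResultSet closed: " + first.isClosed());  // prints true
        second.close();
        st.close();
        conn.close();
    }
}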
 
> There is no way I know of in JDBC to get a handle back to the
> cursor on the server side once you have made this call - in fact,
> its sole purpose is to inform the server in a timely fashion that
> this cursor is no longer required, since the ResultSet itself is a
> Java object and thus subject to garbage collection and finalizer
> hooks.
 
Again, you're talking about the *results* from *opening* the cursor.
 
> At a pragmatic level, the PGSQL JDBC driver has a lot of odd
> behaviours which, while they may or may not be in strict
> compliance with the letter of the standard, are very different
> from any other mainstream database that I have accessed from Java
> ... what I'd consider as normative behaviour, using regular JDBC
> calls without the need to jump through all these weird hoops, is
> exhibited by all of the following: Oracle, SQL Server, DB2, MySQL,
> Apache Derby and JET (MS-Access file-based back end, the .mdb
> format)
 
Are you talking about treating the Statement object as representing
a cursor and the ResultSet representing the results from opening
the cursor, or are you thinking of something else here?
 
> In practice, this places PGSQL as the odd one out, which is a bit
> of a turn-off to experienced Java people who are PG newbies for
> what is otherwise an excellent database.
 
Huh.  I dropped PostgreSQL into an environment with hundreds of
databases, and the applications pretty much "just worked" for us.
Of course, we were careful to write to the SQL standard and the JDBC
API, not to some other product's implementation of them. 
 
There were a few bugs we managed to hit which hadn't previously been
noticed, but those were promptly fixed.  As I recall, about the only
other things which caused me problems were:
 
(1)  Needing to setFetchSize to avoid materializing the entire
result set in RAM on the client.
 
(2)  Fixing a race condition in our software which was benign in
other products, but clearly my own bug.
 
(3)  Working around the fact that COALESCE(NULL, NULL) can't be used
everywhere NULL can.
 
> At my previous shop, we built a couple of database-backed apps
> from scratch, and despite a desire to use PG due to there being
> more certainty over its future licensing (it was just after Sun
> had bought MySQL AG), I ended up switching from PG to MySQL 5.0.47
> (last open source version) because of the difficulties I was
> having with the PG driver.
 
Just out of curiosity, did you discuss that on the PostgreSQL lists?
Can you reference the thread(s)?
 
> I consider it part of the acme of great FOSS to make it easy to
> use for newbies and thus attract a larger user base, but that is
> just my $0.02 worth.
 
Sure, but I would consider it a step away from that to follow
MySQL's interpretation of cursors rather than the standard's.
YMMV, of course.
 
-Kevin

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [JDBC] SOLVED ... Re: Getting rid of a cursor from JDBC .... Re: [PERFORM] Re: HELP: How to tame the 8.3.x JDBC driver with a big query result set

2010-04-20 Thread Kris Jurka



On Mon, 19 Apr 2010, Dave Crooke wrote:


Statement.close() appears to get the job done (in my environment, PG's
driver never sees a Connection.close() because of DBCP).

I'd consider the fact that ResultSet.close() does not release the implicit
cursor to be something of a bug, but it may well have been fixed already.


PG doesn't release the locks acquired by the query until transaction end. 
So closing a cursor will release some backend memory, but it won't release 
the locks.  The way the driver implements ResultSet.close() is to put 
the close message into a queue so that the next time a message is sent to 
the backend we'll also send the cursor close message.  This avoids an 
extra network roundtrip for the close action.


In any case Statement.close isn't helping you here either.  It's really 
Connection.commit/rollback that's releasing the locks.
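
In code terms, a hypothetical sketch of that distinction (table name made up):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Closing the ResultSet/Statement releases client-side resources and queues
// the cursor close for the backend, but the locks taken by the query are only
// released when the transaction ends with commit() or rollback().
public class CloseVersusCommit {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "test", "secret");
        conn.setAutoCommit(false);
        Statement st = conn.createStatement();
        st.setFetchSize(1000);
        ResultSet rs = st.executeQuery("SELECT * FROM big_table");
        while (rs.next()) {
            // ... process the row ...
        }
        rs.close();      // cursor close is queued, sent with the next message
        st.close();      // statement resources released
        conn.commit();   // this is what actually releases the locks
        conn.close();
    }
}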


Kris Jurka

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [JDBC] SOLVED ... Re: Getting rid of a cursor from JDBC .... Re: [PERFORM] Re: HELP: How to tame the 8.3.x JDBC driver with a big query result set

2010-04-20 Thread Dave Crooke
AFAICT from the Java end, ResultSet.close() is supposed to be final. There
is no way I know of in JDBC to get a handle back to the cursor on the server
side once you have made this call - in fact, its sole purpose is to inform
the server in a timely fashion that this cursor is no longer required, since
the ResultSet itself is a Java object and thus subject to garbage collection
and finalizer hooks.

At a pragmatic level, the PGSQL JDBC driver has a lot of odd behaviours
which, while they may or may not be in strict compliance with the letter of
the standard, are very different from any other mainstream database that I
have accessed from Java ... what I'd consider as normative behaviour, using
regular JDBC calls without the need to jump through all these weird hoops,
is exhibited by all of the following: Oracle, SQL Server, DB2, MySQL, Apache
Derby and JET (MS-Access file-based back end, the .mdb format)

In practice, this places PGSQL as the odd one out, which is a bit of a
turn-off to experienced Java people who are PG newbies for what is otherwise
an excellent database.

At my current post, I came into a shop that had PG as the only real
database, so I have learned to love it, and de-supported Derby and the other
toy back ends we used to use. And to be fair, from a back end perspective,
PG is better than MySQL in terms of manageability ... I am running 250GB
databases on small systems with no issues.

At my previous shop, we built a couple of database-backed apps from scratch,
and despite a desire to use PG due to there being more certainty over its
future licensing (it was just after Sun had bought MySQL AG), I ended up
switching from PG to MySQL 5.0.47 (last open source version) because of the
difficulties I was having with the PG driver.

I consider it part of the acme of great FOSS to make it easy to use for
newbies and thus attract a larger user base, but that is just my $0.02
worth.

Cheers
Dave

On Tue, Apr 20, 2010 at 9:28 AM, Kevin Grittner  wrote:

> Dave Crooke  wrote:
>
> > I'd consider the fact that ResultSet.close() does not release the
> > implicit cursor to be something of a bug
>
> What's your reasoning on that?  The definitions of cursors in the
> spec, if memory serves, allow a cursor to be closed and re-opened;
> why would this be treated differently?
>
> -Kevin
>


Re: [PERFORM] Planner not using column limit specified for one column for another column equal to first

2010-04-20 Thread Hannu Krosing
On Fri, 2010-04-16 at 11:02 +0300, Віталій Тимчишин wrote:
> Hello.
> 
> 
> I have a query that performs very poorly because there is a limit on
> the join column that is not applied to other columns:
> 
> 
> select * from company this_ left outer join company_tag this_1_ on
> this_.id=this_1_.company_id left outer join company_measures
> companymea2_ on this_.id=companymea2_.company_id left outer join
> company_descr ces3_ on this_.id=ces3_.company_id where this_1_.tag_id
> = 7 and this_.id>5000 
> and this_1_.company_id>5000
> order by this_.id asc limit 1000;
> 
> 
> (plan1.txt)
> Total runtime: 7794.692 ms
> 
> 
> At the same time if I apply the limit (>5000) to other columns in
> query itself it works like a charm:
> 
> 
> select * from company this_ left outer join company_tag this_1_ on
> this_.id=this_1_.company_id left outer join company_measures
> companymea2_ on this_.id=companymea2_.company_id left outer join
> company_descr ces3_ on this_.id=ces3_.company_id where this_1_.tag_id
> = 7 and this_.id>5000 
> and this_1_.company_id>5000
> and companymea2_.company_id>5000 and ces3_.company_id>5000
> order by this_.id asc limit 1000;

The queries are not the same.

The 2nd variant will not return the rows where there are no matching rows
in this_1_, companymea2_ or ces3_.

A query equivalent to first one would be:


select * from company this_ 
  left outer join company_tag this_1_ 
   on (this_.id=this_1_.company_id 
   and this_1_.company_id>5000)
  left outer join company_measures companymea2_ 
   on (this_.id=companymea2_.company_id 
   and companymea2_.company_id>5000)
  left outer join company_descr ces3_ 
   on (this_.id=ces3_.company_id 
   and ces3_.company_id>5000)
 where this_1_.tag_id = 7 
   and this_.id>5000 
 order by this_.id asc 
 limit 1000;


I'm not sure that the planner considers the above form of plan rewrite, nor
that it would make much sense to do so unless there was a really small
number of rows where x_.company_id > 5000.


-- 
Hannu Krosing   http://www.2ndQuadrant.com
PostgreSQL Scalability and Availability 
   Services, Consulting and Training



-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] HELP: How to tame the 8.3.x JDBC driver with a big query result set

2010-04-20 Thread Dave Crooke
Hey folks

I am trying to do a full table scan on a large table from Java, using a
straightforward "select * from foo". I've run into these problems:

1. By default, the PG JDBC driver attempts to suck the entire result set
into RAM, resulting in *java.lang.OutOfMemoryError* ... this is not cool, in
fact I consider it a serious bug (even MySQL gets this right ;-) I am only
testing with a 9GB result set, but production needs to scale to 200GB or
more, so throwing hardware at it is not feasible.

2. I tried using the official taming method, namely *
java.sql.Statement.setFetchSize(1000)* and this makes it blow up entirely
with an error I have no context for, as follows (the number C_10 varies,
e.g. C_12 last time) ...

org.postgresql.util.PSQLException: ERROR: portal "C_10" does not exist
at
org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1592)
at
org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1327)
at
org.postgresql.core.v3.QueryExecutorImpl.fetch(QueryExecutorImpl.java:1527)
at
org.postgresql.jdbc2.AbstractJdbc2ResultSet.next(AbstractJdbc2ResultSet.java:1843)

This is definitely a bug :-)


Is there a known workaround for this ... will updating to a newer version of
the driver fix this?

Is there a magic incantation of JDBC calls that will tame it?

Can I cast the objects to PG specific types and access a hidden API to turn
off this behaviour?

If the only workaround is to explicitly create a cursor in PG, is there a
good example of how to do this from Java?

Cheers
Dave
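
For reference, a minimal sketch of the usual recipe for getting the driver to
stream a large result set instead of buffering it all in the JVM: turn
autocommit off, use a forward-only statement, and set a fetch size (reusing the
"select * from foo" above; connection details invented, and exact driver
behaviour may vary by version):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// With autocommit off and a non-zero fetch size, the driver can fetch rows in
// batches behind a server-side portal rather than materializing the whole
// result set in client memory.
public class StreamBigTable {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "test", "secret");
        conn.setAutoCommit(false);                 // a commit would close the portal
        Statement st = conn.createStatement(
                ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
        st.setFetchSize(1000);                     // rows per round trip to the server
        ResultSet rs = st.executeQuery("SELECT * FROM foo");
        long rows = 0;
        while (rs.next()) {
            rows++;                                // process each row here
        }
        rs.close();
        st.close();
        conn.commit();                             // end the read-only transaction
        conn.close();
        System.out.println("scanned " + rows + " rows");
    }
}

A commit (including the implicit one when autocommit is on) closes the
server-side portal, which would be consistent with the "portal ... does not
exist" error above.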


Re: [PERFORM] significant slow down with various LIMIT

2010-04-20 Thread norn
Kevin,
Thanks for your time! Here are the requested tests.

> (1) Try it without the ORDER BY clause and the LIMIT.
W/o the 'order by' it works instantly (about 1ms!)
 Limit  (cost=0.00..3.59 rows=5 width=4) (actual time=0.127..0.229
rows=5 loops=1)
   ->  Nested Loop  (cost=0.00..277863.53 rows=386544 width=4) (actual
time=0.125..0.224 rows=5 loops=1)
 ->  Nested Loop  (cost=0.00..91136.78 rows=386544 width=4)
(actual time=0.106..0.154 rows=5 loops=1)
   ->  Index Scan using plugins_guide_address_city_id on
plugins_guide_address  (cost=0.00..41109.07 rows=27673 width=4)
(actual time=0.068..0.080 rows=5 loops=1)
 Index Cond: (city_id = 4535)
   ->  Index Scan using plugins_plugin_addr_address_id on
plugins_plugin_addr  (cost=0.00..1.63 rows=14 width=8) (actual
time=0.011..0.012 rows=1 loops=5)
 Index Cond: (plugins_plugin_addr.address_id =
plugins_guide_address.id)
 ->  Index Scan using core_object_pkey on core_object
(cost=0.00..0.47 rows=1 width=4) (actual time=0.011..0.012 rows=1
loops=5)
   Index Cond: (core_object.id =
plugins_plugin_addr.oid_id)
 Total runtime: 0.328 ms
(10 rows)


W/o the limit it takes 1.4 seconds, which is still better than...
 Sort  (cost=199651.74..200618.10 rows=386544 width=4) (actual
time=1153.167..1157.841 rows=43898 loops=1)
   Sort Key: core_object.id
   Sort Method:  quicksort  Memory: 3594kB
   ->  Hash Join  (cost=81234.35..163779.93 rows=386544 width=4)
(actual time=122.050..1128.909 rows=43898 loops=1)
 Hash Cond: (core_object.id = plugins_plugin_addr.oid_id)
 ->  Seq Scan on core_object  (cost=0.00..46467.07
rows=3221307 width=4) (actual time=0.011..378.677 rows=3221349
loops=1)
 ->  Hash  (cost=76402.55..76402.55 rows=386544 width=4)
(actual time=121.170..121.170 rows=43898 loops=1)
   ->  Nested Loop  (cost=368.81..76402.55 rows=386544
width=4) (actual time=8.645..104.842 rows=43898 loops=1)
 ->  Bitmap Heap Scan on plugins_guide_address
(cost=368.81..26374.83 rows=27673 width=4) (actual time=8.599..15.590
rows=26583 loops=1)
   Recheck Cond: (city_id = 4535)
   ->  Bitmap Index Scan on
plugins_guide_address_city_id  (cost=0.00..361.89 rows=27673 width=0)
(actual time=7.856..7.856 rows=26583 loops=1)
 Index Cond: (city_id = 4535)
 ->  Index Scan using
plugins_plugin_addr_address_id on plugins_plugin_addr
(cost=0.00..1.63 rows=14 width=8) (actual time=0.002..0.003 rows=2
loops=26583)
   Index Cond: (plugins_plugin_addr.address_id
= plugins_guide_address.id)
 Total runtime: 1162.193 ms
(15 rows)

> (2) Temporarily take that top index out of consideration
It works nicely! The query takes about 0.6 seconds, as expected!

explain analyze SELECT core_object.id from "core_object" INNER JOIN
"plugins_plugin_addr" ON ("core_object"."id" =
"plugins_plugin_addr"."oid_id") INNER JOIN "plugins_guide_address" ON
("plugins_plugin_addr"."address_id" = "plugins_guide_address"."id")
WHERE "plugins_guide_address"."city_id" = 4535  ORDER BY
"core_object"."id" DESC;

 Limit  (cost=112274.36..112275.66 rows=5 width=4) (actual
time=200.758..637.039 rows=5 loops=1)
   ->  Merge Join  (cost=112274.36..213042.22 rows=386544 width=4)
(actual time=200.754..637.035 rows=5 loops=1)
 Merge Cond: (core_object.id = plugins_plugin_addr.oid_id)
 ->  Index Scan Backward using core_object_pkey on
core_object  (cost=0.00..86916.44 rows=3221307 width=4) (actual
time=0.115..302.512 rows=1374693 loops=1)
 ->  Sort  (cost=112274.36..113240.72 rows=386544 width=4)
(actual time=154.635..154.635 rows=5 loops=1)
   Sort Key: plugins_plugin_addr.oid_id
   Sort Method:  quicksort  Memory: 3594kB
   ->  Nested Loop  (cost=368.81..76402.55 rows=386544
width=4) (actual time=9.522..126.206 rows=43898 loops=1)
 ->  Bitmap Heap Scan on plugins_guide_address
(cost=368.81..26374.83 rows=27673 width=4) (actual time=9.367..21.311
rows=26583 loops=1)
   Recheck Cond: (city_id = 4535)
   ->  Bitmap Index Scan on
plugins_guide_address_city_id  (cost=0.00..361.89 rows=27673 width=0)
(actual time=8.577..8.577 rows=26583 loops=1)
 Index Cond: (city_id = 4535)
 ->  Index Scan using
plugins_plugin_addr_address_id on plugins_plugin_addr
(cost=0.00..1.63 rows=14 width=8) (actual time=0.002..0.003 rows=2
loops=26583)
   Index Cond: (plugins_plugin_addr.address_id
= plugins_guide_address.id)
 Total runtime: 637.620 ms
(15 rows)


> (3) Try it like this (untested, so you may need to fix it up):
explain analyze
SELECT core_object.id
  from (SELECT id, city_id FROM "plugins_guide_address"
  WHERE "city_id" = 4535) "plugins_guide_address"
  JOIN "plugins_plugin_addr"
ON

[PERFORM] stats collector suddenly causing lots of IO

2010-04-20 Thread Chris
I have a lot of CentOS servers which are running postgres.  Postgres isn't used
that heavily on any of them, but lately the stats collector process keeps
causing tons of IO load.  It seems to happen only on servers with CentOS 5.
The versions of postgres that are running are:

8.1.18
8.2.6
8.3.1
8.3.5
8.3.6
8.3.7
8.3.8
8.3.9
8.4.2
8.4.3

I've tried turning off everything under RUNTIME STATISTICS in postgresql.conf
except track_counts (since autovacuum says it needs it), but it seems to have
little effect on the IO caused by the stats collector.

Has anyone else noticed this?  Does anyone know of recent kernel changes
that could cause this?  Since we haven't touched
postgres on these boxes since they were set up initially, I'm a bit baffled as
to what might be causing the problem, and why I can't make it go away short of
kill -STOP.

Any suggestions would be much appreciated!

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[JDBC] SOLVED ... Re: Getting rid of a cursor from JDBC .... Re: [PERFORM] Re: HELP: How to tame the 8.3.x JDBC driver with a biq guery result set

2010-04-20 Thread Kevin Grittner
Dave Crooke  wrote:
 
> I'd consider the fact that ResultSet.close() does not release the
> implicit cursor to be something of a bug
 
What's your reasoning on that?  The definitions of cursors in the
spec, if memory serves, allow a cursor to be closed and re-opened;
why would this be treated differently?
 
-Kevin

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] Dbt2 with postgres issues on CentOS-5.3

2010-04-20 Thread MUHAMMAD ASIF

Hi,
 
I am using dbt2 on 64-bit Linux (CentOS release 5.3 (Final)). I have compiled the
latest postgresql-8.4.3 code on the machine and run dbt2 against it. I am a
little confused about the results. I ran dbt2 with the following configuration:
 
DBT2 Options :
WAREHOUSES=75
DB_CONNECTIONS=20
REGRESS_DURATION=1 #HOURS
REGRESS_DURATION_SEC=$((60*60*$REGRESS_DURATION))
 
DBT2 Command :
./dbt2-pgsql-create-db
./dbt2-pgsql-build-db -d $DBDATA -g -r -w $WAREHOUSES
./dbt2-run-workload -a pgsql -c $DB_CONNECTIONS -d 
$REGRESS_DURATION_SEC -w $WAREHOUSES -o $OUTPUT_DIR
./dbt2-pgsql-stop-db
 
I am not able to understand the sar-related graphs. The iostat, mpstat and vmstat
results are similar, but the sar results are strange. I tried to explore the dbt2
source code to find out how the graphs are drawn and why the sar results
differ. DBT2.pm:189 reads sar.out, parses it, and assumes 1 minute of elapsed
time between each record, i.e.

ActivePerl-5.10.1.1007-i686-linux-glibc-2.3.2-291969/inst/lib/Test/Parser/Sar.pm:266
elapsed_time is a counter that is incremented by 1 for every record
(++$elapsed_time)
 
sar.out shows the following results:
 
08:54:47 PM   cswch/s
..
..
09:21:47 PM   1809.46
09:22:47 PM   2251.26
09:23:47 PM   2151.27
09:24:47 PM   2217.33
09:27:01 PM   2189.83
09:29:02 PM   2155.13
09:30:02 PM   2048.04
09:32:19 PM   2033.16
09:34:20 PM   2032.47
09:36:20 PM   2006.02
09:37:20 PM   1966.03
09:39:35 PM   1974.77
09:41:37 PM   1973.88
09:42:37 PM   1960.65
09:44:56 PM   1993.15
09:45:56 PM   1989.46
09:47:57 PM   2430.77
09:48:57 PM   2416.64
09:51:08 PM   2330.02
09:53:19 PM   1738.46
09:54:19 PM   2182.27
09:55:19 PM   2221.31
09:56:19 PM   2131.81
09:57:19 PM   2183.47
09:59:31 PM   2156.70
10:01:32 PM   2114.38
10:02:32 PM   2030.05
10:04:51 PM   2059.56
10:05:51 PM   1995.06
10:08:09 PM   1355.43
10:09:09 PM    218.73
10:10:09 PM    175.13
10:11:09 PM    168.30
10:12:09 PM    168.58
..
..
This shows that the interval between sar records is not always 1 minute; it varies.
Is this expected, or is there a bug in the CentOS default sar package
(sysstat-7.0.2-3.el5)? I tried the latest package, sysstat-9.0.6.1, but it
behaves the same. The sysstat utilities depend on procfs; is there something
wrong with the system? Thanks.
 
Best Regards,
Asif Naeem
  
_
Your E-mail and More On-the-Go. Get Windows Live Hotmail Free.
https://signup.live.com/signup.aspx?id=60969