Re: [PERFORM] Intel 710 pgbench write latencies

2011-11-03 Thread Yeb Havinga

On 2011-11-02 22:08, Merlin Moncure wrote:

On Wed, Nov 2, 2011 at 3:45 PM, Yeb Havinga  wrote:

Intel latency graph at http://imgur.com/Hh3xI
Ocz latency graph at http://imgur.com/T09LG

curious: what were the pgbench results in terms of tps?

merlin


Both comparable near 10K tps.

-- Yeb


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Guide to PG's capabilities for inlining, predicate hoisting, flattening, etc?

2011-11-03 Thread Craig Ringer

On 11/03/2011 04:22 AM, Igor Neyman wrote:


Hints here we come :)


Pfft! No more than `VOLATILE' vs `STABLE' vs `IMMUTABLE'. It's a 
semantic difference, not just a performance hint.


That said, I'm not actually against performance hints if done sensibly.
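[For reference, the three categories Craig mentions are declared per function and license different planner behaviour; a minimal sketch, with invented table and sequence names:]

```sql
-- STABLE: same result for the same arguments within one statement,
-- so the planner may evaluate it once rather than per row.
CREATE FUNCTION get_setting(key text) RETURNS text AS $$
    SELECT value FROM app_config WHERE name = $1;
$$ LANGUAGE sql STABLE;

-- IMMUTABLE: a pure function; usable in index expressions and
-- foldable to a constant at plan time.
CREATE FUNCTION add_one(i integer) RETURNS integer AS $$
    SELECT $1 + 1;
$$ LANGUAGE sql IMMUTABLE;

-- VOLATILE (the default): may have side effects, so it must be
-- re-evaluated on every call.
CREATE FUNCTION next_ticket() RETURNS bigint AS $$
    SELECT nextval('ticket_seq');
$$ LANGUAGE sql VOLATILE;
```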

--
Craig Ringer




Re: [PERFORM] Guide to PG's capabilities for inlining, predicate hoisting, flattening, etc?

2011-11-03 Thread Igor Neyman
> -Original Message-
> From: Craig Ringer [mailto:ring...@ringerc.id.au]
> Sent: Thursday, November 03, 2011 5:07 AM
> To: Igor Neyman
> Cc: Robert Haas; Tom Lane; Jay Levitt;
pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Guide to PG's capabilities for inlining,
> predicate hoisting, flattening, etc?
> 
> On 11/03/2011 04:22 AM, Igor Neyman wrote:
> 
> That said, I'm not actually against performance hints if done
sensibly.
> 
> --
> Craig Ringer
> 


> ...sensibly
As it is with any other feature...

Igor Neyman



Re: [PERFORM] Intel 710 pgbench write latencies

2011-11-03 Thread Merlin Moncure
On Thu, Nov 3, 2011 at 4:38 AM, Yeb Havinga  wrote:
> On 2011-11-02 22:08, Merlin Moncure wrote:
>>
>> On Wed, Nov 2, 2011 at 3:45 PM, Yeb Havinga  wrote:
>>>
>>> Intel latency graph at http://imgur.com/Hh3xI
>>> Ocz latency graph at http://imgur.com/T09LG
>>
>> curious: what were the pgbench results in terms of tps?
>>
>> merlin
>
> Both comparable near 10K tps.

Well, and this is just me, I'd probably stick with the 710, but that's
based on my understanding of things on paper, not real world
experience with that drive.  The vertex 2 is definitely a more
reliable performer, but it looks like the results in your graph are
mostly skewed by a few outlying data points.  If the 710 has the
write durability that Intel is advertising, then ISTM that is one less
thing to think about.  My one experience with the vertex 2 pro was
that it was certainly fast but burned out just shy of the 10k write
cycle point after all the numbers were crunched.  This is just too
close for comfort on databases that are doing a lot of writing.

Note that either drive is giving you the performance of somewhere
between a 40 and 60 drive tray of 15k drives configured in a raid 10
(once you overflow the write cache on the raid controller(s)).  It
would take a pretty impressive workload indeed to become I/O bound
with either one of these drives...high scale pgbench is fairly
pathological.

merlin



Re: [PERFORM] Poor performance on a simple join

2011-11-03 Thread Shaun Thomas

On 11/02/2011 09:04 PM, Scott Marlowe wrote:


Take a look here:
http://tech.jonathangardner.net/wiki/PostgreSQL/Materialized_Views


Not sure materialized views are the approach I would take here. We 
actually see a lot of these kinds of queries with giant result sets, 
here. If they actually need all 12k rows for every execution (not 
likely, but possible) and 300ms is just too darn slow for that, there's 
always client-side caching.


We have a couple queries that we need to keep cached at all times. Stock 
quotes and positions, for example, have to be available in sub-ms time 
thanks to the level of parallelism involved. One query in particular 
effectively grabs the entire set of current positions and every 
optimization in the book brings its execution time down to about two 
seconds. We can't have thousands of clients executing that all the time, 
so it gets shoved into a local memcached on each webserver.


But if he's getting back 12k rows even *after* specifying a contact ID, 
a materialized view is still going to return 12k rows, and still has to 
perform at least an index scan unless he creates an MV for each contact 
ID (eww). This doesn't really look like fact-table territory either.


I think the real question is: Why isn't 300ms fast enough? Is it because 
the client executes this repeatedly? If so, what changes often enough it 
must fetch all 12k rows every single time? Would implementing a 
timestamp and only grabbing newer rows work better? Is it because of 
several connections each running it in parallel? Why not cache a local 
copy and refresh periodically? Do they actually need all 12k rows every 
time? Maybe some LIMIT and OFFSET clauses are in order.
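
[The timestamp approach suggested above could look like this; the table and column names are invented for illustration:]

```sql
-- The client remembers the newest modified_at it has seen and asks
-- only for rows changed since then, instead of re-fetching all 12k rows.
SELECT id, contact_id, payload, modified_at
FROM   some_results
WHERE  contact_id = 42
AND    modified_at > '2011-11-03 12:00:00+00'   -- last value the client saw
ORDER  BY modified_at;
```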


There's very little a human can do with 12k results. An automated tool 
shouldn't be grabbing them either, unless they're actually changing with 
every execution. If they're not, the tool really wants items since the 
last change, or it's doing too much work. If it were a report, 300ms is 
nothing compared to most reporting queries which can run for several 
minutes.


I think we're missing a lot of details here.

--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
312-676-8870
stho...@peak6.com

__

See http://www.peak6.com/email-disclaimer/ for terms and conditions related to 
this email



Re: [PERFORM] Intel 710 pgbench write latencies

2011-11-03 Thread Shaun Thomas

On 11/03/2011 04:38 AM, Yeb Havinga wrote:


Both comparable near 10K tps.


That's another thing I was wondering about. Why are we talking about 
Vertex 2 Pro's, anyway? The Vertex 3 Pros post much better results and 
are still capacitor-backed.


--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
312-676-8870
stho...@peak6.com




Re: [PERFORM] Intel 710 pgbench write latencies

2011-11-03 Thread Yeb Havinga

On 2011-11-03 15:31, Shaun Thomas wrote:

On 11/03/2011 04:38 AM, Yeb Havinga wrote:


Both comparable near 10K tps.


That's another thing I was wondering about. Why are we talking about 
Vertex 2 Pro's, anyway? The Vertex 3 Pros post much better results and 
are still capacitor-backed.




Not for sale yet...

-- Yeb




Re: [PERFORM] function slower than the same code in an sql file

2011-11-03 Thread Robert Haas
On Fri, Oct 28, 2011 at 9:39 AM, CS DBA  wrote:
> No parameters,  one of them looks like this:
>
> [ code snippet ]

It's hard to believe this is the real code, because SELECT without
INTO will bomb out inside a PL/pgsql function, won't it?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [PERFORM] SSL encryption makes bytea transfer slow

2011-11-03 Thread Albe Laurenz
Merlin Moncure wrote:
>>>> We selected a 30MB bytea with psql connected with
>>>> "-h localhost" and found that it makes a huge
>>>> difference whether we have SSL encryption on or off.
>>>>
>>>> Without SSL the SELECT finished in about a second,
>>>> with SSL it took over 23 seconds (measured with
>>>> \timing in psql).
>>>> During that time, the CPU is 100% busy.
>>>> All data are cached in memory.
>>>>
>>>> Is this difference as expected?

>>> I tried to reproduce that, but only saw about 4x difference in the
>>> timing, not 23x.

>>> oprofile suggests that all that overhead is coming from compression.
>>> Apparently SSL does compression automatically. Oprofile report of the
>>> above test case with SSL enabled:
[...]

>> Funny, I cannot see any calls to libz. On my system (RHEL 3, PostgreSQL
>> 8.4.8, openssl 0.9.7a) the oprofile reports of the server process look
>> like this:

>> samples  %           symbol name      image name
>> 5326     77.6611     (no symbol)      /lib/libcrypto.so.0.9.7a

> that's a pretty ancient crypto you got there...it may not compress by
> default.  Heikki's test data will compress super well which would
> totally skew performance testing to libz since the amount of data
> actually encrypted will be fairly tiny.  real world high entropy cases
often show crypto as the worst offender in my experience.

I experimented some more on a recent system (RHEL6, OpenSSL 1.0.0-fips),
and it is as you say. Disabling OpenSSL compression in the source (which
is possible since OpenSSL 1.0.0) does not give me any performance
improvement.

Seems you pretty much have to live with at most 1/4 of the performance
if you want to SELECT large images using SSL.

Yours,
Laurenz Albe



Re: [PERFORM] SSL encryption makes bytea transfer slow

2011-11-03 Thread k...@rice.edu
On Thu, Nov 03, 2011 at 03:48:11PM +0100, Albe Laurenz wrote:
> 
> I experimented some more on a recent system (RHEL6, OpenSSL 1.0.0-fips),
> and it is as you say. Disabling OpenSSL compression in the source (which
> is possible since OpenSSL 1.0.0) does not give me any performance
> improvement.
> 
> Seems you pretty much have to live with at most 1/4 of the performance
> if you want to SELECT large images using SSL.
> 
> Yours,
> Laurenz Albe
> 

Have you tried different ciphers? RC4 is much lighter weight CPU-wise
than the typically negotiated cipher. AES128 is also not bad if you
have the newer Intel chips with the hardware encryption support. Just
another thing to check.
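
[Server-side, PostgreSQL exposes cipher preference through the ssl_ciphers setting; a sketch only -- verify the exact suite names with "openssl ciphers" on your platform:]

```
# postgresql.conf -- restrict/reorder the cipher suites offered to clients
ssl = on
ssl_ciphers = 'RC4-SHA:AES128-SHA:HIGH:!aNULL'
```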

Regards,
Ken



Re: [PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!

2011-11-03 Thread Mario Weilguni

Am 02.11.2011 08:12, schrieb Mohamed Hashim:

Dear All

Thanks for your suggestions & replies.

The below are the sample query which i put for particular one bill_id

EXPLAIN ANALYZE SELECT abd.bill_no as 
bill_no,to_char(abd.bill_date,'dd/mm/') AS date,mp.product_desc as 
product_desc,std.quantity,std.area,rip.price AS rate

FROM acc_bill_items_106 abi
JOIN acc_bill_details_106 abd ON abd.bill_id=abi.bill_id
JOIN stk_source ss ON  ss.source_detail[1]=1 and 
ss.source_detail[2]=abi.item_id

JOIN stock_transaction_detail_106 std ON std.stock_id=ss.stock_id
JOIN stock_details_106 sd106 ON sd106.stock_id=std.stock_id
JOIN master_product_106 mp ON mp.product_id= sd106.product_id
JOIN receipt_item_price_106 rip ON rip.receipt_item_id=abi.item_id
WHERE abi.bill_id=12680;


First I would try this:
explain analyze select * from stk_source where source_detail[1] = 1;
explain analyze select * from stk_source where source_detail[2] = 12356;

Both times you'll get sequential scans, and that's the root of the 
problem. Oh, you mentioned that you use partitioning, but there seems to 
be no condition for that.


You should really rethink your database schema, at least try to pull out 
all indexable fields out of that int[] into columns, and use indices on 
those fields.
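
[A sketch of that refactoring; the new column and index names are invented:]

```sql
-- Pull the searched array elements into real columns so they can be
-- indexed:
ALTER TABLE stk_source
    ADD COLUMN source_type integer,
    ADD COLUMN item_id     integer;

UPDATE stk_source
   SET source_type = source_detail[1],
       item_id     = source_detail[2];

CREATE INDEX stk_source_type_item_idx
    ON stk_source (source_type, item_id);

-- The join condition then becomes indexable:
--   JOIN stk_source ss ON ss.source_type = 1 AND ss.item_id = abi.item_id
```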


Regards
Mario


Re: [PERFORM] function slower than the same code in an sql file

2011-11-03 Thread Rodrigo Gonzalez

El 03/11/11 11:42, Robert Haas escribió:

On Fri, Oct 28, 2011 at 9:39 AM, CS DBA  wrote:

No parameters,  one of them looks like this:

[ code snippet ]

It's hard to believe this is the real code, because SELECT without
INTO will bomb out inside a PL/pgsql function, won't it?


But he's using CREATE TABLE xyz_view_m AS

So it seems correct to me

Regards

Rodrigo


Re: [PERFORM] Query running a lot faster with enable_nestloop=false

2011-11-03 Thread Robert Haas
On Tue, Oct 25, 2011 at 5:09 AM, Mohanaraj Gopala Krishnan
 wrote:
> I have a query that runs a lot slower (~5 minutes) when I run it with
> the default enable_nestloop=true and enable_nestloop=false (~10 secs).
> The actual query is available here http://pastie.org/2754424 . It is a
> reporting query with many joins as the database is mainly used for
> transaction processing.
>
> Explain analyse result for both cases:
>
> Machine A nestloop=true - http://explain.depesz.com/s/nkj0 (~5 minutes)
> Machine A nestloop=false - http://explain.depesz.com/s/wBM (~10 secs)

A good start might be to refactor this:

Seq Scan on retailer_categories retailer_category_leaf_nodes
(cost=0.00..18.52 rows=1 width=16) (actual time=0.016..0.392 rows=194
loops=1)
Filter: ((tree_right - tree_left) = 1)

And this:

Seq Scan on product_categories product_category_leaf_nodes
(cost=0.00..148.22 rows=2 width=32) (actual time=0.031..1.109 rows=383
loops=1)
Filter: ((tree_right - tree_left) = 1)

The query planner really has no idea what selectivity to assign to
that condition, and so it's basically guessing, and it's way off.  You
could probably improve the estimate a lot by adding a column that
stores the values of tree_right - tree_left and is updated manually or
by triggers as you insert and update data.  Then you could just check
tree_left_right_difference = 1, which should get a much more accurate
estimate, and hopefully therefore a better plan.
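
[A trigger-maintained version of that column might be sketched like this; the names are invented:]

```sql
-- Materialize the difference so the planner can keep statistics on it.
ALTER TABLE product_categories ADD COLUMN tree_left_right_difference integer;

UPDATE product_categories
   SET tree_left_right_difference = tree_right - tree_left;

CREATE FUNCTION set_tree_diff() RETURNS trigger AS $$
BEGIN
    NEW.tree_left_right_difference := NEW.tree_right - NEW.tree_left;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER product_categories_tree_diff
    BEFORE INSERT OR UPDATE ON product_categories
    FOR EACH ROW EXECUTE PROCEDURE set_tree_diff();

ANALYZE product_categories;
-- Queries then filter on: WHERE tree_left_right_difference = 1
```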

You've also got a fairly large estimation error here:

Index Scan using invoices_create_date_idx on invoices (cost=0.00..8.28
rows=1 width=4) (actual time=0.055..0.305 rows=109 loops=1)
Index Cond: ((create_date >= '2011-09-15'::date) AND (create_date
<= '2011-09-15'::date))
Filter: (status = 128)

Apparently, status 128 is considerably more common among rows in that
date range than it is overall.  Unfortunately, it's not so easy to fix
this kind of estimation error, unless you can refactor your schema to
avoid needing to filter on both create_date and status at the same
time.

It might be worth using temporary tables here - factor out sections of
the query that are referenced multiple times, like the join between
sales_order_items and invoices, and create a temporary table.  ANALYZE
it, and then use it to run the main query.
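
[A sketch of the temporary-table refactoring; the join and column names are guessed from the plan, not taken from the actual schema:]

```sql
CREATE TEMP TABLE inv_items AS
SELECT soi.*, i.create_date, i.status
FROM   sales_order_items soi
JOIN   invoices i ON i.id = soi.invoice_id
WHERE  i.create_date BETWEEN '2011-09-15' AND '2011-09-15'
AND    i.status = 128;

ANALYZE inv_items;   -- give the planner real row counts
-- ...then join against inv_items in the main reporting query.
```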

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [PERFORM] function slower than the same code in an sql file

2011-11-03 Thread Robert Haas
On Thu, Nov 3, 2011 at 11:31 AM, Rodrigo Gonzalez
 wrote:
> El 03/11/11 11:42, Robert Haas escribió:
>
> On Fri, Oct 28, 2011 at 9:39 AM, CS DBA  wrote:
>
> No parameters,  one of them looks like this:
>
> [ code snippet ]
>
> It's hard to believe this is the real code, because SELECT without
> INTO will bomb out inside a PL/pgsql function, won't it?
>
> But he's using CREATE TABLE xyz_view_m AS
>
> So it seems correct to me

Oh, right, I missed that.

That seems pretty mysterious then.  But is it possible the function is
getting called more times than it should?  I notice that it's set up
as a trigger; is it FOR EACH ROW when it should be a statement-level
trigger or something like that?  Maybe run EXPLAIN ANALYZE on the
query that's invoking the trigger to get some more detail on what's
going on?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



[PERFORM] Optimization required for multiple insertions in PostgreSQL

2011-11-03 Thread siva palanisamy
I basically have 3 tables. One being the core table and the other 2 depend
on the 1st. I have the requirement to add upto 7 records in the tables.
I do have constraints (primary & foreign keys, index, unique etc) set for
the tables. I can't go for bulk import (using COPY command) as there is no
standard .csv file in requirement, and the mapping is explicitly required
plus few validations are externally applied in a C based programming file.
Each record details (upto 7) will be passed from .pgc (an ECPG based C
Programming file) to the PostgreSQL file. It takes less time for the 1st few
records, but performance turns bad for the later records! The result
is very sad: it takes days to cover upto 2! What performance
measures could I take here? Please guide me


Re: [PERFORM] Intel 710 pgbench write latencies

2011-11-03 Thread Yeb Havinga

On 2011-11-02 16:06, Magnus Hagander wrote:

On Wed, Nov 2, 2011 at 16:04, Yeb Havinga  wrote:

On 2011-11-02 15:06, Kevin Grittner wrote:

Yeb Havinga wrote:


I'm now contemplating not using the 710 at all. Why should I not
buy two 6Gbps SSDs without supercap (e.g. Intel 510 and OCZ Vertex
3 Max IOPS) with a IO controller+BBU?

Wouldn't the data be subject to loss between the time the IO
controller writes to the SSD and the time it makes it from buffers
to flash RAM?

Good question. My guess would be no, if the raid controller does
'write-throughs' on the attached disks, and the SSD's don't lie about when
they've written to RAM.

Doesn't most SSDs without supercaps lie about the writes, though?



I happened to have a Vertex 3, no supercap, available to test this with 
diskchecker. On an ext4 filesystem (just mounted with noatime, not
barriers=off), this happened:


# /root/diskchecker.pl -s 192.168.73.1 verify testfile
 verifying: 0.00%
 verifying: 30.67%
 verifying: 78.97%
 verifying: 100.00%
Total errors: 0

So I guess that's about as much as I can test without actually hooking 
it behind a hardware controller and test that. I will soon test the 
3ware 9750 with Vertex 3 and Intel 510 - both in the 3ware's ssd 
compatibility list.


More info from testing software raid 1:
- with lvm mirroring, discards / trim go through to the disks. This is 
where the Intel is fast enough, but the vertex 2 pro is busy for ~ 10 
seconds.


-- Yeb




Re: [PERFORM] Optimization required for multiple insertions in PostgreSQL

2011-11-03 Thread Kevin Grittner
siva palanisamy  wrote:
 
> I basically have 3 tables. One being the core table and the other
> 2 depend on the 1st. I have the requirement to add upto 7
> records in the tables. I do have constraints (primary & foreign
> keys, index, unique etc) set for the tables. I can't go for bulk
> import (using COPY command) as there is no standard .csv file in
> requirement, and the mapping is explicitly required plus few
> validations are externally applied in a C based programming file.
> Each record details (upto 7) will be passed from .pgc (an ECPG
> based C Programming file) to postgresql file. It takes less time
> for the 1st few records and the performance is turning bad to the
> latter records! The result is very sad that it takes days to cover
> upto 2! What are the performance measures could I step in into
> this? Please guide me
 
There's an awful lot you're not telling us, like what version of
PostgreSQL you're using, what your hardware looks like, how many
rows you're trying to insert per database transaction, what resource
looks like on the machine when it's running slow, what the specific
slow queries are and what their execution plans look like, etc.  I
could make a lot of guesses and take a shot in the dark with some
generic advice, but you would be better served by the more specific
advice you will get if you provide more detail.
 
Please review this page (and its links) and post again:
 
http://wiki.postgresql.org/wiki/SlowQueryQuestions
 
-Kevin



Re: [PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!

2011-11-03 Thread Tomas Vondra
On 3 Listopad 2011, 16:02, Mario Weilguni wrote:
> Am 02.11.2011 08:12, schrieb Mohamed Hashim:
>> Dear All
>>
>> Thanks for your suggestions & replies.
>>
>> The below are the sample query which i put for particular one bill_id
>>
>> EXPLAIN ANALYZE SELECT abd.bill_no as
>> bill_no,to_char(abd.bill_date,'dd/mm/') AS date,mp.product_desc as
>> product_desc,std.quantity,std.area,rip.price AS rate
>> FROM acc_bill_items_106 abi
>> JOIN acc_bill_details_106 abd ON abd.bill_id=abi.bill_id
>> JOIN stk_source ss ON  ss.source_detail[1]=1 and
>> ss.source_detail[2]=abi.item_id
>> JOIN stock_transaction_detail_106 std ON std.stock_id=ss.stock_id
>> JOIN stock_details_106 sd106 ON sd106.stock_id=std.stock_id
>> JOIN master_product_106 mp ON mp.product_id= sd106.product_id
>> JOIN receipt_item_price_106 rip ON rip.receipt_item_id=abi.item_id
>> WHERE abi.bill_id=12680;
>
> First I would try this:
> explain analyze select * from stk_source where source_detail[1] = 1;
> explain analyze select * from stk_source where source_detail[2] = 12356;
>
> Both times you'll get sequential scans, and that's the root of the
> problem. Oh, you mentioned that you use partitioning, but there seems to
> be no condition for that.
>
> You should really rethink your database schema, at least try to pull out
> all indexable fields out of that int[] into columns, and use indices on
> those fields.

No doubt about that, querying tables using conditions on array columns is
not the best direction in most cases, especially when those tables are
huge.

Still, the interesting part here is that the OP claims this worked just
fine in the older version and after an upgrade the performance suddenly
dropped. This could be caused by many things, and we're just guessing
because we don't have any plans from the old version.

Tomas




Re: [PERFORM] Optimization required for multiple insertions in PostgreSQL

2011-11-03 Thread Tomas Vondra
On 3 Listopad 2011, 16:52, siva palanisamy wrote:
> I basically have 3 tables. One being the core table and the other 2 depend
> on the 1st. I have the requirement to add upto 7 records in the
> tables.
> I do have constraints (primary & foreign keys, index, unique etc) set for
> the tables. I can't go for bulk import (using COPY command) as there is no
> standard .csv file in requirement, and the mapping is explicitly required
> plus few validations are externally applied in a C based programming file.
> Each record details (upto 7) will be passed from .pgc (an ECPG based C
> Programming file) to postgresql file. It takes less time for the 1st few
> records and the performance is turning bad to the latter records! The
> result is very sad that it takes days to cover upto 2! What are the
> performance measures could I step in into this? Please guide me

As Kevin already pointed out, this overall and very vague description is
not sufficient. We need to know at least this for starters

- version of PostgreSQL
- environment (what OS, what hardware - CPU, RAM, drives)
- basic PostgreSQL config values (shared buffers, checkpoint segments)
- structure of the tables, indexes etc.
- output of vmstat/iostat collected when the inserts are slow

And BTW the fact that you're not using a standard .csv file does not mean
you can't use COPY. You can either transform the file to CSV or create it
on the fly.
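
[For instance, rows validated in the C code can still be streamed in batches rather than inserted one by one; table and column names here are invented:]

```sql
-- One COPY per batch instead of thousands of single-row INSERTs:
BEGIN;
COPY core_table (id, name, created) FROM STDIN WITH CSV;
-- ...the client writes the validated rows as CSV lines here...
COMMIT;
```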

Tomas




Re: [PERFORM] SSL encryption makes bytea transfer slow

2011-11-03 Thread Marti Raudsepp
On Fri, Oct 28, 2011 at 14:02, Albe Laurenz  wrote:
> Without SSL the SELECT finished in about a second,
> with SSL it took over 23 seconds (measured with
> \timing in psql).

When you query with psql, it requests columns in text format. Since
bytea hex-encodes its value if output is text, this means it's
transmitting 60 MB for a 30 MB bytea value.
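
[The doubling is easy to see server-side; the table and column names here are invented:]

```sql
-- Text-format bytea output is hex encoded, so it is twice the size
-- of the raw data (plus a "\x" prefix on the wire):
SELECT octet_length(data)           AS raw_bytes,
       length(encode(data, 'hex')) AS hex_chars   -- = 2 * raw_bytes
FROM   images
WHERE  id = 1;
```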

If you could make sure that your app is requesting binary output, then
you could cut 50% off this time. As others mentioned, most of the
overhead is in SSL compression (not encryption), which can be
disabled, but is not very easy to do.

But 23 seconds for 60 MB is still *very* slow, so something else could
be going wrong. What kind of CPU is this?

On Thu, Nov 3, 2011 at 16:48, Albe Laurenz  wrote:
> Disabling OpenSSL compression in the source (which
> is possible since OpenSSL 1.0.0) does not give me any performance
> improvement.

If it doesn't give you any performance improvement then you haven't
disabled compression. Modern CPUs can easily saturate 1 GbitE with
AES256-encrypted connections. Compression is usually the bottleneck,
at 20-30 MB/s.

Regards,
Marti



Re: [PERFORM] Guide to PG's capabilities for inlining, predicate hoisting, flattening, etc?

2011-11-03 Thread Justin Pitts
On Wed, Nov 2, 2011 at 11:13 AM, Robert Haas  wrote:

> […]  Perhaps we could let people say
> something like WITH x AS FENCE (...) when they want the fencing
> behavior, and otherwise assume they don't (but give it to them anyway
> if there's a data-modifying operation in there).
>

I would love to be able to test some of our CTE queries in such a scenario.

None of them do data modification. How hard would it be to patch my own
build to disable the fence unilaterally for testing purposes?


Re: [PERFORM] Guide to PG's capabilities for inlining, predicate hoisting, flattening, etc?

2011-11-03 Thread Gavin Flower

On 03/11/11 09:22, Igor Neyman wrote:



-Original Message-
From: Robert Haas [mailto:robertmh...@gmail.com]
Sent: Wednesday, November 02, 2011 11:13 AM
To: Tom Lane
Cc: Jay Levitt; pgsql-performance@postgresql.org
Subject: Re: Guide to PG's capabilities for inlining, predicate
hoisting, flattening, etc?
...
...
Perhaps we could let people say
something like WITH x AS FENCE (...) when they want the fencing
behavior, and otherwise assume they don't (but give it to them anyway
if there's a data-modifying operation in there).



--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Hints here we come :)


Is that a hint???

[Sorry, my perverse sense of humour kicked in]

I too would like CTE's to take part in optimisation - as I don't like 
the mass slaughter of kittens, but I still want to pander to my speed 
addiction.


So I think that having some sort of fence mechanism would be good.


Cheers,
Gavin





[PERFORM] Predicates not getting pushed into SQL function?

2011-11-03 Thread Jay Levitt
I'm confused.  I have a now-trivial SQL function that, unrestricted, would 
scan my whole users table.  When I paste the body of the function as a 
subquery and restrict it to one row, it only produces one row.  When I paste 
the body of the function into a view and restrict it to one row, it produces 
one row.  But when I put it in a SQL function... it scans the whole users 
table and then throws the other rows away.


I thought SQL functions were generally inline-able, push-down-able, etc.  As 
a workaround, I can put my WHERE clause inside the function and pass it 
parameters, but that feels ugly, and it won't help for things like 
restricting via JOINs. The real function needs parameters, so I can't use it 
as a view.  Are there better workarounds?


I suspect the problem is (something like) the planner doesn't realize the 
function will produce a variable number of rows; I can specify COST or ROWS, 
but they're both fixed values.


Pretty-printed function and explain analyze results:

https://gist.github.com/1336963

In ASCII for web-haters and posterity:

-- THE OVERLY SIMPLIFIED FUNCTION

create or replace function matcher()
returns table(user_id int, match int) as $$

select o.user_id, 1 as match
from (
  select u.id as user_id, u.gender
  from users as u
) as o
cross join
(
  select u.id as user_id, u.gender
  from users as u
  where u.id = 1
) as my;

$$ language sql stable;

-- WHEN I CALL IT AS A FUNCTION

select * from matcher() where user_id = 2;

LOG:  duration: 1.242 ms  plan:
Query Text:

select o.user_id, 1 as match
from (
  select u.id as user_id, u.gender
  from users as u
) as o
cross join
(
  select u.id as user_id, u.gender
  from users as u
  where u.id = 1
) as my;


Nested Loop  (cost=0.00..118.39 rows=1656 width=4) (actual 
time=0.022..0.888 rows=1613 loops=1)

  Output: u.id, 1
  ->  Index Scan using users_pkey on public.users u  (cost=0.00..8.27 
rows=1 width=0) (actual time=0.013..0.015 rows=1 loops=1)

Index Cond: (u.id = 1)
  ->  Seq Scan on public.users u  (cost=0.00..93.56 rows=1656 width=4) 
(actual time=0.004..0.479 rows=1613 loops=1)

Output: u.id
CONTEXT:  SQL function "matcher" statement 1
LOG:  duration: 1.951 ms  plan:
Query Text: select * from matcher() where user_id = 2;
Function Scan on public.matcher  (cost=0.25..12.75 rows=5 width=8) 
(actual time=1.687..1.940 rows=1 loops=1)

  Output: user_id, match
  Filter: (matcher.user_id = 2)

-- WHEN I CALL IT AS A SUBQUERY

select * from
(
  select o.user_id, 1 as match
  from (
select u.id as user_id, u.gender
from users as u
  ) as o
  cross join
  (
select u.id as user_id, u.gender
from users as u
where u.id = 1
  ) as my
) as matcher
where user_id = 2;

LOG:  duration: 0.044 ms  plan:
Query Text: select * from
(
  select o.user_id, 1 as match
  from (
select u.id as user_id, u.gender
from users as u
  ) as o
  cross join
  (
select u.id as user_id, u.gender
from users as u
where u.id = 1
  ) as my
) as matcher
where user_id = 2;
Nested Loop  (cost=0.00..16.55 rows=1 width=4) (actual 
time=0.028..0.031 rows=1 loops=1)

  Output: u.id, 1
  ->  Index Scan using users_pkey on public.users u  (cost=0.00..8.27 
rows=1 width=4) (actual time=0.021..0.022 rows=1 loops=1)

Output: u.id
Index Cond: (u.id = 2)
  ->  Index Scan using users_pkey on public.users u  (cost=0.00..8.27 
rows=1 width=0) (actual time=0.004..0.006 rows=1 loops=1)

Index Cond: (u.id = 1)

-- WHEN I CALL IT AS A VIEW

create view matchview as
select o.user_id, 1 as match
  from (
select u.id as user_id, u.gender
from users as u
  ) as o
  cross join
  (
select u.id as user_id, u.gender
from users as u
where u.id = 1
  ) as my;

select * from matchview where user_id = 2;


LOG:  duration: 0.044 ms  plan:
Query Text: select * from matchview where user_id = 2;
Nested Loop  (cost=0.00..16.55 rows=1 width=4) (actual 
time=0.028..0.031 rows=1 loops=1)

  Output: u.id, 1
  ->  Index Scan using users_pkey on public.users u  (cost=0.00..8.27 
rows=1 width=4) (actual time=0.021..0.022 rows=1 loops=1)

Output: u.id
Index Cond: (u.id = 2)
  ->  Index Scan using users_pkey on public.users u  (cost=0.00..8.27 
rows=1 width=0) (actual time=0.005..0.007 rows=1 loops=1)

Index Cond: (u.id = 1)


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] Blocking excessively in FOR UPDATE

2011-11-03 Thread Claudio Freire
Hi list,

I've been experiencing a weird performance issue lately.

I have a very simple (and usually very fast) query:

SELECT track_logs.id
FROM track_logs
WHERE track_logs.track_id =  AND track_logs.track_status_id =
1 AND track_logs.date >= now() - interval '1 hours'
FOR UPDATE  

Whose plan is:

"LockRows  (cost=0.00..26.73 rows=1 width=14)"
"  ->  Index Scan using idx_track_logs_track_id on track_logs
(cost=0.00..26.72 rows=1 width=14)"
"Index Cond: (track_id = )"
"Filter: ((track_status_id = 1) AND (date >= (now() -
'01:00:00'::interval)))"

The same query, without FOR UPDATE, takes just 68 milliseconds.

With the FOR UPDATE, it takes like half a minute or more to finish.

Now, I understand the for update part may be blocking on some other
transaction, and it's probably the case.
But I cannot figure out which transaction it would be. There *are*, in
fact, connections in  state, which makes me think
those would be the culprit. But for the life of me, I cannot make
sense of the pg_locks view, which shows all locks as granted:


PID    Relation     XID          TX           Mode              Granted  Start
14751  5551986                   154/4038460  AccessShareLock   Yes      2011-11-03 12:45:03.551516-05
14751  5526310                   154/4038460  RowShareLock      Yes      2011-11-03 12:45:03.551516-05
14751  5552008                   154/4038460  RowExclusiveLock  Yes      2011-11-03 12:45:03.551516-05
14751  5552020                   154/4038460  RowExclusiveLock  Yes      2011-11-03 12:45:03.551516-05
14751  5552008                   154/4038460  AccessShareLock   Yes      2011-11-03 12:45:03.551516-05
14751  5525296                   154/4038460  RowShareLock      Yes      2011-11-03 12:45:03.551516-05
14751  5525292                   154/4038460  RowShareLock      Yes      2011-11-03 12:45:03.551516-05
14751  5552019                   154/4038460  AccessShareLock   Yes      2011-11-03 12:45:03.551516-05
14751  5552019                   154/4038460  RowExclusiveLock  Yes      2011-11-03 12:45:03.551516-05
14751  5552020                   154/4038460  AccessShareLock   Yes      2011-11-03 12:45:03.551516-05
14751  5525292                   154/4038460  RowExclusiveLock  Yes      2011-11-03 12:45:03.551516-05
14751               154/4038460  154/4038460  ExclusiveLock     Yes      2011-11-03 12:45:03.551516-05
14751                            154/4038460  ExclusiveLock     Yes      2011-11-03 12:45:03.551516-05
14751  5526308                   154/4038460  AccessShareLock   Yes      2011-11-03 12:45:03.551516-05

Where should I look?
What other information should I provide?



Re: [PERFORM] Blocking excessively in FOR UPDATE

2011-11-03 Thread Claudio Freire
On Thu, Nov 3, 2011 at 2:51 PM, Claudio Freire  wrote:
> What other information should I provide?

Forgot all the usual details:

Server is postgresql 9.0.3, running in linux, quite loaded (load
average ~7), WAL on raid 1 2 spindles, data on raid 10 4 spindles, 16G
RAM.

Could it be high contention between the worker processes? (because of
the high load)



Re: [PERFORM] Optimization required for multiple insertions in PostgreSQL

2011-11-03 Thread Kevin Grittner
[Please keep the list copied.]

siva palanisamy  wrote:
 
> Could you pls guide me on how to minimize time consumption? I've
> postgresql 8.1.4; Linux OS.
 
Well, the first thing to do is to use a supported version of
PostgreSQL.  More recent releases perform better, for starters.
 
http://wiki.postgresql.org/wiki/PostgreSQL_Release_Support_Policy
 
Whichever major release you use, you should be up-to-date on bug
fixes, some of which are fixes for bugs which cause performance
problems:
 
http://www.postgresql.org/support/versioning
 
> I'm yet to check its RAM and other memory capacity but I
> guess it would've the necessary stuffs.
 
Knowing what hardware you have, and what your current PostgreSQL
configuration settings are, would allow us to suggest what you might
reconfigure to tune your database.
 
> My master table's schema is
> 
> CREATE TABLE contacts ( contact_id SERIAL PRIMARY KEY,
> contact_type INTEGER DEFAULT 0, display_name TEXT NOT NULL DEFAULT
> '', first_name TEXT DEFAULT '', last_name TEXT DEFAULT '',
> company_name TEXT DEFAULT '', last_updated TIMESTAMP NOT NULL
> DEFAULT current_timestamp, UNIQUE(display_name) ) WITHOUT OIDS;
 
Not that this is a performance issue, but you almost certainly will
expect the semantics provided by TIMESTAMP WITH TIME ZONE for your
last_updated column.  Just specifying TIMESTAMP is probably going to
give you an unpleasant surprise somewhere down the road.
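
As an illustrative sketch of that surprise (hypothetical table; assuming a
session whose time zone changes), the plain TIMESTAMP value is stored
without zone information and so silently loses meaning when clients in
different zones read it back:

```sql
-- Hypothetical illustration of the plain-TIMESTAMP pitfall: the stored
-- value does not carry a zone, so it cannot be rendered correctly later.
CREATE TABLE ts_demo (plain timestamp, with_tz timestamptz);

SET timezone = 'UTC';
INSERT INTO ts_demo VALUES (now(), now());

SET timezone = 'America/Chicago';
-- plain comes back as the bare UTC clock reading, now ambiguous;
-- with_tz is converted and displayed correctly in the new zone.
SELECT plain, with_tz FROM ts_demo;
```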
 
> I've a sql function that is called from a C program where
> parameters are being passed. It is replicated for the other 2
> tables as well. Totally, I've 3 tables.
 
Which table is the source of your slowness, and how do you know
that?
 
> FYI, database connection is opened for the 1st and closed
> only after the last record is attempted. Do you think these
> constraints take a lot of time?
 
The only constraints you've shown are PRIMARY KEY and UNIQUE.  It is
somewhat slower to add rows with those constraints in place than to
blast in data without the constraints and then add the constraints;
but I understand that if the data is not known to be clean and free
of duplicates, that's not possible.  That certainly doesn't account
for the timings you describe.
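
A minimal sketch of that load-then-constrain pattern (staging table and
input file are hypothetical, and it is only safe when the data is known
to be clean):

```sql
-- Bulk-load without constraints, then add them in one pass afterwards.
CREATE TABLE contacts_load (LIKE contacts INCLUDING DEFAULTS);

COPY contacts_load (display_name, first_name, last_name)
    FROM '/tmp/contacts.csv' WITH CSV;   -- hypothetical input file

ALTER TABLE contacts_load ADD PRIMARY KEY (contact_id);
ALTER TABLE contacts_load ADD UNIQUE (display_name);
```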
 
> taking days to complete 2 odd records are not encouraging!
 
I've seen PostgreSQL insert more rows than that per second, so it's
not like it is some inherent slowness of PostgreSQL.  There is
something you're doing with it that is that slow.  Getting onto a
modern version of PostgreSQL may help a lot, but most likely there's
something you're not telling us yet that is the thing that really
needs to change.
 
Just as one off-the-wall example of what *can* happen -- if someone
disabled autovacuum and had a function which did an update to all
rows in a table each time the function was called, they would see
performance like you describe.  How do I know, from what you've told
me, that you're *not* doing that?  Or one of a hundred other things
I could postulate?  (Hint, if you showed us your current PostgreSQL
settings I could probably have ruled this out.)
 
-Kevin




Re: [PERFORM] Predicates not getting pushed into SQL function?

2011-11-03 Thread Tom Lane
Jay Levitt  writes:
> I'm confused.  I have a now-trivial SQL function that, unrestricted, would 
> scan my whole users table.  When I paste the body of the function as a 
> subquery and restrict it to one row, it only produces one row.  When I paste 
> the body of the function into a view and restrict it to one row, it produces 
> one row.  But when I put it in a SQL function... it scans the whole users 
> table and then throws the other rows away.

> I thought SQL functions were generally inline-able, push-down-able, etc.

inline-able, yes, but if they're not inlined you don't get any such
thing as pushdown of external conditions into the function body.
A non-inlined function is a black box.

The interesting question here is why the function doesn't get inlined
into the calling query.  You got the obvious showstoppers: it has a
SETOF result, it's not volatile, nor strict.  The only other possibility
I can see offhand is that there's some sort of result datatype mismatch,
but you've not provided enough info to be sure about that.
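
As a sketch of those conditions (assuming the users table from upthread),
the same single-statement body is inlinable or not depending only on how
the function is declared:

```sql
-- Inlinable: a single SELECT, declared STABLE, simple result type.
CREATE FUNCTION active_ids() RETURNS SETOF int AS $$
    SELECT u.id FROM users AS u
$$ LANGUAGE sql STABLE;

-- Declaring it VOLATILE (the default) is one of the showstoppers:
-- the planner must treat it as a black box, so an outer WHERE clause
-- is applied only after the full result set has been produced.
CREATE FUNCTION active_ids_v() RETURNS SETOF int AS $$
    SELECT u.id FROM users AS u
$$ LANGUAGE sql VOLATILE;
```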

regards, tom lane



Re: [PERFORM] Blocking excessively in FOR UPDATE

2011-11-03 Thread Tom Lane
Claudio Freire  writes:
> The same query, without FOR UPDATE, takes just 68 milliseconds.
> With the FOR UPDATE, it takes like half a minute or more to finish.

> Now, I understand the for update part may be blocking on some other
> transaction, and it's probably the case.

Yeah, that's what I'd guess.

> But I cannot figure out which transaction it would be. There *are*, in
> fact, connections in  state, which makes me think
> those would be the culprit. But for the life of me, I cannot make
> sense of the pg_locks view, which shows all locks as granted:

A block on a row would typically show up as one transaction waiting on
another's XID.  Did you capture this *while* the query was blocked?
Also, I'm suspicious that you may be using a view that filters out
the relevant lock types --- that's obviously not a raw display of
pg_locks.
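
One way to surface such a wait directly from pg_locks (a sketch; an
ungranted ShareLock on a transactionid is the signature of a row-level
FOR UPDATE conflict):

```sql
-- Pair each ungranted lock with the granted lock it is queued behind.
SELECT waiter.pid AS blocked_pid,
       holder.pid AS blocking_pid,
       waiter.locktype,
       waiter.transactionid
FROM pg_locks AS waiter
JOIN pg_locks AS holder
  ON holder.locktype = waiter.locktype
 AND holder.transactionid = waiter.transactionid
 AND holder.granted
WHERE NOT waiter.granted;
```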

regards, tom lane



Re: [PERFORM] Predicates not getting pushed into SQL function?

2011-11-03 Thread Jay Levitt
What other info can I 
provide?  id is int, gender is varchar(255), and it's happening on 
9.0.4...

On November 3, 2011, at 2:41 PM, Tom Lane wrote:
  Jay Levitt  writes:
I'm confused.  I have a now-trivial SQL function that, unrestricted, would 
scan my whole users table.  When I paste the body of the function as a 
subquery and restrict it to one row, it only produces one row.  When I paste 
the body of the function into a view and restrict it to one row, it produces 
one row.  But when I put it in a SQL function... it scans the whole users 
table and then throws the other rows away.

I thought SQL functions were generally inline-able, push-down-able, etc.

inline-able, yes, but if they're not inlined you don't get any such
thing as pushdown of external conditions into the function body.
A non-inlined function is a black box.

The interesting question here is why the function doesn't get inlined
into the calling query.  You got the obvious showstoppers: it has a
SETOF result, it's not volatile, nor strict.  The only other possibility
I can see offhand is that there's some sort of result datatype mismatch,
but you've not provided enough info to be sure about that.

			regards, tom lane

On November 3, 2011, at 1:47 PM, Jay Levitt wrote:

I'm confused.  I have a 
now-trivial SQL function that, unrestricted, would 
scan my whole users table.  When I paste the body of the function as a 
subquery and restrict it to one row, it only produces one row.  When I 
paste 
the body of the function into a view and restrict it to one row, it 
produces 
one row.  But when I put it in a SQL function... it scans the whole 
users 
table and then throws the other rows away.

I thought SQL functions were generally inline-able, push-down-able, 
etc.  As 
a workaround, I can put my WHERE clause inside the function and pass it 
parameters, but that feels ugly, and it won't help for things like 
resticting via JOINs.  The real function needs parameters, so I can't 
use it 
as a view.  Are there better workarounds?

I suspect the problem is (something like) the planner doesn't 
realize the 
function will produce a variable number of rows; I can specify COST or 
ROWS, 
but they're both fixed values.

Pretty-printed function and explain analyze results:

https://gist.github.com/1336963

In ASCII for web-haters and posterity:

-- THE OVERLY SIMPLIFIED FUNCTION

create or replace function matcher()
returns table(user_id int, match int) as $$

    select o.user_id, 1 as match
    from (
  select u.id as user_id, u.gender
  from users as u
    ) as o
    cross join
    (
  select u.id as user_id, u.gender
  from users as u
  where u.id = 1
    ) as my;

$$ language sql stable;

-- WHEN I CALL IT AS A FUNCTION

select * from matcher() where user_id = 2;

LOG:  duration: 1.242 ms  plan:
    Query Text:

    select o.user_id, 1 as match
    from (
  select u.id as user_id, u.gender
  from users as u
    ) as o
    cross join
    (
  select u.id as user_id, u.gender
  from users as u
  where u.id = 1
    ) as my;


    Nested Loop  (cost=0.00..118.39 rows=1656 width=4) (actual 
time=0.022..0.888 rows=1613 loops=1)
  Output: u.id, 1
  ->  Index Scan using users_pkey on public.users u  
(cost=0.00..8.27 
rows=1 width=0) (actual time=0.013..0.015 rows=1 loops=1)
    Index Cond: (u.id = 1)
  ->  Seq Scan on public.users u  (cost=0.00..93.56 rows=1656
 width=4) 
(actual time=0.004..0.479 rows=1613 loops=1)
    Output: u.id
CONTEXT:  SQL function "matcher" statement 1
LOG:  duration: 1.951 ms  plan:
    Query Text: select * from matcher() where user_id = 2;
    Function Scan on public.matcher  (cost=0.25..12.75 rows=5 
width=8) 
(actual time=1.687..1.940 rows=1 loops=1)
  Output: user_id, match
  Filter: (matcher.user_id = 2)

-- WHEN I CALL IT AS A SUBQUERY

select * from
(
  select o.user_id, 1 as match
  from (
    select u.id as user_id, u.gender
    from users as u
  ) as o
  cross join
  (
    select u.id as user_id, u.gender
    from users as u
    where u.id = 1
  ) as my
) as matcher
where user_id = 2;

LOG:  duration: 0.044 ms  plan:
    Query Text: select * from
    (
  select o.user_id, 1 as match
  from (
    select u.id as user_id, u.gender
    from users as u
  ) as o
  cross join
  (
    select u.id as user_id, u.gender
    from users as u
    where u.id = 1
  ) as my
    ) as matcher
    where user_id = 2;
    Nested Loop  (cost=0.00..16.55 rows=1 width=4) (actual 
time=0.028..0.031 rows=1 loops=1)
  Output: u.id, 1
  ->  Index Scan using users_pkey on public.users u  
(cost=0.00..8.27 
rows=1 width=4) (actual time=0.021..0.022 rows=1 loops=1)
    Output: u.id
    Index Cond: (u.id = 2)
  ->  Index Scan using users_pkey on public.users u  
(cost=0.00..8.27 
rows=1 width=0) (actual time=0.004..0.006 rows=1 loops=1)

Re: [PERFORM] Blocking excessively in FOR UPDATE

2011-11-03 Thread Claudio Freire
On Thu, Nov 3, 2011 at 3:45 PM, Tom Lane  wrote:
> Claudio Freire  writes:
>> But I cannot figure out which transaction it would be. There *are*, in
>> fact, connections in  state, which makes me think
>> those would be the culprit. But for the life of me, I cannot make
>> sense of the pg_locks view, which shows all locks as granted:
>
> A block on a row would typically show up as one transaction waiting on
> another's XID.  Did you capture this *while* the query was blocked?

Yes

> Also, I'm suspicious that you may be using a view that filters out
> the relevant lock types --- that's obviously not a raw display of
> pg_locks.

It's pgadmin, which I usually use to monitor pg_stats_activity and
pg_locks in a "pretty" view.
pg_locks does not show the query, only the pid, so it's harder to spot.

Next time I find it blocking, I will check pg_locks directly and post
the output.

I did that once, and they were all granted. I didn't correlate with
other XIDs since I thought the "granted" column meant it wasn't
waiting. Is that wrong?



Re: [PERFORM] Blocking excessively in FOR UPDATE

2011-11-03 Thread Claudio Freire
On Thu, Nov 3, 2011 at 4:29 PM, Claudio Freire  wrote:
> Next time I find it blocking, I will check pg_locks directly and post
> the output.

Here it is, two instances of the query, while blocked:

select * from pg_locks where pid = 22636;

   locktype    | database | relation |  virtualxid  | transactionid | virtualtransaction |  pid  |       mode       | granted
---------------+----------+----------+--------------+---------------+--------------------+-------+------------------+---------
 transactionid |          |          |              |     360992199 | 89/22579344        | 22636 | ExclusiveLock    | t
 virtualxid    |          |          | 89/22579344  |               | 89/22579344        | 22636 | ExclusiveLock    | t
 relation      |    16398 |  5552020 |              |               | 89/22579344        | 22636 | AccessShareLock  | t
 relation      |    16398 |  5552020 |              |               | 89/22579344        | 22636 | RowExclusiveLock | t
 relation      |    16398 |  5552019 |              |               | 89/22579344        | 22636 | AccessShareLock  | t
 relation      |    16398 |  5552019 |              |               | 89/22579344        | 22636 | RowExclusiveLock | t
 relation      |    16398 |  5525292 |              |               | 89/22579344        | 22636 | RowShareLock     | t
 relation      |    16398 |  5525292 |              |               | 89/22579344        | 22636 | RowExclusiveLock | t
 relation      |    16398 |  5552008 |              |               | 89/22579344        | 22636 | AccessShareLock  | t
 relation      |    16398 |  5552008 |              |               | 89/22579344        | 22636 | RowExclusiveLock | t
(10 rows)

select * from pg_locks where pid = 22618;

   locktype    | database | relation |  virtualxid  | transactionid | virtualtransaction |  pid  |       mode       | granted
---------------+----------+----------+--------------+---------------+--------------------+-------+------------------+---------
 virtualxid    |          |          | 159/2706505  |               | 159/2706505        | 22618 | ExclusiveLock    | t
 relation      |    16398 |  5551986 |              |               | 159/2706505        | 22618 | AccessShareLock  | t
 transactionid |          |          |              |     360992478 | 159/2706505        | 22618 | ExclusiveLock    | t
 relation      |    16398 |  5552008 |              |               | 159/2706505        | 22618 | AccessShareLock  | t
 relation      |    16398 |  5552008 |              |               | 159/2706505        | 22618 | RowExclusiveLock | t
 relation      |    16398 |  5526310 |              |               | 159/2706505        | 22618 | RowShareLock     | t
 relation      |    16398 |  5552020 |              |               | 159/2706505        | 22618 | AccessShareLock  | t
 relation      |    16398 |  5552020 |              |               | 159/2706505        | 22618 | RowExclusiveLock | t
 relation      |    16398 |  5526308 |              |               | 159/2706505        | 22618 | AccessShareLock  | t
 relation      |    16398 |  5552019 |              |               | 159/2706505        | 22618 | AccessShareLock  | t
 relation      |    16398 |  5552019 |              |               | 159/2706505        | 22618 | RowExclusiveLock | t
 relation      |    16398 |  5525296 |              |               | 159/2706505        | 22618 | RowShareLock     | t
 relation      |    16398 |  5525292 |              |               | 159/2706505        | 22618 | RowShareLock     | t
 relation      |    16398 |  5525292 |              |               | 159/2706505        | 22618 | RowExclusiveLock | t
(14 rows)



Re: [PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!

2011-11-03 Thread Mario Weilguni

Am 03.11.2011 17:08, schrieb Tomas Vondra:

On 3 Listopad 2011, 16:02, Mario Weilguni wrote:

No doubt about that, querying tables using conditions on array columns is
not the best direction in most cases, especially when those tables are
huge.

Still, the interesting part here is that the OP claims this worked just
fine in the older version and after an upgrade the performance suddenly
dropped. This could be caused by many things, and we're just guessing
because we don't have any plans from the old version.

Tomas




Not really, Mohamed always said he has 9.0.3, Marcus Engene wrote about 
problems after the migration from 8.x to 9.x. Or did I miss something here?


Regards,
Mario




Re: [PERFORM] Predicates not getting pushed into SQL function?

2011-11-03 Thread Tom Lane
Jay Levitt  writes:
> 
> 
> What other info can I 
> provide?  id is int, gender is varchar(255), and it's happening on 
> 9.0.4...
>  [ etc etc ]

Please don't send HTML-only email to these lists.

Anyway, the answer seems to be that inline_set_returning_function needs
some work to handle cases with declared OUT parameters.  I will see
about fixing that going forward, but in existing releases what you need
to do is declare the function as returning SETOF some named composite
type, eg

create type matcher_result as (user_id int, match int);

create or replace function matcher() returns setof matcher_result as ...
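
Putting that workaround together with the function from upthread, a
sketch of the full rewrite might look like:

```sql
-- Existing releases: return SETOF a named composite type instead of
-- declared OUT parameters, so the planner can inline the body.
CREATE TYPE matcher_result AS (user_id int, match int);

CREATE OR REPLACE FUNCTION matcher() RETURNS SETOF matcher_result AS $$
    SELECT o.user_id, 1 AS match
    FROM (SELECT u.id AS user_id, u.gender FROM users AS u) AS o
    CROSS JOIN
         (SELECT u.id AS user_id, u.gender FROM users AS u
          WHERE u.id = 1) AS my
$$ LANGUAGE sql STABLE;

-- Once inlined, the outer predicate can be pushed down to the indexes:
SELECT * FROM matcher() WHERE user_id = 2;
```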

regards, tom lane



Re: [PERFORM] function slower than the same code in an sql file

2011-11-03 Thread CS DBA

On 11/03/2011 09:40 AM, Robert Haas wrote:

On Thu, Nov 3, 2011 at 11:31 AM, Rodrigo Gonzalez
  wrote:

El 03/11/11 11:42, Robert Haas escribió:

On Fri, Oct 28, 2011 at 9:39 AM, CS DBA  wrote:

No parameters,  one of them looks like this:

[ code snippet ]

It's hard to believe this is the real code, because SELECT without
INTO will bomb out inside a PL/pgsql function, won't it?

But he's using CREATE TABLE xyz_view_m AS

So it seems correct to me

Oh, right, I missed that.

That seems pretty mysterious then.  But is it possible the function is
getting called more times than it should?  I notice that it's set up
as a trigger; is it FOR EACH ROW when it should be a statement-level
trigger or something like that?  Maybe run EXPLAIN ANALYZE on the
query that's invoking the trigger to get some more detail on what's
going on?


I'll give it a shot ...




--
-
Kevin Kempter   -   Consistent State
A PostgreSQL Professional Services Company
  www.consistentstate.com
-




Re: [PERFORM] Blocking excessively in FOR UPDATE

2011-11-03 Thread Tom Lane
Claudio Freire  writes:
> On Thu, Nov 3, 2011 at 4:29 PM, Claudio Freire  wrote:
>> Next time I find it blocking, I will check pg_locks directly and post
>> the output.

> Here it is, two instances of the query, while blocked:

Hmm ... definitely seems that you're not blocked on a FOR UPDATE tuple
lock.  If you were, there'd be an ungranted ShareLock on a transactionid
in there.

It seems possible that you're blocked on an LWLock, which would not show
in pg_locks.  But before pursuing that idea, probably first you should
back up and confirm whether the process is actually waiting, or running,
or just really slow due to CPU contention.  It might be useful to see
what strace has to say about it.
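
A quick check of that before reaching for strace (a sketch; 9.0 uses the
procpid/waiting/current_query column names):

```sql
-- 'waiting' is true while the backend is blocked on a heavyweight lock;
-- LWLock waits and plain CPU contention both show waiting = false.
SELECT procpid, waiting, current_query
FROM pg_stat_activity
WHERE procpid IN (22636, 22618);
```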

regards, tom lane
