Re: [GENERAL] better performance on poorer machine?

2014-05-14 Thread Vegard Bønes
Hi, all.

I found the problem - it was merely a poorly written query, which for some 
reason was less terrible on my laptop. Looking at http://explain.depesz.com/ 
helped me solve the issue. Performance is now back to normal.
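
(For reference, a minimal sketch of capturing a plan that
http://explain.depesz.com/ can parse; the query below is hypothetical,
not the one from this thread:

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*) FROM observations WHERE station_id = 42;

Paste the resulting plan text into the form on that site for an
annotated, readable breakdown.)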

It seems this was a classic case of PEBKAC. Thank you for your help and time!


VG


- Original Message -
From: Alban Hertroys haram...@gmail.com
To: Vegard Bønes vegard.bo...@met.no
Cc: pgsql-general@postgresql.org
Sent: 13 May 2014 17:49:28
Subject: Re: [GENERAL] better performance on poorer machine?

On 13 May 2014 16:48, Vegard Bønes vegard.bo...@met.no wrote:
> I have a database on a test server with queries that perform terribly.
> Trying to fix this problem, I copied the database (using pg_dump) to my
> laptop, and reran tests there. The same queries perform perfectly on my
> laptop.

We can but guess, but... quite possibly your laptop is not actually a
poorer machine for single, uncomplicated queries. If its cores are
faster than the test server's, then your laptop will outperform the
server for such queries.

Once other users start running queries as well, turning the load into a
parallel one, the server will probably turn out to be faster again.

-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.




[GENERAL] better performance on poorer machine?

2014-05-13 Thread Vegard Bønes
Hi,

I have a database on a test server with queries that perform terribly. Trying 
to fix this problem, I copied the database (using pg_dump) to my laptop, and 
reran tests there. The same queries perform perfectly on my laptop.

I have tried using the same postgresql.conf, and I have run ANALYZE and even
VACUUM ANALYZE on the databases, but the problem remains.

EXPLAIN shows a somewhat different query plan for each database.
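
(One way to start checking: compare a few planner-relevant settings on
both machines; the list below is just the usual suspects, not
exhaustive:

    SELECT name, setting
    FROM pg_settings
    WHERE name IN ('shared_buffers', 'work_mem', 'effective_cache_size',
                   'random_page_cost', 'default_statistics_target');
)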

Now I have no idea what to do next. How can I go about trying to find the cause 
of this? I can see no other explanation than hardware issues, but in theory, 
the test servers (there are more than one, with the same performance problems) 
should be better than my laptop in all ways. Have I missed something obvious?


regards,
Vegard




Re: [GENERAL] Large amount of serialization errors in transactions

2013-05-15 Thread Vegard Bønes
Thanks, I tried increasing max_pred_locks_per_transaction by a factor of 20,
and that seems to have helped. By the way, is there any reason, besides memory
use, not to set max_pred_locks_per_transaction very high?


- Vegard


- Original Message -
From: Kevin Grittner kgri...@ymail.com
To: Vegard Bønes vegard.bo...@met.no, pgsql-general@postgresql.org
Sent: 10 May 2013 20:35:05
Subject: Re: [GENERAL] Large amount of serialization errors in transactions


Vegard Bønes vegard.bo...@met.no wrote: 

> I have a problem understanding how transactions with serializable
> isolation level work in PostgreSQL. What exactly may cause a
> serialization error?

A pattern of read-write dependencies among concurrent transactions
which indicates that a serialization anomaly is possible. There is
a possibility of false positives -- mostly because of the
granularity at which reads and read-write conflicts are tracked,
which is limited in order to bound the RAM used for that tracking.
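
For illustration (my example, not from this thread): the classic
write-skew case that SSI detects, assuming a hypothetical table
t(class int, v int) with a few rows in each class, and two sessions
running concurrently:

    Session 1                                Session 2
    ---------                                ---------
    BEGIN ISOLATION LEVEL SERIALIZABLE;      BEGIN ISOLATION LEVEL SERIALIZABLE;
    SELECT sum(v) FROM t WHERE class = 2;    SELECT sum(v) FROM t WHERE class = 1;
    INSERT INTO t (class, v)                 INSERT INTO t (class, v)
      VALUES (1, 10);                          VALUES (2, 10);
    COMMIT;                                  COMMIT;

One of the two transactions fails with a serialization error: each
read rows the other wrote, so no serial ordering of the two could
produce the same result, even though no single row was touched by both.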

The granularity promotion logic is pretty simple, and we may need 
to make it more sophisticated, but I haven't yet seen a real-life 
use case where that would help. So far, a high serialization 
failure rate with serializable transactions is either intrinsic to 
the nature of the workload or could be solved by increasing the RAM 
used to track locks. If you have a case where a more sophisticated 
heuristic would help, I would be interested in more details, but 
first let's see whether things can be improved to an acceptable 
level with tuning. 

For details on how the logic works, you can review the examples and 
an overview of the technique here: 

http://wiki.postgresql.org/wiki/SSI 

... which links to the documentation and to a Wiki page that was
used to manage development of the current implementation. That page
goes into more detail about how it works, and in turn links to the
academic papers which were the foundation for the development.

> My problem is a system where process one adds data to a database.
> Shortly afterwards, process two reads and possibly modifies the
> same data (keys are not touched). When large amounts of data
> arrive at about the same time, I get loads of serialization
> errors in process two. In a perfect world this should not happen,
> since the data entered by process one is almost always different
> from the data that is simultaneously being read and written by
> process two.

It would be easier to tell what the cause is if you pasted the 
exact messages which are getting logged most frequently, with 
associated DETAIL. 

> I have tried increasing max_pred_locks_per_transaction, but it
> seems to have no effect.

How high have you tried to set it? I have seen some reports of 
people needing to set it about 10 times higher than the default to 
reduce false positives to a negligible level. 
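
As a sketch, in postgresql.conf (the default is 64, and this setting
can only be changed at server start):

    max_pred_locks_per_transaction = 640   # ten times the default of 64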

> I do retry the transactions, and eventually they succeed, but my
> concern here is the amount of errors I get at certain periods. Am
> I expecting too much of serializable isolation level
> transactions, or is there anything else that I am missing?

Have you reviewed the issues listed under "for optimal performance"
on this page? 

http://www.postgresql.org/docs/current/interactive/transaction-iso.html#XACT-SERIALIZABLE
 

If you allow large numbers of connections, you may see a large 
benefit from using a connection pooler which funnels many logical 
client connections down to a much smaller number of database 
connections. This tends to help in general, but is especially 
important when using serializable transactions. 

http://wiki.postgresql.org/wiki/Number_Of_Database_Connections 
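
For illustration only (not part of Kevin's reply): a minimal PgBouncer
sketch that funnels up to 200 client connections into a pool of 10
server connections; all names and sizes here are hypothetical:

    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = trust          ; local testing only
    pool_mode = transaction
    max_client_conn = 200
    default_pool_size = 10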

-- 
Kevin Grittner 
EnterpriseDB: http://www.enterprisedb.com 
The Enterprise PostgreSQL Company 




[GENERAL] Large amount of serialization errors in transactions

2013-05-10 Thread Vegard Bønes
Hi!

I have a problem understanding how transactions with serializable isolation
level work in PostgreSQL. What exactly may cause a serialization error?

My problem is a system where process one adds data to a database. Shortly
afterwards, process two reads and possibly modifies the same data (keys are not
touched). When large amounts of data arrive at about the same time, I get
loads of serialization errors in process two. In a perfect world this should
not happen, since the data entered by process one is almost always different
from the data that is simultaneously being read and written by process two.

I have tried increasing max_pred_locks_per_transaction, but it seems to have no 
effect.

I do retry the transactions, and eventually they succeed, but my concern here 
is the amount of errors I get at certain periods. Am I expecting too much of 
serializable isolation level transactions, or is there anything else that I am 
missing?



- Vegard




[GENERAL] SQL names for C constants

2007-02-27 Thread Vegard Bønes

Hi!

I am writing a server-side function in plpgsql that returns a part of a
large object.


To solve this problem I can do something like this:

fd := lo_open( some_oid, 262144 );  -- 262144 = 0x40000 = INV_READ
PERFORM lo_lseek( fd, index, 0 );   -- 0 = SEEK_SET (seek from start)
RETURN loread( fd, read_size );     -- read read_size bytes at that offset

This works well enough, but I feel slightly uneasy about using raw
numbers instead of symbolic names (262144 for INV_READ and 0 for
SEEK_SET). I find SEEK_SET especially unsettling, since that constant
is defined outside of PostgreSQL, and it may therefore be defined to
a value other than 0 on other platforms.


Am I missing something here? Is there a way to specify these names 
without writing the actual numbers, or should I just close my eyes and 
hope that everything will work?
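
One possible workaround, as a sketch (the helper functions below are my
own invention, not anything built into PostgreSQL): wrap the magic
numbers once in trivial SQL functions, so every call site reads
symbolically and there is a single place to update if a value ever
differs:

    CREATE OR REPLACE FUNCTION inv_read() RETURNS integer
      AS $$ SELECT 262144 $$  -- 0x40000, INV_READ in libpq/libpq-fs.h
      LANGUAGE sql IMMUTABLE;

    CREATE OR REPLACE FUNCTION seek_set() RETURNS integer
      AS $$ SELECT 0 $$       -- lo_lseek() uses the C convention SEEK_SET = 0
      LANGUAGE sql IMMUTABLE;

The original snippet then becomes:

    fd := lo_open( some_oid, inv_read() );
    PERFORM lo_lseek( fd, index, seek_set() );
    RETURN loread( fd, read_size );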



thanks

VG
