Re: [GENERAL] plpgsql function confusing behaviour

2011-07-13 Thread Shianmiin

Merlin Moncure-2 wrote:
> 
> One proposed solution is to cache plpgsql plans around the search path.  
> 

I like the proposed solution. Since search_path plays a part when generating
the plpgsql plan, it makes sense for it to be part of the cache key.


Merlin Moncure-2 wrote:
> 
> *) use sql functions for portions that float across schemas 
> 

Just to clarify, does this mean that SQL functions don't cache plans the way
plpgsql functions do?



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/plpgsql-function-confusing-behaviour-tp4576354p4579619.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] About permissions on large objects

2011-07-13 Thread Giuseppe Sacco
Hi all,
I moved a few clusters from 8.4 to 9.0 since I required the new way of
authenticating against LDAP (or, in my case, AD). Now I have found that the new
database version introduced permissions on large objects, so my
application, in order to share large objects across a group, requires a
bit of change.

While the application code will be changed in order to grant rights on
large objects too, I would like to know if there is any way of listing
current rights, i.e., of finding all large objects that still need their
permissions changed.

Currently I don't know how to distinguish which large objects have
already been granted, so I grant permissions on all large objects.
This is quite time consuming, about 5 minutes, and needs to be executed a
few times per hour.

This is what I do now:

do $$
declare r record;
begin
for r in select distinct loid from pg_catalog.pg_largeobject loop
   execute 'GRANT SELECT,UPDATE ON LARGE OBJECT ' || r.loid || ' TO agenzia_r';
end loop;
end$$;
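A possibly faster variant (just a sketch, untested, assuming the 9.0
pg_catalog.pg_largeobject_metadata catalog, which stores each large object's
owner and ACL) would grant only on objects whose ACL does not yet mention the
role:

```sql
do $$
declare r record;
begin
  -- lomacl is NULL until a GRANT/REVOKE has ever been issued for the object,
  -- so objects with a NULL ACL, or an ACL not mentioning agenzia_r, are the
  -- ones still needing the grant.  The ::text check is a crude way to inspect
  -- the aclitem[] without any helper functions.
  for r in
    select oid from pg_catalog.pg_largeobject_metadata
    where lomacl is null
       or lomacl::text not like '%agenzia_r=%'
  loop
    execute 'GRANT SELECT,UPDATE ON LARGE OBJECT ' || r.oid || ' TO agenzia_r';
  end loop;
end$$;
```

This skips objects that already carry the privilege, which should cut the
runtime considerably when only a few objects are new between runs.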

Is there a better/faster way?

Thanks,
Giuseppe




Re: [GENERAL] ts_rank seems very slow (140 ranked documents / second on my machine)

2011-07-13 Thread Nicolas Grilly
Hi Oleg and all,

On Wed, Jul 13, 2011 at 08:16, Oleg Bartunov  wrote:
> there is problem with estimating of cost scanning gin index in < 9.1
> versions,
> so you can set enable_seqscan=off;
> or try 9.1 which beta3 now.

I re-ran my queries using enable_seqscan=off.

Now the first query, without ts_rank, uses the GIN index:

set enable_seqscan=off;
explain analyze select id
from posts_100
where to_tsquery('english', 'crare') @@ document_vector
limit 50;

Limit  (cost=42290.12..42306.31 rows=50 width=4) (actual
time=16.259..16.412 rows=50 loops=1)
  ->  Bitmap Heap Scan on posts_100  (cost=42290.12..57877.02
rows=48152 width=4) (actual time=16.256..16.344 rows=50 loops=1)
Recheck Cond: ('''crare'''::tsquery @@ document_vector)
->  Bitmap Index Scan on index_posts_documents_100
(cost=0.00..42278.08 rows=48152 width=0) (actual time=13.265..13.265
rows=49951 loops=1)
  Index Cond: ('''crare'''::tsquery @@ document_vector)
Total runtime: 16.484 ms

But the second query, the one that uses ts_rank, is still very slow...
Any idea why? Is ts_rank efficient enough to find the best 50 matches
among 50 000 documents?

set enable_seqscan=off;
explain analyze select id
from posts_100
where to_tsquery('english', 'crare') @@ document_vector
order by ts_rank_cd(document_vector, to_tsquery('english', 'crare'), 32) desc
limit 50;

Limit  (cost=59596.98..59597.10 rows=50 width=22) (actual
time=296212.052..296212.257 rows=50 loops=1)
  ->  Sort  (cost=59596.98..59717.36 rows=48152 width=22) (actual
time=296186.928..296187.007 rows=50 loops=1)
Sort Key: (ts_rank_cd(document_vector, '''crare'''::tsquery, 32))
Sort Method:  top-N heapsort  Memory: 27kB
->  Bitmap Heap Scan on posts_100
(cost=42290.12..57997.40 rows=48152 width=22) (actual
time=70.861..296059.515 rows=49951 loops=1)
  Recheck Cond: ('''crare'''::tsquery @@ document_vector)
  ->  Bitmap Index Scan on index_posts_documents_100
(cost=0.00..42278.08 rows=48152 width=0) (actual time=24.922..24.922
rows=49951 loops=1)
Index Cond: ('''crare'''::tsquery @@ document_vector)
Total runtime: 296220.493 ms
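A commonly suggested workaround (a sketch, not advice from this thread) is to
rank only a bounded candidate set fetched through the index, trading recall
for speed:

```sql
-- Rank only the first 1000 index matches instead of all ~50 000.
-- Trade-off: the true best match may fall outside the candidate set.
select id
from (
    select id, document_vector
    from posts_100
    where to_tsquery('english', 'crare') @@ document_vector
    limit 1000
) candidates
order by ts_rank_cd(document_vector, to_tsquery('english', 'crare'), 32) desc
limit 50;
```

This bounds the number of TOASTed tsvectors that must be fetched in random
order, which is where the time goes.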

>> By the way, is ts_rank supposed to use a GIN index when it's
>> available?
>
> no, I see no benefit :)

Ok. But what is the solution to improve ts_rank execution time? Am I
doing something wrong?

Thanks for your help,

Nicolas



[GENERAL] Server stops responding randomly for 5 minutes

2011-07-13 Thread Andrus

A few times per week the server randomly stops responding for approx 5 minutes.
The affected user has to wait for about 5 minutes before the server responds.
Other users can work normally at the same time.

On Monday this happened at 12:16.
I haven't noticed anything special in the PostgreSQL and Windows logs at this
time.


How can I fix this or find the reason for it?

Andrus.

Config:

Win 2008 server  2 GB  RAM
PostgreSQL 9.0.4, compiled by Visual C++ build 1500, 64-bit
10-25 users manually entering sales orders, i.e. low server usage

The server was installed using the setup program from the site that
postgresql.org links to; the postgresql.conf file was not modified after
installation.

Database size: 2165 MB

Biggest files in decreasing order of size:

 1  firma1.bilkaib                 880 MB   table
 2  public.attachme                454 MB   table
 3  firma1.rid                     401 MB   table
 4  firma1.omrid                    88 MB   table
 5  firma1.omdok                    79 MB   table
 6  firma1.bilkaib_cr_idx           75 MB   index
 7  firma1.bilkaib_cr_pattern_idx   74 MB   index
 8  firma1.bilkaib_db_pattern_idx   74 MB   index
 9  firma1.bilkaib_db_idx           74 MB   index
10  firma1.klient                   69 MB   table
11  firma1.dok                      64 MB   table
12  firma1.bilkaib_kuupaev_idx      60 MB   index
13  firma1.bilkaib_dokumnr_idx      53 MB   index
14  firma1.bilkaib_pkey             43 MB   index
15  firma1.rid_rtellimus_idx        38 MB   index
16  firma1.rid_toode_idx            25 MB   index
17  firma1.rid_toode_pattern_idx    24 MB   index
18  public.strings                  23 MB   table
19  firma1.rid_inpdokumnr_idx       19 MB   index
20  firma1.toode                    17 MB   table
21  firma1.rid_dokumnr_idx          16 MB   index
22  firma1.rid_pkey                 16 MB   index
23  firma1.summav                   15 MB   table
24  public.report                   13 MB   table
25  public.desktop                9784 kB   table
26  public.mailbox                7128 kB   table
27  public.localfil               5584 kB   table
28  pg_toast.pg_toast_36145_index 5392 kB   index
...




Re: [GENERAL] Server stops responding randomly for 5 minutes

2011-07-13 Thread Tony Wang
I think logs may help. Have you checked that?

2011/7/13 Andrus 

> Sometimes per week server stops randomly responding for approx 5 minutes.
> User  should wait for 5 minutes before server responds.
> Other users can work normally at same time.
>
> Monday this happens at  12:16
> I havent noticed anythis special in PostgreSql and windows logs at this
> time.
>
> How to fix or find reason for this ?
>
> Andrus.
>
> Config:
>
> Win 2008 server  2 GB  RAM
> PostgreSQL 9.0.4, compiled by Visual C++ build 1500, 64-bit
> 10-25 users manually entering sales order, e.q. low server usage
>
> Server is installed using setup program from postgresql.org directed site,
> postresql.conf file is not modified after installation.
>
> Database size: 2165 MB
>
> Biggest files in decreasing order of size:
>
>  1  firma1.bilkaib                 880 MB   table
>  2  public.attachme                454 MB   table
>  3  firma1.rid                     401 MB   table
>  4  firma1.omrid                    88 MB   table
>  5  firma1.omdok                    79 MB   table
>  6  firma1.bilkaib_cr_idx           75 MB   index
>  7  firma1.bilkaib_cr_pattern_idx   74 MB   index
>  8  firma1.bilkaib_db_pattern_idx   74 MB   index
>  9  firma1.bilkaib_db_idx           74 MB   index
> 10  firma1.klient                   69 MB   table
> 11  firma1.dok                      64 MB   table
> 12  firma1.bilkaib_kuupaev_idx      60 MB   index
> 13  firma1.bilkaib_dokumnr_idx      53 MB   index
> 14  firma1.bilkaib_pkey             43 MB   index
> 15  firma1.rid_rtellimus_idx        38 MB   index
> 16  firma1.rid_toode_idx            25 MB   index
> 17  firma1.rid_toode_pattern_idx    24 MB   index
> 18  public.strings                  23 MB   table
> 19  firma1.rid_inpdokumnr_idx       19 MB   index
> 20  firma1.toode                    17 MB   table
> 21  firma1.rid_dokumnr_idx          16 MB   index
> 22  firma1.rid_pkey                 16 MB   index
> 23  firma1.summav                   15 MB   table
> 24  public.report                   13 MB   table
> 25  public.desktop                9784 kB   table
> 26  public.mailbox                7128 kB   table
> 27  public.localfil               5584 kB   table
> 28  pg_toast.pg_toast_36145_index 5392 kB   index
> ...
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Server stops responding randomly for 5 minutes

2011-07-13 Thread Tomáš Vondra
> Sometimes per week server stops randomly responding for approx 5 minutes.
> User  should wait for 5 minutes before server responds.
> Other users can work normally at same time.

So does the whole machine stop responding, or just PostgreSQL?
Are the other users using PostgreSQL or some other services?

> Monday this happens at  12:16
> I havent noticed anythis special in PostgreSql and windows logs at this
> time.

Have you done some basic monitoring? This typically happens when the
machine does a lot of I/O (swapping, checkpoints, ...) - not sure how this
is logged.

> How to fix or find reason for this ?

First, you have to determine what's wrong. Set up some basic monitoring,
on Linux I'd use iostat/vmstat, not sure about the windows - try process
explorer from sysinternals.

> 10-25 users manually entering sales order, e.q. low server usage

So they're entering the data directly into the database? Are you sure
there's not something wrong in the application (e.g. a loop that takes a
lot of time in some cases)?

> Server is installed using setup program from postgresql.org directed site,
> postresql.conf file is not modified after installation.

Not sure what setup program you mean (there's an install from
EnterpriseDB), but the default config values are usually too low (e.g. the
default shared buffers is 24MB IIRC, but in your case 512MB would be
probably better). Not sure if that's the problem, though.

Tomas




Re: [GENERAL] Server stops responding randomly for 5 minutes

2011-07-13 Thread Peter Geoghegan
2011/7/13 Andrus :
> Sometimes per week server stops randomly responding for approx 5 minutes.
> User  should wait for 5 minutes before server responds.
> Other users can work normally at same time.

Sounds very much like a locking issue. Are you doing something like
storing a frequently updated system-wide setting in a table with a
single row?

http://wiki.postgresql.org/wiki/Lock_Monitoring

When the problem happens again, report back what the top query you see
on that wiki page shows.
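For reference, the blocked/blocking query on that wiki page, in the form that
matches the 9.0-era pg_stat_activity columns (procpid, current_query), is
roughly:

```sql
-- For every ungranted lock, find another session holding a conflicting
-- lock on the same transaction id, and show both sides.
SELECT bl.pid          AS blocked_pid,
       a.usename       AS blocked_user,
       kl.pid          AS blocking_pid,
       ka.usename      AS blocking_user,
       a.current_query AS blocked_statement
FROM pg_catalog.pg_locks bl
JOIN pg_catalog.pg_stat_activity a  ON a.procpid = bl.pid
JOIN pg_catalog.pg_locks kl         ON kl.transactionid = bl.transactionid
                                   AND kl.pid != bl.pid
JOIN pg_catalog.pg_stat_activity ka ON ka.procpid = kl.pid
WHERE NOT bl.granted;
```

Run it while the stall is happening; an empty result means the stall is not a
row-lock wait.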

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services



Re: [GENERAL] Server stops responding randomly for 5 minutes

2011-07-13 Thread Andrus

Tomáš and Tony,

thank you.


Have you done some basic monitoring? This typically happens when the
machine does a lot of I/O (swapping, checkpoints, ...) - not sure how this
is logged.


This is dedicated server, used only for PostgreSql.

I filtered the Windows event logs near this time (12:16). There is an error:

The OpsMgr Connector could not connect to OPSMAN.y.xxx:5723.  The error
code is 11004L(The requested name is valid, but no data of the requested
type was found.).  Please verify there is network connectivity, the server
is running and has registered it's listening port, and there are no
firewalls blocking traffic to the destination.

and OpsMgr-related warnings. I don't know what OpsMgr is, but I expect that it
is not related to this issue.

The postgres log at this time shows:

2011-07-11 11:45:27 EEST LOG:  could not receive data from client: No
connection could be made because the target machine actively refused it.

2011-07-11 11:45:27 EEST LOG:  unexpected EOF on client connection
2011-07-11 12:18:35 EEST LOG:  could not receive data from client: No
connection could be made because the target machine actively refused it.


2011-07-11 12:18:35 EEST LOG:  unexpected EOF on client connection
2011-07-11 12:18:46 EEST LOG:  could not receive data from client: No
connection could be made because the target machine actively refused it.


2011-07-11 12:18:46 EEST LOG:  unexpected EOF on client connection
2011-07-11 12:20:55 EEST LOG:  could not receive data from client: No
connection could be made because the target machine actively refused it.

Those messages appear all the time, so I expect that they are also not
related to this issue.


First, you have to determine what's wrong. Set up some basic monitoring,
on Linux I'd use iostat/vmstat, not sure about the windows - try process
explorer from sysinternals.


Users report the hangup later; I cannot monitor it at the time it happens.
I have no idea what more to monitor, or how. Anyway, I added the lines

log_lock_waits = on
log_temp_files = 2000
log_min_duration_statement = 1
log_line_prefix='%t %u %d '
log_min_error_statement = warning

to the end of postgresql.conf and restarted the server.


10-25 users manually entering sales order, e.q. low server usage


So they're entering the data directly into the database?


They are using a Windows application over RDP (from another computer in the
LAN) which generates insert, update and delete commands to the server for
every entered order.


Are you sure
there's not something wrong in the application (e.g. a loop that takes a
lot of time in some cases)?


This application is used at many other sites without hangups.
I haven't intentionally written such loops, and the statements cannot take that
much time.


The server was installed using the setup program from the site that
postgresql.org links to; the postgresql.conf file was not modified after
installation.


Not sure what setup program you mean (there's an install from
EnterpriseDB), but the default config values are usually too low (e.g. the
default shared buffers is 24MB IIRC, but in your case 512MB would be
probably better). Not sure if that's the problem, though.


PostgreSQL was installed from the EnterpriseDB installer; the postgresql.org
site links to it.

I have now run the EnterpriseDB Tuning Wizard, used it to optimize
postgresql.conf for a mixed application server, and restarted the service.

postgresql.conf now contains

# NOTE: This has been modified by EnterpriseDB's Tuning Wizard on 2011/07/13
14:10:42
#   Original Value for "shared_buffers" was "32MB"
shared_buffers = 69608   # min 128kB

Andrus.




Re: [GENERAL] Server stops responding randomly for 5 minutes

2011-07-13 Thread Tomas Vondra
On 13 Červenec 2011, 13:34, Andrus wrote:
> 2011-07-11 12:18:35 EEST LOG:  unexpected EOF on client connection
> 2011-07-11 12:18:46 EEST LOG:  could not receive data from client: No
> connection could be made because the target machine actively refused it.
>
>
> 2011-07-11 12:18:46 EEST LOG:  unexpected EOF on client connection
> 2011-07-11 12:20:55 EEST LOG:  could not receive data from client: No
> connection could be made because the target machine actively refused it.

AFAIK this happens when the connection is not closed properly (e.g. the
application exits without closing the connection etc.).

>> First, you have to determine what's wrong. Set up some basic monitoring,
>> on Linux I'd use iostat/vmstat, not sure about the windows - try process
>> explorer from sysinternals.
>
> Users report hangup later, I cannot monitor this at hangup time.
> No idea what / how to monitor more. Anyway I added lines
>
> log_lock_waits = on
> log_temp_files = 2000
> log_min_duration_statement = 1
> log_line_prefix='%t %u %d '
> log_min_error_statement = warning

OK, but you need to monitor the system too. See for example the Process
Explorer - you can run it and then see the history when the users report
there was a problem.

> I ran now EnterpiseDb Tuning wizard and optimized postgresql.conf using it
> for mixed app server and restarted service.

> shared_buffers = 69608   # min 128kB

OK, that's about 544MB (69608 × 8kB) - pretty close to the 512MB I recommended.

Tomas




Re: [GENERAL] Server stops responding randomly for 5 minutes

2011-07-13 Thread Andrus

Peter,


A few times per week the server randomly stops responding for approx 5 minutes.
The affected user has to wait about 5 minutes before the server responds.
Other users can work normally at the same time.

Sounds very much like a locking issue.


Thank you.
This may be the issue.


Are you doing something like
storing a frequently updated system-wide setting in a table with a
single row?
http://wiki.postgresql.org/wiki/Lock_Monitoring


AFAIK, order entry should not update a single-row global parameter-setting
table. Order ids and numbers, and order detail row ids and row order numbers,
are read from PostgreSQL sequences using nextval(). Hopefully this cannot
cause the issue.
Specific to this site, every order entry invokes a lot of tiny updates of
single-row foreign key fields on the large rid table, shown below.
I'm not sure whether they cause the delay.



When the problem happens again, report back what the top query you see
on that wiki page shows.


The issue is reported by users later; I cannot access the server at the time
the issue occurs. Users can report the exact time when the issue occurred.
How can I set up additional logging to find the reason after the issue is
reported?


I optimized postgresql.conf for a mixed server using the Tuning Wizard and added

log_lock_waits = on
log_temp_files = 2000
log_min_duration_statement = 1
log_line_prefix='%t %u %d '
log_min_error_statement = warning

to the end of postgresql.conf.

How should I tune logging so that the reason is logged if this happens again?

Andrus.


Order entry updates:

update rid SET rid7obj='.', reakuupaev=?dok.kuupaev where (toode like 'KM%' or toode like 'TT%') and dokumnr=<>;
update rid SET rid7obj='.' where toode like 'YM%' and dokumnr=<>;
update rid SET reakuupaev=?dok.kuupaev where toode like 'TT%' and dokumnr=<>;

update rid SET rid6obj='XX-XXX' where dokumnr=<> and toode like 'TT%';
update rid SET rid6obj='XX-XXX' where dokumnr=<> and toode like 'K%';
update rid SET rid6obj='XX-XXX' where dokumnr=<> and toode like 'Y%';

update rid SET myygikood='E' where dokumnr=<> and rid2obj like 'MLE%';
update rid SET myygikood='Y' where dokumnr=<> and rid2obj like 'MLE%' and rid6obj like 'EU%';
update rid SET rid3obj='MNOEU' where dokumnr=<> and rid2obj like 'MLE%';
update rid SET rid3obj='MEURO' where dokumnr=<> and rid2obj like 'MLE%' and rid6obj like 'EU%';

update rid SET myygikood='E' where dokumnr=<> and rid2obj like 'MMA%';
update rid SET rid3obj='MNOEU' where dokumnr=<> and rid2obj like 'MMA%';
update rid SET myygikood='Y' where dokumnr=<> and rid2obj like 'MMA%' and rid6obj like 'EU%';
update rid SET rid3obj='MEURO' where dokumnr=<> and rid2obj like 'MMA%' and rid6obj like 'EU%';

update rid SET myygikood='Y' where dokumnr=<> and rid2obj like 'MPU%';
update rid SET rid3obj='MNOEU' where dokumnr=<> and rid2obj like 'MPU%';
update rid SET rid3obj='MEURO' where dokumnr=<> and rid2obj like 'MPU%' and rid6obj like 'EU%';






Re: [GENERAL] About permissions on large objects

2011-07-13 Thread Howard Cole

On 13/07/2011 8:49 AM, Giuseppe Sacco wrote:

Hi all,
I moved a few clusters from 8.4 to 9.0 since I required the new way of
authenticating against LDAP (or, in my case, AD). Now, I found the new
database version introduced permissions on large object, so my
application, in order to share large object across a group, require a
bit of change.

While the application code will be changed in order to give rights on
large objects too, I would like to know if there is any way for listing
current rights, i.e., for finding all large objects that still need to
have permissions changed.

Currently I cannot know how to distinguish what large objects have
already been granted, so I do give permissions to all large objects.
This is quite time consuming, about 5 minutes, and need to be executed a
few times per hour.

This is what I do now:

do $$
declare r record;
begin
for r in select distinct loid from pg_catalog.pg_largeobject loop
execute 'GRANT SELECT,UPDATE ON LARGE OBJECT ' || r.loid || ' TO agenzia_r';
end loop;
end$$;

Is there a better/faster way?

Thanks,
Giuseppe




As an interim solution, you could set the large object compatibility:

www.postgresql.org/docs/9.0/interactive/runtime-config-compatible.html#GUC-LO-COMPAT-PRIVILEGES

Howard
www.selestial.com




[GENERAL] Web-based Graphical Query Building Tool for PostgreSQL

2011-07-13 Thread Caleb Palmer
Hi all,

My company sells software that uses PostgreSQL, and the need has come up to
provide a tool that gives our clients access to query the database, but we
don't want to expect these users to be able to use SQL.  Is there a product
out there that provides a graphical query builder?  Preferably web-based.  I
know that pgAdmin has a graphical query builder.

Thanks in advance,
Caleb Palmer


Re: [GENERAL] dirty read from plpgsql

2011-07-13 Thread Willy-Bas Loos
Erm, you're right (re-tested that today).
I don't know what happened the other day. The query updating the flag
would not return until the test function was done.
I must have made the test duration too short, so that it only looked that way.
Whatever, it works. Thanks.

WBL

On Wed, Jul 6, 2011 at 2:36 PM, hubert depesz lubaczewski
 wrote:
> On Wed, Jul 06, 2011 at 12:54:21PM +0200, Willy-Bas Loos wrote:
>> I'd like to do a dirty read from plpgsql, so that i can stop the function
>> that is in a long loop without rolling back the work that it did.
>> All i want to read is a flag that says 'stop'.
>
> this doesn't need dirty read.
> just read committed.
> make table with flags, and insert there row which says "stop". make sure
> the insert gets committed.
>
> every so often, in your function check flags in the table, and since the
> change got committed - it will be visible, and function will stop.
>
> Best regards,
>
> depesz
>
> --
> The best thing about modern society is how easy it is to avoid contact with 
> it.
>                                                             http://depesz.com/
>
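Depesz's suggestion can be sketched as follows (table, function and flag names
are hypothetical, not from the thread):

```sql
create table job_flags (
    job  text primary key,
    stop boolean not null default false
);

create or replace function long_loop() returns void as $$
declare
  i int := 0;
begin
  loop
    i := i + 1;
    -- ... do one unit of work here ...

    -- Check the flag periodically.  Under READ COMMITTED each statement
    -- in a volatile plpgsql function gets a fresh snapshot, so a flag
    -- committed by another session becomes visible here.
    if i % 1000 = 0 then
      if exists (select 1 from job_flags where job = 'myjob' and stop) then
        return;  -- exit normally, so work already done can still commit
      end if;
    end if;
  end loop;
end;
$$ language plpgsql;

-- From another session, to stop it:
--   update job_flags set stop = true where job = 'myjob';
```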



-- 
"Patriotism is the conviction that your country is superior to all
others because you were born in it." -- George Bernard Shaw



Re: [GENERAL] plpgsql function confusing behaviour

2011-07-13 Thread Merlin Moncure
On Tue, Jul 12, 2011 at 12:10 PM, Shianmiin  wrote:
>
> Merlin Moncure-2 wrote:
>>
>> One proposed solution is to cache plpgsql plans around the search path.
>>
>
> I like the proposed solution, since search_path plays a part when generating
> plpgsql plan, it make sense to be part of the cache.
>
>
> Merlin Moncure-2 wrote:
>>
>> *) use sql functions for portions that float across schemas
>>
>
> Just to clarify, does this mean the sql functions doesn't cache plans like
> plpgsql functions do?

Correct, so you could wrap the schema-dependent bits inside set-returning
SQL functions.
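A minimal sketch of that idea (schema and table names are hypothetical):

```sql
-- Hypothetical setup: each tenant schema has its own "accounts" table
-- with the same column layout.  A plpgsql function would cache a plan
-- with "accounts" bound to one schema; a SQL function is planned at
-- call time, so the reference follows the caller's search_path.
create or replace function list_account_ids() returns setof integer as $$
    select id from accounts;  -- resolved via the current search_path
$$ language sql;

set search_path = tenant_a;
select * from list_account_ids();  -- reads tenant_a.accounts
set search_path = tenant_b;
select * from list_account_ids();  -- reads tenant_b.accounts
```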

merlin



[GENERAL] Using LDAP roles in PostgreSQL

2011-07-13 Thread Lars Kanis
Hi,

LDAP is often used to do a centralized user and role management in an 
enterprise environment. PostgreSQL offers different 
authentication methods, like LDAP, SSPI, GSSAPI or SSL. However, for any of 
these methods the user must already exist in the 
database, before the authentication can be used. There is currently no 
authorization of database users directly based on LDAP.

Unfortunately, I couldn't find a program for synchronizing users, groups and
their memberships from LDAP to PostgreSQL, so I wrote my own and just
released v0.1.0.

Access to LDAP is read-only. pg_ldap_sync issues the proper CREATE ROLE, DROP
ROLE, GRANT and REVOKE commands to synchronize users and groups. It is meant
to be run as a cron job.

FEATURES:
* Configurable per YAML config file
* Can use Active Directory as LDAP-Server
* Nested groups/roles supported
* Runs with pg.gem (C-library) or postgres-pr.gem (pure Ruby)
* Test mode which doesn’t do any changes to the DBMS

Homepage: https://github.com/larskanis/pg-ldap-sync

Is it useful for anyone apart from me?

--
Kind regards,
Lars Kanis



Re: [GENERAL] ts_rank seems very slow (140 ranked documents / second on my machine)

2011-07-13 Thread Oleg Bartunov

I didn't notice that; reading 40K tuples in random order takes a long time,
and this is a problem for any database. Can you measure the time needed to
read all documents found?
 :( The only solution I see is to store enough
information for ranking in the index.

Oleg
On Wed, 13 Jul 2011, Nicolas Grilly wrote:


Hi Oleg and all,

On Wed, Jul 13, 2011 at 08:16, Oleg Bartunov  wrote:

there is problem with estimating of cost scanning gin index in < 9.1
versions,
so you can set enable_seqscan=off;
or try 9.1 which beta3 now.


I re-ran my queries using enable seqscan=off.

Now the first query, without ts_rank, uses the GIN index:

set enable_seqscan=off;
explain analyze select id
from posts_100
where to_tsquery('english', 'crare') @@ document_vector
limit 50;

Limit  (cost=42290.12..42306.31 rows=50 width=4) (actual
time=16.259..16.412 rows=50 loops=1)
 ->  Bitmap Heap Scan on posts_100  (cost=42290.12..57877.02
rows=48152 width=4) (actual time=16.256..16.344 rows=50 loops=1)
   Recheck Cond: ('''crare'''::tsquery @@ document_vector)
   ->  Bitmap Index Scan on index_posts_documents_100
(cost=0.00..42278.08 rows=48152 width=0) (actual time=13.265..13.265
rows=49951 loops=1)
 Index Cond: ('''crare'''::tsquery @@ document_vector)
Total runtime: 16.484 ms

But the second query, the one that uses ts_rank, is still very slow...
Any idea why? Is ts_rank efficient enough to find the best 50 matches
among 50 000 documents?

set enable_seqscan=off;
explain analyze select id
from posts_100
where to_tsquery('english', 'crare') @@ document_vector
order by ts_rank_cd(document_vector, to_tsquery('english', 'crare'), 32) desc
limit 50;

Limit  (cost=59596.98..59597.10 rows=50 width=22) (actual
time=296212.052..296212.257 rows=50 loops=1)
 ->  Sort  (cost=59596.98..59717.36 rows=48152 width=22) (actual
time=296186.928..296187.007 rows=50 loops=1)
   Sort Key: (ts_rank_cd(document_vector, '''crare'''::tsquery, 32))
   Sort Method:  top-N heapsort  Memory: 27kB
   ->  Bitmap Heap Scan on posts_100
(cost=42290.12..57997.40 rows=48152 width=22) (actual
time=70.861..296059.515 rows=49951 loops=1)
 Recheck Cond: ('''crare'''::tsquery @@ document_vector)
 ->  Bitmap Index Scan on index_posts_documents_100
(cost=0.00..42278.08 rows=48152 width=0) (actual time=24.922..24.922
rows=49951 loops=1)
   Index Cond: ('''crare'''::tsquery @@ document_vector)
Total runtime: 296220.493 ms


By the way, does ts_rank is supposed to use a GIN index when it's
available?


no, I see no benefit :)


Ok. But what is the solution to improve ts_rank execution time? Am I
doing something wrong?

Thanks for your help,

Nicolas



Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: o...@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83



Re: [GENERAL] ts_rank seems very slow (140 ranked documents / second on my machine)

2011-07-13 Thread Nicolas Grilly
On Wed, Jul 13, 2011 at 17:36, Oleg Bartunov  wrote:
> I didn't notice, reading 40K tuples in random order takes a long time and
> this
> is a problem of any database. Can you measure time to read all documents
> found ?

As you asked, I measured the time required to read all documents.

For reference, after having dropped the operating system cache, my
machine can read a 1 GB file in 20 seconds, that is 50 MB / second.

Here are the stats for table posts_100:
Table size: 117 MB  
TOAST table size: 8356 MB   
Index size: 1720 MB 

I forced PostgreSQL to read all documents using the following query,
which doesn't involve ts_rank:

explain analyze select sum(length(document_vector)) from posts_100;

Aggregate  (cost=27472.52..27472.53 rows=1 width=18) (actual
time=346952.556..346952.557 rows=1 loops=1)
  ->  Seq Scan on posts_100  (cost=0.00..24975.01 rows=999001
width=18) (actual time=0.023..1793.523 rows=999001 loops=1)
Total runtime: 346952.595 ms

Then I ran a similar query that involves ts_rank:

explain analyze select sum(ts_rank_cd(document_vector,
to_tsquery('english', 'crare'), 32)) from posts_100

Aggregate  (cost=27472.52..27472.53 rows=1 width=18) (actual
time=373713.957..373713.958 rows=1 loops=1)
  ->  Seq Scan on posts_100  (cost=0.00..24975.01 rows=999001
width=18) (actual time=20.045..1847.897 rows=999001 loops=1)
Total runtime: 373714.031 ms

The first query ran in 347 seconds; the second one in 374 seconds.
Conclusion: there is no significant overhead in the ts_rank function
itself. It's slow because ts_rank has to read, in random order, 40 000
tsvectors stored in the TOAST table. The slow execution time looks like
a direct consequence of storing tsvectors in the TOAST table...

>  :( The only solution I see is to store enough information for ranking in 
> index.

Is it the expected behavior? How can I improve that?

Thanks,

Nicolas



Re: [GENERAL] Server stops responding randomly for 5 minutes

2011-07-13 Thread Chris Travers
I think your log files and the description of your issue provide at
least a decent idea of what is probably happening.  Unfortunately,
without being able to observe the server when it happens, I think you
are going to have limited success tracking this down.

Proximal causes could be network errors or software bugs.

On Wed, Jul 13, 2011 at 4:34 AM, Andrus  wrote:
> Tomáš and Tony,
>
> thank you.
>
>> Have you done some basic monitoring? This typically happens when the
>> machine does a lot of I/O (swapping, checkpoints, ...) - not sure how this
>> is logged.
>
> This is dedicated server, used only for PostgreSql.
>
> I filtered windows event logs near this time (12:16) . The is error
>
> The OpsMgr Connector could not connect to OPSMAN.y.xxx:5723.  The error
> code is 11004L(The requested name is valid, but no data of the requested
> type was found.).  Please verify there is network connectivity, the server
> is running and has registered it's listening port, and there are no
> firewalls blocking traffic to the destination.
>
> and OpsMgr related warnings. I don't know what OpsMgr is, but I expect that
> this is not related to this issue.

It's a product called Microsoft Operations Manager or MOM for short.

The error could be related even if the software is not.  I wouldn't
toss it out entirely yet.
>
> postgres log at this time shows
>
> 2011-07-11 11:45:27 EEST LOG:  could not receive data from client: No
> connection could be made because the target machine actively refused it.
>
> 2011-07-11 11:45:27 EEST LOG:  unexpected EOF on client connection
> 2011-07-11 12:18:35 EEST LOG:  could not receive data from client: No
> connection could be made because the target machine actively refused it.

Sounds like evidence of a network problem or unexpected application termination.

What I suspect is actually happening (although this is based on sparse
evidence, it may be a useful starting point) is that a session which
is locking certain records is getting spuriously disconnected and the
session is timing out along with locks.  During this timeout period,
the transaction is not committed or rolled back and the locks are
still there.  However that's hardly enough to solve the problem.

Given that you are running the application over RDP it would be
interesting to find out whether these happen when the RDP session is
closed.  Maybe the application is killed and has locks that aren't
released until it times out on the server?  Of course this only causes
an issue in the cases where there is a request for the same locks by
another user when the RDP session closes.


>
> 2011-07-11 12:18:35 EEST LOG:  unexpected EOF on client connection
> 2011-07-11 12:18:46 EEST LOG:  could not receive data from client: No
> connection could be made because the target machine actively refused it.
>
>
> 2011-07-11 12:18:46 EEST LOG:  unexpected EOF on client connection
> 2011-07-11 12:20:55 EEST LOG:  could not receive data from client: No
> connection could be made because the target machine actively refused it.
>
> those messages appear all the time, so I expect that they are also not
> related to this issue.

They could be though.  If you have a lock conflict and a connection
timeout that could be causing your problem.
>
>> First, you have to determine what's wrong. Set up some basic monitoring,
>> on Linux I'd use iostat/vmstat, not sure about the windows - try process
>> explorer from sysinternals.
>
> Users report the hangup later; I cannot monitor this at hangup time.
> No idea what or how to monitor more. Anyway, I added the lines
>
> log_lock_waits = on
> log_temp_files = 2000
> log_min_duration_statement = 1
> log_line_prefix='%t %u %d '
> log_min_error_statement = warning

add %r also to your log line prefix.
>
> to end of postgresql.conf and restarted server.
>
>>> 10-25 users manually entering sales order, e.q. low server usage
>>
>> So they're entering the data directly into the database?
>
> They are using a Windows application over RDP (from another computer in the LAN), which
> generates insert, update, and delete commands to the server for every entered order.

What happens when an RDP session times out or is disconnected without
closing an application?

I.e. if a user just closes the remote desktop session without closing
the app first, the app is killed, right?

>
>> Are you sure
>> there's not something wrong in the application (e.g. a loop that takes a
>> lot of time in some cases)?
>
> This application is used on many other sites without hangups.
> I haven't intentionally written such loops; statements cannot take so much
> time.

How many others are running over RDP?
>
>>> Server is installed using setup program from postgresql.org directed
>>> site,
>>> postresql.conf file is not modified after installation.
>>
>> Not sure what setup program you mean (there's an install from
>> EnterpriseDB), but the default config values are usually too low (e.g. the
>> default shared_buffers is 24MB IIRC, but in your case 512MB would probably be
>> better)

[GENERAL] SerializableSnapshot removed from postgresql 8.4

2011-07-13 Thread Duarte Fonseca
Hi list,

I'm currently upgrading from PostgreSQL 8.1 to 8.4. One of the steps of the 
process for me involves compiling the replication toolkit we use against 8.4.

I've just run into a problem, since this replication code references 
SerializableSnapshot, which has been removed in 8.4. I was wondering what should 
our code use instead. I found a thread[1] on the hackers mailing list where 
GetActiveSnapshot() was recommended; I would greatly appreciate it if someone 
could point me in the right direction on this.


The code in question goes something like:

if (SerializableSnapshot == NULL)
    elog(ERROR, "SerializableSnapshot is NULL ");

// Return the minxid from the current snapshot
PG_RETURN_TRANSACTIONID(SerializableSnapshot->xmin);

Thanks,

[1] - http://archives.postgresql.org/pgsql-hackers/2008-04/msg01556.php
--
Duarte Fonseca








Re: [GENERAL] Using LDAP roles in PostgreSQL

2011-07-13 Thread Chris Travers
On Wed, Jul 13, 2011 at 6:59 AM, Lars Kanis  wrote:

> Homepage: https://github.com/larskanis/pg-ldap-sync
>
> Is it something useful for someone apart of mine?

Hi Lars;

While I don't have an immediate use for it, it is very nice to know
such a tool exists, and I think it is likely that at some unspecified
point in the future, something like this might be helpful to my own
customers.

I do have a question though.  Does your application allow for creating
only users and groups in part of the LDAP tree?  Or does it have that
possibility yet?  Also can it be configured to ignore grants of
specific Pg roles to users?  I am not saying these are must-haves.
If I need them at some point I could probably add the features and
contribute the change back.  But it would be nice to know.

Just as an example of where I am going with this.  One of my main
projects (LedgerSMB) uses database roles to enforce permissions.  One
of the nice things is that password authentication can be passed
through to an LDAP server to provide SSO for an organization.  I plan
to forward this announcement to the list there as well as a
potentially useful tool.  I figure it is worth noting this on the list
because I can't imagine I am the only one doing this.

Best Wishes,
Chris Travers



Re: [GENERAL] SerializableSnapshot removed from postgresql 8.4

2011-07-13 Thread Jeff Davis
On Wed, 2011-07-13 at 18:10 +0100, Duarte Fonseca wrote:
> Hi list,
> 
> I'm currently upgrading from PostgreSQL 8.1 to 8.4. One of the steps of the 
> process for me involves compiling the replication toolkit we use against 8.4.
> 
> I've just run into a problem, since this replication code references 
> SerializableSnapshot, which has been removed in 8.4. I was wondering what 
> should our code use instead. I found a thread[1] on the hackers mailing list 
> where GetActiveSnapshot() was recommended; I would greatly appreciate it if 
> someone could point me in the right direction on this.
> 
> 
> The code in question goes something like:
> 
> if (SerializableSnapshot == NULL)
> elog(ERROR, "SerializableSnapshot is NULL ");
> 
> // Return the minxid from the current snapshot
> PG_RETURN_TRANSACTIONID(SerializableSnapshot->xmin);

I believe that equivalent code in 8.4 would look something like:

if (!IsXactIsoLevelSerializable || !ActiveSnapshotSet())
  elog(ERROR, "Could not find serializable snapshot");

PG_RETURN_TRANSACTIONID(GetActiveSnapshot()->xmin);


However, be careful! Some of this code changes again in 9.1. In 9.1, you
probably want to look for the "repeatable read" transaction.

Regards,
Jeff Davis





Re: [GENERAL] About permissions on large objects

2011-07-13 Thread Giuseppe Sacco
Hi Howard,

Il giorno mer, 13/07/2011 alle 13.18 +0100, Howard Cole ha scritto:
[...]
> As an interim solution, you could set the large object compatibility:
> 
> www.postgresql.org/docs/9.0/interactive/runtime-config-compatible.html#GUC-LO-COMPAT-PRIVILEGES

thanks for pointing to this option. I already evaluated it and decided
to keep 9.0 with new large object permissions since I think it is a good
thing.

Is there any other possibility?

Thanks to all,
Giuseppe




Re: [GENERAL] About permissions on large objects

2011-07-13 Thread Howard Cole

On 13/07/2011 8:15 PM, Giuseppe Sacco wrote:


Is there any other possibility?


Hi Guiseppe,

Perhaps you can create a trigger that monitors for the insertion of an 
oid and then grants permissions. No idea if this can be done, but if it 
can, it will save you lots of repeated grants.


An easier option is to use the compatibility setting and then, when you have 
updated your code, turn off the compatibility mode and run your 
script once.


Howard Cole
www.selestial.com



Re: [GENERAL] Using LDAP roles in PostgreSQL

2011-07-13 Thread Lars Kanis

Hi Chris,

> I do have a question though.  Does your application allow for creating
> only users and groups in part of the LDAP tree?  Or does it have that
> possibility yet? Also can it be configured to ignore grants of
> specific Pg roles to users?
Yes, filters on both sides can be set and they can be different for users and 
groups. The LDAP filter is according to RFC 2254 and the PG filter is plain 
SQL. 
You may also collect all synchronized roles into a PG-group, so that you can 
catch them easily. That's shown in https://github.com/larskanis/pg-ldap-
sync/blob/master/config/sample-config2.yaml

I just updated the README.txt according to your question.

> Just as an example of where I am going with this.  One of my main
> projects (LedgerSMB) uses database roles to enforce permissions.  One
> of the nice things is that password authentication could passed
> through to an LDAP server to provide SSO for an organization.
I use it together with Kerberos and with SSL certificate authentication. Since 
there are default privileges in Postgres 9.0, it is practical to use 
fine-grained privileges now.

> I plan
> to forward this announcement to the list there as well as a
> potentially useful tool.  I figure it is worth noting this on the list
> because I can't imagine I am the only one doing this.

Yes, thanks. I could announce it too, in case the list is writeable for me.

--
Regards,
Lars Kanis



Re: [GENERAL] Web-based Graphical Query Building Tool for PostgreSQL

2011-07-13 Thread Joseph Marlin
phpPgAdmin works great for me! You can do most simple operations without any SQL, 
including add, select, update, sort (ORDER BY), alter, create, drop, and so on. 
All of that can be done just by clicking buttons and labels in the browser. There 
is also the ability to execute actual SQL queries if your users know how. 
http://phppgadmin.sourceforge.net/doku.php?id=start



Re: [GENERAL] About permissions on large objects

2011-07-13 Thread Guillaume Lelarge
On Wed, 2011-07-13 at 23:30 +0100, Howard Cole wrote:
> On 13/07/2011 8:15 PM, Giuseppe Sacco wrote:
> >
> > Is there any other possibility?
> 
> Hi Guiseppe,
> 
> Perhaps you can create a trigger that monitors for the insertion of an 
> oid and then grant permissions. No idea if this can be done, but if it 
> can it will save you lots of repeated grants.
> 

Large Objects are inserted in a system table. And you cannot add
triggers to system tables. So this can't work.
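
Without triggers, one way to cut the repeated grants down (an untested
sketch against the 9.0 catalogs, assuming the group grant is the only ACL
change ever applied to these objects) is to grant only to large objects
whose ACL is still empty, via pg_largeobject_metadata:

```sql
DO $$
DECLARE r record;
BEGIN
  -- lomacl IS NULL: no explicit grants recorded yet for this object,
  -- so objects already processed in an earlier run are skipped.
  FOR r IN SELECT oid FROM pg_catalog.pg_largeobject_metadata
           WHERE lomacl IS NULL LOOP
    EXECUTE 'GRANT SELECT, UPDATE ON LARGE OBJECT ' || r.oid
            || ' TO agenzia_r';
  END LOOP;
END$$;
```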


-- 
Guillaume
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com




Re: [GENERAL] Weird problem that enormous locks

2011-07-13 Thread Tony Wang
Could I consider it a hardware problem, or PostgreSQL running too long, which
causes problems? (It ran for about half a month; however, it has run much longer
than that without problems.)

On Wed, Jul 13, 2011 at 00:52, Tony Wang  wrote:

> Hi,
>
> The configuration information is listed at the end.
> I ran into this problem last weekend. The symptom was that the db locks
> became enormous, up to 8.3k, and the db hung there. About half an hour to
> one hour later, it recovered: the locks dropped to one or two hundred, which was
> its average level. It happened every 5-8 hours.
>
> I checked the log, but found nothing interesting. Deadlock log entries
> appeared several times a day, but not while it was hanging. I had a cron job running
> every minute to record the locks using the command below:
>
> select pg_class.relname, pg_locks.mode, pg_locks.granted,
>        pg_stat_activity.current_query, pg_stat_activity.query_start,
>        pg_stat_activity.xact_start as transaction_start,
>        age(now(), pg_stat_activity.query_start) as query_age,
>        age(now(), pg_stat_activity.xact_start) as transaction_age,
>        pg_stat_activity.procpid
> from pg_stat_activity,
>      pg_locks left outer join pg_class on (pg_locks.relation = pg_class.oid)
> where pg_locks.pid = pg_stat_activity.procpid
>   and substr(pg_class.relname, 1, 3) != 'pg_'
> order by query_start;
>
> The only special thing I can find is that there were a lot of ExclusiveLocks,
> while normally the locks are only AccessShareLock and RowExclusiveLock.
>
> After suffering from that for the whole weekend, I restarted postgresql and my
> service, reduced the db pressure a bit by disabling some services, and it
> hasn't happened again so far.
>
> The possible reasons I can think of are that someone was reindexing an index, which
> is almost impossible, or a hardware problem, which is also hardly
> possible.
>
> Has anyone experienced that, or any suggestions on researching/debugging?
>
> The configuration information:
> System: Ubuntu server 10.04.2
> Postgresql version: 8.4.8-0ubuntu0.10.04
> CPU: Intel(R) Xeon(R) CPU X5650 @ 2.67GHz (24 cores)
> Disk: Fusion IO drive
> Memory: 32G
> Postgresql configuration:
>   max_connection = 800
>   shared_buffers = 2000MB
>   effective_cache_size = 14000MB
>   autovacuum = off
>
> --
> BR,
> Tony Wang
>


Re: [GENERAL] Weird problem that enormous locks

2011-07-13 Thread John R Pierce

On 07/13/11 6:55 PM, Tony Wang wrote:
Could I consider it a hardware problem, or postgresql running too long 
which causes problems? (It ran about half a month, however, it ran 
much longer than that without problems)


i have postgres servers that run for months and even years without problems.

based on what I see in your original posting, there's no way anyone on 
this list could possibly guess what is happening on your server.


--
john r pierceN 37, W 122
santa cruz ca mid-left coast




Re: [GENERAL] Web-based Graphical Query Building Tool for PostgreSQL

2011-07-13 Thread Craig Ringer

On 13/07/2011 9:02 PM, Caleb Palmer wrote:

Hi all,

My company sells software that uses PostgreSQL and the need has come up
to provide a tool that gives our clients access to query the database
but we don't want to expect these users to be able to use SQL.  Is there
a product out there that provides a graphical query builder?  Preferably
web-based.  I know that pgAdmin has a graphical query builder.


Check out iReport builder, Crystal Reports, etc.

If you need query building for purposes other than reporting, I'm not 
really sure where to go for that.


--
Craig Ringer


POST Newspapers
276 Onslow Rd, Shenton Park
Ph: 08 9381 3088 Fax: 08 9388 2258
ABN: 50 008 917 717
http://www.postnewspapers.com.au/



Re: [GENERAL] Weird problem that enormous locks

2011-07-13 Thread Tony Wang
On Thu, Jul 14, 2011 at 10:01, John R Pierce  wrote:

> On 07/13/11 6:55 PM, Tony Wang wrote:
>
>> Could I consider it a hardware problem, or postgresql running too long
>> which causes problems? (It ran about half a month, however, it ran much
>> longer than that without problems)
>>
>
> i have postgres servers that run for months and even years without
> problems.
>

Yeah, same for me.


>
> based on what I see in your original posting, there's no way anyone on this
> list could possibly guess what is happening on your server.
>

Sorry, but is there anything I'm missing? I just want to know any possible
situations that can cause high lock counts. The server has run for more than a year, I
didn't do any related updates recently, and it just happened.


>
> --
> john r pierceN 37, W 122
> santa cruz ca mid-left coast
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Weird problem that enormous locks

2011-07-13 Thread John R Pierce

On 07/13/11 7:16 PM, Tony Wang wrote:
On Thu, Jul 14, 2011 at 10:01, John R Pierce wrote:


On 07/13/11 6:55 PM, Tony Wang wrote:

Could I consider it a hardware problem, or postgresql running
too long which causes problems? (It ran about half a month,
however, it ran much longer than that without problems)


i have postgres servers that run for months and even years without
problems.


Yeah, same for me.


based on what I see in your original posting, there's no way
anyone on this list could possibly guess what is happening on your
server.


Sorry but is there anything I'm missing? I just want to know any 
possible situation can cause high locks. The server runs for more than 
a year, and I didn't do any related update recently and it just happened.


If I run into locking problems, the first thing *I* do is look at 
pg_stat_activity to see what sort of queries are active, and relate the 
transaction OIDs to the pg_locks and the queries to figure out what's 
locking on what, which it appears your join is doing.  If you had 
that many exclusive locks, just what were the queries making these 
locks doing?


We don't know what sort of schema you have, what kind of queries your 
applications make, etc.  Were there any hardware events related 
to storage in the kernel message buffer (displayed by dmesg(1) on most 
unix and linux systems)?  If Linux, has the OOM killer run amok? (This 
also should be logged in dmesg.)



800 concurrent connections is a very large number for a server that has 
at most a dozen cores. (You say you have an X5650; that's a 6-core 
processor, which supports at most 2 sockets, for 12 cores total. These 
12 cores support hyperthreading, which allows 24 total threads.) With 
24 hardware threads and 800 queries running, you'd have 33 queries 
contending for each CPU, which will likely result in LOWER total 
performance than if you tried to execute fewer queries at once. If 
most of those connections are idle at a given time, you should likely 
consider using a connection pooler with a much smaller max_connections, 
say, no more than 100 or so.
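
A quick way to check how many of those 800 connections are actually idle
(a sketch against the 8.4 catalogs, where idle backends show `<IDLE>` in
current_query):

```sql
-- Counts idle vs. active backends; if is_idle = t dominates,
-- a pooler with a much smaller pool size is the right move.
SELECT current_query = '<IDLE>' AS is_idle, count(*)
FROM pg_stat_activity
GROUP BY 1
ORDER BY 2 DESC;
```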




--
john r pierceN 37, W 122
santa cruz ca mid-left coast




Re: [GENERAL] Weird problem that enormous locks

2011-07-13 Thread Tony Wang
On Thu, Jul 14, 2011 at 10:35, John R Pierce  wrote:

> On 07/13/11 7:16 PM, Tony Wang wrote:
>
> On Thu, Jul 14, 2011 at 10:01, John R Pierce wrote:
>>
>>On 07/13/11 6:55 PM, Tony Wang wrote:
>>
>>Could I consider it a hardware problem, or postgresql running
>>too long which causes problems? (It ran about half a month,
>>however, it ran much longer than that without problems)
>>
>>
>>i have postgres servers that run for months and even years without
>>problems.
>>
>>
>> Yeah, same for me.
>>
>>
>>based on what I see in your original posting, there's no way
>>anyone on this list could possibly guess what is happening on your
>>server.
>>
>>
>> Sorry but is there anything I'm missing? I just want to know any possible
>> situation can cause high locks. The server runs for more than a year, and I
>> didn't do any related update recently and it just happened.
>>
>
> If I run into locking problems, the first thing *I* do is look at
> pg_stat_activity to see what sort of queries are active, and relate the
> transaction OIDs to the pg_locks and the queries to figure out whats locking
> on what, which it appears your join is doingIf you had that many
> exclusive_locks,  just what were the queries making these locks doing?
>

It's a game server, and the queries are updating users' money, as normal.
The sql is like "UPDATE player SET money = money + 100 where id = 12345".
The locks were RowExclusiveLock for the table "player" and the indexes. The
weird thing is there was another ExclusiveLock for the table "player", i.e.
"player" got two locks, one RowExclusiveLock and one ExclusiveLock.

In the PostgreSQL documentation
(http://www.postgresql.org/docs/8.4/static/explicit-locking.html), it says
about the Exclusive lock mode: "This lock mode is not automatically acquired on user
tables by any PostgreSQL command."
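
A sketch (against the 8.4 catalogs, where pg_stat_activity still uses
procpid) of how to find the session holding that table-level ExclusiveLock
the next time it appears:

```sql
-- Which backend holds (or waits for) a table-level ExclusiveLock
-- on "player", and what is it running?
SELECT l.pid, l.mode, l.granted, a.current_query, a.query_start
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE l.relation = 'player'::regclass
  AND l.mode = 'ExclusiveLock';
```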


>
> We don't know what sort of schema you have, what kind of queries your
> applications make, etc etc etc.   were there any hardware events related to
> storage in the kernel message buffer (displayed by dmesg (1) on most unix
> and linux systems) ?   If linux, has the oomkiller run amok? (this also
> should be logged in dmesg)
>

Mostly updating players' info, and another table called items for the item
ownership of users.
As I listed, I'm using Ubuntu 10.04. I didn't find useful messages there.
Does oomkiller mean the out-of-memory killer? From the munin graph, the memory
usage is quite normal.


>
>
> 800 concurrent connections is a very large number for a server that has at
> most a dozen cores. (you say you have x5650, thats a 6 core processor, which
> supports at most 2 sockets, for 12 cores total.  these 12 cores support
> hyperthreading, which allows 24 total threads).  With 24 hardware threads
> and 800 queries running, you'd have 33 queries contending for each CPU,
> which likely will result in LOWER total performance than if you tried to
> execute fewer queries at once.If most of those connections are idle at a
> given time, you likely should consider using a connection pooler with a lot
> fewer max_connections, say, no more than 100 or so.


Yeah, that's what I planned to do next.

Thanks for your concerns! :)


>
>
>
>
> --
> john r pierceN 37, W 122
> santa cruz ca mid-left coast
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Weird problem that enormous locks

2011-07-13 Thread John R Pierce

On 07/13/11 8:47 PM, Tony Wang wrote:
It's a game server, and the queries are updating users' money, as 
normal. The sql is like "UPDATE player SET money = money + 100 where 
id = 12345". The locks were RowExclusiveLock for the table "player" 
and the indexes. The weird thing is there was another ExclusiveLock 
for the table "player", i.e. "player" got two locks, 
one RowExclusiveLock and one ExclusiveLock.


that query should be quite fast. is it part of a larger transaction?  is 
there any possibility of multiple sessions/connections accessing the 
same player.id?



it would be interesting to identify the process that issued the 
exclusive lock and determine what query/queries it has made.  if it's not 
apparent in pg_stat_activity, perhaps enable logging of all DDL 
commands and check the logs.


if there's a lot of active queries (you have 800 connections)

select count(*),current_query from pg_stat_activity group by 
current_query order by count(*) desc;


can help you make sense of them.


--
john r pierceN 37, W 122
santa cruz ca mid-left coast




Re: [GENERAL] Weird problem that enormous locks

2011-07-13 Thread Tony Wang
On Thu, Jul 14, 2011 at 12:35, John R Pierce  wrote:

> On 07/13/11 8:47 PM, Tony Wang wrote:
>
>> It's a game server, and the queries are updating users' money, as normal.
>> The sql is like "UPDATE player SET money = money + 100 where id = 12345".
>> The locks were RowExclusiveLock for the table "player" and the indexes. The
>> weird thing is there was another ExclusiveLock for the table "player", i.e.
>> "player" got two locks, one RowExclusiveLock and one ExclusiveLock.
>>
>
> that query should be quite fast. is it part of a larger transaction?  is
> there any possibility of multiple sessions/connections accessing the same
> player.id?
>
>
That's possible, but I think only one row would be locked for a while, not
thousands of locks held for an hour. It's rare that thousands of users update
the value at once.


>
> it would be interesting to identify the process that issued the exclusive
> lock and determine what query/queries its made.  if its not apparent in
> pg_stat_activity, perhaps enable logging of all DDL commands, and check the
> logs.
>

yeah, I've set log_statement to "all" now. Previously, it only logged
slow queries taking more than 50ms. I should be able to learn something
from the logs if it happens again (hope not).


>
> if there's a lot of active queries (you ahve 800 connections)
>
>select count(*),current_query from pg_stat_activity group by
> current_query order by count(*) desc;
>

that's helpful, thanks.


>
> can help you make sense of them.
>
>
>
> --
> john r pierceN 37, W 122
> santa cruz ca mid-left coast
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>