Re: [GENERAL] pg advisory locks

2014-10-13 Thread Alban Hertroys

On 12 Oct 2014, at 18:41, Arun Gokule arun.gok...@gmail.com wrote:

 Hi,
 
 I am executing pg_advisory_locks using the following set of statements:
 
  SELECT pg_advisory_lock(317,2);
  UPDATE posts SET dislikers = array_remove(dislikers, 7) WHERE id = 317;
  UPDATE posts SET num_dislikes = icount(dislikers), updated_at = now()
  WHERE id = 317;
  WITH update_likers AS (SELECT pg_advisory_unlock(317,2))
  SELECT num_likes, num_dislikes,
 (7 IN (SELECT unnest(likers))) AS liked,
 (7 IN (SELECT unnest(dislikers))) AS disliked
  FROM posts WHERE id = 317 LIMIT 1;
 
 These are issued from a multithreaded app. For roughly one in 1000 queries, I 
 get a deadlock after the execution of the above set of statements, i.e. 
 SELECT pg_try_advisory_lock(317,2) returns false. Is there something obvious 
 that I am doing wrong?

Your last query never actually calls the unlock: that WITH section is going 
to be ignored because the main query doesn’t reference it.
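One way to guarantee the unlock runs is to reference the CTE from the outer query. A sketch against the schema implied above (pg_advisory_unlock returns a boolean, which can simply be exposed as an extra column):

```sql
WITH unlocked AS (
    SELECT pg_advisory_unlock(317, 2) AS released
)
SELECT p.num_likes,
       p.num_dislikes,
       7 = ANY (p.likers)    AS liked,
       7 = ANY (p.dislikers) AS disliked,
       u.released
FROM posts p CROSS JOIN unlocked u
WHERE p.id = 317;
```

Simpler still, and harder to get wrong, is to issue SELECT pg_advisory_unlock(317,2); as its own statement after the final SELECT.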

Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 9.3 migration issue

2014-10-13 Thread Vick Khera
On Mon, Oct 13, 2014 at 12:11 AM, Stephen Davies sdav...@sdc.com.au wrote:
 I have fixed this by manually granting access where necessary but wonder
 whether the original issue is a bug or something that I have missed in the
 migration.

pg_dump emits the necessary GRANTs for the tables.

Did you use pg_dumpall --globals-only to copy over your users and
their settings?
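For reference, a typical migration sequence covering both the per-database objects and the globals might look like this (database names are placeholders, and the commands need a live cluster on each side):

```shell
# On the old cluster: dump roles/tablespaces, then each database.
pg_dumpall --globals-only > globals.sql
pg_dump -Fc mydb > mydb.dump

# On the new cluster: restore globals first, so the GRANTs in the
# per-database dumps have roles to attach to, then the databases.
psql -d postgres -f globals.sql
createdb mydb
pg_restore -d mydb mydb.dump
```

If the globals step is skipped, the GRANT statements in the per-database restore fail for any missing role, which would produce exactly the symptom described here.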




Re: [GENERAL] 9.3 migration issue

2014-10-13 Thread Jeff Janes
On Sun, Oct 12, 2014 at 9:11 PM, Stephen Davies sdav...@sdc.com.au wrote:

 I am in the process of migrating several PostgreSQL databases from a
 32-bit V9.1.4 environment to a 64-bit V9.3 environment.

 I have used pg_dump and pg_restore (or postgis_restore.pl) as required by
 the combination of version and word size migration and the results have
 been (superficially) good.

 However, some tables in some databases have lost access privileges.
 That is, users who could access tables on the old server are denied access
 on the new.
 I have fixed this by manually granting access where necessary but wonder
 whether the original issue is a bug or something that I have missed in the
 migration.


Did you get any error messages during the load?

Cheers,

Jeff


Re: [GENERAL] 9.3 migration issue

2014-10-13 Thread Stephen Davies

Nope. All went very smoothly apart from these grant issues.

On 14/10/14 01:57, Jeff Janes wrote:

On Sun, Oct 12, 2014 at 9:11 PM, Stephen Davies sdav...@sdc.com.au wrote:

I am in the process of migrating several PostgreSQL databases from a
32-bit V9.1.4 environment to a 64-bit V9.3 environment.

I have used pg_dump and pg_restore (or postgis_restore.pl) as required by the
combination of version and word size migration and the results have been
(superficially) good.

However, some tables in some databases have lost access privileges.
That is, users who could access tables on the old server are denied access
on the new.
I have fixed this by manually granting access where necessary but wonder
whether the original issue is a bug or something that I have missed in the
migration.


Did you get any error messages during the load?

Cheers,

Jeff



--
=
Stephen Davies Consulting P/L  Phone: 08-8177 1595
Adelaide, South Australia.  Mobile: 040 304 0583
Records & Collections Management.




Re: [GENERAL] 9.3 migration issue

2014-10-13 Thread Stephen Davies

No. Just pg_dump and pg_restore/postgis_restore.pl.

On 13/10/14 22:24, Vick Khera wrote:

On Mon, Oct 13, 2014 at 12:11 AM, Stephen Davies sdav...@sdc.com.au wrote:

I have fixed this by manually granting access where necessary but wonder
whether the original issue is a bug or something that I have missed in the
migration.


pg_dump emits the necessary GRANTs for the tables.

Did you use pg_dumpall --globals-only to copy over your users and
their settings?





--
=
Stephen Davies Consulting P/L  Phone: 08-8177 1595
Adelaide, South Australia.  Mobile: 040 304 0583
Records & Collections Management.




Re: [GENERAL] 9.3 migration issue

2014-10-13 Thread Adrian Klaver

On 10/13/2014 04:27 PM, Stephen Davies wrote:

Nope. All went very smoothly apart from these grant issues.


I think what Jeff was after was any error messages related to the grant 
issues. I would expect that if users were granted access to tables and are 
now denied, there would have been an error on restore when that GRANT was 
issued.




On 14/10/14 01:57, Jeff Janes wrote:

On Sun, Oct 12, 2014 at 9:11 PM, Stephen Davies sdav...@sdc.com.au wrote:

I am in the process of migrating several PostgreSQL databases from a
32-bit V9.1.4 environment to a 64-bit V9.3 environment.

I have used pg_dump and pg_restore (or postgis_restore.pl) as required by
the combination of version and word size migration and the results have
been (superficially) good.

However, some tables in some databases have lost access privileges.
That is, users who could access tables on the old server are
denied access
on the new.
I have fixed this by manually granting access where necessary but
wonder
whether the original issue is a bug or something that I have
missed in the
migration.


Did you get any error messages during the load?

Cheers,

Jeff






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Processor usage/tuning question

2014-10-13 Thread Israel Brewster

---
Israel Brewster
Systems Analyst II
Ravn Alaska
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7293
---



On Oct 10, 2014, at 1:04 PM, Jim Nasby jim.na...@bluetreble.com wrote:

 On 10/8/14, 3:17 PM, Israel Brewster wrote:
 Except that the last data point received is still valid as the aircraft's 
 current location, even if it came in several hours ago - which we may well 
 care about. That is, some users may need to see where a given aircraft (or 
 the entire fleet) is, even if an aircraft hasn't updated in a while. That 
 said, I did discuss this with my higher-ups, and got the ok to take it down 
 to four hours.
 
 Note that in your explain output nothing is filtering by time at all; are you 
 sure you posted the right explain?
 
 I don't think PostgreSQL is going to be able to reason very effectively 
 about a ROW_NUMBER() in an inner table and then a row=5 in the outer one 
 being equivalent to a LIMIT query, for which it could walk an index and 
 stop once it finds 5 of them.
 
 Does this need to issued as a single query?  Why not issue 55 different 
 queries?  It seems like the client is likely going to need to pick the 
 returned list back out by tail number anyway, so both the client and the 
 server might be happier with separate queries.
 
 Good point. Doing that brought the entire execution time down to around 
 60ms. Just ever so slightly better than the ~1200ms I was getting before. 
 :-) I just have an unreasonable aversion to making multiple queries. I guess 
 this is a prime example of why I need to overcome that :-)
 
 Do you actually need the last 5 points? If you could get away with just the 
 most recent point, SELECT DISTINCT ON might do a better job of this in a 
 single query.

At the moment, unfortunately yes - I have to do some calculations based on the 
past few data points. At some point I should be able to re-work the system such 
that said calculations are done when the points are saved, rather than when 
they are retrieved, which would be beneficial for a number of reasons. However, 
until I can get that done I need multiple points here.
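For reference, the single-newest-point variant Jim mentions could be sketched like this (the table and column names here are guesses, not the real schema):

```sql
-- One row per aircraft: its most recent position report.
SELECT DISTINCT ON (tail_number)
       tail_number, lat, lon, reported_at
FROM positions
ORDER BY tail_number, reported_at DESC;
```

DISTINCT ON keeps the first row of each tail_number group, and the ORDER BY makes that first row the newest report.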

 
 As for the concern about issuing multiple queries, if you code this into a 
 database function it should still be quite fast because there won't be any 
 round-trip between your application and the database.

I've had phenomenally bad luck with coding queries into database functions. I 
had a number of functions written at one point that allowed me to do things 
like SELECT table.function, other_column FROM table - until I noticed that 
those queries ran significantly slower than writing the query I had encoded 
in the function as a sub-query instead. I was doing the same sub-queries in a 
bunch of different places, so I figured it would clarify things if I could 
just code them into a DB function that I called just like a column. It's been 
a while since I looked at those, however, so I can't say why they were slow. 
This usage may not suffer from the same problem.
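One plausible explanation (an assumption; the thread doesn't say how those functions were written) is that they were PL/pgSQL, which the planner treats as an opaque call and re-executes per row. A simple SQL-language function declared STABLE can often be inlined into the calling query, avoiding that penalty. A hypothetical sketch, reusing the icount() function mentioned earlier in the thread:

```sql
-- Hypothetical function over the posts table, for illustration only.
CREATE FUNCTION dislike_count(p posts) RETURNS int
LANGUAGE sql STABLE AS $$
    SELECT coalesce(icount(p.dislikers), 0)
$$;

-- Attribute notation lets it be called like a column:
-- SELECT p.dislike_count, p.id FROM posts p;
```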

 
 Something else to consider is having a second table that only keeps the last 
 X aircraft positions. I would do this by duplicating every insert into that 
 table via a trigger, and then have a separate process that ran once a minute 
 to delete any records other than the newest X. Because that table would 
 always be tiny, queries against it should be blazing fast. Do note that 
 you'll want to vacuum that table frequently, like right after each delete.

Indeed. I think I'm happy with the performance of the multiple queries, but 
this would doubtless be the best option (from a performance standpoint), as 
the table would be small and my select would be essentially SELECT * FROM 
TABLE, with a potential WHERE ... IN... clause.
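A rough sketch of that mirror-table approach (all names are hypothetical; it assumes a positions table with id, tail_number, and reported_at columns):

```sql
CREATE TABLE recent_positions (LIKE positions INCLUDING ALL);

-- Duplicate every insert into the small table.
CREATE FUNCTION mirror_position() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    INSERT INTO recent_positions VALUES (NEW.*);
    RETURN NEW;
END;
$$;

CREATE TRIGGER mirror_position_trg
AFTER INSERT ON positions
FOR EACH ROW EXECUTE PROCEDURE mirror_position();

-- Periodic trim job: keep only the newest 5 rows per aircraft,
-- then vacuum the (tiny) table, per Jim's advice.
DELETE FROM recent_positions
WHERE id IN (
    SELECT id FROM (
        SELECT id, row_number() OVER (
            PARTITION BY tail_number
            ORDER BY reported_at DESC) AS rn
        FROM recent_positions) ranked
    WHERE rn > 5);
VACUUM recent_positions;
```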

Thanks for all the help!

 -- 
 Jim Nasby, Data Architect, Blue Treble Consulting
 Data in Trouble? Get it in Treble! http://BlueTreble.com
 
 




Re: [GENERAL] 9.3 migration issue

2014-10-13 Thread Stephen Davies

I no longer have the logs but I do not recall any errors during the restores.

The first I knew of the issue was when scripts started failing because access 
was denied to some tables for the nominated user.


Due to other, non-PostgreSQL issues, I am going to have to repeat some of the 
migrations next week. I shall watch carefully for any grant errors.


Cheers,
Stephen

On 14/10/14 10:05, Adrian Klaver wrote:

On 10/13/2014 04:27 PM, Stephen Davies wrote:

Nope. All went very smoothly apart from these grant issues.


I think what Jeff was after was any error messages related to the grant
issues. I would expect that if users were granted access to tables and are now
denied, there would have been an error on restore when that GRANT was issued.



On 14/10/14 01:57, Jeff Janes wrote:

On Sun, Oct 12, 2014 at 9:11 PM, Stephen Davies sdav...@sdc.com.au wrote:

I am in the process of migrating several PostgreSQL databases from a
32-bit V9.1.4 environment to a 64-bit V9.3 environment.

I have used pg_dump and pg_restore (or postgis_restore.pl) as required by
the combination of version and word size migration and the results have
been (superficially) good.

However, some tables in some databases have lost access privileges.
That is, users who could access tables on the old server are
denied access
on the new.
I have fixed this by manually granting access where necessary but
wonder
whether the original issue is a bug or something that I have
missed in the
migration.


Did you get any error messages during the load?

Cheers,

Jeff









--
=
Stephen Davies Consulting P/L  Phone: 08-8177 1595
Adelaide, South Australia.  Mobile: 040 304 0583
Records & Collections Management.




Re: [GENERAL] 9.3 migration issue

2014-10-13 Thread Adrian Klaver

On 10/13/2014 04:28 PM, Stephen Davies wrote:

No. Just pg_dump and pg_restore/postgis_restore.pl.


Roles (users) are global to a cluster, so they will not be picked up by 
pg_dump. You have two options:


1) Using pg_dumpall to dump the entire cluster into a text file

http://www.postgresql.org/docs/9.3/interactive/app-pg-dumpall.html

$ pg_dumpall > db.out

2) Or do pg_dump on the individual databases and pg_dumpall -g to get 
just the global objects, which is what Vick Khera was getting at.


-g
--globals-only

Dump only global objects (roles and tablespaces), no databases.




On 13/10/14 22:24, Vick Khera wrote:

On Mon, Oct 13, 2014 at 12:11 AM, Stephen Davies sdav...@sdc.com.au
wrote:

I have fixed this by manually granting access where necessary but wonder
whether the original issue is a bug or something that I have missed
in the
migration.


pg_dump emits the necessary GRANTs for the tables.

Did you use pg_dumpall --globals-only to copy over your users and
their settings?








--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] 9.3 migration issue

2014-10-13 Thread Stephen Davies

Thanks for that. I shall use it when I do the repeat migration.

Cheers,
Stephen

On 14/10/14 10:21, Adrian Klaver wrote:

On 10/13/2014 04:28 PM, Stephen Davies wrote:

No. Just pg_dump and pg_restore/postgis_restore.pl.


Roles (users) are global to a cluster, so they will not be picked up by pg_dump.
You have two options:

1) Using pg_dumpall to dump the entire cluster into a text file

http://www.postgresql.org/docs/9.3/interactive/app-pg-dumpall.html

$ pg_dumpall > db.out

2) Or do pg_dump on the individual databases and pg_dumpall -g to get just the
global objects, which is what Vick Khera was getting at.

-g
--globals-only

 Dump only global objects (roles and tablespaces), no databases.




On 13/10/14 22:24, Vick Khera wrote:

On Mon, Oct 13, 2014 at 12:11 AM, Stephen Davies sdav...@sdc.com.au
wrote:

I have fixed this by manually granting access where necessary but wonder
whether the original issue is a bug or something that I have missed
in the
migration.


pg_dump emits the necessary GRANTs for the tables.

Did you use pg_dumpall --globals-only to copy over your users and
their settings?











--
=
Stephen Davies Consulting P/L  Phone: 08-8177 1595
Adelaide, South Australia.  Mobile: 040 304 0583
Records & Collections Management.




Re: [GENERAL] FK check implementation

2014-10-13 Thread Nick Barnes
On Sat, Oct 11, 2014 at 5:01 AM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 On 10/10/2014 10:41 AM, Nick Barnes wrote:


 I understand why the FK insert needs to lock on the PK row. But why is
 the PK delete trying to lock the FK row? If it finds one, won't the
 delete fail anyway? If it doesn't find one, what is there to lock?


 I would say this has to do with setting DEFERRABLE on a constraint.


Any guesses why this might be? I would have thought that by this point,
where we're actually performing the check, anything related to deferring
the check would be behind us.

And even if we do require a lock, why FOR KEY SHARE? As I understand it,
this won't lock the referencing field, which should be the only thing in
the FK relation that we're interested in.
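For context, the checks the built-in RI triggers issue look roughly like this (paraphrased from ri_triggers.c in the PostgreSQL source; table and column names are stand-ins):

```sql
-- On INSERT/UPDATE of the referencing (FK) table: verify the PK row
-- exists, and pin its key against concurrent change or deletion.
SELECT 1 FROM ONLY pk_table x WHERE x.id = $1 FOR KEY SHARE OF x;

-- On DELETE/UPDATE of the referenced (PK) table (NO ACTION / RESTRICT):
-- look for any referencing row, likewise taking FOR KEY SHARE on it.
SELECT 1 FROM ONLY fk_table x WHERE x.pk_id = $1 FOR KEY SHARE OF x;
```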