Re: [GENERAL] Use cases for lateral that do not involve a set returning function

2014-12-09 Thread Albe Laurenz
AJ Welch wrote:
 http://blog.heapanalytics.com/postgresqls-powerful-new-join-type-lateral/
 
 I suspected some of the claims in the post may not have been accurate. This 
 one in particular:
 
 Without lateral joins, we would need to resort to PL/pgSQL to do this 
 analysis. Or, if our data set
 were small, we could get away with complex, inefficient queries.
 
 
 The sum(1) and order by time limit 1 approach seemed less than ideal to me 
 and I thought this analysis
 could be done with normal left joins instead of lateral left joins. So I came 
 up with a proof of
 concept:
 
 https://github.com/ajw0100/snippets/tree/master/SQL/lateral
 
 
 Is my conclusion in the README correct? Does anything beyond 
 select...from...where force a nested
 loop? In that case, is lateral really only useful with set returning 
 functions as the docs suggest?
 Does anyone know of any use cases for lateral that do not involve a set 
 returning function?

Only recently I used lateral joins to optimize a query.

This is a sample of how the query looked before:

SELECT ...
FROM people p
   LEFT JOIN names n
      ON (n.people_id = p.people_id
          AND current_timestamp > n.validfrom
          AND NOT EXISTS (SELECT 1 FROM names n2
                          WHERE n2.people_id = p.people_id
                            AND current_timestamp > n2.validfrom
                            AND n2.validfrom > n.validfrom)
         )
WHERE p.id = ...

So basically it is supposed to find the latest valid name for a person.

This required two scans of the names table per person record.

I rewrote it as

SELECT ...
FROM people p
   LEFT JOIN LATERAL (SELECT * FROM names n
                      WHERE n.people_id = p.people_id
                        AND current_timestamp > n.validfrom
                      ORDER BY n.validfrom DESC LIMIT 1) n
      ON TRUE
WHERE p.id = ...

With the correct index this touched fewer blocks and worked faster.
Also, though this is of course a matter of taste, it is more readable.
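
For reference, a minimal sketch of the kind of index I mean (using the
column names from the example above; adjust to taste):

CREATE INDEX names_people_validfrom_idx
    ON names (people_id, validfrom DESC);

With that, the LATERAL subquery can satisfy its ORDER BY ... LIMIT 1 with
a single index probe per person instead of sorting all of that person's
names.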

Of course this forces a nested loop, but that is not bad as such.
In my case it was not a problem (I tried to hint at that with the WHERE clause).

So yes, I think that LATERAL is useful even without set returning functions.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Removing duplicate records from a bulk upload (rationale behind selecting a method)

2014-12-09 Thread Daniel Begin
Thanks Tom,
I understand that the rationale behind choosing to create a new table from
distinct records is that, since both approaches need full table scans,
selecting distinct records is faster (and more straightforward) than
finding and deleting duplicates.

Best regards,
Daniel

-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Tom Lane
Sent: December-08-14 21:52
To: Scott Marlowe
Cc: Andy Colson; Daniel Begin; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Removing duplicate records from a bulk upload
(rationale behind selecting a method)

Scott Marlowe scott.marl...@gmail.com writes:
 If you're de-duping a whole table, no need to create indexes, as it's 
 gonna have to hit every row anyway. Fastest way I've found has been:

 select a,b,c into newtable from oldtable group by a,b,c;

 One pass, done.

 If you want to use less than the whole row, you can use select 
 distinct on (col1, col2) * into newtable from oldtable;

Also, the DISTINCT ON method can be refined to control which of a set of
duplicate keys is retained, if you can identify additional columns that
constitute a preference order for retaining/discarding dupes.  See the
"latest weather reports" example in the SELECT reference page.
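
For instance, a hedged sketch of that refinement (the "created" column is
made up here; it stands for whatever defines your preference order):

SELECT DISTINCT ON (col1, col2) *
  INTO newtable
  FROM oldtable
 ORDER BY col1, col2, created DESC;  -- keeps the newest row per (col1, col2)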

In any case, it's advisable to crank up work_mem while performing this
operation.
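
For a one-off dedupe, that can be as simple as (the value is only an
example):

SET work_mem = '512MB';  -- session-local; RESET work_mem afterwards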

regards, tom lane


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make
changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] pg_restore -n sch1 : schema sch1 does not exist

2014-12-09 Thread lin
Hi, all:
  I created a schema (sch1) in the database db1, and I also created a
table sch1.t1; then I used the command "pg_dump -d db1 -Fd -f dir1" to back up
the database db1; at last I dropped the schema sch1. I want to restore
the schema sch1, but when I execute the command "pg_restore -d db1 -Fd dir1 -n
sch1", I get the errors below.
  I just want to test the -n parameter of pg_restore; what can I do to
use pg_restore correctly?


[wln@localhost test]$ pg_restore  -d db1 -Fd dir1 -n sch1
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 171; 1259 24669 TABLE t1 wln
pg_restore: [archiver (db)] could not execute query: ERROR:  permission denied
to create "pg_catalog.t1"
DETAIL:  System catalog modifications are currently disallowed.
Command was: CREATE TABLE t1 (
id integer
);

pg_restore: [archiver (db)] could not execute query: ERROR:  schema "sch1" does
not exist
Command was: ALTER TABLE sch1.t1 OWNER TO wln;

pg_restore: [archiver (db)] Error from TOC entry 2777; 0 24669 TABLE DATA t1 wln
pg_restore: [archiver (db)] could not execute query: ERROR:  relation "t1" does
not exist
Command was: COPY t1 (id) FROM stdin;


WARNING: errors ignored on restore: 3

[wln@localhost test]$ pg_restore  -d db1 -Fd dir1 -n sch1 -l
;
; Archive created at Fri Sep 26 10:26:43 2014
; dbname: db1
; TOC Entries: 11
; Compression: -1
; Dump Version: 1.12-0
; Format: DIRECTORY
; Integer: 4 bytes
; Offset: 8 bytes
; Dumped from database version: 9.3beta2
; Dumped by pg_dump version: 9.3beta2
;
;
; Selected TOC Entries:
;
171; 1259 24669 TABLE sch1 t1 wln
2777; 0 24669 TABLE DATA sch1 t1 wln

Thanks,
 wanglin


Re: [GENERAL] pg_restore -n sch1 : schema sch1 does not exist

2014-12-09 Thread Tom Lane
lin jlu...@163.com writes:
   I created a schema (sch1) in the database db1, and I also created a
 table sch1.t1; then I used the command "pg_dump -d db1 -Fd -f dir1" to back up
 the database db1; at last I dropped the schema sch1. I want to restore
 the schema sch1, but when I execute the command "pg_restore -d db1 -Fd dir1
 -n sch1", I get the errors below.
   I just want to test the -n parameter of pg_restore; what can I do
 to use pg_restore correctly?

Well, you could adopt an actually supported release:

 ; Dumped from database version: 9.3beta2
 ; Dumped by pg_dump version: 9.3beta2

I'm not just being snarky: the messages you're getting here remind me of a
bug we fixed some time ago.  But even aside from this specific complaint,
the list of known data-loss bugs in 9.3beta should terrify you.  Why in
the world are you still using it?

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Streaming Replication - changing IP addresses

2014-12-09 Thread Dara Unglaube
Thanks so very much for the info, John! It will help a lot to have it set up
locally.

I have it set up across the local network and double checked with
pg_stat_replication and it was using the local IP. But now I am not sure
about the archive_command command on the master. We had it set up to
archive onto the slave through the archive_command using SSH & rsync
because we were going through the public IP. Below is the command line.

archive_command = 'rsync -az -e \"/cygdrive/c/cygwin64/bin/ssh.exe\"
/cygdrive/d/PostgreSQL/9.1/data/%p
postgres@0.0.0.0:/cygdrive/d/PostgreSQL/9.1/data/wals/%f'

Is this necessary? What is the benefit of having the archive on? Could I
map a network drive from the slave to the master and set the
archive_command to that mapped drive? Or what would be the best approach
for this?

Thanks so very much!
Dara

On Mon, Dec 8, 2014 at 2:11 PM, John R Pierce pie...@hogranch.com wrote:

  On 12/8/2014 11:56 AM, Dara Unglaube wrote:

 We have streaming replication set up on two servers that are on our local
 network using their external/public IP addresses. We are switching internet
 providers and need to change the external/public IP addresses of both
 servers. I'm not sure how to go about this correctly.

1. Our local network IP addresses will not be changing, is there any
reason we could/should not use our local network IP addresses in the
streaming replication process?
 2. What is the best route to make these changes? Is it as simple as
turning postgres off on both servers, changing the IP settings (master -
postgres.conf and hba.conf and slave - recovery.conf files to the local
network IPs) and starting the service up again? Can anyone point me to an
example?

 Any tips, suggestions, help you could provide would be greatly appreciated.

 Thanks in advance for your time!


 if these two servers are both local, there's absolutely no reason to use
 public IP space for this, they should be using their local addresses,
 faster, lower latency, otherwise each round trip packet is going through 4
 layers of NAT (out, in, then out, in again to return).

 make sure the master allows the slave replication account to connect via
 its local IP (this would be in pg_hba.conf, and  a 'reload'), then change
 the address the slave is using to connect to the master, and restart the
 slave, and it all should be good.
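
 As a sketch (the addresses and replication role name below are invented,
 not taken from your setup):

 # master, pg_hba.conf -- allow the slave's local address, then reload
 host  replication  replicator  192.168.1.20/32  md5

 # slave, recovery.conf -- point at the master's local address, then restart
 primary_conninfo = 'host=192.168.1.10 port=5432 user=replicator'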



 --
 john r pierce  37N 122W
 somewhere on the middle of the left coast

  --

*Dara J. Olson Unglaube*
Aquatic Invasive Species Coordinator
Great Lakes Indian Fish & Wildlife Commission
P.O. Box 9, 72682 Maple Street
Odanah, WI 54861
(715) 682-6619 ext.2129
dol...@glifwc.org
maps.glifwc.org


Re: [GENERAL] Help Optimizing a Summary Query

2014-12-09 Thread Robert DiFalco
I'm sorry, I missed a JOIN on the second variation. It is:

SELECT u.id, u.name, u.imageURL, u.bio,
   CASE
  WHEN f.friend_id IS NOT NULL THEN 'isFriend'
  WHEN s.to_id IS NOT NULL THEN 'hasSentRequest'
  WHEN r.to_id IS NOT NULL THEN 'hasReceivedRequest'
  ELSE 'none'
   END AS friendStatus,
   (SELECT COUNT(1) AS d
  FROM friends f1
 JOIN friends f2 ON f1.friend_id = f2.friend_id
  WHERE f1.user_id = 33 AND f2.user_id = u.id)
FROM users u
LEFT OUTER JOIN friends f ON f.user_id = 33 AND f.friend_id = u.id
LEFT OUTER JOIN friend_requests s ON s.to_id = 33 AND s.from_id = u.id
LEFT OUTER JOIN friend_requests r ON r.to_id = u.id AND r.from_id = 33
WHERE u.id != 33 AND u.name LIKE '%John%' ORDER BY u.name;


On Tue, Dec 9, 2014 at 10:15 AM, Robert DiFalco robert.difa...@gmail.com
wrote:

 I have users, friends, and friend_requests. I need a query that
 essentially returns a summary containing:

 * user (name, imageURL, bio, ...)
 * Friend status (relative to an active user)
* Is the user a friend of the active user?
* Has the user sent a friend request to the active user?
* Has the user received a friend request from the active user?
 * # of mutualFriends
 * Exclude the active user from the result set.

 So I have mocked this up two ways but both have complicated query plans
 that will be problematic with large data sets. I'm thinking that my lack of
 deep SQL knowledge is making me miss the obvious choice.

 Here's my two query examples:

 SELECT u.id, u.name, u.imageURL, u.bio,
CASE
   WHEN EXISTS(SELECT 1 FROM friends f WHERE f.user_id = 33 AND
 f.friend_id = u.id)   THEN 'isFriend'
   WHEN EXISTS(SELECT 1 FROM friend_requests s WHERE s.to_id = 33   AND
 s.from_id = u.id) THEN 'hasSentRequest'
   WHEN EXISTS(SELECT 1 FROM friend_requests r WHERE r.to_id = u.id
 AND r.from_id = 33)   THEN 'hasReceivedRequest'
   ELSE 'none'
END AS friendStatus,
(SELECT COUNT(1)
   FROM friends f1
  JOIN friends f2 ON f1.friend_id = f2.friend_id
   WHERE f1.user_id = 33 AND f2.user_id = u.id) AS mutualFriends
 FROM users u
 WHERE u.id != 33 AND u.name LIKE 'John%' ORDER BY u.name;

 SELECT u.id, u.name, u.imageURL, u.bio,
CASE
   WHEN f.friend_id IS NOT NULL THEN 'isFriend'
   WHEN s.to_id IS NOT NULL THEN 'hasSentRequest'
   WHEN r.to_id IS NOT NULL THEN 'hasReceivedRequest'
   ELSE 'none'
END AS friendStatus,
(SELECT COUNT(1) AS d
   FROM friends f1
  JOIN friends f2 ON f1.friend_id = f2.friend_id
   WHERE f1.user_id = 33 AND f2.user_id = u.id)
 FROM users u
 LEFT OUTER JOIN friend_requests s ON s.to_id = 33 AND s.from_id = u.id
 LEFT OUTER JOIN friend_requests r ON r.to_id = u.id AND r.from_id = 33
 WHERE u.id != 33 AND u.name LIKE 'John%' ORDER BY u.name;

 33 is just the id of the active user I am using for testing. The WHERE
 clause could be anything. I'm just using u.name here but I'm more
 concerned about the construction of the result set than the WHERE clause.
 These have more or less similar query plans, nothing that would change
 things factorially. Is this the best I can do or am I missing the obvious?

 Here are the tables:


 CREATE TABLE users (
   idBIGINT,
   name  VARCHAR,
   imageURL  VARCHAR
   created   TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
   phone_natlBIGINT,   /* National Phone Number */
   country_e164  SMALLINT, /* E164 country code */
   email VARCHAR(255),
   PRIMARY KEY (id),
   UNIQUE (email),
   UNIQUE (phone_natl, country_e164)
 );


 CREATE TABLE friends (
   user_id  BIGINT,
   friend_id   BIGINT,
   PRIMARY KEY (user_id, user_id),
   FOREIGN KEY (user_id)REFERENCES users(id) ON DELETE CASCADE,
   FOREIGN KEY (friend_id)  REFERENCES users(id) ON DELETE CASCADE
 );
 CREATE INDEX idx_friends_friend ON friends(friend_id);

 CREATE TABLE friend_requests (
   from_id  BIGINT,
   to_idBIGINT,
   created  TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP,
   PRIMARY KEY (from_id, user_id),
   FOREIGN KEY (from_id)  REFERENCES users(id) ON DELETE CASCADE,
   FOREIGN KEY (to_id)REFERENCES users(id) ON DELETE CASCADE
 );
 CREATE INDEX idx_friend_requests_to ON friend_requests(to_id);

 Let me know if you guys need anything else.




Re: [GENERAL] Streaming Replication - changing IP addresses

2014-12-09 Thread John R Pierce

On 12/9/2014 9:07 AM, Dara Unglaube wrote:
Is this neccessary? What is the benefit of having the archive on? 
Could I map a network drive from the slave to the master and set the 
archive_command to that mapped drive? Or what would be the best 
approach for this? 


having a wal archive lets the slave resume after a service interruption 
faster, and also farther back than your wal_keep_segments setting would 
otherwise allow.


having a wal archive, along with occasional base backups will let you do 
'point in time recovery' (PITR) right up to a specified transaction, as 
long as you have a base backup from prior to that transaction and all 
wal archives since that base backup.
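
As a sketch, on a 9.1-era Windows setup like this one the recovery side of
PITR might look like the following in recovery.conf (paths and target time
are invented):

restore_command = 'copy "D:\\wals\\%f" "%p"'   # on Unix: 'cp /path/to/wals/%f "%p"'
recovery_target_time = '2014-12-09 12:00:00'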


--
john r pierce  37N 122W
somewhere on the middle of the left coast



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help Optimizing a Summary Query

2014-12-09 Thread Arthur Silva
On Tue, Dec 9, 2014 at 4:18 PM, Robert DiFalco robert.difa...@gmail.com
wrote:

 I'm sorry, I missed a JOIN on the second variation. It is:

 SELECT u.id, u.name, u.imageURL, u.bio,
CASE
   WHEN f.friend_id IS NOT NULL THEN 'isFriend'
   WHEN s.to_id IS NOT NULL THEN 'hasSentRequest'
   WHEN r.to_id IS NOT NULL THEN 'hasReceivedRequest'
   ELSE 'none'
 END AS friendStatus,
(SELECT COUNT(1) AS d
   FROM friends f1
   JOIN friends f2 ON f1.friend_id = f2.friend_id
   WHERE f1.user_id = 33 AND f2.user_id = u.id)
 FROM users u
 LEFT OUTER JOIN friends f ON f.user_id = 33 AND f.friend_id = u.id
 LEFT OUTER JOIN friend_requests s ON s.to_id = 33 AND s.from_id = u.id
 LEFT OUTER JOIN friend_requests r ON r.to_id = u.id AND r.from_id = 33
 WHERE u.id != 33 AND u.name LIKE '%John%' ORDER BY u.name;


 On Tue, Dec 9, 2014 at 10:15 AM, Robert DiFalco robert.difa...@gmail.com
 wrote:

 I have users, friends, and friend_requests. I need a query that
 essentially returns a summary containing:

 * user (name, imageURL, bio, ...)
 * Friend status (relative to an active user)
* Is the user a friend of the active user?
* Has the user sent a friend request to the active user?
* Has the user received a friend request from the active user?
 * # of mutualFriends
 * Exclude the active user from the result set.

 So I have mocked this up two ways but both have complicated query plans
 that will be problematic with large data sets. I'm thinking that my lack of
 deep SQL knowledge is making me miss the obvious choice.

 Here's my two query examples:

 SELECT u.id, u.name, u.imageURL, u.bio,
CASE
   WHEN EXISTS(SELECT 1 FROM friends f WHERE f.user_id = 33 AND
 f.friend_id = u.id)   THEN 'isFriend'
   WHEN EXISTS(SELECT 1 FROM friend_requests s WHERE s.to_id = 33
 AND s.from_id = u.id) THEN 'hasSentRequest'
   WHEN EXISTS(SELECT 1 FROM friend_requests r WHERE r.to_id = u.id
 AND r.from_id = 33)   THEN 'hasReceivedRequest'
   ELSE 'none'
END AS friendStatus,
(SELECT COUNT(1)
   FROM friends f1
  JOIN friends f2 ON f1.friend_id = f2.friend_id
   WHERE f1.user_id = 33 AND f2.user_id = u.id) AS mutualFriends
 FROM users u
 WHERE u.id != 33 AND u.name LIKE 'John%' ORDER BY u.name;

 SELECT u.id, u.name, u.imageURL, u.bio,
CASE
   WHEN f.friend_id IS NOT NULL THEN 'isFriend'
   WHEN s.to_id IS NOT NULL THEN 'hasSentRequest'
   WHEN r.to_id IS NOT NULL THEN 'hasReceivedRequest'
   ELSE 'none'
   END AS friendStatus,
(SELECT COUNT(1) AS d
   FROM friends f1
  JOIN friends f2 ON f1.friend_id = f2.friend_id
   WHERE f1.user_id = 33 AND f2.user_id = u.id)
 FROM users u
 LEFT OUTER JOIN friend_requests s ON s.to_id = 33 AND s.from_id = u.id
 LEFT OUTER JOIN friend_requests r ON r.to_id = u.id AND r.from_id = 33
 WHERE u.id != 33 AND u.name LIKE 'John%' ORDER BY u.name;

 33 is just the id of the active user I am using for testing. The WHERE
 clause could be anything. I'm just using u.name here but I'm more
 concerned about the construction of the result set than the WHERE clause.
 These have more or less similar query plans, nothing that would change
 things factorially. Is this the best I can do or am I missing the obvious?

 Here are the tables:


 CREATE TABLE users (
   idBIGINT,
   name  VARCHAR,
   imageURL  VARCHAR
   created   TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
   phone_natlBIGINT,   /* National Phone Number */
   country_e164  SMALLINT, /* E164 country code */
   email VARCHAR(255),
   PRIMARY KEY (id),
   UNIQUE (email),
   UNIQUE (phone_natl, country_e164)
 );


 CREATE TABLE friends (
   user_id  BIGINT,
   friend_id   BIGINT,
   PRIMARY KEY (user_id, user_id),
   FOREIGN KEY (user_id)REFERENCES users(id) ON DELETE CASCADE,
   FOREIGN KEY (friend_id)  REFERENCES users(id) ON DELETE CASCADE
 );
 CREATE INDEX idx_friends_friend ON friends(friend_id);

 CREATE TABLE friend_requests (
   from_id  BIGINT,
   to_idBIGINT,
   created  TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP,
   PRIMARY KEY (from_id, user_id),
   FOREIGN KEY (from_id)  REFERENCES users(id) ON DELETE CASCADE,
   FOREIGN KEY (to_id)REFERENCES users(id) ON DELETE CASCADE
 );
 CREATE INDEX idx_friend_requests_to ON friend_requests(to_id);

 Let me know if you guys need anything else.




Hello Robert, none of your schemas worked for me; here's a clean version:

CREATE TABLE users (
  idBIGINT,
  name  VARCHAR,
  imageURL  VARCHAR,
  created   TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  phone_natlBIGINT,
  country_e164  SMALLINT,
  email VARCHAR(255),
  PRIMARY KEY (id),
  UNIQUE (email),
  UNIQUE (phone_natl, country_e164)
);


CREATE TABLE friends (
  user_id  BIGINT,
  friend_id   BIGINT,
  PRIMARY KEY (user_id, friend_id),
  FOREIGN KEY (user_id)    REFERENCES users(id) ON DELETE CASCADE,
  FOREIGN KEY (friend_id)  REFERENCES users(id) ON DELETE CASCADE
);

[GENERAL] List of shorthand casts

2014-12-09 Thread FarjadFarid(ChkNet)

Is the list of shorthand casts documented somewhere?
If so, can you please direct me to it? A working URL would be great.

Many thanks.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] List of shorthand casts

2014-12-09 Thread David G Johnston
FarjadFarid(ChkNet) wrote
 Is the list of shorthand casts documented somewhere? 
 If so can you please direct me to it. A working URL  would be great.

Do you mean implicit casts - those that do not require an explicit CAST(...)
or :: in the query?

Can you give an example of one that you know exists?

The catalog is probably the best way to get a definitive list though I do
not know the exact query you would use.  The catalog tables are fairly well
documented so you should start there.

David J.





--
View this message in context: 
http://postgresql.nabble.com/List-of-shorthand-casts-tp5829791p5829795.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Updating single/multiple fields of JSON document

2014-12-09 Thread Bruce Momjian
On Mon, Dec  8, 2014 at 04:56:00PM +0530, bln prasad wrote:
 Hi,
    Is there any way to update single/multiple fields of JSON document of a
 column?

No, not yet.  We are thinking of options but don't have any work in
progress.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] List of shorthand casts

2014-12-09 Thread FarjadFarid(ChkNet)
Hi David,

I mean the ones with "::", like ::text or ::uuid etc.

Some PostgreSQL data types have two or more words, like "double precision"
or "timestamp without time zone". It is not clear how these are
implemented, if at all.

Many thanks.

-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of David G Johnston
Sent: 09 December 2014 20:23
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] List of shorthand casts

FarjadFarid(ChkNet) wrote
 Is the list of shorthand casts documented somewhere? 
 If so can you please direct me to it. A working URL  would be great.

Do you mean implicit casts - those that do not require an explicit CAST(...)
or :: in the query?

Can you give an example of one that you know exists?

The catalog is probably the best way to get a definitive list though I do
not know the exact query you would use.  The catalog tables are fairly well
documented so you should start there.

David J.





--
View this message in context:
http://postgresql.nabble.com/List-of-shorthand-casts-tp5829791p5829795.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make
changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] List of shorthand casts

2014-12-09 Thread Tom Lane
FarjadFarid\(ChkNet\) farjad.fa...@checknetworks.com writes:
 I mean the ones with "::", like ::text or ::uuid etc.

 Some PostgreSQL data types have two or more words, like "double precision"
 or "timestamp without time zone". It is not clear how these are
 implemented, if at all.

Any type name works fine after ::.
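
For example, multi-word type names included:

SELECT 3.14::double precision,
       '2014-12-09 21:40:00'::timestamp without time zone,
       'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid;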

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] List of shorthand casts

2014-12-09 Thread Adrian Klaver

On 12/09/2014 01:37 PM, FarjadFarid(ChkNet) wrote:

Hi David,

I mean the ones with :: like ::text or ::uuid etc

Some PostgreSQL data types have two or more words, like "double precision"
or "timestamp without time zone". It is not clear how these are
implemented, if at all.


Try the query below to get most of the casts available:

select castsource::regtype, casttarget::regtype from pg_cast order by 
castsource;


For more info and caveats see:

http://www.postgresql.org/docs/9.3/interactive/catalog-pg-cast.html



Many thanks.

-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of David G Johnston
Sent: 09 December 2014 20:23
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] List of shorthand casts

FarjadFarid(ChkNet) wrote

Is the list of shorthand casts documented somewhere?
If so can you please direct me to it. A working URL  would be great.


Do you mean implicit casts - those that do not require an explicit CAST(...)
or :: in the query?

Can you give an example of one that you know exists?

The catalog is probably the best way to get a definitive list though I do
not know the exact query you would use.  The catalog tables are fairly well
documented so you should start there.

David J.



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Speeding up an in-progress wraparound-preventing vacuum

2014-12-09 Thread Vincent de Phily
On Tuesday 09 December 2014 01:58:11 Vincent de Phily wrote:
 On Monday 08 December 2014 10:17:37 Jeff Janes wrote:
  You can `strace` for the lseek command to see which file handles it is
  currently working on, and
  use lsof to turn those into names.  You want to look at where it is in the
  table files, not the index files.
 
 Neat trick, thanks. I'll try it in the morning when I'm fresh. Are the files
read in C-sorting order?

I did this (stracing the first 1000 read() and write() calls every 30s) and
kept an eye on progress throughout the day. It follows a (to me) surprising
pattern, which looks unnecessarily time-consuming:

It reads about 8G of the table (often doing a similar number of writes, but 
not always), then starts reading the pkey index and the second index (only 2 
indexes on this table), reading both of them fully (some writes as well, but 
not as many as for the table), which takes around 8h.

And the cycle apparently repeats: process a few more GB of the table, then go 
reprocess both indexes fully. A rough estimate is that it spends ~6x more time 
(re)processing the indexes as it does processing the table (looking at data 
size alone the ratio would be 41x, but the indexes go faster). I'm probably 
lucky to only have two indexes on this table.

Is that the expected behaviour? Why? I can imagine that it skips some pages
and needs to go back, but then it should only do this once at the end of the
process, or it should only revisit a handful of pages. Is that something that
can be improved by throwing more maintenance_work_mem at it?


-- 
Vincent de Phily



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] List of shorthand casts

2014-12-09 Thread FarjadFarid(ChkNet)
Got it thanks. 
 

-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Tom Lane
Sent: 09 December 2014 21:40
To: FarjadFarid(ChkNet)
Cc: 'David G Johnston'; pgsql-general@postgresql.org
Subject: Re: [GENERAL] List of shorthand casts

FarjadFarid\(ChkNet\) farjad.fa...@checknetworks.com writes:
 I mean the ones with "::", like ::text or ::uuid etc.

 Some PostgreSQL data types have two or more words, like "double precision"
 or "timestamp without time zone". It is not clear how these are
 implemented, if at all.

Any type name works fine after ::.

regards, tom lane


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make
changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Speeding up an in-progress wraparound-preventing vacuum

2014-12-09 Thread Tom Lane
Vincent de Phily vincent.deph...@mobile-devices.fr writes:
 It reads about 8G of the table (often doing a similar number of writes, but 
 not always), then starts reading the pkey index and the second index (only 2 
 indexes on this table), reading both of them fully (some writes as well, but 
 not as many as for the table), which takes around 8h.

 And the cycle apparently repeats: process a few more GB of the table, then go 
 reprocess both indexes fully. A rough estimate is that it spends ~6x more 
 time 
 (re)processing the indexes as it does processing the table (looking at data 
 size alone the ratio would be 41x, but the indexes go faster). I'm probably 
 lucky to only have two indexes on this table.

  Is that the expected behaviour?

Yes.  It can only remember so many dead tuples at a time, and it has
to go clean the indexes when the dead-TIDs buffer fills up.  You could
increase maintenance_work_mem to increase the size of that buffer.
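
For a manual run, something like this (the value and table name are only
examples):

SET maintenance_work_mem = '1GB';
VACUUM VERBOSE mytable;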

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] List of shorthand casts

2014-12-09 Thread FarjadFarid(ChkNet)

Thanks. 

-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com] 
Sent: 09 December 2014 21:46
To: FarjadFarid(ChkNet); 'David G Johnston'; pgsql-general@postgresql.org
Subject: Re: [GENERAL] List of shorthand casts

On 12/09/2014 01:37 PM, FarjadFarid(ChkNet) wrote:
 Hi David,

 I mean the ones with "::", like ::text or ::uuid etc.

 Some PostgreSQL data types have two or more words, like "double precision"
 or "timestamp without time zone". It is not clear how these are
 implemented, if at all.

Try the query below to get most of the casts available:

select castsource::regtype, casttarget::regtype from pg_cast order by
castsource;

For more info and caveats see:

http://www.postgresql.org/docs/9.3/interactive/catalog-pg-cast.html


 Many thank.

 -Original Message-
 From: pgsql-general-ow...@postgresql.org
 [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of David G 
 Johnston
 Sent: 09 December 2014 20:23
 To: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] List of shorthand casts

 FarjadFarid(ChkNet) wrote
 Is the list of shorthand casts documented somewhere?
 If so can you please direct me to it. A working URL  would be great.

 Do you mean implicit casts - those that do not require an explicit 
 CAST(...) or :: in the query?

 Can you give an example of one that you know exists?

 The catalog is probably the best way to get a definitive list though I 
 do not know the exact query you would use.  The catalog tables are 
 fairly well documented so you should start there.

 David J.


--
Adrian Klaver
adrian.kla...@aklaver.com



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Updating single/multiple fields of JSON document

2014-12-09 Thread Roxanne Reid-Bennett

On 12/9/2014 3:38 PM, Bruce Momjian wrote:

On Mon, Dec  8, 2014 at 04:56:00PM +0530, bln prasad wrote:

Hi,
Is there any way to update single/multiple fields of JSON document of a
column?

No, not yet.  We are thinking of options but don't have any work in
progress.

Given a pretty distinct, well-understood structure, we have used
regexp_replace against JSON stored as text.
Depending upon your case, in the meantime you may be able to cast,
update, then cast back.
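
A hedged sketch of that approach (the table, column and pattern are made
up, and a regexp patch like this is only safe when the structure really is
that predictable):

UPDATE docs
   SET body = regexp_replace(body::text,
                             '"status"\s*:\s*"[^"]*"',
                             '"status":"archived"')::json
 WHERE id = 42;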


Roxanne


--
[At other schools] I think the most common fault in general is to teach 
students how to pass exams instead of teaching them the science.
Donald Knuth



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Speeding up an in-progress wraparound-preventing vacuum

2014-12-09 Thread Vincent de Phily
On Tuesday 09 December 2014 16:56:39 Tom Lane wrote:
 Vincent de Phily vincent.deph...@mobile-devices.fr writes:
  It reads about 8G of the table (often doing a similar number of writes,
  but
  not always), then starts reading the pkey index and the second index (only
  2 indexes on this table), reading both of them fully (some writes as
  well, but not as many as for the table), which takes around 8h.
  
  And the cycle apparently repeats: process a few more GB of the table, then
  go reprocess both indexes fully. A rough estimate is that it spends ~6x
  more time (re)processing the indexes as it does processing the table
  (looking at data size alone the ratio would be 41x, but the indexes go
  faster). I'm probably lucky to only have two indexes on this table.
  
  Is that the expected behaviour ?
 
 Yes.  It can only remember so many dead tuples at a time, and it has
 to go clean the indexes when the dead-TIDs buffer fills up.

Fair enough. And I guess it scans the whole index each time because the dead
tuples are spread all over?

What happens when vacuum is killed before it had time to go through the index
with its dead-TID buffer? Surely the index isn't irreversibly bloated; and
whatever is done then could be done in the normal case?

It still feels like a lot of wasted IO.

 You could
 increase maintenance_work_mem to increase the size of that buffer.

Will do, thanks.

-- 
Vincent de Phily



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Speeding up an in-progress wraparound-preventing vacuum

2014-12-09 Thread Tom Lane
Vincent de Phily vincent.deph...@mobile-devices.fr writes:
 What happens when vacuum is killed before it had time to go through the
 index with its dead-TID buffer?

The next run will go through the index again, looking for those same TIDs
(and possibly more).

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_restore -n sch1 : schema sch1 does not exist

2014-12-09 Thread lin


Are you sure it is a bug and that you have fixed it? I tested it with
postgresql-9.4rc1; the result is the same as with postgresql-9.3beta2.

At 2014-12-09 23:54:01, Tom Lane t...@sss.pgh.pa.us wrote:
lin jlu...@163.com writes:
   I created a schema (sch1) in the database db1, and I also created a
 table sch1.t1; then I used the command "pg_dump -d db1 -Fd -f dir1" to back up
 the database db1; at last I dropped the schema sch1. I want to restore
 the schema sch1, but when I execute the command "pg_restore -d db1 -Fd dir1
 -n sch1", I get the errors below.
   I just want to test the -n parameter of pg_restore; what can I do
 to use pg_restore correctly?

Well, you could adopt an actually supported release:

 ; Dumped from database version: 9.3beta2
 ; Dumped by pg_dump version: 9.3beta2

I'm not just being snarky: the messages you're getting here remind me of a
bug we fixed some time ago.  But even aside from this specific complaint,
the list of known data-loss bugs in 9.3beta should terrify you.  Why in
the world are you still using it?

   regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_restore -n sch1 : schema sch1 does not exist

2014-12-09 Thread Adrian Klaver

On 12/09/2014 07:29 AM, lin wrote:

Hi, all:
   I created a schema (sch1) in the database db1, and I also
created a table sch1.t1; then I used the command "pg_dump -d db1 -Fd -f
dir1" to back up the database db1; at last I dropped the schema sch1. I
want to restore the schema sch1, but when I execute the command
"pg_restore -d db1 -Fd dir1 -n sch1", I get the errors below.
  I just want to test the -n parameter of pg_restore; what can I
do to use pg_restore correctly?


Hmm, seems -n is not the same in pg_dump and pg_restore. If you do a dump
with the -n switch you get the SCHEMA creation in the restore. If you use
the -n on the restore you get the contained objects but not the SCHEMA
creation statement. You might try generating a TOC from your dump, then
pulling the entry for schema sch1 and putting it at the top of the TOC you
generated below.
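
Roughly, a sketch of that list-file workflow:

pg_restore -Fd dir1 -l > db.toc         # write out the full TOC
(edit db.toc so the "SCHEMA sch1" entry comes before the table entries)
pg_restore -d db1 -Fd dir1 -L db.toc    # restore using the edited list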




[wln@localhost test]$ pg_restore  -d db1 -Fd dir1 -n sch1
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 171; 1259 24669 TABLE
t1 wln
pg_restore: [archiver (db)] could not execute query: ERROR:  permission
denied to create "pg_catalog.t1"
DETAIL:  System catalog modifications are currently disallowed.
 Command was: CREATE TABLE t1 (
 id integer
);



pg_restore: [archiver (db)] could not execute query: ERROR:  schema
"sch1" does not exist
 Command was: ALTER TABLE sch1.t1 OWNER TO wln;


pg_restore: [archiver (db)] Error from TOC entry 2777; 0 24669 TABLE
DATA t1 wln
pg_restore: [archiver (db)] could not execute query: ERROR:  relation
"t1" does not exist
 Command was: COPY t1 (id) FROM stdin;

WARNING: errors ignored on restore: 3


[wln@localhost test]$ pg_restore  -d db1 -Fd dir1 -n sch1 -l
;
; Archive created at Fri Sep 26 10:26:43 2014
; dbname: db1
; TOC Entries: 11
; Compression: -1
; Dump Version: 1.12-0
; Format: DIRECTORY
; Integer: 4 bytes
; Offset: 8 bytes
; Dumped from database version: 9.3beta2
; Dumped by pg_dump version: 9.3beta2
;
;
; Selected TOC Entries:
;
171; 1259 24669 TABLE sch1 t1 wln
2777; 0 24669 TABLE DATA sch1 t1 wln


Thanks,
  wanglin


--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: Fwd: Fwd: [GENERAL] Problem with pg_dump and decimal mark

2014-12-09 Thread Eric Svenson
Hi Adrian,

I am trying to get access to the non-VM machine; at the moment, access is
unfortunately not possible for me.

You are right, there are more tables in the database which are restored
correctly but these tables do NOT contain float values. These two tables
are the only tables in the database which contain floats.

The errors occur with the first float in the table, the restore process
seems to terminate with that table and seems to continue with the next
table. The result are completely empty tables for dev_my_settings and
file_item.

There are float values in the table which can be viewed with pg_admin.

The table definitions for dev_my_settings and file_item contain lots of
BIGINTS, smallints and integers, and several double precision values. All
other tables do not contain any double precision values.

Regards,
Eric


2014-12-08 15:22 GMT+01:00 Adrian Klaver adrian.kla...@aklaver.com:

 On 12/08/2014 12:21 AM, Eric Svenson wrote:

 Hi Adrian,

 here are the results of today:

   How where the Postgres instances installed?
 From a package?

 Yes. It is Version 9.2 for Windows, comment of the package is
 "The PostgreSQL RDBMS, version 9.2, packaged by EnterpriseDB"


Compiled and if so with what compiler and what settings?
 No.


   Use --inserts with pg_dump to get INSERT statements instead of a COPY
 and then feed to psql.
   This will slow the process down, so I would try with a small sample
 set.

 Much slower, but still the same error:

 (part of the output)

 INSERT 50415934 1
 INSERT 50415935 1
 INSERT 50415936 1
 INSERT 50415937 1
 INSERT 50415938 1
 INSERT 50415939 1
 ERROR:  invalid input syntax for type double precision:
 "0.10001"

   Do pg_dump -Fc and then use pg_restore.

 pg_restore: [archiver (db)] Error while PROCESSING TOC:
 pg_restore: [archiver (db)] Error from TOC entry 2558; 0 5161040 TABLE
 DATA dev_my_settings my_Database
 pg_restore: [archiver (db)] COPY failed for table "dev_my_settings":
 ERROR:  invalid input syntax
 for type double precision: "0.10001"
 CONTEXT:  COPY dev_meas_settings, line 718, column dms_dble_value:
 "0.10001"
 pg_restore: [archiver (db)] Error from TOC entry 2552; 0 5160884 TABLE
 DATA file_item my_Database
 pg_restore: [archiver (db)] COPY failed for table "file_item": ERROR:
 invalid input syntax for type
 double precision: "48.2000829"
 CONTEXT:  COPY file_item, line 54, column fi_latitude:
 "48.2000829"
 WARNING: errors ignored on restore: 2


 Well at least it is consistent:) Postgres is deployed to a lot of Windows
 machines, so if this was a generic Windows problem I would expect more
 reports on this. There is something about this setup that is causing the
 problem and we are missing.

 In a previous post you made mention of a possible instance where this
 cropped up on a non-VM machine. Did you get a chance to track that down?

 Also the output from pg_restore shows only two errors on restore which I
 presume are the two COPY errors with the input syntax. So are there other
 tables in the database, with float values, that do restore correctly?

 Also in the errors above, in the first case COPY does not error until line
 718 and in the second case line 54. So are there float values in the data
 for those columns that are valid?

 Also what are the table definitions for dev_my_settings and file_item?


 Regards,
 Eric



 --
 Adrian Klaver
 adrian.kla...@aklaver.com



[GENERAL] Idle Sessions inTransaction with RELEASE_EXEC_SVP queries and locks

2014-12-09 Thread chris . jurado
I am having a problem with idle sessions in transactions. In pgAdmin
Server Status, it is showing "RELEASE_EXEC_SVP_XX" (XX data are varied)
as its query, and its locks also contain a lot of these
"RELEASE_EXEC_SVP_XX" entries. What do these mean? These commonly cause
lock blocking problems because these sessions sometimes lock other rows but the
session status is always "Idle in transaction". I do not know why there are
sessions like these. I have reviewed our system and do not see any possibility
of transactions getting started that are not rolled back or committed and just
stay idle, so this must be something internal to PostgreSQL.
Version of PostgreSQL is 9.2.8 running on Windows using psqlodbc driver 
9.0.3.210.


Re: Fwd: Fwd: [GENERAL] Problem with pg_dump and decimal mark

2014-12-09 Thread Eric Svenson
Hi Adrian,

here are the results of today:

 How where the Postgres instances installed?
   From a package?

Yes. It is Version 9.2 for Windows, comment of the package is
"The PostgreSQL RDBMS, version 9.2, packaged by EnterpriseDB"


  Compiled and if so with what compiler and what settings?
No.


 Use --inserts with pg_dump to get INSERT statements instead of a COPY and
then feed to psql.
 This will slow the process down, so I would try with a small sample set.

Much slower, but still the same error:

(part of the output)

INSERT 50415934 1
INSERT 50415935 1
INSERT 50415936 1
INSERT 50415937 1
INSERT 50415938 1
INSERT 50415939 1
ERROR:  invalid input syntax for type double precision:
"0.10001"

 Do pg_dump -Fc and then use pg_restore.

pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 2558; 0 5161040 TABLE DATA
dev_my_settings my_Database
pg_restore: [archiver (db)] COPY failed for table "dev_my_settings":
ERROR:  invalid input syntax
for type double precision: "0.10001"
CONTEXT:  COPY dev_meas_settings, line 718, column dms_dble_value:
"0.10001"
pg_restore: [archiver (db)] Error from TOC entry 2552; 0 5160884 TABLE DATA
file_item my_Database
pg_restore: [archiver (db)] COPY failed for table "file_item": ERROR:
invalid input syntax for type
double precision: "48.2000829"
CONTEXT:  COPY file_item, line 54, column fi_latitude: "48.2000829"
WARNING: errors ignored on restore: 2

Regards,
Eric


2014-12-05 16:22 GMT+01:00 Adrian Klaver adrian.kla...@aklaver.com:

 On 12/05/2014 01:13 AM, Eric Svenson wrote:



 Hi Adrian,

  Is the above how the message was actually presented or has a partial
 translation taken place? Just asking because it would seem to indicate
 further confusion about the locale.

 This is an exact copy of the screen contents; no translation by me has
 taken place. Indeed strange: "ERROR" (English), "KONTEXT" (German, one
 line below)??

 To add further confusion: I have a report that the error also appeared
 on a non-VM native PC with Windows 7 Enterprise. Unfortunately, I don't
 have the exact error message, but I will try to get it ASAP.


 Well nothing came back to me on VMware and locales, but that does not seem
 to be the issue if the above is correct.

 So:

 How where the Postgres instances installed?
   From a package?
   Compiled and if so with what compiler and what settings?

 What happens if you?:

 Use --inserts with pg_dump to get INSERT statements instead of a COPY and
 then feed to psql.
 This will slow the process down, so I would try with a small sample set.

 Do pg_dump -Fc and then use pg_restore.

 The above are more a way of seeing if the issue is on a particular path or
 is generic, than a solution.



 regards,
 Eric Svenson



 --
 Adrian Klaver
 adrian.kla...@aklaver.com



Re: [GENERAL] Use cases for lateral that do not involve a set returning function

2014-12-09 Thread AJ Welch
Thanks for the response. Yea, using lateral there definitely reads better
to me than using a correlated subquery. And it makes sense that performance
is ok since you're filtering on a specific person's id (as you hinted at
with `WHERE p.id = ...`) and the nested loop forced by `order by...limit 1`
presumably only loops once.

However, I would consider that more of an OLTP style query where I'm kind
of more interested in OLAP style queries as the referenced post was
building a funnel analysis over an events table. I don't think their
approach will scale. I guess it's kind of a specific question but that post
got me wondering if there are any use cases for lateral outside of SRFs
that do not generate a nested loop AND cannot be achieved without lateral?
Basically, something I could use in an OLAP query that I couldn't use prior
to 9.3.

Also, just for the heck of it, I took a look at the explain plans for both
of your queries without `WHERE p.id = ...` to see how they would scale.

Correlated subquery:

QUERY PLAN
Hash Right Join  (cost=163943.00..5932603426739.67 rows=7499298 width=24)
  Hash Cond: (n.people_id = p.people_id)
  Join Filter: (NOT (SubPlan 1))
  ->  Seq Scan on names n  (cost=0.00..320543.41 rows=14998595 width=18)
        Filter: (now() > validfrom)
  ->  Hash  (cost=77028.00..77028.00 rows=500 width=10)
        ->  Seq Scan on people p  (cost=0.00..77028.00 rows=500 width=10)
  SubPlan 1
    ->  Seq Scan on names n2  (cost=0.00..395543.88 rows=1 width=0)
          Filter: ((validfrom > n.validfrom) AND (people_id = p.people_id)
AND (now() > validfrom))


Lateral:

Nested Loop Left Join  (cost=358043.67..1790218514528.00 rows=500 width=24)
  ->  Seq Scan on people p  (cost=0.00..77028.00 rows=500 width=10)
  ->  Limit  (cost=358043.67..358043.67 rows=1 width=18)
        ->  Sort  (cost=358043.67..358043.68 rows=4 width=18)
              Sort Key: n.validfrom
              ->  Seq Scan on names n  (cost=0.00..358043.65 rows=4 width=18)
                    Filter: ((people_id = p.people_id) AND (now() > validfrom))

Granted I haven't set up any indexes, but it looks like the correlated
subquery, after an initial access of the names table before the join,
accesses the names table again for each (person, name) pair after the join
(in the join filter). So it's worse than just 2 scans per person. Indeed,
the lateral subquery seems better because it accesses the person table and
then the names table once for each person. However, I might instead do
something like this to access each table just once:

explain

select *
from people p
left join (
  select *, rank() over(partition by people_id
order by validfrom desc) as rank
  from names
) n on p.people_id = n.people_id
   and n.rank = 1

QUERY PLAN
Hash Right Join  (cost=3427870.26..3942220.56 rows=500 width=36)
  Hash Cond: (n.people_id = p.people_id)
  ->  Subquery Scan on n  (cost=3263927.26..3751430.31 rows=75000 width=26)
        Filter: (n.rank = 1)
        ->  WindowAgg  (cost=3263927.26..3563929.14 rows=1594 width=18)
              ->  Sort  (cost=3263927.26..3301427.49 rows=1594 width=18)
                    Sort Key: names.people_id, names.validfrom
                    ->  Seq Scan on names  (cost=0.00..245542.94 rows=1594 width=18)
  ->  Hash  (cost=77028.00..77028.00 rows=500 width=10)
        ->  Seq Scan on people p  (cost=0.00..77028.00 rows=500 width=10)

Thanks,
AJ
http://chartio.com/
https://www.linkedin.com/in/ajw0100

On Tue, Dec 9, 2014 at 2:24 AM, Albe Laurenz laurenz.a...@wien.gv.at
wrote:

 AJ Welch wrote:
 
 http://blog.heapanalytics.com/postgresqls-powerful-new-join-type-lateral/
 
  I suspected some of the claims in the post may not have been accurate.
 This one in particular:
 
  Without lateral joins, we would need to resort to PL/pgSQL to do this
 analysis. Or, if our data set
  were small, we could get away with complex, inefficient queries.
 
 
  The sum(1) and order by time limit 1 approach seemed less than ideal to
 me and I thought this analysis
  could be done with normal left joins instead of lateral left joins. So I
 came up with a proof of
  concept:
 
  https://github.com/ajw0100/snippets/tree/master/SQL/lateral
 
 
  Is my conclusion in the README correct? Does anything beyond
 select...from...where force a nested
  loop? In that case, is lateral really only useful with set returning
 functions as the docs suggest?
  Does anyone know of any use cases for lateral that do not involve a set
 returning function?

 Only recently I used lateral joins to optimize a query.

 This is a sample of how the query looked before:

 SELECT ...
 FROM people p
    LEFT JOIN names n
       ON (n.people_id = p.people_id
           AND current_timestamp > n.validfrom
           AND NOT EXISTS (SELECT 1 FROM names n2
                           WHERE n2.people_id = p.people_id
                           AND current_timestamp > n2.validfrom
                           AND 

[GENERAL] Weird CPU utilization patterns with Postgres

2014-12-09 Thread István
Hi,

We are having a really interesting problem with our Postgres 9.3 instance
in our infrastructure.

Few days ago our box started to show huge CPU spikes while the IO Wait is
negligible on the box. After a while I have installed perf and started to
monitor the Postgres master process and here is what I have found:

Samples: 372K of event 'cycles', Event count (approx.): 110095222173
 93.65%  libc-2.12.so  [.] __strcoll_l
  0.97%  libc-2.12.so  [.] memcpy
  0.90%  postgres  [.] slot_getattr
  0.88%  postgres  [.] nocachegetattr
  0.64%  postgres  [.] varstr_cmp
  0.52%  libc-2.12.so  [.] __strcmp_sse42
  0.43%  postgres  [.] hash_any
  0.32%  postgres  [.] pg_detoast_datum_packed
  0.31%  libc-2.12.so  [.] __strlen_sse2
  0.22%  postgres  [.] bttextcmp
  0.18%  postgres  [.] ExecStoreTuple
  0.14%  postgres  [.] MemoryContextReset
  0.09%  postgres  [.] pgstat_end_function_usage
  0.08%  libc-2.12.so  [.] strcoll
  0.08%  postgres  [.] heap_hot_search_buffer
  0.07%  postgres  [.] lc_collate_is_c
  0.06%  [kernel]  [k] sys_semtimedop
  0.06%  postgres  [.] heap_page_prune_opt
  0.05%  postgres  [.] slot_getsomeattrs
  0.05%  postgres  [.] heap_fill_tuple
  0.04%  postgres  [.] hash_search
  0.03%  postgres  [.] GetMemoryChunkSpace
  0.03%  postgres  [.] heap_form_minimal_tuple
  0.03%  [kernel]  [k] update_queue
  0.02%  postgres  [.] ReadBufferExtended
  0.02%  postgres  [.] memcpy@plt

It seems that the box is using __strcoll a lot. Query performance is down:
while previously the box was able to sustain ~20 clients, right now it is
hardly able to keep up with 5.

I am wondering what the root cause might be here.

Let me know if anybody has seen this before.

Regards,
Istvan

-- 
the sun shines for all


Re: [GENERAL] Weird CPU utilization patterns with Postgres

2014-12-09 Thread Peter Geoghegan
On Fri, Dec 5, 2014 at 5:14 PM, István lecc...@gmail.com wrote:
 I am wondering why the root cause might be here.

My guess would be that an important text-based sort operation began to
go to disk. The external sort code (tapesort) is known to do far more
comparisons than quicksort. With text sorts, you tend to see tapesort
very CPU bound, where that might not be the case with integer sorts.

I'm currently trying to fix this across the board [1], but my first
suggestion is to try enabling log_temp_files to see if external sorts
can be correlated with these stalls.
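
To enable it, in postgresql.conf (followed by a reload):

log_temp_files = 0   # threshold in kB; 0 logs every temporary file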

[1] https://commitfest.postgresql.org/action/patch_view?id=1462
-- 
Regards,
Peter Geoghegan


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Weird CPU utilization patterns with Postgres

2014-12-09 Thread Peter Geoghegan
On Tue, Dec 9, 2014 at 5:46 PM, Peter Geoghegan
peter.geoghega...@gmail.com wrote:
 I'm currently trying to fix this across the board [1], but my first
 suggestion is to try enabling log_temp_files to see if external sorts
 can be correlated with these stalls.

See also: 
http://www.postgresql.org/message-id/cam3swztijobppqff7mn3021vvtu+5fd1ymabq8tlov4zhfa...@mail.gmail.com

-- 
Regards,
Peter Geoghegan


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Idle Sessions inTransaction with RELEASE_EXEC_SVP queries and locks

2014-12-09 Thread Sameer Kumar
On 10 Dec 2014 08:52, chris.jur...@primesoft.ph wrote:

 I am having a problem with idle sessions in transactions. In
pgAdmin Server Status, it is showing "RELEASE_EXEC_SVP_XX" (XX data
are varied) as its query, and its locks also contain a lot of these
"RELEASE_EXEC_SVP_XX" entries. What do these mean? These commonly cause
lock blocking problems because these sessions sometimes lock other rows but
the session status is always "Idle in transaction". I do not know why there
are sessions like these. I have reviewed our system and do not see any
possibility of transactions getting started that are not rolled back or
committed and just stay idle, so this must be something internal to
PostgreSQL.

 Version of PostgreSQL is 9.2.8 running on Windows using psqlodbc driver
9.0.3.210.

What is the application platform? I know there are some APIs/packages in
some languages (viz. Python) which start an implicit transaction even when
you fire a select statement. What kind of locks are they, and what is their
granted status?
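
On 9.2 you can correlate the two views yourself; a sketch:

SELECT a.pid, a.state, l.locktype, l.mode, l.granted, a.query
FROM pg_stat_activity a
JOIN pg_locks l ON l.pid = a.pid
WHERE a.state = 'idle in transaction';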

What is the isolation level that you use in application?

Regards
Sameer Kumar
Ashnik Pte Ltd, Singapore


Re: [GENERAL] FW: getting error while running sql on mm_activealrm table

2014-12-09 Thread M Tarkeshwar Rao
Hi all,

We have done all the suggested things, from reindex and vacuum to checking
for a hardware heat issue.

But we do not have a clue why this happened.

Can you please suggest what checkpoints we should follow so that we can avoid
this kind of issue in the future?

Regards
Tarkeshwar

-Original Message-
From: M Tarkeshwar Rao 
Sent: 04 December 2014 17:32
To: M Tarkeshwar Rao; Adrian Klaver; pgsql-general@postgresql.org
Subject: RE: [GENERAL] FW: getting error while running sql on mm_activealrm 
table

Hi all,

Finally the DB was restored from the backup and the issue was solved.

Is the data corrupted?

I feel that the data is corrupted, and some of the data dictionary is
corrupted as well.
That is the reason it is not displaying any columns for the table even
though the table is present.

Regards
Tarkeshwar

-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of M Tarkeshwar Rao
Sent: 04 December 2014 16:38
To: Adrian Klaver; pgsql-general@postgresql.org
Subject: Re: [GENERAL] FW: getting error while running sql on mm_activealrm 
table

Hi,

We performed the following actions to recover:

1. Restart the DB
2. Rebuild the index
3. Vacuum the index

mgrdb=# select count(*) from mm_activealarm;
ERROR: missing chunk number 0 for toast value 54787 in pg_toast_2619
mgrdb=# REINDEX table pg_toast.pg_toast_2619;
REINDEX
mgrdb=# VACUUM ANALYZE mm_activealarm;
VACUUM
mgrdb=# commit;
WARNING: there is no transaction in progress
COMMIT
mgrdb=# select count(*) from mm_activealarm;
ERROR: missing chunk number 0 for toast value 54787 in pg_toast_2619
mgrdb=#

4. Finally we dropped the MM_ActiveAlarm table and recreated it.
5. Still, after recreating the table with its columns, we were not able to
see any columns; there are no columns present in the table itself.


Can you please suggest more on this?

Regards
Tarkeshwar
-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
Sent: 03 December 2014 20:22
To: M Tarkeshwar Rao; pgsql-general@postgresql.org
Subject: Re: [GENERAL] FW: getting error while running sql on mm_activealrm 
table

On 12/03/2014 03:07 AM, M Tarkeshwar Rao wrote:
 Hi all,

 We are getting the following error message on doing any action on the
 table (like Select, or open from pgAdmin).

 Please suggest.


http://dba.stackexchange.com/questions/31008/pg-dump-and-error-missing-chunk-number-0-for-toast-value

Though the question still remains, what got it into that state?

Any recent hardware/software issues?


 Regards

 Tarkeshwar



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make 
changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] FW: getting error while running sql on mm_activealrm table

2014-12-09 Thread M Tarkeshwar Rao
Will keeping column sizes under 8 kB help us?

-Original Message-
From: M Tarkeshwar Rao 
Sent: 10 December 2014 11:51
To: 'Adrian Klaver'; 'pgsql-general@postgresql.org'
Subject: RE: [GENERAL] FW: getting error while running sql on mm_activealrm 
table

Hi all,

We have done all the suggested things, from reindex and vacuum to checking
for a hardware heat issue.

But we do not have a clue why this happened.

Can you please suggest what checkpoints we should follow so that we can avoid
this kind of issue in the future?

Regards
Tarkeshwar

-Original Message-
From: M Tarkeshwar Rao
Sent: 04 December 2014 17:32
To: M Tarkeshwar Rao; Adrian Klaver; pgsql-general@postgresql.org
Subject: RE: [GENERAL] FW: getting error while running sql on mm_activealrm 
table

Hi all,

Finally the DB was restored from the backup and the issue was solved.

Is the data corrupted?

I feel that the data is corrupted, and some of the data dictionary is
corrupted as well.
That is the reason it is not displaying any columns for the table even
though the table is present.

Regards
Tarkeshwar

-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of M Tarkeshwar Rao
Sent: 04 December 2014 16:38
To: Adrian Klaver; pgsql-general@postgresql.org
Subject: Re: [GENERAL] FW: getting error while running sql on mm_activealrm 
table

Hi,

We performed the following actions to recover:

1. Restart the DB
2. Rebuild the index
3. Vacuum the index

mgrdb=# select count(*) from mm_activealarm;
ERROR: missing chunk number 0 for toast value 54787 in pg_toast_2619
mgrdb=# REINDEX table pg_toast.pg_toast_2619;
REINDEX
mgrdb=# VACUUM ANALYZE mm_activealarm;
VACUUM
mgrdb=# commit;
WARNING: there is no transaction in progress
COMMIT
mgrdb=# select count(*) from mm_activealarm;
ERROR: missing chunk number 0 for toast value 54787 in pg_toast_2619
mgrdb=#

4. Finally we dropped the MM_ActiveAlarm table and recreated it.
5. Still, after recreating the table with its columns, we were not able to
see any columns; there are no columns present in the table itself.


Can you please suggest more on this?

Regards
Tarkeshwar
-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
Sent: 03 December 2014 20:22
To: M Tarkeshwar Rao; pgsql-general@postgresql.org
Subject: Re: [GENERAL] FW: getting error while running sql on mm_activealrm 
table

On 12/03/2014 03:07 AM, M Tarkeshwar Rao wrote:
 Hi all,

 We are getting the following error message on doing any action on the
 table (like Select, or open from pgAdmin).

 Please suggest.


http://dba.stackexchange.com/questions/31008/pg-dump-and-error-missing-chunk-number-0-for-toast-value

Though the question still remains, what got it into that state?

Any recent hardware/software issues?


 Regards

 Tarkeshwar



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make 
changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general