Re: [GENERAL] Explanation on pgbouncer please

2013-11-04 Thread Jayadevan
Why don't you try adding layers one by one?
1) Ensure you can connect to PostgreSQL from a psql client (on another
machine).
2) Configure pgbouncer.
3) See if you can connect from psql > pgbouncer > PostgreSQL.
Check the data in pgbouncer and PostgreSQL (number of sessions, idle
connections, etc.). You could also try the effect of the timeout parameters in
pgbouncer.
If the above is working fine, try connecting from tomcat/map server/whatever
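
Step 3 can be checked from pgbouncer's admin console (a sketch; it assumes
pgbouncer listens on its default port 6432 and your user is listed in
admin_users):

```sql
-- Connect with: psql -p 6432 -U pgbouncer pgbouncer
SHOW POOLS;    -- per-database/user pools: cl_active, cl_waiting, sv_idle, ...
SHOW CLIENTS;  -- client connections as pgbouncer sees them
SHOW SERVERS;  -- server connections pgbouncer holds to PostgreSQL
SHOW STATS;    -- request/byte counters per database
```

Comparing SHOW CLIENTS/SHOW SERVERS against pg_stat_activity on the
PostgreSQL side makes it easy to see where connections are piling up.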




--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Explanantion-on-pgbouncer-please-tp5776515p5776952.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] json datatype and table bloat?

2013-11-04 Thread Gregory Haase
While I agree that an equality operator doesn't really make sense for json
operationally, there are certain maintenance reasons why it may come in
handy. Removing duplicate records comes to mind.
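
For instance, that kind of dedup can be approximated today by comparing the
text form (an untested sketch against a hypothetical table t with a json
column details):

```sql
-- Remove duplicate rows, keeping the one with the lowest ctid.
-- Casting to text sidesteps the missing json equality operator;
-- note that differences in key order or whitespace will defeat it.
DELETE FROM t a
USING t b
WHERE a.ctid > b.ctid
  AND a.details::text = b.details::text;
```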

Other than adding basic stats to necessary columns, I would say that the
equality operator is really one of the most basic tenets of a relational
database and should probably exist for any data type - regardless of its
given usefulness.

Greg Haase
On Nov 4, 2013 6:31 PM, "ajeli...@gmail.com"  wrote:

> >>> Along the lines of the equality operator; I have run into issues trying to
> >>> pivot a table/result set with a json type due to what seemed to be no
> >>> equality operator.
> >>
> >> For the curious, and also use-case considerations for development, would
> >> you
> >> be able to share what it is you are doing (and how) that combines full
> >> json
> >> documents with pivoting?
> >>
> >> Compound types holding source data for a pivot seems problematic since
> >> generally all the pivot components are single-valued and, for data,
> often
> >> numerical.
>
> >would also like to see this. json type has completely displaced
> >crosstab in my usage. I don't typically pivot json though: I pivot the
> >raw data then transform to json.  With limited exceptions I consider
> >storing json in actual table rows to be an anti-pattern (but it should
> >still work if you do it).
>
> I could not figure out what I was doing last month to reproduce this. So I
> did a small pivot POC, and it is erroring on the MAX function, so it is
> probably not the same issue. My guess is I tried using the GREATEST
> function as a hail mary (which would not have worked), got the following
> message, ERROR: could not identify a comparison function for type json, and
> then thought/hoped it was the same thing when reading the emails.
>
> CREATE TABLE bad_table_json(id int, detail_type text, details json);
> INSERT INTO bad_table_json values(1, 'a', '{"a":1}'::json);
> INSERT INTO bad_table_json values(1, 'b', '{"a":1}'::json);
> INSERT INTO bad_table_json values(1, 'c', '{"a":1}'::json);
>
> SELECT id
>   ,MAX(CASE WHEN detail_type = 'a' THEN details END) AS a
>   ,MAX(CASE WHEN detail_type = 'b' THEN details END) AS b
>   ,MAX(CASE WHEN detail_type = 'c' THEN details END) AS c
>   FROM bad_table_json
>  GROUP BY id
>
>
>


Re: [GENERAL] json datatype and table bloat?

2013-11-04 Thread ajeli...@gmail.com
>>> Along the lines of the equality operator; I have run into issues trying to
>>> pivot a table/result set with a json type due to what seemed to be no
>>> equality operator.
>> 
>> For the curious, and also use-case considerations for development, would
>> you 
>> be able to share what it is you are doing (and how) that combines full
>> json 
>> documents with pivoting? 
>> 
>> Compound types holding source data for a pivot seems problematic since 
>> generally all the pivot components are single-valued and, for data, often 
>> numerical.

>would also like to see this. json type has completely displaced 
>crosstab in my usage. I don't typically pivot json though: I pivot the 
>raw data then transform to json.  With limited exceptions I consider 
>storing json in actual table rows to be an anti-pattern (but it should 
>still work if you do it). 

I could not figure out what I was doing last month to reproduce this. So I
did a small pivot POC, and it is erroring on the MAX function, so it is
probably not the same issue. My guess is I tried using the GREATEST
function as a hail mary (which would not have worked), got the following
message, ERROR: could not identify a comparison function for type json, and
then thought/hoped it was the same thing when reading the emails.

CREATE TABLE bad_table_json(id int, detail_type text, details json);
INSERT INTO bad_table_json values(1, 'a', '{"a":1}'::json);
INSERT INTO bad_table_json values(1, 'b', '{"a":1}'::json);
INSERT INTO bad_table_json values(1, 'c', '{"a":1}'::json);

SELECT id
  ,MAX(CASE WHEN detail_type = 'a' THEN details END) AS a
  ,MAX(CASE WHEN detail_type = 'b' THEN details END) AS b
  ,MAX(CASE WHEN detail_type = 'c' THEN details END) AS c
  FROM bad_table_json
 GROUP BY id
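
A possible workaround for the missing comparison function (an untested
sketch) is to aggregate the text form and cast back afterwards:

```sql
SELECT id
  ,MAX(CASE WHEN detail_type = 'a' THEN details::text END)::json AS a
  ,MAX(CASE WHEN detail_type = 'b' THEN details::text END)::json AS b
  ,MAX(CASE WHEN detail_type = 'c' THEN details::text END)::json AS c
  FROM bad_table_json
 GROUP BY id;
```

MAX then compares text, for which a comparison function exists, and the
outer cast restores the json type in the output.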







[GENERAL] WITH RECURSIVE doesn't work properly for me

2013-11-04 Thread Jing Fan
I use following command to get a shortest-path query:

with recursive paths( src_id, dest_id, dist) as(
select n1,n2,1
from nodes
union
select src_id, dest_id, min(dist)
from (  select paths.src_id as src_id, nodes.n2 as dest_id,
paths.dist+1 as dist
from paths, nodes
where paths.dest_id=nodes.n1
and paths.src_id<>nodes.n2
) as Temp
group by src_id, dest_id
)
select paths.src_id, paths.dest_id, min(dist)
from paths
group by 1,2;

It seems that this query goes into an infinite loop and finally runs out of
disk space. However, I tested every iteration separately and found that it
converges after 3-4 iterations. I wonder where the problem is. Could
anyone help with it? The attachment is the test data.

Thank you very much:)
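
[Editor's note: the likely culprit is that UNION only removes rows that are
identical in all three columns; a path rediscovered with a larger dist is a
new row, so any cycle in the graph keeps generating ever-longer paths. One
way to confirm this (an untested sketch; the bound of 10 is arbitrary) is to
cap the path length inside the recursive term:]

```sql
WITH RECURSIVE paths(src_id, dest_id, dist) AS (
    SELECT n1, n2, 1
      FROM nodes
    UNION
    SELECT p.src_id, n.n2, p.dist + 1
      FROM paths p
      JOIN nodes n ON p.dest_id = n.n1
     WHERE p.src_id <> n.n2
       AND p.dist < 10   -- arbitrary cap so cycles cannot recurse forever
)
SELECT src_id, dest_id, MIN(dist)
  FROM paths
 GROUP BY 1, 2;
```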
--
-- PostgreSQL database dump
--

SET statement_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SET check_function_bodies = false;
SET client_min_messages = warning;

SET search_path = public, pg_catalog;

--
-- Data for Name: edge; Type: TABLE DATA; Schema: public; Owner: postgres
--

INSERT INTO edge (src_id, dest_id) VALUES (20, 7235);
INSERT INTO edge (src_id, dest_id) VALUES (54, 7969);
INSERT INTO edge (src_id, dest_id) VALUES (110, 6345);
INSERT INTO edge (src_id, dest_id) VALUES (6345, 110);
INSERT INTO edge (src_id, dest_id) VALUES (111, 7314);
INSERT INTO edge (src_id, dest_id) VALUES (135, 3339);
INSERT INTO edge (src_id, dest_id) VALUES (145, 9652);
INSERT INTO edge (src_id, dest_id) VALUES (9652, 145);
INSERT INTO edge (src_id, dest_id) VALUES (147, 3170);
INSERT INTO edge (src_id, dest_id) VALUES (157, 1673);
INSERT INTO edge (src_id, dest_id) VALUES (193, 3694);
INSERT INTO edge (src_id, dest_id) VALUES (3694, 193);
INSERT INTO edge (src_id, dest_id) VALUES (208, 5654);
INSERT INTO edge (src_id, dest_id) VALUES (5654, 208);
INSERT INTO edge (src_id, dest_id) VALUES (222, 8731);
INSERT INTO edge (src_id, dest_id) VALUES (242, 9141);
INSERT INTO edge (src_id, dest_id) VALUES (9141, 242);
INSERT INTO edge (src_id, dest_id) VALUES (284, 9088);
INSERT INTO edge (src_id, dest_id) VALUES (9088, 284);
INSERT INTO edge (src_id, dest_id) VALUES (285, 5756);
INSERT INTO edge (src_id, dest_id) VALUES (290, 9111);
INSERT INTO edge (src_id, dest_id) VALUES (300, 5534);
INSERT INTO edge (src_id, dest_id) VALUES (5534, 300);
INSERT INTO edge (src_id, dest_id) VALUES (305, 4694);
INSERT INTO edge (src_id, dest_id) VALUES (4694, 305);
INSERT INTO edge (src_id, dest_id) VALUES (372, 6555);
INSERT INTO edge (src_id, dest_id) VALUES (6555, 372);
INSERT INTO edge (src_id, dest_id) VALUES (383, 3259);
INSERT INTO edge (src_id, dest_id) VALUES (422, 3410);
INSERT INTO edge (src_id, dest_id) VALUES (3410, 422);
INSERT INTO edge (src_id, dest_id) VALUES (456, 3054);
INSERT INTO edge (src_id, dest_id) VALUES (504, 7314);
INSERT INTO edge (src_id, dest_id) VALUES (515, 9550);
INSERT INTO edge (src_id, dest_id) VALUES (517, 5314);
INSERT INTO edge (src_id, dest_id) VALUES (522, 5778);
INSERT INTO edge (src_id, dest_id) VALUES (5778, 522);
INSERT INTO edge (src_id, dest_id) VALUES (539, 2531);
INSERT INTO edge (src_id, dest_id) VALUES (540, 3611);
INSERT INTO edge (src_id, dest_id) VALUES (554, 9678);
INSERT INTO edge (src_id, dest_id) VALUES (593, 2816);
INSERT INTO edge (src_id, dest_id) VALUES (600, 5524);
INSERT INTO edge (src_id, dest_id) VALUES (5524, 600);
INSERT INTO edge (src_id, dest_id) VALUES (606, 1300);
INSERT INTO edge (src_id, dest_id) VALUES (1300, 606);
INSERT INTO edge (src_id, dest_id) VALUES (616, 5651);
INSERT INTO edge (src_id, dest_id) VALUES (5651, 616);
INSERT INTO edge (src_id, dest_id) VALUES (653, 3044);
INSERT INTO edge (src_id, dest_id) VALUES (679, 1479);
INSERT INTO edge (src_id, dest_id) VALUES (691, 6806);
INSERT INTO edge (src_id, dest_id) VALUES (691, 8032);
INSERT INTO edge (src_id, dest_id) VALUES (6806, 8032);
INSERT INTO edge (src_id, dest_id) VALUES (8032, 6806);
INSERT INTO edge (src_id, dest_id) VALUES (697, 3905);
INSERT INTO edge (src_id, dest_id) VALUES (3905, 697);
INSERT INTO edge (src_id, dest_id) VALUES (720, 2244);
INSERT INTO edge (src_id, dest_id) VALUES (726, 2920);
INSERT INTO edge (src_id, dest_id) VALUES (740, 2939);
INSERT INTO edge (src_id, dest_id) VALUES (2939, 740);
INSERT INTO edge (src_id, dest_id) VALUES (756, 5236);
INSERT INTO edge (src_id, dest_id) VALUES (813, 8713);
INSERT INTO edge (src_id, dest_id) VALUES (8713, 813);
INSERT INTO edge (src_id, dest_id) VALUES (827, 1985);
INSERT INTO edge (src_id, dest_id) VALUES (827, 6264);
INSERT INTO edge (src_id, dest_id) VALUES (1985, 827);
INSERT INTO edge (src_id, dest_id) VALUES (6264, 827);
INSERT INTO edge (src_id, dest_id) VALUES (859, 2975);
INSERT INTO edge (src_id, dest_id) VALUES (872, 1532);
INSERT INTO edge (src_id, dest_id) VALUES (885, 9134);
INSERT INTO edge (src_id, dest_id) VALUES (9134, 885);

Re: [GENERAL] 9.1.9 -> 9.1.10 causing corruption

2013-11-04 Thread Ben Chobot
Anybody? I've tried this again on another streaming replication server, and 
again had pg_toast errors until I re-basebackup'd it. Does it make sense to try 
disabling streaming replication for the backup and just use wal replay from the 
archive? My slave will be a bit behind, but I'll have drained the clients from 
it anyway, so that's not a big deal for a temporary thing.

On Nov 1, 2013, at 1:44 PM, Ben Chobot wrote:

> I've got a bunch of independent database clusters, each with a couple of 
> streaming replication slaves. I'm starting to upgrade them to 9.1.10, but on 
> 3 of the 3 I've tried so far, this has somehow resulted in data corruption. 
> I'm hoping it was the upgrade itself that caused the corruption, instead of 
> 9.1.10 just exposing some latent corruption, for two reasons:
> 1. after doing a basebackup from the master, the same queries that started 
> causing problems on the new 9.1.10 node no longer cause problems
> 2. the second 9.1.9 slave of each cluster doesn't have issues that the 
> newly-upgraded 9.1.10 node did have with the same query. 
> 
> Interestingly, this seems to only be affecting my replication slaves. We 
> snapshot all our clusters at a filesystem level and restore those snapshots 
> into standalone, non-replicating test environments, and those test 
> environments were all upgraded to 9.1.10 without issue. 
> 
> All my servers are running a mostly-stock config of 9.1.9, which has been 
> modified like so:
> 
> max_connections = 1000 # we don't really go this high, fwiw
> shared_buffers = 6000MB # 60GB machines
> wal_level = hot_standby
> hot_standby = on
> effective_cache_size = 34278MB
> log_min_duration_statement = 50
> log_duration = on
>  
> They store their data on XFS, with the options noatime,allocsize=1m.
> 
> As for the errors, on two nodes, a primary key lookup resulted in multiple 
> rows. Interestingly, this was on the same index for both nodes, which I would 
> find unlikely if it was just random corruption or hardware-related, given 
> that the systems have the same schema but independent data and usage. Using 
> pg_reorg on that table on the master node fixed the issue on the node I tried 
> it on, but exposed another problem:
> 
> ERROR: unexpected chunk number 3 (expected 1) for toast value 67728890 in 
> pg_toast_2619
> 
> The pg_toast error occurred on another node too. Sadly these are 
> production systems and I didn't know if the pg_toast table was the same for 
> all three nodes, and it's quite possible that all three nodes would have 
> exhibited the same set of problems if I'd let them run production queries 
> long enough.
> 
> Anyway, naturally this is making me nervous to do more upgrades, so I'm 
> hoping I can figure out if it's the upgrade that's killing me, or if 9.1.10 
> is exposing underlying problems, or what. Unfortunately I have no idea what 
> to look for.
> 
> These systems are Ubuntu precise, and I recalled this:
> 
> http://www.postgresql.org/message-id/21697.1343141...@sss.pgh.pa.us
> 
> ... but it appears that the fix for that has already been applied to my 
> postgresql-common package. Looking over the postgres logs before the upgrade, 
> I don't see anything abnormal.
> 
> Does this behavior sound familiar to anybody else out there, or does anybody 
> have suggestions of what to look for or try?



Re: [GENERAL] table lock when where clause uses unique constraint instead of primary key.

2013-11-04 Thread Adrian Klaver

On 11/04/2013 01:56 PM, Jeff Amiel wrote:






On Monday, November 4, 2013 3:23 PM, Adrian Klaver  
wrote:



Probably poor choice of words:). So then, what we are looking at is
other clients trying to update user_profile but not succeeding because
pid 4899 is blocking. At this point all I can see is that the offending
query is updating some fields the others are not; disabled and reset_code.

Is that always the case?

If so any thing in the code path that is different when those fields are
updated?


We have scenarios where the exact same query is in play in all instances.


Which query is that?
And what scenario are you talking about, blocking query or something else?


Any thoughts as to the fact that this could be a full table lock simply based 
on the use of username (not the primary key, but specifically a unique constraint) 
in the where clause?  I'm grasping at straws, I know.


What makes you think the username condition is the problem?







--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Suitable Index for my Table

2013-11-04 Thread Martijn van Oosterhout
On Mon, Nov 04, 2013 at 10:44:29PM +0100, Janek Sendrowski wrote:
> I am always searching for single colums. The  values are different every 
> time, because these are distances from my new sentence to my pivots.
> Does the Index support a query with this WHERE statement: WHERE value BETWEEN 
> (distance1 - radius) AND (distance1 + radius)?

Ok, this is not consistent. You say you're only searching single
columns, but in your example query you're using 23 columns in one
query.

> A working query looks like this:
> SELECT id FROM distance WHERE
> value BETWEEN (distance1 - radius) AND (distance1 + radius) AND
> value BETWEEN (distance2 - radius) AND (distance2 + radius) AND
> value BETWEEN (distance3 - radius) AND (distance3 + radius) AND

So you're looking for the same value amongst all distances? That
doesn't seem very useful: if distance1=0, distance2=1, and
radius=0.2, this query will never match that row, no matter the
value of value.

Anyway, unless you can describe this problem as something geometric
(such that you can consider it a search for overlapping cubes) I'd do
what the other post suggests and just put a btree index on every
column.
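
That suggestion amounts to the following (column and index names are
hypothetical):

```sql
CREATE INDEX distance_d1_idx ON distance (distance1);
CREATE INDEX distance_d2_idx ON distance (distance2);
-- ... and so on for each distance column;
-- the planner can then BitmapAnd several of them within one query.
```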

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> He who writes carelessly confesses thereby at the very outset that he does
> not attach much importance to his own thoughts.
   -- Arthur Schopenhauer




Re: [GENERAL] Suitable Index for my Table

2013-11-04 Thread Kevin Grittner
Janek Sendrowski  wrote:

> Does the Index support a query with this WHERE statement: WHERE
> value BETWEEN (distance1 - radius) AND (distance1 + radius)?

Note that the following are all equivalent:

value BETWEEN (distance1 - radius) AND (distance1 + radius)

value >= (distance1 - radius) and value <= (distance1 + radius)

(value + radius) >= distance1 and (value - radius) <= distance1

distance1 >= (value - radius) and distance1 <= (value + radius)

distance1 BETWEEN (value - radius) AND (value + radius)

The first two are not suitable for an index scan on distance1, but
the last three are.  If you can rewrite your query to use that
syntax, it will be able to use a btree index on distance1.

> okay, I will use arrays instead of multiple columns.

It's probably worth looking at, but I can't say that is the best
way from the information available so far.

> A working query looks like this:
> SELECT id FROM distance WHERE
> value BETWEEN (distance1 - radius) AND (distance1 + radius) AND
> value BETWEEN (distance2 - radius) AND (distance2 + radius) AND
> value BETWEEN (distance3 - radius) AND (distance3 + radius) AND
> value BETWEEN (distance4 - radius) AND (distance4 + radius) AND
> value BETWEEN (distance5 - radius) AND (distance5 + radius) AND
> value BETWEEN (distance6 - radius) AND (distance6 + radius) AND
> value BETWEEN (distance7 - radius) AND (distance7 + radius) AND
> value BETWEEN (distance8 - radius) AND (distance8 + radius) AND
> value BETWEEN (distance9 - radius) AND (distance9 + radius) AND
> value BETWEEN (distance10 - radius) AND (distance10 + radius) AND
> value BETWEEN (distance11 - radius) AND (distance11 + radius) AND
> value BETWEEN (distance12 - radius) AND (distance12 + radius) AND
> value BETWEEN (distance13 - radius) AND (distance13 + radius) AND
> value BETWEEN (distance14 - radius) AND (distance14 + radius) AND
> value BETWEEN (distance15 - radius) AND (distance15 + radius) AND
> value BETWEEN (distance16 - radius) AND (distance16 + radius) AND
> value BETWEEN (distance17 - radius) AND (distance17 + radius) AND
> value BETWEEN (distance18 - radius) AND (distance18 + radius) AND
> value BETWEEN (distance19 - radius) AND (distance19 + radius) AND
> value BETWEEN (distance20 - radius) AND (distance20 + radius) AND
> value BETWEEN (distance21 - radius) AND (distance21 + radius) AND
> value BETWEEN (distance22 - radius) AND (distance22 + radius) AND
> value BETWEEN (distance23 - radius) AND (distance23 + radius);

An array column called dist_array might support something along the
lines of (untested):

SELECT id FROM distance
  WHERE (value - radius) <= ALL (dist_array)
    AND (value + radius) >= ALL (dist_array);

I'm not sure whether a GIN index on the dist_array column would be
usable by such a query, but it might be worth testing.
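
Testing that would look something like this (a sketch; column and index
names are hypothetical):

```sql
CREATE INDEX distance_dist_array_gin ON distance USING gin (dist_array);

-- Then check whether the planner actually picks it up:
EXPLAIN ANALYZE
SELECT id FROM distance
 WHERE (0.5 - 0.2) <= ALL (dist_array)
   AND (0.5 + 0.2) >= ALL (dist_array);
```

Be aware that the default array opclass indexes containment and overlap
(@>, <@, &&, =) rather than ordered ALL comparisons, so a seq scan here
would not be surprising.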

> Until now it just does a Seq Scan when I'm searching through the
> table 'distances'. I can show you the query plan, if you want.

Actual query text, table definitions (with indexes), and EXPLAIN
ANALYZE output are always helpful.

http://wiki.postgresql.org/wiki/SlowQueryQuestions

Which reminds me, this sort of question might be better on the
pgsql-performance list next time.

> The number of resulting rows ranges from 0 to
> something like 100 for the beginning.

Keep in mind that indexes will rarely be used on small tables.  It
isn't until there are many data pages that access through indexes
begins to be faster.

Also note that (as previously mentioned) the names of variables
here suggest that geometry or PostGIS types may be a cleaner way to
implement this than dealing in raw coordinates.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [GENERAL] table lock when where clause uses unique constraint instead of primary key.

2013-11-04 Thread Jeff Amiel





On Monday, November 4, 2013 3:23 PM, Adrian Klaver  
wrote:


>Probably poor choice of words:). So then, what we are looking at is 
>other clients trying to update user_profile but not succeeding because 
>pid 4899 is blocking. At this point all I can see is that the offending 
>query is updating some fields the others are not; disabled and reset_code.
>
>Is that always the case?
>
>If so any thing in the code path that is different when those fields are 
>updated?

We have scenarios where the exact same query is in play in all instances.
Any thoughts as to the fact that this could be a full table lock simply based 
on the use of username (not the primary key, but specifically a unique constraint) 
in the where clause?  I'm grasping at straws, I know.
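
One way to check that theory is to read the lock types straight out of the
catalogs (a sketch; the column names assume 9.1-era pg_stat_activity with
procpid/current_query):

```sql
SELECT bl.pid          AS blocked_pid,
       a.current_query AS blocked_query,
       kl.pid          AS blocking_pid,
       kl.mode         AS blocking_mode,
       kl.locktype
  FROM pg_locks bl
  JOIN pg_stat_activity a ON a.procpid = bl.pid
  JOIN pg_locks kl
    ON kl.locktype = bl.locktype
   AND kl.database      IS NOT DISTINCT FROM bl.database
   AND kl.relation      IS NOT DISTINCT FROM bl.relation
   AND kl.transactionid IS NOT DISTINCT FROM bl.transactionid
   AND kl.pid <> bl.pid
 WHERE NOT bl.granted
   AND kl.granted;
```

If the blocker turns out to hold only row-level (transactionid/tuple) locks
rather than a relation-level lock, the username-vs-primary-key theory is
probably a red herring.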




Re: [GENERAL] Suitable Index for my Table

2013-11-04 Thread Janek Sendrowski
Thanks for your Answers!
 
@Martijn van Oosterhout
 
I am always searching for single columns. The values are different every time, 
because these are distances from my new sentence to my pivots.
Does the index support a query with this WHERE statement: WHERE value BETWEEN 
(distance1 - radius) AND (distance1 + radius)?
 
@Kevin Grittner
okay, I will use arrays instead of multiple columns.
A working query looks like this:
SELECT id FROM distance WHERE
value BETWEEN (distance1 - radius) AND (distance1 + radius) AND
value BETWEEN (distance2 - radius) AND (distance2 + radius) AND
value BETWEEN (distance3 - radius) AND (distance3 + radius) AND
value BETWEEN (distance4 - radius) AND (distance4 + radius) AND
value BETWEEN (distance5 - radius) AND (distance5 + radius) AND
value BETWEEN (distance6 - radius) AND (distance6 + radius) AND
value BETWEEN (distance7 - radius) AND (distance7 + radius) AND
value BETWEEN (distance8 - radius) AND (distance8 + radius) AND
value BETWEEN (distance9 - radius) AND (distance9 + radius) AND
value BETWEEN (distance10 - radius) AND (distance10 + radius) AND
value BETWEEN (distance11 - radius) AND (distance11 + radius) AND
value BETWEEN (distance12 - radius) AND (distance12 + radius) AND
value BETWEEN (distance13 - radius) AND (distance13 + radius) AND
value BETWEEN (distance14 - radius) AND (distance14 + radius) AND
value BETWEEN (distance15 - radius) AND (distance15 + radius) AND
value BETWEEN (distance16 - radius) AND (distance16 + radius) AND
value BETWEEN (distance17 - radius) AND (distance17 + radius) AND
value BETWEEN (distance18 - radius) AND (distance18 + radius) AND
value BETWEEN (distance19 - radius) AND (distance19 + radius) AND
value BETWEEN (distance20 - radius) AND (distance20 + radius) AND
value BETWEEN (distance21 - radius) AND (distance21 + radius) AND
value BETWEEN (distance22 - radius) AND (distance22 + radius) AND
value BETWEEN (distance23 - radius) AND (distance23 + radius);
 
I'm not sure about the number of pivots yet. It can get bigger,
but this is a single query to determine the fitting sentences.
The query just returns the ID of the column, which is a foreign key to a sentence 
in another table.
The table distance contains the metric distance of every pivot to all of my 
sentences in the other table.
I haven't found the right pivots yet, but the distance should be between 0 and 
1 in most cases, although it may also be 0.
Until now it just does a Seq Scan when I'm searching through the table 
'distances'. I can show you the query plan, if you want.
The number of resulting rows ranges from 0 to something like 
100 for the beginning.
 
Hope this is enough information
 
Thank you for your help
Janek Sendrowski




Re: [GENERAL] table lock when where clause uses unique constraint instead of primary key.

2013-11-04 Thread Adrian Klaver

On 11/04/2013 01:16 PM, Jeff Amiel wrote:






On Monday, November 4, 2013 2:56 PM, Adrian Klaver  
wrote:


In the screenshot you posted what are the columns indicating, in
particular the third one?



Assuming the third column is pointing to the pid of the offending query
it is interesting that the other queries are coming from other IPs.
Almost as if the original query is bouncing off something. Is that possible?


The third column is indeed the pid of the backend that the query is 
'blocked' by.
Hmm... what does "bouncing off" mean?



Probably poor choice of words:). So then, what we are looking at is 
other clients trying to update user_profile but not succeeding because 
pid 4899 is blocking. At this point all I can see is that the offending 
query is updating some fields the others are not; disabled and reset_code.


Is that always the case?

If so any thing in the code path that is different when those fields are 
updated?


--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] table lock when where clause uses unique constraint instead of primary key.

2013-11-04 Thread Rob Sargent

On 11/04/2013 01:56 PM, Adrian Klaver wrote:

On 11/04/2013 12:44 PM, Jeff Amiel wrote:






On Monday, November 4, 2013 2:25 PM, Adrian Klaver 
 wrote:




I grepped the schema (just to be sure) - no foreign keys on columns 
or table at all.
I do have an audit trigger on the table for updates - inserts into 
an audit table when changes are made and it DOES do a separate 
select from user_profile for other reasons - but not "for update" or 
anything - no explicit locking.





Would it be possible to see that audit function?


It's kind of long (really just a lot of compares of old/new values).
The relevant portion (that selects from user_profile) looks like this:

BEGIN
 SELECT user_id, user_ip INTO my_user_id, my_user_ip FROM 
audit_metadata WHERE pg_pid = getpid();

 IF ((NOT FOUND) OR (my_user_id = -1)) THEN
 SELECT user_id INTO my_user_id FROM user_profile 
WHERE username = 'db-'||CURRENT_USER and user_type='DBASE';

 IF (NOT FOUND) THEN
  RAISE EXCEPTION 'USERNAME NOT FOUND IN 
USER_PROFILE: % ',CURRENT_USER;

 END IF;
 my_user_ip := inet_client_addr();
 END IF;

 INSERT INTO audit .

 END;



Hmmm, nothing obvious here.

In the screenshot you posted what are the columns indicating, in 
particular the third one?


Assuming the third column is pointing to the pid of the offending 
query it is interesting that the other queries are coming from other 
IPs. Almost as if the original query is bouncing off something. Is 
that possible?



Are we sure the interaction with audit_metadata is clean and tidy?


Re: [GENERAL] table lock when where clause uses unique constraint instead of primary key.

2013-11-04 Thread Adrian Klaver

On 11/04/2013 12:44 PM, Jeff Amiel wrote:






On Monday, November 4, 2013 2:25 PM, Adrian Klaver  
wrote:



I grepped the schema (just to be sure) - no foreign keys on columns or table at 
all.
I do have an audit trigger on the table for updates - inserts into an audit table when 
changes are made and it DOES do a separate select from user_profile for other reasons - 
but not "for update" or anything - no explicit locking.




Would it be possible to see that audit function?


It's kind of long (really just a lot of compares of old/new values).
The relevant portion (that selects from user_profile) looks like this:

BEGIN
 SELECT user_id, user_ip INTO my_user_id, my_user_ip FROM 
audit_metadata WHERE pg_pid = getpid();
 IF ((NOT FOUND) OR (my_user_id = -1)) THEN
 SELECT user_id INTO my_user_id FROM user_profile WHERE 
username = 'db-'||CURRENT_USER and user_type='DBASE';
 IF (NOT FOUND) THEN
  RAISE EXCEPTION 'USERNAME NOT FOUND IN USER_PROFILE: % 
',CURRENT_USER;
 END IF;
 my_user_ip := inet_client_addr();
 END IF;

 INSERT INTO audit .

 END;



Hmmm, nothing obvious here.

In the screenshot you posted what are the columns indicating, in 
particular the third one?


Assuming the third column is pointing to the pid of the offending query 
it is interesting that the other queries are coming from other IPs. 
Almost as if the original query is bouncing off something. Is that possible?


--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] table lock when where clause uses unique constraint instead of primary key.

2013-11-04 Thread Jeff Amiel





On Monday, November 4, 2013 2:25 PM, Adrian Klaver  
wrote:

>
> I grepped the schema (just to be sure) - no foreign keys on columns or table 
> at all.
> I do have an audit trigger on the table for updates - inserts into an audit 
> table when changes are made and it DOES do a separate select from 
> user_profile for other reasons - but not "for update" or anything - no 
> explicit locking.
>

>Would it be possible to see that audit function?

It's kind of long (really just a lot of compares of old/new values).
The relevant portion (that selects from user_profile) looks like this:

BEGIN
            SELECT user_id, user_ip INTO my_user_id, my_user_ip FROM 
audit_metadata WHERE pg_pid = getpid();
            IF ((NOT FOUND) OR (my_user_id = -1)) THEN
                SELECT user_id INTO my_user_id FROM user_profile WHERE username 
= 'db-'||CURRENT_USER and user_type='DBASE';
                IF (NOT FOUND) THEN
                     RAISE EXCEPTION 'USERNAME NOT FOUND IN USER_PROFILE: % 
',CURRENT_USER;
                END IF;
                my_user_ip := inet_client_addr();
            END IF;

            INSERT INTO audit .
        
        END;





Re: [GENERAL] table lock when where clause uses unique constraint instead of primary key.

2013-11-04 Thread Adrian Klaver

On 11/04/2013 12:15 PM, Jeff Amiel wrote:






On Monday, November 4, 2013 1:48 PM, Adrian Klaver  
wrote:



Any triggers on user_profile?
Any FK relationship in either direction?



I grepped the schema (just to be sure) - no foreign keys on columns or table at 
all.
I do have an audit trigger on the table for updates - inserts into an audit table when 
changes are made and it DOES do a separate select from user_profile for other reasons - 
but not "for update" or anything - no explicit locking.



Would it be possible to see that audit function?

--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] ON DELETE CASCADE Question

2013-11-04 Thread Elliot

On 2013-11-04 14:44, Jason Long wrote:

CREATE TABLE t_a
(
   id bigint NOT NULL,
   CONSTRAINT pk_a PRIMARY KEY (id)
);

CREATE TABLE t_b
(
   id bigint NOT NULL,
   CONSTRAINT pk_b PRIMARY KEY (id),
   CONSTRAINT fk_b_a FOREIGN KEY (id) REFERENCES t_a (id) ON DELETE
CASCADE
);


INSERT INTO t_a VALUES (1),(2),(3);
INSERT INTO t_b VALUES (1),(2),(3);

delete from t_b where id = 2;

select * from t_a;
This depends entirely on your use case and how your data actually relate 
to each other, but an alternative to using a trigger to do that delete 
you could possibly go with inheritance and avoid the foreign keys 
altogether. Presumably the other tables you mention that might have 
references to t_a should also be defined as inheriting from A if they 
have the same relationship to A that B does.


Example:

CREATE TABLE t_a
(
  id bigint NOT NULL,
  CONSTRAINT pk_a PRIMARY KEY (id)
);

CREATE TABLE t_b
(
  CONSTRAINT pk_b PRIMARY KEY (id)
)
inherits (t_a);

INSERT INTO t_b VALUES (1),(2),(3);

select * from t_a;

delete from t_a where id = 2;

select * from t_a;



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] table lock when where clause uses unique constraint instead of primary key.

2013-11-04 Thread Jeff Amiel





On Monday, November 4, 2013 1:48 PM, Adrian Klaver  
wrote:


>Any triggers on user_profile?
>Any FK relationship in either direction?


I grepped the schema (just to be sure) - no foreign keys on columns or table at 
all.
I do have an audit trigger on the table for updates - inserts into an audit 
table when changes are made and it DOES do a separate select from user_profile 
for other reasons - but not "for update" or anything - no explicit locking.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ON DELETE CASCADE Question

2013-11-04 Thread Adrian Klaver

On 11/04/2013 11:44 AM, Jason Long wrote:

I would like for corresponding records in t_a to be deleted when I
delete a record from t_b.  This deletes from t_b when I delete from t_a,
but not the other way around.  I am unable to create a foreign key
constraint on t_a because this table holds records from several other
tables. I added a simple script below that demonstrates my problem.


As I sent my previous post, it dawned on me what I think you were 
trying to say: that the id field has values that have relevance to 
tables other than t_b and would not be accepted by an FK to t_b. In 
other words, what you are looking for is a conditional FK relationship 
between t_a and t_b. AFAIK to get that you will have to roll your own 
trigger on t_b.
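A minimal sketch of such a trigger (hypothetical names; it assumes the only
goal is that the matching t_a row should go away whenever its t_b row does):

```sql
-- Sketch: when a t_b row is deleted, remove the corresponding t_a row.
CREATE OR REPLACE FUNCTION t_b_delete_parent() RETURNS trigger AS $$
BEGIN
    DELETE FROM t_a WHERE id = OLD.id;
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_t_b_delete_parent
    AFTER DELETE ON t_b
    FOR EACH ROW
    EXECUTE PROCEDURE t_b_delete_parent();
```

Since the t_b row is already gone by the time the trigger fires, the existing
ON DELETE CASCADE on fk_b_a has nothing further to cascade to.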



--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ON DELETE CASCADE Question

2013-11-04 Thread Adrian Klaver

On 11/04/2013 11:44 AM, Jason Long wrote:

I would like for corresponding records in t_a to be deleted when I
delete a record from t_b.  This deletes from t_b when I delete from t_a,
but not the other way around.  I am unable to create a foreign key
constraint on t_a because this table holds records from several other
tables.


I am not sure how this is a problem? If you propose to delete a value 
from t_a and that value is used by other tables how is the manner of its 
deletion relevant?


I added a simple script below that demonstrates my problem.


Any suggestions?





--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] table lock when where clause uses unique constraint instead of primary key.

2013-11-04 Thread Adrian Klaver

On 11/04/2013 09:06 AM, Jeff Amiel wrote:

PostgreSQL 9.2.4 on x86_64-pc-solaris2.10, compiled by gcc (GCC) 3.4.3 
(csl-sol210-3_4-branch+sol_rpath), 64-bit
Have got an annoying scenario that has been creating issues for us for years….
Time to try to figure it out.
Essentially, we have a user table where we maintain username, id number, 
enabled/disabled state, etc.
When a user logs in successfully, we reset any failed login attempts on the 
user’s unique entry in this table.

CREATE TABLE user_profile
(
   user_id serial NOT NULL,
   username character varying(50) NOT NULL,
   login_attempts integer DEFAULT 0,
   …
   CONSTRAINT user_id PRIMARY KEY (user_id),
   CONSTRAINT name UNIQUE (username)
)

However - we often get “lock storms” where SOMEHOW, updates for individual 
users are causing all other updates to ‘lock’ on each other.
Eventually the storm abates (sometimes in seconds - sometimes in minutes)
See edited screen cap:
http://i.imgur.com/x4DdYaV.png
(PID 4899 just has a “where username = $1” cut off that you can’t see out to the 
right)
All updates are done using the username (unique constraint) instead of the 
primary key (the serial).
In retrospect, I suppose these queries should be using the primary key (9 year 
old code) but I am flummoxed as to how these updates can be causing 
table-level locks.
I’ve never been able to catch the lock information during one of these storms - 
but I assume it is a table level lock causing this.
Thoughts?  Is this just ‘normal’ behavior that I am not expecting? (because 
postgres doesn’t know that the username is a unique field)


Any triggers on user_profile?

Any FK relationship in either direction?








--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] ON DELETE CASCADE Question

2013-11-04 Thread Jason Long
I would like for corresponding records in t_a to be deleted when I
delete a record from t_b.  This deletes from t_b when I delete from t_a,
but not the other way around.  I am unable to create a foreign key
constraint on t_a because this table holds records from several other
tables. I added a simple script below that demonstrates my problem. 

Any suggestions?

/***/
drop table IF EXISTS t_b;
drop table IF EXISTS t_a;

CREATE TABLE t_a
(
  id bigint NOT NULL,
  CONSTRAINT pk_a PRIMARY KEY (id)
);

CREATE TABLE t_b
(
  id bigint NOT NULL,
  CONSTRAINT pk_b PRIMARY KEY (id),
  CONSTRAINT fk_b_a FOREIGN KEY (id) REFERENCES t_a (id) ON DELETE
CASCADE
);


INSERT INTO t_a VALUES (1),(2),(3);
INSERT INTO t_b VALUES (1),(2),(3);

delete from t_b where id = 2;

select * from t_a;







-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] table lock when where clause uses unique constraint instead of primary key.

2013-11-04 Thread Rob Sargent

On 11/04/2013 11:38 AM, Jeff Amiel wrote:


to: Rob Sargent

The login references have nothing to do with postgres - it is simply the 
table/column names being used.



I'm sorry, I thought user_profile.login_attempts was being set to zero 
during login (or perhaps after successful login, in a two-step process) and 
that interaction was leading to the "lock storm". Suspected something in 
the details there would be interesting.


Re: [GENERAL] Suitable Index for my Table

2013-11-04 Thread Martijn van Oosterhout
On Mon, Nov 04, 2013 at 07:21:11PM +0100, Janek Sendrowski wrote:
> Hi,
> 
> I've got a table with many Values of the Type REAL.
> These are my metric distances or my pivots to my sentences.
> The table looks like this:
> 
> ID INTEGER, distance1 REAL, distance2 REAL, distance3 REAL, distance4 REAL, 
> ..., distance24 REAL
> 
> The range of the values is between 0 and 1, so a value looks like 0.196 or 
> 0.891.
> 
> This is my query:
> 
> WHERE value BETWEEN (distance1 - radius) AND (distance1 + radius)
> WHERE value BETWEEN (distance2 - radius) AND (distance2 + radius)
> WHERE value BETWEEN (distance3 - radius) AND (distance3 + radius)
> WHERE value BETWEEN (distance4 - radius) AND (distance4 + radius)
> ...
> 
> Now I'm searching for a suitable index.

This sounds like a job for a geometric datatype, a la GiST.

http://www.postgresql.org/docs/9.3/static/cube.html

CREATE INDEX foo ON bar USING gist ( cube( ARRAY[distance1], ARRAY[distance1] ) );

Then you can do lookups with:

SELECT * FROM bar WHERE 
  cube( ARRAY[distance1], ARRAY[distance1] )
  &&
  cube( ARRAY[value - radius], ARRAY[value + radius] )

If you commonly use sets of columns you can go multiple dimensional for
extra benefit.
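For instance, two distance columns could be folded into one 2-D cube index (a
sketch with hypothetical names; the literals stand in for the searched values
and radius):

```sql
CREATE INDEX bar_d1_d2_idx ON bar USING gist
    ( cube( ARRAY[distance1, distance2], ARRAY[distance1, distance2] ) );

SELECT * FROM bar
WHERE cube( ARRAY[distance1, distance2], ARRAY[distance1, distance2] )
   && cube( ARRAY[0.5 - 0.1, 0.7 - 0.1], ARRAY[0.5 + 0.1, 0.7 + 0.1] );
```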

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> He who writes carelessly confesses thereby at the very outset that he does
> not attach much importance to his own thoughts.
   -- Arthur Schopenhauer




Re: [GENERAL] Suitable Index for my Table

2013-11-04 Thread Kevin Grittner
Janek Sendrowski  wrote:

> I've got a table with many Values of the Type REAL.
> These are my metric distances or my pivots to my sentences.
> The table looks like this:
>
> ID INTEGER, distance1 REAL, distance2 REAL, distance3 REAL,
> distance4 REAL, ..., distance24 REAL

It should always raise a big red flag when you see column names
with numeric suffixes like that.  Usually these represent data
which should be normalized out to another table, or possibly
represented by a single column which is an array, hstore, or json
type.

> The range of the values is between 0 and 1, so a value looks like
> 0.196 or 0.891.
>
> This is my query:
>
> WHERE value BETWEEN (distance1 - radius) AND (distance1 + radius)
> WHERE value BETWEEN (distance2 - radius) AND (distance2 + radius)
> WHERE value BETWEEN (distance3 - radius) AND (distance3 + radius)
> WHERE value BETWEEN (distance4 - radius) AND (distance4 + radius)
> ...

Are those the WHERE clauses of four different queries, or did you
mean for those to be four criteria on a single query?  It would be
better to show an actual, working query and table layout, so people
have a more clear idea of the problem they are being asked to help
solve.

> Now I'm searching for a suitable index.

A btree index on each distance column, maybe?
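Note that the BETWEEN condition can be flipped so the column stands alone on
one side of the comparison, which is what makes a plain btree index usable (a
sketch; the table name is hypothetical and 0.5/0.05 stand in for the searched
value and radius):

```sql
CREATE INDEX t_distance1_idx ON t (distance1);

-- "value BETWEEN distance1 - radius AND distance1 + radius"
-- is equivalent to:
SELECT id FROM t
WHERE distance1 BETWEEN 0.5 - 0.05 AND 0.5 + 0.05;
```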

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] table lock when where clause uses unique constraint instead of primary key.

2013-11-04 Thread Jeff Amiel


to: Rob Sargent

The login references have nothing to do with postgres - it is simply the 
table/column names being used.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] json datatype and table bloat?

2013-11-04 Thread Merlin Moncure
On Mon, Nov 4, 2013 at 12:14 PM, David Johnston  wrote:
> ajeli...@gmail.com wrote
>> Along the lines of the equality operator; I have run into issues trying to
>> pivot a table/result set with a json type due to what seemed to be no
>> equality operator.
>
> For the curious, and also use-case considerations for development, would you
> be able to share what it is you are doing (and how) that combines full json
> documents with pivoting?
>
> Compound types holding source data for a pivot seems problematic since
> generally all the pivot components are single-valued and, for data, often
> numerical.

I would also like to see this. json type has completely displaced
crosstab in my usage. I don't typically pivot json though: I pivot the
raw data then transform to json.  With limited exceptions I consider
storing json in actual table rows to be an anti-pattern (but it should
still work if you do it).
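A sketch of that pattern, using a hypothetical measurements(item, metric,
value) table: pivot with ordinary aggregates first, then turn each result row
into json:

```sql
SELECT row_to_json(p)
FROM (
    SELECT item,
           max(CASE WHEN metric = 'height' THEN value END) AS height,
           max(CASE WHEN metric = 'width'  THEN value END) AS width
    FROM measurements
    GROUP BY item
) p;
```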

merlin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Suitable Index for my Table

2013-11-04 Thread Janek Sendrowski
Hi,

I've got a table with many values of type REAL.
These are my metric distances or my pivots to my sentences.
The table looks like this:

ID INTEGER, distance1 REAL, distance2 REAL, distance3 REAL, distance4 REAL, 
..., distance24 REAL

The range of the values is between 0 and 1, so a value looks like 0.196 or 
0.891.

This is my query:

WHERE value BETWEEN (distance1 - radius) AND (distance1 + radius)
WHERE value BETWEEN (distance2 - radius) AND (distance2 + radius)
WHERE value BETWEEN (distance3 - radius) AND (distance3 + radius)
WHERE value BETWEEN (distance4 - radius) AND (distance4 + radius)
...

Now I'm searching for a suitable index.

Does any of you have an idea?

Thanks very much for any support!

Janek Sendrowski


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] json datatype and table bloat?

2013-11-04 Thread David Johnston
ajeli...@gmail.com wrote
> Along the lines of the equality operator; I have run into issues trying to
> pivot a table/result set with a json type due to what seemed to be no
> equality operator.

For the curious, and also use-case considerations for development, would you
be able to share what it is you are doing (and how) that combines full json
documents with pivoting?

Compound types holding source data for a pivot seems problematic since
generally all the pivot components are single-valued and, for data, often
numerical.

David J.





--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/json-datatype-and-table-bloat-tp5776182p5776880.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] table lock when where clause uses unique constraint instead of primary key.

2013-11-04 Thread Rob Sargent

On 11/04/2013 10:06 AM, Jeff Amiel wrote:

PostgreSQL 9.2.4 on x86_64-pc-solaris2.10, compiled by gcc (GCC) 3.4.3 
(csl-sol210-3_4-branch+sol_rpath), 64-bit
Have got an annoying scenario that has been creating issues for us for years….
Time to try to figure it out.
Essentially, we have a user table where we maintain username, id number, 
enabled/disabled state, etc.
When a user logs in successfully, we reset any failed login attempts on the 
user’s unique entry in this table.

CREATE TABLE user_profile
(
   user_id serial NOT NULL,
   username character varying(50) NOT NULL,
   login_attempts integer DEFAULT 0,
   …
   CONSTRAINT user_id PRIMARY KEY (user_id),
   CONSTRAINT name UNIQUE (username)
)

However - we often get “lock storms” where SOMEHOW, updates for individual 
users are causing all other updates to ‘lock’ on each other.
Eventually the storm abates (sometimes in seconds - sometimes in minutes)
See edited screen cap:
http://i.imgur.com/x4DdYaV.png
(PID 4899 just has a “where username = $1” cut off that you can’t see out to the 
right)
All updates are done using the username (unique constraint) instead of the 
primary key (the serial).
In retrospect, I suppose these queries should be using the primary key (9 year 
old code) but I am flummoxed as to how these updates can be causing 
table-level locks.
I’ve never been able to catch the lock information during one of these storms - 
but I assume it is a table level lock causing this.
Thoughts?  Is this just ‘normal’ behavior that I am not expecting? (because 
postgres doesn’t know that the username is a unique field)



And the login process is what, exactly, from the db perspective?



[GENERAL] table lock when where clause uses unique constraint instead of primary key.

2013-11-04 Thread Jeff Amiel
PostgreSQL 9.2.4 on x86_64-pc-solaris2.10, compiled by gcc (GCC) 3.4.3 
(csl-sol210-3_4-branch+sol_rpath), 64-bit
Have got an annoying scenario that has been creating issues for us for years….
Time to try to figure it out.
Essentially, we have a user table where we maintain username, id number, 
enabled/disabled state, etc.
When a user logs in successfully, we reset any failed login attempts on the 
user’s unique entry in this table.

CREATE TABLE user_profile
(
  user_id serial NOT NULL,
  username character varying(50) NOT NULL, 
  login_attempts integer DEFAULT 0,
  …
  CONSTRAINT user_id PRIMARY KEY (user_id),
  CONSTRAINT name UNIQUE (username)
)

However - we often get “lock storms” where SOMEHOW, updates for individual 
users are causing all other updates to ‘lock’ on each other.
Eventually the storm abates (sometimes in seconds - sometimes in minutes)
See edited screen cap:
http://i.imgur.com/x4DdYaV.png
(PID 4899 just has a “where username = $1” cut off that you can’t see out to the 
right)
All updates are done using the username (unique constraint) instead of the 
primary key (the serial).
In retrospect, I suppose these queries should be using the primary key (9 year 
old code) but I am flummoxed as to how these updates can be causing 
table-level locks.
I’ve never been able to catch the lock information during one of these storms - 
but I assume it is a table level lock causing this.
Thoughts?  Is this just ‘normal’ behavior that I am not expecting? (because 
postgres doesn’t know that the username is a unique field)
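One way to catch that information the next time a storm hits is a pg_locks
self-join that pairs each waiting backend with a backend holding a granted
lock on the same object (a sketch; 9.2 has no built-in helper for this):

```sql
SELECT blocked.pid  AS blocked_pid,
       blocker.pid  AS blocker_pid,
       blocked.locktype,
       blocked.mode AS wanted_mode,
       blocker.mode AS held_mode
FROM pg_locks blocked
JOIN pg_locks blocker
  ON blocker.granted
 AND NOT blocked.granted
 AND blocker.locktype = blocked.locktype
 AND blocker.relation IS NOT DISTINCT FROM blocked.relation
 AND blocker.transactionid IS NOT DISTINCT FROM blocked.transactionid
 AND blocker.pid <> blocked.pid;
```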


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] json datatype and table bloat?

2013-11-04 Thread Adam Jelinek
Along the lines of the equality operator; I have run into issues trying to
pivot a table/result set with a json type due to what seemed to be no equality
operator.



On Nov 4, 2013 10:14 AM, "Merlin Moncure" wrote:

> On Fri, Nov 1, 2013 at 1:50 PM, Tom Lane wrote:
> > Gregory Haase writes:
> >> The json_data column is not accounted for in pg_stats:
> >
> > Ah!  I hadn't twigged to the fact that your bloat measurement approach
> > assumed you had pg_stats entries for all the columns.
> >
> >> So I'm not sure if I'd actually qualify this as a "bug", but it appears
> >> that there is no way to currently get stats on a json data type.
> >
> > ANALYZE currently punts on columns that don't have an equality operator,
> > which json does not.  There isn't that much in the way of stats that we
> > could collect, though I suppose we could still compute average datum width
> > and null fraction.
> >
> > I'm not sure whether there are plans to invent an equality operator for
> > json.
>
> IMNSO, this may not be a bug, but it's pretty close.  All base types
> should have equality operator as well as other supporting
> infrastructure that the database itself depends on (in/out and
> send/receive for example).  This is a pretty good example of why.
>
> merlin
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


-- 
Sent from Gmail Mobile


Re: [GENERAL] json datatype and table bloat?

2013-11-04 Thread Merlin Moncure
On Fri, Nov 1, 2013 at 1:50 PM, Tom Lane  wrote:
> Gregory Haase  writes:
>> The json_data column is not accounted for in pg_stats:
>
> Ah!  I hadn't twigged to the fact that your bloat measurement approach
> assumed you had pg_stats entries for all the columns.
>
>> So I'm not sure if I'd actually qualify this as a "bug", but it appears
>> that there is no way to currently get stats on a json data type.
>
> ANALYZE currently punts on columns that don't have an equality operator,
> which json does not.  There isn't that much in the way of stats that we
> could collect, though I suppose we could still compute average datum width
> and null fraction.
>
> I'm not sure whether there are plans to invent an equality operator for
> json.

IMNSO, this may not be a bug, but it's pretty close.  All base types
should have equality operator as well as other supporting
infrastructure that the database itself depends on (in/out and
send/receive for example).  This is a pretty good example of why.
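Until such an operator exists, one workaround is to compare the text
representation instead (a sketch; note that two semantically equal documents
differing in whitespace or key order will not match):

```sql
-- Hypothetical de-duplication on a json column via its text form
DELETE FROM t a
USING t b
WHERE a.ctid < b.ctid
  AND a.json_data::text = b.json_data::text;
```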

merlin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Where can I find the 9.3 beta1 rpm now ?

2013-11-04 Thread Massimo Ortensi

Hi everybody.
I downloaded and tested 9.3 beta 1 back in june and used for a while.
Today I upgraded to 9.3.1 but just discovered that the database cannot be 
directly used, so I need to switch back to the beta version in order to 
dump the data.

Is there any site I can download the old beta rpm's ?

Thanks in advance



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Re: changing port numbers so pgbouncer can read geoserver and postgres

2013-11-04 Thread Adrian Klaver

On 11/04/2013 01:25 AM, si24 wrote:

show the [databases] section in the pgbouncer.ini


[databases]
manifold = host=127.0.0.1 port=5432 dbname=manifold user=postgrest
password=123ert

I have tested now with both the ip address and using localhost in the host
section

I have also ended up swapping round the 2 port numbers so now postgres is
6432 and pgbouncer is 5432 the only thing is when I use the admin command to
check the databases for the pgbouncer i get this:

pgbouncer=# show databases;
   name    |   host    | port | database  | force_user | pool_size | reserve_pool
-----------+-----------+------+-----------+------------+-----------+--------------
 manifold  | 127.0.0.1 | 5432 | manifold  | postgrest  |        20 |            0
 pgbouncer |           | 5432 | pgbouncer | pgbouncer  |         2 |            0
 postgres  | 127.0.0.1 | 5432 | postgres  |            |        20 |            0

It seems that they are all using the same port.

I'm at my wits' end now and I need to get this working badly.
Maybe I'm not understanding it at all; if someone could explain it to me in
more layman's terms, that would hopefully help, otherwise I don't know what
to do.



Well the first thing would be to work on one thing at a time and  also 
maintain some consistency. For example in the above you say you have 
switched the Postgres port to 6432, yet in your [databases] example you 
show a port of 5432. So one of those statements is wrong.


In order to work with this we will need to see a more detailed 
description of what you are doing, example:



client application --> 5432 pgBouncer --> 6432 Postgres

and the associated config files

pgbouncer.ini pgBouncer

from postgresql.conf Postgres

listen_addresses
port

pg_hba.conf Postgres

also

Make sure when making any changes to reload the appropriate software 
after the changes.
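As a rough illustration of that layout (values are hypothetical; the point is
that pgBouncer listens on one port and connects out to the other):

```
; pgbouncer.ini -- pgBouncer listens on 5432
[databases]
manifold = host=127.0.0.1 port=6432 dbname=manifold

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 5432

# postgresql.conf -- Postgres listens on 6432
listen_addresses = 'localhost'
port = 6432
```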



I can't change to Linux, which might have made this job a lot easier; I have 
to use Windows.



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/changing-port-numbers-so-pgbouncer-can-read-geoserver-and-postgres-tp5776631p5776821.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.





--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgresql 9.0.13 core dump

2013-11-04 Thread anjanu
We figured out the problem. It was not FreeBSD jail related but libxml2. We
had it compiled with thread support. After we recompiled libxml with thread
support disabled, the core dumps disappeared.

Thanks everyone for your help.



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Postgresql-9-0-13-core-dump-tp5774532p5776850.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Detect bin directory

2013-11-04 Thread Tom Lane
Tiziano Valdemarin  writes:
> I know the data directory path can be found in pg_settings, but what about
> bin? Is its path stored anywhere?

It's intentionally not stored anywhere, but rather identified at runtime.

> I already googled the topic but found only negative answers. Is there any
> motive to not expose this information as for the data dir?

If you mean making it visible at SQL level, security might be a reason.
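One related option on the command line is pg_config, which is installed with
the server packages and reports the build-time paths (a sketch; it assumes
pg_config itself can be found):

```shell
# Print the bin directory if pg_config is available, otherwise say so.
if command -v pg_config >/dev/null 2>&1; then
    pg_config --bindir
else
    echo "pg_config not found on PATH"
fi
```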

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Detect bin directory

2013-11-04 Thread Tiziano Valdemarin
Hi, I deploy a Windows application which uses pg, and since I have hundreds
of customers I wrote an updater (updates are mainly monthly). The updater
manages pg upgrades too, through uninstall and reinstall, if there are major
server updates to do.

This process needs backup and restore, and I recently started doing it using
pg_dump and pg_restore instead of copying in and copying out.

Since these are Windows installations, and since I plan to port my app to Mac
and Linux, locating these executables is important for me, and I'd like a
simple way to query pg for its bin path.

I know the data directory path can be found in pg_settings, but what about
bin? Is its path stored anywhere?

I already googled the topic but found only negative answers. Is there any
motive to not expose this information as for the data dir?

Thanks in advance
Tiziano


Re: [GENERAL] Curious question about physical files to store database

2013-11-04 Thread Albe Laurenz
Patrick Dung wrote:
> As I have seen, some databases create or pre-allocate large physical files on 
> the file system as the backend of the database tablespace.
> 
> For Postgresql, I have observed that it created several files in the base and 
> global directory.
> 
> It may be by design, what is the pros and cons of this behavior?

You are a bit unspecific; are you talking about Oracle?

The disk layout is of course by design.

Oracle uses large container files and keeps its data in those.
As far as I know, this is to bypass file system functionality.
Oracle usually recommends direct I/O and bypasses file system
functionality (caching etc.) as much as possible.

I guess one reason for this is that, historically, file system
implementations incurred more overhead than they do now and had
all sorts of other problems with larger amounts of data.
These days, filesystems perform much better, so this is no longer
necessary, but Oracle is quite old software.

Another reason may be Oracle's quest to rule the world, and the
storage layer is part of that.  Lately, Oracle tries to get everybody
to use ASM, their storage layer, which completely bypasses
file system functionality.

PostgreSQL, on the other hand, does not have the resources or
intentions to write a better file system and actually uses
file system capabilities like caching to improve performance.

PostgreSQL keeps what Oracle calls segments as individual files
in a directory structure.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why release index relation lock

2013-11-04 Thread sivananda reddy
Hi kurt023,

>> 1. What's the rule of index relation locking?
>> 2. Releasing lock is for higher concurrency, but for INSERT/UPDATE, I did
>>    not find any operation that could get benefit from releasing index
>>    relation lock? Or to say: what will happen if we treat index relation
>>    lock like heap relation lock?

Answer: The reason we hold relation locks till end of transaction is mainly to
avoid transactional behavior surprises, eg an in-progress transaction
finding that a relation's schema has changed underneath it.  There is
no corresponding risk for indexes, because there is no such thing as
a schema-definition change for an index --- short of dropping it,
which we disallow without having AccessExclusiveLock on the parent rel.
However, there are *physical* changes to indexes, such as REINDEX or
ALTER INDEX TABLESPACE, which require locking out other accesses till
they finish.  So the point of locking indexes in use by a query is
just to interlock against those types of operations, and there's no
need to continue holding the lock once the query is done.

VACUUM might look like an exception, but it's not since it can't
run inside a transaction block.  There's no meaningful difference
between statement end and transaction end for it.
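One way to watch this from SQL is to query pg_locks inside an open
transaction, before and after running a statement against an indexed table (a
sketch):

```sql
-- Show the relation locks the current backend holds right now
SELECT c.relname, l.mode, l.granted
FROM pg_locks l
JOIN pg_class c ON c.oid = l.relation
WHERE l.pid = pg_backend_pid()
ORDER BY c.relname;
```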

Hope this helps.

With regards,
Sivananda Reddy.



On Sat, Nov 2, 2013 at 11:03 AM, DT  wrote:

> Hi,
>
>   For some operations, such as INSERT/UPDATE, heap relation lock is held
> until
> xact commit, but index relation lock is released after command end. Some
> other
> operations also hold index relation lock until xact commit, such as VACUUM.
>
>   I have read chapter 54. Index Access Method Interface Definition in
> documentation,
> and README in nbtree, but still can not figure out:
> 1. What's the rule of index relation locking?
> 2. Releasing lock is for higher concurrency, but for INSERT/UPDATE, i did
> not find
>any operation that could get benefit from releasing index relation
> lock? Or to
>say : what will happen if we treat index relation lock like heap
> relation lock?
>
> Thanks.
>


Re: [GENERAL] Explanantion on pgbouncer please

2013-11-04 Thread si24
Every time I change something, when I type "show servers" in the admin
console, it keeps showing 0 rows.

I have gone back and forth several times trying to get this thing sorted out,
but to no avail; I keep getting the same message.



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Explanantion-on-pgbouncer-please-tp5776515p5776822.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: changing port numbers so pgbouncer can read geoserver and postgres

2013-11-04 Thread si24
>show the [databases] section in the pgbouncer.ini 

[databases]
manifold = host=127.0.0.1 port=5432 dbname=manifold user=postgrest
password=123ert

I have tested now with both the ip address and using localhost in the host
section

I have also ended up swapping round the 2 port numbers so now postgres is
6432 and pgbouncer is 5432 the only thing is when I use the admin command to
check the databases for the pgbouncer i get this:

pgbouncer=# show databases;
   name    |   host    | port | database  | force_user | pool_size | reserve_pool
-----------+-----------+------+-----------+------------+-----------+--------------
 manifold  | 127.0.0.1 | 5432 | manifold  | postgrest  |        20 |            0
 pgbouncer |           | 5432 | pgbouncer | pgbouncer  |         2 |            0
 postgres  | 127.0.0.1 | 5432 | postgres  |            |        20 |            0

It seems that they are all using the same port.

I'm at my wits' end now and I need to get this working badly.
Maybe I'm not understanding it at all; if someone could explain it to me in
more layman's terms, that would hopefully help, otherwise I don't know what
to do.
I can't change to Linux, which might have made this job a lot easier; I have
to use Windows.



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/changing-port-numbers-so-pgbouncer-can-read-geoserver-and-postgres-tp5776631p5776821.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general