Re: [GENERAL] Slow running query with views...how to increase efficiency? with index?

2009-10-28 Thread fox7

Views do not help or hurt performance.  Views encapsulate complex queries.

If you have a slow running query, the usual way to get help is to post:
*) explain analyze results (most important)
*) the query (important)
*) interesting tables/indexes (somewhat important)
--

These are two example queries.
The first runs in 55 ms; the second, which uses views, takes 4500 ms...
:confused:

SELECT DISTINCT table_0.term1, table_1.term2 FROM TC table_0 , TO table_1
WHERE table_1.term1='c'  AND table_0.term2=table_1.term2 
 UNION 
SELECT DISTINCT table_0.term1, table_1.term1 FROM TC table_0 , TB table_1
WHERE table_0.term2=table_1.term1  AND table_1.term2='c' 

--

SELECT DISTINCT V2TC.term1 AS term1,V2TO.term2 AS term2
FROM V2TO,V2TC
WHERE V2TO.term2=V2TC.term2 AND V2TO.term1='c'


-Definition of tables and views involved-

-- Table: TC
CREATE TABLE TC(
  term1 character varying(100),
  term2 character varying(100)
  )
WITH (OIDS=FALSE);
ALTER TABLE TC OWNER TO postgres;

-- Index: TC_index1
CREATE INDEX TC_index1
  ON TC
  USING btree
  (term1);

-- Index: TC_index2
CREATE INDEX TC_index2
  ON TC
  USING btree
  (term2);
  
--TO and TB are more or less equal to TC

-- View: v2TC
CREATE OR REPLACE VIEW v2TC AS 
 SELECT DISTINCT TC.term1, TC.term2
   FROM TC
  ORDER BY TC.term1, TC.term2;

ALTER TABLE v2TC OWNER TO postgres;

-- View: v2TO
CREATE OR REPLACE VIEW v2TO AS ( SELECT DISTINCT TO.term1, TO.term2
   FROM TO
  ORDER BY TO.term1, TO.term2)
UNION 
 SELECT TB.term2 AS term1, TB.term1 AS term2
   FROM TB;

ALTER TABLE v2TO OWNER TO postgres;


-- 
View this message in context: 
http://www.nabble.com/Slow-running-query-with-views...how-to-increase-efficiency--with-index--tp26086104p26091310.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] still on joining array/inline values was and is: design, ref integrity and performance

2009-10-28 Thread Ivan Sergio Borgonovo
On Tue, 27 Oct 2009 10:54:06 +
Richard Huxton d...@archonet.com wrote:

  Association between email and password is just meant to build up
  a queue for mailing and there is no uniqueness constraint on
  (password, email) pair.
 
  create table pw_email(
password varchar(16),
email varchar(64)
  );
 
  create table pw_resource(
res int references ...
password varchar(16)
  );

  But I've to generate password/email couples first before filling
  pw_resource.
 
 The simplest thing would be to do them the other way around, but
 assuming you can't...

Trying to understand why I can't do it the other way around, I made
the constraints clearer to myself.

- There is no reason to have a password without an associated
  recipient.
- There is no reason to have a password without an associated
  resource.
- I don't want the same password for more than one resource.

So, to satisfy the first two requirements, it looks to me like the
table should be:


create table resources(
  res int primary key
  /* other characteristics of the resource... */
);

create table pw(
  res int references resources(res) on delete cascade,
  password varchar(16) not null,
  email varchar(64) not null
  /* Should I choose some unique constraint on the couples? which? */
);

To have the 3rd constraint I'd have a table:
create table pw_res(
  password varchar(16) primary key,
  res int references resources (res) on delete cascade
);
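As a sanity check of the ON DELETE CASCADE behaviour described here, a toy run of the same two tables in SQLite through Python's standard library (illustrative only, not from the thread; note SQLite enforces foreign keys only after PRAGMA foreign_keys = ON):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite: FK enforcement is off by default
conn.execute("CREATE TABLE resources (res INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE pw_res (
    password TEXT PRIMARY KEY,
    res INTEGER REFERENCES resources(res) ON DELETE CASCADE)""")
conn.execute("INSERT INTO resources VALUES (1)")
conn.execute("INSERT INTO pw_res VALUES ('s3cret', 1)")
conn.execute("DELETE FROM resources WHERE res = 1")  # cascades into pw_res
remaining = conn.execute("SELECT count(*) FROM pw_res").fetchone()[0]
print(remaining)  # 0
```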

This comes in handy for two reasons:
- it helps me to enforce the third constraint
- it makes it easier to find which resource is associated with a
  password, which will be a common search

But this introduces one more problem if I decide to delete a
password. This could happen for shared and non shared passwords.
I don't think it is something that may happen frequently... but it
may happen... and once you start to introduce mess in your data it
is hard to come back.

So a fk on pw.password may help... but once I have that fk I can't
insert res, pw, email without pw being in pw_res.

If I do the other way around inserting password(s) in pw_res I may
end up with passwords with no destination associated.

Surely I could wrap everything in a transaction so that if I can't
succeed in inserting the email/pw records everything will be rolled back.

But I can't see how to make it cleaner.

I can get emails and associate them with a resource and a password
in one sql statement if I can defer the constraint check on password.
The next statement would be used to fill pw_res with distinct values
from pw.

If I do it the other way around, in case I need different passwords for
different emails I'll have to:
- count the emails and fill pw_res with as many passwords as needed.
  I think I'll need generate_series.
- find a way to associate these passwords with emails

I don't know how to do the latter.
Still supposing I knew how to write the sql statement that will take
values('email1', 'email2'...)
as suggested by Merlin Moncure in a different thread I can see some
pros and cons of these approaches.

The first approach requires a distinct/group by that may be
expensive.
The second one requires that I keep all the emails in memory while the
first statement runs.

I'd think that if memory consumption starts to be an issue, it means
the number of emails has become too large to be loaded from an HTTP
request, so I could keep them in a file, load them into a temp table,
and so on...

Still, before I run any tests I need to find out how to associate
emails with passwords, considering that I'd like to pass the emails
inline in the SQL statement and I'll have the passwords in pw_res.
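For the association step asked about above, a minimal client-side sketch (the helper name and password scheme are hypothetical, assuming the passwords may be generated in the application rather than in SQL with generate_series):

```python
import secrets

def pair_emails_with_passwords(emails):
    """Give each email its own freshly generated password, capped at 16
    characters to match the varchar(16) columns in the schema."""
    return [(email, secrets.token_urlsafe(12)[:16]) for email in emails]

pairs = pair_emails_with_passwords(["email1@example.com", "email2@example.com"])
for email, pw in pairs:
    print(email, len(pw))
```

The distinct passwords could then be inserted into pw_res first, and the (res, password, email) rows into pw afterwards, all inside one transaction.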


-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it




[GENERAL] log slow queries and hints

2009-10-28 Thread Vasiliy G Tolstov
Hello. I'm new to PostgreSQL. Some time ago I turned on slow query
logging, but the log file contains more than just queries, e.g.:
Oct 28 13:03:44 selfip postgres[18072]: [5-1]
user=dbu_vase_1,db=db_vase_1 WARNING:  nonstandard use of \\ in a string
literal at character 90
Oct 28 13:03:44 selfip postgres[18072]: [5-2]
user=dbu_vase_1,db=db_vase_1 HINT:  Use the escape string syntax for
backslashes, e.g., E'\\'.


How can I disable these hints, or (I'm using Drupal with this database)
fix the queries?
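[Editor's note] The WARNING/HINT pair comes from PostgreSQL's escape_string_warning machinery rather than from slow-query logging. A hedged sketch of the usual knobs (not from this thread; verify the semantics before changing them on a live database):

```sql
-- Silence just the warning, per session or in postgresql.conf:
SET escape_string_warning = off;

-- Or make ordinary string literals treat backslashes literally, so the
-- warning never fires (this changes string semantics -- test Drupal first):
SET standard_conforming_strings = on;

-- The cleanest fix is what the HINT suggests: use E'\\' escape-string
-- syntax in the queries themselves.
```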

-- 
Vasiliy G Tolstov v.tols...@selfip.ru
Selfip.Ru




Re: [GENERAL] Procedure for feature requests?

2009-10-28 Thread Sam Mason
On Tue, Oct 27, 2009 at 06:53:55PM +, Tim Landscheidt wrote:
 You would have to adjust the result of (EXTRACT('epoch'
 FROM B) - EXTRACT('epoch' FROM A)) / EXTRACT('epoch' FROM
 C) by a factor of 31/30 (30/28? 28/30?) and then chop off
 timestamps after B with a WHERE clause.

I'm not sure where you're going with this.  The original idea was
generate_series for intervals wasn't it?  When you start moving from
intervals to specific periods of time (i.e. 1st Jan 1970) then you've
lost the reason for working with intervals and you may as well just be
working with dates or timestamps.  The purpose of intervals, as far as
I can tell, is so that you can use things like '1 month' and have it do
the right thing in our Gregorian calendar.

   JFTR: Hours can of course also be 3601 (or theoretically
 3599) seconds long, but not in PostgreSQL :-).

Depending on the standard you use, yes.  BTW, I believe up to two leap
seconds are allowed forward in UTC.  I believe there are also plans to
drop leap seconds and let time slowly drift out of alignment.  I think
the idea is that when it starts to matter to people, in a thousand years
or so, we'll be an interplanetary species anyway and tying time to earth
this way is thought to be silly.  It also unnecessarily complicates
things that don't really care, while not being good enough for things
that do care.

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] auto truncate/vacuum full

2009-10-28 Thread JC Praud
On Tue, Oct 27, 2009 at 6:31 PM, Alvaro Herrera
alvhe...@commandprompt.comwrote:

 JC Praud escribió:

  So my question are: can the autovacuum daemon perform vacuum full ? Or
  another internal postgres process ? Could it come from the TRUNCATE I run
  and canceled 4 days before ?

 No.  Autovacuum only issues commands that don't lock tables strongly.  I
 doubt this has anything to do with your old TRUNCATE either.  My guess
 is that somebody else ran TRUNCATE and forgot to tell you; or maybe an
 automatic external process (cron or some such).


I suspected that, too, and asked the ninjas we have here ;) I also checked
the vacuum cronjobs we have.
None of them performs a full vacuum on this table.
If that were the case, I should have seen the query in the pg_log as a slow
query, I guess?

Regards,



 --
 Alvaro Herrera
 http://www.CommandPrompt.com/
 PostgreSQL Replication, Consulting, Custom Development, 24x7 support




-- 
JC
Ph'nglui  mglw'nafh  Cthulhu  n'gah  Bill  R'lyeh  Wgah'nagl fhtagn!


Re: [GENERAL] Slow running query with views...how to increase efficiency? with index?

2009-10-28 Thread Alban Hertroys

On 28 Oct 2009, at 9:57, fox7 wrote:

Views do not help or hurt performance.  Views encapsulate complex  
queries.


If you have a slow running query, the usual way to get help is to  
post:

*) explain analyze results (most important)


You forgot to show us the most important part.


*) the query (important)
*) interesting tables/indexes (somewhat important)
--

These are two example queries.
The first runs in 55 ms; the second, which uses views, takes 4500 ms...
:confused:

SELECT DISTINCT table_0.term1, table_1.term2 FROM TC table_0 , TO  
table_1

WHERE table_1.term1='c'  AND table_0.term2=table_1.term2
UNION
SELECT DISTINCT table_0.term1, table_1.term1 FROM TC table_0 , TB  
table_1

WHERE table_0.term2=table_1.term1  AND table_1.term2='c'


If you're using a UNION you can drop the DISTINCTs, as the results of  
UNION are guaranteed to be distinct. If you don't want that, use UNION  
ALL instead.
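The de-duplication Alban describes is easy to demonstrate outside PostgreSQL too; a minimal sketch using SQLite via Python's standard library (the tables `a` and `b` are made up for illustration, not from the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a(x);
    CREATE TABLE b(x);
    INSERT INTO a VALUES (1), (2);
    INSERT INTO b VALUES (2), (2);
""")
# UNION removes duplicates (so an extra DISTINCT per branch is redundant);
# UNION ALL keeps every row from both branches.
union = conn.execute("SELECT x FROM a UNION SELECT x FROM b ORDER BY x").fetchall()
union_all = conn.execute("SELECT x FROM a UNION ALL SELECT x FROM b ORDER BY x").fetchall()
print(union)      # [(1,), (2,)]
print(union_all)  # [(1,), (2,), (2,), (2,)]
```

UNION ALL can also be noticeably faster, since it skips the sort or hash step needed for de-duplication.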



-Definition of tables and views involved-
-- View: v2TC
CREATE OR REPLACE VIEW v2TC AS
SELECT DISTINCT TC.term1, TC.term2
  FROM TC
 ORDER BY TC.term1, TC.term2;

-- View: v2TO
CREATE OR REPLACE VIEW v2TO AS ( SELECT DISTINCT TO.term1, TO.term2
  FROM TO
 ORDER BY TO.term1, TO.term2)
UNION
SELECT TB.term2 AS term1, TB.term1 AS term2
  FROM TB;


Do you absolutely need to order the output of your views? You could  
just order the results of your queries on your views instead. The way  
you do it now the database needs to order results always, even if the  
order doesn't actually matter to you. I suspect this is part of why  
your query is slow.


Besides that, the order of your V2TO view is going to be determined by  
the UNION clause anyway, as it needs to sort the results of the union  
to make them unique. The order by in the first subquery of that view  
can safely be removed I think.


Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.







Re: [GENERAL] how to identify outliers

2009-10-28 Thread Sam Mason
 Rhys A.D. Stewart wrote:
 I would like to remove the outliers in distance

As others have said; an outlier is normally a human call and not
something that's generally valid to do automatically.  The operator
would probably want to go in and look to see why it's that far out and
either fix the typo or do whatever else it takes to fix the problem.

On Tue, Oct 27, 2009 at 04:58:23PM -0700, John R Pierce wrote:
 you could probably do something with a standard deviation that is more 
 accurate for large sets than just tossing the 2 outliers.

I'd agree, stddev is probably best and the following should do something
reasonable for what the OP was asking:

  SELECT d.*
  FROM data d, (
    SELECT avg(distance), stddev(distance) FROM data) x
  WHERE abs(d.distance - x.avg) > x.stddev * 2;
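For illustration, the same two-standard-deviation cut-off can be sketched in plain Python (this uses the population standard deviation; SQL's stddev() is the sample form, which rarely changes the verdict):

```python
from statistics import mean, pstdev

def two_sigma_outliers(values):
    """Return the values lying more than 2 standard deviations from the mean."""
    avg, sd = mean(values), pstdev(values)
    return [v for v in values if abs(v - avg) > 2 * sd]

print(two_sigma_outliers([10, 11, 9, 10, 12, 10, 11, 100]))  # [100]
```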

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] how to identify outliers

2009-10-28 Thread Chris Spotts

 
 I'd agree, stddev is probably best and the following should do
 something
 reasonable for what the OP was asking:
 
   SELECT d.*
   FROM data d, (
 SELECT avg(distance), stddev(distance) FROM data) x
   WHERE abs(d.distance - x.avg) > x.stddev * 2;
 
[Spotts, Christopher] 
Statistically speaking, if your dataset has a fairly normal distribution the
following works well and is a *fairly* standard outlier definition.

First get a median function (there are a million of them on the net; google
for one).
You'll need one pass to get the median.
Divide your data set in half based on that median.  
Get the median of the first half (this is Q1).
Get the median of the second half (this is Q3).
Then your range for your good data should be from (Q1 - (Q3-Q1)*1.5) TO (Q3
+ (Q3-Q1)*1.5). 
Anything outside that range is an outlier.  Adjust the 1.5 up or down to be
more or less aggressive.
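The steps above can be sketched in a few lines of Python (a toy implementation of Tukey's fences, not PL/R; the convention used for splitting odd-length data in half is one of several in use):

```python
from statistics import median

def iqr_outliers(values, k=1.5):
    """Tukey's fences: flag values outside [Q1 - k*IQR, Q3 + k*IQR]."""
    s = sorted(values)
    mid = len(s) // 2
    q1 = median(s[:mid])                  # median of the lower half
    q3 = median(s[mid + (len(s) % 2):])   # median of the upper half
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo or v > hi]

print(iqr_outliers([10, 11, 9, 10, 12, 10, 11, 100]))  # [100]
```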

Using the avg formula for outliers is bad news.

I HIGHLY suggest installing PL/R for this, it makes it trivial.

Chris




Re: [GENERAL] Slow running query with views...how to increase efficiency? with index?

2009-10-28 Thread fox7



Alban Hertroys-3 wrote:
 
 On 28 Oct 2009, at 9:57, fox7 wrote:
 
 You forgot to show us the most important part.
 ---
 Do you absolutely need to order the output of your views? You could  
 just order the results of your queries on your views instead. The way  
 you do it now the database needs to order results always, even if the  
 order doesn't actually matter to you. I suspect this is part of why  
 your query is slow.
 
 Besides that, the order of your V2TO view is going to be determined by  
 the UNION clause anyway, as it needs to sort the results of the union  
 to make them unique. The order by in the first subquery of that view  
 can safely be removed I think.
 
 

What do you mean by analyze results?

I create views by means of jdbc...
For example I have created V2TO as:
CREATE VIEW v2TO AS (
SELECT DISTINCT TO.term1, TO.term2
FROM TO
UNION
SELECT TB.term2 AS term1, TB.term1 AS term2
FROM TB
);

The following is how the definition appears when selecting the view in
PostgreSQL...
 CREATE OR REPLACE VIEW v2TO AS ( SELECT DISTINCT TO.term1, TO.term2
   FROM TO
  ORDER BY TO.term1, TO.term2)
 UNION
 SELECT TB.term2 AS term1, TB.term1 AS term2
   FROM TB;

The problem is the execution time of the query with views... the first one
is fine!
-- 
View this message in context: 
http://www.nabble.com/Slow-running-query-with-views...how-to-increase-efficiency--with-index--tp26086104p26093967.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] Slow running query with views...how to increase efficiency? with index?

2009-10-28 Thread Alban Hertroys

On 28 Oct 2009, at 13:42, fox7 wrote:

What do you mean for analyze results?



http://www.postgresql.org/docs/8.4/interactive/sql-explain.html

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.







Re: [GENERAL] Slow running query with views...how to increase efficiency? with index?

2009-10-28 Thread A. Kretschmer
In response to fox7 :
 
 What do you mean for analyze results?

Try explain analyse select ...

 
 I create views by means of jdbc...
 For example I have created V2TO as:
 CREATE VIEW v2TO AS ( 
 SELECT DISTINCT TO.term1, TO.term2
 FROM TO
 UNION
 SELECT TB.term2 AS term1, TB.term1 AS term2
 FROM TB;
 )

In your case: explain analyse select * from v2to where ...

Please read:
http://www.postgresql.org/docs/8.4/interactive/using-explain.html


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99



Re: [GENERAL] Slow running query with views...how to increase efficiency? with index?

2009-10-28 Thread fox7


Alban Hertroys-3 wrote:
 
 What do you mean for analyze results?
 http://www.postgresql.org/docs/8.4/interactive/sql-explain.html
 
thanks...

Now I'll try it and put the results here...

However I'm using PostgreSQL 8.3, not 8.4...
...but I don't think that's the problem!
-- 
View this message in context: 
http://www.nabble.com/Slow-running-query-with-views...how-to-increase-efficiency--with-index--tp26086104p26094297.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




[GENERAL] Postgres alpha testing docs and general test packs

2009-10-28 Thread Thom Brown
Hi,

Are there any test guides/plans generated for alpha releases, or are
such things only distributed to other developers?  I've seen postings
which mention what the new features are, and links to documentation
and other postings as to what it can do, but no single page outlining
the changes together.

And are there any test packs which can be run against each release to
ensure everything still functions as normal?  What I mean is it would
run through individual tests, like performing an update, checking
whether the update has applied, and returning pass if successful, and
fail otherwise.  Such tests should be inherently massive to match the
feature set of PostgreSQL, but could be built up over time if it
doesn't already exist.  Would there be any value in such a thing, or
is this generally not really a problem that needs solving?

Obviously real-world testing is needed to see how it works in a
realistic scenario, so I'm not suggesting that's any less important.

Thanks

Thom



Re: [GENERAL] Postgres alpha testing docs and general test packs

2009-10-28 Thread Grzegorz Jaśkiewicz
have you seen that one:
http://it.toolbox.com/blogs/database-soup/alpha2-is-out-and-we-need-you-to-test-it-35032?rss=1
?


Re: [GENERAL] Postgres alpha testing docs and general test packs

2009-10-28 Thread Thom Brown
2009/10/28 Grzegorz Jaśkiewicz gryz...@gmail.com:
 have you seen that one:
 http://it.toolbox.com/blogs/database-soup/alpha2-is-out-and-we-need-you-to-test-it-35032?rss=1
 ?

That's partly why I was asking.  It mentions the areas where the
changes have occurred, but not necessarily the changes themselves.  An
example of this is hstore.  There are mentions of improvements and
issues being eliminated, but these haven't been specified, not even in
the documentation.  I'm not sure how to test whatever change has gone
in.  I could open 8.4 and 8.5 documentation for that same page and
flip between the two until I find a difference, but even if I do that
and find changes, I doubt that covers what the fixes are.  I'd want
scenarios that were problematic in 8.4 that are not so in 8.5.

Entirely new features are easier to deal with though.  I still would,
however, want something like a detailed version of Josh's post which
breaks down where the changes have occurred.  It seems quite scattered
and unclear at the moment.

Thom



Re: [GENERAL] Slow running query with views...how to increase efficiency? with index?

2009-10-28 Thread fox7

I've copied the results produced by the EXPLAIN ANALYZE instruction for
the two queries...

--Query without views-
Unique  (cost=406.58..407.13 rows=73 width=114) (actual time=1.262..1.448
rows=40 loops=1)
  -  Sort  (cost=406.58..406.77 rows=73 width=114) (actual
time=1.257..1.313 rows=40 loops=1)
Sort Key: table_0.term1, table_1.term2
Sort Method:  quicksort  Memory: 23kB
-  Append  (cost=302.63..404.32 rows=73 width=114) (actual
time=0.747..1.147 rows=40 loops=1)
  -  Unique  (cost=302.63..302.99 rows=49 width=114) (actual
time=0.742..0.933 rows=40 loops=1)
-  Sort  (cost=302.63..302.75 rows=49 width=114)
(actual time=0.737..0.795 rows=40 loops=1)
  Sort Key: table_0.term1, table_1.term2
  Sort Method:  quicksort  Memory: 23kB
  -  Nested Loop  (cost=0.00..301.25 rows=49
width=114) (actual time=0.088..0.477 rows=40 loops=1)
-  Index Scan using TO_index1 on TO
table_1  (cost=0.00..15.81 rows=3 width=52) (actual time=0.046..0.050 rows=2
loops=1)
  Index Cond: ((term1)::text =
'c'::text)
-  Index Scan using TC_index2 on TC
table_0  (cost=0.00..94.85 rows=24 width=111) (actual time=0.095..0.144
rows=20 loops=2)
  Index Cond: ((table_0.term2)::text =
(table_1.term2)::text)
  -  Unique  (cost=100.42..100.60 rows=24 width=110) (actual
time=0.095..0.095 rows=0 loops=1)
-  Sort  (cost=100.42..100.48 rows=24 width=110)
(actual time=0.091..0.091 rows=0 loops=1)
  Sort Key: table_0.term1, table_1.term1
  Sort Method:  quicksort  Memory: 17kB
  -  Nested Loop  (cost=0.00..99.87 rows=24
width=110) (actual time=0.060..0.060 rows=0 loops=1)
-  Seq Scan on TB table_1 
(cost=0.00..4.72 rows=1 width=48) (actual time=0.054..0.054 rows=0 loops=1)
  Filter: ((term2)::text = 'c'::text)
-  Index Scan using TC_index2 on TC
table_0  (cost=0.00..94.85 rows=24 width=111) (never executed)
  Index Cond: ((table_0.term2)::text =
(table_1.term1)::text)
Total runtime: 1.641 ms


--Query with views-
  -  Sort  (cost=40863.02..40865.50 rows=994 width=436) (actual
time=5142.974..5143.026 rows=40 loops=1)
Sort Key: TC.term1, v2TO.term2
Sort Method:  quicksort  Memory: 23kB
-  Hash Join  (cost=38857.33..40813.53 rows=994 width=436) (actual
time=3547.557..5142.853 rows=40 loops=1)
  Hash Cond: ((TC.term2)::text = (v2TO.term2)::text)
  -  Unique  (cost=38837.21..40099.83 rows=49719 width=111)
(actual time=3546.697..4869.647 rows=168340 loops=1)
-  Sort  (cost=38837.21..39258.08 rows=168350
width=111) (actual time=3546.691..4363.092 rows=168350 loops=1)
  Sort Key: TC.term1, TC.term2
  Sort Method:  external merge  Disk: 21032kB
  -  Seq Scan on TC  (cost=0.00..4658.50
rows=168350 width=111) (actual time=0.010..294.459 rows=168350 loops=1)
  -  Hash  (cost=20.07..20.07 rows=4 width=218) (actual
time=0.219..0.219 rows=2 loops=1)
-  Subquery Scan v2TO  (cost=20.00..20.07 rows=4
width=218) (actual time=0.192..0.207 rows=2 loops=1)
  -  Unique  (cost=20.00..20.03 rows=4 width=108)
(actual time=0.186..0.195 rows=2 loops=1)
-  Sort  (cost=20.00..20.01 rows=4
width=108) (actual time=0.182..0.185 rows=2 loops=1)
  Sort Key: TO.term1, TO.term2
  Sort Method:  quicksort  Memory:
17kB
  -  Append  (cost=15.17..19.96 rows=4
width=108) (actual time=0.094..0.169 rows=2 loops=1)
-  Unique  (cost=15.17..15.19
rows=3 width=108) (actual time=0.090..0.100 rows=2 loops=1)
  -  Sort 
(cost=15.17..15.18 rows=3 width=108) (actual time=0.086..0.088 rows=2
loops=1)
Sort Key: TO.term2
Sort Method: 
quicksort  Memory: 17kB
-  Bitmap Heap
Scan on TO  (cost=4.28..15.15 rows=3 width=108) (actual time=0.064..0.067
rows=2 loops=1)
  Recheck Cond:
((term1)::text = 'c'::text)
  -  Bitmap
Index Scan on TO_index1  (cost=0.00..4.28 rows=3 width=0) (actual
time=0.052..0.052 rows=2 loops=1)

Re: [GENERAL] Postgres alpha testing docs and general test packs

2009-10-28 Thread Adrian Klaver
On Wednesday 28 October 2009 6:46:13 am Thom Brown wrote:
 2009/10/28 Grzegorz Jaśkiewicz gryz...@gmail.com:
  have you seen that one:
  http://it.toolbox.com/blogs/database-soup/alpha2-is-out-and-we-need-you-t
 o-test-it-35032?rss=1 ?

 That's partly why I was asking.  It mentions the areas where the
 changes have occurred, but not necessarily the changes themselves.  An
 example of this is hstore.  There are mentions of improvements and
 issues being eliminated, but these haven't been specified, not even in
 the documentation.  I'm not sure how to test whatever change has gone
 in.  I could open 8.4 and 8.5 documentation for that same page and
 flip between the two until I find a difference, but even if I do that
 and find changes, I doubt that covers what the fixes are.  I'd want
 scenarios that were problematic in 8.4 that are not so in 8.5.

 Entirely new features are easier to deal with though.  I still would,
 however, want something like a detailed version of Josh's post which
 breaks down where the changes have occurred.  It seems quite scattered
 and unclear at the moment.

 Thom

http://developer.postgresql.org/pgdocs/postgres/release-8-5.html

-- 
Adrian Klaver
akla...@comcast.net



Re: [GENERAL] Postgres alpha testing docs and general test packs

2009-10-28 Thread Thom Brown
2009/10/28 Adrian Klaver akla...@comcast.net:
 Entirely new features are easier to deal with though.  I still would,
 however, want something like a detailed version of Josh's post which
 breaks down where the changes have occurred.  It seems quite scattered
 and unclear at the moment.

 Thom

 http://developer.postgresql.org/pgdocs/postgres/release-8-5.html

Thanks Adrian.  I just wasn't looking hard enough obviously :)  That
list still doesn't appear to be explicit enough though, as we have
"Multiple improvements in contrib/hstore, including raising limits on
keys and values."  What exactly is meant by limit, what was this limit
before, and what has it been raised to?

Similarly: "Fix encoding handling in binary input function of xml
type."  What was the problem before?

And: "Allow the collection of statistics on sequences."  How would
your average end-user see whether these statistics are being collected
on sequences?  And are these statistics actually used anywhere yet?

I'm not really asking for the answer to those questions.  I'm pointing
out that it isn't clear (at least to me) how to determine what exactly
has been fixed in order to test it.  This doesn't apply to everything
listed, as some of it is quite clear, like "pg_dump/pg_restore --clean
now drops large objects".

Thanks

Thom



Re: [GENERAL] still on joining array/inline values was and is: design, ref integrity and performance

2009-10-28 Thread Peter Hunsberger
On Wed, Oct 28, 2009 at 4:50 AM, Ivan Sergio Borgonovo
m...@webthatworks.it wrote:

 To have the 3rd constraint I'd have a table:
 create table pw_res(
  password varchar(16) primary key,
  res int references resources (res) on delete cascade
 );

 This comes handy for 2 reasons:
 - it helps me to enforce the third constraint
 - it makes it easier to find which resource is associated with a
  password that will be a common search

 But this introduces one more problem if I decide to delete a
 password. This could happen for shared and non shared passwords.
 I don't think it is something that may happen frequently... but it
 may happen... and once you start to introduce mess in your data it
 is hard to come back.

Not sure why this is a mess?  Cascaded deletes can clean up the data.
If you need it for some business reason add an active flag.


 So a fk on pw.password may help... but... once I've that fk I can't
 insert res,pw,email without pw baing in pw_res.

 If I do the other way around inserting password(s) in pw_res I may
 end up with passwords with no destination associated.

 Surely I could wrap everything in a transaction so that if I can't
 succede to insert email/pw records everything will be rolled back.

 But I can't see how to make it cleaner.

At some point you need business logic to enforce your business
requirements.  There is little point in trying to enforce business
constraints in the database.  Stick to pure low level data integrity
for the database and keep the business constraints at some higher
level.


 I can get emails and associate them with a resource and a password
 in one sql statement if I can defer the constraint check on password.
 The next statement would be used to fill pw_res with distinct values
 from pw.

 If I do the other way around in case I need different passwords for
 different emails I'll have to:
 - count the emails and fill pw_res with as many password as needed.
  I think I'll need generate_series.
 - find a way to associate these passwords with emails

 I don't know how to do the later.
 Still supposing I knew how to write the sql statement that will take
 values('email1', 'email2'...)
 as suggested by Merlin Moncure in a different thread I can see some
 pros and cons of these approaches.

 The first approach requires a distinct/group by that may be
 expensive.
 The second one requires I keep in memory all the emails while the
 first statement run.

Unless you're dealing with 100,000's of these things I think you're
engaging in a process of premature optimization.  Group by can work
efficiently over millions of rows.


 I'd think that if memory consumption start to be an issue, that
 means that the number of emails start to be too large to be loaded
 from an HTTP request, so I could have it on file and load it in a
 temp table and so on...

 Still before I make any test I need to find how to associate emails
 with passwords considering that I'd like to pass email inline the
 SQL statement and I'll have the passwords in pw_res.


Do the simplest thing possible.  Get it working, then see if you have
any new problems you need to solve.  Every issue you've described so
far is database design 101 and should present no real problem.  I
think you're agonizing over nothing...

-- 
Peter Hunsberger



Re: [GENERAL] auto truncate/vacuum full

2009-10-28 Thread JC Praud
Sorry, I got a bit lost in the thread. BTW, Thanks for all the answers :)

On Wed, Oct 28, 2009 at 12:29 AM, Alvaro Herrera alvhe...@commandprompt.com
 wrote:

 JC Praud escribió:

  - Last night the database locked. pg_log full of messages about insert
 into
  the mother table waiting for a lock.

 This bit does not make much sense to me.  A transaction waiting will not
 show up in the log.  Were they cancelled?  Can you paste an extract from
 the log?


No, the transactions were not cancelled. All I saw in the pg_log is this
(taken at the time the lock was lifted):

2009-10-27 05:39:19 CET dbuser dbase 10.33.10.133 LOG:  process 14866 still
waiting for RowExclusiveLock on relation 26683 of database 23806 after
5000.271 ms
2009-10-27 05:39:19 CET dbuser dbase 10.33.10.133 STATEMENT:  INSERT INTO
log_user (id_user, action, comment, date, id_session, ip_addr) VALUES (...)
2009-10-27 05:39:19 CET dbuser dbase 10.33.10.133 LOG:  process 14048 still
waiting for RowExclusiveLock on relation 26683 of database 23806 after
5000.409 ms
2009-10-27 05:39:19 CET dbuser dbase 10.33.10.133 STATEMENT:  INSERT INTO
log_user (id_user, action, comment, date, id_session, ip_addr) VALUES (...)
2009-10-27 05:39:50 CET dbuser dbase 10.33.10.41 LOG:  duration: 62103.487
ms  statement: SELECT put_root_files_into_ag_scheduler_delete_files( 8 , 50
);
2009-10-27 05:39:57 CET dbuser dbase 10.33.10.133 LOG:  process 14797 still
waiting for RowExclusiveLock on relation 26683 of database 23806 after
5000.362 ms
2009-10-27 05:39:57 CET dbuser dbase 10.33.10.133 STATEMENT:  INSERT INTO
log_user (id_user, action, comment, date, id_session, ip_addr) VALUES (...)
2009-10-27 05:40:09 CET dbuser dbase 10.33.10.33 LOG:  process 18964
acquired RowExclusiveLock on relation 26683 of database 23806 after
3572619.123 ms
2009-10-27 05:40:09 CET dbuser dbase 10.33.10.33 STATEMENT:  INSERT INTO
log_user (id_user, action, comment, date, id_session, ip_addr) VALUES (...)
2009-10-27 05:40:09 CET dbuser dbase 10.33.10.133 LOG:  process 24284
acquired RowExclusiveLock on relation 26683 of database 23806 after
3572193.509 ms
2009-10-27 05:40:09 CET dbuser dbase 10.33.10.133 STATEMENT:  INSERT INTO
log_user (id_user, action, comment, date, id_session, ip_addr) VALUES (...)
2009-10-27 05:40:09 CET dbuser dbase 10.33.10.133 LOG:  process 19497
acquired RowExclusiveLock on relation 26683 of database 23806 after
3572005.173 ms
2009-10-27 05:40:09 CET dbuser dbase 10.33.10.133 STATEMENT:  INSERT INTO
log_user (id_user, action, comment, date, id_session, ip_addr) VALUES (...)

The log_user table is the mother table with rules redirecting the inserts to
the monthly partitions.


  - After about 40min, the waiting queries acquired their locks and ran.

 Do you have a vacuum in cron or something like that?  As Tom says, if it
 had been autovacuum, it should have been cancelled automatically (else
 we've got a bug); but something invoking vacuum externally wouldn't
 have, so what you describe is what we would expect.


I was not monitoring the database at the time; all I saw is that
pg_stat_user_tables dates the last_autovacuum on the partition at
2009-10-27 05:40:09.611129+01 that day.
We have no cron jobs running vacuum on these monthly partitions.
No sign of cancelled vacuums either.


Regards,





 --
 Alvaro Herrera
 http://www.CommandPrompt.com/
 PostgreSQL Replication, Consulting, Custom Development, 24x7 support




-- 
JC
Ph'nglui  mglw'nafh  Cthulhu  n'gah  Bill  R'lyeh  Wgah'nagl fhtagn!


[GENERAL] could not find array type for data type character varying[]

2009-10-28 Thread Viktor Rosenfeld
Hi,

I'm trying to aggregate a list of table attributes into an array.

The actual code looks something like this:

  SELECT
node_ref AS id,
array_agg(DISTINCT ARRAY[namespace, name, value]) as annotations
  ...
  GROUP BY id;

I guess the minimal example that reproduces the error is:

  annis= select array_agg(array['a'::varchar, 'b', 'c']);
  ERROR:  could not find array type for data type character varying[]

Why doesn't this work?

Cheers,
Viktor



Re: [GENERAL] could not find array type for data type character varying[]

2009-10-28 Thread Tom Lane
Viktor Rosenfeld listuse...@googlemail.com writes:
   annis= select array_agg(array['a'::varchar, 'b', 'c']);
   ERROR:  could not find array type for data type character varying[]

 Why doesn't this work?

The output of array_agg would have to be an array whose elements
are array-of-varchar.  Which is a datatype we don't have.
(2-D array is a different concept...)

regards, tom lane



Re: [GENERAL] auto truncate/vacuum full

2009-10-28 Thread Alvaro Herrera
JC Praud escribió:

 On Wed, Oct 28, 2009 at 12:29 AM, Alvaro Herrera alvhe...@commandprompt.com

  This bit does not make much sense to me.  A transaction waiting will not
  show up in the log.  Were they cancelled?  Can you paste an extract from
  the log?
 
 No, the transactions were not cancelled. All I saw in the pg_log is this
 (taken at the time the lock was lifted):
 
 2009-10-27 05:39:19 CET dbuser dbase 10.33.10.133 LOG:  process 14866 still
 waiting for RowExclusiveLock on relation 26683 of database 23806 after
 5000.271 ms

Oh, you have log_lock_waits enabled.  It makes plenty of sense now,
thanks.
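
For reference, the behavior is controlled by settings along these lines
(parameter names are real; the values shown are the usual defaults, so
check your version's docs):

```
# postgresql.conf -- sketch
log_lock_waits = on        # log a message when a lock wait exceeds deadlock_timeout
deadlock_timeout = 1s      # deadlock-check interval; doubles as the lock-wait log threshold
```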

-- 
Alvaro Herrera        http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [GENERAL] could not find array type for data type character varying[]

2009-10-28 Thread Sam Mason
On Wed, Oct 28, 2009 at 04:17:32PM +0100, Viktor Rosenfeld wrote:
 I'm trying to aggregate a list of table attributes into an array.

I'd suggest using a tuple; arrays are for things where each element
means the same thing.  I'd guess you care about the substructure
(i.e. each element has a namespace, a name and a value), and hence
using an array in the first place seems wrong.  Maybe something like:

  CREATE TYPE foo AS ( namespace TEXT, name TEXT, value TEXT );
  SELECT id, array_agg(row(a,b,c)::foo)
  FROM data
  GROUP BY id;

 Why doesn't this work?

Arrays of arrays aren't directly supported; you currently have to put
them into a tuple first.  Something like:

  CREATE TYPE bar AS ( a TEXT[] );
  SELECT array_agg(row(array['a'])::bar);

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] could not find array type for data type character varying[]

2009-10-28 Thread Merlin Moncure
On Wed, Oct 28, 2009 at 11:17 AM, Viktor Rosenfeld
listuse...@googlemail.com wrote:
 Hi,

 I'm trying to aggregate a list of table attributes into an array.

 The actual code looks something like this:

  SELECT
    node_ref AS id,
    array_agg(DISTINCT ARRAY[namespace, name, value]) as annotations
  ...
  GROUP BY id;

 I guess the minimal example that reproduces the error is:

  annis= select array_agg(array['a'::varchar, 'b', 'c']);
  ERROR:  could not find array type for data type character varying[]

 Why doesn't this work?

There are no arrays of arrays.  There are, however, multi-dimensional
arrays and arrays of composite types (which may contain arrays).
Pick your poison.  Also, prefer array() to array_agg() if you are not
truly aggregating (hard to tell in this query).

your problem:
postgres=# select array(select current_schemas(true));
ERROR:  could not find array type for datatype name[]

possible fix:
postgres=# select array(select row(current_schemas(true)));
   ?column?
---
 {(\{pg_catalog,public,dblink,pgcrypto}\)}

another way:
postgres=# select (v[1]).n[1] from (select array(select
row(current_schemas(true))::a) as v) q;
 n

 pg_catalog
(1 row)

head spinning yet? :-)

merlin



Re: [GENERAL] still on joining array/inline values was and is: design, ref integrity and performance

2009-10-28 Thread Ivan Sergio Borgonovo
On Wed, 28 Oct 2009 10:12:19 -0500
Peter Hunsberger peter.hunsber...@gmail.com wrote:

  The first approach requires a distinct/group by that may be
  expensive.
  The second one requires I keep in memory all the emails while the
  first statement runs.

 Unless you're dealing with 100,000's of these things I think you're
 engaging in a process of premature optimization.  Group by can
 work efficiently over millions of rows.

We may occasionally get into the range of half that number, but not
feeding emails directly from an HTTP request.
Still, the number of passwords generated in one run may be in the
range of 50K. But then I could calmly wait 2 or 3 seconds.
In some very rough tests on a box similar to the one I'll have to
use, it takes a few milliseconds on an unindexed table.

 Do the simplest thing possible.  Get it working, then see if you
 have any new problems you need to solve.  Every issue you've
 described so far is database design 101 and should present no real
 problem.  I think you're agonizing over nothing...

That's always good advice. Sometimes you're just out for moral
support.

thanks

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it




Re: [GENERAL] auto truncate/vacuum full

2009-10-28 Thread Jaime Casanova
On Tue, Oct 27, 2009 at 6:29 PM, Alvaro Herrera
alvhe...@commandprompt.com wrote:

 Do you have a vacuum in cron or something like that?  As Tom says, if it
 had been autovacuum, it should have been cancelled automatically (else
 we've got a bug); but something invoking vacuum externally wouldn't
 have, so what you describe is what we would expect.


Then we have a bug (at least in 8.3; I haven't tried 8.4)... I saw
this a month ago: an autovacuum blocking a lot of concurrent updates
and selects... once I pg_cancel_backend()'d the autovacuum process,
the other ones started to move.
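
For what it's worth, a sketch of how to spot and kill such a worker
(8.3-era column names procpid and current_query; later releases
renamed them to pid and query):

```sql
-- Find autovacuum workers currently running (8.3-era column names)
SELECT procpid, current_query
FROM pg_stat_activity
WHERE current_query LIKE 'autovacuum:%';

-- Cancel one by hand; substitute the procpid reported above
SELECT pg_cancel_backend(12345);
```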


-- 
Atentamente,
Jaime Casanova
Soporte y capacitación de PostgreSQL
Asesoría y desarrollo de sistemas
Guayaquil - Ecuador
Cel. +59387171157



Re: [GENERAL] auto truncate/vacuum full

2009-10-28 Thread Tom Lane
Jaime Casanova jcasa...@systemguards.com.ec writes:
 On Tue, Oct 27, 2009 at 6:29 PM, Alvaro Herrera
 alvhe...@commandprompt.com wrote:
 Do you have a vacuum in cron or something like that?  As Tom says, if it
 had been autovacuum, it should have been cancelled automatically (else
 we've got a bug); but something invoking vacuum externally wouldn't
 have, so what you describe is what we would expect.

 then we have a bug (at least in 8.3; I haven't tried 8.4)... I saw
 this a month ago: an autovacuum blocking a lot of concurrent updates
 and selects... once I pg_cancel_backend()'d the autovacuum process,
 the other ones started to move

Hmm ... actually there is one case where autovac won't allow itself
to be kicked off locks, which is if it's performing an anti-wraparound
vacuum.  Perhaps anti-wraparound vacuums should skip trying to truncate
relations?

I'm not convinced that that explains Jaime's report though.  You'd
expect AW vacuums to only happen on mostly-unused tables, not ones
that are sufficiently central to an application to result in blocking
a lot of queries ...
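
To check that theory, a standard catalog query shows which tables are
closest to a forced anti-wraparound vacuum (200 million is the default
autovacuum_freeze_max_age):

```sql
-- Tables ordered by transaction-ID age; the oldest are the next
-- candidates for an anti-wraparound (freeze) vacuum.
SELECT relname, age(relfrozenxid) AS xid_age
FROM pg_class
WHERE relkind = 'r'
ORDER BY age(relfrozenxid) DESC
LIMIT 10;
```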

regards, tom lane



[GENERAL] Forms generator ?

2009-10-28 Thread Stuart Adams


  Looking for a forms generator for a web-based UI for
  entering/modifying/viewing a table's records.

 Any recommendations ???

Thanks,
   Stuart



[GENERAL] Email regular expression

2009-10-28 Thread Xai
I want to create a type for an email field, but I'm not good with
regexes. Can someone help me?



[GENERAL] Has anyone seen this while running pg_dumpall?

2009-10-28 Thread Penrod, John
We are running version:

edb=# select version();
   version
-
 EnterpriseDB 8.3.0.106 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 
4.1.0
(1 row)

Has anyone seen this while running pg_dumpall?

[enterpris...@sjmemedbt1 cluster]$ pg_dumpall > full_bu_sjmemedbt1_091028a.dmp
pg_dump: NOTICE:  [HINTS] Unrecognized Const type.
pg_dump: NOTICE:  [HINTS] Unrecognized Const type.
[enterpris...@sjmemedbt1 cluster]$

The output into the dump file appears to be clean.




John J. Penrod, OCP
Oracle/EnterpriseDB Database Administrator
St. Jude Children's Research Hospital
262 Danny Thomas Place, MS 0574
Memphis, TN  38105
Phone: (901) 595-4941
FAX: (901) 595-2963
john.pen...@stjude.org




  
Email Disclaimer: www.stjude.org/emaildisclaimer


Re: [GENERAL] PHP + PDO + PGPOOL = Segmentation fault

2009-10-28 Thread VladK

This script is executed by cron, and the segmentation fault is generated by the PHP script.

Richard Huxton wrote:
 
 PHP doesn't really do connection pools anyway. You would have ended up
 with one connection for each Apache backend.
 
 What fails with segmentation fault - Apache+PHP, pgpool or PostgreSQL?
 
   Richard Huxton
   Archonet Ltd
 
 

-- 
View this message in context: 
http://www.nabble.com/PHP-%2B-PDO-%2B-PGPOOL-%3D-Segmentation-fault-tp26071405p26088267.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] Forms generator ?

2009-10-28 Thread Thomas Kellerer

Stuart Adams wrote on 28.10.2009 17:59:


 Looking for a forms generator for a web-based UI for
 entering/modifying/viewing a table's records.

 Any recommendations ???

Thanks,
   Stuart


I haven't used this (yet), but once:Radix seems to be what you are looking for:

http://www.oncetechnologies.com/newonceradix/index.html

Thomas






Re: [GENERAL] Has anyone seen this while running pg_dumpall?

2009-10-28 Thread Alvaro Herrera
Penrod, John wrote:
 We are running version:
 
 edb=# select version();
version
 -
  EnterpriseDB 8.3.0.106 on x86_64-unknown-linux-gnu, compiled by GCC gcc 
 (GCC) 4.1.0
 (1 row)
 
 Has anyone seen this while running pg_dumpall?
 
 [enterpris...@sjmemedbt1 cluster]$ pg_dumpall > full_bu_sjmemedbt1_091028a.dmp
 pg_dump: NOTICE:  [HINTS] Unrecognized Const type.
 pg_dump: NOTICE:  [HINTS] Unrecognized Const type.

This is not a Postgres error message.  Ask EDB.

-- 
Alvaro Herrera        http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] Forms generator ?

2009-10-28 Thread Ries van Twisk


Hi Stuart,

I have seen some form generators, but for one reason or another they
always only partially worked, or never fit my dataset, because more
often than not they assume very simple relations.

Nowadays I tend to use Adobe Flex for a lot of my work (there are some
form generators for it :) ) and make simple RPC services or use AMF3
(Java and PHP have some nice frameworks; amfphp and BlazeDS come to
mind). If you are into buying a commercial subscription then WebORB is
worth a look, although I am not sure anymore whether it has a form
generator...


Ries



On Oct 28, 2009, at 11:59 AM, Stuart Adams wrote:



Looking for a forms generator for a web-based UI for
entering/modifying/viewing a table's records.

Any recommendations ???

Thanks,
  Stuart








Re: [GENERAL] Has anyone seen this while running pg_dumpall?

2009-10-28 Thread Tom Lane
Penrod, John john.pen...@stjude.org writes:
 edb=# select version();
version
 -
  EnterpriseDB 8.3.0.106 on x86_64-unknown-linux-gnu, compiled by GCC gcc 
 (GCC) 4.1.0
 (1 row)

 Has anyone seen this while running pg_dumpall?

 [enterpris...@sjmemedbt1 cluster]$ pg_dumpall > full_bu_sjmemedbt1_091028a.dmp
 pg_dump: NOTICE:  [HINTS] Unrecognized Const type.
 pg_dump: NOTICE:  [HINTS] Unrecognized Const type.

Not around here, because there is no such message in the standard
Postgres sources.  Presumably it's coming from some EDB-specific code.
I suggest you take it up with EDB's support.

regards, tom lane



Re: [GENERAL] Has anyone seen this while running pg_dumpall?

2009-10-28 Thread Penrod, John
Thank you.  I will do that.

John J. Penrod, OCP
Oracle/EnterpriseDB Database Administrator
St. Jude Children's Research Hospital
262 Danny Thomas Place, MS 0574
Memphis, TN  38105
Phone: (901) 595-4941
FAX: (901) 595-2963
john.pen...@stjude.org

-Original Message-
From: Alvaro Herrera [mailto:alvhe...@commandprompt.com]
Sent: Wednesday, October 28, 2009 12:55 PM
To: Penrod, John
Cc: 'pgsql-general@postgresql.org'; 'pgsql-ad...@postgresql.org'
Subject: Re: [GENERAL] Has anyone seen this while running pg_dumpall?

Penrod, John wrote:
 We are running version:

 edb=# select version();
version
 -
  EnterpriseDB 8.3.0.106 on x86_64-unknown-linux-gnu, compiled by GCC gcc 
 (GCC) 4.1.0
 (1 row)

 Has anyone seen this while running pg_dumpall?

 [enterpris...@sjmemedbt1 cluster]$ pg_dumpall > full_bu_sjmemedbt1_091028a.dmp
 pg_dump: NOTICE:  [HINTS] Unrecognized Const type.
 pg_dump: NOTICE:  [HINTS] Unrecognized Const type.

This is not a Postgres error message.  Ask EDB.

--
Alvaro Herrera        http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support






[GENERAL] Help with postgresql memory issue

2009-10-28 Thread Brooks Lyrette

Hello All,

I'm new to postgres and it seems my server is unable to fork new  
connections.


Here is the log:

LOG:  could not fork new process for connection: Not enough space
LOG:  could not fork new process for connection: Not enough space
TopMemoryContext: 84784 total in 8 blocks; 5584 free (10 chunks);  
79200 used
  Prepared Queries: 8192 total in 1 blocks; 1800 free (0 chunks);  
6392 used
  TopTransactionContext: 8192 total in 1 blocks; 7856 free (0  
chunks); 336 used
  Record information cache: 8192 total in 1 blocks; 1800 free (0  
chunks); 6392 used
  MessageContext: 8192 total in 1 blocks; 7128 free (0 chunks); 1064  
used
  smgr relation table: 8192 total in 1 blocks; 3840 free (0 chunks);  
4352 used
  TransactionAbortContext: 32768 total in 1 blocks; 32752 free (0  
chunks); 16 used

  Portal hash: 8192 total in 1 blocks; 3912 free (0 chunks); 4280 used
  PortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used
PortalHeapMemory: 1024 total in 1 blocks; 976 free (0 chunks); 48  
used
  Relcache by OID: 8192 total in 1 blocks; 3376 free (0 chunks); 4816  
used
  CacheMemoryContext: 405328 total in 19 blocks; 70888 free (1  
chunks); 334440 used
unnamed prepared statement: 24576 total in 2 blocks; 7912 free (1  
chunks); 16664 used

CachedPlan: 1024 total in 1 blocks; 904 free (0 chunks); 120 used
CachedPlanSource: 1024 total in 1 blocks; 840 free (0 chunks);  
184 used
pg_ts_dict_oid_index: 1024 total in 1 blocks; 344 free (0  
chunks); 680 used
pg_aggregate_fnoid_index: 1024 total in 1 blocks; 344 free (0  
chunks); 680 used
pg_language_name_index: 1024 total in 1 blocks; 344 free (0  
chunks); 680 used
pg_statistic_relid_att_index: 1024 total in 1 blocks; 280 free (0  
chunks); 744 used
pg_ts_dict_dictname_index: 1024 total in 1 blocks; 280 free (0  
chunks); 744 used
pg_namespace_nspname_index: 1024 total in 1 blocks; 304 free (0  
chunks); 720 used
pg_opfamily_oid_index: 1024 total in 1 blocks; 344 free (0  
chunks); 680 used
pg_opclass_oid_index: 1024 total in 1 blocks; 344 free (0  
chunks); 680 used
pg_ts_parser_prsname_index: 1024 total in 1 blocks; 280 free (0  
chunks); 744 used
pg_amop_fam_strat_index: 1024 total in 1 blocks; 128 free (0  
chunks); 896 used
pg_opclass_am_name_nsp_index: 1024 total in 1 blocks; 192 free (0  
chunks); 832 used
pg_trigger_tgrelid_tgname_index: 1024 total in 1 blocks; 240 free  
(0 chunks); 784 used
pg_cast_source_target_index: 1024 total in 1 blocks; 280 free (0  
chunks); 744 used
pg_auth_members_role_member_index: 1024 total in 1 blocks; 280  
free (0 chunks); 744 used
pg_attribute_relid_attnum_index: 1024 total in 1 blocks; 280 free  
(0 chunks); 744 used
pg_ts_config_cfgname_index: 1024 total in 1 blocks; 280 free (0  
chunks); 744 used
pg_authid_oid_index: 1024 total in 1 blocks; 304 free (0 chunks);  
720 used
pg_ts_config_oid_index: 1024 total in 1 blocks; 344 free (0  
chunks); 680 used
pg_conversion_default_index: 1024 total in 1 blocks; 128 free (0  
chunks); 896 used
pg_language_oid_index: 1024 total in 1 blocks; 344 free (0  
chunks); 680 used
pg_enum_oid_index: 1024 total in 1 blocks; 344 free (0 chunks);  
680 used
pg_proc_proname_args_nsp_index: 1024 total in 1 blocks; 192 free  
(0 chunks); 832 used
pg_ts_parser_oid_index: 1024 total in 1 blocks; 344 free (0  
chunks); 680 used
pg_database_oid_index: 1024 total in 1 blocks; 304 free (0  
chunks); 720 used
pg_conversion_name_nsp_index: 1024 total in 1 blocks; 280 free (0  
chunks); 744 used
pg_class_relname_nsp_index: 1024 total in 1 blocks; 280 free (0  
chunks); 744 used
pg_attribute_relid_attnam_index: 1024 total in 1 blocks; 280 free  
(0 chunks); 744 used
pg_class_oid_index: 1024 total in 1 blocks; 344 free (0 chunks);  
680 used
pg_amproc_fam_proc_index: 1024 total in 1 blocks; 128 free (0  
chunks); 896 used
pg_operator_oprname_l_r_n_index: 1024 total in 1 blocks; 128 free  
(0 chunks); 896 used
pg_index_indexrelid_index: 1024 total in 1 blocks; 344 free (0  
chunks); 680 used
pg_type_oid_index: 1024 total in 1 blocks; 304 free (0 chunks);  
720 used
pg_rewrite_rel_rulename_index: 1024 total in 1 blocks; 280 free  
(0 chunks); 744 used
pg_authid_rolname_index: 1024 total in 1 blocks; 304 free (0  
chunks); 720 used
pg_auth_members_member_role_index: 1024 total in 1 blocks; 280  
free (0 chunks); 744 used
pg_enum_typid_label_index: 1024 total in 1 blocks; 280 free (0  
chunks); 744 used
pg_constraint_oid_index: 1024 total in 1 blocks; 344 free (0  
chunks); 680 used
pg_conversion_oid_index: 1024 total in 1 blocks; 344 free (0  
chunks); 680 used
pg_ts_template_tmplname_index: 1024 total in 1 blocks; 280 free  
(0 chunks); 744 used
pg_ts_config_map_index: 1024 total in 1 blocks; 192 free (0  
chunks); 832 used
pg_namespace_oid_index: 1024 total in 1 blocks; 304 free (0  
chunks); 720 used
  

Re: [GENERAL] Help with postgresql memory issue

2009-10-28 Thread Thom Brown
2009/10/28 Brooks Lyrette brooks.lyre...@gmail.com:
 Hello All,

 I'm new to postgres and it seems my server is unable to fork new
 connections.

 Here is the log:

 LOG:  could not fork new process for connection: Not enough space
 LOG:  could not fork new process for connection: Not enough space
 TopMemoryContext: 84784 total in 8 blocks; 5584 free (10 chunks); 79200 used
  Prepared Queries: 8192 total in 1 blocks; 1800 free (0 chunks); 6392 used
  TopTransactionContext: 8192 total in 1 blocks; 7856 free (0 chunks); 336
 used
  Record information cache: 8192 total in 1 blocks; 1800 free (0 chunks);
 6392 used
  MessageContext: 8192 total in 1 blocks; 7128 free (0 chunks); 1064 used
  smgr relation table: 8192 total in 1 blocks; 3840 free (0 chunks); 4352
 used
  TransactionAbortContext: 32768 total in 1 blocks; 32752 free (0 chunks); 16
 used
  Portal hash: 8192 total in 1 blocks; 3912 free (0 chunks); 4280 used
  PortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used
    PortalHeapMemory: 1024 total in 1 blocks; 976 free (0 chunks); 48 used
  Relcache by OID: 8192 total in 1 blocks; 3376 free (0 chunks); 4816 used
  CacheMemoryContext: 405328 total in 19 blocks; 70888 free (1 chunks);
 334440 used
    unnamed prepared statement: 24576 total in 2 blocks; 7912 free (1
 chunks); 16664 used
    CachedPlan: 1024 total in 1 blocks; 904 free (0 chunks); 120 used
    CachedPlanSource: 1024 total in 1 blocks; 840 free (0 chunks); 184 used
    pg_ts_dict_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); 680
 used
    pg_aggregate_fnoid_index: 1024 total in 1 blocks; 344 free (0 chunks);
 680 used
    pg_language_name_index: 1024 total in 1 blocks; 344 free (0 chunks); 680
 used
    pg_statistic_relid_att_index: 1024 total in 1 blocks; 280 free (0
 chunks); 744 used
    pg_ts_dict_dictname_index: 1024 total in 1 blocks; 280 free (0 chunks);
 744 used
    pg_namespace_nspname_index: 1024 total in 1 blocks; 304 free (0 chunks);
 720 used
    pg_opfamily_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); 680
 used
    pg_opclass_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); 680
 used
    pg_ts_parser_prsname_index: 1024 total in 1 blocks; 280 free (0 chunks);
 744 used
    pg_amop_fam_strat_index: 1024 total in 1 blocks; 128 free (0 chunks); 896
 used
    pg_opclass_am_name_nsp_index: 1024 total in 1 blocks; 192 free (0
 chunks); 832 used
    pg_trigger_tgrelid_tgname_index: 1024 total in 1 blocks; 240 free (0
 chunks); 784 used
    pg_cast_source_target_index: 1024 total in 1 blocks; 280 free (0 chunks);
 744 used
    pg_auth_members_role_member_index: 1024 total in 1 blocks; 280 free (0
 chunks); 744 used
    pg_attribute_relid_attnum_index: 1024 total in 1 blocks; 280 free (0
 chunks); 744 used
    pg_ts_config_cfgname_index: 1024 total in 1 blocks; 280 free (0 chunks);
 744 used
    pg_authid_oid_index: 1024 total in 1 blocks; 304 free (0 chunks); 720
 used
    pg_ts_config_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); 680
 used
    pg_conversion_default_index: 1024 total in 1 blocks; 128 free (0 chunks);
 896 used
    pg_language_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); 680
 used
    pg_enum_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); 680 used
    pg_proc_proname_args_nsp_index: 1024 total in 1 blocks; 192 free (0
 chunks); 832 used
    pg_ts_parser_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); 680
 used
    pg_database_oid_index: 1024 total in 1 blocks; 304 free (0 chunks); 720
 used
    pg_conversion_name_nsp_index: 1024 total in 1 blocks; 280 free (0
 chunks); 744 used
    pg_class_relname_nsp_index: 1024 total in 1 blocks; 280 free (0 chunks);
 744 used
    pg_attribute_relid_attnam_index: 1024 total in 1 blocks; 280 free (0
 chunks); 744 used
    pg_class_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); 680 used
    pg_amproc_fam_proc_index: 1024 total in 1 blocks; 128 free (0 chunks);
 896 used
    pg_operator_oprname_l_r_n_index: 1024 total in 1 blocks; 128 free (0
 chunks); 896 used
    pg_index_indexrelid_index: 1024 total in 1 blocks; 344 free (0 chunks);
 680 used
    pg_type_oid_index: 1024 total in 1 blocks; 304 free (0 chunks); 720 used
    pg_rewrite_rel_rulename_index: 1024 total in 1 blocks; 280 free (0
 chunks); 744 used
    pg_authid_rolname_index: 1024 total in 1 blocks; 304 free (0 chunks); 720
 used
    pg_auth_members_member_role_index: 1024 total in 1 blocks; 280 free (0
 chunks); 744 used
    pg_enum_typid_label_index: 1024 total in 1 blocks; 280 free (0 chunks);
 744 used
    pg_constraint_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); 680
 used
    pg_conversion_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); 680
 used
    pg_ts_template_tmplname_index: 1024 total in 1 blocks; 280 free (0
 chunks); 744 used
    pg_ts_config_map_index: 1024 total in 1 blocks; 192 free (0 chunks); 832
 used
    pg_namespace_oid_index: 1024 total in 1 blocks; 304 free (0 chunks); 720
 used
    

Re: [GENERAL] PHP + PDO + PGPOOL = Segmentation fault

2009-10-28 Thread Richard Huxton
VladK wrote:
 This script executed by cron. And segmentation fault generated by PHP
 script.

In that case you have a bug in one of: Apache, PHP, PDO libraries.

If the PDO libraries use PostgreSQL's libpq library then that could be
involved too.

Even if pgpool has a bug and isn't communicating correctly with the PHP
code it shouldn't be possible to cause a segfault.

Your best bet is to see if you can reduce it to as simple a test as
possible. Then we can see what to do next. This may involve working with
the PDO / PHP groups.

-- 
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Help with postgresql memory issue

2009-10-28 Thread Thom Brown
2009/10/28 Brooks Lyrette brooks.lyre...@gmail.com:
 There should be no other processes running, this system is dedicated to
 running postgresql.

 Max connections is configured to: max_connections = 400


Well it sounds like you've somehow run out of swap space.  Are you
able to run top and sort by resident memory and also swap memory to
see where it's all going?  Also use pg_top if you have it.  That will
tell you how much memory each connection is using.

Thom



Re: [GENERAL] log slow queries and hints

2009-10-28 Thread Richard Huxton
Vasiliy G Tolstov wrote:
 user=dbu_vase_1,db=db_vase_1 HINT:  Use the escape string syntax for
 backslashes, e.g., E'\\'.
 
 How can I disable these hints, or (I'm using Drupal for this database)
 fix the queries?

See the manual section on configuration, escape_string_warning.
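
If you just want the noise gone, a one-line postgresql.conf change does
it, though rewriting the queries to use E'...' escapes (or
standard_conforming_strings) is the cleaner long-term fix:

```
# postgresql.conf
escape_string_warning = off   # suppress the backslash-escape HINTs
```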

-- 
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Email regular expression

2009-10-28 Thread Richard Huxton
Xai wrote:
 i want to create a type for an email field but i'm not good with regx
 can some one help me?

Google for email regex. Be warned - this is very complicated if you
want to match *all* possible email addresses.
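
If an approximate check is acceptable, a common pattern is a domain
with a deliberately loose regex. This sketch accepts most real-world
addresses but is nowhere near full RFC coverage; the pattern is
illustrative only:

```sql
-- Loose, illustrative check only: local-part@domain.tld
CREATE DOMAIN email_address AS text
  CHECK (VALUE ~* '^[a-z0-9._%+-]+@[a-z0-9.-]+\.[a-z]{2,}$');

-- Usage: columns of this domain reject non-matching values
CREATE TABLE subscribers (addr email_address NOT NULL);
```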

-- 
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Help with postgresql memory issue

2009-10-28 Thread Tom Lane
Brooks Lyrette brooks.lyre...@gmail.com writes:
 I'm new to postgres and it seems my server is unable to fork new  
 connections.

 LOG:  could not fork new process for connection: Not enough space

For what I suppose is a lightly loaded machine, that is just plain
weird.  What's the platform exactly?  Is it possible that the postmaster
is being launched under very restrictive ulimit settings?

If it's a Unix-ish machine, it would be useful to look at top and
vmstat output to see if the machine is under severe memory
pressure for some reason.

regards, tom lane



[GENERAL] could not identify an equality operator for type annotation (Was: could not find array type for data type character varying[])

2009-10-28 Thread Viktor Rosenfeld
Hi,

this looks good, but it does not work with DISTINCT.

  CREATE TYPE annotation AS ( namespace varchar, name varchar, value varchar );

  SELECT 
node.id as id, 
array_agg(DISTINCT ROW(namespace, name, value)::annotation) as annotation
  ...
  GROUP BY id

produces:

  ERROR:  could not identify an equality operator for type annotation

I tried to create a custom operator like this:

  CREATE OR REPLACE FUNCTION annotation_equal(lhs annotation, rhs annotation) 
RETURNS boolean AS $$
  BEGIN
RETURN
  lhs.namespace = rhs.namespace AND
  lhs.name = rhs.name AND
  lhs.value = rhs.value;
  END;
  $$ LANGUAGE plpgsql IMMUTABLE;

  CREATE OPERATOR = (
PROCEDURE = annotation_equal,
LEFTARG = annotation,
RIGHTARG = annotation
  );

But it doesn't seem to get picked up.  If I leave out the DISTINCT in
the first SELECT query, it works as expected.  I could leave it out, but
then the application logic would be more complex.  (Well, not really, I'd
just use a Set and not a List in Java, but it would increase the amount
of data sent over the network, and the reason I'm aggregating in the
first place is to minimize the data.)

I've also tried this:

  CREATE OR REPLACE FUNCTION annotation_hash(rhs annotation) RETURNS INTEGER AS 
$$
  BEGIN
RETURN hashtext (rhs.namespace || '-' || rhs.name || '-' || rhs.value);
  END;
  $$ LANGUAGE plpgsql IMMUTABLE STRICT;

  CREATE OPERATOR CLASS annotation_ops DEFAULT FOR TYPE annotation USING hash AS
OPERATOR 1 =,
FUNCTION 1 annotation_hash(rhs annotation)
  ;

But then the error message changes to:

  ERROR:  cache lookup failed for operator 34755

Any idea?

Cheers,
Viktor

Sam Mason wrote:

 On Wed, Oct 28, 2009 at 04:17:32PM +0100, Viktor Rosenfeld wrote:
  I'm trying to aggregate a list of table attributes into an array.
 
 I'd suggest using a tuple; arrays are for things where each element means
 the same thing.  I'd guess you care about the substructure (i.e. the
 element has a namespace, a name and a value) and hence using an
 array in the first place seems wrong.  Maybe something like:
 
   CREATE TYPE foo AS ( namespace TEXT, name TEXT, value TEXT );
   SELECT id, array_accum(row(a,b,c)::foo)
   FROM data
   GROUP BY id;
 
  Why doesn't this work?
 
 Arrays of arrays aren't directly supported; you currently have to put
 them into a tuple first.  Something like:
 
   CREATE TYPE bar AS ( a TEXT[] );
   SELECT array_agg(row(array['a'])::bar);
 
 -- 
   Sam  http://samason.me.uk/
 



Re: [GENERAL] How to list a role's permissions for a given relation?

2009-10-28 Thread Kynn Jones
Thanks!

kynn

On Tue, Oct 27, 2009 at 4:02 PM, Richard Huxton d...@archonet.com wrote:

 Kynn Jones wrote:
  How can I list the permissions of a given user/role for a specific
  relation/view/index, etc.?

 From psql use \dp tablename

 Using plain SQL, the closest I can think of are the has_xxx_privilege()
 functions:
 http://www.postgresql.org/docs/8.4/static/functions-info.html

 There is also the standard information_schema views:
 http://www.postgresql.org/docs/8.4/static/information-schema.html
 e.g. table_privileges, column_privileges

 Finally, there are the system-catalogues themselves:
 http://www.postgresql.org/docs/8.4/static/catalogs.html
 Look in pg_class for relacl - an array of table (and index) permissions.
 Similarly pg_proc has proacl for functions.
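
 To make those options concrete, a few illustrative queries (the role and
 table names here are invented):

```sql
-- 1. Privilege-inquiry function:
SELECT has_table_privilege('some_role', 'public.some_table', 'SELECT');

-- 2. SQL-standard information_schema view:
SELECT privilege_type
FROM   information_schema.table_privileges
WHERE  grantee = 'some_role' AND table_name = 'some_table';

-- 3. Raw catalog: relacl is the ACL array for tables and indexes.
SELECT relname, relacl FROM pg_class WHERE relname = 'some_table';
```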

 --
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Help with postgresql memory issue

2009-10-28 Thread Brooks Lyrette
The machine is running a moderate load. This is running on a Solaris  
Zone.


Top is showing:

load averages:  2.49,  4.00,  3.78;  up 124+12:24:47  16:04:21

46 processes: 45 sleeping, 1 on cpu
CPU states: 76.6% idle, 14.6% user,  8.8% kernel,  0.0% iowait,  0.0% swap

Memory: 32G phys mem, 942M free mem, 76G swap, 74G free swap

   PID USERNAME LWP PRI NICE  SIZE   RES STATE    TIME    CPU COMMAND
  5069 postgres   1  52    0  167M   20M sleep    0:04 13.50% postgres
   902 postgres   1   1    0  167M   21M sleep    0:12  6.39% postgres
  5068 postgres   1  59    0  167M   21M sleep    0:01  4.92% postgres
  5070 postgres   1  59    0  166M   20M sleep    0:00  3.72% postgres
 27817 postgres   1  59    0  167M   22M sleep    0:23  1.43% postgres
   903 postgres   1  59    0  157M   11M sleep    0:02  1.14% postgres
 23594 postgres   1  59    0  148M 2096K sleep    0:10  0.11% postgres
  5510 brooks     1  59    0 5624K 2184K cpu      0:00  0.10% top
 23598 postgres   1  59    0 6404K 1680K sleep    0:11  0.10% postgres
 23595 postgres   1  59    0  148M 1852K sleep    0:01  0.01% postgres
 23597 postgres   1  59    0 6220K 1556K sleep    0:00  0.01% postgres
 24870 root      30  39    0 7060K 3332K sleep    7:01  0.00% nscd
   736 brooks     1  59    0 6292K 2060K sleep    0:00  0.00% sshd
 23596 postgres   1  59    0  148M 2024K sleep    0:00  0.00% postgres
 24828 root      13  29    0 9300K 2128K sleep    2:02  0.00% svc.st

And vmstat shows:

 kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr s0 s1 s2 s3   in   sy   cs us sy id
 0 0 0 74805352 2910024 373 4154 96 7 7 0 0 -0 52 19 19 4561 230770 6889 11 13 76


On 28-Oct-09, at 4:01 PM, Tom Lane wrote:


Brooks Lyrette brooks.lyre...@gmail.com writes:
 I'm new to postgres and it seems my server is unable to fork new
 connections.

 LOG:  could not fork new process for connection: Not enough space

For what I suppose is a lightly loaded machine, that is just plain
weird.  What's the platform exactly?  Is it possible that the postmaster
is being launched under very restrictive ulimit settings?

If it's a Unix-ish machine, it would be useful to look at top and
vmstat output to see if the machine is under severe memory
pressure for some reason.

regards, tom lane





Re: [GENERAL] Help with postgresql memory issue

2009-10-28 Thread Brooks Lyrette
There should be no other processes running; this system is dedicated
to running PostgreSQL.


Max connections is configured to: max_connections = 400 

Brooks L.

On 28-Oct-09, at 3:46 PM, Thom Brown wrote:


2009/10/28 Brooks Lyrette brooks.lyre...@gmail.com:

Hello All,

I'm new to postgres and it seems my server is unable to fork new
connections.

Here is the log:

LOG:  could not fork new process for connection: Not enough space
LOG:  could not fork new process for connection: Not enough space
TopMemoryContext: 84784 total in 8 blocks; 5584 free (10 chunks);  
79200 used
 Prepared Queries: 8192 total in 1 blocks; 1800 free (0 chunks);  
6392 used
 TopTransactionContext: 8192 total in 1 blocks; 7856 free (0  
chunks); 336

used
 Record information cache: 8192 total in 1 blocks; 1800 free (0  
chunks);

6392 used
 MessageContext: 8192 total in 1 blocks; 7128 free (0 chunks); 1064  
used
 smgr relation table: 8192 total in 1 blocks; 3840 free (0 chunks);  
4352

used
 TransactionAbortContext: 32768 total in 1 blocks; 32752 free (0  
chunks); 16

used
 Portal hash: 8192 total in 1 blocks; 3912 free (0 chunks); 4280 used
 PortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used
   PortalHeapMemory: 1024 total in 1 blocks; 976 free (0 chunks);  
48 used
 Relcache by OID: 8192 total in 1 blocks; 3376 free (0 chunks);  
4816 used
 CacheMemoryContext: 405328 total in 19 blocks; 70888 free (1  
chunks);

334440 used
   unnamed prepared statement: 24576 total in 2 blocks; 7912 free (1
chunks); 16664 used
   CachedPlan: 1024 total in 1 blocks; 904 free (0 chunks); 120 used
   CachedPlanSource: 1024 total in 1 blocks; 840 free (0 chunks);  
184 used
   pg_ts_dict_oid_index: 1024 total in 1 blocks; 344 free (0  
chunks); 680

used
   pg_aggregate_fnoid_index: 1024 total in 1 blocks; 344 free (0  
chunks);

680 used
   pg_language_name_index: 1024 total in 1 blocks; 344 free (0  
chunks); 680

used
   pg_statistic_relid_att_index: 1024 total in 1 blocks; 280 free (0
chunks); 744 used
   pg_ts_dict_dictname_index: 1024 total in 1 blocks; 280 free (0  
chunks);

744 used
   pg_namespace_nspname_index: 1024 total in 1 blocks; 304 free (0  
chunks);

720 used
   pg_opfamily_oid_index: 1024 total in 1 blocks; 344 free (0  
chunks); 680

used
   pg_opclass_oid_index: 1024 total in 1 blocks; 344 free (0  
chunks); 680

used
   pg_ts_parser_prsname_index: 1024 total in 1 blocks; 280 free (0  
chunks);

744 used
   pg_amop_fam_strat_index: 1024 total in 1 blocks; 128 free (0  
chunks); 896

used
   pg_opclass_am_name_nsp_index: 1024 total in 1 blocks; 192 free (0
chunks); 832 used
   pg_trigger_tgrelid_tgname_index: 1024 total in 1 blocks; 240  
free (0

chunks); 784 used
   pg_cast_source_target_index: 1024 total in 1 blocks; 280 free (0  
chunks);

744 used
   pg_auth_members_role_member_index: 1024 total in 1 blocks; 280  
free (0

chunks); 744 used
   pg_attribute_relid_attnum_index: 1024 total in 1 blocks; 280  
free (0

chunks); 744 used
   pg_ts_config_cfgname_index: 1024 total in 1 blocks; 280 free (0  
chunks);

744 used
   pg_authid_oid_index: 1024 total in 1 blocks; 304 free (0  
chunks); 720

used
   pg_ts_config_oid_index: 1024 total in 1 blocks; 344 free (0  
chunks); 680

used
   pg_conversion_default_index: 1024 total in 1 blocks; 128 free (0  
chunks);

896 used
   pg_language_oid_index: 1024 total in 1 blocks; 344 free (0  
chunks); 680

used
   pg_enum_oid_index: 1024 total in 1 blocks; 344 free (0 chunks);  
680 used
   pg_proc_proname_args_nsp_index: 1024 total in 1 blocks; 192 free  
(0

chunks); 832 used
   pg_ts_parser_oid_index: 1024 total in 1 blocks; 344 free (0  
chunks); 680

used
   pg_database_oid_index: 1024 total in 1 blocks; 304 free (0  
chunks); 720

used
   pg_conversion_name_nsp_index: 1024 total in 1 blocks; 280 free (0
chunks); 744 used
   pg_class_relname_nsp_index: 1024 total in 1 blocks; 280 free (0  
chunks);

744 used
   pg_attribute_relid_attnam_index: 1024 total in 1 blocks; 280  
free (0

chunks); 744 used
   pg_class_oid_index: 1024 total in 1 blocks; 344 free (0 chunks);  
680 used
   pg_amproc_fam_proc_index: 1024 total in 1 blocks; 128 free (0  
chunks);

896 used
   pg_operator_oprname_l_r_n_index: 1024 total in 1 blocks; 128  
free (0

chunks); 896 used
   pg_index_indexrelid_index: 1024 total in 1 blocks; 344 free (0  
chunks);

680 used
   pg_type_oid_index: 1024 total in 1 blocks; 304 free (0 chunks);  
720 used

   pg_rewrite_rel_rulename_index: 1024 total in 1 blocks; 280 free (0
chunks); 744 used
   pg_authid_rolname_index: 1024 total in 1 blocks; 304 free (0  
chunks); 720

used
   pg_auth_members_member_role_index: 1024 total in 1 blocks; 280  
free (0

chunks); 744 used
   pg_enum_typid_label_index: 1024 total in 1 blocks; 280 free (0  
chunks);

744 used
   pg_constraint_oid_index: 1024 total in 1 blocks; 344 free (0  
chunks); 680

used
   pg_conversion_oid_index: 1024 total in 1 blocks; 344 free (0  
chunks); 680

used
   

Re: [GENERAL] Emal reg expression

2009-10-28 Thread Roman Neuhauser
On Wed, Oct 28, 2009 at 05:45:14AM -0700, Xai wrote:
 I want to create a type for an email field but I'm not good with regex.
 Can someone help me?

http://marc.info/?l=postgresql-generalm=112612299412819w=2



Re: [GENERAL] Help with postgresql memory issue

2009-10-28 Thread Greg Stark
On Wed, Oct 28, 2009 at 1:05 PM, Brooks Lyrette
brooks.lyre...@gmail.com wrote:
 The machine is running a moderate load. This is running on a Solaris Zone.

 Memory: 32G phys mem, 942M free mem, 76G swap, 74G free swap

   PID USERNAME LWP PRI NICE  SIZE   RES STATE    TIME    CPU COMMAND
  5069 postgres   1  52    0  167M   20M sleep    0:04 13.50% postgres

Hm, well 400 processes if each were taking 190M would be 76G. But that
doesn't really make much sense since most of the 167M of that process
is presumably the shared buffers. What is your shared buffers set to
btw? And your work_mem and maintenance_work_mem?

Fwiw, ENOMEM is documented as "There is not enough swap space."

Perhaps you have some big usage spike which uses up lots of swap and
causes postgres to start needing lots of new processes at the same
time?
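
The settings being asked about can be read from any live session; a generic sketch (standard GUC names, nothing specific to this server):

```sql
SHOW shared_buffers;
SHOW work_mem;
SHOW maintenance_work_mem;

-- Or all at once:
SELECT name, setting, unit
FROM   pg_settings
WHERE  name IN ('shared_buffers', 'work_mem',
                'maintenance_work_mem', 'max_connections');
```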


-- 
greg



Re: [GENERAL] Help with postgresql memory issue

2009-10-28 Thread Greg Smith

On Wed, 28 Oct 2009, Tom Lane wrote:

What's the platform exactly?  Is it possible that the postmaster is 
being launched under very restrictive ulimit settings?


Now that Brooks mentioned this being run inside of a Solaris zone, it seems 
like this might be running into some memory upper limit controlled by the 
zone configuration.


--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD



Re: [GENERAL] Help with postgresql memory issue

2009-10-28 Thread Greg Smith

On Wed, 28 Oct 2009, Greg Stark wrote:


  PID USERNAME LWP PRI NICE  SIZE   RES STATE    TIME    CPU COMMAND
 5069 postgres   1  52    0  167M   20M sleep    0:04 13.50% postgres


Hm, well 400 processes if each were taking 190M would be 76G. But that
doesn't really make much sense since most of the 167M of that process
is presumably the shared buffers. What is your shared buffers set to
btw? And your work_mem and maintenance_work_mem?


Pieced together from the upthread config file posts:

shared_buffers = 128MB
work_mem = 16MB
max_connections = 400

So somewhere between 128MB and 167MB of that SIZE is shared_buffers plus 
the other usual shared memory suspects.  Let's say each process is using 
40MB, which is on the high side.  I'd guess this system might peak at 40MB 
* 400 connections + 170MB ~= 16GB of database RAM used, which is so much less 
than physical RAM that it seems more like a software limit is being run into 
instead.


--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD


Re: [GENERAL] Help with postgresql memory issue

2009-10-28 Thread Tom Lane
Greg Smith gsm...@gregsmith.com writes:
 On Wed, 28 Oct 2009, Tom Lane wrote:
 What's the platform exactly?  Is it possible that the postmaster is 
 being launched under very restrictive ulimit settings?

 Now that Brooks mentioned this being run inside of a Solaris zone, seems 
 like this might be running into some memory upper limit controlled by the 
 zone configuration.

A bit of quick googling confirmed that there is (or can be) a per-zone
memory cap.  I'll bet Greg has nailed it.  The docs I found claim that
the cap management code is smart enough to count shared memory only
once, which would eliminate the most obvious way in which the cap might
be way off; but it still sounds like checking into that configuration
setting is job #1.

regards, tom lane



Re: [GENERAL] Postgres alpha testing docs and general test packs

2009-10-28 Thread Guillaume Lelarge
Le mercredi 28 octobre 2009 à 15:13:06, Thom Brown a écrit :
 2009/10/28 Adrian Klaver akla...@comcast.net:
  Entirely new features are easier to deal with though.  I still would,
  however, want something like a detailed version of Josh's post which
  breaks down where the changes have occurred.  It seems quite scattered
  and unclear at the moment.
 
  Thom
 
  http://developer.postgresql.org/pgdocs/postgres/release-8-5.html
 
 Thanks Adrian.  I just wasn't looking hard enough obviously :)  That
 list still doesn't appear to be explicit enough though, as we have
 "Multiple improvements in contrib/hstore, including raising limits on
 keys and values."  What exactly is meant by limit, what was this limit
 before, and what has it been raised to?
 
 Similarly: "Fix encoding handling in binary input function of xml
 type."  What was the problem before?
 
 And: "Allow the collection of statistics on sequences."  How would
 your average end-user see whether these statistics are being collected
 on sequences?  And are these statistics actually used anywhere yet?
 
 I'm not really asking for the answer to those questions.  I'm pointing
 out that it isn't clear (at least to me) how to determine what exactly
 has been fixed in order to test it.  This doesn't apply to everything
 listed as some of it is quite clear, like pg_dump/pg_restore --clean
 now drops large objects.
 

You're completely right, but release notes were never intended to be this. What 
you need is more of a visual tour, and I don't think anyone has written such a 
thing for any PostgreSQL release (but I may be proven wrong). I wrote something 
like this in French for 8.2, 8.3, and 8.4. The last two were even published in 
a French Linux magazine. I suppose people from other countries do the same. The 
advocacy group would do a good thing if it started working on this kind of 
document. I could probably work on this too.


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com



Re: [GENERAL] could not identify an equality operator for type annotation (Was: could not find array type for data type character varying[])

2009-10-28 Thread Tom Lane
Viktor Rosenfeld listuse...@googlemail.com writes:
 this looks good, but it does not work with DISTINCT.

   CREATE TYPE annotation AS ( namespace varchar, name varchar, value varchar 
 );

   ERROR:  could not identify an equality operator for type annotation

My recollection is you need a complete btree operator class to support
DISTINCT.  8.4 would provide that automatically for composite types,
but in 8.3 you'll have to build it yourself.
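
For 8.3, building it yourself might look roughly like the sketch below (untested; it assumes the annotation type and = operator from upthread, and annotation_cmp, annotation_lt, and annotation_btree_ops are invented names):

```sql
-- A btree opclass needs a 3-way comparison support function
-- plus the five ordering operators (<, <=, =, >=, >).
CREATE FUNCTION annotation_cmp(lhs annotation, rhs annotation)
RETURNS integer AS $$
BEGIN
  IF (lhs.namespace, lhs.name, lhs.value) <
     (rhs.namespace, rhs.name, rhs.value) THEN
    RETURN -1;
  ELSIF (lhs.namespace, lhs.name, lhs.value) >
        (rhs.namespace, rhs.name, rhs.value) THEN
    RETURN 1;
  END IF;
  RETURN 0;
END;
$$ LANGUAGE plpgsql IMMUTABLE;

CREATE FUNCTION annotation_lt(annotation, annotation) RETURNS boolean
  AS 'SELECT annotation_cmp($1, $2) < 0' LANGUAGE sql IMMUTABLE;
CREATE OPERATOR < (
  PROCEDURE = annotation_lt,
  LEFTARG = annotation, RIGHTARG = annotation
);
-- ...define <=, >, and >= the same way, and keep the existing = ...

CREATE OPERATOR CLASS annotation_btree_ops
  DEFAULT FOR TYPE annotation USING btree AS
    OPERATOR 1 < ,
    OPERATOR 2 <= ,
    OPERATOR 3 = ,
    OPERATOR 4 >= ,
    OPERATOR 5 > ,
    FUNCTION 1 annotation_cmp(annotation, annotation);
```

With the opclass in place as the default for the type, DISTINCT on annotation values should be able to sort and deduplicate them.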

 I've also tried [ a hash opclass ]
 But then the error message changes to:
   ERROR:  cache lookup failed for operator 34755

That's not what I get when I copy your example, so I think you left
something out.

regards, tom lane



Re: [GENERAL] Postgres alpha testing docs and general test packs

2009-10-28 Thread Guillaume Lelarge
Le mercredi 28 octobre 2009 à 23:30:01, Adrian Klaver a écrit :
 - Guillaume Lelarge guilla...@lelarge.info wrote:
  Le mercredi 28 octobre 2009 à 15:13:06, Thom Brown a écrit :
   Similarly: Fix encoding handling in binary input function of xml
   type.  What was the problem before?
 
 See attached screen shot for one possible solution.
 

This solution is aimed at developers, not users. I mean, I can do this, and I 
already do. My customers won't.


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com



Re: [GENERAL] Postgres alpha testing docs and general test packs

2009-10-28 Thread Adrian Klaver

- Guillaume Lelarge guilla...@lelarge.info wrote:

 Le mercredi 28 octobre 2009 à 23:30:01, Adrian Klaver a écrit :
  - Guillaume Lelarge guilla...@lelarge.info wrote:
   Le mercredi 28 octobre 2009 à 15:13:06, Thom Brown a écrit :
    Similarly: Fix encoding handling in binary input function of xml
    type.  What was the problem before?

  See attached screen shot for one possible solution.

 This solution is aimed at developers, not users. I mean, I can do this and
 I already do. My customers won't.

 -- 
 Guillaume.
  http://www.postgresqlfr.org
  http://dalibo.com


True, but I took it that Thom was looking for a way to find answers in the 
meantime. This sort of comes under:
'Give a man a fish, feed him for a day; teach him to fish, feed him for a 
lifetime' :)


Adrian Klaver
akla...@comcast.net




Re: [GENERAL] Postgres alpha testing docs and general test packs

2009-10-28 Thread Thom Brown
2009/10/28 Adrian Klaver akla...@comcast.net:



 - Guillaume Lelarge guilla...@lelarge.info wrote:

 Le mercredi 28 octobre 2009 à 15:13:06, Thom Brown a écrit :

 
  Similarly: Fix encoding handling in binary input function of xml
  type.  What was the problem before?
 

 See attached screen shot for one possible solution.


In other words we need to scour the committers mailing list to hunt
for this information?  This is exactly my point.  Testing doesn't
appear to be well organised.  In my last place of work we had a set of
requirements, technical solution design and a test guide which
instructed testers on what areas need testing.  From these a test plan
was built to ensure that the requirements were met, and that the
technical solution was working as specified.  In addition to this they
performed regression testing in the affected areas to ensure
everything else still worked as expected and wasn't negatively
affected by the new changes.

All we have is a summary of changes.  We can find out all the
information if we do plenty of searching of mailing lists and
comparing old and new documentation, but obviously this can be
off-putting and is duplicated for everyone who wants to participate in
testing.

I'm suggesting that while this is technically sufficient, it might be
a better idea to provide a clear technical document of the changes
that have been committed.

Such documentation may also potentially be reused when the final
version is released for end-users to review for any changes they might
need to make to their existing code and queries to ensure they don't
break.

Obviously PostgreSQL has survived very well without this, but I would
expect this would help more users perform more testing.

Thom



Re: [GENERAL] Postgres alpha testing docs and general test packs

2009-10-28 Thread Alvaro Herrera
Thom Brown escribió:

 Obviously PostgreSQL has survived very well without this, but I would
 expect this would help more users perform more testing.

Keep in mind alphas are new.  Last time around, we only released a test
version when we were going to go to beta.  And the alpha idea was
accepted only because it was said that it would be very light on
the developer team.

If anyone (you?) wants to step up and produce the document you request,
it'll probably be linked to.  But please do not request the current
development team to work on it, because most of them are overloaded
already (or have other reasons not to).

-- 
Alvaro Herrera                         http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [GENERAL] Postgres alpha testing docs and general test packs

2009-10-28 Thread Adrian Klaver
On Wednesday 28 October 2009 3:55:02 pm Thom Brown wrote:
 2009/10/28 Adrian Klaver akla...@comcast.net:
  - Guillaume Lelarge guilla...@lelarge.info wrote:
  Le mercredi 28 octobre 2009 à 15:13:06, Thom Brown a écrit :
   Similarly: Fix encoding handling in binary input function of xml
   type.  What was the problem before?
 
  See attached screen shot for one possible solution.

 In other words we need to scour the committers mailing list to hunt
 for this information?  This is exactly my point.  Testing doesn't
 appear to be well organised.  In my last place of work we had a set of
 requirements, technical solution design and a test guide which
 instructed testers on what areas need testing.  From these a test plan
 was built to ensure that the requirements were met, and that the
 technical solution was working as specified.  In addition to this they
 performed regression testing in the affected areas to ensure
 everything else still worked as expected and wasn't negatively
 affected by the new changes.

On the database side this is handled by 'make check', which runs a regression 
suite against the source. So this would be the first thing to do to ensure that 
the database is not affected by a regression when compiled in your particular 
environment. As to your particular needs/application, things are a bit more 
involved, as you mention below. To date it seems most people have scanned the 
change list for items that affected them and then dug deeper to get the 
particulars. One of the benefits/problems of Open Source: some assembly 
required.


 All we have are a summary of changes.  We can find out all the
 information if we do plenty of searching of mailing lists and
 comparing old and new documentation, but obviously this can be
 off-putting and is duplicated for everyone who wants to participate in
 testing.

 I'm suggesting that while this is technically sufficient, it might be
 a better idea to provide a clear technical document of the changes
 that have been committed.

This can be seen as an opportunity to participate in the project. I am sure 
plenty of people would be grateful if you were to spearhead just such a 
document :)


 Such documentation may also potentially be reused when the final
 version is released for end-users to review for any changes they might
 need to make to their existing code and queries to ensure they don't
 break.

 Obviously PostgreSQL has survived very well without this, but I would
 expect this would help more users perform more testing.

 Thom

As was mentioned in another post, the whole alpha release program is new, so it 
is still in the learning-curve stage. My experience with the Postgres project 
is that most itches do get scratched. It just does not always happen as fast as 
everybody would like.

-- 
Adrian Klaver
akla...@comcast.net



Re: [GENERAL] Postgres alpha testing docs and general test packs

2009-10-28 Thread Thom Brown
2009/10/28 Alvaro Herrera alvhe...@commandprompt.com:

 If anyone (you?) wants to step up and produce the document you request,
 it'll probably be linked to.  But please do not request the current
 development team to work on it, because most of them are overloaded
 already (or have other reasons not to).

I can understand that, and I wouldn't expect our valuable developers
to do all this work.  I was thinking more of someone (or maybe more
than one person) taking the role of test documenter (yes, yes, I know
I should get involved myself).  I will look into putting something
together to meet my own proposals.  I'm not entirely sure what the result
should look like for this particular project, but I'll see if I can
come up with something.  At least I might feel a little useful. ;)

Thom



Re: [GENERAL] Postgres alpha testing docs and general test packs

2009-10-28 Thread Greg Smith

On Wed, 28 Oct 2009, Thom Brown wrote:

All we have is a summary of changes.  We can find out all the 
information if we do plenty of searching of mailing lists and comparing 
old and new documentation, but obviously this can be off-putting and is 
duplicated for everyone who wants to participate in testing.


For the last release, we had some people who updated blogs etc. with usage 
examples for many of the new major features.  That doesn't seem to be 
happening as much for the 8.5 development.

In any case, the whole process is still being worked out.  I, for example, 
am working on some instructions for doing performance regression testing 
of the alpha releases.  There actually is a full regression test suite 
that gets run all the time on many platforms.  The point of the alphas is 
actually for you to try *your* tests, not for everyone to test the same 
thing.


There is another route to get information here that might be a bit easier 
than directly looking up things in the mailing lists or commit logs. 
Each alpha is generated after a CommitFest period during which patches are 
committed.  The web application organizing that process provides a way to 
more easily find the relevant discussion leading up to each patch being 
applied, and many of those discussions include better/more obvious 
examples and documentation.  The current alpha2 is based on the results of 
https://commitfest.postgresql.org/action/commitfest_view?id=3


--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD
