[GENERAL] searchable database

2011-04-08 Thread quickinfo
Dear Friends,

I need help from you.

We have more than a thousand electronic journals. I want to make a
searchable database for easy access. Is there any lightweight database
available for that? Please provide me the details.

If you have any questions please mail me.

Looking forward to your help,

Thanks & Regards
Srinath


Re: [GENERAL] .pgpass not working?

2011-04-08 Thread Yang Zhang
Dah, left out the port.
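For the archives: .pgpass lines take the form
hostname:port:database:username:password, so a working entry would
presumably be (assuming the default port 5432):

myhostname.com:5432:yang:yang:mypassword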

On Fri, Apr 8, 2011 at 10:36 PM, Yang Zhang  wrote:
> I'm using PostgreSQL 8.4.7 on Ubuntu 10.04, and I'm trying to use
> .pgpass documented here:
>
> http://www.postgresql.org/docs/8.4/interactive/libpq-pgpass.html
>
> I have a ~/.pgpass with 600 perms containing:
>
> myhostname.com:yang:yang:mypassword
>
> However, it doesn't seem to get picked up by psql -h myhostname.com.
> I also tried explicitly specifying PGPASSFILE, but that didn't work.
> Neither that env var nor .pgpass is documented in man psql.
>
> Entering my password manually works, however.  (Don't need to specify
> username/database since they default to my username, which is also
> yang.)
>
> Also tried connecting with psycopg2 via sqlalchemy (connect string
> "postgresql://myhostname.com/yang"), and it also didn't auto pick up
> .pgpass.
>
> Any hints on what's up?  Thanks in advance.
>
> Yang
>



-- 
Yang Zhang
http://yz.mit.edu/



[GENERAL] .pgpass not working?

2011-04-08 Thread Yang Zhang
I'm using PostgreSQL 8.4.7 on Ubuntu 10.04, and I'm trying to use
.pgpass documented here:

http://www.postgresql.org/docs/8.4/interactive/libpq-pgpass.html

I have a ~/.pgpass with 600 perms containing:

myhostname.com:yang:yang:mypassword

However, it doesn't seem to get picked up by psql -h myhostname.com.
I also tried explicitly specifying PGPASSFILE, but that didn't work.
Neither that env var nor .pgpass is documented in man psql.

Entering my password manually works, however.  (Don't need to specify
username/database since they default to my username, which is also
yang.)

Also tried connecting with psycopg2 via sqlalchemy (connect string
"postgresql://myhostname.com/yang"), and it also didn't auto pick up
.pgpass.

Any hints on what's up?  Thanks in advance.

Yang



Re: [GENERAL] Rename or Re-Create Constraints?

2011-04-08 Thread Rob Sargent



Carlos Mennens wrote:
> On Fri, Apr 8, 2011 at 8:35 PM, Tom Lane  wrote:
>> I believe you can rename the underlying indexes and the constraints will
>> follow them.  (This works in HEAD anyway, not sure how far back.)
>
> I'm sorry but I don't understand what that means or how to relate that
> to a SQL command to rename the constraint. Do you have an example of
> how that command would look?
>
>>> 2. When renaming the table, is there a way to rename both the table
>>> and all associated constraints?
>>
>> No, there's nothing automatic for that.  IIRC there used to be code to
>> try to do this when you renamed a single column; but we took it out,
>> probably because it risked unexpected failures due to index name
>> collisions.
>>
>>                        regards, tom lane


I wonder if the OP wouldn't be best advised to externalize his DDL, whereby
simple editing would take care of the issue?




Re: [GENERAL] Rename or Re-Create Constraints?

2011-04-08 Thread Carlos Mennens
On Fri, Apr 8, 2011 at 8:35 PM, Tom Lane  wrote:
> I believe you can rename the underlying indexes and the constraints will
> follow them.  (This works in HEAD anyway, not sure how far back.)

I'm sorry but I don't understand what that means or how to relate that
to a SQL command to rename the constraint. Do you have an example of
how that command would look?

>> 2. When renaming the table, is there a way to rename both the table
>> and all associated constraints?
>
> No, there's nothing automatic for that.  IIRC there used to be code to
> try to do this when you renamed a single column; but we took it out,
> probably because it risked unexpected failures due to index name
> collisions.
>
>                        regards, tom lane
>



Re: [GENERAL] Rename or Re-Create Constraints?

2011-04-08 Thread Tom Lane
Carlos Mennens  writes:
> 1. Do I need to remove all the table constraints or is there a way to
> rename them?

I believe you can rename the underlying indexes and the constraints will
follow them.  (This works in HEAD anyway, not sure how far back.)
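With the index names from the original post, that would presumably look
like this (whether the constraint follows the rename depends on the
version caveat above):

ALTER INDEX accounts_pkey RENAME TO fashion_pkey;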

> 2. When renaming the table, is there a way to rename both the table
> and all associated constraints?

No, there's nothing automatic for that.  IIRC there used to be code to
try to do this when you renamed a single column; but we took it out,
probably because it risked unexpected failures due to index name
collisions.

regards, tom lane



[GENERAL] Rename or Re-Create Constraints?

2011-04-08 Thread Carlos Mennens
I've searched and really can't find a definitive example or someone
renaming a constraint. I renamed a table yesterday and noticed that
the constraint name was still named the old table name:

inkpress=# ALTER TABLE accounts RENAME TO fashion;
ALTER TABLE

inkpress=# \d fashion
   Table "public.fashion"
 Column  | Type  | Modifiers
-+---+---
 id  | integer   | not null
 vendor  | character varying(40) | not null
 account | integer   | not null
 email   | character varying(40) | not null
 state   | character(2)  | not null
Indexes:
"accounts_pkey" PRIMARY KEY, btree (id)
"accounts_account_key" UNIQUE, btree (account)
"accounts_email_key" UNIQUE, btree (email)
"accounts_vendor_key" UNIQUE, btree (vendor)

1. Do I need to remove all the table constraints or is there a way to
rename them?

2. When renaming the table, is there a way to rename both the table
and all associated constraints?

I've looked over the following guide and am more confused than ever:

http://www.postgresql.org/docs/8.1/static/sql-altertable.html



Re: [GENERAL] Deferred foreign key constraint downsides

2011-04-08 Thread Jeff Davis
On Fri, 2011-04-08 at 14:08 -0500, Jack Christensen wrote:
> Are there any other downsides to just setting all my foreign keys to 
> initially deferred?

It may consume memory resources until the transaction is complete.

Also, when it's possible to write the SQL in an order that always
maintains the integrity of the FK, then it's usually more readable and
understandable. So using immediate constraints may encourage a more
readable style.

Regards,
Jeff Davis




[GENERAL] Critical fix for pg_upgrade/pg_migrator users

2011-04-08 Thread momjian
Critical Fix for pg_upgrade/pg_migrator Users
---------------------------------------------

A bug has been discovered in all released versions of pg_upgrade and
(formerly) pg_migrator.  Anyone who has used pg_upgrade or pg_migrator
should take the following corrective actions as soon as possible.  You
might also want to make a backup of the pg_clog directory if you do not
already have a recent copy.  If you fail to take action promptly, it
might result in unexpected downtime.   

This bug can cause queries to return the following error:

ERROR: could not access status of transaction ##
DETAIL: could not open file "pg_clog/": No such file or directory

This error prevents access to very wide values stored in the database.
To prevent such failures users need to run the following psql script,
as the superuser, in all upgraded databases as soon as possible:

-- This script fixes data in pre-8.4.8 and pre-PG 9.0.4
-- servers that were upgraded by pg_upgrade and pg_migrator.
-- Run the script using psql for every database in the cluster
-- except 'template0', e.g.:
-- psql -U postgres -a -f pg_upgrade_fix.sql dbname
-- This must be run from a writable directory.
--
-- Depending on the size and configuration of your database, 
-- this script might generate a lot of I/O and degrade database
-- performance.  Users should execute this script during a low
-- traffic period and monitor the database load.
--
CREATE TEMPORARY TABLE pg_upgrade_fix AS
SELECT 'VACUUM FREEZE pg_toast.' || c.relname || ';'
FROM    pg_class c, pg_namespace n
WHERE   c.relnamespace = n.oid AND
n.nspname = 'pg_toast' AND
c.relkind = 't'
ORDER by c.oid;
\copy pg_upgrade_fix TO 'pg_upgrade_tmp.sql';
\i pg_upgrade_tmp.sql

A fix will be included in upcoming Postgres releases 8.4.8 and 9.0.4.
These releases will remove the need to run the above script after
upgrades by correctly restoring all TOAST tables in the migrated
databases.  However, databases which have already been upgraded still
need the script run, even if they are running Postgres 9.0.4.

For further details, see http://wiki.postgresql.org/wiki/20110408pg_upgrade_fix.

2011-04-08

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [GENERAL] Howto sort the result of UNION (without modifying its type)?

2011-04-08 Thread Rob Sargent


On 04/08/2011 03:44 PM, Clemens Eisserer wrote:
> Hi,
>
> I have a query where I UNION several select statements which return
> IDs of type INTEGER.
> This works well, if the IDs don't need to be sorted:
>
> SELECT id FROM table WHERE id IN ((select id FROM table WHERE ...) UNION
> (SELECT id  FROM table_ WHERE ))
>
> However I need the result of the UNIONs to be sorted, so I tried the following:
>
> SELECT id FROM table WHERE id IN (SELECT col0 FROM ((select id FROM table WHERE
> ...) UNION (SELECT id  FROM table_ WHERE )) AS col0 ORDER BY col0 OFFSET X
> LIMIT Y)
>
> ERROR:  operator does not exist: integer = record
> LINE 1: Select id FROM table WHERE id IN (Select col0 FROM ...
>
> Is there any way to get the results of UNIONs sorted, without
> converting it to record?
>
> Thanks, Clemens
>
> PS: I know the query looks really stupid, however the queries I UNION
> are auto-generated SQL.
> I don't have much choice here to write the query more efficiently :/


does this work for you?

select u.id from (your unions) as u order by u.id
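Spelled out against the original query, keeping its placeholders, an
untested sketch would be:

SELECT id FROM table WHERE id IN (
    SELECT u.id
    FROM ((SELECT id FROM table WHERE ...) UNION
          (SELECT id FROM table_ WHERE ...)) AS u
    ORDER BY u.id OFFSET X LIMIT Y
);

Aliasing the derived table (u) and ordering by its column avoids the
"integer = record" error.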



Re: [GENERAL] PostgreSQL backend process high memory usage issue

2011-04-08 Thread Shianmiin

Merlin Moncure-2 wrote:
> 
> On Fri, Apr 8, 2011 at 2:00 PM, Shianmiin 
> wrote:
>> Further clarification,
>>
>> if I run two concurrent threads
>>
>> pgbench memoryusagetest -c 2 -j 2 -T180 -f test.sql
>>
>> both backend processes use 1.5GB, resulting in 3GB in total.
> 
> yes. could you please post a capture of top after running the modified
> test that does not run in single transaction.
> 
> merlin
> 

Here you are.

http://postgresql.1045698.n5.nabble.com/file/n4292010/PostgreSQLBackendMemoryUsage.gif
PostgreSQLBackendMemoryUsage.gif 

The screen shot was taken with running the following command

pgbench memoryusagetest -c 2 -j 2 -T1800 -f test.sql

I had to set the test time longer to get the chance to take the
snapshot, because once pgbench finishes, the connections are gone.

The snapshot was taken after the memory usage had stabilized at 1.5GB.

Please note that in this test there are two PostgreSQL backend processes,
each using 1.5GB of memory.

Samuel




[GENERAL] Howto sort the result of UNION (without modifying its type)?

2011-04-08 Thread Clemens Eisserer
Hi,

I have a query where I UNION several select statements which return
IDs of type INTEGER.
This works well, if the IDs don't need to be sorted:
> SELECT id FROM table WHERE id IN ((select id FROM table WHERE ...) UNION 
> (SELECT id  FROM table_ WHERE ))


However I need the result of the UNIONs to be sorted, so I tried the following:

> SELECT id FROM table WHERE id IN (SELECT col0 FROM ((select id FROM table 
> WHERE ...) UNION (SELECT id  FROM table_ WHERE )) AS col0 ORDER BY col0 
> OFFSET X LIMIT Y)
ERROR:  operator does not exist: integer = record
LINE 1: Select id FROM table WHERE id IN (Select col0 FROM ...

Is there any way to get the results of UNIONs sorted, without
converting it to record?

Thanks, Clemens

PS: I know the query looks really stupid, however the queries I UNION
are auto-generated SQL.
I don't have much choice here to write the query more efficiently :/



Re: [GENERAL] PostgreSQL backend process high memory usage issue

2011-04-08 Thread Merlin Moncure
On Fri, Apr 8, 2011 at 2:00 PM, Shianmiin  wrote:
> Further clarification,
>
> if I run two concurrent threads
>
> pgbench memoryusagetest -c 2 -j 2 -T180 -f test.sql
>
> both backend processes use 1.5GB, resulting in 3GB in total.

yes. could you please post a capture of top after running the modified
test that does not run in single transaction.

merlin



Re: [GENERAL] PostgreSQL backend process high memory usage issue

2011-04-08 Thread Shianmiin
Further clarification,

if I run two concurrent threads

pgbench memoryusagetest -c 2 -j 2 -T180 -f test.sql 

both backend processes use 1.5GB, resulting in 3GB in total.

Samuel




Re: [GENERAL] PostgreSQL backend process high memory usage issue

2011-04-08 Thread Shianmiin
No, I didn't configure 1.5GB of shared memory. For this test I recreated the
database cluster and left everything in the configuration at its defaults.

As in the original post, when the connection was first established, the
memory usage of the backend process shown in top was

VIRT = 182MB, RES = 6240K, SHR=4648K 

while running

pgbench memoryusagetest -c 1 -j 1 -T180 -f test.sql 

I saw the memory climb up rapidly to

VIRT = 1661MB  RES = 1.5GB SHR = 55MB

Once the test was done, the process terminated and the memory was returned
to the OS.

Samuel






Re: [GENERAL] Changed SSL Certificates

2011-04-08 Thread Carlos Mennens
On Fri, Apr 8, 2011 at 2:01 PM, Adrian Klaver  wrote:
> Per here:
> http://www.postgresql.org/docs/8.4/static/ssl-tcp.html
> File        Contents                                         Effect
> server.crt  server certificate                               requested by client
> server.key  server private key                               proves server certificate sent by owner;
>                                                              does not indicate certificate owner is trustworthy
> root.crt    trusted certificate authorities                  checks that client certificate is signed by
>                                                              a trusted certificate authority
> root.crl    certificates revoked by certificate authorities  client certificate must not be on this list
>
> Rename your certs to above.

Oh, I misunderstood. I just needed to rename my symbolic links, not my
actual certificate file names. I changed the symbolic link names and
everything is happy again.

Thanks so much for everyone's help!



Re: [GENERAL] Deferred foreign key constraint downsides

2011-04-08 Thread Jerry Sievers
Jack Christensen  writes:

> I recently had cause to use a deferred foreign key constraint for the
> first time. I like it. It seems it could make life simpler, especially
> when an obstinate ORM insists on doing things in the wrong order.
>
> The only downside I can see is it may be harder to track down where a
> violation occurred since the error won't be raised until commit.
>
> Are there any other downsides to just setting all my foreign keys to
> initially deferred?

I'd say, use the feature only as needed.  Gratuitous deviation from
reasonable defaults should be avoided.

Why promote being able to insert rows in related tables using other than
top-down sequencing?

Sure, if you have an existing app that does that, which you can't
change, go with deferred validation, otherwise no.

YMMV

> Thanks.
>
> -- 
> Jack Christensen
> ja...@hylesanderson.edu

-- 
Jerry Sievers
e: gsiever...@comcast.net
p: 305.321.1144



Re: [GENERAL] comma vs cross join question

2011-04-08 Thread Jason Long
On Fri, 2011-04-08 at 14:45 -0400, Tom Lane wrote:
> Jason Long  writes:
> > I am using 9.0.3 and the only setting I have changed is 
> > geqo_effort = 10
> 
> > One of the joins is a view join.
> 
> Ah.  The explain shows there are actually nine base tables in that
> query, which is more than the default join_collapse_limit.  Try cranking
> up both join_collapse_limit and from_collapse_limit to 10 or so.
> (I'm not sure offhand if from_collapse_limit affects this case, but it
> might.)
> 
>   regards, tom lane


I have to say I love this mailing list and thank you Tom for your
expertise.

I played with the settings with the following results.

Worked like a charm
from_collapse_limit = 10
join_collapse_limit = 10 


Worked like a charm
from_collapse_limit = 10
join_collapse_limit = 8 

Failed
from_collapse_limit = 8
join_collapse_limit = 10 

It looks like from_collapse_limit was the key.

I am going to leave them both at 10. 




[GENERAL] Deferred foreign key constraint downsides

2011-04-08 Thread Jack Christensen
I recently had cause to use a deferred foreign key constraint for the 
first time. I like it. It seems it could make life simpler, especially 
when an obstinate ORM insists on doing things in the wrong order.


The only downside I can see is it may be harder to track down where a 
violation occurred since the error won't be raised until commit.


Are there any other downsides to just setting all my foreign keys to 
initially deferred?
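For reference, the kind of declaration I mean (a minimal sketch with
illustrative table names):

ALTER TABLE orders
    ADD CONSTRAINT orders_customer_fk
    FOREIGN KEY (customer_id) REFERENCES customers (id)
    DEFERRABLE INITIALLY DEFERRED;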


Thanks.

--
Jack Christensen
ja...@hylesanderson.edu




Re: [GENERAL] comma vs cross join question

2011-04-08 Thread Tom Lane
Jason Long  writes:
> I am using 9.0.3 and the only setting I have changed is 
> geqo_effort = 10  

> One of the joins is a view join.

Ah.  The explain shows there are actually nine base tables in that
query, which is more than the default join_collapse_limit.  Try cranking
up both join_collapse_limit and from_collapse_limit to 10 or so.
(I'm not sure offhand if from_collapse_limit affects this case, but it
might.)
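For a quick test in the current session, something like:

SET from_collapse_limit = 10;
SET join_collapse_limit = 10;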

regards, tom lane



Re: [GENERAL] Count for pagination

2011-04-08 Thread Stephen Frost
* Jason Long (ja...@octgsoftware.com) wrote:
> The main search screen of my application has pagination.

http://www.depesz.com/index.php/2007/08/29/better-results-paging-in-postgresql-82/

Thanks,

Stephen




[GENERAL] Count for pagination

2011-04-08 Thread Jason Long
The main search screen of my application has pagination.

I am basically running 3 queries with the same where clause.

1. Totals for the entire result set (not just the number of rows on the
   first page)
   a. <300 ms
2. Subset of the total records for one page
   a. 1-2 sec
3. Count of the total records, so the pagination can show the number of
   pages
   a. 1-2 sec

The queries are generated by Hibernate and I am looking to rewrite them
in native SQL
to improve performance.

Any suggestions on how to get the count of all records that could be
returned, and only a subset of those records for that page, in an
optimized fashion?  I have no problem using a window function or a
Postgres-specific feature, as my app only runs on Postgres.
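A sketch of the window-function route I had in mind (untested; table,
column, and page-size values are illustrative):

SELECT o.*, count(*) OVER () AS total_rows  -- full count of matching rows
FROM   orders o
WHERE  o.state = 'TX'                       -- same WHERE clause as the other queries
ORDER  BY o.id
LIMIT  25 OFFSET 0;                         -- one page of results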




Re: [GENERAL] PostgreSQL backend process high memory usage issue

2011-04-08 Thread Merlin Moncure
On Fri, Apr 8, 2011 at 10:30 AM, Shianmiin  wrote:
>
> Shianmiin wrote:
>>
>> Hi Merlin,
>>
>> I revised the test code with attached files and use pgbench to send the
>> test queries.
>>
>>  http://postgresql.1045698.n5.nabble.com/file/n4290723/dotest dotest
>>  http://postgresql.1045698.n5.nabble.com/file/n4290723/initialize.sql
>> initialize.sql
>>  http://postgresql.1045698.n5.nabble.com/file/n4290723/test.sql test.sql
>>
>> Samuel
>>
>
>
> Just want to put some descriptions on the revised test code:
>
> 1. This test is not real-life testing; it is specifically designed to
> emphasize the high memory usage issue of the PostgreSQL backend process we
> encountered.
> 2. The tests in the dotest file use pgbench's ability to simulate 20
> concurrent clients sending random queries, which will make the host memory
> usage climb even faster; eventually all 20 concurrent PostgreSQL processes
> will each use 1.5 GB (30GB in total). To do the same tests as in the original
> post, we can run
>
> pgbench memoryusagetest -c 1 -j 1 -T180 -f test.sql

are you sure you're actually using that memory? I bet you have
configured around 1.5gb shared buffers.  What does 'top' say when this
is happening?

merlin



Re: [GENERAL] PostgreSQL backend process high memory usage issue

2011-04-08 Thread Shianmiin
No, the high memory usage issue is still there.

We could change select count(*) to select * or select 1 if you like. There
is no data in the tables anyway.

Sent from my iPad

On 2011-04-08, at 8:25 AM, "Merlin Moncure-2 [via PostgreSQL]" <
ml-node+4290983-1196677718-196...@n5.nabble.com> wrote:

On Fri, Apr 8, 2011 at 7:43 AM, Shianmiin  wrote:
> Hi Merlin,
>
> I revised the test code with attached files and use pgbench to send the test
> queries.
>
> http://postgresql.1045698.n5.nabble.com/file/n4290723/dotest dotest
> http://postgresql.1045698.n5.nabble.com/file/n4290723/initialize.sql
> initialize.sql
> http://postgresql.1045698.n5.nabble.com/file/n4290723/test.sql test.sql

Not to beat on your test some more, but spamming count(*) is probably
not a good real world indicator of application performance (but it
might be).  I'm assuming your memory usage problem has gone away?

merlin






Re: [GENERAL] PostgreSQL backend process high memory usage issue

2011-04-08 Thread Shianmiin

Shianmiin wrote:
> 
> Hi Merlin,
> 
> I revised the test code with attached files and use pgbench to send the
> test queries.
> 
>  http://postgresql.1045698.n5.nabble.com/file/n4290723/dotest dotest 
>  http://postgresql.1045698.n5.nabble.com/file/n4290723/initialize.sql
> initialize.sql 
>  http://postgresql.1045698.n5.nabble.com/file/n4290723/test.sql test.sql 
> 
> Samuel
> 


Just want to put some descriptions on the revised test code:

1. This test is not real-life testing; it is specifically designed to
emphasize the high memory usage issue of the PostgreSQL backend process we
encountered.
2. The tests in the dotest file use pgbench's ability to simulate 20
concurrent clients sending random queries, which will make the host memory
usage climb even faster; eventually all 20 concurrent PostgreSQL processes
will each use 1.5 GB (30GB in total). To do the same tests as in the original
post, we can run

pgbench memoryusagetest -c 1 -j 1 -T180 -f test.sql

Samuel




[GENERAL] Count for pagination

2011-04-08 Thread Jason Long
The main search screen of my application has pagination.

I am basically running 3 queries with the same where clause.

1. Totals for the entire result set (not just the number of rows on the
   first page)
   a. <300 ms
2. Subset of the total records on that page
   a. 1-2 sec
3. Count of the total records, so the pagination can show the number of
   pages
   a. 1-2 sec

The queries are generated by Hibernate and I am looking to rewrite them
in native SQL
to improve performance.

Any suggestions on how to get the count of all records that could be
returned, and only a subset of those records for that page, in an
optimized fashion?  I have no problem using a window function or a
Postgres-specific feature, as my app only runs on Postgres.

-- 
Thank you for your time,

Jason Long
CEO and Chief Software Engineer
BS Physics, MS Chemical Engineering
http://www.octgsoftware.com
HJBug Founder and President
http://www.hjbug.com


Re: [GENERAL] comma vs cross join question

2011-04-08 Thread Tom Lane
Jason Long  writes:
> I recently upgraded to JBoss AS 6.0.0.Final which includes a newer
> version of Hibernate.
> Previously the Postgres dialect was using a comma, but now it is using
> cross join.
> With the cross join this query never completes.  With the comma the
> query is identical to what was there before and takes less than 300 ms.

Those should be semantically equivalent AFAICS.  Do you maybe have
join_collapse_limit set to a smaller-than-default value?  If not, are
any of those tables really join views?

Please see
http://wiki.postgresql.org/wiki/SlowQueryQuestions
if you need further help, because there's not enough information here
to do more than guess wildly.

regards, tom lane



Re: [GENERAL] Changed SSL Certificates

2011-04-08 Thread Tom Lane
Carlos Mennens  writes:
> I had self signed SSL certificates on my database server but since
> then removed them and received updated certificates from the security
> team. I removed (backed up) the old server.crt & server.key and now
> have db1_ssl.crt & db1_ssl.key in the identical location as the old
> SSL certificates. I then went to /etc/postgres/8.4/main and removed
> the old symbolic links for the old certificates and generated new
> symbolic links:

> ln -s /etc/ssl/certs/db1_ssl.crt db1_ssl.crt
> ln -s /etc/ssl/private/db1_ssl.key db1_ssl.key

> I then restarted PostgreSQL and got the following error:

> 2011-04-08 09:54:34 EDT FATAL:  could not load server certificate file
> "server.crt": No such file or directory
> 2011-04-08 10:00:43 EDT FATAL:  could not load server certificate file
> "server.crt": No such file or directory

Well, yeah.  The server's key and cert files have to be named exactly
server.crt and server.key.  They can be symlinks, I think, but you
can't just randomly use some other names and expect the server to intuit
that those are the files to use.
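In other words, recreate the links under the expected names, e.g. (a sketch
reusing the paths from this thread; the links must live where the old
server.crt and server.key did):

ln -s /etc/ssl/certs/db1_ssl.crt server.crt
ln -s /etc/ssl/private/db1_ssl.key server.key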

regards, tom lane



Re: [GENERAL] Changed SSL Certificates

2011-04-08 Thread Vick Khera
On Fri, Apr 8, 2011 at 12:42 PM, Carlos Mennens wrote:

> ln -s /etc/ssl/certs/db1_ssl.crt db1_ssl.crt
> ln -s /etc/ssl/private/db1_ssl.key db1_ssl.key
>
> I then restarted PostgreSQL and got the following error:
>
> 2011-04-08 09:54:34 EDT FATAL:  could not load server certificate file
> "server.crt": No such file or directory
> 2011-04-08 10:00:43 EDT FATAL:  could not load server certificate file
> "server.crt": No such file or directory
>

Perhaps using the file name "server.crt" that postgres is looking for,
instead of "db1_ssl.crt", would help?

That advice about full paths for the symlink is not so useful: if you are in
the right directory, they are equivalent commands.


Re: [GENERAL] Changed SSL Certificates

2011-04-08 Thread Adrian Klaver

On 04/08/2011 09:42 AM, Carlos Mennens wrote:

> I had self signed SSL certificates on my database server but since
> then removed them and received updated certificates from the security
> team. I removed (backed up) the old server.crt & server.key and now
> have db1_ssl.crt & db1_ssl.key in the identical location as the old
> SSL certificates. I then went to /etc/postgres/8.4/main and removed
> the old symbolic links for the old certificates and generated new
> symbolic links:
>
> ln -s /etc/ssl/certs/db1_ssl.crt db1_ssl.crt
> ln -s /etc/ssl/private/db1_ssl.key db1_ssl.key
>
> I then restarted PostgreSQL and got the following error:
>
> 2011-04-08 09:54:34 EDT FATAL:  could not load server certificate file
> "server.crt": No such file or directory
> 2011-04-08 10:00:43 EDT FATAL:  could not load server certificate file
> "server.crt": No such file or directory
>
> I looked everywhere else in /var/lib/postgres/ & /etc/postgres/ but
> can't find anything else that's calling the old certificates. I
> changed the ownership on the certificates and symbolic links to either
> root or postgres and nothing worked. It fails to start with the
> following error:
>
> root@db1:/# /etc/init.d/postgresql start
> Starting PostgreSQL 8.4 database server: mainThe PostgreSQL server
> failed to start. Please check the log output: 2011-04-08 12:36:54 EDT
> FATAL: could not load server certificate file "server.crt": No such
> file or directory ... failed!
>
> I checked the documentation page:
>
> http://www.postgresql.org/docs/8.4/static/libpq-ssl.html
>
> Table 30-4. Libpq/Client SSL File Usage
>
> File                          Contents                 Effect
> ~/.postgresql/postgresql.crt  client certificate       requested by server
> ~/.postgresql/postgresql.key  client private key       proves client certificate sent by owner;
>                                                        does not indicate certificate owner is trustworthy
> ~/.postgresql/root.crt        trusted certificate      checks server certificate is signed by a
>                               authorities              trusted certificate authority
> ~/.postgresql/root.crl        certificates revoked by  server certificate must not be on this list
>                               certificate authorities
>
> Can anyone tell me what I'm doing wrong or missing here? I can't
> disable SSL per DoD requirements sadly.
>
> -Carlos

Per here:
http://www.postgresql.org/docs/8.4/static/ssl-tcp.html

File        Contents                                         Effect
server.crt  server certificate                               requested by client
server.key  server private key                               proves server certificate sent by owner;
                                                             does not indicate certificate owner is trustworthy
root.crt    trusted certificate authorities                  checks that client certificate is signed by
                                                             a trusted certificate authority
root.crl    certificates revoked by certificate authorities  client certificate must not be on this list

Rename your certs to above.




--
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Changed SSL Certificates

2011-04-08 Thread Diego Schulz
On Fri, Apr 8, 2011 at 2:21 PM, Carlos Mennens wrote:

> On Fri, Apr 8, 2011 at 1:15 PM, Diego Schulz  wrote:
> > Hi,
> > When linking to the certificate and key you should specify the full path.
> > ln -s /etc/ssl/certs/db1_ssl.crt  /full/path/to/db1_ssl.crt
> > ln -s /etc/ssl/private/db1_ssl.key   /full/path/to/db1_ssl.key
>
> Thanks for the quick reply Diego. I posted the commands above and I
> used the full path to the certificates as you can see. Here's the
> info:
>
> lrwxrwxrwx 1 postgres postgres   26 Apr  8 10:43 db1_ssl.crt ->
> /etc/ssl/certs/db1_ssl.crt
> lrwxrwxrwx 1 postgres postgres   28 Apr  8 10:50 db1_ssl.key ->
> /etc/ssl/private/db1_ssl.key
>
> The 1st part is just the symbolic link referenced in
> /var/lib/postgresql/8.4/main but you can see it knows to reference the
> symbolic links to /etc/ssl/...
>
> I'm thinking there's some random configuration file for PostgreSQL
> that has pointers to the old server.crt and server.key files but I've
> searched /etc/postgres/ and /var/lib/postgresql/8.4/main completely
> and can't find it what so ever. I am not authorized to disable SSL per
> DoD standards / requirements sadly.
>
> Any thing else I am missing? I can't be the 1st person to switch SSL
> certificates during utilization.
>
>
Make sure the files have the right ownership and permissions.
It looks like ownership is correct (postgres:postgres) but permissions might
be too loose.
Try chmod 400 on your key and certificate and see what happens.

cheers,

diego


Re: [GENERAL] PostgreSQL + FreeBSD memory configuration, and an issue

2011-04-08 Thread Gipsz Jakab
Ok Vick, thanks, and sorry for the off-list message.

regards,
Carl

2011/4/8 Vick Khera 

>
> On Fri, Apr 8, 2011 at 9:53 AM, Gipsz Jakab wrote:
>
>> Thanks Vick, I'll try it tonight. I will give 1024 shared_buffers and
>> maintenance_work_mem, and 102 MB of work_mem.
>>
>> A question: I didn't use (it's marked with #) the effective_planner (or
>> any other planner method or config option). Is it OK if I turn it on with
>> that parameter: 1036MB?
>>
>>
> the variables below are all the ones I change from default other than the
> logging settings (I like more verbose logging).
>
> I have no opinion or experiences on any other settings.
>
>
>>
>> DROP/ADD TABLE stuck: I realized that the number of locks is quite high; what
>> about these settings:
>>
>>
> I don't think it has to do with number of locks, but with actually waiting
> for a lock.
>
>
>> deadlock_timeout = 1s
>> max_locks_per_transaction = 64
>>
>> is it ok? or is it too high?
>>
>
> That depends really on your application's needs.  I wouldn't change it
> unless you get warnings that you are hitting this limit.
>
>
> Please keep the message on-list.
>
>


Re: [GENERAL] Changed SSL Certificates

2011-04-08 Thread Carlos Mennens
On Fri, Apr 8, 2011 at 1:15 PM, Diego Schulz  wrote:
> Hi,
> When linking to the certificate and key you should specify the full path.
> ln -s /etc/ssl/certs/db1_ssl.crt      /full/path/to/db1_ssl.crt
> ln -s /etc/ssl/private/db1_ssl.key   /full/path/to/db1_ssl.key

Thanks for the quick reply Diego. I posted the commands above and I
used the full path to the certificates as you can see. Here's the
info:

lrwxrwxrwx 1 postgres postgres   26 Apr  8 10:43 db1_ssl.crt ->
/etc/ssl/certs/db1_ssl.crt
lrwxrwxrwx 1 postgres postgres   28 Apr  8 10:50 db1_ssl.key ->
/etc/ssl/private/db1_ssl.key

The 1st part is just the symbolic link referenced in
/var/lib/postgresql/8.4/main but you can see it knows to reference the
symbolic links to /etc/ssl/...

I'm thinking there's some random configuration file for PostgreSQL
that has pointers to the old server.crt and server.key files, but I've
searched /etc/postgres/ and /var/lib/postgresql/8.4/main completely
and can't find it whatsoever. I am not authorized to disable SSL per
DoD standards / requirements, sadly.

Anything else am I missing? I can't be the 1st person to switch SSL
certificates on a server that's in use.



Re: [GENERAL] Changed SSL Certificates

2011-04-08 Thread Diego Schulz
On Fri, Apr 8, 2011 at 1:42 PM, Carlos Mennens wrote:

> I had self signed SSL certificates on my database server but since
> then removed them and received updated certificates from the security
> team. I removed (backed up) the old server.crt & server.key and now
> have db1_ssl.crt & db1_ssl.key in the identical location as the old
> SSL certificates. I then went to /etc/postgres/8.4/main and removed
> the old symbolic links for the old certificates and generated new
> symbolic links:
>
> ln -s /etc/ssl/certs/db1_ssl.crt db1_ssl.crt
> ln -s /etc/ssl/private/db1_ssl.key db1_ssl.key
>
> I then restarted PostgreSQL and got the following error:
>
> 2011-04-08 09:54:34 EDT FATAL:  could not load server certificate file
> "server.crt": No such file or directory
> 2011-04-08 10:00:43 EDT FATAL:  could not load server certificate file
> "server.crt": No such file or directory
>
> I looked everywhere else in /var/lib/postgres/ & /etc/postgres/ but
> can't find anything else that's calling the old certificates. I
> changed the ownership on the certificates and symbolic links to either
> root or postgres and nothing worked. It fails to start with the
> following error:
>
>
> root@db1:/# /etc/init.d/postgresql start
> Starting PostgreSQL 8.4 database server: mainThe PostgreSQL server
> failed to start. Please check the log output: 2011-04-08 12:36:54 EDT
> FATAL: could not load server certificate file "server.crt": No such
> file or directory ... failed!
>
> I checked the documentation page:
>
> http://www.postgresql.org/docs/8.4/static/libpq-ssl.html
>
> Table 30-4. Libpq/Client SSL File Usage
>
> File                          Contents                 Effect
> ~/.postgresql/postgresql.crt  client certificate       requested by server
> ~/.postgresql/postgresql.key  client private key       proves client certificate sent by owner;
>                                                        does not indicate certificate owner is trustworthy
> ~/.postgresql/root.crt        trusted certificate      checks server certificate is signed by a
>                               authorities              trusted certificate authority
> ~/.postgresql/root.crl        certificates revoked by  server certificate must not be on this list
>                               certificate authorities
>
> Can anyone tell me what I'm doing wrong or missing here? I can't
> disable SSL per DoD requirements sadly.
>
> -Carlos
>

Hi,

When linking to the certificate and key you should specify the full path.

ln -s /etc/ssl/certs/db1_ssl.crt  /full/path/to/db1_ssl.crt
ln -s /etc/ssl/private/db1_ssl.key   /full/path/to/db1_ssl.key

HTH,

diego


Re: [GENERAL] PostgreSQL + FreeBSD memory configuration, and an issue

2011-04-08 Thread Gipsz Jakab
I think the main problem is the following: all of the users are
authenticated in PostgreSQL with the same username, and then there is the
thing that you've mentioned, the locks (I will talk with the programmer, or
create new users in PostgreSQL and modify the ODBC settings on the client
side). How can I set up a shorter time period for the idle time, or less
lock time?

regards,
Carl

2011/4/8 Merlin Moncure 

> On Fri, Apr 8, 2011 at 3:00 AM, Gipsz Jakab 
> wrote:
> > After the settings in the postgresql.conf our system is much faster, and
> no
> > more error messages in the postgres.log, but If I try to drop a table, or
> > add a new one, our system is stopping, until I kill the process, which is
> > dropping or adding a table.
>
> you are almost certainly waiting on a lock.  check pg_locks for
> ungranted locks and pg_stat_activity for idle/long running
> transactions.  If you ever find yourself having to bump
> max_connections again, it is almost certainly a good idea to
> investigate a connection pooler like pgbouncer.
>
> merlin
>


Re: [GENERAL] PostgreSQL + FreeBSD memory configuration, and an issue

2011-04-08 Thread Gipsz Jakab
Thanks Vick, I'll try it tonight. I will give 1024 shared_buffers and
maintenance_work_mem, and 102 MB of work_mem.

A question: I didn't use (it's marked with #) the effective_planner (or any
other planner method or config option). Is it OK if I turn it on with
that parameter: 1036MB?

DROP/ADD TABLE stuck: I realized that the number of locks is quite high;
what about these settings:

deadlock_timeout = 1s
max_locks_per_transaction = 64

is it ok? or is it too high?

Thanks,
Carl

2011/4/8 Vick Khera 

>
> On Fri, Apr 8, 2011 at 4:00 AM, Gipsz Jakab wrote:
>
>> My question is the following: if this is a dedicated database server, with
>> maximum 30 users (but they are using ODBC with Microsoft Access, and each of
>> them generating 4-6 connections at the same time), and other 200 people will
>> use this server through drupal, php, apache, not on a daily basis but weekly,
>> what is the ideal memory configuration?
>>
>
> if it is a dedicated DB server, then give shared memory  about 1/4 of the
> RAM, and perhaps a slightly larger maintenance work mem.  depending on your
> workload you may want to increase the checkpoint segments (if write-mostly,
> then add more segments).
>
> Here is what I use on my FreeBSD 8, Pg 9 big-memory servers.  these have
> 24GB or more of RAM and are attached to SSD external storage for the
> database:
>
> max_connections = 200
> shared_buffers = 5120MB
> work_mem = 512MB
> maintenance_work_mem = 1024MB
> max_stack_depth = 8MB
> vacuum_cost_delay = 15
> checkpoint_segments = 64
> checkpoint_timeout = 15min
> checkpoint_completion_target = 0.8
> random_page_cost = 1.0  # RAM disk. set equal seq_page_cost
> effective_cache_size = 6400MB   # shared_buffers + `sysctl -n
> vfs.hibufspace` / 8192 (BLKSZ)
>
>
> for individual complicated queries, you can increase the sort mem and work
> mem on a per-connection basis as needed.
>
>
>>
>> After the settings in the postgresql.conf our system is much faster, and
>> no more error messages in the postgres.log, but If I try to drop a table, or
>> add a new one, our system is stopping, until I kill the process, which is
>> dropping or adding a table.
>>
>
> Is something else using the table you want to drop and blocking the drop
> statement from taking the locks it needs?  It should be fairly instant.
>
>


Re: [GENERAL] PostgreSQL + FreeBSD memory configuration, and an issue

2011-04-08 Thread Vick Khera
On Fri, Apr 8, 2011 at 9:53 AM, Gipsz Jakab  wrote:

> Thanks Vick, I'll try it tonight. I will give 1024 shared_buffers and
> maintenance_work_mem, and 102 MB of work_mem.
>
> A question: I didn't use (it's marked with #) the effective_planner (or any
> other planner method or config option). Is it OK if I turn it on with
> that parameter: 1036MB?
>
>
the variables below are all the ones I change from default other than the
logging settings (I like more verbose logging).

I have no opinion or experiences on any other settings.


>
> DROP/ADD TABLE stuck: I realized that the number of locks is quite high; what
> about these settings:
>
>
I don't think it has to do with number of locks, but with actually waiting
for a lock.


> deadlock_timeout = 1s
> max_locks_per_transaction = 64
>
> is it ok? or is it too high?
>

That depends really on your application's needs.  I wouldn't change it
unless you get warnings that you are hitting this limit.


Please keep the message on-list.


[GENERAL] comma vs cross join question

2011-04-08 Thread Jason Long
I recently upgraded to JBoss AS 6.0.0.Final which includes a newer
version of Hibernate.

Previously the Postgres dialect was using a comma, but now it is using
cross join.

In order to do the migration I had to override the cross join operator
to a comma in Hibernate so it would generate the same query.

With the cross join this query never completes.  With the comma the
query is identical to what was there before and takes less than 300 ms.

The rest of the application seems fine, but this one query is a show
stopper.

I have attached the queries below for reference.  The only difference is
the use of cross join vs comma.


Do you think this is the right way to correct this or should I be
looking to tune Postgres to work when cross join is used? 


**

--Hibernate 3.6.0
select count(pipe0_.id) as col_0_0_,
   sum(pipe0_.numFeet) as col_1_0_,
   sum(pipecalc1_.nt) as col_2_0_,
   sum(pipecalc1_.mt) as col_3_0_,
   sum(pipe0_1_.numPieces) as col_4_0_,
   sum(pipecalc1_.wt100) as col_5_0_ 
from inventory.t_pipe pipe0_ 
inner join inventory.t_generic_item pipe0_1_ on pipe0_.id=pipe0_1_.id, 
public.v_pipe_calc
pipecalc1_ 
cross join state.t_state state4_ 
cross join property.t_status status5_ 
cross join state.t_po_pipe popipe6_ 
inner join state.t_state popipe6_1_ on popipe6_.id=popipe6_1_.id 
where
pipe0_.id=pipecalc1_.id and 
pipe0_1_.state_id=state4_.id and
state4_.status_id=status5_.id and 
pipe0_.poPipe_id=popipe6_.id and 
status5_.activeStatus=true and 
popipe6_1_.spec=true

--Hibernate 3.3.1
select count(pipe0_.id) as col_0_0_,
   sum(pipe0_.numFeet) as col_1_0_,
   sum(pipecalc1_.nt) as col_2_0_,
   sum(pipecalc1_.mt) as col_3_0_,
   sum(pipe0_1_.numPieces) as col_4_0_,
   sum(pipecalc1_.wt100) as col_5_0_ 
from inventory.t_pipe pipe0_ 
inner join inventory.t_generic_item pipe0_1_ on pipe0_.id=pipe0_1_.id, 
public.v_pipe_calc
pipecalc1_, 
state.t_state state4_, 
property.t_status
status5_, 
state.t_po_pipe
popipe6_ 
inner join state.t_state popipe6_1_ on popipe6_.id=popipe6_1_.id 
where pipe0_.id=pipecalc1_.id and
  pipe0_1_.state_id=state4_.id and
  state4_.status_id=status5_.id and
  pipe0_.poPipe_id=popipe6_.id and
  status5_.activeStatus=true and
  popipe6_1_.spec=true



[GENERAL] Changed SSL Certificates

2011-04-08 Thread Carlos Mennens
I had self signed SSL certificates on my database server but since
then removed them and received updated certificates from the security
team. I removed (backed up) the old server.crt & server.key and now
have db1_ssl.crt & db1_ssl.key in the identical location as the old
SSL certificates. I then went to /etc/postgres/8.4/main and removed
the old symbolic links for the old certificates and generated new
symbolic links:

ln -s /etc/ssl/certs/db1_ssl.crt db1_ssl.crt
ln -s /etc/ssl/private/db1_ssl.key db1_ssl.key

I then restarted PostgreSQL and got the following error:

2011-04-08 09:54:34 EDT FATAL:  could not load server certificate file
"server.crt": No such file or directory
2011-04-08 10:00:43 EDT FATAL:  could not load server certificate file
"server.crt": No such file or directory

I looked everywhere else in /var/lib/postgres/ & /etc/postgres/ but
can't find anything else that's calling the old certificates. I
changed the ownership on the certificates and symbolic links to either
root or postgres and nothing worked. It fails to start with the
following error:


root@db1:/# /etc/init.d/postgresql start
Starting PostgreSQL 8.4 database server: mainThe PostgreSQL server
failed to start. Please check the log output: 2011-04-08 12:36:54 EDT
FATAL: could not load server certificate file "server.crt": No such
file or directory ... failed!

I checked the documentation page:

http://www.postgresql.org/docs/8.4/static/libpq-ssl.html

Table 30-4. Libpq/Client SSL File Usage

File                          Contents                 Effect
~/.postgresql/postgresql.crt  client certificate       requested by server
~/.postgresql/postgresql.key  client private key       proves client certificate sent by owner;
                                                       does not indicate certificate owner is trustworthy
~/.postgresql/root.crt        trusted certificate      checks server certificate is signed by a
                              authorities              trusted certificate authority
~/.postgresql/root.crl        certificates revoked by  server certificate must not be on this list
                              certificate authorities

Can anyone tell me what I'm doing wrong or missing here? I can't
disable SSL per DoD requirements sadly.

-Carlos



Re: [GENERAL] Using Function returning setof record in JOIN

2011-04-08 Thread Pavel Stehule
2011/4/8 Merlin Moncure :
> On Fri, Apr 8, 2011 at 5:57 AM, gmb  wrote:
>> Hi
>>
>> Is it possible to do a JOIN between a VIEW and the output of a FUNCTION?
>
> yes.

yes, it is possible. I am just not sure whether the original question wasn't
aimed at the >>lateral<< feature.

Pavel



>
> merlin
>



Re: [GENERAL] why autocommit mode is slow?

2011-04-08 Thread Andrew Sullivan
On Thu, Apr 07, 2011 at 10:59:50PM +0200, Szymon Guz wrote:
> Hi,
> this is maybe a stupid question, but I don't know how to explain to my
> coworkers why performing many inserts in autocommit mode is so much slower
> than making all of them in one transaction.
> 
> Why is that so slow?

If you do BEGIN; [statement]; COMMIT; one after another, is that as
slow as autocommit?  (My bet is yes.)
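For comparison, the two patterns in question (table name is illustrative;
each autocommit statement is its own transaction and pays its own commit,
including a WAL flush, while the batch pays only one):

-- autocommit: every statement commits individually
INSERT INTO t VALUES (1);
INSERT INTO t VALUES (2);

-- explicit transaction: one commit for the whole batch
BEGIN;
INSERT INTO t VALUES (1);
INSERT INTO t VALUES (2);
COMMIT;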

A

-- 
Andrew Sullivan
a...@crankycanuck.ca



[GENERAL] Surge 2011 CFP Deadline Extended

2011-04-08 Thread Katherine Jeschke
OmniTI is pleased to announce that the CFP deadline for Surge 2011, the
Scalability and Performance Conference, (Baltimore: Sept 28-30, 2011) has
been extended to 23:59:59 EDT, April 17, 2011. The event focuses upon case
studies that demonstrate successes (and failures) in Web applications and
Internet architectures. New this year: Hack Day and Unconference on
September 28th.

For information about topics: http://omniti.com/surge/2011. Get inspired by
the 2010 sessions, now online at (http://omniti.com/surge/2010)

2010 attendees compared Surge to the early days of Velocity, and our
speakers received 3.5-4 out of 4 stars for quality of presentation and
quality of content! Nearly 90% of first-year attendees are planning to come
again in 2011.

For more information about the CFP or sponsorship of the event, please
contact us: surge (AT) omniti (DOT) com.


-- 
Katherine Jeschke
Marketing Director
OmniTI Computer Consulting, Inc.
7070 Samuel Morse Drive, Ste.150
Columbia, MD 21046
O: 410/872-4910, 222
C: 443/643-6140
omniti.com
circonus.com


Re: [GENERAL] Using Function returning setof record in JOIN

2011-04-08 Thread Merlin Moncure
On Fri, Apr 8, 2011 at 5:57 AM, gmb  wrote:
> Hi
>
> Is it possible to do a JOIN between a VIEW and the output of a FUNCTION?

yes.

merlin



Re: [GENERAL] Using Function returning setof record in JOIN

2011-04-08 Thread David Johnston
I do not know the answer, but it isn't that difficult to use trial and error
to check whether the TWO most logical forms work, and then to ask
for further assistance if they do not.  Just pretend you have a view with
the same name as your function (though you will need to add the
parentheses) and write the queries normally.

As a hint, the second form uses an inline view definition [ ... FROM ( SELECT
* FROM relation ) alias ... ]

This presumes you know how to do normal joins (i.e., between two tables or
views).  If you do not, then you will find the documentation to be of great
and timely value.

You should find that both versions work; the "inline view" form most
definitely will, whereas the "direct" form should, but I haven't ever
attempted to use that form before so I cannot say for certain.
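Untested, but with the names from your post the two forms would look
something like this:

-- direct form
SELECT f._acc, f._order, f._date, f._calc_amount, o._client, o._deladdress
FROM   myfunc('AAA1') AS f
JOIN   orders o ON o._orderno = f._order;

-- inline view form
SELECT f._acc, f._order, f._date, f._calc_amount, o._client, o._deladdress
FROM   (SELECT * FROM myfunc('AAA1')) AS f
JOIN   orders o ON o._orderno = f._order;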

David J.


-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of gmb
Sent: Friday, April 08, 2011 6:57 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Using Function returning setof record in JOIN

Hi 

Is it possible to do a JOIN between a VIEW and the output of a FUNCTION?
e.g.
I have a function returning a SETOF records (using OUT parameters) with the
following output:

testdb=# SELECT * FROM myfunc( 'AAA1' ); -- returns calculcated values for
all orders for account 'AAA1'

 _acc | _order  |   _date    | _calc_amount
------+---------+------------+--------------
 AAA1 | ORDER_1 | 2010-12-13 |      1000.00
 AAA1 | ORDER_2 | 2010-12-13 |        80.00
 AAA1 | ORDER_5 | 2010-12-13 |        10.00

(the example is oversimplified - _calc_amount is one of many calculated
values returned by the function)

I also have a VIEW returning the following:

testdb=# SELECT _accno, _client, _deladdress, _orderno FROM orders; 

 _accno | _client | _deladdress | _orderno | more order related data
--------+---------+-------------+----------+--------------------------
 AAA1   | JOHN    | SMITH STR   | ORDER_1  |
 AAA1   | JOHN    | MAIN STR    | ORDER_2  |
 AAA1   | JOHN    | PARK RD     | ORDER_5  |
 CCC1   | CHARLIE | 2ND STR     | ORDER_3  |
 BBB1   | BENN    | 5TH AVE     | ORDER_4  |

I want to do a JOIN resulting in:

 _acc | _order  |   _date    | _amount | _client | _deladdress | more order related data
------+---------+------------+---------+---------+-------------+-------------------------
 AAA1 | ORDER_1 | 2010-12-13 | 1000.00 | JOHN    | SMITH STR   |
 AAA1 | ORDER_2 | 2010-12-13 |   80.00 | JOHN    | MAIN STR    |
 AAA1 | ORDER_5 | 2010-12-13 |   10.00 | JOHN    | PARK RD     |

Hope this is possible.

Thanks in advance.





Re: [GENERAL] why autocommit mode is slow?

2011-04-08 Thread Simon Riggs
On Thu, Apr 7, 2011 at 9:59 PM, Szymon Guz  wrote:

> this is maybe a stupid question, but I don't know how to explain to my
> coworkers why performing many inserts in autocommit mode is so much slower
> than making all of them in one transaction.
> Why is that so slow?

There's a performance bug that slows things down more than necessary.

I've worked out a solution that we can use to improve that.

Not sure that we should backpatch it though.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [GENERAL] cast list of oid

2011-04-08 Thread Pavel Stehule
Hello

2011/4/8 salah jubeh :
> is it possible to cast a list of oids, i.e. something like below? Or do I
> need to write a procedure?
>
> SELECT groname, grolist::regclass::text FROM pg_group;
>

What is a list? Is it an array?

You can use unnest() and array():

postgres=# select array(select unnest(array[25,26,27]::oid[])::regtype);
    ?column?
----------------
 {text,oid,tid}
(1 row)
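
Applied to your query: grolist in the pg_group view holds role OIDs, not
relation OIDs, so regclass is probably not the cast you want. A sketch that
resolves the members to role names instead (assuming that is the goal):

SELECT groname,
       array(SELECT rolname
             FROM pg_roles
             WHERE oid = ANY (grolist)) AS members
FROM pg_group;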


Regards

Pavel Stehule

>
>  Regards
>
> Best Regard
>
> Eng. Salah Al Jubeh
> Palestine Polytechnic University
> College of Applied Science
> Computer Science
> P.O. Box 198
> Mobile:++97259369122
> Tel:++9754680
>
>
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL backend process high memory usage issue

2011-04-08 Thread Merlin Moncure
On Fri, Apr 8, 2011 at 7:43 AM, Shianmiin  wrote:
> Hi Merlin,
>
> I revised the test code with the attached files and used pgbench to send the
> test queries.
>
> http://postgresql.1045698.n5.nabble.com/file/n4290723/dotest dotest
> http://postgresql.1045698.n5.nabble.com/file/n4290723/initialize.sql
> initialize.sql
> http://postgresql.1045698.n5.nabble.com/file/n4290723/test.sql test.sql

Not to beat on your test some more, but spamming count(*) is probably
not a good real world indicator of application performance (but it
might be).  I'm assuming your memory usage problem has gone away?

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL backend process high memory usage issue

2011-04-08 Thread Shianmiin
Thanks. Probably, but that's not the point here.
The issue here is how the PostgreSQL backend process uses memory, and I wonder
if there is any way to configure it.

--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/PostgreSQL-backend-process-high-memory-usage-issue-tp4289407p4289550.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL + FreeBSD memory configuration, and an issue

2011-04-08 Thread Merlin Moncure
On Fri, Apr 8, 2011 at 3:00 AM, Gipsz Jakab  wrote:
> After the settings in postgresql.conf our system is much faster and there are
> no more error messages in the postgres.log, but if I try to drop a table or
> add a new one, our system hangs until I kill the process that is dropping or
> adding the table.

you are almost certainly waiting on a lock.  check pg_locks for
ungranted locks and pg_stat_activity for idle/long running
transactions.  If you ever find yourself having to bump
max_connections again, it is almost certainly a good idea to
investigate a connection pooler like pgbouncer.
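
a minimal sketch of that check (column names per the pre-9.2 catalogs,
where pg_stat_activity still has procpid and current_query):

SELECT l.locktype, l.relation::regclass, l.pid, a.current_query
FROM pg_locks AS l
JOIN pg_stat_activity AS a ON a.procpid = l.pid
WHERE NOT l.granted;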

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL backend process high memory usage issue

2011-04-08 Thread Shianmiin
If we go with the single-db-multiple-schema model, either our data access
layer will need to ensure that all database objects are qualified with the
proper schema name, or, with PostgreSQL, we can just change the search path
when the connection is passed from the pool to the app code. Another model
under evaluation is the single-db-single-schema model, which would need to do
what you described.
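
For reference, the search path switch amounts to a single statement when the
pool hands a connection to the app code (tenant_0042 is a hypothetical schema
name):

SET search_path TO tenant_0042, public;
-- ... run the tenant's queries ...
RESET search_path;  -- before the connection goes back to the pool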

--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/PostgreSQL-backend-process-high-memory-usage-issue-tp4289407p4289639.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL backend process high memory usage issue

2011-04-08 Thread Shianmiin
Good point. Thanks.

The tests we did in house are all from the client side and definitely not in a
single transaction. I just found that this simplified test case can reproduce
the same memory usage issue and didn't pay too much attention to it.

If we repeatedly do smaller batches, we can still see the backend memory
usage growing (and it is stable after all the views have been selected).

I hope this is enough for looking into the issue. It would take me a while
since I am relatively new to PostgreSQL/Linux. That said, I could try to
rewrite the testing code in a more convincing way if it's really needed. :)

--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/PostgreSQL-backend-process-high-memory-usage-issue-tp4289407p4289630.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] cast list of oid

2011-04-08 Thread salah jubeh
Is it possible to cast a list of OIDs, i.e., something like below? Or do I
need to write a procedure?

SELECT groname, grolist::regclass::text FROM pg_group;



 Regards


Best Regard
 
Eng. Salah Al Jubeh

Palestine Polytechnic University
College of Applied Science
Computer Science
P.O. Box 198

Mobile:++97259369122
Tel:++9754680

[GENERAL] Index scan vs table inheritance

2011-04-08 Thread Artem Shpynov aka FYR
Hi All,

I am running PostgreSQL 8.3.4 and have the following problem.

I have a hierarchy of tables:

A master table (empty; it has no data, indexes, or anything else). Generally
it is empty, but in production it may have some data or indexes, and I have to
select from it for backward compatibility.

Child tables inherit from master and have the 'time' field indexed. There are
about 1 million real rows there.
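
A minimal sketch of the layout (columns other than 'time' are assumed;
child_time_index matches the plans below):

CREATE TABLE master (
    id      integer,
    time    timestamptz,
    payload text
);
CREATE TABLE child () INHERITS (master);
CREATE INDEX child_time_index ON child (time);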

During execution of the query:

SELECT * FROM master ORDER BY time LIMIT 100

I see a sequential scan of both the master and child tables:

EXPLAIN SELECT * FROM master ORDER BY time LIMIT 100;
                                   QUERY PLAN
---------------------------------------------------------------------------------
 Limit  (cost=76715.88..76716.13 rows=100 width=1374)
   ->  Sort  (cost=76715.88..79511.69 rows=1118326 width=1374)
         Sort Key: public.master.time
         ->  Result  (cost=0.00..33974.26 rows=1118326 width=1374)
               ->  Append  (cost=0.00..33974.26 rows=1118326 width=1374)
                     ->  Seq Scan on master  (cost=0.00..10.50 rows=50 width=1374)
                   * ->  Seq Scan on child master  (cost=0.00..33963.76 rows=1118276 width=1374) *

But if I query the child directly:

SELECT * FROM child ORDER BY time LIMIT 100

or use a UNION clause:

((SELECT * FROM ONLY master ORDER BY time LIMIT 100 )
UNION ALL
( SELECT * FROM child ORDER BY time LIMIT 100 ) )
ORDER BY time LIMIT 100;

I see an index scan on child as expected and a sequential scan on master
(which is fine, since there is no data):

# EXPLAIN ((SELECT * FROM ONLY master ORDER BY time LIMIT 100 )
UNION ALL
( SELECT * FROM child ORDER BY time LIMIT 100 ) )
ORDER BY time LIMIT 100;
                                   QUERY PLAN
---------------------------------------------------------------------------------
 Limit  (cost=170.42..170.67 rows=100 width=1374)
   ->  Sort  (cost=170.42..170.80 rows=150 width=1374)
         Sort Key: master.time
         ->  Append  (cost=11.91..165.00 rows=150 width=1374)
               ->  Limit  (cost=11.91..12.04 rows=50 width=1374)
                     ->  Sort  (cost=11.91..12.04 rows=50 width=1374)
                           Sort Key: master.time
                           ->  Seq Scan on master  (cost=0.00..10.50 rows=50 width=1374)
               ->  Limit  (cost=0.00..151.47 rows=100 width=1374)
                   * ->  Index Scan using child_time_index on child  (cost=0.00..1693818.51 rows=1118276 width=1374) *

The question is: why is an index scan not used for the general query against
master?

I tried VACUUM ANALYZE and REINDEX on both tables, and created an index on
'time' in the 'master' table, but nothing changed.

Is this a known issue that requires migrating up to 9.0.3, or is it the
planner mis-using statistics in a way that can somehow be reconfigured?


Thanks in advance,
-
BR,

Artem Shpynov aka FYR.


Re: [GENERAL] PostgreSQL backend process high memory usage issue

2011-04-08 Thread Shianmiin
Hi Merlin,

I revised the test code with the attached files and used pgbench to send the
test queries.

http://postgresql.1045698.n5.nabble.com/file/n4290723/dotest dotest 
http://postgresql.1045698.n5.nabble.com/file/n4290723/initialize.sql
initialize.sql 
http://postgresql.1045698.n5.nabble.com/file/n4290723/test.sql test.sql 

Samuel

--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/PostgreSQL-backend-process-high-memory-usage-issue-tp4289407p4290723.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Using Function returning setof record in JOIN

2011-04-08 Thread gmb
Hi 

Is it possible to do a JOIN between a VIEW and the output of a FUNCTION?
e.g.
I have a function returning a SETOF records (using OUT parameters) with the 
following output:

testdb=# SELECT * FROM myfunc( 'AAA1' ); -- returns calculated values for all
orders for account 'AAA1'

 _acc | _order  | _date      | _calc_amount
------+---------+------------+--------------
 AAA1 | ORDER_1 | 2010-12-13 |      1000.00
 AAA1 | ORDER_2 | 2010-12-13 |        80.00
 AAA1 | ORDER_5 | 2010-12-13 |        10.00
(the example is oversimplified - _calc_amount is one of many calculated values
returned by the function)

I also have a VIEW returning the following:

testdb=# SELECT _accno, _client, _deladdress, _orderno FROM orders; 

 _accno | _client | _deladdress | _orderno | more order related data
--------+---------+-------------+----------+--------------------------
 AAA1   | JOHN    | SMITH STR   | ORDER_1  |
 AAA1   | JOHN    | MAIN STR    | ORDER_2  |
 AAA1   | JOHN    | PARK RD     | ORDER_5  |
 CCC1   | CHARLIE | 2ND STR     | ORDER_3  |
 BBB1   | BENN    | 5TH AVE     | ORDER_4  |

I want to do a JOIN resulting in:

 _acc | _order  | _date      | _amount | _client | _deladdress | more order related data
------+---------+------------+---------+---------+-------------+-------------------------
 AAA1 | ORDER_1 | 2010-12-13 | 1000.00 | JOHN    | SMITH STR   |
 AAA1 | ORDER_2 | 2010-12-13 |   80.00 | JOHN    | MAIN STR    |
 AAA1 | ORDER_5 | 2010-12-13 |   10.00 | JOHN    | PARK RD     |

Hope this is possible.

Thanks in advance.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL + FreeBSD memory configuration, and an issue

2011-04-08 Thread Vick Khera
On Fri, Apr 8, 2011 at 4:00 AM, Gipsz Jakab  wrote:

> My question is the following: if this is a dedicated database server, with
> maximum 30 users (but they are using ODBC with Microsoft Access, and each of
> them generating 4-6 connections at the same time), and other 200 people will
> use this server through drupal, php, and apache, not on a daily basis but
> weekly,
> what is the ideal memory configuration?
>

if it is a dedicated DB server, then give shared memory  about 1/4 of the
RAM, and perhaps a slightly larger maintenance work mem.  depending on your
workload you may want to increase the checkpoint segments (if write-mostly,
then add more segments).

Here is what I use on my FreeBSD 8, Pg 9 big-memory servers.  these have
24GB or more of RAM and are attached to SSD external storage for the
database:

max_connections = 200
shared_buffers = 5120MB
work_mem = 512MB
maintenance_work_mem = 1024MB
max_stack_depth = 8MB
vacuum_cost_delay = 15
checkpoint_segments = 64
checkpoint_timeout = 15min
checkpoint_completion_target = 0.8
random_page_cost = 1.0  # RAM disk. set equal seq_page_cost
effective_cache_size = 6400MB   # shared_buffers + `sysctl -n vfs.hibufspace` / 8192 (BLKSZ)


for individual complicated queries, you can increase the sort mem and work
mem on a per-connection basis as needed.
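
e.g., for one session only (the value here is just an illustration):

SET work_mem = '1GB';
-- ... run the big sort or hash join ...
RESET work_mem;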


>
> After the settings in postgresql.conf our system is much faster and there are
> no more error messages in the postgres.log, but if I try to drop a table or
> add a new one, our system hangs until I kill the process that is dropping or
> adding the table.
>

Is something else using the table you want to drop and blocking the drop
statement from taking the locks it needs?  It should be fairly instant.


Re: [GENERAL] why autocommit mode is slow?

2011-04-08 Thread Vick Khera
On Thu, Apr 7, 2011 at 4:59 PM, Szymon Guz  wrote:

> this is maybe a stupid question, but I don't know how to explain to my
> coworkers why performing many inserts in autocommit mode is so much slower
> than making all of them in one transaction.


as others have said, there is overhead in each transaction.  the biggest of
these is the file sync to disk of the write-ahead log.  on spinning media
this involves on average one rotation of the platter.  when you batch the
inserts, you save that sync per insert.
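
a tiny illustration (table t is hypothetical):

-- autocommit: every statement is its own transaction, so every
-- statement waits for its own WAL fsync
INSERT INTO t (n) VALUES (1);
INSERT INTO t (n) VALUES (2);

-- batched: the fsync is paid once, at COMMIT
BEGIN;
INSERT INTO t (n) VALUES (1);
INSERT INTO t (n) VALUES (2);
-- ... thousands more ...
COMMIT;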


[GENERAL] PostgreSQL + FreeBSD memory configuration, and an issue

2011-04-08 Thread Gipsz Jakab
Hi,

I've got an IBM x3200 server with a 1.8 GHz dual-core processor and 4 GB RAM.
I've installed FreeBSD 8.2, PostgreSQL 9.0.3, and Apache22 with PHP 5.3.5 and
extensions for postgres, session, pdf, and others.

After the install, I received a lot of "too many users" errors in the
postgresql.log, and after that I reconfigured postgresql.conf with some
parameters:

max_connections = 200
shared_buffers = 512MB
work_mem = 1MB
maintenance_work_mem = 128MB
checkpoint_segments = 32
checkpoint_timeout = 10min
deadlock_timeout = 1s
max_locks_per_transaction = 64

I saw in the PostgreSQL manual that I have to reconfigure the kernel with
these parameters in sysctl.conf:

kern.ipc.shmall=262144
kern.ipc.shmmax=1073741824
kern.ipc.semmap=256

and loader.conf:
kern.ipc.semmni=256
kern.ipc.semmns=512
kern.ipc.semmnu=256

My question is the following: if this is a dedicated database server, with a
maximum of 30 users (but they are using ODBC with Microsoft Access, and each of
them generating 4-6 connections at the same time), and other 200 people will
use this server through drupal, php, and apache, not on a daily basis but
weekly, what is the ideal memory configuration?

After the settings in postgresql.conf our system is much faster and there are
no more error messages in the postgres.log, but if I try to drop a table or
add a new one, our system hangs until I kill the process that is dropping or
adding the table.

Thank you in advance, the config files can be found here:
http://kissceg.sitefarm.hu/conf_files.tar
regards,
Carl


Re: [GENERAL] why autocommit mode is slow?

2011-04-08 Thread Szymon Guz
On 8 April 2011 05:06, Scott Marlowe  wrote:

> On Thu, Apr 7, 2011 at 2:59 PM, Szymon Guz  wrote:
> > Hi,
> > this is maybe a stupid question, but I don't know how to explain to my
> > coworkers why performing many inserts in autocommit mode is so much slower
> > than making all of them in one transaction.
> > Why is that so slow?
>
> The real issue is that transactions have fairly high overhead.  When
> in autocommit mode, the cost of the transaction is much higher than
> the individual insert, so it's relatively slow.  OTOH, when inserting
> a dozen or a hundred or a thousand rows, the transactional overhead to
> build up and tear down the transaction becomes smaller and smaller in
> comparison to the inserts.  The inserts in each instance cost the same
> / take just as long, but the transactional wrapping is only paid for
> once in the large transaction scenario, and it's paid every time in
> the autocommit.
>
> The good news is postgresql has no real practical limit to transaction
> size, and the theoretical limit is VERY large (like 2B or so
> statements I believe.)  So no error about running out of rollback
> space etc.
>


Thank you all for clarification.

regards
Szymon