[GENERAL] Fwd: createdb but revoke dropdb

2010-03-03 Thread Ben Eliott
Sleep often seems a better problem solver than thinking. Create  
databases ahead of time and assign at the appropriate time.


Begin forwarded message:


From: Ben Eliott ben.apperr...@googlemail.com
Date: 2 March 2010 18:22:17 GMT
To: pgsql-general@postgresql.org
Subject: createdb but revoke dropdb

Hi,
I'm using 8.3. I'm trying to set up programmatic database creation,  
but is there a way that the user creating the databases can be  
restricted from dropping them?


I have two roles, 'adminuser' with createdb permission, and 'dbuser'  
a user with CRUD privileges.


adminuser is a member of the dbuser role; this seems to allow  
adminuser to create databases for dbuser with:

createdb -U adminuser -O dbuser  new_database_name
Adding .pgpass to the linux user's home directory allows createdb to  
work without additional user input.


But now it seems the linux user also has dropdb privileges. How can  
I restrict this?
Perhaps there is a recommended method to disable dropdb? Can anyone  
suggest one?


The adminuser has no login privileges, so removing dropdb should  
remove the possibility of any hacker chaos other than the creation  
of more databases?


Thanks in advance for any advice,
Ben





Re: [GENERAL] createdb but revoke dropdb

2010-03-03 Thread Richard Huxton

On 02/03/10 18:22, Ben Eliott wrote:

I have two roles, 'adminuser' with createdb permission, and 'dbuser' a
user with CRUD privileges.

adminuser is a member of the dbuser role; this seems to allow adminuser
to create databases for dbuser with:
createdb -U adminuser -O dbuser new_database_name
Adding .pgpass to the linux user's home directory allows createdb to
work without additional user input.

But now it seems the linux user also has dropdb privileges. How can I
restrict this?
Perhaps there is a recommended method to disable dropdb? Can anyone
suggest?


From the SQL reference page for GRANT:
The right to drop an object, or to alter its definition in any way, is 
not treated as a grantable privilege; it is inherent in the owner, and 
cannot be granted or revoked. (However, a similar effect can be obtained 
by granting or revoking membership in the role that owns the object; see 
below.) The owner implicitly has all grant options for the object, too.


Don't make dbuser the owner of the database; make adminuser the 
owner, then grant whatever top-level privileges dbuser needs. Make sure 
you don't have adminuser as an automatic login through .pgpass.
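
For the archives, a minimal sketch of that arrangement (role and database 
names are the OP's; the table name is hypothetical):

CREATE ROLE adminuser LOGIN CREATEDB;  -- creates and owns the databases
CREATE ROLE dbuser LOGIN;              -- application user, never the owner
-- from the shell: createdb -U adminuser new_database_name
-- then, connected to new_database_name:
GRANT CONNECT ON DATABASE new_database_name TO dbuser;
GRANT USAGE ON SCHEMA public TO dbuser;
GRANT SELECT, INSERT, UPDATE, DELETE ON some_table TO dbuser;

Since dbuser never owns the database, it has no way to drop it.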



The adminuser has no login privileges so by removing dropdb this should
remove the possibility for any hacker chaos other than creating more
databases?


Or deleting/modifying all your data, presumably. If you don't trust the 
linux user account, don't give it automatic login.


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] FSM and VM file

2010-03-03 Thread Richard Huxton

On 02/03/10 21:47, akp geek wrote:

I am doing replication of one of the databases, and the size of the slave
database is growing exponentially. Right now the size of the master db is 849M
and the slave is 7GB.



my master is 8.3 and slave is 8.4


I'm guessing your slave isn't being vacuumed. Or, perhaps you have a 
long-running transaction on the slave that is preventing dead rows from 
being cleaned up.


Two useful commands:

  vacuum verbose mytable;

This will show you how many rows/pages can/should be cleaned up.

  SELECT pg_size_pretty( pg_total_relation_size('mytable') );

This will show you the size of mytable (formatted nicely).

--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] createdb but revoke dropdb

2010-03-03 Thread Ben Eliott

Hi,

Thank you for coming back, and for your advice. I understand what you  
mean. However, in order to run the script without additional user  
input, .pgpass is always needed. One way or another, whichever way I  
try to twist this, something has to give on security. Perhaps it  
would be just about ok-ish if I could restrict the linux user to just  
creating databases, but the privilege to add a database means the  
privilege to drop them too. And ok-ish isn't great either.


So, rather than fight this, I think perhaps another approach:  
pre-prepare sets of databases ahead of time and then, rather than  
create them programmatically, just assign them programmatically  
instead. It doesn't exactly solve the original problem, but I think I  
prefer it from a security standpoint anyhow.


Ben

On 3 Mar 2010, at 09:17, Richard Huxton wrote:


On 02/03/10 18:22, Ben Eliott wrote:
I have two roles, 'adminuser' with createdb permission, and 'dbuser' a
user with CRUD privileges.

adminuser is a member of the dbuser role; this seems to allow adminuser
to create databases for dbuser with:
createdb -U adminuser -O dbuser new_database_name
Adding .pgpass to the linux user's home directory allows createdb to
work without additional user input.

But now it seems the linux user also has dropdb privileges. How can I
restrict this?
Perhaps there is a recommended method to disable dropdb? Can anyone
suggest?


From the SQL reference page for GRANT:
The right to drop an object, or to alter its definition in any way,  
is not treated as a grantable privilege; it is inherent in the  
owner, and cannot be granted or revoked. (However, a similar effect  
can be obtained by granting or revoking membership in the role that  
owns the object; see below.) The owner implicitly has all grant  
options for the object, too.


Don't make dbuser the owner of the database; make adminuser the  
owner, then grant whatever top-level privileges dbuser needs. Make  
sure you don't have adminuser as an automatic login through .pgpass.


The adminuser has no login privileges so by removing dropdb this  
should

remove the possibility for any hacker chaos other than creating more
databases?


Or deleting/modifying all your data, presumably. If you don't trust  
the linux user account, don't give it automatic login.


--
 Richard Huxton
 Archonet Ltd





Re: [GENERAL] How to grant a user read-only access to a database?

2010-03-03 Thread Nilesh Govindarajan
On Tue, Mar 2, 2010 at 8:26 PM, Thom Brown thombr...@gmail.com wrote:

 On 2 March 2010 14:49, Antonio Goméz Soto antonio.gomez.s...@gmail.com
 wrote:
  On 02-03-10 13:00, Thom Brown wrote:
 
  On 2 March 2010 11:46, Nilesh Govindarajan li...@itech7.com wrote:
 
  On Tue, Mar 2, 2010 at 4:57 PM, Thom Brown thombr...@gmail.com
  wrote:
 
  On 2 March 2010 11:12, Antonio Goméz Soto
 antonio.gomez.s...@gmail.com
  wrote:
 
  Hi,
 
  I tried this:
 
  names=# grant select on database names to spice;
  ERROR:  invalid privilege type SELECT for database
 
  The documentation seems to imply I need to grant SELECT
  to each table separately. That's a lot of work, and what if
  new tables are created?
 
  Thanks,
  Antonio
 
 
 
  The privileges you can grant on a database are only related to the
  creation of tables and connecting to that database.
 
  You could create a role which has SELECT-only access, apply that role
  to all your tables, and assign users (other roles) as members of that
  role.
 
  Regards
 
  Thom
 
 
  How to create that ? I'm also interested in this as I need this for
  backing
  up my databases.
 
  --
 
  Okay, here's an example:
 
  CREATE ROLE readonly; -- This user won't be able to do anything by
  default, not even log in
 
  GRANT SELECT on table_a TO readonly;
  GRANT SELECT on table_b TO readonly;
  GRANT SELECT on table_c TO readonly;
 
  CREATE ROLE testuser WITH LOGIN; -- At this point we haven't assigned
  this user to any group
 
  SET ROLE testuser;
  SELECT * FROM table_a;
 
  We get:
  ERROR:  permission denied for relation table_a
 
  SET ROLE postgres;
 
  DROP ROLE testuser;
  CREATE ROLE testuser WITH LOGIN IN ROLE readonly;
 
  SET ROLE testuser;
  SELECT * FROM table_a;
 
  This would then return the results from table_a
 
  Regards
 
  Thom
 
  But I still need to define access to each table separately?
 
  Thanks,
  Antonio.
 

 As far as I'm aware.  It's only in the upcoming version 9.0 that you
 can do things like:

 GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly;

 Other folk on here may have some alternative suggestions though.

 Thom


Eagerly waiting for 9.0

-- 
Nilesh Govindarajan
Site & Server Administrator
www.itech7.com


Re: [GENERAL] FSM and VM file

2010-03-03 Thread akp geek
Thank you all for the suggestions. I did a vacuum and the size has gone down
drastically. But it is still not the same size as my master. I am looking
into it.

Appreciate your help

On Wed, Mar 3, 2010 at 4:41 AM, Richard Huxton d...@archonet.com wrote:

 On 02/03/10 21:47, akp geek wrote:

 I am doing a replication of one of the database and the size of the slave
 database is growing exponentially . Right now the size of master db is
 849M
 and the slave is 7GB.


  my master is 8.3 and slave is 8.4


 I'm guessing your slave isn't being vacuumed. Or, perhaps you have a
 long-running transaction on the slave that is preventing dead rows from
 being cleaned up.

 Two useful commands:

  vacuum verbose mytable;

 This will show you how many rows/pages can/should be cleaned up.

  SELECT pg_size_pretty( pg_total_relation_size('mytable') );

 This will show you the size of mytable (formatted nicely).

 --
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] FSM and VM file

2010-03-03 Thread Richard Huxton

On 03/03/10 13:32, akp geek wrote:

Thank you all for the suggestions. I did a vacuum and the size has gone down
drastically. But still it is not the same size as my master. I am looking
into it


Check your autovacuum settings and you should at least be able to keep 
things stable.


You might need to cluster tables / restart the replication to get back to 
the best case. Vacuuming needs to be a continual process.


--
  Richard Huxton
  Archonet Ltd



[GENERAL] stopping processes, preventing connections

2010-03-03 Thread Herouth Maoz
Hi.

I'm continuing on with the problems I have in our reports/data warehouse 
system. Basically, the system brings in tables from our various production 
systems (sybase, postgresql, mssql, different servers) every night. Some tables 
are brought in whole, and some are brought in based on a date field, and only 
the relevant interval is imported.

For tables which are brought in whole, I first truncate the local table, then copy 
in the up-to-date data. For the ones that are brought in partially, I first delete 
the relevant interval, and then copy in the same way.

The trouble is that sometimes there is a stray select which has been initiated 
and then abandoned (without cancellation) by the crystal reports system. When 
these queries happen to last into the night, they lock some of the tables which 
are supposed to be truncated. Then the whole process hangs until the query 
quits or dies, which, we have seen in the past, can take several hours 
sometimes.

What I want to do is write a script that kills any queries or connections from 
the crystal system, and then prevents new queries from being run until I 
finish loading all the tables, at which point I want to allow queries again.

First, the easy part - regarding allowing/disallowing queries. Is it possible 
to GRANT or REVOKE access to tables based on the originating IP?

Second, and the more complicated one - what do I do about rogue queries that 
are running when my process starts? Today we had a query that had been running 
since yesterday. I called pg_cancel_backend() on it several times and waited for 
almost two hours - to no avail. Eventually I had to ask our sysadmin to 
shutdown PostgreSQL, which took some five minutes, but eventually worked. Is 
there a way to do the same thing to a single process without shutting down the 
whole server, and without causing any harm to the database or memory 
corruption? Something I can call from within SQL? I run the nightly script from 
a linux user which is not postgres, so I'd prefer a way that doesn't require 
using kill.

Thank you,
Herouth Maoz

Re: [GENERAL] FSM and VM file

2010-03-03 Thread Devrim GÜNDÜZ
On Wed, 2010-03-03 at 08:32 -0500, akp geek wrote:
 Thank you all for the suggestions. I did a vacuum and the size has
 gone down drastically. But still it is not the same size as my master.
 I am looking into it 

Plain vacuum does not shrink the on-disk size of relations; it just marks
dead space as reusable. If you haven't vacuumed for a long time, it
probably means that you have lots of space to be reused.
-- 
Devrim GÜNDÜZ
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
devrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org  Twitter: http://twitter.com/devrimgunduz




Re: [GENERAL] How to grant a user read-only access to a database?

2010-03-03 Thread Merlin Moncure
On Tue, Mar 2, 2010 at 9:56 AM, Thom Brown thombr...@gmail.com wrote:
 As far as I'm aware.  It's only in the upcoming version 9.0 that you
 can do things like:

 GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly;

 Other folk on here may have some alternative suggestions though.

9.0 will also have the hot standby feature.  setting up a standby is
pretty much always a good idea, and access to the standby is
automatically read only.  this would be a cheap way to get what you
want without dealing with privileges, which is nice.  you are also
relatively insulated from problematic queries the user might make, like
accidental unconstrained joins, full table sorts, etc.

merlin



Re: [GENERAL] FSM and VM file

2010-03-03 Thread akp geek
I just made changes to postgresql.conf to make sure autovacuum is turned
on, and I will update you. Again, thanks for your time.

Regards

2010/3/3 Devrim GÜNDÜZ dev...@gunduz.org

 On Wed, 2010-03-03 at 08:32 -0500, akp geek wrote:
  Thank you all for the suggestions. I did a vacuum and the size has
  gone down drastically. But still it is not the same size as my master.
  I am looking into it

 Plain vacuum does not shrink the on-disk size of relations; it just marks
 dead space as reusable. If you haven't vacuumed for a long time, it
 probably means that you have lots of space to be reused.
 --
 Devrim GÜNDÜZ
 PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
 devrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
 http://www.gunduz.org  Twitter: http://twitter.com/devrimgunduz



[GENERAL] finding duplicate numbers in a select distinct statement

2010-03-03 Thread Terry
Hello all,

I am trying to do the following query but I would like to know how
many rows were deduplicated in the process:

psql -d foo -c 'SELECT DISTINCT ON (error) error,ev_text FROM
clients_event_log' -o fullfoo


For example, I would want the output to be the following where count
is the number of error rows that were deduplicated.

 count |  error | ev_text
    55 | 525152 | Some text
    72 | 125124 | Some other text

I imagine it uses the count function somewhere; I'm just not sure how
to embed it.



Re: [GENERAL] How to grant a user read-only access to a database?

2010-03-03 Thread Kevin Kempter
On Wednesday 03 March 2010 07:29:21 am Merlin Moncure wrote:
 On Tue, Mar 2, 2010 at 9:56 AM, Thom Brown thombr...@gmail.com wrote:
  As far as I'm aware.  It's only in the upcoming version 9.0 that you
  can do things like:
  
  GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly;
  
  Other folk on here may have some alternative suggestions though.
 
 9.0 will also have the hot standby feature.  setting up a standby is
 pretty much always a good idea and access to the standby is
 automatically read only.  this would be a cheap way to get what you
 want without dealing with privileges which is nice.  you are also
 relatively insulated from problematic queries the user might make like
 accidental unconstrained joins, full table sorts etc..
 
 merlin


I believe all you have to do is this to create a read only user:

create user ro_user with password 'passwd';

alter user ro_user set default_transaction_read_only = true;






Re: [GENERAL] finding duplicate numbers in a select distinct statement

2010-03-03 Thread Grzegorz Jaśkiewicz
select count(*) AS count, error, ev_text FROM clients_event_log GROUP BY
error, ev_text;

you can add 'HAVING count(*) > X' if you want to see only those with a
count above X, etc.


-- 
GJ


Re: [GENERAL] finding duplicate numbers in a select distinct statement

2010-03-03 Thread Terry
2010/3/3 Grzegorz Jaśkiewicz gryz...@gmail.com:
 select count(*) AS count, error, ev_text FROM clients_event_log GROUP BY
 error, ev_text;

 you can add 'HAVING count(*) > X' if you want to see only those with a
 count above X, etc.


 --
 GJ


I was just about to reply to the group.  Would this work too?
psql -d foo -c 'SELECT COUNT(DISTINCT(error)) AS count,error,ev_text
FROM dsclient_logs WHERE ev_id > 23572151 GROUP BY error,ev_text'



Re: [GENERAL] finding duplicate numbers in a select distinct statement

2010-03-03 Thread Grzegorz Jaśkiewicz
just try if it does what you want it to do ;)


[GENERAL] bug in function arguments recognition

2010-03-03 Thread Ivan Sergio Borgonovo
I've a function defined as:

create or replace function catalog_relateditems(__itemid bigint,
  families int[]...

I call it with

select * from catalog_relateditems(6538::bigint, ARRAY[1,2,3,4,5]);
and I get:

HINT:  No function matches the given name and argument types. You
might need to add explicit type casts.

Then I try to drop the function using psql autocomplete and I get...

DROP FUNCTION catalog_relateditems ( bigint, integer[], character
varying)

\df *rela*
reports:

public | catalog_relateditems   | setof record | __itemid
bigint, families integer[], OUT ...

There is no other function named catalog_relateditems

I'm on 8.3.9 debian lenny

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it




[GENERAL] LDAP Login Problem

2010-03-03 Thread Tom Robst

Hi,

I am having a problem with authentication using LDAP on PostgreSQL 8.4.2.

The problem seems to be limited to which attribute is specified in the 
ldapprefix. If I specify "uid=" and then try to log in using the username 
"trobst" (which is the value in the ldap db) I get an error:


host    all    all    192.168.1.0/24    ldap 
ldapserver=ldap.thermocable.com ldapprefix="uid=" 
ldapsuffix=",cn=Staff,dc=thermocable,dc=com"


LOG:  LDAP login failed for user
uid=trobst,cn=Staff,dc=thermocable,dc=com on server
ldap.thermocable.com: error code 49
FATAL:  LDAP authentication failed for user "trobst"

However, if I specify the ldapprefix to be "cn=" and log in using the 
username "Tom Robst", it all works fine.


host    all    all    192.168.1.0/24    ldap 
ldapserver=ldap.thermocable.com ldapprefix="cn=" 
ldapsuffix=",cn=Staff,dc=thermocable,dc=com"


Any help would be appreciated.
Kind Regards,
Tom Robst
--



Re: [GENERAL] How to grant a user read-only access to a database?

2010-03-03 Thread Thom Brown
On 3 March 2010 14:51, Kevin Kempter kev...@consistentstate.com wrote:
 On Wednesday 03 March 2010 07:29:21 am Merlin Moncure wrote:
 On Tue, Mar 2, 2010 at 9:56 AM, Thom Brown thombr...@gmail.com wrote:
  As far as I'm aware.  It's only in the upcoming version 9.0 that you
  can do things like:
 
  GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly;
 
  Other folk on here may have some alternative suggestions though.

 9.0 will also have the hot standby feature.  setting up a standby is
 pretty much always a good idea and access to the standby is
 automatically read only.  this would be a cheap way to get what you
 want without dealing with privileges which is nice.  you are also
 relatively insulated from problematic queries the user might make like
 accidental unconstrained joins, full table sorts etc..

 merlin


 I believe all you have to do is this to create a read only user:

 create user ro_user with password 'passwd';

 alter user ro_user set default_transaction_read_only = true;


I believe that will only affect the *default* setting of the
transaction.  The user could still run the following before a query to
write again:

SET SESSION CHARACTERISTICS AS TRANSACTION READ WRITE

Thom



Re: [GENERAL] LDAP Login Problem

2010-03-03 Thread Magnus Hagander
2010/3/3 Tom Robst tomro...@thermocable.com:
 Hi,

 I am having a problem with authentication using LDAP on PostgreSQL 8.4.2.

 The problem seems to be limited to which attribute is specified in the
 ldapprefix. If I specify "uid=" and then try to log in using the username
 "trobst" (which is the value in the ldap db) I get an error:

 host  all  all  192.168.1.0/24  ldap ldapserver=ldap.thermocable.com
 ldapprefix="uid=" ldapsuffix=",cn=Staff,dc=thermocable,dc=com"

 LOG:  LDAP login failed for user
 uid=trobst,cn=Staff,dc=thermocable,dc=com on server
 ldap.thermocable.com: error code 49
 FATAL:  LDAP authentication failed for user "trobst"

 However, if I specify the ldapprefix to be "cn=" and log in using the
 username "Tom Robst", it all works fine.

 host  all  all  192.168.1.0/24  ldap ldapserver=ldap.thermocable.com
 ldapprefix="cn=" ldapsuffix=",cn=Staff,dc=thermocable,dc=com"

The LDAP authentication needs to bind with the full DN, which is the
cn= one. Specifying uid= doesn't make it a valid LDAP distinguished
name. So unless your LDAP server is tricky (like the Microsoft one,
which accepts both DN and DOMAIN\username in the login packet),
there's nothing you can do, I think. (Well, you can also change all
your DNs in the LDAP catalog, but that's likely to break a lot of
other things.)

PostgreSQL 9.0 will allow you to do a search+bind to get the
functionality you want. The change should be fairly standalone, so you
could probably have it backpatched if it's urgent for you, but since
it's a new feature it's not something the community backpatches.
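
For the archives, a rough and untested sketch of the 9.0 search+bind
configuration (it uses the new ldapbasedn/ldapsearchattribute parameters):

host  all  all  192.168.1.0/24  ldap ldapserver=ldap.thermocable.com
ldapbasedn="cn=Staff,dc=thermocable,dc=com" ldapsearchattribute=uid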

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [GENERAL] How to grant a user read-only access to a database?

2010-03-03 Thread Tom Lane
Kevin Kempter kev...@consistentstate.com writes:
 I believe all you have to do is this to create a read only user:
 create user ro_user with password 'passwd';
 alter user ro_user set default_transaction_read_only = true;

You do realize the user can just unset that again?

regards, tom lane



Re: [GENERAL] bug in function arguments recognition

2010-03-03 Thread Tom Lane
Ivan Sergio Borgonovo m...@webthatworks.it writes:
 I've a function defined as:
 create or replace function catalog_relateditems(__itemid bigint,
   families int[]...

If you want any useful comments, you're going to have to show a complete
example, rather than selectively editing out what you think is
irrelevant (and, no doubt, isn't).

regards, tom lane



Re: [GENERAL] LDAP Login Problem

2010-03-03 Thread Tom Robst
Thanks Magnus. I should have mentioned I'm using OpenLDAP 2.2. I guess 
I'll just have to wait for Postgres 9 and work around it in the meantime. 
It's not an insurmountable issue...


Regards,
Tom Robst
--

On 03/03/10 15:18, Magnus Hagander wrote:

2010/3/3 Tom Robst tomro...@thermocable.com:

Hi,

I am having a problem with authentication using LDAP on PostgreSQL 8.4.2.

The problem seems to be limited to which attribute is specified in the ldapprefix. If I specify 
"uid=" and then try to log in using the username "trobst" (which is the value in 
the ldap db) I get an error:

host  all  all  192.168.1.0/24  ldap ldapserver=ldap.thermocable.com 
ldapprefix="uid=" ldapsuffix=",cn=Staff,dc=thermocable,dc=com"

LOG:  LDAP login failed for user
uid=trobst,cn=Staff,dc=thermocable,dc=com on server
ldap.thermocable.com: error code 49
FATAL:  LDAP authentication failed for user "trobst"

However, if I specify the ldapprefix to be "cn=" and log in using the username "Tom 
Robst", it all works fine.

host  all  all  192.168.1.0/24  ldap ldapserver=ldap.thermocable.com 
ldapprefix="cn=" ldapsuffix=",cn=Staff,dc=thermocable,dc=com"


The LDAP authentication needs to bind with the full DN, which is the
cn= one. Specifying uid= doesn't make it a valid LDAP distinguished
name. So unless your LDAP server is tricky (like the Microsoft one,
which accepts both DN and DOMAIN\username in the login packet),
there's nothing you can do, I think. (Well, you can also change all
your DNs in the LDAP catalog, but that's likely to break a lot of
other things.)

PostgreSQL 9.0 will allow you to do a search+bind to get the
functionality you want. The change should be fairly standalone, so you
could probably have it backpatched if it's urgent for you, but since
it's a new feature it's not something the community backpatches.





Re: [GENERAL] bug in function arguments recognition

2010-03-03 Thread Ivan Sergio Borgonovo
On Wed, 3 Mar 2010 16:05:29 +0100
Ivan Sergio Borgonovo m...@webthatworks.it wrote:

 I've a function defined as:
 
 create or replace function catalog_relateditems(__itemid bigint,
   families int[]...

Forget about it... there was a typo (a missed OUT) that mixed up the
in/out parameters.

Sorry for the noise.

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it




Re: [GENERAL] bug in function arguments recognition

2010-03-03 Thread Ivan Sergio Borgonovo
On Wed, 03 Mar 2010 10:22:31 -0500
Tom Lane t...@sss.pgh.pa.us wrote:

 Ivan Sergio Borgonovo m...@webthatworks.it writes:
  I've a function defined as:
  create or replace function catalog_relateditems(__itemid bigint,
families int[]...
 
 If you want any useful comments, you're going to have to show a
 complete example, rather than selectively editing out what you
 think is irrelevant (and, no doubt, isn't).

Golden rule... I thought that psql autocompletion was good enough
proof that something wasn't working, and I was hoping someone could
give me good advice on trimming down the clutter to build a simpler
function that could misbehave.

I was not aware that IN and OUT parameters could be intermixed; I was
expecting an error, but pg was pretty happy with the syntax and this
was enough to lead me astray.

Sorry

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it




Re: [GENERAL] disable triggers isolated to transaction only?

2010-03-03 Thread Greg Sabino Mullane



 My question is this: will ALTER TABLE ONLY $subtable DISABLE TRIGGER
 ALL within a transaction only affect my transaction, or will it affect
 anyone inserting into this subtable.  If it blocks external inserts
 that's ok since my transactions are small while moving the data.  I
 guess at worst I lock the table.

ALTER TABLE will lock and block, but I'd be remiss if I didn't point 
out the use of session_replication_role as a much better solution to 
this particular class of problem. (Even if your version does not 
support it, Vick, it should be noted here for the archives). The 
session_replication_role was added in 8.3:

http://www.postgresql.org/docs/8.3/interactive/sql-altertable.html
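
A minimal sketch of that approach (8.3+; note the setting requires superuser):

BEGIN;
SET LOCAL session_replication_role = replica;  -- ordinary (ORIGIN) triggers don't fire
-- ... bulk data changes here ...
COMMIT;  -- SET LOCAL reverts automatically at transaction end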

-- 
Greg Sabino Mullane g...@turnstep.com
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 201003031020
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8





Re: [GENERAL] disable triggers isolated to transaction only?

2010-03-03 Thread Richard Huxton

On 03/03/10 15:46, Greg Sabino Mullane wrote:

ALTER TABLE will lock and block, but I'd be remiss if I didn't point
out the use of session_replication_role as a much better solution to
this particular class of problem. (Even if your version does not
support it, Vick, it should be noted here for the archives). The
session_replication_role was added in 8.3:

http://www.postgresql.org/docs/8.3/interactive/sql-altertable.html


That wouldn't have occurred to me. Definitely worth adding to the archives.

--
  Richard Huxton
  Archonet Ltd



[GENERAL] Massive table bloat

2010-03-03 Thread Markus Wollny
Hi!

I've set up a system to track slow page executions in one of our (as
yet not live) web apps. The tracking itself is handled completely within
the database using a function. Within a very short time (approx. 1 week),
and although we haven't got that much traffic on our test pages, the
table in question has grown beyond a size of 23 GB, even though a SELECT
count(*) on it will tell me that it only contains 235 rows. I'm sure I
must be missing something obvious here...

Here's the DDL for the table:

CREATE TABLE stats.slowpages
(
    url text NOT NULL,
    lastexecduration integer NOT NULL,
    avgslowexecduration integer,
    execcount integer,
    lastexectime timestamp without time zone,
    site_id integer NOT NULL,
    slowestexecduration integer,
    totaltimespent bigint,
    CONSTRAINT slowpages_pkey PRIMARY KEY (url)
) WITHOUT OIDS;

-- Indexes
CREATE INDEX idx_slowpages_duration ON stats.slowpages USING btree
(lastexecduration);
CREATE INDEX idx_slowpages_avgduration ON stats.slowpages USING btree
(avgslowexecduration);
CREATE INDEX idx_slowpages_execcount ON stats.slowpages USING btree
(execcount);
CREATE INDEX idx_slowpages_lastexec ON stats.slowpages USING btree
(lastexectime);
CREATE INDEX idx_slowpages_site ON stats.slowpages USING btree
(site_id);
CREATE UNIQUE INDEX uidx_slowpages_url_site ON stats.slowpages USING
btree (url, site_id);
CREATE INDEX idx_slowpages_totaltimespent ON stats.slowpages USING btree
(totaltimespent);

And this here is the function we use to insert or update entries in this
table:

CREATE OR REPLACE FUNCTION stats.iou_slowpages(
    IN _site_id integer,
    IN _url text,
    IN _duration integer)
RETURNS void AS
$BODY$
BEGIN
    LOOP
        UPDATE stats.slowpages
           SET avgslowexecduration =
                   ((avgslowexecduration*execcount)+_duration)/(execcount+1)
             , execcount = execcount+1
             , lastexectime = now()
             , lastexecduration = _duration
             , totaltimespent = totaltimespent + _duration
             , slowestexecduration = CASE WHEN _duration > slowestexecduration
                   THEN _duration ELSE slowestexecduration END
         WHERE url = _url AND site_id = _site_id;
        IF found THEN
            RETURN;
        END IF;

        BEGIN
            INSERT INTO stats.slowpages(url, lastexecduration,
                avgslowexecduration, slowestexecduration, totaltimespent,
                execcount, lastexectime, site_id)
            VALUES (_url, _duration, _duration, _duration, _duration, 1,
                now(), _site_id);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- concurrent insert beat us; retry the UPDATE on the next loop pass
        END;
    END LOOP;
END;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;

_site_id is a small integer value, _url is a full URL string to a page
and _duration is a value in milliseconds. We're on PostgreSQL 8.3.7.

Any idea about what I may be missing here?

Kind regards

   Markus







Re: [GENERAL] Massive table bloat

2010-03-03 Thread Thom Brown
On 3 March 2010 15:33, Markus Wollny markus.wol...@computec.de wrote:
 Hi!

 I've set up some system to track slow page executions in one of our (as
 yet not live) web apps. The tracking itself is handled completely within
 the database using a function. Within a very short time (approx. 1 week)
 and although we haven't got that much traffic on our testpages, the
 table in question has grown beyond a size of 23 GB, even though a SELECT
 count(*) on it will tell me that it only contains 235 rows. I'm sure I
 must be missing something obvious here...

 Here's the DDL for the table:

 CREATE TABLE stats.slowpages
 (
 url text NOT NULL,
 lastexecduration integer NOT NULL,
 avgslowexecduration integer,
 execcount integer,
 lastexectime timestamp without time zone,
 site_id integer NOT NULL,
 slowestexecduration integer,
 totaltimespent bigint,
 CONSTRAINT slowpages_pkey PRIMARY KEY (url)
 )WITHOUT OIDS;

 -- Indexes
 CREATE INDEX idx_slowpages_duration ON stats.slowpages USING btree
 (lastexecduration);
 CREATE INDEX idx_slowpages_avgduration ON stats.slowpages USING btree
 (avgslowexecduration);
 CREATE INDEX idx_slowpages_execcount ON stats.slowpages USING btree
 (execcount);
 CREATE INDEX idx_slowpages_lastexec ON stats.slowpages USING btree
 (lastexectime);
 CREATE INDEX idx_slowpages_site ON stats.slowpages USING btree
 (site_id);
 CREATE UNIQUE INDEX uidx_slowpages_url_site ON stats.slowpages USING
 btree (url, site_id);
 CREATE INDEX idx_slowpages_totaltimespent ON stats.slowpages USING btree
 (totaltimespent);

 And this here is the function we use to insert or update entries in this
 table:

 CREATE or REPLACE FUNCTION stats.iou_slowpages(
 IN _site_id integer,
 IN _url text,
 IN _duration integer)
 RETURNS void AS
 $BODY$
 BEGIN
    LOOP

        UPDATE stats.slowpages
               SET  avgslowexecduration =
 ((avgslowexecduration*execcount)+_duration)/(execcount+1)
               ,    execcount = execcount+1
               ,    lastexectime = now()
               ,    lastexecduration = _duration
               ,    totaltimespent = totaltimespent + _duration
                ,    slowestexecduration = CASE WHEN _duration > slowestexecduration
                    THEN _duration ELSE slowestexecduration END
               WHERE url = _url AND site_id = _site_id;
        IF found THEN
            RETURN;
        END IF;

        BEGIN
            INSERT INTO
 stats.slowpages(url,lastexecduration,avgslowexecduration,slowestexecduration,totaltimespent,execcount,lastexectime,site_id)
            VALUES (_url, _duration, _duration,_duration,_duration, 1,
 now(), _site_id);
            RETURN;
        EXCEPTION WHEN unique_violation THEN

        END;
    END LOOP;
 END;
 $BODY$
 LANGUAGE 'plpgsql' VOLATILE;

 _site_id is a small integer value, _url is a full URL string to a page
 and _duration is a value in milliseconds. We're on PostgreSQL 8.3.7.

 Any idea about what I may be missing here?

 Kind regards

   Markus


If you update a row, it actually creates a new version of it.  The old
one doesn't get removed until the VACUUM process cleans it up, so
maybe you need to run that against the database?
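
To check, something along these lines may help (a sketch; on 8.3 plain
VACUUM only marks the space reusable, while VACUUM FULL returns it to the
OS but takes an exclusive lock and tends to bloat indexes):

SELECT pg_size_pretty(pg_total_relation_size('stats.slowpages'));
VACUUM VERBOSE stats.slowpages;   -- reports how many dead row versions exist
VACUUM FULL stats.slowpages;      -- shrinks the file; locks the table
REINDEX TABLE stats.slowpages;    -- rebuild the indexes afterwards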

Regards

Thom



Re: [GENERAL] Massive table bloat

2010-03-03 Thread Grzegorz Jaśkiewicz
do a vacuum analyze verbose on it, and see if it complains about the FSM (free
space map) settings. Which it probably will.
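
If it does, these are the 8.3 knobs to raise (example values only; both
parameters are gone in 8.4, which tracks free space automatically):

# postgresql.conf (8.3)
max_fsm_pages = 2000000     # must cover the pages-needed figure VACUUM reports
max_fsm_relations = 2000    # total number of tables and indexes tracked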


Re: [GENERAL] stopping processes, preventing connections

2010-03-03 Thread Josh Kupershmidt
On Wed, Mar 3, 2010 at 8:31 AM, Herouth Maoz hero...@unicell.co.il wrote:


 First, the easy part - regarding allowing/disallowing queries. Is it
 possible to GRANT or REVOKE access to tables based on the originating IP?


I'd suggest separating out access to your tables by roles, and then
restricting those roles to certain IP ranges in pg_hba.conf.
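
A sketch of what that might look like (role, database and address are
hypothetical; pg_hba.conf is matched top-down, first match wins):

# pg_hba.conf
host  reports  crystal_user  192.168.1.50/32  md5     # the Crystal box only
host  reports  crystal_user  0.0.0.0/0        reject  # everyone else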


 Second, and the more complicated one - what do I do about rogue queries
 that are running when my process starts? Today we had a query that ran since
 yesterday. I called pg_cancel_backend() on it several times and waited for
 almost two hours - to no avail. Eventually I had to ask our sysadmin to
 shutdown PostgreSQL, which took some five minutes, but eventually worked. Is
 there a way to do the same thing to a single process without shutting down
 the whole server, and without causing any harm to the database or memory
 corruption? Something I can call from within SQL? I run the nightly script
 from a linux user which is not postgres, so I'd prefer a way that doesn't
 require using kill.


On 8.4, you can use pg_terminate_backend(), which sends a SIGTERM instead of
pg_cancel_backend's SIGINT. If you're not on 8.4, you can just do this
manually with a kill -SIGTERM backend_pid. If that doesn't work either,
you might have to resort to a kill -SIGKILL backend_pid. Killing a single
backend should be much better for you than restarting Postgres entirely.
These operations shouldn't result in database corruption.
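
A sketch of that workflow from SQL (column names as of 8.3/8.4; the pid
12345 is hypothetical):

SELECT procpid, usename, client_addr, query_start, current_query
  FROM pg_stat_activity
 WHERE current_query <> '<IDLE>';

SELECT pg_cancel_backend(12345);     -- SIGINT: cancel the running query
SELECT pg_terminate_backend(12345);  -- SIGTERM: 8.4+, also closes the connection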

You have to be database superuser to use pg_cancel_backend() or
pg_terminate_backend(), or have a shell login as the database user to use
kill. No way around that for now.

Though next time you see a query which doesn't respond to
pg_cancel_backend(), try gathering information about the query and what the
backend is doing; either you're doing something unusual (e.g. an app is
restarting the query automatically after getting canceled) or perhaps you've
stumbled on a bug in Postgres.

Josh


Re: [GENERAL] stopping processes, preventing connections

2010-03-03 Thread Herouth Maoz

On Mar 3, 2010, at 18:01 , Josh Kupershmidt wrote:

 
 On Wed, Mar 3, 2010 at 8:31 AM, Herouth Maoz hero...@unicell.co.il wrote:
 
 First, the easy part - regarding allowing/disallowing queries. Is it possible 
 to GRANT or REVOKE access to tables based on the originating IP?
 
 I'd suggest separating out access to your tables by roles, and then 
 restricting those roles to certain IP ranges in pg_hba.conf. 

Thank you. I guess I will go for something simple - I'll give the lady in 
charge of the reports machine a new user/password to use, and revoke that 
user's access. I was hoping to avoid her needing to change settings in Windows, 
but it seems to be the easiest way.

  
 Second, and the more complicated one - what do I do about rogue queries that 
 are running when my process starts? Today we had a query that ran since 
 yesterday. I called pg_cancel_backend() on it several times and waited for 
 almost two hours - to no avail. Eventually I had to ask our sysadmin to 
 shutdown PostgreSQL, which took some five minutes, but eventually worked. Is 
 there a way to do the same thing to a single process without shutting down 
 the whole server, and without causing any harm to the database or memory 
 corruption? Something I can call from within SQL? I run the nightly script 
 from a linux user which is not postgres, so I'd prefer a way that doesn't 
 require using kill.
 
 
 On 8.4, you can use pg_terminate_backend(), which sends a SIGTERM instead of 
 pg_cancel_backend's SIGINT. If you're not on 8.4, you can just do this 
 manually with a kill -SIGTERM backend_pid. If that doesn't work either, you 
 might have to resort to a kill -SIGKILL backend_pid. Killing a single 
 backend should be much better for you than restarting Postgres entirely. 
 These operations shouldn't result in database corruption.
 
 You have to be database superuser to use pg_cancel_backend() or 
 pg_terminate_backend(), or have a shell login as the database user to use 
 kill. No way around that for now.

Unfortunately, we only have 8.3 - is SIGTERM safe on 8.3?

I guess I'll have to sudo or use local ssh.

 
 Though next time you see a query which doesn't respond to 
 pg_cancel_backend(), try gathering information about the query and what the 
 backend is doing; either you're doing something unusual (e.g. an app is 
 restarting the query automatically after getting canceled) or perhaps you've 
 stumbled on a bug in Postgres.

I'd appreciate it if you could tell me what to look for. It was running a join on 
several tables, but nothing too complicated. It may be that the query is not 
optimized (one of the tables is not indexed properly), but it should still 
respond to a cancel - shouldn't it?

Thank you very much,
Herouth

[GENERAL] Raid 10 settings for optimal postgres performance?

2010-03-03 Thread Chris Barnes

I have just reconfigured our disks from raid5 to raid10.

The raid 10 is now 6 drives, giving us 750G for use by the OS and postgres.

 

What is the preferred setup for postgres concerning cache settings on the 
controller and disks, and the preferred block size allocated for the disks, given 
that postgres uses block size 8192?

 

df -h

/dev/sda3 750G   65G  647G  10% /
/dev/sda1 496M   17M  454M   4% /boot


show all;

server_version | 8.4.2

block_size | 8192

 

 

Controller settings

Read-cache mode - Enabled

Write-cache mode - Enabled (write-back)

Write-cache setting - Enabled (write-back) when protected by battery

Stripe-unit size : 256 KB

 

Disk setting

Write Cache: Disabled (write-through)

 

Postgresql.conf

fsync = on  # turns forced synchronization on or off
#synchronous_commit = on# immediate fsync at commit
wal_sync_method = fsync # the default is the first option

 

Logical drive number 1
   Logical drive name   : RAID10
   RAID level   : 10
   Status of logical drive  : Okay
   Size : 858000 MB
   Read-cache mode  : Enabled
   Write-cache mode : Enabled (write-back)
   Write-cache setting  : Enabled (write-back) when 
protected by battery
   Partitioned  : Yes
   Number of segments   : 6
   Stripe-unit size : 256 KB
   Stripe order (Channel,Device): 0,0 0,1 0,2 0,3 0,4 0,5 
   Defunct segments : No
   Defunct stripes  : No


 

  Device #0
 Device is a Hard drive
 State  : Online
 Supported  : Yes
 Transfer Speed : SAS 3.0 Gb/s
 Reported Channel,Device: 0,0
 Vendor : IBM-ESXS
 Model  : ST3300655SS
 Firmware   : BA26
 World-wide name: 5000C5000A42EFAC
 Size   : 286102 MB
 Write Cache: Disabled (write-through)
 FRU: None
 PFA 

 

Sincerely,

 

Chris

 
  
_
Take your contacts everywhere
http://go.microsoft.com/?linkid=9712959

[GENERAL] Work Mem Allocation Questions

2010-03-03 Thread Brad Nicholson
1: Is the default work_mem pre-allocated to the Postgres processes - or
does it get allocated when needed?  Say I have work_mem at 30MB - will
Postgres allocate that 30MB on connection, or only when it is needed by a
query?

2: If work_mem is, say,  30MB, and my query needs 10MB - will Postgres
allocate all 30MB, or just the 10MB I need?

-- 
Brad Nicholson  416-673-4106
Database Administrator, Afilias Canada Corp.





Re: [GENERAL] finding duplicate numbers in a select distinct statement

2010-03-03 Thread Terry
2010/3/3 Terry td3...@gmail.com:
 2010/3/3 Grzegorz Jaśkiewicz gryz...@gmail.com:
 select count(*) AS count, error, ev_text FROM clients_event_log GROUP BY
 error, ev_text;

 you can add 'HAVING count(*)  X'; , if you want to see only those with
 count above X, etc.


 --
 GJ


 I was just about to reply to the group.  Would this work too?
 psql -d foo -c 'SELECT COUNT(DISTINCT(error)) AS count,error,ev_text
 FROM dsclient_logs WHERE ev_id  23572151 GROUP BY error,ev_text'


My statement was broke.  GJ's was spot on.  Thanks!



Re: [GENERAL] Work Mem Allocation Questions

2010-03-03 Thread Bill Moran
In response to Brad Nicholson bnich...@ca.afilias.info:

 1: Is the default work_mem pre-allocated to the Postgres processes - or
 does it get allocated when needed?  Say I have work_mem at 30MB - will
  Postgres allocate that 30MB on connection, or only when it is needed by a
 query?

It's allocated on demand and freed when no longer needed.

 2: If work_mem is, say,  30MB, and my query needs 10MB - will Postgres
 allocate all 30MB, or just the 10MB I need?

Same answer.

Keep in mind that work_mem is not an upper cap, it is a per-sort limit.
Thus a query with 5 sorts could allocate 5 * work_mem.  The docs have
a little more detail on this.
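
One way to see this in action (a sketch; the table and column are
hypothetical, and work_mem can be set per session):

SET work_mem = '30MB';
EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY some_col;
-- "Sort Method:  quicksort  Memory: ...kB"  = the sort used only what it needed
-- "Sort Method:  external merge  Disk: ..." = work_mem was too small; it spilled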

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/



Re: [GENERAL] How to grant a user read-only access to a database?

2010-03-03 Thread Kevin Kempter
On Wednesday 03 March 2010 08:19:46 am Tom Lane wrote:
 Kevin Kempter kev...@consistentstate.com writes:
  I believe all you have to do is this to create a read only user:
  create user ro_user with password 'passwd';
  alter user ro_user set default_transaction_read_only = true;
 
 You do realize the user can just unset that again?
 
   regards, tom lane


I did not. Thanks for the heads up.





[GENERAL] Auto VACUUM

2010-03-03 Thread akp geek
Hi All -

  I need some help from you. This question is a follow-up to my
earlier questions. I turned on autovacuum and restarted the db, and the
settings I have are as follows. It seems the autovacuum process has not been
started. It's been more than 3 hours since I restarted my DB with the
following settings. I have run ps -ef to see the process list. Is there
something I am doing wrong?

Can you please help?

Regards

# - Query/Index Statistics Collector -

#track_activities = on
track_counts = on
#track_functions = none # none, pl, all
#track_activity_query_size = 1024
#update_process_title = on
#stats_temp_directory = 'pg_stat_tmp'


#--
# AUTOVACUUM PARAMETERS
#--

autovacuum = on                 # Enable autovacuum subprocess?  'on'
                                # requires track_counts to also be on.
#log_autovacuum_min_duration = -1   # -1 disables, 0 logs all actions and
                                # their durations, > 0 logs only
                                # actions running at least this number
                                # of milliseconds.
autovacuum_max_workers = 10     # max number of autovacuum subprocesses
autovacuum_naptime = 180min     # time between autovacuum runs
#autovacuum_vacuum_threshold = 50   # min number of row updates before
                                # vacuum
#autovacuum_analyze_threshold = 50  # min number of row updates before
                                # analyze
#autovacuum_vacuum_scale_factor = 0.2   # fraction of table size before vacuum
#autovacuum_analyze_scale_factor = 0.1  # fraction of table size before analyze
#autovacuum_freeze_max_age = 200000000  # maximum XID age before forced vacuum
                                # (change requires restart)
#autovacuum_vacuum_cost_delay = 20ms    # default vacuum cost delay for
                                # autovacuum, in milliseconds;
                                # -1 means use vacuum_cost_delay
#autovacuum_vacuum_cost_limit = -1  # default vacuum cost limit for
                                # autovacuum, -1 means use
                                # vacuum_cost_limit


Re: [GENERAL] Massive table bloat

2010-03-03 Thread Markus Wollny
 -----Original Message-----
 From: Thom Brown [mailto:thombr...@gmail.com] 
 Sent: Wednesday, March 3, 2010 16:56
 To: Markus Wollny
 Cc: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] Massive table bloat

 If you update rows, it actually creates a new version of it.  
 The old one doesn't get removed until the VACUUM process 
 cleans it up, so maybe you need to run that against the database?

I already do, on a nightly basis (which is probably not often enough in this 
case), and have got autovacuum running. I'll check the FSM settings as 
suggested by Grzegorz Jaśkiewicz; there's probably half a solution to the 
problem there, the other half probably being that the autovacuum daemon isn't 
visiting this table nearly often enough.

Kind regards

  Markus







Re: [GENERAL] Auto VACUUM

2010-03-03 Thread Joao Ferreira gmail
On Wed, 2010-03-03 at 12:46 -0500, akp geek wrote:
 Hi All -
  
   I need some help from you. this question is in follow up
 with my earlier questions. I turned the autovacuum and restarted the
 db and the settings I have as follows. It seems the autovacuum process
 has not been turned on. It's almost more than 3 hours I have restarted
 my DB with following setting.  I have ps -ef to see the proces list.
 Is there some thing I am doing wrong.
  
 Can you please help?

I'd suggest leaving the naptime at the default (60 seconds).

Your value is very high... too high, I'd say.

Use values around 60 seconds (never minutes)...
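
That is, something like (1min is the shipped default):

autovacuum_naptime = 1min   # with 180min, workers only wake up to check
                            # tables once every three hours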

  
 Regards
  
 # - Query/Index Statistics Collector -
 #track_activities = on
 track_counts = on
 #track_functions = none # none, pl, all
 #track_activity_query_size = 1024
 #update_process_title = on
 #stats_temp_directory = 'pg_stat_tmp'
 
 
 #--
 # AUTOVACUUM PARAMETERS
 #--
 
 autovacuum = on             # Enable autovacuum subprocess?  'on'
                             # requires track_counts to also be on.
 #log_autovacuum_min_duration = -1   # -1 disables, 0 logs all actions and
                             # their durations, > 0 logs only
                             # actions running at least this number
                             # of milliseconds.
 autovacuum_max_workers = 10 # max number of autovacuum subprocesses
 autovacuum_naptime = 180min # time between autovacuum runs
 #autovacuum_vacuum_threshold = 50   # min number of row updates before vacuum
 #autovacuum_analyze_threshold = 50  # min number of row updates before analyze
 #autovacuum_vacuum_scale_factor = 0.2   # fraction of table size before vacuum
 #autovacuum_analyze_scale_factor = 0.1  # fraction of table size before analyze
 #autovacuum_freeze_max_age = 200000000  # maximum XID age before forced vacuum
                             # (change requires restart)
 #autovacuum_vacuum_cost_delay = 20ms    # default vacuum cost delay for
                             # autovacuum, in milliseconds;
                             # -1 means use vacuum_cost_delay
 #autovacuum_vacuum_cost_limit = -1  # default vacuum cost limit for
                             # autovacuum, -1 means use vacuum_cost_limit
 
 




Re: [GENERAL] Auto VACUUM

2010-03-03 Thread akp geek
thank you. I changed the value to 1min and I started seeing autovacuum
being triggered. But I am getting the following message:

 ERROR:  canceling autovacuum task

Is it because the tables are getting updated while the vacuum process is
running, and the vacuum happens at a later point in time?

Regards



On Wed, Mar 3, 2010 at 1:02 PM, Joao Ferreira gmail 
joao.miguel.c.ferre...@gmail.com wrote:

 On Wed, 2010-03-03 at 12:46 -0500, akp geek wrote:
  Hi All -
 
I need some help from you. this question is in follow up
  with my earlier questions. I turned the autovacuum and restarted the
  db and the settings I have as follows. It seems the autovacuum process
  has not been turned on. It's almost more than 3 hours I have restarted
  my DB with following setting.  I have ps -ef to see the proces list.
  Is there some thing I am doing wrong.
 
  Can you please help?

 I'dd suggest leaving the naptime in the default (60 seconds)

 Your value is very high... too high... I'dd say

 Use values around 60 seconds (never minutes)...

 
  Regards
 
  # - Query/Index Statistics Collector -
  #track_activities = on
  track_counts = on
  #track_functions = none # none, pl, all
  #track_activity_query_size = 1024
  #update_process_title = on
  #stats_temp_directory = 'pg_stat_tmp'
 
 
 
 #--
  # AUTOVACUUM PARAMETERS
 
 #--
 
  autovacuum = on             # Enable autovacuum subprocess?  'on'
                              # requires track_counts to also be on.
  #log_autovacuum_min_duration = -1   # -1 disables, 0 logs all actions and
                              # their durations, > 0 logs only
                              # actions running at least this number
                              # of milliseconds.
  autovacuum_max_workers = 10 # max number of autovacuum subprocesses
  autovacuum_naptime = 180min # time between autovacuum runs
  #autovacuum_vacuum_threshold = 50   # min number of row updates before vacuum
  #autovacuum_analyze_threshold = 50  # min number of row updates before analyze
  #autovacuum_vacuum_scale_factor = 0.2   # fraction of table size before vacuum
  #autovacuum_analyze_scale_factor = 0.1  # fraction of table size before analyze
  #autovacuum_freeze_max_age = 200000000  # maximum XID age before forced vacuum
                              # (change requires restart)
  #autovacuum_vacuum_cost_delay = 20ms    # default vacuum cost delay for
                              # autovacuum, in milliseconds;
                              # -1 means use vacuum_cost_delay
  #autovacuum_vacuum_cost_limit = -1  # default vacuum cost limit for
                              # autovacuum, -1 means use vacuum_cost_limit
 
 





[GENERAL] PQntuples returns an int.

2010-03-03 Thread Guillaume Yziquel

Hello.

I've been looking at the PQntuples function, in section 29.3.2, 
'Retrieving query result information', of


http://www.postgresql.org/docs/8.2/interactive/libpq-exec.html

The declaration is:


int PQntuples(const PGresult *res);


I'm wondering: why the 'int'? Why not a 'long int', or an 'unsigned 
int', or whatever?


I'm asking this because I'm wondering whether you could have more tuples 
in the result of a query than could be indexed by the 'int' type. Can 
this happen?


All the best,

--
 Guillaume Yziquel
http://yziquel.homelinux.org/



[GENERAL] Larger volumes of chronologically ordered data and the planner

2010-03-03 Thread John Moran
Hello,

What is PostgreSQL's likely behaviour when it encounters a large
volume of data that is chronologically ordered (there's a btree index
on a date column)? Is PostgreSQL intelligent enough to discern that,
since the most frequently accessed data is invariably recent data,
it should keep only that in memory and efficiently leave the less
relevant, older data on disk? (The volume of data in production at the
moment is still small enough to fit entirely in memory.) The
application I maintain is not really a data warehousing app, but this
is likely to be where I first encounter performance issues, if I ever
do.

Where can I learn more about this subject in general?

Regards,
John Moran

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] stopping processes, preventing connections

2010-03-03 Thread Josh Kupershmidt


 Second, and the more complicated one - what do I do about rogue queries
  that are running when my process starts? Today we had a query that had been
  running since yesterday. I called pg_cancel_backend() on it several times and waited for
 almost two hours - to no avail. Eventually I had to ask our sysadmin to
 shutdown PostgreSQL, which took some five minutes, but eventually worked. Is
 there a way to do the same thing to a single process without shutting down
 the whole server, and without causing any harm to the database or memory
 corruption? Something I can call from within SQL? I run the nightly script
 from a linux user which is not postgres, so I'd prefer a way that doesn't
 require using kill.


 On 8.4, you can use pg_terminate_backend(), which sends a SIGTERM instead
 of pg_cancel_backend's SIGINT. If you're not on 8.4, you can just do this
 manually with a kill -SIGTERM backend_pid. If that doesn't work either,
 you might have to resort to a kill -SIGKILL backend_pid. Killing a single
 backend should be much better for you than restarting Postgres entirely.
 These operations shouldn't result in database corruption.

 You have to be database superuser to use pg_cancel_backend() or
 pg_terminate_backend(), or have a shell login as the database user to use
 kill. No way around that for now.


 Unfortunately, we only have 8.3 - is SIGTERM safe on 8.3?

 I guess I'll have to sudo or use local ssh.


Yeah, I think any of the kill modes on the backends should be safe in
terms of your data. If you're interested in killing without having to SSH
in, you could play with something like:
http://wiki.postgresql.org/wiki/Backend_killer_function
and adjust it as needed (permissions, etc) for your needs.
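
For reference, a minimal sketch of both approaches (the direct call needs superuser; on 8.4 pg_stat_activity exposes the pid as procpid). The wrapper below is in the spirit of the wiki function rather than a copy of it, and kill_my_query is a made-up name:

-- 8.4, as superuser: terminate someone else's stuck backend.
SELECT pg_terminate_backend(procpid)
  FROM pg_stat_activity
 WHERE current_query LIKE '%suspect query text%'
   AND procpid <> pg_backend_pid();

-- Works on 8.3 too: a SECURITY DEFINER wrapper, created by a superuser,
-- that lets an ordinary role cancel only backends running as itself.
CREATE FUNCTION kill_my_query(p_pid integer) RETURNS boolean AS $$
  SELECT pg_cancel_backend(procpid)
    FROM pg_stat_activity
   WHERE procpid = $1
     AND usename = session_user;
$$ LANGUAGE sql SECURITY DEFINER;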


 Though next time you see a query which doesn't respond to
 pg_cancel_backend(), try gathering information about the query and what the
 backend is doing; either you're doing something unusual (e.g. an app is
 restarting the query automatically after getting canceled) or perhaps you've
 stumbled on a bug in Postgres.


 I'd appreciate it if you tell me what to look for. It was running a join on
 several tables, but nothing too complicated. It may be that the query is not
 optimized (one of the tables is not indexed properly) but it still should
 respond to cancel - shouldn't it?


I came across an old thread discussing a problem which sounds similar to
yours (process not dying with a pg_cancel_backend()):
http://archives.postgresql.org/pgsql-general/2007-10/msg01696.php
there's some good info throughout that thread, including the recommendation
from Tom Lane to try to reproduce, assuming your query isn't stuck inside a
plpython or similar function.

If you can reproduce your hung query which doesn't respond to a
pg_cancel_backend(), try following instructions here (assuming your server
is a Linux/BSD machine) to gather more information about what's going on
inside the backend:
http://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD
strace, top, and/or vmstat information might also be useful.

Hope this helps,
Josh


Re: [GENERAL] Larger volumes of chronologically ordered data and the planner

2010-03-03 Thread Tom Lane
John Moran johnfrederickmo...@gmail.com writes:
 What is PostgreSQL's likely behaviour when it encounters a large
 volume of data that is chronologically ordered (there's a btree index
 on a date column)? Is postgreSQL intelligent enough to discern that
 since the most frequently accessed data is invariably recent data,
 that it should store only that in memory, and efficiently store less
 relevant, older data on disk (the volume of data in production at the
 moment is still small enough to fit entirely in memory)?

There's no dedicated intelligence about such a case, but I don't see why
the ordinary cache management algorithms won't handle it perfectly well.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PQntuples returns an int.

2010-03-03 Thread Steve Atkins

On Mar 3, 2010, at 10:05 AM, Guillaume Yziquel wrote:

 Hello.
 
 I've been looking at the PQntuples function, in section 29.3.2. 'Retrieving 
 query result information' of
 
   http://www.postgresql.org/docs/8.2/interactive/libpq-exec.html
 
 The declaration is:
 
 int PQntuples(const PGresult *res);
 
 I'm wondering: why the 'int'? why not a 'long int', or an 'unsigned int', or 
 whatever?
 
 I'm asking this because I'm wondering whether you could have more tuples in 
 the result of a query than could be indexed by the 'int' type. May this 
 happen?

That's returning the number of tuples you've already retrieved and have in 
memory on the client.

I'm not sure what the overhead in storage is, but if you have more than two 
billion tuples in your result set you're probably going to blow out of memory 
long before you get there (and should be using a cursor instead).
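
To make the cursor suggestion concrete, a minimal sketch in SQL (bigtable is a placeholder name): each FETCH hands libpq a batch small enough that PQntuples' int is never a concern.

BEGIN;
DECLARE big_cur CURSOR FOR SELECT * FROM bigtable;
FETCH FORWARD 10000 FROM big_cur;  -- repeat until a fetch returns no rows
CLOSE big_cur;
COMMIT;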

Cheers,
  Steve


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Scratching my head why results are different between machines.

2010-03-03 Thread Michael Gould
I have several tables in a SQL Anywhere 10 database that I'm converting to
PostgreSQL 8.4.2.  In ASA we did not use any GUID columns, but we do in
Postgres.


I have one table, citystateinfo, whose primary key is a UUID that is
automatically generated using the contrib module ossp-uuid.  That works
fine.


We are also using the citext contrib module on most of our text columns so
that we can do case insensitive searches.  We now have this working
properly.
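
For readers following along, a rough sketch of the kind of schema being described — the column list is guessed from the query below, not the actual DDL, and it assumes the ossp-uuid module's uuid_generate_v4() plus the citext type:

CREATE TABLE iss.citystateinfo (
    citystateinfoid uuid PRIMARY KEY DEFAULT uuid_generate_v4(),
    cityname        citext,
    statecode       citext,
    zipcode         citext
);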


Here is a simple select that is used to get the UUID primary key for the
citystateinfo table to put into a terminaladdress table.


select citystateinfoid as KeyName from iss.citystateinfo where
cityname='JACKSONVILLE' and statecode='FL' and zipcode='32226'.


In PGAdmin this returns the correct uuid in the proper format.


In the program that we're using to convert this data (a commercial product
called DMC), when it is run on the tech support machine at the author's
company, it returns the proper UUID to all columns where there is a matching
row in the citystateinfo table.


On my machine the UUID that is returned is 16 bytes and I cannot make out
any relevant numbers from the UUID key in the citystateinfo table.  I've
tried this in a Windows XP machine and a Windows 7 64 bit.  


Now here is the weird thing.  I did a complete backup of my database and had
the author restore the data on his.  It works every time there.  He then did
a pg_dumpall and I reloaded my db with his script and while his runs all of
the queries fine, mine is still not returning the proper length UUID column.


I'm using the UTF ODBC driver 8.4 and Postgres 8.4.2.


Does anyone have any idea what could possibly be going on? It runs fine
on computers located in other locations but doesn't work properly on
either one of my machines.  It seems to me that it must be environmental. 
The author has both Windows 2008 Server and Windows 7 64 bit.


Best Regards


Michael Gould




Re: [GENERAL] Scratching my head why results are different between machines.

2010-03-03 Thread Justin Graf
On 3/3/2010 3:40 PM, Michael Gould wrote:
 On my machine the UUID that is returned is 16 bytes and I cannot make 
 out any relevant numbers from the UUID key in the citystateinfo 
 table.  I've tried this in a Windows XP machine and a Windows 7 64 bit.

 Now here is the weird thing.  I did a complete backup of my database 
 and had the author restore the data on his.  It works every time 
 there.  He then did a pg_dumpall and I reloaded my db with his script 
 and while his runs all of the queries fine, mine is still not 
 returning the proper length UUID column.

 I'm using the UTF ODBC driver 8.4 and Postgres 8.4.2.


  You state that PgAdmin returns the correct result on your machine when 
connecting to the same database that returns the wrong result through the 
Data Conversion App, right?

If so, I'm betting either the ODBC driver or the Data Conversion App is 
doing something odd.

What is the name of app being used to convert the data?

Which ODBC are you using ?
http://psqlodbc.projects.postgresql.org/
or
http://projects.commandprompt.com/public/odbcng/

Note: PgAdmin does not use the ODBC interface.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PQntuples returns an int.

2010-03-03 Thread Greg Sabino Mullane



 I'm asking this because I'm wondering whether you could have more tuples 
 in the result of a query than could be indexed by the 'int' type. May 
 this happen?

Yes, but it's a low concern as many other things would break before 
you got to that point. See:

http://groups.google.com/group/pgsql.general/browse_thread/thread/462c454345221267/f61926526b539036
or:
http://5z8.info/super-nsfw_cyd


- -- 
Greg Sabino Mullane g...@turnstep.com
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 201003031555
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Raid 10 settings for optimal postgres performance?

2010-03-03 Thread Greg Smith

Chris Barnes wrote:
What is the preferred setup for postgres concerning cache settings on 
the controller and disks and the preferred block size allocated for 
the disks when postgres uses block size 8192? 
 
Controller settings

Read-cache mode - Enabled
Write-cache mode - Enabled (write-back)
Write-cache setting - Enabled (write-back) when protected by battery
Stripe-unit size : 256 KB
 
Disk setting
Write Cache: Disabled (write-through) 


You already have the right cache settings:  write-back when protected by 
battery, write cache on disks disabled.  And there aren't any 
interesting settings to change for block or stripe size on a RAID10 
array that will do anything you'll be able to measure.  You're done with 
controller and disk setup.


Looks like a standard ext3 filesystem under Linux, right?  Next you 
should run tests with bonnie++ to make sure the array is performing as 
expected.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us



Re: [GENERAL] Larger volumes of chronologically ordered data and the planner

2010-03-03 Thread Greg Smith

John Moran wrote:

Is postgreSQL intelligent enough to discern that
since the most frequently accessed data is invariably recent data,
that it should store only that in memory, and efficiently store less
relevant, older data on disk


When you ask for a database block from disk, it increments a usage count 
figure for that block when it's read into memory, and again if it turns 
out it was already there.  Those requests to allocate new blocks are 
constantly decreasing those usage counts as they clock sweep over the 
cache looking for space that hasn't been used recently.  This will 
automatically keep blocks you've used recently in RAM, while evicting 
ones that aren't.


The database doesn't have any intelligence to determining what data to 
keep in memory or not beyond that.  Its sole notion of relevant is 
whether someone has accessed that block recently or not.  The operating 
system cache will sit as a second layer on top of this, typically with 
its own LRU scheme typically for determining what gets cached or not.


I've written a long paper covering the internals here named Inside the 
PostgreSQL Buffer Cache at 
http://www.westnet.com/~gsmith/content/postgresql/ if you want to know 
exactly how this is all implemented.
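
If you want to watch those usage counts directly, the contrib module pg_buffercache exposes them; a minimal sketch, assuming the module is installed in the current database (following the pattern in its docs):

SELECT c.relname, count(*) AS buffers, avg(b.usagecount) AS avg_usage
  FROM pg_buffercache b
  JOIN pg_class c ON b.relfilenode = c.relfilenode
 WHERE b.reldatabase = (SELECT oid FROM pg_database
                        WHERE datname = current_database())
 GROUP BY c.relname
 ORDER BY buffers DESC
 LIMIT 10;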


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Auto VACUUM

2010-03-03 Thread Igor Neyman
What's the complete error message?
Vacuum uses maintenance_work_mem.  How does your maintenance_work_mem
setting compare to your RAM size?
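
One detail worth spelling out: each autovacuum worker can use up to maintenance_work_mem, so with autovacuum_max_workers = 10 the theoretical peak is ten times that setting. A quick check from psql:

SHOW maintenance_work_mem;    -- per-worker memory ceiling for vacuum
SHOW autovacuum_max_workers;  -- how many workers can run at once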

Igor Neyman 

 -Original Message-
 From: akp geek [mailto:akpg...@gmail.com] 
 Sent: Wednesday, March 03, 2010 1:10 PM
 To: Joao Ferreira gmail
 Cc: pgsql-general
 Subject: Re: Auto VACUUM
 
  thank you. I changed the value to 1M and I started seeing 
  the autovacuum being triggered. But I am getting the 
  following message: 
   
   ERROR:  canceling autovacuum task 
   
  Is it because the tables are getting updated while the vacuum 
  process is running, and vacuum happens at a later point in time?
  
 Regards
 
 
  
  On Wed, Mar 3, 2010 at 1:02 PM, Joao Ferreira gmail 
  joao.miguel.c.ferre...@gmail.com wrote:
  
    On Wed, 2010-03-03 at 12:46 -0500, akp geek wrote:
     Hi All -
    
     I need some help from you; this question is a follow-up to my earlier
     questions. I turned on autovacuum and restarted the db with the
     settings below, but the autovacuum process does not seem to have
     started. It has been more than 3 hours since I restarted my DB with
     the following settings, and I have run ps -ef to check the process
     list. Is there something I am doing wrong?
    
     Can you please help?
    
    I'd suggest leaving the naptime at the default (60 seconds).
    
    Your value is very high... too high, I'd say.
    
    Use values around 60 seconds (never minutes)...
    
     Regards
    
     # - Query/Index Statistics Collector -
     #track_activities = on
     track_counts = on
     #track_functions = none # none, pl, all
     #track_activity_query_size = 1024
     #update_process_title = on
     #stats_temp_directory = 'pg_stat_tmp'
    
     #------------------------------------------------------------------------------
     # AUTOVACUUM PARAMETERS
     #------------------------------------------------------------------------------
    
     autovacuum = on                         # Enable autovacuum subprocess?  'on'
                                             # requires track_counts to also be on.
     #log_autovacuum_min_duration = -1       # -1 disables, 0 logs all actions and
                                             # their durations, > 0 logs only
                                             # actions running at least this
                                             # number of milliseconds.
     autovacuum_max_workers = 10             # max number of autovacuum subprocesses
     autovacuum_naptime = 180min             # time between autovacuum runs
     #autovacuum_vacuum_threshold = 50       # min number of row updates before
                                             # vacuum
     #autovacuum_analyze_threshold = 50      # min number of row updates before
                                             # analyze
     #autovacuum_vacuum_scale_factor = 0.2   # fraction of table size before vacuum
     #autovacuum_analyze_scale_factor = 0.1  # fraction of table size before analyze
     #autovacuum_freeze_max_age = 200000000  # maximum XID age before forced vacuum
                                             # (change requires restart)
     #autovacuum_vacuum_cost_delay = 20ms    # default vacuum cost delay for
                                             # autovacuum, in milliseconds;
                                             # -1 means use vacuum_cost_delay
     #autovacuum_vacuum_cost_limit = -1      # default vacuum cost limit for
                                             # autovacuum, -1 means use
                                             # vacuum_cost_limit
    
    --
    Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
    To make changes to your subscription:
    http://www.postgresql.org/mailpref/pgsql-general
   
 
 
 

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Cacti + PostgreSQL Graphing

2010-03-03 Thread Marc G. Fournier


Greg (or anyone out there) ... have you had any experience with using 
check_postgres.pl --action=dbstats with Cacti?  I'm getting, in my 
cacti.log, a result of:


SPINE: Poller[0] Host[124] DS[2619] SCRIPT: 
/usr/local/bin/check_postgres.pl --action=dbstats --host=192.0.2.23 
--dbuser=monitor --dbpass=monitor --dbname=test, output: U


The specific problem being the 'output: U', which should look something 
like:


output: backends:1 commits:97645 rollbacks:5 read:177034 hit:6872718 
idxscan:0 idxtupread:0 idxtupfetch:0 idxblksread:2199 idxblkshit:3 
seqscan:3 seqtupread:1000110 ret:24845129 fetch:2956289 ins:2201738 
upd:3999 del:312 dbname:test


Google seems to have a bunch of stuff talking about 'output: U', but 
dating back to '06 and older ... the command runs fine (and fast) from the 
command line, so not a timeout ...


So, anyone using dbstats with the newest version of Cacti, and have it 
working?


On Mon, 1 Mar 2010, Marc G. Fournier wrote:



Greg, before I start hacking away at it, do you have any examples of using 
check_postgres_dbstats with cacti?  I'd rather not recreate the wheel if someone 
has already done it ...


thx ...

On Sat, 27 Feb 2010, Greg Smith wrote:


Marc G. Fournier wrote:

We are mainly a nagios / cacti environment, and I came across...


Why start with Cacti then?  There's way more Nagios integration available; 
the little check_pgsql that comes with it (I think it's still there), and 
the whole list at 
http://exchange.nagios.org/directory/Plugins/Databases/PostgresQL


I'd think that starting with http://bucardo.org/wiki/Check_postgres for 
example would put you way ahead of anything that's available for Cacti.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us





Marc G. FournierHub.Org Hosting Solutions S.A.
scra...@hub.org http://www.hub.org

Yahoo:yscrappySkype: hub.orgICQ:7615664MSN:scra...@hub.org




Marc G. FournierHub.Org Hosting Solutions S.A.
scra...@hub.org http://www.hub.org

Yahoo:yscrappySkype: hub.orgICQ:7615664MSN:scra...@hub.org

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Scratching my head why results are different between machines.

2010-03-03 Thread Michael Gould
Justin Graf jus...@magwerks.com wrote:
 On 3/3/2010 3:40 PM, Michael Gould wrote:
 On my machine the UUID that is returned is 16 bytes and I cannot make 
 out any relevant numbers from the UUID key in the citystateinfo 
 table.  I've tried this in a Windows XP machine and a Windows 7 64 bit.

 Now here is the weird thing.  I did a complete backup of my database 
 and had the author restore the data on his.  It works every time 
 there.  He then did a pg_dumpall and I reloaded my db with his script 
 and while his runs all of the queries fine, mine is still not 
 returning the proper length UUID column.

 I'm using the UTF ODBC driver 8.4 and Postgres 8.4.2.


   You state the PgAdmin returns the correct result on your machine 
 connecting to the database  that returns the wrong result using Data 
 Conversion App, RIGHT???

Any SQL client that can accept and process a query will bring it back
correctly. Also, the author of DMC can run it on his local Postgres server,
restored from my backup from this afternoon, and he gets the correct results
through DMC.

On my machine I get a UUID that looks like 8BA92F06-BCD6-49.  Also I've
noticed that when I bring up the ODBC administrator, I do not see any entry
for the Postgres ODBC driver.  I've tried to uninstall the driver, reboot
and reinstall with the same results.

Best Regards

Mike Gould




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] The REAL cost of joins

2010-03-03 Thread Marcin Krol

Hello everyone,

I have inadvertently set off a furball on an unrelated ng on what is the 
actual cost of SQL joins. But there's no reliable conclusion. I would 
like to actually know that, that is, are JOINs truly expensive?


As they say, one measurement is worth a thousand opinions, so I've done 
measurement on my PG app:


$ time echo \c hrs;
SELECT hosts.ip, reservation.start_date, architecture.architecture, 
os_kind.os_kind, os_rel.os_rel, os_version.os_version, project.project, 
email.email FROM hosts

 INNER JOIN project ON project.id = hosts.project_id
 INNER JOIN architecture ON hosts.architecture_id = architecture.id
 INNER JOIN os_kind ON os_kind.id = hosts.os_kind_id
 INNER JOIN os_rel ON hosts.os_rel_id = os_rel.id
 INNER JOIN os_version ON hosts.os_version_id = os_version.id
 INNER JOIN reservation_hosts ON hosts.id = reservation_hosts.host_id
 INNER JOIN reservation on reservation.id = 
reservation_hosts.reservation_id

 INNER JOIN email ON reservation.email_id = email.id

; | psql > /dev/null

real0m0.099s
user0m0.015s
sys 0m0.005s



$ time echo \c hrs;
 SELECT hosts.ip FROM hosts;
 SELECT reservation.start_date FROM reservation;
 SELECT architecture.architecture FROM architecture;
 SELECT os_rel.os_rel FROM os_rel;
 SELECT os_version.os_version FROM os_version;
 SELECT project.project FROM project;
 SELECT email.email FROM email;
  | psql > /dev/null

real0m0.046s
user0m0.008s
sys 0m0.004s

Note: I've created indexes on those tables, both on data columns like 
hosts.ip and on .id columns.


What do you think of this? And in general: when (if?) should one 
denormalize data?


Regards,
mk


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Scratching my head why results are different between machines.

2010-03-03 Thread Michael Gould
One thing I've noticed is that on my machines, when I install the odbc
driver I get no error messages but when I look in the ODBC administrator I
do not see any entry for PostGres in the drivers list.

I do know that it somehow is working because the DMC conversion tool only
uses ODBC and builds its data structures based on what it gets back from
querying the database.

Programs like WinSQL also work and I believe it uses ODBC.

In the past I believe I remember seeing the Postgres driver listed in the
ODBC admin tool under the drivers tab.

I'm running on a Windows 7 64 bit machine and I'm logged into our domain as
the domain administrator.  Is there something else I need to do to install
the odbc driver under windows?  I've even turned UAC off and it didn't seem
to help

Best Regards

--
Michael Gould, Managing Partner
Intermodal Software Solutions, LLC
904.226.0978
904.592.5250 fax



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Cacti + PostgreSQL Graphing

2010-03-03 Thread Marc G. Fournier


Found it ... spine (the new poller for cacti) doesn't seem to report 
errors very well ... switched to cmd.php and it turns out that, lo and 
behold, it can't find psql since /usr/local/bin isn't part of the path ... 
fixed that in check_postgres.pl and all works well now ...




On Wed, 3 Mar 2010, Marc G. Fournier wrote:



Greg (or anyone out there) ... have you had any experience with using 
check_postgres.pl --action=dbstats with Cacti?  I'm getting, in my cacti.log, 
a result of:


SPINE: Poller[0] Host[124] DS[2619] SCRIPT: /usr/local/bin/check_postgres.pl 
--action=dbstats --host=192.0.2.23 --dbuser=monitor --dbpass=monitor 
--dbname=test, output: U


The specific problem bing the 'output: U', which should look something like:

output: backends:1 commits:97645 rollbacks:5 read:177034 hit:6872718 
idxscan:0 idxtupread:0 idxtupfetch:0 idxblksread:2199 idxblkshit:3 seqscan:3 
seqtupread:1000110 ret:24845129 fetch:2956289 ins:2201738 upd:3999 del:312 
dbname:test


Google seems to have a bunch of stuff talking about 'output: U', but dating 
back to '06 and older ... the command runs fine (and fast) from the command 
line, so not a timeout ...


So, anyone using dbstats with the newest version of Cacti, and have it 
working?


On Mon, 1 Mar 2010, Marc G. Fournier wrote:



Greg, before I start hacking away at it, do you have any examples of using 
check_postgres_dbstats with cacti?  I'd rather not recreate the wheel if 
someone has already done it ...


thx ...

On Sat, 27 Feb 2010, Greg Smith wrote:


Marc G. Fournier wrote:

We are mainly a nagios / cacti environment, and I came across...


Why start with Cacti then?  There's way more Nagios integration available; 
the little check_pgsql that comes with it (I think it's still there), and 
the whole list at 
http://exchange.nagios.org/directory/Plugins/Databases/PostgresQL


I'd think that starting with http://bucardo.org/wiki/Check_postgres for 
example would put you way ahead of anything that's available for Cacti.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us





Marc G. FournierHub.Org Hosting Solutions S.A.
scra...@hub.org http://www.hub.org

Yahoo:yscrappySkype: hub.orgICQ:7615664MSN:scra...@hub.org




Marc G. FournierHub.Org Hosting Solutions S.A.
scra...@hub.org http://www.hub.org

Yahoo:yscrappySkype: hub.orgICQ:7615664MSN:scra...@hub.org




Marc G. FournierHub.Org Hosting Solutions S.A.
scra...@hub.org http://www.hub.org

Yahoo:yscrappySkype: hub.orgICQ:7615664MSN:scra...@hub.org

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PQntuples returns an int.

2010-03-03 Thread Guillaume Yziquel

Greg Sabino Mullane wrote:



I'm asking this because I'm wondering whether you could have more tuples 
in the result of a query than could be indexed by the 'int' type. May 
this happen?


Yes, but it's a low concern as many other things would break before 
you got to that point. See:


http://groups.google.com/group/pgsql.general/browse_thread/thread/462c454345221267/f61926526b539036
or:
http://5z8.info/super-nsfw_cyd


Thank you.

Indeed, I fully believe you when you say that many other things would 
break first.


It's much more a typing issue in a strongly typed language (OCaml).

Thanks again.

Guillaume Yziquel.
--
 Guillaume Yziquel
http://yziquel.homelinux.org/

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] The REAL cost of joins

2010-03-03 Thread Peter Hunsberger
On Wed, Mar 3, 2010 at 3:59 PM, Marcin Krol mrk...@gmail.com wrote:

 Hello everyone,

 I have inadvertently set off a furball on an unrelated ng on what is the 
 actual cost of SQL joins. But there's no reliable conclusion. I would like to 
 actually know that, that is, are JOINs truly expensive?

 As they say, one measurement is worth a thousand opinions, so I've done 
 measurement on my PG app:

 $ time echo \c hrs;
 SELECT hosts.ip, reservation.start_date, architecture.architecture, 
 os_kind.os_kind, os_rel.os_rel, os_version.os_version, project.project, 
 email.email FROM hosts
  INNER JOIN project ON project.id = hosts.project_id
  INNER JOIN architecture ON hosts.architecture_id = architecture.id
  INNER JOIN os_kind ON os_kind.id = hosts.os_kind_id
  INNER JOIN os_rel ON hosts.os_rel_id = os_rel.id
  INNER JOIN os_version ON hosts.os_version_id = os_version.id
  INNER JOIN reservation_hosts ON hosts.id = reservation_hosts.host_id
  INNER JOIN reservation on reservation.id = reservation_hosts.reservation_id
  INNER JOIN email ON reservation.email_id = email.id

 ; | psql > /dev/null

 real    0m0.099s
 user    0m0.015s
 sys     0m0.005s



 $ time echo \c hrs;
  SELECT hosts.ip FROM hosts;
  SELECT reservation.start_date FROM reservation;
  SELECT architecture.architecture FROM architecture;
  SELECT os_rel.os_rel FROM os_rel;
  SELECT os_version.os_version FROM os_version;
  SELECT project.project FROM project;
  SELECT email.email FROM email;
   | psql > /dev/null

 real    0m0.046s
 user    0m0.008s
 sys     0m0.004s

 Note: I've created indexes on those tables, both on data columns like 
 hosts.ip and on .id columns.

 What do you think of this? And in general: when (if?) should one denormalize 
 data?

I don't get what these two examples are supposed to show?  They do
not, in general, create equivalent output data. If you had to write
procedural logic to do the same correlation between the individual
result sets as the joins are performing then you'd be back to at least
the same performance and probably worse, so what exactly is being
compared here?
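
One way to get a like-for-like measurement is to time the queries inside the server, where psql startup and client I/O don't pollute the numbers; a minimal sketch reusing the tables from the post above:

EXPLAIN ANALYZE
SELECT hosts.ip, reservation.start_date, email.email
  FROM hosts
  JOIN reservation_hosts ON hosts.id = reservation_hosts.host_id
  JOIN reservation ON reservation.id = reservation_hosts.reservation_id
  JOIN email ON reservation.email_id = email.id;

The "Total runtime" line at the bottom is the cost of the join plan itself; comparing it against EXPLAIN ANALYZE of each single-table SELECT separates join cost from connection and formatting overhead.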

As to your second question; well, outside of not doing premature
optimization I'll observe that for operational databases,
denormalization rarely gives me enough savings to justify the
resultant extra data management required to maintain it.  If you're
talking about a data warehouse (and given the way you've worded the
question I supsect you are not?) then that's a different question, in
that case I prefer to generate the data from the operational side of
the house so you're no longer paying an operational cost to maintain
the denormalized data (you've got a whole different set of costs
instead).

--
Peter Hunsberger

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Scratching my head why results are different between machines.

2010-03-03 Thread Justin Graf
On 3/3/2010 5:16 PM, Michael Gould wrote:
 One thing I've noticed is that on my machines, when I install the odbc
 driver I get no error messages but when I look in the ODBC administrator I
 do not see any entry for PostGres in the drivers list.

 I do know that it somehow is working because the DMC conversion tool only
  uses ODBC and builds its data structures based on what it gets back from
 querying the database.

 Programs like WinSQL also work and I believe it uses ODBC.

 In the past I believe I remember seeing the Postgres driver listed in the
 ODBC admin tool under the drivers tab.

 I'm running on a Windows 7 64 bit machine and I'm logged into our domain as
 the domain administrator.  Is there something else I need to do to install
 the odbc driver under windows?  I've even turned UAC off and it didn't seem
 to help


Can't help on the Windows 7 ODBC driver not appearing in the drivers list.

If you can manage to configure a Data Source Name, turn on the logging,
then connect using this DSN in your DMC app and just do a simple select.

The log will get big quickly, so keep the selects small and use LIMIT.  
This will let you see what the ODBC driver is up to.

Open the C:\mylog_XXX file and you will find all the commands 
sent to PostgreSQL and the results.

Then look for something like this.  It will tell us what the 
SELECT looks like and what is being sent to the DMC app.

[1444-387.642]Exec_with_parameters_resolved: copying statement params: 
trans_status=1, len=51, stmt='Select user_password from mediawiki.mwuser 
limit 10'
[1444-387.645]   stmt_with_params = 'Select user_password from 
mediawiki.mwuser limit 10'
[1444-387.646]about to begin SC_execute
[1444-387.647]   Sending SELECT statement on stmt=02C5D8C0, 
cursor_name='SQL_CUR02C5D8C0' qflag=0,1
[1444-387.648]CC_send_query: conn=04313E00, query='Select user_password 
from mediawiki.mwuser limit 10'
--snip--
[1444-387.665]qresult: len=44, 
buffer=':B:e2e28556:8ff13b68ebc64eccad0921d1571bf08c'
[1444-387.666]qresult: len=0, buffer=''
[1444-387.667]qresult: len=44, 
buffer=':B:60a118ba:592cb633245cc330d48cc2e52c922f90'
[1444-387.667]qresult: len=0, buffer=''
[1444-387.668]qresult: len=0, buffer=''
[1444-387.668]qresult: len=0, buffer=''
[1444-387.669]qresult: len=0, buffer=''
[1444-387.669]qresult: len=0, buffer=''
[1444-387.670]qresult: len=0, buffer=''








[GENERAL] ArgoUML SQL code generator for PostgreSQL

2010-03-03 Thread John Gage
Is there an SQL code generator for ArgoUML for Postgres?  I have not been
able to find it, which does not mean it is not there.

Thanking you,

John


Re: [GENERAL] Scratching my head why results are different between machines.

2010-03-03 Thread Michael Gould
Is there anyone out there using Windows 7 64 bit with Postgres 8.4.2 and
the psqlodbc driver?  I've tried to install using the one that is available
with the standard windows installer.  I've also tried to install with the
one from EDB.   It appears that the installer finishes the install but when
I look for the drivers there are none in the odbc administrator.  I've
installed and uninstalled a couple of times.


Does anyone have any idea on what is going on here and how to fix the
problem?  I've logged in as the domain administrator and the system
administrator for our domain and I've turned UAC off also, but as of yet no
luck.


I know that they are at least partially installed because I can access our
database via two tools that use ODBC as their access method.  It's just that
they are not showing up in the administrator, and this makes me think that
I've got a problem. 


I'm trying to get a data conversion tool to work (was working when I was
using Windows XP) to convert data from SQL Anywhere into PostgreSQL 8.4.2. 
The problem is with UUID columns that are looked up and used in other
tables.  The UUID's are created properly in their parent tables, but the
value isn't correct when it is used as a lookup column.


The author of this product has taken a complete backup of my database and
restored it on his machine. When he runs the conversion routine it produces
the correct data.  The only difference between his machine and mine seems
to be how the odbc driver is installed.  He's using the one he downloaded
from EDB.  He sent it to me, I uninstalled the previous one I had, rebooted
and installed the one he sent me.  Same results, no driver shows up in the
odbc admin.


Best Regards


Michael Gould


 


 


Justin Graf jus...@magwerks.com wrote:
 



On 3/3/2010 5:16 PM, Michael Gould wrote: 


One thing I've noticed is that on my machines, when I install the odbc
driver I get no error messages but when I look in the ODBC administrator I
do not see any entry for PostGres in the drivers list.

I do know that it somehow is working because the DMC conversion tool only
uses ODBC and builds its data structures based on what it gets back from
querying the database.

Programs like WinSQL also work and I believe it uses ODBC.

In the past I believe I remember seeing the Postgres driver listed in the
ODBC admin tool under the drivers tab.

I'm running on a Windows 7 64 bit machine and I'm logged into our domain as
the domain administrator.  Is there something else I need to do to install
the odbc driver under windows?  I've even turned UAC off and it didn't seem
to help.


Can't help on the Windows 7 ODBC driver not appearing in the drivers list.

If you can manage to configure a Data Source Name, turn on the logging,
then connect using this DSN in your DMC app and just do a simple select. 

The log will get big quickly, so keep the selects small and use LIMIT.
This will let you see what the ODBC driver is up to. 

Open the C:\mylog_XXX file and you will find all the commands sent to
PostgreSQL and the results.  

Then look for something like this.  It will tell us what the SELECT looks
like and what is being sent to the DMC app.  

[1444-387.642]Exec_with_parameters_resolved: copying statement params:
trans_status=1, len=51, stmt='Select user_password from mediawiki.mwuser
limit 10'
[1444-387.645]   stmt_with_params = 'Select user_password from
mediawiki.mwuser limit 10'
[1444-387.646]about to begin SC_execute
[1444-387.647]   Sending SELECT statement on stmt=02C5D8C0,
cursor_name='SQL_CUR02C5D8C0' qflag=0,1
[1444-387.648]CC_send_query: conn=04313E00, query='Select user_password
from mediawiki.mwuser limit 10'
--snip--
[1444-387.665]qresult: len=44,
buffer=':B:e2e28556:8ff13b68ebc64eccad0921d1571bf08c'
[1444-387.666]qresult: len=0, buffer=''
[1444-387.667]qresult: len=44,
buffer=':B:60a118ba:592cb633245cc330d48cc2e52c922f90'
[1444-387.667]qresult: len=0, buffer=''
[1444-387.668]qresult: len=0, buffer=''
[1444-387.668]qresult: len=0, buffer=''
[1444-387.669]qresult: len=0, buffer=''
[1444-387.669]qresult: len=0, buffer=''
[1444-387.670]qresult: len=0, buffer=''






[GENERAL] Failed to run initdb: 128

2010-03-03 Thread Niranjan Maturi (nmaturi)
Hi
 
I am trying to install Postgres 8.2 on a virtual machine that has
Windows 2003 SP2. The installation fails with the error Failed to run
initdb: 128. I searched the archives and looks like it is a catch-all
error. But is there anything specific that we can do to resolve this
issue?
 
I am logged in as an administrator, so no problem with permissions.
Postgres was able to create the windows user account properly and after
that it fails with the above error. It tells us to copy a log file
initdb.log, but that file does not exist.
 
There is another log file pginstall.log, where I do see some error
messages, I printed them below. Any help is appreciated.
 
Niranjan
 
 
 
 
MSI (c) (C4:64) [15:59:30:242]: Note: 1: 2769 2: GetAvailableLocales 3:
1 
MSI (c) (C4:64) [15:59:30:242]: Note: 1: 2205 2:  3: Error 
MSI (c) (C4:64) [15:59:30:242]: Note: 1: 2228 2:  3: Error 4: SELECT
`Message` FROM `Error` WHERE `Error` = 2769 
DEBUG: Error 2769:  Custom Action GetAvailableLocales did not close 1
MSIHANDLEs.
The installer has encountered an unexpected error installing this
package. This may indicate a problem with this package. The error code
is 2769. The arguments are: GetAvailableLocales, 1, 
MSI (c) (C4:64) [15:59:30:258]: Closing MSIHANDLE (1) of type 790542 for
thread 2824
Action ended 15:59:30: GetAvailableLocales. Return value 1.
MSI (c) (C4:08) [15:59:30:273]: Doing action: CheckAvailableDLLs
Action 15:59:30: CheckAvailableDLLs. 
Action start 15:59:30: CheckAvailableDLLs.
MSI (c) (C4:08) [15:59:30:351]: Creating MSIHANDLE (154) of type 790542
for thread 2824
MSI (c) (C4:68) [15:59:30:351]: Invoking remote custom action. DLL:
C:\DOCUME~1\LOCALI~1\LOCALS~1\Temp\1\MSI5.tmp, Entrypoint:
checkavailabled...@4
MSI (c) (C4:68) [15:59:30:570]: Closing MSIHANDLE (154) of type 790542
for thread 2824
Action ended 15:59:30: CheckAvailableDLLs. Return value 1.
MSI (c) (C4:08) [15:59:30:570]: Skipping action: DisableInitdb
(condition is false)
MSI (c) (C4:08) [15:59:30:570]: Doing action: ValidateProductID
Action 15:59:30: ValidateProductID. 
Action start 15:59:30: ValidateProductID.
Action ended 15:59:30: ValidateProductID. Return value 1.
MSI (c) (C4:08) [15:59:30:586]: Doing action: CostInitialize
Action 15:59:30: CostInitialize. Computing space requirements
Action start 15:59:30: CostInitialize.
MSI (c) (C4:08) [15:59:30:586]: Machine policy value 'MaxPatchCacheSize'
is 10
MSI (c) (C4:08) [15:59:30:586]: Baseline: Sorting baselines for
{1F701DBD-1660-4108-B10A-FB435EA63BF0}.
MSI (c) (C4:08) [15:59:30:586]: Baseline: New baseline 8.2.0 from
transaction.
MSI (c) (C4:08) [15:59:30:586]: Baseline: Sorted order Native: Order 0.
MSI (c) (C4:08) [15:59:30:586]: Baseline Data Table:
MSI (c) (C4:08) [15:59:30:586]: ProductCode:
{1F701DBD-1660-4108-B10A-FB435EA63BF0} Version: 8.2.0 Attributes: 0
PatchId: Native BaselineId: -
 
 




Re: [GENERAL] Scratching my head why results are different between machines.

2010-03-03 Thread Craig Ringer
Michael Gould wrote:

 Does anyone have any idea on what is going on here and how to fix the
 problem?  I've logged in as the domain adminstrator and the system
 administrator for our domain and I've turned UAC off also, but as of yet
 no luck.

I'm wondering if you should post to the pgsql-odbc list too, since some
of the ODBC folks don't follow the main list very closely.

 I know that they are at least partially installed because I can access
 our database via two tools that use ODBC as it's access method.  It's
 just that they are not showing up in the administrator and this makes me
 think that I've got a problem. 

If it's anything like it was in the win16 to win32 transition, you'll
need to fire up the 32-bit ODBC administrator, as it could be different.

Try running odbcad32.exe

 The author of this product has taken a complete backup of my database
 and restored it on his machine. [where it works]

Is his machine running windows 7?

Is it 32-bit or 64-bit?

--
Craig Ringer

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Scratching my head why results are different between machines.

2010-03-03 Thread Craig Ringer
Argh - a follow-up re ODBC 32 and 64 bit-ness:

http://blog.danovich.com.au/2010/02/02/odbc-settings-on-64-bit-servers/

It's way crazier than you'd ever expect. Both the 32-bit and 64-bit
versions are called odbcad32.exe but are in different (but not
distinctively named) locations.

http://support.microsoft.com/kb/942976/en-us

*bangs head against desk*

--
Craig Ringer

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Scratching my head why results are different between machines.

2010-03-03 Thread Greg Sabino Mullane



 It's way crazier than you'd ever expect. Both the 32-bit and 64-bit
 versions are called odbcad32.exe but are in different (but not
 distinctively named) locations.

 http://support.microsoft.com/kb/942976/en-us

Wow. If I even had the slightest regret about my move to abandon 
Windows years ago, this would have pretty much squashed it. 
Magnus et al., I don't know how you do it. :)

- -- 
Greg Sabino Mullane g...@turnstep.com
PGP Key: 0x14964AC8 201003032043
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Scratching my head why results are different between machines.

2010-03-03 Thread John R Pierce

Greg Sabino Mullane wrote:
Wow. If I even had the slightest regret about my move to abandon 
Windows years ago, this would have pretty much squashed it. 
Magnus et. al., I don't know how you do it. :)
  


yeah, (l)unix systems/users/apps never get ./lib and ./lib64 mixed up, 
oh no. 






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Scratching my head why results are different between machines.

2010-03-03 Thread Scott Marlowe
On Wed, Mar 3, 2010 at 6:45 PM, Greg Sabino Mullane g...@turnstep.com wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: RIPEMD160


 It's way crazier than you'd ever expect. Both the 32-bit and 64-bit
 versions are called odbcad32.exe but are in different (but not
 distinctively named) locations.

 http://support.microsoft.com/kb/942976/en-us

 Wow. If I even had the slightest regret about my move to abandon
 Windows years ago, this would have pretty much squashed it.
 Magnus et. al., I don't know how you do it. :)

That page has to be an abandoned April Fools' joke, right?  The 64 bit
version is named odbcad32.exe?  That has to be a joke of some kind.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Scratching my head why results are different between machines.

2010-03-03 Thread Scott Marlowe
On Wed, Mar 3, 2010 at 6:54 PM, John R Pierce pie...@hogranch.com wrote:
 Greg Sabino Mullane wrote:

 Wow. If I even had the slightest regret about my move to abandon Windows
 years ago, this would have pretty much squashed it. Magnus et. al., I don't
 know how you do it. :)


 yeah, (l)unix systems/users/apps never get ./lib and ./lib64 mixed up, oh
 no.

My experience has been that when they get things like that wrong (which is
rare) they fix it instead of declaring it the new standard.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Need to Reload a Dump File - Need to Delete the PGDATA directory Again?

2010-03-03 Thread Wang, Mary Y
Hi All,

I'm in the process of migrating to Postgres 8.3.8.
I did  :
(1) initdb
(2) psql -e template1 -f 21.bak > /tmp/out21.bak

Now, I've decided that I want to reload the same file again. Do I just delete 
the PGDATA directory and then
perform the same steps again?
(1) initdb
(2) psql -e template1 -f 21.bak > /tmp/out21.bak

What I'm asking is: do I always need to delete the PGDATA directory and 
then initdb to ensure that it will be a clean load?

Thanks
Mary




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need to Reload a Dump File - Need to Delete the PGDATA directory Again?

2010-03-03 Thread John R Pierce

Wang, Mary Y wrote:

Hi All,

I'm in the process of migrating to Postgres 8.3.8.
I did  :
(1) initdb
(2) psql -e template1 -f 21.bak > /tmp/out21.bak

Now, I've decided that I want to reload the same file again. Do I just delete 
the PGDATA directory and then
perform the same steps again?
(1) initdb
(2) psql -e template1 -f 21.bak > /tmp/out21.bak

What I'm asking is: do I always need to delete the PGDATA directory and 
then initdb to ensure that it will be a clean load?
  


is that dumpfile from pg_dumpall or pg_dump of a single database or what?

if it's just a single database, it should suffice to drop database mydb; 
create database mydb with owner myowner;  and then run the script again.


if it's the full cluster dump, then yeah, deleting the $PGDATA directory and 
redoing initdb is the best way to ensure clean initial conditions.
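
As a minimal sketch of the single-database path (mydb and myowner as placeholders, matching the example above; DROP DATABASE ... IF EXISTS is available in 8.3):

DROP DATABASE IF EXISTS mydb;
CREATE DATABASE mydb OWNER myowner;

then rerun the load connected to the new database, in the same style as before (psql -e mydb -f 21.bak > /tmp/out21.bak), with no initdb needed.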




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general