[GENERAL] DELETE ERROR: tuple concurrently updated

2009-12-29 Thread Михаил Кечинов
Hello.

Here is a problem, which I can't understand.

One week ago our database crashed, and after the restore some problems
began.

One of them:

When I try to delete one row from the database (for example):
delete from document where numdoc = 901721617

I have this error:
ERROR: tuple concurrently updated
SQL state: XX000

I know that no one else is deleting this row at the same time.

What does this error mean?

Thanks.

-- 
Mikhail Kechinov
http://www.mkechinov.ru


Re: [GENERAL] DELETE ERROR: tuple concurrently updated

2009-12-29 Thread Greg Stark
On Tue, Dec 29, 2009 at 9:41 AM, Михаил Кечинов kechin...@gmail.com wrote:
 One week ago our database crashed, and after the restore some problems
 began.

What version?

And how was this backup taken? It sounds like it might be an
inconsistent backup.

-- 
greg

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] DELETE ERROR: tuple concurrently updated

2009-12-29 Thread Михаил Кечинов
Version 8.4.
So it was not a database crash - the HDD died. We copied the data to a new
HDD, dropped some broken indexes (vacuuming reported errors on those indexes,
so we dropped and recreated them), and ran VACUUM FULL. That's all.

2009/12/29 Greg Stark gsst...@mit.edu

 On Tue, Dec 29, 2009 at 9:41 AM, Михаил Кечинов kechin...@gmail.com
 wrote:
  One week ago our database crashed, and after the restore some problems
  began.

 What version?

 And how was this backup taken? It sounds like it might be an
 inconsistent backup.

 --
 greg




-- 
Михаил Кечинов
http://www.mkechinov.ru


Re: [GENERAL] DELETE ERROR: tuple concurrently updated

2009-12-29 Thread John R Pierce

Михаил Кечинов wrote:

Version 8.4.
So it was not a database crash - the HDD died. We copied the data to a new 
HDD, dropped some broken indexes (vacuuming reported errors on those 
indexes, so we dropped and recreated them), and ran VACUUM FULL. That's all.


Where did you copy this data from if the drive died?  If it was from a 
previously made file system backup, was that backup made with some sort 
of point-in-time volume snapshot capability, or was pg_start_backup() 
called before it was made?





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] DELETE ERROR: tuple concurrently updated

2009-12-29 Thread Greg Stark
On Tue, Dec 29, 2009 at 9:41 AM, Михаил Кечинов kechin...@gmail.com wrote:
 When I try to delete one row from the database (for example):
 delete from document where numdoc = 901721617
 I have this error:
 ERROR: tuple concurrently updated
 SQL state: XX000
 I know that no one else is deleting this row at the same time.
 What does this error mean?

So this error can only come from a normal SQL-level delete if there is
associated TOAST data which is being deleted as well. In which case
that TOAST data must be already marked deleted -- which shouldn't be
possible.

It sounds like you have a database where some writes from earlier
transactions reached the database and others didn't. That can happen
if you take an inconsistent backup (without using pg_start_backup())
or if the drive you're using confirmed writes before crashing but
didn't actually write them.

You might be able to get somewhat further by reindexing the TOAST
table for this table. To do so, run REINDEX TABLE document. But note
that you could run into further errors from the missing toast data.
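
For example, you can also find and reindex the TOAST table directly (a
sketch; the pg_toast name below is a placeholder -- look up the real one
first):

SELECT reltoastrelid::regclass FROM pg_class WHERE relname = 'document';
-- suppose that returns pg_toast.pg_toast_16384:
REINDEX TABLE pg_toast.pg_toast_16384;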

-- 
greg

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] DELETE ERROR: tuple concurrently updated

2009-12-29 Thread Михаил Кечинов
Good. Now I have this error:

docs=# REINDEX TABLE document;
ERROR:  could not create unique index "pkey_document"
DETAIL:  Table contains duplicated values.

So I have a primary key, and I have some rows with the same numdoc, even
though numdoc is the primary key and must be unique.

I can't drop the pkey because some tables reference it with foreign keys:

docs=# alter table document drop constraint pkey_document;
NOTICE:  constraint fk_search_document_vid_numdoc on table
ref_search_document_vid depends on index pkey_document
ERROR:  cannot drop constraint pkey_document on table document because
other objects depend on it
HINT:  Use DROP ... CASCADE to drop the dependent objects too.

So I can drop the pkey only together with the other fkeys. That's bad. Is
there any method to clean up the pkey? When I try to delete one of the
duplicate documents, I get that error: tuple concurrently updated
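
For reference, the duplicates can at least be located by their physical row
address (a sketch, using the ctid system column):

SELECT ctid, numdoc
  FROM document
 WHERE numdoc IN (SELECT numdoc FROM document
                   GROUP BY numdoc
                  HAVING count(*) > 1);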

...

2009/12/29 Greg Stark gsst...@mit.edu

 On Tue, Dec 29, 2009 at 9:41 AM, Михаил Кечинов kechin...@gmail.com
 wrote:
  When I try to delete one row from the database (for example):
  delete from document where numdoc = 901721617
  I have this error:
  ERROR: tuple concurrently updated
  SQL state: XX000
  I know that no one else is deleting this row at the same time.
  What does this error mean?

 So this error can only come from a normal SQL-level delete if there is
 associated TOAST data which is being deleted as well. In which case
 that TOAST data must be already marked deleted -- which shouldn't be
 possible.

 It sounds like you have a database where some writes from earlier
 transactions reached the database and others didn't. That can happen
 if you take an inconsistent backup (without using pg_start_backup())
 or if the drive you're using confirmed writes before crashing but
 didn't actually write them.

 You might be able to get somewhat further by reindexing the TOAST
 table for this table. To do so, run REINDEX TABLE document. But note
 that you could run into further errors from the missing toast data.

 --
 greg




-- 
Михаил Кечинов
http://www.mkechinov.ru


Re: [GENERAL] cross-database time extract?

2009-12-29 Thread Osvaldo Kussama
2009/12/24 Israel Brewster isr...@frontierflying.com:
 This is sort of a PostgreSQL question/sort of a general SQL question, so I
 apologize if this isn't the best place to ask. At any rate, I know in
 PostgreSQL you can issue a command like 'SELECT "time"(timestamp_column)
 from table_name' to get the time part of a timestamp. The problem is that
 this command for some reason requires double quotes around the time function
 name, which breaks the command when used in SQLite (I don't know about MySQL
 yet, but I suspect the same would be true there). The program I am working
 on is designed to work with all three types of databases (SQLite,
 PostgreSQL, and MySQL), so it would be nice (save me some programming) if
 there was a single SQL statement to get the time portion of a timestamp that
 would work with all three. Is there such a beast? On a related note, why do
 we need the quotes around time for the function to work in PostgreSQL? The
 date function doesn't need them, so I know it's not just a general
 PostgreSQL formatting difference. Thanks :)
 ---


Is it a bug?

bdteste=# SELECT time(CURRENT_TIMESTAMP);
ERRO:  erro de sintaxe em ou próximo a CURRENT_TIMESTAMP
LINE 1: SELECT time(CURRENT_TIMESTAMP);
^
bdteste=# SELECT pg_catalog.time(CURRENT_TIMESTAMP);
  time
-
 10:55:07.073911
(1 registro)

bdteste=# SELECT "time"(CURRENT_TIMESTAMP);
  time
-
 10:55:20.679684
(1 registro)

bdteste=# SELECT $$time$$(CURRENT_TIMESTAMP);
ERRO:  erro de sintaxe em ou próximo a (
LINE 1: SELECT $$time$$(CURRENT_TIMESTAMP);
   ^

Osvaldo

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] cross-database time extract?

2009-12-29 Thread Adrian Klaver
On Tuesday 29 December 2009 5:01:03 am Osvaldo Kussama wrote:
 2009/12/24 Israel Brewster isr...@frontierflying.com:
  This is sort of a PostgreSQL question/sort of a general SQL question, so
  I apologize if this isn't the best place to ask. At any rate, I know in
  PostgreSQL you can issue a command like 'SELECT "time"(timestamp_column)
  from table_name' to get the time part of a timestamp. The problem is that
  this command for some reason requires double quotes around the time function
  name, which breaks the command when used in SQLite (I don't know about
  MySQL yet, but I suspect the same would be true there). The program I am
  working on is designed to work with all three types of databases (SQLite,
  PostgreSQL, and MySQL), so it would be nice (save me some programming) if
  there was a single SQL statement to get the time portion of a timestamp
  that would work with all three. Is there such a beast? On a related note,
  why do we need the quotes around time for the function to work in
  PostgreSQL? The date function doesn't need them, so I know it's not just
  a general PostgreSQL formatting difference. Thanks :)
  ---

 Is it a bug?

 bdteste=# SELECT time(CURRENT_TIMESTAMP);
 ERRO:  erro de sintaxe em ou próximo a CURRENT_TIMESTAMP
 LINE 1: SELECT time(CURRENT_TIMESTAMP);
 ^
 bdteste=# SELECT pg_catalog.time(CURRENT_TIMESTAMP);
   time
 -
  10:55:07.073911
 (1 registro)

 bdteste=# SELECT "time"(CURRENT_TIMESTAMP);
   time
 -
  10:55:20.679684
 (1 registro)

 bdteste=# SELECT $$time$$(CURRENT_TIMESTAMP);
 ERRO:  erro de sintaxe em ou próximo a (
 LINE 1: SELECT $$time$$(CURRENT_TIMESTAMP);
^

 Osvaldo

It is documented behavior.

To quote from here:
http://www.postgresql.org/docs/8.4/interactive/sql-expressions.html#SQL-SYNTAX-TYPE-CASTS

 It is also possible to specify a type cast using a function-like syntax:

typename ( expression )

However, this only works for types whose names are also valid as function 
names. 
For example, double precision cannot be used this way, but the equivalent 
float8 can. Also, the names interval, time, and timestamp can only be used in 
this fashion if they are double-quoted, because of syntactic conflicts. 
Therefore, the use of the function-like cast syntax leads to inconsistencies 
and should probably be avoided.

Note: The function-like syntax is in fact just a function call. When one of 
the two standard cast syntaxes is used to do a run-time conversion, it will 
internally invoke a registered function to perform the conversion. By 
convention, these conversion functions have the same name as their output type, 
and thus the function-like syntax is nothing more than a direct invocation of 
the underlying conversion function. Obviously, this is not something that a 
portable application should rely on. For further details see CREATE CAST. 
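
For example, all of these extract the time portion in PostgreSQL (a quick
illustration, not from the docs; only the CAST form is standard SQL):

SELECT CAST(CURRENT_TIMESTAMP AS time);  -- standard SQL cast syntax
SELECT CURRENT_TIMESTAMP::time;          -- PostgreSQL shorthand
SELECT "time"(CURRENT_TIMESTAMP);        -- function-like syntax, quotes needed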


-- 
Adrian Klaver
akla...@comcast.net

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] cross-database time extract?

2009-12-29 Thread Alvaro Herrera
Osvaldo Kussama wrote:

 bdteste=# SELECT "time"(CURRENT_TIMESTAMP);
   time
 -
  10:55:20.679684
 (1 registro)
 
 bdteste=# SELECT $$time$$(CURRENT_TIMESTAMP);
 ERRO:  erro de sintaxe em ou próximo a (
 LINE 1: SELECT $$time$$(CURRENT_TIMESTAMP);
^

Note that the reason the second query doesn't work is that $$ is a way
to replace quoting for string literals, i.e. what ' (single quote) does
normally.  " (double quote) is used to quote identifiers, not literals.
Different thing.

When you write "time" you are invoking the function because it's parsed
as an identifier.  When you write time (no quotes) you are invoking the
reserved keyword.  The double quotes strip the reservedness and it's
treated like an ordinary keyword.  When you write pg_catalog.time this
is parsed as an identifier too because keywords cannot be
schema-qualified.
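
A small demonstration of the distinction:

SELECT $$time$$;                            -- a string literal, value 'time'
SELECT 'time';                              -- the same string literal
SELECT "time"(CURRENT_TIMESTAMP);           -- an identifier: calls the function
SELECT pg_catalog.time(CURRENT_TIMESTAMP);  -- ditto, schema-qualified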

-- 
Alvaro Herrera                               http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] cross-database time extract?

2009-12-29 Thread Adrian Klaver
On Monday 28 December 2009 8:58:38 am Israel Brewster wrote:
 On Dec 24, 2009, at 12:53 PM, Adrian Klaver wrote:
  On Thursday 24 December 2009 1:44:58 pm Israel Brewster wrote:
  This is sort of a PostgreSQL question/sort of a general SQL question,
  so I apologize if this isn't the best place to ask. At any rate, I
  know in PostgreSQL you can issue a command like 'SELECT
  time(timestamp_column) from table_name' to get the time part of a
  timestamp. The problem is that this command for some reason requires
  quotes around the time function name, which breaks the command when
  used in SQLite (I don't know about MySQL yet, but I suspect the same
  would be true there). The program I am working on is designed to work
  with all three types of databases (SQLite, PostgreSQL, and MySQL) so
  it would be nice (save me some programing) if there was a single SQL
  statement to get the time portion of a timestamp that would work with
  all three. Is there such a beast? On a related note, why do we need
  the quotes around time for the function to work in PostgreSQL? the
  date function doesn't need them, so I know it's not just a general
  PostgreSQL formating difference. Thanks :)
  ---
  Israel Brewster
  Computer Support Technician II
  Frontier Flying Service Inc.
  5245 Airport Industrial Rd
  Fairbanks, AK 99709
  (907) 450-7250 x293
  ---
 
  select cast(timestamp_column as time) from table_name

 Didn't realize you could do that - thanks. This does work, sort of...
 In PostgreSQL, it returns the time portion of the timestamp as desired.
 However, in SQLite, while the function runs, it returns the year
 portion of the timestamp, rather than the time. That would seem to be
 a SQLite issue/question however. Thanks for the suggestion.

  --
  Adrian Klaver
  akla...@comcast.net


It would seem that the best solution is your original one of SELECT 
"time"(timestamp_field). This works in the three databases you mentioned with 
the provision that you have to double-quote time in Postgres. 



-- 
Adrian Klaver
akla...@comcast.net

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] DELETE ERROR: tuple concurrently updated

2009-12-29 Thread Михаил Кечинов
OK, I did it.

1. Drop primary key CASCADE (foreign keys too).
2. Reindex table.
3. Delete duplicate rows.
4. Create primary key.
5. Create foreign keys.
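
In SQL terms, those steps looked roughly like this (a sketch; the ctid value
and the foreign key column are placeholders, not the exact statements I ran):

ALTER TABLE document DROP CONSTRAINT pkey_document CASCADE;  -- step 1 (drops the fkeys too)
REINDEX TABLE document;                                      -- step 2
DELETE FROM document WHERE ctid = '(12345,6)';               -- step 3, once per duplicate
ALTER TABLE document
  ADD CONSTRAINT pkey_document PRIMARY KEY (numdoc);         -- step 4
ALTER TABLE ref_search_document_vid
  ADD CONSTRAINT fk_search_document_vid_numdoc
  FOREIGN KEY (numdoc) REFERENCES document (numdoc);         -- step 5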

Thanks for help.

On 29 December 2009 at 15:24, Михаил Кечинов
kechin...@gmail.com wrote:

 Good. Now I have this error:

 docs=# REINDEX TABLE document;
 ERROR:  could not create unique index "pkey_document"
 DETAIL:  Table contains duplicated values.

 So I have a primary key, and I have some rows with the same numdoc, even
 though numdoc is the primary key and must be unique.

 I can't drop the pkey because some tables reference it with foreign keys:

 docs=# alter table document drop constraint pkey_document;
 NOTICE:  constraint fk_search_document_vid_numdoc on table
 ref_search_document_vid depends on index pkey_document
 ERROR:  cannot drop constraint pkey_document on table document because
 other objects depend on it
 HINT:  Use DROP ... CASCADE to drop the dependent objects too.

 So I can drop the pkey only together with the other fkeys. That's bad. Is
 there any method to clean up the pkey? When I try to delete one of the
 duplicate documents, I get that error: tuple concurrently updated

 ...

 2009/12/29 Greg Stark gsst...@mit.edu

 On Tue, Dec 29, 2009 at 9:41 AM, Михаил Кечинов kechin...@gmail.com
 wrote:
  When I try to delete one row from database (for example):
  delete from document where numdoc = 901721617
  I have this error:
  ERROR: tuple concurrently updated
  SQL state: XX000
  I know, that no one deleting this row at same time.
  What's mean this error?

 So this error can only come from a normal SQL-level delete if there is
 associated TOAST data which is being deleted as well. In which case
 that TOAST data must be already marked deleted -- which shouldn't be
 possible.

 It sounds like you have a database where some writes from earlier
 transactions reached the database and others didn't. That can happen
 if you take an inconsistent backup (without using pg_start_backup())
 or if the drive you're using confirmed writes before crashing but
 didn't actually write them.

 You might be able to get somewhat further by reindexing the TOAST
 table for this table. To do so do REINDEX TABLE document. But note
 that you could run into further errors from the missing toast data.

 --
 greg




 --
 Михаил Кечинов
 http://www.mkechinov.ru




-- 
Михаил Кечинов
http://www.mkechinov.ru


[GENERAL] DDL commands take forever

2009-12-29 Thread Leonardo M.
Hi, I need to create a trigger on a table used by our software. The
problem is, when I issue a create trigger on this table, it takes
forever. It doesn't matter if I use pgAdmin or psql.

The only way to do it is by disconnecting all the instances of the
program from the database, executing the DDL command, and reconnecting
the program again.

What can be causing this behavior? Any workaround?

-- 
Leonardo M. Ramé
Griensu S.A. - Medical IT Córdoba
Tel.: 0351-4247979


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] DDL commands take forever

2009-12-29 Thread Bill Moran
In response to Leonardo M. Ramé l.r...@griensu.com:

 Hi, I need to create a trigger on a table used by our software, the
 problem is, when I issue a create trigger on this table, it takes
 forever. It doesn't matter if I use pgAdmin, or psql.
 
 The only way to do it is by disconnecting all the instances of the
 program from the database, execute the DDL command, and reconnect the
 program again.
 
 What can be causing this behavior? any workaround?.

Most likely those programs have locks out that are causing the DDL command
to have to wait until the locks clear.

Disconnecting everyone is the quick workaround.

The real fix is to ensure that your application doesn't hold onto DB locks
longer than is needed, which won't guarantee that everything moves fast,
but will help immensely.  Often this is caused by the application holding
transactions open for long periods of time, which is a bad idea anyway
since it interferes with vacuum and other maintenance operations.

You can look at the pg_locks table to see what locks are out, to help you
diagnose what apps are holding locks open.
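
For example (a sketch against the 8.4-era catalogs, where pg_stat_activity
exposes procpid and current_query):

SELECT l.locktype, l.mode, l.granted, a.procpid, a.current_query
  FROM pg_locks l
  JOIN pg_stat_activity a ON a.procpid = l.pid
 WHERE NOT l.granted;

That lists the statements currently waiting on ungranted locks; drop the
WHERE clause to see everything that is held.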

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Comparing 2 databases

2009-12-29 Thread akp geek
Hi all -

  I have Postgres running on 2 servers, one production and one
testing. What would be the best way to compare the 2 databases and find out
the differences? Can you please advise?

regards


Re: [GENERAL] cross-database time extract?

2009-12-29 Thread Osvaldo Kussama
2009/12/29 Adrian Klaver akla...@comcast.net:
 On Tuesday 29 December 2009 5:01:03 am Osvaldo Kussama wrote:
 2009/12/24 Israel Brewster isr...@frontierflying.com:
  This is sort of a PostgreSQL question/sort of a general SQL question, so
  I apologize if this isn't the best place to ask. At any rate, I know in
  PostgreSQL you can issue a command like 'SELECT "time"(timestamp_column)
  from table_name' to get the time part of a timestamp. The problem is that
  this command for some reason requires double quotes around the time function
  name, which breaks the command when used in SQLite (I don't know about
  MySQL yet, but I suspect the same would be true there). The program I am
  working on is designed to work with all three types of databases (SQLite,
  PostgreSQL, and MySQL), so it would be nice (save me some programming) if
  there was a single SQL statement to get the time portion of a timestamp
  that would work with all three. Is there such a beast? On a related note,
  why do we need the quotes around time for the function to work in
  PostgreSQL? The date function doesn't need them, so I know it's not just
  a general PostgreSQL formatting difference. Thanks :)
  ---

 Is it a bug?

 bdteste=# SELECT time(CURRENT_TIMESTAMP);
 ERRO:  erro de sintaxe em ou próximo a CURRENT_TIMESTAMP
 LINE 1: SELECT time(CURRENT_TIMESTAMP);
                     ^
 bdteste=# SELECT pg_catalog.time(CURRENT_TIMESTAMP);
       time
 -
  10:55:07.073911
 (1 registro)

 bdteste=# SELECT "time"(CURRENT_TIMESTAMP);
       time
 -
  10:55:20.679684
 (1 registro)

 bdteste=# SELECT $$time$$(CURRENT_TIMESTAMP);
 ERRO:  erro de sintaxe em ou próximo a (
 LINE 1: SELECT $$time$$(CURRENT_TIMESTAMP);
                        ^

 Osvaldo

 It is documented behavior.

 To quote from here:
 http://www.postgresql.org/docs/8.4/interactive/sql-expressions.html#SQL-SYNTAX-TYPE-CASTS

  It is also possible to specify a type cast using a function-like syntax:

 typename ( expression )

 However, this only works for types whose names are also valid as function 
 names.
 For example, double precision cannot be used this way, but the equivalent
 float8 can. Also, the names interval, time, and timestamp can only be used in
 this fashion if they are double-quoted, because of syntactic conflicts.
 Therefore, the use of the function-like cast syntax leads to inconsistencies
 and should probably be avoided.

    Note: The function-like syntax is in fact just a function call. When one of
 the two standard cast syntaxes is used to do a run-time conversion, it will
 internally invoke a registered function to perform the conversion. By
 convention, these conversion functions have the same name as their output 
 type,
 and thus the function-like syntax is nothing more than a direct invocation 
 of
 the underlying conversion function. Obviously, this is not something that a
 portable application should rely on. For further details see CREATE CAST. 



Adrian and Alvaro, thanks for the explanation.

Osvaldo

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Planner Row Estimate with Function

2009-12-29 Thread Michael Fork
I have an index scan on a custom function that is returning a wildly incorrect 
row estimate that is throwing off the rest of the query planning.  The result 
of the function is roughly unique - there are a handful with multiple entries - 
but the planner is estimating 227,745 rows.  I re-ran ANALYZE on the table and 
the results did not change.  Any suggestions on how to get a more accurate 
planner estimate?


Function definition:

CREATE FUNCTION parsecardidfromreferencecode(text) RETURNS text AS $_$
BEGIN
RETURN SUBSTRING($1 FROM 3 FOR 13);
END;
$_$ LANGUAGE plpgsql IMMUTABLE;

Explain output:

# explain select * from trail.event where type='CREDIT' and 
parsecardidfromreferencecode(reference_code) = lpad(CAST('57729970' AS text), 
13, '0');
QUERY PLAN
---
 Index Scan using idx_event_card_id on event  (cost=0.25..468642.89 rows=227745 
width=104)
   Index Cond: (parsecardidfromreferencecode(reference_code) = 
'057729970'::text)

Statistics:

# SELECT null_frac, avg_width, n_distinct, most_common_vals, most_common_freqs,
  correlation, histogram_bounds FROM pg_stats WHERE tablename = 'idx_event_card_id';
 null_frac         | 0.0935673
 avg_width         | 17
 n_distinct        | -1
 most_common_vals  |
 most_common_freqs |
 correlation       | 0.672617
 histogram_bounds  | {007932138,029448430,033432203,037841145,042171261,046399858,050938838,055122354,0003139385377,1220821134582,1261876426760}
Thanks.


Michael


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Planner Row Estimate with Function

2009-12-29 Thread Pavel Stehule
Hello

2009/12/29 Michael Fork mfor...@yahoo.com:
 I have an index scan on a custom function that is returning a wildly 
 incorrect row estimate that is throwing off the rest of the query planning.  
 The result of the function is roughly unique - there are a handful with 
 multiple entries - but the planner is estimating 227,745 rows.  I re-ran 
 ANALYZE on the table and the results did not change.  Any suggestions on how 
 to get more accurate planner result?


 Function definition:

 CREATE FUNCTION parsecardidfromreferencecode(text) RETURNS text AS $_$
 BEGIN
 RETURN SUBSTRING($1 FROM 3 FOR 13);
 END;
 $_$ LANGUAGE plpgsql IMMUTABLE;


try
 CREATE FUNCTION parsecardidfromreferencecode(text) RETURNS text AS $_$
SELECT SUBSTRING($1 FROM 3 FOR 13);
 $_$ LANGUAGE sql IMMUTABLE;

regards
Pavel Stehule

 Explain output:

 # explain select * from trail.event where type='CREDIT' and 
 parsecardidfromreferencecode(reference_code) = lpad(CAST('57729970' AS text), 
 13, '0');
                                        QUERY PLAN
 ---
  Index Scan using idx_event_card_id on event  (cost=0.25..468642.89 
 rows=227745 width=104)
   Index Cond: (parsecardidfromreferencecode(reference_code) = 
 '057729970'::text)

 Statistics:

 # SELECT null_frac, avg_width, n_distinct, most_common_vals, 
 most_common_freqs, correlation, histogram_bounds FROM pg_stats WHERE 
 tablename = 'idx_event_card_id';
  null_frac         | 0.0935673
  avg_width         | 17
  n_distinct        | -1
  most_common_vals  |
  most_common_freqs |
  correlation       | 0.672617
  histogram_bounds  | {007932138,029448430,033432203,037841145,042171261,046399858,050938838,055122354,0003139385377,1220821134582,1261876426760}
 Thanks.


 Michael


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] cross-database time extract?

2009-12-29 Thread Israel Brewster


On Dec 29, 2009, at 5:41 AM, Adrian Klaver wrote:


On Monday 28 December 2009 8:58:38 am Israel Brewster wrote:

On Dec 24, 2009, at 12:53 PM, Adrian Klaver wrote:

On Thursday 24 December 2009 1:44:58 pm Israel Brewster wrote:
This is sort of a PostgreSQL question/sort of a general SQL question,
so I apologize if this isn't the best place to ask. At any rate, I
know in PostgreSQL you can issue a command like 'SELECT
"time"(timestamp_column) from table_name' ...
---
Israel Brewster
Computer Support Technician II
Frontier Flying Service Inc.
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7250 x293
---


select cast(timestamp_column as time) from table_name


Didn't realize you could do that - thanks. This does work, sort of...
In PostgreSQL, it returns the time portion of the timestamp as
desired.

However, in SQLite, while the function runs, it returns the year
portion of the timestamp, rather than the time. That would seem to be
a SQLite issue/question however. Thanks for the suggestion.


--
Adrian Klaver
akla...@comcast.net



It would seem that the best solution is your original one of SELECT
"time"(timestamp_field). This works in the three databases you
mentioned with the provision that you have to double-quote time in
Postgres.


Agreed. It's fairly easy to add the quotes when needed, after which  
everything works as desired. Thanks for all the feedback and  
explanations!






--
Adrian Klaver
akla...@comcast.net

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


---
Israel Brewster
Computer Support Technician II
Frontier Flying Service Inc.
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7250 x293
---





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] esql vs libpq

2009-12-29 Thread svcntk
Hello,

I'm doing a comparison between the ESQL interface and libpq. For libpq I
use pgbench, based on TPC-C, while for ESQL I have a program that also
follows the transactions carried out in TPC-C.

However, the result with libpq is much better, at about 700
transactions per second, whereas ESQL does not reach the figure
of 400. Does anyone know if libpq has superior performance to ESQL, or
is there something I am not taking into account?

Thanks

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] DDL commands take forever

2009-12-29 Thread Leonardo M.
On Tue, 2009-12-29 at 11:20 -0500, Bill Moran wrote:
 In response to Leonardo M. Ramé l.r...@griensu.com:
 
  Hi, I need to create a trigger on a table used by our software, the
  problem is, when I issue a create trigger on this table, it takes
  forever. It doesn't matter if I use pgAdmin, or psql.
  
  The only way to do it is by disconnecting all the instances of the
  program from the database, execute the DDL command, and reconnect the
  program again.
  
  What can be causing this behavior? any workaround?.
 
 Most likely those programs have locks out that are causing the DDL command
 to have to wait until the locks clear.
 
 Disconnecting everyone is the quick workaround.
 
 The real fix is to ensure that your application doesn't hold onto DB locks
 longer than is needed, which won't guarantee that everything moves fast,
 but will help immensely.  Often this is caused by the application holding
 transactions open for long periods of time, which is a bad idea anyway
 since it interferes with vacuum and other maintenance operations.
 
 You can look at the pg_locks table to see what locks are out, to help you
 diagnose what apps are holding locks open.
 
 -- 
 Bill Moran
 http://www.potentialtech.com
 http://people.collaborativefusion.com/~wmoran/
 

Thanks Bill, I'll take a look at my app. I hope the problem is on my
side; the worst case would be that the cause of the problem is in the
connection component I'm using to connect to Postgres (the app is
written in Delphi with Zeos Lib).


-- 
Leonardo M. Ramé
Griensu S.A. - Medical IT Córdoba
Tel.: 0351-4247979


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_dump ERROR, usename postgres duplicated

2009-12-29 Thread Gastón Quiroga

Thank you, Tom:
   I'll try to make an update, but the 2 rows are equal field by field; 
how could I distinguish them in the WHERE clause?


Regards

Gastón Quiroga
Allytech S.A.



Tom Lane wrote:

Gastón ta...@allytech.com writes:
  

It's Postgres version 8.0.8



Well, that's pretty ancient, and I see at least one bug in the release
history that could result in duplicated rows.  I'd counsel an update to
8.0.something-recent.  You can probably delete the extra row using
a WHERE on ctid.

regards, tom lane
  


Re: [GENERAL] pg_dump ERROR, usename postgres duplicated

2009-12-29 Thread Alvaro Herrera
Gastón Quiroga wrote:
 Thank you, Tom:
    I'll try to make an update, but the 2 rows are equal field by
 field; how could I distinguish them in the WHERE clause?

Use the ctid hidden system field.

-- 
Alvaro Herrera                               http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_dump ERROR, usename postgres duplicated

2009-12-29 Thread Adrian Klaver




- Gastón Quiroga gast...@allytech.com wrote:

 Thank you, Tom:
 I'll try to make an update, but the 2 rows are equal field by field;
 how could I distinguish them in the WHERE clause?
 
 Regards
 
 Gastón Quiroga
 Allytech S.A.
 
 Tom Lane wrote:
 
 Gastón ta...@allytech.com writes:
 
 It's Postgres version 8.0.8

 Well, that's pretty ancient, and I see at least one bug in the release
 history that could result in duplicated rows.  I'd counsel an update to
 8.0.something-recent.  You can probably delete the extra row using
 a WHERE on ctid.
 
   regards, tom lane

Per Tom's previous post, use the following query:
select ctid,xmin,xmax,* from pg_shadow;

Then use the ctid value of the duplicate value in the where clause.
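
For example, if the duplicate turns out to be at ctid (0,12) -- the value
here is a placeholder, use whatever the query actually shows:

DELETE FROM pg_shadow WHERE ctid = '(0,12)';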


Adrian Klaver
akla...@comcast.net

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Planner Row Estimate with Function

2009-12-29 Thread Michael Fork
Pavel,

Thanks for the suggestion but unfortunately the planner estimate was not really 
affected:

QUERY PLAN
---
 Index Scan using idx_event_card_id on event  (cost=0.00..401311.59 rows=223890 
width=103)
   Index Cond: (substring(reference_code, 3, 13) = '057729970'::text)
(2 rows)
Thanks.


Michael



----- Original Message -----
From: Pavel Stehule pavel.steh...@gmail.com
To: Michael Fork mfor...@yahoo.com
Cc: pgsql-general@postgresql.org
Sent: Tue, December 29, 2009 12:18:52 PM
Subject: Re: [GENERAL] Planner Row Estimate with Function

Hello

2009/12/29 Michael Fork mfor...@yahoo.com:
 I have an index scan on a custom function that is returning a wildly 
 incorrect row estimate that is throwing off the rest of the query planning.  
 The result of the function is roughly unique - there are a handful with 
 multiple entries - but the planner is estimating 227,745 rows.  I re-ran 
 ANALYZE on the table and the results did not change.  Any suggestions on how 
 to get more accurate planner result?


 Function definition:

 CREATE FUNCTION parsecardidfromreferencecode(text) RETURNS text AS $_$
 BEGIN
 RETURN SUBSTRING($1 FROM 3 FOR 13);
 END;
 $_$ LANGUAGE plpgsql IMMUTABLE;


try
CREATE FUNCTION parsecardidfromreferencecode(text) RETURNS text AS $_$
SELECT SUBSTRING($1 FROM 3 FOR 13);
$_$ LANGUAGE sql IMMUTABLE;

regards
Pavel Stehule

 Explain output:

 # explain select * from trail.event where type='CREDIT' and 
 parsecardidfromreferencecode(reference_code) = lpad(CAST('57729970' AS text), 
 13, '0');
QUERY PLAN
 ---
  Index Scan using idx_event_card_id on event  (cost=0.25..468642.89 
 rows=227745 width=104)
   Index Cond: (parsecardidfromreferencecode(reference_code) = 
 '057729970'::text)

 Statistics:

 # SELECT null_frac, avg_width, n_distinct, most_common_vals, 
 most_common_freqs, correlation, histogram_bounds FROM pg_stats WHERE 
 tablename = 'idx_event_card_id';
  null_frac         | 0.0935673
  avg_width         | 17
  n_distinct        | -1
  most_common_vals  |
  most_common_freqs |
  correlation       | 0.672617
  histogram_bounds  | {007932138,029448430,033432203,037841145,042171261,046399858,050938838,055122354,0003139385377,1220821134582,1261876426760}
 Thanks.


 Michael


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Planner Row Estimate with Function

2009-12-29 Thread Pavel Stehule
2009/12/29 Michael Fork mfor...@yahoo.com:
 Pavel,

 Thanks for the suggestion but unfortunately the planner estimate was not 
 really affected:

estimates on string expressions like this are not exact.

you can use the following dirty trick:


http://www.postgres.cz/index.php/PostgreSQL_SQL_Tricks#Using_IMMUTABLE_functions_as_hints_for_the_optimizer

Regards
Pavel

                                        QUERY PLAN
 ---
  Index Scan using idx_event_card_id on event  (cost=0.00..401311.59 
 rows=223890 width=103)
   Index Cond: (substring(reference_code, 3, 13) = '057729970'::text)
 (2 rows)
 Thanks.


 Michael



 ----- Original Message -----
 From: Pavel Stehule pavel.steh...@gmail.com
 To: Michael Fork mfor...@yahoo.com
 Cc: pgsql-general@postgresql.org
 Sent: Tue, December 29, 2009 12:18:52 PM
 Subject: Re: [GENERAL] Planner Row Estimate with Function

 Hello

 2009/12/29 Michael Fork mfor...@yahoo.com:
 I have an index scan on a custom function that is returning a wildly 
 incorrect row estimate that is throwing off the rest of the query planning.  
 The result of the function is roughly unique - there are a handful with 
 multiple entries - but the planner is estimating 227,745 rows.  I re-ran 
 ANALYZE on the table and the results did not change.  Any suggestions on how 
 to get more accurate planner result?


 Function definition:

 CREATE FUNCTION parsecardidfromreferencecode(text) RETURNS text AS $_$
 BEGIN
 RETURN SUBSTRING($1 FROM 3 FOR 13);
 END;
 $_$ LANGUAGE plpgsql IMMUTABLE;


 try
 CREATE FUNCTION parsecardidfromreferencecode(text) RETURNS text AS $_$
 SELECT SUBSTRING($1 FROM 3 FOR 13);
 $_$ LANGUAGE sql IMMUTABLE;

 regards
 Pavel Stehule

 Explain output:

 # explain select * from trail.event where type='CREDIT' and 
 parsecardidfromreferencecode(reference_code) = lpad(CAST('57729970' AS 
 text), 13, '0');
                                        QUERY PLAN
 ---
  Index Scan using idx_event_card_id on event  (cost=0.25..468642.89 
 rows=227745 width=104)
   Index Cond: (parsecardidfromreferencecode(reference_code) = 
 '057729970'::text)

 Statistics:

 # SELECT null_frac, avg_width, n_distinct, most_common_vals, 
 most_common_freqs, correlation, histogram_bounds FROM pg_stats WHERE 
 tablename = 'idx_event_card_id';
  null_frac         | 0.0935673
  avg_width         | 17
  n_distinct        | -1
  most_common_vals  |
  most_common_freqs |
  correlation       | 0.672617
  histogram_bounds  | {007932138,029448430,033432203,037841145,042171261,046399858,050938838,055122354,0003139385377,1220821134582,1261876426760}
 Thanks.


 Michael


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Planner Row Estimate with Function

2009-12-29 Thread Thomas Kellerer

Michael Fork wrote on 29.12.2009 18:08:

I have an index scan on a custom function that is returning a wildly
incorrect row estimate that is throwing off the rest of the query
planning.  The result of the function is roughly unique - there are a
handful with multiple entries - but the planner is estimating 227,745
rows.  I re-ran ANALYZE on the table and the results did not change.



Any suggestions on how to get more accurate planner result?


You can add the ROWS nnn option to your create statement to give the planner a 
hint about the number of rows:

http://www.postgresql.org/docs/current/static/sql-createfunction.html

Check out the /ROWS result_rows/ part.

Thomas




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Comparing 2 databases

2009-12-29 Thread Joshua Tolley
On Tue, Dec 29, 2009 at 11:43:58AM -0500, akp geek wrote:
Hi all -
 
  I have Postgres running on 2 servers, one production and one
 testing. What would be the best way to compare the 2 databases and find out
 the differences? Can you please advise?
 
regards

That depends on what you mean by compare. check_postgres[1] has a schema
comparison action you can use.

[1] http://bucardo.org/wiki/Check_postgres
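
A typical invocation looks something like this (a sketch; the hosts and
database names are made up):

check_postgres.pl --action=same_schema \
  --dbhost=prod.example.com --dbname=mydb --dbuser=postgres \
  --dbhost2=test.example.com --dbname2=mydb --dbuser2=postgres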

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] DDL commands take forever

2009-12-29 Thread Leonardo M.
On Tue, 2009-12-29 at 14:48 -0300, Leonardo M. Ramé wrote:
 On Tue, 2009-12-29 at 11:20 -0500, Bill Moran wrote:
  In response to Leonardo M. Ramé l.r...@griensu.com:
  
   Hi, I need to create a trigger on a table used by our software, the
   problem is, when I issue a create trigger on this table, it takes
   forever. It doesn't matter if I use pgAdmin, or psql.
   
   The only way to do it is by disconnecting all the instances of the
   program from the database, execute the DDL command, and reconnect the
   program again.
   
   What can be causing this behavior? any workaround?.
  
  Most likely those programs have locks out that are causing the DDL command
  to have to wait until the locks clear.
  
  Disconnecting everyone is the quick workaround.
  
  The real fix is to ensure that your application doesn't hold onto DB locks
  longer than is needed, which won't guarantee that everything moves fast,
  but will help immensely.  Often this is caused by the application holding
  transactions open for long periods of time, which is a bad idea anyway
  since it interferes with vacuum and other maintenance operations.
  
  You can look at the pg_locks table to see what locks are out, to help you
  diagnose what apps are holding locks open.
  
  -- 
  Bill Moran
  http://www.potentialtech.com
  http://people.collaborativefusion.com/~wmoran/
  
 
 Thanks Bill, I'll take a look at my app. I hope the problem is on my
 side, the worst case would be the cause of the problem is on the
 connection component I'm using to connect to Postgres (the app is
 written in Delphi with Zeos Lib).
 
 
 -- 
 Leonardo M. Ramé
 Griensu S.A. - Medical IT Córdoba
 Tel.: 0351-4247979
 
 

Well, I'm trying to debug the problem, and found that when I do a simple
select * from table from my app, then go to pgAdmin, and do select *
from pg_locks, it shows many locks (23 to be exact).

On the other hand, if I do the same query with pgAdmin on one machine,
and the select * from pg_locks from another machine, there are only
three locks.

Is there a way to tell Postgres how to handle locks at connection time?
Maybe pgAdmin is using a *special* parameter to connect?


-- 
Leonardo M. Ramé
Griensu S.A. - Medical IT Córdoba
Tel.: 0351-4247979


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] DDL commands take forever

2009-12-29 Thread Merlin Moncure
2009/12/29 Leonardo M. l.r...@griensu.com:
 On Tue, 2009-12-29 at 14:48 -0300, Leonardo M. Ramé wrote:
 On Tue, 2009-12-29 at 11:20 -0500, Bill Moran wrote:
  In response to Leonardo M. Ramé l.r...@griensu.com:
 
    Hi, I need to create a trigger on a table used by our software, the
   problem is, when I issue a create trigger on this table, it takes
   forever. It doesn't matter if I use pgAdmin, or psql.
  
   The only way to do it is by disconnecting all the instances of the
   program from the database, execute the DDL command, and reconnect the
   program again.
  
   What can be causing this behavior? any workaround?.
 
  Most likely those programs have locks out that are causing the DDL command
  to have to wait until the locks clear.
 
  Disconnecting everyone is the quick workaround.
 
  The real fix is to ensure that your application doesn't hold onto DB locks
  longer than is needed, which won't guarantee that everything moves fast,
  but will help immensely.  Often this is caused by the application holding
  transactions open for long periods of time, which is a bad idea anyway
  since it interferes with vacuum and other maintenance operations.
 
  You can look at the pg_locks table to see what locks are out, to help you
  diagnose what apps are holding locks open.
 
  --
  Bill Moran
  http://www.potentialtech.com
  http://people.collaborativefusion.com/~wmoran/
 

 Thanks Bill, I'll take a look at my app. I hope the problem is on my
 side, the worst case would be the cause of the problem is on the
 connection component I'm using to connect to Postgres (the app is
 written in Delphi with Zeos Lib).


 --
 Leonardo M. Ramé
 Griensu S.A. - Medical IT Córdoba
 Tel.: 0351-4247979



 Well, I'm trying to debug the problem, and found that when I do a simple
 select * from table from my app, then go to pgAdmin, and do select *
 from pg_locks, it shows many locks (23 to be exact).

Those locks are not coming from your query.  Something else is
connecting and acquiring locks.  If your database is busy, this is
normal: every transaction is going to apply a lock of some sort or
another.   The key thing to look out for is the level of lock, what it
is on, and whether or not it has been granted.

Also, you should check: select * from pg_stat_activity; to see who is
executing what queries and if you have any long running transactions
that shouldn't be.

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] DDL commands take forever

2009-12-29 Thread Leonardo M.
On Tue, 2009-12-29 at 14:18 -0500, Merlin Moncure wrote:
 
 
  Well, I'm trying to debug the problem, and found that when I do a simple
  select * from table from my app, then go to pgAdmin, and do select *
  from pg_locks, it shows many locks (23 to be exact).
 
 Those locks are not coming from your query.  Something else is
 connecting and acquiring locks.  If your database is busy, this is
 normal: every transaction is going to apply a lock of some sort or
 another.   The key thing to look out for is the level of lock, what it
 is on, and whether or not it has been granted.
 
 Also, you should check: select * from pg_stat_activity; to see who is
 executing what queries and if you have any long running transactions
 that shouldn't be.
 
 merlin
 

In fact, my app is acquiring the locks. Also, the app doesn't have
long-running transactions, and the locks show up even if I just do a
SELECT; to reproduce the problem I don't have to do a BEGIN ...
COMMIT/ROLLBACK.

I found the problem is in the way I connect the app to the database:
when I set the Transaction Isolation Level to ReadCommitted, the locking
problem appears; when I use the default connection method, the locks
don't appear when I do select * from pg_locks.

This solves the locking problem, but what happens to transactions? Is the
app still working in transaction mode, or just applying changes after
every Insert/Update/Delete?

-- 
Leonardo M. Ramé
Griensu S.A. - Medical IT Córdoba
Tel.: 0351-4247979


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] DDL commands take forever

2009-12-29 Thread Merlin Moncure
2009/12/29 Leonardo M. l.r...@griensu.com:
 On Tue, 2009-12-29 at 14:18 -0500, Merlin Moncure wrote:
 
 
  Well, I'm trying to debug the problem, and found that when I do a simple
  select * from table from my app, then go to pgAdmin, and do select *
  from pg_locks, it shows many locks (23 to be exact).

 Those locks are not coming from your query.  Something else is
 connecting and acquiring locks.  If your database is busy, this is
 normal: every transaction is going to apply a lock of some sort or
 another.   The key thing to look out for is the level of lock, what it
 is on, and whether or not it has been granted.

 Also, you should check: select * from pg_stat_activity; to see who is
 executing what queries and if you have any long running transactions
 that shouldn't be.

 merlin


 In fact, my app is acquiring the locks. Also, the app doesn't have
 long-running transactions, and the locks show up even if I just do a
 SELECT; to reproduce the problem I don't have to do a BEGIN ...
 COMMIT/ROLLBACK.

 I found the problem is in the way I connect the app to the database:
 when I set the Transaction Isolation Level to ReadCommitted, the locking
 problem appears; when I use the default connection method, the locks
 don't appear when I do select * from pg_locks.

 This solves the locking problem, but what happens to transactions? Is the
 app still working in transaction mode, or just applying changes after
 every Insert/Update/Delete?

huh...the default transaction mode _is_ read committed :-).

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Comparing 2 databases

2009-12-29 Thread akp geek
Thanks for the response, I appreciate it. Are there any limitations on using
this one?  I mean, do we have to have the same user and the same password on
both databases?

I have used the command the following way:

  check_postgres.pl --action=same_schema -H 172.  -p 1550
--db=myProdDB  --dbuser=prodUser  --dbpass=prodPwd  --dbhost2=172.x
--db=testDB  --dbuser=testUser  --dbpass=testPwd  --verbose  > difference.txt

What happened was, it complained about the password. Then I tried replacing
testPwd with prodPwd and it started executing, but it prompted for the
password for testUser. That's where I got confused.

One question I have is, is there also an option to specify a schema?

Thanks once again

Regards



On Tue, Dec 29, 2009 at 1:57 PM, Joshua Tolley eggyk...@gmail.com wrote:

  On Tue, Dec 29, 2009 at 11:43:58AM -0500, akp geek wrote:
 Hi all -
 
   I have Postgres running on 2 servers, one production and one
 testing. What would be the best way to compare the 2 databases and find
 out the differences? Can you please advise?
 
 regards

 That depends on what you mean by compare. check_postgres[1] has a schema
 comparison action you can use.

 [1] http://bucardo.org/wiki/Check_postgres

 --
 Joshua Tolley / eggyknap
 End Point Corporation
 http://www.endpoint.com





Re: [GENERAL] DDL commands take forever

2009-12-29 Thread Leonardo M.
On Tue, 2009-12-29 at 15:05 -0500, Merlin Moncure wrote:
  This solves the locking problem, but what happens to transactions? the
  app is still working in transaction mode, or just applying changes after
  every Insert/Update/Delete?.
 
 huh...the default transaction mode _is_ read committed :-).
 
 merlin
 

Merlin, knowing this, I'm asking the developers of the connection
library, because in their code, if I use the default connection mode,
transactions are ignored and the changes are applied immediately
after every Insert, Update or Delete.

-- 
Leonardo M. Ramé
Griensu S.A. - Medical IT Córdoba
Tel.: 0351-4247979


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] memory usage of group by select

2009-12-29 Thread Anthony
Hi all,

I'm running a group by query on a table with over a billion rows and my
memory usage is seemingly growing without bounds.  Eventually the mem usage
exceeds my physical memory and everything starts swapping.  Here is what I
gather to be the relevant info:

My machine has 768 megs of ram.

shared_buffers = 128MB
work_mem = 8MB # this was originally higher, but I brought it down to try to
fix the problem - it hasn't
maintenance_work_mem = 256MB
fsync = off
checkpoint_segments = 30
effective_cache_size = 256MB #this was originally 512MB but I just recently
brought it down - as I expected that didn't affect anything

data=# explain select pid, min(oid) into nd_min from nd group by pid;
                                QUERY PLAN
---------------------------------------------------------------------------
 HashAggregate  (cost=28173891.00..28174955.26 rows=85141 width=8)
   ->  Seq Scan on nd  (cost=0.00..21270355.00 rows=1380707200 width=8)
(2 rows)

data=# \d+ nd
              Table "fullplanet091207osm.nd"
 Column |  Type   | Modifiers | Storage | Description
--------+---------+-----------+---------+-------------
 oid    | integer | not null  | plain   |
 pid    | integer | not null  | plain   |
 ref    | integer |           | plain   |
Indexes:
    "nd_pkey" PRIMARY KEY, btree (pid, oid)
Has OIDs: no

VERSION = 'PostgreSQL 8.4.1 on x86_64-pc-linux-gnu, compiled by GCC
gcc-4.4.real (Ubuntu 4.4.1-3ubuntu3) 4.4.1, 64-bit'


Re: [GENERAL] Planner Row Estimate with Function

2009-12-29 Thread Tom Lane
Michael Fork mfor...@yahoo.com writes:
 I have an index scan on a custom function that is returning a wildly 
 incorrect row estimate that is throwing off the rest of the query planning.  
 The result of the function is roughly unique - there are a handful with 
 multiple entries - but the planner is estimating 227,745 rows.  I re-ran 
 ANALYZE on the table and the results did not change.  Any suggestions on how 
 to get more accurate planner result?

What PG version is this exactly?

Also, what happened to the type='CREDIT' condition in your query?  Is
that a partial index?

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] memory usage of group by select

2009-12-29 Thread Anthony
On Tue, Dec 29, 2009 at 3:41 PM, Anthony o...@inbox.org wrote:

 I'm running a group by query on a table with over a billion rows and my
 memory usage is seemingly growing without bounds.  Eventually the mem usage
 exceeds my physical memory and everything starts swapping.


I guess I didn't ask my question.  Is this expected behavior?  Is there any
way for me to adjust my settings to avoid using so much memory?


Re: [GENERAL] DDL commands take forever

2009-12-29 Thread Merlin Moncure
2009/12/29 Leonardo M. l.r...@griensu.com:
 On Tue, 2009-12-29 at 15:05 -0500, Merlin Moncure wrote:
  This solves the locking problem, but what happens to transactions? the
  app is still working in transaction mode, or just applying changes after
  every Insert/Update/Delete?.

 huh...the default transaction mode _is_ read committed :-).

 merlin


 Merlin, knowing this, I'm asking the developers of the connection
 library, because in their code, if I use the default connection mode,
 transactions are ignored and the changes are applied immediately
 after every Insert, Update or Delete.

right. IIRC the zeos library has a transaction mode that controls if
commits are explicit or invoked via the library commit method.  either
way, you need to make sure that transactions are not left
open...this can lead (as you noticed) to unexpected problems like
blocking queries, performance problems, data loss, etc.

if you notice the slow ddl issue again, throw an immediate select *
from pg_locks and look for granted = f.  If you find some and they
match your pid, then you know that you have a transaction open that is
blocking you.  From there, it's just a matter of using pg_locks and
pg_stat_activity to narrow down who/what is doing it.  You should
especially take note of 'idle in transaction' in
pg_stat_activity...this is a classic red flag of leaky application code.
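
for example (a sketch for 8.4, where pg_stat_activity has procpid and
current_query):

-- who is waiting on an ungranted lock?
SELECT pid, locktype, mode, relation::regclass
  FROM pg_locks
 WHERE NOT granted;

-- who is sitting idle inside an open transaction, and since when?
SELECT procpid, usename, xact_start, current_query
  FROM pg_stat_activity
 WHERE current_query = '<IDLE> in transaction';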

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] DDL commands take forever

2009-12-29 Thread Leonardo M.
On Tue, 2009-12-29 at 15:44 -0500, Merlin Moncure wrote:
 right. IIRC the zeos library has a transaction mode that controls if
 commits are explicit or invoked via the library commit method.  either
 way, you need to make sure that transactions are not left
 open...this can lead (as you noticed) to unexpected problems like
 blocking queries, performance problems, data loss, etc.
 
 if you notice the slow ddl issue again, throw an immediate select *
 from pg_locks and look for granted = f.  If you find some and they
 match your pid, then you know that you have a transaction open that is
 blocking you.  From there, it's just a matter of using pg_locks and
 pg_stat_activity to narrow down who/what is doing it.  You should
 especially take note of 'idle in transaction' in
 pg_stat_activity...this is a classic red flag of leaky application code.
 
 merlin

I did the select * from pg_locks right after your answer, and found that
almost all locks originated by my app have granted = t; also, all are
IDLE in transaction. The interesting thing is that the app is doing
only Selects, without opening transactions.


-- 
Leonardo M. Ramé
Griensu S.A. - Medical IT Córdoba
Tel.: 0351-4247979


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] DDL commands take forever

2009-12-29 Thread Merlin Moncure
2009/12/29 Leonardo M. l.r...@griensu.com:
 On Tue, 2009-12-29 at 15:44 -0500, Merlin Moncure wrote:
 right. IIRC the zeos library has a transaction mode that controls whether
 commits are explicit or invoked via the library commit method.  either
 way, you need to make sure that transactions are not left
 open...this can lead (as you noticed) to unexpected problems like
 blocking queries, performance problems, data loss, etc.

 if you notice the slow ddl issue again, throw an immediate select *
 from pg_locks and look for granted = f.  If you find some and they
 match your pid, then you know that you have a transaction open that is
 blocking you.  From there, it's just a matter of using pg_locks and
 pg_stat_activity to narrow down who/what is doing it.  You should
 especially take note of 'idle in transaction' in
 pg_stat_activity...this is a classic red flag of leaky application code.

 merlin

 I did the select * from pg_locks right after your answer, and found that
 almost all locks originating from my app have granted = t; also, all are
 'IDLE in transaction'. The interesting thing is that the app is doing
 only SELECTs, without opening transactions.

ok, the problem is clear: find out why those happened (a client issued
'begin' without a subsequent 'commit') and your problem will go away.
Turn on SQL logging if you have to.
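
one low-friction way to do that (a sketch; needs superuser, 'yourdb' is a
placeholder, and it only affects sessions opened after the change):

alter database yourdb set log_statement = 'all';   -- log every statement
-- ...watch the server log for a BEGIN with no matching COMMIT/ROLLBACK...
alter database yourdb reset log_statement;         -- put it back afterwards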

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] DDL commands take forever

2009-12-29 Thread Raymond O'Donnell
On 29/12/2009 20:59, Leonardo M. Ramé wrote:

 all are 'IDLE in transaction'. The interesting thing is that the app is
 doing only SELECTs, without opening transactions.

Everything in PG happens in a transaction, whether you open one
explicitly or not.
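
A quick illustration (a sketch in psql):

SELECT 1;   -- runs in its own implicit transaction and commits immediately

BEGIN;
SELECT 1;   -- the session now shows 'idle in transaction' in pg_stat_activity
COMMIT;     -- ...until the client finally sends COMMIT or ROLLBACK

A driver that issues BEGIN on your behalf produces exactly the second
pattern, even for pure SELECTs.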

Ray.


-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] memory usage of group by select

2009-12-29 Thread Alvaro Herrera
Anthony wrote:
 On Tue, Dec 29, 2009 at 3:41 PM, Anthony o...@inbox.org wrote:
 
  I'm running a group by query on a table with over a billion rows and my
  memory usage is seemingly growing without bounds.  Eventually the mem usage
  exceeds my physical memory and everything starts swapping.
 
 
 I guess I didn't ask my question.  Is this expected behavior?  Is there any
 way for me to adjust my settings to avoid using so much memory?

It's expecting 85k distinct groups.  If that's not accurate, then
HashAggregate would use more memory than expected.  See if you can make
it work by setting enable_hashagg = off.

If that works, good -- the real solution is different.  Maybe you need
to ANALYZE more, or increase the size of the stats bin for this column.
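
Concretely (a sketch; "bigtable" and "pid" stand in for the real table and
grouping column):

SET enable_hashagg = off;  -- session-local; the planner should now pick GroupAggregate
EXPLAIN SELECT pid, count(*) FROM bigtable GROUP BY pid;

ALTER TABLE bigtable ALTER COLUMN pid SET STATISTICS 1000;  -- larger sample for this column
ANALYZE bigtable;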

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] memory usage of group by select

2009-12-29 Thread Tom Lane
Alvaro Herrera alvhe...@commandprompt.com writes:
 It's expecting 85k distinct groups.  If that's not accurate, then
 HashAggregate would use more memory than expected.  See if you can make
 it work by setting enable_hashagg = off.

 If that works, good -- the real solution is different.  Maybe you need
 to ANALYZE more, or increase the size of the stats bin for this column.

If ANALYZE consistently underestimates the number of distinct values,
you may have to force matters with ALTER TABLE SET STATISTICS DISTINCT.
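
For reference, that syntax as it stands in the development tree (a sketch
with placeholder names; not available in 8.4):

ALTER TABLE bigtable ALTER COLUMN pid SET (n_distinct = 76000000);  -- use the known real count
ANALYZE bigtable;  -- the override is applied at the next ANALYZE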

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] memory usage of group by select

2009-12-29 Thread Anthony
On Tue, Dec 29, 2009 at 4:09 PM, Alvaro Herrera
alvhe...@commandprompt.com wrote:

 It's expecting 85k distinct groups.  If that's not accurate, then
 HashAggregate would use more memory than expected.


Great diagnosis.  There are actually about 76 million distinct groups.


 See if you can make it work by setting enable_hashagg = off.


Will do.  Maybe overnight tonight.  The explain is estimating that to take
10 times as long, and I can't afford to do that right now.

If that works, good -- the real solution is different.  Maybe you need
 to ANALYZE more, or increase the size of the stats bin for this column.


On Tue, Dec 29, 2009 at 5:04 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 If ANALYZE consistently underestimates the number of distinct values,
 you may have to force matters with ALTER TABLE SET STATISTICS DISTINCT.


Seems to be an 8.5 feature.  Otherwise, it'd be perfect since I happen to
know the actual number (it's the row count of another table).

I've run the analyze again and it's still estimating around 85K distinct
pids.  That's with the default setting of 100 for default_statistics_target,
but I'm not sure I want to mess with that right now (this table is just
going to be dropped in a few days after I manipulate it a bit more).
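
For what it's worth, a quick way to compare the stored estimate against
reality (a sketch; "bigtable" stands in for the real table, and the second
query is exact but expensive on a billion rows):

SELECT n_distinct FROM pg_stats
WHERE tablename = 'bigtable' AND attname = 'pid';

SELECT count(DISTINCT pid) FROM bigtable;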

The only question I really have is this: if I fix this number and leave
enable_hashagg on, is it just going to have the same effect (for this one
query) as turning enable_hashagg off?  Because if I'm just going to have to
bite the bullet and run the query with GroupAggregate
(cost=287511359.15..297867743.71), I guess I'll just have to do that.  Or
try to figure out another way to get what it is I'm trying to get (I should
have just imported the right numbers from the beginning, but the import took
3 days so I don't feel like doing that again).

In any case, thanks a lot for the help, both of you.


Re: [GENERAL] Installation of Postgis/postgresql

2009-12-29 Thread Nick
Problem is we already have gcc-c++, but when we try to do a ./configure
for the geos 3.2.0 install it gives errors saying it is looking for g++.
According to our tech specialist, our version of gcc-c++ is the highest
upgrade and should have g++ included. He is actually doing the
install...trying to find more info so we can move on from the install
stage. Any more ideas appreciated...Thanks, Nick

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Installation of Postgis/postgresql

2009-12-29 Thread Nick
On Dec 28, 8:22 pm, pie...@hogranch.com (John R Pierce) wrote:
 Nick wrote:
  'g++' or g++ says -bash g++: command not found

  distro is red hat

 Assuming thats RHEL5,

     yum install gcc-c++

 if its RHEL4 or earlier, use up2date instead.   either of these will
 require a RHN subscription to be activated.

 if it is in fact, centos 3,4,5,

     yum install gcc-c++

 and you don't need any subscription


More info...RHEL5 and postgis version is 1.4.0.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Installation of Postgis/postgresql

2009-12-29 Thread Nick
On Dec 29, 1:21 pm, Nick nick.uebel...@noaa.gov wrote:
 On Dec 28, 8:22 pm, pie...@hogranch.com (John R Pierce) wrote:



  Nick wrote:
   'g++' or g++ says -bash g++: command not found

   distro is red hat

  Assuming thats RHEL5,

      yum install gcc-c++

  if its RHEL4 or earlier, use up2date instead.   either of these will
  require a RHN subscription to be activated.

  if it is in fact, centos 3,4,5,

      yum install gcc-c++

  and you don't need any subscription


 More info...RHEL5 and postgis version is 1.4.0.

thanks

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Installation of Postgis/postgresql

2009-12-29 Thread Joshua D. Drake
On Tue, 2009-12-29 at 13:21 -0800, Nick wrote:
 On Dec 28, 8:22 pm, pie...@hogranch.com (John R Pierce) wrote:
  Nick wrote:
   'g++' or g++ says -bash g++: command not found
 
   distro is red hat
 
  Assuming thats RHEL5,
 
  yum install gcc-c++
 
  if its RHEL4 or earlier, use up2date instead.   either of these will
  require a RHN subscription to be activated.
 
  if it is in fact, centos 3,4,5,
 
  yum install gcc-c++
 
  and you don't need any subscription
 
 
 More info...RHEL5 and postgis version is 1.4.0.

Why are you not using the yum repository that gives you all of this
packaged?

 


-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564
Consulting, Training, Support, Custom Development, Engineering
Respect is earned, not gained through arbitrary and repetitive use of Mr. or 
Sir.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Comparing 2 databases

2009-12-29 Thread Joshua Tolley
On Tue, Dec 29, 2009 at 03:21:18PM -0500, akp geek wrote:
Thanks for the response. I appreciate it. Are there any limitations on
using this one?  Does it mean that we have to have the same user on both
databases and the same passwords?
 
I have used the command the following way:
 
  check_postgres.pl --action=same_schema -H 172.  -p 1550 \
    --db=myProdDB --dbuser=prodUser --dbpass=prodPwd \
    --dbhost2=172.x --db=testDB --dbuser=testUser --dbpass=testPwd \
    --verbose > difference.txt
 
What happened was, it complained about the password. Then I tried
replacing the testPwd with prodPwd, and it started executing, but it
prompted for a password for testUser. That's where I got confused.

You might try a pgpass file[1] and skip providing the passwords on the command
line. 
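
The file is one line per connection, in the form
hostname:port:database:username:password, and must be chmod 600. A sketch
with placeholder hosts (the real addresses are elided above):

172.16.0.1:1550:myProdDB:prodUser:prodPwd
172.16.0.2:5432:testDB:testUser:testPwd

A * acts as a wildcard in any of the first four fields.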

One question I have is, is there an option to specify a schema also?

Check the docs under BASIC FILTERING[2].  You can tell it to ignore objects
with certain names, or to include only those objects with the given names.

[1] http://www.postgresql.org/docs/current/interactive/libpq-pgpass.html
[2] http://bucardo.org/check_postgres/check_postgres.pl.html#basic_filtering

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] Comparing 2 databases

2009-12-29 Thread DM
Also check this out, very interesting: it can compare data between the DBs
(tables/views):

http://www.zidsoft.com/

http://www.zidsoft.com/screenshots.html


Thanks

Deepak

On Tue, Dec 29, 2009 at 4:37 PM, Joshua Tolley eggyk...@gmail.com wrote:

 On Tue, Dec 29, 2009 at 03:21:18PM -0500, akp geek wrote:
 Thanks for the response. I appreciate it. Are there any limitations on
 using this one?  Does it mean that we have to have the same user on both
 databases and the same passwords?
 
 I have used the command the following way:
 
   check_postgres.pl --action=same_schema -H 172.  -p 1550 \
     --db=myProdDB --dbuser=prodUser --dbpass=prodPwd \
     --dbhost2=172.x --db=testDB --dbuser=testUser --dbpass=testPwd \
     --verbose > difference.txt
 
 What happened was, it complained about the password. Then I tried
 replacing the testPwd with prodPwd, and it started executing, but it
 prompted for a password for testUser. That's where I got confused.

 You might try a pgpass file[1] and skip providing the passwords on the
 command line.

 One question I have is, is there an option to specify schema also

 Check the docs under BASIC FILTERING[2].  You can tell it to ignore objects
 with certain names, or to include only those objects with the given names.

 [1] http://www.postgresql.org/docs/current/interactive/libpq-pgpass.html
 [2]
 http://bucardo.org/check_postgres/check_postgres.pl.html#basic_filtering

 --
 Joshua Tolley / eggyknap
 End Point Corporation
 http://www.endpoint.com





[GENERAL] DataBase Problem

2009-12-29 Thread Premila Devi
I am having a problem:

Caused by: org.springframework.transaction.TransactionSystemException: Could not roll back Hibernate transaction; nested exception is org.hibernate.TransactionException: JDBC rollback failed
  at org.springframework.orm.hibernate3.HibernateTransactionManager.doRollback(HibernateTransactionManager.java:677)
  at org.springframework.transaction.support.AbstractPlatformTransactionManager.doRollbackOnCommitException(AbstractPlatformTransactionManager.java:850)
  at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:737)
  at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:678)
  at org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:321)
  at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:116)
  at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
  at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
  at $Proxy57.generateReport(Unknown Source)
  ... 106 more
Caused by: org.hibernate.TransactionException: JDBC rollback failed
  at org.hibernate.transaction.JDBCTransaction.rollback(JDBCTransaction.java:204)
  at org.springframework.orm.hibernate3.HibernateTransactionManager.doRollback(HibernateTransactionManager.java:674)
  ... 114 more
Caused by: java.sql.SQLException: Couldn't perform the operation rollback: You can't perform any operations on this connection. It has been automatically closed by Proxool for some reason (see logs).
  at org.logicalcobwebs.proxool.WrappedConnection.invoke(WrappedConnection.java:207)
  at org.logicalcobwebs.proxool.WrappedConnection.intercept(WrappedConnection.java:87)
  at org.postgresql.PGConnection$$EnhancerByProxool$$4e6ef3b1.rollback(generated)
  at org.hibernate.transaction.JDBCTransaction.rollbackAndResetAutoCommit(JDBCTransaction.java:217)
  at org.hibernate.transaction.JDBCTransaction.rollback(JDBCTransaction.java:196)
  ... 115 more

Please guide me. Thank you.

 



Re: [GENERAL] DataBase Problem

2009-12-29 Thread Craig Ringer

On 30/12/2009 9:49 AM, Premila Devi wrote:


Caused by: java.sql.SQLException: Couldn't perform the operation
rollback: You can't perform any operations on this connection. It has
been automatically closed by Proxool for some reason (see logs).


see logs

Look at your proxool logs and see why the connection was closed. The 
error message itself gives you the next step.


I suggest reading this:
  http://wiki.postgresql.org/wiki/Guide_to_reporting_problems
for some hints in case you need to ask a more detailed follow-up.

--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Installation of Postgis/postgresql

2009-12-29 Thread Nick
On Dec 29, 4:19 pm, j...@commandprompt.com (Joshua D. Drake) wrote:
 On Tue, 2009-12-29 at 13:21 -0800, Nick wrote:
  On Dec 28, 8:22 pm, pie...@hogranch.com (John R Pierce) wrote:
   Nick wrote:
'g++' or g++ says -bash g++: command not found

distro is red hat

   Assuming thats RHEL5,

       yum install gcc-c++

   if its RHEL4 or earlier, use up2date instead.   either of these will
   require a RHN subscription to be activated.

   if it is in fact, centos 3,4,5,

       yum install gcc-c++

   and you don't need any subscription


  More info...RHEL5 and postgis version is 1.4.0.

 Why are you not using the yum repository that gives you all of this
 packaged?



 --
 PostgreSQL.org Major Contributor
 Command Prompt, Inc:http://www.commandprompt.com/- 503.667.4564
 Consulting, Training, Support, Custom Development, Engineering
 Respect is earned, not gained through arbitrary and repetitive use or Mr. or 
 Sir.


More info: did a

rpm -qa | grep gcc

and it gave me

gcc-4.1.2-46el14.4.1

The thinking was that this included g++; is that not the case? Or do
we need the gcc-c++ package as well? Thanks, hope you can help.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Idle processes chewing up CPU?

2009-12-29 Thread Tom Lane
Brendan Hill brend...@jims.net writes:
 I think I've confirmed the fix. Using a dirty disconnect generator, I was
 able to reliably recreate the problem within about 30-60 seconds. The
 symptoms were the same as before; however, it occurred around SSL_write
 instead of SSL_read - I assume this was due to the artificial nature of the
 dirty disconnect (it is easier for the client to artificially break the
 connection while waiting/receiving than while sending).

 The solution you proposed solved it for SSL_write (ran for 30 minutes, no
 runaway processes), and I think it's safe to assume SSL_read too. So I
 suggest two additions:

Done, and also the same on the libpq side, since presumably it could
happen at that end as well.  I suspect there is some underlying OpenSSL
bug we are masking here, but it's cheap enough that we might as well
just do it.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Planner Row Estimate with Function

2009-12-29 Thread Michael Fork
springboard_v2=# SELECT version();
 version
--
 PostgreSQL 8.3.7 on amd64-portbld-freebsd7.2, compiled by GCC cc (GCC) 4.2.1 
20070719  [FreeBSD]
(1 row)

Yes, this is a partial index.  I should have included the index definition 
earlier:

# CREATE INDEX CONCURRENTLY idx_event_card_id ON 
trail.event(parsecardidfromreferencecode(reference_code)) WHERE type = 
'CREDIT';

Thanks.


Michael



- Original Message 
From: Tom Lane t...@sss.pgh.pa.us
To: Michael Fork mfor...@yahoo.com
Cc: pgsql-general@postgresql.org
Sent: Tue, December 29, 2009 3:43:06 PM
Subject: Re: [GENERAL] Planner Row Estimate with Function 

Michael Fork mfor...@yahoo.com writes:
 I have an index scan on a custom function that is returning a wildly 
 incorrect row estimate that is throwing off the rest of the query planning.  
 The result of the function is roughly unique - there are a handful with 
 multiple entries - but the planner is estimating 227,745 rows.  I re-ran 
 ANALYZE on the table and the results did not change.  Any suggestions on how 
 to get a more accurate planner result?

What PG version is this exactly?

Also, what happened to the type='CREDIT' condition in your query?  Is
that a partial index?

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Planner Row Estimate with Function

2009-12-29 Thread Michael Fork
I tried the trick below and the planner estimate was roughly the same:

springboard_v2=# explain select * from trail.event where type='CREDIT' and 
lpad(CAST('57729970' AS text), 13, '0') = 
ANY(parsecardidfromreferencecode(reference_code));  
 QUERY PLAN
-
 Bitmap Heap Scan on event  (cost=1669366.06..15120311.84 rows=2178778 
width=103)
   Recheck Cond: (type = 'CREDIT'::text)
   Filter: ('057729970'::text = ANY 
(parsecardidfromreferencecode(reference_code)))
   ->  Bitmap Index Scan on idx_event_card_id  (cost=0.00..1668821.37 
rows=44565021 width=0)

Thanks.


Michael



- Original Message 
From: Pavel Stehule pavel.steh...@gmail.com
To: Michael Fork mfor...@yahoo.com
Cc: pgsql-general@postgresql.org
Sent: Tue, December 29, 2009 1:16:10 PM
Subject: Re: [GENERAL] Planner Row Estimate with Function

2009/12/29 Michael Fork mfor...@yahoo.com:
 Pavel,

 Thanks for the suggestion but unfortunately the planner estimate was not 
 really affected:

Any string estimation is inexact.

You can use the following dirty trick:


http://www.postgres.cz/index.php/PostgreSQL_SQL_Tricks#Using_IMMUTABLE_functions_as_hints_for_the_optimizer

Regards
Pavel

QUERY PLAN
 ---
  Index Scan using idx_event_card_id on event  (cost=0.00..401311.59 
 rows=223890 width=103)
   Index Cond: (substring(reference_code, 3, 13) = '057729970'::text)
 (2 rows)
 Thanks.


 Michael



 - Original Message 
 From: Pavel Stehule pavel.steh...@gmail.com
 To: Michael Fork mfor...@yahoo.com
 Cc: pgsql-general@postgresql.org
 Sent: Tue, December 29, 2009 12:18:52 PM
 Subject: Re: [GENERAL] Planner Row Estimate with Function

 Hello

 2009/12/29 Michael Fork mfor...@yahoo.com:
 I have an index scan on a custom function that is returning a wildly 
 incorrect row estimate that is throwing off the rest of the query planning.  
 The result of the function is roughly unique - there are a handful with 
 multiple entries - but the planner is estimating 227,745 rows.  I re-ran 
 ANALYZE on the table and the results did not change.  Any suggestions on how 
 to get a more accurate planner result?


 Function definition:

 CREATE FUNCTION parsecardidfromreferencecode(text) RETURNS text AS $_$
 BEGIN
 RETURN SUBSTRING($1 FROM 3 FOR 13);
 END;
 $_$ LANGUAGE plpgsql IMMUTABLE;


 try
 CREATE FUNCTION parsecardidfromreferencecode(text) RETURNS text AS $_$
 SELECT SUBSTRING($1 FROM 3 FOR 13);
 $_$ LANGUAGE sql;

 regards
 Pavel Stehule

 Explain output:

 # explain select * from trail.event where type='CREDIT' and 
 parsecardidfromreferencecode(reference_code) = lpad(CAST('57729970' AS 
 text), 13, '0');
QUERY PLAN
 ---
  Index Scan using idx_event_card_id on event  (cost=0.25..468642.89 
 rows=227745 width=104)
   Index Cond: (parsecardidfromreferencecode(reference_code) = 
 '057729970'::text)

 Statistics:

 # SELECT null_frac, avg_width, n_distinct, most_common_vals, 
 most_common_freqs, correlation, histogram_bounds FROM pg_stats WHERE 
 tablename = 'idx_event_card_id';
 -[ RECORD 1 ]-----+------------------------------------------------------
 null_frac         | 0.0935673
 avg_width         | 17
 n_distinct        | -1
 most_common_vals  |
 most_common_freqs |
 correlation       | 0.672617
 histogram_bounds  | {007932138,029448430,033432203,037841145,042171261,046399858,050938838,055122354,0003139385377,1220821134582,1261876426760}
 Thanks.


 Michael







-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Planner Row Estimate with Function

2009-12-29 Thread Tom Lane
Michael Fork mfor...@yahoo.com writes:
 Also, what happened to the type='CREDIT' condition in your query?  Is
 that a partial index?

 Yes, this is a partial index.  I should have included the index definition 
 earlier:

Ah.  The optimizer is ignoring the index stats on the grounds that
they are not representative of the whole table.  I'm not entirely sure
offhand whether it would be safe to use them anyway if the index
predicate is known to match the query --- it seems a bit risky but on
the other hand it'd probably be better than having no stats at all.

What you can do to get a better estimate is to create a non-partial
index on parsecardidfromreferencecode(reference_code).  It's fairly
likely that if you do that, maintaining the partial index as well
is not worth your time; but that's a decision you'd have to make.
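
In SQL terms that suggestion is roughly (a sketch; the index name here is
made up, while the table and function come from earlier in the thread):

CREATE INDEX CONCURRENTLY idx_event_card_id_all
    ON trail.event (parsecardidfromreferencecode(reference_code));
ANALYZE trail.event;  -- gathers expression statistics over all rows

after which the planner has stats covering the whole table, not just the
type = 'CREDIT' subset.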

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Planner Row Estimate with Function

2009-12-29 Thread Michael Fork
That solved it.
 Thanks!


Michael



- Original Message 
From: Tom Lane t...@sss.pgh.pa.us
To: Michael Fork mfor...@yahoo.com
Cc: pgsql-general@postgresql.org
Sent: Tue, December 29, 2009 11:19:42 PM
Subject: Re: [GENERAL] Planner Row Estimate with Function 

Michael Fork mfor...@yahoo.com writes:
 Also, what happened to the type='CREDIT' condition in your query?  Is
 that a partial index?

 Yes, this is a partial index.  I should have included the index definition 
 earlier:

Ah.  The optimizer is ignoring the index stats on the grounds that
they are not representative of the whole table.  I'm not entirely sure
offhand whether it would be safe to use them anyway if the index
predicate is known to match the query --- it seems a bit risky but on
the other hand it'd probably be better than having no stats at all.

What you can do to get a better estimate is to create a non-partial
index on parsecardidfromreferencecode(reference_code).  It's fairly
likely that if you do that, maintaining the partial index as well
is not worth your time; but that's a decision you'd have to make.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general