Re: [GENERAL] select distinct and index usage

2008-04-07 Thread Martijn van Oosterhout
On Sun, Apr 06, 2008 at 07:32:58PM -0400, David Wilson wrote:
> I have a reasonably large table (~75m rows,~18gb) called "vals". It
> includes an integer datestamp column with approximately 4000 unique
> entries across the rows; there is a normal btree index on the
> datestamp column. When I attempt something like "select distinct
> datestamp from vals", however, explain tells me it's doing a
> sequential scan:

I'm a bit late to the party, but someone had a similar problem a while
back and solved it with an SRF as follows (pseudo-code):

BEGIN
  curr := (SELECT field FROM table ORDER BY field LIMIT 1 )
  RETURN NEXT curr;

  WHILE( curr )
curr := (SELECT field FROM table WHERE field > curr ORDER BY field LIMIT 1 )
RETURN NEXT curr;
  END
END

If you have 5000 unique values it will do 5000 index lookups, which would
normally be bad, except that it beats scanning 75 million rows, as in your
case. Whether it's faster is something you'll have to test, but it's another
approach to the problem.
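As a concrete (and purely illustrative) form of the sketch above, here is one way it could be written as a real plpgsql set-returning function, assuming an integer column datestamp on a table vals with a btree index, as in the original question:

```sql
-- Hypothetical "skip-scan" SRF; each iteration is one cheap index probe
-- past the current value, so the cost is roughly one probe per distinct value.
CREATE OR REPLACE FUNCTION distinct_datestamps()
RETURNS SETOF integer AS $$
DECLARE
    curr integer;
BEGIN
    -- Grab the smallest value via the index.
    SELECT datestamp INTO curr FROM vals ORDER BY datestamp LIMIT 1;
    WHILE curr IS NOT NULL LOOP
        RETURN NEXT curr;
        -- Jump to the next value strictly greater than the current one.
        SELECT datestamp INTO curr FROM vals
         WHERE datestamp > curr ORDER BY datestamp LIMIT 1;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql STABLE;

-- Usage: SELECT * FROM distinct_datestamps();
```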

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> Please line up in a tree and maintain the heap invariant while 
> boarding. Thank you for flying nlogn airlines.




Re: [GENERAL] Most Occurring Value

2008-04-07 Thread Volkan YAZICI
Mike Ginsburg <[EMAIL PROTECTED]> writes:
> There is probably a really simple solution for this problem, but for
> the life of me I can't seem to think of it.  I have three tables
>
> --contains u/p for all users in the site
> TABLE users (user_id INT primary key, username VARCHAR(50), password TEXT)
> --list of all possible events (login, logout, timeout)
> TABLE events (event_id INT primary key, event VARCHAR(255))
> --logs the activity of all users logging in/out, etc
> TABLE log (log_id INT primary key, user_id INT REFERENCES users,
> event_id INT REFERENCES event);
>
> How would I query to find out which user has the most activity?
> SELECT user_id, COUNT(event_id)
> FROM log
> GROUP BY (user_id)
> HAVING COUNT(event_id) = ???

SELECT user_id, count(event_id)
  FROM log
 GROUP BY user_id
HAVING count(event_id) =
       (SELECT max(c) FROM (SELECT count(event_id) AS c
                              FROM log GROUP BY user_id) AS s);

or

SELECT user_id, count(event_id)
  FROM log
 GROUP BY user_id
 ORDER BY count(event_id) DESC
 LIMIT 1;


Regards.

P.S. It'd be better if you sent such questions to the pgsql-sql mailing
 list.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] tsvector_update_trigger throws error "column is not of tsvector type"

2008-04-07 Thread Markus Wollny
Hi!

Tom Lane wrote:
> This is leaping to conclusions, but what I suspect is that you've got
> two types "tsvector" in your database and the column is the wrong
> one.
> This situation is not too hard to get into if you try to restore a
> dump from an old database that used contrib/tsearch2 --- the dump may
> create a partially broken type "public.tsvector" while the built-in
> pg_catalog.tsvector still remains.   

It's a fair suspicion, but I have been in the fortunate situation of having 
tsearch2 installed in its own schema in 8.2.4; so I dumped the old db without 
the tsearch2 schema like this, using 8.3.1's pg_dump on the new machine:
pg_dump -h123.123.123.123 -U postgres -N tsearch2 -s community > 
community.schema.sql
pg_dump -h123.123.123.123 -U postgres -N tsearch2 -a community -Fc > 
community.data.pg

Then I edited community.schema.sql, doing these two sed's:

sed -e 's/tsearch2\.tsvector/tsvector/g' community.schema.sql | \
sed -e 's/idxfti tsearch2\.gist_tsvector_ops/idxfti/g' - > 
community.schema.sql.83.tmp

Afterwards I replaced all the old trigger-declarations for the update-trigger 
with the new style, using tsvector_update_trigger.

Then I created a new 8.3-DB, imported the tsearch2-compatibility-script like 
this:

psql -U postgres community < /opt/pgsql/share/contrib/tsearch2.sql

And only then did I import the edited schema.

Afterwards I restored the dump like this:

pg_restore --disable-triggers -U postgres -v -Fc -d community community.data.pg

There haven't been any errors during the import; everything went fine. The 
restored database doesn't have a tsearch2 schema any more. I scanned through 
the edited schema definition which I imported and there's no CREATE TYPE in 
there at all. I checked the public schema and there's no tsvector there either. 
So it must be the built-in tsvector type alright; it seems to be there and to 
work perfectly:

community=# select 'foo'::tsvector;
 tsvector
--
 'foo'
(1 row)

community=# select to_tsvector('foo');
 to_tsvector
-
 'foo':1
(1 row)

community=# SELECT message_id FROM ct_com_board_message WHERE idxfti @@ 
to_tsquery('Starcraft') LIMIT 3;
 message_id

5669043
5671762
5670197
(3 rows)

I can even update that idxfti-column manually like so:

community=# UPDATE ct_com_board_message 
SET idxfti = to_tsvector(coalesce(title,'')) || 
 to_tsvector(coalesce(text,'')) || 
 to_tsvector(coalesce(user_login,'')) WHERE 
message_id = 6575830;
UPDATE 1

And when I use a custom-trigger-function, there's no problem either:

CREATE FUNCTION board_message_trigger() RETURNS trigger AS $$
begin
  new.idxfti :=
 to_tsvector(coalesce(new.title,'')) || 
 to_tsvector(coalesce(new.text,'')) || 
 to_tsvector(coalesce(new.user_login,''));
  return new;
end
$$ LANGUAGE plpgsql;

CREATE TRIGGER "tsvectorupdate"
BEFORE 
INSERT OR UPDATE 
ON "public"."ct_com_board_message"
FOR EACH ROW 
EXECUTE PROCEDURE board_message_trigger();

community=# UPDATE ct_com_board_message set count_reply = count_reply where 
message_id = 6575830;
UPDATE 1

community=# SELECT message_id FROM ct_com_board_message WHERE idxfti @@ 
to_tsquery('markus') AND message_id = 6575830 LIMIT 3;
 message_id

6575830
(1 row)

So everything's working as expected apart from that built-in trigger function.  
 
> There's some hints in the manual about safe migration from tsearch2
> to built-in tsearch: 
> http://www.postgresql.org/docs/8.3/static/textsearch-migration.html

I read that carefully before I went on that journey (that's why I loaded the 
new contrib/tsearch2 compatibility module), but I didn't find anything helpful 
regarding this situation.

This is very puzzling. I'll resort to writing custom trigger-functions for the 
time being.

Kind regards

   Markus
 




Computec Media AG
Sitz der Gesellschaft und Registergericht: Fürth (HRB 8818)
Vorstandsmitglieder: Johannes S. Gözalan (Vorsitzender) und Rainer Rosenbusch
Vorsitzender des Aufsichtsrates: Jürg Marquard 
Umsatzsteuer-Identifikationsnummer: DE 812 575 276





Re: [GENERAL] Cannot use a standalone backend to VACUUM in "postgres"

2008-04-07 Thread Manuel Sugawara
Manuel Sugawara <[EMAIL PROTECTED]> writes:

> Alvaro Herrera <[EMAIL PROTECTED]> writes:
>
>> Hi Manuel,
>
> Hi Alvaro!
>
>> I suggest you look for temp tables that have not been reclaimed.
>> We've had a couple of reports where leftover temp tables have
>> stopped the frozen-xid counter from advancing.  (They would have a
>> very old relfrozenxid.)
>
> Thank you very much for the suggestion. Any pointers on how to do
> that? A quick search on Google didn't show anything relevant.

I'll look into pg_class, of course. Somehow I was thinking of something
else. Thanks again.

Regards,
Manuel.



Re: [GENERAL] tsvector_update_trigger throws error "column is not of tsvector type"

2008-04-07 Thread Tom Lane
"Markus Wollny" <[EMAIL PROTECTED]> writes:
> Now when I do anything that fires the trigger like
> UPDATE ct_com_board_message set count_reply = 1 where message_id = 12345;
> I get an error
> ERROR:  column "idxfti" is not of tsvector type

This is leaping to conclusions, but what I suspect is that you've got
two types "tsvector" in your database and the column is the wrong one.
This situation is not too hard to get into if you try to restore a dump
from an old database that used contrib/tsearch2 --- the dump may create
a partially broken type "public.tsvector" while the built-in
pg_catalog.tsvector still remains.
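A quick way to check for this situation (hedged: a catalog probe added for illustration, not part of the original reply) is to list every type named tsvector and the schema it lives in:

```sql
-- Two rows here (e.g. public and pg_catalog) would confirm the
-- duplicate-type theory; one row in pg_catalog means the type is clean.
SELECT n.nspname AS schema, t.typname
  FROM pg_type t
  JOIN pg_namespace n ON n.oid = t.typnamespace
 WHERE t.typname = 'tsvector';
```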

There's some hints in the manual about safe migration from tsearch2
to built-in tsearch:
http://www.postgresql.org/docs/8.3/static/textsearch-migration.html

The whole thing is a bit of a mess :-(, and has certainly helped make
it clear that we need to invent some better-defined module concept to
help with major upgrades/replacements of add-on modules.

regards, tom lane



Re: [GENERAL] select distinct and index usage

2008-04-07 Thread Stephen Denne
Alban Hertroys wrote
> Something that might help you, but I'm not sure whether it 
> might hurt  
> the performance of other queries, is to cluster that table on  
> val_datestamp_idx. That way the records are already (mostly) sorted  
> on disk in the order of the datestamps, which seems to be the brunt  
> of above query plan.

I've a question about this suggestion, in relation to what the cost estimation 
calculation does, or could possibly do:
If there are 4000 distinct values in the index, found randomly amongst 75 
million rows, then you might be able to check the visibility of all those index 
values by reading a smaller number of disk pages than if the table were 
clustered by that index.
As an example, say there are 50 rows per page: at a minimum you could be very 
lucky and determine that they were all visible by reading only 80 data pages. 
More likely you'd be able to determine that through a few hundred pages. 
If the table were clustered by an index on that field, you'd have to read 4000 
pages.

Is this question completely unrelated to PostgreSQL implementation reality, or 
something worth considering?

Regards,
Stephen Denne.

Disclaimer:
At the Datamail Group we value team commitment, respect, achievement, customer 
focus, and courage. This email with any attachments is confidential and may be 
subject to legal privilege.  If it is not intended for you please advise by 
reply immediately, destroy it and do not copy, disclose or use it in any way.
__
  This email has been scanned by the DMZGlobal Business Quality
  Electronic Messaging Suite.
Please see http://www.dmzglobal.com/dmzmessaging.htm for details.
__





Re: [GENERAL] Cannot use a standalone backend to VACUUM in "postgres"

2008-04-07 Thread Manuel Sugawara
Alvaro Herrera <[EMAIL PROTECTED]> writes:

> Hi Manuel,

Hi Alvaro!

> I suggest you look for temp tables that have not been reclaimed.
> We've had a couple of reports where leftover temp tables have
> stopped the frozen-xid counter from advancing.  (They would have a
> very old relfrozenxid.)

Thank you very much for the suggestion. Any pointers on how to do
that? A quick search on Google didn't show anything relevant.

Regards,
Manuel.



Re: [GENERAL] select distinct and index usage

2008-04-07 Thread Tom Lane
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Tom Lane escribió:
>> What I think you'll find, though, is that once you do force an indexscan
>> to be picked it'll be slower.  Full-table index scans are typically
>> worse than seqscan+sort, unintuitive though that may sound.

> Hmm, should we switch the CLUSTER code to do that?

It's been suggested before, but I'm not sure.  The case where an
indexscan can win is where the table is roughly in index order already.
So if you think about periodic CLUSTER to maintain table ordering,
I suspect you'd want the indexscan implementation for all but maybe
the first time.

regards, tom lane



Re: [GENERAL] select distinct and index usage

2008-04-07 Thread Alvaro Herrera
Tom Lane escribió:

> What I think you'll find, though, is that once you do force an indexscan
> to be picked it'll be slower.  Full-table index scans are typically
> worse than seqscan+sort, unintuitive though that may sound.

Hmm, should we switch the CLUSTER code to do that?

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] select distinct and index usage

2008-04-07 Thread Tom Lane
"David Wilson" <[EMAIL PROTECTED]> writes:
> It appears to be doing a sequential scan regardless of the set, as if
> it doesn't believe it can use the index for some reason

More likely, it's getting a cost estimate for the indexscan that's so
bad that it even exceeds the 1e10-unit thumb on the scales that's
inserted by enable_seqscan=off.

You could try setting enable_sort=off also, which'd give you another
1e10 worth of thumb on the scales.  And if that doesn't help,
reduce random_page_cost to 1 or even less.
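Put together (an illustrative session, not from the original mail), those knobs are all session-local, so they are safe to experiment with before re-running the query:

```sql
-- None of these settings persist past the connection.
SET enable_seqscan = off;      -- heavy penalty against seqscans
SET enable_sort = off;         -- and another against sorts
SET random_page_cost = 1.0;    -- pretend random I/O is as cheap as sequential
EXPLAIN ANALYZE SELECT DISTINCT datestamp FROM vals;
```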

What I think you'll find, though, is that once you do force an indexscan
to be picked it'll be slower.  Full-table index scans are typically
worse than seqscan+sort, unintuitive though that may sound.

regards, tom lane



[GENERAL] tsvector_update_trigger throws error "column is not of tsvector type"

2008-04-07 Thread Markus Wollny
Hi!

I am in the process of migrating a PostgreSQL 8.2.4 database to 8.3. So far, 
everything has worked fine, even tsearch2-searching an indexed table.

However, there's something severely wrong with the trigger function I use to 
keep the tsvector column updated.

Here's my table definition:

CREATE TABLE public.ct_com_board_message
(
board_id integer DEFAULT 0,
thread_id integer DEFAULT 0,
father_id integer DEFAULT 0,
message_id integer NOT NULL DEFAULT 0,
user_id integer DEFAULT 0,
title text,
signature text,
follow_up text,
count_reply integer DEFAULT 0,
last_reply timestamptz,
created timestamptz DEFAULT now(),
article_id integer DEFAULT 0,
logged_ip text,
state_id smallint DEFAULT 0,
text text,
deleted_date timestamptz,
deleted_login text,
poll_id integer DEFAULT 0,
last_updated timestamptz DEFAULT now(),
idxfti tsvector,
CONSTRAINT "pk_ct_com_board_message" PRIMARY KEY (message_id)
);

And there's this trigger definition:

CREATE TRIGGER "tsvectorupdate"
BEFORE 
INSERT OR UPDATE 
ON "public"."ct_com_board_message"
FOR EACH ROW 
EXECUTE PROCEDURE 
pg_catalog.tsvector_update_trigger(idxfti,pg_catalog.german,title,text,user_login);

Now when I do anything that fires the trigger like

UPDATE ct_com_board_message set count_reply = 1 where message_id = 12345;

I get an error

ERROR:  column "idxfti" is not of tsvector type

I didn't touch the tsvector_update_trigger function at all; it still reads

CREATE or REPLACE FUNCTION "pg_catalog"."tsvector_update_trigger"()
RETURNS "pg_catalog"."trigger" AS 
$BODY$
tsvector_update_trigger_byid
$BODY$
LANGUAGE 'internal' VOLATILE;

So what's happening here?

Kind regards

   Markus







Re: [GENERAL] Cannot use a standalone backend to VACUUM in "postgres"

2008-04-07 Thread Alvaro Herrera
Manuel Sugawara wrote:

Hi Manuel,

> The funny thing is that there were no open transactions; even after
> restarting the cluster the same message was logged. Today, the
> database stopped working as expected:
> 
> ERROR: database is shut down to avoid wraparound data loss in database 
> "postgres"
> HINT: Stop the postmaster and use a standalone backend to VACUUM in "postgres"

I suggest you look for temp tables that have not been reclaimed.  We've
had a couple of reports where leftover temp tables have stopped the
frozen-xid counter from advancing.  (They would have a very old
relfrozenxid.)
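One way to hunt for such leftovers (a sketch against the system catalogs, added for illustration rather than taken from the original reply) is to look for relations in pg_temp_* schemas and sort by relfrozenxid age:

```sql
-- Leftover temp tables and how old their frozen xid is;
-- anything with a very large age is a candidate culprit.
SELECT n.nspname, c.relname, age(c.relfrozenxid) AS xid_age
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
 WHERE c.relkind = 'r'
   AND n.nspname LIKE 'pg_temp%'
 ORDER BY xid_age DESC;
```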

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



[GENERAL] Cannot use a standalone backend to VACUUM in "postgres"

2008-04-07 Thread Manuel Sugawara
We have a PostgreSQL 8.2.6 installation running for about six-months
now. There was a lot of log entries saying (sometimes 10 or more in
just one second):

WARNING: oldest xmin is far in the past 
HINT: Close open transactions soon to avoid wraparound problems. 

(actually it was in Spanish but I think that's irrelevant). 

The funny thing is that there were no open transactions; even after
restarting the cluster the same message was logged. Today, the
database stopped working as expected:

ERROR: database is shut down to avoid wraparound data loss in database 
"postgres"
HINT: Stop the postmaster and use a standalone backend to VACUUM in "postgres"

So, the postmaster was stopped to follow the hint but even in
stand-alone mode postgres keeps saying:

WARNING:  database "postgres" must be vacuumed within 999805 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in 
"postgres".  
WARNING: oldest xmin is far in the past 
HINT: Close open transactions soon to avoid wraparound problems. 

Every time vacuum is run the number decreases by one but after a few
runs I still cannot access the cluster :-(. (My plan was to take a
pg_dumpall and then re-init the cluster.)

Attached is the output of:

  SELECT relname, age(relfrozenxid) FROM pg_class WHERE relkind = 'r';
  SELECT datname, age(datfrozenxid) FROM pg_database;

pg_controldata says:

pg_control version number:            822
Catalog version number:               200611241
Database system identifier:           5040396405114363383
Database cluster state:               in production
pg_control last modified:             Mon 07 Apr 2008 09:22:19 PM CDT
Current log file ID:                  33
Next log file segment:                91
Latest checkpoint location:           21/5A8EC824
Prior checkpoint location:            21/5A8C8CDC
Latest checkpoint's REDO location:    21/5A8EC824
Latest checkpoint's UNDO location:    0/0
Latest checkpoint's TimeLineID:       1
Latest checkpoint's NextXID:          0/2280224912
Latest checkpoint's NextOID:          103405
Latest checkpoint's NextMultiXactId:  64513935
Latest checkpoint's NextMultiOffset:  154155767
Time of latest checkpoint:            Mon 07 Apr 2008 09:20:54 PM CDT
Minimum recovery ending location:     0/0
Maximum data alignment:               4
Database block size:                  8192
Blocks per segment of large relation: 131072
WAL block size:                       8192
Bytes per WAL segment:                16777216
Maximum length of identifiers:        64
Maximum columns in an index:          32
Date/time type storage:               floating-point numbers
Maximum length of locale name:        128
LC_COLLATE:                           es_MX.ISO-8859-1
LC_CTYPE:                             es_MX.ISO-8859-1

Please let me know if there is more information needed.

Regards,
Manuel.

backend> SELECT relname, age(relfrozenxid) FROM pg_class WHERE relkind = 'r';
user=,db=WARNING:  database "postgres" must be vacuumed within 998972 
transactions
user=,db=HINT:  To avoid a database shutdown, execute a full-database VACUUM 
in "postgres"


1: relname  (typeid = 19, len = 64, typmod = -1, byval = f)
2: age  (typeid = 23, len = 4, typmod = -1, byval = t)

1: relname = "sql_sizing"   (typeid = 19, len = 64, typmod = -1, byval = f)
2: age = "10063"(typeid = 23, len = 4, typmod = -1, byval = t)

1: relname = "sql_sizing_profiles"  (typeid = 19, len = 64, typmod = -1, 
byval = f)
2: age = "10061"(typeid = 23, len = 4, typmod = -1, byval = t)

1: relname = "sql_features" (typeid = 19, len = 64, typmod = -1, byval = f)
2: age = "10059"(typeid = 23, len = 4, typmod = -1, byval = t)


1: relname = "sql_implementation_info"  (typeid = 19, len = 64, typmod = -1, 
byval = f)
2: age = "10057"(typeid = 23, len = 4, typmod = -1, byval = t)


1: relname = "pg_authid"(typeid = 19, len = 64, typmod = -1, byval = f)
2: age = "2146484675"   (typeid = 23, len = 4, typmod = -1, byval = t)


1: relname = "sql_languages"(typeid = 19, len = 64, typmod = -1, byval = f)
2: age = "10053"(typeid = 23, len = 4, typmod = -1, byval = t)



1: relname = "sql_packages" (typeid = 19, len = 64, typmod = -1, byval = f)
2: age = "1

Re: [GENERAL] select distinct and index usage

2008-04-07 Thread Stephen Denne
> Still doing the sequential scan on the table, but at least it's
> avoiding the expensive disk merge sort. It still seems as if I ought
> to be able to coax it into using an index for this type of query,
> though- especially since it's using one on the other table. Is there
> perhaps some way to reformulate the index in such a way as to make it
> more useful to the planner?

You're asking postgres to examine EVERY visible row (hence the sequential scan 
makes good sense), and tell you what field values there are.

You may be able to make use of an index by rearranging your query to generate a 
series between your min & max values, testing whether each value is in your 
table.

You've got 4252 distinct values, but what is the range of max - min? Say it's 
5000 values: you'd do 5000 lookups via an index, unless postgres thought that 
those index-based lookups were going to be more expensive than reading the 
entire table.
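As a sketch of that rearrangement (hedged: it assumes datestamp is an integer column, as in the original post, and that the range is not too sparse, since every candidate value in the range gets probed):

```sql
-- Probe each candidate value once via the index instead of scanning
-- the whole table; each EXISTS is one cheap index lookup.
SELECT g.d AS datestamp
  FROM generate_series((SELECT min(datestamp) FROM vals),
                       (SELECT max(datestamp) FROM vals)) AS g(d)
 WHERE EXISTS (SELECT 1 FROM vals WHERE vals.datestamp = g.d);
```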

Regards,
Stephen Denne.






Re: [GENERAL] Most Occurring Value

2008-04-07 Thread Colin Wetherbee

Mike Ginsburg wrote:

> There is probably a really simple solution for this problem, but for
> the life of me I can't seem to think of it.  I have three tables
>
> --contains u/p for all users in the site
> TABLE users (user_id INT primary key, username VARCHAR(50), password TEXT)
> --list of all possible events (login, logout, timeout)
> TABLE events (event_id INT primary key, event VARCHAR(255))
> --logs the activity of all users logging in/out, etc
> TABLE log (log_id INT primary key, user_id INT REFERENCES users,
> event_id INT REFERENCES event);
>
> How would I query to find out which user has the most activity?
> SELECT user_id, COUNT(event_id)
> FROM log
> GROUP BY (user_id)
> HAVING COUNT(event_id) = ???
>
> Any and all help is appreciated. Thank you.


I'd say...

SELECT user_id, count(event_id) AS event_count FROM log GROUP BY 
user_id ORDER BY event_count DESC LIMIT 1;


Or something to that effect.

Colin



Re: [GENERAL] select distinct and index usage

2008-04-07 Thread Alvaro Herrera
David Wilson escribió:

> explain analyze select datestamp from vals group by datestamp;
>  QUERY
> PLAN
> 
>  HashAggregate  (cost=1719740.40..1719783.03 rows=4263 width=4)
> (actual time=120192.018..120193.930 rows=4252 loops=1)
>->  Seq Scan on vals  (cost=0.00..1531261.72 rows=75391472 width=4)
> (actual time=17.441..66807.429 rows=75391476 loops=1)
>  Total runtime: 120195.144 ms

> Still doing the sequential scan on the table, but at least it's
> avoiding the expensive disk merge sort. It still seems as if I ought
> to be able to coax it into using an index for this type of query,
> though- especially since it's using one on the other table. Is there
> perhaps some way to reformulate the index in such a way as to make it
> more useful to the planner?

Hmm, why do you think an indexscan would be faster?  Since there's no
sort step involved, a seqscan as input for the HashAggregate is actually
better than an indexscan, because there's no need for the index entries
at all.

If you want to test, try SET enable_seqscan TO 0 and then rerun the
explain analyze.  My bet is that it will use the index, and it will take
longer.

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [GENERAL] select distinct and index usage

2008-04-07 Thread David Wilson
On Mon, Apr 7, 2008 at 7:57 PM, Gregory Stark <[EMAIL PROTECTED]> wrote:
>
>  You could try changing it to the equivalent GROUP BY query. The planner,
>  unfortunately, doesn't know they're equivalent and has two separate sets of
>  plans available. In this case where there are only 4,000 distinct values out
>  of 75M original records you might find a HashAggregate plan, which the 
> planner
>  doesn't know can be used for DISTINCT, best. You might have to raise work_mem
>  before the planner feels a hash will fit.
>
>  --
>   Gregory Stark
>   EnterpriseDB  http://www.enterprisedb.com
>   Ask me about EnterpriseDB's RemoteDBA services!
>

Progress!

explain analyze select datestamp from vals group by datestamp;
 QUERY
PLAN

 HashAggregate  (cost=1719740.40..1719783.03 rows=4263 width=4)
(actual time=120192.018..120193.930 rows=4252 loops=1)
   ->  Seq Scan on vals  (cost=0.00..1531261.72 rows=75391472 width=4)
(actual time=17.441..66807.429 rows=75391476 loops=1)
 Total runtime: 120195.144 ms


Compared with:

explain analyze select distinct datestamp from vals;
  QUERY
PLAN
-
 Unique  (cost=15003047.47..15380004.83 rows=4263 width=4) (actual
time=649599.159..721671.595 rows=4252 loops=1)
  ->  Sort  (cost=15003047.47..15191526.15 rows=75391472 width=4)
(actual time=649599.157..694392.602 rows=75391476 loops=1)
Sort Key: datestamp
Sort Method:  external merge  Disk: 1178592kB
->  Seq Scan on vals  (cost=0.00..1531261.72 rows=75391472
width=4) (actual time=9.104..93130.468 rows=75391476 loops=1)
 Total runtime: 722379.434 ms

Still doing the sequential scan on the table, but at least it's
avoiding the expensive disk merge sort. It still seems as if I ought
to be able to coax it into using an index for this type of query,
though- especially since it's using one on the other table. Is there
perhaps some way to reformulate the index in such a way as to make it
more useful to the planner?

-- 
- David T. Wilson
Princeton Satellite Systems
[EMAIL PROTECTED]



[GENERAL] Most Occurring Value

2008-04-07 Thread Mike Ginsburg
There is probably a really simple solution for this problem, but for the 
life of me I can't seem to think of it.  I have three tables


--contains u/p for all users in the site
TABLE users (user_id INT primary key, username VARCHAR(50), password TEXT)
--list of all possible events (login, logout, timeout)
TABLE events (event_id INT primary key, event VARCHAR(255))
--logs the activity of all users logging in/out, etc
TABLE log (log_id INT primary key, user_id INT REFERENCES users, 
event_id INT REFERENCES event);


How would I query to find out which user has the most activity?
SELECT user_id, COUNT(event_id)
FROM log
GROUP BY (user_id)
HAVING COUNT(event_id) = ???

Any and all help is appreciated. Thank you.

Mike Ginsburg
[EMAIL PROTECTED]




Re: [GENERAL] select distinct and index usage

2008-04-07 Thread Gregory Stark
"David Wilson" <[EMAIL PROTECTED]> writes:

> I appreciate the responses so far! I'm used to several minutes for
> some of the complex queries on this DB, but 12.5 minutes for a select
> distinct just seems wrong. :)

You could try changing it to the equivalent GROUP BY query. The planner,
unfortunately, doesn't know they're equivalent and has two separate sets of
plans available. In this case where there are only 4,000 distinct values out
of 75M original records you might find a HashAggregate plan, which the planner
doesn't know can be used for DISTINCT, best. You might have to raise work_mem
before the planner feels a hash will fit.
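Concretely (the 64MB figure is illustrative, not from the original mail), that might look like:

```sql
-- Raise work_mem for this session only, so the ~4,000-entry hash table
-- comfortably fits in memory, then use GROUP BY instead of DISTINCT.
SET work_mem = '64MB';
EXPLAIN ANALYZE
SELECT datestamp FROM vals GROUP BY datestamp;
```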

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's RemoteDBA services!



Re: [GENERAL] [pgsql-advocacy] pgus-general now up

2008-04-07 Thread Joshua D. Drake
On Mon, 7 Apr 2008 10:47:58 -0700
"Joshua D. Drake" <[EMAIL PROTECTED]> wrote: 

> http://www.postgresql.org/mailpref/pgus-general

Sorry for the confusion folks. The /community/lists link is currently
broken. It has just been fixed but will take a little bit to migrate to
the mirrors. Please use the mailpref link instead.

Sincerely,

Joshua D. Drake



-- 
The PostgreSQL Company since 1997: http://www.commandprompt.com/ 
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate





Re: [GENERAL] edb-debugger, debugging pl/pgsql

2008-04-07 Thread josep porres
Thanks Tony,
I'm going to take a look at it tomorrow morning.

Josep Porres

2008/4/7, Tony Caduto <[EMAIL PROTECTED]>:
>
> josep porres wrote:
>
> >
> > 2008/4/7, Dave Page <[EMAIL PROTECTED] >:
> >
> >On Mon, Apr 7, 2008 at 10:21 AM, josep porres <[EMAIL PROTECTED]
> >> wrote:
> >> well, when you asked me about where I downloaded it, I
> >downloaded it from I
> >> installed it again.
> >> It seems I can debug with no problems till now.
> >> However, when I begin to debug and the function parameters
> >window appears,
> >>  if I click cancel it freezes.
>
> Josep,
>
> The stand alone Lightning Debugger does not have this issue and if it does
> crash or hang (not likely) it won't take down whatever admin tool you are
> using.  Oh, and it's FREE.
>
> http://www.amsoftwaredesign.com/debugger_client_announce
>
> Check it out; it works great on win32 and is built with a native compiler
> with a high-performance memory manager.
>
>
> Tony Caduto
> AM Software
> http://www.amsoftwaredesign.com
>


[GENERAL] test ignore

2008-04-07 Thread Joshua D. Drake
test

-- 
The PostgreSQL Company since 1997: http://www.commandprompt.com/ 
PostgreSQL Community Conference: http://www.postgresqlconference.org/
United States PostgreSQL Association: http://www.postgresql.us/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate





[GENERAL] BD removed

2008-04-07 Thread Miguel A. Lopera Tejero




Hi there,

I need some help. We have a web application connecting to a PostgreSQL
8.2 database on Windows Server 2003. According to the log, we logged on
to the server using a remote desktop connection; after uninstalling Tomcat
we installed a new Tomcat version. After that, we shut down the
Postgres service and restarted it.

The problem is that the folder where the tablespace of the database is
placed has been removed.

This is the log:

2008-04-04 09:40:15 LOG: unexpected EOF on client connection 
2008-04-04 09:40:15 LOG: unexpected EOF on client connection 
2008-04-04 09:45:51 LOG: unexpected EOF on client connection 
2008-04-04 13:01:58 FATAL: database "bdaplicacion" does not exist 
2008-04-04 13:01:58 DETAIL: The database subdirectory
"pg_tblspc/16404/16405" is missing. 
2008-04-04 13:03:58 FATAL: database "bdaplicacion" does not exist 
2008-04-04 13:03:58 DETAIL: The database subdirectory
"pg_tblspc/16404/16405" is missing. 
2008-04-04 13:05:59 FATAL: database "bdaplicacion" does not exist 
2008-04-04 13:05:59 DETAIL: The database subdirectory
"pg_tblspc/16404/16405" is missing. 
2008-04-04 13:07:59 FATAL: database "bdaplicacion" does not exist 
2008-04-04 13:07:59 DETAIL: The database subdirectory
"pg_tblspc/16404/16405" is missing. 
2008-04-04 13:09:33 FATAL: database "bdaplicacion" does not exist 
2008-04-04 13:09:33 DETAIL: The database subdirectory
"pg_tblspc/16404/16405" is missing. 
2008-04-04 13:10:01 FATAL: database "bdaplicacion" does not exist 
2008-04-04 13:10:01 DETAIL: The database subdirectory
"pg_tblspc/16404/16405" is missing. 
2008-04-04 13:10:02 FATAL: database "bdaplicacion" does not exist 
2008-04-04 13:10:02 DETAIL: The database subdirectory
"pg_tblspc/16404/16405" is missing. 
2008-04-04 13:12:02 FATAL: database "bdaplicacion" does not exist 
2008-04-04 13:12:02 DETAIL: The database subdirectory
"pg_tblspc/16404/16405" is missing. 
2008-04-04 13:14:02 FATAL: database "bdaplicacion" does not exist 
2008-04-04 13:14:02 DETAIL: The database subdirectory
"pg_tblspc/16404/16405" is missing. 
2008-04-04 13:16:02 FATAL: database "bdaplicacion" does not exist 
2008-04-04 13:16:02 DETAIL: The database subdirectory
"pg_tblspc/16404/16405" is missing. 
2008-04-04 13:18:03 FATAL: database "bdaplicacion" does not exist 
2008-04-04 13:18:03 DETAIL: The database subdirectory
"pg_tblspc/16404/16405" is missing. 
2008-04-04 13:20:03 FATAL: database "bdaplicacion" does not exist 
2008-04-04 13:20:03 DETAIL: The database subdirectory
"pg_tblspc/16404/16405" is missing. 
2008-04-04 13:22:03 FATAL: database "bdaplicacion" does not exist 
2008-04-04 13:22:03 DETAIL: The database subdirectory
"pg_tblspc/16404/16405" is missing. 
2008-04-04 13:22:06 LOG: received fast shutdown request 
2008-04-04 13:22:06 LOG: shutting down 
2008-04-04 13:22:06 LOG: database system is shut down 
2008-04-04 13:22:07 LOG: logger shutting down 
  
2008-04-04 13:22:52 LOG: database system was shut down at 2008-04-04
13:22:06 Hora de verano romance 
2008-04-04 13:22:52 LOG: checkpoint record is at 0/274ED648 
2008-04-04 13:22:52 LOG: redo record is at 0/274ED648; undo record is
at 0/0; shutdown TRUE 
2008-04-04 13:22:52 LOG: next transaction ID: 0/1192198; next OID:
17381 
2008-04-04 13:22:52 LOG: next MultiXactId: 27; next MultiXactOffset: 53
  
2008-04-04 13:22:52 LOG: database system is ready 
2008-04-04 13:23:53 FATAL: database "bdaplicacion" does not exist 
2008-04-04 13:23:53 DETAIL: The database subdirectory
"pg_tblspc/16404/16405" is missing. 
  
Does anybody know what could have happened, and how can we recover the folder?

Thanks in advance!




Re: [GENERAL] Subtracting Two Intervals

2008-04-07 Thread Bruce Momjian
Terry Lee Tucker wrote:
> In porting from 7.4.19 to 8.3.1 I have found the following:
> 
> 7.4.19:
> mwr=# select interval '1 day 15 hours 30 minutes' - interval '29 hours';
>   ?column?
> 
>  @ 10 hours 30 mins
> (1 row)
> 
> 8.3.1:
> mwr83=# select interval '1 day 15 hours 30 minutes' - interval '29 hours';
>   ?column?
> 
>  @ 1 day -13 hours -30 mins
> (1 row)
> 
> Is that right? I mean if you take 1 day (24 hours) and add -13 hours and -30 
> minutes, you get 10 hours and 30 minutes, but is it supposed to display that 
> way?

We removed assumptions that every day is 24 hours between those
releases. You can use justify_hours() to get the proper result:

test=> select justify_hours(interval '1 day 15 hours 30 minutes' -
interval '29 hours');
 justify_hours
---------------
 10:30:00
(1 row)

There have been good arguments that justify_hours() behavior should be
the default.
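For reference, a quick sketch of the related justify_* functions available in 8.3; the expected outputs follow the examples in the documentation:

```sql
-- justify_hours: convert each 24-hour block into a day (and vice versa for negatives)
SELECT justify_hours(interval '1 day -13 hours -30 minutes');  -- 10:30:00
-- justify_days: convert each 30-day block into a month
SELECT justify_days(interval '35 days');                       -- 1 mon 5 days
-- justify_interval: apply both adjustments, normalizing signs
SELECT justify_interval(interval '1 mon -1 hour');             -- 29 days 23:00:00
```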

-- 
  Bruce Momjian  <[EMAIL PROTECTED]>http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [GENERAL] select distinct and index usage

2008-04-07 Thread David Wilson
On Mon, Apr 7, 2008 at 1:42 PM, Alban Hertroys
<[EMAIL PROTECTED]> wrote:
>
>  Have you tried this query with enable_seqscan=off? If my guess is right
> (and the planners, in that case) it'd be even slower.

set enable_seqscan=off;
explain select distinct datestamp from vals;
  QUERY PLAN
--------------------------------------------------------------
 Unique  (cost=115003047.47..115380004.83 rows=4263 width=4)
   ->  Sort  (cost=115003047.47..115191526.15 rows=75391472 width=4)
 Sort Key: datestamp
 ->  Seq Scan on vals  (cost=1.00..101531261.72
rows=75391472 width=4)

It appears to be doing a sequential scan regardless of the setting, as
if it doesn't believe it can use the index for some reason.
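One thing worth noting: 8.3 has no index skip scan, so DISTINCT over a low-cardinality column always reads the whole input regardless of indexes. The loop-over-min() trick from earlier in the thread can be written declaratively with WITH RECURSIVE, though that needs 8.4 or later; on 8.3 the equivalent loop has to live in a PL/pgSQL function. A sketch, assuming the vals/datestamp schema above:

```sql
WITH RECURSIVE t(datestamp) AS (
  SELECT min(datestamp) FROM vals            -- one index probe for the smallest value
  UNION ALL
  SELECT (SELECT min(datestamp) FROM vals
          WHERE datestamp > t.datestamp)     -- one probe per distinct value
  FROM t
  WHERE t.datestamp IS NOT NULL
)
SELECT datestamp FROM t WHERE datestamp IS NOT NULL;
```

With ~4000 distinct values that is roughly 4000 index probes instead of a 75M-row scan.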
>
>  Something that might help you, but I'm not sure whether it might hurt the
> performance of other queries, is to cluster that table on val_datestamp_idx.
> That way the records are already (mostly) sorted on disk in the order of the
> datestamps, which seems to be the brunt of above query plan.

That's a good thought. I'll give that a try this evening when the DB
has some downtime and see what happens.

>
>  There seems to be quite a bit of overlap in your index definitions. From my
> experience this can confuse the planner.
>
>  I suggest you combine them, but not knowing your data... Maybe rewriting
> your UNIQUE constraint to (val_dur, datestamp, eid, sid) would be enough to
> replace all those other indexes.
>  If not, it's probably better to have one index per column, so that the
> planner is free to combine them as it sees fit. That'd result in a bitmap
> index scan, btw.

I can take a look at the other indices again, but those are all in
place for specific other queries that generally involve some set of
a=1, b=2, c=3, datestamp>5 type of where-clause and were created
specifically in response to sequential scans showing up in other
queries (and had the proper effect of fixing them!)
>
>  I'm not a postgresql tuning expert (I know my way around though), other
> people can explain you way better than I can. Bruce Momjian for example:
> http://www.linuxjournal.com/article/4791

I'll take a look at that, thanks.

>  That calculation doesn't look familiar to me, I'm more used to:
>   select pg_size_pretty(pg_relation_size('...'));
>
>  You can put the name of any relation in there, be it tables, indexes, etc.
>
>  11GB is pretty large for an index on an integer column, especially with
> only 75M rows: that's 146 bytes/row in your index. Maybe your index got
> bloated somehow? I think it should be about a tenth of that.

pg_total_relation_size('..') gives the number of bytes for the table +
all associated indices; pg_relation_size('..') gives for just the
table. The difference between the two should be total bytes take up by
the 5 total indices (11 total index cols), giving a
back-of-the-envelope estimation of 1gb for the size of the datestamp
index. I am fairly certain that I didn't give pg 1gb to fit the index
in memory, so I'll try upping its total available memory tonight and
see if that doesn't improve things.

I appreciate the responses so far! I'm used to several minutes for
some of the complex queries on this DB, but 12.5 minutes for a select
distinct just seems wrong. :)

-- 
- David T. Wilson
Princeton Satellite Systems
[EMAIL PROTECTED]



[GENERAL] Subtracting Two Intervals

2008-04-07 Thread Terry Lee Tucker
In porting from 7.4.19 to 8.3.1 I have found the following:

7.4.19:
mwr=# select interval '1 day 15 hours 30 minutes' - interval '29 hours';
  ?column?
--------------------
 @ 10 hours 30 mins
(1 row)

8.3.1:
mwr83=# select interval '1 day 15 hours 30 minutes' - interval '29 hours';
  ?column?
--------------------
 @ 1 day -13 hours -30 mins
(1 row)

Is that right? I mean if you take 1 day (24 hours) and add -13 hours and -30 
minutes, you get 10 hours and 30 minutes, but is it supposed to display that 
way?

TIA
-- 
Terry Lee Tucker
Turbo's IT Manager
Turbo, division of Ozburn-Hessey Logistics
2251 Jesse Jewell Pkwy NE
Gainesville, GA 30501
Tel: (336) 372-6812  Fax: (336) 372-6812  Cell: (336) 404-6987
[EMAIL PROTECTED]
www.turbocorp.com



Re: [GENERAL] select distinct and index usage

2008-04-07 Thread Joshua D. Drake
On Mon, 7 Apr 2008 19:42:02 +0200
Alban Hertroys <[EMAIL PROTECTED]> wrote:
> > explain analyze select distinct datestamp from vals;
> >QUERY
> > PLAN
> > --------------------------------------------------------------
> >  Unique  (cost=15003047.47..15380004.83 rows=4263 width=4) (actual
> > time=649599.159..721671.595 rows=4252 loops=1)
> >->  Sort  (cost=15003047.47..15191526.15 rows=75391472 width=4)
> > (actual time=649599.157..694392.602 rows=75391476 loops=1)
> >  Sort Key: datestamp
> >  Sort Method:  external merge  Disk: 1178592kB
> >  ->  Seq Scan on vals  (cost=0.00..1531261.72 rows=75391472
> > width=4) (actual time=9.104..93130.468 rows=75391476 loops=1)
> >  Total runtime: 722379.434 ms
> 
> Wow, great estimates! The planner obviously knows how your data is  
> structured. So much for the bad planner estimation scenario...
> 
> I haven't seen this "external merge Disk"-sort method before, maybe  
> it's new in 8.3, but it doesn't look promising for query

I have to double-check, but I think that means he overflowed his
work_mem and is sorting on disk. Try increasing work_mem for the query.
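If that's the case, a minimal sketch of the fix (session-local, so other
backends are unaffected; the sort above spilled ~1.1GB to disk, so it needs a
similarly sized work_mem to stay in memory):

```sql
SET work_mem = '1200MB';   -- session-local override; pick a value the machine can spare
EXPLAIN ANALYZE SELECT DISTINCT datestamp FROM vals;
RESET work_mem;            -- back to the server default
```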

Joshua D. Drake






Re: [GENERAL] select distinct and index usage

2008-04-07 Thread Alban Hertroys

On Apr 7, 2008, at 9:47 AM, David Wilson wrote:

On Mon, Apr 7, 2008 at 2:05 AM, Alban Hertroys
<[EMAIL PROTECTED]> wrote:
The databases estimates seem consistent with yours, so why is it  
doing
this? Could you provide an EXPLAIN ANALYSE? It shows the actual  
numbers next

to the estimates, although I figure that query might take a while...


explain analyze select distinct datestamp from vals;
   QUERY
PLAN
--------------------------------------------------------------

 Unique  (cost=15003047.47..15380004.83 rows=4263 width=4) (actual
time=649599.159..721671.595 rows=4252 loops=1)
   ->  Sort  (cost=15003047.47..15191526.15 rows=75391472 width=4)
(actual time=649599.157..694392.602 rows=75391476 loops=1)
 Sort Key: datestamp
 Sort Method:  external merge  Disk: 1178592kB
 ->  Seq Scan on vals  (cost=0.00..1531261.72 rows=75391472
width=4) (actual time=9.104..93130.468 rows=75391476 loops=1)
 Total runtime: 722379.434 ms


Wow, great estimates! The planner obviously knows how your data is  
structured. So much for the bad planner estimation scenario...


I haven't seen this "external merge Disk"-sort method before, maybe  
it's new in 8.3, but it doesn't look promising for query performance.  
Considering it's using 1.1GB it seems the planner may have chosen for  
the least memory exhaustive method; I have to admit I don't know the  
planner in that much detail. Take this with a grain of salt, but my  
guess is that as the index is even bigger, the planner figures this  
approach would involve the least disk i/o and will therefore be faster.


Have you tried this query with enable_seqscan=off? If my guess is  
right (and the planners, in that case) it'd be even slower.


Something that might help you, but I'm not sure whether it might hurt  
the performance of other queries, is to cluster that table on  
val_datestamp_idx. That way the records are already (mostly) sorted  
on disk in the order of the datestamps, which seems to be the brunt  
of above query plan.


 Pg estimates the costs quite high too. It's almost as if there  
isn't an
index on that column and it has no other way then doing a  
sequential scan...
Could you show us the table definition and its indexes? What  
version of Pg

is this?


Pg is 8.3.1

Table definition:
CREATE TABLE vals (
sid integer NOT NULL,
eid integer NOT NULL,
datestamp integer NOT NULL,
val_dur integer NOT NULL,
acc real NOT NULL,
yld real NOT NULL,
rt real NOT NULL,
ydev real NOT NULL,
vydev real NOT NULL,
adev real NOT NULL,
achange real NOT NULL,
ychange real NOT NULL,
arsi real NOT NULL,
yrsi real NOT NULL,
UNIQUE (sid,eid,val_dur,datestamp),
FOREIGN KEY (sid,eid,datestamp) REFERENCES preds
(sid,eid,datestamp) ON DELETE CASCADE
);
create index val_datestamp_idx on vals(datestamp);
create index val_evaluator_idx on vals(eid);
create index val_search_key on vals(val_dur,eid,datestamp);
create index val_vd_idx on vals(val_dur,datestamp);


There seems to be quite a bit of overlap in your index definitions.  
From my experience this can confuse the planner.


I suggest you combine them, but not knowing your data... Maybe  
rewriting your UNIQUE constraint to (val_dur, datestamp, eid, sid)  
would be enough to replace all those other indexes.
If not, it's probably better to have one index per column, so that  
the planner is free to combine them as it sees fit. That'd result in  
a bitmap index scan, btw.
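That rewrite could look like the following; the constraint name below is only a
guess at 8.3's auto-generated default, so check \d vals first:

```sql
-- Guessing the auto-generated constraint name; verify with \d vals before running
ALTER TABLE vals DROP CONSTRAINT vals_sid_eid_val_dur_datestamp_key;
ALTER TABLE vals ADD CONSTRAINT vals_val_dur_datestamp_eid_sid_key
  UNIQUE (val_dur, datestamp, eid, sid);
```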


(The various indices are for a variety of common queries into the  
table)


 It may be that your index on vals.datestamp doesn't fit into  
memory; what

are the relevant configuration parameters for your database?


That's a very good question. I recently had to rebuild this particular
database and haven't played with the configuration parameters as much
as I'd like- what parameters would be most relevant here? I hadn't
realized that an index needed to fit into memory.


Well, it doesn't _need_ to fit into memory, but if the database needs  
to fetch different parts of it from disk or swap, the costs of using  
the index will soar. Especially random access would be bad.


Anything that fits entirely into memory will be faster than having to  
fetch it from disk, as long as it doesn't mean other things will have  
to come from disk instead.


I'm not a postgresql tuning expert (I know my way around though),  
other people can explain you way better than I can. Bruce Momjian for  
example: http://www.linuxjournal.com/article/4791



pg_total_relation_size('vals') - pg_relation_size('vals') gives 11gb.
All indexed columns are integers. My guess is that this means that
it's likely the index doesn't fit into memory.


That calculation doesn't look familiar to me, I'm more used to:
 select pg_size_pretty(pg_relation_size('...'));

You can put the name of any relation in there, be it tables, indexes, etc.
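A sketch of how to get the per-index breakdown directly, to check whether
val_datestamp_idx itself is bloated:

```sql
-- Size of each index on vals, one row per index
SELECT c.relname AS index_name,
       pg_size_pretty(pg_relation_size(c.oid)) AS size
FROM pg_class c
JOIN pg_index i ON i.indexrelid = c.oid
WHERE i.indrelid = 'vals'::regclass;
```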

Re: [GENERAL] edb-debugger, debugging pl/pgsql

2008-04-07 Thread Tony Caduto

Magnus Hagander wrote:

You know, kinda like PostgreSQL vs Oracle Express ;)


Well, not quite the same, since the LA Debugger Client is not crippled
in some way like Oracle or MS SQL Express :-)


It's just plain old freeware.

Later,

Tony



Re: [GENERAL] edb-debugger, debugging pl/pgsql

2008-04-07 Thread Magnus Hagander
Joshua D. Drake wrote:
> On Mon, 07 Apr 2008 19:07:22 +0200
> Magnus Hagander <[EMAIL PROTECTED]> wrote:
> 
> 
> > Yes. The edb debugger is open source (it's on pgfoundry), and the
> > LA debugger client is free-as-in-beer. You know, kinda like
> > PostgreSQL vs Oracle Express ;)
> 
> Uhmm Oracle Express is crippleware. I don't think Tony's is. I know
> you put a wink there but still :)

True point. It's "just a debugger", but as such AFAIK it's feature
complete and not crippled.

//Magnus



Re: [GENERAL] edb-debugger, debugging pl/pgsql

2008-04-07 Thread Joshua D. Drake
On Mon, 07 Apr 2008 19:07:22 +0200
Magnus Hagander <[EMAIL PROTECTED]> wrote:


> Yes. The edb debugger is open source (it's on pgfoundry), and the LA 
> debugger client is free-as-in-beer. You know, kinda like PostgreSQL
> vs Oracle Express ;)

Uhmm Oracle Express is crippleware. I don't think Tony's is. I know you
put a wink there but still :)

Joshua D. Drake

> 
> //Magnus
> 







Re: [GENERAL] edb-debugger, debugging pl/pgsql

2008-04-07 Thread Magnus Hagander

Joshua D. Drake wrote:

On Mon, 07 Apr 2008 10:26:23 -0500
Tony Caduto <[EMAIL PROTECTED]> wrote:


You can use the Lightning Admin win32 debugger, very stable and FREE.


The edb-debugger is open source??? Or are you just saying that although
the Lightning Admin debugger is not open source it is FREE? (which is
fine of course)


Yes. The edb debugger is open source (it's on pgfoundry), and the LA 
debugger client is free-as-in-beer. You know, kinda like PostgreSQL vs 
Oracle Express ;)


//Magnus



Re: [GENERAL] edb-debugger, debugging pl/pgsql

2008-04-07 Thread Joshua D. Drake
On Mon, 07 Apr 2008 10:26:23 -0500
Tony Caduto <[EMAIL PROTECTED]> wrote:

> >
> You can use the Lightning Admin win32 debugger, very stable and FREE.

The edb-debugger is open source??? Or are you just saying that although
the Lightning Admin debugger is not open source it is FREE? (which is
fine of course)

Sincerely,

Joshua D. Drake








Re: [GENERAL] too many LWLocks taken

2008-04-07 Thread Alex Vinogradovs
It appears the errors were caused by table corruption. I've
truncated and reloaded some large table (300m entries),
and the problem disappeared. Table corruption was probably
caused by hardware failure, not by PostgreSQL :)


On Fri, 2008-04-04 at 20:15 -0400, Tom Lane wrote:
> Alex Vinogradovs <[EMAIL PROTECTED]> writes:
> > Version is 8.2.5 running on FreeBSD 6.2. I can try upgrading
> > to 8.2.7, if you think that would help.
> 
> Well, an upgrade would be a good idea on general principles, but
> I doubt it will fix a previously unknown bug.
> 
> Does the postmaster log show any other odd behavior around these
> errors?  Which process is throwing the error anyway?  If it's
> a regular backend, what query is it executing?
> 
>   regards, tom lane



Re: [GENERAL] edb-debugger, debugging pl/pgsql

2008-04-07 Thread Tony Caduto

josep porres wrote:


2008/4/7, Dave Page <[EMAIL PROTECTED] >:

On Mon, Apr 7, 2008 at 10:21 AM, josep porres <[EMAIL PROTECTED]
> wrote:
> well, when you asked me about where I downloaded it, I
downloaded it from I
> installed it again.
> It seems I can debug with no problems till now.
> However, when I begin to debug and the function parameters
window appears,
>  if I click cancel it freezes.





Josep,

The stand alone Lightning Debugger does not have this issue and if it 
does crash or hang (not likely) it won't take down whatever admin tool 
you are

using.  Oh, and it's FREE.

http://www.amsoftwaredesign.com/debugger_client_announce

Check it out; it works great on win32 and is built with a native compiler
with a high-performance memory manager.



Tony Caduto
AM Software
http://www.amsoftwaredesign.com



Re: [GENERAL] edb-debugger, debugging pl/pgsql

2008-04-07 Thread Tony Caduto

josep porres wrote:


Hi,

I don't know if here is the right place to post this, but anyway ...
Does anybody know if there is any binary dist for win32 edb-debugger 
for pgsql8.3?

If not, do you know which compiler I have to use?
I don't need to install anything on the client side where is my 
pgadmin, right?




You can use the Lightning Admin win32 debugger, very stable and FREE.
From my testing it's more stable on win32 than the others, but that 
will probably be debated since I am biased.


Anyway I encourage you to check it out at:


http://www.amsoftwaredesign.com/debugger_client_announce


Tony Caduto
AM Software Design







Re: [GENERAL] slow pgsql tables - need to vacuum?

2008-04-07 Thread Alan Hodgson
On Monday 07 April 2008, Dan99 <[EMAIL PROTECTED]> wrote:
> Does TRUNCATE TABLE keep all necessary table
> information such as indexes, constraints, triggers, rules, and
> privileges?

Yes. It does require an exclusive lock on the table very briefly, though, 
which DELETE does not.

> Currently a mass DELETE is being used to remove the data. 

And that's why the table is bloating. Especially if you aren't VACUUMing it 
before loading the new data.

> Since VACUUM has never been done on the tables before, should a VACUUM
> FULL be done first?  If so, approximately how long does a VACUUM FULL
> take on a database with 25 tables each having anywhere from 1,000 to
> 50,000 rows?  

Honestly, you'd be better off dumping and reloading the database. With that 
little data, it would be pretty quick. Although, VACUUM is pretty fast on 
tables with no indexes.

> The reason I ask is because this is a live website, and 
> any down time is very inconvenient.  Also, would it be sufficient
> (after the first VACUUM FULL) to simply use a VACUUM ANALYZE after the
> tables are repopulated (ie. every night)?

If you do a TRUNCATE and then a fresh load, a simple ANALYZE is sufficient.
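A minimal sketch of that nightly cycle, assuming a hypothetical table named
mytable and a server-readable CSV file (TRUNCATE is transactional, so readers
see either the old data or the new):

```sql
BEGIN;
TRUNCATE TABLE mytable;       -- keeps indexes, constraints, triggers, privileges
COPY mytable FROM '/path/to/fresh_data.csv' WITH CSV;  -- hypothetical file; COPY FROM a file needs superuser
COMMIT;
ANALYZE mytable;              -- refresh planner statistics after the reload
```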

You really should create some indexes though. Right now your queries are 
looping through the whole table for every SELECT. The only reason you're 
not dying is your tables are small enough to completely fit in memory, and 
presumably your query load is fairly low.

-- 
Alan



Re: [GENERAL] slow pgsql tables - need to vacuum?

2008-04-07 Thread Douglas McNaught
On Mon, Apr 7, 2008 at 9:51 AM, Dan99 <[EMAIL PROTECTED]> wrote:

>  Unfortunately, I did not design this database (or the website for that
>  matter) and am only maintaining it.  As a result of the inexperience
>  of the website designer, there are no indexes in any of the tables and
>  it would be rather difficult to put them in after the fact since this
>  is a live website.

Indexes can be created online with no downtime.  They do block some
operations.  If you're running 8.2 or 8.3, you can use CREATE INDEX
CONCURRENTLY which takes longer but doesn't block normal operations.
Otherwise, pick a time when activity is minimal to do your CREATE
INDEX.
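For example (the names are placeholders for whatever columns the site's SELECTs
actually filter on):

```sql
-- Builds without holding a long write-blocking lock;
-- note it cannot run inside a transaction block
CREATE INDEX CONCURRENTLY mytable_mycolumn_idx ON mytable (mycolumn);
```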

>  Does TRUNCATE TABLE keep all necessary table
>  information such as indexes, constraints, triggers, rules, and
>  privileges? Currently a mass DELETE is being used to remove the data.

Read the docs.  It may depend on your version of Postgres.  See below
for docs location.

>  Since VACUUM has never been done on the tables before, should a VACUUM
>  FULL be done first?  If so, approximately how long does a VACUUM FULL
>  take on a database with 25 tables each having anywhere from 1,000 to
>  50,000 rows?  The reason I ask is because this is a live website, and
>  any down time is very inconvenient.  Also, would it be sufficient
>  (after the first VACUUM FULL) to simply use a VACUUM ANALYZE after the
>  tables are repopulated (ie. every night)?

If you have the extra disk space, CLUSTER is supposed to be better
than VACUUM FULL, but you need an index to cluster the table on.

If you use TRUNCATE, the VACUUM is not necessary but an ANALYZE would be useful.

I don't think you ever said what version you're running--that would be
helpful.  "SELECT version();" at the psql prompt will tell you the
server version.

I highly recommend referring to the docs for your version of Postgres at:

http://www.postgresql.org/docs/

if you have any questions about the above commands.

-Doug



Re: [GENERAL] plperlu and perl 5.10

2008-04-07 Thread Tom Lane
Kev <[EMAIL PROTECTED]> writes:
> I'm trying to upgrade to perl 5.10, and realized that plperlu is still
> using 5.8.  How exactly does pgsql determine where to look/what
> version to use for plperlu?

You need to recompile.  You also need a pretty darn recent release of PG
(we only fixed things for 5.10 on 2008-01-22, according to the CVS
history).

regards, tom lane



[GENERAL] plperlu and perl 5.10

2008-04-07 Thread Kev
I'm trying to upgrade to perl 5.10, and realized that plperlu is still
using 5.8.  How exactly does pgsql determine where to look/what
version to use for plperlu?

Thanks,
Kev



Re: [GENERAL] slow pgsql tables - need to vacuum?

2008-04-07 Thread Dan99
On Apr 5, 6:36 pm, [EMAIL PROTECTED] ("Douglas McNaught") wrote:
> On Thu, Apr 3, 2008 at 2:34 PM, Dan99 <[EMAIL PROTECTED]> wrote:
> > Hi,
>
> >  I am having some troubles with a select group of tables in a database
> >  which are acting unacceptably slow.  For example a table with
> >  approximately < 10,000 rows took about 3,500ms to extract a single row
> >  using the following select statement:
>
> >  SELECT * FROM table WHERE column = 'value'
>
> >  I have preformed this same test on a number of different tables, only
> >  a few of which have this same problem.  The only common thing that I
> >  can see between these affected tables is the fact that they are dumped
> >  and re-populated every day from an outside source.
>
> You need to ANALYZE the tables after you load them, and make sure you
> have indexes on the column you're querying (which it sounds like you
> do, but they're not being used because the statistics for the table
> are inaccurate).  There may also be a lot of dead tuples which will
> further slow down a sequential scan.
>
> Do read up on VACUUM and MVCC in the docs--it's a very important
> thing. You will suffer horribly unless you have a working periodic
> VACUUM.
>
> Also, are you using TRUNCATE TABLE to clear out before the reload, or
> a mass DELETE?  The latter will leave a lot of dead rows, bloating the
> table and slowing down scans.  TRUNCATE just deletes the table file
> and recreates it empty.
>
> -Doug
>
> --
> Sent via pgsql-general mailing list ([EMAIL PROTECTED])
> To make changes to your 
> subscription:http://www.postgresql.org/mailpref/pgsql-general

Unfortunately, I did not design this database (or the website for that
matter) and am only maintaining it.  As a result of the inexperience
of the website designer, there are no indexes in any of the tables and
it would be rather difficult to put them in after the fact since this
is a live website.  Does TRUNCATE TABLE keep all necessary table
information such as indexes, constraints, triggers, rules, and
privileges? Currently a mass DELETE is being used to remove the data.
Since VACUUM has never been done on the tables before, should a VACUUM
FULL be done first?  If so, approximately how long does a VACUUM FULL
take on a database with 25 tables each having anywhere from 1,000 to
50,000 rows?  The reason I ask is because this is a live website, and
any down time is very inconvenient.  Also, would it be sufficient
(after the first VACUUM FULL) to simply use a VACUUM ANALYZE after the
tables are repopulated (ie. every night)?

Your help is much appreciated.
Daniel



Re: [GENERAL] drop database regardless of connections

2008-04-07 Thread Kev
On Apr 6, 1:34 pm, [EMAIL PROTECTED] (Craig Ringer) wrote:
> Kev wrote:
> > So I tried to do this in Perl, but for some reason neither
> > kill() nor Win32::Process::KillProcess() actually terminate the
> > threads.
>
> Threads? Each backend is a distinct process. I haven't the foggiest why
> they might be ignoring the signal, but then I'm very far from clueful
> about Pg on win32.

Ah, sorry, I meant processes.

> Anyway, you can use `psql' to query the activity tables using something
> like "SELECT procpid FROM pg_stat_activity WHERE datname = 'dbtodrop'"
> and see which backend pids need to be killed, then use 'pg_ctl kill
> signame pid' to kill them. A bit of powershell, cmd.exe, etc should do
> the job, though I agree that for win32 a builtin "pg_kill_backend()"
> function would be nicer, in that you could just execute a query like:

Oh, of course!  I'll go try that.  For some reason I had forgotten
about pg_ctl.

> SELECT pg_kill_backend(procpid)
> FROM pg_stat_activity
> WHERE datname = 'dbtodrop';
>
> You can use pg_cancel_backend() to cancel queries, but there doesn't
> seem to be an equivalent to actually disconnect / terminate a backend.

Exactly...

> Note that you can also update the system tables to prevent new
> connections being made to the database you're about to drop by setting
> pg_database.datallowconn to 'f' for the DB in question. That way, while
> you're killing off backends you won't have more joining.

Ah, this is new...thanks for the tip!
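Putting Craig's two tips together, a sketch for 8.x (procpid is the pre-9.2
column name; run as superuser):

```sql
-- 1. Stop new connections to the doomed database
UPDATE pg_database SET datallowconn = false WHERE datname = 'dbtodrop';
-- 2. List the backends that still have to go
SELECT procpid FROM pg_stat_activity WHERE datname = 'dbtodrop';
-- 3. From a shell, for each pid:  pg_ctl kill TERM <pid>
-- 4. Finally:  DROP DATABASE dbtodrop;
```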

> I'm curious about why you need to drop and create so many databases that
> this is an issue, though.

Well, frankly, it was just the *one* time that I was having enough
trouble with, but I also wanted to automate it so that I could do
things like refresh our sandbox database easily, and test going live
with my development database--drop it, load a copy of the production
one onto it, and then apply all the updates I had done since then.

Kev
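
For anyone following along, Craig's two tips above combine into something like the following sketch. It uses the thread's placeholder database name `dbtodrop`, assumes superuser access, and — since `pg_terminate_backend()` does not exist in releases of this era — the listed pids still have to be killed with `pg_ctl kill` from a shell:

```sql
-- 1. Block new connections to the database you're about to drop:
UPDATE pg_database SET datallowconn = 'f' WHERE datname = 'dbtodrop';

-- 2. Find the backends still attached to it, then run
--    "pg_ctl kill TERM <pid>" for each from a shell:
SELECT procpid FROM pg_stat_activity WHERE datname = 'dbtodrop';

-- 3. Once the backends are gone:
DROP DATABASE dbtodrop;
```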



Re: [GENERAL] Exception messages -> application?

2008-04-07 Thread Karsten Hilbert
On Sat, Apr 05, 2008 at 08:42:34PM -0700, Frank Miles wrote:

> Unfortunately this does not help for lesser events (i.e. NOTICE and WARNING).
> My preliminary effort suggests that psycopg isn't passing these.
Not as exceptions, certainly.

For one thing there's cursor.statusmessage, which (should) contain
whatever PostgreSQL replies to a command, say, SELECT or UPDATE, and
the number of rows affected. It's what you see in psql.

>  I'll
> take a further look at the client logging settings - will these be
> sufficient?
They should. They'll define what gets shipped to the client
in the status message.

Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346



Re: [GENERAL] calendar best practices for postgres

2008-04-07 Thread Roberts, Jon
pgAgent works well for me and it is built into pgAdmin.

http://www.pgadmin.org/docs/1.8/pgagent.html


Jon

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:pgsql-general-
> [EMAIL PROTECTED] On Behalf Of Tim Uckun
> Sent: Sunday, April 06, 2008 11:10 PM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] calendar best practices for postgres
> 
> Does anybody know of an open source application which leverages
> postgres to build a scheduling/calendaring application. Especially if
> it uses some of the datetime types and functions that are unique to
> postgres.
> 
> I am specifically interested in methods to deal with recurring events
> and dealing with "every third monday of the month" types of events.
> 
> Thanks.
> 



Re: [GENERAL] Conversion to 8.3

2008-04-07 Thread Terry Lee Tucker
On Saturday 05 April 2008 11:21, Tom Lane wrote:
> "Scott Marlowe" <[EMAIL PROTECTED]> writes:
> > what type is new.ontime ??   timestamp or interval.  I would expect it
> > to be an interval.  But intervals are either negative or positive, not
> > "ago" unless that's something peculiar to 7.4 that I've long since
> > forgotten.
>
> No, it's still around:
>
> regression=# select '-1 day'::interval;
>  interval
> --
>  -1 days
> (1 row)
>
> regression=# set datestyle = postgres;
> SET
> regression=# select '-1 day'::interval;
>   interval
> -
>  @ 1 day ago
> (1 row)
>
> The most bletcherous aspect of Terry's original coding is that it fails
> entirely, and silently, if the DateStyle setting isn't what it's
> assuming...
>
>regards, tom lane

Well, I didn't realize that "ago" was only applicable to a certain date style 
setting. I don't recall ever reading that anywhere but maybe I missed it. Now 
I know how to do it correctly.

Thanks to all who responded...
-- 
Terry Lee Tucker
Turbo's IT Manager
Turbo, division of Ozburn-Hessey Logistics
2251 Jesse Jewell Pkwy NE
Gainesville, GA 30501
Tel: (336) 372-6812  Fax: (336) 372-6812  Cell: (336) 404-6987
[EMAIL PROTECTED]
www.turbocorp.com



Re: [GENERAL] edb-debugger, debugging pl/pgsql

2008-04-07 Thread josep porres
If I try to close the parameter window using the X button instead of Cancel,
it's the same.
It's a bit annoying when I finish debugging, but I can debug, so it's fine
:)

I will be looking forward to new versions.

thanks again Dave

Josep Porres

2008/4/7, Dave Page <[EMAIL PROTECTED]>:
>
> On Mon, Apr 7, 2008 at 10:21 AM, josep porres <[EMAIL PROTECTED]> wrote:
> > well, when you asked me about where I downloaded it, I downloaded it
> > from there and installed it again.
> > It seems I can debug with no problems till now.
> > However, when I begin to debug and the function parameters window
> appears,
> >  if I click cancel it freezes.
>
>
> Yeah, there is a known issue when doing that. There is a patch being
> tested at the moment.
>
>
>
> --
>
> Dave Page
> EnterpriseDB UK: http://www.enterprisedb.com
>


Re: [GENERAL] edb-debugger, debugging pl/pgsql

2008-04-07 Thread Dave Page
On Mon, Apr 7, 2008 at 10:21 AM, josep porres <[EMAIL PROTECTED]> wrote:
> well, when you asked me about where I downloaded it, I downloaded it from
> there and installed it again.
> It seems I can debug with no problems till now.
> However, when I begin to debug and the function parameters window appears,
>  if I click cancel it freezes.

Yeah, there is a known issue when doing that. There is a patch being
tested at the moment.


-- 
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com



Re: [GENERAL] edb-debugger, debugging pl/pgsql

2008-04-07 Thread josep porres
well, when you asked me about where I downloaded it, I downloaded it from
there and installed it again.
It seems I can debug with no problems till now.
However, when I begin to debug and the function parameters window appears,
if I click cancel it freezes.

the log says:

2008-04-07 11:05:05 STATUS : Obteniendo Función detalles...
2008-04-07 11:05:05 STATUS : Obteniendo Función detalles... (0,01 seg)
2008-04-07 11:05:05 QUERY  : Scalar query (192.168.1.3:5432): SELECT
count(*) FROM pg_proc WHERE proname = 'pldbg_get_target_info';
2008-04-07 11:05:05 QUERY  : Query result: 1
2008-04-07 11:05:05 QUERY  : Scalar query (192.168.1.3:5432): SELECT
count(*) FROM pg_proc WHERE proname = 'plpgsql_oid_debug';
2008-04-07 11:05:05 QUERY  : Query result: 1
2008-04-07 11:05:08 QUERY  : Scalar query (192.168.1.3:5432): SELECT
count(*) FROM pg_proc WHERE oid = 16439
2008-04-07 11:05:08 QUERY  : Query result: 1
2008-04-07 11:05:08 QUERY  : SET log_min_messages TO fatal
2008-04-07 11:05:08 QUERY  : select t.*,   pg_catalog.oidvectortypes(
t.argtypes ) as argtypenames, t.argtypes as argtypeoids,  l.lanname,
n.nspname, p.proretset, y.typname AS rettype from  pldbg_get_target_info(
'16439', 'o' ) t , pg_namespace n, pg_language l, pg_proc p, pg_type y
where  n.oid = t.schema and   l.oid = t.targetlang and   p.oid = t.target
and   y.oid = t.returntype
2008-04-07 11:05:08 QUERY  : SELECT version();


thanks dave


Josep Porres



2008/4/7, Dave Page <[EMAIL PROTECTED]>:
>
> On Mon, Apr 7, 2008 at 9:40 AM, josep porres <[EMAIL PROTECTED]> wrote:
> > I downloaded it from:
> > http://www.postgresql.org/ftp/pgadmin3/release/v1.8.2/win32/
> > Spanish HTTP mirror
> >
> http://wwwmaster.postgresql.org/download/mirrors-ftp?file=%2Fpgadmin3%2Frelease%2Fv1.8.2%2Fwin32%2Fpgadmin3-1.8.2-2.zip
> >
> > Is this version wrong?
>
>
> No, I forgot it got re-rolled.
>
> Can you supply a debug log showing pgAdmin hanging after you start
> debugging? Look under File -> Options -> Logging.
>
>
>
>
> --
>
> Dave Page
> EnterpriseDB UK: http://www.enterprisedb.com
>


Re: [GENERAL] edb-debugger, debugging pl/pgsql

2008-04-07 Thread Dave Page
On Mon, Apr 7, 2008 at 9:40 AM, josep porres <[EMAIL PROTECTED]> wrote:
> I downloaded it from:
> http://www.postgresql.org/ftp/pgadmin3/release/v1.8.2/win32/
> Spanish HTTP mirror
> http://wwwmaster.postgresql.org/download/mirrors-ftp?file=%2Fpgadmin3%2Frelease%2Fv1.8.2%2Fwin32%2Fpgadmin3-1.8.2-2.zip
>
> Is this version wrong?

No, I forgot it got re-rolled.

Can you supply a debug log showing pgAdmin hanging after you start
debugging? Look under File -> Options -> Logging.



-- 
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com



Re: [GENERAL] Silent install 8.3 diiffers from 8.2

2008-04-07 Thread Syra . Didelez
"You should probably send those logs to the list, rather than straight to 
me.

However, this line sticks out at me:


2008-04-07 07:43:14 GMT FATAL:  lock file "postmaster.pid" already exists
2008-04-07 07:43:14 GMT HINT:  Is another postmaster (PID 3724) running
in data directory "F:/B"?


Is it possible that the old server wasn't shut down cleanly on remove?
Is there really a postmaster.pid file present?

--
Craig Ringer"


Hi,

Craig, sorry to bother you directly!

The previous server was shut down (killed 6 postgres.exe processes) and 
then removed completely,
both directories basedir & datadir are empty when starting the install.

And yes, there is a postmaster.pid file created during install.
It might not be important, but 'usually' there are 4 postgres.exe
processes showing up. Here there are 6... it might look like the server
is trying to be started up twice, also regarding the error in the
application log about the lock file.

thus, here the log files:


the system event log shows the following:
Type         Date      Time        Source                   Category  Event  User    Computer
Information  4/7/2008  9:44:18 AM  Service Control Manager  None      7036   N/A     GS2479
--> The PostgreSQL Database Server 8.3 service entered the stopped state.
Information  4/7/2008  9:43:14 AM  Service Control Manager  None      7035   SYSTEM  GS2479
--> The PostgreSQL Database Server 8.3 service was successfully sent a start control.
Information  4/7/2008  9:43:09 AM  Service Control Manager  None      7036   N/A     GS2479
--> The PostgreSQL Database Server 8.3 service entered the stopped state.
Information  4/7/2008  9:42:05 AM  Service Control Manager  None      7035   SYSTEM  GS2479
--> The PostgreSQL Database Server 8.3 service was successfully sent a start control.
Information  4/7/2008  9:41:59 AM  Service Control Manager  None      7036   N/A     GS2479
--> The PostgreSQL Database Server 8.3 service entered the stopped state.
Information  4/7/2008  9:40:55 AM  Service Control Manager  None      7035   SYSTEM  GS2479
--> The PostgreSQL Database Server 8.3 service was successfully sent a start control.
Information  4/7/2008  9:40:49 AM  Service Control Manager  None      7036   N/A     GS2479
--> The PostgreSQL Database Server 8.3 service entered the stopped state.
Information  4/7/2008  9:39:43 AM  Service Control Manager  None      7035   SYSTEM  GS2479
--> The PostgreSQL Database Server 8.3 service was successfully sent a start control.
Information  4/7/2008  9:38:13 AM  Service Control Manager  None      7036   N/A     GS2479
--> The Windows Installer service entered the running state.
Information  4/7/2008  9:38:13 AM  Service Control Manager  None      7035   SYSTEM  GS2479
--> The Windows Installer service was successfully sent a start control.

the application event log:
Type         Date      Time        Source      Category  Event  User    Computer
Error        4/7/2008  9:43:14 AM  PostgreSQL  None      0      N/A     GS2479
--> 2008-04-07 07:43:14 GMT FATAL:  lock file "postmaster.pid" already exists
    2008-04-07 07:43:14 GMT HINT:  Is another postmaster (PID 3724) running in data directory "F:/B"?
Information  4/7/2008  9:43:14 AM  PostgreSQL  None      0      N/A     GS2479
--> Waiting for server startup...
Information  4/7/2008  9:43:09 AM  PostgreSQL  None      0      N/A     GS2479
--> Timed out waiting for server startup
Error        4/7/2008  9:42:05 AM  PostgreSQL  None      0      N/A     GS2479
--> 2008-04-07 07:42:05 GMT FATAL:  lock file "postmaster.pid" already exists
    2008-04-07 07:42:05 GMT HINT:  Is another postmaster (PID 3724) running in data directory "F:/B"?
Information  4/7/2008  9:42:05 AM  PostgreSQL  None      0      N/A     GS2479
--> Waiting for server startup...
Information  4/7/2008  9:41:59 AM  PostgreSQL  None      0      N/A     GS2479
--> Timed out waiting for server startup
Error        4/7/2008  9:40:55 AM  PostgreSQL  None      0      N/A     GS2479
--> 2008-04-07 07:40:55 GMT FATAL:  lock file "postmaster.pid" already exists
    2008-04-07 07:40:55 GMT HINT:  Is another postmaster (PID 3724) running in data directory "F:/B"?
Information  4/7/2008  9:40:55 AM  PostgreSQL  None      0      N/A     GS2479
--> Waiting for server startup...
Warning      4/7/2008  9:40:49 AM  Userenv     None      1517   SYSTEM  GS2479
--> Windows saved user GS2479\postgres registry while an application or
    service was still using the registry during log off. The memory used by
    the user's registry has not been freed. The registry will be unloaded when
    it is no longer in use.
    This is often caused by services running as a user account; try
    configuring the services to run in either the LocalService or
    NetworkService account.
Information  4/7/2008  9:40:48 AM  PostgreSQL  None      0      N/A

Re: [GENERAL] edb-debugger, debugging pl/pgsql

2008-04-07 Thread josep porres
I downloaded it from:
http://www.postgresql.org/ftp/pgadmin3/release/v1.8.2/win32/
Spanish HTTP mirror
http://wwwmaster.postgresql.org/download/mirrors-ftp?file=%2Fpgadmin3%2Frelease%2Fv1.8.2%2Fwin32%2Fpgadmin3-1.8.2-2.zip

Is this version wrong?


Josep Porres



2008/4/7, Dave Page <[EMAIL PROTECTED]>:
>
> On Mon, Apr 7, 2008 at 9:29 AM, josep porres <[EMAIL PROTECTED]> wrote:
> > ok, thx,
> > I can debug from server pgadmin which ships with pg8.3 (1.8.2 rev 7030),
> > because it runs on windows,
> > but in the client side, running pgadmin1.8.2 rev 7050 it seems to hang,
> it
> > freezes.
> > Do I need to config anything?
>
>
> No, but where did you get rev 7050 from? That's quite a few revisions
> after 1.8.2 was tagged.
>
>
> --
>
> Dave Page
> EnterpriseDB UK: http://www.enterprisedb.com
>


Re: [GENERAL] edb-debugger, debugging pl/pgsql

2008-04-07 Thread Dave Page
On Mon, Apr 7, 2008 at 9:29 AM, josep porres <[EMAIL PROTECTED]> wrote:
> ok, thx,
> I can debug from server pgadmin which ships with pg8.3 (1.8.2 rev 7030),
> because it runs on windows,
> but in the client side, running pgadmin1.8.2 rev 7050 it seems to hang, it
> freezes.
> Do I need to config anything?

No, but where did you get rev 7050 from? That's quite a few revisions
after 1.8.2 was tagged.

-- 
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com



Re: [GENERAL] edb-debugger, debugging pl/pgsql

2008-04-07 Thread josep porres
ok, thx,
I can debug from server pgadmin which ships with pg8.3 (1.8.2 rev 7030),
because it runs on windows,
but in the client side, running pgadmin1.8.2 rev 7050 it seems to hang, it
freezes.
Do I need to config anything?

thx


Josep Porres

2008/4/7, Dave Page <[EMAIL PROTECTED]>:
>
> On Mon, Apr 7, 2008 at 8:11 AM, josep porres <[EMAIL PROTECTED]> wrote:
> >
> > Hi,
> >
> > I don't know if here is the right place to post this, but anyway ...
> > Does anybody know if there is any binary dist for win32 edb-debugger for
> > pgsql8.3?
> > If not, do you know which compiler I have to use?
> >  I don't need to install anything on the client side where is my
> pgadmin,
> > right?
> >
>
>
> It ships with the binary distro of PG 8.3 for Windows.
>
> And, no, you don't need anything except pgAdmin 1.8+ on the client side.
>
>
> --
> Dave Page
> EnterpriseDB UK: http://www.enterprisedb.com
>


Re: [HACKERS] [GENERAL] ANALYZE getting dead tuple count hopelessly wrong

2008-04-07 Thread Stuart Brooks

Pavan Deolasee wrote:
> On Fri, Apr 4, 2008 at 11:10 AM, Tom Lane <[EMAIL PROTECTED]> wrote:
>> The policy of this project is that we only put nontrivial bug fixes into
>> back branches, and I don't think this item qualifies ...
>
> Got it. I will submit a patch for HEAD.
>
> Thanks,
As I mentioned earlier, I patched 8.3.1 with Pavan's patch and have been 
running tests. After a few days I have got postgres to lock up - not 
sure if it is related. Below is a ps from my system (NetBSD 3).


TEST> ps -ax | grep post
1952 ?  IW
2472 ?  DW  COMMIT
2661 ?  DW

[WARN] PSQL:exec - failed in command <SELECT 
relname,n_tup_ins,n_live_tup,n_dead_tup,pg_total_relation_size('s8_.' 
|| relname)*10/(1024*1024),last_autovacuum FROM pg_stat_user_tables 
WHERE schemaname='s8_' ORDER BY n_tup_ins DESC>

[WARN]   error = 'server closed the connection unexpectedly
   This probably means the server terminated abnormally
   before or while processing the request.'
[WARN] ConnectionNB: PQconsumeInput failed with error 'server closed the 
connection unexpectedly

   This probably means the server terminated abnormally
   before or while processing the request.'


The server is still running but I can't access it. A top yields:

load averages:  0.23,  0.23,  0.21                                09:53:58
110 processes: 109 sleeping, 1 on processor

Memory: 513M Act, 256M Inact, 1336K Wired, 75M Exec, 557M File, 2776K Free
Swap: 600M Total, 600M Free


 PID USERNAME PRI NICE   SIZE   RES STATE  TIME   WCPUCPU COMMAND
 463 root   20  6132K   14M select 0:06  0.05%  0.05% kdeinit
2472 postgres -22   -2  4580K    4K mclpl  814:23  0.00%  0.00% 
2631 root     -22    0   644K    4K mclpl  606:25  0.00%  0.00% 
 233 root   2024M   31M select 4:47  0.00%  0.00% XFree86
 451 root   20  3544K   15M select 4:45  0.00%  0.00% kdeinit
  16 root  180 0K  182M syncer 3:51  0.00%  0.00% [ioflush]
  17 root -180 0K  182M aiodoned   1:46  0.00%  0.00% [aiodoned]
  15 root -180 0K  182M pgdaemon   1:30  0.00%  0.00% [pagedaemon]
1301 root     -22    0  4092K    4K mclpl    1:23  0.00%  0.00% 
2680 postgres   2   -2  3560K 1588K poll   1:18  0.00%  0.00% postgres
1493 root   20  3488K   17M select 1:09  0.00%  0.00% korgac
 461 root   20  3748K   16M select 0:57  0.00%  0.00% kdeinit
3156 postgres   2   -2  3448K 1792K select 0:45  0.00%  0.00% postgres
1174 root   20  2608K 2928K select 0:40  0.00%  0.00% profiler
1428 root   20  3376K   13M select 0:26  0.00%  0.00% kdeinit
2661 postgres -22   -2  4896K    4K mclpl    0:11  0.00%  0.00% 

I'm not convinced this is a postgresql bug (state=mclpl concerns me), 
but it's the first time I've seen it. I suppose it could be: 
http://www.netbsd.org/cgi-bin/query-pr-single.pl?number=35224.


Anything I can do which might help isolating the problem?

Regards
Stuart






Re: [GENERAL] edb-debugger, debugging pl/pgsql

2008-04-07 Thread Dave Page
On Mon, Apr 7, 2008 at 8:11 AM, josep porres <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> I don't know if here is the right place to post this, but anyway ...
> Does anybody know if there is any binary dist for win32 edb-debugger for
> pgsql8.3?
> If not, do you know which compiler I have to use?
>  I don't need to install anything on the client side where is my pgadmin,
> right?
>

It ships with the binary distro of PG 8.3 for Windows.

And, no, you don't need anything except pgAdmin 1.8+ on the client side.

-- 
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com



Re: [GENERAL] select distinct and index usage

2008-04-07 Thread David Wilson
On Mon, Apr 7, 2008 at 2:05 AM, Alban Hertroys
<[EMAIL PROTECTED]> wrote:
> On Apr 7, 2008, at 1:32 AM, David Wilson wrote:
>
>  The database's estimates seem consistent with yours, so why is it doing
> this? Could you provide an EXPLAIN ANALYSE? It shows the actual numbers next
> to the estimates, although I figure that query might take a while...

explain analyze select distinct datestamp from vals;
                                  QUERY PLAN
------------------------------------------------------------------------------
 Unique  (cost=15003047.47..15380004.83 rows=4263 width=4) (actual time=649599.159..721671.595 rows=4252 loops=1)
   ->  Sort  (cost=15003047.47..15191526.15 rows=75391472 width=4) (actual time=649599.157..694392.602 rows=75391476 loops=1)
         Sort Key: datestamp
         Sort Method:  external merge  Disk: 1178592kB
         ->  Seq Scan on vals  (cost=0.00..1531261.72 rows=75391472 width=4) (actual time=9.104..93130.468 rows=75391476 loops=1)
 Total runtime: 722379.434 ms

(There were a couple other very long-running, disk-intensive queries
going on in the background of this, so that runtime is a little
inflated, but the values should still all be relevant.)

>  Pg estimates the costs quite high too. It's almost as if there isn't an
> index on that column and it has no other way then doing a sequential scan...
> Could you show us the table definition and its indexes? What version of Pg
> is this?

Pg is 8.3.1

Table definition:
CREATE TABLE vals (
    sid integer NOT NULL,
    eid integer NOT NULL,
    datestamp integer NOT NULL,
    val_dur integer NOT NULL,
    acc real NOT NULL,
    yld real NOT NULL,
    rt real NOT NULL,
    ydev real NOT NULL,
    vydev real NOT NULL,
    adev real NOT NULL,
    achange real NOT NULL,
    ychange real NOT NULL,
    arsi real NOT NULL,
    yrsi real NOT NULL,
    UNIQUE (sid, eid, val_dur, datestamp),
    FOREIGN KEY (sid, eid, datestamp) REFERENCES preds (sid, eid, datestamp)
        ON DELETE CASCADE
);
create index val_datestamp_idx on vals(datestamp);
create index val_evaluator_idx on vals(eid);
create index val_search_key on vals(val_dur,eid,datestamp);
create index val_vd_idx on vals(val_dur,datestamp);

(The various indices are for a variety of common queries into the table)

>  It may be that your index on vals.datestamp doesn't fit into memory; what
> are the relevant configuration parameters for your database?

That's a very good question. I recently had to rebuild this particular
database and haven't played with the configuration parameters as much
as I'd like- what parameters would be most relevant here? I hadn't
realized that an index needed to fit into memory.

pg_total_relation_size('vals') - pg_relation_size('vals') gives 11gb.
All indexed columns are integers. My guess is that this means that
it's likely the index doesn't fit into memory.
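
As a rough way to check that guess, a query along these lines reports the on-disk size of each index on the table (a sketch using the stats and size functions available in 8.3; compare the result against shared_buffers and effective_cache_size):

```sql
-- Per-index on-disk size for the vals table:
SELECT indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE relname = 'vals';
```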

-- 
- David T. Wilson
Princeton Satellite Systems
[EMAIL PROTECTED]



Re: [GENERAL] Connection reset by peer / broken pipe

2008-04-07 Thread Albe Laurenz
Jeff Wigal wrote:
>>> I am running Postgres 8.2.3 and am seeing the following error messages in my
>>> logs:
>>>
>>> LOG:  SSL SYSCALL error: Connection reset by peer
>>> LOG:  could not receive data from client: Connection reset by peer
>>> LOG:  unexpected EOF on client connection
>>> LOG:  could not send data to client: Broken pipe
>>  
>> Do your client applications tend to leave an open connection sitting
>> idle for awhile?  If so you might be getting burnt by idle-connection
>> timeouts in intervening routers.  NAT-capable boxes in particular
>> will kill a connection that carries no data for "too long".  If you're
>> lucky the router will offer a way to adjust its timeout ...
>   
> I'm in the process of tracking down the cause of this... Is 
> there any way on the server side of things to terminate a 
> connection after "x" number of minutes?  For what we're 
> doing, there is no reason to have a connection open after 10 
> minutes.

There are tcp_keepalives_idle, tcp_keepalives_interval, and tcp_keepalives_count
(see http://www.postgresql.org/docs/8.3/static/runtime-config-connection.html).

They can make the server check idle connections and detect dead ones,
which will be closed.
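
For illustration, a postgresql.conf fragment might look like this (the values are examples only, not recommendations; a setting of 0 means "use the operating system default"):

```
# Detect dead client connections via TCP keepalives:
tcp_keepalives_idle = 600      # seconds of inactivity before the first probe
tcp_keepalives_interval = 60   # seconds between probes
tcp_keepalives_count = 3       # lost probes before the connection is dropped
```

Note that keepalives only detect clients that have gone away; they do not forcibly disconnect a healthy connection that is merely idle for 10 minutes.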

Yours,
Laurenz Albe



[GENERAL] edb-debugger, debugging pl/pgsql

2008-04-07 Thread josep porres
Hi,

I don't know if here is the right place to post this, but anyway ...
Does anybody know if there is any binary dist for win32 edb-debugger for
pgsql8.3?
If not, do you know which compiler I have to use?
I don't need to install anything on the client side where is my pgadmin,
right?

thx


Josep Porres