[GENERAL] Changing array subscripting to zero-offset

2006-06-22 Thread Erin Sheldon

Hello everyone -

Array columns are, by default, 1-offset in their
subscripting.  Since I usually am calling postgres
from a language with zero-offset, I would prefer
that postgres conform to that.  The online
documentation hints that this may be configurable
but I haven't been able to find how this is done.

Any help is greatly appreciated,
Erin Sheldon

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] getting comment about constraint

2006-06-22 Thread canfieldsteve
How do you retrieve a comment on a constraint?  For example, consider
the following table and comment:

 create table people (
person_id serial primary key,
uid   varchar(25)   not null,
constraint uid_alphanumeric check (uid ~ '^[a-z0-9_]+$')
 );

 comment on constraint uid_alphanumeric on people
   is 'UID may only contain letters, numerals, and underscores';

That code creates a constraint within the people table named
uid_alphanumeric.  Furthermore, it creates a comment on that
constraint.

Given the name of the table and constraint, how would you retrieve the
comment?  Chapter 9. Functions and Operators says that
obj_description(object_oid, catalog_name) returns a comment given the
object's OID and class name.  How do I get the OID for the constraint?
It doesn't seem to be stored in pg_constraint.


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Unable to start pg service

2006-06-22 Thread Vladimir Leban
Hello,

I'm currently on version 7.3.

I know this is an old version and it would be a good idea to migrate.
Before doing that, I would like to make a dump of my database. That's why
I must get this service running at any cost.

Thanks for your help!! :)


2006-06-19 10:31:55 LOG: database system shutdown was interrupted at 2006-06-19 10:23:06
2006-06-19 10:31:55 LOG: checkpoint record is at 0/17B1FC0
2006-06-19 10:31:55 LOG: redo record is at 0/17B1FC0; undo record is at 0/0; shutdown TRUE
2006-06-19 10:31:55 LOG: next transaction id: 105375; next oid: 28662
2006-06-19 10:31:55 LOG: database system was not properly shut down; automatic recovery in progress
2006-06-19 10:31:55 LOG: ReadRecord: invalid magic number  in log file 0, segment 1, offset 8069120
2006-06-19 10:31:55 LOG: redo is not required
2006-06-19 10:31:57 PANIC: XLogWrite: write request 0/17B2000 is past end of log 0/17B2000
2006-06-19 10:31:57 DEBUG: reaping dead processes
2006-06-19 10:31:57 LOG: startup process (pid 2304) was terminated by signal 6
2006-06-19 10:31:57 LOG: aborting startup due to startup process failure
2006-06-19 10:31:57 DEBUG: proc_exit(1)
2006-06-19 10:31:57 DEBUG: shmem_exit(1)
IpcMemoryDetach: shmdt(0xd0) failed: Invalid argument
2006-06-19 10:31:57 DEBUG: exit(1)




[GENERAL] BackUp

2006-06-22 Thread Daniel
Hi All..
  Is there a way to do automatic database backup in PostgreSQL?
Please Reply..
Thanks,
Daniel


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Interface Guidance and Opinions Needed

2006-06-22 Thread Chris Golden
Hello to all,



This is my first time posting to this forum and I am very
new to PostgreSQL. I am very excited about using it. I have set up
a database and just need a point in the right direction on
interfacing. I have an orders and messages database. One of
the providers I would like to interface with has sent me an XML spec to make
HTTPS posts to. I am just wondering what would be the easiest thing to use
to implement the two. I would like to scan my database for new orders
then do an HTTPS post to the provider. I would also like to periodically
check for new orders/messages from the provider via an HTTPS post and bring
them into my database (a constant exchange of information). I am
wondering if something like PHP would be a universal solution, or something
similar?



Sorry if the information I have given is basic, I have been
doing research on the internet trying to figure out which direction to go and
starting off is a little overwhelming.



Thanks in advance for any information



Chris Golden










[GENERAL] SELECT statement takes 10 minutes to answer

2006-06-22 Thread Mustafa Korkmaz
Hi,

I am using PostgreSQL 7.4.5 on a Linux machine, and have the problem
that an easy SELECT field1, field2, field3 FROM a_table statement
takes 10 minutes to give an answer. The table has 750,000 data rows. I
also ran a VACUUM on a_table before the statement, but it doesn't help at
all. The table has an index on a timestamp field.
What could be the problem, or is it normal that PostgreSQL can't
handle this much data in an acceptable response time?

Regards,
M.Korkmaz


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Why my cursor construction is so slow?

2006-06-22 Thread [EMAIL PROTECTED]
Hi
I have following table:

CREATE OR REPLACE FUNCTION alias(
   v_mask alias.mask%TYPE,
   ) RETURNS INT8 AS
with index:
CREATE INDEX alias_mask_ind ON alias(mask);

and this table has about 1 million rows.

In DB procedure I execute:

LOOP
    OPEN cursor1 FOR SELECT * FROM alias WHERE mask = alias_out
        ORDER BY mask;
    i := 0;
    LOOP
        i := i + 1;
        FETCH cursor1 INTO alias_row;
        EXIT WHEN i = 10;
    END LOOP;
    CLOSE cursor1;
    EXIT WHEN end_number = 1;
END LOOP;

Such a construction is very slow, but when I modify the SQL to:
OPEN cursor1 FOR SELECT * FROM alias WHERE mask=alias_out
ORDER BY mask LIMIT 100;

it works very fast. It is strange to me because I've understood so far
that when a cursor is opened the SELECT is executed but Postgres does not
fetch all rows; the cursor is only positioned on the first row, and when
you execute FETCH the next row is read. But this example shows something
different.

Can somebody clarify what is wrong with my example? I need select
without LIMIT 100 part.

Regards
Michal Szymanski
http://blog.szymanskich.net


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Why my cursor construction is so slow?

2006-06-22 Thread Szymic1

 CREATE OR REPLACE FUNCTION alias(
v_mask alias.mask%TYPE,
) RETURNS INT8 AS

Sorry my mistake it should be:

CREATE TABLE alias (
   alias_id  BIGSERIAL PRIMARY KEY,
   mask      VARCHAR(20) NOT NULL DEFAULT ''
);


---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] XML Support

2006-06-22 Thread Essam Mansour
Hi everybody,

In my research project, I need a DBMS that supports XML storage and retrieval, and provides ECA rule support. Currently, I am using Oracle 10g. However, I am interested in using an open source DB.

I am looking for a document addressing the XML support in the current PostgreSQL release (8.1.4). I would like to know:
- whether the PostgreSQL DB could be used as an XML repository or not?
- whether the PostgreSQL DB supports XQuery or SQL/XML or not?
- if yes, could I write triggers combined with SQL/XML statements or XQuery statements?

--
Regards,
Essam Mansour


[GENERAL] VACUUM hanging on idle system

2006-06-22 Thread Clarence
I have a completely idle postgresql system (all backends idle, none
in transaction); every time I attempt to vacuum a particular table, it hangs
after a while. Here is the output of vacuumdb:

INFO:  vacuuming public.ledgerdetail
INFO:  index ledgerdetail_pkey now contains 11574842 row versions in 33032 pages
DETAIL:  1 index row versions were removed.
42 index pages have been deleted, 42 are currently reusable.
CPU 1.23s/1.57u sec elapsed 12.13 sec.

The table has another index besides that one. In the system error log
there is this line:
   could not write block 32756 of relation 1663/134097/385649401:
   No space left on device
The referenced block is in the table's index file. The disk has plenty
of space.
I've seen that semaphore operations can give an out-of-space error. I wrote
a program to dump the state of the semaphores, and ran it with the system
idle and with the vacuum hanging. There are about 25 semaphore sets in the
system, but only one changes. Here is the before and after:

(columns 0-16; rows are pid, value, ncnt, zcnt)

Set 131091 (system idle)
pid:   47542 47395 47376 47375 47374 47373 47372 47371 84426 47370 47541 45292 46849 46810 46809 46808 530
value:     0     0     0     0     0     0     0     0     0     0     0     0     0     0     0     0 537
ncnt:      0     0     0     0     0     0     0     0     0     0     0     0     0     0     0     0   0
zcnt:      0     0     0     0     0     0     0     0     0     0     0     0     0     0     0     0   0

Set 131091 (vacuum hanging)
pid:   47542 47395 47376 47375 47374 47373 47372 47371 84426 47370 47541 45381 46849 46810 46809 46808 530
value:     0     0     0     0     0     0     0     0     0     0     0     0     0     0     0     0 537
ncnt:      0     0     0     0     0     0     0     0     0     0     0     1     0     0     0     0   0
zcnt:      0     0     0     0     0     0     0     0     0     0     0     0     0     0     0     0   0

The pid of the backend doing the vacuum is 45381, and it seems to be waiting
on semaphore #11. (I have the complete dump if it's of interest.)

Someone tried to drop the index and got this:
ERROR:  FlushRelationBuffers(idx_ld1, 0): block 32756 is referenced
(private 0, global 1)

I'm going to restart postgresql and see if any possible semaphore problem
goes away.


Any ideas about what the problem is here?

Thanks.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Form builder?

2006-06-22 Thread webb . sprague
I don't want to revisit or be redundant... but is there a quick and
dirty and cross-platform system for developing user input forms for
Postgres?  Ideally, I am interested in something such that you can give
it (it being something like a Python function) a table name,
resulting in a magically appearing input form.  It should be able to
deal with getting the format more or less correct for a datatype, not
displaying system columns, etc, using the system tables to get the
necessary info.

I thought first about hacking xdialog, but it doesn't have multiple
field forms.  I have looked at wx, but real GUI programming is complex
and not general.  Right now I have an ugly hack that writes out a text
file with colon separated lines for each row, with a defaults option so
that you can chain together a set of One-Many forms (eg you enter a
person, then you can cycle through with library books each referencing
that person in a for loop).

I would rather trade ease of use for functionality, if it can get it
90% correct just by the table name and the defaults.  And I refuse to
use XML.  And I want pure Linux.  If I have to develop the darn thing,
of course I am happy to share.

I want to develop an anthropological fieldwork database, but the form
based data entry is important for ease of use.

Cheers.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Help from Havana

2006-06-22 Thread Giraldo Leon
Hello,
I want to know how to print a database schema with the information of tables, fields, keys, etc.
Thanks,
Giraldo

[GENERAL] Bitwise OR in a group statement

2006-06-22 Thread Greg Gazanian
I was wondering if anyone happens to know whether there is an Aggregate 
function available in Postgres that can do an OR on a column of the bit varying 
type. For example I would like to do something as follows:

bitstring
---------
1110
0100

SELECT bitwise_or(bitstring) FROM table;

Resulting in:

1110
Any thoughts? Thanks.

- Greg Gazanian
Network Systems Analyst
Technology and Information Services
Arcadia Unified School District


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] OLEDB Provider for Postgres

2006-06-22 Thread amishsami
Hi all

I desperately need an OLE DB provider for Postgres. Can anyone help me?

Thanks and Regards
A.M.


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Exporting data from view

2006-06-22 Thread Brent Wood



 I read recently about the efforts underway to COPY from a view,
 however I was wondering what the current best-practices are for being
 able to copy out of a view and import that data into an actual table
 elsewhere.  I am currently doing psql -c SELECT ... and then using a
 bit of perl to transform that into something copyable (i.e. CSV), but
 is there a way to directly export the data in an easily importable
 form?


psql -A -t -c "select * from view;"


You can use -F to set the delimiter
-o to specify an output file name (or just redirect stdout)

etc.

Try  man psql


To redirect it into a table,

insert into table 
 select  ;

between databases/systems


psql -d DB -p port -A -t -c "select * from view;" | psql ... -c "copy ...
from stdin;"

can work, as the psql extract can be written to generate the same format
as copy from.


Brent Wood

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] join on next row

2006-06-22 Thread Aaron Evans


sorry to nitpick, but I think that to get this query to do exactly
what you want you'll need to add ordering over EventTime on your
sub-selects to ensure that you get the next event and not just some
later event on the given day.


-ae

On Jun 20, 2006, at 11:12 AM, Gurjeet Singh wrote:


Gurjeet Singh wrote:
 It would have been quite easy if done in Oracle's 'lateral view'
 feature. But I think it is achievable in standard SQL too; using
 subqueries in the select-clause.

 Try something like this:

 select
     Employee, EventDate,
     EventTime as e1_time,
     EventType as e1_type,
     (select
         EventTime
      from
         Events
      where Employee = O.Employee
        and EventDate = O.EventDate
        and EventTime > O.EventTime
      limit 1
     ) as e_time_1,
     (select
         EventType
      from
         Events
      where Employee = O.Employee
        and EventDate = O.EventDate
        and EventTime > O.EventTime
      limit 1
     )
 from
     Events O

 Hope it helps...

 Regards,
 Gurjeet.






[GENERAL] sql question; checks if data already exists before inserted

2006-06-22 Thread nuno
hi, there. i'm trying to write a SQL statement which does the following
things.

1. checks if data already exists in the database
2. if not, insert data into database otherwise skip.

for example, i'd like to insert a student called 'Michael Jordan' whose
ID is 'JORDANMICHAEL' only if the id, 'JORDANMICHAEL' does not already
exist in the database. anyway, my query looks like...

insert into student (studentid, fname, lname)
select 'JORDANMICHAEL', 'Michale', 'Jordan' from student where
studentid not in
(select studentid from student);

however, this does not seem to work. it does not insert data even if it
does not exist in the database. hmm!

any clue?

thanks.


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] BackUp

2006-06-22 Thread Chris

Daniel wrote:

Hi All..
  Is there a way to do automatic database backup in PostgreSQL?
Please Reply..


There are different ways, the most common is to do a pg_dump / 
pg_dumpall once a night via cron.


http://www.postgresql.org/docs/8.1/static/backup.html
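
A minimal sketch of the cron approach (the schedule, user, and paths here are illustrative, not from this thread):

```
# crontab entry: dump all databases nightly at 02:00
# note: % must be escaped as \% inside a crontab line
0 2 * * * pg_dumpall -U postgres | gzip > /var/backups/pgsql/dumpall-$(date +\%Y\%m\%d).sql.gz
```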

--
Postgresql  php tutorials
http://www.designmagick.com/

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] SELECT statement takes 10 minutes to answer

2006-06-22 Thread A. Kretschmer
On 19.06.2006 at 5:26:54 -0700, Mustafa Korkmaz wrote the following:
 Hi,
 
 Iam using PostgreSQL 7.4.5 on a linux machine, and have the problem
 that an easy SELECT field1, field2, field3 FROM a_table-statement

Hey, read my answer on your question in the news, MID
[EMAIL PROTECTED] ;-)


Andreas
-- 
Andreas Kretschmer(Kontakt: siehe Header)
Heynitz:  035242/47215,  D1: 0160/7141639
GnuPG-ID 0x3FFF606C http://wwwkeys.de.pgp.net
 ===Schollglas Unternehmensgruppe=== 

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Why my cursor construction is so slow?

2006-06-22 Thread Martijn van Oosterhout
On Tue, Jun 20, 2006 at 02:06:19AM -0700, [EMAIL PROTECTED] wrote:
 Such construction is very slow but when I modify SQL to:
 OPEN cursor1 FOR SELECT * FROM alias WHERE mask=alias_out
 ORDER BY mask LIMIT 100;
 
 it works very fast. It is strange for me becuase I've understood so far
 that when cursor is open select is executed but Postgres does not
 select all rows - only cursor is positioned on first row, when you
 execute fetch next row is read. But this example shows something
 different.

PostgreSQL tries to optimise for overall query time. Without the limit
it tries to find a plan that will return the whole set as quick as
possible. With the LIMIT it might take a different approach, which
might be worse if you read the whole lot, but better for a limited set.
A fast-start plan so to speak.

To see the detail I'd suggest doing an EXPLAIN ANALYZE over the query
with and without the LIMIT to see the changes.
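
For example, assuming a literal value stands in for alias_out, the two plans can be compared side by side:

```sql
-- the fast-start plan chosen under LIMIT may differ from the full-result plan
EXPLAIN ANALYZE SELECT * FROM alias WHERE mask = 'some_mask' ORDER BY mask;
EXPLAIN ANALYZE SELECT * FROM alias WHERE mask = 'some_mask' ORDER BY mask LIMIT 100;
```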

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.




Re: [GENERAL] Dynamic loading of C functions

2006-06-22 Thread Martijn van Oosterhout
On Wed, Jun 21, 2006 at 12:03:26PM -0400, Jasbinder Bali wrote:
 I've tried everything so that my .so file is recognized but in vain.
 Don't know whats going wrong.

The easiest may be to attach strace to the backend (strace -p) and then
do a LOAD blah on the frontend and see what happens.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.




Re: [GENERAL] sql question; checks if data already exists before inserted

2006-06-22 Thread Martijn van Oosterhout
On Wed, Jun 21, 2006 at 10:59:42PM -0700, nuno wrote:
 insert into student (studentid, fname, lname)
 select 'JORDANMICHAEL', 'Michale', 'Jordan' from student where
 studentid not in
 (select studentid from student);
 
 however, this does not seem to work. it does not insert data even if it
 does not exist in the database. hmm!

There are no NULL studentid's, right? Because that would make NOT IN do
something other than you think. Perhaps NOT EXISTS would work better.
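
A sketch of the NOT EXISTS form, reusing the names from the original post (note it selects from no table at all, so at most one row is inserted, rather than one per existing student):

```sql
INSERT INTO student (studentid, fname, lname)
SELECT 'JORDANMICHAEL', 'Michael', 'Jordan'
WHERE NOT EXISTS (
    SELECT 1 FROM student WHERE studentid = 'JORDANMICHAEL'
);
```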

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.




Re: [GENERAL] Help from Havana

2006-06-22 Thread brian ally

Giraldo Leon wrote:

I want to know how to print a database eschema with the information of tables, 
fields,keys,etc
   


Have a look at PostgreSQL Autodoc

http://www.rbt.ca/autodoc/

b

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Bitwise OR in a group statement

2006-06-22 Thread Florian G. Pflug

Greg Gazanian wrote:

I was wondering if anyone happens to know whether there is an Aggregate 
function available in Postgres that can do an OR on a column of the bit varying 
type. For example I would like to do something as follows:

bitstring
*
1110
0100


SELECT bitwise_or(bitstring) FROM table;

Resulting in:

There is no such function predefined, but you can easily define one 
yourself.


create aggregate or_bit64(
  basetype = bit,
  sfunc = bitor,
  stype = bit,
  initcond = ''
);

I'm using this in production, and it works fine. I didn't find a way to
make this length-agnostic, so I defined this for all lengths of
bitstrings that my app uses (32 and 64).
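
Usage then matches the shape of the original request (the table name here is illustrative):

```sql
SELECT or_bit64(bitstring) FROM some_table;
```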


greetings, Florian Pflug


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Return the primary key of a newly inserted row?

2006-06-22 Thread Kenneth Downs

John Tregea wrote:


Greeting again,

I am writing records to postgreSQL from an IDE called revolution. At 
the time I perform the INSERT command I need to retrieve the value of 
the serial_id column from the newly created row.


We have an after-insert trigger that raises it as a notice.  NOTICE 
SKEY(xxx)


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Return the primary key of a newly inserted row?

2006-06-22 Thread Kenneth Downs

Tim Allen wrote:



using syntax along the lines of INSERT ... RETURNING ...

SQL Server had a nifty feature here.  You could simply toss a SELECT
statement at the end of a trigger or sproc and the results would be
returned.


This in effect made a table the potential return type of all commands, 
which could be exploited very powerfully.


Do the hackers have any thoughts along those lines?


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Form builder?

2006-06-22 Thread Kenneth Downs

[EMAIL PROTECTED] wrote:

We wrote a system that does exactly that; it's called Andromeda, and it
is GPL.


http://docs.andromeda.com

It uses a data dictionary to do two things: 1) build the database and 2) 
generate HTML maintenance forms.  But it can also have multiple virtual 
sites going into the same database, so in many cases we have the admin 
site which is generated for free and then 1 or more public sites reading 
the same database but intended for anonymous access.  These other sites 
go through a professional design process quite different from the
table-maintenance sites.


It also automatically generates links to child and parent tables based 
on foreign keys, so for a table customers you will see a link 
automatically generated for orders that goes to that customer's orders.


Our largest project technically has about 290+ tables, our most active 
project is a system of about 30 tables in which we've made small hacks 
to a couple of pages to enhance the defaults.


The codebase is extremely small.  The main library is less than 10,000 
lines, easy to walk through and change.


The default interface has been tested on IE and Firefox, and also 
supports keyboard navigation (at least on Firefox). 

The project is running on PHP and currently targets postgres.  We run on 
Linux.  In principle it can run on Windows but we haven't tried.


If you would like to see a running system I can give you an account any 
of our systems under development and you can see it.



I don't want to revisit or be redundant... but is there a quick and
dirty and cross-platform system for developing user input forms for
Postgres?  Ideally, I am interested in something such that you can give
it (it being something like a Python function) a table name,
resulting in a magically appearing input form.  It should be able to
deal with getting the format more or less correct for a datatype, not
displaying system columns, etc, using the system tables to get the
necessary info.

I thought first about hacking xdialog, but they don't have multiple
field forms.  I have looked at wx, but real gui programing is complex
and not general.  Right now I have an ugly hack that writes out a text
file with colon separated lines for each row, with a defaults option so
that you can chain together a set of One-Many forms (eg you enter a
person, then you can cycle through with library books each referencing
that person in a for loop).

I would rather trade ease of use for functionality, if it can get it
90% correct just by the table name and the defaults.  And I refuse to
use XML.  And I want pure Linux.  If I have to develop the darn thing,
of course I am happy to share.

I want to develop an anthropological fieldwork database, but the form
based data entry is important for ease of use.

Cheers.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
 





---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] cpan perl module - plperlu danger?

2006-06-22 Thread Philippe Lang
Hi,

I would like to access a cpan perl module (FSA::Rules) in a perl
procedure, inside Postgresql 8.1.4. FSA::Rules is a library that allows
you to define a finite state machine in perl. No disk access.

In order to to that, I need to use plperlu instead of plperl. And it
works just fine.

What exactly is the danger using a cpan library under plperlu? How can I
make sure it won't crash my system, even when run concurrently by a few
people?

Thanks!

--
Philippe Lang, Ing. Dipl. EPFL
Attik System
rte de la Fonderie 2
1700 Fribourg
Switzerland
http://www.attiksystem.ch

Tel:  +41 (26) 422 13 75
Fax:  +41 (26) 422 13 76  


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] XML Support

2006-06-22 Thread Stephane Bortzmeyer
On Tue, Jun 20, 2006 at 02:17:28PM +0100,
 Essam Mansour [EMAIL PROTECTED] wrote 
 a message of 35 lines which said:

 In my research project, I need a DBMS that supports XML storage and
 retrieval, and provides ECA rule support.

(No idea about ECA.)

 - whether Postgresql DB could be used as XML Repository or not?

Certainly, you can put XML in a TEXT field. Of course, without
XML-oriented retrieval tools, it is not very useful. At least, with
xml2 (see below), you can insert XML and check that it is well-formed.

 - Whether Postgresql DB support the XQuery or SQL/XML or not?

Why not XPath? It is supported in PostgreSQL (see
http://developer.postgresql.org/cvsweb.cgi/pgsql/contrib/xml2/README.xml2)
but I do not know about the performance (I don't know if there is
XML-oriented indexing).


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Missing domain socket after reboot.

2006-06-22 Thread Douglas McNaught
Bill Moseley [EMAIL PROTECTED] writes:

 Hum.  lsof knows about the file.

 $ lsof -p 1512 | grep /var/run
 postmaste 1512 postgres4u  unix 0xf78b5980   1631 
 /var/run/postgresql/.s.PGSQL.5432


 Any ideas what happened to the socket?

Maybe something in your bootup process tried to clean up /var/run and
deleted it after the postmaster had started?

 I had to stop and start the postmaster to get the socket back.

Be interesting to see if you can reproduce it...

-Doug

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Interface Guidance and Opinions Needed

2006-06-22 Thread Douglas McNaught
Chris Golden [EMAIL PROTECTED] writes:

 This is my first time posting to this forum and I am very new to
 PostgreSQL.  I am very excited about using it.  I have set up a
 database and just need a point in the right direction on
 interfacing.  I have an orders and messages database.  One of the
 providers I would like to interface with has sent me an XML spec to
 make HTTPS posts to.  I am just wondering what would be the easiest
 thing to use to implement the two.  I would like to scan my database
 for new orders then do an HTTPS post to the provider.  I would also
 like to periodically check for new orders/messages from the provider
 via an HTTPS post and bring them into my database (a constant
 exchange of information).  I am wondering if something like PHP
 would be a universal solution, or something similar?

This would be a good project for any of the scripting languages (PHP,
Perl, Python, Ruby).  Since you're a beginner it might be best to look
at Python, as it's well-designed for people who are just starting
out.

Probably the simplest approach would be to write two programs, one
that does the database scan and posts new orders via XML; the other
would connect to the provider, get messages via XML and insert them
into the database.  Each one would do its thing until there was no
more work to do, and sleep for a small amount of time before checking
again.

Hope this helps you get started!

-Doug

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Changing array subscripting to zero-offset

2006-06-22 Thread Jorge Godoy
On Sunday, 18 June 2006 at 11:33, Erin Sheldon wrote:
 Hello everyone -

 Array columns are, by default, 1-offset in their
 subscripting.  Since I usually am calling postgres
 from a language with zero-offset, I would prefer
 that postgres conform to that.  The online
 documentation hints that this may be configurable
 but I haven't been able to find how this is done.

 Any help is greatly appreciated,
 Erin Sheldon

You can always add 1 in your calculations, so that you have an offset from 
your programming language from 0 to 1 for the initial position of the array.

I also use a language where zero is the first position of the array.  But if 
the behavior is changed, what do we do about people who use languages where 1 is 
the first position of the array?  I believe that we can't please everyone, 
so... 
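
For what it's worth, an individual array value can carry a non-default lower bound via the extended literal syntax, though this is per-value rather than a global setting:

```sql
-- the explicit [0:2] bounds make 0 the first subscript of this value
SELECT ('[0:2]={10,20,30}'::int[])[0];
```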

-- 
Jorge Godoy  [EMAIL PROTECTED]


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Form builder?

2006-06-22 Thread John DeSoi


On Jun 22, 2006, at 7:38 AM, Kenneth Downs wrote:

We wrote a system that does exactly that, its called Andromeda,  
and it is GPL.


http://docs.andromeda.com



Sounds interesting but this link does not work (apparently no server  
at that address).




John DeSoi, Ph.D.
http://pgedit.com/
Power Tools for PostgreSQL


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Form builder?

2006-06-22 Thread Kenneth Downs

John DeSoi wrote:



On Jun 22, 2006, at 7:38 AM, Kenneth Downs wrote:

We wrote a system that does exactly that, its called Andromeda,   
and it is GPL.


http://docs.andromeda.com



OOPS:

http://docs.secdat.com




Sounds interesting but this link does not work (apparently no  
server  at that address).




John DeSoi, Ph.D.
http://pgedit.com/
Power Tools for PostgreSQL





---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] getting comment about constraint

2006-06-22 Thread Michael Fuhr
On Sun, Jun 18, 2006 at 03:02:05PM -0700, [EMAIL PROTECTED] wrote:
 Given the name of the table and constraint, how would you retrieve the
 comment?  Chapter 9. Functions and Operators says that
 obj_description(object_oid, catalog_name) returns a comment given the
 object's OID and class name.  How do I get the OID for the constraint?
 It doesn't seem to be stored in pg_constraint.

How did you determine that?  Are you aware that oid is a system
column that isn't ordinarily displayed?  The following works in
every version of PostgreSQL I checked (7.3.15, 7.4.13, 8.0.8, 8.1.4,
8.2devel):

SELECT obj_description(oid, 'pg_constraint')
FROM pg_constraint
WHERE conname = 'constraint_name'
  AND conrelid = 'table_name'::regclass;

Replace constraint_name and table_name with the actual names
(uid_alphanumeric and people in the example you gave).
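Plugged into the example from the original post, that becomes (a sketch to run in the database where the table lives):

```sql
-- Fetch the comment on constraint uid_alphanumeric of table people.
SELECT obj_description(oid, 'pg_constraint') AS constraint_comment
FROM pg_constraint
WHERE conname = 'uid_alphanumeric'
  AND conrelid = 'people'::regclass;
```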

-- 
Michael Fuhr

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Missing domain socket after reboot.

2006-06-22 Thread Bill Moseley
On Thu, Jun 22, 2006 at 08:16:05AM -0400, Douglas McNaught wrote:
 Bill Moseley [EMAIL PROTECTED] writes:
 
  Hum.  lsof knows about the file.
 
  $ lsof -p 1512 | grep /var/run
postmaste 1512 postgres    4u  unix 0xf78b5980   1631 
  /var/run/postgresql/.s.PGSQL.5432
 
 
  Any ideas what happened to the socket?
 
 Maybe something in your bootup process tried to clean up /var/run and
 deleted it after the postmaster had started?

That's what I thought, but my quick look couldn't find anything in
the init scripts, not that that's conclusive:

$ fgrep /var/run * | grep rm
apache2:[ -f /var/run/apache2/ssl_scache ]  rm -f 
/var/run/apache2/*ssl_scache*
bootclean.sh:   rm -f /var/run/.clean
bootmisc.sh:rm -f /tmp/.clean /var/run/.clean /var/lock/.clean
portmap:  rm -f /var/run/portmap.upgrade-state
portmap:rm -f /var/run/portmap.state
rsync:  rm -f /var/run/rsync.pid
rsync:  rm -f /var/run/rsync.pid
rsync:  rm -f /var/run/rsync.pid
umountnfs.sh:rm -f /tmp/.clean /var/lock/.clean /var/run/.clean

But maybe postgresql is started too early.

$ ls /etc/rc?.d  | grep postgres | head -1
K20postgresql-8.1
K20postgresql-8.1
S20postgresql-8.1
S20postgresql-8.1
S20postgresql-8.1
S20postgresql-8.1
K20postgresql-8.1


Apache, for example, starts S91.

/etc/rc2.d:
K10atdS20courier-imap  S20mysqld-helper  
S21nfs-common
K10cron   S20courier-imap-ssl  S20netatalk   S21quotarpc
K10syslog-ng  S20courier-mta   S20nfs-kernel-server  
S23ntp-server
S10sysklogd   S20courier-pop   S20ntop   S25mdadm
S11klogd  S20courier-pop-ssl   S20oidentdS30sysctl
S14pppS20darwinss  S20postfixS89cron
S15logicalS20exim4 S20postgresql-8.1 S91apache2
S16mountnfsforlogical.sh  S20grlogcheckS20rmnologin  
S91ifp_httpd
S18atdS20httpd S20rsync  S99jabber
S18portmapS20httpd2S20saslauthd  
S99stop-bootlogd
S19spamassassin   S20inetd S20sshS99ud
S19syslog-ng  S20jabberS20syslog-ng
S20binfmt-support S20makedev   S20sysstat
S20courier-authdaemon S20mysqldS20xmail



 Be interesting to see if you can reproduce it...

Next reboot I'll look again.  It's a production machine so I can't
really bring it up one service at a time.

-- 
Bill Moseley
[EMAIL PROTECTED]


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] cpan perl module - plperlu danger?

2006-06-22 Thread A.M.
On Thu, June 22, 2006 7:41 am, Philippe Lang wrote:
 Hi,


 I would like to access a cpan perl module (FSA::Rules) in a perl
 procedure, inside Postgresql 8.1.4. FSA::Rules is a library that allows you
 to define a finite state machine in perl. No disk access.

 In order to to that, I need to use plperlu instead of plperl. And it
 works just fine.

 What exactly is the danger using a cpan library under plperlu? How can I
 make sure it won't crash my system, even when run concurrently by a few
 people?

You can't be sure, that is what plperl is for. [But even Safe.pm has had
dozens of bugs revealed over the years- caveat emptor.] A workaround is to
create a set of plperlu functions which can be called by other functions
(using security definer, if necessary). This means that you have to wrap
the major functional components in perl.
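
A minimal sketch of that wrapper pattern (the fsm_step names and the trivial plperlu body are hypothetical stand-ins for the real FSA::Rules logic):

```sql
-- Untrusted implementation; only a superuser can create plperlu functions.
CREATE FUNCTION fsm_step_unsafe(state text, input text) RETURNS text
LANGUAGE plperlu AS $$
    my ($state, $input) = @_;
    return "$state/$input";  # stand-in for the FSA::Rules transition
$$;

-- Narrow, audited entry point for ordinary users.
CREATE FUNCTION fsm_step(state text, input text) RETURNS text
LANGUAGE sql SECURITY DEFINER
AS $$ SELECT fsm_step_unsafe($1, $2) $$;

REVOKE ALL ON FUNCTION fsm_step_unsafe(text, text) FROM PUBLIC;
```

Callers then get only fsm_step; the plperlu function itself stays out of reach.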

Another option is to use plperl as a template to create your own
procedural language which includes all the modules you need while still
locking down everything else.

Good luck.

-M


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] simple query terminated by signal 11

2006-06-22 Thread Thomas Chille

Thanks for your tips!


Since it is repeatable in your machine, you can compile a new postgres
version with --enable-cassert (enable assertions in code) and
--enable-debug  (enable gcc debug support) configuration. Then run it on
your data and bt the core dump.


I'll try to find out the reason for that behavior.

For now I could drop the damaged table and restore it from an older
backup, so everything works fine again.

regards,
thomas!

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] sequence in select

2006-06-22 Thread SunWuKung
Hi,
I put the following into a function. Is this always going to give me a
sequence with an increment of 1 independently from other transactions?
If not what would be the standard way to return a sequence from a
query?

CREATE TEMPORARY SEQUENCE rank_seq;
SELECT nextval('rank_seq') FROM whatever ORDER BY id;
DROP SEQUENCE rank_seq;

thx.
Balázs
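
One caveat: in a plain SELECT the target list (and thus nextval()) is evaluated before ORDER BY sorts the rows, so the sequence values are not guaranteed to follow the id order. Pushing the ordering into a subquery avoids that in practice (a sketch using the names from the question):

```sql
CREATE TEMPORARY SEQUENCE rank_seq;
SELECT nextval('rank_seq') AS rank, id
FROM (SELECT id FROM whatever ORDER BY id) AS ordered;
DROP SEQUENCE rank_seq;
```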


---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Download pl/python

2006-06-22 Thread Alejandro Michelin Salomon \( Adinet \)



Hi:

I want to use plpython.
Where can I find and download pl/python for the Windows platform?

Help is appreciated.

Thanks in advance.


--
No virus found in this outgoing message.
Checked by AVG Free Edition.
Version: 7.1.394 / Virus Database: 268.9.2/372 - Release Date: 21/6/2006
 

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] join on next row

2006-06-22 Thread Gurjeet Singh

Thanks for pointing it out. You are right; I forgot to add that...

On 6/20/06, Aaron Evans [EMAIL PROTECTED] wrote:


sorry to nitpick, but I think that to get this query to do exactly
what you want you'll need to add ordering over EventTime on your
sub-selects to ensure that you get the next event and not just some
later event on the given day.

-ae

On Jun 20, 2006, at 11:12 AM, Gurjeet Singh wrote:

 Gurjeet Singh wrote:


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] OLEDB Provider for Postgres

2006-06-22 Thread Richard Broersma Jr
 I desperately needs OLEDB Provider for Postgres. Can any one help me?

Notice the following link:
http://www.postgresql.org/download/

It provides an assortment of useful tools as well as a link to the above 
mentioned OLE-DB
provider.

http://pgfoundry.org/projects/oledb/

Regards,

Richard Broersma Jr.


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] cpan perl module - plperlu danger?

2006-06-22 Thread Randal L. Schwartz
 A == A M [EMAIL PROTECTED] writes:

A  [But even Safe.pm has had
A dozens of bugs revealed over the years- caveat emptor.]

Eeeh?  Proof please? That's just FUD-raking.

From what I recall, there have been a few clever leakages that have been fixed
rather rapidly.

"few" ne "dozens".

The main problem with Safe is that it's at the wrong granularity (per opcode,
not per semantic operation).

But let's not be throwing the baby out with the bathwater... Safe is 99.97% of
the way there.

-- 
Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
merlyn@stonehenge.com URL:http://www.stonehenge.com/merlyn/
Perl/Unix/security consulting, Technical writing, Comedy, etc. etc.
See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Out of memory error in 8.1.0 Win32

2006-06-22 Thread Relyea, Mike
So what's my next step?  How do I track down what is causing this
problem? 

-Original Message-
From: Qingqing Zhou [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, June 21, 2006 11:01 PM
To: Relyea, Mike
Cc: pgsql-general@postgresql.org; Tom Lane
Subject: RE: [GENERAL] Out of memory error in 8.1.0 Win32 



On Wed, 21 Jun 2006, Relyea, Mike wrote:

 ExecutorState: 550339936 total in 123 blocks; 195003544 free (740135
 chunks); 355336392 used
 HashBatchContext: 293593176 total in 44 blocks; 3107384 free (80
 chunks); 290485792 used
 TIDBitmap: 2088960 total in 8 blocks; 924720 free (27 chunks); 1164240
 used
 HashBatchContext: 41877664 total in 11 blocks; 4845648 free (15
chunks);
 37032016 used
 TIDBitmap: 2088960 total in 8 blocks; 1012120 free (27 chunks);
1076840
 used

The same problem. ExecutorState uses much more memory than we expect --
but not sure where they are from :-(

Regards,
Qingqing

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Form builder?

2006-06-22 Thread Tony Caduto

[EMAIL PROTECTED] wrote:

I don't want to revisit or be redundant... but is there a quick and
dirty and cross-platform system for developing user input forms for
Postgres?  

Don't know if this has already been mentioned, but how about Rekall?
http://www.rekallrevealed.org/kbExec.py#

You could also use Lazarus.
http://www.lazarus.freepascal.org/


--
Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for Postgresql
Your best bet for Postgresql Administration 



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Unable to start pg service

2006-06-22 Thread Tom Lane
Vladimir Leban [EMAIL PROTECTED] writes:
 I'm curently in version 7.3

 2006-06-19 10:31:57 PANIC:  XLogWrite: write request 0/17B2000 is past
 end of log 0/17B2000

Update to 7.3.latest --- IIRC we fixed that in 7.3.4.  You should be able
to update in-place to a newer 7.3.* postmaster without losing any data.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] htonl necesssary for PQexecParams input?

2006-06-22 Thread Kevin Jenkins
Previously I was getting a bad result when calling PQexecParams with 
binary results because PostgreSQL stores its data big endian.  So I 
had to do ntohl on the result to get it to little endian.


My question is, do I also need to do htonl then, as in this scenario?

outStr[0]=blah;
outLengths[0]=htonl((int)strlen(blah));
formats[0]=1;
PQexecParams(pgConn, query,1,0,outStr,outLengths,formats,1);

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] VACUUM hanging on idle system

2006-06-22 Thread Tom Lane
Clarence [EMAIL PROTECTED] writes:
 I have a completely idle postgresql system (all backends idle, none
 in transaction); every time I attempt to vacuum a particular table,
 it hangs after a while.
 ...
 In the system error log there is this line:
could not write block 32756 of relation 1663/134097/385649401:
No space left on device
 The referenced block is in the table's index file. The disk has plenty
 of space.

... but at one time not so much, right?

 Someone tried to drop the index and got this:
 ERROR:  FlushRelationBuffers(idx_ld1, 0): block 32756 is referenced
 (private 0, global 1)

What it looks like to me is that some backend tried to write out a dirty
page, failed for lack of disk space, and neglected to release the buffer
pin during error recovery.  The extra pin would block VACUUM but not
much of anything else.  A postmaster restart should clear the problem.

I seem to recall having fixed a bug like this in the past.  What PG
version are you running?

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] best way to get PKey and FKey from inside applications (permission pb)

2006-06-22 Thread zzzeek
sandro -

SQL2003 fixes these issues with information_schema but it seems that
postgres has not caught up yet:
http://groups.google.com/group/pgsql.interfaces/browse_thread/thread/9f19995849b3cdf4/c20b81bf8cf183af?lnk=st&q=information+schema&rnum=9&hl=en#c20b81bf8cf183af

although i am running only 8.0.4 here.  if 8.1 still has this problem
(and its documentation for information_schema still seems to indicate
the views show constraints owned by the current user rather than
accessible), then we might as well use PG's system tables until this
is resolved.  can you submit a patch to postgres.py that uses the PG
system tables ?

Sandro Dentella wrote:
 Hi all,

 I started using sqlalchemy (python ORM) that works really well but fails
 detecting Primary and Foreign key other than for owner or
 superusers. Sqlalchemy queries the information schema with the following query
 that returns nothing if you are connected as a user with no particular
 privileges:

 SELECT
   table_constraints.constraint_name ,
   table_constraints.constraint_type ,
   table_constraints.table_name ,
   key_column_usage.table_schema ,
   key_column_usage.table_name ,
   key_column_usage.column_name ,
   key_column_usage.constraint_name ,
   constraint_column_usage.table_schema ,
   constraint_column_usage.table_name ,
   constraint_column_usage.column_name ,
   constraint_column_usage.constraint_name

 FROM information_schema.table_constraints
   JOIN information_schema.constraint_column_usage ON
constraint_column_usage.constraint_name = table_constraints.constraint_name
   JOIN  information_schema.key_column_usage ON
key_column_usage.constraint_name = constraint_column_usage.constraint_name

 WHERE table_constraints.table_name = 'my_table'
   AND table_constraints.table_schema = 'public' ;

 If you use '\d mytable' you get these information correctly so that there's
 no reason to deny the same info from information_schema, correct?

 Looking at how '\d' returns the information I always used a different query
 (look here http://www.sqlalchemy.org/trac/ticket/71) that is not using
 information_schema, but sqlalchemy would prefere to stick to the more
 standard information_schema.  What would you guys suggest in this case?

 Thanks in advance
 sandro
 *:-)


 --
 Sandro Dentella  *:-)
 e-mail: [EMAIL PROTECTED]
 http://www.tksql.org    TkSQL Home page - My GPL work

 ---(end of broadcast)---
 TIP 4: Have you searched our list archives?
 
http://archives.postgresql.org


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] htonl necesssary for PQexecParams input?

2006-06-22 Thread Michael Fuhr
On Thu, Jun 22, 2006 at 08:52:04AM -0700, Kevin Jenkins wrote:
 Previously I was getting a bad result when calling PQexecParams with 
 binary results because PostgreSQL stores its data big endian.  So I 
 had to do ntohl on the result to get it to little endian.

Clarification: PostgreSQL stores data in host byte order but returns
it in network byte order if you request binary format.

 My question is, do I also need to do htonl then, as in this scenario?
 
 outStr[0]=blah;
 outLengths[0]=htonl((int)strlen(blah));
 formats[0]=1;
 PQexecParams(pgConn, query,1,0,outStr,outLengths,formats,1);

Only the data needs byte order conversion; if you convert lengths
then you'll probably get a database error or segmentation fault
(assuming you're on a machine where host and network byte order
differ).  I tested the above and PQexecParams failed with

  ERROR:  invalid byte sequence for encoding SQL_ASCII: 0x00

A packet sniff of the connection showed a large amount of data being
transferred (0x04000000 bytes instead of 0x00000004 bytes), so the
length needs to be in host byte order.

-- 
Michael Fuhr

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] htonl necesssary for PQexecParams input?

2006-06-22 Thread Tom Lane
Michael Fuhr [EMAIL PROTECTED] writes:
 ... the length needs to be in host byte order.

Right.  The only place where a libpq client would be dealing with
network-byte-order data is within binary-format values for data fields
(in SELECT results) or binary-format parameters (when transmitting to
the database).  Metadata like field lengths is just in the local native
representation.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Out of memory error in 8.1.0 Win32

2006-06-22 Thread Todd A. Cook

Hi,

I am consistently running into out-of-memory issues in 8.1.4 running on
RHEL3 and 8.0.5 on RHEL4.  The logs show entries like this:

AggContext: -2130714624 total in 271 blocks; 9688 free (269 chunks); 
-2130724312 used
TupleHashTable: 893902872 total in 119 blocks; 1088688 free (449 chunks); 
892814184 used

which looks mighty suspicious to me. :-;  I can provide a self-contained
test case if anyone wants to look at it.

-- todd


Relyea, Mike wrote:

So what's my next step?  How do I track down what is causing this
problem? 


-Original Message-
From: Qingqing Zhou [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, June 21, 2006 11:01 PM

To: Relyea, Mike
Cc: pgsql-general@postgresql.org; Tom Lane
Subject: RE: [GENERAL] Out of memory error in 8.1.0 Win32 




On Wed, 21 Jun 2006, Relyea, Mike wrote:


ExecutorState: 550339936 total in 123 blocks; 195003544 free (740135
chunks); 355336392 used
HashBatchContext: 293593176 total in 44 blocks; 3107384 free (80
chunks); 290485792 used
TIDBitmap: 2088960 total in 8 blocks; 924720 free (27 chunks); 1164240
used
HashBatchContext: 41877664 total in 11 blocks; 4845648 free (15

chunks);

37032016 used
TIDBitmap: 2088960 total in 8 blocks; 1012120 free (27 chunks);

1076840

used


The same problem. ExecutorState uses much more memory than we expect --
but not sure where they are from :-(

Regards,
Qingqing

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match




---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] auto-vacuum Negative anl Values

2006-06-22 Thread Dylan Hansen
Greetings all,

I have been spending some time looking into how auto-vacuum is
performing on one of our servers.  After putting the PostgreSQL logs in
debug I noticed that the threshold for ANALYZE was never being hit for a
particular table because the calculated value becomes increasingly
negative.

We have an entry in the pg_autovacuum table for a table that has quite
frequent inserts and updates, but not deletes.  Here are the values:

-[ RECORD 1 ]----+------
vacrelid         | #
enabled          | t
vac_base_thresh  | 500
vac_scale_factor | 0.1
anl_base_thresh  | 200
anl_scale_factor | 0.05
vac_cost_delay   | -1
vac_cost_limit   | -1

I've noticed that the threshold for ANALYZE never gets met because the
threshold for VACUUM is hit first, therefore resetting the counters.
Here is a snippet of the log that shows what's happening:

DEBUG:  tablename: vac: 961 (threshold 14217), anl: -9756 (threshold 7058)
DEBUG:  tablename: vac: 1924 (threshold 14217), anl: -8792 (threshold 7058)
DEBUG:  tablename: vac: 2953 (threshold 14217), anl: -7763 (threshold 7058)
DEBUG:  tablename: vac: 3998 (threshold 14217), anl: -6718 (threshold 7058)
DEBUG:  tablename: vac: 5170 (threshold 14217), anl: -5546 (threshold 7058)
DEBUG:  tablename: vac: 6405 (threshold 14217), anl: -4311 (threshold 7058)
DEBUG:  tablename: vac: 7635 (threshold 14217), anl: -3081 (threshold 7058)
DEBUG:  tablename: vac: 8818 (threshold 14217), anl: -1898 (threshold 7058)
DEBUG:  tablename: vac: 9917 (threshold 14217), anl: -798 (threshold 7058)
DEBUG:  tablename: vac: 10987 (threshold 14217), anl: 272 (threshold 7058)
DEBUG:  tablename: vac: 12016 (threshold 14217), anl: 1301 (threshold 7058)
DEBUG:  tablename: vac: 12929 (threshold 14217), anl: 2214 (threshold 7058)
DEBUG:  tablename: vac: 13717 (threshold 14217), anl: 3002 (threshold 7058)
DEBUG:  tablename: vac: 14441 (threshold 14217), anl: 3727 (threshold 7058)
...vacuum threshold is hit...
DEBUG:  tablename: vac: 752 (threshold 14217), anl: -9962 (threshold 7058)
DEBUG:  tablename: vac: 1491 (threshold 14217), anl: -9223 (threshold 7058)
DEBUG:  tablename: vac: 2213 (threshold 14217), anl: -8501 (threshold 7058)
DEBUG:  tablename: vac: 2984 (threshold 14217), anl: -7730 (threshold 7058)

The outcome of this is that an ANALYZE is never run, as after the
counters are reset for each VACUUM the counter for ANALYZE gets
increasingly larger.  But as you can see from our entries in
pg_autovacuum above, an ANALYZE should occur much more frequently than a
VACUUM.

We're running PostgreSQL 8.1.4 on RHEL-3 with a 2.4.21-27.0.2.ELsmp
kernel.

Input is appreciated to explain exactly what is happening here.  Thanks
so much!

--
Dylan Hansen
Enterprise Systems Developer

Re: [GENERAL] best way to get PKey and FKey from inside applications (permission pb)

2006-06-22 Thread Tom Lane
[EMAIL PROTECTED] writes:
 SQL2003 fixes these issues with information_schema but it seems that
 postgres has not caught up yet:
 http://groups.google.com/group/pgsql.interfaces/browse_thread/thread/9f19995849b3cdf4/c20b81bf8cf183af?lnk=st&q=information+schema&rnum=9&hl=en#c20b81bf8cf183af

I believe we've updated to the SQL2003 definitions for PG 8.2.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Out of memory error in 8.1.0 Win32

2006-06-22 Thread Tom Lane
Todd A. Cook [EMAIL PROTECTED] writes:
 I am consistently running into out-of-memory issues in 8.1.4 running on
 RHEL3 and 8.0.5 on RHEL4.  The logs show entries like this:

 AggContext: -2130714624 total in 271 blocks; 9688 free (269 chunks); 
 -2130724312 used
 TupleHashTable: 893902872 total in 119 blocks; 1088688 free (449 chunks); 
 892814184 used

Misestimated hash aggregation, perhaps?  What is the query and what does
EXPLAIN show for it?  What have you got work_mem set to?

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Out of memory error in 8.1.0 Win32

2006-06-22 Thread Todd A. Cook

Tom Lane wrote:


Misestimated hash aggregation, perhaps?  What is the query and what does
EXPLAIN show for it?  What have you got work_mem set to?


oom_test= \d oom_tab
Table public.oom_tab
 Column |  Type   | Modifiers
+-+---
 val| integer |

oom_test= explain select val,count(*) from oom_tab group by val;
   QUERY PLAN
-
 HashAggregate  (cost=1163446.13..1163448.63 rows=200 width=4)
   ->  Seq Scan on oom_tab  (cost=0.00..867748.42 rows=59139542 width=4)

The row estimate for oom_tab is close to the actual value.  Most of
the values are unique, however, so the result should have around 59M
rows too.

I've tried it with work_mem set to 32M, 512M, 1G, and 2G.  It fails in
all cases, but it hits the failure point quicker with work_mem=32M.

-- todd

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] auto-vacuum Negative anl Values

2006-06-22 Thread Tom Lane
Dylan Hansen [EMAIL PROTECTED] writes:
 I have been spending some time looking into how auto-vacuum is  
 performing on one of our servers.  After putting the PostgreSQL logs  
 in debug I noticed that the threshold for ANALYZE was never being hit  
 for a particular table because the calculated value becomes  
 increasingly negative.

Hmm, it shouldn't ever be negative at all, I would think.  The
calculation in question is

anltuples = tabentry->n_live_tuples + tabentry->n_dead_tuples -
tabentry->last_anl_tuples;

Apparently somehow last_anl_tuples has managed to get to be bigger than
n_live_tuples, which maybe could happen after a delete.  Should we be
clamping last_anl_tuples to not exceed n_live_tuples somewhere?
Alvaro and Matthew, what do you think?

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] OT: publicly available databases?

2006-06-22 Thread Andrew Gould
I just finished migrating US county level census data
into a PostgreSQL database; and thought I'd save
others the trouble of doing the same.  (I've been
laid-off and am trying to stay busy.)  The gzipped,
dump file is approximately 9.5MB.

Is there a place online where people share data?  I
thought about offering it to techdocs or pgfoundry;
but it's neither documentation, nor an application, so
I didn't think it would be appropriate.

If there is no such repository; but you would like
more information, please contact me off-list.

Thanks,

Andrew

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Out of memory error in 8.1.0 Win32

2006-06-22 Thread Tom Lane
Todd A. Cook [EMAIL PROTECTED] writes:
 oom_test= explain select val,count(*) from oom_tab group by val;
 QUERY PLAN
 -
   HashAggregate  (cost=1163446.13..1163448.63 rows=200 width=4)
 ->  Seq Scan on oom_tab  (cost=0.00..867748.42 rows=59139542 width=4)

 The row estimate for oom_tab is close to the actual value.  Most of
 the values are unique, however, so the result should have around 59M
 rows too.

Well, that's the problem right there :-(.  Have you ANALYZEd this table?
I think 200 is the default estimate for number of groups in the absence
of any ANALYZE stats, but it should surely not be that far off if it's
got real stats to play with.

If you need to make the query not fail without stats, you could set
enable_hashagg false, but I wouldn't recommend that as a production
choice (unless you set it just for this one query).
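
Scoped to just the one query, that suggestion might look like (a sketch):

```sql
BEGIN;
SET LOCAL enable_hashagg = off;  -- planner falls back to a sort-based GroupAggregate
SELECT val, count(*) FROM oom_tab GROUP BY val;
COMMIT;
```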

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Out of memory error in 8.1.0 Win32

2006-06-22 Thread Greg Stark
Todd A. Cook [EMAIL PROTECTED] writes:

Todd A. Cook [EMAIL PROTECTED] writes:

 QUERY PLAN
 -
   HashAggregate  (cost=1163446.13..1163448.63 rows=200 width=4)
 ->  Seq Scan on oom_tab  (cost=0.00..867748.42 rows=59139542 width=4)
 
 The row estimate for oom_tab is close to the actual value.  Most of
 the values are unique, however, so the result should have around 59M
 rows too.

ouch. it's estimating 200 distinct values. The estimates for distinct values
are known to be unreliable but they shouldn't be *that* bad. Do you have a few
hundred extremely common values and then a few million other values?

What does this say:

select * from pg_statistic where starelid = (select oid from pg_class where 
relname = 'oom_tab')

You may need to reanalyze and maybe increase the statistics target
(possibly by a lot). It may be interesting to compare the results of the above
query before and after analyzing too.

-- 
greg


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] OT: publicly available databases?

2006-06-22 Thread Michael Fuhr
On Thu, Jun 22, 2006 at 11:52:03AM -0700, Andrew Gould wrote:
 Is there a place online where people share data?  I
 thought about offering it to techdocs or pgfoundry;
 but it's neither documentation, nor an application, so
 I didn't think it would be appropriate.

PgFoundry has a Sample Databases project:

http://pgfoundry.org/projects/dbsamples/

-- 
Michael Fuhr

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Out of memory error in 8.1.0 Win32

2006-06-22 Thread Todd A. Cook

Tom Lane wrote:

Todd A. Cook [EMAIL PROTECTED] writes:

oom_test= explain select val,count(*) from oom_tab group by val;
QUERY PLAN
-
  HashAggregate  (cost=1163446.13..1163448.63 rows=200 width=4)
    ->  Seq Scan on oom_tab  (cost=0.00..867748.42 rows=59139542 width=4)



The row estimate for oom_tab is close to the actual value.  Most of
the values are unique, however, so the result should have around 59M
rows too.


Well, that's the problem right there :-(.  Have you ANALYZEd this table?


My production table and query are more complex.  In the original, the
query above was in a sub-select; the work-around was to create a temp
table with the sub-query results, analyze it, and then do the larger
query based off of the temp table.
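
That workaround might be sketched as follows (a hypothetical reconstruction; the real sub-query and outer query are more complex):

```sql
-- Materialize the sub-select results...
CREATE TEMP TABLE sub_results AS
SELECT val, count(*) AS n
FROM oom_tab
GROUP BY val;

-- ...give the planner real row and distinct-value statistics...
ANALYZE sub_results;

-- ...then run the larger query against the temp table (stand-in query).
SELECT n, count(*)
FROM sub_results
GROUP BY n;
```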

There have been off and on discussions on the pg lists about out of
memory issues (see 
http://archives.postgresql.org/pgsql-bugs/2006-03/msg00102.php).
I was just offering my test case as an example in case it might be of
any use in tracking those problems down. :)

-- todd


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Out of memory error in 8.1.0 Win32

2006-06-22 Thread Relyea, Mike
I've zipped the results of EXPLAIN INSERT INTO tblSummary SELECT *
FROM qrySummary; for my case.  It's a zip file that I've renamed to
.txt in order to get around the attachment being blocked by certain mail
servers.

Re: [GENERAL] Out of memory error in 8.1.0 Win32

2006-06-22 Thread Tom Lane
Todd A. Cook [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 Well, that's the problem right there :-(.  Have you ANALYZEd this table?

 My production table and query are more complex.  In the original, the
 query above was in a sub-select; the work-around was to create a temp
 table with the sub-query results, analyze it, and then do the larger
 query based off of the temp table.

Hmm.  One of the things that's on my TODO list is to make the planner
smarter about drilling down into sub-selects to extract statistics.
I think that's what's called for here, but your example has eliminated
all the interesting details.  Can you show us the actual query, its
EXPLAIN plan, and the definitions of the tables/views involved?

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] sequence in select

2006-06-22 Thread Jim Nasby

On Jun 22, 2006, at 9:10 AM, SunWuKung wrote:

Hi,
I put the following into a function. Is this always going to give me a
sequence with an increment of 1, independently of other transactions?
If not, what would be the standard way to return a sequence from a
query?

CREATE TEMPORARY SEQUENCE rank_seq;
SELECT nextval('rank_seq') FROM whatever ORDER BY id;
DROP SEQUENCE rank_seq;


That will do what you want.
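Because the temporary sequence lives only in the creating session, each
session gets its own rank_seq and other transactions cannot interleave
values into it. A minimal sketch of the intended numbering effect, using
Python with sqlite3 standing in for PostgreSQL (the table contents are
invented for illustration; enumerate() plays the role of nextval):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE whatever (id INTEGER PRIMARY KEY, name TEXT);
INSERT INTO whatever (id, name) VALUES (10, 'c'), (3, 'a'), (7, 'b');
""")

# nextval('rank_seq') hands out 1, 2, 3, ... in the order rows are
# produced, so numbering rows ordered by id yields a dense, gap-free
# rank over id.
ranked = [(rank, row_id)
          for rank, (row_id,) in enumerate(
              con.execute("SELECT id FROM whatever ORDER BY id"), start=1)]
print(ranked)   # [(1, 3), (2, 7), (3, 10)]
```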
--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Out of memory error in 8.1.0 Win32

2006-06-22 Thread Tom Lane
Relyea, Mike [EMAIL PROTECTED] writes:
 I've zipped the results of EXPLAIN INSERT INTO tblSummary SELECT *
 FROM qrySummary; for my case.  It's a zip file that I've renamed to
 .txt in order to get around the attachment being blocked by certain mail
 servers.

Egad, what a mess :-(.  By my count you have 89 hash joins, 24 sorts,
and 8 hash aggregations in there.  In total these will feel authorized
to use 121 times work_mem.  Since you've got work_mem set to 256 meg,
an out-of-memory condition doesn't seem that surprising.  You need to
make work_mem drastically smaller for this query.  Or else break it down
into multiple steps.
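A back-of-the-envelope check of the numbers above, assuming each
memory-hungry plan node may claim a full work_mem:

```python
# 89 hash joins + 24 sorts + 8 hash aggregations, each entitled to
# use up to work_mem on its own.
nodes = 89 + 24 + 8
work_mem_mb = 256

total_mb = nodes * work_mem_mb
print(nodes, "nodes x", work_mem_mb, "MB =", total_mb / 1024.0, "GB")
# 121 nodes x 256 MB = 30.25 GB
```

Roughly 30 GB of potential memory use easily explains an out-of-memory
failure on a 2006-era Win32 box.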

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Out of memory error in 8.1.0 Win32

2006-06-22 Thread Jim Nasby

On Jun 22, 2006, at 2:23 PM, Tom Lane wrote:

Relyea, Mike [EMAIL PROTECTED] writes:

I've zipped the results of EXPLAIN INSERT INTO tblSummary SELECT *
FROM qrySummary; for my case.  It's a zip file that I've renamed to
.txt in order to get around the attachment being blocked by certain mail
servers.


Egad, what a mess :-(.  By my count you have 89 hash joins, 24 sorts,
and 8 hash aggregations in there.  In total these will feel authorized
to use 121 times work_mem.  Since you've got work_mem set to 256 meg,
an out-of-memory condition doesn't seem that surprising.  You need to
make work_mem drastically smaller for this query.  Or else break it down
into multiple steps.


Except won't the sorts pull in all data from their underlying node  
before proceeding, which should free the memory from those underlying  
nodes? If so, it looks like it's not nearly as bad, only taking about  
20x work_mem (which of course still isn't great...)

--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Adding foreign key constraints without integrity

2006-06-22 Thread Jim Nasby

On Jun 21, 2006, at 8:38 AM, Wes wrote:
Maybe you could ask at some postgresql support companies how much effort
it would be to add a without check flag to alter table add constraint
foreign key, and how much they'd charge for it...


Or if I get ambitious, dig into the code myself if I can figure out
where to start...


If nothing else, you should bring it up on -hackers and ask to have
this added as a TODO. It seems like a worthwhile addition to
pg_dump/restore to me...


To answer another of your emails in this thread... a LAZY vacuum of  
the entire database will read every table and index in the database.  
Wanting to read the entire database to check for corruption is no  
reason to do a VACUUM FULL. Also, if you're curious about restore  
time for your upgrade, you should be doing the restore to an 8.1.4  
database, not to your current version. There's been a ton of  
performance improvements made. In fact, I'm wondering if constraint  
checking in restore has been improved...

--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] OT: publicly available databases?

2006-06-22 Thread Chris Browne
[EMAIL PROTECTED] (Andrew Gould) writes:
 I just finished migrating US county level census data
 into a PostgreSQL database; and thought I'd save
 others the trouble of doing the same.  (I've been
 laid-off and am trying to stay busy.)  The gzipped,
 dump file is approximately 9.5MB.

 Is there a place online where people share data?  I
 thought about offering it to techdocs or pgfoundry;
 but it's neither documentation, nor an application, so
 I didn't think it would be appropriate.

 If there is no such repository; but you would like
 more information, please contact me off-list.

There is a relevant project at pgFoundry:
  http://pgfoundry.org/projects/dbsamples/

Having interesting databases of interesting size is definitely a
useful thing.  Good for plenty of purposes.

BTW, the MySQL folk created a sample database called Sakila
http://www.openwin.org/mike/index.php/archives/2006/04/sakila-08/,
loosely based on the Dell DVD application.  They licensed under BSDL
specifically to allow it to be widely used for samples in books and
magazine articles and such...

Robert Treat has done a PostgreSQL port, which is at pgFoundry, part
of that project, called Pagila.
-- 
output = reverse(gro.gultn @ enworbbc)
http://cbbrowne.com/info/nonrdbms.html
I tell my students to think of Steele's book as the Oxford English
Dictionary and Norvig's as the complete works of Shakespeare.
-- Prof. Wendy Lenhert (Massachusetts)

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] How to build with bigger WAL segment file?

2006-06-22 Thread Jim Nasby

On Jun 19, 2006, at 4:10 PM, Clodoaldo Pinto wrote:

2006/6/19, Clodoaldo Pinto [EMAIL PROTECTED]:
Since I have a database with a frequent and huge update batch, the WAL
segment file number is about 130. I suspect the management of these files
during the update is hurting performance. Is it likely? I'm using
Fedora Core 3 and checkpoint_segments is set to 64.

To overcome this I'm considering making the WAL segment files bigger.

The segment file default size is 16 MB. How do I build from the source
rpm to have bigger files, say 128 MB?


I found it in src/include/pg_config_manual.h:


BTW, someone (Mark Wong?) recently posted some performance testing  
results from upping XLOG_SEG_SIZE... take a look in the archives.

--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] aggregate of bitstrings

2006-06-22 Thread TJ O'Donnell

AHA!  I hadn't encountered any null values in my bitstrings.
and having to include 'where xxx is not null' is rather
inconvenient and easy to forget.

indeed bitor(B'1000',null) returns null

but as a test, i nulled one row's data (the bitstring column portion only)
and my orsum without INITCOND returned the proper OR of the remaining values
when including all rows in the aggregate.  it did not return null.

maybe the aggregator (whoever,whatever that is) handles null args differently,
not calling the func when the arg is null?

pg8.1.3

TJ


Florian G. Pflug wrote:

TJ O'Donnell wrote:


create aggregate or_bit64(
   basetype=bit,
   sfunc=bitor,
   stype=bit,
   initcond=''
) ;

I'm using this in production, and it works fine. I didn't find a way to
make this length-agnostic, so I defined this for all lengths of
bitstrings that my app uses (32 and 64).


greetings, Florian Pflug



I've created a similar aggregate using:
CREATE AGGREGATE gnova.orsum (
BASETYPE = bit,
SFUNC = bitor,
STYPE = bit
);
Notice, not using INITCOND allows bit of any length.  While it may be
poor programming practice to not initialize, the docs say:
If it is not supplied then the state value starts out null.
which is good enough for this old programmer.   AND it works :)


The problem was, as far as I remember, that bitor returns NULL if any
of its arguments is null. So not specifying an INITCOND makes the
aggregate work for any length, but always returns null then...

greetings, Florian Pflug


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] OT: publicly available databases?

2006-06-22 Thread Joshua D. Drake

Andrew Gould wrote:

I just finished migrating US county level census data
into a PostgreSQL database; and thought I'd save
others the trouble of doing the same.  (I've been
laid-off and am trying to stay busy.)  The gzipped,
dump file is approximately 9.5MB.

Is there a place online where people share data?  I
thought about offering it to techdocs or pgfoundry;
but it's neither documentation, nor an application, so
I didn't think it would be appropriate.

If there is no such repository; but you would like
more information, please contact me off-list.



I would appreciate the data, but to answer your question... there is a
sample databases project on pgFoundry which I think this would be
perfect for.


Joshua D. Drake



Thanks,

Andrew

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster




--

   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
   Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] auto-vacuum Negative anl Values

2006-06-22 Thread Alvaro Herrera
Tom Lane wrote:
 Dylan Hansen [EMAIL PROTECTED] writes:
  I have been spending some time looking into how auto-vacuum is  
  performing on one of our servers.  After putting the PostgreSQL logs  
  in debug I noticed that the threshold for ANALYZE was never being hit  
  for a particular table because the calculated value becomes  
  increasingly negative.
 
 Hmm, it shouldn't ever be negative at all, I would think.  The
 calculation in question is
 
  anltuples = tabentry->n_live_tuples + tabentry->n_dead_tuples -
  tabentry->last_anl_tuples;
 
 Apparently somehow last_anl_tuples has managed to get to be bigger than
 n_live_tuples, which maybe could happen after a delete.  Should we be
 clamping last_anl_tuples to not exceed n_live_tuples somewhere?
 Alvaro and Matthew, what do you think?

Hmm ... I'd think that the number of dead tuples plus live tuples should
never be smaller than the number of tuples seen at last analyze.  Unless
some stats messages are lost (say, stop the server, start with stats
disabled, do a big DELETE, stop, restart normally).

I think there's a bug elsewhere.  On pgstat_recv_tabstat, I see

tabentry->n_live_tuples = tabmsg[i].t_tuples_inserted;
tabentry->n_dead_tuples = tabmsg[i].t_tuples_updated +
tabmsg[i].t_tuples_deleted;

But really n_live_tuples should be decreased by t_tuples_deleted as
well, so this should read

tabentry->n_live_tuples = tabmsg[i].t_tuples_inserted -
tabmsg[i].t_tuples_deleted;
tabentry->n_dead_tuples = tabmsg[i].t_tuples_updated +
tabmsg[i].t_tuples_deleted;

On the other hand I don't see how this would explain the problem Dylan
is seeing, because the effect would be the inverse -- anltuples would be
erroneously calculated too high, so it wouldn't become negative.

I think we should catch a negative anltuples value, log a WARNING, and
force an analyze, because it's indicative of a bug.


I'm interested in what Dylan can tell about the activity over this
table?  What does pg_stat_user_tables say about it?

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] sql question; checks if data already exists before inserted

2006-06-22 Thread Guy Rouillier
nuno wrote:
 hi, there. i'm trying to write a SQL statement which does the
 following things. 
 
 1. checks if data already exists in the database
 2. if not, insert data into database otherwise skip.
 
 for example, i'd like to insert a student called 'Michael Jordan'
 whose ID is 'JORDANMICHAEL' only if the id, 'JORDANMICHAEL' does not
 already exist in the database. anyway, my query looks like...  
 
 insert into student (studentid, fname, lname)
 select 'JORDANMICHAEL', 'Michale', 'Jordan' from student where
 studentid not in (select studentid from student); 
 
 however, this does not seem to work. it does not insert data even if
 it does not exist in the database. hmm! 
 
 any clue?

Your query is not doing what you think it's doing.  Try running just the
select portion of the query you provide above (minus the insert into
part.)  You are selecting from student where studentid not in (select
studentid from student).  That will always return the empty set, since
you are looking at the same column and the same table in both the inner
and outer selects.

What you want to do is create a primary key on your student table.  Then
try to do the insert, catching the not unique error return.
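The broken NOT IN subquery can also be fixed by testing the candidate id
rather than re-scanning the table against itself. A sketch of that fix,
using Python's sqlite3 in place of PostgreSQL (table layout copied from
the question; the same INSERT ... SELECT ... WHERE NOT EXISTS statement
works in PostgreSQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE student (studentid TEXT PRIMARY KEY, fname TEXT, lname TEXT)")

def add_student(sid, fname, lname):
    # Insert only when sid is absent; the subquery tests the candidate
    # id instead of comparing the table to itself.
    con.execute("""
        INSERT INTO student (studentid, fname, lname)
        SELECT ?, ?, ?
        WHERE NOT EXISTS (SELECT 1 FROM student WHERE studentid = ?)
    """, (sid, fname, lname, sid))

add_student('JORDANMICHAEL', 'Michael', 'Jordan')
add_student('JORDANMICHAEL', 'Michael', 'Jordan')   # second call is a no-op
count = con.execute("SELECT count(*) FROM student").fetchone()[0]
print(count)   # 1
```

Note this does not replace the primary key: under concurrency two
sessions can still race, so catching the unique-violation error remains
the robust approach.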

-- 
Guy Rouillier


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] aggregate of bitstrings

2006-06-22 Thread Joe Conway

TJ O'Donnell wrote:


maybe the aggregator (whoever,whatever that is) handles null args 
differently,

not calling the func when the arg is null?


see:

http://www.postgresql.org/docs/8.1/interactive/sql-createaggregate.html

specifically:

If the state transition function is declared strict, then it cannot 
be called with null inputs. With such a transition function, aggregate 
execution behaves as follows. Null input values are ignored (the 
function is not called and the previous state value is retained). If the 
initial state value is null, then the first nonnull input value replaces 
the state value, and the transition function is invoked beginning with 
the second nonnull input value. This is handy for implementing 
aggregates like max. Note that this behavior is only available when 
state_data_type is the same as input_data_type. When these types are 
different, you must supply a nonnull initial condition or use a 
nonstrict transition function.


If the state transition function is not strict, then it will be called 
unconditionally at each input value, and must deal with null inputs and 
null transition values for itself. This allows the aggregate author to 
have full control over the aggregate's handling of null values.


If the final function is declared strict, then it will not be called 
when the ending state value is null; instead a null result will be 
returned automatically. (Of course this is just the normal behavior of 
strict functions.) In any case the final function has the option of 
returning a null value. For example, the final function for avg returns 
null when it sees there were zero input rows.
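The skip-nulls behaviour quoted above is easy to model outside the
database. A rough Python emulation (not PostgreSQL's actual code) of
aggregation with a strict transition function and a null initial state:

```python
def strict_agg(values, sfunc, initcond=None):
    """Emulate aggregation with a STRICT transition function."""
    state = initcond
    for v in values:
        if v is None:
            continue                    # strict sfunc never sees NULL input
        # with a null state, the first non-null input becomes the state
        state = v if state is None else sfunc(state, v)
    return state

bitor = lambda a, b: a | b

# NULL inputs are ignored, matching TJ's observation:
print(strict_agg([0b1000, None, 0b0001], bitor))  # 9  (== 0b1001)
# With no input rows at all, the result is NULL:
print(strict_agg([], bitor))                      # None
```

This is why TJ's orsum without INITCOND still returned the proper OR:
the aggregator, not bitor itself, discards the null inputs.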


HTH,

Joe

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] auto-vacuum Negative anl Values

2006-06-22 Thread Tom Lane
Alvaro Herrera [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 Apparently somehow last_anl_tuples has managed to get to be bigger than
 n_live_tuples, which maybe could happen after a delete.  Should we be
 clamping last_anl_tuples to not exceed n_live_tuples somewhere?
 Alvaro and Matthew, what do you think?

 Hmm ... I'd think that the number of dead tuples plus live tuples should
 never be smaller than the number of tuples seen at last analyze.

The scenario I was imagining was big delete followed by
VACUUM-without-ANALYZE.  In this situation, it looks to me like
pgstat_recv_vacuum updates n_live_tuples to the new smaller value
and doesn't do anything to last_anl_tuples.  I'm thinking we need

tabentry->n_live_tuples = msg->m_tuples;
tabentry->n_dead_tuples = 0;
if (msg->m_analyze)
{
tabentry->last_anl_tuples = msg->m_tuples;
if (msg->m_autovacuum)
tabentry->autovac_analyze_timestamp = msg->m_vacuumtime;
else
tabentry->analyze_timestamp = msg->m_vacuumtime;
}
+   else
+   {
+   /* last_anl_tuples must never exceed n_live_tuples */
+   tabentry->last_anl_tuples = Min(tabentry->last_anl_tuples,
+   msg->m_tuples);
+   }
}

but perhaps I'm confused.
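A toy model of the scenario and the proposed clamp (field names are
simplified from the real pgstat structs; this only illustrates why the
Min() keeps the ANALYZE counter non-negative):

```python
def recv_vacuum(tab, m_tuples, analyzed):
    # Simplified sketch of pgstat_recv_vacuum with the proposed clamp.
    tab["n_live_tuples"] = m_tuples
    tab["n_dead_tuples"] = 0
    if analyzed:
        tab["last_anl_tuples"] = m_tuples
    else:
        # last_anl_tuples must never exceed n_live_tuples
        tab["last_anl_tuples"] = min(tab["last_anl_tuples"], m_tuples)

tab = {"n_live_tuples": 1000, "n_dead_tuples": 0, "last_anl_tuples": 1000}
# Big DELETE followed by VACUUM without ANALYZE: only 100 tuples remain.
recv_vacuum(tab, 100, analyzed=False)

anltuples = (tab["n_live_tuples"] + tab["n_dead_tuples"]
             - tab["last_anl_tuples"])
print(anltuples)   # 0 -- without the clamp it would be 100 - 1000 = -900
```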

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] auto-vacuum Negative anl Values

2006-06-22 Thread Dylan Hansen
Hi Tom, Alvaro.  Thanks so much for your replies.

On 22-Jun-06, at 12:56 PM, Alvaro Herrera wrote:

Hmm ... I'd think that the number of dead tuples plus live tuples should
never be smaller than the number of tuples seen at last analyze.  Unless
some stats messages are lost (say, stop the server, start with stats
disabled, do a big DELETE, stop, restart normally).


We had to restart the server to upgrade to 8.1.4 when it was released, but
since then we have not restarted our server.  We haven't changed our stats
gathering recently, although we've made some changes to auto-vacuum to do
it more frequently (outlined in my first e-mail).  Unfortunately, I do not
have any data to supply previous to the upgrade as we have just recently
changed our logging to be more verbose.

I'm interested in what Dylan can tell about the activity over this
table?  What does pg_stat_user_tables say about it?


Here's the pg_stat_user_tables entry for this table:

-[ RECORD 1 ]-+--
relid         | #
schemaname    | public
relname       | tablename
seq_scan      | 345
seq_tup_read  | 46699390
idx_scan      | 33731562
idx_tup_fetch | 171166937
n_tup_ins     | 3359
n_tup_upd     | 8205786
n_tup_del     | 0

As you can see, the number of deleted tuples is 0, so I don't believe that
it could be an issue with DELETEs on the table because there aren't any!

It has to be something that occurs when the VACUUM threshold is hit,
because it resets both the VACUUM and ANALYZE counters.  Yesterday we
tried running the ANALYZE command and it did reset the counter to a
non-negative value.  However, the next time that the VACUUM threshold was
reached, the ANALYZE counter was decreased to a negative number.

Thanks for looking into this, please let me know if there's any
information I can provide!

--
Dylan Hansen
Enterprise Systems Developer

[GENERAL] autovacuum template0

2006-06-22 Thread Jacob Coby
I've got a pg 8.1.1 server running autovacuum, and it started attempting 
to vacuum template0.


2006-06-22 15:00:50 EDT LOG:  autovacuum: processing database template0
2006-06-22 15:00:50 EDT ERROR:  could not access status of transaction 
3242180415
2006-06-22 15:00:50 EDT DETAIL:  could not open file pg_clog/0C13: No 
such file or directory


I found a thread about the same problem at 
http://archives.postgresql.org/pgsql-hackers/2006-03/msg01294.php, but 
it doesn't say if the resolution works.  Does anyone have experience 
with this?  I'm at a loss as to what I should do.


Thanks,
--
Jacob Coby


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Out of memory error in 8.1.0 Win32

2006-06-22 Thread Todd A. Cook

Tom Lane wrote:


Hmm.  One of the things that's on my TODO list is to make the planner
smarter about drilling down into sub-selects to extract statistics.
I think that's what's called for here, but your example has eliminated
all the interesting details.  Can you show us the actual query, its
EXPLAIN plan, and the definitions of the tables/views involved?


I've already altered the queries that ran into this problem.  I'll
dig the old ones out of CVS and send them to you tomorrow.

-- todd

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Out of memory error in 8.1.0 Win32

2006-06-22 Thread Relyea, Mike
Thanks Jim and Tom.  At least now I've got a direction to head in.  I
think for now I'll probably reduce work_mem as a stop-gap measure to get
the query running again.  This will buy me some time to redesign it.
I'll probably separate out each sub query and store the results in a
table (would a temp table be a good solution here?) before I pull it all
together with the final query.




 Egad, what a mess :-(.  By my count you have 89 hash joins, 24 sorts,
 and 8 hash aggregations in there.  In total these will feel authorized
 to use 121 times work_mem.  Since you've got work_mem set to 256 meg,
 an out-of-memory condition doesn't seem that surprising.  You need to
 make work_mem drastically smaller for this query.  Or else break it down
 into multiple steps.
Except won't the sorts pull in all data from their underlying node  
before proceeding, which should free the memory from those underlying  
nodes? If so, it looks like it's not nearly as bad, only taking about  
20x work_mem (which of course still isn't great...)
--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] How to optimize PostgreSQL database size

2006-06-22 Thread Tom Lane
Jim Nasby [EMAIL PROTECTED] writes:
 On Jun 18, 2006, at 10:47 PM, Qingqing Zhou wrote:
 After dump/restore the database size is 1685 MB and after
 vacuum-full/reindex is 1990 MB. Where we saved 305 MB?

 My guess would be due to toasted text columns...

No, it's probably got more to do with the fact that VACUUM FULL stops
trying to compact a table once it finds a row that it can't move down.
There can be quite a lot of empty space left in the table, if one of the
last rows is significantly wider than the average for the table.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] autovacuum template0

2006-06-22 Thread Tom Lane
Jacob Coby [EMAIL PROTECTED] writes:
 I've got a pg 8.1.1 server running autovacuum, and it started attempting 
 to vacuum template0.

 2006-06-22 15:00:50 EDT LOG:  autovacuum: processing database template0
 2006-06-22 15:00:50 EDT ERROR:  could not access status of transaction 
 3242180415

template0 should surely not contain any such transaction number.  Either
there's something you're not telling us about the history of this
installation, or template0 contains corrupt data, or ???.

What does the pg_database row for template0 contain?  If you clone
template0 (CREATE DATABASE foo WITH TEMPLATE template0), does a vacuum
in the new database succeed?

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] autovacuum template0

2006-06-22 Thread Jacob Coby

Tom Lane wrote:

Jacob Coby [EMAIL PROTECTED] writes:
I've got a pg 8.1.1 server running autovacuum, and it started attempting 
to vacuum template0.



2006-06-22 15:00:50 EDT LOG:  autovacuum: processing database template0
2006-06-22 15:00:50 EDT ERROR:  could not access status of transaction 
3242180415


template0 should surely not contain any such transaction number.  Either
there's something you're not telling us about the history of this
installation, or template0 contains corrupt data, or ???.


database has been running for ~6 months now.  no history of crashing, no 
history of issues.  this problem started on the 10th of this month.




What does the pg_database row for template0 contain?  If you clone
template0 (CREATE DATABASE foo WITH TEMPLATE template0), does a vacuum
in the new database succeed?


create database works.  vacuum verbose does have 1 error, which is 
probably very helpful:


INFO:  vacuuming pg_catalog.pg_statistic
ERROR:  could not access status of transaction 3242180415
DETAIL:  could not open file pg_clog/0C13: No such file or directory

I then tried:
foo=# select * from pg_catalog.pg_statistic ;
ERROR:  could not access status of transaction 3242180415
DETAIL:  could not open file pg_clog/0C13: No such file or directory

so it seems that pg_catalog.pg_statistic has somehow become corrupt on 
template0?


--
Jacob Coby


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] autovacuum template0

2006-06-22 Thread Tom Lane
Jacob Coby [EMAIL PROTECTED] writes:
 I then tried:
 foo=# select * from pg_catalog.pg_statistic ;
 ERROR:  could not access status of transaction 3242180415
 DETAIL:  could not open file pg_clog/0C13: No such file or directory

 so it seems that pg_catalog.pg_statistic has somehow become corrupt on 
 template0?

Yeah, that's what it looks like.  If you're interested in digging into
why, it'd be productive to run pg_filedump -i -f on the table
(see http://sources.redhat.com/rhdb/ for that utility).

If you just want to get out of the problem, it's fortunately not hard
because pg_statistic is all derived data.  Go into template0, TRUNCATE
pg_statistic, and then VACUUM ANALYZE to regenerate it and VACUUM FREEZE
to re-freeze template0.  (You'll need to fool with
pg_database.datallowconn to let yourself into template0.  Read the
manual's bit about template databases if you aren't sure what you're
doing here.)

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] aggregate of bitstrings

2006-06-22 Thread Florian G. Pflug

TJ O'Donnell wrote:

AHA!  I hadn't encountered any null values in my bitstrings.
and having to include 'where xxx is not null' is rather
inconvenient and easy to forget.

indeed bitor(B'1000',null) returns null

but as a test, i nulled one row's data (the bitstring column portion only)
and my orsum without INITCOND returned the proper OR of the remaining 
values

when including all rows in the aggregate.  it did not return null.

Hm.. strange.. I could have sworn that I added the initcond to fix some
null-related issue...
Maybe it was the aggregate returning null when there were no rows
to aggregate.. I'll test this again; I'd actually love to get rid of
that initcond.

greetings, Florian Pflug

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Copy from text file

2006-06-22 Thread Ricardo Vaz
Hi,
 
Is it possible to copy data from a text file (CSV), ignoring some columns
of this text file?
Thanks,

Ricardo Vaz
Tribunal de Contas do Estado de São Paulo
Diretoria de Tecnologia
Rua Venceslau Brás, 183 - 2º andar
(11) 3292-3266 ramal 3640
[EMAIL PROTECTED]



-- 
This message was checked by an antivirus system and
 is believed to be free of danger.



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Copy from text file

2006-06-22 Thread Ricardo Vaz

Hi,


Is it possible to copy data from a text file (CSV), ignoring some columns of 
this text file?


Thanks,


Ricardo Vaz

Tribunal de Contas do Estado de São Paulo
Diretoria de Tecnologia
[EMAIL PROTECTED]


--
This message was checked by an antivirus system and
is believed to be free of danger.



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] Copy from text file

2006-06-22 Thread Ricardo Vaz
Hi,
 
Is it possible to copy data from a text file (CSV), ignoring some columns
of this text file?
Thanks,
 
Ricardo Vaz
Tribunal de Contas do Estado de São Paulo
Diretoria de Tecnologia
Rua Venceslau Brás, 183 - 2º andar
(11) 3292-3266 ramal 3640
[EMAIL PROTECTED] 


-- 
This message was checked by an antivirus system and
 is believed to be free of danger.



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] join on next row

2006-06-22 Thread Merlin Moncure

On 20 Jun 2006 18:20:55 +0200, Harald Fuchs [EMAIL PROTECTED] wrote:

In article [EMAIL PROTECTED],
Sim Zacks [EMAIL PROTECTED] writes:
 To get this result set it would have to be an inner join on employee
 and date where the second event time is greater then the first. But I
 don't want the all of the records with a greater time, just the first
 event after.

You can filter the others out by an OUTER JOIN:

  SELECT e1.Employee, e1.EventDate,
 e1.EventTime, e1.EventType,
 e2.EventTime, e2.EventType
  FROM events e1
  JOIN events e2 ON e2.Employee = e1.Employee
AND e2.EventDate = e1.EventDate
AND e2.EventTime > e1.EventTime
  LEFT JOIN events e3 ON e3.Employee = e1.Employee
 AND e3.EventDate = e1.EventDate
 AND e3.EventTime > e1.EventTime
 AND e3.EventTime < e2.EventTime
  WHERE e3.EventID IS NULL
  ORDER BY e1.EventDate, e1.EventTime


This will only give the correct answer if the next event is on the
same day, which does not match the problem as stated.  The actual
answer is more complex than it looks (in < pg 8.2).  In pg 8.2 you
can write:

SELECT e1.Employee, e1.EventDate,
  e1.EventTime, e1.EventType,
  e2.EventTime, e2.EventType
FROM events e1
JOIN events e2 ON
  (e2.Employee, e2.EventDate, e2.EventTime) >
  (e1.Employee, e1.EventDate, e1.EventTime)
LEFT JOIN events e3 ON
  (e3.Employee, e3.EventDate, e3.EventTime) >
  (e1.Employee, e1.EventDate, e1.EventTime) AND
  e3.EventTime < e2.EventTime
WHERE e3.EventID IS NULL
ORDER BY e1.EventDate, e1.EventTime

If you only want answers that match the same date as the selected
event, Harald's answer is correct.  To get the correct answer in 8.1
and down you must write a monster of a SQL statement ;)

merlin

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Copy from text file

2006-06-22 Thread Chris

Ricardo Vaz wrote:

Hi,


Is it possible to copy data from a text file (CSV) while ignoring some
columns of the file?


(How come this was posted 3 times?)

I don't think so but you could load it into a temporary table and then:

insert into new_table select col1, col2 from temp_table;
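As a concrete sketch of that staging-table approach (table, column, and file names here are hypothetical), it could look like this; server-side COPY needs superuser rights, so from psql the client-side \copy equivalent may be more practical:

```sql
-- Staging table matching the CSV layout (every column present in the file)
CREATE TEMP TABLE staging (id integer, name text, email text);

-- Load the whole file; COPY supports CSV format from 8.0 on
COPY staging FROM '/tmp/data.csv' WITH CSV;

-- Keep only the columns the real table needs
INSERT INTO new_table (id, name)
SELECT id, name FROM staging;
```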


--
Postgresql & php tutorials
http://www.designmagick.com/

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Copy from text file

2006-06-22 Thread Geoffrey

Chris wrote:

Ricardo Vaz wrote:

Hi,


Is it possible to copy data from a text file (CSV) while ignoring some
columns of the file?


Assuming you're on a Unix box, you could easily use awk or perl to parse 
your csv and create one that contains only the columns you want.
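A minimal awk sketch of that idea (file name and column positions are made up for illustration): keep only the first and third comma-separated fields. Note this naive split breaks on quoted fields containing embedded commas.

```shell
# Sample input standing in for the real CSV
printf 'a,b,c\nd,e,f\n' > input.csv

# Keep fields 1 and 3, drop field 2
awk -F, '{print $1 "," $3}' input.csv > filtered.csv

cat filtered.csv
# a,c
# d,f
```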


--
Until later, Geoffrey

Any society that would give up a little liberty to gain a little
security will deserve neither and lose both.  - Benjamin Franklin

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Copy from text file

2006-06-22 Thread Ron Johnson

Geoffrey wrote:
 Chris wrote:
 Ricardo Vaz wrote:
 Hi,
 
 
 Is it possible to copy data from a text file (CSV) ignoring
 some columns of the file?
 
 Assuming you're on a Unix box, you could easily use awk or perl
 to parse your csv and create one that contains only the columns
 you want.

And probably pipe the results directly into the copy command,
bypassing any intermediary steps.
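Sketched as a pipeline (table, column, and file names are hypothetical; the psql step is shown commented out since it needs a running server):

```shell
# Hypothetical 4-column CSV; the target table only wants columns 1 and 2
printf '1,ana,ana@example.com,30\n2,bo,bo@example.com,25\n' > people.csv

# Filter and feed COPY in one shot -- no intermediate file:
#   awk -F, '{print $1 "," $2}' people.csv | \
#       psql -d mydb -c "COPY people (id, name) FROM STDIN WITH CSV"

# The filtering half of the pipeline on its own:
awk -F, '{print $1 "," $2}' people.csv
# 1,ana
# 2,bo
```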

--
Ron Johnson, Jr.
Jefferson LA  USA

Is common sense really valid?
For example, it is common sense to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that common sense is obviously wrong.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Copy from text file

2006-06-22 Thread Rich Shepard

On Thu, 22 Jun 2006, Ron Johnson wrote:


Assuming you're on a Unix box, you could easily use awk or perl to parse
your csv and create one that contains only the columns you want.


And probably pipe the results directly into the copy command, bypassing any
intermediary steps.


  Well, why not use the 'cut' command with 'cat' feeding it? Redirect the
output to a new file.

Rich

--
Richard B. Shepard, Ph.D.   |The Environmental Permitting
Applied Ecosystem Services, Inc.(TM)|Accelerator
http://www.appl-ecosys.com Voice: 503-667-4517  Fax: 503-667-8863

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Out of memory error in 8.1.0 Win32

2006-06-22 Thread Jim Nasby

On Jun 22, 2006, at 4:02 PM, Relyea, Mike wrote:

Thanks Jim and Tom.  At least now I've got a direction to head in.  I
think for now I'll probably reduce work_mem as a stop-gap measure to
get the query running again.  This will buy me some time to redesign
it.  I'll probably separate out each sub-query and store the results
in a table (would a temp table be a good solution here?) before I
pull it all together with the final query.


Yes, it would. It's also possible that you could structure the query  
better, to reduce the amount of concurrent sorting/hashing going on.

--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Idea for vacuuming

2006-06-22 Thread Jim Nasby

On Jun 22, 2006, at 7:12 PM, Joseph Shraibman wrote:
I'm running an 8.0 database.  I have a very large log table that is
rarely updated or deleted from.  The nightly vacuum does not know
this, and spends a lot of time on it and all its indexes.


My RFE: When vacuuming a table, pg should try to vacuum the primary
key first.  If that results in 0 recovered entries, then assume the
table has no updates/deletes and skip the rest of that table.  I'm
picking the primary key here, but any index that indexes each row
of the table will do.  Maybe it should just pick the smallest index
that indexes each row of the table.


*shrug* It's kinda hard to get excited about that when running  
autovacuum (or pg_autovacuum in the case of 8.0) would be a much  
better solution.
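For 8.1 and up, the built-in daemon is switched on in postgresql.conf (a minimal sketch; on 8.0 the equivalent is running the contrib pg_autovacuum program instead). The row-level stats settings are required, since autovacuum relies on them to see table activity:

```
# postgresql.conf (8.1+)
stats_start_collector = on
stats_row_level = on
autovacuum = on
```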

--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] minimizing downtime when upgrading

2006-06-22 Thread Jim Nasby

On Jun 21, 2006, at 7:42 AM, H.J. Sanders wrote:
The last 15 years we also used Informix, and we never, never had to
unload/load the database because of an upgrade.

Perhaps somebody knows how they do the trick?


Do they provide a migration/upgrade utility?
--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster