Re: [GENERAL] Postgres 8.2 binary for ubuntu 6.10?

2007-07-10 Thread Hannes Dorbath

On 10.07.2007 03:09, novnov wrote:

I have postgres 8.1 installed on ubuntu 6.10 via synaptic package manager. I
would like to install 8.2, but it's not offered in the list. I think 8.2 is
offered on 7.x ubuntu, and I wonder if 8.2 will be offered on 6.10? Probably
the recommendation will be to compile 8.2 on 6.10, but I've only used binary
installations on linux and am not eager to open up another can of worms.


I might be missing something, but why can't you just grab the package 
from sites like this:


http://rpmseek.com/rpm-pl/postgresql-8.2.html?hl=de&cx=824%3AP%3A0%3A0%3A0%3A0%3A0&qDnStr=109&qArStr=1&qRtStr=2

and install it?


--
Regards,
Hannes Dorbath



[GENERAL] pgpass.conf

2007-07-10 Thread Ashish Karalkar
Hello All,

I am trying to run a script to create a database from a batch program and don't
want to supply the password every time.
So I tried to set up a pgpass.conf file.
The file is kept in the user profile's Application Data directory, i.e.
C:\Documents and Settings\postgres\Application Data\postgresql\pgpass.conf

file contains:

localhost:5432 :*:postgres:mypass
localhost:5432:qsweb:qsweb:mypass1

Still the batch asks for the password!
I just don't get why it's not reading the password from the pgpass file.

Can anyone figure out what is going wrong?

Thanks in Advance
Ashish...









Re : [GENERAL] Postgres 8.2 binary for ubuntu 6.10?

2007-07-10 Thread Laurent ROCHE
Hi,

I am not moving from 6.10 to anything else for now.
Ubuntu 6.10 LTS is Long Term Support. So for a server that's what I want:
everything working better and better (via updates) and no major changes!
Always chasing the latest version is definitely asking for trouble.
I don't need the latest version of all the desktop apps and libraries. But it's
true that it would be nice to be able to get the latest version of PG.

For pgAdmin, we had to compile it ourselves.

Hannes' recommendation (in another post) of using rpmseek.com (I did not know
that one either) seems good.

 
Cheers,
[EMAIL PROTECTED]
The Computing Froggy

- Original Message -
From: novnov <[EMAIL PROTECTED]>
To: pgsql-general@postgresql.org
Sent: Tuesday, 10 July 2007, 3:09:45
Subject: [GENERAL] Postgres 8.2 binary for ubuntu 6.10?


I have postgres 8.1 installed on ubuntu 6.10 via synaptic package manager. I
would like to install 8.2, but it's not offered in the list. I think 8.2 is
offered on 7.x ubuntu, and I wonder if 8.2 will be offered on 6.10? Probably
the recommendation will be to compile 8.2 on 6.10, but I've only used binary
installations on linux and am not eager to open up another can of worms.

In a way I would like to move to the later Ubuntu distro. But I'm concerned
that since my eventual deployment may be on something fairly barebones like
CentOS, it might be good to stay with the somewhat less polished version of
Ubuntu... not sure if there is any value in this approach, but it's hard to
make good choices when one has so little exposure to what the various
options mean. 
-- 
View this message in context: 
http://www.nabble.com/Postgres-8.2-binary-for-ubuntu-6.10--tf4053274.html#a11513301
Sent from the PostgreSQL - general mailing list archive at Nabble.com.







  



Re: [GENERAL] plpgsql equivalent to plperl $_SHARED and plpythonu global dictionary GD?

2007-07-10 Thread hubert depesz lubaczewski

On 7/9/07, Zlatko Matic <[EMAIL PROTECTED]> wrote:


Does plpgsql have something equivalent to plperl's $_SHARED or plpythonu's
global dictionary GD?



No, but you can use a table to emulate this, or a temp table.
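
For illustration, a minimal sketch of that idea in plpgsql (the table and
function names here are invented, not from the original post):

CREATE TEMP TABLE shared_vars (
    name  text PRIMARY KEY,
    value text
);

CREATE OR REPLACE FUNCTION set_shared(p_name text, p_value text) RETURNS void AS $$
BEGIN
    -- update the key if it exists, otherwise insert it
    UPDATE shared_vars SET value = p_value WHERE name = p_name;
    IF NOT FOUND THEN
        INSERT INTO shared_vars (name, value) VALUES (p_name, p_value);
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION get_shared(p_name text) RETURNS text AS $$
    SELECT value FROM shared_vars WHERE name = p_name;
$$ LANGUAGE sql;

SELECT set_shared('counter', '42');
SELECT get_shared('counter');  -- '42', visible to any function in this session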

depesz

--
http://www.depesz.com/ - nowy, lepszy depesz


Re: [GENERAL] Postgres 8.2 binary for ubuntu 6.10?

2007-07-10 Thread Jan Muszynski
On 10 Jul 2007 at 9:13, Hannes Dorbath wrote:

On 10.07.2007 03:09, novnov wrote:
> I have postgres 8.1 installed on ubuntu 6.10 via synaptic package manager. I
> would like to install 8.2, but it's not offered in the list. I think 8.2 is
> offered on 7.x ubuntu, and I wonder if 8.2 will be offered on 6.10? Probably
> the recommendation will be to compile 8.2 on 6.10, but I've only used binary
> installations on linux and am not eager to open up another can of worms.

Use ubuntu backports. Add:

deb http://archive.ubuntu.com/ubuntu edgy-backports main restricted 
universe multiverse

to /etc/apt/sources.list (by using sudo gedit /etc/apt/sources.list)

The newer postgres versions will be available (I think 8.2.4 is there at 
the moment). For more info check out the ubuntu backports forum at:
http://ubuntuforums.org/forumdisplay.php?f=47

HTH



Re: Re : [GENERAL] Postgres 8.2 binary for ubuntu 6.10?

2007-07-10 Thread Mario Guenterberg
On Tue, Jul 10, 2007 at 01:13:07AM -0700, Laurent ROCHE wrote:
> Hi,
> 
> I am not moving from 6.10 to anything else for now.
> Ubuntu 6.10 LTS is Long Term Support. So for a server that's what I want:
> everything working better and better (via updates) and no major changes!
> Always chasing the latest version is definitely asking for trouble.
> I don't need the latest version of all the desktop apps and libraries. But
> it's true that it would be nice to be able to get the latest version of PG.
> 
> For pgAdmin, we had to compile it ourselves.
> 
> Hannes' recommendation (in another post) of using rpmseek.com (I did not know
> that one either) seems good.
> 

Ubuntu 6.10 is not an LTS release; you mean Ubuntu 6.06.
For 6.10 you can install PostgreSQL 8.2.4 from the backports
repository. It works fine.

Greetings
Mario

-- 
 -
| havelsoft.com - Ihr Service Partner für Open Source |
| Tel:  033876-21 966 |
| Notruf: 0173-277 33 60  |
| http://www.havelsoft.com|
| |
| Inhaber: Mario Günterberg   |
| Mützlitzer Strasse 19   |
| 14715 Märkisch Luch |
 -




Re: [GENERAL] Postgres 8.2 binary for ubuntu 6.10?

2007-07-10 Thread Dimitri Fontaine
On Tuesday 10 July 2007, novnov wrote:
> I have postgres 8.1 installed on ubuntu 6.10 via synaptic package manager. I
> would like to install 8.2, but it's not offered in the list. I think 8.2 is
> offered on 7.x ubuntu, and I wonder if 8.2 will be offered on 6.10?
> Probably the recommendation will be to compile 8.2 on 6.10, but I've only
> used binary installations on linux and am not eager to open up another can
> of worms.

You can backport PostgreSQL on debian platforms quite easily, as related here:
  http://archives.postgresql.org/pgsql-general/2007-05/msg00427.php

But using the backports as given in another post is still easier, and provides
automatic upgrades of minor versions. I just thought you might be interested in
how to easily recompile a PostgreSQL package on your system. Think about
replacing the deb-src sid target with the Ubuntu feisty (or newer, I don't know) one.

Regards,
-- 
dim




Re: [GENERAL] pgpass.conf

2007-07-10 Thread Raymond O'Donnell

On 10/07/2007 08:47, Ashish Karalkar wrote:


Still the batch asks for the password!
I just don't get why it's not reading the password from the pgpass file.


Probably a silly question, but if you're using the createdb utility in 
the batch file, have you inadvertently included the -W option? - this 
forces a password prompt.


Ray.


---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---



Re: [GENERAL] Vacuum issue

2007-07-10 Thread Dimitri Fontaine
On Monday 9 July 2007, Gregory Stark wrote:
> The output of vacuum verbose can be hard to interpret, if you want help
> adjusting the fsm settings send it here.

Using pgFouine, one gets easy-to-read reports:
  http://pgfouine.projects.postgresql.org/vacuum.html
  http://pgfouine.projects.postgresql.org/reports/sample_vacuum.html

Regards,
-- 
dim




Re: [GENERAL] pgpass.conf

2007-07-10 Thread Ashish Karalkar


- Original Message - 
From: "Raymond O'Donnell" <[EMAIL PROTECTED]>

To: "Ashish Karalkar" <[EMAIL PROTECTED]>
Cc: 
Sent: Tuesday, July 10, 2007 3:51 PM
Subject: Re: [GENERAL] pgpass.conf



On 10/07/2007 08:47, Ashish Karalkar wrote:


Still the batch asks for the password!
I just don't get why it's not reading the password from the pgpass file.


Probably a silly question, but if you're using the createdb utility in the 
batch file, have you inadvertently included the -W option? - this forces a 
password prompt.


Ray.


---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---


Thanks Ray for your reply.

No, I haven't included the -W option.

I have set this up successfully on Red Hat Linux but I am stuck on Windows XP
Professional.


Is there anything else to do?

Thanks in advance
Ashish...




Re: [GENERAL] pgpass.conf

2007-07-10 Thread Raymond O'Donnell

On 10/07/2007 11:28, Ashish Karalkar wrote:

I have set this up successfully on Red Hat Linux but I am stuck on Windows
XP Professional.


Is there any other thing to do?


I'm not a guru, but maybe it's a permissions problem on the pgpass file?

Ray.


---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---



Re: [GENERAL] pgpass.conf

2007-07-10 Thread Dave Page
Ashish Karalkar wrote:
> Hello All,
>  
> I am trying to run a script to create a database from a batch program
> and don't want to supply the password every time.
> So I tried to set up a pgpass.conf file.
> The file is kept in the user profile's Application Data directory, i.e.
> C:\Documents and Settings\postgres\Application Data\postgresql\pgpass.conf
>  
> file contains:
>  
> localhost:5432 :*:postgres:mypass
> localhost:5432:qsweb:qsweb:mypass1
>  
> Still the batch asks for the password!
> I just don't get why it's not reading the password from the pgpass file.
>
> Can anyone figure out what is going wrong?

Under what user account is the batch file being run? The pgpass.conf
file needs to be under *that* user account, which is not necessarily the
one that the postgresql server runs under.

Regards, Dave



Re: [GENERAL] pgpass.conf

2007-07-10 Thread Ashish Karalkar


- Original Message - 
From: "Dave Page" <[EMAIL PROTECTED]>

To: "Ashish Karalkar" <[EMAIL PROTECTED]>
Cc: 
Sent: Tuesday, July 10, 2007 4:25 PM
Subject: Re: [GENERAL] pgpass.conf



Ashish Karalkar wrote:

Hello All,

I am trying to run a script to create a database from a batch program
and don't want to supply the password every time.
So I tried to set up a pgpass.conf file.
The file is kept in the user profile's Application Data directory, i.e.
C:\Documents and Settings\postgres\Application Data\postgresql\pgpass.conf


file contains:

localhost:5432 :*:postgres:mypass
localhost:5432:qsweb:qsweb:mypass1

Still the batch asks for the password!
I just don't get why it's not reading the password from the pgpass file.

Can anyone figure out what is going wrong?


Under what user account is the batch file being run? The pgpass.conf
file needs to be under *that* user account, which is not necessarily the
one that the postgresql server runs under.

Regards, Dave


The batch file is run under the postgres user, and the owner of the pgpass.conf
file is also postgres.
As far as I know, permission checking is not done on Windows; anyway the owner
is the same, so I don't think there is any permission problem.


From the PG documentation:
"The permissions on .pgpass must disallow any access to world or group; 
achieve this by the command chmod 0600 ~/.pgpass. If the permissions are 
less strict than this, the file will be ignored. (The file permissions are 
not currently checked on Microsoft Windows, however.)"






Re: [GENERAL] pgpass.conf

2007-07-10 Thread Magnus Hagander
On Tue, Jul 10, 2007 at 04:34:56PM +0530, Ashish Karalkar wrote:
> >>Hello All,
> >>
> >>I am trying tu run a script to create database from a batch programme
> >>and dont want to supply password everytime.
> >>So i tried to setup pgpass.conf file.
> >>File is kept in user profile/application data
> >>i.e
> >>C:\Documents and Settings\postgres\Application 
> >>Data\postgresql\pgpass.conf
> >>
> >>file contains:
> >>
> >>localhost:5432 :*:postgres:mypass
> >>localhost:5432:qsweb:qsweb:mypass1
> >>
> >>Still the batch asks for the password.!!!
> >> I am just not getting why its not reading password from pgpass file.
> >>
> >>can anyone please figure out what is going wrong.
> >
> >Under what user account is the batch file being run? The pgpass.conf
> >file needs to be under *that* user account, which is not necessarily the
> >one that the postgresql server runs under.
> >
> >Regards, Dave
> 
> The batch file is run under postgres user, also owner of the pgpass.conf 
> file is postgres.
> As far as my knowledge the permission checking is not done on windows 
> anyways the owner is same so i dont think there is any problem of permission
> 
> from pg documents:
> "The permissions on .pgpass must disallow any access to world or group; 
> achieve this by the command chmod 0600 ~/.pgpass. If the permissions are 
> less strict than this, the file will be ignored. (The file permissions are 
> not currently checked on Microsoft Windows, however.)"

That part is correct. But it's of course necessary that the user can *read*
the file, in order to get its contents. But there is no check whether others
can (on Windows).

So just to be sure of that,  I suggest you try logging in as the user in
question and making sure you can read the file from that account.

//Magnus



Re: [GENERAL] pgpass.conf

2007-07-10 Thread Dave Page
Ashish Karalkar wrote:
> The batch file is run under the postgres user, and the owner of the pgpass.conf
> file is also postgres.
> As far as I know, permission checking is not done on Windows; anyway the owner
> is the same, so I don't think there is any permission problem.
> 

OK - have you tried 127.0.0.1 instead of localhost in the pgpass file?

> "The permissions on .pgpass must disallow any access to world or group;
> achieve this by the command chmod 0600 ~/.pgpass. If the permissions are
> less strict than this, the file will be ignored. (The file permissions
> are not currently checked on Microsoft Windows, however.)"
> 

That's referring to the checks we do on *nix to ensure the file is
secure enough, rather than whether or not the client program can read it
which is what Rod was suggesting I think. The Windows ACL model is
sufficiently more complex that doing that check is far harder than it is
on *nix, and the Application Data directory should be secure anyway
(unless you're using FAT32, but then there's no helping you anyway :-) ).

Regards, Dave





Re: [GENERAL] catalog location

2007-07-10 Thread John DeSoi


On Jul 7, 2007, at 8:16 AM, Carmen Martinez wrote:

Please, I need to know where the catalog tables (pg_class,
pg_attrdef...) are located in the postgresql rdbms, because I cannot
see them in the pgAdmin II interface like other tables or
objects, and I cannot find any reference about where this
information is. Any help would be welcome.


These are located in the pg_catalog schema. pgAdmin III (don't know  
about II) can show them if you turn on the "System Objects" menu item  
in the View menu. The tables are documented in the PostgreSQL manual.
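
As a quick illustration, you can also list them yourself with plain SQL
(works from psql or the pgAdmin query tool):

-- list the system catalog tables; they all live in the pg_catalog schema
SELECT c.relname
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'pg_catalog'
  AND c.relkind = 'r'
ORDER BY c.relname;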




Also I'd like to know if these catalog classes are owned by the
superuser (the one I defined when I installed), or whether all users can
access and perhaps modify them.


All users can see them. You should not modify them directly.




John DeSoi, Ph.D.
http://pgedit.com/
Power Tools for PostgreSQL




[GENERAL] vacuumdb: PANIC: corrupted item pointer

2007-07-10 Thread Alain Peyrat



  Hello,

  System: Red Hat Linux 4 64bits running postgres-7.4.16 (production)

  Initial problem:

  # pg_dump -O dbname -Ft -f /tmp/database.tar
pg_dump: query to get table columns failed: ERROR:  invalid memory  
alloc request size 9000688640


  After some research, it seems to be related to corruption of the
database. Running a vacuum crashes the db:


-bash-3.00$ vacuumdb -z -a
vacuumdb: vacuuming database "template1"
vacuumdb: vacuuming of database "template1" failed: PANIC:  corrupted  
item pointer: offset = 3336, size = 20

server closed the connection unexpectedly
    This probably means the server terminated abnormally
    before or while processing the request.
-bash-3.00$

  Can someone help me track down the problem?

  Can I recover the data (the last backup failed)?

  Best regards,

  Alain.


Re: [GENERAL] russian case-insensitive regexp search not working

2007-07-10 Thread Karsten Hilbert
On Tue, Jul 10, 2007 at 08:40:24AM +0400, alexander lunyov wrote:

>> Just to clarify: lower() on both sides of a comparison
>> should still work as expected on multibyte encodings ? It's
>> been suggested here before.
>
> lower() on both sides also does not work in my case; it still searches
> case-sensitively. The string in this example has its first character
> capitalized, and the result is the same. It seems that lower() can't
> lowercase multibyte characters.
>
> db=> select lower('Зелен');
Well, no,

select my_string from my_table where lower(my_string) ~ lower(search_fragment);

Does that help ?

(~ does work for eg. German in my experience)

Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346



Re: [GENERAL] plpgsql equivalent to plperl $_SHARED and plpythonu global dictionary GD?

2007-07-10 Thread Zlatko Matic
Hello.
OK. I created a new table that holds information about rows inserted/updated in
a transaction.
I realized that the AFTER row-level triggers always fire before the AFTER
statement-level trigger.
Therefore I can use a row-level trigger to populate the auxiliary table which
holds information about affected rows, so that the AFTER statement-level trigger
can read that information.
It works and is fast enough.
So, I emulated NEW and OLD for a statement-level trigger :)
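
A minimal sketch of that approach (the table, column and trigger names below
are invented for illustration, not Zlatko's actual code):

CREATE TABLE mytable (id integer);
CREATE TABLE affected_rows (id integer);  -- auxiliary table

CREATE OR REPLACE FUNCTION remember_row() RETURNS trigger AS $$
BEGIN
    INSERT INTO affected_rows VALUES (NEW.id);  -- collect the affected row
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION process_statement() RETURNS trigger AS $$
BEGIN
    -- the AFTER statement-level trigger fires after all the row-level ones,
    -- so everything collected above is visible here
    PERFORM count(*) FROM affected_rows;  -- do the real per-statement work here
    DELETE FROM affected_rows;            -- clear it for the next statement
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t_row AFTER INSERT OR UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE remember_row();
CREATE TRIGGER t_stmt AFTER INSERT OR UPDATE ON mytable
    FOR EACH STATEMENT EXECUTE PROCEDURE process_statement();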
Regards,

Zlatko
  - Original Message - 
  From: hubert depesz lubaczewski 
  To: Zlatko Matic 
  Cc: pgsql-general@postgresql.org 
  Sent: Tuesday, July 10, 2007 10:17 AM
  Subject: Re: [GENERAL] plpgsql equivalent to plperl $_SHARED and plpythonu 
global dictionary GD?


  On 7/9/07, Zlatko Matic <[EMAIL PROTECTED]> wrote:
Does plpgsql have something equivalent to plperl $_SHARED or plpythonu global
dictionary GD?


  no, but you can use some table to emulate this. or a temp table.

  depesz

  -- 
  http://www.depesz.com/ - nowy, lepszy depesz 

[GENERAL] free scheduled import utility

2007-07-10 Thread Zlatko Matic

Hello.
Is there any free program/utility for batch imports from .csv files, that 
can be easily scheduled for daily inserts of data to PostgreSQL tables?

Regards,

Zlatko 





Re: [GENERAL] russian case-insensitive regexp search not working

2007-07-10 Thread alexander lunyov

Karsten Hilbert wrote:

Just to clarify: lower() on both sides of a comparison
should still work as expected on multibyte encodings ? It's
been suggested here before.
lower() on both sides also does not work in my case; it still searches
case-sensitively. The string in this example has its first character capitalized,
and the result is the same. It seems that lower() can't lowercase multibyte characters.


db=> select lower('Зелен');

Well, no,


   With this I just wanted to say that lower() doesn't work at all on
Russian Unicode characters: even in "select lower('String')" the 'String'
doesn't become lowercase, and likewise it does not work in a more complex
select statement.



select my_string from my_table where lower(my_string) ~ lower(search_fragment);

Does that help ?

(~ does work for eg. German in my experience)


No, for Russian Unicode strings it does not work.
I searched the pgsql-patches@ list and found this thread:
http://archives.postgresql.org/pgsql-patches/2007-06/msg00021.php
I wrote to Andrew (he hasn't answered yet) asking whether this patch can
help with my problem.

P.S.: if this issue is a known bug (as we discussed earlier), how long
will it take to fix? I know little about the postgresql development
process; maybe you know it a little better?

--
alexander lunyov
[EMAIL PROTECTED]






Re: [GENERAL] free scheduled import utility

2007-07-10 Thread Reid Thompson

On Tue, 2007-07-10 at 14:32 +0200, Zlatko Matic wrote:
> Hello.
> Is there any free program/utility for batch imports from .csv files, that 
> can be easily scheduled for daily inserts of data to PostgreSQL tables?
> Regards,
> 
> Zlatko 
> 
> 

cron, psql and http://www.postgresql.org/docs/8.2/static/sql-copy.html
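
For example, the statement such a scheduled job would run boils down to a
single COPY (the table name and file path below are made up):

-- load one CSV file into an existing table; run this daily via the OS scheduler
-- (server-side COPY FROM a file needs superuser rights; psql's \copy works for ordinary users)
COPY daily_sales FROM '/var/imports/daily_sales.csv' WITH CSV;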



Re: [GENERAL] free scheduled import utility

2007-07-10 Thread Dimitri Fontaine
On Tuesday 10 July 2007, Zlatko Matic wrote:
> Is there any free program/utility for batch imports from .csv files, that
> can be easily scheduled for daily inserts of data to PostgreSQL tables?

COPY itself would do the job, but you can also use pgloader:
  http://pgfoundry.org/projects/pgloader

It has many options, and is able to load data in the presence of errors, 
logging them (both error.log and reject.log).

Hope this helps, regards,
-- 
dim




Re: [GENERAL] free scheduled import utility

2007-07-10 Thread A. Kretschmer
On Tue, 10.07.2007 at 14:32:58 +0200, Zlatko Matic wrote:
> Hello.
> Is there any free program/utility for batch imports from .csv files, that 
> can be easily scheduled for daily inserts of data to PostgreSQL tables?
> Regards,

You can use your OS's scheduler. For Unix-like systems this is cron;
I'm not familiar with Windows.


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net



[GENERAL] TOAST, large objects and ACIDity

2007-07-10 Thread Benoit Mathieu

Hi all,

I want to use postgres to store data and large files, typically audio
files from 100 KB to 20 MB. For those files, I just need to store and
retrieve them, in an ACID way. (I don't need search, substring, or
other functionality.)
I saw postgres offers at least 2 methods: a bytea column with TOAST, or
the large objects API.

I wonder what the differences between the 2 methods are.
* I found that large objects need a vacuum after delete to really
release space. That may be handled by a trigger or autovacuum, is
that right?
* Large objects are used via a special API available in the libpq C client
library.
* I really care about keeping my transactions fully ACID. The documentation on
large objects doesn't explicitly say whether lo_import and lo_export (and
other primitives) are fully ACID. Any ideas?

* I'm going to benchmark insertion and reading with these 2 methods.

Other advice is welcome, particularly about integration of TOAST or
large objects with pgcluster.


thanks

Benoit



Re: [GENERAL] Postgres 8.2 binary for ubuntu 6.10?

2007-07-10 Thread Leonel

On 7/9/07, novnov <[EMAIL PROTECTED]> wrote:


I have postgres 8.1 installed on ubuntu 6.10 via synaptic package manager. I
would like to install 8.2, but it's not offered in the list. I think 8.2 is
offered on 7.x ubuntu, and I wonder if 8.2 will be offered on 6.10? Probably
the recommendation will be to compile 8.2 on 6.10, but I've only used binary
installations on linux and am not eager to open up another can of worms.

In a way I would like to move to the later ubuntu distro. But I'm concerned
that since my eventual deploy may be on something fairly barebones like
centos, it might be good to stay with the somewhat less polished version of
ubuntu...not sure if there is any value in this approach, but it's hard to
make good choices when one has so little exposure to what the various
options mean.
--
View this message in context: 
http://www.nabble.com/Postgres-8.2-binary-for-ubuntu-6.10--tf4053274.html#a11513301
Sent from the PostgreSQL - general mailing list archive at Nabble.com.





There won't be a release of PostgreSQL 8.2 for Ubuntu 6.10.
You can:
1. upgrade to Ubuntu Feisty with sudo "update-manager -c -d"
2. request a backport
3. do the backport yourself
4. compile from source

--
Leonel



Re: [GENERAL] vacuumdb: PANIC: corrupted item pointer

2007-07-10 Thread Richard Huxton

Alain Peyrat wrote:

  Hello,

  System: Red Hat Linux 4 64bits running postgres-7.4.16 (production)

  Initial problem:

  # pg_dump -O dbname -Ft -f /tmp/database.tar
pg_dump: query to get table columns failed: ERROR:  invalid memory alloc 
request size 9000688640


  After some research, it seems to be related to corruption of the
database. Running a vacuum crashes the db:


-bash-3.00$ vacuumdb -z -a
vacuumdb: vacuuming database "template1"
vacuumdb: vacuuming of database "template1" failed: PANIC:  corrupted 
item pointer: offset = 3336, size = 20


It would be nice if it's just template1 that is damaged, but I'm not 
sure that's the case.


1. Have you had crashes or other hardware problems recently?
2. Can you vacuum the other databases?
3. Can you just dump the schema for your selected database with 
--schema-only? (your pg_dump seemed to fail fetching column details)
4. Can you dump individual tables with --table=? (it might just be one 
table that's damaged)


http://www.postgresql.org/docs/7.4/static/app-pgdump.html


  Can someone help me track down the problem?

  Can I recover the data (the last backup failed)?


How much can be recovered will depend on what corruption has occurred.
If it's just a damaged index, then reindexing will fix it.
If it's a damaged system catalogue we might be able to fix the catalogue 
then read the data.
If data files are damaged we'll need to locate the damage and work 
around it. You will probably lose data on any damaged pages.


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] TOAST, large objects and ACIDity

2007-07-10 Thread Alexander Staubo

On 7/10/07, Benoit Mathieu <[EMAIL PROTECTED]> wrote:

I saw postgres offers at least 2 method : bytea column with TOAST, or
large objects API.



From the documentation:



All large objects are placed in a single system table called pg_largeobject.
PostgreSQL also supports a storage system called "TOAST" that automatically
stores values larger than a single database page into a secondary storage area
per table. This makes the large object facility partially obsolete. One 
remaining
advantage of the large object facility is that it allows values up to 2 GB in 
size,
whereas TOASTed fields can be at most 1 GB. Also, large objects can be
randomly modified using a read/write API that is more efficient than performing
such operations using TOAST.


My take: Stick with TOAST unless you need fast random access. TOAST is
faster, more consistently supported (eg., in Slony) and easier to work
with.

If you need fast random access, use the file system. Will you ever be
updating parts of the contents of a single file, or will you be
writing each file just once? If the latter, writing your own robust,
ACID-compliant file storage is fairly trivial.


* I really care about keeping my transactions fully ACID. The documentation on
large objects doesn't explicitly say whether lo_import and lo_export (and
other primitives) are fully ACID. Any ideas?


Since the lo_* functions are implemented as relational operations on
the internal large object table, they're ACID-compliant.
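
As a rough sketch of the two options (the table names and file path are
invented for illustration):

-- Option 1: a TOASTed bytea column; the value is sent as a normal
-- (ideally binary) query parameter and is transactional like any other column.
CREATE TABLE audio_files (
    id   serial PRIMARY KEY,
    name text NOT NULL,
    data bytea
);

-- Option 2: a large object; the server-side lo_import() reads a file on the
-- server, returns the new object's OID, and obeys the surrounding transaction.
CREATE TABLE audio_files_lo (
    id     serial PRIMARY KEY,
    name   text NOT NULL,
    lo_oid oid
);

BEGIN;
INSERT INTO audio_files_lo (name, lo_oid)
    VALUES ('clip.wav', lo_import('/tmp/clip.wav'));
COMMIT;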

Alexander.



Re: [GENERAL] russian case-insensitive regexp search not working

2007-07-10 Thread Tom Lane
alexander lunyov <[EMAIL PROTECTED]> writes:
> With this i just wanted to say that lower() doesn't work at all on
> russian unicode characters,

In that case you're using the wrong locale (ie, not russian unicode).
Check "show lc_ctype".

Or [ checks back in thread... ] maybe you're using the wrong operating
system.  Not so long ago FreeBSD didn't have Unicode locale support at
all; I'm not sure if 6.2 has that problem but it is worth checking.
Does it work for you to do case-insensitive russian comparisons in
"grep", for instance?

regards, tom lane



[GENERAL] Vaccum Stalling

2007-07-10 Thread Brad Nicholson
Version 7.4.12
AIX 5.3

Scenario - a large table was not being vacuumed correctly; there are now ~15
million dead tuples that account for approximately 20%-25% of the
table.  Vacuum appears to be stalling - it ran for approximately 10 hours
before I killed it.  I hooked up to the process with gdb and this looks
a bit suspicious to me.

gdb attach 979354
GNU gdb 6.0
Copyright 2003 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you
are
welcome to change it and/or distribute copies of it under certain 
conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for
details.
This GDB was configured as "powerpc-ibm-aix5.1.0.0"...attach: No such 
file or directory.

Attaching to process 979354
[Switching to Thread 1]
0xd033ce1c in read () from /usr/lib/libc.a(shr.o)
(gdb) bt
#0  0xd033ce1c in read () from /usr/lib/libc.a(shr.o)
#1  0x10034398 in FileRead (file=805585096, buffer=0xb38d0e40 "",
amount=8192) at fd.c:973
#2  0x1009eea8 in mdread (reln=0x301aa618, blocknum=160246, 
buffer=0xb38d0e40 "") at md.c:434
#3  0x1009d3d0 in smgrread (which=8192, reln=0x301aa618,
blocknum=160246,
buffer=0x ) at smgr.c:316
#4  0x1002c0a8 in ReadBufferInternal (reln=0x301aa618, blockNum=160246, 
bufferLockHeld=84 'T') at bufmgr.c:230
#5  0x101e5080 in _bt_getbuf (rel=0x301aa618, blkno=4294967295, 
access=1) at nbtpage.c:401
#6  0x101e41c4 in btbulkdelete (fcinfo=0x) at nbtree.c:692
#7  0x10048670 in OidFunctionCall3 (functionId=8192, arg1=807052824, 
arg2=805496980, arg3=806803160)
at fmgr.c:1260
#8  0x100dad04 in index_bulk_delete (indexRelation=0x301aa618,
[EMAIL PROTECTED]: 0x10194150 , 
callback_state=0x3016d6d8) at indexam.c:618
#9  0x10193fcc in lazy_vacuum_index (indrel=0x301aa618, 
vacrelstats=0x3016d6d8) at vacuumlazy.c:712
#10 0x10194b04 in lazy_vacuum_rel (onerel=0x302357b8, 
vacstmt=0x) at vacuumlazy.c:279
#11 0x10192da4 in vacuum_rel (relid=807622584, vacstmt=0x3016b558, 
expected_relkind=114 'r') at vacuum.c:855
#12 0x10192fb0 in vacuum (vacstmt=0x3016b558) at vacuum.c:290
#13 0x10141670 in ProcessUtility (parsetree=0x3016b558,
dest=0x3016b5a8, 
completionTag=0x2ff1d0c4 "")
at utility.c:831
#14 0x10121b20 in PortalRunUtility (portal=0x301f3768,
query=0x3016b5f8, 
dest=0x3016b5a8,
completionTag=0x2ff1d0c4 "") at pquery.c:772
#15 0x1012204c in PortalRun (portal=0x301f3768, count=806794920, 
dest=0x3016b5a8, altdest=0x3016b5a8,
completionTag=0x2ff1d0c4 "") at pquery.c:836
#16 0x10010034 in exec_simple_query (query_string=0x3016b278 "VACUUM 
ANALYZE v\erbos\e;") at postgres.c:914
---Type <return> to continue, or q <return> to quit---
#17 0x100119f8 in PostgresMain (argc=4, argv=0x30086e78, 
username=0x30086db8 "slony") at postgres.c:2973
#18 0x101b79e4 in ServerLoop () at postmaster.c:2564
#19 0x101b95a0 in PostmasterMain (argc=1, argv=0x300853c8) at 
postmaster.c:897
#20 0x153c in main (argc=1, argv=0x2ff22c40) at main.c:222
#21 0x1204 in __start ()

Ideas?
-- 
Brad Nicholson  416-673-4106
Database Administrator, Afilias Canada Corp.




Re: [GENERAL] TOAST, large objects and ACIDity

2007-07-10 Thread Tomasz Ostrowski
On Tue, 10 Jul 2007, Alexander Staubo wrote:

> My take: Stick with TOAST unless you need fast random access. TOAST
> is faster, more consistently supported (eg., in Slony) and easier
> to work with.

Toasted bytea columns have some other disadvantages also:

1.

It is impossible to create its value in chunks - it means that
you'll need to create one very big "insert" with file contents. If
your client library does not support binary arguments to prepared
statements you'll need to escape this data, which makes it several
times bigger (I think it could be 4 times bigger). For parsing and
saving this a server would need several copies of this data (I think
a server would need at least 10 times more memory than binary data
size).

If I'm not mistaken it means that for saving a 20MB data file a
server would need at least 200MB of memory - and this is a huge
amount. Also a client would need over 100MB.


I've worked around this with a temporary table:
create temporary table chunks (
chunk_nr int primary key,
chunk_data bytea not null
);
and an aggregate:
create aggregate bytea_accum(
sfunc = byteacat,
basetype = bytea,
stype = bytea,
initcond = ''
);
I put 1MB chunks into this "chunks" table and then do:
insert into attachments
(filename, filedata)
select
'filename.txt', bytea_accum(chunk_data)
from (
select chunk_data
from chunks
order by chunk_nr
) as ordered_chunks;

I've proposed that it would be easier if there was a memory-efficient
function:
bytea_from_lo(oid lo_id) returns bytea
But there was no feedback.


2.

Also there's the same problem when reading this bytea data. I'm also doing
this in chunks - I've set external storage on this table:
alter table attachments alter column filedata set storage external;
If this is set then a function
substring(filedata from [offset] for [chunk_size])
is efficient.
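
For example, reading back the second 1 MB chunk of a stored file could look
like this (using the names from the sketch above; offsets are 1-based):

SELECT substring(filedata FROM 1048577 FOR 1048576)
FROM attachments
WHERE filename = 'filename.txt';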




3.

In the case of large objects you won't be able to enforce constraints in the
database, for example:
- you will be able to delete lo which is referenced elsewhere;
- you won't be able to limit lo size;
- I think that you'll not be able to limit access to lo;
- you will be able to delete a reference to a lo without
  deleting this object (you can use contrib/vacuum_lo for garbage
  collecting though).

Regards
Tometzky
-- 
...although Eating Honey was a very good thing to do, there was a
moment just before you began to eat it which was better than when you
were...
  Winnie the Pooh



[GENERAL] Problems with linkage

2007-07-10 Thread Kevin martins
Hello, 
I am new to using C in PostgreSQL. My problem is that when I compile my program
code, it generates the following error message:

fu01.o(.idata$3+0xc): undefined reference to `libpostgres_a_iname'
nmth00.o(.idata$4+0x0): undefined reference to `_nm__SPI_processed'
collect2: ld returned 1 exit status
dllwrap.exe: no export definition file provided. Creating one, but that may not
be what you want
dllwrap.exe: gcc exited with status 1
make.exe: *** [selectAll.dll] Error 1
Execution terminated

Could anyone help me?
 
Cheers 
 
Kevin Martins

Re: [GENERAL] Vaccum Stalling

2007-07-10 Thread Tom Lane
Brad Nicholson <[EMAIL PROTECTED]> writes:
> Scenario - a large table was not being vacuumed correctly, there now ~
> 15 million dead tuples that account for approximately 20%-25% of the
> table.  Vacuum appears to be stalling - ran for approximately 10 hours
> before I killed it.  I hooked up to the process with gdb and this looks
> a bit suspicious to me.

Looks perfectly normal to me.  Reads in btbulkdelete are exactly where
I'd expect 7.4's vacuum to be spending the bulk of its wait time on a
large table, because that's a logical-order traversal of the index, and
cannot benefit from any sequential-access advantage.  (As of 8.2 we
are able to do this with a physical-order traversal, which can be a
whole lot faster.)

If you can afford to lock the table against writes for awhile,
reindexing might help by bringing the index back into physical order.
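
A minimal sketch, with made-up names (note that REINDEX locks out writes
while it runs):

REINDEX TABLE bigtable;            -- rebuild all of the table's indexes
REINDEX INDEX bigtable_some_idx;   -- or rebuild just one index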

regards, tom lane



Re: [GENERAL] Vaccum Stalling

2007-07-10 Thread Brad Nicholson
On Tue, 2007-07-10 at 11:19 -0400, Tom Lane wrote:
> Oh, I forgot to mention --- you did check that vacuum_mem is set to
> a pretty high value, no?  Else you might be doing a lot more
> btbulkdelete scans than you need to.
> 
>   regards, tom lane

What would you define as high for 7.4?  I bumped it up to ~245 MB.

-- 
Brad Nicholson  416-673-4106
Database Administrator, Afilias Canada Corp.




Re: [GENERAL] Vaccum Stalling

2007-07-10 Thread Tom Lane
Oh, I forgot to mention --- you did check that vacuum_mem is set to
a pretty high value, no?  Else you might be doing a lot more
btbulkdelete scans than you need to.

regards, tom lane



Re: [GENERAL] Vaccum Stalling

2007-07-10 Thread Tom Lane
Brad Nicholson <[EMAIL PROTECTED]> writes:
> On Tue, 2007-07-10 at 11:19 -0400, Tom Lane wrote:
>> Oh, I forgot to mention --- you did check that vacuum_mem is set to
>> a pretty high value, no?  Else you might be doing a lot more
>> btbulkdelete scans than you need to.

> What would you define as high for 7.4?  I bumped it up to ~245 MB.

That sounds like plenty --- you only need 6 bytes per dead tuple,
so that should be enough to handle all your 15-20M dead tuples in
one scan.
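(For example, 20 million dead tuples at 6 bytes apiece is only about 120 MB,
comfortably under that ~245 MB setting.)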

How big is this index again?

regards, tom lane



Re: [GENERAL] Vaccum Stalling

2007-07-10 Thread Brad Nicholson
On Tue, 2007-07-10 at 11:31 -0400, Tom Lane wrote:
> Brad Nicholson <[EMAIL PROTECTED]> writes:
> > On Tue, 2007-07-10 at 11:19 -0400, Tom Lane wrote:
> >> Oh, I forgot to mention --- you did check that vacuum_mem is set to
> >> a pretty high value, no?  Else you might be doing a lot more
> >> btbulkdelete scans than you need to.
> 
> > What would you define as high for 7.4?  I bumped it up to ~ 245mbs
> 
> That sounds like plenty --- you only need 6 bytes per dead tuple,
> so that should be enough to handle all your 15-20M dead tuples in
> one scan.
> 
> How big is this index again?

Not sure which one it's working on - there are 6 of them, each ~2.5 GB.

-- 
Brad Nicholson  416-673-4106
Database Administrator, Afilias Canada Corp.




Re: [GENERAL] Vaccum Stalling

2007-07-10 Thread Tom Lane
Brad Nicholson <[EMAIL PROTECTED]> writes:
> On Tue, 2007-07-10 at 11:31 -0400, Tom Lane wrote:
>> How big is this index again?

> Not sure which one it's working on - there are 6 of them each are ~
> 2.5GB

OK, about 300K pages each ... so even assuming the worst case that
each page requires a physical disk seek, it should take less than an
hour to vacuum each one.  So 10 hours is beginning to sound a bit
suspicious to me too, though it's not beyond the threshold of
incredulity quite yet.

It's conceivable that that index has been corrupted in such a way
that there's a loop of pages whose right-links point back to each other,
which would cause the btbulkdelete scan to never terminate.  If that's
the case then the best fix is to REINDEX.  But I think I'd counsel
letting the VACUUM run awhile longer first, just in case it will finish;
unless you have clear evidence that it won't, like previous runs having
also gone until killed.  One thing you could try is strace'ing the
vacuum for awhile to see if you can detect any evidence of fetching the
same pages over and over.  (This would also help you find out which
index it's working on.)

regards, tom lane



Re: [GENERAL] Vaccum Stalling

2007-07-10 Thread Pavel Stehule

Hello

I have a similar problem with vacuum on 8.1.

I have a 256 MB table. pgstattuple reports 128 MB free. I stopped vacuum
after 1 hour (maintenance_work_mem = 160MB). I had no more time.

Regards
Pavel Stehule



2007/7/10, Tom Lane <[EMAIL PROTECTED]>:

Brad Nicholson <[EMAIL PROTECTED]> writes:
> On Tue, 2007-07-10 at 11:19 -0400, Tom Lane wrote:
>> Oh, I forgot to mention --- you did check that vacuum_mem is set to
>> a pretty high value, no?  Else you might be doing a lot more
>> btbulkdelete scans than you need to.

> What would you define as high for 7.4?  I bumped it up to ~ 245mbs

That sounds like plenty --- you only need 6 bytes per dead tuple,
so that should be enough to handle all your 15-20M dead tuples in
one scan.

How big is this index again?

regards, tom lane






Re: [GENERAL] PostGreSQL Replication

2007-07-10 Thread Andrew Sullivan
On Sat, Jul 07, 2007 at 05:16:56AM -0700, Gabriele wrote:
> Let's have a server which feed data to multiple slaves, usually using
> direct online connections. Now, we may want to allow those client to
> sync the data to a local replica, work offline and then resync the
> data back to the server. Which is the easiest way to approach this
> problem?

Write yourself a new system that does it?  To my knowledge, nobody
has yet built a production-ready system that can do this sort of
multi-master merge-back replication.  I can think of possible ways
you could bodge this up using Slony-I and log shipping, but I don't
think it'd be pretty.
 
> resource on this topic? I've found people talking about Slony-I, what
> can you tell me on this tool? Are there other tools?

You can find out more about Slony-I at .

> Moreover, additionally to the previous question, let's take the case i
> do have multiple indipendent and separate server and then i want to
> upload the information of these server (actually not everything but a
> subset of the data) to another server (here we have a unidirectional
> comm, as opposed to previous bidirectional comm and we have asymmetric
> structure as opposed to the previous exact replica), what are the
> tools, documentation, best practices or other resources to approach
> this problem?

This also sounds like a multimaster problem, and again I know of no
system that currently supports what you're talking about.  This is
easier to bodge together under Slony using views and such like, but
it's still not trivial.  If you wish to discuss how to do this with
Slony, I suggest taking it up on that list (available from the site
mentioned above).

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
A certain description of men are for getting out of debt, yet are
against all taxes for raising money to pay it off.
--Alexander Hamilton



Re: [GENERAL] Vaccum Stalling

2007-07-10 Thread Pavel Stehule

Hello

I have a similar problem with vacuum on 8.1.

I have a 256 MB table. pgstattuple reports 128 MB free. I stopped vacuum
after 1 hour (maintenance_work_mem = 160MB). I had no more time.



I tested it on 8.3 with random data. Vacuum from 190 MB to 94 MB needed
30 sec. It's much better. It isn't 100% comparable, but it is one more
argument for upgrading.

Regards
Pavel Stehule





2007/7/10, Tom Lane <[EMAIL PROTECTED]>:
> Brad Nicholson <[EMAIL PROTECTED]> writes:
> > On Tue, 2007-07-10 at 11:19 -0400, Tom Lane wrote:
> >> Oh, I forgot to mention --- you did check that vacuum_mem is set to
> >> a pretty high value, no?  Else you might be doing a lot more
> >> btbulkdelete scans than you need to.
>
> > What would you define as high for 7.4?  I bumped it up to ~ 245mbs
>
> That sounds like plenty --- you only need 6 bytes per dead tuple,
> so that should be enough to handle all your 15-20M dead tuples in
> one scan.
>
> How big is this index again?
>
> regards, tom lane
>
>





Re: [GENERAL] Nested Transactions in PL/pgSQL

2007-07-10 Thread Nykolyn, Andrew
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Alvaro Herrera
Sent: Friday, July 06, 2007 9:49 AM
To: Nykolyn, Andrew
Cc: John DeSoi; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Nested Transactions in PL/pgSQL

Nykolyn, Andrew wrote:
> My real issue is not that I need subtransactions but I need to flush 
> the buffer so that I can regain memory for my stored procedure to 
> complete without getting a memory error.

Normal operation does not "fill buffers", thus they don't need any
flushing.  There must be something weird about what your SP is doing.
Can we see it?  Are you maybe opening and keeping half-scanned a lot of
cursors?  If not, can you explain in much detail what it is doing?

Alvaro,

The stored procedures are long and complicated, with recursive routines
and temporary tables, and deal with a lot of data.   What ended up
happening is that we added another very large chunk of data to the
processing and that gave us the memory error.  After looking closer at
the error we upped the max_locks_per_transaction parameter to 256 and
then we were able to execute without an error.  Thanks for your help.

Andy Nykolyn
Northrop Grumman


 



Re: [GENERAL] vacuumdb: PANIC: corrupted item pointer

2007-07-10 Thread Tom Lane
Richard Huxton <[EMAIL PROTECTED]> writes:
> Alain Peyrat wrote:
>> Initial problem:
>> 
>> # pg_dump -O dbname -Ft -f /tmp/database.tar
>> pg_dump: query to get table columns failed: ERROR:  invalid memory alloc 
>> request size 9000688640
>> 
>> After some research, it seems to be related to a corruption of the 
>> database. Running a vacum crashes the db:
>> 
>> -bash-3.00$ vacuumdb -z -a
>> vacuumdb: vacuuming database "template1"
>> vacuumdb: vacuuming of database "template1" failed: PANIC:  corrupted 
>> item pointer: offset = 3336, size = 20

> It would be nice if it's just template1 that is damaged, but I'm not 
> sure that's the case.

This looks pretty bad, because the above already proves corruption in two
different databases --- there is something wrong somewhere in template1,
and something else wrong somewhere in "dbname" (unless that is actually
template1).  It seems likely that there's been widespread damage from
some hardware or filesystem-level misfortune.

FWIW, a look in the source code shows that the 'corrupted item pointer'
message comes only from PageIndexTupleDelete, so that indicates a
damaged index which should be fixable by reindexing.  But the other one
looks more like heap damage, since it's apparently trying to copy a
damaged variable-width field and I don't think index entries get copied
anywhere in a normal query.  The query pg_dump is complaining about
looks at pg_attribute and pg_type entries for a particular table ---
hmm, I notice that it's not fetching any variable-width fields, so it's
not entirely clear where the error is coming from.  If you are really
lucky it could be that the corruption is actually in pg_statistic,
and it's failing when the planner tries to estimate row counts for the
query.  That would be really lucky because that's all discardable data.
Try "DELETE FROM pg_statistic" as superuser and see if you can dump
then.

> 1. Have you had crashes or other hardware problems recently?

Indeed.

regards, tom lane



Re: [GENERAL] Postgres 8.2 binary for ubuntu 6.10?

2007-07-10 Thread novnov

Thanks all of you. It does seem like the backport is the way to go. 

So now I have 8.2 and some new postgres/linux newb questions.

I can safely remove 8.1 after moving data using synaptic, i.e. 8.2 shouldn't
be dependent on 8.1 at all?

I don't understand how postgres is installed with these package managers.
The windows installer asks me for the superuser name (postgres is the
default) and I provide a password. There is an option to run as a service,
and a windows user account can be created for that. After the synaptic install,
postgres seems to be running as a service. I see a postgres user account
named 'postgres' is created, but I don't know what the password is?

Finally, I've never been able to get pgadmin to work on linux. I can't
connect, probably because the password is unknown, but another question is
the host address; is localhost OK?
-- 
View this message in context: 
http://www.nabble.com/Postgres-8.2-binary-for-ubuntu-6.10--tf4053274.html#a11526007
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] PostGreSQL Replication

2007-07-10 Thread Guido Neitzer

On 07.07.2007, at 06:16, Gabriele wrote:


Let's have a server which feed data to multiple slaves, usually using
direct online connections. Now, we may want to allow those client to
sync the data to a local replica, work offline and then resync the
data back to the server. Which is the easiest way to approach this
problem?


I don't know of anything for PostgreSQL that does that.

FrontBase [1] might have something like that, if I understood Geert
correctly. But I'm not sure about that. Maybe ask them directly.
Their DBMS is free but not open source, though.


cug

[1] http://www.frontbase.com



Re: [GENERAL] vacuumdb: PANIC: corrupted item pointer

2007-07-10 Thread Richard Huxton

Tom Lane wrote:

Richard Huxton <[EMAIL PROTECTED]> writes:

Alain Peyrat wrote:

Initial problem:

# pg_dump -O dbname -Ft -f /tmp/database.tar
pg_dump: query to get table columns failed: ERROR:  invalid memory alloc 
request size 9000688640


After some research, it seems to be related to a corruption of the 
database. Running a vacum crashes the db:


-bash-3.00$ vacuumdb -z -a
vacuumdb: vacuuming database "template1"
vacuumdb: vacuuming of database "template1" failed: PANIC:  corrupted 
item pointer: offset = 3336, size = 20


It would be nice if it's just template1 that is damaged, but I'm not 
sure that's the case.


This looks pretty bad, because the above already proves corruption in two
different databases --- there is something wrong somewhere in template1,
and something else wrong somewhere in "dbname" (unless that is actually
template1).  It seems likely that there's been widespread damage from
some hardware or filesystem-level misfortune.

FWIW, a look in the source code shows that the 'corrupted item pointer'
message comes only from PageIndexTupleDelete, so that indicates a
damaged index which should be fixable by reindexing.


Tom - could it be damage to a shared system-catalogue, and template1 
just the first DB to be vacuumed? It just strikes me as odd that an 
index on template1 would be corrupted - assuming it's your typical empty 
template1 and is just being connected to during DB creation etc.


Unless, of course, you're looking at genuine on-disk corruption of
unused files/inodes, in which case it could be anything. Ick :-(


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Duplicate Unique Key constraint error

2007-07-10 Thread Harpreet Dhaliwal

my primary key is neither SERIAL nor a SEQUENCE.

CONSTRAINT pk_dig PRIMARY KEY (dig_id)

This is the clause that I have for my primary key in the create table
script.

thanks,
~Harpreet

On 7/10/07, Ron St-Pierre <[EMAIL PROTECTED]> wrote:


Harpreet Dhaliwal wrote:
> Hi,
>
> I keep getting this duplicate unique key constraint error for my
> primary key even
> though I'm not inserting anything duplicate. It even inserts the
> records properly
> but my console throws this error that I'm sure of what it is all about.
>
> Corruption of my Primary Key can be one of the possibilities but I'm
> really not sure how
> to get rid of this corruption and how to re-index the primary key.
>
> Also, I was wondering what could be the cause of this PK  corruption,
> if possible and what does can this corruption lead to.
> I mean what are its cons.
>
> Thanks,
> ~Harpreet
You haven't really given any useful information about your primary key,
but if you are using SERIAL as the column type (INT type with a
sequence) you may just be having a problem with its current value (but
then inserts shouldn't work).

If you are using a sequence here, see what its current value is and
compare it to the highest value in the column. If its value is less than
the columns max() value, just reset the value in the sequence.

imp=# CREATE TABLE dup_pkey (id SERIAL PRIMARY KEY, insert_order int);
imp=# INSERT INTO dup_pkey (insert_order) VALUES (1);
imp=# INSERT INTO dup_pkey (insert_order) VALUES (2);
imp=# INSERT INTO dup_pkey (insert_order) VALUES (3);
imp=# INSERT INTO dup_pkey (insert_order) VALUES (4);

imp=# SELECT * FROM dup_pkey;
 id | insert_order
----+--------------
  1 |            1
  2 |            2
  3 |            3
  4 |            4
(4 rows)

Now, if you set the value below what the max() column value is, you will
have a problem with inserts.
imp=# SELECT setval('dup_pkey_id_seq',3);
 setval
--------
      3
(1 row)

imp=# INSERT INTO dup_pkey (insert_order) VALUES (5);
ERROR:  duplicate key violates unique constraint "dup_pkey_pkey"


If this is the case, use setval() to update the value of the sequence to
the max() value of your primary key. You can use \d to get information
about your table, including the sequence name. However if, as you say,
it IS inserting records properly, then this ISN'T going to help.
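
A sketch of that fix, using the names from the example above:

-- bump the sequence to the current maximum of the primary key column
SELECT setval('dup_pkey_id_seq', (SELECT max(id) FROM dup_pkey));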

hth

Ron



Re: [GENERAL] Hyper-Trading

2007-07-10 Thread Adrian von Bidder
On Saturday 07 July 2007 11.34:04 Евгений Кононов wrote:
> Hello!
>
> How can I force Postgres to use all virtual processors with
> Hyper-Threading enabled?

If your  operating system is able to schedule the threads/processes across 
CPUs, PostgreSQL will use them.  Often, the limit is disk, not CPU, so 
check this first if you feel your db should be running faster than it does.

greetings
-- vbi


-- 
this email is protected by a digital signature: http://fortytwo.ch/gpg



Re: [GENERAL] PostGreSQL Replication

2007-07-10 Thread Adrian von Bidder
On Saturday 07 July 2007 14.16:56 Gabriele wrote:
> I know this is a delicate topic which must be approached cautiously.
>
> Let's have a server which feeds data to multiple slaves, usually using
> direct online connections. Now, we may want to allow those clients to
> sync the data to a local replica, work offline and then resync the
> data back to the server. What is the easiest way to approach this
> problem?

Sounds like something you'd want to handle within the application, not at 
the database layer.  The application can know much more about how data is 
modified and how it can be modified safely so a later merge operation won't 
fail, or how merge conflicts (however these are defined) can be safely 
handled.  The database must assume that a user can modify "anything" during 
the offline period, and can't assume anything about what to do on merge 
operations.

cheers
-- vbi

-- 
The days just prior to marriage are like a snappy introduction to a
tedious book.




Re: [GENERAL] vacuumdb: PANIC: corrupted item pointer

2007-07-10 Thread Tom Lane
Richard Huxton <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> FWIW, a look in the source code shows that the 'corrupted item pointer'
>> message comes only from PageIndexTupleDelete, so that indicates a
>> damaged index which should be fixable by reindexing.

> Tom - could it be damage to a shared system-catalogue, and template1 
> just the first DB to be vacuumed?

Possible, but in any case the other error indicates an independent
problem.

> It just strikes me as odd that an 
> index on template1 would be corrupted - assuming it's your typical empty 
> template1 and is just being connected to during DB creation etc.

Yeah, I was wondering about that too.  PageIndexTupleDelete wouldn't
even get called unless there was something to delete, which indicates
a table that's been changed.  Is the OP in the habit of doing real work
in template1?

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Postgres 8.2 binary for ubuntu 6.10?

2007-07-10 Thread Mario Guenterberg
On Tue, Jul 10, 2007 at 10:50:39AM -0700, novnov wrote:
> 
> Thanks all of you. It does seem like the backport is the way to go. 
> 
> So now I have 8.2 and some new postgres/linux newb questions.
> 
> I can safely remove 8.1 after moving data using synaptic, ie 8.2 shouldn't
> be dependent on 8.1 at all?
> 
> I don't understand how postgres is installed with these package managers.
> The windows installer asks me for the superuser name (postgres is the
> default) and I provide a password. There is an option to run as a service,
> and a windows user acct can be created for that. After the synaptic install,
> postgres seems to be running as a service. I see a postgres user account is
> created 'postgres' but don't know what the password is?
> 
> Finally, I've never been able to get pgadmin to work on linux. I can't
> connect, probably because the password is unknown, but another question is
> the host address, is localhost is ok?

See the docs in /usr/share/doc/postgresql* for details.
If pgadmin runs on the same machine then localhost is your choice.
Don't forget to edit the pg_hba.conf under your data dir (e.g.
/var/lib/pgsql or so).
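
A minimal sketch of the kind of line you may need there (this assumes
pgAdmin connects over TCP to localhost as user postgres; pick the auth
method that matches your policy, and reload the server afterwards):

# TYPE  DATABASE  USER      CIDR-ADDRESS   METHOD
host    all       postgres  127.0.0.1/32   md5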

Greetings
Mario

-- 
 -
| havelsoft.com - Ihr Service Partner für Open Source |
| Tel:  033876-21 966 |
| Notruf: 0173-277 33 60  |
| http://www.havelsoft.com|
| |
| Inhaber: Mario Günterberg   |
| Mützlitzer Strasse 19   |
| 14715 Märkisch Luch |
 -




Re: [GENERAL] Duplicate Unique Key constraint error

2007-07-10 Thread Harpreet Dhaliwal

I finally figured out the actual problem. PHEW.
It's something like two different transactions seeing the same snapshot
of the database.

Transaction 1 started, saw max(dig_id) = 30 and inserted the new dig_id=31.
When Transaction 2 started and read max(dig_id) it was still 30,
and by the time it tried to insert 31, 31 had already been inserted by
Transaction 1, hence the unique key constraint error.

I thought this would be taken care of by the database itself by locking the
transactions, but now I really don't know how this locking takes place
in postgres. I used to work with SQL Server and never faced this problem
there.

Please guide me through getting rid of this problem.

Thanks,
~Harpreet

On 7/10/07, Harpreet Dhaliwal <[EMAIL PROTECTED]> wrote:


my primary key is neither SERIAL nor a SEQUENCE.

CONSTRAINT pk_dig PRIMARY KEY (dig_id)

This is the clause that I have for my primary key in the create table
script.

thanks,
~Harpreet

On 7/10/07, Ron St-Pierre <[EMAIL PROTECTED]> wrote:
>
> Harpreet Dhaliwal wrote:
> > Hi,
> >
> > I keep getting this duplicate unique key constraint error for my
> > primary key even
> > though I'm not inserting anything duplicate. It even inserts the
> > records properly
> > but my console throws this error that I'm sure of what it is all
> about.
> >
> > Corruption of my Primary Key can be one of the possibilities but I'm
> > really not sure how
> > to get rid of this corruption and how to re-index the primary key.
> >
> > Also, I was wondering what could be the cause of this PK  corruption,
> > if possible and what does can this corruption lead to.
> > I mean what are its cons.
> >
> > Thanks,
> > ~Harpreet
> You haven't really given any useful information about your primary key,
> but if you are using SERIAL as the column type (INT type with a
> sequence) you may just be having a problem with its current value (but
> then inserts shouldn't work).
>
> If you are using a sequence here, see what it's current value is and
> compare it to the highest value in the column. If its value is less than
>
> the columns max() value, just reset the value in the sequence.
>
> imp=# CREATE TABLE dup_pkey (id SERIAL PRIMARY KEY, insert_order int);
> imp=# INSERT INTO dup_pkey (insert_order) VALUES (1);
> imp=# INSERT INTO dup_pkey (insert_order) VALUES (2);
> imp=# INSERT INTO dup_pkey (insert_order) VALUES (3);
> imp=# INSERT INTO dup_pkey (insert_order) VALUES (4);
>
> imp=# SELECT * FROM dup_pkey;
>  id | insert_order
> ----+--------------
>   1 |            1
>   2 |            2
>   3 |            3
>   4 |            4
> (4 rows)
>
> Now, if you set the value below what the max() column value is, you will
> have a problem with inserts.
> imp=# SELECT setval('dup_pkey_id_seq',3);
>  setval
> --------
>       3
> (1 row)
>
> imp=# INSERT INTO dup_pkey (insert_order) VALUES (5);
> ERROR:  duplicate key violates unique constraint "dup_pkey_pkey"
>
>
> If this is the case, use setval() to update the value of the sequence to
>
> the max() value of your primary key. You can use \d to get information
> about your table, including the sequence name. However if, as you say,
> it IS inserting records properly, then this ISN'T going to help.
>
> hth
>
> Ron
>




Re: [GENERAL] Duplicate Unique Key constraint error

2007-07-10 Thread Michael Glaesemann


On Jul 10, 2007, at 13:22 , Harpreet Dhaliwal wrote:

Transaction 1 started, saw max(dig_id) = 30 and inserted new  
dig_id=31.
Now the time when Transaction 2 started and read max(dig_id) it was  
still 30

and by the time it tried to insert 31, 31 was already inserted by
Transaction 1 and hence the unique key constraint error.

I thought this would be taken care by the database itself by  
locking the
transactions but now I really don't know how does this locking  
takes place

in postgres.


Why would the server lock the table? It can't know your intention is  
to add one to the number returned and insert. If this is what you  
want, you have to lock the table explicitly.
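
If you really did want to keep the max()+1 approach, it would have to look
something like this (a sketch only; 'digs' stands in for your real table,
and EXCLUSIVE mode blocks all concurrent writers until COMMIT):

BEGIN;
LOCK TABLE digs IN EXCLUSIVE MODE;
INSERT INTO digs (dig_id)
    SELECT coalesce(max(dig_id), 0) + 1 FROM digs;
COMMIT;

That serializes every writer on the table, which is exactly the cost noted
elsewhere in this thread.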



Please guide me through getting rid of this problem.


This exact reason is why sequences are often used for primary keys. I  
recommend you change your primary key.



Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Adjacency Lists vs Nested Sets

2007-07-10 Thread Matthew Hixson
Does Postgres have any native support for hierarchical data storage?   
I'm familiar with the Adjacency List technique, but am trying to  
determine whether or not Nested Sets would make sense for our  
application or not.  I understand that Nested Sets might be better  
for high read applications, but write heavy applications suffer from  
poor performance.

  Any thoughts on the topic?
  -M@

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Adjacency Lists vs Nested Sets

2007-07-10 Thread Richard Huxton

Matthew Hixson wrote:
Does Postgres have any native support for hierarchical data storage?  
I'm familiar with the Adjacency List technique, but am trying to 
determine whether or not Nested Sets would make sense for our 
application or not.  I understand that Nested Sets might be better for 
high read applications, but write heavy applications suffer from poor 
performance.


You might find the "ltree" add-on in contrib useful. Look in the 
"contrib" directory of your source installation, or the addons/extras 
package of your distribution.
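
A tiny sketch of what the ltree approach looks like (assuming the contrib
module is installed; all names here are made up):

CREATE TABLE tree (path ltree);
CREATE INDEX tree_path_gist ON tree USING gist (path);
INSERT INTO tree VALUES ('Top');
INSERT INTO tree VALUES ('Top.Science');
INSERT INTO tree VALUES ('Top.Science.Astronomy');
-- everything under Top.Science:
SELECT path FROM tree WHERE path <@ 'Top.Science';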


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Duplicate Unique Key constraint error

2007-07-10 Thread Tom Lane
"Harpreet Dhaliwal" <[EMAIL PROTECTED]> writes:
> Transaction 1 started, saw max(dig_id) = 30 and inserted new dig_id=31.
> Now the time when Transaction 2 started and read max(dig_id) it was still 30
> and by the time it tried to insert 31, 31 was already inserted by
> Transaction 1 and hence the unique key constraint error.

This is exactly why you're recommended to use sequences (ie serial
columns) for generating IDs.  Taking max()+1 does not work, unless
you're willing to lock the whole table and throw away vast amounts of
concurrency.
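
For an existing table the switch is small. Something like this (a sketch;
"digs" is a placeholder for the real table name):

CREATE SEQUENCE dig_id_seq;
SELECT setval('dig_id_seq', (SELECT max(dig_id) FROM digs));
ALTER TABLE digs ALTER COLUMN dig_id SET DEFAULT nextval('dig_id_seq');

and then simply omit dig_id from the INSERT and let the default fill it in.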

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Adjacency Lists vs Nested Sets

2007-07-10 Thread Michael Glaesemann


On Jul 10, 2007, at 13:51 , Richard Huxton wrote:


Matthew Hixson wrote:
Does Postgres have any native support for hierarchical data  
storage?  I'm familiar with the Adjacency List technique, but am  
trying to determine whether or not Nested Sets would make sense  
for our application or not.  I understand that Nested Sets might  
be better for high read applications, but write heavy applications  
suffer from poor performance.


You might find the "ltree" add-on in contrib useful. Look in the  
"contrib" directory of your source installation, or the addons/ 
extras package of your distribution.


And as for performance, do remember to benchmark your app using the  
different techniques. Adjacency lists have their own performance  
issues as well. It definitely depends on your app.


Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Duplicate Unique Key constraint error

2007-07-10 Thread Harpreet Dhaliwal

Thanks a lot for all your suggestions, gentlemen.
I changed it to a SERIAL column and all the pain has been automatically
alleviated :)

Thanks a ton.
~Harpreet

On 7/10/07, Tom Lane <[EMAIL PROTECTED]> wrote:


"Harpreet Dhaliwal" <[EMAIL PROTECTED]> writes:
> Transaction 1 started, saw max(dig_id) = 30 and inserted new dig_id=31.
> Now the time when Transaction 2 started and read max(dig_id) it was
still 30
> and by the time it tried to insert 31, 31 was already inserted by
> Transaction 1 and hence the unique key constraint error.

This is exactly why you're recommended to use sequences (ie serial
columns) for generating IDs.  Taking max()+1 does not work, unless
you're willing to lock the whole table and throw away vast amounts of
concurrency.

regards, tom lane



Re: [GENERAL] vacuumdb: PANIC: corrupted item pointer

2007-07-10 Thread AlJeux

Richard Huxton wrote:

Alain wrote:

  Hello,

  System: Red Hat Linux 4 64bits running postgres-7.4.16 (production)

  Initial problem:

  # pg_dump -O dbname -Ft -f /tmp/database.tar
pg_dump: query to get table columns failed: ERROR:  invalid memory 
alloc request size 9000688640


  After some research, it seems to be related to a corruption of the 
database. Running a vacum crashes the db:


-bash-3.00$ vacuumdb -z -a
vacuumdb: vacuuming database "template1"
vacuumdb: vacuuming of database "template1" failed: PANIC:  corrupted 
item pointer: offset = 3336, size = 20


It would be nice if it's just template1 that is damaged, but I'm not 
sure that's the case.


First, thank you Richard and Tom for helping me, but I finally decided to 
restore the last backup, so now the problem is no longer critical.


Anyway, being able to rescue some data may be interesting.
The strange point is that the database was running quite well for simple 
queries (nothing really visible, but no backup possible).



1. Have you had crashes or other hardware problems recently?


No crash, but we changed our server (which seems to be the cause).

The first try was a file system copy to reduce downtime, as both were the 
same 7.4.x version, but the result was not working (maybe related to the 
architecture change 32 bits => 64 bits), so I finally dropped the db and 
performed a dump/restore. I think the db was a mix of 32/64-bit files.



2. Can you vacuum the other databases?


No.

3. Can you just dump the schema for your selected database with 
--schema-only? (your pg_dump seemed to fail fetching column details)


No.

4. Can you dump individual tables with --table=? (it might just be one 
table that's damaged)


No.



http://www.postgresql.org/docs/7.4/static/app-pgdump.html


  Can someone help me to track down the problem ?

  Can I recover the datas (last backup failed) ?


How much can be recovered will depend on what corruption has occurred.
If it's just a damaged index, then reindexing will fix it.
If it's a damaged system catalogue we might be able to fix the catalogue 
then read the data.
If data files are damaged we'll need to locate the damage and work 
around it. You will probably lose data on any damaged pages.
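
(For the damaged-index case the repair itself is short; a sketch with
placeholder names:

REINDEX TABLE damaged_table;
REINDEX INDEX damaged_index;

Check the REINDEX documentation before touching system catalogue indexes
on 7.4, as those need extra care.)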





---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] Implementing 2 different transactions in a PL/Perl function

2007-07-10 Thread Jasbinder Singh Bali

Hi,

How can I have two different transactions in a plperlu function?
My purpose is as follows:-

Transaction 1 does some series of inserts in tbl_abc
Transaction 2 updates some columns in tbl_abc fetching records from some
other table.

I basically want 2 independent transactions in my function so that 1 commits
as soon as it is done and 2 doesn't
depend on it at all.

Thanks,
~Jas


[GENERAL] Am I missing something about the output of pg_stop_backup()?

2007-07-10 Thread Ben

So, I'm working on a script that does PITR and basing it off the one here:

http://archives.postgresql.org/pgsql-admin/2006-03/msg00337.php

(BTW, thanks for posting that, Rajesh.)

My frustration comes from the output format of pg_stop_backup(). 
Specifically, it outputs a string like this:


550/E788D70

...which I then need to tokenize in order to find which archived wal files 
I'll need to keep as part of the backup. For instance, that output above 
will result in a WAL file named like so:


0001055E.00788D70.backup

That's easy enough to parse when using fixed positions, but pg_stop_backup 
won't always give me a string with the same number of characters. In fact, 
I suspect the output can be up to 23 characters (including the /) if I've 
committed enough transactions, and that's before the first 1 in the WAL 
filename presumably increments past 1. At which point, I'm not sure what 
I'd see. "2/ABC/12345678"?


I guess my questions boil down to:

1. Why not delimit the tokens it's obviously trying to represent with 
a delimiter, instead of just mashing the second two tokens together?


2. Why use / to delimit the first two tokens? That's unhelpfully the 
character usually used for pattern matching, and so needs to be escaped.


3. Am I missing an obvious way to map this function output to the 
generated WAL filname?


4. Actually why doesn't pg_stop_backup() just output the WAL filename 
that was used?


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Implementing 2 different transactions in a PL/Perl function

2007-07-10 Thread Richard Huxton

Jasbinder Singh Bali wrote:

Hi,

How can I have two different transactions is a plperlu function?
My purpose is as follows:-

Transaction 1 does some series of inserts in tbl_abc
Transaction 2 updates some columns in tbl_abc fetching records from some
other table.


You'll have to connect back to yourself using dblink() or dbilink() - 
see contrib/ for details.


Don't forget to consider what it means to have a connection string in a 
function-body either.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Implementing 2 different transactions in a PL/Perl function

2007-07-10 Thread Michael Glaesemann


On Jul 10, 2007, at 14:41 , Jasbinder Singh Bali wrote:

I basically want 2 independent transactions in my function so that  
1 commits as soon as it is done and 2 doesn't

depend on it at all.


If they're truly independent, I'd write them as two separate  
functions., possibly calling both of them from within a third.


Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Implementing 2 different transactions in a PL/Perl function

2007-07-10 Thread Jasbinder Singh Bali

You mean keep using spi_exec as long as I want everything in the same
transaction, and at the point where I want a separate transaction, use DBI?

On 7/10/07, Richard Huxton <[EMAIL PROTECTED]> wrote:


Jasbinder Singh Bali wrote:
> Hi,
>
> How can I have two different transactions is a plperlu function?
> My purpose is as follows:-
>
> Transaction 1 does some series of inserts in tbl_abc
> Transaction 2 updates some columns in tbl_abc fetching records from some
> other table.

You'll have to connect back to yourself using dblink() or dbilink() -
see contrib/ for details.

Don't forget to consider what it means to have a connection string in a
function-body either.

--
   Richard Huxton
   Archonet Ltd



Re: [GENERAL] Hyper-Trading

2007-07-10 Thread Andrej Ricnik-Bay

On 7/10/07, Евгений Кононов <[EMAIL PROTECTED]> wrote:

Здравствуйте, Andrej.

Privet ;) ... not that I speak any Russian, really.


ARB> What OS are you using, and what's hyper-trading? Hyper threading
ARB> by any chance?  That's the OSes responsibility, not the databases.



I use Fedora Core 5 on two Intel(R) Xeon(TM) MP CPU 3.16GHz
processors with Hyper-Threading technology, which means I have 4
virtual processors.
When I run PostgreSQL, it uses only one processor and is barely
loaded. I want to use PostgreSQL on all my processors, but I don't know
how to do it.

As I said ... if the OS supports those hyperthreaded CPUs
properly (I assume you're seeing four CPUs in /proc/cpuinfo?)
and you fire up concurrent connections to the database,
the OS should start spreading the load across CPUs.  It's
not the database's task (or capability) to make that decision.



Thank's !

Cheers,
Andrej

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Implementing 2 different transactions in a PL/Perl function

2007-07-10 Thread Richard Huxton

Jasbinder Singh Bali wrote:

You mean to say keep using spi_exec till I want everything in the same
transaction and the point where I want a separate transaction, use DBI ?


Yes - if you have two functions A,B then do everything as normal in 
each, except you call function B using dblink() from function A. That 
will give you a separate session which can commit/rollback separately 
from the original.
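
Roughly like this, as a sketch in plpgsql (the same dblink_exec() call can
be issued from plperlu through spi_exec_query(); the connection string,
table and column names below are placeholders):

CREATE OR REPLACE FUNCTION fn_a() RETURNS void AS $$
BEGIN
    -- part of the calling transaction
    INSERT INTO tbl_abc (col1) VALUES (1);
    -- runs in its own session and commits immediately,
    -- whatever later happens to the calling transaction
    PERFORM dblink_exec('dbname=mydb user=postgres',
                        'UPDATE tbl_abc SET col2 = 0');
END;
$$ LANGUAGE plpgsql;

Note that the second session cannot see rows the outer transaction has not
yet committed; that is part of what being independent means here.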


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] exit code -1073741819

2007-07-10 Thread Shuo Liu

Hi, All,

I'm working on a GIS project using PostgreSQL and PostGIS. In the project I 
need to find the locations of about 12K addresses (the process is referred to as 
geocoding). I wrote a script to perform this task by calling a procedure 
"tiger_geocoding" that is provided by PostGIS. My script seems to crash the 
server after a while with the following message:


server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
connection to server was lost


The log shows the following message:


CurTransactionContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExecutorState: 122880 total in 4 blocks; 1912 free (9 chunks); 120968 used
ExprContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
ExprContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
ExprContext: 8192 total in 1 blocks; 8000 free (1 chunks); 192 used
ExprContext: 8192 total in 1 blocks; 8000 free (1 chunks); 192 used
ExprContext: 8192 total in 1 blocks; 8096 free (1 chunks); 96 used
SPI Exec: 0 total in 0 blocks; 0 free (0 chunks); 0 used
SPI Proc: 8192 total in 1 blocks; 2616 free (0 chunks); 5576 used
ExecutorState: 57344 total in 3 blocks; 35776 free (7 chunks); 21568 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
2007-07-10 12:25:57 LOG:  server process (PID 2004) exited with exit code 
-1073741819
2007-07-10 12:25:57 LOG:  terminating any other active server processes
2007-07-10 12:25:59 WARNING:  terminating connection because of crash of 
another server process


My script is listed below:

CREATE OR REPLACE FUNCTION geocode_fall2006 () RETURNS VOID AS $$
DECLARE
    address VARCHAR(255);
    geom GEOMETRY;
    tmp_geom GEOMETRY;
    counter INTEGER := 0;
    app_id VARCHAR(50);
    st VARCHAR(50);
    f_processed BOOLEAN;
    f_geocoded BOOLEAN;
BEGIN
    FOR app_id, address, st, f_processed, f_geocoded IN
        SELECT APPLICATION_ID, add, state, geoprocessed, geocoded FROM fall2006
    LOOP
        RAISE NOTICE 'add=%, app_id=%, state=%', address, app_id, st;
        IF upper(st) = 'OH' AND f_processed = 'f' THEN
            geom := geocode_test(address);
            IF geom IS NOT NULL THEN
                counter := counter + 1;
                -- TIGER SRID is 32767.
                tmp_geom := PointFromText('POINT(' || X(geom) || ' ' || Y(geom) || ')', 4269);
                tmp_geom := transform(tmp_geom, 4326);
                -- id := SRID(tmp_geom);
                -- RAISE NOTICE 'srid=%', id;
                UPDATE fall2006 SET lat_lon = tmp_geom WHERE APPLICATION_ID = app_id;
                UPDATE fall2006 SET geocoded = 't' WHERE APPLICATION_ID = app_id;
                RAISE NOTICE 'UPDATE fall2006 SET lat_lon = % WHERE APPLICATION_ID = %;',
                             AsText(tmp_geom), app_id;
            END IF;
            UPDATE fall2006 SET geoprocessed = 't' WHERE APPLICATION_ID = app_id;
        END IF;
    END LOOP;
    RAISE NOTICE 'counter=%', counter;
END;
$$ LANGUAGE plpgsql;


I googled and found that a similar bug was reported for version 8.1 and was claimed 
to be fixed (http://archives.postgresql.org/pgsql-bugs/2006-12/msg00214.php). 
However, the PostgreSQL on my machine is 8.2.4, which is supposed to be free of 
the bug. Any suggestions will be greatly appreciated.


___
Join Excite! - http://www.excite.com
The most personalized portal on the Web!



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Am I missing something about the output of pg_stop_backup()?

2007-07-10 Thread Richard Huxton

Ben wrote:

So, I'm working on a script that does PITR and basing it off the one here:

http://archives.postgresql.org/pgsql-admin/2006-03/msg00337.php

(BTW, thanks for posting that, Rajesh.)

My frustration comes from the output format of pg_stop_backup(). 
Specifically, it outputs a string like this:


550/E788D70

...which I then need to tokenize in order to find which archived wal 
files I'll need to keep as part of the backup. For instance, that output 
above will result in a WAL file named like so:


0001055E.00788D70.backup


4. Actually why doesn't pg_stop_backup() just output the WAL 
filename that was used?


Have you looked in the "backup history file":
http://www.postgresql.org/docs/8.2/static/continuous-archiving.html#BACKUP-BASE-BACKUP

"The backup history file is just a small text file. It contains the 
label string you gave to pg_start_backup, as well as the starting and 
ending times and WAL segments of the backup. If you used the label to 
identify where the associated dump file is kept, then the archived 
history file is enough to tell you which dump file to restore, should 
you need to do so."


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] vacuumdb: PANIC: corrupted item pointer

2007-07-10 Thread Richard Huxton

AlJeux wrote:

Richard Huxton wrote:

1. Have you had crashes or other hardware problems recently?


No crash but we changed our server (<= seems the cause).

First try was using a file system copy to reduce downtime as it was two 
same 7.4.x version but the result was not working (maybe related to 
architecture change 32bits => 64 bits) so I finally dropped the db and 
performed an dump/restore. I think, the db was a mix of 32/64 bits files.


Ah! That'll do it. You will get problems with a filesystem copy if
1. The database is running.
2. The architectures aren't identical
3. The installation options are different (a simple date option 
difference can cause problems)


I'm surprised you didn't get more problems frankly.

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Am I missing something about the output of pg_stop_backup()?

2007-07-10 Thread Ben


On Tue, 10 Jul 2007, Richard Huxton wrote:


Have you looked in the "backup history file":
http://www.postgresql.org/docs/8.2/static/continuous-archiving.html#BACKUP-BASE-BACKUP

"The backup history file is just a small text file. It contains the label 
string you gave to pg_start_backup, as well as the starting and ending times 
and WAL segments of the backup. If you used the label to identify where the 
associated dump file is kept, then the archived history file is enough to 
tell you which dump file to restore, should you need to do so."


Yeah, I saw that, but looking for the most recent backup history file and 
blindly assuming that's what I want, regardless of what pg_stop_backup() 
told me, seems like it might be problematic in some random corner cases. 
For instance, in the case when the backup history file from the previous 
backup is, for some reason, still around, but the backup history file from 
the current run has yet to be written, then I'd backup more WAL files than 
I need. That's hardly a critical error, but the general method strikes me 
as far from bulletproof and there may well be another corner case I'm not 
thinking of right now that *would* be a critical error.


I was hoping I could parse the output of pg_stop_backup() more cleanly 
than it seems possible to do. But, well, for reasons unclear to me, 
it's not so clean to do it right.


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Hyper-Trading

2007-07-10 Thread Andrew Sullivan
On Tue, Jul 10, 2007 at 08:09:11PM +0200, Adrian von Bidder wrote:

> If your  operating system is able to schedule the threads/processes across 
> CPUs, PostgreSQL will use them. 

But notice that hyperthreading imposes its own overhead.  I've not
seen evidence that enabling hyperthreading actually helps, although I
may have overlooked a couple of cases.

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
I remember when computers were frustrating because they *did* exactly what 
you told them to.  That actually seems sort of quaint now.
--J.D. Baldwin

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Am I missing something about the output of pg_stop_backup()?

2007-07-10 Thread Greg Smith

On Tue, 10 Jul 2007, Ben wrote:

"The backup history file is just a small text file. It contains the label 
string you gave to pg_start_backup, as well as the starting and ending 
times and WAL segments of the backup.


For instance, in the case when the backup history file from the previous 
backup is, for some reason, still around, but the backup history file 
from the current run has yet to be written, then I'd backup more WAL 
files than I need.


You should give each backup a unique label (normally today's date 
suffices) and confirm that the history file you find has that label in it. 
That's how you prevent this class of problem.  If you don't find the label 
you expect in the history, abort the whole thing because your backup 
didn't happen correctly.
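
In other words, something along these lines (a sketch; the label is just an
example date):

-- before copying the data directory
SELECT pg_start_backup('base-2007-07-10');
-- ... file-level copy of the cluster ...
SELECT pg_stop_backup();
-- then check that the backup history file which shows up in the WAL archive
-- contains the label base-2007-07-10 before trusting the backup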


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Hyper-Trading

2007-07-10 Thread Andrej Ricnik-Bay

On 7/11/07, Andrew Sullivan <[EMAIL PROTECTED]> wrote:

On Tue, Jul 10, 2007 at 08:09:11PM +0200, Adrian von Bidder wrote:

> If your  operating system is able to schedule the threads/processes across
> CPUs, PostgreSQL will use them.

But notice that hyperthreading imposes its own overhead.  I've not
seen evidence that enabling hyperthreading actually helps, although I
may have overlooked a couple of cases.

I don't have any metrics of my own to present (nor do I care enough
to try and gather them), but there are a few tests with kind of varied
results, depending on WHAT one wants to use HT for.

Have a look at these:
http://www.ibm.com/developerworks/linux/library/l-htl/
http://www.2cpu.com/articles/41_6.html

To what extent they're relevant to Postgres is left as an
exercise for the reader :}



A

Cheers,
Andrej


--
Please don't top post, and don't use HTML e-Mail :}  Make your quotes concise.

http://www.american.edu/econ/notes/htmlmail.htm

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Am I missing something about the output of pg_stop_backup()?

2007-07-10 Thread Ben

Ah, perfect, that's what I was looking for. Thanks!

On Tue, 10 Jul 2007, Greg Smith wrote:


On Tue, 10 Jul 2007, Ben wrote:

"The backup history file is just a small text file. It contains the label 
string you gave to pg_start_backup, as well as the starting and ending 
times and WAL segments of the backup.


For instance, in the case when the backup history file from the previous 
backup is, for some reason, still around, but the backup history file from 
the current run has yet to be written, then I'd backup more WAL files than 
I need.


You should give each backup a unique label (normally today's date suffices) 
and confirm that the history file you find has that label in it. That's how 
you prevent this class of problem.  If you don't find the label you expect in 
the history, abort the whole thing because your backup didn't happen 
correctly.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Hyper-Trading

2007-07-10 Thread Tom Lane
"Andrej Ricnik-Bay" <[EMAIL PROTECTED]> writes:
> On 7/11/07, Andrew Sullivan <[EMAIL PROTECTED]> wrote:
>> But notice that hyperthreading imposes its own overhead.  I've not
>> seen evidence that enabling hyperthreading actually helps, although I
>> may have overlooked a couple of cases.

> Have a look at these:
> http://www.ibm.com/developerworks/linux/library/l-htl/
> http://www.2cpu.com/articles/41_6.html

Conventional wisdom around here has been that HT doesn't help database
performance, and that IBM link might provide a hint as to why: the
only item for which they show a large loss in performance is disk I/O.
Ooops.

Personally I keep HT turned on on my devel machine, because I do find
that recompiling Postgres is noticeably faster ("make -j4" rocks on a
dual Xeon w/HT).  I doubt that's the benchmark of greatest interest
to the average *user* of Postgres, though.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] vacuumdb: PANIC: corrupted item pointer

2007-07-10 Thread Tom Lane
Richard Huxton <[EMAIL PROTECTED]> writes:
>> First try was using a file system copy to reduce downtime as it was two 
>> same 7.4.x version but the result was not working (maybe related to 
>> architecture change 32bits => 64 bits) so I finally dropped the db and 
>> performed an dump/restore. I think, the db was a mix of 32/64 bits files.

> Ah! That'll do it. You will get problems with a filesystem copy if
> 1. The database is running.
> 2. The architectures aren't identical
> 3. The installation options are different (a simple date option 
> difference can cause problems)

PG 8.1 and later record MAXALIGN in pg_control, and will refuse to
start up if there is a 32/64 bit compatibility problem.  But 7.4 has
no such defense.  It does check for --enable-integer-datetimes
compatibility, though.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] exit code -1073741819

2007-07-10 Thread Tom Lane
"Shuo Liu" <[EMAIL PROTECTED]> writes:
> The log shows the following message:

> CurTransactionContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
> ExecutorState: 122880 total in 4 blocks; 1912 free (9 chunks); 120968 used
> ExprContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
> ExprContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
> ExprContext: 8192 total in 1 blocks; 8000 free (1 chunks); 192 used
> ExprContext: 8192 total in 1 blocks; 8000 free (1 chunks); 192 used
> ExprContext: 8192 total in 1 blocks; 8096 free (1 chunks); 96 used
> SPI Exec: 0 total in 0 blocks; 0 free (0 chunks); 0 used
> SPI Proc: 8192 total in 1 blocks; 2616 free (0 chunks); 5576 used
> ExecutorState: 57344 total in 3 blocks; 35776 free (7 chunks); 21568 used
> ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used

The above is a fragment of a memory stats dump, which normally would
only be emitted if you had an out-of-memory situation.  However the
part of it that you've shown doesn't indicate any particularly heavy
memory usage.  Were there any large numbers in the preceding similarly-
formatted lines?  Was there anything possibly relevant in the log
entries before that?

> 2007-07-10 12:25:57 LOG:  server process (PID 2004) exited with exit code 
> -1073741819

I suppose you're on Windows?  This is what is currently printed for an
Access Violation trap on Windows.  The fact that it came out partway
through a stats dump is pretty suspicious; it suggests that the trap
happened as a result of trying to scan the memory context bookkeeping
data, which implies a memory clobber of some sort.

So you're looking at a bug, but there's much too little data here to
guess what the bug is.  Can you get a debugger stack trace?  Or put
together a self-contained test case for someone else to poke at?

Actually the *first* thing to do is make sure you are up to date on
both Postgres and PostGIS versions.  No sense in spending a lot of time
chasing down a bug if it's already been fixed.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Hyper-Trading

2007-07-10 Thread Andrej Ricnik-Bay

On 7/11/07, Tom Lane <[EMAIL PROTECTED]> wrote:

Conventional wisdom around here has been that HT doesn't help database
performance, and that IBM link might provide a hint as to why: the
only item for which they show a large loss in performance is disk I/O.
Ooops.

Thanks Tom, great summary.  How does this compare with
SMP vs HT?


Personally I keep HT turned on on my devel machine, because I do find
that recompiling Postgres is noticeably faster ("make -j4" rocks on a
dual Xeon w/HT).  I doubt that's the benchmark of greatest interest
to the average *user* of Postgres, though.

Understood :)




regards, tom lane

Cheers,
Andrej

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] exit code -1073741819

2007-07-10 Thread Shuo Liu

Hi, Tom,

Thanks for the reply. I'll try to provide as much information as I can.


> ExecutorState: 122880 total in 4 blocks; 1912 free (9 chunks); 120968 used
> ExprContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
> ExprContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
> ExprContext: 8192 total in 1 blocks; 8000 free (1 chunks); 192 used
> ExprContext: 8192 total in 1 blocks; 8000 free (1 chunks); 192 used
> ExprContext: 8192 total in 1 blocks; 8096 free (1 chunks); 96 used
> SPI Exec: 0 total in 0 blocks; 0 free (0 chunks); 0 used
> SPI Proc: 8192 total in 1 blocks; 2616 free (0 chunks); 5576 used
> ExecutorState: 57344 total in 3 blocks; 35776 free (7 chunks); 21568 used
> ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
>The above is a fragment of a memory stats dump, which normally would only be 
>emitted if you had an out-of-memory situation.  However the part of it that 
>you've shown doesn't indicate any particularly heavy memory usage.  Were there 
>any large numbers in the preceding similarly-formatted lines?


The lines that start with "ExecutorState" have similar big numbers and numbers 
in other lines are smaller.


>Was there anything possibly relevant in the log entries before that?


Log entries right after the normal script output are listed below (these are just 
the ones at the top; the rest are similar):


TopMemoryContext: 11550912 total in 1377 blocks; 123560 free (833 chunks); 
11427352 used
SPI Plan: 3072 total in 2 blocks; 888 free (0 chunks); 2184 used
SPI Plan: 7168 total in 3 blocks; 3368 free (0 chunks); 3800 used
SPI Plan: 1024 total in 1 blocks; 256 free (0 chunks); 768 used


> 2007-07-10 12:25:57 LOG:  server process (PID 2004) exited with exit code
> -1073741819
>
>I suppose you're on Windows?

Yes, it is Windows XP SP2. Sorry that I didn't mention it before.


>This is what is currently printed for an Access Violation trap on Windows.  
>The fact that it came out partway through a stats dump is pretty suspicious; 
>it suggests that the trap happened as a result of trying to scan the memory 
>context bookkeeping data, which implies a memory clobber of some sort.  So 
>you're looking at a bug, but there's much too little data here to guess what 
>the bug is.  Can you get a debugger stack trace?  Or put together a 
>self-contained test case for someone else to poke at?

The spatial database that the script is using is quite large (about 4 GB). So I 
think making a self-contained test case would be the last resort. I'll reset 
some parameters and try to generate debug info. I'll post it here as soon as it 
is available.

>Actually the *first* thing to do is make sure you are up to date on both 
>Postgres and PostGIS versions.  No sense in spending a lot of time chasing down 
>a bug if it's already been fixed.

The version of PostgreSQL is 8.2.4. PostGIS is 1.2.1 that is for PostgreSQL 
8.2.4. So I think the installation is up to date.

___
Join Excite! - http://www.excite.com
The most personalized portal on the Web!



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] exit code -1073741819

2007-07-10 Thread Shuo Liu

Hi, Tom,

One more question. I'm new to PostgreSQL and not an expert in debugging. After 
checking the manual, I think I need to turn on the following parameters in 
order to generate debug info. Do you think doing so would give us what we need 
to pinpoint the problem?

debug_assertions
trace_notify
trace_sort
debug_print_parse
debug_print_rewritten
debug_print_plan 
debug_pretty_print

Thanks.




___
Join Excite! - http://www.excite.com
The most personalized portal on the Web!



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] exit code -1073741819

2007-07-10 Thread Tom Lane
"Shuo Liu" <[EMAIL PROTECTED]> writes:
> TopMemoryContext: 11550912 total in 1377 blocks; 123560 free (833 chunks); 
> 11427352 used

Whoa ... that is a whole lot more data than I'm used to seeing in
TopMemoryContext.  How many stats dump lines are there exactly (from
here to the crash report)?  If there are many (like many thousands) that
would explain why TopMemoryContext is so large, but would direct our
attention to what's generating so many.  If there are just a hundred or
so, which'd be typical, then we are looking at some kind of leak of
TopMemoryContext data --- though that still doesn't explain the crash.

> The spatial database that the script is using is quite large (about 4
> GB). So I think making a self-contained test case would be the last
> resort.

FWIW, I doubt that the content of the database is the key point here;
you could probably generate a test case with relatively little data,
or maybe a lot of easily-created dummy data.  However stripping it down
might require more insight into the nature of the bug than we have at
this point.
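
For what it's worth, a dummy load for the table used by the script above can
be generated in one statement; a sketch (the column list is taken from the
function's SELECT, everything else is fabricated):

INSERT INTO fall2006 (application_id, add, state, geoprocessed, geocoded)
SELECT 'APP' || g::text, g::text || ' Main St', 'OH', 'f', 'f'
FROM generate_series(1, 12000) AS g;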

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] exit code -1073741819

2007-07-10 Thread Tom Lane
"Shuo Liu" <[EMAIL PROTECTED]> writes:
> One more question. I'm new to PostgreSQL and not an expert in debugging. 
> After checking the manual, I think I need to turn on the following parameters 
> in order to generate debug info. Do you think doing so would give us what we 
> need to pinpoint the problem?

> debug_assertions
> trace_notify
> trace_sort
> debug_print_parse
> debug_print_rewritten
> debug_print_plan 
> debug_pretty_print

If you can turn on debug_assertions, do so and see if it catches
anything; but it's not enabled in a standard build, and if it is enabled
it's on by default, so I doubt there's any win there unless you're
prepared to rebuild Postgres from source.  The others seem unlikely to
be helpful for this problem.

What would probably be most helpful is a stack trace, but that's not
easy to get on Windows ... which is why the shortest route to a fix
may well be to create a test case that someone can exercise on a more
debug-friendly platform.  If you got a stack trace, and it wasn't
conclusive, we'd have to go in that direction anyway.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] exit code -1073741819

2007-07-10 Thread Shuo Liu

> Whoa ... that is a whole lot more data than I'm used to seeing in
TopMemoryContext. How many stats dump lines are there exactly (from
here to the crash report)? 

OK, I didn't know that was a surprise. There are about 600 stats dump lines 
in between.


>> The spatial database that the script is using is quite large (about 4
>> GB). So I think making a self-contained test case would be the last
>> resort.

>FWIW, I doubt that the content of the database is the key point here;
you could probably generate a test case with relatively little data,
or maybe a lot of easily-created dummy data. However stripping it down
might require more insight into the nature of the bug than we have at
this point.

I did a test in a small area (the street network of one county) before, 
and it worked without crashing the server. In that test there were about 600 
records (addresses) to be processed, while in the current case there are 
about 12K records. Another difference is that the current case uses a larger 
base relation (the one I mentioned in my previous email) that covers a whole 
state. I'm not sure whether it is the number of records to be processed or the 
size of the base relation that causes the crash.

___
Join Excite! - http://www.excite.com
The most personalized portal on the Web!



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] exit code -1073741819

2007-07-10 Thread Shuo Liu

> OK, so maybe it's dependent on the size of the table. Try generating a
test case by loading up just your schema + functions + a lot of dummy
entries generated by script.
> Is the data proprietary? If not, maybe you could arrange to send me a
dump off-list. A short test-case script would be better, but let's not
waste huge amounts of person-time to save a few hours of downloading ...

That's what I was planning to do. I'll generate a table with dummy entries. I 
think we may try to use the smaller base table first. Once I can reproduce the 
problem I'll dump the database into a file and send it to you. Is there a 
server that I can upload the data to? 

Thanks again for your help.

___
Join Excite! - http://www.excite.com
The most personalized portal on the Web!



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] exit code -1073741819

2007-07-10 Thread Tom Lane
"Shuo Liu" <[EMAIL PROTECTED]> writes:
> That's what I was planning to do. I'll generate a table with dummy entries. I 
> think we may try to use the smaller base table first. Once I can reproduce 
> the problem I'll dump the database into a file and send it to you. Is there a 
> server that I can upload the data to? 

If what you come up with is too big for email (my threshold of pain for
that is in the tens of MB, dunno about you), contact me off-list and
we'll set something up.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match