Re: [GENERAL] limits

2007-12-04 Thread Gregory Stark
<[EMAIL PROTECTED]> writes:

> Hi All.
> My question is simple and plain: are there any limits on the number of
> database operations between a BEGIN statement and a COMMIT statement?

The most relevant limitation is that the whole body has to fit in 1GB.

You can also only execute 4 billion statements even if it's in a loop. 
8.3 helps this by making most read-only operations not count.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's Slony Replication support!

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] stored procedures and dynamic queries

2007-12-04 Thread Richard Huxton

Ivan Sergio Borgonovo wrote:

On Mon, 03 Dec 2007 19:06:29 +
Richard Huxton <[EMAIL PROTECTED]> wrote:


Ivan Sergio Borgonovo wrote:

Any general rule about dynamically generated queries in stored
procedures vs. performances?

It's the same decision as any with any prepared plan vs
plan-each-time trade-off.


Should I guess this trade-off by consulting haruspices, or is it possible
to gain a little background, or is it something based on heuristics and
experience?


Unless it's an obvious decision (millions of small identical queries vs. 
occasional large complex ones) then you'll have to test. That's going to 
be true of any decision like this on any system.



If the second, any good reading?
How should I take into account parameters like:
- is it an insert/update or a select query?
- is it performed on a large table?
- is it performed frequently? frequently with the same parameters?
frequently with different parameters?
- is the table growing?
- are there frequent deletes?


Read the manuals, particularly the sections on MVCC, also the planner 
and EXPLAIN. Read up on the WAL and checkpoints.


You might want to skim through the archives on the performance list. You 
might find some of the community-related resources useful too. Some of 
the following might be a little out-of-date, so check:


http://www.westnet.com/~gsmith/content/postgresql/
http://www.powerpostgresql.com/PerfList
http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php

HTH
--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] 8.3beta4 needs a dump/restore?

2007-12-04 Thread Louis-David Mitterrand
Hi,

While upgrading from 8.3-beta3 to beta4, postgres complained that the 
database format was not supported. I had to restore from backup.

Was that intended? I didn't see any beta4 announcement on -general or 
-hackers.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] "Suspending" indexes and constraint updates

2007-12-04 Thread Reg Me Please
Hi all.

I'd need to do large updates into already populated tables as
a maintenance activity.
The updates are generated by software and are consistent with
the constraints defined in the tables.

As of now I've been looking at the index and constraint definitions,
dropping them, doing the inserts and then re-creating both indexes and
constraints.
Is there a way to "suspend" the index updates and the constraint checks
before the inserts in order to later re-enable them and do a reindex?

TIA.

-- 
Reg me, please!

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] CPU

2007-12-04 Thread Harald Armin Massa
Josh,

> However, the two extra cores (even if slower), will greatly help if you
> have any kind of concurrency.

as much as I understand, when running Postgres in the default configuration
there *will* be concurrency, without an "if"?

I am thinking of the background writer, the autovacuum process, the log
writer and finally the connection-serving process. ... I am quite sure of
that "default concurrency" because I had to explain those 5 basic
postgres.exe processes to at least 8 Windows admins...

My non-benchmarked experience is that "multicore and postgres good"
(experience mainly drawn from Windows).

Harald

-- 
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607
fx 01212-5-13695179
-
EuroPython 2008 will take place in Vilnius, Lithuania - Stay tuned!


Fwd: Re: [GENERAL] Archiving problem on Windows

2007-12-04 Thread Christian Rengstl
When I login as user postgres, I can copy the files without any
problem.
Here is the error message (translated as it appears in German in my log
files):
>>copy pg_xlog\myfile C:\Archive\DBArchive\myfile<< failed: error code
1
>>copy pg_xlog\myfile C:\Archive\DBArchive\myfile<< failed: error code
1
>>copy pg_xlog\myfile C:\Archive\DBArchive\myfile<< failed: error code
1
WARNING: transaction log file could not be archived: too many errors


Christian Rengstl M.A.
Klinik und Poliklinik für Innere Medizin II
Kardiologie - Forschung
Universitätsklinikum Regensburg
B3 1.388
Franz-Josef-Strauss-Allee 11
93053 Regensburg
Tel.: +49-941-944-7230




>>> On Mon, Dec 3, 2007 at 11:27 PM, in message
<[EMAIL PROTECTED]>,
Paul Lambert <[EMAIL PROTECTED]> wrote: 
> Christian Rengstl wrote:
>> Hi list,
>> 
>> I have a problem using the following archiving command on Windows:
>> 'copy %p C:\\Archive\\DBArchive\\%f'
>> The error message I get says that the file could not be copied
>> because of error code 1. The strange thing is that in the archive
>> directory there are files that were copied one or two weeks ago, and
>> since then I have not changed the command.
>> 
>> Thanks for any advice!
>> 
>> 
> 
> It would help to paste the actual error message, not your description of
> the error message.
> 
> Have you checked that the postgres user has full control over the 
> DBArchive directory and that there is available space on the disk?
> 
> -- 
> Paul Lambert
> Database Administrator
> AutoLedgers - A Reynolds & Reynolds Company


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] stored procedures and dynamic queries

2007-12-04 Thread Ivan Sergio Borgonovo
On Tue, 04 Dec 2007 08:14:56 +
Richard Huxton <[EMAIL PROTECTED]> wrote:

> Unless it's an obvious decision (millions of small identical
> queries vs. occasional large complex ones) then you'll have to
> test. That's going to be true of any decision like this on any
> system.

:(

I'm trying to grasp a general idea from the view point of a developer
rather than a sysadmin. At this moment I'm not interested in
optimisation, I'm interested in understanding the trade off of
certain decisions in the face of a cleaner interface.

Most of the documents available are from a sysadmin point of view.
That makes me think that unless I write terrible SQL it won't make a
big difference, and the first place I'll have to look if the
application needs to run faster is the pg config.

This part (for posterity) looks like the most interesting for
developers:
http://www.gtsm.com/oscon2003/toc.html
Starting from Functions

Still I can't understand some things, I'll come back.

thanks for the right pointers.

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: Fwd: Re: [GENERAL] Archiving problem on Windows

2007-12-04 Thread A. Kretschmer
On Tue, 04.12.2007 at 11:31:41 +0100, Christian Rengstl wrote:
> When I login as user postgres, I can copy the files without any
> problem.
> Here is the error message (translated as it appears in German in my log
> files):
> >>copy pg_xlog\myfile C:\Archive\DBArchive\myfile<< failed: error code
> 1
> >>copy pg_xlog\myfile C:\Archive\DBArchive\myfile<< failed: error code
> 1
> >>copy pg_xlog\myfile C:\Archive\DBArchive\myfile<< failed: error code
> 1
> WARNING: transaction log file could not be archived: too many errors

2 questions:

1. Really 'myfile', or is 'myfile' only a placeholder for the real
logfile?

2. Please try to specify the full path for the source file; maybe the
copy command can't find the file without the full path.


Hope that helps, and please no top-posting. (in German: TOFU)


Andreas
-- 
Andreas Kretschmer
Contact:  Heynitz: 035242/47150,   D1: 0160/7141639 (more: -> header)
GnuPG-ID:   0x3FFF606C, private 0x7F4584DA   http://wwwkeys.de.pgp.net

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] 8.3beta4 needs a dump/restore?

2007-12-04 Thread Peter Eisentraut
On Tuesday, 4 December 2007, Louis-David Mitterrand wrote:
> While upgrading from 8.3-beta3 to beta4, postgres complained that the
> database format was not supported. I had to restore from backup.
>
> Was that intended?

Yes

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] "Suspending" indexes and constraint updates

2007-12-04 Thread Peter Eisentraut
On Tuesday, 4 December 2007, Reg Me Please wrote:
> Is there a way to "suspend" the index updates and the constraint checks
> before the inserts in order to later re-enable them and do a reindex?

You can disable foreign-key constraints (see ALTER TABLE ... DISABLE TRIGGER; 
not quite obvious, but there is a trigger beneath each foreign key 
constraint).  But there is no general solution for all constraints and 
indexes.
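
For illustration, a minimal sketch of that route on a hypothetical table
"orders" (the name is made up; note that DISABLE TRIGGER ALL also disables
the internal FK triggers and therefore requires superuser):

ALTER TABLE orders DISABLE TRIGGER ALL;  -- FK checks are now skipped
-- ... bulk inserts/updates here ...
ALTER TABLE orders ENABLE TRIGGER ALL;   -- rows loaded in between are
                                         -- NOT re-checked afterwards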

It might also be helpful to investigate the order in which pg_dump puts out 
things, since it faces mostly the same issues.  It puts index and constraint 
creation at the end after the data load.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Archiving problem on Windows

2007-12-04 Thread Christian Rengstl



Christian Rengstl M.A.
Klinik und Poliklinik für Innere Medizin II
Kardiologie - Forschung
Universitätsklinikum Regensburg
B3 1.388
Franz-Josef-Strauss-Allee 11
93053 Regensburg
Tel.: +49-941-944-7230




>>> On Tue, Dec 4, 2007 at 12:03 PM, in message
<[EMAIL PROTECTED]>, "A. Kretschmer"
<[EMAIL PROTECTED]> wrote: 
> On Mon, 03.12.2007 at 12:29:39 +0100, Christian Rengstl wrote:
>> Hi list,
>> 
>> I have a problem using the following archiving command on Windows:
>> 'copy %p C:\\Archive\\DBArchive\\%f'
> 
> According to the docs, the command should be:
> 
> archive_command = 'copy "%p" /mnt/server/archivedir/"%f"'  # Windows
> 
> 
> Perhaps missing "" around the parameters %p and %f, and/or a wrong path
> for the destination (\\ instead of /). I'm not sure, I'm not familiar
> with PG under Windows.
> 
> 
> Andreas

I read that part of the docs, too, and changed it accordingly, but
without any success.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Archiving problem on Windows

2007-12-04 Thread A. Kretschmer
On Mon, 03.12.2007 at 12:29:39 +0100, Christian Rengstl wrote:
> Hi list,
> 
> I have a problem using the following archiving command on Windows:
> 'copy %p C:\\Archive\\DBArchive\\%f'

According to the docs, the command should be:

archive_command = 'copy "%p" /mnt/server/archivedir/"%f"'  # Windows


Perhaps missing "" around the parameters %p and %f, and/or a wrong path for
the destination (\\ instead of /). I'm not sure, I'm not familiar with PG
under Windows.


Andreas
-- 
Andreas Kretschmer
Contact:  Heynitz: 035242/47150,   D1: 0160/7141639 (more: -> header)
GnuPG-ID:   0x3FFF606C, private 0x7F4584DA   http://wwwkeys.de.pgp.net

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] "Suspending" indexes and constraint updates

2007-12-04 Thread Reg Me Please
On Tuesday 04 December 2007 11:50:21, Peter Eisentraut wrote:
> Am Dienstag, 4. Dezember 2007 schrieb Reg Me Please:
> > Is there a way to "suspend" the index updates and the constraint checks
> > before the inserts in order to later re-enable them and do a reindex?
>
> You can disable foreign-key constraints (see ALTER TABLE ... DISABLE
> TRIGGER; not quite obvious, but there is a trigger beneath each foreign key
> constraint).  But there is no general solution for all constraints and
> indexes.
>
> It might also be helpful to investigate the order in which pg_dump puts out
> things, since it faces mostly the same issues.  It puts index and
> constraint creation at the end after the data load.

As far as I know, pg_dump usually starts with an empty DB.
Then it creates plain tables to be filled with COPY.
And at last it creates indexes and constraints.
Which is not what I need.

In any case thanks for the hint.

-- 
Reg me, please!

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] "Suspending" indexes and constraint updates

2007-12-04 Thread Martijn van Oosterhout
On Tue, Dec 04, 2007 at 10:09:06AM +0100, Reg Me Please wrote:
> Is there a way to "suspend" the index updates and the constraint checks
> before the inserts in order to later re-enable them and do a reindex?

You can defer foreign key checks and possibly other constraints, but unique
index checks can't be deferred. For really big loads it's not uncommon to
remove the constraints and indexes prior to the load.
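
A minimal sketch of the deferred variant, assuming hypothetical tables
"master" and "detail" whose foreign key was declared DEFERRABLE:

BEGIN;
SET CONSTRAINTS ALL DEFERRED;                      -- FK checks move to COMMIT
INSERT INTO detail (id, master_id) VALUES (1, 10); -- parent row not there yet
INSERT INTO master (id) VALUES (10);               -- satisfied by commit time
COMMIT;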

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> Those who make peaceful revolution impossible will make violent revolution 
> inevitable.
>  -- John F Kennedy


signature.asc
Description: Digital signature


Re: Fwd: Re: [GENERAL] Archiving problem on Windows

2007-12-04 Thread Christian Rengstl



Christian Rengstl M.A.
Klinik und Poliklinik für Innere Medizin II
Kardiologie - Forschung
Universitätsklinikum Regensburg
B3 1.388
Franz-Josef-Strauss-Allee 11
93053 Regensburg
Tel.: +49-941-944-7230




>>> On Tue, Dec 4, 2007 at 11:44 AM, in message
<[EMAIL PROTECTED]>, "A. Kretschmer"
<[EMAIL PROTECTED]> wrote: 
> On Tue, 04.12.2007 at 11:31:41 +0100, Christian Rengstl wrote:
>> When I login as user postgres, I can copy the files without any
>> problem.
>> Here is the error message (translated as it appears in German in my log
>> files):
>> >>copy pg_xlog\myfile C:\Archive\DBArchive\myfile<< failed: error code 1
>> >>copy pg_xlog\myfile C:\Archive\DBArchive\myfile<< failed: error code 1
>> >>copy pg_xlog\myfile C:\Archive\DBArchive\myfile<< failed: error code 1
>> WARNING: transaction log file could not be archived: too many errors
> 
> 2 questions:
>
> 1. Really 'myfile', or is 'myfile' only a placeholder for the real
> logfile?
>
> 2. Please try to specify the full path for the source file; maybe the
> copy command can't find the file without the full path.
>
>
> Hope that helps, and please no top-posting. (in German: TOFU)
> 
> 
> Andreas

1) myfile is just a placeholder, as I am currently not working on the
server directly and I can't copy the filename using VNC...
2) the path given is the path as postgresql obviously interprets my
archive_command, so I don't know how to specify it.


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Transaction problem

2007-12-04 Thread x asasaxax
So just use a constraint then? There's no problem if two sessions decrease
the number and it goes to less than or equal to zero?
I'm programming with PHP.


Thanks


2007/12/3, Cesar Alvarez <[EMAIL PROTECTED]>:
>
> What are you programing with?.
> are you using npgsql?
>
> Regards Cesar Alvarez.
> > Hi everyone,
> >
> >    I would like to know how I can do a simple transaction for this
> > situation:
> >
> > I have n products in a certain row of a table. When the user buys a
> > product, the quantity of this product will be decreased. The user can
> > only buy a product that has a quantity n > 0. This means that when the
> > user sends the product confirmation to the system, the db will decrease
> > the product quantity with a transaction if the number of products in
> > stock is greater than zero.
> >
> >
> > Does anyone know how I can do that with Postgres?
> >
> > Thanks a lot.
>
>
>


Re: [GENERAL] Archiving problem on Windows

2007-12-04 Thread Christian Rengstl



Christian Rengstl M.A.
Klinik und Poliklinik für Innere Medizin II
Kardiologie - Forschung
Universitätsklinikum Regensburg
B3 1.388
Franz-Josef-Strauss-Allee 11
93053 Regensburg
Tel.: +49-941-944-7230




>>> On Tue, Dec 4, 2007 at 12:21 PM, in message
<[EMAIL PROTECTED]>, "A. Kretschmer"
<[EMAIL PROTECTED]> wrote: 
> On Tue, 04.12.2007 at 12:05:41 +0100, Christian Rengstl wrote:
>> >> I have a problem using the following archiving command on Windows:
>> >> 'copy %p C:\\Archive\\DBArchive\\%f'
>> >
>> > According to the docs, the command should be:
>> >
>> > archive_command = 'copy "%p" /mnt/server/archivedir/"%f"'  # Windows
>> >
>> >
>> > Perhaps missing "" around the parameters %p and %f, and/or a wrong path
>> > for the destination (\\ instead of /). I'm not sure, I'm not familiar
>> > with PG under Windows.
>> 
>> I read that part of the docs, too, and changed it accordingly, but
>> without any success.
> 
> The proper config file? (compare with the output of the command
> "show config_file;" within psql), reload/restart the server?
> 
> 
> Andreas

show config_file points to the right one. I restarted the server after
every change. After including the quotation marks as in the docs, the
quotation marks also appeared in the command, so at least it shows that
the config file was read.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] libpq messages language

2007-12-04 Thread Albe Laurenz
Efraín López wrote:
> I am using Windows, and pg 8.2.5
>  
> When making a connection with libpq, if it fails I would like
> to get the error messages in Spanish (PQerrorMessage)
>  
> Is this possible? How can this be done?

Set the program's locale prior to calling libpq functions.

I did not try it on Windows, but Microsoft seems to work like
UNIX in that respect:

http://msdn2.microsoft.com/en-us/library/x99tb11d(VS.80).aspx

Try something like

setlocale(LC_MESSAGES, "Spanish");

or, if your language environment is Spanish, simply

setlocale(LC_MESSAGES, "");

Yours,
Laurenz Albe

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Archiving problem on Windows

2007-12-04 Thread A. Kretschmer
On Tue, 04.12.2007 at 12:05:41 +0100, Christian Rengstl wrote:
> >> I have a problem using the following archiving command on Windows:
> >> 'copy %p C:\\Archive\\DBArchive\\%f'
> >
> > According to the docs, the command should be:
> >
> > archive_command = 'copy "%p" /mnt/server/archivedir/"%f"'  # Windows
> >
> >
> > Perhaps missing "" around the parameters %p and %f, and/or a wrong path
> > for the destination (\\ instead of /). I'm not sure, I'm not familiar
> > with PG under Windows.
> 
> I read that part of the docs, too, and changed it accordingly, but
> without any success.

The proper config file? (compare with the output of the command
"show config_file;" within psql), reload/restart the server?


Andreas
-- 
Andreas Kretschmer
Contact:  Heynitz: 035242/47150,   D1: 0160/7141639 (more: -> header)
GnuPG-ID:   0x3FFF606C, private 0x7F4584DA   http://wwwkeys.de.pgp.net

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] 1 cluster on several servers

2007-12-04 Thread Michelle Konzack
On 2007-11-29 12:50:58, Willy-Bas Loos wrote:
> Hi,
> 
> Is it possible to run one PostgreSQL cluster on more than one (hardware)
> server?

Yes, of course...  I run a "Monster" at a customer,
with 42 PostgreSQL servers of 1.8 TByte each.

Thanks, Greetings and nice Day
Michelle Konzack
Tamay Dogan Network
Open Hardware Developer
Debian GNU/Linux Consultant


-- 
Linux-User #280138 with the Linux Counter, http://counter.li.org/
# Debian GNU/Linux Consultant #
Michelle Konzack    Apt. 917                  ICQ #328449886
50, rue de Soultz   MSN LinuxMichi
0033/6/61925193     67100 Strasbourg/France   IRC #Debian (irc.icq.com)


signature.pgp
Description: Digital signature


Re: [GENERAL] Transaction problem

2007-12-04 Thread Bill Moran
In response to "x asasaxax" <[EMAIL PROTECTED]>:

> So just use a constraint then? There's no problem if two sessions decrease
> the number and it goes to less than or equal to zero?
> I'm programming with PHP.

BEGIN;
SELECT quantity FROM products WHERE productid=[productid] FOR UPDATE;
[Check in PHP to ensure enough product exists for this purchase]
UPDATE products SET quantity=[new quantity after purchase]
WHERE productid=[productid];
[... any other table updates you need to do for this transaction ...]
COMMIT WORK;

SELECT ... FOR UPDATE will prevent other transactions from locking this
row until this transaction completes.  It guarantees that only 1
transaction can modify a particular row at a time.  See the docs for
more details:
http://www.postgresql.org/docs/8.1/static/sql-select.html#SQL-FOR-UPDATE-SHARE
http://www.postgresql.org/docs/8.1/static/explicit-locking.html
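
A concrete SQL-only rendering of the pattern above (the product id 42 and
the purchase quantity of 1 are placeholders):

BEGIN;
SELECT quantity FROM products WHERE productid = 42 FOR UPDATE; -- row locked
-- application code checks the returned quantity here
UPDATE products SET quantity = quantity - 1 WHERE productid = 42;
COMMIT;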

> 2007/12/3, Cesar Alvarez <[EMAIL PROTECTED]>:
> >
> > What are you programing with?.
> > are you using npgsql?
> >
> > Regards Cesar Alvarez.
> > > Hi everyone,
> > >
> > >    I would like to know how I can do a simple transaction for this
> > > situation:
> > >
> > > I have n products in a certain row of a table. When the user buys a
> > > product, the quantity of this product will be decreased. The user can
> > > only buy a product that has a quantity n > 0. This means that when the
> > > user sends the product confirmation to the system, the db will decrease
> > > the product quantity with a transaction if the number of products in
> > > stock is greater than zero.
> > >
> > >
> > > Does anyone know how I can do that with Postgres?
> > >
> > > Thanks a lot.
> >
> >
> >
> 


-- 
Bill Moran
http://www.potentialtech.com

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] stored procedures and dynamic queries

2007-12-04 Thread Richard Huxton

Ivan Sergio Borgonovo wrote:

On Tue, 04 Dec 2007 08:14:56 +
Richard Huxton <[EMAIL PROTECTED]> wrote:


Unless it's an obvious decision (millions of small identical
queries vs. occasional large complex ones) then you'll have to
test. That's going to be true of any decision like this on any
system.


:(

I'm trying to grasp a general idea from the view point of a developer
rather than a sysadmin. At this moment I'm not interested in
optimisation, I'm interested in understanding the trade off of
certain decisions in the face of a cleaner interface.


Always go for the cleaner design. If it turns out that isn't fast 
enough, *then* start worrying about having a bad but faster design.



Most of the documents available are from a sysadmin point of view.
That makes me think that unless I write terrible SQL it won't make a
big difference, and the first place I'll have to look if the
application needs to run faster is the pg config.


The whole point of a RDBMS is so that you don't have to worry about 
this. If you have to start tweaking the fine details of these things, 
then that's a point where the RDBMS has reached its limits. In a perfect 
world you wouldn't need to configure PG either, but it's not that clever 
I'm afraid.


Keep your database design clean, likewise with your queries, consider 
whether you can cache certain results and get everything working first.


Then, look for where bottle-necks are, do you have any unexpectedly 
long-running queries? (see www.pgfoundry.org for some tools to help with 
log analysis)
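
As a starting point, a sketch that lists the longest-running statements via
the standard statistics view (column names as in the 8.x-era view; command
string tracking, e.g. stats_command_string, must be enabled):

SELECT procpid, now() - query_start AS runtime, current_query
FROM pg_stat_activity
WHERE current_query <> '<IDLE>'
ORDER BY runtime DESC;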



This part (for posterity) looks like the most interesting for
developers:
http://www.gtsm.com/oscon2003/toc.html
Starting from Functions


Note that this is quite old now, so some performance-related assumptions 
will be wrong for current versions of PG.



Still I can't understand some things, I'll come back.



--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] 1 cluster on several servers

2007-12-04 Thread Raymond O'Donnell

On 02/12/2007 13:02, Michelle Konzack wrote:


with 42 PostgreSQL servers of each 1,8 TByte.


Wow! Who ever said size wasn't everything :-)

Would you be willing to tell us a little about your hardware and 
software set-up?


Also, are the servers running separate databases? - or is it one 
whopping database load-balanced across the servers?


Ray.


---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] difficulty extracting variable-sized field on triggered row

2007-12-04 Thread Glen W. Mabey
Hello,

I have been trying to write a trigger in C operating on each row 
that would (as a first step) copy a field of type TEXT into a text*
variable.

I think that I've got the SPI_connect, SPI_fnumber, and SPI_getbinval
calls correct, but this gives me a Datum.

How do I convert a Datum to a text*?

The following is the best I could do, and it segfaults ...

Thank you,
Glen Mabey


if( CALLED_AS_TRIGGER( fcinfo ) ) {
elog(INFO, "unlink_file was called as a trigger." );
trigdata = (TriggerData *) fcinfo->context;

if( ( ret = SPI_connect() ) != SPI_OK_CONNECT ) {
elog(INFO, "SPI_connect returned %d", ret);
}

file_name_colnumber = SPI_fnumber( trigdata->tg_relation->rd_att, 
"file_name" );

if( file_name_colnumber == SPI_ERROR_NOATTRIBUTE ) {
elog(INFO, "SPI_fnumber returned SPI_ERROR_NOATTRIBUTE" );
}

SPI_getbinval( trigdata->tg_trigtuple, trigdata->tg_relation->rd_att, 
file_name_colnumber, isnull );
relative_filename_t = PG_DETOAST_DATUM( relative_filename_datum );

if( SPI_result == SPI_ERROR_NOATTRIBUTE ) {
elog(INFO, "SPI_fnumber returned SPI_ERROR_NOATTRIBUTE" );
}

if( ( ret = SPI_finish() ) != SPI_OK_FINISH ) {
elog(INFO, "SPI_finish returned %d", ret);
}
}


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] WAL shipping question

2007-12-04 Thread SHARMILA JOTHIRAJAH
Hi,
I'm trying to play a bit with log shipping between 2 servers, primary and
standby. These servers are running version 8.3beta3. I had set up continuous
archiving at the primary server. The manual's example of archive_command is
archive_command = 'cp -i %p /mnt/server/archivedir/%f </dev/null'
This basically archives the data in the primary server itself...right!!!
But how can I set up continuous archiving from primary to a directory (WAL
archive directory) on the stand-by server?

Re: [GENERAL] stored procedures and dynamic queries

2007-12-04 Thread Ted Byers
--- Richard Huxton <[EMAIL PROTECTED]> wrote:
> Ivan Sergio Borgonovo wrote:
> > On Tue, 04 Dec 2007 08:14:56 +
> > Richard Huxton <[EMAIL PROTECTED]> wrote:
> > 
> >> Unless it's an obvious decision (millions of
> small identical
> >> queries vs. occasional large complex ones) then
> you'll have to
> >> test. That's going to be true of any decision
> like this on any
> >> system.
> > 
> > :(
> > 
> > I'm trying to grasp a general idea from the view
> point of a developer
> > rather than a sysadmin. At this moment I'm not
> interested in
> > optimisation, I'm interested in understanding the
> trade off of
> > certain decisions in the face of a cleaner
> interface.
> 
> Always go for the cleaner design. If it turns out
> that isn't fast 
> enough, *then* start worrying about having a bad but
> faster design.
> 
I don't know about "clean", but the mantra here is
"First you make it provably correct, then you make it
fast."

I am a fan of making things as simple as practicable,
BUT NO SIMPLER. I don't know how that meshes with
"clean", but the general idea is right.

If you look at my code, my C++ and my Java code is
infinitely better than my SQL, so with either of
those, I end up making fewer variants, and my starting
point is generally much closer to my end point, and
that is just a function of experience.  With my SQL
code, I generally find myself producing a variety of
scripts to support a given task.  And these include
exploration of just about everything SQL has to offer.
 I TRY everything, from joins to correlated subqueries
to stored procedures to views, and beyond.  And I am
still trying to develop an intuition as to which
options will serve best in a given situation,
analogous to design patterns I routinely use in my C++
and Java code.

Your reaction to Richard's advice to test seems to
imply you want a more direct approach to "THE" answer.
 That is chasing something that doesn't exist!  I tell
you, as a developer (and NOT a sysadmin), there is no
substitute for experience and THAT can ONLY be
obtained through trial and error.  I haven't seen any
books about design patterns in SQL (while there are
plenty in C++ and Java, and other languages), so there
is no short cut.

In short, the general idea most developers I know work
with is "code and test, and then code again and test
again, until you've tried the variety of options that
exist."

There is no substitute for testing yourself.  I have
seen repeated advice to replace correlated subqueries
by left joins, claiming that ubiquitous experience is
that the joins are faster, and yet, in my tests, in
most cases there was little difference in speed while
in others the correlated subqueries were faster.  So
it appears the advice was based on experience with
dated software and the RDBMS in question had
subsequently dramatically improved how it handled
correlated subqueries.  And my use of EXPLAIN
confirmed what I was finding through testing.

> > Most of the documents available are from a
> sysadmin point of view.
> > That makes me think that unless I write terrible
> SQL it won't make a
> > big difference and the first place I'll have to
> look at if the
> > application need to run faster is pg config.
> 
> The whole point of a RDBMS is so that you don't have
> to worry about 
> this. If you have to start tweaking the fine details
> of these things, 
> then that's a point where the RDBMS has reached its
> limits. In a perfect 
> world you wouldn't need to configure PG either, but
> it's not that clever 
> I'm afraid.
> 
I am not sure I buy this, if I properly understand it.
 Trust me, I have written some really bad but simple
queries that took hours to complete a task that was
completed in less than a minute with smarter code. 
And yet the "bad" code I'd written was similar in
nature to examples used in some texts to explain ideas
in SQL.  The point is, until you get extensive
experience in SQL programming and optimization, you
won't know what is bad code until you test it.

Personally, I rely on the sysadmin to administer the
RDBMS properly, to ensure it is configured
appropriately for our application, AND I ask his or
her advice and input on how I design and implement my
SQL code, as well as for input on distributed
application architecture.  You can't do it all.  On my
development machine, I just use whatever the default
configuration is, so I have it up and running in a
flash and can focus on my development.  I'll change
that configuration ONLY if the sysadmin tells me there
is a problem with the default.  My advice, therefore
is forget about configuration issues and focus on
getting your SQL right, and then fast, and let your
sysadmin advise, and possibly help, with changes to
your configuration should he or she feel it needs to
be modified to better represent how your application
will behave once in production.

> Keep your database design clean, likewise with your
> queries, consider 
> whether you can cache certain results and get
> everything working first.

Re: [GENERAL] difficulty extracting variable-sized field on triggered row

2007-12-04 Thread Pavel Stehule
On 04/12/2007, Glen W. Mabey <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I have been trying to write a trigger in C operating on each row
> that would (as a first step) copy a field of type TEXT into a text*
> variable.
>
> I think that I've got the SPI_connect, SPI_fnumber, and SPI_getbinval
> calls correct, but this gives me a Datum.
>
> How do I convert a Datum to a text*?
>

use macro DatumGetPointer(datum)

Pavel

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] difficulty extracting variable-sized field on triggered row

2007-12-04 Thread Glen W. Mabey
On Tue, Dec 04, 2007 at 09:10:21AM -0600, Pavel Stehule wrote:
> use macro DatumGetPointer(datum)

When I do that, I get the following compiler warning:

warning: assignment from incompatible pointer type

which is what originally motivated me to look for another means, which
led me to use PG_DETOAST_DATUM.

Using DatumGetPointer also seg faults ...

Is there somewhere that I am not adequately checking for an error?

Thanks,
Glen

TriggerData *trigdata;
Datum relative_filename_datum;
text *relative_filename_t;
int file_name_colnumber, ret;
bool *isnull;

if( CALLED_AS_TRIGGER( fcinfo ) ) {
elog(INFO, "unlink_file was called as a trigger." );
trigdata = (TriggerData *) fcinfo->context;

if( ( ret = SPI_connect() ) != SPI_OK_CONNECT ) {
elog(INFO, "SPI_connect returned %d", ret);
}

file_name_colnumber = SPI_fnumber( trigdata->tg_relation->rd_att, 
"file_name" );

if( file_name_colnumber == SPI_ERROR_NOATTRIBUTE ) {
elog(INFO, "SPI_fnumber returned SPI_ERROR_NOATTRIBUTE" );
}

SPI_getbinval( trigdata->tg_trigtuple, trigdata->tg_relation->rd_att, 
file_name_colnumber, isnull );

if( SPI_result == SPI_ERROR_NOATTRIBUTE ) {
elog(INFO, "SPI_fnumber returned SPI_ERROR_NOATTRIBUTE" );
}

relative_filename_t = PG_DETOAST_DATUM( relative_filename_datum );

if( ( ret = SPI_finish() ) != SPI_OK_FINISH ) {
elog(INFO, "SPI_finish returned %d", ret);
}

}

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] difficulty extracting variable-sized field on triggered row

2007-12-04 Thread Martijn van Oosterhout
On Tue, Dec 04, 2007 at 09:26:46AM -0600, Glen W. Mabey wrote:
> On Tue, Dec 04, 2007 at 09:10:21AM -0600, Pavel Stehule wrote:
> > use macro DatumGetPointer(datum)
> 
> When I do that, I get the following compiler warning:
> 
> warning: assignment from incompatible pointer type

Well yes, you're getting a (void*) which you need to cast to a (text*).
It's the generic cast for indirect types.

> Using DatumGetPointer also seg faults ...

What actually segfaults? Not DatumGetPointer(), since that's just a cast.

> Is there somewhere that I am not adequately checking for an error?

Depends what you're doing. Remember, a (text*) is a varlena object, so
you need to use the VARDATA/VARSIZE macros to access it. The string is
not nul-terminated, for example. If you want to get a plain C string,
use textout or DatumGetCString or something similar.

However, DETOAST is the right step, because if you don't do that you
might be handed a toasted string.

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> Those who make peaceful revolution impossible will make violent revolution 
> inevitable.
>  -- John F Kennedy


signature.asc
Description: Digital signature


Re: [GENERAL] WAL shipping question

2007-12-04 Thread Richard Huxton

SHARMILA JOTHIRAJAH wrote:

Hi,
I'm trying to play a bit with log shipping between 2 servers, primary and
standby. These servers are running version 8.3beta3. I had set up continuous
archiving at the primary server. The manual's example of archive_command is
archive_command = 'cp -i %p /mnt/server/archivedir/%f </dev/null'
This basically archives the data in the primary server itself...right!!!
But how can I set up continuous archiving from primary to a directory (WAL
archive directory) on the stand-by server?


a. Mount the remote directory via NFS/SMBFS/...
b. Use a remote copy, e.g. scp, FTP...

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] difficulty extracting variable-sized field on triggered row

2007-12-04 Thread Glen W. Mabey
On Tue, Dec 04, 2007 at 09:53:37AM -0600, Tom Lane wrote:
> "Glen W. Mabey" <[EMAIL PROTECTED]> writes:
> > Is there somewhere that I am not adequately checking for an error?
> 
> 1. You're passing SPI_getbinval an uninitialized bool pointer.

Doh!  

> 2. You're discarding its result, which you need.

Ah yes, a change I made while trying to debug things.

> 3. You're not checking for a null, and the error check you do have
>is wrong/redundant.

The field is constrained to be NOT NULL, so I wasn't worried about
checking that, but I don't see how the error check is wrong, according
to the description of SPI_getbinval at

http://www.postgresql.org/docs/8.2/interactive/spi-spi-getbinval.html

What is then the appropriate way to check for failure of SPI_getbinval?

> 4. Use DatumGetTextP(), not DatumGetPointer nor PG_DETOAST_DATUM.

Searching for DatumGetTextP from the search text box at the top of
www.postgresql.org yields no hits, and a google search on DatumGetTextP 
does not seem to turnip [sic] any direct documentation on this function.

Is there somewhere in the docs that I should have found this (and other
useful) functions?

Thank you very much,
Glen Mabey

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] difficulty extracting variable-sized field on triggered row

2007-12-04 Thread Alvaro Herrera
Glen W. Mabey wrote:

> TriggerData *trigdata;
> Datum relative_filename_datum;
> text *relative_filename_t;
> int file_name_colnumber, ret;
> bool *isnull;

This is wrong.  Try

bool isnull;

and later:

SPI_getbinval( trigdata->tg_trigtuple, trigdata->tg_relation->rd_att, 
file_name_colnumber, &isnull );

-- 
Alvaro Herrera  http://www.amazon.com/gp/registry/5ZYLFMCVHXC
"No hay ausente sin culpa ni presente sin disculpa" (Prov. francés)

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] difficulty extracting variable-sized field on triggered row

2007-12-04 Thread Tom Lane
"Glen W. Mabey" <[EMAIL PROTECTED]> writes:
> Is there somewhere that I am not adequately checking for an error?

1. You're passing SPI_getbinval an uninitialized bool pointer.

2. You're discarding its result, which you need.

3. You're not checking for a null, and the error check you do have
   is wrong/redundant.

4. Use DatumGetTextP(), not DatumGetPointer nor PG_DETOAST_DATUM.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] stored procedures and dynamic queries

2007-12-04 Thread Ivan Sergio Borgonovo
On Tue, 04 Dec 2007 13:54:15 +
Richard Huxton <[EMAIL PROTECTED]> wrote:

> Always go for the cleaner design. If it turns out that isn't fast 
> enough, *then* start worrying about having a bad but faster design.

mmm yeah right. I did express myself badly.
What I mean I've first to know what are the boundaries to know what
a good design is.
I'm ready to refactor... I'd just like to avoid it for ignorance of
common knowledge about good practice.

BTW still a good reading for dev:

http://www.gtsm.com/oscon2004/

> > Most of the documents available are from a sysadmin point of view.
> > That makes me think that unless I write terrible SQL it won't
> > make a big difference and the first place I'll have to look at if
> > the application need to run faster is pg config.

> The whole point of a RDBMS is so that you don't have to worry about 
> this. If you have to start tweaking the fine details of these

This will definitively be the last resort. These times you can't wear
so many hats as before.

> Keep your database design clean, likewise with your queries,
> consider whether you can cache certain results and get everything
> working first.

At the end... if you don't look to much to details everything will
reach a defined deterministic state after all ;)

> Note that this is quite old now, so some performance-related
> assumptions will be wrong for current versions of PG.

I noticed. Maybe this will be part of some other question later.

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] difficulty extracting variable-sized field on triggered row

2007-12-04 Thread Tom Lane
"Glen W. Mabey" <[EMAIL PROTECTED]> writes:
> On Tue, Dec 04, 2007 at 09:53:37AM -0600, Tom Lane wrote:
>> 3. You're not checking for a null, and the error check you do have
>> is wrong/redundant.

> The field is constrained to be NOT NULL, so I wasn't worried about
> checking that, but I don't see how the error check is wrong,

Well, a check on the isnull state is a good idea anyway IMHO.
This code has no way of being sure that a NOT NULL constraint
exists, and dumping core if it's not there isn't my idea of being
robust.

What I didn't like about the error test was that it assumed that
SPI_ERROR_NOATTRIBUTE is and always will be the only possible
error code from SPI_getbinval.  I'd test for SPI_result != 0 instead.
(The "redundant" comment came from the thought that it's probably
pointless to be making this test at all, considering that you just
got the column number from SPI_fnumber on the same tupdesc.)

>> 4. Use DatumGetTextP(), not DatumGetPointer nor PG_DETOAST_DATUM.

> Is there somewhere in the docs that I should have found this (and other
> useful) functions?

fmgr.h, perhaps, or by looking at existing code that does more or less
what you want to do.  Postgres does not follow the model that you are
supposed to look only at the SGML docs to find out how to do server-side
programming.  We are open source and one of the biggest benefits of that
is that you can (and should) look at the source code to learn.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Can i Force to postgrsql to use a certain index?

2007-12-04 Thread Pau Marc Munoz Torres
Hi every body

 Is it possible to force PostgreSQL to use a certain index? I know that
in MySQL there is a FORCE INDEX (index_name) option for select queries:
SELECT * FROM precalc FORCE INDEX (hladrb50101);

Have any of you used this?

thanks

pau


-- 
Pau Marc Muñoz Torres

Laboratori de Biologia Computacional
Institut de Biotecnologia i Biomedicina Vicent Villar
Universitat Autonoma de Barcelona
E-08193 Bellaterra (Barcelona)

telèfon: 93 5812807
Email : [EMAIL PROTECTED]


Re: [GENERAL] Can i Force to postgrsql to use a certain index?

2007-12-04 Thread Joshua D. Drake

Pau Marc Munoz Torres wrote:

Hi every body

 Is it possible to force PostgreSQL to use a certain index? I know
that in MySQL there is a FORCE INDEX (index_name) option for select queries:

SELECT * FROM precalc FORCE INDEX (hladrb50101);

any of you have used this?


No. PostgreSQL will not allow you to do that. Not to mention, a SELECT *
without a WHERE clause will always scan the whole table, so an index
would be useless.
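
What PostgreSQL does offer are planner cost settings that discourage (but
never force) particular plan types, which can be handy for testing whether
an index can be used at all. A sketch, with a hypothetical column:

SET enable_seqscan = off;  -- session-local planner setting, not a hint
EXPLAIN SELECT * FROM precalc WHERE somecol = 42;  -- "somecol" is made up
SET enable_seqscan = on;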


Joshua D. Drake



thanks

pau


--
Pau Marc Muñoz Torres

Laboratori de Biologia Computacional
Institut de Biotecnologia i Biomedicina Vicent Villar
Universitat Autonoma de Barcelona
E-08193 Bellaterra (Barcelona)

telèfon: 93 5812807
Email : [EMAIL PROTECTED]



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] difficulty extracting variable-sized field on triggered row

2007-12-04 Thread Sam Mason
On Tue, Dec 04, 2007 at 10:11:04AM -0600, Glen W. Mabey wrote:
> Searching for DatumGetTextP from the search text box at the top of
> www.postgresql.org yields no hits, and a google search on DatumGetTextP 
> does not seem to turnip [sic] any direct documentation on this function.
> 
> Is there somewhere in the docs that I should have found this (and other
> useful) functions?

I find the following tool helpful:

  http://doxygen.postgresql.org/


  Sam

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] WAL shipping question

2007-12-04 Thread SHARMILA JOTHIRAJAH

> I'm trying to play a bit with log shipping between 2 servers, primary
> and standby. These servers are running version 8.3beta3. I had set up
> continuous archiving at the primary server. The manual's example of
> archive_command is
> archive_command = 'cp -i %p /mnt/server/archivedir/%f </dev/null'
> This basically archives the data in the primary server itself...right!!!
> But how can I set up continuous archiving from primary to a directory
> (WAL archive directory)
> on the stand-by server ?

>>a. Mount the remote directory via NFS/SMBFS/...
>>b. Use a remote copy, e.g. scp, FTP...


Thanks...
I tried this command, to both copy on the primary server and scp to the
standby server:

archive_command = 'cp -i "%p" .../archivedir/"%f" &&
scp -B "%p".../archivedir/"%f"  "%p" [EMAIL PROTECTED]:/archivedir/"%f"'

cp works but scp doesn't. Is it because scp generally asks for a password?
I can do scp or ssh without password authentication, but I'm not sure if it
is safe to do that.

What is the alternative for getting this done?

Thanks
sharmila


Re: [GENERAL] WAL shipping question

2007-12-04 Thread Richard Huxton

SHARMILA JOTHIRAJAH wrote:

I'm trying to play a bit with log shipping between 2 servers, primary
and standby. These servers are running version 8.3beta3. I had set up
continuous archiving at the primary server. The manual's example of
archive_command is

archive_command = 'cp -i %p /mnt/server/archivedir/%f </dev/null'
This basically archives the data in the primary server itself...right!!!

But how can I set up continuous archiving from primary to a directory
(WAL archive directory) on the stand-by server ?



a. Mount the remote directory via NFS/SMBFS/...
b. Use a remote copy, e.g. scp, FTP...



Thanks...
I tried this command, to both copy on the primary server and scp to the
standby server:

archive_command = 'cp -i "%p" .../archivedir/"%f" &&
scp -B "%p".../archivedir/"%f"  "%p" [EMAIL PROTECTED]:/archivedir/"%f"'

cp works but scp doesn't. Is it because scp generally asks for a password?
I can do scp or ssh without password authentication, but I'm not sure if it
is safe to do that.

What is the alternative for getting this done?


You can generate a public/private key pair on the database server and 
store the public version in ~/.ssh/authorized_keys on the remote server. 
Google for details.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Tuning configuration

2007-12-04 Thread Konrad Neuwirth
Hello,

I have a hopefully simple question.  I've found documentation about
the meanings of the various variables in postgresql.conf when it comes
to memory consumption and buffer sizes.  I've also found hints that
making them too large decreases performance.  But how can I measure
how well the current settings fit?  Are there tables to watch to see how
quickly things like temp tables get written out to disk vs. being held
in memory?  We'd like to find an appropriate size for our database,
but can't even yet say if we're in the right ballpark on a machine
that still has RAM to spare.

What can I look into?

Thank you,
 Konrad Neuwirth

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Recovering data via raw table and field separators

2007-12-04 Thread John Wells
Guys,

We had a bit of an unfortunate communication breakdown here at work,
which led to a particular database not being backed up. Before we
recognized this problem, an entire database table was deleted.

I immediately copied the pgdata directory and have been able to find
the file that represents the data that was blown away. Fortunately, I
only really need to restore two pieces of data from the table...the id
field (primary key) and the employee_number field. Once I have this
data, I can re-populate and all my relationships will be honored.

I've been looking through the records with a hex editor, but the
unfortunate thing is that I either don't see consistency with field
separators or I'm overlooking them.

The table definition looks like this:

   Table "public.users"
  Column   |Type |
Modifiers
---+-+
 id| integer | not null
default nextval('users_id_seq'::regclass)
 login | character varying(255)  |
 email | character varying(255)  |
 crypted_password  | character varying(40)   |
 salt  | character varying(40)   |
 created_at| timestamp without time zone |
 updated_at| timestamp without time zone |
 remember_token| character varying(255)  |
 remember_token_expires_at | timestamp without time zone |
 manager_id| integer |
 employee_number   | integer |
 classification_id | integer |
 name  | character varying(255)  |
Indexes:
"users_pkey" PRIMARY KEY, btree (id)

Can anyone guide me in how I might parse this out? There has to be a
way...I think I just need a helpful push ;)

Thanks!
John

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Recovering data via raw table and field separators

2007-12-04 Thread Alvaro Herrera
John Wells wrote:

> I've been looking through the records with a hex editor, but the
> unfortunate thing is that I either don't see consistency with field
> separators or I'm overlooking them.

There are no field separators.  Perhaps you could extract some useful
info with pg_filedump, which you can grab at
http://sources.redhat.com/rhdb


-- 
Alvaro Herrera  http://www.amazon.com/gp/registry/5ZYLFMCVHXC
"No hay hombre que no aspire a la plenitud, es decir,
la suma de experiencias de que un hombre es capaz"

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Recovering data via raw table and field separators

2007-12-04 Thread John Wells
On 12/4/07, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> There are no field separators.  Perhaps you could extract some useful
> info with pg_filedump, which you can grab at
> http://sources.redhat.com/rhdb

So is it simply field width? Can one count the number of bytes based
on native datatype length and determine field start/end?

Thanks!
John

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Transaction problem

2007-12-04 Thread Scott Marlowe
On Dec 4, 2007 7:45 AM, Bill Moran <[EMAIL PROTECTED]> wrote:
> In response to "x asasaxax" <[EMAIL PROTECTED]>:
>
> > So just use a constraint then? There's no problem if two sessions decrease
> > the number and it goes to less than or equal to zero?
> > I'm programming with PHP.
>
> BEGIN;
> SELECT quantity FROM products WHERE productid=[productid] FOR UPDATE;
> [Check in PHP to ensure enough product exists for this purchase]
> UPDATE products SET quantity=[new quantity after purchase]
> WHERE productid=[productid];
> [... any other table updates you need to do for this transaction ...]
> COMMIT WORK;
>
> SELECT ... FOR UPDATE will prevent other transactions from locking this
> row until this transaction completes.  It guarantees that only 1
> transaction can modify a particular row at a time.  See the docs for
> more details:
> http://www.postgresql.org/docs/8.1/static/sql-select.html#SQL-FOR-UPDATE-SHARE
> http://www.postgresql.org/docs/8.1/static/explicit-locking.html

Seems like a lot more work than my method of using a check constraint
on quantity >= 0.  The advantage of doing it my way is you use a single
statement with no race conditions and no "for update" locking of the
row required.  If the update succeeds there was one, and you have
"checked it out".  If it fails there weren't any.  It's race-proof and
far simpler.
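
A minimal sketch of that approach (table and column names are hypothetical):

ALTER TABLE products ADD CONSTRAINT quantity_nonneg CHECK (quantity >= 0);
-- one statement, no race: the UPDATE simply errors out for the losing
-- session if the stock would go negative
UPDATE products SET quantity = quantity - 1 WHERE productid = 42;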

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Tuning configuration

2007-12-04 Thread Erik Jones

On Dec 4, 2007, at 11:17 AM, Konrad Neuwirth wrote:


Hello,

I have a hopefully simple question.  I've found documentation about
the meanings of the various variables in postgres.conf -- if it comes
to memory consumption and buffer size.  I've also found hints as to
making them too large decreases performance.  But -- how can I measure
how well the current settings fit? Are there tables to watch on how
quickly things like temp tables get written out to disk vs. being held
in memory?  We'd like to find an appropriate size for our database,
but can't even yet say if we're in the right ballpark on a machine
that has RAM to spare still.

What can I look into?


What version of postgres are you using?  8.3 (currently in beta) has  
facilities for tracking temp file creation.


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Recovering data via raw table and field separators

2007-12-04 Thread Alvaro Herrera
John Wells wrote:
> On 12/4/07, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> > There are no field separators.  Perhaps you could extract some useful
> > info with pg_filedump, which you can grab at
> > http://sources.redhat.com/rhdb
> 
> So is it simply field width? Can one count the number of bytes based
> on native datatype length and determine field start/end?

Yes.  For variable length types, there is a 4-byte length word at the
start of the field (unless you are using 8.3 which introduces more
compact representations in some cases).

-- 
Alvaro Herrera http://www.flickr.com/photos/alvherre/
"If it wasn't for my companion, I believe I'd be having
the time of my life"  (John Dunbar)

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] WAL shipping question

2007-12-04 Thread Greg Smith

On Tue, 4 Dec 2007, SHARMILA JOTHIRAJAH wrote:

This basically archives the data in the primary server itself...right!!! 
But how can I set up continuous archiving from primary to a directory 
(WAL archive directory) on the stand-by server ?


The closest thing to a worked out example of how to do this I'm aware of 
is at http://archives.postgresql.org/sydpug/2006-10/msg1.php


That uses rsync as the transport mechanism for reasons it explains (the 
'atomic copy' feature).  You can certainly replicate that using ssh, but 
you may have to use a secondary directory to hold files while they're 
being transferred so the stand-by doesn't try to do something with the 
partial copies.  Mounting filesystems and copying the files over 
Samba/NFS/etc. is another approach with its own issues.  It's been my 
experience that remote filesystems will hang in odd ways when there's a 
connectivity problem, while copying with ssh/scp gives you a more 
predictable copied/failed return code without retrying too hard. 
PostgreSQL can tolerate the archive_command spitting back an error just 
fine and will retry automatically; I prefer not to expose the server to a
situation where the archive_command might not return quickly.


The main thing that's improved in 8.3 is the integration of pg_standby as 
a more rugged restore_command than most people were coding on their own:


http://www.postgresql.org/docs/8.3/static/pgstandby.html

You should use it instead of the example restore.sh included in the 
message I referenced above.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Recovering data via raw table and field separators

2007-12-04 Thread Martijn van Oosterhout
On Tue, Dec 04, 2007 at 03:38:16PM -0300, Alvaro Herrera wrote:
> > So is it simply field width? Can one count the number of bytes based
> > on native datatype length and determine field start/end?
> 
> Yes.  For variable length types, there is a 4-byte length word at the
> start of the field (unless you are using 8.3 which introduces more
> compact representations in some cases).

And NULLs are skipped entirely. They are represented in the null-bitmap
at the beginning of the tuple.

What sometimes works is creating a new table with the exact same
structure, shutting down the postmaster and copying the old table over
the new one. If it's the same cluster and the clog/xlog are still there
it might work.

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> Those who make peaceful revolution impossible will make violent revolution 
> inevitable.
>  -- John F Kennedy


signature.asc
Description: Digital signature


[GENERAL] bug with to_char('2007-12-31'::DATE, 'YYYYIW')

2007-12-04 Thread Daniel Schuchardt

LOLL2=# SELECT to_char('2007-12-31'::DATE, 'YYYYIW');
 to_char
---------
 200701
(1 row)


2007-12-31 should be week 2008-01
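
A workaround, assuming a release whose to_char supports the documented IYYY
pattern: IYYY is the ISO week-numbering year, so it pairs consistently with
IW, whereas YYYY is the calendar year and the two disagree around week 1:

SELECT to_char('2007-12-31'::DATE, 'IYYYIW');  -- yields 200801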

regards,

   Daniel.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] bug with to_char('2007-12-31'::DATE, 'YYYYIW')

2007-12-04 Thread Daniel Schuchardt

BTW:

Windows 2003 Server.

LOLL2=# SELECT version();
                                   version
------------------------------------------------------------------------------
 PostgreSQL 8.1.4 on i686-pc-mingw32, compiled by GCC gcc.exe (GCC)
 3.4.2 (mingw-special)
(1 row)

Regards :)


Daniel Schuchardt schrieb:

LOLL2=# SELECT to_char('2007-12-31'::DATE, 'YYYYIW');
 to_char
---------
 200701
(1 row)


2007-12-31 should be week 2008-01

regards,

   Daniel.


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Tuning configuration

2007-12-04 Thread Gauthier, Dave
Is there something like a freeware windows client app that does DBA
stuff for a remote server?  Sort of like TOAD for Oracle?

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Erik Jones
Sent: Tuesday, December 04, 2007 1:34 PM
To: Konrad Neuwirth
Cc: Postgresql
Subject: Re: [GENERAL] Tuning configuration

On Dec 4, 2007, at 11:17 AM, Konrad Neuwirth wrote:

> Hello,
>
> I have a hopefully simple question.  I've found documentation about
> the meanings of the various variables in postgres.conf -- if it comes
> to memory consumption and buffer size.  I've also found hints as to
> making them too large decreases performance.  But -- how can I measure
> how well the current settings fit? Are there tables to watch on how
> quickly things like temp tables get written out to disk vs. being held
> in memory?  We'd like to find an appropriate size for our database,
> but can't even yet say if we're in the right ballpark on a machine
> that has RAM to spare still.
>
> What can I look into?

What version of postgres are you using?  8.3 (currently in beta) has  
facilities for tracking temp file creation.

Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 6: explain analyze is your friend

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Recovering data via raw table and field separators

2007-12-04 Thread Alvaro Herrera
John Wells wrote:
> On 12/4/07, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> > John Wells wrote:
> > > On 12/4/07, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> > > > There are no field separators.  Perhaps you could extract some useful
> > > > info with pg_filedump, which you can grab at
> > > > http://sources.redhat.com/rhdb
> > >
> > > So is it simply field width? Can one count the number of bytes based
> > > on native datatype length and determine field start/end?
> >
> > Yes.  For variable length types, there is a 4-byte length word at the
> > start of the field (unless you are using 8.3 which introduces more
> > compact representations in some cases).
> 
> Oh fun...what about record separators? I think I could live with doing
> it by name and id. Since name is the last field in the table, and id
> is the first, one could assume that it looks something like this:
> 
> 1rst record name | record separator | 2nd record id | etc, etc
> 
> If I could split on that record separator I might be able to get what I'm 
> after.

There are no record separators either.  Records are stored following
offsets which are stored in a fixed-size array at the start of each
page, called "line pointers".  Have a look at what pg_filedump shows you
for interpreted output.

Maybe you can get away with your idea, keeping in mind that record
separators are just not there.
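
For reference, a typical invocation looks like this (the relfilenode path is
a made-up example; -i prints interpreted item details and -f adds a formatted
block dump):

pg_filedump -i -f $PGDATA/base/16384/16437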

-- 
Alvaro Herrera http://www.flickr.com/photos/alvherre/
"Aprender sin pensar es inútil; pensar sin aprender, peligroso" (Confucio)

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Recovering data via raw table and field separators

2007-12-04 Thread John Wells
On 12/4/07, Martijn van Oosterhout <[EMAIL PROTECTED]> wrote:
> On Tue, Dec 04, 2007 at 03:38:16PM -0300, Alvaro Herrera wrote:
> > > So is it simply field width? Can one count the number of bytes based
> > > on native datatype length and determine field start/end?
> >
> > Yes.  For variable length types, there is a 4-byte length word at the
> > start of the field (unless you are using 8.3 which introduces more
> > compact representations in some cases).
>
> And NULLs are skipped entirely. They are represented in the null-bitmap
> at the beginning of the tuple.
>
> What sometimes works is creating a new table with the exact same
> structure, shutting down the postmaster and copying the old table over
> the new one. If it's the same cluster and the clog/xlog are still there
> it might work.
>
> Have a nice day,

Martijn,

Wow...interesting idea...but to clarify, I copied the table file
*after* the delete was run on the table. Although the data appears to
still be there, wouldn't they be marked as deleted in some way and not
appear in the new table even if the copy worked?

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] WAL shipping question

2007-12-04 Thread SHARMILA JOTHIRAJAH

> This basically archives the data in the primary server
 itself...right!!! 
> But how can I set up continuous archiving from primary to a directory
 
> (WAL archive directory) on the stand-by server ?

The closest thing to a worked out example of how to do this I'm aware
 of 
is at http://archives.postgresql.org/sydpug/2006-10/msg1.php

That uses rsync as the transport mechanism for reasons it explains (the
 
'atomic copy' feature).  You can certainly replicate that using ssh,
 but 
you may have to use a secondary directory to hold files while they're 
being transferred so the stand-by doesn't try to do something with the 
partial copies.  Mounting filesystems and copying the files over 
Samba/NFS/etc. is another approach with its own issues.  It's been my 
experience that remote filesystems will hang in odd ways when there's a
 
connectivity problem, while copying with ssh/scp gives you a more 
predictable copied/failed return code without retrying too hard. 
PostgreSQL can tolerate the archive_command spitting back an error just
 
fine and will retry automatically, I prefer not to expose the server to
 a 
situation where the archive_command might not return quickly.

The main thing that's improved in 8.3 is the integration of pg_standby
 as 
a more rugged restore_command than most people were coding on their
 own:

http://www.postgresql.org/docs/8.3/static/pgstandby.html

You should use it instead of the example restore.sh included in the 
message I referenced above.

Thanks for your reply. I will look into the pg_standby  and rsync.

I currently have this in my config file

archive_command = 'cp -i "%p" /export/home/user/archivedir/"%f" && 
scp -i userKey "%p" [EMAIL PROTECTED]:/export/home/user/archivedir/"%f"' 

userKey is a public-private key generated without the passphrase. This works...
   scp -i userKey dummy [EMAIL PROTECTED]:/export/home/user/archivedir
and it just copies the dummy file from primary to standby without prompting for 
a password.

It doesn't scp the WAL files from primary to stand-by when I add it to 
archive_command as given above. I get this error:
Warning: Identity file userKey does not exist.
ssh_askpass: exec(/usr/lib/ssh/ssh-askpass): No such file or directory
Write failed: Broken pipe
lost connection
LOG:  archive command "cp -i "pg_xlog/00010039" 
/export/home/user/archivedir/"00010039" && scp -i userKey 
"pg_xlog/00010039" [EMAIL 
PROTECTED]:/export/home/user/archivedir/"00010039"" failed: 
return code 256







  

Be a better sports nut!  Let your teams follow you 
with Yahoo Mobile. Try it now.  
http://mobile.yahoo.com/sports;_ylt=At9_qDKvtAbMuh1G1SQtBI7ntAcJ
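
One thing worth checking (an assumption based on the "Identity file userKey
does not exist" warning): archive_command runs with the data directory as its
working directory, so a relative path to the key won't be found. An absolute
path to the key file (the path below is hypothetical), plus dropping cp's
interactive -i flag, which can misbehave without a terminal, should help:

archive_command = 'cp "%p" /export/home/user/archivedir/"%f" && scp -i /export/home/user/.ssh/userKey "%p" [EMAIL PROTECTED]:/export/home/user/archivedir/"%f"'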

Re: [GENERAL] Recovering data via raw table and field separators

2007-12-04 Thread Martijn van Oosterhout
On Tue, Dec 04, 2007 at 02:26:21PM -0500, John Wells wrote:
> Wow...interesting idea...but to clarify, I copied the table file
> *after* the delete was run on the table. Although the data appears to
> still be there, wouldn't they be marked as deleted in some way and not
> appear in the new table even if the copy worked?

Ah sorry, I thought you meant the table was dropped or the database was
deleted. If you actually ran a DELETE FROM on the table, then yes
they'll all be marked deleted.

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> Those who make peaceful revolution impossible will make violent revolution 
> inevitable.
>  -- John F Kennedy


signature.asc
Description: Digital signature


Re: [GENERAL] Recovering data via raw table and field separators

2007-12-04 Thread John Wells
On 12/4/07, Martijn van Oosterhout <[EMAIL PROTECTED]> wrote:
> Ah sorry, I thought you meant the table was dropped or the database was
> deleted. If you actually ran a DELETE FROM on the table, then yes
> they'll all be marked deleted.


So, given a database table file that still has records in it, and
given the fact that these records could be parsed and displayed if the
proper utility knew how to read the various data structures used to
denote field and record length, is there no utility to do this? It
seems that it would be fairly straightforward to somehow read the
records, yet to pay no mind to the deleted flag (or whatever mechanism
postgresql uses to mark them as deleted).

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] stored procedures and dynamic queries

2007-12-04 Thread Obe, Regina

 As a side note, there is actually a book on design patterns in SQL,
although I personally haven't read it. From the reviews I recall reading
about it, I think its mostly based on Oracle Features.  Still might be a
good read as far as PostgreSQL is concerned except for the sections on
Graphs and recursive trees since Oracle has special syntactical sugar
for that kind of stuff that is unique to Oracle.

http://www.rampant-books.com/book_2006_1_sql_coding_styles.htm

Hope that helps,
Regina



-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Ted Byers
Sent: Tuesday, December 04, 2007 9:59 AM
To: Richard Huxton; Ivan Sergio Borgonovo
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] stored procedures and dynamic queries

--- Richard Huxton <[EMAIL PROTECTED]> wrote:
> Ivan Sergio Borgonovo wrote:
> > On Tue, 04 Dec 2007 08:14:56 +
> > Richard Huxton <[EMAIL PROTECTED]> wrote:
> > 
> >> Unless it's an obvious decision (millions of
> small identical
> >> queries vs. occasional large complex ones) then
> you'll have to
> >> test. That's going to be true of any decision
> like this on any
> >> system.
> > 
> > :(
> > 
> > I'm trying to grasp a general idea from the view
> point of a developer
> > rather than a sysadmin. At this moment I'm not
> interested in
> > optimisation, I'm interested in understanding the
> trade off of
> > certain decisions in the face of a cleaner
> interface.
> 
> Always go for the cleaner design. If it turns out
> that isn't fast 
> enough, *then* start worrying about having a bad but
> faster design.
> 
I don't know about "clean", but the mantra here is
"First you make it provably correct, then you make it
fast."

I am a fan of making things as simple as practicable,
BUT NO SIMPLER. I don't know how that meshes with
"clean", but the general idea is right.

If you look at my code, my C++ and my java code is
infinitely better than my SQL, so with either of
those, I end up making fewer variants, and my starting
point is generally much closer to my end point, and
that is just a function of experience.  With my SQL
code, I generally find myself producing a variety of
scripts to support a given task.  And these include
exploration of just about everything SQL has to offer.
 I TRY everything, from joins to correlated subqueries
to stored procedures to views, and beyond.  And I am
still trying to develop an intuition as to which
options will serve best in a given situation,
analogous to design patterns I routinely use in my C++
and Java code.

Your reaction to Richard's advice to test seems to
imply you want a more direct approach to "THE" answer.
 That is chasing something that doesn't exist!  I tell
you, as a developer (and NOT a sysadmin), there is no
substitute for experience and THAT can ONLY be
obtained through trial and error.  I haven't seen any
books about design patterns in SQL (while there are
plenty in C++ and Java, and other languages), so there
is no short cut.

In short, the general idea most developers I know work
with is "code and test, and then code again and test
again, until you've tried the variety of options that
exist."

There is no substitute for testing yourself.  I have
seen repeated advice to replace correlated subqueries
by left joins, claiming that ubiquitous experience is
that the joins are faster, and yet, in my tests, in
most cases there was little difference in speed while
in others the correlated subqueries were faster.  So
it appears the advice was based on experience with
dated software and the RDBMS in question had
subsequently dramatically improved how it handled
correlated subqueries.  And my use of EXPLAIN
confirmed what I was finding through testing.
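
As a concrete (hypothetical) illustration of the two forms being compared,
with invented table and column names, here is the same "customers with no
orders" question written both ways:

-- as a correlated subquery:
SELECT c.id
FROM customers c
WHERE NOT EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.id);

-- rewritten as a left join:
SELECT c.id
FROM customers c
LEFT JOIN orders o ON o.customer_id = c.id
WHERE o.customer_id IS NULL;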

> > Most of the documents available are from a
> sysadmin point of view.
> > That makes me think that unless I write terrible
> SQL it won't make a
> > big difference and the first place I'll have to
> look at if the
> > application need to run faster is pg config.
> 
> The whole point of a RDBMS is so that you don't have
> to worry about 
> this. If you have to start tweaking the fine details
> of these things, 
> then that's a point where the RDBMS has reached its
> limits. In a perfect 
> world you wouldn't need to configure PG either, but
> it's not that clever 
> I'm afraid.
> 
I am not sure I buy this, if I properly understand it.
 Trust me, I have written some really bad but simple
queries that took hours to complete a task that was
completed in less than a minute with smarter code. 
And yet the "bad" code I'd written was similar in
nature to examples used in some texts to explain ideas
in SQL.  The point is, until you get extensive
experience in SQL programming and optimization, you
won't know what is bad code until you test it.

Personally, I rely on the sysadmin to administer the
RDBMS properly, to ensure it is configured
appropriately for our application, AND I ask his or
her advice and input on how I design and implement my
SQL code, as well a

Re: [GENERAL] Recovering data via raw table and field separators

2007-12-04 Thread Martijn van Oosterhout
On Tue, Dec 04, 2007 at 03:05:53PM -0500, John Wells wrote:
> So, given a database table file that still has records in it, and
> given the fact that these records could be parsed and displayed if the
> proper utilty knew how to read the various data structures used to
> denote field and record length, is there no utility to do this? I
> seems that it would be fairly straight forward to somehow read the
> records, yet to pay no mind to the deleted flag (or whatever mechanism
> postgresql uses to mark them as deleted).

Of course, see the pg_filedump mentioned at the beginning of this
thread.

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> Those who make peaceful revolution impossible will make violent revolution 
> inevitable.
>  -- John F Kennedy


signature.asc
Description: Digital signature


Re: [GENERAL] Disconnects hanging server

2007-12-04 Thread Brian Wipf

On 3-Dec-07, at 3:51 PM, A.M. wrote:

On Dec 3, 2007, at 4:16 PM, Brian Wipf wrote:

We have a dual 3.0 GHz Intel Dual-core Xserve, running Mac OS X  
10.5.1 Leopard Server and PostgreSQL 8.2.5. When we disconnect  
several clients at a time (30+) in production, the CPU goes through  
the roof and the server will hang for many seconds where it is  
completely non-responsive. It seems the busier the server is, the  
longer the machine will hang.


You should run Shark or Instruments to determine where the system is  
getting hung up. You will likely need to install developer tools. If  
you need help reading the profilers' output, please join up on an  
Apple list.


As per A.M.'s suggestion, I have run a time profile in Shark to get  
some idea of what's going on when the server hangs when disconnecting  
clients.


Nearly 100% of the CPU is going into pmap_remove_range. The stack  
trace for pmap_remove_range, viewable within Shark, is:

-> pmap_remove_range
--> pmap_remove
---> vm_map_simplify
----> vm_map_remove
-----> task_terminate_internal
------> exit1
-------> exit
--------> unix_syscall64
---------> lo64_unix_scall

The call taking up the next highest amount of CPU, at 0.1%, is  
AtProcExit_Buffers. And its stack trace:

-> AtProcExit_Buffers
--> shmem_exit
---> proc_exit
----> PostgresMain
-----> BackendRun
------> BackendStartup
-------> ServerLoop
--------> PostmasterMain
---------> main
----------> start

Brian Wipf
<[EMAIL PROTECTED]>
ClickSpace Interactive Inc.


---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] PostgreSQL Beta4 released

2007-12-04 Thread Joshua D. Drake

Thanks to all the testing, feedback and bug reports the community has
performed with the current betas, we now have our fourth beta
of 8.4.

Due to continued testing by our community we have found performance
improvements, fixed bugs in PLperl and XML handling. We have also made
many documentation improvements, particularly in the Partitioning
section.

We hope that this will be our last beta before release candidate so
please download and continue testing. Testing is the only way to
help ensure that any issues you find will be resolved.  As always, our
community is the first line of defense to help us find any "corner
cases" of possible issues.

Further Beta information is available here:
http://www.postgresql.org/developer/beta

Binaries are already available for our major platforms.

Joshua D. Drake
PostgreSQL Liaison

-- 
* PostgreSQL - The world's most advanced open source database
& http://www.postgresql.org/
() Everything else pales in comparison

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] PostgreSQL Beta4 released

2007-12-04 Thread Bill Moran
In response to "Joshua D. Drake" <[EMAIL PROTECTED]>:
> 
> Thanks to all the testing, feedback and bug reports the community has
> performed with the current betas, we now have our fourth beta
> of 8.4.

I assume you meant 8.3.

-- 
Bill Moran
http://www.potentialtech.com

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Insert/Update to multiple tables with relations

2007-12-04 Thread Dave
Hi,

I need help with designing a set of queries I am running with psql -f 
reports.sql

I have a tb_master with report headers, and a tb_records table with its own 
serial field and a foreign key referencing the "id" of tb_master. The third 
tb_details table has two foreign keys referencing the "id"'s of both, 
tb_master, and tb_records. Below is a simplistic representation of those 
three tables:

CREATE TABLE "sch_reports"."tb_master" (
  "id" SERIAL,
  "some_ref" VARCHAR,
  "some_text" VARCHAR
) WITH OIDS;

CREATE TABLE "sch_reports"."tb_records" (
  "id" SERIAL,
  "master_id" INTEGER NOT NULL,
  "some_text" VARCHAR,
CONSTRAINT "fk_master_id" FOREIGN KEY ("master_id")
REFERENCES "sch_reports"."tb_master"("id")
ON DELETE CASCADE
ON UPDATE CASCADE
NOT DEFERRABLE
) WITH OIDS;

CREATE TABLE "sch_reports"."tb_details" (
  "master_id" BIGINT NOT NULL,
  "record_id" INTEGER NOT NULL,
  "some_text" VARCHAR NOT NULL,
  CONSTRAINT "fk_record_id" FOREIGN KEY ("record_id")
REFERENCES "sch_reports"."tb_records"("id")
ON DELETE CASCADE
ON UPDATE CASCADE
NOT DEFERRABLE,
CONSTRAINT "fk_master_id" FOREIGN KEY ("master_id")
REFERENCES "sch_reports"."tb_master"("id")
ON DELETE CASCADE
ON UPDATE CASCADE
NOT DEFERRABLE
) WITH OIDS;

During update/insert a single row is inserted in tb_master, representing a 
single report.
In the same transaction I would like to enter multiple rows (report line 
items) into tb_records. A new record should get a new id and the foreign key 
of tb_master. If matching rows already exist, they should use the existing 
id of tb_master and all matching rows should be updated.
In the same transaction I would like to enter multiple rows into tb_details. 
A new record will need two foreign keys referencing the serial "id" columns 
of the two tables above.
I'm using a function to insert/update tb_master and here is what I have so 
far, but I'm having trouble getting/setting the appropriate row ids to insert 
records for the other two tables in the same transaction. I realize I can use 
some_ref of tb_master to handle the relations, but I'd like to try with 
serial ids first. Also, I cannot use some_ref for the tb_records/tb_details 
relationship, as I require the serial ids of tb_records to be incremental, 
and not all tb_details rows will exist for each tb_records row:

CREATE OR REPLACE FUNCTION "public"."report_ins_upd" (in_some_text varchar, 
in_some_ref varchar) RETURNS "pg_catalog"."void" AS
$body$
BEGIN
 LOOP
 UPDATE sch_reports.tb_master SET
    some_text = in_some_text
    WHERE some_ref=in_some_ref;
 IF found THEN
RETURN;
 END IF;
 BEGIN
 INSERT INTO sch_reports.tb_master (
some_ref,
some_text
)
 VALUES (
    in_some_ref,
    in_some_text
    );
 RETURN;
 EXCEPTION WHEN unique_violation THEN
  -- do nothing
  END;
 END LOOP;
END;
$body$
LANGUAGE 'plpgsql' VOLATILE CALLED ON NULL INPUT SECURITY INVOKER;

Thanks! 
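
A sketch of one way to carry the generated keys through the transaction,
assuming PostgreSQL 8.2 or later, where INSERT ... RETURNING is available
(on older releases, currval() on the serial's sequence serves the same
purpose). Table and column names are from the post above; the line-item
text is invented:

CREATE OR REPLACE FUNCTION report_ins_upd_sketch(in_some_text varchar,
in_some_ref varchar) RETURNS void AS
$body$
DECLARE
 v_master_id integer;
 v_record_id integer;
BEGIN
 -- capture the master row's serial id as it is generated
 INSERT INTO sch_reports.tb_master (some_ref, some_text)
 VALUES (in_some_ref, in_some_text)
 RETURNING id INTO v_master_id;

 -- each line item reuses the captured master id
 INSERT INTO sch_reports.tb_records (master_id, some_text)
 VALUES (v_master_id, 'line item')
 RETURNING id INTO v_record_id;

 -- details reference both generated keys
 INSERT INTO sch_reports.tb_details (master_id, record_id, some_text)
 VALUES (v_master_id, v_record_id, 'detail line');
END;
$body$
LANGUAGE 'plpgsql' VOLATILE;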



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Deadlock when updating table partitions (and presumed solution)

2007-12-04 Thread Paul Boddie
I recently encountered an interesting situation with regard to
partitioned tables, concurrent updates and deadlocks which probably
has an obvious explanation, although I can't seem to find one in the
manual. Below, I explain the situation and provide some of my own
naive reasoning about what seems to be happening. Since I think I now
know how to avoid such matters, this message is mostly for the
purposes of sharing my recent experiences with those who may one day
encounter similar problems. I'd be grateful if anyone can explain what
must really be occurring and correct any erroneous conclusions,
however.

I have one process querying a table P with partitions P0, P1, P2, ...
Pn joined with table R as follows:

select * from R inner join P on R.id = P.id and P.section = 5

...where the column "section" determines which partition shall be
searched utilising the constraint exclusion support in PostgreSQL.
Here, I use the specific value of 5 to indicate that the involvement
of a specific partition is envisaged.

Now, each partition of P is created inheriting from P, and I also
include a rule which "redirects" inserts from P to the specific
partition of P depending on the value of "section". This latter detail
is, I believe, the principal contributing factor to the problems
subsequently experienced.

I have another process performing updates to individual partitions of
P - specifically "alter table" operations adding foreign key
constraints referencing R as follows:

alter table Pm add constraint Pm_fk_id foreign key(id) references
R(id)

...where "m" is the specific partition number, starting at 0,
increasing by 1, ending at n.

What seems to happen, by looking at pg_lock (and pg_class) is that the
following sequence of events occurs:

 1. The query process acquires an AccessShareLock on R and P.
 2. The update process acquires an AccessExclusiveLock on Pm and seeks
an AccessExclusiveLock on R.
 3. The query process seeks an AccessShareLock on P0 ... Pn.
 4. Deadlock is declared.

Since the query should only involve a single partition of P, one might
expect that the query process might immediately obtain an
AccessShareLock on P5, but what seems to occur is a race condition:
the update process is sometimes able to acquire a lock on P5 before
the query process is able to realise the involvement of P5 in the
query operation. Moreover, a deadlock occurs even when the update
process is adding the foreign key constraint to tables other than P5,
suggesting as I note above that all child tables are involved in the
query operation.

My initial conclusions were as follows:

 1. A query on a partitioned table only initially causes lock
acquisition on the parent table.
 2. Subsequent attempts to acquire locks on child tables conflict with
the locking done by the "alter table" operation.
 3. The classic solution (ensure consistent lock acquisition order)
may not be readily applicable.

Intuitively, I understood that PostgreSQL may only resolve the child
tables involved in a query by using a mechanism specific to the
partitioning infrastructure. I then considered the role of the rules
(collectively redirecting inserts from P to P0 ... Pn), even though
they are concerned with insert statements. By dropping the rule
associated with a given child table before attempting the "alter
table" operation on that table, then recreating the rule, it would
appear that the issues with lock acquisition disappear.
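
For concreteness, a sketch of that workaround for a single partition (the
rule name and body are assumptions, modelled on the standard partitioning
idiom):

DROP RULE p5_insert ON P;
ALTER TABLE P5 ADD CONSTRAINT P5_fk_id FOREIGN KEY (id) REFERENCES R (id);
CREATE RULE p5_insert AS ON INSERT TO P
    WHERE NEW.section = 5
    DO INSTEAD INSERT INTO P5 VALUES (NEW.*);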

It makes sense that, if operating on a specific child table, the links
to the parent should be broken temporarily in order to isolate it from
the parent and any operations which may involve all children (or even
the checking of the involvement of all children), and to not realise
this may have been an oversight on my part. Can anyone help me to
refine my thinking further on this matter?

Paul

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] initdb - encoding question

2007-12-04 Thread Josh Harrison
initdb -E en_CA.utf-8 -D /export/home/sjothirajah/postgres8.3/pgsql/data
gives this error
initdb: "en_CA.utf-8" is not a valid server encoding name
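
The -E switch expects a server encoding name such as UTF8; the locale is
selected separately with --locale. A sketch, assuming the en_CA.UTF-8 locale
is installed on the system:

initdb -E UTF8 --locale=en_CA.UTF-8 -D /export/home/sjothirajah/postgres8.3/pgsql/data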

Thanks
josh

On Dec 3, 2007 1:01 PM, brian <[EMAIL PROTECTED]> wrote:

> Josh Harrison wrote:
> > Hi
> > Im tying to initialize the cluster using initdb and encoding 'UTF8'
> > initdb -E UTF8 -D /export/home/josh/postgres8.3/pgsql/data
> >
> > But I get this error
> > The files belonging to this database system will be owned by user
> "josh".
> > This user must also own the server process.
> >
> > The database cluster will be initialized with locales
> >   COLLATE:  en_CA.ISO8859-1
> >   CTYPE:en_CA.ISO8859-1
> >   MESSAGES: C
> >   MONETARY: en_CA.ISO8859-1
> >   NUMERIC:  en_CA.ISO8859-1
> >   TIME: C
> > initdb: encoding mismatch
> > The encoding you selected (UTF8) and the encoding that the
> > selected locale uses (LATIN1) do not match.  This would lead to
> > misbehavior in various character string processing functions.
> > Rerun initdb and either do not specify an encoding explicitly,
> > or choose a matching combination.
> >
> > Why do I get this error? How can I solve this?
> > Thanks
> > josh
> >
>
> ISO8859-1 is "latin 1". Try using en_CA.utf-8 instead.
>
> brian
>
> ---(end of broadcast)---
> TIP 1: if posting/reading through Usenet, please send an appropriate
>   subscribe-nomail command to [EMAIL PROTECTED] so that your
>   message can get through to the mailing list cleanly
>


[GENERAL] Transaction isolation and constraints

2007-12-04 Thread cliff
Hi:

I'd like to know how PostgreSQL's transaction isolation mechanisms
interact with (e.g., UNIQUE) constraints.  Section 12.2 of the manual
mentions that UPDATE, DELETE, SELECT FOR UPDATE, and SELECT FOR SHARE
commands may block when a concurrent transaction updates a target row
(for both isolation levels, Read Committed and Serializable).  But
suppose a table has a UNIQUE constraint on a column, and two
concurrent transactions attempt to INSERT a row with the same value
for that column:

  o Will the "first" INSERT wait to see if the "second" aborts (as is
done with UPDATE, DELETE, et al.)?
 
  o Or will it immediately abort?

  o Or will it continue until immediately before commit, then abort?

It's not clear when the constraint check is run (or what version of
the table it sees).

Note this point isn't specific to INSERTs, another example would be
two concurrent transactions that UPDATE completely different rows and
in so doing violate a UNIQUE constraint.
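
A two-session sketch of the behaviour in question (table name invented): in
PostgreSQL the uniqueness check fires as each row is inserted, so the second
inserter blocks until the first transaction resolves, then either fails with
a unique violation (if the first commits) or proceeds (if it aborts):

-- Session 1:
BEGIN;
INSERT INTO t (u) VALUES (1);  -- succeeds immediately

-- Session 2:
BEGIN;
INSERT INTO t (u) VALUES (1);  -- blocks here until session 1 commits or aborts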

Thanks.

--Cliff Krumvieda

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Can i Force to postgrsql to use a certain index?

2007-12-04 Thread Pau Marc Munoz Torres
Hi everybody,

 Is it possible to force PostgreSQL to use a certain index? I know that
in MySQL there is a FORCE INDEX (index_name) option for select queries:
SELECT * FROM precalc FORCE INDEX (hladrb50101) ;

Have any of you used this?
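
PostgreSQL has no FORCE INDEX hint; the closest equivalent is to disable
competing plan types for the session via the planner settings. A sketch,
where the WHERE clause is a hypothetical condition the index could serve:

SET enable_seqscan = off;
SELECT * FROM precalc WHERE some_indexed_column = 42;
RESET enable_seqscan;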

thanks

pau

-- 
Pau Marc Muñoz Torres

Laboratori de Biologia Computacional
Institut de  Biotecnologia   i Biomedicina Vicent
Villar
Universitat Autonoma de Barcelona
E-08193 Bellaterra (Barcelona)

telèfon: 93 5812807
Email : [EMAIL PROTECTED]


[GENERAL] how to configure postgresql to display messages in italian

2007-12-04 Thread jo

Hello,

I would like to ask you how to configure Pg to display messages in 
Italian instead of English.

I'm using PostgreSQL version 8.1.8

For example, my pg display data as:


\d cani
          Table "public.cani"
   Column   |         Type          | Modifiers
------------+-----------------------+-----------
 matricola  | character varying(17) | not null
 iscrizione | date                  | not null
 nome_cane  | character varying(12) |


I would like to have it as:
         Tabella "public.cani"
   Colonna  |         Tipo          | Modificatori
------------+-----------------------+--------------
 matricola  | character varying(17) | not null
 iscrizione | date                  | not null
 nome_cane  | character varying(12) |
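
A sketch of the relevant settings, assuming the server was built with NLS
support and an Italian locale is installed: lc_messages controls the
server's message language, and client programs such as psql follow the
LC_MESSAGES/LANG environment:

# in postgresql.conf:
lc_messages = 'it_IT.UTF-8'

# for psql's own headings, set the environment when launching it:
LANG=it_IT.UTF-8 psql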

thank you,
jo


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] postgres freezes up on a query from slony

2007-12-04 Thread [EMAIL PROTECTED]
Hi All,

Not sure if this is a slony issue or a postgres issue...I'm posting on
both.

I'm running slony on a one master/two subscriber system.  One of the
subscribers seems to get stuck on a group of queries, and I can't seem
to figure out why.  If I do a select on pg_stat_activity, I get the
following:

 datid | datname | procpid | usesysid | usename | waiting |          query_start          |         backend_start         | client_addr | client_port
-------+---------+---------+----------+---------+---------+-------------------------------+-------------------------------+-------------+-------------
 16384 | tii     |   12204 |    16392 | slony   | f       | 2007-12-04 11:20:23.839088-08 | 2007-12-04 11:20:23.005228-08 |             |          -1

current_query:
 update only "public"."m_report_stats" set date_start='2007-12-03 13:27:05.661155' where objectid='56917411';
 update only "public"."m_object_paper" set overwriteflag='t' where id='56069688';
 insert into "public"."m_search_list" (nodeid,id) values ('0','45844662');
 insert into "public"."m_search_list" (nodeid,id) values ('1','45844662');
 insert into "public"."m_search_list" (nodeid,id) values ('4','45844662');
 update only "public"."m_dg_read" set delete_flag='t' where id='1474821';
 insert into "public"."m_search_list" (nodeid,id) values ('5','45844662');
 insert into "public"."m_search_list" (nodeid,id) values ('14','45844662');
 update only "public"."m_user" set duration='02:52:24.744252' where id='10369924';
 insert into "public"."m_search_list" (nodeid,id) values ('32','45844662');

What I've done so far is do an strace on the process, and I get the
following:
tii-db1 postgres # strace -p 12204
Process 12204 attached - interrupt to quit
select(0, NULL, NULL, NULL, {0, 1000})  = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 1000})  = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 1000})  = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 1000})  = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 1000})  = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 1000})  = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 1000})  = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 1000})  = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 1000})  = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 1000})  = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 1000})  = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 1000})  = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 1000})  = 0 (Timeout)
Process 12204 detached
tii-db1 postgres #


I've also set log_min_error_statement=debug5 on postgresql.conf, did a
restart, and the logs show the following:
Dec  4 11:20:23 tii-db1 postgres[12204]: [1134-1] 2007-12-04
11:20:23.846 PST [user=slony,db=tii [local] PID:12204 XID:
129605365]DEBUG:  ProcessQuery
Dec  4 11:20:23 tii-db1 postgres[12204]: [1134-2] 2007-12-04
11:20:23.846 PST [user=slony,db=tii [local] PID:12204 XID:
129605365]STATEMENT:  update only "public"."m_report_stats"
Dec  4 11:20:23 tii-db1 postgres[12204]: [1134-3]  set
date_start='2007-12-03 13:27:05.661155' where objectid='56917411';
Dec  4 11:20:23 tii-db1 postgres[12204]: [1134-4]   update only
"public"."m_object_paper" set overwriteflag='t' where id='56069688';
Dec  4 11:20:23 tii-db1 postgres[12204]: [1134-5]   insert into
"public"."m_search_list" (nodeid,id) values ('0','45844662');
Dec  4 11:20:23 tii-db1 postgres[12204]: [1134-6]   insert into
"public"."m_search_list" (nodeid,id) values ('1','45844662');
Dec  4 11:20:23 tii-db1 postgres[12204]: [1134-7]   insert into
"public"."m_search_list" (nodeid,id) values ('4','45844662');
Dec  4 11:20:23 tii-db1 postgres[12204]: [1134-8]   update only
"public"."m_dg_read" set delete_flag='t' where id='1474821';
Dec  4 11:20:23 tii-db1 postgres[12204]: [1134-9]   insert into
"public"."m_search_list" (nodeid,id) values ('5','45844662');
Dec  4 11:20:23 tii-db1 postgres[12204]: [1134-10]  insert into
"public"."m_search_list" (nodeid,id) values ('14','45844662');
Dec  4 11:20:23 tii-db1 postgres[12204]: [1134-11]  update only
"public"."m_user" set duration='02:52:24.744252' where id='10369924';
Dec  4 11:20:23 tii-db1 postgres[12204]: [1134-12]  insert into
"public"."m_search_list" (nodeid,id) values ('32','45844662');
Dec  4 11:20:23 tii-db1 postgres[12204]: [1134-13]

For this process, the log is just stuck here and doesn't do anything.

[GENERAL] Older version of PGSQL help

2007-12-04 Thread Ed Burgstaler
Would anyone be able to direct me as to whether or not there is a pgadmin
for windows utility available that will work for a PostgreSQL v7.0.3
database? I had installed the latest Pgadmin version 1.8 but it needs a
minimum of PostgreSQL v7.3 to work so I don't know what to do now. Would one
of the older versions work?
 
I'm not very database literate so unless an upgrade of the PostgreSQL
database was idiot proof I guess I'm screwed.
I am administering a network for a company that had their IT guy quit and it
happens to be a newspaper subscriber database that is heavily used and
relied upon. Unfortunately, the hardware that the server runs on is very old
and I'm afraid that it will crash any day which would leave them screwed so
I'm trying to figure out how I could get this over to a new Linux server.
The current Linux version that it is running on is Red Hat Linux release 6.2
(Zoot) Kernel 2.2.18 on a 2-processor i686 and I would like to get it
switched over to an recent distro like CentOS4.5 or 5.
 
Thanks in advance for any light that you may be able to shed on this for me.
 
Ed


[GENERAL] Avoid huge performance loss on string concatenation

2007-12-04 Thread Andrus
Using string concatenation in the WHERE clause causes a huge performance loss:

explain analyze select
   rid.toode
   FROM dok JOIN rid USING (dokumnr)
   JOIN toode USING (toode)
   LEFT JOIN artliik using(grupp,liik)
   WHERE rid.toode='NAH S'
AND dok.kuupaev BETWEEN '2007-11-01' AND '2007-12-04'
and dok.kuupaev||dok.kellaaeg BETWEEN '2007-11-01' AND '2007-12-0423 59'

"Nested Loop Left Join  (cost=68.75..5064.86 rows=1 width=24) (actual 
time=8.081..26995.552 rows=567 loops=1)"
"  Join Filter: ((toode.grupp = artliik.grupp) AND (toode.liik = 
artliik.liik))"
"  ->  Nested Loop  (cost=68.75..5062.19 rows=1 width=43) (actual 
time=8.045..26965.731 rows=567 loops=1)"
"->  Index Scan using toode_pkey on toode  (cost=0.00..8.27 rows=1 
width=43) (actual time=0.023..0.026 rows=1 loops=1)"
"  Index Cond: ('NAH S'::bpchar = toode)"
"->  Nested Loop  (cost=68.75..5053.91 rows=1 width=24) (actual 
time=8.016..26964.698 rows=567 loops=1)"
"  ->  Index Scan using dok_kuupaev_idx on dok 
(cost=0.00..4326.16 rows=10 width=4) (actual time=0.059..67.985 rows=3543 
loops=1)"
"Index Cond: ((kuupaev >= '2007-11-01'::date) AND 
(kuupaev <= '2007-12-04'::date))"
"Filter: kuupaev)::text || (kellaaeg)::text) >= 
'2007-11-01'::text) AND (((kuupaev)::text || (kellaaeg)::text) <= 
'2007-12-0423 59'::text))"
"  ->  Bitmap Heap Scan on rid  (cost=68.75..72.76 rows=1 
width=28) (actual time=7.577..7.577 rows=0 loops=3543)"
"Recheck Cond: ((dok.dokumnr = rid.dokumnr) AND 
(rid.toode = 'NAH S'::bpchar))"
"->  BitmapAnd  (cost=68.75..68.75 rows=1 width=0) 
(actual time=7.574..7.574 rows=0 loops=3543)"
"  ->  Bitmap Index Scan on rid_dokumnr_idx 
(cost=0.00..5.13 rows=83 width=0) (actual time=0.037..0.037 rows=14 
loops=3543)"
"Index Cond: (dok.dokumnr = rid.dokumnr)"
"  ->  Bitmap Index Scan on rid_toode_idx 
(cost=0.00..63.03 rows=1354 width=0) (actual time=7.528..7.528 rows=21144 
loops=3543)"
"Index Cond: (toode = 'NAH S'::bpchar)"
"  ->  Seq Scan on artliik  (cost=0.00..2.27 rows=27 width=19) (actual 
time=0.007..0.020 rows=27 loops=567)"
"Total runtime: 26996.399 ms"

takes 26 seconds !

If I remove the last line it takes only 0 seconds:

SET SEARCH_PATH TO FIRMA1,public;
explain analyze select
   rid.toode
   FROM dok JOIN rid USING (dokumnr)
   JOIN toode USING (toode)
   LEFT JOIN artliik using(grupp,liik)
   WHERE rid.toode='NAH S'
AND dok.kuupaev BETWEEN '2007-11-01' AND '2007-12-04'

"Hash Left Join  (cost=4313.85..7702.10 rows=24 width=24) (actual 
time=10.138..48.884 rows=567 loops=1)"
"  Hash Cond: ((toode.grupp = artliik.grupp) AND (toode.liik = 
artliik.liik))"
"  ->  Nested Loop  (cost=4311.17..7699.14 rows=24 width=43) (actual 
time=10.049..47.877 rows=567 loops=1)"
"->  Index Scan using toode_pkey on toode  (cost=0.00..8.27 rows=1 
width=43) (actual time=0.043..0.046 rows=1 loops=1)"
"  Index Cond: ('NAH S'::bpchar = toode)"
"->  Hash Join  (cost=4311.17..7690.63 rows=24 width=24) (actual 
time=9.998..47.341 rows=567 loops=1)"
"  Hash Cond: (rid.dokumnr = dok.dokumnr)"
"  ->  Index Scan using rid_toode_idx on rid 
(cost=0.00..3372.45 rows=1354 width=28) (actual time=0.089..24.265 
rows=21144 loops=1)"
"Index Cond: (toode = 'NAH S'::bpchar)"
"  ->  Hash  (cost=4286.20..4286.20 rows=1998 width=4) (actual 
time=9.871..9.871 rows=3543 loops=1)"
"->  Index Scan using dok_kuupaev_idx on dok 
(cost=0.00..4286.20 rows=1998 width=4) (actual time=0.057..6.779 rows=3543 
loops=1)"
"  Index Cond: ((kuupaev >= '2007-11-01'::date) AND 
(kuupaev <= '2007-12-04'::date))"
"  ->  Hash  (cost=2.27..2.27 rows=27 width=19) (actual time=0.060..0.060 
rows=27 loops=1)"
"->  Seq Scan on artliik  (cost=0.00..2.27 rows=27 width=19) (actual 
time=0.009..0.027 rows=27 loops=1)"
"Total runtime: 49.409 ms"


How to rewrite the query

select
   rid.toode,
  artliik.*
   FROM dok JOIN rid USING (dokumnr)
   JOIN toode USING (toode)
   LEFT JOIN artliik using(grupp,liik)
   WHERE rid.toode='NAH S'
AND dok.kuupaev BETWEEN '2007-11-01' AND '2007-12-04'
and dok.kuupaev||dok.kellaaeg BETWEEN '2007-11-01' AND '2007-12-0423 59'

so it runs fast ?
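
One possible rewrite, sketched under the assumption that kellaaeg is a
time-of-day column (the column types are not shown here): keep the date and
time comparisons separate instead of concatenating them, so the planner can
estimate dok.kuupaev on its own. Given the existing BETWEEN on kuupaev, the
concatenated bound only matters on the last day:

select
   rid.toode,
   artliik.*
   FROM dok JOIN rid USING (dokumnr)
   JOIN toode USING (toode)
   LEFT JOIN artliik using(grupp,liik)
   WHERE rid.toode='NAH S'
AND dok.kuupaev BETWEEN '2007-11-01' AND '2007-12-04'
and (dok.kuupaev < '2007-12-04' or dok.kellaaeg <= '23 59')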

Andrus.


"PostgreSQL 8.2.4 on i686-pc-mingw32, compiled by GCC gcc.exe (GCC) 3.4.2 
(mingw-special)" 



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] slow UPDATE speed

2007-12-04 Thread GUO Zhijun
Hi all,

I've run into a problem: updating all rows in a table is slow.  My
procedure is:

ALTER TABLE user ADD COLUMN foo smallint;
UPDATE user SET foo = 2;
ALTER TABLE user ALTER COLUMN foo SET NOT NULL;
ALTER TABLE user ALTER COLUMN foo SET DEFAULT 2;

The 2nd step took 2600s.  It's a simple table with 0.4M records and it's
running on a test machine, that is, with no load at all.
Is there a problem with my settings, or anything else?
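
A one-pass alternative worth trying (a sketch, assuming PostgreSQL 8.0 or
later, and noting that "user" is a reserved word and needs quoting): adding
the column with its default and NOT NULL in a single ALTER fills the existing
rows once, instead of adding the column and then rewriting every row again
with UPDATE:

ALTER TABLE "user" ADD COLUMN foo smallint NOT NULL DEFAULT 2;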

Thank you.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] One or more tables?

2007-12-04 Thread rokj
On 3 dec., 01:18, [EMAIL PROTECTED] (Ron Johnson) wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> On 12/02/07 14:58, Usama Dar wrote:
>
>
>
> > On Dec 2, 2007 6:35 PM, rokj <[EMAIL PROTECTED]> wrote:
>
> >> Hi.
>
> >> For an example let me say that I have a big (over 1 million) user
> >> "base". Then every user does a lot of inserting/updating of data.
> >> Would it be better to create different tables for insert/updating for
> >> every user or  would it be better just to have one big table with all
> >> data (tables would have of course the same columns, ...). How do you
> >> cope with this kind of things?
>
> >> 1.example (1 enormous table)
> >> tablename (id, user_id, datetime, some_data)
>
> >> 2. example (a big number of tables)
> >> tablename_user_id( id, datetime, some_data)
>
> > Although  there isn't enough information in the email, but instead of
> > creating a separate table for every user, you could use one table ,
> > partitioned on userid,  that would , however, add a maint overhead whenever
> > you add a new user.
>
> Cluster by *range* of user ids, and preallocate some number of
> tablespaces.
>
> - --
> Ron Johnson, Jr.
> Jefferson LA  USA
>
>

I was just looking at
http://www.postgresql.org/docs/8.2/interactive/ddl-partitioning.html
which is what you mentioned and what I was looking
for.

So if I create a table like:
CREATE TABLE measurement (
    city_id   int not null,
    logdate   date not null,
    peaktemp  int,
    unitsales int
);

CREATE TABLE measurement_y2004m02 (
CHECK ( logdate >= DATE '2004-02-01' AND logdate < DATE
'2004-03-01' )
) INHERITS (measurement);
...
..
.

I do SELECT with:
SET constraint_exclusion = on;
SELECT count(*) FROM measurement WHERE logdate >= DATE '2004-02-01';

-

Personally I think this is a really powerful thing, since it saves a
lot of resources, especially in big "environments".

Regards,

Rok




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Including pg_dump in .NET Application

2007-12-04 Thread Penelope Dramas
Hello,

We're developing .NET application (closed-source) and would like to include
pg_dump to allow users to perform "quick database backup" within main
application.

Two questions:

1. Do we need to include any dependent files besides pg_dump.exe ?
2. Are we in violation of the PostgreSQL license if we include these files?


Thank you in advance,

Penelope




---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Postgres shutting down by itself...why?

2007-12-04 Thread Dave Horn
Thanks Tom.  I don't know what change I've made that the server seems to
like, but it ran all weekend with no problems.  But I've definitely got some
new things to consider if/when it starts having this problem again.  Thanks.

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Friday, November 30, 2007 8:22 PM
To: Dave Horn
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Postgres shutting down by itself...why?


"Dave Horn" <[EMAIL PROTECTED]> writes:
> I've recently installed Postgres 8.2.x on a RedHat Linux system.
Everything
> seems fine and normal.  I can start and stop the server without errors.
> When the server is running, everything appears normal.  The problem is
that
> about every 2 to 3 hours I find the server is just not running anymore.
> I've turned on logging at the info level and up and am not getting any
> messages that is giving me any clue why the server is shutting down.

I've never seen or heard of a case of PG shutting down "by itself".
You should be looking for outside causes.  What else is running
on that system?

If there is absolutely nothing in the server log file mentioning
a shutdown, I would tend to think that something has kill -9'd
the postmaster process.  This has been known to happen on
short-of-memory Linux systems, if you don't disable memory overcommit
(google "OOM kill" for details).

Whether that's it or not, the kernel syslog file would be a good
first stop to check for clues.

Oh, another line of thought: maybe you are launching the postmaster
under restrictive ulimit settings that kill it after not very much
runtime?

regards, tom lane


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] Vacuum output redirect

2007-12-04 Thread Wim Chalmet
Hi,

Sorry for my basic question here, but I can't figure things out. I am
running postgres in a windows environment. I would like to redirect the
output from a "vacuum full analyze verbose;" to a text file. How do I do
this?

I have tried this (the file "run_vacuum.sql" just contains this one line -
without quotes of course: "vacuum full analyze verbose;" )
psql -d sirius_lm -U postgres -f run_vacuum.sql -o vacuum.log

And also this:
psql -d sirius_lm -U postgres -f run_vacuum.sql >vacuum.log

But none of that works. Can anybody help a beginner out? Thanks!
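
For what it's worth, VACUUM VERBOSE reports arrive as notices on standard
error, so redirecting stdout alone captures nothing. A sketch that should
work in the Windows cmd shell, which supports the 2>&1 form:

psql -d sirius_lm -U postgres -f run_vacuum.sql > vacuum.log 2>&1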

-- 
Do you want to make a difference? Issue a microloan at www.kiva.org.


[GENERAL] Create function errors

2007-12-04 Thread Peck, Brian
Hey all,

 

I'm trying to create functions for the calls I'm making, to limit the
number of DB round trips I have to make (i.e. once they are all functions,
make one call that runs them in succession), and I'm getting an error.

 

The function is

 

CREATE OR REPLACE FUNCTION nearestVertex(x1 double precision, y1 double
precision, road_network character varying) RETURNS integer AS
$$
DECLARE
result integer;

BEGIN
SELECT source_id as vertex INTO result FROM $3 ORDER BY
Distance(the_geom,PointFromText(POINT( $1 $2 ))) LIMIT 1;

RETURN result;
END;
$$
LANGUAGE plpgsql;

 

And the error I get is

 

psql:D:/LMCO/AFUO/trunk/ASDFS_ODBC_2007_08_16/Scripts/PostgreSQL/functio
ns.sql:12: ERROR:  syntax error at or near "$1"

LINE 1: SELECT source_id as vertex FROM  $1  ORDER BY Distance(the_g...

 ^

QUERY:  SELECT source_id as vertex FROM  $1  ORDER BY
Distance(the_geom,PointFromText(POINT(  $2   $3  ))) LIMIT 1

CONTEXT:  SQL statement in PL/PgSQL function "nearestvertex" near line 5

 

I'm having trouble troubleshooting it since the query in the error is not
the query that should actually be performed.
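
For reference, the underlying issue is that plpgsql substitutes parameters
only where a value may appear, never as a table name, so $3 after FROM is
rejected. A sketch of the dynamic-SQL form, assuming PostgreSQL 8.1 or later
for EXECUTE ... INTO, and keeping the PostGIS calls from the original (note
that WKT points are space-separated):

CREATE OR REPLACE FUNCTION nearestVertex(x1 double precision, y1 double
precision, road_network character varying) RETURNS integer AS
$$
DECLARE
result integer;
BEGIN
-- build the statement as a string so the table name can vary
EXECUTE 'SELECT source_id FROM ' || quote_ident(road_network)
     || ' ORDER BY Distance(the_geom, PointFromText(''POINT('
     || x1::text || ' ' || y1::text || ')'')) LIMIT 1'
INTO result;
RETURN result;
END;
$$
LANGUAGE plpgsql;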

 

- Brian Peck

- 858-795-1398

- Software Engineer

- Lockheed Martin

 



Re: [GENERAL] Vacuum output redirect

2007-12-04 Thread Raymond O'Donnell

On 04/12/2007 15:27, Wim Chalmet wrote:

Sorry for my basic question here, but I can't figure things out. I am 
running postgres in a windows environment. I would like to redirect the 
output from a "vacuum full analyze verbose;" to a text file. How do I do 
this?


You could use the standalone program vacuumdb and redirect its output, thus:

  vacuumdb -U postgres your_database >out.txt 2>&1

HTH

Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] PostgreSQL Beta4 released

2007-12-04 Thread Joshua D. Drake
On Tue, 4 Dec 2007 13:59:13 -0800
"Joshua D. Drake" <[EMAIL PROTECTED]> wrote:

> 
> Thanks to all the testing, feedback and bug reports the community has
> performed with the current betas, we now have our fourth beta
> of 8.4.

Pardon. I am living 12 months in the future. This should be:

we now have our fourth beta for 8.3.

Sincerely,

Joshua D. Drake


-- 
* PostgreSQL - The world's most advanced open source database
& http://www.postgresql.org/
() Everything else pales in comparison

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Older version of PGSQL help

2007-12-04 Thread Andrew Sullivan
On Mon, Dec 03, 2007 at 03:20:45PM -0600, Ed Burgstaler wrote:
> Would anyone be able to direct me as to whether or not there is a pgadmin
> for windows utility available that will work for a PostgreSQL v7.0.3
> database? 

Not to my knowledge.  But I wouldn't spend any time on it -- you need to get
off that PostgreSQL release pronto.  Really, really pronto.

> I'm not very database literate so unless an upgrade of the PostgreSQL
> database was idiot proof I guess I'm screwed.

The upgrades are mostly not that hard.  Just read the release notes
carefully for any compatibility issues with your application.  Generally,
they're a matter of pg_dump database | psql newdatabase.  

> relied upon. Unfortunately, the hardware that the server runs on is very old
> and I'm afraid that it will crash any day which would leave them screwed so

It'll leave them "more screwed" if it happens on that release.  My
suggestion is to download and compile a more recent release -- 7.3 is about
to become unsupported, but you may need to go through that version anyway in
order to get off 7.0.  

Compiling from source isn't hard, but it may be frustrating on such an old
installed system (because probably some of the necessary support files
aren't installed, and you will have a hard time getting the necessary
packages).

My suggestion is to go out and buy a reasonable server for this system, and
move it all there as soon as you can.  If you need help with how to do that,
the pgsql-admin or -general lists are probably the right place to ask
questions, once you've read the documentation (which is pretty good on this
topic).

A

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Tuning configuration

2007-12-04 Thread Andrew Sullivan
On Tue, Dec 04, 2007 at 02:21:43PM -0500, Gauthier, Dave wrote:
> Is there something like a freeware windows client app that does DBA
> stuff for a remote server?  Sort of like TOAD for Oracle?

TOAD either does or used to work for Postgres.

A


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] PostgreSQL Beta4 released

2007-12-04 Thread Shane Ambler

Joshua D. Drake wrote:

On Tue, 4 Dec 2007 13:59:13 -0800
"Joshua D. Drake" <[EMAIL PROTECTED]> wrote:


Thanks to all the testing, feedback and bug reports the community has
performed with the current betas, we now have our fourth beta
of 8.4.


Pardon. I am living 12 months in the future. This should be:



You must be a Jedi Master as well because I can't see the source.

(at least not the tarballs)


;-)


--

Shane Ambler
[EMAIL PROTECTED]

Get Sheeky @ http://Sheeky.Biz

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Vacuum output redirect

2007-12-04 Thread Martin Gainty
does psql have write access to the folder?

M-
  - Original Message - 
  From: Wim Chalmet 
  To: pgsql-general@postgresql.org 
  Sent: Tuesday, December 04, 2007 10:27 AM
  Subject: [GENERAL] Vacuum output redirect


  Hi,

  Sorry for my basic question here, but I can't figure things out. I am running 
postgres in a windows environment. I would like to redirect the output from a 
"vacuum full analyze verbose;" to a text file. How do I do this? 

  I have tried this (the file "run_vacuum.sql" just contains this one line - 
without quotes of course: "vacuum full analyze verbose;" )
  psql -d sirius_lm -U postgres -f run_vacuum.sql -o vacuum.log

  And also this:
  psql -d sirius_lm -U postgres -f run_vacuum.sql >vacuum.log

  But none of that works. Can anybody help a beginner out? Thanks!

  -- 
  Do you want to make a difference? Issue a microloan at www.kiva.org. 

Re: [GENERAL] Avoid huge performance loss on string concatenation

2007-12-04 Thread Bill Moran
"Andrus" <[EMAIL PROTECTED]> wrote:
>
> Using string concatenation in where clause causes huge perfomance loss:
> 
> explain analyze select
>rid.toode
>FROM dok JOIN rid USING (dokumnr)
>JOIN toode USING (toode)
>LEFT JOIN artliik using(grupp,liik)
>WHERE rid.toode='NAH S'
> AND dok.kuupaev BETWEEN '2007-11-01' AND '2007-12-04'
> and dok.kuupaev||dok.kellaaeg BETWEEN '2007-11-01' AND '2007-12-0423 59'

You provide zero information on the table layout, and the explain output
has been horribly mangled by your MUA.

I would suspect the problem is that there's no index that can be used
for that final comparison.  Do you have an index along the lines of
CREATE INDEX dokindex ON dok (kuupaeve||kellaaeg) ?

Overall, the fact that you're concatenating two text fields to generate a
date field tends to suggest that your database schema has some fairly
major design problems, but I can only speculate at this point.

If neither of those help, I expect you'll need to provide more information.

> "Nested Loop Left Join  (cost=68.75..5064.86 rows=1 width=24) (actual 
> time=8.081..26995.552 rows=567 loops=1)"
> "  Join Filter: ((toode.grupp = artliik.grupp) AND (toode.liik = 
> artliik.liik))"
> "  ->  Nested Loop  (cost=68.75..5062.19 rows=1 width=43) (actual 
> time=8.045..26965.731 rows=567 loops=1)"
> "->  Index Scan using toode_pkey on toode  (cost=0.00..8.27 rows=1 
> width=43) (actual time=0.023..0.026 rows=1 loops=1)"
> "  Index Cond: ('NAH S'::bpchar = toode)"
> "->  Nested Loop  (cost=68.75..5053.91 rows=1 width=24) (actual 
> time=8.016..26964.698 rows=567 loops=1)"
> "  ->  Index Scan using dok_kuupaev_idx on dok 
> (cost=0.00..4326.16 rows=10 width=4) (actual time=0.059..67.985 rows=3543 
> loops=1)"
> "Index Cond: ((kuupaev >= '2007-11-01'::date) AND 
> (kuupaev <= '2007-12-04'::date))"
> "Filter: kuupaev)::text || (kellaaeg)::text) >= 
> '2007-11-01'::text) AND (((kuupaev)::text || (kellaaeg)::text) <= 
> '2007-12-0423 59'::text))"
> "  ->  Bitmap Heap Scan on rid  (cost=68.75..72.76 rows=1 
> width=28) (actual time=7.577..7.577 rows=0 loops=3543)"
> "Recheck Cond: ((dok.dokumnr = rid.dokumnr) AND 
> (rid.toode = 'NAH S'::bpchar))"
> "->  BitmapAnd  (cost=68.75..68.75 rows=1 width=0) 
> (actual time=7.574..7.574 rows=0 loops=3543)"
> "  ->  Bitmap Index Scan on rid_dokumnr_idx 
> (cost=0.00..5.13 rows=83 width=0) (actual time=0.037..0.037 rows=14 
> loops=3543)"
> "Index Cond: (dok.dokumnr = rid.dokumnr)"
> "  ->  Bitmap Index Scan on rid_toode_idx 
> (cost=0.00..63.03 rows=1354 width=0) (actual time=7.528..7.528 rows=21144 
> loops=3543)"
> "Index Cond: (toode = 'NAH S'::bpchar)"
> "  ->  Seq Scan on artliik  (cost=0.00..2.27 rows=27 width=19) (actual 
> time=0.007..0.020 rows=27 loops=567)"
> "Total runtime: 26996.399 ms"
> 
> takes 26 seconds !
> 
> If I remove last line it takes only 0 seconds:
> 
> SET SEARCH_PATH TO FIRMA1,public;
> explain analyze select
>rid.toode
>FROM dok JOIN rid USING (dokumnr)
>JOIN toode USING (toode)
>LEFT JOIN artliik using(grupp,liik)
>WHERE rid.toode='NAH S'
> AND dok.kuupaev BETWEEN '2007-11-01' AND '2007-12-04'
> 
> "Hash Left Join  (cost=4313.85..7702.10 rows=24 width=24) (actual 
> time=10.138..48.884 rows=567 loops=1)"
> "  Hash Cond: ((toode.grupp = artliik.grupp) AND (toode.liik = 
> artliik.liik))"
> "  ->  Nested Loop  (cost=4311.17..7699.14 rows=24 width=43) (actual 
> time=10.049..47.877 rows=567 loops=1)"
> "->  Index Scan using toode_pkey on toode  (cost=0.00..8.27 rows=1 
> width=43) (actual time=0.043..0.046 rows=1 loops=1)"
> "  Index Cond: ('NAH S'::bpchar = toode)"
> "->  Hash Join  (cost=4311.17..7690.63 rows=24 width=24) (actual 
> time=9.998..47.341 rows=567 loops=1)"
> "  Hash Cond: (rid.dokumnr = dok.dokumnr)"
> "  ->  Index Scan using rid_toode_idx on rid 
> (cost=0.00..3372.45 rows=1354 width=28) (actual time=0.089..24.265 
> rows=21144 loops=1)"
> "Index Cond: (toode = 'NAH S'::bpchar)"
> "  ->  Hash  (cost=4286.20..4286.20 rows=1998 width=4) (actual 
> time=9.871..9.871 rows=3543 loops=1)"
> "->  Index Scan using dok_kuupaev_idx on dok 
> (cost=0.00..4286.20 rows=1998 width=4) (actual time=0.057..6.779 rows=3543 
> loops=1)"
> "  Index Cond: ((kuupaev >= '2007-11-01'::date) AND 
> (kuupaev <= '2007-12-04'::date))"
> "  ->  Hash  (cost=2.27..2.27 rows=27 width=19) (actual time=0.060..0.060 
> rows=27 loops=1)"
> "->  Seq Scan on artliik  (cost=0.00..2.27 rows=27 width=19) (actual 
> time=0.009..0.027 rows=27 loops=1)"
> "Total runtime: 49.409 ms"
> 
> 
> How to rewrite the query
> 
> select
>rid.toode,
>   artli

Re: [GENERAL] Avoid huge performance loss on string concatenation

2007-12-04 Thread Merlin Moncure
On Dec 4, 2007 8:02 PM, Bill Moran <[EMAIL PROTECTED]> wrote:
> "Andrus" <[EMAIL PROTECTED]> wrote:
> > Using string concatenation in the where clause causes a huge performance loss:
> >
> > explain analyze select
> >rid.toode
> >FROM dok JOIN rid USING (dokumnr)
> >JOIN toode USING (toode)
> >LEFT JOIN artliik using(grupp,liik)
> >WHERE rid.toode='NAH S'
> > AND dok.kuupaev BETWEEN '2007-11-01' AND '2007-12-04'
> > and dok.kuupaev||dok.kellaaeg BETWEEN '2007-11-01' AND '2007-12-0423 59'

Can you please give us the types of dok.kuupaev and dok.kellaaeg?  I
think a simple fix is possible here.


> You provide zero information on the table layout, and the explain output
> has been horribly mangled by your MUA.
>
> I would suspect the problem is that there's no index that can be used
> for that final comparison.  Do you have an index along the lines of
> CREATE INDEX dokindex ON dok (kuupaeve||kellaaeg) ?
>
> Overall, the fact that you're concatenating two text fields to generate a
> date field tends to suggest that your database schema has some fairly
> major design problems, but I can only speculate at this point.

Just a small correction here... expressions like that in CREATE INDEX
need an extra set of parens (but I agree with your sentiment):
CREATE INDEX dokindex ON dok ((kuupaeve||kellaaeg))
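
To illustrate with the thread's own columns (a sketch; the actual types
were never posted, so the explicit casts are assumptions):

CREATE INDEX dok_kuup_kell_idx ON dok ((kuupaev::text || kellaaeg::text));
-- the WHERE clause must repeat the same expression for the index to apply:
--   AND (kuupaev::text || kellaaeg::text)
--       BETWEEN '2007-11-01' AND '2007-12-0423 59'
-- note: CREATE INDEX will reject this if the date-to-text cast isn't
-- immutable on your version, in which case a real timestamp column
-- (see the sketch earlier in the thread) avoids the problem entirely.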

merlin

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Create function errors

2007-12-04 Thread Rodrigo De León
On Dec 4, 2007 6:04 PM, Peck, Brian <[EMAIL PROTECTED]> wrote:

> SELECT source_id as vertex INTO result FROM $3 ORDER BY
> Distance(the_geom,PointFromText(POINT( $1 $2 ))) LIMIT 1;

I think you're missing a comma, e.g. POINT( $1 , $2 ).
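
For what it's worth, if this is PostGIS's PointFromText, the WKT
literal also has to reach the function as one quoted string (WKT point
coordinates are space-separated). A sketch of a dynamic version, all of
it inferred rather than taken from the original post (assumes PL/pgSQL,
EXECUTE ... INTO, and that $3 is a table name):

EXECUTE 'SELECT source_id AS vertex FROM ' || quote_ident($3)
     || ' ORDER BY Distance(the_geom, PointFromText(''POINT('
     || $1::text || ' ' || $2::text || ')'')) LIMIT 1'
  INTO result;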

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Windows Client App Was [Re: [GENERAL] Tuning configuration]

2007-12-04 Thread Ow Mun Heng

On Tue, 2007-12-04 at 14:21 -0500, Gauthier, Dave wrote:
> Is there something like a freeware windows client app that does DBA
> stuff for a remote server?  Sort of like TOAD for Oracle?

pgadmin3?
and please don't hijack threads

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Archiving problem on Windows

2007-12-04 Thread Paul Lambert

Christian Rengstl wrote:

show config_file points to the right one. I restarted the server after
every change. After including the quotation marks as in the docs, the
quotation marks also appeared in the command, so at least it shows that
the config file was read.



How about "show archive_command;"? Just to verify that it's reading in
the right thing.


e.g.
My postgresql.conf:
archive_command = 'copy "%p" E:\\PostgreSQL\\WAL_Archive\\%f'

From pgsql:
postgres=# show archive_command;
             archive_command
-----------------------------------------
 copy "%p" E:\PostgreSQL\WAL_Archive\%f

Cheers,
Paul.

--
Paul Lambert
Database Administrator
AutoLedgers - A Reynolds & Reynolds Company


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] bug with to_char('2007-12-31'::DATE, 'YYYYIW')

2007-12-04 Thread Tom Lane
Daniel Schuchardt <[EMAIL PROTECTED]> writes:
> LOLL2=# SELECT to_char('2007-12-31'::DATE, 'YYYYIW');
>  to_char
> ---------
>  200701
> (1 row)

> 2007-12-31 should be week 2008-01

No, it shouldn't.  YYYY is defined to deliver the calendar year.
For ISO year use IYYY.
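
For illustration, the two spellings side by side:

SELECT to_char(DATE '2007-12-31', 'IYYYIW');  -- 200801: ISO year + ISO week
SELECT to_char(DATE '2007-12-31', 'YYYYIW');  -- 200701: calendar year glued
                                              -- to ISO week, rarely wanted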

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] ldap authentication allows logon with blank password

2007-12-04 Thread lighthouse . software
Here is the log output when I try different passwords:

When I enter an incorrect password I get the following:

[unknown] [unknown]  2007-12-05 13:55:29 CST LOG:  connection
received: host=111.111.111.111 port=1791
user test_db 111.111.111.111 2007-12-05 13:55:29 CST DEBUG:  received
password packet
user test_db 111.111.111.111 2007-12-05 13:55:29 CST LOG:  LDAP login
failed for user "DOMAIN\user" on server "pdc": error code 49
user test_db 111.111.111.111 2007-12-05 13:55:29 CST FATAL:  LDAP
authentication failed for user "user"
user test_db 111.111.111.111 2007-12-05 13:55:29 CST DEBUG:
proc_exit(1)
user test_db 111.111.111.111 2007-12-05 13:55:29 CST DEBUG:
shmem_exit(1)
user test_db 111.111.111.111 2007-12-05 13:55:29 CST DEBUG:  exit(1)
   2007-12-05 13:55:29 CST DEBUG:  reaping dead processes
   2007-12-05 13:55:29 CST DEBUG:  server process (PID 29417) exited
with exit code 1

When I enter a blank password I get:

[unknown] [unknown]  2007-12-05 13:56:53 CST LOG:  connection
received: host=111.111.111.111 port=1962
user test_db 111.111.111.111 2007-12-05 13:56:53 CST DEBUG:  received
password packet
user test_db 111.111.111.111 2007-12-05 13:56:53 CST LOG:  connection
authorized: user=user database=test_db
user test_db 111.111.111.111 2007-12-05 13:56:53 CST DEBUG:  postgres
child[29422]: starting with (
user test_db 111.111.111.111 2007-12-05 13:56:53 CST DEBUG:
postgres
user test_db 111.111.111.111 2007-12-05 13:56:53 CST DEBUG:-
v196608
user test_db 111.111.111.111 2007-12-05 13:56:53 CST DEBUG:-y
user test_db 111.111.111.111 2007-12-05 13:56:53 CST DEBUG:test_db
user test_db 111.111.111.111 2007-12-05 13:56:53 CST DEBUG:

And when I enter the correct password I get:

[unknown] [unknown]  2007-12-05 13:58:27 CST LOG:  connection
received: host=111.111.111.111 port=2152
user test_db 111.111.111.111 2007-12-05 13:58:27 CST DEBUG:  received
password packet
user test_db 111.111.111.111 2007-12-05 13:58:27 CST LOG:  connection
authorized: user=user database=test_db
user test_db 111.111.111.111 2007-12-05 13:58:27 CST DEBUG:  postgres
child[29445]: starting with (
user test_db 111.111.111.111 2007-12-05 13:58:27 CST DEBUG:
postgres
user test_db 111.111.111.111 2007-12-05 13:58:27 CST DEBUG:-
v196608
user test_db 111.111.111.111 2007-12-05 13:58:27 CST DEBUG:-y
user test_db 111.111.111.111 2007-12-05 13:58:27 CST DEBUG:test_db
user test_db 111.111.111.111 2007-12-05 13:58:27 CST DEBUG:

The difference is when I connect with the blank password the domain
account gets locked out.

Is there anywhere else I can look to help diagnose the problem?

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Deadlock when updating table partitions (and presumed solution)

2007-12-04 Thread Tom Lane
Paul Boddie <[EMAIL PROTECTED]> writes:
> I have one process querying a table P with partitions P0, P1, P2, ...
> Pn joined with table R as follows:
> select * from R inner join P on R.id = P.id and P.section = 5
> ...
> I have another process performing updates to individual partitions of
> P - specifically "alter table" operations adding foreign key
> constraints referencing R as follows:
> alter table Pm add constraint Pm_fk_id foreign key(id) references
> R(id)

Yeah, this is a problem.  The SELECT will acquire AccessShareLock
on R and P, and subsequently try to acquire AccessShareLock on all
the inheritance children of P (and I don't think the order in which
these locks are acquired is very clear).  Meanwhile the ALTER acquires
AccessExclusiveLock on Pm and R --- probably in that order, though
I'd not really want to promise that ordering either.  So the potential
for deadlock is obvious.

You seem to be hoping that the SELECT would avoid acquiring lock
on child tables Pn that it didn't need to access, but this cannot be:
it has to get at least AccessShareLock on those tables before it can
even examine their constraints to find out that they don't need to be
scanned.  And even if it could magically not take those locks, the
deadlock condition still exists with regard to the child table that
it *does* need to access.

I guess I'm wondering why you need to be adding foreign key constraints
during live operations.
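
One conventional workaround (a sketch, not from this thread) is to make
the DDL job take its locks up front, in the same order the SELECT
acquires them, so R is always locked before the child partition:

BEGIN;
LOCK TABLE r IN ACCESS EXCLUSIVE MODE;  -- grab R first, before touching pm
ALTER TABLE pm ADD CONSTRAINT pm_fk_id
    FOREIGN KEY (id) REFERENCES r (id);
COMMIT;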

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Transaction isolation and constraints

2007-12-04 Thread Tom Lane
[EMAIL PROTECTED] writes:
> suppose a table has a UNIQUE constraint on a column, and two
> concurrent transactions attempt to INSERT a row with the same value
> for that column:

Whichever one manages to get to the index page first will go through.
The second one will block waiting to see if the first one commits,
and will error out if so --- or proceed, if it aborts.
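
A sketch of what that looks like in practice (table and column are
hypothetical):

CREATE TABLE t (u integer UNIQUE);
-- session A:  BEGIN; INSERT INTO t VALUES (1);  -- takes the index entry
-- session B:  BEGIN; INSERT INTO t VALUES (1);  -- blocks, waiting on A
-- session A:  COMMIT;
-- session B:  now fails with a duplicate-key error on "t_u_key"
--             (had A rolled back instead, B's insert would have proceeded)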

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Older version of PGSQL help

2007-12-04 Thread Greg Smith

On Mon, 3 Dec 2007, Ed Burgstaler wrote:


Would anyone be able to direct me as to whether or not there is a pgadmin
for windows utility available that will work for a PostgreSQL v7.0.3
database?


From reading the rest of your message, I think you're under the impression 
that you need pgadmin in order to migrate the database to a newer 
platform.  You do not; the main utility you need is pg_dump, and you 
should be able to run it just fine directly from the command prompt (when 
logged in as the database user) on your RH6.2 system, or from a newer 
installation connecting to the old one.  The output from that is a text 
file which you can then copy anywhere, or reload into a newer version 
(with some restrictions as discussed below).


Before you do anything else, it would be wise to login to the old server, 
run pg_dump to dump everything, and copy that file elsewhere.  That will 
at least give you some sort of backup if the whole system dies on you 
before you can migrate to a newer version.


On Tue, 4 Dec 2007, Andrew Sullivan wrote:


My suggestion is to download and compile a more recent release -- 7.3 is
about to become unsupported, but you may need to go through that version
anyway in order to get off 7.0.  Compiling from source isn't hard, but
it may be frustrating on such an old installed system


Ed should be able to get 7.3 running on his target CentOS boxes using the 
RPMs at ftp://ftp.postgresql.org/pub/binary/v7.3.20/linux/rpms/redhat/


The issue Andrew is bringing up here is that really new PostgreSQL 
versions probably aren't necessarily backward compatible talking to or 
reading dumps from your 7.0 system, so your odds are better trying to 
upgrade to 7.3 instead of a really recent one.  But be warned that 7.3 is 
going away relatively soon as well.


I think the path of least resistance here is:
1) Pick a target CentOS system.  Make sure there is no current PostgreSQL 
installed.

2) Grab the 7.3 RPMs from ftp.postgresql.org, install them
3) Create a new database cluster ('service postgresql start' may be all you 
need)
4) Run pg_dump against the old system.  See 
http://www.postgresql.org/docs/7.3/static/app-pgdump.html for details. 
You want to do something like


pg_dump -h <old server hostname> <database name> > dump.sql

You may need to specify the port and username if they're not at the 
default as well.  Hopefully the old server is setup to be accessed over 
the network like this.


5) If that goes OK, import into the new version with 'psql -f dump.sql'

That should get you started in the right direction.
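
Putting steps 4 and 5 together, a sketch (host, user and database names
are placeholders):

pg_dump -h oldserver -U postgres mydb > dump.sql
createdb mydb
psql -d mydb -f dump.sql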

--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Server crashed and now experiencing slow running queries

2007-12-04 Thread Keaton Adams

We have two servers configured the same way running the same type of
processes that write/read to the database.  Server 2 filled up pg_xlog and
crashed.  When it came back we began to experience slow query performance.
I ran an ANALYZE against the tables involved in the query, but for some
reason the optimizer is still choosing a hash join instead of a nested loop
join, which is killing performance.  The query on Server 1 runs in 14
seconds and the same query on Server 2 runs in 15 minutes:

Server 1 
   ->  HashAggregate  (cost=501922.84..501922.85
rows=1 width=532)
 ->  Nested Loop  (cost=250961.41..501922.83
rows=1 width=532)
   Join Filter: (("outer".host)::text =
("inner".host)::text)
   ->  HashAggregate
(cost=250961.40..250961.41 rows=1 width=26)
 ->  Index Scan using
mxl_fs_size_did_cre_idx on mxl_fs_size  (cost=0.01..250961.40 rows=1
width=26)
   Index Cond: ((created >=
(now() - '01:00:00'::interval)) AND (created <= now()))
   ->  Index Scan using
mxl_fs_size_did_cre_idx on mxl_fs_size lh  (cost=0.01..250961.40 rows=1
width=42)
 Index Cond: ((created >= (now()
- '01:00:00'::interval)) AND (created <= now()))


Server 2

   ->  HashAggregate  (cost=1814101.48..1814129.36
rows=2230 width=532)
 ->  Hash Join  (cost=906978.28..1814079.18
rows=2230 width=532)
   Hash Cond: (("outer".host)::text =
("inner".host)::text)
   ->  Index Scan using
mxl_fs_size_did_cre_idx on mxl_fs_size lh  (cost=0.01..906877.88 rows=40147
width=42)
 Index Cond: ((created >= (now()
- '01:00:00'::interval)) AND (created <= now()))
   ->  Hash  (cost=906978.27..906978.27
rows=1 width=516)
 ->  HashAggregate
(cost=906978.25..906978.26 rows=1 width=26)
   ->  Index Scan using
mxl_fs_size_did_cre_idx on mxl_fs_size  (cost=0.01..906877.88 rows=40147
width=26)
 Index Cond:
((created >= (now() - '01:00:00'::interval)) AND (created <= now()))

Besides ANALYZE, what else can I do / look at to figure out why the
optimizer is making the choices it is on Server 2, now causing slow
performance problems?

Thanks,

Keaton



Re: [GENERAL] Server crashed and now experiencing slow running queries

2007-12-04 Thread Keaton Adams

We're running PostgreSQL 8.1.4 on RHEL.  I'm running a vacuum analyze on the
mxl_fs_size table to see if that shows anything.
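
For reference, a sketch of the kind of before/after check involved
(table name from the thread):

VACUUM ANALYZE mxl_fs_size;
-- compare the planner's idea of the table with reality:
SELECT relname, reltuples, relpages
  FROM pg_class
 WHERE relname = 'mxl_fs_size';
-- then re-run EXPLAIN ANALYZE on the slow query and see whether the
-- rows=40147 estimates come back in line with the actual row counts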

-Keaton


On 12/4/07 10:50 PM, "Keaton Adams" <[EMAIL PROTECTED]> wrote:

> 
> We have two servers configured the same way running the same type of processes
> that write/read to the database.  Server 2 filled up pg_xlog and crashed.
> When it came back we began to experience slow query performance.  I ran an
> ANALYZE against the tables involved in the query, but for some reason the
> optimizer is still choosing a hash join instead of a nested loop join, which
> is killing performance.  The query on Server 1 runs in 14 seconds and the same
> query on Server 2 runs in 15 minutes:
> 
> Server 1 
>->  HashAggregate  (cost=501922.84..501922.85
> rows=1 width=532)
>  ->  Nested Loop  (cost=250961.41..501922.83
> rows=1 width=532)
>Join Filter: (("outer".host)::text =
> ("inner".host)::text)
>->  HashAggregate
> (cost=250961.40..250961.41 rows=1 width=26)
>  ->  Index Scan using
> mxl_fs_size_did_cre_idx on mxl_fs_size  (cost=0.01..250961.40 rows=1 width=26)
>Index Cond: ((created >=
> (now() - '01:00:00'::interval)) AND (created <= now()))
>->  Index Scan using
> mxl_fs_size_did_cre_idx on mxl_fs_size lh  (cost=0.01..250961.40 rows=1
> width=42)
>  Index Cond: ((created >= (now() -
> '01:00:00'::interval)) AND (created <= now()))
> 
> 
> Server 2
> 
>->  HashAggregate  (cost=1814101.48..1814129.36
> rows=2230 width=532)
>  ->  Hash Join  (cost=906978.28..1814079.18
> rows=2230 width=532)
>Hash Cond: (("outer".host)::text =
> ("inner".host)::text)
>->  Index Scan using
> mxl_fs_size_did_cre_idx on mxl_fs_size lh  (cost=0.01..906877.88 rows=40147
> width=42)
>  Index Cond: ((created >= (now() -
> '01:00:00'::interval)) AND (created <= now()))
>->  Hash  (cost=906978.27..906978.27
> rows=1 width=516)
>  ->  HashAggregate
> (cost=906978.25..906978.26 rows=1 width=26)
>->  Index Scan using
> mxl_fs_size_did_cre_idx on mxl_fs_size  (cost=0.01..906877.88 rows=40147
> width=26)
>  Index Cond: ((created
> >= (now() - '01:00:00'::interval)) AND (created <= now()))
> 
> Besides ANALYZE, what else can I do / look at to figure out why the optimizer
> is making the choices it is on Server 2, now causing slow performance
> problems?
> 
> Thanks,
> 
> Keaton
> 




[GENERAL] Conservative postgresql.conf made by initdb?

2007-12-04 Thread rihad
In postgresql.conf generated by initdb shared_buffers is set to 32MB 
even though there was more available (see below; also ipcs shows 
postgres is (and will be) the only shared memory user). Is this enough 
or maybe it's less than ok? I don't know. What I do know is that MySQL 
4.0.x uses 500-550 mb RAM for similar access patterns. Also, 
max_connections preset to 40 is more than ok for our case. Is 
max_fsm_pages = 204800 OK ? Should I increase it if I expect 15-20 gb 
disk usage by db?


FreeBSD 7.0-BETA3 PostgreSQL 8.3 beta2

sysctl settings at the time initdb ran more or less resembled the 
recommendations given here: 
http://www.postgresql.org/docs/8.3/static/kernel-resources.html#SYSVIPC

namely:

kern.ipc.shmmax: 134217728
kern.ipc.shmmin: 1
kern.ipc.shmall: 32768
kern.ipc.shmseg: 128
kern.ipc.shmmni: 192
kern.ipc.semmns: 60
kern.ipc.semmsl: 60
kern.ipc.semmap: 256
kern.ipc.semvmx: 32767

Maybe I should increase shmall/shmmax and rerun initdb to maximize 
performance of expectedly 25-30 concurrent connections, which are 
"persistent" and hence many of which are idle at the low hours?


Thanks.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Conservative postgresql.conf made by initdb?

2007-12-04 Thread Robert Treat
On Wednesday 05 December 2007 01:45, rihad wrote:
> In postgresql.conf generated by initdb shared_buffers is set to 32MB
> even though there was more available (see below; also ipcs shows
> postgres is (and will be) the only shared memory user). Is this enough
> or maybe it's less than ok?

Chances are it's too low, but the installer doesn't assume that it 
will be the sole user of RAM on the box. 

> I don't know. What I do know is that MySQL 
> 4.0.x uses 500-550 mb RAM for similar access patterns.

Unfortunately this doesn't tell us much, since depending on the table types 
you're using, you have to configure MySQL RAM usage in significantly different 
ways.

> Also, 
> max_connections preset to 40 is more than ok for our case. Is
> max_fsm_pages = 204800 OK ? Should I increase it if I expect 15-20 gb
> disk usage by db?
>

You probably need to increase it; the problem is that the amount of data 
modification your database sees affects just how high you need this to be. 

> FreeBSD 7.0-BETA3 PostgreSQL 8.3 beta2
>

Beta2? Get with the times man! Beta4 has been out for at least... several 
hours at this point. 

> sysctl settings at the time initdb ran more or less resembled the
> recommendations given here:
> http://www.postgresql.org/docs/8.3/static/kernel-resources.html#SYSVIPC
> namely:
>
> kern.ipc.shmmax: 134217728
> kern.ipc.shmmin: 1
> kern.ipc.shmall: 32768
> kern.ipc.shmseg: 128
> kern.ipc.shmmni: 192
> kern.ipc.semmns: 60
> kern.ipc.semmsl: 60
> kern.ipc.semmap: 256
> kern.ipc.semvmx: 32767
>
> Maybe I should increase shmall/shmmax and rerun initdb to maximize
> performance of expectedly 25-30 concurrent connections, which are
> "persistent" and hence many of which are idle at the low hours?
>

Nah, initdb is not a tuning device. I'd recommend setting your shared_buffers 
to 20% of total RAM for starters. You'll also want to set your 
effective_cache_size accordingly.  After that, google and benchmarking are 
your friends. HTH.
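
To make that concrete, a sketch (values are illustrative only and assume
PostgreSQL is the main RAM consumer on a ~2GB box; adjust to taste):

# postgresql.conf
shared_buffers = 400MB         # roughly 20% of total RAM
effective_cache_size = 1GB     # rough estimate of the OS cache available
max_fsm_pages = 500000         # scale with expected update/delete volume

Note that kern.ipc.shmmax (134217728 = 128MB above) has to be raised
before shared_buffers that large will even start.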

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Including pg_dump in .NET Application

2007-12-04 Thread Magnus Hagander
> We're developing .NET application (closed-source) and would like to include
> pg_dump to allow users to perform "quick database backup" within main
> application.
> 
> Two questions:
> 
> 1. Do we need to include any dependent files besides pg_dump.exe ?

IIRC, you need libpq and its dependencies. Check with depends.exe from the MS 
support tools. Other than the linked DLLs, nothing should be required.


> 2. Are we in violation of PostgreSQL license if we include this files?

No, that's allowed.

/Magnus


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Archiving problem on Windows

2007-12-04 Thread Christian Rengstl



Christian Rengstl M.A.
Klinik und Poliklinik für Innere Medizin II
Kardiologie - Forschung
Universitätsklinikum Regensburg
B3 1.388
Franz-Josef-Strauss-Allee 11
93053 Regensburg
Tel.: +49-941-944-7230




>>> On Wed, Dec 5, 2007 at  2:47 AM, in message
<[EMAIL PROTECTED]>,
Paul Lambert <[EMAIL PROTECTED]> wrote: 
> Christian Rengstl wrote:
>> show config_file points to the right one. I restarted the server after
>> every change. After including the quotation marks as in the docs, the
>> quotation marks also appeared in the command, so at least it shows that
>> the config file was read.
>> 
> 
> How about show archive_command;? Just to verify that it's reading in the
> right thing.
> 
> e.g.
> My postgresql.conf:
> archive_command = 'copy "%p" E:\\PostgreSQL\\WAL_Archive\\%f'
> 
>  From pgsql:
> postgres=# show archive_command;
>          archive_command
> -----------------------------------------
>  copy "%p" E:\PostgreSQL\WAL_Archive\%f
> 
> Cheers,
> Paul.
> 
> -- 
> Paul Lambert
> Database Administrator
> AutoLedgers - A Reynolds & Reynolds Company
> 
> 
> ---(end of broadcast)---
> TIP 5: don't forget to increase your free space map settings

The archive_command is 'copy %p C:\\Archive\\DBArchive\\%f'; I
nevertheless changed it to your version, but without success.


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Archiving problem on Windows

2007-12-04 Thread Christian Rengstl



Christian Rengstl M.A.
Klinik und Poliklinik für Innere Medizin II
Kardiologie - Forschung
Universitätsklinikum Regensburg
B3 1.388
Franz-Josef-Strauss-Allee 11
93053 Regensburg
Tel.: +49-941-944-7230




>>> On Wed, Dec 5, 2007 at  8:25 AM, in message
<[EMAIL PROTECTED]>,
Paul Lambert <[EMAIL PROTECTED]> wrote: 
> Christian Rengstl wrote:
>> The archive_command is 'copy %p C:\\Archive\\DBArchive\\%f',
>> nevertheless I changed it to your version, but without success.
>> 
>> 
>> ---(end of broadcast)---
>> TIP 5: don't forget to increase your free space map settings
>> 
>> 
> 
> And what does pgsql show when you do a "show archive_command;"
> 
> -- 
> Paul Lambert
> Database Administrator
> AutoLedgers - A Reynolds & Reynolds Company
> 
> ---(end of broadcast)---
> TIP 9: In versions below 8.0, the planner will ignore your desire to
>        choose an index scan if your joining column's datatypes do not
>        match

It shows: "copy "%p" F:\TransaktionsLogArchiv\%f". I changed the
directory the files are archived to, to see if it would help, but it
doesn't.


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match

