Re: [GENERAL] PostgreSQL Installer for Windows x64

2007-06-20 Thread Magnus Hagander
On Wed, Jun 20, 2007 at 10:17:10AM +0900, EBIHARA, Yuichiro wrote:
> Hi,
> 
> Can I get a PostgreSQL Installer for Windows x64(EM64T)?
> That for 32bit Windows is available at http://www.postgresql.org/ftp/win32/ 
> but I need x64 native
> version.

There is no such thing. PostgreSQL 64-bit is currently only supported on
Unix based platforms.

The 32-bit version for Windows works just fine on 64-bit windows, though.

//Magnus

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Excell

2007-06-20 Thread Garry Saddington
On Wednesday 20 June 2007 03:09, Bob Pawley wrote:
> Thanks
>
> Does one version of ODBC work for all versions of Excel and Postgresql.
>
> I am wanting to transfer one or two tables from Excel and manipulate the
> information in Postgresql then transfer the results back to Excel as a
> single table.
>
> I am using Excel 2000 and PostgreSql 8.1.
>
Saving as CSV from Excel and using Postgres's 'copy' command to import the data
would be a straightforward solution; it works both ways (COPY TO and COPY FROM).
http://www.postgresql.org/docs/8.2/static/sql-copy.html
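For illustration, a minimal sketch of that round trip, using hypothetical table
and file names (syntax as on the COPY page above):

-- Hypothetical staging table for the spreadsheet data.
CREATE TABLE excel_import (id integer, name text, amount numeric);

-- Import the CSV saved from Excel (server-side path; in psql, \copy
-- reads a client-side file instead).
COPY excel_import FROM '/tmp/sheet1.csv' WITH CSV HEADER;

-- ...manipulate the data in PostgreSQL, then export it back as CSV.
COPY excel_import TO '/tmp/result.csv' WITH CSV HEADER;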

Regards
Garry

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Dynamic Log trigger (plpgsql)

2007-06-20 Thread Sergey Konoplev

My Question:
How can I do "OLD.columnName != NEW.columnName" if I don't know what the
columnNames are at Compile Time?
I have the columnName in a variable.



I suggest you use plpython. In this case you'll be able to do it.

TD['old'][colNameVar] != TD['new'][colNameVar]
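For example, a minimal sketch of such a trigger in pl/pythonu (the table and
function names are hypothetical):

CREATE OR REPLACE FUNCTION log_changed_columns() RETURNS trigger AS $$
# Compare OLD and NEW by column name at run time and report what changed.
changed = []
for col in TD["new"].keys():
    if TD["old"][col] != TD["new"][col]:
        changed.append(col)
if changed:
    plpy.notice("changed columns: " + ", ".join(changed))
return None   # let the UPDATE proceed unmodified
$$ LANGUAGE plpythonu;

CREATE TRIGGER log_changes BEFORE UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE log_changed_columns();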

--
Regards,
Sergey Konoplev

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] PostgreSQL Installer for Windows x64

2007-06-20 Thread EBIHARA, Yuichiro
Magnus,

Thank you for your quick reply.

> > Can I get a PostgreSQL Installer for Windows x64(EM64T)?
> > That for 32bit Windows is available at 
> > http://www.postgresql.org/ftp/win32/ but I need x64 native version.
> 
> There is no such thing. PostgreSQL 64-bit is currently only 
> supported on Unix based platforms.

I thought I could run 64-bit PG on Windows if I compiled it myself...

> The 32-bit version for Windows works just fine on 64-bit 
> windows, though.

I'll go this way.

BTW, does nobody want to utilize a much larger cache?

Thanks,

ebi



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] [NOVICE] Recursive relationship - preventing cross-index entries.

2007-06-20 Thread Richard Huxton

Michael Glaesemann wrote:
INSERT INTO accounts_receivable_receipts (accounts_receivable_id, 
receipt_id)

VALUES (2, 1), (4, 3), (6, 1), (5, 3);

I have not done much accounting-style design, and I don't think this is 
really the best way to set these up (for example, I think it's a bit odd 
to map these amounts against each other without indicating how much of 
the amount is matched), but without more information about your business 
logic, I don't really know what else to suggest.


In most I've seen, you have an "allocations" table linking payments 
received against individual invoices. So, you might have payment 1234 
with £100 against inv #10001 and £150 against inv #10002.


The allocations table tends to have an "amount unallocated" column too,
imposing an order on the allocations. Not strictly necessary from a
database point of view, but I suspect left over from manual methods of
working.
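As a rough sketch of that kind of allocations table (the names, types, and the
referenced payments/invoices tables are all hypothetical):

CREATE TABLE allocations (
    payment_id integer NOT NULL REFERENCES payments,
    invoice_id integer NOT NULL REFERENCES invoices,
    amount     numeric(12,2) NOT NULL CHECK (amount > 0),
    PRIMARY KEY (payment_id, invoice_id)
);

-- Payment 1234: 100 against invoice 10001 and 150 against invoice 10002.
INSERT INTO allocations VALUES (1234, 10001, 100.00);
INSERT INTO allocations VALUES (1234, 10002, 150.00);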


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] PostgreSQL Installer for Windows x64

2007-06-20 Thread Magnus Hagander
On Wed, Jun 20, 2007 at 04:31:40PM +0900, EBIHARA, Yuichiro wrote:
> Magnus,
> 
> Thank you for your quick reply.
> 
> > > Can I get a PostgreSQL Installer for Windows x64(EM64T)?
> > > That for 32bit Windows is available at 
> > > http://www.postgresql.org/ftp/win32/ but I need x64 native version.
> > 
> > There is no such thing. PostgreSQL 64-bit is currently only 
> > supported on Unix based platforms.
> 
> I thought I could run 64-bit PG on Windows if I compiled it myself...

Not yet. We hope to do this in the future, but we're not there yet.
Basically, 8.2 and earlier relied on MinGW for building, which cannot do
64-bit. 8.3 will also fully support MSVC, which will be the default
environment for the binary build. And since MSVC supports 64-bit, that's a
good start on our path to supporting native 64-bit.

> > The 32-bit version for Windows works just fine on 64-bit 
> > windows, though.
> 
> I'll go this way.
> 
> BTW, does nobody want to utilize a much larger cache?

You can do that with the 32-bit version! PostgreSQL mainly relies on the
filesystem cache for this, and 32-bit pg on 64-bit Windows will utilize
that one just fine. The advantages of a native 64-bit version are mainly
that you can use more *sort memory* (not a very common requirement), and
that you can use more registers and new instructions.

//Magnus

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] unexpected shutdown

2007-06-20 Thread Marco Colombo

[EMAIL PROTECTED] wrote:

[EMAIL PROTECTED] writes:

My database has shutdown several times in the last couple days.  I have
no
idea why.  I am running centos and I have not rebooted the server or
made
any configuration changes.


Oh, I forgot. You do have plenty of swap space compared to RAM, yes? If
you're running w/o swap, or with little swap, then with overcommit_memory = 2
the default overcommit_ratio (50) will cut your available RAM by a factor of 2.

This thread is interesting reading:
http://www.mail-archive.com/pgsql-general@postgresql.org/msg97648.html

Since disk space is usually cheap these days, my rule of thumb is (the
old one):

swap = 2 * ram

Read it this way: "if you have 32GB of RAM, you can afford 64GB of disk
storage".


BTW, this is a good idea both with overcommit on and off, IMHO.

.TM.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] [NOVICE] Recursive relationship - preventing cross-index entries.

2007-06-20 Thread Sean Davis
Andrew Maclean wrote:
> I got no answer so I am trying again.
>  
> In a nutshell, if I have a recursive relationship as outlined below, how
> do I implement a rule for the adjustments table that prevents the entry
> of an Id into the Ref column if the id exists in the Id column and vice
> versa?
>  
> If I have a payments table which holds an Id and a payment and I also
> have an adjustments table that holds a payment id and a reference id so
> that adjustments can be made to payments.
> So the payments table looks like this:
> Id Payment
> 1 500.0
> 2 100.0
> 3 1000.0
> 4 50.0
> 5 750.0
> 6 50.0
> 7 800.0
> 8 1200.0
>  
> and the adjustments table looks like this:
> Id Ref
> 1 2
> 3 4
> 1 6
> 3 5
> The idea is that, if for example Id=1 is a credit card payment, then
> entries 2 and 6 could be payments that are already included in the
> credit card payment so we need to adjust the total payment to take this
> into account.
>  
> This means that the payment for Id=1 ($500) in the payments table needs
> to be reduced by $100 (ref=2) and $50 (ref=6); similarly, the payment
> for Id=3 ($1000) needs to be reduced by $800 ($50 for ref=4 and $750 for ref=5). So the question is:
>  
>  How do I design the adjustments table to ensure that:
> a) For any value entered in the Id column a check should occur to
> ensure that it does not exist in the Ref column.
> b) For any value entered in the Ref column, a check should occur to
> ensure that it does not exist in the Id column.
>  
> In other words, looking at the adjustments table, I should be
> prevented from entering 2,4,6,5 in the Id column and 1, 3 in the Ref
> column.

I think you can put a trigger on the table to check these
constraints.

http://www.postgresql.org/docs/8.2/static/triggers.html
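As a minimal sketch, assuming the tables are named payments(id) and
adjustments(id, ref) as in the example above (and ignoring concurrent-insert
races, which would need explicit locking):

CREATE OR REPLACE FUNCTION check_adjustment() RETURNS trigger AS $$
BEGIN
    -- Reject an Id that already appears as a Ref, or a Ref that
    -- already appears as an Id.
    IF EXISTS (SELECT 1 FROM adjustments WHERE ref = NEW.id)
       OR EXISTS (SELECT 1 FROM adjustments WHERE id = NEW.ref) THEN
        RAISE EXCEPTION 'id % / ref % would appear on both sides',
            NEW.id, NEW.ref;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER adjustments_check BEFORE INSERT OR UPDATE ON adjustments
    FOR EACH ROW EXECUTE PROCEDURE check_adjustment();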

However, I wonder whether it might not make more sense to go with an
account system, with an account balance and credits and debits to the
account.  Is the system you are proposing really the best data model?

Sean

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] copying indexes and f.keys from a template table

2007-06-20 Thread Enrico Sirola

Hello,
I'm trying to build a set of plpgsql functions in order to ease
partitioning by date, let's say one table per day for a total of 30 days.
I have plpgsql functions to create the partition tables (having a date
suffix) and to manage a retention period, however I still need to solve a
problem with indexes/foreign keys.
The partition tables are inherited from a template table, and while
inheriting I lose the information on indexes/keys. One possibility would be
to write a facility to execute a set of DDL statements on all the partition
tables using a template (e.g. using a placeholder for the table name);
another would be to use the information on primary/foreign keys and indexes
present in the system tables, which should make it possible to build
the PK/FKs and indexes on the partition tables. Any advice?
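A minimal sketch of the template idea described above, with a placeholder
replaced before EXECUTE (all the names here are made up):

CREATE OR REPLACE FUNCTION run_on_partition(part_name text, tmpl text)
RETURNS void AS $$
BEGIN
    -- Substitute the placeholder and run the resulting DDL.
    EXECUTE replace(tmpl, '%TABLE%', part_name);
END;
$$ LANGUAGE plpgsql;

-- Example: add an index and a foreign key to one partition.
SELECT run_on_partition('measurements_20070620',
    'CREATE INDEX %TABLE%_ts_idx ON %TABLE% (ts)');
SELECT run_on_partition('measurements_20070620',
    'ALTER TABLE %TABLE% ADD FOREIGN KEY (device_id) REFERENCES devices');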
Thanks in advance,

Enrico Sirola
[EMAIL PROTECTED]





Re: [GENERAL] regexp searching in arrays not working?

2007-06-20 Thread Rhys Stewart

That's cool,

thanks.

2007/6/19, Tom Lane <[EMAIL PROTECTED]>:


"Rhys Stewart" <[EMAIL PROTECTED]> writes:
> Is regex searching not functioning (as I expect it to)?

~ expects the pattern on the right, not the left.  So it's taking your
array entries as patterns, which don't match the data 'Trans'.

Since there's no "(array) ANY op scalar" syntax, the only way to get
this to work is to make a reverse-pattern-match operator that takes
the pattern on the left.  You can glue one together from spare parts
easily enough:

regression=# create function reverse_regex(text,text) returns bool as
regression-# 'select $2 ~ $1' language sql immutable strict;
CREATE FUNCTION
regression=# create operator ~~~ (procedure = reverse_regex,
regression(# leftarg = text, rightarg = text);
CREATE OPERATOR

but I'm not sure what the performance will be like with a SQL function
in there...
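A usage sketch with hypothetical table and column names (the pattern now goes
on the left):

SELECT * FROM pipes WHERE 'Trans' ~~~ ANY (labels);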

regards, tom lane



Re: [GENERAL] pg_restore out of memory

2007-06-20 Thread Francisco Reyes

Would it help at all to run a ktrace?
Or are the logs I have supplied enough?  




---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Need help with generic query

2007-06-20 Thread Rodrigo De León
On Jun 20, 5:55 am, David Abrahams <[EMAIL PROTECTED]> wrote:
> The problem is, psql is complaining:
>
>   ERROR:  wrong record type supplied in RETURN NEXT

Try:

... RETURNS SETOF RECORD ...
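A minimal sketch of that approach, with hypothetical names; note that the
caller then has to supply a column definition list:

CREATE OR REPLACE FUNCTION dyn_rows(tbl text) RETURNS SETOF record AS $$
DECLARE
    r record;
BEGIN
    FOR r IN EXECUTE 'SELECT id, name FROM ' || tbl LOOP
        RETURN NEXT r;   -- the generic record matches the call-time column list
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;

SELECT * FROM dyn_rows('users') AS t(id integer, name text);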


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Problem compiling on CentOS

2007-06-20 Thread Robin Ericsson

On 6/20/07, Robin Ericsson <[EMAIL PROTECTED]> wrote:

`-mcpu=' is deprecated. Use `-mtune=' or '-march=' instead.


Ok solved. This is what the configure-script barfs on. Hacking the
spec-file to change this flag removes this problem.

--
   regards,
   Robin

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] Surrogate VS natural keys

2007-06-20 Thread Naz Gassiep
OK so which is the "correct" way to do it?

E.g., Say I have a table with users, and a table with clubs, and a table
that links them. Each user can be in more than one club and each club
has more than one member. Standard M:M relationship. Which link table is
the "right" way to do it?

This:

CREATE TABLE (
userid INTEGER NOT NULL REFERENCES users,
clubid INTEGER NOT NULL REFERENCES clubs,
PRIMARY KEY (userid, clubid)
);

Or this:

CREATE TABLE (
id SERIAL PRIMARY KEY,
userid INTEGER NOT NULL REFERENCES users,
clubid INTEGER NOT NULL REFERENCES clubs
);

I've always favored natural keys (the first option) as it just seems to
me a more natural and semantic representation of the data, however I
often get app designers complaining about it being more complex or
something.

Comments?

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Excell

2007-06-20 Thread Ireneusz Pluta



Bob Pawley napisał(a):

Thanks

Does one version of ODBC work for all versions of Excel and Postgresql.

I am wanting to transfer one or two tables from Excel and manipulate the 
information in Postgresql then transfer the results back to Excel as a 
single table.


I am using Excel 2000 and PostgreSql 8.1.


If this is a one-time or infrequent operation then the round trip of
Excel->Access->ODBC->PostgreSQL is the way to go. I usually do it this way.


But when you need to process your data this way more frequently, you'll probably start
looking for something more sophisticated, involving much less clicking or drag-and-dropping.


If you are familiar with Perl then using:
http://search.cpan.org/~timb/DBI-1.57/DBI.pm
http://search.cpan.org/~hmbrand/Spreadsheet-Read-0.22/Read.pm
http://search.cpan.org/~jmcnamara/Spreadsheet-WriteExcel-2.18/lib/Spreadsheet/WriteExcel.pm
is the way to go, and I use this approach when I need to update and process sets of data
provided to me as Excel spreadsheets.

However, this usually involves tailoring a separate solution for each 
individual data-processing case.

Irek.





---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Surrogate VS natural keys

2007-06-20 Thread brian

Naz Gassiep wrote:

OK so which is the "correct" way to do it?

E.g., Say I have a table with users, and a table with clubs, and a table
that links them. Each user can be in more than one club and each club
has more than one member. Standard M:M relationship. Which link table is
the "right" way to do it?

This:

CREATE TABLE (
userid INTEGER NOT NULL REFERENCES users,
clubid INTEGER NOT NULL REFERENCES clubs,
PRIMARY KEY (userid, clubid)
);

Or this:

CREATE TABLE (
id SERIAL PRIMARY KEY,
userid INTEGER NOT NULL REFERENCES users,
clubid INTEGER NOT NULL REFERENCES clubs
);



The former uses a primary key across both columns to enforce a unique
constraint. In the latter, you have a separate ID column, which does not
enforce that constraint. And you have to ask yourself if you'll ever be
referencing that ID column for anything at all. I doubt I ever would.
Generally, you'd be using this to relate rows from a more generalised
table using either the club ID or the user ID. I can't see how having a
separate serial ID column would be useful for any kind of select.


brian

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] PostgreSQL Installer for Windows x64

2007-06-20 Thread marcelo Cortez
Magnus , folks

 The only caveat I found on WinXP x64 is with psqlODBC:
 my application didn't work very well on that platform.
The workaround was to change the ODBC profile to use a
file DSN; with a file DSN my application worked again.
 BTW, I'm using a Linux server without problems; the
client application layer was the problem :(.

best regards
 
  MDC

--- Magnus Hagander <[EMAIL PROTECTED]> escribió:

> On Wed, Jun 20, 2007 at 10:17:10AM +0900, EBIHARA,
> Yuichiro wrote:
> > Hi,
> > 
> > Can I get a PostgreSQL Installer for Windows
> x64(EM64T)?
> > That for 32bit Windows is available at
> http://www.postgresql.org/ftp/win32/ but I need x64
> native
> > version.
> 
> There is no such thing. PostgreSQL 64-bit is
> currently only supported on
> Unix based platforms.
> 
> The 32-bit version for Windows works just fine on
> 64-bit windows, though.
> 
> //Magnus
> 
> ---(end of
> broadcast)---
> TIP 4: Have you searched our list archives?
> 
>http://archives.postgresql.org/
> 









---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Surrogate VS natural keys

2007-06-20 Thread Rich Shepard

On Wed, 20 Jun 2007, brian wrote:


The former uses a primary key across both columns to enforce a unique
constraint. In the latter, you have a separate ID column, which does not
enforce that constraint. And you have to ask yourself if you'll ever be
referencing that ID column for anything at all. I doubt I ever would.
Generally, you'd be using this to relate rows from a more generalised
table using either the club ID or the user ID. I can't see how having a
separate serial ID column would be useful for any kind of select.


  Also, the reason for a third, M-M, table is to relate multiple players and
multiple clubs. If you think of the logic involved, your third table has
only one row for each player-club combination. Therefore, each row is unique
by definition and a surrogate key adds no value.

Rich

--
Richard B. Shepard, Ph.D.   |The Environmental Permitting
Applied Ecosystem Services, Inc.|  Accelerator(TM)
 Voice: 503-667-4517  Fax: 503-667-8863

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Surrogate VS natural keys

2007-06-20 Thread Martijn van Oosterhout
On Wed, Jun 20, 2007 at 08:39:23AM -0700, Rich Shepard wrote:
>   Also, the reason for a third, M-M, table is to relate multiple players and
> multiple clubs. If you think of the logic involved, your third table has
> only one row for each player-club combination. Therefore, each row is unique
> by definition and a surrogate key adds no value.

While true in this simple case, it can quickly become more complicated
if your relationship starts gaining attributes. For example, if you add
start and stop dates, the (player,club) combination is not unique
anymore. If you track invoices, games or scores it may be easier to
reference the relationship via a surrogate key rather than copying the
other IDs around everywhere.

For simple tables like this I generally don't bother, but sometimes I
find myself adding a surrogate key later.
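For example, a sketch of the same link table once it gains attributes (names
are hypothetical); the surrogate key gives other tables a single-column handle
on a particular membership:

CREATE TABLE memberships (
    membership_id serial PRIMARY KEY,          -- surrogate
    userid integer NOT NULL REFERENCES users,
    clubid integer NOT NULL REFERENCES clubs,
    start_date date NOT NULL,
    stop_date  date,
    UNIQUE (userid, clubid, start_date)        -- natural key, no longer just (userid, clubid)
);

CREATE TABLE match_scores (
    membership_id integer NOT NULL REFERENCES memberships,
    score integer NOT NULL
);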

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] Surrogate VS natural keys

2007-06-20 Thread Joshua D. Drake

Martijn van Oosterhout wrote:

On Wed, Jun 20, 2007 at 08:39:23AM -0700, Rich Shepard wrote:

  Also, the reason for a third, M-M, table is to relate multiple players and
multiple clubs. If you think of the logic involved, your third table has
only one row for each player-club combination. Therefore, each row is unique
by definition and a surrogate key adds no value.


While true in this simple case, it can quickly become more complicated
if your relationship starts gaining attributes. For example, if you add
start and stop dates, so the (player,club) combination is not unique
anymore. If you track invoices, games or scores it may be easier to
reference the relationship via a surrogate key rather than copying the
other IDs around everywhere.

For simple tables like this I generally don't bother, but sometimes I
find myself adding a surrogate key later.


The value of a surrogate key is easy retrieval and really has nothing to 
do with normalization or proper modeling.


I often add a surrogate key, even when one is not required, just so I
don't have to worry about having a 4-element WHERE clause.


Joshua D. Drake




Have a nice day,



--

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Surrogate VS natural keys

2007-06-20 Thread Rich Shepard

On Wed, 20 Jun 2007, Martijn van Oosterhout wrote:


While true in this simple case, it can quickly become more complicated if
your relationship starts gaining attributes. For example, if you add start
and stop dates, so the (player,club) combination is not unique anymore. If
you track invoices, games or scores it may be easier to reference the
relationship via a surrogate key rather than copying the other IDs around
everywhere.


  That's very true, Martijn. It did not seem to be the case in the original
post. When you add a time history and need to identify a player's team at a
specified point in time, it gets much more complicated. That's when reading
Rick Snodgrass' book helps a lot.

Rich

--
Richard B. Shepard, Ph.D.   |The Environmental Permitting
Applied Ecosystem Services, Inc.|  Accelerator(TM)
 Voice: 503-667-4517  Fax: 503-667-8863

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Surrogate VS natural keys

2007-06-20 Thread Richard Broersma Jr

--- "Joshua D. Drake" <[EMAIL PROTECTED]> wrote:
> The value of a surrogate key is easy retrieval and really has nothing to 
> do with normalization or proper modeling.
> 
> I often add a surrogate key, even when one is not required, just so I
> don't have to worry about having a 4-element WHERE clause.


I've often wondered about this.  Since PostgreSQL allows FOREIGN KEYS to be 
referenced from UNIQUE
(non-primary) natural keys, couldn't the schema be designed so that every table 
has a surrogate
PRIMARY KEY and yet still maintain the relationships using UNIQUE natural keys?

Would a design like this be practical?
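For concreteness, a sketch of that layout (all names hypothetical): a surrogate
PRIMARY KEY, the natural key kept UNIQUE, and a child table referencing the
natural key rather than the surrogate:

CREATE TABLE products (
    product_id serial PRIMARY KEY,   -- surrogate
    sku text NOT NULL UNIQUE         -- natural key
);

CREATE TABLE order_lines (
    order_id integer NOT NULL,
    sku text NOT NULL REFERENCES products (sku),   -- FK to the UNIQUE natural key
    qty integer NOT NULL
);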

Regards,
Richard Broersma Jr.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Surrogate VS natural keys

2007-06-20 Thread Joshua D. Drake

Richard Broersma Jr wrote:

--- "Joshua D. Drake" <[EMAIL PROTECTED]> wrote:
The value of a surrogate key is easy retrieval and really has nothing to 
do with normalization or proper modeling.


I often add a surrogate key, even when one is not required, just so I
don't have to worry about having a 4-element WHERE clause.



I've often wondered about this.  Since PostgreSQL allows FOREIGN KEYS to be 
referenced from UNIQUE
(non-primary) natural keys, couldn't the schema be designed so that every table 
has a surrogate
PRIMARY KEY and yet still maintain the relationships using UNIQUE natural keys.

Would a design like this be practical?


I would do it the other way. Have your primary keys be natural.

Joshua D. Drake




Regards,
Richard Broersma Jr.

---(end of broadcast)---
TIP 6: explain analyze is your friend




--

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] [pgadmin-support] Problem editing tables (geom columns)

2007-06-20 Thread Pedro Doria Meunier

(First of all sorry for cross-posting but I feel this is a matter that
interests all recipients)
Thread on pgadmin support:
http://www.pgadmin.org/archives/pgadmin-support/2007-06/msg00046.php

Hello Dave,

This behavior (trying to show the entire geometry field) doesn't happen
in Fedora Core 6 with pgAdmin3 1.6.2...
In that scenario the geom field is simply shown blank.

Nevertheless, if I recall correctly, the proj4, geos, and postgis versions
in the above scenario were older than the ones I'm using now...
So I wonder... could it be that pgadmin's code for showing large fields
changed?
Or could a code change in one of proj4, geos, or postgis really interfere
with pgadmin3?

IMHO, regardless of the scenario involved, the output for large geom
fields should be suppressed when editing tables... (as it was in the past)
The current behavior hogs way too much processor time.


I've tested with a micro-subset of my data (only one record with a
small parish geometry) and it does display, although the slowness is
immediately apparent...


Kind regards,
Pedro Doria Meunier

Dave Page wrote:
> Dave Page wrote:
>> Pedro Doria Meunier wrote:
>>> I've seen in the known issues for ver 1.6.3 that there's a
>>> problem with editing tables with long fields...
>> Where do you see that?
>>
>>> Strangely enough I've used this version before with Fedora Core
>>> 6 and there wasn't a problem with postgis tables :$ A long
>>> work has already been done setting up Fedora 7 to my taste so
>>> I'm really not too keen on downgrading to FC6... :-(
>>>
>>> So, I'm a bit lost here... Is this a GTK issue? What can I do
>>> to fix this?
>> Can you supply me a sample table and data to test with please?
>
> I've been playing with the data that Pedro supplied me offlist, and
> the problem is basically that he has a geometry value (basically a
> string) in a column of around 4MB. I think it's fairly safe to say
> that we'd be lucky to find that any of the grid controls on any of
> the platforms we support were happy with this amount of data in a
> single cell - in testing on Windows for example, whilst it works,
> it does slow to a crawl.
>
> I think the only sensible option would be to add an additional tab
> to the sort/filter dialog to allow the data to be vertically
> partitioned to exclude such columns. This isn't going to happen for
> the next release though I'm afraid.
>
> Thoughts?
>
> Regards, Dave.
>



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Surrogate VS natural keys

2007-06-20 Thread Michael Glaesemann


On Jun 20, 2007, at 11:28 , Joshua D. Drake wrote:


Richard Broersma Jr wrote:

I've often wondered about this.  Since PostgreSQL allows FOREIGN  
KEYS to be referenced from UNIQUE
(non-primary) natural keys, couldn't the schema be designed so  
that every table has a surrogate
PRIMARY KEY and yet still maintain the relationships using UNIQUE  
natural keys.

Would a design like this be practical?


I would do it the other way. Have your primary keys be natural.


The albeit small advantage of using PRIMARY KEY on your surrogate if  
you're using the surrogate for foreign key constraints is that you  
can leave off the column name when using REFERENCES: it'll default to  
the PRIMARY KEY column(s). For example:


CREATE TABLE foos
(
foo_id SERIAL PRIMARY KEY
, foo TEXT NOT NULL
, bal TEXT NOT NULL, UNIQUE (foo, bal)
);

CREATE TABLE bars
(
bar_id SERIAL PRIMARY KEY
, bar TEXT NOT NULL
, foo_id INTEGER NOT NULL
REFERENCES foos
);

\d bars
                               Table "public.bars"
 Column |  Type   |                       Modifiers
--------+---------+--------------------------------------------------------
 bar_id | integer | not null default nextval('bars_bar_id_seq'::regclass)
 bar    | text    | not null
 foo_id | integer | not null
Indexes:
    "bars_pkey" PRIMARY KEY, btree (bar_id)
Foreign-key constraints:
    "bars_foo_id_fkey" FOREIGN KEY (foo_id) REFERENCES foos(foo_id)

It does work for multi-column primary keys as well:

CREATE TABLE baz_quuxen
(
baz TEXT NOT NULL
, quux TEXT NOT NULL
, PRIMARY KEY (baz, quux)
);

CREATE TABLE blurfls
(
blurfl TEXT PRIMARY KEY
, baz TEXT NOT NULL
, quux TEXT NOT NULL
, FOREIGN KEY (baz, quux) REFERENCES baz_quuxen
);

\d blurfls
    Table "public.blurfls"
 Column | Type | Modifiers
--------+------+-----------
 blurfl | text | not null
 baz    | text | not null
 quux   | text | not null
Indexes:
    "blurfls_pkey" PRIMARY KEY, btree (blurfl)
Foreign-key constraints:
    "blurfls_baz_fkey" FOREIGN KEY (baz, quux) REFERENCES baz_quuxen(baz, quux)


Having PRIMARY KEY on your natural key does provide some additional  
documentation.


Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Surrogate VS natural keys

2007-06-20 Thread Joshua D. Drake

Michael Glaesemann wrote:


On Jun 20, 2007, at 11:28 , Joshua D. Drake wrote:


Richard Broersma Jr wrote:

I've often wondered about this.  Since PostgreSQL allows FOREIGN KEYS 
to be referenced from UNIQUE
(non-primary) natural keys, couldn't the schema be designed so that 
every table has a surrogate
PRIMARY KEY and yet still maintain the relationships using UNIQUE 
natural keys.

Would a design like this be practical?


I would do it the other way. Have your primary keys be natural.


The albeit small advantage of using PRIMARY KEY on your surrogate if 
you're using the surrogate for foreign key constraints is that you can 
leave off the column name when using REFERENCES: it'll default to the 
PRIMARY KEY column(s). For example:


Sure but for the sake of doing normalization correctly ;) a primary key 
should be natural.


Joshua D. Drake




CREATE TABLE foos
(
foo_id SERIAL PRIMARY KEY
, foo TEXT NOT NULL
, bal TEXT NOT NULL, UNIQUE (foo, bal)
);

CREATE TABLE bars
(
bar_id SERIAL PRIMARY KEY
, bar TEXT NOT NULL
, foo_id INTEGER NOT NULL
REFERENCES foos
);

\d bars
                               Table "public.bars"
 Column |  Type   |                       Modifiers
--------+---------+--------------------------------------------------------
 bar_id | integer | not null default nextval('bars_bar_id_seq'::regclass)
 bar    | text    | not null
 foo_id | integer | not null
Indexes:
    "bars_pkey" PRIMARY KEY, btree (bar_id)
Foreign-key constraints:
    "bars_foo_id_fkey" FOREIGN KEY (foo_id) REFERENCES foos(foo_id)

It does work for multi-column primary keys as well:

CREATE TABLE baz_quuxen
(
baz TEXT NOT NULL
, quux TEXT NOT NULL
, PRIMARY KEY (baz, quux)
);

CREATE TABLE blurfls
(
blurfl TEXT PRIMARY KEY
, baz TEXT NOT NULL
, quux TEXT NOT NULL
, FOREIGN KEY (baz, quux) REFERENCES baz_quuxen
);

\d blurfls
    Table "public.blurfls"
 Column | Type | Modifiers
--------+------+-----------
 blurfl | text | not null
 baz    | text | not null
 quux   | text | not null
Indexes:
    "blurfls_pkey" PRIMARY KEY, btree (blurfl)
Foreign-key constraints:
    "blurfls_baz_fkey" FOREIGN KEY (baz, quux) REFERENCES baz_quuxen(baz, quux)


Having PRIMARY KEY on your natural key does provide some additional 
documentation.


Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster




--

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] A problem in inheritance

2007-06-20 Thread Talha Khan

Hi Guyz,

I need some help with an inheritance issue.

The scenario is as follows:

THE SAMPLE DDL:

CREATE TABLE account_login
 (
  account_id int4 NOT NULL,
  account_login_time timestamptz NOT NULL DEFAULT now(),
  ip_address varchar(32) NOT NULL,
  originating_source varchar(32) NOT NULL DEFAULT 'game'::character varying
 )
 WITHOUT OIDS;


 CREATE OR REPLACE RULE account_login_no_delete AS ON DELETE TO
 account_login DO INSTEAD NOTHING;
 CREATE OR REPLACE RULE account_login_no_update AS ON UPDATE TO
 account_login DO INSTEAD NOTHING;

 -- child partition
 CREATE TABLE account_login_200705_2
 (
 -- Inherited:   account_id int4 NOT NULL,
 -- Inherited:   account_login_time timestamptz NOT NULL DEFAULT now(),
 -- Inherited:   ip_address varchar(32) NOT NULL,
 -- Inherited:   originating_source varchar(32) NOT NULL DEFAULT 'game'::character varying
 ) INHERITS (account_login)
 WITHOUT OIDS;
 CREATE OR REPLACE RULE account_login_no_delete_200705_2 AS ON DELETE
 TO account_login_200705_2 DO INSTEAD NOTHING;
 CREATE OR REPLACE RULE account_login_no_update_200705_2 AS ON UPDATE
 TO account_login_200705_2 DO INSTEAD NOTHING;

 -- set up the redirection to the partition
 CREATE OR REPLACE RULE account_login_insert_200705_2 AS
ON INSERT TO account_login
   WHERE new.account_login_time >= '2007-05-16 00:00:00+00'::timestamp
 with time zone AND new.account_login_time < '2007-06-01
 00:00:00+00'::timestamp with time zone DO INSTEAD  INSERT INTO
 account_login_200705_2 (account_id, account_login_time, ip_address,
 originating_source)
   VALUES (new.account_id, new.account_login_time, new.ip_address,
new.originating_source);

 -- seed the partition table with rows
 insert into account_login values (1, '20070522 5:00+00', '1.1.1.1',
 'developer');
 insert into account_login values (2, '20070522 6:00+00', '1.1.1.1',
 'developer');
 insert into account_login values (3, '20070522 7:00+00', '1.1.1.1',
 'developer');
 insert into account_login values (4, '20070522 8:00+00', '1.1.1.1',
 'developer');

 THE ACTUAL TEST:

 DROP RULE account_login_no_update ON account_login;


 UPDATE account_login set originating_source = 'xxx';

 Now the update should not affect the child table, but it does, as is evident
 from the output of the following query:

 SELECT * FROM account_login_200705_2;

 TEST # 2:


 I replicated the same scenario and created just one more rule on the
 child table


 CREATE OR REPLACE RULE account_login_no_insert_200705_2 AS ON INSERT
TO account_login_200705_2 DO INSTEAD NOTHING;

 Based on the output from the previous scenario I thought that an
 insert into the parent table (i.e. account_login) should go into the
 child table (i.e. account_login_200705_2), but the insert does not go
 through, and the ON INSERT DO NOTHING rule on the child table does take effect.

 The basic problem is that the ON UPDATE DO NOTHING rule is not honored
 on the child table when an update is done through the parent table, while
an ON INSERT DO NOTHING rule on the child table is honored.

Any guidance in this regard would be really appreciated.

Regards,

Talha Amjad


Re: [GENERAL] Surrogate VS natural keys

2007-06-20 Thread Merlin Moncure

On 6/20/07, Richard Broersma Jr <[EMAIL PROTECTED]> wrote:


--- "Joshua D. Drake" <[EMAIL PROTECTED]> wrote:
> The value of a surrogate key is easy retrieval and really has nothing to
> do with normalization or proper modeling.
>
> I often add a surrogate key, even when one is not required, just so I
> don't have to worry about having a 4-element WHERE clause.


I've often wondered about this.  Since PostgreSQL allows FOREIGN KEYS to be 
referenced from UNIQUE
(non-primary) natural keys, couldn't the schema be designed so that every table 
has a surrogate
PRIMARY KEY and yet still maintain the relationships using UNIQUE natural keys.

Would a design like this be practical?


Yeah, although I prefer to put the primary key on the natural key.
Either way, the natural key is identified... my major issue with the
surrogate design style is that the natural key is often not identified,
which inevitably leads to a mess.

I also find databases with natural keys to be much easier to follow
and feel much 'cleaner' to me.  People who've never seen a large
database without surrogates will be amazed at how much more expressive
the tables are.  Surrogates have certain advantages but I classify
them as an optimization, meaning they should be introduced at the last
possible moment in the design.

merlin

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Running OS-level programs from Postgres?

2007-06-20 Thread Sean Murphy
Maybe this is a well duh kind of question, or maybe there's no
straightforward way to do it, but is there any way that I could have a
pg function initiate a process on the host system?

Specifically I'd like to script an email to send off on an insert
trigger, but the ability to initiate system processes in general seems
like it would come in handy.

My present need is for email notification; if there's a pg function or
module that would handle this (I haven't turned up anything in my
searches, but maybe I'm using the wrong search terms in the wrong
places) I'd be OK for now, but I'd rather have the flexibility of
starting whatever process a given circumstance calls for.

Thanks.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] allocate chunk of sequence

2007-06-20 Thread Scott Marlowe

PFC wrote:


The chunk to be allocated is not the same size, so to set the 
increment value will not help.


I'm sometimes not that subtle, so I'd just use a BIGINT sequence. 
Think about the largest chunk you'll ever get (probably less than 2^30 
rows, yes ?), set this sequence increment to this very high value 
(this will still let you have 2^32 chunks at least), and make each 
chunk be (big_sequence_value + N).
And that's only if they need to be contiguous.  If they don't, then just 
grab however many you need one after the other.


I'm not sure how the nextval function handles this internally, if
it has to read and update the sequence object. Does it use some kind
of lock? Otherwise the problem mentioned here should happen with the
nextval function also.


Yes, it takes a lock, but the lock is held for a very short time
(just increment the value and release the lock).
And that lock will be held far more briefly, and be far less bothersome, than
the lock you'll need if you do
select setval('seqname', (select max(id)+1 from sometable), false);
in the middle of your UDF.


In InnoDB, the AUTO_INCREMENT lock is held for the entire duration 
of the SQL statement, including any triggers that might fire. This 
means if you have ON INSERT triggers which take some time to process, 
INSERT concurrency is horrendous. Not so with Postgres sequences.
Note that that hasn't been true for some time now, according to Heikki 
Tuuri (sp?).  He told me they changed the way that worked about halfway 
through the 5.0 dev cycle so that they use a kind of internal sequence 
much like postgresql.  Except in postgresql the sequence is exposed.


Again, needing contiguous ids is gonna cause a problem.  It is going to 
serialize all inserts to your database, and slow it down in any kind of 
large parallel environment.


If you can get away without them then do so.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Running OS-level programs from Postgres?

2007-06-20 Thread Joshua D. Drake

Sean Murphy wrote:

Maybe this is a well duh kind of question, or maybe there's no
straightforward way to do it, but is there any way that I could have a
pg function initiate a process on the host system?


Yeah, you can use any of the untrusted PL languages for that.



Specifically I'd like to script an email to send off on an insert
trigger, but the ability to initiate system processes in general seems
like it would come in handy.

My present need is for email notification; if there's a pg function or
module that would handle this (I haven't turned up anything in my
searches, but maybe I'm using the wrong search terms in the wrong
places) I'd be OK for now, but I'd rather have the flexibility of
starting whatever process a given circumstance calls for.

Thanks.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster




--

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] persistent db connections in PHP

2007-06-20 Thread Scott Marlowe

[EMAIL PROTECTED] wrote:

This seems to be a problem with PHP, or at least my set up.

I'm writing pages in basically the same way. Each page has an include
at the top that gets you a database session. The function, either
pg_connect() or mysql_connect(), is supposed to either create a new
connection, or return your existing one.

So after I have a connection, I can navigate to other pages, reload or
post to the current one, trigger the x_connect(), and get the session
I created earlier.

In my Mysql site, if I create temporary tables, I still have access to
them after I have traversed a mysql_connect. So it looks like PHP is
giving me the connection I had when I created the temp tables.

However, with this new Postgres site, I don't have access to my temp
tables after I've traversed another pg_connect. So PHP is either
creating a new connection, or giving me another session, not the one
which I created my tables in.


MySQL reuses old connections within the same script.  PostgreSQL's php 
extension does not, it starts a new connection each time.


MySQL's behaviour is surprising, PostgreSQL's behaviour is what you'd 
expect.  Which is typical of both dbs.


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Running OS-level programs from Postgres?

2007-06-20 Thread Shoaib Mir

If you can use a Tcl-based PL function, then this might help you here -->
http://sourceforge.net/projects/pgmail/

--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 6/20/07, Joshua D. Drake <[EMAIL PROTECTED]> wrote:


Sean Murphy wrote:
> Maybe this is a well duh kind of question, or maybe there's no
> straightforward way to do it, but is there any way that I could have a
> pg function initiate a process on the host system?

Yeah you can use any of the untrusted pl languages for that.

>
> Specifically I'd like to script an email to send off on an insert
> trigger, but the ability to initiate system processes in general seems
> like it would come in handy.
>
> My present need is for email notification; if there's a pg function or
> module that would handle this (I haven't turned up anything in my
> searches, but maybe I'm using the wrong search terms in the wrong
> places) I'd be OK for now, but I'd rather have the flexibility of
> starting whatever process a given circumstance calls for.
>
> Thanks.
>
> ---(end of broadcast)---
> TIP 2: Don't 'kill -9' the postmaster
>


--

   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
  http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/



Re: [GENERAL] persistent db connections in PHP

2007-06-20 Thread Raymond O'Donnell


[EMAIL PROTECTED] wrote:

However, with this new Postgres site, I don't have access to my temp
tables after I've traversed another pg_connect. So PHP is either
creating a new connection, or giving me another session, not the one
which I created my tables in.


You wouldn't expect to be given back the same connection (and hence the 
same temp tables) from a pool of connections - they're returned randomly.


Scott Marlowe wrote:
MySQL reuses old connections within the same script.  PostgreSQL's php 
extension does not, it starts a new connection each time.


Isn't pg_pconnect supposed to recycle a pooled connection?

Ray.


---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Running OS-level programs from Postgres?

2007-06-20 Thread Steve Atkins


On Jun 20, 2007, at 12:07 PM, Sean Murphy wrote:


Maybe this is a well duh kind of question, or maybe there's no
straightforward way to do it, but is there any way that I could have a
pg function initiate a process on the host system?


You can use pl/perlu or any of the other untrusted languages for
that.



Specifically I'd like to script an email to send off on an insert
trigger, but the ability to initiate system processes in general seems
like it would come in handy.

My present need is for email notification; if there's a pg function or
module that would handle this (I haven't turned up anything in my
searches, but maybe I'm using the wrong search terms in the wrong
places) I'd be OK for now, but I'd rather have the flexibility of
starting whatever process a given circumstance calls for.


Another approach, and one that can be more robust in the case
of external failures, is to have the trigger put the message it wants
to send into a queue table and have an external process that
monitors the table (via polling or listen/notify) and sends the
queued mail.
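A rough sketch of that queue-table approach (the table, column, and trigger
names here are hypothetical):

CREATE TABLE mail_queue (
    id serial PRIMARY KEY,
    recipient text NOT NULL,
    subject   text NOT NULL,
    body      text NOT NULL,
    queued_at timestamptz NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION queue_signup_mail() RETURNS trigger AS $$
BEGIN
    INSERT INTO mail_queue (recipient, subject, body)
    VALUES (NEW.email, 'Welcome', 'Thanks for signing up.');
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER queue_mail_on_insert AFTER INSERT ON users
    FOR EACH ROW EXECUTE PROCEDURE queue_signup_mail();

-- The external process then SELECTs pending rows, sends the mail, and
-- deletes (or marks) the rows it has handled.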

Cheers,
  Steve


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Running OS-level programs from Postgres?

2007-06-20 Thread Philip Hallstrom

Maybe this is a well duh kind of question, or maybe there's no
straightforward way to do it, but is there any way that I could have a
pg function initiate a process on the host system?

Specifically I'd like to script an email to send off on an insert
trigger, but the ability to initiate system processes in general seems
like it would come in handy.

My present need is for email notification; if there's a pg function or
module that would handle this (I haven't turned up anything in my
searches, but maybe I'm using the wrong search terms in the wrong
places) I'd be OK for now, but I'd rather have the flexibility of
starting whatever process a given circumstance calls for.


I've never done it, and while you can do it with the various PL languages,
you might also look into the LISTEN/NOTIFY stuff.


That way your app doesn't have to wait for the process to finish, but can
send a notification which another server-side app can listen for and act on
as it needs to.


http://www.postgresql.org/docs/current/static/sql-notify.html
http://www.postgresql.org/docs/current/static/sql-listen.html

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Running OS-level programs from Postgres?

2007-06-20 Thread Andrew Sullivan
On Wed, Jun 20, 2007 at 12:55:23PM -0700, Steve Atkins wrote:
> 
> Another approach, and one that can be more robust in the case
> of external failures, is to have the trigger put the message it wants
> to send into a queue table and have an external process that
> monitors the table (via polling or listen/notify) and sends the
> queued mail.

For the record, this is _way_ more robust.  It also prevents your
database from accidentally DoSing your mail server, as it would if
thousands of mail messages were all triggered in a very short period
of time.

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
The very definition of "news" is "something that hardly ever happens."  
--Bruce Schneier

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] standard LOB support

2007-06-20 Thread Scott Marlowe

EBIHARA, Yuichiro wrote:

Hi,

I'm developing software that supports several RDBMSs, including PostgreSQL.

The software needs the ability to handle large objects; right now it uses the
'bytea' datatype for binary data and 'text' for text data.
But for portability, I'd rather use BLOB and CLOB as defined by the SQL standard.


Is there any plan to support BLOB and CLOB in future releases?
  
Looking at the spec, and postgresql's implementation, I can't see much 
reason you couldn't just use a domain to declare that a bytea is a blob 
and varchar is a clob.


Unless there's some specific thing you need that I'm not seeing.
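For instance, a minimal sketch of the domain idea (the domain and table names
here are just illustrative):

CREATE DOMAIN blob AS bytea;
CREATE DOMAIN clob AS text;

CREATE TABLE documents (
    id serial PRIMARY KEY,
    body clob,           -- behaves exactly like text
    attachment blob      -- behaves exactly like bytea
);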

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] Experiences of PostgreSQL on-disk bitmap index patch

2007-06-20 Thread Christan Josefsson

Hi!

I'm working on building a PostgreSQL based data warehouse, and I'm thus very
interested in any experiences and usage of the PostgreSQL bitmap index
patches (which I've found on pgsql-patches).

Anyone using these patches on production systems?
Anyone know if the patches run on latest stable 8.2?
Issues applying the patch to latest 8.2 source?
Other experiences of the patches?

If there's any PgSQL developer reading this - when can on-disk bitmap
indexes be expected to be included in stable PostgreSQL versions?

Thank you folks!

Best regards,
Christian


Re: [GENERAL] VACUUM ANALYZE extremely slow

2007-06-20 Thread Scott Marlowe

Sergei Shelukhin wrote:

This is my first (and, by the love of the God, last) project w/pgsql
and everything but the simplest selects is so slow I want to cry.
This is especially bad with vacuum analyze - it takes several hours
for a database of mere 15 Gb on a fast double-core server w/2Gb of RAM
and virtually no workload at the moment. Maintenance work mem is set
to 512 Mb.

  
I noticed you didn't mention your disk subsystem.  PostgreSQL tends to 
use a fair bit of disk I/O when running vacuum and / or analyze.  If you 
check with top / iostat while vacuum analyze is running, I'm betting 
you'll see a lot of waiting on I/O going on.


You do know those two commands (vacuum and analyze) aren't married 
anymore, right?  You can run analyze all by itself if you want?


And I hope you're not running vacuum analyze full all the time, cause 
there's usually no need for that.


Look up pg_autovacuum.  Saves a lot of hair pulling.

Is there any way to speed up ANALYZE?
Analyze is usually REALLY fast.  Even on my rather pokey workstation,
with a single SATA hard drive and other things to do, I can run analyze
on a 31 Gig database in very little time.

 Without it all the queries run
so slow that I want to cry after a couple of hours of operation and
with it system has to go down for hours per day and that is
unacceptable.
  
You should only need to run analyze every so often.  You should only 
need vacuum after lots of updates / deletes.  You should not need to 
take the system down to vacuum, as vacuum doesn't block.  Vacuum full 
does block, but if you need that you either aren't vacuuming often 
enough or you don't have the autovacuum daemon configured.

The same database running on mysql on basically the same server used
to run optimize table on every table every half an hour without any
problem, I am actually pondering scraping half the work on the
conversion and stuff and going back to mysql but I wonder if there's
some way to improve it.
  
And when you ran optimize on those tables, were they not locked for 
regular users the whole time?


There may be a way to improve it.  Tell us, what OS are you running, 
what are your non-default postgresql.conf settings, what ACTUAL commands 
are you running here?  Vacuum, vacuum analyze, vacuum full analyze?  Are 
you inserting / deleting / updating tons of rows between vacuums and / 
or analyzes?





---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly
  



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Running OS-level programs from Postgres?

2007-06-20 Thread Tom Lane
Andrew Sullivan <[EMAIL PROTECTED]> writes:
> On Wed, Jun 20, 2007 at 12:55:23PM -0700, Steve Atkins wrote:
>> Another approach, and one that can be more robust in the case
>> of external failures, is to have the trigger put the message it wants
>> to send into a queue table and have an external process that
>> monitors the table (via polling or listen/notify) and sends the
>> queued mail.

> For the record, this is _way_ more robust.  It also prevents your
> database from accidentally DoSing your mail server, as it would if
> thousands of mail messages were all triggered in a very short period
> of time.

You need to think about which failure mode you like better.  If you have
transactions send mail on their own behalf, you have the problem that a
transaction might send mail and then fail, rolling back and leaving no
trace of itself in the DB ... but that mail went somewhere.  If you use
the queue approach, then the mail-sending process will only see queue
entries from committed transactions, and so it will certainly never send
mail that shouldn't have been sent.  But you might have problems with
the mail-sending process not actually doing its job, or doing it
multiple times (eg, if it sends mail and then fails to commit removal of
the queue entry, it'd probably reprocess the same entry later).  So
neither way is perfect.  You pays your money and you takes your choice.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Experiences of PostgreSQL on-disk bitmap index patch

2007-06-20 Thread Joshua D. Drake

Christan Josefsson wrote:

Hi!
 
I'm working on building a PostgreSQL based data warehouse, and I'm thus 
very interested in any experiences and usage of the PostgreSQL bitmap 
index patches (which I've found on pgsql-patches).
 
Anyone using these patches on production systems?

Anyone know if the patches run on latest stable 8.2?
Issues applying the patch to latest 8.2 source?
Other experiences of the patches?


They work against -HEAD.
They are not finished.
They have not been tested.
They do not support Vacuum.

Joshua D. Drake


 
If there's any PgSQL developer reading this - when can on-disk bitmap 
indexes be expected to be included in stable PostgreSQL versions?
 
Thank you folks!
 
Best regards,

Christian



--

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Experiences of PostgreSQL on-disk bitmap index patch

2007-06-20 Thread Alexander Staubo

On 6/20/07, Christan Josefsson <[EMAIL PROTECTED]> wrote:

Anyone using these patches on production systems?


If these are the same patches that were made for Bizgres, then they
are bound to be in use in some current production systems of that
version of PostgreSQL.


If there's any PgSQL developer reading this - when can on-disk bitmap
indexes be expected to be included in stable PostgreSQL versions?


It's scheduled for inclusion in 8.3, which is close to getting a beta release:

 http://www.postgresql.org/developer/roadmap

Alexander.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Experiences of PostgreSQL on-disk bitmap index patch

2007-06-20 Thread Tom Lane
"Alexander Staubo" <[EMAIL PROTECTED]> writes:
> On 6/20/07, Christan Josefsson <[EMAIL PROTECTED]> wrote:
>> If there's any PgSQL developer reading this - when can on-disk bitmap
>> indexes be expected to be included in stable PostgreSQL versions?

> It's scheduled for inclusion in 8.3,

Not any more --- we gave up waiting for a finished patch to be
submitted.
http://archives.postgresql.org/pgsql-patches/2007-05/msg00013.php
http://archives.postgresql.org/pgsql-patches/2007-05/msg00299.php
http://developer.postgresql.org/index.php/Todo:PatchStatus

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Experiences of PostgreSQL on-disk bitmap index patch

2007-06-20 Thread Joshua D. Drake

Alexander Staubo wrote:

On 6/20/07, Christan Josefsson <[EMAIL PROTECTED]> wrote:

Anyone using these patches on production systems?


If these are the same patches that were made for Bizgres, then they
are bound to be in use in some current production systems of that
version of PostgreSQL.


If there's any PgSQL developer reading this - when can on-disk bitmap
indexes be expected to be included in stable PostgreSQL versions?


It's scheduled for inclusion in 8.3, which is close to getting a beta 
release:


 http://www.postgresql.org/developer/roadmap


Just so there is no confusion. These WILL NOT be in 8.3:

http://developer.postgresql.org/index.php/Todo:PatchStatus

Joshua D. Drake




Alexander.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq




--

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] insane index scan times

2007-06-20 Thread Scott Marlowe

Sergei Shelukhin wrote:

This is just an example isolating the problem.
Actual queries contain more tables and more joins and return
reasonable amount of data.
Performance of big indices however is appalling, with planner always
reverting to seqscan with default settings.

I tried to pre-filter the data as much as possible in preceding joins
to put less strain on the offending join (less rows to join by?) but
it doesn't help.

I wonder what exactly makes the index perform 100+ times slower than a
seqscan - I mean, even if it's performed on the HD (which it should not
be, given the index size), the index and table are on the same HD and the
index is smaller and also sorted, isn't it?

set enable_seqscan = on;
  
explain select * from entries inner join stuff on entries.id =

stuff.id;

 ->  Seq Scan on stuff  (cost=0.00..193344.00 rows=12550400 width=12)


set enable_seqscan = off;

explain select * from entries inner join stuff on entries.id =
stuff.id;

 ->  Index Scan using blah on stuff  (cost=0.00..25406232.30
rows=12550400 width=12)

I don't think you really understand postgresql very well.

There's no evidence that the index scan is 100 times slower.  The 
planner is guesstimating that it will take much longer for the index to 
do the same work.


Do some research on postgresql's MVCC model and "visibility".  The 
indexes don't have it, so every access has to hit the tables anyway, so 
if the query planner figures you're going to hit 10% or so of the table, 
just seq scan it and go.


Run your queries with "explain analyze" and see which is faster.  If the 
seq scan is faster, then pgsql made the right choice.  What you can do 
to speed it up depends on your system.


Post the output of explain analyze select ... here and let us look at it.

More importantly, post your REAL Queries with explain analyze output 
(not just explain) and let's see what we see.




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] [PERFORM] [ADMIN] Postgres VS Oracle

2007-06-20 Thread Scott Marlowe

Andrew Kelly wrote:

On Mon, 2007-06-18 at 13:02 -0400, Jonah H. Harris wrote:
  

On 6/18/07, Andreas Kostyrka <[EMAIL PROTECTED]> wrote:


As a cynic, I might ask, what Oracle is fearing?
  

As a realist, I might ask, how many times do we have to answer this
type of anti-commercial-database flamewar-starting question?




As a nudist, I think I have to answer, "About every 9 weeks, it would
seem".


As a surrealist, I'd have to say purple.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Experiences of PostgreSQL on-disk bitmap index patch

2007-06-20 Thread Alexander Staubo

On 6/20/07, Joshua D. Drake <[EMAIL PROTECTED]> wrote:

Just so there is no confusion. These WILL NOT be in 8.3:

http://developer.postgresql.org/index.php/Todo:PatchStatus


Apologies. I didn't know they had been put on hold.

Alexander.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Running OS-level programs from Postgres?

2007-06-20 Thread Chris Browne
[EMAIL PROTECTED] (Sean Murphy) writes:
> My present need is for email notification; if there's a pg function or
> module that would handle this (I haven't turned up anything in my
> searches, but maybe I'm using the wrong search terms in the wrong
> places) I'd be OK for now, but I'd rather have the flexibility of
> starting whatever process a given circumstance calls for.

There are ways of doing this via untrusted stored function languages,
but this approach tends to be discouraged.

Consider: If you submit 200 of these requests, you may spawn your
mailer system 200 times, which may be a spectacularly inefficient
thing to do.  Indeed, spawning it 200 times in a second might trigger
Negative Reactions in system components.  (Consider: What if your
system tried delivering 200 pieces of mail concurrently, and spawned,
as a side-effect, 200 copies of SpamAssassin?)

The better answer tends to be to try to decouple this.

- Write the data that you want sent out into a row in a table that
  implements a Message Queue.

- Perhaps submit a NOTIFY request, if you want things to kick off at
  once.

- Have some other, separate process, that LISTENs for notifications
  (or which simply wakes up once in a while).  This process goes
  through the Message Queue, doing some work on each item to send the
  message on towards its destination.

This way there is only ONE process that wakes up and processes the
work queue.  It might be a program that's mostly outside PostgreSQL...
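A minimal sketch of the NOTIFY/LISTEN part described above (the condition name
is arbitrary and purely illustrative):

-- In the trigger, after inserting the queue row:
NOTIFY work_queue;

-- In the worker process, once per connection:
LISTEN work_queue;
-- The worker is then woken by its client library whenever a notification
-- arrives, and scans the queue table for new rows to process.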
-- 
"cbbrowne","@","cbbrowne.com"
http://linuxdatabases.info/info/rdbms.html
"It is not enough to succeed, others must fail."  -- Gore Vidal

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] standard LOB support

2007-06-20 Thread EBIHARA, Yuichiro
Hi Scott,

Thank you for your comment.

> > Is there any plan to support BLOB and CLOB in future releases?
> >   
> Looking at the spec, and postgresql's implementation, I can't 
> see much reason you couldn't just use a domain to declare that
> a bytea is a blob and varchar is a clob.

That sounds like a good idea!
My application accesses databases via JDBC and PostgreSQL JDBC driver can 
handle 'bytea' as BLOB.
I'm not sure if 'text' is compatible with CLOB, but I guess it'll also work 
well.

Thanks again!

ebi



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster