Re: [GENERAL] query performance

2008-01-17 Thread Alban Hertroys

On Jan 14, 2008, at 3:58 AM, pepone.onrez wrote:

I have this query in a table with 150 thousand tuples and it takes
too long


SELECT t_documentcontent._id AS _id
FROM t_documentcontent LIMIT 50 OFFSET 8



You want an ORDER BY there. Not only will it probably speed things
up; without it there's no guaranteed order in the results returned.


As table records have no specific order and updates and inserts on  
that table take the first free position, you risk ending up showing  
some records twice in your set (same id on different 'pages') and  
missing others because you've already skipped past them when there's  
concurrent access.


There's still some risk for that if you order, but as you seem to  
order on a sequence-generated column, updates aren't a problem for  
you and inserts end up at the end anyway. But only if you order them.
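A minimal sketch of the suggested fix, using the query from the original post (assuming _id is that sequence-generated column):

```sql
-- Ordering on the sequence-generated key keeps pages stable:
-- concurrent inserts land at the end instead of shuffling earlier pages.
SELECT t_documentcontent._id AS _id
  FROM t_documentcontent
 ORDER BY t_documentcontent._id
 LIMIT 50 OFFSET 8;
```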


Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.





---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Accessing composite type columns from C

2008-01-17 Thread Reg Me Please
Hi all.
Is there a way with the libpq to access subcolumns in a composite type 
column?
The documentation (8.2) seems not to mention this case.
Thanks.
-- 
Reg me, please!



Re: [GENERAL] count(*) and bad design was: Experiences with extensibility

2008-01-17 Thread Alban Hertroys

On Jan 15, 2008, at 3:03 PM, Ivan Sergio Borgonovo wrote:


On Tue, 15 Jan 2008 14:43:35 +0100
Alban Hertroys [EMAIL PROTECTED] wrote:


You need to scroll to the last row to find the size of the result
set, but after that it's pretty easy to return random rows by
scrolling to them (and marking them 'read' in some way to prevent
accidentally returning the same row again).


Could you post a snippet of code or something giving a more detailed
idea of it?

BTW, since cursors support OFFSET, if you're not interested in whether the
order of the retrieved rows is random too, you don't even have to
remember which ones you've read, I think.


I posted it on this list a while ago when I came up with this  
solution. I had some trouble finding my old post in the pgsql-general  
archives though - I could find the thread, just not my final posting,  
and searching didn't even turn up the thread.


I did find it here:
http://www.mail-archive.com/pgsql-[EMAIL PROTECTED]/msg103670.html
The thread contains several other approaches to the problem; which one
fits your bill really depends on your problem domain.


I think the function in my original posting could do with clearer  
comments though, so here's the function again:


/*
 * Return $limit random rows from the result set of SQL query $query
 */
function randomSet(
    $query, // The query to execute
    $limit  // The (max) number of random rows required
) {
    // Declare a scrollable cursor for the query
    query("DECLARE _cur SCROLL CURSOR WITHOUT HOLD FOR $query");

    /* Get the range for random(1, n)
     *
     * Determined by scrolling the cursor to the last row.
     * Equivalent to select count(*), but without a separate query.
     */
    query("MOVE FORWARD ALL IN _cur");
    $count = pg_affected_rows();

    $uniques = array(); // A list of used cursor offsets
    $resultSet = array();

    // Fetch random rows until we have enough or there are no more
    while ($limit > 0 && count($uniques) < $count) {
        // Determine a random scroll offset
        $idx = random(1, $count);

        // Skip records with an index we already used
        if (in_array($idx, $uniques))
            continue;

        // Fetch the random row
        $record = query("FETCH ABSOLUTE $idx FROM _cur");

        // Add the row offset to the list of used offsets
        $uniques[] = $idx;

        $resultSet[] = $record;
        $limit--;
    }

    // Close the cursor
    query("CLOSE _cur");

    return $resultSet;
}



Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.







[GENERAL] Can't make backup (again)

2008-01-17 Thread Sebastián Baioni
Hi,
Some days ago we were having problems running pg_dump (v. 8.2.5.7260) from 
Windows XP SP2 to a database in a sever PostgreSQL 8.2.5 on 
amd64-portbld-freebsd6.2.
We thought the problem was solved but we are having problems again.
Now we got an error:
cannot allocate memory for input buffer
The command was: COPY public.sipat00 (sipasede, ... ) TO stdout;

Is this a memory problem? From the server (FreeBSD) or from the client PC
(Windows XP)?

Thanks
 Sebastián


Sebastián Baioni
http://www.acomplejados.com.ar
http://www.extremista.com.ar
http://www.coolartists.com.ar

   

Re: [GENERAL] LIKE and REGEX optimization

2008-01-17 Thread hubert depesz lubaczewski
On Tue, Jan 15, 2008 at 04:49:41PM -0600, Scott Marlowe wrote:
 This query is not capable of using an index on name, since you can't
 use an index with a like beginning with a %... So

actually you can. you just can't use an index for like %something%, but it
can be solved using trigrams or other approaches. for example:
http://www.depesz.com/index.php/2007/09/15/speeding-up-like-xxx/
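For the archives, a rough sketch of the trigram approach described at that link (the table and column names are made up; assumes the contrib/pg_trgm module is installed):

```sql
-- A trigram index can serve LIKE '%something%', which a btree index cannot.
CREATE INDEX users_name_trgm_idx ON users USING gist (name gist_trgm_ops);

-- With the index in place, this no longer has to seqscan:
SELECT * FROM users WHERE name LIKE '%something%';
```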

depesz

-- 
quicksil1er: postgres is excellent, but like any DB it requires a
highly paid DBA.  here's my CV! :)
http://www.depesz.com/ - blog dla ciebie (i moje CV)



Re: [GENERAL] Can't make backup (again)

2008-01-17 Thread Richard Huxton

Sebastián Baioni wrote:

Hi,
Some days ago we were having problems running pg_dump (v. 8.2.5.7260) from 
Windows XP SP2 to a database in a sever PostgreSQL 8.2.5 on 
amd64-portbld-freebsd6.2.
We thought the problem was solved but we are having problems again.
Now we got an error:
cannot allocate memory for input buffer
The command was: COPY public.sipat00 (sipasede, ... ) TO stdout;

¿Is this a memory problem? 


Well, yes: you can't allocate memory.

 ¿From server (FreeBSD) or from client PC (Windows XP)?

Did you see the error on the client PC or the server?
What do the server logs show?
What was the memory usage on the client when this happened?

--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Forgot to dump old data before re-installing machine

2008-01-17 Thread Stefan Schwarzer
This looks like an endianness mismatch; did you already mention what
architecture you are on?



MacPro, Leopard


Did you just move from a PPC-based Mac to an Intel-based one?
If so, you're out of luck --- you need to go back to the PPC
to make a dump of those files.



No, I just re-installed my Intel Mac. First I just upgraded from Tiger  
to Leopard (without getting my database to run; but I didn't put much  
effort into it); and then I completely erased the disk and installed  
Leopard from scratch.





Re: [GENERAL] Can't make backup (again)

2008-01-17 Thread Sebastián Baioni

--- Richard Huxton [EMAIL PROTECTED] escribió:

 Sebastián Baioni wrote:
  Hi,
  Some days ago we were having problems running pg_dump (v. 8.2.5.7260) from 
  Windows XP SP2 to a
 database in a sever PostgreSQL 8.2.5 on amd64-portbld-freebsd6.2.
  We thought the problem was solved but we are having problems again.
  Now we got an error:
  cannot allocate memory for input buffer
  The command was: COPY public.sipat00 (sipasede, ... ) TO stdout;
  
  ¿Is this a memory problem? 
 
 Well, yes you can't allocate memory.
 
   ¿From server (FreeBSD) or from client PC (Windows XP)?
 
 Did you see the error on the client PC or the server?
 What do the server logs show?
 What was the memory usage on the client when this happened?
 
 -- 
Richard Huxton
Archonet Ltd

The error was on the client PC; the command we used on the command line was:
C:\Program Files\pgAdmin III\1.8\pg_dump.exe -h mihost -p 5432 -U miuser -F c -v -b -f
C:\back\db.backup db
I contacted the admin and he told me there were no errors in the postgresql
log nor in the server log.
I don't know the memory usage on the client, I'll try to repeat the error and 
check it.

Thank you
 Sebastián






Re: [GENERAL] Can't make backup (again)

2008-01-17 Thread Sebastián Baioni

--- Sebastián Baioni [EMAIL PROTECTED] escribió:

 
 --- Richard Huxton [EMAIL PROTECTED] escribió:
 
  Sebastián Baioni wrote:
   Hi,
   Some days ago we were having problems running pg_dump (v. 8.2.5.7260) 
   from Windows XP SP2 to
 a
  database in a sever PostgreSQL 8.2.5 on amd64-portbld-freebsd6.2.
   We thought the problem was solved but we are having problems again.
   Now we got an error:
   cannot allocate memory for input buffer
   The command was: COPY public.sipat00 (sipasede, ... ) TO stdout;
   
   ¿Is this a memory problem? 
  
  Well, yes you can't allocate memory.
  
¿From server (FreeBSD) or from client PC (Windows XP)?
  
  Did you see the error on the client PC or the server?
  What do the server logs show?
  What was the memory usage on the client when this happened?
  
  -- 
 Richard Huxton
 Archonet Ltd
 
 The error was in the client PC, the command we used in a command line was: 
 C:\Program
 files\pgAdmin III\1.8\pg_dump.exe -h mihost -p 5432 -U miuser -F c -v -b -f 
 C:\back\db.backup
 db
 I contacted the Admin and told me there was no erros nor in the postgresql 
 log nor in the server
 log.
 I don't know the memory usage on the client, I'll try to repeat the error and 
 check it.
 
 Thank you
  Sebastián
 
I forgot to say that the error was seen on the client, but it looked like a
response from the server:
pg_dump: Mensaje de error del servidor: cannot allocate memory for input buffer
pg_dump: El comando es: COPY public.sipat00 (sipasede, ...) TO stdout;
pg_dump: *** se abortó por un error

*Mensaje de error del servidor = server's error message
*El comando es = the command is
*se abortó por un error = it was aborted because of an error


  Los referentes más importantes en compra/ venta de autos se juntaron:
Demotores y Yahoo!
Ahora comprar o vender tu auto es más fácil. Vistá ar.autos.yahoo.com/



Re: [GENERAL] Don't cascade drop to view

2008-01-17 Thread Sim Zacks

Unfortunately, there is no way around it.
Without CASCADE it won't let you drop the schema or table.
Functions will not be dropped.
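A quick way to see that behaviour for yourself (table and view names invented for the demo):

```sql
CREATE TABLE t (id int);
CREATE VIEW v AS SELECT id FROM t;

DROP TABLE t;           -- fails: view v depends on table t
DROP TABLE t CASCADE;   -- succeeds, but drops view v along with it

-- A plpgsql function that merely references t is not tracked as a
-- dependency; it survives the drop and only errors when next called.
```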

Sim

Peter Bauer wrote:

Hi all,

I made some views for the slony1 configuration tables in the public schema
which refer to tables in the _slony1 schema. My problem now is that if the
_slony1 schema is dropped with CASCADE, or slony is uninstalled, these views
are also dropped and I have to recreate them if slony is initialized again.
Is there a possibility to let the views live on even if the referred schema
or tables are dropped? Would a plpgsql function also be dropped?


thx,
Peter





[GENERAL] Don't cascade drop to view

2008-01-17 Thread Peter Bauer

Hi all,

I made some views for the slony1 configuration tables in the public schema
which refer to tables in the _slony1 schema. My problem now is that if the
_slony1 schema is dropped with CASCADE, or slony is uninstalled, these views
are also dropped and I have to recreate them if slony is initialized again.
Is there a possibility to let the views live on even if the referred schema
or tables are dropped? Would a plpgsql function also be dropped?

thx,
Peter

-- 
Peter Bauer
APUS Software G.m.b.H.
A-8074 Raaba, Bahnhofstrasse 1/1
Email: [EMAIL PROTECTED]
Tel: +43 316 401629 24
Fax: +43 316 401629 9



Re: [GENERAL] Help with pre-loaded arbitrary key sequences

2008-01-17 Thread James B. Byrne

On Thu, January 17, 2008 10:15, Scott Marlowe wrote:

 If race conditions are a possible issue, you use a sequence and
 increment that until you get a number that isn't used.  That way two
 clients connecting at the same time can get different, available
 numbers.


That is close to the idea that I originally had.  I was simply wondering
if the built-in sequencer could handle this case or whether I need to roll
my own.

-- 
***  E-Mail is NOT a SECURE channel  ***
James B. Byrne        mailto:[EMAIL PROTECTED]
Harte & Lyne Limited  http://www.harte-lyne.ca
9 Brockley Drive  vox: +1 905 561 1241
Hamilton, Ontario fax: +1 905 561 0757
Canada  L8E 3C3




Re: [GENERAL] Help with pre-loaded arbitrary key sequences

2008-01-17 Thread Scott Marlowe
On Jan 17, 2008 9:05 AM, James B. Byrne [EMAIL PROTECTED] wrote:

 If the entries involved numbered in the millions then Scott's approach has
 considerable merit.  In my case, as the rate of additions is very low and
 the size of the existing blocks is in the hundreds rather than hundreds of
 thousands then I believe that I will simply write my own iterator and do a
 repetitive select when on the incrementally proposed values until an
 opening is found then insert the new entry and update the iterator next
 value accordingly.

If race conditions are a possible issue, you use a sequence and
increment that until you get a number that isn't used.  That way two
clients connecting at the same time can get different, available
numbers.



Re: [GENERAL] Can't make backup (again)

2008-01-17 Thread Richard Huxton

Sebastián Baioni wrote:

I contacted the Admin and told me there was no erros nor in the
postgresql log nor in the server log.



I forgot to tell that the error was seen on the client but it was a
response from the server: pg_dump: Mensaje de error del servidor:
cannot allocate memory for input buffer pg_dump: El comando es: COPY
public.sipat00 (sipasede, ...) TO stdout; pg_dump: *** se abortó por
un error

*Mensaje de error del servidor = Server's Error Message *El comando
es: The command is *se abortó por un error: it was aborted because an
error.


If it is something server-side then there should be something in your 
server logs.


However, if you're taking a backup, I would expect the input buffer to 
be growing on the client rather than the server.


I think the Server's error message is abortó por un error, the error 
being out-of-memory on your client.


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Can't make backup (again)

2008-01-17 Thread Tom Lane
Sebastián Baioni [EMAIL PROTECTED] writes:
 I forgot to tell that the error was seen on the client but it was a response 
 from the server:
 pg_dump: Mensaje de error del servidor: cannot allocate memory for input 
 buffer
 pg_dump: El comando es: COPY public.sipat00 (sipasede, ...) TO stdout;
 pg_dump: *** se abortó por un error

No, that error text only appears in libpq, so it's happening on the
client side --- pg_dump just doesn't know the difference between an
error sent from the server and one generated within libpq.

My guess is that there's some extremely wide row(s) in your database.
The client-side libpq has to be able to buffer the widest row during
a COPY, and your client machine doesn't seem to be up to the task...
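If that guess is right, a query along these lines (just a sketch, not something pg_dump runs) would show the widest row libpq has to buffer:

```sql
-- Measure each row's text representation, roughly what COPY ... TO stdout
-- emits and the client must buffer one row at a time:
SELECT max(octet_length(t::text)) AS widest_row_bytes
  FROM public.sipat00 t;
```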

regards, tom lane



Re: [GENERAL] Accessing composite type columns from C

2008-01-17 Thread Tom Lane
Reg Me Please [EMAIL PROTECTED] writes:
 Is there a way with the libpq to access subcolumns in a composite type 
 column?

libpq knows nothing in particular about composite columns.  You'd need
to parse out the data for yourself, per the syntax rules at
http://www.postgresql.org/docs/8.2/static/rowtypes.html#AEN5986

Depending on what you're doing, it might be better to expand the
column at the server side, ie, SELECT (ccol).* FROM ...
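For example (type, table, and column names invented), the two options look like:

```sql
CREATE TYPE address AS (street text, city text);
CREATE TABLE people (name text, addr address);

-- libpq sees one column containing e.g. '(Main St,Springfield)',
-- which the client must parse by the rowtype syntax rules:
SELECT addr FROM people;

-- Expanded server-side, each subcolumn arrives as an ordinary column:
SELECT name, (addr).street, (addr).city FROM people;
```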

regards, tom lane



Re: [GENERAL] advocacy: drupal and PostgreSQL

2008-01-17 Thread Tom Hart

Joshua D. Drake wrote:

Robert Treat wrote:

There's been a big move in the php community to push people towards 
php5 (one of which was EOL of php4), which has started to pay off.  
I'd guess that if they wanted to, they could switch to PDO with 
Drupal 7 and not hurt themselves too much.


When I spoke with Dries about this issue, one of the big holdbacks
wasn't PHP 4 but actually MySQL 3. With Drupal 6, MySQL 3 is no
longer supported, so they can actually do some nicer stuff (like
foreign keys) etc.


I am sure that with PHP5 things will improve as well.

Sincerely,

Joshua D. Drake

Let me just sneak in a quick rant here, from somebody who really doesn't 
matter.


We run drupal for our corporate intranet (currently being built) and we 
use postgreSQL as the backend. Some of the modules and things don't work 
perfectly, but drupal supported it and that made me happy enough to work 
with it. Now after reading this garbage, I'm extremely disappointed. 
Completely dropping postgresql capability might not affect them too 
largely in the long run, because a large amount of their user base is 
using mySQL, but it would send a message to those of us that believe in 
choice. I'm afraid that they're choosing the route of convenience over 
their users, and every time I think about it I want to go looking for 
replacements.


It'd be easier to build drupal to only run on mySQL, but then again it'd 
be easy to build postgreSQL to only run on linux and forget about the 
windows users. I know it's not their duty to make drupal work with 
postgresql, but if they drop it like they're talking about, I'll be 
making a push here and to everyone I know who uses drupal to switch to 
another system, whether they're running postgres or not. If drupal 6 
absolutely doesn't support postgres, then I'm dropping my drupal 5 
install on the spot. This is a cold move drupal, and you should be ashamed.


Sorry, I'll end the rant here.

BTW, I'm a PHP developer who uses postgreSQL almost exclusively and I'm 
on this list as well as other postgres lists constantly (even if as a 
reader most of the time). If they have this big of an issue, why not ask 
for help?


--
Tom Hart
IT Specialist
Cooperative Federal
723 Westcott St.
Syracuse, NY 13210
(315) 471-1116 ext. 202
(315) 476-0567 (fax)




Re: [GENERAL] Help with pre-loaded arbitrary key sequences

2008-01-17 Thread James B. Byrne

On Wed, January 16, 2008 18:40, Scott Marlowe wrote:
 You're essentially wanting to fill in the blanks here.  If you need
 good performance, then what you'll need to do is to preallocate all
 the numbers that haven't been assigned somewhere.  So, we make a table
 something like:

 create table locatorcodes (i int, count_id serial);

 Then we insert an id into that table for everyone that's missing from
 the main table:

 insert into locatorcodes (i)
 select b.i from (
 select * from generate_series(1,100) as i
 ) as b
 left join main_table a on (b.i=a.i)
 where a.i is null;

 Or something like that.

 Now, we've got a table with all the unused ids, and a serial count
 assigned to them.  Create another sequence:

 create sequence checkout_sequence;

 and use that to check out numbers from locatorcodes:

 select i from locatorcodes where count_id=nextval('checkout_sequence');

 And since the sequence will just count up, there's little or no
 problems with performance.

 There's lots of ways of handling this.  That's just one of the ones
 that doesn't slow your database down a lot.

 If you need to, you can shuffle the numbers going into the
 locatorcodes table with an order by random() when you create it.


Martin and Scott,

Thank you both for your responses.

If the entries involved numbered in the millions then Scott's approach has
considerable merit.  In my case, as the rate of additions is very low and
the size of the existing blocks is in the hundreds rather than hundreds of
thousands then I believe that I will simply write my own iterator and do a
repetitive select when on the incrementally proposed values until an
opening is found then insert the new entry and update the iterator next
value accordingly.

Regards,

-- 
***  E-Mail is NOT a SECURE channel  ***
James B. Byrne        mailto:[EMAIL PROTECTED]
Harte & Lyne Limited  http://www.harte-lyne.ca
9 Brockley Drive  vox: +1 905 561 1241
Hamilton, Ontario fax: +1 905 561 0757
Canada  L8E 3C3




Re: [GENERAL] Help with pre-loaded arbitrary key sequences

2008-01-17 Thread Scott Marlowe
On Jan 17, 2008 9:19 AM, James B. Byrne [EMAIL PROTECTED] wrote:

 On Thu, January 17, 2008 10:15, Scott Marlowe wrote:
 
  If race conditions are a possible issue, you use a sequence and
  increment that until you get a number that isn't used.  That way two
  clients connecting at the same time can get different, available
  numbers.
 

 That is close to the idea that I originally had.  I was simply wondering
 if the built-in sequencer could handle this case or whether I need to roll
 my own.

Yeah, the built in sequencer just increments by one, nothing else.
But it should be pretty easy to write a pl/pgsql function that grabs
the next value and loop until it finds one that's available.



Re: [GENERAL] advocacy: drupal and PostgreSQL

2008-01-17 Thread Bill Moran
In response to Tom Hart [EMAIL PROTECTED]:

 Joshua D. Drake wrote:
  Robert Treat wrote:
 
  There's been a big move in the php community to push people towards 
  php5 (one of which was EOL of php4), which has started to pay off.  
  I'd guess that if they wanted to, they could switch to PDO with 
  Drupal 7 and not hurt themselves too much.
 
  When I spoke with Dries about this issue one of the big hold backs 
  wasn't PHP 4 but actually MySQL 3. When Drupal 6, MySQL 3 is not 
  longer supported. So they can actually do some nicer stuff (like 
  foreign keys) etc..
 
  I am sure that with PHP5 things will improve as well.
 
  Sincerely,
 
  Joshua D. Drake

 Let me just sneak in a quick rant here, from somebody who really doesn't 
 matter.
 
 We run drupal for our corporate intranet (currently being built) and we 
 use postgreSQL as the backend. Some of the modules and things don't work 
 perfectly, but drupal supported it and that made me happy enough to work 
 with it. Now after reading this garbage, I'm extremely disappointed. 
 Completely dropping postgresql capability might not affect them too 
 largely in the huge run, because a large amount of their user base is 
 using mySQL, but it would send a message to those of us that believe in 
 choice. I'm afraid that they're choosing the route of convenience over 
 their users, and every time I think about it I want to go looking for 
 replacements.

I run my personal site on Drupal+PostgreSQL.  If Drupal drops PG support,
I'll switch the front-end.  I'm not switching the back end.

I'm also planning a small enterprise that I was originally considering
using Drupal for.  I'm now more seriously considering Bricolage.

However, read on ...

 It'd be easier to build drupal to only run on mySQL, but then again it'd 
 be easy to build postgreSQL to only run on linux and forget about the 
 windows users. I know it's not their duty to make drupal work with 
 postgresql, but if they drop it like they're talking about, I'll be 
 making a push here and to everyone I know who uses drupal to switch to 
 another system, whether they're running postgres or not. If drupal 6 
 absolutely doesn't support postgres, then I'm dropping my drupal 5 
 install on the spot. This is a cold move drupal, and you should be ashamed.

I made a post on the drupal-devel list to this effect.  I got chewed out
for flaming Karoly ... who's obviously some big Drupal code guru.

Frankly, every time this topic comes up, it's initiated by Karoly, and
I've lost patience with the crap, so I unsubscribed.

If I can get my life back in order, I'll re-subscribe some time in Feb,
and hopefully start to do something productive, like contribute testing
and patches.

 Sorry, I'll end the rant here.
 
 BTW, I'm a PHP developer who uses postgreSQL almost exclusively and I'm 
 on this list as well as other postgres lists constantly (even if as a 
 reader most of the time). If they have this big of an issue, why not ask 
 for help?

If you read through the thread, it's just Karoly and a few other minor
players in the Drupal community.  Many people have stepped up and said,
"I _do_ test on PostgreSQL, so what are you complaining about?"

As best I can tell, Karoly writes patches, and when they don't work on
PostgreSQL and therefore don't get committed right away, he starts this
argument up on the Drupal lists yet again.  The guy is a whiner who has
a personal axe to grind and seems unable to accept that Drupal wants to
run on more than just MySQL.  If he loves MySQL so much, he should join
a project that only supports MySQL and leave the Drupal people to their
work.  There's a LOT of effort in the Drupal community to build code
abstractions that will make the system database-agnostic, and Karoly's
constant whining is simply counterproductive.

To a large degree, I think Karoly has blown the situation out of
proportion.  Look at how it affects _this_ list every time he starts
bitching, for example.

-- 
Bill Moran
http://www.potentialtech.com



[GENERAL] losing db user password going from 7.4 to 8.2

2008-01-17 Thread Steve Clark

Hello List,

I am doing a pg_dumpall -c on 7.4.
I then use psql to load into 8.2 everything seems to be right except my
db user passwords don't work anymore.

What am I missing?

I have already tried starting 8.2 postgres with both
#password_encryption = on
password_encryption = off

then reloading the database dump.

Thanks,
Steve



Re: [GENERAL] Help with pre-loaded arbitrary key sequences

2008-01-17 Thread Scott Marlowe
On Jan 17, 2008 9:19 AM, James B. Byrne [EMAIL PROTECTED] wrote:

 On Thu, January 17, 2008 10:15, Scott Marlowe wrote:
 
  If race conditions are a possible issue, you use a sequence and
  increment that until you get a number that isn't used.  That way two
  clients connecting at the same time can get different, available
  numbers.
 

 That is close to the idea that I originally had.  I was simply wondering
 if the built-in sequencer could handle this case or whether I need to roll
 my own.

Got bored, hacked this egregious pl/pgsql routine up.  It looks
horrible, but I wanted it to be able to use indexes.  Seems to work.
The test table has ~750k rows in it and returns a new id in < 1ms
on my little server.

File attached.
drop table a;
drop sequence aseq;
create table a (i integer primary key);
create sequence aseq;
insert into a(i) select * from generate_series(1,100) where random() < 0.25;

create or replace function getnext() returns int as $$
DECLARE
	niq int;
	tf bool;
BEGIN
	loop
		select nextval('aseq') into niq;
		select case 
			when (select true from (select niq as i) as x join a on (a.i=x.i)) 
			then TRUE 
			else FALSE end into tf;
		exit when not tf ;
	end loop;
	return niq;
END;
$$ language plpgsql;



[GENERAL] pg_dumpall

2008-01-17 Thread Steve Clark

Hello List,

the man page for pg_dump say:
pg_dump is a utility for backing up a PostgreSQL database. It makes
consistent backups even if the database is being used concurrently.


does pg_dumpall make consistent backups if the database is being used 
concurrently?

Even though the man page doesn't say it does.

Thanks,
Steve



Re: [GENERAL] losing db user password going from 7.4 to 8.2

2008-01-17 Thread Scott Marlowe
On Jan 17, 2008 10:35 AM, Steve Clark [EMAIL PROTECTED] wrote:
 Hello List,

 I am doing a pg_dumpall -c on 7.4.
 I then use psql to load into 8.2 everything seems to be right except my
 db user passwords don't work anymore.

 What am I missing.


What error message are you getting?



Re: [GENERAL] pg_dumpall

2008-01-17 Thread Martijn van Oosterhout
On Thu, Jan 17, 2008 at 11:14:22AM -0800, Glyn Astill wrote:
  begin;
  set transaction isolation level serializable;
  
  --- begin dumping stuff;
  
 
 Wouldn't that just lock everything so nothing could be updated? Or
 just the table it is outputting?

PostgreSQL uses MVCC, which means the whole thing is lock-free. It just
requires more disk space to keep the older versions around.

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 Those who make peaceful revolution impossible will make violent revolution 
 inevitable.
  -- John F Kennedy




Re: [GENERAL] pg_dumpall

2008-01-17 Thread Glyn Astill

Alvaro Herrera [EMAIL PROTECTED] wrote:
 Glyn Astill wrote:
  Out of interest, how does pg_dump manage to do a snapshot of a
  database at an instant in time?
 
  My mental picture of pg_dump was just a series of queries dumping
 out
  the tables...
 
 begin;
 set transaction isolation level serializable;
 
 --- begin dumping stuff;
 

Wouldn't that just lock everything so nothing could be updated? Or
just the table it is outputting?

I'm guessing I need to go off and school myself on different
isolation levels etc to understand, but say I have 2 tables sales
and sold, and users are selling items with inserts into the sales
table and a count updating manually in sold. Wouldn't these end up
inconsistent in the dump?








Re: [GENERAL] pg_dumpall

2008-01-17 Thread Greg Smith

On Thu, 17 Jan 2008, Tom Lane wrote:

There isn't any good way to guarantee time coherence of dumps across two 
databases.


Whether there's a good way depends on what you're already doing.  If 
you're going to the trouble of making a backup using PITR anyway, it's not 
hard to stop applying new logs to that replica and dump from it to get a 
point in time backup across all the databases.  That's kind of painful now 
because you have to start the server to run pg_dumpall, so resuming 
recovery is difficult, but you can play filesystem tricks to make that 
easier.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] losing db user password going from 7.4 to 8.2

2008-01-17 Thread Steve Clark

Scott Marlowe wrote:

On Jan 17, 2008 10:35 AM, Steve Clark [EMAIL PROTECTED] wrote:


Hello List,

I am doing a pg_dumpall -c on 7.4.
I then use psql to load into 8.2 everything seems to be right except my
db user passwords don't work anymore.

What am I missing.




What error message are you getting?


Duh - my bad - my browser had the wrong password in it - so I thought 
things were broken.


Sorry for the noise - as he hides his head in shame.

Steve

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Don't cascade drop to view

2008-01-17 Thread Erik Jones


On Jan 17, 2008, at 8:27 AM, Sim Zacks wrote:

Peter Bauer wrote:

Hi all,
i made some views for the slony1 configuration tables in the  
public schema which refer to tables in the _slony1 schema. My  
problem now is that if the _slony1 schema is dropped with cascade  
or slony is uninstalled, these views are also dropped and I have  
to recreate them if slony is initialized again.
Is there a possibility to let the views live there even if the  
referred schema or tables are dropped? Would a plpgsql Function  
also be dropped?

thx,
Peter



Unfortunately, there is no way around it.
Without cascade it won't let you delete the schema or table.
Functions will not be dropped.


If you dropped tables out from under views, how would you expect them  
to act if someone were to query them?  Inconsistent and unpredictable  
are just two words I'd use to describe a system that allowed that.   
However, if these are relatively simple views, you may be able to get  
away with re-implementing them as functions that return sets of  
whatever record type your views are.


If you're building custom views that depend on a separate package  
then you're going to be pretty much required to write scripts to  
generate those view and custom install scripts for the package you're  
building on.  Oh, and document all of that, as well.  If you don't do  
this you'll be stuck managing things by hand which, on all but teeny  
tiny projects, is a BadThing.


Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com




---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] pg_dumpall

2008-01-17 Thread Glyn Astill
Out of interest, how does pg_dump manage to do a snapshot of a
database at an instant in time?

My mental picture of pg_dump was just a series of queries dumping out
the tables...

--- Tom Lane [EMAIL PROTECTED] wrote:

 Steve Clark [EMAIL PROTECTED] writes:
  does pg_dumpall make consistent backups if the database is being
 used 
  concurrently?
  Even though the man page doesn't say it does.
 
 That's intentional, because it doesn't.  What you get is a pg_dump
 snapshot of each database in sequence; those snapshots don't all
 correspond to the same time instant.  There isn't any good way to
 guarantee time coherence of dumps across two databases.
 
   regards, tom lane
 
 ---(end of
 broadcast)---
 TIP 2: Don't 'kill -9' the postmaster
 



  ___
Support the World Aids Awareness campaign this month with Yahoo! For Good 
http://uk.promotions.yahoo.com/forgood/


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] pg_dumpall

2008-01-17 Thread Alvaro Herrera
Tom Lane wrote:
 Steve Clark [EMAIL PROTECTED] writes:
  does pg_dumpall make consistent backups if the database is being used 
  concurrently?
  Even though the man page doesn't say it does.
 
 That's intentional, because it doesn't.  What you get is a pg_dump
 snapshot of each database in sequence; those snapshots don't all
 correspond to the same time instant.  There isn't any good way to
 guarantee time coherence of dumps across two databases.

The fine point possibly being missed is that each database's dump
produced by pg_dumpall is, of course, self-consistent.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] testing the news gateway

2008-01-17 Thread Alvaro Herrera
Hello!  This is just a test, please ignore.

If you don't ignore it, I'll ignore you.

Thanks,

-- 
Alvaro Herrera

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] pg_dumpall

2008-01-17 Thread Tom Lane
Steve Clark [EMAIL PROTECTED] writes:
 does pg_dumpall make consistent backups if the database is being used 
 concurrently?
 Even though the man page doesn't say it does.

That's intentional, because it doesn't.  What you get is a pg_dump
snapshot of each database in sequence; those snapshots don't all
correspond to the same time instant.  There isn't any good way to
guarantee time coherence of dumps across two databases.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] pg_dumpall

2008-01-17 Thread Erik Jones


On Jan 17, 2008, at 1:08 PM, Greg Smith wrote:


On Thu, 17 Jan 2008, Tom Lane wrote:

There isn't any good way to guarantee time coherence of dumps  
across two databases.


Whether there's a good way depends on what you're already doing.   
If you're going to the trouble of making a backup using PITR  
anyway, it's not hard to stop applying new logs to that replica and  
dump from it to get a point in time backup across all the  
databases.  That's kind of painful now because you have to start  
the server to run pg_dumpall, so resuming recovery is difficult,  
but you can play filesystem tricks to make that easier.


Actually, this exact scenario brings up a question I was thinking of  
last night.  If you stop a PITR standby server and bring it up to  
dump from, will all of the database files have something written to  
them at some point during the dump?  Transactional information is  
what I'd assume would be written, if so, but I'm not really sure of  
the low level details there.


Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] advocacy: drupal and PostgreSQL

2008-01-17 Thread Tom Hart

Bill Moran wrote:

In response to Tom Hart [EMAIL PROTECTED]:

  


Let me just sneak in a quick rant here, from somebody who really doesn't 
matter.


We run drupal for our corporate intranet (currently being built) and we 
use postgreSQL as the backend. Some of the modules and things don't work 
perfectly, but drupal supported it and that made me happy enough to work 
with it. Now after reading this garbage, I'm extremely disappointed. 
Completely dropping postgresql capability might not affect them too 
largely in the huge run, because a large amount of their user base is 
using mySQL, but it would send a message to those of us that believe in 
choice. I'm afraid that they're choosing the route of convenience over 
their users, and every time I think about it I want to go looking for 
replacements.



I run my personal site on Drupal+PostgreSQL.  If Drupal drops PG support,
I'll switch the front-end.  I'm not switching the back end.

I'm also planning a small enterprise that I was originally considering
using Drupal for.  I'm now more seriously considering Bricolage.

However, read on ...

  
It'd be easier to build drupal to only run on mySQL, but then again it'd 
be easy to build postgreSQL to only run on linux and forget about the 
windows users. I know it's not their duty to make drupal work with 
postgresql, but if they drop it like they're talking about, I'll be 
making a push here and to everyone I know who uses drupal to switch to 
another system, whether they're running postgres or not. If drupal 6 
absolutely doesn't support postgres, then I'm dropping my drupal 5 
install on the spot. This is a cold move drupal, and you should be ashamed.



I made a post on the drupal-devel list to this effect.  I got chewed out
for flaming Karoly ... who's obviously some big Drupal code guru.

Frankly, every time this topic comes up, it's initiated by Karoly, and
I've lost patience with the crap, so I unsubscribed.

If I can get my life back in order, I'll re-subscribe some time in Feb,
and hopefully start to do something productive, like contribute testing
and patches.

  

Sorry, I'll end the rant here.

BTW, I'm a PHP developer who uses postgreSQL almost exclusively and I'm 
on this list as well as other postgres lists constantly (even if as a 
reader most of the time). If they have this big of an issue, why not ask 
for help?



If you read through the thread, it's just Karoly and a few other minor
players in the Drupal community.  Many people have stepped up and said,
"I _do_ test on PostgreSQL, so what are you complaining about?"

As best I can tell, Karoly writes patches, and when they don't work on
PostgreSQL and therefore don't get committed right away, he starts this
argument up on the Drupal lists yet again.  The guy is a whiner who has
a personal axe to grind and seems unable to accept that Drupal wants to
run on more than just MySQL.  If he loves MySQL so much, he should join
a project that only supports MySQL and leave the Drupal people to their
work.  There's a LOT of effort in the Drupal community to build code
abstractions that will make the system database-agnostic, and Karoly's
constant whining is simply counterproductive.

To a large degree, I think Karoly has blown the situation out of
proportion.  Look at how it affects _this_ list every time he starts
bitching, for example.

  
Is it just Karoly (chx) who has all these things to say about pg? He's 
just one person on the drupal team. Has anybody else in the core team 
spoken out on this subject?


Let's keep in mind as well that this doesn't only affect pg users but 
any other database as well that drupal supports or plans on supporting. 
Drupal is pretty popular, and I expect there are a number of 
organizations that don't fit in their mold of the ideal drupal user.


I'd almost consider trying to take drupal and create a derivative 
product and build in the pg and oracle and mssql, etc. support myself, 
but if the drupal team really pulls a messed up move like this, I really 
don't want to have anything to do with them anymore. It's not that I'm 
that huge of a pg nut (I used mySQL for a while myself), but any team 
that can turn it's back on that many of it's users to make their lives a 
little easier isn't in it for the right reasons (the advancement of 
technology, computing as a science, etc.). I am literally astonished 
that they would even consider telling even 1% of their users "Take off, 
you're too much work." How many drupal+postgres users are large 
corporations, or regular donators? What about code contributors? How 
many people are they looking at pissing off with a move like this?


Obviously emotion has gotten the better of me which is why I won't post 
to the drupal boards/lists (I might be accused of flaming and I don't 
want to paint the pgSQL community in a negative light), but I think that 
somebody should let the drupal people know that we're still here and 
we'd like to use the new drupal, just not on 

Re: [GENERAL] Don't cascade drop to view

2008-01-17 Thread Martijn van Oosterhout
On Thu, Jan 17, 2008 at 11:10:25AM -0600, Erik Jones wrote:
 If you dropped tables out from under views, how would you expect them  
 to act if someone were to query them?  Inconsistent and unpredictable  
 are just two words I'd use to describe a system that allowed that.   

I'd expect it to throw an error that the tables are missing. I ran into
this today. All it really requires is that the view definition be
parsed at use time rather than at creation time.

 However, if these are relatively simple views, you may be able to get  
 away with re-implementing them as functions that return sets of  
 whatever record type your views are.

As you say, functions are compiled at use time, and hence don't suffer
this problem. You can build a view on the function and it should be
transparent...
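The early-vs-late binding distinction can be sketched in a few lines of Python (purely an analogy; PostgreSQL views actually store parsed, bound query trees): a "view" captures the table object when it is defined, while a "function" looks the table up by name each time it runs.

```python
# Toy contrast: a view binds to the table at creation time, a function
# resolves the table name at call time.  Illustrative analogy only.

catalog = {"t": [1, 2, 3]}

# "View": binds to the table object when defined.
bound_table = catalog["t"]
view = lambda: [x * 2 for x in bound_table]

# "Function": looks the table up by name each time it runs.
def func():
    return [x * 2 for x in catalog["t"]]

catalog["t"] = [10]          # drop and recreate the table

print(view())   # [2, 4, 6] -- stale: still reads the old table object
print(func())   # [20]      -- late binding sees the recreated table
```

This is why dropping a table out from under a function merely produces an error at call time, while a view must be dropped along with its table.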

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 Those who make peaceful revolution impossible will make violent revolution 
 inevitable.
  -- John F Kennedy




Re: [GENERAL] pg_dumpall

2008-01-17 Thread Alvaro Herrera
Glyn Astill wrote:
 Out of interest, how does pg_dump manage to do a snapshot of a
 database at an instant in time?

 My mental picture of pg_dump was just a series of queries dumping out
 the tables...

begin;
set transaction isolation level serializable;

--- begin dumping stuff;
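In MVCC terms the serializable transaction gives every query in the dump the same frozen view of all tables in that database. A toy sketch of why the dump stays internally consistent even while writes continue (hypothetical table names, not pg_dump internals):

```python
import copy

# Toy model: starting the serializable transaction freezes one
# consistent view of every table; later writes touch only live data.
database = {
    "sales": [{"item": "widget"}],
    "sold":  [{"count": 1}],
}

def begin_serializable(db):
    # Every query in the dumping transaction reads from this snapshot.
    return copy.deepcopy(db)

snapshot = begin_serializable(database)

# Concurrent activity after the dump transaction started:
database["sales"].append({"item": "gadget"})
database["sold"][0]["count"] = 2

# The "dump" reads both tables from the same snapshot, so the sales
# rows and the sold counter still agree with each other.
dump = {name: rows for name, rows in snapshot.items()}
print(len(dump["sales"]), dump["sold"][0]["count"])   # 1 1 -- consistent
```

No locks are taken against the writers; they simply create new row versions the dump's snapshot cannot see.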



-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] pg_dumpall

2008-01-17 Thread Tom Lane
Alvaro Herrera [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 That's intentional, because it doesn't.  What you get is a pg_dump
 snapshot of each database in sequence; those snapshots don't all
 correspond to the same time instant.  There isn't any good way to
 guarantee time coherence of dumps across two databases.

 The fine point possibly being missed is that each database's dump
 produced by pg_dumpall is, of course, self-consistent.

Right, but Steve already knew that.

Hmm ... it suddenly strikes me that Simon's transaction snapshot
cloning idea could provide a way to get exactly coherent dumps from
multiple databases in the same cluster.  Maybe he already realized that,
but I didn't.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] [OT] RAID controllers blocking one another?

2008-01-17 Thread Sean Davis
We have a machine that serves as a fileserver and a database server.  Our
server hosts a RAID array of 40 disk drives, attached to two 3ware cards,
one 9640SE-24 and one 9640SE-16. We have noticed that activity on one
controller blocks access on the second controller, not only for disk-IO but
also the command line tools which become unresponsive for the inactive
controller.   The controllers are sitting in adjacent PCI-express slots on a
machine with dual-dual AMD and 16GB of RAM.  Has anyone else noticed issues
like this?  Throughput for either controller is a pretty respectable
150-200MB/s writing and somewhat faster for reading, but the blocking is
problematic, as the machine is serving multiple purposes.

I know this is off-topic, but I know lots of folks here deal with very large
disk arrays; it is hard to get real-world input on machines such as these.


Thanks,
Sean


Re: [GENERAL] advocacy: drupal and PostgreSQL

2008-01-17 Thread Ivan Sergio Borgonovo
On Thu, 17 Jan 2008 11:03:43 -0500
Tom Hart [EMAIL PROTECTED] wrote:

 Let me just sneak in a quick rant here, from somebody who really
 doesn't matter.
 
 We run drupal for our corporate intranet (currently being built)
 and we use postgreSQL as the backend. Some of the modules and
 things don't work perfectly, but drupal supported it and that made
 me happy enough to work with it. Now after reading this garbage,

Same here.

 I'm extremely disappointed. Completely dropping postgresql
 capability might not affect them too largely in the huge run,
 because a large amount of their user base is using mySQL, but it
 would send a message to those of us that believe in choice. I'm
 afraid that they're choosing the route of convenience over their
 users, and every time I think about it I want to go looking for
 replacements.

Same here.

I think Postgres is a great DB... but what upset me most is:
- missing freedom
- missing freedom

I'd explain the 2 points above better and add a 3rd one later.
- If you've just one choice it is nice that it is Open, but having a
competitor is better
- once you go for one DB it will be extremely painful to go back and
the overall design of such a beast will suffer a lot

3rd point:
- going MySQL only may be OK for Joomla. Not for Drupal.
Drupal is halfway between a CMS and a framework. You can use it to do
more than just showing semi-static pages. You can use it to deal with
money where transaction and ref. integrity is important and from the
point of view of a developer it is not yet a comoditised software.
Clients are asking for more than just installing it on a hosting.

I've seen some of the problems Drupal has with Postgres and quite a
bunch are caused by absolutely unnecessary Mysqlisms.
Then when there are patches that require to fix badly written SQL
spread around people complain pg is holding the world back.

There are very smart developers, some of whom (Larry Garfield, Edison
Wong) are actually going in the right direction without complaining
every now and then that pg is holding drupal back and drupal I think
is still the leader of a marketplace nobody is in.
Joshua posted the link to Edison's project that can support pg, MS
SQL, Oracle, DB2(?)... but well I had the feeling that Edison is a bit
ostracised. While I wouldn't define his work a DB AL... well it works
so kudos!
Unfortunately 6.X should be out soon and his work won't be included
and it looks that 7.X infrastructure will be even better.

Maybe some people hope to get rich fast enough they won't have to
work with any other DB other than MySQL...

 postgres or not. If drupal 6 absolutely doesn't support postgres,

Thanks to schema api drupal 6 should support pg even better. But it is
half the work since the DB abstraction layer... is not abstract.
Substantially queries are passed to db_query as strings and adapted
with regexp.
Schema api uses array to define tables so you don't have to
serialise, unserialise, serialise queries and you have some metadata.
But... but... an DB abstraction layer is complicated and doesn't come
for free. Actually some people complain about the overhead of a DB AL
but then write awful SQL...
Anyway... well maybe adopting a full fledged ORM for 7.0 will
actually be too much... and writing one may take advantage of the
knowledge of underlying objects... but still be too much to be
written for 7.

 then I'm dropping my drupal 5 install on the spot. This is a cold
 move drupal, and you should be ashamed.

No... I just posted here so more people would be aware of the problem
and help correct it. I don't think drupal is really going to become
mono-db. It seems that even MS had some interest in porting drupal to
MS SQL... and there is actually a MS employee doing so...

I'd just like it to be done better and faster, and to avoid reading the
same thing on the drupal ML every 20 days or so ;)

 BTW, I'm a PHP developer who uses postgreSQL almost exclusively and
 I'm on this list as well as other postgres lists constantly (even
 if as a reader most of the time). If they have this big of an
 issue, why not ask for help?

Because some won't have any more excuse to write MySQLish SQL ;)
I think that people that use drupal on Postgres could get a bit more
involved in drupal all the project will gain *a lot* and not just
because of improved pg support but because pg people generally are
better DB guys knowing more than one DB and conscious of what a DB
can do and what it is done for.


I know that what I'm writing is worth nearly 0, but I'm terribly busy
(guess what...) with building up a module for drupal that will run
just on pg (sorry... I'm not going to put money in something that
discovered transactions just yesterday no matter how fast it is to
serve thousands semi-static pages and if I can find it even on South
Pole hosting, If I had to serve thousands static pages I'd consider
MySQL, really, but I'm not).

As soon as I wake up from this deadline nightmare I'll definitively
try to review and propose patches to offer a 

[GENERAL] plpythonu

2008-01-17 Thread Alexandre da Silva
Hello,
someone can tell me if is secure to create external python modules and
import them to functions/procedures/triggers to use?

I am doing the following:

function/procedure/trigger:

CREATE OR REPLACE FUNCTION tabela_be_i_tg_fx() RETURNS trigger AS $body$
from dbfunctions.postgres.pg_trigger import TestTrigger as fx
fe = fx()
return fe.execute(args=TD["args"], event=TD["event"], when=TD["when"],
    level=TD["level"], name=TD["name"], relid=TD["relid"], new=TD["new"],
    old=TD["old"], pl_py=plpy)
$body$
LANGUAGE plpythonu;


at python module I have:


class TestTrigger(object):
    def execute(self, args, event, when, level, name, relid, new, old,
                pl_py):
        new["group_name"] = "__modified__"
        return "MODIFY"


all this works properly (on Windows and Linux), but I don't know if it is
the correct way to use plpythonu, and if I will have future problems.

Another question: I have read in some discussion list (an old
message from year 2003) about the possibility of plpython being removed
from postgresql; is this information still valid?

sorry bad English

Thanks for all

-- 
Alexandre da Silva
Systems Analyst - Bachelor in Information Systems (2003-2007)


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] advocacy: drupal and PostgreSQL

2008-01-17 Thread Scott Marlowe
On Jan 17, 2008 1:43 PM, Tom Hart [EMAIL PROTECTED] wrote:

 Obviously emotion has gotten the better of me which is why I won't post
 to the drupal boards/lists

Really, honestly, you're controlling it quite well.  Passion is fine.
As long as the lists stay civil, passion has its place.

 (I might be accused of flaming and I don't
 want to paint the pgSQL community in a negative light), but I think that
 somebody should let the drupal people know that we're still here and
 we'd like to use the new drupal, just not on mySQL.

 Oh, and a collective middle finger to anybody that says the pg community
 is too small to bother with.

I agree.

What gets me is the tortured logic I read in the post by nk on the
drupal board.  Two examples:

1: "With MySQL 5.0 and 5.1, there's no need for pgsql"
This statement shows that he knows nothing of the differences of the
two database engines of which he speaks.  And when you don't know
anything about a subject, it's best to ask someone who does.

2: "There's only 5% of drupal users that use pgsql, therefore they
aren't important."
-- The fact that PostgreSQL isn't fully supported (i.e. some modules
don't work) and it STILL has a 5% user base in Drupal is actually a
testament to the pgsql userbase.  They're willing to climb uphill to
get drupal working on their chosen platform.  If drupal properly
support pgsql, it might well be a much higher percentage that chose to
run on top of pgsql.
-- Which users are those 5%?  Maybe they're the sites that really show
off drupal to the public, maybe they're internal sites for very large
corporates, or maybe they're sites that just need to make sure the
accounting is done right.

I just read Ivan's post, and I agree, it sounds like people who
learned bad habits on mysql and are now whinging about their mysql
inspired sql not working on other platforms.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] advocacy: drupal and PostgreSQL

2008-01-17 Thread Bill Moran
In response to Tom Hart [EMAIL PROTECTED]:
 Bill Moran wrote:

[snip]

  To a large degree, I think Karoly has blown the situation out of
  proportion.  Look at how it affects _this_ list every time he starts
  bitching, for example.
 

 Is it just Karoly (chx) who has all these things to say about pg? He's 
 just one person on the drupal team. Has anybody else in the core team 
 spoken out on this subject?

Last time this came up (which was in Dec, I believe) a few other core
members jumped in eventually and said, "No, we're keeping PG.  The
goal of Drupal is to be database agnostic, so dropping PG is
counter-productive."  To which Karoly responded that he didn't want to _drop_
PG support, he just wanted to drop PG support ... or something equally
nonsensical.

The guy sets my jerk alarms ringing like a 5 alarm fire.  He doesn't
play well with others, he constantly starts fights, and he threatens
to take his ball and go home every time he loses.  I don't care how
much code he writes, I don't think he's worth the headache.

 Let's keep in mind as well that this doesn't only affect pg users but 
 any other database as well that drupal supports or plans on supporting. 
 Drupal is pretty popular, and I expect there are a number of 
 organizations that don't fit in their mold of the ideal drupal user.

As I said, I get the impression that most of the Drupal developers get
this, and they have mentioned more than once that Drupal's design goal
is to be database-agnostic.  It just seems to be Karoly and a few people
here and there that he's able to incite into riot.

 I'd almost consider trying to take drupal and create a derivative 
 product and build in the pg and oracle and mssql, etc. support myself, 
 but if the drupal team really pulls a messed up move like this, I really 
 don't want to have anything to do with them anymore. It's not that I'm 
 that huge of a pg nut (I used mySQL for a while myself), but any team 
 that can turn it's back on that many of it's users to make their lives a 
 little easier isn't in it for the right reasons (the advancement of 
 technology, computing as a science, etc.). I am literally astonished 
 that they would even consider telling even 1% of their users "Take off, 
 you're too much work." How many drupal+postgres users are large 
 corporations, or regular donators? What about code contributors? How 
 many people are they looking at pissing off with a move like this?

I'm upset with the community.  The other core members need to stand up
to Karoly and say, "Your opinions are not those of the community, and
we'll ban you from the lists if you continue to start this fight over
and over again."

-- 
Bill Moran
http://www.potentialtech.com

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] advocacy: drupal and PostgreSQL

2008-01-17 Thread Alex Turner
I evaluated Drupal with PostgreSQL, but it wasn't powerful enough, and it's
written in PHP which is buggy, and lots of modules force you to use MySQL
which is not ACID (I'm sorry but inserting 31-Feb-2008 and not throwing an
error by default makes you non-ACID in my book).  PostgreSQL support was
spotty at best, and it sounds like one would have received precious little
help from the Drupal community.

I plumped for Plone SQLAlchemy and Postgresql instead.

Alex

On Jan 17, 2008 3:42 PM, Scott Marlowe [EMAIL PROTECTED] wrote:

 On Jan 17, 2008 1:43 PM, Tom Hart [EMAIL PROTECTED] wrote:

  Obviously emotion has gotten the better of me which is why I won't post
  to the drupal boards/lists

 Really, honestly, you're controlling it quite well.  Passion is fine.
 As long as the lists stay civil, passion has its place.

  (I might be accused of flaming and I don't
  want to paint the pgSQL community in a negative light), but I think that
  somebody should let the drupal people know that we're still here and
  we'd like to use the new drupal, just not on mySQL.
 
  Oh, and a collective middle finger to anybody that says the pg community
  is too small to bother with.

 I agree.

 What gets me is the tortured logic I read in the post by nk on the
 drupal board.  Two examples:

 1: "With MySQL 5.0 and 5.1, there's no need for pgsql"
 This statement shows that he knows nothing of the differences of the
 two database engines of which he speaks.  And when you don't know
 anything about a subject, it's best to ask someone who does.

 2: "There's only 5% of drupal users that use pgsql, therefore they
 aren't important."
 -- The fact that PostgreSQL isn't fully supported (i.e. some modules
 don't work) and it STILL has a 5% user base in Drupal is actually a
 testament to the pgsql userbase.  They're willing to climb uphill to
 get drupal working on their chosen platform.  If drupal properly
 support pgsql, it might well be a much higher percentage that chose to
 run on top of pgsql.
 -- Which users are those 5%?  Maybe they're the sites that really show
 off drupal to the public, maybe they're internal sites for very large
 corporates, or maybe they're sites that just need to make sure the
 accounting is done right.

 I just read Ivan's post, and I agree, it sounds like people who
 learned bad habits on mysql and are now whinging about their mysql
 inspired sql not working on other platforms.

 ---(end of broadcast)---
 TIP 2: Don't 'kill -9' the postmaster



Re: [GENERAL] advocacy: drupal and PostgreSQL

2008-01-17 Thread Ivan Sergio Borgonovo
On Thu, 17 Jan 2008 15:52:37 -0500
Alex Turner [EMAIL PROTECTED] wrote:

 I evaluated Drupal with PostgreSQL, but it wasn't powerful enough,
 and it's written in PHP which is buggy, and lots of modules force
 you to use MySQL which is not ACID (I'm sorry but inserting
 31-Feb-2008 and not throwing an error by default makes you non-ACID
 in my book).  PostgreSQL support was spotty at best, and it sounds
 like one would have received precious little help from the Drupal
 community.
 
 I plumped for Plone SQLAlchemy and Postgresql instead.

It could be interesting.

Plone does look more enterprise oriented and python is a definite
plus once you're not on hosting.
Other choices could be some form of RAD. I'd prefer the pythonic RAD.
Up to my memory some works on top of SQLAlchemy...

But still Drupal find itself in an interesting market place that is
not the one of Joomla neither the one of Plone and I think that in
that market place it fits better with PostgreSQL rather than MySQL.

I'd be interested in your experience with SQLAlchemy and how it fits
with pg.
I'm not that sure that a full fledged ORM fits with Drupal since it
is something in between a CMS and a framework so more flexible than a
CMS but less than a framework like Django, so it would be better to
build up a DB AL around actual objects in drupal.
At least I'll try to find the time to read through SQLAlchemy to
learn.

OK... I'll stop hijacking the pg list with things that start to be just
tangential to postgres ;)

Many thanks to everybody who listened to the call.

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] advocacy: drupal and PostgreSQL

2008-01-17 Thread Kevin Hunter

At 4:11p -0500 on 17 Jan 2008, Bill Moran wrote:

The guy sets my jerk alarms ringing like a 5 alarm fire.  He doesn't
play well with others, he constantly starts fights, and he threatens
to take his ball and go home every time he loses.  I don't care how
much code he writes, I don't think he's worth the headache.



As I said, I get the impression that most of the Drupal developers get
this, and they have mentioned more than once that Drupal's design goal
is to be database-agnostic.  It just seems to be Karoly and a few people
here and there that he's able to incite into riot.



I'm upset with the community.  The other core members need to stand up
to Karoly and say, "Your opinions are not those of the community, and
we'll ban you from the lists if you continue to start this fight over
and over again."


I recently ran across this video about poisonous people in open source 
projects.  It's perhaps pertinent here.  It's about 55 minutes long.


http://video.google.com/videoplay?docid=-4216011961522818645

Kevin



Re: [GENERAL] Trouble with UTF-8 data

2008-01-17 Thread Tom Lane
Janine Sisk [EMAIL PROTECTED] writes:
 But I'm still getting this error when loading the data into the new  
 database:

 ERROR:  invalid byte sequence for encoding UTF8: 0xeda7a1

The reason PG doesn't like this sequence is that it corresponds to
a Unicode surrogate pair code point, which is not supposed to
ever appear in UTF-8 representation --- surrogate pairs are a kluge for
UTF-16 to deal with Unicode code points of more than 16 bits.  See

http://en.wikipedia.org/wiki/UTF-16

I think you need a version of iconv that knows how to fold surrogate
pairs into proper UTF-8 form.  It might also be that the data is
outright broken --- if this sequence isn't followed by another
surrogate-pair sequence then it isn't valid Unicode by anybody's
interpretation.

7.2.x unfortunately didn't check Unicode data carefully, and would
have let this data pass without comment ...

regards, tom lane
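As an illustration of the folding described above (an editorial sketch in Python 3, not code from the thread), surrogate-pair byte sequences can be rewritten into proper four-byte UTF-8 like this; lone surrogates are deliberately left untouched, since they are the "outright broken" case:

```python
import re

# A surrogate pair leaks into UTF-8 (CESU-8 style) as two 3-byte
# sequences: high surrogates (U+D800-DBFF) start with ED A0-AF,
# low surrogates (U+DC00-DFFF) start with ED B0-BF.
PAIR = re.compile(rb'\xed[\xa0-\xaf][\x80-\xbf]\xed[\xb0-\xbf][\x80-\xbf]')

def fold_surrogate_pairs(data: bytes) -> bytes:
    """Rewrite each surrogate-pair byte sequence as the proper 4-byte
    UTF-8 encoding of the code point the pair represents."""
    def repl(m):
        b = m.group(0)
        # Decode each 3-byte sequence back to its UTF-16 code unit.
        hi = 0xD800 + ((b[1] & 0x0F) << 6) + (b[2] & 0x3F)
        lo = 0xDC00 + ((b[4] & 0x0F) << 6) + (b[5] & 0x3F)
        # Combine the pair into the real code point and re-encode it.
        cp = 0x10000 + ((hi - 0xD800) << 10) + (lo - 0xDC00)
        return chr(cp).encode('utf-8')
    return PAIR.sub(repl, data)

# U+10400 as a surrogate pair (ED A0 81 ED B0 80) folds to F0 90 90 80.
print(fold_surrogate_pairs(b'\xed\xa0\x81\xed\xb0\x80'))
```

Note that the reported sequence 0xed 0xa7 0xa1 is a high surrogate with no low-surrogate partner following it, so it would not match this pattern; that data is invalid Unicode by anybody's interpretation, as noted above.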



[GENERAL] case dumbiness in return from functions

2008-01-17 Thread Ivan Sergio Borgonovo
After discovering that pg_get_serial_sequence behaves in a bit
strange way[1] when it deals to case sensitiveness... I just
discovered that you've the same behaviour for any function... at
least in PHP.

postgresql
Version: 8.1.11
php:
Version: 5.2.0

eg.

create or replace function testA(out pIpPo int) as
$$
begin
  pIpPo:=7;
  return;
end;
$$ language plpgsql;

...

$result=pg_query('select pIpPo from testA()');
$row=pg_fetch_array($result);
print(var_export($row));

array ( 'pippo' => '7', )

which means that
$ImSoSad = $row['pIpPo'];
returns null

And 2h went trying to understand where a session went lost :(

Fortunately it can be fixed at a small price with an alias... but
still I find it a pain.
When you've identifiers composed of more than two words, camel case can
make your code lines much shorter.

Please, please, please... fix this. Minor things like this can make
coding in Postgres MUCH more enjoyable.

[1] this is documented... is this awkward behaviour of
functions documented as well?

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it




Re: [GENERAL] Accessing composite type columns from C

2008-01-17 Thread Merlin Moncure
On Jan 17, 2008 5:48 AM, Reg Me Please [EMAIL PROTECTED] wrote:
 Hi all.
 Is there a way with the libpq to access subcolumns in a composite type
 column?
 The documentation (8.2) seems not to mention this case.
 Thanks.
 --


We have a proposal to do this for 8.4.  We will probably maintain this
outside of the project for the 8.3 cycle (and it could be adapted to
8.2 very easily).  See:

http://archives.postgresql.org/pgsql-hackers/2008-01/msg00257.php

merlin
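In the meantime, a common workaround is to expand the composite server-side (SELECT (col).subfield FROM t) or to parse the text representation that libpq hands back client-side. A rough sketch of the latter (an editorial illustration in Python, not from the thread; it handles double-quoted fields, doubled quotes and NULLs, but not backslash escapes):

```python
def parse_composite(text: str):
    """Parse PostgreSQL's text output for a composite value, e.g.
    '(1,"hello, world",)' -> ['1', 'hello, world', None].
    Sketch only: quoting and NULLs, no backslash escapes."""
    assert text[0] == '(' and text[-1] == ')'
    body, fields, i = text[1:-1], [], 0
    while True:
        if i < len(body) and body[i] == '"':
            buf, i = [], i + 1
            while True:
                if body[i] == '"':
                    if i + 1 < len(body) and body[i + 1] == '"':
                        buf.append('"')   # doubled quote -> literal "
                        i += 2
                    else:
                        i += 1            # closing quote
                        break
                else:
                    buf.append(body[i])
                    i += 1
            fields.append(''.join(buf))
        else:
            j = body.find(',', i)
            j = len(body) if j == -1 else j
            fields.append(body[i:j] or None)  # empty field = SQL NULL
            i = j
        if i >= len(body):
            break
        i += 1  # skip the comma separator
    return fields
```

The server-side expansion is usually the simpler route, since each subfield then arrives as an ordinary result column.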



Re: [GENERAL] case dumbiness in return from functions

2008-01-17 Thread Tom Lane
Ivan Sergio Borgonovo [EMAIL PROTECTED] writes:
 After discovering that pg_get_serial_sequence behaves in a bit
 strange way[1] when it deals to case sensitiveness

The SQL standard specifies that unquoted identifiers are case-insensitive.
You're welcome to spell them as camelCase in your source code if you
feel like it, but don't expect that PG, or any other SQL-compliant
database, will pay attention.

regards, tom lane
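The folding rule is easy to model. A toy sketch (an editorial illustration, not PostgreSQL's actual code) of how an unquoted vs. a double-quoted identifier is treated:

```python
def fold_identifier(ident: str) -> str:
    """PostgreSQL-style identifier folding: unquoted identifiers fold
    to lower case; double-quoted ones keep their spelling (with any
    doubled quotes collapsed). Toy model, not the server's code."""
    if len(ident) >= 2 and ident[0] == '"' and ident[-1] == '"':
        return ident[1:-1].replace('""', '"')
    return ident.lower()

# 'select pIpPo from testA()' really asks for column pippo;
# writing "pIpPo" (quoted) preserves the camel case.
```

Which is also why the alias workaround mentioned earlier in the thread works: SELECT pIpPo AS "pIpPo" FROM testA() makes the client library see the camel-cased name.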



[GENERAL] Trouble with UTF-8 data

2008-01-17 Thread Janine Sisk

Hi all,

I'm moving a database from PG 7.2.4 to 8.2.6.  I have already run  
iconv on the dump file like so:


iconv -c -f UTF-8 -t UTF-8 -o out.dmp in.dmp

But I'm still getting this error when loading the data into the new  
database:


ERROR:  invalid byte sequence for encoding UTF8: 0xeda7a1
HINT:  This error can also happen if the byte sequence does not match  
the encoding expected by the server, which is controlled by  
client_encoding.

CONTEXT:  COPY article, line 2

FWIW this is the second database I've moved this way and for the  
first one, iconv fixed all the byte sequence errors.  No such luck  
this time.


The 7.2.4 database has encoding UNICODE, and the 8.2.6 one is in UTF-8.

To make matters even more fun, the data is in Traditional Chinese  
characters, which I don't read, so there seems to be no way for me to  
identify the problem bits.  I've loaded the dump file into a hex  
editor and searched for the value that's reported as the problem but  
it's not in the file.


Is there anything I can do to fix this?

Thanks in advance,

janine
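One way to locate the offending bytes when a hex-editor search comes up empty (editors often search decoded text rather than raw bytes) is to scan the dump for the exact byte sequence from the error message. A quick sketch (editorial illustration, assuming Python 3.8+; the path argument is whatever your dump file is called):

```python
def find_bytes(path: str, needle: bytes = b'\xed\xa7\xa1'):
    """Return (byte_offset, line_number) pairs for every occurrence of
    `needle` in the file, searching raw bytes rather than decoded text."""
    data = open(path, 'rb').read()
    hits, start = [], 0
    while (pos := data.find(needle, start)) != -1:
        # Line number = newlines seen before this offset, plus one.
        hits.append((pos, data.count(b'\n', 0, pos) + 1))
        start = pos + 1
    return hits
```

The reported line number can then be matched against COPY's "line 2" in the error context to find the offending row.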




Re: [GENERAL] case dumbiness in return from functions

2008-01-17 Thread Ivan Sergio Borgonovo
On Thu, 17 Jan 2008 19:07:59 -0500
Tom Lane [EMAIL PROTECTED] wrote:

 Ivan Sergio Borgonovo [EMAIL PROTECTED] writes:
  After discovering that pg_get_serial_sequence behaves in a bit
  strange way[1] when it deals to case sensitiveness
 
 The SQL standard specifies that unquoted identifiers are
 case-insensitive. You're welcome to spell them as camelCase in your
 source code if you feel like it, but don't expect that PG, or any
 other SQL-compliant database, will pay attention.

OK... I got tricked by mixing, in the same code base (same file),
access to MS SQL and pg, and I confused a

select PiPpO from table in MS SQL

and a

select PiPpO from function() in pg

And I thought that select from a table behaved differently from select
from a function. Having met the admittedly strange behaviour of
pg_get_serial_sequence, which is not case-insensitive, I got confused.
All the other selects were in fact aliased.

I noticed that the argument of case insensitivity vs. case
preservation is recurrent and I'm not going to argue about it.
I'd prefer to have case preservation but I'd prefer to be able to fly
too. I bet there are good reasons for both even if hope is not going
to die.

Sorry, it was not meant to be disrespectful of the good work you're
doing.

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it




Re: [GENERAL] Help with pre-loaded arbitrary key sequences

2008-01-17 Thread James B. Byrne

On Thu, January 17, 2008 11:48, Scott Marlowe wrote:

 Got bored, hacked this egregious pl/pgsql routine up.  It looks
 horrible, but I wanted it to be able to use indexes.  Seems to work.
 The test table has ~750k rows in it and returns a new id in < 1ms
 on my little server.

 File attached.


Many thanks.

-- 
***  E-Mail is NOT a SECURE channel  ***
James B. Byrne                mailto:[EMAIL PROTECTED]
Harte & Lyne Limited          http://www.harte-lyne.ca
9 Brockley Drive  vox: +1 905 561 1241
Hamilton, Ontario fax: +1 905 561 0757
Canada  L8E 3C3




Re: [GENERAL] Forgot to dump old data before re-installing machine

2008-01-17 Thread Stefan Schwarzer
This looks like an endianness mismatch; did you already mention
what architecture you are on?



MacPro, Leopard


Did you just move from a PPC-based Mac to an Intel-based one?
If so, you're out of luck --- you need to go back to the PPC
to make a dump of those files.



No, I just re-installed my Intel Mac. First I just upgraded from  
Tiger to Leopard (without getting my database to run; but I didn't  
put much effort into it); and then I completely erased the disk and  
installed Leopard from scratch.


Hmmm. It can't be that I'm now standing there having lost my data,  
can it? Please, any faintest idea what I can try?


Thanks for hints!

Stef



[GENERAL] [OT] Slony Triggers pulling down performance?

2008-01-17 Thread Ow Mun Heng
Just wondering whether my 'perceived' feeling is right that, since
implementing Slony for master/slave replication of select tables, my
master database performance has been getting slower.

I'm constantly seeing a very high amount of IO wait. ~40-80 according to
vmstat 1

and according to atop. (hdb/hdc = raid1 mirror)
DSK | hdb | busy 83% | read1052 | write 50 | avio7 ms |
DSK | hdc | busy 81% | read1248 | write 49 | avio6 ms |




Re: [GENERAL] advocacy: drupal and PostgreSQL

2008-01-17 Thread Jean-Michel Pouré

On jeu, 2008-01-17 at 21:25 +0100, Ivan Sergio Borgonovo wrote:
 Joshua posted the link to Edison's project that can support pg, MS
 SQL, Oracle, DB2(?)... but well I had the feeling that Edison is a bit
 ostracised. While I wouldn't define his work a DB AL... well it works
 so kudos!
 Unfortunately 6.X should be out soon and his work won't be included
 and it looks that 7.X infrastructure will be even better.
 
 Maybe some people hope to get rich fast enough they won't have to
 work with any other DB other than MySQL...
 

I registered myself on Drupal devel mailing list and offered my services
to fix

