Re: [GENERAL] CPU load high

2007-08-23 Thread Patrick Lindeman
Hi Max,

To find out what is causing the high load you could also try 'atop',
which can be found at http://www.atcomputing.nl/atop. This tool shows more
(and more accurate) information than the regular 'top'.

There are also some kernel patches available which, when applied to your
kernel, expose even more information that might come in handy.

Good Luck,

-
Patrick Lindeman

> Hello.
>
> I have a web server with PHP 5.2 connected to a Postgres 8.0 backend.
> Most of the queries the users run are SELECTs (100-150 per second for
> 100 concurrent users), with 5-10 INSERTs/UPDATEs at the same time. There
> is also a daemon running in the background doing some work once every
> 100ms. The problem is that after the number of concurrent users rises to
> 100, the CPU becomes almost 100% loaded. How do I find out what's
> hogging the CPU?
>
> 'top' shows the daemon on top using 8% CPU, and some number of postgres
> processes each using 2% CPU, with some apache processes occasionally
> rising to 2% CPU as well. Often the writer process is at the top using
> 10% CPU.
>
> And the second question is that over time the daemon and writer
> processes use more and more shared memory - is that normal?
>
> Thanks in advance.
>
> ---(end of broadcast)---
> TIP 9: In versions below 8.0, the planner will ignore your desire to
>choose an index scan if your joining column's datatypes do not
>match
>


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] table column vs. out param [1:0]

2007-08-23 Thread Kristo Kaiv
I am trying to implement the new OUT parameters in functions and
stumbled upon a problem.
There is an internal requirement for our databases that every
function call always returns two params, status & status_text.
The problem is that plpgsql selects the OUT params themselves
into the OUT params, instead of the function call results that I need
there.
If this is the expected behavior of OUT params, it makes using them
a bit complicated when some table attributes happen to have the same
name as the OUT params. How can I overcome this situation? I can
understand function variables having precedence over column names, as
you can freely rename them, but OUT params are a different situation.


snippet from code
-[cut]--

out status int,      -- 200
out status_text text -- OK
) AS $$
BEGIN

-[cut]--
SELECT status, status_text
  FROM service._simple_add(
         i_key_user,
         i_key_service,
         i_action,
         i_subscr_len)
  INTO status, status_text;
-[cut]-

Kristo Kaiv
http://kaiv.wordpress.com (PostgreSQL blog)




Re: [GENERAL] CPU load high

2007-08-23 Thread Max Zorloff

On Thu, 23 Aug 2007 08:29:03 +0400, Tom Lane <[EMAIL PROTECTED]> wrote:


"Max Zorloff" <[EMAIL PROTECTED]> writes:

... The problem is that after the number of concurrent users rises to
100, CPU becomes almost 100% loaded. How do I find out what's hogging
the CPU?

'top' shows the daemon using 8% CPU on top, and some number of postgres
processes each using 2% CPU, with some apache processes occasionally
rising to 2% CPU also. Often the writer process is at the top using 10%
CPU.


IOW there's nothing particular hogging the CPU?  Maybe you need more
hardware than you've got, or maybe you could fix it by trying to
optimize your most common queries.  It doesn't sound like there'll be
any quick single-point fix though.


There's no one big process chugging everything, yes, but all these 2%
postgres processes look like they're having their hand in the overall
CPU consumption. I looked through every query and they all use indexes
and, what's more, return 1-20 rows at most. Yes, I think there won't be
any single fix, but I wanted to know: are there some tools or techniques
for finding where the problem lies?

I've looked into query time statistics - they all grow with CPU usage,
but it doesn't really mean anything - CPU usage grows, queries get
slower.

When one postgres process waits for a lock to release, does it use any CPU?
And also, when apache waits for a query to finish, does it use CPU?
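
On the lock question above: a backend blocked on a lock sleeps on a
semaphore and burns essentially no CPU, and the same goes for an apache
process waiting for a query result. As a sketch (against the pg_locks
system view in 8.x; the join is illustrative), blocked lock requests can
be listed from another session like this:

  -- show lock requests that are currently not granted,
  -- with the relation name resolved where there is one
  SELECT l.pid, l.mode, c.relname
    FROM pg_locks l
    LEFT JOIN pg_class c ON c.oid = l.relation
   WHERE NOT l.granted;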


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] table column vs. out param [1:0]

2007-08-23 Thread Kristo Kaiv


On 23.08.2007, at 8:51, Kristo Kaiv wrote:

I am trying to implement the new OUT parameters in functions and
stumbled upon a problem.
There is an internal requirement for our databases that every
function call always returns two params, status & status_text.
The problem is that plpgsql selects the OUT params themselves
into the OUT params, instead of the function call results that I need
there.
If this is the expected behavior of OUT params, it makes using them
a bit complicated when some table attributes happen to have the same
name as the OUT params. How can I overcome this situation? I can
understand function variables having precedence over column names, as
you can freely rename them, but OUT params are a different situation.


snippet from code
-[cut]--

out status int,      -- 200
out status_text text -- OK
) AS $$
BEGIN

-[cut]--
SELECT status, status_text
  FROM service._simple_add(
         i_key_user,
         i_key_service,
         i_action,
         i_subscr_len)
  INTO status, status_text;
-[cut]-


Using a table (function) alias seems to solve the problem.

Kristo Kaiv
http://kaiv.wordpress.com (PostgreSQL blog)
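
For reference, the alias workaround mentioned above can be sketched like
this (same function and parameter names as in the snippet; "r" is the
added alias that lets the columns be qualified unambiguously):

  SELECT r.status, r.status_text
    INTO status, status_text
    FROM service._simple_add(i_key_user,
                             i_key_service,
                             i_action,
                             i_subscr_len) AS r;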




[GENERAL] Local authentication/security

2007-08-23 Thread Lange Marcus
Hello,

I would like to be able to restrict the access to a database so that
only a specific program running on the same machine can access it, is
this possible ? So I would like to have some kind of secure
authentication(or something) between the database and the program, and
the user running the program should not be able to get access to the
database through any other way than this specific program.

Any help would be most valuable!
Regards Marcus
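
One common sketch (names illustrative, and not a complete answer): give
the program its own database role, allow that role only over the local
Unix-domain socket in pg_hba.conf, and reject everything else for that
database:

  # pg_hba.conf -- 'appdb' and 'approle' are hypothetical names
  local   appdb   approle                 md5
  local   appdb   all                     reject
  host    appdb   all      0.0.0.0/0      reject

Note this only keeps other roles out of the database; a user who can
read the program's embedded password can still connect as that role, so
real isolation also needs OS-level protection of the program and its
credentials (e.g. running it under a dedicated OS account).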


Re: [GENERAL] table column vs. out param [1:0]

2007-08-23 Thread Kristo Kaiv


On 23.08.2007, at 11:11, Kristo Kaiv wrote:



On 23.08.2007, at 8:51, Kristo Kaiv wrote:

I am trying to implement the new OUT parameters in functions and
stumbled upon a problem.
There is an internal requirement for our databases that every
function call always returns two params, status & status_text.
The problem is that plpgsql selects the OUT params themselves
into the OUT params, instead of the function call results that I need
there.
If this is the expected behavior of OUT params, it makes using them
a bit complicated when some table attributes happen to have the same
name as the OUT params. How can I overcome this situation? I can
understand function variables having precedence over column names, as
you can freely rename them, but OUT params are a different situation.


snippet from code
-[cut]--

out status int,      -- 200
out status_text text -- OK
) AS $$
BEGIN

-[cut]--
SELECT status, status_text
  FROM service._simple_add(
         i_key_user,
         i_key_service,
         i_action,
         i_subscr_len)
  INTO status, status_text;
-[cut]-


using a table (function) alias seems to solve the problem.

Then again, SELECT "status", "status_text" picks up the variables again -
why is that?

This kind of behaviour seems rather bizarre to me.

Kristo Kaiv
http://kaiv.wordpress.com (PostgreSQL blog)




Re: [GENERAL] Converting non-null unique idx to pkey

2007-08-23 Thread Alban Hertroys
Ed L. wrote:
> On Tuesday 21 August 2007 1:45 pm, Scott Marlowe wrote:
>> If you have a large db in 7.4.6, you should do two things.
>>
>> 1: Update to 7.4.19 or whatever the latest flavor of 7.4 is,
>> right now.  There are a few known data eating bugs in 7.4.6.
> 
> Sounds like good advice from a strictly technical viewpoint.  
> Unfortunately, in our particular real world, there are also 
> political, financial, and resource constraints and impacts from 
> downtime that at times outweigh the technical merits of 
> upgrading 'right now'.

Since you're setting up replication to another database, you might as
well try replicating to a newer release and swap them around once it's
done. I've seen that method of upgrading mentioned on this list a few times.

-- 
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] CPU load high

2007-08-23 Thread Hannes Dorbath

On 23.08.2007 11:04, Max Zorloff wrote:

When one postgres process waits for lock to release does it use any cpu?
And also, when apache waits for query to finish, does it use cpu?


No, but are you sure what you see is not i/o wait? What values does top
display in the %wa column in the CPU rows? What does iostat -dm 1 say
under load?



--
Regards,
Hannes Dorbath
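
For reference, the checks suggested above (Linux; iostat comes from the
sysstat package):

  top            # press '1'; the %wa field shows i/o wait per CPU
  iostat -dm 1   # per-device throughput in MB/s at one-second intervals
  vmstat 1       # the 'wa' column is CPU time stalled waiting for i/o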



Re: [GENERAL] PostgreSQL vs Firebird feature comparison finished

2007-08-23 Thread Dave Page
Tony Caduto wrote:
> Other than that I would say PG kicks butt.

You're just realising that? :-)

> If there is any interest I could also add MySQL 5.0 to the mix as the
> third column.

I'd be interested to see that.

Regards, Dave



Re: [GENERAL] reporting tools

2007-08-23 Thread Phoenix Kiula
On 23/08/07, Scott Marlowe <[EMAIL PROTECTED]> wrote:
>
> Yeah, I'm not the biggest fan of CR, but it's worked with PostgreSQL
> for quite some time now.  We had it hitting a pg7.2 db back in the
> day, when hip kids rode around in rag-top roadsters and wore t-shirts
> with cigarettes rolled in their sleeves.
>
> Also, look at Pentaho.  It's open source and pretty good.



Thanks. Pentaho looks good. But are there any alternatives that don't
require me to spend days installing the whole Java shebang?



Re: [GENERAL] Postgres, fsync and RAID controller with 100M of in ternal cache & dedicated battery

2007-08-23 Thread Franz . Rasper
Yes, 128 MB is pretty small.

Maybe the HP Smart Array P800 controller would be a better choice (if
you need an HP product).

BTW, how many hard disks are you using? Which RAID level? I am using
ext3 as the filesystem (but you have to use the newer Linux kernels).
Try to use a filesystem other than ext2.

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On behalf of Scott Marlowe
Sent: Thursday, 23 August 2007 01:49
To: [EMAIL PROTECTED]
Cc: Greg Smith; Postgres General
Subject: Re: [GENERAL] Postgres, fsync and RAID controller with 100M of
internal cache & dedicated battery


On 8/22/07, Dmitry Koterov <[EMAIL PROTECTED]> wrote:
> Also, the controller is configured to use 75% of its memory for write
> caching and 25% - for read caching. So reads cannot flood writes.

128 MB is a pretty small cache for a modern RAID controller.  I
wonder if this one is just a dog performer.

Have you looked at things like the Areca or Escalade cards with 1 GB or
more of cache on them?



Re: [GENERAL] PostgreSQL vs Firebird feature comparison finished

2007-08-23 Thread Dave Page
Tony Caduto wrote:
> Check it out here:
> 
> http://www.amsoftwaredesign.com/pg_vs_fb

A couple of corrections, Tony:

- You don't necessarily need to stop the postmaster to take a filesystem
backup -
http://www.postgresql.org/docs/8.2/interactive/continuous-archiving.html#BACKUP-BASE-BACKUP.
Obviously that assumes logs will be replayed during recovery.

- The native win32 port will run on FAT32, we just prevent the installer
from initdb'ing on such a partition. You can do it manually however, but
tablespaces won't work.

I'm a little puzzled about why you list multi-threaded architecture as a
feature - on Windows it's a little more efficient of course, but the
multi-process architecture is arguably far more robust, and certainly
used to be more portable (I'm not sure that's still the case for
platforms we actually care about).

Regards, Dave.
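
In outline, the online-backup procedure behind that link looks like this
(8.x; the label and the copy step are illustrative):

  SELECT pg_start_backup('nightly');
  -- copy the data directory with tar/rsync while the server keeps running
  SELECT pg_stop_backup();
  -- keep the WAL segments archived between the two calls; they are
  -- replayed during recovery, as noted above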

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] reporting tools

2007-08-23 Thread Ow Mun Heng
On Thu, 2007-08-23 at 16:42 +0800, Phoenix Kiula wrote:
> On 23/08/07, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> >
> > Yeah, I'm not the biggest fan of CR, but it's worked with PostgreSQL
> > for quite some time now.  We had it hitting a pg7.2 db back in the
> > day, when hip kids rode around in rag-top roadsters and wore t-shirts
> > with cigarettes rolled in their sleeves.
> >
> > Also, look at Pentaho.  It's open source and pretty good.
> 
> 
> 
> Thanks. Pentaho looks good. But are there any alternatives that don't
> require me to spend days installing the whole Java shebang?
> 

My 2 cents: I've tried to install and play with Pentaho more than a
couple of times and failed each time. There were just some Java errors
which I didn't comprehend, and this was with its all-in-one package.



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] CPU load high

2007-08-23 Thread Max Zorloff
On Thu, 23 Aug 2007 12:24:32 +0400, Hannes Dorbath  
<[EMAIL PROTECTED]> wrote:



On 23.08.2007 11:04, Max Zorloff wrote:

When one postgres process waits for a lock to release, does it use any CPU?
And also, when apache waits for a query to finish, does it use CPU?


No, but are you sure what you see is not i/o wait? What values does top
display in the %wa column in the CPU rows? What does iostat -dm 1 say
under load?





Well, vmstat 1 shows this with 64 users (the last column is the same wa):

procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff   cache   si   so    bi    bo   in    cs us sy id wa
 1  0  12336 289880 331224 347380404 8   2591 0 31 13 54  3
13  0  12336 288012 331224 3473872    0    0     0   288 1054  3237 59 17 24  0
 3  0  12336 284044 331224 3473872    0    0     0   480  908  3922 71 18 11  0
 4  0  12336 291500 331224 3473872    0    0     0   248  654  2913 63 13 23  0
 6  0  12336 297220 331224 3473940    0    0     0   240  678  3232 44 12 44  0
 6  0  12336 304312 331224 3473940    0    0     0  1708 1166  3303 50 17 17 16
 9  0  12336 304080 331224 3473940    0    0     0   480  779  4856 61 13 25  0
10  0  12336 309172 331224 3474008    0    0     0   304  697  3094 62 16 21  0
 2  0  12336 308180 331224 3474008    0    0     0   272  681  3370 56 12 32  0
 0  0  12336 307684 331224 3474076    0    0     0   112  689  3212 44 11 44  0
 0  1  12336 312280 331224 3474076    0    0     0  1472  863  3121 51 13 29  7
 7  0  12336 310544 331224 3474076    0    0     0   916 1023  3383 59 14 18  9
 3  0  12336 309428 331224 3474076    0    0     0   224  731  2974 55 14 30  0
 6  0  12336 306444 331224 3474144    0    0     0   392  796  3513 60 14 25  0




Re: [GENERAL] reporting tools

2007-08-23 Thread Thomas Kellerer

Phoenix Kiula wrote on 23.08.2007 10:42:

On 23/08/07, Scott Marlowe <[EMAIL PROTECTED]> wrote:

Yeah, I'm not the biggest fan of CR, but it's worked with PostgreSQL
for quite some time now.  We had it hitting a pg7.2 db back in the
day, when hip kids rode around in rag-top roadsters and wore t-shirts
with cigarettes rolled in their sleeves.

Also, look at Pentaho.  It's open source and pretty good.




Thanks. Pentaho looks good. But are there any alternatives that don't
require me to spend days installing the whole Java shebang?


If you don't need a server-based solution, you might want to look at iReport 
designer.


Although it is also Java based, it only needs a runtime environment on the
client (not sure whether that counts as the "whole shebang" for you as well):


http://www.jasperforge.org/sf/projects/ireport

Thomas






Re: [GENERAL] How to switch off Snowball stemmer for tsearch2?

2007-08-23 Thread Dmitry Koterov
> > Now
> >
> > select lexize('ru_ispell_cp1251', 'Дмитриев') -> "Дмитрий"
> > select lexize('ru_ispell_cp1251', 'Иванов') -> "Иван"
> > - it is completely wrong!
> >
> > I have a database with all Russian names, is it possible to use it
> > (how?) to
>
> if you have such a database why don't you just write a special
> dictionary and put it in front?


Of course, because this is a database of Russian NAMES, but NOT a
database of surnames.


> > make lexize() not convert "Ivanov" to "Ivan" even if the ispell
> > dictionary contains an element for "Ivan"? So, this pseudo-code logic
> > is needed:
> >
> > function new_lexize($string) {
> >   $stem = lexize('ru_ispell_cp1251', $string);
> >   if ($stem in names_database) return $string; else return $stem;
> > }
> >
> > Maybe tsearch2 implements this logic already?
>
> sure, it's how text search mapping works.


Could you please elaborate?

Of course I can create all word-forms of all Russian names using ispell
and then subtract this full list from the ispell dictionary (so I will
remove "Ivan", "Ivanami" etc. from it). But possibly tsearch2 has this
subtraction algorithm already.


> Dmitry, seems your company could be my client :)


Not now, thank you. Maybe later.


Re: [GENERAL] reporting tools

2007-08-23 Thread Geoffrey

Joshua D. Drake wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

John DeSoi wrote:

On Aug 22, 2007, at 7:21 PM, Geoffrey wrote:


We are looking for an open source reporting tool that will enable
users to generate their own reports.  Something like Crystal Reports. ;)

I was looking at a couple the other day: iReport (part of Jasper),
OpenRPT, and DataVision (http://datavision.sourceforge.net/). The
DataVision page has some links to other report writers. Hopefully you'll
do better than I did -- I also wanted something that works on OS X. All
of the above meet that criteria by using Java or GTK, but the user
interfaces are hard to take if you want a typical Mac application.


MS Access?


Gag, cough, choke

--
Until later, Geoffrey

Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety.
 - Benjamin Franklin



Re: [GENERAL] reporting tools

2007-08-23 Thread Geoffrey
Thanks for the various responses, I'll check them out and post my 
research results and our decision.


--
Until later, Geoffrey

Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety.
 - Benjamin Franklin



[GENERAL] FATAL: could not reattach to shared memory (Win32)

2007-08-23 Thread Terry Yapt

Hello all,

I am having problems with the following PostgreSQL installation:

pg version: 8.2.4
OS: Win32 (windows xp sp2)
FS: NTFS

It is a production server, but suddenly the DB stopped answering any SQL
command.  It seemed dead.  After restarting the server everything
started to work again.

I have looked for system errors and there is nothing there, but I have a
lot of application-level error messages.  The same error repeats every
ten seconds or so.


This is the main error:
* FATAL:  could not reattach to shared memory (key=5432001, 
addr=01D8): Invalid argument


It is always followed by this another system-app error:
* LOG:  unrecognized win32 error code: 487

I have found this on my intensive internet search:
http://archives.postgresql.org/pgsql-bugs/2007-01/msg00032.php

I need to solve this ASAP.  Does anybody have any idea about this?

Thanks.




Re: [GENERAL] pg_dump causes postgres crash

2007-08-23 Thread Jeff Amiel
--- Tom Lane <[EMAIL PROTECTED]> wrote:
> 
> I can't help thinking you are looking at generalized
> system
> instability.  Maybe someone knocked a few cables
> loose while
> installing new network hardware?

Database server/storage instability or network
instability?  

There is no doubt that there is something flaky about
the networking between the db server and the box(es)
trying to do the pg_dump.  We have indeed had issues
(timeouts, halts, etc.) moving large quantities of data
across various segments to and from these boxes... like
the db server... but how would this affect something
like a pg_dump?

Would a good stack trace (assuming I want to crash my
database again) help here?




   



Re: [GENERAL] How to switch off Snowball stemmer for tsearch2?

2007-08-23 Thread Oleg Bartunov

On Thu, 23 Aug 2007, Dmitry Koterov wrote:




Now

select lexize('ru_ispell_cp1251', 'Дмитриев') -> "Дмитрий"
select lexize('ru_ispell_cp1251', 'Иванов') -> "Иван"
- it is completely wrong!

I have a database with all Russian names, is it possible to use it
(how?) to

if you have such database why just don't write special dictionary and
put it in front ?



Of course because this is a database of Russian NAMES, but NOT a database of
surnames.



make lexize() not convert "Ivanov" to "Ivan" even if the ispell
dictionary contains an element for "Ivan"? So, this pseudo-code logic is
needed:

function new_lexize($string) {
 $stem = lexize('ru_ispell_cp1251', $string);
 if ($stem in names_database) return $string; else return $stem;
}

Maybe tsearch2 implements this logic already?


write your own dictionary, which implements any logic you need. In your
case it's just a wrapper around ispell, which will return the original
string, not the stem. See the example at
http://www.sai.msu.su/~megera/postgres/fts/doc/fts-intdict-xmp.html
and the Russian article at
http://www.sai.msu.su/~megera/postgres/talks/fts_pgsql_intro.html#ftsdict




sure, it's how text search mapping works.



Could you please detalize?


you create a dictionary surnames_dict and configure
pg_ts_cfgmap to process tokens of type nlword with
surnames_dict, ru_ispell, ru_stem, for example.




Of course I can create all word-forms of all Russian names using ispell
and then subtract this full list from the ispell dictionary (so I will
remove "Ivan", "Ivanami" etc. from it). But possibly tsearch2 has this
subtraction algorithm already.



don't do that!  Just go the plain way.

Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83
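
A sketch of that mapping (tsearch2's pg_ts_cfgmap catalog; the
configuration name 'default_russian' and the dictionary name
'surnames_dict' are assumptions for illustration):

  UPDATE pg_ts_cfgmap
     SET dict_name = '{surnames_dict,ru_ispell_cp1251,ru_stem}'
   WHERE ts_name   = 'default_russian'
     AND tok_alias = 'nlword';

With surnames_dict first in the chain, a token it recognizes never
reaches the ispell stemmer.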


Re: [GENERAL] reporting tools

2007-08-23 Thread Andrew Kelly
On Thu, 2007-08-23 at 06:20 -0400, Geoffrey wrote:
> Joshua D. Drake wrote:
> > -BEGIN PGP SIGNED MESSAGE-
> > Hash: SHA1
> > 
> > John DeSoi wrote:
> >> On Aug 22, 2007, at 7:21 PM, Geoffrey wrote:
> >>
> >>> We are looking for an open source reporting tool that will enable
> >>> users to generate their own reports.  Something like Crystal Reports. ;)
> >> I was looking at a couple the other day: iReport (part of Jasper),
> >> OpenRPT, and DataVision (http://datavision.sourceforge.net/). The
> >> DataVision page has some links to other report writers. Hopefully you'll
> >> do better than I did -- I also wanted something that works on OS X. All
> >> of the above meet that criteria by using Java or GTK, but the user
> >> interfaces are hard to take if you want a typical Mac application.
> > 
> > MS Access?
> 
> Gag, cough, choke

Ah, no, sorry.
Gag and cough only run on an Amiga, and choke went EOL with MSDOS 6.2

;-)



Re: [GENERAL] Converting non-null unique idx to pkey

2007-08-23 Thread Kristo Kaiv


On 23.08.2007, at 11:23, Alban Hertroys wrote:


Ed L. wrote:

On Tuesday 21 August 2007 1:45 pm, Scott Marlowe wrote:

If you have a large db in 7.4.6, you should do two things.

1: Update to 7.4.19 or whatever the latest flavor of 7.4 is,
right now.  There are a few known data eating bugs in 7.4.6.


Sounds like good advice from a strictly technical viewpoint.
Unfortunately, in our particular real world, there are also
political, financial, and resource constraints and impacts from
downtime that at times outweigh the technical merits of
upgrading 'right now'.


Since you're setting up replication to another database, you might as
well try replicating to a newer release and swap them around once it's
done. I've seen that method of upgrading mentioned on this list a  
few times.
Don't try this. Believe me, you don't want to do it. We had our fun
with this 1.5 years ago.


Kristo Kaiv
http://kaiv.wordpress.com (PostgreSQL blog)




Re: [GENERAL] %TYPE

2007-08-23 Thread Richard Huxton

Ged wrote:

Thanks for those comments.

Hmm, I did try it out before posting of course, and I've just tried it
again to make sure I hadn't boobed with a typo. It seems my ISP is
running 8.0.8 and it's definitely not working on that. It *is* in the
8.0.13 documentation also though... So now I'm off to beg them to
upgrade.


Hmm - it should work in any 8.0.x; the development team doesn't add new
features in point releases. I'm not sure this feature wasn't there in
7.4 too.


Might be a bug affecting you though - could be worth checking the 
release-notes in the back of the manual.


--
  Richard Huxton
  Archonet Ltd
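
For reference, %TYPE ties a declaration to a column's type; a minimal
sketch (the table and all names here are illustrative):

  CREATE TABLE users (id integer, name text);

  CREATE FUNCTION get_name(p_id users.id%TYPE)
  RETURNS users.name%TYPE AS $$
  DECLARE
      v_name users.name%TYPE;
  BEGIN
      SELECT name INTO v_name FROM users WHERE id = p_id;
      RETURN v_name;
  END;
  $$ LANGUAGE plpgsql;

If the column's type is later changed, the function picks up the new
type without the declarations needing to be edited.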



Re: [GENERAL] How to switch off Snowball stemmer for tsearch2?

2007-08-23 Thread Dmitry Koterov
>
> write your own dictionary, which implements any logic you need. In your
> case it's just a wrapper around ispell, which will returns original string
> not stem. See example
> http://www.sai.msu.su/~megera/postgres/fts/doc/fts-intdict-xmp.html
> and russian article
> http://www.sai.msu.su/~megera/postgres/talks/fts_pgsql_intro.html#ftsdict

Ah, I understand you!
You suggest writing a small Postgres contrib module (a new dictionary)
in C and implementing all the logic in it.
That seems a bit complex a solution for such a simple task (excluding
surnames from lexization), but it could be implemented, of course.


> > Of course I can create all word-forms of all Russian names using
> > ispell and then subtract this full list from the ispell dictionary
> > (so I will remove "Ivan", "Ivanami" etc. from it). But possibly
> > tsearch2 has this subtraction algorithm already.
>
> don't do that!  Just go the plain way.
>

Another method is to generate a synonym dictionary based on all
word-forms of Russian names using ispell (we will get all suspicious
surnames in this set) and add it before ispell. This solution does not
require writing anything in C.
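
That synonym dictionary would boil down to a plain-text file of
"word substitute" pairs, one per line, mapping each generated surname
form to itself so that later dictionaries never see it (the entries
below are purely illustrative):

  ivanov    ivanov
  ivanova   ivanova
  dmitriev  dmitriev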


Re: [GENERAL] Converting non-null unique idx to pkey

2007-08-23 Thread Michael Glaesemann


On Aug 23, 2007, at 7:44 , Kristo Kaiv wrote:



On 23.08.2007, at 11:23, Alban Hertroys wrote:


Since you're setting up replication to another database, you might as
well try replicating to a newer release and swap them around once  
it's
done. I've seen that method of upgrading mentioned on this list a  
few times.
Don't try this. Belive me you don't want to do it. We have had our  
fun with this 1.5 y ago


Care to share? Were you using Slony? AIUI, one of the motivations for  
Slony was to be able to do exactly that, so I'm sure there's interest  
in what didn't go as expected.


Michael Glaesemann
grzm seespotcode net





Re: [GENERAL] reporting tools

2007-08-23 Thread Reid Thompson

On Wed, 2007-08-22 at 18:57 -0400, Geoffrey wrote:
> We are looking for a reporting tool that will enable users to generate 
> their own reports.  Something like Crystal Reports.
> 
> Anyone using something like this with Postgresql?
> 

agata, datavision, jasper reports, birt, openRPT -- google shows
numerous results



[GENERAL] Undetected corruption of table files

2007-08-23 Thread Albe Laurenz
I am slightly worried that corruption of data files may
remain undetected in PostgreSQL.

As an experiment, I created a simple table with a primary key
index and inserted a couple of rows. The corresponding data file
is 1 page = 8K long.

Now when I stop the server, zero out the data file with
dd if=/dev/zero of=45810 bs=8192 count=1
and start the server again, the table is empty when I SELECT
from it and no errors are reported.

Only a VACUUM gives me the
WARNING:  relation "test" page 0 is uninitialized --- fixing
and the file is truncated to length zero.

The next thing I tried is to randomly scribble into the 8K data
file with a hex editor at different locations.

Some of these actions provoked error messages ranging from
ERROR:  invalid page header in block 0 of relation "test"
over
ERROR:  could not access status of transaction 1954047348
to
LOG:  server process (PID 28149) was terminated by signal 11

Frequently, though, the result was that some of the rows were
"missing", i.e. there was no error message when I SELECTed
from the table, but some of the rows were gone.

I got no errors or warnings from VACUUM either.


As far as I know there is no tool to verify the integrity of
a PostgreSQL table.

- Shouldn't there be an error, some kind of 'missing magic
  number' or similar, when a table file consists of only
  zeros?

- Wouldn't it be desirable to have some means to verify the
  integrity of a table file or a whole database?

Yours,
Laurenz Albe
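
One partial check that exists today is contrib/pgstattuple, which reads
every page of a table and errors out on pages it cannot parse (assuming
the module is installed; note it will not catch rows silently lost to
plausible-looking corruption, as described above):

  SELECT table_len, tuple_count, dead_tuple_count
    FROM pgstattuple('test');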



Re: [GENERAL] reporting tools

2007-08-23 Thread Ned Lilly

On 8/23/2007 5:16 AM Thomas Kellerer wrote:

Phoenix Kiula wrote on 23.08.2007 10:42:

On 23/08/07, Scott Marlowe <[EMAIL PROTECTED]> wrote:

Yeah, I'm not the biggest fan of CR, but it's worked with PostgreSQL
for quite some time now.  We had it hitting a pg7.2 db back in the
day, when hip kids rode around in rag-top roadsters and wore t-shirts
with cigarettes rolled in their sleeves.

Also, look at Pentaho.  It's open source and pretty good.




Thanks. Pentaho looks good. But are there any alternatives that don't
require me to spend days installing the whole Java shebang?


If you don't need a server-based solution, you might want to look at 
iReport designer.


Although it is also Java based it only needs a runtime environment on 
the client (not sure if that qualifies for "whole shebang" for you as 
well):


This is specifically why we released OpenRPT as open source - it's very 
lightweight, no Java required.  http://sf.net/projects/openrpt

Cheers,
Ned


--
Ned Lilly
President and CEO
xTuple
119 West York Street
Norfolk, VA 23510
tel. 757.461.3022 x101
email: [EMAIL PROTECTED]
www.xtuple.com

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] reporting tools

2007-08-23 Thread Laurent ROCHE
Hi,

For my project (a J2EE application), I have been using Jasper and iReport.
That works quite well, although so far we have not done anything really complicated.

I am hoping that the "power" end users will be able to write the reports
themselves - "power" users because, although iReport is a WYSIWYG tool, it is not
so easy that everybody can use it.
For that, we will need to provide them with a data source so they can play
around and see results while designing the reports.

In the J2EE code, Jasper will use a Java list (populated from a PG 8.x
database) as the data source.
iReport can easily use a CSV file (exporting the Java list to CSV is easy too),
and that's convenient for the user. However, master-detail reports
(i.e. a report and sub-reports), such as invoices (master table) and
invoices_lines (detail table), become more complicated.
So far, we have not found an easy solution (we have not searched much, though).

Here is my experience in a few words: it works quite well, setting up and
getting it running takes a couple of days, and it is not so easy for end users.
Once set up and learned, it gives great results within a few minutes.

 
Have fun,
[EMAIL PROTECTED]
The Computing Froggy

----- Original message -----
From: Thomas Kellerer <[EMAIL PROTECTED]>
To: pgsql-general@postgresql.org
Sent: Thursday, 23 August 2007, 11:16:46
Subject: Re: [GENERAL] reporting tools

Phoenix Kiula wrote on 23.08.2007 10:42:
> On 23/08/07, Scott Marlowe <[EMAIL PROTECTED]> wrote:
>> Yeah, I'm not the biggest fan of CR, but it's worked with PostgreSQL
>> for quite some time now.  We had it hitting a pg7.2 db back in the
>> day, when hip kids rode around in rag-top roadsters and wore t-shirts
>> with cigarettes rolled in their sleeves.
>>
>> Also, look at Pentaho.  It's open source and pretty good.
> 
> 
> 
> Thanks. Pentaho looks good. But are there any alternatives that don't
> require me to spend days installing the whole Java shebang?

If you don't need a server-based solution, you might want to look at iReport 
designer.

Although it is also Java based it only needs a runtime environment on the 
client 
(not sure if that qualifies for "whole shebang" for you as well):

http://www.jasperforge.org/sf/projects/ireport

Thomas




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings





  

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Problem with UPDATE and UNIQUE

2007-08-23 Thread Frank Millman
Michael Glaesemann wrote:
> 
> On Aug 22, 2007, at 1:02 , Frank Millman wrote:
> 
> > I want to store data in a 'tree' form, with a fixed number 
> of levels, 
> > so that each level has a defined role.
> 

Thanks very much for the in-depth response, Michael. Plenty for the little
grey cells to work on.

> First thought: fixed, predetermined levels, separate tables 
> for each level. If a more general approach is desired, your 
> options are generally adjacency list, nested sets, or 
> contrib/ltree. Each has their own strengths and weaknesses.
> 

I am writing a general-purpose business/accounting application. If
successful, I hope to have a number of different companies using it. I want
to provide the ability for the end user to define their own,
multi-dimensional views of various core tables (general ledger, products,
etc.). I foresee that it will only be used for reporting purposes
(particularly WHERE, ORDER BY and GROUP BY). Therefore I do need a general
approach.

> > I have the following (simplified) table -
> >
> > CREATE TABLE treedata (
> >   rowid serial primary key,
> >   levelno int not null,
> >   parentid int references treedata,
> >   seq int not null,
> >   code varchar not null,
> >   description varchar not null
> >   );
> 
> rowid + parentid looks like adjacency list to me. Note that 
> you're storing redundant data (the levelno, which can be 
> derived from the rowid/parentid relationships), which you may 
> want to do for performance reasons, but does make things more 
> complicated: you're essentially caching data which brings 
> with it problems of cache invalidation. In this case, you 
> need to make sure you're updating levelno whenever it needs 
> to be updated. (Which I'm sure you've already thought of.)
> 

I read up on 'adjacency list' and 'nested sets', and I agree: the scheme I
have come up with is an adjacency list. It had not occurred to me that levelno
is redundant, but I can see that this is so. I will have to check whether there
are any implications if I remove it.

> > To describe each of the levels in the tree, I have the 
> following table 
> > -
> >
> > CREATE TABLE treelevels (
> >   levelno int primary key,
> >   code varchar unique not null,
> >   description varchar not null
> >   );
> 
> Having each level as its own table would make this redundant, 
> but again, that might not fit with what you're modeling.
> 
> > Typical values for this table could be -
> >   (0,'Prod','Product code')
> >   (1,'Cat','Product category')
> >   (2,'*','All products')
> 
> This makes me think you'll want to rethink your schema a bit, 
> as you're mixing different types of data: categories and 
> products. I'd at least separate this out into a products 
> table and a categories table. The categories table may in 
> fact still require some kind of tree structure, but I don't 
> think products belongs as part of it.
> 

Very good point. I will give this some serious thought.

[...]

> >
> > Say I want to insert a level between 'code' and 'category' called 
> > 'group' -
> >
> > INSERT INTO treelevels VALUES (1,'Group','Product group');
> 
> It's a good habit to *always* explicitly list your columns: 
> it's self- documenting and more robust in the face of schema changes.
> 
> > Obviously this will fail with a duplicate levelno. Therefore before 
> > the insert statement I want to do this -
> >
> > UPDATE treelevels SET levelno = (levelno+1) WHERE levelno >= 1;
> >
> > The problem is that if there are a number of levels, and 
> they are in 
> > indeterminate order, I can get duplicate level numbers while the 
> > command is being executed.
> >
> > My workaround at present is the following -
> >
> > UPDATE treelevels SET levelno = (levelno+10001) WHERE levelno >= 1; 
> > UPDATE treelevels SET levelno = (levelno-1) WHERE levelno >= 1;
> 
> This is a general problem with nested sets and your situation 
> where you're caching the levelno, and your workaround is 
> similar to the two generally recommended solutions. One is to 
> make updates using an offset such as what you're doing, and 
> the other is to utilize negative levels. I'm keen on the 
> latter, as I feel it's a bit more
> flexible: you don't need to make sure your offset is large enough.  

I also like the idea of 'negating' the level. It is neat and effective.
Thanks for the tip, I will use it.

One trivial point. I use 'negating' quite a bit, and instead of -
SET levelno = -1 * (levelno + 1)

I have adopted the habit of using -
SET levelno = -(levelno + 1)

It just feels a bit neater.
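As a sketch of the negation approach discussed here (table and column names as
in the thread; the exact WHERE conditions are illustrative, not tested against
a real schema):

```sql
-- Step 1: move the affected levels out of the way by negating them,
-- so no intermediate state violates a UNIQUE constraint on levelno.
UPDATE treelevels SET levelno = -(levelno + 1) WHERE levelno >= 1;

-- Step 2: flip them back to their new, shifted positive values.
UPDATE treelevels SET levelno = -levelno WHERE levelno < 0;

-- Step 3: level 1 is now free for the new entry.
INSERT INTO treelevels (levelno, code, description)
VALUES (1, 'Group', 'Product group');
```

Unlike the fixed-offset trick, this never requires guessing an offset larger
than the number of existing levels.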

[...]

> 
> Anyway, hope this gives you something to think about.
> 

It certainly does. Thanks again for all the valuable advice.

Frank


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Local authentication/security

2007-08-23 Thread Richard Huxton

Lange Marcus wrote:

Hi,

I guess the answer to the 2 questions would be: yes, the user will
probably have physical access to the machine, but will not have
superuser access. The OS is, at least for now, Windows.

I have been looking through and searching the manuals for different
methods, but I still haven't figured out how, or whether, it is possible. To
be more specific about what I really want: I have an application that
will insert some data into a database, and while this data is in the
database I don't want it to be possible to copy it or in any other way
get access to it, except through the application that inserted it.
It would be acceptable, and maybe even preferable, if the database were
deleted when the program exits (so that it is only stored in
memory while in use). But if there is a way to build a database that is
protected when stored on disk, that would also be acceptable.


If the user has physical access to the machine then there's nothing you 
can do to stop someone who is (a) determined and (b) knowledgeable.


If you want to stop casual access though:
1. Make sure PostgreSQL + its files aren't accessible to normal users.
2. Make sure application will only run as user X
3. Set up a pgpass.conf file only accessible by user X
4. Lock down BIOS etc. to prevent someone booting from a CD-ROM or USB 
stick.


http://www.postgresql.org/docs/8.2/static/libpq-pgpass.html

That should cope with someone who doesn't know what they're doing. If 
you're worried about them removing the hard-disk then you'll need to set 
up an encrypted filesystem and figure out a way to get a password 
entered on reboot.
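For reference on step 3, a pgpass.conf entry is one line per connection in the
form host:port:database:username:password (the values below are placeholders;
on Windows the file typically lives under user X's application data directory,
which keeps it out of other users' reach when file permissions are set up
correctly):

```
localhost:5432:appdb:appuser:s3cret
```

Any field may be `*` to match anything, per the libpq documentation linked
below.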


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Problem with UPDATE and UNIQUE

2007-08-23 Thread Michael Glaesemann


On Aug 23, 2007, at 8:59 , Frank Millman wrote:


It certainly does. Thanks again for all the valuable advice.


Glad you found it helpful. Good luck!

Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] reporting tools

2007-08-23 Thread Thomas Kellerer

Ned Lilly wrote on 23.08.2007 15:44:
This is specifically why we released OpenRPT as open source - it's very 
lightweight, no Java required.  http://sf.net/projects/openrpt


I am a Java developer and thus I have no problems using Java-based tools. 
Especially because I usually only have a JDBC driver around for the databases I 
use (especially with Oracle this is *very* nice, because it does not require 
a full client install, only a single .jar file)


But OpenRPT looks quite nice, I'll have a look at it as well. I guess I need to 
install the whole ODBC "shebang" for that, right :)



Thomas


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] FATAL: could not reattach to shared memory (Win32)

2007-08-23 Thread Alvaro Herrera
Terry Yapt wrote:

> I am looking for system errors and nothing is there.  But I have a lot of 
> messages on system APP errors.  The error is the same every ten seconds or 
> so.
>
> This is the main error:
> * FATAL:  could not reattach to shared memory (key=5432001, addr=01D8): 
> Invalid argument

Please run "ipcs" on a command line window and paste the results.

I see a minor problem in that code: we are invoking two system calls
(shmget and shmat) but the log does not say which one failed.  However
in this case it seems only shmget could be returning EINVAL.

-- 
Alvaro Herrera                         http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] pg_dump causes postgres crash

2007-08-23 Thread Tom Lane
Jeff Amiel <[EMAIL PROTECTED]> writes:
> Would a good stack trace (assuming I want to crash my
> database again) help here?

Well, it'd be more information than we have now ...

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] reporting tools

2007-08-23 Thread Michael Schmidt
I looked at BIRT - it is part of Eclipse.  It is pretty new and I found the 
documentation to be pretty limited.  Also, it has few output options.  These 
factors caused me to decide against it.  Settled on JasperReports and like it a 
lot.  It integrated easily with my Java (Eclipse RCP) GUI.  Creating reports 
isn't any harder than Crystal Reports.  In my GUI, I use an xml file for a 
report list, so I can add, edit, and delete reports easily without altering 
program code.

Michael Schmidt
  - Original Message - 
  From: Reid Thompson 
  To: Geoffrey 
  Cc: pgsql-general@postgresql.org 
  Sent: Thursday, August 23, 2007 7:35 AM
  Subject: Re: [GENERAL] reporting tools



  On Wed, 2007-08-22 at 18:57 -0400, Geoffrey wrote:
  > We are looking for a reporting tool that will enable users to generate 
  > their own reports.  Something like Crystal Reports.
  > 
  > Anyone using something like this with Postgresql?
  > 

  agata, datavision, jasper reports, birt, openRPT -- google shows
  numerous results

  ---(end of broadcast)---
  TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] PostgreSQL vs Firebird feature comparison finished

2007-08-23 Thread Erik Jones


On Aug 23, 2007, at 12:00 AM, Tony Caduto wrote:


Check it out here:

http://www.amsoftwaredesign.com/pg_vs_fb


When comparing in the grid, the only major advantage FB has is  
probably BLOB support.
PG only supports 1 GB while FB supports 32 GB.  Bytea is pretty  
slow as well when compared to FB's BLOB support.


Actually, Postgres's large object facility allows storage of binary  
data up to 2GB in size.  http://www.postgresql.org/docs/8.2/ 
interactive/largeobjects.html


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] Error Installing postgres 8.2.4 on Windows Server 2003 64bit

2007-08-23 Thread Peck, Brian
Hey all,

 

When I try and install form the msi installer for version 8.2.4 I get
the following error.

 

Failed to run initdb: 128!

 

And when it tells me to see the logfile for details, there is no log
file in the tmp directory. 

 

Anyone know why it's doing this?

 

It crashes during the initdb phase of the installer.

 

- Brian Peck

- 858-795-1398



Re: [GENERAL] PostgreSQL vs Firebird feature comparison finished

2007-08-23 Thread Tony Caduto

Dave Page wrote:

Couple of corrections Tony:

- You don't necessarily need to stop the postmaster to take a filesystem
backup -
http://www.postgresql.org/docs/8.2/interactive/continuous-archiving.html#BACKUP-BASE-BACKUP.
Obviously that assumes logs will be replayed during recovery.

- The native win32 port will run on FAT32, we just prevent the installer
from initdb'ing on such a partition. You can do it manually however, but
tablespaces won't work.

I'm a little puzzled about why you list multi-threaded architecture as a
feature - on Windows it's a little more efficient of course, but the
multi-process architecture is arguably far more robust, and certainly
used to be more portable (I'm not sure that's still the case for
platforms we actually care about).

Regards, Dave.


  


Thanks  Dave.
Will update ASAP.

I agree with you on the multi-threaded point.  I think I will add a note 
saying that the multi-threaded architecture is only advantageous on Windows.
I have seen instances where the threaded version of Firebird completely 
craps out because one of the threads has issues.


Will also make a note that it can run on FAT32 with some limitations.

Later,

Tony



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] PostgreSQL vs Firebird feature comparison finished

2007-08-23 Thread Tony Caduto

Dave Page wrote:

Tony Caduto wrote:
  

Other than that I would say PG kicks butt.



You're just realising that? :-)

  


Ah, I knew that around 2004 :-)  I just have to convince Delphi users of 
that :-)



Later,

Tony

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] Argument type list

2007-08-23 Thread Gustavo Tonini
I want to create a function that receives a list argument and filters
data with the IN operator. Example:

CREATE OR REPLACE FUNCTION "public"."ffoo" (list ???) RETURNS VOID AS
$body$
BEGIN
  select * from foo where foo_column in list;
END;
$body$
LANGUAGE 'plpgsql' ;

I played with arrays but had no success...
Is this possible? How do I proceed?

Thanks,
Gustavo.

PS: Please C.C. to me, I'm not subscribed in list.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] PostgreSQL vs Firebird feature comparison finished

2007-08-23 Thread Joshua D. Drake
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Tony Caduto wrote:
> Dave Page wrote:
>> Tony Caduto wrote:
>>  
>>> Other than that I would say PG kicks butt.
>>> 
>>
>> You're just realising that? :-)
>>
>>   
> 
> Ah, I knew that around 2004 :-)  I just have to convince Delphi users of
> that :-)

My understanding is that Firebird is relatively non-configurable, though,
isn't it? For a large-scale client-server app there is no question that
PG is going to wipe the universe with Firebird, but I would think that
Firebird may be better suited for embedded shipping, that kind of thing.

Sincerely,

Joshua D. Drake

> 
> 
> Later,
> 
> Tony
> 
> ---(end of broadcast)---
> TIP 4: Have you searched our list archives?
> 
>   http://archives.postgresql.org/
> 


- --

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997  http://www.commandprompt.com/
UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFGzb0ZATb/zqfZUUQRAttRAJ4mamXurjzMDH9kqD3cWt9EC6RT7wCfRpkE
efUsuyz2f1GQKSs4dfgzr+A=
=JHrY
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] reporting tools

2007-08-23 Thread Ned Lilly



On 8/23/2007 10:07 AM Thomas Kellerer wrote:

Ned Lilly wrote on 23.08.2007 15:44:
This is specifically why we released OpenRPT as open source - it's 
very lightweight, no Java required.  http://sf.net/projects/openrpt


I am a Java developer and thus I have no problems in using Java based 
tools. Especially because I usually only have a JDBC driver for the 
databases I use around (especially with Oracle this is *very* nice, 
because it does not require a full client install, only a single .jar file)


But OpenRPT looks quite nice, I'll have a look at it as well. I guess I 
need to install the whole ODBC "shebang" for that, right :)


Heh.  Actually, no, there's a native Postgres connection as well.  And you can 
compile it with any other native db driver provided by Qt (but why on earth 
would anyone want to use any other db ;-)

Cheers,
Ned


--
Ned Lilly
President and CEO
xTuple
119 West York Street
Norfolk, VA 23510
tel. 757.461.3022 x101
email: [EMAIL PROTECTED]
www.xtuple.com

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Undetected corruption of table files

2007-08-23 Thread Tom Lane
"Albe Laurenz" <[EMAIL PROTECTED]> writes:
> - Shouldn't there be an error, some kind of 'missing magic
>   number' or similar, when a table file consists of only
>   zeros?

The particular case of an all-zeroes page is specifically allowed,
and has to be because it's a valid transient state in various
scenarios.

> - Wouldn't it be desirable to have some means to verify the
>   integrity of a table file or a whole database?

SELECT * usually does reasonably well at that.

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Apache + PHP + Postgres Interaction

2007-08-23 Thread Max Zorloff

Hello.

I have a subject setup and a few questions.

The first one is this. PHP establishes a connection to the Postgres
database through pg_pconnect(). Then it runs some query, then the script
returns, leaving the persistent connection hanging. But the trouble is that
in this case any query takes significantly more time to execute than in the
case of one PHP script running the same query with different parameters N
times. How can I achieve the same performance in the first case? Persistent
connections help, but not enough - the queries are still 10 times slower
than they are on the second run.


The second one is that the machine with this setup is a dual-core Xeon
2.8 GHz. I've read somewhere about the context-switching problem and bad
Postgres performance. What are the effects? What are the symptoms? And what
would the performance gain be if I changed the machine to an equivalent
Athlon?


Thank you in advance.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Argument type list

2007-08-23 Thread Erik Jones

On Aug 23, 2007, at 11:56 AM, Gustavo Tonini wrote:


I want to create a function that receives a list argument and filters
data with the IN operator. Example:

CREATE OR REPLACE FUNCTION "public"."ffoo" (list ???) RETURNS VOID AS
$body$
BEGIN
  select * from foo where foo_column in list;
END;
$body$
LANGUAGE 'plpgsql' ;

I played with arrays but had no success...
Is this possible? How do I proceed?


Without knowing the data type of foo_column we can't really give a  
"best" solution, but with an array you could do something like (not  
tested):


CREATE OR REPLACE FUNCTION public.ffoo(list sometype[]) RETURNS VOID AS $$
BEGIN
  execute 'select * from foo where foo_column::text in ('
      || array_to_string(list, ',') || ');';
END;
$$
LANGUAGE plpgsql;

Note that if foo_column is already a text type you don't need the cast.
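An alternative sketch, not from the thread, that avoids building dynamic SQL
altogether is to compare against the array directly with = ANY (this assumes
foo_column and the array elements share a type, and uses PERFORM since plpgsql
discards the result of a bare query):

```sql
-- Hypothetical variant of ffoo using = ANY instead of an IN list.
CREATE OR REPLACE FUNCTION public.ffoo(list sometype[]) RETURNS VOID AS $$
BEGIN
  -- PERFORM runs the query and throws the rows away, matching the
  -- RETURNS VOID signature of the original.
  PERFORM * FROM foo WHERE foo_column = ANY (list);
END;
$$ LANGUAGE plpgsql;
```

This also sidesteps the quoting problems that string concatenation would cause
for text values containing commas or quotes.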

Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Apache + PHP + Postgres Interaction

2007-08-23 Thread Joshua D. Drake
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Max Zorloff wrote:
> Hello.
> 
> I have a subject setup and a few questions.
> 
> The first one is this. PHP establishes a connection to the Postgres
> database through pg_pconnect(). 

Don't use pconnect. Use pgbouncer or pgpool.

> Then it
> runs some query, then the script returns, leaving the persistent
> connection hanging. But the trouble
> is that in this case any query takes significantly more time to execute
> than in the case of one PHP script
> running the same query with different parameters for N times. How can I
> achieve the same performance in the first
> case? Persistent connections help but not enough - the queries are still
> 10 times slower than they would be on
> the 2nd time.

Well, you haven't given us any indication of the data set or what you are
trying to do. However, I can tell you: don't use pconnect, it's broke ;)
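As an illustration of the pgbouncer suggestion, a minimal pgbouncer.ini might
look like the following (database name, paths, and pool size are placeholder
assumptions; check the pgbouncer documentation for your version):

```ini
[databases]
; route the app's database through the pooler to the real server
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling releases the server connection after each transaction
pool_mode = transaction
default_pool_size = 20
```

The PHP scripts would then use a plain pg_connect() to port 6432 and let the
pooler keep the expensive server connections alive.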

> 
> The second one is that the machine with this setup is dual core Xeon
> 2.8ghz. I've read somewhere about
> the switching context problem and bad postgres performance. What are the
> effects? What are the symptoms?

You likely do not have this problem if you are running anywhere near a
current PostgreSQL release but you can check it with vmstat.

> And what will be the performance gain if I change the machine to equal
> Athlon?

Depends on the work load.

Sincerely,

Joshua D. Drake


> 
> Thank you in advance.
> 
> ---(end of broadcast)---
> TIP 2: Don't 'kill -9' the postmaster
> 


- --

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997  http://www.commandprompt.com/
UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFGzcEAATb/zqfZUUQRAkkEAKCc00kZu6YSDp1RWjY9zZeQVEYeVACeIsOl
hzyHOnynNSNWOrBakMeVKpc=
=LL5i
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Adapter update.

2007-08-23 Thread Murali Maddali
Hello Group,
 
I have asked this question already on the NpgSql forum, but didn't get a
response so far. Sorry for cross posting, but want to check if any one had
any suggestions for my problem.
 
I am trying to do my updates through NpgsqlDataAdapter (I also tried with
Odbc driver with no luck) by passing in a Datatable with changes in it, this
would take forever to do the updates. 
 
This is what I am doing, I am reading the data from SQL Server 2005 and
dumping to out to Postgresql 8.2 database.
 
using (SqlCommand cmd = new SqlCommand(t.SourceSelect, conn))
{
    using (SqlDataReader r = cmd.ExecuteReader())
    {
        DataSet ds = new DataSet("postgis");
        NpgsqlDataAdapter adp = new NpgsqlDataAdapter(t.DestinationSelect, destConn);
        NpgsqlCommandBuilder cmdBld = new NpgsqlCommandBuilder(adp);
        adp.Fill(ds, t.DestinationTable);
        DataTable destTbl = ds.Tables[t.DestinationTable];

        DataTable srcTblSchema = r.GetSchemaTable();
        adp.FillSchema(ds, SchemaType.Mapped, t.DestinationTable);

        // My save method checks whether the row exists and adds or updates
        // the datatable (destTbl) accordingly. The whole comparison is done
        // in under 2 mins on 60,000 records.
        while (r.Read())
            _save(r, srcTblSchema, destTbl, destConn);

        r.Close();

        // This is where my application goes into lala land. If I call this
        // update in my while loop above, it took about two hours to process
        // the whole thing.
        adp.Update(destTbl);
    }
}
 
I have around 6 records. I also have a geometry field on my table.
 
I have couple of questions.
 
1) What do I do to speed up the process? Any database configuration changes,
connection properties, ...
2) When I call the adapter.update does NpgsqlDataAdapter checks to see if
the column value really changed or not? I believe SQLDataAdapter does this
validation before it actually writes to the database.
 
Any suggestions and comments are greatly appreciated. Right now I am in dead
waters and can't get it to work on large datasets.
 
Thank you all.
 
Regards,
Murali K. Maddali
UAI, Inc.
[EMAIL PROTECTED]  
 
"Always bear in mind that your own resolution to succeed is more important
than any one thing." - Abraham Lincoln

 
This email and any files transmitted with it are confidential and intended 
solely for the use of the individual or entity to whom they are addressed. If 
you have received this email in error please notify the sender. This message 
contains confidential information and is intended only for the individual 
named. If you are not the named addressee you should not disseminate, 
distribute or copy this e-mail.


[GENERAL] problem Linking a TTable component to a pgsql view using BCB5

2007-08-23 Thread JLoz
Hello,

I am writing an application in Borland C++ builder 5 that connects to a
postgresql database (8.2.4).

I am trying to link a TDBListBox to a view by using a TTable and a
TDataSource component but when I send the TTable->Refresh() I get the
following error:

"Table does not support this operation because it is not uniquely indexed".

I have not been able to find a workaround for this.

Can anyone help me?  Is this the right list to post this to?

Thanks in advance,

JLoz



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] CPU load high

2007-08-23 Thread Patrick
Hi Max,

To find out what is causing the big load you could also try to use 'ATOP'
which can be found at http://www.atcomputing.nl/atop. This tool shows more
(accurate) information than the regular TOP.

There are also some kernel patches available which, when applied to your
kernel, even show more information which might come in handy.

Good Luck,

Patrick Lindeman

> Hello.
>
> I have a web-server with php 5.2 connected to postgres 8.0 backend. Most
> of the queries the users are doing are SELECTs (100-150 in a second for
> 100 concurrent users), with a 5-10 INSERTs/UPDATEs at the same time. There
> is also a demon running in the background doing some work once every
> 100ms. The problem is that after the number of concurrent users rises to
> 100, CPU becomes almost 100% loaded. How do I find out what's hogging the
> CPU?
>
> 'top' shows demon using 8% cpu on top, and some amount of postgres
> processes each using 2% cpu with some apache processes occassionally
> rising with 2% cpu also. Often the writer process is at the top using 10%
> cpu.
>
> And the second question is that over time demon and writer processes use
> more and more shared memory - is it normal?
>
> Thanks in advance.
>
> ---(end of broadcast)---
> TIP 9: In versions below 8.0, the planner will ignore your desire to
>choose an index scan if your joining column's datatypes do not
>match
>


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] 64 bit PG on OSX, FATAL: could not create shared memory segment

2007-08-23 Thread Kasper Frederiksen
A while ago Kevin Murphy reported that PG built for the 64-bit
architecture can't start on OS X:

http://archives.postgresql.org/pgsql-general/2007-04/msg00788.php

The problem was eventually traced to a compatibility problem with OS X:
http://lists.apple.com/archives/darwin-kernel/2007/Apr/msg00021.html

I just tried with the new PG 8.2.4 and I get the exact same fatal error.

Does anyone know of a patch that will fix this problem?
Or maybe I need to build the binaries with different compiler flags?

---
I am running on an Intel Xeon OS X system

I configured PG with
./configure --prefix=pgsql64 --without-readline CFLAGS='-arch x86_64'

and the error is:
> /usr/local/pgsql64/bin/initdb -D /usr/local/pgsql64/data
The files belonging to this database system will be owned by user  
"kasperf".

This user must also own the server process.

The database cluster will be initialized with locale C.

fixing permissions on existing directory /usr/local/pgsql64/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 10
selecting default shared_buffers/max_fsm_pages ... 400kB/2
creating configuration files ... ok
creating template1 database in /usr/local/pgsql64/data/base/1 ...
FATAL:  could not create shared memory segment: Cannot allocate memory
DETAIL:  Failed system call was shmget(key=1, size=1810432, 03600).
HINT:  This error usually means that PostgreSQL's request for a shared
memory segment exceeded available memory or swap space. To reduce the
request size (currently 1810432 bytes), reduce PostgreSQL's shared_buffers
parameter (currently 50) and/or its max_connections parameter (currently 10).
The PostgreSQL documentation contains more information about
shared memory configuration.

child process exited with exit code 1
initdb: removing contents of data directory "/usr/local/pgsql64/data"


Thanks,
Kasper Frederiksen




---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] reporting tools

2007-08-23 Thread Gauthier, Dave
If query development is an important part of what you need to do,
consider dbQwikEdit.  It's not open or free, but you can get a minimal
config for free (I think) and it's pretty cheap considering what it can
do.

It uses ODBC and can read any DB that ODBC points to (Oracle, MySQL,
Postgres, SQL Server, etc.).  You can enter your queries by hand
(being the SQL-savvy people we are, that's what we'd do).  But there is
also a GUI that users can run that will hand-hold them through building
SQL graphically.  Pretty neat.  This feature lets users run ad-hoc
queries.

The output is just tabular, but you can export to lots of different
formats.

Just a thought.

http://www.dbqwikedit.com/

-dave 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Thomas Kellerer
Sent: Thursday, August 23, 2007 10:07 AM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] reporting tools

Ned Lilly wrote on 23.08.2007 15:44:
> This is specifically why we released OpenRPT as open source - it's
very 
> lightweight, no Java required.  http://sf.net/projects/openrpt

I am a Java developer and thus have no problems using Java-based tools,
especially because I usually only have a JDBC driver around for the
databases I use (with Oracle this is *very* nice, because it does not
require a full client install, only a single .jar file).

But OpenRPT looks quite nice, I'll have a look at it as well. I guess I
need to install the whole ODBC "shebang" for that, right? :)


Thomas


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Apache + PHP + Postgres Interaction

2007-08-23 Thread Bill Moran
In response to "Joshua D. Drake" <[EMAIL PROTECTED]>:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
> 
> Max Zorloff wrote:
> > Hello.
> > 
> > I have a subject setup and a few questions.
> > 
> > The first one is this. PHP establishes a connection to the Postgres
> > database through pg_pconnect(). 
> 
> Don't use pconnect. Use pgbouncer or pgpool.
> 
> > Then it
> > runs some query, then the script returns, leaving the persistent
> > connection hanging. But the trouble
> > is that in this case any query takes significantly more time to execute
> > than in the case of one PHP script
> > running the same query with different parameters for N times. How can I
> > achieve the same performance in the first
> > case? Persistent connections help but not enough - the queries are still
> > 10 times slower than they would be on
> > the 2nd time.
> 
> Well you haven't given us any indication of data set or what you are
> trying to do. However, I can tell you, don't use pconnect, its broke ;)

Broke?  How do you figure?

I'm not trying to argue the advantages of a connection pooler such as
pgpool, but, in my tests, pconnect() does exactly what it's supposed
to do: reuse existing connections.  In our tests, we saw a 2x speed
improvement over connect().  Again, I understand that pgpool will do
even better ...

Also, I'm curious as to whether he's timing the actual _query_ or the
entire script execution.  If you're running a script multiple times
to get multiple queries, most of your time is going to be tied up in
PHP's parsing and startup -- unless I misunderstood the question.

-- 
Bill Moran
http://www.potentialtech.com

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] PostgreSQL vs Firebird feature comparison finished

2007-08-23 Thread Richard Broersma Jr
--- Tony Caduto <[EMAIL PROTECTED]> wrote:

> Check it out here:
> 
> http://www.amsoftwaredesign.com/pg_vs_fb

One row that you could elaborate on is CHECK CONSTRAINT support for correlated sub-queries.
PostgreSQL doesn't officially support this kind of constraint unless it is rolled up in a function.
I am not sure what support FB has for this.

Another constraint row you could add would be CREATE ASSERTION, which is a schema-level constraint.
Currently PostgreSQL doesn't support this; I am not sure if FB does either.

Also you could mention PostgreSQL's support for row-wise comparison:
i.e. WHERE ( last_name, city, gender ) = ( 'Doe', 'Paris', 'female' );

and PostgreSQL's support for additional SQL comparison operators:
i.e. WHERE (( last_name, city, gender ) = ( 'Doe', 'Paris', 'female' )) IS UNKNOWN;
-- returns all people who might meet the criteria if their null fields were known.

Regards,
Richard Broersma Jr.


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Adapter update.

2007-08-23 Thread Murali Maddali
Richard,

I have added a transaction to my code and it took about two and a half hours to
process around 48,000 records. Again, all this time is taken by the update method
on the adapter.

I don't know Perl, so I can't set up the database link to SQL Server 2005, and I
also don't have permission to write the data to files. Are there any other
options, such as a different driver or stored procedures? I have to compare
each column in each row before doing the update.

Your suggestions and comments are greatly appreciated.

Thank you,
Murali K. Maddali
256-705-5191
[EMAIL PROTECTED]

-Original Message-
From: Richard Huxton [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, August 22, 2007 2:41 PM
To: Murali Maddali
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Adapter update.

Murali Maddali wrote:
> This is what I am doing, I am reading the data from SQL Server 2005 
> and dumping to out to Postgresql 8.2 database.

> while (r.Read())
> _save(r, srcTblSchema, destTbl, destConn);
> 
> r.Close();
> 
>  
> // This is the where my application goes into lala
land.
> If I call this update in my while loop above, it took about two hours 
> to process
> // the whole thing
> adp.Update(destTbl);

That's probably because it was doing each update in its own transaction. 
That'll require committing each row to disk.

> I have around 6 records. I also have a geometry field on my table.
>  
> I have couple of questions.
>  
> 1) What do I do to speed up the process? Any database configuration 
> changes, connection properties, 

Well, if you're doing it all in a single transaction it should be fairly
quick.

You might also find the DBI-link project useful, if you know any Perl. 
That would let you reach out directly from PG to the SQL-Server database.
   http://pgfoundry.org/projects/dbi-link/

> 2) When I call the adapter.update does NpgsqlDataAdapter checks to see 
> if the column value really changed or not? I believe SQLDataAdapter 
> does this validation before it actually writes to the database.

Sorry, don't know - but you have the source, so it should be easy enough to check.
If not, I'm sure the npgsql people would be happy to receive a patch.

> Any suggestions and comments are greatly appreciated. Right now I am 
> in dead waters and can't get it to work on large datasets.

The fastest way to load data into PG is via COPY; I don't know if the npgsql
driver supports that. If not, you'd have to go via a text file.

Load the data into an import table (TEMPORARY table probably) and then just
use three queries to handle deletion, update and insertion. 
Comparing one row at a time is adding a lot of overhead.
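The staged-load approach above might look roughly like this in SQL (a sketch only: the table name "target", the key "id", and the column list are hypothetical, and the staging table is assumed to be filled via COPY):

```sql
-- Stage the incoming rows, then merge in three set-based steps.
CREATE TEMPORARY TABLE staging (LIKE target INCLUDING DEFAULTS);
-- COPY staging FROM '/path/to/export.txt';

-- 1) delete rows that vanished from the source
DELETE FROM target t
WHERE NOT EXISTS (SELECT 1 FROM staging s WHERE s.id = t.id);

-- 2) update rows whose values actually changed
UPDATE target t
SET col1 = s.col1, col2 = s.col2
FROM staging s
WHERE s.id = t.id
  AND (t.col1, t.col2) IS DISTINCT FROM (s.col1, s.col2);

-- 3) insert brand-new rows
INSERT INTO target
SELECT * FROM staging s
WHERE NOT EXISTS (SELECT 1 FROM target t WHERE t.id = s.id);
```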

-- 
   Richard Huxton
   Archonet Ltd

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Postgres, fsync and RAID controller with 100M of internal cache & dedicated battery

2007-08-23 Thread Lincoln Yeoh

At 11:28 PM 8/22/2007, Dmitry Koterov wrote:

Hello.

We are trying to use an HP CISS controller (Smart Array E200i) with 
internal cache memory (100M for write caching, built-in power 
battery) together with Postgres. Typically under a heavy load 
Postgres runs checkpoint fsync very slow:


checkpoint buffers dirty=16.8 MB (3.3%) write=24.3 ms sync=6243.3 ms

(If we turn off fsync, the speed increases greatly, fsync=0.) And 
unfortunately it affects all the database productivity during the checkpoint.
Here is the timing (in milliseconds) of a test transaction called 
multiple times concurrently (6 threads) with fsync turned ON:


It's likely your controller is not doing the write caching thingy, or
the write caching is still slow (I've seen RAID controllers that are
slower than software RAID).


Have you actually configured your controller to do write caching?
I wouldn't be surprised if it's in a conservative setting, which means
"write-through" rather than "write-back", even if there's a battery.


BTW, what happens if someone replaced a faulty battery backed 
controller card on a "live" system with one from a "don't care test 
system" (identical hardware tho) that was powered down abruptly 
because people didn't care? Would the new card proceed to trash the 
"live" system?


Probably not that important, but what are your mount options for the 
partition? Is the partition mounted noatime (or similar)?


Regards,
Link.





---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Adapter update.

2007-08-23 Thread Joshua D. Drake
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Murali Maddali wrote:
> Richard,
> 
> I have added transaction to my code and it took about 2 and half hours to
> process around 48,000 records. Again all this time is taken by update method
> on the adapter.
> 
> I don't know Perl to setup the database link to SQL Server 2005 and also I
> don't have permission to write the data to files. Are there any other
> options like a different driver I can use or through stored procedures. I
> have to compare each column in each row before doing the update.

This is probably where your time is spent, not the actual commit of the
data. 48k records is nothing.

Joshua D. Drake


> 
> Your suggestions and comments are greatly appreciated.
> 
> Thank you,
> Murali K. Maddali
> 256-705-5191
> [EMAIL PROTECTED]
> 
> -Original Message-
> From: Richard Huxton [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, August 22, 2007 2:41 PM
> To: Murali Maddali
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Adapter update.
> 
> Murali Maddali wrote:
>> This is what I am doing, I am reading the data from SQL Server 2005 
>> and dumping to out to Postgresql 8.2 database.
> 
>> while (r.Read())
>> _save(r, srcTblSchema, destTbl, destConn);
>>
>> r.Close();
>>
>>  
>> // This is the where my application goes into lala
> land.
>> If I call this update in my while loop above, it took about two hours 
>> to process
>> // the whole thing
>> adp.Update(destTbl);
> 
> That's probably because it was doing each update in its own transaction. 
> That'll require committing each row to disk.
> 
>> I have around 6 records. I also have a geometry field on my table.
>>  
>> I have couple of questions.
>>  
>> 1) What do I do to speed up the process? Any database configuration 
>> changes, connection properties, 
> 
> Well, if you're doing it all in its own transaction it should be fairly
> quick.
> 
> You might also find the DBI-link project useful, if you know any Perl. 
> That would let you reach out directly from PG to the SQL-Server database.
>http://pgfoundry.org/projects/dbi-link/
> 
>> 2) When I call the adapter.update does NpgsqlDataAdapter checks to see 
>> if the column value really changed or not? I believe SQLDataAdapter 
>> does this validation before it actually writes to the database.
> 
> Sorry, don't know - but you have the source, should be easy enough to check.
> If not, I'm sure the npgsql people would be happy of a patch.
> 
>> Any suggestions and comments are greatly appreciated. Right now I am 
>> in dead waters and can't get it to work on large datasets.
> 
> Fastest way to load data into PG is via COPY, don't know if npgsql driver
> supports that. If not, you'd have to go via a text-file.
> 
> Load the data into an import table (TEMPORARY table probably) and then just
> use three queries to handle deletion, update and insertion. 
> Comparing one row at a time is adding a lot of overhead.
> 


- --

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997  http://www.commandprompt.com/
UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFGzccJATb/zqfZUUQRAsBWAJ4ppz8X4RABNTdJYH/iFNvmnuUZrgCfbJiD
8Lb6BstpYZ/ipR0jgyh4ALE=
=3DmY
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Apache + PHP + Postgres Interaction

2007-08-23 Thread Erik Jones

On Aug 23, 2007, at 12:29 PM, Bill Moran wrote:


In response to "Joshua D. Drake" <[EMAIL PROTECTED]>:


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Max Zorloff wrote:

Hello.

I have a subject setup and a few questions.

The first one is this. PHP establishes a connection to the Postgres
database through pg_pconnect().


Don't use pconnect. Use pgbouncer or pgpool.


Then it runs some query, then the script returns, leaving the persistent
connection hanging. But the trouble is that in this case any query takes
significantly more time to execute than in the case of one PHP script
running the same query with different parameters for N times. How can I
achieve the same performance in the first case? Persistent connections
help but not enough - the queries are still 10 times slower than they
would be on the 2nd time.


Well you haven't given us any indication of data set or what you are
trying to do. However, I can tell you, don't use pconnect, its broke ;)


Broke?  How do you figure?

I'm not trying to argue the advantages of a connection pooler such as
pgpool, but, in my tests, pconnect() does exactly what it's supposed
to do: reuse existing connections.  In our tests, we saw a 2x speed
improvement over connect().  Again, I understand that pgpool will do
even better ...


We were just talking about this less than two weeks ago:
http://archives.postgresql.org/pgsql-general/2007-08/msg00660.php


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Postgres, fsync and RAID controller with 100M of internal cache & dedicated battery

2007-08-23 Thread Greg Smith

On Fri, 24 Aug 2007, Lincoln Yeoh wrote:

BTW, what happens if someone replaced a faulty battery backed controller card 
on a "live" system with one from a "don't care test system" (identical 
hardware tho) that was powered down abruptly because people didn't care? 
Would the new card proceed to trash the "live" system?


All the caching controllers I've examined this behavior on give each disk 
a unique ID, so if you connect new disks to them they wouldn't trash 
anything because those writes will only go out to the original drives. 
What happens to the pending writes for the drives that aren't there 
anymore is kind of undefined though; presumably they'll just be thrown 
away, I don't know if there are any cards that try to hang on to them in 
case the original disks are connected later.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Apache + PHP + Postgres Interaction

2007-08-23 Thread Josh Trutwin
On Thu, 23 Aug 2007 13:29:46 -0400
Bill Moran <[EMAIL PROTECTED]> wrote:

> > Well you haven't given us any indication of data set or what you
> > are trying to do. However, I can tell you, don't use pconnect,
> > its broke ;)
> 
> Broke?  How do you figure?

I asked that question earlier this month - this thread has some
interesting discussion on pconnect:

http://archives.postgresql.org/pgsql-general/2007-08/msg00602.php

Josh

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Adapter update.

2007-08-23 Thread Lincoln Yeoh

At 01:30 AM 8/24/2007, Murali Maddali wrote:


options like a different driver I can use or through stored procedures. I
have to compare each column in each row before doing the update.


Do you have to compare with all rows, or just one? Can your 
comparison make use of an index?


Link.


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Apache + PHP + Postgres Interaction

2007-08-23 Thread Max Zorloff
On Thu, 23 Aug 2007 21:16:48 +0400, Joshua D. Drake <[EMAIL PROTECTED]> wrote:



-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Max Zorloff wrote:

Hello.

I have a subject setup and a few questions.

The first one is this. PHP establishes a connection to the Postgres
database through pg_pconnect().


Don't use pconnect. Use pgbouncer or pgpool.


Then it
runs some query, then the script returns, leaving the persistent
connection hanging. But the trouble
is that in this case any query takes significantly more time to execute
than in the case of one PHP script
running the same query with different parameters for N times. How can I
achieve the same performance in the first
case? Persistent connections help but not enough - the queries are still
10 times slower than they would be on
the 2nd time.


Well you haven't given us any indication of data set or what you are
trying to do. However, I can tell you, don't use pconnect, its broke ;)


The data set is a ~400 MB database with ~100 SELECT queries running per
second and some 7-10 pl/pgsql functions doing select checks and then 2-3
insert/updates.




The second one is that the machine with this setup is dual core Xeon
2.8ghz. I've read somewhere about
the switching context problem and bad postgres performance. What are the
effects? What are the symptoms?


You likely do not have this problem if you are running anywhere near a
current PostgreSQL release but you can check it with vmstat.


I have PostgreSQL 8.0.13. How do I check this with vmstat?


And what will be the performance gain if I change the machine to equal
Athlon?


Depends on the work load.


Right now 100 concurrent users completely use the cpu. So I'm trying to
find out where the problem lies.


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] PostgreSQL vs Firebird feature comparison finished

2007-08-23 Thread Scott Marlowe
On 8/23/07, Tony Caduto <[EMAIL PROTECTED]> wrote:
> Check it out here:
>
> http://www.amsoftwaredesign.com/pg_vs_fb
> If there is any interest I could also add MySQL 5.0 to the mix as the
> third column.

If you do, you should really do it as MySQL-isam and MySQL-innodb.

the limitations of each table handler are often as much different as
to make it another database server.  i.e. no full text search on
innodb tables, no foreign keys on isam tables, etc...

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Apache + PHP + Postgres Interaction

2007-08-23 Thread Max Zorloff
On Thu, 23 Aug 2007 21:29:46 +0400, Bill Moran <[EMAIL PROTECTED]> wrote:



In response to "Joshua D. Drake" <[EMAIL PROTECTED]>:


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Max Zorloff wrote:
> Hello.
>
> I have a subject setup and a few questions.
>
> The first one is this. PHP establishes a connection to the Postgres
> database through pg_pconnect().

Don't use pconnect. Use pgbouncer or pgpool.

> Then it
> runs some query, then the script returns, leaving the persistent
> connection hanging. But the trouble
> is that in this case any query takes significantly more time to execute
> than in the case of one PHP script
> running the same query with different parameters for N times. How can I
> achieve the same performance in the first
> case? Persistent connections help but not enough - the queries are still
> 10 times slower than they would be on
> the 2nd time.

Well you haven't given us any indication of data set or what you are
trying to do. However, I can tell you, don't use pconnect, its broke ;)


Broke?  How do you figure?

I'm not trying to argue the advantages of a connection pooler such as
pgpool, but, in my tests, pconnect() does exactly what it's supposed
to do: reuse existing connections.  In our tests, we saw a 2x speed
improvement over connect().  Again, I understand that pgpool will do
even better ...

Also, I'm curious as to whether he's timing the actual _query_ or the
entire script execution.  If you're running a script multiple times
to get multiple queries, most of your time is going to be tied up in
PHP's parsing and startup -- unless I misunderstood the question.



I'm timing it with the PHP gettimeofday(), and I'm timing the actual
pg_query() run time, excluding db connection and everything else.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] PostgreSQL vs Firebird feature comparison finished

2007-08-23 Thread Lewis Cunningham
If anyone is interested, I could answer the questions for Oracle and
you could add those, too.  Be interesting to see a chart like that
(that stays updated after releases) for a large assortment of
databases.

If we add a bunch of different databases, it might be easier to
manipulate if it was stored in a database.  MS-Access maybe?  ;-)

LewisC

--- Tony Caduto <[EMAIL PROTECTED]> wrote:

> Check it out here:
> 
> http://www.amsoftwaredesign.com/pg_vs_fb
> 
> 
> When comparing in the grid the only major advantage FB has is
> probably 
> BLOB support.
> PG only suppports 1 gb while FB supports 32gb.  Bytea is pretty
> slow as 
> well when compared to the FB BLOB support.
> 
> The other area is Character sets and collation.  They support it at
> a 
> field level as well as the database.
> 
> Other than that I would say PG kicks butt.
> 
> If there is any interest I could also add MySQL 5.0 to the mix as
> the 
> third column.
> 
> 
> Later,
> 
> Tony
> 
> ---(end of
> broadcast)---
> TIP 1: if posting/reading through Usenet, please send an
> appropriate
>subscribe-nomail command to [EMAIL PROTECTED] so that
> your
>message can get through to the mailing list cleanly
> 


---
Lewis R Cunningham

An Expert's Guide to Oracle Technology
http://blogs.ittoolbox.com/oracle/guide/

LewisC's Random Thoughts
http://lewiscsrandomthoughts.blogspot.com/

EnterpriseDB: The Definitive Reference
http://tinyurl.com/39246e
--

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Argument type list

2007-08-23 Thread Tom Lane
Erik Jones <[EMAIL PROTECTED]> writes:
> On Aug 23, 2007, at 11:56 AM, Gustavo Tonini wrote:
>> I want to create a function that receive a list argument and filter
>> data with IN operator. Example:

> CREATE OR REPLACE FUNCTION public.ffoo(list sometype[]) RETURNS VOID  

this is right ...

>   execute 'select * from foo where foo_column::text in (' ||  
> array_to_string(list, ',') || ');';

this is pretty horrid.  Use = ANY(array) instead of trying to construct
an IN on the fly.

select * from foo where foo_column = any(list)
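Inside the kind of function Gustavo described, this could look like the following sketch (the function name, table, and the text[] parameter type are assumptions, not from the original post; a plain SQL function works here and avoids dynamic SQL entirely):

```sql
-- Sketch: set-returning function filtering with = ANY
-- instead of a dynamically built IN list
CREATE OR REPLACE FUNCTION public.ffoo(list text[]) RETURNS SETOF foo AS $$
    SELECT * FROM foo WHERE foo_column::text = ANY($1);
$$ LANGUAGE sql STABLE;

-- SELECT * FROM public.ffoo(ARRAY['a', 'b', 'c']);
```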

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] "out of memory" error

2007-08-23 Thread Christian Schröder




Tom Lane wrote:
>> Ok, I can do this, but why can more memory be harmful?
>
> Because you've left no room for anything else?  The kernel, the various
> other daemons, the Postgres code itself, and the local memory for each
> Postgres process all require more than zero space.

So does this mean that the stuff you mentioned needs more than 1 GB of
memory? I seem to have underestimated the amount of memory that is
needed for these purposes. :(


  
> Even more to the point, with such a large shared-buffer space, the
> kernel probably will be tempted to swap out whatever parts of it seem
> less used at the moment.  That is far more harmful to performance than
> not having had the buffer in the first place --- it can easily triple
> the amount of disk I/O involved.  (Thought experiment: dirty buffer is
> written to disk, versus dirty buffer is swapped out to disk, then later
> has to be swapped in so it can be written to wherever it should have
> gone.)
>
> Bottom line is that PG shared buffers are not so important as to deserve
> 3/4ths of your RAM.

Thanks for your tips! I have changed the "shared_buffers" setting back
to 2 GB. It was set to 2 GB before, but we also had "out of memory"
errors with this setting, so I raised it to 3 GB.
Could you please help me understand what's happening? The server is a
dedicated database server. Few other demons are running, most of them
are system services that do not consume a considerable amount of
memory. No web server or similar is running on this machine.
Moreover, the output of "free" confuses me:

    db2:~ # free -m
                 total    used    free  shared  buffers  cached
    Mem:          3954    3724     229       0        0    3097
    -/+ buffers/cache:     627    3326
    Swap:         2055     628    1426

Doesn't that mean that plenty of memory is unused? I always thought
that the memory used for buffers and caches can be thought of as free
memory. Isn't this correct?
Regarding the memory needs of the PostgreSQL server itself: Is there
any estimation how much memory will be needed besides the shared
buffers? What exactly does "out of memory" mean? Who requested the
memory and why could this memory request not be fulfilled?
I can post the memory overview from the log file, but I don't know if
it's considered impolite to post so many lines to this mailing list.

Thanks a lot again for your help,
    Christian
-- 
Deriva GmbH Tel.: +49 551 489500-42
Financial IT and Consulting Fax:  +49 551 489500-91
Hans-Böckler-Straße 2  http://www.deriva.de
D-37079 Göttingen

Deriva CA Certificate: http://www.deriva.de/deriva-ca.cer




Re: [GENERAL] PostgreSQL vs Firebird feature comparison finished

2007-08-23 Thread Joshua D. Drake
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Lewis Cunningham wrote:
> If anyone is interested, I could answer the questions for Oracle and
> you could add those, too.  Be interesting to see a chart like that
> (that stays updated after releases) for a large assortment of
> databases.
> 
> If we add a bunch of different databases, it might be easier to
> manipulate if it was stored in a database.  MS-Access maybe?  ;-)

Let's get this up on the wiki.

Joshua D. Drake

> 
> LewisC
> 
> --- Tony Caduto <[EMAIL PROTECTED]> wrote:
> 
>> Check it out here:
>>
>> http://www.amsoftwaredesign.com/pg_vs_fb
>>
>>
>> When comparing in the grid the only major advantage FB has is
>> probably 
>> BLOB support.
>> PG only suppports 1 gb while FB supports 32gb.  Bytea is pretty
>> slow as 
>> well when compared to the FB BLOB support.
>>
>> The other area is Character sets and collation.  They support it at
>> a 
>> field level as well as the database.
>>
>> Other than that I would say PG kicks butt.
>>
>> If there is any interest I could also add MySQL 5.0 to the mix as
>> the 
>> third column.
>>
>>
>> Later,
>>
>> Tony
>>
>> ---(end of
>> broadcast)---
>> TIP 1: if posting/reading through Usenet, please send an
>> appropriate
>>subscribe-nomail command to [EMAIL PROTECTED] so that
>> your
>>message can get through to the mailing list cleanly
>>
> 
> 
> ---
> Lewis R Cunningham
> 
> An Expert's Guide to Oracle Technology
> http://blogs.ittoolbox.com/oracle/guide/
> 
> LewisC's Random Thoughts
> http://lewiscsrandomthoughts.blogspot.com/
> 
> EnterpriseDB: The Definitive Reference
> http://tinyurl.com/39246e
> --
> 
> ---(end of broadcast)---
> TIP 9: In versions below 8.0, the planner will ignore your desire to
>choose an index scan if your joining column's datatypes do not
>match
> 


- --

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997  http://www.commandprompt.com/
UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFGzdNoATb/zqfZUUQRAh/sAJ92Ko3lB6eCGSyJJyoPw5sn4VI44QCdGTjc
XzyzrDQKnA7mgoNXDohvUpY=
=Um04
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] "out of memory" error

2007-08-23 Thread Martijn van Oosterhout
On Thu, Aug 23, 2007 at 08:30:46PM +0200, Christian Schröder wrote:
>Thanks for your tips! I have changed the "shared_buffers" setting back
>to 2 GB. It was set to 2 GB before, but we also had "out of memory"
>errors with this setting, so I raised it to 3 GB.

You've got it backwards. By setting shared_buffers to 2GB, that memory
is reserved for PostgreSQL's buffer cache, and no-one else can use it.
It's not Postgres that's running out of memory, it's the rest of your
system. Set it to something sane like 128MB or maybe smaller.

It's a cache, nothing more; small values do not mean you can't run big
queries.

The rest of Tom's comment was about how large shared_buffer is worse
because it eats away at your real disk cache and your performance will
completely tank.


>Doesn't that mean that plenty of memory is unused? I always thought
>that the memory used for buffers and caches can be thought of as free
>memory. Isn't this correct?

PostgreSQL's shared_buffers is not "free" memory. It should be around
your actual working set size; much bigger is counterproductive.
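As a rough sketch of what "sane" might look like for a dedicated 4 GB box (illustrative values only, 8.2-style unit suffixes assumed; tune against your own workload):

```
# postgresql.conf -- illustrative starting point, not tuned advice
shared_buffers = 128MB        # around the hot working set, not 3/4 of RAM
work_mem = 8MB                # per sort/hash node, so multiply by concurrency
maintenance_work_mem = 64MB   # for VACUUM, CREATE INDEX
effective_cache_size = 2GB    # planner hint: what the OS page cache holds
```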

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.




Re: [GENERAL] Apache + PHP + Postgres Interaction

2007-08-23 Thread Bill Moran
In response to Josh Trutwin <[EMAIL PROTECTED]>:

> On Thu, 23 Aug 2007 13:29:46 -0400
> Bill Moran <[EMAIL PROTECTED]> wrote:
> 
> > > Well you haven't given us any indication of data set or what you
> > > are trying to do. However, I can tell you, don't use pconnect,
> > > its broke ;)
> > 
> > Broke?  How do you figure?
> 
> I asked that question earlier this month - this thread has some
> interesting discussion on pconnect:
> 
> http://archives.postgresql.org/pgsql-general/2007-08/msg00602.php

Thanks to you and Erik for the link.  Not sure how I missed that
thread.

I guess I just feel that "broken" is a bit of a harsh term.  If
your expectations are for full-blown connection management from
pconnect(), then you will be disappointed.  If you take it for
what it is: persistent connections, then those limitations would
be expected.

*shrug*

I'm just glad there aren't any unknown problems waiting to bite
me ...

-- 
Bill Moran
http://www.potentialtech.com

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Argument type list

2007-08-23 Thread Erik Jones

On Aug 23, 2007, at 1:27 PM, Tom Lane wrote:


Erik Jones <[EMAIL PROTECTED]> writes:

On Aug 23, 2007, at 11:56 AM, Gustavo Tonini wrote:

I want to create a function that receive a list argument and filter
data with IN operator. Example:



CREATE OR REPLACE FUNCTION public.ffoo(list sometype[]) RETURNS VOID


this is right ...


  execute 'select * from foo where foo_column::text in (' ||
array_to_string(list, ',') || ');';


this is pretty horrid.  Use = ANY(array) instead of trying to construct
an IN on the fly.

select * from foo where foo_column = any(list)


Yes, I always forget about using ANY.  Thx.

Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] PostgreSQL vs Firebird feature comparison finished

2007-08-23 Thread Joshua D. Drake
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Tony Caduto wrote:
> Dave Page wrote:
>> Couple of corrections Tony:
>>
>> - You don't necessarily need to stop the postmaster to take a filesystem
>> backup -
>> http://www.postgresql.org/docs/8.2/interactive/continuous-archiving.html#BACKUP-BASE-BACKUP.

>>
>>   
> 
> Thanks  Dave.
> Will update ASAP.
> 
> I agree with you on the multi-threaded.  I think I will add a note
> saying that the multi-threaded architecture is only advantageous on
> Windows.

And Solaris.

Joshua D. Drake

> I have seen instances where the threaded version of Firebird completely
> craps out because one of the threads  has issues.
> 
> Will also make a note that it can run on FAT32 with some limitations.
> 
> Later,
> 
> Tony
> 
> 
> 
> ---(end of broadcast)---
> TIP 1: if posting/reading through Usenet, please send an appropriate
>   subscribe-nomail command to [EMAIL PROTECTED] so that your
>   message can get through to the mailing list cleanly
> 


- --

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997  http://www.commandprompt.com/
UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFGzdyOATb/zqfZUUQRAjYtAJ9GxNvF46JXM34i6Kf0RE7TLwkGggCeN5QD
eELS+fyixPqlB/dYiGkC/vM=
=wN+j
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] problem Linking a TTable component to a pgsql view using BCB5

2007-08-23 Thread Rodrigo De León
On 8/21/07, JLoz <[EMAIL PROTECTED]> wrote:
> I have not been able to find a workaround for this?

Does the table have a unique index/primary key?

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] PostgreSQL vs Firebird feature comparison finished

2007-08-23 Thread Greg Smith

On Thu, 23 Aug 2007, Tony Caduto wrote:

If there is any interest I could also add MySQL 5.0 to the mix as the third 
column.


As already mentioned, MyISAM and InnoDB should get their own columns.

This is a really good comparison, focusing on features that I think 
people understand rather than so much on technical trivia.  Someone else 
mentioned moving it onto the Wiki.  Questions that pop into my head:


-Tony, would you be comfortable with your work being assimilated into a 
larger table that was hosted somewhere else but credited yours as a 
source?


-Is the Wiki the right place to build this table?  Large Wiki tables 
get very difficult to manage.  It may be easier to build the table in 
something else and then have that generate markup instead.  I'd rather 
edit this in a spreadsheet and write something to massage that into final 
form than do all the edits within the Wikipedia editor.


-If this is going to turn into the grand feature comparison table, 
everyone might as well be thinking from day one that inevitably there will 
be columns for Oracle (with a volunteer to fill out already), SQL Server, 
DB2, etc. and plan a useful way to manage all that data from the 
beginning.  That's another reason why the Wiki is a bad way to cope with 
this data; adding another column is a painful and error-prone operation.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] FATAL: could not reattach to shared memory (Win32)

2007-08-23 Thread Terry Yapt
Sorry, I have not been able to execute "ipcs" on Windows; it doesn't 
exist.  I have tried to find some utility that gives me the same 
information, or any ipcs port to Win32, but I haven't had any luck.


If I can do something more to get help, please tell me.

Greetings.


Alvaro Herrera escribió:

Terry Yapt wrote:

  
I am looking for system errors and nothing is there.  But I have a lot of 
messages on system APP errors.  The error is the same every ten seconds or 
so.


This is the main error:
* FATAL:  could not reattach to shared memory (key=5432001, addr=01D8): 
Invalid argument



Please run "ipcs" on a command line window and paste the results.

I see a minor problem in that code: we are invoking two system calls
(shmget and shmat) but the log does not say which one failed.  However
in this case it seems only shmget could be returning EINVAL.

  



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] FATAL: could not reattach to shared memory (Win32)

2007-08-23 Thread Alvaro Herrera
Terry Yapt wrote:

> This is the main error:
> * FATAL:  could not reattach to shared memory (key=5432001, addr=01D8): 
> Invalid argument
>
> It is always followed by this another system-app error:
> * LOG:  unrecognized win32 error code: 487

FWIW,
http://help.netop.com/support/errorcodes/win32_error_codes.htm

says
487 Attempt to access invalid address.  ERROR_INVALID_ADDRESS

This problem has been reported before, for example in

http://bbs.chinaunix.net/thread-973003-1-1.html
(not that I can read it very well)

and

http://lists.pgfoundry.org/pipermail/brasil-usuarios/20061127/003150.html

No resolution seems to have been found.

-- 
Alvaro Herrera                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] PostgreSQL vs Firebird feature comparison finished

2007-08-23 Thread David Fetter
On Thu, Aug 23, 2007 at 03:30:30PM -0400, Greg Smith wrote:
> On Thu, 23 Aug 2007, Tony Caduto wrote:
> 
> >If there is any interest I could also add MySQL 5.0 to the mix as the 
> >third column.
> 
> As already mentioned, MyISAM and InnoDB should get their own columns.

Yes.

> This is a really good comparison, focusing on features that I think 
> people understand rather than so much on technical trivia.  Someone else 
> mentioned moving it onto the Wiki.  Questions that pop into my head:
> 
> -Tony, would you be comfortable with your work being assimilated into a 
> larger table that was hosted somewhere else but credited yours as a 
> source?
> 
> -Is the Wiki the right place to build this table?  Large Wiki
> tables get very difficult to manage.

They're very easy to manage using things like the Firefox/Mozilla
plugin viewsourcewith.


> It may be easier to build the table in something else and then have
> that generate markup instead.  I'd rather edit this in a spreadsheet
> and write something to massage that into final form than do all the
> edits within the Wikipedia editor.

See above :)

> -If this is going to turn into the grand feature comparison table,
> everyone might as well be thinking from day one that inevitably
> there will be columns for Oracle (with a volunteer to fill out
> already), SQL Server, DB2, etc. and plan a useful way to manage all
> that data from the beginning.  That's another reason why the Wiki is
> a bad way to cope with this data; adding another column is a painful
> and error-prone operation.

Could be.  Try viewsourcewith with your favorite editor and see
whether it eases the pain :)

Cheers,
David.
-- 
David Fetter <[EMAIL PROTECTED]> http://fetter.org/
phone: +1 415 235 3778        AIM: dfetter666
  Skype: davidfetter

Remember to vote!
Consider donating to PostgreSQL: http://www.postgresql.org/about/donate

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Argument type list

2007-08-23 Thread Gustavo Tonini
Ok. It works well, but my argument type must be varchar (because of Java
conversions in my application). I then want to convert a varchar in the
format "{int, int, ...}" to an integer array. Is there a function that
converts varchar -> integer[]? I tried casts and had no success.
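For illustration, a literal shaped like "{1,2,3}" can be unpacked client-side with a few lines of code; the helper name below is made up, and (as a hedge) it may be simpler to try a direct cast of the literal in Postgres itself, e.g. '{1,2,3}'::int4[], before resorting to string parsing:

```python
def parse_int_array(s):
    """Parse a Postgres-style array literal such as '{1,2,3}' into a list of ints."""
    inner = s.strip().strip("{}")
    # an empty '{}' literal means an empty array
    return [int(x) for x in inner.split(",")] if inner else []

print(parse_int_array("{1, 2, 3}"))  # [1, 2, 3]
```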

Thanks,
Gustavo.

On 8/23/07, Erik Jones <[EMAIL PROTECTED]> wrote:
> On Aug 23, 2007, at 1:27 PM, Tom Lane wrote:
>
> > Erik Jones <[EMAIL PROTECTED]> writes:
> >> On Aug 23, 2007, at 11:56 AM, Gustavo Tonini wrote:
> >>> I want to create a function that receive a list argument and filter
> >>> data with IN operator. Example:
> >
> >> CREATE OR REPLACE FUNCTION public.ffoo(list sometype[]) RETURNS VOID
> >
> > this is right ...
> >
> >>   execute 'select * from foo where foo_column::text in (' ||
> >> array_to_string(list, ',') || ');';
> >
> > this is pretty horrid.  Use = ANY(array) instead of trying to
> > construct
> > an IN on the fly.
> >
> >   select * from foo where foo_column = any(list)
>
> Yes, I always forget about using ANY.  Thx.
>
> Erik Jones
>
> Software Developer | Emma(r)
> [EMAIL PROTECTED]
> 800.595.4401 or 615.292.5888
> 615.292.0777 (fax)
>
> Emma helps organizations everywhere communicate & market in style.
> Visit us online at http://www.myemma.com
>
>
>

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] FATAL: could not reattach to shared memory (Win32)

2007-08-23 Thread Terry Yapt

Alvaro Herrera escribió:

Terry Yapt wrote:

  

This is the main error:
* FATAL:  could not reattach to shared memory (key=5432001, addr=01D8): 
Invalid argument


It is always followed by this another system-app error:
* LOG:  unrecognized win32 error code: 487



This problem has been reported before, for example in

http://bbs.chinaunix.net/thread-973003-1-1.html
(not that I can read it very well)

and

http://lists.pgfoundry.org/pipermail/brasil-usuarios/20061127/003150.html

  

Yes, those are the same as the one here:
http://archives.postgresql.org/pgsql-bugs/2007-01/msg00032.php


No resolution seems to have been found.
  

Then, I am very worried now.   :-|

Thanks Alvaro.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] FATAL: could not reattach to shared memory (Win32)

2007-08-23 Thread Magnus Hagander
Alvaro Herrera wrote:
> Terry Yapt wrote:
> 
>> This is the main error:
>> * FATAL:  could not reattach to shared memory (key=5432001, addr=01D8): 
>> Invalid argument
>>
>> It is always followed by this another system-app error:
>> * LOG:  unrecognized win32 error code: 487
> 
> FWIW,
> http://help.netop.com/support/errorcodes/win32_error_codes.htm
> 
> says
> 487   Attempt to access invalid address.  ERROR_INVALID_ADDRESS
> 
> This problem has been reported before, for example in
> 
> http://bbs.chinaunix.net/thread-973003-1-1.html
> (not that I can read it very well)
> 
> and
> 
> http://lists.pgfoundry.org/pipermail/brasil-usuarios/20061127/003150.html
> 
> No resolution seems to have been found.

8.3 will have a new way to deal with shared mem on win32. It's the same
underlying tech, but we're no longer trying to squeeze it into an
emulation of sysv. With a bit of luck, that'll help :-)

//Magnus


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] FATAL: could not reattach to shared memory (Win32)

2007-08-23 Thread Alvaro Herrera
Magnus Hagander wrote:
> Alvaro Herrera wrote:

> > No resolution seems to have been found.
> 
> 8.3 will have a new way to deal with shared mem on win32. It's the same
> underlying tech, but we're no longer trying to squeeze it into an
> emulation of sysv. With a bit of luck, that'll help :-)

So you're saying we won't fix this bug in 8.2?  That seems unfortunate,
given that 8.2 is still supposed to be supported on Windows.

-- 
Alvaro Herrera                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Adapter update.

2007-08-23 Thread Richard Huxton

Joshua D. Drake wrote:


I have added transactions to my code and it took about two and a half hours to
process around 48,000 records. Again, all this time is taken by the update method
on the adapter.

I don't know enough Perl to set up the database link to SQL Server 2005, and I
also don't have permission to write the data to files. Are there any other
options, like a different driver I can use, or stored procedures? I
have to compare each column in each row before doing the update.


This is probably where your time is spent, not the actual commit of the
data. 48k records is nothing.


Ditto what Joshua says. Loading that many records should take minutes, 
not hours.


Try this last bit of my first reply.


Load the data into an import table (TEMPORARY table probably) and then just
use three queries to handle deletion, update and insertion. 
Comparing one row at a time is adding a lot of overhead.
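A runnable sketch of the three-query idea above, using SQLite through Python's standard library so it can be tried without a server; the table and column names are invented, and in PostgreSQL the UPDATE would more idiomatically use an UPDATE ... FROM join:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# "target" stands in for the real table, "staging" for the imported data
cur.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, val TEXT)")
cur.executemany("INSERT INTO target VALUES (?, ?)", [(1, "a"), (2, "b"), (3, "c")])
cur.execute("CREATE TEMP TABLE staging (id INTEGER PRIMARY KEY, val TEXT)")
cur.executemany("INSERT INTO staging VALUES (?, ?)", [(2, "B"), (3, "c"), (4, "d")])

# 1. deletion: rows that vanished from the import
cur.execute("DELETE FROM target WHERE id NOT IN (SELECT id FROM staging)")

# 2. update: rows whose payload changed
cur.execute("""
    UPDATE target
       SET val = (SELECT val FROM staging WHERE staging.id = target.id)
     WHERE EXISTS (SELECT 1 FROM staging
                    WHERE staging.id = target.id AND staging.val <> target.val)
""")

# 3. insertion: rows that are new in the import
cur.execute("INSERT INTO target SELECT id, val FROM staging "
            "WHERE id NOT IN (SELECT id FROM target)")
con.commit()

rows = cur.execute("SELECT id, val FROM target ORDER BY id").fetchall()
print(rows)  # [(2, 'B'), (3, 'c'), (4, 'd')]
```

Three set-based statements replace 48,000 row-at-a-time round trips, which is where the reported hours were going.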


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] FATAL: could not reattach to shared memory (Win32)

2007-08-23 Thread Shelby Cain
>- Original Message 
>From: Magnus Hagander <[EMAIL PROTECTED]>
>To: Alvaro Herrera <[EMAIL PROTECTED]>
>Cc: Terry Yapt <[EMAIL PROTECTED]>; pgsql-general@postgresql.org
>Sent: Thursday, August 23, 2007 3:43:32 PM
>Subject: Re: [GENERAL] FATAL: could not reattach to shared memory (Win32)
>
>
>8.3 will have a new way to deal with shared mem on win32. It's the same
>underlying tech, but we're no longer trying to squeeze it into an
>emulation of sysv. With a bit of luck, that'll help :-)
>
>//Magnus
>

Wild guess on my part... could that error be the result of an attempt to map 
shared memory into a process at a fixed location that just happens to already 
be occupied by a dll that Windows had decided to relocate?

Regards,

Shelby Cain



   


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Local authentication/security

2007-08-23 Thread Peter Eisentraut
Lange Marcus wrote:
> I would like to be able to restrict the access to a database so that
> only a specific program running on the same machine can access it,

In postgresql.conf, set

unix_socket_permissions = 770
unix_socket_group = postgres

and make your program setgid postgres.  Or some variant of this 
involving those parameters.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Automating logins for mundane chores

2007-08-23 Thread Decibel!

On Aug 18, 2007, at 5:20 AM, Phoenix Kiula wrote:

I am writing some simple batch scripts to login to the DB and do a
pg_dump. Also, when I login to do my own SQL tinkering, I'd like not
to be asked for a password every time (which, for silly corporate
reasons, is quite a convoluted one).

So I read up on .pgpass.


FWIW, *IF* you can trust identd in your environment, I find it to be  
easier to deal with than .pgpass or the like.
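For completeness, the .pgpass file the original poster read up on takes one colon-separated line per server (the values below are placeholders); note that libpq ignores the file unless its permissions are restricted to the owner (chmod 0600):

```
# hostname:port:database:username:password  ('*' matches anything)
localhost:5432:*:appuser:s3cret
```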

--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Searching for Duplicates and Hosed the System

2007-08-23 Thread Decibel!

On Aug 21, 2007, at 12:04 AM, Tom Lane wrote:

If you need to deal with very large result sets, the standard advice
is to use a cursor so you can pull a few hundred or thousand rows
at a time via FETCH.


In case it's not obvious... in this case you might want to dump the  
output of that query into another table; perhaps a temp table...


CREATE TEMP TABLE dupe_check AS SELECT ...
--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Geographic High-Availability/Replication

2007-08-23 Thread Decibel!

On Aug 22, 2007, at 3:37 PM, Joshua D. Drake wrote:

You can not do multi master cross continent reliably.


I'm pretty sure that credit card processors and some other companies  
do it... it just costs a LOT to actually do it well.

--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Seeking datacenter PITR backup procedures [RESENDING]

2007-08-23 Thread Decibel!

On Aug 19, 2007, at 7:23 AM, Bill Moran wrote:

Assumptions:
a. After pg_stop_backup(), Pg immediately recycles log files and hence
WAL logs can be copied to backup. This is a clean start.


I don't believe so.  AFAIK, all pg_stop_backup() does is remove the
marker that pg_start_backup() put in place to tell the recovery process
when the filesystem backup started.


I'm pretty certain that's not the case. For a PITR to ensure that  
data is back to a consistent state after a recovery, it has to replay  
all the transactions that took place between pg_start_backup and  
pg_stop_backup; so it needs to know when pg_stop_backup() was  
actually run.



By not backing up pg_xlog, you are
going to be behind by however many transactions are in the most recent
transaction log that has not yet been archived.  Depending on how often
your databases are updated, this is likely acceptable.  If you need
anything more timely than that, you'll probably want to implement
Slony or some other replication system.


Just keep in mind that Slony is *not* a backup solution (though you  
could possibly argue that its log shipping is).

--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Seeking datacenter PITR backup suggestions

2007-08-23 Thread Decibel!

On Aug 17, 2007, at 5:48 PM, Joey K. wrote:
We have several web applications with Pg 8.2.x running on isolated  
servers (~25). The database size on each machine (du -h pgdata) is  
~2 GB. We have been using nightly filesystem backup (stop pg, tar  
backup to ftp, start pg) and it worked well.


We would like to move to PITR backups since the database size will  
increase moving forward and our current backup method might  
increase server downtimes.


We have a central ftp backup server (yes, ftp :-) which we would  
like to use for weekly full and daily incremental PITR backups.


After reading the docs, PITR is still fuzzy. Our ideas for backup  
are (do not worry about the syntax),


** START **

tmpwal = "/localhost/tmp"   # tmp space on server 1 for storing wal  
files before ftp

Configure $pgdata/postgresql.conf archive_command = "cp %p $tmpwal/%f"


Why not just FTP WAL files directly?


Day 1:
% psql pg_start_backup(); tar pgdata.tar --exclude pg_xlog/ pgdata
% psql pg_stop_backup()
% ftp put pgdata.tar ftpserver:/server1/day1/pgdata
% ftp put $tmpwal/* ftpserver:/server1/day1/wal
% rm -f $tmpwal/* pgdata.tar


The last 2 are a race condition... you could easily lose a WAL file  
that way.
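One way to avoid that race, sketched below with throwaway directories standing in for $tmpwal and the backup area (all paths hypothetical): copy each segment under a temporary name, rename it into place only when the copy succeeded, and delete the source only after the rename. Run per-file from archive_command, a non-zero exit status makes Postgres retry the same segment later instead of losing it.

```shell
#!/bin/sh
set -e
SRC=$(mktemp -d)   # stands in for the WAL staging dir ($tmpwal)
DST=$(mktemp -d)   # stands in for the backup destination

printf 'fake wal data' > "$SRC/000000010000000000000001"

archive_one() {
    f=$1
    # copy to a temp name, rename atomically, remove the source only on success
    cp "$SRC/$f" "$DST/$f.tmp" && mv "$DST/$f.tmp" "$DST/$f" && rm "$SRC/$f"
}

archive_one 000000010000000000000001
ls "$DST"
```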


Keep in mind that the pgdata.tar is 100% useless unless you also  
have the WAL files that were created during the backup. I generally  
recommend to folks that they keep two base copies around for that  
reason.



Day 2:
% ftp put $tmpwal/* ftpserver:/server1/day2/wal
% rm -f $tmpwal/*

Day 3:
...
...

Day 7:
% rm -f $tmpwal/*
Start over

Recovery on server1 (skeleton commands),
% rm -f $tmpwal/*
% mv pgdata pgdata.hosed
% ftp get ftpbackup:/server1/day1/pgdata.tar  .
% tar -xvf pgdata.tar
% ftp get ftpbackup:/server1/day1/wal/*  $tmpwal
% ftp get ftpbackup:/server1/day2/wal/*  $tmpwal
.
.
% cp -r pgdata.hosed/pg_xlog pgdata/
% echo "cp $tmpwal/%f %p" > pgdata/recovery.conf
% start pg (recovery begins)

** END **

Assumptions:
a. After pg_stop_backup(), Pg immediately recycles log files and  
hence wal logs can be copied to backup. This is a clean start.

b. New wal files since (a) are incremental backups

We are not sure whether WAL log filenames are unique, or whether they  
might overwrite older WAL files during recovery.


I'm seeking suggestions from others with experience performing  
PostgreSQL PITR backups from multiple servers to a central backup  
server.


In general, your handling of WAL files seems fragile and error-prone.  
I think it would make far more sense to just FTP them directly, and  
not try and get fancy with different directories for different days.  
*when* a WAL file was generated is meaningless until you compare it  
to a base backup to see if that WAL file is required for the base  
backup, useful (but not required) to the base backup, or useless for  
the base backup.

--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Enterprise Wide Deployment

2007-08-23 Thread Decibel!

On Aug 17, 2007, at 5:37 PM, Andrej Ricnik-Bay wrote:

On 8/14/07, john_sm <[EMAIL PROTECTED]> wrote:

Hey guys, for an enterprise wide deployment, what will you
suggest and why among - Red Hat Linux, Suse Linux and
Ubuntu Linux, also, do you think, we can negotiate the
support pricing down?

For what it's worth:  my personal experiences with RH support
were shocking, to say the least, and I can't fathom why
anyone would want to pay for it.

If you have in-house linux expertise, choose whatever they're
familiar with.  If you don't - find a local company that can give
you support and use what they're familiar with.  Just my 2 cents.


While you're looking at support; I strongly recommend looking at  
getting a support contract for PostgreSQL as well if you're going to  
be banking your business on it. While it's pretty rare to run into  
problems in production (depending on the knowledge of your staff and  
the quality of your hardware), it can happen.


(Disclosure: I work for one company that provides PostgreSQL support)
--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Geographic High-Availability/Replication

2007-08-23 Thread Bill Moran
Decibel! <[EMAIL PROTECTED]> wrote:
>
> On Aug 22, 2007, at 3:37 PM, Joshua D. Drake wrote:
> > You can not do multi master cross continent reliably.
> 
> I'm pretty sure that credit card processors and some other companies  
> do it... it just costs a LOT to actually do it well.

Isn't this sort of requirement the entire reason for 2-phase commit?

-- 
Bill Moran
http://www.potentialtech.com

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Apache + PHP + Postgres Interaction

2007-08-23 Thread Hannes Dorbath
Bill Moran wrote:
> I guess I just feel that "broken" is a bit of a harsh term.  If
> your expectations are for full-blown connection management from
> pconnect(), then you will be disappointed.  If you take it for
> what it is: persistent connections, then those limitations would
> be expected.

It's broken because persistent connections get randomly garbage
collected where they should not. So broken in the sense of bugged.
Expect connections to die for no reason, especially under load.


-- 
Best regards,
Hannes Dorbath

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Seeking datacenter PITR backup procedures [RESENDING]

2007-08-23 Thread Bill Moran
Decibel! <[EMAIL PROTECTED]> wrote:
>
> On Aug 19, 2007, at 7:23 AM, Bill Moran wrote:
> >> Assumptions:
> >> a. After pg_stop_backup(), Pg immediately recycles log files and  
> >> hence wal
> >> logs can be copied to backup. This is a clean start.
> >
> > I don't believe so.  AFAIK, all pg_stop_backup() does is remove the
> > marker that pg_start_backup() put in place to tell the recovery
> > process when the filesystem backup started.
> 
> I'm pretty certain that's not the case. For a PITR to ensure that  
> data is back to a consistent state after a recovery, it has to replay  
> all the transactions that took place between pg_start_backup and  
> pg_stop_backup; so it needs to know when pg_stop_backup() was  
> actually run.

Sounds likely ... but I don't believe that forces any specific log
cycling activity, like the OP suggested.

Be nice if someone who knew for sure would chime in ;)

> > By not backing up pg_xlog, you are
> > going to be behind by however many transactions are in the most recent
> > transaction log that has not yet been archived.  Depending on how  
> > often
> > your databases are updated, this is likely acceptable.  If you need
> > anything more timely than that, you'll probably want to implement
> > Slony or some other replication system.
> 
> Just keep in mind that Slony is *not* a backup solution (though you  
> could possibly argue that its log shipping is).

True.  This rides the fine line of the difference between an HA setup
and backup.  Specifically: HA won't allow you to recover from user
error.

-- 
Bill Moran
http://www.potentialtech.com

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] %TYPE

2007-08-23 Thread Michael Glaesemann


On Aug 23, 2007, at 7:57 , Richard Huxton wrote:


Ged wrote:

Ty for those comments.
Hmm, I did try it out before posting of course, and I've just tried it
again to make sure I hadn't boobed with a typo. It seems my ISP is
again to make sure I hadn't boobed with a typo. It seems my ISP is
running 8.0.8 and it's definitely not working on that. It *is* in the
8.0.13 documentation also though... So now I'm off to beg them to
upgrade.


Hmm - it should work in any 8.0.x; the development team doesn't add new  
features in point releases. I'm not sure this feature wasn't  
there in 7.4 too.


Might be a bug affecting you though - could be worth checking the  
release-notes in the back of the manual.


I don't seem to have received the message Richard's responding to.  
The archives seem to be missing a couple as well.


http://archives.postgresql.org/pgsql-general/2007-08/threads.php#01346

Anyone else besides Richard catch these messages? Any idea where they  
may have ended up? If someone wouldn't mind sending them on to me, I'd  
appreciate it.


Thanks!

Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] How to extract a substring using Regex

2007-08-23 Thread Postgres User
Hi,

I'm new to Regex in Postgres.  Can someone give me a quick pointer on
how I'd SELECT the substring between '' and '' in
a field?

Sample field data:
address city here Rogers, Jim zip code place

and I'd like the SELECT to return only:
Rogers, Jim

Thanks!

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] How to extract a substring using Regex

2007-08-23 Thread Michael Glaesemann


On Aug 23, 2007, at 19:33 , Postgres User wrote:


I'm new to Regex in Postgres.  Can someone give me a quick pointer on
how I'd SELECT the substring between   ''and  ''  in
a field?


Check out regexp_replace:

http://www.postgresql.org/docs/8.2/interactive/functions- 
matching.html#FUNCTIONS-POSIX-REGEXP


One of the forms of substring might work for you, too.
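To make that concrete with the sample row from the original post (using Python's re module here purely for illustration; Postgres' substring(field from 'pattern') takes a POSIX regex, so the same idea carries over). The post's delimiter tags were stripped by the archive, so this sketch keys off the literal words around the name:

```python
import re

sample = "address city here Rogers, Jim zip code place"
# lazy capture between the assumed surrounding words
m = re.search(r"here (.*?) zip", sample)
print(m.group(1))  # Rogers, Jim
```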

Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] How to extract a substring using Regex

2007-08-23 Thread Postgres User
Yes, I read the manual.  I think I had a problem because of the
special chars (< / >) that I'm trying to search for...  Still looking
for the right syntax.

On 8/23/07, Michael Glaesemann <[EMAIL PROTECTED]> wrote:
>
> On Aug 23, 2007, at 19:33 , Postgres User wrote:
>
> > I'm new to Regex in Postgres.  Can someone give me a quick pointer on
> > how I'd SELECT the substring between   ''and  ''  in
> > a field?
>
> Check out regexp_replace:
>
> http://www.postgresql.org/docs/8.2/interactive/functions-
> matching.html#FUNCTIONS-POSIX-REGEXP
>
> One of the forms of substring might work for you, too.
>
> Michael Glaesemann
> grzm seespotcode net
>
>
>

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] How to extract a substring using Regex

2007-08-23 Thread Michael Glaesemann
[Please don't top post as it makes the discussion more difficult to  
follow, and please reply to the list so that others may benefit from  
and participate in the discussion.]


On Aug 23, 2007, at 19:49 , Postgres User wrote:


Yes, I read the manual.  I think I had a problem because of the
special chars (< / >) that I'm trying to search for...  Still looking
for the right syntax.


Why don't you show us what you've tried and the errors you're  
getting? That way we can help you figure out what you're doing wrong  
rather than just give you an answer.


Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Geographic High-Availability/Replication

2007-08-23 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 08/23/07 17:22, Bill Moran wrote:
> Decibel! <[EMAIL PROTECTED]> wrote:
>> On Aug 22, 2007, at 3:37 PM, Joshua D. Drake wrote:
>>> You can not do multi master cross continent reliably.
>> I'm pretty sure that credit card processors and some other companies  
>> do it... it just costs a LOT to actually do it well.
> 
> Isn't this sort of requirement the entire reason for 2-phase commit?

Entire reason?  Not that I've heard.

- --
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFGzjKiS9HxQb37XmcRArTlAJ43MAEDdbbi71WDIApW5j0PveeJIwCePJPx
czuG/oescDoF8SAAehw4xdA=
=v+RP
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings

