Re: [GENERAL] [SQL] Flight numbers data

2006-03-29 Thread Achilleus Mantzios
O Josh Rovero έγραψε στις Mar 29, 2006 :

> We built an operational flight tracking and weather system that uses 
> Postgresql,
> http://www.sonalysts.com/wXstation
> 
> One data feed (FAA ASDI) uses both aircraft registration
> numbers (tail numbers)  and airline-assigned flight numbers.  Typically
> if you have the latter, you won't get the former.   This feed only has
> information for flights in the U.S. or to/from the U.S.  The data feed also
> includes flight plans, so it wouldn't be too hard
> to build up a flight number schedule over a short period of time.
> 
> Outside the U.S. the problem is more difficult. 

Thanx, unfortunately our flights are 90% in the Middle East/Asia.
Are you aware of any such feed in production?

> 
> > Has anyone from the postgresql camp ever solved such a problem?
> >   
> 
> 

-- 
-Achilleus


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] [SQL] Flight numbers data

2006-03-29 Thread Achilleus Mantzios
O Scott Marlowe έγραψε στις Mar 29, 2006 :

> On Wed, 2006-03-29 at 02:17, Achilleus Mantzios wrote:
> > Hi, I am in the process of writing an application about
> > tickets, flights, etc., and I am thinking of getting the primitive
> > data ready at the beginning and doing it the right way
> > (e.g. the user will just select a flight number and doesn't
> > have to enter the airports, airport coordinates, countries,
> > airline name, airline code, departure/arrival times, etc...)
> > 
> > I found some sources on the web about airlines (codes, names, countries, 
> > etc...), and also about airports, but nothing about flight numbers!
> 
> That's cause companies that keep track of those things charge a LOT of
> money for their datasets. 
> 
> > This application will be for enterprise (internal) company use,
> > covering the flights of employees around the world.
> > 
> > I was just wondering how internet ticket agents manage to do it.
> 
> They subscribe to some service that has this information in the back
> end.  The data in that back end comes from the one or two sources of
> that data who charge yearly subscriptions in the hundreds of thousands
> of dollars.

Just to examine this a little closer (basically, to show my boss!), can you 
give me some pointers to providers (in the broad sense) that sell such 
yearly subscriptions to internet travel agents?

> 
> > Has anyone from the postgresql camp ever solved such a problem?
> 
> Where I work, we're building a middle level system (look up the website
> that goes with my domain).  And if we weren't in the airline reservation
> industry, we couldn't afford the data sets.
> 
> > It is just that I wanna do it right, maybe play a little bit with AJAX 
> > too, who knows :)
> 
> But "doing it right" goes against almost every tenet of the airline
> reservation industry :)  haha.  only serious.
> 
> Seriously though, you might be able to get your travel agent or whoever
> you do reservations through to provide you with this information in some
> kind of reliable format for the tickets you've bought.  If you can get
> some kind of automated feed from them, that would be the best way, if
> not, maybe they can email you the info each time, and you can cut and
> paste it in.
> 
> There's much in the reservation industry that is basically a computer
> implementation of a 3x5 note card system.  And none of those systems
> were originally built to talk to each other, so it's often impossible
> for a single user to get any kind of automatically created data from
> such an industry.

That was pretty much what I suspected; I have seen some 
3270-type terminal emulators at some travel agents that 
reminded me of VSE/MVS/AS400/CICS and other beauties like that!!

Well, Scott, your response was really helpful.
What I will do is just build the tables with airline info and
airport info (IATA codes, etc.) from sources on the web.
Are there really any good sources out there for such info?
For airports I came across 
http://www.partow.net/miscellaneous/airportdatabase/
It seems very complete, but it misses new airports
(it doesn't have the new Greek one (IATA code ATH), which is some 5 
years old).
The other sources I found are just HTML pages, which are not so handy.

Then I would let the in-company tickets person build the flight numbers table 
incrementally as she works.

What do you think?

> 

-- 
-Achilleus




Re: [GENERAL] EXPLAIN SELECT .. does not return

2006-03-29 Thread Tom Lane
David Link <[EMAIL PROTECTED]> writes:
> The following large EXPLAIN SELECT Statement fails to return, but 
> continues to take up processing time until it is killed.
> [ 52-way join... ]

Am I right in guessing that all the sales_xxx tables are the same size
and have similar statistics?  I think the problem is that the planner is
faced with a large set of essentially equivalent plans and isn't pruning
the list aggressively enough.  That's something we fixed in 8.0.

> Postgresql 7.4.8

You really oughta try something newer.  On my machine, 7.4.12 plans a
52-way join in about a minute, and 8.0 and 8.1 in under a second.

I wonder also if there's not a better way to design the query...
maybe a UNION ALL would work better than nested joins.
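For illustration, one shape that rewrite could take (a sketch only -- it
assumes each sale_YYYYWW table has tid, units, and channel columns, as the
original query suggests): feed all the weekly tables through one UNION ALL
and pivot the weeks back into columns with CASE, instead of 52 left joins
(abbreviated to three weeks here):

```sql
-- Sketch: replace the 52-way left join with a single UNION ALL feed
-- plus conditional aggregation.  Column names are assumed from the
-- original query; extend the pattern through sale_200452.
SELECT t.tid,
       t.title,
       to_char(t.retail_reldate, 'mm-dd-yy') AS retail_reldate,
       coalesce(sum(CASE WHEN s.week = '200401' THEN s.units END), 0) AS s200401u,
       coalesce(sum(CASE WHEN s.week = '200402' THEN s.units END), 0) AS s200402u,
       coalesce(sum(CASE WHEN s.week = '200403' THEN s.units END), 0) AS s200403u
FROM title t
LEFT JOIN (
    SELECT '200401' AS week, tid, units FROM sale_200401 WHERE channel = 100
    UNION ALL
    SELECT '200402', tid, units FROM sale_200402 WHERE channel = 100
    UNION ALL
    SELECT '200403', tid, units FROM sale_200403 WHERE channel = 100
    -- ... one arm per remaining week ...
) s ON s.tid = t.tid
GROUP BY t.tid, t.title, t.retail_reldate;
```

The planner then sees a two-way join plus an append, rather than a 52-way
join search space.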

regards, tom lane



Re: [GENERAL] Implementation Suggestions

2006-03-29 Thread Reid Thompson

Ian Harding wrote:

I'm wondering if I could get some suggestions as to how implement
this quickly and simply? I was thinking a web interface using PHP
would be the fastest way of going about it.



If you used Ruby on Rails, you'd be finished by now.  It slices, it
dices, it makes julienne fries.

Seriously, it's not too bad if you don't mind its plentiful
shortcomings.  I was getting carpal tunnel syndrome from typing
 pages so I switched to RoR for a hobby app.  It
works fine, but you have to do it "The Rails Way" and expect no help
from the "Community" because they are a fanboi cheerleader squad, not
interested in silly stuff like referential integrity, functions,
triggers, etc.  All that nonsense belongs in the application!

Check this out: there is no stale connection detection or handling in
Rails.  I'm not kidding.  If your connection drops out, restart your
web server.  Sorry.  Blah.

Anyway, besides its warts, it is dead easy to use, and does make
putting together web applications in a "green field" scenario quite
painless.  Just don't try to do anything outside the box like trying
to access an existing database that uses RDBMS features heavily and
uses normal object naming.

- Ian

  

re:  ruby && postgres -- i'll throw nitro and og into the hat.
http://nitrohq.com
nitro is the web presentation framework, Og is the ORM...
--
require 'og'

class Comment
  property :title, String, :sql => 'VARCHAR(60) NOT NULL'
  property :body, String
  property :author, String
  property :create_time, Time

  def initialize( title = '', body = '', author = '', time = Time.now )
    @title = title
    @body = body
    @author = author
    @create_time = time
  end
end

og_psql = {
  # don't use this on a DB with tables that you DO NOT want to lose --
  # or set to false
  :destroy_tables => true,

  :store => :psql,
  :user => 'rthompso',
  :password => 'rthompso',
  :name => 'test'
}

#Og.table_prefix = '' # remove og generated table prefix
db = Og.setup(og_psql)
c = Comment.new('Title', 'Body', 'Author')
#  or
#  c = Comment.new
#  c.title = 'Hello'
#  c.body = 'World'
#  c.create_time = Time.now
#  c.author = 'tml'
c.save

--
$ ruby -rubygems pgtest.rb
I, [2006-03-29T20:16:50.278205 #16029]  INFO -- : Og uses the Psql store.
D, [2006-03-29T20:16:50.637238 #16029] DEBUG -- : Dropped database table 
ogcomment

I, [2006-03-29T20:16:50.943499 #16029]  INFO -- : Created table 'ogcomment'.
D, [2006-03-29T20:16:50.963087 #16029] DEBUG -- : PostgreSQL processing 
foreign key constraints
D, [2006-03-29T20:16:50.963532 #16029] DEBUG -- : PostgreSQL finished 
setting constraints. No action was taken in 0.00 seconds.

[EMAIL PROTECTED]:/home/rthompso>
$ psql test -c "select * from ogcomment;"
 title | body | author |     create_time     | oid
-------+------+--------+---------------------+-----
 Title | Body | Author | 2006-03-29 20:16:50 |   1
(1 row)

$ psql test -c "\d ogcomment;"
                         Table "public.ogcomment"
   Column    |            Type             |                Modifiers
-------------+-----------------------------+------------------------------------------
 title       | character varying(60)       | not null
 body        | text                        |
 author      | text                        |
 create_time | timestamp without time zone |
 oid         | integer                     | not null default nextval('ogcomment_oid_seq'::regclass)
Indexes:
    "ogcomment_pkey" PRIMARY KEY, btree (oid)










Re: [GENERAL] checking data type

2006-03-29 Thread Chris

raj wrote:

Is there a function that could check a variable's data type?  Like,
I want to check all the columns of a table, and if I find a column with
an integer data type I set it to a default of 1, and I'll set a constant
for a column of type text.


You could start psql with -E:

psql -d dbname -E

run \d tablename

and use the queries that postgres runs to work it out.

There could be a simpler way though in the system catalogues (anyone?).
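One possibility, for what it's worth: query information_schema (available
since 7.4) or pg_attribute directly -- a sketch, with mytable as a
placeholder for the real table name:

```sql
-- List every column of a table with its data type.
-- 'mytable' is a placeholder; substitute your table name.
SELECT column_name, data_type
FROM information_schema.columns
WHERE table_name = 'mytable'
ORDER BY ordinal_position;

-- Roughly the same thing from the native catalogs:
SELECT a.attname, format_type(a.atttypid, a.atttypmod) AS data_type
FROM pg_attribute a
WHERE a.attrelid = 'mytable'::regclass
  AND a.attnum > 0
  AND NOT a.attisdropped;
```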


I should ask why you need this info ;)

--
Postgresql & php tutorials
http://www.designmagick.com/



Re: [GENERAL] weird GROUP BY error

2006-03-29 Thread Chris

Sergey E. Koposov wrote:

Hi All!

I experienced the strange error when running a simple query.

1) I have the following tables :
wsdb=# \d slit_data
     Table "public.slit_data"
 Column |       Type       | Modifiers
--------+------------------+-----------
 chip   | smallint         |
 tile   | smallint         |
 slit   | smallint         |
 id     | integer          |
 cx     | double precision |



wsdb=# \d gems_spec 
    Table "public.gems_spec"
  Column  |   Type   | Modifiers
----------+----------+-----------
 tile     | smallint |
 chip     | smallint |
 id       | integer  |
 priority | smallint |



2) When I run the query like this
wsdb# SELECT id,slit,tile,chip FROM slit_data LEFT JOIN gems_spec
USING(id,tile,chip) WHERE cx>0.5 ORDER BY priority,id;

it runs smoothly,  but when I run the same query with aggregate, I get the
error:

wsdb=# SELECT count(*) FROM slit_data LEFT JOIN gems_spec
USING(id,tile,chip) WHERE cx>0.5 ORDER BY priority,id;

ERROR:  column "gems_spec.priority" must appear in the GROUP BY clause or be
used in an aggregate function

Is that normal? I really do not see the reason for the error. I even don't
use the "GROUP BY" clause. Am I wrong ? 


Remove the order by and you should be fine:

SELECT count(*) FROM slit_data LEFT JOIN gems_spec
USING(id,tile,chip) WHERE cx>0.5;


When you use any aggregate function - count, avg, sum, etc. - if you are 
ordering your results, you need to group your results:


SELECT count(*) FROM slit_data LEFT JOIN gems_spec USING(id,tile,chip) 
WHERE cx>0.5 GROUP BY priority, id, ... ORDER BY priority,id;


You have to list all columns because that's what you're counting - *

Alternatively:

select count(chip) FROM slit_data LEFT JOIN gems_spec 
USING(id,tile,chip) WHERE cx>0.5 GROUP BY chip, priority, id ORDER BY 
priority, id;


and you only need to group by chip (because that's all you're counting).

You have to group by priority and id because they are in your ORDER BY
(I think postgres forces you to include them in the group by 
because they are in the order by - try just grouping by chip and see 
what happens anyway).


--
Postgresql & php tutorials
http://www.designmagick.com/



Re: [GENERAL] Is this possible.

2006-03-29 Thread Klint Gore
On Thu, 30 Mar 2006 10:45:20 +1100, "Harvey, Allan AC" <[EMAIL PROTECTED]> 
wrote:
> Hi all,
> 
> Can somebody please help me realise a function, the intent as described by...
> 
> -- Function to create the table for a new point
> 
> CREATE OR REPLACE FUNCTION make_table( varchar ) RETURNS VARCHAR AS '
> CREATE TABLE $1(
> parameter varchar(8) NOT NULL,
> value float NOT NULL,
> dt timestamp NOT NULL
> );

 execute ''create table ''||quote_ident($1)||'' (...)'';

> CREATE INDEX $1_dtindex ON $1( dt );

execute ''create index ''||quote_ident($1)||''_dtindex ...'';

> SELECT $1;

return $1;

> ' LANGUAGE SQL;

' language plpgsql;


The quote_ident function should also help guard against SQL injection attacks.
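Putting those pieces together, the whole function might look like this
(a sketch for 7.4-era plpgsql, where the body still has to be quoted with
single quotes -- dollar quoting only arrived in 8.0 -- and which assumes
plpgsql is installed in the database):

```sql
-- Sketch: create a per-point table plus its index dynamically.
-- Run "createlang plpgsql yourdb" first if plpgsql is not installed.
CREATE OR REPLACE FUNCTION make_table( varchar ) RETURNS varchar AS '
BEGIN
    EXECUTE ''CREATE TABLE '' || quote_ident($1) || '' (
        parameter varchar(8) NOT NULL,
        value float NOT NULL,
        dt timestamp NOT NULL
    )'';
    EXECUTE ''CREATE INDEX '' || quote_ident($1 || ''_dtindex'')
         || '' ON '' || quote_ident($1) || '' (dt)'';
    RETURN $1;
END;
' LANGUAGE plpgsql;
```

Then SELECT make_table('point42'); creates the table and its index in one
call.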

klint.


+---------------------------------------+-----------------+
: Klint Gore                            : "Non rhyming    :
: EMail : [EMAIL PROTECTED]             :  slang - the    :
: Snail : A.B.R.I.                      :  possibilities  :
: Mail  : University of New England     :  are useless"   :
:       : Armidale NSW 2351 Australia   :     L.J.J.      :
: Fax   : +61 2 6772 5376               :                 :
+---------------------------------------+-----------------+



[GENERAL] Is this possible.

2006-03-29 Thread Harvey, Allan AC
Hi all,

Can somebody please help me realise a function, the intent as described by...

-- Function to create the table for a new point

CREATE OR REPLACE FUNCTION make_table( varchar ) RETURNS VARCHAR AS '
CREATE TABLE $1(
parameter varchar(8) NOT NULL,
value float NOT NULL,
dt timestamp NOT NULL
);

CREATE INDEX $1_dtindex ON $1( dt );

SELECT $1;
' LANGUAGE SQL;

I'm using 7.4.5.

Thanks

Allan




The material contained in this email may be confidential, privileged or 
copyrighted. If you are not the intended recipient, use, disclosure or copying 
of this information is prohibited. If you have received this document in error, 
please advise the sender and delete the document. Neither OneSteel nor the 
sender accept responsibility for any viruses contained in this email or any 
attachments.



[GENERAL] weird GROUP BY error

2006-03-29 Thread Sergey E. Koposov
Hi All!

I experienced the strange error when running a simple query.

1) I have the following tables :
wsdb=# \d slit_data
     Table "public.slit_data"
 Column |       Type       | Modifiers
--------+------------------+-----------
 chip   | smallint         |
 tile   | smallint         |
 slit   | smallint         |
 id     | integer          |
 cx     | double precision |


wsdb=# \d gems_spec 
    Table "public.gems_spec"
  Column  |   Type   | Modifiers
----------+----------+-----------
 tile     | smallint |
 chip     | smallint |
 id       | integer  |
 priority | smallint |


2) When I run the query like this
wsdb# SELECT id,slit,tile,chip FROM slit_data LEFT JOIN gems_spec
USING(id,tile,chip) WHERE cx>0.5 ORDER BY priority,id;

it runs smoothly,  but when I run the same query with aggregate, I get the
error:

wsdb=# SELECT count(*) FROM slit_data LEFT JOIN gems_spec
USING(id,tile,chip) WHERE cx>0.5 ORDER BY priority,id;

ERROR:  column "gems_spec.priority" must appear in the GROUP BY clause or be
used in an aggregate function

Is that normal? I really do not see the reason for the error. I even don't
use the "GROUP BY" clause. Am I wrong ? 

(I'm using PG 8.1.3)

Thanks in advance for any advice.

Regards,
Sergey


*
Sergey E. Koposov
Max Planck Institute for Astronomy/Sternberg Astronomical Institute
Web: http://lnfm1.sai.msu.ru/~math 
E-mail: [EMAIL PROTECTED]

 







Re: [GENERAL] PostgreSQL client api

2006-03-29 Thread Scott Marlowe
On Wed, 2006-03-29 at 16:48, Antimon wrote:
> Yes i tried and realized apache child processes..
> Then i looked for another pooling solution, the project pgpool i found.
> No windows binaries, it might run on cygwin.
> After all, i think postgreSQL is not meant to run on windows production
> for ~2 more major releases or something. It performs great on linux
> thats for sure but i cannot do the platform decision everytime, so
> gonna stick with mySQL on windows and postgre on linux when i can.

There's a windows version of lighttpd available.  That might well do the
trick.

I'd still run ANY database on linux / unix before windows, just cause I
don't trust windows when things go wrong to work right.  But there's
certainly nothing stopping you from running a linux or BSD based
postgresql server behind a windows lighttpd.

If the db and web server HAVE to both be on the same server, and you
have to have windows, then you might want to look at firebird, which
apparently has quite good windows performance, and, unlike MySQL has
good SQL compliance and behaviour.



Re: [GENERAL] PostgreSQL client api

2006-03-29 Thread Antimon
Yes, I tried it and realized it was the Apache child processes.
Then I looked for another pooling solution and found the pgpool project.
There are no Windows binaries; it might run under Cygwin.
After all, I think PostgreSQL is not meant to run on Windows in production
for ~2 more major releases or something. It performs great on Linux,
that's for sure, but I cannot make the platform decision every time, so
I'm gonna stick with MySQL on Windows and Postgres on Linux when I can.




[GENERAL] EXPLAIN SELECT .. does not return

2006-03-29 Thread David Link

Hi,

The following large EXPLAIN SELECT Statement fails to return, but 
continues to take up processing time until it is killed.


I've straightened up the postgresql.conf, as per Tom Lane's suggestions 
since last time I wrote about this.

See: http://archives.postgresql.org/pgsql-general/2005-12/msg00259.php

Any and all help, greatly appreciated.  Thank you.
David Link, White Plains, NY


Environment:

Linux 2.6.5-7.191 SMP
Postgresql 7.4.8
Database size: 110 Gb on disk.
vacuum analyze done

Processes:

postgres 15687     1  0 16:12 pts/1    00:00:00 /usr/bin/postmaster -D /db/pgsql
postgres 15693 15687  0 16:12 pts/1    00:00:00 postgres: stats buffer process
postgres 15695 15693  0 16:12 pts/1    00:00:00 postgres: stats collector process
postgres 17485 15687 99 16:17 pts/1    00:18:17 postgres: dlink usbkup [local] EXPLAIN


Configuration:

tcpip_socket = true
max_connections = 200
shared_buffers = 8000
sort_mem = 262144 
vacuum_mem = 65536
max_fsm_pages = 20
fsync = false
wal_sync_method = fsync
checkpoint_segments = 30
effective_cache_size = 131072
random_page_cost = 4
geqo = true

geqo_threshold = 13
default_statistics_target = 100
from_collapse_limit = 10
join_collapse_limit = 10
log_timestamp = true

stats_start_collector = true
stats_command_string = true
lc_messages = 'en_US.UTF-8'
lc_monetary = 'en_US.UTF-8'
lc_numeric = 'en_US.UTF-8'
lc_time = 'en_US.UTF-8'



The SQL:

explain
select
   t.tid,
   t.title,
   to_char (t.retail_reldate, 'mm-dd-yy') as retail_reldate,
   coalesce (s200401.units, 0) as s200401u,
   coalesce (s200402.units, 0) as s200402u,
   coalesce (s200403.units, 0) as s200403u,
   coalesce (s200404.units, 0) as s200404u,
   coalesce (s200405.units, 0) as s200405u,
   coalesce (s200406.units, 0) as s200406u,
   coalesce (s200407.units, 0) as s200407u,
   coalesce (s200408.units, 0) as s200408u,
   coalesce (s200409.units, 0) as s200409u,
   coalesce (s200410.units, 0) as s200410u,
   coalesce (s200411.units, 0) as s200411u,
   coalesce (s200412.units, 0) as s200412u,
   coalesce (s200413.units, 0) as s200413u,
   coalesce (s200414.units, 0) as s200414u,
   coalesce (s200415.units, 0) as s200415u,
   coalesce (s200416.units, 0) as s200416u,
   coalesce (s200417.units, 0) as s200417u,
   coalesce (s200418.units, 0) as s200418u,
   coalesce (s200419.units, 0) as s200419u,
   coalesce (s200420.units, 0) as s200420u,
   coalesce (s200421.units, 0) as s200421u,
   coalesce (s200422.units, 0) as s200422u,
   coalesce (s200423.units, 0) as s200423u,
   coalesce (s200424.units, 0) as s200424u,
   coalesce (s200425.units, 0) as s200425u,
   coalesce (s200426.units, 0) as s200426u,
   coalesce (s200427.units, 0) as s200427u,
   coalesce (s200428.units, 0) as s200428u,
   coalesce (s200429.units, 0) as s200429u,
   coalesce (s200430.units, 0) as s200430u,
   coalesce (s200431.units, 0) as s200431u,
   coalesce (s200432.units, 0) as s200432u,
   coalesce (s200433.units, 0) as s200433u,
   coalesce (s200434.units, 0) as s200434u,
   coalesce (s200435.units, 0) as s200435u,
   coalesce (s200436.units, 0) as s200436u,
   coalesce (s200437.units, 0) as s200437u,
   coalesce (s200438.units, 0) as s200438u,
   coalesce (s200439.units, 0) as s200439u,
   coalesce (s200440.units, 0) as s200440u,
   coalesce (s200441.units, 0) as s200441u,
   coalesce (s200442.units, 0) as s200442u,
   coalesce (s200443.units, 0) as s200443u,
   coalesce (s200444.units, 0) as s200444u,
   coalesce (s200445.units, 0) as s200445u,
   coalesce (s200446.units, 0) as s200446u,
   coalesce (s200447.units, 0) as s200447u,
   coalesce (s200448.units, 0) as s200448u,
   coalesce (s200449.units, 0) as s200449u,
   coalesce (s200450.units, 0) as s200450u,
   coalesce (s200451.units, 0) as s200451u,
   coalesce (s200452.units, 0) as s200452u
from
   title t
   left outer join sale_200401 s200401 on t.tid=s200401.tid and 
s200401.channel=100
   left outer join sale_200402 s200402 on t.tid=s200402.tid and 
s200402.channel=100
   left outer join sale_200403 s200403 on t.tid=s200403.tid and 
s200403.channel=100
   left outer join sale_200404 s200404 on t.tid=s200404.tid and 
s200404.channel=100
   left outer join sale_200405 s200405 on t.tid=s200405.tid and 
s200405.channel=100
   left outer join sale_200406 s200406 on t.tid=s200406.tid and 
s200406.channel=100
   left outer join sale_200407 s200407 on t.tid=s200407.tid and 
s200407.channel=100
   left outer join sale_200408 s200408 on t.tid=s200408.tid and 
s200408.channel=100
   left outer join sale_200409 s200409 on t.tid=s200409.tid and 
s200409.channel=100
   left outer join sale_200410 s200410 on t.tid=s200410.tid and 
s200410.channel=100
   left outer join sale_200411 s200411 on t.tid=s200411.tid and 
s200411.channel=100
   left outer join sale_200412 s200412 on t.tid=s200412.tid and 
s200412.channel

Re: [GENERAL] PostgreSQL client api

2006-03-29 Thread Scott Marlowe
On Wed, 2006-03-29 at 15:58, Scott Marlowe wrote:
> On Wed, 2006-03-29 at 15:22, Antimon wrote:
> > Just tried, yes, pconnect boosts multi user performance.
> > But causes this:
> > http://img526.imageshack.us/img526/6302/pgsql7th.jpg
> > :)
> > 
> > Need to modify max persistent connection settings.
> 
> I don't thin that doesn't do what you think it does.  max persistent
> setting in php.ini sets the max number of persistent connections PER PHP
> / web server process.

That should be "I don't think that does what you think it does."  man,
my editor is so fired...



Re: [GENERAL] PostgreSQL client api

2006-03-29 Thread Scott Marlowe
On Wed, 2006-03-29 at 15:22, Antimon wrote:
> Just tried, yes, pconnect boosts multi user performance.
> But causes this:
> http://img526.imageshack.us/img526/6302/pgsql7th.jpg
> :)
> 
> Need to modify max persistent connection settings.

I don't thin that doesn't do what you think it does.  max persistent
setting in php.ini sets the max number of persistent connections PER PHP
/ web server process.

What web server is that with, btw?

With a web server like lighttpd, you have a "pool" of php child
processes pre-spawned and ready to go, and you limit that pool to
something reasonable.

With apache and other web servers, each web server child process gets
its own php process hanging off of it, and if you've got, say, a default
apache install, that allows up to 150 child processes.

If you're using apache, you're often better off limiting the number of
children to something smaller, or cranking up postgresql allowed
connections to keep up.  150 is usually a bit much.  20 to 50 is
normally plenty for most applications that can benefit from pconnects.

But you're probably better off with something like lighttpd.
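Concretely, that tuning comes down to a couple of settings -- the numbers
below are assumed example values, not recommendations:

```
# httpd.conf (Apache prefork): cap the worker pool so every child can
# hold a persistent connection without exhausting the database
MaxClients        40

# postgresql.conf: leave some headroom above the Apache pool for
# superuser and maintenance connections
max_connections = 50
```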



Re: [GENERAL] Implementation Suggestions

2006-03-29 Thread Rory Hart
Thanks for all the suggestions I now have a lot better idea of my 
options. It is nice to fire off a question at a mailing list and get so 
many positive helpful answers, gives me a warm and fuzzy glow.


Thank you all

--
Rory Hart

Lifestyle Management Consultant

Professional Lifestyle Management
http://www.lifestylemanage.com

Phone  03 9879 5643  PO Box 4179
Fax03 9879 6743  Ringwood Vic 3134
Mobile 0412 821030   Australia




Re: [GENERAL] PostgreSQL client api

2006-03-29 Thread Antimon
Just tried, yes, pconnect boosts multi user performance.
But causes this:
http://img526.imageshack.us/img526/6302/pgsql7th.jpg
:)

Need to modify max persistent connection settings.




Re: [GENERAL] More AIX 5.3 fun - out of memory ?

2006-03-29 Thread Gavin Hamill
On Wed, 29 Mar 2006 11:22:11 -0500
Seneca Cunningham <[EMAIL PROTECTED]> wrote:
> 
> Are "0x800" and "0x400" typos in this message, or did you really 
> request less than 256MB?  Those numbers are the number of bytes 
> requested for the heap and memory is set aside in 256MB segments.

No, they're not typos - I think I must have had my brain out - oops :)

> If you want a 64-bit postgres, you will need to recompile.  

Ahh you jewel, I passed in all the variables you specified and the recompile 
ran smoothly once I'd recompiled zlib and moved the .a library from /usr/lib/ 
to /usr/lib/ppc64 - the cube + earthdistance modules also compiled nicely.

pg_restore ran perfectly and I have much more faith in the running binary now 
everything is 64-bit native :)

> If you do switch to a 64-bit build, you will need to re-initdb.  I have 
> heard that the statistics collector of a 64-bit build may have some 
> problems, but it does pass "gmake check".

Yep, zapped + restored from a dumpfile.

Thank you so much for this info! I was really beginning to worry and wondered 
if I should just cut my losses and install Debian on the pSeries box.

Cheers,
Gavin.



Re: [GENERAL] Implementation Suggestions

2006-03-29 Thread Rory Hart

Bernhard Weisshuhn wrote:


Kenneth Downs wrote:

I have been meaning to investigate it because it is the only system 
I've heard of that makes the same claim that I do, which is to have 
eliminated entire categories of labor through automation.



Have you looked at http://catalyst.perl.org/ lately?
IMHO it's "Rails done right" and it's perl, so it just /has/ to be 
good ;-)


sorry, couldn't resist.
bkw




Just had a look and it seems rather interesting! I have had a soft spot for 
perl since it was my first "proper" programming language (I learnt 
pascal when I was 8 but never did anything more than four or five part 
pick-a-path adventures ^_^).


Thanks

--
Rory Hart

Lifestyle Management Consultant

Professional Lifestyle Management
http://www.lifestylemanage.com

Phone  03 9879 5643  PO Box 4179
Fax03 9879 6743  Ringwood Vic 3134
Mobile 0412 821030   Australia




Re: [GENERAL] postgresql.conf listen_addresses causing connection problems

2006-03-29 Thread David Bernal
> By "8.1" do you really mean "8.1.0"?  If so, you might try updating to
> the latest subrelease (currently 8.1.3).  This problem doesn't offhand
> seem to match any of the bug fixes I see in the CVS logs, but there have
> been a number of Windows-specific fixes and maybe one of them explains
> it.
Yeah, sorry it is 8.1.0. If it comes to that, I will try the CVS
version, but I'm kind of leaning towards this being a network problem,
so I want to investigate that first.



Re: [GENERAL] postgresql.conf listen_addresses causing connection problems

2006-03-29 Thread David Bernal
> Just a thought, did you restart the server after making the changes?
> From the  Postgres docs-
> "...This parameter can only be set at server start."

Sure did, each and every time.

On 3/29/06, Adrian Klaver <[EMAIL PROTECTED]> wrote:
> On Wednesday 29 March 2006 02:59 am, David Bernal wrote:
>
> >
> > Any ideas? I'm fairly baffled, but then I'm a newbie.
> >
> Just a thought, did you restart the server after making the changes?
> From the  Postgres docs-
> "...This parameter can only be set at server start."
>
> --
> Adrian Klaver
> [EMAIL PROTECTED]
>



Re: [GENERAL] postgresql.conf listen_addresses causing connection problems

2006-03-29 Thread David Bernal
> Try * (wildcard) and see what happens.  It should either work or not
> work, not work 'sometimes' so I think there must be something else
> involved.  If pg_hba.conf is set up right, and listen address is *
> then you have the network to look at.

I actually also did try '*', and it did sometimes work and
sometimes not. Most of the time it wouldn't work, but occasionally it
did.



Re: [GENERAL] [SQL] Flight numbers data

2006-03-29 Thread Josh Rovero
We built an operational flight tracking and weather system that uses 
Postgresql,

http://www.sonalysts.com/wXstation

One data feed (FAA ASDI) uses both aircraft registration
numbers (tail numbers)  and airline-assigned flight numbers.  Typically
if you have the latter, you won't get the former.   This feed only has
information for flights in the U.S. or to/from the U.S.  The data feed also
includes flight plans, so it wouldn't be too hard
to build up a flight number schedule over a short period of time.

Outside the U.S. the problem is more difficult. 


Has anyone from the postgresql camp ever solved such a problem?
  






Re: [GENERAL] ODed on overloads

2006-03-29 Thread karly
On Wed, Mar 29, 2006 at 12:19:03PM -0800, Joshua D. Drake wrote:
> 
> > MY application sends XML requests to a perl script, which has to
> > parse them and turn them into queries which get sent off, then the
> > results are put back into XML, and sent back to the client.
> 
> This doesn't answer your question... but why not just use plPerl?

Reasonable question that I asked and my co-developer asked.

Well, the Perl middleware consists of CGI scripts, which sometimes do things
other than DB queries, so they need to be outside the DB.  I guess
in DB client/server parlance the CGI scripts are the client.

As to why PL/pgSQL instead of PL/Perl, I inherited the DB work by
default, and I had some examples in PL/SQL, so I continued with
that before I discovered PL/Perl.  Since the two developers working
on this are comfortable with Perl, we'll be investigating porting
everything to PL/Perl, but for now we just need to get a functional
prototype ASAP.

-karl

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] PostgreSQL's XML support comparison against other

2006-03-29 Thread SCassidy
If you have just a column containing a chunk of xml, and want to transform
it into individual columns, such that each element/attribute is its own
column, it should be pretty easy.  Here's a simple perl example:

Table "public.testxml1"
 Column  |  Type   |                        Modifiers
---------+---------+-----------------------------------------------------------
 id1     | integer | not null default nextval('public.testxml1_id1_seq'::text)
 xmltext | text    |

Contains:
 id1 |   xmltext
-----+---------------------------------------------------
   1 | <animal>
<type>dog</type>
<breed>Cocker Spaniel</breed>
<color>black</color>
<sex>male</sex>
<stock_number>123456</stock_number>
<price currency="US dollars">100.00</price>
</animal>
   2 | <animal>
<type>dog</type>
<breed>Toy poodle</breed>
<color>white</color>
<sex>female</sex>
<stock_number>234567</stock_number>
<price currency="US dollars">120.00</price>
</animal>
   3 | <animal>
<type>dog</type>
<breed>Chihuahua</breed>
<color>tan</color>
<sex>male</sex>
<stock_number>345678</stock_number>
<price currency="US dollars">125.00</price>
</animal>
(3 rows)


Create new table, with a column per element/attribute:

create table flatdata (
  id  serial primary key,
  typevarchar(50),
  breed   varchar(50),
  color   varchar(50),
  sex varchar(10),
  stock_numbervarchar(30),
  price   numeric (8,2),
  currencyvarchar(30)
);

Run the perl program (below) against the original xml data, and get new
table:

 id | type |     breed      | color |  sex   | stock_number | price  |  currency
----+------+----------------+-------+--------+--------------+--------+------------
  7 | dog  | Cocker Spaniel | black | male   | 123456       | 100.00 | US dollars
  8 | dog  | Toy poodle     | white | female | 234567       | 120.00 | US dollars
  9 | dog  | Chihuahua      | tan   | male   | 345678       | 125.00 | US dollars
(3 rows)

#=  perl program:  
#!/usr/local/bin/perl

use DBI;
use XML::Twig;
our $dbh;
our $sth;
our $sth2;
$DEBUG=1;

my $dbname='mytestdb1';
my $dbport=5432;

$dbuser="testuser";

#=== connect to database =
$dbh = DBI->connect("dbi:Pg:dbname=$dbname", $dbuser, "",
                    {PrintError => 0, PrintWarn => 0}) or
  errexit("Unable to connect to dbname $dbname, err: $DBI::errstr");

print "Connect done\n";

my (@data);

my $twig=new XML::Twig(
  keep_spaces=> 1,
  twig_roots => {
 'animal' => \&process_entry }
);

my $stmt=qq(select xmltext from testxml1);
execute_db_statement($stmt, __LINE__);

my $insert=<<"EOF";
 INSERT into flatdata (type, breed, color, sex, stock_number, price, currency)
  values (?, ?, ?, ?, ?, ?, ?)
EOF

our $cnt=0;

#prepare the insert statement, with placeholders:
$sth2=$dbh->prepare($insert) ||
errexit("bad prepare for stmt $insert, error: $DBI::errstr");

#fetch each xml row, transform into individual columns:
while (@data = $sth->fetchrow_array) {
  foreach (@data) { $_='' unless defined}
  next if ($data[0] eq '');
  $twig->parse($data[0]);
}

#check for problems with premature termination
errexit($sth->errstr) if $sth->err;


sub process_entry {
  my ($t, $element)=@_;

  #process elements under the 'animal' element:

  my $type=$element->first_child('type');
  my $type_text;
  if ($type) {
$type_text=$type->text();
  }

  my $breed=$element->first_child('breed');
  my $breed_text;
  if ($breed) {
$breed_text=$breed->text();
  }

  my $color=$element->first_child('color');
  my $color_text;
  if ($color) {
$color_text=$color->text();
  }

  my $sex=$element->first_child('sex');
  my $sex_text;
  if ($sex) {
$sex_text=$sex->text();
  }

  my $stock_number=$element->first_child('stock_number');
  my $stock_number_text;
  if ($stock_number) {
$stock_number_text=$stock_number->text();
  }

  my $price=$element->first_child('price');
  my ($price_text, $price_currency);
  if ($price) {
$price_text=$price->text();
$price_currency=$price->{'att'}->{'currency'}; #currency attribute
  }

  $twig->flush;

  my $rc2=$sth2->execute($type_text, $breed_text, $color_text, $sex_text,
                         $stock_number_text, $price_text, $price_currency) ||
    die("can't insert data for item: $type_text, $breed_text, $color_text, ",
        "$stock_number_text, DB error: $DBI::errstr");

} # end sub process_entry


sub errexit {
  my (@msg)=@_;
  print @msg,"\n";
  exit 1;
}



sub execute_db_statement {
  #this subroutine will prepare and execute a statement for the database,
  # and errexit if it fails either step
  my ($statement, $lineno)=@_;
  my ($rc);
  $sth=$dbh->prepare($statement) ||
    errexit("bad prepare for stmt $statement at line $lineno, error: $DBI::errstr");
  $rc=$sth->execute() ||
    errexit("can't execute statement:\n$statement\n at line $lineno, ",
            "return code $rc: DB error: $DBI::errstr");
} # end sub execute_db_statement

#===  end perl pgm



Easy!  It could be fancied up, of course; this was quick and dirty.

Susan




  

Re: [GENERAL] PostgreSQL client api

2006-03-29 Thread Joshua D. Drake

Antimon wrote:

Just noticed:
on Windows, these results are produced, but on Linux PostgreSQL
performs great. So PostgreSQL has worse Windows integration than
MySQL. Well, since MySQL has supported Win32 for a long time, that
makes sense.
I did some "multi client" tests and PostgreSQL could not even catch
up with MySQL, so I decided to test it on my Linux box. It seems that
the performance issue is specific to the Windows version.


Part of the problem may be the connection penalty on Windows. Since 
PostgreSQL is a process-based database (MySQL is threaded, I believe) 
it is expensive (resource-wise) to fire up a bunch of connections.


You may be able to get similar performance if you were to use pconnect 
or connection pooling with PHP and PostgreSQL.


Sincerely,

Joshua D. Drake





---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly




--

=== The PostgreSQL Company: Command Prompt, Inc. ===
  Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
  Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] ODed on overloads

2006-03-29 Thread Joshua D. Drake



My application sends XML requests to a perl script, which has to
parse them and turn them into queries which get sent off, then the
results are put back into XML, and sent back to the client.


This doesn't answer your question... but why not just use plPerl?

Joshua D. Drake


--

=== The PostgreSQL Company: Command Prompt, Inc. ===
  Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
  Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] PostgreSQL client api

2006-03-29 Thread Antimon
Just noticed:
on Windows, these results are produced, but on Linux PostgreSQL
performs great. So PostgreSQL has worse Windows integration than
MySQL. Well, since MySQL has supported Win32 for a long time, that
makes sense.
I did some "multi client" tests and PostgreSQL could not even catch
up with MySQL, so I decided to test it on my Linux box. It seems that
the performance issue is specific to the Windows version.


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] ODed on overloads

2006-03-29 Thread karly

I just wanted to get a sanity check on using overloading in
PL/pgSQL.

My application sends XML requests to a perl script, which has to
parse them and turn them into queries which get sent off, then the
results are put back into XML, and sent back to the client.

The XML language is pretty simple, mostly an element name can map
directly to a stored procedure, and the attributes are arguments to
the procedure.  On many of the queries, though, not all of the
attributes are present in every case.

It seems like a perfect case for using overloading.  So far I've
done this by creating the simplest case (fewest arguments) first,
then once that seems to be working, I copy the entire text of the
function, add an argument, and add the code to make that argument
work.

So now I might have 5-10 copies of some of these functions, with a
lot of the code duplicated, which of course creates maintenance
issues.

So if this were Perl, or C, I wouldn't do it this way, but would
have the duplicate code in one function which the other functions
could call.

My question is: is there any penalty for doing this in PL/pgSQL?
Especially in functions that return sets. So, if I start with


  CREATE FUNCTION getlist(INT) 
RETURNS SETOF record AS  $$
 
 FOR rec IN SELECT ...


 LOOP
RETURN NEXT rec;
 END LOOP;

Then if I want to add an argument to return fewer rows I would do

  CREATE FUNCTION getlist(INT, INT) 
RETURNS SETOF record AS  $$


   FOR rec IN SELECT * FROM getlist($1)
   LOOP
  IF somefield = $2 THEN
 RETURN NEXT rec;
  END IF;
   END LOOP;

This doesn't look like a good idea to me, like I'm not letting the
query engine do what it's best at.  Another thought I've had
is to have the functions build up a query string then EXECUTE it,
but this gets tedious.  Maybe this is a job for CURSORs?
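One way to avoid both the copy-paste duplication and the function-calling-function chain is to keep a single implementation that treats a NULL argument as "no filter", and make each overload a thin delegation. A hedged sketch only - the table "items" and its columns are made up for illustration, and the column definition list on the second call is needed because the functions return SETOF record:

```sql
-- Hypothetical sketch: one real implementation, thin overload wrappers.
-- NULL in the second argument means "no filter", so the optional
-- condition is folded into a single query the planner can optimize.
CREATE OR REPLACE FUNCTION getlist(int, int)
RETURNS SETOF record AS $$
DECLARE
  rec record;
BEGIN
  FOR rec IN
    SELECT * FROM items               -- "items" is an invented table
    WHERE list_id = $1
      AND ($2 IS NULL OR somefield = $2)
  LOOP
    RETURN NEXT rec;
  END LOOP;
  RETURN;
END;
$$ LANGUAGE plpgsql;

-- The shorter overload just supplies the sentinel:
CREATE OR REPLACE FUNCTION getlist(int)
RETURNS SETOF record AS $$
DECLARE
  rec record;
BEGIN
  FOR rec IN SELECT * FROM getlist($1, NULL::int)
               AS t(list_id int, somefield int)  -- column list assumed
  LOOP
    RETURN NEXT rec;
  END LOOP;
  RETURN;
END;
$$ LANGUAGE plpgsql;
```

This keeps one copy of the query logic and lets the engine, rather than a PL/pgSQL loop, apply the filter.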

Thanks for any feedback on this

-karl

PS  Sorry if this is rambly
PPS Would this type of question be better on Novice?


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] [SQL] Flight numbers data

2006-03-29 Thread Simon Riggs
On Wed, 2006-03-29 at 10:50 -0600, Scott Marlowe wrote:

> > Has anyone from the postgresql camp ever solved such a problem?
> 
> Where I work, we're building a middle level system (look up the website
> that goes with my domain).  And if we weren't in the airline reservation
> industry, we couldn't afford the data sets.
> 
> > It is just that i wanna do it right, maybe play a little bit with AJAX 
> > also who knows :)
> 
> But "doing it right" goes against almost every tenet of the airline
> reservation industry :)  haha.  only serious.

Yeh, IATA don't have a normalised data model, so you're badly out of
luck there. 

There are some reasonable books on Data Modelling from Wiley you can get
with fair models in, plus Kimball has a simplified data model in his
Toolkit book.

I'd make sure you get your requirements straight, otherwise the data
model will grow and grow as each new strange-but-true weirdness emerges.

Best Regards, Simon Riggs


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Create an index with a sort condition

2006-03-29 Thread Simon Riggs
On Wed, 2006-03-29 at 03:14 -0800, sylsau wrote:

> I use PostgreSQL 8.1 and I would like to create an index on a table's
> field with a sort condition on this field.
> For example, I have a table named books and I want to create an index on
> the fields title and id_book, ordered by id_book descending.
> 
> I know how to create the index without this condition :
> 
> create index book_index on books(id_book, title);
> 

The index is already sorted and can be scanned forwards or backwards.

Perhaps you want CLUSTER?
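To illustrate (a sketch using the table from the question): the composite index can already serve a descending sort on its leading column via a backward scan, and CLUSTER (pre-8.3 syntax shown, matching the 8.1 in the question) covers the case where physical row order is what's actually wanted:

```sql
-- The plain index from the question:
CREATE INDEX book_index ON books (id_book, title);

-- A descending sort on the leading column can use that same index,
-- scanned backwards; no special "DESC index" is needed here:
SELECT id_book, title
FROM   books
ORDER  BY id_book DESC;

-- If physical row order matters (e.g. range-scan locality),
-- CLUSTER rewrites the table in index order:
CLUSTER book_index ON books;
```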

Best Regards, Simon Riggs


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Implementation Suggestions

2006-03-29 Thread Ron Mayer

Ian Harding wrote:

...
works fine, but you have to do it "The Rails Way" and expect no help
from the "Community" because they are a fanboi cheerleader squad, not
interested in silly stuff like referential integrity, functions,
triggers, etc.  All that nonsense belongs in the application!



You exaggerate.  There's nothing that says you need to only
use ActiveRecord's out-of-the-box configuration with rails apps.

All of our models directly use their postgresql library which
is just a wrapper around libpq; and from there you can use whatever
postgresql specific tricks you'd like (postgis types was the main
reason we used that instead of ActiveRecord).


Check this out: there is no stale connection detection or handling in
rails.  I'm not kidding.  If your connection drops out, restart your
web server.  Sorry.  Blah.


Which is a reasonable default.  If you want to catch the
exception and re-set the connection, you surely can do so.

We prefer to catch the exception and make the machine take
itself out of the load-balancing pool so we can diagnose
the problem rather than trying to automatically do (whatever
it is you expected it to do).


We're drifting way off topic so I'll stop here. I'd be happy
to discuss further via email or on the rails lists.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] PostgreSQL's XML support comparison against other RDBMSes

2006-03-29 Thread Merlin Moncure
On 3/29/06, Ted Byers <[EMAIL PROTECTED]> wrote:
> May I ask a question about this?
>
> I will be working on an older database in which the original developer
> stored XML documents as a single variable length text field.  To process it,
> it has to be retrieved in full and parsed.  But the structure of it is

heheh :)

> simple in that it has an element for each field in the DB that replaced the
> earlier version.  But people are still using the earlier one because they
> still need access to the old data and no tool has yet been written by my
> predecessors to move the old data over to the new DB.  Does the XML support
> you're all talking about make it less tedious to develop tools to parse
> these old XML files and put their data into the right field of the right
> table?  I can develop the tool our users need using the resources I have at
> hand, but the process of parsing these XML files is certain to be very
> tedious; something I am not looking forward to.  There is a reason I don't
> do much with XML even though I know how.

Most high level languages these days have decent XML parsing power.
My suggestion would be to parse it into insert statements the easiest
way possible, then just stuff it into the new database.
PostgreSQL string processing is powerful enough that you can create a
view which presents the old structure if you want your legacy app to
continue to access your database without substantial modification.
Check out array_cat, array_to_string, etc.  You could explore
developing custom aggregates if necessary.
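To make that concrete, here is a hedged sketch of such a compatibility view, reusing the flatdata table from Susan's example elsewhere in this digest; the element names and the assumption that the legacy app expects a single text column are illustrative only:

```sql
-- Illustrative only: re-present flat columns in the legacy
-- one-text-column shape.  coalesce() keeps a NULL column from
-- nulling out the whole concatenated string.
CREATE VIEW legacy_animals AS
SELECT id,
       '<animal>'
         || '<type>'  || coalesce(type, '')  || '</type>'
         || '<breed>' || coalesce(breed, '') || '</breed>'
         || '<color>' || coalesce(color, '') || '</color>'
       || '</animal>' AS xmltext
FROM flatdata;
```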

Merlin

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] PostgreSQL's XML support comparison against other RDBMSes

2006-03-29 Thread Ted Byers

May I ask a question about this?

I will be working on an older database in which the original developer 
stored XML documents as a single variable length text field.  To process it, 
it has to be retrieved in full and parsed.  But the structure of it is 
simple in that it has an element for each field in the DB that replaced the 
earlier version.  But people are still using the earlier one because they 
still need access to the old data and no tool has yet been written by my 
predecessors to move the old data over to the new DB.  Does the XML support 
you're all talking about make it less tedious to develop tools to parse 
these old XML files and put their data into the right field of the right 
table?  I can develop the tool our users need using the resources I have at 
hand, but the process of parsing these XML files is certain to be very 
tedious; something I am not looking forward to.  There is a reason I don't 
do much with XML even though I know how.


Ted
- Original Message - 
From: "Steve Atkins" <[EMAIL PROTECTED]>

To: "pgsql general" 
Sent: Wednesday, March 29, 2006 12:21 PM
Subject: Re: [GENERAL] PostgreSQL's XML support comparison against other 
RDBMSes





On Mar 29, 2006, at 2:57 AM, Ivan Zolotukhin wrote:


On 3/29/06, Peter Eisentraut <[EMAIL PROTECTED]> wrote:

Ivan Zolotukhin wrote:

BTW, are there any plans for its improvement? What are the first
steps for the people who want to contribute to PostgreSQL in this
field?


The first step would be making a detailed analysis of what "XML 
support"

is supposed to mean.


So let's make such analysis here within PostgreSQL community! There
were a lot of talks about XML support in Postgres, but they did not
lead to any steady work in this direction. IMHO, it's time to start it
now, no?


The people who want it should explain what it is, ideally by
pointing at the relevant standards documents.

Cheers,
  Steve


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match





---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] PostgreSQL client api

2006-03-29 Thread Scott Marlowe
On Wed, 2006-03-29 at 12:19, Antimon wrote:
> Hi,
> I'm sorry for coming across as if I was trying to compare pgsql
> with mysql. I was trying stuff, did this, and saw that huge difference
> (it's not bad on its own, but compared to mysql), and thought there
> might be some library issue causing slow reads from the server. I don't
> need any rdbms to be faster at selecting a constant integer. My point
> was the library, not server performance.

No need to apologize, honest.  This discussion brings up some valid
points, even if the benchmark is not necessarily a valid method for
choosing the database.

PostgreSQL is generally "heavier" than MySQL.  A large portion of this
is that PostgreSQL is generally busy making sure things are done right
first, and fast second.  In MySQL, it's the other way around.

I wonder how your test would work if you did something a little more
complex (like the pick 1 row in a million scenario) and ran it in
parallel for, say, 5, 10, 20, 50 clients at a time.  That would likely
give you some idea of how well connectivity and small SQL select
statements scale on each system.

Generally, pgsql folks consider the single user scenario to not be that
important, and ask themselves "so what happens when 50 people do this at
the same time?"  Again, MySQL tends to be the opposite.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Foreign key / performance question

2006-03-29 Thread Scott Marlowe
On Wed, 2006-03-29 at 11:35, Nico Callewaert wrote:
> Hi !,
>  
> Is it wise to define foreign keys for referential integrity ?
> Example : I have a customer table with 40 fields.  Out of that 40
> fields, 10 fields contain information linked to other tables.   So, is
> defining foreign keys for these 10 fields a good idea ?  Because from
> what I understand, for every foreign key, there is an index defined. 
> So, all these indexes have to be maintained.  Is that killing
> performance ?  What's the best practise : defining foreign keys or not
> ?

While a foreign key has to point to a field(s) with a unique index on
it, the foreign key itself doesn't require an index.  That said,
performance is usually better with it than without it.

If performance is your only consideration, then an SQL database is
probably not your best choice.  There are plenty of solutions that can
run faster; they just may not guarantee your data stays coherent.  And
sometimes that's OK: sometimes you have a margin of error in your data
that means you can lose a few bits here and there and the system is
still all right (i.e. weather monitoring and such).

However, if your data is critical, even a single error can be a bad
thing (i.e. accounting, airline reservations, medical, and so on),
possibly even deadly, and you want the database enforcing integrity.

From a performance perspective, I haven't found that FK/PK is the
problem so much as extreme normalization.  When you have to join 100+
tables for every request, your performance may not be as fast as you'd
like.  Setting up FK/PK relations for these 100 tables, however, almost
never makes them slower, unless they're set up poorly.

Occasionally you'll see someone use mismatched types in a FK/PK
relationship (i.e. int -> text) or other strange things.  That can cause
issues.

I'd suggest benchmarking your issue, and seeing what kind of performance
you get in your schema with and without fk/pk references.

And if you do decide that going without FKs is fine, then don't forget
to factor in your daily / weekly / monthly / yearly data cleaning
festivals...

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Foreign key / performance question

2006-03-29 Thread Stephan Szabo
On Wed, 29 Mar 2006, Nico Callewaert wrote:

>   Is it wise to define foreign keys for referential integrity ?

>   Example : I have a customer table with 40 fields.  Out of that 40
>   fields, 10 fields contain information linked to other tables.  So, is
>   defining foreign keys for these 10 fields a good idea ?  Because from
>   what I understand, for every foreign key, there is an index defined.
>   So, all these indexes have to be maintained.  Is that killing
>   performance ?  What's the best practise : defining foreign keys or not
>   ?

The referencing side of the constraint doesn't need an index, although
it's useful for speeding up deletes or updates to the referenced table
(so, if those operations don't happen or are significantly rare, having
those have to do a sequential scan may be better than the maintenance
cost of the index on the referencing side). The referenced side does need
an index, however that's theoretically the same index that's used to
guarantee the required unique/primary key constraint.
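As a concrete sketch (table names are invented for illustration): the referenced side gets its index from the primary key automatically, while an index on the referencing column is a separate, optional decision:

```sql
-- Parent: the primary key supplies the required index on the
-- referenced side.
CREATE TABLE customers (
  customer_id serial PRIMARY KEY,
  name        varchar(50)
);

-- Child: the REFERENCES clause alone does NOT create an index on
-- customer_id here.
CREATE TABLE orders (
  order_id    serial PRIMARY KEY,
  customer_id integer REFERENCES customers (customer_id),
  amount      numeric(8,2)
);

-- Optional: add this only if deletes/updates on customers happen
-- often enough to pay for the extra index maintenance on orders.
CREATE INDEX orders_customer_id_idx ON orders (customer_id);
```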

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] PostgreSQL client api

2006-03-29 Thread Antimon
Hi,
I'm sorry for coming across as if I was trying to compare pgsql
with mysql. I was trying stuff, did this, and saw that huge difference
(it's not bad on its own, but compared to mysql), and thought there
might be some library issue causing slow reads from the server. I don't
need any rdbms to be faster at selecting a constant integer. My point
was the library, not server performance.


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Flight numbers data

2006-03-29 Thread Vivek Khera


On Mar 29, 2006, at 11:50 AM, Scott Marlowe wrote:


There's much in the reservation industry that is basically a computer
implementation of a 3x5 note card system.  And none of those systems
were originally built to talk to each other, so it's often impossible


We at one time built a nice pre-dot-com-bubble-pop fortune on taking  
the logical equivalent of 3x5 card info from the US government and  
making it web searchable.  Sometimes we had to pay lots of $$$ to buy  
the data from various contractors, but there was no copyright on it  
so we could resell and/or give it away.  It was a brilliant business  
model for its day.  Now most of the data is put online by the gov't
itself.


It just takes some push by some enterprising individuals to move the  
world into better data access.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Foreign key / performance question

2006-03-29 Thread Vivek Khera


On Mar 29, 2006, at 12:35 PM, Nico Callewaert wrote:

Because from what I understand, for every foreign key, there is an  
index defined.  So, all these indexes have to be maintained.  Is  
that killing performance ?  What's the best practise : defining  
foreign keys or not ?


If your application is 100% perfect and you never do any manual  
updates to the DB, then who needs FK's?


Show me that your app is 100% perfect first... which there is no way  
you can do unless it is a trivial amount of code.


But just defining FK's won't define an index.  You have to create one  
manually or suffer the performance hit.  However, if the referenced table  
is only a few hundred rows or less, there is not much point in an  
index on that column for that table.


Personally, I live by FKs since I value my data to be correct.


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Implementation Suggestions

2006-03-29 Thread Vivek Khera


On Mar 29, 2006, at 12:33 PM, Steve Atkins wrote:

For the original poster - a web interface might well be the  
simplest to put together,
but if a client turns out to be a better solution I'd strongly  
suggest looking at Qt.
It has nice SQL support and it's very quick to turn around a simple  
database


There's also XUL built into Mozilla derived browsers if you want a  
middle-ground to full custom app and web app.



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Implementation Suggestions

2006-03-29 Thread Vivek Khera


On Mar 29, 2006, at 11:03 AM, Kenneth Downs wrote:

I am fascinated by your post.  I have never heard a bad thing said  
about RoR.


Most of what you read about RoR is written from a very superficial  
view of what it promises, as tainted by the simplistic uses of mysql  
people are familiar with.


Last summer at the O'Reilly OSCON, the author of RoR gave a  
presentation.  My colleagues and I just sat there stunned at how one  
of the great features of RoR he was showing off was basically  
referential integrity.  Except that you were *only* allowed to access  
the DB using the RoR tools.  No direct connections were ever  
allowed.  He brushed off any comments about that as "I consider the  
DB to be just a dumb object store".  So why bother using an SQL  
engine then?  Silly.


That was the end of us even bothering to investigate it as a serious  
platform, though we have borrowed some of the ideas and concepts in  
our own in-house platform we have built since then.


My take on RoR is that it makes the simple things simpler (kind of  
how the Daily Show is now Dailier) and the moderate to hard things  
impossible.  If you have a simple project then by all means use it to  
full advantage.  But if you have complex data models then  
fugeddaboutit. :-(



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Comparing 7.4 with 8.0

2006-03-29 Thread Scott Marlowe
On Wed, 2006-03-29 at 11:58, Reimer wrote:
> Hello,
>  
> We would like to have only one PostgreSQL version at our clients,
> currently some are  using 7.4.6 and others 8.0.6. The first thing is
> to migrate those 7.4.6 clients to 8.0.6. 
>  
> But before, I've to convince them that such migration will be worthwhile
> for them.  My idea is to write a document with all enhancements they
> will have if they migrate to 8.0.6. 
>  
> Where could I find some documents to help me? 
>  
> Is 7.4.6 vulnerable in some area? Security?
>  

Just posted this in another thread:



http://www.postgresql.org/docs/8.1/static/install-upgrading.html


http://www.postgresql.org/docs/8.1/static/release-8-1.html


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Comparing 7.4 with 8.0

2006-03-29 Thread Reimer



Hello,

We would like to have only one PostgreSQL version at our clients;
currently some are using 7.4.6 and others 8.0.6. The first thing is
to migrate those 7.4.6 clients to 8.0.6.

But before, I've to convince them that such migration will be
worthwhile for them. My idea is to write a document with all
enhancements they will have if they migrate to 8.0.6.

Where could I find some documents to help me?

Is 7.4.6 vulnerable in some area? Security?

Many thanks in advance!

Reimer
 


[GENERAL] Foreign key / performance question

2006-03-29 Thread Nico Callewaert
Hi!

Is it wise to define foreign keys for referential integrity?

Example: I have a customer table with 40 fields. Out of those 40
fields, 10 contain information linked to other tables. So, is defining
foreign keys for these 10 fields a good idea? Because from what I
understand, for every foreign key there is an index defined, so all
these indexes have to be maintained. Is that killing performance?
What's the best practice: defining foreign keys or not?

Thanks a lot,
Nico Callewaert

Re: [GENERAL] Implementation Suggestions

2006-03-29 Thread Steve Atkins


On Mar 29, 2006, at 8:03 AM, Kenneth Downs wrote:


Ian Harding wrote:

I am fascinated by your post.  I have never heard a bad thing said  
about RoR.


I have been meaning to investigate it because it is the only system  
I've heard of that makes the same claim that I do, which is to have  
eliminated entire categories of labor through automation.


Except that I built mine on a database foundation.  Systematize and  
automate database handling and UI creation should follow.  I did  
not know that RoR was so cavalier w/respect to the database, is  
that really true?  Is it really just yet-another-UI system?


Pretty much, AFAICT: it's designed to run with anything that supports
SQL as its embedded store, rather than allowing you to talk easily to
an RDBMS with an existing schema.

It's the exact opposite there of OpenACS, which puts tentacles deep
into the database, and really relies on embedded functions and well
crafted SQL. And only supports Oracle and PostgreSQL, not MySQL. I'm
playing with Perl+Catalyst+DBIx at the moment, which seems to be a
reasonable compromise, as long as you really like perl. :).

For the original poster - a web interface might well be the simplest
to put together, but if a client turns out to be a better solution I'd
strongly suggest looking at Qt. It has nice SQL support and it's very
quick to turn around a simple database access application, if you've a
passing acquaintance with C++. And it'll compile to Windows, Linux and
OS X from the same source.

Cheers,
  Steve


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] PostgreSQL client api

2006-03-29 Thread Scott Marlowe
On Wed, 2006-03-29 at 02:45, Antimon wrote:
> I know this is not even a test but i thought it might be a client
> library issue, not server itself.
> Well it is not, i tried it on .net with npgsql which is a .net client
> library (Not using libpq). Results are same. Connect time does not have
> much effect by the way.

Just so you understand, there was a period of time when MySQL AB put out
benchmark after benchmark like this to "prove" that MySQL was faster
than PostgreSQL.  Of course, they never compared anything moderately
complex, since they knew they'd lose.

Is it a fair comparison of say, a mainframe and a workstation to compare
the boot up times?  Not really, unless your particular application is
gonna be rebooting the mainframe a lot.  It's a small, narrow channel to
compare on, and most of the time it doesn't mean a lot.

That's all.  No personal attack meant against you, sorry if it came
across that way.  It's just many pgsql folks still smart from that era,
and it seems to come back to haunt us every so often.  

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Implementation Suggestions

2006-03-29 Thread Bernhard Weisshuhn

Kenneth Downs wrote:

I have been meaning to investigate it because it is the only system I've 
heard of that makes the same claim that I do, which is to have 
eliminated entire categories of labor through automation.


Have you looked at http://catalyst.perl.org/ lately?
IMHO it's "Rails done right" and it's perl, so it just /has/ to be good ;-)

sorry, couldn't resist.
bkw


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] PostgreSQL's XML support comparison against other RDBMSes

2006-03-29 Thread Steve Atkins


On Mar 29, 2006, at 2:57 AM, Ivan Zolotukhin wrote:


On 3/29/06, Peter Eisentraut <[EMAIL PROTECTED]> wrote:

Ivan Zolotukhin wrote:

BTW, are there any plans for its improvement? What are the first
steps for the people who want to contribute to PostgreSQL in this
field?


The first step would be making a detailed analysis of what "XML support"
is supposed to mean.


So let's make such an analysis here within the PostgreSQL community! There
has been a lot of talk about XML support in Postgres, but it has not led
to any steady work in this direction. IMHO, it's time to start it
now, no?


The people who want it should explain what it is, ideally by
pointing at the relevant standards documents.

Cheers,
  Steve


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Postgresql string parsing

2006-03-29 Thread Tony Caduto

[EMAIL PROTECTED] wrote:

Hi Folks,
I'm looking for the fastest way to parse a string in a PostgreSQL function and 
insert each parsed chunk into a table. Something like this:

  
You might be able to use the string_to_array function, which splits a 
string into array elements using the provided delimiter, which could be an 
EOL marker.
Then use a loop to iterate through the array and insert into your 
table.


Just a quick idea :-)
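A minimal sketch of that idea (the table name tableX and the line-ending
normalization are assumptions based on the original post; uses 8.0+ dollar
quoting):

```sql
-- Hypothetical sketch: split a newline-separated string and insert each chunk.
-- Assumes a table "tableX (data text)" as in the original post.
CREATE OR REPLACE FUNCTION parse_and_insert(my_string text) RETURNS integer AS $$
DECLARE
    chunks text[];
BEGIN
    IF my_string IS NULL OR my_string = '' THEN
        RETURN 0;  -- empty string, do nothing
    END IF;
    -- normalize '\r\n' and '\r' line endings to '\n', then split on '\n'
    chunks := string_to_array(
        replace(replace(my_string, E'\r\n', E'\n'), E'\r', E'\n'), E'\n');
    -- loop over each chunk found, and simply insert it
    FOR i IN 1 .. coalesce(array_upper(chunks, 1), 0) LOOP
        INSERT INTO tableX (data) VALUES (chunks[i]);
    END LOOP;
    RETURN 1;
END;
$$ LANGUAGE plpgsql;
```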

--
Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for Postgresql
Your best bet for Postgresql Administration 





Re: [GENERAL] Connecting

2006-03-29 Thread Bob Pawley

Thanks Tony I found it.

Can you give me a hint as to how to open the file??

Bob
- Original Message - 
From: "Tony Caduto" <[EMAIL PROTECTED]>

To: "Bob Pawley" <[EMAIL PROTECTED]>; 
Sent: Wednesday, March 29, 2006 8:12 AM
Subject: Re: [GENERAL] Connecting



Bob Pawley wrote:
 Do I somehow have the wrong driver for my version of Postgresql or is 
there something in Postgresql that I need to switch on???

 Bob Pawley


Hi Bob,
Make sure you have the listen_addresses setting in postgresql.conf set to 
*


listen_addresses = '*'# what IP address(es) to listen on;
   # comma-separated list of addresses;
   # defaults to 'localhost', '*' = all

Also make sure you have an entry in the pg_hba.conf file that allows your 
remote host to connect; you can either specify an explicit host or you can 
allow a whole subnet access.


These files are located (at least on my PC :-) in C:\Program 
Files\PostgreSQL\8.1\data


Hope this helps,

Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for Postgresql
Your best bet for Postgresql Administration






[GENERAL] Postgresql string parsing

2006-03-29 Thread ycrux
Hi Folks,
I'm looking for the fastest way to parse a string in a PostgreSQL function and 
insert each parsed chunk into a table. Something like this:

CREATE OR REPLACE FUNCTION parse_and_insert(text) RETURNS integer AS $$
DECLARE
    my_string   ALIAS FOR $1;
    chunks  text[];
BEGIN
    IF my_string IS NOT NULL THEN
        -- split my_string at each new line char '\n', '\r\n' or '\r'
        chunks := string_to_array(
            replace(replace(my_string, E'\r\n', E'\n'), E'\r', E'\n'), E'\n');
        -- loop over each chunk found, and simply insert it
        FOR i IN 1 .. coalesce(array_upper(chunks, 1), 0) LOOP
            INSERT INTO tableX (data) VALUES (chunks[i]);
        END LOOP;
        RETURN 1;
    END IF;
    -- empty string, do nothing
    RETURN 0;
END;
$$ LANGUAGE plpgsql;

Maybe there exists a completely different and faster method?

Thanks in advance guys

/Youn




Re: [GENERAL] [SQL] Flight numbers data

2006-03-29 Thread Scott Marlowe
On Wed, 2006-03-29 at 02:17, Achilleus Mantzios wrote:
> Hi, i am in the process of writing an application about
> tickets, flights, etc, and i am thinking of getting the primitive
> data ready at the beginning and doing it the right way,
> (e.g. the user will just select a flight number and doesn't 
> have to enter the airports, airport coordinates, countries,
> airline name, airline code, departure/arrival times, etc...)
> 
> I found some sources on the web about airlines (codes, names, countries, 
> etc...), also i found about airports, but nothing about flight numbers!

That's because the companies that keep track of those things charge a LOT of
money for their datasets. 

> This application will be for enterprise (internal) company use,
> covering the flights of employees around the world.
> 
> I was just wondering how internet ticket agents manage to do it.

They subscribe to some service that has this information in the back
end.  The data in that back end comes from the one or two sources of
that data who charge yearly subscriptions in the hundreds of thousands
of dollars.

> Has anyone from the postgresql camp ever solved such a problem?

Where I work, we're building a middle level system (look up the website
that goes with my domain).  And if we weren't in the airline reservation
industry, we couldn't afford the data sets.

> It is just that i wanna do it right, maybe play a little bit with AJAX 
> also who knows :)

But "doing it right" goes against almost every tenet of the airline
reservation industry :)  haha.  only serious.

Seriously though, you might be able to get your travel agent or whoever
you do reservations through to provide you with this information in some
kind of reliable format for the tickets you've bought.  If you can get
some kind of automated feed from them, that would be the best way, if
not, maybe they can email you the info each time, and you can cut and
paste it in.

There's much in the reservation industry that is basically a computer
implementation of a 3x5 note card system.  And none of those systems
were originally built to talk to each other, so it's often impossible
for a single user to get any kind of automatically created data from
such an industry.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Implementation Suggestions

2006-03-29 Thread Tomi NA
On 3/29/06, Ian Harding <[EMAIL PROTECTED]> wrote:
Seriously, it's not too bad if you don't mind its plentiful shortcomings.
I was getting carpal tunnel syndrome from typing pages so I switched to
RoR for a hobby app.  It works fine, but you have to do it "The Rails Way"
and expect no help from the "Community" because they are a fanboi
cheerleader squad, not interested in silly stuff like referential
integrity, functions, triggers, etc.  All that nonsense belongs in the
application!...

That's an eye opener, thanks Ian.

t.n.a.


Re: [GENERAL] More AIX 5.3 fun - out of memory ?

2006-03-29 Thread Seneca Cunningham

Gavin Hamill wrote:

Seneca Cunningham wrote:

You're using a 32-bit build with the default memory model, it would 
seem.  You're limited to less than 256MB for the heap in that model. 
Try stopping the postmaster and starting it back up with the 
environment variable LDR_CNTRL set to "MAXDATA=0x80000000".  This 
expands the heap to 2GB and if it helps, you can use ldedit on your 
postgres binaries to make that the default (or rebuild, passing the 
linker "-bmaxdata:0x80000000").



Yep you're right on the money with default everything, and the kernel is 
indeed 32-bit. Still a bit of a mixed bag tho :)


Builds default to 32-bit even with a 64-bit kernel in use.  64-bit 
binaries can be built and used even with a 32-bit kernel just so long as 
the underlying hardware is 64-bit and 64-bit mode is enabled.  You have 
64-bit hardware.



Last night, I passed

$ LDR_CNTRL="MAXDATA=0x800" bin/pg_ctl start -D /usr/local/pgsql/share/

Which immediately threw back an 'shm' error that the process table was 
full, so I tried with a more modest 0x400 request and that worked a 
treat; autovac ran fine and a full pg_restore completed perfectly - 
hurrah! :)


This morning, however, pg starts with either  0x800 or 0x400, 
but I am right back at square one with the same messages when autovac 
fires:


"2006-03-29 09:18:45 BST ERROR:  out of memory
2006-03-29 09:18:45 BST DETAIL:  Failed on request of size 268435452."
so I'm at a loss :)


Are "0x800" and "0x400" typos in this message, or did you really 
request less than 256MB?  Those numbers are the number of bytes 
requested for the heap and memory is set aside in 256MB segments.


Would it be better to 'Enable 64-bit application environment' as the 
option in 'smitty' has kept tempting me with? Or use the AIX 64-bit 
kernel instead? Or both? :)) I'm using gcc 4.1.0 to compile pg - would I 
need to recompile with any particular flags for 64-bit goodness?


The 64-bit application environment allows you to use 64-bit apps with a 
32-bit kernel.  Your hardware is intended for use with the 64-bit kernel.


If you want a 64-bit postgres, you will need to recompile.  Set 
OBJECT_MODE to 64 before running ./configure, and pass CC="gcc -maix64" 
and LDFLAGS="-Wl,-bbigtoc" to ./configure.  OBJECT_MODE tells the 
linker, assembler, archiver, and other AIX build tools to operate on 
64-bit objects.  "-maix64" is the flag to tell gcc to use the 64-bit AIX 
ABI.  "-bbigtoc" tells the linker that it is allowed to generate a toc 
greater than 64KB if needed (it will be).
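Put together, the steps above would look something like this (a sketch only,
assuming a gcc/AIX environment as described; not a tested recipe):

```
# Tell the AIX assembler, linker, archiver, etc. to work on 64-bit objects
export OBJECT_MODE=64
# -maix64 selects the 64-bit AIX ABI; -bbigtoc permits a TOC larger than 64KB
./configure CC="gcc -maix64" LDFLAGS="-Wl,-bbigtoc"
gmake
gmake check
```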


If you do switch to a 64-bit build, you will need to re-initdb.  I have 
heard that the statistics collector of a 64-bit build may have some 
problems, but it does pass "gmake check".


--
Seneca Cunningham
[EMAIL PROTECTED]



Re: [GENERAL] invalid page header

2006-03-29 Thread Chris Travers

Jo De Haes wrote:

OK.  The saga continues, everything is a little bit more clear, but at 
the same time a lot more confusing.


Today i wanted to reproduce the problem again.  And guess what? A 
vacuum of the database went thru without any problems.


I dump the block i was having problems with yesterday.  It doesn't 
report an invalid header anymore and it contains other data!!!


Inconsistent problems, especially with PostgreSQL, are usually the result of 
hardware failure. 

Turns out the data that was returned yesterday belongs to another 
database!


Some more detail about the setup.  This server runs 2 instances of 
postgresql.  One production instance which is version 8.0.3.  And 
another testing instance installed in a different folder which runs 
version 8.1.3  Am I wrong thinking this setup ought to work?


No.  I have done it before too.  PostgreSQL instances running on 
different ports or addresses are sufficiently isolated to prevent this 
from being a problem.




Both instances use completely separated data folders.

So the first dump returned data that actually belongs to an 8.0.3 
database (that runs fine).  And today without _any_ intervention that 
same block returns the correct data and the complete database is fine.


Where is the problem?
The fact that i'm running 2 different instances?
Cache on raid controller messing up?
Some strange voodoo?


I would see what sort of memory testing suite you can run on your system 
first (memtest86, for example) and go from there.  It sounds to me like 
some sort of a hardware issue.  It *could* be bits flipped anywhere, 
from the write head on the disk to the main system memory or the CPU.


The likelihood that it is a random RAM error is reduced if you are using 
ECC RAM.  Otherwise it could be anything.


This being said, when I have seen bits flipped by the CPU usually you 
get a lot of index issues and shared memory corruptions, so I would be 
more inclined to think that this was RAM or RAID cache.


Best Wishes,
Chris Travers
Metatron Technology Consulting


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Getting more information about errorcodes, such as when these errors happen

2006-03-29 Thread Emi Lu

Good morning,

In my plpgsql functions I use "exception when ..." to catch possible 
exceptions in my data. I find PostgreSQL's error code handling 
very helpful.  In my functions, I will not catch all error codes, but 
only the ones that apply to my data operations.
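As a minimal illustration of that pattern (the table and function names here
are made up):

```sql
-- Catch only the error codes that can realistically occur for this operation.
CREATE OR REPLACE FUNCTION safe_insert(v integer) RETURNS boolean AS $$
BEGIN
    INSERT INTO mytable (id) VALUES (v);   -- mytable.id assumed UNIQUE
    RETURN true;
EXCEPTION
    WHEN unique_violation THEN
        RETURN false;   -- duplicate key: report failure instead of aborting
END;
$$ LANGUAGE plpgsql;
```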


I found errorcodes info here:
http://www.postgresql.org/docs/8.1/static/errcodes-appendix.html

But I am afraid that I cannot imagine when, and under what possible 
circumstances, some errorcodes may happen just from their names, such as: 
locator_exception, invalid_grantor, active_sql_transaction, and so on.


I tried to search the online docs in order to get more info, such as when 
errorcode X happens, but I could not find it.


Could someone tell me some links that I can find more information about 
these errorcodes please?


Thanks a lot,
Ying

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Connecting

2006-03-29 Thread Tony Caduto

Bob Pawley wrote:
 
Do I somehow have the wrong driver for my version of Postgresql or is 
there something in Postgresql that I need to switch on???
 
Bob Pawley
 
 

Hi Bob,
Make sure you have the listen_addresses setting in postgresql.conf set to *

listen_addresses = '*'# what IP address(es) to listen on;
   # comma-separated list of addresses;
   # defaults to 'localhost', '*' = all

Also make sure you have an entry in the pg_hba.conf file that allows your 
remote host to connect; you can either specify an explicit host or you 
can allow a whole subnet access.
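For instance, entries for a single host or for a whole subnet might look like
this (the addresses and the md5 auth method are just examples):

```
# TYPE  DATABASE  USER  CIDR-ADDRESS      METHOD
host    all       all   192.168.0.50/32   md5
host    all       all   192.168.0.0/24    md5
```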


These files are located (at least on my PC :-) in C:\Program 
Files\PostgreSQL\8.1\data


Hope this helps,

Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for Postgresql
Your best bet for Postgresql Administration



Re: [GENERAL] Implementation Suggestions

2006-03-29 Thread Kenneth Downs




Ian Harding wrote:

I am fascinated by your post.  I have never heard a bad thing said
about RoR.

I have been meaning to investigate it because it is the only system
I've heard of that makes the same claim that I do, which is to have
eliminated entire categories of labor through automation.

Except that I built mine on a database foundation.  Systematize and
automate database handling, and UI creation should follow.  I did not
know that RoR was so cavalier with respect to the database; is that really
true?  Is it really just yet-another-UI system?

IMHO the problem with all blink-of-an-eye dev tools is that they are
not built on a foundation of solid database design, but now we're
drifting OT

> I'm wondering if I could get some suggestions as to how to implement
> this quickly and simply? I was thinking a web interface using PHP
> would be the fastest way of going about it.

If you used Ruby on Rails, you'd be finished by now.  It slices, it
dices, it makes julienne fries.

Seriously, it's not too bad if you don't mind its plentiful
shortcomings.  I was getting carpal tunnel syndrome from typing
 pages so I switched to RoR for a hobby app.  It
works fine, but you have to do it "The Rails Way" and expect no help
from the "Community" because they are a fanboi cheerleader squad, not
interested in silly stuff like referential integrity, functions,
triggers, etc.  All that nonsense belongs in the application!

Check this out, there is no stale connection detection or handling in
rails.  I'm not kidding.  If your connection drops out, restart your
web server.  Sorry.  Blah.

Anyway, besides its warts, it is dead easy to use, and does make
putting together web applications in a "green field" scenario quite
painless.  Just don't try to do anything outside the box like trying
to access an existing database that uses RDBMS features heavily and
uses normal object naming.

- Ian









Re: [GENERAL] Guidelines for upgrading from pgsql7.4.xxx server to

2006-03-29 Thread Scott Marlowe
On Tue, 2006-03-28 at 20:53, Luki Rustianto wrote:
> Hi All,
> 
> All of our application now runs on postgresql7.4.xxx servers, I never
> tried it on version 8.xxx
> 
> I wonder if there are any guidelines / step by step / special
> considerations whether it applies to database structure or the
> application it self if we want to upgrade to version 8.xxx ?

There's this page:

http://www.postgresql.org/docs/8.1/static/install-upgrading.html

And take a look at this one too:

http://www.postgresql.org/docs/8.1/static/release-8-1.html

which covers a few issues, like encoding problems you might see.  In
versions before 8.0, charset checking was lax, and you might have
characters in your database that are illegal in UTF-8 encoding.  The simple
fix is to let iconv clean that up.  Linux has it; not sure about other OSes,
but someone could always download a Knoppix disc and boot from it just
long enough to convert the dumps if you don't run Linux in your shop.
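A sketch of the iconv step (the file names are invented, and it assumes the
old cluster's data was LATIN1, so adjust -f to your actual encoding):

```shell
# Stand-in for a pre-8.0 dump containing a raw LATIN1 byte (0xE9 = 'é'),
# which is not valid UTF-8 by itself.
printf 'caf\xe9 latte\n' > old.dump
# Re-encode to clean UTF-8 before restoring into the new server.
iconv -f LATIN1 -t UTF-8 old.dump > clean.dump
```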



[GENERAL] PostgreSQL x Sybase

2006-03-29 Thread Reimer



Hi,

Does anyone know of any recent comparisons of Postgres vs Sybase?

Thanks in advance!

Reimer


Re: [GENERAL] invalid page header

2006-03-29 Thread Jo De Haes
OK, so we reran everything and got the same error message again; now 
I'm able to reproduce it.


2006-03-28 12:05:18.638 CEST ERROR:  XX001: invalid page header in block 
39248 of relation "dunn_main"

2006-03-28 12:05:18.638 CEST LOCATION:  ReadBuffer, bufmgr.c:257
2006-03-28 12:05:18.638 CEST STATEMENT:  VACUUM;

 The output of the block in question is as follows:

* File: /backup/pgsql/data/base/16384/934173
* Options used: -R 39248 -f
*
* Dump created on: Tue Mar 28 13:43:09 2006
***

Block 39248 
 -
 Block Offset: 0x132a Offsets: Lower 536 (0x0218)
 Block: Size 8192  Version2Upper4796 (0x12bc)
 LSN:  logid 26 recoff 0xa9cad42c  Special  8176 (0x1ff0)
 Items:  129   Free Space: 4260
 Length (including item array): 540

 Error: Invalid header information.

  : 1a00 2cd4caa9 0100 1802bc12  ,...
  0010: f01f0220 d49f3800 b49f4000 949f4000  ... [EMAIL PROTECTED]@.
  0020: 749f4000 5c9f3000 449f3000 2c9f3000  [EMAIL PROTECTED],.0.
  0030: 149f3000 fc9e3000 e49e3000 cc9e3000  ..0...0...0...0.
  0040: b49e3000 9c9e3000 849e3000 6c9e3000  ..0...0...0.l.0.
  0050: 549e3000 3c9e3000 249e3000 0c9e3000  T.0.<.0.$.0...0.
  0060: f49d3000 dc9d3000 c49d3000 ac9d3000  ..0...0...0...0.
  0070: 949d3000 7c9d3000 649d3000 4c9d3000  ..0.|.0.d.0.L.0.
  0080: 289d4800 049d4800 e49c4000 c89c3800  ([EMAIL PROTECTED]
  0090: a89c4000 909c3000 6c9c4800 549c3000  [EMAIL PROTECTED]
  00a0: ec923800 389c3800 1c9c3800 fc9b4000  [EMAIL PROTECTED]
  00b0: dc9b4000 bc9b4000 9c9b4000 7c9b4000  [EMAIL PROTECTED]@[EMAIL 
PROTECTED]|[EMAIL PROTECTED]
  00c0: 5c9b4000 3c9b4000 1c9b4000 009b3800  [EMAIL PROTECTED]<[EMAIL 
PROTECTED]@...8.
  00d0: e89a3000 d09a3000 b89a3000 a09a3000  ..0...0...0...0.
  00e0: 849a3800 689a3800 509a3000 2c9a4800  ..8.h.8.P.0.,.H.
  00f0: 0c9a4000 bc923000 d4923000 08933000  [EMAIL PROTECTED]
  0100: f4993000 dc993000 c4993000 ac993000  ..0...0...0...0.
  0110: 94993000 7c993000 64993000 4c993000  ..0.|.0.d.0.L.0.
  0120: 34993000 1c993000 04993000 ec983000  4.0...0...0...0.
  0130: d4983000 bc983000 a4983000 8c983000  ..0...0...0...0.
  0140: 74983000 5c983000 44983000 2c983000  t.0.\.0.D.0.,.0.
  0150: 14983000 fc973000 e4973000 cc973000  ..0...0...0...0.
  0160: b4973000 9c973000 84973000 6c973000  ..0...0...0.l.0.
  0170: 54973000 3c973000 24973000 0c973000  T.0.<.0.$.0...0.
  0180: f4963000 dc963000 c4963000 ac963000  ..0...0...0...0.
  0190: 94963000 7c963000 64963000 4c963000  ..0.|.0.d.0.L.0.
  01a0: 34963000 1c963000 04963000 ec953000  4.0...0...0...0.
  01b0: d4953000 bc953000 a4953000 8c953000  ..0...0...0...0.
  01c0: 74953000 5c953000 44953000 2c953000  t.0.\.0.D.0.,.0.
  01d0: 14953000 fc943000 e4943000 cc943000  ..0...0...0...0.
  01e0: b4943000 9c943000 80943800 64943800  ..0...0...8.d.8.
  01f0: 48943800 2c943800 10943800 f4933800  H.8.,.8...8...8.
  0200: cc935000 a8934800 80935000 58935000  ..P...H...P.X.P.
  0210: 3c933800 20933800 20933800   <.8. .8. .8.

 --
 Item   1 -- Length:   28  Offset: 8148 (0x1fd4)  Flags: USED
  1fd4: 0f45 20001c40 1300 62727574  ...E [EMAIL PROTECTED]
  1fe4: 6f2d7569 746b6572 696e6700   o-uitkering.

 Item   2 -- Length:   32  Offset: 8116 (0x1fb4)  Flags: USED
  1fb4: dd92 32002040 1600 62727574  2. @brut
  1fc4: 6f2d6d61 616e6473 616c6172 6973  o-maandsalaris..

 Item   3 -- Length:   32  Offset: 8084 (0x1f94)  Flags: USED
  1f94: 8e29 15002040 1800 62727574  ...).. @brut
  1fa4: 6f6d6161 6e647361 6c617269 7373656e  omaandsalarissen

 Item   4 -- Length:   32  Offset: 8052 (0x1f74)  Flags: USED
  1f74: af3a 32002040 1800 62727574  ...:2. @brut
  1f84: 6f6d6161 6e647361 6c617269 7373656e  omaandsalarissen

 Item   5 -- Length:   24  Offset: 8028 (0x1f5c)  Flags: USED
  1f5c: a70c 14001840 0e00 62727574  [EMAIL PROTECTED]
  1f6c: 6f6d6172 6765omarge..

 Item   6 -- Length:   24  Offset: 8004 (0x1f44)  Flags: USED
  1f44: 3b0d 3d001840 0e00 62727574  ..;[EMAIL PROTECTED]
  1f54: 6f6d6172 6765omarge..

 Item   7 -- Length:   24  Offset: 7980 (0x1f2c)  Flags: USED
  1f2c: 7618 19001840 0e00 62727574  [EMAIL PROTECTED]
  1f3c: 6f6d6172 6765omarge..

 Item   8 -- Length:   24  Offset: 7956 (0x1f14)  Flags: USED
  1f14: af19 25001840 0e00 62727574  [EMAIL PROTECTED]
  1f24: 6f6d6172 6765omarge..

 Item   9 -- Length:   24  Offset: 7932 (0x1efc)  Flags: USED
  1efc: 7c1e 0e001840 0e00 62727574  ..|[EMAIL PROTECTED]
  1f0c: 6f6d6172 6765omarge..

 Item  10 -- Length:   24  Offset: 7908 (0x1ee4)  Flags: USED
  1ee4: 1420 11001840 0e00 62727574  ... [EMAIL PROTECTED]
  1ef4: 6f6d6172 6765omarge..

 Item  11 --

Re: [GENERAL] invalid page header

2006-03-29 Thread Jo De Haes
OK.  The saga continues, everything is a little bit more clear, but at 
the same time a lot more confusing.


Today i wanted to reproduce the problem again.  And guess what? A vacuum 
of the database went thru without any problems.


I dump the block i was having problems with yesterday.  It doesn't 
report an invalid header anymore and it contains other data!!!


Turns out the data that was returned yesterday belongs to another database!

Some more detail about the setup.  This server runs 2 instances of 
postgresql.  One production instance which is version 8.0.3.  And 
another testing instance installed in a different folder which runs 
version 8.1.3.  Am I wrong in thinking this setup ought to work?


Both instances use completely separated data folders.

So the first dump returned data that actually belongs to an 8.0.3 
database (that runs fine).  And today without _any_ intervention that 
same block returns the correct data and the complete database is fine.


Where is the problem?
The fact that i'm running 2 different instances?
Cache on raid controller messing up?
Some strange voodoo?




Jo De Haes wrote:
OK, so we reran everything and got the same error message again; now 
I'm able to reproduce it.




Tom Lane wrote:


Jo De Haes <[EMAIL PROTECTED]> writes:

I asked the developer to delete all imported data again and restart 
the import.  This import crashed again with the same error, but this 
time on another block.




2006-03-27 00:15:25.458 CEST ERROR:  XX001: invalid page header in 
block 48068 of relation "dunn_main"
2006-03-27 00:15:25.458 CEST CONTEXT:  SQL statement "SELECT  phone 
FROM dunn_main WHERE source_id =  $1  AND duns =  $2 "

PL/pgSQL function "proc_dunn" line 29 at select into variables
2006-03-27 00:15:25.458 CEST LOCATION:  ReadBuffer, bufmgr.c:257
2006-03-27 00:15:25.458 CEST STATEMENT:  SELECT proc_dunn ('J M 
Darby','TA4 3BU','215517942','1','01','S',NULL,'0219',156,1

54,387166)




But again, when i do the 'SELECT proc_dunn ('J M Darby','TA4 
3BU','215517942','1','01','S',NULL,'0219',156,1

54,387166)' statement now, it works without errors.




That is *really* strange.  Are you certain that the function is
examining the same table you are?  I'm wondering about multiple
similarly-named tables in different schemas, or something like that.


If I would like to dump block 48068 now with pg_filedump, how do I 
know which file this block belongs to?




See
http://www.postgresql.org/docs/8.1/static/storage.html
and/or use contrib/oid2name.

regards, tom lane






[GENERAL] Create an index with a sort condition

2006-03-29 Thread sylsau
Hello,

I use PostgreSQL 8.1 and I would like to create an index on a table's
field with a sort condition on this field.
For example, I have a table named books and I want to create an index on
the fields title and id_book, ordered by id_book descending.

I know how to create the index without this condition :

create index book_index on books(id_book, title);


But I don't know how to create this index with the sort condition.
Does someone have an idea?

Thanks for your help.

Sylvain.




Re: [GENERAL] Implementation Suggestions

2006-03-29 Thread Ian Harding
> >> I'm wondering if I could get some suggestions as to how implement
> >> this quickly and simply? I was thinking a web interface using PHP
> >> would be the fastest way of going about it.
> >

If you used Ruby on Rails, you'd be finished by now.  It slices, it
dices, it makes julienne fries.

Seriously, it's not too bad if you don't mind its plentiful
shortcomings.  I was getting carpal tunnel syndrome from typing
 pages so I switched to RoR for a hobby app.  It
works fine, but you have to do it "The Rails Way" and expect no help
from the "Community" because they are a fanboi cheerleader squad, not
interested in silly stuff like referential integrity, functions,
triggers, etc.  All that nonsense belongs in the application!

Check this out, there is no stale connection detection or handling in
rails.  I'm not kidding.  If your connection drops out, restart your
web server.  Sorry.  Blah.

Anyway, besides its warts, it is dead easy to use, and does make
putting together web applications in a "green field" scenario quite
painless.  Just don't try to do anything outside the box like trying
to access an existing database that uses RDBMS features heavily and
uses normal object naming.

- Ian



[GENERAL] checking data type

2006-03-29 Thread raj
Is there a function that could check a variable's data type?  For example,
I want to check all the columns of a table, and if I find a column with
an integer data type I'll set it to a default of 1, and I'll set a constant
for any column of type text.




[GENERAL] using types for encrypting fields

2006-03-29 Thread Tobias Herp
Hi,

I have the need to encrypt some columns in some tables. With Postgres 7.4,
using a tablespace on an encrypted partition is not an option, right? Thus,
my idea is:

- put the encryption/decryption key in a temporary table

- create types for each type of data which must be encrypted (enc_numeric,
enc_char etc.)

- define input, output, send and receive functions for each of these types

Good idea?

However, the realisation seems to be somewhat difficult. I'm not very
experienced at writing pl/pgsql functions, and I'm not sure how to specify
the internal structure of my new type.

This is what I've written so far:


-- encrypted numeric:
CREATE TYPE public.enc_numeric (
 INPUT = public.enc_numeric_txt_in,  -- read from text
 OUTPUT = enc_numeric_txt_out,   -- write to text
 receive = enc_numeric_in,   -- read from numeric
 send = enc_numeric_out, -- write to numeric
 default = cast (0 as numeric)
);

CREATE OR REPLACE FUNCTION public.enc_numeric_txt_in(cstring)
  RETURNS enc_numeric AS
'
DECLARE
data ALIAS FOR $1;
x bytea;
y cstring;
key bytea;
BEGIN
key = select key from keys limit 1;
x = decode(data, \'escape\');
y = encrypt(x, key, \'bf\');
RETURN y::numeric
END;
'
  LANGUAGE 'plpgsql' STABLE STRICT;


(which is by no means complete, of course).

I reckon something is utterly wrong here; perhaps someone can push me in
the right direction?

Or can't it be done, and I should use triggers (when writing) and change
all concerned views instead?

-- 
Thanks in advance,

Tobias



Re: [GENERAL] Advantages of PostgreSQL over MySQL 5.0

2006-03-29 Thread Mark Aufflick
To go slightly OT, I have the current displeasure of becoming
acquainted with Sybase, which has ideas about being flexible with
standards (and sanity) much like MySQL.

The first of the two intentional "helpful" features I have come across
so far is that inserting (or updating) char/varchar columns with
strings that are too long silently results in a truncated string.

The second is that in where clauses, NULL = NULL is true. (waits for
you to get up off the floor) Apparently it didn't use to be like that,
but the "feature" was added due to customer requests... What's worse
is that Sybase is not even internally consistent - FALSE = FALSE in
the join clause does not cause a match.

Truly horrible. Of course there are runtime parameters you can set to
get more ANSI-ish behaviour, but by the time you realise this, there
may already be production code that relies on the old behaviour...
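For contrast, PostgreSQL follows the standard here; a quick illustration
(runnable in any psql session):

```sql
-- NULL = NULL evaluates to NULL (unknown), so the WHERE clause filters it out.
SELECT count(*) FROM (SELECT 1 AS x) t WHERE NULL = NULL;   -- 0
-- The standard way to test for NULL:
SELECT count(*) FROM (SELECT 1 AS x) t WHERE NULL IS NULL;  -- 1
```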



Re: [GENERAL] postgresql.conf listen_addresses causing connection problems

2006-03-29 Thread Adrian Klaver
On Wednesday 29 March 2006 02:59 am, David Bernal wrote:

>
> Any ideas? I'm fairly baffled, but then I'm a newbie.
>
Just a thought: did you restart the server after making the changes?
From the Postgres docs:
"...This parameter can only be set at server start."

-- 
Adrian Klaver   
[EMAIL PROTECTED]



Re: [GENERAL] postgresql.conf listen_addresses causing connection problems

2006-03-29 Thread Tom Lane
"David Bernal" <[EMAIL PROTECTED]> writes:
> I recently have been attempting to get my install of postgresql 8.1
> (running Win XP as OS) to listen on both 127.0.0.1 and my IP address,
> 192.168.0.100 (inside my network, obviously.) As such, I tried first
> setting listen_addresses = '192.168.0.100, localhost'

By "8.1" do you really mean "8.1.0"?  If so, you might try updating to
the latest subrelease (currently 8.1.3).  This problem doesn't offhand
seem to match any of the bug fixes I see in the CVS logs, but there have
been a number of Windows-specific fixes and maybe one of them explains
it.

> With it like that, when I try to connect to the server using localhost
> (I've tried this on a couple different clients) I get an error. If,
> for example, I run psql -U user -d base, I get
> "psql: server closed the connection unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request."

This should certainly leave some trace in the postmaster log file.
If you don't know where the log output is going, find out ;-)

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] postgresql.conf listen_addresses causing connection problems

2006-03-29 Thread Ian Harding
On 3/29/06, David Bernal <[EMAIL PROTECTED]> wrote:
> I recently have been attempting to get my install of postgresql 8.1
> (running Win XP as OS) to listen on both 127.0.0.1 and my IP address,
> 192.168.0.100 (inside my network, obviously.) As such, I tried first
> setting listen_addresses = '192.168.0.100, localhost'
>
Try * (wildcard) and see what happens.  It should either work or not
work, not work 'sometimes', so I think there must be something else
involved.  If pg_hba.conf is set up right and listen_addresses is *,
then you have the network to look at.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Connecting

2006-03-29 Thread Martijn van Oosterhout
On Wed, Mar 29, 2006 at 06:08:44AM -0800, Bob Pawley wrote:
> I have been attempting to connect to a database (Postgresql 8.0, running on
> XP).
> 
> I have two different applications that see and apparently connect to
> the Postgresql ODBC driver (version - psqlodbc-08_00_0101) but I
> receive an error message when attempting to connect.

You're going to have to tell us the error message if you want any help.
Our mind-reading devices are offline today.

Have a nice day,

-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.


signature.asc
Description: Digital signature


Re: [GENERAL] PostgreSQL client api

2006-03-29 Thread Merlin Moncure
On 28 Mar 2006 17:01:45 -0800, Antimon <[EMAIL PROTECTED]> wrote:
> Nearly 3 times slower even w/o any table queries. But I could not
> reproduce this by writing stored procs on both which select 0-1 in
> a loop to a variable. The results were almost the same.
> (I tried pg_ and mysqli_ functions too; results were not too different)
>
> Do the MySQL client libraries perform better? Or are Postgres stored procs
> 3x faster? I cannot understand, since there is not even an I/O
> operation or any query-planning stuff; what is the cause of this?

Yes, it is the case that raw select performance for pgsql drastically
improves with prepared statements, by as much as 2-3x.  In my experience
postgresql is just a bit faster with prepared statements and
substantially slower without.  There is a little bit of pain in
setting them up properly in a pooling web environment, but it is worth
it if you want top performance.

Does anybody know if php uses the parameterized flavors of the C API?
That's another huge optimization over PQexec.

I agree with Jim that it's probably fast enough anyway.

Merlin

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Connecting

2006-03-29 Thread Bob Pawley



I have been attempting to connect to a database 
(Postgresql 8.0, running on XP).
 
I have two different applications that see 
and apparently connect to the Postgresql ODBC driver (version - 
psqlodbc-08_00_0101) but I receive an error message when attempting to 
connect.
 
Do I somehow have the wrong driver for my version of 
Postgresql or is there something in Postgresql that I need to switch 
on???
 
Bob Pawley
 
 


Re: [GENERAL] PostgreSQL's XML support comparison against other RDBMSes

2006-03-29 Thread Merlin Moncure
On 3/29/06, Ivan Zolotukhin <[EMAIL PROTECTED]> wrote:
> On 3/29/06, Peter Eisentraut <[EMAIL PROTECTED]> wrote:
> > Ivan Zolotukhin wrote:
> > > BTW, are there any plans for its improvement? What are the first
> > > steps for the people who want to contribute to PostgreSQL in this
> > > field?
> >
> > The first step would be making a detailed analysis of what "XML support"
> > is supposed to mean.
>
> So let's make such analysis here within PostgreSQL community! There
> were a lot of talks about XML support in Postgres, but they did not
> lead to any steady work in this direction. IMHO, it's time to start it
> now, no?

IMO, PostgreSQL has the tools to do amazing things in XML. Check out:
1. pl languages (especially pl/perl)
2. functional indexes
3. xml contrib
4. gist indexes

I agree with Martijn.  If I have an XML requirement, I either store it as
a black-box blob or preprocess it (possibly with XSLT) into insert
statements.  I treat XML no differently than a CSV file, for example.

Merlin

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] PostgreSQL's XML support comparison against other RDBMSes

2006-03-29 Thread Tomi NA
On 3/29/06, Martijn van Oosterhout wrote:
> On Wed, Mar 29, 2006 at 02:57:44PM +0400, Ivan Zolotukhin wrote:
> > So let's make such analysis here within PostgreSQL community! There
> > were a lot of talks about XML support in Postgres, but they did not
> > lead to any steady work in this direction. IMHO, it's time to start it
> > now, no?
>
> Quite, lets start with why the module in contrib is not good enough and
> what the various existing XML projects lack.
>
> http://developer.postgresql.org/cvsweb.cgi/pgsql/contrib/xml2/
> http://pgfoundry.org/projects/getxml/
> http://gborg.postgresql.org/project/xpsql/projdisplay.php

The getxml and xpsql projects seem to fit fantastically into my idea of
handling XML inside a database. Say you want to store an OpenOffice.org
document in a database, but rather than say "it's a binary" and lose all
editing and most search capabilities, it might be possible to use xpsql to
store the document in the database. Once stored, specific parts could be
altered as the system changes state, and over the system's lifecycle you
could at any time generate a document based on the current system state.

Can anyone give me any kind of opinion on this kind of document handling?
Does anyone here have any experience with xpsql?
Is xpsql under any kind of active development?
Can xpsql handle documents as complex as OOo documents, with all their
DTDs, schemas... whatever?

t.n.a.


Re: [GENERAL] PostgreSQL's XML support comparison against other RDBMSes

2006-03-29 Thread Martijn van Oosterhout
On Wed, Mar 29, 2006 at 02:57:44PM +0400, Ivan Zolotukhin wrote:
> So let's make such analysis here within PostgreSQL community! There
> were a lot of talks about XML support in Postgres, but they did not
> lead to any steady work in this direction. IMHO, it's time to start it
> now, no?

Quite, lets start with why the module in contrib is not good enough and
what the various existing XML projects lack.

http://developer.postgresql.org/cvsweb.cgi/pgsql/contrib/xml2/
http://pgfoundry.org/projects/getxml/
http://gborg.postgresql.org/project/xpsql/projdisplay.php

As for myself, I've never had XML even come close to my database so I
have no real interest in this.

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.


signature.asc
Description: Digital signature


Re: [GENERAL] table owner of cloned databases

2006-03-29 Thread Stefan Fink


Scott Marlowe wrote:
> On Mon, 2006-03-27 at 11:04, Stefan Fink wrote:
> 
>>Hi all,
>>
>>I would like to create a database based upon a template which also
>>contains some tables. After invoking
>>
>>createdb -T <template> -U <user> -O <owner> test
>>
>>the new database 'test' belongs to <owner> but all tables still
>>belong to the user who owns the template <template>. This results
>>in the following error message when accessing the tables:
>>
>>ERROR:  permission denied for relation <table>
>>
>>Of course, I could alter the permissions in a way that <owner> has
>>access to all tables. But it would be much nicer if the owner of the
>>cloned database were inherited by all tables. Is there a way to achieve
>>this?
> 
> 
> The easy way to do this is to dump the source database with the -O
> switch, which turns off the generation of the sql that sets owner on the
> objects created during restore.
> 
> If you can't get a fresh dump, for whatever reason, then you'll have to
> either hand edit the dump to remove those lines or write a one line sed
> script to remove them on the fly.
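In the spirit of that one-line sed script, a rough Python equivalent for plain-format dumps (the regex is an approximation of pg_dump's OWNER statements, not an exhaustive match):

```python
# Filter "ALTER ... OWNER TO ..." statements out of a plain-text pg_dump
# file so restored objects belong to whoever runs the restore.
import re

OWNER_RE = re.compile(r"^ALTER\s+\S+.*\s+OWNER\s+TO\s+", re.IGNORECASE)

def strip_owner_lines(dump_text):
    """Drop lines that reassign object ownership."""
    return "\n".join(
        line for line in dump_text.splitlines()
        if not OWNER_RE.match(line)
    )

dump = "CREATE TABLE t (i int);\nALTER TABLE public.t OWNER TO alice;\n"
print(strip_owner_lines(dump))  # only the CREATE TABLE line survives
```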

This makes using a template all but pointless - very unsatisfying.
I think the easiest way is to simply use a database definition file and
to create every database by invoking

psql -U <user> < <definition-file>

This is exactly what I wanted to avoid :-(

Thanks anyway,
Stefan.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] postgresql.conf listen_addresses causing connection problems

2006-03-29 Thread David Bernal
I recently have been attempting to get my install of postgresql 8.1
(running Win XP as OS) to listen on both 127.0.0.1 and my IP address,
192.168.0.100 (inside my network, obviously.) As such, I tried first
setting listen_addresses = '192.168.0.100, localhost'

With it like that, when I try to connect to the server using localhost
(I've tried this on a couple different clients) I get an error. If,
for example, I run psql -U user -d base, I get
"psql: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request."

Now, if I try again, this time with psql -h 192.168.0.100 -U user -d
base, it simply hangs, I don't even get a password prompt. Just for
fun, I let this sit for about 30 minutes, and it still did nothing. I
also checked the various logs I know of, and found nothing.

As a work around, I have tried setting
listen_addresses='192.168.0.100', however then I get an error about
half of the time, and it works normally half the time.

Setting listen_addresses='localhost' works just dandy, but then I
can't access the 'base except from home, and that's no good either.

Now, I did make some changes to my router recently, that I haven't
looked into yet, however, since this is all happening inside the
network, and mostly on the same machine as pgsql, I don't think that
would be it (correct me if I'm wrong.)

Any ideas? I'm fairly baffled, but then I'm a newbie.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] PostgreSQL's XML support comparison against other RDBMSes

2006-03-29 Thread Ivan Zolotukhin
On 3/29/06, Peter Eisentraut <[EMAIL PROTECTED]> wrote:
> Ivan Zolotukhin wrote:
> > BTW, are there any plans for its improvement? What are the first
> > steps for the people who want to contribute to PostgreSQL in this
> > field?
>
> The first step would be making a detailed analysis of what "XML support"
> is supposed to mean.

So let's make such analysis here within PostgreSQL community! There
were a lot of talks about XML support in Postgres, but they did not
lead to any steady work in this direction. IMHO, it's time to start it
now, no?

Sincerely,
Ivan Zolotukhin

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Guidelines for upgrading from pgsql7.4.xxx server to pgsql8.xxx server

2006-03-29 Thread Gregory S. Williamson
As others have pointed out, OID dependant tables may need special attention.

We recently upgraded from 7.4 to 8.1.x and found some issues with encoding -- 
8.1 defaulted to a different encoding and we had some data that was invalid; we 
manually fixed the old data before retrying the export.
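A sketch of how one might locate such invalid data before re-export - here, lines of a dump that are not valid UTF-8 (the helper and sample bytes are hypothetical):

```python
# Hypothetical helper: report which lines of a dump fail UTF-8 decoding,
# so they can be fixed before loading into a stricter 8.1 cluster.
def find_invalid_utf8_lines(raw: bytes):
    """Return 1-based line numbers whose bytes do not decode as UTF-8."""
    bad = []
    for lineno, line in enumerate(raw.split(b"\n"), start=1):
        try:
            line.decode("utf-8")
        except UnicodeDecodeError:
            bad.append(lineno)
    return bad

dump = b"INSERT INTO t VALUES ('ok');\nINSERT INTO t VALUES ('caf\xe9');\n"
print(find_invalid_utf8_lines(dump))  # -> [2]: the Latin-1 byte 0xe9 is not valid UTF-8
```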

Make sure you read the manual's section on the configuration parameters as 
well; there are some changes / improvements.

Some SQL may need adjustment -- there are some stricter checks in 8.1 which 
will fail to run SQL that 7.4 would accept (look for the "Missing WHERE clause" 
stuff in the manual, for instance.)

Our upgrade went smoothly (both runtime with PostGIS data and the billing side 
of things) and I'd recommend upgrading; 8.1 has some substantial improvements.

Greg Williamson
DBA
GlobeXplorer LLC

-Original Message-
From:   [EMAIL PROTECTED] on behalf of Luki Rustianto
Sent:   Tue 3/28/2006 6:53 PM
To: pgsql-general@postgresql.org
Cc: 
Subject:[GENERAL] Guidelines for upgrading from pgsql7.4.xxx server to 
pgsql8.xxx server

Hi All,

All of our application now runs on postgresql7.4.xxx servers, I never
tried it on version 8.xxx

I wonder if there are any guidelines / step by step / special
considerations whether it applies to database structure or the
application it self if we want to upgrade to version 8.xxx ?

Thanks.

---(end of broadcast)---
TIP 6: explain analyze is your friend






---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] PostgreSQL's XML support comparison against other RDBMSes

2006-03-29 Thread Peter Eisentraut
Ivan Zolotukhin wrote:
> BTW, are there any plans for its improvement? What are the first
> steps for the people who want to contribute to PostgreSQL in this
> field?

The first step would be making a detailed analysis of what "XML support" 
is supposed to mean.  (You alluded to one use case, but before we 
commit to anything, we need to see a more general analysis.)  The 
second step would be checking how some of these issues can be solved 
within existing standards.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] best practice in upgrading db structure

2006-03-29 Thread Csaba Nagy
> Could somebody explain me, or point me to a resource where I can find 
> out what is the recommended practice when a live db needs to be replaced 
> with a new version of it that has a slightly different structure?

Our development infrastructure includes a development data base cluster,
with one data base per developer, and a staging data base with a largish
deployment of the current production data base version.

The developers are free to test whatever data base modifications they
need on their private development data base. We have a "setup_db"
script, which creates the data base structure + initial data. The script
is based on an XML file which is processed by XSLT to generate the
actual schema for Oracle/Postgres. So the developers usually recreate
the DB from scratch on any modification of the db schema. We test both
on postgres/oracle, the application must support both (it can even
connect to both in parallel).

When a release is approaching, we make a diff of the currently deployed
production schema and the to be deployed new release schema. The
differences go to a db migration script (separately for
Oracle/Postgres), broken in a few steps:
  - init: create new DB structures (create new tables, add new db fields
to existing tables, insert new data, etc.). After this step the DB can
be used both by the old version and the new one of the application.
After this step we restart the application with the new version deployed
(actually we have usually another cluster of app-servers connecting to
the same data base, running the new version);
  - cleanup: delete old structures, add the new constraints which could
not be added because compatibility problems with the old version;

All can be done using ALTER TABLE and the like. When some field changes
its structure/meaning, we actually create a new field and copy over
the old one with the needed conversions. The new version will use the
new field, the old version the old field... we do everything possible to be
able to run both versions after the init step. We actually have a middle
step, because our systems are (logically) partitioned into
sub-systems, and we usually migrate them separately, so we have a
"migrate sub-system" step too. So we can actually have two versions of the
software connecting in parallel to the DB, each having a different set
of sub-systems to run.

After the migration scripts are created, we test them on the staging
system (our QA does this, and they are more than happy to discover any
mistakes in the process ;-)

We also have a quite extensive unit/integration test suite which catches
quite a lot of potential regression cases, this makes any change easier
to prove not breaking existing functionality, and in particular data
base changes are also easier to add with enough confidence it will work.

HTH,
Csaba.



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] PostgreSQL's XML support comparison against other RDBMSes

2006-03-29 Thread Ivan Zolotukhin
Hello,

On 3/28/06, Merlin Moncure <[EMAIL PROTECTED]> wrote:
> > Does anybody know good comparison/review article of XML support in
> > major RDMBSes? I know that PostgreSQL's XML capabilities are quite
> > weak, but how far is it from other products?
>
> Could you please describe a specific example of what you would like to
> do and how you would not be able to do that via PostgreSQL.  I think
> you will find that with functions in various languages you can do just
> about anything.  You could, for example, load up exported functions from
> the Xalan/Xerces XML library.

Actually, at this moment I'm just interested in the possibilities of
combining relational and XML approaches in my projects and therefore
want to understand the differences between implementations of XML support
in DBMSes. I'm sure that I won't stop using the relational model - so
native XML databases are almost outside my interest.

For example, let's consider a web application with heterogeneous data
(internal data is stored in relational tables, but there is a lot of
data from outside, imported from different places, and - since XML is
the language for speaking with other apps - it's much better to store
this data 'as is', e.g. as a set of XMLs). Then I need to manipulate a
mixture of relational and XML portions of data...

Sure, PostgreSQL is the main DB I use. So at best I'll use its
abilities, but I see that its XML support is weak.

BTW, are there any plans for its improvement? What are the first steps
for the people who want to contribute to PostgreSQL in this field?

Sincerely,
Ivan Zolotukhin

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] best practice in upgrading db structure

2006-03-29 Thread Ivan Zolotukhin
Hello,

> Could somebody explain me, or point me to a resource where I can find
> out what is the recommended practice when a live db needs to be replaced
> with a new version of it that has a slightly different structure?
>
> What do you usually do in a situation like this?

That's a big problem for us too. We develop a web application in "always
beta" mode, with a very short one-week release cycle. It is
practically impossible to foresee all the business requirements that
affect database structure, so almost every release we need to change
it on the live server. However, it's critically important to plan the
DB structure properly anyway, so that you only have to change it slightly
in the future.

Some time ago I could not find any information on how to update
both the application and the database without any interruption for
the clients. I can easily update my application without any losses within
a transaction (`mv` command), and I can easily do the same with PostgreSQL,
but how do I *coordinate* both changes? What if my running application
introduces some logical inconsistency because I have updated the DB
structure but not yet the application?

So we could not find a correct solution. IMHO, the only good practices
are to test the transition intensively in a development environment to
smooth it, to minimize database changes, and to minimize release duration.

Now every week we write release plans trying to think it over in
details, then stop application showing some fun message to users and
do quickly what we planned in advance. It takes 2-3 minutes to
complete everything, but I would like to avoid it...

Sincerely,
Ivan Zolotukhin

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] about un-discrible

2006-03-29 Thread chris smith
On 3/29/06, 查海平 <[EMAIL PROTECTED]> wrote:
> hi,
>    Could anyone tell me how to unsubscribe from this mailing list?

Check the mail headers:

List-Unsubscribe: 

--
Postgresql & php tutorials
http://www.designmagick.com/

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] about un-discrible

2006-03-29 Thread 查海平
hi,
   Could anyone tell me how to unsubscribe from this mailing list?

thx!
  



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] PostgreSQL client api

2006-03-29 Thread Antimon
I know this is not even a real test, but I thought it might be a client
library issue, not the server itself.
Well, it is not: I tried it on .NET with Npgsql, which is a .NET client
library (not using libpq). The results are the same. Connect time does not
have much effect, by the way.


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] More AIX 5.3 fun - out of memory ?

2006-03-29 Thread Gavin Hamill

Seneca Cunningham wrote:

You're using a 32-bit build with the default memory model, it would 
seem.  You're limited to less than 256MB for the heap in that model. 
Try stopping the postmaster and starting it back up with the 
environment variable LDR_CNTRL set to "MAXDATA=0x80000000".  This 
expands the heap to 2GB, and if it helps, you can use ldedit on your 
postgres binaries to make that the default (or rebuild, passing the 
linker "-bmaxdata:0x80000000").



Yep you're right on the money with default everything, and the kernel is 
indeed 32-bit. Still a bit of a mixed bag tho :)


Last night, I passed

$ LDR_CNTRL="MAXDATA=0x80000000" bin/pg_ctl start -D /usr/local/pgsql/share/

which immediately threw back a 'shm' error that the process table was 
full, so I tried a more modest 0x40000000 request and that worked a 
treat; autovac ran fine and a full pg_restore completed perfectly - 
hurrah! :)


This morning, however, pg starts with either 0x80000000 or 0x40000000, 
but I am right back at square one with the same messages when autovac fires:

2006-03-29 09:18:45 BST ERROR:  out of memory
2006-03-29 09:18:45 BST DETAIL:  Failed on request of size 268435452.

so I'm at a loss :)


Would it be better to 'Enable 64-bit application environment' as the 
option in 'smitty' has kept tempting me with? Or use the AIX 64-bit 
kernel instead? Or both? :)) I'm using gcc 4.1.0 to compile pg - would I 
need to recompile with any particular flags for 64-bit goodness?


Anyway, "Large Program Support"[1] from the AIX docs give an overview 
of the situation.  Chapter 3 of the redbook "Developing and porting C 
and C++ Applications on AIX"[2] goes into all the gory details of 
what's happening.


[1] 
http://publib.boulder.ibm.com/infocenter/pseries/topic/com.ibm.aix.doc/aixprggd/genprogc/lrg_prg_support.htm 




Wow, that's quite some bedtime reading - hopefully some day I will be 
able to read it without feeling that my brain is about to melt :)


FAO Tom Lane - I have maintenance_work_mem set to 256MB, so that also 
matches perfectly with the size of the out of memory error :)


Cheers,
Gavin.



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Flight numbers data

2006-03-29 Thread Achilleus Mantzios
Hi, I am in the process of writing an application for
tickets, flights, etc., and I am thinking of getting the primitive
data ready at the beginning and doing it the right way
(e.g. the user will just select a flight number and doesn't
have to enter the airports, airport coordinates, countries,
airline name, airline code, departure/arrival times, etc...)

I found some sources on the web about airlines (codes, names, countries, 
etc...), and also about airports, but nothing about flight numbers!

This application will be for enterprise (internal) company use,
covering the flights of employees around the world.

I was just wondering how internet ticket agents manage to do it.

Has anyone from the postgresql camp ever solved such a problem?

One point for me is that my company does shipping;
we are not in the travel-agent business (IATA, standards, etc.) at 
all, so it's hard for me to find the path to follow. In addition,
going the full travel-agent path would be overkill, since what we need
is just an application for the tickets.

It is just that I want to do it right - and maybe play a little bit with AJAX 
too, who knows :)

Thanx for any hints.

-- 
-Achilleus


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] updategram in pg

2006-03-29 Thread SunWuKung
Is there a way to store update/insert logic in the data itself, instead of 
writing DB or application procedures to do it?

I am looking for something along the lines of MS updategrams like this:

[The updategram XML example was stripped by the list archive.]

Thanks for the help.
Balázs
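The updategram XML in the post above was stripped by the archive, but the general idea - declarative before/after fragments turned into SQL - can be sketched as follows. The element names and mapping are assumptions for illustration, not the actual MS updategram schema, and the naive string formatting is not injection-safe:

```python
# Sketch: turn a made-up <sync><before>/<after> structure into an UPDATE.
import xml.etree.ElementTree as ET

def updategram_to_sql(xml_text, table, key):
    """Emit an UPDATE from before/after row attributes (illustrative only)."""
    root = ET.fromstring(xml_text)
    before = root.find("before/row").attrib
    after = root.find("after/row").attrib
    sets = ", ".join(f"{c} = '{v}'" for c, v in after.items() if c != key)
    return f"UPDATE {table} SET {sets} WHERE {key} = '{before[key]}'"

gram = ("<sync><before><row id='7' name='old'/></before>"
        "<after><row id='7' name='new'/></after></sync>")
print(updategram_to_sql(gram, "items", "id"))
# UPDATE items SET name = 'new' WHERE id = '7'
```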

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq