Re: [GENERAL] Override system-defined cast?

2006-10-13 Thread Peter






  
Is there any way I can override system-defined casts?

Tried

create cast (varchar as timestamptz) with function 
user_timestamptz(varchar) ;

and got

ERROR:  cast from type pg_catalog.varchar to type timestamptz already exists

DROP CAST does not work:

ERROR:  cannot drop cast from character varying to timestamp with time 
zone because it is required by the database system

(or are my permissions insufficient?)

Basically my problem is converting '' (empty string) to 
NULL::timestamptz, and the built-in cast cannot convert a blank string to 
timestamptz. Maybe I'm wandering up the wrong alley with casts?

One solution I can see is to create a user-defined type (which would be the 
same as timestamptz) and define a varchar-to-mytype cast, but that would require 
rather extensive changes to the database schema. Plus, our user interface 
currently relies on PG datatypes to format input/output data.

Any suggestions?

  
  
How about a function with a CASE statement in it?
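As an illustration only - a minimal sketch of such a helper, reusing the
user_timestamptz name from the attempt above (the body is an assumption, not
the actual function):

create or replace function user_timestamptz(varchar) returns timestamptz as $$
    select case when $1 = '' then null else $1::timestamptz end;
$$ language sql;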

  


That wouldn't be The Way of The Dragon ;)

Most of my SQL statements are dynamically generated. Using CASE means I
will have to check target field datatype, and apply CASE whenever it's
timestamptz. Rather messy.

I tried defining my own base type using timestamptz _in and _out
functions, and it seems to work. Had to re-create half of my database
objects due to dependencies, but now that it's done it seems to work
quite well. 

Peter






Re: [GENERAL] looping through query to update column

2006-10-13 Thread Albe Laurenz
Jean-Christophe Roux wrote:
 I am trying to loop through a table to update one column
 
 create or replace function foo() returns integer as $$
 declare
 rec RECORD;
 row integer := 0;
 begin
 for rec in select * from table loop
 update rec set recordid = row;
 row++;
 end loop;
 return 0;
 end;
 $$ language plpgsql
 
 In pgadmin, I am getting the following error message, but 
 that does not help me much:
 ERROR:  syntax error at or near $1 at character 9
 QUERY:  update  $1  set recordid =  $2 
 CONTEXT:  SQL statement in PL/PgSQL function foo near line 6

You cannot UPDATE a record, you can only UPDATE a table.

So it should be something like
UPDATE table SET recordid = row WHERE primarykey = rec.primarykey

You might use 'ctid' to identify the row if you have no suitable
primary key (you do have one, don't you?), but beware that ctid
can change suddenly and unexpectedly when somebody else modifies
the row. To protect against that, you can either LOCK the table or
SELECT ... FOR UPDATE.
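For illustration, a corrected sketch of the original function (the table name
mytable and primary key column id are placeholders; the counter is renamed to
i to avoid any ambiguity with the SQL ROW keyword):

create or replace function foo() returns integer as $$
declare
    rec record;
    i integer := 0;
begin
    for rec in select id from mytable loop
        update mytable set recordid = i where id = rec.id;
        i := i + 1;
    end loop;
    return 0;
end;
$$ language plpgsql;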

Yours,
Laurenz Albe



Re: [GENERAL] UTF-8

2006-10-13 Thread Martijn van Oosterhout
On Thu, Oct 12, 2006 at 11:09:53PM +0200, Tomi NA wrote:
 2006/10/12, Martijn van Oosterhout kleptog@svana.org:
 On Tue, Oct 10, 2006 at 11:49:06AM +0300, Martins Mihailovs wrote:
  There are some misunderstood. Im using Linux 2.6.16.4, postgresql 8.1.4,
  (there are one of locale:   lv_LV.utf8, for Latvian language). But if I
  want do lower, then with standard latin symbols all is ok, but with
  others special symbols (like umlaut in Germany) there is problems, and
  sorting is going not like alphabet but like latin alphabet and specials
  symbols after. :(
 
 You don't say what your encoding is. If it not UTF-8, that's your
 problem...
 
 Doesn't lv_LV.utf8 mean he *did* say what his encoding is?

Not really. It says the encoding the system *expects*. However, if he
actually created his database with LATIN1 encoding, it would explain
the problems he's having.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.




Re: [GENERAL] Performance Problem

2006-10-13 Thread Martijn van Oosterhout
On Thu, Oct 12, 2006 at 10:26:28PM -0700, roopa perumalraja wrote:
 I am new to postgres and I have 4 doubts.

   1) I have a performance problem as I am trying to insert around 60
   million rows to a table which is partitioned. So first I copied the
   .csv file which contains data, with COPY command to a temp table
   which was quick. It took only 15 to 20 minutes. Now I am inserting
   data from temp table to original table using insert into org_table
   (select * from temp_table); which is taking more than an hour and is
   still inserting. Is there an easy way to do this?

Does the table you're inserting into have indexes or foreign keys?
Either of those slows down loading considerably. One common workaround
is to drop the indexes and constraints, load the data and re-add them.
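A sketch of that workaround (the index, constraint and column names below are
invented for illustration):

alter table org_table drop constraint org_table_some_fkey;
drop index org_table_some_idx;
insert into org_table select * from temp_table;
create index org_table_some_idx on org_table (some_column);
alter table org_table add constraint org_table_some_fkey
    foreign key (some_column) references other_table (id);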

   2) I want to increase the performance of database as I find it very
   slow which has more than 60 million rows in one table. I increased
   the shared_buffer parameter in postgres.conf file to 2 but that
   doesn't help much.

Find out the queries that are slow and use EXPLAIN to identify possible
useful indexes.
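For example (table and column names invented):

explain analyze select * from org_table where trade_date = '2006-01-01';
-- if this shows a sequential scan on a selective condition, an index may help:
create index org_table_trade_date_idx on org_table (trade_date);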

   2) I have partitioned a parent table into 100 child tables so when
   I insert data to parent table, it automatically inserts to child
   table. I have followed
   http://www.postgresql.org/docs/8.1/interactive/ddl-partitioning.html.
   When I did this, I noticed that when I viewed data of parent table,
   it had the rows of the child table and is not empty. But the child
   tables do have the rows in it. I don’t understand.

When you select from a parent table, it shows the rows of the child
tables also, that's kind of the point. You can say: SELECT * FROM ONLY
parent;

The partitioning may only explain the slow loading...

   3) I want to use materialized views, I don’t understand it from
   http://jonathangardner.net/PostgreSQL/materialized_views/matviews.html,
   can anyone explain me with a simple example.

Can't help you there...

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.




Re: [GENERAL] postgresql.conf shared buffers

2006-10-13 Thread Harald Armin Massa
Jim, list,

from your link: http://www.varlena.com/GeneralBits/Tidbits/annotated_conf_e.html
I quote:

 As a rule of thumb, observe shared memory usage of PostgreSQL with
 tools like ipcs and determine the setting. Remember that this is
 only half the story. You also need to set effective_cache_size so
 that postgreSQL will use available memory optimally.

and add the question (not necessarily to you):

- what is the best way to observe shared memory usage on win32?
- which memory size should be taken for effective_cache_size on Windows
  servers with multiple purposes (i.e.: more than PostgreSQL running on them)?
  Available are (probable ones): physical memory, system cache, available
  memory (depends on system load)

Harald

-- 
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
-
Python: the only language with more web frameworks than keywords.


Re: [GENERAL] looping through query to update column

2006-10-13 Thread Rafal Pietrak
On Fri, 2006-10-13 at 09:23 +0200, Albe Laurenz wrote:
 You might use 'ctid' to identify the row if you have no suitable

How should I use 'ctid'? Like in the case, when I've selected something
by means of SELECT ... FOR UPDATE?


-- 
-R



Re: [GENERAL] Performance Problem

2006-10-13 Thread Uwe C. Schroeder
On Friday 13 October 2006 01:22, Martijn van Oosterhout wrote:
    1) I have a performance problem as I am trying to insert around 60
    million rows to a table which is partitioned. So first I copied the
    .csv file which contains data, with COPY command to a temp table
    which was quick. It took only 15 to 20 minutes. Now I am inserting
    data from temp table to original table using insert into org_table
    (select * from temp_table); which is taking more than an hour and is
    still inserting. Is there an easy way to do this?

 Does the table you're inserting into have indexes or foreign keys?
 Either of those slows down loading considerably. One common workaround
 is to drop the indexes and constraints, load the data and re-add them.

Why do you COPY the data into a temporary table just to do an insert into 
org_table (select * from temp_table); ? Since you're copying ALL records 
anyway, why don't you just copy the data into the org_table directly?

Also look for the autocommit setting. If autocommit is on, every insert is a 
transaction on its own - leading to a lot of overhead. Turning autocommit 
off and running the inserts in batches of - say 1000 inserts per transaction 
- will increase speed considerably.
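As a sketch, that batching looks like this in plain SQL (the table and values
are placeholders):

begin;
insert into org_table (id, val) values (1, 'a');
insert into org_table (id, val) values (2, 'b');
-- ... continue up to roughly 1000 rows per batch, then
commit;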


UC

--
Open Source Solutions 4U, LLC   1618 Kelly St
Phone:  +1 707 568 3056 Santa Rosa, CA 95401
Cell:   +1 650 302 2405 United States
Fax:+1 707 568 6416



Re: [GENERAL] Performance Problem

2006-10-13 Thread A. Kretschmer
On Fri, 13 Oct 2006 at 01:55:06 -0700, Uwe C. Schroeder wrote the following:
  Does the table you're inserting into have indexes or foreign keys?
  Either of those slows down loading considerably. One common workaround
  is to drop the indexes and constraints, load the data and re-add them.
 
 Why do you COPY the data into a temporary table just to do a insert into 
 org_table  (select * from temp_table); ? Since you're copying ALL records 
 anyways, why don't you just copy the data into the org_table directly?

Perhaps he wants to modify the data in this temp table?


 
 Also look for the autocommit setting. If autocommit is on, every insert is 
 a 
 transaction on it's own - leading to a lot of overhead. Turning autocommit 
 off and running the inserts in batches of - say 1000 inserts per transaction 
 - will increase speed considerably.

An 'insert into org_table (select * from temp_table);' is only ONE
transaction.


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47215,   D1: 0160/7141639 (mehr: - Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net



[GENERAL] encoding problem

2006-10-13 Thread jef peeraer

I never thought I would be blocked by an encoding problem :-(
My database is in LATIN1 , i have entries like this in a table called 
gemeenten

Column |   Type   |   Modifiers
---+--+
 id| integer  | serial
 gemeente  | text | not null
 postcode  | smallint | not null
 provincies_id | integer  |

This data is copied from a dump from that table

9780Quévy   70407
9781Quévy-le-Grand  70407
9782Quévy-le-Petit  70407

So, the accents are there. But with my web page, which is set to 
ISO-8859-1, I don't get the accents.

The web pages are built with XUL, where I set the charset to ISO-8859-1,
but communication with the server is through XMLHttpRequest.
Do I have to specify the charset as well in the communication between 
server and client? Or where else could it go wrong?


jef peeraer



[GENERAL] Execute a function upon a connection made/closed

2006-10-13 Thread Uyelik




Hi,
Is there any way to execute a function upon a new connection made or an
existing one closed?
Is active connections info stored in a table, or somewhere else?

Regards,
Mustafa





Re: [GENERAL] Execute a function upon a connection made/closed

2006-10-13 Thread A. Kretschmer
On Fri, 13 Oct 2006 at 12:25:38 +0300, Uyelik wrote the following:
 Hi,
 Is there any way to execute a function upon a new connection made or existing
 one closed?

IIRC no, but you can set variables; perhaps this will help you:
ALTER USER name SET parameter { TO | = } { value | DEFAULT }


 Is active connections info stored on a table or (?) ?

In a system-view, pg_stat_activity. You can select from there.
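For example (columns as in the 8.1 version of the view):

select procpid, usename, datname, query_start from pg_stat_activity;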



HTH, Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47215,   D1: 0160/7141639 (mehr: - Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net



Re: [GENERAL] Execute a function upon a connection made/closed

2006-10-13 Thread Uyelik





A. Kretschmer wrote, On 13.10.2006 12:32:

 On Fri, 13 Oct 2006 at 12:25:38 +0300, Uyelik wrote the following:

 Hi,
 Is there any way to execute a function upon a new connection made or existing
 one closed?

 IIRC no, but you can set variables; perhaps this will help you:
 ALTER USER name SET parameter { TO | = } { value | DEFAULT }

Do you mean that after a connection is made by a client, the client first sets
a variable? I need a server-side solution.

 Is active connections info stored on a table or (?) ?

 In a system-view, pg_stat_activity. You can select from there.

Yes, I can query this view, but I need to do something on the database
when new connection info is added to this view or deleted. But this is
a view and I can't add a trigger. More deeply: where is the connection info
stored?

 HTH, Andreas

Thanks,
Mustafa




[GENERAL] Partitioning vs. View of a UNION ALL

2006-10-13 Thread Ron Johnson
Hi,

I've gotten preliminary approval to buy a server and load a *lot* of
data into it.  One table will eventually have 4.5Bn 330-byte rows,
the other 9Bn 300-byte rows.  Others will only have a billion rows.
They are easily partitioned by year/month, which we call FISCAL_PERIOD.
(In fact, the app creates the integer FISCAL_PERIOD by extracting
year and month from the transaction date: YEAR*100+MONTH.)

Even though using a view means that it would have to be recreated
each period as the oldest table is dropped, it seems that it would
still be easier to work with, since you wouldn't have to worry about
preventing a badly behaved user from inserting into the DDL
partitioning's parent table, or about creating 588 CHECK constraints
(12 per year x 7 years x 7 base tables).

The most important issue, though, is query speed.  Assuming
excellent index support for query WHERE clauses, regardless of
whether partitioning or a viewed UNION ALL, which will the query
optimizer and constraint_exclusion be more friendly towards?

Thanks,
Ron
-- 
Ron Johnson, Jr.
Jefferson LA  USA

Is common sense really valid?
For example, it is common sense to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that common sense is obviously wrong.



Re: [GENERAL] Execute a function upon a connection made/closed

2006-10-13 Thread A. Kretschmer
On Fri, 13 Oct 2006 at 12:47:09 +0300, Uyelik wrote the following:
 
 A. Kretschmer wrote, On 13.10.2006 12:32:
 
 On Fri, 13 Oct 2006 at 12:25:38 +0300, Uyelik wrote the following:
 
 
 Hi,
 Is there any way to execute a function upon a new connection made or 
 existing
 one closed?
 
 
 
 IIRC no, but you can set variables, perhaps this will help you.
 ALTER name SET parameter { TO | = } { value | DEFAULT }
 
 
 
 
 Do you mean; after a connection made by a client, client first set a variable?
 I need a server side solution?

No, you can define per-user server-side variables like search_path.
Independent of the client program, every time the user logs in this
variable is set. But I don't know of a solution to call a function on
login/logout, sorry.
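For illustration, a minimal sketch of such a per-user setting (the role and
schema names are placeholders, not from the original mail):

ALTER USER someuser SET search_path = myschema, public;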


 
 
 
 Is active connections info stored on a table or (?) ?
 
 
 
 In a system-view, pg_stat_activity. You can select from there.
 
 
 
 
 Yes, i can query this view, but i need to do some thing on database upon a new
 connection info added to this (view) or deleted. But this is a view and i cant
 add trigger? More deeply where is the connection info stored?

IIRC, it's not possible to create a TRIGGER on system-tables.
Read more about this:
http://www.postgresql.org/docs/8.1/interactive/monitoring-stats.html


HTH, Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47215,   D1: 0160/7141639 (mehr: - Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net



Re: [GENERAL] Partitioning vs. View of a UNION ALL

2006-10-13 Thread Harald Armin Massa
Ron,

 Even though using a view means that it would have to be recreated
 each period as the oldest table is dropped,

please keep in mind: views are not really created ... even though the command
is named "create view". VIEWs, at least in PostgreSQL (and Oracle), are
nothing else than macros for queries - the views get substituted within the
query with their creation rule.
That is: the recreation of a VIEW is a nearly instant process (it is just
frustrating to connections using this view).

 it seems that it would still be easier to work with, since you wouldn't
 have to worry about preventing a badly behaving user from inserting into
 the DDL partition's parent table and create 588 CHECK constraints (12 per
 year x 7 years x 7 base tables).

That is true only if you trust your users not to insert into the wrong table
of your 12*7*7 tables. If you have the appropriate check constraints on your
parent table, pushing data into the inherited tables should happen
automagically (at least on my databases it does :) )

So... to make sure nobody inserts rubbish you will have to have those 588
check constraints one way or another:
a) to make your partitioning work
b) to ensure nobody inserts data for 2000 into the table for 1900

 The most important issue, though, is query speed.  Assuming excellent index
 support for query WHERE clauses, regardless of whether partitioning or a
 viewed UNION ALL, which will the query optimizer and constraint_exclusion
 be more friendly towards?

In an optimal world, shouldn't those two options be exactly the same?

a) the partition solution: the query planner decides which of your 12*7*7
tables to access and only scans those. To my understanding,
constraint_exclusion only applies to this solution.
b) the union all - or partitioning by hand: at the beginning of each partial
query there will be an index scan on your date column, learning that no data
comes from that partial query and the planner skipping on to the next.

Harald

-- 
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
-
Python: the only language with more web frameworks than keywords.
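For reference, a minimal sketch of one partition set up this way (the table
and column names are invented, not taken from Ron's schema):

create table sales (fiscal_period integer not null, amount numeric);
create table sales_200610 (
    check (fiscal_period = 200610)
) inherits (sales);
-- a rule or trigger on the parent then routes each insert to the matching child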


Re: [GENERAL] looping through query to update column

2006-10-13 Thread Albe Laurenz
Rafal Pietrak wrote:
 You might use 'ctid' to identify the row if you have no suitable
 
 How should I use 'ctid'? Like in the case, when I've selected 
 something by means of SELECT ... FOR UPDATE?

You lock the table (with LOCK) or the row you're working on
(with SELECT FOR UPDATE) so that nobody else can change it while
you are working on it.

You need something like ctid if your table has the fundamental flaw
of lacking a primary key.

Sample:

FOR row IN SELECT ctid, * FROM table FOR UPDATE LOOP
   UPDATE table SET column=value WHERE ctid=row.ctid;
   ...
END LOOP;

If your table has a primary key, use that instead and please
forget about the ctid.

Yours,
Laurenz Albe



Re: [GENERAL] Performance Problem

2006-10-13 Thread Richard Broersma Jr
   3) I want to use materialized views, I don't understand it from
 http://jonathangardner.net/PostgreSQL/materialized_views/matviews.html, can 
 anyone explain me
 with a simple example.

The following link helps to describe the idea behind each of the methods of 
materialized views.

http://jarednevans.typepad.com/technoblog/2006/03/materialized_vi.html

Regards,

Richard Broersma Jr.



[GENERAL] Postgresql 6.13

2006-10-13 Thread Geoffrey
Hi, I'm from an alternative universe and I'm trying to install 
Postgresql 6.12.  When will 6.13 be available?


Sorry, couldn't resist...

--
Until later, Geoffrey

Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety.
 - Benjamin Franklin



Re: [GENERAL] Postgresql 6.13

2006-10-13 Thread A. Kretschmer
On Fri, 13 Oct 2006 at 08:40:54 -0400, Geoffrey wrote the following:
 Hi, I'm from an alternative universe and I'm trying to install 
 Postgresql 6.12.  When will 6.13 be available?

Huh?


 
 Sorry, couldn't resist...

;-)


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47215,   D1: 0160/7141639 (mehr: - Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net



Re: [GENERAL] Postgresql 6.13

2006-10-13 Thread Greg Sabino Mullane



 Hi, I'm from an alternative universe and I'm trying to install
 Postgresql 6.12.  When will 6.13 be available?

It's in the /extras/postgres directory on the eighth
installation DVD of Duke Nukem Forever.

--
Greg Sabino Mullane [EMAIL PROTECTED]
PGP Key: 0x14964AC8 200610130927
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8





Re: [GENERAL] Partitioning vs. View of a UNION ALL

2006-10-13 Thread Merlin Moncure

On 10/13/06, Ron Johnson [EMAIL PROTECTED] wrote:

Hi,

I've gotten preliminary approval to buy a server and load a *lot* of
data into it.  One table will eventually have 4.5Bn 330 bytes rows,
the other 9Bn 300 byte rows.  Other will only have a billion rows.
 They are easily partitioned by mm, which we call FISCAL_PERIOD.
 (In fact, the app creates the integer FISCAL_PERIOD by extracting
year and month from transaction date: YEAR*100+MONTH.)


wow. if I do my math correctly, this will put you in the multi
terabyte range.  i'm sure the postgresql community (talk to Josh
Berkus) would love to hear about your experiences in this project.

anyways, regarding built-in/manual partitioning, I have to admit I am
not a big fan of the built-in table partitioning.  It was kind of
fiddly to set up, and constraint exclusion only worked on select
queries, which was the real deal killer for me.  however, the latter
issue has been addressed in 8.2
(http://developer.postgresql.org/pgdocs/postgres/release-8-2.html).
With built-in partitioning, foreign key behavior is a little bit different and
perhaps problematic, which you should consider if you plan to
enforce constraints via RI. however, you can always drop down to a
trigger calling a dynamic pl/pgsql function which is almost as good
(sometimes better) to enforce constraints.  another issue is that
sometimes the plans generated on non-trivial queries involving joins
to partitioned tables were not what i would have expected, resorting
to seq scans or not using constraint_exclusion conditions in certain
cases.  obviously, this is a moving target and may improve in later
versions of postgresql, so test your sql carefully.

one thing that is interesting is that if your data division is very
strictly regimented so that most of your operations work on exactly
one schema, you can put your partitions in separate schemas.  why do
this? well, your table names are uniform for starters.  if you are into
pl/pgsql functions you can then keep one function/operation which
operates over all your partitions without excessive use of dynamic sql
(which is not terrible, but I'd prefer not to use it if possible).
so long as you have a pretty good idea of when function plans are
generated, you can enter into your 'namespace' by manipulating
search_path and go to work.
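A sketch of that layout (schema and table names are invented for illustration):

create schema fp_200610;
create table fp_200610.sales (fiscal_period integer, amount numeric);
-- pick the partition for this session (or inside a function) via the search path:
set search_path = fp_200610, public;
select count(*) from sales;  -- now resolves to fp_200610.sales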

with a database of your size you really have to work out some test
data and try both approaches. what works is going to be a combination
of practical factors and personal style...and great feedback for the
community should you be persuaded to give regular updates on your
progress.

as with all partitioning strategies, keep an eye out for worst case behavior.

merlin



Re: [GENERAL] UTF-8

2006-10-13 Thread Tomi NA

2006/10/13, Martijn van Oosterhout kleptog@svana.org:

On Thu, Oct 12, 2006 at 11:09:53PM +0200, Tomi NA wrote:
 2006/10/12, Martijn van Oosterhout kleptog@svana.org:
 On Tue, Oct 10, 2006 at 11:49:06AM +0300, Martins Mihailovs wrote:
  There are some misunderstood. Im using Linux 2.6.16.4, postgresql 8.1.4,
  (there are one of locale:   lv_LV.utf8, for Latvian language). But if I
  want do lower, then with standard latin symbols all is ok, but with
  others special symbols (like umlaut in Germany) there is problems, and
  sorting is going not like alphabet but like latin alphabet and specials
  symbols after. :(
 
 You don't say what your encoding is. If it not UTF-8, that's your
 problem...

 Doesn't lv_LV.utf8 mean he *did* say what his encoding is?

Not really. It says the encoding the system *expects*. However, if he
actually created his database with LATIN1 encoding, it would explain
the problems he's having.


This is a recurring topic on the list: sure, it's possible to
misconfigure pg so that uppercase/lowercase/ilike/tsearch2/order don't
work with a single letter outside of the English alphabet, but the
problem Martins seems to be facing is one we've seen here before
(myself being one of those affected). There's no way Martins can set
up pg - UTF or no UTF - so that collation and case insensitivity-based
functions work in both Latvian and Russian.
Because I have the same problem with Croatian, German and Italian,
I've limited my use of pg to projects targeted at LAN or intranet
environments: others are probably switching to mysql or firebird
altogether, as it's easier to work with just one RDBMS than two.

t.n.a.



Re: [GENERAL] looping through query to update column

2006-10-13 Thread Jean-Christophe Roux
Thanks for the "ctid" trick. The code below worked fine for rec in select * from fromemail_trades loop  update fromemail_trades set recordid = row where ctid = rec.ctid;  row := row -1; end loop;The first line is a little different from your's: FOR row IN SELECT ctid, * FROM table FOR UPDATE LOOPHow important is it to specify ctid in the select and to add 'for update'?Thanks againJCR- Original Message From: Albe Laurenz [EMAIL PROTECTED]To:
 pgsql-general@postgresql.orgSent: Friday, October 13, 2006 6:24:16 AMSubject: Re: [GENERAL] looping through query to update columnRafal Pietrak wrote: You might use 'ctid' to identify the row if you have no suitable  How should I use 'ctid'? Like in the case, when I've selected  something by means of SELECT ... FOR UPDATE?You lock the table (with LOCK) or the row you're working on(with SELECT FOR UPDATE) so that nobody else can change it whileyou are working on it.You need something like ctid if your table has the fundamental flawof lacking a primary key.Sample:FOR row IN SELECT ctid, * FROM table FOR UPDATE LOOP UPDATE table SET column=value WHERE ctid=row.ctid; ...END LOOP;If your table has a primary key, use that instead and pleaseforget about the ctid.Yours,Laurenz Albe---(end of
 broadcast)---TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match

Re: [GENERAL] Postgresql 6.13

2006-10-13 Thread Richard Huxton

Greg Sabino Mullane wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1



Hi, I'm from an alternative universe and I'm trying to install
Postgresql 6.12.  When will 6.13 be available?


It's in the /extras/postgres directory on the eighth
installation DVD of Duke Nukem Forever.


Greg's mostly right, but the Vic-20 port is available as a type-in 
program listing in issue 3 of Your Computer.


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] looping through query to update column

2006-10-13 Thread Merlin Moncure

On 10/13/06, Jean-Christophe Roux [EMAIL PROTECTED] wrote:


Thanks for the ctid trick. The code below worked fine
for rec in select * from fromemail_trades loop
update fromemail_trades set recordid = row where ctid = rec.ctid;
row := row -1;
end loop;
The first line is a little different from yours:
FOR row IN SELECT ctid, * FROM table FOR UPDATE LOOP

How important is it to specify ctid in the select and to add 'for update'?


it's not. also, without a where clause you are better off just locking
the table (lock table...).  also, the above loop is better achieved
via a single query.

merlin



Re: [GENERAL] looping through query to update column

2006-10-13 Thread Andrew - Supernews
On 2006-10-13, Albe Laurenz [EMAIL PROTECTED] wrote:
 You lock the table (with LOCK) or the row you're working on
 (with SELECT FOR UPDATE) so that nobody else can change it while
 you are working on it.

 You need something like ctid if your table has the fundamental flaw
 of lacking a primary key.

Looping over rows unnecessarily is a mistake.

You can add a SERIAL column to a table using ALTER TABLE, which will
automatically number the existing rows; this is a better way to fix a
lack of a primary key than messing around with ctids.

For a one-off update, use a temporary sequence:

create temporary sequence foo;
update table set recordid = nextval('foo');

-- 
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services



Re: [GENERAL] UTF-8

2006-10-13 Thread Martijn van Oosterhout
On Fri, Oct 13, 2006 at 03:40:17PM +0200, Tomi NA wrote:
 This is a reoccurring topic on the list: sure, it's possible to
 misconfigure pg so that uppercase/lowercase/ilike/tsearch2/order don't
 work with a single letter outside of the English alphabet, but the
 problem Martins seems to be facing is one we've seen here before
 (myself being one of those affected). There's no way Martins can set
 up pg - UTF or no UTF - so that collation and case insensitivity-based
 functions work in both Latvian an Russian.

While sorting for multiple languages simultaneously is an issue, that's
not the problem here. Linux/GLibc *does* support correct sorting for
all language/charset combinations, and that's what he's using. Just for
the hell of it I set up lv_LV.utf8 on my laptop and verified that it
sorts just fine:

$ LC_ALL=lv_LV.utf8 sort < /tmp/test2 | recode utf8..latin1
a
á
B
d
F
ü
Z
$
 (hope it looks ok on your screen, the source file is utf8 but the
recode is for display).

Similarly, upper/lower are also supported, although postgresql doesn't
take advantage of the system support in that case.

So yes, PostgreSQL does have some issues in this area, but for the
platform he is using sorting *should* work.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.




Re: [GENERAL] more anti-postgresql FUD

2006-10-13 Thread Roman Neuhauser
# [EMAIL PROTECTED] / 2006-10-10 14:16:19 -0400:
 FUD from another open source project is really poor form, particulary
 when not in competing segements where a little bit of competitive
 rivalry is expected.

OMG WTF what FUD???

# [EMAIL PROTECTED] / 2006-10-10 13:55:57 -0400:
 http://www.zabbix.com/manual/v1.1/install.php

  recent benchmarks using ZABBIX clearly show that PostgreSQL
 (7.1.x) is at least 10 times slower than MySQL (3.23.29)
 
  Note: These results are predictable. ZABBIX server processes use
 simple SQL statements like single row INSERT, UPDATE and simple SELECT
 operators. In such environment, use of advanced SQL engine (like
 PostgreSQL) is overkill.

That's true.

* no need to constantly run resource-hungry command vacuum for MySQL

Last time I used MySQL that was true.

Some time ago I did a simplistic, but quite telling, test.

I had a large (several million rows), indexed table, same data, in
MySQL (4.0.x) and PostgreSQL (late 7.4), on the same RHEL or FreeBSD
(don't remember) machine. Walking over the table with

SELECT * FROM TABLE ORDER BY pk LIMIT 10 OFFSET N;

or the MySQL equivalent, MySQL was several times faster than
PostgreSQL, but the times were getting longer and longer.
As N grew in increments of 10, it took ages for MySQL to return
the rows. PostgreSQL... Well, it was as slow with N=10 as it was
with N=0.

* MySQL is used as a primary development platform.

How does *this* qualify as FUD? Or are *you* spreading FUD to scare
people from even mentioning the software?

-- 
I don't like MySQL. I hate it when people put cheerleading where reason
should prevail.



Re: [GENERAL] exploiting features of pg to obtain polymorphism

2006-10-13 Thread Karsten Hilbert
On Thu, Oct 12, 2006 at 04:40:32PM +0200, Ivan Sergio Borgonovo wrote:

 Anyway it doesn't solve the problem of having lists that
 can contain different elements with same parent and maintain
 ref. integrity.
Only to some degree.

You can put a unique constraint and a serial default on the
parent table (such as a primary key). Insertion on a child
table will fail if the key in question already exists in the
base table. It may have come from another child table.
Updating the base table updates all the relevant child
tables, too. Delete will extend from base to child tables,
too. That way I'll have a unique serial across all the child
tables. I just need to take care to not use ONLY on
update/delete on the base table or to INSERT into the base
table directly (the latter isn't really harmful to the
issue, however).

 Now back to gm code.
 
 I see you've data tables with their pk/fk relations and triggers in one 
 schema that inherit from audit tables in another.
Yes.

 You've a function that helps to put tables that have to be audited in another 
 table, nothing special compared with an insert with the exception of some 
 extra control on input.
Yes.

 Audit tables have their own pk/fk relationships and their triggers but 
 according to my knowledge they won't be considered unless you operate on 
 those table directly.
 If you operate on the data tables those triggers pk/fk won't be seen.
True. But I still get the unique pks since I don't operate
on them directly. Eventually, PG will enforce those
constraints, too.

 Considering you forbid direct insert, update and delete on those tables, 
 while pk/fk etc... are still a security net it seems that those relationship 
 will never be really used.
True as of today.

 Later on you grant the same operations to gm-doctors. This further puzzle me
Well, all objects are owned by gm-dbo. Our bootstrapper
does that. So I need to grant access rights to some people.
Namely those in the group gm-doctors.

 even if I've the suspect the code is not complete enough
 to implement the features
Yes. Eventually it is going to be something like Veil. Or
rather, I suppose it will *be* (as in use) Veil.

 Finally I read:
 comment on table audit.audited_tables is
   'All tables that need standard auditing must be
recorded in this table. Audit triggers will be
generated automatically for all tables recorded
here.';
 
 But I can't see anything doing this.
gmAuditSchemaGenerator.py in server/bootstrap/

 There is one point of contact between what I did already
 and what I would like to do but I still haven't had a good
 idea how to implement it. The use of metadata. But
 definitively I can't see polymorphism in your use of
 inheritance.
Surely not to the extent a C++ programmer would hope for.

 Any second chance to find an OO use of inherits,
Not that I know.

 cos this seems the only OO construct of pg.
Surely not. SPs can be overloaded. Datatypes can be
extended.

Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346



Re: [GENERAL] A query planner that learns

2006-10-13 Thread Erik Jones
Forgive me if I'm way off here as I'm not all that familiar with the 
internals of postgres, but isn't this what the genetic query optimizer 
discussed in one of the manual's appendixes is supposed to do?  Or, is 
that more about using the known costs of atomic operations that make 
up a query plan, i.e. more of a bottom-up approach than what you're 
discussing?  If it's the latter, then it sounds like what you're looking 
for is a classifier system.  See this Wikipedia article for a short 
description of them along with a couple of reference links; you can 
google for many more:
http://en.wikipedia.org/wiki/Learning_classifier_system


Scott Marlowe wrote:

On Thu, 2006-10-12 at 17:14, Jim C. Nasby wrote:
  

On Thu, Oct 12, 2006 at 03:31:50PM -0500, Scott Marlowe wrote:


While all the talk of a hinting system over in hackers and perform is
good, and I have a few queries that could live with a simple hint system
pop up now and again, I keep thinking that a query planner that learns
from its mistakes over time is far more desirable.

Is it reasonable or possible for the system to have a way to look at
query plans it's run and look for obvious mistakes its made, like being
off by a factor of 10 or more in estimations, and slowly learn to apply
its own hints?

Seems to me that would be far more useful than my having to babysit the
queries that are running slow and come up with hints to have the
database do what I want.

I already log slow queries and review them once a week by running them
with explain analyze and adjust what little I can, like stats targets
and such.

It seems to me the first logical step would be having the ability to
flip a switch and when the postmaster hits a slow query, it saves both
the query that ran long, as well as the output of explain or explain
analyze or some bastardized version missing some of the inner timing
info.  Even just saving the parts of the plan where the planner thought
it would get 1 row and got instead 350,000 and was using a nested loop
to join would be VERY useful.  I could see something like that
eventually evolving into a self tuning system.
  
 
Saves it and then... does what? That's the whole key...



It's meant as a first step.  I could certainly use a daily report on
which queries had bad plans so I'd know which ones to investigate
without having to run them each myself in explain analyze.  Again, my
point was to do it incrementally.  This is something someone could do
now, and someone could build on later.

To start with, it does nothing.  Just saves it for the DBA to look at. 
Later, it could feed any number of the different hinting systems people

have been proposing.

It may well be that by first looking at the data collected from problem
queries, the solution for how to adjust the planner becomes more
obvious.

  

Well, I'm busy learning to be an Oracle DBA right now, so I can't do
it.  But it would be a very cool project for the next college student
who shows up looking for one.
  

Why? There's a huge demand for PostgreSQL experts out there... or is
this for a current job?



Long story.  I do get to do lots of pgsql stuff.  But right now I'm
learning Oracle as well, cause we use both DBs.  It's just that since I
know pgsql pretty well, and know oracle hardly at all, Oracle is taking
up lots more of my time.




--
erik jones [EMAIL PROTECTED]
software development
emma(r)




Re: [GENERAL] more anti-postgresql FUD

2006-10-13 Thread Alexander Staubo

On Oct 11, 2006, at 16:54 , [EMAIL PROTECTED] wrote:

I'm author and maintainer of ZABBIX and the manual. I would like to  
add

some comments to the thread.

[snip]

I just did the test on PostgreSQL 7.4.12 and MySQL 5.0.22 (MyISAM,
sorry had no configured InnoDB). Ubuntu 6.0.6, AMD64, 2GB, default
database settings.


PostgreSQL 7.4 was released in Nov 2003, and 7.4.12 does not (afaik)  
include any performance enhancements. MySQL 5.0.22 came out in May  
2006 and, despite the low version number, includes a number of  
additional features and performance enhancements.


You might start by comparing apples to apples; apt-get install  
postgresql-8.1.



PostgreSQL does approximately 1600 records per second for the first
1, then 200rps for the first 100k records, and then slower and
slower downgrading to 10-20 rps(!!!) when reaching 300k.


You are absolutely right that PostgreSQL performs significantly worse  
than MySQL at this extremely artificial test.


On my box (Dell PowerEdge 1850, dual Xeon 2.8GHz, 4GB RAM, 10kRPM  
SCSI, Linux 2.6.15, Ubuntu) I get 1,100 updates/sec, compared to  
10,000 updates/sec with MySQL/InnoDB, using a stock installation of  
both. Insert performance is only around 10% worse than MySQL at  
around 9,000 rows/sec. Curiously enough, changing shared_buffers,  
wal_buffers, effective_cache_size and even fsync seems to have no  
effect on update performance, while fsync has a decent effect on  
insert performance.


Alexander.




Re: [GENERAL] more anti-postgresql FUD

2006-10-13 Thread Andrew - Supernews
On 2006-10-13, Alexander Staubo [EMAIL PROTECTED] wrote:
 On my box (Dell PowerEdge 1850, dual Xeon 2.8GHz, 4GB RAM, 10kRPM  
 SCSI, Linux 2.6.15, Ubuntu) I get 1,100 updates/sec, compared to  
 10,000 updates/sec with MySQL/InnoDB, using a stock installation of  
 both. Insert performance is only around 10% worse than MySQL at  
 around 9,000 rows/sec. Curiously enough, changing shared_buffers,  
 wal_buffers, effective_cache_size and even fsync seems to have no  
 effect on update performance, while fsync has a decent effect on  
 insert performance.

Your disk probably has write caching enabled. A 10krpm disk should be
limiting you to under 170 transactions/sec with a single connection
and fsync enabled.

I also did some tests on this, and even though the machine I was testing
on had some competing database activity, autovacuum was effective at
keeping the table size stable (at 70-odd pages) when running several
hundred thousand updates on a 1-row table.

-- 
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services



Re: [GENERAL] more anti-postgresql FUD

2006-10-13 Thread Alexander Staubo

On Oct 13, 2006, at 17:13 , Andrew - Supernews wrote:


On 2006-10-13, Alexander Staubo [EMAIL PROTECTED] wrote:

On my box (Dell PowerEdge 1850, dual Xeon 2.8GHz, 4GB RAM, 10kRPM
SCSI, Linux 2.6.15, Ubuntu) I get 1,100 updates/sec, compared to
10,000 updates/sec with MySQL/InnoDB, using a stock installation of
both. Insert performance is only around 10% worse than MySQL at
around 9,000 rows/sec. Curiously enough, changing shared_buffers,
wal_buffers, effective_cache_size and even fsync seems to have no
effect on update performance, while fsync has a decent effect on
insert performance.


Your disk probably has write caching enabled. A 10krpm disk should be
limiting you to under 170 transactions/sec with a single connection
and fsync enabled.


What formula did you use to get to that number? Is there a generic  
way on Linux to turn off (controller-based?) write caching?


Alexander.



Re: [GENERAL] more anti-postgresql FUD

2006-10-13 Thread Merlin Moncure

On 10/13/06, Roman Neuhauser [EMAIL PROTECTED] wrote:

# [EMAIL PROTECTED] / 2006-10-10 14:16:19 -0400:
 FUD from another open source project is really poor form, particulary
 when not in competing segements where a little bit of competitive
 rivalry is expected.

OMG WTF what FUD???


please see my later comments.  'fud' is not a great term. however, if
you are going to publish remarks about another project that might be
perceived as disparaging, please keep them up to date and factually
relevant.  I can write queries that are 10x slower on mysql than on
postgresql but that ultimately means nothing.  the major point, though,
is that zabbix does *not* run 10x slower on postgresql and I am going
to prove it.

btw, i never said anything disparaging about mysql or zabbix. i am
focused like a laser beam on the comments in the documentation and the
greater implications for the community.


I had a large (several milion rows), indexed table, same data, in
MySQL (4.0.x) and PostgreSQL (late 7.4), on the same RHEL or FreeBSD
(don't remember) machine. Walking over the table with

SELECT * FROM TABLE ORDER BY pk LIMIT 10 OFFSET N;


using offset to walk a table is extremely poor form because of:
* poor performance
* single user mentality
* flat file mentality

databases are lousy at this because they inherently do not support
absolute addressing of data -- nor should they, because this is not
what sql is all about.  in short, 'offset' is a hack, albeit a useful
one in some cases, but don't gripe when it doesn't deliver the goods.

for server side browsing use cursors or a hybrid pl/pgsql loop. for
client side, browse fetching relative to the last key:

select * from foo where p > p1 order by p limit k;

in 8.2, we get proper row-wise comparisons so you can do this with multiple part keys:

select * from foo where (a1,b1,c1) < (a,b,c) order by a,b,c limit k;

for fast dynamic browsing you can vary k for progressive fetches.


or the MySQL equivalent, MySQL was several times faster than
PostgreSQL, but the times were getting longer and longer
As N grew in increments of 10, it took ages for MySQL to return
the rows. PostgreSQL... Well, it was as slow with N=10 as it was
with N=0.

* MySQL is used as a primary development platform.

How does *this* qualify as FUD? Or are *you* spreading FUD to scare
people from even mentioning the software?


I think zabbix is fine software.  I would hopefully prefer that if
someone were to write what could be perceived as negative things about
postgresql, they would back it up with facts better than 'update foo
set id = 0' ran a million times or 'select * from foo limit 1 offset
10'


I don't like MySQL. I hate it when people put cheerleading where reason
should prevail.


outside of the 'fud' statement, which was a hastily written reaction,
my tone has been more constructive criticism.

merlin



Re: [GENERAL] more anti-postgresql FUD

2006-10-13 Thread Andrew - Supernews
On 2006-10-13, Alexander Staubo [EMAIL PROTECTED] wrote:
 On Oct 13, 2006, at 17:13 , Andrew - Supernews wrote:
 Your disk probably has write caching enabled. A 10krpm disk should be
 limiting you to under 170 transactions/sec with a single connection
 and fsync enabled.

 What formula did you use to get to that number?

It's just the number of disk revolutions per second. Without caching, each
WAL flush tends to require a whole revolution unless the on-disk layout of
the filesystem is _very_ strange. You can get multiple commits per WAL
flush if you have many concurrent connections, but with a single connection
that doesn't apply.
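(To spell out the arithmetic: 10,000 rpm / 60 s is roughly 167 revolutions per
second, hence the "under 170 transactions/sec" figure above.)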

 Is there a generic  
 way on Linux to turn off (controller-based?) write caching?

I don't use Linux, sorry. Modern SCSI disks seem to ship with WCE=1 on
mode page 8 on the disk, thus enabling evil write caching by default.

-- 
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services



Re: [GENERAL] UTF-8

2006-10-13 Thread Tomi NA

2006/10/13, Martijn van Oosterhout kleptog@svana.org:


While sorting for multiple languages simultaneously is an issue, that's
not the problem here. Linux/GLibc *does* support correct sorting for
all language/charset combinations, and that's what he's using. Just for
the hell of it I setup lv_LV.utf8 on my laptop and verifed that it
sorts just fine:

...

Similarly, upper/lower are also supported, although postgresql doesn't
take advantage of the system support in that case.


I think this is the crux of the problem. Not supporting uppercase and
lowercase makes e.g. a generic people-search dialog malfunction:
searching for Müller will not find him if he is stored as OTTO
MÜLLER in the database. Certainly not if I have to make sure the
search finds one Zvonimir Šimić stored as ZVONIMIR ŠIMIĆ. What's
more, if the user gives up on the integrated search and tries to list
all the people in such a database ordered by their last names, he
probably won't find Šimić because the user expects him to be between S
and T, not after 'Z' (where he ends up because the letter code of 'Š'
is greater than that of 'Z').
As for Martins' problem, he needs to support 2 non-English languages,
which means he's stuck with the same problem of one language being
semi-functional.

As an aside, why doesn't pg take advantage of the underlying system's
support of upper/lower case?

t.n.a.



Re: [GENERAL] more anti-postgresql FUD

2006-10-13 Thread Alexander Staubo

On Oct 13, 2006, at 17:35 , Andrew - Supernews wrote:


On 2006-10-13, Alexander Staubo [EMAIL PROTECTED] wrote:

On Oct 13, 2006, at 17:13 , Andrew - Supernews wrote:
Your disk probably has write caching enabled. A 10krpm disk  
should be

limiting you to under 170 transactions/sec with a single connection
and fsync enabled.


What formula did you use to get to that number?


It's just the number of disk revolutions per second. Without  
caching, each
WAL flush tends to require a whole revolution unless the on-disk  
layout of

the filesystem is _very_ strange. You can get multiple commits per WAL
flush if you have many concurrent connections, but with a single  
connection

that doesn't apply.


Makes sense. However, in this case I was batching updates in  
transactions and committing each txn at 1 second intervals, all on a  
single connection. In other words, the bottleneck illustrated by this  
test should not be related to fsyncs, and this does not seem to  
explain the huge discrepancy between update (1,000/sec) and insert  
(9,000 inserts/sec, also in 1-sec txns) performance.


Alexander.




Re: [GENERAL] A query planner that learns

2006-10-13 Thread John D. Burger

Erik Jones wrote:

Forgive me if I'm way off here as I'm not all that familiar with  
the internals of postgres, but isn't this what the genetic query  
optimizer discussed the one of the manual's appendixes is supposed  
to do.


No - it's not an optimizer in that sense.  When there are a small  
enough set of tables involved, the planner uses a dynamic programming  
algorithm to explore the entire space of all possible plans.  But the  
space grows exponentially (I think) with the number of tables - when  
this would take too long, the planner switches to a genetic algorithm  
approach, which explores a small fraction of the plan space, in a  
guided manner.


But with both approaches, the planner is just using the static  
statistics gathered by ANALYZE to estimate the cost of each candidate  
plan, and these statistics are based on sampling your data - they may  
be wrong, or at least misleading.  (In particular, the statistic for  
total number of unique values is frequently =way= off, per a recent  
thread here.  I have been reading about this, idly thinking about how  
to improve the estimate.)


The idea of a learning planner, I suppose, would be one that examines  
cases where these statistics lead to very misguided expectations.   
The simplest version of a learning planner could simply bump up the  
statistics targets on certain columns.  A slightly more sophisticated  
idea would be for some of the statistics to optionally use parametric  
modeling (this column is a Gaussian, let's estimate the mean and  
variance, this one is a Beta distribution ...).  Then the smarter  
planner could spend some cycles applying more sophisticated  
statistical modeling to problematic tables/columns.


- John D. Burger
  MITRE




Re: [GENERAL] more anti-postgresql FUD

2006-10-13 Thread Andrew - Supernews
On 2006-10-13, Alexander Staubo [EMAIL PROTECTED] wrote:
 Makes sense. However, in this case I was batching updates in  
 transactions and committing each txn at 1 second intervals, all on a  
 single connection. In other words, the bottleneck illustrated by this  
 test should not be related to fsyncs, and this does not seem to  
 explain the huge discrepancy between update (1,000/sec) and insert  
 (9,000 inserts/sec, also in 1-sec txns) performance.

Update has to locate the one live row version amongst all the dead ones;
insert doesn't need to bother.

-- 
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services



[GENERAL] Server Added Y'day. Missing Today

2006-10-13 Thread Harpreet Dhaliwal
Hi,

Yesterday I configured a postgres server using pgAdmin3 and restored the
database from a backup. Today when I come and check my pgAdmin, I can't see
any server added there. Kind of confused. Can anyone please let me know
what's happening here and what I should do to see the server that I added
yesterday.

Thanks,
Harpreet


Re: [GENERAL] A query planner that learns

2006-10-13 Thread AgentM


On Oct 13, 2006, at 11:47 , John D. Burger wrote:


Erik Jones wrote:

Forgive me if I'm way off here as I'm not all that familiar with  
the internals of postgres, but isn't this what the genetic query  
optimizer discussed the one of the manual's appendixes is supposed  
to do.


No - it's not an optimizer in that sense.  When there are a small  
enough set of tables involved, the planner uses a dynamic  
programming algorithm to explore the entire space of all possible  
plans.  But the space grows exponentially (I think) with the number  
of tables - when this would take too long, the planner switches to  
a genetic algorithm approach, which explores a small fraction of  
the plan space, in a guided manner.


But with both approaches, the planner is just using the static  
statistics gathered by ANALYZE to estimate the cost of each  
candidate plan, and these statistics are based on sampling your  
data - they may be wrong, or at least misleading.  (In particular,  
the statistic for total number of unique values is frequently =way=  
off, per a recent thread here.  I have been reading about this,  
idly thinking about how to improve the estimate.)


The idea of a learning planner, I suppose, would be one that  
examines cases where these statistics lead to very misguided  
expectations.  The simplest version of a learning planner could  
simply bump up the statistics targets on certain columns.  A  
slightly more sophisticated idea would be for some of the  
statistics to optionally use parametric modeling (this column is a  
Gaussian, let's estimate the mean and variance, this one is a Beta  
distribution ...).  Then the smarter planner could spend some  
cycles applying more sophisticated statistical modeling to  
problematic tables/columns.


One simple first step would be to run an ANALYZE whenever a  
sequential scan is executed. Is there a reason not to do this? It  
could be controlled by a GUC variable in case someone wants  
repeatable plans.


Further down the line, statistics could be collected during the  
execution of any query- updating histograms on delete and update, as  
well.


-M

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] UTF-8

2006-10-13 Thread Tom Lane
Tomi NA [EMAIL PROTECTED] writes:
 2006/10/13, Martijn van Oosterhout kleptog@svana.org:
 Similarly, upper/lower are also supported, although postgresql doesn't
 take advantage of the system support in that case.

 I think this is the crux of the problem.

If it were true, then it might be ...

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] exploiting features of pg to obtain polymorphism

2006-10-13 Thread Jeff Davis
On Fri, 2006-10-06 at 23:09 +0200, Ivan Sergio Borgonovo wrote:
 Is there any good documentation, example, tutorial, pamphlet, discussion... 
 to exploit pg features to obtain polymorphic behavior without renouncing to 
 referential integrity?
 
 Inheritance seems *just* promising.
 
 Any methodical approach to the problem in a pg context?
 

I'm not sure if this answers your question, but here's how I do
inheritance in the relational model.

Just make a parent table that holds a more generic object like:

CREATE TABLE person (name TEXT PRIMARY KEY, age INT, height NUMERIC);

Then a child table like:

CREATE TABLE student (name TEXT REFERENCES person(name), gpa NUMERIC);

Every person, student or otherwise has a record in person. If, and
only if, they are a student they have a record in the student table.

To select all people, select only from the person table. To select all
students, select from the join of the two tables.
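
For example, an illustrative query using the columns above (my own example,
not from the original post):

SELECT p.name, p.age, p.height, s.gpa
FROM person p
JOIN student s ON s.name = p.name;   -- only people who are also students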

Regards,
Jeff Davis 


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Query

2006-10-13 Thread Bob Pawley



I have a trigger that produces an error "returns more than one row".

My intent is to fill one table (library.specification) from another
(p_id.specification). The p_id table can have multiple instances of the same
fluid but I want the library table to have only one record of each fluid.

Any insight into what is happening will be appreciated.

Bob Pawley


create or replace function library_spec() returns trigger as $$
begin

Declare
 fluid_type varchar ;

Begin
 Select fluid into fluid_type
 From p_id.specifications
 Where fluid = new.fluid ;

If fluid_type <> library.specifications.fluid Then
Insert Into library.specifications (fluid) values (new.fluid_type) ;

Elseif fluid_type = library.specifications.fluid Then
Do Nothing ;

End if ;
return null ;
end ;
end ;
$$ language plpgsql ;

create trigger libspec after insert on p_id.processes
for each row execute procedure library_spec();


Re: [GENERAL] PostgreSQL Database Transfer between machines(again)

2006-10-13 Thread Brian J. Erickson

The machine did boot and PostgreSQL started back up, etc.

Thanks.

- Original Message -
From: Joshua D. Drake [EMAIL PROTECTED]
To: Brian J. Erickson [EMAIL PROTECTED]
Cc: PostgreSQL Mailing lists pgsql-general@postgresql.org
Sent: Friday, October 06, 2006 9:17 AM
Subject: Re: [GENERAL] PostgreSQL Database Transfer between machines(again)


 Brian J. Erickson wrote:
  You can boot from any rescue CD, mount the partition, copy the
database
  directory away and then copy it back once you have reinstalled. This
is
  safe because it is on the same machine. It is not safe to copy the
  database to some arbitrary computer and expect it to run.
  That is basically the plan but I want to make sure that I have all of
the
  data.
 
  O.k. hold on... are you getting any errors in /var/log/messages?
  Here is some of the /var/log/messages file


 That's odd. Have you tried removing /etc/mtab manually and rebooting?
 does the problem still occur? As far as PostgreSQL... you need to look
 in to postgresql logs.

 Did you compile from source? What version of Linux is this?

 Joshua D. Drake



 ---Begin--
--
  --
  Oct  6 07:57:27 Info1A kernel: PCI: Using configuration type 1
  Oct  6 07:57:27 Info1A kernel: PCI: Probing PCI hardware
  Oct  6 07:57:27 Info1A mount: mount: can't open /etc/mtab for writing:
  Input/output error
  Oct  6 07:57:27 Info1A kernel: PCI: Probing PCI hardware (bus 00)
  Oct  6 07:57:27 Info1A netfs: Mounting other filesystems:  failed
  .
  .
  .
  Oct  6 07:57:33 Info1A su(pam_unix)[1229]: session opened for user
postgres
  by (uid=0)
  Oct  6 07:57:33 Info1A su(pam_unix)[1229]: session closed for user
postgres
  Oct  6 07:57:34 Info1A postgresql: Starting postgresql service:  failed

 -End--
--
  -
 
 
 
  - Original Message -
  From: Joshua D. Drake [EMAIL PROTECTED]
  To: AgentM [EMAIL PROTECTED]
  Cc: PostgreSQL Mailing lists pgsql-general@postgresql.org
  Sent: Thursday, October 05, 2006 5:42 PM
  Subject: Re: [GENERAL] PostgreSQL Database Transfer between
machines(again)
 
 
  AgentM wrote:
  On Oct 5, 2006, at 19:10 , Brian J. Erickson wrote:
 
  And since it's a text file, can't someone fix it with $EDITOR?
  I tried to edit the file, but I get the Input/Output error.
 
  O.k. hold on... are you getting any errors in /var/log/messages?
 
  Joshua D. Drake
 
 
 
 
  The recommendatation was to re-install the OS.
  However, I DO NOT want to lose my database,
  so I am tring to backup the database.
  You can boot from any rescue CD, mount the partition, copy the
database
  directory away and then copy it back once you have reinstalled. This
is
  safe because it is on the same machine. It is not safe to copy the
  database to some arbitrary computer and expect it to run. Make sure to
  match the database version. Good luck!
 
  -M
 
  ---(end of
broadcast)---
  TIP 4: Have you searched our list archives?
 
http://archives.postgresql.org
 
 
  --
 
 === The PostgreSQL Company: Command Prompt, Inc. ===
  Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
 Providing the most comprehensive  PostgreSQL solutions since 1997
   http://www.commandprompt.com/
 
 
 
  ---(end of
broadcast)---
  TIP 3: Have you checked our extensive FAQ?
 
 http://www.postgresql.org/docs/faq
 
 
 
  ---(end of broadcast)---
  TIP 3: Have you checked our extensive FAQ?
 
 http://www.postgresql.org/docs/faq
 


 --

=== The PostgreSQL Company: Command Prompt, Inc. ===
 Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
  http://www.commandprompt.com/



 ---(end of broadcast)---
 TIP 3: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faq



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] some log statements ignored

2006-10-13 Thread brian

Bruce Momjian wrote:

brian wrote:


I changed my postgresql.conf to have:

log_statement = mod

It appears to be working, though not logging *all* INSERTs. For 
instance, I have a PHP class that inserts into two tables in a 
transaction. The log shows the first, but not the second. Has anyone 
seen this behaviour?



test=# show log_statement;
 log_statement
---
 mod
(1 row)



I have no idea why that would happen.  If you do 'all' do you see all of
them?



Sorry--i hadn't had time to run a test. Setting it to 'all' works fine, 
and i think i see the problem: the second INSERT is in a prepared 
statement, so it's not being logged.


PREPARE mdb2_statement_pgsql00fb05c2c509aa2608b68bf2b87693a2 AS INSERT 
INTO ...


(this is using the PEAR MDB2 package)

So, log_statement= 'mod' won't log a PREPARE ... AS INSERT, i guess.

b

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] UTF-8

2006-10-13 Thread Martijn van Oosterhout
On Fri, Oct 13, 2006 at 12:04:02PM -0400, Tom Lane wrote:
 Tomi NA [EMAIL PROTECTED] writes:
  2006/10/13, Martijn van Oosterhout kleptog@svana.org:
  Similarly, upper/lower are also supported, although postgresql doesn't
  take advantage of the system support in that case.
 
  I think this is the crux of the problem.
 
 If it were true, then it might be ...

Eh? Here's the declaration of pg_toupper:

unsigned char pg_toupper(unsigned char ch);

Characters haven't fitted in an unsigned char in a very long time. It's
obviously bogus for any multibyte encoding (the code even says so). For
such encodings you could use the system's towupper() (ANSI C/Unix98)
which will work on any unicode char.

To make this work, pg_strupper() will have to convert each character to
Unicode, run towupper() and convert back to the encoding. I imagine
that'll get rejected for being inefficient, but really don't see any
other way.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] more anti-postgresql FUD

2006-10-13 Thread Martijn van Oosterhout
On Fri, Oct 13, 2006 at 03:35:37PM -, Andrew - Supernews wrote:
 It's just the number of disk revolutions per second. Without caching, each
 WAL flush tends to require a whole revolution unless the on-disk layout of
 the filesystem is _very_ strange. You can get multiple commits per WAL
 flush if you have many concurrent connections, but with a single connection
 that doesn't apply.

Is that really true? In theory block n+1 could be half a revolution
after block n, allowing you to commit two transactions per revolution.

If you work with the assumption that blocks are consecutive I can see
your point, but is that a safe assumption?

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] some log statements ignored

2006-10-13 Thread Tom Lane
brian [EMAIL PROTECTED] writes:
 Sorry--i hadn't had time to run a test. Setting it to 'all' works fine, 
 and i think i see the problem: the second INSERT is in a prepared 
 statement, so it's not being logged.

 PREPARE mdb2_statement_pgsql00fb05c2c509aa2608b68bf2b87693a2 AS INSERT 
 INTO ...

I'm betting that's really a Parse protocol message, not a PREPARE
statement as such (the 8.1 logging code misguidedly tries to obscure the
difference).  The logging of the subsequent Bind/Execute messages is
really weak in existing releases :-(.  We've fixed it up for 8.2 though.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] UTF-8

2006-10-13 Thread Tom Lane
Martijn van Oosterhout kleptog@svana.org writes:
 Characters havn't fitted in an unsigned char in a very long time. It's
 obviously bogus for any multibyte encoding (the code even says so). For
 such encodings you could use the system's towupper() (ANSI C/Unix98)
 which will work on any unicode char.

http://developer.postgresql.org/cvsweb.cgi/pgsql/src/backend/utils/adt/oracle_compat.c?rev=1.67

 * If the system provides the needed functions for wide-character manipulation
 * (which are all standardized by C99), then we implement upper/lower/initcap
 * using wide-character functions.  Otherwise we use the traditional ctype.h
 * functions, which of course will not work as desired in multibyte character
 * sets.  Note that in either case we are effectively assuming that the
 * database character encoding matches the encoding implied by LC_CTYPE.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Server Added Y'day. Missing Today

2006-10-13 Thread Shane Ambler

Harpreet Dhaliwal wrote:

Hi,
Yesterday I configured a postgres server using Pgadmin3, restored the
database from a backup.
Today when I come and check my pgAdmin, I can't see any server added there.
Kind of confused.
Can anyone please let me know what's happening here and what I should do to
see the server that I added yesterday.

Thanks,
Harpreet



I think the only issue you will find is pgAdmin didn't save its 
settings. I have seen this happen if it doesn't quit nicely - it saves 
its settings when you quit pgAdmin, not when you change the settings. 
This won't affect the postgresql server with all your data.


Simply enter the server details in pgAdmin again and connect as well as 
any other pgAdmin options you may have changed.


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] more anti-postgresql FUD

2006-10-13 Thread Tom Lane
Martijn van Oosterhout kleptog@svana.org writes:
 On Fri, Oct 13, 2006 at 03:35:37PM -, Andrew - Supernews wrote:
 It's just the number of disk revolutions per second. Without caching, each
 WAL flush tends to require a whole revolution unless the on-disk layout of
 the filesystem is _very_ strange.

 Is that really true? In theory block n+1 could be half a revolution
 after block n, allowing you to commit two transactions per revolution.

Not relevant, unless the prior transaction happened to end exactly at a
WAL block boundary.  Otherwise, you still have to re-write the back end
of the same disk block the previous transaction wrote into.  (In
practice, for the sort of tiny transactions that are at stake here,
quite a few xacts fit into a single WAL block so the same block is
rewritten several times before moving on to the next.)

There was a long thread in -hackers a couple years back exploring ways
to break this 1 xact per disk rotation barrier with more creative
layouts of the WAL files, but nobody could come up with something that
looked reasonably robust --- ie, both safe and not full of unsupportable
assumptions about knowing exactly where everything actually is on the
disk platter.  It'd still be interesting if anyone gets a new idea...

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] A query planner that learns

2006-10-13 Thread Jim C. Nasby
On Thu, Oct 12, 2006 at 05:39:20PM -0500, Scott Marlowe wrote:
   It seems to me the first logical step would be having the ability to
   flip a switch and when the postmaster hits a slow query, it saves both
   the query that ran long, as well as the output of explain or explain
   analyze or some bastardized version missing some of the inner timing
   info.  Even just saving the parts of the plan where the planner thought
   it would get 1 row and got instead 350,000 and was using a nested loop
   to join would be VERY useful.  I could see something like that
   eventually evolving into a self tuning system.
   
  Saves it and then... does what? That's the whole key...
 
 It's meant as a first step.  I could certainly use a daily report on
 which queries had bad plans so I'd know which ones to investigate
 without having to run them each myself in explain analyze.  Again, my
 point was to do it incrementally.  This is something someone could do
 now, and someone could build on later.
 
 To start with, it does nothing.  Just saves it for the DBA to look at. 
 Later, it could feed any number of the different hinting systems people
 have been proposing.
 
 It may well be that by first looking at the data collected from problems
 queries, the solution for how to adjust the planner becomes more
 obvious.

Yeah, that would be useful to have. The problem I see is storing that
info in a format that's actually useful... and I'm thinking that a
logfile doesn't qualify since you can't really query it.
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] A query planner that learns

2006-10-13 Thread Jim C. Nasby
On Fri, Oct 13, 2006 at 11:53:15AM -0400, AgentM wrote:
 One simple first step would be to run an ANALYZE whenever a  
 sequential scan is executed. Is there a reason not to do this? It  

Yes. You want a seqscan on a small (couple pages) table, and ANALYZE has
a very high overhead on some platforms.

Just recording the query plan and actual vs estimated rowcounts would be
a good start, though. And useful to DBA's, provided you had some means
to query against it.
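
Purely as an illustration of "some means to query against it" (this table
doesn't exist anywhere, it's just a sketch of the shape such a log might
take):

CREATE TABLE plan_estimate_log (
    logged_at    timestamptz DEFAULT now(),
    query_text   text,
    plan_node    text,
    est_rows     bigint,
    actual_rows  bigint
);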
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] more anti-postgresql FUD

2006-10-13 Thread Merlin Moncure

On 10/13/06, Tom Lane [EMAIL PROTECTED] wrote:

Martijn van Oosterhout kleptog@svana.org writes:
 Is that really true? In theory block n+1 could be half a revolution
 after block n, allowing you to commit two transactions per revolution.

Not relevant, unless the prior transaction happened to end exactly at a


does full page writes setting affect this?

merlin

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Partitioning vs. View of a UNION ALL

2006-10-13 Thread Jim C. Nasby
The only case I can think of where view partitioning makes more sense is
if it's list partitioning where you can also drop a field from your
tables. IE: if you have 10 projects, create 10 project_xx tables where
xx is the ID of the project, UNION ALL them together in a view, and
create rules on that view to handle DML.

Note I haven't actually tested to see if this is better than inherited
tables...
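
A rough sketch of that layout, with made-up table/view/rule names and
completely untested:

CREATE TABLE project_01 (item_id int PRIMARY KEY, amount numeric);
CREATE TABLE project_02 (item_id int PRIMARY KEY, amount numeric);

CREATE VIEW projects AS
    SELECT 1 AS project_id, item_id, amount FROM project_01
    UNION ALL
    SELECT 2 AS project_id, item_id, amount FROM project_02;

-- one conditional rule per project table to route inserts (UPDATE/DELETE
-- analogous); note the view also needs an unconditional DO INSTEAD rule
-- before it will accept INSERTs at all
CREATE RULE projects_ins_01 AS ON INSERT TO projects
    WHERE NEW.project_id = 1
    DO INSTEAD INSERT INTO project_01 VALUES (NEW.item_id, NEW.amount);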

On Fri, Oct 13, 2006 at 05:00:23AM -0500, Ron Johnson wrote:
 Hi,
 
 I've gotten preliminary approval to buy a server and load a *lot* of
 data into it.  One table will eventually have 4.5Bn 330 bytes rows,
 the other 9Bn 300 byte rows.  Other will only have a billion rows.
  They are easily partitioned by mm, which we call FISCAL_PERIOD.
  (In fact, the app creates the integer FISCAL_PERIOD by extracting
 year and month from transaction date: YEAR*100+MONTH.)
 
 Even though using a view means that it would have to be recreated
 each period as the oldest table is dropped, it seems that it would
 still be easier to work with, since you wouldn't have to worry about
 preventing a badly behaving user from inserting into the DDL
 partition's parent table and create 588 CHECK constraints (12 per
 year x 7 years x 7 base tables).
 
 The most important issue, though, is query speed.  Assuming
 excellent index support for query WHERE clauses, regardless of
 whether partitioning or a viewed UNION ALL, which will the query
 optimizer and constraint_exclusion be more friendly towards?
 
 Thanks,
 Ron
 -- 
 Ron Johnson, Jr.
 Jefferson LA  USA
 
 Is common sense really valid?
 For example, it is common sense to white-power racists that
 whites are superior to blacks, and that those with brown skins
 are mud people.
 However, that common sense is obviously wrong.
 
 ---(end of broadcast)---
 TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match
 

-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Query

2006-10-13 Thread Jeff Davis
On Fri, 2006-10-13 at 09:42 -0700, Bob Pawley wrote:
 I have a trigger that produces an error returns more than one row.
  
 My intent is to fill one table (library.specification) from another
 (p_id.specification). The p_id table can have multiple instances of
 the same fluid but I want the library table to have only one record of
 each fluid.
  
 Any insight into what is happening will be appreciated.
  
 Bob Pawley
  
  
  create or replace function library_spec() returns trigger as $$
  begin
  
  Declare 
   fluid_type varchar ;
  
  Begin
   Select fluid into fluid_type
  From p_id.specifications
  Where  fluid = new.fluid ;

Why not just do fluid_type := new.fluid? I don't understand what that
query is supposed to do. You can't fit multiple records into the
fluid_type variable. This might be the source of your error if there are
multiple records with the same fluid_type in p_id.specifications.

  
  If fluid_type <> library.specifications.fluid Then
  Insert Into library.specifications (fluid) values (new.fluid_type) ;
  
  Elseif 
  fluid_type = library.specifications.fluid Then 
  Do Nothing ;

Why an elseif? I don't understand.

  
  End if ;
  return null ;
  end ;
  end ;
  $$ language plpgsql ;
  
  create trigger libspec after insert on p_id.processes
  for each row execute procedure library_spec();

Hope this helps.

Regards,
Jeff Davis




---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Backup DB not getting connected

2006-10-13 Thread J S B
Hi,

I took a back up of my database and restored it in a new DB. When I'm trying
to connect to the new DB from a client machine using ECPG connection
techniques, it says newDB doesn't exist. There's another DB in the same DB
server and if I try to connect to that DB then it doesn't cry over anything.

Thanks
Jas


Re: [GENERAL] more anti-postgresql FUD

2006-10-13 Thread Jeff Davis
On Fri, 2006-10-13 at 13:52 -0400, Merlin Moncure wrote:
 On 10/13/06, Tom Lane [EMAIL PROTECTED] wrote:
  Martijn van Oosterhout kleptog@svana.org writes:
   Is that really true? In theory block n+1 could be half a revolution
   after block n, allowing you to commit two transactions per revolution.
 
  Not relevant, unless the prior transaction happened to end exactly at a
 
 does full page writes setting affect this?
 

No, full page writes only affects checkpoints.

For a transaction to commit, some bits must hit permanent storage
*somewhere*. If that location is in one general area on disk, you must
either commit several transactions at once (see commit_delay), or you
must wait until the next revolution to get back to that area of the
disk.
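
For reference, the group-commit knobs mentioned above can be inspected and
set from SQL (the values below are only an illustration, not a
recommendation):

SHOW commit_delay;        -- microseconds to wait for other commits to join in
SET commit_delay = 100;
SET commit_siblings = 5;  -- only delay when at least this many xacts are open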

Regards,
Jeff Davis


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] more anti-postgresql FUD

2006-10-13 Thread Jim C. Nasby
On Fri, Oct 13, 2006 at 01:52:10PM -0400, Merlin Moncure wrote:
 On 10/13/06, Tom Lane [EMAIL PROTECTED] wrote:
 Martijn van Oosterhout kleptog@svana.org writes:
  Is that really true? In theory block n+1 could be half a revolution
  after block n, allowing you to commit two transactions per revolution.
 
 Not relevant, unless the prior transaction happened to end exactly at a
 
 does full page writes setting affect this?

If anything it makes it more true, but full pages are only written the
first time a page is dirtied after a checkpoint, so in a
high-transaction system I suspect they don't have a lot of impact.

It would be nice to have stats on how many transactions have to write a
full page, as well as how many have been written, though...
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] some log statements ignored

2006-10-13 Thread brian

Tom Lane wrote:

brian [EMAIL PROTECTED] writes:

Sorry--i hadn't had time to run a test. Setting it to 'all' works fine, 
and i think i see the problem: the second INSERT is in a prepared 
statement, so it's not being logged.



PREPARE mdb2_statement_pgsql00fb05c2c509aa2608b68bf2b87693a2 AS INSERT 
INTO ...



I'm betting that's really a Parse protocol message, not a PREPARE
statement as such (the 8.1 logging code misguidedly tries to obscure the
difference).  The logging of the subsequent Bind/Execute messages is
really weak in existing releases :-(.  We've fixed it up for 8.2 though.



You mean, because the PREPARE .. AS INSERT is different from the actual 
EXECUTE statement that follows it? And the latter isn't flagged as a 
mod action?


In any case, i guess it's not really a big deal. It just seemed 
mysterious why the second one wasn't showing up.


b

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] some log statements ignored

2006-10-13 Thread Tom Lane
brian [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 I'm betting that's really a Parse protocol message, not a PREPARE
 statement as such (the 8.1 logging code misguidedly tries to obscure the
 difference).  The logging of the subsequent Bind/Execute messages is
 really weak in existing releases :-(.  We've fixed it up for 8.2 though.

 You mean, because the PREPARE .. AS INSERT is different from the actual 
 EXECUTE statement that follows it? And the latter isn't flagged as a 
 mod action?

No, if you were using actual SQL PREPARE and EXECUTE statements via
simple query protocol, I think they would both get logged (at least
the current 8.1.5 code looks like it will, not so sure about 8.1.0-4).
The problem is that extended query protocol is a different code path
that doesn't have the same logging support.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Backup DB not getting connected

2006-10-13 Thread Shane Ambler

J S B wrote:

Hi,
I took a back up of my database and restored it in a new DB..When I'm 
trying

to connect to the new DB from a client machine using ECPG connection
techniques, it says newDB doesn't exist.
There's another DB in the same DB server and if I try and connect to 
that DB

then it doesn't cry over anything.

Thanks
Jas

I am guessing that you can verify that the newDB exists and has the data; 
you just have trouble from the ECPG client from any machine.


I would check access privileges - does the user you log in as using ECPG 
have access to newDB? - the 'DB doesn't exist' may be a wrong error 
which should say access denied.
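
A quick way to compare the two databases' privileges is a generic catalog
query from psql (nothing specific to your setup):

SELECT datname, datdba, datacl FROM pg_database;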


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] more anti-postgresql FUD

2006-10-13 Thread Stephen Frost
* Alexander Staubo ([EMAIL PROTECTED]) wrote:
 What formula did you use to get to that number? Is there a generic  
 way on Linux to turn off (controller-based?) write caching?

Just a side-note, but if you've got a pretty good expectation that you
won't be without power for 24 consecutive hours ever you can get a
controller with a battery-backed write cache (some will do better than
24 hours too).  For the performance concerned... :)

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [GENERAL] more anti-postgresql FUD

2006-10-13 Thread Joshua D. Drake
Stephen Frost wrote:
 * Alexander Staubo ([EMAIL PROTECTED]) wrote:
 What formula did you use to get to that number? Is there a generic  
 way on Linux to turn off (controller-based?) write caching?
 
 Just a side-note, but if you've got a pretty good expectation that you
 won't be without power for 24 consecutive hours ever you can get a
 controller with a battery-backed write cache (some will do better than
 24 hours too).  For the performance concerned... :)

Not to mention if you are *that* concerned you could buy a generator for
500 bucks that will keep the machine alive if you absolutely have to.

There is nothing wrong with write back cache as long as you have the
infrastructure to support it.

Joshua D. Drake

-- 

   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
   Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] A query planner that learns

2006-10-13 Thread Scott Marlowe
On Fri, 2006-10-13 at 12:48, Jim C. Nasby wrote:
 On Thu, Oct 12, 2006 at 05:39:20PM -0500, Scott Marlowe wrote:
It seems to me the first logical step would be having the ability to
flip a switch and when the postmaster hits a slow query, it saves both
the query that ran long, as well as the output of explain or explain
analyze or some bastardized version missing some of the inner timing
info.  Even just saving the parts of the plan where the planner thought
it would get 1 row and got instead 350,000 and was using a nested loop
to join would be VERY useful.  I could see something like that
eventually evolving into a self tuning system.

   Saves it and then... does what? That's the whole key...
  
  It's meant as a first step.  I could certainly use a daily report on
  which queries had bad plans so I'd know which ones to investigate
  without having to run them each myself in explain analyze.  Again, my
  point was to do it incrementally.  This is something someone could do
  now, and someone could build on later.
  
  To start with, it does nothing.  Just saves it for the DBA to look at. 
  Later, it could feed any number of the different hinting systems people
  have been proposing.
  
  It may well be that by first looking at the data collected from problems
  queries, the solution for how to adjust the planner becomes more
  obvious.
 
 Yeah, that would be useful to have. The problem I see is storing that
 info in a format that's actually useful... and I'm thinking that a
 logfile doesn't qualify since you can't really query it.

grep / sed / awk can do amazing things to a text file.  

I'd actually recommend URL encoding (or something like that) so they'd
be single lines, then you could grep for certain things and feed the
lines to a simple de-encoder.

We do it with our log files at work and can search through some fairly
large files for the exact entry we need fairly quickly.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] more anti-postgresql FUD

2006-10-13 Thread Jeff Davis
On Fri, 2006-10-13 at 13:07 -0500, Jim C. Nasby wrote:
 On Fri, Oct 13, 2006 at 01:52:10PM -0400, Merlin Moncure wrote:
  On 10/13/06, Tom Lane [EMAIL PROTECTED] wrote:
  Martijn van Oosterhout kleptog@svana.org writes:
   Is that really true? In theory block n+1 could be half a revolution
   after block n, allowing you to commit two transactions per revolution.
  
  Not relevant, unless the prior transaction happened to end exactly at a
  
  does full page writes setting affect this?
 
 If anything it makes it more true, but full pages are only written the
 first time a page is dirtied after a checkpoint, so in a
 high-transaction system I suspect they don't have a lot of impact.
 
 It would be nice to have stats on how many transactions have to write a
 full page, as well as how many have been written, though...

Maybe rather than the number of transactions that are forced to write
full pages, would it be useful to know the fraction of the WAL traffic
used for full page writes? Otherwise, a transaction that dirtied one
data page would be counted the same as a transaction that dirtied 100
data pages.

I guess it gets tricky though, because you really need to know the
difference between what the volume of WAL traffic is and what it would
be if full_page_writes was disabled.

That brings up a question. Does a full page write happen in addition to
a record of the changes to that page, or instead of a record of the
changes to that page? If the answer is "in addition", the calculation
would just be a count of the pages dirtied between checkpoints. Or am I
way off base?

But yes, statistics in that area would be useful to know whether you
need to crank up the checkpoint_timeout. Ideas?

Regards,
Jeff Davis


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Backup DB not getting connected

2006-10-13 Thread J S B
The user in the client machine is usrxyz (a unix user role) and the user
role that owns newDB is userxyz (a db user role). The same combination works
fine with another database in the same database server.
Don't know what's going wrong..

On 10/13/06, Shane Ambler [EMAIL PROTECTED] wrote:
 J S B wrote:
  Hi,
  I took a back up of my database and restored it in a new DB. When I'm
  trying to connect to the new DB from a client machine using ECPG
  connection techniques, it says newDB doesn't exist.
  There's another DB in the same DB server and if I try to connect to
  that DB then it doesn't cry over anything.
  Thanks
  Jas

 I am guessing that you can verify that the newDB exists and has the data;
 you just have trouble from the ECPG client from any machine.

 I would check access privileges - does the user you log in as using ECPG
 have access to newDB? - the 'DB doesn't exist' may be a wrong error
 which should say access denied.



Re: [GENERAL] Backup DB not getting connected

2006-10-13 Thread J S B
The only diff b/w the two DBs is that the one getting connected has the ACL
value blank, and the one that doesn't get connected has the ACL property
value = {}

On 10/13/06, Shane Ambler [EMAIL PROTECTED] wrote:
 J S B wrote:
  Hi,
  I took a back up of my database and restored it in a new DB. When I'm
  trying to connect to the new DB from a client machine using ECPG
  connection techniques, it says newDB doesn't exist.
  There's another DB in the same DB server and if I try to connect to
  that DB then it doesn't cry over anything.
  Thanks
  Jas

 I am guessing that you can verify that the newDB exists and has the data;
 you just have trouble from the ECPG client from any machine.

 I would check access privileges - does the user you log in as using ECPG
 have access to newDB? - the 'DB doesn't exist' may be a wrong error
 which should say access denied.



Re: [GENERAL] more anti-postgresql FUD

2006-10-13 Thread AgentM


On Oct 13, 2006, at 14:36 , Joshua D. Drake wrote:


Stephen Frost wrote:

* Alexander Staubo ([EMAIL PROTECTED]) wrote:

What formula did you use to get to that number? Is there a generic
way on Linux to turn off (controller-based?) write caching?


Just a side-note, but if you've got a pretty good expectation that you
won't be without power for 24 consecutive hours ever you can get a
controller with a battery-backed write cache (some will do better than
24 hours too).  For the performance concerned... :)


Not to mention if you are *that* concerned you could buy a generator for
500 bucks that will keep the machine alive if you absolutely have to.

There is nothing wrong with write back cache as long as you have the
infrastructure to support it.


Why does the battery have to be at that level? It seems like a
reasonable poor man's solution would be to have a standard $50 UPS
plugged in and have the UPS signal postgresql to shut down and sync.
Then, theoretically, it would be safe to run with fsync=off. The
level of risk seems the same, no?


-M

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] more anti-postgresql FUD

2006-10-13 Thread Merlin Moncure

On 10/13/06, AgentM [EMAIL PROTECTED] wrote:

 No to mention if you are *that* concerned you could buy a generator
 for
 500 bucks that will keep the machine alive if you absolutely have to.

 There is nothing wrong with write back cache as long as you have the
 infrastructure to support it.

Why does the battery have to be at that level? It's seems like a
reasonable poor man's solution would be to have a standard $50 UPS
plugged in and have the UPS signal postgresql to shut down and sync.
Then, theoretically, it would be safe to run with fsync=off. The
level of risk seems the same no?


1. your ups must be configured to power down your computer or you are
only delaying the inevitable for 10 minutes.  (a raid bbu might stay
alive for 24 hours)

2. fewer points of failure: ups doesn't help you if your cpu fries,
power supply fries, memory fries, motherboard fries, o/s halts, etc
etc. :-)

3. experience has taught me not to put 100% faith in ups power switchover.

merlin

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] more anti-postgresql FUD

2006-10-13 Thread Joshua D. Drake
 
 2. less points of failure: ups doesnt help you if your cpu fries,
 power supply fries, memory frieds, motherboard fries, o/s halts, etc
 etc. :-)
 
 3. experience has taught me not to put 100% faith in ups power switchover.

As a follow up to this. We have all line conditioning natural gas
generators for our equipment.

We had an outage once due to power... guess how?

An electrician blew the panel.

Joshua D. Drake


 
 merlin
 
 ---(end of broadcast)---
 TIP 5: don't forget to increase your free space map settings
 


-- 

   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
   Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] question on renaming a foreign key

2006-10-13 Thread Brent Wood

Jonathan Vanasco wrote:


I made a  HUGE mistake, and used 'UK' as the abbreviation for the  
united kingdom ( the ISO abbv is 'GB' )


I've got a database where 8 tables have an FKEY on a table  
'location_country' , using the text 'uk' as the value -- so i've got  
9 tables that I need to swap data out on


can anyone suggest a non-nightmarish way for me to do this ?


Umm, I think this should work, and isn't all that bad:

insert a 'gb' record in location_country
update each of 8 tables set country='gb' where country='uk'
delete the 'uk' record from location_country
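
In SQL that's roughly the following (column and table names other than
location_country are placeholders, since the real schema wasn't posted;
doing it in one transaction keeps the FK happy throughout):

BEGIN;
INSERT INTO location_country (country_code) VALUES ('gb');
UPDATE referencing_table_1 SET country_code = 'gb' WHERE country_code = 'uk';
-- ... same UPDATE for each of the other seven referencing tables ...
DELETE FROM location_country WHERE country_code = 'uk';
COMMIT;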


Brent Wood

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] more anti-postgresql FUD

2006-10-13 Thread alexei . vladishev
  1. create table test (id int4, aaa int4, primary key (id));
  2. insert into test values (0,1);
  3. Execute update test set aaa=1 where id=0; in an endless loop
 
  I just did the test on PostgreSQL 7.4.12 and MySQL 5.0.22 (MyISAM,
  sorry had no configured InnoDB). Ubuntu 6.0.6, AMD64, 2GB, default
  database settings.
 
  MySQL performs very well, approximately 15000-2 updates per second
  with no degradation of performance.
 
  PostgreSQL does approximately 1600 records per second for the first
  1, then 200rps for the first 100k records, and then slower and
  slower downgrading to 10-20 rps(!!!) when reaching 300k.

 Something is wrong with your test code.  If I had to guess I would say
 you did all the updates in a single transaction without committing
 them, in which case yes it will slow down until you commit.

No, I'm not doing all the updates in a single transaction. Is it so
hard to repeat my test in your environment? :) It would take 5min to
see my point.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] more anti-postgresql FUD

2006-10-13 Thread alexei . vladishev
  I'm author and maintainer of ZABBIX and the manual. I would like to add
  some comments to the thread.

 just so you know, I brought this up after taking a look at the zabbix
 software, which is in my opinion very excellent.  I came across a
 little strong in my comments and peter e was correct in pointing out
 that the performance related comments were not 'fud'.  I felt a little
 bad after opening this thread but you have to take this in context of
 the bigger picture.  The postgresql people have been dealing with
 (sometimes) unfounded prejudices for years.

No worries! :)

  Unfortunately PostgreSQL performs much slower than MySQL doing large
  number of updates for one single table. By its nature ZABBIX requires
  to execute hundreds of updates per second for large installations.
  PostgreSQL cannot handle this nicely.
 
  Do a simple test to see my point:
 
  1. create table test (id int4, aaa int4, primary key (id));
  2. insert into test values (0,1);
  3. Execute update test set aaa=1 where id=0; in an endless loop

 this is a very contrived test:
 1. nothing really going on
 2. no data
 3. single user test
 4. zabbix doesn't do this, nor does anything else
 5. proves nothing.

 zabbix is a bit more complex than that with multiple users, tables and
 the ocassional join.  With a high number of servers in play things
 might go differently than you expect.

I cannot agree. Yes, ZABBIX software is much more complex than the test
here. But performance of core functions of ZABBIX Server depends on
speed of update operations very much. The goal of the test was to
demonstrate very fast performance degradation of the updates.

I'm sure PostgreSQL would perform nicely for a large database with
large number of users, but I just wanted to prove my statement from the
manual.

 ...

 well, I am playing with zabbix with the possible eventuality of
 rolling it out in our servers I might be able to get you some hard
 data on performance.  By the way, I'm currently managing a
 spectactularly large mysql database which is getting moved to
 postgresql with the next release of the software -- in part because I
 was able to show that postgresql gave much more reliable performance
 in high load envirnonments.

 In light of this discussion, I might be interested in running a little
 test to see how zabbix would hold up on postgresql under a
 artificially high load.  If I was to show that things were quite so
 one-sided as you assumed, would you be willing to say as much in your
 documentation? :-)

I would be very interested in any real-life experience running large
ZABBIX installation under PostgreSQL. Feel free to send me your
results. Yes, I'm ready to change the manual, no doubt! :)

Cheers,
Alexei


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] more anti-postgresql FUD

2006-10-13 Thread alexei . vladishev
  Unfortunately PostgreSQL performs much slower than MySQL doing large
  number of updates for one single table. By its nature ZABBIX requires
  to execute hundreds of updates per second for large installations.
  PostgreSQL cannot handle this nicely.

 If you refuse to vacuum (or have the table autovacuumed) then sure.  Of
 course, I don't know of anyone who actually uses PostgreSQL who would
 run a system like that.

In order to keep performance of a busy application steady, I had to
perform the vacuum every 10 seconds. As I said earlier, ZABBIX Server
does hundreds of updates per second and performance of the updates
degrades very fast.

  I just did the test on PostgreSQL 7.4.12 and MySQL 5.0.22 (MyISAM,
  sorry had no configured InnoDB). Ubuntu 6.0.6, AMD64, 2GB, default
  database settings.

 Don't say 'sorry' to us for using MyISAM (though it pretty much
 invalidates the test), say 'sorry' to your users...  You can try running
 Postgres with fsync=off but I would strongly recommend against it in a
 production environment (just like I'd strongly recommend against
 MyISAM).

Yes, I believe fsync was on during my tests; the option is commented out in
the PostgreSQL config file. It explains the worse performance of PostgreSQL
for the first 20K updates, but still my observations are valid.

  MySQL performs very well, approximately 15000-2 updates per second
  with no degradation of performance.
 
  PostgreSQL does approximately 1600 records per second for the first
  1, then 200rps for the first 100k records, and then slower and
  slower downgrading to 10-20 rps(!!!) when reaching 300k.

 If you periodically vacuum the table (where periodically most likely
 would mean after some number of write transactions) I expect you'd find
 Postgres performance to at *least* stabalize.  If you vacuum with a
 periodicity reasonably ratioed to your update statement frequency you'd
 find that it will *improve* performance and Postgres will provide a
 *consistant* performance.

  Yes, I'm aware of autovacuuming, etc. But it eats resources and I
  cannot handle to run it periodically because I want steady performance
  from my application. I do not want to see ZABBIX performing slower just
  because of database housekeeper.

 This, above all things imv, would be FUD here.  Vacuum/autovacuum aren't
 something to be feared as damaging, detrimental, or resource hogging.
 Vacuum doesn't take an exclusive lock and moves along quite decently if
 done with an appropriate frequency.  If you wait far, far, too long to
 do a vacuum (to the point where you've got 10x as many dead tuples as
 live ones) then sure it'll take a while, but that doesn't make it
 resource hogging when you consider what you're having it do.

Face it, if one does hundreds of updates per second for one table (that's
exactly what ZABBIX does, and not for a one-record(!) table as in my
simple test), performance degrades so fast that vacuum has to be
executed once per 5-15 seconds to keep good performance. The vacuum
will run at least several seconds with high disk io. Do you think it
won't make PostgreSQL at least 10x slower than MySQL as stated in the
manual? What are we discussing here? :)

And by the way, ZABBIX does periodically execute vacuum for a subset of
tables; the functionality is built into ZABBIX.

  Several years ago I contacted PostgreSQL developers but unfortunately
  the only answer was Run vacuum. We won't change PostgreSQL to reuse
  unused tuples for updates.

 That's exactly what vacuum *does*, it marks dead tuples as being
 available for reuse.  Please understand that vacuum != vacuum full.

  Perhaps something has changed in recent releases of PostgreSQL, I don't
  think so. Please correct me if I'm wrong.

 I'm afraid there's a bit of a misunderstanding about what vacuum is for
 and how it can affect the behaviour of Postgres.  Please, please forget
 whatever notion you currently have of vacuum and actually run some tests
 with it, and post back here (or -performance) if you run into problems,
 have questions or concerns.  I expect you could also tune autovacuum to
 be frequent enough on the appropriate tables that you wouldn't have to
 intersperse your own vacuum commands in.  Also, as pointed out, current
 releases (8.1) also have quite a few enhanments and performance
 improvements.

I will try to experiment with a newer PostgreSQL when I find some time.
I'm sure PostgreSQL is making very good progress, and I'm really happy
to see that PostgreSQL has become an excellent alternative to
Oracle/DB2/Informix.


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] List of supported 64bit OS

2006-10-13 Thread Stanislaw Tristan
Thanks for the answer!
We'll order 2 x Opteron 2xxx series (Dual Core) and the memory will be
16-32 GB. This server is only for the DB - no other services such as hosting,
mail and so on.
I'm not a system integrator, but the project manager, and I'm interested in:
- whether there is a free OS that ideally supports the hardware above in
conjunction with PostgreSQL 8.x and will make use of the dual cores and the
large memory
Thanks!
Martijn van Oosterhout kleptog@svana.org wrote in message 
news:[EMAIL PROTECTED] 



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Searching a tool [like XAMPP on linux] of linux/apache/pgsql/php installer

2006-10-13 Thread jatrojoomla
Hi,

I am searching for an installer tool for linux / apache / pgsql / php
[something like WAMP or XAMPP (on linux)].

Is there any such tool available on the net?


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] problem with using O_DIRECT

2006-10-13 Thread Ye Qin

I tried to use O_DIRECT on Linux (SuSE) Kernel 2.6, but failed to make it run.
For example, if I added the flag to the open() call in BasicOpenFile(),
I got the following error after typing psql -l:

psql: could not connect to server: Connection refused
  Is the server running locally and accepting
  connections on Unix domain socket /tmp/.s.PGSQL.5432?

Any advice?

Thanks,

Brian

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] UTF-8

2006-10-13 Thread Martins Mihailovs

Martijn van Oosterhout wrote:

On Thu, Oct 12, 2006 at 11:09:53PM +0200, Tomi NA wrote:

2006/10/12, Martijn van Oosterhout kleptog@svana.org:

On Tue, Oct 10, 2006 at 11:49:06AM +0300, Martins Mihailovs wrote:

There are some misunderstood. Im using Linux 2.6.16.4, postgresql 8.1.4,
(there are one of locale:   lv_LV.utf8, for Latvian language). But if I
want do lower, then with standard latin symbols all is ok, but with
others special symbols (like umlaut in Germany) there is problems, and
sorting is going not like alphabet but like latin alphabet and specials
symbols after. :(

You don't say what your encoding is. If it not UTF-8, that's your
problem...

Doesn't lv_LV.utf8 mean he *did* say what his encoding is?


Not really. It says the encoding the system *expects*. However, if he
actually created his database with LATIN1 encoding, it would explain
the problems he's having.

Have a nice day,



of course the DB is with UNICODE

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] more anti-postgresql FUD

2006-10-13 Thread Andrew Sullivan
On Fri, Oct 13, 2006 at 01:35:51PM -0400, Tom Lane wrote:
 looked reasonably robust --- ie, both safe and not full of unsupportable
 assumptions about knowing exactly where everything actually is on the
 disk platter.  It'd still be interesting if anyone gets a new idea...

Might it be the case that WAL is the one area where, for Postgres,
the cost of using raw disk could conceivably be worth the benefit? 
(I.e. you end up having to write a domain-specific filesystemish
thing that is optimised for exactly your cases)?  (And before you ask
me, no I'm not volunteering :( )

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
A certain description of men are for getting out of debt, yet are
against all taxes for raising money to pay it off.
--Alexander Hamilton

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Create Index on Date portion of timestamp

2006-10-13 Thread Niederland
I am using postgresql 8.1.4.

Is there any way to create an index equivalent to:
CREATE INDEX i1 ON t1 USING btree  (ts::Date);

So that indexes are used for queries when the field is cast to a date.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] SQL syntax error handling within SPI functions in C

2006-10-13 Thread Stuart Morse

Hi,

I've written a set of functions in C that will ultimately be called from an
enterprise java bean. I expected that when calling SPI_exec(sql, 0) it would
return an error code if "sql" contained a syntax error. At that point I would
be able to return my own (more meaningful) error message based on the error
thrown.

Instead an SQLException is thrown, and any error detection and processing
code in the SPI function is bypassed. Is this the only error reporting model,
or can I trap SQL errors within my functions? I'm using postgreSQL version
7.4.2 on Red Hat Linux version 9 and can't upgrade at the moment.

Thanks in advance for your help,

Stuart Morse

Optimedia Solutions
1247 Knockan Drive
Victoria, BC, V8Z 7B8

(250) 658-8104 ph
(250) 658-8146 fax

[EMAIL PROTECTED]
http://www.optimediasolutions.ca/


[GENERAL]

2006-10-13 Thread SISTEMAS

The Postgres backend (the actual postgres executable program) can be run by
the superuser directly from the Postgres user shell (with the database name
as an argument). However, doing this bypasses the shared buffer pool and
lock table associated with a postmaster/site, so it is not recommended on a
multiuser site.

I don't understand this part, could you explain it to me better???

Thanks

DEPARTAMENTO DE SISTEMAS

TELESENTINEL LTDA

2 88 87 88 Ext. 134 -133 -132


[GENERAL] Performance problem

2006-10-13 Thread roopa perumalraja
I am new to postgres and I have 4 doubts.

1) I have a performance problem as I am trying to insert around 60 million
rows into a table which is partitioned. So first I copied the .csv file which
contains the data, with the COPY command, to a temp table, which was quick.
It took only 15 to 20 minutes. Now I am inserting data from the temp table to
the original table using insert into org_table (select * from temp_table);
which is taking more than an hour and is still inserting. Is there an easy
way to do this?

2) I want to increase the performance of the database as I find it very slow
with more than 60 million rows in one table. I increased the shared_buffer
parameter in the postgres.conf file to 2 but that doesn't help much.

3) I have partitioned a parent table into 100 child tables so when I insert
data into the parent table, it automatically inserts into the child tables. I
have followed http://www.postgresql.org/docs/8.1/interactive/ddl-partitioning.html.
When I did this, I noticed that when I viewed the data of the parent table,
it had the rows of the child tables and was not empty. But the child tables
do have the rows in them. I don't understand.

4) I want to use materialized views. I don't understand them from
http://jonathangardner.net/PostgreSQL/materialized_views/matviews.html; can
anyone explain with a simple example?

Thanks in advance.

Regards
Roopa


Re: [GENERAL] Can a function determine whether a primary key constraint exists on a table?

2006-10-13 Thread Patrick TJ McPhee
In article [EMAIL PROTECTED],
Albe Laurenz [EMAIL PROTECTED] wrote:

%  How can I check for the 
%  presence of constraints inside a function?
% 
% select t.oid as tableid, t.relname as tablename,
%   c.oid as constraintid, conname as constraintname
% from pg_constraint c join pg_class t on (c.conrelid = t.oid);

or, perhaps simpler,

 select * from information_schema.table_constraints
  where constraint_type = 'PRIMARY KEY';

-- 

Patrick TJ McPhee
North York  Canada
[EMAIL PROTECTED]

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] more anti-postgresql FUD

2006-10-13 Thread alexei . vladishev
 it would be cool if you could at least:

  - bundle your updates into transactions of, say, 1000 updates at a time
i.e. wrap a BEGIN; END; around a 1000 of them
  - run postgresql with fsync off, since you're using MyISAM
  - run PostgreSQL at least 8, since you're running MySQL 5

 I'd bet MySQL would still be faster on such an artificial, single user
 test, but not *that much* faster.

I'm quite sure the results will be very close to what I got before even
if I do all of the above. My post was not about MySQL vs PostgreSQL. It
was about very fast performance degradation of PostgreSQL in the case of a
large number of updates, provided vacuum is not used.

 If you don't want to install 8.0, could you maybe at least do the first
 two items (shouldn't be a lot of work)...?

 Which client are you using? Just mysql/psql or some API?

C API

Alexei


---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] PostgreSQL Shared Memory and Semaphors

2006-10-13 Thread [EMAIL PROTECTED]

Hello,

I want to increase the max_connections of PostgreSQL from around 40 to 
around 100. For this I need to change the Shared Memory and Semaphores 
settings.


I followed this link -

http://www.postgresql.org/docs/8.1/interactive/kernel-resources.html#SYSVIPC

and used the proposed values in a test installation FreeBSD 5.5, 
PostgreSQL 8.x, with 96 MB RAM (a VMware guest) -


I added -

kern.ipc.shmall=32768
kern.ipc.shmmax=134217728
kern.ipc.semmap=256
to /etc/sysctl.conf

kern.ipc.semmni=256
kern.ipc.semmns=512
kern.ipc.semmnu=256
to /boot/loader.conf

And I changed max_connections = 40 to 100 in postgresql.conf.

Rebooted and all works OK.
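
A quick sanity check from psql after such a restart (assuming nothing else was
changed) is simply:

 SHOW max_connections;   -- should now report 100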

Now I want to do the same on a production machine FreeBSD 5.4, 
PostgreSQL 8.x, with 2 GB RAM.


Are there any dangers I should have in mind?

Thank you,
Iv

PS I know that the values can be compiled into the kernel, but I am not 
that good yet.


PPS I posted this first to 'FreeBSD questions' but there was no response 
there.


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] more anti-postgresql FUD

2006-10-13 Thread Joshua D. Drake

 Face it, if one does hundreds updates per second for one table (that's
 exactly what ZABBIX does, and not for one record(!) table as in my
 simple test), performance degrades so fast that vacuum has to be
 executed once per 5-15 seconds to keep good performance. The vacuum
 will run at least several seconds with high disk io. Do you think it
 won't make PostgreSQL at least 10x slower than MySQL as stated in the
 manual? What we are discussing here? :)

I am not sure what we are discussing actually. It is well known that
PostgreSQL cannot handle the type of update load you are talking about. Even with
autovacuum.

Now, there are ways to make PostgreSQL able to handle this *if* you
know what you are doing, with things like partitioning, but out of the
box this guy is right.

That being said, innodb would likely suffer from the same problems and
the only reason his app works the way it does is because he is using MyISAM.

Joshua D. Drake


-- 

   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
   Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Create Index on Date portion of timestamp

2006-10-13 Thread Andrew Sullivan
On Thu, Oct 12, 2006 at 06:40:22PM -0700, Niederland wrote:
 I am using postgresql 8.1.4.
 
 Is there anyway to create an index equivalent to:
 CREATE INDEX i1 ON t1 USING btree  (ts::Date);
 
 So that indexes are used for queries when the field is cast to a date.

I didn't try it, but you ought to be able to create a functional
index on the to_date() of the column.  I don't know if that will
solve your cast issue, but you could rewrite the CAST into the
to_date form to get around that.

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
The year's penultimate month is not in truth a good way of saying
November.
--H.W. Fowler

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] SQL syntax error handling within SPI functions in C

2006-10-13 Thread Martijn van Oosterhout
On Thu, Oct 12, 2006 at 02:29:27PM -0700, Stuart Morse wrote:
 Hi,
 
 I've written a set of functions in C that will ultimately be called from an
 enterprise java bean. I expected that when calling SPI_exec(sql, 0) it would
 return an error code if sql contained a syntax error. At that point I
 would be able to return my own (more meaningful) error message based on the
 error thrown. 

Find the section in the docs on exception handling. The rule is
basically: if the function gets an error it won't return. For this
reason you never have to check if palloc() returns NULL. It really
can't happen.

There are try/catch blocks you can install to catch errors. pl/pgsql
does this, for example. Note that it is slightly expensive, so you're usually
better off avoiding errors you know you're going to ignore anyway. An
error will abort the current transaction, and there's no changing that; you use
subtransactions to isolate them...
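
A minimal sketch of such a PG_TRY/PG_CATCH block around SPI_exec (the function, the
deliberately bad statement, and the replacement message are made up for illustration;
this re-throws, so the transaction is still aborted as described above):

 #include "postgres.h"
 #include "fmgr.h"
 #include "executor/spi.h"

 PG_FUNCTION_INFO_V1(demo_catch_syntax_error);

 Datum
 demo_catch_syntax_error(PG_FUNCTION_ARGS)
 {
     const char   *sql = "SELEC 1";          /* deliberately bad SQL */
     MemoryContext oldcontext = CurrentMemoryContext;

     SPI_connect();

     PG_TRY();
     {
         SPI_exec(sql, 0);
         SPI_finish();
     }
     PG_CATCH();
     {
         ErrorData *edata;

         /* the error data lives in ErrorContext; copy it out, then clear it */
         MemoryContextSwitchTo(oldcontext);
         edata = CopyErrorData();
         FlushErrorState();

         /*
          * Re-throw with a friendlier message.  The transaction still
          * aborts, and the abort also cleans up the SPI connection, so no
          * SPI_finish() is needed on this path.
          */
         ereport(ERROR,
                 (errmsg("the statement could not be run: %s",
                         edata->message)));
     }
     PG_END_TRY();

     PG_RETURN_INT32(0);
 }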

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


Re: [GENERAL]

2006-10-13 Thread Alvaro Herrera
SISTEMAS wrote:
 The Postgres backend (the actual executable program postgres) can be run by the
 superuser directly from the Postgres user's command shell (with the name of the
 database as an argument). However, doing this bypasses the shared buffer pool and
 the lock table associated with a postmaster/site, so it is not recommended on a
 multi-user site.
 
 I don't understand this part, could you explain it to me in more detail?

Where did you get that piece of text?  Did you translate it yourself, or was it
already in Spanish?

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] encoding problem

2006-10-13 Thread [EMAIL PROTECTED]

jef peeraer wrote:

I never thought I would be blocked by an encoding problem :-(
My database is in LATIN1; I have entries like this in a table called
gemeenten

Column |   Type   |   Modifiers
---+--+ 


 id| integer  | serial
 gemeente  | text | not null
 postcode  | smallint | not null
 provincies_id | integer  |

This data is copied from a dump from that table

9780    Quévy           7040    7
9781    Quévy-le-Grand  7040    7
9782    Quévy-le-Petit  7040    7

So, the accents are there. But on my web page, which is set to
ISO-8859-1, I don't get the accents.

The web pages are built with XUL, where I set the charset to ISO-8859-1,
but communication with the server is through XMLHttpRequest.
Do I have to specify the charset as well in the communication between
server and client? Or where else could it go wrong?


jef peeraer


I am not sure where your problem is, but we have used a PostgreSQL
database with the default encoding (ISO something or ANSI something,
can't recall right now, but not Unicode or the like) for several years, storing
all kinds of encodings inside and outputting them successfully.


Only the browser encoding must match the original encoding.
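
For what it's worth, a minimal psql sketch of the server-side part of this (the
charset declared in the HTTP response to the browser still has to be set
separately):

 SHOW server_encoding;              -- LATIN1 for the database in question
 SET client_encoding TO 'LATIN1';   -- output to this session is converted to LATIN1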

Don't know if this helps, just wanted to give you our example.

All best,
Iv

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Create Index on Date portion of timestamp

2006-10-13 Thread Tom Lane
Niederland [EMAIL PROTECTED] writes:
 I am using postgresql 8.1.4.
 Is there anyway to create an index equivalent to:
 CREATE INDEX i1 ON t1 USING btree  (ts::Date);

You're short some parentheses:

CREATE INDEX i1 ON t1 USING btree  ((ts::Date));

regards, tom lane
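
The index is only considered when a query uses the same expression, e.g. (a minimal
example with the names from the thread):

 SELECT * FROM t1 WHERE ts::date = '2006-10-13';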

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Create Index on Date portion of timestamp

2006-10-13 Thread A. Kretschmer
On Thu, 12.10.2006, at 18:40:22 -0700, Niederland wrote the following:
 I am using postgresql 8.1.4.
 
 Is there anyway to create an index equivalent to:
 CREATE INDEX i1 ON t1 USING btree  (ts::Date);

CREATE INDEX i1 ON t1 USING BTREE (to_char(ts, 'dd-mm-' ));

*untested*


Please let me know if this is okay.


HTH, Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47215,   D1: 0160/7141639 (mehr: - Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] more anti-postgresql FUD

2006-10-13 Thread Thomas Kellerer

[EMAIL PROTECTED] wrote on 11.10.2006 16:54:

Do a simple test to see my point:

1. create table test (id int4, aaa int4, primary key (id));
2. insert into test values (0,1);
3. Execute update test set aaa=1 where id=0; in an endless loop


As others have pointed out, committing the data is a vital step when testing 
the performance of a relational/transactional database.


What's the point of updating an infinite number of records and never committing 
them? Or were you running in autocommit mode?
Of course MySQL will be faster if you don't have transactions. Just as a plain 
text file will be faster than MySQL.


You are claiming that this test simulates the load that your application 
puts on the database server. Does this mean that you never commit data when 
running on MySQL?


This test also proves (in my opinion) that any multi-db application using 
the lowest common denominator simply won't perform equally well on all 
platforms. I'm pretty sure the same test would also show very bad performance 
on an Oracle server.
It simply ignores the basic optimizations that one should do in a transactional 
system (like batching updates, committing transactions, etc).


Just my 0.02€
Thomas


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Backup DB not getting connected

2006-10-13 Thread Tom Lane
J S B [EMAIL PROTECTED] writes:
 The user in the client machine is usrxyz (a unix user role) and user role
 that owns newDB is userxyz (a db user role)

I notice you keep spelling it as newDB ... is there a case-folding
issue here perhaps?

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] more anti-postgresql FUD

2006-10-13 Thread Dann Corbit
 -Original Message-
 From: [EMAIL PROTECTED] [mailto:pgsql-general-
 [EMAIL PROTECTED] On Behalf Of Thomas Kellerer
 Sent: Friday, October 13, 2006 2:11 PM
 To: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] more anti-postgresql FUD
 
 [EMAIL PROTECTED] wrote on 11.10.2006 16:54:
  Do a simple test to see my point:
 
  1. create table test (id int4, aaa int4, primary key (id));
  2. insert into test values (0,1);
  3. Execute update test set aaa=1 where id=0; in an endless loop
 
 As others have pointed out, committing the data is a vital step in when
 testing
 the performance of a relational/transactional database.
 
 What's the point of updating an infinite number of records and never
 committing
 them? Or were you running in autocommit mode?
 Of course MySQL will be faster if you don't have transactions. Just as a
 plain
 text file will be faster than MySQL.
 
 You are claiming that this test does simulate the load that your
 applications
 puts on the database server. Does this mean that you never commit data
 when
 running on MySQL?
 
 This test also proves (in my opinion) that any multi-db application when
 using
 the lowest common denominator simply won't perform equally well on all
 platforms. I'm pretty sure the same test would also show a very bad
 performance
 on an Oracle server.
 It simply ignores the basic optimization that one should do in an
 transactional
 system. (Like batching updates, committing transactions etc).
 
 Just my 0.02€
 Thomas

In a situation where a ludicrously high volume of update transactions is 
expected, a tool like MonetDB would probably be a good idea:
http://monetdb.cwi.nl/

It's basically the freely available database counterpart to TimesTen:
http://www.oracle.com/database/timesten.html

For an in-memory database, the high speed will require heaps and gobs of RAM, 
but then you will be able to do transactions 10x faster than anything else can.

It might be interesting to add fragmented column tubes in RAM {like MonetDB 
uses} for highly transactional tables to PostgreSQL some day.

 ---(end of broadcast)---
 TIP 2: Don't 'kill -9' the postmaster

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Backup DB not getting connected

2006-10-13 Thread J S B
Well, newDB is the name of the database and that's what I'm trying to connect to. I'm trying to keep it the same case in the ECPG code as it is in the database.
Do you mean to say that a combination of upper and lower case is not allowed?
In newDB, 'new' is all lower case and 'DB' is all upper case.

Thanks,
Jas
On 10/13/06, Tom Lane [EMAIL PROTECTED] wrote:
 J S B [EMAIL PROTECTED] writes:
  The user in the client machine is usrxyz (a unix user role) and user role
  that owns newDB is userxyz (a db user role)

 I notice you keep spelling it as newDB ... is there a case-folding
 issue here perhaps?

 regards, tom lane
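
A minimal illustration of the case-folding Tom is referring to (this applies to SQL
identifiers; the dbname passed to psql or libpq when connecting is taken literally):

 CREATE DATABASE newDB;      -- unquoted, so this actually creates newdb
 CREATE DATABASE "newDB";    -- double-quoted, so the mixed-case name newDB is kept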



Re: [GENERAL] encoding problem

2006-10-13 Thread stevegy
Hi Jef,I use the prototype 1.4 to ajax some web pages. I have to encodeURI the post form data especial the string form value, otherwise the server will receive wrong encoding characters.If you can not see the query result in correct web page encoding, maybe the page container of this XUL ajax control is not match of your setting of the web page.regards Steve Yao-原始邮件-发件人:jef peeraer [EMAIL PROTECTED]发送时间:2006-10-13 17:14:53收件人:pgsql-general@postgresql.org抄送:(无)主题:[GENERAL] encoding problemi never thought i would be bblocked by an encoding problem :-(My database is in LATIN1 , i have entries like this in a table called gemeenten Column |   Type   |   Modifiers---+--+  id| integer  | serial  gemeente  | text | not null  postcode  | smallint | not null  provincies_id | integer  |This data is copied from a dump from that table9780	Quévy	7040	79781	Quévy-le-Grand	7040	79782	Quévy-le-Petit	7040	7So, the accents are there. But with my web page, which is set to ISO-8859-1, i don't get the accents.The web-pages are build with XUL, where i set the charset to ISO-8859-1,but communication with the server is through  XMLHttpRequest.Do I have to specify the charset as well in the communication between server and client ? Or where else could it go wrong.jef peeraer---(end of broadcast)---TIP 2: Don't 'kill -9' the postmaster

  1   2   >