Re: [GENERAL] normal user dump gives error because of plpgsql

2005-03-14 Thread Janning Vygen
Am Donnerstag, 10. März 2005 18:17 schrieb Tom Lane:
 John Sidney-Woollett [EMAIL PROTECTED] writes:
  I'm pretty sure I had the same problem when using pg_restore. If
  pl/pgsql is installed in template1, then the restore fails.
 
  And I couldn't find any solution to this on the list either.

 You're supposed to restore into a database cloned from template0,
 not template1.

Thanks! I just didn't read the manual carefully enough. Sorry for stealing your
time, as it is 10 times more valuable than mine, I guess.

  Can pg_restore be made to ignore the error?

 It does, at least since 8.0.

I guess it's time to upgrade!

kind regards,
Janning
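
For anyone hitting the same thing, the template0 clone Tom describes is just this (a minimal sketch; the database name is illustrative):

-- restore into a pristine clone of template0, not template1
CREATE DATABASE restored_db TEMPLATE template0;

Then point pg_restore at restored_db with -d.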

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] Convert Cursor to array

2005-03-14 Thread Richard Huxton
GIROIRE Nicolas (COFRAMI) wrote:
my sort algorithm is a very particular solution.
In fact, I put the records in an array so that I can sort them. I sort them by
one column of my table, but that column, indx, can have duplicate values,
and that is intentional.
I still can't see why you can't sort by that column with ORDER BY.
There's nothing obvious in your description that rules that out. Are you 
aware that PostgreSQL offers functional and partial indexes? They can 
make complex selection/ordering quite efficient.
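
For example (illustrative names, not from this thread):

-- functional index: index an expression rather than a bare column
create index chapters_lower_title_idx on chapters (lower(title));

-- partial index: index only the rows a common query actually touches
create index chapters_evolved_idx on chapters (indx) where evolution = 1;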

here is the order algorithm:
for i in 1..recordcount loop
    tmp_row := children[i];
    indx := tmp_row[5];
    if (indx < i) then
        -- shift the elements up to make room for the element being moved
        for j in 0..(i-indx-1) loop
            children[i-j] := children[i-j-1];
        end loop;
        -- put the element at its new position
        children[indx] := tmp_row;
    end if;
end loop;

It's specific to my system. I won't explain the choice here, but I'd welcome
any ideas for saving time.
If you are going to sort, this looks like a particularly slow algorithm for
large values of recordcount. Personally, I'd use plperl/plpython or 
something with built-in hash sorting capabilities. That will almost 
certainly be tens or hundreds of times faster.

--
  Richard Huxton
  Archonet Ltd
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] sql question

2005-03-14 Thread Ragnar Hafstað
On Sun, 2005-03-13 at 23:13 -0600, George Essig wrote:
 On Fri, 11 Mar 2005 13:26:07 +0100, Steven Verhoeven
 [EMAIL PROTECTED] wrote:

 [snip problem]

 select id, fref as ref from my_table
 union
 select id, mref as ref from my_table;

union ALL

(see other replies)

gnari
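
Putting the replies together, the full query would be something like this (table and column names as in the original post):

select id, fref as ref from my_table
union all
select id, mref as ref from my_table
order by id, ref;

UNION ALL skips the duplicate-eliminating sort that plain UNION performs, so it both preserves duplicate pairs and is cheaper.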



---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Convert Cursor to array

2005-03-14 Thread FERREIRA William (COFRAMI)
hi

I'm a friend of Nicolas and I'm working with him on this project,
so this is the problem:
we are working on an application for managing documentation in XML format.
At the first level there is the root, and at the next level there are the children (for example, chapters).
The application lets documents evolve (by adding chapters, for example),
so if we consider 2 chapters and we want to add one between them, we need an index.
The first idea was to renumber all the chapter indexes when one was added, but it was too slow.
The second idea was to use float indexes: if we consider chapter 1 with index 1.0 and chapter 2 with index 2.0, and we want to add a new one between them, we set the new index to 1.5. But performance wasn't good either.

So we chose a different solution, which consists of using the index of a chapter together with its evolution.
If we have this data:
chapter_id | evolution | index
         1 |         0 |     1
         2 |         0 |     2
         3 |         0 |     3
         4 |         1 |     2

by using our sort function we obtain this:
chapter_id | evolution | index
         1 |         0 |     1
         4 |         1 |     2
         2 |         0 |     2
         3 |         0 |     3

As a consequence, a new chapter has been added between two others.
This solution works fine under Oracle, but under PostgreSQL performance is bad, and we are working on tuning the DB.

Although this solution works, we don't find it very elegant, and we think a better one can be found.
The principal problem is how to add a leaf between two others without modifying too much data.
The documents are between 200MB and 800MB in size.

I hope this has been clear.



-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On behalf of Richard Huxton
Sent: Monday, 14 March 2005 09:31
To: GIROIRE Nicolas (COFRAMI)
Cc: 'pgsql-general@postgresql.org'
Subject: Re: [GENERAL] Convert Cursor to array








Re: [GENERAL] Temporary tables privileges

2005-03-14 Thread Alejandro D. Burne
Thanks Bruce. Then how can I allow a user to create temp tables and
drop them, without giving him global drop privileges? (I can't grant drop
privileges on the temp table because it doesn't exist yet.)

Thnx, Alejandro
Sorry 4 my english

On Fri, 11 Mar 2005 10:52:05 -0500 (EST), Bruce Momjian
pgman@candle.pha.pa.us wrote:
 Alejandro D. Burne wrote:
  Hi, I'm new at pg.
  I've been using temp tables in other RDBMSs. A user can create his own
  temp tables (GRANT TEMPORARY) but can't drop them (I don't want to
  grant drop privileges).
  The other way is using ON COMMIT, but I can't make that work.
 
  Example:
  CREATE TEMPORARY TABLE tmp ON COMMIT DROP AS
  SELECT code FROM mytable WHERE code BETWEEN 1 AND 10;
 
  shows error near ON
 
 Our TODO has:
 
 * Add ON COMMIT capability to CREATE TABLE AS SELECT
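
Until that's done, a two-step form avoids the unsupported AS SELECT clause (a sketch; the column type is assumed):

    BEGIN;
    CREATE TEMPORARY TABLE tmp (code integer) ON COMMIT DROP;
    INSERT INTO tmp SELECT code FROM mytable WHERE code BETWEEN 1 AND 10;
    -- ... use tmp here; it is dropped automatically at COMMIT
    COMMIT;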
 
 --
   Bruce Momjian|  http://candle.pha.pa.us
   pgman@candle.pha.pa.us   |  (610) 359-1001
   +  If your life is a hard drive, |  13 Roberts Road
   +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Convert Cursor to array

2005-03-14 Thread Ragnar Hafstað
On Mon, 2005-03-14 at 10:44 +0100, FERREIRA William (COFRAMI) wrote:

 so we choice to use a different solution which consist on using the
 index of a chapter and its evolution. 
 if we have this data : 
 chapter_id | evolution | index 
1   | 0 |   1 
2   | 0 |   2 
3   | 0 |   3 
4   | 1 |   2
 
 by using our sort function we obtain this : 
 chapter_id | evolution | index 
1   | 0 |   1 
4   | 1 |   2 
2   | 0 |   2 
3   | 0 |   3
 

in what way is this different than
  ... ORDER BY index ASC, evolution DESC;
?

gnari



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


[GENERAL] pg_restore: [custom archiver] could not uncompress data: incorrect data check

2005-03-14 Thread Janning Vygen
Hi,

i dumped my database on server1 with pg_dump -Fc ..., copied the dump to
server2, both same pgsql version 7.4.6

pg_restore says
pg_restore: [custom archiver] could not uncompress data: incorrect data check

But it seems that almost all the data was restored.

What does this error mean? I didn't find anything in the archives (I searched
Google for 'pg_restore incorrect data check'), just one unanswered message
( http://archives.postgresql.org/pgsql-general/2003-08/msg01035.php ).

kind regards,
janning

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] New user: Windows, Postgresql, Python

2005-03-14 Thread Magnus Hagander
 Hi,
 I'm just starting to look at Postgresql. My platform (for better or
 worse) is Windows, and I'm quite interested in the pl/python support.
 However, when I run the binary installer, it is not offered 
 to me as an option (it's there, but greyed out). The 
 plpython.dll file is installed, however.
 
 When I check, it looks like plpython.dll is linked against 
 Python 2.3. I have Python 2.4 installed on my PC, and I don't 
 really want to downgrade.
 
 I suppose my first (lazy) question is, is there a Python 2.4 
 compatible plpython.dll available anywhere? Alternatively, is 
 there a way I can build one for myself? I'm happy enough 
 doing my own build (I have mingw and msys available), but I'd 
 rather not build the whole of postgresql if possible, just 
 for the sake of one DLL

Not that I know of. IFF the libraries export the same entrypoints
without changing things, you could try just copying python24.dll to
python23.dll. I don't know how the Python guys are with binary
compatibility, though. Might be worth a shot.

On a different note, can't you have both Python 2.3 *and* 2.4 on the
same system? Considering they put the version number in the filename, it
seems this should be possible?

//Magnus

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] preliminary performance comparison pgsql vs mysql

2005-03-14 Thread Rick Schumeyer
Below are some PRELIMINARY results in comparing the performance of pgsql and
mysql.

These results are for a single process populating a table with 934k rows,
and then performing some selects.  I also compared the effect of creating 
indexes on some of the columns.

I have not yet done any testing of transactions, multiple concurrent
processes, etc.

I did not make any changes to the default config settings.  I can do
so if someone has some suggestions.

My machine is a 3.0 GHz P4 with 1 GB ram, running FC 3.

I used pg 8.0.1 and mysql 5.0.2 alpha.

I compiled pg from source, but I downloaded a binary for mysql.  If 
someone thinks this is significant, and can point me to a good
binary for pg, I will give it a try.

All timings are as reported by the db.

I included the pg script below.

Finally, I don't have an axe to grind either way.  I'm trying to be
fair, but I am the first to admit I am not an expert in db tuning.
I welcome constructive comments and advice.

 data and queries

The data comes from some experimental data I have been working with.

I load data into a table with relevant columns fid, rid, x.
The combination of fid,rid is unique.
x is a location, and is not unique.

I loaded the data using COPY (pg) and LOAD (mysql).

The queries were:

select count(*) from data where fid=2 and rid=6;          count = 100
select count(*) from data where x > 5000 and x < 5500;    count = 35986
select count(*) from data where x > 5000 and x < 5020;    count = 1525

* preliminary conclusions

As suspected, MyISAM is very fast.  In the tested case (only
one process, only one big load and some selects) MyISAM tables are
much faster than pg or InnoDB.

For queries, InnoDB and pg are roughly equivalent.  In some cases
one or the other is a little faster, but they are mostly in the
same ballpark.  The one exception seems to be that pg has an edge
in seq scans.

pg is slower loading data when it has to create an index.  Also,
I found that it is critical to run vacuum analyze in pg.  Running
analyze in mysql did not seem to make much difference.  I'm guessing
that mysql builds statistics while it is loading data, and does not
actually run an analyze since the table has not changed.

*** preliminary results ***
*** all times in seconds ***

note: input table has 934500 rows.

                                             mysql 5.0.2 alpha
                                   PG 8.0.1   MyISAM   InnoDB
            NO INDEXES

Load file                           22.3       3.9     22.1
select count fid=?, rid=?            3.0       0.23     2.07
select count x > 5000, x < 5500      1.2       0.27     1.59
select count x > 5000, x < 5020      0.63      0.29     1.58

            INDEXES on (fid,rid)

Load file                           36.       13.5     30.1
vacuum analyze                       3.6
select count fid=?, rid=?            0.0       0.00     0.02
select count x > 5000, x < 5500      0.702     0.29     2.07
select count x > 5000, x < 5020      0.713     0.28     1.59

            INDEXES on (fid,rid) and (x)

Load file                          202.       24.     151.
vacuum analyze                      11.
select count fid=?, rid=?            0.002     0.00     0.02
select count x > 5000, x < 5500      0.9       0.06     0.75
select count x > 5000, x < 5020      0.048     0.01     0.01

* PG-SQL script 

\timing

--
-- Load table, no indexes
--

drop table data cascade;

create table data (
fid integer,
rid integer,
range   real,
x   real,
y   real,
z   real,
bs  real,
snr real,
rvelreal,
cfarsmallint);

COPY data (fid,rid,range,snr,bs,rvel,cfar,x,y,z) FROM
'/home/rick/bed/data/data.dat';
select count(*) from data where fid=2 and rid=6;
select count(*) from data where x > 5000 and x < 5500;
select count(*) from data where x > 5000 and x < 5020;

--
-- Load table, index on (fid,rid)
--

drop table data cascade;

create table data (
fid integer,
rid integer,
range   real,
x   real,
y   real,
z   real,
bs  real,
snr real,
rvelreal,
cfarsmallint);

create index fidrid_data on data (fid,rid);
COPY data (fid,rid,range,snr,bs,rvel,cfar,x,y,z) FROM
'/home/rick/bed/data/data.dat';
vacuum analyze data;
select count(*) from data where fid=2 and rid=6;
select count(*) from data where x > 5000 and x < 5500;
select count(*) from data where x > 5000 and x < 5020;

--
-- Load table, index on (fid,rid) and (x)
--

drop table data cascade;

create table data (
fid integer,
rid integer,
range   real,
x   real,
y   real,
z   real,
bs  

Re: [GENERAL] preliminary performance comparison pgsql vs mysql

2005-03-14 Thread Miroslav Šulc
Hi Rick,
the work you are doing is important (at least I think so).
From my experience, PostgreSQL performance is also very slow in cases where
there are several LEFT JOINs and there are varchar() fields. You can see
an example in the archives where my problem is described (Subject: How to
read query plan). There is a patch that partially solves this problem
(as Tom Lane mentioned), but it is not in the latest PostgreSQL release.

I will work on improving the design of the tables I have problems
with, to see if there are any significant speed improvements. Maybe
you could use some of my results for some of your tests then.

Miroslav
Rick Schumeyer wrote:
Below are some PRELIMINARY results in comparing the performance of pgsql and
mysql.
...
 



---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Peculiar performance observation....

2005-03-14 Thread Net Virtual Mailing Lists
Hello,


I am sorry to bring this up again. Does anyone have any idea what
might be going on here?...   I'm very worried about this situation.. ;-(

- Greg


Something even more peculiar (at least it seems to me..)...


If I drop the index table1_category_gist_idx, I get this:


jobs=> explain analyze select id from table1 where category <@ 'a.b'
ORDER BY category;
                                  QUERY PLAN
------------------------------------------------------------------------------
 Sort  (cost=7568.55..7568.62 rows=28 width=52) (actual
time=4842.691..4854.468 rows=1943 loops=1)
   Sort Key: category
   ->  Seq Scan on jobdata  (cost=0.00..7567.88 rows=28 width=52) (actual
time=11.498..4800.907 rows=1943 loops=1)
         Filter: (category <@ 'a.b'::ltree)
 Total runtime: 4871.076 ms
(5 rows)


.. no disk thrashing all over the place.. 

I'm really perplexed about this one..;-(

- Greg

I have a rather peculiar performance observation and would welcome any
feedback on this.

First off, the main table (well, part of it.. it is quite large..):


                              Table "table1"
   Column   |   Type   |                    Modifiers
------------+----------+--------------------------------------------------
 id         | integer  | not null default nextval('master.id_seq'::text)
 user_id    | integer  |
 ... (skipping about 20 columns)
 category   | ltree[]  |
 somedata   | text     | not null

Indexes:
    "table1_pkey" primary key, btree (id)
    "table1_category_full_gist_idx" gist (category)
    "table1_id_idx" btree (id)
    "table1_fti_idx" gist (fti) WHERE ((status)::text = 'open'::text)
    "table1_user_id_idx" btree (user_id)


database=> explain analyze select id from table1 where category <@ 'a.b';
                                  QUERY PLAN
------------------------------------------------------------------------------
 Index Scan using table1_category_full_gist_idx on jobdata
 (cost=0.00..113.48 rows=28 width=4) (actual time=43.814..12201.528
rows=1943 loops=1)
   Index Cond: (category <@ 'a.b'::ltree)
   Filter: (category <@ 'a.b'::ltree)
 Total runtime: 1.258 ms


If I do this:

create table yuck (id integer, category ltree[]);
insert into yuck select id, category from table1;
create index category_idx on yuck using gist(category);
vacuum analyze yuck;
jobs=> explain analyze select id from table1 where id in (select id from
yuck where category <@ 'a.b');
                                  QUERY PLAN
------------------------------------------------------------------------------
 Nested Loop  (cost=108.64..114.28 rows=1 width=52) (actual
time=654.645..1245.212 rows=1943 loops=1)
   ->  HashAggregate  (cost=108.64..108.64 rows=1 width=4) (actual
time=654.202..690.709 rows=1943 loops=1)
         ->  Index Scan using category_idx on yuck  (cost=0.00..108.57
rows=28 width=4) (actual time=2.046..623.436 rows=1943 loops=1)
               Index Cond: (category <@ 'a.b'::ltree)
               Filter: (category <@ 'a.b'::ltree)
   ->  Index Scan using table1_pkey on jobdata  (cost=0.00..5.64 rows=1
width=52) (actual time=0.219..0.235 rows=1 loops=1943)
         Index Cond: (table1.id = "outer".id)
 Total runtime: 1261.551 ms
(8 rows)


In the first query, my hard disk thrashes audibly the entire 12 seconds
(this is actually the best run I could get, it is usually closer to 20
seconds), the second query runs almost effortlessly..  I've tried
reindexing, even dropping the index and recreating it but nothing I do
helps at all.

Now keep in mind that I do all of my development on painfully slow
hardware in order to make any performance issues really stand out.  But,
I've done this on production servers too with an equal performance
improvement noticed.

I just can't figure out why this second query is so much faster, I feel
like I must have done something very wrong in my schema design or
something to be suffering this sort of a performance loss.   Any idea
what I can do about this?

Thanks as always!

- Greg


---(end of broadcast)---
TIP 8: explain analyze is your friend


[GENERAL] plpython function problem workaround

2005-03-14 Thread Sim Zacks
I worked around the plpython problem that doesn't allow scripts created on
Windows to be run on a *nix server, with the following statement:
update pg_proc set prosrc=replace(prosrc,chr(13),'') where prolang=87238
-- plpythonu's oid in my setup is 87238. I don't know if that is standard
or just on mine.

Is there a way to automate that script every time a plpythonu function is
created?
I tried writing a trigger on the pg_proc table but it wouldn't let me:
ERROR:  permission denied: pg_proc is a system catalog
Is there a way to do this without playing with the source code?

CREATE FUNCTION fixpython() RETURNS trigger AS $$
BEGIN
    IF new.prolang = 87238 THEN
        new.prosrc := replace(new.prosrc, chr(13), '');
    END IF;
    RETURN new;
END;
$$ LANGUAGE 'plpgsql';

CREATE TRIGGER fixpython BEFORE INSERT OR UPDATE ON pg_proc
    FOR EACH ROW EXECUTE PROCEDURE fixpython();



---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] pg_restore: [custom archiver] could not uncompress data: incorrect data check

2005-03-14 Thread Michael Fuhr
On Mon, Mar 14, 2005 at 12:22:57PM +0100, Janning Vygen wrote:

 i dumped my database on server1 with pg_dump -Fc ..., copied the dump to
 server2, both same pgsql version 7.4.6

How are you copying the file?

 pg_restore says
 pg_restore: [custom archiver] could not uncompress data: incorrect data check

"incorrect data check" appears to be a zlib error.  Is there any
chance the file got corrupted?  Can you duplicate the problem if
you do another dump?

What OS and version of zlib are you using on each server?

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] preliminary performance comparison pgsql vs mysql

2005-03-14 Thread Richard Huxton
Rick Schumeyer wrote:
Below are some PRELIMINARY results in comparing the performance of pgsql and
mysql.
These results are for a single process populating a table with 934k rows,
and then performing some selects.  I also compared the effect of creating 
indexes on some of the columns.

I have not yet done any testing of transactions, multiple concurrent
processes, etc.
I did not make any changes to the default config settings.  I can do
so if someone has some suggestions.
My machine is a 3.0 GHz P4 with 1 GB ram, running FC 3.
Stop now. I've not looked at your test results, and frankly there is no 
point. As it ships, PG should run fine on a small corner of an old 
laptop. It will not perform well with any sort of serious workload on 
any sort of serious hardware. You're wasting your time if you want to 
get any sort of meaningful result.

Take 30 minutes to read through the article below. It covers the basics 
of how to manage your configuration settings.
  http://www.powerpostgresql.com/PerfList

Oh - make sure you are accounting for caching effects as well.
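
For the record, the usual first knobs from that article look like this (values are rough illustrations for a 1 GB machine, not recommendations):

# postgresql.conf (8.0 setting names)
shared_buffers = 10000          # shared buffer cache, in 8 kB pages (~80 MB)
work_mem = 8192                 # per-sort/per-hash memory, in kB
effective_cache_size = 65536    # planner's estimate of the OS cache, in 8 kB pages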
--
  Richard Huxton
  Archonet Ltd
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq


[GENERAL]

2005-03-14 Thread Mohsen Pahlevanzadeh
Dears,
When I run the insert below from my code, I receive the following error:
Insert into c(t) values('ccc'); -- It's my query.
Insert command failed! So i can't continue to insert in your table!
ERROR:  pg_atoi: error in "ccc": can't parse "ccc"


my code:
pres = PQexec(pobj->connection2db, ins.c_str());
ins = "";
if (!pres || PQresultStatus(pres) != PGRES_COMMAND_OK)
{
    fprintf(stderr, "Insert command failed! So i can't continue to insert"
                    " in your table!\n");
    cout << PQerrorMessage(pobj->connection2db);
    pobj->exit_nicely(pobj->connection2db);
}
PQclear(pres);

When I print ins.c_str(), I see the following statement:
Insert into z(e) values('10z');
It looks correct, but I receive the same error.
Please help me.
--Mohsen

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL]

2005-03-14 Thread Miroslav Šulc
Mohsen Pahlevanzadeh wrote:
Dears,
When I run the insert below from my code, I receive the following error:
Insert into c(t) values('ccc'); -- It's my query.
Insert command failed! So i can't continue to insert in your table!
ERROR:  pg_atoi: error in "ccc": can't parse "ccc"
 

Usually atoi functions perform conversion from string to integer (if my
memory serves well). Maybe the problem is that you are trying to insert
a string into a numeric field.
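
A minimal reproduction of that theory (the schema is assumed, since the original table definition wasn't posted):

create table c (t integer);
insert into c(t) values('ccc');  -- fails: 'ccc' cannot be parsed as an integer
insert into c(t) values('10z');  -- fails too: the trailing 'z' is not numeric

That would also explain the second example: '10z' is no more a valid integer than 'ccc' is.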

...
 

Miroslav


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] preliminary performance comparison pgsql vs mysql

2005-03-14 Thread Mark Rae
On Mon, Mar 14, 2005 at 06:52:58AM -0500, Rick Schumeyer wrote:
 Below are some PRELIMINARY results in comparing the performance of pgsql and
 mysql.
 ...
 I have not yet done any testing of transactions, multiple concurrent
 processes, etc.
 

I would say that doing the concurrency tests is probably the most
important factor in comparing other databases against MySQL, as
MySQL will almost always win in single-user tests.

E.g. here are some performance figures from tests I have done in the past.
This is with a 6GB database on a 4-CPU Itanium system running a mixture of
read-only queries, but it is fairly typical of the behaviour I have seen. 
The Oracle figures also scaled in a similar way to postgres.

Clients   1 2 3 4 6 812163264   128
---
mysql-4.1.11.00  1.41  1.34  1.16  0.93  1.03  1.01  1.00  0.94  0.86  0.80
pg-7.4.1   0.65  1.27  1.90  2.48  2.45  2.50  2.48  2.51  2.49  2.39  2.38

-Mark

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] plpython function problem workaround

2005-03-14 Thread Michael Fuhr
On Mon, Mar 14, 2005 at 02:37:00PM +0200, Sim Zacks wrote:

 I worked around the plpython problem that doesn't allow scripts created on
 Windows to be run on the *nix server with the following statement.
 update pg_proc set prosrc=replace(prosrc,chr(13),'') where prolang=87238
 --plpythonu's oid in my setup is 87238. I don't know if that is a standard
 or just on mine.

The oid is arbitrary, so you should get it via a (sub)query instead
of hardcoding it.

 Is there a way to automate that script every time a plpythonu function is
 created?
 I tried writing a trigger on the pg_proc table but it wouldn't let me:

Hmmm...plpythonu doesn't install a VALIDATOR function.  I wonder
if you could exploit that?  This is just a brainstorm, but the
following worked for me in trivial tests:

CREATE FUNCTION fixpython(funcoid oid) RETURNS void AS $$
BEGIN
UPDATE pg_proc SET prosrc = replace(prosrc, chr(13), '')
 WHERE oid = funcoid;

RETURN;
END;
$$ LANGUAGE plpgsql VOLATILE STRICT;

UPDATE pg_language SET lanvalidator = 'fixpython'::regproc 
 WHERE lanname = 'plpythonu';

Are there any problems with doing this?  Is a VALIDATOR function
permitted to modify the function it's validating?  This wouldn't
work if plpythonu ever installs a VALIDATOR, but you might be able
to use it until such time (barring objections about why it's a Bad
Idea, that is).

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] free space map settings

2005-03-14 Thread go
Hi,
 Please tell me what the following hint means:
TIP 7: don't forget to increase your free space map settings
Where can I read about these settings?

--
Igor


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[GENERAL] skip weekends: revisited

2005-03-14 Thread Jeff Amiel
Back in 2002, Jean-Luc Lachance gave a nifty algorithm 
(http://archives.postgresql.org/pgsql-sql/2002-06/msg00301.php) for 
determining the resulting date given a starting date and number of 
'workdays' in the future.  The trick was that weekends (Saturday and 
Sunday) could not be counted.

The algorithm looks like this:
date := now - day_of_the_week
interval := interval + day_of_the_week
date := date + int(interval/5)*7 + (interval mod 5)
However, when I attempted to implement it, I found the answers I received to be problematic.
I'm sure the issue is in my interpretation of the algorithm, but I can't quite figure it out.  

Let's take an example.  Starting on March 11, 2005, what date is 4 'workdays' 
in the future?
Step one:
date := now - day_of_the_week
March 5th = March 11th - 6   (6 is the day of the week for Friday, March 11th).
Step two:
interval = interval + day of the week
10 = 4 + 6  (4 is the interval)
After this point, the date field is now March 5th and the interval is now 10.
Step three:
date := date + int( interval/5)x7 + ( interval mod 5)
March 19th = March 5th + 14 + 0   

End result: March 19th.
But March 19th is not correct (heck, it's a Saturday!). It should be March 17th.

What am I doing/interpreting wrong?



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] plpython function problem workaround

2005-03-14 Thread Sim Zacks
Thank You Michael,

It worked when I tried it on the 3 functions that I have.
I checked the archives for any discussion of the validator and there wasn't
anything specifically discussing it.
I didn't find a lot of information about validators in general either; for
example, where did you see that a validator function gets the language oid
passed in? The only place I could find it was after I looked at the other
languages installed on my system and saw that they all had the same argument
type passed in.

Sim





---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] free space map settings

2005-03-14 Thread Miroslav Šulc
go wrote:
Hi,
Please tell me what the following hint means:
TIP 7: don't forget to increase your free space map settings
Where can I read about these settings?
 

http://www.postgresql.org/docs/8.0/interactive/runtime-config.html
--
Igor
 

Miroslav


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] free space map settings

2005-03-14 Thread Michael Fuhr
On Mon, Mar 14, 2005 at 04:52:58PM +0300, go wrote:

  Please tell me what the following hint means:
 TIP 7: don't forget to increase your free space map settings
 Where can I read about these settings?

See the "Run-time Configuration" section of the "Server Run-time
Environment" chapter of the documentation.  Here's a link to the
documentation for the latest version of PostgreSQL:

http://www.postgresql.org/docs/8.0/interactive/runtime-config.html#RUNTIME-CONFIG-RESOURCE-FSM

See also the Annotated POSTGRESQL.CONF Guide for PostgreSQL:

http://www.powerpostgresql.com/Downloads/annotated_conf_80.html
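
The settings the tip refers to, with illustrative values (the defaults are low for busy databases; these are not recommendations):

# postgresql.conf
max_fsm_pages = 200000     # total pages with free space tracked cluster-wide
max_fsm_relations = 1000   # tables and indexes tracked in the free space map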

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] problem in jdbc postgresql and tomcat

2005-03-14 Thread David Gagnon
Hi,
 I see 2 JDBC versions in your classpath: pg73jdbc3.jar and
pg74.215.jdbc3.jar.

For Tomcat you just have to drop the JDBC jar file in
/usr/local/jakarta-tomcat-4.1.31/common/lib
No classpath to change. Nothing.
Afterwards, create an entry in Server.xml, or create a Context.xml, to define your
datasource.  See the Tomcat docs.
Works OK for me.

/David
java unix wrote:
hi,
i have installed postgresql-7.4.1, tomcat 4.1.31 and
j2sdk1.4.2_07.
for tomcat i have included the path
/usr/local/jakarta-tomcat-4.1.31
for j2sdk the path set is /usr/local/j2sdk1.4.2_07
and the CLASSPATH set is:
/usr/local/pgsql/pg73jdbc3.jar:/usr/local/pgsql/pg74.215.jdbc3.jar:/usr/local/pg
sql/postgresql-8.0-310.jdbc3.jar:/usr/local/j2sdk1.4.2_07/lib/tools.jar:/usr/loc
al/j2sdk1.4.2_07/jre/lib/rt.jar:/usr/local/jakarta-tomcat-4.1.31/common/lib/serv
let.jar:/usr/local/jakarta-tomcat-4.1.31/common/lib:/usr/local/pgsql:/usr/local/
j2sdk1.4.2_07/jre/lib:/usr/local/j2sdk1.4.2_07/lib
i have included all the paths.
all the above paths are set in /etc/profile,
and i set paths for all three jar
files:/usr/local/pgsql/pg73jdbc3.jar:/usr/local/pgsql/pg74.215.jdbc3.jar:/usr/local/pg
sql/postgresql-8.0-310.jdbc3.jar
when i run this simple java program i get
errors as follows:
[EMAIL PROTECTED] bin]# javac PostgreSQLTest.java   // compile
PostgreSQLTest.java: In class `PostgreSQLTest':
PostgreSQLTest.java: In method
`PostgreSQLTest.main(java.lang.String[])':
PostgreSQLTest.java:7: Exception
`java.lang.ClassNotFoundException' must be caught, or
it must be declared in the `throws' clause of `main'.
        Class.forName("postgresql.Driver").newInstance();
PostgreSQLTest.java:7: Exception
`java.lang.InstantiationException' must be caught, or
it must be declared in the `throws' clause of `main'.
        Class.forName("postgresql.Driver").newInstance();
PostgreSQLTest.java:7: Exception
`java.lang.IllegalAccessException' must be caught, or
it must be declared in the `throws' clause of `main'.
        Class.forName("postgresql.Driver").newInstance();

my PostgreSQLTest.java is as below:

import java.sql.*;

class PostgreSQLTest {
    public static void main(String[] args) {
        try {
            Driver driver = (Driver)
                Class.forName("postgresql.Driver").newInstance();
            DriverManager.registerDriver(driver);

            String url = "jdbc:postgresql:javatest";
            Connection con =
                DriverManager.getConnection(url, "postgres", "");
            Statement stm = con.createStatement();

            stm.setQueryTimeout(10);
            ResultSet rs = stm.executeQuery("select col1 from test");

            rs.next();

            System.out.println(rs.getString(1));

        } catch (SQLException e) {
            System.out.println("Exception!");
            System.out.println(e.toString());
        }
    }
}
This is one simple example

Now when i tried to perform operations with tomcat,
a simple servlet works, but with database operations i get
the following errors.
file is: ShowBedrock.java

import javax.servlet.*;
import javax.servlet.http.*;

public class ShowBedrock extends HttpServlet
{
    public String getServletInfo()
    {
        return "Servlet connects to PostgreSQL database and displays result of a SELECT";
    }

    private Connection dbcon;  // Connection for scope of ShowBedrock

    // init sets up a database connection
    public void init(ServletConfig config) throws ServletException
    {
        String loginUser = "postgres";
        String loginPasswd = "roopesh";
        String loginUrl = "jdbc:postgresql://localhost/bedrock";

        // Load the PostgreSQL driver
        try
        {
            Class.forName("org.postgresql.Driver");
            dbcon =

Re: [GENERAL] plpython function problem workaround

2005-03-14 Thread Michael Fuhr
On Mon, Mar 14, 2005 at 04:08:09PM +0200, Sim Zacks wrote:

 I checked the archives for any discussion on the Validator and there wasn't
 anything specifically discussing it.
 I really didn't find a lot of information about the Validators either, for
 example where did you see that a validator function gets the language oid
 passed in?

The validator is passed the function's OID, not the language's OID.
See the documentation for CREATE LANGUAGE, as well as the Procedural
Languages chapter:

http://www.postgresql.org/docs/8.0/interactive/sql-createlanguage.html
http://www.postgresql.org/docs/8.0/interactive/xplang.html

Note that my idea to use the validator function was just a brainstorm,
not thoroughly tested or thought out.  Maybe one of the developers
will comment about the wisdom of (ab)using it the way I suggested.

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly



Re: [GENERAL] skip weekends: revisited

2005-03-14 Thread Shachar Shemesh
Jeff Amiel wrote:
Back in 2002, Jean-Luc Lachance gave a nifty algorithm 
(http://archives.postgresql.org/pgsql-sql/2002-06/msg00301.php) for 
determining the resulting date given a starting date and number of 
'workdays' in the future.  The trick was that weekends (Saturday and 
Sunday) could not be counted.
...
What am I doing/interpreting wrong?
I'm not sure about the algorithm. I can tell you, however, that such an 
algorithm is locale dependent.

For example, in Israel the work week is Sunday-Thursday. Weekend is 
Friday and Saturday. I'm fairly sure that there are places around the 
globe in which a work-week is six days long.

 Shachar
--
Shachar Shemesh
Lingnu Open Source Consulting ltd.
Have you backed up today's work? http://www.lingnu.com/backup.html
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq


Re: [GENERAL] Convert Cursor to array

2005-03-14 Thread FERREIRA William (COFRAMI)
well, my example was incomplete. At the beginning:
chapter_id | evolution | index
         1 |         0 |     1
         2 |         0 |     2
         3 |         0 |     3
         4 |         1 |     2
         5 |         1 |     4

by using the sort function I obtain this:
chapter_id | evolution | index
         1 |         0 |     1
         4 |         1 |     2
         2 |         0 |     2
         5 |         1 |     4
         3 |         0 |     3

that's why I can't use ORDER BY...


-----Original Message-----
From: Ragnar Hafstað [mailto:[EMAIL PROTECTED]]
Sent: Monday, 14 March 2005 12:11
To: FERREIRA William (COFRAMI)
Cc: 'Richard Huxton'; GIROIRE Nicolas (COFRAMI);
'pgsql-general@postgresql.org'
Subject: Re: [GENERAL] Convert Cursor to array








Re: [GENERAL] preliminary performance comparison pgsql vs mysql

2005-03-14 Thread Rick Schumeyer
That site produces some sort of php error.

I don't suppose this information is available elsewhere?

 Stop now. I've not looked at your test results, and frankly there is no
 point. As it ships, PG should run fine on a small corner of an old
 laptop. It will not perform well with any sort of serious workload on
 any sort of serious hardware. You're wasting your time if you want to
 get any sort of meaningful result.

 Take 30 minutes to read through the article below. It covers the basics
 of how to manage your configuration settings.
http://www.powerpostgresql.com/PerfList



---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] plpython function problem workaround

2005-03-14 Thread Tom Lane
Michael Fuhr [EMAIL PROTECTED] writes:
 Are there any problems with doing this?  Is a VALIDATOR function
 permitted to modify the function it's validating?

Well, it's not *supposed* to, but the tuple is already stored so I
guess an UPDATE on it will work.  At the moment.

 This wouldn't work if plpythonu ever installs a VALIDATOR, but you
 might be able to use it until such time (barring objections about why
 it's a Bad Idea, that is).

The proposed hack seems far too simplistic to me ... what of CRs that
are deliberately included in string literals?

I don't know Python at all, so I don't know how complicated its lexical
structure is, but ISTM you'd at least need enough smarts to distinguish
literals from unquoted whitespace.
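
To make the hazard concrete (an illustrative construction, not from the thread): the statement below generates DDL for a function whose triple-quoted Python string contains a real CR byte; the blanket replace(prosrc, chr(13), '') would strip that byte too, silently changing what the function returns.

SELECT 'CREATE FUNCTION f() RETURNS text AS $x$' || chr(10)
    || 'return """a' || chr(13) || 'b"""'        || chr(10)
    || '$x$ LANGUAGE plpythonu;' AS ddl;  -- run the generated DDL, then apply the replace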

The other small fly in the ointment is that when the server is running
on Windows, I suppose we would have to *put in* rather than remove CRs.
Sim probably doesn't care about that case, but we couldn't accept an
official patch that doesn't handle it.

regards, tom lane

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] preliminary performance comparison pgsql vs mysql

2005-03-14 Thread Richard Huxton
Rick Schumeyer wrote:
That site produces some sort of php error.
Hmm - was working this morning. Perhaps some maintenance going on.
I don't suppose this information is available elsewhere?
Try some slightly older notes here:
http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php
--
  Richard Huxton
  Archonet Ltd
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Peculiar performance observation....

2005-03-14 Thread Thomas F . O'Connell
Well, your expected vs. actual rows are off, so analyzing might help.  
Otherwise, what is your sort_mem set to?

-tfo
--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC
http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005
On Mar 14, 2005, at 6:11 AM, Net Virtual Mailing Lists wrote:
Hello,
I am sorry to bring this up again. Does anyone have any idea what
might be going on here?...   I'm very worried about this situation..
;-(

- Greg

Something even more peculiar (at least it seems to me..)...
If I drop the index table1_category_gist_idx, I get this:
jobs= explain analyze select id from table1 where category @ 'a.b'
ORDER BY category;
   QUERY PLAN
-

--
Sort  (cost=7568.55..7568.62 rows=28 width=52) (actual
time=4842.691..4854.468 rows=1943 loops=1)
  Sort Key: category
  -  Seq Scan on jobdata  (cost=0.00..7567.88 rows=28 width=52)  
(actual
time=11.498..4800.907 rows=1943 loops=1)
Filter: (category @ 'a.b'::ltree)
Total runtime: 4871.076 ms
(5 rows)

.. no disk thrashing all over the place..
I'm really perplexed about this one..;-(
- Greg
I have a rather peculiar performance observation and would welcome  
any
feedback on this.

First off, the main table (well, part of it.. it is quite large..):
Table table1
  Column   |   Type   |
  Modifiers
+--
+-
id | integer  | not null default
nextval('master.id_seq'::text)
user_id| integer  |
... (skipping about 20 columns)
category   | ltree[]  |
somedata   | text | not null

Indexes:
   table1_pkey primary key, btree (id)
   table1_category_full_gist_idx gist (category)
   table1_id_idx btree (id)
   table1_fti_idx gist (fti) WHERE ((status)::text = 'open'::text)
   table1_user_id_idx btree (user_id)
database= explain analyze select id from table1 where category @  
'a.b';

QUERY
PLAN
-
-
- 

Index Scan using table1_category_full_gist_idx on jobdata
(cost=0.00..113.48 rows=28 width=4) (actual time=43.814..12201.528
rows=1943 loops=1)
  Index Cond: (category @ 'a.b'::ltree)
  Filter: (category @ 'a.b'::ltree)
Total runtime: 1.258 ms

If I do this:
create table yuck (id integer, category ltree[]);
insert into yuck select id, category from table1;
create index category_idx on yuck using gist(category);
vacuum analyze yuck;
jobs= explain analyze select id from table1 where id in (select id  
from
yuck where category @ 'a.b');
 QUERY  
PLAN

-
-
-
Nested Loop  (cost=108.64..114.28 rows=1 width=52) (actual
time=654.645..1245.212 rows=1943 loops=1)
  -  HashAggregate  (cost=108.64..108.64 rows=1 width=4) (actual
time=654.202..690.709 rows=1943 loops=1)
-  Index Scan using category_idx on yuck  (cost=0.00..108.57
rows=28 width=4) (actual time=2.046..623.436 rows=1943 loops=1)
  Index Cond: (category @ 'a.b'::ltree)
  Filter: (category @ 'a.b'::ltree)
  -  Index Scan using table1_pkey on jobdata  (cost=0.00..5.64  
rows=1
width=52) (actual time=0.219..0.235 rows=1 loops=1943)
Index Cond: (table1.id = outer.id)
Total runtime: 1261.551 ms
(8 rows)

In the first query, my hard disk trashes audibly the entire 12  
seconds
(this is actually the best run I could get, it is usually closer to  
20
seconds), the second query runs almost effortlessly..  I've tried
reindexing, even dropping the index and recreating it but nothing I  
do
helps at all.

Now keep in mind that I do all of my development on painfully slow
hardware in order to make any performance issues really stand out.   
But,
I've done this on production servers too with an equal performance
improvement noticed.

I just can't figure out why this second query is so much faster, I  
feel
like I must have done something very wrong in my schema design or
something to be suffering this sort of a performance loss.   Any idea
what I can do about this?

Thanks as always!
- Greg

---(end of  
broadcast)---
TIP 8: explain analyze is your friend

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to 

Re: [GENERAL] skip weekends: revisited

2005-03-14 Thread Andre Maasikas
Jeff Amiel wrote:
Back in 2002, Jean-Luc Lachance gave a nifty algorithm 
(http://archives.postgresql.org/pgsql-sql/2002-06/msg00301.php) for 
determining the resulting date given a starting date and number of 
'workdays' in the future.  The trick was that weekends (Saturday and 
Sunday) could not be counted.

The algorithm looks like this:
date := now - day_of_the_week
interval := interval + day_of_the_week
date := date + int( interval/5)x7 + ( interval mod 5)
However, when I attempted to implement it, I found the answers I 
received to be problematic.
I'm sure the issue is in my interpretation of the algorithm, but I can't 
quite figure it out. 
Let's take an example.  Starting on March 11, 2005, what date is 4 
'workdays' in the future?

Step one:
date := now - day_of_the_week
March 5th = March 11th - 6   (6 is the day of week for Fridaymarch 
11th).
...
What am I doing/interpreting wrong?
From the looks of it, the variable date is meant to be the
first day of the week, and for that to work day_of_the_week
has to start from 0, so Sunday - 0, Friday - 5
(in the US at least?)
Andre
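
With that convention (extract(dow from ...) returns 0 for Sunday through 6 for Saturday), a direct transcription might look like this (a sketch, untested for start dates that fall on a weekend):

create or replace function add_workdays(d date, n integer) returns date as $$
declare
    dow integer := extract(dow from d)::integer;  -- 0 = Sunday ... 6 = Saturday
    i   integer := n + dow;
begin
    -- back up to Sunday, then add whole weeks plus the leftover weekdays
    return (d - dow) + (i / 5) * 7 + (i % 5);
end;
$$ language plpgsql;

add_workdays('2005-03-11', 4) then yields 2005-03-17, matching the corrected answer above.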

---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] plpython function problem workaround

2005-03-14 Thread Michael Fuhr
On Mon, Mar 14, 2005 at 10:54:22AM -0500, Tom Lane wrote:

 The proposed hack seems far too simplistic to me ... what of CRs that
 are deliberately included in string literals?

Yeah, I meant to mention that; I think it's been brought up before
in related threads.  The programmer would need to be aware of that
issue and allow for it.  It doesn't seem to be a problem if you use
\r escape sequences in a dollar-quoted function body or if you use
\\r in single quotes -- is there any case where those sequences
might get converted to literal CRs?  Third-party software like user
interfaces might be problematic, but what about PostgreSQL itself
and its official tools (psql, pg_dump, etc.)?

 I don't know Python at all, so I don't know how complicated its lexical
 structure is, but ISTM you'd at least need enough smarts to distinguish
 literals from unquoted whitespace.

The example I posted was merely that: an example.  I was more
interested in whether abusing the validator mechanism would work
or if it might have subtle problems.  A function that understands
the Python grammar is left as an exercise for the reader.

 The other small fly in the ointment is that when the server is running
 on Windows, I suppose we would have to *put in* rather than remove CRs.

Would we?  My understanding is that code passed to PyRun_String()
and friends must be free of line-ending CRs on all platforms, and
that the code that reads a normal Python script takes care of
that (i.e., normalizes line endings to be LF only).  Can anybody
confirm or deny?

 Sim probably doesn't care about that case, but we couldn't accept an
 official patch that doesn't handle it.

I wasn't proposing a patch, at least not yet.  Just throwing out
an idea that somebody might be able to build on.

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] script variables

2005-03-14 Thread Paul Cunningham
I use a bash script (similar to the following example) to update tables.
psql -v passed_in_var=\'some_value\'  -f script_name 
Is it possible to pass a value back from psql to the bash script?
Thanks,
Paul Cunningham
---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Temporary tables privileges

2005-03-14 Thread Bruce Momjian
Alejandro D. Burne wrote:
 Thanks Bruce. Then how can I allow a user to create temp tables and
 drop them, without giving him global drop privileges? (I can't grant drop
 privileges on the temp table because it doesn't exist yet.)

Perhaps you need a SECURITY DEFINER function.  You can set the
permissions on the temp schemas too:

   GRANT { { CREATE | TEMPORARY | TEMP } [,...] | ALL [ PRIVILEGES ] }
   ON DATABASE dbname [, ...] TO { username | GROUP groupname | PUBLIC 
} [, ...] [ WITH
 GRANT OPTION ]

I have forgotten what you are trying to do.
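
If it's the drop side you're after, a SECURITY DEFINER sketch might look like this (illustrative only; the function runs with its owner's rights, so the caller needs no DROP privilege of its own):

CREATE FUNCTION drop_tmp() RETURNS void AS $$
BEGIN
    -- EXECUTE avoids cached plans referring to a short-lived temp table
    EXECUTE 'DROP TABLE tmp';
    RETURN;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;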

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] preliminary performance comparison pgsql vs mysql

2005-03-14 Thread Scott Marlowe
On Mon, 2005-03-14 at 05:52, Rick Schumeyer wrote:
 Below are some PRELIMINARY results in comparing the performance of pgsql and
 mysql.
 
 These results are for a single process populating a table with 934k rows,
 and then performing some selects.  I also compared the effect of creating 
 indexes on some of the columns.
 
 I have not yet done any testing of transactions, multiple concurrent
 processes, etc.
 
 I did not make any changes to the default config settings.  I can do
 so if someone has some suggestions.
 
 My machine is a 3.0 GHz P4 with 1 GB ram, running FC 3.
 
 I used pg 8.0.1 and mysql 5.0.2 alpha.

Why are all the tests here select count(*) tests?  Surely your
application does something more interesting than counting rows...

For a more interesting test, try setting up three or four streaming
writers that write information continuously into the database, and then
run the select count(*) queries against both and see what happens.

My guess is that the table level locking of myisam tables means the
MySQL database will slow to a crawl or throw error messages, while the
postgresql system will slow down somewhat but keep right on running.

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] sql question

2005-03-14 Thread Chris Travers
Steven Verhoeven wrote:
Hi all
My table definition :
 id | fref | mref
----+------+------
  1 |   23 |   25
  2 |   24 |   28
  3 |   25 |   31
  4 |   26 |   34
My problem :
i need a query that results in this :
 id | ref
----+-----
  1 |  23
  1 |  25
  2 |  24
  2 |  28
  3 |  25
  3 |  31
  4 |  26
  4 |  34

Do I need a crosstab-query ?
Who can help me ?

How about
select id, mref AS ref from table
UNION
select id, fref AS ref from table
Or is that not what you want?
Best Wishes,
Chris Travers
Metatron Technology Consulting

--
A computer is like an airconditioner. When windows open, it stops
working!

-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
-- -- -- -- -- -- -- -- -- -- -- -- -- -- --

Steven Verhoeven, ICT Support Engineer
Department for Molecular Biomedical Research (DMBR)
VIB - Ghent University 'Fiers-Schell-Van Montagu' building
Technologiepark 927B - 9052 Ghent (Zwijnaarde)
Belgium
Tel : +32-(0)9-33-13.606
Fax : +32-(0)9-33-13.609E-mail : [EMAIL PROTECTED]
URL : http://www.dmbr.UGent.be


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] skip weekends: revisited

2005-03-14 Thread Jeff Amiel
Andre Maasikas wrote:
From the looks of it, variable date seems to calculate
the first day of a week and for it to work day_of_the_week
has to start from 0 so Sunday - 0,  Friday - 5
(in US at least?)
Andre
That appears to be the case (0 is Sunday).
thanks for the assist!!
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] Peculiar performance observation....

2005-03-14 Thread Scott Marlowe
On Mon, 2005-03-14 at 06:11, Net Virtual Mailing Lists wrote:
 Hello,
 
 
 I am sorry to bring this up again Does anyone have any idea what
 might be going on here?...   I'm very worried about this situation.. ;-(

It looks to me like either you're not analyzing often enough, or your
statistics target is too low to get a good sample.  Note your estimated
versus real rows are off by a factor of 70 (28 est. versus 1943 actual
rows). That's a pretty big difference, and where you should be looking.

-  Seq Scan on jobdata  (cost=0.00..7567.88 rows=28 width=52) (actual
 time=11.498..4800.907 rows=1943 loops=1)

Yes, this is because PostgreSQL is using an index to approximate a
sequential scan, which is not a good thing since PostgreSQL can't get
all the information it needs from just an index, but has to visit the
table to check visibility.

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Best practices: Handling Daylight-saving time

2005-03-14 Thread Karsten Hilbert
For the record, if people search the archives for solutions.

Problem:

You need the time zone in use for timestamp with time zone
when data was previously inserted/updated.

Discussion:

   As others have mentioned, store timestamps on the server in UTC,
  
  1) As long as I store them as timestamp with time zone I should
  not need to care what they are stored as on the backend as
  long as I provide the proper timezone for the client location.
  Correct ?
  
  2) If I then retrieve them as ... at time zone ... I will get
  the equivalent time in the time zone of the retrieving client.
  The same could be be achieved with set timezone per session.
  Correct ?
 
 Yes and Yes
 
  3) If I retrieve them without at time zone I will get them with
  the time zone that was stored in the first place, right ?
 [...]
 
 This would be news to me.
...
 Anyway, afaik, Postgres does not store the original timezone
 anywhere, and so could not possibly retrieve it.

Solution:

GnuMed now uses a trigger to store the time zone at the time
of data insertion. This was close enough for our needs at the
time (yes, we are old, triggers still need to return opaque on
some of our installations...).

Code:

\unset ON_ERROR_STOP
drop trigger tr_set_encounter_timezone on clin_encounter;
drop function f_set_encounter_timezone();
\set ON_ERROR_STOP 1

create function f_set_encounter_timezone() returns opaque as '
begin
if TG_OP = ''INSERT'' then
NEW.source_time_zone := (select (extract(timezone from (select now()))::text || ''seconds'')::interval);
else
NEW.source_time_zone := OLD.source_time_zone;
end if;
return NEW;
end;
' language 'plpgsql';

create trigger tr_set_encounter_timezone
before insert or update on clin_encounter
for each row execute procedure f_set_encounter_timezone()
;
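
For illustration, a sketch of how the stored zone can be used at read time
(clin_when is an assumed column name; only source_time_zone appears above):

-- render each timestamp in the zone recorded when the row was inserted
select clin_when at time zone source_time_zone as local_when
from clin_encounter;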

Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Checking for schedule conflicts

2005-03-14 Thread Andre Maasikas
Ragnar Hafstað wrote:
On Sat, 2005-03-12 at 00:13 -0800, Benjamin Smith wrote:
Given the tables defined below, what's the easiest way to check for schedule 
conflicts? 

So far, the only way I've come up with is to create a huge, multi-dimensional 
array in PHP, with a data element for every minute of all time taken up by 
all events, and then check for any of these minutes to be set as I go through 
all the records. (ugh!) 

But, how could I do this in the database? 

But I'd like to see something like 
select count(*) FROM events, sched 
WHERE sched.date=$date
AND events.id=sched.events_id 
...
GROUP BY date, start<finish and finish>start 
HAVING count(*) > 1  

And here's where I get stumped. You can't group by start or end because we 
need to check if they OVERLAP any other records on the same date. 

Ideas?

use the OVERLAPS operator ?
http://www.postgresql.org/docs/8.0/interactive/functions-datetime.html
gnari
The idea is to join table with itself so you can
compare different records, something like:
select * from  sched a, sched b  /* join with itself */
where (a.start between b.start and b.end /* filter out overlapping */
   or a.end between b.start and b.end)
and a.id != b.id /* event overlaps itself - leave that out */
or instead of 'between' use the OVERLAPS operator Ragnar mentioned when
dealing with date types.
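
A minimal sketch of the OVERLAPS variant (assuming start and finish are
time or timestamp columns, per the schema discussed above):

select a.*
from sched a, sched b
where a.date = b.date
  and a.id <> b.id
  and (a.start, a.finish) overlaps (b.start, b.finish);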
Andre
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] plpython function problem workaround

2005-03-14 Thread Marco Colombo
On Mon, 14 Mar 2005, Michael Fuhr wrote:
Would we?  My understanding is that code passed to PyRun_String()
and friends must be free of line-ending CRs on all platforms, and
that the code that reads a normal Python script takes care of
that (i.e., normalizes line endings to be LF only).  Can anybody
confirm or deny?
I'm not sure of that. I suspect you'll need to pass CRs on windows.
If anyone manages to compile the following code on Windows...
#include <Python.h>

void
run_program(const char *program)
{
    PyObject *ret, *globals, *locals;

    printf("running:\n%s\n", program);
    globals = PyDict_New();
    locals = PyDict_New();
    ret = PyRun_String(program, Py_file_input, globals, locals);
    if (ret) {
        Py_DECREF(ret);
        printf("\n");
    } else {
        PyErr_Print();
    }
    Py_DECREF(locals);
    Py_DECREF(globals);
    printf("end\n\n");
}

int
main(int argc, char *argv[])
{
    const char *program1 = "print 1\nprint 2\n";
    const char *program2 = "print 1\r\nprint 2\r\n";

    Py_Initialize();
    printf("Initialized.\n");
    printf("Python %s\n", Py_GetVersion());
    run_program(program1);
    run_program(program2);
    Py_Finalize();
    printf("Finalized.\n");
    return 0;
}
On my Fedora Core 2, I need to complile it with the following command:
gcc -I/usr/include/python2.3 -L/usr/lib/python2.3/config py-test.c -o py-test\
-lpython2.3 -ldl -lm -lpthread -lutil
This is my first attempt to embed python, so I may be missing something...
On Linux, you get:
$ ./py-test 2>&1 | cat -v
Initialized.
Python 2.3.3 (#1, May  7 2004, 10:31:40)
[GCC 3.3.3 20040412 (Red Hat Linux 3.3.3-7)]
running:
print 1
print 2
1
2
end

running:
print 1^M
print 2^M
  File "<string>", line 1
print 1^M
   ^
SyntaxError: invalid syntax
end

Finalized.

I bet on windows the first program fails and the second is ok.
.TM.
--
  /  /   /
 /  /   /   Marco Colombo
___/  ___  /   /  Technical Manager
   /  /   /  ESI s.r.l.
 _/ _/  _/ [EMAIL PROTECTED]
---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] prelimiary performance comparison pgsql vs mysql

2005-03-14 Thread Tom Lane
Richard Huxton dev@archonet.com writes:
 Rick Schumeyer wrote:
 Below are some PRELIMINARY results in comparing the performance of pgsql and
 mysql.

 Take 30 minutes to read through the article below. It covers the basics 
 of how to manage your configuration settings.
http://www.powerpostgresql.com/PerfList

I have been fooling with the sql-bench stuff that MySQL ships with their
database.  Not because I take it seriously ;-) but because I thought it
would be useful to understand in detail why we look so spectacularly bad
on it.  I'll write a more complete report when I'm done, but what's
relevant to Rick's testing is that I have found that a few simple
configuration adjustments make a huge difference.  Specifically,
I've got

shared_buffers = 10000          # 10x the default
checkpoint_segments = 30        # 10x the default
work_mem = 100000               # ~100x the default
maintenance_work_mem = 100000   # ~6x the default

(The *work_mem numbers are probably excessive but I've not bothered to
fine-tune them.)  A stock out-of-the-box PG 8.0.1 RPM is about 10x slower
overall than MySQL according to this benchmark, but these adjustments
bring it to something like 2x slower.  Which is at least in the ballpark.

Most of the tables that this benchmark uses have about 300K
not-super-wide rows, so what this says is that you need numbers in this
vicinity to work on tables of that size.

Bottom line is that you *must* adjust at least these settings if you
want a high-performance PG server.
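
For anyone wanting to experiment before editing postgresql.conf: the two
*work_mem values can be tried per session (a sketch; shared_buffers and
checkpoint_segments require a server restart and cannot be set this way):

SET work_mem = 100000;             -- in kB; lasts for this session only
SET maintenance_work_mem = 100000;
SHOW work_mem;                     -- confirm the active value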

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] PostgreSQL training

2005-03-14 Thread Robert Treat
On Sun, 2005-03-13 at 15:23, Alex Adriaanse wrote:
 Hi,
 
 I'm working on an application for a client that uses PostgreSQL as its 
 database backend.  The client wants to train their team on PostgreSQL so 
 that they can maintain the application and the database themselves after 
 it goes live should they need to.  As far as I know the majority of them 
 don't have any experience with databases (other than using applications 
 that rely on them of course).  The type of training that they would need 
 would need to cover generic database concepts (database design, SQL), as 
 well as PostgreSQL-specific information (mostly database administration, 
 maybe some performance tuning).
 
 They can travel anywhere within the US to take the training.  I noticed 
 there were a few training events posted on postgresql.org, but only one 
 of them is within the US (PostgreSQL Bootcamp, next month), and I'm not 
 sure if that course would offer the right type of training for them (I 
 think they'd need something more elementary).
 
 Do you guys have any suggestions as far as training is concerned 
 (training events throughout the year, companies that do PostgreSQL 
 training, etc.)?
 

My first recommendation would be to see if attending OSCon is an option
for them, since there will be a number of classes at different levels
that they could attend, plus they would get the chance to meet other
postgresql users and developers. 

If that doesn't work, try contacting some of the bigger support
companies like Pervasive or SRA who both have intentions of doing
training services so they might have something for you. 

Next up would be any of the smaller support companies listed at
http://techdocs.postgresql.org/companies.php, many of whom offer
training courses that might fit what you're looking for. 

If you still can't find anything then, given enough dollars, I'll do a
weekend crash course for you guys ;-)  Actually I should mention a
disclaimer here in that afaik the Nerd Ranch guys can go either fast or
slow on the course depending on the aptitude of the students... if you're
going to be sending multiple people it wouldn't hurt to talk with them. 

HTH

Robert Treat
-- 
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] plpython function problem workaround

2005-03-14 Thread Michael Fuhr
On Mon, Mar 14, 2005 at 08:14:42PM +0100, Marco Colombo wrote:
 On Mon, 14 Mar 2005, Michael Fuhr wrote:
 
 Would we?  My understanding is that code passed to PyRun_String()
 and friends must be free of line-ending CRs on all platforms, and
 that the code that reads a normal Python script takes care of
 that (i.e., normalizes line endings to be LF only).  Can anybody
 confirm or deny?
 
 I'm not sure of that. I suspect you'll need to pass CRs on windows.

Hmmm...I think that would be inconsistent with previous reports.
For example, in the following message, the poster said that everything
(PostgreSQL, pgAdmin) was running on Windows 2003:

http://archives.postgresql.org/pgsql-interfaces/2005-02/msg00066.php

I suggested that he strip the CRs from pg_proc.prosrc and he said
it worked:

http://archives.postgresql.org/pgsql-interfaces/2005-03/msg00014.php
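
For reference, that suggestion amounts to something like the following
sketch ('myfunc' is a placeholder, it requires superuser rights, and '\r'
relies on backslash escapes in string literals, the default here):

-- strip carriage returns from a function's stored source
UPDATE pg_proc
SET prosrc = replace(prosrc, '\r', '')
WHERE proname = 'myfunc';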

It's not clear that the test in the second message was run on a
Windows server (apparently pgAdmin was run on a Windows client),
but I think the beginning of the message is saying that he didn't
reply for so long because he didn't have access to a Windows server.
From that I infer that he tried my suggestion and posted the results
when he finally did get access to a customer's Windows server.  I
could be misreading that, however.

A couple of months ago Stuart Bishop brought this issue up in
python-dev.  Most of the thread is along the lines of strip the
carriage returns:

http://mail.python.org/pipermail/python-dev/2005-January/051203.html

 If anyone manages to compile the following code on Windows...
...
 I bet on windows the first program fails and the second is ok.

Hopefully somebody will do a Windows test of the code you posted.
I'd find it bizarre that the Python code embedded in a C program
had to care whether it was running on *nix or Windows.

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


[GENERAL] Problem with special character () on postgresql 7.4... getting out of idea .. please help :-)

2005-03-14 Thread David Gagnon
Hi,
I really have a problem with a production environment (RH 9, Postgresql 
7.4).

When executing a stored procedure on my computer (development 
environment: 7.4 under cygwin) everything is OK.

When I deploy on the production env the same stored procedure with the 
same data (different OS and postgresql instance) the stored procedure 
crashes. I get this error:
java.io.IOException: Unable to get row: ERROR: cannot EXECUTE a null 
querystring when executing the

I looked with pgAdminIII and found that a character used as a string 
separator (i.e.: ) is shown as  on the production database. It looks 
OK in my dev env. (I included part of both stored procedures below.)

Both stored procedures have been added to postgresql via JDBC. When I 
update the stored procedure via PgAdminIII the stored procedure looks OK.

Any idea what the error can be? Is there any JDBC/PostgreSQL 7.4 version 
that can cause this behavior? Do I have to set a flag somewhere? Is 
there a way I can work around this problem?

Thanks for your help .. it's really appreciated
/David


CREATE OR REPLACE FUNCTION uk_webos_parseitemprice(int4, _int4)
RETURNS int4 AS
'
DECLARE
commandId ALIAS FOR $1;
arrayProp ALIAS FOR $2;
rawData RECORD;
oneRow text[];
i INTEGER;
idValue VARCHAR;
typeValue VARCHAR;

OFFSET 1
LOOP
select into oneRow (string_to_array(rawData.VDDATA,\'\'));
action:=oneRow[1];
FOR i IN array_lower(oneRow, 1)..array_upper(oneRow, 1) LOOP
column:=oneRow[i];
IF (column = \'\')
THEN

END;
'
LANGUAGE 'plpgsql' VOLATILE;


CREATE OR REPLACE FUNCTION uk_webos_parseitemprice(int4, _int4)
RETURNS int4 AS
'
DECLARE

-- RAISE NOTICE \'test \' ;
FOR rawData IN
SELECT VDNUM, VDVSSRC, VDVSNUM, VDKEY, VDDATA, ts
FROM VD
WHERE VDVSNUM = commandId
AND VDKEY = \'IL\'
AND VDVSSRC = 1
ORDER BY VDNUM
OFFSET 1
LOOP
select into oneRow (string_to_array(rawData.VDDATA,\'\'));
action:=oneRow[1];
FOR i IN array_lower(oneRow, 1)..array_upper(oneRow, 1) LOOP
column:=oneRow[i];
IF (column = \'\')
THEN
column:= null ;
END IF;
IF (i = arrayProp[1])
THEN
idValue:= column;
ELSIF (i = arrayProp[2])
THEN
typeValue:= column;
ELSIF (i = arrayProp[3])
THEN
itemIdValue:= column;
ELSIF (i = arrayProp[4])
THEN
resourceIdValue:= column;
ELSIF (i = arrayProp[5])
THEN
minimalQuantityValue:= column;
ELSIF (i = arrayProp[6])
THEN
unitPriceValue:= column;
END IF;
END LOOP;
IF ((action = \'UPDATE\') or (action = \'GUESS\'))
THEN
EXECUTE \'DELETE FROM IL WHERE ILNUM =\' || idValue;
END IF;
-- process the insert statement
insertStatement:= \'INSERT INTO IL ( ILNUM, ILTYPE, ILICNUM, ILRRNUM, 
ILQTE, ILPRIX, ts ) VALUES ( \' || idValue ||\', \'|| typeValue ||\',\';
IF (itemIdValue is null)
THEN
insertStatement:= insertStatement || \' null,\';
ELSE
insertStatement:= insertStatement || quote_literal(itemIdValue)|| \',\';
END IF;
IF (resourceIdValue is null)
THEN
insertStatement:= insertStatement || \' null,\';
ELSE
insertStatement:= insertStatement || quote_literal(resourceIdValue)|| 
\',\';
END IF;
insertStatement:= insertStatement || 
minimalQuantityValue||\',\'||unitPriceValue||\',CURRENT_TIMESTAMP ) \';
-- RAISE NOTICE \'insertStatement %\', insertStatement ;
EXECUTE insertStatement;
END LOOP;
return -1;
END;
'
LANGUAGE 'plpgsql' VOLATILE;

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Problem with special character

2005-03-14 Thread Ragnar Hafstað
On Mon, 2005-03-14 at 16:32 -0500, David Gagnon wrote:
 Hi,
 
 I really have a problem with a production environment (RH 9, Postgresql 
 7.4).

 When I deploy on the production env the same stored procedure with the 
 same data (different OS and postgresql instance) the stored procedure 
 crashes. I get this error:
 java.io.IOException: Unable to get row: ERROR: cannot EXECUTE a null 
 querystring when executing the
 
 I looked with pgAdminIII and found that a character used as a string 
 separator (i.e.: ) is shown as  on the production database. It looks 
 OK in my dev env. (I included part of both stored procedures below.)


were the 2 clusters initialized with the same locale
settings ?

gnari




---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] Logging VACUUM activity

2005-03-14 Thread jao
I'm using postgresql 7.4.6 on RH9. My application includes a
long-running Java process responsible for database access. I'd like to
have this Java process initiate a daily VACUUM and, one way or
another, log VACUUM output.

- Is there some way to get VACUUM output via JDBC? 

- If not, how can I get VACUUM output in the postgres logs? Setting
log_min_error_statement to INFO doesn't seem to have any effect.

Jack Orenstein



This message was sent using IMP, the Internet Messaging Program.


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Problem with special character on

2005-03-14 Thread David Gagnon
Thanks for your answer.
 The ISP created the db for me .. so I don't have this information. I 
searched the web for how to get this info via pgAdminIII and I haven't 
found it :-(  Is there a way to get this information once the database has 
been created?  I looked via psql .. I haven't found it either.

Thanks
/David
Ragnar Hafstað wrote:
On Mon, 2005-03-14 at 16:32 -0500, David Gagnon wrote:
 

Hi,
I really have a problem with a production environment (RH 9, Postgresql 
7.4).
   

 

When I deploy on the production env the same stored procedure with the 
same data (different OS and postgresql instance) the stored procedure 
crashes. I get this error:
java.io.IOException: Unable to get row: ERROR: cannot EXECUTE a null 
querystring when executing the

I looked with pgAdminIII and found that a character used as a string 
separator (i.e.: ) is shown as  on the production database. It looks 
OK in my dev env. (I included part of both stored procedures below.)
   


were the 2 clusters initialized with the same locale
settings ?
gnari

 


---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] Problem with special character on

2005-03-14 Thread Miroslav Šulc
David Gagnon wrote:
Thanks for your answer.
 The ISP created the db for me .. so I don't have this information. I 
searched the web for how to get this info via pgAdminIII and I 
haven't found it :-(  Is there a way to get this information once the 
database has been created?  I looked via psql .. I haven't found it either.
David, you can try the 'SHOW ALL' command.
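
A quick sketch; the pg_settings view carries the same information in
filterable form:

SHOW ALL;
SELECT name, setting FROM pg_settings WHERE name LIKE 'lc%';  -- locale settings only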
Thanks
/David
Miroslav


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] prelimiary performance comparison pgsql vs mysql

2005-03-14 Thread Chris Browne
[EMAIL PROTECTED] (Mark Rae) writes:
 On Mon, Mar 14, 2005 at 06:52:58AM -0500, Rick Schumeyer wrote:
 Below are some PRELIMINARY results in comparing the performance of pgsql and
 mysql.
 ...
 I have not yet done any testing of transactions, multiple concurrent
 processes, etc.
 

 I would say that doing the concurrency tests is probably the most
 important factor in comparing other databases against MySQL, as
 MySQL will almost always win in single-user tests.

 E.g. here are some performance figures from tests I have done in the past.
 This is with a 6GB database on a 4CPU Itanium system running a mixture of
 read-only queries, but it is fairly typical of the behaviour I have seen. 
 The Oracle figures also scaled in a similar way to postgres.

 Clients      1     2     3     4     6     8    12    16    32    64   128
 ---------------------------------------------------------------------------
 mysql-4.1.1 1.00  1.41  1.34  1.16  0.93  1.03  1.01  1.00  0.94  0.86  0.80
 pg-7.4.1    0.65  1.27  1.90  2.48  2.45  2.50  2.48  2.51  2.49  2.39  2.38

Could you elaborate on what the measures are here?  I don't quite
follow what 0.8 means as compared to 2.38.
-- 
let name=cbbrowne and tld=cbbrowne.com in String.concat @ [name;tld];;
http://www.ntlug.org/~cbbrowne/linuxxian.html
A VAX is virtually a computer, but not quite.

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Multi Time Zone Select

2005-03-14 Thread Vanole, Mike
Hi,


I load date information from different time zones into a single table. My columns are defined 'timestamp(0) with time zone' and I'm loading '2005-03-01 22:00:00-05:00' and '2005-03-01 23:00:00-06:00', for example. I query this table and expect rows from all time zones, and I want to see the time zones in the result. But when I select the data, all rows come back converted to my local time zone. Since this is a single query (select *) I don't know where to apply AT TIME ZONE 'zone'. Is there a way to specify that I want the time data presented according to the way the data is stored - by time zone?

Many thanks in advance.

Mike


Re: [GENERAL] Problem with special character on

2005-03-14 Thread David Gagnon
Hi
Thanks for the tips.  Locales are the same ...  unless I'm not looking at 
the right thing...

Production ENV
-
lc_collate | C
lc_ctype   | C
lc_messages| C
lc_monetary| C
lc_numeric | C
lc_time| C
Dev ENV
--
lc_collate;C
lc_ctype;C
lc_messages;C
lc_monetary;C
lc_numeric;C
lc_time;C
Thanks for your help
/David
PROD ALL VARIABLES
---
add_missing_from;on
australian_timezones;off
authentication_timeout;60
check_function_bodies;on
checkpoint_segments;3
checkpoint_timeout;300
checkpoint_warning;30
client_encoding;UNICODE
client_min_messages;notice
commit_delay;0
commit_siblings;5
cpu_index_tuple_cost;0.001
cpu_operator_cost;0.0025
cpu_tuple_cost;0.01
DateStyle;ISO, MDY
db_user_namespace;off
deadlock_timeout;1000
debug_pretty_print;off
debug_print_parse;off
debug_print_plan;off
debug_print_rewritten;off
default_statistics_target;10
default_transaction_isolation;read committed
default_transaction_read_only;off
dynamic_library_path;$libdir
effective_cache_size;1000
enable_hashagg;on
enable_hashjoin;on
enable_indexscan;on
enable_mergejoin;on
enable_nestloop;on
enable_seqscan;on
enable_sort;on
enable_tidscan;on
explain_pretty_print;on
extra_float_digits;0
from_collapse_limit;8
fsync;on
geqo;on
geqo_effort;1
geqo_generations;0
geqo_pool_size;0
geqo_selection_bias;2
geqo_threshold;11
join_collapse_limit;8
krb_server_keyfile;FILE:/etc/sysconfig/pgsql/krb5.keytab
lc_collate;C
lc_ctype;C
lc_messages;C
lc_monetary;C
lc_numeric;C
lc_time;C
log_connections;off
log_duration;off
log_error_verbosity;default
log_executor_stats;off
log_hostname;off
log_min_duration_statement;-1
log_min_error_statement;panic
log_min_messages;notice
log_parser_stats;off
log_pid;off
log_planner_stats;off
log_source_port;off
log_statement;off
log_statement_stats;off
log_timestamp;off
max_connections;100
max_expr_depth;10000
max_files_per_process;1000
max_fsm_pages;20000
max_fsm_relations;1000
max_locks_per_transaction;64
password_encryption;on
port;5432
pre_auth_delay;0
preload_libraries;unset
random_page_cost;4
regex_flavor;advanced
rendezvous_name;unset
search_path;$user,public
server_encoding;UNICODE
server_version;7.4.7
shared_buffers;1000
silent_mode;off
sort_mem;1024
sql_inheritance;on
ssl;off
statement_timeout;0
stats_block_level;off
stats_command_string;off
stats_reset_on_server_start;on
stats_row_level;off
stats_start_collector;on
superuser_reserved_connections;2
syslog;0
syslog_facility;LOCAL0
syslog_ident;postgres
tcpip_socket;on
TimeZone;unknown
trace_notify;off
transaction_isolation;read committed
transaction_read_only;off
transform_null_equals;off
unix_socket_directory;unset
unix_socket_group;unset
unix_socket_permissions;511
vacuum_mem;8192
virtual_host;unset
wal_buffers;8
wal_debug;0
wal_sync_method;fdatasync
zero_damaged_pages;off
DEV ENV FULL VARIABLE
---
add_missing_from;on
australian_timezones;off
authentication_timeout;60
check_function_bodies;on
checkpoint_segments;3
checkpoint_timeout;300
checkpoint_warning;30
client_encoding;UNICODE
client_min_messages;notice
commit_delay;0
commit_siblings;5
cpu_index_tuple_cost;0.001
cpu_operator_cost;0.0025
cpu_tuple_cost;0.01
DateStyle;ISO, MDY
db_user_namespace;off
deadlock_timeout;1000
debug_pretty_print;off
debug_print_parse;off
debug_print_plan;off
debug_print_rewritten;off
default_statistics_target;10
default_transaction_isolation;read committed
default_transaction_read_only;off
dynamic_library_path;$libdir
effective_cache_size;1000
enable_hashagg;on
enable_hashjoin;on
enable_indexscan;on
enable_mergejoin;on
enable_nestloop;on
enable_seqscan;on
enable_sort;on
enable_tidscan;on
explain_pretty_print;on
extra_float_digits;0
from_collapse_limit;8
fsync;on
geqo;on
geqo_effort;1
geqo_generations;0
geqo_pool_size;0
geqo_selection_bias;2
geqo_threshold;11
join_collapse_limit;8
krb_server_keyfile;unset
lc_collate;C
lc_ctype;C
lc_messages;C
lc_monetary;C
lc_numeric;C
lc_time;C
log_connections;off
log_duration;off
log_error_verbosity;default
log_executor_stats;off
log_hostname;off
log_min_duration_statement;-1
log_min_error_statement;panic
log_min_messages;notice
log_parser_stats;off
log_pid;off
log_planner_stats;off
log_source_port;off
log_statement;off
log_statement_stats;off
log_timestamp;off
max_connections;40
max_expr_depth;10000
max_files_per_process;1000
max_fsm_pages;20000
max_fsm_relations;1000
max_locks_per_transaction;64
password_encryption;on
port;5432
pre_auth_delay;0
preload_libraries;unset
random_page_cost;4
regex_flavor;advanced
rendezvous_name;unset
search_path;$user,public
server_encoding;UNICODE
server_version;7.4.5
shared_buffers;1000
silent_mode;off
sort_mem;1024
sql_inheritance;on
ssl;off
statement_timeout;0
stats_block_level;off
stats_command_string;off
stats_reset_on_server_start;on
stats_row_level;off
stats_start_collector;on
superuser_reserved_connections;2
syslog;0

Re: [GENERAL] Best practices: Handling Daylight-saving time

2005-03-14 Thread Scott Ribe
 3) If I retrieve them without at time zone I will get them with
 the time zone that was stored in the first place, right ?
 [...]
 
 This would be news to me.  I don't think it's possible to *not* have a
 timezone set on a session.  The server will have a default timezone
 based either on the local (server) system time or the setting of the
 timezone variable in postgresql.conf.  Additionally, libpq
 applications will, I believe, issue a set timezone during initial
 connection setup.

This is certainly the default behavior--I don't know whether there are
settings to change it. All I know is that I regularly work with a database
located in a different time zone, and displayed times are adjusted to my
local time.

-- 
Scott Ribe
[EMAIL PROTECTED]
http://www.killerbytes.com/
(303) 665-7007 voice



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Logging VACUUM activity

2005-03-14 Thread Joshua D. Drake
On Mon, 2005-03-14 at 17:12 -0500, [EMAIL PROTECTED] wrote:
 I'm using postgresql 7.4.6 on RH9. My application includes a
 long-running Java process responsible for database access. I'd like to
 have this Java process initiate a daily VACUUM and, one way or
 another, log VACUUM output.
 
 - Is there some way to get VACUUM output via JDBC? 
 
 - If not, how can I get VACUUM output in the postgres logs? Setting
 log_min_error_statement to INFO doesn't seem to have any effect.

You can run the query vacuum verbose.
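
If the goal is to get that output into the server log rather than the
client, something like this might work (a sketch: VACUUM VERBOSE reports at
INFO level, and log_min_messages, a superuser-only setting, gates what
reaches the log; 'mytable' is a placeholder):

SET log_min_messages = 'info';
VACUUM VERBOSE mytable;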

Sincerely,

Joshua D. Drake


 
 Jack Orenstein
 
 
 
 This message was sent using IMP, the Internet Messaging Program.
 
 
 ---(end of broadcast)---
 TIP 8: explain analyze is your friend
-- 
Command Prompt, Inc., Your PostgreSQL solutions company. 503-667-4564
Custom programming, 24x7 support, managed services, and hosting
Open Source Authors: plPHP, pgManage, Co-Authors: plPerlNG
Reliable replication, Mammoth Replicator - http://www.commandprompt.com/


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Problem with special character on

2005-03-14 Thread Miroslav Šulc
David Gagnon wrote:
Hi
Thanks for the tips.  Locales are the same ...  unless I'm not looking at 
the right thing...

...
Can you try to run the procedure from some other environment? It seems 
you use some Java class, so I mean try psql, phpPgAdmin or something 
different.

Miroslav


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Problem with special character on

2005-03-14 Thread David Gagnon
I did try to update the stored procedure via PgAdminIII and it 
worked.  The problem seems to be the JDBC driver .. But all works well on my 
TOMCAT/POSTGRESQL/WINDOWS platform. 

I think it's something in the database settings .. are there other settings 
that can cause this behavior ?

Thanks!
/David
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] Problem with special character

2005-03-14 Thread Ragnar Hafstað
On Mon, 2005-03-14 at 19:13 -0500, David Gagnon wrote:
 I did try to update the stored procedure via PgAdminIII and it 
 worked.  The problem seems to be the JDBC driver .. But all works well on my 
 TOMCAT/POSTGRESQL/WINDOWS platform. 
 
 I think it's something in the database settings .. are there other settings 
 that can cause this behavior ?

maybe some difference in the environments that the two
tomcats run in? are their locales the same ?

gnari



---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Checking for schedule conflicts

2005-03-14 Thread Greg Sabino Mullane

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


Benjamin Smith wrote:
 Given the tables defined below, what's the easiest way to check for schedule
 conflicts?

Assuming you don't count more than once things with the same id, start, and 
finish:

SELECT COUNT(DISTINCT (s1.event_id||s1.start||s1.finish))
FROM sched s1, sched s2
WHERE s1.date = s2.date AND s1.start <= s2.finish
  AND s1.finish >= s2.start AND NOT s1.ctid = s2.ctid

Add AND s1.date = $date as needed.

- --
Greg Sabino Mullane [EMAIL PROTECTED]
PGP Key: 0x14964AC8 200503140639
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
-BEGIN PGP SIGNATURE-

iD8DBQFCNXgTvJuQZxSWSsgRAlBmAKC7hj8XzHzS7srqfgdSGOZiCfvtDQCfWM22
VXMkQB7IzEdTKjqpcmWVdaM=
=hFiy
-END PGP SIGNATURE-



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] Logging VACUUM activity

2005-03-14 Thread Jack Orenstein
Joshua D. Drake wrote:
On Mon, 2005-03-14 at 17:12 -0500, [EMAIL PROTECTED] wrote:
I'm using postgresql 7.4.6 on RH9. My application includes a
long-running Java process responsible for database access. I'd like to
have this Java process initiate a daily VACUUM and, one way or
another, log VACUUM output.
- Is there some way to get VACUUM output via JDBC? 

- If not, how can I get VACUUM output in the postgres logs? Setting
log_min_error_statement to INFO doesn't seem to have any effect.

You can run the query vacuum verbose.
I tried that. Whether I issue the VACUUM VERBOSE call through JDBC
or psql, I don't see anything in the postgres log file at all, even
with log_min_error_statement set to info.
In psql, I see the VACUUM VERBOSE output in the psql session itself,
but I need to log the output. I could spawn a psql session if necessary,
but ideally I'd just run VACUUM VERBOSE from JDBC and get the output
in the postgres log file.
Jack Orenstein
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Peculiar performance observation....

2005-03-14 Thread Net Virtual Mailing Lists
On Mon, 2005-03-14 at 06:11, Net Virtual Mailing Lists wrote:
 Hello,
 
 
 I am sorry to bring this up again Does anyone have any idea what
 might be going on here?...   I'm very worried about this situation.. ;-(

It looks to me like either you're not analyzing often enough, or your
statistics target is too low to get a good sample.  Note your estimated
versus real rows are off by a factor of 70 (28 est. versus 1943 actual
rows). That's a pretty big difference, and where you should be looking.

-  Seq Scan on jobdata  (cost=0.00..7567.88 rows=28 width=52) (actual
 time=11.498..4800.907 rows=1943 loops=1)

Yes, this is because PostgreSQL is using an index to approximate a
sequential scan, which is not a good thing since PostgreSQL can't get
all the information it needs from just an index, but has to visit the
table to check visibility.



All of these were after a vacuum full analyze, which I actually do
nightly on the database.

I probably confused the issue with all of my posts, this is the query
which has me concerned.  When running it on my system here, the disk
thrashes (and I mean *THRASHES*) the entire 12-20 seconds it takes to
run...  When running on our production servers, I can't hear the disk,
but see an equally troubling performance loss when using the index.

database= explain analyze select id from table1 where category @ 'a.b';
                                QUERY PLAN
---------------------------------------------------------------------------
 Index Scan using table1_category_full_gist_idx on jobdata 
(cost=0.00..113.48 rows=28 width=4) (actual time=43.814..12201.528
rows=1943 loops=1)
   Index Cond: (category @ 'a.b'::ltree)
   Filter: (category @ 'a.b'::ltree)
 Total runtime: 1.258 ms


I can do this to speed things up (this results in very little disk
activity, certainly not the thrashing the original query did):


create table yuck (id integer, category ltree[]);
insert into yuck select id, category from table1;
create index category_idx on yuck using gist(category);
vacuum analyze yuck;
jobs= explain analyze select id from table1 where id in (select id from
yuck where category @ 'a.b');
                                  QUERY PLAN
---------------------------------------------------------------------------
 Nested Loop  (cost=108.64..114.28 rows=1 width=52) (actual
time=654.645..1245.212 rows=1943 loops=1)
   -  HashAggregate  (cost=108.64..108.64 rows=1 width=4) (actual
time=654.202..690.709 rows=1943 loops=1)
 -  Index Scan using category_idx on yuck  (cost=0.00..108.57
rows=28 width=4) (actual time=2.046..623.436 rows=1943 loops=1)
   Index Cond: (category @ 'a.b'::ltree)
   Filter: (category @ 'a.b'::ltree)
   -  Index Scan using table1_pkey on jobdata  (cost=0.00..5.64 rows=1
width=52) (actual time=0.219..0.235 rows=1 loops=1943)
 Index Cond: (table1.id = outer.id)
 Total runtime: 1261.551 ms
(8 rows)



If I drop the index table1_category_full_gist_idx, the query speeds up
dramatically (10-15 times faster on both dev and prod uction systems).

So my concern, in short: why is it so much slower when actually using an
index and why is it trying to make mince meat out of my hard drive?


- Greg


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


[GENERAL] Question about database restrict

2005-03-14 Thread Yu Jie
Hi all,
    If I want to restrict the maximum size of one
table to 5MB, and restrict the maximum size of the
database files to 100MB, how can I do that?
    Does PostgreSQL 8.0 support this feature? I can't
find it in the PostgreSQL manual.

Thanks.

__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Peculiar performance observation....

2005-03-14 Thread Scott Marlowe
On Mon, 2005-03-14 at 21:14, Net Virtual Mailing Lists wrote:
 On Mon, 2005-03-14 at 06:11, Net Virtual Mailing Lists wrote:
  Hello,
  
  
  I am sorry to bring this up again Does anyone have any idea what
  might be going on here?...   I'm very worried about this situation.. ;-(
 
 It looks to me like either you're not analyzing often enough, or your
 statistics target is too low to get a good sample.  Note your estimated
 versus real rows are off by a factor of 70 (28 est. versus 1943 actual
 rows). That's a pretty big difference, and where you should be looking.
 
 -  Seq Scan on jobdata  (cost=0.00..7567.88 rows=28 width=52) (actual
  time=11.498..4800.907 rows=1943 loops=1)
 
 Yes, this is because PostgreSQL is using an index to approximate a
 sequential scan, which is not a good thing since PostgreSQL can't get
 all the information it needs from just an index, but has to visit the
 table to check visibility.
 
 
 
 All of these were after a vacuum full analyze, which I actually do
 nightly on the database.
 
 I probably confused the issue with all of my posts, this is the query
 which has me concerned.  When running it on my system here, the disk
 thrashes (and I mean *THRASHES*) the entire 12-20 seconds it takes to
 run...  When running on our production servers, I can't hear the disk,
 but see an equally troubling performance loss when using the index.

I'll call this query 1:

 
 database= explain analyze select id from table1 where category @ 'a.b';
                                 QUERY PLAN
 ---------------------------------------------------------------------------
  Index Scan using table1_category_full_gist_idx on jobdata 
 (cost=0.00..113.48 rows=28 width=4) (actual time=43.814..12201.528
 rows=1943 loops=1)
Index Cond: (category @ 'a.b'::ltree)
Filter: (category @ 'a.b'::ltree)
  Total runtime: 1.258 ms
 
 
 I can do this to speed things up (this results in very little disk
 activity, certainly not the thrashing the original query did):
 
 
 create table yuck (id integer, category ltree[]);
 insert into yuck select id, category from table1;
 create index category_idx on yuck using gist(category);
 vacuum analyze yuck;
 jobs= explain analyze select id from table1 where id in (select id from
 yuck where category @ 'a.b');
                                   QUERY PLAN
 ---------------------------------------------------------------------------
  Nested Loop  (cost=108.64..114.28 rows=1 width=52) (actual
 time=654.645..1245.212 rows=1943 loops=1)
-  HashAggregate  (cost=108.64..108.64 rows=1 width=4) (actual
 time=654.202..690.709 rows=1943 loops=1)
  -  Index Scan using category_idx on yuck  (cost=0.00..108.57
 rows=28 width=4) (actual time=2.046..623.436 rows=1943 loops=1)
Index Cond: (category @ 'a.b'::ltree)
Filter: (category @ 'a.b'::ltree)
-  Index Scan using table1_pkey on jobdata  (cost=0.00..5.64 rows=1
 width=52) (actual time=0.219..0.235 rows=1 loops=1943)
  Index Cond: (table1.id = outer.id)
  Total runtime: 1261.551 ms
 (8 rows)


 If I drop the index table1_category_full_gist_idx, the query speeds up
 dramatically (10-15 times faster on both dev and prod uction systems).
 
 So my concern, in short: why is it so much slower when actually using an
 index and why is it trying to make mince meat out of my hard drive?

I'll explain it again, sorry if my quoting originally was a bit of a
mess.  I meant to post the last comment I made after some other comment
in your original post that I think I deleted.

Anyway, the reason it's slow is that PostgreSQL, unlike most other
databases, cannot get the answers from an index.  It can only get a
pointer to the right place in the table to look for the answer.  After
that, due to visibility issues caused by the way postgresql implements
MVCC, it then has to look IN THE TABLE to find out if the value is
visible to your transaction or not.  So it's going Index then table,
then index, then table, then index, then table, for however many rows
it's gonna grab.  In this case 1943.

In query 1, the number of rows being returned by the index scan is 1943,
but the planner only thinks it's gonna get back 28.  So, with a 70:1
ratio of incorrectness here, the planner thinks an index scan is a good
idea.  It's not, it's a terrible idea for your table.  The problem is
likely that the query planner is not getting the right numbers for this
table, and I'm not even sure how accurate statistics can be for ltrees,
as I've only ever used btree indexes in postgresql.  But, upping the
statistics target for the column producing this bad behavior and
rerunning analyze should give the planner better numbers to work with.
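
A sketch of that last suggestion (names taken from the thread; 100 is an
arbitrary target, the default being 10):

ALTER TABLE table1 ALTER COLUMN category SET STATISTICS 100;
ANALYZE table1;
-- then re-check the plan:
-- EXPLAIN ANALYZE SELECT id FROM table1 WHERE category @ 'a.b';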

[GENERAL] Daffodil Replicator is now available open source

2005-03-14 Thread Brian Maguire

Has anyone ever used this? They have support for Postgres as
master or a slave in a heterogeneous setup.

Daffodil Replicator is a powerful Open Source data
synchronization tool that allows bi-directional data synchronization between
heterogeneous databases supporting JDBC drivers. Its flexible publish and
subscribe model supports both snapshot and merge replication.

http://www.daffodildb.com/replicator.html


Re: [GENERAL] PostgreSQL training

2005-03-14 Thread Sven Willenberger

Alex Adriaanse presumably uttered the following on 03/13/05 15:23:
Hi,
I'm working on an application for a client that uses PostgreSQL as its 
database backend.  The client wants to train their team on PostgreSQL so 
that they can maintain the application and the database themselves after 
it goes live should they need to.  As far as I know the majority of them 
don't have any experience with databases (other than using applications 
that rely on them of course).  The type of training that they would need 
would need to cover generic database concepts (database design, SQL), as 
well as PostgreSQL-specific information (mostly database administration, 
maybe some performance tuning).

They can travel anywhere within the US to take the training.  I noticed 
there were a few training events posted on postgresql.org, but only one 
of them is within the US (PostgreSQL Bootcamp, next month), and I'm not 
sure if that course would offer the right type of training for them (I 
think they'd need something more elementary).

Do you guys have any suggestions as far as training is concerned 
(training events throughout the year, companies that do PostgreSQL 
training, etc.)?

Thanks a lot,
Alex
I would like to mention that the folks at the Big Nerd Ranch (tm) who 
run the PostgreSQL boot camp also do on-site training, which can then 
be tailored to your own organization's needs (www.bignerdranch.com). 
Having attended their week-long training I can certainly attest to its 
benefits.

Sven
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Question about database restrict

2005-03-14 Thread Qingqing Zhou

Yu Jie [EMAIL PROTECTED]  writes:
 Hi all,
 If I want to restrict the maximum size of one
 table to 5MB, restrict the maximum size of database
 file to 100MB, how can I do that restriction?
 Is PostgreSQL8.0 support this function? I can't
 find it in the manual of PostgreSQL.

PG does not support this directly. However, you may use OS-provided
facilities to achieve this more or less.
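
One possible sketch along those lines on 8.0, assuming the administrator
has mounted a filesystem (or set an OS quota) capped at the desired size;
PostgreSQL itself never checks the limit, writes simply start failing once
the filesystem is full:

CREATE TABLESPACE small_ts LOCATION '/mnt/quota100mb/pgdata';  -- path is a placeholder
CREATE TABLE limited_table (id int, payload text) TABLESPACE small_ts;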

Regards,
Qingqing


 Thanks.

 __
 Do You Yahoo!?
 Tired of spam?  Yahoo! Mail has the best spam protection around
 http://mail.yahoo.com

 ---(end of broadcast)---
 TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]




---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] plpython function problem workaround

2005-03-14 Thread Sim Zacks
I don't think that this type of solution should be discussed as an official
patch.
If it was, I would recommend solving the problem in source code when the
function is passed to the translator. That way each platform could fix the
code as needed and the code would stay portable.

I ran into this problem about half a year ago and it didn't go anywhere,
then I saw a long discussion about it that also didn't go anywhere. I had
given up on using plpython until now because I actually need it. So I
figured out how to make it work and thought that it would be helpful to
others, but I couldn't figure out how to automate the fix.

Hopefully, aside from the issue of having CRs inside the actual program,
using the validator function/updating pg_proc directly won't cause any
harmful ramifications. I don't have access to any other platforms right now,
aside from Windows client connected to Linux server, so I really couldn't
reliably test any other situation.

Sim

Tom Lane [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
 Michael Fuhr [EMAIL PROTECTED] writes:
  Are there any problems with doing this?  Is a VALIDATOR function
  permitted to modify the function it's validating?

 Well, it's not *supposed* to, but the tuple is already stored so I
 guess an UPDATE on it will work.  At the moment.

  This wouldn't work if plpythonu ever installs a VALIDATOR, but you
  might be able to use it until such time (barring objections about why
  it's a Bad Idea, that is).

 The proposed hack seems far too simplistic to me ... what of CRs that
 are deliberately included in string literals?

 I don't know Python at all, so I don't know how complicated its lexical
 structure is, but ISTM you'd at least need enough smarts to distinguish
 literals from unquoted whitespace.

 The other small fly in the ointment is that when the server is running
 on Windows, I suppose we would have to *put in* rather than remove CRs.
 Sim probably doesn't care about that case, but we couldn't accept an
 official patch that doesn't handle it.

 regards, tom lane

 ---(end of broadcast)---
 TIP 4: Don't 'kill -9' the postmaster




---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[GENERAL] Function results written to memory, then sent?

2005-03-14 Thread Steve - DND
I was reading through the docs today, and came across a paragraph that
indicated when plpgsql queries are executed on the server, the results are
all written to memory (or disk if necessary), and not streamed as available.
I can't find the doc page which said it, but does anyone know if this
applies to regular SQL as well, or is it plpgsql-specific? If it
applies to either or both, are there any current plans to have PG not
behave in this manner, and instead stream the results of a query as they
become available?

Thanks,
Steve
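
One way a client can stream results today, regardless of how the server
buffers internally, is an explicit cursor (a minimal sketch; 'big_table' is
a placeholder):

BEGIN;
DECLARE c CURSOR FOR SELECT * FROM big_table;
FETCH 100 FROM c;  -- repeat until no rows are returned
CLOSE c;
COMMIT;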



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] plpython function problem workaround

2005-03-14 Thread Tom Lane
Sim Zacks [EMAIL PROTECTED] writes:
 I don't think that this type of solution should be discussed as an official
 patch.
 If it was, I would recommend solving the problem in source code when the
 function is passed to the translator.

Indeed, but first we need to work out what the necessary translation is.
A hack using a VALIDATOR function isn't an unreasonable way to prototype
the conversion logic.

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Best practices: Handling Daylight-saving time

2005-03-14 Thread Karsten Hilbert
  This would be news to me.  I don't think it's possible to *not* have a
  timezone set on a session.  The server will have a default timezone
  based either on the local (server) system time or the setting of the
  timezone variable in postgresql.conf.  Additionally, libpq
  applications will, I believe, issue a set timezone during initial
  connection setup.
 
 This is certainly the default behavior--I don't know whether there are
 settings to change it. All I know is that I regularly work with a database
 located in a different time zone, and displayed times are adjusted to my
 local time.
That surely works. The question was whether there was a
built-in way to recover the time zone of the client inserting
the data.

Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] Multi Time Zone Select

2005-03-14 Thread Karsten Hilbert
 I load date information from different time zones into a single table.
 My columns are defined 'timestamp(0) with time zone' and I'm loading
 '2005-03-01 22:00:00-05:00' and '2005-03-01 23:00:00-06:00'  for
 example. I query this table and expect rows from all time zones and I
 want to see the time zones in the result. But when I select he data all
 rows return converted to my local time zone for each timezone. Since
 this is a single query (select * ...)  I don't know where to apply AT
 TIME ZONE 'zone'. Is there a way to specify that I want the time data
 presented according to the way the data is stored - by time zone?
If you look at the archives from, what, one day ago you'll
find a discussion and one possible solution.

Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org