Re: [GENERAL] PostgreSQL consulting company in France or Europe?

2003-08-04 Thread Hervé Piedvache
Yes, my company Elma Ingénierie Informatique does that in Paris, France.

As for the French market for PostgreSQL ... it is really small for the moment,
but we are doing our best to make it grow :o) ...

Regards,

On Saturday 2 August 2003 at 13:55, Francois Suter wrote:
 Hi All,

 Someone asked me via the advocacy site whether there is a company distributing
 PostgreSQL and offering support/consulting services for it in France, or more
 generally in Europe. Is there anything of the sort?

 This person is obviously interested in a commercial venture. Do you think
 there would be a market for this?

 Thanks in advance for any opinion.

 Cheers.

 
 Francois

 Home page: http://www.monpetitcoin.com/
 We waste our time spending money we don't have to buy things we don't need
 to impress people we don't like



-- 
Hervé Piedvache

Elma Ingénierie Informatique
6 rue du Faubourg Saint-Honoré
F-75008 - Paris - France
Pho. 33-144949901
Fax. 33-144949902




[GENERAL] Is there any place to get 7.4 win binaries ?

2003-08-04 Thread Ivar
Hi,

Is there any place to get 7.4 win binaries ?






Re: [GENERAL] like performance w/o wildcards.

2003-08-04 Thread Richard Huxton
On Monday 04 August 2003 04:29, Joseph Shraibman wrote:
 Hmm.  It didn't work for me.  I'll try and figure this out.

 Tom Lane wrote:
  Joseph Shraibman [EMAIL PROTECTED] writes:
 I notice in 7.3.3 the planner can't tell when a LIKE has no wildcards
 and is in reality an '='.  Is this an easy change to make?

Check your locale/encoding - you probably want C or similar for LIKE to use 
an index.
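
As a minimal illustration (the table and index names here are invented, and
this assumes a 7.3-era setup):

    -- the database must have been initdb'd with the C (or POSIX) locale
    CREATE INDEX users_name_idx ON users (name);
    EXPLAIN SELECT * FROM users WHERE name LIKE 'Abcd%';
    -- under the C locale the planner turns the left-anchored LIKE into an
    -- index range scan; under most other locales it falls back to a seq scan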
-- 
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] plPHP -- sort of an announcement.. but not commercial

2003-08-04 Thread Aspire
Dear all
I was waiting for this type of product, but will plPHP turn out to be like LXP?

Also, does it support all of the PHP4 syntax?

Regards,
V Kashyap



- Original Message -
From: Francisco J Reyes [EMAIL PROTECTED]
To: Joshua D. Drake [EMAIL PROTECTED]
Cc: Gavin M. Roy [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Sunday, August 03, 2003 00:04
Subject: Re: [GENERAL] plPHP -- sort of an announcement.. but not commercial


 On Fri, 1 Aug 2003, Joshua D. Drake wrote:

   I am in final testing now, so feel free to download the static build. It is
   a pain in the butt to compile (the static build is Linux). By the end of the
   week it should be all packaged etc...

  Will this compile in any environment where PHP is installed?
  i.e. FreeBSD?






Re: [GENERAL] Monthly table partitioning for fast purges?

2003-08-04 Thread psql-mail
I am looking at ways to speed up queries; the most common way for queries
to be constrained is by date range. I have indexed the date column.
Queries are still slower than I would like.

Would there be any performance increase for these types of queries if
the tables were split by month as described by Shridhar (method1), so
only the required tables were loaded from disk?

Will there be any performance increase if table partitioning is
implemented?

If method1 is likely to yield a performance increase, will there be a
large hit for other types of queries which require all the data?

I'd be happy to post the EXPLAIN ANALYZE output for a typical query if
more info is needed.

The date column is of type timestamp (and so goes right down to seconds);
most user queries are only concerned with whole days without the times
(but the time data is required for other queries). Can I do something
with an index of the timestamps cast to dates, and then cast the queries
to dates too?

Thanks!

Ron Johnson wrote:
 On Sun, 2003-08-03 at 06:31, Shridhar Daithankar wrote:
  On 1 Aug 2003 at 13:46, Roger Hand wrote:
 [snip]
  Here is what you can do.
 
  * Create an empty base table.
  * Create a trigger that directs the select on timestamps accordingly.
  * Write a schedule to create a new table at the start of every month
    and update the trigger accordingly.
  * Create and maintain rules to insert into the base table depending
    upon the timestamp.
 
  This is just a rough idea.
 
  There might be a fair amount of work to get this working, but surely
  it is not impossible.
 
 And you get a big performance hit when all those records are moved.
 
 Partitioning should be put on the TODO list soon after tablespaces
 (or DBA-defined directories) is implemented.

-- 



Re: [GENERAL] Monthly table partitioning for fast purges?

2003-08-04 Thread Benjamin Jury

You could create an index on the function date(), which strips the time
information.
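
A hedged sketch of that (table and column names are made up; this assumes the
column is timestamp without time zone, since the date() conversion is only
immutable in that case):

    CREATE INDEX logdata_day_idx ON logdata (date(ts));

    -- the query has to use the same expression for the index to be considered
    SELECT count(*) FROM logdata WHERE date(ts) = '2003-08-04';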

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
 Sent: 04 August 2003 14:01
 To: PgSQL General ML
 Subject: Re: [GENERAL] Monthly table partitioning for fast purges?
 
 
  I am looking at ways to speed up queries; the most common way for
  queries to be constrained is by date range. I have indexed the date
  column. Queries are still slower than I would like.

...

  The date column is of type timestamp (and so goes right down to
  seconds); most user queries are only concerned with whole days without
  the times (but the time data is required for other queries). Can I do
  something with an index of the timestamps cast to dates, and then cast
  the queries to dates too?



Re: [GENERAL] Monthly table partitioning for fast purges?

2003-08-04 Thread Jan Poslusny
Did you think about clustering on the index?
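
For reference, a sketch in the 7.3-era syntax (the index and table names are
placeholders):

    CLUSTER logdata_day_idx ON logdata;
    ANALYZE logdata;
    -- CLUSTER rewrites the table in index order, so rows from the same date
    -- range end up physically close together; it has to be re-run periodically
    -- as new rows are appended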

[EMAIL PROTECTED] wrote:

I am looking at ways to speed up queries; the most common way for queries
to be constrained is by date range. I have indexed the date column.
Queries are still slower than I would like.

Would there be any performance increase for these types of queries if
the tables were split by month as described by Shridhar (method1) so
only the required tables were loaded from disk?

Will there be any performance increase if table partitioning is
implemented?

If method1 is likely to yield a performance increase, will there be a
large hit for other types of queries which require all the data?

I'd be happy to post the EXPLAIN ANALYZE output for a typical query if
more info is needed.

The date column is of type timestamp (and so goes right down to seconds);
most user queries are only concerned with whole days without the times
(but the time data is required for other queries). Can I do something
with an index of the timestamps cast to dates, and then cast the queries
to dates too?

Thanks!

Ron Johnson wrote:
 On Sun, 2003-08-03 at 06:31, Shridhar Daithankar wrote:
  On 1 Aug 2003 at 13:46, Roger Hand wrote:
 [snip]
  Here is what you can do.
 
  * Create an empty base table.
  * Create a trigger that directs the select on timestamps accordingly.
  * Write a schedule to create a new table at the start of every month
    and update the trigger accordingly.
  * Create and maintain rules to insert into the base table depending
    upon the timestamp.
 
  This is just a rough idea.
 
  There might be a fair amount of work to get this working, but surely
  it is not impossible.
 
 And you get a big performance hit when all those records are moved.
 
 Partitioning should be put on the TODO list soon after tablespaces
 (or DBA-defined directories) is implemented.



Re: [GENERAL] Apache - DBI - Postgresql: Cancelling queries

2003-08-04 Thread Ian Harding
Javascript does suck, but I have a captive audience.  They have to have it turned on 
or nothing happens when you click the submit button.  If nothing happens when you 
click the submit button, you don't get paid
=8^O

 Lincoln Yeoh [EMAIL PROTECTED] 08/02/03 02:32AM 
If it's double-click stuff, why don't you just not submit duplicate queries?
There are very many methods - e.g. you must have a valid token to submit a
query, and the token is only valid for one query. That sort of thing.
E.g. store a hash of certain values (hidden params, session id, a random salt,
and a row number) in a table as a token, and send the form with the token to
the user. When the user submits the form, do a SELECT FOR UPDATE on that row;
if the row does not have a matching token, flag an error. If there is no row,
flag an error. Then update the row to invalidate the token.
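
A minimal SQL sketch of that token scheme (table and column names are just an
example):

    CREATE TABLE form_token (
        session_id text PRIMARY KEY,
        token      text NOT NULL,      -- hash of hidden params + random salt
        used       boolean NOT NULL DEFAULT false
    );

    BEGIN;
    SELECT used FROM form_token
     WHERE session_id = 'abc123' AND token = 'deadbeef'
     FOR UPDATE;
    -- no row returned, or used = true  =>  reject the submission
    UPDATE form_token SET used = true WHERE session_id = 'abc123';
    COMMIT;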

You could even limit the number of concurrent queries that way, with a bit 
more overhead. You'll have a bottleneck, but since it's only for expensive 
(slow) queries it might not be a big problem.

Javascript is annoying, plus if it's off it doesn't work.

Regards,
Link.

At 01:19 PM 8/1/2003 +, Ian Harding wrote:

The solution I finally implemented seems to be pretty good, graying out 
the button after it's pushed with javascript.  That means no more 
doubleclick problem, and no more hammering away at the same button.  It 
does not preclude the reloading of the page (reactivating the button) or 
just going somewhere else and issuing another query.

The real solutions involving cute little client side applets that phone 
home to the server to see if the query is still running and show a phony 
status bar seem like too much work and still don't prevent malicious 
wankers from issuing multiple queries.

Good luck!

Mat wrote:

I am having trouble with users firing off a query from the web
interface, and then getting bored of waiting and firing off a different
query.

As HTTP is stateless, Apache (and so the Perl CGI script) only realises
that the user has gone when it tries to send data back to the user and
gets a broken-pipe type response. This is usually too late, as by this
time the query has completed, using up valuable resources.

Is there a tried and tested solution to this problem?
If so please let me know!

If not...

I am working on a work around at the moment but have run into trouble!

I have read the DBI man pages but there doesn't seem to be a cancel-query
function implemented; can anyone confirm or deny this?

Can I send some control sequence using DBI to cancel the query?

I had taken the approach of having two Perl threads, one to run the
query and one to poll the browser every second to see if the user is
still there. Plan X was then to cancel the query if the user had ended
the connection.

The first problem was the lack of a cancel query; the second problem seems
to be that DBI database handles cannot be shared between threads - so I
will have to pass a signal to the thread waiting for the query to return,
to cancel it? Has anyone else tried this? Any more gaping pitfalls that I
should be aware of?!

Thanks!






Re: [GENERAL] plPHP -- sort of an announcement.. but not commercial

2003-08-04 Thread Robert Treat
On Fri, 2003-08-01 at 21:08, Ron Johnson wrote:
 On Fri, 2003-08-01 at 19:06, Joshua D. Drake wrote:
  Hello,
  
As a recent flurry of activity has commenced within Command Prompt we 
  have released
  upon this rather unround earth, plPHP. Yes it is trigger safe, yes you 
  can write UDF's in
  PostgreSQL with PHP now.
  
Find it here:
   
http://www.commandprompt.com/entry.lxp?lxpe=260
 
 Is there a possibility of getting this in v7.5?

Here are the things I think would have to happen to put this in 7.5:

1) Resolve license issues (not bsd licensed)
2) Integrate it into the postgresql build system so it can work on all
platforms.
3) Make it build against different versions of php
4) More people testing it in different situations.

FWIW, I tried installing it on a system over the weekend; the trigger
feature seemed to work, and the complex array example function seemed to
work OK, but the basic simple echo example failed. I've subscribed to
the mailing lists, so we'll see if I can figure out where I went wrong.

Robert Treat 
-- 
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL




Re: [GENERAL] Monthly table partitioning for fast purges?

2003-08-04 Thread scott.marlowe
On Fri, 1 Aug 2003, Roger Hand wrote:

 We are moving an application from Oracle 8i to Postgres and I've run 
 into a problem attempting to duplicate a feature we currently use. 
 
 In Oracle you can divide a table into partitions. We use this feature 
 to break up the data by month. Each month we store several tens of 
 millions of rows in a particular table, and each month we drop the 
 partition that's a year old. In other words, we always keep the last 12 
 months of data (12 partitions). This is clean and fast. Since the 
 partition is by a timestamp column, it also gives us a certain amount of 
 automatic indexing. 
 
 Postgres doesn't support table partitions (correct me if I'm wrong!) so 
 the only option appears to be to dump everything into one big table. 
 What I'm worried about is the purging of the data from 12 months ago 
 ... I'm worried that this will be a slow and expensive operation. 

There are two approaches I can think of, and you should test each one to 
see how it holds up to your usage.

Approach the first:  Using scripts, create a table for each month.  Create 
a view that combines all of these months.  When a month goes out of date, 
simply remove it from the view.  Deleting the month's table can be done at 
your leisure, as it only saves disk space at this point; since it isn't in 
the view, it doesn't slow you down to keep it.
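
A rough sketch of this first approach (all names invented):

    CREATE TABLE data_2003_07 (ts timestamp NOT NULL, payload text);
    CREATE TABLE data_2003_08 (ts timestamp NOT NULL, payload text);

    CREATE OR REPLACE VIEW data_all AS
        SELECT * FROM data_2003_07
        UNION ALL
        SELECT * FROM data_2003_08;

    -- when July ages out, recreate the view without data_2003_07 and drop or
    -- archive that table whenever convenient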

Approach the second:  Use partial indexes to make it look like the table 
is partitioned.  I.e. every month create a new partial index like:

create index bigoldtable_month_idx on bigoldtable (datefield)
  where datefield >= 'first of month' and datefield <= 'last of month';

Then include WHERE datefield >= 'first of month' AND datefield <= 'last of
month' in your queries.

This should then hit the partial index, which will be small compared to 
the master table with all the rows, or compared to the main index, which 
indexes all rows.
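
A concrete version with made-up names and real dates (note that with a
timestamp column it is safer to use a half-open range, since '2003-08-31'
means midnight at the start of that day):

    CREATE INDEX bigoldtable_2003_08_idx ON bigoldtable (datefield)
        WHERE datefield >= '2003-08-01' AND datefield < '2003-09-01';

    SELECT count(*) FROM bigoldtable
     WHERE datefield >= '2003-08-01' AND datefield < '2003-09-01';
    -- the query's WHERE clause matches the index predicate, so the planner can
    -- use the small partial index instead of a full-table index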




Re: [GENERAL] Is there any place to get 7.4 win binaries ?

2003-08-04 Thread Bruce Momjian

No, you have to use cygwin under 7.3.3.  Win32 is delayed for 7.5, and
7.4 isn't even in beta yet.

---

Ivar wrote:
 Hi,
 
 Is there any place to get 7.4 win binaries ?
 
 
 
 
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [GENERAL] Is there any place to get 7.4 win binaries ?

2003-08-04 Thread Ivar

I mean Cygwin binaries; it is hard to compile it.

I want to get the latest version with Cygwin.

Bruce Momjian [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]

 No, you have to use cygwin under 7.3.3.  Win32 is delayed for 7.5, and
 7.4 isn't even in beta yet.

 --
-

 Ivar wrote:
  Hi,
 
  Is there any place to get 7.4 win binaries ?
 
 
 
 
 

 -- 
   Bruce Momjian|  http://candle.pha.pa.us
   [EMAIL PROTECTED]   |  (610) 359-1001
   +  If your life is a hard drive, |  13 Roberts Road
   +  Christ can be your backup.|  Newtown Square, Pennsylvania
19073








Re: [GENERAL] Dealing with commas in fields

2003-08-04 Thread scott.marlowe
On Fri, 1 Aug 2003, MT wrote:

 Hi,
 
 I'm using PostgreSQL to store customer and product information. Since 
 this is a French-language database, some of the names include commas. 
 Information is entered into this database via HTML forms and PHP 
 scripts. 
 When I search the database with a partial name that includes a comma, 
 such as D'Abcd, the database takes longer to find the record I'm looking 
 for than if I searched with a partial name that does not include a 
 comma, such as Abcd. I've also noticed that if I put D'Abcd in a PHP 
 variable, which is displayed as the title in the HTML header, it shows up 
 as D\\\'Abcd; if this variable is passed and displayed in the next page, 
 it shows up as d'Abcd.
 
 Is there a way to enter names with a comma into a field without it 
 causing this strange behavior?

This is normal PHP behaviour with most default installs.  You need to 
figure out where in your script you need to add stripslashes() and 
addslashes() to your data.  Note that with PostgreSQL there is a specific 
function you should use, pg_escape_string(), but it requires 
PostgreSQL 7.2 or later, so if you're still on 7.1 you'll need to use 
addslashes().

Use stripslashes() to unescape the data when it comes back out.
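
For reference, what the escaping looks like at the SQL level (the character at
issue here is really the single quote in D'Abcd; both forms below are accepted
by PostgreSQL of this era):

    SELECT * FROM customers WHERE name = 'D''Abcd';  -- standard SQL: double the quote
    SELECT * FROM customers WHERE name = 'D\'Abcd';  -- backslash escape, what addslashes() emits

The D\\\'Abcd seen in the page title is a symptom of the string being escaped
more than once (e.g. magic_quotes_gpc plus an explicit addslashes), which is
why it matters to escape exactly once, right where the query is built.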




Re: [GENERAL] plPHP -- sort of an announcement.. but not commercial

2003-08-04 Thread Joe Conway
Robert Treat wrote:
On Fri, 2003-08-01 at 21:08, Ron Johnson wrote:
Is there a possibility of getting this in v7.5?
Heres the things I think would have to happen to put this in 7.5

1) Resolve license issues (not bsd licensed)
2) Integrate it into the postgresql build system so it can work on all
platforms.
3) Make it build against different versions of php
4) More people testing it in different situations.
Here's my list.

Minor issues / general code cleanup:
1. eliminate '//' style comments
2. fix indentation

More significant:
2. Add support for array conversion in-out
3. Add support for set-returning functions
4. Add support for argument/return-value polymorphism
5. Incorporate recent improvements in function caching (ala PL/pgSQL)
6. Add documentation
7. Add regression test support
8. Modify for new ereport syntax and nested error contexts

Most significant:
9. Use PHP embed API instead of the PHP CLI (command line interface)
10. Fix license
Joe



Re: [GENERAL] plPHP -- sort of an announcement.. but not commercial

2003-08-04 Thread scott.marlowe
Any chance that PHP in safe mode meets our requirements for a trusted 
language?  It would be nice to have both php and phpu as two different 
choices.

On Fri, 1 Aug 2003, Joshua D. Drake wrote:

 Hello,
 
   As a recent flurry of activity has commenced within Command Prompt we 
 have released
 upon this rather unround earth, plPHP. Yes it is trigger safe, yes you 
 can write UDF's in
 PostgreSQL with PHP now.
 
   Find it here:
  
   http://www.commandprompt.com/entry.lxp?lxpe=260
 
 Have a glorius weekend.
 
 Sincerely,
 
 Joshua D. Drake
 
 
 




[GENERAL] Functions and procedures code!!!!

2003-08-04 Thread Freddy Menjívar M.

Is there any way to get a user's function and procedure code from the
PostgreSQL system catalog?

If I create a function, something like:

CREATE OR REPLACE FUNCTION myfunction()
RETURNS INTEGER
AS '
BEGIN
    my code ;
END;
' LANGUAGE 'plpgsql';

How can I retrieve "my code"?


Re: [GENERAL] Is there any place to get 7.4 win binaries ?

2003-08-04 Thread Bruce Momjian

Oh, not sure.  I hope someone makes them soon after we go beta so people
can do testing.

---

Ivar wrote:
 
 I mean Cygwin binaries; it is hard to compile it.
 
 I want to get the latest version with Cygwin.
 
 Bruce Momjian [EMAIL PROTECTED] wrote in message
 news:[EMAIL PROTECTED]
 
  No, you have to use cygwin under 7.3.3.  Win32 is delayed for 7.5, and
  7.4 isn't even in beta yet.
 
  --
 -
 
  Ivar wrote:
   Hi,
  
   Is there any place to get 7.4 win binaries ?
  
  
  
  
  
 
  -- 
Bruce Momjian|  http://candle.pha.pa.us
[EMAIL PROTECTED]   |  (610) 359-1001
+  If your life is a hard drive, |  13 Roberts Road
+  Christ can be your backup.|  Newtown Square, Pennsylvania
 19073
 
 
 
 
 
 
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [GENERAL] Inheritance multiple-value fields

2003-08-04 Thread scott.marlowe
On Sat, 2 Aug 2003, Vernon Smith wrote:

 
 We usually use another table for a multi-valued field. Is possible 
 having a single multi-valued field table for all tables in the same 
 heredity, other than having a multi-valued table for every single 
 tables in the heredity?

The SQL3 standard has an enumerated type listed in it, but I don't think 
it's likely to show up in PostgreSQL any time soon.  You can approximate 
these using a CHECK ... IN () constraint.

Or are you looking at more than one value in the same field?  In that 
case, arrays are a way to do that.

Both enumerated types and arrays break the strict relational model, but 
sometimes they're the simpler, cleaner solution.
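
Minimal sketches of both suggestions (table and column names are made up):

    -- approximating an enumerated type with a CHECK ... IN () constraint
    CREATE TABLE pets (
        name    text NOT NULL,
        species text NOT NULL CHECK (species IN ('cat', 'dog', 'bird'))
    );

    -- more than one value in a single field, using an array column
    CREATE TABLE articles (
        title text NOT NULL,
        tags  text[]
    );
    INSERT INTO articles VALUES ('hello', '{"news","local"}');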




Re: [GENERAL] Functions and procedures code!!!!

2003-08-04 Thread Tom Lane
Freddy Menjívar M. [EMAIL PROTECTED] writes:
 Is there any way to get a user's function and procedure code from the
 PostgreSQL system catalog?

Sure, just look at pg_proc.prosrc for your function.
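
For example, for the function created above:

    SELECT prosrc FROM pg_proc WHERE proname = 'myfunction';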

regards, tom lane



Re: [GENERAL] plPHP -- sort of an announcement.. but not commercial

2003-08-04 Thread scott.marlowe
On Mon, 4 Aug 2003, Joe Conway wrote:

 Robert Treat wrote:
  On Fri, 2003-08-01 at 21:08, Ron Johnson wrote:
 Is there a possibility of getting this in v7.5?
  
  Heres the things I think would have to happen to put this in 7.5
  
  1) Resolve license issues (not bsd licensed)
  2) Integrate it into the postgresql build system so it can work on all
  platforms.
  3) Make it build against different versions of php
  4) More people testing it in different situations.
 
 Here's my list.
 
 Minor issues/ general code cleanup:
 1. eliminate '//' style comments
 2. fix indentation
 
 More significant:
 2. Add support for array conversion in-out
 3. Add support for set-returning functions
 4. Add support for argument/return-value polymorphism
 5. Incorporate recent improvements in function caching (ala PL/pgSQL)
 6. Add documentation
 7. Add regression test support
 8. Modify for new ereport syntax and nested error contexts
 
 Most significant:
 9. Use PHP embed API instead of the PHP CLI (command line interface)
 10.Fix license

Looking at the license for PHP found here:

http://www.php.net/license/3_0.txt

it would seem to be pretty much an Apache-style license that doesn't allow 
you to relicense it without permission, but it looks BSD-compatible.





Re: [GENERAL] plPHP -- sort of an announcement.. but not commercial

2003-08-04 Thread Joe Conway
scott.marlowe wrote:
10.Fix license
Looking at the license for PHP found here:

http://www.php.net/license/3_0.txt

it would seem to be pretty much an apache style license that doesn't allow 
you to relicense it without permission.  but it looks BSD compatible.


The issue was that plPHP as posted was claimed to be GPL, although there 
isn't any notice at all in the source that I saw.

Does the PHP license require programs that dynamically link against it to 
carry its license, similar to the GPL (I didn't get that impression)? If 
not, then something like PL/PHP should be licensable under BSD.

Joe



Re: [GENERAL] plPHP -- sort of an announcement.. but not commercial

2003-08-04 Thread scott.marlowe
On Mon, 4 Aug 2003, Joe Conway wrote:

 scott.marlowe wrote:
 10.Fix license
  
  Looking at the license for PHP found here:
  
  http://www.php.net/license/3_0.txt
  
  it would seem to be pretty much an apache style license that doesn't allow 
  you to relicense it without permission.  but it looks BSD compatible.
 
 The issue was that plPHP as posted was claimed to be GPL, although there 
 isn't any notice at all in the source that I saw.
 
 Does the PHP license require programs that dynamically link carry their 
 license, similar to GPL (I didn't get that impression)? If not, then 
 something like PL/PHP should be licensable under BSD.

Yeah, I'm pretty sure it's safe to link to.  We could always ask the PHP 
guys themselves to be sure.





Re: [GENERAL] like performance w/o wildcards.

2003-08-04 Thread Joseph Shraibman
Richard Huxton wrote:
On Monday 04 August 2003 04:29, Joseph Shraibman wrote:

 Hmm.  It didn't work for me.  I'll try and figure this out.

Tom Lane wrote:

Joseph Shraibman [EMAIL PROTECTED] writes:

I notice in 7.3.3 the planner can't tell when a LIKE has no wildcards
and is in reality an '='.  Is this an easy change to make?


Check your locale/encoding - you probably want C or similar for LIKE to use 
an index.
I'm using latin1.  Why should it make a difference?



Re: [GENERAL] like performance w/o wildcards.

2003-08-04 Thread Bruce Momjian

Well, I don't see that = would be significantly faster than LIKE with a
no-wildcard string, so I don't see the value in having LIKE detect
non-wildcard strings.

---

Joseph Shraibman wrote:
 I notice in 7.3.3 the planner can't tell when a LIKE has no wildcards 
 and is in reality an '='.  Is this an easy change to make?
 
 
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [GENERAL] varchar, text and cidr

2003-08-04 Thread Richard Welty
On Mon, 04 Aug 2003 20:07:18 -0400 Tom Lane [EMAIL PROTECTED] wrote:

 Richard Welty [EMAIL PROTECTED] writes:
  On Mon, 04 Aug 2003 18:07:49 -0400 Tom Lane [EMAIL PROTECTED] wrote:
  You don't have to.  See CREATE CAST.
 
  cool. although it does look like there's still a piece missing:
 
  test=# create cast( varchar as cidr) with function cidr( varchar);
  ERROR:  CreateCast: function cidr(character varying) does not exist
 
 Yeah, you'd need to create a pg_proc entry for cidr(varchar) --- though
 you can get away with letting it point to the implementation function
 for cidr(text).

which would be a new row looking like this one, but with a different entry
in the vector of arg types. At this point, I end up needing to know the
OIDs of the argument types to do this after all. I guess I can figure that
out from looking at other functions in the pg_proc table.

test=# select * from pg_proc where proname = 'cidr';
 proname | pronamespace | proowner | prolang | proisagg | prosecdef
-+--+--+-+--+--
 cidr|   11 |1 |  12 | f| f

 | proisstrict | proretset | provolatile | pronargs | prorettype |
-+-+---+-+--++
 | t   | f | i   |1 |650 |

 proargtypes |  prosrc   | probin | proacl
-+---++
  25 | text_cidr | -  | {=X}
(1 row)

test=#
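
Putting the two replies together, an untested sketch of the missing piece,
reusing the text_cidr implementation function shown above (creating an
internal-language function requires superuser):

    CREATE FUNCTION cidr(varchar) RETURNS cidr
        AS 'text_cidr' LANGUAGE internal IMMUTABLE STRICT;

    CREATE CAST (varchar AS cidr) WITH FUNCTION cidr(varchar);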

-- 
Richard Welty [EMAIL PROTECTED]
Averill Park Networking 518-573-7592
Java, PHP, PostgreSQL, Unix, Linux, IP Network Engineering, Security





Re: [GENERAL] [osdldbt-general] Re: [PERFORM] OSDL Database Test Suite 3 is

2003-08-04 Thread Jenny Zhang
Thanks all for your feedback.

I think I should explain more about how to use this test kit. 

The main purpose of putting the test kit on the Scalability Test
Platform (STP) is that testers can run the workload against the database
with different parameters and Linux kernels to see performance
differences.  Though the test kit picks up default parameters if they
are not provided, command-line parameters override the default ones.
Currently, the following parameters are supported:
-s scale_factor -n number of streams -d 'database parameters' -r
{0|1} -x {0|1} 

where:
-s scale_factor: the TPC-H database scale factor; right now, only SF=1
is available.

-n number of streams: the number of throughput-test streams, which
corresponds to the number of simultaneous database connections during
the throughput test.

-d database parameters: the database parameters used when starting the
postmaster, for example:
-B 12 -c effective_cache_size=393216 -c sort_mem=524288 -c
stats_command_string=true -c stats_row_level=true -c
stats_block_level=true

-r {0|1}: indicates whether the database dir base/database dir/pgsql_tmp
is put on a separate disk drive.

-x {0|1}: indicates whether the WAL is put on a separate disk drive.

The other comments are in-lined:

On Mon, 2003-08-04 at 06:33, Manfred Koizar wrote:
 | effective_cache_size   | 1000
 
 With 4GB of memory this is definitely too low and *can* (note that I
 don't say *must*) lead the planner to wrong decisions.
 
I changed the default to effective_cache_size=393216 as calculated by
Scott.  Another way to check the execution plan is to go to the results
dir:
http://khack.osdl.org/stp/276917/results
There is a 'power_plan.out' file recording the execution plan.  I am
running a test with the changed effective_cache_size; I will see how it
affects the plan.
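
For reference, the arithmetic behind that value (effective_cache_size is
measured in 8 kB buffer pages; the assumption is that roughly 3 GB of the
4 GB box ends up as OS file cache):

    393216 pages * 8 kB/page = 3145728 kB = 3 GB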

 | shared_buffers | 15200
 
 ... looks reasonable.  Did you test with other values?
I have only one with shared_buffers=120 at:
http://khack.osdl.org/stp/276847/
The performance degraded.  
 
 | sort_mem   | 524288
 
 This is a bit high, IMHO, but might be ok given that DBT3 is not run
 with many concurrent sessions (right?).
 http://khack.osdl.org/stp/276912/results/plot/vmstat_swap.png shows
 some swapping activity towards the end of the run which could be
 caused by a too high sort_mem setting.
Right, I ran only 4 streams.  Setting this parameter lower caused more
reading/writing to pgsql_tmp.  I guess the database has to do that if
it cannot do the sorting in memory.

On 4 Aug 2003 at 15:33, Manfred Koizar wrote:
 
 I could not get postgresql.conf so I will combine the comments.
It is under database monitor data: database parameters.
 
 1. Effective cache size, already mentioned
 2. Sort memory already mentioned.
 3. Was WAL put on different drive?
That run did not put the WAL on a different drive.  I changed it this
morning so that it is configurable.  I also changed the result page so
that testers can tell from it.
 4. Can you try with autovacuum daemon and 7.4beta when it comes out..
I'd be happy to run it.  We would like to improve our Patch Life
Management (PLM) system so that it can accept PG patches and run
performance tests on those patches.  Right now PLM only manages Linux
kernel patches.  I would like to ask the PostgreSQL community whether
this kind of tool is of interest.
 5. What was the file system? Ext2/Ext3/reiser/XFS?
 
 Scratching head
 
It is Ext2.  Yeah, it is not reported on the page.
 Is there any comparison available for other databases.. Could be interesting to 
 see..:-)
 
 /Scratching head
 

Let me know if you have any suggestions about how to improve the test
kit (parameters, reported information, etc.), or how to make it more
useful to PG community.

Thanks,
-- 
Jenny Zhang
Open Source Development Lab Inc 
12725 SW Millikan Way
Suite 400
Beaverton, OR 97005
(503)626-2455 ext 31




Re: [GENERAL] [osdldbt-general] Re: [PERFORM] OSDL Database Test Suite 3 is

2003-08-04 Thread scott.marlowe
On 4 Aug 2003, Jenny Zhang wrote:

 On Mon, 2003-08-04 at 06:33, Manfred Koizar wrote:
  | effective_cache_size   | 1000
  
  With 4GB of memory this is definitely too low and *can* (note that I
  don't say *must*) lead the planner to wrong decisions.
  
 I changed the default to effective_cache_size=393216 as calculated by
 Scott.  Another way to check the execution plan is to go to the results
 dir:
 http://khack.osdl.org/stp/276917/results
 There is a 'power_plan.out' file to record the execution plan.  I am
 running a test with the changed effective_cache_size, I will see how it
 affect the plan.
 
  | shared_buffers | 15200
  
  ... looks reasonable.  Did you test with other values?
 I have only one with shared_buffers=120 at:
 http://khack.osdl.org/stp/276847/
 The performance degraded.  

Well, that's truly huge, even for a machine with lots-o-ram.  Most tests 
find that once the shared_buffers are big enough to use more than about 25 
to 33% of RAM, they're too big, as you get little return.

  | sort_mem   | 524288
  
  This is a bit high, IMHO, but might be ok given that DBT3 is not run
  with many concurrent sessions (right?).
  http://khack.osdl.org/stp/276912/results/plot/vmstat_swap.png shows
  some swapping activity towards the end of the run which could be
  caused by a too high sort_mem setting.
 Right, I run only 4 streams.  Setting this parameter lower caused more
 reading/writing to the pgsql/tmp.  I guess the database has to do it if
 it can not do sorting in memory. 

Note that IF your sortmem really is 1/2 gig, then you'll likely have LESS 
than 3 gigs left for OS system cache.  About how big does top show buff 
and cached to be on that box under load?  Not that it's a big deal if you 
get the effective cache size off by a little bit, it's more of a rubber 
mallet setting than a jeweler's screw driver setting.


Thanks a bunch for all the great testing.  It's a very nice tool to have 
for convincing the bosses to go with Postgresql.

