[GENERAL] function with tablename parameter

2003-10-27 Thread Miso Hlavac
hello,

Is it possible to write something similar?

create function get_count(varchar(32)) RETURNS int4 AS '
DECLARE
  tmp int4;
BEGIN
  SELECT COUNT(*) INTO tmp FROM $1;
  RETURN tmp;
END;' LANGUAGE 'plpgsql';


SELECT get_count('k_part');
SQL error:
ERROR:  parser: parse error at or near "$1" at character 24

thanx, miso




Re: [GENERAL] Casing: bug?

2003-10-27 Thread Andrew Sullivan
On Sun, Oct 26, 2003 at 10:43:58AM +, Vadim Chekan wrote:
 Hello there,
 
 I have quite a problem with PG 7.3 & 7.4b5
 I create tables using pgAdmin3 v-1.0.1
 I created a table xType. Pay attention to capital T letter.

And when you created it, did you write it "xType" or xType?  With the
quotes, you enforce case-sensitivity.

 Doesn't it violate documentation:

No.  The same page says that double quotes will enforce the case you wrote.
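
For example, a quick sketch of the folding rules (any throwaway table name
will do):

  CREATE TABLE "xType" (id int);  -- quoted: case preserved, must be written "xType" later
  CREATE TABLE xType (id int);    -- unquoted: folded to xtype
  SELECT * FROM "xType";          -- finds the first table
  SELECT * FROM xType;            -- folded to xtype, finds the second table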

A

-- 

Andrew Sullivan 204-4141 Yonge Street
Afilias Canada                Toronto, Ontario Canada
[EMAIL PROTECTED]  M2P 2A8
 +1 416 646 3304 x110




Re: [GENERAL] Index Usage Question

2003-10-27 Thread scott.marlowe
On Fri, 24 Oct 2003, Staff, Alexander wrote:

 Hi,
 I created a simple table (name char200, zip char10, city char200, street char200, id 
 int) and filled it with some data, appr. 250 000 records.
 I tested accessing some rows (select id from address where id = 4;, select * 
 from address where id between 3 and 333444) with a unique index on id and without 
 an index. EXPLAIN tells me in both cases that it does NOT use the index.
 I tested this using the name column (select name from address where name like 
 'Wal%';, select name from address where name = 'Walz') and if I created an index on 
 name it uses it. 
 Using id in the where clause of the select is as slow as using an unindexed name in 
 the where-clause.
 I ran ANALYZE, no changes.
 What's this ?
 This is not a feature, is it ?

Can we see the output of explain analyze select...(your query here)???
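
For example, something along these lines (using the table and column names
from your post):

EXPLAIN ANALYZE SELECT id FROM address WHERE id = 4;
EXPLAIN ANALYZE SELECT name FROM address WHERE name = 'Walz';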




Re: [GENERAL] Recomended FS

2003-10-27 Thread scott.marlowe
On Sat, 25 Oct 2003, James Moe wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 On Sun, 26 Oct 2003 16:24:17 +1300, Mark Kirkwood wrote:
 
 I would conclude that it is not *always* the case that power failure 
 renders the database unusable.
 
 I have just noticed a similar posting from Scott where he finds the cache 
 enabled case has a dead database after power failure.
 
   Other posts have noted that SCSI never fails under this condition. Apparently SCSI 
 drives sense an impending power loss and flush the cache before power completely 
 disappears. Speed *and* reliability. Hm.

Actually, it would appear that the SCSI drives simply don't lie about 
fsync.  I.e. when they tell the OS that they wrote the data, they wrote 
the data.  Some of them may combine write caching with lying about fsync, 
but the performance looks more like just good fsyncing to me.  
It's all a guess without examining the microcode though... :-)

   Of course, anyone serious about a server would have it backed up with a UPS and 
 appropriate software to shut the system down during an extended power outage. This 
 just leaves people tripping over the power cords or maliciously pulling the plugs.

Or a CPU frying, or a power supply dying, or a motherboard failure, or a 
kernel panic, or any number of other possibilities.  Admittedly, the first 
line of defense is always good backups, but it's nice knowing that if one 
of my CPUs fry, I can pull it, put in the terminator / replacement, and my 
whole machine will likely come back up.

But anyone serious about a server will also likely be running on SCSI as 
well as on a UPS.  We use a hosting center with 3 UPS and a Diesel 
generator, and we still managed to lose power about a year ago when one 
UPS went haywire, browned out the circuits of the other two, and the 
diesel generator's switch burnt out.  Millions of dollars worth of UPS / 
high reliability equipment, and a $50 switch brought it all down. 




Re: [GENERAL] [OT] Choosing a scripting language.

2003-10-27 Thread scott.marlowe
On Sat, 25 Oct 2003, Marco Colombo wrote:

 On Fri, 24 Oct 2003, scott.marlowe wrote:
  On Fri, 24 Oct 2003, Scott Chapman wrote:
 [...]
   I also love the indentation to 
   differentiate code blocks rather than the junk I'd been used to seeing in 
   Perl and previous languages.
  
  Sorry, that was the one big turnoff for me in Python.  Indentation is 
  simple to me, I do it linux kernel style, and don't even have to pay 
  attention to it anymore, it's just automatic for me.  I guess I'm just 
  used to doing it the old fashioned way.
 
 I don't get it. If you already indent code, what's the problem with
 Python? Python _requires_ correct indentation,

No, it RELIES on it.  Code blocks are marked out by how you indent: it 
doesn't look for block markers and then make sure the indentation is right, 
it uses the indentation itself to show it where the code blocks are.

The fact that tabs are parsed as 8 spaces by Python, when many editors are 
set to render them as 4 or 6, makes it quite possible to have a file that 
looks like it should run but doesn't.  I'll take good old {} any day.

Just do a google search for python whitespace tabs and you'll get about 
7820 results back...

 so it's a problem only
 to beginners who don't like indenting (and forces them in doing
 the Right Thing). If indentation is automatic for you, you're already
 doing it the Python way.

Not exactly.  I still prefer being able to do simple:

if (something) do one thing; constructs

Or put in debug lines that ARE NOT indented so they're easier to see:
function test(var1){
# delete this test var when testing is done
$test_var = set;
start of code...
}

without worrying about the parser complaining about white space.

Indentation is for ME, not the parser.  Having it count as the block 
marker just feels wrong to me.  I'm not even sure I can explain why 
completely, but my above points are just one small part of it.

I agree with you on using the right tool for the job.  Except Perl.  
The more I use other languages, the less I want to use Perl.  Maybe it was 
a bad experience as a junior developer long ago with it or something :-)




Re: [GENERAL] table functions + user defined types

2003-10-27 Thread Adam Witney
On 27/10/03 3:20 pm, BARTKO, Zoltan [EMAIL PROTECTED] wrote:

 Ladies and Gentlemen,
 
 Please, enlighten me, if you can, in the following matter:
 
 I made a type:
 
 create type my_type as (
   a integer,
   b integer
 );
 
 since I have a table:
 
 create table my_table (
   a integer;
 );
 
 and I have now a function too:
 
 create or replace function my_func (
   integer, -- a
   integer, -- b
 ) returns setof my_type as
 '
 declare
   pa alias for $1;
   pb alias for $2;
   -- declarations
   my_value   my_type;
 begin
   my_value.a := pa;
   my_value.b := pb;
   return my_value;
 end;
 ' language 'plpgsql';


Try this

create or replace function my_func (
integer, -- a
integer -- b
) returns my_type as
'
 declare
   pa alias for $1;
   pb alias for $2;
   -- declarations
   my_value   record;
 begin
   select into my_value pa, pb;
   return my_value;
 end;
 ' language 'plpgsql';
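
If you really do want SETOF (more than one row back), a minimal sketch along
the same lines -- untested, and assuming your plpgsql accepts composite-type
variables -- would use RETURN NEXT instead of RETURN:

create or replace function my_func (
integer, -- a
integer  -- b
) returns setof my_type as
'
 declare
   pa alias for $1;
   pb alias for $2;
   my_value   my_type;
 begin
   my_value.a := pa;
   my_value.b := pb;
   return next my_value;  -- queue one result row
   return;                -- end of the result set
 end;
 ' language 'plpgsql';

and call it as SELECT * FROM my_func(1, 2);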


 when I run this darling function I get a parse error pointing to the line
 after begin.
 
 What am I doing wrong? I have skimmed through all the manuals, had a look at
 the postgresql cookbook, no info on this.  I just would like to have a
 function that returns more fields at the same time - add a column to table
 my_table, where I could occasionally return some value (e.g. error code). How
 to do this?
 
 Thanks for your advice in advance
 
 Zoltan Bartko
 







Re: [GENERAL] function with tablename parameter

2003-10-27 Thread Stephan Szabo
On Mon, 27 Oct 2003, Miso Hlavac wrote:

 hello,

 Is it possible to write something similar?

 create function get_count(varchar(32)) RETURNS int4 AS '
 DECLARE
   tmp int4;
 BEGIN
   SELECT COUNT(*) INTO tmp FROM $1;
   RETURN tmp;
 END;' LANGUAGE 'plpgsql';

You'll need to do something a little more complicated, like:

create function get_count(varchar) RETURNS int8 AS '
DECLARE
 tmp record;
BEGIN
 FOR tmp IN EXECUTE ''SELECT COUNT(*) AS count FROM '' || $1 LOOP
  RETURN tmp.count;
 END LOOP;
END;'
LANGUAGE 'plpgsql';




[GENERAL] Replication again

2003-10-27 Thread Chris M. Gamble
Does anyone know a good commercial application that does multi-master replication for 
postgres?

Thanks



Re: [GENERAL] Nullable 'Foreign Key-like' Constraint

2003-10-27 Thread Manfred Koizar
On Fri, 24 Oct 2003 18:19:05 GMT, Ron [EMAIL PROTECTED]
wrote:
When I try the following with my current database I 
get an error:
  giibdb=# ALTER TABLE project ADD CONSTRAINT company_is_ta FOREIGN
  KEY (companyID) REFERENCES tblCompanies(companyID);
  NOTICE: ALTER TABLE will create implicit trigger(s) for FOREIGN
  KEY check(s)
  ERROR:  company_is_ta referential integrity violation - key
  referenced from project not found in company

Is there a way I can modify an existing database to get the same 
end-result (eg it works when DB is set up, before it is populated with 
data)?

Ron, you can have referential integrity or you can have projects
referencing nonexistent companies, but not both.  Whichever you
implement first prevents creation of the other one.

CREATE TABLE company (
  companyId int PRIMARY KEY,
  name text
);

INSERT INTO company VALUES (1, 'one');
INSERT INTO company VALUES (2, 'two');

CREATE TABLE project (
  projectId int PRIMARY KEY,
  name text,
  companyId int
);

INSERT INTO project VALUES (1, 'p1c1', 1);
INSERT INTO project VALUES (2, 'p2c1', 1);
INSERT INTO project VALUES (3, 'p3', NULL);

-- this works:
ALTER TABLE project ADD CONSTRAINT company_is_ta
  FOREIGN KEY (companyId) REFERENCES company (companyId);
NOTICE:  ALTER TABLE will create implicit trigger(s) for FOREIGN KEY
check(s)

-- this will fail:
INSERT INTO project VALUES (4, 'p4c7', 7);
ERROR:  company_is_ta referential integrity violation -
key referenced from project not found in company

-- now the other way round:
ALTER TABLE project DROP CONSTRAINT company_is_ta;
INSERT INTO project VALUES (4, 'p4c7', 7);
ALTER TABLE project ADD CONSTRAINT company_is_ta
  FOREIGN KEY (companyId) REFERENCES company (companyId);
ERROR:  company_is_ta referential integrity violation -
key referenced from project not found in company

To find projects violating the constraint:

SELECT * FROM project AS p WHERE NOT companyId IS NULL
 AND NOT EXISTS (
SELECT * FROM company AS c WHERE c.companyId = p.companyId);
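
If (and only if) those orphaned projects should simply have no company, one
possible cleanup before re-adding the constraint is to NULL them out -- a
sketch only, adjust to your data:

UPDATE project SET companyId = NULL
 WHERE companyId IS NOT NULL
   AND NOT EXISTS (
 SELECT * FROM company AS c WHERE c.companyId = project.companyId);

ALTER TABLE project ADD CONSTRAINT company_is_ta
  FOREIGN KEY (companyId) REFERENCES company (companyId);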

Servus
 Manfred



Re: [GENERAL] function with tablename parameter

2003-10-27 Thread Adam Witney

To use dynamic queries you will probably have to use EXECUTE, take a look
here

http://www.postgresql.org/docs/7.3/interactive/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN-QUERIES

Although that page says that EXECUTE does not support SELECT INTO queries,
you may be able to build something using FOR-IN-EXECUTE as described in
this section:

http://www.postgresql.org/docs/7.3/interactive/plpgsql-control-structures.html#PLPGSQL-RECORDS-ITERATING
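
Putting the two together, something like the following sketch might work (not
tested here; quote_ident() just protects the interpolated table name):

create or replace function get_count(text) RETURNS bigint AS '
DECLARE
  rec record;
BEGIN
  FOR rec IN EXECUTE ''SELECT count(*) AS cnt FROM '' || quote_ident($1) LOOP
    RETURN rec.cnt;
  END LOOP;
  RETURN NULL;  -- not reached: count(*) always yields exactly one row
END;' LANGUAGE 'plpgsql';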

HTH 

Adam



 hello,
 
 Is it possible to write something similar?
 
 create function get_count(varchar(32)) RETURNS int4 AS '
 DECLARE
 tmp int4;
 BEGIN
 SELECT COUNT(*) INTO tmp FROM $1;
 RETURN tmp;
 END;' LANGUAGE 'plpgsql';
 
 
 SELECT get_count('k_part');
 SQL error:
 ERROR:  parser: parse error at or near "$1" at character 24
 
 thanx, miso
 
 






[GENERAL] PostgreSQL with MS Query?

2003-10-27 Thread Grant Rutherford
Hi there,

I'm trying to access my postgreSQL database using Excel (through MS 
Query).  I've been reading a bit about ODBC and I'm pretty sure that 
this is required.  Is there a way to see if this is set up already?  Am 
I on the right track?  The database server is running redhat linux.

Any help would be appreciated...
Thanks,
Grant
--
Grant Rutherford
Iders Incorporated
600A Clifton Street
Winnipeg, MB
R3G 2X6
http://www.iders.ca
tel: 204-779-5400 ext 36
fax: 204-779-5444





Re: [GENERAL] Timestamp and interval precision

2003-10-27 Thread Tom Lane
Vilson farias [EMAIL PROTECTED] writes:
 bxs=# select CAST('10:32:14.553243' AS interval(0));
     interval
 -----------------
  10:32:14.553243
 (1 row)

This is a known bug in 7.3 and before --- it's fixed for 7.4.

regards, tom lane



Re: [GENERAL] Question about the Internals

2003-10-27 Thread Bruce Momjian
Relaxin wrote:
 I'm a consultant for a large company and they are looking at a database to
 use.
 
 I made the suggestion about going with Postgresql.
 
 I told him about its MVCC design, he liked that idea, but wanted to know
 exactly HOW it handles the multiple versions of records.  He's concerned
 because he was burned very early on by another database that was MVCC (won't
 mention any names) and his system became corrupt.  That database has since
 improved and fixed all of their problems.
 
 But the question is, how does Postgresql handle the multiple versions?
 How does it handle the records BEFORE they are committed, and how does it
 handle the records AFTER they are committed, so that different users can
 possibly have a different view of the data and indexes?  And also, how does
 the transactional part of the system play a role?

Each record has a creation transaction id and an expire transaction
id.  When you modify a record, you put your xid on the old record (expiring
it) and create a new one.  No one has reported any corruption problems with
our database for a long time, so you should be fine.

If you want details, see the developers web page and see my internals
presentation --- it has a diagram showing old/new rows and their
visibility to a single transaction.
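
If you want to see those ids for yourself, every row exposes them as the
hidden system columns xmin and xmax; for example, against any table t of your
own:

SELECT xmin, xmax, * FROM t;

xmin is the transaction id that created the row version, and xmax is the id
that expired it (it reads as 0 while the version is still live).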

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [GENERAL] incrementing and decrementing dates by day increments programmatically

2003-10-27 Thread Alvaro Herrera
On Mon, Oct 27, 2003 at 01:40:53AM -0800, Neil Zanella wrote:
 [EMAIL PROTECTED] (Alvaro Herrera) wrote in message 
 
  You can of course do
  SELECT now() + CAST('5 day' AS interval);
 
 Perhaps I should get myself a copy of the relevant parts of the SQL 99
 standard.  How would you do the above in standard SQL?

I think one standard way of doing the above would be
SELECT CURRENT_TIMESTAMP + CAST('5 day' AS interval);

Or, as pointed out by Tom Lane and someone else, if you don't need the
time part,
SELECT CURRENT_DATE + 5;

  For the date -I format you can use something like
  SELECT to_char(now() + 5 * '1 day'::interval, 'YYYY-MM-DD');
 
 I believe Oracle also has a to_char() function. Is this to_char() function
 part of standard SQL or is it just a coincidence that both DBMSs support
 such a function call? I wonder whether the PostgreSQL to_char()
 function is compatible with the Oracle one.

AFAIK the main motivation to create the to_char() function in the first
place was in fact Oracle compatibility.  If you want to do such a thing
in a standard manner, you should probably do

SELECT  EXTRACT(year FROM a) || '-' ||
EXTRACT(month FROM a) || '-' ||
EXTRACT(day FROM a)
FROM(SELECT CURRENT_DATE + 5 AS a) AS foo;

-- 
Alvaro Herrera (alvherre[a]dcc.uchile.cl)
I would rather have GNU than GNOT.  (ccchips, lwn.net/Articles/37595/)



Re: [GENERAL] Experience with PL/xx?

2003-10-27 Thread Jeff
On 27 Oct 2003 00:21:07 -0800
[EMAIL PROTECTED] (Klaus P. Pieper) wrote:

 
 What about PL/Python or PL/Ruby? Any experience with these two
 implementations on a production system? Are there any PostgreSQL
 specifics or limitations? Significant differences between these two
 languages?
 

PL/PGSQL is probably the most popular one.  I use it quite a bit and it works
like a champ.  It is quite comparable to Oracle's PL/SQL (hence the name).

Check it out in the docs on http://www.postgresql.org/

-- 
Jeff Trout [EMAIL PROTECTED]
http://www.jefftrout.com/
http://www.stuarthamm.net/



Re: [GENERAL] Temporary tables and miscellaneous schemas

2003-10-27 Thread Sean Chittenden
  If you see a pg_temp_* for every connection, that is a little
  overwhelming.  pg_toast and stuff aren't really too bad.  Is there
  any way to access your local temp schema in a way that doesn't
  show the others?  Could we use backend_pid in the query and show
  them only their own?
 
 I have created the following patch for 7.5.  It has \dn show only
 your local pg_temp_* schema, and only if you own it --- there might
 be an old temp schema around from an old backend.
 
 This patch requires a new function pg_stat_backend_id which returns
 your current slot id (not your pid) --- that would be separate
 addition.

If by slot, you mean connection ID, then this sounds like a good
compromise/patch to me.  -sc

-- 
Sean Chittenden



Re: [GENERAL] Temporary tables and miscellaneous schemas

2003-10-27 Thread Bruce Momjian
Sean Chittenden wrote:
   If you see a pg_temp_* for every connection, that is a little
   overwhelming.  pg_toast and stuff aren't really too bad.  Is there
   any way to access your local temp schema in a way that doesn't
   show the others?  Could we use backend_pid in the query and show
   them only their own?
  
  I have created the following patch for 7.5.  It has \dn show only
  your local pg_temp_* schema, and only if you own it --- there might
  be an old temp schema around from an old backend.
  
  This patch requires a new function pg_stat_backend_id which returns
  your current slot id (not your pid) --- that would be separate
  addition.
 
 If by slot, you mean connection ID, then this sounds like a good
 compromise/patch to me.  -sc

Yep, that's what it is.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [GENERAL] PostgreSQL with MS Query?

2003-10-27 Thread Joshua D. Drake
Hello,

 Yes, ODBC will be required, and unless you have installed it, it is not
configured.  You can go here:

  http://gborg.postgresql.org/project/psqlodbc/projdisplay.php

  For the OpenSource/Free version or here:

  http://www.commandprompt.com/

  For a commercial (with more features) version.

Sincerely,

Joshua D. Drake

Grant Rutherford wrote:

Hi there,

I'm trying to access my postgreSQL database using Excel (through MS 
Query).  I've been reading a bit about ODBC and I'm pretty sure that 
this is required.  Is there a way to see if this is set up already?  
Am I on the right track?  The database server is running redhat linux.

Any help would be appreciated...
Thanks,
Grant


--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-222-2783 - [EMAIL PROTECTED] - http://www.commandprompt.com
Editor-N-Chief - PostgreSQl.Org - http://www.postgresql.org 





Re: [GENERAL] Slow query

2003-10-27 Thread Yonatan Goraly
yes

Alvaro Herrera wrote:

On Mon, Oct 27, 2003 at 12:08:31AM +0200, Yonatan Goraly wrote:
 

I have a query that uses the same view 6 times. It seems that the 
database engine is calculating the view each time.
The result is very poor performance. The same query takes 2 sec with MS 
SQL, and more than 10 sec with PostgreSQL.
Is there a method to improve the performance besides merging the 
components of the view into one table?
   

Say, have you ANALYZEd your tables ?

 





Re: [GENERAL] PostgreSQL with MS Query?

2003-10-27 Thread Patrick Hatcher
I'll assume you are on a Windows box.  The answer is yes, you can use Excel
to pull back data from a Pg database on a Linux box.
If you are planning to use MS Query and you don't have it installed, you will
need to install it from the disk.
You can download the Pg ODBC driver from the Pg site.  Just click on the
download link, select your country, and then navigate to the ODBC/Versions
folder.  You'll want to download the 7.03.02 version as this is the newest.
Then create the DSN - do a  Google search on how to do this if you don't
know how.
Then fire up Excel and from the menu select Data, Get External Data, New
Database Query.  Follow the wizard from there.

hth
Patrick

***You wrote

Hi there,

I'm trying to access my postgreSQL database using Excel (through MS
Query).  I've been reading a bit about ODBC and I'm pretty sure that
this is required.  Is there a way to see if this is set up already?  Am
I on the right track?  The database server is running redhat linux.

Any help would be appreciated...
Thanks,
Grant







Re: [GENERAL] Retrieving a column comment

2003-10-27 Thread Tom Lane
Alvaro Herrera [EMAIL PROTECTED] writes:
 On Sun, Oct 26, 2003 at 10:02:22PM +, Oliver Kohll wrote:
 I can't seem to retrieve a comment on a table column. The following copy from 
 psql should, I think, return a comment:
 
 mydatabase=> SELECT relnamespace FROM pg_class WHERE relname='car';
  relnamespace
 --------------
          2200
 (1 row)

 Try using relfilenode instead of relnamespace.

Actually what he wants is the oid.  relfilenode is not relevant to
anything except the table's disk file name.

regards, tom lane



Re: [GENERAL] Recomended FS

2003-10-27 Thread scott.marlowe
On Fri, 24 Oct 2003, Scott Chapman wrote:

 On Friday 24 October 2003 16:23, scott.marlowe wrote:
  Right, but NONE of the benchmarks I've seen have been with IDE drives with
  their cache disabled, which is the only way to make them reliable under
  postgresql should something bad happen.  but thanks for the benchmarks,
  I'll look them over.
 
 I don't recall seeing anyone explain how to disable caching on a drive in this 
 thread.  Did I miss that?  'Would be useful.  I'm running a 3Ware mirror of 2 
 IDE drives.
 
 Scott

Each OS has its own methods, and some IDE RAID cards don't give you 
direct access to the drives to enable / disable write cache.

On Linux you can disable write cache like so:

hdparm -W0 /dev/hda

back on:

hdparm -W1 /dev/hda




Re: [GENERAL] Recomended FS

2003-10-27 Thread scott.marlowe
On Sun, 26 Oct 2003, Mark Kirkwood wrote:

 Got to doing this today, after a small delay due to the arrival of new 
 disks.
 
 So the system is 2x700MHz PIII, 512 MB, Promise TX2000, 2x40G ATA-133 
 Maxtor Diamond+8.
 The relevant software is FreeBSD 4.8 and Postgresql 7.4 Beta 2.
 
 Two runs of 'pgbench -c 50 -t 100 -s 10 bench' with a power cord 
 removal after about 2 minutes were performed, one with hw.ata.wc = 1 
 (write cache enabled) and other with hw.ata.wc = 0 (disabled).
 
 In *both* cases the Pg server survived - i.e it came up, performed 
 automatic recovery. Subsequent 'vacuum full' and further runs of pgbench 
 completed with no issues.

Sweet.  It may be that the Promise is turning off the cache, or that the 
new generation of IDE drives is finally reporting fsync correctly.  Was 
there a performance difference in the set with write cache on or off?

 I would conclude that it is not *always* the case that power failure 
 renders the database unusable.

But it usually is if write cache is enabled.

 I have just noticed a similar posting from Scott where he finds the cache 
 enabled case has a dead database after power failure. It seems that 
 it's a question of how *likely* is it that the database will survive/not 
 survive a power failure...
 
 The other interesting possibility is that Freebsd with soft updates 
 helped things remain salvageable in the cache enabled case (as some 
 writes *must* be lost at power off in this case)

FreeBSD may be the reason here.  If its soft updates are ordered in the 
right way, it may be that even with write caching on, the drives do the 
right thing under BSD.  Time to get out my 5.0 disks and start playing 
with my test server.  Thanks for the test!




[GENERAL] Experience with PL/xx?

2003-10-27 Thread Klaus P. Pieper
Hi,

does anybody out there have experience with the several PL's which are
available for PostgreSQL? I am currently evaluating several databases
(commercial as well as free & open source) for a new project and would
just like to hear some feedback.

PL/Java seems to be developed by a fairly small team - no updates on
their website since December 2002 (and even what's available on this
web site is not very encouraging to use PL/Java on a production
system). Does anybody use PL/Java?

What about PL/Python or PL/Ruby? Any experience with these two
implementations on a production system? Are there any PostgreSQL
specifics or limitations? Significant differences between these two
languages?

Any input will be appreciated,

Klaus



[GENERAL] Question about the Internals

2003-10-27 Thread Relaxin
I'm a consultant for a large company and they are looking at a database to
use.

I made the suggestion about going with Postgresql.

I told him about its MVCC design, he liked that idea, but wanted to know
exactly HOW it handles the multiple versions of records.  He's concerned
because he was burned very early on by another database that was MVCC (won't
mention any names) and his system became corrupt.  That database has since
improved and fixed all of their problems.

But the question is, how does Postgresql handle the multiple versions?
How does it handle the records BEFORE they are committed, and how does it
handle the records AFTER they are committed, so that different users can
possibly have a different view of the data and indexes?  And also, how does
the transactional part of the system play a role?

Any help here would be greatly appreciated!

Thanks






Re: [GENERAL] SCSI vs. IDE performance test

2003-10-27 Thread Rick Gigger
It seems to me file system journaling should fix the whole problem by giving
you a record of what was actually committed to disk and what was not.  I must
not understand journaling correctly.  Can anyone explain to me how
journaling works?

- Original Message - 
From: Bruce Momjian [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: Stephen [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Monday, October 27, 2003 12:14 PM
Subject: Re: [GENERAL] SCSI vs. IDE performance test


 Mike Benoit wrote:
  I just ran some benchmarks against a 10K SCSI drive and 7200 RPM IDE
  drive here:
 
  http://fsbench.netnation.com/
 
  The results vary quite a bit, and it seems the file system you use
  can make a huge difference.
 
  SCSI is obviously faster, but a 20% performance gain for 5x the cost is
  only worth it for a very small percentage of people, I would think.

 Did you turn off the IDE write cache?  If not, the SCSI drive is
 reliable in case of OS failure, while the IDE is not.

 -- 
   Bruce Momjian|  http://candle.pha.pa.us
   [EMAIL PROTECTED]   |  (610) 359-1001
   +  If your life is a hard drive, |  13 Roberts Road
   +  Christ can be your backup.|  Newtown Square, Pennsylvania
19073






Re: [GENERAL] Recomended FS

2003-10-27 Thread Greg Stark
scott.marlowe [EMAIL PROTECTED] writes:

 Sweet.  It may be that the promise is turning off the cache, or that the 
 new generation of IDE drives is finally reporting fsync correctly.  Was 
 there a performance difference in the set with write cache on or off?

Check out this thread. It seems the ATA standard does not include any way to
make fsync work properly without destroying performance. At least on Linux,
even that much is impossible without disabling caching entirely, as the
operation required isn't exposed to user-space. There is some hope for the
future though.

http://www.ussg.iu.edu/hypermail/linux/kernel/0310.2/0163.html

  The other interesting possibility is that Freebsd with soft updates 
  helped things remain salvageable in the cache enabled case (as some 
  writes *must* be lost at power off in this case)
 
 Free BSD may be the reason here.  If it's softupdates are ordered in the 
 right way, it may be that even with write caching on, the drives do the 
 right thing under BSD.  Time to get out my 5.0 disks and start playing 
 with my test server.  Thanks for the test!

I thought soft updates applied only to directory metadata changes. 

-- 
greg




[GENERAL] Multiple database services and multiple versions on Red Hat Linux systems

2003-10-27 Thread Fernando Nasser

The way it works is that we require a specific service script for each 
database service (one listening on each port).  Each of these 
services has an init script in /etc/init.d and a corresponding 
configuration file in /etc/sysconfig.  We use the 'chkconfig' utility to 
decide if each of those services will be activated on boot or not (it 
manipulates links under /etc/init.d for each SysV run level).

We currently support multiple versions running.  I have myself half a 
dozen database services on my system with versions that range from 7.1 
to 7.4.  As each configuration file for each service points to the 
location of the proper binaries we have no problems dealing with this.

For example:

# cat /etc/sysconfig/rhdb-production
PGDATA=/usr/local/pgsql73/data
PGDATA2=/var/lib/pgsql2
PGDATA3=/var/lib/pgsql3
PGDATA4=/var/lib/pgsql4
PGENGINE=/home/fnasser/INST/pgsql73/bin
PGPORT=5433
PGLOG=/var/log/rhdb/rhdb-production
PGINITOPTIONS=--lc-messages=pt_BR

As you can see, the PGENGINE points to a binary that I built myself.  It 
is unfortunate that I can only have one RPM installed at a time.

Oliver Elphick has suggested different package names for each version 
that has a different catalog number (i.e., moving between them needs a 
pg_dump + pg_restore, and we can't use one version's postmaster to access 
another version's data areas).

If we configure each of these packages with a different base path which 
includes the version, and install them, of course, to these versioned 
directories, we will end up with a setup similar to what I have on my 
system with the backends I've built myself.  It can even be a Java-like 
solution:

/usr/pgsql/postgresql71
/usr/pgsql/postgresql72
/usr/pgsql/postgresql73
/usr/pgsql/postgresql74
or have them scattered if the LSB so requires (I believe it does not 
address this case though).

As the binaries have been configured with the versioned paths, all RPMs 
are normal (not relocatable) and the binaries will refer to the 
libraries and other files of the proper version.  So by setting one's 
path, the user can use the version she or he sees fit.

For Red Hat's users (and Debian's, I believe), the 'alternatives' 
utility can be used to direct links from /usr/bin and such to the chosen 
version files, so a default could be established and for such there 
would be no need to change the PATH variable.

Also, the multiple versioning can be kept only on the server side.  On 
the client side the latest version will suffice if it guarantees (at 
minimum) two versions of backwards compatibility (as we do with the JDBC driver).

Besides the client side backward compatibility, what the core 
postgresql team could also do to support this would be to add version 
checks and issue warnings on mismatches (or errors if used against a 
version that is too old).  Also, make sure the path of the binary implies 
the location of the other files (i.e., the path from configure is always 
used, and not some hardcoded value).

As you see, these goals can be achieved without any changes in the 
postgresql community sources.

Regards to all,
Fernando
--
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9