Re: [GENERAL] Does PG Support Unicode on Windows?

2005-11-17 Thread Magnus Hagander
 Is there any truth to what this guy is saying?

Yes, some. But not much.


   On the other hand, Postgresql claims that Windows
 does not support 
   Unicode and you can't have Unicode fields on
 postgresql on Windows.
   This is a big mistake. See:
  
 http://pginstaller.projects.postgresql.org/faq/FAQ_windows.html
   
  What do you mean a big mistake? By Microsoft? Or
 PostgreSQL?

We claim it doesn't support UTF8, and that is true. The misconception comes
from the fact that PostgreSQL used to call UTF8 "UNICODE", with no
distinction. Windows supports UTF-16/UCS-2.

That said, we need to update the FAQ, because in 8.1 we *do* support Unicode
(UTF8) on win32. I'll go ahead and do that :)
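
For anyone who wants to verify this on an 8.1 win32 install, it should be as
simple as (database name made up):

CREATE DATABASE utf8test ENCODING 'UTF8';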


//Magnus

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] locked backend

2005-11-17 Thread Csaba Nagy
On Wed, 2005-11-16 at 19:41, Tom Lane wrote:
 Csaba Nagy [EMAIL PROTECTED] writes:
  The situation (diagnosed via pg_stat_activity): one table was locked by
  an update, a VACUUM ANALYZE was running for the same table (triggered by
  autovacuum), and a handful of inserts were waiting on the same table.
 
 Updates do not block inserts, and neither does vacuum, so there's
 something you're not telling us.  In particular an UPDATE wouldn't
 take an ExclusiveLock on the table, so that lock must have come from
 some other operation in the same transaction.

Well, if I'm not telling you something, it's because I don't know it myself
:-) 

OK, that makes sense if something else done earlier in the same transaction
blocked the inserts, and not the update itself. In any case the transaction
of the update was locking the rest, as nothing else was running at the
moment I checked. 

BTW, is the ExclusiveLock a table lock? From the documentation of
pg_locks it is not completely clear (it refers to
http://www.postgresql.org/docs/8.0/static/explicit-locking.html#LOCKING-TABLES 
which does not enumerate the lock mode names used in pg_locks).
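
For reference, a query along these lines should map the pg_locks rows to
tables and sessions (just a sketch against the 8.0/8.1 catalogs;
current_query only shows up with stats_command_string enabled):

SELECT l.pid, c.relname, l.mode, l.granted, a.current_query
FROM pg_locks l
LEFT JOIN pg_class c ON c.oid = l.relation
LEFT JOIN pg_stat_activity a ON a.procpid = l.pid
ORDER BY c.relname, l.granted DESC;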

I wonder what would take an exclusive lock on the table?

I would exclude any alter table: we don't do that from our application,
and the other person who could have done an alter table besides me sits
next to me and he didn't do it (the update's SQL comes from the
application actually). There are no foreign keys on the table, just a
primary key on a field populated from a sequence (by our application,
not via a default clause). We do not lock the table explicitly. The only
locking is done via a SELECT...FOR UPDATE - could that cause a table
lock?

But whatever the cause of the lock was, I still wonder why the UPDATE was
hanging. This table is a temporary table, it is regularly filled and
emptied, and usually it is relatively small (at most a few tens of
thousands of rows), so an update running for more than 3 hours is not
kosher. The update is actually part of the emptying procedure.

If it was some kind of deadlock, why was it not detected? And why didn't
the backend respond to the kill signal?

I'm shooting in the dark here, but I have to find out what happened,
so I can avoid it next time... or at least be able to efficiently shut
down a backend which blocks my server's activity...

Thanks,
Csaba.



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] PG 8.1 on Dual XEON with FreeBSD 6.0

2005-11-17 Thread Daniel Kalchev
This is how multiprocessor systems work. You will benefit from the second 
processor when you run another query in parallel. There is no way, as far as I 
am aware, to run a single task on both processors (except maybe by using threads?)

By the way, Intel's Hyperthreading provides not two CPUs per processor, 
but sort of one and a half. The 'second' CPU is not fully functional (some 
parts of the processor are not doubled), so you may indeed get better 
performance for CPU-intensive applications by disabling Hyperthreading. This 
is because, if the OS can't know what you are going to do with the CPU, it 
may/will schedule work on the 'half' CPU, which will then wait for the respective 
unit to become 'ready' from time to time. There is a new variant, the Pentium D 
processors, which are 'dual core' and provide what Hyperthreading was actually 
promising - two CPUs in one chip.

Don't worry about the 25% - you are actually using 50% of your theoretical 
CPU power and 100% of the processing a single CPU can give you - and you still 
benefit from the multiprocessing, because one processor runs your query 
while another is servicing the operating system functions, including I/O etc.

Daniel

 Hi,
 
 The FreeBSD 6.0 SMP Kernel recognizes my two XEONS as 4 CPUs.
 
 A single postgresql query, as I could see in the top utility, can use a
 maximum of 25% of CPU time, since it runs on one single virtual CPU,
 which means for me half of a XEON. Is that correct?
 
 If yes, is there a way to change that, and accelerate long queries
 (which are all CPU-bound) by giving them more processing time, like a
 full XEON? Both XEONs? Should I disable Hyperthreading for that?
 
 Thanks,
 
 --
 Philippe Lang
 Attik System
 rte de la Fonderie 2
 1700 Fribourg
 Switzerland
 http://www.attiksystem.ch
 
 Tel:   +41 (26) 422 13 75 
 Fax:   +41 (26) 422 13 76
 GSM:   +41 (79) 351 49 94
 Email: [EMAIL PROTECTED]
 Skype: philippe.lang
 



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Incomplete Startup Packet

2005-11-17 Thread Markus Wollny
Hi!

We're getting "incomplete startup packet" 
messages in our logfiles due to some sort of system probe run by our service 
provider which checks if PG is still running. In our case they're harmless of 
course. Are you sure that you're not running something along those lines, 
too?

Kind regards

 
Markus




Re: [GENERAL] Rebranding PostgreSQL

2005-11-17 Thread john.bender
On Wed, 16 Nov 2005 12:50:37 -0800 Chris Browne [EMAIL PROTECTED] wrote:
I would imagine that if you simply stow components where you choose to
stow them, and say "this is part of what we always install for all our
customers", and never bring OSS up as an issue, they probably won't
notice they were going to have an issue with it.

A very good point, and probably the tactic I'm going to use. Rather 
than rebranding, downplaying the use might work.

So, I'll forge ahead and simply deploy it in a somewhat 
unobtrusive, unobvious way. If it becomes a problem, I'll seek to 
educate and work with our sales guys to put a positive spin on the 
issue. I'm actually pretty proud of the way our app works with 
PostgreSQL, and I think once our problem customers see how well it 
runs, their fears, sprung from whatever source, will dissipate.

The silent capabilities of the installer will do nicely in this 
regard...I can wrap it into my current NSIS script and simply call 
out to the installer. However, I need to create an empty database 
and an initial user after the install is complete. Is there any way to 
hook custom scripts into the installer's process? If not, what 
ways would you approach this? It'd have to be self-contained in 
some way, as I cannot guarantee any particular scripting language 
will be installed on the target machines.

Thanks for all your suggestions.

John



Concerned about your privacy? Instantly send FREE secure email, no account 
required
http://www.hushmail.com/send?l=480

Get the best prices on SSL certificates from Hushmail
https://www.hushssl.com?l=485


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Rebranding PostgreSQL

2005-11-17 Thread Magnus Hagander
 The silent capabilities of the installer will do nicely in 
 this regard...I can wrap it into my current NSIS script and 
 simply call out to the installer. However, I need to create a 
 empty database and initial user after install is complete. Is 
 there any way to hook custom scripts into the installer's 
 process? If not, what ways would you approach this? It'd have 
 to be self-contained in some way, as I cannot guarantee any 
 particular scripting language will be installed on the target 
 machines.

Nope, no way to hook that in unless you want to build your own MSI. You
could create a hook in pginst.wxs and add the CA, and rebuild the MSI.
But then you'd have to rebuild the MSI yourself each time you get a new
version etc, which may not be what you want.

I don't know how NSIS works, but I'm sure you can write extension
functions for it, right? The safest way would be to write one in C
statically linked to libpq, and just have that one connect to the newly
installed database and create the required objects. That way you don't
rely on any external scripting languages or DLLs.
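
Whatever mechanism ends up running it, the SQL the hook needs to issue is
tiny - something along these lines (names and password are placeholders):

CREATE USER appuser PASSWORD 'changeme';
CREATE DATABASE appdb OWNER appuser ENCODING 'UTF8';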

//Magnus

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Wrong rows selected with view

2005-11-17 Thread Bill Moseley
On Wed, Nov 16, 2005 at 11:34:21PM -0500, Tom Lane wrote:
 No, I'm saying that the underlying data (the join result before applying
 DISTINCT ON) looks like this:
 
 bill=# select  class.id, person.id AS person_id
 bill-#FROM class, location, region, person, instructors
 bill-#   WHERE class.location = location.id AND class.id = 
 instructors.class 
 bill-# AND instructors.person = person.id AND location.region = region.id
 bill-#   ORDER BY class.id; 
   id  | person_id 
 --+---
 1 |49
 2 |27
 3 |19
 4 |82
 5 |12
  ...
  1238 |61
  1238 |60
  1239 |40
  1240 |67
  1241 |11
  1243 |26
  1243 |84
  1244 |26
  1244 |84
 (1311 rows)
 
 The DISTINCT ON will take just one of the two rows with id = 1243, and
 just one of the rows with id = 1244, and *it is effectively random which
 one gets picked*.  So when you then select rows with person_id = 84, you
 may or may not see these rows in the end result.

Yikes!  The problem is *when* DISTINCT ON happens, right?

And, Tom, you actually explained this to me on the list back on Aug 25th,
but that's when I was using the view in a different way.  You noted
that the order was unpredictable, but at that time it didn't matter
to me which row was selected.

http://archives.postgresql.org/pgsql-general/2005-08/msg01291.php

This current problem was due to my assumption of how PG executes the
query:

My assumption was that the select would first do the joins (including
limit by class.id = 84) *then* weed out the duplicate class.ids.

But if PG is first doing the joins on all the tables (before
limiting by class.id = 84), then weeding out the duplicate
class.ids, and then finally limiting by class.id = 84, then I can see
where I might end up with the missing row.

Frankly, I expected the first to happen because it would use an index
to select just the records of class.id = 84, then do the joins on
that small set of records.  Didn't seem likely that the database would
join all the records first and then limit by class.id.  Seems like the
hard way to do the query.  But the query planner works in strange and
mysterious ways. ;)


Does that also explain why PG was sometimes returning the correct
number of rows - depending on which of the two query plans above
was used?

 Exactly.  So your view is going to return the class id along with a
 randomly selected one of the instructor ids.  It seems to me that
 filtering this result on instructor id is perhaps a bit ill-advised,
 even if you fix the view so that the chosen instructor id isn't so
 random (eg, you could fix it to display the lowest-numbered instructor
 id for the particular class).  Even then, are you searching for the
 instructor id that the view happens to show for that class, or some
 other one?

Well, clearly, my one-size-fits-all view doesn't work in this case.
I just need another view without distinct when limiting by instructor.
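
If the one-size-fits-all view ever comes back, the DISTINCT ON could at
least be made deterministic along the lines Tom suggests - a rough sketch
using the columns from the query above, picking the lowest-numbered
instructor per class:

SELECT DISTINCT ON (class.id)
       class.id, person.id AS person_id
  FROM class, location, region, person, instructors
 WHERE class.location = location.id
   AND class.id = instructors.class
   AND instructors.person = person.id
   AND location.region = region.id
 ORDER BY class.id, person.id;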

It was that red-herring of removing a seemingly random column from the
view that made it hard to see what was really happening.

Thanks very much for all your time.


-- 
Bill Moseley
[EMAIL PROTECTED]


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] PREPARE TRANSACTION and webapps

2005-11-17 Thread Lincoln Yeoh

At 06:04 PM 11/16/2005 +0100, Martijn van Oosterhout wrote:


On Thu, Nov 17, 2005 at 12:29:25AM +0800, Lincoln Yeoh wrote:
 My assumption is that pending transactions (e.g. locks and other metainfo)
 will take much less memory than database backends.

They may take less memory but they take many more resources. Backends
don't take locks by themselves, transactions do.


Just curious: how much memory do locks/transactions occupy as a rough 
percentage of backend memory usage? Assume a typical active backend 
(5MB?). If it's 50% then sure forget it. But if it's 5% or even 1%...



Obviously these should both succeed. reading data doesn't block. Ten
minutes later user 1 submits an update and goes to lunch without
committing. User 2 then does an update but he has to wait. How long?
Well, by your definition, forever. I doubt user 2 will be very happy
with that.


I believe in postgresql there's select for update ...  nowait or 
something like that, and transactions can have savepoints.
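
For example, something along these lines (just a sketch, table and column
made up; NOWAIT is only available as of 8.1 as far as I know):

BEGIN;
SAVEPOINT try_lock;
-- fails immediately instead of blocking if someone else holds the row lock
SELECT * FROM orders WHERE order_id = 42 FOR UPDATE NOWAIT;
-- on error: ROLLBACK TO SAVEPOINT try_lock; and tell the user the record is busy
COMMIT;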


Also, if that sort of thing is a problem you could very easily link a user 
session to pending uncommitted database transactions. Once the user session 
times out you rollback all linked transactions.


I'm sure the solutions are decades old. After all in the dumb terminal 
days, couldn't transactions be held open for quite a long time too?



The way I would think about it would be to (a) let user 2 know straight
away someone else is already looking at this record. This is useful
info, maybe they talked to the same customer? and (b) when user 2
submits his edit he should be warned there are conflicts and be asked to
resolve them. If you abort either transaction you're going to have some
annoyed users on your hands.


What I used to do was make copies in the event of a collision - but it starts 
to get closer to a version control and merging problem, and less of a 
transaction problem ;).


If so many people have no problems with doing transactions at the 
application/middleware level, no wonder MySQL 3 was good enough for them - 
they had little need for MVCC and ACID databases, since they were already 
doing all that at a higher layer.


For what it is worth, I've done that sort of stuff at the application level 
too. shopping cart tables, tables with transaction_id columns, a 
transaction table, etc etc. I dunno about you all, but having to do that 
feels a bit like using MySQL 4 - some tables support transactions and 
some don't.


Oh well, maybe it's just not such a good idea after all. Just thought it 
might be feasible and useful.


Regards,
Link.


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] strange behavior on 8.1

2005-11-17 Thread Csaba Nagy
Hi all,

I have observed a strange behavior on 8.1 with an insert statement into a
temporary table done from a delete trigger.
I've attached a test case.
Observe that the NOTICE saying the rows were inserted occurs all 5 times
on the 8.0.3 server but only the first 2 times on the 8.1
installation, and all further runs of the flush_test function yield no
successful insert...
It might be worth noting that the 8.0.3 installation has 2 patches: one to
disallow the locking of parent keys in foreign key triggers, and another
one which changes timestamp rounding behavior to truncate.

Any ideas why this happens ?

Cheers,
Csaba.


CREATE TABLE test(col TEXT);

CREATE OR REPLACE FUNCTION sp_test_delete()
RETURNS trigger AS '
DECLARE
BEGIN
INSERT INTO temp_test (col) VALUES (OLD.col);

RAISE NOTICE ''Inserting: col=%'', OLD.col;

RETURN NEW;
END; ' language 'plpgsql';

CREATE TRIGGER tr_test_delete AFTER DELETE ON test
FOR EACH ROW EXECUTE PROCEDURE sp_test_delete();

CREATE OR REPLACE FUNCTION flush_test()
RETURNS VOID AS '
DECLARE
rec_debug RECORD;
BEGIN

BEGIN
CREATE TEMPORARY TABLE temp_test (col TEXT) ON COMMIT DELETE ROWS;
EXCEPTION
WHEN duplicate_table THEN
END;

-- delete processed
DELETE FROM test;

FOR rec_debug IN
SELECT * FROM temp_test
LOOP
RAISE NOTICE ''Row in temp_test: col=%'', rec_debug.col;
END LOOP;

RETURN;

END;
' LANGUAGE plpgsql;

INSERT INTO test VALUES ('1');
SELECT flush_test();
INSERT INTO test VALUES ('2');
SELECT flush_test();
INSERT INTO test VALUES ('3');
SELECT flush_test();
INSERT INTO test VALUES ('4');
SELECT flush_test();
INSERT INTO test VALUES ('5');
SELECT flush_test();


*** 8.1.0 behavior ***

cnagy= INSERT INTO test VALUES ('1');
INSERT 0 1
cnagy= SELECT flush_test();
NOTICE:  Inserting: col=1
CONTEXT:  SQL statement DELETE FROM test
PL/pgSQL function flush_test line 12 at SQL statement
 flush_test

  
(1 row)
 
cnagy= INSERT INTO test VALUES ('2');
INSERT 0 1
cnagy= SELECT flush_test();
NOTICE:  Inserting: col=2
CONTEXT:  SQL statement DELETE FROM test
PL/pgSQL function flush_test line 12 at SQL statement
 flush_test

  
(1 row)
 
cnagy= INSERT INTO test VALUES ('3');
INSERT 0 1
cnagy= SELECT flush_test();
NOTICE:  Inserting: col=3
CONTEXT:  SQL statement DELETE FROM test
PL/pgSQL function flush_test line 12 at SQL statement
 flush_test

  
(1 row)
 
cnagy= INSERT INTO test VALUES ('4');
INSERT 0 1
cnagy= SELECT flush_test();
NOTICE:  Inserting: col=4
CONTEXT:  SQL statement DELETE FROM test
PL/pgSQL function flush_test line 12 at SQL statement
 flush_test

  
(1 row)
 
cnagy= INSERT INTO test VALUES ('5');
INSERT 0 1
cnagy= SELECT flush_test();
NOTICE:  Inserting: col=5
CONTEXT:  SQL statement DELETE FROM test
PL/pgSQL function flush_test line 12 at SQL statement
 flush_test

  
(1 row)



*** 8.0.3 behavior ***

cnagy= INSERT INTO test VALUES ('1');
INSERT 1216290363 1
cnagy= SELECT flush_test();
NOTICE:  Inserting: col=1
CONTEXT:  SQL statement DELETE FROM test
PL/pgSQL function flush_test line 12 at SQL statement
NOTICE:  Row in temp_test: col=1
 flush_test

  
(1 row)
 
cnagy= INSERT INTO test VALUES ('2');
INSERT 1216290370 1
cnagy= SELECT flush_test();
NOTICE:  Inserting: col=2
CONTEXT:  SQL statement DELETE FROM test
PL/pgSQL function flush_test line 12 at SQL statement
NOTICE:  Row in temp_test: col=2
 flush_test

  
(1 row)
 
cnagy= INSERT INTO test VALUES ('3');
INSERT 1216290372 1
cnagy= SELECT flush_test();
NOTICE:  Inserting: col=3
CONTEXT:  SQL statement DELETE FROM test
PL/pgSQL function flush_test line 12 at SQL statement
NOTICE:  Row in temp_test: col=3
 flush_test

  
(1 row)
 
cnagy= INSERT INTO test VALUES ('4');
INSERT 1216290374 1
cnagy= SELECT flush_test();
NOTICE:  Inserting: col=4
CONTEXT:  SQL statement DELETE FROM test
PL/pgSQL function flush_test line 12 at SQL statement
NOTICE:  Row in temp_test: col=4
 flush_test

  
(1 row)
 
cnagy= INSERT INTO test VALUES ('5');
INSERT 1216290376 1
cnagy= SELECT flush_test();
NOTICE:  Inserting: col=5
CONTEXT:  SQL statement DELETE FROM test
PL/pgSQL function flush_test line 12 at SQL statement
NOTICE:  Row in temp_test: col=5
 flush_test

  
(1 row)
 

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Performance of a view

2005-11-17 Thread Thomas F. O'Connell


On Nov 14, 2005, at 7:40 PM, John McCawley wrote:


I have a view which is defined as follows:

//-
SELECT tbl_claim.claim_id, count(tbl_invoice.invoice_id) AS count,  
min(tbl_invoice.invoicedate) AS invoicedate

  FROM tbl_claim
  LEFT JOIN tbl_invoice ON tbl_claim.claim_id =  
tbl_invoice.claim_id AND tbl_invoice.active = 1

 GROUP BY tbl_claim.claim_id;
//-


snip

I roughly understand what is happening...in the first query, the  
dataset is being knocked down to one row, then somehow the view is  
being constructed using only that subset of the claim table.  In  
the second query, the view is being constructed from the entire  
dataset which is hundreds of thousands of rows, and thus is much  
slower.


My question is how would I go about obtaining the behavior from the  
faster query in the slower query?  I have switched the order of the  
tables, and tried many different permutations of the query, but no  
matter what I do, it seems that unless I specifically hard-code a  
claim_id filter on the claim_id, I am forced to run through every  
record.


Thoughts?


I'd be curious to see what would happen if you added claimnum as a  
field in your view. I don't have a complete understanding of the  
postgres internals in terms of how it is able to push outer clauses  
down into its views, but I think it might be able to optimize in  
that fashion if it is able to add a WHERE clause internally to the  
view, which it can't do in the case of claimnum since claimnum doesn't  
exist in the view.
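
A rough sketch of what I mean (the view name here is made up, and I'm
assuming claimnum lives on tbl_claim):

CREATE OR REPLACE VIEW claim_invoice_summary AS
SELECT tbl_claim.claim_id,
       tbl_claim.claimnum,
       count(tbl_invoice.invoice_id) AS count,
       min(tbl_invoice.invoicedate) AS invoicedate
  FROM tbl_claim
  LEFT JOIN tbl_invoice
         ON tbl_claim.claim_id = tbl_invoice.claim_id
        AND tbl_invoice.active = 1
 GROUP BY tbl_claim.claim_id, tbl_claim.claimnum;

-- and then filter on the view's column:
-- SELECT * FROM claim_invoice_summary WHERE claimnum = ...;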


--
Thomas F. O'Connell
Database Architecture and Programming
Co-Founder
Sitening, LLC

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-469-5150
615-469-5151 (fax)



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Very slow queries on 8.1

2005-11-17 Thread David Rysdam
I'm porting an application from Sybase and I've noticed that similar 
application functions take 2 to 3 times longer on postgres than they 
used to on the same machine running under Sybase.  I've tried changing 
various performance tuning parameters, such as shared_buffers, 
effective_cache_size, etc but there's little or no effect.  I'm 
beginning to think there's a deeper root cause to the slowness.


Right now, I'm working on a test case that involves a table with ~360k 
rows called nb.sigs.  My sample query is:


select * from nb.sigs where signum  25

With no index, explain says this query costs 11341.  After CREATE INDEX 
on the signum field, along with an ANALYZE for nb.sigs, the query costs 
3456 and takes around 4 seconds to return the first row.  This seems 
extremely slow to me, but I can't figure out what I might be doing 
wrong.  Any ideas?


(If necessary, I can write an entire script that creates and populates a 
table and then give my performance on that sample for someone else to 
check against.)


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Most significant digit number formatting

2005-11-17 Thread codeWarrior
If it is a numeric data column -- you probably want to use the round 
function:

SELECT round(1200.01, 3);
SELECT round(12.009, 2);
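
If the goal is a fixed number of decimals in text output without having to
guess the integer width, another option (not tested against your data, just
a sketch) is to_char() with the FM prefix, which drops the leading padding,
plus '0' placeholders so the decimals are never trimmed:

SELECT to_char(10810.5, 'FM9999999990.000');   -- '10810.500'
SELECT to_char(44.0, 'FM9999999990.000');      -- '44.000'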


Berend Tober [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
 Say I want to format calculated numeric output to uniformly have a
 specific number of decimal places, say 3 digits right of the decimal
 point. My current understanding is that the TO_CHAR function is the
 appropriate way to accomplish this kind of numeric formatting. So for
 instance I might write

 SELECT
 project_number,
 TO_CHAR(SUM(labor_hours), '999.999') AS total_hours
 FROM labor_data

 This is great as far as accomplishing the decimal part formatting, but
 it is only good provided I've included enough place holders for the
 integer part, i.e., in this example for numbers less than 1000, e.g.,

 project_number    total_hours
 05-100 ###.### (actual value is 10810.5)
 05-125 285.000
 05-150 404.500
 05-200  44.000
 05-54  66.000
 05-59 ###.### (actual value is 2245.75)

 So what I'm asking for is advice on how to create the least-significant
 digit formatting specifically, but without having to worry about
 exceeding the most-significant digit formatting specification. I don't
 see that explained in the documentation on TO_CHAR.

 I suppose one approach might be to guess what the biggest number might be,
 and then include an order of magnitude larger, e.g.
 TO_CHAR(SUM(labor_hours), '99.999'). But you know, good old Dr.
 Murphy will undoubtedly intervene and inevitably create a situation in
 which, whatever reasonable provisions were made originally, the limits
 will at some point be exceeded, causing the world as we know it to come
 to an end.

 Regards,
 Berend Tober












---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Very slow queries on 8.1

2005-11-17 Thread codeWarrior
What is the data type for signum ???



David Rysdam [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
 I'm porting an application from Sybase and I've noticed that similar 
 application functions take 2 to 3 times longer on postgres than they used 
 to on the same machine running under Sybase.  I've tried changing various 
 performance tuning parameters, such as shared_buffers, 
 effective_cache_size, etc but there's little or no effect.  I'm beginning 
 to think there's a deeper root cause to the slowness.

 Right now, I'm working on a test case that involves a table with ~360k 
 rows called nb.sigs.  My sample query is:

 select * from nb.sigs where signum  25

 With no index, explain says this query costs 11341.  After CREATE INDEX on 
 the signum field, along with an ANALYZE for nb.sigs, the query costs 3456 
 and takes around 4 seconds to return the first row.  This seems extremely 
 slow to me, but I can't figure out what I might be doing wrong.  Any 
 ideas?

 (If necessary, I can write an entire script that creates and populates a 
 table and then give my performance on that sample for someone else to 
 check against.)




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Performance of a view

2005-11-17 Thread John McCawley


I'd be curious to see what would happen if you added claimnum as a  
field in your view. I don't have a complete understanding of the  
postgres internals in terms of how it is able to push outer clauses  
down in to its views, but I think it might be able to optimize in  
that fashion if it is able to add a WHERE clause internally to the  
view, which it can't do in the case of claimnum since it doesn't  
exist in the view.



I added the claimnum and this actually slowed it down a bit because of 
the additional group by; however, I then changed my where clause to 
filter on the view's claimnum rather than tbl_claim's claimnum, and I 
got the results I wanted.


It seems to me that in the future I should always construct my views 
such that my WHERE clauses end up on the view and not on any tables that 
they join with.  The only problem with this is that very often I don't 
know in advance what fields the client will want to search by, and now 
I'll end up with two steps instead of one (modify my code AND modify the 
view); however, the speed increase is an order of magnitude and well 
worth it.


Thanks!

John


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Very slow queries on 8.1

2005-11-17 Thread Tom Lane
David Rysdam [EMAIL PROTECTED] writes:
 Right now, I'm working on a test case that involves a table with ~360k 
 rows called nb.sigs.  My sample query is:
 select * from nb.sigs where signum  25
 With no index, explain says this query costs 11341.  After CREATE INDEX 
 on the signum field, along with an ANALYZE for nb.sigs, the query costs 
 3456 and takes around 4 seconds to return the first row.  This seems 
 extremely slow to me, but I can't figure out what I might be doing 
 wrong.  Any ideas?

How many rows does that actually return, and what client interface are
you fetching it with?  libpq, at least, likes to fetch the entire query
result before it gives it to you --- so you're talking about 4 sec to
get all the rows, not only the first one.  That might be reasonable if
you're fetching 100k rows via an indexscan...

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Very slow queries on 8.1

2005-11-17 Thread David Rysdam
int4, not null and the index is unique.  I even tried clustering on it 
to no avail.


codeWarrior wrote:


What is the data type for signum ???



David Rysdam [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
 

I'm porting an application from Sybase and I've noticed that similar 
application functions take 2 to 3 times longer on postgres than they used 
to on the same machine running under Sybase.  I've tried changing various 
performance tuning parameters, such as shared_buffers, 
effective_cache_size, etc but there's little or no effect.  I'm beginning 
to think there's a deeper root cause to the slowness.


Right now, I'm working on a test case that involves a table with ~360k 
rows called nb.sigs.  My sample query is:


select * from nb.sigs where signum  25

With no index, explain says this query costs 11341.  After CREATE INDEX on 
the signum field, along with an ANALYZE for nb.sigs, the query costs 3456 
and takes around 4 seconds to return the first row.  This seems extremely 
slow to me, but I can't figure out what I might be doing wrong.  Any 
ideas?


(If necessary, I can write an entire script that creates and populates a 
table and then give my performance on that sample for someone else to 
check against.)





---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] PREPARE TRANSACTION and webapps

2005-11-17 Thread Greg Stark
Lincoln Yeoh lyeoh@pop.jaring.my writes:

 At 06:04 PM 11/16/2005 +0100, Martijn van Oosterhout wrote:
 
 On Thu, Nov 17, 2005 at 12:29:25AM +0800, Lincoln Yeoh wrote:
   My assumption is that pending transactions (e.g. locks and other metainfo)
   will take much less memory than database backends.
 
 They may take less memory but they take many more resources. Backends
 don't take locks by themselves, transactions do.
 
 Just curious: how much memory do locks/transactions occupy as a rough
 percentage of backend memory usage? Assume a typical active backend (5MB?).
 If it's 50% then sure forget it. But if it's 5% or even 1%...

I'm not sure I agree that 100% overhead is reasonable.

The biggest cost though is in context switching. Instead of having 8 processes
100% busy on 8 processors you have 100 or 1,000 processes mostly idle and
frantically context switching between them.

And in the additional complexity of having to make sure the right database
connection gets reassociated with each application request. The most popular
web server architecture can't even do this since they're in separate
processes; it would necessitate involving yet another process and another
context switch for every bit of data going both directions.

-- 
greg


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Very slow queries on 8.1

2005-11-17 Thread David Rysdam

Tom Lane wrote:


David Rysdam [EMAIL PROTECTED] writes:
 

Right now, I'm working on a test case that involves a table with ~360k 
rows called nb.sigs.  My sample query is:

select * from nb.sigs where signum  25
With no index, explain says this query costs 11341.  After CREATE INDEX 
on the signum field, along with an ANALYZE for nb.sigs, the query costs 
3456 and takes around 4 seconds to return the first row.  This seems 
extremely slow to me, but I can't figure out what I might be doing 
wrong.  Any ideas?
   



How many rows does that actually return, and what client interface are
you fetching it with?  libpq, at least, likes to fetch the entire query
result before it gives it to you --- so you're talking about 4 sec to
get all the rows, not only the first one.  That might be reasonable if
you're fetching 100k rows via an indexscan...

regards, tom lane


 

Right, it's about 100k rows and it is through libpq (pgadmin in this 
case, but my app uses libpq from pgtcl).  Is there a way to tell libpq 
to not do what it likes and do what I need instead?  I didn't see 
anything in the docs, but I didn't look very hard.


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Most significant digit number formatting

2005-11-17 Thread Berend Tober


codeWarrior wrote:

If it is a numeric data column -- you probably want to use the round 
function:


SELECT round(1200.01, 3);
SELECT round(12.009, 2);

 

Interesting. I had tried that. After your message I tried again and 
encountered this interesting anomaly: while the ROUND function used in a 
query run in the SQL window of PgAdmin III does in fact force  output of 
trailing zero decimal digits to the extent specified, i.e.,


SELECT
   project_number,
   labor_hours,
   TO_CHAR(labor_hours, '999.999'),
   ROUND(labor_hours,3)
FROM time_data
LIMIT 5

05-08,1974.0000, ###.###,1974.000
05-100,10810.5000, ###.###,10810.500
05-125,285.0000, 285.000,285.000
05-150,404.5000, 404.500,404.500
05-200,44.0000,  44.000,44.000

Running the same query through a TQuery dataset object in Borland Delphi 
using the BDE truncates the trailing zeros from ROUND:


000-05-08    1974       ###.###   1974
000-05-100   10810.5    ###.###   10810.5
000-05-125   285        285.000   285
000-05-150   404.5      404.500   404.5
000-05-200   44          44.000   44

That is why I didn't realize ROUND was an option, but for me it still is 
not since the report is produced by a Delphi application. I suppose I 
can accomplish this formatting programmatically within the Delphi 
application, but I was hoping to have the database do it directly.


Thanks,
Berend Tober


Berend Tober [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
 


Say I want to format calculated numeric output to uniformly have a
specific number of decimal places, 


...


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] strange behavior on 8.1

2005-11-17 Thread Tom Lane
Csaba Nagy [EMAIL PROTECTED] writes:
 I have observed a strange behavior on 8.1 of an insert statement into a
 temporary table done from a delete trigger.

In an assert-enabled build this dumps core, so I'd say you've found a
bug ...

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Incomplete Startup Packet

2005-11-17 Thread Mott Leroy

Markus Wollny wrote:

We're getting incomplete startup packet messages in our logfiles due 
to some sort of system probe run by our service provider which checks if 
PG is still running. In our case they're harmless of course. Are you 
sure that you're not running something along those lines, too?


Ah, in fact, that is the case. We have Nagios running which checks to 
see if postgres is still up. It very well may be that this is the cause 
of the messages. Thank you.


Mott

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Very slow queries on 8.1

2005-11-17 Thread Tom Lane
David Rysdam [EMAIL PROTECTED] writes:
 Right, it's about 100k rows and it is through libpq (pgadmin in this 
 case, but my app uses libpq from pgtcl).  Is there a way to tell libpq 
 to not do what it likes and do what I need instead?

The only way ATM is to declare a cursor on the query and then FETCH
whatever amount seems reasonable at a time.
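
A minimal sketch of that pattern (cursor name made up; add whatever WHERE
clause the real query uses):

BEGIN;
DECLARE sigs_cur CURSOR FOR
    SELECT * FROM nb.sigs;      -- plus the original condition on signum
FETCH 1000 FROM sigs_cur;       -- repeat until it returns fewer than 1000 rows
CLOSE sigs_cur;
COMMIT;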

There is support in the on-the-wire protocol for partial fetches from
ordinary queries (ie, without the notational overhead of creating a
cursor).  libpq doesn't expose that at the moment.  There's a thread
right now in pgsql-interfaces about adding such a feature to libpq ...

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Very slow queries on 8.1

2005-11-17 Thread Bruno Wolff III
On Thu, Nov 17, 2005 at 11:31:27 -0500,
  David Rysdam [EMAIL PROTECTED] wrote:
 Right, it's about 100k rows and it is through libpq (pgadmin in this 
 case, but my app uses libpq from pgtcl).  Is there a way to tell libpq 
 to not do what it likes and do what I need instead?  I didn't see 
 anything in the docs, but I didn't look very hard.

You could use a cursor. That will bias the plan toward fast start plans which
might give you lower throughput if you are normally planning to fetch all of
the rows, but will give you quicker access to the first row.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Very slow queries on 8.1

2005-11-17 Thread Martijn van Oosterhout
On Thu, Nov 17, 2005 at 11:31:27AM -0500, David Rysdam wrote:
 Right, it's about 100k rows and it is through libpq (pgadmin in this 
 case, but my app uses libpq from pgtcl).  Is there a way to tell libpq 
 to not do what it likes and do what I need instead?  I didn't see 
 anything in the docs, but I didn't look very hard.

Use the async interface. There you submit the query and retrieve rows
as they come in. It's a bit trickier to program but it can be done.

psql doesn't do this though, it's not clear how it could anyway, given
the way it formats.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




Re: [GENERAL] Very slow queries on 8.1

2005-11-17 Thread David Rysdam

Bruno Wolff III wrote:


On Thu, Nov 17, 2005 at 11:31:27 -0500,
 David Rysdam [EMAIL PROTECTED] wrote:
 

Right, it's about 100k rows and it is through libpq (pgadmin in this 
case, but my app uses libpq from pgtcl).  Is there a way to tell libpq 
to not do what it likes and do what I need instead?  I didn't see 
anything in the docs, but I didn't look very hard.
   



You could use a cursor. That will bias the plan toward fast start plans which
might give you lower throughput if you are normally planning to fetch all of
the rows, but will give you quicker access to the first row.


 


That is exactly what I want, but is it possible to use a cursor from pgtcl?

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Moving from MySQL to PostgreSQL with Ruby on Rails.

2005-11-17 Thread Peter Michaux
Hi,

I'm just new to the PostgreSQL world. I've been using MySQL but I want
to develop a Ruby on Rails application that can be installed on either
MySQL or PostgreSQL. I don't know how much the DDL dialects vary
between them. At the moment I am interested in the options on a table
like UTF-8. In MySQL I write

CREATE TABLE product (
 id INT NOT NULL AUTO_INCREMENT,
 name VARCHAR(255) NOT NULL DEFAULT '',
 PRIMARY KEY (id)
) DEFAULT CHARSET=utf8;

Will this definition work in the PostgreSQL world? Is there a web page
for people with MySQL experience moving to PostgreSQL?

Part of the issue is the way the Ruby on Rails migration class enables me
to add options to Rails' own abstracted DDL, just like I have done in the
above example. Other ways of adding options might be tricky.

Thanks,
Peter


Re: [GENERAL] Very slow queries on 8.1

2005-11-17 Thread David Rysdam

Martijn van Oosterhout wrote:


On Thu, Nov 17, 2005 at 11:31:27AM -0500, David Rysdam wrote:
 

Right, it's about 100k rows and it is through libpq (pgadmin in this 
case, but my app uses libpq from pgtcl).  Is there a way to tell libpq 
to not do what it likes and do what I need instead?  I didn't see 
anything in the docs, but I didn't look very hard.
   



Use the async interface. There you submit the query and retrieve rows
as they come in. It's a bit trickier to program but it can be done.

psql doesn't do this though, it's not clear how it could anyway, given
the way it formats.

Have a nice day,
 

I'm experimenting with the async interface right now.  Hopefully it will 
fit in well.


It's OK if psql/pgadmin don't do it, as long as the app does.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Very slow queries on 8.1

2005-11-17 Thread Michael Fuhr
On Thu, Nov 17, 2005 at 10:38:50AM -0500, David Rysdam wrote:
 I'm porting an application from Sybase and I've noticed that similar 
 application functions take 2 to 3 times longer on postgres than they 
 used to on the same machine running under Sybase.  I've tried changing 
 various performance tuning parameters, such as shared_buffers, 
 effective_cache_size, etc but there's little or no effect.

What's your OS and hardware environment and what are your non-default
PostgreSQL settings?

 Right now, I'm working on a test case that involves a table with ~360k 
 rows called nb.sigs.  My sample query is:
 
 select * from nb.sigs where signum  25
 
 With no index, explain says this query costs 11341.  After CREATE INDEX 
 on the signum field, along with an ANALYZE for nb.sigs, the query costs 
 3456 and takes around 4 seconds to return the first row.

Please post the EXPLAIN ANALYZE output for the query -- that'll
show us the query plan, the accuracy of the planner's row count
estimate, and how long the query takes to execute on the server.
It might also be useful to see the table definition and the output
of the following query:

SELECT null_frac, n_distinct, correlation
FROM pg_stats
WHERE schemaname = 'nb' AND tablename = 'sigs' AND attname = 'signum';

What client interface are you using?  If the query returns a lot
of rows then you might benefit from using a cursor to fetch the
result set in chunks; otherwise the client library is probably
waiting for the entire result set to arrive before returning any
rows to you.  If the result set is large then you can experience
performance problems due to a shortage of real memory.

How volatile is the data and how common are queries based on signum?
You might benefit from clustering on the signum index.
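
For example (old-style CLUSTER syntax as in 8.1; the index name is assumed):

CLUSTER sigs_signum_idx ON nb.sigs;
ANALYZE nb.sigs;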

 (If necessary, I can write an entire script that creates and populates a 
 table and then give my performance on that sample for someone else to 
 check against.)

If it's a short script that populates the table with canned data
then go ahead and post it.

-- 
Michael Fuhr

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Moving from MySQL to PostgreSQL with Ruby on Rails.

2005-11-17 Thread Robby Russell
On Thu, 2005-11-17 at 08:48 -0800, Peter Michaux wrote:
 Hi,
 
 I'm just new to the PostgreSQL world. I've been using MySQL but I want
 to develop a Ruby on Rails application that can be installed on either
 MySQL or PostgreSQL. I don't know how much the DDL dialects vary
 between them. At the moment I am interested in the options on a table
 like UTF-8. In MySQL I write
 
 CREATE TABLE product (
   id INT NOT NULL AUTOINCREMENT,
   name VARCHAR(255) NOT NULL DEFAULT '',
   PRIMARY KEY (id)
 ) DEFAULT CHARSET=UTF-8;
 
 Will this definition work in the PostgreSQL world? Is there a web page
 for people with MySQL exerience moving to PostgreSQL?
 

CREATE TABLE product (
  id SERIAL PRIMARY KEY,
  name VARCHAR(255) NOT NULL DEFAULT ''
);


 Part of the issue is the way Ruby on Rails migration class enables me
 to add options to Rails' own abstraced DDL just like I have done in
 the above example. Other ways of adding options might be tricky.

With ActiveRecord::Migration:

# db/migrate/1_initial.rb
class Initial < ActiveRecord::Migration

  def self.up
create_table :products do |t|
  t.column :name, :string, :default => ''
end
  end

  # drop all tables 'rake migrate VERSION=0'
  def self.down
drop_table :products   
  end

end

# Run from main Rails directory
rake migrate

Using either plain SQL like above or AR::Migrate will generate the same table 
structure.


Cheers,

-Robby

-- 
/**
* Robby Russell, Founder.Developer.Geek
* PLANET ARGON, Rails Development, Consulting  Hosting
* Portland, Oregon  | p: 503.351.4730 | f: 815.642.4068
* www.planetargon.com | www.robbyonrails.com
* Programming Rails   | www.programmingrails.com
***/


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Very slow queries on 8.1

2005-11-17 Thread David Rysdam

Michael Fuhr wrote:


On Thu, Nov 17, 2005 at 10:38:50AM -0500, David Rysdam wrote:
 

I'm porting an application from Sybase and I've noticed that similar 
application functions take 2 to 3 times longer on postgres than they 
used to on the same machine running under Sybase.  I've tried changing 
various performance tuning parameters, such as shared_buffers, 
effective_cache_size, etc but there's little or no effect.
   



What's your OS and hardware environment and what are your non-default
PostgreSQL settings?

 

Fedora Core 2, dual 2.8 GHz, 2 GB ram. 


shared_buffers = 1
effective_cache_size = 10

Right now, I'm working on a test case that involves a table with ~360k 
rows called nb.sigs.  My sample query is:


select * from nb.sigs where signum  25

With no index, explain says this query costs 11341.  After CREATE INDEX 
on the signum field, along with an ANALYZE for nb.sigs, the query costs 
3456 and takes around 4 seconds to return the first row.
   



Please post the EXPLAIN ANALYZE output for the query -- that'll
show us the query plan, the accuracy of the planner's row count
estimate, and how long the query takes to execute on the server.
It might also be useful to see the table definition and the output
of the following query:

SELECT null_frac, n_distinct, correlation
FROM pg_stats
WHERE schemaname = 'nb' AND tablename = 'sigs' AND attname = 'signum';

 

The first things would be problematic to supply, since they are actually 
on a computer that doesn't have access to the Internet or to the machine 
I'm writing this on.  As for the query:


Row  null_frac  n_distinct  correlation
1    0          -1          1


What client interface are you using?  If the query returns a lot
of rows then you might benefit from using a cursor to fetch the
result set in chunks; otherwise the client library is probably
waiting for the entire result set to arrive before returning any
rows to you.  

That does seem to be the problem.  I've never worked with cursors, so 
I'll have to see if I can fit our DB module into that mold.


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] strange behavior on 8.1

2005-11-17 Thread Tom Lane
Csaba Nagy [EMAIL PROTECTED] writes:
 Ok, I found a fix which works for me: don't use ON COMMIT DELETE ROWS on
 the temporary tables, but explicitly delete the rows once processed.
 However, I think it should work with ON COMMIT DELETE ROWS too, and it
 works fine indeed in 8.0.3.

I found the bug, and indeed ON COMMIT DELETE ROWS is the only case where
there's a significant chance of observing a problem.  Thanks for the
report!

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] unsubscribe pgsql-general

2005-11-17 Thread Peter Atkins








unsubscribe pgsql-general








Re: [GENERAL] Rebranding PostgreSQL

2005-11-17 Thread Steve Atkins
On Wed, Nov 16, 2005 at 02:19:28PM -0500, Vivek Khera wrote:
 On Nov 16, 2005, at 1:09 PM, [EMAIL PROTECTED]  
 [EMAIL PROTECTED] wrote:
 
 There are a few obstinate anti-open source customers though, that
 prevent my plan from moving forward. They've bought into whatever
 hype they've read and just simply say no. Now, that said, they're
 fairly non-technical and probably had never heard of PostgreSQL
 before we presented our plan.
 
 how would postgres be exposed to them anyhow?  wouldn't it just sit  
 behind the scenes of your front-end?

Backups. You really need to explain pg_dump to the end user.

 the real trick would have been to sell it in a better way.  don't  
 mention open source or antyhing -- just say we have our own in-house  
 DB we can provide at reduced cost to supporting your pre-installed  
 Oracle.  given them too much information was a mistake, IMHO.

We embed postgresql in our product[1]. We don't hide the fact - we
mention it in our pre-sales material and include docs about how to
access the backend DB via psql, JDBC and ODBC and stress that it's a
very standard, widely supported database that's compatible with many
third party tools and reporting utilities. What worries potential
customers most is the need to do maintenance on a database they're not
familiar with so we have app level code to do all the maintenance
needed.

We're selling mostly into large enterprise, and while we've had one or
two requests to support Oracle as well as Postgresql (uhm, no. life is
too short...) we've found that making it very clear that the end users
do not need to become Postgresql DBAs, and that Postgresql is a solid
enterprise grade database has been enough to make potential customers
happy.

Cheers,
  Steve

[1] Almost vanilla build, but we bundle it in the same tarball as the
application and do all the initdb work needed behind the scenes
as part of our installation script, include the PG startup and
shutdown in our rc scripts and have an autovacuum kinda-equivalent
embedded in the app.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Partial foreign keys, check constraints and inheritance

2005-11-17 Thread Eric E

Hi all,
   In my database application, I've repeatedly encountered a particular 
issue, and I'm not sure I'm addressing it well, so I'd like suggestions 
on how to deal with it.  The problem is that I need something like a 
partial foreign key - a foreign key where, based on field1, in some rows 
field1 references table A, and in some rows field1 references tableB. 

Here's the gist of the design problem.  Say I have a generic product 
sales database:  products, customers, orders - orders bring together 
products and customers.  Now I want a table to track problems associated 
with any of these items - products, customers or orders - and I want to 
associate each problem with an item in one of the tables.


What's the best way to do this?  My immediate reaction is that I want a 
partial foreign key, but perhaps this is not a good way to go about such 
a design.  I've also considered using inheritance. I could put all the 
data fields for problems into a base table, then use separate inherited 
tables for each of the tables I want to reference with foreign keys.  I 
avoided inherited tables in version 7.4 because they didn't seem 
feature-complete.  Finally, there's the option of doing what I do now, 
which is use a check constraint.  The check constraint has the distinct 
downside of making backups and restoration more complex, as it is added 
during table creation, and not after data load.


Does anyone have ideas on the best way to achieve this behavior?  Ideas 
and advice would be much appreciated.


Cheers,

Eric

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Partial foreign keys, check constraints and inheritance

2005-11-17 Thread Jaime Casanova
On 11/17/05, Eric E [EMAIL PROTECTED] wrote:
 Hi all,
In my database application, I've repeatedly encountered a particular
 issue, and I'm not sure I'm addressing it well, so I'd like suggestions
 on how to deal with it.  The problem is that I need something like a
 partial foreign key - a foreign key where, based on field1, in some rows
 field1 references table A, and in some rows field1 references tableB.

 Here's the gist of the design problem.  Say I have a generic product
 sales database:  products, customers, orders - orders bring together
 products and customers.  Now I want a table to track problems associated
 with any of these items; products, customers or orders, and I want to
 associated each problem with an item in one of the tables.

 What's the best way to do this?  My immediate reaction is that I want a
 partial foreign key, but perhaps this is not a good way to go about such
 a design.  I've also considered using inheritance. I could put all the
 data fields for problems into a base table, then use separate inherited
 tables for each of the tables I want to reference with foreign keys.  I
 avoided inherited tables in version 7.4 because they didn't seem
 feature-complete.  Finally, there's the option of doing what I do now,
 which is use a check constraint.

 Does anyone have ideas on the best way to acheive this behavior?  Ideas
 and advice would be much appreciated.

 Cheers,

 Eric


Maybe you can solve it by adding new columns and allowing them to contain
null values.

If the references are not mutually exclusive you can avoid a check; if they
are, add a check so that when one column has a non-null value the others
must be null...
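
A sketch of what that could look like for the products/customers/orders
example (table and column names are just placeholders):

CREATE TABLE problem (
    problem_id   serial PRIMARY KEY,
    description  text NOT NULL,
    product_id   integer REFERENCES products (id),
    customer_id  integer REFERENCES customers (id),
    order_id     integer REFERENCES orders (id),
    CHECK ( (CASE WHEN product_id  IS NOT NULL THEN 1 ELSE 0 END
           + CASE WHEN customer_id IS NOT NULL THEN 1 ELSE 0 END
           + CASE WHEN order_id    IS NOT NULL THEN 1 ELSE 0 END) = 1 )
);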


 The check constraint has the distinct
 downside of making backups and restoration more complex, as it is added
 during table creation, and not after data load.

After you run pg_dump, edit the file: delete the check from the CREATE
TABLE and put it in an ALTER TABLE ... ADD CONSTRAINT at the end of the
file...


--
Atentamente,
Jaime Casanova
(DBA: DataBase Aniquilator ;)

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Most significant digit number formatting

2005-11-17 Thread codeWarrior
Got it now: Delphi is interfering with the numeric formatting -- 
obviously... it is considered normal for numeric data types that trailing 
zeroes are removed... they are insignificant anyway.

To solve your issue: I guess the thing to do is to cast the result as text 
to preserve the formatting, but this will be a string instead of a number...

SELECT round(12.0109, 3)::text;

Greg...



Berend Tober [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]

 codeWarrior wrote:

If it is a numeric data column -- you probably want to use the round
function:

SELECT round(1200.01, 3);
SELECT round(12.009, 2);



 Interesting. I had tried that. After your message I tried again and
 encountered this interesting anomaly: while the ROUND function used in a
 query run in the SQL window of PgAdmin III does in fact force  output of
 trailing zero decimal digits to the extent specified, i.e.,

 SELECT
project_number,
labor_hours,
TO_CHAR(labor_hours, '999.999'),
ROUND(labor_hours,3)
 FROM time_data
 LIMIT 5

 05-08,1974.0000, ###.###,1974.000
 05-100,10810.5000, ###.###,10810.500
 05-125,285.0000, 285.000,285.000
 05-150,404.5000, 404.500,404.500
 05-200,44.0000,  44.000,44.000

 Running the same query though a TQuery dataset object in Borland Delphi
 using the BDE truncates the trailing zeros from ROUND:

 000-05-08    1974       ###.###   1974
 000-05-100   10810.5    ###.###   10810.5
 000-05-125   285        285.000   285
 000-05-150   404.5      404.500   404.5
 000-05-200   44          44.000   44

 That is why I didn't realize ROUND was an option, but for me it still is
 not since the report is produced by a Delphi application. I suppose I
 can accomplish this formatting programmatically within the Delphi
 application, but I was hoping to have the data base do it directly.

 Thanks,
 Berend Tober


Berend Tober [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]


Say I want to format calculated numeric output to uniformly have a
specific number of decimal places,

 ...










---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Partial foreign keys, check constraints and inheritance

2005-11-17 Thread Eric E

maybe you can solve it adding a new col and allow both to contain null values.

if these are not mutually exclusive you can avoid a check if they are
check that if one has a non-null value other has null...



I did think about that, but I disliked the idea of two fields of nulls for 
every one full field... maybe it's not as bad a way of doing it as I thought.

EE




Jaime Casanova wrote:


On 11/17/05, Eric E [EMAIL PROTECTED] wrote:
 


Hi all,
  In my database application, I've repeatedly encountered a particular
issue, and I'm not sure I'm addressing it well, so I'd like suggestions
on how to deal with it.  The problem is that I need something like a
partial foreign key - a foreign key where, based on field1, in some rows
field1 references table A, and in some rows field1 references tableB.

Here's the gist of the design problem.  Say I have a generic product
sales database:  products, customers, orders - orders bring together
products and customers.  Now I want a table to track problems associated
with any of these items; products, customers or orders, and I want to
associated each problem with an item in one of the tables.

What's the best way to do this?  My immediate reaction is that I want a
partial foreign key, but perhaps this is not a good way to go about such
a design.  I've also considered using inheritance. I could put all the
data fields for problems into a base table, then use separate inherited
tables for each of the tables I want to reference with foreign keys.  I
avoided inherited tables in version 7.4 because they didn't seem
feature-complete.  Finally, there's the option of doing what I do now,
which is use a check constraint.

Does anyone have ideas on the best way to acheive this behavior?  Ideas
and advice would be much appreciated.

Cheers,

Eric

   



maybe you can solve it adding a new col and allow both to contain null values.

if these are not mutually exclusive you can avoid a check if they are
check that if one has a non-null value other has null...


 


The check constraint has the distinct
downside of making backups and restoration more complex, as it is added
during table creation, and not after data load.
   



after you make pg_dump edit the file delete the check from the create
table and put it in an alter table add constraint at the end of the
file...


--
Atentamente,
Jaime Casanova
(DBA: DataBase Aniquilator ;)

 




---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Partial foreign keys, check constraints and inheritance

2005-11-17 Thread Eric E

Eric E wrote:

maybe you can solve it adding a new col and allow both to contain 
null values.


if these are not mutually exclusive you can avoid a check if they are
check that if one has a non-null value other has null...


I did think about that, but I disliked the idea of two fields of nulls 
for every one full field maybe it's not as bad a way of doing it 
as I thought.


BTW, in most cases I have 5+ tables to do this, so that's 4+ fields of 
null in each row...


EE




Jaime Casanova wrote:


On 11/17/05, Eric E [EMAIL PROTECTED] wrote:
 


Hi all,
  In my database application, I've repeatedly encountered a particular
issue, and I'm not sure I'm addressing it well, so I'd like suggestions
on how to deal with it.  The problem is that I need something like a
partial foreign key - a foreign key where, based on field1, in some 
rows

field1 references table A, and in some rows field1 references tableB.

Here's the gist of the design problem.  Say I have a generic product
sales database:  products, customers, orders - orders bring together
products and customers.  Now I want a table to track problems 
associated

with any of these items; products, customers or orders, and I want to
associated each problem with an item in one of the tables.

What's the best way to do this?  My immediate reaction is that I want a
partial foreign key, but perhaps this is not a good way to go about 
such

a design.  I've also considered using inheritance. I could put all the
data fields for problems into a base table, then use separate inherited
tables for each of the tables I want to reference with foreign keys.  I
avoided inherited tables in version 7.4 because they didn't seem
feature-complete.  Finally, there's the option of doing what I do now,
which is use a check constraint.

Does anyone have ideas on the best way to acheive this behavior?  Ideas
and advice would be much appreciated.

Cheers,

Eric

  



maybe you can solve it adding a new col and allow both to contain 
null values.


if these are not mutually exclusive you can avoid a check if they are
check that if one has a non-null value other has null...


 


The check constraint has the distinct
downside of making backups and restoration more complex, as it is added
during table creation, and not after data load.
  



after you make pg_dump edit the file delete the check from the create
table and put it in an alter table add constraint at the end of the
file...


--
Atentamente,
Jaime Casanova
(DBA: DataBase Aniquilator ;)

 







---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Moving from MySQL to PostgreSQL with Ruby on Rails.

2005-11-17 Thread David Fetter
On Thu, Nov 17, 2005 at 08:48:45AM -0800, Peter Michaux wrote:
 Hi,
 
 I'm just new to the PostgreSQL world. I've been using MySQL but I want to
 develop a Ruby on Rails application that can be installed on either MySQL or
 PostgreSQL. I don't know how much the DDL dialects vary between them. At the
 moment I am interested in the options on a table like UTF-8. In MySQL I
 write
 
 CREATE TABLE product (
 id INT NOT NULL AUTO_INCREMENT,
 name VARCHAR(255) NOT NULL DEFAULT '',
 PRIMARY KEY (id)
 ) DEFAULT CHARSET=utf8;

CREATE TABLE product (
id SERIAL PRIMARY KEY,
name TEXT NOT NULL
/* name isn't your greatest idea because it's a keyword.
 * http://www.postgresql.org/docs/current/static/sql-keywords-appendix.html
 * A more descriptive word or pair of words would be good here.
 * There is no DEFAULT clause because the database should throw an
 * error if somebody tries to INSERT a NULL here, not march onward.
 */
);

/* You might also want this: */
CREATE UNIQUE INDEX idx_uniq_product_name
ON product(LOWER(TRIM(name)));

 Will this definition work in the PostgreSQL world? Is there a web
 page for people with MySQL exerience moving to PostgreSQL?

Here are a couple.  The first is a general how to convert from other
things, while the second is MySQL specific. :)

http://techdocs.postgresql.org/#convertfrom
http://www.in-nomine.org/~asmodai/mysql-to-pgsql.html

 Part of the issue is the way the Ruby on Rails migration class enables
 me to add options to Rails' own abstracted DDL just like I have done
 in the above example.

I can't say I think it's a good idea to have abstracted or
portable DDL.  It's always expensive and difficult to maintain
because you're either writing the DDL, etc. several times, or you're
pushing functionality up into middleware where it may not belong.

Pick one database back-end and stick with it.  It's ever so much
easier to deal with.

OK, that's my $.02 :)

Cheers,
D
-- 
David Fetter [EMAIL PROTECTED] http://fetter.org/
phone: +1 510 893 6100   mobile: +1 415 235 3778

Remember to vote!

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] unsubscribe

2005-11-17 Thread Josel Malixi
unsubscribe




__ 
Yahoo! Mail - PC Magazine Editors' Choice 2005 
http://mail.yahoo.com

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Partial foreign keys, check constraints and inheritance

2005-11-17 Thread Scott Marlowe
On Thu, 2005-11-17 at 13:36, Eric E wrote:
 Eric E wrote:
 
  maybe you can solve it adding a new col and allow both to contain 
  null values.
 
  if these are not mutually exclusive you can avoid a check if they are
  check that if one has a non-null value other has null...
 
  I did think about that, but I disliked the idea of two fields of nulls 
  for every one full field maybe it's not as bad a way of doing it 
  as I thought.
 
 BTW, in most cases I have 5+ tables to do this, so that's 4+ fields of 
 null in each row...

Could you use some kind of intermediate join table, so that it pointed
to orders and then products / customers / other mid-level tables pointed
to it, and so did the problems table?
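
Something like this very rough sketch, maybe (table and column names are just
made up to show the shape):

CREATE TABLE item (
    item_id   SERIAL PRIMARY KEY,
    item_type TEXT NOT NULL   -- 'product', 'customer', 'order', ...
);

-- each of the existing tables carries a pointer to it
CREATE TABLE product (
    id      SERIAL PRIMARY KEY,
    item_id INT NOT NULL REFERENCES item(item_id),
    name    TEXT NOT NULL
);

-- and the problems table references item instead of any one of them
CREATE TABLE problem (
    id          SERIAL PRIMARY KEY,
    item_id     INT NOT NULL REFERENCES item(item_id),
    description TEXT
);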

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Moving from MySQL to PostgreSQL with Ruby on Rails.

2005-11-17 Thread Bruno Wolff III
On Thu, Nov 17, 2005 at 09:23:51 -0800,
  Robby Russell [EMAIL PROTECTED] wrote:
 
 CREATE TABLE product (
   id SERIAL PRIMARY KEY,
  name VARCHAR(255) NOT NULL DEFAULT ''
 );

And depending on why you chose VARCHAR(255), you may really want to use TEXT
instead.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] [Fwd: Sun backs open-source database PostgreSQL | Topic: everything | ZDNet News Alerts]

2005-11-17 Thread Reid Thompson



 Original Message 
Subject:Sun backs open-source database PostgreSQL | Topic:
everything | ZDNet News Alerts
Date:   Thu, 17 Nov 2005 13:10:34 -0800 (PST)
From:   ZDNet News Alerts[EMAIL PROTECTED]
Reply-To:   [EMAIL PROTECTED]
To: [EMAIL PROTECTED]



  NEWS ALERT FROM ZDNET
http://news.zdnet.com/html/z/alerts.html?tag=zdnn.alert =


*Sun backs open-source database PostgreSQL*
http://dw.com.com/redir?lop=redirTestdestcat=AlertdestUrl=http%3A%2F%2Fnews%2Ezdnet%2Ecom%2F2100-3513_22-5958850.html?tag=zdnn.alert
Sun plans to distribute database and optimize it for Solaris. It also
plans to include Xen virtualization and Linux compatibilty next year.
/Thursday November 17, 2005 01:07PM PST/



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


[GENERAL] Strange slow database

2005-11-17 Thread David Mitchell

Hi all,

Recently our databases started to experience a significant slowdown. 
Queries that were taking 500ms now take up to 20 seconds. An insert can 
take 150ms or more. This is strange since we are still hitting indexes 
and we vacuum regularly. Here is the description of our system:


A single server has a medium size database with around 30-40 tables. 
Some tables have ~1000 rows. These tables tend to have lots of inserts 
and deletes, so we vacuum them regularly (every ten minutes). One table 
has 15 million rows, but it only ever has inserts, never updated or 
deleted, so this table never gets vacuumed. Vacuuming is very low impact:


vacuum_cost_delay = 50
vacuum_cost_limit = 100

We have fsync = true and max_connections = 150. There are 12 servers 
each with the same schema and config, but different data (although the 
number of rows in the tables are very similar). We assign users to a 
server when we set them up so all the data for a ser is on one server. 
This whole setup typically works well and is speedy, we have checked all 
the indexes and they are hit when appropriate.


The table with 15 million rows is the table on which inserts can take 150ms or 
more (I've seen 800ms for a single insert before).


Strangely, this slowdown is on all 12 servers. Perhaps this is a 
configuration issue? If anyone has any ideas we'd love to hear them. 
Since we vacuum regularly and all the indexes get hit I don't know where 
to go next.


Machines are Pentium 4 3.2Ghz running Linux kernel 2.6.10. Postgres 
version 8.0.1. Machines have 2Gb ram and two 10k RPM disks in a RAID-0 
configuration.


Regards
--
David Mitchell
Software Engineer
Telogis

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Rebranding PostgreSQL

2005-11-17 Thread Andrew Sullivan
On Wed, Nov 16, 2005 at 03:26:19PM -0800, Joshua D. Drake wrote:
 That's easy. The same reason people used to buy Mammoth PostgreSQL (not 
 the replicator version).

Well, yeah-no.  Mammoth was coming from someone who was explicitly in
the business of selling support for it, and was selling to people who
already had picked PostgreSQL.  But the OP was suggesting this was a
way around the We don't use nuttin' but O-ra-cle 'round here crowd;
and I don't see how Magic Blackbox Database is somehow better than
Postgres to those people.

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
The fact that technology doesn't work is no bar to success in the marketplace.
--Philip Greenspun

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Moving from MySQL to PostgreSQL with Ruby on Rails.

2005-11-17 Thread Robby Russell
On Thu, 2005-11-17 at 15:10 -0600, Bruno Wolff III wrote:
 On Thu, Nov 17, 2005 at 09:23:51 -0800,
   Robby Russell [EMAIL PROTECTED] wrote:
  
  CREATE TABLE product (
id SERIAL PRIMARY KEY,
  name VARCHAR(255) NOT NULL DEFAULT ''
  );
 
 And depending on why you chose VARCHAR(255), you may really want to use TEXT
 instead.

Mainly because Rails will generate forms differently based on the data
types: a text field for VARCHAR... a textarea for TEXT.

Sort of a meta-approach for the Rails-based scaffolding generator. Not
required, but it'll speed up the process and limit the number of chars
that you can stick into a text field as opposed to a text area.

Robby

-- 
/**
* Robby Russell, Founder.Developer.Geek
* PLANET ARGON, Rails Development, Consulting & Hosting
* Portland, Oregon  | p: 503.351.4730 | f: 815.642.4068
* www.planetargon.com | www.robbyonrails.com
* Programming Rails   | www.programmingrails.com
***/


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Partial foreign keys, check constraints and inheritance

2005-11-17 Thread Eric E

Scott Marlowe wrote:


On Thu, 2005-11-17 at 13:36, Eric E wrote:
 


Eric E wrote:

   

maybe you can solve it adding a new col and allow both to contain 
null values.


if these are not mutually exclusive you can avoid a check if they are
check that if one has a non-null value other has null...
   

I did think about that, but I disliked the idea of two fields of nulls 
for every one full field maybe it's not as bad a way of doing it 
as I thought.
 

BTW, in most cases I have 5+ tables to do this, so that's 4+ fields of 
   


null in each row...

Could you use some kind of intermediate join table, so that it pointed
to orders and then products / customers / other mid-level tables pointed
to it, and so did the problems table?
to it, and so did the problems table?

Clever - that intermediate table sounds like sort of a GUID for every 
element in the database, along with what table it belongs to, and the 
problems table points at that GUID.  Sounds pretty promising.  Thanks 
for the idea.


EE

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] [Fwd: Sun backs open-source database PostgreSQL | Topic: everything | ZDNet News Alerts]

2005-11-17 Thread Aaron Glenn

 *Sun backs open-source database PostgreSQL*


This is going to make PostgreSQL a much easier sell to PHB's (at
least, in my experience)


aaron.glenn

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Strange slow database

2005-11-17 Thread Andrew Sullivan
On Fri, Nov 18, 2005 at 10:05:47AM +1300, David Mitchell wrote:
 Strangely, this slowdown is on all 12 servers. Perhaps this is a 
 configuration issue? If anyone has any ideas we'd love to hear them. 

It sounds like a use-pattern issue.  Did something in your use
change?  Any time you get repeatable change like that, look for the
common changed factor.

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
When my information changes, I alter my conclusions.  What do you do sir?
--attr. John Maynard Keynes

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Strange slow database

2005-11-17 Thread Tom Lane
David Mitchell [EMAIL PROTECTED] writes:
 Recently our databases started to experience a significant slowdown. 
 Queries that were taking 500ms now take up to 20 seconds. An insert can 
 take 150ms or more. This is strange since we are still hitting indexes 
 and we vacuum regularly. Here is the description of our system:

Have you checked to see whether you are encountering table or index
bloat?  Your vacuuming policy doesn't sound unreasonable on its face,
but there's no substitute for actually looking at physical file sizes
when you aren't sure what's wrong.  Try doing a pg_dump and reload into
a virgin database, then compare file sizes with the original
table-by-table to see if anything is drastically out of line.
Don't forget to look at the system catalogs too.
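
A quick way to get a rough idea from inside the database is something along
these lines (relpages is only updated by VACUUM/ANALYZE, so treat the numbers
as approximate):

SELECT relname, relkind, relpages
FROM pg_class
ORDER BY relpages DESC
LIMIT 20;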

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] unsubscribe pgsql-general

2005-11-17 Thread felix
On Thu, Nov 17, 2005 at 09:53:26AM -0800, Peter Atkins wrote:
 unsubscribe pgsql-general

O dear ... I haven't posted this in a while :-)

Here's how to unsubscribe:

First, ask your Internet Provider to mail you an Unsubscribing Kit.
Then follow these directions.

The kit will most likely be the standard no-fault type. Depending on
requirements, System A and/or System B can be used. When operating
System A, depress lever and a plastic dalkron unsubscriber will be
dispensed through the slot immediately underneath. When you have
fastened the adhesive lip, attach connection marked by the large X
outlet hose. Twist the silver-coloured ring one inch below the
connection point until you feel it lock.

The kit is now ready for use. The Cin-Eliminator is activated by the
small switch on the lip.  When securing, twist the ring back to its
initial condition, so that the two orange lines meet.  Disconnect.
Place the dalkron unsubscriber in the vacuum receptacle to the rear.
Activate by pressing the blue button.

The controls for System B are located on the opposite side. The red
release switch places the Cin-Eliminator into position; it can be
adjusted manually up or down by pressing the blue manual release
button. The opening is self-adjusting. To secure after use, press the
green button, which simultaneously activates the evaporator and
returns the Cin-Eliminator to its storage position.

You may log off if the green exit light is on over the evaporator.  If
the red light is illuminated, one of the Cin-Eliminator requirements
has not been properly implemented. Press the List Guy call button on
the right of the evaporator. He will secure all facilities from his
control panel.

To use the Auto-Unsub, first undress and place all your clothes in the
clothes rack. Put on the velcro slippers located in the cabinet
immediately below. Enter the shower, taking the entire kit with
you. On the control panel to your upper right upon entering you will
see a Shower seal button. Press to activate. A green light will then
be illuminated immediately below. On the intensity knob, select the
desired setting. Now depress the Auto-Unsub activation lever. Bathe
normally.

The Auto-Unsub will automatically go off after three minutes unless
you activate the Manual off override switch by flipping it up. When
you are ready to leave, press the blue Shower seal release
button. The door will open and you may leave. Please remove the velcro
slippers and place them in their container.

If you prefer the ultrasonic log-off mode, press the indicated blue
button. When the twin panels open, pull forward by rings A & B. The
knob to the left, just below the blue light, has three settings, low,
medium or high. For normal use, the medium setting is suggested.

After these settings have been made, you can activate the device by
switching to the ON position the clearly marked red switch. If
during the unsubscribing operation you wish to change the settings,
place the manual off override switch in the OFF position. You may
now make the change and repeat the cycle. When the green exit light
goes on, you may log off and have lunch. Please close the door behind
you.


-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman & rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] [Fwd: Sun backs open-source database PostgreSQL | Topic: everything | ZDNet News Alerts]

2005-11-17 Thread Bruce Momjian
Aaron Glenn wrote:
 
  *Sun backs open-source database PostgreSQL*
 
 
 This is going to make PostgreSQL a much easier sell to PHB's (at
 least, in my experience)

Agreed.  We now have Sun's blessing, and access to Sun resources.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Mambo/Joomla (CMS) on PostgreSQL ?

2005-11-17 Thread Leif Jensen

   Hi All,

  I have been using PostgreSQL ever since version 6.3 and am very happy
with it. Many of our company's corporate registration systems are based on
PostgreSQL and are web based using PHP.

  Unfortunately someone has decided that our (external) homepage(s) are
gonna use the Mambo CMS system, which originated on MySql. We are now
running it successfully for our new homepages (all on the outside of our
firewall, for now).

  This makes things a bit awkward, especially if/when we want to make some
of our internal stuff accessible to the outside world. We do NOT want to
maintain/administer 2 DBMS'es for many reasons, e.g. security. I am using
some specific stored procedures in my internal systems, but those part
will probably never have to be exported.

  I wonder if someone in the PostgreSQL community already successfully
have been using Mambo or Joomla based on PostgreSQL. The concern here is
for keeping things standard (no or at least very few local patches to make
things work) as well as the concern for 3rd party plugins/modules to work.
We are at the moment using mambo version 4.5.x and PostgreSQL 7.4.5, but
an upgrade to 8.0.x would be ok I think. I have seen that Mambo version
5.x is supposed to have support for other databases, but I haven't been
able to see any reports on that yet. But the fork of Mambo/Joomla makes
this more difficult to get a decision on moving to Mambo 5, I'm afraid.
There are also a time issue for me, since we have to put up some other
services apart from mambo before the end of the year.

  I am looking for arguments in terms of why PostgreSQL is a better
choice for security reasons, standards (SQLXX standard) reasons, DBMS
handling, and ease of use with Mambo etc. Why do we have to get Mambo
running on PostgreSQL instead of MySql? I would just hate to have to
abandon PostgreSQL in favour of MySql.

  Does anyone have some good links to comparisons of MySql and PostgreSQL?
I'm not too much concerned about performance (both are pretty good for
our purposes); commitment to standards is much more of a concern.

  Any comments are very welcome. It could be some rather small thing that
will tip the cup.

  Greetings,

 Leif

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] unsubscribe pgsql-general

2005-11-17 Thread Terry Lee Tucker

:oD

On Thursday 17 November 2005 02:45 pm, [EMAIL PROTECTED] saith:
 On Thu, Nov 17, 2005 at 09:53:26AM -0800, Peter Atkins wrote:
  unsubscribe pgsql-general

 O dear ... I haven't posted this in a while :-)

 Here's how to unsubscribe:

 First, ask your Internet Provider to mail you an Unsubscribing Kit.
 Then follow these directions.

 The kit will most likely be the standard no-fault type. Depending on
 requirements, System A and/or System B can be used. When operating
 System A, depress lever and a plastic dalkron unsubscriber will be
 dispensed through the slot immediately underneath. When you have
 fastened the adhesive lip, attach connection marked by the large X
 outlet hose. Twist the silver-coloured ring one inch below the
 connection point until you feel it lock.

 The kit is now ready for use. The Cin-Eliminator is activated by the
 small switch on the lip.  When securing, twist the ring back to its
 initial condition, so that the two orange lines meet.  Disconnect.
 Place the dalkron unsubscriber in the vacuum receptacle to the rear.
 Activate by pressing the blue button.

 The controls for System B are located on the opposite side. The red
 release switch places the Cin-Eliminator into position; it can be
 adjusted manually up or down by pressing the blue manual release
 button. The opening is self-adjusting. To secure after use, press the
 green button, which simultaneously activates the evaporator and
 returns the Cin-Eliminator to its storage position.

 You may log off if the green exit light is on over the evaporator.  If
 the red light is illuminated, one of the Cin-Eliminator requirements
 has not been properly implemented. Press the List Guy call button on
 the right of the evaporator. He will secure all facilities from his
 control panel.

 To use the Auto-Unsub, first undress and place all your clothes in the
 clothes rack. Put on the velcro slippers located in the cabinet
 immediately below. Enter the shower, taking the entire kit with
 you. On the control panel to your upper right upon entering you will
 see a Shower seal button. Press to activate. A green light will then
 be illuminated immediately below. On the intensity knob, select the
 desired setting. Now depress the Auto-Unsub activation lever. Bathe
 normally.

 The Auto-Unsub will automatically go off after three minutes unless
 you activate the Manual off override switch by flipping it up. When
 you are ready to leave, press the blue Shower seal release
 button. The door will open and you may leave. Please remove the velcro
 slippers and place them in their container.

 If you prefer the ultrasonic log-off mode, press the indicated blue
 button. When the twin panels open, pull forward by rings A  B. The
 knob to the left, just below the blue light, has three settings, low,
 medium or high. For normal use, the medium setting is suggested.

 After these settings have been made, you can activate the device by
 switching to the ON position the clearly marked red switch. If
 during the unsubscribing operation you wish to change the settings,
 place the manual off override switch in the OFF position. You may
 now make the change and repeat the cycle. When the green exit light
 goes on, you may log off and have lunch. Please close the door behind
 you.


 --
 ... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
  Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
   GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license
 #4933 I've found a solution to Fermat's Last Theorem but I see I've run out
 of room o


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Strange slow database

2005-11-17 Thread David Mitchell


Tom Lane wrote:

Have you checked to see whether you are encountering table or index
bloat?  Your vacuuming policy doesn't sound unreasonable on its face,
but there's no substitute for actually looking at physical file sizes
when you aren't sure what's wrong.  Try doing a pg_dump and reload into
a virgin database, then compare file sizes with the original
table-by-table to see if anything is drastically out of line.
Don't forget to look at the system catalogs too.


I dumped and loaded then wrote a script to compare the sizes of tables. 
I noticed that a few of the indexes, especially the indexes on the 
15 million row table, shrank a lot (~50% reduction - ~400Mb). Also, the 
pg_attribute_relid_attnam_index index shrank loads, to 1% of its 
original size. From this, we tried reindexing indexes and found we got 
the same reduction in index size. Strangely though, our queries appear 
to have sped up not just on the servers we reindexed, but across the 
whole cluster - so apparently something else is affecting the speed of 
this all, not just bloat. We're scratching our heads, this is nothing if 
not a little frustrating.


--
David Mitchell
Software Engineer
Telogis

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Strange slow database

2005-11-17 Thread Tom Lane
David Mitchell [EMAIL PROTECTED] writes:
 I dumped and loaded then wrote a script to compare the sizes of tables. 
 I noticed that a few of the indexes, especially the indexes on the 
 15million table shrunk a lot (~50% reduction - ~400Mb). Also, the 
 pg_attribute_relid_attnam_index index shrank loads, to 1% of its 
 original size.

Hm, better check whether your vacuuming policy is taking care of the
system catalogs ...

 From this, we tried reindexing indexes and found we got 
 the same reduction in index size. Strangely though, our queries appear 
 to have sped up not just on the servers we reindexed, but across the 
 whole cluster - so apparently something else is affecting the speed of 
 this all, not just bloat.

Odd indeed.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Strange slow database

2005-11-17 Thread David Mitchell

Tom Lane wrote:
Also, the 
pg_attribute_relid_attnam_index index shrank loads, to 1% of its 
original size.


Hm, better check whether your vacuuming policy is taking care of the
system catalogs ...


Yes, fair point. So I did:

18/11/2005 03:02:29 INFO:  vacuuming pg_catalog.pg_attribute
18/11/2005 03:02:29 INFO:  index pg_attribute_relid_attnam_index now 
contains 2861 row versions in 11900 pages
18/11/2005 03:02:29 DETAIL:  11834 index pages have been deleted, 11834 
are currently reusable.

18/11/2005 03:02:29 CPU 0.11s/0.03u sec elapsed 0.45 sec.
18/11/2005 03:02:30 INFO:  index pg_attribute_relid_attnum_index now 
contains 2861 row versions in 2942 pages
18/11/2005 03:02:30 DETAIL:  2917 index pages have been deleted, 2917 
are currently reusable.

18/11/2005 03:02:30 CPU 0.01s/0.00u sec elapsed 0.22 sec.
18/11/2005 03:02:30 INFO:  pg_attribute: found 0 removable, 2861 
nonremovable row versions in 48 pages

18/11/2005 03:02:30 DETAIL:  0 dead row versions cannot be removed yet.
18/11/2005 03:02:30 There were 19 unused item pointers.
18/11/2005 03:02:30 0 pages are entirely empty.
18/11/2005 03:02:30 CPU 0.12s/0.03u sec elapsed 0.72 sec.
18/11/2005 03:02:30 VACUUM

This is from our vacuum process log. The first index there is the one in 
question. Hmm, any clues? Of course everything is running at full speed 
currently.

--
David Mitchell
Software Engineer
Telogis

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Strange slow database

2005-11-17 Thread Tom Lane
David Mitchell [EMAIL PROTECTED] writes:
 18/11/2005 03:02:29 INFO:  index pg_attribute_relid_attnam_index now 
 contains 2861 row versions in 11900 pages
 18/11/2005 03:02:29 DETAIL:  11834 index pages have been deleted, 11834 
 are currently reusable.

 18/11/2005 03:02:30 INFO:  index pg_attribute_relid_attnum_index now 
 contains 2861 row versions in 2942 pages
 18/11/2005 03:02:30 DETAIL:  2917 index pages have been deleted, 2917 
 are currently reusable.

 18/11/2005 03:02:30 INFO:  pg_attribute: found 0 removable, 2861 
 nonremovable row versions in 48 pages

Those indexes are pretty bloated :-( but pg_attribute itself seems to be
nice and tight.  I speculate that you did a VACUUM FULL on it recently.

The condition of the indexes suggests strongly that you've not been
vacuuming pg_attribute often enough (and perhaps not any of the other
system catalogs, either?).  Heavy use of temp tables will cause
pg_attribute and pg_class to bloat if you don't keep after them.

At the moment, a REINDEX seems to be indicated.
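
i.e., roughly (run as a superuser; it takes an exclusive lock on the catalog,
so pick a quiet moment):

REINDEX TABLE pg_attribute;
REINDEX TABLE pg_class;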

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Partial foreign keys, check constraints and inheritance

2005-11-17 Thread Greg Stark

 On 11/17/05, Eric E [EMAIL PROTECTED] wrote:
 
  What's the best way to do this?  My immediate reaction is that I want a
  partial foreign key, but perhaps this is not a good way to go about such
  a design.  

Normally I just have multiple columns with all but one NULL.

Alternatively you can make it a many-to-many relationship. So you have a
problem table and then you have a problem_product, problem_customer, and a
problem_order table.
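
A rough sketch of the shape (names invented to match the example; the other
two link tables would look just the same):

CREATE TABLE problem (
    id          SERIAL PRIMARY KEY,
    description TEXT NOT NULL
);

CREATE TABLE problem_product (
    problem_id INT NOT NULL REFERENCES problem(id),
    product_id INT NOT NULL REFERENCES product(id),
    PRIMARY KEY (problem_id, product_id)
);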

-- 
greg


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Rebranding PostgreSQL

2005-11-17 Thread Christopher Browne
 On Wed, Nov 16, 2005 at 02:19:28PM -0500, Vivek Khera wrote:
 On Nov 16, 2005, at 1:09 PM, [EMAIL PROTECTED]  
 [EMAIL PROTECTED] wrote:
 
 There are a few obstinate anti-open source customers though, that
 prevent my plan from moving forward. They've bought into whatever
 hype they've read and just simply say no. Now, that said, they're
 fairly non-technical and probably had never heard of PostgreSQL
 before we presented our plan.
 
 how would postgres be exposed to them anyhow?  wouldn't it just sit  
 behind the scenes of your front-end?

 Backups. You really need to explain pg_dump to the end user.

Unless you include a tool that internally performs a pg_dump, in
binary form, so they don't need to know what it's doing, and so
that it doesn't show up as a raw text dump that would clue them in
;-).
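
e.g. the wrapper could just shell out to something like this (database and
file names are only placeholders):

pg_dump -Fc -f nightly.dump mydatabase     # custom (binary) format dump
pg_restore -d mydatabase nightly.dump      # and the matching restore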

Think about any of the apps that used embedded DBs like Faircom,
Raima, and such; you'd need to run some extra module to do a backup.

If the intent is to pretend PostgreSQL is being embedded, it's natural
for the results to have a shape like that...
-- 
output = reverse(moc.liamg @ enworbbc)
http://linuxdatabases.info/info/spreadsheets.html
I've seen  a look in dogs'  eyes, a quickly vanishing  look of amazed
contempt,  and I  am convinced  that basically  dogs think  humans are
nuts.  -- John Steinbeck

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Rebranding PostgreSQL

2005-11-17 Thread Christopher Browne
 Maybe he is going to call it Orakle? :)

 I was thinking he could call it my-sql...

Call it Your SQL :-)
-- 
cbbrowne,@,gmail.com
http://cbbrowne.com/info/slony.html
...Roxanne falls in love with Christian, a chevalier in Cyrano's
regiment who hasn't got the brains God gave an eclair...
-- reviewer on NPR

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Strange slow database

2005-11-17 Thread David Mitchell

Tom Lane wrote:

I speculate that you did a VACUUM FULL on it recently.


You speculate right; we do a vacuum full every Sunday night as a safety 
net. So the vacuum full was a week old.


The condition of the indexes suggests strongly that you've not been
vacuuming pg_attribute often enough (and perhaps not any of the other
system catalogs, either?).  Heavy use of temp tables will cause
pg_attribute and pg_class to bloat if you don't keep after them.


That's interesting. We stopped using temporary tables because they were 
causing us lots of trouble, including table bloat. We vacuum 
pg_attribute (and every other table with an entry in pg_tables) every 
ten minutes. What other than temp tables could bloat pg_attribute? We 
use refcursors to return data from our stored procedures. We also have a 
few stored procedures that return SETOF.


Thanks for helping us out on this.
--
David Mitchell
Software Engineer
Telogis

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Trouble with recursive trigger

2005-11-17 Thread Tom Lane
Justin Hawkins [EMAIL PROTECTED] writes:
 I'm having trouble with the DELETE. When deleting a row three things
 need to happen:

 o recursively DELETE all children posts to preserve data integrity
 o decrement the number of replies of the parent post (if it exists)
 o delete itself

This has a couple of problems:

1. You can't delete a post's children before deleting the post itself,
because of the REFERENCES constraint.  I'm not entirely sure why your
original formulation of the trigger didn't hit that failure, but I sure
hit it while experimenting with alternatives.

2. The reason the UPDATE causes a problem is that it creates row
versions that are newer than the versions the outer DELETE can see.
(Any database changes caused by a function invoked by a query are by
definition later than that query.)  This means that if the outer
DELETE hasn't yet zapped a row that the UPDATE touches, it will fail to
delete that row when it does come to it.

The easiest way to fix #2 is to do the UPDATEs in an AFTER trigger
instead of a BEFORE trigger, and the easiest way to fix #1 is to let the
system do it for you, by using ON DELETE CASCADE instead of a
handwritten trigger.  I got reasonable behavior with this:

-

CREATE TABLE post (
  id SERIAL NOT NULL PRIMARY KEY,
  parent INT REFERENCES post(id) ON DELETE CASCADE,
  repliesINT NOT NULL DEFAULT 0
);

CREATE OR REPLACE FUNCTION post_update_replies() RETURNS TRIGGER AS $function$
  DECLARE iv integer;
  BEGIN
IF (TG_OP = 'DELETE') THEN
  -- now update the parents replies, if they have any
  IF (OLD.parent IS NOT NULL) THEN
RAISE NOTICE 'decrementing replies of parent % because of delete of %', 
OLD.parent, OLD.id;
UPDATE post SET replies = replies - 1 WHERE id = OLD.parent;
GET DIAGNOSTICS iv = ROW_COUNT;
RAISE NOTICE 'decremented % parent rows of %', iv, OLD.id;
  END IF;
  RETURN OLD;
END IF;
  END;
$function$ LANGUAGE plpgsql;

CREATE TRIGGER post_update_replies AFTER DELETE ON post
FOR EACH ROW EXECUTE PROCEDURE post_update_replies();

COPY post FROM stdin WITH CSV;
3000,,0
3001,3000,0
3002,3001,0
3003,3002,0
3004,3003,0
3005,3004,0
3006,3005,0
\.

-

to wit:

regression=# DELETE FROM post WHERE id = 3002;
NOTICE:  decrementing replies of parent 3005 because of delete of 3006
CONTEXT:  SQL statement DELETE FROM ONLY public.post WHERE parent = $1
SQL statement DELETE FROM ONLY public.post WHERE parent = $1
SQL statement DELETE FROM ONLY public.post WHERE parent = $1
SQL statement DELETE FROM ONLY public.post WHERE parent = $1
NOTICE:  decremented 0 parent rows of 3006
CONTEXT:  SQL statement DELETE FROM ONLY public.post WHERE parent = $1
SQL statement DELETE FROM ONLY public.post WHERE parent = $1
SQL statement DELETE FROM ONLY public.post WHERE parent = $1
SQL statement DELETE FROM ONLY public.post WHERE parent = $1
NOTICE:  decrementing replies of parent 3004 because of delete of 3005
CONTEXT:  SQL statement DELETE FROM ONLY public.post WHERE parent = $1
SQL statement DELETE FROM ONLY public.post WHERE parent = $1
SQL statement DELETE FROM ONLY public.post WHERE parent = $1
NOTICE:  decremented 0 parent rows of 3005
CONTEXT:  SQL statement DELETE FROM ONLY public.post WHERE parent = $1
SQL statement DELETE FROM ONLY public.post WHERE parent = $1
SQL statement DELETE FROM ONLY public.post WHERE parent = $1
NOTICE:  decrementing replies of parent 3003 because of delete of 3004
CONTEXT:  SQL statement DELETE FROM ONLY public.post WHERE parent = $1
SQL statement DELETE FROM ONLY public.post WHERE parent = $1
NOTICE:  decremented 0 parent rows of 3004
CONTEXT:  SQL statement DELETE FROM ONLY public.post WHERE parent = $1
SQL statement DELETE FROM ONLY public.post WHERE parent = $1
NOTICE:  decrementing replies of parent 3002 because of delete of 3003
CONTEXT:  SQL statement DELETE FROM ONLY public.post WHERE parent = $1
NOTICE:  decremented 0 parent rows of 3003
CONTEXT:  SQL statement DELETE FROM ONLY public.post WHERE parent = $1
NOTICE:  decrementing replies of parent 3001 because of delete of 3002
NOTICE:  decremented 1 parent rows of 3002
DELETE 1
regression=# select * from post;
  id  | parent | replies 
--++-
 3000 ||   0
 3001 |   3000 |  -1
(2 rows)

regression=# 

Notice that most of the UPDATEs report not doing anything, because the
parent row they would need to hit is already gone by the time the AFTER
trigger runs.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings