[GENERAL] rule to update view that inserts into log

2008-04-12 Thread Chad Showalter
I would like to create a rule that, by updating a view, allows me to update
one table and insert into another.

 

The following example illustrates what I'm trying to do:

 

--Create Tables
CREATE TABLE my_table
(
  my_table_id serial,
  a character varying(255),
  b character varying(255),
  CONSTRAINT my_table_id_pk PRIMARY KEY (my_table_id)
);

CREATE TABLE my_audit_table
(
  audit_id serial,
  my_table_id int,
  c character varying(255),
  CONSTRAINT audit_id_pk PRIMARY KEY (audit_id)
);

 

--Create View
CREATE OR REPLACE VIEW my_view AS
SELECT
  t.my_table_id,
  t.a,
  t.b,
  au.audit_id,
  au.c
FROM
  my_table t, my_audit_table au
WHERE
  t.my_table_id = au.my_table_id;

 

--Create Rules
CREATE OR REPLACE RULE insert_to_my_view AS
ON INSERT TO my_view
DO INSTEAD (
  INSERT INTO my_table (a, b)
  VALUES (new.a, new.b);
  INSERT INTO my_audit_table (my_table_id, c)
  VALUES (currval('my_table_my_table_id_seq'), new.c);
);

CREATE OR REPLACE RULE update_my_view AS
ON UPDATE TO my_view DO INSTEAD (
  UPDATE my_table SET
    a = new.a,
    b = new.b
  WHERE
    my_table_id = old.my_table_id;
  INSERT INTO my_audit_table (my_table_id, c)
  VALUES (new.my_table_id, new.c);
);

 

--The insert statement below inserts one row into my_table and one row
--into my_audit_table (this works the way I would like):
insert into my_view (a, b, c) values ('a contents', 'b contents', 'c contents');

 

--The update statement below doesn't work the way I want.
--What I would like it to do is update one row in my_table and insert
--one row into my_audit_table.  It does the update fine, but the insert
--into my_audit_table doesn't work as I had anticipated.
update my_view set a = 'new a contents', b = 'new b contents', c = 'new c contents' where my_table_id = 1;

 

--If I execute the above update statement multiple times, multiple rows
--will be inserted with each call after the first.
--
--Specifically,
--after the first call, 1 row is inserted
--after the second call, 2 rows are inserted
--after the third call, 4 rows are inserted
--after the fourth call, 8 rows are inserted... and so on

--

--The problem is due to the INSERT in the update_my_view rule:
--
--INSERT INTO my_audit_table (my_table_id, c)
--VALUES (new.my_table_id, new.c);
--
--Apparently, new.my_table_id in this case references more than one row
--if more than one row with the given id already exists in my_audit_table.

--

--How do I accomplish what I want to accomplish here?  I'd prefer not to
--use a stored procedure.
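
The doubling is consistent with the rule system being macro-based: the rule's
INSERT inherits the view's join, so it fires once per my_audit_table row that
already matches the given id. One possible workaround (a sketch, untested,
reusing the names above) is to collapse those duplicates inside the rule
itself:

CREATE OR REPLACE RULE update_my_view AS
ON UPDATE TO my_view DO INSTEAD (
  UPDATE my_table SET
    a = new.a,
    b = new.b
  WHERE
    my_table_id = old.my_table_id;
  -- SELECT DISTINCT collapses the one-row-per-audit-row expansion
  INSERT INTO my_audit_table (my_table_id, c)
  SELECT DISTINCT new.my_table_id, new.c;
);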

 

Thanks,

Chad



[GENERAL] pg_dump / load seems really slow..

2008-04-05 Thread Chad
Howdy!

I'm trying to build some fixture (test) data for a webapp I'm building
for my company. To do this I've taken a dump of data from production,
exported it using pg_dump, and loaded it via psql database < file.

The data that is exported from production is about 25K rows and 12MB.
The dump takes less than a second but the load is taking 2 minutes.
The load step seems really slow. I'm wondering if there are ways to
tweak the system to load faster. Or, if the process cannot be sped up
I guess I don't need all 25K rows. Is there a way to export a query to
file via command line?

export:

pg_dump forums --data-only --no-owner -t foo > output_file

import:

psql forums < output_file
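
On the last question: from 8.2 onward COPY accepts a query, so a single
command can export just the rows needed (a sketch; the table name and
predicate are placeholders). Run it through psql -c "..." with stdout
redirected to a file:

-- exports only the matching rows, in COPY text format
COPY (SELECT * FROM foo WHERE created_on > '2008-01-01') TO STDOUT;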



[GENERAL] PostgreSQL vs. Informix

2007-11-28 Thread Chad Hendren

PostgreSQL Team,

I have a large OEM customer (one of the top 8 for Sun worldwide) that is 
considering embedding PostgreSQL into the core of their primary product 
instead of Informix.  He is trying to build his case for this change.  
Do we have anything that I can forward to him (something like the 
presentation you did at CEC?) to help him build his case?


This is an urgent request from my customer given that his timeline is 
relatively short.  Any help you can give me will be very appreciated.


Thanks,

Chad Hendren

Original question:

Have you seen any studies (either by Sun or others) that compare
PostgreSQL to other commercial database software (Informix, Oracle,
Sybase, etc.)? I am interested in seeing a feature-by-feature comparison of
PostgreSQL and Informix.



--
Chad Hendren
Solutions Architect
Sun Microsystems, Inc.
3655 North Point Parkway, Suite 600
Alpharetta, GA 30005 US
Phone 770-360-6402
Mobile 770-596-4778
Email [EMAIL PROTECTED] 





Re: Wikipedia on Postgres (was Re: [GENERAL] postgresql vs mysql)

2007-02-25 Thread Chad Wagner

On 2/25/07, Greg Sabino Mullane [EMAIL PROTECTED] wrote:


For the record, anyone using wikipgedia deserves the pain they
get: it is deprecated. The latest version of MediaWiki itself is what
should now be used: it will detect if you have Postgres upon
installation. :)



Some of us are still using php4  :)


Re: Wikipedia on Postgres (was Re: [GENERAL] postgresql vs mysql)

2007-02-25 Thread Chad Wagner

On 2/25/07, Magnus Hagander [EMAIL PROTECTED] wrote:


 For the record, anyone using wikipgedia deserves the pain they
 get: it is deprecated. The latest version of MediaWiki itself is what
 should now be used: it will detect if you have Postgres upon
 installation. :)

Perhaps the project should be *gasp* deleted then? ;-) Or is there
actual historical information there that someone would be interested in?



As I said in my other mail, some folks are still using PHP4 -- which is why
MediaWiki still maintains the 1.6 branch.  I am more than willing to
contribute the most recent 1.6.10 codebase w/ PostgreSQL modifications to
the foundry.  I am actively maintaining my own codebase for my site.

I agree with Greg: if you are already using PHP5 then use the MediaWiki
distribution, but if you're stuck on PHP4 like me then you really don't have a
choice other than what is being offered on pgFoundry.  :)


Re: [GENERAL] pgfoundry

2007-02-24 Thread Chad Wagner

On 2/24/07, Matthew Peter [EMAIL PROTECTED] wrote:


I was wondering if anyone knew how to get this package to install for 8.2.3?
When I try to install it I get:
ERROR:  incompatible library /usr/lib/pgsql/uri.so: missing magic block
HINT:  Extension libraries are required to use the PG_MODULE_MAGIC macro.

I read 8.2 now needs the MAGIC definition but C is rather foreign. Any
help?

http://www.postgresql.org/ftp/projects/pgFoundry/uri/

It hasn't been updated since 2005. Thanks



This appears to be a trivial patch against the uri codebase:

diff -wur uri.orig/uri.c uri/uri.c
--- uri.orig/uri.c  2005-07-20 18:49:23.0 -0400
+++ uri/uri.c   2007-02-24 07:16:29.553130168 -0500
@@ -15,6 +15,10 @@

#include <liburi/uri.h>

+#ifdef PG_MODULE_MAGIC
+PG_MODULE_MAGIC;
+#endif
+
/*
 *  Commonly used code snippets
 */


Other than that it appears to still work with 8.2.3.


--
Chad
http://www.postgresqlforums.com/





Re: Wikipedia on Postgres (was Re: [GENERAL] postgresql vs mysql)

2007-02-23 Thread Chad Wagner

On 2/23/07, Bill Moran [EMAIL PROTECTED] wrote:


I installed wikipgdia for the WPLUG wiki:
http://wplug.ece.cmu.edu/wiki/



Isn't that the same wikipgedia that is found at pgFoundry?  The only issue I
really had with the wikipgedia port is that the codebase is 1.6alpha, and it
seemed like it wasn't being actively maintained anymore (in fact that is what
the description says), so I am not sure it has all of the bug fixes up to
1.6.10.

In any case if anyone is interested I was able to reproduce the changes that
wikipgedia made and applied those changes (as well as others) all the way up
to the 1.6.10 codebase.  The only reason I mention this is because 1.6 is
the only choice for PHP4 users.  If anyone is interested I can provide the
codebase, the schema still has to be created manually as was the case with
wikipgedia.


Re: Wikipedia on Postgres (was Re: [GENERAL] postgresql vs mysql)

2007-02-23 Thread Chad Wagner

On 2/23/07, Bill Moran [EMAIL PROTECTED] wrote:


 In any case if anyone is interested I was able to reproduce the changes that
 wikipgedia made and applied those changes (as well as others) all the way up
 to the 1.6.10 codebase.  The only reason I mention this is because 1.6 is
 the only choice for PHP4 users.  If anyone is interested I can provide the
 codebase, the schema still has to be created manually as was the case with
 wikipgedia.

I would be interested.  I'm probably expected to maintain this thing ...



You can download it from:

http://www.postgresqlforums.com/downloads/pgmediawiki-1.6.10.tar.gz

Again, like wikipgedia, you have to manually create a schema named
mediawiki, and, as with wikipgedia (because the port more or less used some of
the same mods they made), MySQL support is probably broken.


Re: [GENERAL] postgresql vs mysql

2007-02-22 Thread Chad Wagner

On 2/22/07, Martijn van Oosterhout kleptog@svana.org wrote:


On Thu, Feb 22, 2007 at 12:05:20PM +1100, Chris wrote:
 SELECT foo, bar, COUNT(*)
 FROM baz
 GROUP BY foo

 That one actually comes in handy ;) Especially in older versions (4.0)
 that don't support subselects..

I must say I don't see any reasonable way of interpreting the above
query. Is the value of bar selected randomly?



The value of bar happens to be the first value fetched based on the GROUP BY
of foo; I'm not sure how predictable and repeatable it is.


Re: [GENERAL] postgresql vs mysql

2007-02-21 Thread Chad Wagner

On 2/20/07, gustavo halperin [EMAIL PROTECTED] wrote:


I have a friend who asked me why PostgreSQL is better than MySQL.
I personally prefer PostgreSQL, but she needs to give 3 or 4
strong reasons for it at her work. I mean, not too technical reasons. Can you
help me please?



How about the fact that MySQL accepts the following query as legal:

SELECT foo, bar, COUNT(*)
FROM baz
GROUP BY foo

And produces, naturally, an unexpected result instead of an error.  Totally
annoying; I don't know if it was ever fixed.  It seems that MySQL's parser
is generally weak at syntax validation in its default configuration.
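
For comparison, PostgreSQL rejects that statement outright; to make it legal
you either aggregate bar or group by it as well (a sketch on the same
hypothetical table):

SELECT foo, MIN(bar) AS bar, COUNT(*)
FROM baz
GROUP BY foo;

-- or
SELECT foo, bar, COUNT(*)
FROM baz
GROUP BY foo, bar;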


--
Chad
http://www.postgresqlforums.com/


Re: [GENERAL] Option to undo last update on table.

2007-02-15 Thread Chad Wagner

On 2/15/07, RPK [EMAIL PROTECTED] wrote:


Is there any option in PGSQL to undo last changes done on a table? Any
feature similar to FlashBack Query in Oracle.



The only way I could imagine that you could implement a flashback query in
PostgreSQL is if you ignored everyone's advice and didn't vacuum -- not a
good idea.

Oracle uses an undo tablespace which records all of the information needed to
roll back to an older version of row xyz, but PostgreSQL stores a new
version of row xyz in the same data file, and vacuum flags the old version
of row xyz for deletion, which may quickly get overwritten (or may be
unusable for other reasons???).


As for others that may be interested, the flashback feature allows you to
specify essentially the version to use when executing the query.  And it
is typically done by specifying the SCN (which should be equivalent to the
xid for PostgreSQL) OR a timestamp.


--
Chad
http://www.postgresqlforums.com/


Re: [GENERAL] Move data between two databases SQL-ASCII to UTF8

2007-02-08 Thread Chad Wagner

On 2/8/07, Clodoaldo [EMAIL PROTECTED] wrote:


Use pg_dump to dump the db and use iconv on the generated file:

iconv -f ASCII -t UTF-8 mydb.dump -o mydb_utf8.dump



Wouldn't it be adequate to set the client encoding to SQL_ASCII in the dump
file (if that was in fact the encoding on the original database)?

SET client_encoding TO SQL_ASCII;

And then let the database do the conversion?  I would think since the db is
UTF8 and the client is claiming SQL_ASCII then it would convert the data to
UTF8.

I have done this in the past with SQL dumps that had characters that UTF8
didn't like, and I just added the SET client_encoding TO LATIN1; since I
knew the source encoding was LATIN1.


--
Chad
http://www.postgresqlforums.com/


Re: [GENERAL] date comparisons

2007-02-03 Thread Chad Wagner

On 2/3/07, Bruce Momjian [EMAIL PROTECTED] wrote:



Would someone please confirm that our behavior in the three queries
below matches Oracle's behavior?



Here is output from Oracle:

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> alter session set nls_timestamp_format = 'YYYY-MM-DD HH.MI.SSXFF AM';

Session altered.

SQL> select to_timestamp('January 2006', 'Month YYYY') from dual;

TO_TIMESTAMP('JANUARY2006','MONTHYYYY')
---------------------------------------
2006-01-01 12.00.00.000000000 AM

SQL> select to_timestamp('January   2006', 'Month YYYY') from dual;

TO_TIMESTAMP('JANUARY2006','MONTHYYYY')
---------------------------------------
2006-01-01 12.00.00.000000000 AM

SQL> select to_timestamp('January 2006', 'FMMonth YYYY') from dual;

TO_TIMESTAMP('JANUARY2006','FMMONTHYYYY')
-----------------------------------------
2006-01-01 12.00.00.000000000 AM


Re: [GENERAL] PostgreSQL/FireBird

2007-02-01 Thread Chad Wagner

On 2/1/07, RPK [EMAIL PROTECTED] wrote:


How is FireBird rated when compared with PostgreSQL?



I don't know how it is rated, but honestly I looked at Firebird and it is
way too quirky, and I could never get it to compile embedded.  The
documentation is very, very poor for that project when it comes to the
embedded model, if you ask me.  Also, Firebird has an odd license which I
think may limit your ability to commercially embed it without paying
licensing -- not sure about that -- it certainly isn't as simple as a BSD
license.

For embedded solutions I ended up looking at SQLite, which seems to work
well enough.




--
Chad
http://www.postgresqlforums.com/


Re: [GENERAL] PostgreSQL 9.0

2007-01-29 Thread Chad Wagner

On 1/29/07, Ray Stell [EMAIL PROTECTED] wrote:


That said, probably, last gasps from a legacy system.  I'm wondering
when ora will open up its code a la sun/solaris.



According to a recent Gartner study, Oracle has 48% market share (in other
words they are the market leader by a margin of 26%).

http://www.gartner.com/press_releases/asset_152619_11.html


I am pretty convinced Oracle wouldn't open source their database code.  The
competition would tear them apart by reinventing the Oracle Database.  If
you want open source Oracle code then download BDB or InnoDB ;), I think
that is as close as it gets.


--
Chad
http://www.postgresqlforums.com/


Re: [GENERAL] counting query

2007-01-28 Thread Chad Wagner

On 1/28/07, Ron Johnson [EMAIL PROTECTED] wrote:


This is the great synthetic-vs-natural key debate.



Truly.  But what the heck!

Surrogate keys are not evil, and they do have value.  I see no value in
proclaiming surrogate keys are evil, do not use them.

Surrogate keys do have advantages:
- Disassociation of natural data from other referential tables (which can also
be confusing).  Imagine a social security number, drivers license number, or
any other natural key.  Now imagine that key value has changed for a specific
person, and you have used it as a natural key throughout your data
structures.  (And they do change.)
- Reduced storage requirements (yields better performance).  It is cheaper to
store a 50-byte field plus a 4-byte surrogate key once than it is to store it
a million times:
   (surrogate key)   54 bytes + (4 bytes * 1 million) = ~4 MB
   vs.
   (natural key)   50 bytes * 1 million = ~50 MB


Natural keys are not evil either, and they have their own advantages.  But
when you're modeling very large databases (around the 10TB range) you
absolutely have to consider every single decision, and a natural key (in my
opinion) is not always a good choice, as a single natural key could add
another 100GB of storage requirements.

There should be some thought when you are modeling, and these are some of the
things to consider.  I don't see a 10-table join being a major performance
penalty, especially when 8 of the tables may be a few MB in size.
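
To illustrate the disassociation point in DDL (a sketch; the tables and
columns are invented for the example):

CREATE TABLE person (
    person_id serial PRIMARY KEY,   -- 4-byte surrogate key
    ssn       char(9) UNIQUE,       -- natural candidate key; can change
    name      varchar(50)
);

CREATE TABLE payment (
    payment_id serial PRIMARY KEY,
    person_id  int REFERENCES person (person_id)  -- 4 bytes per row, not the wide natural key
);

If the SSN ever changes, only one row in person is updated; the payment rows
never have to be touched.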


--
Chad
http://www.postgresqlforums.com/


Re: [GENERAL] Cannot Restart PostgreSQL-8.1.4

2007-01-24 Thread Chad Wagner

On 1/24/07, Rich Shepard [EMAIL PROTECTED] wrote:


 libpq.so.3 => /usr/lib/libpq.so.3 (0xb7f5f000)

So it's finding all the libraries it wants, and it's using the older
libpq.so. Interesting.



Where is your other installation of PostgreSQL installed?  I suspect this is
because somewhere along the line you had a partial installation of
PostgreSQL (newer pg_config, but missing libpq.so.4)


[Wed Jan 24 15:01:20 2007] [error] [client 127.0.0.1] Premature end of

script headers: /usr/local/sql-ledger/login.pl
/usr/local/sql-ledger/login.pl: symbol lookup error:
/usr/lib/perl5/site_perl/5.8.8/i486-linux/auto/DBD/Pg/Pg.so: undefined
symbol: PQserverVersion

   The version of DBD::Pg is 1.4.9 which is the most recent one available.



The reason is that when DBD::Pg probed the PostgreSQL configuration using
pg_config --version it received a response of at least 8.0.0 or later.  Yet
when it attempted to link it could only find libpq.so.3.  I would try
rebuilding DBD::Pg, and run make test to verify whether it works before
installing.

I don't know what the rules are for ld to locate libraries and link them,
but perhaps if ldconfig was not run and libpq.so.4 didn't link to the right
version it used libpq.so.3.

--
Chad
http://www.postgresqlforums.com/


Re: [GENERAL] show all record between two date after group by and aggrigation...

2007-01-23 Thread Chad Wagner

On 1/23/07, Jorge Godoy [EMAIL PROTECTED] wrote:


 select cal.date, coalesce(foo.x, 0) AS x
   from (select (date_trunc('day', current_timestamp) + (s.s * interval '1 day'))::date AS date
           from generate_series(1,365) AS s) AS cal
   left join foo ON cal.date = foo.create_date;

Why not, for the date part, the simpler alternative:

SELECT CURRENT_DATE + s.s AS DATE FROM generate_series(1, 365) AS s;



That's one of the problems with having a couple dozen date/time functions
;).


--
Chad
http://www.postgresqlforums.com/


Re: [GENERAL] More grist for the PostgreSQL vs MySQL mill

2007-01-22 Thread Chad Wagner

On 1/22/07, Harald Armin Massa [EMAIL PROTECTED] wrote:


 select * from table where table_id in (?, ?, ?, ?, ?, ?, ?, ...)


I usually try to rewrite this kind of query to

select whatever from table t1 join
(select table_id from x where x) t2 using (table_id)

And 3 times out of 4 this performs better on Oracle and PostgreSQL.

Would be curious why it does, but usually I am happy that it does :)



Because the resulting plan does less work than a subselect; less work =
faster.  One thing to point out is that a query of the form:

select ...
 from foo
where id in (select id from bar where n=27)

Would normally result in a SORT UNIQUE for the select id from bar where
n=27 part.  Whereas:

select ...
 from foo f1, (select id from bar where n=27) f2
where f1.id = f2.id

is the same as...

select ...
 from foo f1, bar f2
where f2.n=27
  and f1.id=f2.id

which would not result in a sort unique.  In order to obtain the same
results as a subselect you would need to group or distinct, and I would
imagine the results would be the same as the IN..SUBSELECT.

--
Chad
http://www.postgresqlforums.com/


Re: [GENERAL] show all record between two date after group by and aggrigation...

2007-01-22 Thread Chad Wagner

On 1/22/07, deepak pal [EMAIL PROTECTED] wrote:


I am fetching records from the database between two date ranges on the
registration_date column and then grouping and counting them using
count(registration_date).  I have to show all dates even if a date is not
present in registration_date; it should show the date and 0 in the count.
How can I do it?  Please help...



Most people do this with a calendar table; worst case, you could use
generate_series to do it (but it's ugly, and it may not scale well --
haven't tested it) and left join it to your data table.

select cal.date, coalesce(foo.x, 0) AS x
  from (select (date_trunc('day', current_timestamp) + (s.s * interval '1 day'))::date AS date
          from generate_series(1,365) AS s) AS cal
  left join foo ON cal.date = foo.create_date;

    date    | x
------------+----
2007-01-23 |  1
2007-01-24 |  0
2007-01-25 |  2
2007-01-26 |  3
2007-01-27 |  0
2007-01-28 |  4
2007-01-29 |  5
2007-01-30 |  0
2007-01-31 |  6
2007-02-01 |  0
2007-02-02 |  7
2007-02-03 |  8
2007-02-04 |  0
2007-02-05 |  9
2007-02-06 |  0
2007-02-07 | 10
2007-02-08 |  0
2007-02-09 | 11
2007-02-10 |  0
2007-02-11 | 12
2007-02-12 |  0
2007-02-13 | 13
2007-02-14 |  0



--
Chad
http://www.postgresqlforums.com/


Re: [GENERAL]

2007-01-22 Thread Chad Wagner

On 1/22/07, Laurent Manchon [EMAIL PROTECTED] wrote:


I have a slow response from my PostgreSQL 7.4 database using this query
below on a table with 80 rows:

select count(*) from tbl;

PostgreSQL returns the result in 28 sec every time,
although MS-SQL returns the result in 0.02 sec every time.



Here are a few links discussing the issue:

http://sql-info.de/postgresql/postgres-gotchas.html#1_7
http://archives.postgresql.org/pgsql-hackers/2005-01/msg00247.php



--
Chad
http://www.postgresqlforums.com/


Re: [GENERAL] More grist for the PostgreSQL vs MySQL mill

2007-01-21 Thread Chad Wagner

On 1/20/07, John Meyer [EMAIL PROTECTED] wrote:


What I think bothers me is this whole concept that if PostgreSQL is to
flourish, MySQL has to be beaten down.  Folks, both products are free,
both can be used in the same shop (maybe not on the same computer if
you're running them in production).  Putting down MySQL will not make
PostgreSQL any better, or vice versa.



One thing to point out here is that MySQL's business strategy seems to be
changing, and I believe the community edition is starting to fall apart
(or struggling to gain momentum).  It seems MySQL just dropped the ball on
the free version of their product, and it is taking time for people to run
with it.  Their attention is fully on the enterprise edition packages.
Take a look at the 5.0 GA releases: you can't even download binary releases
of 5.0.33.  You have to build from source, which kind of indicates the lack
of effort that MySQL is putting forward for the community, and sends a signal
of "buy our enterprise edition."

Additionally, they feel that Oracle is such a threat that they have dumped
BDB (I believe this move was after Oracle acquired Sleepycat) and now they
announced they are dumping InnoDB (again after Oracle acquired it).  Now
they are off building their own engine to compete against InnoDB.

I know right now they are promoting SolidDB, but that won't last long
(Falcon is their new storage engine).  And frankly, SolidDB+MySQL just
doesn't work right.  I gave it a shot and there were tons of problems; it is
hardly production worthy.  There also seems to be an increase of chatter
about people coming to PostgreSQL because of the actions that MySQL has
taken.

Just my 2 cents.

--
Chad
http://www.postgresqlforums.com/


Re: [GENERAL] More grist for the PostgreSQL vs MySQL mill

2007-01-21 Thread Chad Wagner

On 21 Jan 2007 08:01:57 -0800, Shashank [EMAIL PROTECTED] wrote:


 It seems MySQL just dropped the ball on
 the free version of their product, and it

Not sure what you mean. I can download their latest versions without
any trouble.



The point was that they are not going to the effort of rolling binary
releases; if you can find binaries for 5.0.33 on their community download
page then point them out.

http://dev.mysql.com/downloads/mysql/5.0.html


They are not there, just source downloads.  In other words they dropped the
ball, in my opinion.


Where is this announcement? They don't need to drop either engine, as
both are GPL. MySQL as a group was never too hot with BDB. As for
InnoDB, if Oracle acts up, the GPL allows MySQL or any of its community


You're correct; I misspoke about an announcement regarding InnoDB.  It was
actually speculation from the community.

members to fork out a separate version. SolidDB and Falcon are just
storage engines, which is quite a smart architecture for MySQL to
follow. There's an interesting discussion about ReiserFS vs MySQL



I don't actually agree that it is a smart architecture.  BDB, InnoDB,
SolidDB, etc. all require separate shared memory areas for caching.  It just
isn't an efficient use of memory.

more useful it will be to different audiences. Meanwhile, it is unclear
what the goofs at Oracle have in mind for their two acquisitions.



Not sure why you think anything is up their sleeve, other than that they
would like to be more competitive in the embedded marketplace and offer a
larger product portfolio.  The problem is Oracle Database is trying to serve
a much different market than TimesTen, BDB, and InnoDB.  Oracle Database is
trying to serve high-availability & fault-tolerant enterprise markets, and
they do it very well in my book.  TimesTen is trying to serve a
high-performance market, BDB is a light-weight (small device) embedded
market, and InnoDB is more of a larger-device (PC-based, perhaps) embedded
market.

I know that since Oracle has acquired BDB, they have added
multi-versioning.  I was never really impressed with BDB embedded in MySQL,
but who knows if that is how it was implemented or what.  BDB in general
seems to perform well.


--
Chad
http://www.postgresqlforums.com/


Re: PG not rejecting bad dates (was Re: [GENERAL] Finding bogus dates)

2007-01-18 Thread Chad Wagner

On 1/18/07, Ron Johnson [EMAIL PROTECTED] wrote:


 Right. In my case I have bad data from a source I didn't control, exported
 via code that I do control which happens to output YYYY-MM-DD. Well, except
 that I don't do what I need to when MM or DD are more than 2 digits, but
 I'm going back to look at that again ;-)

Why didn't the PG engine reject these bad-date records at INSERT
time?  This smacks of something that MySQL would do...



The original poster mentioned that the data type the date was stored
in was a varchar; there's not really much the engine can do if you don't use
the right data type :(.

Not to mention how misleading a varchar holding a date probably is to
the optimizer when calculating selectivity.

--
Chad
http://www.postgresqlforums.com/


Re: [GENERAL] GUID/UUID Support

2007-01-17 Thread Chad Wagner

On 1/17/07, Patrick Earl [EMAIL PROTECTED] wrote:


Certainly support for the GUID field type itself is most important.
As for the generators, though they are non-essential, they are very
useful.  Other platforms and libraries have standardized on uuid
generators, so I don't see why PostgreSQL can't.



Maybe I am oblivious to the reason, but why is there a need for a special
data type for GUID/UUIDs?  Wouldn't you always be doing an equality
anyways?  Wouldn't a varchar suffice?

--
Chad
http://www.postgresqlforums.com/


Re: [GENERAL] Building web sites using a database

2007-01-17 Thread Chad Wagner

On 1/17/07, Jan Mura [EMAIL PROTECTED] wrote:


 I would like to ask if there is a general concept for creating web pages
using a database and PHP or another scripting language.  What I mean is to
store basic entities and relations between pages.  Only the certain texts or
information regarding every site will differ.  But the index page, search
page, result page and so on for every site should look similar, so the
application selects a page structure from the DB and adds the certain text.



It sounds like you want a content management system; I would take a look at
the Joomla project.  If I recall correctly you can use PostgreSQL as a
backend (if not, there are easily 50~60 different content management systems
out there, ranging from open source to commercial).  I certainly wouldn't
write your own.


--
Chad
http://www.postgresqlforums.com/


Re: [GENERAL] Temp Table Within PLPGSQL Function - Something Awry

2007-01-16 Thread Chad Wagner

It appears that what is happening is that PL/pgSQL is caching the table
definition (it appears to do this on first execution); testing it with
dynamic SQL via the EXECUTE clause doesn't exhibit the same issue:

CREATE OR REPLACE FUNCTION test_fxn() RETURNS SETOF RECORD AS $$
DECLARE
  test_rec RECORD;
BEGIN
  EXECUTE 'CREATE TEMP TABLE temp_tbl (actual_inventory_id BIGINT, cal_due
TIMESTAMP)';
  FOR test_rec IN SELECT id FROM item LOOP
 EXECUTE 'INSERT INTO temp_tbl (actual_inventory_id) values (6)';
  END LOOP;
  FOR test_rec IN EXECUTE 'SELECT actual_inventory_id FROM temp_tbl' LOOP
 RETURN NEXT test_rec;
  END LOOP;
  EXECUTE 'DROP TABLE temp_tbl';
  RETURN;
END;
$$ LANGUAGE PLPGSQL;


On 1/16/07, Lenorovitz, Joel [EMAIL PROTECTED] wrote:


 Greetings,

I am trying to work with a TEMP TABLE within a plpgsql function and I was
wondering if anyone can explain why the function below, which is fine
syntactically, will work as expected the first time it is called, but will
err out as shown on subsequent calls.  The DROP TABLE line seems to be
executing (note \d results on temp_tbl), and repeatedly
adding/dropping/querying temp_tbl from the command line also works without a
problem.  However, when it's all put into the function and cycled through
multiple times then something seems to be getting confused.  Any light that
can be shed on this peculiarity would be great.  Once I get past this hurdle
the function will, of course, go on to do more and make better use of the
temp table, but for now I just need to figure out why it's failing.  Is this
an improper or ill-advised use of a temp table?

Thanks much,
Joel


CREATE OR REPLACE FUNCTION test_fxn() RETURNS SETOF RECORD AS $$
DECLARE
   test_rec RECORD;
BEGIN
   CREATE TEMP TABLE temp_tbl (actual_inventory_id BIGINT, cal_due
TIMESTAMP);
   FOR test_rec IN SELECT id FROM item LOOP
  INSERT INTO temp_tbl (actual_inventory_id) values (6);
   END LOOP;
   FOR test_rec IN SELECT actual_inventory_id FROM temp_tbl LOOP
  RETURN NEXT test_rec;
   END LOOP;
   DROP TABLE temp_tbl;
   RETURN;
END;
$$ LANGUAGE PLPGSQL;

postgres=# select max(id) from test_fxn() AS (id bigint);
 max
-
   6
(1 row)

postgres=# select max(id) from test_fxn() AS (id bigint);
ERROR:  relation with OID 24449 does not exist
CONTEXT:  SQL statement INSERT INTO temp_tbl (actual_inventory_id) values
(6)
PL/pgSQL function test_fxn line 6 at SQL statement

postgres=# \d temp_tbl;
Did not find any relation named temp_tbl.
postgres=#





--
Chad
http://www.postgresqlforums.com/


Re: [GENERAL] invalid byte sequence for encoding UTF8

2007-01-16 Thread Chad Wagner

On 1/16/07, Gary Benade [EMAIL PROTECTED] wrote:


I used shp2pgsql.exe to create an import sql for my gis database.
The resultant sql has data like this in it: INSERT INTO gis.sa_area
(label,type,level,the_geom) VALUES
('MÔRELIG','0x2','2','01060001000');
The Ô is ascii char 212.
This won't import; psql returns
ERROR: invalid byte sequence for encoding UTF8: 0xd452
HINT: This error can also happen if the byte sequence does not match the
encoding expected by the server, which is controlled by client_encoding




I am not terribly familiar with PostGIS (other than installing it, running
the test cases, and saying "cool" :), but it appears that your source data is
probably ISO-8859-1.  You should probably use the -W switch with shp2pgsql
and specify the client encoding as LATIN1; it should write a dump file
with SET client_encoding TO 'LATIN1' instead of UTF8 (or you can manually
tweak the SQL file).


--
Chad
http://www.postgresqlforums.com/


Re: [GENERAL] Mounting a data dir read only

2007-01-16 Thread Chad Wagner

On 1/16/07, Thorsten Körner [EMAIL PROTECTED] wrote:


The project's website is a very high-traffic site, with about 500 Pi/sec.
We are facing performance trouble when using just one server.
Searches etc. we have implemented using Lucene on our backend servers, to
keep the database alive, but since content is very dynamic, there is a need
to update the Lucene indexes very often.
Another problem is some logs, like counters for how often users have seen
the pages with contents or user profiles etc.
All those things seem to me very difficult to implement with Slony.
Do you have a hint as to which way we can go?




Perhaps you need to look at a different solution.  It is apparent you want
to reduce I/O and activity on the database; have you looked at distributed
caches like memcached (http://www.danga.com/memcached/)?

Many popular high volume websites are using memcached (Wikipedia, Slashdot,
Live Journal, etc.).

--
Chad
http://www.postgresqlforums.com/


Re: [GENERAL] GUID/UUID Support

2007-01-16 Thread Chad Wagner

On 1/16/07, David Lee Lambert [EMAIL PROTECTED] wrote:


The problem with a random UUID generator is:  where do you get the random
numbers?  However,  there are really only two platforms to worry about:
POSIX (use libuuid) and Win32 (there is probably a Win32 function to
generate
it,  since MS SQL Server and the .NET API both provide GUID
generation;  but
I can't seem to find it by searching the Web).

Anyway,  the uniqueidentifier project on GBorg seems like it has what you
want:  a user-defined type,  and a generator.  I think the main thing it
needs is some code cleanup/robustness and a lot of production testing.



FYI...

Apache Portable Runtime has a uuid generator; see the apr-util package, the
function is apr_uuid_get().  Underneath it uses one of the following:

a) uuid_create (FreeBSD via libc)
b) uuid_generate (from libuuid)
c) UuidCreate (for Win32)
d) Apache version of a UUID generator



--
Chad
http://www.postgresqlforums.com/


Re: [GENERAL] Improve Postgres Query Speed

2007-01-15 Thread Chad Wagner

On 1/15/07, carter ck [EMAIL PROTECTED] wrote:


I am having a slow performance issue when querying a table that contains
more than 1 records.

Everything just slows down when executing a query even though I have created
an index on it.
on it.



You didn't really provide much information for anyone to help you.  I would
suggest posting the table definition (columns & indexes), the queries you
are running, and the output of EXPLAIN ANALYZE <your query here>;.

--
Chad
http://www.postgresqlforums.com/


Re: [GENERAL] How does one perform a case-insensitive query on text or char fields

2007-01-10 Thread Chad Wagner

On 9 Jan 2007 13:44:32 -0800, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:


SELECT * FROM table WHERE thisfield = 'some text';

How would I rewrite this query to search through the table looking at
the text in the column thisfield for the string 'some text' but have
it perform a case-insensitive search?



I wouldn't consider this a search, but rather a lookup.  You can find some
tips on how to do this in the link below.

http://archives.postgresql.org/pgsql-php/2003-05/msg00045.php


If you need thisfield to be indexed, then I would suggest creating an
index on the expression LOWER(thisfield); the text below is directly from
the user guide:

For example, a common way to do case-insensitive comparisons is to use the
lower function:

SELECT * FROM test1 WHERE lower(col1) = 'value';

This query can use an index, if one has been defined on the result of the
lower(col1) operation:

CREATE INDEX test1_lower_col1_idx ON test1 (lower(col1));


You can read more about expression indexes here:

http://www.postgresql.org/docs/8.2/static/indexes-expressional.html


Re: [GENERAL] Postgres Differential backup

2007-01-08 Thread Chad Wagner

I have jotted down some notes on performing a hot backup (which is what Bill
is referring you to), and I included a script called pg_hotbackup that
automates the relatively simple tasks required to take a snapshot of the
cluster data directory.

http://www.postgresqlforums.com/forums/viewtopic.php?f=14&t=12

In that link there is a discussion, and a link to the Wiki article which is
located here:

http://www.postgresqlforums.com/wiki/Backup_%26_Recovery




On 1/8/07, Vijayaram Boopalan - TLS, Chennai [EMAIL PROTECTED] wrote:


 Hi,

I do not know how to take a differential database backup in PostgreSQL 8.1.5.
Please help.

B. VijayaRam
Chennai








Re: [GENERAL] slow speeds after 2 million rows inserted

2006-12-31 Thread Chad Wagner

On 12/31/06, Nikola Milutinovic [EMAIL PROTECTED] wrote:


 1. There is no difference (speed-wise) between committing every 1K or
every 250K rows.

It was really some time ago since I last experimented with this. My last
experiment was on PG 7.2 or 7.3. I was inserting circa 800,000 rows.
Inserting without transactions took 25 hrs. Inserting with 10,000 rows per
transaction took about 2.5 hrs. So, the speedup was 10x. I have not
experimented with the transaction batch size, but I suspect that 1,000 would
not show much speedup.



I would imagine the overhead here is sending the BEGIN/COMMIT (or the
behavior could be version dependent), and the commit forcing a log
flush.  According to the documentation, multiple inserts have the potential
of being flushed in one shot when the database wakes up to do a log flush
automatically, so committing more frequently would actually appear to slow
you down.

The time to commit is dependent on hardware; on my hardware it was around
40ms (though you have to question the precision of the calculation: it is
probably less than 40ms, and that is an upper limit under load).

My experiment was with 8.2, default configuration, so there is room for
improvement.  It was installed from Devrim's RPM packages.


2. Vacuuming also makes no difference for a heavy insert-only table, only
slows it down.

Makes sense. Since my application was dumping all records each month and
inserting new ones, vacuum was really needed, but no speedup.



I agree, vacuuming is clearly important.  I would also think if you are
going to do a massive one-time update/delete as a maintenance item on a
normally read only table that you should plan on doing a vacuum full to
recover the space used by the old rows.  Logically the fewer pages on
disk, the less I/O that will result in scenarios where you are doing
sequential scans and probably even many index scans.

It seems that the MVCC implementation would introduce fragmentation (with
respect to a btree-indexed field): if your table design had an indexed
creation date field and you often range-scanned on that field but also
updated the records, then over time the optimizer would favor the index less
as the correlation approached 0.  Obviously this is a great feature for a
"last update date" field.  :)  Not so great if your primary queries are on a
creation date field.



3. Table size plays no real factor.

The reason I saw a speedup must have to do with the fact that, without
transactions, each insert was its own transaction. That was eating
resources.




I would agree.  I am also reusing the same statement handles (prepare once,
execute many) with DBD::Pg.  The benefit here appears to be that it prepares
the cursor once (one time to parse and generate the execution plan), and
executes the same plan multiple times.  The difference in inserts was about
2000 inserts/s!

This is one of the reasons why everyone keeps saying use COPY instead of
INSERT; COPY is essentially a one-time prepare and execute-many.
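
At the SQL level the prepare-once pattern looks roughly like this (a sketch;
the table and columns are invented for illustration):

-- parse and plan once
PREPARE ins (int, text) AS
  INSERT INTO test_tbl (id, val) VALUES ($1, $2);

BEGIN;
EXECUTE ins(1, 'a');  -- each EXECUTE reuses the stored plan
EXECUTE ins(2, 'b');
-- ... many more ...
COMMIT;

DEALLOCATE ins;

COPY test_tbl (id, val) FROM STDIN goes a step further by also batching the
data transfer into a single statement.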

Test #1 (prepare once, execute many):
4000 inserts 0.92 secs, 4368.84 inserts/s, commit 0.04 secs.
4000 inserts 0.93 secs, 4303.47 inserts/s, commit 0.02 secs.
4000 inserts 0.93 secs, 4319.78 inserts/s, commit 0.02 secs.
4000 inserts 0.93 secs, 4306.38 inserts/s, commit 0.02 secs.
4000 inserts 0.93 secs, 4299.53 inserts/s, commit 0.02 secs.
4000 inserts 0.92 secs, 4345.44 inserts/s, commit 0.02 secs.
4000 inserts 0.93 secs, 4298.67 inserts/s, commit 0.03 secs.
4000 inserts 0.91 secs, 4382.13 inserts/s, commit 0.04 secs.
4000 inserts 0.92 secs, 4347.44 inserts/s, commit 0.02 secs.
4000 inserts 0.93 secs, 4314.66 inserts/s, commit 0.02 secs.
inserts/s (StdDev) = 28.7435
inserts/s (Mean) = 4328.6351

Test #2 (prepare/execute each time):
4000 inserts 1.92 secs, 2086.21 inserts/s, commit 0.04 secs.
4000 inserts 1.91 secs, 2092.67 inserts/s, commit 0.02 secs.
4000 inserts 1.91 secs, 2094.54 inserts/s, commit 0.05 secs.
4000 inserts 1.96 secs, 2042.55 inserts/s, commit 0.03 secs.
4000 inserts 1.91 secs, 2098.57 inserts/s, commit 0.04 secs.
4000 inserts 1.91 secs, 2098.91 inserts/s, commit 0.03 secs.
4000 inserts 1.91 secs, 2098.38 inserts/s, commit 0.03 secs.
4000 inserts 1.92 secs, 2083.94 inserts/s, commit 0.02 secs.
4000 inserts 1.95 secs, 2050.07 inserts/s, commit 0.03 secs.
4000 inserts 1.92 secs, 2086.14 inserts/s, commit 0.02 secs.
inserts/s (StdDev) = 19.2360
inserts/s (Mean) = 2083.1987


Re: [GENERAL] How do I use the backend APIs

2006-02-24 Thread Chad
Thanks Martijn.




Re: [GENERAL] How do I use the backend APIs

2006-02-22 Thread Chad
Thanks Martijn/Alban,

This looks interesting. I'll make some time to try this problem out
using your approach.
I have a few questions, like:
- Could I skip the FETCH FORWARD and go straight to the FETCH BACKWARD,
i.e. declare the cursor to be at "Mal" and go backwards from there, or is
the cursor limited to going backward only as far as "Mal"?
- Does the DB avoid transferring the data until the FETCH command?
- When rows change in between opening the cursor and fetching the
changed rows, will the FETCH retrieve the new data or is a snapshot
taken when the cursor is declared?
- What about concurrency if a cursor is kept open while other
transactions change the same table? Does it cause those writer
transactions to block? Perhaps this is configurable.
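
A scrollable-cursor session along those lines might look like this (a sketch
against a hypothetical people table; SCROLL is what guarantees backward
fetches work):

BEGIN;
DECLARE c SCROLL CURSOR FOR
    SELECT name FROM people ORDER BY name;
FETCH FORWARD 5 FROM c;    -- rows 1-5
FETCH BACKWARD 2 FROM c;   -- step back over rows just seen
MOVE ABSOLUTE 0 IN c;      -- rewind to before the first row
FETCH FORWARD 5 FROM c;
COMMIT;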




Re: [GENERAL] How do I use the backend APIs

2006-02-20 Thread Chad
Firstly thank you to all who have taken the time to reply so far.
I need to clarify a few things based on the feedback I have received.

1. I understand the concerns you have about people using internal APIs
that the developers are free to change. I also understand the risks I
take if I use an undocumented API. I would prefer to use a supported
public API with full support of the community. I want a clean API as
you have put it. My understanding is that the level of abstraction at
which I want to use Postgres is not supported by a public API today.
This API may exist at a less official level for internal use. Let's
revisit the argument as to whether I NEED this level of abstraction at
a later point, as people have made good suggestions in this regard to
counter my opinion.

2. I want to base my development on a fully fledged relational DB (not
just an embedded DB). This is because I believe it will increase the
appeal of my product.

3. We see cleanly defined layers in Networking Software and Operating
Systems Software. For example, I can send stuff over a network as
Ethernet Frames if I want to use a library at that level of
abstraction, OR I can send stuff as TCP packets, assuming I have
at my disposal libraries exposing these levels of abstraction. I
believe there is a case for DBMS software to support this concept of
layering. I'm not aware of any standards-based database software stack,
but I believe that if it existed it could result in better reuse of
database core technology. This stack would maybe begin with a buffer
cache at level 0, a data structures layer at level 1 (Btrees, heaps,
etc.), a data model layer at level 2... eventually you work up to client
APIs and so on. Whatever this stack looks like, I would like to be able
to link to a library that exposed the data structure layer of a fully
fledged relational DB. I'm not going to try to implement that from
scratch, and I want what I develop to have full SQL support.

4. Why I think I need to work at this level of abstraction. I believe
certain types of queries can be realized more efficiently by code that
can seek to specific parts of an index and immediately return a
small number of rows to the client. The query would remain active and
the client would only retrieve the next rows after the previous cursor
position when the client was ready to do so. With a huge underlying
table set this would enable very free incremental browsing of the data.
Imagine a table browser application sitting on top of a multi
million-row table. The user types in a partial name match ("Mal").
They want the table to scroll to the first name matching the name they
have typed ("Malcolm"). It should then be cheap to interactively
scroll downwards from here. It should also be cheap to scroll upwards
to view records immediately preceding "Malcolm". The most natural
way I can think of doing this is an API to cursor-seek to "Malcolm"
and navigate forwards or backwards from that cursor position. I feel a
B-Tree cursor API matches the way you would implement a table like
this, only retrieving data as the user scrolls to it.
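
In SQL terms, that seek maps fairly naturally onto an ordered, indexed range
scan (a sketch; the names table and column are invented):

-- seek to the first name at or after 'Mal' and scroll down
SELECT name FROM names
 WHERE name >= 'Mal'
 ORDER BY name
 LIMIT 20;

-- scroll upward: the rows immediately preceding 'Malcolm'
SELECT name FROM names
 WHERE name < 'Malcolm'
 ORDER BY name DESC
 LIMIT 20;

With a btree index on name, both queries can start at the boundary key and
read only the rows actually shown.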




Re: [GENERAL] How do I use the backend APIs

2006-02-17 Thread Chad
In a word: The kind of problems people use Berkeley DB for.

People use BDB for more fine grained cursor access to BTrees. Stuff you
CANNOT do with SQL. There is a market for this. See their website. I'd
like something similar from Postgres so that the data would be stored
in a full fledged RDBMS but I could use the cursor methods for
searching more efficient than SQL. Best of both worlds.

I've had a quick browse around the Postgres code and found some
functions like _bt_first() but no sample code to use it. BTW its for
developing an alternative server based access to the underlying
relational data.




Re: [GENERAL] I see this as the end of BDB in MySQL without a doubt.

2006-02-16 Thread Chad
TJ,

I will go back and study the PG backend API (it's been a while since I
last looked at it) and get back to you. I was assuming rather than
stating that it didn't meet my needs. Do you have a link to its docs?

BDB's basic C API is what I am talking about, and particularly the
ability for fine-grained seeking along leaf nodes, even by row number
in certain types of BTrees.

Ciaran

 I am quite happy with the c-language API for postgres, as far as its
 capabilities and access to low-level postgres.  OK, the docs and examples
 could be better.  Am I missing something in Chad's comments/warnings or is
 he missing something in not understanding pg better?
 Chad, could you say more about what in the BDB/API is missing and needed
 in postgres?

 TJ O'Donnell
 http://www.gnova.com/



[GENERAL] How do I use the backend APIs

2006-02-16 Thread Chad
Hi,

In Postgres, is there a C language API which would give me access to
BTrees like Berkeley DB does? E.g. to seek to a particular key/value pair
and iterate forward from there? If not, what's the nearest thing to this
in Postgres?

Cheers.




[GENERAL] I see this as the end of BDB in MySQL without a doubt.

2006-02-15 Thread Chad
I am not concerned about Sleepycat revoking their open source license
for future versions of BDB. I am even less concerned about them revoking
licenses for current and older releases; that would be impossible.
However, this deal troubles me and I can't quite put my finger on why.
I'll try to tease it out. Please bear with me.

As I understand it Sleepycat make most of their money by selling
commercial licenses to companies who use their stuff but who don't want
to open source their own code. Companies such as these will in the
future be required to talk to Oracle to negotiate a new license. So far
nothing sinister about this.

However, I see MySQL as the future loser here. I cannot see why else
Oracle would buy both of the MySQL storage engines other than to
effectively remove both of them from the MySQL product suite in future
releases, thereby weakening it. I'm just wondering how they are going to
achieve it, though. According to Olson, BDB will still be available
under the dual license. Let's assume for the moment that at least the
open source license will still be available. Happy days, unless of
course the product you own is called MySQL. Do MySQL or any MySQL
customers need a commercial license for BDB? I think not. MySQL does
not, as all its code is open source. As for MySQL customers, unless they
are making direct API calls into BDB (which most don't) I don't think
they are categorized as BDB API users and so can keep their code
proprietary without having to answer to Sleepycat/Oracle for a
commercial license.

Therefore I see only the following mechanisms for Oracle to remove BDB
from MySQL:
1. Discontinue BDB.
2. Change their mind about free licensing and start charging
exorbitant fees for use of BDB, regardless of the type of application.
3. And I feel if 1 and 2 do not happen then this is highly
probable: use a non-compete clause in the BDB license to effectively
prevent companies like MySQL ever licensing BDB again. Sleepycat have a
similar clause in their own license to prevent companies releasing
products using BDB which could be seen to compete with Sleepycat. This
clause will change to refer to Oracle instead of Sleepycat. I hasten to
add this non-compete clause only refers to non-open source applications
today. This will signal the end of the relationship between MySQL and BDB.
The question is: can they put non-compete clauses into open source
licenses? I don't think so. Maybe Oracle will just proceed with step 2
first. Either way there is no way Oracle will allow the situation to
continue where MySQL gets to use BDB, a world-class storage engine, for
FREE, as they happily steal customers from Oracle, the very company that
now owns said engine.

As of today I consider myself to be an EX-Berkeley DB user/developer.
What we need now is an open source DB with clean APIs into various
places in the software stack (e.g. we need a Berkeley DB kind of API
under the hood of something like Postgres). A full bells-and-whistles
relational DB with these low-level ACCESS APIs will be a powerful
thing in the future. PostgreSQL, take note: if you don't already have it
you should begin exposing such a thing today, in my humble opinion.

Being part of a big company changes you. This deal may stifle
innovation in BDB going forward. If so there is an opportunity to fill
that gap. I turn to the PostgreSQL community to rise to the challenge.




Re: [GENERAL] [SQL] ALTER TABLE ... DROP CONSTRAINT

2003-08-01 Thread Chad Thompson






Hi all!

Who can tell me what postgres version supports ALTER TABLE... DROP CONSTRAINT
without the need of dropping the table to remove a simple constraint?

Elielson Fontanezi
PRODAM - Technical Support Analyst
55 11 5080 9283
"Success usually comes to those who are too busy to be looking for it."
[ASCII-art owl signature]

7.3 supports the drop constraint. The only exception being if you upgraded
your database: it keeps the existing trigger-like constraints if you've moved
from 7.1 or 7.2. But these triggers can be dropped as well.

HTH,
Chad

P.S. Great signature! :-)


Re: [GENERAL] Application Design and PostgreSQL

2001-07-23 Thread Chad R. Larson

On Tue, Jul 17, 2001 at 12:59:09PM +0200, Janning Vygen wrote:
 But then I thought: do I really need complex middleware or is it just
 a wrapper around PostgreSQL??

Your original thinking is correct.  You want a front-end (client),
some middleware that implements your business logic, and then the
database.

The client should be a basic display/input tool, ideally a browser.
Every computer has one these days.

The middleware is the part that is particularly custom to your
world.  It should be written as portably as possible, for example
Java Servlets in a container such as Tomcat (free) or JRun (not
free).

The communication to the database engine should be only ANSI SQL via
ODBC or JDBC.

By doing this, you will not be at the mercy of any vendor.  You can
swap database engines if desired.  You can play Oracle off against
Informix or Sybase by saying, "It doesn't really matter to me, give
me your best deal."

Every vendor will try to get you to use their proprietary extensions
and/or 4GL.  Be very aware of the hidden costs of such things.  Use
the extensions if it is the =only= way you can build your
application.

We've been through this design cycle several times for some very
large applications, and we've gotten smarter (you might even say
paranoid) about vendor lock-in each time.

 I think it's possible to implement the whole logic inside postgresql.

Perhaps.  And if your application is for internal use only, and you
never imagine selling a licensed version to anyone, and you'll never
get into the ASP business with your app, go for it.

But some day, some pesky customer will say, "Yes, but we've already
got a site license to Informix and a bunch of Informix DBAs, so we
won't buy unless it can run on Informix" (replace with your favorite
annoying commercial database vendor here).  Layer your application
properly and you can say, "We think that would be a mistake, but we
can accommodate your need."

 -crl
--
Chad R. Larson (CRL22)[EMAIL PROTECTED]
  Eldorado Computing, Inc.   602-604-3100
 5353 North 16th Street, Suite 400
   Phoenix, Arizona   85016-3228




Re: [GENERAL] upgrading postgres from 7.0 to 7.1

2001-07-17 Thread Chad R. Larson

On Mon, Jul 02, 2001 at 11:52:47PM -0700, Pieter Vissenberg wrote:
 The postmaster seems to be running smoothly (psql session works
 fine).  However when I try to connect to the database via Apache
 with PHP, I get an error message in the style of 'pconnect()
 failed.  Is the postmaster (with -i) running and listening to port
 5432 ?'

Does pg_ctl status indicate that the postmaster is running with
the -i switch?  That is needed for the postmaster to accept TCP/IP
connections.

Then check the contents of your pg_hba.conf file in your $PGDATA
directory.  It should have an entry for the host from which you're
connecting.  For example, if you're running the Apache server on the
same machine as the database engine, a line like:

local all trust

would suffice.

 -crl
--
Chad R. Larson (CRL22)[EMAIL PROTECTED]
  Eldorado Computing, Inc.   602-604-3100
 5353 North 16th Street, Suite 400
   Phoenix, Arizona   85016-3228




Re: [GENERAL] NULL values

2001-07-17 Thread Chad R. Larson

On Tue, Jul 10, 2001 at 09:40:31AM +0100, Mark Muffett wrote:
 Is there a Postgresql equivalent to the Oracle NVL( ) function,
 which allows a SELECT statement to fill in default values if a
 column is NULL?

Yes.

Use the DEFAULT keyword while creating the table.

That is:

CREATE TABLE account (
nameCHAR(20),
balance NUMERIC(16,2) DEFAULT 0,
active  CHAR(2) DEFAULT 'Y',
created TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

Then:

INSERT INTO account (name)
VALUES ('Federated Builders');

will leave the inserted rows with no nulls, but the balance will be
zero, and the created field will have the date/time of the insert.
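
For completeness: for rows that already contain NULLs, the closest direct
match to Oracle's NVL at SELECT time is the standard COALESCE function
(shown here against the account table above):

SELECT name, COALESCE(balance, 0) AS balance
FROM account;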

 Mark Muffett

 -crl
--
Chad R. Larson (CRL22)[EMAIL PROTECTED]
  Eldorado Computing, Inc.   602-604-3100
 5353 North 16th Street, Suite 400
   Phoenix, Arizona   85016-3228
