Re: [GENERAL] RE: postgresql and web transactions

2000-02-18 Thread Lincoln Yeoh

At 10:10 AM 17-02-2000 +0100, Nemeth Miklos wrote:
 And if this isn't too inflammatory a question, where does PostgreSQL stand
 in comparison to MySQL and DB2? (Both of which aren't free, I believe.)

MySQL does NOT support transactions, which are indispensable for a real
DBMS used as the primary data storage system for reliable/consistent
business transactions.
However, MySQL is the fastest RDBMS ever seen, and perfect for data
warehouses and DSS and EIS purposes, as well as for
query databases for web sites.

MySQL is fast! As a data point, one of my webapps does 30-50 hits/sec with
MySQL and 7-8 hits/sec with Postgres. Quite a significant difference. Heh,
one of my colleagues' PHP + MySQL pages does 50+ hits/sec too, on just a
workstation-level machine...

However, Postgres has triggers and transactions, and it's free and Open
Source. MySQL is USD 200 for commercial use and AFAIK not quite Open Source
(at least not the latest versions; the older versions were made Open Source,
I think).

About transactions, I find that they're not that useful in a typical web
app, because:
1) Many web apps are simple and don't need transactions, or have simple
workarounds.
2) When you do need transactions, HTTP is not very state-friendly. Unless
you do something special, a database-level transaction typically lasts only
one page, so if your real-life transaction spans multiple pages, the
database-level transaction system isn't going to help much.

Anyone have any ideas on how to get around that with Postgres? Because if
we have to do transactions at the application level anyway, it might
actually be better to just go with MySQL than to do transactions at both
the app and database level. Hmm, any ideas on how to do MVCC at the app
level? ;)

A session ID column and a committed boolean column? Or create a table for
each session (ouch!)? Is that what temp tables are for? Do temp tables
persist across connections?

Or is writing a 'session' server the way to do it?
e.g.
web server -> CGI/app server -> session server -> database server

The session server is given a web session ID plus the SQL to run, and uses
them to talk to the database. It also times out idle connections.

Trouble is, you don't know when users leave your site, so you could end up
with tons of simultaneous database connections. Probably only doable or
useful for niche applications: high-concurrency, limited-user apps.

Any better ideas?

BTW, somehow we seem to be going with DB2 (dunno exactly why, but at least
it's tons cheaper than Oracle, though infinitely more expensive than Postgres
;) ). Don't know much about DB2...

Cheerio,

Link.









[GENERAL] ORDER BY problems

2000-02-18 Thread David Shrewsbury

I cannot seem to get ORDER BY to work properly when I
want to sort using two different columns. I have code
similar to the following:

SELECT tracking_num, username
FROM reports
WHERE customer='$customer'
ORDER BY tracking_num, username

This will sort by tracking_num (int4, primary key), but
NOT by the username (varchar, not null). The confusing thing
is if I use:

ORDER BY username, tracking_num

it works, but it sorts by username first and then the
tracking number which is the reverse of what I want to do.
Why doesn't the first bit of code work as I expect?

-David





Re: [GENERAL] RE: postgresql and web transactions

2000-02-18 Thread Manuel Lemos

Hello Lincoln,

On 18-Feb-00 05:40:51, you wrote:

About transactions, I find that they're not that useful in a typical web
app, because:
1) Many web apps are simple and don't need transactions, or have simple
workarounds.

You mean that applications with almost read-only databases, or only
single-table updates, do not require transactions. Outside that scenario you
can't live without transactions, unless you want to risk your application
breaking due to database inconsistencies when many users are working with it.
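
For example (a rough sketch, with invented table names), anything that has
to touch two tables as a unit needs the statements wrapped together:

BEGIN;
-- decrement stock and record the order as one atomic unit
UPDATE inventory SET quantity = quantity - 1 WHERE item_id = 42;
INSERT INTO orders (item_id, customer_id) VALUES (42, 7);
COMMIT;

Without the transaction, a failure or a concurrent user hitting the database
between the two statements leaves inventory and orders disagreeing with each
other.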


2) When you do need transactions, HTTP is not very state-friendly. Unless
you do something special, a database-level transaction typically lasts only
one page, so if your real-life transaction spans multiple pages, the
database-level transaction system isn't going to help much.

The way I see it, transactions are meant to make a set of queries virtually
atomic. The queries inside a transaction should be issued one right after
the other; you should not leave a transaction open for an unbounded period
of time.

If you need to hold locks on data for an unknown amount of time, you'd
better find solutions other than transactions. For instance, if you want to
hold a ticket reservation for a client before he decides to purchase it, you
should not keep a transaction open just to lock the reservation.
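
A rough sketch of what I mean (table and column names are invented, and the
exact date arithmetic syntax may vary): instead of keeping a transaction
open, mark the ticket as held until some expiry time:

-- hold the ticket for ten minutes, but only if nobody else holds it
UPDATE tickets
SET held_by = 'session-abc123',
    held_until = now() + '10 minutes'
WHERE ticket_id = 42
  AND (held_until IS NULL OR held_until < now());

The purchase (or a periodic cleanup job) checks held_by and held_until later,
so no database lock stays open while the client makes up his mind.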


Regards,
Manuel Lemos

Web Programming Components using PHP Classes.
Look at: http://phpclasses.UpperDesign.com/?[EMAIL PROTECTED]
--
E-mail: [EMAIL PROTECTED]
URL: http://www.mlemos.e-na.net/
PGP key: http://www.mlemos.e-na.net/ManuelLemos.pgp
--






Re: [GENERAL] ORDER BY problems

2000-02-18 Thread omid omoomi

hi,
As far as I understand, tracking_num is the primary key and therefore
unique, so once the rows are sorted by it there is nothing left for the
other fields to sort on; a second sort key only matters when the first one
has duplicate values. Maybe you meant to sort the data some other way, but
I think your query is working correctly.
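
For example, with some made-up rows, the second column never gets a chance
to change the order, because the first one has no duplicates:

SELECT tracking_num, username FROM reports ORDER BY tracking_num, username;

 tracking_num | username
--------------+----------
            1 | zoe
            2 | alice
            3 | alice

ORDER BY tracking_num alone gives exactly the same output; username would
only matter if two rows shared the same tracking_num.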
Omid Omoomi

From: David Shrewsbury [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: [GENERAL] ORDER BY problems
Date: Fri, 18 Feb 2000 20:52:06 +

I cannot seem to get ORDER BY to work properly when I
want to sort using two different columns. I have code
similar to the following:

SELECT tracking_num, username
FROM reports
WHERE customer='$customer'
ORDER BY tracking_num, username

This will sort by tracking_num (int4, primary key), but
NOT by the username (varchar, not null). The confusing thing
is if I use:

 ORDER BY username, tracking_num

it works, but it sorts by username first and then the
tracking number which is the reverse of what I want to do.
Why doesn't the first bit of code work as I expect?

-David



__
Get Your Private, Free Email at http://www.hotmail.com






[GENERAL] problems with memory

2000-02-18 Thread Dean Browett

Hi,

We are running postgres-6.5.3 on a dual pentium 300 machine, 0.5Gb RAM under
Linux Redhat 6.0 (kernel 2.2.14). The machine we are using sits on a 100Mb
network and the nics are 3com3c590's. We are also using a DPT Raid
controller in a raid5 configuration set up as 1 logical drive.

We are trying to insert a large amount of data into the database. What
happens is that when we first start loading data everything is fine, but
over a period of time (1.5 hrs) there is a marked decrease in performance,
while memory and CPU usage climb. By then CPU usage has crept up to 45-50%,
memory usage is 100Mb and rising slowly, and there is only one connection
to the database.

All the statistics are gleaned from using 'top'.


Consequently, the database gets slower and slower until it loads at the rate
of 3KBps at which point it becomes more exciting to watch paint dry 8-)).

Postgres is the only program running (except for normal system programs)
that uses a significant amount of memory.

Is Postgres known to leak memory? What causes the high cpu usage?

Any ideas would be greatly appreciated.

Dean






[GENERAL] newbie Q's

2000-02-18 Thread Gary Horton

I have newcomer-type questions about using PostGreSQL and have been
directed to this mailing list by the site's webmaster. Please feel free
to redirect me if that's appropriate.

I first wonder if there is a pgsql newsgroup or some other type of
archive with FAQs, etc.

Secondly, I seek an E/R modeling tool that can forward-generate
appropriate DDL for PostGreSQL.

The pgsql Projects page alludes to providing "full support for FOREIGN
KEY" to the product. What caveats should I be aware of concerning just
how incomplete this support is currently, and when these issues might be
addressed?

Finally, I wonder about JDBC drivers for PostGreSQL, in particular
supporting JDBC 2. If there are none for JDBC 2 (I know at least OpenLink
supports JDBC 1.x), do we know of vendors planning support for it, and when?

Thanks VERY much for any help in these things.

Gary Horton



--


"I have made this letter longer than usual, because I lack the time
to make it short." -- Blaise Pascal

Gary HortonSoftware Engineer
PAGE/UCAR
Program for the Advancement of Geoscience Education
University Corporation for Atmospheric Research
Boulder CO 80307
Voice 303-497-8315Fax 303-497-8336Email [EMAIL PROTECTED]









Re: [GENERAL] problems with memory

2000-02-18 Thread Alfred Perlstein

* Dean Browett [EMAIL PROTECTED] [000218 15:55] wrote:
 Hi,
 
 We are running postgres-6.5.3 on a dual pentium 300 machine, 0.5Gb RAM under
 Linux Redhat 6.0 (kernel 2.2.14). The machine we are using sits on a 100Mb
 network and the nics are 3com3c590's. We are also using a DPT Raid
 controller in a raid5 configuration set up as 1 logical drive.
 
 We are trying to insert a large amount of data into the database. What
 happens is that when we first start loading data everything is fine, but
 over a period of time (1.5 hrs) there is a marked decrease in performance,
 while memory and CPU usage climb. By then CPU usage has crept up to 45-50%,
 memory usage is 100Mb and rising slowly, and there is only one connection
 to the database.
 
 All the statistics are gleaned from using 'top'.
 
 
 Consequently, the database gets slower and slower until it loads at the rate
 of 3KBps at which point it becomes more exciting to watch paint dry 8-)).
 
 Postgres is the only program running (except for normal system programs)
 that uses a significant amount of memory.
 
 Is Postgres known to leak memory? What causes the high cpu usage?
 
 Any ideas would be greatly appreciated.

You really haven't given very much information on the rules and constraints
in your tables. One problem I had was that a constraint on one of my tables
caused extreme slowdown: each inserted row had to be validated against the
constraint, and as the table grew, the amount of data that had to be scanned
for each insert grew along with it.

hope this helps,
-Alfred





Re: [GENERAL] problems with memory

2000-02-18 Thread selkovjr

 * Dean Browett [EMAIL PROTECTED] [000218 15:55] wrote:
  Hi,
  
  We are running postgres-6.5.3 on a dual pentium 300 machine, 0.5Gb RAM under
  Linux Redhat 6.0 (kernel 2.2.14). The machine we are using sits on a 100Mb
  network and the nics are 3com3c590's. We are also using a DPT Raid
  controller in a raid5 configuration set up as 1 logical drive.
  
  We are trying to insert a large amount of data into the database. What
  happens is that when we first start loading data everything is fine, but
  over a period of time (1.5 hrs) there is a marked decrease in performance,
  while memory and CPU usage climb. By then CPU usage has crept up to 45-50%,
  memory usage is 100Mb and rising slowly, and there is only one connection
  to the database.
  
  All the statistics are gleaned from using 'top'.
  
  
  Consequently, the database gets slower and slower until it loads at the rate
  of 3KBps at which point it becomes more exciting to watch paint dry 8-)).
  
  Postgres is the only program running (except for normal system programs)
  that uses a significant amount of memory.
  
  Is Postgres known to leak memory? What causes the high cpu usage?
  
  Any ideas would be greatly appreciated.
 
 You really haven't given very much information on the rules and constraints
 in your tables. One problem I had was that a constraint on one of my tables
 caused extreme slowdown: each inserted row had to be validated against the
 constraint, and as the table grew, the amount of data that had to be scanned
 for each insert grew along with it.
 
 hope this helps,
 -Alfred

The same is true of any type of index. Make sure you don't have any
indices defined until after the insert is done. Also, do a COPY instead of
INSERT if possible.
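
Roughly like this (the index and file names are made up):

-- drop the index before the bulk load
DROP INDEX reports_customer_idx;

-- COPY loads rows in bulk, much faster than row-by-row INSERTs
COPY reports FROM '/tmp/reports.dat';

-- rebuild the index once, after all the data is in
CREATE INDEX reports_customer_idx ON reports (customer);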

--Gene





Re: [GENERAL] newbie Q's

2000-02-18 Thread Ed Loehr

Gary Horton wrote:
 
 I have newcomer-type questions about using PostGreSQL and have been
 directed to this mailing list by the site's webmaster. Please feel free
 to redirect me if that's appropriate.
 
 I first wonder if there is a pgsql newsgroup or some other type of
 archive with FAQs, etc.

Hi Gary,

For FAQs, see www.postgresql.org under Info Central...Documentation.
There are 5-10 mailing lists noted nearby on that page. Regarding archives,
one is there as well, though I also use the deja.com power search (most
pgsql mailing lists are archived there).

 Secondly, I seek an E/R modeling tool that can forward-generate
 appropriate DDL for PostGreSQL.

[If it exists, I'd be interested in hearing about it, too...]

 The pgsql Projects page alludes to providing "full support for FOREIGN
 KEY" to the product. What caveats should I be aware of concerning just
 how incomplete this support is currently, and when these issues might be
 addressed?

Caveat: brand new, never-been-brutalized-in-prime-time functionality, due
to be released in its first beta cycle sometime in the next month or so,
maybe.  The most informative (and most time-consuming) answer to your
question is probably to get the beta and hammer on it.  But you might find
a few known issues by searching the hackers list via deja.com or the pgsql
web site search engine.  There has been some discussion there, though I
haven't kept up.  Search for stuff by Jan Wieck (FKeys leader).

Cheers,
Ed Loehr