Re: [GENERAL] Suggested improvement : Adjust SEQUENCES to accept an INCREMENT of functionname(parameters) instead of an integer

2001-06-28 Thread Shaun Thomas

On Fri, 22 Jun 2001, Justin Clift wrote:

> i.e. CREATE SEQUENCE newseq INCREMENT trunc(random() * 10);

Didn't you ask this like 2 weeks ago?

I said it once, I'll say it again.  Stop being lazy, and write a trigger.
The Postgres developers are *not* going to alter the core functionality of
their database to include something that has always been available.

http://www.postgresql.org/idocs/index.php?triggers.html

Read it, use it, love it.
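
For what it's worth, a rough sketch of what that trigger could look like (the table, column, and function names here are all made up, and this is untested against 7.1 - check the docs above for the real syntax):

```sql
CREATE SEQUENCE newseq;

-- Bump the sequence by a random step (1-10) and use the result as the id.
CREATE FUNCTION newseq_random_step() RETURNS opaque AS '
BEGIN
    NEW.id := setval(''newseq'',
                     nextval(''newseq'') + trunc(random() * 10)::integer);
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER newseq_trig BEFORE INSERT ON mytable
    FOR EACH ROW EXECUTE PROCEDURE newseq_random_step();
```

Same effect as the proposed syntax, no changes to the backend required.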

-- 
+-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-+
| Shaun M. Thomas                        INN Database Programmer      |
| Phone: (309) 743-0812                  Fax  : (309) 743-0830        |
| Email: [EMAIL PROTECTED]               AIM  : trifthen              |
| Web  : hamster.lee.net                                              |
|                                                                     |
| "Most of our lives are about proving something, either to           |
|  ourselves or to someone else."                                     |
|                                        -- Anonymous                 |
+-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-+



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [GENERAL] Suggested improvement : Adjust SEQUENCES to accept an INCREMENT of functionname(parameters) instead of an integer

2001-06-28 Thread Shaun Thomas

On Wed, 27 Jun 2001, Justin Clift wrote:

> Hi Matt,
>
> I'm looking for a way to change an existing sequence's "increment" value on
> the fly (after it's been created).
>
> Can't seem to find a function which does this either.  Being able to change
> the increment every now and again would prove useful in some scenarios.

Unfortunately, there is no "ALTER SEQUENCE" syntax in Postgres yet.
You could emulate it by having a trigger consult a config table of
some sort with a column defining the increment size.  Then all
you'd have to do is change that value.  Sadly, that would cost you
a performance hit on inserts.

Postgres is still missing a lot of "ALTER" syntax; you just have to learn
to live with it.  I'm not all that happy about it, either.
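
A rough sketch of that emulation (every name here is invented, and it's untested):

```sql
-- One row per sequence whose step we want to adjust on the fly.
CREATE TABLE seq_config (seqname text, step integer);
INSERT INTO seq_config VALUES ('myseq', 5);

CREATE SEQUENCE myseq;

CREATE FUNCTION myseq_step() RETURNS opaque AS '
DECLARE
    s integer;
BEGIN
    SELECT step INTO s FROM seq_config WHERE seqname = ''myseq'';
    -- nextval advances by 1; setval pushes the sequence the rest of the way.
    NEW.id := setval(''myseq'', nextval(''myseq'') + s - 1);
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';
```

"Changing the increment" is then just an UPDATE on seq_config - at the cost of that extra lookup on every insert.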







Re: [GENERAL] PL/java?

2001-08-27 Thread Shaun Thomas

On 25 Aug 2001, Doug McNaught wrote:

> > Can someone explain why the addition of a stored procedural language for
> > MySQL made it as a Slashdot headline?
>
> Probably because /. uses MySQL (poor benighted fools ;)

Back when Slashdot was designed, Postgres was crap.  We have old versions
we're still getting rid of, and they're the biggest headache in the world.

I've actually used \d and the back-end crashed.  Usually this happens when
the database has handled around 15k queries in one session, or someone,
somewhere, even looks in the direction of a row that is anywhere near the
8k limit.

It's very simple.  Anything before postgres 7.1 was complete, utter crap.
Slashdot was around way before 7.1, hence mysql.

Personally, I laud their decision.  I mean, I've never had "SHOW TABLES"
crash a MySQL database, yet \d (or even a single-table, no-WHERE-clause
SELECT) can crash the back end of Postgres 7.0.3.  We can't even do bulk
inserts on these tables (20k rows) because the back end inexplicably dies
before it finishes.  I'm talking about plain INSERT statements, not the
Postgres-proprietary COPY.







Re: MySQL's (false?) claims... (was: Re: [GENERAL] PL/java?)

2001-08-27 Thread Shaun Thomas


> => In MySQL you have to repair your tables manually if corruption occurs.
> PostgreSQL is coded so that corruption cannot occur.

Unless you're running pre-7.1, in which case doing any of the following
may corrupt an entire database so badly that pg_dumpall crashes on it.

 * Start with table A.  Create mirror table B.  Insert into B.  Drop A.
   Rename B to A.  Watch the backend crash randomly, corrupting said
   table beyond recognition - hence corrupting the entire database.  This
   may only happen once every 1000 times the process is repeated, but it
   *will* occur eventually.  This happened more in 6.5.
 * Select, insert, update, whatever.  Eventually, postgres will
   report that the back end has "exited unexpectedly."  This is
   easier to repeat on an installation serving many simultaneous
   connections, especially if any database has been affected by:
 * Inserting any row with a total column length of 8k or higher, minus
   row/column overhead.  For even more fun, insert a row of arbitrary
   length, or use multiple text columns.
 * Selecting, updating, or even remotely touching any table which has
   an example of the above.  Yes, this means that you can't even
   delete the offending row, or pg_dump the database to remove it
   manually.

What about pg_dump, you say?  Sure, that'll work.  Dump the tables that
aren't corrupted - as if you know which ones they are.  Then all you have
to do is not give a rat's ass about the data in the table that *is*
corrupted.  Sounds easy, right?

All of this vanished like smoke when 7.1 came out.  In my opinion, 7.1 is
the first real release of Postgres, and hence MySQL is fully justified in
most of its accusations/comparisons.  Until 7.1, Postgres didn't have a
snowball's chance in hell of beating MySQL on the stability front; now
the odds are a little more even.

Either way, don't dare sit there and tell me Postgres doesn't corrupt
tables.  I would actually prefer a utility that integrity-checks and
repairs a corrupted table into something the database could read, rather
than giving the data up for lost and running for our backups, as we have
been doing.







Re: [GENERAL] Re: MySQL's (false?) claims... (was: Re: PL/java?)

2001-08-27 Thread Shaun Thomas

On Mon, 27 Aug 2001, Andre Schnabel wrote:

> I simply wonder if any of these guys ever took a lesson in database
> design.  If I had told my professor such wonderful ideas about foreign
> keys, I'd have been thrown out of the university immediately.

I agree.  While you're at it, tell him you won't have a corresponding drop
for every create, and that outer joins are useless.  Then, while he's
laughing at you, tell him you'll restrict data to 8k per row, and that
you won't truncate inserted data, because people will always follow the
rules.  Then, as he's rolling on the floor clutching his stomach, tell him
you'll add "cool" stuff like table inheritance before even *considering*
adding the things listed previously.  Now he's died laughing.  Great.
You've killed your professor.  Bastard.

Now.  Consider that until 7.1, all of the above was true about Postgres.

Hmm... Bad DB design... what was that you were saying again?







Re: [GENERAL] PL/java?

2001-08-28 Thread Shaun Thomas

On Mon, 27 Aug 2001, Tom Lane wrote:

> The latter is what I'm interested in, since \d doesn't invoke anything
> that I'd consider crash-prone.  Could you submit a debugger backtrace
> from the crash?

I should do that.  But, since it's the back-end that's crashing, I'd need
to find some way of getting a core dump.  So far, it isn't producing any.
I'll have to play with the environment to see why.

> Yes, I know, 6.* was not very careful about defending itself from tuples
> over 8K.  But 7.0 is, which is why I don't think that the tuple length
> is relevant.  I'd like to quit bandying accusations about and instead
> find out *exactly why* Postgres is crashing on you, so that we can
> either fix it or confirm that it's been fixed already.

Yeah, I know.  I was just trying to defend mysql. ^_^  We use both, and so
far, it's been the smaller headache, so...  I'll do what I can to get you
a backtrace.  The really strange thing is, one of our newer databases has
started hanging on vacuums.  That's a 7.1.1, so the 8k thing shouldn't be
any kind of issue in the slightest thanks to the new internal structures.

But something is corrupt enough to break vacuum badly.  That doesn't make
me feel very good.  The worst part is, while it's hung on the vacuum, idle
connections just start piling up until we have to restart the DB.

That's no good.







Re: [GENERAL] PL/java?

2001-08-28 Thread Shaun Thomas

On 28 Aug 2001, Doug McNaught wrote:

> You obviously know what you're doing, but are you absolutely sure one
> of your clients isn't holding a transaction open?  That'll hang vacuum
> every time...

Yup.  We wrote the client that is accessing the database.  It's using
PHP, and we don't even *use* transactions currently.  But that isn't the
problem.  From what I gather so far, the server is under fairly high
load (load average 6 right now), so vacuuming the database (520MB in
files, 5MB dump) takes a *long* time.  While it's vacuuming, anything
using that database just has to wait, and that's our problem.

Actually, on a whim, I dumped that 520MB database to its 5MB file, and
reimported it into an entirely new DB.  It was 14MB.  We vacuum at least
once an hour (we have a loader that runs every hour; it may run multiple
concurrent insert scripts).  We also use VACUUM ANALYZE.  So, I really
can't see a reason for it to balloon to that horridly expanded size.

Maybe stale indexes?  Aborted vacuums?  What on earth would cause that?







Re: [GENERAL] PL/java?

2001-08-28 Thread Shaun Thomas

On 28 Aug 2001, Doug McNaught wrote:

> > Maybe stale indexes?  Aborted vacuums?  What on earth would cause that?
>
> VACUUM doesn't currently vacuum indexes.  Yes, it's a serious wart.  :(

Ah, now that makes sense.  It would also explain why our daily inserts
of many thousands of rows on a fairly regular basis would slowly bloat
the db.  It would also explain why the old system, which didn't use
indexes at all, didn't have this problem.  It would also explain why
the query optimizer picks crap plans, since the indexes are completely
inaccurate.

Hmm.  That's more than a wart, that's nearly a show-stopping bug.

> I suggest drop/recreate the indexes at intervals.  Or try REINDEX,
> which may work better.

Reindex is really our only option.  The database schema is complex enough
that dropping and recreating the indexes is dangerous (esp. primary keys),
and we also want to keep user databases from having to do this - and we
don't know the details of those DBs.

Unfortunately, REINDEX can only be run while the DB is down.  ::sigh::
So, it looks like a cron job to run at 2am.

# --- Sketch (untested; flags for the standalone backend from memory) --- #

#!/bin/sh
DBS=`psql -t -A -c "SELECT datname FROM pg_database" template1`

pg_ctl stop -m fast                    # take the backend down

for db in $DBS; do
    # 7.1 wants REINDEX DATABASE run from a standalone backend (-O -P)
    echo "REINDEX DATABASE $db;" | postgres -D $PGDATA -O -P $db
done

pg_ctl start                           # put the backend back up

echo "Damn Vacuum."

# --- End Sketch --- #

Ew







RE: [GENERAL] RFC: PostgreSQL and MySQL comparison.

2001-08-30 Thread Shaun Thomas

On Wed, 29 Aug 2001, Robert J. Sanford, Jr. wrote:

> http://www.phpbuilder.com/columns/tim20001112.php3

Now *that* was very informative, thank you.

The best benefit to this is that the optimization engine is supposedly
vastly improved in the 7.2 tree, so that'll just increase the lead.  If
they clean up vacuum to actually handle indexes, the planner will have a
better chance of picking more optimal execution plans, too.

I'm glad that development has picked up.  It seemed like 6.5.x would be
around forever.

Thanks for knocking down the walls guys. ^_^







Re: [GENERAL] MySQL treads belong else where.

2001-08-30 Thread Shaun Thomas

On Wed, 29 Aug 2001, Guy Fraser wrote:

> I would appreciate it if the MySQL zealots with troglodytical
> mentalities would confine themselves to their own mailing list.

Er?  Man, everybody just has to chime in with their $0.02.  Let the
moderators handle it, oh flaming sword o' justice.

> The odd comparison is OK but the flame wars are a waste of storage.

Which, while an amusing read, also point out actual flaws in Postgres
that could stand fixing.  People do bitch without a reason, but
they do it more often when they have something to bitch about.

Me, I was just playing devil's advocate.  Looks like not everyone
caught that.  Oh well.

> PS If you don't understand any of these words use a dictionary to find
> out what they mean. Don't just presume they are insults.

Now, now.  You had me going for a while, but you have just instantly
turned what may have been a mildly informative post into a pathetic
flame.  I mean, really.  What was that about troglodytical mentalities?

For the most part, I agree with you.

For everyone else who doesn't get it: upgrade.  Upgrade now, upgrade
quickly, and upgrade until you can't upgrade anymore.  Postgres 7.1
is not the Postgres of yesteryear.  Postgres has evolved past the horrid
thing MySQL compared itself to.  The best part is that 7.2 will be
even better.

The point here is to know postgres's flaws.  Ignore the flames, and
upgrade.  The more people using the newer versions, the better the
next one will be.

C'mon, kiss a developer today!







Re: [GENERAL] Too many open files in system FATAL2

2001-08-31 Thread Shaun Thomas

On Thu, 30 Aug 2001, Christian MEUNIER wrote:

> got the following happened yesterday:
>
> postmaster: StreamConnection: accept: Too many open files in system
> postmaster: StreamConnection: accept: Too many open files in system
> postmaster: StreamConnection: accept: Too many open files in system
> 2001-08-30 03:04:27 FATAL 2:  InitOpen(logfile 3 seg 199) failed: Too many
> open files in system
> Server process (pid 21508) exited with status 512 at Thu Aug 30 03:04:27
> 2001
> Terminating any active server processes...

Most unix systems have a pre-set limit on the number of open file
handles across all running applications.  If you're running a lot of
applications on your server along with Postgres, they may be consuming
vital system resources (file handles) that Postgres wants.

Or, your database may just be making enough connections that it's
consuming all open file handles.  Whatever OS you're using, check
the manual to see how to add more file handles.  This may involve
recompiling the kernel.

Your other problem might be a deadlock.  If postgres gets deadlocked in a
transaction, or has a lock during a vacuum, all subsequent connections
will connect, try a query and then wait indefinitely in an idle state.
This keeps up until there are possibly hundreds (if you allow that many)
postgres connections tying up more and more file handles until there are
none left.

In any case, I'd check the other apps first.  Then, see if the kernel is
compiled with an adequate number of file handles.  Then, check through
your application for deadlock conditions and vacuums during transactions.
(Don't do that, by the way.)
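
On Linux, for example, the current limit lives in /proc, so you can check and raise it without a recompile (a sketch; the exact knob and whether it's tunable at runtime varies by OS and kernel version):

```shell
# Linux-specific sketch; numbers are illustrative.
cat /proc/sys/fs/file-nr     # handles allocated / free / maximum
cat /proc/sys/fs/file-max    # the system-wide ceiling
# As root, you can usually raise the ceiling on the fly:
#   echo 16384 > /proc/sys/fs/file-max
```

Other unixes tend to bury the equivalent in kernel config, which is where the recompile comes in.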

If you have a high-traffic DB with lots of inserts, updates, and
deletes, your indexes might be disgustingly out of sync, turning your
DB into a slow memory-, cpu-, and file-handle-hogging dog.  Postgres has
a REINDEX command; run that on your DB and see if the problem goes away.







Re: [GENERAL] Regarding Vacuumdb

2001-08-31 Thread Shaun Thomas

On Tue, 28 Aug 2001, Bhuvaneswari wrote:

> hi,
> I am getting the following error while doing vacuumdb,
>
> ERROR: mdopen: couldn't open test1: No such file or directory
> vacuumdb: database vacuum failed on db1.

We got this error a lot in 6.5.  Usually it means your table has somehow
been corrupted, and postgres doesn't want anything to do with it.  It'll
show up, and you can even select from it, but doing so will crash the
back-end, and you can't run a vacuum or pg_dump on that database
successfully.

You'll have to do a table-by-table pg_dump, destroy the DB, and reimport
everything.  You'll have to rebuild the corrupted table from scratch,
since you might not be able to dump it.

Either way, it's a lot of work.  Just be careful.







Re: [GENERAL] How to make a REALLY FAST db server?

2001-09-10 Thread Shaun Thomas

On Mon, 10 Sep 2001, bpalmer wrote:

> - Hardware:  dual / quad Intel class

Fairly easy to obtain.  If all you want is a dual, you can use
desktop-class motherboards from makers such as Asus, Abit, and IWill.
If you're going for speed, stick to DDR- or SDRAM-capable boards.

> - Disk:  SCSI Raid 1+0

To really eke out as much speed as possible here, you'll want 10k RPM
Ultra-160 or Fibre Channel SCSI drives with a dedicated hardware raid
controller.  If you have more reads than writes, you may want to use
Raid 5 instead.

Postgres won't let you separate indexes from the database they represent,
so you can't make separate raid clusters for indexes and data; no
optimization there.  Maybe in the next version that implements
schemas?  What you can do if you use multiple DB's in your app design,
is put different DB's on different raid clusters.  That'll help parallel
execution times.  If you do this, make sure template1 and template0 are
separated from the rest of the databases; this will allow fast responses
from the system tables and make sure no application database IO affects
them adversely.
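
If memory serves, 7.x lets you place a whole database at an alternate location, which is how you'd spread databases across raid clusters (paths here are illustrative; check the CREATE DATABASE docs for the exact restrictions on absolute paths):

```sql
-- First, as the postgres user, prepare the area on the other cluster:
--     initlocation /raid2/pgdata
-- Then create the database there:
CREATE DATABASE appdb WITH LOCATION = '/raid2/pgdata';
```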

> - Ram:  Not really sure here.  Is there math somewhere for ram needs for
> pgsql? I imagine is has something to do with # connections,  db size,
> etc.

No reason not to go 2GB.  Ram is cheap these days, and you can always
increase shared buffers and caches to actually fill the server memory
up with as much quick-fetch info as possible.
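
For instance, something like this in 7.1's postgresql.conf, with the kernel's SHMMAX raised to match (the numbers are illustrative guesses, not recommendations - tune for your workload):

```
shared_buffers = 15200     # ~120MB of shared cache at 8k pages
sort_mem = 8192            # per-sort memory, in kilobytes
```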

All in all, if you're making a DB machine, do whatever you can to get
rid of hits caused by disk IO.  Parallelize as much as possible between
your databases, and if you have a DB capable of separating indexes from
the mix, do that too.  Don't run any other services on it, and make
sure it has a nice wide 100MBit or 1GBit pipe so it doesn't saturate when
servicing multiple hosts.

Hope that helps.



