Re: [GENERAL] Recommended FS

2003-10-24 Thread Nick Burrett
Ben-Nes Michael wrote:

> But still the greatest question is what FS to put it on?
> I heard ReiserFS can handle small files very quickly.

Switching from ext3 to ReiserFS for our name servers reduced the time
taken to load 110,000 zones from 45 minutes to 5 minutes.

However, for a database I don't think you can really factor this type of
stuff into the equation.  The performance benefits you get from
different filesystem types are going to be small compared to the
improvements you can make to your database structure, queries and
applications.  The time spent in the algorithms that actually process
the data will far exceed the time taken to fetch it off disk.

--
Nick Burrett
Network Engineer, Designer Servers Ltd.   http://www.dsvr.co.uk


Re: [GENERAL] Recommended FS

2003-10-24 Thread Nick Burrett
Ben-Nes Michael wrote:

> ----- Original Message -----
> From: "Nick Burrett" <[EMAIL PROTECTED]>
>
>> Ben-Nes Michael wrote:
>>
>>> But still the greatest question is what FS to put it on?
>>> I heard ReiserFS can handle small files very quickly.
>>
>> Switching from ext3 to ReiserFS for our name servers reduced the time
>> taken to load 110,000 zones from 45 minutes to 5 minutes.
>> However, for a database I don't think you can really factor this type
>> of stuff into the equation.  The performance benefits you get from
>> different filesystem types are going to be small compared to the
>> improvements you can make to your database structure, queries and
>> applications.  The time spent in the algorithms that actually process
>> the data will far exceed the time taken to fetch it off disk.
>
> So you say the FS has no real speed impact on the DB?
>
> In my pg data folder I have 2367 files, some big, some small.
I'm saying: don't expect your DB performance to improve by leaps and
bounds just because you changed to a different filesystem format.  If
you've got speed problems, it might help to look elsewhere first.
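
As a first step, EXPLAIN ANALYZE (available since PostgreSQL 7.2) will
show where the time in a slow query actually goes; the table and
columns below are hypothetical stand-ins:

  EXPLAIN ANALYZE
    SELECT date, sum(bytesin)
    FROM vs_foo
    GROUP BY date;

If most of the time lands in sequential scans or sorts, schema and
index changes will buy far more than a filesystem switch.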

--
Nick Burrett
Network Engineer, Designer Servers Ltd.   http://www.dsvr.co.uk


Re: [GENERAL] ShmemAlloc errors

2003-10-17 Thread Nick Burrett
Tom Lane wrote:

> Nick Burrett <[EMAIL PROTECTED]> writes:
>> Tom Lane wrote:
>>> We don't normally hear of people needing that --- is there anything
>>> unusual about the schema of this database?
>>
>> Not particularly.  The database consists of around 3000 tables created
>> using this:
>>
>>   CREATE TABLE vs_foo (date date NOT NULL,
>>                        time time NOT NULL,
>>                        bytesin int8 CHECK (bytesin >= 0),
>>                        bytesout int8 CHECK (bytesout >= 0));
>>
>> Each table has around 1500 rows.
>
> 3000 tables?  That's why you need so many locks.

I'm surprised that I've never hit this problem before though.

> Have you thought about
> collapsing these into *one* table with an extra key column?  Also, it'd
> likely be better to combine the date and time into a timestamp column.
I tried it back in the days when we only had around 1000 tables.  The
problem was that inserts and deletes took a *very* long time: IIRC a
one-row insert was taking over 10 seconds.  I think this was because
the index files were growing to several gigabytes.

Having everything in one large table would have been great and would 
have made life much easier.

Date and time were split to simplify queries; I think the split also
had an impact on index sizes.
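
For the record, the single-table layout Tom suggests might look
something like this; the names vs_traffic and vserver are hypothetical
stand-ins for whatever identifies each virtual server:

  CREATE TABLE vs_traffic (vserver  text      NOT NULL,
                           stamp    timestamp NOT NULL,
                           bytesin  int8 CHECK (bytesin >= 0),
                           bytesout int8 CHECK (bytesout >= 0));

  CREATE INDEX vs_traffic_key ON vs_traffic (vserver, stamp);

Whether the combined index stays manageable at 3000 x 1500 rows is
exactly the question my earlier experiment raised.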

Regards,

Nick.

--
Nick Burrett
Network Engineer, Designer Servers Ltd.   http://www.dsvr.co.uk


Re: [GENERAL] ShmemAlloc errors

2003-10-17 Thread Nick Burrett
Tom Lane wrote:

> Nick Burrett <[EMAIL PROTECTED]> writes:
>> $ pg_dumpall >full.db
>> pg_dump: WARNING:  ShmemAlloc: out of memory
>> pg_dump: Attempt to lock table "vs_dfa554862ac" failed.  ERROR:
>> LockAcquire: lock table 1 is out of memory
>> pg_dumpall: pg_dump failed on bandwidth, exiting
>
> Looks like you need to increase max_locks_per_transaction in
> postgresql.conf.  (You'll need to restart the postmaster to make this
> take effect.)

I've tried that and indeed it works.  Thanks.
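
For anyone hitting the same error: the shared lock table holds roughly
max_locks_per_transaction * max_connections entries, and pg_dump locks
every table in one transaction, so that product needs to exceed the
table count.  It's a single line in postgresql.conf; 128 here is just a
guess sized for ~3000 tables at the default max_connections:

  max_locks_per_transaction = 128   # default is 64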

> We don't normally hear of people needing that --- is there anything
> unusual about the schema of this database?

Not particularly.  The database consists of around 3000 tables created
using this:

  CREATE TABLE vs_foo (date date NOT NULL,
                       time time NOT NULL,
                       bytesin int8 CHECK (bytesin >= 0),
                       bytesout int8 CHECK (bytesout >= 0));

Each table has around 1500 rows.

Incidentally, the dump and import reduced the disk space requirements
from 25 GB to 9 GB.  The database is vacuumed monthly (data is only
deleted monthly) using VACUUM FULL.  I can only presume that vacuuming
is not designed to be *that* aggressive.
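
If the gap is down to index bloat, which VACUUM FULL of this era does
not repair, a monthly REINDEX alongside the vacuum might reclaim most
of that space without a full dump and reload; vs_foo stands in for
each per-server table:

  VACUUM FULL vs_foo;
  REINDEX TABLE vs_foo;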

Cheers,

Nick.

--
Nick Burrett
Network Engineer, Designer Servers Ltd.   http://www.dsvr.co.uk