Alex Turner wrote:
I would recommend running a bonnie++ benchmark on your array to see
whether it's the array/controller/RAID being crap, or whether it's
Postgres. I have had some very surprising results from arrays that
theoretically should be fast but turned out to be very slow.
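Before setting up a full bonnie++ run, a crude dd pass gives a quick first read on raw sequential write throughput. This is only a sketch, not a substitute for bonnie++; TESTDIR and the sizes are placeholders you would point at a filesystem on the array:

```shell
#!/bin/sh
# Crude sequential-write check. TESTDIR is a placeholder -- point it
# at a mount on the RAID array, not at a ramdisk or the OS disk.
TESTDIR=${TESTDIR:-/tmp}

# Write 64 MB and force it to disk before dd reports the rate, so the
# page cache cannot hide the real disk speed.
dd if=/dev/zero of="$TESTDIR/ddtest" bs=1M count=64 conv=fdatasync
rm -f "$TESTDIR/ddtest"

# For the real benchmark, size the file at roughly 2x RAM for the same
# reason, e.g. on a 2 GB box:
#   bonnie++ -d "$TESTDIR" -s 4g -u postgres
```

If the dd number is already far below what the spindles should deliver, the problem is below Postgres (array, controller, or RAID layout) and no amount of postgresql.conf tuning will fix it.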
I would also seriously figure out how to put different tables on
different partitions.
Thanks.
Arshavir
Also, a note of interest: this is _software_ RAID...
Alex Turner
netEconomist
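Since the array is Linux software RAID (md), the kernel's own view of it is worth checking too: array state, rebuild progress, and chunk size all show up in /proc/mdstat, and mdadm gives per-array detail. A minimal sketch (the /dev/md0 device name is a placeholder):

```shell
#!/bin/sh
# Inspect software-RAID state. /proc/mdstat only exists when the md
# driver is loaded, so fall back to a message rather than failing.
cat /proc/mdstat 2>/dev/null || echo "no md devices (or md driver not loaded)"

# Per-array detail (level, chunk size, degraded/clean state);
# /dev/md0 is a placeholder device name:
#   mdadm --detail /dev/md0
```

A degraded array or a mid-rebuild state would explain slow RAID5 writes all by itself, so this is worth ruling out before benchmarking.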
On 13 Mar 2005 23:36:13 -0500, Greg Stark <[EMAIL PROTECTED]> wrote:
Arshavir Grigorian <[EMAIL PROTECTED]> writes:
Hi,
I ha
Josh Berkus wrote:
A,
This is a Sun e450 with dual TI UltraSparc II processors and 2G of RAM.
It is currently running Debian Sarge with a 2.4.27-sparc64-smp custom
compiled kernel. Postgres is installed from the Debian package and uses
all the configuration defaults.
Please read http://www.powerp
Type: Direct-Access    ANSI SCSI revision: 02
Host: scsi5 Channel: 00 Id: 03 Lun: 00
Vendor: FUJITSU Model: MAG3091L SUN9.0G Rev:
Type: Direct-Access    ANSI SCSI revision: 02
--
Arshavir Grigorian
Systems Administrator/Engineer
---(end of broadcast)---
Many thanks for all the responses.
I guess there are a lot of things to change and tweak, and I wonder
what would be a good benchmarking sample dataset (size, contents).
My tables are very large (the smallest has 7+ million records) and take
several days (if not weeks) to load. It would be nice to have
Tom Lane wrote:
Arshavir Grigorian <[EMAIL PROTECTED]> writes:
I have a RAID5 array (mdadm) with 14 disks + 1 spare. This partition has
an Ext3 filesystem which is used by Postgres. Currently we are loading a
50G database on this server from a Postgres dump (copy, not insert) and
are experi
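Loading a 50G dump with COPY under the stock Debian configuration defaults will be slow regardless of the hardware. A postgresql.conf fragment along these lines is the usual starting point for a bulk load on a 2 GB box of that era; the values below are illustrative assumptions, not tuned numbers, and checkpoint_segments is the pre-9.5 parameter name appropriate to this server:

```
# postgresql.conf -- illustrative bulk-load settings, not tuned values
shared_buffers = 10000          # in 8 kB pages (~80 MB); default is far too small
maintenance_work_mem = 262144   # in kB; speeds index builds after COPY
checkpoint_segments = 32        # fewer, larger checkpoints during the load
```

Dropping indexes before the COPY and recreating them afterwards, with a generous maintenance_work_mem, typically cuts load time far more than any single parameter.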
Thanks for all the replies. It actually has to do with the locales. The
db where the index gets used is running with the C locale, while the
other one uses en_US.UTF-8. I guess the db with the wrong locale will
need to be waxed and recreated with the correct locale settings. I
wonder if there are any plans
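For anyone hitting the same thing, one quick way to confirm the locale difference from psql is a sketch like the following; the table, column, and index names are hypothetical:

```sql
-- Run in each database; the one whose index is used should report C.
SHOW lc_collate;

-- Verify the planner's choice directly:
EXPLAIN SELECT * FROM mytable WHERE id = 42;

-- In a non-C locale a plain btree index cannot serve LIKE/~ prefix
-- searches; a pattern-ops index is the usual workaround:
CREATE INDEX mytable_col_pattern ON mytable (col text_pattern_ops);
```

The pattern-ops index only helps the LIKE-style cases; for a full fix the database does need to be recreated with the C locale, as noted above.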
Hi,
I have a query that when run on similar tables in 2 different databases
either uses the index on the column (primary key) in the where clause or
does a full table scan. The structure of the tables is the same, except
that the table where the index does not get used has an extra million
rows