Re: [PERFORM] Samsung 32GB SATA SSD tested

2008-07-23 Thread Jeffrey Baker
On Tue, Jul 22, 2008 at 5:32 PM, Scott Marlowe [EMAIL PROTECTED] wrote:
 On Tue, Jul 22, 2008 at 6:04 PM, Jeffrey W. Baker [EMAIL PROTECTED] wrote:

 Strangely the RAID controller behaves badly on the TPC-B workload.  It
 is faster than disk, but not by a lot, and it's much slower than the
 other flash configurations.  The read/write benchmark did not vary when
 changing the number of clients between 1 and 8.  I suspect this is some
 kind of problem with Areca's kernel driver or firmware.

 Are you still using the 2.6.18 kernel for testing, or have you
 upgraded to something like 2.6.22?  I've heard many good things about
 the Areca driver in that kernel version.

These tests are being run with the CentOS 5 kernel, which is 2.6.18.
The ioDrive driver is available for that kernel, and I want to keep
the software constant to get comparable results.

I put the Samsung SSD in my laptop, which is a Core 2 Duo @ 2.2GHz
with an ICH9 SATA port and kernel 2.6.24, and it scored about 525 on R/W
pgbench.

 This sounds like an interesting development I'll have to keep track
 of.  In a year or two I might be replacing 16 disk arrays with SSD
 drives...

I agree, it's definitely an exciting development.  I have yet to
determine whether the SSDs have good properties for production
operations, but I'm learning.

-jwb



Re: [PERFORM] Perl/DBI vs Native

2008-07-22 Thread Jeffrey Baker
On Tue, Jul 22, 2008 at 9:48 AM, Greg Sabino Mullane [EMAIL PROTECTED] wrote:
 In case someone is wondering, the way to force DBI to use unix
 sockets is by not specifying a host and port in the connect call.

 Actually, the host defaults to the local socket. Using the port
 may still be needed: if you leave it out, it simply uses the default
 value (5432). Thus, for most purposes, just leaving the host out
 is enough to cause a socket connection on the default port.

For the further illumination of the historical record, the best
practice here is probably to use the pg_service.conf file, which may
or may not live in /etc depending on your operating system.  Then you
can connect in DBI using dbi:Pg:service=whatever, and change the
definition of whatever in pg_service.conf.  This has the same
semantics as PGSERVICE=whatever when using psql.  It's a good idea to
keep these connection details out of your program code.
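
For example, a rough sketch of what that looks like (the service name
"whatever" comes from above; the host, port, and database names are
made-up placeholders):

# pg_service.conf (often /etc/pg_service.conf or ~/.pg_service.conf):
[whatever]
host=db1.example.com
port=5433
dbname=mydb

# and in the Perl code:
use DBI;
my $dbh = DBI->connect('dbi:Pg:service=whatever', undef, undef,
                       { RaiseError => 1, AutoCommit => 1 });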

-jwb



Re: [PERFORM] 3ware vs Areca

2008-07-15 Thread Jeffrey Baker
On Tue, Jul 15, 2008 at 8:17 AM, Greg Smith [EMAIL PROTECTED] wrote:
 On Tue, 15 Jul 2008, Jeffrey Baker wrote:

 But most recently in my memory we had an Areca HBA which, when one of its
 WD RE-2 disks failed, completely stopped responding to both the command line
 and the web management interface.

 What operating system/kernel version are you using on these systems?

Debian etch, which has a 2.6.18 kernel.  I have contacted Areca
support (as well as the linux-scsi mailing list), and their responses
are usually either 1) advice to upgrade the driver and/or firmware, even
though I have the latest of both, or 2) vague statements about the
disk being incompatible with the controller.

-jwb



Re: [PERFORM] 3ware vs Areca

2008-07-15 Thread Jeffrey Baker
On Mon, Jul 14, 2008 at 9:13 PM, Greg Smith [EMAIL PROTECTED] wrote:
 On Fri, 11 Jul 2008, Jeffrey Baker wrote:

 Their firmware is, frankly, garbage.  In more than one instance we
 have had the card panic when a disk fails, which is obviously counter
 to the entire purpose of a RAID.  We finally removed the Areca
 controllers from our database server and replaced them with HP P800s.

 Can you give a bit more detail here?  If what you mean is that the driver
 for the card generated an OS panic when a drive failed, that's not
 necessarily the firmware at all.  I know I had problems with the Areca cards
 under Linux until their driver went into the mainline kernel in 2.6.19, all
 kinds of panics under normal conditions.  Haven't seen anything like that
 with later Linux kernels or under Solaris 10, but then again I haven't had a
 disk failure yet either.

Well, it is difficult to tell whether the fault is with the hardware or
the software.  No traditional kernel panic has been observed.  But most
recently in my memory we had an Areca HBA which, when one of its WD
RE-2 disks failed, completely stopped responding to both the command
line and the web management interface.  Then i/o to that RAID became
slower and slower, until it stopped serving i/o at all.
At that point it was not relevant that the machine was technically
still running.

We have another Areca HBA that starts throwing errors up the SCSI
stack if it runs for more than 2 months at a time.  We have to reboot
it on a schedule to keep it running.

-jwb



Re: [PERFORM] 3ware vs Areca

2008-07-11 Thread Jeffrey Baker
On Fri, Jul 11, 2008 at 12:21 PM, Greg Smith [EMAIL PROTECTED] wrote:
 On Fri, 11 Jul 2008, Jeff wrote:

 I've got a couple boxes with some 3ware 9550 controllers, and I'm less
 than pleased with performance on them.. Sequential access is nice, but start
 seeking around and you kick it in the gut.  (I've found posts on the
 internets about others having similar issues).

 Yeah, there's something weird about those controllers, maybe in how stuff
 flows through the cache, that makes them slow in a lot of situations. The
 old benchmarks at
 http://tweakers.net/reviews/557/21/comparison-of-nine-serial-ata-raid-5-adapters-pagina-21.html
 show their cards acting badly in a lot of situations, and I haven't seen
 anything since then that vindicates the 95XX models from them.

 My last box with a 3ware I simply had it in jbod mode and used sw raid and
 it smoked the hw.

 That is often the case no matter which hardware controller you've got,
 particularly in more complicated RAID setups.  You might want to consider
 that a larger lesson rather than just a single data point.

 Anyway, anybody have experience in 3ware vs Areca - I've heard plenty of
 good anecdotal things that Areca is much better, just wondering if anybody
 here has firsthand experience.  It'll be plugged into about 8 10k rpm SATA
 disks.

 Areca had a pretty clear performance lead for a while there against 3ware's
 3500 series, but from what I've been reading I'm not sure that is still true
 in the current generation of products.  Check out the pages starting at
 http://www.tomshardware.com/reviews/SERIAL-RAID-CONTROLLERS-AMCC,1738-12.html
 for example, where the newer Areca 1680ML card just gets crushed at all
 kinds of workloads by the AMCC 3ware 9690SA.  I think the 3ware 9600 series
 cards have achieved or exceeded what Areca's 1200 series was capable of,
 while Areca's latest generation has slipped a bit from the previous one.

From my experience, the Areca controllers are difficult to operate.
Their firmware is, frankly, garbage.  In more than one instance we
have had the card panic when a disk fails, which is obviously counter
to the entire purpose of a RAID.  We finally removed the Areca
controllers from our database server and replaced them with HP P800s.

-jwb



Re: [PERFORM] Fusion-io ioDrive

2008-07-07 Thread Jeffrey Baker
On Mon, Jul 7, 2008 at 6:08 AM, Merlin Moncure [EMAIL PROTECTED] wrote:
 On Sat, Jul 5, 2008 at 2:41 AM, Jeffrey Baker [EMAIL PROTECTED] wrote:
           Service Time Percentile, millis
         R/W TPS   R-O TPS   50th   80th   90th   95th
  RAID       182       673     18     32     42     64
  Fusion     971      4792      8      9     10     11

 Someone asked for bonnie++ output:

 Block output: 495MB/s, 81% CPU
 Block input: 676MB/s, 93% CPU
 Block rewrite: 262MB/s, 59% CPU

 Pretty respectable.  In the same ballpark as an HP MSA70 + P800 with
 25 spindles.

 You left off the 'seeks' portion of the bonnie++ results -- this is
 actually the most important portion of the test.  Based on your tps
 #s, I'm expecting seeks equivalent to about 10 10k drives configured in
 a raid 10, or around 1000-1500.  They didn't publish any prices so
 it's hard to say if this is 'cost competitive'.

I left it out because bonnie++ reports it as +++++, i.e. the seek rate
was too high for bonnie++ to report a meaningful number.

-jwb



[PERFORM] Practical upper limits of pgbench read/write tps with 8.3

2008-07-07 Thread Jeffrey Baker
I'm spending a third day testing with the ioDrive, and it occurred to
me that I should normalize my tests by mounting the database on a
ramdisk.  The results were surprisingly low.  On the single 2.2GHz
Athlon, the maximum tps seems to be 1450.  This is achieved with a
single connection.  I/O rates to and from the ramdisk never exceed
50MB/s on a one-minute average.
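
Roughly, the ramdisk setup looks like this (mount point and size are
placeholders rather than my exact commands):

mkdir -p /mnt/pgram
mount -t tmpfs -o size=2G tmpfs /mnt/pgram
chown postgres:postgres /mnt/pgram
su - postgres -c 'initdb -D /mnt/pgram/data'
su - postgres -c 'pg_ctl -D /mnt/pgram/data -l /tmp/pgram.log start'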

With the flash device on the same benchmark, the tps rate is 1350,
meaning that as far as PostgreSQL is concerned, on this machine, the
flash device achieves 90% of the best possible performance.

Question being: what's the bottleneck?  Is PG lock-bound?

-jwb



Re: [PERFORM] Practical upper limits of pgbench read/write tps with 8.3

2008-07-07 Thread Jeffrey Baker
On Mon, Jul 7, 2008 at 3:22 PM, Greg Smith [EMAIL PROTECTED] wrote:
 On Mon, 7 Jul 2008, Jeffrey Baker wrote:

 On the single 2.2GHz Athlon, the maximum tps seems to be 1450...what's the
 bottleneck?  Is PG lock-bound?

 It can become lock-bound if you don't make the database scale significantly
 larger than the number of clients, but that's probably not your problem.
  The pgbench client driver program itself is pretty CPU intensive and can
 suffer badly from kernel issues.  I am unsurprised you can only hit 1450
 with a single CPU.  On systems with multiple CPUs where the single CPU
 running the pgbench client is much faster than your 2.2GHz Athlon, you'd
 probably be able to get a few thousand TPS, but eventually the context
 switching of the client itself can become a bottleneck.

On a 2GHz Core 2 Duo the best tps achieved is 2300, with -c 8.
pgbench itself gets around 10% of the CPU (user + sys for pgbench is
7s against 35s of wall clock on two cores, i.e. 7 of 70 available
CPU-seconds, thus 10%).

I suppose you could still blame it on ctxsw between pgbench and pg
itself, but the results are not better with pgbench on another machine
cross-connected with gigabit ethernet.

-jwb



[PERFORM] Fusion-io ioDrive

2008-07-01 Thread Jeffrey Baker
I recently got my hands on a device called ioDrive from a company
called Fusion-io.  The ioDrive is essentially 80GB of flash on a PCI
card.  It has its own driver for Linux completely outside of the
normal scsi/sata/sas/fc block device stack, but from the user's
perspective it behaves like a block device.  I put the ioDrive in an
ordinary PC with 1GB of memory, a single 2.2GHz AMD CPU, and an
existing Areca RAID with 6 SATA disks and a 128MB cache.  I tested the
device with PostgreSQL 8.3.3 on CentOS 5.3 x86_64 (Linux 2.6.18).

The pgbench database was initialized with scale factor 100.  Test runs
were performed with 8 parallel connections (-c 8), both read-only (-S)
and read-write.  PostgreSQL itself was configured with 256MB of shared
buffers and 32 checkpoint segments.  Otherwise the configuration was
all defaults.
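
Roughly, the runs look like this (the database name and transaction
count per run are placeholders; scale factor and client count are as
described above):

pgbench -i -s 100 pgbench            # initialize, scale factor 100
pgbench -c 8 -t 10000 pgbench        # read-write (TPC-B-like) run
pgbench -c 8 -t 10000 -S pgbench     # read-only (SELECT-only) run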

In the following table, the RAID configuration has the xlogs on a
RAID 0 of 2 10krpm disks with ext2, and the heap is on a RAID 0 of 4
7200rpm disks with ext3.  The Fusion configuration has everything on
the ioDrive with xfs.  I tried the ioDrive with ext2 and ext3 but it
didn't seem to make any difference.

          Service Time Percentile, millis
        R/W TPS   R-O TPS   50th   80th   90th   95th
 RAID       182       673     18     32     42     64
 Fusion     971      4792      8      9     10     11

Basically the ioDrive is smoking the RAID.  The only real problem with
this benchmark is that the machine became CPU-limited rather quickly.
During the runs with the ioDrive, iowait was pretty well zero, with
user CPU being about 75% and system getting about 20%.

Now, I will say a couple of other things.  The Linux driver for this
piece of hardware is pretty dodgy.  Sub-alpha quality actually.  But
they seem to be working on it.  Also there's no driver for
OpenSolaris, Mac OS X, or Windows right now.  In fact there's not even
anything available for Debian or other respectable Linux distros, only
Red Hat and its clones.  The other problem is that the 80GB model is too
small to hold my entire DB, although it could be used as a tablespace
for some critical tables.  But hey, it's fast.
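
As a sketch of the tablespace idea (the mount point and table name are
hypothetical):

-- assuming the ioDrive is mounted at /mnt/iodrive and writable by postgres
CREATE TABLESPACE iodrive LOCATION '/mnt/iodrive/pgdata';
ALTER TABLE hot_table SET TABLESPACE iodrive;  -- move one critical table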

I'm going to put this board into my 8-way Xeon to see if it goes any
faster with more CPU available.

I'd be interested in hearing experiences with other flash storage
devices, SSDs, and that type of thing.  So far, this is the fastest
hardware I've seen for the price.

-jwb



Re: [PERFORM] Fusion-io ioDrive

2008-07-01 Thread Jeffrey Baker
On Tue, Jul 1, 2008 at 6:17 PM, Andrej Ricnik-Bay
[EMAIL PROTECTED] wrote:
 On 02/07/2008, Jeffrey Baker [EMAIL PROTECTED] wrote:

  Red Hat and its clones.  The other problem is that the 80GB model is too
  small to hold my entire DB, although it could be used as a tablespace
  for some critical tables.  But hey, it's fast.
 And when/if it dies, please give us a rough guestimate of its
 life-span in terms of read/write cycles.  Sounds exciting, though!

Yeah.  The manufacturer rates it for 5 years in constant use.  I
remain skeptical.



Re: [PERFORM] Quad Xeon or Quad Opteron?

2008-05-24 Thread Jeffrey Baker
On Fri, May 23, 2008 at 3:41 AM, Andrzej Zawadzki [EMAIL PROTECTED] wrote:

 Hello,

  We're planning a new production server for PostgreSQL and I'm wondering
 which processor (or even platform) will be better: Quad Xeon or Quad
 Opteron (for example, Sun now has a new offer, the Sun Fire X4440 x64).

 When I was buying my last database server, the Sun v40z was a really
 good choice (Intel's base server was slower). This v40z still works
 pretty well, but I need one more.

 AFAIK Intel made some changes in the chipset but... is this better than AMD
 HyperTransport and Direct Connect Architecture from a database point of
 view? How about L3 cache - is this important for performance?


Intel's chipset is still broken when using dual sockets and quad core
processors.  The problem manifests itself as excessive cache line bouncing.
In my opinion the best bang/buck combo on the CPU side is the fastest
dual-core Xeon CPUs you can find.  You get excellent single-thread
performance and you still have four processors, which was a fantasy for most
people only 5 years ago.  In addition you can put a ton of memory in the new
Xeon machines.  64GB is completely practical.

I still run several servers on Opterons but in my opinion they don't make
sense right now unless you truly need the CPU parallelism.

-jwb


Re: [PERFORM] Update performance degrades over time

2008-05-15 Thread Jeffrey Baker
On Wed, May 14, 2008 at 6:31 PM, Subbiah Stalin-XCGF84
[EMAIL PROTECTED] wrote:
 Hi All,

 We are doing some load tests with our application running postgres 8.2.4. At
 times we see updates on a table taking longer (around 11-16 secs) than the
 expected sub-second response time. The table in question is getting updated
 constantly through the load tests. In checking the table size including
 indexes, it seems to be bloated; we got this confirmed after recreating it
 (stats below). We have autovacuum enabled with default parameters. I thought
 autovacuum would avoid bloating issues, but it looks like it's not aggressive
 enough. Wondering if table/index bloating is causing update slowness over a
 period of time. Any ideas how to troubleshoot this further?

Sometimes it is necessary to not only VACUUM, but also REINDEX.  If
your update changes an indexed column to a new, distinct value, you
can easily get index bloat.
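
A rough sketch of what I mean (the index name is hypothetical):

-- compare on-disk size before and after the rebuild; note that REINDEX
-- takes an exclusive lock on the table while it runs
SELECT pg_size_pretty(pg_relation_size('my_table_some_idx'));
REINDEX INDEX my_table_some_idx;
SELECT pg_size_pretty(pg_relation_size('my_table_some_idx'));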

Also, you should check to see if you have any old, open transactions
on the same instance.  If you do, it's possible that VACUUM will have
no beneficial effect.
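
One way to spot those on 8.2:

-- sessions sitting idle inside an open transaction; anything with an old
-- query_start here can keep VACUUM from reclaiming dead rows
SELECT procpid, usename, query_start
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
ORDER BY query_start;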

-jwb



Re: [PERFORM] SELECT 'DBD::Pg ping test'

2008-04-23 Thread Jeffrey Baker
On Wed, Apr 23, 2008 at 12:19 AM, sathiya psql [EMAIL PROTECTED] wrote:
 Hi All,

 This query is being executed nearly a million times
SELECT 'DBD::Pg ping test'

Something in your Perl application is using $dbh->ping().  See perldoc
DBI.  It's possible that this is happening under the hood, because
your application is using connect_cached() instead of connect().
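
For example, a handle obtained like this gets re-validated on every call
(the DSN is a placeholder):

use DBI;
# connect_cached() pings the cached handle each time it is called, and
# DBD::Pg's ping issues a trivial query -- hence the flood of
# SELECT 'DBD::Pg ping test' statements
my $dbh = DBI->connect_cached('dbi:Pg:dbname=mydb', undef, undef);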

-jwb



[PERFORM] 3-days-long vacuum of 20GB table

2008-04-18 Thread Jeffrey Baker
This autovacuum has been hammering my server with purely random i/o
for half a week.  The table is only 20GB and the i/o subsystem is good
for 250MB/s sequential and a solid 5kiops.  When should I expect it to
end (if ever)?

current_query: VACUUM reuters.value
query_start: 2008-04-15 20:12:48.806885-04
think=# select * from pg_class where relname = 'value';
-[ RECORD 1 ]--+-------------
relname        | value
relfilenode    | 191425
relpages       | 1643518
reltuples      | 1.37203e+08
# find -name 191425\*
./16579/191425
./16579/191425.1
./16579/191425.10
./16579/191425.11
./16579/191425.12
./16579/191425.13
./16579/191425.14
./16579/191425.15
./16579/191425.16
./16579/191425.17
./16579/191425.18
./16579/191425.19
./16579/191425.2
./16579/191425.3
./16579/191425.4
./16579/191425.5
./16579/191425.6
./16579/191425.7
./16579/191425.8
./16579/191425.9
# vmstat 1
procs ---memory-- ---swap-- -io -system-- cpu
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  1  30336  46264 60 788235600   250   29911  6  2 87  5
 0  1  30336  47412 60 788130800  289648  944 4861  3  2 71 24
 0  2  30336  46696 60 788218800   816 4  840 5019  1  0 75 24
 0  1  30336  49228 60 787986800  1888   164  971 5687  1  1 74 24
 0  1  30336  49688 60 787891600  264048 1047 5751  1  0 75 23
 autovacuum                      | on
 autovacuum_vacuum_cost_delay    | -1
 autovacuum_vacuum_cost_limit    | -1
 vacuum_cost_delay               | 0
 vacuum_cost_limit               | 200
 vacuum_cost_page_dirty          | 20
 vacuum_cost_page_hit            | 1
 vacuum_cost_page_miss           | 10



Re: [PERFORM] 3-days-long vacuum of 20GB table

2008-04-18 Thread Jeffrey Baker
On Fri, Apr 18, 2008 at 10:03 AM, Tom Lane [EMAIL PROTECTED] wrote:
 Jeffrey Baker [EMAIL PROTECTED] writes:
   This autovacuum has been hammering my server with purely random i/o
   for half a week.  The table is only 20GB and the i/o subsystem is good
   for 250MB/s sequential and a solid 5kiops.  When should I expect it to
   end (if ever)?

  What have you got maintenance_work_mem set to?  Which PG version
  exactly?

This is 8.1.9 on Linux x86_64.

# show maintenance_work_mem ;
 maintenance_work_mem
----------------------
 16384



Re: [PERFORM] 3-days-long vacuum of 20GB table

2008-04-18 Thread Jeffrey Baker
On Fri, Apr 18, 2008 at 10:32 AM, Jeffrey Baker [EMAIL PROTECTED] wrote:

 On Fri, Apr 18, 2008 at 10:03 AM, Tom Lane [EMAIL PROTECTED] wrote:
   Jeffrey Baker [EMAIL PROTECTED] writes:
 This autovacuum has been hammering my server with purely random i/o
 for half a week.  The table is only 20GB and the i/o subsystem is good
 for 250MB/s sequential and a solid 5kiops.  When should I expect it to
 end (if ever)?
  
What have you got maintenance_work_mem set to?  Which PG version
exactly?

  This is 8.1.9 on Linux x86_64,

  # show maintenance_work_mem ;
   maintenance_work_mem
  --
   16384

That appears to be the default.  I will try increasing this.  Can I
increase it globally from a single backend, so that all other backends
pick up the change, or do I have to restart the instance?
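
(Concretely, something like this is what I have in mind; on 8.1 the value
is in kB, so 1048576 is 1GB, and the data directory path below is a
placeholder:)

-- per-session, e.g. before running a manual VACUUM:
SET maintenance_work_mem = 1048576;

-- or change the default in postgresql.conf and signal a reload:
--   maintenance_work_mem = 1048576
--   $ pg_ctl reload -D /var/lib/pgsql/data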

-jwb



Re: [PERFORM] 3-days-long vacuum of 20GB table

2008-04-18 Thread Jeffrey Baker
On Fri, Apr 18, 2008 at 10:34 AM, Jeffrey Baker [EMAIL PROTECTED] wrote:

 On Fri, Apr 18, 2008 at 10:32 AM, Jeffrey Baker [EMAIL PROTECTED] wrote:
  
   On Fri, Apr 18, 2008 at 10:03 AM, Tom Lane [EMAIL PROTECTED] wrote:
 Jeffrey Baker [EMAIL PROTECTED] writes:
    This autovacuum has been hammering my server with purely random i/o
    for half a week.  The table is only 20GB and the i/o subsystem is good
    for 250MB/s sequential and a solid 5kiops.  When should I expect it to
    end (if ever)?

  What have you got maintenance_work_mem set to?  Which PG version
  exactly?
  
This is 8.1.9 on Linux x86_64,
  
# show maintenance_work_mem ;
 maintenance_work_mem
--
 16384

  That appears to be the default.  I will try increasing this.  Can I
  increase it globally from a single backend, so that all other backends
  pick up the change, or do I have to restart the instance?

I increased it to 1GB, restarted the vacuum, and system performance
seems the same.  The root of the problem, that an entire CPU is in the
iowait state and the storage device is doing random i/o, is unchanged:

 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 1  1  30328  53632 60 691471600   904  2960 1216 4720  1  1 74 23
 0  1  30328  52492 60 691603600  1152  1380  948 3637  0  0 75 24
 0  1  30328  49600 60 691768000  1160  1420 1055 4191  1  1 75 24
 0  1  30328  49404 60 691900000  1048  1308 1133 5054  2  2 73 23
 0  1  30328  47844 60 692109600  1552  1788 1002 3701  1  1 75 23

At that rate it will take a month.  Compare the load generated by
create table foo as select * from bar:

 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 2  2  30328  46580 60 691102400 145156   408 2006 10729 52  8 17 23
 3  1  30328  46240 60 690097600 133312   224 1834 10005 23 12 42 23
 1  3  30328  60700 60 690205600 121480   172 1538 10629 22 14 32 32
 1  2  30328  49520 60 691420400 122344   256 1408 14374 13 17 41 28
 1  2  30328  47844 60 691596000 127752   248 1313 9452 16 15 42 27

That's rather more like it.  I guess I always imagined that VACUUM was
a sort of linear process, not random, and that it should proceed at
sequential scan speeds.

-jwb



Re: [PERFORM] Strange behavior: pgbench and new Linux kernels

2008-04-17 Thread Jeffrey Baker
On Thu, Apr 17, 2008 at 12:58 AM, Greg Smith [EMAIL PROTECTED] wrote:
  So in the case of this simple benchmark, I see an enormous performance
 regression from the newest Linux kernel compared to a much older one.

This has been discussed recently on linux-kernel.  It's definitely a
regression.  Instead of getting a nice, flat overload behavior when
the # of busy threads exceeds the number of CPUs, you get the
declining performance you noted.

Poor PostgreSQL scaling on Linux 2.6.25-rc5 (vs 2.6.22)
http://marc.info/?l=linux-kernel&m=120521826111587&w=2

-jwb



Re: [PERFORM] seq scan issue...

2008-04-17 Thread Jeffrey Baker
On Thu, Apr 17, 2008 at 11:24 AM, kevin kempter
[EMAIL PROTECTED] wrote:
 Hi List;

  I have a large table (playback_device) with 6 million rows in it. The
 aff_id_tmp1 table has 600,000 rows.
  - why am I still getting a seq scan?


You're selecting almost all the rows in the product of aff_id_tmp1 *
playback_fragment.  A sequential scan will be far faster than an index
scan.  You can prove this to yourself using 'set enable_seqscan to
false' and running the query again.  It should be much slower.
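
For example (the join column here is a guess at the schema):

SET enable_seqscan TO off;
EXPLAIN ANALYZE
SELECT *
FROM aff_id_tmp1 t
JOIN playback_fragment f ON f.aff_id = t.aff_id;
RESET enable_seqscan;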



[PERFORM] Anybody using the Dell Powervault MD3000 array?

2008-04-16 Thread Jeffrey Baker
Thinking about buying the Powervault MD3000 SAS array with 15 15k
300GB disks for use as a postgres tablespace.  Is anyone using these
(or other LSI/Engenio rebadge jobs)?  I'm interested in hearing about
performance of the array, and problems (if any) with Dell's SAS HBA
that comes bundled.  Also interested in performance of the maximum
config of an MD3000 with two MD1000 shelves.

-Jeff



Re: [PERFORM] Anybody using the Dell Powervault MD3000 array?

2008-04-16 Thread Jeffrey Baker
On Wed, Apr 16, 2008 at 1:20 PM, Joshua D. Drake [EMAIL PROTECTED] wrote:
 On Wed, 16 Apr 2008 16:17:10 -0400


   On Wed, Apr 16, 2008 at 4:15 PM, Jeffrey Baker [EMAIL PROTECTED]
   wrote:
  
Thinking about buying the Powervault MD3000 SAS array with 15 15k
300GB disks for use as a postgres tablespace.  Is anyone using these
(or other LSI/Engenio rebadge jobs?).  I'm interested in hearing
about performance of the array, and problems (if any) with Dell's
SAS HBA that comes bundled.  Also interested in performance of the
maximum config of an MD3000 with two MD1000 shelves.
   
-Jeff

  moved Gavin's reply below
  Gavin M. Roy [EMAIL PROTECTED] wrote:

   Might want to check out the HP MSA70 arrays.  I've had better luck
   with them and you can get 25 drives in a smaller rack unit size.  I
   had a bad experience with the MD3000 and now only buy MD1000's with
   Perc 6/e when I buy Dell.
   Good luck!
  

  I can second this. The MSA 70 is a great unit for the money.

Thank you both.  The MSA 70 looks like an ordinary disk shelf.  What
controllers do you use?  Or, do you just go with a software RAID?
