Hi
I have noticed that my transaction log has quite a large activity volume
(up to 15MB per transaction), so with the amount of data I am using I
have manually moved the pg_xlog directory to a different disk. This
allows me to have both the table space and the transaction log on two
different high
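Relocating pg_xlog to a second disk is typically done by stopping the server, moving the directory, and leaving a symlink behind. A minimal sketch of that symlink step, using throwaway temp directories as stand-ins for the real data directory and the new disk (the pg_ctl stop/start around it is shown only as comments):

```shell
# Sketch: moving pg_xlog to another disk via a symlink.
# Placeholder paths only -- on a real server you would first run:
#   pg_ctl stop -D "$PGDATA"
PGDATA=$(mktemp -d)    # stands in for your real data directory
WALDISK=$(mktemp -d)   # stands in for the mount point of the second disk
mkdir "$PGDATA/pg_xlog"

mv "$PGDATA/pg_xlog" "$WALDISK/pg_xlog"
ln -s "$WALDISK/pg_xlog" "$PGDATA/pg_xlog"

# PostgreSQL follows the symlink transparently:
#   pg_ctl start -D "$PGDATA"
readlink "$PGDATA/pg_xlog"
```

The server must be stopped while the directory moves, or the WAL is corrupted mid-write.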
Joshua D. Drake wrote:
> This community is notorious for "optimum". MySQL is notorious for "satisfy".
Within *this* community, MySQL is just plain notorious. Let's face it --
we are *not* dolphin-safe.
>
> Which one would you rather store your financial information in?
The one that had the be
On Fri, 23 Jan 2009, Merlin Moncure wrote:
Note, while sequential write speeds are a good indication of general
RAID crappiness, they are not the main driver of your low pgbench
results (but they may be involved with poor insert performance). That is
coming from your seek performance, which is
On Fri, 23 Jan 2009, M. Edward (Ed) Borasky wrote:
* large-capacity inexpensive rotating disks,
* a hardware RAID controller containing a battery-backed cache,
* as much RAM as one can afford and the chassis will hold, and
* enough cores to keep the workload from becoming processor-bound
are goo
On Fri, 2009-01-23 at 09:22 -0800, M. Edward (Ed) Borasky wrote:
> I question, however, whether there's much point in seeking an optimum.
> As was noted long ago by Nobel laureate Herbert Simon, in actual fact
> managers / businesses rarely optimize. Instead, they satisfice. They do
> what is "good enough"
da...@lang.hm wrote:
> On Fri, 23 Jan 2009, Luke Lonergan wrote:
>
>> Why not simply plug your server into a UPS and get 10-20x the
>> performance using the same approach (with OS IO cache)?
>>
>> In fact, with the server it's more robust, as you don't have to
>> transit several intervening physical devices to get to the RAM.
On Fri, 23 Jan 2009, Merlin Moncure wrote:
On 1/23/09, da...@lang.hm wrote:
the review also includes the Intel X-25E and X-25M drives (along with a
variety of SCSI and SATA drives)
The x25-e is a game changer for database storage. It's still a little
pricey for what it does, but who can argue with these numbers?
Hey All,
I previously posted about the trouble I was having dumping a >1Tb (size
with indexes) table. The rows in the table could be very large. Using
perl's DBD::Pg we were somehow able to add these very large rows
without running into the >1Gb row bug. With everyone's help I
determined I
On 1/23/09, da...@lang.hm wrote:
> the review also includes the Intel X-25E and X-25M drives (along with a
> variety of SCSI and SATA drives)
>
The x25-e is a game changer for database storage. It's still a little
pricey for what it does, but who can argue with these numbers?
http://techreport.c
On 1/23/09, Ibrahim Harrani wrote:
> Hi Craig,
>
> Here is the result. It seems that the disk write speed is terrible!
>
> [r...@myserver /usr]# time (dd if=/dev/zero of=bigfile bs=8192
> count=100; sync)
Note, while sequential write speeds are a good indication of general
RAID crappiness, they are not the main driver of your low pgbench results
* Craig Ringer:
> I'd be much more confident with something like those devices than I
> would with an OS ramdisk plus startup/shutdown scripts to initialize it
> from a file and write it out to a file. Wouldn't it be a pain if the UPS
> didn't give the OS enough warning to write the RAM disk out b
Luke Lonergan wrote:
> Why not simply plug your server into a UPS and get 10-20x the performance
> using the same approach (with OS IO cache)?
A big reason is that your machine may already have as much RAM as is
currently economical to install. Hardware with LOTS of RAM slots can
cost quite a bit
Hmm - I wonder what OS it runs ;-)
- Luke
- Original Message -
From: da...@lang.hm
To: Luke Lonergan
Cc: glynast...@yahoo.co.uk ;
pgsql-performance@postgresql.org
Sent: Fri Jan 23 04:52:27 2009
Subject: Re: [PERFORM] SSD performance
On Fri, 23 Jan 2009, Luke Lonergan wrote:
> Why not
On Fri, 23 Jan 2009, Luke Lonergan wrote:
Why not simply plug your server into a UPS and get 10-20x the
performance using the same approach (with OS IO cache)?
In fact, with the server it's more robust, as you don't have to transit
several intervening physical devices to get to the RAM.
If y
Why not simply plug your server into a UPS and get 10-20x the performance using
the same approach (with OS IO cache)?
In fact, with the server it's more robust, as you don't have to transit several
intervening physical devices to get to the RAM.
If you want a file interface, declare a RAMDISK.
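For the "declare a RAMDISK" route, a tmpfs mount provides the file interface described above. A sketch for Linux (the dedicated mount requires root and the size is illustrative; /dev/shm is the tmpfs most distributions already provide):

```shell
# Sketch: a file interface backed by RAM on Linux.
# A dedicated ramdisk needs root (size here is illustrative):
#   mkdir -p /mnt/ramdisk
#   mount -t tmpfs -o size=2g tmpfs /mnt/ramdisk
# Most distros already mount a tmpfs at /dev/shm, so for a quick look:
df -h /dev/shm
echo "scratch data" > /dev/shm/demo.txt   # writes land in RAM, not on disk
cat /dev/shm/demo.txt
rm /dev/shm/demo.txt
```

The obvious caveat, and the crux of this thread: everything on a tmpfs vanishes on power loss, so the UPS (or a flush-to-disk script) is doing all the durability work.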
On Fri, 23 Jan 2009, Glyn Astill wrote:
I spotted an interesting new SSD review. It's a $379
5.25" drive bay device that holds up to 8 DDR2 DIMMs
(up to 8G per DIMM) and appears to the system as a SATA
drive (or a pair of SATA drives that you can RAID-0 to get
past the 300MB/s SATA bottleneck)
> I spotted an interesting new SSD review. It's a $379
> 5.25" drive bay device that holds up to 8 DDR2 DIMMs
> (up to 8G per DIMM) and appears to the system as a SATA
> drive (or a pair of SATA drives that you can RAID-0 to get
> past the 300MB/s SATA bottleneck)
>
Sounds very similar to the Giga
I spotted an interesting new SSD review. It's a $379 5.25" drive bay device
that holds up to 8 DDR2 DIMMs (up to 8G per DIMM) and appears to the
system as a SATA drive (or a pair of SATA drives that you can RAID-0 to
get past the 300MB/s SATA bottleneck)
the best review I've seen only ran it on
On Thu, Jan 22, 2009 at 10:52 PM, Ibrahim Harrani
wrote:
> Hi Craig,
>
> Here is the result. It seems that the disk write speed is terrible!
>
> [r...@myserver /usr]# time (dd if=/dev/zero of=bigfile bs=8192
> count=100; sync)
>
>
> 100+0 records in
> 100+0 records out
> 819200 bytes transferred
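An aside on the test itself: count=100 with bs=8192 writes only ~800KB, which the OS cache absorbs entirely, so the timing says little about the disk. A sketch of a more telling variant (sizes scaled down here for illustration; in practice write more than the machine's RAM, and note that conv=fsync is a GNU dd flag):

```shell
# Sketch: sequential write test that actually forces data to disk.
# bs=8192 matches PostgreSQL's block size; count is scaled down here --
# for a real benchmark use a total size larger than RAM.
dd if=/dev/zero of=/tmp/bigfile bs=8192 count=2048 conv=fsync
# 2048 * 8192 = 16 MiB, fsync'ed before dd exits, so the reported
# throughput includes the physical write, not just the cache copy.
```

The `time (dd ...; sync)` form from the thread works too, but folding the sync into dd keeps the whole cost inside dd's own timing report.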