Re: [PERFORM] Fusion-io ioDrive

2008-07-08 Thread Jeremy Harris
Scott Carey wrote: Well, what does a revolution like this require of Postgres? That is the question. [...] #1 Per-Tablespace optimizer tuning parameters. ... automatically measured? Cheers, Jeremy -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make ch

Re: [PERFORM] Fusion-io ioDrive

2008-07-08 Thread Scott Carey
Well, what does a revolution like this require of Postgres? That is the question. I have looked at the I/O drive, and it could increase our DB throughput significantly over a RAID array. Ideally, I would put a few key tables and the WAL, etc. I'd also want all the sort or hash overflow from wo

Re: [PERFORM] Fusion-io ioDrive

2008-07-08 Thread Markus Wanner
Hi, Jonah H. Harris wrote: I'm not sure how those cards work, but my guess is that the CPU will go 100% busy (with a near-zero I/O wait) on any sizable workload. In this case, the current pgbench configuration being used is quite small and probably won't resemble this. I'm not sure how they w

Re: [PERFORM] Fusion-io ioDrive

2008-07-07 Thread PFC
PFC, I have to say these kind of posts make me a fan of yours. I've read many of your storage-related replied and have found them all very educational. I just want to let you know I found your assessment of the impact of Flash storage perfectly-worded and unbelievably insightful. Thanks

Re: [PERFORM] Fusion-io ioDrive

2008-07-07 Thread Jeffrey Baker
On Mon, Jul 7, 2008 at 6:08 AM, Merlin Moncure <[EMAIL PROTECTED]> wrote: > On Sat, Jul 5, 2008 at 2:41 AM, Jeffrey Baker <[EMAIL PROTECTED]> wrote:
>>>          Service Time Percentile, millis
>>>          R/W TPS   R-O TPS   50th   80th   90th   95th
>>> RAID         182       673

Re: [PERFORM] Fusion-io ioDrive

2008-07-07 Thread Jonah H. Harris
On Mon, Jul 7, 2008 at 9:23 AM, Merlin Moncure <[EMAIL PROTECTED]> wrote: > I have a lot of problems with your statements. First of all, we are > not really talking about 'RAM' storage...I think your comments would > be more on point if we were talking about mounting database storage > directly fr

Re: [PERFORM] Fusion-io ioDrive

2008-07-07 Thread PFC
*) is the flash random write problem going to be solved in hardware or specialized solid state write caching techniques. At least currently, it seems like software is filling the role. Those flash chips are page-based, not unlike a hard disk, i.e., you cannot erase and write a byte; you mus
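The erase-before-write behavior PFC describes is what makes small random writes expensive on raw NAND. A toy Python model (not the ioDrive's actual firmware; the page and block sizes are assumptions) shows the resulting write amplification — updating one already-written page forces the whole erase block to be read, erased, and rewritten:

```python
# Toy model of NAND flash writes: a page can only be written while its
# containing erase block is in the erased state, so an in-place page
# update forces a read-modify-erase-rewrite of the entire block.
BLOCK_PAGES = 64   # pages per erase block (assumed geometry)

class EraseBlock:
    def __init__(self):
        self.pages = [None] * BLOCK_PAGES  # None = erased, i.e. writable
        self.erase_count = 0

    def write_page(self, index, data):
        """Write one page; returns how many pages were physically written."""
        pages_written = 1
        if self.pages[index] is not None:
            survivors = list(self.pages)        # read the whole block out
            self.pages = [None] * BLOCK_PAGES   # erase the whole block
            self.erase_count += 1
            for i, p in enumerate(survivors):   # copy the other pages back
                if i != index and p is not None:
                    self.pages[i] = p
                    pages_written += 1
        self.pages[index] = data
        return pages_written

block = EraseBlock()
for i in range(BLOCK_PAGES):
    block.write_page(i, b"x")       # sequential fill: no erases needed
cost = block.write_page(0, b"y")    # one-page update: whole block rewritten
print(block.erase_count, cost)      # -> 1 64
```

This 64x amplification for a single logical write is what log-structured flash translation layers (remapping updates to already-erased pages, as done in software on devices like the ioDrive) are designed to avoid.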

Re: [PERFORM] Fusion-io ioDrive

2008-07-07 Thread Merlin Moncure
On Wed, Jul 2, 2008 at 7:41 AM, Jonah H. Harris <[EMAIL PROTECTED]> wrote: > On Tue, Jul 1, 2008 at 8:18 PM, Jeffrey Baker <[EMAIL PROTECTED]> wrote: >> Basically the ioDrive is smoking the RAID. The only real problem with >> this benchmark is that the machine became CPU-limited rather quickly. >

Re: [PERFORM] Fusion-io ioDrive

2008-07-07 Thread Merlin Moncure
On Sat, Jul 5, 2008 at 2:41 AM, Jeffrey Baker <[EMAIL PROTECTED]> wrote:
>>          Service Time Percentile, millis
>>          R/W TPS   R-O TPS   50th   80th   90th   95th
>> RAID         182       673     18     32     42     64
>> Fusion       971      4792      8      9
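The headline numbers in the quoted table work out to roughly a 5x-7x throughput gain for the ioDrive over the RAID; a quick check in Python (TPS figures copied from the post):

```python
# Throughput figures from Jeffrey Baker's benchmark post.
raid_rw, raid_ro = 182, 673        # RAID: read/write TPS, read-only TPS
fusion_rw, fusion_ro = 971, 4792   # ioDrive: same two workloads

rw_speedup = fusion_rw / raid_rw
ro_speedup = fusion_ro / raid_ro
print(f"read/write: {rw_speedup:.1f}x, read-only: {ro_speedup:.1f}x")
# -> read/write: 5.3x, read-only: 7.1x
```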

Re: [PERFORM] Fusion-io ioDrive

2008-07-04 Thread Jeffrey Baker
On Tue, Jul 1, 2008 at 5:18 PM, Jeffrey Baker <[EMAIL PROTECTED]> wrote: > I recently got my hands on a device called ioDrive from a company > called Fusion-io. The ioDrive is essentially 80GB of flash on a PCI > card. [...] >Service Time Percentile, millis >R

Re: [PERFORM] Fusion-io ioDrive

2008-07-02 Thread Cédric Villemain
On Wednesday 02 July 2008, Jonah H. Harris wrote: > On Tue, Jul 1, 2008 at 8:18 PM, Jeffrey Baker <[EMAIL PROTECTED]> wrote: > > Basically the ioDrive is smoking the RAID. The only real problem with > > this benchmark is that the machine became CPU-limited rather quickly. > > That's traditional

Re: [PERFORM] Fusion-io ioDrive

2008-07-02 Thread Jonah H. Harris
On Tue, Jul 1, 2008 at 8:18 PM, Jeffrey Baker <[EMAIL PROTECTED]> wrote: > Basically the ioDrive is smoking the RAID. The only real problem with > this benchmark is that the machine became CPU-limited rather quickly. That's traditionally the problem with everything being in memory. Unless the dat

Re: [PERFORM] Fusion-io ioDrive

2008-07-02 Thread Merlin Moncure
On Tue, Jul 1, 2008 at 8:18 PM, Jeffrey Baker <[EMAIL PROTECTED]> wrote: > I recently got my hands on a device called ioDrive from a company > called Fusion-io. The ioDrive is essentially 80GB of flash on a PCI > card. It has its own driver for Linux completely outside of the > normal scsi/sata/s

Re: [PERFORM] Fusion-io ioDrive

2008-07-01 Thread Andrej Ricnik-Bay
On 02/07/2008, Jeffrey Baker <[EMAIL PROTECTED]> wrote: > Yeah. The manufacturer rates it for 5 years in constant use. I > remain skeptical. I read in one of their spec-sheets that w/ continuous writes it should survive roughly 3.4 years ... I'd be a tad more conservative, I guess, and try to dr

Re: [PERFORM] Fusion-io ioDrive

2008-07-01 Thread Greg Smith
On Tue, 1 Jul 2008, Jeffrey Baker wrote: The only real problem with this benchmark is that the machine became CPU-limited rather quickly. During the runs with the ioDrive, iowait was pretty well zero, with user CPU being about 75% and system getting about 20%. You might try reducing the numb

Re: [PERFORM] Fusion-io ioDrive

2008-07-01 Thread Jeffrey Baker
On Tue, Jul 1, 2008 at 6:17 PM, Andrej Ricnik-Bay <[EMAIL PROTECTED]> wrote: > On 02/07/2008, Jeffrey Baker <[EMAIL PROTECTED]> wrote: > >> Red Hat and its clones. The other problem is the 80GB model is too >> small to hold my entire DB, Although it could be used as a tablespace >> for some cri

Re: [PERFORM] Fusion-io ioDrive

2008-07-01 Thread Andrej Ricnik-Bay
On 02/07/2008, Jeffrey Baker <[EMAIL PROTECTED]> wrote: > Red Hat and its clones. The other problem is the 80GB model is too > small to hold my entire DB, Although it could be used as a tablespace > for some critical tables. But hey, it's fast. And when/if it dies, please give us a rough gues

[PERFORM] Fusion-io ioDrive

2008-07-01 Thread Jeffrey Baker
I recently got my hands on a device called ioDrive from a company called Fusion-io. The ioDrive is essentially 80GB of flash on a PCI card. It has its own driver for Linux completely outside of the normal scsi/sata/sas/fc block device stack, but from the user's perspective it behaves like a block
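Because the ioDrive presents itself as an ordinary block device, it can be measured with generic tools. Below is a minimal, hypothetical Python harness (not the poster's actual benchmark) for collecting random-read service-time percentiles of the kind reported in this thread; it reads from a scratch file, so on a cached filesystem the absolute numbers will mostly reflect the OS page cache rather than the device:

```python
import os
import random
import tempfile
import time

PAGE = 8192      # PostgreSQL-sized page (assumption)
N_PAGES = 1024   # size of the scratch region
N_READS = 500    # number of random reads to time

# Create a scratch file standing in for the device under test.
fd, path = tempfile.mkstemp()
os.write(fd, os.urandom(PAGE * N_PAGES))
os.close(fd)

latencies = []
f = os.open(path, os.O_RDONLY)
for _ in range(N_READS):
    off = random.randrange(N_PAGES) * PAGE
    t0 = time.perf_counter()
    os.pread(f, PAGE, off)  # one random page-sized read at offset `off`
    latencies.append((time.perf_counter() - t0) * 1000)  # millis
os.close(f)
os.remove(path)

latencies.sort()
for pct in (50, 80, 90, 95):  # same percentiles as the thread's tables
    print(f"{pct}th: {latencies[int(len(latencies) * pct / 100) - 1]:.3f} ms")
```

Pointing the same loop at a raw device node (opened with O_DIRECT to bypass the page cache) is the usual way to compare a flash card against a RAID array on equal terms.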