Scott Carey wrote:
Well, what does a revolution like this require of Postgres? That is the
question.
[...]
#1 Per-Tablespace optimizer tuning parameters.
... automatically measured?
Cheers,
Jeremy
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
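Jeremy's question about measuring these parameters automatically can be sketched in a few lines: time sequential versus random 8 kB page reads on a file placed on the tablespace in question and take the ratio. This is only an illustrative probe, not how the planner derives its costs; the file size here is tiny, so on a warm OS cache both timings collapse toward each other, and a serious probe would need O_DIRECT or a file much larger than RAM.

```python
# Hypothetical probe: estimate a random/sequential page-read cost ratio
# for the filesystem a tablespace lives on. All sizes are illustrative.
import os
import random
import tempfile
import time

PAGE = 8192      # PostgreSQL block size
NPAGES = 2048    # 16 MB test file; far too small to defeat the OS cache

def time_reads(fd, offsets):
    start = time.perf_counter()
    for off in offsets:
        os.pread(fd, PAGE, off)
    return time.perf_counter() - start

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(PAGE * NPAGES))
    path = f.name

fd = os.open(path, os.O_RDONLY)
seq = [i * PAGE for i in range(NPAGES)]
rnd = seq[:]
random.shuffle(rnd)
t_seq = time_reads(fd, seq)
t_rnd = time_reads(fd, rnd)
os.close(fd)
os.unlink(path)

ratio = t_rnd / t_seq
print(f"random/sequential read-time ratio: {ratio:.2f}")
```

Later PostgreSQL releases did grow per-tablespace cost settings (e.g. `ALTER TABLESPACE fast SET (random_page_cost = 1.1)`), which is where a measurement like this would be fed.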
Well, what does a revolution like this require of Postgres? That is the
question.
I have looked at the I/O drive, and it could increase our DB throughput
significantly over a RAID array.
Ideally, I would put a few key tables and the WAL, etc. I'd also want all
the sort or hash overflow from work_mem [...]
Hi,
Jonah H. Harris wrote:
I'm not sure how those cards work, but my guess is that the CPU will
go 100% busy (with a near-zero I/O wait) on any sizable workload. In
this case, the current pgbench configuration being used is quite small
and probably won't resemble this.
I'm not sure how they work [...]
PFC, I have to say these kinds of posts make me a fan of yours. I've
read many of your storage-related replies and have found them all very
educational. I just want to let you know I found your assessment of the
impact of Flash storage perfectly worded and unbelievably insightful.
Thanks
On Mon, Jul 7, 2008 at 6:08 AM, Merlin Moncure <[EMAIL PROTECTED]> wrote:
> On Sat, Jul 5, 2008 at 2:41 AM, Jeffrey Baker <[EMAIL PROTECTED]> wrote:
>>>                Service Time Percentile, millis
>>>         R/W TPS   R-O TPS   50th   80th   90th   95th
>>> RAID        182       673     18     32     42     64
On Mon, Jul 7, 2008 at 9:23 AM, Merlin Moncure <[EMAIL PROTECTED]> wrote:
> I have a lot of problems with your statements. First of all, we are
> not really talking about 'RAM' storage...I think your comments would
> be more on point if we were talking about mounting database storage
> directly from [...]
*) Is the flash random write problem going to be solved in hardware or
by specialized solid-state write-caching techniques? At least
currently, it seems like software is filling the role.
Those flash chips are page-based, not unlike a hard disk, i.e. you cannot
erase and write a byte, you must erase and rewrite a whole block at once. [...]
On Wed, Jul 2, 2008 at 7:41 AM, Jonah H. Harris <[EMAIL PROTECTED]> wrote:
> On Tue, Jul 1, 2008 at 8:18 PM, Jeffrey Baker <[EMAIL PROTECTED]> wrote:
>> Basically the ioDrive is smoking the RAID. The only real problem with
>> this benchmark is that the machine became CPU-limited rather quickly.
>
On Sat, Jul 5, 2008 at 2:41 AM, Jeffrey Baker <[EMAIL PROTECTED]> wrote:
>>                Service Time Percentile, millis
>>         R/W TPS   R-O TPS   50th   80th   90th   95th
>> RAID        182       673     18     32     42     64
>> Fusion      971      4792      8      9    [...]
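For reference, percentile rows like the ones in the table above can be computed from a list of per-transaction service times, e.g. out of a pgbench latency log. This sketch uses the nearest-rank method and made-up sample values:

```python
# Nearest-rank percentile over a list of service times (ms).
# The sample latencies are synthetic, chosen only to demonstrate the math.
def percentile(samples, pct):
    s = sorted(samples)
    k = max(0, int(round(pct / 100.0 * len(s))) - 1)
    return s[k]

latencies_ms = [18, 21, 25, 30, 32, 35, 40, 42, 55, 64]  # made-up values
for p in (50, 80, 90, 95):
    print(f"{p}th percentile: {percentile(latencies_ms, p)} ms")
```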
On Tue, Jul 1, 2008 at 5:18 PM, Jeffrey Baker <[EMAIL PROTECTED]> wrote:
> I recently got my hands on a device called ioDrive from a company
> called Fusion-io. The ioDrive is essentially 80GB of flash on a PCI
> card.
[...]
>                Service Time Percentile, millis
>         R/W TPS   R-O TPS   50th   80th   90th   95th [...]
On Wednesday 02 July 2008, Jonah H. Harris wrote:
> On Tue, Jul 1, 2008 at 8:18 PM, Jeffrey Baker <[EMAIL PROTECTED]> wrote:
> > Basically the ioDrive is smoking the RAID. The only real problem with
> > this benchmark is that the machine became CPU-limited rather quickly.
>
> That's traditionally the problem with everything being in memory. [...]
On Tue, Jul 1, 2008 at 8:18 PM, Jeffrey Baker <[EMAIL PROTECTED]> wrote:
> Basically the ioDrive is smoking the RAID. The only real problem with
> this benchmark is that the machine became CPU-limited rather quickly.
That's traditionally the problem with everything being in memory.
Unless the data [...]
On Tue, Jul 1, 2008 at 8:18 PM, Jeffrey Baker <[EMAIL PROTECTED]> wrote:
> I recently got my hands on a device called ioDrive from a company
> called Fusion-io. The ioDrive is essentially 80GB of flash on a PCI
> card. It has its own driver for Linux completely outside of the
> normal scsi/sata/sas/fc block device stack [...]
On 02/07/2008, Jeffrey Baker <[EMAIL PROTECTED]> wrote:
> Yeah. The manufacturer rates it for 5 years in constant use. I
> remain skeptical.
I read in one of their spec-sheets that with continuous writes it
should survive roughly 3.4 years ... I'd be a tad more conservative,
I guess, and try to dr [...]
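The 3.4-year figure is consistent with simple endurance arithmetic: capacity times per-cell erase cycles gives total writable bytes, and dividing by a sustained write rate gives a lifetime. The cycle count, write rate, and perfect wear leveling below are assumptions for illustration, not Fusion-io's published specs:

```python
# Back-of-the-envelope flash endurance math. Every constant here is an
# assumption chosen for illustration, not a vendor specification.
CAPACITY_GB = 80
ERASE_CYCLES = 100_000          # assumed per-cell endurance (SLC-era figure)
WRITE_RATE_MB_S = 75            # assumed sustained write rate
WEAR_LEVELING_EFFICIENCY = 1.0  # assume perfect wear leveling

total_writable_gb = CAPACITY_GB * ERASE_CYCLES * WEAR_LEVELING_EFFICIENCY
seconds = total_writable_gb * 1024 / WRITE_RATE_MB_S
years = seconds / (3600 * 24 * 365)
print(f"{years:.1f} years of continuous writes")
```

With these assumptions the result lands at about 3.5 years, close to the quoted spec-sheet number; imperfect wear leveling or a higher write rate shortens it proportionally, which is a reasonable basis for being "a tad more conservative".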
On Tue, 1 Jul 2008, Jeffrey Baker wrote:
The only real problem with this benchmark is that the machine became
CPU-limited rather quickly. During the runs with the ioDrive, iowait was
pretty well zero, with user CPU being about 75% and system getting about
20%.
You might try reducing the number [...]
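The user/system/iowait split quoted above comes from counters like the `cpu` line in /proc/stat; the split over an interval is computed from the delta between two snapshots. This sketch uses synthetic snapshot numbers chosen to mimic the ~75% user / ~20% system / near-zero iowait run described:

```python
# Compute a CPU-time breakdown from two /proc/stat "cpu" snapshots.
# Field order per proc(5): user nice system idle iowait irq softirq.
# The snapshot tuples below are synthetic, for illustration only.
def cpu_split(before, after):
    delta = [b - a for a, b in zip(before, after)]
    total = sum(delta)
    user, nice, system, idle, iowait = delta[:5]
    return {"user": 100.0 * (user + nice) / total,
            "system": 100.0 * system / total,
            "iowait": 100.0 * iowait / total,
            "idle": 100.0 * idle / total}

before = (1000, 0, 500, 8000, 300, 0, 0)
after  = (1750, 0, 700, 8050, 300, 0, 0)
print(cpu_split(before, after))
```

A run that looks like this (high user, near-zero iowait) is CPU-bound, which is why the suggestion is to change the client count rather than the storage.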
On Tue, Jul 1, 2008 at 6:17 PM, Andrej Ricnik-Bay
<[EMAIL PROTECTED]> wrote:
> On 02/07/2008, Jeffrey Baker <[EMAIL PROTECTED]> wrote:
>
>> Red Hat and its clones. The other problem is the 80GB model is too
>> small to hold my entire DB, although it could be used as a tablespace
>> for some critical tables. But hey, it's fast.
On 02/07/2008, Jeffrey Baker <[EMAIL PROTECTED]> wrote:
> Red Hat and its clones. The other problem is the 80GB model is too
> small to hold my entire DB, although it could be used as a tablespace
> for some critical tables. But hey, it's fast.
And when/if it dies, please give us a rough guess [...]
I recently got my hands on a device called ioDrive from a company
called Fusion-io. The ioDrive is essentially 80GB of flash on a PCI
card. It has its own driver for Linux completely outside of the
normal scsi/sata/sas/fc block device stack, but from the user's
perspective it behaves like a block device [...]