On Wed, May 03, 2006 at 04:30:32PM -0500, Scott Marlowe wrote:
If you've not tried bonnie++ on a limited memory machine, you really
should.
Yes, I have. I also patched bonnie to handle large files and other such
nifty things before bonnie++ was forked. Mostly I just didn't get much
value out of it.
On Wed, 2006-05-03 at 15:53, Michael Stone wrote:
> On Wed, May 03, 2006 at 02:40:15PM -0500, Scott Marlowe wrote:
> >Note that I'm referring to bonnie++ as was an earlier poster. It
> >certainly seems capable of giving you a good idea of how your hardware
> >will behave under load.
>
> IME it gives fairly useless results. YMMV.
On Wed, May 03, 2006 at 02:40:15PM -0500, Scott Marlowe wrote:
Note that I'm referring to bonnie++ as was an earlier poster. It
certainly seems capable of giving you a good idea of how your hardware
will behave under load.
IME it gives fairly useless results. YMMV. Definitely the numbers posted
On Wed, 2006-05-03 at 14:26, Michael Stone wrote:
> On Wed, May 03, 2006 at 01:08:21PM -0500, Jim C. Nasby wrote:
> >Well, in this case the question was about random write access, which dd
> >won't show you.
>
> That's the kind of thing you need to measure against your workload.
Of course, the fi
On Wed, May 03, 2006 at 01:08:21PM -0500, Jim C. Nasby wrote:
Well, in this case the question was about random write access, which dd
won't show you.
That's the kind of thing you need to measure against your workload.
Mike Stone
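Since dd only exercises a sequential stream, the random-write difference Jim is asking about can be seen with a small microbenchmark. A minimal Python sketch follows; the 8 kB block size, 8 MB file size, and loop counts are arbitrary assumptions for illustration, and a file this small mostly hits the OS cache, so treat the numbers as illustrative rather than as a disk benchmark:

```python
import os
import random
import tempfile
import time

BLOCK = 8192      # 8 kB blocks, roughly a PostgreSQL page
NBLOCKS = 1024    # 8 MB test file -- too small to defeat the OS cache

fd, path = tempfile.mkstemp()
try:
    buf = os.urandom(BLOCK)

    # Sequential pass: write blocks back to back, as dd would.
    start = time.perf_counter()
    for _ in range(NBLOCKS):
        os.write(fd, buf)
    os.fsync(fd)
    seq = time.perf_counter() - start

    # Random pass: seek to a random block offset before each write.
    start = time.perf_counter()
    for _ in range(NBLOCKS):
        os.lseek(fd, random.randrange(NBLOCKS) * BLOCK, os.SEEK_SET)
        os.write(fd, buf)
    os.fsync(fd)
    rnd = time.perf_counter() - start

    print(f"sequential: {seq:.3f}s  random: {rnd:.3f}s")
finally:
    os.close(fd)
    os.unlink(path)
```

On rotating disks with a dataset big enough to force real I/O, the random pass is typically far slower; on a cached file the gap largely disappears, which is exactly the caching problem discussed elsewhere in this thread.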
On Wed, May 03, 2006 at 01:06:06PM -0400, Michael Stone wrote:
> On Wed, May 03, 2006 at 11:07:15AM -0500, Scott Marlowe wrote:
> >I have often used the mem=xxx arguments to lilo when needing to limit
> >the amount of memory for testing purposes. Just google for limit memory
> >and your bootloader
Mario Splivalo <[EMAIL PROTECTED]> writes:
> I have a quite large query that takes over a minute to run on my laptop.
The EXPLAIN output you provided doesn't seem to agree with the stated
query. Where'd the "service_id = 1102" condition come from?
In general, I'd suggest playing around with the
On 5/2/06, Tom Lane <[EMAIL PROTECTED]> wrote:
"Ian Burrell" <[EMAIL PROTECTED]> writes:
> We recently upgraded to PostgreSQL 8.1 from 7.4 and a few queries are
> having performance problems and running for very long times. The
> commonality seems to be PostgreSQL 8.1 is choosing to use a nested
On Wed, May 03, 2006 at 11:07:15AM -0500, Scott Marlowe wrote:
I have often used the mem=xxx arguments to lilo when needing to limit
the amount of memory for testing purposes. Just google for limit memory
and your bootloader to find the options.
Or, just don't worry about it. Even if you get b
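For reference, the mem= trick mentioned above looks like the fragment below. The 512M figure and the image/label names are arbitrary examples, not anything from this thread; GRUB takes the same mem= parameter on its kernel line:

```
# /etc/lilo.conf -- boot entry that caps the kernel's visible RAM at 512 MB
image=/boot/vmlinuz
        label=linux-512m
        append="mem=512M"
```

Remember to rerun lilo after editing the file so the change takes effect.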
On Wed, 2006-05-03 at 10:59, Michael Stone wrote:
> On Wed, May 03, 2006 at 09:19:52AM -0400, Jeff Trout wrote:
> >Bonnie++ is able to use very large datasets. It also tries to figure
> >out the size you want (2x RAM) - the original bonnie is limited to 2GB.
>
> Yes, and once you get into large
On Wed, May 03, 2006 at 09:19:52AM -0400, Jeff Trout wrote:
Bonnie++ is able to use very large datasets. It also tries to figure
out the size you want (2x RAM) - the original bonnie is limited to 2GB.
Yes, and once you get into large datasets like that the quality of the
data is fairly poor b
>  ->  Nested Loop  (cost=0.00..176144.30 rows=57925 width=26)
>        (actual time=68.322..529472.026 rows=57925 loops=1)
>        ->  Seq Scan on ticketing_codes_played
>              (cost=0.00..863.25 rows=57925 width=8)
>              (actual time=0.042..473.881 rows=57925 loops=1)
>        ->  Index
On May 3, 2006, at 10:16 AM, Vivek Khera wrote:
On May 3, 2006, at 9:19 AM, Jeff Trout wrote:
Bonnie++ is able to use very large datasets. It also tries to
figure out the size you want (2x RAM) - the original bonnie is
limited to 2GB.
but you have to be careful building bonnie++ since i
On May 3, 2006, at 9:19 AM, Jeff Trout wrote:
Bonnie++ is able to use very large datasets. It also tries to
figure out the size you want (2x RAM) - the original bonnie is
limited to 2GB.
but you have to be careful building bonnie++ since it has bad
assumptions about which systems can do
On May 3, 2006, at 8:18 AM, Michael Stone wrote:
On Tue, May 02, 2006 at 08:09:52PM -0600, Brendan Duddridge wrote:
                ---Sequential Output--- ---Sequential Input-- --Random--
                -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB
On Wednesday 03 May 2006 03:29, Magnus Hagander wrote:
> > > > FWIW, I've found problems running PostgreSQL on Windows in a
> > > > multi-CPU environment on w2k3. It runs fine for some period, and
> > > > then CPU and throughput drop to zero. So far I've been unable to
> > > > track down any more i
On Tue, May 02, 2006 at 08:09:52PM -0600, Brendan Duddridge wrote:
                ---Sequential Output--- ---Sequential Input-- --Random--
                -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB   K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU
On Tue, May 02, 2006 at 07:28:34PM -0400, Bill Moran wrote:
Reindexing is in a different class than vacuuming.
Kinda, but it is in the same class as vacuum full. If vacuum neglect (or
dramatic change in usage) has gotten you to the point of 10G of overhead
on a 2G table you can get a dramatic
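For the record, the recovery options for that kind of bloat look roughly like the commands below. The table and index names are made up for illustration, and note that on 8.1-era servers VACUUM FULL shuffles tuples in place and can itself grow the indexes, which is why the REINDEX follows it:

```sql
-- Rewrite the table to give dead space back to the OS (takes an exclusive lock).
VACUUM FULL mytable;

-- VACUUM FULL can bloat indexes while moving tuples, so rebuild them afterwards.
REINDEX TABLE mytable;

-- Alternative: CLUSTER rewrites the table in index order and rebuilds all of
-- its indexes in one pass (also an exclusive lock; 8.1 syntax shown).
CLUSTER mytable_pkey ON mytable;
```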
> > > FWIW, I've found problems running PostgreSQL on Windows in a
> > > multi-CPU environment on w2k3. It runs fine for some period, and
> > > then CPU and throughput drop to zero. So far I've been unable to
> > > track down any more information than that, other than the fact that
> > > I h