Well just a little update:
1) Looks like I'm definitely RAM constrained. Just placed an order
for another 4GB.
2) I ended up dropping the primary key too, which helped a lot with disk
thrashing (average disk queue wait was 500-8500ms before and 250-500ms
after).
3) Playing with most
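For reference, dropping (and later restoring) the primary key mentioned in
(2) is just a couple of DDL statements. This is only a sketch; the table
name "packets" and the constraint name "packets_pkey" are assumptions, not
something given in the thread:

-- hypothetical names: check the \d output for the real constraint name first
ALTER TABLE packets DROP CONSTRAINT packets_pkey;
-- ... run the heavy bulk update ...
ALTER TABLE packets ADD PRIMARY KEY (pri_key);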
Are the key values really all 48 chars long? If not, you made a
bad datatype choice: varchar(n) (or even text) would be a lot
smarter. char(n) wastes space on blank-padding.
Yep, everything exactly 48. Looks like I'll be storing it as a bytea
in the near future though.
It's a good idea
Hi, Aaron,
Aaron Turner wrote:
> 4) Does decoding the data (currently base64) and storing the binary
> data improve the distribution of the index, thereby making it more
> efficient?
Yes, but then you should not use varchar, but a bytea.
If your data is some number internally, numeric or decimal
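A minimal sketch of that conversion, assuming the table is called
"packets" (not stated in the thread) and the column really holds base64
text; note this rewrites the table and takes an exclusive lock:

-- 8.0 supports changing a column's type with a USING expression
ALTER TABLE packets
  ALTER COLUMN data TYPE bytea
  USING decode(data, 'base64');

The 48 base64 characters decode to 36 raw bytes, so the unique index on
the column gets rebuilt over smaller keys as well.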
On 2/12/06, Tom Lane <[EMAIL PROTECTED]> wrote:
> Aaron Turner <[EMAIL PROTECTED]> writes:
> > Well before I go about re-architecting things, it would be good to
> > have a strong understanding of just what is going on. Obviously, the
> > unique index on the char(48) is the killer. What I don't k
On 2/12/06, Marc Morin <[EMAIL PROTECTED]> wrote:
> From your config, a check point will be forced when
>
> (checkpoint_segments * 16MB) < rows * (8K/N * h + (1-h) * 8K) * B
>
> Where h is the "hitrate" or correlation between the update scan and the
> index. Do you have a sense of what this is?
I know
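To put rough numbers on that inequality (purely illustrative, nothing here
was measured, and the B factor is ignored): with the default
checkpoint_segments = 3 the left side is 3 * 16MB = 48MB. If the bulk
update touches the full 5M rows and its scan order has essentially no
correlation with the index (h close to 0), the right side is on the order
of

  5,000,000 rows * 8K per dirtied page  =  ~40GB

so checkpoints would be forced almost continuously unless
checkpoint_segments is raised by a couple of orders of magnitude.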
sioning,
shared_buffers and checkpoint_segments are interconnected in weird and
wonderful ways... Seldom have I found "simple" solutions to performance
problems.
Marc
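As a purely illustrative starting point for that kind of tuning on 8.0
(these values are guesses for a box with 4-8GB of RAM, not recommendations
from this thread, and need to be validated by testing):

shared_buffers = 32768         # 8K pages, roughly 256MB
checkpoint_segments = 64       # about 1GB of WAL between forced checkpoints
checkpoint_timeout = 900       # seconds
wal_buffers = 64               # 8K pages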
Aaron Turner <[EMAIL PROTECTED]> writes:
> Well before I go about re-architecting things, it would be good to
> have a strong understanding of just what is going on. Obviously, the
> unique index on the char(48) is the killer. What I don't know is:
You have another unique index on the integer pr
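A quick way to see which index is doing the damage is to compare their
on-disk sizes straight from pg_class (relpages are 8K blocks; the index
names below are made up, use the real ones):

SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname IN ('packets_pkey', 'packets_data_key');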
On 2/11/06, Jim C. Nasby <[EMAIL PROTECTED]> wrote:
> On Fri, Feb 10, 2006 at 09:24:39AM -0800, Aaron Turner wrote:
> > On 2/10/06, Matthew T. O'Connor wrote:
> > > Aaron Turner wrote:
> >
> > Basically, I need some way to optimize PG so that I don't have to drop
> > that index every time.
> >
> >
On Fri, Feb 10, 2006 at 09:24:39AM -0800, Aaron Turner wrote:
> On 2/10/06, Matthew T. O'Connor wrote:
> > Aaron Turner wrote:
> > > So I'm trying to figure out how to optimize my PG install (8.0.3) to
> > > get better performance without dropping one of my indexes.
> >
> > What about something li
On 2/10/06, Matthew T. O'Connor wrote:
> Aaron Turner wrote:
> > So I'm trying to figure out how to optimize my PG install (8.0.3) to
> > get better performance without dropping one of my indexes.
>
> What about something like this:
>
> begin;
> drop slow_index_name;
> update;
> create index slow_
Aaron Turner wrote:
So I'm trying to figure out how to optimize my PG install (8.0.3) to
get better performance without dropping one of my indexes.
What about something like this:
begin;
drop slow_index_name;
update;
create index slow_index_name;
commit;
vacuum;
Matt
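With concrete (but assumed) names filled in, that idea looks something
like the following. Note that DROP INDEX inside the transaction takes an
exclusive lock on the table, so other sessions will block until the COMMIT:

BEGIN;
DROP INDEX packets_data_key;               -- name assumed
UPDATE packets SET groupid = groupid + 1;  -- stand-in for the real bulk update
CREATE UNIQUE INDEX packets_data_key ON packets (data);
COMMIT;
VACUUM ANALYZE packets;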
On 2/10/06, hubert depesz lubaczewski <[EMAIL PROTECTED]> wrote:
> On 2/10/06, Aaron Turner <[EMAIL PROTECTED]> wrote:
> > So I'm trying to figure out how to optimize my PG install (8.0.3) to
> > get better performance without dropping one of my indexes.
> > Basically, I have a table of 5M records
On 2/10/06, Aaron Turner <[EMAIL PROTECTED]> wrote:
> So I'm trying to figure out how to optimize my PG install (8.0.3) to
> get better performance without dropping one of my indexes.
> Basically, I have a table of 5M records with 3 columns:
> pri_key (SERIAL)
> data char(48)
> groupid integer
> th
So I'm trying to figure out how to optimize my PG install (8.0.3) to
get better performance without dropping one of my indexes.
Basically, I have a table of 5M records with 3 columns:
pri_key (SERIAL)
data char(48)
groupid integer
there is an additional unique index on the data column.
The prob
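Spelled out as DDL, the table described here would be roughly the
following; only the column definitions come from the post, the table and
index names are made up:

CREATE TABLE packets (
    pri_key  SERIAL PRIMARY KEY,
    data     char(48),
    groupid  integer
);
CREATE UNIQUE INDEX packets_data_key ON packets (data);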