Re: [GENERAL] Asking advice on speeding up a big table

2006-04-16 Thread felix-accts-pgsql
On Sat, Apr 15, 2006 at 10:31:26AM -0400, Francisco Reyes wrote:
> [EMAIL PROTECTED] writes:
> 
> >Usage is to match data from the key and val tables to fetch the data
> >value from the sid table.
> 
> What is the relation between the key and val tables?
> Will key.id and val.id be equal?

This benchmark explores an idea for a simple Berkeley-DB-like lookup,
but faster, allowing ranges, and specialized for looking up info in
some other tables I have.  The key table data is table.column, and
1000 is a rough guess at how many unique column names there might be.
The val table is the contents of those columns, and 100K is another
rough guess.  The end result, the sid table, is a generic ID I have;
it could be anything, like lat/lon, room-bookshelf-shelf-book, etc.

key.id and val.id have no bearing on each other.
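
To make that concrete, here is how one logical fact might be spread
across the four tables (all ids and values below are invented for
illustration):

    INSERT INTO "key" (id, data) VALUES (7, 'readings.temperature');
    INSERT INTO val (id, data) VALUES (93, 42);
    INSERT INTO sid (id, data) VALUES (5001, 'lat 52.2 lon 6.9, shelf 3 book 12');
    INSERT INTO glue ("key", val, sid) VALUES (7, 93, 5001);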

I have made some minor changes and sped things up to around 15-20
lookups/sec, good enough, but not exciting :-) and in the process I
have come across some odd misbehavior.  I have a writeup, almost ready
to post, but I want to make sure I cross my Is and dot my Ts properly.

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman & rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o



Re: [GENERAL] Asking advice on speeding up a big table

2006-04-15 Thread Francisco Reyes

[EMAIL PROTECTED] writes:


> I have made some minor changes and sped things up to around 15-20
> lookups/sec, good enough, but not exciting :-)

Hmm, let me understand this.
You went from one query every 3 to 4 seconds to 45 to 60 queries in
the same amount of time... that's 45 to 60 times faster... and that is
not something to be excited about? :)





Re: [GENERAL] Asking advice on speeding up a big table

2006-04-15 Thread Francisco Reyes

[EMAIL PROTECTED] writes:


> Usage is to match data from the key and val tables to fetch the data
> value from the sid table.


What is the relation between the key and val tables?
Will key.id and val.id be equal?


I have never quite fully understood the output of EXPLAIN ANALYZE, but
I wonder why you have:



 ->  Nested Loop  (cost=0.00..4467.01 rows=1001 width=20) (actual time=0.205..28.304 rows=1000 loops=1)


Why 1000 rows?
Do any of your conditions in the SELECT return many records?



Re: [GENERAL] Asking advice on speeding up a big table

2006-04-11 Thread Alban Hertroys

[EMAIL PROTECTED] wrote:

> On Tue, Apr 11, 2006 at 09:52:40AM +0200, hubert depesz lubaczewski wrote:
>> On 4/10/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>>> What I was hoping for was some general insight from the EXPLAIN
>>> ANALYZE, that maybe extra or different indices would help, or if there
>>> is some better method for finding one row from 100 million.  I realize
>>> I am asking a vague question which probably can't be solved as
>>> presented.
>>
>> hmm .. perhaps you can try to denormalize the table, and then use
>> multicolumn indices?
>
> That's an idea ... I had thought that since my two referenced tables
> are small in comparison to the third table, that wouldn't be of any
> use, but I will give it a try.  Thanks ...


You could also experiment with clustering your large tables on some 
index, or using one or more partial indexes on relevant partitions of 
your data set. The application of such measures really depends on how 
your data behaves, so it's hard to predict whether it's going to help 
you or not.
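
For instance (a sketch only, in 8.0 syntax; the cutoff in the partial
index predicate is invented):

    -- Physically order glue by (key, val) so matching rows share pages:
    CLUSTER glue_key_val_idx ON glue;
    ANALYZE glue;

    -- Or index only a hot subset of the data:
    CREATE INDEX glue_hot_val_idx ON glue (val) WHERE "key" < 100;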


We have seen great benefits from using the 'right' index as opposed to
just any index.  An EXPLAIN ANALYZE showing an index scan is a good
start, but your design may yet improve.


For example, we have a table with translations of strings in different
languages.  Part of that table's contents deals with names of
geographical locations (mostly cities).  An index on lowercased
location name translations, covering only the translation records
relevant for locations, sped up our queries immensely (from over 300ms
to less than 1ms, uncached).  That was a pretty amazing performance
improvement, to be sure :)
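
That index looked roughly like this (the table and column names here
are made up for the example):

    CREATE INDEX translation_location_lower_idx
        ON translation (lower(translation))
        WHERE kind = 'location';

Queries that filter on lower(translation) and kind = 'location' can
then use it directly.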


So it is possible.

--
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //



Re: [GENERAL] Asking advice on speeding up a big table

2006-04-11 Thread felix
On Tue, Apr 11, 2006 at 09:52:40AM +0200, hubert depesz lubaczewski wrote:
> On 4/10/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

> > What I was hoping for was some general insight from the EXPLAIN
> > ANALYZE, that maybe extra or different indices would help, or if there
> > is some better method for finding one row from 100 million.  I realize
> > I am asking a vague question which probably can't be solved as
> > presented.
> >
> 
> hmm .. perhaps you can try to denormalize the table, and then use
> multicolumn indices?

That's an idea ... I had thought that since my two referenced tables
are small in comparison to the third table, that wouldn't be of any
use, but I will give it a try.  Thanks ...
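
The flattened version might look something like this (the flat table
and index names are just placeholders):

    CREATE TABLE glue_flat AS
        SELECT k.data AS key_data, v.data AS val_data, s.data AS sid_data
          FROM glue g
          JOIN "key" k ON k.id = g."key"
          JOIN val v ON v.id = g.val
          JOIN sid s ON s.id = g.sid;

    CREATE INDEX glue_flat_idx ON glue_flat (key_data, val_data);

A lookup would then be a single index scan on glue_flat.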

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman & rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Asking advice on speeding up a big table

2006-04-11 Thread hubert depesz lubaczewski
On 4/10/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> It is, but it is only 32 msec because the query has already run and
> cached the useful bits.  And since I have random values, as soon as I
> look up some new values, they are cached and no longer new.

according to my experience i would vote for a too-slow filesystem

> What I was hoping for was some general insight from the EXPLAIN
> ANALYZE, that maybe extra or different indices would help, or if there
> is some better method for finding one row from 100 million.  I realize
> I am asking a vague question which probably can't be solved as
> presented.

hmm .. perhaps you can try to denormalize the table, and then use
multicolumn indices?

depesz


Re: [GENERAL] Asking advice on speeding up a big table

2006-04-10 Thread felix
On Mon, Apr 10, 2006 at 02:51:30AM -0400, Tom Lane wrote:
> [EMAIL PROTECTED] writes:
> > I have a simple benchmark which runs too slow on a 100M row table, and
> > I am not sure what my next step is to make it faster.
> 
> The EXPLAIN ANALYZE you showed ran in 32 msec, which ought to be fast
> enough for anyone on that size table.  You need to show us data on the
> problem case ...

It is, but it is only 32 msec because the  query has already run and
cached the useful bits.  And since I have random values, as soon as I
look up some new values, they are cached and no longer new.
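
To capture a cold-cache EXPLAIN ANALYZE, one trick is to draw a fresh
probe value first (just a sketch; ORDER BY random() scans all of val,
but that table is small):

    SELECT data FROM val ORDER BY random() LIMIT 1;

and then run the EXPLAIN ANALYZE once with that value, before it gets
cached.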

What I was hoping for was some general insight from the EXPLAIN
ANALYZE, that maybe extra or different indices would help, or if there
is some better method for finding one row from 100 million.  I realize
I am asking a vague question which probably can't be solved as
presented.

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman & rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o



Re: [GENERAL] Asking advice on speeding up a big table

2006-04-09 Thread Tom Lane
[EMAIL PROTECTED] writes:
> I have a simple benchmark which runs too slow on a 100M row table, and
> I am not sure what my next step is to make it faster.

The EXPLAIN ANALYZE you showed ran in 32 msec, which ought to be fast
enough for anyone on that size table.  You need to show us data on the
problem case ...

regards, tom lane



[GENERAL] Asking advice on speeding up a big table

2006-04-09 Thread felix
I have a simple benchmark which runs too slow on a 100M row table, and
I am not sure what my next step is to make it faster.

It's a simple setup, part of a larger system.  There are three data
tables, each with a BIGINT id and a data column of different types.
There is a fourth table with BIGINT foreign key references to the
other three tables' id columns.

felix=> \d key
Table "oddtimes.key"
 Column |  Type  | Modifiers 
++---
 id | bigint | not null
 data   | text   | not null
Indexes:
"key_pkey" PRIMARY KEY, btree (id)
"key_data_key" UNIQUE, btree (data)

felix=> \d val
 Table "oddtimes.val"
 Column |  Type   | Modifiers 
+-+---
 id | bigint  | not null
 data   | integer | not null
Indexes:
"val_pkey" PRIMARY KEY, btree (id)
"val_data_key" UNIQUE, btree (data)

felix=> \d sid
Table "oddtimes.sid"
 Column | Type  | Modifiers 
+---+---
 id | bigint| not null
 data   | character(40) | not null
Indexes:
"sid_pkey" PRIMARY KEY, btree (id)

felix=> \d glue
Table "oddtimes.glue"
 Column |  Type  | Modifiers 
++---
 key| bigint | 
 val| bigint | 
 sid| bigint | 
Indexes:
"glue_key_idx" btree ("key")
"glue_key_val_idx" btree ("key", val)
"glue_val_idx" btree (val)
"glue_val_key_idx" btree (val, "key")
Foreign-key constraints:
"glue_key" FOREIGN KEY ("key") REFERENCES "key"(id)
"glue_val" FOREIGN KEY (val) REFERENCES val(id)
"glue_sid" FOREIGN KEY (sid) REFERENCES sid(id)

Usage is to match data from the key and val tables to fetch the data
value from the sid table.  It's sort of a glorified Berkeley DB, but
you can compare ranges, not just do exact matches.  If I can make it
fast enough, I may add two more types, date and text.
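
For instance, a range probe would look something like this (the key
name and bounds here are invented):

    SELECT sid.data
      FROM key, val, sid, glue
     WHERE key.data = 'readings.temperature'
       AND key.id = glue.key
       AND val.data BETWEEN 100 AND 200
       AND val.id = glue.val
       AND glue.sid = sid.id;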

The key table has 1K rows, val has 100K, and sid and glue have 100M
rows each.  They take about 31G of space, last time I checked.
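
As a rough cross-check of that figure, relpages counts 8 KB blocks
(assuming the statistics are fresh), so summing them approximates the
on-disk size:

    SELECT sum(relpages) * 8192.0 / (1024*1024*1024) AS approx_gb
      FROM pg_class;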

felix=> SELECT relname, relpages FROM pg_class ORDER BY relpages DESC LIMIT 20;
     relname      | relpages
------------------+----------
 sid              |  1086957
 glue             |   735295
 glue_key_val_idx |   385042
 glue_val_key_idx |   385042
 sid_pkey         |   274194
 glue_key_idx     |   274194
 glue_val_idx     |   274194
 val              |      589
 val_pkey         |      382
 val_data_key     |      283

My benchmark times SQL lookups like the one below.  This example was a
repeat and has reasonable speed, but fresh values take around 3-4
seconds.  This is 8.0.3 on a dual dual-core Opteron machine with only
2G RAM (it is meant for compute-intensive work and was idle while I
ran these tests).  The disks are plain vanilla IDE, maybe SATA, but
nothing at all special.

felix=> explain analyze SELECT sid.data, val.data FROM key, val, sid, glue 
WHERE key.data = 'UajzAQjTJPevVJBuuerjU4pcl8eJcyrIxzkC' AND key.id = glue.key 
AND val.data = 1984186373 AND val.id = glue.val AND glue.sid = sid.id;
                                                            QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=5.84..4480.89 rows=1 width=48) (actual time=32.157..32.157 rows=0 loops=1)
   ->  Hash Join  (cost=5.84..4477.87 rows=1 width=12) (actual time=32.149..32.149 rows=0 loops=1)
         Hash Cond: ("outer"."key" = "inner".id)
         ->  Nested Loop  (cost=0.00..4467.01 rows=1001 width=20) (actual time=0.205..28.304 rows=1000 loops=1)
               ->  Index Scan using val_data_key on val  (cost=0.00..6.01 rows=1 width=12) (actual time=0.059..0.066 rows=1 loops=1)
                     Index Cond: (data = 1984186373)
               ->  Index Scan using glue_val_idx on glue  (cost=0.00..4447.15 rows=1108 width=24) (actual time=0.131..20.670 rows=1000 loops=1)
                     Index Cond: ("outer".id = glue.val)
         ->  Hash  (cost=5.84..5.84 rows=1 width=8) (actual time=0.123..0.123 rows=0 loops=1)
               ->  Index Scan using key_data_key on "key"  (cost=0.00..5.84 rows=1 width=8) (actual time=0.100..0.105 rows=1 loops=1)
                     Index Cond: (data = 'UajzAQjTJPevVJBuuerjU4pcl8eJcyrIxzkC'::text)
   ->  Index Scan using sid_pkey on sid  (cost=0.00..3.01 rows=1 width=52) (never executed)
         Index Cond: ("outer".sid = sid.id)
 Total runtime: 32.2 ms