Previously:
>psql ids -c 'select src,dst,count(*) from brick* where src_port=135
>group by src,dst' > /tmp/135.dat
This is just a guess, increasing the parameters shared_buffers and sort_mem
might help.
For example if your table is about 1Gb in size then try shared_buffers=1
and sort_mem
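A minimal sketch of how those parameters are adjusted (the values here are illustrative assumptions, not tuned advice, and the truncated figures above are not being guessed at):

```sql
-- sort_mem can be raised per session (value in kB); illustrative only:
SET sort_mem = 65536;   -- allow roughly 64 MB for sorts in this session

-- shared_buffers is a server start-up setting (postgresql.conf or the -B
-- postmaster flag), counted in 8 kB pages, e.g.:
--   shared_buffers = 8192   -- 8192 x 8 kB = 64 MB of buffer cache
```

sort_mem takes effect immediately for the session; changing shared_buffers requires a server restart.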
Dear all,
Tom's comments on my previous posting encouraged me to think some more about
this... and now I believe I have got to the heart of what I was attempting to
get across before.
I have a fresh and hopefully clear example.
Ok lets start with a small table called 'dim0' that has a unique key call
Dear List,
I have been doing a little investigation into when the optimizer chooses a
sequential scan over an index access. I have come across some interesting
behaviour in the current 7.2 sources (2001-08-17):
Consider two types of query on my "usual" tables:
SELECT
f.d0key,
Hi Ryan,
There is a bit of a strange way around the distance overhead issue :
Create another table with structure like
(lat1,long1,zip1,lat2,long2,zip2,distance)
and precalculate the distance for each possibility. This means n*(n-1) rows
if you have n location rows. You would then include thi
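A minimal sketch of the precomputed-distance idea (table and column names are assumptions, and `calc_distance(...)` stands in for whatever distance function you are using):

```sql
-- Hypothetical schema: one row per ordered pair of locations.
CREATE TABLE zip_distance (
    zip1     char(5),
    zip2     char(5),
    distance float8,
    PRIMARY KEY (zip1, zip2)
);

-- Precalculate once, for all n*(n-1) ordered pairs:
INSERT INTO zip_distance (zip1, zip2, distance)
SELECT a.zip, b.zip, calc_distance(a.lat, a.long, b.lat, b.long)
FROM locations a, locations b
WHERE a.zip <> b.zip;

-- A query is then an indexed lookup instead of a per-row calculation:
SELECT zip2
FROM zip_distance
WHERE zip1 = '90210' AND distance < 50;
```

The trade-off is storage: the pair table grows quadratically with the number of locations.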
Dear list,
With the advent of Version 7.1.2 I thought it would be interesting to compare
how PostgreSQL does a certain class of queries (Star Queries), and Data Loads
with some of the other leading databases (which were in my humble opinion
Mysql 3.23.39, Db2 7.2, Oracle 9.0 [ ok - Possibly S
Previously...
>FATAL 1: Memory exhausted in AllocSetAlloc()
>pqReadData() -- backend closed the channel unexpectedly.
>This probably means the backend terminated abnormally
>before or while processing the request.
>
>A table has been corrupted and i don't know why...
>It's really hard to recover
>>The system that I'm developing, I have about 25000 (persons) x 8
>>(exams)
>> x 15 (answers per exam) = 3,000,000 records to process and it is VERY SLOW.
>
>If you need to import large quantities of data, look at the copy
>command, that tends to be faster.
By way of example for the level of
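As a sketch of the COPY approach (the file path and table name here are assumptions):

```sql
-- Instead of issuing millions of individual statements like
--   INSERT INTO answers VALUES (...);
-- load the whole file in one command (tab-delimited by default):
COPY answers FROM '/tmp/answers.dat';
```

From psql, `\copy answers from '/tmp/answers.dat'` does the same thing but reads the file on the client side, which avoids needing server-filesystem access.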
>
>That 'count(*)' is going to be slow.
>Try counting a column that's indexed (p.doc might work?)
That is not the case; you can convince yourself with EXPLAIN - you get the
same plan(s) :
e.g : ( table dim0 with indexed column d0key ) ...
explain select count(*) from dim0 where d0ke
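A sketch of the comparison (the predicate `d0key < 100` is an assumption standing in for the truncated one above; the point is that both forms produce the same plan):

```sql
-- Compare the plans for count(*) and count(indexed column):
EXPLAIN SELECT count(*)     FROM dim0 WHERE d0key < 100;
EXPLAIN SELECT count(d0key) FROM dim0 WHERE d0key < 100;
```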
A previous posting mentioning clusters prompted me to revisit some earlier
tests done on clustered and unclustered data.
It appears that currently (7.1beta5) the optimizer is unaware of any
clustering on a table - how important is that?
To answer this question I used my "pet" data warehouse