…solution. But that doesn't seem to exist either.
best regards,
chris
--
chris ruprecht
database grunt and bit pusher extraordinaíre
Hi guys,
PG = 9.1.5
OS = winDOS 2008 R2
I have a table that currently has 207 million rows. There is a timestamp field
that contains data, and more data gets copied from another database into this
one. How do I make this query use an index scan instead of a sequential scan?
I ran "ANALYZE audittrailclinical" to no avail.
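A minimal sketch of the usual first steps here, assuming a hypothetical timestamp column named audit_ts (the actual column name is not shown in the snippet):

CREATE INDEX audittrailclinical_audit_ts_idx
    ON audittrailclinical (audit_ts);
ANALYZE audittrailclinical;
-- Then check which plan the planner actually picks:
EXPLAIN ANALYZE
SELECT *
FROM audittrailclinical
WHERE audit_ts >= '2012-01-01' AND audit_ts < '2012-02-01';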
Thanks Bruce,
I have, and I even thought I understood it :).
I just ran an EXPLAIN ANALYZE on another table, and ever since, the query plan
has changed. It's now using the index as expected. I guess I have some more
reading to do.
On Oct 16, 2012, at 20:31 , Bruce Momjian wrote:
>
> Have yo
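One way to verify whether (auto)ANALYZE has actually touched a table is the standard statistics view; this is a sketch, not something posted in the thread:

SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'audittrailclinical';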
On Oct 16, 2012, at 20:01 , Evgeny Shishkin wrote:
> Selecting 5 years of data is not selective at all, so Postgres decides it is
> cheaper to do a seqscan.
>
> Do you have an index on patient.dnsortpersonnumber? Can you post the result
> of
> select count(*) from patient where dnsortpersonnumber …
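The check Evgeny is asking for, sketched out (the literal value is hypothetical, since it was truncated from the message):

-- How selective is one person number?
SELECT count(*) FROM patient WHERE dnsortpersonnumber = '12345';
-- And the index in question, if it does not already exist:
CREATE INDEX patient_dnsortpersonnumber_idx
    ON patient (dnsortpersonnumber);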
…" | psql ... & ) once the 'serial build' test is done.
Maybe, in a future release, somebody will develop something that can create
indexes as inactive and have a build tool build and activate them at the same
time. Food for thought?
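For reference, the backgrounded-psql approach mentioned above could look roughly like this (table, column, and database names are made up for the sketch):

echo "CREATE INDEX t_a_idx ON t (a);" | psql mydb &
echo "CREATE INDEX t_b_idx ON t (b);" | psql mydb &
wait   # both index builds run in parallel, each in its own backend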
On Apr 9, 2011, at 13:10 , Tom Lane wrote:
> Chri
…indexes in parallel while reading the table only once for all indexes and
building them all at the same time. Is there an index build tool that I missed
somehow, that can do this?
Thanks,
Chris.
best regards,
chris
--
chris ruprecht
database grunt and bit pusher extraordinaíre
Joshua,
Did you try to run the 345 on an IBM ServeRAID 6i?
I have one in mine, but I never actually ran any speed test.
Do you have any benchmarks that I could run and compare?
best regards,
chris
--
chris ruprecht
database grunt and bit pusher extraordinaíre
On May 12, 2008, at 22:11
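One benchmark that is easy to run and compare is pgbench, which ships with PostgreSQL; the scale factor and client count below are arbitrary picks, not values from the thread:

pgbench -i -s 100 testdb        # initialize a scale-100 dataset (~1.5 GB)
pgbench -c 10 -t 1000 testdb    # 10 concurrent clients, 1000 transactions each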
Hi all,
If you have a DB of 'only' 13 GB and you do not expect it to grow much, it
might be advisable to have enough memory (RAM) to hold the entire DB in shared
memory (everything is cached). If you have a server with, say, 24 GB of memory
and can allocate 20 GB for cache, you don't care about t…
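In postgresql.conf terms, that sizing could look like this (the numbers follow the 24 GB example above and are illustrative, not tuned values):

shared_buffers = 4GB            # PostgreSQL's own buffer cache
effective_cache_size = 20GB     # planner's estimate of total cache (OS + PG)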
Bruce,
My bet is on the limited amount of shared memory. The setup as posted by Leon
only shows 80 MB. On a 4 GB database, that's not all that much. Depending on
what he's doing, this might be a bottleneck. I don't like the virtual memory
strategy of Linux too much and would rather increase thi…
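On Linux of that era, raising it was typically a two-step change; the values here are illustrative, not recommendations:

# /etc/sysctl.conf -- allow a larger shared memory segment first
kernel.shmmax = 1073741824      # 1 GB

# postgresql.conf -- then raise PostgreSQL's buffer cache
shared_buffers = 512MB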