On Jul 14, 2010, at 6:57 PM, Scott Carey wrote:
> But none of this explains why a 4-disk raid 10 is slower than a 1 disk
> system. If there is no write-back caching on the RAID, it should still be
> similar to the one disk setup.
Many raid controllers are smart enough to always turn off write
On Wed, Jul 14, 2010 at 6:57 PM, Scott Carey wrote:
> But none of this explains why a 4-disk raid 10 is slower than a 1 disk
> system. If there is no write-back caching on the RAID, it should still be
> similar to the one disk setup.
>
> Unless that one-disk setup turned off fsync() or was configured with
> synchronous_commit off.
I have a query:
SELECT d1.ID, d2.ID
FROM DocPrimary d1
JOIN DocPrimary d2 ON d2.BasedOn=d1.ID
WHERE (d1.ID=234409763) or (d2.ID=234409763)
I think the query optimizer (QO) could make this faster (currently it does a
seq scan, and on a million records it takes 7 seconds).
SELECT d1.ID, d2.ID
FROM DocPrimary d1
J
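A sketch of one common rewrite for this kind of OR spanning two join aliases
(assuming DocPrimary.ID is the primary key and BasedOn is indexed): splitting
the condition into a UNION lets each branch start from an index lookup rather
than a sequential scan.

SELECT d1.ID, d2.ID
FROM DocPrimary d1
JOIN DocPrimary d2 ON d2.BasedOn = d1.ID
WHERE d1.ID = 234409763
UNION
SELECT d1.ID, d2.ID
FROM DocPrimary d1
JOIN DocPrimary d2 ON d2.BasedOn = d1.ID
WHERE d2.ID = 234409763;

UNION (rather than UNION ALL) keeps the result identical to the OR form when a
row happens to satisfy both conditions.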
But none of this explains why a 4-disk raid 10 is slower than a 1 disk system.
If there is no write-back caching on the RAID, it should still be similar to
the one disk setup.
Unless that one-disk setup turned off fsync() or was configured with
synchronous_commit off. Even low end laptop driv
On 14 July 2010 17:16, Kevin Grittner wrote:
> Ivan Voras wrote:
>
>> which didn't help.
>
> Didn't help what? You're processing each row in 22.8 microseconds.
> What kind of performance were you expecting?
Well, I guess you're right. What I was expecting is a large bump in
speed going from LIK
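The before/after being compared is presumably something like the following
sketch (the LIKE form is illustrative, not the exact query from the thread;
table and column names are taken from the EXPLAIN quoted later):

-- substring search: a leading wildcard cannot use a btree index
SELECT id, title FROM forum WHERE title LIKE '%fer%';

-- full-text search: can use a GIN/GiST index on the tsvector column
SELECT id, title FROM forum WHERE _fts_ @@ 'fer'::tsquery;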
On Wed, 2010-07-14 at 08:58 -0500, Kevin Grittner wrote:
> Scott Marlowe wrote:
> > Hannu Krosing wrote:
> >> One example where you need a separate connection pool is pooling
> >> really large number of connections, which you may want to do on
> >> another host than the database itself is running
Ivan Voras wrote:
> which didn't help.
Didn't help what? You're processing each row in 22.8 microseconds.
What kind of performance were you expecting?
-Kevin
One of the possibilities would be to decompose your bitmap into an
array of base integers and then create a GIN (or GIST) index on that
array (intarray contrib package). This would make sense if your
articles are distributed relatively equally and if you do not do big ORDER
BY and then LIMIT/OFFSET queries.
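A minimal sketch of that idea, assuming the intarray contrib module is
installed and using a hypothetical article table whose bitmap has been
decomposed into an integer[] of set bit positions:

ALTER TABLE article ADD COLUMN tag_ids integer[];

CREATE INDEX article_tag_ids_gin
    ON article USING gin (tag_ids gin__int_ops);

-- articles whose bitmap has bits 3 and 42 set
SELECT id FROM article WHERE tag_ids @> ARRAY[3, 42];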
Ivan,
here is explain analyze output - 7122 out of 528155 docs
tseval=# select count(*) from document;
 count
--------
 528155
(1 row)
Time: 345,562 ms
tseval=# explain analyze select docno, title from document where vector @@
to_tsquery('english','mars');
Bitmap Heap Scan on document (cos
On 07/14/10 16:03, Kevin Grittner wrote:
> Ivan Voras < ivo...@freebsd.org > wrote:
>> On 07/14/10 15:49, Stephen Frost wrote:
>
>>> Regarding the statistics, it's entirely possible that the index
>>> is *not* the fastest way to pull this data (it's nearly 10% of
>>> the table..)
>>
>> I think
Ivan Voras < ivo...@freebsd.org > wrote:
> On 07/14/10 15:49, Stephen Frost wrote:
>> Regarding the statistics, it's entirely possible that the index
>> is *not* the fastest way to pull this data (it's nearly 10% of
>> the table..)
>
> I think that what I'm asking here is: is it reasonable for
Scott Marlowe wrote:
> Hannu Krosing wrote:
>> One example where you need a separate connection pool is pooling
>> really large number of connections, which you may want to do on
>> another host than the database itself is running.
>
> Definitely. Often it's best placed on the individual webservers.
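The win from placing the pool on the webservers (or another host) is that
thousands of client connections get multiplexed onto the database's much
smaller hard limit; a quick way to see that limit and current usage (an aside,
not from the thread):

SHOW max_connections;                   -- server-side ceiling the pooler funnels into
SELECT count(*) FROM pg_stat_activity;  -- backend connections in use right now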
On 07/14/10 15:49, Stephen Frost wrote:
> * Ivan Voras (ivo...@freebsd.org) wrote:
>> Total runtime: 0.507 ms
> [...]
>> Total runtime: 118.689 ms
>>
>> See in the first query where I have a simple LIMIT, it fetches random 10
>> rows quickly, but in the second one, as soon as I give it to execute
* Ivan Voras (ivo...@freebsd.org) wrote:
> Total runtime: 0.507 ms
[...]
> Total runtime: 118.689 ms
>
> See in the first query where I have a simple LIMIT, it fetches random 10
> rows quickly, but in the second one, as soon as I give it to execute and
> calculate the entire result set before I
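A sketch of the two query shapes being contrasted (the ORDER BY column is an
assumption; table and column names come from the EXPLAIN elsewhere in the
thread):

-- fast: only the first 10 matching rows have to be fetched
SELECT id, title FROM forum WHERE _fts_ @@ 'fer'::tsquery LIMIT 10;

-- slow: every matching row must be produced and sorted before the
-- LIMIT/OFFSET can be applied
SELECT id, title FROM forum WHERE _fts_ @@ 'fer'::tsquery
ORDER BY id LIMIT 10 OFFSET 100;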
On 07/14/10 15:25, Oleg Bartunov wrote:
> On Wed, 14 Jul 2010, Ivan Voras wrote:
>
>>> Returning 8449 rows could be quite long.
>>
>> You are right, I didn't test this. Issuing a query which returns a
>> smaller result set is much faster.
>>
>> But, offtopic, why would returning 8500 records, each
On Wed, 14 Jul 2010, Ivan Voras wrote:
Returning 8449 rows could be quite long.
You are right, I didn't test this. Issuing a query which returns a
smaller result set is much faster.
But, offtopic, why would returning 8500 records, each around 100 bytes
long (so around 850 KB in total), over a local unix s
On 07/14/10 14:31, Oleg Bartunov wrote:
> Something is not good with statistics, 91 est. vs 8449 actually returned.
I don't think the statistics difference is significant - it's actually
using the index so it's ok. And I've run vacuum analyze just before
starting the query.
> Returning 8449 rows
Something is not good with statistics, 91 est. vs 8449 actually returned.
Returning 8449 rows could be quite long.
Oleg
On Wed, 14 Jul 2010, Ivan Voras wrote:
Here's a query and its EXPLAIN ANALYZE output:
cms=> select count(*) from forum;
count
---
90675
(1 row)
cms=> explain analyze se
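If the 91-vs-8449 estimate ever did need tightening, the usual first knob is
the per-column statistics target plus a fresh ANALYZE (a hedged sketch; table
and column names are taken from the query below):

ALTER TABLE forum ALTER COLUMN _fts_ SET STATISTICS 1000;
ANALYZE forum;
-- then re-check the estimate
EXPLAIN ANALYZE SELECT id, title FROM forum WHERE _fts_ @@ 'fer'::tsquery;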
Here's a query and its EXPLAIN ANALYZE output:
cms=> select count(*) from forum;
count
---
90675
(1 row)
cms=> explain analyze select id,title from forum where _fts_ @@
'fer'::tsquery;
QUERY PLAN