Question: We have recently moved to PostgreSQL, as we just migrated to
OpenMfg. I really like the database and PL/pgSQL; it is very similar to
the FoxPro language layout, so learning has been pretty easy.
Now for my question: it does not appear that PostgreSQL has an ad hoc
index ability
Justin wrote:
Now for my question: it does not appear that PostgreSQL has an ad hoc
index ability where the query planner will create an in-memory
index based on the SELECT, UPDATE, INSERT, or DELETE commands.
How is that supposed to work? In order to create an index you would need to
On Feb 18, 2008, at 9:09 PM, Peter Eisentraut wrote:
Justin wrote:
Now for my question: it does not appear that PostgreSQL has an ad hoc
index ability where the query planner will create an in-memory
index based on the SELECT, UPDATE, INSERT, or DELETE commands.
How is that supposed to work?
Justin [EMAIL PROTECTED] writes:
Are there any plans in the future to add the ability for PostgreSQL to
create ad hoc indexes if it makes sense?
No, I'm not aware of anyone contemplating such a thing. I can hardly
imagine a situation where building an index for a single query is
actually a
The idea of ad hoc indexes is to speed up loop scans. To prove my idea I
created a SQL file in pgAdmin that creates the indexes on the fly, then
runs the query, then drops the indexes.
Without the indexes it takes 18 to 19 seconds to run the query.
To create the index and do the query takes 400
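The experiment described above can be sketched roughly as follows. The table and column names are illustrative guesses (based on the coitem table mentioned later in the thread), not the actual OpenMfg schema or query; the point is only the create/query/drop pattern.

```sql
-- Sketch of the "ad hoc index" experiment: build the indexes just for
-- this one query, run it, then throw the indexes away.
-- Table and column names here are hypothetical.
CREATE INDEX tmp_item_status_idx ON coitem (coitem_status);
CREATE INDEX tmp_item_due_idx    ON coitem (coitem_duedate);

-- ... run the slow report query here; the planner can now use the
-- temporary indexes for the inner loops ...

DROP INDEX tmp_item_status_idx;
DROP INDEX tmp_item_due_idx;
```

Note that CREATE INDEX builds a real on-disk index and takes locks, so this only pays off when the query would otherwise repeat sequential scans many times.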
Hans-Juergen Schoenig wrote:
This kind of indexing makes sense if you visit the same data over and
over again. WITH-queries would be an example of that, and self joins
could benefit from this feature too.
Yes, for joins it is useful. We have hash joins, which are kind of ad hoc
hash
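The point about hash joins can be made visible with EXPLAIN: the Hash node builds a transient in-memory hash table over one input for the duration of a single query, which is essentially an ad hoc index. The tables below are hypothetical, and the exact plan chosen depends on statistics and settings.

```sql
EXPLAIN SELECT *
FROM orders o
JOIN customers c ON c.customer_id = o.customer_id;
-- A typical plan shape (costs omitted):
--   Hash Join
--     Hash Cond: (o.customer_id = c.customer_id)
--     ->  Seq Scan on orders o
--     ->  Hash
--           ->  Seq Scan on customers c
```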
Justin [EMAIL PROTECTED] writes:
The idea of ad hoc indexes is to speed up loop scans. To prove my idea I
created a SQL file in pgAdmin that creates the indexes on the fly, then
runs the query, then drops the indexes.
Without the indexes it takes 18 to 19 seconds to run the query.
To create the index and do the query takes 400
Then why are the estimates so far off? If the estimates were correct,
would it improve the performance that much?
Vacuum is set to run automatically so the stats stay updated.
The total record count for all the tables put together is around
120,000; the query returns only 458 records.
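EXPLAIN ANALYZE is the usual way to compare the planner's row estimates against reality; a mismatch like the one discussed here (1 estimated row vs. 458 actual) shows up directly in the plan output. The query below is a hypothetical reconstruction, not the actual query from the thread.

```sql
EXPLAIN ANALYZE
SELECT *
FROM coitem
WHERE coitem_status = 'O'
  AND coitem_duedate BETWEEN '2008-01-01' AND '2008-02-18';
-- Look for lines of the form:
--   Seq Scan on coitem  (cost=... rows=1 ...) (actual time=... rows=458 ...)
-- "rows=1" is the planner's estimate; "rows=458" is what actually
-- came back.
```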
Oops, damn spell checker: "really" should be "rarely". Sorry.
Justin wrote:
Then why are the estimates so far off? If the estimates were correct,
would it improve the performance that much?
Vacuum is set to run automatically so the stats stay updated.
The total record count for all the tables put together is around
120,000; the query returns only 458 records.
Justin wrote:
Then why are the estimates so far off?
Perhaps a really high correlation between those records where coitem_status =
'O' and those records in your date range. The planner estimates 1 row when
applying both restrictions, whereas there were 458. The 1 row was
probably a
Yes, that improved the SELECT statement a lot, from 18 to 20 seconds
down to 6 seconds, a threefold improvement with no indexes. Thanks.
Now I tested your new and improved SELECT statement with the indexes; its
query time was 90 to 140 milliseconds, compared to the
original SELECT statement's 94 to
Justin wrote:
Although in both cases, with and without indexes, the estimate still failed to
return the correct number by a lot.
The improvement wasn't to the part of the query that had the bad cost estimate;
it was to the part that was being performed hundreds of times instead of the
one
Stephen Denne [EMAIL PROTECTED] writes:
The improvement wasn't to the part of the query that had the bad cost
estimate; it was to the part that was being performed hundreds of times
instead of the one time the planner estimated.
The planner still thought it was only going to perform a
There are certainly problems with this idea. Figuring out the cost to
either create an index or just do a table scan would be a pain, but not
impossible. The hit of indexing a table in memory, I don't think, would be
that bad compared to doing hundreds to thousands of loops with table scans.
I see more