Dan Libby <[EMAIL PROTECTED]> writes:
> Or failing that, I'd at least like to understand why the planner
> is deciding not to use the category_lang index when the result
> set is coming from a function instead of a "regular" table.
The planner defaults to assuming that set-returning functions return 1000 rows.
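For context on Tom's point: in later PostgreSQL releases (8.3 and up, not the 8.0.1 the poster is running) a ROWS clause lets you declare a more realistic estimate for a set-returning function. A minimal sketch, with function and table names that are purely illustrative:

```sql
-- Hypothetical set-returning function; the ROWS 50 clause (PostgreSQL 8.3+)
-- tells the planner to expect about 50 rows instead of the default 1000,
-- which lets it pick index-friendly plans in the calling query.
CREATE FUNCTION category_lookup(lang text)
    RETURNS SETOF category
    LANGUAGE sql
    STABLE
    ROWS 50
AS $$
    SELECT * FROM category WHERE category_lang = $1;
$$;
```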
Hi,
Using PostgreSQL 8.0.1, I'm having a problem where a user-defined function
that executes quite quickly on its own slows down the calling query,
which ignores at least one index. I don't think this should be happening.
Please forgive the long explanation below; I'm trying to be clear.
So --
On Mon, 2006-06-05 at 14:06, Tom Lane wrote:
> Andrew Sullivan <[EMAIL PROTECTED]> writes:
> > On Mon, Jun 05, 2006 at 04:07:19PM -0400, Tom Lane wrote:
> >> I'm wondering about out-of-date or nonexistent ANALYZE stats, missing
> >> custom adjustments of statistics target settings, etc.
>
> > But even the nested loop shouldn't be a "never returns" case,
Andrus,
> SELECT toode, nimetus
> FROM toode
> WHERE toode ILIKE 'x10%' ESCAPE '!'
> ORDER BY UPPER(toode ),nimetus LIMIT 100
>
> takes 1 minute on the first run, even though the table is small.
>
> Toode field type is CHAR(20)
1) Why are you using CHAR and not VARCHAR or TEXT? CHAR will give you
fixed-length, space-padded values with no performance advantage.
Andrew Sullivan <[EMAIL PROTECTED]> writes:
> On Mon, Jun 05, 2006 at 04:07:19PM -0400, Tom Lane wrote:
>> I'm wondering about out-of-date or nonexistent ANALYZE stats, missing
>> custom adjustments of statistics target settings, etc.
> But even the nested loop shouldn't be a "never returns" case,
On Thu, Jun 01, 2006 at 01:54:08PM +0200, Antoine wrote:
> Hi,
> We just don't seem to be getting much benefit from autovacuum. Running
> a manual vacuum seems to still be doing a LOT, which suggests to me
> that I should either run a cron job and disable autovacuum, or just
> run a cron job on top
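Before replacing autovacuum with cron, note that the integrated autovacuum (PostgreSQL 8.1+) can simply be made more aggressive in postgresql.conf. The values below are illustrative starting points, not tuned recommendations:

```
# postgresql.conf -- illustrative values, not tuned recommendations
autovacuum = on
autovacuum_naptime = 60               # seconds between autovacuum runs
autovacuum_vacuum_threshold = 500     # min dead tuples before a vacuum
autovacuum_vacuum_scale_factor = 0.2  # fraction of table size added to threshold
autovacuum_analyze_scale_factor = 0.1
```

Lowering the scale factors makes autovacuum visit busy tables sooner, which is usually why a manual VACUUM still finds a lot to do.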
Thanks Tom,
I knew you would come through again!
Query is now returning with results on our replicated database. Will
vacuum analyze production now. So it seems to have done the trick. Now
the question is whether our autovacuum has failed or was never set up properly...
A question for my IT people.
Th
On Mon, Jun 05, 2006 at 04:07:19PM -0400, Tom Lane wrote:
>
> broke:
> >-> Nested Loop (cost=30150.77..129334.04 rows=1 width=305)
>
> work:
> >-> Hash Join (cost=30904.77..125395.89 rows=1810 width=306)
>
> I'm wondering about out-of-date or nonexistent ANALYZE stats, missing
> custom adjustments of statistics target settings, etc.
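The stale-ANALYZE-stats hypothesis above can be checked and addressed with plain SQL; the table and column names below are placeholders:

```sql
-- Refresh planner statistics for the whole database
ANALYZE;

-- Raise the statistics target for a column the planner misestimates
-- (the default target in this era was 10), then re-analyze that table
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 100;
ANALYZE orders;
```

Re-running EXPLAIN afterwards shows whether the row estimates (and hence the join choice) have moved closer to reality.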
Chris Beecroft <[EMAIL PROTECTED]> writes:
> Our problem is that about a week and a half ago we started to get some
> queries that would (seemingly) never return (e.g., normally run in a
> couple minutes, but after 2.5 hours, they were still running, the
> process pushing the processor up to 99.9%
On Mon, Jun 05, 2006 at 12:05:08PM -0700, Chris Beecroft wrote:
> Our problem is that about a week and a half ago we started to get some
> queries that would (seemingly) never return (e.g., normally run in a
> couple minutes, but after 2.5 hours, they were still running, the
> process pushing the processor up to 99.9%
I have a UTF-8 PostgreSQL 8.1 database on W2K3.
Query
SELECT toode, nimetus
FROM toode
WHERE toode ILIKE 'x10%' ESCAPE '!'
ORDER BY UPPER(toode ),nimetus LIMIT 100
takes 1 minute on the first run, even though the table is small.
Toode field type is CHAR(20)
How to create an index on the toode field so that this query runs fast?
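One common approach to the question above (a sketch; the index name is illustrative): build a functional index with pattern operator classes, and rewrite the ILIKE as a left-anchored LIKE over upper(). Because upper() returns text, the index uses text_pattern_ops, which works for prefix matches even in non-C locales:

```sql
-- Functional index usable for left-anchored pattern matches
-- even under a non-C locale (text_pattern_ops)
CREATE INDEX toode_upper_pattern_idx
    ON toode (upper(toode) text_pattern_ops);

-- Rewrite the ILIKE as an anchored LIKE over the indexed expression
SELECT toode, nimetus
FROM toode
WHERE upper(toode) LIKE 'X10%'
ORDER BY upper(toode), nimetus
LIMIT 100;
```

The same index also serves the ORDER BY upper(toode) when the planner chooses an index scan.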
Hello,
I've noticed some posts on hanging queries but haven't seen any
solutions yet so far.
Our problem is that about a week and a half ago we started to get some
queries that would (seemingly) never return (e.g., normally run in a
couple minutes, but after 2.5 hours, they were still running, the
process pushing the processor up to 99.9%
On Thu, Jun 01, 2006 at 02:04:46PM -0400, Michael Artz wrote:
> On 5/30/06, Jim C. Nasby <[EMAIL PROTECTED]> wrote:
>
> >Your best bet is to do this as a single, bulk operation if possible.
> >That way you can simply do an UPDATE ... WHERE EXISTS followed by an
> >INSERT ... SELECT ... WHERE NOT EXISTS.
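The bulk update-then-insert pattern Jim describes looks roughly like this in pre-ON CONFLICT PostgreSQL (table and column names are hypothetical; this uses UPDATE ... FROM, which subsumes the WHERE EXISTS test):

```sql
-- Step 1: update rows that already exist in the target
UPDATE target t
SET    val = s.val
FROM   staging s
WHERE  t.id = s.id;

-- Step 2: insert rows that are still missing
INSERT INTO target (id, val)
SELECT s.id, s.val
FROM   staging s
WHERE  NOT EXISTS (SELECT 1 FROM target t WHERE t.id = s.id);
```

Doing it as two set-oriented statements is far cheaper than a row-at-a-time loop, since each table is scanned a constant number of times.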
On Fri, Jun 02, 2006 at 01:11:27PM +0530, Gourish Singbal wrote:
> This Error came since the 0002 file from the pg_clog folder was missing.
> Since the logs are missing from the pg_clog folder, can I perform
> pg_resetxlog to reset the logs and bring up the database?
Understand that this will almost certainly leave the database inconsistent.
On Thu, Jun 01, 2006 at 11:05:55AM -0400, Tom Lane wrote:
> Antoine <[EMAIL PROTECTED]> writes:
> > We just don't seem to be getting much benefit from autovacuum. Running
> > a manual vacuum seems to still be doing a LOT, which suggests to me
> > that I should either run a cron job and disable autovacuum
On Mon, May 29, 2006 at 07:35:14AM -0700, Cstdenis wrote:
> > To be honest, you're pushing things expecting a machine with only 1G to
> > serve 300 active connections. How large is the database itself?
>
> The database is 3.7G on disk. There is about 1G of actual data in it -- the
> rest is dead tuples.