On Wed, 27 Feb 2008, Dan Kaplan wrote:
I've got a lot of rows in one table and a lot of rows in another table. I
want to do a bunch of queries on their join column. One of these is like
this: t1.col like '%t2.col%'
We have an idea how to speed up wildcard search at the expense of the size -
At 6:01 PM -0500 2/26/08, Tom Lane wrote:
Hmm, that says that it's not a matter of locking on pg_listener,
but of actually applying the row update(s) and/or signaling the
recipient(s). If you're not seeing performance issues for ordinary
table-update operations it's hard to see why pg_listener u
I figured out what appears to happen with cascading delete using a
seqscan. In this case, the foreign keys in the child table are not
equally distributed. A few parent values occur often. Most parent
values do not occur at all. So the planner, faced with an unknown
generic key, takes the safe route
"Kevin Grittner" <[EMAIL PROTECTED]> writes:
> If xact A is using a snapshot from before the commit of the index
> DROP, it shouldn't see anything done after the drop anyway. If
> it's using a snapshot from after the DROP, it won't see the index.
> xact B would only fail to update the index if it
I wrote:
> In the particular case at hand, a planner hook to make it ignore the
> index is a far better solution anyway...
Just as proof of concept, a quick-and-dirty version of this is attached.
It works in 8.3 and up. Sample (after compiling the .so):
regression=# load '/home/tgl/pgsql/planign
>>> On Wed, Feb 27, 2008 at 5:00 PM, in message <[EMAIL PROTECTED]>,
Tom Lane <[EMAIL PROTECTED]> wrote:
> "Kevin Grittner" <[EMAIL PROTECTED]> writes:
>> Out of curiosity, couldn't any transaction using a snapshot prior to
>> the commit of the DROP continue to use it (just like an uncommitted
>>
"Kevin Grittner" <[EMAIL PROTECTED]> writes:
> Out of curiosity, couldn't any transaction using a snapshot prior to
> the commit of the DROP continue to use it (just like an uncommitted
> DELETE of a row)? The transaction doing the DROP wouldn't maintain
> it for modifications, which is fine whethe
>>> On Tue, Feb 26, 2008 at 10:48 PM, in message <[EMAIL PROTECTED]>,
Tom Lane <[EMAIL PROTECTED]> wrote:
> Er, later than what? Once the DROP is pending, other transactions can
> hardly safely use the index for lookups, and what should they do about
> insertions?
Out of curiosity, couldn't a
"Laurent Raufaste" <[EMAIL PROTECTED]> writes:
> On a random server, the plan before the ANALYZE was:
> Bitmap Heap Scan on _comment (cost=15833.00..440356.99 rows=155649
> width=0) (actual time=1.581..2.885 rows=1070 loops=1)
>   Recheck Cond: (path <@ '0.1.14155763'::ltree)
>   -> Bitmap Ind
"Dan Kaplan" <[EMAIL PROTECTED]> writes:
> I've got a lot of rows in one table and a lot of rows in another table. I
> want to do a bunch of queries on their join column. One of these is like
> this: t1.col like '%t2.col%'
> I know that always sucks. I'm wondering how I can make it better.
tse
Jeff Davis <[EMAIL PROTECTED]> writes:
>>> begin;
>>> drop index abc_dx;
>>> select
>>> rollback;
> It may cause catalog bloat though, right?
Not in this particular case; AFAIR this will only result in catalog row
deletions, not updates. So when the deletions roll back, there's no
dead rows
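For reference, the pattern under discussion written out in full; a minimal sketch with illustrative table, index, and query names (not from the thread):

```sql
-- Test how a query plans without an index, then roll back so the
-- index survives. Note: DROP INDEX takes an exclusive lock on the
-- table, blocking other sessions until ROLLBACK, so keep this short.
BEGIN;
DROP INDEX abc_dx;
EXPLAIN SELECT * FROM abc WHERE some_col = 42;
ROLLBACK;
-- The index is back; the aborted catalog deletions leave no dead
-- catalog rows behind.
```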
I've got a lot of rows in one table and a lot of rows in another table. I
want to do a bunch of queries on their join column. One of these is like
this: t1.col like '%t2.col%'
I know that always sucks. I'm wondering how I can make it better. First, I
should let you know that I can likely ho
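For what it's worth, one approach not spelled out in the thread: in modern PostgreSQL (9.1 and later), the pg_trgm extension lets a GIN trigram index serve LIKE '%...%' predicates. A hedged sketch, with illustrative table and column names:

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- A trigram index over the searched column:
CREATE INDEX t1_col_trgm_idx ON t1 USING gin (col gin_trgm_ops);

-- Constant patterns can now use the index instead of a seq scan:
SELECT * FROM t1 WHERE col LIKE '%foo%';

-- The join form needs the pattern built per row; the index can
-- still help on the inner side of a nested loop:
SELECT t1.*, t2.col
FROM t2
JOIN t1 ON t1.col LIKE '%' || t2.col || '%';
```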
In response to Douglas J Hunley <[EMAIL PROTECTED]>:
> On Wednesday 27 February 2008 13:35:16 Douglas J Hunley wrote:
> > > > 2) is there any internal data in the db that would allow me to
> > > > programmatically determine which tables would benefit from being
> > > > clustered? 3) for that matte
On Tue, 2008-02-26 at 17:22 -0500, Tom Lane wrote:
> "Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> > "Scott Marlowe" <[EMAIL PROTECTED]> wrote:
> >> begin;
> >> drop index abc_dx;
> >> select
> >> rollback;
> >>
> >> and voila, your index is still there. note that there are likely some
> >
On Wednesday 27 February 2008 13:35:16 Douglas J Hunley wrote:
> > > 2) is there any internal data in the db that would allow me to
> > > programmatically determine which tables would benefit from being
> > > clustered? 3) for that matter, is there info to allow me to determine
> > > which index it
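On question 2, one starting point (an assumption on my part, not something stated in the thread) is the correlation column of the pg_stats view: values near +/-1 mean the table's physical order already matches that column, while values near 0 suggest CLUSTER could help range scans on it:

```sql
-- Tables/columns whose physical order diverges most from the
-- column's sort order; schema name is illustrative.
SELECT tablename, attname, correlation
FROM pg_stats
WHERE schemaname = 'public'
  AND correlation IS NOT NULL
ORDER BY abs(correlation) ASC;
```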
On Wednesday 27 February 2008 12:40:57 Bill Moran wrote:
> In response to Douglas J Hunley <[EMAIL PROTECTED]>:
> > After reviewing
> > http://www.postgresql.org/docs/8.3/static/sql-cluster.html a couple of
> > times, I have some questions:
> > 1) it says to run analyze after doing a cluster. i'm a
In response to Douglas J Hunley <[EMAIL PROTECTED]>:
> After reviewing http://www.postgresql.org/docs/8.3/static/sql-cluster.html a
> couple of times, I have some questions:
> 1) it says to run analyze after doing a cluster. i'm assuming autovacuum will
> take care of this? or should i go ahead
2008/2/26, Tom Lane <[EMAIL PROTECTED]>:
>
> That's the right RESTRICT function, but what exactly did you mean by
> "first definition"? Are there more?
I thought it was enough, here is the complete definition of the <@ operator:
--
-- Name: <@; Type: OPERATOR; Schema: public; Owner: postgres
--
thanks for your answer!
okay, cost is a totally wrong word
here. I am using the "actual times" in the execplan.
what I am trying to do, is optimise a database both by investigating
execplans, and thinking about what concurrency would do to my database. (I
have a database which is reported to almo
After reviewing http://www.postgresql.org/docs/8.3/static/sql-cluster.html a
couple of times, I have some questions:
1) it says to run analyze after doing a cluster. I'm assuming autovacuum will
take care of this? or should I go ahead and do the analyze 'now' instead of
waiting?
2) is there any
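On question 1: CLUSTER does not update planner statistics itself, and autovacuum's analyze may not trigger promptly, so an explicit ANALYZE right after is the safe route. A sketch with illustrative names (the `USING` syntax is 8.4+; 8.3 spells it `CLUSTER indexname ON tablename`):

```sql
CLUSTER my_table USING my_table_created_idx;  -- rewrite in index order
ANALYZE my_table;                             -- refresh statistics now
```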
"Frits Hoogland" <[EMAIL PROTECTED]> writes:
> The manual states: "Actually two numbers are shown: the start-up time before
> the first row can be returned, and the total time to return all the rows.".
> Does this mean that the difference between the first and second is the cost
> or the time the s
I've got some long running queries, and want to tune them.
Using simple logic, I can understand what the expensive steps in a query plan
ought to be (seq scans and index scans over many rows), but I want to
quantify; use a somewhat more scientific approach.
The manual states: "Actually two numbers a
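Reading those two numbers on a concrete plan line, with the arithmetic spelled out (the figures below are invented for illustration):

```sql
-- EXPLAIN ANALYZE annotates each node like this (figures invented):
--   Index Scan using t_idx on t  (cost=0.29..8.31 rows=1 width=44)
--     (actual time=0.045..12.300 rows=500 loops=3)
--
-- actual time=0.045..12.300 means 0.045 ms elapsed before the node
-- produced its first row and 12.300 ms before it produced its last,
-- *per loop*. Total wall time attributable to the node is roughly
-- 12.300 ms x 3 loops = 36.9 ms, minus the time of its child nodes.
```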