On Friday 24 August 2007 15:39:22 Tom Lane wrote:
> Kevin Kempter <[EMAIL PROTECTED]> writes:
> > The development folks that have been here awhile tell me that it seems
> > like when they have a query (not limited to vacuum processes) that has
> > been running for a long time (i.e. > 5 or 6 hours)
This is kind of disappointing; I was hoping there was more that could be
done. There has to be another way to do incremental indexing without
losing that much performance.
Benjamin
On Aug 24, 2007, at 6:29 PM, Joshua D. Drake wrote:
Brandon Shalton wrote:
> Benjamin,
>
>
>>
>> In order to avoid the re-indexing I was thinking of instead creating a new
>> table each month (building its indexes, etc.) and accessing them all
>> through a view. This way I only have to index the
Benjamin,
In order to avoid the re-indexing I was thinking of instead creating a new
table each month (building its indexes, etc.) and accessing them all
through a view. This way I only have to index the new data each month.
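A minimal sketch of that idea, with every table, index, and column name invented for illustration (this is not Benjamin's actual schema):

```sql
-- Hypothetical monthly-table-plus-view layout.
CREATE TABLE docs_y2007m08 (LIKE docs_y2007m07);      -- new month's table
COPY docs_y2007m08 FROM '/data/docs_2007m08.copy';    -- bulk load
CREATE INDEX docs_y2007m08_tsv_idx
    ON docs_y2007m08 USING gist (tsv);                -- index only the new month
CREATE OR REPLACE VIEW docs AS
    SELECT * FROM docs_y2007m07
    UNION ALL
    SELECT * FROM docs_y2007m08;                      -- re-point the view
```

Only the new month's table ever gets indexed; the older tables and their indexes are untouched.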
Take a look at bizgres.org (based on postgres).
They have a
Hi,
I have an application which loads millions of NEW documents each month
into a PostgreSQL tsearch2 table. I have the initial version completed
and search performance is great, but my problem is that each time a new
month rolls around I have to drop all the indexes, do a COPY, and re-index
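The monthly cycle described here might look like the following (tsearch2-era GiST index; every name is a placeholder, not the actual schema):

```sql
-- Hypothetical current workflow: drop, bulk load, rebuild from scratch.
DROP INDEX docs_tsv_idx;                       -- drop before the bulk load
COPY docs FROM '/data/docs_2007m08.copy';      -- load the new month's documents
CREATE INDEX docs_tsv_idx
    ON docs USING gist (tsv);                  -- re-index the entire table
```

The pain point is the last step: the CREATE INDEX rebuilds over all rows, not just the newly loaded month.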
Gregory Stark <[EMAIL PROTECTED]> writes:
> Should reindex be doing an in-place update?
Not if you'd like it to be crash-safe.
> Alternatively, why does the planner need access to the pg_class entry and not
> just the pg_index record?
For one thing, to find out how big the index is ... though if
Kevin Kempter <[EMAIL PROTECTED]> writes:
> The development folks that have been here awhile tell me that it seems like
> when they have a query (not limited to vacuum processes) that has been
> running for a long time (i.e. > 5 or 6 hours) that the query sort of "goes
> crazy" and the entire sy
In response to "Kevin Grittner" <[EMAIL PROTECTED]>:
> >>> On Fri, Aug 24, 2007 at 2:57 PM, in message
> <[EMAIL PROTECTED]>, Kevin Kempter
> <[EMAIL PROTECTED]> wrote:
> >c) setup WAL archiving on the 8.1.4 cluster
> >
> >d) do a full dump of the 8.1.4 cluster and restore it to the new
>>> On Fri, Aug 24, 2007 at 2:57 PM, in message
<[EMAIL PROTECTED]>, Kevin Kempter
<[EMAIL PROTECTED]> wrote:
>c) setup WAL archiving on the 8.1.4 cluster
>
>d) do a full dump of the 8.1.4 cluster and restore it to the new 8.2.4
> cluster
>
>e) stop the 8.2.4 cluster and bring it u
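Step (c) above, on an 8.1-era cluster, is usually just a matter of setting archive_command; the destination path here is illustrative:

```
# postgresql.conf on the 8.1.4 cluster (archive path is made up)
archive_command = 'cp %p /mnt/wal_archive/%f'
```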
In response to Kevin Kempter <[EMAIL PROTECTED]>:
> Hi List;
>
> I've just started working with a client that has been running Postgres (with
> no DBA) for a few years. They're running version 8.1.4 on 4-way Dell boxes
> with 4 GB of memory on each box, attached to RAID-10 disk arrays.
>
> So
"Tom Lane" <[EMAIL PROTECTED]> writes:
> "Steven Flatt" <[EMAIL PROTECTED]> writes:
>> So, can we simply trust what's in pg_class.relpages and ignore looking
>> directly at the index?
>
> No, we can't. In the light of morning I remember more about the reason
> for the aforesaid patch: it's actua
Hi List;
I've just started working with a client that has been running Postgres (with
no DBA) for a few years. They're running version 8.1.4 on 4-way Dell boxes
with 4 GB of memory on each box, attached to RAID-10 disk arrays.
Some of their key config settings are here:
shared_buffers = 20480
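For scale, with PostgreSQL's default 8 kB block size that setting works out as:

```
shared_buffers = 20480        # 20480 pages x 8 kB/page = 160 MB
```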
On 8/24/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> You might be able to work around it for now by faking such a reindex
> "by hand"; that is, create a duplicate new index under a different
> name using CREATE INDEX CONCURRENTLY, then exclusive-lock the table
> for just long enough to drop the old i
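Tom's workaround might be sketched as follows (index and table names are placeholders; CREATE INDEX CONCURRENTLY requires 8.2):

```sql
-- Build a duplicate index without blocking writers.
CREATE INDEX CONCURRENTLY my_idx_new ON my_table (my_col);

-- Swap it in; the exclusive lock is held only for the DROP/RENAME.
BEGIN;
DROP INDEX my_idx;
ALTER INDEX my_idx_new RENAME TO my_idx;
COMMIT;
```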
"Steven Flatt" <[EMAIL PROTECTED]> writes:
> So, can we simply trust what's in pg_class.relpages and ignore looking
> directly at the index?
No, we can't. In the light of morning I remember more about the reason
for the aforesaid patch: it's actually unsafe to read the pg_class row
at all if you
On 8/24/07, Tom Lane <[EMAIL PROTECTED]> wrote:
>
> "Steven Flatt" <[EMAIL PROTECTED]> writes:
> > Why do we even need to consider calling RelationGetNumberOfBlocks or
> looking
> > at the pg_class.relpages entry? My understanding of the expected
> behaviour
> > is that while a reindex is happenin
Below is a patch against 8.2.4 (more or less); Heikki, can you take a look at
it?
This enables use of an index scan on a child table by recognizing the sort
order of the Append node. Kurt Harriman did the work.
- Luke
Index: cdb-pg/src/backend/optimizer/path/indxpath.c
===================================================================
We just fixed this - I'll post a patch, but I don't have time to verify
against HEAD.
- Luke
On 8/24/07 3:38 AM, "Heikki Linnakangas" <[EMAIL PROTECTED]> wrote:
> Anton wrote:
=# explain SELECT * FROM n_traf ORDER BY date_time DESC LIMIT 1;
"Steven Flatt" <[EMAIL PROTECTED]> writes:
>> The fly in the ointment is that after collecting the pg_index definition
>> of the index, plancat.c also wants to know how big it is --- it calls
>> RelationGetNumberOfBlocks.
> Why do we even need to consider calling RelationGetNumberOfBlocks or looki
On 8/24/07, Mark Kirkwood <[EMAIL PROTECTED]> wrote:
>
> Tom Lane wrote:
> >
> > The fly in the ointment is that after collecting the pg_index definition
> > of the index, plancat.c also wants to know how big it is --- it calls
> > RelationGetNumberOfBlocks. And that absolutely does look at the
>
Anton wrote:
>>> =# explain SELECT * FROM n_traf ORDER BY date_time DESC LIMIT 1;
>>>                          QUERY PLAN
>>> ---------------------------------------------------------------------
>>>  Limit  (cost=824637.69..824637.69 rows=1 width=32)
> > =# explain SELECT * FROM n_traf ORDER BY date_time DESC LIMIT 1;
> >                          QUERY PLAN
> > ---------------------------------------------------------------------
> >  Limit  (cost=824637.69..824637.69 rows=1 width=32)
> >
On 8/24/07, Anton <[EMAIL PROTECTED]> wrote:
>
> Hi.
>
> I just created a partitioned table, n_traf, sliced by month
> (n_traf_y2007m01, n_traf_y2007m02... and so on, see below). They are
> indexed by the 'date_time' column.
> Then I populated it (the last values have date 2007-08-...) and ran VACUUM
> ANALYZE
Hi.
I just created a partitioned table, n_traf, sliced by month
(n_traf_y2007m01, n_traf_y2007m02... and so on, see below). They are
indexed by the 'date_time' column.
Then I populated it (the last values have date 2007-08-...) and ran VACUUM
ANALYZE on all of the n_traf_y2007... tables.
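The layout described can be reconstructed from the names in the message (8.2-style inheritance partitioning; the column list beyond date_time is an assumption):

```sql
-- Parent plus one child per month; "octets" is a guessed payload column.
CREATE TABLE n_traf (date_time timestamp NOT NULL, octets bigint);

CREATE TABLE n_traf_y2007m01 (
    CHECK (date_time >= '2007-01-01' AND date_time < '2007-02-01')
) INHERITS (n_traf);
CREATE INDEX n_traf_y2007m01_date_time ON n_traf_y2007m01 (date_time);
-- ...one child table and index per month, through n_traf_y2007m08...

VACUUM ANALYZE n_traf_y2007m01;
```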
Now I try to select latest va