Re: Catalog Access (was: [HACKERS] [GENERAL] Concurrency problem building indexes)

2006-04-26 Thread Wes
On 4/26/06 5:42 PM, "Tom Lane" <[EMAIL PROTECTED]> wrote:

> Yeah, this is probably the best workaround for now.  I think we should
> look at making it fully concurrent-safe per upthread comments, but that
> won't be happening in existing release branches.

I changed the index build script such that for each table it builds one
index by itself, then builds the remaining indexes in parallel.  This
appears to be stable.  I made several runs with no errors.  I've got some
more testing to do, then I'll try my big run.
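
For the archives, here is a minimal sketch of that pattern (table and index
names below are made up, not the real schema):

    -- Step 1: build one index by itself, so the first pg_class update
    -- (reltuples and relhasindex) happens with no concurrent CREATE INDEX.
    CREATE INDEX messages_msgid_idx ON messages (msgid);

    -- Step 2: once that commits, build the remaining indexes in parallel
    -- from separate sessions.
    -- session A:
    CREATE INDEX messages_sender_idx ON messages (sender);
    -- session B:
    CREATE INDEX messages_sent_date_idx ON messages (sent_date);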

Thanks

Wes





Re: Catalog Access (was: [HACKERS] [GENERAL] Concurrency problem building indexes)

2006-04-26 Thread Jim C. Nasby
On Wed, Apr 26, 2006 at 07:13:08PM -0400, Tom Lane wrote:
> "Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> > What about not updating if the tuplecount is within X percent? Would
> > that be safe enough to back-port?
> 
> Even if you got agreement that it was a good idea (I don't think so
> myself), it wouldn't help Wes, at least not for values of X smaller
> than 100.  Presumably, that first CREATE INDEX is trying to update
> reltuples from zero to reality.

It may be, but an ANALYZE would eliminate that need and be far faster
than waiting on one entire CREATE INDEX. I'm thinking that even being off
by as much as 5% won't matter to the planner, and I can't think of any
possible reason to need an exact tuplecount in pg_class...
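
Something like the following (hypothetical table name) would get reltuples
into the right ballpark before kicking off the parallel builds:

    ANALYZE messages;
    SELECT relname, reltuples, relpages
      FROM pg_class WHERE relname = 'messages';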

> Also, the first CREATE INDEX has to set relhasindex = true, and that's
> not fuzzy at all.

Oh, will each index build try to do that? Would changing that be
non-invasive enough to backpatch? (I'm guessing it's just an added
conditional...)
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software    http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



Re: Catalog Access (was: [HACKERS] [GENERAL] Concurrency problem building indexes)

2006-04-26 Thread Wes
On 4/26/06 5:34 PM, "Jim C. Nasby" <[EMAIL PROTECTED]> wrote:

> Try running a first index build by itself and then running them in
> parallel. Hopefully once pg_class has an exact tuple count the
> conflicting update won't happen. If you actually have an exact tuple
> count you could also try updating pg_class manually beforehand, but
> that's not exactly a supported option...

I thought about that.  It would work well for the table with 3 indexes (1/2)
either way, but would be an extra pass on the one with 4 (1/2/1 instead of
2/2).

> Another possibility would be to patch the code so that if the tuplecount
> found by CREATE INDEX is within X percent of what's already in pg_class
> it doesn't do the update. Since there's already code to check to see if
> the count is an exact match, this patch should be pretty simple, and the
> community might well accept it into the code as well.

I don't think that would help here.  I assume after the COPY, the tuple
count is zero, and after the first index build, it is exact.
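
Easy enough to check against a scratch table (names and path made up):

    COPY messages FROM '/some/load/file';
    -- reltuples still shows the pre-COPY value; COPY doesn't maintain it
    SELECT reltuples, relhasindex FROM pg_class WHERE relname = 'messages';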

Dumb question...  Since COPY has to lock the table, why doesn't it take the
current count in pg_class and increment it by the number of rows inserted?
If you're doing a clean load of a table, that would result in an exact
count.

What about your idea of retrying the request if it detects a conflict?
 
> BTW, why are you limiting yourself to 2 indexes at once? I'd expect that
> for a table larger than memory you'd be better off building all the
> indexes at once so that everything runs off a single sequential scan.

I don't know enough about the index build process.   My presumption was that
while you might get a gain during the read process, the head contention
during the sort/write process would be a killer.  I don't have enough
spindles (yet) to separate out the different indexes.  I think you'd only
want to do one table at a time to avoid head contention during the read
process.  Wouldn't this tend to exacerbate the current problem, too?  In
this specific case, I guess I could do 1,2 and 1,3 parallel builds (for the
3 index/4 index tables).

Right now I'm running with rather restricted hardware (1GB memory, two
2-disk RAID 0's and a single disk).  If the pilot proves what I think is
possible, and I can get real hardware (i.e. an intelligent caching array and
some serious memory), things change a bit.

Wes





Re: Catalog Access (was: [HACKERS] [GENERAL] Concurrency problem building indexes)

2006-04-26 Thread Tom Lane
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> What about not updating if the tuplecount is within X percent? Would
> that be safe enough to back-port?

Even if you got agreement that it was a good idea (I don't think so
myself), it wouldn't help Wes, at least not for values of X smaller
than 100.  Presumably, that first CREATE INDEX is trying to update
reltuples from zero to reality.

Also, the first CREATE INDEX has to set relhasindex = true, and that's
not fuzzy at all.

regards, tom lane



Re: Catalog Access (was: [HACKERS] [GENERAL] Concurrency problem building indexes)

2006-04-26 Thread Jim C. Nasby
On Wed, Apr 26, 2006 at 06:42:53PM -0400, Tom Lane wrote:
> "Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> > Try running a first index build by itself and then running them in
> > parallel.
> 
> Yeah, this is probably the best workaround for now.  I think we should
> look at making it fully concurrent-safe per upthread comments, but that
> won't be happening in existing release branches.
> 
> Also, the only case where it's a problem is if the first two index
> builds finish at almost exactly the same time.  It might be possible to
> overlap the first two index builds with reasonable safety so long as you
> choose indexes with very different sorting costs (eg, integer vs text
> columns, different numbers of columns, etc).

What about not updating if the tuplecount is within X percent? Would
that be safe enough to back-port? I've been trying to think of a reason
not to disable the current behavior of CREATE INDEX forcing reltuples to
be 100% accurate, but I can't think of one...
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software    http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



Re: Catalog Access (was: [HACKERS] [GENERAL] Concurrency problem building indexes)

2006-04-26 Thread Tom Lane
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> Try running a first index build by itself and then running them in
> parallel.

Yeah, this is probably the best workaround for now.  I think we should
look at making it fully concurrent-safe per upthread comments, but that
won't be happening in existing release branches.

Also, the only case where it's a problem is if the first two index
builds finish at almost exactly the same time.  It might be possible to
overlap the first two index builds with reasonable safety so long as you
choose indexes with very different sorting costs (eg, integer vs text
columns, different numbers of columns, etc).
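
For instance (hypothetical column names), pairing a single-column integer
index with a wide composite text index as the first two builds makes it
unlikely they reach the pg_class update at the same moment:

    -- session A: cheap sort, finishes well before session B
    CREATE INDEX messages_msgid_idx ON messages (msgid);
    -- session B: much more expensive sort
    CREATE INDEX messages_sender_subject_idx ON messages (sender, subject);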

regards, tom lane



Re: Catalog Access (was: [HACKERS] [GENERAL] Concurrency problem building indexes)

2006-04-26 Thread Jim C. Nasby
On Wed, Apr 26, 2006 at 05:24:27PM -0500, Wes wrote:
> On 4/25/06 12:24 PM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> 
> > I'm inclined to think that the right solution is to fix UpdateStats and
> > setRelhasindex so that they don't use simple_heap_update, but call
> > heap_update directly and cope with HeapTupleUpdated (by looping around
> > and trying the update from scratch).
> 
> Is there a verdict on what can/should/will be done for this?  As far as I
> can tell from all this, there appears to be no workaround (even kludgy)
> other than to not build indexes in parallel - not an attractive option.
> 
> If I'm only building two indexes simultaneously, what would happen if I
> tried to lock pg_class in the shorter index build transaction?  Besides
> seeming like a bad idea...

Try running a first index build by itself and then running them in
parallel. Hopefully once pg_class has an exact tuple count the
conflicting update won't happen. If you actually have an exact tuple
count you could also try updating pg_class manually beforehand, but
that's not exactly a supported option...
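
The unsupported version would be something along these lines, with a made-up
relation name and row count (superuser only, entirely at your own risk):

    UPDATE pg_class SET reltuples = 250000000 WHERE relname = 'messages';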

Another possibility would be to patch the code so that if the tuplecount
found by CREATE INDEX is within X percent of what's already in pg_class
it doesn't do the update. Since there's already code to check to see if
the count is an exact match, this patch should be pretty simple, and the
community might well accept it into the code as well.

BTW, why are you limiting yourself to 2 indexes at once? I'd expect that
for a table larger than memory you'd be better off building all the
indexes at once so that everything runs off a single sequential scan.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software    http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



Re: Catalog Access (was: [HACKERS] [GENERAL] Concurrency problem building indexes)

2006-04-26 Thread Wes
On 4/25/06 12:24 PM, "Tom Lane" <[EMAIL PROTECTED]> wrote:

> I'm inclined to think that the right solution is to fix UpdateStats and
> setRelhasindex so that they don't use simple_heap_update, but call
> heap_update directly and cope with HeapTupleUpdated (by looping around
> and trying the update from scratch).

Is there a verdict on what can/should/will be done for this?  As far as I
can tell from all this, there appears to be no workaround (even kludgy)
other than to not build indexes in parallel - not an attractive option.

If I'm only building two indexes simultaneously, what would happen if I
tried to lock pg_class in the shorter index build transaction?  Besides
seeming like a bad idea...
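
Roughly this is what I mean (hypothetical index name; the pg_class lock would
of course block far more than just the competing index build):

    BEGIN;
    LOCK TABLE pg_class IN EXCLUSIVE MODE;  -- serialize the catalog update
    CREATE INDEX messages_msgid_idx ON messages (msgid);
    COMMIT;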

Wes





Re: Catalog Access (was: [HACKERS] [GENERAL] Concurrency problem building indexes)

2006-04-25 Thread Tom Lane
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Hannu Krosing wrote:
>> Would this take effect even inside a single command ? in other words, if
>> it were possible that an index appeared in middle of a big update, would
>> the tuples updated after the index becomes visible be also added to the
>> index ?

> This can't happen, because an UPDATE to a table requires acquiring a
> lock (RowExclusiveLock) which conflicts with a lock taken by the CREATE
> INDEX (ShareLock).

Right.  By and large, schema-changing operations take AccessExclusive
lock on the table they are changing, which guarantees (a) no concurrent
operation is touching the table, and (b) by the time the lock is
released, the schema-changing command is already committed and so its
catalog changes appear valid to any subsequent transactions touching the
table, since they look at the catalogs with SnapshotNow rules.

CREATE INDEX is the only schema-changing op that I can think of offhand
that takes a non-exclusive lock, and so its little problem with two
concurrent operations on the same table is unique.

regards, tom lane



Re: Catalog Access (was: [HACKERS] [GENERAL] Concurrency problem building indexes)

2006-04-25 Thread Alvaro Herrera
Hannu Krosing wrote:
> On one fine day, Tue, 2006-04-25 at 13:58, Tom Lane wrote:
> > Martijn van Oosterhout  writes:
> > > I think the basic problem is that DDL can't really work within a
> > > transaction. If I do an ALTER TABLE, some of these changes need to show
> > > up to concurrent transactions (maybe creating a unique index?).
> > 
> > The point is that DDL can't be MVCC.  If for instance you add an index
> > to a table, once you commit every other transaction must *immediately*
> > start updating that index when they modify the table. 
> 
> How is it done in a way that all other backends see it, but only after
> commit ?
> 
> Is there some secret transaction isolation mode for DDL?

Sort of.  Catalog accesses normally use SnapshotNow, instead of
ActiveSnapshot which is normally used by regular access.  The use of
ActiveSnapshot is what makes a transaction read committed or
serializable; in a serializable transaction, ActiveSnapshot will point
to SerializableSnapshot, while in a read committed transaction,
ActiveSnapshot will point to a snapshot acquired at the beginning of the
command by GetSnapshotData.  Have a look at GetTransactionSnapshot() in
tqual.c.

(The trick is grokking the differences among the various
HeapTupleSatisfies routines.)

> Would this take effect even inside a single command ? in other words, if
> it were possible that an index appeared in middle of a big update, would
> the tuples updated after the index becomes visible be also added to the
> index ?

This can't happen, because an UPDATE to a table requires acquiring a
lock (RowExclusiveLock) which conflicts with a lock taken by the CREATE
INDEX (ShareLock).  You can see the conflict table in lock.c,
the LockConflicts array.
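
You can also see the conflict directly from psql (hypothetical table):

    -- session A: CREATE INDEX holds a ShareLock on the table until commit
    BEGIN;
    CREATE INDEX messages_sender_idx ON messages (sender);

    -- session B: the UPDATE needs RowExclusiveLock, which conflicts with
    -- ShareLock, so it blocks here until session A commits or aborts
    UPDATE messages SET sender = lower(sender) WHERE msgid = 1;

    -- session A:
    COMMIT;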

-- 
Alvaro Herrera    http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: Catalog Access (was: [HACKERS] [GENERAL] Concurrency problem building indexes)

2006-04-25 Thread Hannu Krosing
On one fine day, Tue, 2006-04-25 at 13:58, Tom Lane wrote:
> Martijn van Oosterhout  writes:
> > I think the basic problem is that DDL can't really work within a
> > transaction. If I do an ALTER TABLE, some of these changes need to show
> > up to concurrent transactions (maybe creating a unique index?).
> 
> The point is that DDL can't be MVCC.  If for instance you add an index
> to a table, once you commit every other transaction must *immediately*
> start updating that index when they modify the table. 

How is it done in a way that all other backends see it, but only after
commit ?

Is there some secret transaction isolation mode for DDL? Maybe something
that fits between "read uncommitted" and "read committed"? Or is it
just that catalog access is always done in read-committed mode, even if
the transaction is in "serializable"?

Would this take effect even inside a single command ? in other words, if
it were possible that an index appeared in middle of a big update, would
the tuples updated after the index becomes visible be also added to the
index ?

The reason I ask is that I'm still keen on implementing a CONCURRENT
INDEX command, and I want to get as much background info as possible
before diving in.

> They can't use
> the excuse of "not my problem because the catalog change postdates the
> snapshot I'm using".  The drop-index case is even worse, since a
> transaction that believes the index is still present is likely to try
> to access/update a disk file that's not there anymore. Adding/dropping
> columns, constraints, triggers, etc all have hazards of the same ilk.

At what point will an added/dropped column become visible to parallel
transactions?

Can a trigger or constraint appear magically inside a transaction? So if I
update 5 rows inside one serializable transaction, is it possible that a
trigger added to the table after the 2nd update will fire for the last 3
updates?

BTW, I don't think that a stored procedure (cached plans) will pick up
something like an added/changed default even after commit in both
connections.

> > I think it's like Tom says in that email, it could be done, but the
> > cost/benefit ratio isn't very good...
> 
> It's barely possible that we could make this happen, but it would be a
> huge amount of work, and probably a huge amount of instability for a
> very long time until we'd gotten all the corner cases sorted.  I think
> there are much more productive uses for our development effort.

True.

---
Hannu







Re: Catalog Access (was: [HACKERS] [GENERAL] Concurrency problem building indexes)

2006-04-25 Thread Tom Lane
Martijn van Oosterhout  writes:
> I think the basic problem is that DDL can't really work within a
> transaction. If I do an ALTER TABLE, some of these changes need to show
> up to concurrent transactions (maybe creating a unique index?).

The point is that DDL can't be MVCC.  If for instance you add an index
to a table, once you commit every other transaction must *immediately*
start updating that index when they modify the table.  They can't use
the excuse of "not my problem because the catalog change postdates the
snapshot I'm using".  The drop-index case is even worse, since a
transaction that believes the index is still present is likely to try
to access/update a disk file that's not there anymore.  Adding/dropping
columns, constraints, triggers, etc all have hazards of the same ilk.
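
A concrete way to see that (hypothetical table and column names): even a
serializable transaction whose snapshot predates the index must maintain it.

    -- session A: snapshot taken before the index exists
    BEGIN ISOLATION LEVEL SERIALIZABLE;
    SELECT count(*) FROM messages;

    -- session B: add an index and commit while session A is still open
    CREATE INDEX messages_sender_idx ON messages (sender);

    -- session A: this insert must update messages_sender_idx anyway,
    -- even though the index postdates session A's snapshot
    INSERT INTO messages (msgid, sender) VALUES (42, 'someone@example.com');
    COMMIT;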

> I think it's like Tom says in that email, it could be done, but the
> cost/benefit ratio isn't very good...

It's barely possible that we could make this happen, but it would be a
huge amount of work, and probably a huge amount of instability for a
very long time until we'd gotten all the corner cases sorted.  I think
there are much more productive uses for our development effort.

regards, tom lane



Re: Catalog Access (was: [HACKERS] [GENERAL] Concurrency problem building indexes)

2006-04-25 Thread Martijn van Oosterhout
On Tue, Apr 25, 2006 at 12:25:35PM -0500, Jim C. Nasby wrote:
> Is there anything in comments/docs/list archives about why catalog
> access uses a bunch of 'magic' instead of treating catalog tables the
> same as every other table? I realize that ultimately you have to
> bootstrap somehow (kinda hard to read from pg_class if the info needed
> to do so is in pg_class), but perhaps switching over to the regular
> access methods after the system is up would be worth-while.

I don't know if it's explicitly documented, but here's one mail that
describes some of the issues:

http://www.thelinuxreview.com/pgsql-hackers/2005-03/msg00840.php

I think the basic problem is that DDL can't really work within a
transaction. If I do an ALTER TABLE, some of these changes need to show
up to concurrent transactions (maybe creating a unique index?).

I think it's like Tom says in that email, it could be done, but the
cost/benefit ratio isn't very good...

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
From each according to his ability. To each according to his ability to
litigate.




Catalog Access (was: [HACKERS] [GENERAL] Concurrency problem building indexes)

2006-04-25 Thread Jim C. Nasby
On Tue, Apr 25, 2006 at 12:48:04PM -0400, Alvaro Herrera wrote:
> I'm late to this thread, but maybe we can make the process of storing
> the new data in pg_class take a lock using LockObject() or something
> like that to serialize the access to the pg_class row.  The idea would
> be that this lock doesn't conflict with a LockRelation(), but it would
> of course conflict with itself so no two CREATE INDEXES can enter that
> code section concurrently.

Is there anything in comments/docs/list archives about why catalog
access uses a bunch of 'magic' instead of treating catalog tables the
same as every other table? I realize that ultimately you have to
bootstrap somehow (kinda hard to read from pg_class if the info needed
to do so is in pg_class), but perhaps switching over to the regular
access methods after the system is up would be worth-while.

Advantages:
Allows for concurrent access (using MVCC)

Potentially reduces locking requirements (if snapshots aren't required
anymore, each backend should theoretically be able to rely on MVCC to
get the right catalog info, though of course this depends on the actual
operation)

Should allow for much-sought-after triggers on the system catalogs

But I'm sure this has come up in the past, I just can't find any info
about why not to do this...
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software    http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461
