Jeff Frost wrote:
> Karl Denninger wrote:
>> Karl Denninger wrote:
>>   
>>> I had posted a compatibility question related to Slony 2.0.2 and
>>> Postgres 8.4 previously, and got back that there were some warnings
>>> at issue, but no other particular problems.
>>>
>>> The current configuration has 1 master (node 2) and two slaves (1 and
>>> 3); I attempted to add a node #4 with 8.4.0 running on the same machine
>>> as the master.
>>>
>>> The server is in its own directory, on its own port (55432) and all the
>>> tables are present.
>>>
>>> I've got the following config:
>>>
>>>       
>> Following up on this, I did a "DROP NODE" with the slons running, then
>> re-ran the config scripts.  Now I'm getting this - the same thing I was
>> getting last night:
>>
>> Aug 22 00:04:11 tickerforum slon[66032]: [129-1] INFO   copy_set 1
>> Aug 22 00:04:11 tickerforum slon[66032]: [130-1] CONFIG version for
>> "dbname=ticker host=colo1.denninger.net user=slony port=5432
>> password=xxxxx" is 80306
>> Aug 22 00:04:11 tickerforum slon[66032]: [131-1] CONFIG
>> remoteWorkerThread_2: connected to provider DB
>> Aug 22 00:04:11 tickerforum slon[66032]: [132-1] CONFIG
>> remoteWorkerThread_2: prepare to copy table "public"."banned_ip"
>> Aug 22 00:04:11 tickerforum slon[66032]: [133-1] ERROR 
>> remoteWorkerThread_2: Could not lock table "public"."banned_ip" on
>> subscriber
>> Aug 22 00:04:11 tickerforum slon[66032]: [134-1] WARN  
>> remoteWorkerThread_2: data copy for set 1 failed - sleep 60 seconds
>>
>> And then this repeats every 60 seconds.
>>
>> This is the lock error I was seeing before; I don't get it.....
>>
>>   
> Karl, when you dropped the node did you drop the slony schema as well
> on node 4?  DROP NODE doesn't do an UNINSTALL NODE.  What does psql
> show for '\d banned_ip' in the node 4 DB?
>
>
> -- 
> Jeff Frost <[email protected]>
> COO, PostgreSQL Experts, Inc.
> Phone: 1-888-PG-EXPRT x506
> http://www.pgexperts.com/ 
>   
I found the problem - there was a DDL mismatch on the new node. 
However, it was NOT in the table referenced in the error message.

Slony has a major issue here: under those conditions it leaves the
transaction open and then bombs on the retry at the FIRST table in the
set, since the transaction is still open and you can't lock an
already-locked table.

I know this sort of thing (a DDL difference) isn't supposed to happen,
and the way to prevent it is to dump the schema, strip out the Slony
"cruft", and create the tables on the new node that way - but (in
theory) with a canned app you should be able to run its "create table"
script and be good to go.
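The schema-dump approach described above can be sketched as follows (hedged: the cluster schema name "_replication", database name "ticker", and user are assumptions based on the connection string in the log; substitute your own):

```shell
# Dump only the schema from the master, excluding the Slony-I cluster
# schema, and load it into the new node running on port 55432.
pg_dump -s --exclude-schema=_replication \
        -h colo1.denninger.net -p 5432 -U slony ticker \
  | psql -p 55432 ticker
```

Using --exclude-schema avoids having to strip the Slony objects out of the dump by hand, and guarantees the subscriber's tables match the DDL actually in use on the master rather than whatever version of the app's "create table" script happens to be lying around.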

In this case it was a version mismatch (on my end) between the script I
ran and the DDL in use in the active cluster.

So the real issue here appears to be a misleading error message and a
bad code path choice in Slony when this error occurs; I would have
expected the in-process transaction to be ROLLBACKed instead of being
left open.

-- Karl


_______________________________________________
Slony1-general mailing list
[email protected]
http://lists.slony.info/mailman/listinfo/slony1-general
