Apparently patience is a virtue I don't possess. This morning the
resubscribe had completed overnight.
I was able to drop the node and everything is well.
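For anyone hitting the same thing: one way to confirm a resubscribe has
caught up before dropping a node is to watch the lag columns in sl_status
on the origin. A minimal sketch, assuming the cluster is named mycluster
(so its tables live in the _mycluster schema):

    -- Run on the origin node; assumes cluster name "mycluster",
    -- whose tables live in the "_mycluster" schema.
    -- Each row shows how far a receiver lags behind the origin.
    SELECT st_received,       -- receiving node id
           st_lag_num_events, -- events not yet confirmed there
           st_lag_time        -- wall-clock lag
      FROM _mycluster.sl_status;

Once st_lag_num_events stays near zero for the nodes involved, the
drop node should go through without waiting.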

Thanks for your help!

Dave Cramer

On 24 August 2015 at 17:44, Dave Cramer <davecra...@gmail.com> wrote:

>
>
> Dave Cramer
>
> On 24 August 2015 at 17:42, Steve Singer <ssin...@ca.afilias.info> wrote:
>
>> On 08/24/2015 05:40 PM, Dave Cramer wrote:
>>
>>>
>>> On 24 August 2015 at 17:35, Steve Singer <ssin...@ca.afilias.info
>>> <mailto:ssin...@ca.afilias.info>> wrote:
>>>
>>>     On 08/24/2015 05:29 PM, Dave Cramer wrote:
>>>
>>>              Now I assume the "only at (2,5003478340)" number is
>>>
>>>              staying the same and it isn't going up. IF it is then you
>>>         actually
>>>              have progress being made but I find that unlikely if node 2
>>>         isn't
>>>              the origin of any sets to node 1(you would be caught up
>>>         quickly).
>>>
>>>
>>>              Another option would be
>>>
>>>                 cluster name=mycluster;
>>>                 node 1 admin conninfo='...';
>>>                 node 2 admin conninfo='...';
>>>                 node 3 admin conninfo='...';
>>>                 node 4 admin conninfo='...';
>>>                 failover(id=3, backup node = 2);
>>>
>>>
>>>              Per the failover documentation:
>>>
>>>              Nodes that are forwarding providers can also be passed to the
>>>              failover command as a failed node. The failover process will
>>>              redirect the subscriptions from these nodes to the backup node.
>>>
>>>
>>>         failover provided no feedback and doesn't appear to have done
>>>         anything.
>>>
>>>
>>>     Most slonik commands don't produce feedback when they work.
>>>
>>>     select * FROM _mycluster.sl_subscribe;
>>>     (run on your origin)
>>>
>>>     Does it show node 3 as a provider?
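The check Steve describes can be narrowed to just the provider in
question. A sketch, again assuming the cluster (and hence the schema)
is named mycluster:

    -- Run on the origin; lists any subscriptions still fed by node 3.
    -- Assumes cluster name "mycluster" (schema "_mycluster").
    SELECT sub_set, sub_provider, sub_receiver, sub_active
      FROM _mycluster.sl_subscribe
     WHERE sub_provider = 3;

No rows back means node 3 is no longer a provider and should be safe
to drop.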
>>>
>>>
>>> It is still there. FWIW, this is 2.2.2-1; was there a fix for this
>>> recently?
>>>
>>>
>> Sounds similar to bug 342
>> http://www.slony.info/bugzilla/show_bug.cgi?id=342
>>
>> That was fixed in 2.2.3
>>
>>
> Sounds like it. I really didn't want to have to upgrade this mess to fix
> another mess...
>
> Thanks for your help. Sorry for not bringing up the version earlier.
>
> Dave
>
>>
>>
>>>     If not you can then drop node 3.
>>>
>>>     --
>>>     cluster name=mycluster;
>>>     node 1 admin conninfo='';
>>>     node 2 admin conninfo='';
>>>     node 4 admin conninfo='';
>>>     drop node (id=3, event node=1);
>>>
>>>
>>>                       are there paths between node 2 and 4?
>>>
>>>                  There are, but I thought I would try your suggestion,
>>>         which produces a different error message:
>>>
>>>                  waiting for events (2,5003485579) only at (2,5003478340),
>>>                  (4,5002587214) only at (4,5002579907) to be confirmed on
>>>         node 1
>>>
>>>                                             Dave Cramer
>>>
>>>                                             On 24 August 2015 at 15:38,
>>>         Scott Marlowe <scott.marl...@gmail.com> wrote:
>>>
>>>                  Note that the node will still show up in sl_nodes and
>>>                  sl_status for a while, until slony does a cleanup event /
>>>                  log switch (can't remember which right now). This is
>>>                  normal. Don't freak out.
>>>
>>
>
_______________________________________________
Slony1-general mailing list
Slony1-general@lists.slony.info
http://lists.slony.info/mailman/listinfo/slony1-general
