Can you run the uninstall playbook at playbooks/adhoc/uninstall.yml
first to clean up any previous installs? We record the version the
first time we install cluster components in order to ensure the rest
of the cluster matches that version, so we'd need to be starting from
a clean environment.
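
For reference, the uninstall run would look something like this (the
inventory path is whatever you used for the original install;
/etc/ansible/hosts here is just an example):

    ansible-playbook -i /etc/ansible/hosts playbooks/adhoc/uninstall.yml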

--
Scott

On Thu, Oct 6, 2016 at 7:16 AM, Julio Saura <jsa...@hiberus.com> wrote:
> I have moved to the release-1.2 branch, but I still get the same error :/
>
>
>> On 6 Oct 2016, at 12:07, Julio Saura <jsa...@hiberus.com> wrote:
>>
>> hello
>>
>> thanks for the input
>>
>> After setting the version in my Ansible inventory file, I get this error:
>>
>> fatal: [openshift-master01]: FAILED! => {"changed": false, "failed": true,
>> "msg": "Detected OpenShift version 1.3.0 does not match requested
>> openshift_release 1.2. You may need to adjust your yum repositories,
>> inventory, or run the appropriate OpenShift upgrade playbook."}
>>
>> Do I need to change anything else to install 1.2?
>>
>> I have checked the yum repos, but I don't see any reference to the version.
>>
>> thanks again
>>
>>> On 5 Oct 2016, at 20:43, Scott Dodson <sdod...@redhat.com> wrote:
>>>
>>> We maintain branches in the GitHub repo (release-1.2, release-1.3,
>>> etc.) that are updated less frequently, so they often don't get the
>>> latest installer features but should be more stable. The master branch
>>> shouldn't be used for 1.2 installs at this point; we're only testing
>>> it against the latest stable release and current development releases.
>>> Setting openshift_release=v1.2 should force installation of 1.2,
>>> though I'm not certain whether that feature exists in the release-1.2
>>> branch.
>>>
>>> The error you ran into is fixed on master via
>>> https://github.com/openshift/openshift-ansible/pull/2552; the problem
>>> was introduced in
>>> https://github.com/openshift/openshift-ansible/pull/2511.
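>>>
>>> For example, assuming the standard openshift-ansible inventory layout,
>>> the variable would go in the [OSEv3:vars] section of your inventory:
>>>
>>>     [OSEv3:vars]
>>>     openshift_release=v1.2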
>>>
>>> On Wed, Oct 5, 2016 at 11:55 AM, Julio Saura <jsa...@hiberus.com> wrote:
>>>> hello
>>>>
>>>> While installing a brand new cluster on CentOS this afternoon, I
>>>> realized that it now installs version 1.3.0 (OK so far).
>>>>
>>>> When deploying the playbook from the git master branch, I got an error
>>>> at the check for whether the master API is up through a native master
>>>> cluster (haproxy).
>>>>
>>>> The playbook tries to connect to port 8443 on the master load
>>>> balancer, but the haproxy.conf is set to listen on port 8843.
>>>>
>>>> Looking at the deployed haproxy config, I also see that it tries to
>>>> connect to the masters on port 8843, but the master service is up and
>>>> running on 8443, so the checks fail and the masters are marked "down"
>>>> from haproxy's perspective.
>>>>
>>>> I realized that in the load balancer playbook
>>>> (playbooks/common/openshift-loadbalancer/config.yml) the default port
>>>> is set to 8843; after changing it to 8443, the installation completed
>>>> without any problem.
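>>>>
>>>> For anyone checking their own install, the deployed haproxy config
>>>> should end up with the API port consistent end to end, along these
>>>> lines (the frontend/backend/server names here are made up):
>>>>
>>>>     frontend atomic-openshift-api
>>>>         bind *:8443
>>>>         mode tcp
>>>>         default_backend atomic-openshift-api
>>>>
>>>>     backend atomic-openshift-api
>>>>         balance source
>>>>         mode tcp
>>>>         server master01 192.168.1.10:8443 check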
>>>>
>>>> Just in case this helps anyone installing right now, since I think it
>>>> is an error in the playbooks.
>>>>
>>>> By the way: is it possible to install version 1.2.0 instead of 1.3.0
>>>> using the Ansible procedure? I don't really trust 1.3.0 for my
>>>> production environments yet :( and my staging environments are based
>>>> on 1.2.x, so I want them all on the same version.
>>>>
>>>> thanks
>>>>
>>>> best regards
>>>>
>>>> _______________________________________________
>>>> users mailing list
>>>> users@lists.openshift.redhat.com
>>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>
