Re: Pods stuck on Terminating status

2018-03-16 Thread Rodrigo Bersa
Bahhoo,

I believe the namespace will get stuck as well, because it will only be
deleted after all of its objects have been deleted.

I would try restarting the master services first.
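
For reference, a minimal sketch of that restart on an OCP 3.7 cluster (the
service names below assume the native-HA layout; Origin installs use
origin-master-* instead), running it on one master at a time so the API stays
available:

systemctl restart atomic-openshift-master-api
systemctl restart atomic-openshift-master-controllers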


Regards,


Rodrigo Bersa

Cloud Consultant, RHCVA, RHCE

Red Hat Brasil 

rbe...@redhat.com    M: +55-11-99557-5841

TRIED. TESTED. TRUSTED. 
Red Hat is recognized among the best companies to work for in Brazil
by *Great Place to Work*.

On Fri, Mar 16, 2018 at 5:25 PM, Bahhoo  wrote:

> Hi  Rodrigo,
>
> No PVs are used. One of the pods is a build pod, the other one's a normal
> pod without storage.
> I'll try deleting the namespace. I didn't want to do that, since I had
> running pods in the namespace.
>
> Best,
> Bahho
> --
> From: Rodrigo Bersa
> Sent: 16.3.2018 16:12
> To: Bahhoo
> Cc: rahul334...@gmail.com; users
>
> Subject: Re: Pods stuck on Terminating status
>
> Hi Bahhoo,
>
> Are you using PVs on the "Terminating" POD? I heard about some issues with
> PODs bound to PV/PVCs provided by dynamic storage, where you have to
> first remove the volume from the POD, then the PV/PVC. Only after that remove
> the POD or the DeploymentConfig.
>
> If that's not the case, restarting the atomic-openshift-master-*
> services may clear the inconsistent POD.
>
>
> Regards,
>
>
> Rodrigo Bersa
>
> Cloud Consultant, RHCVA, RHCE
>
> Red Hat Brasil 
>
> rbe...@redhat.com    M: +55-11-99557-5841
> 
> TRIED. TESTED. TRUSTED. 
> Red Hat is recognized among the best companies to work for in Brazil
> by *Great Place to Work*.
>
> On Thu, Mar 15, 2018 at 7:28 PM, Bahhoo  wrote:
>
>> Hi Rahul,
>>
>> That won't do it either.
>>
>> Thanks
>> Bahho
>> --
>> From: Rahul Agarwal
>> Sent: 15.3.2018 22:26
>> To: bahhooo
>> Cc: users
>> Subject: Re: Pods stuck on Terminating status
>>
>> Hi Bahho
>>
>> Try: oc delete all -l app=
>>
>> Thanks,
>> Rahul
>>
>> On Thu, Mar 15, 2018 at 5:19 PM, bahhooo  wrote:
>>
>>> Hi all,
>>>
>>> I have some zombie pods stuck in Terminating status on an OCP 3.7
>>> HA cluster.
>>>
>>> oc delete with --grace-period=0 --force etc. won't work.
>>> Neither a Docker restart nor a server reboot helps.
>>>
>>> I also tried to find the pod key in etcd in order to delete it
>>> manually, but I couldn't find it.
>>>
>>> Is there a way to delete these pods?
>>>
>>>
>>>
>>>
>>> Bahho
>>>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Pods stuck on Terminating status

2018-03-16 Thread Bahhoo
Hi  Rodrigo,

No PVs are used. One of the pods is a build pod, the other one's a normal pod 
without storage. 
I'll try deleting the namespace. I didn't want to do that, since I had running
pods in the namespace.

Best,
Bahho
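
For reference, before dropping the whole namespace, a minimal sketch of the
usual escalation path for a stuck pod (pod, namespace and etcd key names are
placeholders; the etcd lookup assumes etcd3 with the default /kubernetes.io
prefix and the appropriate client certificates):

# Clearing the pod's finalizers sometimes releases a stuck deletion
oc patch pod <pod-name> -n <namespace> -p '{"metadata":{"finalizers":null}}'

# See what is still left in the namespace
oc get all -n <namespace>

# Look for the pod key directly in etcd, on an etcd member
ETCDCTL_API=3 etcdctl get /kubernetes.io/pods/<namespace> --prefix --keys-only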

-Original message-
From: "Rodrigo Bersa"
Sent: 16.3.2018 16:12
To: "Bahhoo"
Cc: "rahul334...@gmail.com"; "users"

Subject: Re: Pods stuck on Terminating status

Hi Bahhoo,


Are you using PVs on the "Terminating" POD? I heard about some issues with PODs
bound to PV/PVCs provided by dynamic storage, where you have to first remove
the volume from the POD, then the PV/PVC. Only after that remove the POD or the
DeploymentConfig.


If that's not the case, restarting the atomic-openshift-master-* services may
clear the inconsistent POD.



Regards,





Rodrigo Bersa
Cloud Consultant, RHCVA, RHCE
Red Hat Brasil
rbe...@redhat.com    M: +55-11-99557-5841
 TRIED. TESTED. TRUSTED.

Red Hat is recognized among the best companies to work for in Brazil by
*Great Place to Work*.


On Thu, Mar 15, 2018 at 7:28 PM, Bahhoo  wrote:

Hi Rahul,

That won't do it either.

Thanks
Bahho


From: Rahul Agarwal
Sent: 15.3.2018 22:26
To: bahhooo
Cc: users
Subject: Re: Pods stuck on Terminating status


Hi Bahho


Try: oc delete all -l app=


Thanks,
Rahul


On Thu, Mar 15, 2018 at 5:19 PM, bahhooo  wrote:

Hi all,


I have some zombie pods stuck in Terminating status on an OCP 3.7 HA cluster.

oc delete with --grace-period=0 --force etc. won't work.
Neither a Docker restart nor a server reboot helps.


I also tried to find the pod key in etcd in order to delete it manually, but I
couldn't find it.


Is there a way to delete these pods? 








Bahho

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Cloud provider register/deregister node and labels

2018-03-16 Thread Walters, Todd
We have not found any documentation or any solutions for this issue. We've
recently upgraded to 3.7 and have had the same issues. Our workaround has been
to have CloudWatch check the logs for whenever a node (or master) is registered
and then re-add the logging fluentd label; if it's a master, it also sets it to
SchedulingDisabled, because we also see this behavior on our masters.
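
For reference, a rough sketch of the relabel step that workaround performs
(node names are placeholders; the label and the schedulable flag are the ones
discussed in this thread):

oc label node <node-name> logging-infra-fluentd=true --overwrite
oc adm manage-node <master-name> --schedulable=false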

I'd like to see a resolution for this, and some documentation. We've struggled
to get it working properly. Setting the command in the node service caused
other issues, but it may have been misconfigured.

Thanks,

Todd

Message 3 from the digest:

Date: Thu, 15 Mar 2018 14:53:08 -0700
From: Mark McKinstry
To: Clayton Coleman
Cc: users@lists.openshift.redhat.com, "Ernst, Chad"
Subject: Re: Looking for documentation on cloud provider delete node and
register node

Is there more info on this? I'm having this problem on OCP 3.7 right now
too. If a node is rebooted, it comes back up but is missing
the logging-infra-fluentd=true label.




On Thu, Dec 21, 2017 at 10:15 AM, Clayton Coleman 
wrote:

> There was an open bug on this previously - I'm having trouble finding it
> at the moment.  The node may be racing with the cloud controller and then
> not updating the labels.  One workaround is to simply add an 'oc label
> node/$(hostname) ...' command to the origin-node service as a prestart
> command.
>
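
A minimal sketch of that prestart hook as a systemd drop-in (the unit name,
label, oc path and kubeconfig are assumptions and vary by install; on OCP the
unit is atomic-openshift-node, and oc needs credentials that are allowed to
label nodes):

mkdir -p /etc/systemd/system/origin-node.service.d
cat <<'EOF' >/etc/systemd/system/origin-node.service.d/20-labels.conf
[Service]
ExecStartPre=-/usr/bin/oc label node/%H logging-infra-fluentd=true --overwrite --config=<admin-kubeconfig>
EOF
systemctl daemon-reload

Here %H is the systemd host-name specifier (adjust it if your node names differ
from the machine hostname), and the leading '-' keeps a failed label from
blocking node startup.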
> On Dec 21, 2017, at 9:13 AM, Ernst, Chad  wrote:
>
>
>
> Running Origin 3.6 on AWS, we've found that if our EC2 instances go down
> for any length of time and come back up (as opposed to the EC2 instance
> getting terminated), the nodes are automatically deleted from OpenShift and
> then re-registered after the EC2 instance is restarted.  The activity is
> logged in /var/log/messages
>
>
>
> Dec 20 21:59:30 ip-172-21-21-30 origin-master-controllers: I1220
> 21:59:30.297638   26242 nodecontroller.go:761] Deleting node (no longer
> present in cloud provider): ip-172-21-20-30.ec2.internal
>
> Dec 20 21:59:30 ip-172-21-21-30 origin-master-controllers: I1220
> 21:59:30.297662   26242 controller_utils.go:273] Recording Deleting Node
> ip-172-21-20-30.ec2.internal because it's not present according to cloud
> provider event message for node ip-172-21-20-30.ec2.internal
>
> Dec 20 21:59:30 ip-172-21-21-30 origin-master-controllers: I1220
> 21:59:30.297895   26242 event.go:217] Event(v1.ObjectReference{Kind:"Node",
> Namespace:"", Name:"ip-172-21-20-30.ec2.internal",
> UID:"36c8dca4-e5c9-11e7-b2ce-0e69b80c212e", APIVersion:"",
> ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeletingNode'
> Node ip-172-21-20-30.ec2.internal event: Deleting Node
> ip-172-21-20-30.ec2.internal because it's not present according to cloud
> provider
>
>
>
>
>
> Dec 20 23:10:06 ip-172-21-21-30 origin-master-controllers: I1220
> 23:10:06.303567   26242 nodecontroller.go:616] NodeController observed a
> new Node: "ip-172-21-22-30.ec2.internal"
>
> Dec 20 23:10:06 ip-172-21-21-30 origin-master-controllers: I1220
> 23:10:06.303597   26242 controller_utils.go:273] Recording Registered Node
> ip-172-21-22-30.ec2.internal in NodeController event message for node
> ip-172-21-22-30.ec2.internal
>
> Dec 20 23:10:06 ip-172-21-21-30 origin-master-controllers: I1220
> 23:10:06.303899   26242 event.go:217] Event(v1.ObjectReference{Kind:"Node",
> Namespace:"", Name:"ip-172-21-22-30.ec2.internal",
> UID:"e850129f-e5da-11e7-ac5e-027542a418ee", APIVersion:"",
> ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode'
> Node ip-172-21-22-30.ec2.internal event: Registered Node
> ip-172-21-22-30.ec2.internal in NodeController
>
>
>
> The issue we are running into is that when the nodes come back they don't
> have all of our labels on them.  They don't get labelled to run the fluentd
> pods ('logging-infra-fluentd=true') and my masters aren't set for
> 'SchedulingDisabled'.
>
>
>
> Can anybody point me to any doc regarding the automatic registration
> of the node from the cloud provider, or does anyone know how to adjust the
> behavior when a node is re-registered so nodes can be tagged properly?
>
>
>
> Thanks
>
>
>
> Chad
>
> 

Reverse Proxy using Nginx

2018-03-16 Thread Gaurav Ojha
Hello,

I have a single-host OpenShift cluster. Is it possible to install Nginx
(run it as a Docker image) and route traffic using Nginx?

If so, can someone point out the configurations for NO_PROXY and HTTP_PROXY
in this case?

I don't want any OpenShift instance IP managed by OpenShift. What I am
confused about is this part of the document:

HTTP_PROXY=http://<user>:<password>@<ip_addr>:<port>/
HTTPS_PROXY=https://<user>:<password>@<ip_addr>:<port>/
NO_PROXY=master.hostname.example.com,10.1.0.0/16,172.30.0.0/16


It mentions that NO_PROXY has the hostname of the master included in
NO_PROXY. But since my cluster only has one host, all my routes are
managed through that hostname. In this case, do I just assign some arbitrary
routes and route them through Nginx?
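
For what it's worth, a sketch of how those variables might look on a
single-host cluster (the proxy host, hostnames and CIDRs below are examples and
must match your environment; 172.30.0.0/16 is only the default service network):

HTTP_PROXY=http://proxy.example.com:3128/
HTTPS_PROXY=http://proxy.example.com:3128/
NO_PROXY=master.example.com,.svc,.cluster.local,10.1.0.0/16,172.30.0.0/16

docker run -d --name nginx-proxy -p 8080:80 \
  -v /etc/nginx-proxy/nginx.conf:/etc/nginx/nginx.conf:ro nginx

Note that HTTP_PROXY/NO_PROXY only control outbound requests made by OpenShift
components; they do not affect how inbound traffic reaches your routes, so an
Nginx container like the one above can simply proxy incoming requests to the
port the OpenShift router is listening on.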

Regards
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Pods stuck on Terminating status

2018-03-16 Thread Humble Devassy Chirammal
Which PV types are in use here?

--Humble


On Fri, Mar 16, 2018 at 8:42 PM, Rodrigo Bersa  wrote:

> Hi Bahhoo,
>
> Are you using PVs on the "Terminating" POD? I heard about some issues with
> PODs bound to PV/PVCs provided by dynamic storage, where you have to
> first remove the volume from the POD, then the PV/PVC. Only after that remove
> the POD or the DeploymentConfig.
>
> If that's not the case, restarting the atomic-openshift-master-*
> services may clear the inconsistent POD.
>
>
> Regards,
>
>
> Rodrigo Bersa
>
> Cloud Consultant, RHCVA, RHCE
>
> Red Hat Brasil 
>
> rbe...@redhat.com    M: +55-11-99557-5841
> 
> TRIED. TESTED. TRUSTED. 
> Red Hat is recognized among the best companies to work for in Brazil
> by *Great Place to Work*.
>
> On Thu, Mar 15, 2018 at 7:28 PM, Bahhoo  wrote:
>
>> Hi Rahul,
>>
>> That won't do it either.
>>
>> Thanks
>> Bahho
>> --
>> From: Rahul Agarwal
>> Sent: 15.3.2018 22:26
>> To: bahhooo
>> Cc: users
>> Subject: Re: Pods stuck on Terminating status
>>
>> Hi Bahho
>>
>> Try: oc delete all -l app=
>>
>> Thanks,
>> Rahul
>>
>> On Thu, Mar 15, 2018 at 5:19 PM, bahhooo  wrote:
>>
>>> Hi all,
>>>
>>> I have some zombie pods stuck in Terminating status on an OCP 3.7
>>> HA cluster.
>>>
>>> oc delete with --grace-period=0 --force etc. won't work.
>>> Neither a Docker restart nor a server reboot helps.
>>>
>>> I also tried to find the pod key in etcd in order to delete it
>>> manually, but I couldn't find it.
>>>
>>> Is there a way to delete these pods?
>>>
>>>
>>>
>>>
>>> Bahho
>>>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Pods stuck on Terminating status

2018-03-16 Thread Rodrigo Bersa
Hi Bahhoo,

Are you using PVs on the "Terminating" POD? I heard about some issues with
PODs bound to PV/PVCs provided by dynamic storage, where you have to
first remove the volume from the POD, then the PV/PVC. Only after that remove
the POD or the DeploymentConfig.
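
For that case, a minimal sketch of the removal order (resource, claim and
volume names are placeholders):

oc set volume dc/<app> --remove --name=<volume-name>
oc delete pvc <claim-name>
oc delete pod <pod-name>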

If that's not the case, restarting the atomic-openshift-master-*
services may clear the inconsistent POD.


Regards,


Rodrigo Bersa

Cloud Consultant, RHCVA, RHCE

Red Hat Brasil 

rbe...@redhat.com    M: +55-11-99557-5841

TRIED. TESTED. TRUSTED. 
Red Hat is recognized among the best companies to work for in Brazil
by *Great Place to Work*.

On Thu, Mar 15, 2018 at 7:28 PM, Bahhoo  wrote:

> Hi Rahul,
>
> That won't do it either.
>
> Thanks
> Bahho
> --
> From: Rahul Agarwal
> Sent: 15.3.2018 22:26
> To: bahhooo
> Cc: users
> Subject: Re: Pods stuck on Terminating status
>
> Hi Bahho
>
> Try: oc delete all -l app=
>
> Thanks,
> Rahul
>
> On Thu, Mar 15, 2018 at 5:19 PM, bahhooo  wrote:
>
>> Hi all,
>>
>> I have some zombie pods stuck in Terminating status on an OCP 3.7
>> HA cluster.
>>
>> oc delete with --grace-period=0 --force etc. won't work.
>> Neither a Docker restart nor a server reboot helps.
>>
>> I also tried to find the pod key in etcd in order to delete it
>> manually, but I couldn't find it.
>>
>> Is there a way to delete these pods?
>>
>>
>>
>>
>> Bahho
>>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users