Re: openshift memory requirements

2017-05-23 Thread Louis Santillan
If the machine is an i7, it likely has only 4 cores/threads total, while 3 VMs x 2
cores = 6 cores required.  Also, instead of having 3 VMs with at least one
IO controller and a NIC each, you have 3 VMs sharing 1 IO controller and 1
NIC.  Not as fun as it sounds.

My lab machine is an HP Z820 with 32 cores and 128GB of RAM.  When I spin up
a SmartStart style 3x3x3 cluster + 1 bastion host + 1 NFS host (3 masters,
3 infra, 3 app nodes + 2 support hosts), my IO controller can't keep up.  I
might as well be running on some 4 year old cellphone.  Which reminds me, I
need to move those VMs over to the storage attached to my RAID controller.

---

LOUIS P. SANTILLAN

SENIOR CONSULTANT, OPENSHIFT, MIDDLEWARE & DEVOPS

Red Hat Consulting, NA US WEST 

lpsan...@gmail.com | M: 3236334854

TRIED. TESTED. TRUSTED. 

On Tue, May 23, 2017 at 11:13 AM, Hetz Ben Hamo  wrote:

> Well, I installed (through the Ansible install) Origin 1.5 on 3 VMs,
> each with 4 GB RAM; the nodes had 2 cores. I also enabled the metrics.
>
> The entire system barely responded (this is an i7 machine with 16GB RAM).
> That's why I asked if there are any changes I need to make through
> Ansible to get the system to work well with that amount of RAM.
>
> Thanks,
> *Hetz Ben Hamo*
> You're welcome to visit the consulting blog or my personal blog
> 
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: openshift memory requirements

2017-05-23 Thread Clayton Coleman
At that scale, OpenShift probably requires 300-500 MB on the master and
100-200 MB on the nodes.
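As a rough sanity check, those figures can be folded into a back-of-the-envelope budget for the minimal layout discussed in this thread (1 master + 2 nodes with 4 GB each; the overheads below take the upper ends of the estimate above, and the resulting headroom figures are illustrative only):

```shell
# Platform overhead vs. per-VM RAM for the proposed minimal cluster.
# 4096 MB per VM; 500 MB master overhead, 200 MB per-node overhead
# (upper ends of the estimate above).
vm_ram_mb=4096
master_overhead_mb=500
node_overhead_mb=200

echo "master headroom:   $((vm_ram_mb - master_overhead_mb)) MB"
echo "per-node headroom: $((vm_ram_mb - node_overhead_mb)) MB"
# -> master headroom:   3596 MB
# -> per-node headroom: 3896 MB
```

In practice the OS, Docker, and (as noted elsewhere in the thread) metrics all eat into that headroom, so 4 GB hosts stay tight.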

On Tue, May 23, 2017 at 2:02 PM, Hetz Ben Hamo  wrote:

> I thought about starting with 2 nodes with 4GB (and 2 cores) each and adding
> nodes as needed.
> Number of pods: I thought about 6 (2 WordPress, 2 MySQL, 2 memcached)
>
> Thanks,
> *Hetz Ben Hamo*
> You're welcome to visit the consulting blog or my personal blog
> 
>


Re: openshift memory requirements

2017-05-23 Thread Clayton Coleman
How many nodes and pods are you planning to run?



openshift memory requirements

2017-05-23 Thread Hetz Ben Hamo
Hi,

I've read the docs about OpenShift memory requirements and I wanted to ask
something.

I'm planning to build a system which will host a website (WordPress-based,
for example) which will auto-scale based on the number of visitors.

According to the docs in the link (
https://docs.openshift.com/container-platform/3.5/install_config/install/prerequisites.html)
it requires 16GB RAM (it used to be 8GB in 3.0). The docs don't mention the
amount of RAM needed for the infra nodes (I assume another 8GB?)

My question: is there any way to build a system with much less RAM?
Something like 4GB for the master and 4GB per node (minimum 2 nodes)? If so, what
configuration should I add to my Ansible host file?

Thanks
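For what it's worth, later openshift-ansible releases added an `openshift_disable_check` inventory variable that relaxes the installer's preflight checks. A sketch of how an inventory might use it (the variable and check names come from newer installer versions and may not exist for 1.5/3.5, so verify against your openshift-ansible release; skipping the check also does not make under-spec hosts perform any better):

```ini
# Hypothetical [OSEv3:vars] fragment: skip memory/disk preflight checks so
# the install can proceed on hosts below the documented minimums. Check
# names are assumptions from later openshift-ansible releases.
[OSEv3:vars]
openshift_disable_check=memory_availability,disk_availability
```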


RE: users Digest, Vol 58, Issue 33

2017-05-23 Thread Walters, Todd


-Original Message-
From: users-boun...@lists.openshift.redhat.com 
[mailto:users-boun...@lists.openshift.redhat.com] On Behalf Of 
users-requ...@lists.openshift.redhat.com
Sent: Tuesday, May 23, 2017 11:00 AM
To: users@lists.openshift.redhat.com
Subject: users Digest, Vol 58, Issue 33


Today's Topics:

   1. Re: Pods has connectivity to other pod and service only when
  I run an additional pod (Philippe Lafoucrière)



--

Message: 1
Date: Tue, 23 May 2017 10:00:19 -0400
From: Philippe Lafoucrière 
To: Stéphane Klein 
Cc: users 
Subject: Re: Pods has connectivity to other pod and service only when
I run an additional pod
Message-ID:

Content-Type: text/plain; charset="utf-8"

Do you know if it's possible to run 1.4 nodes with 1.5 masters?
We need to start rolling back, we have too many issues with our clients :(

Thanks

Hi Philippe,

Per the docs, yes, you can, across one minor version within a major version:

"Unless noted otherwise, node and masters within a major version are forward 
and backward compatible across one minor version, so upgrading your cluster 
should go smoothly. However, you should not run mismatched versions longer than 
necessary to upgrade the entire cluster."

https://docs.openshift.com/container-platform/3.5/install_config/upgrading/index.html

Todd







Re: Pods has connectivity to other pod and service only when I run an additional pod

2017-05-23 Thread Andrew Lau
Yes, I believe you can. Otherwise you wouldn't be able to handle rolling
updates.

On Wed, 24 May 2017 at 00:00 Philippe Lafoucrière <
philippe.lafoucri...@tech-angels.com> wrote:

> Do you know if it's possible to run 1.4 nodes with 1.5 masters?
> We need to start rolling back, we have too many issues with our clients :(
>
> Thanks
>


Re: Pods has connectivity to other pod and service only when I run an additional pod

2017-05-23 Thread Philippe Lafoucrière
Do you know if it's possible to run 1.4 nodes with 1.5 masters?
We need to start rolling back, we have too many issues with our clients :(

Thanks


Re: cannot push image

2017-05-23 Thread Hetz Ben Hamo
You mean labeling them both as infra and as primary?

Thanks,
*Hetz Ben Hamo*
You're welcome to visit the consulting blog or my personal blog


On Tue, May 23, 2017 at 4:56 PM, Rodrigo Bersa  wrote:

> Hi Hetz,
>
> You need to do one of the 2 options:
>
> 1. Enable scheduling on your master node.
> 2. Label the node1 and node2 with region=infra.
>
> I would choose the second option and remove the label from the master node.
Re: Pods has connectivity to other pod and service only when I run an additional pod

2017-05-23 Thread Stéphane Klein
2017-05-23 15:32 GMT+02:00 Andrew Lau :

> Philippe, I'm curious if you are running containerized?
>
>
Yes, containerized.


Re: cannot push image

2017-05-23 Thread Hetz Ben Hamo
Like this?

[masters]
master-home.hetzlabs.pro openshift_public_hostname="master-home.hetzlabs.pro"

[nodes]
# master needs to be included in the node to be configured in the SDN
master-home.hetzlabs.pro openshift_schedulable=true
master-home.hetzlabs.pro openshift_node_labels="{'region': 'infra', 'zone':
'default'}"
node1-home.hetzlabs.pro openshift_node_labels="{'region': 'primary',
'zone': 'default'}"
node2-home.hetzlabs.pro openshift_node_labels="{'region': 'primary',
'zone': 'default'}"


Thanks,
*Hetz Ben Hamo*
You're welcome to visit the consulting blog or my personal blog


On Tue, May 23, 2017 at 4:52 PM, Ben Parees  wrote:

>
>
> On Tue, May 23, 2017 at 9:49 AM, Hetz Ben Hamo  wrote:
>
>> That's true. I didn't want to have container apps on it.
>>
>
> Since you labeled it infra (based on your inventory), it won't. But it's
> also your only infrastructure-labeled node, and the registry has to run on
> an infrastructure node.
>
> So you either need to add another node labeled infra that's schedulable,
> or make your master schedulable.
Re: cannot push image

2017-05-23 Thread Rodrigo Bersa
Hi Hetz,

You need to do one of these two options:

1. Enable scheduling on your master node.
2. Label node1 and node2 with region=infra.

I would choose the second option and remove the label from the master node.
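Concretely, the two options map onto commands along these lines (a sketch only, using the node names from earlier in the thread and Origin 1.4-era `oadm`/`oc` syntax; these need a live cluster and admin credentials, so no output is shown):

```shell
# Option 1: make the master schedulable so the registry can land on it.
oadm manage-node master-home --schedulable=true

# Option 2: label the app nodes as infra so the registry can run there...
oc label node node1-home region=infra --overwrite
oc label node node2-home region=infra --overwrite
# ...and, per the suggestion above, drop the infra label from the master.
oc label node master-home region-
```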



Rodrigo Bersa

Cloud Consultant, RHCSA, RHCVA

Red Hat Brasil 

rbe...@redhat.com | M: +55 11 99557-5841
TRIED. TESTED. TRUSTED. 




On Tue, May 23, 2017 at 10:52 AM, Ben Parees  wrote:

>
>
> On Tue, May 23, 2017 at 9:49 AM, Hetz Ben Hamo  wrote:
>
>> That's true. I didn't want to have container apps on it.
>>
>
> Since you labeled it infra (based on your inventory), it won't. But it's
> also your only infrastructure-labeled node, and the registry has to run on
> an infrastructure node.
>
> So you either need to add another node labeled infra that's schedulable,
> or make your master schedulable.
>

Re: cannot push image

2017-05-23 Thread Ben Parees
On Tue, May 23, 2017 at 9:49 AM, Hetz Ben Hamo  wrote:

> That's true. I didn't want to have container apps on it.
>

Since you labeled it infra (based on your inventory), it won't. But it's
also your only infrastructure-labeled node, and the registry has to run on
an infrastructure node.

So you either need to add another node labeled infra that's schedulable,
or make your master schedulable.



>
> # oc get nodes
> NAME  STATUS AGE
> master-home   Ready,SchedulingDisabled   1h
> node1-homeReady  1h
> node2-homeReady  1h
>
>
> Thanks,
> *Hetz Ben Hamo*
> You're welcome to visit the consulting blog or my personal blog
> 
>

Re: cannot push image

2017-05-23 Thread Hetz Ben Hamo
That's true. I didn't want to have container apps on it.

# oc get nodes
NAME  STATUS AGE
master-home   Ready,SchedulingDisabled   1h
node1-homeReady  1h
node2-homeReady  1h


Thanks,
*Hetz Ben Hamo*
You're welcome to visit the consulting blog or my personal blog


On Tue, May 23, 2017 at 4:45 PM, Ben Parees  wrote:

> Sounds like maybe your master node is not schedulable. Can you run:
>
> $ oc get nodes
>
> $ oc describe node master
>
> ?
>
>
> On Tue, May 23, 2017 at 9:42 AM, Hetz Ben Hamo  wrote:
>
>> Sure, here it is:
>>
>> # oc describe pod docker-registry-2-deploy
>> Name:   docker-registry-2-deploy
>> Namespace:  default
>> Security Policy:restricted
>> Node:   /
>> Labels: openshift.io/deployer-pod-for.
>> name=docker-registry-2
>> Status: Pending
>> IP:
>> Controllers:
>> Containers:
>>   deployment:
>> Image:  openshift/origin-deployer:v1.4.1
>> Port:
>> Volume Mounts:
>>   /var/run/secrets/kubernetes.io/serviceaccount from
>> deployer-token-sbvm4 (ro)
>> Environment Variables:
>>   KUBERNETES_MASTER:https://master-home:8443
>>   OPENSHIFT_MASTER: https://master-home:8443
>>   BEARER_TOKEN_FILE:/var/run/secrets/kubernetes.i
>> o/serviceaccount/token
>>   OPENSHIFT_CA_DATA:-BEGIN CERTIFICATE-
>> MIIC6jCCAdKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtvcGVu
>> c2hpZnQtc2lnbmVyQDE0OTU1NDE2MTEwHhcNMTcwNTIzMTIxMzMwWhcNMjIwNTIy
>> MTIxMzMxWjAmMSQwIgYDVQQDDBtvcGVuc2hpZnQtc2lnbmVyQDE0OTU1NDE2MTEw
>> ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC9imtjAe8JBjpD99nt3D4h
>> VCwlWKCMugpIGWYdnHaICBS71KuIim8pWaOWYPUb73QhoUUZhZ80MYOzlB7lk/xK
>> NWUnQBDFYc9zKqXkxjiWlTXHv1UCyB56mxFdfxPTHN61JbE8dD9jbiBLRudgb1cq
>> Vhff4CRXqkdDURk8KjpnGkWW57Ky0Icp0rbOrRT/OhYv5CB8sqJedSC2VKfe9qtz
>> +L4ykOOa4Q1qfqD7YqPDAqnUEJFXEbqjFCdLe6q2TS0vscx/rRJcANmzApgw4BRd
>> OxEHH1KX6ariXSNkSWxhQIBa8qDukDrGc2dvAoLHi8ALBbnpGLE0zwtf087zdyF/
>> AgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqG
>> SIb3DQEBCwUAA4IBAQA7Nn3iGUVH0HJN6WxR6oirpIv9VdqRgugqOoBM8O5GlV7D
>> 7kd4VGFSzFtXKr0kHCgA+/6sEiu0ZlQZT7IvwDWgiY/bXOY/gT8whMWVLXbXBGGT
>> 4brdqSRQVdgjv56kBG/cqIWedwNzItGFb+eye+AjHi20fUuVKW49Z7lvStHcvOHK
>> c4XyP+e3S/wg6VEMT64kAuZUvvRLhUvJK9ZfxlEGZnjQ+qrYCEpGGDjeDTeXOxMi
>> 6NL7Rh09p/yjemw3u+EZkfNlBMgBsA2+zEOxKbAGmENjjctFGRTJVGKq+FWR2HMi
>> P2pHCOPEcn2on3GAyTncdp1ANcBNTjb8gTnsoPbc
>> -END CERTIFICATE-
>>
>>   OPENSHIFT_DEPLOYMENT_NAME:docker-registry-2
>>   OPENSHIFT_DEPLOYMENT_NAMESPACE:   default
>> Conditions:
>>   Type  Status
>>   PodScheduled  False
>> Volumes:
>>   deployer-token-sbvm4:
>> Type:   Secret (a volume populated by a Secret)
>> SecretName: deployer-token-sbvm4
>> QoS Class:  BestEffort
>> Tolerations:
>> Events:
>>   FirstSeen LastSeenCount   From
>>  SubobjectPath   TypeReason  Message
>>   - -   
>>  -   --  ---
>>   11m   7m  4   {default-scheduler }
>>Warning FailedSchedulingpod
>> (docker-registry-2-deploy) failed to fit in any node
>> fit failure on node (node2-home): CheckServiceAffinity, MatchNodeSelector
>> fit failure on node (node1-home): MatchNodeSelector, CheckServiceAffinity
>>
>>   4m3m  2   {default-scheduler }Warning
>> FailedSchedulingpod (docker-registry-2-deploy) failed to fit in any
>> node
>> fit failure on node (node2-home): CheckServiceAffinity, MatchNodeSelector
>> fit failure on node (node1-home): CheckServiceAffinity, MatchNodeSelector
>>
>>   11m   2m  3   {default-scheduler }Warning
>> FailedSchedulingpod (docker-registry-2-deploy) failed to fit in any
>> node
>> fit failure on node (node2-home): MatchNodeSelector, CheckServiceAffinity
>> fit failure on node (node1-home): CheckServiceAffinity, MatchNodeSelector
>>
>>   13m   2m  13  {default-scheduler }Warning
>> FailedSchedulingpod (docker-registry-2-deploy) failed to fit in any
>> node
>> fit failure on node (node1-home): MatchNodeSelector, CheckServiceAffinity
>> fit failure on node (node2-home): MatchNodeSelector, CheckServiceAffinity
>>
>>   11m   1m  7   {default-scheduler }Warning
>> FailedSchedulingpod (docker-registry-2-deploy) failed to fit in any
>> node
>> fit failure on node (node1-home): CheckServiceAffinity, MatchNodeSelector
>> fit failure on node (node2-home): MatchNodeSelector, CheckServiceAffinity
>>
>>   13m   1m  4   {default-scheduler }Warning
>> FailedSchedulingpod (docker-registry-2-deploy) failed to fit in any
>> node
>> fit failure on node 

Re: cannot push image

2017-05-23 Thread Ben Parees
Sounds like maybe your master node is not schedulable. Can you run:

$ oc get nodes

$ oc describe node master

?


On Tue, May 23, 2017 at 9:42 AM, Hetz Ben Hamo  wrote:

> Sure, here it is:
>
> # oc describe pod docker-registry-2-deploy
> Name:   docker-registry-2-deploy
> Namespace:  default
> Security Policy:restricted
> Node:   /
> Labels: openshift.io/deployer-pod-for.
> name=docker-registry-2
> Status: Pending
> IP:
> Controllers:
> Containers:
>   deployment:
> Image:  openshift/origin-deployer:v1.4.1
> Port:
> Volume Mounts:
>   /var/run/secrets/kubernetes.io/serviceaccount from
> deployer-token-sbvm4 (ro)
> Environment Variables:
>   KUBERNETES_MASTER:https://master-home:8443
>   OPENSHIFT_MASTER: https://master-home:8443
>   BEARER_TOKEN_FILE:/var/run/secrets/kubernetes.
> io/serviceaccount/token
>   OPENSHIFT_CA_DATA:-BEGIN CERTIFICATE-
> MIIC6jCCAdKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtvcGVu
> c2hpZnQtc2lnbmVyQDE0OTU1NDE2MTEwHhcNMTcwNTIzMTIxMzMwWhcNMjIwNTIy
> MTIxMzMxWjAmMSQwIgYDVQQDDBtvcGVuc2hpZnQtc2lnbmVyQDE0OTU1NDE2MTEw
> ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC9imtjAe8JBjpD99nt3D4h
> VCwlWKCMugpIGWYdnHaICBS71KuIim8pWaOWYPUb73QhoUUZhZ80MYOzlB7lk/xK
> NWUnQBDFYc9zKqXkxjiWlTXHv1UCyB56mxFdfxPTHN61JbE8dD9jbiBLRudgb1cq
> Vhff4CRXqkdDURk8KjpnGkWW57Ky0Icp0rbOrRT/OhYv5CB8sqJedSC2VKfe9qtz
> +L4ykOOa4Q1qfqD7YqPDAqnUEJFXEbqjFCdLe6q2TS0vscx/rRJcANmzApgw4BRd
> OxEHH1KX6ariXSNkSWxhQIBa8qDukDrGc2dvAoLHi8ALBbnpGLE0zwtf087zdyF/
> AgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqG
> SIb3DQEBCwUAA4IBAQA7Nn3iGUVH0HJN6WxR6oirpIv9VdqRgugqOoBM8O5GlV7D
> 7kd4VGFSzFtXKr0kHCgA+/6sEiu0ZlQZT7IvwDWgiY/bXOY/gT8whMWVLXbXBGGT
> 4brdqSRQVdgjv56kBG/cqIWedwNzItGFb+eye+AjHi20fUuVKW49Z7lvStHcvOHK
> c4XyP+e3S/wg6VEMT64kAuZUvvRLhUvJK9ZfxlEGZnjQ+qrYCEpGGDjeDTeXOxMi
> 6NL7Rh09p/yjemw3u+EZkfNlBMgBsA2+zEOxKbAGmENjjctFGRTJVGKq+FWR2HMi
> P2pHCOPEcn2on3GAyTncdp1ANcBNTjb8gTnsoPbc
> -END CERTIFICATE-
>
>   OPENSHIFT_DEPLOYMENT_NAME:docker-registry-2
>   OPENSHIFT_DEPLOYMENT_NAMESPACE:   default
> Conditions:
>   Type  Status
>   PodScheduled  False
> Volumes:
>   deployer-token-sbvm4:
> Type:   Secret (a volume populated by a Secret)
> SecretName: deployer-token-sbvm4
> QoS Class:  BestEffort
> Tolerations:
> Events:
>   FirstSeen LastSeenCount   From
>  SubobjectPath   TypeReason  Message
>   - -   
>  -   --  ---
>   11m   7m  4   {default-scheduler }
>  Warning FailedSchedulingpod (docker-registry-2-deploy)
> failed to fit in any node
> fit failure on node (node2-home): CheckServiceAffinity, MatchNodeSelector
> fit failure on node (node1-home): MatchNodeSelector, CheckServiceAffinity
>
>   4m3m  2   {default-scheduler }Warning
> FailedSchedulingpod (docker-registry-2-deploy) failed to fit in any
> node
> fit failure on node (node2-home): CheckServiceAffinity, MatchNodeSelector
> fit failure on node (node1-home): CheckServiceAffinity, MatchNodeSelector
>
>   11m   2m  3   {default-scheduler }Warning
> FailedSchedulingpod (docker-registry-2-deploy) failed to fit in any
> node
> fit failure on node (node2-home): MatchNodeSelector, CheckServiceAffinity
> fit failure on node (node1-home): CheckServiceAffinity, MatchNodeSelector
>
>   13m   2m  13  {default-scheduler }Warning
> FailedSchedulingpod (docker-registry-2-deploy) failed to fit in any
> node
> fit failure on node (node1-home): MatchNodeSelector, CheckServiceAffinity
> fit failure on node (node2-home): MatchNodeSelector, CheckServiceAffinity
>
>   11m   1m  7   {default-scheduler }Warning
> FailedSchedulingpod (docker-registry-2-deploy) failed to fit in any
> node
> fit failure on node (node1-home): CheckServiceAffinity, MatchNodeSelector
> fit failure on node (node2-home): MatchNodeSelector, CheckServiceAffinity
>
>   13m   1m  4   {default-scheduler }Warning
> FailedSchedulingpod (docker-registry-2-deploy) failed to fit in any
> node
> fit failure on node (node2-home): MatchNodeSelector, CheckServiceAffinity
> fit failure on node (node1-home): MatchNodeSelector, CheckServiceAffinity
>
>   13m   37s 10  {default-scheduler }Warning
> FailedSchedulingpod (docker-registry-2-deploy) failed to fit in any
> node
> fit failure on node (node1-home): MatchNodeSelector, CheckServiceAffinity
> fit failure on node (node2-home): CheckServiceAffinity, MatchNodeSelector
>
>   13m   5s  7   {default-scheduler }Warning
> FailedSchedulingpod 

Re: cannot push image

2017-05-23 Thread Hetz Ben Hamo
Sure, here it is:

# oc describe pod docker-registry-2-deploy
Name:   docker-registry-2-deploy
Namespace:  default
Security Policy:restricted
Node:   /
Labels: openshift.io/deployer-pod-for.name=docker-registry-2
Status: Pending
IP:
Controllers:
Containers:
  deployment:
Image:  openshift/origin-deployer:v1.4.1
Port:
Volume Mounts:
  /var/run/secrets/kubernetes.io/serviceaccount from
deployer-token-sbvm4 (ro)
Environment Variables:
  KUBERNETES_MASTER:https://master-home:8443
  OPENSHIFT_MASTER: https://master-home:8443
  BEARER_TOKEN_FILE:/var/run/secrets/
kubernetes.io/serviceaccount/token
  OPENSHIFT_CA_DATA:-BEGIN CERTIFICATE-
MIIC6jCCAdKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtvcGVu
c2hpZnQtc2lnbmVyQDE0OTU1NDE2MTEwHhcNMTcwNTIzMTIxMzMwWhcNMjIwNTIy
MTIxMzMxWjAmMSQwIgYDVQQDDBtvcGVuc2hpZnQtc2lnbmVyQDE0OTU1NDE2MTEw
ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC9imtjAe8JBjpD99nt3D4h
VCwlWKCMugpIGWYdnHaICBS71KuIim8pWaOWYPUb73QhoUUZhZ80MYOzlB7lk/xK
NWUnQBDFYc9zKqXkxjiWlTXHv1UCyB56mxFdfxPTHN61JbE8dD9jbiBLRudgb1cq
Vhff4CRXqkdDURk8KjpnGkWW57Ky0Icp0rbOrRT/OhYv5CB8sqJedSC2VKfe9qtz
+L4ykOOa4Q1qfqD7YqPDAqnUEJFXEbqjFCdLe6q2TS0vscx/rRJcANmzApgw4BRd
OxEHH1KX6ariXSNkSWxhQIBa8qDukDrGc2dvAoLHi8ALBbnpGLE0zwtf087zdyF/
AgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqG
SIb3DQEBCwUAA4IBAQA7Nn3iGUVH0HJN6WxR6oirpIv9VdqRgugqOoBM8O5GlV7D
7kd4VGFSzFtXKr0kHCgA+/6sEiu0ZlQZT7IvwDWgiY/bXOY/gT8whMWVLXbXBGGT
4brdqSRQVdgjv56kBG/cqIWedwNzItGFb+eye+AjHi20fUuVKW49Z7lvStHcvOHK
c4XyP+e3S/wg6VEMT64kAuZUvvRLhUvJK9ZfxlEGZnjQ+qrYCEpGGDjeDTeXOxMi
6NL7Rh09p/yjemw3u+EZkfNlBMgBsA2+zEOxKbAGmENjjctFGRTJVGKq+FWR2HMi
P2pHCOPEcn2on3GAyTncdp1ANcBNTjb8gTnsoPbc
-END CERTIFICATE-

  OPENSHIFT_DEPLOYMENT_NAME:docker-registry-2
  OPENSHIFT_DEPLOYMENT_NAMESPACE:   default
Conditions:
  Type  Status
  PodScheduled  False
Volumes:
  deployer-token-sbvm4:
Type:   Secret (a volume populated by a Secret)
SecretName: deployer-token-sbvm4
QoS Class:  BestEffort
Tolerations:
Events:
  FirstSeen LastSeenCount   From
 SubobjectPath   TypeReason  Message
  - -   
 -   --  ---
  11m   7m  4   {default-scheduler }
 Warning FailedSchedulingpod (docker-registry-2-deploy)
failed to fit in any node
fit failure on node (node2-home): CheckServiceAffinity, MatchNodeSelector
fit failure on node (node1-home): MatchNodeSelector, CheckServiceAffinity

  4m3m  2   {default-scheduler }Warning
FailedSchedulingpod (docker-registry-2-deploy) failed to fit in any
node
fit failure on node (node2-home): CheckServiceAffinity, MatchNodeSelector
fit failure on node (node1-home): CheckServiceAffinity, MatchNodeSelector

  11m   2m  3   {default-scheduler }Warning
FailedSchedulingpod (docker-registry-2-deploy) failed to fit in any
node
fit failure on node (node2-home): MatchNodeSelector, CheckServiceAffinity
fit failure on node (node1-home): CheckServiceAffinity, MatchNodeSelector

  13m   2m  13  {default-scheduler }Warning
FailedSchedulingpod (docker-registry-2-deploy) failed to fit in any
node
fit failure on node (node1-home): MatchNodeSelector, CheckServiceAffinity
fit failure on node (node2-home): MatchNodeSelector, CheckServiceAffinity

  11m   1m  7   {default-scheduler }Warning
FailedSchedulingpod (docker-registry-2-deploy) failed to fit in any
node
fit failure on node (node1-home): CheckServiceAffinity, MatchNodeSelector
fit failure on node (node2-home): MatchNodeSelector, CheckServiceAffinity

  13m   1m  4   {default-scheduler }Warning
FailedSchedulingpod (docker-registry-2-deploy) failed to fit in any
node
fit failure on node (node2-home): MatchNodeSelector, CheckServiceAffinity
fit failure on node (node1-home): MatchNodeSelector, CheckServiceAffinity

  13m   37s 10  {default-scheduler }Warning
FailedSchedulingpod (docker-registry-2-deploy) failed to fit in any
node
fit failure on node (node1-home): MatchNodeSelector, CheckServiceAffinity
fit failure on node (node2-home): CheckServiceAffinity, MatchNodeSelector

  13m   5s  7   {default-scheduler }Warning
FailedSchedulingpod (docker-registry-2-deploy) failed to fit in any
node
fit failure on node (node1-home): CheckServiceAffinity, MatchNodeSelector
fit failure on node (node2-home): CheckServiceAffinity, MatchNodeSelector


# oc describe pod router-2-deploy
Name:   router-2-deploy
Namespace:  default
Security Policy:restricted
Node:   /
Labels: 
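The Events sections above repeat the same FailedScheduling message many times with the predicates in varying order. A small helper (a sketch, not part of OpenShift) can collapse `oc describe pod` output into per-node failure counts, which makes long event dumps like these easier to read:

```python
import re
from collections import Counter


def summarize_fit_failures(describe_output: str) -> Counter:
    """Count scheduler fit failures per (node, predicate-set) pair from
    the Events section of `oc describe pod` output."""
    pattern = re.compile(r"fit failure on node \((?P<node>[^)]+)\): (?P<preds>.+)")
    counts = Counter()
    for match in pattern.finditer(describe_output):
        # Sort predicates so differently ordered lines collapse together.
        preds = tuple(sorted(p.strip() for p in match.group("preds").split(",")))
        counts[(match.group("node"), preds)] += 1
    return counts


sample = """\
fit failure on node (node1-home): CheckServiceAffinity, MatchNodeSelector
fit failure on node (node2-home): CheckServiceAffinity, MatchNodeSelector
fit failure on node (node1-home): MatchNodeSelector, CheckServiceAffinity
"""

for (node, preds), n in summarize_fit_failures(sample).items():
    print(f"{node}: {n}x {', '.join(preds)}")
```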

Re: cannot push image

2017-05-23 Thread Rodrigo Bersa
Hi Hetz,

It seems that your Registry and Router pods are not running. Probably
there's a problem preventing them from deploying.

Can you send the output of the commands below?

# oc describe pod docker-registry-1-deploy
# oc describe pod router-1-deploy



Rodrigo Bersa

Cloud Consultant, RHCSA, RHCVA

Red Hat Brasil 

rbe...@redhat.com  M: +55 11 99557-5841
TRIED. TESTED. TRUSTED.




On Tue, May 23, 2017 at 8:28 AM, Hetz Ben Hamo  wrote:

> ]# oc get pods -n default
> NAMEREADY STATUSRESTARTS   AGE
> docker-registry-1-deploy0/1   Pending   0  16m
> registry-console-1-deploy   0/1   Error 0  15m
> router-1-deploy 0/1   Pending   0  17m
> [root@master-home ~]# oc logs registry-console-1-deploy
> --> Scaling registry-console-1 to 1
> --> Waiting up to 10m0s for pods in rc registry-console-1 to become ready
> error: update acceptor rejected registry-console-1: pods for rc
> "registry-console-1" took longer than 600 seconds to become ready
> [root@master-home ~]# oc logs router-1-deploy
> [root@master-home ~]# oc logs docker-registry-1-deploy
> [root@master-home ~]# oc logs docker-registry-1-deploy -n default
> [root@master-home ~]# oc get pods
>
>
> Thanks,
> *Hetz Ben Hamo*
> You are welcome to visit my consulting blog or my personal blog
>
> On Tue, May 23, 2017 at 1:49 AM, Ben Parees  wrote:
>
>>
>>
>> On Mon, May 22, 2017 at 6:18 PM, Hetz Ben Hamo  wrote:
>>
>>> Hi,
>>>
>>> I've built a 3-node OpenShift Origin cluster using the host file
>>> included below, but a few things seem to be broken. I haven't modified
>>> anything in OpenShift yet; I just used openshift-ansible checked out
>>> today.
>>>
>>> Problem one: after building an image from the examples (I chose the
>>> Java example with WildFly) I get:
>>>
>>> [INFO] 
>>> 
>>> [INFO] BUILD SUCCESS
>>> [INFO] 
>>> 
>>> [INFO] Total time: 12.182 s
>>> [INFO] Finished at: 2017-05-22T22:08:21+00:00
>>> [INFO] Final Memory: 14M/134M
>>> [INFO] 
>>> 
>>> Moving built war files into /wildfly/standalone/deployments for later
>>> deployment...
>>> Moving all war artifacts from /opt/app-root/src/target directory into
>>> /wildfly/standalone/deployments for later deployment...
>>> '/opt/app-root/src/target/ROOT.war' -> '/wildfly/standalone/deploymen
>>> ts/ROOT.war'
>>> Moving all ear artifacts from /opt/app-root/src/target directory into
>>> /wildfly/standalone/deployments for later deployment...
>>> Moving all rar artifacts from /opt/app-root/src/target directory into
>>> /wildfly/standalone/deployments for later deployment...
>>> Moving all jar artifacts from /opt/app-root/src/target directory into
>>> /wildfly/standalone/deployments for later deployment...
>>> ...done
>>> Pushing image 172.30.172.85:5000/test1/wf:latest ...
>>> Warning: Push failed, retrying in 5s ...
>>> Warning: Push failed, retrying in 5s ...
>>> Warning: Push failed, retrying in 5s ...
>>> Warning: Push failed, retrying in 5s ...
>>> Warning: Push failed, retrying in 5s ...
>>> Warning: Push failed, retrying in 5s ...
>>> Warning: Push failed, retrying in 5s ...
>>> Registry server Address:
>>> Registry server User Name: serviceaccount
>>> Registry server Email: serviceacco...@example.org
>>> Registry server Password: <>
>>> error: build error: Failed to push image: Get
>>> https://172.30.172.85:5000/v1/_ping: dial tcp 172.30.172.85:5000:
>>> getsockopt: connection refused
>>>
>>>
>> can you confirm your registry pod is running in the default namespace (oc
>> get pods -n default)?  Can you get logs from it?
>>
>>
>>
>>>
>>> Another problem: I added the metrics option, so it installed Hawkular,
>>> but it complains that it needs SSL certificate approval (it shows a
>>> message about a problem with Hawkular and gives a link to open it);
>>> when I click the link I get: connection refused.
>>>
>>> I've tested the host configuration on two sets of VMs (one on
>>> DigitalOcean, another set here at home with VMware). I've set up DNS
>>> with a subdomain wildcard and I can ping the app names, but trying to
>>> connect through a browser or curl gives connection refused.
>>>
>>> Have I missed something?
>>>
>>> here is my byo host file:
>>>
>>> [OSEv3:children]
>>> masters
>>> nodes
>>>
>>> [OSEv3:vars]
>>> ansible_ssh_user=root
>>>
>>> deployment_type=origin
>>> openshift_release=v1.5.0
>>> containerized=true
>>> openshift_install_examples=true
>>> openshift_hosted_metrics_deploy=true
>>>
>>> # use htpasswd authentication with demo/demo
>>> 
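The build's push loop quoted above retries every 5s and finally fails with "connection refused" against 172.30.172.85:5000, i.e. nothing is answering on the registry service endpoint, which is consistent with the registry pod sitting in Pending. Outside of a build, the same symptom can be checked with a plain TCP probe (a sketch; the host/port are the values from the log):

```python
import socket


def registry_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the registry endpoint succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # connection refused, timeout, no route, ...
        return False


# With no registry pod behind the service, the probe fails just like the push:
print(registry_reachable("172.30.172.85", 5000))
```

Once the registry pod is actually Running, the probe should return True from any node; until then the build's retries can never succeed.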

Re: cannot push image

2017-05-23 Thread Hetz Ben Hamo
]# oc get pods -n default
NAMEREADY STATUSRESTARTS   AGE
docker-registry-1-deploy0/1   Pending   0  16m
registry-console-1-deploy   0/1   Error 0  15m
router-1-deploy 0/1   Pending   0  17m
[root@master-home ~]# oc logs registry-console-1-deploy
--> Scaling registry-console-1 to 1
--> Waiting up to 10m0s for pods in rc registry-console-1 to become ready
error: update acceptor rejected registry-console-1: pods for rc
"registry-console-1" took longer than 600 seconds to become ready
[root@master-home ~]# oc logs router-1-deploy
[root@master-home ~]# oc logs docker-registry-1-deploy
[root@master-home ~]# oc logs docker-registry-1-deploy -n default
[root@master-home ~]# oc get pods


Thanks,
*Hetz Ben Hamo*
You are welcome to visit my consulting blog or my personal blog


On Tue, May 23, 2017 at 1:49 AM, Ben Parees  wrote:

>
>
> On Mon, May 22, 2017 at 6:18 PM, Hetz Ben Hamo  wrote:
>
>> Hi,
>>
>> I've built a 3-node OpenShift Origin cluster using the host file included
>> below, but a few things seem to be broken. I haven't modified anything
>> in OpenShift yet; I just used openshift-ansible checked out today.
>>
>> Problem one: after building an image from the examples (I chose the Java
>> example with WildFly) I get:
>>
>> [INFO] 
>> 
>> [INFO] BUILD SUCCESS
>> [INFO] 
>> 
>> [INFO] Total time: 12.182 s
>> [INFO] Finished at: 2017-05-22T22:08:21+00:00
>> [INFO] Final Memory: 14M/134M
>> [INFO] 
>> 
>> Moving built war files into /wildfly/standalone/deployments for later
>> deployment...
>> Moving all war artifacts from /opt/app-root/src/target directory into
>> /wildfly/standalone/deployments for later deployment...
>> '/opt/app-root/src/target/ROOT.war' -> '/wildfly/standalone/deploymen
>> ts/ROOT.war'
>> Moving all ear artifacts from /opt/app-root/src/target directory into
>> /wildfly/standalone/deployments for later deployment...
>> Moving all rar artifacts from /opt/app-root/src/target directory into
>> /wildfly/standalone/deployments for later deployment...
>> Moving all jar artifacts from /opt/app-root/src/target directory into
>> /wildfly/standalone/deployments for later deployment...
>> ...done
>> Pushing image 172.30.172.85:5000/test1/wf:latest ...
>> Warning: Push failed, retrying in 5s ...
>> Warning: Push failed, retrying in 5s ...
>> Warning: Push failed, retrying in 5s ...
>> Warning: Push failed, retrying in 5s ...
>> Warning: Push failed, retrying in 5s ...
>> Warning: Push failed, retrying in 5s ...
>> Warning: Push failed, retrying in 5s ...
>> Registry server Address:
>> Registry server User Name: serviceaccount
>> Registry server Email: serviceacco...@example.org
>> Registry server Password: <>
>> error: build error: Failed to push image: Get
>> https://172.30.172.85:5000/v1/_ping: dial tcp 172.30.172.85:5000:
>> getsockopt: connection refused
>>
>>
> can you confirm your registry pod is running in the default namespace (oc
> get pods -n default)?  Can you get logs from it?
>
>
>
>>
>> Another problem: I added the metrics option, so it installed Hawkular,
>> but it complains that it needs SSL certificate approval (it shows a
>> message about a problem with Hawkular and gives a link to open it);
>> when I click the link I get: connection refused.
>>
>> I've tested the host configuration on two sets of VMs (one on
>> DigitalOcean, another set here at home with VMware). I've set up DNS
>> with a subdomain wildcard and I can ping the app names, but trying to
>> connect through a browser or curl gives connection refused.
>>
>> Have I missed something?
>>
>> here is my byo host file:
>>
>> [OSEv3:children]
>> masters
>> nodes
>>
>> [OSEv3:vars]
>> ansible_ssh_user=root
>>
>> deployment_type=origin
>> openshift_release=v1.5.0
>> containerized=true
>> openshift_install_examples=true
>> openshift_hosted_metrics_deploy=true
>>
>> # use htpasswd authentication with demo/demo
>> openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login':
>> 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider',
>> 'filename': '/etc/origin/master/htpasswd'}]
>> openshift_master_htpasswd_users={'demo': '$
>> .'}
>>
>> # put the router on dedicated infra node
>> openshift_hosted_router_selector='region=infra'
>> openshift_master_default_subdomain=apps.test.com
>>
>> # put the image registry on dedicated infra node
>> openshift_hosted_registry_selector='region=infra'
>>
>> # project pods should be placed on primary nodes
>> osm_default_node_selector='region=primary'
>>
>> [masters]
>> master-home.test.com openshift_public_hostname="master-home.test.com"
>>
>> [nodes]
>> # master