Re: timeout expired waiting for volumes to attach/mount for pod

2017-07-12 Thread Hemant Kumar
Do you have access to the logs of the atomic-openshift-node process where secrets
are failing to mount? If yes, can you post them in a bug or something? [1]

We may have clues in that. Is the API request that fetches the secret taking
time to respond, or is something else amiss? Also, api-server metrics can be
easily requested via curl. Something like "curl http://api-server-url/metrics".


[1] https://bugzilla.redhat.com/enter_bug.cgi?product=OpenShift%20Origin
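
(A rough sketch of that, assuming you are logged in with cluster-admin and
that the metric names match this release; <master-url> is a placeholder for
your API server address:)

$ TOKEN=$(oc whoami -t)
$ curl -ks -H "Authorization: Bearer $TOKEN" https://<master-url>/metrics \
    | grep apiserver_request_latencies | grep -i secret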



On Wed, Jul 12, 2017 at 3:06 PM, Philippe Lafoucrière <
philippe.lafoucri...@tech-angels.com> wrote:

> Could it be related to this?
> https://github.com/openshift/origin/issues/11016
> ​
> It definitely sounds like our issue; I just don't understand why we would
> hit this suddenly.
>
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: timeout expired waiting for volumes to attach/mount for pod

2017-07-12 Thread Philippe Lafoucrière
Could it be related to this?
https://github.com/openshift/origin/issues/11016
​
It definitely sounds like our issue; I just don't understand why we would hit
this suddenly.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


RE: OpenShift Origin Active Directory Authentication

2017-07-12 Thread Werner, Mark
Hi, I have just gotten past the issue with the master not starting or
restarting. It starts now. But I am trying to log in with an AD account and
receive "Authentication Error Occurred". I am not sure what the syntax should
be. I have tried domain\username, username@domain.local, and just username.



Mark Werner | Senior Systems Engineer | Cloud & Infrastructure Services

Unisys | Mobile Phone 586.214.9017 | mark.wer...@unisys.com 


11720 Plaza America Drive, Reston, VA 20190



 






From: Rodrigo Bersa [mailto:rbe...@redhat.com]
Sent: Wednesday, July 12, 2017 3:00 PM
To: Javier Palacios 
Cc: Werner, Mark ; users@lists.openshift.redhat.com
Subject: Re: OpenShift Origin Active Directory Authentication



Hi Mark,

I believe the syntax may not be right.

Could you try this?

oauthConfig:
  assetPublicURL: https://master.domain.local:8443/console/
  grantConfig:
    method: auto
  identityProviders:
  - challenge: true
    login: true
    mappingMethod: claim
    name: Active_Directory
    provider:
      apiVersion: v1
      kind: LDAPPasswordIdentityProvider
      attributes:
        id:
        - dn
        email:
        - mail
        name:
        - cn
        preferredUsername:
        - uid
      bindDN: "cn=openshift,cn=users,dc=domain,dc=local"
      bindPassword: "password"
      insecure: true
      url: ldap://dc.domain.local:389/cn=users,dc=domain,dc=local?uid
  masterPublicURL: https://master.domain.local:8443
  masterURL: https://master.domain.local:8443



Best regards,




Rodrigo Bersa

Cloud Consultant, RHCVA, RHCE

  Red Hat Brasil

  rbe...@redhat.com | M: +55 11 99557-5841


 

  TRIED. TESTED. TRUSTED.







On Wed, Jul 12, 2017 at 2:15 PM, Javier Palacios wrote:


> I did try sAMAccountName at first and was getting the same results. Then I
> had read that variable was for older Windows machines so I tried uid as that
> was the other example I saw.

The relevant part of my master-config.yaml is below, and apart from using
ldaps, I don't see any other difference. If the uid attribute is valid in your
schema, then yours seems OK.

Javier Palacios

  identityProviders:
  - challenge: true
    login: true
    mappingMethod: claim
    name: n4tdc1
    provider:
      apiVersion: v1
      attributes:
        email:
        - mail
        id:
        - dn
        name:
        - cn
        preferredUsername:
        - sAMAccountName
      bindDN: CN=openshift,OU=N4T-USERS,dc=net4things,dc=local
      bindPassword: 
      ca: ad-ldap-ca.crt
      insecure: false
      kind: LDAPPasswordIdentityProvider
      url: ldaps://n4tdc1.net4things.local/dc=net4things,dc=local?sAMAccountName









___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: OpenShift Origin Active Directory Authentication

2017-07-12 Thread Rodrigo Bersa
Hi Mark,

I believe the syntax may not be right.

Could you try this?

oauthConfig:
  assetPublicURL: https://master.domain.local:8443/console/
  grantConfig:
    method: auto
  identityProviders:
  - challenge: true
    login: true
    mappingMethod: claim
    name: Active_Directory
    provider:
      apiVersion: v1
      kind: LDAPPasswordIdentityProvider
      attributes:
        id:
        - dn
        email:
        - mail
        name:
        - cn
        preferredUsername:
        - uid
      bindDN: "cn=openshift,cn=users,dc=domain,dc=local"
      bindPassword: "password"
      insecure: true
      url: ldap://dc.domain.local:389/cn=users,dc=domain,dc=local?uid
  masterPublicURL: https://master.domain.local:8443
  masterURL: https://master.domain.local:8443


Best regards,

Rodrigo Bersa

Cloud Consultant, RHCVA, RHCE

Red Hat Brasil 

rbe...@redhat.com | M: +55 11 99557-5841
TRIED. TESTED. TRUSTED.




On Wed, Jul 12, 2017 at 2:15 PM, Javier Palacios 
wrote:

>
> > I did try sAMAccountName at first and was getting the same results. Then
> I
> > had read that variable was for older Windows machines so I tried uid as
> that
> > was the other example I saw.
>
> The relevant part of my master-config.yaml is below, and apart from using
> ldaps, I don't see any other difference. If the uid attribute is valid in
> your schema, then yours seems OK.
>
> Javier Palacios
>
>   identityProviders:
>   - challenge: true
> login: true
> mappingMethod: claim
> name: n4tdc1
> provider:
>   apiVersion: v1
>   attributes:
> email:
> - mail
> id:
> - dn
> name:
> - cn
> preferredUsername:
> - sAMAccountName
>   bindDN: CN=openshift,OU=N4T-USERS,dc=net4things,dc=local
>   bindPassword: 
>   ca: ad-ldap-ca.crt
>   insecure: false
>   kind: LDAPPasswordIdentityProvider
>   url: ldaps://n4tdc1.net4things.local/dc=net4things,dc=local?
> sAMAccountName
>
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


RE: OpenShift Origin Active Directory Authentication

2017-07-12 Thread Javier Palacios

> I did try sAMAccountName at first and was getting the same results. Then I
> had read that variable was for older Windows machines so I tried uid as that
> was the other example I saw.

The relevant part of my master-config.yaml is below, and apart from using
ldaps, I don't see any other difference. If the uid attribute is valid in your
schema, then yours seems OK.

Javier Palacios

  identityProviders:
  - challenge: true
    login: true
    mappingMethod: claim
    name: n4tdc1
    provider:
      apiVersion: v1
      attributes:
        email:
        - mail
        id:
        - dn
        name:
        - cn
        preferredUsername:
        - sAMAccountName
      bindDN: CN=openshift,OU=N4T-USERS,dc=net4things,dc=local
      bindPassword: 
      ca: ad-ldap-ca.crt
      insecure: false
      kind: LDAPPasswordIdentityProvider
      url: ldaps://n4tdc1.net4things.local/dc=net4things,dc=local?sAMAccountName



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


RE: OpenShift Origin Active Directory Authentication

2017-07-12 Thread Werner, Mark
I did try sAMAccountName at first and was getting the same results. Then I
read that that attribute was for older Windows machines, so I tried uid, as
that was the other example I saw.

One thing I didn't change was:
  preferredUsername:
- uid

Would I have to change this to:
  preferredUsername:
- sAMAccountName

And also use:
url: ldap://dc.domain.local:389/ou=users,dc=domain,dc=local?sAMAccountName



oauthConfig:
  assetPublicURL: https://master.domain.local:8443/console/
  grantConfig:
method: auto
  identityProviders:
  - name: Active_Directory
challenge: true
login: true
mappingMethod: claim
provider:
  apiVersion: v1
  kind: LDAPPasswordIdentityProvider
  attributes:
id:
- dn
email:
- mail
name:
- cn
preferredUsername:
- uid
  bindDN: "cn=openshift,ou=users,dc=domain,dc=local"
  bindPassword: "password"
  insecure: true
  url: ldap://dc.domain.local:389/ou=users,dc=domain,dc=local?uid
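
(One way to check which attribute is actually populated on the user objects,
assuming the openldap-clients ldapsearch tool can reach the DC; -W prompts for
the bind password and -z 5 just limits the output:)

ldapsearch -x -H ldap://dc.domain.local:389 \
  -D "cn=openshift,ou=users,dc=domain,dc=local" -W \
  -b "ou=users,dc=domain,dc=local" -z 5 "(objectClass=user)" dn uid sAMAccountName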


Mark Werner | Senior Systems Engineer | Cloud & Infrastructure Services
Unisys | Mobile Phone 586.214.9017 | mark.wer...@unisys.com 
11720 Plaza America Drive, Reston, VA 20190




-Original Message-
From: Javier Palacios [mailto:jpalac...@net4things.com] 
Sent: Wednesday, July 12, 2017 10:48 AM
To: Werner, Mark ; users@lists.openshift.redhat.com
Subject: RE: OpenShift Origin Active Directory Authentication


I cannot speak to the oauthConfig, but for the identity provider you have

> preferredUsername:
> - uid

and I'm not sure that attribute exists. It doesn't exist in mine, at least;
I'm using sAMAccountName, which is in the default AD schema. Although I don't
see how that could prevent the master service from starting.

Mine works, but it has had LDAP authentication since the beginning, as I used
the openshift_master_identity_providers Ansible variable.

Javier Palacios



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: timeout expired waiting for volumes to attach/mount for pod

2017-07-12 Thread Philippe Lafoucrière
On the master, we're seeing this on a regular basis:
https://gist.github.com/gravis/cae52e763cd5cdac19a8456f9208aa34

I don't know whether it's related.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


RE: OpenShift Origin Active Directory Authentication

2017-07-12 Thread Javier Palacios

I cannot speak to the oauthConfig, but for the identity provider you have

> preferredUsername:
> - uid

and I'm not sure that attribute exists. It doesn't exist in mine, at least;
I'm using sAMAccountName, which is in the default AD schema. Although I don't
see how that could prevent the master service from starting.

Mine works, but it has had LDAP authentication since the beginning, as I used
the openshift_master_identity_providers Ansible variable.

Javier Palacios


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


RE: OpenShift Origin Active Directory Authentication

2017-07-12 Thread Werner, Mark
Tried again. I changed cn=users to ou=users.

 

oauthConfig:

  assetPublicURL: https://master.domain.local:8443/console/

  grantConfig:

method: auto

  identityProviders:

  - name: Active_Directory

challenge: true

login: true

  mappingMethod: claim

provider:

  apiVersion: v1

  kind: LDAPPasswordIdentityProvider

  attributes:

id:

- dn

email:

- mail

name:

- cn

preferredUsername:

- uid

  bindDN: "cn=openshift,ou=users,dc=cswp,dc=local"

  bindPassword: "password"

  insecure: true

  url: ldap://dc.domain.local:389/ou=users,dc=cswp,dc=local?uid

  assetPublicURL: https://master.domain.local:8443/console/

  masterPublicURL: https://master.domain.local:8443

  masterURL: https://master.domain.local:8443

 

Same result. 

 

systemctl restart origin-master

Job for origin-master.service failed because the control process exited with
error code. See "systemctl status origin-master.service" and "journalctl -xe"
for details.

 

Results from "systemctl status origin-master.service":

 

   Loaded: loaded (/etc/systemd/system/origin-master.service; enabled; vendor 
preset: disabled)

   Active: activating (auto-restart) (Result: exit-code) since Wed 2017-07-12 
10:16:02 EDT; 2s ago

 Docs:   
https://github.com/openshift/origin

  Process: 41762 ExecStart=/usr/bin/openshift start master 
--config=${CONFIG_FILE} $OPTIONS (code=exited, status=255)

Main PID: 41762 (code=exited, status=255)

Jul 12 10:16:02 master.domain.local systemd[1]: origin-master.service: main 
process exited, code=exited, status=255/n/a

Jul 12 10:16:02 master.domain.local systemd[1]: Failed to start Origin Master 
Service.

Jul 12 10:16:02 master.domain.local systemd[1]: Unit origin-master.service 
entered failed state.

Jul 12 10:16:02 master.domain.local systemd[1]: origin-master.service failed.

Results from "journalctl -xe":

 

Jul 12 10:17:02 master.domain.local systemd[1]: Failed to start Origin Master 
Service.

-- Subject: Unit origin-master.service has failed

-- Defined-By: systemd

-- Support:   
http://lists.freedesktop.org/mailman/listinfo/systemd-devel

--

-- Unit origin-master.service has failed.

--

-- The result is failed.

Jul 12 10:17:02 master.domain.local systemd[1]: Unit origin-master.service 
entered failed state.

Jul 12 10:17:02 master.domain.local systemd[1]: origin-master.service failed.

Jul 12 10:17:03 master.domain.local origin-node[14773]: E0712 10:17:03.459671   
14773 reflector.go:188] pkg/kubelet/config/apiserver.go:44: Failed to

Jul 12 10:17:03 master.domain.local origin-node[14773]: E0712 10:17:03.459675   
14773 reflector.go:188] github.com/openshift/origin/pkg/cmd/server/kub

Jul 12 10:17:03 master.domain.local origin-node[14773]: E0712 10:17:03.462990   
14773 reflector.go:188] github.com/openshift/origin/pkg/sdn/plugin/com

Jul 12 10:17:03 master.domain.local origin-node[14773]: E0712 10:17:03.465266   
14773 reflector.go:188] github.com/openshift/origin/pkg/cmd/server/kub

Jul 12 10:17:03 master.domain.local origin-node[14773]: E0712 10:17:03.465367   
14773 reflector.go:188] pkg/kubelet/kubelet.go:386: Failed to list *ap

Jul 12 10:17:03 master.domain.local origin-node[14773]: E0712 10:17:03.467387   
14773 reflector.go:188] github.com/openshift/origin/pkg/sdn/plugin/com

Jul 12 10:17:03 master.domain.local origin-node[14773]: E0712 10:17:03.467413   
14773 reflector.go:188] pkg/kubelet/kubelet.go:378: Failed to list *ap

Jul 12 10:17:04 master.domain.local origin-node[14773]: E0712 10:17:04.043488   
14773 kubelet_node_status.go:323] Error updating node status, will ret

Jul 12 10:17:04 master.domain.local origin-node[14773]: E0712 10:17:04.045247   
14773 kubelet_node_status.go:323] Error updating node status, will ret

Jul 12 10:17:04 master.domain.local origin-node[14773]: E0712 10:17:04.046899   
14773 kubelet_node_status.go:323] Error updating node status, will ret

Jul 12 10:17:04 master.domain.local origin-node[14773]: E0712 10:17:04.048586   
14773 kubelet_node_status.go:323] Error updating node status, will ret

Jul 12 10:17:04 master.domain.local origin-node[14773]: E0712 10:17:04.050320   
14773 kubelet_node_status.go:323] Error updating node status, will ret

Jul 12 10:17:04 master.domain.local origin-node[14773]: E0712 10:17:04.050347   
14773 kubelet_node_status.go:315] Unable to update node status: update

Jul 12 10:17:04 master.domain.local origin-node[14773]: E0712 10:17:04.461624   
14773 reflector.go:188] github.com/openshift/origin/pkg/cmd/server/kub

Jul 12 10:17:04 master.domain.local origin-node[14773]: E0712 10:17:04.461642   
14773 reflector.go:188] 
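
(The journalctl lines above are cut off on the right; assuming journald still
has the entries, the full messages can be shown with something like:)

journalctl -u origin-master.service -l --no-pager | tail -n 200
journalctl -u origin-node.service -l --no-pager | tail -n 200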

Re: The easiest way to start Docker Registry in Origin

2017-07-12 Thread Henryk Konsek
The route itself seems to be OK, apparently it is just not linked to the
service...

$ oc get route
NAME              HOST/PORT      PATH   SERVICES          PORT       TERMINATION   WILDCARD
docker-registry   192.168.1.21          docker-registry   5000-tcp                 None

Any ideas why that could happen?
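
(Two checks I can still run, assuming the router is what answers on
192.168.1.21, since a route is normally served by the router on ports 80/443
rather than on the service port; <registry-pod> is a placeholder:)

$ curl -v http://192.168.1.21/v2/
$ oc port-forward <registry-pod> 5000:5000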

Wed, 12 Jul 2017 at 16:21, Henryk Konsek wrote:

> BTW If I would like to expose my registry to the outside world, would
> executing the following command just do the job? I'm trying to expose the
> registry via...
>
> oc expose svc/docker-registry --hostname=192.168.1.21
>
> ...but connecting to http://192.168.1.21:5000 gives me Connection
> Refused. Have I missed some steps here? :)
>
> Wed, 12 Jul 2017 at 15:57, Henryk Konsek wrote:
>
>> Many thanks. Integrated registry is exactly what I need and works like a
>> charm :) .
>>
>> Thu, 29 Jun 2017 at 11:59, Maciej Szulik wrote:
>>
>>> On Wed, Jun 28, 2017 at 11:53 AM, Frederic Giloux 
>>> wrote:
>>>
 Hi Henryk

 If I correctly understand your use case I think that the easiest way is
 to create an imagestream foo and to use the pull-through feature:

 https://docs.openshift.org/latest/install_config/registry/extended_registry_configuration.html#middleware-repository-pullthrough

 https://docs.openshift.org/latest/dev_guide/managing_images.html#image-pull-policy

 Regards,

 Frédéric

 On Wed, Jun 28, 2017 at 11:29 AM, Henryk Konsek 
 wrote:

> Hi,
>
> What would be the easiest way to start Docker Registry in OpenShift
> Origin and tell OpenShift to look up for Docker images in it?
>
> What I would like to achieve is that when I execute "oc new-app foo",
> OpenShift will try to look up for "foo" image in my local Origin registry
> and then in DockerHub.
>

>>>
>>> Just install the integrated registry and all you're asking for will be
>>> there:
>>> https://docs.openshift.org/latest/install_config/registry/index.html
>>>
>>>
>>>
>>> --
>> Henryk Konsek
>> https://linkedin.com/in/hekonsek
>>
> --
> Henryk Konsek
> https://linkedin.com/in/hekonsek
>
-- 
Henryk Konsek
https://linkedin.com/in/hekonsek
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: The easiest way to start Docker Registry in Origin

2017-07-12 Thread Henryk Konsek
BTW If I would like to expose my registry to the outside world, would
executing the following command just do the job? I'm trying to expose the
registry via...

oc expose svc/docker-registry --hostname=192.168.1.21

...but connecting to http://192.168.1.21:5000 gives me Connection Refused.
Have I missed some steps here? :)

Wed, 12 Jul 2017 at 15:57, Henryk Konsek wrote:

> Many thanks. Integrated registry is exactly what I need and works like a
> charm :) .
>
> Thu, 29 Jun 2017 at 11:59, Maciej Szulik wrote:
>
>> On Wed, Jun 28, 2017 at 11:53 AM, Frederic Giloux 
>> wrote:
>>
>>> Hi Henryk
>>>
>>> If I correctly understand your use case I think that the easiest way is
>>> to create an imagestream foo and to use the pull-through feature:
>>>
>>> https://docs.openshift.org/latest/install_config/registry/extended_registry_configuration.html#middleware-repository-pullthrough
>>>
>>> https://docs.openshift.org/latest/dev_guide/managing_images.html#image-pull-policy
>>>
>>> Regards,
>>>
>>> Frédéric
>>>
>>> On Wed, Jun 28, 2017 at 11:29 AM, Henryk Konsek 
>>> wrote:
>>>
 Hi,

 What would be the easiest way to start Docker Registry in OpenShift
 Origin and tell OpenShift to look up for Docker images in it?

 What I would like to achieve is that when I execute "oc new-app foo",
 OpenShift will try to look up for "foo" image in my local Origin registry
 and then in DockerHub.

>>>
>>
>> Just install the integrated registry and all you're asking for will be
>> there:
>> https://docs.openshift.org/latest/install_config/registry/index.html
>>
>>
>>
>> --
> Henryk Konsek
> https://linkedin.com/in/hekonsek
>
-- 
Henryk Konsek
https://linkedin.com/in/hekonsek
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


RE: OpenShift Origin Active Directory Authentication

2017-07-12 Thread Werner, Mark
I do believe that in one attempt I changed cn=users to ou=users and had the
same issue, but I can give it a try just to make certain.



Thanks,



Mark Werner | Senior Systems Engineer | Cloud & Infrastructure Services

Unisys | Mobile Phone 586.214.9017 |   
mark.wer...@unisys.com

11720 Plaza America Drive, Reston, VA 20190



 







From: Jon Stanley [mailto:jonstan...@gmail.com]
Sent: Wednesday, July 12, 2017 10:08 AM
To: Werner, Mark 
Cc: users@lists.openshift.redhat.com
Subject: Re: OpenShift Origin Active Directory Authentication





  bindDN: "cn=openshift,cn=users,dc=domain,dc=local"

  bindPassword: "password"

  insecure: true

  url: ldap://dc.domain.local:389/cn=users,dc=domain,dc=local?uid







In addition to Clayton's question about the exact messages, this configuration
looks bad. I'm not sure whether it's a problem in your redaction of the
configuration or whether it's real, but 'cn=openshift,cn=users,dc=domain,dc=local'
has two CNs in it; it should be 'cn=openshift,ou=users,dc=domain,dc=local'.



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: OpenShift Origin Active Directory Authentication

2017-07-12 Thread Jon Stanley
>
>
>   bindDN: "cn=openshift,cn=users,dc=domain,dc=local"
>
>   bindPassword: "password"
>
>   insecure: true
>
>   url: ldap://dc.domain.local:389/cn=users,dc=domain,dc=local?uid
>
>
>
>
In addition to Clayton's question about the exact messages, this configuration
looks bad. I'm not sure whether it's a problem in your redaction of the
configuration or whether it's real, but 'cn=openshift,cn=users,dc=domain,dc=local'
has two CNs in it; it should be 'cn=openshift,ou=users,dc=domain,dc=local'.
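
(If it helps, assuming ldapsearch can reach the DC and the account's
sAMAccountName really is "openshift", the account's actual DN can be confirmed
with something like:)

ldapsearch -x -H ldap://dc.domain.local:389 \
  -D "cn=openshift,cn=users,dc=domain,dc=local" -W \
  -b "dc=domain,dc=local" "(sAMAccountName=openshift)" dn
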
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: [Logging] searchguard configuration issue? ["warning", "elasticsearch"], "pid":1, "message":"Unable to revive connection: https://logging-es:9200/"}

2017-07-12 Thread Stéphane Klein
2017-07-12 15:41 GMT+02:00 Peter Portante :

>
>
> On Wed, Jul 12, 2017 at 9:28 AM, Stéphane Klein <
> cont...@stephane-klein.info> wrote:
>
>>
>> 2017-07-12 15:20 GMT+02:00 Peter Portante :
>>
>>> This looks a lot like this BZ: https://bugzilla.redhat.co
>>> m/show_bug.cgi?id=1449378, "Timeout after 30SECONDS while retrieving
>>> configuration"
>>>
>>> What version of Origin are you using?
>>>
>>>
>> Logging image : origin-logging-elasticsearch:v1.5.0
>>
>> $ oc version
>> oc v1.4.1+3f9807a
>> kubernetes v1.4.0+776c994
>> features: Basic-Auth
>>
>> Server https://console.tech-angels.net:443
>> openshift v1.5.0+031cbe4
>> kubernetes v1.5.2+43a9be4
>>
>> and with 1.4 nodes because of this crazy bug
>> https://github.com/openshift/origin/issues/14092)
>>
>>
>>> I found that I had to run the sgadmin script in each ES pod at the same
>>> time, and when one succeeds and one fails, just run it again and it worked.
>>>
>>>
>> Ok, I'll try that, how can I execute sgadmin script manually ?
>>
>
> ​You can see it in the run.sh script in each pod, look for the invocation
> of sgadmin there.
>
>
Ok I have executed:

/usr/share/elasticsearch/plugins/search-guard-2/tools/sgadmin.sh \
-cd ${HOME}/sgconfig \
-i .searchguard.${HOSTNAME} \
-ks /etc/elasticsearch/secret/searchguard.key \
-kst JKS \
-kspass kspass \
-ts /etc/elasticsearch/secret/searchguard.truststore \
-tst JKS \
-tspass tspass \
-nhnv \
-icl

I ran it on ES nodes 1 and 2 at the same time, but I needed to run it a
second time on node 2.

Now I have this message:

Will connect to localhost:9300 ... done
Contacting elasticsearch cluster 'elasticsearch' and wait for YELLOW
clusterstate ...
Clustername: logging-es
Clusterstate: GREEN
Number of nodes: 2
Number of data nodes: 2
.searchguard.logging-es-x39myqbs-1-s5g7c index already exists, so we do not
need to create one.
Populate config from /opt/app-root/src/sgconfig/
Will update 'config' with /opt/app-root/src/sgconfig/sg_config.yml
   SUCC: Configuration for 'config' created or updated
Will update 'roles' with /opt/app-root/src/sgconfig/sg_roles.yml
   SUCC: Configuration for 'roles' created or updated
Will update 'rolesmapping' with
/opt/app-root/src/sgconfig/sg_roles_mapping.yml
   SUCC: Configuration for 'rolesmapping' created or updated
Will update 'internalusers' with
/opt/app-root/src/sgconfig/sg_internal_users.yml
   SUCC: Configuration for 'internalusers' created or updated
Will update 'actiongroups' with
/opt/app-root/src/sgconfig/sg_action_groups.yml
   SUCC: Configuration for 'actiongroups' created or updated
Done with success

Fixed, thanks.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: The easiest way to start Docker Registry in Origin

2017-07-12 Thread Henryk Konsek
Many thanks. The integrated registry is exactly what I need and works like a
charm :).

Thu, 29 Jun 2017 at 11:59, Maciej Szulik wrote:

> On Wed, Jun 28, 2017 at 11:53 AM, Frederic Giloux 
> wrote:
>
>> Hi Henryk
>>
>> If I correctly understand your use case I think that the easiest way is
>> to create an imagestream foo and to use the pull-through feature:
>>
>> https://docs.openshift.org/latest/install_config/registry/extended_registry_configuration.html#middleware-repository-pullthrough
>>
>> https://docs.openshift.org/latest/dev_guide/managing_images.html#image-pull-policy
>>
>> Regards,
>>
>> Frédéric
>>
>> On Wed, Jun 28, 2017 at 11:29 AM, Henryk Konsek 
>> wrote:
>>
>>> Hi,
>>>
>>> What would be the easiest way to start Docker Registry in OpenShift
>>> Origin and tell OpenShift to look up for Docker images in it?
>>>
>>> What I would like to achieve is that when I execute "oc new-app foo",
>>> OpenShift will try to look up for "foo" image in my local Origin registry
>>> and then in DockerHub.
>>>
>>
>
> Just install the integrated registry and all you're asking for will be
> there:
> https://docs.openshift.org/latest/install_config/registry/index.html
>
>
>
> --
Henryk Konsek
https://linkedin.com/in/hekonsek
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: OpenShift Origin Active Directory Authentication

2017-07-12 Thread Clayton Coleman
When you restart, what log messages are printed in origin-master?

On Jul 11, 2017, at 10:19 PM, Werner, Mark  wrote:

I am really struggling to get Active Directory authentication to work.

The oauthConfig section of the master-config.yaml file starts out like this
and all is fine.

oauthConfig:

  assetPublicURL: https://master.domain.local:8443/console/

  grantConfig:

method: auto

  identityProviders:

  - challenge: true

login: true

mappingMethod: claim

name: allow_all

provider:

  apiVersion: v1

  kind: AllowAllPasswordIdentityProvider

  masterCA: ca-bundle.crt

  masterPublicURL: https://master.domain.local:8443

  masterURL: https://master.domain.local:8443

Then I attempt to modify the oauthConfig section of the master-config.yaml
file to look like this.

oauthConfig:

  assetPublicURL: https://master.domain.local:8443/console/

  grantConfig:

method: auto

  identityProviders:

  - name: Active_Directory

challenge: true

login: true

mappingMethod: claim

provider:

  apiVersion: v1

  kind: LDAPPasswordIdentityProvider

  attributes:

id:

- dn

email:

- mail

name:

- cn

preferredUsername:

- uid

  bindDN: "cn=openshift,cn=users,dc=domain,dc=local"

  bindPassword: "password"

  insecure: true

  url: ldap://dc.domain.local:389/cn=users,dc=domain,dc=local?uid

  assetPublicURL: https://master.domain.local:8443/console/

  masterPublicURL: https://master.domain.local:8443

  masterURL: https://master.domain.local:8443

Then I try to restart the origin-master service and it fails to restart,
and won't start again, not even on reboot. If I revert to the old
master-config.yaml file, everything works fine again, and the origin-master
service starts with no problem.

The user "openshift" has been created in Active Directory with the correct
password.

I have even tried using url:
ldaps://dc.domain.local:686/cn=users,dc=domain,dc=local?uid

That doesn't work either. I cannot seem to figure out what I am doing wrong
and what the origin-master service does not like about the modified
master-config.yaml file that keeps it from starting.
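
(I suppose one way to surface the actual error, assuming the default
/etc/origin/master path and the same binary the systemd unit runs, would be to
start the master in the foreground against the modified file and, assuming
PyYAML is installed, to sanity-check the YAML itself:)

/usr/bin/openshift start master --config=/etc/origin/master/master-config.yaml
python -c 'import yaml,sys; yaml.safe_load(open(sys.argv[1]))' /etc/origin/master/master-config.yaml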





*Mark Werner* | Senior Systems Engineer | Cloud & Infrastructure Services

Unisys | Mobile Phone 586.214.9017 | mark.wer...@unisys.com

11720 Plaza America Drive, Reston, VA 20190



 






___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: [Logging] searchguard configuration issue? ["warning", "elasticsearch"], "pid":1, "message":"Unable to revive connection: https://logging-es:9200/"}

2017-07-12 Thread Peter Portante
On Wed, Jul 12, 2017 at 9:28 AM, Stéphane Klein  wrote:

>
> 2017-07-12 15:20 GMT+02:00 Peter Portante :
>
>> This looks a lot like this BZ: https://bugzilla.redhat.co
>> m/show_bug.cgi?id=1449378, "Timeout after 30SECONDS while retrieving
>> configuration"
>>
>> What version of Origin are you using?
>>
>>
> Logging image : origin-logging-elasticsearch:v1.5.0
>
> $ oc version
> oc v1.4.1+3f9807a
> kubernetes v1.4.0+776c994
> features: Basic-Auth
>
> Server https://console.tech-angels.net:443
> openshift v1.5.0+031cbe4
> kubernetes v1.5.2+43a9be4
>
> and with 1.4 nodes because of this crazy bug https://github.com/openshift/
> origin/issues/14092)
>
>
>> I found that I had to run the sgadmin script in each ES pod at the same
>> time, and when one succeeds and one fails, just run it again and it worked.
>>
>>
> Ok, I'll try that, how can I execute sgadmin script manually ?
>

​You can see it in the run.sh script in each pod, look for the invocation
of sgadmin there.
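
(For example, assuming the pod name quoted earlier in this thread, and that
run.sh lives next to the sgconfig directory, which is a guess, something like
this prints the invocation:)

oc exec logging-es-ne81bsny-5-jdcdk -- grep -A 12 sgadmin /opt/app-root/src/run.sh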

-peter​



>
> Best regards,
> Stéphane
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: [Logging] searchguard configuration issue? ["warning", "elasticsearch"], "pid":1, "message":"Unable to revive connection: https://logging-es:9200/"}

2017-07-12 Thread Stéphane Klein
2017-07-12 15:20 GMT+02:00 Peter Portante :

> This looks a lot like this BZ: https://bugzilla.redhat.
> com/show_bug.cgi?id=1449378, "Timeout after 30SECONDS while retrieving
> configuration"
>
> What version of Origin are you using?
>
>
Logging image : origin-logging-elasticsearch:v1.5.0

$ oc version
oc v1.4.1+3f9807a
kubernetes v1.4.0+776c994
features: Basic-Auth

Server https://console.tech-angels.net:443
openshift v1.5.0+031cbe4
kubernetes v1.5.2+43a9be4

(and with 1.4 nodes because of this crazy bug
https://github.com/openshift/origin/issues/14092)


> I found that I had to run the sgadmin script in each ES pod at the same
> time, and when one succeeds and one fails, just run it again and it worked.
>
>
OK, I'll try that. How can I execute the sgadmin script manually?

Best regards,
Stéphane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: [Logging] searchguard configuration issue? ["warning", "elasticsearch"], "pid":1, "message":"Unable to revive connection: https://logging-es:9200/"}

2017-07-12 Thread Peter Portante
This looks a lot like this BZ:
https://bugzilla.redhat.com/show_bug.cgi?id=1449378, "Timeout after
30SECONDS while retrieving configuration"

What version of Origin are you using?

I found that I had to run the sgadmin script in each ES pod at the same
time, and when one succeeds and one fails, just run it again and it worked.

It seems to have to do with the sgadmin script trying to be sure that all
nodes can see the searchguard index, but since we create one per node, if
another node does not have searchguard successfully set up, the current
node's setup will fail. Retrying at the same time until they work seems to be
the fix. :(

-peter

On Wed, Jul 12, 2017 at 9:03 AM, Stéphane Klein  wrote:

> Hi,
>
> Since one day, after ES cluster pods restart, I have this error message
> when I launch logging-es:
>
> $ oc logs -f logging-es-ne81bsny-5-jdcdk
> Comparing the specificed RAM to the maximum recommended for
> ElasticSearch...
> Inspecting the maximum RAM available...
> ES_JAVA_OPTS: '-Dmapper.allow_dots_in_name=true -Xms128M -Xmx4096m'
> Checking if Elasticsearch is ready on https://localhost:9200
> ..Will connect to localhost:9300 ...
> done
> Contacting elasticsearch cluster 'elasticsearch' and wait for YELLOW
> clusterstate ...
> Clustername: logging-es
> Clusterstate: YELLOW
> Number of nodes: 2
> Number of data nodes: 2
> .searchguard.logging-es-ne81bsny-5-jdcdk index does not exists, attempt
> to create it ... done (with 1 replicas, auto expand replicas is off)
> Populate config from /opt/app-root/src/sgconfig/
> Will update 'config' with /opt/app-root/src/sgconfig/sg_config.yml
>SUCC: Configuration for 'config' created or updated
> Will update 'roles' with /opt/app-root/src/sgconfig/sg_roles.yml
>SUCC: Configuration for 'roles' created or updated
> Will update 'rolesmapping' with /opt/app-root/src/sgconfig/sg_
> roles_mapping.yml
>SUCC: Configuration for 'rolesmapping' created or updated
> Will update 'internalusers' with /opt/app-root/src/sgconfig/sg_
> internal_users.yml
>SUCC: Configuration for 'internalusers' created or updated
> Will update 'actiongroups' with /opt/app-root/src/sgconfig/sg_
> action_groups.yml
>SUCC: Configuration for 'actiongroups' created or updated
> Timeout (java.util.concurrent.TimeoutException: Timeout after 30SECONDS
> while retrieving configuration for [config, roles, rolesmapping,
> internalusers, actiongroups](index=.searchguard.logging-es-
> x39myqbs-1-s5g7c))
> Done with failures
>
> after some time, my ES cluster (2 nodes) is green:
>
> stephane$ oc rsh logging-es-x39myqbs-1-s5g7c bash
> $ curl ... /etc/elasticsearch/secret/admin-cert ... https://localhost:9200/_cluster/health?pretty=true
> {
>   "cluster_name" : "logging-es",
>   "status" : "green",
>   "timed_out" : false,
>   "number_of_nodes" : 2,
>   "number_of_data_nodes" : 2,
>   "active_primary_shards" : 1643,
>   "active_shards" : 3286,
>   "relocating_shards" : 0,
>   "initializing_shards" : 0,
>   "unassigned_shards" : 0,
>   "delayed_unassigned_shards" : 0,
>   "number_of_pending_tasks" : 0,
>   "number_of_in_flight_fetch" : 0,
>   "task_max_waiting_in_queue_millis" : 0,
>   "active_shards_percent_as_number" : 100.0
> }
>
> I have this error in kibana container:
>
> $ oc logs -f -c kibana logging-kibana-1-jblhl
> {"type":"log","@timestamp":"2017-07-12T12:54:54Z","tags":[
> "warning","elasticsearch"],"pid":1,"message":"No living connections"}
> {"type":"log","@timestamp":"2017-07-12T12:54:57Z","tags":[
> "warning","elasticsearch"],"pid":1,"message":"Unable to revive
> connection: https://logging-es:9200/"}
>
> But in Kibana container I can access to elasticsearch server:
>
> $ oc rsh -c kibana logging-kibana-1-jblhl bash
> $ curl https://logging-es:9200/ --cacert /etc/kibana/keys/ca --key
> /etc/kibana/keys/key --cert /etc/kibana/keys/cert
> {
>   "name" : "Adri Nital",
>   "cluster_name" : "logging-es",
>   "cluster_uuid" : "iRo3wOHWSq2bTZskrIs6Zg",
>   "version" : {
> "number" : "2.4.4",
> "build_hash" : "fcbb46dfd45562a9cf00c604b30849a6dec6b017",
> "build_timestamp" : "2017-01-03T11:33:16Z",
> "build_snapshot" : false,
> "lucene_version" : "5.5.2"
>   },
>   "tagline" : "You Know, for Search"
> }
>
> How can I fix this error?
>
> Best regards,
> Stéphane
> --
> Stéphane Klein 
> blog: http://stephane-klein.info
> cv : http://cv.stephane-klein.info
> Twitter: http://twitter.com/klein_stephane
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


[Logging] searchguard configuration issue? ["warning", "elasticsearch"], "pid":1, "message":"Unable to revive connection: https://logging-es:9200/"}

2017-07-12 Thread Stéphane Klein
Hi,

For the past day, after the ES cluster pods restart, I have been getting this
error message when I launch logging-es:

$ oc logs -f logging-es-ne81bsny-5-jdcdk
Comparing the specificed RAM to the maximum recommended for ElasticSearch...
Inspecting the maximum RAM available...
ES_JAVA_OPTS: '-Dmapper.allow_dots_in_name=true -Xms128M -Xmx4096m'
Checking if Elasticsearch is ready on https://localhost:9200
..Will connect to localhost:9300 ...
done
Contacting elasticsearch cluster 'elasticsearch' and wait for YELLOW
clusterstate ...
Clustername: logging-es
Clusterstate: YELLOW
Number of nodes: 2
Number of data nodes: 2
.searchguard.logging-es-ne81bsny-5-jdcdk index does not exists, attempt to
create it ... done (with 1 replicas, auto expand replicas is off)
Populate config from /opt/app-root/src/sgconfig/
Will update 'config' with /opt/app-root/src/sgconfig/sg_config.yml
   SUCC: Configuration for 'config' created or updated
Will update 'roles' with /opt/app-root/src/sgconfig/sg_roles.yml
   SUCC: Configuration for 'roles' created or updated
Will update 'rolesmapping' with
/opt/app-root/src/sgconfig/sg_roles_mapping.yml
   SUCC: Configuration for 'rolesmapping' created or updated
Will update 'internalusers' with
/opt/app-root/src/sgconfig/sg_internal_users.yml
   SUCC: Configuration for 'internalusers' created or updated
Will update 'actiongroups' with
/opt/app-root/src/sgconfig/sg_action_groups.yml
   SUCC: Configuration for 'actiongroups' created or updated
Timeout (java.util.concurrent.TimeoutException: Timeout after 30SECONDS
while retrieving configuration for [config, roles, rolesmapping,
internalusers,
actiongroups](index=.searchguard.logging-es-x39myqbs-1-s5g7c))
Done with failures

after some time, my ES cluster (2 nodes) is green:

stephane$ oc rsh logging-es-x39myqbs-1-s5g7c bash
$ curl ... /etc/elasticsearch/secret/admin-cert ... https://localhost:9200/_cluster/health?pretty=true
{
  "cluster_name" : "logging-es",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 1643,
  "active_shards" : 3286,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

I have this error in kibana container:

$ oc logs -f -c kibana logging-kibana-1-jblhl
{"type":"log","@timestamp":"2017-07-12T12:54:54Z","tags":["warning","elasticsearch"],"pid":1,"message":"No
living connections"}
{"type":"log","@timestamp":"2017-07-12T12:54:57Z","tags":["warning","elasticsearch"],"pid":1,"message":"Unable
to revive connection: https://logging-es:9200/"}

But from the Kibana container I can access the Elasticsearch server:

$ oc rsh -c kibana logging-kibana-1-jblhl bash
$ curl https://logging-es:9200/ --cacert /etc/kibana/keys/ca --key
/etc/kibana/keys/key --cert /etc/kibana/keys/cert
{
  "name" : "Adri Nital",
  "cluster_name" : "logging-es",
  "cluster_uuid" : "iRo3wOHWSq2bTZskrIs6Zg",
  "version" : {
"number" : "2.4.4",
"build_hash" : "fcbb46dfd45562a9cf00c604b30849a6dec6b017",
"build_timestamp" : "2017-01-03T11:33:16Z",
"build_snapshot" : false,
"lucene_version" : "5.5.2"
  },
  "tagline" : "You Know, for Search"
}

How can I fix this error?

Best regards,
Stéphane
-- 
Stéphane Klein 
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: timeout expired waiting for volumes to attach/mount for pod

2017-07-12 Thread Philippe Lafoucrière
Our nodes are up-to-date already, but we're not using docker-latest (1.13).
I don't think that's an issue, since everything was fine with 1.12 last
week.
​
The only things that have changed lately are the PVs; we are migrating some
datastores. I wonder if one of them could be the issue, with OpenShift
waiting for a volume until the timeout.
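
(A quick check, just a sketch: look for PVs stuck outside the Bound phase,
e.g. Released or Failed, and for claims that never bound:)

$ oc get pv
$ oc get pvc --all-namespaces | grep -v Bound
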
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: timeout expired waiting for volumes to attach/mount for pod

2017-07-12 Thread Philippe Lafoucrière
Hi,

We have this issue on OpenShift 1.5 (with 1.4 nodes because of this crazy
bug https://github.com/openshift/origin/issues/14092).
It started a few days ago, and nothing really changed in our cluster. We
just added a bunch of secrets, and noticed longer and longer deploys.

We have nothing fancy in the logs, and the only relevant event is:

Unable to mount volumes for pod "xx": timeout expired waiting for
volumes to attach/mount for pod ""/"". list of unattached/unmounted
volumes=[xxx-secrets -secrets -secrets ssl-certs -secrets
default-token-n6pbo]
​
We see this event several times (it varies, let's say around 5 times), and
then the container starts as expected. It's an issue when it comes to a
single DB pod: the application is down for 5 minutes if the pod needs to
restart.
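
(For reference, the exact mount events and their timing can be pulled per pod
with something like this; <db-pod> is a placeholder:)

$ oc describe pod <db-pod> | grep -A 20 Events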

Thanks
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Method to move a single or multiple pods to a different node?

2017-07-12 Thread Per Carlson
On 12 July 2017 at 00:50, G. Jones  wrote:

> That's just it, the masters were unschedulable. During the outage we
> restarted the masters and nodes but the nodes wouldn’t come online. While
> we were working on getting the nodes up the pods had been restarted on the
> masters but they were never set as schedulable. When everything was finally
> up and running I did an oc describe node and found that pods were spread
> across the masters and nodes without me explicitly setting the masters as
> schedulable.
>

​Sounds like a bug to me. If you still have got logs/forensics you could
file a bug report.

-- 
Pelle

Research is what I'm doing when I don't know what I'm doing.
- Wernher von Braun
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users