SSO with Jenkins not working

2020-02-04 Thread Marc Boorshtein
I have deployed OCP 4.3 on AWS.  I replaced the certs for the router with
wildcards from letsencrypt.  TLS from my browser and from apps in openshift
to the router are all working fine.  I deployed Jenkins using oc new-app
jenkins-persistent.  When I try to log in I'm presented with the "Login With
OpenShift" screen; I log in and authorize Jenkins to access OpenShift on my
behalf, but then I'm stuck in a loop of the "Login With OpenShift"
screen.  Looking in the logs I see:

2020-02-04 18:48:29.870+ [id=18] INFO
o.o.j.p.o.OpenShiftOAuth2SecurityRealm#populateDefaults: OpenShift OAuth:
provider: OpenShiftProviderInfo: issuer:
https://oauth-openshift.apps.devopsdev.tremolo.dev auth ep:
https://oauth-openshift.apps.devopsdev.tremolo.dev/oauth/authorize token
ep: https://oauth-openshift.apps.devopsdev.tremolo.dev/oauth/token
2020-02-04 18:48:29.873+ [id=18] INFO
o.o.j.p.o.OpenShiftOAuth2SecurityRealm#useProviderOAuthEndpoint: OpenShift
OAuth server is 4.x, specifically OpenShiftVersionInfo: major: 1 minor: 16+
gitVersion: v1.16.2
2020-02-04 18:48:29.873+ [id=18] INFO
o.o.j.p.o.OpenShiftOAuth2SecurityRealm#initializeHttpsProxyAuthenticator:
Checking if HTTPS proxy initialization is required ...
2020-02-04 18:48:29.887+ [id=18] INFO
o.o.j.p.o.OpenShiftOAuth2SecurityRealm#transportToUse: OpenShift OAuth got
an SSL error when accessing the issuer's token endpoint when using the SA
certificate
2020-02-04 18:48:29.893+ [id=18] INFO
o.o.j.p.o.OpenShiftOAuth2SecurityRealm#transportToUse: OpenShift OAuth was
able to complete the SSL handshake when accessing the issuer's token
endpoint using the JVMs default keystore
2020-02-04 18:48:29.894+ [id=18] INFO
o.o.j.p.o.OpenShiftOAuth2SecurityRealm#populateDefaults: OpenShift OAuth
returning true with namespace jenkins SA dir null default /run/secrets/
kubernetes.io/serviceaccount SA name null default jenkins client ID null
default system:serviceaccount:jenkins:jenkins secret null default
eyJhb... redirect null default
https://oauth-openshift.apps.devopsdev.tremolo.dev server null default
https://kubernetes.default:443
2020-02-04 18:48:29.915+ [id=18] INFO
o.o.j.p.o.OpenShiftOAuth2SecurityRealm#getRoleToPermissionMap: OpenShift
Jenkins Login Plugin could not find the
openshift-jenkins-login-plugin-config config map in namespace jenkins so
the default permission mapping will be used
2020-02-04 18:48:30.051+ [id=16] INFO
o.o.j.p.o.OpenShiftOAuth2SecurityRealm#populateDefaults: OpenShift OAuth:
provider: OpenShiftProviderInfo: issuer:
https://oauth-openshift.apps.devopsdev.tremolo.dev auth ep:
https://oauth-openshift.apps.devopsdev.tremolo.dev/oauth/authorize token
ep: https://oauth-openshift.apps.devopsdev.tremolo.dev/oauth/token
2020-02-04 18:48:30.064+ [id=16] INFO
o.o.j.p.o.OpenShiftOAuth2SecurityRealm#useProviderOAuthEndpoint: OpenShift
OAuth server is 4.x, specifically OpenShiftVersionInfo: major: 1 minor: 16+
gitVersion: v1.16.2
2020-02-04 18:48:30.064+ [id=16] INFO
o.o.j.p.o.OpenShiftOAuth2SecurityRealm#initializeHttpsProxyAuthenticator:
Checking if HTTPS proxy initialization is required ...
2020-02-04 18:48:30.075+ [id=16] INFO
o.o.j.p.o.OpenShiftOAuth2SecurityRealm#transportToUse: OpenShift OAuth got
an SSL error when accessing the issuer's token endpoint when using the SA
certificate
2020-02-04 18:48:30.079+ [id=16] INFO
o.o.j.p.o.OpenShiftOAuth2SecurityRealm#transportToUse: OpenShift OAuth was
able to complete the SSL handshake when accessing the issuer's token
endpoint using the JVMs default keystore
2020-02-04 18:48:30.079+ [id=16] INFO
o.o.j.p.o.OpenShiftOAuth2SecurityRealm#populateDefaults: OpenShift OAuth
returning true with namespace jenkins SA dir null default /run/secrets/
kubernetes.io/serviceaccount SA name null default jenkins client ID null
default system:serviceaccount:jenkins:jenkins secret null default
eyJhb... redirect null default
https://oauth-openshift.apps.devopsdev.tremolo.dev server null default
https://kubernetes.default:443
2020-02-04 18:48:30.084+ [id=16] INFO
o.o.j.p.o.OpenShiftOAuth2SecurityRealm#useProviderOAuthEndpoint: OpenShift
OAuth server is 4.x, specifically OpenShiftVersionInfo: major: 1 minor: 16+
gitVersion: v1.16.2
2020-02-04 18:48:30.084+ [id=16] INFO
o.o.j.p.o.OpenShiftOAuth2SecurityRealm#newOAuthSession: OpenShift OAuth
using OAuth Provider specified endpoints for this login flow
2020-02-04 18:48:30.084+ [id=16] INFO
o.o.j.p.o.OpenShiftOAuth2SecurityRealm#initializeHttpsProxyAuthenticator:
Checking if HTTPS proxy initialization is required ...
2020-02-04 18:48:30.095+ [id=16] INFO
o.o.j.p.o.OpenShiftOAuth2SecurityRealm#transportToUse: OpenShift OAuth got
an SSL error when accessing the issuer's token endpoint when using the SA
certificate
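
For reference, the two transport checks the plugin logs can be reproduced
from inside the Jenkins pod with curl; this is just a diagnostic sketch,
using the endpoint from the log and the standard service-account CA path:

# the check that fails for the plugin: validate the OAuth route against the SA CA bundle
curl -v --cacert /run/secrets/kubernetes.io/serviceaccount/ca.crt \
  https://oauth-openshift.apps.devopsdev.tremolo.dev/oauth/token

# the check that succeeds: validate the same endpoint against the system trust store
curl -v https://oauth-openshift.apps.devopsdev.tremolo.dev/oauth/token

If the first call fails and the second works, the route is presenting the
letsencrypt wildcard rather than a cert the SA CA bundle can verify, which is
consistent with the log output above.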

Any thoughts?

Thanks
Marc


Re: OKD 3.11 - Volume and Claim Pre-binding - volumes for a namespace

2019-11-18 Thread Marc Boorshtein
Ended up doing the same thing with a validating webhook using OPA.
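
A minimal sketch of the kind of registration object involved (assuming OPA is
exposed as a service named opa in an opa namespace, both placeholders, and
leaving the Rego policy itself out):

apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: pvc-namespace-check
webhooks:
- name: pvc-check.openpolicyagent.org
  failurePolicy: Fail
  rules:
  - operations: ["CREATE", "UPDATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["persistentvolumeclaims"]
  clientConfig:
    service:
      namespace: opa   # placeholder
      name: opa        # placeholder
    caBundle: <base64-encoded CA for the OPA service>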

On Mon, Nov 18, 2019, 4:13 AM Alan Christie <
achris...@informaticsmatters.com> wrote:

> Thanks,
>
> I was wondering whether I could create an arbitrary storage class so (if
> the application can be adjusted to name that class) this might well be a
> solution. I’ll poke around today, thanks.
>
>
> Alan Christie
> achris...@informaticsmatters.com
>
>
>
> On 18 Nov 2019, at 12:08 pm, Frederic Giloux  wrote:
>
> Hi Alan
>
> you can use a storage class for the purpose [1] and pair it with quotas
> for the defined storage class [2] as proposed by Samuel.
>
> [1]
> https://docs.okd.io/3.11/install_config/storage_examples/storage_classes_legacy.html#install-config-storage-examples-storage-classes-legacy
> [2]
> https://docs.okd.io/3.11/dev_guide/compute_resources.html#dev-managed-by-quota
>
> Regards,
>
> Frédéric
>
> On Mon, Nov 18, 2019 at 12:38 PM Samuel Martín Moro 
> wrote:
>
>> Not that I know of.
>> The claimRef is not meant to be changed manually.  Once it is set, the PV
>> should already have been bound; you won't be able to set only a namespace.
>>
>> Have you considered using ResourceQuotas?
>>
>> To deny users in a Project from requesting persistent storage, you could
>> use the following:
>>
>> apiVersion: v1
>> kind: ResourceQuota
>> metadata:
>>   name: no-pv
>>   namespace: project-with-no-persistent-volumes
>> spec:
>>   hard:
>>     persistentvolumeclaims: 0
>>
>>
>> On Mon, Nov 18, 2019 at 12:00 PM Alan Christie <
>> achris...@informaticsmatters.com> wrote:
>>
>>> On the topic of volume claim pre-binding …
>>>
>>> Is there a pattern for creating volumes that can only be bound to a PVC
>>> from a known namespace, specifically when the PVC name may not be known in
>>> advance?
>>>
>>> In my specific case I don’t have control over the application's PVC name
>>> but I do know its namespace. I need to prevent the pre-allocated volume
>>> from being bound to a claim from a namespace other than the one the
>>> application’s in.
>>>
>>> The `PersistentVolume` spec contains a `claimRef` section, but I suspect
>>> you can’t just fill out the `namespace`; you need to provide both the
>>> `name` and `namespace` (the former alone doesn’t generate an error, but
>>> it doesn’t work).
>>>
>>> Any suggestions?
>>>
>>> Alan Christie
>>> achris...@informaticsmatters.com
>>>
>>>
>>>
>>>
>>
>>
>> --
>> Samuel Martín Moro
>> {EPITECH.} 2011
>>
>> "Nobody wants to say how this works.
>>  Maybe nobody knows ..."
>>   Xorg.conf(5)
>
>
> --
> *Frédéric Giloux*
> Senior Technical Account Manager
> Red Hat Germany
>
> fgil...@redhat.com M: +49-174-172-4661
>
> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
> 
> Red Hat GmbH, http://www.de.redhat.com/, Sitz: Grasbrunn,
> Handelsregister: Amtsgericht München, HRB 153243,
> Geschäftsführer: Charles Cachera, Michael O'Neill, Tom Savage, Eric Shander
>
>


Re: Jenkins with PVC won't start with OCP 4

2019-08-29 Thread Marc Boorshtein
For anyone who runs into this issue: the problem was that GitLab's deployment
instructions included adding the anyuid SCC to the system:authenticated group,
which caused havoc.  Remove it and all is good.
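
In case it helps anyone else, the cleanup is basically this (verify with
`oc get scc anyuid -o yaml` first):

oc adm policy remove-scc-from-group anyuid system:authenticated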



Jenkins with PVC won't start with OCP 4

2019-08-28 Thread Marc Boorshtein
I've got OCP4 deployed.  I've got several pods running with EBS PVCs
without issue, but when I tried creating Jenkins from a template I get:

cp: cannot create regular file ‘/var/lib/jenkins/config.xml’: Permission
denied

I don't see anything special about the volume mount.  It's not marked as
read-only.  Google tells me it's an issue with the tags on the images, but
I'd think that would be a problem for all my EBS PVCs.

Thanks
Marc


Re: How to do a flexvolume with ocp 4?

2019-06-14 Thread Marc Boorshtein
> Can you write an operator that installs a DaemonSet which runs on
> "selected" nodes and copies the flexvolume plugin into the aforementioned
> location, and then mounts a hostPath that allows it to copy both the
> kerberos secret and the mount.cifs binary onto the host so the flexvolume
> plugin can access them?  I haven't tried it but
> https://gist.github.com/pantelis/540a19262cacc841fb0a
>
>
What would the operator be watching?  Does RHCOS ship with the kerberos
tools?




> I just verified that the kernel shipped with RHCOS does ship with the cifs
> kernel module, although we don't ship the cifs-utils package, so there is
> no mount.cifs (but you can still do mount -t cifs).  The daemonset pod
> should be privileged though.


Re: How to do a flexvolume with ocp 4?

2019-06-14 Thread Marc Boorshtein
>
> I'll leave the discussion to guys with more knowledge than me, but using a
> sidecar container to provide network storage client seems overkill or more
> complicated than required to me. Network storage should be managed by the
> node, and containers should get the mount points without caring about the
> filesystem type. Only the nodes (or the privileged container that manages
> cifs on the node, for all containers/pods in that node) should need the
> keytab. I'd try providing that file to the privileged pod as a configmap.
>
>
Today we distribute the keytabs to the nodes via Ansible.  My concern with
v4 (including Hemant's comments) is that this method will break with RHCOS,
since everything is supposed to run in a container.  I'm also not a huge fan
of distributing keytabs to nodes that might run a pod instead of
it only being available to the pod when it runs.  Can you run a flexvolume
as its own container?  Instead of telling ocp4 to run a script can I tell
it to run a container that does the mount?


Re: How to do a flexvolume with ocp 4?

2019-06-14 Thread Marc Boorshtein
>
>
>
> If they are not, you'll need a privileged container to work as the cifs
> client. It would be managed by a DaemonSet and probably require a custom
> SCC to grant it the necessary rights, but it is doable to have a container
> that loads kernel modules into the host, etc.
>
>>
>>
So we already have a mature way to inject a sidecar into pods that need
keytab access.  We detect an annotation via an admission controller webhook
and inject a privileged pod that creates a keyring from the keystore and
shares it with the primary pod via shared memory.  Ideally, what I'd like to
do is create a similar sidecar that gets the keytab from either a secret or,
more likely, a secrets manager like Vault, runs the mount inside the
container, and then shares the mount across to the primary pod.  We already
have a way of generating the keyring and custom SCCs for each user.  I figure
the hardest part would be sharing the mount from the sidecar to the primary
pod.  Is that possible?
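
A rough sketch of the shape I have in mind (untested; image names are
placeholders): a privileged sidecar doing the mount into a shared emptyDir
with bidirectional mount propagation, and the app container picking it up
via HostToContainer:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-cifs-sidecar
spec:
  volumes:
  - name: cifs-share
    emptyDir: {}
  containers:
  - name: cifs-mounter            # privileged sidecar that runs mount -t cifs
    image: registry.example.com/cifs-mounter:latest
    securityContext:
      privileged: true
    volumeMounts:
    - name: cifs-share
      mountPath: /mnt/cifs
      mountPropagation: Bidirectional
  - name: app
    image: registry.example.com/app:latest
    volumeMounts:
    - name: cifs-share
      mountPath: /data
      mountPropagation: HostToContainer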


Re: How to do a flexvolume with ocp 4?

2019-06-14 Thread Marc Boorshtein
>
>
> On Thu, Jun 13, 2019 at 7:00 PM Hemant Kumar  wrote:
>
>> Yes they are. The only catch is that getting them to work in the control
>> plane is more difficult, but since your flexvolume plugin worked in 3.11,
>> where the controller-manager is already containerized, it may not be an
>> issue for your particular use case.
>>
> [DC]: if you don't mind, I'm curious to understand why you think it's
> harder in v4 to get this working with the control plane?
>
>>
>>
The flexvolume is for CIFS and, in order to work, needs to:

1.  Have the cifs packages installed
2.  Have the user's kerberos keytab available (we're not allowed to use
usernames and passwords)

On 3.11 we're managing this with a combination of FreeIPA (every node is a
member of the IPA domain), Ansible, and OpenUnison.  Given 4.x's reliance on
a container OS (RHCOS or FCOS), my assumption was this wouldn't work anymore.
Is that assumption wrong?

Thanks


How to do a flexvolume with ocp 4?

2019-06-13 Thread Marc Boorshtein
I've got a flexvolume driver for CIFS working in 3.11.  How does that work
on 4.x with RHCOS?  Are flexvolumes still possible?

Thanks
Marc


OIDC in OCP4 - cli token menu is gone?

2019-05-02 Thread Marc Boorshtein
What happened to the "Get your token" link in OCP4?  Once I'm logged in, how
do I get CLI access?  I found
https://github.com/openshift/openshift-docs/blob/enterprise-4.1/authentication/identity_providers/configuring-oidc-identity-provider.adoc
but going to /oauth/token/request in the console just bounces me back to login
and then says there's an error.
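
In the meantime the token is still reachable from the CLI, and the token
request page should live on the OAuth server route rather than the console
(paths below are from memory, so treat them as a sketch):

# after an interactive `oc login`, print the current token
oc whoami -t

# or request one in a browser from the OAuth route, e.g.
# https://oauth-openshift.apps.<cluster-domain>/oauth/token/request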


OCP4 - Logging in gives me api list on Chrome and Firefox

2019-05-02 Thread Marc Boorshtein
This is really odd.  When I try to log in to OCP4 from my Mac via Chrome or
Firefox, once I get logged in with kubeadmin I just get a blank screen that
shows the list of APIs:

{
  "paths": [
"/apis",
"/healthz",
"/healthz/log",
"/healthz/ping",

"/healthz/poststarthook/oauth.openshift.io-startoauthclientsbootstrapping",
"/metrics",
"/readyz",
"/readyz/log",
"/readyz/ping",

"/readyz/poststarthook/oauth.openshift.io-startoauthclientsbootstrapping",
"/readyz/terminating"
  ]
}

Here's the URL in the bar:
https://console-openshift-console.apps.ocp47.tremolo.dev/auth/callback?code=

...

The only browser that works is Safari on Mac.  What's really odd is that when
I add an OpenID Connect provider it redirects me to my identity provider, but
still shows me the above API list.  What's even odder is that the URL bar is
pointing to my IdP (OpenUnison) hosted in OpenShift:

https://orchestra.apps.ocp47.tremolo.dev/auth/idp/OpenShiftIdP/auth?client_id=openshift_uri=https%3A%2F%2Fopenshift-authentication-openshift-authentication.apps.ocp47.tremolo.dev%2Foauth2callback%2Fopenunison_type=code=openid=Y3NyZj1keUtqRndhY2ItNFR6M1B5a1VXVHZiOXFJYjk3clRhbDI3WXVrYl9lWjZZJnRoZW49JTJGb2F1dGglMkZhdXRob3JpemUlM0ZjbGllbnRfaWQlM0Rjb25zb2xlJTI2aWRwJTNEb3BlbnVuaXNvbiUyNnJlZGlyZWN0X3VyaSUzRGh0dHBzJTI1M0ElMjUyRiUyNTJGY29uc29sZS1vcGVuc2hpZnQtY29uc29sZS5hcHBzLm9jcDQ3LnRyZW1vbG8uZGV2JTI1MkZhdXRoJTI1MkZjYWxsYmFjayUyNnJlc3BvbnNlX3R5cGUlM0Rjb2RlJTI2c2NvcGUlM0R1c2VyJTI1M0FmdWxsJTI2c3RhdGUlM0Q1NmU5ZmUyOQ%3D%3D

This is REALLY odd.  Why is ocp4 generating a path list on my app?

Thanks
Marc


Re: Special permissions needs for user to create route with host set?

2019-04-21 Thread Marc Boorshtein
>
> but when I post the exact same JSON as a cluster admin, it gets created no
> problem.  I found some references to OpenShift Online not allowing custom
> domains, but that's it.  Is there some kind of setting that needs to be put
> on the router?
>
>
Documentation bug.  Submitted a PR.
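
For the archives: the missing piece here is usually the create permission on
the routes/custom-host subresource; a sketch of a role (name is a
placeholder) that lets a non-admin set spec.host, bound to whatever user or
service account the operator runs as:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: route-custom-host
rules:
- apiGroups: ["route.openshift.io"]
  resources: ["routes/custom-host"]
  verbs: ["create"]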


Special permissions needs for user to create route with host set?

2019-04-21 Thread Marc Boorshtein
I'm writing an operator that creates a route.  I generate the following
JSON:

{"kind":"Route","apiVersion":"route.openshift.io/v1","id":"openunison-https-test-openunison-openunison","metadata":{"name":"secure-openunison-test-openunison","labels":{"application":"openunison-test-openunison"},"annotations":{"description":"Route
for OpenUnison's https service."}},"spec":{"host":"
testou.apps.ocp47.tremolo.dev","port":{"targetPort":"openunison-secure-test-openunison"},"to":{"kind":"Service","name":"openunison-test-openunison"},"tls":{"termination":"reencrypt","destinationCACertificate":"-BEGIN
CERTIFICATE-\nMIIECjCCAvKgAwIBAgIGAWpBtYTeMA0GCSqGSIb3DQEBCwUAMIGPMQswCQYDVQQG\r\nEwJVUzERMA8GA1UECBMIVmlyZ2luaWExEzARBgNVBAcTCkFsZXhhbmRyaWExGTAX\r\nBgNVBAoTEFRyZW1vbG8gU2VjdXJpdHkxDDAKBgNVBAsTA2s4czEvMC0GA1UEAxMm\r\ndGVzdC1vcGVudW5pc29uLm91b3Auc3ZjLmNsdXN0ZXIubG9jYWwwHhcNMTkwNDIx\r\nMjEwMjU2WhcNMjkwNDE4MjEwMjU2WjCBjzELMAkGA1UEBhMCVVMxETAPBgNVBAgT\r\nCFZpcmdpbmlhMRMwEQYDVQQHEwpBbGV4YW5kcmlhMRkwFwYDVQQKExBUcmVtb2xv\r\nIFNlY3VyaXR5MQwwCgYDVQQLEwNrOHMxLzAtBgNVBAMTJnRlc3Qtb3BlbnVuaXNv\r\nbi5vdW9wLnN2Yy5jbHVzdGVyLmxvY2FsMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A\r\nMIIBCgKCAQEAtD7inyJs7ghpWVHzjYyKACU7wgVySFEztrua4TXh3b+u9Oavxt7c\r\nfDy3GpY24vgMdaGNtq3PINq1mc4drSbxv6a0A0JCy6fEUXdgTWIHeW1VUpSY9n6s\r\n3eg7yJq6B2wJtt09fow6fP/QkQ1pISfe6uhTlGsnBlKA/9Prco3ipktCtiy4uoJi\r\naoR+vmnpIxccN5xfMciIuQ29bT9JCPzXP87rHlaDP4HXlXx/De1cC9qBUT1lmDSl\r\nhHDxn2H/o2LgBrINA2L4qgM39xt/qeskRsd0ElqwuhuFsH7I2yIqReum5KDSriuF\r\nkWTEIqYRWPJR1MqciVk0ciDKGFzRgMT3IwIDAQABo2owaDAPBgNVHRMBAf8EBTAD\r\nAQH/MA4GA1UdDwEB/wQEAwICBDASBgNVHSUBAf8ECDAGBgRVHSUAMDEGA1UdEQQq\r\nMCiCJnRlc3Qtb3BlbnVuaXNvbi5vdW9wLnN2Yy5jbHVzdGVyLmxvY2FsMA0GCSqG\r\nSIb3DQEBCwUAA4IBAQBjOQcbkltm06C+sUqtW3jhKsEcvbg0JzT57QpXUmy/yOL2\r\n35KHlA4TBH17DCwH60l/2jLg6ECBFAQO5tgA6hEkkPk2+y3PZXYuIztangTsifv2\r\n0jSUTqKQCUQwtqgwqUaCr/OjI/bYs56d/ENBxbvJBCPeJiZ1By6N+q0FOY9mJxf0\r\noB+z2x9tgKRHSi3l8enQGgMqIS9B65UW9jigqX+HbIOIwq2GnXuwO5W6FOmgq9/I\r\n9YfftyOTK0U7gkdw8DibDXtrHslnXWD9CmNwUwIAXTleYHAS1iCHFYhitAFCNH4v\r\n+O8Al6m4/d28Y52f9gIuVCYS5m1RpTAsVcDravpX\r\n-END
CERTIFICATE-\n"}}}

{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"
Route.route.openshift.io \"secure-openunison-test-openunison\" is invalid:
spec.host: Forbidden: you do not have permission to set the host field of
the
route","reason":"Invalid","details":{"name":"secure-openunison-test-openunison","group":"
route.openshift.io","kind":"Route","causes":[{"reason":"FieldValueForbidden","message":"Forbidden:
you do not have permission to set the host field of the
route","field":"spec.host"}]},"code":422}

but when I post the exact same JSON as a cluster admin, it gets created no
problem.  I found some references to OpenShift Online not allowing custom
domains, but that's it.  Is there some kind of setting that needs to be put
on the router?

Thanks
Marc


GitHub repo for Java S2I repo?

2019-04-18 Thread Marc Boorshtein
I can't seem to find it anywhere.  My old s2i image isn't working on OCP4;
when I download Maven I get an error message:

Installing Maven 3.3.9
mkdir: cannot create directory '/usr/local/openunison/build': Permission
denied

I'd like to see what I'm doing differently from the main java s2i image.
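
For comparison, the usual fix in the image itself is to make the build
directory writable by the root group, since OpenShift runs the container
with a random UID in group 0; a sketch using the path from the error above:

# in the s2i builder Dockerfile
RUN mkdir -p /usr/local/openunison/build && \
    chgrp -R 0 /usr/local/openunison && \
    chmod -R g=u /usr/local/openunison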

Thanks
Marc


Re: Build RHEL images using BuildConfig on OKD

2019-02-14 Thread Marc Boorshtein
>
>
>
> but I'm pretty sure this is a cert configuration issue in your docker
> daemon on your nodes.  Are you able to run a pod that pulls its image from
> registry.access.redhat.com?  I'm guessing you'll see a similar failure.
>
>
I copied the cert from a RHEL instance and got past this issue.  I still
couldn't build the image, because the host determines what I can install into
the RHEL image, so I added a RHEL node to my OKD instance and the build
process is working great.
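
For anyone following along, the cert workaround was roughly this on each node
that runs builds (the source path is where subscription-manager drops the CA
on a subscribed RHEL box; treat that as an assumption if your layout differs):

mkdir -p /etc/docker/certs.d/registry.access.redhat.com
# copied from the RHEL instance: /etc/rhsm/ca/redhat-uep.pem
cp redhat-uep.pem /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt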

Thanks


Re: Build RHEL images using BuildConfig on OKD

2019-02-13 Thread Marc Boorshtein
> what version of OKD?
>

3.11


>
>> failed to pull image: open /etc/docker/certs.d/
>> registry.access.redhat.com/redhat-ca.crt: no such file or directory
>>
>
> It sounds like your docker daemon configuration may be pointing to a file
> that doesn't exist, can you double check your docker daemon configuration?
> Or check if that file is a broken symlink?  (this would be on your cluster
> nodes)
>
> What is the base image your buildconfig/dockerfile is referencing?
> (providing your buildconfig + dockerfile and level 5 build logs would be
> useful)
>

How do I up the log level?

Here's the Dockerfile:
https://github.com/TremoloSecurity/OpenUnisonS2IDocker/blob/master/Dockerfile.rhel





>
>
>> i can pull images from docker on mac without issue from
>> registry.access.redhat.com, how come I can't from inside of a
>> BuildConfig?  I have a valid login for registry.access.redhat.com.
>>
>> Thanks
>> Marc
>
>
> --
> Ben Parees | OpenShift
>
>


Build RHEL images using BuildConfig on OKD

2019-02-13 Thread Marc Boorshtein
I'm trying to automate the build of our RHEL images on our OKD instance.
When I try to run my build I get:

failed to pull image: open /etc/docker/certs.d/
registry.access.redhat.com/redhat-ca.crt: no such file or directory

I can pull images from Docker on Mac without issue from
registry.access.redhat.com; how come I can't from inside of a BuildConfig?
I have a valid login for registry.access.redhat.com.

Thanks
Marc


Re: Buildah jenkins agent?

2019-02-13 Thread Marc Boorshtein
I should have known that.  Awesome, much easier way to go.

Thanks!

On Wed, Feb 13, 2019 at 1:24 PM Adam Kaplan  wrote:

> I can use a BuildConfig to build and push a generic docker image?
>>
>
> Yes - this is what the Docker build strategy is for [1].
>
> [1]
> https://docs.okd.io/3.11/dev_guide/builds/build_strategies.html#docker-strategy-options
>
> On Wed, Feb 13, 2019 at 12:53 PM Marc Boorshtein 
> wrote:
>
>> Right now on 3.11. I can use a BuildConfig to build and push a generic
>> docker image?  Not one built using an s2i builder?
>>
>> On Wed, Feb 13, 2019, 12:44 PM Adam Kaplan wrote:
>>> Hi Marc,
>>> You can extend our slave-base agent image to yum install buildah. In
>>> general we want to keep the slave-base as minimal as possible (we've said
>>> no to similar requests to include skopeo).
>>>
>>> You can also use OpenShift builds directly to create images - starting
>>> in 4.0 the docker and source strategy builds will use `buildah bud` under
>>> the covers.
>>>
>>> Best,
>>> Adam
>>>
>>> On Wed, Feb 13, 2019 at 11:52 AM Marc Boorshtein 
>>> wrote:
>>>
>>>> I want to build a container using our jenkins in okd.  I'm using the
>>>> maven jenkins agent now quite well and want to do the same thing with some
>>>> docker container images that aren't s2i.  It looks like this should be
>>>> doable via buildah, has anyone created an agent for openshift's jenkins?
>>>>
>>>> Thanks
>>>> Marc
>>>
>>>
>>> --
>>>
>>> ADAM KAPLAN
>>>
>>> SENIOR SOFTWARE ENGINEER - OPENSHIFT
>>>
>>> Red Hat <https://www.redhat.com/>
>>>
>>> 100 E Davie St Raleigh, NC 27601 USA
>>>
>>> adam.kap...@redhat.com  T: +1-919-754-4843 IM: adambkaplan
>>> <https://red.ht/sig>
>>>
>>
>
> --
>
> ADAM KAPLAN
>
> SENIOR SOFTWARE ENGINEER - OPENSHIFT
>
> Red Hat <https://www.redhat.com/>
>
> 100 E Davie St Raleigh, NC 27601 USA
>
> adam.kap...@redhat.com  T: +1-919-754-4843 IM: adambkaplan
> <https://red.ht/sig>
>


Buildah jenkins agent?

2019-02-13 Thread Marc Boorshtein
I want to build a container using our jenkins in okd.  I'm using the maven
jenkins agent now quite well and want to do the same thing with some docker
container images that aren't s2i.  It looks like this should be doable via
buildah, has anyone created an agent for openshift's jenkins?

Thanks
Marc
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: OpenShift Jenkins - Anonymous Web-hooks

2019-02-08 Thread Marc Boorshtein
I did something similar with a reverse proxy.  The proxy embeds the token in
the request to OpenShift.

On Fri, Feb 8, 2019, 9:30 AM Gabe Montero wrote:
> You need to provide a bearer token with sufficient permissions to all the
> OpenShift project(s) involved.
>
> A quick example on how to get such a token is at
> https://github.com/openshift/jenkins-openshift-login-plugin#non-browser-access
> Apply what is done for the example curl invocation to the webhook
> config/invocation.
>
> And remember if your flow is spanning multiple projects, you'll need an SA
> with sufficient roles/permissions to each of those projects.
>
> On Fri, Feb 8, 2019 at 1:24 AM Samuel Martín Moro 
> wrote:
>
>> Hi,
>>
>> Using the generic webhook trigger plugin myself, while still relying on
>> OpenShift authentication logging into Jenkins, I don't remember having
>> anything like this.
>> Although I can't explain why your plugin would refuse this, unless maybe
>> something's wrong in Jenkins permissions matrix?
>>
>> As far as I've seen, generic triggers from a BuildConfig wouldn't allow
>> for multi-branch jobs - or if they do, I'm still looking for a way to
>> retrieve the triggering branch as a variable somewhere (note: that ruddra
>> sample shows the buildconfig has a "ref: master", which would suggest it is
>> not multi-branch capable).
>> So far, Jenkins plugins was my next best solution, although not ideal.
>>
>>
>> Anyway, you might be able to create a role - or clusterrole - and
>> corresponding binding, with something like this (not tested)
>>
>> - apiVersion: rbac.authorization.k8s.io/v1
>>   kind: ClusterRole
>>   metadata:
>> name: bitbucket-jenkins-hook
>>   rules:
>>   - nonResourceURLs: [ "/bitbucket-scmsource-hook/*" ]
>> verbs: [ "get", "post" ]
>>
>> - apiVersion: rbac.authorization.k8s.io/v1
>>   kind: ClusterRoleBinding
>>   metadata:
>> name: bitbucket-jenkins-hook
>>   roleRef:
>> apiGroup: rbac.authorization.k8s.io
>> kind: ClusterRole
>> name: bitbucket-jenkins-hook
>>   subjects:
>>   - apiGroup: rbac.authorization.k8s.io
>> kind: Group
>> name: system:unauthenticated
>>   - apiGroup: rbac.authorization.k8s.io
>> kind: Group
>> name: system:authenticated
>>
>> (see: https://kubernetes.io/docs/reference/access-authn-authz/rbac/)
>>
>>
>>
>> On Fri, Feb 8, 2019 at 7:14 AM Graham Dumpleton 
>> wrote:
>>
>>> I believe you should be using the webhook URL from the pipeline build
>>> config.
>>>
>>> You can get them from the web console page for the pipeline.
>>>
>>> See:
>>>
>>> *
>>> https://ruddra.com/posts/openshift-python-gunicorn-nginx-jenkins-pipelines-part-three/
>>>
>>> Graham
>>>
>>> On 8 Feb 2019, at 5:03 pm, Sean Dawson 
>>> wrote:
>>>
>>> Hi,
>>>
>>> I have Jenkins running in an OpenShift cluster and I have a multi
>>> branch job set up, with the source git repository residing in
>>> Bitbucket server.
>>>
>>> I want to set up a webhook from Bitbucket Server to Jenkins to
>>> trigger builds as soon as there are changes to the repo. In a vanilla
>>> Jenkins installation you are able to simply post the updates to
>>> "${JENKINS_URL}/bitbucket-scmsource-hook/notify" as mentioned in this
>>> article:
>>>
>>>
>>> https://support.cloudbees.com/hc/en-us/articles/11553051-How-to-Trigger-Multibranch-Jobs-from-Bitbucket-Server-#configurationinbitbucketserver
>>>
>>> However, our Jenkins instance is the OpenShift version and uses
>>> OpenShift to authenticate. When I try to post to this URL I get the
>>> following error:
>>>
>>>{
>>>"kind": "Status",
>>>"apiVersion": "v1",
>>>"metadata": {
>>>
>>>},
>>>"status": "Failure",
>>>"message": "forbidden: User \"system:anonymous\" cannot post path
>>> \"/bitbucket-scmsource-hook/notify\": no RBAC policy matched",
>>>"reason": "Forbidden",
>>>"details": {
>>>
>>>},
>>>"code": 403
>>>}
>>>
>>> Does anyone know of a way to allow the "system:anonymous" user to post
>>> to that path?
>>>
>>> Thanks
>>>
>>> Sean
>>>
>>
>>
>> --
>> Samuel Martín Moro
>> {EPITECH.} 2011
>>
>> "Nobody wants to say how this works.
>>  Maybe nobody knows ..."
>>   Xorg.conf(5)

Correct way to patch OKD 3.11?

2018-12-04 Thread Marc Boorshtein
With the API server moving to a pod, what's the correct way to patch OKD as
of 3.10 (and 3.11)?  I wouldn't think simply running yum -y upgrade would do
it, and I don't see a playbook that seems to do it.  Do I need to manually
pull the images onto individual nodes, or is there a playbook I should run?
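
If it saves anyone a search: my understanding (please correct me) is that
z-stream patching goes through the same upgrade playbooks as minor upgrades,
roughly:

# after updating the openshift-ansible checkout/RPM on the box that runs the playbooks
ansible-playbook -i <inventory> \
  playbooks/byo/openshift-cluster/upgrades/v3_11/upgrade.yml
# or upgrade_control_plane.yml followed by upgrade_nodes.yml to do it in phases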

Thanks
Marc


Re: Can't upgrade okd from 3.10 --> 3.11, odd "failed to backup etcd" error

2018-11-26 Thread Marc Boorshtein
On Mon, Nov 26, 2018 at 2:31 PM Scott Dodson  wrote:

> Was it Ansible 2.7.2, as far as we know everything works in 2.7.2 but
> 2.7.1 and 2.7.0 definitely had problems.
>
>
Looks like it was ansible-2.7.0-1.el7.noarch


Can't add node to 3.11 cluster

2018-11-26 Thread Marc Boorshtein
I have a 3.11 cluster running on AWS.  I'm trying to add a compute node but
I can't get past certificate approval.  I added

[new_nodes]
X.X.X.X openshift_node_group_name='node-config-compute'

to my inventory and ran scaleup.  It fails at

TASK [Approve node certificates when bootstrapping]
*
FAILED - RETRYING: Approve node certificates when bootstrapping (30 retries
left).
FAILED - RETRYING: Approve node certificates when bootstrapping (29 retries
left).

There are no open certificate signing requests.  I found
https://bugzilla.redhat.com/show_bug.cgi?id=1622945 but the latest 3.11
openshift-ansible is still causing the issue.  Is there a workaround?
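
For reference, the manual workaround discussed in that BZ boils down to
approving the CSRs by hand once they show up:

oc get csr
# approve everything pending (only on a cluster you trust)
oc get csr -o name | xargs oc adm certificate approve

In my case nothing ever shows up in `oc get csr`, which seems to be the real
problem.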

Thanks
Marc


Re: Jenkins with OpenShift login enabled, how to enable webhooks?

2018-11-26 Thread Marc Boorshtein
>
>
>
> If you're referring to using a GitHub webhook, we ended up having to
> create a simple application that would receive GitHub webhook events,
> verify the request against the webhook secret, and trigger the desired
> OpenShift build or Jenkins job.  This is primarily because GitHub webhooks
> don't really support authentication mechanisms other than the webhook
> secret.
>
>
>
Thanks Andy, we went a somewhat different (but similar) route.  We created a
reverse proxy that only accepts requests to the build URL and injects the
OAuth2 service account token into the call to our Jenkins.  I like the idea
of verifying the token first, but I don't think it's necessary.  It could
reduce exposure if a vulnerability were found in Jenkins, but I can also cut
that down in other ways (I might clear all content from the request).
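
The call the proxy ends up making looks roughly like this (a sketch; the job
name and Jenkins route are placeholders, and `oc sa get-token` is the 3.x
command for pulling the service account's token):

TOKEN=$(oc sa get-token jenkins -n jenkins)
curl -X POST \
  -H "Authorization: Bearer $TOKEN" \
  https://jenkins.example.com/job/my-pipeline/build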

Thanks
Marc


Re: Can't upgrade okd from 3.10 --> 3.11, odd "failed to backup etcd" error

2018-11-25 Thread Marc Boorshtein
Figured out the issue.  My Ansible box was running 2.7.  Downgraded to 2.6
and was able to run the install.

On Sun, Nov 25, 2018 at 2:37 PM Marc Boorshtein 
wrote:

> I'm trying to upgrade OKD from 3.10 to 3.11.  When I run the control plane
> upgrade I get the following error:
>
> TASK [fail]
> ***
> fatal: [localhost]: FAILED! => {"changed": false, "msg": "Upgrade cannot
> continue. The following hosts did not complete etcd backup: 10.0.4.57"}
> to retry, use: --limit
> @/root/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_11/upgrade_control_plane.retry
>
> PLAY RECAP
> 
> 10.X.X.X : ok=21   changed=1unreachable=0failed=0
>
> 10.X.X.X   : ok=22   changed=1unreachable=0failed=0
> 10.X.X.X  : ok=165  changed=16   unreachable=0
> failed=0
> Zone=us-east-1a: ok=1changed=0unreachable=0
> failed=0
> localhost  : ok=27   changed=0unreachable=0
> failed=1
>
> Here's the odd thing: I don't have localhost anywhere in my inventory.  I
> run a separate box with Ansible on it just for running playbooks.  Googling
> took me to https://github.com/openshift/openshift-ansible/issues/4475 but
> I'm not using --connection localhost
>
> Thanks
>


Can't upgrade okd from 3.10 --> 3.11, odd "failed to backup etcd" error

2018-11-25 Thread Marc Boorshtein
I'm trying to upgrade OKD from 3.10 to 3.11.  When I run the control plane
upgrade I get the following error:

TASK [fail]
***
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Upgrade cannot
continue. The following hosts did not complete etcd backup: 10.0.4.57"}
to retry, use: --limit
@/root/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_11/upgrade_control_plane.retry

PLAY RECAP

10.X.X.X : ok=21   changed=1unreachable=0failed=0
10.X.X.X   : ok=22   changed=1unreachable=0failed=0
10.X.X.X  : ok=165  changed=16   unreachable=0failed=0

Zone=us-east-1a: ok=1changed=0unreachable=0
failed=0
localhost  : ok=27   changed=0unreachable=0
failed=1

Here's the odd thing: I don't have localhost anywhere in my inventory.  I
run a separate box with Ansible on it just for running playbooks.  Googling
took me to https://github.com/openshift/openshift-ansible/issues/4475 but
I'm not using --connection localhost

Thanks


Jenkins with OpenShift login enabled, how to enable webhooks?

2018-11-25 Thread Marc Boorshtein
I'm trying to use a webhook to trigger a job.  When I'm authenticated it
works great, but coming in anonymously the request always takes me to the
OpenShift login.  Is there a way to exclude specific URLs from having to
authenticate via OpenShift?  I see that I can create a bearer token using a
service account, but given RBAC's granularity I'd rather not do that.  I am
specifically trying to set up a webhook that will trigger a Jenkins job to
run when a source container is pushed.

Thanks
Marc


Re: How to push result of build to dockerhub from pipeline?

2018-10-26 Thread Marc Boorshtein
> but maybe I should have asked this first: why isn't your OpenShift build
> pushing the image to dockerhub directly if that's where you want the image?
>
>
>
Because I'm off my Google game today.  I hadn't seen
https://blog.openshift.com/pushing-application-images-to-an-external-registry/
but it's working perfectly without a Jenkins pipeline.
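
For the archives, that approach boils down to a push secret plus the build's
output stanza pointing at the external registry (names below are
placeholders):

oc create secret docker-registry dockerhub \
  --docker-server=docker.io \
  --docker-username=<user> --docker-password=<password>

# in the BuildConfig spec:
output:
  to:
    kind: DockerImage
    name: docker.io/<org>/<image>:latest
  pushSecret:
    name: dockerhub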

Thanks


How to push result of build to dockerhub from pipeline?

2018-10-26 Thread Marc Boorshtein
Running OKD 3.10, I have a build and a pipeline that runs the build.  For
the deploy step, I want to push the resulting image up to dockerhub.  I'm
not seeing any examples of how to do this, and I can't find any reference
docs for the OpenShift client plugin for Jenkins that say "here's all the
available functions".  I feel like this is easy, but I'm having a hard time
finding it.

Thanks
Marc


difficulty upgrading from 3.9 to 3.10 with glusterfs.

2018-10-10 Thread Marc Boorshtein
I'm trying to run the control-plane upgrade and it's failing when checking
the health of GlusterFS.  Here's the output from Ansible:

 (1, '\r\n{"msg": "volume heketidbstorage is not ready",
"failed": true, "state": "unknown", "changed": false, "invocation":
{"module_args": {"cluster_name": "storage", "exclude_node": "os.demo.lan",
"oc_namespace": "glusterfs", "oc_conf":
"/etc/origin/master/admin.kubeconfig", "oc_bin": "oc"}}}\r\n',

I ran oc adm migrate storage --include=* --loglevel=2 --confirm --config
/etc/origin/master/admin.kubeconfig which output no errors.  How do I
approach debugging this?

Thanks
Marc


Re: how to disable the ansible service broker?

2018-10-10 Thread Marc Boorshtein
Which release is this one?
>
>
>
3.9


how to disable the ansible service broker?

2018-10-10 Thread Marc Boorshtein
I added the following to my inventory:

ansible_service_broker_install=false
ansible_service_broker_remove=true

and then ran the api-server playbook, but it's still there.  Is there a
different playbook I'm supposed to use?

Thanks
Marc


Possible to run Mutating Webhook BEFORE built in openshift mutators?

2018-08-04 Thread Marc Boorshtein
I'm trying to create a webhook that will change the security context the
containers run in.  The pod definition that comes to me has already been
manipulated by OpenShift, and it's making it very difficult to get the right
settings.

Thanks
Marc


How to specify admission controller correctly?

2018-07-24 Thread Marc Boorshtein
I've got Origin 3.9 running and am trying to set up an admission controller
webhook.  I added the appropriate configuration to master-config.yaml.  I
added the following:

kind: ValidatingWebhookConfiguration
apiVersion: admissionregistration.k8s.io/v1beta1
metadata:
  name: opa-validating-webhook
webhooks:
  - name: validating-webhook.openpolicyagent.org
rules:
  - operations: ["CREATE", "UPDATE"]
apiGroups: ["*"]
apiVersions: ["*"]
resources: ["pods"]
clientConfig:
  #url: https://unison-opa.unison.svc/kubernetes/admission/reveiw
  service:
namespace: unison
name: unison-opa


here's the unison-opa service:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-07-18T01:35:21Z
  labels:
app: unison
  name: unison-opa
  namespace: unison
  resourceVersion: "13118928"
  selfLink: /api/v1/namespaces/unison/services/unison-opa
  uid: d596be9f-8a2a-11e8-9ee7-525400887c40
spec:
  clusterIP: 172.30.254.250
  ports:
  - name: 443-tcp
port: 443
protocol: TCP
targetPort: 8444
  selector:
deploymentconfig: unison
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

Here's what I see in the master logs:
Jul 24 14:21:26 os atomic-openshift-master-api: W0724 14:21:26.389179
1723 admission.go:252] Failed calling webhook, failing open
validating-webhook.openpolicyagent.org: failed calling admission webhook "
validating-webhook.openpolicyagent.org": Post
https://unison-opa.unison.svc:443/?timeout=30s: net/http: request canceled
while waiting for connection (Client.Timeout exceeded while awaiting
headers)
Jul 24 14:21:26 os atomic-openshift-master-api: E0724 14:21:26.389241
1723 admission.go:253] failed calling admission webhook "
validating-webhook.openpolicyagent.org": Post
https://unison-opa.unison.svc:443/?timeout=30s: net/http: request canceled
while waiting for connection (Client.Timeout exceeded while awaiting
headers)

I've also tried running through the router and going directly to 8444.
Nothing seems to work.  The service is set up correctly; I can connect from
inside of containers.
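
One thing that might be worth trying (a sketch, not verified on 3.9): point
clientConfig at a URL the master host can actually reach and pass the CA
explicitly, in case the master can't route to cluster service IPs:

  clientConfig:
    # placeholder: any HTTPS endpoint the master host can resolve and reach
    url: https://unison-opa.example.com/kubernetes/admission/review
    # base64 of the CA that signed that endpoint's serving certificate
    caBundle: <base64-encoded CA>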

Thanks


After patching, Origin 3.9 Heketi Server stops working

2018-07-22 Thread Marc Boorshtein
I updated my Origin 3.9 install on CentOS.  After rebooting, Heketi won't
start anymore.  When trying to start the heketi-storage container I get the
following message in the pod events:

(combined from similar events): MountVolume.SetUp failed for volume "db" :
mount failed: mount failed: exit status 1 Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for
/var/lib/origin/openshift.local.volumes/pods/82562195-8ddd-11e8-bcc6-525400887c40/volumes/
kubernetes.io~glusterfs/db --scope -- mount -t glusterfs -o
backup-volfile-servers=192.168.2.139:192
.168.2.140:192.168.2.141,log-level=ERROR,log-file=/var/lib/origin/openshift.local.volumes/plugins/
kubernetes.io/glusterfs/db/heketi-storage-2-tmv2w-glusterfs.log
192.168.2.139:heketidbstorage
/var/lib/origin/openshift.local.volumes/pods/82562195-8ddd-11e8-bcc6-525400887c40/volumes/
kubernetes.io~glusterfs/db Output: Running scope as unit run-19776.scope.
Mount failed. Please check the log file for more details. the following
error information was pulled from the glusterfs log to help diagnose this
issue: [2018-07-22 18:52:24.527924] I [fuse-bridge.c:5839:fini] 0-fuse:
Closing fuse connection to
'/var/lib/origin/openshift.local.volumes/pods/82562195-8ddd-11e8-bcc6-525400887c40/volumes/
kubernetes.io~glusterfs/db'. [2018-07-22 18:52:24.528231] W
[glusterfsd.c:1300:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7e25)
[0x7f10eb904e25] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5)
[0x55675e9840d5] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b)
[0x55675e983efb] ) 0-: received signum (15), shutting down

All 3 of my gluster containers are running.  I turned up logging to see if
I could find the issue, here's what the mount log has:

[2018-07-22 18:34:10.681591] I [rpc-clnt.c:2001:rpc_clnt_reconfig]
0-heketidbstorage-client-0: changing port to 49152 (from 0)
[2018-07-22 18:34:10.684468] E [MSGID: 114058]
[client-handshake.c:1564:client_query_portmap_cbk]
0-heketidbstorage-client-1: failed to get the port number for remote
subvolume. Please run 'gluster volume status' on server to see if brick
process is running.
[2018-07-22 18:34:10.684567] I [MSGID: 114018]
[client.c:2280:client_rpc_notify] 0-heketidbstorage-client-1: disconnected
from heketidbstorage-client-1. Client process will keep trying to connect
to glusterd until brick's port is available
[2018-07-22 18:34:10.686084] I [rpc-clnt.c:2001:rpc_clnt_reconfig]
0-heketidbstorage-client-2: changing port to 49152 (from 0)
[2018-07-22 18:34:10.688890] I [MSGID: 114057]
[client-handshake.c:1477:select_server_supported_programs]
0-heketidbstorage-client-2: Using Program GlusterFS 3.3, Num (1298437),
Version (330)
[2018-07-22 18:34:10.688989] I [MSGID: 114057]
[client-handshake.c:1477:select_server_supported_programs]
0-heketidbstorage-client-0: Using Program GlusterFS 3.3, Num (1298437),
Version (330)
[2018-07-22 18:34:10.689312] W [MSGID: 114043]
[client-handshake.c:1108:client_setvolume_cbk] 0-heketidbstorage-client-2:
failed to set the volume [Permission denied]
[2018-07-22 18:34:10.689334] W [MSGID: 114007]
[client-handshake.c:1137:client_setvolume_cbk] 0-heketidbstorage-client-2:
failed to get 'process-uuid' from reply dict [Invalid argument]
[2018-07-22 18:34:10.689346] E [MSGID: 114044]
[client-handshake.c:1143:client_setvolume_cbk] 0-heketidbstorage-client-2:
SETVOLUME on remote-host failed: Authentication failed [Permission denied]
[2018-07-22 18:34:10.689366] I [MSGID: 114049]
[client-handshake.c:1257:client_setvolume_cbk] 0-heketidbstorage-client-2:
sending AUTH_FAILED event
[2018-07-22 18:34:10.689389] E [fuse-bridge.c:5328:notify] 0-fuse: Server
authenication failed. Shutting down.
[2018-07-22 18:34:10.689407] I [fuse-bridge.c:5834:fini] 0-fuse: Unmounting
'/tmp/mnt'.
[2018-07-22 18:34:10.689456] I [fuse-bridge.c:5839:fini] 0-fuse: Closing
fuse connection to '/tmp/mnt'.
[2018-07-22 18:34:10.689511] I [MSGID: 114046]
[client-handshake.c:1230:client_setvolume_cbk] 0-heketidbstorage-client-0:
Connected to heketidbstorage-client-0, attached to remote volume
'/var/lib/heketi/mounts/vg_113359a58d40f5a772995c5db1cfa55f/brick_3e0b346013c97202e45fa662c28675b7/brick'.
[2018-07-22 18:34:10.689530] I [MSGID: 114047]
[client-handshake.c:1241:client_setvolume_cbk] 0-heketidbstorage-client-0:
Server and Client lk-version numbers are not same, reopening the fds
[2018-07-22 18:34:10.689587] I [MSGID: 108005]
[afr-common.c:4706:afr_notify] 0-heketidbstorage-replicate-0: Subvolume
'heketidbstorage-client-0' came back up; going online.
[2018-07-22 18:34:10.689893] W [glusterfsd.c:1300:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7e25) [0x7f68d0e51e25]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x556f6c5e00d5]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x556f6c5dfefb] ) 0-:
received signum (15), shutting down

Any thoughts?

Thanks

How to debug FlexMount failure

2018-07-10 Thread Marc Boorshtein
We're deploying Origin 3.9 with a flexvolume driver that mounts CIFS drives.
We're seeing something very odd.  The driver runs, we see the mount command
complete, and we can see the CIFS-mounted directory, but the pod reports a
timeout when trying to mount the volume.  We're not finding any errors that
tell us much.  Should we be looking on the API server?  The node?  Has anyone
seen something like this before?  What's also odd is that there's another
CIFS mount point using the same flexvolume script in the pod, and that one
works great.
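
In case it helps anyone hitting the same thing, the node side is where the
flexvolume exec output lands; on Origin 3.9 that's roughly (the unit name is
atomic-openshift-node on OCP):

# on the node where the pod was scheduled
journalctl -u origin-node --since "15 min ago" | grep -i -e flexvolume -e cifs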


Re: how to run ansible-playbook in a container on origin 3.9

2018-06-21 Thread Marc Boorshtein
Thanks! I'll check it out.

On Thu, Jun 21, 2018, 7:39 AM Aleksandar Kostadinov 
wrote:

> FYI there is an image `origin-ansible` [1] in case it has ansible
> readily installed and working.
>
> [1] https://github.com/openshift/openshift-ansible
>
> Marc Boorshtein wrote on 06/21/18 07:48:
> > I created a simple container on centos7 designed to run an ansible
> > playbook.  Runs great on local docker, but in openshift I get permission
> > denied errors.  I added ANSIBLE_LOCAL_TMP=/tmp as an environment
> > variable but I'm still getting the error that local directories can't be
> > created:
> >
> > fatal: [node.local.lan]: FAILED! => {
> >  "msg": "Unable to create local directories(/.ansible/cp): [Errno
> > 13] Permission denied: '/.ansible'"
> > }
> >
> > here's the entire output
> > ansible-playbook 2.5.5
> >config file = /etc/ansible/ansible.cfg
> >configured module search path = [u'/.ansible/plugins/modules',
> > u'/usr/share/ansible/plugins/modules']
> >ansible python module location =
> /usr/lib/python2.7/site-packages/ansible
> >executable location = /usr/bin/ansible-playbook
> >python version = 2.7.5 (default, Aug  4 2017, 00:39:18) [GCC 4.8.5
> > 20150623 (Red Hat 4.8.5-16)]
> > Using /etc/ansible/ansible.cfg as config file
> > Parsed /etc/secrets/hosts inventory source with ini plugin
> > PLAYBOOK: push-keytabs.yaml
> > 
> > 1 plays in /etc/config/push-keytabs.yaml
> > PLAY [openshift-nodes]
> > *
> > TASK [Gathering Facts]
> > *
> > task path: /etc/config/push-keytabs.yaml:2
> > Using module file
> > /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py
> >  ESTABLISH SSH CONNECTION FOR USER: sa-kt-deployment
> > fatal: [node.local.lan]: FAILED! => {
> >  "msg": "Unable to create local directories(/.ansible/cp): [Errno
> > 13] Permission denied: '/.ansible'"
> > }
> > PLAY RECAP
> > *
> > node.local.lan   : ok=0changed=0unreachable=0failed=1
> >   [WARNING]: Could not create retry file
> '/etc/config/push-keytabs.retry'.
> > [Errno 30] Read-only file system: u'/etc/config/push-keytabs.retry'
> >
> > Is there another variable i need to set?
> >
> > Thanks
> > Marc
> >
> >


Re: how to run ansible-playbook in a container on origin 3.9

2018-06-21 Thread Marc Boorshtein
>
>
>
> From the logs, it clearly says it's a permission issue,
>
>
Worked that out; I was wondering if anyone has experience running
ansible-playbook in OpenShift and knows what environment variable, option,
etc. could let me have it NOT try writing to /.ansible
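
In case anyone else hits this: the /.ansible/cp path in the error is the SSH
ControlPath directory, which defaults under $HOME, so pointing HOME (and, to
be thorough, the other temp paths) at something writable is usually enough;
a sketch of the container env:

env:
- name: HOME
  value: /tmp
- name: ANSIBLE_LOCAL_TEMP
  value: /tmp/.ansible/tmp
- name: ANSIBLE_SSH_CONTROL_PATH_DIR
  value: /tmp/.ansible/cp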


how to run ansible-playbook in a container on origin 3.9

2018-06-20 Thread Marc Boorshtein
I created a simple container on CentOS 7 designed to run an Ansible
playbook.  It runs great on local Docker, but in OpenShift I get
permission-denied errors.  I added ANSIBLE_LOCAL_TMP=/tmp as an environment
variable, but I'm still getting the error that local directories can't be
created:

fatal: [node.local.lan]: FAILED! => {
"msg": "Unable to create local directories(/.ansible/cp): [Errno 13]
Permission denied: '/.ansible'"
}

Here's the entire output:
ansible-playbook 2.5.5
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/.ansible/plugins/modules',
u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible-playbook
  python version = 2.7.5 (default, Aug  4 2017, 00:39:18) [GCC 4.8.5
20150623 (Red Hat 4.8.5-16)]
Using /etc/ansible/ansible.cfg as config file
Parsed /etc/secrets/hosts inventory source with ini plugin
PLAYBOOK: push-keytabs.yaml

1 plays in /etc/config/push-keytabs.yaml
PLAY [openshift-nodes]
*
TASK [Gathering Facts]
*
task path: /etc/config/push-keytabs.yaml:2
Using module file
/usr/lib/python2.7/site-packages/ansible/modules/system/setup.py
 ESTABLISH SSH CONNECTION FOR USER: sa-kt-deployment
fatal: [node.local.lan]: FAILED! => {
"msg": "Unable to create local directories(/.ansible/cp): [Errno 13]
Permission denied: '/.ansible'"
}
PLAY RECAP
*
node.local.lan   : ok=0changed=0unreachable=0failed=1
 [WARNING]: Could not create retry file '/etc/config/push-keytabs.retry'.
[Errno 30] Read-only file system: u'/etc/config/push-keytabs.retry'

Is there another variable I need to set?

Thanks
Marc


Re: Requirements for Router Re-encrypt destination certificates?

2018-06-04 Thread Marc Boorshtein
Turns out that if you generate a new cert but don't give the router the new
destination CA cert, it doesn't work.  Changing the extensions did the trick.
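
For reference, one way to regenerate the destination cert as a plain server
cert signed by its own small CA (a sketch; adjust names and lifetimes):

# 1) a throwaway CA (this becomes the route's destinationCACertificate)
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -keyout ca.key -out ca.crt -subj "/CN=local-ca"
# 2) a server key and CSR
openssl req -newkey rsa:2048 -nodes \
  -keyout tls.key -out tls.csr -subj "/CN=unison-scalejs-rh.tremolo.io"
# 3) sign it as a plain server cert (serverAuth, no CA:TRUE)
openssl x509 -req -in tls.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 3650 -out tls.crt \
  -extfile <(printf "keyUsage=digitalSignature,keyEncipherment\nextendedKeyUsage=serverAuth")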

Thanks Jordan

On Mon, Jun 4, 2018 at 11:16 AM Marc Boorshtein 
wrote:

> On Sat, Jun 2, 2018 at 3:25 PM Jordan Liggitt  wrote:
>
>> The only differences I see are in key usage restrictions
>>
>>
> same issue:
>
> Certificate:
> Data:
> Version: 3 (0x2)
> Serial Number: 1528124732081 (0x163cb54f2b1)
> Signature Algorithm: sha256WithRSAEncryption
> Issuer: C = dev, ST = dev, L = dev, O = dev, OU = dev, CN =
> unison-scalejs-rh.tremolo.io
> Validity
> Not Before: Jun  4 00:00:00 2018 GMT
> Not After : Jun  1 00:00:00 2028 GMT
> Subject: C = dev, ST = dev, L = dev, O = dev, OU = dev, CN =
> unison-scalejs-rh.tremolo.io
> Subject Public Key Info:
> Public Key Algorithm: rsaEncryption
> Public-Key: (2048 bit)
> Modulus:
> 00:a1:e3:8e:4f:b1:f1:3a:15:4a:bc:e2:ef:0c:01:
> 1a:98:16:d1:f2:08:96:25:eb:e8:f6:d0:b9:26:01:
> ed:38:9c:d4:57:58:b8:0e:41:53:5b:71:50:28:27:
> ee:45:17:9e:2c:33:9f:2c:40:44:6b:da:04:f4:a8:
> 56:0d:6a:5b:bd:e2:76:e2:e2:91:cf:88:59:c6:31:
> 7d:24:53:1e:42:b4:ac:83:26:b5:33:1a:d0:03:73:
> 62:25:48:5f:f9:6e:74:6b:c7:f7:84:1a:78:db:f5:
> 30:92:97:d5:28:48:bb:ca:28:38:c8:fa:fe:11:54:
> 03:5f:51:82:5d:f0:c4:f6:46:5b:dd:5b:ee:0a:99:
> f1:91:2d:c9:c0:d2:f7:e1:4a:5b:ad:9e:dd:19:f0:
> 1b:08:be:a0:98:23:38:32:40:64:1f:e4:9f:10:43:
> f7:1b:fa:88:55:54:46:46:fc:88:b3:e9:f2:41:7e:
> 6c:93:f2:34:7a:c0:5a:aa:18:35:3e:35:e6:7b:bb:
> e3:77:36:ab:fd:9f:2f:62:f6:33:d5:7a:61:e9:9f:
> 71:42:fa:0a:3f:9c:87:50:87:59:ea:ce:13:23:70:
> 4d:71:11:0b:0d:24:77:c1:9b:c5:38:00:c9:e0:5c:
> a5:29:61:5d:33:f1:53:0a:57:72:e2:69:fa:54:0a:
> 5a:c7
> Exponent: 65537 (0x10001)
> X509v3 extensions:
> X509v3 Basic Constraints: critical
> CA:TRUE
> X509v3 Key Usage: critical
> Certificate Sign
> X509v3 Extended Key Usage: critical
> Any Extended Key Usage
> Signature Algorithm: sha256WithRSAEncryption
>  91:66:93:bc:27:1c:43:48:90:5a:dd:46:8b:d0:43:90:68:71:
>  74:64:47:95:fe:c6:a8:f2:62:40:0e:31:aa:0e:4a:fa:92:b4:
>  ec:d4:b9:78:85:76:ab:ed:2a:5e:7d:07:c3:ed:8b:10:6b:f0:
>  6f:5a:c0:5d:f2:8c:d0:99:2b:12:0c:cc:a3:ae:a6:e3:a8:68:
>  05:62:7c:d3:82:ad:9a:4c:25:d9:a1:23:ca:a0:b1:71:17:e2:
>  37:c9:6f:f2:13:b6:71:ac:61:39:fd:c8:aa:32:cc:b9:fb:81:
>  c6:9b:36:18:95:16:82:a6:76:81:c2:24:03:c7:40:05:a4:f8:
>  ef:4d:15:af:a2:5e:0a:0f:41:20:8d:7f:80:e0:29:b2:90:46:
>  a2:e3:7a:20:a8:db:be:5f:19:31:66:4d:fd:e9:17:b1:84:c9:
>  03:0b:29:70:72:24:30:4e:2d:26:7f:ea:ef:45:d8:64:03:9d:
>  1e:43:51:01:db:f9:44:a7:d8:46:b8:93:d0:49:65:78:3b:5c:
>  78:f5:b5:ca:c0:eb:fa:17:68:0d:87:5d:2f:3e:4b:fc:b8:4b:
>  97:d3:9a:3d:74:ec:6d:39:6a:7c:ab:61:df:b4:bd:e0:f6:1e:
>  60:bc:50:7b:0c:83:ec:12:d6:93:4d:f5:70:4e:36:53:7c:44:
>  1c:fa:f7:db
> -----BEGIN CERTIFICATE-----
> MIIDkTCCAnmgAwIBAgIGAWPLVPKxMA0GCSqGSIb3DQEBCwUAMG0xDDAKBgNVBAYT
> A2RldjEMMAoGA1UECBMDZGV2MQwwCgYDVQQHEwNkZXYxDDAKBgNVBAoTA2RldjEM
> MAoGA1UECxMDZGV2MSUwIwYDVQQDExx1bmlzb24tc2NhbGVqcy1yaC50cmVtb2xv
> LmlvMB4XDTE4MDYwNDAwMDAwMFoXDTI4MDYwMTAwMDAwMFowbTEMMAoGA1UEBhMD
> ZGV2MQwwCgYDVQQIEwNkZXYxDDAKBgNVBAcTA2RldjEMMAoGA1UEChMDZGV2MQww
> CgYDVQQLEwNkZXYxJTAjBgNVBAMTHHVuaXNvbi1zY2FsZWpzLXJoLnRyZW1vbG8u
> aW8wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCh445PsfE6FUq84u8M
> ARqYFtHyCJYl6+j20LkmAe04nNRXWLgOQVNbcVAoJ+5FF54sM58sQERr2gT0qFYN
> alu94nbi4pHPiFnGMX0kUx5CtKyDJrUzGtADc2IlSF/5bnRrx/eEGnjb9TCSl9Uo
> SLvKKDjI+v4RVANfUYJd8MT2RlvdW+4KmfGRLcnA0vfhSlutnt0Z8BsIvqCYIzgy
> QGQf5J8QQ/cb+ohVVEZG/Iiz6fJBfmyT8jR6wFqqGDU+NeZ7u+N3Nqv9ny9i9jPV
> emHpn3FC+go/nIdQh1nqzhMjcE1xEQsNJHfBm8U4AMngXKUpYV0z8VMKV3LiafpU
> ClrHAgMBAAGjNzA1MA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgIEMBIG
> A1UdJQEB/wQIMAYGBFUdJQAwDQYJKoZIhvcNAQELBQADggEBAJFmk7wnHENIkFrd
> RovQQ5BocXRkR5X+xqjyYkAOMaoOSvqStOzUuXiFdqvtKl59B8PtixBr8G9awF3y
> jNCZKxIMzKOupuOoaAVifNOCrZpMJdmhI8qgsXEX4jfJb/ITtnGsYTn9yKoyzLn7
> gcabNhiVFoKmdoHCJAPHQAWk+O9NFa+iXgoPQSCNf4DgKbKQRqLjeiCo275fGTFm
> Tf3pF7GEyQMLKXByJDBOLSZ/6u9F2GQDnR5DUQHb+USn2Ea4k9BJZXg7XHj1tcrA
> 6/oX

Requirements for Router Re-encrypt destination certificates?

2018-06-02 Thread Marc Boorshtein
Something seems odd to me about setting up a route (origin 3.9): I can
create a route with re-encrypt if the cert is signed by a self-signed CA,
but the route doesn't work if the destination certificate is self-signed
and marked as a CA.  For example, this destination certificate does NOT work
with the router:

-----BEGIN CERTIFICATE-----
MIIDlTCCAn2gAwIBAgIGAWO2zOVIMA0GCSqGSIb3DQEBCwUAMG0xDDAKBgNVBAYT
A2RldjEMMAoGA1UECBMDZGV2MQwwCgYDVQQHEwNkZXYxDDAKBgNVBAoTA2RldjEM
MAoGA1UECxMDZGV2MSUwIwYDVQQDExx1bmlzb24tc2NhbGVqcy1yaC50cmVtb2xv
LmlvMB4XDTE4MDUzMTAwMDAwMFoXDTI4MDUyODAwMDAwMFowbTEMMAoGA1UEBhMD
ZGV2MQwwCgYDVQQIEwNkZXYxDDAKBgNVBAcTA2RldjEMMAoGA1UEChMDZGV2MQww
CgYDVQQLEwNkZXYxJTAjBgNVBAMTHHVuaXNvbi1zY2FsZWpzLXJoLnRyZW1vbG8u
aW8wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCSaec22QonMOU2a/0y
QwOduMlCwQEPMu8E2b1sNAiL5K22i+3i7ozE+/r4AyMAKjvc2TRbObbMrHDnJBgV
WigkaTeSLWQdRol4WlgeFtbYH+S/vWxSsm2dAPpt8wZpuENa6ptK9khPa8n0IhLG
O31UPTEyEIXg/cg20x1+cRcdMCVWSD7F1m3Ia4wvUuH7g21fWCy1ljkbPPMDqI+b
DnrLzsJjgmE8rKbw9dYm7irc3Rgd1zW4Rv/2Wg1JeDWJ3CrWCZPouC2qh1PWgUU2
sMs72cL9PPwHUnKHyBT7RwDXjEI0RjVPQ3jwdXnhaHel4npXP+ByYfaa0jGw4DxQ
vHSTAgMBAAGjOzA5MA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgWgMBYG
A1UdJQEB/wQMMAoGCCsGAQUFBwMBMA0GCSqGSIb3DQEBCwUAA4IBAQANboUIllvD
FRoBAOivn2N9BqRDS4c6JlPGZcApv0kr07+gjXziREh1+vUBUjBpCkX+oGWj2ZBe
v714ewxI1Hyr5YG5i8aJEO32GANP+2yesSMLyPGIIKacBYhgctJiMZH+QtZBahqu
jg87XXlIYwOGMAaelRjvJuqRFfkh5xYzCvHYxP26yOT9CqvEv5EsvCss13ZylIsb
U1PX2Xu3FPu+LY2ayS+ZVPRL6J1GkIGO2LhWF00elVk1capS5c6i9Z/TbfjjN8SJ
mYLEuOzeqjcbnxOZU6LzTECfU9SrFXTF3sh/iRqBWrJ69H1IJFpdLsT38a6N4+dZ
yAIcbTIyOcaN
-----END CERTIFICATE-----


however, this cert does (and its corresponding CA):

-----BEGIN CERTIFICATE-----
MIIDHDCCAgSgAwIBAgIJANka1xITATPtMA0GCSqGSIb3DQEBCwUAMBIxEDAOBgNV
BAMMB2t1YmUtY2EwHhcNMTgwNjAyMTkwMjM5WhcNMjgwNTMwMTkwMjM5WjBtMQww
CgYDVQQGEwNkZXYxDDAKBgNVBAgTA2RldjEMMAoGA1UEBxMDZGV2MQwwCgYDVQQK
EwNkZXYxDDAKBgNVBAsTA2RldjElMCMGA1UEAxMcdW5pc29uLXNjYWxlanMtcmgu
dHJlbW9sby5pbzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKRahgUI
umjSD0Yz6Fw/k0DFDDnmlekYLkFGgYz+Z2yxWUVOJo9VLWx0RbkUknul1NB//cW4
lN8Wc7C9gfJJ7zI/v3C2L+N/3f2yp8xshmQzQB+xnjkZjuqXSgMIQWEUHnfaiM8C
1AmeQ07qFbssPnVzlBr7ukQMwU7StI64PDQ77HAT406lf7aVCvikMqKUf40LOaz3
GtWP6bnGPhvMgYytbCysUUP5osLmQeEokxXul77fTeEfBtKX0ITpnZi+daUkFwXi
5NvckN2dZA7wZ+Vat/tZzfTYycHlUF3eWW/9T8cjV0L0V2uT3hXBuXwNw1CXeLcZ
Gf2/8HL/yoXP6VUCAwEAAaMaMBgwCQYDVR0TBAIwADALBgNVHQ8EBAMCBeAwDQYJ
KoZIhvcNAQELBQADggEBAGu1HvIcINGpyCBXhqgmlafSqh7Huodx4tyHgeu0NTmD
kf8iU7PGiWjk5L9SUBWJO+rvycU5GQ/+yH/tp9xir0uBh1iXOOoth0vPnL5HQcZ4
VXPnmFylUYa3I5123OdCHuzVHlkD6bdiy6E/mT25XcwWpZL9wgjtE1RbDkLR7Gq/
KVUN2KMnX9Eiewm47wXTnDw62eVrzhrApIuqLsMbabOQ2uUeoelE9c0agR6RLTng
50rCfj3MpjpfSZDR/Y9XWKizVMR0sqj0rYw+Mg6XhOzK/c20km6O+Km69Zh1BsdX
LyXGd0Lf/1nSf3jG+h29NUCq3yp7U9iyVyL5Q4nNE6U=
-----END CERTIFICATE-----

ca:

-----BEGIN CERTIFICATE-----
MIIC+jCCAeKgAwIBAgIJAIiduSOLKh22MA0GCSqGSIb3DQEBCwUAMBIxEDAOBgNV
BAMMB2t1YmUtY2EwHhcNMTgwNjAyMTkwMDIyWhcNMjgwNTMwMTkwMDIyWjASMRAw
DgYDVQQDDAdrdWJlLWNhMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA
48J8oKeAztHrL2Dk9o24TxrgX21uM6GcZKhdDPW7gMn9uYBYMsoaI7eZyYLxhxiV
qG3WP1vgqpB00EbRdojoemdJ2os5rYz512BOlzNVjsgVE2Mgz/8cfV9pHWFp0dF9
C36ZjhUy7yvUyMf8+ekEFdE6fOOu+JImhfKDEHYzohXNITeTtgKpUh6Rw0ZNNRgq
6lVGYt8P6P0xbMHCYICKoJKmlViSVlqkB0R7L+TFOpuNajyibqszlizJGZXotym7
dLz9kIjPkksCl0jAERasacoFonJ8OtkR8G8rdlE+5hg7WAcy1C556mYsJ64ptLqW
yoiOEQyjMkWXKMsaPX4rpwIDAQABo1MwUTAdBgNVHQ4EFgQUxCOIqR3LgiRT5GEy
RhXgk84/wtowHwYDVR0jBBgwFoAUxCOIqR3LgiRT5GEyRhXgk84/wtowDwYDVR0T
AQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAFfcxlzBIDQFwwIF92fXjIaQ1
jqpQRHUwKd2w7/EXyp3f9xQ1+IqlMkQu/Ip0pxZPB2WRWP1tL7o0EetOm6X29h12
be5yVovmx8DlaC0jTjwTDAOsSDHb4GlJv4pLjyDNmk/mtj3mW6UCYH4msWcIidYj
9d/neZnU4RftrtJzYZgcmpCK7xhdXqevoLo1X2b0gUlR/80DsEt37gBFAsp/EP/d
4yygBujWd3Q4d8nNzNVxkB7nXf2Wh0BrWadEKEsN8sukBNHZQ22KeI4YaBI92Mo3
n24wdO7Q3bOmaEHPpVXnZJZKmYy8JNji22WmUi/Z3KD+0880ea+QGh+VC/gZuw==
-----END CERTIFICATE-----


Now the first cert is marked as a CA, so it SHOULD work (and the same
process generates certs that the golang clients in openshift and k8s both
work with OK).  Is there a requirement I'm missing?
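
For comparing the two destination certs, the relevant extension blocks can be
dumped with openssl like this (a quick sketch; the file names are placeholders):

# show basic constraints / key usage / extended key usage for each PEM file
openssl x509 -in dest-cert.pem -noout -text | grep -E -A1 'Basic Constraints|Key Usage'
openssl x509 -in working-cert.pem -noout -text | grep -E -A1 'Basic Constraints|Key Usage'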

Thanks
Marc
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Help with FlexVolumes in 3.9

2018-04-17 Thread Marc Boorshtein
I'm trying to get a CIFS flex volume integrated with origin 3.9 based on
the following instructions:

https://docs.openshift.com/container-platform/3.9/install_config/persistent_storage/persistent_storage_flex_volume.html

I'm specifically trying to get a cifs flex volume working -
https://github.com/andyzhangx/kubernetes-drivers/tree/master/flexvolume/cifs

Here's the steps I took on all nodes and masters:

1.  mkdir -p /usr/libexec/kubernetes/kubelet-plugins/volume/exec/azure-cifs
2.  cd /usr/libexec/kubernetes/kubelet-plugins/volume/exec/azure-cifs/
3.  sudo wget -O cifs
https://raw.githubusercontent.com/andyzhangx/kubernetes-drivers/master/flexvolume/cifs/cifs
4.  sudo chmod a+x cifs

Then I created a pv and pvc based on the instructions then created the pod
in the README.md.  The flex volume fails to mount with this in
/var/log/messages:

Apr 17 08:29:13 node origin-node: I0417 08:29:13.116665    1620
kubelet.go:1854] SyncLoop (ADD, "api"):
"nginx-flex-cifs_test-volumes(efa44126-423a-11e8-a4f3-525400887c40)"
Apr 17 08:29:13 node systemd: Created slice libcontainer container
kubepods-besteffort-podefa44126_423a_11e8_a4f3_525400887c40.slice.
Apr 17 08:29:13 node systemd: Starting libcontainer container
kubepods-besteffort-podefa44126_423a_11e8_a4f3_525400887c40.slice.
Apr 17 08:29:13 node origin-node: E0417 08:29:13.156628    1620
desired_state_of_world_populator.go:280] Failed to add volume
"flexvol-mount" (specName: "pv-cifs-flexvol") for pod
"efa44126-423a-11e8-a4f3-525400887c40" to desiredStateOfWorld. err=failed
to get Plugin from volumeSpec for volume "pv-cifs-flexvol" err=no volume
plugin matched
Apr 17 08:29:13 node origin-node: I0417 08:29:13.231661    1620
reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started
for volume "default-token-5dnnh" (UniqueName: "
kubernetes.io/secret/efa44126-423a-11e8-a4f3-525400887c40-default-token-5dnnh")
pod "nginx-flex-cifs" (UID: "efa44126-423a-11e8-a4f3-525400887c40")
Apr 17 08:29:13 node origin-node: I0417 08:29:13.331874    1620
reconciler.go:257] operationExecutor.MountVolume started for volume
"default-token-5dnnh" (UniqueName: "
kubernetes.io/secret/efa44126-423a-11e8-a4f3-525400887c40-default-token-5dnnh")
pod "nginx-flex-cifs" (UID: "efa44126-423a-11e8-a4f3-525400887c40")
Apr 17 08:29:13 node systemd: Started Kubernetes transient mount for
/var/lib/origin/openshift.local.volumes/pods/efa44126-423a-11e8-a4f3-525400887c40/volumes/
kubernetes.io~secret/default-token-5dnnh.
Apr 17 08:29:13 node systemd: Starting Kubernetes transient mount for
/var/lib/origin/openshift.local.volumes/pods/efa44126-423a-11e8-a4f3-525400887c40/volumes/
kubernetes.io~secret/default-token-5dnnh.
Apr 17 08:29:13 node origin-node: I0417 08:29:13.358426    1620
operation_generator.go:481] MountVolume.SetUp succeeded for volume
"default-token-5dnnh" (UniqueName: "
kubernetes.io/secret/efa44126-423a-11e8-a4f3-525400887c40-default-token-5dnnh")
pod "nginx-flex-cifs" (UID: "efa44126-423a-11e8-a4f3-525400887c40")

are there special permissions that I need?
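
For reference, a PV/PVC pair along the lines of the driver's README would look
roughly like this.  This is only a sketch: the driver name assumes the plugin is
registered as "azure/cifs" (i.e. installed under a vendor~driver directory named
azure~cifs), and the share, secret, and size values are placeholders.

# minimal sketch of a flexVolume-backed PV plus a matching PVC (placeholder values)
cat <<'EOF' | oc create -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-cifs-flexvol
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  flexVolume:
    driver: azure/cifs
    secretRef:
      name: cifs-secret
    options:
      source: //smb-server/share
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-cifs-flexvol
spec:
  accessModes:
  - ReadWriteMany
  volumeName: pv-cifs-flexvol
  resources:
    requests:
      storage: 1Gi
EOF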

thanks
Marc
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: CIFS access from pods

2018-04-11 Thread Marc Boorshtein
Yes, if it weren't changed the pod wouldn't be accepted and run.

On Tue, Apr 10, 2018, 11:22 PM Yu Wei <yu20...@hotmail.com> wrote:

> Hi,
> Have you changed settings for using hostpath?
> Please reference following doc
>
> https://docs.openshift.org/latest/admin_guide/manage_scc.html#use-the-hostpath-volume-plugin
> --
> *From:* users-boun...@lists.openshift.redhat.com <
> users-boun...@lists.openshift.redhat.com> on behalf of Marc Boorshtein <
> mboorsht...@gmail.com>
> *Sent:* Wednesday, April 11, 2018 11:04 AM
> *To:* users
> *Subject:* CIFS access from pods
>
> OpenShifters,
>
> I'm trying to access CIFS mounts from my OpenShift pods using Origin 3.7
> on CentOS 7.  Here's my setup:
>
> 1.  FreeIPA deployed with domain trust to AD (ENT2K12.DOMAIN.COM)
> 2.  Node is member of FreeIPA domain
> 3.  On Node:
>   a.  Keytab generated
>   b.  CIFS share mounted as AD user using uid from IPA - mount -t cifs -o
> username=mmos...@ent2k12.domain.com,sec=krb5,version=3.0,uid=160811903,gid=0
> //adfs.ent2k12.domain.com/mmosley-share /mount/local-storage/cifs/mmosley
>   c.  marked /mount/local-storage/cifs/mmosley as owned by
> mmos...@ent2k12.domain.com/root
>
> 4.  In OpenShift:
>   a.  Enabled hostPath
>   b.  Set runAsUser to runAsAny
>
> 5.  in my pod added:
>
> securityContext:
>   runAsUser: 160811903
>
> And
> volumes:
> - name: ext
>   hostPath:
>     path: /mnt/local-storage/cifs/mmosley
>     type: Directory
>
> Once my pod is running, i double check the id :
>
> sh-4.2$ id
> uid=160811903 gid=0(root) groups=0(root),100011
> sh-4.2$
>
> but when i try to access the mount I get permission denied:
> drwxrwxrwx.   2 160811903 root   0 Apr 10 13:58 ext
>
> rsh-4.2$ ls /ext/
> ls: cannot open directory /ext/: Permission denied
>
> Here's something interesting, if I unmount the volume I'm able to
> read/write files and files have the correct ownership.
>
> There's nothing in the selinux audit log.
>
> Any help would be greatly appreciated.
>
> Thanks
> Marc
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


CIFS access from pods

2018-04-10 Thread Marc Boorshtein
OpenShifters,

I'm trying to access CIFS mounts from my OpenShift pods using Origin 3.7 on
CentOS 7.  Here's my setup:

1.  FreeIPA deployed with domain trust to AD (ENT2K12.DOMAIN.COM)
2.  Node is member of FreeIPA domain
3.  On Node:
  a.  Keytab generated
  b.  CIFS share mounted as AD user using uid from IPA - mount -t cifs -o
username=mmos...@ent2k12.domain.com,sec=krb5,version=3.0,uid=160811903,gid=0
//adfs.ent2k12.domain.com/mmosley-share /mount/local-storage/cifs/mmosley
  c.  marked /mount/local-storage/cifs/mmosley as owned by
mmos...@ent2k12.domain.com/root

4.  In OpenShift:
  a.  Enabled hostPath
  b.  Set runAsUser to runAsAny

5.  in my pod added:

securityContext:
  runAsUser: 160811903

And
volumes:
- name: ext
  hostPath:
    path: /mnt/local-storage/cifs/mmosley
    type: Directory

Once my pod is running, i double check the id :

sh-4.2$ id
uid=160811903 gid=0(root) groups=0(root),100011
sh-4.2$

but when i try to access the mount I get permission denied:
drwxrwxrwx.   2 160811903 root   0 Apr 10 13:58 ext

rsh-4.2$ ls /ext/
ls: cannot open directory /ext/: Permission denied

Here's something interesting, if I unmount the volume I'm able to
read/write files and files have the correct ownership.

There's nothing in the selinux audit log.
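
(Checks of that sort can be run on the node with something like this; a sketch,
assuming auditd is running:)

# confirm the SELinux mode and look for any recent AVC denials on the node
getenforce
sudo ausearch -m avc -ts recent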

Any help would be greatly appreciated.

Thanks
Marc
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Docker registry console shows blank page

2018-02-19 Thread Marc Boorshtein
I'm running origin 3.7.  I get prompted to authenticate and then get a
blank screen.  When I look at the logs inside the console pod I see:

INFO: cockpit-ws: logged in user session
WARNING: GLib: getpwuid_r(): failed due to unknown user id (10)
MESSAGE: cockpit-protocol: couldn't read from connection: Error receiving
data: Connection reset by peer
INFO: cockpit-ws: session timed out


Am I missing something?


Thanks
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Create service account for pushing images in 3.7

2018-02-03 Thread Marc Boorshtein
>
>
>>
>> $ docker  login --username=$(oc whoami) --password=$(oc whoami -t)
>> os-registry-ext.myos.io
>>
>
> I don't think our auth flow likes the colons in the service account
> username here.  You don't actually need to provide the username anyway, the
> token is sufficient, so just run:
>
>  docker login --username=anything --password=$(oc whoami -t)
> yourregistry.com
>
>
>>
That did it, thanks!
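
Putting the thread together, the working sequence looks roughly like this
(a sketch; the project name and registry hostname are placeholders, and
`oc sa get-token` is assumed to be available in this client version):

# create the SA and give it edit rights in the project so it can push
oc create sa jenkins-ext
oc policy add-role-to-user edit system:serviceaccount:my-project:jenkins-ext

# log docker in with the SA token; the username value is effectively ignored
TOKEN=$(oc sa get-token jenkins-ext)
docker login --username=anything --password="$TOKEN" os-registry-ext.myos.io
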
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Create service account for pushing images in 3.7

2018-02-03 Thread Marc Boorshtein
I'm trying to create a service account that will let me push images into my
registry.  The registry is exposed, has a commercial cert and i can push
images with my cluster admin so I'm pretty sure its configured correctly.
I'm looking at a few blog posts and tried to:

1.  Create the service account

$ oc create sa jenkins-ext

2.  I then grant it the edit role in my project

$ oc policy add-role-to-user edit
system:serviceaccount:my-project:jenkins-ext

3.  Then I get the secret and run oc login https://myos --token=...

I get this message:

Logged into "https://myos:443; as
"system:serviceaccount:my-project:jenkins-ext" using the token provided.

You don't have any projects. Contact your system administrator to request a
project.

4.  Then login to docker

$ docker  login --username=$(oc whoami) --password=$(oc whoami -t)
os-registry-ext.myos.io

Error response from daemon: Get https://os-registry-ext.myos.io/v2/:
unauthorized: authentication required

Same docker login command works when I login with creds from my own user
from the dashboard.

Am I missing a step?  This is origin 3.7

Thanks
Marc
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Passthrough TLS route not working

2018-01-19 Thread Marc Boorshtein
Hm, then you lose the ability to do cookie based load balancing

On Fri, Jan 19, 2018, 5:11 PM Joel Pearson <japear...@agiledigital.com.au>
wrote:

> In the reference implementation they use Classic ELB load balancers in TCP
> mode:
>
> See this cloud formation template:
> https://github.com/openshift/openshift-ansible-contrib/blob/master/reference-architecture/aws-ansible/playbooks/roles/cloudformation-infra/files/greenfield.json.j2#L763
>
> On Sat, Jan 20, 2018 at 8:55 AM Joel Pearson <
> japear...@agiledigital.com.au> wrote:
>
>> What mode are you running the AWS load balancers in? You probably want to
>> run them as TCP load balancers and not HTTP. That way as you say the SNI
>> will not get messed with.
>> On Sat, 20 Jan 2018 at 4:45 am, Marc Boorshtein <mboorsht...@gmail.com>
>> wrote:
>>
>>> So if I bypass the AWS load balancer, everything works great.  Why
>>> doesn't HAProxy like the incoming requests?  I'm trying to debug the issue
>>> by enabling logging with
>>>
>>> oc set env dc/router ROUTER_SYSLOG_ADDRESS=127.0.0.1 ROUTER_LOG_LEVEL=debug
>>>
>>> But the logging doesn't seem to get there (I also tried a remote server as 
>>> well).  I'm guessing this is probably an SNI configuration issue?
>>>
>>>
>>>
>>> On Fri, Jan 19, 2018 at 11:59 AM Marc Boorshtein <mboorsht...@gmail.com>
>>> wrote:
>>>
>>>> I'm running origin 3.7 on AWS.  I have an AWS load balancer in front of
>>>> my infrastructure node.  I have a pod listening on TLS on port 9090.  The
>>>> service links to the pod and then I have a route that is setup with
>>>> passthrough tls to the pod, but every time I try to access it I get the
>>>> "Application is not available" screen even though looking in the console the
>>>> service references both the router and the pod.  I have deployments that do
>>>> the same thing but will only work with re-encrypt.  Am I missing
>>>> something?  Is there an issue using the AWS load balancer with passthrough?
>>>>
>>>> Thanks
>>>>
>>> ___
>>> users mailing list
>>> users@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>
>>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Passthrough TLS route not working

2018-01-19 Thread Marc Boorshtein
So if I bypass the AWS load balancer, everything works great.  Why doesn't
HAProxy like the incoming requests?  I'm trying to debug the issue by
enabling logging with

oc set env dc/router ROUTER_SYSLOG_ADDRESS=127.0.0.1 ROUTER_LOG_LEVEL=debug

But the logging doesn't seem to get there (I also tried a remote
server as well).  I'm guessing this is probably an SNI configuration
issue?



On Fri, Jan 19, 2018 at 11:59 AM Marc Boorshtein <mboorsht...@gmail.com>
wrote:

> I'm running origin 3.7 on AWS.  I have an AWS load balancer in front of my
> infrastructure node.  I have a pod listening on TLS on port 9090.  The
> service links to the pod and then I have a route that is setup with
> passthrough tls to the pod, but every time I try to access it I get the
> "Application is not available" screen even though looking in the console the
> service references both the router and the pod.  I have deployments that do
> the same thing but will only work with re-encrypt.  Am I missing
> something?  Is there an issue using the AWS load balancer with passthrough?
>
> Thanks
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Passthrough TLS route not working

2018-01-19 Thread Marc Boorshtein
I'm running origin 3.7 on AWS.  I have an AWS load balancer in front of my
infrastructure node.  I have a pod listening on TLS on port 9090.  The
service links to the pod and then I have a route that is setup with
passthrough tls to the pod, but every time I try to access it I get the
"Application is not available" screen even though looking in the console the
service references both the router and the pod.  I have deployments that do
the same thing but will only work with re-encrypt.  Am I missing
something?  Is there an issue using the AWS load balancer with passthrough?
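
For anyone comparing setups, the passthrough route in question is normally
defined along these lines (a sketch; the host, service name, and port are
placeholders):

# minimal sketch of a passthrough TLS route (placeholder names)
cat <<'EOF' | oc create -f -
apiVersion: v1
kind: Route
metadata:
  name: my-tls-app
spec:
  host: my-tls-app.apps.example.com
  to:
    kind: Service
    name: my-tls-app
  port:
    targetPort: 9090
  tls:
    termination: passthrough
EOF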

Thanks
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: OpenShift 3.6 on AWS creating EBS volumes in wrong region

2018-01-07 Thread Marc Boorshtein
sounds like the SELinux error is a red herring.  found a red hat bug report
showing this isn't an issue.  This is all I'm seeing in the node's system
log:

Jan  7 19:50:08 ip-10-0-4-69 origin-node: I0107 19:50:08.381938    1750
kubelet.go:1854] SyncLoop (ADD, "api"):
"mariadb-3-5425j_test2(f6e9aa44-f3e3-11e7-96b9-0abad0f909f2)"
Jan  7 19:50:08 ip-10-0-4-69 origin-node: I0107 19:50:08.495545    1750
reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started
for volume "default-token-b8c6l" (UniqueName: "
kubernetes.io/secret/f6e9aa44-f3e3-11e7-96b9-0abad0f909f2-default-token-b8c6l")
pod "mariadb-3-5425j" (UID: "f6e9aa44-f3e3-11e7-96b9-0abad0f909f2")
Jan  7 19:50:08 ip-10-0-4-69 origin-node: I0107 19:50:08.595841    1750
reconciler.go:257] operationExecutor.MountVolume started for volume
"default-token-b8c6l" (UniqueName: "
kubernetes.io/secret/f6e9aa44-f3e3-11e7-96b9-0abad0f909f2-default-token-b8c6l")
pod "mariadb-3-5425j" (UID: "f6e9aa44-f3e3-11e7-96b9-0abad0f909f2")
Jan  7 19:50:08 ip-10-0-4-69 origin-node: I0107 19:50:08.608039    1750
operation_generator.go:481] MountVolume.SetUp succeeded for volume
"default-token-b8c6l" (UniqueName: "
kubernetes.io/secret/f6e9aa44-f3e3-11e7-96b9-0abad0f909f2-default-token-b8c6l")
pod "mariadb-3-5425j" (UID: "f6e9aa44-f3e3-11e7-96b9-0abad0f909f2")
Jan  7 19:52:11 ip-10-0-4-69 origin-node: E0107 19:52:11.395023    1750
kubelet.go:1594] Unable to mount volumes for pod
"mariadb-3-5425j_test2(f6e9aa44-f3e3-11e7-96b9-0abad0f909f2)": timeout
expired waiting for volumes to attach/mount for pod
"test2"/"mariadb-3-5425j". list of unattached/unmounted
volumes=[mariadb-data]; skipping pod
Jan  7 19:52:11 ip-10-0-4-69 origin-node: E0107 19:52:11.395068    1750
pod_workers.go:186] Error syncing pod f6e9aa44-f3e3-11e7-96b9-0abad0f909f2
("mariadb-3-5425j_test2(f6e9aa44-f3e3-11e7-96b9-0abad0f909f2)"), skipping:
timeout expired waiting for volumes to attach/mount for pod
"test2"/"mariadb-3-5425j". list of unattached/unmounted
volumes=[mariadb-data]

I'm kind of at a loss where else to look.  There are other EBS volumes on
the server to handle local disks and the docker storage volume.  No selinux
errors.  Any ideas where to look?
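
(One way to cross-check the attach state from the AWS side is something like
this; the instance ID is a placeholder.)

# list EBS volumes attached to the node and their attachment state/device
aws ec2 describe-volumes \
  --filters Name=attachment.instance-id,Values=i-0123456789abcdef0 \
  --query 'Volumes[].{Id:VolumeId,State:Attachments[0].State,Device:Attachments[0].Device}'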

Thanks

On Sun, Jan 7, 2018 at 2:28 PM Marc Boorshtein <mboorsht...@gmail.com>
wrote:

> The only errors I can find are in dmesg on the node thats running the pod:
>
> [ 1208.768340] XFS (dm-6): Mounting V5 Filesystem
> [ 1208.907628] XFS (dm-6): Ending clean mount
> [ 1208.937388] XFS (dm-6): Unmounting Filesystem
> [ 1209.016985] XFS (dm-6): Mounting V5 Filesystem
> [ 1209.148183] XFS (dm-6): Ending clean mount
> [ 1209.167997] XFS (dm-6): Unmounting Filesystem
> [ 1209.218989] XFS (dm-6): Mounting V5 Filesystem
> [ 1209.342131] XFS (dm-6): Ending clean mount
> [ 1209.386249] SELinux: mount invalid.  Same superblock, different
> security settings for (dev mqueue, type mqueue)
> [ 1217.550065] pci 0000:00:1d.0: [1d0f:8061] type 00 class 0x010802
> [ 1217.550128] pci 0000:00:1d.0: reg 0x10: [mem 0x00000000-0x00003fff]
> [ 1217.551181] pci 0000:00:1d.0: BAR 0: assigned [mem 0xc0000000-0xc0003fff]
> [ 1217.559756] nvme nvme3: pci function 0000:00:1d.0
> [ 1217.568601] nvme 0000:00:1d.0: enabling device (0000 -> 0002)
> [ 1217.575951] nvme 0000:00:1d.0: irq 33 for MSI/MSI-X
> [ 1218.500526] nvme 0000:00:1d.0: irq 33 for MSI/MSI-X
> [ 1218.500547] nvme 0000:00:1d.0: irq 34 for MSI/MSI-X
>
> google's found some issues with coreos, but nothing for openshift and
> ebs.  I'm running CentOS 7.4, docker is at Docker version 1.12.6, build
> ec8512b/1.12.6 running on M5.large instances
>
> On Sat, Jan 6, 2018 at 10:19 PM Hemant Kumar <heku...@redhat.com> wrote:
>
>> The message you posted is generic message that is logged (or surfaced via
>> events) when openshift-node process couldn't find attached volumes within
>> specified time. That message in itself does not mean that node process will
>> not retry (in fact it will retry more than once) and if volume is attached
>> and mounted - pod will start correctly.
>>
>> There may be something else going on here - I can't say for sure without
>> looking at openshift's node and controller-manager's logs.
>>
>>
>>
>>
>>
>> On Sat, Jan 6, 2018 at 9:38 PM, Marc Boorshtein <mboorsht...@gmail.com>
>> wrote:
>>
>>> Thank you for the explanation.  That now makes sense.  I redeployed with
>>> 3.7 and the correct tags on the ec2 instances.  Now my new issue is that
>>> I'm continuously getting the error "Unable to mount volumes for pod
>>> "jenkins-2-lrgjb_test(ca61f5

Re: OpenShift 3.6 on AWS creating EBS volumes in wrong region

2018-01-07 Thread Marc Boorshtein
The only errors I can find are in dmesg on the node thats running the pod:

[ 1208.768340] XFS (dm-6): Mounting V5 Filesystem
[ 1208.907628] XFS (dm-6): Ending clean mount
[ 1208.937388] XFS (dm-6): Unmounting Filesystem
[ 1209.016985] XFS (dm-6): Mounting V5 Filesystem
[ 1209.148183] XFS (dm-6): Ending clean mount
[ 1209.167997] XFS (dm-6): Unmounting Filesystem
[ 1209.218989] XFS (dm-6): Mounting V5 Filesystem
[ 1209.342131] XFS (dm-6): Ending clean mount
[ 1209.386249] SELinux: mount invalid.  Same superblock, different security
settings for (dev mqueue, type mqueue)
[ 1217.550065] pci 0000:00:1d.0: [1d0f:8061] type 00 class 0x010802
[ 1217.550128] pci 0000:00:1d.0: reg 0x10: [mem 0x00000000-0x00003fff]
[ 1217.551181] pci 0000:00:1d.0: BAR 0: assigned [mem 0xc0000000-0xc0003fff]
[ 1217.559756] nvme nvme3: pci function 0000:00:1d.0
[ 1217.568601] nvme 0000:00:1d.0: enabling device (0000 -> 0002)
[ 1217.575951] nvme 0000:00:1d.0: irq 33 for MSI/MSI-X
[ 1218.500526] nvme 0000:00:1d.0: irq 33 for MSI/MSI-X
[ 1218.500547] nvme 0000:00:1d.0: irq 34 for MSI/MSI-X

google's found some issues with coreos, but nothing for openshift and ebs.
I'm running CentOS 7.4, docker is at Docker version 1.12.6, build
ec8512b/1.12.6 running on M5.large instances

On Sat, Jan 6, 2018 at 10:19 PM Hemant Kumar <heku...@redhat.com> wrote:

> The message you posted is generic message that is logged (or surfaced via
> events) when openshift-node process couldn't find attached volumes within
> specified time. That message in itself does not mean that node process will
> not retry (in fact it will retry more than once) and if volume is attached
> and mounted - pod will start correctly.
>
> There may be something else going on here - I can't say for sure without
> looking at openshift's node and controller-manager's logs.
>
>
>
>
>
> On Sat, Jan 6, 2018 at 9:38 PM, Marc Boorshtein <mboorsht...@gmail.com>
> wrote:
>
>> Thank you for the explanation.  That now makes sense.  I redeployed with
>> 3.7 and the correct tags on the ec2 instances.  Now my new issue is that
>> I'm continuously getting the error "Unable to mount volumes for pod
>> "jenkins-2-lrgjb_test(ca61f578-f352-11e7-9237-0abad0f909f2)": timeout
>> expired waiting for volumes to attach/mount for pod
>> "test"/"jenkins-2-lrgjb". list of unattached/unmounted
>> volumes=[jenkins-data]" when trying to deploy jenkins.   The EBS volume is
>> created, the volume is attached to the node when i run lsblk i see the
>> device but it just times out.
>>
>> Thanks
>> Marc
>>
>> On Sat, Jan 6, 2018 at 6:43 AM Hemant Kumar <heku...@redhat.com> wrote:
>>
>>> Correction in last sentence:
>>>
>>> " hence it will pick NOT zone in which Openshift cluster did not exist."
>>>
>>> On Sat, Jan 6, 2018 at 6:36 AM, Hemant Kumar <heku...@redhat.com> wrote:
>>>
>>>> Let me clarify - I did not say that you have to "label" nodes and
>>>> masters.
>>>>
>>>> I was suggesting to tag nodes and masters, the way you tag a cloud
>>>> resource via AWS console or AWS CLI. I meant - AWS tag not openshift 
>>>> labels.
>>>>
>>>> The reason you have volumes created in another zone is because - your
>>>> AWS account has nodes in more than one zone, possibly not part of Openshift
>>>> cluster. But when you are requesting a dynamic provisioned volume -
>>>> Openshift considers all nodes it can find and accordingly it "randomly"
>>>> selects a zone among zone it discovered.
>>>>
>>>> But if you were to use AWS Console or CLI to tag all nodes(including
>>>> master) in your cluster with "KubernetesCluster" : "cluster_id"  then
>>>> it will only select tagged nodes and hence it will pick zone in which
>>>> Openshift cluster did not exist.
>>>>
>>>>
>>>>
>>>> On Fri, Jan 5, 2018 at 11:48 PM, Marc Boorshtein <mboorsht...@gmail.com
>>>> > wrote:
>>>>
>>>>> how do i label a master?  When i create PVCs it switches between 1c
>>>>> and 1a.  look on the master I see:
>>>>>
>>>>> Creating volume for PVC "wtf3"; chose zone="us-east-1c" from
>>>>> zones=["us-east-1a" "us-east-1c"]
>>>>>
>>>>> Where did us-east-1c come from???
>>>>>
>>>>> On Fri, Jan 5, 2018 at 11:07 PM Hemant Kumar <heku...@redhat.com>
>>>>> wrote:
>>>>>
>>

Re: OpenShift 3.6 on AWS creating EBS volumes in wrong region

2018-01-05 Thread Marc Boorshtein
how do i label a master?  When i create PVCs it switches between 1c and
1a.  look on the master I see:

Creating volume for PVC "wtf3"; chose zone="us-east-1c" from
zones=["us-east-1a" "us-east-1c"]

Where did us-east-1c come from???
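
(For reference, the tagging Hemant describes below can be done with the AWS CLI;
the region, instance IDs, and tag value here are placeholders.)

# tag every master and node instance with the same KubernetesCluster value
aws ec2 create-tags --region us-east-1 \
  --resources i-0123456789abcdef0 i-0fedcba9876543210 \
  --tags Key=KubernetesCluster,Value=my-openshift-cluster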

On Fri, Jan 5, 2018 at 11:07 PM Hemant Kumar <heku...@redhat.com> wrote:

> Both nodes and masters. The tag information is picked from master
> itself(Where controller-manager is running) and then openshift uses same
> value to find all nodes in the cluster.
>
>
>
>
> On Fri, Jan 5, 2018 at 10:26 PM, Marc Boorshtein <mboorsht...@gmail.com>
> wrote:
>
>> node and masters?  or just nodes? (sounded like just nodes from the docs)
>>
>> On Fri, Jan 5, 2018 at 9:16 PM Hemant Kumar <heku...@redhat.com> wrote:
>>
>>> Make sure that you configure ALL instances in the cluster with tag
>>> "KubernetesCluster": "value". The value of the tag for key
>>> "KubernetesCluster" should be same for all instances in the cluster. You
>>> can choose any string you want for value.
>>>
>>> You will probably have to restart openshift controller-manager after the
>>> change at very minimum.
>>>
>>>
>>>
>>> On Fri, Jan 5, 2018 at 8:21 PM, Marc Boorshtein <mboorsht...@gmail.com>
>>> wrote:
>>>
>>>> Hello,
>>>>
>>>> I have a brand new Origin 3.6 running on AWS, the master and all nodes
>>>> are in us-east-1a but whenever I try to have AWS create a new volume, it
>>>> puts it in us-east-1c so then no one can access it and all my nodes go into
>>>> a permanent pending state because NoVolumeZoneConflict.  Looking at
>>>> aws.conf it states us-east-1a.  What am I missing?
>>>>
>>>> Thanks
>>>>
>>>> ___
>>>> users mailing list
>>>> users@lists.openshift.redhat.com
>>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>>
>>>>
>>>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: OpenShift 3.6 on AWS creating EBS volumes in wrong region

2018-01-05 Thread Marc Boorshtein
node and masters?  or just nodes? (sounded like just nodes from the docs)

On Fri, Jan 5, 2018 at 9:16 PM Hemant Kumar <heku...@redhat.com> wrote:

> Make sure that you configure ALL instances in the cluster with tag
> "KubernetesCluster": "value". The value of the tag for key
> "KubernetesCluster" should be same for all instances in the cluster. You
> can choose any string you want for value.
>
> You will probably have to restart openshift controller-manager after the
> change at very minimum.
>
>
>
> On Fri, Jan 5, 2018 at 8:21 PM, Marc Boorshtein <mboorsht...@gmail.com>
> wrote:
>
>> Hello,
>>
>> I have a brand new Origin 3.6 running on AWS, the master and all nodes
>> are in us-east-1a but whenever I try to have AWS create a new volume, it
>> puts it in us-east-1c so then no one can access it and all my nodes go into
>> a permanent pending state because NoVolumeZoneConflict.  Looking at
>> aws.conf it states us-east-1a.  What am I missing?
>>
>> Thanks
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


OpenShift 3.6 on AWS creating EBS volumes in wrong region

2018-01-05 Thread Marc Boorshtein
Hello,

I have a brand new Origin 3.6 running on AWS, the master and all nodes are
in us-east-1a but whenever I try to have AWS create a new volume, it puts
it in us-east-1c so then no one can access it and all my nodes go into a
permanent pending state because NoVolumeZoneConflict.  Looking at aws.conf
it states us-east-1a.  What am I missing?

Thanks
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Error installing Origin 3.7 via advanced install on AWS

2018-01-04 Thread Marc Boorshtein
I'm trying to install origin 3.7 on centos 7 running on AWS via the
advanced install.  When I run ansible I get the following:


TASK [openshift_cloud_provider : Configure AWS cloud provider]
***
fatal: [10.0.4.160]: FAILED! => {"failed": true, "msg": "The task includes
an option with an undefined variable. The error was: 'dict object' has no
attribute 'provider'\n\nThe error appears to have been in
'/root/openshift-ansible/roles/openshift_cloud_provider/tasks/aws.yml':
line 12, column 3, but may\nbe elsewhere in the file depending on the exact
syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Configure
AWS cloud provider\n  ^ here\n\nexception type: \nexception: 'dict object' has no
attribute 'provider'"}

NO MORE HOSTS LEFT
***
to retry, use: --limit @/root/openshift-ansible/playbooks/byo/config.retry

PLAY RECAP
***
10.0.4.144 : ok=49   changed=3    unreachable=0
failed=0
10.0.4.160 : ok=205  changed=17   unreachable=0
failed=1
localhost  : ok=11   changed=0    unreachable=0
failed=0


INSTALLER STATUS
*
Initialization : Complete
Health Check   : Complete
etcd Install   : Complete
Master Install : In Progress
This phase can be restarted by running:
playbooks/byo/openshift-master/config.yml



Failure summary:


  1. Hosts:10.0.4.160
 Play: Configure masters
 Task: Configure AWS cloud provider
 Message:  The task includes an option with an undefined variable. The
error was: 'dict object' has no attribute 'provider'

   The error appears to have been in
'/root/openshift-ansible/roles/openshift_cloud_provider/tasks/aws.yml':
line 12, column 3, but may
   be elsewhere in the file depending on the exact syntax
problem.

   The offending line appears to be:


   - name: Configure AWS cloud provider
 ^ here

   exception type: 
   exception: 'dict object' has no attribute 'provider'


Googling indicates the issue has to do with the ansible version, but I'm
running the latest:

ansible --version
ansible 2.4.1.0
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules',
u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /bin/ansible
  python version = 2.7.5 (default, Aug  4 2017, 00:39:18) [GCC 4.8.5
20150623 (Red Hat 4.8.5-16)]

I'm going against release-3.7
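
For what it's worth, that task builds aws.conf from the openshift_cloudprovider_*
settings, so the inventory section it expects looks roughly like this (a sketch
only, with placeholder credentials, not a confirmed fix):

# these lines go in the [OSEv3:vars] section of the ansible inventory
openshift_cloudprovider_kind=aws
openshift_cloudprovider_aws_access_key=AKIAEXAMPLEKEY
openshift_cloudprovider_aws_secret_key=exampleSecretKey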

Help?

Thanks
Marc
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: SSO with OAUTH/OIDC between OpenShift and Jenkins not working

2017-11-17 Thread Marc Boorshtein
Thanks Joel & Jordan.  Deleted all the routes and created a new one with
the same name as the 127.0.0.1.nip.io host but with a new host name and
everything worked great. (Jenkins times out but I'm going to see if I just
need to add some memory to the vm.  Java's CPU is spiking at 100%.)
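
(For anyone else hitting the loop: the extra annotation Jordan mentions below can
be added like this; "jenkins-external" is a placeholder route name.)

# register an additional route with the jenkins SA as an allowed OAuth redirect
oc annotate sa/jenkins \
  serviceaccounts.openshift.io/oauth-redirectreference.jenkins-external='{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"jenkins-external"}}'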

On Sat, Nov 18, 2017 at 12:18 AM Jordan Liggitt <jligg...@redhat.com> wrote:

> Or add the new route to the service account annotations as well (it can
> allow more than one)
>
> On Sat, Nov 18, 2017 at 12:15 AM, Joel Pearson <
> japear...@agiledigital.com.au> wrote:
>
>> I’ve had this problem too. You need to use the original route name (you
>> can change the host name) as the Jenkins service account refers to the
>> route name for oauth purposes.
>> On Sat, 18 Nov 2017 at 4:13 pm, Marc Boorshtein <mboorsht...@gmail.com>
>> wrote:
>>
>>> I have a fresh install of Origin 3.6.1 on CentOS 7.  In my project I
>>> created a new persistent jenkins from the template included in origin with
>>> oauth enabled.  It creates a route to 127.0.0.1.nip.io.  When I create
>>> a new route with a routable domain name, and I try to login I get the
>>> following error:
>>>
>>> {
>>>   "error": "invalid_request",
>>>   "error_description": "The request is missing a required parameter, 
>>> includes an invalid parameter value, includes a parameter more than once, 
>>> or is otherwise malformed.",
>>>   "state": "NGEyNWJlOTgtZTZlZC00"
>>> }
>>>
>>> The redirect looks like:
>>>
>>> https://oslocal.tremolo.lan:8443/oauth/authorize?client_id=system:serviceaccount:jjacksontest:jenkins&redirect_uri=https://jenkins-jjacksontest.192.168.2.140.nip.io/securityRealm/finishLogin&response_type=code&scope=user:info
>>>  user:check-access&state=NGEyNWJlOTgtZTZlZC00
>>>
>>> I suspect the issue is that the redirect_uri is different from what is 
>>> expected, but I can't find a secret or environment variable to set so it 
>>> knows the correct redirect_uri.  Is there some place I can set that?
>>>
>>> Thanks
>>>
>>> Marc
>>>
>>> ___
>>> users mailing list
>>> users@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


SSO with OAUTH/OIDC between OpenShift and Jenkins not working

2017-11-17 Thread Marc Boorshtein
I have a fresh install of Origin 3.6.1 on CentOS 7.  In my project I
created a new persistent jenkins from the template included in origin with
oauth enabled.  It creates a route to 127.0.0.1.nip.io.  When I create a
new route with a routable domain name, and I try to login I get the
following error:

{
  "error": "invalid_request",
  "error_description": "The request is missing a required parameter,
includes an invalid parameter value, includes a parameter more than
once, or is otherwise malformed.",
  "state": "NGEyNWJlOTgtZTZlZC00"
}

The redirect looks like:

https://oslocal.tremolo.lan:8443/oauth/authorize?client_id=system:serviceaccount:jjacksontest:jenkins&redirect_uri=https://jenkins-jjacksontest.192.168.2.140.nip.io/securityRealm/finishLogin&response_type=code&scope=user:info
user:check-access&state=NGEyNWJlOTgtZTZlZC00

I suspect the issue is that the redirect_uri is different from what is
expected, but I can't find a secret or environment variable to set so
it knows the correct redirect_uri.  Is there some place I can set
that?

Thanks

Marc
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Builder unable to resolve github.com

2017-11-12 Thread Marc Boorshtein
>
> is your machine (the centos7 vm) running dnsmasq or anything else on port
> 53?  If so, shut it down prior to bringing up your cluster.
>
>
Nothing is running on 53:
tcp    0  0 localhost:10443        0.0.0.0:*   LISTEN
tcp    0  0 localhost:10444        0.0.0.0:*   LISTEN
tcp    0  0 0.0.0.0:http           0.0.0.0:*   LISTEN
tcp    0  0 0.0.0.0:senomix02      0.0.0.0:*   LISTEN
tcp    0  0 0.0.0.0:ssh            0.0.0.0:*   LISTEN
tcp    0  0 localhost:smtp         0.0.0.0:*   LISTEN
tcp    0  0 0.0.0.0:https          0.0.0.0:*   LISTEN
tcp    0  0 0.0.0.0:pcsync-https   0.0.0.0:*   LISTEN
tcp6   0  0 [::]:jetcmeserver      [::]:*      LISTEN
tcp6   0  0 [::]:ssh               [::]:*      LISTEN
tcp6   0  0 [::]:afs3-callback     [::]:*      LISTEN
tcp6   0  0 localhost:smtp         [::]:*      LISTEN
tcp6   0  0 [::]:newoak            [::]:*      LISTEN
tcp6   0  0 [::]:10250             [::]:*      LISTEN
udp    0  0 0.0.0.0:64109          0.0.0.0:*
udp    0  0 0.0.0.0:20230          0.0.0.0:*
udp    0  0 0.0.0.0:senomix02      0.0.0.0:*
udp    0  0 0.0.0.0:bootpc         0.0.0.0:*
udp    0  0 0.0.0.0:bootpc         0.0.0.0:*
udp    0  0 localhost:323          0.0.0.0:*
udp6   0  0 [::]:64109             [::]:*
udp6   0  0 [::]:15385             [::]:*
udp6   0  0 localhost:323          [::]:*
raw6   0  0 [::]:ipv6-icmp         [::]:*      7
raw6   0  0 [::]:ipv6-icmp         [::]:*

Firewalld is running... should I open up port 53?
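
If it does turn out that firewalld is blocking DNS, opening port 53 would look
something like this (a sketch, not a confirmed fix):

# allow DNS through firewalld on the node and reload the rules
sudo firewall-cmd --permanent --add-port=53/tcp
sudo firewall-cmd --permanent --add-port=53/udp
sudo firewall-cmd --reload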



> what the master container can resolve is not relevant, but if you want to
> diagnose things further, start a different pod (like the jenkins pod) and
> rsh into that pod and see
> 1) whether it can resolve hosts
> 2) what its /etc/resolv.conf value is
> 3) whether it has external connectivity (independent of its ability to
> resolve hostnames)
>

I did fire up another container and it can't resolve service DNS names
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Builder unable to resolve github.com

2017-11-12 Thread Marc Boorshtein
I'm trying to deploy a custom s2i builder into my origin instance and when
the build starts I get the following error from the builder process:

Cloning "https://github.com/TremoloSecurity/openunison-qs-openshift.git;
...
error: build error: fatal: unable to access '
https://github.com/TremoloSecurity/openunison-qs-openshift.git/': Could not
resolve host: github.com; Unknown error

OpenShift is Origin:

oc v3.6.1+008f2d5
kubernetes v1.6.1+5115d708d7
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://127.0.0.1:8443
openshift v3.6.1+008f2d5
kubernetes v1.6.1+5115d708d7

Installed on CentOS7 using cluster up.  If I go into the master container I
can resolve github.com just fine.  my VM has 2 virtual nics, one for host
only and one for NAT (Virtual Box running on Fedora 26).

I found the following github issue but I don't see any movement on it (
https://github.com/openshift/openshift-ansible/issues/5249).  Any help
would be greatly appreciated.

Thanks
Marc
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Possible to use AWS elasticsearch for OpenShift logging?

2017-10-17 Thread Marc Boorshtein
That makes sense. Thanks!

On Mon, Oct 16, 2017, 9:31 AM Luke Meyer <lme...@redhat.com> wrote:

> You can configure fluentd to forward logs (see
> https://docs.openshift.com/container-platform/latest/install_config/aggregate_logging.html#sending-logs-to-an-external-elasticsearch-instance).
> Note the caveat, "If you are not using the provided Kibana and
> Elasticsearch images, you will not have the same multi-tenant capabilities
> and your data will not be restricted by user access to a particular
> project."
>
> On Thu, Oct 12, 2017 at 10:35 AM, Marc Boorshtein <mboorsht...@gmail.com>
> wrote:
>
>> I have built out a cluster on AWS using the ansible advanced install.  I
>> see that i can setup logging by creating infrastructure nodes that will
>> host elasticsearch.  AWS has an elasticsearch service.  Is there a way to
>> use that instead?
>>
>> Thanks
>> Marc
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Possible to use AWS elasticsearch for OpenShift logging?

2017-10-12 Thread Marc Boorshtein
I have built out a cluster on AWS using the ansible advanced install.  I
see that i can setup logging by creating infrastructure nodes that will
host elasticsearch.  AWS has an elasticsearch service.  Is there a way to
use that instead?

Thanks
Marc
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: help with readinessProbe check

2017-06-06 Thread Marc Boorshtein
Thanks Marko, that worked perfectly!

On Tue, Jun 6, 2017 at 1:54 AM Marko Lukša <marko.lu...@gmail.com> wrote:

> Neither of the two forms are correct, because they don't run the command
> in a shell (and you need one because you're actually running multiple
> commans & using pipes).
>
> The correct form is:
>
> readinessProbe:
>   exec:
>     command:
>     - sh
>     - -c
>     - '/usr/bin/curl --insecure https://127.0.0.1:8443/check_alive 2>/dev/null | grep Anonymous || exit 1'
>
>
> Also, you don't really need that last "|| exit 1" part. Even without it,
> the command's exit code will be 1 (grep returns 1 when it doesn't find a
> match).
>
>
>
> Marko
>
>
> On 06. 06. 2017 06:08, Marc Boorshtein wrote:
>
> I'm trying to use the following command as my liveness check:
>
> /usr/bin/curl --insecure https://127.0.0.1:8443/check_alive 2>/dev/null | grep Anonymous || exit 1
>
> I tried:
>
> readinessProbe:
>   exec:
>     command:
>     - '/usr/bin/curl'
>     - '--insecure'
>     - 'https://127.0.0.1:8443/check_alive'
>     - '2>/dev/null'
>     - '|'
>     - 'grep'
>     - 'Anonymous'
>     - '||'
>     - 'exit'
>     - '1'
>
> and
>
> readinessProbe:
>   exec:
>     command:
>     - '/usr/bin/curl --insecure https://127.0.0.1:8443/check_alive 2>/dev/null | grep Anonymous || exit 1'
>
> but openshift doesn't seem to like that either.  Thoughts?  Any help is
> greatly appreciated.
>
> Thanks
> Marc
>
>
> ___
> users mailing 
> listusers@lists.openshift.redhat.comhttp://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


help with readinessProbe check

2017-06-05 Thread Marc Boorshtein
I'm trying to use the following command as my liveness check:

/usr/bin/curl --insecure https://127.0.0.1:8443/check_alive
 2>/dev/null | grep Anonymous || exit 1

I tried:

readinessProbe:
  exec:
    command:
    - '/usr/bin/curl'
    - '--insecure'
    - 'https://127.0.0.1:8443/check_alive'
    - '2>/dev/null'
    - '|'
    - 'grep'
    - 'Anonymous'
    - '||'
    - 'exit'
    - '1'

and

readinessProbe:
  exec:
    command:
    - '/usr/bin/curl --insecure https://127.0.0.1:8443/check_alive 2>/dev/null | grep Anonymous || exit 1'

but openshift doesn't seem to like that either.  Thoughts?  Any help is
greatly appreciated.

Thanks
Marc
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


disabling member management from the UI

2017-04-23 Thread Marc Boorshtein
Is there a relatively easy way to disable member management from the admins
role?  Similar to how you can remove user's ability to self provision new
projects?

Thanks
Marc
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


How does openshift store users?

2016-09-08 Thread Marc Boorshtein
Openshift has stored users but kubernetes doesn't. Where is openshift
storing the users and groups? Etcd? An object database?

Thanks
Marc
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Networking issue creating pods

2016-07-26 Thread Marc Boorshtein
I'm already on 1.3:

$ cat /etc/sysconfig/origin-node

OPTIONS=--loglevel=2

CONFIG_FILE=/etc/origin/node/node-config.yaml

IMAGE_VERSION=v1.3.0-alpha.1

On Tue, Jul 26, 2016 at 11:59 AM, Scott Dodson <sdod...@redhat.com> wrote:

> Yeah, can you update your node image to v1.2.1? there should be an
> image version set in /etc/sysconfig/origin-node
>
> On Tue, Jul 26, 2016 at 11:39 AM, Marc Boorshtein <mboorsht...@gmail.com>
> wrote:
> > OK, so here's what I get:
> >
> > $ sudo docker exec origin-node docker info
> >
> > /usr/bin/docker: line 2: /etc/sysconfig/docker: No such file or directory
> >
> > I take it this means I'm running into this bug?
> >
> >
> > On Tue, Jul 26, 2016 at 10:41 AM, Scott Dodson <sdod...@redhat.com>
> wrote:
> >>
> >> This is probably the shifting docker dependencies issue. You can
> >> confirm with `docker exec origin node docker info` if that throws an
> >> error then it's fixed by https://github.com/openshift/origin/pull/9811
> >> which is included in v1.2.1.
> >>
> >> On Mon, Jul 25, 2016 at 4:56 PM, Marc Boorshtein <mboorsht...@gmail.com
> >
> >> wrote:
> >> >
> >> >> Was there more information in the logs?
> >> >>
> >> >
> >> > Which logs?  What I posted was from the events. (OpenShift is deployed
> >> > as
> >> > containers, not RPMs)
> >> >
> >> >
> >> >>
> >> >> Does the file /bin/openshift-sdn-ovs exist?
> >> >>
> >> >
> >> > Not on the host, but it is on origin-node (both the node and master
> are
> >> > on
> >> > the same VM)
> >> >
> >> > Thanks
> >> >
> >> > ___
> >> > users mailing list
> >> > users@lists.openshift.redhat.com
> >> > http://lists.openshift.redhat.com/openshiftmm/listinfo/users
> >> >
> >
> >
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Networking issue creating pods

2016-07-26 Thread Marc Boorshtein
OK, so here's what I get:

$ sudo docker exec origin-node docker info

/usr/bin/docker: line 2: /etc/sysconfig/docker: No such file or directory

I take it this means I'm running into this bug?

On Tue, Jul 26, 2016 at 10:41 AM, Scott Dodson <sdod...@redhat.com> wrote:

> This is probably the shifting docker dependencies issue. You can
> confirm with `docker exec origin node docker info` if that throws an
> error then it's fixed by https://github.com/openshift/origin/pull/9811
> which is included in v1.2.1.
>
> On Mon, Jul 25, 2016 at 4:56 PM, Marc Boorshtein <mboorsht...@gmail.com>
> wrote:
> >
> >> Was there more information in the logs?
> >>
> >
> > Which logs?  What I posted was from the events. (OpenShift is deployed as
> > containers, not RPMs)
> >
> >
> >>
> >> Does the file /bin/openshift-sdn-ovs exist?
> >>
> >
> > Not on the host, but it is on origin-node (both the node and master are
> on
> > the same VM)
> >
> > Thanks
> >
> > ___
> > users mailing list
> > users@lists.openshift.redhat.com
> > http://lists.openshift.redhat.com/openshiftmm/listinfo/users
> >
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Networking issue creating pods

2016-07-25 Thread Marc Boorshtein
> Was there more information in the logs?
>
>
Which logs?  What I posted was from the events. (OpenShift is deployed as
containers, not RPMs)



> Does the file /bin/openshift-sdn-ovs exist?
>
>
Not on the host, but it is on origin-node (both the node and master are on
the same VM)

Thanks
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Unable to add user to group

2016-06-16 Thread Marc Boorshtein
So I'll need to query each group and look for the user in question to be a
member?
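
(Something along these lines works for that enumeration, assuming jq is
available; the username below is just the one from this thread.)

# print every group whose "users" list contains the given username
oc get groups -o json | \
  jq -r '.items[] | select(any(.users[]?; . == "0b126172-33e9-11e6-9c91-525400d4fbc4")) | .metadata.name'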

On Thu, Jun 16, 2016 at 4:24 PM, Jordan Liggitt <jligg...@redhat.com> wrote:

> There's not an efficient API query to determine that today. Internally,
> the API server maintains a reverse index of username to group names by
> watching updates to the Group API objects
>
> On Thu, Jun 16, 2016 at 4:03 PM, Marc Boorshtein <mboorsht...@gmail.com>
> wrote:
>
>> oh, if the groups field on the user is deprecated how would I know what
>> groups a specific user has?
>>
>> On Thu, Jun 16, 2016 at 3:57 PM, Jordan Liggitt <jligg...@redhat.com>
>> wrote:
>>
>>> Your command looks correct. Specifying groups directly on a user via the
>>> groups field is deprecated. `oc get group cluster-administrators -o yaml`
>>> would show that your command is effective.
>>>
>>> When a user makes an API request, their effective groups are determined
>>> by combining the names of the Group objects containing their username, the
>>> contents of the deprecated groups field on their User object, and virtual
>>> groups like "system:authenticated".
>>>
>>> On Thu, Jun 16, 2016 at 3:53 PM, Marc Boorshtein <mboorsht...@gmail.com>
>>> wrote:
>>>
>>>> I can't seem to add a user to a group.  I have a user:
>>>>
>>>> $ curl -k -v -XGET  -H "User-Agent: oc/v1.1.2 (darwin/amd64)
>>>> openshift/2711160" -H "Authorization: Bearer
>>>> PDqIrEiOTqtwJvHDcTB-snC5FpcpnCz5fIrz7S6ORCI"
>>>> https://openshift.rheldemo.lan:8443/oapi/v1/users/0b126172-33e9-11e6-9c91-525400d4fbc4
>>>> *   Trying 192.168.2.191...
>>>> * Connected to openshift.rheldemo.lan (192.168.2.191) port 8443 (#0)
>>>> * TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
>>>> * Server certificate: 172.30.0.1
>>>> * Server certificate: openshift-signer@1465933076
>>>> > GET /oapi/v1/users/0b126172-33e9-11e6-9c91-525400d4fbc4 HTTP/1.1
>>>> > Host: openshift.rheldemo.lan:8443
>>>> > Accept: */*
>>>> > User-Agent: oc/v1.1.2 (darwin/amd64) openshift/2711160
>>>> > Authorization: Bearer PDqIrEiOTqtwJvHDcTB-snC5FpcpnCz5fIrz7S6ORCI
>>>> >
>>>> < HTTP/1.1 200 OK
>>>> < Cache-Control: no-store
>>>> < Content-Type: application/json
>>>> < Date: Thu, 16 Jun 2016 19:47:05 GMT
>>>> < Content-Length: 381
>>>> <
>>>> {"kind":"User","apiVersion":"v1","metadata":{"name":"0b126172-33e9-11e6-9c91-525400d4fbc4","selfLink":"/oapi/v1/users/0b126172-33e9-11e6-9c91-525400d4fbc4","uid":"4c403e86-33f4-11e6-b368-fa163ef48e94","resourceVersion":"17244","creationTimestamp":"2016-06-16T18:58:22Z"},"fullName":"OpenShift
>>>> Admin","identities":["unison_ldap:0b126172-33e9-11e6-9c91-525400d4fbc4"],"groups":null}
>>>>
>>>> then I run oadm to add the user to a group:
>>>>
>>>> [root@openshift ~]# oadm --loglevel 9 groups add-users
>>>> cluster-administrators 0b126172-33e9-11e6-9c91-525400d4fbc4
>>>>
>>>>
>>>> 
>>>> ATTENTION: You are running oadm via a wrapper around 'docker run
>>>> openshift/origin:v1.3.0-alpha.1'.
>>>> This wrapper is intended only to be used to bootstrap an environment.
>>>> Please
>>>> install client tools on another host once you have granted cluster-admin
>>>> privileges to a user.
>>>> See
>>>> https://docs.openshift.org/latest/cli_reference/get_started_cli.html
>>>>
>>>> =
>>>>
>>>> Usage of loopback devices is strongly discouraged for production use.
>>>> Either use `--storage-opt dm.thinpooldev` or use `--storage-opt
>>>> dm.no_warn_on_loop_devices=true` to suppress this warning.
>>>> I0616 19:50:26.085449   1 loader.go:242] Config loaded from file
>>>> /root/.kube/config
>>>> I0616 19:50:26.087794   1 round_trippers.go:299] curl -k -v -XGET
>>>>  -H "Accept: application/json, */*" -H "User-Agent: oadm/v1.3.0
>>>> (linux/amd64) kubernetes/6e83535"
>&

Re: Unable to add user to group

2016-06-16 Thread Marc Boorshtein
oh, if the groups field on the user is deprecated how would I know what
groups a specific user has?

On Thu, Jun 16, 2016 at 3:57 PM, Jordan Liggitt <jligg...@redhat.com> wrote:

> Your command looks correct. Specifying groups directly on a user via the
> groups field is deprecated. `oc get group cluster-administrators -o yaml`
> would show that your command is effective.
>
> When a user makes an API request, their effective groups are determined by
> combining the names of the Group objects containing their username, the
> contents of the deprecated groups field on their User object, and virtual
> groups like "system:authenticated".
>
> On Thu, Jun 16, 2016 at 3:53 PM, Marc Boorshtein <mboorsht...@gmail.com>
> wrote:
>
>> I can't seem to add a user to a group.  I have a user:
>>
>> $ curl -k -v -XGET  -H "User-Agent: oc/v1.1.2 (darwin/amd64)
>> openshift/2711160" -H "Authorization: Bearer
>> PDqIrEiOTqtwJvHDcTB-snC5FpcpnCz5fIrz7S6ORCI"
>> https://openshift.rheldemo.lan:8443/oapi/v1/users/0b126172-33e9-11e6-9c91-525400d4fbc4
>> *   Trying 192.168.2.191...
>> * Connected to openshift.rheldemo.lan (192.168.2.191) port 8443 (#0)
>> * TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
>> * Server certificate: 172.30.0.1
>> * Server certificate: openshift-signer@1465933076
>> > GET /oapi/v1/users/0b126172-33e9-11e6-9c91-525400d4fbc4 HTTP/1.1
>> > Host: openshift.rheldemo.lan:8443
>> > Accept: */*
>> > User-Agent: oc/v1.1.2 (darwin/amd64) openshift/2711160
>> > Authorization: Bearer PDqIrEiOTqtwJvHDcTB-snC5FpcpnCz5fIrz7S6ORCI
>> >
>> < HTTP/1.1 200 OK
>> < Cache-Control: no-store
>> < Content-Type: application/json
>> < Date: Thu, 16 Jun 2016 19:47:05 GMT
>> < Content-Length: 381
>> <
>> {"kind":"User","apiVersion":"v1","metadata":{"name":"0b126172-33e9-11e6-9c91-525400d4fbc4","selfLink":"/oapi/v1/users/0b126172-33e9-11e6-9c91-525400d4fbc4","uid":"4c403e86-33f4-11e6-b368-fa163ef48e94","resourceVersion":"17244","creationTimestamp":"2016-06-16T18:58:22Z"},"fullName":"OpenShift
>> Admin","identities":["unison_ldap:0b126172-33e9-11e6-9c91-525400d4fbc4"],"groups":null}
>>
>> then I run oadm to add the user to a group:
>>
>> [root@openshift ~]# oadm --loglevel 9 groups add-users
>> cluster-administrators 0b126172-33e9-11e6-9c91-525400d4fbc4
>>
>>
>> 
>> ATTENTION: You are running oadm via a wrapper around 'docker run
>> openshift/origin:v1.3.0-alpha.1'.
>> This wrapper is intended only to be used to bootstrap an environment.
>> Please
>> install client tools on another host once you have granted cluster-admin
>> privileges to a user.
>> See https://docs.openshift.org/latest/cli_reference/get_started_cli.html
>>
>> =
>>
>> Usage of loopback devices is strongly discouraged for production use.
>> Either use `--storage-opt dm.thinpooldev` or use `--storage-opt
>> dm.no_warn_on_loop_devices=true` to suppress this warning.
>> I0616 19:50:26.085449   1 loader.go:242] Config loaded from file
>> /root/.kube/config
>> I0616 19:50:26.087794   1 round_trippers.go:299] curl -k -v -XGET  -H
>> "Accept: application/json, */*" -H "User-Agent: oadm/v1.3.0 (linux/amd64)
>> kubernetes/6e83535" https://openshift.rheldemo.lan:8443/api
>> I0616 19:50:26.125647   1 round_trippers.go:318] GET
>> https://openshift.rheldemo.lan:8443/api 200 OK in 37 milliseconds
>> I0616 19:50:26.125669   1 round_trippers.go:324] Response Headers:
>> I0616 19:50:26.125677   1 round_trippers.go:327] Date: Thu, 16
>> Jun 2016 19:50:26 GMT
>> I0616 19:50:26.125685   1 round_trippers.go:327] Content-Length:
>> 135
>> I0616 19:50:26.125691   1 round_trippers.go:327] Cache-Control:
>> no-store
>> I0616 19:50:26.125696   1 round_trippers.go:327] Content-Type:
>> application/json
>> I0616 19:50:26.125765   1 request.go:870] Response Body:
>> {"kind":"APIVersions","versions":["v1"],"serverAddressByClientCIDRs":[{"clientCIDR":"
>> 0.0.0.0/0","serverAddress":"192.168.100.6:443"}]}
>> I0616 19:50:26.126056   1 round_trippers.go:299] curl 

Unable to add user to group

2016-06-16 Thread Marc Boorshtein
I can't seem to add a user to a group.  I have a user:

$ curl -k -v -XGET  -H "User-Agent: oc/v1.1.2 (darwin/amd64)
openshift/2711160" -H "Authorization: Bearer
PDqIrEiOTqtwJvHDcTB-snC5FpcpnCz5fIrz7S6ORCI"
https://openshift.rheldemo.lan:8443/oapi/v1/users/0b126172-33e9-11e6-9c91-525400d4fbc4
*   Trying 192.168.2.191...
* Connected to openshift.rheldemo.lan (192.168.2.191) port 8443 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate: 172.30.0.1
* Server certificate: openshift-signer@1465933076
> GET /oapi/v1/users/0b126172-33e9-11e6-9c91-525400d4fbc4 HTTP/1.1
> Host: openshift.rheldemo.lan:8443
> Accept: */*
> User-Agent: oc/v1.1.2 (darwin/amd64) openshift/2711160
> Authorization: Bearer PDqIrEiOTqtwJvHDcTB-snC5FpcpnCz5fIrz7S6ORCI
>
< HTTP/1.1 200 OK
< Cache-Control: no-store
< Content-Type: application/json
< Date: Thu, 16 Jun 2016 19:47:05 GMT
< Content-Length: 381
<
{"kind":"User","apiVersion":"v1","metadata":{"name":"0b126172-33e9-11e6-9c91-525400d4fbc4","selfLink":"/oapi/v1/users/0b126172-33e9-11e6-9c91-525400d4fbc4","uid":"4c403e86-33f4-11e6-b368-fa163ef48e94","resourceVersion":"17244","creationTimestamp":"2016-06-16T18:58:22Z"},"fullName":"OpenShift
Admin","identities":["unison_ldap:0b126172-33e9-11e6-9c91-525400d4fbc4"],"groups":null}

then I run oadm to add the user to a group:

[root@openshift ~]# oadm --loglevel 9 groups add-users
cluster-administrators 0b126172-33e9-11e6-9c91-525400d4fbc4


ATTENTION: You are running oadm via a wrapper around 'docker run
openshift/origin:v1.3.0-alpha.1'.
This wrapper is intended only to be used to bootstrap an environment. Please
install client tools on another host once you have granted cluster-admin
privileges to a user.
See https://docs.openshift.org/latest/cli_reference/get_started_cli.html
=

Usage of loopback devices is strongly discouraged for production use.
Either use `--storage-opt dm.thinpooldev` or use `--storage-opt
dm.no_warn_on_loop_devices=true` to suppress this warning.
I0616 19:50:26.085449   1 loader.go:242] Config loaded from file
/root/.kube/config
I0616 19:50:26.087794   1 round_trippers.go:299] curl -k -v -XGET  -H
"Accept: application/json, */*" -H "User-Agent: oadm/v1.3.0 (linux/amd64)
kubernetes/6e83535" https://openshift.rheldemo.lan:8443/api
I0616 19:50:26.125647   1 round_trippers.go:318] GET
https://openshift.rheldemo.lan:8443/api 200 OK in 37 milliseconds
I0616 19:50:26.125669   1 round_trippers.go:324] Response Headers:
I0616 19:50:26.125677   1 round_trippers.go:327] Date: Thu, 16 Jun
2016 19:50:26 GMT
I0616 19:50:26.125685   1 round_trippers.go:327] Content-Length: 135
I0616 19:50:26.125691   1 round_trippers.go:327] Cache-Control:
no-store
I0616 19:50:26.125696   1 round_trippers.go:327] Content-Type:
application/json
I0616 19:50:26.125765   1 request.go:870] Response Body:
{"kind":"APIVersions","versions":["v1"],"serverAddressByClientCIDRs":[{"clientCIDR":"
0.0.0.0/0","serverAddress":"192.168.100.6:443"}]}
I0616 19:50:26.126056   1 round_trippers.go:299] curl -k -v -XGET  -H
"Accept: application/json, */*" -H "User-Agent: oadm/v1.3.0 (linux/amd64)
kubernetes/6e83535" https://openshift.rheldemo.lan:8443/apis
I0616 19:50:26.126838   1 round_trippers.go:318] GET
https://openshift.rheldemo.lan:8443/apis 200 OK in 0 milliseconds
I0616 19:50:26.126866   1 round_trippers.go:324] Response Headers:
I0616 19:50:26.126872   1 round_trippers.go:327] Content-Type:
application/json
I0616 19:50:26.126877   1 round_trippers.go:327] Date: Thu, 16 Jun
2016 19:50:26 GMT
I0616 19:50:26.126883   1 round_trippers.go:327] Content-Length: 775
I0616 19:50:26.126888   1 round_trippers.go:327] Cache-Control:
no-store
I0616 19:50:26.126922   1 request.go:870] Response Body:
{"kind":"APIGroupList","groups":[{"name":"autoscaling","versions":[{"groupVersion":"autoscaling/v1","version":"v1"}],"preferredVersion":{"groupVersion":"autoscaling/v1","version":"v1"},"serverAddressByClientCIDRs":[{"clientCIDR":"
0.0.0.0/0","serverAddress":"192.168.100.6:443
"}]},{"name":"batch","versions":[{"groupVersion":"batch/v1","version":"v1"}],"preferredVersion":{"groupVersion":"batch/v1","version":"v1"},"serverAddressByClientCIDRs":[{"clientCIDR":"
0.0.0.0/0","serverAddress":"192.168.100.6:443
"}]},{"name":"extensions","versions":[{"groupVersion":"extensions/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"extensions/v1beta1","version":"v1beta1"},"serverAddressByClientCIDRs":[{"clientCIDR":"
0.0.0.0/0","serverAddress":"192.168.100.6:443"}]}]}
I0616 19:50:26.132811   1 round_trippers.go:299] curl -k -v -XGET  -H
"User-Agent: oadm/v1.3.0 (linux/amd64) openshift/6e83535" -H "Accept:
application/json, */*" https://openshift.rheldemo.lan:8443/oapi
I0616 
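
For anyone following along, a hedged sketch of how the result could be verified after the add-users call (the group and user names are taken from the commands above; the role-binding step is an assumption about the intended end state):

# the user name should now appear in the group's users list
oc get group cluster-administrators -o yaml

# granting cluster-admin rights still requires binding the role to the group
# (assumption: that is what the cluster-administrators group is ultimately for)
oadm policy add-cluster-role-to-group cluster-admin cluster-administrators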

Pod in project1 can not connect to service in project 2

2016-05-16 Thread Marc Boorshtein
We have OSE 3.1 set up with two separate projects.  Project 1 has an Apache
pod running on port 8080 with a service, and MySQL with a service on 3306.
Project 2 is trying to connect to both the web server pod and the MySQL
pod.  It can connect to the MySQL service with no problem, but it can't reach
the web server service by name or IP.  Is there any additional step I'm
missing?

Thanks
Marc
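
For anyone debugging the same thing, a hedged sketch of checks that might narrow it down, assuming the projects are literally named project1 and project2 and the web service is named webserver (all names and ports here are placeholders, not taken from the post):

# from a pod in project2, try the fully qualified service DNS name
curl -v http://webserver.project1.svc.cluster.local:8080/

# confirm the web service exists and actually has endpoints behind it
oc get svc webserver -n project1
oc get endpoints webserver -n project1

# if the cluster runs the ovs-multitenant SDN plugin, projects are network
# isolated by default and have to be joined explicitly
oadm pod-network join-projects --to=project1 project2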


Re: how to bind a persistent volume claim to a specific persistent volume

2016-03-19 Thread Marc Boorshtein
Thanks Mark.  For now I'll build that into my deployment instructions.

Thanks

On Thu, Mar 17, 2016 at 1:46 PM, Mark Turansky <mtura...@redhat.com> wrote:

> Hi Marc,
>
> We're currently developing the ability to label PVs and add a selector to
> a claim.  This will help a claim target a specific PV.  That is slated for
> our 3.3 release.  Until then, there's no way to deterministically bind a
> pvc to a pv.
>
> Thanks,
> Mark
>
>
>
> On Thu, Mar 17, 2016 at 12:18 PM, Marc Boorshtein <mboorsht...@gmail.com>
> wrote:
>
>> I think I'm missing something.  My container requires several persistent
>> volumes for configuration data.  Right now the only way I can guarantee
>> that the PVC maps to the PV I want is to create them in order (create PV1,
>> create PVC1, create PV2, create PVC2, etc.).  This works fine, but I'd like
>> to create a template, and I get the feeling this strategy won't work well.
>> Is there a way I can tell OpenShift to bind a PVC to a specific PV based on
>> the PV's metadata?  Or is there a better way to do this that I'm completely
>> missing?
>>
>> Thanks
>> Marc
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>
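
For reference, a minimal sketch of what the label/selector pairing Mark describes might look like once it lands (the PV name, label key, claim name, and sizes are all placeholders, and the exact field names could still change before 3.3):

# label the PV that should back this particular claim
oc label pv config-pv-1 usage=config-data

# create a claim that only binds to PVs carrying that label
oc create -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: config-claim
spec:
  accessModes:
  - ReadOnlyMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      usage: config-data
EOF

Until that ships, the ordered-creation workaround described above (or giving each PV/claim pair a unique size) is about the only lever.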


Re: Yaml for pod never updating on Origin

2016-02-24 Thread Marc Boorshtein
Thanks, Den.  The issue was that I'd make a change to the YAML file for the pod,
delete the pod using oc delete -f /path/to/yaml, wait for the pod to be
removed, and then recreate it using oc create -f /path/to/yaml, but the pod
configuration didn't reflect the changes I had made to my YAML file.

Since I rebuilt, I haven't run into this issue.
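
For anyone else who hits this, a hedged sketch of a delete-and-recreate flow that waits for the old pod to be fully gone first (the pod name mypod is a placeholder; the file path matches the example above):

oc delete -f /path/to/yaml
# the delete call returns before the object is actually removed;
# poll until the pod disappears before recreating it
while oc get pod mypod >/dev/null 2>&1; do sleep 2; done
oc create -f /path/to/yaml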

On Wed, Feb 24, 2016 at 10:45 AM, Den Cowboy  wrote:

> Maybe I don't understand the issue that well, but you can just edit the
> deploymentconfig:
> $ oc edit dc name
>
> After editing and saving the dc, a new deployment will automatically be
> triggered based on the edited dc.
>
> --
> Date: Sat, 20 Feb 2016 09:33:46 -0500
> Subject: Yaml for pod never updating on Origin
> From: mboorsht...@gmail.com
> To: users@lists.openshift.redhat.com
>
>
> I set up the latest Origin on RHEL 7 in containerized mode.  Both the master
> and the node are on the same server.  I created a new pod based on a YAML file.
> The new pod works, but when I try updating the YAML by first deleting the
> pod and then re-creating it, it ALWAYS has the original YAML.
>
> I think I'm missing something but I can't seem to find what.  Any help
> would be greatly appreciated.
>
> Thanks
> Marc
>
> ___ users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>


Re: Yaml for pod never updating on Origin

2016-02-21 Thread Marc Boorshtein
I did.  I gave up, figuring I had messed something up along the way, and
rebuilt using the Ansible scripts without issue; I haven't had the problem
since.

Thanks
On Feb 21, 2016 4:53 PM, "Clayton Coleman" <ccole...@redhat.com> wrote:

> When you delete the container, are you waiting for it to fully
> terminate before recreating?  Deletion is a background job - have to
> wait for the processes to terminate cleanly before returning.  Is the
> "name" field set in your pod to a specific value, or did you use
> "generateName"?
>
> On Sat, Feb 20, 2016 at 9:33 AM, Marc Boorshtein <mboorsht...@gmail.com>
> wrote:
> > I setup Origin latest on RHEL 7 in containerized mode.  Both the master
> and
> > the node are on the same server.  I created a new pod based on a yaml.
> The
> > new pod works but when I try updating the yaml by first deleting the pod
> > then re-creating it it ALWAYS has the original YAML file.
> >
> > I think I'm missing something but I can't seem to find what.  Any help
> would
> > be greatly appreciated.
> >
> > Thanks
> > Marc
> >
> > ___
> > users mailing list
> > users@lists.openshift.redhat.com
> > http://lists.openshift.redhat.com/openshiftmm/listinfo/users
> >
>
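
To illustrate the generateName alternative Clayton mentions, a minimal sketch (the name prefix and image are placeholders):

oc create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  generateName: myapp-    # the API server appends a random suffix, so a
                          # recreate never collides with a still-terminating pod
spec:
  containers:
  - name: myapp
    image: nginx          # placeholder image
EOF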


Re: Confirming persistent storage for pods thought process

2016-02-19 Thread Marc Boorshtein
Perfect.

Thanks

On Fri, Feb 19, 2016 at 11:39 AM, Mark Turansky <mtura...@redhat.com> wrote:

> Yes, your understanding of PVs is correct.
>
> You might also want to check out ConfigMap as a means of configuring your
> apps.
>
> https://github.com/eBay/Kubernetes/blob/master/docs/proposals/configmap.md
>
> Mark
>
>
> On Fri, Feb 19, 2016 at 11:29 AM, Marc Boorshtein <mboorsht...@gmail.com>
> wrote:
>
>> I'm trying to make sure I'm conceptualizing the addition of storage to a
>> pod correctly.  What I'm trying to do is set up an NFS server from which I
>> can provide data to pods.  The main idea is that I want to be able to put
>> configuration files on the NFS share and have them available read-only to
>> the pods.  After going through the documentation and reading through the
>> WordPress example, I think the process is:
>>
>> 1.  Create an NFS share (done and tested)
>> 2.  In my OS project, create a persistent volume
>> 3.  In my OS project, create a claim on the volume
>> 4.  Add the claim to a pod definition
>>
>> This way, when I make an update to the files on the share, those
>> updates are available to the pods.
>>
>> Am I thinking about this correctly?
>>
>> Thanks
>> Marc
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>
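
A hedged sketch of steps 2-4 from the list above, assuming an NFS export at nfs.example.com:/exports/config (the server, path, size, and object names are all placeholders):

oc create -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: config-nfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadOnlyMany
  nfs:
    server: nfs.example.com
    path: /exports/config
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: config-nfs-claim
spec:
  accessModes:
  - ReadOnlyMany
  resources:
    requests:
      storage: 1Gi
EOF

# step 4: in the pod definition, reference the claim and mount it read-only, e.g.
#   volumes:
#   - name: config
#     persistentVolumeClaim:
#       claimName: config-nfs-claim
#   and in the container spec:
#   volumeMounts:
#   - name: config
#     mountPath: /etc/myapp
#     readOnly: true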


Re: Vagrant VM - Adding MySQL pod to project fails

2016-02-17 Thread Marc Boorshtein
> I'm guessing you have no persistent volumes defined in your cluster, so
> the persistent volume claim is stuck in pending waiting for a volume to
> become available.
>
>
Correct.  I'm just getting a POC working so I think that the ephemeral
version will work for me.

Thanks
Marc
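
A hedged sketch of the ephemeral route for a POC, assuming the stock mysql-ephemeral template is loaded in the openshift namespace for this version (template and parameter names are assumptions worth checking against the template list):

# list the templates that ship with the cluster
oc get templates -n openshift

# instantiate the ephemeral MySQL template; no PV or PVC is required
oc new-app --template=mysql-ephemeral \
  -p MYSQL_USER=testuser -p MYSQL_PASSWORD=testpass -p MYSQL_DATABASE=testdb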


Re: Trouble accessing docker image of openshift

2016-02-16 Thread Marc Boorshtein
PERFECT!  Thanks.  Added it at the end:

docker run -d --name "origin" --privileged --pid=host --net=bridge
-p 8443:8443 -v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys
-v /var/lib/docker:/var/lib/docker:rw -v
/var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes
  -h openshift.tremolo.lan openshift/origin start --public-master
openshift.tremolo.lan

and now I can get to the console with no problem!

On Tue, Feb 16, 2016 at 10:42 AM, Clayton Coleman <ccole...@redhat.com>
wrote:

> It's an origin flag - add it at the end.
>
> On Tue, Feb 16, 2016 at 10:41 AM, Marc Boorshtein <mboorsht...@gmail.com>
> wrote:
> > Thanks Clayton.  Is "--public-master" a docker flag?  When I try it I
> get:
> >
> > [root@openshift ~]# docker run -d --name "origin" --privileged
> > --pid=host --net=bridge -p 8443:8443 -v /:/rootfs:ro -v
> > /var/run:/var/run:rw -v /sys:/sys -v /var/lib/docker:/var/lib/docker:rw
> > -v
> >
> /var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes
> > --public-master openshift.tremolo.lan  -h openshift.tremolo.lan
> > openshift/origin start
> >
> > flag provided but not defined: --public-master
> >
> > It looks like its passed to the openshift command, not docker?
> >
> > Thanks
> >
> >
> > On Tue, Feb 16, 2016 at 10:21 AM, Clayton Coleman <ccole...@redhat.com>
> > wrote:
> >>
> >> The console is served on whatever you provide as "--public-master" to
> >> the docker run command.
> >>
> >> I don't think we've seen this particular one yet - we definitely
> >> tightened our accepted ciphers list to pull the insecure ones out, but
> >> please open an issue and we'll track it down.
> >>
> >> On Tue, Feb 16, 2016 at 9:18 AM, Marc Boorshtein <mboorsht...@gmail.com
> >
> >> wrote:
> >> > All,
> >> >
> >> > I tried downloading and setting up openshift on docker
> >> > docker-engine-1.10.1-1 on centos7.  I used the following command to
> get
> >> > up
> >> > and running:
> >> >
> >> > docker run -d --name "origin" --privileged --pid=host
> >> > --net=bridge
> >> > -p 8443:8443 -v /:/rootfs:ro -v /var/run:/var/run:rw -v
> >> > /sys:/sys -v
> >> > /var/lib/docker:/var/lib/docker:rw -v
> >> >
> >> >
> /var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes
> >> > -h openshift.xxx.lan openshift/origin start
> >> >
> >> > When I try to go to the console on 8443 I get redirected to a 172
> >> > address
> >> > and firefox complains that the SSL connection is broken:
> >> >
> >> > Secure Connection Failed
> >> >
> >> > An error occurred during a connection to openshift..lan:8443.
> >> > security
> >> > library: improperly formatted DER-encoded message. (Error code:
> >> > sec_error_bad_der)
> >> >
> >> > The page you are trying to view cannot be shown because the
> >> > authenticity
> >> > of the received data could not be verified.
> >> > Please contact the website owners to inform them of this problem.
> >> >
> >> > but when I check the connection I get the following:
> >> > [root@openshift ~]# openssl s_client -connect
> >> > 'openshift.tremolo.lan:8443'
> >> > CONNECTED(0003)
> >> > depth=1 CN = openshift-signer@1455630818
> >> > verify error:num=19:self signed certificate in certificate chain
> >> > verify return:0
> >> > ---
> >> > Certificate chain
> >> >  0 s:/CN=127.0.0.1
> >> >i:/CN=openshift-signer@1455630818
> >> >  1 s:/CN=openshift-signer@1455630818
> >> >i:/CN=openshift-signer@1455630818
> >> > ---
> >> > Server certificate
> >> > -BEGIN CERTIFICATE-
> >> > MIID8TCCAtugAwIBAgIBBjALBgkqhkiG9w0BAQswJjEkMCIGA1UEAwwbb3BlbnNo
> >> > aWZ0LXNpZ25lckAxNDU1NjMwODE4MB4XDTE2MDIxNjEzNTM0MloXDTE4MDIxNTEz
> >> > NTM0M1owFDESMBAGA1UEAxMJMTI3LjAuMC4xMIIBIjANBgkqhkiG9w0BAQEFAAOC
> >> > AQ8AMIIBCgKCAQEA8NVlc/xYxrdo6ucYHoCtKvAjTxyCfdsAPGBm/VHbFQ+qLEIn
> >> > 6zk9eIKQ8kIHbm7xYFLFsvgBcmZwg6vf3NJoovaQREGqUo43Kuv2yk1NBVK5t3c9
> >> > bA4fmNJFCjy31JsoSyYm/ndsVatF0y5K8YlFzgyFyMoOuWG

Re: Trouble accessing docker image of openshift

2016-02-16 Thread Marc Boorshtein
Thanks Clayton.  Is "--public-master" a docker flag?  When I try it I get:

[root@openshift ~]# docker run -d --name "origin" --privileged
--pid=host --net=bridge -p 8443:8443 -v /:/rootfs:ro -v
/var/run:/var/run:rw -v /sys:/sys -v /var/lib/docker:/var/lib/docker:rw
-v
/var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes
--public-master openshift.tremolo.lan  -h openshift.tremolo.lan
openshift/origin start

flag provided but not defined: --public-master

It looks like it's passed to the openshift command, not docker?

Thanks

On Tue, Feb 16, 2016 at 10:21 AM, Clayton Coleman <ccole...@redhat.com>
wrote:

> The console is served on whatever you provide as "--public-master" to
> the docker run command.
>
> I don't think we've seen this particular one yet - we definitely
> tightened our accepted ciphers list to pull the insecure ones out, but
> please open an issue and we'll track it down.
>
> On Tue, Feb 16, 2016 at 9:18 AM, Marc Boorshtein <mboorsht...@gmail.com>
> wrote:
> > All,
> >
> > I tried downloading and setting up openshift on docker
> > docker-engine-1.10.1-1 on centos7.  I used the following command to get
> up
> > and running:
> >
> > docker run -d --name "origin" --privileged --pid=host
> --net=bridge
> > -p 8443:8443 -v /:/rootfs:ro -v /var/run:/var/run:rw -v
> /sys:/sys -v
> > /var/lib/docker:/var/lib/docker:rw -v
> >
> /var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes
> > -h openshift.xxx.lan openshift/origin start
> >
> > When I try to go to the console on 8443 I get redirected to a 172 address
> > and firefox complains that the SSL connection is broken:
> >
> > Secure Connection Failed
> >
> > An error occurred during a connection to openshift..lan:8443.
> security
> > library: improperly formatted DER-encoded message. (Error code:
> > sec_error_bad_der)
> >
> > The page you are trying to view cannot be shown because the
> authenticity
> > of the received data could not be verified.
> > Please contact the website owners to inform them of this problem.
> >
> > but when I check the connection I get the following:
> > [root@openshift ~]# openssl s_client -connect
> 'openshift.tremolo.lan:8443'
> > CONNECTED(0003)
> > depth=1 CN = openshift-signer@1455630818
> > verify error:num=19:self signed certificate in certificate chain
> > verify return:0
> > ---
> > Certificate chain
> >  0 s:/CN=127.0.0.1
> >i:/CN=openshift-signer@1455630818
> >  1 s:/CN=openshift-signer@1455630818
> >i:/CN=openshift-signer@1455630818
> > ---
> > Server certificate
> > -----BEGIN CERTIFICATE-----
> > MIID8TCCAtugAwIBAgIBBjALBgkqhkiG9w0BAQswJjEkMCIGA1UEAwwbb3BlbnNo
> > aWZ0LXNpZ25lckAxNDU1NjMwODE4MB4XDTE2MDIxNjEzNTM0MloXDTE4MDIxNTEz
> > NTM0M1owFDESMBAGA1UEAxMJMTI3LjAuMC4xMIIBIjANBgkqhkiG9w0BAQEFAAOC
> > AQ8AMIIBCgKCAQEA8NVlc/xYxrdo6ucYHoCtKvAjTxyCfdsAPGBm/VHbFQ+qLEIn
> > 6zk9eIKQ8kIHbm7xYFLFsvgBcmZwg6vf3NJoovaQREGqUo43Kuv2yk1NBVK5t3c9
> > bA4fmNJFCjy31JsoSyYm/ndsVatF0y5K8YlFzgyFyMoOuWGuMTiAZAKqHW307/QM
> > IHkmMBt6++tO04F2f9T2Z9h/V677iJ9QC7YiGF+KL9hM7F4S/dwQWiwPso4gMaQF
> > QdvXv9OZoRQ6/0YY/UnLJFoF/hfLt4oODu0GSMK9BAuS/67aJilexcSDXXGeSuIh
> > OgN79UAW70bbd+OR8AqxU3EjiE8P9LMb87EpwwIDAQABo4IBPjCCATowDgYDVR0P
> > AQH/BAQDAgCgMBMGA1UdJQQMMAoGCCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwggED
> > BgNVHREEgfswgfiCCmt1YmVybmV0ZXOCEmt1YmVybmV0ZXMuZGVmYXVsdIIWa3Vi
> > ZXJuZXRlcy5kZWZhdWx0LnN2Y4Ika3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVz
> > dGVyLmxvY2Fsgglsb2NhbGhvc3SCCW9wZW5zaGlmdIIRb3BlbnNoaWZ0LmRlZmF1
> > bHSCFW9wZW5zaGlmdC5kZWZhdWx0LnN2Y4Ijb3BlbnNoaWZ0LmRlZmF1bHQuc3Zj
> > LmNsdXN0ZXIubG9jYWyCCTEyNy4wLjAuMYIKMTcyLjE3LjAuMoIKMTcyLjMwLjAu
> > MYcEfwAAAYcErBEAAocErB4AATALBgkqhkiG9w0BAQsDggEBAAgxc6TRaCcT5jBP
> > Mj6K3CUkhN8S/3Us0gHIQ0ZYIvpzfi+HH9vUggS44E3I9OI2TN5pTZ0vDSbLMEva
> > VfvlZHsi4qlA/72rP50Gw+GMooofc8FHo08AXM2Lf/jE8/w88F4kXLZqVvnsQ/N4
> > bxSDg+0tydEAVoBopcvIyUj7QGFT7MT7icHe2ql6vnoXwZzeTLEKoNSk/NXlbLs8
> > IDW9bAa941SBYoVwyXsL5e4y7xqI4fKMX/gbF2FjAIwxa9PfeZKZ4bFNKY0b4LAr
> > Jl3NXbpbzmYlGqJwCBjY5JdOmXpjvkUv7ynYuV/ov65zz9RCfDp4CYDiZG80cgdj
> > Z1EmREE=
> > -----END CERTIFICATE-----
> > subject=/CN=127.0.0.1
> > issuer=/CN=openshift-signer@1455630818
> > ---
> > Acceptable client certificate CA names
> > /CN=openshift-signer@1455630818
> > Server Temp Key: ECDH, prime256v1, 256 bits
> > ---
> > SSL handshake has read 2414 bytes and written 385 bytes
> > ---
> > New, TLSv1/SSLv3, Cipher i

Trouble accessing docker image of openshift

2016-02-16 Thread Marc Boorshtein
All,

I tried downloading and setting up OpenShift with docker-engine-1.10.1-1
on CentOS 7.  I used the following command to get up and running:

docker run -d --name "origin" --privileged --pid=host --net=bridge
-p 8443:8443 -v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys
-v /var/lib/docker:/var/lib/docker:rw -v
/var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes
-h openshift.xxx.lan openshift/origin start

When I try to go to the console on 8443 I get redirected to a 172 address
and firefox complains that the SSL connection is broken:

Secure Connection Failed

An error occurred during a connection to openshift..lan:8443. security
library: improperly formatted DER-encoded message. (Error code:
sec_error_bad_der)

The page you are trying to view cannot be shown because the
authenticity of the received data could not be verified.
Please contact the website owners to inform them of this problem.

but when I check the connection I get the following:
[root@openshift ~]# openssl s_client -connect 'openshift.tremolo.lan:8443'
CONNECTED(00000003)
depth=1 CN = openshift-signer@1455630818
verify error:num=19:self signed certificate in certificate chain
verify return:0
---
Certificate chain
 0 s:/CN=127.0.0.1
   i:/CN=openshift-signer@1455630818
 1 s:/CN=openshift-signer@1455630818
   i:/CN=openshift-signer@1455630818
---
Server certificate
-----BEGIN CERTIFICATE-----
MIID8TCCAtugAwIBAgIBBjALBgkqhkiG9w0BAQswJjEkMCIGA1UEAwwbb3BlbnNo
aWZ0LXNpZ25lckAxNDU1NjMwODE4MB4XDTE2MDIxNjEzNTM0MloXDTE4MDIxNTEz
NTM0M1owFDESMBAGA1UEAxMJMTI3LjAuMC4xMIIBIjANBgkqhkiG9w0BAQEFAAOC
AQ8AMIIBCgKCAQEA8NVlc/xYxrdo6ucYHoCtKvAjTxyCfdsAPGBm/VHbFQ+qLEIn
6zk9eIKQ8kIHbm7xYFLFsvgBcmZwg6vf3NJoovaQREGqUo43Kuv2yk1NBVK5t3c9
bA4fmNJFCjy31JsoSyYm/ndsVatF0y5K8YlFzgyFyMoOuWGuMTiAZAKqHW307/QM
IHkmMBt6++tO04F2f9T2Z9h/V677iJ9QC7YiGF+KL9hM7F4S/dwQWiwPso4gMaQF
QdvXv9OZoRQ6/0YY/UnLJFoF/hfLt4oODu0GSMK9BAuS/67aJilexcSDXXGeSuIh
OgN79UAW70bbd+OR8AqxU3EjiE8P9LMb87EpwwIDAQABo4IBPjCCATowDgYDVR0P
AQH/BAQDAgCgMBMGA1UdJQQMMAoGCCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwggED
BgNVHREEgfswgfiCCmt1YmVybmV0ZXOCEmt1YmVybmV0ZXMuZGVmYXVsdIIWa3Vi
ZXJuZXRlcy5kZWZhdWx0LnN2Y4Ika3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVz
dGVyLmxvY2Fsgglsb2NhbGhvc3SCCW9wZW5zaGlmdIIRb3BlbnNoaWZ0LmRlZmF1
bHSCFW9wZW5zaGlmdC5kZWZhdWx0LnN2Y4Ijb3BlbnNoaWZ0LmRlZmF1bHQuc3Zj
LmNsdXN0ZXIubG9jYWyCCTEyNy4wLjAuMYIKMTcyLjE3LjAuMoIKMTcyLjMwLjAu
MYcEfwAAAYcErBEAAocErB4AATALBgkqhkiG9w0BAQsDggEBAAgxc6TRaCcT5jBP
Mj6K3CUkhN8S/3Us0gHIQ0ZYIvpzfi+HH9vUggS44E3I9OI2TN5pTZ0vDSbLMEva
VfvlZHsi4qlA/72rP50Gw+GMooofc8FHo08AXM2Lf/jE8/w88F4kXLZqVvnsQ/N4
bxSDg+0tydEAVoBopcvIyUj7QGFT7MT7icHe2ql6vnoXwZzeTLEKoNSk/NXlbLs8
IDW9bAa941SBYoVwyXsL5e4y7xqI4fKMX/gbF2FjAIwxa9PfeZKZ4bFNKY0b4LAr
Jl3NXbpbzmYlGqJwCBjY5JdOmXpjvkUv7ynYuV/ov65zz9RCfDp4CYDiZG80cgdj
Z1EmREE=
-----END CERTIFICATE-----
subject=/CN=127.0.0.1
issuer=/CN=openshift-signer@1455630818
---
Acceptable client certificate CA names
/CN=openshift-signer@1455630818
Server Temp Key: ECDH, prime256v1, 256 bits
---
SSL handshake has read 2414 bytes and written 385 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES128-GCM-SHA256
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
Protocol  : TLSv1.2
Cipher: ECDHE-RSA-AES128-GCM-SHA256
Session-ID:
0F1D94EB43646490A6FAFE006BEC3149C48B8A11ACA71CD7B04FD6FA9EAFA0CC
Session-ID-ctx:
Master-Key:
3885305A1D2D8CCFB59A8C535ED0FD23388E774B6262EEF848A5E6B916C2471D1171A87A07AAF7D981916E2F57DDB8A1
Key-Arg   : None
Krb5 Principal: None
PSK identity: None
PSK identity hint: None
TLS session ticket:
 - f9 2d fc 2d 20 77 06 2a-eb 9d 85 e1 ea 9f 3a 82   .-.-
w.*..:.
0010 - a1 c4 b2 10 89 ee 94 33-31 62 fe f4 44 3f e1 16
...31b..D?..
0020 - 4d af 2a 01 b6 f6 d2 62-b7 c2 a6 6c 75 d1 c3 a2
M.*b...lu...
0030 - 90 89 2f 22 eb 02 71 08-38 3b aa 7e ee 0f 39 ee
../"..q.8;.~..9.
0040 - 52 2e f2 1f 47 63 56 a8-65 79 01 7a ab 0d f7 de
R...GcV.ey.z
0050 - 13 b0 6c 49 58 23 46 dc-ec 00 9a 3c 95 3d 87 6c
..lIX#F<.=.l
0060 - b2 da de d4 25 e6 94 87-  %...

Start Time: 1455632113
Timeout   : 300 (sec)
Verify return code: 19 (self signed certificate in certificate chain)
---

A couple of questions:
1.  Is there an environment variable I can set that lets me set the host
name the OpenShift console redirects to? (so I don't get redirected to an IP)
2.  Has anyone run into this issue with Firefox?  Google seems to think it's
because Firefox doesn't support the cipher.

Any help would be greatly appreciated.

Thanks
Marc