Re: Jenkins plugin - binary build

2016-09-27 Thread Lionel Orellana
Adding the edit cluster role seems to work.

oadm policy add-cluster-role-to-user edit
system:serviceaccount:jenkins:jenkins

But it feels like I'm giving it too much access. I tried the role
system:build-controller but that wasn't enough.
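A narrower alternative to `edit` is a custom cluster role that only allows instantiating builds. A sketch, assuming the OpenShift 3.x policy format (the role name `build-starter` and the exact resource list are illustrative; check `oc describe clusterrole edit` for the canonical resource names):

```shell
# Hypothetical minimal role: create build instantiations, plus read
# access to builds/buildconfigs so Jenkins can watch progress.
cat <<'EOF' > build-starter.yaml
apiVersion: v1
kind: ClusterRole
metadata:
  name: build-starter
rules:
- apiGroups: [""]
  resources: ["buildconfigs/instantiate", "buildconfigs/instantiatebinary"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["builds", "buildconfigs"]
  verbs: ["get", "list", "watch"]
EOF
oc create -f build-starter.yaml
oadm policy add-cluster-role-to-user build-starter \
    system:serviceaccount:jenkins:jenkins
```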

On 28 September 2016 at 14:00, Lionel Orellana  wrote:

> Thanks. Invoking oc will do.
>
> I guess I have to oc login every time?
>
> Somehow related question: can I have one service account with access to
> start builds across all projects? I created a jenkins service account for
> this purpose but I'm not sure how to give it access to all projects instead
> of one by one.
>
>
>
>
>
> On 27 September 2016 at 22:48, Clayton Coleman 
> wrote:
>
>> There is an API for launching a binary build from a build config - you
>> can do it from a curl call if necessary (run with --loglevel=8 to see an
>> example of that call).  You must send as the contents of the POST call the
>> source to build as a tar, zip, or tar.gz
>>
>> On Sep 27, 2016, at 6:35 AM, Ben Parees  wrote:
>>
>>
>>
>> On Sep 27, 2016 2:10 AM, "Lionel Orellana"  wrote:
>> >
>> > Hi
>> >
>> > Is it possible to trigger a binary build in Jenkins using
>> the openshiftBuild step?
>> >
>> > I'm basically trying to run something like
>> >
>> > oc start-build  --from-dir=
>> >
>> > but there's no option to pass from-dir in the openshiftBuild step. Are
>> there plans to support this?
>>
>> It's not possible today, but yes it is on our list. In the meantime you
>> can shell out and invoke oc directly to accomplish the same thing.
>>
>> >
>> > Thanks
>> >
>> >
>> > Lionel.
>> >
>> > ___
>> > users mailing list
>> > users@lists.openshift.redhat.com
>> > http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>> >
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>
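For reference, the curl call Clayton describes posts the archived source to the build config's binary subresource. A minimal sketch, assuming the v1 API path (the master URL, project, and build config names are placeholders; confirm the exact URL by running `oc start-build --from-dir=. --loglevel=8` as suggested above):

```shell
# Package a source directory as tar.gz (this part runs anywhere).
SRC_DIR=$(mktemp -d)
echo 'puts "hello"' > "$SRC_DIR/app.rb"
tar -czf /tmp/src.tar.gz -C "$SRC_DIR" .

# POST it as the request body (requires a live cluster and a valid token):
# curl -k -X POST \
#   -H "Authorization: Bearer $TOKEN" \
#   -H "Content-Type: application/octet-stream" \
#   --data-binary @/tmp/src.tar.gz \
#   "https://master.example.com:8443/oapi/v1/namespaces/myproject/buildconfigs/myapp/instantiatebinary?name=myapp"
```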


Re: problem with websockets

2016-09-27 Thread Tony Saxon
Just wanted to reach out and say I just figured this out and it had nothing
to do with openshift whatsoever.

The lab system that I'm running openshift on to develop a proof of concept
is running on a libvirt host with NAT based networking. In order to get to
the application that I have deployed to the openshift cluster from outside
that host, I'm using an Apache reverse proxy. Apache mod_proxy does not
play nicely with websockets as I have found out before and it just never
even dawned on me that that was in play because I was doing a whole bunch
of other testing before I even got to implementing the websocket. It wasn't
until this evening when I was trying to debug the issue with a new tool
that someone pointed me to at a docker meetup that I realized that that was
likely the issue. I was trying to do it from a new machine and realized I
needed to add the IP and hostname to my hosts file to get to it and it hit
me. Super frustrating, but finally got it figured out. Just wanted to say
thanks for all the help!
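For anyone hitting the same symptom: plain `mod_proxy` will not upgrade connections, but `mod_proxy_wstunnel` (Apache 2.4+) can tunnel them. A sketch, with hostname, paths, and the backend address all placeholders:

```apache
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so

<VirtualHost *:80>
    ServerName testapp.example.com
    # Match the websocket path first so it is tunneled, not proxied as HTTP
    ProxyPass        /testwebapp/websocket ws://192.168.122.10:80/testwebapp/websocket
    ProxyPassReverse /testwebapp/websocket ws://192.168.122.10:80/testwebapp/websocket
    ProxyPass        / http://192.168.122.10:80/
    ProxyPassReverse / http://192.168.122.10:80/
</VirtualHost>
```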

On Thu, Sep 15, 2016 at 5:10 PM, Tony Saxon  wrote:

> I don't think that the websocket requests are even hitting the tomcat
> application. The access logs show hitting the main page, but nothing
> against the websocket itself. It's like it's not getting past the router.
>
> On Thu, Sep 15, 2016 at 4:56 PM, Tony Saxon  wrote:
>
>> I'll see what I can turn on. Right now I don't see anything showing up in
>> the standard catalina.out.
>>
>> On Thu, Sep 15, 2016 at 4:54 PM, Clayton Coleman 
>> wrote:
>>
>>> Ben, Ram, any ideas?  We test web sockets frequently so I assume there's
>>> something slightly different about how this app works in docker vs being
>>> behind the router.  Maybe the headers we send getting misinterpreted via
>>> Tomcat?
>>>
>>> Tony, any errors showing up in tomcat logs (or can you turn on verbose
>>> logging and verify that you see the incoming websocket attempt)?
>>>
>>> On Thu, Sep 15, 2016 at 4:50 PM, Tony Saxon 
>>> wrote:
>>>
 I ended up adding the port to the route configuration, and now i don't
 even get the error immediately. It's almost like it hangs trying to
 establish the websocket connection and only shows the error briefly if I
 navigate to another page. Here's the route configuration:

 [root@os-master ~]# oc get route -o yaml
 apiVersion: v1
 items:
 - apiVersion: v1
   kind: Route
   metadata:
 creationTimestamp: 2016-08-16T19:34:58Z
 labels:
   app: testwebapp
 name: testwebapp
 namespace: testwebapp
 resourceVersion: "1823387"
 selfLink: /oapi/v1/namespaces/testwebapp/routes/testwebapp
 uid: 843470f8-63e8-11e6-95e9-525400f41cdb
   spec:
 host: testwebapp.example.net
 port:
   targetPort: 8080
 to:
   kind: Service
   name: testwebapp
   status:
 ingress:
 - conditions:
   - lastTransitionTime: 2016-08-16T19:34:58Z
 status: "True"
 type: Admitted
   host: testwebapp.example.net
   routerName: ha-router-east
 kind: List
 metadata: {}

 On Wed, Sep 14, 2016 at 11:25 PM, Tony Saxon 
 wrote:

> Not directly in the browser. If I bring up the web developer section
> of my browser and view the console I see an error about being unable to
> connect to the websocket for both my application and the built in tomcat
> examples.
>
> On Wed, Sep 14, 2016 at 11:21 PM, Clayton Coleman  > wrote:
>
>> The router should support transparent connection upgrade in that
>> case.  Are you getting any errors in your browser?
>>
>> On Wed, Sep 14, 2016 at 11:15 PM, Tony Saxon 
>> wrote:
>>
>>> When I deploy it manually (standalone tomcat or non openshift docker
>>> container) the websocket port is 8080 since it's just a context path in
>>> tomcat. This works perfectly fine. When I deploy to openshift and expose
>>> port 8080, I'm able to hit port 80 on the router with the name that I
>>> mapped to the application and get the page and content I expect, but the
>>> websocket doesn't connect like it does when I deploy manually. I also 
>>> get
>>> the same behavior with the example websocket applications included with
>>> Tomcat which work under normal conditions. So basically, the main page 
>>> of
>>> my application is at http://testapp.example.com/testwebapp/ and the
>>> javascript loaded connects to a websocket at ws://
>>> testapp.example.com/testwebapp/websocket. Since I get the page to
>>> load when I hit the application I know it's exposing the right port, but
>>> for some reason the websocket can't connect.
>>>
>>>
>>> On Wed, Sep 14, 2016 at 10:49 PM, 

Re: Installing OpenShift Origin v1.3.0

2016-09-27 Thread Jonathan Yu
On Tue, Sep 27, 2016 at 5:13 PM, Sachin Vaidya  wrote:

> Thanks Troy. I was able to install OpenShift v1.3.0.
>
> deployment_type=origin
> openshift_pkg_version=-1.3.0
>
> What is the official Slack channel for OpenShift ?
>

We have an IRC channel on Freenode:
https://botbot.me/freenode/openshift-dev/

>
> Thanks
> Sachin
>
>
> On Tue, Sep 27, 2016 at 8:09 AM, Troy Dawson  wrote:
>
>> The origin 1.3.0 packages are now in the normal CentOS repository.
>> You no longer need to enable -testing.
>>
>>
>> On Tue, Sep 27, 2016 at 3:45 AM, Raul Martinez-Sanchez (raumarti)
>>  wrote:
>> > At the moment it seems you have to enable the ‘Testing’ Openshift yum
>> repo
> > to get the install packages; I'm not sure why it's not in the main repo yet
>> > (this might have been already answered in another email thread).
>> >
>> > I have tested it with both ‘variables’ and both work.
>> >
>> > Cheers
>> >
>> >
>> >
>> > From:  on behalf of Sachin
>> Vaidya
>> > 
>> > Date: Tuesday, 27 September 2016 at 07:38
> > To: "users@lists.openshift.redhat.com"
>> > Subject: Installing OpenShift Origin v1.3.0
>> >
>> > Hi,
>> >How to install OpenShift Origin v1.3.0 using "openshift-ansible"
>> scripts?
>> > I have tried following 2 options in inventory.
>> >
>> > deployment_type=origin
>> > openshift_pkg_version=-1.3.0
>> >
>> > OR
>> > deployment_type=origin
>> > openshift_release=v1.3.0
>> >
>> > Both options have failed installation with error that it can't find
>> v1.3.0
>> > As per github repo (https://github.com/openshift/origin/releases),
>> v1.3.0
>> > is the latest release of OpenShift.
>> >
>> > Thanks
>> > Sachin
>> >
>> > ___
>> > users mailing list
>> > users@lists.openshift.redhat.com
>> > http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>> >
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 
Jonathan Yu, P.Eng. / Software Engineer, OpenShift by Red Hat / Twitter
(@jawnsy) is the quickest way to my heart 

*“A master in the art of living draws no sharp distinction between his work
and his play; his labor and his leisure; his mind and his body; his
education and his recreation. He hardly knows which is which. He simply
pursues his vision of excellence through whatever he is doing, and leaves
others to determine whether he is working or playing. To himself, he always
appears to be doing both.”* — L. P. Jacks, Education through Recreation
(1932), p. 1


Re: Enabling emptyDir quota on atomic hosts

2016-09-27 Thread Clayton Coleman
If you can prevent your eyes from bleeding through sheer strength of will -
gaze upon the setup code here:

https://github.com/openshift/vagrant-openshift/blob/master/lib/vagrant-openshift/action/install_origin_base_dependencies.rb#L262

I thought there was doc for this but I'm not seeing it in my quick searches.

On Sep 27, 2016, at 9:12 PM, Andrew Lau  wrote:

I noticed support for emptyDir volume quota was added in 1.3, is there any
documentation on how we can enable this on atomic hosts? Setting gquota in
/etc/fstab doesn't apply.

"Preliminary support for local emptyDir volume quotas, set this value to a
resource quantity representing the desired quota per FSGroup, per node.
(i.e. 1Gi, 512Mi, etc) Currently requires that the volumeDirectory be on an
XFS filesystem mounted with the 'gquota' option, and the matching security
context constraint’s fsGroup type set to 'MustRunAs'."
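Putting the quoted note together, the setup appears to need two pieces: an XFS mount with `gquota` under the node's volume directory, and a quota size in the node configuration. A sketch under those assumptions (device path and quota size are placeholders; the `volumeConfig` stanza matches what the vagrant script above writes, but verify the key names against your node-config.yaml):

```shell
# 1. The volume directory must sit on XFS mounted with gquota, e.g. in /etc/fstab:
#    /dev/vdb  /var/lib/origin/openshift.local.volumes  xfs  defaults,gquota  0 0

# 2. Enable a per-FSGroup quota in the node config and restart the node:
cat <<'EOF' >> /etc/origin/node/node-config.yaml
volumeConfig:
  localQuota:
    perFSGroup: 512Mi
EOF
systemctl restart origin-node
```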



Re: Secrets not available anymore with 1.3.0

2016-09-27 Thread Philippe Lafoucrière
It's definitely an issue related to 1.3.0. I have downgraded the cluster to
1.2.1, and it works again :(​


Enabling emptyDir quota on atomic hosts

2016-09-27 Thread Andrew Lau
I noticed support for emptyDir volume quota was added in 1.3, is there any
documentation on how we can enable this on atomic hosts? Setting gquota in
/etc/fstab doesn't apply.

"Preliminary support for local emptyDir volume quotas, set this value to a
resource quantity representing the desired quota per FSGroup, per node.
(i.e. 1Gi, 512Mi, etc) Currently requires that the volumeDirectory be on an
XFS filesystem mounted with the 'gquota' option, and the matching security
context constraint’s fsGroup type set to 'MustRunAs'."


Re: Ruby-ex deployment times out

2016-09-27 Thread Gerard Braad
Hi,


although the image was pushed during the build, it says: Failed to
pull image "172.30.165.95:5000/myproject/ruby-ex


Manually pulling gives:

$ docker pull 172.30.165.95:5000/myproject/ruby-ex:latest
Trying to pull repository 172.30.165.95:5000/myproject/ruby-ex ...
Pulling repository 172.30.165.95:5000/myproject/ruby-ex
Error: image myproject/ruby-ex not found


On Wed, Sep 28, 2016 at 12:27 AM, Clayton Coleman  wrote:
> You can run "oc get events" to see the message about pulling,

$ oc get events
LASTSEEN   FIRSTSEEN   COUNT NAME   KIND
SUBOBJECT TYPE  REASON   SOURCE
9s 22s 2 ruby-ex-3-dtdy0Pod
spec.containers{ruby-ex}  Warning   Failed   {kubelet
10.5.0.27}Failed to pull image
"172.30.165.95:5000/myproject/ruby-ex@sha256:df71f696941a9daa5daaea808cfcaaf72071d7ad206833c1b95a5060dd95ca92":
Cannot overwrite digest
sha256:df71f696941a9daa5daaea808cfcaaf72071d7ad206833c1b95a5060dd95ca92
9s 22s 2 ruby-ex-3-dtdy0Pod
 Warning   FailedSync   {kubelet 10.5.0.27}
Error syncing pod, skipping: failed to "StartContainer" for "ruby-ex"
with ErrImagePull: "Cannot overwrite digest
sha256:df71f696941a9daa5daaea808cfcaaf72071d7ad206833c1b95a5060dd95ca92"

21s   21s   1 ruby-ex-3-dtdy0   Pod
spec.containers{ruby-ex}   NormalBackOff  {kubelet 10.5.0.27}
 Back-off pulling image
"172.30.165.95:5000/myproject/ruby-ex@sha256:df71f696941a9daa5daaea808cfcaaf72071d7ad206833c1b95a5060dd95ca92"


> image to the internal registry, docker has to have --insecure-registry set.

Docker is configured with:

OPTIONS='--selinux-enabled --log-driver=journald --insecure-registry
172.30.0.0/16'


Full output I posted here:
https://gist.github.com/gbraad/e82edffb671a5dd154a939491514f7f8
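One way to separate auth problems from reachability is to hit the registry's health endpoint directly from the node. A sketch using the registry address from the log above (assumes the docker-distribution `/healthz` route and plain HTTP, since the default integrated registry is not TLS-secured):

```shell
# Expect HTTP 200 if the node can reach the registry service:
curl -s -o /dev/null -w '%{http_code}\n' http://172.30.165.95:5000/healthz

# List the tags the registry actually holds for the image:
# curl -s http://172.30.165.95:5000/v2/myproject/ruby-ex/tags/list
```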

regards,


Gerard

-- 

   Gerard Braad | http://gbraad.nl
   [ Doing Open Source Matters ]



Re: Installing OpenShift Origin v1.3.0

2016-09-27 Thread Sachin Vaidya
Thanks Troy. I was able to install OpenShift v1.3.0.

deployment_type=origin
openshift_pkg_version=-1.3.0

What is the official Slack channel for OpenShift ?

Thanks
Sachin


On Tue, Sep 27, 2016 at 8:09 AM, Troy Dawson  wrote:

> The origin 1.3.0 packages are now in the normal CentOS repository.
> You no longer need to enable -testing.
>
>
> On Tue, Sep 27, 2016 at 3:45 AM, Raul Martinez-Sanchez (raumarti)
>  wrote:
> > At the moment it seems you have to enable the ‘Testing’ Openshift yum
> repo
> > to get the install packages; I'm not sure why it's not in the main repo yet
> > (this might have been already answered in another email thread).
> >
> > I have tested it with both ‘variables’ and both work.
> >
> > Cheers
> >
> >
> >
> > From:  on behalf of Sachin
> Vaidya
> > 
> > Date: Tuesday, 27 September 2016 at 07:38
> > To: "users@lists.openshift.redhat.com"  >
> > Subject: Installing OpenShift Origin v1.3.0
> >
> > Hi,
> >How to install OpenShift Origin v1.3.0 using "openshift-ansible"
> scripts?
> > I have tried following 2 options in inventory.
> >
> > deployment_type=origin
> > openshift_pkg_version=-1.3.0
> >
> > OR
> > deployment_type=origin
> > openshift_release=v1.3.0
> >
> > Both options have failed installation with error that it can't find
> v1.3.0
> > As per github repo (https://github.com/openshift/origin/releases),
> v1.3.0
> > is the latest release of OpenShift.
> >
> > Thanks
> > Sachin
> >
> > ___
> > users mailing list
> > users@lists.openshift.redhat.com
> > http://lists.openshift.redhat.com/openshiftmm/listinfo/users
> >
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>


Re: Can't push images after 1.3.0 upgrade

2016-09-27 Thread Philippe Lafoucrière
Note that I can pull the image with this account.
I have tried to re-add the role to the user:

$ oadm policy add-cluster-role-to-user system:image-builder our_ci_user

with no success.
According to
https://docs.openshift.com/container-platform/3.3/admin_guide/manage_authorization_policy.html,
I
should be able to update the layers.

$ oadm policy who-can update imagestreams/layers
-> my ci user is listed here
​
Thanks
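Since pulls work but pushes 401 despite correct policy, one quick check is whether docker is actually presenting the account's current token when it pushes. A sketch (registry address and image name are placeholders taken from this thread's examples; `-e` is docker 1.12's now-deprecated email flag):

```shell
# Re-login to the integrated registry with the token docker will present:
TOKEN=$(oc whoami -t)
docker login -u anything -p "$TOKEN" -e any@example.com 172.30.165.95:5000
docker push 172.30.165.95:5000/gemnasium-staging/registry-scanner:latest
```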


Re: Can't push images after 1.3.0 upgrade

2016-09-27 Thread Philippe Lafoucrière
On Tue, Sep 27, 2016 at 4:29 PM, Jordan Liggitt  wrote:
>
> Do you have the registry logs available from the timeframe during the
push?


10.1.0.1 - - [27/Sep/2016:20:59:57
+] time="2016-09-27T20:59:58.948672089Z" level=error msg="error
authorizing context: authorization header required" go.version=go1.6.3
http.request.host=redacted http.request.id=24db7eaf-f66f-462a-9d2e-434b77ca7a30
http.request.method=PATCH http.request.remoteaddr=172.29.13.4
http.request.uri="/v2/gemnasium-staging/registry-scanner/blobs/uploads/a9c303fc-e85b-428c-b799-9cba00a40f77?_state=FguTxOGl3FNUtqk1-RNJvR8E7fvACwiGW_MQetCuFRp7Ik5hbWUiOiJnZW1uYXNpdW0tc3RhZ2luZy9yZWdpc3RyeS1zY2FubmVyIiwiVVVJRCI6ImE5YzMwM2ZjLWU4NWItNDI4Yy1iNzk5LTljYmEwMGE0MGY3NyIsIk9mZnNldCI6MCwiU3RhcnRlZEF0IjoiMjAxNi0wOS0yN1QyMDo1OTo1OC45MTEyMDE4MjhaIn0%3D"
http.request.useragent="docker/1.12.1 go/go1.6.3 git-commit/23cf638
kernel/3.16.0-4-amd64 os/linux arch/amd64
UpstreamClient(Docker-Client/1.12.1 \\(linux\\))"
instance.id=093a0322-b4bf-4b2a-bed3-5f02d0b2b0d7
vars.name="gemnasium-staging/registry-scanner"
vars.uuid=a9c303fc-e85b-428c-b799-9cba00a40f77
10.1.0.1 - - [27/Sep/2016:20:59:58 +] "PATCH
/v2/gemnasium-staging/registry-scanner/blobs/uploads/a9c303fc-e85b-428c-b799-9cba00a40f77?_state=FguTxOGl3FNUtqk1-RNJvR8E7fvACwiGW_MQetCuFRp7Ik5hbWUiOiJnZW1uYXNpdW0tc3RhZ2luZy9yZWdpc3RyeS1zY2FubmVyIiwiVVVJRCI6ImE5YzMwM2ZjLWU4NWItNDI4Yy1iNzk5LTljYmEwMGE0MGY3NyIsIk9mZnNldCI6MCwiU3RhcnRlZEF0IjoiMjAxNi0wOS0yN1QyMDo1OTo1OC45MTEyMDE4MjhaIn0%3D
HTTP/1.1" 401 248 "" "docker/1.12.1 go/go1.6.3 git-commit/23cf638
kernel/3.16.0-4-amd64 os/linux arch/amd64
UpstreamClient(Docker-Client/1.12.1 \\(linux\\))"
10.1.0.1 - - [27/Sep/2016:20:59:58 +] "PATCH
/v2/gemnasium-staging/registry-scanner/blobs/uploads/66b43d12-ad91-4c48-9e74-4d7fcc2f6eb8?_state=XTu8Xy0JVxFNNHlaCwnssuOkev1Vc_xy_iyGsSwtI5t7Ik5hbWUiOiJnZW1uYXNpdW0tc3RhZ2luZy9yZWdpc3RyeS1zY2FubmVyIiwiVVVJRCI6IjY2YjQzZDEyLWFkOTEtNGM0OC05ZTc0LTRkN2ZjYzJmNmViOCIsIk9mZnNldCI6MCwiU3RhcnRlZEF0IjoiMjAxNi0wOS0yN1QyMDo1OTo1OC45MTAyNDgzNzlaIn0%3D
HTTP/1.1" 401 248 "" "docker/1.12.1 go/go1.6.3 git-commit/23cf638
kernel/3.16.0-4-amd64 os/linux arch/amd64
UpstreamClient(Docker-Client/1.12.1 \\(linux\\))"

Thanks


Re: Secrets not available anymore with 1.3.0

2016-09-27 Thread Philippe Lafoucrière
Is this what you're looking for?

 secret.go:152] Setting up volume airbrake-secrets for pod
41cdd02f-84ea-11e6-be87-005056b17dcc at
/var/lib/origin/openshift.local.volumes/pods/41cdd02f-84ea-11e6-be87-005056b17dcc/volumes/
kubernetes.io~secret/airbrake-secrets
 nsenter_mount.go:183] findmnt command: nsenter
[--mount=/rootfs/proc/1/ns/mnt -- /bin/findmnt -o target,fstype
--noheadings --first-only --target
/var/lib/origin/openshift.local.volumes/pods/41cdd02f-84ea-11e6-be87-005056b17dcc/volumes/
kubernetes.io~secret/airbrake-secrets]


secret.go:179] Received secret gemnasium-staging/airbrake containing (2)
pieces of data, 40 total bytes
atomic_writer.go:316]
/var/lib/origin/openshift.local.volumes/pods/41cdd02f-84ea-11e6-be87-005056b17dcc/volumes/
kubernetes.io~secret/airbrake-secrets: current paths:   [airbrake-key
airbrake-project-id]
atomic_writer.go:328]
/var/lib/origin/openshift.local.volumes/pods/41cdd02f-84ea-11e6-be87-005056b17dcc/volumes/
kubernetes.io~secret/airbrake-secrets: new paths:   [airbrake-key
airbrake-project-id]
atomic_writer.go:331]
/var/lib/origin/openshift.local.volumes/pods/41cdd02f-84ea-11e6-be87-005056b17dcc/volumes/
kubernetes.io~secret/airbrake-secrets: paths to remove: map[]
atomic_writer.go:136] pod gemnasium-staging/gemnasium-api-v1-3-xxi0j volume
airbrake-secrets: no update required for target directory
/var/lib/origin/openshift.local.volumes/pods/41cdd02f-84ea-11e6-be87-005056b17dcc/volumes/
kubernetes.io~secret/airbrake-secrets

I can't find any error related to that :(
​


Re: Can't push images after 1.3.0 upgrade

2016-09-27 Thread Jordan Liggitt
Do you have the registry logs available from the timeframe during the push?

On Tue, Sep 27, 2016 at 4:26 PM, Philippe Lafoucrière <
philippe.lafoucri...@tech-angels.com> wrote:

> Hi,
>
> Another issue we're facing after the upgrade to 1.3.0:
> our CI service account can't push images to the registry anymore.
> I have tried to push the image by hand:
>
> 202bc3fd6fe4: Pushing [==>]
> 7.114 MB
> be16db112b16: Pushing [==>]
> 280.6 kB
> unauthorized: authentication required
>
> In the sa description, the tokens seem to be the same (at least they have
> the same names).
> I have tried to reconcile policies:
>
> oadm policy reconcile-cluster-roles \
> --additive-only=true \
> --confirm
>
> oadm policy reconcile-cluster-role-bindings \
> --exclude-groups=system:authenticated \
> --exclude-groups=system:authenticated:oauth \
> --exclude-groups=system:unauthenticated \
> --exclude-users=system:anonymous \
> --additive-only=true \
> --confirm
>
> oadm policy reconcile-sccs \
> --additive-only=true \
> --confirm
>
> (but it should be done by the playbook, I think), and yet, I can't push any
> more :(
>
> Did we miss something during the upgrade?
>
> Thanks,
> Philippe
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


Can't push images after 1.3.0 upgrade

2016-09-27 Thread Philippe Lafoucrière
Hi,

Another issue we're facing after the upgrade to 1.3.0:
our CI service account can't push images to the registry anymore.
I have tried to push the image by hand:

202bc3fd6fe4: Pushing [==>]
7.114 MB
be16db112b16: Pushing [==>]
280.6 kB
unauthorized: authentication required

In the sa description, the tokens seem to be the same (at least they have
the same names).
I have tried to reconcile policies:

oadm policy reconcile-cluster-roles \
--additive-only=true \
--confirm

oadm policy reconcile-cluster-role-bindings \
--exclude-groups=system:authenticated \
--exclude-groups=system:authenticated:oauth \
--exclude-groups=system:unauthenticated \
--exclude-users=system:anonymous \
--additive-only=true \
--confirm

oadm policy reconcile-sccs \
--additive-only=true \
--confirm

(but it should be done by the playbook, I think), and yet, I can't push any
more :(

Did we miss something during the upgrade?

Thanks,
Philippe


Re: Secrets not available anymore with 1.3.0

2016-09-27 Thread Clayton Coleman
Which version of Docker are you running?  Paul, do those propagation
settings look correct?

On Tue, Sep 27, 2016 at 3:40 PM, Philippe Lafoucrière <
philippe.lafoucri...@tech-angels.com> wrote:

> Hi,
>
> We're testing OS 1.3.0 on our test cluster, and have something weird
> happening.
> The secrets are mounted, but apparently not readable anymore in _some_
> pods:
>
> This is on openshift 1.2.1:
>
> {
> "Source": "/var/lib/origin/openshift.local.volumes/pods/3f7a5adc-
> 84b1-11e6-8101-005056b12d45/volumes/kubernetes.io~secret/
> airbrake-secrets",
> "Destination": "/etc/secrets/airbrake",
> "Mode": "ro,Z",
> "RW": false
> }
>
> and on openshift 1.3.0:
>
>  {
>  "Source": "/var/lib/origin/openshift.local.volumes/pods/19df38db-
> 84e9-11e6-be87-005056b17dcc/volumes/kubernetes.io~secret/
> airbrake-secrets",
>  "Destination": "/etc/secrets/airbrake",
>  "Mode": "ro,Z",
>  "RW": false,
>  "Propagation": "rslave"
>  },
>
> Only the propagation is different, but it should not be an issue.
> I can't get a shell inside the container, because it's just an executable
> wrapped inside a "scratch" docker image.
>
> The pods with a shell don't seem to have this problem, and I can see the
> secrets mounted and used as usual.
>
> Any hints?
>
> Thanks,
> Philippe
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


Secrets not available anymore with 1.3.0

2016-09-27 Thread Philippe Lafoucrière
Hi,

We're testing OS 1.3.0 on our test cluster, and have something weird
happening.
The secrets are mounted, but apparently not readable anymore in _some_ pods:

This is on openshift 1.2.1:

{
"Source":
"/var/lib/origin/openshift.local.volumes/pods/3f7a5adc-84b1-11e6-8101-005056b12d45/volumes/
kubernetes.io~secret/airbrake-secrets",
"Destination": "/etc/secrets/airbrake",
"Mode": "ro,Z",
"RW": false
}

and on openshift 1.3.0:

 {
 "Source":
"/var/lib/origin/openshift.local.volumes/pods/19df38db-84e9-11e6-be87-005056b17dcc/volumes/
kubernetes.io~secret/airbrake-secrets",
 "Destination": "/etc/secrets/airbrake",
 "Mode": "ro,Z",
 "RW": false,
 "Propagation": "rslave"
 },

Only the propagation is different, but it should not be an issue.
I can't get a shell inside the container, because it's just an executable
wrapped inside a "scratch" docker image.

The pods with a shell don't seem to have this problem, and I can see the secrets
mounted and used as usual.

Any hints?

Thanks,
Philippe


Re: Ruby-ex deployment times out

2016-09-27 Thread Clayton Coleman
You can run "oc get events" to see the message about pulling, but it's
possible that your machine can't pull from the registry that the image is
on.  That could be a lot of things - if you're doing a build that pushed an
image to the internal registry, docker has to have --insecure-registry
set.  If you have a proxy or other firewall, that could also be blocking it.

On Tue, Sep 27, 2016 at 11:23 AM, Gerard Braad  wrote:

> On Tue, Sep 27, 2016 at 11:17 PM, Clayton Coleman 
> wrote:
> > oc logs POD_NAME
>
> $ oc log ruby-ex-2-7ywo7
> W0927 15:20:50.019027   17417 cmd.go:269] log is DEPRECATED and will
> be removed in a future version. Use logs instead.
> Error from server: container "ruby-ex" in pod "ruby-ex-2-7ywo7" is
> waiting to start: trying and failing to pull image
>


Re: Ruby-ex deployment times out

2016-09-27 Thread Gerard Braad
On Tue, Sep 27, 2016 at 11:17 PM, Clayton Coleman  wrote:
> oc logs POD_NAME

$ oc log ruby-ex-2-7ywo7
W0927 15:20:50.019027   17417 cmd.go:269] log is DEPRECATED and will
be removed in a future version. Use logs instead.
Error from server: container "ruby-ex" in pod "ruby-ex-2-7ywo7" is
waiting to start: trying and failing to pull image



Re: Ruby-ex deployment times out

2016-09-27 Thread Clayton Coleman
Were there any logs from the ruby pod as it was being spun up?  If you
trigger a new deployment:

oc deploy ruby-ex --latest

When you see a new ruby-ex-* pod get created (the one with the random
suffix) try checking the logs with

oc logs POD_NAME

To see if the app was failing to load.

On Sep 27, 2016, at 11:08 AM, Gerard Braad  wrote:

Hi All,


On a test environment using the `oc cluster up` command, I am unable
to deploy the ruby-ex example application. I am deploying on Fedora
24, using origin v1.3.

As you can see, the build succeeds, pushed the image and then
deployment times out.

$ oc logs bc/ruby-ex
Cloning "https://github.com/gbraad/ruby-ex" ...
   Commit: f63d076b602441ebd65fd0749c5c58ea4bafaf90 (Merge pull
request #2 from mfojtik/add-puma)
   Author: Michal Fojtik 
   Date:   Thu Jun 30 10:47:53 2016 +0200
---> Installing application source ...
---> Building your Ruby application from source ...
---> Running 'bundle install --deployment' ...
Fetching gem metadata from https://rubygems.org/...
Installing puma (3.4.0)
Installing rack (1.6.4)
Using bundler (1.3.5)
Cannot write a changed lockfile while frozen.
Your bundle is complete!
It was installed into ./bundle
---> Cleaning up unused ruby gems ...
Pushing image 172.30.165.95:5000/myproject/ruby-ex:latest ...
Pushed 0/10 layers, 2% complete
Pushed 1/10 layers, 43% complete
Pushed 2/10 layers, 50% complete
Pushed 3/10 layers, 50% complete
Pushed 4/10 layers, 50% complete
Pushed 5/10 layers, 70% complete
Pushed 6/10 layers, 61% complete
Pushed 7/10 layers, 71% complete
Pushed 8/10 layers, 92% complete
Pushed 9/10 layers, 97% complete
Pushed 10/10 layers, 100% complete
Push successful
$ oc logs dc/ruby-ex
--> Scaling ruby-ex-1 to 1
--> Waiting up to 10m0s for pods in deployment ruby-ex-1 to become ready
$ oc logs dc/ruby-ex
--> Scaling ruby-ex-1 to 1
--> Waiting up to 10m0s for pods in deployment ruby-ex-1 to become ready
error: update acceptor rejected ruby-ex-1: pods for deployment
"ruby-ex-1" took longer than 600 seconds to become ready


I verified that the environment works by deploying the hello-openshift:

$ oc run hello-openshift
--image=docker.io/openshift/hello-openshift:latest --port=8080
--expose

and this gets deployed and is accessible.

Any ideas?

regards,


Gerard


-- 

  Gerard Braad | http://gbraad.nl
  [ Doing Open Source Matters ]

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: AW: Router Sharding

2016-09-27 Thread Phil Cameron
The discussion on routing has been improved in the latest 3.3 docs. 
Please take a look and see if it is helpful.


With sharding each route gets one or more labels and each router (shard) 
has a selector that selects a set of routes by label. So it is up to the 
user/admin to assign labels to the routes and then to select a set of 
routes on each router (shard). By default a router selects all routes, 
so when sharding is used all routers (including the default router) will 
need to have a selector.
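As a concrete sketch of that labeling scheme (label values and router names here are illustrative, not canonical):

```shell
# Tag a route so one shard claims it:
oc label route myapp router=internal

# Make a router deployment serve only routes carrying that label:
oc set env dc/router-internal ROUTE_LABELS='router=internal'

# Give the default router its own selector so it stops claiming everything:
oc set env dc/router ROUTE_LABELS='router=default'
```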


ipfailover (VIP using VRRP) deployment selects a set of nodes on which to 
present the supplied set of IP addresses. At any point in time one of 
the nodes is the master and it receives the packets to the IP address. 
So a single IP address can be set in DNS with knowledge that it will be 
serviced by one of the nodes. This implies that the application has 
replicas running on all of the nodes in the set.
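A minimal ipfailover sketch along those lines (node names, label, and VIP are placeholders):

```shell
# Label the nodes that may carry the VIP:
oc label node node1.example.com node2.example.com ha-router=primary

# Deploy keepalived pods that float the VIP across those nodes and
# watch the router's port for liveness:
oadm ipfailover ipf-ha-router \
    --replicas=2 \
    --selector="ha-router=primary" \
    --virtual-ips="10.0.0.100" \
    --watch-port=80 \
    --create
```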


phil

On 09/26/2016 04:53 PM, Aleksandar Lazic wrote:


Hi.

I agree with you, and I have tried to contribute to the doc, but that 
wasn’t an easy task so I stopped.


Maybe I was also too naïve, so blame me that I stopped contributing.

@1: Currently that’s not possible; you will need to add, for every route, 
the label for the dedicated router.


‘oc create route …’

has no option to set labels; you will need to use

oc expose service ... --labels='router=one' --hostname='...'

or you can use the labels in the webconsole.

Oh and by the way the default router MUST also have ROUTE_LABELS if 
you don’t want to expose all routes to the default router.


@2: you will need the new template from OCP 3.3; there are additional 
env variables necessary to be able to run more than one router on the 
same node.


https://github.com/openshift/origin/blob/master/images/router/haproxy/conf/haproxy-config.template#L147

https://github.com/openshift/origin/blob/master/images/router/haproxy/conf/haproxy-config.template#L184

and you need to add on the router nodes in the iptables chain 
‘OS_FIREWALL_ALLOW’ the additional ports.
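For example, for a shard listening on 81/444/1937, that would look roughly like:

```shell
# Allow the extra router ports through the node firewall (example ports;
# run as root, and persist the rules via /etc/sysconfig/iptables or they
# are lost on reboot):
iptables -A OS_FIREWALL_ALLOW -p tcp -m tcp --dport 81 -j ACCEPT
iptables -A OS_FIREWALL_ALLOW -p tcp -m tcp --dport 444 -j ACCEPT
iptables -A OS_FIREWALL_ALLOW -p tcp -m tcp --dport 1937 -j ACCEPT
```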


@3: This would be a little bit tricky on the same node, due to the fact 
that


https://github.com/openshift/origin/blob/master/images/ipfailover/keepalived/lib/failover-functions.sh#L11-L12

only handles one config file. Maybe there is a way with *VIPS, but I 
have never tried it.


Hth

Aleks

*From:* users-boun...@lists.openshift.redhat.com *On Behalf Of* Srinivas 
Naga Kotaru (skotaru)

*Sent:* Monday, 26 September 2016 21:31
*To:* Andrew Lau; users@lists.openshift.redhat.com
*Subject:* Re: Router Sharding

The current sharding documentation is very high level and doesn’t cover 
step-by-step real-world use cases.


Anyway, I succeeded in creating 2 shards. I have a lot of questions on this 
topic on how to proceed next …


1. How do I tell a project that all apps created in it should 
use router #1 or router #2?


2. Now we have 3 routers (the default created as part of the installation, 
plus 2 additional routers). How do the ports work? 80, 443 & 1936 are 
assigned to the default router. I changed the ports to 81/444/1937 and 
82/445/1938 for shards #1 and #2 respectively. Are these ports opened 
automatically, or is explicit action required?


3. Ipfailover (floating VIP) is bound to the default router. Do we need to 
create additional IP failover pods with different IPs matched to 
shards #1 and #2? Or can we share the same IP failover pods, with a single 
floating VIP, with the newly created shards as well?


--

*Srinivas Kotaru*

*From: *Andrew Lau
*Date: *Friday, September 23, 2016 at 7:41 PM
*To: *Srinivas Naga Kotaru, "users@lists.openshift.redhat.com"

*Subject: *Re: Router Sharding

There are docs here:

- https://docs.openshift.org/latest/architecture/core_concepts/routes.html#router-sharding

- https://docs.openshift.org/latest/install_config/router/default_haproxy_router.html#creating-router-shards


On Sat, 24 Sep 2016 at 06:13, Srinivas Naga Kotaru (skotaru) wrote:


Just saw 3.3 features blog

https://blog.openshift.com/whats-new-openshift-3-3-cluster-management/

We’re rethinking our cluster design and want to consolidate to 1
cluster per data center. Initially we were planning 2 clusters
per data center, to serve internal and external traffic each on a
dedicated cluster.

Consolidating to a single cluster per DC will offer multiple
advantages to us. We are currently running the latest 3.2.1 release.

Is Router Sharding available in the 3.2.x branch, or do we need to
wait for 3.3? I thought this feature had been available from 3.x
onwards, as per the available documentation, so I'm not sure what is
meant for the upcoming 3.3.

We really want to take advantage of this feature and test ASAP.

Re: Jenkins plugin - binary build

2016-09-27 Thread Clayton Coleman
There is an API for launching a binary build from a build config - you can
do it from a curl call if necessary (run with --loglevel=8 to see an
example of that call).  You must send as the contents of the POST call the
source to build as a tar, zip, or tar.gz
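A rough sketch of such a call (the master URL, project, and build config names are placeholders; the exact endpoint for your server is best confirmed from the `--loglevel=8` output mentioned above):

```shell
# Package the current directory as the binary build input.
tar -czf src.tar.gz .

# POST the archive to the build config's instantiatebinary endpoint.
curl -k -X POST \
    -H "Authorization: Bearer $(oc whoami -t)" \
    -H "Content-Type: application/octet-stream" \
    --data-binary @src.tar.gz \
    "https://master.example.com:8443/oapi/v1/namespaces/myproject/buildconfigs/myapp/instantiatebinary"
```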

On Sep 27, 2016, at 6:35 AM, Ben Parees  wrote:



On Sep 27, 2016 2:10 AM, "Lionel Orellana"  wrote:
>
> Hi
>
> Is it possible to trigger a binary build in Jenkins using
the openshiftBuild step?
>
> I'm basically trying to run something like
>
> oc start-build  --from-dir=
>
> but there's no option to pass from-dir in the openshiftBuild step. Are
there plans to support this?

It's not possible today, but yes it is on our list. In the meantime you can
shell out and invoke oc directly to accomplish the same thing.

>
> Thanks
>
>
> Lionel.
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>



Re: Jenkins plugin - binary build

2016-09-27 Thread Ben Parees
On Sep 27, 2016 2:10 AM, "Lionel Orellana"  wrote:
>
> Hi
>
> Is it possible to trigger a binary build in Jenkins using
the openshiftBuild step?
>
> I'm basically trying to run something like
>
> oc start-build  --from-dir=
>
> but there's no option to pass from-dir in the openshiftBuild step. Are
there plans to support this?

It's not possible today, but yes it is on our list. In the meantime you can
shell out and invoke oc directly to accomplish the same thing.

>
> Thanks
>
>
> Lionel.
>


Re: Router Sharding

2016-09-27 Thread Michail Kargakis
can you open an issue about adding --labels to create route?

On Mon, Sep 26, 2016 at 10:53 PM, Aleksandar Lazic <
aleksandar.la...@cloudwerkstatt.com> wrote:

> We really want to take advantage of this feature and test it ASAP. The
> current documentation is not clear, or explains things only at a high level.
>
>
>
> Can you help me or point me to the right documentation which explains step
> by step how to test this feature?
>
>
>
> Can we control routes at the project level so that clients won't modify or
> move their routes from prod to non-prod or from internal to external routers?
>
>
>
> --
>
> *Srinivas Kotaru*
>