Adding the edit cluster role seems to work.
oadm policy add-cluster-role-to-user edit system:serviceaccount:jenkins:jenkins
But it feels like I'm giving it too much access. I tried the role
system:build-controller but that wasn't enough.
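A narrower alternative, assuming the Jenkins service account only needs access within specific projects, would be to bind the ordinary edit role per project instead of cluster-wide. A sketch (the project name here is a placeholder):

```shell
# Grant edit only inside one project, rather than a cluster-wide binding.
# Repeat per project the Jenkins service account actually needs to touch.
oadm policy add-role-to-user edit \
  system:serviceaccount:jenkins:jenkins -n myproject
```

This still requires a cluster and valid credentials to run; it only narrows the scope of what the edit cluster role would otherwise grant everywhere.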
On 28 September 2016 at 14:00, Lionel Orellana wrote:
Just wanted to reach out and say I just figured this out and it had nothing
to do with openshift whatsoever.
The lab system that I'm running openshift on to develop a proof of concept
is running on a libvirt host with NAT based networking. In order to get to
the application that I have deployed
On Tue, Sep 27, 2016 at 5:13 PM, Sachin Vaidya wrote:
> Thanks Troy. I was able to install OpenShift v1.3.0.
>
> deployment_type=origin
> openshift_pkg_version=-1.3.0
>
> What is the official Slack channel for OpenShift ?
>
We have an IRC channel on Freenode:
If you can prevent your eyes from bleeding through sheer strength of will -
gaze upon the setup code here:
https://github.com/openshift/vagrant-openshift/blob/master/lib/vagrant-openshift/action/install_origin_base_dependencies.rb#L262
I thought there was doc for this but I'm not seeing it in my
It's definitely an issue related to 1.3.0. I have downgraded the cluster to
1.2.1, and it works again :(
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
I noticed support for emptyDir volume quota was added in 1.3, is there any
documentation on how we can enable this on atomic hosts? Setting gquota in
/etc/fstab doesn't apply.
"Preliminary support for local emptyDir volume quotas, set this value to a
resource quantity representing the desired
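For what it's worth, in 1.3 this appears to be configured on the node rather than via /etc/fstab alone. A sketch of the node-config.yaml fragment, assuming the emptyDir volume directory sits on an XFS filesystem mounted with gquota (the quantity is an example value):

```yaml
# node-config.yaml (fragment) -- preliminary emptyDir quota support in 1.3
volumeConfig:
  localQuota:
    # Resource quantity applied per FSGroup; requires XFS mounted with gquota
    perFSGroup: 512Mi
```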
Hi,
although the image was pushed during the build, it says: Failed to
pull image "172.30.165.95:5000/myproject/ruby-ex
Manually pulling gives:
$ docker pull 172.30.165.95:5000/myproject/ruby-ex:latest
Trying to pull repository 172.30.165.95:5000/myproject/ruby-ex ...
Pulling repository
Thanks Troy. I was able to install OpenShift v1.3.0.
deployment_type=origin
openshift_pkg_version=-1.3.0
What is the official Slack channel for OpenShift ?
Thanks
Sachin
On Tue, Sep 27, 2016 at 8:09 AM, Troy Dawson wrote:
> The origin 1.3.0 packages are now in the normal
Note that I can pull the image with this account.
I have tried to re-add the role to the user:
$ oadm policy add-cluster-role-to-user system:image-builder our_ci_user
with no success.
According to
https://docs.openshift.com/container-platform/3.3/admin_guide/manage_authorization_policy.html,
I
On Tue, Sep 27, 2016 at 4:29 PM, Jordan Liggitt wrote:
>
> Do you have the registry logs available from the timeframe during the
push?
10.1.0.1 - - [27/Sep/2016:20:59:57
+] time="2016-09-27T20:59:58.948672089Z" level=error msg="error
authorizing context: authorization
Is this what you're looking for?
secret.go:152] Setting up volume airbrake-secrets for pod
41cdd02f-84ea-11e6-be87-005056b17dcc at
/var/lib/origin/openshift.local.volumes/pods/41cdd02f-84ea-11e6-be87-005056b17dcc/volumes/
kubernetes.io~secret/airbrake-secrets
nsenter_mount.go:183] findmnt
Do you have the registry logs available from the timeframe during the push?
On Tue, Sep 27, 2016 at 4:26 PM, Philippe Lafoucrière <
philippe.lafoucri...@tech-angels.com> wrote:
> Hi,
>
> Another issue we're facing after the upgrade to 1.3.0:
> our CI service account can't push images to the
Hi,
Another issue we're facing after the upgrade to 1.3.0:
our CI service account can't push images to the registry anymore.
I have tried to push the image by hand:
202bc3fd6fe4: Pushing [==>] 7.114 MB
be16db112b16: Pushing
Which version of Docker are you running? Paul, do those propagation
settings look correct?
On Tue, Sep 27, 2016 at 3:40 PM, Philippe Lafoucrière <
philippe.lafoucri...@tech-angels.com> wrote:
> Hi,
>
> We're testing OS 1.3.0 on our test cluster, and have something weird
> happening.
> The
Hi,
We're testing OS 1.3.0 on our test cluster, and have something weird
happening.
The secrets are mounted, but apparently not readable anymore in _some_ pods:
This is on openshift 1.2.1:
{
"Source":
You can run "oc get events" to see the message about pulling, but it's
possible that your machine can't pull from the registry that the image is
on. That could be a lot of things - if you're doing a build that pushed an
image to the internal registry, docker has to have --insecure-registry
set.
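For reference, a sketch of that node-side Docker setting, assuming the default internal registry service network of 172.30.0.0/16 (adjust to your cluster's service CIDR):

```
# /etc/sysconfig/docker (fragment)
# Allow pulls from the internal registry over plain HTTP
OPTIONS='--selinux-enabled --insecure-registry 172.30.0.0/16'
```

The Docker daemon needs a restart (systemctl restart docker) for the flag to take effect.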
On Tue, Sep 27, 2016 at 11:17 PM, Clayton Coleman wrote:
> oc logs POD_NAME
$ oc log ruby-ex-2-7ywo7
W0927 15:20:50.019027 17417 cmd.go:269] log is DEPRECATED and will
be removed in a future version. Use logs instead.
Error from server: container "ruby-ex" in pod
Were there any logs from the ruby pod as it was being spun up? If you
trigger a new deployment:
oc deploy ruby-ex --latest
When you see a new ruby-ex-* pod get created (the one with the random
suffix) try checking the logs with
oc logs POD_NAME
To see if the app was failing to load.
The discussion on routing has been improved in the latest 3.3 docs.
Please take a look and see if it is helpful.
With sharding each route gets one or more labels and each router (shard)
has a selector that selects a set of routes by label. So it is up to the
user/admin to assign labels to the
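A sketch of what that looks like in practice, assuming the 3.x router honors a ROUTE_LABELS selector (label key and values here are made up):

```shell
# Tag a route so a particular shard will pick it up
oc label route my-route shard=internal

# Point one router (shard) at that set of routes via its route selector
oc set env dc/router ROUTE_LABELS='shard=internal'
```

Routes whose labels don't match any router's selector simply aren't served by that shard.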
There is an API for launching a binary build from a build config - you can
do it from a curl call if necessary (run with --loglevel=8 to see an
example of that call). You must send as the contents of the POST call the
source to build as a tar, zip, or tar.gz
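A sketch of such a call against the Origin API, assuming a build config named ruby-ex in project myproject and a token with builder rights (host, names, and token are placeholders):

```shell
# Stream a tar.gz of the current directory as the POST body to the
# instantiatebinary subresource; the server starts a binary build from it.
tar -cz . | curl -k -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @- \
  "https://master.example.com:8443/oapi/v1/namespaces/myproject/buildconfigs/ruby-ex/instantiatebinary"
```

Running `oc start-build --from-dir=. --loglevel=8` should show the exact request your cluster expects, which is the safer source of truth for the path and parameters.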
On Sep 27, 2016, at 6:35 AM, Ben
On Sep 27, 2016 2:10 AM, "Lionel Orellana" wrote:
>
> Hi
>
> Is it possible to trigger a binary build in Jenkins using
the openshiftBuild step?
>
> I'm basically trying to run something like
>
> oc start-build --from-dir=
>
> but there's no option to pass from-dir in the
can you open an issue about adding --labels to create route?
On Mon, Sep 26, 2016 at 10:53 PM, Aleksandar Lazic <
aleksandar.la...@cloudwerkstatt.com> wrote:
> Hi.
>
>
>
> I agree with you, and I have tried to contribute to the doc, but it
> wasn't an easy task, so I stopped.
>
> Maybe I was