FYI, the Dockerfile was wrong: it set the suid bit on /usr/bin/crontab. That
led to the /var/spool/cron/user file being owned by root, which prevented
crond from reading it.
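As a hedged sketch of the kind of fix involved (the user and group names here are assumptions, not taken from the linked Dockerfile), the usual approach is to make crontab setgid rather than setuid root, so spool files stay owned by the invoking user:

```dockerfile
# Hypothetical fragment -- "default" user and "crontab" group are assumptions.
# A setgid (not setuid-root) crontab binary writes spool files that remain
# owned by the invoking user, so crond can still read them.
RUN chgrp crontab /usr/bin/crontab && \
    chmod g+s /usr/bin/crontab && \
    chown default:crontab /var/spool/cron/default
```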
The working version is at
https://github.com/getupcloud/sti-ruby-extra/blob/master/1.9/Dockerfile#L24-L28
Regards,
*Mateus Caruccio*
Master
Fwiw, that validation is being reverted to maintain compatibility[0][1],
but you should still update to the corrected template. It's not correct to
specify a different targetPort for a headless service, and if you do, it
will just be ignored.
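For illustration (the names below are hypothetical, not from the template in question): a headless service is one with clusterIP: None, so no kube-proxy port mapping happens and DNS resolves directly to pod IPs, which is why targetPort has no effect:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-headless      # hypothetical name
spec:
  clusterIP: None             # headless: DNS returns pod IPs directly
  selector:
    app: example
  ports:
  - port: 9300
    targetPort: 9300          # on a headless service this value is ignored
```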
[0] https://github.com/openshift/origin/pull/7495
Hey list, having a few issues with the Aggregated Logging template provided as
part of the default install.
When running the deployer it’s failing with the following errors. I’ve also
attached a script I’ve been using to redeploy on failure as part of debugging.
Architecture
- aklkvm019.corp
>
> For now, yes. We're looking at ways to make dynamic provisioning more
> widely available, even outside of a cloud environment. We'd prefer to not
> implement more recyclers and instead make more provisioners.
>
Ok thanks, the PV is Bound again:
status:
  accessModes:
  - ReadWriteOnce
  -
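Since there is no glusterfs recycler, one way to avoid a volume going to Failed on release (a sketch only; the PV name and endpoints object are hypothetical) is to set the reclaim policy to Retain so the cluster never attempts recycling:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-gluster-pv              # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain # skip recycling entirely
  glusterfs:
    endpoints: glusterfs-cluster        # hypothetical endpoints object
    path: example-volume
```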
You need to call it like this: oc new-app --insecure-registry
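A sketch of both halves of the setup, using the same placeholder hostname as the quoted message below:

```shell
# Docker daemon side (/etc/sysconfig/docker), then restart docker:
INSECURE_REGISTRY='--insecure-registry ec2-xx-xx-xx-xx.xx-central-1.compute.amazonaws.com'

# OpenShift side: --insecure-registry tells new-app the source registry
# is insecure, independently of the docker daemon setting:
oc new-app --insecure-registry \
    ec2-xx-xx-xx-xx.xx-central-1.compute.amazonaws.com:5000/test/image:1
```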
On Tue, Feb 23, 2016 at 6:20 AM, Den Cowboy wrote:
> I've added it + restarted docker:
> INSECURE_REGISTRY='--insecure-registry
> ec2-xx-xx-xx-xx.xx-central-1.compute.amazonaws.com'
>
> I'm able to perform a
Just 1 place is sufficient - thanks!
On Mon, Feb 22, 2016 at 11:13 PM, Dean Peterson
wrote:
> Oh, I opened the bug on bugzilla but I can open it on github too:
> https://bugzilla.redhat.com/show_bug.cgi?id=1310968
>
>
> On Mon, Feb 22, 2016 at 9:37 PM, Clayton Coleman
On Tue, Feb 23, 2016 at 9:00 AM, Mark Turansky wrote:
> There is no recycler for glusterfs, so "no volume plugin matched" would
> occur when the volume is being reclaimed by the cluster after its release
> from a claim.
>
yes, the PVC was probably removed when the
On 02/23/2016 01:56 PM, Stéphane Klein wrote:
I've tried to append:
```
# oc secrets add serviceaccount/default secrets/hub.docker.io --pull
# oc secrets add serviceaccount/default secrets/hub.docker.io --for=pull
# oc secrets add serviceaccount/default secrets/hub.docker.io
# oc secrets add
```
Hi Philippe,
Has the claim for this volume been deleted?
There is no recycler for glusterfs, so "no volume plugin matched" would
occur when the volume is being reclaimed by the cluster after its release
from a claim.
Mark
On Tue, Feb 23, 2016 at 8:46 AM, Philippe Lafoucrière <
Hi,
We have a volume with status = "Failed" after upgrading to 1.1.3.
All our volumes are mounted through glusterfs, and all the others are fine,
the issue is just with one of them:
Name: pv-storage-1
Labels:
Status: Failed
Claim:
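For what it's worth, one hedged recovery sketch for a Failed PV (assuming the underlying gluster data itself is intact; the exported spec's claimRef and status must be stripped before re-creating):

```shell
oc get pv pv-storage-1 -o yaml > pv-storage-1.yaml  # back up the PV spec
# edit pv-storage-1.yaml: remove the claimRef stanza and the status section
oc delete pv pv-storage-1
oc create -f pv-storage-1.yaml                      # PV comes back as Available
```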
Srinivas Naga Kotaru (skotaru) wrote on 02/22/2016 08:26 PM:
Thanks guys for having some discussion on this topic. Please confirm whether my
understanding is correct pertaining to multi-cluster authentication and
token management.
1. OSE3 authentication subsystem can use external OAuth
I've tried to append:
```
# oc secrets add serviceaccount/default secrets/hub.docker.io --pull
# oc secrets add serviceaccount/default secrets/hub.docker.io --for=pull
# oc secrets add serviceaccount/default secrets/hub.docker.io
# oc secrets add serviceaccount/deployer secrets/hub.docker.io
```
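Of the variants above, the --for=pull form is the documented one for image pulls (--pull alone is not a valid flag). A sketch of the intended linking, using the secret name as given; whether the builder account also needs it depends on your setup:

```shell
# Link the existing hub.docker.io secret so pods can pull with it:
oc secrets add serviceaccount/default secrets/hub.docker.io --for=pull
# If builds also pull base images from that registry, link the builder too:
oc secrets add serviceaccount/builder secrets/hub.docker.io
```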
2016-02-23 11:05 GMT+01:00 Maciej Szulik :
> Have you checked this doc:
>
>
> https://docs.openshift.org/latest/architecture/core_concepts/builds_and_image_streams.html#private-registries
>
>
>
Thanks for this url :)
I've created my hub.docker.io secret with (I have replaced
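The secret-creation step (truncated above) typically looks like the following; the credential values are placeholders, not the real ones:

```shell
oc secrets new-dockercfg hub.docker.io \
    --docker-server=docker.io \
    --docker-username=<username> \
    --docker-password=<password> \
    --docker-email=<email>
```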
I've added it + restarted docker:
INSECURE_REGISTRY='--insecure-registry
ec2-xx-xx-xx-xx.xx-central-1.compute.amazonaws.com'
I'm able to perform a docker login and pull the image manually, but
oc new-app ec2-xxx:5000/test/image:1 (or /test/image) fails with:
error: can't look up Docker image