lol, feels like OpenShift is just throwing random errors at me :)
Now, after no changes on my part, deployments are failing with a
reference to a 500 internal server error. While trying the push, the
registry log shows:
time="2016-03-01T07:42:25.273229858Z" level=error msg="response comple
So I went ahead and restarted origin-master and also origin-node; now I
get a different error from builds:
I0301 06:38:23.333131 1 docker.go:117] Pushing image
172.30.143.204:5000/web04-app/web04:latest ...
52 F0301 06:38:23.360279 1 builder.go:204] Error: build error: Failed
to pu
Also, in the documentation:
https://docs.openshift.org/latest/install_config/install/docker_registry.html#storage-for-the-registry
In this example, the name "v1"... is that a mistake? Should it not
be registry-storage?
In:
PRODUCTION USE
For production use, attach a remote volume or define and use
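For reference, this is roughly what that volume wiring looks like in the
docker-registry deployment config; a sketch only, and the claim name
"registry-claim" is a placeholder, not something taken from the docs above:

apiVersion: v1
kind: DeploymentConfig
metadata:
  name: docker-registry
spec:
  template:
    spec:
      containers:
      - name: registry
        volumeMounts:
        - name: registry-storage      # the mount refers to the volume by this name
          mountPath: /registry
      volumes:
      - name: registry-storage        # this is the name the question above is about
        persistentVolumeClaim:
          claimName: registry-claim   # placeholder claim name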
One more issue... now that the registry seems to be back up, rebuilding,
or even recreating, a test project of mine seems to fail, as if the
registry IP it is trying to push to is wrong?
...
Successfully built 502ed8ccfb26
I0301 05:59:50.642747 1 docker.go:117] Pushing image
172.30.105.149:500
Yes! Endpoints did it! Re-added the service too. Thanks a lot.
On Mon, Feb 29, 2016 at 7:22 PM, Philippe Lafoucrière <
philippe.lafoucri...@tech-angels.com> wrote:
> ok, did you create the glusterfs endpoint in the corresponding project
> (yes, you have to do that for all projects using gluster
Hello Keven, if you have access to the DNS server, you should add
something like this to the BIND configuration file:
forwarders { 8.8.8.8; 8.8.4.4; };
Good luck
On Mon, Feb 29, 2016 at 17:13, Keven Wang wrote:
> Hello Openshift users,
>
> Anyone get latest Openshift origin installed on open shift vi
OK, did you create the glusterfs endpoint in the corresponding project
(yes, you have to do that for all projects using glusterfs...)?
Last but not least, remember to add a glusterfs service to ensure your
endpoints are not deleted:
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-clus
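For completeness, here's a minimal sketch of the full Endpoints/Service
pair; the name "glusterfs-cluster" and the IP addresses are illustrative,
just keep the Endpoints and Service names identical:

apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster          # must match the Service name below
subsets:
- addresses:
  - ip: 192.168.122.21             # IP of one gluster node (example value)
  - ip: 192.168.122.22             # one entry per gluster node
  ports:
  - port: 1                        # required by the schema, value is not used
---
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster          # same name keeps the endpoints from being deleted
spec:
  ports:
  - port: 1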
Hello Openshift users,
Has anyone gotten the latest OpenShift Origin installed via the Ansible
playbook? I'm having some problems with DNS. The created cluster uses a DNS
VM as the DNS server, but this DNS server itself then can't download
packages from the internet, because it can't resolve external domain names.
I did check on that... it looks like the playbook did install the packages
everywhere. This is true on all four of my nodes:
'rpm -qa|grep glus;ls -l /usr/sbin/glusterfs;ls -l /usr/sbin/glusterfsd;rpm
-qf /usr/sbin/glusterfs'
glusterfs-client-xlators-3.7.1-16.el7.x86_64
glusterfs-3.7.1-16.el7.x8
OK, it's exactly this bug:
https://github.com/kubernetes/heapster/issues/925
so we're not alone.
We'll try upgrading the nodes from Fedora 21 to 22 tomorrow to see if it
helps.
You need to install the glusterfs client on all your nodes.
Check that "/usr/sbin/glusterfs" is available there.
One fewer front-end to vendor and maintain.
On Mon, Feb 29, 2016 at 1:35 PM, Srinivas Naga Kotaru (skotaru) <
skot...@cisco.com> wrote:
> So OSE just exposes raw JSON data? Even Kubernetes exposes a native Swagger
> UI, which is decent for understanding and learning API calls.
>
> Any reason why OSE
So OSE just exposes raw JSON data? Even Kubernetes exposes a native Swagger UI,
which is decent for understanding and learning API calls.
Any reason why OSE exposes raw data without a native front-end?
--
Srinivas Kotaru
From: Jordan Liggitt <jligg...@redhat.com>
Date: Monday, February 29, 2016
That article is using a swagger-exploring UI hosted elsewhere, which
visualizes the JSON data exposed under
https://your-api-server:8443/swaggerapi
On Mon, Feb 29, 2016 at 1:20 PM, Srinivas Naga Kotaru (skotaru) <
skot...@cisco.com> wrote:
> Am not seeing swagger UI as shown or described in bel
I am not seeing the Swagger UI as shown or described in the blog post below:
http://blog.andyserver.com/2015/09/openshift-api-swagger/
The screenshot below shows my Swagger in action. It seems to be mostly a text
interface rather than the original Swagger native UI:
http://www.screencast.com/t/wgUBi8vtALn
--
Srinivas Kota
I set up the registry, as per the docs, to use a persistent storage claim
backed by glusterfs. It seemed to work fine for a while... until I decided
to verify that it is actually persistent... so I scaled it down to 0, from 1,
and ever since I can't start it up again. I even rebooted all 4 of my nodes... t
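For reference, the kind of PV/PVC pair being described would look roughly
like this; a sketch only, with placeholder names, path, and size rather
than the actual config used here:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: registry-volume            # placeholder name
spec:
  capacity:
    storage: 10Gi                  # placeholder size
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster   # the Endpoints object from earlier in the thread
    path: registry-vol             # gluster volume name (placeholder)
    readOnly: false
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-claim             # referenced by the registry's volume config
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi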
I only see the "http: response.WriteHeader on hijacked connection" logs and
origin-node restart on this node. The others look fine.
No, but I was trying to work my way backwards to see if that was the case,
based on the message.
Any obvious reasons in the log as to why it restarted?
Thanks,
Derek
On Mon, Feb 29, 2016 at 11:19 AM, Philippe Lafoucrière <
philippe.lafoucri...@tech-angels.com> wrote:
>
> On Mon, Feb 29, 2016 at
Note that I don't have any errors in the logs, just:
Feb 29 17:17:52 node-2 docker[1081]: 2016/02/29 17:17:52 http:
response.WriteHeader on hijacked connection
(several times, ~ 1 every 2-5s)
Then, suddenly:
Feb 29 17:17:52 node-2 kernel: XFS (dm-14): Unmounting Filesystem
Feb 29 17:17:52 node-2
On Mon, Feb 29, 2016 at 11:06 AM, Derek Carr wrote:
> When you see this happen, did the openshift-node restart, and then try to
> kill the pod?
>
Just checked the logs, and it seems that's exactly the case...
openshift-node restarts and then the pod is killed.
Does that sound familiar?
Thanks
Phil
When you see this happen, did the openshift-node restart, and then try to
kill the pod?
Thanks,
Derek
On Sat, Feb 27, 2016 at 4:50 PM, Philippe Lafoucrière <
philippe.lafoucri...@tech-angels.com> wrote:
> I can see this in the node's logs:
>
> Feb 27 22:49:39 node-2 docker[1081]: 2016/02/27 22:4