I opened a ticket:
https://bugzilla.redhat.com/show_bug.cgi?id=1303273
On Fri, Jan 29, 2016 at 9:07 PM, Dean Peterson
wrote:
> I also noticed the deployments keep saying they are trying to use the
> oldest image id even though there have been many subsequent successful
> builds. I have attache
That error specifically is from Docker - it's probably causing the issue in
the UI (please file the issue with the requested info attached so we can
correct the system), but the fix will likely be on your system, and may be
related to any changes you made to Docker storage recently.
On Jan 29, 2016, a
Please file a bug with the output of "oc get dc,rc -o yaml"
> On Jan 29, 2016, at 9:47 PM, Dean Peterson wrote:
>
> In the image, you can see: every time I try to start a new deployment for a
> service that has a stuck previous deployment, it increments the number of
> containers on the oldest
I just don't get why sometimes it works and sometimes it doesn't. The
latest random error is this in the event logs:
Failed to pull image "
172.30.250.187:5000/abecorn/tradeclient@sha256:cbca9d885bf1c23bb518662cc51d61b5365ab321147a59d2be5b86869f50c08e":
Driver devicemapper failed to create image
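That "Driver devicemapper failed to create image" error comes from Docker's storage backend rather than from OpenShift itself. A rough triage sketch, assuming the node uses the devicemapper driver and GNU xargs is available:
```
# Check the storage driver and its Data/Metadata Space fields for exhaustion.
docker info
# Reclaim space held by dangling (untagged) images; harmless if there are none.
docker images -q -f dangling=true | xargs -r docker rmi
```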
On Fri, Jan 29, 2016 at 5:33 PM, Florian Daniel Otel wrote:
> Hello all,
>
> Was wondering if there is any way to install a specific OSE release via
> the Ansible installer.
>
openshift_pkg_version=-3.1.0.4 would attempt to install version 3.1.0.4
>
> TIA for the help,
>
> /Florian
>
>
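For reference, a minimal sketch of where that variable would sit in a byo inventory; the [OSEv3:vars] grouping comes from the example inventory file, and the version value simply mirrors the one above:
```
[OSEv3:vars]
# The leading dash matters: the value is appended to the package name for yum.
openshift_pkg_version=-3.1.0.4
```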
Hello all,
Was wondering if there is any way to install a specific OSE release via the
Ansible installer.
TIA for the help,
/Florian
On Fri, Jan 29, 2016 at 2:54 PM, Fernando Montenegro wrote:
> Hi!
>
> Trying to get a simple master+1 node setup running on 2 virtual CentOS 7.1
> servers (hosted locally on ESXi, if it matters). A separate CentOS server
> (not part of OO deployment) is running the ansible playbooks (just did a
>
Hi!
Trying to get a simple master+1 node setup running on 2 virtual CentOS 7.1
servers (hosted locally on ESXi, if it matters). A separate CentOS server
(not part of OO deployment) is running the ansible playbooks (just did a
git clone before starting).
Getting this error on the install ('oonode1
As it turns out, it was a permissions issue on that directory.
A shotgun "chmod a+rwx /opt/ose-registry" did the trick.
This is for the record, for whoever runs into this.
Thanks Andy and Jason for the help.
On Fri, Jan 29, 2016 at 3:17 PM, Andy Goldstein wrote:
> ls -laZ /opt/ose-registry
>
> Mo
If you want to run DNSMasq on your masters, you'll need to configure
OpenShift to run DNS on a different port. You can do that by modifying
your config file.
On Fri, Jan 29, 2016 at 12:16 AM, Dean Peterson
wrote:
> Oh boy, well thank you for the information. I looked at my old machine
> runnin
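A sketch of the relevant stanza, assuming the master config lives at /etc/origin/master/master-config.yaml and picking 8053 as the alternate port (any free port should do); the master service needs a restart afterwards:
```
dnsConfig:
  # Move OpenShift's DNS off port 53 so dnsmasq can bind it.
  bindAddress: 0.0.0.0:8053
  bindNetwork: tcp4
```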
Currently, no.
COPR keeps only the latest builds, and this caught us off guard when we
switched to 1.1.
Someone already opened this as an issue on GitHub:
https://github.com/openshift/origin/issues/6695
Here is the reply:
It looks like COPR keeps only the latest builds.
If you want to grab any
The installer pulls RPMs from a Fedora COPR which I thought kept
builds indefinitely, but it appears that it removes non-current builds
after 14 days. You can find the source RPM that was used to produce
those RPMs from the build info page and then use `rpmbuild --rebuild`
to rebuild them, you'll ne
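A rough sketch of that rebuild flow; the SRPM filename below is a placeholder, not the actual COPR build name:
```
# rpm-build provides rpmbuild; yum-builddep (from yum-utils) pulls in the BuildRequires.
yum install -y rpm-build yum-utils
yum-builddep origin-x.y.z-1.el7.src.rpm
rpmbuild --rebuild origin-x.y.z-1.el7.src.rpm
# Rebuilt binary RPMs land under ~/rpmbuild/RPMS/ by default.
```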
Awesome, thank you very much.
On Fri, Jan 29, 2016 at 9:39 AM, Brenton Leanhardt
wrote:
> Hi,
>
> See the notes here:
>
> https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.origin.example#L62
>
> That and the section below show you how to specify another registry to
>
Hi,
See the notes here:
https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.origin.example#L62
That and the section below show you how to specify another registry to
be used for all docker pulls as well as using a custom yum repository
for RPM installations.
--Brenton
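For readers without the example file handy, a hedged sketch of the two variables those sections cover; the variable names come from the linked hosts.origin.example, while the registry host and repo URL are placeholders:
```
[OSEv3:vars]
# Pull OpenShift component images from a private registry instead of the default.
oreg_url=registry.example.com:5000/openshift3/ose-${component}:${version}
# Use a local yum repository for the RPM installation.
openshift_additional_repos=[{'id': 'ose-local', 'name': 'ose-local', 'baseurl': 'http://repo.example.com/ose/x86_64/os', 'enabled': 1, 'gpgcheck': 0}]
```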
ls -laZ /opt/ose-registry
Most likely you need to do: sudo chcon -t svirt_sandbox_file_t
/opt/ose-registry
Andy
On Fri, Jan 29, 2016 at 9:01 AM, Jason DeTiberus
wrote:
>
> On Jan 29, 2016 8:43 AM, "Florian Daniel Otel"
> wrote:
> >
> >
> > No worries ;) -- part since it's my turn to apologis
On Jan 29, 2016 8:43 AM, "Florian Daniel Otel"
wrote:
>
>
> No worries ;) -- partly since it's my turn to apologise, since I missed
adding the "admin" role to the "openshift" project.
>
> Done that now, and now I get a HTTP 500:
>
> [root@osev31-node1 src]# docker push 172.30.38.99:5000/openshif
No worries ;) -- partly since it's my turn to apologise, since I missed
adding the "admin" role to the "openshift" project.
Done that now, and now I get a HTTP 500:
[root@osev31-node1 src]# docker push 172.30.38.99:5000/openshift/busybox
The push refers to a repository [172.30.38.99:5000/opensh
On Jan 29, 2016 8:05 AM, "Florian Daniel Otel"
wrote:
>
> I should have mentioned that in my original email, but that's exactly the
steps I followed.
My apologies, I missed the auth parts you mentioned on the first read-through.
Just to verify, did you grant reguser admin rights on the openshift
project?
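For completeness, that grant is usually made as a cluster admin along these lines (project and user names as used in this thread):
```
oadm policy add-role-to-user admin reguser -n openshift
```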
Dear Mailing List,
is it possible to download old RPMs of OpenShift? The official repo only has
versions starting from 1.1.
The reason why I am asking is because we are trying to set up a test cluster
and practice the update to 1.1 there before applying the update to our live
cluster.
To be m
Is there any documentation on how to modify the ansible install script to
point to an alternative registry server? I need to be able to do an offline
install and I have created a repo of the necessary RPMs and a private
registry but I have so far been unable to do so and have found no documentation
aroun
I should have mentioned that in my original email, but those are exactly the
steps I followed.
IOW: In addition to the stuff below (and prior to all that), I have done the
following as "system:admin", for user "reguser":
oadm policy add-role-to-user system:registry reguser
oadm policy add-role-to-user system:im
Hi Rumeha--
Referring your question to the OpenShift users list. I found this, which
may also be relevant:
https://access.redhat.com/documentation/en-US/OpenShift_Enterprise/2/html/Troubleshooting_Guide/MCollective.html
When you run `oo-mco ping`, what is the specific error that you are seeing?
W
On Jan 29, 2016 6:07 AM, "Florian Daniel Otel"
wrote:
>
> Hello all,
>
> I'm pretty sure it's mostly related to my ignorance, but for some reason
I'm not able to push to the built-in docker registry after deploying it.
>
>
> Deployment:
>
> oadm registry --service-account=registry
--config=/etc/or
Now with this Fedora image, I have another bug:
```
DEBUG ssh: Exit status: 1
DEBUG ssh: Re-using SSH connection.
INFO ssh: Execute: mount -t vboxsf -o uid=`id -u vagrant`,gid=`id -g
vagrant` vagrant /vagrant (sudo=true)
DEBUG ssh: stderr: /sbin/mount.vboxsf: mounting failed with the error
DEBUG ssh: s
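A mount.vboxsf failure like that is often a Guest Additions mismatch between the box and the host's VirtualBox. One possible workaround sketch, using the third-party vagrant-vbguest plugin (the plugin is an assumption, not something from this thread):
```
vagrant plugin install vagrant-vbguest   # builds/updates Guest Additions inside the guest
vagrant reload --no-provision
```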
I don't have this bug with the Fedora image.
2016-01-29 11:21 GMT+01:00 Stéphane Klein :
> I think it's a VirtualBox port forwarding issue. Port forwarding isn't
> reloaded after vagrant up :(
>
> I don't understand; I have this bug only with the OpenShift vagrant VMs.
>
> 2016-01-29 10:44 GMT+01:00 Stéphane
Hello all,
I'm pretty sure it's mostly related to my ignorance, but for some reason
I'm not able to push to the built-in docker registry after deploying it.
Deployment:
oadm registry --service-account=registry
--config=/etc/origin/master/admin.kubeconfig
--credentials=/etc/origin/master/openshi
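A quick hedged check after that deployment, before attempting a push; "docker-registry" in the default project is the usual service name for a registry created this way:
```
oc get pods -n default
oc get svc docker-registry -n default
```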
I think it's a VirtualBox port forwarding issue. Port forwarding isn't
reloaded after vagrant up :(
I don't understand; I have this bug only with the OpenShift vagrant VMs.
2016-01-29 10:44 GMT+01:00 Stéphane Klein :
> With export VAGRANT_LOG=debug
>
> ```
> DEBUG ssh: Exit status: 0
> DEBUG ssh: Re-usi
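If the forwarded ports really are stale, reloading the affected VM re-applies the Vagrantfile's port definitions; a minimal sketch with the VM name used in this thread:
```
vagrant reload node1 --no-provision
```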
With export VAGRANT_LOG=debug
```
DEBUG ssh: Exit status: 0
DEBUG ssh: Re-using SSH connection.
INFO ssh: Execute: cat /sys/class/net/br0/address (sudo=true)
DEBUG ssh: stdout: b2:01:53:c2:19:49
DEBUG ssh: Exit status: 0
DEBUG ssh: Re-using SSH connection.
INFO ssh: Execute: cat /sys/class/net/
Hi,
what I did:
```
$ git clone git@github.com:openshift/openshift-ansible.git
$ cd openshift-ansible
$ vagrant up --no-provision
$ vagrant provision
$ vagrant status
node1 running (virtualbox)
node2 running (virtualbox)
master running