OpenShift < 3.9 is not intended to be used with docker-1.13. I don't have a
list of what breaks; I think it's mostly subtle stuff aside from CNS. If
you have access, there is more detail in this kbase article:
https://access.redhat.com/solutions/3376031
On Mon, Mar 12, 2018 at 11:59 PM, Brigman, Larry
On Thu, Feb 8, 2018 at 2:43 AM, Gaurav Ojha wrote:
> Thank you for your reply. Just a couple more questions:
>
>
> 1. Is there any way to create this file when I launch via openshift
> start?
>
>
openshift start --write-config= ...
(see --help and also note
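A sketch of the usual flow, assuming the default output directory (the node
subdirectory name includes the hostname, shown here as a placeholder):

```shell
# Write master/node config to disk instead of starting the server
openshift start --write-config=openshift.local.config

# Later, start using the generated config files
openshift start \
    --master-config=openshift.local.config/master/master-config.yaml \
    --node-config=openshift.local.config/node-<hostname>/node-config.yaml
```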
As Aleksander said, more information would help.
The service broker waits on the service catalog API to come up. It may be
that the service catalog was deployed but the pods are not actually
starting for some reason (e.g. not available at requested version). Check
the pods in the namespace.
$ oc
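For example, assuming the catalog landed in the usual kube-service-catalog
namespace (adjust if yours differs):

```shell
# See whether the service catalog pods are actually running
oc get pods -n kube-service-catalog

# If they are pending or crash-looping, the events usually say why
oc describe pods -n kube-service-catalog
oc get events -n kube-service-catalog --sort-by=.metadata.creationTimestamp
```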
Thanks for bringing this up. This tool... needs some attention. Comments
below:
On Fri, Oct 27, 2017 at 7:48 AM, Tim Dudgeon wrote:
> I've been looking at using the diagnostics (oc adm diagnostics) to test
> the status of a cluster installed with the ansible installer and
On Thu, Oct 19, 2017 at 10:58 AM, Julio Saura wrote:
> yes ofc
>
> oc create serviceaccount icinga -n project1
>
> oadm policy add-cluster-role-to-user admin system:serviceaccounts:project1:icinga
>
There is no cluster role "admin" (... by default anyway, you could of
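For a monitoring service account, something closer to this is probably what's
wanted; note the singular system:serviceaccount prefix with no space, and
cluster-reader is a suggestion here, not the role from the original command:

```shell
# Grant read-only cluster access to the icinga service account in project1
oc adm policy add-cluster-role-to-user cluster-reader \
    system:serviceaccount:project1:icinga
```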
You can configure fluentd to forward logs (see
https://docs.openshift.com/container-platform/latest/install_config/aggregate_logging.html#sending-logs-to-an-external-elasticsearch-instance).
Note the caveat, "If you are not using the provided Kibana and
Elasticsearch images, you will not have the
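For reference, an external output stanza with the fluent-plugin-elasticsearch
output looks roughly like this; the host, port, and match pattern are
placeholders, and the exact directives depend on your fluentd version:

```
<match **>
  @type elasticsearch
  host external-es.example.com
  port 9200
  scheme https
</match>
```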
The Elasticsearch pods contact port 9300 on other pods, that is, on the
internal pod IP. There should be no need to do anything on the hosts to
enable this. If ES is failing to contact other ES nodes then either there
is a networking problem or the other nodes aren't listening (yet) on the
port.
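One way to narrow it down is a raw TCP check between ES pods; 9300 doesn't
speak HTTP, so any successful connection at all is the signal (pod name,
label selector, and IP below are placeholders):

```shell
# Find the pod IPs of the ES cluster members
oc get pods -l component=es -o wide

# From inside one ES pod, try a plain TCP connection to another member
oc exec logging-es-abc123 -- bash -c \
    'exec 3<>/dev/tcp/10.128.0.12/9300 && echo "9300 reachable"'
```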
On Wed, Nov 2, 2016 at 11:34 PM, Ravi wrote:
>
> I am not able to start openshift, I tried three different ways.
>
> 1. Windows 7 + Virtual Box + Ubuntu
> oc cluster up works well. I went to console and launched nodejs-ex
> example. Console shows it is up, however when I
The underscores are the problem. Can you convert them to hyphens?
On Tue, Oct 25, 2016 at 5:45 AM, Stéphane Klein wrote:
> Hi,
>
> How can I put logstash config files in ConfigMap ?
>
>
> $ tree
> .
> ├── logstash-config
> │ ├── 1_tcp_input.conf
> │ ├──
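With --from-file each filename becomes a ConfigMap key, and underscores were
not valid in keys at the time, so renaming first sidesteps it (directory and
filenames follow the tree above and are examples):

```shell
# Rename files so the generated ConfigMap keys are valid
cd logstash-config
for f in *_*; do mv "$f" "${f//_/-}"; done
cd ..

# Each file in the directory becomes one key in the ConfigMap
oc create configmap logstash-config --from-file=logstash-config/
```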
Yeah, I don't think we have quota on ephemeral volumes yet. Curator can
clear out your data more aggressively, for example keeping just a few days'
worth of logs.
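In the aggregated-logging curator config that would look roughly like this;
project names and retention values are examples, so check the curator
documentation for your release:

```yaml
# Keep only a few days of logs for a busy project, a month for the rest
myproject:
  delete:
    days: 3

.defaults:
  delete:
    days: 30
```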
On Mon, Oct 10, 2016 at 3:26 AM, Den Cowboy wrote:
> Hi,
>
>
> We have implemented our logging
Looks like you're using your root partition for docker volume storage (and
thus Elasticsearch storage). That is the default configuration, but not a
recommended one - we recommend specifying storage specifically for docker
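On RHEL/CentOS that is typically done via /etc/sysconfig/docker-storage-setup
before (re)initializing docker storage; a minimal sketch, assuming a spare
block device at /dev/vdb:

```
# /etc/sysconfig/docker-storage-setup
DEVS=/dev/vdb
VG=docker-vg
```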
On Mon, Aug 15, 2016 at 3:54 AM, Frank Liauw wrote:
> Hi All,
>
> I followed through the instructions on https://docs.openshift.org/
> latest/install_config/aggregate_logging.html and have setup a 3 node ES
> cluster. Fluentd is also deployed on all my nodes.
>
> I am getting
I wish I could be more helpful here but I've never seen this before and I'm
at a loss to think of what could be happening. The fact that you're getting
a redirect and going through the oauth flow and only then getting the error
indicates that at least the auth proxy in front of Kibana is running
le to investigate if you were having some kind of
> >> > network connection issues in the ES cluster (I mean between individual
> >> > cluster nodes)?
> >> >
> >> > Regards,
> >> > Lukáš
> >> >
> >> >
I believe the "queue capacity" there is the number of parallel searches
that can be queued while the existing search workers operate. It sounds
like it has plenty of capacity there and it has a different reason for
rejecting the query. I would guess the data requested is missing given it
couldn't
I wonder if you executed step 6:
$ oc policy add-role-to-user edit --serviceaccount logging-deployer
... at all, or perhaps in the wrong project?
The service account needs an edit role.
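The binding is namespaced, so it matters which project it runs against; being
explicit avoids the wrong-project case (the logging project name here is an
assumption):

```shell
# Grant the edit role to the deployer service account in the logging project
oc policy add-role-to-user edit \
    --serviceaccount logging-deployer -n logging
```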
On Tue, Jul 12, 2016 at 4:50 AM, Michael Leimenmeier
wrote:
> Hi,
>
> I've tried to
You may need to modify the file permissions and/or selinux context for the
volume so that the container user can write to it. Under the default SCC
the container user/group are randomized. Under the privileged SCC it will
probably be whatever user the Dockerfile indicates (and you can choose an
om within a pod does.
>
> On Wed, Jun 29, 2016 at 10:58 AM, Luke Meyer <lme...@redhat.com> wrote:
>
>> Are you trying to mount the configmap or read from it? The latter does
>> not require any extra role for the pod service account.
>>
>> On Wed, Jun 29,
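For a hostPath-style volume the fix usually looks something like this on the
node; the path is a placeholder, the group-based approach is one option among
several given the randomized UID, and svirt_sandbox_file_t is the label
docker-era OpenShift used:

```shell
# Let the randomized container user (group 0) write to the volume
chgrp -R root /srv/myvolume
chmod -R g+rwX /srv/myvolume

# Label the path so containers are allowed to access it under SELinux
chcon -R -t svirt_sandbox_file_t /srv/myvolume
```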
`oc process -v` and `oc new-app -p` work exactly the same, both being
implemented the same way. You can specify multiple of either. I thought there
was supposed to be a way to escape commas but I can't find it now.
FWIW you can specify newlines - anything, really, except a comma - in
parameters.
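For example, repeating the flag for multiple parameters (template and
parameter names here are made up):

```shell
# Same parameter handling in both commands; values may contain newlines,
# but not commas
oc process -f my-template.yaml -v NAME=frontend -v BAR=baz | oc create -f -
oc new-app my-template.yaml -p NAME=frontend -p GREETING=$'hello\nworld'
```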
The readiness probe status seems like an important indicator to me:
Readiness probe failed: cat: /etc/ld.so.conf.d/*.conf: No such file or
directory
What could cause that failure? Or is that a red herring...
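That message suggests the readiness probe shells out to cat those files, so
the pod spec presumably contains something along these lines (reconstructed
as an illustration, not the actual definition):

```yaml
readinessProbe:
  exec:
    command: ["sh", "-c", "cat /etc/ld.so.conf.d/*.conf"]
  initialDelaySeconds: 5
  timeoutSeconds: 1
```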
On Tue, Jun 14, 2016 at 1:53 PM, Matt Wringe wrote:
> -
It sounds like what he wants is for the router to simply not interfere with
passing along something that's already returning a 503. It sounds like
haproxy is replacing the page content with its own in that use case.
On Mon, Jun 6, 2016 at 11:53 PM, Ram Ranganathan
wrote:
>
The error is that the "image" field is missing from the container
definition. I wonder if you edited the template at all? It's easy to
indent/outdent something and create a definition where the first validation
that fails looks like this. The spec and container definition should look
something
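For reference, a well-formed container entry looks roughly like this, with
image at the same indent level as name (all values are placeholders):

```yaml
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest
    ports:
    - containerPort: 8080
```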
On Thu, Apr 21, 2016 at 5:00 AM, Den Cowboy wrote:
> My webconsole is showing the following warning when I'm looking for the
> logs of a pod:
> Only the previous 1000 log lines and new log messages will be displayed
> because of the large log size.
>
I'm pretty sure this
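If you need more than the console shows, the CLI can pull the full log or a
larger tail (the pod name is a placeholder):

```shell
# Fetch the whole log outside the web console, or just a larger tail
oc logs mypod > mypod.log
oc logs mypod --tail=10000
```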
On Mon, Jan 11, 2016 at 1:20 PM, Clayton Coleman
wrote:
>
> > - I realized that last time I didn't execute the required
> pre-installation
> > steps (which include setting up docker for instance) but this didn't
> seem to
> > pose any problems. Should I scratch everything