Hi.
On 17 April 2018 at 12:17, Tim Dudgeon <tdudgeon...@gmail.com> wrote:
> So if you are using dynamic provisioning the only option for logging is
> for the default StorageClass to be set to what is needed?
>
> On 17/04/18 11:12, Per Carlson wrote:
>
> Thi
Hi.
I just noticed a message on the dev mailinglist:
> We've branched master for release-3.8 and created a v3.9.0-alpha.0 tag.
> This is because 3.8 is a "skip" release where we'll only do an internal
> data upgrade and then go from 3.7 to 3.9 directly.
Does that mean there will never be a
And to the list as well...
--8<--
If having the password in clear text in master-config.yaml is OK, you
could also use this trick:
openshift_master_identity_providers=[{'name':'OpenID',
'kind':'OpenIDIdentityProvider',
'clientSecret':"{{ lookup('file','/path/to/secret') }}"}]
This way
Hi.
On 12 August 2017 at 18:39, Avesh Agarwal <avaga...@redhat.com> wrote:
>
>
> On Sat, Aug 12, 2017 at 11:59 AM, Avesh Agarwal <avaga...@redhat.com>
> wrote:
>
>>
>>
>> On Fri, Aug 11, 2017 at 2:28 AM, Per Carlson <pe...@hemmop.com> wrote:
>
Hi.
We are in the process of rebuilding a cluster with a new topology, and I'm
trying to fit the node labels to a scheduler policy, and would like to base
the policy on the default one.
I've searched both the openshift/origin and openshift/openshift-ansible
repos on GitHub without finding
Hi Alexandar.
> > There is a /healthz endpoint in HA-proxy on port 1936, but it isn't
> > exposed outside the cluster and requires a password (which is unique
> > per dc). What else could be used? We would like to stay as close to
> > "stock configuration" as possible to reduce technical
Hi Jordan.
On 21 July 2017 at 18:58, Jordan Liggitt wrote:
> Looks like an invalid cert. See https://github.com/golang/go/issues/15407
> for details.
>
Figured that out by doing a packet capture. Wireshark did complain about
the invalid length of the time string.
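A quicker way to see the malformed timestamp than a packet capture is to print the certificate's validity dates with openssl (the same "openssl s_client" route mentioned below). The snippet uses a throwaway self-signed certificate as a stand-in for the cluster's real one:

```shell
# Generate a throwaway self-signed cert (stand-in for the real master cert)
# and print its validity timestamps. A well-formed cert shows full
# YYYYMMDDHHMMSSZ times; the broken one here ended at "20171001Z".
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=test" \
  -keyout /tmp/key.pem -out /tmp/cert.pem 2>/dev/null
openssl x509 -noout -dates -in /tmp/cert.pem
```

Against a live cluster the equivalent check is `openssl s_client -connect master:8443 | openssl x509 -noout -dates` (hostname illustrative).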
When trying to login to a (new) OCP 3.5 cluster, oc emits this error:
error: tls: failed to parse certificate from server: parsing time
"20171001Z" as "20060102150405Z0700": cannot parse "Z" as "05"
The certificate (which I got via "openssl s_client") has got:
Validity
Not Before: Jul
On 12 July 2017 at 00:50, G. Jones wrote:
> That’s just it, the masters were unschedulable. During the outage we
> restarted the masters and nodes but the nodes wouldn’t come online. While
> we were working on getting the nodes up the pods had been restarted on the
>
Hi.
On 8 July 2017 at 21:45, G. Jones wrote:
> I’ve got an Origin 1.5 environment where, during an outage with my nodes,
> pods got relocated to my masters fairly randomly. I need to clean it up and
> get the pods back where I want them but have not yet found a way to
Hi.
On 5 December 2016 at 09:39, Den Cowboy wrote:
> Thanks for your response Per.
>
> I can confirm it's dynamically. We increased our resources and the
> heapsize increased for our tomcat.
>
> When we want to increase the heap_size with the size of our environment
>
Hi Den.
The heap allocation is done dynamically. When the image starts it runs
/opt/webserver/bin/launch.sh. In this file you will find
MAX_HEAP=`get_heap_size`
if [ -n "$MAX_HEAP" ]; then
CATALINA_OPTS="$CATALINA_OPTS -Xms${MAX_HEAP}m -Xmx${MAX_HEAP}m"
fi
The function
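A minimal sketch of what a get_heap_size-style helper could do, assuming it derives the heap from the container memory limit. The 50% ratio and the byte-count input are illustrative assumptions, not the image's actual implementation:

```shell
# Hypothetical get_heap_size: take a memory limit in bytes (in the real image
# this would come from the container's cgroup limit) and use half of it,
# expressed in MB, as the JVM heap. The 50% ratio is an assumed example.
get_heap_size() {
  local limit_bytes=$1
  echo $(( limit_bytes / 1024 / 1024 / 2 ))
}

MAX_HEAP=$(get_heap_size 1073741824)   # 1 GiB limit -> 512 (MB)
echo "$MAX_HEAP"
```

With that value set, the CATALINA_OPTS line above pins -Xms and -Xmx to the same size, so the heap never grows past half the container's memory.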
What about cleaning the local docker storage? We have had several
"incidents" where deployments of a POD failed due to "out of disk" errors on
the node.
For example on an infrastructure node:
$ sudo docker info
Containers: 8
Images: 23
Storage Driver: devicemapper
Pool Name:
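A usual first pass is removing exited containers and dangling images; the sketch below assumes those are safe to drop on this node (on newer docker versions `docker system prune` covers the same ground in one command):

```shell
# First-pass cleanup of local docker storage on a node (assumption: exited
# containers and untagged, dangling images are safe to remove here).
docker ps -aq -f status=exited    | xargs -r docker rm    # stopped containers
docker images -q -f dangling=true | xargs -r docker rmi   # untagged layers
```

`xargs -r` skips the rm/rmi entirely when the listing is empty.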
Hi.
It looks like PODs are logging with a different timezone than the host
system. This is of course cosmetic, but nevertheless annoying.
The host is configured with CET (UTC+1):
[root@infra201 ~]# date
Mon Feb 22 14:51:04 CET 2016
But the docker-registry is using (UTC-5):
[root@infra201
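glibc inside the container resolves the zone from the TZ environment variable, so one low-touch fix is to set TZ on the deployment; the dc name below is only an example:

```shell
# The container's zone follows $TZ; unset, most images fall back to whatever
# /etc/localtime the image ships (UTC-5 in the registry image above).
date +%Z                    # host zone, e.g. CET
TZ=UTC date +%Z             # prints UTC
TZ=Europe/Paris date +%Z    # CET or CEST, if tzdata is installed
# Hypothetical example of applying it to the registry deployment:
#   oc env dc/docker-registry TZ=CET
```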
Hi David.
On 12 February 2016 at 09:45, David Strejc wrote:
> This is an Ansible installer problem.
>
> I had a similar issue with my installation and this is really important.
> Without MTU set to 1450 there is
> a problem with traffic between nodes. I've spent four hours
Hi.
We are seeing some strange packet traces on the nodes, and we suspect that
it might be an MTU issue.
According to the documentation (
https://docs.openshift.com/enterprise/3.1/install_config/configuring_sdn.html#configuring-the-pod-network-on-nodes)
a "mtu" parameter in node-config.yaml
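For reference, that parameter lives under networkConfig in node-config.yaml; the values below are an illustrative example (1450 = a 1500-byte NIC MTU minus 50 bytes of VXLAN overhead), not a recommendation for this cluster:

```yaml
# node-config.yaml (fragment) -- illustrative values
networkConfig:
  mtu: 1450                  # SDN tunnel MTU: physical MTU minus 50 (VXLAN)
  networkPluginName: redhat/openshift-ovs-subnet
```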