Re: Caching DNS entries in openshift pod/node

2021-03-06 Thread Aleksandar Lazic

Hi,

On 05.03.21 13:57, Arunkumar wrote:

Hi,

> Is it possible to leverage DNSMASQ at openshift node to cache DNS entries?
> I am using Openshift 4.5, my understanding is DNSMASQ is present by default in
> every node in the cluster. Please add your thoughts on caching DNS entries in
> openshift node/pod level. This is to improve performance and also offload the
> coredns. Is there something like kubernetes nodelocaldns available?

OpenShift 4 was completely redesigned, and dnsmasq is no longer part of OpenShift 4.

How the DNS Operator works is described in the README.md
https://github.com/openshift/cluster-dns-operator

You can see the Corefile config here.
https://github.com/openshift/cluster-dns-operator/blob/master/pkg/operator/controller/controller_dns_configmap.go

How the DNS Operator can be configured is documented here:

https://docs.okd.io/latest/networking/dns-operator.html

There is a cache plugin for CoreDNS, but it's not yet configurable in OpenShift.
https://coredns.io/plugins/cache/
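For reference, enabling that plugin is a one-line change in a Corefile. A minimal sketch (illustrative only; in OpenShift the operator owns the Corefile, so hand-editing it would just be reverted):

```
# Illustrative Corefile fragment only; the DNS Operator manages the real one.
.:5353 {
    errors
    # cache positive answers for up to 30 seconds
    cache 30
    forward . /etc/resolv.conf
}
```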

Maybe you can raise a feature request at
https://github.com/openshift/cluster-dns-operator/issues
to make the cache feature configurable in OpenShift.



--
Thanks & Regards
Arunkumar


Hth
Alex

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users



Re: [EXTERNAL] Changing Workers' machine config - 2 IngressControllers

2020-11-30 Thread Aleksandar Lazic

Hi Carlo.

I would prefer option one, because TLS handling can be quite expensive and
therefore I would not want an application workload to block the masters in any
way in their work.

Well, the MCO (Machine Config Operator) maintains the worker nodes, which explains
why your manual deletion does not work.

https://github.com/openshift/machine-config-operator

I would try to run two ingress controllers with different `nodePlacement`, `domain`,
and any other settings specific to the external router. The idea is untested.

https://docs.okd.io/latest/networking/ingress-operator.html

Default: worker nodes
External: new worker nodes
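As an untested sketch, such an external IngressController could look roughly like this (the domain, node label, and route label below are assumptions for illustration, not tested values):

```
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: external
  namespace: openshift-ingress-operator
spec:
  domain: external.apps.example.com          # assumed external wildcard domain
  replicas: 2
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/external: "" # assumed label on the new workers
  routeSelector:
    matchLabels:
      type: external                         # only publish routes with this label
```

Routes intended for the external controller would then be labeled with `type=external`, and the default controller could be given a matching `routeSelector` so the two do not overlap.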

Maybe this doc will show you some more options for your setup.
https://docs.okd.io/latest/networking/configuring_ingress_cluster_traffic/overview-traffic.html

HTH and Regards
Aleks

On 26.11.20 10:39, Carlo Rodrigues wrote:


Anyone?

*From:* users-boun...@lists.openshift.redhat.com *On Behalf Of* Carlo Rodrigues
*Sent:* Monday, November 23, 2020 17:46
*To:* users@lists.openshift.redhat.com
*Subject:* [EXTERNAL] Changing Workers' machine config - 2 IngressControllers

Hello All,

I’m using an OKD cluster 4.5 with 3 masters and 3 workers, using oVirt IPI.

I want to segregate external traffic of some workloads from the rest, so I 
created a different IngressController, named external.

I had 2 choices.

 1. Add another worker node and keep the default ingress controller on 2 worker 
nodes and the external ingress controller on the other 2 worker nodes.
 2. Move default ingress controller to master nodes and use the worker nodes to 
host the external ingress controller.

I opted for option 2, using nodeSelector and tolerances so that the default 
routers would run on the master nodes.

So far, so good.

My problem now is that I don’t want keepalived for the internal API and 
internal *.apps to run on the worker nodes; I want it to run only on the master 
nodes. So I edited the 00-worker MachineConfig and removed the 
/etc/kubernetes/manifests/keepalived.yaml config.

But this MachineConfig gets overwritten every time I change it, probably by the 
Machine Config Operator. I deleted the file manually on the worker nodes, but 
I’m afraid it will come back after an upgrade or some other change.

Is there any other way to accomplish what I’m trying to do?

Even if I opt for having 2 worker nodes with the default router and 2 worker 
nodes with the new one (external), I think I’ll have the same problem, because 
keepalived could put the internal *.apps IP on a worker node with the external 
router, and there would be at least a mismatched certificate, and, because I 
want to only publish some few namespace routes on the external router, internal 
apps would not run when hitting the external router, including console.

How do you people segregate traffic and how did you overcome these problems?

Thanks

Carlo Rodrigues




Re: Unable to connect to the server: Forbidden

2020-04-09 Thread Aleksandar Lazic

Hi.

On 03.04.20 20:04, lejeczek wrote:

hi guys,

I run the deployment on CentOS 7 using:

centos-release-openshift-origin311-1-2.el7.centos.noarch
openshift-ansible-playbooks-3.11.37-1.git.0.3b8b341.el7.noarch
openshift-ansible-3.11.37-1.git.0.3b8b341.el7.noarch

and prerequisites.yml and deployment both seem to succeed
and yet things do not work:

$ oc version
oc v3.11.0+62803d0-1
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO
Unable to connect to the server: Forbidden

Would very much appreciate suggestions and advice.
many thanks, L.


What's the output of `oc config get-contexts`?
You should have a .kube dir in your home directory where the client config is.





Re: Openshift : Few questions

2020-04-09 Thread Aleksandar Lazic

Hi Thierry.

On 07.04.20 22:16, thierry.leurent wrote:

Hi All,

I'm trying to install OKD on my infrastructure. But I have some questions.

The Goal.
In a first step, I would like to have an all-in-one.
In a second step, I would like to add 2 nodes.

The Infrastructure.
RHEL 7 servers with Docker from the official repository.


Do you have OpenShift subscriptions?
I ask because you have RHEL.


So I have 4 Hosts :
  - Dok001a : The Master/First Node.
  - Dok001b : The Second Node.
  - Dok001c : The Third Node.
  - NFS666a : The NFS server.

What I try.
To install :
- Using openshift-ansible. I have trouble with cgroups between kubectl and 
docker. And I'm a beginner with Ansible.


What trouble do you have?
What's the output from an Ansible run?
Have you followed the docs?

https://docs.okd.io/3.11/install/index.html


- Using oc cluster up :
     - I use oc from /pub/openshift-v4/clients/oc/latest/linux


What's the full URL?


     - When I make oc version, I get :
       oc v3.11.0+62803d0-1
   kubernetes v1.11.0+d4cacc0
   features: Basic-Auth GSSAPI Kerberos SPNEGO


Do you want to install OKD or OCP? I ask because OKD version 4 is not released yet.


To configure :
  - LDAP access.


Have you seen the docs for the config?

https://docs.okd.io/3.11/install_config/configuring_authentication.html#LDAPPasswordIdentityProvider
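For illustration, a minimal LDAPPasswordIdentityProvider stanza in the 3.11 master-config.yaml looks roughly like this (host names, DNs, CA file, and attribute choices are placeholders):

```
oauthConfig:
  identityProviders:
  - name: my_ldap_provider      # placeholder name
    challenge: true
    login: true
    mappingMethod: claim
    provider:
      apiVersion: v1
      kind: LDAPPasswordIdentityProvider
      attributes:
        id: ["dn"]
        email: ["mail"]
        name: ["cn"]
        preferredUsername: ["uid"]
      insecure: false
      ca: my-ldap-ca-bundle.crt # placeholder CA bundle
      url: "ldap://ldap.example.com/ou=users,dc=example,dc=com?uid"
```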


Questions:

- After I try to install it using Ansible, I see an export file with
   "/exports/registry" *(rw,root_squash)

That's the registry storage.
https://docs.okd.io/3.11/install_config/registry/index.html


   "/exports/metrics" *(rw,root_squash)

That's the metrics storage.
https://docs.okd.io/3.11/install_config/cluster_metrics.html


   "/exports/logging-es" *(rw,root_squash)
   "/exports/logging-es-ops" *(rw,root_squash)

That's the Elasticsearch storage.
https://docs.okd.io/3.11/install_config/aggregate_logging.html



   "/exports/etcd" *(rw,root_squash)

That's the etcd storage.


  I have the DO180 Red Hat book and I don't see it there.


If this is what you have, then it looks to me like it does not cover the 
installation of OpenShift.
https://www.redhat.com/en/services/training/do180-introduction-containers-kubernetes-red-hat-openshift

Maybe the following courses can help you to understand the system part of 
OpenShift: DO280, DO281, DO285.
https://www.redhat.com/en/services/training/all-courses-exams?f%5B0%5D=taxonomy_courses_by_curriculum%3AOpenShift

>   Why use this, and which size must I provide for this "disk"?

It is generally not recommended to use NFS for some parts of OpenShift. Please 
take a look into the docs for the components to see whether NFS is recommended 
or not.


- What are the differences between:
   - Openshift.


That's the general name for all OpenShift flavors.


   - Openshift Origin.


The OpenShift Open Source Version.


   - OKD.


Renaming of OpenShift Origin.
https://www.okd.io/#v3


   - Minishift.


A locally running OKD.
https://www.okd.io/minishift/


- What is the best version to use ?


I would recommend OCP (OpenShift Container Platform) 4.x. That's the commercial 
version of OpenShift.


- For OKD, how to update it to 4.4 .


A new installation, then deploy the apps on 4.


  Thanks,

  Thierry


Hth
Aleks


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users




___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: sftp service on cluster - how to do it

2019-11-21 Thread Aleksandar Lazic

Hi.
On 16.11.2019 22:36, Just Marvin wrote:

Hi,

     I know its trivial to run an sftp server as a pod on an openshift cluster. 
The real trick would be to figure out how clients outside the cluster could 
access this service. How can one accomplish this?


I would try this tool: https://github.com/yrutschle/sslh
But I know it's not trivial to run an sftp server in OpenShift in the default setup,
as sshd performs a lot of security and user checks which will fail in an unprivileged pod.


I would try https://github.com/drakkan/sftpgo as the sftp server and see if it
works.

The flow is then:

User -> OCP Router passthrough -> sslh service/pod -> sftp service/pod
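As a rough illustration of the sslh step, a config along these lines demultiplexes SSH/SFTP and TLS arriving on one port (the host names and ports are assumptions and untested):

```
# Illustrative sslh.cfg: one listener, traffic routed by protocol probe
listen:
(
    { host: "0.0.0.0"; port: "8443"; }
);

protocols:
(
    # SSH/SFTP clients are forwarded to the sftp service/pod
    { name: "ssh"; host: "sftpgo"; port: "2022"; },
    # everything speaking TLS goes to the normal TLS backend
    { name: "tls"; host: "localhost"; port: "443"; }
);
```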

It would be nice if you could tell us whether this setup works.


Regards,
Marvin


Regards
Aleks



Re: Follow up on OKD 4

2019-07-27 Thread Aleksandar Lazic
On 25.07.2019 19:31, Daniel Comnea wrote:
> 
> 
> On Thu, Jul 25, 2019 at 5:01 PM Michael Gugino  <mailto:mgug...@redhat.com>> wrote:
> 
> I don't really view the 'bucket of parts' and 'complete solution' as
> competing ideas.  It would be nice to build the 'complete solution'
> from the 'bucket of parts' in a reproducible, customizable manner.
> "How is this put together" should be easily followed, enough so that
> someone can 'put it together' on their own infrastructure without
> having to be an expert in designing and configuring the build system.
> 
> IMO, if I can't build it, I don't own it.  In 3.x, I could compile all
> the openshift-specific bits from source, I could point at any
> repository I wanted, I could point to any image registry I wanted, I
> could use any distro I wanted.  I could replace the parts I wanted to;
> or I could just run it as-is from the published sources and not worry
> about replacing things.  I even built Fedora Atomic host rpm-trees
> with all the kublet bits pre-installed, similar to what we're doing
> with CoreOS now in 3.x.  It was a great experience, building my own
> system images and running updates was trivial.

+1

> I wish we weren't EOL'ing the Atomic Host in Fedora.  It offered a lot
> of flexibility and easy to use tooling.
> 
> So maybe what we are asking here is:
> 
>   * opinionated OCP 4 philosophy => OKD 4 + FCOS (IPI and UPI) using ignition,
> CVO etc
>   * DYI kube philosophy reusing as many v4 components but with your own
> preferred operating system

+1 and in addition "preferred hoster" similar to 3.x.

It would be nice if the ansible-installer were still available for 4, as it makes it
possible to run OKD 4 on many distros where Ansible runs.

> In terms of approach, priority i think is fair to adopt a baby steps approach 
> where:
> 
>   * phase 1 = try to get out OKD 4 + FCOS asap so folks can start build up the
> knowledge around operating the new solution in a full production env
>   * phase 2 = once experience/ knowledge was built up then we can crack on 
> with
> reverse eng and see what we can swap etc.

I don't think that reverse engineering is a good way to go. As most of the parts are OSS,
and the platform `none` even exists, it would be nice to get the docs out for
this option, to be able to run the ansible-installer for OKD 4.

https://github.com/openshift/installer/blob/master/CHANGELOG.md#090---2019-01-05

```
Added

There is a new none platform for bring-your-own infrastructure users who
want to generate Ignition configurations. The new platform is mostly
undocumented; users will usually interact with it via OpenShift Ansible.

```
> On Thu, Jul 25, 2019 at 9:51 AM Clayton Coleman  <mailto:ccole...@redhat.com>> wrote:
> >
> > > On Jul 25, 2019, at 4:19 AM, Aleksandar Lazic
> mailto:openshift-li...@me2digital.com>> 
> wrote:
> > >
> > > HI.
> > >
> > >> On 25.07.2019 06:52, Michael Gugino wrote:
> > >> I think FCoS could be a mutable detail.  To start with, support for
> > >> plain-old-fedora would be helpful to make the platform more portable,
> > >> particularly the MCO and machine-api.  If I had to state a goal, it
> > >> would be "Bring OKD to the largest possible range of linux distros to
> > >> become the defacto implementation of kubernetes."
> > >
> > > I agree here with Michael. FCoS, or CoS in general, looks like a technically
> > > good idea, but it limits the flexibility of possible solutions.
> > >
> > > For example, when you need to change some system settings, you will need to
> > > create a new OS image; this is not very usable in some environments.
> >
> > I think something we haven’t emphasized enough is that openshift 4 is
> > very heavily structured around changing the cost and mental model
> > around this.  The goal was and is to make these sorts of things
> > unnecessary.  Changing machine settings by building golden images is
> > already the “wrong” (expensive and error prone) pattern - instead, it
> > should be easy to reconfigure machines or to launch new containers to
> > run software on those machines.  There may be two factors here at
> > work:
> >
> > 1. Openshift 4 isn’t flexible in the ways people want (Ie you want to
> > add an rpm to the OS to get a kernel module, or you want to ship a
> > complex set of config and managing things with m

Re: Follow up on OKD 4

2019-07-27 Thread Aleksandar Lazic
On 25.07.2019 21:20, Clayton Coleman wrote:
>> On Jul 25, 2019, at 2:32 PM, Fox, Kevin M  wrote:
>>
>> While "just works" is a great goal, and its relatively easy to accomplish in 
>> the nice, virtualized world of vm's, I've found it is often not the case in 
>> the dirty realm of real physical hardware. Sometimes you must 
>> rebuild/replace a kernel or add a kernel module to get things to actually 
>> work. If you don't support that, Its going to be a problem for many a site.
> 
> Ok, so this would be the “I want to be able to run my own kernel” use case.

Well, it's more "I want to run my own distro on my preferred hoster", which is not
possible yet.

> That’s definitely something I would expect to be available with OKD in
> the existing proposal, you would just be providing a different ostree
> image at install time.
> 
> How often does this happen with fedora today?  I don’t hear it brought
> up often so I may just be oblivious to something folks deal with more.
> Certainly fcos should work everywhere existing fedora works, but if a
> substantial set of people want that flexibility it’s a great data
> point.
> 
>>
>> Thanks,
>> Kevin
>> 
>> From: dev-boun...@lists.openshift.redhat.com 
>> [dev-boun...@lists.openshift.redhat.com] on behalf of Josh Berkus 
>> [jber...@redhat.com]
>> Sent: Thursday, July 25, 2019 11:23 AM
>> To: Clayton Coleman; Aleksandar Lazic
>> Cc: users; dev
>> Subject: Re: Follow up on OKD 4
>>
>>> On 7/25/19 6:51 AM, Clayton Coleman wrote:
>>> 1. Openshift 4 isn’t flexible in the ways people want (Ie you want to
>>> add an rpm to the OS to get a kernel module, or you want to ship a
>>> complex set of config and managing things with mcd looks too hard)
>>> 2. You want to build and maintain these things yourself, so the “just
>>> works” mindset doesn’t appeal.
>>
>> FWIW, 2.5 years ago when we were exploring having a specific
>> Atomic+Openshift distro for Kubernetes, we did a straw poll of Fedora
>> Cloud users.  We found that 2/3 of respondees wanted a complete package
>> (that is, OKD+Atomic) that installed and "just worked" out of the box,
>> and far fewer folks wanted to hack their own.  We never had such a
>> release due to insufficient engineering resources (and getting stuck
>> behind the complete rewrite of the Fedora build pipelines), but that was
>> the original goal.
>>
>> Things may have changed in the interim, but I think that a broad user
>> survey would still find a strong audience for a "just works" distro in
>> Fedora.
>>
>> --
>> --
>> Josh Berkus
>> Kubernetes Community
>> Red Hat OSAS
>>


Re: OKD Working Group Community Survey & Kick-off OKD WG Meeting Details

2019-07-24 Thread Aleksandar Lazic
Hi Diane.

On 24.07.2019 19:39, Diane Mueller-Klingspor wrote:
> All,
> 
> Could you take a minute and do this short survey for the OKD Working Group 
> (mtg
> logistics, communication channels, interest levels, feedback) 
> 
> Survey link here: https://forms.gle/abEFZ6oey79jxGjJ7

I have answered the survey, even though it is a Google Form :-/.

I would prefer a more privacy-friendly survey tool like
https://www.limesurvey.org/

Best regards
Aleks

> We'll be holding the OKD Working Group kick-off meeting next week on July 31st
> at 9:00 am Pacific.  The OKD working group purpose is to discuss, give 
> guidance
> & enable collaboration on current development efforts for OKD4, Fedora CoreOS 
> (FCOS) and Kubernetes. The OKD WG will also include discussion of shared
> community goals for OKD4 and beyond. 
> 
> OKD WG Kick-off Mtg details
> here: 
> https://commons.openshift.org/events.html#event|okd-working-group-kick-off-meeting|983
> We are also hosting a OpenShift Commons Briefing tomorrow (July 25th at 9:00 
> am
> Pacific) with Ben Breard and Benjamin Gilbert on FCOS 
> 
> Briefing details
> here: 
> https://commons.openshift.org/events.html#event|introduction-to-fedora-coreos-fcos-with-ben-breard-and-benjamin-gilbert-red-hat|982
> 
> Kind Regards,
> 
> Diane Mueller
> Director, Community Development
> Red Hat OpenShift
> @openshiftcommons
> 
> We have more in Common than you know, learn more at 
> http://commons.openshift.org
> 
> 


Re: Configure the load balancer

2019-05-03 Thread Aleksandar Lazic
Hi Udi.

On 30.04.2019 22:01, Udi Kalifon wrote:
> Hi all.
> 
> I started an app (gcr.io/kuar-demo/kuard-amd64:blue
> ) in 3 replicas, but I can see that
> it's always the same pod that is serving the requests. How do I check what 
> load
> balancer is used (if any), and change it to be more fair?
> 
> I am very new to openshift. I am practicing on minishift 3.11.

You can try to run this image which is based on this repo

https://gitlab.com/aleks001/caddy-template-usage

```
oc new-project demo-001
oc new-app --docker-image=docker.io/me2digital/caddy --name=caddy
oc scale dc/caddy --replicas=3
oc create route edge caddy --service=caddy --insecure-policy=Redirect
```

You then have a route and you can use this route to call the template

https:///templates/print-hostname.tmpl

To disable cookie session stickiness, you will need to set this annotation:

```
oc annotate route caddy haproxy.router.openshift.io/disable_cookies=true
```

https://docs.okd.io/latest/architecture/networking/routes.html#route-specific-annotations

> Regards,
> Udi Kalifon; Senior Automation QE

Hth
Aleks

> Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn,
> Commercial register: Amtsgericht Muenchen, HRB 153243,
> Managing Directors: Charles Cachera, Michael O'Neill, Tom Savage, Eric Shander
> 


Re: Upgrade the HAProxy inside the Openshift (to match the OSCP version)

2018-12-18 Thread Aleksandar Lazic
Hi.

I have created one in the past, and now a new one with 1.8.15:

https://hub.docker.com/r/me2digital/openshift-ocp-router-hap18

It's untested, as I don't have an OCP/OKD running at the moment, but it should work.

###

docker run --rm --entrypoint /usr/local/sbin/haproxy \
    me2digital/openshift-ocp-router-hap18 -vv

HA-Proxy version 1.8.15 2018/12/13
Copyright 2000-2018 Willy Tarreau

Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-unused-label
  OPTIONS = USE_LINUX_SPLICE=1 USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1 USE_PCRE_JIT=1 USE_TFO=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.0.2k-fips 26 Jan 2017
Running on OpenSSL version : OpenSSL 1.0.2k-fips 26 Jan 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.3.4
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built with PCRE version : 8.32 2012-11-30
Running on PCRE version : 8.32 2012-11-30
PCRE library supports JIT : yes
Built with zlib version : 1.2.7
Running on zlib version : 1.2.7
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with network namespace support.

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
        [SPOE] spoe
        [COMP] compression
        [TRACE] trace

###

Best regards
Aleks

On 15.12.2018 15:11, Jan-Otto Kröpke wrote:
> Hi,
>
> it is possible to upgrade the HAProxy version from 1.8.1 to 1.8.14 to match
> the version of haproxy inside OSCP?
>
> HAProxy 1.8.1 has a lot of issues with h2 (there are 44 bugfixes related to h2
> since 1.8.1; https://www.haproxy.org/download/1.8/src/CHANGELOG). Also the
> thread implementation seems to be buggy in 1.8.1.
>
> Can someone provide a newer haproxy-router image with the haproxy version of 
> OSCP?
>
> The SRPMs are available
> here: http://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHOSE/SRPMS/
>
> It would be great if OKD and OSCP using the same version.
>
> Best,
> Jan
>


Re: Antwort: PV based on NFS does not survive reboot

2018-11-06 Thread Aleksandar Lazic
Hi Marc.

On 05.11.2018 09:55, marc.schle...@sdv-it.de wrote:
> It seems my understanding of persistent-volumes and the corresponding claim
> was wrong. I've expected that a PV can have multiple PVS associated to it as
> long as there is enough storage.
> But it seems it is a 1-to-1 relation and my PV was not reclaimed after I
> deleted the first PVC. The reboot obviously had nothing to do with this.
>
> I am going to test this later today.

Yes, it is a 1:1 relation.

You can use different PVs with the same NFS connection string, but it is still 1:1:


pv001 -> pvc001

pv002 -> pvc002
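For illustration, two PVs sharing the same NFS export could be defined like this (server and path are placeholders); each PV still binds to exactly one PVC:

```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com   # placeholder NFS server
    path: /exports/data       # placeholder export path
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com
    path: /exports/data
```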

Regards

aleks

>
>
> From:        marc.schle...@sdv-it.de
> To:        users@lists.openshift.redhat.com
> Date:        05.11.2018 08:58
> Subject:        PV based on NFS does not survive reboot
> Sent by:        users-boun...@lists.openshift.redhat.com
> 
>
>
>
> I am running a test setup including a dedicated node providing a NFS share,
> which is not part of the Openshift installation.
> After the installation I ran all the steps provided by the documentation [1]
> and I was able to add a persistent-volume-claim to my projekt which was bound
> to the NFS-PV.
>
> However, after rebooting my cluster I can no longer add PVCs. They fail with
> the message that no persistent-volume is available. Running the oc command to
> add the NFS-PV again fails with a message that it already exists.
> I checked my nfs-node and the nfs-service is running. Since I did not install
> any nfs-utils on the Openshift nodes I assume that the client service might
> not be enabled there, hence the PV is not available. I would assume that this
> is handled by the ansible-installer.
>
> Any ideas what could cause this behavior?
>
> [1]
> _https://docs.openshift.com/enterprise/3.0/admin_guide/persistent_storage_nfs.html_
>
> regards
> Marc
>


Re: Command to start/top a cluster gracefully

2018-10-10 Thread Aleksandar Lazic
On 10.10.2018 11:22, Marc Ledent wrote:
> Hi all,
>
> Is there a command to stop/start an OpenShift cluster gracefully? "oc cluster"
> commands act only on a local all-in-one cluster...

Do you mean something like this?

* Scale all dc/rc/ds to 0

* stop all node processes

* stop all master process

* stop all etcd processes

* stop all docker processes

* shutdown all machines

I don't know an easier way; maybe there is a playbook in the ansible repo.

Regards

Aleks

> Thanks in advance,
> Marc
>
>
>


Re: how to disable the ansible service broker?

2018-10-10 Thread Aleksandar Lazic
Hi.
On 10.10.2018 17:25, Marc Boorshtein wrote:
>
>
> Which release is this one?
>
>
>
> 3.9

I faced a similar issue during the upgrade process.

There is a pull request to fix the handling of the variables:

https://github.com/openshift/openshift-ansible/pull/9770

I hope this pull request will be released soon.

You can try to set the following vars as mentioned in this comment
https://github.com/openshift/openshift-ansible/issues/8705#issuecomment-396459919

```
openshift_service_catalog_remove=true
openshift_enable_service_catalog=true
ansible_service_broker_remove=true
ansible_service_broker_install=false
template_service_broker_remove=true
template_service_broker_install=false
```

Regards
Aleks



Re: Openshift Origin cluster migration

2018-08-17 Thread Aleksandar Lazic
Hi Marcello.

On 17.08.2018 10:18, Marcello Lorenzi wrote:
> Hi All,
>
> we are checking the possibility to move an existing Openshift Origin 3.6
> cluster hosted on a KVM environment to a newer Openshift Origin 3.9 cluster
> hosted on VMWare environment with more servers and hardware optimized. Into
> this migration we would maintain the wildcard DNS address and the master API
> address.
>
> Is it possible to install the newer cluster via the advanced installation, or does
> the installer try to contact the older cluster and possibly create issues?

Yes, you can install the new cluster via the advanced install.
Why not 3.10?

Normally, two OCP/OKD clusters do not talk to each other, AFAIK.

If by 'maintain the wildcard DNS address and the master API address' you mean that
you will reuse the current DNS setup for the new cluster, then you should take care
of the DNS caches on the clients and make sure the DNS entries point only to the
new cluster, not the old one.

Regards
Aleks

> Thanks,
> Marcello 
>
>


Re: Openshift centralized logging - add custom container logfiles

2018-08-16 Thread Aleksandar Lazic
Hi.

Am 16.08.2018 um 16:27 schrieb Rich Megginson:
> On 08/16/2018 05:42 AM, Aleksandar Lazic wrote:
>>> On 16.08.2018 12:48, Aleksandar Kostadinov wrote:
>>> Might be real nice to allow pod to request sockets created where different 
>>> log
>>> streams can be sent to central logging without extra containers in the pod.
>> You can run socklog/fluentbit/... in the background to handle the logging and
>> your app logs to this socket.
>
> So you would need to configure your app to log to a socket instead of a log 
> file?
> Where does socklog write the logs?  Who reads from that destination?

Socklog writes to stdout by default.
In my setup, HAProxy is configured to write to the unix socket, but socklog can
also listen on a UDP socket.
In either case, the output is written to stdout.

http://smarden.org/socklog/

I have describe the setup in two blog posts
https://www.me2digital.com/blog/2017/05/syslog-in-a-container-world/
https://www.me2digital.com/blog/2017/09/syslog-receiver/

Another possible tool is https://fluentbit.io/ as it can use more input sources.
https://fluentbit.io/documentation/0.13/input/

For example, you can use the tail input if it's not easy to change the logging
setup of the app.
https://fluentbit.io/documentation/0.13/input/tail.html
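A minimal fluent-bit config for that tail case could look like this (the log path is an assumption); the tailed lines are written to stdout, where the cluster log collector picks them up:

```
# Illustrative fluent-bit sidecar config
[INPUT]
    Name   tail
    Path   /var/log/myapp/*.log   # assumed application log path

[OUTPUT]
    Name   stdout
    Match  *
```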

In the past, rsyslog was hard to set up for OpenShift with the normal privileges of
the RHEL image; that was the reason for me to build this solution, IMHO.
The https://www.rsyslog.com/doc/v8-stable/configuration/modules/omstdout.html module
is documented as not intended for real deployments.

Best Regards
Aleks

>> Something similar as I have done it in my haproxy image.
>>
>> https://gitlab.com/aleks001/haproxy18-centos/blob/master/containerfiles/container-entrypoint.sh#L92-93
>>
>>
>> ###
>> ...
>> echo "starting socklog"
>> /usr/local/bin/socklog unix /tmp/haproxy_syslog &
>> ...
>> ###
>>
>> Regards
>> Aleks
>>> Jeff Cantrill wrote on 08/15/18 16:50:
>>>> The recommended options with the current log stack are either to 
>>>> reconfigure
>>>> your log to send to stdout or add a sidecar container that is capable of
>>>> tailing the log in question which would write it to stdout and ultimately
>>>> read by fluentd.
>>>>
>>>> On Wed, Aug 15, 2018 at 2:47 AM, Leo David >>> <mailto:leoa...@gmail.com>> wrote:
>>>>
>>>>  Hi Everyone,
>>>>  I have logging with fluentd / elasticsearch at cluster level running
>>>>  fine,  everything works as expected.
>>>>  I have an issue though...
>>>>  What would it be the procedure to add some custom log files from
>>>>  different containers ( logs that are not shown in stdout ) to be
>>>>  delivered to elasticseach as well ?
>>>>  I two different clusters ( 3.7 and 3.9 ) up and running,  and i know
>>>>  that in 3.7 docker logging driver is configured with journald whilst
>>>>  in 3.9 is json-file.
>>>>  Any thoughts on this ?
>>>>  Thanks a lot !
>>>>
>>>>  --     Best regards, Leo David
>>>>
>>>> -- 
>>>> -- 
>>>> Jeff Cantrill
>>>> Senior Software Engineer, Red Hat Engineering
>>>> OpenShift Logging
>>>> Red Hat, Inc.
>>>> *Office*: 703-748-4420 | 866-546-8970 ext. 8162420
>>>> jcant...@redhat.com <mailto:jcant...@redhat.com>
>>>> http://www.redhat.com


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Openshift centralized logging - add custom container logfiles

2018-08-16 Thread Aleksandar Lazic
On 16.08.2018 at 12:48, Aleksandar Kostadinov wrote:
> Might be real nice to allow pod to request sockets created where different log
> streams can be sent to central logging without extra containers in the pod.

You can run socklog/fluentbit/... in the background to handle the logging and
your app logs to this socket.
Something similar as I have done it in my haproxy image.

https://gitlab.com/aleks001/haproxy18-centos/blob/master/containerfiles/container-entrypoint.sh#L92-93

###
...
echo "starting socklog"
/usr/local/bin/socklog unix /tmp/haproxy_syslog &
...
###

Regards
Aleks
> Jeff Cantrill wrote on 08/15/18 16:50:
>> The recommended options with the current log stack are either to reconfigure
>> your log to send to stdout or add a sidecar container that is capable of
>> tailing the log in question which would write it to stdout and ultimately
>> read by fluentd.
>>
>> On Wed, Aug 15, 2018 at 2:47 AM, Leo David > > wrote:
>>
>>     Hi Everyone,
>>     I have logging with fluentd / elasticsearch at cluster level running
>>     fine,  everything works as expected.
>>     I have an issue though...
>>     What would it be the procedure to add some custom log files from
>>     different containers ( logs that are not shown in stdout ) to be
>>     delivered to elasticseach as well ?
>>     I two different clusters ( 3.7 and 3.9 ) up and running,  and i know
>>     that in 3.7 docker logging driver is configured with journald whilst
>>     in 3.9 is json-file.
>>     Any thoughts on this ?
>>     Thanks a lot !
>>
>>     --     Best regards, Leo David
>>
>>
>>
>>
>>
>> -- 
>> -- 
>> Jeff Cantrill
>> Senior Software Engineer, Red Hat Engineering
>> OpenShift Logging
>> Red Hat, Inc.
>> *Office*: 703-748-4420 | 866-546-8970 ext. 8162420
>> jcant...@redhat.com 
>> http://www.redhat.com
>>
>>




Re: error running application using customized image stream

2018-08-07 Thread Aleksandar Lazic
Hi.

On 07.08.2018 at 16:23, dhanashree.kulka...@brown-iposs.eu wrote:
>
> Hello thank you for taking a look. I checked the link you provided and tried
> to change my Dockerfile accordingly but it didn’t seem to work.
>
> So, I changed the Dockerfile to use a user called “ubuntu” and added this user
> to sudoers of container. Still I get the permission error.
>
> I added following lines in the Dockerfile:
>
>  
>
> RUN apt-get install -y libreoffice --no-install-recommends
>
>
>  
>
> RUN apt-get install -y sudo && adduser ubuntu && echo "ubuntu ALL=(root)
> NOPASSWD:ALL" > /etc/sudoers.d/ubuntu && chmod 4755 /etc/sudoers.d/ubuntu
>
>
> RUN su - ubuntu
>
>  
>
> Is it advisable to change default setting of openshift to use anyuser?
>

No, it's not a good idea.
The main problem is that https://github.com/openmeetings/openmeetings-docker
isn't prepared to run as a non-root user, and running as root is in general not a good idea.

You can see this in these lines:
https://github.com/openmeetings/openmeetings-docker/blob/master/Dockerfile#L30
ENV work /root/work

https://github.com/openmeetings/openmeetings-docker/blob/master/scripts/om.sh#L15-L17

I suggest changing the Dockerfile and the om.sh according to the suggestion
from Anton, following the Keycloak Dockerfile:

https://github.com/jboss-dockerfiles/keycloak/blob/master/server-openshift/Dockerfile#L9-L16

At build time you can run some tasks as root (like yum install), but at runtime
you can't.

You can change the work directory to, let's say, /data/om and do all the work there.
At runtime just call '${TOMCAT_PATH}/bin/catalina.sh run'.
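A rough sketch of such a Dockerfile, following the Keycloak-style pattern; the base image, package names, paths, and the TOMCAT_PATH value below are illustrative assumptions, not the real Openmeetings setup:

```
# Sketch only: adapt base image and packages to the real Openmeetings build
FROM ubuntu:18.04

# Build-time tasks may run as root (e.g. package installs)
RUN apt-get update \
 && apt-get install -y --no-install-recommends openjdk-8-jre \
 && rm -rf /var/lib/apt/lists/*

# Work dir moved away from /root so an arbitrary UID can use it
ENV work=/data/om
ENV TOMCAT_PATH=${work}/tomcat

# OpenShift runs containers with a random UID in group 0 (the root group),
# so make the work dir group-owned by 0 and group-writable
RUN mkdir -p ${work} \
 && chgrp -R 0 ${work} \
 && chmod -R g=u ${work}

USER 1001
WORKDIR ${work}

# Run the server in the foreground instead of an om.sh that assumes root
CMD ["/bin/sh", "-c", "${TOMCAT_PATH}/bin/catalina.sh run"]
```

The key point is the chgrp/chmod step: any files the process writes at runtime must be writable by group 0, because the assigned UID is random but always belongs to that group.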

Regards
aleks

> Best Regards,
>
> Dhanashree Kulkarni
>
>  
>
> brown-iposs GmbH
>
> Friedrich-Breuer-Straße 120
>
> 53225 Bonn
>
> Germany
>
>  
>
> Fon   +49 (0) 228 299 799 80
>
> Fax   +49 (0) 228 299 799 84
>
> mailto:birgit.bachm...@brown-iposs.eu
>
> www.brown-iposs.eu 
>
> www.facebook.com/browniposs 
>
> www.facebook.com/wimap4g 
>
>  
>
> Directors: Dr. Bernd Schröder, Karsten Schmeling
>
> Trade register: 14385, Country court Bonn
>
> VAT-ID: DE814670174
>
>  
>
>
>  
>
> This e-mail may contain confidential and/or privileged information. If you are
> not the intended recipient (or have received this e-mail in error) please
> notify the sender immediately and destroy this e-mail. Any unauthorised
> copying, disclosure or distribution of the material in this e-mail is strictly
> forbidden.
>
>  
>
> *From:* kurren...@gmail.com [mailto:kurren...@gmail.com] *On behalf of* Anton
> Hughes
> *Sent:* Tuesday, August 07, 2018 1:12 PM
> *To:* dhanashree.kulka...@brown-iposs.eu
> *Cc:* users@lists.openshift.redhat.com
> *Subject:* Re: error running application using customized image stream
>
>  
>
> By default OpenShift doesnt allow containers to run using root user.
>
>  
>
> Take a look
> at 
> https://github.com/jboss-dockerfiles/keycloak/blob/master/server-openshift/Dockerfile#L9-L16
> for an example of giving the permissions and setting a non-root user.
>
>  
>
> On 7 August 2018 at 21:38,  > wrote:
>
> Hello,
>
> My name is Dhanashree Kulkarni. I have installed OpenShift Origin all in
> one in a Centos 7 VM running on Proxmox VE.
>
> I have built a Docker image using a Dockerfile, and created an image
> stream using that Docker image and tagged and pushed it in the Docker
> registry inside OpenShift. Now when I want to run the application using
> this created image stream, it gives me permission error.
>
> I want to run Apache Openmeetings application inside OpenShift. For that I
> have used the Dockerfile created by Maxim Solodovnik
> (https://github.com/openmeetings/openmeetings-docker). The ENTRYPOINT in
> the Dockerfile seems to create this error.
>
> **Steps Followed:**
>
>  
>
> git clone https://github.com/dhanugithub/openmeetings-docker.git
>
> cd openmeetings-docker
>
> ls
>
> docker build -t om-server .
>
> docker images
>
> docker login -u openshift -p 
> docker-registry-default.apps.x.x.x.x.nip.io
> 
>
> oc create is om-server -n mec
>
> docker tag om-server
> docker-registry-default.apps.x.x.x.x.nip.io/mec/om-server:latest
> 
>
> docker push
> docker-registry-default.apps.x.x.x.x.nip.io/mec/om-server:latest
> 
>
>  
>
> I am attaching the error log which 

Re: OC debug command does not show command prompt

2018-06-06 Thread Aleksandar Lazic

On 06/06/2018 13:04, Brian Keyes wrote:

If I do a "debug in terminal" in the console I always get a command prompt

if i goto the command line and do a "oc debug   i get this message

Debugging with pod/lster-1-2rqg9-debug, original command:
container-entrypoint /tmp/scripts/run
Waiting for pod to start ...
Pod IP: 10.252.4.18
If you don't see a command prompt, try pressing enter.

i hit enter many many times and do not ever get a command prompt


Are you behind a proxy?


--
thanks





Re: errors accessing egressnetworkpolicies.network.openshift.io when attempting to export project

2018-06-02 Thread Aleksandar Lazic

Hi.

On 02/06/2018 13:18, Graham Dumpleton wrote:

For the basic Python application you wouldn't need to export most of
those and for some doing so would cause problems when you try to load
them again.

For a basic application with no secrets, configmaps or persistent
volumes, all you need is:

   oc export is,bc,dc,svc,route -o yaml


Just to be on the safe side, please also add cm (= configmap) and secrets to
the export, for future use.

oc export is,bc,dc,svc,route,cm,secrets -o yaml


Do not include pods, replicationcontrollers or endpoints.

You also want to be selective about what you export by using a label
selector.

   oc export is,bc,dc,svc,route --selector app=yourappname -o yaml

That way you get just what is necessary for the application.

Before they can be reloaded in a fresh project or OpenShift instance,
you would usually need to massage the result, especially fixing up
image references and reverting them to image stream references.

Overall you are better off to export as a template and edit the result
to create a template you can then deploy multiple times, where the
application name is parameterised.


Full ack; the command then looks like this.

FYI: as always in YAML, **don't use TABS**.

```
oc export is,bc,dc,svc,route,cm,secrets -o yaml --as-template=MyPersonalTemplate
```

Here is the link to the template doc:
https://docs.openshift.org/3.9/dev_guide/templates.html
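For illustration, such a parameterised template could look roughly like this (all names are placeholders):

```
apiVersion: v1
kind: Template
metadata:
  name: my-app-template
parameters:
- name: APP_NAME
  description: Name used for all objects
  required: true
objects:
- apiVersion: v1
  kind: Service
  metadata:
    name: ${APP_NAME}
    labels:
      app: ${APP_NAME}
  spec:
    ports:
    - port: 8080
    selector:
      app: ${APP_NAME}
```

It can then be deployed multiple times with, e.g., `oc process -f my-app-template.yaml -p APP_NAME=myapp | oc create -f -`.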


Graham


Best regards
Aleks


On 2 Jun 2018, at 2:01 am, Brian Keyes  wrote:

I am attempting to follow these instructions

https://docs.openshift.com/container-platform/3.7/day_two_guide/project_level_tasks.html
 


I want to backup THE sample python app and I created a script like this ( from 
the documentation)




$ for object in rolebindings serviceaccounts secrets imagestreamtags podpreset 
cms egressnetworkpolicies rolebindingrestrictions limitranges resourcequotas 
pvcs templates cronjobs statefulsets hpas deployments replicasets 
poddisruptionbudget endpoints
do
  oc export $object -o yaml > $object.yaml
done

--
but when I run this I get some access denied errors like this , is this saying 
that the objects I am attempting to back up do not exist?


$ ./exportotherprojects.sh
error: no resources found - nothing to export
the server doesn't have a resource type "cms"
Error from server (Forbidden): User "admin" cannot list egressnetworkpolicies.network.openshift.io 
 in the namespace "sample-py": User "admin" cannot list 
egressnetworkpolicies.network.openshift.io  in project "sample-py" 
(get egressnetworkpolicies.network.openshift.io )
error: no resources found - nothing to export
error: no resources found - nothing to export
error: no resources found - nothing to export
the server doesn't have a resource type "pvcs"
error: no resources found - nothing to export
error: no resources found - nothing to export
error: no resources found - nothing to export
the server doesn't have a resource type "hpas"
error: no resources found - nothing to export
error: no resources found - nothing to export
Error from server (Forbidden): User "admin" cannot list poddisruptionbudgets.policy in the namespace 
"sample-py": User "admin" cannot list poddisruptionbudgets.policy in project "sample-py" 
(get poddisruptionbudgets.policy)


thanks





Re: how can I use a custom image with openshift

2018-05-23 Thread Aleksandar Lazic

Hi Brian.

On 23/05/2018 17:20, Brian Keyes wrote:

I want to use a custom image that has alpline with python and boto3
installed on it

I am seeing the console might have some way to do this , but I am not sure
on the procedure at all

would I , create a docker contaner , install boto3 manually , commit that
to an image and somehow get that image into openshift , maybe pull from
dockerhub

any advice would be helpfull


You can use the `dockerfile` build input if you are allowed to build
via Docker:
https://docs.openshift.org/latest/dev_guide/builds/build_inputs.html#dockerfile-source

If you are not allowed to use Docker builds, then you can take a look at
source-to-image (S2I):

https://docs.openshift.org/latest/dev_guide/builds/build_inputs.html#using-secrets-s2i-strategy

Maybe this image could help, but I don't know if you are allowed to
install anything with this image; I'm not a Python guru ;-).

https://access.redhat.com/containers/#/registry.access.redhat.com/rhscl/python-35-rhel7


thanks


You're welcome!







Re: attempting to run some phython code and getting resouce errors

2018-05-23 Thread Aleksandar Lazic

Hi.

On 23/05/2018 10:45, Brian Keyes wrote:

I am attempting to run this code on openshift

import sys
for i in range(sys.maxsize**10):  # you could go even higher if you really want
    if there_is_a_reason_to_break(i):
        break


I get this error in the gui

0/3 nodes are available: 1 Insufficient pods, 2 MatchNodeSelector.
37 times in the last
how can I see what resources are available and what may be using them?

thanks


You will need to make some calls via the cli.

oc get nodes
oc describe node 

There you can see the available resources.

I think you will need to have at least cluster-reader permission for this.

Best regards
aleks


count-1-build

Pod Warning Failed Scheduling  0/3 nodes are available: 1 Insufficient
pods, 2 MatchNodeSelector.
37 times in the last

--
Brian Keyes
Systems Engineer, Vizuri
703-855-9074(Mobile)
703-464-7030 x8239 (Office)

FOR OFFICIAL USE ONLY: This email and any attachments may contain
information that is privacy and business sensitive.  Inappropriate or
unauthorized disclosure of business and privacy sensitive information may
result in civil and/or criminal penalties as detailed in as amended Privacy
Act of 1974 and DoD 5400.11-R.


Well this is now in the archive ;-)



Re: IBM WebSphere Application Server on OpenShift Origin

2018-05-23 Thread Aleksandar Lazic

Hi.

On 23/05/2018 17:36, Tien Hung Nguyen wrote:

Hi,

I'm trying to install IBM WebSphere Application Server on OpenShift Origin
running on my local computer (Docker for Windows).

Normally, when I'm running IBM WebSphere Application Server on Docker, I
just use an URL like this to retrieve the Admin console on the browser:
https://10.0.75.1:9043/ibm/console/logon.jsp. However, my WebSphere
Application doesn't support SSL (which is the default setting), hence it
establishes an unsecured connection (without the https) between the browser
and the server, and this worked fine when I was running WebSphere on Docker.

Now, I was trying to migrate the same Docker WebSphere Application Server
image to OpenShift Origin. Therefore, I created a secured router with the
path  /ibm/console/logon.jsp.pointing to a service on port 9043 that
connects to my deployed WebSphere Application Server pod. The URL for the
route looks like this:
https://websphere-server-myproject.10.0.75.2.nip.io/ibm/console/login.do?action=secure

However, the route doesn't work and I can't retrieve the WebSphere
Application Server Console on my Browser. I'm getting the following message
on the browser:


[snipp] Default 503 page


When I try the same with an unsecured router, I'm getting the same error.

I think there is a problem with the routing system on OpenShift in
combination with IBM WebSphere Application Server because the IBM WebSphere
Application Server has just worked fine on Docker.

Please, could you help me to solve this issue?


Please can you provide the following outputs:

* oc version
* oc -n  describe svc > services.yml
* oc -n  describe route > routes.yml
* oc -n  get pod -o wide > pods.yml
* oc -n  logs  > was-pod-log.log


Regards,
Tien


Regards
Aleks



Re: How could I re configure "openshift_master_cluster_public_hostname" after cluster setup?

2018-05-22 Thread Aleksandar Lazic

On 22/05/2018 10:12, Yu Wei wrote:

Hi,
I installed openshift origin cluster withe following variables set.
openshift_master_cluster_public_hostname
openshift_master_cluster_hostname

Then I want to reconfigure above variables to use different values.

Is it possible? If so, how could I do that?


Yes, just rerun byo/config.yml.


Thanks,

Jared, (韦煜)
Software developer
Interested in open source software, big data, Linux





Re: oc port-forward command unexpectedly cuts the proxied connection on Origin 3.7.2

2018-04-24 Thread Aleksandar Lazic
Hi.

On 23.04.2018 at 11:33, Fabio Martinelli wrote:
> Hi Aleksandar
>
> On 22 April 2018 at 17:07, Aleksandar Lazic <al...@me2digital.eu
> <mailto:al...@me2digital.eu>> wrote:
>
>
> Does the port-forwarding goes thru a proxy?
>
>
> No others SW in the middle, it's just "pure" oc port-forward
>  
>
> Is there a amount of time when this happens (= timeout)
>
>
> ~3mins the times I've checked
180 seconds.
Maybe this could be an answer:
https://unix.stackexchange.com/questions/150402/what-is-the-default-idle-timeout-for-openssh

> What's in the events when this happen?
>
>
> the only messages I could found are :
>
> Apr 20 14:30:56 wfpromshap21 origin-node: I0420 14:30:56.783451 
> 102439 docker_streaming.go:186] executing port forwarding command:
> /usr/bin/nsenter -t 12036 -n /usr/bin/socat - TCP4:localhost:
>
>
> Apr 20 14:30:56 wfpromshap21 journal: I0420 14:30:56.783451  102439
> docker_streaming.go:186] executing port forwarding command:
> /usr/bin/nsenter -t 12036 -n /usr/bin/socat - TCP4:localhost:
>
>
>   is the port of the SSHd unprivileged daemon
>
>
>
> I'm afraid that somehow Ansible manages to screw up the oc
> port-forward tunnel but I can't really say how

What's the output when you add `--loglevel=9`?

BR
Aleks



Re: oc port-forward command unexpectedly cuts the proxied connection on Origin 3.7.2

2018-04-22 Thread Aleksandar Lazic
Hi.

On 19.04.2018 at 10:46, Fabio Martinelli wrote:
> Dear Colleagues
>
> In few time I've to migrate several corporate applications from a
> RedHat6 LXC cluster to a RedHat7 OpenShift Origin 3.7.2 cluster
>
> here the application Developers are use to write an Ansible playbook
> for each app so they've explicitly requested me to prepare a base
> CentOS7 container running as non-root and featuring an unprivileged
> SSHd daemon in order to run their well tested Ansible playbooks,
> furthermore to place the container /home on a dedicated GlusterFS
> volume to make it persistent along the time ; last ring of this chain
> is the oc port-forward command that's in charge of connecting the
> Developers workstation with the unprivileged SSHd daemon just for the
> Ansible playbook execution time.
>
> this is actually working pretty well but the fact that the oc
> port-forward command at certain point cuts the connection and the
> Ansible run gets obviously affected making the Developer experience
> disappointing ; on the other end the SSHd process didn't stop.
Does the port-forwarding go through a proxy?
Is there a fixed amount of time after which this happens (= a timeout)?
What's in the events when this happens?

> kindly which settings may I change both on the Origin Masters yaml
> files and on the Origin Nodes yaml files in order to prevent this issue ?
>
> I'm aware that the application Developers should rewrite their works
> in terms of Dockerfiles but for the time being they've really no time
> to do that.
>
>
> Many thanks,
> Fabio Martinelli
Best regards
Aleks
ME2Digital



Re: NGINX Ingress

2018-03-12 Thread Aleksandar Lazic
Hi Andrew.

On 12.03.2018 at 14:03, Gebauer, Andrew wrote:
> Thanks, Aleks. NGINX conf is below.
>
> I think this issue was caused by the OpenShift cluster nodes being on Docker 
> 1.13.1.
>
> According to this, Docker 1.12.6 is the latest supported by OS 3.7:
> https://access.redhat.com/articles/2176281
>
> I reverted to Docker 1.12.6, and the permissions on /proc/1/fd are now 
> root:root instead of 1001:root, and NGINX is able to access stderr and stdout 
> and starts successfully.
So issue solved.
> Docker was installed on this cluster by the OpenShift 3.7 ansible scripts, so 
> I'm wondering why 1.13.1 was installed instead of 1.12.6.
>
> Andrew
>
> >What the output of
>> oc rsh  cat /var/lib/nginx/conf/nginx.config
> user  nginx;
Even with this directive nginx is running?!
That surprises me.

Best regards
Aleks

> worker_processes  auto;
> daemon off;
>
> error_log  /var/log/nginx/error.log warn;
> pid/var/run/nginx.pid;
>
> events {
> worker_connections  1024;
> }
>
>
> http {
> include   /etc/nginx/mime.types;
> default_type  application/octet-stream;
>
> log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
>   '$status $body_bytes_sent "$http_referer" '
>   '"$http_user_agent" "$http_x_forwarded_for"';
> access_log  /var/log/nginx/access.log  main;
>
> sendfileon;
> #tcp_nopush on;
>
> keepalive_timeout  65;
>
> #gzip  on;
>
> server_names_hash_max_size 1024;
> server_names_hash_bucket_size 128;
>
> map $http_upgrade $connection_upgrade {
> default upgrade;
> ''  close;
> }
> 
> 
> 
> 
>
> server {
> listen 80 default_server;
> listen 443 ssl default_server;
>
> ssl_certificate /etc/nginx/secrets/default;
> ssl_certificate_key /etc/nginx/secrets/default;
>
> server_name _;
> server_tokens "on";
> access_log off;
>
> 
>
> location / {
>return 404;
> }
> }
>
> include /etc/nginx/conf.d/*.conf;
> }
>
> 
> > ?
> >
> > Thanks for any help,
> >
> > Andrew
> >
> Best regards
> Aleks



Re: NGINX Ingress

2018-03-12 Thread Aleksandar Lazic
Hi.

On 09.03.2018 at 19:53, Gebauer, Andrew wrote:
>
> Hello—
>
>  
>
> I am running into an issue launching NGINX Ingress on OpenShift Origin
> 3.7. Wondering if others have seen the same problem.
>
>  
>
> The pod is running in the default project and is launching using an OS
> service account that has scc/privileged and cluster-admin access
> (system:serviceaccount:default:nginx-ingress).
>
>  
>
> The NGINX container (docker.io/nginxdemos/nginx-ingress:1.1.1) is
> configured to pipe the access and error logs to stdout and stderr,
> respectively.
>
>  
>
> However, when the pod launches, it goes into CrashLoopBackOff because
> NGINX can’t access the configured stdout/stderr locations:
>
>  
>
> I0309 18:43:15.111265 645 main.go:65] Starting NGINX Ingress
> controller Version=1.1.1 GitCommit=8fc772d
>
> nginx: [alert] could not open error log file: open()
> "/var/log/nginx/error.log" failed (13: Permission denied)
>
> 2018/03/09 18:43:15 [emerg] 657#657: open() "/var/log/nginx/error.log"
> failed (13: Permission denied)
>
> E0309 18:43:15.134386 645 main.go:158] nginx command exited with
> an error: exit status 1
>
>  
>
> When I run the pod in debug mode, I can see that the reason for the
> error is that a non-root user (1001) owns the /proc directory that
> access.log and error.log are symlinked to:
>
>  
>
> lrwxrwxrwx. 1 root root 12 Jan 12 18:43 access.log -> /proc/1/fd/1
>
> lrwxrwxrwx. 1 root root 12 Jan 12 18:43 error.log -> /proc/1/fd/2
>
>  
>
> root@nginx-ingress-rc-rr2xz-debug:/var/log/nginx# ls -l /proc/1/fd
>
> ls: cannot read symbolic link '/proc/1/fd/0': Permission denied
>
> ls: cannot read symbolic link '/proc/1/fd/1': Permission denied
>
> ls: cannot read symbolic link '/proc/1/fd/2': Permission denied
>
> total 0
>
> lr-x--. 1 1001 root 64 Mar  9 18:32 0
>
> l-wx--. 1 1001 root 64 Mar  9 18:32 1
>
> l-wx--. 1 1001 root 64 Mar  9 18:32 2
>
>  
>
> Where does the 1001 user come from?
>

Well it could be this line.
https://github.com/openshift/origin/blob/master/images/router/nginx/Dockerfile#L26

But this does not fit that image:
docker.io/nginxdemos/nginx-ingress:1.1.1

which is based on
https://github.com/nginxinc/kubernetes-ingress/blob/v1.1.1/nginx-controller/Dockerfile

How have you set up the ingress controller?
What's the output of:

oc rsh  cat /var/lib/nginx/conf/nginx.config

>  
>
> Thanks for any help,
>
> Andrew
>
Best regards
Aleks




Re: Prometheus

2018-03-01 Thread Aleksandar Lazic
Hi Alex.

Can you tell us how much data was written per day?

Thank you

Best regards
aleks

On 28.02.2018 at 15:13, Alex Bpunkt wrote:
> Hi Aleksandr,
>
> a PVC is a persistent volume claim. These are the datastores where
> openshift applications can write their data to.
>
> Yes, auto-scaling will work when prometheus is installed. But in the
> current version autoscaling is handled by hawkular (openshift-metrics
> playbooks when you are installing it)
>
> Regards,
> Alexander
>
> On Wed, Feb 28, 2018 at 2:27 PM, Polushkin Aleksandr
>  > wrote:
>
> Alex thank you for the response !
>
>  
>
> Sorry but what is PVC ? Another thing that concerns me, if I’ll
> install cluster with Prometheus, will autoscalling work ?
>
>  
>
>  
>
> Regards,
>
> Aleksandr
>
>  
>
> 
> --
>
> T-Systems RUS GmbH
>
> Point of Production
>
> Aleksandr Polushkin
>
> Sr. Configuration Manager
>
> V.O. 13th line, 14B, 199034, St.Petersburg, Russia
>
> Email: aleksandr.polush...@t-systems.ru
> 
>
>  
>
> *From:*Alex Bpunkt [mailto:alexander.barti...@gmail.com
> ]
> *Sent:* Tuesday, February 27, 2018 8:13 PM
> *To:* Polushkin Aleksandr  >
> *Subject:* Re: Prometheus
>
>  
>
> Hi,
>
> I tested it on our test cluster and it was running very good.
>
> As far as I know it should replace hawkular in one of the future
> releases to handle autoscaling.
>
> Depending on the size of your cluster have a look at the PVC.
> Prometheus wrote a lot of data in my previous tests.
>
> Regards,
>
> Alexander
>
>  
>
> On Tue, Feb 27, 2018 at 5:54 PM, Polushkin Aleksandr
>  > wrote:
>
> Hello everyone !
>
> Documentation points that -
>
> Prometheus on OpenShift Origin is a Technology Preview feature
> only.
>
>
> Does anybody uses it in a production environment ? Is it still
> a preview ?
>
> Regards,
> Aleksandr
>
> 
> --
> T-Systems RUS GmbH
> Point of Production
> Aleksandr Polushkin
> Sr. Configuration Manager
> V.O. 13th line, 14B, 199034, St.Petersburg, Russia
> Email: aleksandr.polush...@t-systems.ru
> 
>  >
>
>


Re: Routing question

2018-02-02 Thread Aleksandar Lazic

Hi Aaron.

-- Original Message --
From: "Aaron Rodriguez" 
To: users@lists.openshift.redhat.com
Sent: 02.02.2018 17:34:06
Subject: Routing question

I have set of secure, HTTPS, microservices that I would like to 
configure with reencrypt routes.   Each of these will serve their API 
under the same hostname.


I have attempted to configure these as re-encrypt, path-based routes 
but the proxy returns 'Application is not available'.


If I disable SSL on the service and run HTTP only, then the path based 
routes work when configured as edge terminated.


Can someone confirm that path-based re-encrypt routes is currently not 
supported?  Assuming that's the case, are there any suggestions for 
achieving this configuration?

I don't see any reason in the current template why it should not work.
https://github.com/openshift/origin/blob/master/images/router/haproxy/conf/haproxy-config.template
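For reference, a path-based re-encrypt route would be declared roughly like this; the hostname, service name, port, and certificate below are placeholders:

```
apiVersion: v1
kind: Route
metadata:
  name: my-api
spec:
  host: api.example.com
  path: /v1
  to:
    kind: Service
    name: my-service
  port:
    targetPort: 8443
  tls:
    termination: reencrypt
    # CA that signed the certificate the backend service presents
    destinationCACertificate: |-
      -----BEGIN CERTIFICATE-----
      [...]
      -----END CERTIFICATE-----
```

With reencrypt, the router terminates the client TLS and opens a new TLS connection to the pod, verifying it against destinationCACertificate.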

Please can you give us more information.
Please take care with any sensitive information, because this is a public
mailing list!


* oc version

# I assume you run the router in the default project
* oc export -n default dc route > the_exported_route.txt

# search for the router pod
* oc get po -n default |egrep router

* oc rsh  -n default  cat haproxy.config > 
the_haproxy.config.txt


# when reencryption set
* oc rsh  -n default  cat os_reencrypt.map > 
the_os_reencrypt.map.txt


# when http only set
* oc rsh  -n default  cat os_http_be.map > 
the_os_http_be.map.txt


tar cfvz router-infos_001 *.txt


Thanks!

Regards
Aleks



Re[2]: Passthrough TLS route not working

2018-01-23 Thread Aleksandar Lazic

Hi.

-- Original Message --
From: "Marc Boorshtein" 
To: "Joel Pearson" 
Cc: "users" 
Sent: 20.01.2018 00:55:28
Subject: Re: Passthrough TLS route not working


Hm, then you lose the ability to do cookie based load balancing

The OpenShift router does this by default.
You can switch it off with the following annotation.
You can also set a cookie name with an annotation per route:

https://docs.openshift.org/latest/architecture/networking/routes.html#route-specific-annotations

oc annotate route  
"haproxy.router.openshift.io/disable_cookies=true"
oc annotate route  
"haproxy.router.openshift.io/cookie_name=MyFunnyCookie"


When you use the PROXY protocol in AWS, you are able to get the real 
client IP.


https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html#proxy-protocol

oc set env dc/router ROUTER_USE_PROXY_PROTOCOL=true

Pay attention: once this is set up, every request to the router 
must be a PROXY protocol request.


Hth

aleks

On Fri, Jan 19, 2018, 5:11 PM Joel Pearson 
 wrote:
In the reference implementation they use Classic ELB load balancers in 
TCP mode:


See this cloud formation template: 
https://github.com/openshift/openshift-ansible-contrib/blob/master/reference-architecture/aws-ansible/playbooks/roles/cloudformation-infra/files/greenfield.json.j2#L763


On Sat, Jan 20, 2018 at 8:55 AM Joel Pearson 
 wrote:
What mode are you running the AWS load balancers in? You probably 
want to run them as TCP load balancers and not HTTP. That way as you 
say the SNI will not get messed with.
On Sat, 20 Jan 2018 at 4:45 am, Marc Boorshtein 
 wrote:
So if I bypass the AWS load balancer, everything works great.  Why 
doesn't HAProxy like the incoming requests?  I'm trying to debug the 
issue by enabling logging with


oc set env dc/router ROUTER_SYSLOG_ADDRESS=127.0.0.1 ROUTER_LOG_LEVEL=debug
But the logging doesn't seem to get there (I also tried a remote server as 
well).  I'm guessing this is probably an SNI configuration issue?


On Fri, Jan 19, 2018 at 11:59 AM Marc Boorshtein 
 wrote:
I'm running origin 3.7 on AWS.  I have an AWS load balancer in 
front of my infrastructure node.  I have a pod listening on TLS on 
port 9090.  The service links to the pod and then I have a route 
that is setup with passthrough tls to the pod, but every time i try 
to access it I get the "Application is not available" screen even 
though looking in the console the service references both the 
router and the pod.  I have deployments that do the same thing but 
will only work with re-encrypt.  Am I missing something?  Is there 
an issue using the AWS load balancer with passthrough?


Thanks



Re: Heptio Contour

2018-01-23 Thread Aleksandar Lazic

Hi.

-- Original message --
From: "Srinivas Naga Kotaru (skotaru)" 
To: "users" 
Sent: 19.01.2018 19:36:18
Subject: Heptio Contour

How is it different from the OpenShift router, and what extra benefits does 
it bring? Can anyone educate me on the differences, or possible use 
cases where it fits into the ecosystem? Is it replacing the ingress controller, 
or will it solve the ingress controller's 244-address limitation?


https://blog.heptio.com/announcing-contour-0-3-37f4aa7bc6f7
As far as I understand it, this tool is *just* a small component built on 
top of the Envoy proxy.
But this doc describes it better: 
https://github.com/heptio/contour/blob/v0.2.1/docs/about.md


You can see it as the equivalent of the OpenShift router, imho.

You can also take a look at HAProxy as an ingress controller, as 
described in this link.


https://www.haproxy.com/blog/haproxy_ingress_controller_for_kubernetes/

I have also created an OpenShift router with HAProxy 1.8, which supports 
h2 for clients and Lua scripting.
Just in case you want to take a look at the h2 features of HAProxy, and 
therefore of OpenShift.


https://hub.docker.com/r/me2digital/openshift-ocp-router-hap18/


--

Srinivas Kotaru


Best Regards
aleks



Openshift router with h2 and print_headers

2018-01-14 Thread Aleksandar Lazic

Hi all.

I have created an OpenShift router image with the new HAProxy 1.8.

https://hub.docker.com/r/me2digital/openshift-ocp-router-hap18/

HAProxy 1.8 has added a lot of features, and one of them is h2 
(=http/2).
I have also added a small Lua script which prints the incoming HTTP 
headers to the log at info level.


Any feedback is welcome.

Best regards
aleks



Re[4]: nginx in front of haproxy ?

2018-01-05 Thread Aleksandar Lazic

Hi Fabio.

-- Original message --
From: "Fabio Martinelli" <fabio.martinelli.1...@gmail.com>
To: "Aleksandar Lazic" <al...@me2digital.eu>
Sent: 04.01.2018 10:34:03
Subject: Re: Re[2]: nginx in front of haproxy ?


Thanks Joel,
that's correct, in this particular case it is not nginx in front of our 
3 haproxy but nginx in front of our 3 web consoles; I got confused 
because in our nginx we have other rules pointing to the 3 haproxy, for 
instance to manage the 'metrics.hosting.wfp.org' case



Thanks Aleksandar,
my inventory sets :

openshift_master_default_subdomain=hosting.wfp.org
openshift_master_cluster_public_hostname={{openshift_master_default_subdomain}}

maybe I had to be more explicit, as you advise, by directly setting:
openshift_master_cluster_public_hostname=hosting.wfp.org

I would do

`openshift_master_cluster_public_hostname=master.{{openshift_master_default_subdomain}}`

The IP for `master.hosting.wfp.org` should be a VIP.

The domain alone is not enough; you need an IP, e.g. 10.11.40.99, if you 
have not set up a wildcard DNS entry for this domain.


https://github.com/openshift/openshift-ansible/blob/master/inventory/hosts.example#L304-L311
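Put together, the relevant inventory lines would look roughly like this. This is a sketch; the `master.` prefix and the VIP 10.11.40.99 are assumptions taken from this thread, not values verified against that cluster:

```ini
[OSEv3:vars]
# wildcard domain for application routes
openshift_master_default_subdomain=hosting.wfp.org
# dedicated public name for the masters (backed by a VIP, e.g. 10.11.40.99),
# kept distinct from the application wildcard domain:
openshift_master_cluster_public_hostname=master.hosting.wfp.org
```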

anyway I'm afraid to run Ansible again because of the 2 GlusterFS we 
run, 1 for general data, 1 for the internal registry ;
installing GlusterFS was the hardest part for us. Is there maybe a way 
to skip the GlusterFS part without modifying the inventory file?

Well, I don't know.
How about showing us your inventory file with the sensitive data removed?


best regards,
Fabio


Best regards
Aleks



Re[2]: nginx in front of haproxy ?

2018-01-03 Thread Aleksandar Lazic

Hi.

@Fabio: When you use the advanced setup you should set 
`openshift_master_cluster_public_hostname` to `hosting.wfp.org` and rerun 
the install playbook.


I suggest taking a look at the now very detailed documentation.

https://docs.openshift.org/latest/install_config/install/advanced_install.html#configuring-cluster-variables
https://docs.openshift.org/latest/install_config/install/advanced_install.html#running-the-advanced-installation-system-container

It's long, but worth reading.

-- Original message --
From: "Joel Pearson" 
To: "Fabio Martinelli" 
Cc: users@lists.openshift.redhat.com
Sent: 03.01.2018 20:59:59
Subject: Re: nginx in front of haproxy ?

It’s also worth mentioning that the console is not haproxy. That is the 
router, which runs on the infrastructure nodes. The console/api server 
runs something else.
The 'something else' is the OpenShift master server or the API servers, 
depending on the setup.


Regards
Aleks



On Wed, 3 Jan 2018 at 1:46 am, Fabio Martinelli 
 wrote:
It was actually necessary to rewrite the master-config.yaml in this other 
way, basically removing all the :8443 strings in the 'public' fields, 
i.e. to make them implicitly appear as :443

[snipp]



the strange PHP error message was due to another service listening on 
the 8443 port on the same host where nginx is running!





Exploiting this post https://github.com/openshift/origin/issues/17456 
our nginx setup got now :


upstream openshift-cluster-webconsole {
ip_hash;
server wfpromshap21.global.wfp.org:8443;
server wfpromshap22.global.wfp.org:8443;
server wfpromshap23.global.wfp.org:8443;
}

server {
listen   10.11.40.99:80;
server_name hosting.wfp.org;
return 301 https://$server_name$request_uri;
}


server {
listen   10.11.40.99:443;
server_name hosting.wfp.org;

access_log /var/log/nginx/hosting-console-access.log;
#access_log off;
error_log  /var/log/nginx/hosting-console-error.log  crit;

include /data/nginx/includes.d/ssl-wfp.conf;

include /data/nginx/includes.d/error.conf;

include /data/nginx/includes.d/proxy.conf;

proxy_set_header Host $host;

location / {
proxy_pass https://openshift-cluster-webconsole;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}

}

and it seems to work, nicely masking the 3 web consoles.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users



Re[2]: External network for one project

2018-01-01 Thread Aleksandar Lazic


-- Original message --
From: "Jacek Suchenia" <jacek.suche...@gmail.com>
To: "Aleksandar Lazic" <al...@me2digital.eu>
Cc: "Openshift Users" <users@lists.openshift.redhat.com>
Sent: 30.12.2017 08:57:14
Subject: Re: External network for one project


Aleksandar

Thank you for your answer. I configured the ingress IP range and it's 
working fine for services where NAT is fine; however, I have a case 
where I want to deliver an external IP directly to a pod.

Well, I think you will then need the NodePort solution.

https://docs.openshift.org/latest/dev_guide/expose_service/expose_internal_ip_nodeport.html#getting-traffic-into-cluster-nodeport
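A minimal NodePort service sketch for that case (the names, ports and the selector below are placeholders, not values from this thread):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport          # placeholder name
spec:
  type: NodePort
  selector:
    app: my-app                  # placeholder pod label
  ports:
  - port: 8080                   # service port inside the cluster
    targetPort: 8080             # container port on the pod
    nodePort: 30080              # must fall inside the cluster's node port range
```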

I have added some pictures to my blog post, which also describes the 
ingress setup:


https://www.me2digital.com/blog/2017/01/12-openshift-ingress-setup/

You can share the external IP across nodes with ipfailover.

Hth
Aleks


Jacek

On 30.12.2017 1:38 AM, "Aleksandar Lazic" <al...@me2digital.eu> wrote:

Hi

-- Original message --
From: "Jacek S." <jacek.suchenia+opensh...@gmail.com>
To: "Openshift Users" <users@lists.openshift.redhat.com>
Sent: 27.12.2017 11:58:13
Subject: External network for one project


Hi

I'm using Openshift 3.6 on dedicated, bare-metal machines (with 
openshift-ovs-multitenant networking plugin).
I'd like to run in my cluster 3rd-party images that require an external 
IP address attached (I'm not able to configure the external IP). Is there 
a way to define a secondary cluster network, not NAT it, and attach 
it to some pods in one particular project?
Maybe you can use `ExternalIPNetworkCIDRs`, but how does the 
routing to your cluster work with the external IP, which you can't 
change?


https://docs.openshift.org/latest/admin_guide/tcp_ingress_external_ports.html
https://docs.openshift.org/latest/install_config/master_node_configuration.html#master-node-config-network-config



Regards
Jacek


Best regards
aleks



Re: External network for one project

2017-12-29 Thread Aleksandar Lazic

Hi

-- Original message --
From: "Jacek S." 
To: "Openshift Users" 
Sent: 27.12.2017 11:58:13
Subject: External network for one project


Hi

I'm using Openshift 3.6 on dedicated, bare-metal machines (with 
openshift-ovs-multitenant networking plugin).
I'd like to run in my cluster 3rd-party images that require an external IP 
address attached (I'm not able to configure the external IP). Is there a 
way to define a secondary cluster network, not NAT it, and attach it to 
some pods in one particular project?
Maybe you can use `ExternalIPNetworkCIDRs`, but how does the routing 
to your cluster work with the external IP, which you can't change?


https://docs.openshift.org/latest/admin_guide/tcp_ingress_external_ports.html
https://docs.openshift.org/latest/install_config/master_node_configuration.html#master-node-config-network-config


Regards
Jacek


Best regards
aleks



Re: router certificate question

2017-12-07 Thread Aleksandar Lazic

Hi.


-- Original message --
From: "Feld, Michael (IMS)" 
To: "users@lists.openshift.redhat.com" 
Sent: 06.12.2017 14:29:27
Subject: router certificate question


Hey all,



I have a cluster where we use an external HAProxy to terminate SSL and 
send traffic to the routers in the OpenShift cluster, so the routes 
within the cluster do not use TLS. It looks like when this cluster was 
setup, default certificates were given to the routers and are expiring 
soon (I get this when running the ansible easy-mode.yaml):




"router": [

{

  "cert_cn": "OU=Domain Control Validated:, 
CN=*..com:, DNS:*. .com, DNS: .com",


  "days_remaining": 11,

  "expiry": "2017-12-17 20:13:24",

  "health": "warning",

  "path": "/api/v1/namespaces/default/secrets/router-certs",

  "serial": ,

  "serial_hex": ""

}

  ]


My question is, is it OK to let this expire without taking any action? 
How can I safely remove the default certificates to remove the warnings 
in the future?

Well, this depends on your OpenShift version.
The default router has some secrets where the certs are stored, or they 
are in an environment variable.


Please provide us with the output of the following commands:

oc version
oc get secrets -n default # I assume you use the router in the default 
project

oc env dc/router -n default --list
oc describe dc router -n default
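To see when the cert actually expires without waiting for the health check, you can pull the cert out of the `router-certs` secret shown above (e.g. via `oc get secret router-certs -n default -o yaml` and base64-decoding the cert entry; the exact key name depends on how the secret was created) and inspect it with `openssl x509`. A self-contained sketch, using a hypothetical throwaway 10-day cert in place of the extracted one:

```shell
# Generate a throwaway self-signed cert valid for 10 days to stand in
# for the extracted router cert.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" -days 10 \
    -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null
# Show the expiry date.
openssl x509 -in /tmp/demo.crt -noout -enddate
# -checkend N exits 0 if the cert is still valid N seconds from now.
openssl x509 -in /tmp/demo.crt -noout -checkend 86400 && echo "valid tomorrow"
openssl x509 -in /tmp/demo.crt -noout -checkend $((30*24*3600)) || echo "expires within 30 days"
```

Run against the real router cert, the `-checkend` exit codes make this easy to wire into a cron-based warning.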



Thanks
Mike


Best regards
aleks



Re: Service Catalog and Openshift Origin 3.7

2017-12-05 Thread Aleksandar Lazic

Hi.

-- Original message --
From: "Marcello Lorenzi" 
To: "users" 
Sent: 05.12.2017 16:55:22
Subject: Service Catalog and Openshift Origin 3.7


Hi All,
we tried to install the newer version of Openshift Origin 3.7 but 
during the playbook execution we noticed this error:


FAILED - RETRYING: wait for api server to be ready (120 retries left).

The issue seems to be related to the service catalog, but we don't know 
where this is running.

Why do you assume this?
Please share some more data, such as:

* inventory file
* ansible version
* playbook version
* OS
* some logs


Has someone else noticed this issue?

Thanks,
Marcello


Regards
Aleks



Re: login debug?

2017-12-02 Thread Aleksandar Lazic

Hi.

-- Original message --
From: "Brigman, Larry" 
To: "users@lists.openshift.redhat.com" 
Sent: 30.11.2017 22:13:16
Subject: login debug?

I had LDAP auth working with Active Directory.  I didn’t like the id 
mapping and decided to change it.


I wiped out the old identities from the system and did a restart of the 
master service.


Now I cannot login.  Reverted my change on id attribute and restarted.  
Still cannot login.  No errors anywhere.
Please increase the log level, for example to 9, in 
"/etc/sysconfig/master*" and take a look at the logs for LDAP 
messages.
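As a sketch, the change usually looks like this (the exact sysconfig file name depends on your install — on origin it is typically origin-master, split into -api/-controllers on multi-master setups):

```ini
# /etc/sysconfig/origin-master  (name varies by install)
OPTIONS=--loglevel=9
```

followed by a restart of the master service so the new log level takes effect.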


I assume that some of the messages match the code here:
https://github.com/openshift/origin/tree/master/pkg/auth/ldaputil



I have a second identity provider using htpasswd which still works as 
expected.


oc version

oc v3.6.1+008f2d5

kubernetes v1.6.1+5115d708d7

features: Basic-Auth GSSAPI Kerberos SPNEGO



Server https://lab-stack1.lab.c-cor.com:8443

openshift v3.6.1+008f2d5

kubernetes v1.6.1+5115d708d7



This is similar to https://github.com/openshift/origin/issues/14506 
 but I did delete 
both the user and identity.


Also new users from LDAP aren’t being allowed in either.




Re[2]: Openshift router per project

2017-10-28 Thread Aleksandar Lazic

Hi.

In addition to Łukasz's answer, I strongly suggest using a dedicated 
project for the different routers.

The reason for this suggestion is that you can define a node selector 
for the namespace, and therefore the pods for these routers will run only 
on the UAT nodes.


For example:

oc adm new-project uat-router --node-selector='router=uat'
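After that, a second router bound to those nodes can be created in the new project. A sketch (the node name is hypothetical, and the exact `oc adm router` flags may vary by version):

```shell
# label the UAT nodes so the project node selector matches them
oc label node uat-node-1.example.com router=uat   # hypothetical node name

# create a second router that runs only on nodes labelled router=uat
oc adm router uat-router -n uat-router \
    --selector='router=uat' \
    --replicas=2
```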

Regards
Aleks

-- Original message --
From: "Łukasz Strzelec" 
To: "Marcello Lorenzi" 
Cc: "users" 
Sent: 24.10.2017 15:06:52
Subject: Re: Openshift router per project


Hi :)
You can deploy an additional router and set up the domain that it will 
serve. We took a similar approach: we have internal 
applications, and external ones (internet access and exposure to the 
internet). You can assign a route to a particular namespace (project) and 
operate a different domain :)


I based this on the following information: 
https://docs.openshift.org/latest/install_config/router/default_haproxy_router.html#creating-router-shards


I also use affinity/anti-affinity and additional labels.

Best regards

2017-10-24 10:42 GMT+02:00 Marcello Lorenzi :

Hi,
we're evaluating an installation of Openshift Origin 3.6 with some 
app nodes present on a VLAN (development) and some other app nodes on 
a public VLAN (UAT), and we would use labels and node selectors to 
deploy the pods for the 2 different projects on their specified nodes. 
The problem is related to the routes, because we can't install 
different routers per project on different VLANs. Is it possible to 
solve this problem without the installation of a new cluster per 
environment?


Thanks
Marcello







--
Ł.S.





Re: Origin router and X-Forwarded-For

2017-10-23 Thread Aleksandar Lazic
Hi Marcello.

on Wednesday, 18 October 2017 at 10:32 was written:

> Hi Aleks,
> I already configured the 4 values and if I miss the intermediate CA
> into the destinationCACertificate field the Origin GUI shows to me a
> warning related to the certificate. The export of the command is :

Are there any errors in the router logs?

oc logs -n dev-shared  |egrep callcentergw

> apiVersion: v1
>   
> kind: Route
>   
> metadata:
>   
>   creationTimestamp: null
>   
>   name: callcentergw-dev-external
>   
> spec:
>   
>   host: callcenter.fineco.it
>   
>   port:
>   
>     targetPort: 443-tcp
>   
>   tls:
>   
>     caCertificate: |-
>   
>   -BEGIN CERTIFICATE-
>   
> ….
>   
>   -END CERTIFICATE-
>   
>   -BEGIN CERTIFICATE-
>   
> …
>   
>   -END CERTIFICATE-
>   
>     certificate: |-
>   
>   -BEGIN CERTIFICATE-
>   
> …
>   
>   -END CERTIFICATE-
>   
>     destinationCACertificate: |-
>   
>   -BEGIN CERTIFICATE-
>   
> …
>   
>   -END CERTIFICATE-
>   
>     key: |-
>   
>   -BEGIN RSA PRIVATE KEY-
>   
> …
>   
>   -END RSA PRIVATE KEY-
>   
>     termination: reencrypt
>   
>   to:
>   
>     kind: Service
>   
>     name: callcentergw-dev
>   
>     weight: 100
>   
>   wildcardPolicy: None
>   
> status:
>   
>   ingress:
>   
>   - conditions:
>   
>     - lastTransitionTime: 2017-10-18T07:54:22Z
>   
>   status: "True"
>   
>   type: Admitted
>   
>     host: callcenter.test.local
>   
>     routerName: router
>   
>     wildcardPolicy: None




> The second command results are the same in insecure and passing the
> cafile formed by intermediate + root CA certificates.




> * About to connect() to callcenter.test.local port 443 (#0)

> *   Trying 192.168.10.10...

> * Connected to callcenter.test.local (192.168.10.10) port 443 (#0)

> * Initializing NSS with certpath: sql:/etc/pki/nssdb

> *   CAfile: /tmp/new-cac.crt

>   CApath: none

> * SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256

> * Server certificate:

> *   subject:
> E=my.test.local,CN=callcenter.test.local,OU=test,O=Local=Milan,ST=Italy,C=IT

> *   start date: Mar 31 11:54:54 2016 GMT

> *   expire date: Mar 31 11:54:54 2018 GMT

> *   common name: callcenter.test.local

> *   issuer: CN=Local CA Subordinate,DC=milano,DC=test,DC=local,DC=it

>> GET / HTTP/1.1

>> User-Agent: curl/7.29.0

>> Host: callcenter.test.local

>> Accept: */*

>> 

> < HTTP/1.1 302 Found

> < Date: Wed, 18 Oct 2017 08:29:17 GMT

> < Server: Apache/2.4.28 (Unix) OpenSSL/1.0.2k-fips

> < Location: https://callcenter.test.local/home

>  < Content-Length: 228

>   

> < Content-Type: text/html; charset=iso-8859-1




> Marcello









> On Tue, Oct 17, 2017 at 11:21 PM, Aleksandar Lazic <al...@me2digital.eu> 
> wrote:

> Hi Marcello.

>  on Tuesday, 17 October 2017 at 09:11 was written:

 >> Hi,
 >> I'm using a re-encrypt configuration to preserve the x-forwarded-for 
 >> information. The configuration is:
 >>
 >> Name:                   callcentergw-dev-external
 >> Namespace:              dev-shared
 >> Created:                17 hours ago
 >> Labels:                 
 >> Annotations:            
 >> Requested Host:         callcenter.test.local
 >>                           exposed on router router 17 hours ago
 >> Path:                   
 >> TLS Termination:        reencrypt
 >> Insecure Policy:        Redirect
 >> Endpoint Port:          443-tcp

 >> Service:        callcentergw-dev
 >> Weight:         100 (100%)
 >> Endpoints:      10.131.0.138:443, 10.131.0.138:80

> I don't see the destinationCACertificate; maybe it's shown with export.

>  oc export route -n dev-shared callcentergw-dev-external

>  You can add all four values in the GUI (=> web interface) under the
>  "Security" settings. There is a section "Certificates".

>  key: [as in edge termination]
>  certificate: [as in edge termination]
>  caCertificate: [as in edge termination]
>  destinationCACertificate: ...

>  Please can you also show us the output of

>  curl -vk callcenter.test.local

 >> Marcello

>  Best Regards
>  Aleks


 >> On 16 Oct 2017 20:45, "Aleksandar Lazic" <al...@me2digital.eu> wrote:

 >> Hi Marcello.

>>  on Monday, 16 October 2017 at 15:23 was 

Re: Network issues with openvswitch

2017-10-23 Thread Aleksandar Lazic


Hi Yu Wei.

Ah that's a good point.

Have you seen this doc?
https://access.redhat.com/documentation/en-us/reference_architectures/2017/html/deploying_red_hat_openshift_container_platform_3.4_on_red_hat_openstack_platform_10/

Regards
Aleks

on Monday, 23 October 2017 at 19:09 was written:






My environment is set up on VMs provided by OpenStack.
It seemed that the nodes that were not working were created from a resource pool in which OpenStack has a different version of OVS.
As I have destroyed the environment and want to try again, I couldn't get more information now.

Thanks,
Jared, (韦煜)
Software developer
Interested in open source software, big data, Linux
From: Aleksandar Lazic <al...@me2digital.eu>
Sent: Tuesday, October 24, 2017 12:18:55 AM
To: Yu Wei; users@lists.openshift.redhat.com
Subject: Re: Network issues with openvswitch 
 
Hi Yu Wei.

Interesting issue.
What's the difference between the nodes where the connection works and the ones where it does not?

Please share some more information.

I assume this is on AWS; is UDP port 4789 open from everywhere, as described in the doc?
https://docs.openshift.org/3.6/install_config/install/prerequisites.html#prereq-network-access

and of course the other ports also.

oc get nodes
oc describe svc -n default docker-registry

Have you rebooted the non-working nodes?
Are there errors in the journald logs?

Best Regards
Aleks

on Monday, 23 October 2017 at 04:38 was written:





Hi Aleks,

I setup openshift origin cluster with 1lb + 3 masters + 5 nodes.
In some nodes, pods running on them couldn't be reached by other nodes or pods running on other nodes. It indicates "no route to host". 
[root@host-10-1-130-32 ~]# curl -kv docker-registry.default.svc.cluster.local:5000
* About to connect() to docker-registry.default.svc.cluster.local port 5000 (#0)
*   Trying 172.30.22.28...
* No route to host
* Failed connect to docker-registry.default.svc.cluster.local:5000; No route to host
* Closing connection 0
curl: (7) Failed connect to docker-registry.default.svc.cluster.local:5000; No route to host

And other nodes works fine.
In my previous mail, host name of node is host-10-1-130-32.
Output of "ifconfig tun0" is as below,
[root@host-10-1-130-32 ~]# ifconfig tun0
tun0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
       inet 10.130.2.1  netmask 255.255.254.0  broadcast 0.0.0.0
       inet6 fe80::cc50:3dff:fe07:9ea2  prefixlen 64  scopeid 0x20
       ether ce:50:3d:07:9e:a2  txqueuelen 1000  (Ethernet)
       RX packets 97906  bytes 8665783 (8.2 MiB)
       RX errors 0  dropped 0  overruns 0  frame 0
       TX packets 163379  bytes 27405744 (26.1 MiB)
       TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

I also tried to capture packet via tcpdump, and found some stuff as following, 
10.1.130.32.58147 > 10.1.236.92.4789: [no cksum] VXLAN, flags [I] (0x08), vni 0
ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.128.1.45 tell 10.130.2.1, length 28
       0x:  04f9 38ae 659b fa16 3e6c dd90 0800 4500  ..8.e...>lE.
       0x0010:  004e 543c 4000 4011 63e4 0a01 8220 0a01  .NT<@.@.c...
       0x0020:  ec5c e323 12b5 003a  0800    .\.#...:
       0x0030:      ce50 3d07 9ea2 0806  .P=.
       0x0040:  0001 0800 0604 0001 ce50 3d07 9ea2 0a82  .P=.
       0x0050:  0201    0a80 012d            ...-
  25  00:22:47.214387 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.1.130.2 tell 10.1.130.45, length 46
       0x:     fa16 3e5a a862 0806 0001  >Z.b
       0x0010:  0800 0604 0001 fa16 3e5a a862 0a01 822d  >Z.b...-
       0x0020:     0a01 8202     
       0x0030:                   
  26  00:22:47.258344 IP6 (hlim 255, next-header ICMPv6 (58) payload length: 24) :: > ff02::1:ffa1:1fbb: [icmp6 sum ok] ICMP6, neighbor solicitation, length 24, who has fe80::824:c2ff:fea1:1fbb
       0x:   ffa1 1fbb 0a24 c2a1 1fbb 86dd 6000  33.$..`.
       0x0010:   0018 3aff       :...
       0x0020:     ff02      
       0x0030:  0001 ffa1 1fbb 8700 724a   fe80  rJ..
       0x0040:     0824 c2ff fea1 1fbb       ...$..
  27  00:22:47.282619 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.1.130.2 tell 10.1.130.73, length 46
       0x:     fa16 3ec4 a9be 0806 0001  >...
       0x0010:  0800 0604 0001 fa16 3ec4 a9be 0a01 8249  >..I
       0x0020:     0a01 8202     
       0x0030:                   

I didn't understand why the IP marked in red above were involved.

Thanks,
Jared, (韦

Re: Network issues with openvswitch

2017-10-23 Thread Aleksandar Lazic


Hi Yu Wei.

Interesting issue.
What's the difference between the nodes where the connection works and the ones where it does not?

Please share some more information.

I assume this is on AWS; is UDP port 4789 open from everywhere, as described in the doc?
https://docs.openshift.org/3.6/install_config/install/prerequisites.html#prereq-network-access

and of course the other ports also.

oc get nodes
oc describe svc -n default docker-registry

Have you rebooted the non-working nodes?
Are there errors in the journald logs?

Best Regards
Aleks

on Monday, 23 October 2017 at 04:38 was written:






Hi Aleks,

I setup openshift origin cluster with 1lb + 3 masters + 5 nodes.
In some nodes, pods running on them couldn't be reached by other nodes or pods running on other nodes. It indicates "no route to host". 
[root@host-10-1-130-32 ~]# curl -kv docker-registry.default.svc.cluster.local:5000
* About to connect() to docker-registry.default.svc.cluster.local port 5000 (#0)
*   Trying 172.30.22.28...
* No route to host
* Failed connect to docker-registry.default.svc.cluster.local:5000; No route to host
* Closing connection 0
curl: (7) Failed connect to docker-registry.default.svc.cluster.local:5000; No route to host

And other nodes works fine.
In my previous mail, host name of node is host-10-1-130-32.
Output of "ifconfig tun0" is as below,
[root@host-10-1-130-32 ~]# ifconfig tun0
tun0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.130.2.1  netmask 255.255.254.0  broadcast 0.0.0.0
        inet6 fe80::cc50:3dff:fe07:9ea2  prefixlen 64  scopeid 0x20
        ether ce:50:3d:07:9e:a2  txqueuelen 1000  (Ethernet)
        RX packets 97906  bytes 8665783 (8.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 163379  bytes 27405744 (26.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

I also tried to capture packet via tcpdump, and found some stuff as following, 
10.1.130.32.58147 > 10.1.236.92.4789: [no cksum] VXLAN, flags [I] (0x08), vni 0
ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.128.1.45 tell 10.130.2.1, length 28
        0x:  04f9 38ae 659b fa16 3e6c dd90 0800 4500  ..8.e...>lE.
        0x0010:  004e 543c 4000 4011 63e4 0a01 8220 0a01  .NT<@.@.c...
        0x0020:  ec5c e323 12b5 003a  0800    .\.#...:
        0x0030:      ce50 3d07 9ea2 0806  .P=.
        0x0040:  0001 0800 0604 0001 ce50 3d07 9ea2 0a82  .P=.
        0x0050:  0201    0a80 012d            ...-
   25  00:22:47.214387 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.1.130.2 tell 10.1.130.45, length 46
        0x:     fa16 3e5a a862 0806 0001  >Z.b
        0x0010:  0800 0604 0001 fa16 3e5a a862 0a01 822d  >Z.b...-
        0x0020:     0a01 8202     
        0x0030:                   
   26  00:22:47.258344 IP6 (hlim 255, next-header ICMPv6 (58) payload length: 24) :: > ff02::1:ffa1:1fbb: [icmp6 sum ok] ICMP6, neighbor solicitation, length 24, who has fe80::824:c2ff:fea1:1fbb
        0x:   ffa1 1fbb 0a24 c2a1 1fbb 86dd 6000  33.$..`.
        0x0010:   0018 3aff       :...
        0x0020:     ff02      
        0x0030:  0001 ffa1 1fbb 8700 724a   fe80  rJ..
        0x0040:     0824 c2ff fea1 1fbb       ...$..
   27  00:22:47.282619 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.1.130.2 tell 10.1.130.73, length 46
        0x:     fa16 3ec4 a9be 0806 0001  >...
        0x0010:  0800 0604 0001 fa16 3ec4 a9be 0a01 8249  >..I
        0x0020:     0a01 8202     
        0x0030:                   

I didn't understand why the IP marked in red above were involved.

Thanks,
Jared, (韦煜)
Software developer
Interested in open source software, big data, Linux
From: Aleksandar Lazic <al...@me2digital.eu>
Sent: Monday, October 23, 2017 2:34:13 AM
To: Yu Wei; users@lists.openshift.redhat.com; d...@lists.openshift.redhat.com
Subject: Re: Network issues with openvswitch 
 
Hi Yu Wei.

on Sunday, 22 October 2017 at 19:13 was written:

> Hi,

> I execute following command on work node of openshift origin cluster 3.6.
>
> [root@host-10-1-130-32 ~]# traceroute docker-registry.default.svc
> traceroute to docker-registry.default.svc (172.30.22.28), 30 hops max, 60 byte packets
>  1  bogon (10.130.2.1)  3005.715 ms !H  3005.682 ms !H  3005.664 ms !H
>  It seemed the content marked in red should be the hostname of the worker node.
>  How could I debug such issue? Where to start

Re: Network issues with openvswitch

2017-10-22 Thread Aleksandar Lazic
Hi Yu Wei.

on Sunday, 22 October 2017 at 19:13 was written:

> Hi,

> I execute following command on work node of openshift origin cluster 3.6.
>
> [root@host-10-1-130-32 ~]# traceroute docker-registry.default.svc
> traceroute to docker-registry.default.svc (172.30.22.28), 30 hops max, 60 
> byte packets
>  1  bogon (10.130.2.1)  3005.715 ms !H  3005.682 ms !H  3005.664 ms !H
>  It seemed the content marked in red should be the hostname of the worker node.
>  How could I debug such issue? Where to start?

What's the hostname of the node?
I'm not sure what you are trying to debug, or what problem you are 
trying to solve.

> Thanks,

> Jared, (韦煜)
>  Software developer
>  Interested in open source software, big data, Linux

-- 
Best Regards
Aleks




Re: Origin router and X-Forwarded-For

2017-10-17 Thread Aleksandar Lazic
Hi Marcello.

on Tuesday, 17 October 2017 at 09:11 was written:

> Hi,
> I'm using a re-encrypt configuration to preserve the x-forwarded-for 
> information. The configuration is:
>
> Name:                   callcentergw-dev-external
> Namespace:              dev-shared
> Created:                17 hours ago
> Labels:                 
> Annotations:            
> Requested Host:         callcenter.test.local
>                           exposed on router router 17 hours ago
> Path:                   
> TLS Termination:        reencrypt
> Insecure Policy:        Redirect
> Endpoint Port:          443-tcp

> Service:        callcentergw-dev
> Weight:         100 (100%)
> Endpoints:      10.131.0.138:443, 10.131.0.138:80

I don't see the destinationCACertificate; maybe it's shown with export:

oc export route -n dev-shared callcentergw-dev-external

You can add all four values in the GUI (web interface) under the 
"Security" settings; there is a "Certificates" section.

key: [as in edge termination]
certificate: [as in edge termination]
caCertificate: [as in edge termination]
destinationCACertificate: ...

Could you please also show us the output of:

curl -vk callcenter.test.local

> Marcello

Best Regards
Aleks

> Il 16 Ott 2017 20:45, "Aleksandar Lazic" <al...@me2digital.eu> ha scritto:

> Hi Marcello.

>  on Montag, 16. Oktober 2017 at 15:23 was written:

 >> Hi,
 >> I have tried it and it worked fine but the problem is override the
 >> default wildcard certificate and configure a different certificate,
 >> because it's not possible to configure the intermediate CA chain into
 >> the admin panel. I tried to configure the CA cert with the root CA and
 >> the subordinate CA files and the router is ok but if I navigate the
 >> new route I received a security error.

>  do you use reencrypted or passthrough route

>  please can you show us the output of.

>  oc get route -n your-project
>  oc describe route -n your-project your-route

>  Best Regards
>  Aleks


 >> Marcello

 >> On Thu, Oct 12, 2017 at 1:14 PM, Aleksandar Lazic <al...@me2digital.eu> 
 >> wrote:

 >>
 >> Hi Marcello Lorenzi.

 >>  have you used -servername in s_client?

 >>  The ssl solution is based on sni (
 >> https://en.wikipedia.org/wiki/Server_Name_Indication )

 >> Regards
 >>  Aleks

 >> on Donnerstag, 12. Oktober 2017 at 13:02 was written:



 >> Hi All,
 >>  thanks for the response and we checked the configuration. If I tried
 >> to check the certificated propagate with the passthrough configuration
 >> with openssl s_client  and the certificate provided is the wilcard
 >> domain certificate and not the pod itself. Is it normal?

 >>  Thanks,
 >>  Marcello

 >>  On Thu, Oct 12, 2017 at 10:34 AM, Aleksandar Lazic <al...@me2digital.eu> 
 >>wrote:

 >> Hi.

 >>  Additionally to joel suggestion can you also use reencrypted route
 >> if you want to talk encrypted with apache webserver.

 >> https://docs.openshift.org/3.6/architecture/networking/routes.html#re-encryption-termination

 >> Regards
 >>  Aleks

 >>  on Mittwoch, 11. Oktober 2017 at 15:51 was written:


 >> Sorry I meant it say, it *cannot modify the http request in any way.
 >>  On Thu, 12 Oct 2017 at 12:51 am, Joel Pearson
 >> <japear...@agiledigital.com.au> wrote:

 >> Hi Marcelo,

 >>  If you use Passthrough termination then that means that OpenShift
 >> cannot add the X-Forwarded-For header, because as the name suggests it
 >> is just passing the packets through and because it’s encrypted it can
 >> modify the http request in anyway.

 >>  If you want X-Forwarded-For you will need to switch to Edge termination.

 >>  Thanks,

 >>  Joel
 >>  On Thu, 12 Oct 2017 at 12:27 am, Marcello Lorenzi <cell...@gmail.com> 
 >>wrote:

 >> Hi All,
 >>  we tried to configure a route on Origin 3.6 with a Passthrough
 >> termination to an Apache webserver present into a single POD but we
 >> can't notice the X-Forwarded-Header to Apache logs. We tried to capture it 
 >> without success.

 >>  Could you confirm if there are some method to extract it from the POD side?

 >>  Thanks,
 >> Marcello
 >> ___
 >>  users mailing list
 >> users@lists.openshift.redhat.com
 >> http://lists.openshift.redhat.com/openshiftmm/listinfo/users--
 >> Kind Regards,

 >>  Joel Pearson
 >>  Agile Digital | Senior Software Consultant

 >>  Love Your Software™ | ABN 98 106 361 273
 >>  p: 1300 858 277 | m: 0405 417 843 | w: agiledigital.com.au--
 >> Kind Regards,

 >>  Joel Pearson
 >>  Agile Digital | Senior Software Consultant

 >>  Love Your Software™ | ABN 98 106 361 273
 >>  p: 1300 858 277 | m: 0405 417 843 | w: agiledigital.com.au




Re: Origin router and X-Forwarded-For

2017-10-16 Thread Aleksandar Lazic
Hi Marcello.

on Montag, 16. Oktober 2017 at 15:23 was written:

> Hi,
> I have tried it and it worked fine but the problem is override the
> default wildcard certificate and configure a different certificate,
> because it's not possible to configure the intermediate CA chain into
> the admin panel. I tried to configure the CA cert with the root CA and
> the subordinate CA files and the router is ok but if I navigate the
> new route I received a security error.

Do you use a re-encrypt or a passthrough route?

Could you please show us the output of:

oc get route -n your-project
oc describe route -n your-project your-route

Best Regards
Aleks


> Marcello

> On Thu, Oct 12, 2017 at 1:14 PM, Aleksandar Lazic <al...@me2digital.eu> wrote:

>   
> Hi Marcello Lorenzi.

>  have you used -servername in s_client?

>  The ssl solution is based on sni (
> https://en.wikipedia.org/wiki/Server_Name_Indication )

> Regards
>  Aleks

> on Donnerstag, 12. Oktober 2017 at 13:02 was written:



> Hi All,
>  thanks for the response and we checked the configuration. If I tried
> to check the certificated propagate with the passthrough configuration
> with openssl s_client  and the certificate provided is the wilcard
> domain certificate and not the pod itself. Is it normal?

>  Thanks,
>  Marcello

>  On Thu, Oct 12, 2017 at 10:34 AM, Aleksandar Lazic <al...@me2digital.eu> 
> wrote:

> Hi.

>  Additionally to joel suggestion can you also use reencrypted route
> if you want to talk encrypted with apache webserver.

> https://docs.openshift.org/3.6/architecture/networking/routes.html#re-encryption-termination

> Regards
>  Aleks

>  on Mittwoch, 11. Oktober 2017 at 15:51 was written:


> Sorry I meant it say, it *cannot modify the http request in any way. 
>  On Thu, 12 Oct 2017 at 12:51 am, Joel Pearson
> <japear...@agiledigital.com.au> wrote:

> Hi Marcelo,

>  If you use Passthrough termination then that means that OpenShift
> cannot add the X-Forwarded-For header, because as the name suggests it
> is just passing the packets through and because it’s encrypted it can
> modify the http request in anyway. 

>  If you want X-Forwarded-For you will need to switch to Edge termination.

>  Thanks,

>  Joel
>  On Thu, 12 Oct 2017 at 12:27 am, Marcello Lorenzi <cell...@gmail.com> wrote:

> Hi All,
>  we tried to configure a route on Origin 3.6 with a Passthrough
> termination to an Apache webserver present into a single POD but we
> can't notice the X-Forwarded-Header to Apache logs. We tried to capture it 
> without success.

>  Could you confirm if there are some method to extract it from the POD side?

>  Thanks,
> Marcello
> ___
>  users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users-- 
> Kind Regards,

>  Joel Pearson
>  Agile Digital | Senior Software Consultant

>  Love Your Software™ | ABN 98 106 361 273
>  p: 1300 858 277 | m: 0405 417 843 | w: agiledigital.com.au-- 
> Kind Regards,

>  Joel Pearson
>  Agile Digital | Senior Software Consultant

>  Love Your Software™ | ABN 98 106 361 273
>  p: 1300 858 277 | m: 0405 417 843 | w: agiledigital.com.au




Re: Origin router and X-Forwarded-For

2017-10-12 Thread Aleksandar Lazic


Hi Marcello Lorenzi.

Have you used -servername with s_client?

The SSL routing is based on SNI ( https://en.wikipedia.org/wiki/Server_Name_Indication ), so without -servername the router returns the default (wildcard) certificate.
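A quick way to see what this looks like in practice — a minimal local sketch of inspecting a certificate's subject, the same way you would inspect the certificate the router presents (the hostname is a placeholder from this thread, and the cert here is a throwaway self-signed one, not anything from Marcello's setup):

```shell
# Against a live router you would send the SNI name explicitly, e.g.:
#   openssl s_client -connect <router>:443 -servername callcenter.test.local </dev/null
# Locally, generate a throwaway self-signed cert and inspect its subject:
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout /tmp/sni-demo.key -out /tmp/sni-demo.crt -days 1 \
    -subj "/CN=callcenter.test.local" 2>/dev/null

# Print the subject, same as you would for the cert returned by s_client
openssl x509 -in /tmp/sni-demo.crt -noout -subject
```

If the subject printed by s_client is still the wildcard domain, the router did not match the SNI name to your route's certificate.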

Regards
Aleks

on Donnerstag, 12. Oktober 2017 at 13:02 was written:





Hi All,
thanks for the response and we checked the configuration. If I try to check the certificate propagated with the passthrough configuration using openssl s_client, the certificate provided is the wildcard domain certificate and not the pod's own. Is that normal?

Thanks,
Marcello

On Thu, Oct 12, 2017 at 10:34 AM, Aleksandar Lazic <al...@me2digital.eu> wrote:




Hi.

In addition to Joel's suggestion, you can also use a re-encrypt route if you want to talk encrypted with the Apache webserver.

https://docs.openshift.org/3.6/architecture/networking/routes.html#re-encryption-termination

Regards
Aleks

on Mittwoch, 11. Oktober 2017 at 15:51 was written:





Sorry I meant it say, it *cannot modify the http request in any way. 
On Thu, 12 Oct 2017 at 12:51 am, Joel Pearson <japear...@agiledigital.com.au> wrote:




Hi Marcelo,

If you use Passthrough termination then that means that OpenShift cannot add the X-Forwarded-For header, because as the name suggests it is just passing the packets through and because it’s encrypted it can modify the http request in anyway. 

If you want X-Forwarded-For you will need to switch to Edge termination. 

Thanks,

Joel
On Thu, 12 Oct 2017 at 12:27 am, Marcello Lorenzi <cell...@gmail.com> wrote:




Hi All,
we tried to configure a route on Origin 3.6 with a Passthrough termination to an Apache webserver present into a single POD but we can't notice the X-Forwarded-Header to Apache logs. We tried to capture it without success.

Could you confirm if there are some method to extract it from the POD side?

Thanks,
Marcello
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


-- 
Kind Regards,

Joel Pearson
Agile Digital | Senior Software Consultant

Love Your Software™ | ABN 98 106 361 273
p: 1300 858 277 | m: 0405 417 843 | w: agiledigital.com.au


-- 
Kind Regards,

Joel Pearson
Agile Digital | Senior Software Consultant

Love Your Software™ | ABN 98 106 361 273
p: 1300 858 277 | m: 0405 417 843 | w: agiledigital.com.au













Re: Origin router and X-Forwarded-For

2017-10-12 Thread Aleksandar Lazic


Hi.

In addition to Joel's suggestion, you can also use a re-encrypt route if you want to talk encrypted with the Apache webserver.

https://docs.openshift.org/3.6/architecture/networking/routes.html#re-encryption-termination

Regards
Aleks

on Mittwoch, 11. Oktober 2017 at 15:51 was written:





Sorry I meant it say, it *cannot modify the http request in any way. 
On Thu, 12 Oct 2017 at 12:51 am, Joel Pearson  wrote:




Hi Marcelo,

If you use Passthrough termination then that means that OpenShift cannot add the X-Forwarded-For header, because as the name suggests it is just passing the packets through and because it’s encrypted it can modify the http request in anyway. 

If you want X-Forwarded-For you will need to switch to Edge termination. 

Thanks,

Joel
On Thu, 12 Oct 2017 at 12:27 am, Marcello Lorenzi  wrote:




Hi All,
we tried to configure a route on Origin 3.6 with a Passthrough termination to an Apache webserver present into a single POD but we can't notice the X-Forwarded-Header to Apache logs. We tried to capture it without success.

Could you confirm if there are some method to extract it from the POD side?

Thanks,
Marcello
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


-- 
Kind Regards,

Joel Pearson
Agile Digital | Senior Software Consultant

Love Your Software™ | ABN 98 106 361 273
p: 1300 858 277 | m: 0405 417 843 | w: agiledigital.com.au


-- 
Kind Regards,

Joel Pearson
Agile Digital | Senior Software Consultant

Love Your Software™ | ABN 98 106 361 273
p: 1300 858 277 | m: 0405 417 843 | w: agiledigital.com.au







Re: Openshift Origin and fixed user ID

2017-09-13 Thread Aleksandar Lazic
Hi Marcello.

on Mittwoch, 13. September 2017 at 18:00 was written:

> Hi Clayton
> I have into docker image this commands:


> && groupadd $APPLICATION_USER \
> && useradd -g $APPLICATION_USER -m -d /home/$APPLICATION_USER -s
> /bin/bash -c 'Application user' $APPLICATION_USER \
> && chown -R $APPLICATION_USER:$APPLICATION_USER $TOMCAT_PATH \
> && chgrp -R 0 $TOMCAT_PATH \
>
> EXPOSE $TOMCAT_HTTP_PORT
>
> USER $APPLICATION_USER

> On Origin configuration I added the user admin to nonroot SCC.
>
> oadm policy add-scc-to-user nonroot admin
>
> After this I execute the container but i received an entrypoint permission 
> denied.

Could you please show us the whole Dockerfile?
Is the entrypoint file executable?

What do you get when you start the process manually?

oc debug dc/
ls -la 
# call 
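A common cause of that "entrypoint permission denied" error is file permissions under the random UID OpenShift assigns. A minimal local sketch of the usual group-0 fix (the paths and file names below are placeholders, not taken from Marcello's image):

```shell
# Simulate the Dockerfile steps: make files group-0 owned and give the group
# the same permissions as the owner, so an arbitrary UID (in group 0) can run them.
mkdir -p /tmp/tomcat-demo/bin
printf '#!/bin/sh\necho started\n' > /tmp/tomcat-demo/bin/entrypoint.sh
chmod +x /tmp/tomcat-demo/bin/entrypoint.sh

# In a Dockerfile this would be:  RUN chgrp -R 0 $TOMCAT_PATH && chmod -R g=u $TOMCAT_PATH
chgrp -R 0 /tmp/tomcat-demo 2>/dev/null || true  # needs privileges; best-effort locally
chmod -R g=u /tmp/tomcat-demo

/tmp/tomcat-demo/bin/entrypoint.sh
```

With `chmod -R g=u`, the group gets the same bits as the owner, which is what lets a random UID in group 0 execute the entrypoint.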

> Marcello

Regards
Aleks

> On Wed, Sep 13, 2017 at 5:42 PM, Clayton Coleman  wrote:

> You would define that in your pod spec, or give the service accounts
>  in your namespace access to the "nonroot" SCC.


 >> On Sep 13, 2017, at 11:33 AM, Marcello Lorenzi  wrote:
 >>
 >> HI All,
 >> we have created some images with commands executed by user jboss and its 
 >> user id is fixed to 500 into the docker file. If we start the image on 
 >> Origin the image fails for the permission denied. We discovered that Origin 
 >> use a random uid assignment during the image creation, but is it possible 
 >> to fix the user id for a specific user like jboss for all the container?
 >>
 >> Thanks,
 >> Marcello
>> ___
 >> users mailing list
 >> users@lists.openshift.redhat.com
 >> http://lists.openshift.redhat.com/openshiftmm/listinfo/users





-- 
Best Regards
Aleks




Re: Metrics not accessible

2017-09-07 Thread Aleksandar Lazic
Hi Tim.

on Mittwoch, 06. September 2017 at 13:43 was written:

> On 06/09/2017 12:33, Aleksandar Lazic wrote:
>> Hi Tim.
>>
>> A dump question but do you have any proxy setuped?
> No, just a vanilla ansible install running on a machine on EC2. The 
> inventory file was posted earlier.
>>
>> on Mittwoch, 06. September 2017 at 12:49 was written:
>>
>>> No joy.
>>> The cassandra pod starts fine but the hawkular on fails to start with
>>> what looks like the same errors as I described before.
>> One of the interesting par is that hawkular can connect to cas
>>
>> ###
>> 2017-09-05 14:54:48,123 INFO  [com.datastax.driver.core.Cluster] 
>> (ServerService Thread Pool -- 64) New Cassandra host 
>> hawkular-cassandra/172.30.151.137:9042 added
>> ...
>> 2017-09-05 14:54:48,276 INFO  [org.cassalog.core.CassalogImpl] 
>> (metricsservice-lifecycle-thread) Applying ChangeSet
>> -- version: set-keyspace
>> USE hawkular_metrics
>> ...
>> 
>>
>> and then got you a NullPointerException
>>
>> ###
>> 2017-09-05 14:54:49,163 FATAL 
>> [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
>> (metricsservice-lifecycle-thread) HAWKMETRICS26: An error occurred 
>> trying to connect to the Cassandra cluster: java.lang.NullPointerException
>>  at 
>> org.hawkular.metrics.core.dropwizard.HawkularObjectNameFactory.createName(HawkularObjectNameFactory.java:54)
>>  at 
>> com.codahale.metrics.JmxReporter$JmxListener.createName(JmxReporter.java:656)
>>  at 
>> com.codahale.metrics.JmxReporter$JmxListener.onTimerAdded(JmxReporter.java:633)
>>  at 
>> com.codahale.metrics.MetricRegistry.notifyListenerOfAddedMetric(MetricRegistry.java:356)
>>  at 
>> com.codahale.metrics.MetricRegistry.addListener(MetricRegistry.java:191)
>>  at com.codahale.metrics.JmxReporter.start(JmxReporter.java:715)
>>  at 
>> org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle.startMetricsService(MetricsServiceLifecycle.java:474)
>>  at 
>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>  at 
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>>  at 
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>>  at 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>>  at 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>  at java.lang.Thread.run(Thread.java:748)
>> ###
>>
>> Was the Cassandra up when the hawkular started?
> Yes.
>>
>> are you able to curl Cassandra from hawkular pod?
>>
>> oc debug rc/hawkular-metrics
>>
>> curl -v telnet://hawkular-cassandra:9042/
> Yes:

> $ curl -v telnet://hawkular-cassandra:9042/
> * About to connect() to hawkular-cassandra port 9042 (#0)
> *   Trying 172.30.78.190...
> * Connected to hawkular-cassandra (172.30.78.190) port 9042 (#0)

> ^C

That's strange.

Could you please try a previous image version of Hawkular?

> Tim
>>
>>> Tim
>>
>>> On 06/09/2017 10:34, Aleksandar Lazic wrote:
>>>> Hi Tim.
>>>>
>>>> on Dienstag, 05. September 2017 at 17:10 was written:
>>>>
>>>>> Still no joy with this.
>>>>> I retried with the latest code and still hitting the same problem.
>>>>> Metrics does not seem to be working with a new Ansible install.
>>>>> I'm using a minimal setup with an inventory like this:
>>>>>> [OSEv3:children]
>>>> [snipp]
>>>>
>>>>> When the install completes the openshift-infra project pods ends up like
>>>>> this:
>>>>>> NAME READY STATUS RESTARTS   AGE
>>>>>> hawkular-cassandra-1-4m7lq   1/1   Running 0  16m
>>>>>> hawkular-metrics-0nl1q   0/1   CrashLoopBackOff 7  16m
>>>>>> heapster-cgw0b   0/1   Running 1  16m
>>>>> The hawkular-metrics pods is failing, and it looks like its because it
>>>>> can't connect to the cassandra pod.
>>>>> The full log of the hawkular-metrics pod is here:
>>>>> https://gist.github.com/tdudgeon/f3099911eed441817369ee03635aad7d
>>

Re: Annotations for ha-proxy x-forwarded-host

2017-09-07 Thread Aleksandar Lazic
Hi Daniel.

on Donnerstag, 07. September 2017 at 14:30 was written:

> Hi!
>
> We run into an issue with how ha-proxy is re-writing the
> x-forwarded-headers in OpenShift. We have an application that needs
> the original x-forwarded-host header passed in  the request it seems
> that the ha-proxy router is overwriting the ‘X-Forwarded-Host” headers
> to match the requested host-header.
>  
> --- the generated config for my route---
>  
>   http-request set-header X-Forwarded-Host %[req.hdr(host)]
>   http-request set-header X-Forwarded-Port %[dst_port]
>  
> Basically we want to keep the orginial ‘X-Forwarded-Host’, is there
> any way to configure this behavior?


You will need a custom template as described in the doc
https://docs.openshift.org/latest/install_config/router/customized_haproxy_router.html#using-configmap-replace-template

You can then add these lines to the custom template (untested):

http-request set-header X-Forwarded-Host %[req.fhdr(X-Forwarded-Host)],%[req.hdr(host)] if { req.fhdr(X-Forwarded-Host) -m found }
http-request set-header X-Forwarded-Host %[req.hdr(host)] if !{ req.fhdr(X-Forwarded-Host) -m found }

This solution is based on Patricks example on HAProxy ml
https://www.mail-archive.com/haproxy@formilux.org/msg26811.html

> Best Regards/Vänliga Hälsningar
> Daniel Svensson
>  
> Product Specialist Linux SITI
> Technical Infrastructure
> IT Delivery Production
> IKEA IT AB
> Phone: +46 (0)766 190 484

-- 
Best Regards
Aleks




Re: Debugging router hostname matching

2017-08-23 Thread Aleksandar Lazic
Hi Henryk.

on Mittwoch, 23. August 2017 at 13:59 was written:

> Hi Alex,
>
> Many thanks for your help.
>
> It was my fault. I assumed that you can use nip.io with domains
> prefixes. My bad. :) curl -v told me the harsh truth :)
>
> Cheers!

That's the reason why I ALWAYS use -v ;-)

-- 
Best Regards
Aleks




Re: Debugging router hostname matching

2017-08-22 Thread Aleksandar Lazic
Hi Henryk.

on Dienstag, 22. August 2017 at 18:03 was written:

> Hi,

> I have exposed service using the following command:

>   oc expose docker-registry
> --hostname=docker-registry-default.ec2-52-59-245-55.eu-central-1.compute.amazonaws.com.nip.io

> I can see route created properly:
>
> $ oc get routes
> NAME              HOST/PORT                                          
> PATH      SERVICES          PORT       TERMINATION   WILDCARD
> docker-registry  
> docker-registry-default.ec2-52-59-245-55.eu-central-1.compute.amazonaws.com.nip.io
> docker-registry   5000-tcp                 None

> However it seems that router cannot match my hostname with the service:

> $ curl
> http://docker-registry-default.ec2-52-59-245-55.eu-central-1.compute.amazonaws.com.nip.io
> <html>
> <head><title>301 Moved Permanently</title></head>
> <body bgcolor="white">
> <center><h1>301 Moved Permanently</h1></center>
> <hr><center>nginx/1.4.6 (Ubuntu)</center>
> </body>
> </html>

Could you please run:

curl -v http://docker-registry-default.ec2-52-59-245-55.eu-central-1.compute.amazonaws.com.nip.io
curl -v http://docker-registry-default.router.default.svc.cluster.local

Then we can see which IP each name resolves to.

> Everything works fine when I expose service without --hostname option
> and rely on docker-registry-default.router.default.svc.cluster.local hostname.


> How can I debug why is that happening?

You can take a look into the router config:

oc get po -n default # => pick any router pod

oc rsh  cat haproxy.config > router-config.txt

egrep registry router-config.txt

There should be an entry for the route.

> Many thanks!-- 
> Henryk Konsek

BTW, the output of `oc version` would also be nice.

-- 
Best Regards
Aleks




Re: Better understanding of limits and quotas

2017-08-14 Thread Aleksandar Lazic


Hi Thorvald.

I found this blog post very helpful:

https://blog.openshift.com/full-cluster-capacity-management-monitoring-openshift/

What's the output of

oc describe 
oc get ev

It's also one of the trickiest parts of OpenShift & Kubernetes for me.
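One hedged sketch of avoiding the "failed quota" error for pods (like build pods) that don't set resources themselves: give the project a LimitRange with defaults. The name and values below are assumptions, mirroring the requests/limits mentioned in this thread; it would be applied with `oc create -f`:

```shell
# Write a LimitRange that supplies default requests/limits for containers
# that don't specify any themselves.
cat > /tmp/limitrange.yaml <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
spec:
  limits:
  - type: Container
    defaultRequest:
      cpu: "1"
      memory: 256Mi
    default:
      cpu: "2"
      memory: 512Mi
EOF

# Sanity-check the fragment before applying it with: oc create -f /tmp/limitrange.yaml
grep -c 'memory' /tmp/limitrange.yaml
```

With defaults in place, pods created without explicit resources are no longer BestEffort and can be admitted by a compute-resources quota.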

Regards
Aleks

on Montag, 14. August 2017 at 17:39 was written:





Hi Derek,

Thank you for your reply.

But as I mentioned in my first post I requested following:
request.cpu: 1
limits.cpu: 2
request:mem: 256Mi
limit.mem: 512Mi

So it wasn't a pod without any resource request. There was an request but it just didn't work or I don't understand the entire concept. 

Thank you.
TH

On 14 August 2017 at 15:56, Derek Carr  wrote:




If you create a pod with no resource requests, the pod has BestEffort QoS.  This means it has no resource guarantees, and will get variable performance based on what else is happening on the machine running your pod.

For your scenario, it appears you are trying to run builds as "BestEffort" workloads.

To support this, you can create a BestEffort quota on #of pods, and alternately update your compute-resources quota to add a "NotBestEffort" scope.

For reference, see example 4:
https://docs.openshift.org/latest/admin_guide/quota.html

Thanks
Derek

On Mon, Aug 14, 2017 at 6:04 AM, Thorvald Hallvardsson  wrote:




Hi,

I'm trying to play a bit with limits and quotas and generally I don't understand anything. 

I'm trying to build a test application and I get an error:
Failed to create build pod: pods "wordpress-1-build" is forbidden: failed quota: compute-resources: must specify limits.cpu,limits.memory,requests.cpu,requests.memory.. A new deployment will be created automatically once the build completes

I built it with providing:
request.cpu: 1
limits.cpu: 2
request:mem: 256Mi
limit.mem: 512Mi

My quotas look as follows:
[root@master ~]# oc get quota
NAME                AGE
compute-resources   2d
object-counts       2d
[root@master ~]# oc describe quota compute-resources 
Name:           compute-resources
Namespace:      limits
Resource        Used    Hard
            
limits.cpu      0       10
limits.memory   0       2Gi
pods            0       4
requests.cpu    0       5
requests.memory 0       1Gi
[root@master ~]# oc describe quota object-counts 
Name:                   object-counts
Namespace:              limits
Resource                Used    Hard
                    
configmaps              0       5
persistentvolumeclaims  0       1
replicationcontrollers  0       10
secrets                 9       10
services                1       10

What do I do wrong or can someone explain me how that should work? 

Thank you.

Regards,
TH

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users












-- 
Best Regards
Aleks




Re: Docker Thin Pool Space

2017-08-14 Thread Aleksandar Lazic


Hi David.

Yes.
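For reference, a minimal sketch of what such a per-node cron entry could look like (the script path, schedule, and log file are assumptions; on a real node this would live in /etc/cron.d/ rather than /tmp):

```shell
# Nightly docker-storage cleanup entry in /etc/cron.d format (includes user field)
cat > /tmp/docker-cleanup.cron <<'EOF'
# m  h  dom mon dow  user  command
30 3 * * * root /usr/local/bin/docker-cleanup >> /var/log/docker-cleanup.log 2>&1
EOF
cat /tmp/docker-cleanup.cron
```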

on Montag, 14. August 2017 at 15:05 was written:





Thanks Aleksandar,

Do you just have that cron'd on each node?

On Mon, Aug 14, 2017 at 1:24 PM, Aleksandar Lazic <al...@me2digital.eu> wrote:




Hi David.

on Montag, 14. August 2017 at 13:46 was written:

> I keep getting the below error on my app nodes. Running Origin v1.4.1
> on AWS installed using the reference architecture. Am I missing
> something which should be cleaning up?

You should clean up the local Docker storage on a regular basis.

Something like this.

https://github.com/rhcarvalho/openshift-devtools/blob/master/docker-cleanup

> Error syncing pod, skipping: failed to "StartContainer" for "jnlp"
> with ErrImagePull: "failed to register layer: devmapper: Thin Pool has
> 4801 free data blocks which is less than minimum required 4863 free
> data blocks. Create more free space in thin pool or use
> dm.min_free_space option to change behavior"
>
> Thanks,
>
> Dave

--
Best Regards
Aleks








-- 
Best Regards
Aleks




Re: Docker Thin Pool Space

2017-08-14 Thread Aleksandar Lazic
Hi David.

on Montag, 14. August 2017 at 13:46 was written:

> I keep getting the below error on my app nodes. Running Origin v1.4.1
> on AWS installed using the reference architecture. Am I missing
> something which should be cleaning up?

You should clean up the local Docker storage on a regular basis.

Something like this.

https://github.com/rhcarvalho/openshift-devtools/blob/master/docker-cleanup

> Error syncing pod, skipping: failed to "StartContainer" for "jnlp"
> with ErrImagePull: "failed to register layer: devmapper: Thin Pool has
> 4801 free data blocks which is less than minimum required 4863 free
> data blocks. Create more free space in thin pool or use
> dm.min_free_space option to change behavior"
>
> Thanks,
>
> Dave

-- 
Best Regards
Aleks




Re: Default cluster administrator user in a multi-node cluster

2017-08-14 Thread Aleksandar Lazic


Hi Isuru.

When you are on the master as the root user, you are system:admin by default, AFAIK.

You will need to add a user and grant them cluster-admin privileges to work remotely or as a normal user.

https://docs.openshift.org/latest/admin_guide/manage_authorization_policy.html#managing-role-bindings

oadm policy add-cluster-role-to-user cluster-admin your-user

Maybe you will need to do the same on minishift.

Regards
Aleks

on Montag, 14. August 2017 at 10:55 was written:





Hi all, 

Followed [1] to create a multi node setup with a single master and three nodes using the ansible installer. After the all nodes started successfully, tried to use the CLI tool against the Openshift cluster, similar to how I used it in the local minishift environment, to login as the default system admin (system:admin): 

oc login -u system:admin

Then, I'm prompted for a password, which did not happen locally. 

Checked the master configuration file master-config.yaml, and the section is similar to [4]. AFAIU from the docs, the AllowAllPasswordIdentityProvider configuration will allow any non empty username and password to login to the system, but not relevant to the cluster administrator. Please correct if I'm wrong. 

Also went through the user management [2] and authorization documents [3] but I was unable to figure out how to configure/find the default admin credentials. Please do share your inputs on how to proceed from here. 

[1]. https://docs.openshift.org/latest/install_config/install/advanced_install.html#configuring-host-port
[2]. https://docs.openshift.org/latest/admin_guide/manage_users.html
[3]. https://docs.openshift.org/latest/architecture/additional_concepts/authorization.html
[4]. 
  identityProviders:
  - challenge: true
    login: true
    mappingMethod: claim
    name: allow_all
    provider:
      apiVersion: v1
      kind: AllowAllPasswordIdentityProvider

-- 
Thanks and Regards,
Isuru 







Re: Health check of API-server and HA-proxy

2017-07-28 Thread Aleksandar Lazic
Hi Per Carlson.

on Freitag, 28. Juli 2017 at 13:19 was written:

> We are using an external load balancer in front of both the
> API-server and the HA-proxies, and need some form of health checks to achieve 
> HA.
>
> The API-server has got /healthz and /healthz/ready endpoints, what
> the difference?​ Could any of those be used (and which is
> recommended), or are there better choices?
>
> There is a /healthz endpoint in HA-proxy on port 1936, but it isn't
> exposed outside the cluster and requires a password (which is unique
> per dc). What else could be used? We would like to stay as close as
> "stock configuration" as possible to reduce technical dept.

For the HAProxy routers, we added port 1936 on the router nodes to the 
iptables chain OS_FIREWALL_ALLOW and configured the LB to check 
/healthz.
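As a sketch of what that looks like (chain name as in this thread; the rule is written to a script here rather than applied, since applying it needs root on each router node, and the health-check host is a placeholder):

```shell
# Firewall rule for each router node, opening the stats/health port to the LB
cat > /tmp/router-healthz-firewall.sh <<'EOF'
#!/bin/sh
# Open the router stats/health port for the external load balancer
iptables -A OS_FIREWALL_ALLOW -p tcp -m tcp --dport 1936 -j ACCEPT
EOF
chmod +x /tmp/router-healthz-firewall.sh

# The LB health check can then probe the routers without stats credentials:
echo 'health check target: http://<router-node>:1936/healthz'
```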

> ​What are the recommended ways to do health checking of those components? 

-- 
Best Regards
Aleks




Re: Expose a range of ports

2017-07-14 Thread Aleksandar Lazic
Hi Javier Palacios.

on Freitag, 14. Juli 2017 at 14:45 was written:

> Hello,
>
> We have a service that exposes a wide port range that we want to move
> into openshift. Is that possible with origin 1.5.1?
> What my search found is that is not possible, but I cannot find any
> recent statement and want to be sure.

Well, exposing several ports in a Service is not that difficult;
`oc new-app ...` with a Dockerfile creates this for you by default.

The challenge is being able to reach these ports from outside of the project.

Could you please add more info on what exactly you want to achieve?
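As a sketch, a Service exposing more than one port is just a list of port entries (all names and port numbers below are made up); whether a wide range is practical is another matter, since each port must be listed explicitly:

```shell
# Service fragment with several named ports; would be applied with oc create -f
cat > /tmp/multi-port-svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: wide-range
spec:
  selector:
    app: wide-range
  ports:
  - name: data-5000
    port: 5000
    targetPort: 5000
  - name: data-5001
    port: 5001
    targetPort: 5001
EOF
grep -c 'targetPort' /tmp/multi-port-svc.yaml
```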

> Javier Palacios

-- 
Best Regards
Aleks




Re: oc rsh or oc get pod -w disconnection after few minutes

2017-07-14 Thread Aleksandar Lazic


Hi Philippe Lafoucrière.

on Donnerstag, 13. Juli 2017 at 21:04 was written:





We have achieved a lot of tests, and the connection is dropped somewhere in Openshift, not by the firewall.

As we don't have any proxy, except haproxy.

We've seen https://docs.openshift.com/container-platform/3.3/install_config/router/default_haproxy_router.html#preventing-connection-failures-during-restarts

Could it be related?



I think you should try to increase ROUTER_DEFAULT_CLIENT_TIMEOUT

oc env dc/router -n default ROUTER_DEFAULT_CLIENT_TIMEOUT=1h

You can see more of this variables in the doc

https://docs.openshift.org/latest/architecture/core_concepts/routes.html#env-variables

or in the code

https://github.com/openshift/origin/blob/master/images/router/haproxy/conf/haproxy-config.template#L98-L112






We're seeing a disconnection of `oc get events -w` after exactly 30s, which is exactly the reload time of haproxy.

thanks



-- 
Best Regards
Aleks




Re: timeout expired waiting for volumes to attach/mount for pod

2017-07-11 Thread Aleksandar Lazic


Hi Philippe.

on Dienstag, 11. Juli 2017 at 23:18 was written:





And... it's starting again.
Pods are getting stuck because volumes (secrets) can't be mounted, then after a few minutes, everything starts.
I really don't get it :(



Maybe it would help if you told us some basic information.

On which platform do you run OpenShift?
Since when does this behavior happen?
What were the latest changes you made before this started?

oc version
oc project
oc export dc/
oc describe pod 
oc get events

-- 
Best Regards
Aleks




Re: Origin-Aggregated-Logging OPS generate 10Go ES data by day, 40000 hits by hours

2017-07-07 Thread Aleksandar Lazic


Hi Stéphane Klein.

on Freitag, 07. Juli 2017 at 11:15 was written:





Hi,

Origin-Aggregated-Logging (https://github.com/openshift/origin-aggregated-logging) is installed on my cluster and I have enabled "OPS" option.

Then, I have two ElasticSearch clusters:

* ES
* ES-OPS

My issue: OPS logging generates 10 GB of ES data per day!

origin-node log level is set at 0 (errors and warnings only).

This is some logging record:

/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --selinux-enabled --insecure-registry=172.30.0.0/16 --log-driver=journald --storage-driver devicemapper --storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/cah-docker--pool --storage-opt dm.use_deferred_removal=true --storage-opt dm.use_deferred_deletion=true

/usr/lib/systemd/systemd --switched-root --system --deserialize 19

/usr/bin/docker-current run --name origin-node --rm --privileged --net=host --pid=host --env-file=/etc/sysconfig/origin-node -v /:/rootfs:ro,rslave -e CONFIG_FILE=/etc/origin/node/node-config.yaml -e OPTIONS=--loglevel=0 -e HOST=/rootfs -e HOST_ETC=/host-etc -v /var/lib/origin:/var/lib/origin:rslave -v /etc/origin/node:/etc/origin/node -v /etc/localtime:/etc/localtime:ro -v /etc/machine-id:/etc/machine-id:ro -v /run:/run -v /sys:/sys:rw -v /sys/fs/cgroup:/sys/fs/cgroup:rw -v /usr/bin/docker:/usr/bin/docker:ro -v /var/lib/docker:/var/lib/docker -v /lib/modules:/lib/modules -v /etc/origin/openvswitch:/etc/openvswitch -v /etc/origin/sdn:/etc/openshift-sdn -v /var/lib/cni:/var/lib/cni -v /etc/systemd/system:/host-etc/systemd/system -v /var/log:/var/log -v /dev:/dev --volume=/usr/bin/docker-current:/usr/bin/docker-current:ro --volume=/etc/sysconfig/docker:/etc/sysconfig/docker:ro openshift/node:v1.4.1

...

40000 hits by hours!

I don't understand why I have all these log records. Is it usual?



From my observations, yes, it is normal.
You should also have a lot of entries like atomic-openshift-node.





How can I fix it?



Only by redefining the log lines in docker, imho.





Best regards,
Stéphane
-- 
Stéphane Klein 
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane





-- 
Best Regards
Aleks




Re: I think such an addition to OpenShift might be useful ;)

2017-07-07 Thread Aleksandar Lazic


Hi Hetz Ben Hamo.

on Freitag, 07. Juli 2017 at 00:48 was written:





https://arstechnica.com/information-technology/2017/07/lets-encrypt-to-start-offering-free-wildcard-certificates-for-https/



+1

-- 
Best Regards
Aleks




Re: oc whoami bug?

2017-06-21 Thread Aleksandar Lazic


Hi Philippe Lafoucrière.

on Mittwoch, 21. Juni 2017 at 13:48 was written:





Just to be clear, my point is: if `oc whoami` returns "error: You must be logged in to the server (the server has asked for the client to provide credentials)", `oc whoami -t` should return the same if the session has timed out ;)



+1

or some error exit code, e.g. -1 or similar

-- 
Best Regards
Aleks




Re: oauth token info

2017-06-13 Thread Aleksandar Lazic


Hi Andrew Lau.

on Dienstag, 13. Juni 2017 at 12:14 was written:





Normal users can't query those endpoints



That's true.

I think then the easiest way is to use

https://jwt.io/ or
https://github.com/auth0/jwt-decode

Regards
Aleks





On Tue, 13 Jun 2017 at 17:46 Aleksandar Lazic <al...@me2digital.eu> wrote:




Hi Andrew Lau.

on Dienstag, 13. Juni 2017 at 03:38 was written:





Is there an endpoint to retrieve the current token information?

ie. /oapi/v1/users/~ seems to be an undocumented way to get the current user information. I'm looking to obtain the expiry time on the current token being used.


Is the https://jwt.io/ not an option?

You can try this sequence

Search for the token if you don't know the token only the userName.
curl -k -v -H "Accept: application/json, */*" -H "User-Agent: oc/v3.4.1.18 (linux/amd64) openshift/0f9d380" -H "Authorization: Bearer ${AUTH_TOKEN}" "MASTER_URL/oapi/v1/oauthaccesstokens?pretty=true"

Get information about a token also expiresIn
curl -k -v -H "Accept: application/json, */*" -H "User-Agent: oc/v3.4.1.18 (linux/amd64) openshift/0f9d380" -H "Authorization: Bearer ${AUTH_TOKEN}" "MASTER_URL/oapi/v1/oauthaccesstokens/{metadata.name}?pretty=true"
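
Since jwt.io was suggested above, here is a local-decode sketch; note this only works if the token actually is a JWT (three dot-separated base64url parts), which depends on the OAuth setup. The token value below is fabricated, not a real credential:

```shell
# Decode the payload (second dot-separated part) of a JWT to read "exp".
TOKEN='eyJhbGciOiJIUzI1NiJ9.eyJleHAiOjE3MDAwMDAwMDB9.c2ln'
payload=$(printf '%s' "$TOKEN" | cut -d. -f2 | tr '_-' '/+')
# base64 -d needs the length padded to a multiple of 4
while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
payload_json=$(printf '%s' "$payload" | base64 -d)
echo "$payload_json"
```

The resulting JSON can then be pretty-printed with jq and the exp value converted to a date with `date -d @<exp>`.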


I have found this in https://docs.openshift.org/latest/rest_api/openshift_v1.html at

GET /oapi/v1/oauthaccesstokens/{name}

Hth



-- 
Best Regards
Aleks








-- 
Best Regards
Aleks




Re: oauth token info

2017-06-13 Thread Aleksandar Lazic


Hi Andrew Lau.

on Dienstag, 13. Juni 2017 at 03:38 was written:





Is there an endpoint to retrieve the current token information?

ie. /oapi/v1/users/~ seems to be an undocumented way to get the current user information. I'm looking to obtain the expiry time on the current token being used.



Is the https://jwt.io/ not an option?

You can try this sequence

Search for the token if you don't know the token only the userName.
curl -k -v -H "Accept: application/json, */*" -H "User-Agent: oc/v3.4.1.18 (linux/amd64) openshift/0f9d380" -H "Authorization: Bearer ${AUTH_TOKEN}" "MASTER_URL/oapi/v1/oauthaccesstokens?pretty=true"

Get information about a token also expiresIn
curl -k -v -H "Accept: application/json, */*" -H "User-Agent: oc/v3.4.1.18 (linux/amd64) openshift/0f9d380" -H "Authorization: Bearer ${AUTH_TOKEN}" "MASTER_URL/oapi/v1/oauthaccesstokens/{metadata.name}?pretty=true"


I have found this in https://docs.openshift.org/latest/rest_api/openshift_v1.html at

GET /oapi/v1/oauthaccesstokens/{name}

Hth


-- 
Best Regards
Aleks




Re: Getting "error: unexpected EOF" while checking logs on a single pod

2017-06-10 Thread Aleksandar Lazic
Hi G. Jones.

on Sonntag, 11. Juni 2017 at 00:01 was written:

> I have a pod that's constantly restarting (Hawkular Metrics), pretty much
> every day. I'm trying to keep an eye on the logs for that specific pod in
> order to catch the events leading up to the restart so I use:
>
> $ oc logs -f hawkular-metrics-j2q0a
>
> And allow it to just run in the hopes that the next time it restarts I'll
> see what caused it. The problem I'm running into is that it seems that if
> nothing is written to the logs for some length of time the command stops
> with "error: unexpected EOF" and just exits. 

What's the output of

oc --loglevel=9 logs -f hawkular-metrics-j2q0a

oc logs -p hawkular-metrics-j2q0a
oc describe po hawkular-metrics-j2q0a

> Is this by design? Is there something that can be tweaked to stop this from
> happening?




-- 
Best Regards
Aleks




Re: [EXTERNAL] Re: garbage collection docker metadata

2017-06-09 Thread Aleksandar Lazic


Hi Mateus Caruccio.

on Freitag, 09. Juni 2017 at 14:50 was written:





I do basically the same in a node cronjob: docker rmi $(docker images -q)



We also.

Like Andrew, I think that the Kubernetes GC does not take the metadata part of the thinpool into account.

Maybe there is already an issue open in k8s for this.

Regards
Aleks





--
Mateus Caruccio / Master of Puppets
GetupCloud.com 
We make the infrastructure invisible

2017-06-09 9:30 GMT-03:00 Gary Franczyk <gary.franc...@availity.com>:




I regularly run an app named “docker-gc” to clean up unused images and containers.
 
https://github.com/spotify/docker-gc
 
 
Gary Franczyk
Senior Unix Administrator, Infrastructure
 
Availity | 10752 Deerwood Park Blvd S. Ste 110, Jacksonville FL 32256
W 904.470.4953 | M 561.313.2866
gary.franc...@availity.com
 
From: <users-boun...@lists.openshift.redhat.com> on behalf of Andrew Lau <and...@andrewklau.com>
Date: Friday, June 9, 2017 at 8:27 AM
To: Fernando Lozano <floz...@redhat.com>
Cc: "users@lists.openshift.redhat.com" <users@lists.openshift.redhat.com>
Subject: [EXTERNAL] Re: garbage collection docker metadata
 
The error was from a different node. 
 
`docker info` reports plenty of data storage free. Manually removing images from the node has always fixed the metadata storage issue, hence why I was asking if garbage collection did take into account metadata or only data storage.
 
On Fri, 9 Jun 2017 at 22:11 Fernando Lozano <floz...@redhat.com> wrote:




If the Docker GC complains images are in use and you get out of disk space errors, I'd assume you need more space for docker storage.
 
On Fri, Jun 9, 2017 at 8:37 AM, Andrew Lau <and...@andrewklau.com> wrote:




 
On Fri, 9 Jun 2017 at 21:10 Aleksandar Lazic <al...@me2digital.eu> wrote:




Hi Andrew Lau.

on Freitag, 09. Juni 2017 at 12:35 was written:




Does garbage collection get triggered when the docker metadata storage is full? Every few days I see some nodes fail to create new containers due to the docker metadata storage being full. Docker data storage has plenty of capacity.

I've been cleaning out the images manually as the garbage collection doesn't seem to trigger.


 
Do you have tried to change the default settings?

https://docs.openshift.org/latest/admin_guide/garbage_collection.html#image-garbage-collection

How was the lvm thinpool created?
https://docs.openshift.org/latest/install_config/install/host_preparation.html#configuring-docker-storage

The docker-storage-setup normally calculates 0.1% for metadata, as described in this line:
https://github.com/projectatomic/container-storage-setup/blob/master/container-storage-setup.sh#L380
 


 
Garbage collection set to 80 high and 70 low.
 
Garbage collection is working on, I see it complain about images in use on other nodes:

ImageGCFailed: wanted to free 3289487769, but freed 3466304680 space with errors in image deletion: [Error response from daemon: {"message":"conflict: unable to delete 96f1d6e26029 (cannot be forced) - image is being used by running container 3ceb5410db59"}, Error response from daemon: {"message":"conflict: unable to delete 4e390ce4fc8b (cannot be forced) - image is being used by running container 0040546d8f73"}, Error response from daemon: {"message":"conflict: unable to delete 60b78ced07a8 (cannot be forced) - image has dependent child images"}, Error response from daemon: {"message":"conflict: unable to delete 2aebdcf9297e (cannot be forced) - image has dependent child images"}]
 
docker-storage-setup with 99% data volume. I'm wondering if maybe only the data volume is watched
 






-- 
Best Regards
Aleks


 








 












-- 
Best Regards
Aleks




Re: garbage collection docker metadata

2017-06-09 Thread Aleksandar Lazic


Hi Andrew Lau.

on Freitag, 09. Juni 2017 at 12:35 was written:





Does garbage collection get triggered when the docker metadata storage is full? Every few days I see some nodes fail to create new containers due to the docker metadata storage being full. Docker data storage has plenty of capacity.

I've been cleaning out the images manually as the garbage collection doesn't seem to trigger.



Do you have tried to change the default settings?

https://docs.openshift.org/latest/admin_guide/garbage_collection.html#image-garbage-collection

How was the lvm thinpool created?
https://docs.openshift.org/latest/install_config/install/host_preparation.html#configuring-docker-storage

The docker-storage-setup normally calculates 0.1% for metadata, as described in this line:
https://github.com/projectatomic/container-storage-setup/blob/master/container-storage-setup.sh#L380
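
A quick way to see whether it is the metadata pool (rather than the data pool) that fills up is to parse `docker info`. A sketch with illustrative sample values; on a real node you would pipe the live `docker info` output instead of the here-string:

```shell
# Sample devicemapper lines as `docker info` prints them (values illustrative);
# both metadata figures are in the same unit, so the ratio is valid.
sample='Data Space Used: 9.8 GB
Data Space Total: 99 GB
Metadata Space Used: 101 MB
Metadata Space Total: 104 MB'
result=$(printf '%s\n' "$sample" | awk -F': ' '
  /Metadata Space Used/  { used = $2 + 0 }
  /Metadata Space Total/ { total = $2 + 0 }
  END {
    pct = 100 * used / total
    printf "metadata %.0f%% used", pct
    if (pct > 90) printf " (nearly full)"
  }')
echo "$result"
```

Running that check from cron and alerting above a threshold would catch the condition that image GC apparently does not watch.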


-- 
Best Regards
Aleks




Re: openshift dns as delegated domain

2017-06-08 Thread Aleksandar Lazic


Hi Javier Palacios.

on Donnerstag, 08. Juni 2017 at 18:42 was written:





 
No, I've discovered the openshift_use_dnsmasq=False ansible option and now the openshift DNS is the one listening on port 53, but it seems still not accessible from our domain controller as a standard DNS server, so I've given up on this and am using a wildcard DNS entry that at least allows me to progress.



That's strange, but good that you found a workaround.






Javier Palacios
 
De: Aleksandar Lazic [mailto:al...@me2digital.eu] 

Hi Javier.

Please can you tell us if your issue is now solved?

Best Regards
Aleks

on Dienstag, 06. Juni 2017 at 19:32 was written:




Ah I think I got you now.

The proxy mode is the default mode for dnsmasq.

http://www.thekelleys.org.uk/dnsmasq/doc.html

Cite from website.


The DNS subsystem provides a local DNS server for the network, with forwarding of all query types to upstream recursive DNS servers and caching of common record types 

Regards
Aleks

Javier Palacios <jpalac...@net4things.com> schrieb am 06.06.2017:




 




For this you will need to add the cluster.local domain into the DNS
Server which is configured in the client and forward the requests to
dnsmasq.

I think you need something like this called split horizon.

https://serverfault.com/a/563397/391298



That is exactly my question, which is much simpler than the serverfault question & answer, as I don't want to override any authoritative answers, just to get them.




What I would do is the following.

.) add cluster.local zone in your primary dns server
.) point the ns entries for master01



In particular, how to do this two steps at least, that I know how to do for standard dnsmasq as authoritative server, but not as "proxy" for the openshift one.

Javier Palacios








-- 
ME2Digital e. U.
https://me2digital.online/





-- 
Best Regards
Aleks




Re: Backup of databases on OpenShift

2017-06-08 Thread Aleksandar Lazic


Hi Jens.

on Donnerstag, 08. Juni 2017 at 16:46 was written:





Hi,

We recently set up an OpenShift Enterprise cloud and we're wondering what the best practices are for backing up databases running in an OpenShift cloud. I will focus on PostgreSQL here, but the same goes for MongoDB, MariaDB...

- Should we rely on backups of the persistent volumes (we're using NFS)? This would mean assuming the on-disk state is always recoverable. Which it *should* be, but it does feel like a hack...
- Should we have an admin-level oc script that filters out all running database containers and does some 'oc exec pg_dump ... > backup.sql' magic on them? 
- Should we provide some simple templates to our users that contain nothing but a cron script that calls pg_dump?
...

Please share your solutions?



I like this one.

oc rsh  mysqldump/pg_dump/... > backup_file

Some users use filesystem backups, as you have mentioned.

I have seen a concept with a sidecar container somewhere, but I can't find it now.

What I have seen in the past: the backup is not the problem, the restore is the difficult part.
I once needed to restore a db (PostgreSQL), and it was neither easy nor automatic!
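
A sketch of how such a dump could be scripted (the pod name, dump command, and env variable names are hypothetical; DRY_RUN=1 only prints the command, so the sketch can be tried without a cluster):

```shell
# Dump a database from inside its pod via `oc rsh` (hypothetical names).
backup_db() {
  pod=$1; dump_cmd=$2; out=$3
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "oc rsh $pod sh -c '$dump_cmd' > $out"   # show what would run
  else
    oc rsh "$pod" sh -c "$dump_cmd" > "$out"
  fi
}

# e.g. for a PostgreSQL pod created from the default template:
DRY_RUN=1 backup_db postgresql-1-abcde \
  'pg_dump -U "$POSTGRESQL_USER" "$POSTGRESQL_DATABASE"' backup.sql
```

Restores are the harder part, as noted above: exercise the restore path (e.g. feeding the dump back through `oc rsh ... psql`) regularly, not just the dump.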






Kind Regards,


Jens




-- 
Best Regards
Aleksandar Lazic - ME2Digital e. U.
https://me2digital.online/





Re: openshift dns as delegated domain

2017-06-08 Thread Aleksandar Lazic


Hi Javier.

Please can you tell us if your issue is now solved?

Best Regards
Aleks

on Dienstag, 06. Juni 2017 at 19:32 was written:





Ah I think I got you now.

The proxy mode is the default mode for dnsmasq.

http://www.thekelleys.org.uk/dnsmasq/doc.html

Cite from website.


The DNS subsystem provides a local DNS server for the network, with forwarding of all query types to upstream recursive DNS servers and caching of common record types 

Regards
Aleks

Javier Palacios  schrieb am 06.06.2017:









For this you will need to add the cluster.local domain into the DNS
Server which is configured in the client and forward the requests to
dnsmasq.

I think you need something like this called split horizon.

https://serverfault.com/a/563397/391298



That is exactly my question, which is much simpler than the serverfault question & answer, as I don't want to override any authoritative answers, just to get them.





What I would do is the following.

.) add cluster.local zone in your primary dns server
.) point the ns entries for master01



In particular, how to do this two steps at least, that I know how to do for standard dnsmasq as authoritative server, but not as "proxy" for the openshift one.

Javier Palacios








-- 
ME2Digital e. U.
https://me2digital.online/





Re: How to grant system:admin rights to admin?

2017-06-06 Thread Aleksandar Lazic


Hi.

am Mittwoch, 07. Juni 2017 um 01:31 schrieben Sie:










On 7 Jun 2017, at 3:01 AM, Ulf Lilleengen <l...@redhat.com> wrote:

Hi Henryk,

Not sure if this is applicable to your setup, but an alternative is to point oc to admin.kubeconfig. E.g.:

oc --config /var/lib/origin/openshift.local.config/master/admin.kubeconfig adm policy add-cluster-role-to-user cluster-admin developer

I've been using this way as 'oc login -u system:admin' didn't work with my dev setup (created using 'oc cluster up') for some reason. It seems to work when using minishift, so I'd love to know what's causing it as well.



If you have access to the master node that will work. Sometimes the master nodes will already have cached login as admin from setup of cluster and just being able to access the master node as root will leave you as admin user anyway.



Well, the main issue is that a lot of customers change the user with 'oc login -u ...' while they are root on the masters, and then you can't log in with system:admin!

I solved this with the following sequence.

oc config view | egrep 'context|default' # look for system:admin
oc config set-context default/./system:admin
oc config use-context default/./system:admin

My first advice to all users on the master servers is:

Don't use root for 'normal' oc work.

Create a user on the master, switch to this user, and run the oc commands as that user.
This is by far the safest way, afaik.







Another alternative is if you have granted specific user sudoer role access, then such a user could use impersonation to run:

    oc admin policy add-cluster-role-to-user cluster-admin developer --as system:admin

See:

    https://docs.openshift.com/online/architecture/additional_concepts/authentication.html#authentication-impersonation

Graham





Hth,

Ulf

On 06. juni 2017 16:16, Henryk Konsek wrote:




Hi Graham,
That would be probably fine. I assume that I should log in as system:admin in order to execute those commands, right?
The problem is that I cannot switch to system:admin...
oc login -u system:admin
Authentication required for https://localhost:8443 (openshift)
Username: system:admin
Password:
error: username system:admin is invalid for basic auth
Any idea what I'm doing wrong?
Cheers!
pon., 5 cze 2017 o 12:28 użytkownik Graham Dumpleton <gdump...@redhat.com <mailto:gdump...@redhat.com>> napisał:
    > On 5 Jun 2017, at 8:13 PM, Henryk Konsek <hekon...@gmail.com
   <mailto:hekon...@gmail.com>> wrote:
    >
    > Hi,
    >
    > Quick question. Is there an easy way to grant "system:admin"
   privileges to "admin" user? I'd like to make it possible for 'admin'
   user to list projects and namespaces for example. I'm aware that
   this is not recommended for production environment, but this is
   something we need for an automation of our integration tests suite.
   Not sure if suits your requirements, but presuming 'username'
   exists, as user who already has admin rights, try:
            oc adm policy add-cluster-role-to-user cluster-reader username
   If only want them to be able to read view stuff but not modify, or:
            oc adm policy add-cluster-role-to-user cluster-admin username
   if want to allow them full edit ability on cluster.
   Replace 'username' with actual name of user.
   Graham
-- 
Henryk Konsek
https://linkedin.com/in/hekonsek



-- 
Ulf






-- 
Best regards
Aleksandar Lazic - ME2Digital e. U.
https://me2digital.online/





RE: openshift dns as delegated domain

2017-06-06 Thread Aleksandar Lazic
Ah I think I got you now.

The proxy mode is the default mode for dnsmasq.

http://www.thekelleys.org.uk/dnsmasq/doc.html

Cite from website.


The DNS subsystem provides a local DNS server for the network, with forwarding 
of all query types to upstream recursive DNS servers and caching of common 
record types 

Regards
Aleks

Javier Palacios  schrieb am 06.06.2017:
>
>> For this you will need to add the cluster.local domain into the DNS
>> Server which is configured in the client and forward the requests to
>> dnsmasq.
>> 
>> I think you need something like this called split horizon.
>> 
>> https://serverfault.com/a/563397/391298
>
>That is exactly my question, which is much simpler than the serverfault
>question & answer, as I don't want to override any authoritative
>answers, just to get them.
>
>> What I would do is the following.
>> 
>> .) add cluster.local zone in your primary dns server
>> .) point the ns entries for master01
>
>In particular, how to do this two steps at least, that I know how to do
>for standard dnsmasq as authoritative server, but not as "proxy" for
>the openshift one.
>
>Javier Palacios


Re: openshift dns as delegated domain

2017-06-06 Thread Aleksandar Lazic
Hi Javier.

am Dienstag, 06. Juni 2017 um 14:24 schrieben Sie:

>> De: Aleksandar Lazic [mailto:al...@me2digital.eu]
>> 
>> You can add for example on master01 the following line in
>> /etc/sysconfig/iptables.
>> 
>> -A OS_FIREWALL_ALLOW -p udp -m state --state NEW -m udp --dport 53 -j
>> ACCEPT
>> 
>> Then you only need to point the ns entry to the master01 and of course
>> your clients must be able to reach master01 via udp 53.

> That is for sure required, but it seems not enough. That just allows one to
> get name resolution when binding directly to the dnsmasq.
> But what I want is not to add master01 to my node dnsserver list, but
> let my standard dns to ask to master01 for anything below
> cluster.local, as it does with any other query for non-local domains.

> Let say, after opening 53/udp I can do (10.1.0.155 is the master01 addresses)
> nslookup 
> registry-console-default.router.default.svc.cluster.local
> registry-console-default.router.default.svc.cluster.local - 10.1.0.155
> but what I want is to succeed just with
> nslookup 
> registry-console-default.router.default.svc.cluster.local
> registry-console-default.router.default.svc.cluster.local

For this you will need to add the cluster.local domain into the DNS
Server which is configured in the client and forward the requests to
dnsmasq.

I think you need something like this called split horizon.

https://serverfault.com/a/563397/391298

What I would do is the following.

.) add cluster.local zone in your primary dns server
.) point the ns entries for master01
.) reload/restart dns server
.) flush dns cache on client
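
As a sketch of the first two steps in BIND syntax (the zone name matches this thread, but the server address is the hypothetical master01 address from below; other DNS servers have equivalent forward-zone mechanisms):

```
// named.conf on the corporate DNS server: send all cluster.local
// queries to the OpenShift master that answers on port 53
zone "cluster.local" {
    type forward;
    forward only;
    forwarders { 10.1.0.155; };   // master01
};
```

A forward zone avoids making the corporate server authoritative for cluster.local while still resolving the names; depending on the BIND version you may also need to relax DNSSEC validation for that zone.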

> I can do that with a dnsmasq instance that I fully manage, but the
> first step is to make it authoritative
> (http://www.thekelleys.org.uk/dnsmasq/docs/dnsmasq-man.html#lbAH), and
> I cannot do with the openshift one which is by definition a forward only 
> instance.

> Javier Palacios

-- 
Best Regards
Aleksandar Lazic - ME2Digital e. U.
https://me2digital.online/




Re: openshift dns as delegated domain

2017-06-05 Thread Aleksandar Lazic
Hi Javier.

am Montag, 05. Juni 2017 um 14:06 schrieben Sie:

>> De: Aleksandar Lazic [mailto:al...@me2digital.eu]
>> >
>> > I would like to convert the skydns built into openshift into a
>> > delegated zone of our own DNS domain. I've seen that it runs at 8053,
>> 
>> The dnsmasq is not a workaround, it's the solution for keeping DNS
>> resolution up and running.
>
> Maybe I didn't explain well enough. I don't want to get DNS
> resolution from within the openshift nodes towards themselves or the
> outside. I want to make the *.cluster.local names resolvable from the
> outside.

Due to the fact that dnsmasq listens by default on all interfaces:

netstat -tulpn|egrep 'Pro|dns'
Proto Recv-Q Send-Q Local Address   Foreign Address State   
PID/Program name
tcp0  0 0.0.0.0:53  0.0.0.0:*   LISTEN  
97429/dnsmasq
tcp6   0  0 :::53   :::*LISTEN  
97429/dnsmasq
udp0  0 0.0.0.0:53  0.0.0.0:*   
97429/dnsmasq
udp6   0  0 :::53   :::*
97429/dnsmasq

you can add udp 53 to the OS_FIREWALL_ALLOW chain on the nodes which you want
to use as DNS resolvers for cluster.local.

You can add for example on master01 the following line in 
/etc/sysconfig/iptables.

-A OS_FIREWALL_ALLOW -p udp -m state --state NEW -m udp --dport 53 -j ACCEPT

Then you only need to point the ns entry to the master01 and of course 
your clients must be able to reach master01 via udp 53.

Does this help?

> Javier Palacios

-- 
Best Regards
Aleksandar Lazic - ME2Digital e. U.
https://me2digital.online/




Re: Can I exclude one project or one container to Origin-Aggregated-Logging system?

2017-05-30 Thread Aleksandar Lazic
Hi.

Afaik there is no option for this.

Best regards
Aleks

"Stéphane Klein"  schrieb am 30.05.2017:
>HI,
>
>I just read origin-aggregated-logging
> documentation
>and
>I don't found if I can exclude one project or one container to logging
>system.
>
>Is it possible with a container labels? or other system?
>
>Best regards,
>Stéphane


Re: passwordless authentication

2017-05-06 Thread Aleksandar Lazic
Hi.

Am Fri, 5 May 2017 22:09:52 +
schrieb Gary Franczyk :

> I’m trying to create some deploy scripts to automatically provision
> multi-project pipelines and would like to have my scripts stay
> authenticated (sort of like using SSH keys).  Is there a way to do
> this?

Maybe service accounts could be the solution.

https://docs.openshift.org/latest/admin_guide/service_accounts.html
https://kubernetes.io/docs/admin/service-accounts-admin/

You can then use the bearer token from the service account for
authentication
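A hedged sketch of that flow: keep scripts authenticated via a service-account token instead of a user password. The account name "pipeline-bot", the project "ci" and the server URL are hypothetical.

```shell
SA=pipeline-bot
NS=ci
SERVER=https://openshift.example.com:8443

if command -v oc >/dev/null 2>&1; then
    # One-time setup on the cluster:
    oc create serviceaccount "$SA" -n "$NS"
    oc policy add-role-to-user edit -z "$SA" -n "$NS"
    # OpenShift generates token secrets for the account. Note: the first
    # entry in .secrets may be the dockercfg secret on some versions - pick
    # the one whose name contains "token":
    SECRET=$(oc get sa "$SA" -n "$NS" -o jsonpath='{.secrets[0].name}')
    TOKEN=$(oc get secret "$SECRET" -n "$NS" -o jsonpath='{.data.token}' | base64 -d)
    # Later script runs stay authenticated without any password prompt:
    oc login --token="$TOKEN" --server="$SERVER"
else
    echo "oc not installed - cluster commands shown for reference only"
fi
```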

> Thanks
> 
> Gary Franczyk
> Senior Unix Administrator, Infrastructure

Regards
Aleks




Re: 'oc new-app -f openshift/templates/nodejs.json' fails

2017-05-03 Thread Aleksandar Lazic
Hi David.

On Wed, 3 May 2017 22:55:05 +,
David VOGEL wrote:

> I tried unsuccessfully to create a new-app using an Openshift
> template. The following command fails; a service is deployed but an
> image never gets built:
> 
> $ oc new-app -f openshift/templates/nodejs.json
> ...
> deploymentconfig "nodejs-example" created
> --> Success  
> Build scheduled, use 'oc logs -f bc/nodejs-example' to track its
> progress.
> 
> But the build fails, and the 'oc logs' command above fails. The
> following messages are from the server log:
> 
> E0502 10:58:45.7797578176 config_controller.go:84] error
> instantiating Build from BuildConfig nodejs-echo/nodejs-example:
> Error resolving ImageStreamTag nodejs:4 in namespace openshift:
> unable to find latest tagged image
> 
> E0502 10:58:47.3291548176 docker_manager.go:2294] container start
> failed: ErrImagePull: Error response from daemon: {"message":"error
> parsing HTTP 400 response body: unexpected end of JSON input: \"\""}

Please can you additionally send us the output of the following
commands, thanks.

oc status -v in the project of your nodejs app

oc get is -n openshift |egrep node

###
nodejs    172.30.187.109:5000/openshift/nodejs    latest,4,0.10    6 days ago
###

oc describe istag  -n openshift nodejs:4
###
Name:   
sha256:c5b21dc08cf5da8b6b0485147d946d8202f2be211c17bcef3a0fc26570217dd3
Namespace:  
Created:6 days ago
Labels: 
Description:Build and run Node.js 4 applications on RHEL 7. For more 
information about using this builder image, including OpenShift considerations, 
see https://github.com/sclorg/s2i-nodejs-container/blob/master/4/README.md.
Annotations:iconClass=icon-nodejs
openshift.io/display-name=Node.js 4
sampleRepo=https://github.com/openshift/nodejs-ex.git
supports=nodejs:4,nodejs
tags=builder,nodejs
version=4
Docker Image:   
registry.access.redhat.com/rhscl/nodejs-4-rhel7@sha256:c5b21dc08cf5da8b6b0485147d946d8202f2be211c17bcef3a0fc26570217dd3
Image Name: 
sha256:c5b21dc08cf5da8b6b0485147d946d8202f2be211c17bcef3a0fc26570217dd3
Image Size: 160.9 MB (last binary layer 20.12 MB)
Image Created:  12 days ago

###

What's in the docker logs?

Regards
Aleks

> SYSTEM INFO:
> 
> $ oc version
> oc v1.5.0+031cbe4
> kubernetes v1.5.2+43a9be4
> features: Basic-Auth GSSAPI Kerberos SPNEGO
> 
> Server https://10.3.1.55:8443
> openshift v1.5.0+031cbe4
> kubernetes v1.5.2+43a9be4
> 
> $ oc status
> In project default on server https://10.3.1.55:8443
> 
> svc/docker-registry - 172.30.211.144:5000
>   dc/docker-registry deploys
> docker.io/openshift/origin-docker-registry:v1.5.0 deployment #1
> deployed 3 hours ago - 1 pod
> 
> svc/kubernetes - 172.30.0.1 ports 443, 53->8053, 53->8053
> 
> svc/router - 172.30.74.148 ports 80, 443, 1936
>   dc/router deploys docker.io/openshift/origin-haproxy-router:v1.5.0
> deployment #1 deployed 6 days ago - 1 pod
> 
> I tried to access the Openshift internal Docker registry via HTTPS:
> 
> $ curl -v https://172.30.211.144:5000/v2/_catalog
> 
> * About to connect() to 172.30.211.144 port 5000 (#0)
> * Trying 172.30.211.144...
> * Connected to 172.30.211.144 (172.30.211.144) port 5000 (#0)
> * Initializing NSS with certpath: sql:/etc/pki/nssdb
> *
> CAfile: 
> /home/dvogel/dfc-openshift-project/server/openshift.local.config/master/ca.crt
> CApath: none
> * NSS error -12263 (SSL_ERROR_RX_RECORD_TOO_LONG)
> * SSL received a record that exceeded the maximum permissible length.
> * Closing connection 0
> curl: (35) SSL received a record that exceeded the maximum
> permissible length.
> 
> $ echo $CURL_CA_BUNDLE
> /home/dvogel/dfc-openshift-project/server/openshift.local.config/master/ca.crt
> 
> $ echo $KUBECONFIG
> /home/dvogel/dfc-openshift-project/server/openshift.local.config/master/admin.kubeconfig
> 
> The permissions on the config files are readable by all.
> 
> I was able to log on to the internal Docker Registry using 'docker
> login':
> 
> oc login -u test
> tkn=`oc whoami -t`
> docker login -u test -p $tkn 172.30.211.144:5000
> -> Login Succeeded  
> 
> However, when I tried to push an image to the registry, I generated
> an error similar to the one above:
> 
>   E0502 10:58:47.3291548176 docker_manager.go:2294] container
> start failed: ErrImagePull: Error response from daemon:
> {"message":"error parsing HTTP 400 response body: unexpected end of
> JSON input: \"\""}
> 
> $ docker tag ubuntu 172.30.211.144:5000/myubuntu
> $ docker push 172.30.211.144:5000/myubuntu
> The push refers to a repository [172.30.211.144:5000/myubuntu]
> 73e5d2de6e3e: Preparing
> 08f405d988e4: Preparing
> 511ddc11cf68: Preparing
> a1a54d352248: Preparing
> 9d3227c1793b: Preparing
> error parsing HTTP 400 response body: unexpected end of JSON input: ""
> 
> $ docker info
> 

Re: "Application is not available" Post 1.5 Upgrade

2017-05-02 Thread Aleksandar Lazic
Hi Rahul.

On Tue, 2 May 2017 17:30:34 -0400,
Rahul Agarwal <rahul334...@gmail.com> wrote:

> Hi Team,
> 
> I upgraded from 1.4.1 to 1.5 version and after successful upgrade the
> webpage shows below error which was fine earlier.
> 
> Application is not available
> 
> The application is currently not serving requests at this endpoint.
> It may not have been started or is still starting.
> 
> Possible reasons you are seeing this page:
> 
>- *The host doesn't exist.* Make sure the hostname was typed
> correctly and that a route matching this hostname exists.
>- *The host exists, but doesn't have a matching path.* Check if
> the URL path was typed correctly and that the route was created using
> the desired path.
>- *Route and path matches, but all pods are down.* Make sure that
> the resources exposed by this route (pods, services, deployment
> configs, etc) have at least one pod running.
>
> Any help is appreciated.

Please can you post the output of:

oc export route 
oc describe pod 
oc logs 
oc get events
oc get pod -n default
oc rsh -n default  cat haproxy.config

> Thanks,
> Rahul

-- 
Best regards
Aleksandar Lazic - ME2Digital e. U.
https://me2digital.online/



Re: Routing & External Service

2017-05-02 Thread Aleksandar Lazic
Hi David.

On Wed, 26 Apr 2017 16:09:03 +0100,
David Conde <da...@donedeal.ie> wrote:

> I am looking for a bit of advice on the best practice for routing.
> 
> I have a service which I do not control. It lives behind an ELB and
> runs over plain HTTP.
> 
> I would like to add the following to it via an Openshift cluster:
> 1) HTTPS termination
> 2) CORS headers
> 3) Enhance the request to include some API keys via http headers
> 
> I could deploy a new service that adds 2 + 3 with 1 added via a
> route. But haproxy in front of haproxy seems overkill.
> 
> I was looking at the potential of a service with a type of
> ExternalName but that does not help with adding the headers needed.
> 
> I'm also not too keen on adding a configmap to all the haproxy config
> in the router config just to add the few extra headers for a single
> route.
> 
> What would be the recommended way to achieve the above?

Well I would use the Passthrough Termination

https://docs.openshift.org/latest/architecture/core_concepts/routes.html#passthrough-termination

with "insecureEdgeTerminationPolicy: Redirect" and terminate on your
custom haproxy.
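A sketch of such a route (name, host and service are hypothetical; the tls stanza is the point here):

```shell
# Build the route manifest; the oc call itself needs a cluster, so it is
# only shown as a comment at the end.
ROUTE=$(cat <<'EOF'
apiVersion: v1
kind: Route
metadata:
  name: api-passthrough
spec:
  host: api.example.com
  to:
    kind: Service
    name: custom-haproxy
  tls:
    termination: passthrough
    insecureEdgeTerminationPolicy: Redirect
EOF
)
printf '%s\n' "$ROUTE"
# printf '%s\n' "$ROUTE" | oc create -f -
```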

In case you don't want to build your own haproxy image, you can
use my custom haproxy image:

https://hub.docker.com/r/me2digital/haproxy17/

based on

https://gitlab.com/aleks001/haproxy17-centos
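As a rough illustration of points 1-3 in such a custom haproxy, a fragment could look like the following (certificate path, header values and the ELB address are all hypothetical and untested):

```
frontend fe_api
    bind :8443 ssl crt /etc/haproxy/tls.pem                    # 1) HTTPS termination
    default_backend be_elb

backend be_elb
    http-response set-header Access-Control-Allow-Origin "*"   # 2) CORS header
    http-request set-header X-Api-Key "REPLACE-ME"             # 3) API key header
    server elb my-elb.example.com:80
```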

HTH


> Thanks,
> *David Conde*

-- 
Best regards
Aleksandar Lazic - ME2Digital e. U.
https://me2digital.online/



SSO in openshift with openshift

2017-04-22 Thread Aleksandar Lazic
Hi.

I have created a blog entry about the SSO topic in and for OpenShift.

https://me2digital.online/2017/04/21/sso-in-openshift-with-openshift/

It would be nice to receive some feedback.

Best regards
Aleks
---
Kind regards
Aleksandar Lazic - ME2Digital e. U.
https://me2digital.online/
UID-Nr.: ATU71765716
IBAN: AT27 1420 0200 1096 9086
Firmenbuch: 462678 i


Re: Get OpenShift API address

2017-04-04 Thread Aleksandar Lazic
Hi.

Do I understand you right that you want to add the external URL into the 
template via JavaScript on the webconsole?

I don't think that this is currently possible.

Maybe you find some hints on this page.

https://docs.openshift.org/latest/install_config/web_console_customization.html

Regards
Aleks

Tako Schotanus <tscho...@redhat.com> wrote on 04.04.2017:
>So I know you can use "openshift.default.svc.cluster.local" for
>accessing
>the OpenShift's console API internally from within a Pod.
>We actually use that to create a new project for the same user, but now
>we
>want to redirect the user to that newly created project.
>So my question is:
>
>Is there a way to get the OpenShift console's _external_ IP/host from
>inside a Pod?
>
>Right now we have to add a field to our template where we're basically
>telling the user: "could you look at the address bar of your browser
>and
>paste the hostname/IP-address you see there into this field?". Not very
>user-friendly :)

---
Kind regards
Aleksandar Lazic - ME2Digital e. U.
https://me2digital.online/
UID-Nr.: ATU71765716
IBAN: AT27 1420 0200 1096 9086
Firmenbuch: 462678 i


Fwd: Undelivered Mail Returned to Sender

2017-02-07 Thread Aleksandar Lazic
FYI - any news about this topic?


 Forwarded message 
>From : mailer-dae...@mail.zoho.com
To : <al...@me2digital.eu>
Date : Tue, 07 Feb 2017 21:54:10 +0100
Subject : Undelivered Mail Returned to Sender
 Forwarded message 
 > This message was created automatically by mail delivery software. 
 >  A message that you sent could not be delivered to one or more of its 
 > recipients. This is a permanent error.  
 >  
 > us...@redhat.com INVALID_ADDRESS, ERROR_CODE :554, ERROR_CODE :5.7.1 
 > <us...@redhat.com>: Recipient address rejected: Access denied 
 >  
 >  
 >  
 >  Received:from mail.zoho.com by mx.zohomail.com 
 > with SMTP id 1486500843548979.1563441658715; Tue, 7 Feb 2017 12:54:03 
 > -0800 (PST) 
 >  Message-ID:<15a1a5ad019.c52cd574133561.3364614241113815...@me2digital.eu> 
 >  Date:Tue, 07 Feb 2017 21:54:03 +0100 
 >  From:Aleksandar Lazic <al...@me2digital.eu> 
 >  User-Agent:Zoho Mail 
 >  To:"Bendik Paulsrud" <bendik.pauls...@gmail.com> 
 >  Cc:<us...@redhat.com> 
 >  Subject:Re: How to patch RHEL while running OpenShift 
 >  Content-Type:text/plain; charset="UTF-8"
--- 
Kind regards
Aleksandar Lazic - ME2Digital e. U.
https://me2digital.online/
UID-Nr.: ATU71765716
IBAN: AT27 1420 0200 1096 9086
Firmenbuch: 462678 i





Re: Template with Secret and Parameter

2017-01-30 Thread Aleksandar Lazic
Hi.

  On Mon, 30 Jan 2017 10:04:35 +0100 Lionel Orellana <lione...@gmail.com> 
wrote  
 > Put another way, how can I create a secret from user input?

You will need to base64 encode the given string.

echo -n 'pass'|base64
This value is then the SVN_PASSWORD
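A small sketch of the whole round trip (the template file name is hypothetical; note that a secret's `stringData` field takes the raw value, while `data` requires the base64-encoded form):

```shell
# Encode the user-supplied value, then hand it to the template parameter.
ENCODED=$(printf '%s' 'pass' | base64)
printf '%s\n' "$ENCODED"   # cGFzcw==
# On a cluster you would then run:
#   oc process -f template.json -p SVN_PASSWORD="$ENCODED" | oc create -f -
```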

What's the output of:

oc process -f ... -p ... 

BR Aleks

---
Kind regards
Aleksandar Lazic - ME2Digital e. U.
https://me2digital.online/
UID-Nr.: ATU71765716
IBAN: AT27 1420 0200 1096 9086
Firmenbuch: 462678 i


 > On 30 January 2017 at 18:45, Lionel Orellana <lione...@gmail.com> wrote:
 > Hello
 > I'm trying to create a secret as part of a template but the value of the 
 > secret should come from a parameter. Something like this:
 > {"kind": "Template", "apiVersion": "v1", ...},
 > "objects": [
 >   ... {"kind": "Secret", "apiVersion": "v1",
 >     "metadata": {"name": "svn-pwd", "creationTimestamp": null},
 >     "stringData": {"password": "${SVN_PASSWORD}"}}],
 > "parameters": [{"name": "SVN_PASSWORD", "value": "",
 >     "description": "Will be stored as a secret", "required": true}]}
 > 
 > The secret is getting created but it's not resolving the parameter value 
 > (i.e. the value is literally ${SVN_PASSWORD}).
 > Is there a way to resolve the template parameter in the secret definition?
 > Thanks  
 > 
 >  





Re: Logging breaking metrics in OCP 3.3?

2017-01-26 Thread Aleksandar Lazic






On Thu, 26 Jan 2017 11:57:34 +0100, Aleksandar Lazic al...@me2digital.eu wrote:

Let's say it's a known issue ;-)

https://github.com/openshift/openshift-ansible/pull/2617
https://github.com/openshift/openshift-ansible/issues/2629

BR Aleks

On Wed, 25 Jan 2017 17:21:01 +0100, Josh Baird joshba...@gmail.com wrote:

---
Kind regards
Aleksandar Lazic - ME2Digital e. U.
https://me2digital.online/
UID-Nr.: ATU71765716
IBAN: AT27 1420 0200 1096 9086
Firmenbuch: 462678 i

Replying to my own email here, but the problem is that the Ansible playbooks
are removing metricsPublicURL and loggingPublicURL from master-config.yaml on
the masters.  This is happening even though I do *not* comment out the
openshift_hosted_logging_master_public_url and
openshift_hosted_metrics_master_public_url options in the Ansible hosts file.

Could this be a bug?

On Wed, Jan 25, 2017 at 10:13 AM, Josh Baird joshba...@gmail.com wrote:

Hi all,

Facing an odd situation here and was hoping for some feedback.

I'm trying to stand up two OCP 3.3 HA environments (3/3/4) with logging and
metrics.  I'm installing metrics from the Ansible playbooks like so:

# metrics
openshift_hosted_metrics_deploy=true
openshift_hosted_metrics_storage_kind=nfs
openshift_hosted_metrics_storage_access_modes=['ReadWriteOnce']
openshift_hosted_metrics_storage_host=fc-cifs03.corp.follett.com
openshift_hosted_metrics_storage_nfs_directory=/OCPQA_INFRA01
openshift_hosted_metrics_storage_volume_name=metrics
openshift_hosted_metrics_storage_volume_size=75Gi
openshift_hosted_metrics_public_url=https://hawkular-metrics.qa.ocp.domain.com/hawkular/metrics




This results in a successful metrics installation.  Metrics are displayed for
each pod on the overview page, etc.

Next, I want to go back and add logging, so I comment out the metrics stuff
from the 'hosts' file, and add the logging config:

# logging
openshift_hosted_logging_deploy=true
openshift_hosted_logging_storage_kind=nfs
openshift_hosted_logging_storage_access_modes=['ReadWriteOnce']
openshift_hosted_logging_storage_host=nfsserver
openshift_hosted_logging_storage_nfs_directory=nfsexport
openshift_hosted_logging_storage_volume_name=logging
openshift_hosted_logging_storage_volume_size=150Gi

After a successful playbook run, the logging pods come up fine (not sure if
they are totally operational), but something is causing my metrics data to
vanish.  Metrics stats are no longer visible on the pod overview page or in the
pod details page.  I don't see any errors on any of the logging and/or metrics
pods.  I have been able to consistently reproduce this in multiple environments.



Any ideas on how to troubleshoot this?

Thanks.

---
Kind regards
Aleksandar Lazic - ME2Digital e. U.
https://me2digital.online/
UID-Nr.: ATU71765716
IBAN: AT27 1420 0200 1096 9086
Firmenbuch: 462678 i


Re: Run a HTTP/2 service on Openshift

2017-01-13 Thread Aleksandar Lazic


Hi Karl.

you can use the passthrough mode and terminate the gRPC service with https
directly.

https://docs.openshift.org/latest/architecture/core_concepts/routes.html#secured-routes
=Passthrough Termination

This is the TCP mode you are searching for ;-).
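A minimal sketch (service name and hostname are hypothetical); with passthrough termination the router forwards the raw TLS bytes, so HTTP/2 is negotiated via ALPN directly between the gRPC client and the pod:

```shell
# The route can be created with a single oc command; we only print it here
# because running it needs a cluster.
CMD='oc create route passthrough grpc --service=grpc-svc --hostname=grpc.apps.example.com'
printf '%s\n' "$CMD"
```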



Hth
Aleks

On Fri, 13 Jan 2017 11:22:19 +0100, Karl Gerhard karl_g...@gmx.at wrote:

Hi,

is it possible to run gRPC on Openshift?
As far as I understand gRPC uses HTTP/2 and the Openshift Router/Haproxy
doesn't support HTTP/2 yet. The only way I can think of to make this work is
modifying the Haproxy config to use "mode tcp" instead of "mode http", which is
the default. But the Haproxy config is automatically generated by the Go binary
in the pod, so this solution would probably involve a lot of fiddling around and
end up being not very pretty.

Has anyone succeeded in getting gRPC or any HTTP/2 service to work on Openshift
and could share his/her experience?

Regards
Karl





---
Kind regards
Aleksandar Lazic - ME2Digital e. U.
https://me2digital.online/
UID-Nr.: ATU71765716
IBAN: AT27 1420 0200 1096 9086
Firmenbuch: 462678 i







AW: Router Sharding

2016-09-26 Thread Aleksandar Lazic
Hi.

I agree with you, and I have tried to contribute to the doc, but that wasn't
an easy task, so I stopped.
Maybe I was also too naïve, so blame me for stopping the contribution.

@1: Currently that's not possible; you will need to add the label for the
dedicated router to every route.

'oc create route …'

has no option to set labels; you will need to use

oc expose service ... --labels='router=one' --hostname='...'

or you can use the labels in the webconsole.

Oh and by the way the default router MUST also have ROUTE_LABELS if you don’t 
want to expose all routes to the default router.
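A hedged sketch of @1 (the router dc names and label values are made up; the selectors must match whatever labels you put on your routes):

```shell
INTERNAL_SELECTOR='router=internal'
EXTERNAL_SELECTOR='router=external'
if command -v oc >/dev/null 2>&1; then
    # Each router only serves routes matching its ROUTE_LABELS selector:
    oc set env dc/router-internal ROUTE_LABELS="$INTERNAL_SELECTOR"
    # The default router needs one too, or it keeps serving every route:
    oc set env dc/router ROUTE_LABELS="$EXTERNAL_SELECTOR"
    # Routes opt in via their labels:
    oc expose service myapp --labels="$INTERNAL_SELECTOR" \
        --hostname='myapp.internal.example.com'
else
    echo "oc not installed - commands shown for reference only"
fi
```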

@2: you will need the new template from OCP 3.3; there are additional env
variables necessary to be able to use more than one router on the same node.

https://github.com/openshift/origin/blob/master/images/router/haproxy/conf/haproxy-config.template#L147
https://github.com/openshift/origin/blob/master/images/router/haproxy/conf/haproxy-config.template#L184

and you need to add the additional ports to the iptables chain
'OS_FIREWALL_ALLOW' on the router nodes.

@3: This would be a little bit tricky on the same node due to the fact that the

https://github.com/openshift/origin/blob/master/images/ipfailover/keepalived/lib/failover-functions.sh#L11-L12

only handles one config file. Maybe there is a way with *VIPS but I have never
tried this.

Hth

Aleks

From: users-boun...@lists.openshift.redhat.com
[mailto:users-boun...@lists.openshift.redhat.com] On behalf of Srinivas Naga
Kotaru (skotaru)
Sent: Monday, 26 September 2016 21:31
To: Andrew Lau ; users@lists.openshift.redhat.com
Subject: Re: Router Sharding


Current sharding documentation is very high level, doesn’t cover step by step 
actual real world use cases.

Anyway, I succeeded in creating 2 shards. Lots of questions on this topic on
how to proceed next …


1.  How to tell a project that all apps created in this project should use
router #1 or router #2?

2.  Now we have 3 routers (the default created as part of installation + the
additional 2 routers created). How do the ports work? 80, 443 & 1936 are
assigned to the default router. I changed the ports to 81/444/1937 and
82/445/1938 for shards #1 and #2 respectively. Do these ports open
automatically, or is explicit action required?

3.  Ipfailover (floating VIP) is bound to the default router. Do we need to
create additional IP failover pods with different IPs matched to shards #1 and
#2? Or can we share the same IP failover pods with a single floating VIP for
the newly created shards as well?

--
Srinivas Kotaru

From: Andrew Lau
Date: Friday, September 23, 2016 at 7:41 PM
To: Srinivas Naga Kotaru, "users@lists.openshift.redhat.com"
Subject: Re: Router Sharding
Subject: Re: Router Sharding

There are docs here:
- 
https://docs.openshift.org/latest/architecture/core_concepts/routes.html#router-sharding
- 
https://docs.openshift.org/latest/install_config/router/default_haproxy_router.html#creating-router-shards


On Sat, 24 Sep 2016 at 06:13, Srinivas Naga Kotaru (skotaru) wrote:
Just saw 3.3 features blog

https://blog.openshift.com/whats-new-openshift-3-3-cluster-management/

We're rethinking our cluster design and want to consolidate to 1 cluster per
data center. Initially we were planning on 2 clusters per data center, serving
internal and external traffic each with its own dedicated cluster.

Consolidating to a single cluster per DC will offer us multiple advantages.
We are currently running the latest 3.2.1 release.

Is Router Sharding available in the 3.2.x branch, or do we need to wait for
3.3? I was thinking this feature had been available from 3.x onwards as per the
available documentation. Not sure what is meant for the upcoming 3.3.

We really want to take advantage of this feature and test it ASAP. The current
documentation is not clear, or explains it only at a high level.

Can you help me or point me to the right documentation which explains step by
step how to test this feature?

Can we control routes at the project level so that clients can't modify their
routes to move from prod to non-prod or from internal to external routers?

--
Srinivas Kotaru


Re: OpenShift Nginx issue - setgid(107) failed (1: Operation not permitted)

2016-09-23 Thread Aleksandar Lazic
Hi.

You can remove the user line ( http://nginx.org/en/docs/ngx_core_module.html ) 
in the nginx conf.

You will also face some problems with log writing.

I have a example setup here

https://github.com/git001/nginx-osev3

which solves the issue.

Best regards
Aleks

From: Charles Moulliard
Sent: Friday, 23 September, 19:03
Subject: OpenShift Nginx issue - setgid(107) failed (1: Operation not permitted)
To: users

Hi,

Can somebody help me concerning this (setgid(107) failed (1: Operation not 
permitted)) issue reported here - 
https://github.com/jimmidyson/minishift/issues/105#issuecomment-249245765
 ?

Many thanks in advance

Charles



Re: HAProxy Router

2016-09-09 Thread Aleksandar Lazic
Hi.

Isn't the "router" also an ingress router?

What's the difference for you between the "router" and the "ingress".

Best regards
Aleks

Von: Clayton Coleman
Gesendet: Freitag, 9. September 05:19
Betreff: Re: HAProxy Router
An: Diego Castro
Cc: users

Actually - we're not deprecating or removing routers or the router.  We're just 
adapting to also support ingress.  There will be a very long period where both 
routes and ingress happily coexist.

On Thu, Sep 8, 2016 at 11:35 AM, Diego Castro wrote:


On Wed, Sep 7, 2016 at 1:21 PM, Andy Grimm wrote:

On Wed, Sep 7, 2016 at 11:22 PM, Diego Castro wrote:

Hello, list.

We have been running Origin since last November and i'd like to share some 
experiences, pains and thoughts.

Our origin cluster has about 25 servers including masters,nodes and routers. We 
have roughly 500 applications exposing services and a bunch of HPA firing up 
containers all the time.

1) Resource consumption: i noticed during the day an increase of memory
consumption due to multiple reloads; a lot of processes keep running until
their connections finish or OOM kills them. Another issue regarding restarts is
that, due to the TCP SYN DROP iptables rule, we are facing some high latencies.
What can we do to reduce restart overhead?

You seem to have several questions intertwined here, and I am by no means an 
expert on this, but on the "lots of processes keep running" topic, you may be 
hitting 
https://bugzilla.redhat.com/show_bug.cgi?id=1364870
 (though this manifests as more of a CPU consumption issue than a memory 
issue).   In short, what we've seen is cases where haproxy connections are 
"orphaned", so the old processes never exit -- they continuously think they 
have one or two "jobs" left, but they never actually handle them.  I think this 
is fixed in the latest 1.5.x release of haproxy, but have not had a chance to 
test yet.


In 3.3 there are some more knobs you can set to limit the length of time that
an haproxy will stay around after a restart; you may wish to try playing with
that... but the underlying bug is still there in 3.3.

Understood, i'll give it a try.





2) Metrics: Would be nice to pull some metrics from the routers, something like
general network i/o and per-endpoint traffic. I found a prometheus exporter,
but due to process restarts the endpoint states are cleaned. HAProxy 1.6 has a
fix for that
(http://blog.haproxy.com/2015/10/14/whats-new-in-haproxy-1-6/).
Do we have plans to upgrade to 1.6? What kind of metrics do we have available
today?

The lack of metrics is a problem, and there's no great answer to your question.

There are no plans to go to 1.6 at the moment, but we do need to solve the
stats problem, and we need to solve the reload problem, so we may end up
moving.  But we are investigating upstream ingress and trying to get support
for that into OpenShift so we can migrate and deprecate the router.

Nice, i'd like to track this work, can you point me on the right direction?

-ben



---

Diego Castro / The CloudFather
GetupCloud.com - Eliminamos a Gravidade


AW: router host restrictions

2016-09-06 Thread Aleksandar Lazic
Hi Miloslav.

> From: Miloslav Vlach
> Sent: Tuesday, 06 September 2016 07:30
> To: users@lists.openshift.redhat.com
> Subject: router host restrictions
>
> Hi all,
>
> I would like to ask if there is any option to secure the route.
>
> For example, in apache I have the host restriction - only some
> IP address or host are allowed to get the virtual host.
>
> Is there any option how to do this in openshift ?

You can try to adopt the template as described here
https://docs.openshift.org/latest/install_config/install/deploy_router.html#using-configmap-replace-template

The syntax for the acls in haproxy is described here

http://cbonte.github.io/haproxy-dconv/1.5/configuration.html#7

The ip matching is described here
http://cbonte.github.io/haproxy-dconv/1.5/configuration.html#7.1.6

An example for blocking specific ips can be found here
http://cbonte.github.io/haproxy-dconv/1.5/configuration.html#4.2-block
http://cbonte.github.io/haproxy-dconv/1.5/configuration.html#4.2-http-request

Hth
Aleks


AW: How to set an proxy in the openshift origin to pull the image

2016-07-23 Thread Aleksandar Lazic
Hi.

This is the username and password for the proxy.
You only need to add this if your proxy needs authentication.

This is a standard url syntax.
https://en.wikipedia.org/wiki/Uniform_Resource_Locator#Syntax

Maybe the doc should show that this part is optional, something like
this:

HTTP_PROXY=http://[USERNAME:PASSWORD@]10.0.1.1:8080/
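A tiny sketch of both variants (addresses and credentials are made up):

```shell
# Without authentication the credential part is simply omitted.
PROXY_PLAIN='http://10.0.1.1:8080/'
PROXY_AUTH='http://alice:s3cret@10.0.1.1:8080/'
printf 'HTTP_PROXY=%s\n' "$PROXY_PLAIN"
printf 'HTTP_PROXY=%s\n' "$PROXY_AUTH"
# On a node you would append the chosen line to /etc/sysconfig/docker and
# restart docker:
#   echo "HTTP_PROXY=$PROXY_AUTH" >> /etc/sysconfig/docker
#   systemctl restart docker
```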

I won't, though, after the disaster with my last contribution attempt.

--
Best regards
Aleksandar Lazic
Cloudwerkstatt GmbH : Lassallestraße 7b : A-1020 Vienna : Austria
aleksandar.la...@cloudwerkstatt.com

From: users-boun...@lists.openshift.redhat.com
[mailto:users-boun...@lists.openshift.redhat.com] On behalf of 周华康
Sent: Friday, 22 July 2016 15:19
To: aleks <al-openshiftus...@none.at>
Cc: users <users@lists.openshift.redhat.com>
Subject: Re: How to set an proxy in the openshift origin to pull the image

Thanks,
But do you know the exact meaning of the parameter USERNAME:PASSWORD?
Is it the username and password for the openshift cluster? Or for the VPN, or
something else?
meaning?

Proxying Docker Pull

OpenShift Origin node hosts need to perform push and pull operations to Docker 
registries. If you have a registry that does not need a proxy for nodes to 
access, include the NO_PROXY parameter with the registry’s host name, the 
registry service’s IP address, and service name. This blacklists that registry, 
leaving the external HTTP proxy as the only option.

1.  Edit the /etc/sysconfig/docker file and add the variables in shell format:

HTTP_PROXY=http://USERNAME:PASSWORD@10.0.1.1:8080/
HTTPS_PROXY=https://USERNAME:PASSWORD@10.0.0.1:8080/
NO_PROXY=master.hostname.example.com,172.30.123.45,docker-registry.default.svc.cluster.local

2.  Restart the Docker service:

# systemctl restart docker
-- Original --
From:  "aleks" <al-openshiftus...@none.at>
Date:  Fri, Jul 22, 2016 03:03 AM
To:  "周华康" <huakang.z...@qq.com>
Cc:  "users" <users@lists.openshift.redhat.com>
Subject:  Re: How to set an proxy in the openshift origin to pull the image

Hi.

On 21-07-2016 09:33, 周华康 wrote:

> Hi
> When I try to deploy the example apps, the log shows that I need
> to set a proxy, but how?
> log:
> "API error (500): Get
> https://registry-1.docker.io/v2/library/dancer-example/manifests/latest:
> Get
> https://auth.docker.io/token?scope=repository%3Alibrary%2Fdancer-example%3Apull=registry.docker.io:
> dial tcp: lookup auth.docker.io on 10.202.72.116:53: read udp
> 10.161.67.132:57753->10.202.72.116:53: i/o timeout\n"

Maybe this can help

https://docs.openshift.org/latest/install_config/http_proxies.html

BR aleks


AW: OpenShift origin: internal routing with services

2016-07-20 Thread Aleksandar Lazic
Hi Den.

Yes, this is called router-sharding.
I have tried to add some PRs to the user doc, but I stopped due to the fact
that it took too long.
https://github.com/openshift/openshift-docs/pull/2139

You can run a router just for internal usage.

When you search for sharding in the doc you will find some info:
https://docs.openshift.org/latest/welcome/index.html

maybe this doc will also help

https://github.com/openshift/origin/blob/master/docs/router_sharding.md

One important note: when you decide to use router labels, you should use them
for both internal & external routers.

--
Best regards
Aleksandar Lazic
Cloudwerkstatt GmbH : Lassallestraße 7b : A-1020 Vienna : Austria
aleksandar.la...@cloudwerkstatt.com

From: users-boun...@lists.openshift.redhat.com
[mailto:users-boun...@lists.openshift.redhat.com] On behalf of Den Cowboy
Sent: Wednesday, 20 July 2016 15:27
To: aleks <al-openshiftus...@none.at>
Cc: users@lists.openshift.redhat.com
Subject: RE: OpenShift origin: internal routing with services

I read the documentation about it.
It's not very clear to me, but it seems to be something where you can deploy 
multiple routers, and router A will handle the routes of projects A, B and C while 
router B will handle the routes of projects D, E, F, or something?

I don't really see how I can create a router which handles routes 
internally (without going to the outside).
> Date: Mon, 18 Jul 2016 19:04:22 +0200
> From: al-openshiftus...@none.at<mailto:al-openshiftus...@none.at>
> To: dencow...@hotmail.com<mailto:dencow...@hotmail.com>
> CC: users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>
> Subject: Re: OpenShift origin: internal routing with services
>
> On 14-07-2016 09:27, Den Cowboy wrote:
>
> > Hi,
> >
> > At the moment we have a setup like this:
> > project A
> > project B
> >
> > project A contains a pod A which needs an API which is running in pod B
> > in project B.
> > Pod A has an environment variable: "api-route.xxx.dev/api"
> > So when I'm going to that route in my browser I'm able to see the API
> > and this works fine (okay we're able to configure https route etc)
> >
> > But we'd like to keep everything internal, without using routes.
> > Thanks to the ovs-multitenant plugin we're able to "join" the
> > networks of our projects (namespaces), and I'm able to ping from
> > inside pod A to the service of pod B in project B.
> > ping api-service.project-b
> > api-service.project-b.svc.cluster.local (172.30.xx.xx) 56(84) bytes of
> > data.
> >
> > So we're able to access the pod from its service without using an
> > external route.
> > But like I told in the beginning. Our API is on api-route.xxx.dev/api
> > so I have to go to something like 172.30.xx.xx:8080/api.
> >
> > Is there a way to achieve this goal? We want to connect to a 'subpath'
> > of our service without using routes.
> > Is this possible?
>
> I think you can go another way and use an internal router with
> router sharding:
>
> https://docs.openshift.org/latest/architecture/core_concepts/routes.html#router-sharding
>
> and deploy the internal api on the internal router.
>
> Best regards
> Aleks


Re: access restrictions to private apps

2016-05-11 Thread Aleksandar Lazic
Hi Sebastian.

you have two options from my point of view.

.) create your own haproxy image and config
https://docs.openshift.org/latest/install_config/install/deploy_router.html#deploying-a-customized-haproxy-router

.) use an internal router with ROUTE_LABELS
https://github.com/openshift/origin/blob/388478c40e751c4295dcb9a44dd69e5ac65d0e3b/pkg/cmd/infra/router/router.go#L53
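A minimal sketch of the second option (the label value is illustrative, not from the thread): the ROUTE_LABELS environment variable on the router's deployment restricts which routes that router serves, so internal-only apps can be kept off the externally reachable router.

```yaml
# Fragment of a separately deployed router DeploymentConfig (illustrative)
spec:
  template:
    spec:
      containers:
      - name: router
        env:
        - name: ROUTE_LABELS
          value: "zone=internal"  # this router only admits routes labeled zone=internal
```

Routes for app1/app2 would stay unlabeled (served by the public router), while everything else gets the `zone=internal` label and is only reachable through the restricted router.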

Best regards
Aleks


From: users-boun...@lists.openshift.redhat.com 
 on behalf of Sebastian Wieseler 

Sent: Wednesday, May 11, 2016 05:31
To: users
Subject: access restrictions to private apps

Dear community,

Our current setup is *.my.wildcard.domain.example.com -> Load Balancer -> 
{Master1, Master2, Master3}
with the router pods deployed on the master nodes.

Is it possible to allow only app1.my.wildcard.domain.example.com and 
app2.my.wildcard.domain.example.com from the outside (0.0.0.0/0)
and for the rest (*.my.wildcard.domain.example.com) restrict it to pre-defined 
IP addresses?

How could we implement those restrictions?
What are best practices to allow only certain IPs to certain applications?


Thanks a lot in advance.
Greetings,
   Sebastian





Re: Public URL's

2016-04-27 Thread Aleksandar Lazic
Hi Fran.


With two router pairs and different domains, yes, it's possible.

If I understand you right.


Best regards

Aleks


From: users-boun...@lists.openshift.redhat.com 
 on behalf of Fran Barrera 

Sent: Wednesday, April 27, 2016 19:03
To: users
Subject: Public URL's

Hello,

Is it possible to publish OpenShift with two URLs? I can see the parameter 
PublicURL in master-config.yaml, but I need to have two public URLs. I don't know 
if this is possible or if it has to be done another way.

Best Regards.


Re: route hostname generation in template

2016-04-08 Thread Aleksandar Lazic
Hi Dale.

I have solved this with the Downward API:

https://docs.openshift.org/latest/dev_guide/downward_api.html

We use the following in the template:

###
kind: DeploymentConfig
spec:
  template:
    spec:
      containers:
      - env:
        - name: PROJECT
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
###

"parameters": [
{
"name": "PROJECT",
"description": "Project namespace",
"required": true
}


The namespace is then available in the container as the env var PROJECT.

But I'm not sure if you can use the same syntax for the Routes.
Maybe it would be a good idea to have some default variables available in
templates, such as:

namespace
defaultdomain
...
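For example, with a PROJECT template parameter like the one above, a route host could be built from the namespace at template-processing time. A sketch — the apps domain is an assumption, and this only works for parameters substituted by `oc process`, not for runtime env vars:

```yaml
# Route fragment inside a template (domain is illustrative)
apiVersion: v1
kind: Route
metadata:
  name: "${NAME}"
spec:
  host: "${NAME}-${PROJECT}.apps.example.com"  # built from template parameters
  to:
    kind: Service
    name: "${NAME}"
```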

BR Aleks

From: users-boun...@lists.openshift.redhat.com 
 on behalf of Dale Bewley 

Sent: Friday, April 08, 2016 21:29
To: users@lists.openshift.redhat.com
Subject: route hostname generation in template

I'm creating a template which has 2 services. One is a python gunicorn and one 
is httpd.

I want the first service reachable at app-project.domain/ and the second 
service to be reachable at app-project.domain/static. That works, but I'm 
having trouble automating it in a template.

Unfortunately, if I use the default value of ${APPLICATION_DOMAIN}, it includes the 
service name and I wind up with a distinct hostname in each route: 
app-static-project.domain and app-py-project.domain

{
  "kind": "Route",
  "apiVersion": "v1",
  "metadata": {
"name": "${NAME}-static"
  },
  "spec": {
"host": "${APPLICATION_DOMAIN}",
"path": "/${STATIC_DIR}",
"to": {
  "kind": "Service",
  "name": "${NAME}-static"
},
"tls": {
  "termination" : "edge"
}
  }
},
{
  "kind": "Route",
  "apiVersion": "v1",
  "metadata": {
"name": "${NAME}-py"
  },
  "spec": {
"host": "${APPLICATION_DOMAIN}",
"to": {
  "kind": "Service",
  "name": "${NAME}-py"
},
"tls": {
  "termination" : "edge"
}
  }
},


I could prompt for a hostname, but I would like to auto-generate the hostname 
to include the project by default. What I would like is 
app-project.domain in both routes.


Is there a list somewhere of the variables available to templates?



Higher priority for issue 5946

2016-03-31 Thread Aleksandar Lazic
Dear redhat and list.


How can we argue that this issue 
https://github.com/openshift/origin/issues/5946 and this bug 
https://bugzilla.redhat.com/show_bug.cgi?id=1317159 should be raised to 
priority 1 and implemented in the next version?


I assume that 3.2 is almost feature complete so it would be nice to have this 
feature in 3.3.


I have described my point of view in my blog http://wp.me/pgAVf-44 .


Could anyone from Red Hat please take a look at this? Thank you.


Regards


Aleksandar Lazic

Cloudwerkstatt GmbH : Lassallestraße 7b - 1020 Wien - Austria

aleksandar.la...@cloudwerkstatt.com


Re: Openshift Routing Haproxy Logging

2016-03-18 Thread Aleksandar Lazic
Hi John


your guess is right.

https://docs.openshift.org/latest/install_config/install/deploy_router.html#deploying-a-customized-haproxy-router


You will also need a syslog server which receives the logs.

I have built such a solution, which you can use as a base if you want:


https://github.com/git001/haproxy


Just follow the steps to get the haproxy.template from OpenShift and adapt the 
log line.
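The relevant change in the template is the global/defaults log configuration. A sketch — the syslog address is an assumption, and most stock router configs have no `log` directive at all, which is why nothing is logged:

```
# haproxy.template fragment (sketch; syslog endpoint is an assumption)
global
    log 10.1.2.3:514 local0 info   # send logs to an external syslog receiver

defaults
    log global
    option httplog                 # per-request access-log format for HTTP traffic
```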


BR Aleks


From: users-boun...@lists.openshift.redhat.com 
 on behalf of Skarbek, John 

Sent: Friday, March 18, 2016 13:46
To: users@lists.openshift.redhat.com
Subject: Openshift Routing Haproxy Logging


Good Morning,

Anyone have any advice of plucking the access logs out of the haproxy router?

I'm pushing a TLS feature, and while I love the fact that I get 502 responses, 
at this moment I have zero ability to debug this.

My guess is that I need to create a custom haproxy image to add some ability to 
log to some location. The haproxy config running in the container currently 
doesn't appear to do any logging whatsoever.


--
John Skarbek


Re: Java heap and arguments

2016-03-14 Thread Aleksandar Lazic
Hi.

I would use

oc env dc ... JAVA_TOOL_OPTIONS='...'

https://docs.oracle.com/javase/7/docs/webnotes/tsg/TSG-VM/html/envvars.html
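In the deployment configuration this ends up as a plain environment variable that the JVM picks up at startup. A sketch — the heap values are illustrative:

```yaml
# Fragment of a DeploymentConfig container spec (values are illustrative)
env:
- name: JAVA_TOOL_OPTIONS
  value: "-Xms256m -Xmx512m"  # read automatically by the JVM, no wrapper script needed
```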

Regarding Srinivas' question: is this the preferred solution?

Best regards

Aleksandar Lazic
Cloudwerkstatt GmbH : Lassallestraße 7b - 1020 Wien - Austria
aleksandar.la...@cloudwerkstatt.com
Sent from Outlook<http://taps.io/outlookmobile> Mobile

From: Srinivas Naga Kotaru (skotaru)
Sent: Tuesday, 15 March 05:40
Subject: Java heap and arguments
To: users

OSE 2.x supported clients adding Java arguments themselves, including setting up 
heap and other values.

What is the recommended procedure for OSE 3.x? Do we have any documentation 
which describes this?

--

Srinivas Kotaru



Re: api and console port : 8443

2016-03-13 Thread Aleksandar Lazic
Hi.

Sorry to stick with this, but we have put the default router into a DMZ and used 
this document for the firewall setup:

https://docs.openshift.org/latest/install_config/install/prerequisites.html#prereq-network-access

We also had the requirement to separate intranet, company, and extranet traffic, 
and this is all possible with the OpenShift router options.

However, as your solution works, it's perfect for you.

Cheers

Aleksandar Lazic
Cloudwerkstatt GmbH : Lassallestraße 7b – 1020 Wien – Austria
aleksandar.la...@cloudwerkstatt.com
Sent from Outlook<http://taps.io/outlookmobile> Mobile



On Sun, Mar 13, 2016 at 11:11 AM -0700, "Srinivas Naga Kotaru (skotaru)" 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:

For node routing, we have to use DMZ-based proxy servers. These are the endpoints 
for clients, and they proxy to the OpenShift routers.

OpenShift routers don't support a DMZ. We can't directly expose or put 
OpenShift nodes into a DMZ, as they share the same VXLAN with the application 
nodes. I heard there is tunneling, but I didn't understand its concepts, and the 
documentation isn't clear.

Since we have multiple data centers we have something like

GLB —> DC RP —> Openshift Routers —> Openshift Nodes


--
Srinivas Kotaru

From: Aleksandar Lazic 
<aleksandar.la...@cloudwerkstatt.com<mailto:aleksandar.la...@cloudwerkstatt.com>>
Date: Saturday, March 12, 2016 at 1:43 AM
To: skotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Cc: "users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>" 
<users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>>, 
Jordan Liggitt <jligg...@redhat.com<mailto:jligg...@redhat.com>>, 
"ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>
Subject: Re: api and console port : 8443


Hi.


To be more precise.


Do you use the openshift ability to route based on labels ( ROUTE_LABELS ) and 
dedicated management labeled nodes?

BR Aleks


From: 
users-boun...@lists.openshift.redhat.com<mailto:users-boun...@lists.openshift.redhat.com>
 
<users-boun...@lists.openshift.redhat.com<mailto:users-boun...@lists.openshift.redhat.com>>
 on behalf of Aleksandar Lazic 
<aleksandar.la...@cloudwerkstatt.com<mailto:aleksandar.la...@cloudwerkstatt.com>>
Sent: Friday, March 11, 2016 20:54
To: Srinivas Naga Kotaru (skotaru); Jordan Liggitt; Clayton Coleman
Cc: users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>
Subject: Re: api and console port : 8443


Hi.


You mean different network routes, right?


What else have you changed to use the master on 443?


Which version of HA have you chosen?

https://docs.openshift.com/enterprise/3.1/architecture/infrastructure_components/kubernetes_infrastructure.html#high-availability-masters


BR Aleks


____
From: Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>>
Sent: Friday, March 11, 2016 19:17
To: Aleksandar Lazic; Jordan Liggitt; Clayton Coleman
Cc: users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>
Subject: Re: api and console port : 8443

Thanks for sharing your experience and writeup

We decided to go a different route. We don't want to involve the runtime layer 
with management traffic, and we also want to simplify as much as possible, since 
we have multiple clusters in each lifecycle (non-prod, prod, etc.)

This is final approach we decided to go

1. Change port 8443 to 443 during a fresh Ansible installation (our dev builds 
starting this week onwards)
2. Use a DNS-based load balancer to forward to the 3 masters in each cluster.
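For step 1, the advanced-install inventory exposes variables for the API and console ports. A sketch — the variable names are assumed from the openshift-ansible project of that era, so verify them against your installer version:

```ini
# Ansible inventory fragment (variable names assumed from openshift-ansible)
[OSEv3:vars]
openshift_master_api_port=443
openshift_master_console_port=443
```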

Hope this works. Please comment if it doesn't work, so we can take a fresh look.

--
Srinivas Kotaru

From: Aleksandar Lazic 
<aleksandar.la...@cloudwerkstatt.com<mailto:aleksandar.la...@cloudwerkstatt.com>>
Date: Friday, March 11, 2016 at 2:29 AM
To: skotaru <skot...@cisco.com<mailto:skot...@cisco.com>>, Jordan Liggitt 
<jligg...@redhat.com<mailto:jligg...@redhat.com>>, 
"ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>
Cc: "users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>" 
<users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>>
Subject: Re: api and console port : 8443


Hi.


I have read this post and the solution works.

The handicap from my point of view is that you will need to use official 
certificates on the master(s).

I have written a more or less detailed description of how we at Cloudwerkstatt 
solved this issue:


https://alword.wordpress.com/2016/03/11/make-openshift-console-available-on-port-443-https/

[https://alword.files.wordpress.com/2016/03/osv3-cons-443.png]