Re: Openshift origin - nodes in vlans

2017-06-20 Thread Frederic Giloux
Hi Lukasz,

this is not an unusual setup. You will need:
- the SDN port: 4789 UDP (both directions: masters/nodes to nodes)
- the kubelet port: 10250 TCP (masters to nodes)
- the DNS port: 8053 TCP/UDP (nodes to masters)
If you can't reach VLAN B pods from VLAN A, the issue is probably with the
SDN port. Mind that it uses UDP.
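
On RHEL/CentOS hosts, opening those ports might look like the following — a sketch only, using firewalld; the zone/source handling is an assumption and must be adapted to however traffic between the VLANs is actually filtered:

```shell
# Sketch: open the inter-VLAN OpenShift ports with firewalld.
# Run on the hosts that must accept the traffic; adapt zones/sources
# to your real VLAN filtering (these commands use the default zone).
firewall-cmd --permanent --add-port=4789/udp   # SDN (VXLAN): masters/nodes <-> nodes
firewall-cmd --permanent --add-port=10250/tcp  # kubelet: masters -> nodes
firewall-cmd --permanent --add-port=8053/tcp   # DNS: nodes -> masters
firewall-cmd --permanent --add-port=8053/udp   # DNS: nodes -> masters
firewall-cmd --reload
```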

Regards,

Frédéric

On Wed, Jun 21, 2017 at 4:13 AM, Łukasz Strzelec 
wrote:

> -- Hello,
>
> I have to install OSO with dedicated HW nodes for one of my customers.
>
> Current cluster is placed in VLAN (for the sake of this question) called:
> VLAN_A
>
> The Customer's nodes have to be placed in another VLAN: VLAN_B
>
> Now the question: what ports and routes do I have to set up to get this to
> work?
>
> The assumption is that traffic between vlans is filtered by default.
>
>
> Now, what I already did:
>
> I opened the ports in accordance with the documentation, then scaled up
> the cluster (Ansible playbook).
>
> At first sight, everything was working fine. Nodes were ready. I can
> deploy a simple pod (e.g. hello-openshift), but I can't reach the
> service. During the S2I process, pushing into the registry ends with the
> message "no route to host". I've checked this out, and for nodes placed
> in VLAN_A (the same one as the registry and router) everything works
> fine. The problem is in the traffic between VLANs A <-> B.
>
> I can't reach any service IP of pods deployed on the newly added nodes,
> so traffic between pods over the service subnet is not allowed. The
> question is: what should I open? The whole 172.30.0.0/16 between those
> two VLANs, or dedicated rules to/from the registry, router, metrics and
> so on?
>
>
> --
> Ł.S.
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 
*Frédéric Giloux*
Senior Middleware Consultant
Red Hat Germany

fgil...@redhat.com M: +49-174-172-4661

redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

Red Hat GmbH, http://www.de.redhat.com/ Sitz: Grasbrunn,
Handelsregister: Amtsgericht München, HRB 153243
Geschäftsführer: Paul Argiry, Charles Cachera, Michael Cunningham, Michael
O'Neill


Openshift origin - nodes in vlans

2017-06-20 Thread Łukasz Strzelec
-- Hello,

I have to install OSO with dedicated HW nodes for one of my customers.

Current cluster is placed in VLAN (for the sake of this question) called:
VLAN_A

The Customer's nodes have to be placed in another VLAN: VLAN_B

Now the question: what ports and routes do I have to set up to get this to
work?

The assumption is that traffic between vlans is filtered by default.


Now, what I already did:

I opened the ports in accordance with the documentation, then scaled up
the cluster (Ansible playbook).

At first sight, everything was working fine. Nodes were ready. I can
deploy a simple pod (e.g. hello-openshift), but I can't reach the
service. During the S2I process, pushing into the registry ends with the
message "no route to host". I've checked this out, and for nodes placed
in VLAN_A (the same one as the registry and router) everything works
fine. The problem is in the traffic between VLANs A <-> B.

I can't reach any service IP of pods deployed on the newly added nodes,
so traffic between pods over the service subnet is not allowed. The
question is: what should I open? The whole 172.30.0.0/16 between those
two VLANs, or dedicated rules to/from the registry, router, metrics and
so on?


-- 
Ł.S.


Re: oc whoami bug?

2017-06-20 Thread Louis Santillan
Whoops.  Hit the Send button early.

$ ocx () { ( oc project >/dev/null 2>&1 ) && oc "$@" || echo "ERROR: You may
not be logged in!" ; }

$ ocx get pods -o wide
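
A slightly hardened variant of the wrapper above — a sketch, assuming only that `oc` is on the PATH and that `oc whoami` exits non-zero without a valid session:

```shell
# Sketch of a login-checking wrapper (assumes the `oc` binary is on the PATH).
# `oc whoami` exits non-zero when there is no valid session, so it serves as
# a cheap login check; quoting "$@" preserves arguments containing spaces.
ocx() {
  if ! oc whoami >/dev/null 2>&1; then
    echo "ERROR: You may not be logged in!" >&2
    return 1
  fi
  oc "$@"
}
```

`ocx get pods -o wide` then behaves like plain `oc get pods -o wide`, but fails fast with a clear message once the token has expired.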

---

LOUIS P. SANTILLAN

SENIOR CONSULTANT, OPENSHIFT, MIDDLEWARE & DEVOPS

Red Hat Consulting, NA US WEST 

lpsan...@gmail.com  M: 3236334854

TRIED. TESTED. TRUSTED. 

On Tue, Jun 20, 2017 at 3:46 PM, Louis Santillan 
wrote:

> $ ocx () { oc project 2&>/dev/null && oc $@ || echo "ERROR: You may not be
> logged in!" ; }
> $ ocx get pods -o wide
>
>
> On Tue, Jun 20, 2017 at 11:34 AM, Jordan Liggitt 
> wrote:
>
>> `oc whoami -t` doesn't talk to the server at all... it just prints your
>> current session's token
>>
>>
>> On Tue, Jun 20, 2017 at 2:31 PM, Louis Santillan 
>> wrote:
>>
>>> The `oc` command always looks for the current session in
>>> `~/.kube/config`.  It doesn't know if a session is expired or not since
>>> session timeouts are configurable and could have changed since the last API
>>> call was made to the master(s).  You can run your `oc` commands with
>>> `--loglevel=8` to see this interaction play out.
>>>
>>> You could also run your command like so (in bash):
>>>
>>> $ ocx () { oc whoami && oc $@ || echo "ERROR: You may not be logged in!"
>>> ; }
>>> $ ocx get pods -o wide
>>>
>>>
>>> On Tue, Jun 20, 2017 at 6:51 AM, Philippe Lafoucrière <
>>> philippe.lafoucri...@tech-angels.com> wrote:
>>>

 On Mon, Jun 19, 2017 at 4:56 PM, Louis Santillan 
 wrote:

> The default user for any request is `system:anonymous` when a user is not
> logged in or a valid token is not found.  Depending on your cluster, this
> usually has almost no access (less than `system:authenticated`).  Maybe an
> RFE is in order (oc could suggest logging in if a request is unsuccessful and
> the found user happens to be `system:anonymous`).


 That's what I suspect, but when I'm logged in, I expect the token to be
 mine.
 In this particular case, the session had expired, and nothing warned
 that the issued token was for `system:anonymous` instead of me.

 Thanks,
 Philippe


>>>
>>>
>>>
>>
>


Re: oc whoami bug?

2017-06-20 Thread Louis Santillan
$ ocx () { oc project 2&>/dev/null && oc $@ || echo "ERROR: You may not be
logged in!" ; }
$ ocx get pods -o wide


On Tue, Jun 20, 2017 at 11:34 AM, Jordan Liggitt 
wrote:

> `oc whoami -t` doesn't talk to the server at all... it just prints your
> current session's token
>
>
> On Tue, Jun 20, 2017 at 2:31 PM, Louis Santillan 
> wrote:
>
>> The `oc` command always looks for the current session in
>> `~/.kube/config`.  It doesn't know if a session is expired or not since
>> session timeouts are configurable and could have changed since the last API
>> call was made to the master(s).  You can run your `oc` commands with
>> `--loglevel=8` to see this interaction play out.
>>
>> You could also run your command like so (in bash):
>>
>> $ ocx () { oc whoami && oc $@ || echo "ERROR: You may not be logged in!"
>> ; }
>> $ ocx get pods -o wide
>>
>>
>> On Tue, Jun 20, 2017 at 6:51 AM, Philippe Lafoucrière <
>> philippe.lafoucri...@tech-angels.com> wrote:
>>
>>>
>>> On Mon, Jun 19, 2017 at 4:56 PM, Louis Santillan 
>>> wrote:
>>>
 The default user for any request is `system:anonymous` when a user is not
 logged in or a valid token is not found.  Depending on your cluster, this
 usually has almost no access (less than `system:authenticated`).  Maybe an
 RFE is in order (oc could suggest logging in if a request is unsuccessful and
 the found user happens to be `system:anonymous`).
>>>
>>>
>>> That's what I suspect, but when I'm logged in, I expect the token to be
>>> mine.
>>> In this particular case, the session had expired, and nothing warned
>>> that the issued token was for `system:anonymous` instead of me.
>>>
>>> Thanks,
>>> Philippe
>>>
>>>
>>
>>
>>
>


Re: oc whoami bug?

2017-06-20 Thread Louis Santillan
The `oc` command always looks for the current session in `~/.kube/config`.
It doesn't know if a session is expired or not since session timeouts are
configurable and could have changed since the last API call was made to the
master(s).  You can run your `oc` commands with `--loglevel=8` to see
this interaction play out.

You could also run your command like so (in bash):

$ ocx () { oc whoami && oc "$@" || echo "ERROR: You may not be logged in!" ; }
$ ocx get pods -o wide


On Tue, Jun 20, 2017 at 6:51 AM, Philippe Lafoucrière <
philippe.lafoucri...@tech-angels.com> wrote:

>
> On Mon, Jun 19, 2017 at 4:56 PM, Louis Santillan 
> wrote:
>
>> The default user for any request is `system:anonymous` when a user is not
>> logged in or a valid token is not found.  Depending on your cluster, this
>> usually has almost no access (less than `system:authenticated`).  Maybe an
>> RFE is in order (oc could suggest logging in if a request is unsuccessful and
>> the found user happens to be `system:anonymous`).
>
>
> That's what I suspect, but when I'm logged in, I expect the token to be mine.
> In this particular case, the session had expired, and nothing warned that
> the issued token was for `system:anonymous` instead of me.
>
> Thanks,
> Philippe
>
>


Re: split long log records

2017-06-20 Thread Rich Megginson

On 06/19/2017 10:48 AM, Peter Portante wrote:

Hi Andre,

This is a hard-coded Docker size.  For background see:

  * https://bugzilla.redhat.com/show_bug.cgi?id=1422008, "[RFE] Fluentd
    handling of long log lines (> 16KB) split by Docker and indexed into
    several ES documents"
  * And the reason for the original 16 KB limit:
    https://bugzilla.redhat.com/show_bug.cgi?id=1335951, "heavy logging
    leads to Docker daemon OOM-ing"


Not only that, but a PR to give Docker the ability to configure the size
limit [1] was rejected due to $reasons . . .




The processor that reads the json-file documents for sending to
Graylog needs to be endowed with the smarts to handle reconstruction
of those log lines, most likely with some other upper bound
(as a container is not required to emit newlines in stdout or stderr).

Regards,

-peter


[1] For the 90% of cases where this problem could have been easily 
solved by simply bumping the limit.  Of course there will be times when 
no appropriate limit can be found.  Of course a client that consumes 
docker logs must be able to handle partial messages, by reconstructing 
them in the logging collector, or further on down the chain.
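
As a concrete illustration of that reconstruction (a sketch, not code from
any shipped collector): consecutive json-file entries can be joined until one
ends in a newline, with an upper bound for containers that never emit one:

```python
import json

def reassemble(lines, max_bytes=1 << 20):
    """Join Docker json-file records split at the 16 KB boundary.

    Docker terminates a complete message with '\n' in the "log" field,
    so a chunk without a trailing newline is a partial record. max_bytes
    bounds reassembly for containers that never emit a newline.
    """
    buf = ""
    for line in lines:
        rec = json.loads(line)
        buf += rec["log"]
        if buf.endswith("\n") or len(buf) >= max_bytes:
            yield buf
            buf = ""
    if buf:  # flush a trailing partial record
        yield buf

chunks = [
    '{"log":"The quick brown fox jumps ov","stream":"stdout",'
    '"time":"2017-06-19T15:27:33.130524954Z"}',
    '{"log":"er the lazy dog.\\n","stream":"stdout",'
    '"time":"2017-06-19T15:27:33.130636562Z"}',
]
print(list(reassemble(chunks)))  # prints the single reassembled message
```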




On Mon, Jun 19, 2017 at 11:43 AM, Andre Esser
 wrote:

We use Graylog for log visualisation. However, it turns out that's not the
culprit. Log entries in the pod's log file are already split into 16 KB
chunks like this:

{"log":"The quick brown[...]jumps ov","stream":"stdout",\
"time":"2017-06-19T15:27:33.130524954Z"}

{"log":"er the lazy dog.\n","stream":"stdout",\
"time":"2017-06-19T15:27:33.130636562Z"}

So, to cut a long story short, is there any way to increase the size limit
before a log record gets split into two JSON records?




On 2017-06-19 16:21, Peter Portante wrote:

Who setup Graylog for openshift?

-peter

On Mon, Jun 19, 2017 at 11:18 AM, Andre Esser
 wrote:

I meant the limit in Graylog. Although I just noticed that it is actually
16384 (16KB). The line split after 2048 characters only applies on the
web
UI.

Is this a Graylog limitation and can it be extended?


On 2017-06-19 14:21, Jessica Forrester wrote:


Are you asking about logs in the web console, the `oc logs` command, or
in
Kibana?

On Mon, Jun 19, 2017 at 8:29 AM, Andre Esser > wrote:

  Hi,

  In Origin 1.4.1 all log records longer than 2048 characters are
  split over two lines (longer than 4096 characters over three lines
  and so on).

  Is there any way to increase this limit?


  Thanks,

  Andre









Re: oc whoami bug?

2017-06-20 Thread Philippe Lafoucrière
On Mon, Jun 19, 2017 at 4:56 PM, Louis Santillan 
wrote:

> The default user for any request is `system:anonymous` when a user is not
> logged in or a valid token is not found.  Depending on your cluster, this
> usually has almost no access (less than `system:authenticated`).  Maybe an
> RFE is in order (oc could suggest logging in if a request is unsuccessful and
> the found user happens to be `system:anonymous`).


That's what I suspect, but when I'm logged in, I expect the token to be mine.
In this particular case, the session had expired, and nothing warned that
the issued token was for `system:anonymous` instead of me.

Thanks,
Philippe


Re: OpenShift build from git repo

2017-06-20 Thread Ben Parees
On Tue, Jun 20, 2017 at 9:24 AM, Jonathan Alvarsson <
jonathan.alvars...@genettasoft.com> wrote:

> Hi!
>
> I am trying to initialise an OpenShift build from a git repository
> using an ssh-key as a secret:
>
> $ oc new-build --name=modelingweb
> g...@bitbucket.org:genettasoft/gs_modelling_web.git --build-secret
> deploymentkey
>
> But it seems like my git url is not recognised as a git url:
>
> error: no match for "g...@bitbucket.org:genettasoft
> /gs_modelling_web.git"
>
> The 'oc new-build' command will match arguments to the following types:
>
>   1. Images tagged into image streams in the current project or
> the 'openshift' project
>  - if you don't specify a tag, we'll add ':latest'
>   2. Images in the Docker Hub, on remote registries, or on the
> local Docker engine
>   3. Git repository URLs or local paths that point to Git repositories
>
> --allow-missing-images can be used to force the use of an image
> that was not matched
>
> See 'oc new-build -h' for examples.
>
> What am I missing?
>

Seems related to this issue that just got opened, let's hash it out there:
https://github.com/openshift/origin/issues/14761
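
One thing worth trying while that issue is sorted out (an assumption on my
part, not a confirmed fix): `new-build`'s argument matching recognises
URL-style Git addresses, so the scp-style `git@host:path` form can be
rewritten as an `ssh://` URL:

```shell
# Hypothetical workaround: same repository, expressed as an ssh:// URL
# (note the "/" instead of ":" after the host). Flags kept as in the post.
oc new-build --name=modelingweb \
  ssh://git@bitbucket.org/genettasoft/gs_modelling_web.git \
  --build-secret deploymentkey
```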



>
> --
> // Jonathan Alvarsson
>
> Ps. I have also posted this on
> https://stackoverflow.com/questions/44652848/openshift-build-from-git-repo
> Ds.
>
>



-- 
Ben Parees | OpenShift


OpenShift build from git repo

2017-06-20 Thread Jonathan Alvarsson
Hi!

I am trying to initialise an OpenShift build from a git repository
using an ssh-key as a secret:

$ oc new-build --name=modelingweb
g...@bitbucket.org:genettasoft/gs_modelling_web.git --build-secret
deploymentkey

But it seems like my git url is not recognised as a git url:

error: no match for "g...@bitbucket.org:genettasoft/gs_modelling_web.git"

The 'oc new-build' command will match arguments to the following types:

  1. Images tagged into image streams in the current project or
the 'openshift' project
 - if you don't specify a tag, we'll add ':latest'
  2. Images in the Docker Hub, on remote registries, or on the
local Docker engine
  3. Git repository URLs or local paths that point to Git repositories

--allow-missing-images can be used to force the use of an image
that was not matched

See 'oc new-build -h' for examples.

What am I missing?

--
// Jonathan Alvarsson

Ps. I have also posted this on
https://stackoverflow.com/questions/44652848/openshift-build-from-git-repo
Ds.



Re: split long log records

2017-06-20 Thread Andre Esser

Understood. Thanks, Peter.


On 2017-06-19 17:48, Peter Portante wrote:

Hi Andre,

This is a hard-coded Docker size.  For background see:

  * https://bugzilla.redhat.com/show_bug.cgi?id=1422008, "[RFE] Fluentd
    handling of long log lines (> 16KB) split by Docker and indexed into
    several ES documents"
  * And the reason for the original 16 KB limit:
    https://bugzilla.redhat.com/show_bug.cgi?id=1335951, "heavy logging
    leads to Docker daemon OOM-ing"

The processor that reads the json-file documents for sending to
Graylog needs to be endowed with the smarts to handle reconstruction
of those log lines, most likely with some other upper bound
(as a container is not required to emit newlines in stdout or stderr).

Regards,

-peter

On Mon, Jun 19, 2017 at 11:43 AM, Andre Esser
 wrote:

We use Graylog for log visualisation. However, it turns out that's not the
culprit. Log entries in the pod's log file are already split into 16 KB
chunks like this:

{"log":"The quick brown[...]jumps ov","stream":"stdout",\
"time":"2017-06-19T15:27:33.130524954Z"}

{"log":"er the lazy dog.\n","stream":"stdout",\
"time":"2017-06-19T15:27:33.130636562Z"}

So, to cut a long story short, is there any way to increase the size limit
before a log record gets split into two JSON records?


