Re: Aggregating container logs using Kibana

2016-04-13 Thread Eric Wolinetz
On Wed, Apr 13, 2016 at 3:16 AM, Lorenz Vanthillo <
lorenz.vanthi...@outlook.com> wrote:

> I saw on https://github.com/openshift/origin/issues/8358:
>
>
> $ oc debug pod/logging-fluentd-80xzt -- cat /proc/self/attr/current
> Debugging with pod/debug-logging-fluentd-80xzt, original command:  entrypoint>
> Waiting for pod to start ...
> system_u:system_r:svirt_lxc_net_t:s0:c216,c576
>
> Removing debug pod ...
>
>
> Yup. The problem was what I thought: it's being run under the
> svirt_lxc_net_t SELinux type, which doesn't have access to var_log_t. If
> you don't want to disable SELinux, you'll need to follow the instructions
> for creating a new SELinux type that I posted above.
>
> So I understand what's wrong, but I don't see why the workaround (changing
> the service account permissions from anyuid to privileged) isn't working
> for me, and I don't want to create a new SELinux type.
>

Sorry about that, we had missed a step.  You'll need to delete your
daemonset, edit your logging-fluentd-template to add a property to your
container spec, and recreate your daemonset so that it properly runs as
privileged and is no longer confined by SELinux enforcement.

$ oc delete daemonset logging-fluentd

$ oc edit template/logging-fluentd-template


# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this
# file will be reopened with the relevant failures.
#
apiVersion: v1
kind: Template
labels:
  component: fluentd
. . .
objects:
- apiVersion: extensions/v1beta1
  kind: DaemonSet
. . .
  spec:
    selector:
      matchLabels:
        component: fluentd
        provider: openshift
    template:
      metadata:
        labels:
          component: fluentd
          provider: openshift
        name: fluentd-elasticsearch
      spec:
        containers:
        . . .
          name: fluentd-elasticsearch

          # insert below here
          securityContext:
            privileged: true
          # insert above here

          resources:
            limits:
              cpu: 100m
. . .

$ oc process logging-fluentd-template | oc create -f -
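Once the daemonset is recreated, the debug check from earlier in this thread can be repeated; a privileged container should no longer report the svirt_lxc_net_t type (the pod name below is a placeholder):

```
$ oc get pods -l component=fluentd
$ oc debug pod/<fluentd-pod> -- cat /proc/self/attr/current
```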


> --
> From: lorenz.vanthi...@outlook.com
> To: ewoli...@redhat.com
> CC: users@lists.openshift.redhat.com
> Subject: RE: Aggregating container logs using Kibana
> Date: Wed, 13 Apr 2016 09:30:48 +0200
>
>
> Fixed the issue with the node selector mismatch.
> So now I have 3 fluentd pods: one on each of my 2 normal nodes and one on
> my infra node. But still the same permission issue:
> NAME                          READY     STATUS      RESTARTS   AGE
> logging-curator-1-j7mz0       1/1       Running     0          17m
> logging-deployer-39qcz        0/1       Completed   0          47m
> logging-es-605u5g7g-1-36owl   1/1       Running     0          17m
> logging-fluentd-4uqx1         1/1       Running     0          46m
> logging-fluentd-dez5r         1/1       Running     0          2m
> logging-fluentd-m50nj         1/1       Running     0          46m
> logging-kibana-1-wfog2        2/2       Running     0          16m
>
> --
> From: lorenz.vanthi...@outlook.com
> To: ewoli...@redhat.com
> CC: users@lists.openshift.redhat.com
> Subject: RE: Aggregating container logs using Kibana
> Date: Wed, 13 Apr 2016 09:21:47 +0200
>
> Hi Eric,
>
> Thanks for your reply and the follow up of this issue.
> I've created a new Origin 1.1.6 cluster (2 days ago) but still have the
> same issue.
> My environment is one master (with a non-schedulable node), 2 'normal'
> nodes and one infra node.
> I still get the permission denied error (the documentation is up to date,
> so I didn't even have to perform the workaround manually).
> - system:serviceaccount:logging:aggregated-logging-fluentd is in the
> privileged scc by default.
>
> The logging-deployer-template creates the services and 2 fluentd pods (on
> the normal nodes).
> The pods appear after performing this command:
>
> oc label nodes --all logging-infra-fluentd=true
>
> So my nodes got that label, including the unschedulable node on my master.
> So it's normal that it failed there, but I don't know why it fails on my
> infra node. (I defined in my master-config that projects land by default
> on the other 2 nodes; maybe that's why, but I don't know if it's relevant
> to my issue.)
> I also don't really understand why 'oc process logging-support-template |
> oc create -f -' is only cited in the troubleshooting part.
> Still the error: [error]: unexpected error error_class=Errno::EACCES
> error=#
>
> oc get is
> NAME                    DOCKER REPO                                        TAGS            UPDATED
> logging-auth-proxy      docker.io/openshift/origin-logging-auth-proxy      latest,v0.0.1   4 minutes ago
> logging-curator         docker.io/openshift/origin-logging-curator         latest          4 minutes ago
> logging-elasticsearch   docker.io/openshift/origin-logging-elasticsearch   latest          4 minutes ago
> logging-fluentd         docker.io/openshift/origin-logging-fluentd         latest          4 minutes ago
> 

Secure route Origin 1.1.6

2016-04-13 Thread Den Cowboy
I have a docker container which communicates on port 80 with another server.
So it's using http over an insecure route.

Now we're going to use https (443). The other server has a certificate (.jks).
How should I set this up? I have to create a secure route, but which type?
- passthrough
- edge
- re-encrypt

Do I have to convert its .jks to .pem and copy it into my route?

I read this about passthrough:
The destination pod is responsible for serving certificates for the
traffic at the endpoint.
So can I just create a passthrough route and that's it? Because that did not 
seem to work.
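For reference, one common path here (an assumption, not confirmed for this setup): passthrough only works if the pod itself terminates TLS with the right certificate, while edge and re-encrypt routes need the certificate and key in PEM form inside the route. A JKS can typically be converted to PKCS#12 with keytool (`keytool -importkeystore -srckeystore server.jks -destkeystore server.p12 -deststoretype PKCS12`), and PEM extracted from that with openssl. The sketch below generates a throwaway self-signed certificate so it is self-contained; all filenames are placeholders:

```shell
# Throwaway key and self-signed cert (stand-in for the real server cert).
openssl req -x509 -newkey rsa:2048 -keyout server.key -out server.crt \
  -days 1 -nodes -subj "/CN=example.test"

# Bundle into PKCS#12 (in the real case, this file comes from the
# keytool -importkeystore conversion of the .jks).
openssl pkcs12 -export -in server.crt -inkey server.key \
  -out server.p12 -passout pass:changeit

# Extract PEM certificate and key for use in an edge/re-encrypt route.
openssl pkcs12 -in server.p12 -passin pass:changeit -nokeys -out route-cert.pem
openssl pkcs12 -in server.p12 -passin pass:changeit -nocerts -nodes -out route-key.pem
```

With the PEM files in hand, an edge or re-encrypt route's tls section can reference them (certificate, key, and for re-encrypt a destinationCACertificate for the backend); for passthrough the route carries no certificates at all, which matches the doc snippet quoted above.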
  ___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: making deployment easier

2016-04-13 Thread Aleksandar Kostadinov

Candide Kemmler wrote on 04/13/2016 12:12 PM:

Hi Aleksandar,


I might not be able to help a lot with your specific issues, but could you 
explain more about them and possibly include some relevant logs?

 From your email it is not clear exactly what issues you're hitting.
With a more detailed explanation and specific examples, you are much more
likely to receive a helpful answer.


I'm not hitting an "issue" per se; I'm looking for a guide on how to package a 
complex setup made of multiple microservices in a way that makes it easy to 
deploy them all at once on OpenShift Online, and easy to update.

For me personally, I would like for instance to be able to spin up many 
instances of my services at will, but doing so requires at least a couple hours 
of hard work each time.


In your original message you said you're hitting timing issues when 
using a template with everything inside. Can you explain what exactly 
those issues were?




Re: making deployment easier

2016-04-13 Thread Aleksandar Kostadinov

Candide Kemmler wrote on 04/13/2016 10:53 AM:

My application is made up of several modules and jar dependencies (you guessed 
it, it's written in java). I have a complex setup involving a nexus repo and 
jenkins. To deploy one of the services that make up my app, I first have to 
build jar dependencies so they are available on my local repo. Then I am able 
to call start-build for any pod...

So this works, it's just really cumbersome.

The one thing that really bothers me is that the initial setup is really 
complicated. I have tried using templates, but there are timing issues, so 
to play it safe I have every service in my system split up into two different 
scripts: one with the ImageStream, BuildConfig and DeploymentConfig, and the 
other with the Service definition and optional Route. If I put all objects in 
the same template, things don't work correctly. Mind, I have 5 such services, 
so that means that deployment has me go to the command line 10 times. Plus I 
have other dependencies (mysql and couchdb), so that's something else I need to 
deploy manually.


I might not be able to help a lot with your specific issues, but could 
you explain more about them and possibly include some relevant logs?


From your email it is not clear exactly what issues you're hitting.
With a more detailed explanation and specific examples, you are much more 
likely to receive a helpful answer.
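For what it's worth, all of those object kinds can live in a single template; a minimal sketch (names, the git URL, image and port are placeholders, not taken from your setup):

```yaml
apiVersion: v1
kind: Template
metadata:
  name: my-service-template
parameters:
- name: APP_NAME
  value: my-service
objects:
- apiVersion: v1
  kind: ImageStream
  metadata:
    name: ${APP_NAME}
- apiVersion: v1
  kind: BuildConfig
  metadata:
    name: ${APP_NAME}
  spec:
    source:
      git:
        uri: https://example.com/repo.git
    strategy:
      type: Source
    output:
      to:
        kind: ImageStreamTag
        name: ${APP_NAME}:latest
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    name: ${APP_NAME}
  spec:
    replicas: 1
    # The image change trigger makes the deployment wait for the build to
    # push an image into the image stream, instead of racing it.
    triggers:
    - type: ImageChange
      imageChangeParams:
        automatic: true
        containerNames:
        - ${APP_NAME}
        from:
          kind: ImageStreamTag
          name: ${APP_NAME}:latest
    template:
      metadata:
        labels:
          app: ${APP_NAME}
      spec:
        containers:
        - name: ${APP_NAME}
          image: ${APP_NAME}:latest
- apiVersion: v1
  kind: Service
  metadata:
    name: ${APP_NAME}
  spec:
    ports:
    - port: 8080
    selector:
      app: ${APP_NAME}
```

The image change trigger is often what resolves the kind of ordering problems you describe, since nothing deploys until the first build has produced an image.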



What I'm looking for is a way to package my entire app (that's 7 different 
pods) in one go.

The end goal is to make it available on OpenShift Online. Updates to the app 
have to be automatic, reliable and fast.

Do you guys have any advice, pointers, etc?



