Re: The cluster-logging pods (Elasticsearch, Kibana, Fluentd) don't start - Openshift 4.1

2019-11-07 Thread Jeff Cantrill
On Wed, Nov 6, 2019 at 6:48 PM Full Name  wrote:

> Thank you Rich for your prompt reply.
>
> After viewing the
> "manifests/4.2/cluster-logging.v4.2.0.clusterserviceversion.yaml" on the
> cluster-logging-operator pod, I confirm that the (minKubeVersion: 1.16.0)
> line added in GitHub is missing from the manifest file on the CLO pod on
> my cluster.
>

The minKubeVersion was corrected for 4.2 in:
https://github.com/openshift/cluster-logging-operator/pull/267
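For reference, minKubeVersion sits at the top level of the CSV spec. A minimal
sketch of what the corrected manifest should contain (the metadata name below
is illustrative, not copied from the shipped file):

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: clusterlogging.4.2.0          # illustrative name
spec:
  minKubeVersion: 1.16.0              # the line that was missing on the CLO pod
  # ... remainder of the CSV spec unchanged ...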
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Kibana permissions issue

2019-07-16 Thread Jeff Cantrill
We have some recent issues logged against this which are related to load
and to the number of projects a user can view.  This [1] is a high-level
document that may be of interest to you on how the permissions are
generated and what constitutes an 'admin user'.

[1]
https://github.com/openshift/origin-aggregated-logging/blob/master/docs/access-control.md#role-definitions-and-permissions
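Before digging further, a couple of quick checks may help narrow things down;
a sketch using the user name and namespace quoted below (adjust for your
cluster):

# Confirm the user can still read pod logs in the project backing the index
oc auth can-i get pods/log -n domains-dev --as=domains

# List the role bindings the permission generation is derived from
oc get rolebindings -n domains-dev -o wide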


On Tue, Jul 16, 2019 at 11:05 AM Shane Ripley  wrote:

> Greetings, I have a permissions issue with Kibana that I can't seem to
> figure out. I've reviewed all the settings that I can think of, but nothing
> seems to be wrong.
>
> The domains user is the admin of several projects, and up until recently,
> was able to view logs in kibana for all of its projects. I have no idea
> what changed, but now I can no longer view any logs.
>
>  [security_exception] no permissions for [indices:data/read/search] and
> User [name=domains 
>
>
> oc describe rolebinding.rbac -n domains-dev |more
>
> Name: admin
> Labels:   
> Annotations:  
> Role:
>   Kind:  ClusterRole
>   Name:  admin
> Subjects:
>   Kind  Name Namespace
>      -
>   User  domains
>
> I'm at a loss as to what to check next. Other users can view logs, so the
> issue seems to be limited to just the domains user.
>
> I've redeployed the openshift-logging project and the oauth/kibana pod, but
> that didn't seem to help.
>
> Thanks.
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>


--
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Regarding Logging

2018-11-19 Thread Jeff Cantrill
It doesn't appear you have any fluentd pods, which are responsible for
collecting logs from the other pods.  Are your nodes labeled with
'logging-infra-fluentd=true'?
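If they are not, labeling the nodes lets the fluentd daemonset schedule a
collector on each of them; a sketch (the node name is a placeholder):

# label a single node
oc label node <node-name> logging-infra-fluentd=true

# or label every node in the cluster
oc label nodes --all logging-infra-fluentd=true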

On Mon, Nov 19, 2018 at 7:28 AM Kasturi Narra  wrote:

> Hello Everyone,
>
>    I have a setup where I am trying to install logging using OCP 3.9 +
> CNS 3.11. I see that the logging pods are up and running, but when I access
> the web console I get the error shown at [1]. I tried the solution provided
> at [2] but had no luck. Can someone please help me resolve this issue?
>
> [root@dhcp46-170 ~]# oc version
> oc v3.9.43
> kubernetes v1.9.1+a0ce1bc657
> features: Basic-Auth GSSAPI Kerberos SPNEGO
>
> Server https://dhcp46-170.lab.eng.blr.redhat.com:8443
> openshift v3.9.43
> kubernetes v1.9.1+a0ce1bc657
>
> [root@dhcp46-170 ~]# oc get pods
> NAME                                      READY     STATUS    RESTARTS   AGE
> logging-curator-1-bgjbj                   1/1       Running   0          2h
> logging-es-data-master-5gjnm57x-2-5vjq6   2/2       Running   0          2h
> logging-kibana-1-872dn                    2/2       Running   0          2h
>
> [1] Discover: [exception] The index returned an empty result. You can use
> the Time Picker to change the time filter or select a higher time interval
> [2] https://access.redhat.com/solutions/3352681
>
> Thanks
> kasturi
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>


-- 
--
Jeff Cantrill
Senior Software Engineer, Red Hat Engineering
OpenShift Logging
Red Hat, Inc.
*Office*: 703-748-4420 | 866-546-8970 ext. 8162420
jcant...@redhat.com
http://www.redhat.com
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Best file system for elasticsearch

2018-10-08 Thread Jeff Cantrill
On Mon, Oct 8, 2018 at 5:13 AM Marc Ledent  wrote:

> Hi Rich,
>
> Thanks for the advice.
>
> Concerning the use of local file system, how can I simply (ansible var)
> "bind" the elastic search pod to a given host?
>

Use a unique node selector for each ES deploymentconfig [1].  This is the
technique previously used by our QE and online clusters before we added pod
affinity.

[1]
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
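A sketch of that approach (node and deploymentconfig names are placeholders
for whatever exists in your cluster):

# give each Elasticsearch node a unique label
oc label node node1.example.com logging-es-node=1

# pin one ES deploymentconfig to that node via its nodeSelector
oc patch dc/logging-es-data-master-abc123 \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"logging-es-node":"1"}}}}}'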
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Logging fails when using cinder volume for elasticsearch

2018-05-21 Thread Jeff Cantrill
Consider logging an issue so that it is properly addressed by the
development team.

On Mon, May 21, 2018 at 7:05 AM, Tim Dudgeon <tdudgeon...@gmail.com> wrote:

> I'm seeing a strange problem when trying to use a Cinder volume for the
> Elasticsearch PVC when installing logging with Origin 3.7. If I use NFS or
> GlusterFS volumes it all works fine. If I try a Cinder volume, Elasticsearch
> fails to start because of permissions problems:
>
>
> [2018-05-21 11:03:48,483][INFO ][container.run] Begin
> Elasticsearch startup script
> [2018-05-21 11:03:48,500][INFO ][container.run] Comparing the
> specified RAM to the maximum recommended for Elasticsearch...
> [2018-05-21 11:03:48,503][INFO ][container.run] Inspecting the
> maximum RAM available...
> [2018-05-21 11:03:48,513][INFO ][container.run] ES_HEAP_SIZE:
> '4096m'
> [2018-05-21 11:03:48,527][INFO ][container.run] Setting heap
> dump location /elasticsearch/persistent/heapdump.hprof
> [2018-05-21 11:03:48,531][INFO ][container.run] Checking if
> Elasticsearch is ready on https://localhost:9200
> Exception in thread "main" java.lang.IllegalStateException: Failed to created node environment
> Likely root cause: java.nio.file.AccessDeniedException: /elasticsearch/persistent/logging-es
> at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
> at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
> at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
> at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)
> at java.nio.file.Files.createDirectory(Files.java:674)
> at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781)
> at java.nio.file.Files.createDirectories(Files.java:767)
> at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:169)
> at org.elasticsearch.node.Node.<init>(Node.java:165)
> at org.elasticsearch.node.Node.<init>(Node.java:140)
> at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:143)
> at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:194)
> at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:286)
> at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:45)
> Refer to the log for complete error details.
>
> The directory ownerships do look very strange. Using Gluster (where it
> works) you see this (/elasticsearch/persistent is where the volume is
> mounted):
>
> sh-4.2$ cd /elasticsearch/persistent
> sh-4.2$ ls -al
> total 8
> drwxrwsr-x. 4 root 2009 4096 May 21 07:17 .
> drwxrwxrwx. 4 root root   42 May 21 07:17 ..
> drwxr-sr-x. 3 1000 2009 4096 May 21 07:17 logging-es
>
> User 1000 and group 2009 do not exist in /etc/passwd or /etc/group
>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>



-- 
--
Jeff Cantrill
Senior Software Engineer, Red Hat Engineering
OpenShift Integration Services
Red Hat, Inc.
*Office*: 703-748-4420 | 866-546-8970 ext. 8162420
jcant...@redhat.com
http://www.redhat.com
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: specifying storage class for metrics and logging

2018-04-17 Thread Jeff Cantrill
openshift_logging_elasticsearch_pvc_dynamic is a deprecated variable that
defined the alpha feature of PV->PVC associations prior to the introduction
of storage classes.
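For reference, a sketch of the inventory variables discussed below (the
variable names are from openshift-ansible release-3.7; the storage class
value is only an example):

# metrics (Cassandra) storage class
openshift_metrics_cassandra_pvc_storage_class_name=glusterfs-storage

# logging (Elasticsearch) storage class; only consulted for non-dynamic PVCs
openshift_logging_elasticsearch_pvc_storage_class_name=glusterfs-storage
openshift_logging_elasticsearch_pvc_dynamic=false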

On Tue, Apr 17, 2018 at 6:26 AM, Per Carlson <pe...@hemmop.com> wrote:

> Hi.
>
> On 17 April 2018 at 12:17, Tim Dudgeon <tdudgeon...@gmail.com> wrote:
>
>> So if you are using dynamic provisioning the only option for logging is
>> for the default StorageClass to be set to what is needed?
>>
>> On 17/04/18 11:12, Per Carlson wrote:
>>
>> This holds at least for 3.7:
>>
>> For metrics you can use "openshift_metrics_cassandra_pvc_storage_class_name"
>> (https://github.com/openshift/openshift-ansible/blob/release-3.7/roles/openshift_metrics/tasks/generate_cassandra_pvcs.yaml#L44).
>>
>> Using a StorageClass for logging (Elasticsearch) is more confusing. The
>> variable is "openshift_logging_elasticsearch_pvc_storage_class_name"
>> (https://github.com/openshift/openshift-ansible/blob/release-3.7/roles/openshift_logging_elasticsearch/defaults/main.yml#L34).
>> But it is only used for non-dynamic PVCs
>> (https://github.com/openshift/openshift-ansible/blob/release-3.7/roles/openshift_logging_elasticsearch/tasks/main.yaml#L368-L370).
>>
>>
>> --
>> Pelle
>>
>> Research is what I'm doing when I don't know what I'm doing.
>> - Wernher von Braun
>>
>>
>>
>
> No, I think you can use a StorageClass by keeping
> "openshift_logging_elasticsearch_pvc_dynamic" set to false. Not sure if
> that has any side effects, though.
>
> --
> Pelle
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 
--
Jeff Cantrill
Senior Software Engineer, Red Hat Engineering
OpenShift Integration Services
Red Hat, Inc.
*Office*: 703-748-4420 | 866-546-8970 ext. 8162420
jcant...@redhat.com
http://www.redhat.com
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Regex in logging curator settings

2017-11-27 Thread Jeff Cantrill
Please create an RFE for your request at either
https://trello.com/b/oJbshSIs/logging-and-metrics or
https://bugzilla.redhat.com/

On Mon, Nov 27, 2017 at 9:32 AM, bahhooo <bah...@gmail.com> wrote:

> Hello all,
>
> Is there a reason why regexes are not allowed in the curator settings?
>
> I would like to delete some indices according to the regular expressions I
> provide, so that I am not forced to enter individual project names into the
> configs.
>
> Right now I create a setting yaml with a one-liner and add it to the
> configmap. But whenever new projects are added to the cluster I will have
> to maintain the list manually.
>
> Anybody having a similar issue?
>
>
> Best,
> Bahho
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 
--
Jeff Cantrill
Senior Software Engineer, Red Hat Engineering
OpenShift Integration Services
Red Hat, Inc.
*Office*: 703-748-4420 | 866-546-8970 ext. 8162420
jcant...@redhat.com
http://www.redhat.com
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Logging seems to be working, but no logs are collected

2017-10-31 Thread Jeff Cantrill
Please provide additional information, logs, etc., or post the output of [1]
someplace for review.  Additionally, consider reviewing [2].

[1]
https://github.com/openshift/origin-aggregated-logging/blob/master/hack/logging-dump.sh

[2]
https://github.com/openshift/origin-aggregated-logging/blob/master/docs/checking-efk-health.md
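As a first pass from [2], checking which indices Elasticsearch actually holds
usually shows whether fluentd is writing anything at all. A sketch, assuming
the default logging project name, the admin cert paths used by the origin ES
image, and the running ES pod from the listing below:

oc exec logging-es-data-master-xz0e7a0c-7-t4xpf -n logging -- \
  curl -s --cacert /etc/elasticsearch/secret/admin-ca \
       --cert /etc/elasticsearch/secret/admin-cert \
       --key /etc/elasticsearch/secret/admin-key \
       'https://localhost:9200/_cat/indices?v'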

On Tue, Oct 31, 2017 at 11:47 AM, Tim Dudgeon <tdudgeon...@gmail.com> wrote:

> Hi All,
>
> I've deployed logging using the ansible installer (v3.6.0) for a fairly
> simple OpenShift setup and everything appears to be running:
>
> NAME                                       READY     STATUS    RESTARTS   AGE
> logging-curator-1-gvh73                    1/1       Running   24         3d
> logging-es-data-master-xz0e7a0c-1-deploy   0/1       Error     0          3d
> logging-es-data-master-xz0e7a0c-4-deploy   0/1       Error     0          3d
> logging-es-data-master-xz0e7a0c-5-deploy   0/1       Error     0          3d
> logging-es-data-master-xz0e7a0c-7-t4xpf    1/1       Running   0          3d
> logging-fluentd-4rm2w                      1/1       Running   0          3d
> logging-fluentd-8h944                      1/1       Running   0          3d
> logging-fluentd-n00bn                      1/1       Running   0          3d
> logging-fluentd-vt8hh                      1/1       Running   0          3d
> logging-kibana-1-g7l4z                     2/2       Running   0          3d
>
> (the failed pods were related to getting elasticsearch running, but that
> was resolved).
>
> The problem is that I don't see any logs in Kibana. When I look in the
> fluentd pod logs I see lots of stuff like this:
>
> 2017-10-31 13:53:15 + [warn]: no patterns matched tag="journal.system"
> 2017-10-31 13:58:02 + [warn]: no patterns matched
> tag="kubernetes.journal.container"
> 2017-10-31 14:02:18 + [warn]: no patterns matched tag="journal.system"
> 2017-10-31 14:07:15 + [warn]: no patterns matched tag="journal.system"
> 2017-10-31 14:11:20 + [warn]: no patterns matched tag="journal.system"
> 2017-10-31 14:15:16 + [warn]: no patterns matched tag="journal.system"
> 2017-10-31 14:19:58 + [warn]: no patterns matched tag="journal.system"
>
> Is this the cause, and if so what is wrong?
> If not how to debug this?
>
> Tim
>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>



-- 
--
Jeff Cantrill
Senior Software Engineer, Red Hat Engineering
OpenShift Integration Services
Red Hat, Inc.
*Office*: 703-748-4420 | 866-546-8970 ext. 8162420
jcant...@redhat.com
http://www.redhat.com
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: [aos-int-services] Problem about logging in openshift origin

2017-09-18 Thread Jeff Cantrill
The images you reference may not even be the latest 3.6.x version of the
image.  I recommend you rebuild them yourself.

Access to the OCP images requires a valid Red Hat subscription.

On Mon, Sep 18, 2017 at 2:24 AM, Yu Wei <yu20...@hotmail.com> wrote:

> Hi Jeff,
>
> The image used is docker.io/openshift/origin-logging-elasticsearch:v3.6.0.
>
> It's fetched from docker hub.
>
> How could I get images from OCP?
>
>
> Thanks,
>
> Jared, (韦煜)
> Software developer
> Interested in open source software, big data, Linux
> ------
> *From:* Jeff Cantrill <jcant...@redhat.com>
> *Sent:* Saturday, September 16, 2017 1:32:19 AM
> *To:* Peter Portante
> *Cc:* Yu Wei; d...@lists.openshift.redhat.com;
> users@lists.openshift.redhat.com; aos-int-services
> *Subject:* Re: [aos-int-services] Problem about logging in openshift
> origin
>
> Can you also post the image Tag you are using?  Is this from an OCP based
> image or upstream images you may find on dockerhub?
>
> On Fri, Sep 15, 2017 at 7:20 AM, Peter Portante <pport...@redhat.com>
> wrote:
>
>>
>>
>> On Fri, Sep 15, 2017 at 6:10 AM, Yu Wei <yu20...@hotmail.com> wrote:
>>
>>> Hi,
>>>
>>> I setup OpenShift origin 3.6 cluster successfully and enabled metrics
>>> and logging.
>>>
>>> Metrics worked well and logging didn't work.
>>>
>>> Pod * logging-es-data-master-lf6al5rb-5-deploy* in logging frequently
>>> crashed with below logs,
>>>
>>> *--> Scaling logging-es-data-master-lf6al5rb-5 to 1 *
>>> *--> Waiting up to 10m0s for pods in rc
>>> logging-es-data-master-lf6al5rb-5 to become ready *
>>> *error: update acceptor rejected logging-es-data-master-lf6al5rb-5: pods
>>> for rc "logging-es-data-master-lf6al5rb-5" took longer than 600 seconds to
>>> become ready*
>>>
>>> I didn't find other information. How could I debug such problem?
>>>
>> ​Hi Yu,​
>>
>> Added aos-int-services ...
>>
>> ​How many indices do you have in the Elasticsearch instance?
>>
>> What is the storage configuration for the Elasticsearch pods?
>>
>> ​Regards, -peter
>>
>>
>>
>>>
>>> Thanks,
>>>
>>> Jared, (韦煜)
>>> Software developer
>>> Interested in open source software, big data, Linux
>>>
>>> ___
>>> users mailing list
>>> users@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>
>>>
>>
>
>
> --
> --
> Jeff Cantrill
> Senior Software Engineer, Red Hat Engineering
> OpenShift Integration Services
> Red Hat, Inc.
> *Office*: 703-748-4420 | 866-546-8970 ext. 8162420
> jcant...@redhat.com
> http://www.redhat.com
>



-- 
--
Jeff Cantrill
Senior Software Engineer, Red Hat Engineering
OpenShift Integration Services
Red Hat, Inc.
*Office*: 703-748-4420 | 866-546-8970 ext. 8162420
jcant...@redhat.com
http://www.redhat.com
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: [aos-int-services] Problem about logging in openshift origin

2017-09-15 Thread Jeff Cantrill
Can you also post the image tag you are using?  Is this from an OCP-based
image or an upstream image you may find on Docker Hub?
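If it helps, the image (and its tag) can be read straight off the
deploymentconfig; a sketch using the DC name from the logs below and assuming
the default logging project name:

oc get dc/logging-es-data-master-lf6al5rb-5 -n logging \
  -o jsonpath='{.spec.template.spec.containers[0].image}'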

On Fri, Sep 15, 2017 at 7:20 AM, Peter Portante <pport...@redhat.com> wrote:

>
>
> On Fri, Sep 15, 2017 at 6:10 AM, Yu Wei <yu20...@hotmail.com> wrote:
>
>> Hi,
>>
>> I setup OpenShift origin 3.6 cluster successfully and enabled metrics and
>> logging.
>>
>> Metrics worked well and logging didn't work.
>>
>> Pod *logging-es-data-master-lf6al5rb-5-deploy* in logging frequently
>> crashed with below logs,
>>
>> *--> Scaling logging-es-data-master-lf6al5rb-5 to 1 *
>> *--> Waiting up to 10m0s for pods in rc logging-es-data-master-lf6al5rb-5
>> to become ready *
>> *error: update acceptor rejected logging-es-data-master-lf6al5rb-5: pods
>> for rc "logging-es-data-master-lf6al5rb-5" took longer than 600 seconds to
>> become ready*
>>
>> I didn't find other information. How could I debug such problem?
>>
> ​Hi Yu,​
>
> Added aos-int-services ...
>
> ​How many indices do you have in the Elasticsearch instance?
>
> What is the storage configuration for the Elasticsearch pods?
>
> ​Regards, -peter
>
>
>
>>
>> Thanks,
>>
>> Jared, (韦煜)
>> Software developer
>> Interested in open source software, big data, Linux
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>


-- 
--
Jeff Cantrill
Senior Software Engineer, Red Hat Engineering
OpenShift Integration Services
Red Hat, Inc.
*Office*: 703-748-4420 | 866-546-8970 ext. 8162420
jcant...@redhat.com
http://www.redhat.com
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Running sshd in a Docker Container on Openshift

2017-07-09 Thread Jeff Cantrill
This may be of interest to you:

https://docs.openshift.com/enterprise/3.1/admin_guide/manage_scc.html#enable-images-to-run-with-user-in-the-dockerfile
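In practice that document comes down to granting an SCC (such as anyuid) that
permits the USER declared in the Dockerfile; a sketch, assuming the pod runs
under the project's default service account:

# allow pods using the default service account in <project> to run as the
# UID from the Dockerfile (on older releases the command is `oadm policy ...`)
oc adm policy add-scc-to-user anyuid -z default -n <project>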

On Sun, Jul 9, 2017 at 6:03 AM Isuru Haththotuwa <isurulu...@gmail.com>
wrote:

> Hi,
>
> I'm trying to do $subject. Using the minimal docker sample found at [1].
> While this works perfectly in bare docker, when I'm trying to run on
> Openshift it fails with the error [2]. When I tried to re-create the ssh
> keys at startup with *ssh-keygen -A*, gave me the error [3]. I read that
> Openshift uses a random user id (usually 10) when starting a
> container, I created a user with the same id, gave permission to
> /etc/ssh/ssh* and ran. Still did not work.
>
> Seems a permission issue. Any idea what is going wrong here?
>
> [1].
> https://docs.docker.com/engine/examples/running_ssh_service/#build-an-eg_sshd-image
>
> [2].
> Could not load host key: /etc/ssh/ssh_host_rsa_key
> Could not load host key: /etc/ssh/ssh_host_dsa_key
> Could not load host key: /etc/ssh/ssh_host_ecdsa_key
> Could not load host key: /etc/ssh/ssh_host_ed25519_key
>
> [3].
> open /etc/ssh/ssh_host_key failed: Permission denied.
> ssh-keygen: generating new host keys: RSA1 Saving the key failed:
> /etc/ssh/ssh_host_key.
> ssh-keygen: generating new host keys: RSA Saving the key failed:
> /etc/ssh/ssh_host_rsa_key.
> open /etc/ssh/ssh_host_rsa_key failed: Permission denied.
> open /etc/ssh/ssh_host_dsa_key failed: Permission denied.
> ssh-keygen: generating new host keys: DSA Saving the key failed:
> /etc/ssh/ssh_host_dsa_key.
> open /etc/ssh/ssh_host_ecdsa_key failed: Permission denied.
> ssh-keygen: generating new host keys: ECDSA Saving the key failed:
> /etc/ssh/ssh_host_ecdsa_key.
> open /etc/ssh/ssh_host_ed25519_key failed: Permission denied.
> ssh-keygen: generating new host keys: ED25519 Saving the key failed:
> /etc/ssh/ssh_host_ed25519_key.
>
> --
> Thanks and Regards,
> Isuru
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
-- 
--
Jeff Cantrill
Senior Software Engineer, Red Hat Engineering
OpenShift Integration Services
Red Hat, Inc.
*Office*: 703-748-4420 | 866-546-8970 ext. 8162420
jcant...@redhat.com
http://www.redhat.com
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: what is meaning of openshift_hosted_logging_hostname

2016-12-15 Thread Jeff Cantrill
On Thu, Dec 15, 2016 at 3:59 PM, Den Cowboy  wrote:

> Hi, I saw this option in the ansible playbook example:
>
> https://github.com/openshift/openshift-ansible/blob/master/
> inventory/byo/hosts.ose.example
>
>
> 1) What is the meaning of this variable: openshift_hosted_logging_hostname?

This is the host for the route, which should replace what is listed in 2, per
the README.
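For illustration, a hedged inventory sketch (the hostname value is made up; it
simply becomes the host of the Kibana route instead of the generated default):

openshift_hosted_logging_deploy=true
openshift_hosted_logging_hostname=kibana.apps.example.com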


> 2) I tried to deploy the logging project with ansible. All the pods seem
> to deploy fine, but I see things like:
>
> logging-kibana has containers without health checks, which ensure your
> application is running correctly.
>
>
Work noted here:
https://github.com/openshift/origin-aggregated-logging/issues/291

> + my route to kibana is:
>
> https://logging-kibana-logging.apps.xx.xx
> 
>
> + gives: invalid request: missed required parameter... (don't know why
> it's putting logging- before the kibana).
>
>
The route looks like it is being generated in the format of:
<name>-<namespace>.<subdomain>, where the subdomain is configured in
the master config.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Clean logs in ES on Origin 1.2.0

2016-11-22 Thread Jeff Cantrill
You can create a secret with a file that has content like:

.defaults:
  delete:
    days: 7
  runhour: 0
  runminute: 0

and add the volume to the deployment config as described here:
https://github.com/openshift/origin-aggregated-logging/tree/v1.2.0#curator

Alternatively, if you do not provide the secret, you could update the
following value in the deploymentconfig:
https://github.com/openshift/origin-aggregated-logging/blob/v1.2.0/deployment/templates/curator.yaml#L90



https://github.com/openshift/origin-aggregated-logging/tree/v1.2.0
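A sketch of the mechanics (the secret name, file name, and mount path here are
illustrative; follow the curator documentation above for the exact names your
deployment expects):

# create the settings file shown above as a secret
oc create secret generic logging-curator-settings \
  --from-file=settings=curator-settings.yaml -n logging

# mount it into the curator deploymentconfig
# (older oc clients expose this as `oc volume` rather than `oc set volume`)
oc set volume dc/logging-curator --add --type=secret \
  --secret-name=logging-curator-settings --mount-path=/etc/curator \
  --name=curator-settings --overwrite -n logging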

On Tue, Nov 22, 2016 at 9:34 AM, Den Cowboy <dencow...@hotmail.com> wrote:

> Hi,
>
>
> We have an origin 1.2.0 cluster in which we've integrated the logging
> project. It works fine, but we just followed the setup tutorial. We don't
> know much about the real setup.
>
>
> We sometimes face issues where our disk is getting too full because
> our ES is keeping too much data.
>
> How can we easily configure the curator to delete every log of every
> project once a week?
>
>
> We took a look to the documentation:
>
> myapp-dev:
>  delete:
>days: 1
>
> myapp-qe:
>   delete:
> weeks: 1
>
> .operations:
>   delete:
> weeks: 8
>
> .defaults:
>   delete:
> days: 30
>   runhour: 0
>   runminute: 0
>
> But it isn't clear to us. Isn't there just a setting in the
> deploymentconfig, or what's the easiest approach for this?
>
>
> Thanks
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 
--
Jeff Cantrill
Senior Software Engineer, Red Hat Engineering
OpenShift Integration Services
Red Hat, Inc.
*Office*: 703-748-4420 | 866-546-8970 ext. 8162420
jcant...@redhat.com
http://www.redhat.com
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Why Metrics and Logging use Deployer container?

2016-11-15 Thread Jeff Cantrill
> > I think it's too difficult to hack; why not just use Ansible for that?
>
> The deployer containers are going to be deprecated as we move over to
> Ansible for this. Expect an update for this soon.
>
> https://trello.com/c/piaWOfvB/



>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Kibana Visualization Sharing

2016-09-19 Thread Jeff Cantrill
This is a known issue captured
https://trello.com/c/RLJbg6KX/385-share-kibana-user-dashboard and the
referenced BZ.

Bottom line: to achieve multi-tenancy with the Kibana version provided
by the EFK stack, each user essentially has a 'profile' where their
dashboards and visualizations are stored.  There is currently no easy
mechanism in the version we provide that would allow you to achieve
sharing.

On Sun, Sep 18, 2016 at 9:20 PM, Frank Liauw <fr...@vsee.com> wrote:

> Hi All,
>
> I am using Openshift Origin's aggregated logging stack.
>
> However, my visualizations are not shared amongst users; even via the
> direct share link:
>
> http://i.stack.imgur.com/PoNvl.png
>
> Viewers (who are already logged in) get the following error: Could not
> locate that visualization (id: response-codes). They do have access to the
> indices / namespaces on which the visualization was built.
>
> I have experience with vanilla Kibana, and visualizations were shared by
> default.
>
> Thanks!
>
> Frank
> Systems Engineer
>
> VSee: fr...@vsee.com <http://vsee.com/u/tmd4RB> | Cell: +65 9338 0035
>
> Join me on VSee for Free <http://vsee.com/u/tmd4RB>
>
>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 
--
Jeff Cantrill
Senior Software Engineer, Red Hat Engineering
OpenShift Integration Services
Red Hat, Inc.
*Office*: 703-748-4420 | 866-546-8970 ext. 8162420
jcant...@redhat.com
http://www.redhat.com
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: EFK Logging no image for logging-elasticsearch

2016-03-11 Thread Jeff Cantrill
Dean,

Instructions for installing aggregated logging are found in the repo here:
https://github.com/openshift/origin-aggregated-logging

The images are available from Docker Hub.  Additionally, you can build them
yourself using the build scripts and/or the buildconfigs defined in the
hack directory.
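The error quoted below also shows the pod asking for an unqualified
"logging-elasticsearch" image, which suggests the image name was never
expanded with a registry/namespace prefix by the deployer. A quick sanity
check that the upstream image itself is pullable (the tag is a placeholder;
match it to your deployer version):

docker pull docker.io/openshift/origin-logging-elasticsearch:<version>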

On Thu, Mar 10, 2016 at 5:32 PM, Dean Peterson <peterson.d...@gmail.com>
wrote:

> I am getting the following error in the event logs when I go to deploy the
> elasticsearch pods after running the logging-deployer:
>
> Error syncing pod, skipping: failed to "StartContainer" for
> "elasticsearch" with ImagePullBackOff: "Back-off pulling image
> \"logging-elasticsearch\
>
> However, there is no build config for me to build such an image.  Where do
> I get that image for the deployer to use?
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 
--
Jeff Cantrill
Senior Software Engineer, Red Hat Engineering
Red Hat, Inc.
*Office*: 703-748-4420 | 866-546-8970 ext. 8162420
jcant...@redhat.com
http://www.redhat.com
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users