Re: [onap-discuss] [OOM][LOG] Error when connecting logstash to remote ES

2018-09-24 Thread Hector Anapan
Actually, I found the issue. It seems there is a line in the 
onap-pipeline.conf that sets "sniffing" to true (by default, it's false). 
When I disabled sniffing in the logstash config, logstash connects to ES 
successfully.

248  ## This setting asks Elasticsearch for the list of all cluster nodes and 
adds them to the hosts list. Default is false.
249  sniffing => true
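
For anyone hitting the same thing, a minimal sketch of how to confirm the 
setting inside the running logstash pod; the label selector and the pipeline 
path are assumptions and may differ per deployment:

    # hypothetical label and config path -- adjust to your deployment
    LS_POD=$(kubectl get pods -n onap -l app=logstash -o jsonpath='{.items[0].metadata.name}')
    kubectl exec -n onap "$LS_POD" -- grep -n 'sniffing' /usr/share/logstash/pipeline/onap-pipeline.conf
    # changing "sniffing => true" to "sniffing => false" (or dropping the line, since
    # false is the default) and redeploying logstash resolved the connection error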

Nonetheless, the question still remains: how does logstash connect to an ES 
cluster (multi-node, multi-replica pods of an ES deployment) across two 
separate k8s clusters?

Thanks,
Hector




[onap-discuss] [OOM][LOG] Error when connecting logstash to remote ES

2018-09-24 Thread Hector Anapan
Hi Michael,

I was wondering whether you or anyone on the OOM / ELK team has ever tried 
to set up a connection between a logstash pod (in kubernetes cluster 1) and an 
elasticsearch pod (in kubernetes cluster 2), and has seen this error before 
(full log attached):

22:52:14.523 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - 
New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", 
:hosts=>[#http://192.168.10.10:30254>]}

22:52:19.457 [Ruby-0-Thread-9: 
/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.5-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:136]
 INFO  logstash.outputs.elasticsearch - Elasticsearch pool URLs updated 
{:changes=>{:removed=>[http://elastic:xx@192.168.10.10:30254/], 
:added=>[http://elastic:xx@10.42.162.82:9200/]}}

22:52:19.459 [Ruby-0-Thread-9: 
/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.5-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:136]
 INFO  logstash.outputs.elasticsearch - Running health check to see if an 
Elasticsearch connection is working 
{:healthcheck_url=>http://elastic:xx@10.42.162.82:9200/, :path=>"/"}

22:52:22.455 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting 
pipeline {"id"=>"main", "pipeline.workers"=>3, "pipeline.batch.size"=>125, 
"pipeline.batch.delay"=>5, "pipeline.max_inflight"=>375}

22:52:22.660 [Ruby-0-Thread-9: 
/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.5-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:136]
 WARN  logstash.outputs.elasticsearch - Attempted to resurrect connection to 
dead ES instance, but got an error. {:url=>#http://elastic:xx@10.42.162.82:9200/>, 
:error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError,
 :error=>"Elasticsearch Unreachable: 
[http://elastic:xx@10.42.162.82:9200/][Manticore::SocketException] No route 
to host (Host unreachable)"}

22:52:24.453 [Ruby-0-Thread-8: 
/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.5-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:224]
 INFO  logstash.outputs.elasticsearch - Running health check to see if an 
Elasticsearch connection is working 
{:healthcheck_url=>http://elastic:xx@10.42.162.82:9200/, :path=>"/"}

22:52:25.470 [Ruby-0-Thread-9: 
/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.5-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:136]
 WARN  logstash.outputs.elasticsearch - Marking url as dead. Last error: 
[LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] 
Elasticsearch Unreachable: 
[http://elastic:xx@10.42.162.82:9200/][Manticore::SocketException] No route 
to host (Host unreachable) {:url=>http://elastic:xx@10.42.162.82:9200/, 
:error_message=>"Elasticsearch Unreachable: 
[http://elastic:xx@10.42.162.82:9200/][Manticore::SocketException] No route 
to host (Host unreachable)", 
:error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}

We have set up an environment with two complete LOG deployments 
(ES-logstash-kibana), each in its own kubernetes cluster. The logstash in 
kubernetes cluster 1 had its config changed (in onap-pipeline.conf) to point to 
the ES pod in kubernetes cluster 2. It seems that when it reaches 
192.168.10.10:30254 (the ES external nodeport IP), something forces that URL to 
be updated to point to 10.42.162.82:9200 (the ES pod internal IP).

Is there a way to resolve this issue so that the external endpoint does not 
resolve into its unreachable pod-internal IP?
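
For what it's worth, the quickest way to see the difference is to test both 
endpoints from a node in cluster 1. A minimal sketch (the elastic password is 
redacted here, just as in the log above):

    # NodePort published by the ES service in cluster 2 -- reachable
    curl -s -u elastic:<password> http://192.168.10.10:30254/
    # pod-internal IP that the URL gets rewritten to -- not routable from cluster 1
    curl -s -u elastic:<password> --connect-timeout 5 http://10.42.162.82:9200/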

Thanks,
Hector


Re: [onap-discuss] [PORTAL]: cannot log in after install

2018-09-11 Thread Hector Anapan
Hi David,

The first rule of thumb to make sure your pods are healthy is to check that all 
the containers inside each pod are in "ready" ("green") state, as below:

root@rancher:~# kubectl get pods -a --namespace=onap | grep portal
dev-portal-app-fb6fd5f84-8s49f          2/2   Running     0   16m
dev-portal-cassandra-5d6649dfb6-fxngd   1/1   Running     0   15d
dev-portal-db-56bdf48468-ftwq6          1/1   Running     0   33m
dev-portal-db-config-9z5md              0/2   Completed   0   6d
dev-portal-sdk-f4d454ddc-h57br          2/2   Running     0   15d
dev-portal-widget-55b4d88875-29n28      1/1   Running     0   15d
dev-portal-zookeeper-f649b6d49-d7dql    1/1   Running     0   15d

The only pods for which it is okay to see containers in a not-ready state are 
the pods created by kubernetes jobs. In the output above, that is the 
"dev-portal-db-config-9z5md" pod, which, as you can see, is in "Completed" state. 
This means that the job's finite set of actions completed successfully.

If you are meeting the conditions above but are still getting the invalid 
username/password error, please check the following (a command sketch follows 
this list):


  *   Log in to the portal-db (not the portal-db-config) pod, enter the mysql 
console ("mysql -u root -p", where the password is Aa123456), and check whether 
the demo user is in the fn_user table (USE portal; SELECT first_name, 
org_user_id, email FROM fn_user). "org_user_id" is the column that lists the 
login usernames used to access the portal GUI.


  *   If the demo user is not in the table above, then the job didn't complete 
correctly. Please delete the job and re-run the helm release with helm upgrade.


  *   If the demo user is already there, then please delete the "portal-app" pod 
and wait for all of its containers to be in ready state again.
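
A rough sketch of those checks as commands (the release name "dev", the job 
name, and the pod names are taken from the example output above and may differ 
in your environment):

    # check for the demo user in the portal database
    DB_POD=$(kubectl get pods -n onap | grep 'portal-db-' | grep -v db-config | awk '{print $1}')
    kubectl exec -n onap "$DB_POD" -- \
      mysql -u root -pAa123456 -e "USE portal; SELECT first_name, org_user_id, email FROM fn_user;"
    # if demo is missing: delete the db-config job and re-run the release
    kubectl delete job dev-portal-db-config -n onap
    helm upgrade dev local/onap --namespace onap   # plus your usual override files
    # if demo is present: delete the portal-app pod and let it come back up
    kubectl delete pod dev-portal-app-fb6fd5f84-8s49f -n onap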

Please let me know your findings.


From: onap-discuss@lists.onap.org  On Behalf Of 
David Darbinyan
Sent: Tuesday, September 11, 2018 3:29 AM
To: onap-discuss@lists.onap.org
Subject: [onap-discuss] [PORTAL]: cannot log in after install


hi list!

using Rancher+Kubernetes

I can successfully reach the ONAP login interface, but demo/demo123456! gives me 
"Invalid username or password. Please try again."

Presently I use only [ portal ], [ multicloud ], [ so ].

With this setup all my Pods are in "green" state except for "portal-db-config"



Should any other pod be installed for logging in? Or maybe does the demo user 
need to be reset manually?



Thanks

DD







[onap-discuss] [dcaegen2] [oom] Adding labels and modifying namespace of DCAEGEN2 pods

2018-08-07 Thread Hector Anapan
Hi,


  1.  I am trying to list pods per HELM deployment (assuming I don't use 
local/onap and I use separate HELM charts), so I am using labels to list pods 
by their "release" label (helm release name).

I made a change (https://jira.onap.org/browse/OOM-1319 -- 
https://gerrit.onap.org/r/#/c/59555/) to add missing label metadata showing the 
HELM release name on some pods that didn't have this label. When I run 
"kubectl get pods -a --namespace=onap --show-labels | grep -v release" to show 
the pods that don't have the "release" label in their pod metadata, I see that 
in order for the DCAEGEN2 stack (deployed by the bootstrap pod) to carry the 
additional labels, changes need to happen in the dcaegen2 git repos. Can someone 
please advise what changes are necessary to add a label to the dcaegen2 pods? 
(A small sketch of the label-based listing follows below.)
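
For reference, once the label is in place, the per-release listing becomes a 
one-liner; the release name "dev" here is just an example:

    # pods belonging to a given helm release
    kubectl get pods -n onap -l release=dev
    # pods still missing the release label
    kubectl get pods -a -n onap --show-labels | grep -v 'release='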



  2.  Also, I see that in order to override the DCAE_NAMESPACE value, the 
following instructions are in this README: 
https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob;f=kubernetes/dcaegen2/charts/dcae-cloudify-manager/README.md;hb=f2895bdfbff1a7c40dd7c247dcefcc19d43dcde0.
 It says to modify the dcae_ns value in values.yaml, but this value shows up in 
multiple values.yaml files, so can someone clarify which one applies?

Assuming I change the dcae_ns value(s) while my dcaegen2 HELM deployment is 
already running, would a helm upgrade suffice to reflect this change, or would 
the whole deployment need to be re-deployed from scratch?
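
If a helm upgrade is enough, I would expect something along these lines to push 
the override; the --set path here is only a guess, since (as noted above) 
dcae_ns shows up in more than one values.yaml:

    # sketch only -- release name, chart reference, and value path are assumptions
    helm upgrade dev-dcaegen2 local/dcaegen2 --namespace onap \
      --set dcae-cloudify-manager.dcae_ns=onap-dcae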


Thanks,
Hector
