Re: [onap-discuss] [DCAE] problems with dcae bootstrap VM

2018-01-16 Thread ESWAR RAO
Thanks Lusheng.

After going through the JIRA bugs and changing the Docker image versions of
the SDC containers, I was able to get the check to pass:

--
Basic ASDC Health Check   |
PASS |
--

Thanks
Eswar Rao



On Tue, Jan 16, 2018 at 6:34 AM, JI, LUSHENG (LUSHENG)  wrote:

> Eswar,
>
> Good to hear the progress you have made.
>
> Service Change Handler has a dependency on (A)SDC because it expects
> the latter to distribute closed-loop models.  So if SDC is not healthy,
> Service Change Handler will bail.
>
> For debugging (A)SDC, you may want to check with the SDC team.  You might
> want to start with looking into SDC's docker log just like how you checked
> the DCAE boot container.
>
>
> Lusheng
>
> Sent via the Samsung Galaxy S7, an AT&T 4G LTE smartphone
>
>
>  Original message 
> From: ESWAR RAO 
> Date: 1/15/18 7:47 PM (GMT-05:00)
> To: Josef Reisinger 
> Cc: "JI, LUSHENG (LUSHENG)" ,
> onap-discuss@lists.onap.org, onap-discuss-boun...@lists.onap.org
> Subject: Re: [onap-discuss] [DCAE] problems with dcae bootstrap VM
>
> Hi Josef and Lusheng,
>
> Thanks for the response.
>
> I made the boot container run without exiting, deleted all
> deployments/cancelled executions individually, and installed them manually
> using the install script.
>
> All 29 VM's are UP.
>
> While installing the deployment "PlatformServicesInventory" I hit a
> problem with the "service-change-handler" start.
>
> I executed the robot sanity test case, out of which 2 components are
> failing.
>
> root@onap-robot:/home/ubuntu# docker exec -it openecompete_container
> /var/opt/OpenECOMP_ETE/runTags.sh -i health h -d ./html
> -V /share/config/integration_robot_properties.py  -V
> /share/config/integration_preload_parameters.py -V
> /share/config/vm_properties.py
>
> Basic DCAE Health Check   | FAIL |
> [  | cdap |  |  |  |  | cdap_broker | config_binding_service |
> deployment_handler | inventory |
> a35a7b6a9379424386475302c113c3d5_cdap_app_cdap_app_tca |
> platform_dockerhost |  |
> b982f9652d8c4f3d95dd806b20f31909_dcaegen2-collectors-ves |
> component_dockerhost |
> | cloudify_manager ] does not contain match for pattern
> 'service-change-handler'.
>
> Basic ASDC Health Check   | FAIL |
> 500 != 200
>
>
> (1) Please let me know where I can debug the Basic ASDC Health Check
> failure.
>
> (2) Please help me start the 'service-change-handler' docker container.
>
>  # cfy install -p ./blueprints/inv/inventory.yaml -b
> PlatformServicesInventory -d PlatformServicesInventory
>  -i "location_id=fKHc" -i ./config/invinputs.yaml  -vv
>
> 2018-01-15T10:36:44 CFY 
> [inventory_ec24a.start] Task succeeded
> 'dockerplugin.create_and_start_container_for_platforms'
> 2018-01-15T10:36:45 CFY 
> [service-change-handler_8060a] Creating node
> 2018-01-15T10:41:50 CFY 
> [service-change-handler_8060a.start] Task failed
> 'dockerplugin.create_and_start_container_for_platforms'
> -> Container never became healthy
> Traceback (most recent call last):
>   File "/tmp/pip-build-P7Jcnx/cloudify-plugins-common/cloudify/dispatch.py",
> line 596, in main
>   File "/tmp/pip-build-P7Jcnx/cloudify-plugins-common/cloudify/dispatch.py",
> line 366, in handle
>   File "/opt/mgmtworker/env/plugins/dockerplugin-2.4.0/lib/python2.7/site-packages/dockerplugin/decorators.py", line 57, in wrapper
> raise NonRecoverableError(e)
> NonRecoverableError: Container never became healthy
>
> 2018-01-15T10:41:51 CFY  'install' workflow
> execution failed: RuntimeError: Workflow failed: Task failed
> 'dockerplugin.create_and_start_container_for_platforms' -> Container
> never became healthy
>
> Image is downloaded to dokp, but the process is not running:
>
> ubuntu@dcaedokp00:~$ sudo docker images
> REPOSITORY                                                            TAG      IMAGE ID       CREATED   SIZE
> nexus3.onap.org:10001/onap/org.onap.dcaegen2.platform.configbinding   v1.2.0   001c94f2d799

Re: [onap-discuss] [DCAE] problems with dcae bootstrap VM

2018-01-15 Thread JI, LUSHENG (LUSHENG)
Eswar,

Good to hear the progress you have made.

Service Change Handler has a dependency on (A)SDC because it expects the
latter to distribute closed-loop models.  So if SDC is not healthy, Service
Change Handler will bail.

For debugging (A)SDC, you may want to check with the SDC team.  You might want 
to start with looking into SDC's docker log just like how you checked the DCAE 
boot container.
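As a concrete starting point, a log sweep along those lines might look like this (a sketch; the SDC container names below are assumptions from a typical Amsterdam heat deployment, so check `docker ps` on the SDC VM first):

```shell
# Scan the tail of each SDC container log for errors.
# Container names are assumed -- verify with: docker ps
SDC_CONTAINERS="sdc-BE sdc-FE sdc-cs sdc-es"
for c in $SDC_CONTAINERS; do
  echo "== $c =="
  docker logs --tail 200 "$c" 2>&1 | grep -iE 'error|exception' || true
done
```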


Lusheng

Sent via the Samsung Galaxy S7, an AT&T 4G LTE smartphone


 Original message 
From: ESWAR RAO 
Date: 1/15/18 7:47 PM (GMT-05:00)
To: Josef Reisinger 
Cc: "JI, LUSHENG (LUSHENG)" , 
onap-discuss@lists.onap.org, onap-discuss-boun...@lists.onap.org
Subject: Re: [onap-discuss] [DCAE] problems with dcae bootstrap VM

Hi Josef and Lusheng,

Thanks for the response.

I made the boot container run without exiting, deleted all
deployments/cancelled executions individually, and installed them manually
using the install script.

All 29 VM's are UP.

While installing the deployment "PlatformServicesInventory" I hit a problem
with the "service-change-handler" start.

I executed the robot sanity test case, out of which 2 components are
failing.

root@onap-robot:/home/ubuntu# docker exec -it openecompete_container 
/var/opt/OpenECOMP_ETE/runTags.sh -i health h -d ./html
-V /share/config/integration_robot_properties.py  -V 
/share/config/integration_preload_parameters.py -V 
/share/config/vm_properties.py

Basic DCAE Health Check   | FAIL |
[  | cdap |  |  |  |  | cdap_broker | config_binding_service | 
deployment_handler | inventory | 
a35a7b6a9379424386475302c113c3d5_cdap_app_cdap_app_tca | platform_dockerhost 
|  | b982f9652d8c4f3d95dd806b20f31909_dcaegen2-collectors-ves | 
component_dockerhost |
| cloudify_manager ] does not contain match for pattern 
'service-change-handler'.

Basic ASDC Health Check   | FAIL |
500 != 200


(1) Please let me know where I can debug the Basic ASDC Health Check failure.

(2) Please help me start the 'service-change-handler' docker container.

 # cfy install -p ./blueprints/inv/inventory.yaml -b PlatformServicesInventory 
-d PlatformServicesInventory
 -i "location_id=fKHc" -i ./config/invinputs.yaml  -vv

2018-01-15T10:36:44 CFY  [inventory_ec24a.start] 
Task succeeded 'dockerplugin.create_and_start_container_for_platforms'
2018-01-15T10:36:45 CFY  
[service-change-handler_8060a] Creating node
2018-01-15T10:41:50 CFY  
[service-change-handler_8060a.start] Task failed 
'dockerplugin.create_and_start_container_for_platforms' -> Container never 
became healthy
Traceback (most recent call last):
  File "/tmp/pip-build-P7Jcnx/cloudify-plugins-common/cloudify/dispatch.py", 
line 596, in main
  File "/tmp/pip-build-P7Jcnx/cloudify-plugins-common/cloudify/dispatch.py", 
line 366, in handle
  File 
"/opt/mgmtworker/env/plugins/dockerplugin-2.4.0/lib/python2.7/site-packages/dockerplugin/decorators.py",
 line 57, in wrapper
raise NonRecoverableError(e)
NonRecoverableError: Container never became healthy

2018-01-15T10:41:51 CFY  'install' workflow 
execution failed: RuntimeError: Workflow failed: Task failed 
'dockerplugin.create_and_start_container_for_platforms' -> Container never 
became healthy

Image is downloaded to dokp, but the process is not running:

ubuntu@dcaedokp00:~$ sudo docker images
REPOSITORY                                                                    TAG      IMAGE ID       CREATED        SIZE
nexus3.onap.org:10001/onap/org.onap.dcaegen2.platform.configbinding           v1.2.0   001c94f2d799   2 months ago   714 MB
nexus3.onap.org:10001/onap/org.onap.dcaegen2.platform.inventory-api           v1.2.0   7237fbbd35cb   2 months ago   557 MB
nexus3.onap.org:10001/onap/org.onap.dcaegen2.platform.servicechange-handler

Re: [onap-discuss] [DCAE] problems with dcae bootstrap VM

2018-01-15 Thread ESWAR RAO
  consumerGroup: dcae
  consumerId: dcae-sch
  environmentName: { get_input: asdc_environment_name }
  keyStorePath:
  keyStorePassword:
  activateServerTLSAuth: { get_input: asdc_use_secure_https }
  useHttpsWithDmaap: { get_input: asdc_use_https_dmaap }
  isFilterInEmptyResources: false
  dcaeInventoryClient:
    uri: http://inventory:8080
  docker_config:
healthcheck:
  type: "docker"
  interval: "30s"
  timeout: "3s"
  script: "/opt/health.sh"
  image:
{ get_input: service_change_handler_image }
relationships:
  - type: cloudify.relationships.depends_on
target: inventory
  - type: dcae.relationships.component_contained_in
target: docker_host


But on dokp:

ubuntu@dcaedokp00:~$ curl http://127.0.0.1:8080
{"code":404,"message":"HTTP 404 Not Found"}ubuntu@dcaedokp00:~$
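A 404 on "/" by itself does not prove the inventory service is down; the root path may simply not be routed. A quick probe of a real API path helps distinguish the two (a sketch; the /dcae-service-types path is an assumption about the inventory-api, so adjust to your version):

```shell
# Probe the inventory API: compare the status of "/" with a real API path.
BASE="http://127.0.0.1:8080"
for path in "/" "/dcae-service-types"; do
  code=$(curl -sS -o /dev/null -w '%{http_code}' --max-time 5 "$BASE$path" 2>/dev/null) || true
  echo "$path -> ${code:-unreachable}"
done
```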


Thanks
Eswar Rao



On Mon, Jan 15, 2018 at 3:47 PM, Josef Reisinger  wrote:

> Eswar,
>
> maybe this moderately complex procedure helps:
>
>- run the boot container until it exits. (not sure it is needed at all)
>- create a new image from stopped boot container: docker commit boot
>broken # broken is the new image name
>- create environment variables
>NEXUS_USER=$(cat /opt/config/nexus_username.txt)
>NEXUS_PASSWORD=$(cat /opt/config/nexus_password.txt)
>NEXUS_DOCKER_REPO=$(cat /opt/config/nexus_docker_repo.txt)
>DOCKER_VERSION=$(cat /opt/config/docker_version.txt)
># use rand_str as zone
>ZONE=$(cat /opt/config/rand_str.txt)
>MYFLOATIP=$(cat /opt/config/dcae_float_ip.txt)
>MYLOCALIP=$(cat /opt/config/dcae_ip_addr.txt)
>- run committed image:  docker run -it  --user root  -v
>/opt/app/config:/opt/app/installer/config -e "LOCATION=$ZONE"
>--entrypoint=/bin/bash broken
>- you need to change every occasion of "mkdir" in the file "installer"
>by "mkdir -p"; otherwise the script will fail; maybe you need to install an
>editor
>- bash -x installer # this runs now as root .. not sure this has an
>impact, but it is just for testing
>-
>
> Let me know whether it worked :-)
> Mit freundlichen Grüßen / Kind regards
> *Josef Reisinger*
> IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter
> Geschäftsführung: Martina Koederitz (Vorsitzende), Nicole Reimer, Norbert
> Janzen, Dr. Christian Keller, Ivo Koerner, Stefan Lutz
> Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart,
> HRB 14562 / WEEE-Reg.-Nr. DE 99369940
>
>
>
>
> From:ESWAR RAO 
> To:"JI, LUSHENG (LUSHENG)" 
> Cc:"onap-discuss@lists.onap.org" 
> Date:14.01.2018 09:38
> Subject:Re: [onap-discuss] [DCAE] problems with dcae bootstrap VM
> Sent by:onap-discuss-boun...@lists.onap.org
> --
>
>
>
>
> Hi Lusheng,
>
> Thanks for the response.
>
> I deleted the boot docker multiple times and at last the bootstrap docker
> was able to spin up all 14 DCAE VMs.
>
> But I still see issues like :
>
> | 23755508-1765-4a01-a023-8ee3059c3c94 | dcaedokp00| ACTIVE |
> oam_onap_fKHc=10.0.0.14, 192.168.21.204  | es-new-ubuntu-16.04 |
> | 3c1aa9f3-2b13-47f9-bdcd-afee49e94903 | dcaedoks00| ACTIVE |
> oam_onap_fKHc=10.0.0.5, 192.168.21.201   | es-new-ubuntu-16.04 |
>
>
>
> root@onap-dcae-bootstrap:/home/ubuntu# docker logs boot -f
>
> 2018-01-14T03:28:53 CFY  [registrator_b8f59.start] Task
> failed 'dockerplugin.create_and_start_container' -> Failed to find:
> platform_dockerhost
> Traceback (most recent call last):
>   File "/tmp/pip-build-P7Jcnx/cloudify-plugins-common/cloudify/dispatch.py",
> line 596, in main
>   File "/tmp/pip-build-P7Jcnx/cloudify-plugins-common/cloudify/dispatch.py",
> line 366, in handle
>   File "/opt/mgmtworker/env/plugins/dockerplugin-2.4.0/lib/
> python2.7/site-packages/dockerplugin/decorators.py", line 53, in wrapper
> raise RecoverableError(e)
> RecoverableError: Failed to find: platform_dockerhost
>
> + grep cdap
> + curl -Ss http://192.168.21.218:8500/v1/catalog/service/cdap
> + echo -n .
> + sleep 30
>
>
> ubuntu@dcaedokp00:/$ tail -f  /var/log/cloud-init-output.log
> * Failed to resolve cloudify-manager-fKHc: lookup cloudify-manager-fKHc on
> 192.168.21.66:53: read udp 10.0.0.14:59076->192.168.21.66:53: read: no
> route to host)
&

Re: [onap-discuss] [DCAE] problems with dcae bootstrap VM

2018-01-15 Thread Josef Reisinger
Eswar,

maybe this moderately complex procedure helps:

run the boot container until it exits. (not sure it is needed at all)
create a new image from stopped boot container: docker commit boot broken 
# broken is the new image name
create environment variables
NEXUS_USER=$(cat /opt/config/nexus_username.txt)
NEXUS_PASSWORD=$(cat /opt/config/nexus_password.txt)
NEXUS_DOCKER_REPO=$(cat /opt/config/nexus_docker_repo.txt)
DOCKER_VERSION=$(cat /opt/config/docker_version.txt)
# use rand_str as zone
ZONE=$(cat /opt/config/rand_str.txt)
MYFLOATIP=$(cat /opt/config/dcae_float_ip.txt)
MYLOCALIP=$(cat /opt/config/dcae_ip_addr.txt)
run committed image:  docker run -it  --user root  -v 
/opt/app/config:/opt/app/installer/config -e "LOCATION=$ZONE" 
--entrypoint=/bin/bash broken
you need to replace every occurrence of "mkdir" in the file "installer" with 
"mkdir -p"; otherwise the script will fail; maybe you need to install an 
editor 
bash -x installer # this runs now as root .. not sure this has an impact, 
but it is just for testing
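The "mkdir" edit above can also be done without installing an editor; a sed one-liner covers it (a sketch, demonstrated on a stand-in file rather than the real installer script):

```shell
# Rewrite every plain "mkdir " into "mkdir -p " so reruns do not fail when
# a directory already exists. Inside the boot container the target would be
# the "installer" script; here a stand-in file keeps the demo self-contained.
fix_mkdirs() { sed 's/mkdir /mkdir -p /g' "$1"; }

printf 'mkdir /tmp/demo-a\nmkdir /tmp/demo-b\n' > /tmp/installer.demo
fix_mkdirs /tmp/installer.demo
# -> mkdir -p /tmp/demo-a
# -> mkdir -p /tmp/demo-b
```

In place it would be `sed -i 's/mkdir /mkdir -p /g' installer`.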

Let me know whether it worked :-)

Mit freundlichen Grüßen / Kind regards
Josef Reisinger 
IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter 
Geschäftsführung: Martina Koederitz (Vorsitzende), Nicole Reimer, Norbert 
Janzen, Dr. Christian Keller, Ivo Koerner, Stefan Lutz 
Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, 
HRB 14562 / WEEE-Reg.-Nr. DE 99369940 




From:   ESWAR RAO 
To: "JI, LUSHENG (LUSHENG)" 
Cc: "onap-discuss@lists.onap.org" 
Date:   14.01.2018 09:38
Subject:        Re: [onap-discuss] [DCAE] problems with dcae bootstrap VM
Sent by:onap-discuss-boun...@lists.onap.org




Hi Lusheng,

Thanks for the response.

I deleted the boot docker multiple times and at last the bootstrap docker 
was able to spin up all 14 DCAE VMs.

But I still see issues like :

| 23755508-1765-4a01-a023-8ee3059c3c94 | dcaedokp00| ACTIVE | 
oam_onap_fKHc=10.0.0.14, 192.168.21.204  | es-new-ubuntu-16.04 |
| 3c1aa9f3-2b13-47f9-bdcd-afee49e94903 | dcaedoks00| ACTIVE | 
oam_onap_fKHc=10.0.0.5, 192.168.21.201   | es-new-ubuntu-16.04 |



root@onap-dcae-bootstrap:/home/ubuntu# docker logs boot -f

2018-01-14T03:28:53 CFY  [registrator_b8f59.start] Task 
failed 'dockerplugin.create_and_start_container' -> Failed to find: 
platform_dockerhost
Traceback (most recent call last):
  File 
"/tmp/pip-build-P7Jcnx/cloudify-plugins-common/cloudify/dispatch.py", line 
596, in main
  File 
"/tmp/pip-build-P7Jcnx/cloudify-plugins-common/cloudify/dispatch.py", line 
366, in handle
  File 
"/opt/mgmtworker/env/plugins/dockerplugin-2.4.0/lib/python2.7/site-packages/dockerplugin/decorators.py",
 
line 53, in wrapper
raise RecoverableError(e)
RecoverableError: Failed to find: platform_dockerhost

+ grep cdap
+ curl -Ss http://192.168.21.218:8500/v1/catalog/service/cdap
+ echo -n .
+ sleep 30


ubuntu@dcaedokp00:/$ tail -f  /var/log/cloud-init-output.log
* Failed to resolve cloudify-manager-fKHc: lookup cloudify-manager-fKHc on 
192.168.21.66:53: read udp 
10.0.0.14:59076->192.168.21.66:53: read: no route to host)
Failed to join any nodes.
+ echo Waiting to join Consul cluster
Waiting to join Consul cluster
+ sleep 60
...


Can you please help me in resolving them:

The platform/service docker hosts (dokp/doks) are trying to join with 
#/opt/consul/bin/consul join "cloudify-manager-${DATACENTER}"
for which they are trying to access my OpenStack setup controller 
192.168.21.66.

Thanks
Eswar Rao




On Sat, Jan 13, 2018 at 9:35 AM, JI, LUSHENG (LUSHENG) <
l...@research.att.com> wrote:
Eswar,
 
If the container has not exited, you can get into the container as Alexis 
mentioned. The container entry point is a script called 
/opt/app/installer.  This script contains all the steps.
 
If the container has already exited, you can first delete it (sudo docker 
rm boot), then rerun dcae2-vm-init.sh under /opt.
 
Thanks,
Lusheng Ji
 
 
From:  on behalf of ESWAR RAO <
eswar7...@gmail.com>
Date: Friday, January 12, 2018 at 11:16 AM
To: Alexis de Talhouët 
Cc: "onap-discuss@lists.onap.org" 
Subject: Re: [onap-discuss] [DCAE] problems with dcae bootstrap VM
 
Thanks Alexis for the response.
 
As you know, after creating the dcaeorcI00 VM the dcae-bootstrap docker 
tries to install packages, but upon install failure it rolls back using 
uninstall and the docker container is killed.

Please let me know how we can bypass the docker container getting 
killed/removed so that we can run the cfy scripts manually.
 
Thanks
Eswar Rao
 
 
 
 
On 12 Jan 2018 8:28 pm, "Alexis de Talhouët"  
wrote:
Hi, Here is an example I faced, maybe it can help you.
To debug, I have done the following:
1.  go in the boot container
docker exec -it boot bash
2.  Activate the virtual environment created by the installer
source dcaeinsta

Re: [onap-discuss] [DCAE] problems with dcae bootstrap VM

2018-01-14 Thread ESWAR RAO
Hi Lusheng,

Thanks for the response.

I deleted the boot docker multiple times and at last the bootstrap docker
was able to spin up all 14 DCAE VMs.

But I still see issues like :

| 23755508-1765-4a01-a023-8ee3059c3c94 | dcaedokp00| ACTIVE |
oam_onap_fKHc=10.0.0.14, 192.168.21.204  | es-new-ubuntu-16.04 |
| 3c1aa9f3-2b13-47f9-bdcd-afee49e94903 | dcaedoks00| ACTIVE |
oam_onap_fKHc=10.0.0.5, 192.168.21.201   | es-new-ubuntu-16.04 |



root@onap-dcae-bootstrap:/home/ubuntu# docker logs boot -f

2018-01-14T03:28:53 CFY  [registrator_b8f59.start] Task
failed 'dockerplugin.create_and_start_container' -> Failed to find:
platform_dockerhost
Traceback (most recent call last):
  File
"/tmp/pip-build-P7Jcnx/cloudify-plugins-common/cloudify/dispatch.py", line
596, in main
  File
"/tmp/pip-build-P7Jcnx/cloudify-plugins-common/cloudify/dispatch.py", line
366, in handle
  File
"/opt/mgmtworker/env/plugins/dockerplugin-2.4.0/lib/python2.7/site-packages/dockerplugin/decorators.py",
line 53, in wrapper
raise RecoverableError(e)
RecoverableError: Failed to find: platform_dockerhost

+ grep cdap
+ curl -Ss http://192.168.21.218:8500/v1/catalog/service/cdap
+ echo -n .
+ sleep 30


ubuntu@dcaedokp00:/$ tail -f  /var/log/cloud-init-output.log
* Failed to resolve cloudify-manager-fKHc: lookup cloudify-manager-fKHc on
192.168.21.66:53: read udp
10.0.0.14:59076->192.168.21.66:53: read: no route to host)
Failed to join any nodes.
+ echo Waiting to join Consul cluster
Waiting to join Consul cluster
+ sleep 60
...


Can you please help me in resolving them:

The platform/service docker hosts (dokp/doks) are trying to join with
#/opt/consul/bin/consul join "cloudify-manager-${DATACENTER}"
for which they are trying to access my OpenStack setup controller 192.168.21.66.
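The "no route to host" during the consul join usually comes down to UDP/DNS reachability of the controller; a few checks, sketched with the addresses from this thread (substitute your own):

```shell
# Addresses taken from this thread; replace with your environment's values.
DNS_SERVER="192.168.21.66"
TARGET_NAME="cloudify-manager-fKHc"

# 1) Is UDP/53 to the resolver reachable? (security groups often block it)
nc -zvu -w 3 "$DNS_SERVER" 53 2>&1 || echo "cannot reach $DNS_SERVER:53"

# 2) Does the name resolve when the resolver is asked directly?
#    dig @"$DNS_SERVER" "$TARGET_NAME" +short

# 3) Does the local consul agent see any peers?
#    /opt/consul/bin/consul members
```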

Thanks
Eswar Rao




On Sat, Jan 13, 2018 at 9:35 AM, JI, LUSHENG (LUSHENG)  wrote:

> Eswar,
>
>
>
> If the container has not exited, you can get into the container as Alexis
> mentioned. The container entry point is a script called
> /opt/app/installer.  This script contains all the steps.
>
>
>
> If the container has already exited, you can first delete it (sudo docker
> rm boot), then rerun dcae2-vm-init.sh under /opt.
>
>
>
> Thanks,
>
> Lusheng Ji
>
>
>
>
>
> *From: * on behalf of ESWAR RAO <
> eswar7...@gmail.com>
> *Date: *Friday, January 12, 2018 at 11:16 AM
> *To: *Alexis de Talhouët 
> *Cc: *"onap-discuss@lists.onap.org" 
> *Subject: *Re: [onap-discuss] [DCAE] problems with dcae bootstrap VM
>
>
>
> Thanks Alexis for the response.
>
>
>
> As you know, after creating the dcaeorcI00 VM the dcae-bootstrap docker
> tries to install packages, but upon install failure it rolls back using
> uninstall and the docker container is killed.
>
>
>
> Please let me know how we can bypass the docker container getting
> killed/removed so that we can run the cfy scripts manually.
>
>
>
> Thanks
>
> Eswar Rao
>
>
>
>
>
>
>
>
>
> On 12 Jan 2018 8:28 pm, "Alexis de Talhouët" 
> wrote:
>
> Hi, Here is an example I faced, maybe it can help you.
>
> To debug, I have done the following:
>
>    1. go in the boot container
>
> docker exec -it boot bash
>
>    2. Activate the virtual environment created by the installer
>
> source dcaeinstall/bin/activate
>
>    3. Run the command provided in the failed execution logs to see the
>    logs output which might point to the failure
>
>
> (dcaeinstall) installer@e6120e566d15:~/consul$ cfy events list --tail
> --include-logs --execution-id 3b9a6058-3b35-4406-a077-8430fff5a518
>
> Listing events for execution id 3b9a6058-3b35-4406-a077-8430fff5a518
> [include_logs=True]
>
> Execution of workflow install for deployment ves failed. [error=Traceback
> (most recent call last):
>
>   File "/tmp/pip-build-c5GA7o/cloudify-plugins-common/cloudify/dispatch.py",
> line 472, in _remote_workflow_child_thread
>
>   File "/tmp/pip-build-c5GA7o/cloudify-plugins-common/cloudify/dispatch.py",
> line 504, in _execute_workflow_function
>
>   File "/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/plugins/workflows.py", line 27, in install
>
> node_instances=set(ctx.node_instances))
>
>   File "/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/plugins/lifecycle.py", line 28, in install_node_instances
>
> processor.install()
>
>   File "/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/plugins/lifecycle.py", line 83, in install
>
> graph_finisher_func=self._finish_install)
>
>   File "/opt/mgmtworker/env/lib/pyth

Re: [onap-discuss] [DCAE] problems with dcae bootstrap VM

2018-01-12 Thread JI, LUSHENG (LUSHENG)
Eswar,

If the container has not exited, you can get into the container as Alexis 
mentioned. The container entry point is a script called /opt/app/installer.  
This script contains all the steps.

If the container has already exited, you can first delete it (sudo docker rm 
boot), then rerun dcae2-vm-init.sh under /opt.

Thanks,
Lusheng Ji


From:  on behalf of ESWAR RAO 

Date: Friday, January 12, 2018 at 11:16 AM
To: Alexis de Talhouët 
Cc: "onap-discuss@lists.onap.org" 
Subject: Re: [onap-discuss] [DCAE] problems with dcae bootstrap VM

Thanks Alexis for the response.

As you know, after creating the dcaeorcI00 VM the dcae-bootstrap docker 
tries to install packages, but upon install failure it rolls back using 
uninstall and the docker container is killed.

Please let me know how we can bypass the docker container getting 
killed/removed so that we can run the cfy scripts manually.

Thanks
Eswar Rao




On 12 Jan 2018 8:28 pm, "Alexis de Talhouët" 
mailto:adetalhoue...@gmail.com>> wrote:

Hi, Here is an example I faced, maybe it can help you.

To debug, I have done the following:

  1.  go in the boot container
docker exec -it boot bash


  2.  Activate the virtual environment created by the installer
source dcaeinstall/bin/activate


  3.  Run the command provided in the failed execution logs to see the logs 
output which might point to the failure
(dcaeinstall) installer@e6120e566d15:~/consul$ cfy events list --tail 
--include-logs --execution-id 3b9a6058-3b35-4406-a077-8430fff5a518
Listing events for execution id 3b9a6058-3b35-4406-a077-8430fff5a518 
[include_logs=True]
Execution of workflow install for deployment ves failed. [error=Traceback (most 
recent call last):
  File "/tmp/pip-build-c5GA7o/cloudify-plugins-common/cloudify/dispatch.py", 
line 472, in _remote_workflow_child_thread
  File "/tmp/pip-build-c5GA7o/cloudify-plugins-common/cloudify/dispatch.py", 
line 504, in _execute_workflow_function
  File 
"/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/plugins/workflows.py",
 line 27, in install
node_instances=set(ctx.node_instances))
  File 
"/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/plugins/lifecycle.py",
 line 28, in install_node_instances
processor.install()
  File 
"/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/plugins/lifecycle.py",
 line 83, in install
graph_finisher_func=self._finish_install)
  File 
"/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/plugins/lifecycle.py",
 line 103, in _process_node_instances
self.graph.execute()
  File 
"/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/workflows/tasks_graph.py",
 line 133, in execute
self._handle_terminated_task(task)
  File 
"/opt/mgmtworker/env/lib/python2.7/site-packages/cloudify/workflows/tasks_graph.py",
 line 207, in _handle_terminated_task
raise RuntimeError(message)
RuntimeError: Workflow failed: Task failed 
'dockerplugin.create_and_start_container_for_components_with_streams' -> 500 
Server Error: Internal Server Error ("{"message":"Get 
https://nexus3.onap.org:10001/v2/onap/org.onap.dcaegen2.collectors.ves.vescollector/manifests/v1.1.0:
 read tcp 10.0.0.13:46574-\u003e199.204.45.137:10001: read: no route to host"}")
]


Unfortunately for me, Nexus decided to crap out on me at the exact time it 
tried to query it...
Anyway, try to understand the issue here, and/or send the output to the 
mailing list. In my case, I uninstalled and re-installed the ves and it 
worked.

  4.  Uninstall the failed deployment
(dcaeinstall) installer@e6120e566d15:~/consul$ cfy uninstall  -d ves


  5.  Install the deployment
(dcaeinstall) installer@e6120e566d15:~/consul$ cfy install -p 
./blueprints/ves/ves.yaml -b ves -d ves -i ../config/vesinput.yaml

Thanks,
Alexis



On Jan 12, 2018, at 5:51 AM, ESWAR RAO 
mailto:eswar7...@gmail.com>> wrote:

Hi All,

I am using ONAP amsterdam release.

I am facing problems with DCAE bootstrapVM.



Can someone please help me resolve the issue?



# docker logs boot

2018-01-12 10:15:04 CFY  'install' workflow execution succeeded

Plugin validated successfully

Downloading from
https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.plugins/releases/plugins/relationshipplugin/relationshipplugin-1.0.0-py27-none-any.wgn

Re: [onap-discuss] [DCAE] problems with dcae bootstrap VM

2018-01-12 Thread ESWAR RAO
Thanks Alexis for the response.

As you know, after creating the dcaeorcI00 VM the dcae-bootstrap docker
tries to install packages, but upon install failure it rolls back using
uninstall and the docker container is killed.

Please let me know how we can bypass the docker container getting
killed/removed so that we can run the cfy scripts manually.

Thanks
Eswar Rao




On 12 Jan 2018 8:28 pm, "Alexis de Talhouët" 
wrote:

Hi, Here is an example I faced, maybe it can help you.

To debug, I have done the following:

   1. go in the boot container
      docker exec -it boot bash
   2. Activate the virtual environment created by the installer
      source dcaeinstall/bin/activate
   3. Run the command provided in the failed execution logs to see the logs
      output which might point to the failure
   (dcaeinstall) installer@e6120e566d15:~/consul$ cfy events list --tail
   --include-logs --execution-id 3b9a6058-3b35-4406-a077-8430fff5a518
   Listing events for execution id 3b9a6058-3b35-4406-a077-8430fff5a518
   [include_logs=True]
   Execution of workflow install for deployment ves failed.
   [error=Traceback (most recent call last):
 File "/tmp/pip-build-c5GA7o/cloudify-plugins-common/
   cloudify/dispatch.py", line 472, in _remote_workflow_child_thread
 File "/tmp/pip-build-c5GA7o/cloudify-plugins-common/
   cloudify/dispatch.py", line 504, in _execute_workflow_function
 File "/opt/mgmtworker/env/lib/python2.7/site-packages/
   cloudify/plugins/workflows.py", line 27, in install
   node_instances=set(ctx.node_instances))
 File "/opt/mgmtworker/env/lib/python2.7/site-packages/
   cloudify/plugins/lifecycle.py", line 28, in install_node_instances
   processor.install()
 File "/opt/mgmtworker/env/lib/python2.7/site-packages/
   cloudify/plugins/lifecycle.py", line 83, in install
   graph_finisher_func=self._finish_install)
 File "/opt/mgmtworker/env/lib/python2.7/site-packages/
   cloudify/plugins/lifecycle.py", line 103, in _process_node_instances
   self.graph.execute()
 File "/opt/mgmtworker/env/lib/python2.7/site-packages/
   cloudify/workflows/tasks_graph.py", line 133, in execute
   self._handle_terminated_task(task)
 File "/opt/mgmtworker/env/lib/python2.7/site-packages/
   cloudify/workflows/tasks_graph.py", line 207, in _handle_terminated_task
   raise RuntimeError(message)
   RuntimeError: Workflow failed: Task failed
   'dockerplugin.create_and_start_container_for_components_with_streams'
   -> 500 Server Error: Internal Server Error ("{"message":"Get
   https://nexus3.onap.org:10001/v2/onap/org.onap.dcaegen2.collectors.ves.vescollector/manifests/v1.1.0:
   read tcp 10.0.0.13:46574-\u003e199.204.45.137:10001: read: no route to
   host"}")
   ]

   Unfortunately for me, Nexus decided to crap out on me at the exact time
   it tried to query it...
   Anyway, try to understand the issue here, and/or send the output to the
   mailing list. In my case, I uninstalled and re-installed the ves and
   it worked.
   4.

   Uninstall the failed deployment
   (dcaeinstall) installer@e6120e566d15:~/consul$ cfy uninstall  -d ves

   5.

   Install the deployment
   (dcaeinstall) installer@e6120e566d15:~/consul$ cfy install -p
   ./blueprints/ves/ves.yaml -b ves -d ves -i ../config/vesinput.yaml


Thanks,
Alexis


On Jan 12, 2018, at 5:51 AM, ESWAR RAO  wrote:

Hi All,

I am using ONAP amsterdam release.

I am facing problems with DCAE bootstrapVM.



Can someone please help me resolve the issue?




# docker logs boot

2018-01-12 10:15:04 CFY  'install' workflow execution succeeded

Plugin validated successfully

Downloading from
https://nexus.onap.org/service/local/repositories/
raw/content/org.onap.dcaegen2.platform.plugins/releases/
plugins/relationshipplugin/relationshipplugin-1.0.0-py27-none-any.wgn to
/tmp/tmpsd_8XB/relationshipplugin-1.0.0-py27-none-any.wgn
Bootstrap failed! (HTTPSConnectionPool(host='nexus.onap.org', port=443):
Read timed out.)
Executing teardown due to failed bootstrap...

2018-01-12 10:17:10 CFY  Starting 'uninstall' workflow execution
2018-01-12 10:17:37 CFY  'uninstall' workflow execution succeeded



I think for a small intermittent time-out, the uninstall workflow kicks in
and the docker container is killed.

root@onap-dcae-bootstrap:/home/ubuntu# docker ps -a
CONTAINER IDIMAGE
 COMMAND  CREATED STATUS
  PORTS   NAMES
0f489b90232fnexus3.onap.org:10001/onap/
org.onap.dcaegen2.deployments.bootstrap:v1.1.0   "/bin/sh -c 'exec ..."   2
hours ago Exited (1) 35 minutes 


[onap-discuss] [DCAE] problems with dcae bootstrap VM

2018-01-12 Thread ESWAR RAO
Hi All,

I am using the ONAP Amsterdam release.

I am facing problems with the DCAE bootstrap VM.

Can someone please help me resolve the issue?



# docker logs boot

2018-01-12 10:15:04 CFY  'install' workflow execution succeeded

Plugin validated successfully

Downloading from
https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.plugins/releases/plugins/relationshipplugin/relationshipplugin-1.0.0-py27-none-any.wgn
to
/tmp/tmpsd_8XB/relationshipplugin-1.0.0-py27-none-any.wgn
Bootstrap failed! (HTTPSConnectionPool(host='nexus.onap.org', port=443):
Read timed out.)
Executing teardown due to failed bootstrap...

2018-01-12 10:17:10 CFY  Starting 'uninstall' workflow execution
2018-01-12 10:17:37 CFY  'uninstall' workflow execution succeeded
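The log above matches a bootstrap-then-teardown pattern: when any install step fails, the whole environment is rolled back via the 'uninstall' workflow and the boot container exits. A minimal sketch of that control flow, using stub functions rather than the actual Cloudify bootstrap code:

```python
class BootstrapError(Exception):
    pass

def bootstrap(install_steps, teardown):
    """Run install steps in order; on any failure, tear the environment down."""
    try:
        for step in install_steps:
            step()
    except Exception as exc:
        teardown()  # mirrors the automatic 'uninstall' workflow in the log
        raise BootstrapError("Bootstrap failed! ({0})".format(exc))

# Simulated run with stub steps: the plugin download times out.
events = []

def download_plugin():
    raise IOError("Read timed out.")

try:
    bootstrap([lambda: events.append("install ok"), download_plugin],
              teardown=lambda: events.append("uninstall executed"))
except BootstrapError as err:
    events.append(str(err))

print(events)
```

This is why a single transient Nexus timeout is enough to kill all the work the bootstrap had already completed.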


I think that for a small intermittent time-out, the uninstall workflow kicks
in and the Docker container is killed.
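If the time-out really is transient, retrying the download a few times before declaring the bootstrap failed would avoid the teardown. A generic retry-with-backoff sketch (not the actual bootstrap code; attempt counts and delays are arbitrary, and the flaky download is simulated):

```python
import time

def retry(func, attempts=3, delay=1.0, backoff=2.0, retriable=(IOError,)):
    """Call func, retrying transient errors with exponential backoff."""
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except retriable:
            if attempt == attempts:
                raise  # out of attempts: let the caller fail (and tear down)
            time.sleep(delay)
            delay *= backoff

# Simulated flaky download: times out twice, then succeeds.
calls = {"n": 0}

def flaky_download():
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("Read timed out.")
    return "relationshipplugin-1.0.0-py27-none-any.wgn"

result = retry(flaky_download, attempts=3, delay=0.01)
print(result)
```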

root@onap-dcae-bootstrap:/home/ubuntu# docker ps -a
CONTAINER ID   IMAGE                                                                       COMMAND                  CREATED       STATUS                      PORTS   NAMES
0f489b90232f   nexus3.onap.org:10001/onap/org.onap.dcaegen2.deployments.bootstrap:v1.1.0   "/bin/sh -c 'exec ..."   2 hours ago   Exited (1) 35 minutes ago           boot
root@onap-dcae-bootstrap:/home/ubuntu#

| 596424a8-f38d-4dac-86a2-04c63603dbd0 | dcaeorcl00  | ACTIVE | oam_onap_fKHc=10.0.0.5, 192.168.21.220   | centos-7 |

[root@dcaeorcl00 centos]# wget
https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.plugins/releases/plugins/relationshipplugin/relationshipplugin-1.0.0-py27-none-any.wgn
--2018-01-12 10:36:10--
https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.plugins/releases/plugins/relationshipplugin/relationshipplugin-1.0.0-py27-none-any.wgn
Resolving nexus.onap.org (nexus.onap.org)... 199.204.45.137,
2604:e100:1:0:f816:3eff:fefb:56ed
Connecting to nexus.onap.org (nexus.onap.org)|199.204.45.137|:443...
connected.
HTTP request sent, awaiting response... 200 OK
Length: 771113 (753K) [application/octet-stream]
Saving to: ‘relationshipplugin-1.0.0-py27-none-any.wgn’

100%[==========================================>] 7,71,113    16.8KB/s   in 45s

2018-01-12 10:36:56 (16.8 KB/s) - ‘relationshipplugin-1.0.0-py27-none-any.wgn’ saved [771113/771113]

[root@dcaeorcl00 centos]#

Is there any way I can install the DCAE VMs manually, or retry with the
Cloudify scripts?


Thanks
Eswar Rao
___
onap-discuss mailing list
onap-discuss@lists.onap.org
https://lists.onap.org/mailman/listinfo/onap-discuss