Re: [onap-discuss] need your help

2018-06-14 Thread Vijendra Singh Rajput
Hi Michael,

Thanks for clarification.

Thanks,
Vijendra



From: Michael O'Brien 
Sent: Thursday, June 14, 2018 10:33:13 PM
To: Vijendra Singh Rajput
Cc: onap-discuss@lists.onap.org
Subject: RE: need your help

Hi, the Amsterdam release is essentially deprecated for several reasons: it is
a hybrid Kubernetes/Heat deployment and it uses an old Docker version (1.12).
If you don't specifically need Amsterdam, I recommend moving to Beijing or
master (Casablanca).

I personally did not have any issues with AWS. I mostly stuck to R4 instances,
but only ones with EBS. For Beijing I use an EFS/NFS wrapper for the shared
drive.

There is a small chance you are compromised.
If you are on a public cloud, either run in a private VPC, set your security
group (SG) to block ports 10249-10255 from outside the subnet, or set the
OAuth admin security to your GitHub account.
Also check with df that the drive is not saturated.
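
For example, with the AWS CLI something like the following restricts those ports
to the subnet (security groups are default-deny, so "blocking from outside"
amounts to only allowing the ports from the subnet CIDR) and checks the disk;
the security-group ID and CIDR are placeholders:

    # allow the kubelet/insecure kubernetes ports only from inside the subnet
    aws ec2 authorize-security-group-ingress --group-id sg-XXXXXXXX \
        --protocol tcp --port 10249-10255 --cidr 10.0.0.0/24
    # confirm the shared drive is not saturated
    df -h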

I would not be able to triage your Docker-specific issue without more details
on which pod is causing this. However, right now we are primarily focused on
working out issues with the latest releases, so switch to a later release,
post to this list, and any number of ONAP enthusiasts will assist.
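
When you do repost, a quick way to see which pod or container is pegging the
CPU (the first command assumes a metrics add-on such as heapster or
metrics-server is installed):

    kubectl top pods --all-namespaces   # per-pod CPU/memory across the cluster
    docker stats --no-stream            # per-container CPU/memory on the node itself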


Thank you
/michael

From: Vijendra Singh Rajput [mailto:vijendra_raj...@infosys.com]
Sent: Thursday, June 14, 2018 2:53 AM
To: Michael O'Brien 
Subject: need your help

Hello Michael,

I deployed the ONAP Amsterdam release on an r4.4xlarge EC2 instance. All pods
are in the Running state, and the health test passes except for DCAE, which I
am not concerned about at the moment.

I am facing one issue. After all pods start successfully and run for 3-4
hours, a process "/tmp/docker -c /tmp/k.conf" starts and uses 100% CPU,
leaving no resources. This pushes the cluster into a panic state and all pods
go into Pending or Init states; even the tiller-deploy pod goes into a restart
loop, which prevents cleaning up dead pods with the OOM scripts. Sometimes
after I kill it the process goes away and the cluster returns to a healthy
state, but the process is eventually triggered again automatically.
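
For reference, this is roughly how such a process can be located and tied back
to a container, if any (a sketch only; the container ID comes from the cgroup
entry):

    PID=$(pgrep -f '/tmp/docker -c /tmp/k.conf' | head -1)
    ps -fp "$PID"                  # confirm the command line and owner
    cat /proc/$PID/cgroup          # a .../docker/<container-id> entry names the owning container
    docker ps --no-trunc | grep <container-id>   # map that ID back to a running container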

Could you please help me understand what that process is and why it keeps
starting again and again even after I kill it?

Your help is really appreciated.

Thanks,

Vijendra
___
onap-discuss mailing list
onap-discuss@lists.onap.org
https://lists.onap.org/mailman/listinfo/onap-discuss

