Hi Michael

Here is the environment info:
Distributor ID: Ubuntu
Description:    Ubuntu 14.04.5 LTS
Release:        14.04
Codename:       trusty

I didn't check the pod status until after I saw the error message. It seems this
IPv6 error didn't stop the pod-creation process; maybe it only means that no pod
gets an IPv6 address.
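To verify that, something like this should show whether the pods received IPv6
addresses (a sketch; the pod name is just one example from the list below, and
the second command assumes iproute2 is present in the image):

    # list pod IPs across all namespaces
    kubectl get pods --all-namespaces -o wide
    # inspect the addresses inside one pod
    kubectl exec -n onap-mso mso-2505152907-8jqrm -- ip -6 addr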
Here is the pod status:
NAMESPACE             NAME                                   READY     STATUS             RESTARTS   AGE
default               nginx-deployment-431080787-6chxv       1/1       Running            0          1d
default               nginx-deployment-431080787-9nswb       1/1       Running            0          1d
kube-system           heapster-4285517626-7vnf3              1/1       Running            0          7d
kube-system           kube-dns-646531078-x4h83               3/3       Running            0          7d
kube-system           kubernetes-dashboard-716739405-6pz6n   1/1       Running            20         7d
kube-system           monitoring-grafana-3552275057-03wqg    1/1       Running            0          7d
kube-system           monitoring-influxdb-4110454889-527nw   1/1       Running            0          7d
kube-system           tiller-deploy-737598192-5d3hc          1/1       Running            0          7d
onap                  config-init                            0/1       Completed          0          1d
onap-aai              aai-dmaap-522748218-2jc3p              1/1       Running            0          1d
onap-aai              aai-kafka-2485280328-3kd9x             1/1       Running            0          1d
onap-aai              aai-resources-353718113-3h81b          1/1       Running            0          1d
onap-aai              aai-service-3321436576-twmwf           1/1       Running            0          1d
onap-aai              aai-traversal-338636328-vxxd3          1/1       Running            0          1d
onap-aai              aai-zookeeper-1010977228-xtflg         1/1       Running            0          1d
onap-aai              data-router-1397019010-k0w40           1/1       Running            0          1d
onap-aai              elasticsearch-2660384851-gnn3w         1/1       Running            0          1d
onap-aai              gremlin-3971586470-rn4p1               0/1       CrashLoopBackOff   246        1d
onap-aai              hbase-3880914143-kj4r2                 1/1       Running            0          1d
onap-aai              model-loader-service-226363973-g5bdv   1/1       Running            0          1d
onap-aai              search-data-service-1212351515-88hwk   1/1       Running            0          1d
onap-aai              sparky-be-2088640323-k7vvs             1/1       Running            0          1d
onap-appc             appc-1972362106-m9hxv                  1/1       Running            0          1d
onap-appc             appc-dbhost-2280647936-4bv02           1/1       Running            0          1d
onap-appc             appc-dgbuilder-2616852186-l39zn        1/1       Running            0          1d
onap-message-router   dmaap-3565545912-6dvhm                 1/1       Running            0          1d
onap-message-router   global-kafka-701218468-nlqf2           1/1       Running            0          1d
onap-message-router   zookeeper-555686225-qsnsw              1/1       Running            0          1d
onap-mso              mariadb-2814112212-sd34k               1/1       Running            0          1d
onap-mso              mso-2505152907-8jqrm                   1/1       Running            0          1d
onap-policy           brmsgw-362208961-nvpsn                 1/1       Running            0          1d
onap-policy           drools-3066421234-cb2gs                1/1       Running            0          1d
onap-policy           mariadb-2520934092-xvj8r               1/1       Running            0          1d
onap-policy           nexus-3248078429-qhqrc                 1/1       Running            0          1d
onap-policy           pap-4199568361-j4sf6                   1/1       Running            0          1d
onap-policy           pdp-785329082-qs52n                    1/1       Running            0          1d
onap-policy           pypdp-3381312488-lgf9d                 1/1       Running            0          1d
onap-portal           portalapps-2799319019-m30kb            1/1       Running            0          1d
onap-portal           portaldb-1564561994-bn70l              1/1       Running            0          1d
onap-portal           portalwidgets-1728801515-kvlzt         1/1       Running            0          1d
onap-portal           vnc-portal-700404418-g6vsq             0/1       Init:2/5           142        1d
onap-robot            robot-349535534-5q1zz                  1/1       Running            0          1d
onap-sdc              sdc-be-628593118-dvvm5                 0/1       Running            0          1d
onap-sdc              sdc-cs-2640808243-v35fh                1/1       Running            0          1d
onap-sdc              sdc-es-227943957-lwrpb                 1/1       Running            0          1d
onap-sdc              sdc-fe-1609420241-r1n3d                0/1       Init:0/1           143        1d
onap-sdc              sdc-kb-1998598941-7p4n3                1/1       Running            0          1d
onap-sdnc             sdnc-250717546-qbt17                   1/1       Running            0          1d
onap-sdnc             sdnc-dbhost-3807967487-l7lq1           1/1       Running            0          1d
onap-sdnc             sdnc-dgbuilder-3446959187-dr427        1/1       Running            0          1d
onap-sdnc             sdnc-portal-4253352894-x6gh7           1/1       Running            0          1d
onap-vid              vid-mariadb-2932072366-t8sn1           1/1       Running            0          1d
onap-vid              vid-server-377438368-6fpr1             1/1       Running            0          1d
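Note the unhealthy pods: gremlin is in CrashLoopBackOff with 246 restarts,
vnc-portal is stuck at Init:2/5, sdc-fe at Init:0/1, and sdc-be is Running but
0/1 Ready. The usual way to dig into those would be something like this
(a sketch, using names from the table above):

    # events and state for the crash-looping gremlin pod
    kubectl describe pod gremlin-3971586470-rn4p1 -n onap-aai
    # logs from the previous (crashed) container instance
    kubectl logs gremlin-3971586470-rn4p1 -n onap-aai --previous
    # which init container is vnc-portal waiting on?
    kubectl describe pod vnc-portal-700404418-g6vsq -n onap-portal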

Here are the IPv6 settings of my server anyway:
net.ipv6.conf.all.accept_dad = 1
net.ipv6.conf.all.accept_ra = 1
net.ipv6.conf.all.accept_ra_defrtr = 1
net.ipv6.conf.all.accept_ra_from_local = 0
net.ipv6.conf.all.accept_ra_pinfo = 1
net.ipv6.conf.all.accept_ra_rt_info_max_plen = 0
net.ipv6.conf.all.accept_ra_rtr_pref = 1
net.ipv6.conf.all.accept_redirects = 1
net.ipv6.conf.all.accept_source_route = 0
net.ipv6.conf.all.autoconf = 1
net.ipv6.conf.all.dad_transmits = 1
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.all.force_mld_version = 0
net.ipv6.conf.all.force_tllao = 0
net.ipv6.conf.all.forwarding = 0
net.ipv6.conf.all.hop_limit = 64
net.ipv6.conf.all.max_addresses = 16
net.ipv6.conf.all.max_desync_factor = 600
net.ipv6.conf.all.mc_forwarding = 0
net.ipv6.conf.all.mldv1_unsolicited_report_interval = 10000
net.ipv6.conf.all.mldv2_unsolicited_report_interval = 1000
net.ipv6.conf.all.mtu = 1280
net.ipv6.conf.all.ndisc_notify = 0
net.ipv6.conf.all.proxy_ndp = 0
net.ipv6.conf.all.regen_max_retry = 3
net.ipv6.conf.all.router_probe_interval = 60
net.ipv6.conf.all.router_solicitation_delay = 1
net.ipv6.conf.all.router_solicitation_interval = 4
net.ipv6.conf.all.router_solicitations = 3
net.ipv6.conf.all.suppress_frag_ndisc = 1
net.ipv6.conf.all.temp_prefered_lft = 86400
net.ipv6.conf.all.temp_valid_lft = 604800
net.ipv6.conf.all.use_tempaddr = 2
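These are only the net.ipv6.conf.all.* values; since the failed bind was on
[::1], the loopback interface is probably worth checking directly (a sketch):

    # loopback can be disabled independently of "all"
    sysctl net.ipv6.conf.lo.disable_ipv6
    # ::1/128 should be listed here if IPv6 loopback is up
    ip -6 addr show dev lo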

Thanks
Harry

From: Michael O'Brien [mailto:frank.obr...@amdocs.com]
Sent: September 7, 2017 19:28
To: Tina Tsou <tina.t...@arm.com>; huangxiangyu <huangxiang...@huawei.com>
Cc: opnfv-tech-discuss@lists.opnfv.org; onap-disc...@lists.onap.org
Subject: RE: [opnfv-tech-discuss] [Auto] Error when create onap pods using kubernetes

Tina, Harry,
   Hi, sorry to hear that.  Could you post your environment (Ubuntu 16.04?)
   We can then compare network setups, as I have not seen an IPv6 issue yet.
   Are you seeing all 6 pods of the k8s/rancher stack?
   Post your results from:
   kubectl get pods --all-namespaces -a
   You should see all 1/1 or 3/3.
   There are sometimes issues with a clustered rancher setup where the dns pod
is 0/1 above.
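   The six kube-system pods can be listed with (a sketch):
   # heapster, kube-dns, kubernetes-dashboard, monitoring-grafana,
   # monitoring-influxdb and tiller-deploy should all be Running
   kubectl get pods -n kube-system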

    Thank you
    /michael

From: Tina Tsou [mailto:tina.t...@arm.com]
Sent: Thursday, September 7, 2017 03:31
To: huangxiangyu <huangxiang...@huawei.com>; Michael O'Brien <frank.obr...@amdocs.com>
Cc: opnfv-tech-discuss@lists.opnfv.org
Subject: Re: [opnfv-tech-discuss] [Auto] Error when create onap pods using kubernetes

Dear Frank,

Would you like to help here?

Thank you,
Tina

On Sep 6, 2017, at 7:03 PM, huangxiangyu <huangxiang...@huawei.com> wrote:
Hi Tina

Here is the error log of the main issue I hit when performing the ONAP deploy
according to https://wiki.onap.org/display/DW/ONAP+on+Kubernetes.

Creating deployments and services **********
E0907 01:24:17.232548   42374 portforward.go:209] Unable to create listener: Error listen tcp6 [::1]:51444: bind: cannot assign requested address
NAME:   mso
LAST DEPLOYED: Thu Sep  7 01:24:38 2017
NAMESPACE: onap
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME     CLUSTER-IP    EXTERNAL-IP  PORT(S)                                                                      AGE
mariadb  10.43.13.178  <nodes>      3306:30252/TCP                                                               0s
mso      10.43.53.224  <nodes>      8080:30223/TCP,3904:30225/TCP,3905:30224/TCP,9990:30222/TCP,8787:30250/TCP  0s

==> extensions/v1beta1/Deployment
NAME     DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
mariadb  1        1        1           0          0s
mso      1        1        1           0          0s
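The portforward.go line suggests the message comes from the local port-forward
that helm opens to reach tiller: it tries to listen on both 127.0.0.1 and ::1,
and only the IPv6 loopback bind fails, which would explain why the release
still shows DEPLOYED. A quick check (a sketch; if ::1 is missing, any tcp6
loopback bind fails the same way):

    # ::1/128 should appear here
    ip -6 addr show dev lo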

I tried to enable IPv6 on the host server, but that still didn't fix the error.
Maybe you can offer some help with this?
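For reference, enabling IPv6 would look roughly like this (a sketch; I'm not
certain these exact sysctls are the right fix here):

    # check whether IPv6 was disabled on the kernel command line
    grep -o 'ipv6.disable=1' /proc/cmdline
    # re-enable IPv6 at runtime, including on loopback
    sudo sysctl -w net.ipv6.conf.all.disable_ipv6=0
    sudo sysctl -w net.ipv6.conf.default.disable_ipv6=0
    sudo sysctl -w net.ipv6.conf.lo.disable_ipv6=0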

Thanks
Harry