Finally, I figured it out.

As per Vishnu's suggestion, Salt is expected to copy the content from the 
master.

So I downloaded the Kubernetes source, applied my changes, and compiled it 
with "make release". However, the build was not successful.

Instead, here is what I tried:

1. Untar the Kubernetes release tarball.
2. Go into the "server" folder.
3. Untar kubernetes-salt.tgz.
4. Make my changes and repack the salt tgz.
5. Run kube-up.sh.
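The repack in steps 2–4 can be sketched as a small shell script. This is only a minimal, self-contained sketch: it fabricates a stand-in `server/kubernetes-salt.tgz` first so it can run anywhere, and the inner directory and YAML file names (`salt-src`, `example.yaml`) are illustrations, not the real release layout — in a real release you would edit the actual salt files extracted from kubernetes-salt.tgz. It also assumes GNU sed for `sed -i`.

```shell
#!/bin/sh
# Sketch of the untar -> edit -> repack workaround above.
# All names under demo/ are stand-ins so the sketch is self-contained.
set -eu

SRV=demo/kubernetes/server

# Fabricate a stand-in release tarball: server/kubernetes-salt.tgz
mkdir -p "$SRV/salt-src/saltbase"
echo "emptyDir: {}" > "$SRV/salt-src/saltbase/example.yaml"
tar -czf "$SRV/kubernetes-salt.tgz" -C "$SRV" salt-src
rm -rf "$SRV/salt-src"

# Step 3: untar the salt tree inside the "server" folder.
tar -xzf "$SRV/kubernetes-salt.tgz" -C "$SRV"

# Step 4: apply the edit (here: swap emptyDir for an EBS stanza; GNU sed).
sed -i 's/emptyDir: {}/awsElasticBlockStore: {}/' \
    "$SRV/salt-src/saltbase/example.yaml"

# Step 4 (cont.): repack the salt tgz in place.
tar -czf "$SRV/kubernetes-salt.tgz" -C "$SRV" salt-src

# Step 5 would be running cluster/kube-up.sh (not run in this sketch).

# Show that the repacked tarball now carries the change.
tar -xzOf "$SRV/kubernetes-salt.tgz" salt-src/saltbase/example.yaml
```

The point of the repack is that kube-up.sh ships whatever is inside kubernetes-salt.tgz to the master, so edits made there survive, while edits made on the master itself get overwritten by Salt.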

My EBS volume is now reflected inside the Elasticsearch pod:

[root@localhost cluster]# kubectl exec -it elasticsearch-logging-v1-10dzt /bin/bash --namespace=kube-system
root@elasticsearch-logging-v1-10dzt:/# df -h
Filesystem                           Size  Used Avail Use% Mounted on
none                                  30G  2.0G   26G   7% /
tmpfs                                3.7G     0  3.7G   0% /dev
tmpfs                                3.7G     0  3.7G   0% /sys/fs/cgroup
/dev/xvdba                            51G   52M   48G   1% /data
/dev/mapper/vg--ephemeral-ephemeral   30G  2.0G   26G   7% /etc/hosts
tmpfs                                3.7G   12K  3.7G   1% /run/secrets/kubernetes.io/serviceaccount
shm                                   64M     0   64M   0% /dev/shm



On Friday, July 8, 2016 at 12:28:27 PM UTC+5:30, Vinoth Narasimhan wrote:
>
>
> Environment: Kubernetes in AWS
>
> Kubernetes Version : 1.2.4
>
> I am trying to add persistent volumes to the cluster monitoring tools 
> (InfluxDB and ELK).
>
> Before running kube-up, I created the AWS EBS volumes and edited 
> "cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml", 
> replacing emptyDir: {} with:
>
>     volumes:
>       - name: influxdb-persistent-storage
>         awsElasticBlockStore:
>            volumeID: vol-65a40dd0
>            fsType: ext4
>       - name: grafana-persistent-storage
>         awsElasticBlockStore:
>            volumeID: vol-81a70e34
>            fsType: ext4
>
> But after the cluster setup, when I check on the master node, 
> "/etc/kubernetes/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml" 
> still has the same emptyDir: {} volumes. My changes were not reflected in 
> the files.
>
> I checked on the elasticsearch minion node as well; the disk is not 
> attached to it either.
>
>
>
> docker ps | grep elasticsearch
>
> 017367caf54e  gcr.io/google_containers/elasticsearch:1.8           "/run.sh"   18 minutes ago  Up 18 minutes  k8s_elasticsearch-logging.4808e25e_elasticsearch-logging-v1-31n48_kube-system_020ce01a-44d6-11e6-bbd3-0a04f1b6d611_5d8d2069
> c14625ab4890  gcr.io/google_containers/elasticsearch:1.8           "/run.sh"   19 minutes ago  Up 19 minutes  k8s_elasticsearch-logging.4808e25e_elasticsearch-logging-v1-alk5r_kube-system_020cff5b-44d6-11e6-bbd3-0a04f1b6d611_970fb2a2
> 4cc350a9f05a  gcr.io/google_containers/fluentd-elasticsearch:1.15  "td-agent"  19 minutes ago  Up 19 minutes  k8s_fluentd-elasticsearch.3feec757_fluentd-elasticsearch-ip-172-20-0-88.us-west-2.compute.internal_kube-system_e313182cba6f0619720f91d8860ae1bb_6ca72fd9
> 6410196df066  gcr.io/google_containers/pause:2.0                   "/pause"    20 minutes ago  Up 20 minutes  k8s_POD.558027c8_elasticsearch-logging-v1-31n48_kube-system_020ce01a-44d6-11e6-bbd3-0a04f1b6d611_4d911697
> 7e5dd64962e3  gcr.io/google_containers/pause:2.0                   "/pause"    20 minutes ago  Up 20 minutes  k8s_POD.558027c8_elasticsearch-logging-v1-alk5r_kube-system_020cff5b-44d6-11e6-bbd3-0a04f1b6d611_d0c15525
> a0bf440f9580  gcr.io/google_containers/pause:2.0                   "/pause"    20 minutes ago  Up 20 minutes  k8s_POD.6059dfa2_fluentd-elasticsearch-ip-172-20-0-88.us-west-2.compute.internal_kube-system_e313182cba6f0619720f91d8860ae1bb_5e153aab
>
> root@ip-172-20-0-88:~# df -h
> Filesystem                           Size  Used Avail Use% Mounted on
> /dev/xvda1                            32G  2.3G   28G   8% /
> udev                                  10M     0   10M   0% /dev
> tmpfs                                1.5G  8.5M  1.5G   1% /run
> tmpfs                                3.7G  864K  3.7G   1% /dev/shm
> tmpfs                                5.0M     0  5.0M   0% /run/lock
> tmpfs                                3.7G     0  3.7G   0% /sys/fs/cgroup
> /dev/mapper/vg--ephemeral-ephemeral   30G  2.5G   26G   9% /mnt/ephemeral
> tmpfs                                3.7G   12K  3.7G   1% /mnt/ephemeral/kubernetes/kubelet/pods/020a34ec-44d6-11e6-bbd3-0a04f1b6d611/volumes/kubernetes.io~secret/default-token-nsxe7
> tmpfs                                3.7G   12K  3.7G   1% /mnt/ephemeral/kubernetes/kubelet/pods/020cff5b-44d6-11e6-bbd3-0a04f1b6d611/volumes/kubernetes.io~secret/default-token-nsxe7
> tmpfs                                3.7G   12K  3.7G   1% /mnt/ephemeral/kubernetes/kubelet/pods/020ce01a-44d6-11e6-bbd3-0a04f1b6d611/volumes/kubernetes.io~secret/default-token-nsxe7
> tmpfs                                3.7G   12K  3.7G   1% /mnt/ephemeral/kubernetes/kubelet/pods/020f4cee-44d6-11e6-bbd3-0a04f1b6d611/volumes/kubernetes.io~secret/default-token-nsxe7
> tmpfs                                3.7G   12K  3.7G   1% /mnt/ephemeral/kubernetes/kubelet/pods/020f2688-44d6-11e6-bbd3-0a04f1b6d611/volumes/kubernetes.io~secret/default-token-nsxe7
> tmpfs                                3.7G   12K  3.7G   1% /mnt/ephemeral/kubernetes/kubelet/pods/0215295c-44d6-11e6-bbd3-0a04f1b6d611/volumes/kubernetes.io~secret/default-token-nsxe7
>

-- 
You received this message because you are subscribed to the Google Groups 
"Containers at Google" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/google-containers.
For more options, visit https://groups.google.com/d/optout.
