I have now pushed this fix to the master branch and merged it to the
master-deployment-policy-fix-merge branch. A new tag has been created to
track the Kubernetes fix: 4.1.0-beta-kubernetes-v4
Thanks
On Fri, Feb 27, 2015 at 7:34 AM, Imesh Gunaratne wrote:
I found the cause of the problem and fixed it!
- Host ports must be unique on a host, which means the same host port
cannot be defined in multiple pods scheduled to the same host. This was the
cause of the above issue.
- Host ports are not mandatory for exposing pod ports to the external
network; rather, services should be used for this purpose.
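The host-port constraint can be illustrated with a small sketch (not Stratos or real Kubernetes API code; the pod shapes and port numbers below are simplified and hypothetical):

```python
# Illustrative sketch of the host-port constraint: a hostPort claims a
# real port on the node, so two pods on one host cannot share the same
# hostPort value. Pod shapes here are hypothetical simplifications.
from collections import Counter

def host_port_conflicts(pods_on_host):
    """Return the set of host ports claimed by more than one pod on the
    same host -- a layout Kubernetes refuses to schedule."""
    counts = Counter(
        port["hostPort"]
        for pod in pods_on_host
        for container in pod["containers"]
        for port in container["ports"]
        if "hostPort" in port
    )
    return {p for p, n in counts.items() if n > 1}

# Two pods both demanding hostPort 8080 conflict on one host:
conflicting = [
    {"containers": [{"ports": [{"containerPort": 8080, "hostPort": 8080}]}]},
    {"containers": [{"ports": [{"containerPort": 8080, "hostPort": 8080}]}]},
]
# Dropping hostPort (and exposing the pods through a service instead)
# removes the constraint entirely:
service_based = [
    {"containers": [{"ports": [{"containerPort": 8080}]}]},
    {"containers": [{"ports": [{"containerPort": 8080}]}]},
]
```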
It looks like this is the same issue which we discussed in "[Discuss]
Kubernetes constraint violation for host port":
https://github.com/GoogleCloudPlatform/kubernetes/issues/1751
On Thu, Feb 26, 2015 at 4:30 PM, Imesh Gunaratne wrote:
Hi Devs,
I'm seeing a problem here: the second pod does not get a host IP allocated.
I'm not sure whether it was caused by this modification; I'm investigating
it now.
Thanks
On Sun, Feb 22, 2015 at 2:00 PM, Imesh Gunaratne wrote:
Hi Devs,
I have now completed this modification and pushed it to the master branch. I
verified it with the single-cartridge and tomcat sample applications.
Now we only create Kubernetes Services and Pods. For each port mapping a
Kubernetes service will be created, and for each member a pod.
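The mapping described above can be sketched roughly as follows (field names and id formats are hypothetical illustrations, not the actual Stratos data model or Kubernetes object schema):

```python
# Sketch of the translation described above: one Kubernetes service per
# port mapping of a cluster, one pod per member. All field names and id
# formats are hypothetical.
def build_kubernetes_objects(cluster_id, port_mappings, member_ids):
    services = [
        {"kind": "Service",
         "id": "%s-svc-%d" % (cluster_id, pm["proxyPort"]),
         "port": pm["proxyPort"],               # port the service listens on
         "containerPort": pm["containerPort"],  # port forwarded to the pods
         "selector": {"clusterId": cluster_id}}
        for pm in port_mappings
    ]
    pods = [
        {"kind": "Pod",
         "id": member_id,
         "labels": {"clusterId": cluster_id}}   # matched by the services
        for member_id in member_ids
    ]
    return services, pods

services, pods = build_kubernetes_objects(
    "tomcat-cluster",
    [{"proxyPort": 80, "containerPort": 8080}],
    ["member-1", "member-2"],
)
```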
Thanks for the feedback, Lakmal. I will make this change.
Thanks
On Sun, Feb 22, 2015 at 10:15 AM, Lakmal Warusawithana
wrote:
Yes, I think we should create pods directly until we have callback methods
to get pod information from Kubernetes.
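The identity problem motivating this can be shown with a toy model (hypothetical pod ids; not Kubernetes client code): when a replication controller replaces a failed pod, the replacement gets a new pod id, so any member-to-pod mapping held outside Kubernetes goes stale.

```python
# Toy model of the pod-identity problem: the replication controller
# deletes a dead pod and creates a replacement with a NEW pod id;
# nothing updates references held outside Kubernetes, such as a
# member -> pod mapping. Ids here are hypothetical.
def rc_replace_failed_pod(live_pods, failed_pod_id, replacement_id):
    """Simulate the RC removing a dead pod and creating a new one."""
    live_pods.discard(failed_pod_id)
    live_pods.add(replacement_id)
    return replacement_id

member_to_pod = {"member-1": "pod-1"}   # mapping held by the orchestrator
live_pods = {"pod-1"}

rc_replace_failed_pod(live_pods, "pod-1", "pod-2")
mapping_is_stale = member_to_pod["member-1"] not in live_pods
```

A directly created pod keeps its id for its whole lifetime, so the mapping stays valid until the pod is explicitly terminated.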
On Sun, Feb 22, 2015 at 10:10 AM, Imesh Gunaratne wrote:
Hi Devs,
Currently we create a replication controller for each member in Kubernetes.
As a result, if the pod stops responding, the replication controller will
remove the existing pod and create a new one. The new pod will then get a
new pod id.
Once this happens, Stratos will not be able to manage th