I mean, the LB itself requires very little CPU, but certainly hardware
can make a difference...
On Fri, Sep 22, 2017 at 12:33 AM, Vinoth Narasimhan wrote:
Thanks, Tim.
Is my assumption right?
Throughput depends on CPU threads and socket architecture.
On Friday, September 22, 2017 at 12:39:41 PM UTC+5:30, Tim Hockin wrote:
Thanks for following up!
On Thu, Sep 21, 2017 at 1:33 AM, Vinoth Narasimhan wrote:
Finally, the issue was with the hardware spec. The previous k8s test I did
was on a 3-node cluster where each node had 1 CPU and 4 GiB RAM.
Today I matched the spec of the native Tomcat test machine with the K8s nodes.
I created a new 3-node K8s cluster on version 1.6.9 with node
configuration
Do you have more details on the resource you were running out of in the
Tomcat containers? Were you CPU bound? Perhaps K8s is limiting the number
of CPUs available in a different fashion than Docker was.
Could you include the Tomcat iostat, vmstat, etc. output?
-EJ
On Tue, Sep 19, 2017
Sorry, not sure I parsed your reply.
If you test Docker with client and server on the same node, you need
to test Kubernetes the same way.
You can test your client against the pod's IP directly (should be the same as
Docker perf) and then test kube services.
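One way to make that pod-IP-vs-Service comparison from the same client node is a tiny stdlib-only probe. This is just a sketch; the addresses in the usage comment are hypothetical placeholders, not values from the thread:

```python
# Minimal sequential-throughput probe (Python stdlib only), useful for
# comparing a pod IP against a Service VIP from the same client node.
import http.client
import time

def measure_rps(host, port, path="/", seconds=2.0):
    """Issue sequential keep-alive GETs and return achieved requests/sec."""
    conn = http.client.HTTPConnection(host, port, timeout=5)
    count = 0
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        conn.request("GET", path)
        conn.getresponse().read()  # drain so the connection can be reused
        count += 1
    conn.close()
    return count / seconds

# Example (hypothetical addresses):
#   pod_rps = measure_rps("10.8.0.15", 8080)   # pod IP directly
#   svc_rps = measure_rps("10.11.240.7", 80)   # Service ClusterIP
```

If the pod-IP number matches the Docker number but the Service number is much lower, the gap is in the service path rather than in the container runtime.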
On Tue, Sep 19, 2017 at 10:16 PM, Vinoth
Tim, do you mean the backend for the k8s node is the same as the backend for the
native Tomcat test as well as the Docker test?
The k8s node backend is different from the Tomcat and Docker tests.
The Tomcat/Docker tests were done on a GCP machine with an Ubuntu flavour,
an 8-CPU/30 GiB machine.
The k8s test was done on 3
Thanks for the response, Tim.
I don't see any failures in the ab results; all the requests were successful in
all the tests.
The following are the results that I attached in the GitHub issue:
https://github.com/kubernetes/kubernetes/issues/52652
k8s_service.txt
I am not limiting resources for the Tomcat test on k8s.
On Tuesday, September 19, 2017 at 9:25:50 PM UTC+5:30, Warren Strange wrote:
NodePort vs VIP should make no difference - they traverse the same paths.
This is a much steeper difference than what I measured, and more than I
would expect.
Is this 8k new connections per second? Could you be exhausting
conntrack records and getting some failures? It would be interesting
to
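To put rough numbers on the conntrack question: the 8k/s rate comes from the thread, but the timeout and table size below are assumed illustrative defaults; the real values live in `/proc/sys/net/netfilter/nf_conntrack_tcp_timeout_time_wait` and `nf_conntrack_max` on the node:

```python
# Back-of-the-envelope conntrack pressure estimate. The 8k/s figure is
# from the thread; the timeout and table size are assumed defaults -
# check the actual sysctls on the node before trusting the conclusion.
new_conns_per_sec = 8_000      # new (non-keep-alive) connections per second
time_wait_secs = 120           # assumed TIME_WAIT conntrack timeout
conntrack_max = 262_144        # assumed table size; varies by kernel/distro

# Each short-lived connection occupies a conntrack entry for roughly the
# TIME_WAIT timeout, so the steady-state entry count is rate * lifetime.
steady_state_entries = new_conns_per_sec * time_wait_secs
print(steady_state_entries)                    # 960000
print(steady_state_entries > conntrack_max)    # True -> table would overflow
```

Under these assumptions the table would overflow well before steady state, which is exactly the kind of silent drop worth ruling out.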
Debugging performance issues on Docker/Kube can be interesting.
You could try exposing the service through a NodePort, and run your
benchmark directly against the node IP. That would at least tell you if the
GKE LB is a factor or not.
Also - are your pods possibly CPU or memory limited
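For context on that question: CPU/memory limits are declared per container in the pod spec. A minimal sketch, with hypothetical names and values (not taken from the thread):

```yaml
# Illustrative only - the container name and values are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: tomcat-bench
spec:
  containers:
  - name: tomcat
    image: tomcat:8
    resources:
      requests:
        cpu: "2"
        memory: 2Gi
      limits:
        cpu: "4"        # hard cap; CPU throttling kicks in at this limit
        memory: 4Gi
```

With no `limits` block at all, the container can burst to whatever the node has, which is what "not limiting resources" earlier in the thread implies.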