Directions:
nova -> switch port, switch port -> glance, glance -> switch port (on its way to swift). I am not counting the traffic from the switch to swift, since I assume swift sits outside the installation.

Glance-api receives and sends the same amount of traffic. It sounds like a minor issue until you start counting the CPU IRQ time of the network card (doubled compared to a single direction of traffic).

Glance on the compute node will consume less CPU (thanks to the high-performance loopback interface).
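
A rough back-of-the-envelope for a 100 GB snapshot, just to make the accounting explicit (assuming compute, glance-api and the swift proxy hang off the same switch, and counting only internal ports):

  compute port -> switch               100 GB
  switch -> glance-api port            100 GB
  glance-api port -> switch (to swift) 100 GB
  -------------------------------------------
  total on internal switch ports       300 GB

With glance-api running on the compute node, the compute -> glance hop stays on loopback, so only ~100 GB leaves the compute node's port on its way to swift, which is where the 3x reduction below comes from.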

On 01/21/2015 07:20 PM, Michael Dorman wrote:
This is great info, George.

Could you explain the 3x snapshot transport under the traditional Glance
setup, please?

I understand that you have compute -> glance, and glance -> swift.  But
what’s the third transfer?

Thanks!
Mike

On 1/21/15, 10:36 AM, "George Shuklin" <george.shuk...@gmail.com> wrote:

Ok, news so far:

It works like magic. Nova has the option:
[glance]
host=127.0.0.1

And I do not need to cheat with endpoint resolution (my initial plan was
to resolve the glance endpoint to 127.0.0.1 with /etc/hosts magic). The normal
glance-api replies to external clients' requests
(image-create/download/list/etc.), and the local glance-apis (one per compute)
are used to connect to swift.

Glance registry works in normal mode (running only on the 'official' API servers).
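
For reference, roughly what the per-compute configuration looks like (option names are Juno-era and may differ between releases; newer releases move the store options into a [glance_store] section; the registry host, keystone URL and swift credentials below are placeholders):

# nova.conf on each compute node
[glance]
host = 127.0.0.1
port = 9292

# glance-api.conf on each compute node: no image cache, swift backend,
# pointing at the central glance-registry
[DEFAULT]
registry_host = controller.example.com
registry_port = 9191
default_store = swift
swift_store_auth_address = http://keystone.example.com:5000/v2.0/
swift_store_user = service:glance
swift_store_key = GLANCE_SWIFT_PASSWORD
swift_store_create_container_on_put = True

[paste_deploy]
flavor = keystone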

I don't see any reason why we should centralize all traffic to swift
through special dedicated servers, investing in fast CPUs and 10G links.

With this solution the glance-api CPU load is distributed evenly across all
compute nodes, and overall snapshot traffic (on the switch ports) is cut by a
factor of three!

Why didn't I think of this earlier?

On 01/16/2015 12:20 AM, George Shuklin wrote:
Hello everyone.

One more thing in the light of a small OpenStack setup.

I really dislike the triple network load caused by the current glance
snapshot operations. When a compute node makes a snapshot, it works with the
files locally, then it sends them to glance-api, and (if glance-api is
backed by swift) glance sends them on to swift. Basically, for each
100 GB disk there are 300 GB of network operations. It is especially
painful for glance-api, which needs more CPU and network
bandwidth than we want to spend on it.
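
For example, the flow for a single snapshot looks roughly like this (the snapshot name is made up):

  nova image-create <instance> my-snapshot
  # 1. the compute node writes the snapshot to its local disk,
  # 2. then uploads it to glance-api over the network,
  # 3. and glance-api streams the same bytes on to swift.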

So, the idea: put a glance-api on each compute node, without a cache.

To point each compute node at the proper glance, the endpoint uses an FQDN,
and on each compute node that FQDN resolves to localhost (where the local
glance-api lives). Plus a normal glance-api on the API/controller node to
serve dashboard/API clients.
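
A minimal sketch of that trick, assuming the image endpoint in keystone is registered as http://glance.internal.example.com:9292 (the hostname and IP below are made up):

  # /etc/hosts on each compute node
  127.0.0.1      glance.internal.example.com

  # /etc/hosts (or DNS) everywhere else
  192.0.2.10     glance.internal.example.com    # the 'official' glance-api on the controller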

I haven't tested it yet.

Any ideas about possible problems/bottlenecks? And how many
glance-registry instances do I need for this?

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators