On Tue, Apr 19, 2016 at 09:57:56AM -0500, Dean Troyer wrote:
> On Tue, Apr 19, 2016 at 9:06 AM, Adam Young <ayo...@redhat.com> wrote:
> > I wonder how much of that is Token caching. In a typical CLI use
> > pattern, a new token is created each time a client is called, with
> > no passing of a token between services. Using a session can greatly
> > decrease the number of round trips to Keystone.
> Not as much as you think (or hope?). Persistent token caching to disk
> will help some, at other expenses though. Using --timing on OSC will
> show how much time the Identity auth round trip cost.
>
> I don't have current numbers, the last time I instrumented OSC there
> were significant load times for some modules, so we went a good
> distance to lazy-load as much as possible.
>
> What Dan sees WRT a persistent client process, though, is a
> combination of those two things: saving the Python loading and the
> Keystone round trips.
The 1.5 sec overhead I eliminated doesn't actually have anything to do
with network round trips at all. Even if you turn off all network
services and just run 'openstack <somecommand>' and let it fail due to
the inability to connect, it'll still have that 1.5 sec overhead. It is
all related to Python runtime loading and the work done during module
importing.

e.g. run 'unstack.sh' and then compare the main openstack client:
$ time /usr/bin/openstack server list
Discovering versions from the identity service failed when creating
the password plugin. Attempting to determine version from URL.
Unable to establish connection to http://192.168.122.156:5000/v2.0/tokens
real 0m1.555s
user 0m1.407s
sys 0m0.147s
Against my client-as-a-service version:
$ time $HOME/bin/openstack server list
[Errno 111] Connection refused
real 0m0.045s
user 0m0.029s
sys 0m0.016s
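
A crude way to see how much of that is raw module import cost (not a
proper profile, but illustrative) is to time just importing the client's
entry point module with no command run at all - openstackclient.shell
is where the 'openstack' console script's main() lives:

$ time python -c 'import openstackclient.shell'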
I'm sure there is scope for also optimizing network traffic / round
trips, but I didn't investigate that at all.
> I have (had!) a version of DevStack that put OSC into a subprocess and
> called it via pipes to do essentially what Dan suggests. It saves some
> time, at the expense of complexity that may or may not be worth the
> effort.
devstack doesn't really need any significant changes beyond making sure
$PATH points to the replacement client programs and that the server is
running - the latter could be automated as a launch-on-demand thing,
which would limit devstack changes.

It actually doesn't technically need any devstack change at all - these
replacement clients could simply live in some 3rd party git repo,
letting developers who want the speed benefit put them in their $PATH
before running devstack.
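
As an illustration, such a replacement 'openstack' could be little more
than a stub that forwards argv to a persistent service over a Unix
socket, launching the service on demand. A minimal Python 3 sketch - the
socket path, the 'osc_service' module name and the line-of-JSON wire
format are all placeholders I've picked for the example, not necessarily
how my actual replacement clients work:

#!/usr/bin/env python
# Illustrative thin replacement for the "openstack" command: forward
# argv to a persistent service over a Unix socket, starting the service
# on demand, then replay its stdout/stderr and exit status.
import json
import os
import socket
import subprocess
import sys
import time

SOCK_PATH = os.path.expanduser("~/.cache/osc-service.sock")

def connect(timeout=5.0):
    started = False
    deadline = time.time() + timeout
    while True:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(SOCK_PATH)
            return s
        except OSError:
            s.close()
            if not started:
                # Launch-on-demand: spawn the (hypothetical) service module
                subprocess.Popen([sys.executable, "-m", "osc_service"])
                started = True
            if time.time() > deadline:
                raise
            time.sleep(0.1)

def main():
    with connect() as s, s.makefile("rw") as f:
        f.write(json.dumps(sys.argv[1:]) + "\n")   # one JSON request per line
        f.flush()
        reply = json.loads(f.readline())
    sys.stdout.write(reply["stdout"])
    sys.stderr.write(reply["stderr"])
    return reply["status"]

if __name__ == "__main__":
    sys.exit(main())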
> One thing missing is any sort of transactional control in the I/O with
> the subprocess, ie, an EOT marker. I planned to add a -0 option (think
> xargs) to handle that but it's still down a few slots on my priority
> list. Error handling is another problem, and at this point (for
> DevStack purposes anyway) I stopped the investigation, concluding that
> reliability trumped a few seconds saved here.
For I/O I simply replaced stdout + stderr with a new StringIO handle to
capture the data when running each command, and for error handling I
ensured the exit status was fed back and stderr likewise printed.
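
In rough terms the service side looks something like the sketch below,
pairing with the stub client above. Again the socket path, the module
name and the one-JSON-message-per-line framing are illustrative choices
only, not a claim about the exact code:

# osc_service.py: illustrative persistent "openstack" service. Pay the
# Python import cost once at startup, then run each command received
# over a Unix socket, capturing stdout/stderr and sending the exit
# status back to the stub client.
import io
import json
import os
import socket
import sys

from openstackclient import shell   # the expensive imports happen once, here

SOCK_PATH = os.path.expanduser("~/.cache/osc-service.sock")

def run_command(argv):
    out, err = io.StringIO(), io.StringIO()
    saved = sys.stdout, sys.stderr
    sys.stdout, sys.stderr = out, err       # capture the command's output
    try:
        # shell.main() runs a single command; exact entry point details
        # may vary between openstackclient releases
        status = shell.main(argv) or 0
    except SystemExit as e:                 # some error paths exit directly
        status = e.code or 0
    finally:
        sys.stdout, sys.stderr = saved
    return status, out.getvalue(), err.getvalue()

def serve():
    os.makedirs(os.path.dirname(SOCK_PATH), exist_ok=True)
    if os.path.exists(SOCK_PATH):
        os.unlink(SOCK_PATH)
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(SOCK_PATH)
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        with conn, conn.makefile("rw") as f:
            argv = json.loads(f.readline())
            status, out_text, err_text = run_command(argv)
            f.write(json.dumps({"status": status, "stdout": out_text,
                                "stderr": err_text}) + "\n")
            f.flush()

if __name__ == "__main__":
    serve()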
It is more than just a few seconds saved - almost 4 minutes, or nearly
20% of the entire time it takes to run stack.sh on my machine.
> Ultimately, this is one of the two giant nails in the coffin of
> continuing to pursue CLIs in Python. The other is co-installability.
> (See that current thread on the ML for pain points). Both are easily
> solved with native-code-generating languages. Go and Rust are at the
> top of my personal list here...