----- Original Message -----
> From: "Yaniv Kaul" <yk...@redhat.com>
> To: "Simon Grinberg" <si...@redhat.com>
> Cc: engine-devel@ovirt.org, "Itamar Heim" <ih...@redhat.com>
> Sent: Thursday, November 15, 2012 10:42:19 AM
> Subject: Re: [Engine-devel] SPICE IP override
>
> On 11/15/2012 10:33 AM, Simon Grinberg wrote:
> >
> > ----- Original Message -----
> >> From: "Yaniv Kaul" <yk...@redhat.com>
> >> To: "Itamar Heim" <ih...@redhat.com>
> >> Cc: "Simon Grinberg" <si...@redhat.com>, engine-devel@ovirt.org
> >> Sent: Thursday, November 15, 2012 10:07:02 AM
> >> Subject: Re: [Engine-devel] SPICE IP override
> >>
> >> On 11/15/2012 09:35 AM, Itamar Heim wrote:
> >>> On 11/15/2012 09:06 AM, Yaniv Kaul wrote:
> >>>> ----- Original Message -----
> >>>>> On 11/15/2012 08:33 AM, Yaniv Kaul wrote:
> >>>>>> On 11/15/2012 06:10 AM, Itamar Heim wrote:
> >>>>>>> On 11/11/2012 11:45 AM, Yaniv Kaul wrote:
> >>>>>>>> On 11/07/2012 10:52 AM, Simon Grinberg wrote:
> >>>>>>>>> ----- Original Message -----
> >>>>>>>>>> From: "Michal Skrivanek" <michal.skriva...@redhat.com>
> >>>>>>>>>> To: engine-devel@ovirt.org
> >>>>>>>>>> Sent: Tuesday, November 6, 2012 10:39:58 PM
> >>>>>>>>>> Subject: [Engine-devel] SPICE IP override
> >>>>>>>>>>
> >>>>>>>>>> Hi all,
> >>>>>>>>>> On behalf of Tomas - please check out the proposal for enhancing
> >>>>>>>>>> our SPICE integration to allow returning a custom IP/FQDN instead
> >>>>>>>>>> of the host IP address:
> >>>>>>>>>> http://wiki.ovirt.org/wiki/Features/Display_Address_Override
> >>>>>>>>>> All comments are welcome...
> >>>>>>>>> My 2 cents,
> >>>>>>>>>
> >>>>>>>>> This works under the assumption that all the users are either
> >>>>>>>>> outside of the organization or inside.
> >>>>>>>>> But think of some of the following scenarios, based on a
> >>>>>>>>> topology where users in the main office are inside the corporate
> >>>>>>>>> network, while users in remote offices / on the WAN are on a
> >>>>>>>>> detached network on the other side of the NAT / public firewall:
> >>>>>>>>>
> >>>>>>>>> With the current 'per host override' proposal:
> >>>>>>>>> 1. An admin from the main office won't be able to access the VM
> >>>>>>>>> console.
> >>>>>>>>> 2. No mixed environments, meaning that you have to have
> >>>>>>>>> designated clusters for remote-office users vs. main-office
> >>>>>>>>> users - otherwise connectivity to the console is determined by
> >>>>>>>>> the scheduler's decision, or may break on live migration.
> >>>>>>>>> 3. Based on #2, if I'm a user travelling between offices I'll
> >>>>>>>>> have to ask the admin to turn off my VM and move it to an
> >>>>>>>>> internal cluster before I can reconnect.
> >>>>>>>>>
> >>>>>>>>> My suggestion is to convert this to an 'alternative' IP/FQDN,
> >>>>>>>>> sending the spice client both the internal FQDN/IP and the
> >>>>>>>>> alternative. The spice client should detect which of the two is
> >>>>>>>>> available and auto-connect.
> >>>>>>>>>
> >>>>>>>>> This requires an enhancement of the spice client, but it still
> >>>>>>>>> solves all the issues raised above (actually it solves about 90%
> >>>>>>>>> of the use cases I've heard about in the past).
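[Editor's note: the dual-address auto-detection suggested above could look roughly like this on the client side - a minimal sketch assuming plain TCP reachability probing; the addresses, port, and function name are illustrative, not part of any actual SPICE client API.]

```python
import socket

def pick_reachable(addresses, port, timeout=2.0):
    """Try each candidate display address in order and return the first
    one that accepts a TCP connection on the given port, or None."""
    for host in addresses:
        try:
            # A successful connect means this address is usable from here.
            with socket.create_connection((host, port), timeout=timeout):
                return host
        except OSError:
            continue  # unreachable from this network; try the next one
    return None

# Listing the internal address first lets in-office clients prefer the
# direct route, while outside clients fall through to the override:
# chosen = pick_reachable(["10.0.0.5", "vdi.example.com"], 5900)
```

A real client would probably probe both candidates in parallel, and would have to repeat the probe when the connection moves - which runs into the same migration gap discussed further down the thread.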
> >>>>>>>>>
> >>>>>>>>> Another alternative is for the engine to 'guess' or 'elect'
> >>>>>>>>> which to use, alternative or main, based on the IP of the
> >>>>>>>>> client - meaning the admin provides the client ranges for
> >>>>>>>>> serving the internal host address vs. the alternative - but
> >>>>>>>>> this is more complicated compared to the previous suggestion.
> >>>>>>>>>
> >>>>>>>>> Thoughts?
> >>>>>>>> Let's not re-invent the wheel. This problem has been pondered
> >>>>>>>> before and solved [1], for all scenarios:
> >>>>>>>> internal clients connecting to internal resources;
> >>>>>>>> internal clients connecting to external resources, without the
> >>>>>>>> need for any intermediate assistance;
> >>>>>>>> external clients connecting to internal resources, with the need
> >>>>>>>> for intermediate assistance;
> >>>>>>>> VPN clients connecting to internal resources, with or without an
> >>>>>>>> internal IP.
> >>>>>>>>
> >>>>>>>> Any other solution you'll try to come up with will bring you
> >>>>>>>> back to this standard, well-known (along with its faults) method.
> >>>>>>>>
> >>>>>>>> The browser client will use PAC to determine how to connect to
> >>>>>>>> the hosts and will deliver this to the client. It's also a good
> >>>>>>>> path towards real proxy support for Spice.
> >>>>>>>> (Regardless, we still need to deal with the Spice protocol's
> >>>>>>>> migration command, of course.)
> >>>>>>>>
> >>>>>>>> [1] http://en.wikipedia.org/wiki/Proxy_auto-config
> >>>>>>> so instead of a spice proxy fqdn field, we should just allow the
> >>>>>>> user to specify a PAC file which resides under something like
> >>>>>>> /etc/ovirt/engine/pac...?
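[Editor's note: for reference, the mechanism behind [1] is a small piece of Javascript that a PAC-capable client evaluates for every connection. A minimal PAC file for the topology described above might look like this - the network range and proxy name are made-up examples, not anything oVirt ships; `FindProxyForURL()`, `isInNet()` and `myIpAddress()` are the standard functions the PAC environment provides.]

```javascript
// Illustrative PAC file (all names and ranges are hypothetical).
function FindProxyForURL(url, host) {
    // Clients inside the corporate 10.0.0.0/8 network can reach the
    // hypervisors directly.
    if (isInNet(myIpAddress(), "10.0.0.0", "255.0.0.0"))
        return "DIRECT";
    // Everyone else is sent through an externally reachable proxy.
    return "PROXY spice-proxy.example.com:3128";
}
```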
> >>>>>> I would actually encourage the customers to use their own
> >>>>>> corporate PAC and add the information to it.
> >>>>> so you are suggesting that there is no need at all to deal with
> >>>>> proxy definition/configuration at the ovirt engine/user portal
> >>>>> level?
> >>>> I expect the admin/user portal to send the result of the PAC
> >>>> processing to the Spice client.
> >>>> I don't think the Spice client should execute the PAC (it's
> >>>> Javascript...).
> > And live migration?
>
> Read my email: "And of course, Spice protocol changes"
>
> > I don't completely understand how you can avoid executing the PAC
> > file if the destination host is provided by Qemu
> > (client_migrate_info) - unless I'm confusing it with something else
> > and it is the web client that delivers this info on migration.
>
> I'm not against executing the PAC. It just requires a Javascript
> engine, which is a bit of an overkill for the Spice client to start
> working with, no?
> I'm aware there is a critical gap with the Spice protocol, but all I'm
> saying is that any other idea you'll come up with to get the topology
> right is going to be a rewrite of the PAC idea. You will need to
> define the topology, and you'll need to look up your current location
> against it. This is what PAC does.
>
> A Spice proxy would probably be able to solve the Spice protocol
> issue, as we expect the proxy to handle the host hand-over when
> migration happens, I reckon.
>
> > P.S.,
> > If it is Qemu, then I don't see the current feature page accounting
> > for that - i.e., the hosts should also be informed of this override
> > IP.
>
> Why? A host is rarely aware it is behind NAT. If it's because of the
> protocol issue, the protocol has to be changed.
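[Editor's note: having the portal send the client the *result* of the PAC processing, as suggested above, amounts to an engine-side lookup of the client's location against a topology table. A rough sketch of what that decision could look like - the networks, names and signature are hypothetical, not actual engine code:]

```python
import ipaddress

# Hypothetical admin-configured table: client networks considered
# "inside"; any other client is assumed to sit behind the NAT/firewall.
INTERNAL_NETS = [ipaddress.ip_network("10.0.0.0/8"),
                 ipaddress.ip_network("192.168.0.0/16")]

def display_address(client_ip, host_ip, override_ip):
    """Pick which display address to hand to the SPICE client:
    the real host IP for internal clients, the override otherwise."""
    client = ipaddress.ip_address(client_ip)
    if any(client in net for net in INTERNAL_NETS):
        return host_ip
    return override_ip
```

This mirrors Simon's earlier "engine elects based on client IP ranges" alternative, and shares the weakness both sides point out: the choice is made once at connect time, so live migration still hands out whatever address the destination reports.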
I'm referring to the current feature page as is (taking out the rest of
the discussion): it lacks a solution for migration:
1. You have hosts A and B, both with an overridden IP set.
2. On initial connect, the browser provides the connection details using
host A's 'override IP' settings.
3. The VM migrates from A to B.
4. Now it's Qemu providing the destination connection details - it will
provide the internal IP of host B.
Connection breaks!!!

Again, unless I'm missing something and the live migration destination
is provided by the web client/engine to the spice client (somehow).

> Y.
>
> >>> ok, so no engine, but just client side support for PAC?
> >> Exactly.
> >> And of course, Spice protocol changes, without which all this effort
> >> is nice, but incomplete.
> >> Y.
> >>
> >> _______________________________________________
> >> Engine-devel mailing list
> >> Engine-devel@ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/engine-devel