Hello people,
I like the idea of keeping backward compatibility by supporting
username/password. I also really dislike making one more config option
(the domain for temp users) mandatory, so supporting the old behaviour
here also simplifies deployment, which is good especially for new
users.

Thanks,
Dmitry

2014-08-15 18:04 GMT+04:00 mike mccune <mimcc...@redhat.com>:
> thanks for the thoughts Trevor,
>
> On 08/15/2014 09:32 AM, Trevor McKay wrote:
>> I think backward compatibility is a good idea. We can make the
>> user/pass inputs for data objects optional (they are required
>> currently), maybe even gray them out in the UI with a checkbox to
>> turn them on, or something like that.
>
> This is similar to what I was thinking. We would allow the username
> and password inputs to accept a blank input.
>
> I also like the idea of giving some sort of visual reference, like
> graying out the fields.
>
>> Sahara can detect whether or not the proxy domain is there, and
>> whether or not it can be created. If Sahara ends up in a situation
>> where it thinks user/pass are required, but the data objects don't
>> have them, we can return a meaningful error.
>
> It sounds like we are going to avoid having Sahara attempt to create
> a domain. It will be the duty of a stack administrator to create the
> domain and give its name in the sahara.conf file.
>
> Agreed about meaningful errors.
>
>> The job manager can key off of the values supplied for the data
>> source objects (no user/pass? must be proxy) and/or cluster configs
>> (for instance, a new cluster config could be added -- if it's absent
>> we assume "old" cluster and therefore the old hadoop swift plugin).
>> Workflow can be generated accordingly.
>
> This sounds good. If there is some way to determine the version of
> the hadoop-swiftfs on the cluster, that would be ideal.
>
>> The hadoop swift plugin can look at the config values provided, as
>> you noted yesterday, and get auth tokens in either manner.
>
> exactly.
>
> mike
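P.S. To make the fallback we are discussing concrete, here is a rough
Python sketch of the dispatch the job manager could do: credentials
present means the old direct-auth path, credentials absent means the
proxy path, and neither available means a meaningful error. This is
only an illustration, not actual Sahara code -- the helper function,
the trust/domain config keys, and the domain value are placeholders I
made up, though the key names follow the usual
fs.swift.service.sahara.* pattern.

def create_proxy_user_and_trust(domain):
    """Placeholder for the keystone calls that would create a
    temporary user in the proxy domain and delegate a trust to it."""
    return "proxy-user-123", "trust-abc"


def choose_auth_configs(data_source, proxy_domain):
    """Pick the swift auth configs to inject into the workflow,
    based on what the data source actually carries."""
    creds = data_source.get("credentials", {})
    user = creds.get("user")
    password = creds.get("password")

    if user and password:
        # Old behaviour: pass the stored credentials straight through,
        # so existing clusters with the old hadoop-swiftfs keep working.
        return {
            "fs.swift.service.sahara.username": user,
            "fs.swift.service.sahara.password": password,
        }

    if not proxy_domain:
        # No credentials and no proxy domain configured: neither auth
        # path can work, so fail with a meaningful error.
        raise ValueError("data source has no user/pass and no proxy "
                         "domain is set in sahara.conf")

    # New behaviour: hand the plugin a trust-scoped proxy user instead.
    proxy_user, trust_id = create_proxy_user_and_trust(proxy_domain)
    return {
        "fs.swift.service.sahara.username": proxy_user,
        "fs.swift.service.sahara.trust.id": trust_id,
        "fs.swift.service.sahara.domain.name": proxy_domain,
    }


old_style = {"credentials": {"user": "alice", "password": "s3cret"}}
new_style = {"credentials": {}}
print(choose_auth_configs(old_style, "sahara_proxy"))
print(choose_auth_configs(new_style, "sahara_proxy"))

The nice property of keying off the data source itself is that nothing
new becomes mandatory: deployments that never configure a proxy domain
simply keep hitting the first branch, exactly as today.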