Re: [openstack-dev] [horizon] Augmenting openstack_dashboard settings and possible horizon bug

2014-01-15 Thread Radomir Dopieralski
On 15/01/14 15:30, Timur Sufiev wrote:
 Recently I've decided to fix the situation with Murano's dashboard and move
 all Murano-specific Django settings into a separate file (previously
 they were appended to
 /usr/share/openstack-dashboard/openstack_dashboard/settings.py). But, as
 far as I knew, /etc/openstack_dashboard/local_settings.py is meant for
 customization by admins and is also distro-specific -- so I couldn't use
 it for Murano's dashboard customization.

[snip]

 2. What is the sensible approach for customizing settings for a
 Horizon dashboard in that case?

We recently added a way for dashboards to have (some of) their
configuration provided in separate files; maybe that would be
helpful for Murano?

The patch is https://review.openstack.org/#/c/56367/

We can add more settings that can be changed; we just have to know what
is needed.
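
For illustration, here is a minimal sketch of what such a file could look
like. The file name, the dashboard slug and the application module are
hypothetical, and the exact set of supported keys depends on the Horizon
version, so treat this only as an outline of the mechanism:

    # openstack_dashboard/local/enabled/_50_murano.py  (hypothetical name)

    # The slug of the dashboard to be added to HORIZON_CONFIG.
    DASHBOARD = 'murano'
    # Set to True to leave this dashboard out of the settings entirely.
    DISABLED = False
    # Django applications to be added to INSTALLED_APPS.
    ADD_INSTALLED_APPS = ['muranodashboard']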

-- 
Radomir Dopieralski


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Javascript checkstyle improvement

2014-01-03 Thread Radomir Dopieralski
On 03/01/14 08:17, Matthias Runge wrote:
 On 12/27/2013 03:52 PM, Maxime Vidori wrote:

 I am sending this mail to talk about JavaScript coding style improvement.
 Just as Python has pep8, it could be interesting to have some rules for
 JavaScript too. JSHint provides some rules to perform this and I think
 it could be a great idea to discuss which rules could be integrated
 into Horizon.

[snip]

 We're bundling foreign code (which is bad in general); now we need to
 change/format that code too, to match our style conventions? That would
 even generate a fork, like here[1], where the changes were just cosmetic.

[snip]

This is actually not a problem at all, because of the way jshint works now:
we have to explicitly list the files to be checked against those
rules. That means that we can check only our own code, and not the
included libraries. Of course, the reviewers have to pay attention to
make sure that all the non-library code is added to the list, but we
don't really get new files that often.
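
The idea is easiest to see in a small sketch. This is not the actual
Horizon tooling -- the file names are made up and it assumes a jshint
executable is available on the PATH -- it only illustrates the whitelist
approach:

    import subprocess
    import sys

    # Only our own JavaScript files go on this list; bundled third-party
    # libraries are simply never added, so they are never checked.
    OUR_JS_FILES = [
        "horizon/static/horizon/js/horizon.js",         # hypothetical paths
        "horizon/static/horizon/js/horizon.tables.js",
    ]

    # Run jshint only against the whitelisted files and propagate its
    # exit code, so the test run fails when our own code has problems.
    sys.exit(subprocess.call(["jshint"] + OUR_JS_FILES))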

-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] import only module message and #noqa

2014-01-03 Thread Radomir Dopieralski
On 03/01/14 16:18, Russell Bryant wrote:
 On 01/03/2014 10:10 AM, Radomir Dopieralski wrote:
 I think that we can actually do a little bit better and remove many of
 the #noqa tags without forfeiting automatic checking. I submitted a
 patch: https://review.openstack.org/#/c/64832/

 This basically adds an h302_exceptions option to tox.ini that lets us
 specify which names are allowed to be imported. For example, we can do:

 [hacking]
 h302_exceptions = django.conf.settings,
   django.utils.translation.ugettext_lazy,
   django.core.urlresolvers.

 To have settings, _ and everything from urlresolvers importable without
 the need for the #noqa tag.

 Of course every project can add their own names there, depending on what
 they need.
 
 Isn't that what import_exceptions is for?  For example, we have this
 in nova:
 
 import_exceptions = nova.openstack.common.gettextutils._
 
Not exactly, as this will disable all import checks, just like #noqa.

-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-20 Thread Radomir Dopieralski
On 20/12/13 00:17, Jay Pipes wrote:
 On 12/19/2013 04:55 AM, Radomir Dopieralski wrote:
 On 14/12/13 16:51, Jay Pipes wrote:

 [snip]

 Instead of focusing on locking issues -- which I agree are very
 important in the virtualized side of things where resources are
 thinner -- I believe that in the bare-metal world, a more useful focus
 would be to ensure that the Tuskar API service treats related group
 operations (like deploy an undercloud on these nodes) in a way that
 can handle failures in a graceful and/or atomic way.

 Atomicity of operations can be achieved by introducing critical sections.
 You basically have two ways of doing that, optimistic and pessimistic.
 A pessimistic critical section is implemented with a locking mechanism
 that prevents all other processes from entering the critical section
 until it is finished.
 
 I'm familiar with the traditional non-distributed software concept of a
 mutex (or in Windows world, a critical section). But we aren't dealing
 with traditional non-distributed software here. We're dealing with
 highly distributed software where components involved in the
 transaction may not be running on the same host or have much awareness
 of each other at all.

Yes, that is precisely why you need to have a single point where they
can check that they are not stepping on each other's toes. If you don't,
you get race conditions and non-deterministic behavior. The only
difference from traditional, non-distributed software is that since the
components involved are communicating over a relatively slow network,
you have a much, much greater chance of actually having a conflict.
Scaling the whole thing to hundreds of nodes practically guarantees trouble.

 And, in any case (see below), I don't think that this is a problem that
 needs to be solved in Tuskar.

 Perhaps you have some other way of making them atomic that I can't
 think of?
 
 I should not have used the term atomic above. I actually do not think
 that the things that Tuskar/Ironic does should be viewed as an atomic
 operation. More below.

OK, no operations performed by Tuskar need to be atomic, noted.

 For example, if the construction or installation of one compute worker
 failed, adding some retry or retry-after-wait-for-event logic would be
 more useful than trying to put locks in a bunch of places to prevent
 multiple sysadmins from trying to deploy on the same bare-metal nodes
 (since it's just not gonna happen in the real world, and IMO, if it did
 happen, the sysadmins/deployers should be punished and have to clean up
 their own mess ;)

 I don't see why they should be punished, if the UI was assuring them
 that they are doing exactly the thing that they wanted to do, at every
 step, and in the end it did something completely different, without any
 warning. If anyone deserves punishment in such a situation, it's the
 programmers who wrote the UI in such a way.
 
 The issue I am getting at is that, in the real world, the problem of
 multiple users of Tuskar attempting to deploy an undercloud on the exact
 same set of bare metal machines is just not going to happen. If you
 think this is actually a real-world problem, and have seen two sysadmins
 actively trying to deploy an undercloud on bare-metal machines at the
 same time, unbeknownst to each other, then I feel bad for the
 sysadmins that found themselves in such a situation, but I feel it's
 their own fault for not knowing about what the other was doing.

How can it be their fault, when at every step of their interaction with
the user interface, the user interface was assuring them that they were
going to do the right thing (deploy a certain set of nodes), but when
they finally hit the confirmation button, it did a completely different
thing (deployed a different set of nodes)? The only fault I see is in
them using such software. Or are you suggesting that they should
implement the lock themselves, through e-mails or some other means of
communication?

Don't get me wrong, the deploy button is just one easy example of this
problem. We have it all over the user interface. Even such a simple
operation as retrieving a list of node ids and then displaying the
corresponding information to the user has a race condition in it -- what
if some of the nodes get deleted after we get the list of ids, but
before we make the call to get node details about them? This should be
done as an atomic operation that either locks, or fails if there was a
change in the middle of it, and since the calls go to different
systems, the only place where you can set a lock or check whether there
was a change is the tuskar-api. And no, retrying the request for the
information about a deleted node won't help -- you can keep retrying for
years, and the node will still remain deleted. This is all over the
place. And saying that this is the user's fault doesn't help.
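
To make the race explicit, here is a purely illustrative sketch; the client
object and its methods are hypothetical stand-ins, not real Tuskar or Ironic
code:

    class NodeNotFound(Exception):
        """Raised by the hypothetical client when a node no longer exists."""


    def show_nodes(client):
        node_ids = client.list_node_ids()   # step 1: fetch the list of ids
        details = []
        for node_id in node_ids:
            # Step 2: a node may have been deleted between step 1 and this
            # call. Without a single place (the tuskar-api) that can hold a
            # lock, or fail the whole operation when something changed in
            # the middle, all we can do here is skip the node or show stale
            # data.
            try:
                details.append(client.get_node_details(node_id))
            except NodeNotFound:
                continue
        return details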

 Trying to make a complex series of related but distributed actions --
 like the underlying actions of the Tuskar - Ironic API

Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-20 Thread Radomir Dopieralski
On 20/12/13 12:25, Ladislav Smola wrote:
 May I propose we keep the conversation Icehouse-related? I don't think
 we can make any sort of locking
 mechanism in I.

By getting rid of tuskar-api and putting all the logic higher up, we are
forfeiting the ability to ever create it. That worries me. I hate to
remove potential solutions from my toolbox, even when the problems they
solve may well never materialize.

 Though it would be worth creating some wiki page that would present the
 whole thing in a consistent manner. I am kind of lost in these emails. :-)

 So, what do you think are the biggest issues for the Icehouse tasks we
 have?

 1. GET operations?
 I don't think we need to be atomic here. We basically join resources
 from multiple APIs together. I think it's perfectly fine that something
 may get deleted in the process. Even right now we join together only
 things that exist, and we can handle it when something is not there.
 There is no need for locking or retrying here AFAIK.
 2. Heat stack create, update
 The stack is locked for the duration of the operation, so nobody can
 mess with it while it is updating or creating. Once we pack into it all
 the operations that are now done on the side, we should be alright. And
 that should be doable in I. So we should push towards this, rather than
 building some temporary locking solution in Tuskar-API.

 3. Reservation of resources
 As we can deploy only one stack now, I think it shouldn't be a problem
 with multiple users there. If somebody deletes resources from the 'free
 pool' in the process, it will fail with 'Not enough free resources'. I
 guess that is fine.
 Also, I am not sure how it is now, but it should be possible to deploy
 smartly, so the stack will keep working even with a smaller amount of
 resources. Then we would just heat stack-update with the numbers it
 ended up with, and it would switch to OK status without changing
 anything.

 So, are there any other critical sections you see?

It's hard for me to find critical sections in a system that doesn't
exist, is not documented and will be designed as we go. Perhaps you are
right and I am just panicking, and we won't have any such critical
sections, or we can handle the ones we do have without any need for
synchronization. You probably have a much better idea of what the whole
system will look like. Even then, I think it still makes sense to keep
that door open and leave ourselves the possibility of implementing
locking/sessions/serialization/counters/any other synchronization if we
need them, unless there is a horrible cost involved. Perhaps I'm just
not aware of the cost?

As far as I know, Tuskar is going to have more than just GETs and Heat
stack operations. I seem to remember stuff like resource classes, roles,
node profiles, node discovery, etc. How will updates to those be handled
and how will they interact with the Heat stack updates? Will every
change trigger a heat stack update immediately and force a refresh for
all open tuskar-ui pages?

Every time we have a number of operations batched together -- such
as in any of those wizard dialogs, for which we've had so many
wireframes already, and of which I expect to see more -- we will have a
critical section. That critical section doesn't begin when the OK
button is pressed; it starts when the dialog is first displayed, because
the user is making decisions based on the information that is presented
to her or him there. If by the time he or she finishes the wizard and
presses OK the situation has changed, you are risking doing something
other than what the user intended. Will we need to implement such
interface elements, and thus need synchronization mechanisms for them?

I simply don't know. And when I'm not sure, I like to have an option.

As I said, perhaps I just don't understand that there is a large cost
involved in keeping the logic inside tuskar-api instead of somewhere
else. Perhaps that cost is significant enough to justify this difficult
decision and limit our options. In the discussion I saw, I didn't see
anything like that pointed out, but maybe it's just so obvious that
everybody takes it for granted and it's just me who can't see it. In
that case I will rest my case.
-- 
Radomir Dopieralski



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-20 Thread Radomir Dopieralski
On 20/12/13 13:04, Radomir Dopieralski wrote:

[snip]

I have just learned that tuskar-api stays, so my whole ranting is just a
waste of all our time. Sorry about that.

-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tuskar] How to install tuskar-ui from packaging point of view

2013-12-19 Thread Radomir Dopieralski
On 16/12/13 04:47, Thomas Goirand wrote:

[snip]

 As for tuskar-ui, the install.rst is quite vague about how to install. I
 got the python-tuskar-ui binary package done, with egg-info and all,
 that's not the problem. What worries me is this part:

[snip]

Hello Thomas,

sorry for the late reply. The install instructions in the tuskar-ui
repository seem to be written with the developer in mind. For a
production installation (for which Tuskar is not yet entirely ready,
regrettably), you would just need two things:

1. Make sure that tuskar_ui is importable as a Python module.
2. Make sure that Tuskar-UI is enabled as a Horizon extension, by
creating a file within Horizon's configuration, as described here:
https://github.com/openstack/horizon/blob/master/doc/source/topics/settings.rst#examples
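
For step 2, a minimal sketch of such a file follows. The file name, the
dashboard slug and the module path here are assumptions on my part, and the
supported keys depend on the Horizon version -- the document linked above
is the authoritative reference:

    # openstack_dashboard/local/enabled/_50_tuskar.py  (hypothetical name)

    # The slug of the dashboard to be added to HORIZON_CONFIG.
    DASHBOARD = 'infrastructure'
    # Django applications to be added to INSTALLED_APPS.
    ADD_INSTALLED_APPS = ['tuskar_ui.infrastructure']
    # Make this dashboard the one shown by default.
    DEFAULT = True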

We will need to update our documentation to include those instructions.

I hope that helps.
-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-19 Thread Radomir Dopieralski
On 14/12/13 16:51, Jay Pipes wrote:

[snip]

 Instead of focusing on locking issues -- which I agree are very
 important in the virtualized side of things where resources are
 thinner -- I believe that in the bare-metal world, a more useful focus
 would be to ensure that the Tuskar API service treats related group
 operations (like deploy an undercloud on these nodes) in a way that
 can handle failures in a graceful and/or atomic way.

Atomicity of operations can be achieved by introducing critical sections.
You basically have two ways of doing that, optimistic and pessimistic.
A pessimistic critical section is implemented with a locking mechanism
that prevents all other processes from entering the critical section
until it is finished. An optimistic one is implemented using transactions,
which assume that there will be no conflict, and simply roll back all the
changes if there was one. Since none of the OpenStack services that we use
expose any kind of transaction mechanism (mostly because they have
RESTful, stateless APIs, and transactions imply state), we are left with
locks as the only tool to ensure atomicity. Thus, your sentence above is
a little bit contradictory, advocating ignoring locking issues and
proposing making operations atomic at the same time.
Perhaps you have some other way of making them atomic that I can't think of?
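
To make the distinction concrete, here is a purely illustrative sketch --
not Tuskar code; the store object, its methods and the use of a simple
in-process lock are all assumptions made only for the example:

    import threading

    _lock = threading.Lock()


    def deploy_pessimistic(store, wanted):
        # Pessimistic: nobody else can enter the critical section until we
        # are done, so the set of free nodes cannot change under our feet.
        with _lock:
            free = store.list_free_nodes()
            store.deploy([node for node in free if node in wanted])


    def deploy_optimistic(store, wanted):
        # Optimistic: assume there will be no conflict, detect one through
        # a version (serial) number, and retry from scratch if there was.
        while True:
            version = store.current_version()
            free = store.list_free_nodes()
            chosen = [node for node in free if node in wanted]
            if store.deploy_if_unchanged(chosen, expected_version=version):
                return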

 For example, if the construction or installation of one compute worker
 failed, adding some retry or retry-after-wait-for-event logic would be
 more useful than trying to put locks in a bunch of places to prevent
 multiple sysadmins from trying to deploy on the same bare-metal nodes
 (since it's just not gonna happen in the real world, and IMO, if it did
 happen, the sysadmins/deployers should be punished and have to clean up
 their own mess ;)

I don't see why they should be punished, if the UI was assuring them
that they are doing exactly the thing that they wanted to do, at every
step, and in the end it did something completely different, without any
warning. If anyone deserves punishment in such a situation, it's the
programmers who wrote the UI in such a way.

-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-19 Thread Radomir Dopieralski
On 11/12/13 21:42, Robert Collins wrote:
 On 12 December 2013 01:17, Jaromir Coufal jcou...@redhat.com wrote:
 On 2013/10/12 23:09, Robert Collins wrote:

[snip]

 That's speculation. We don't know if they will or will not, because we
 haven't given them a working system to test.

 Some part of that is speculation, some part of that is feedback from people
 who are doing deployments (of course it's just a very limited audience).
 Anyway, it is not just pure theory.
 
 Sure. Let me be more precise. There is a hypothesis that lack of
 direct control will be a significant adoption blocker for a primary
 group of users.

I'm sorry for butting in, but I think I can see where your disagreement
comes from, and maybe explaining it will help resolve it.

It's not a hypothesis, but a well-documented and researched fact, that
transparency has a huge impact on the ease of use of any information
artifact. In particular, the easier you can see what is actually
happening and how your actions affect the outcome, the faster you can
learn to use it and the more efficient you are in using it and resolving
any problems with it. It's no surprise that closeness of mapping and
hidden dependencies are two important cognitive dimensions that are
often measured when assessing the usability of an artifact. Humans simply
find it nice when they can tell what is happening, even if theoretically
they don't need that knowledge when everything works correctly.

This doesn't come from any direct requirements of Tuskar itself, and I
am sure that all the workarounds that Robert gave will work somehow for
every real-world problem that arises. But the whole thing will not
necessarily be easy or pleasant to learn and use. I am aware that the
requirement to be able to see what is happening is a fundamental problem,
because it destroys one of the most important rules in systems
engineering -- separation of concerns. The parts in the upper layers
should simply not care how the parts in the lower layers do their jobs,
as long as they work properly.

I know that it is a kind of tradition in Open Source software to
create software with the assumption that it's enough for it to do its
job, and that if every use case can somehow be done, directly or
indirectly, then it's good enough. We have a lot of working tools designed
with this principle in mind, such as CSS, autotools or our favorite git.
They do their job, and they do it well (except when they break horribly).
But I think we can put a little bit more effort into also ensuring that the
common use cases are not just doable, but also easy to implement and
maintain. And that means that we will sometimes have a requirement that
comes from how people think, and not from any particular technical need.
I know that it sounds like speculation, or theory, but I think we need
to trust Jarda's experience with usability and his judgement about
what works better -- unless of course we are willing to learn all that
ourselves, which may take quite some time.

What is the point of having an expert, if we know better, after all?
-- 
Radomir Dopieralski


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Horizon] [Tuskar] [UI] Horizon and Tuskar-UI merge

2013-12-17 Thread Radomir Dopieralski
On 17/12/13 09:04, Thomas Goirand wrote:
 On 12/16/2013 10:32 PM, Jaromir Coufal wrote:

[snip]

 So by default, I would say that there should exist the Project + Admin
 tabs together, or Infrastructure, but never all three together. So when
 Matthias says 'disabled by default', I would mean completely hidden from
 the user, and if the user wants to use Infrastructure management, he can
 enable it in a different Horizon instance, but it will be the only
 visible tab for him. So it will be a sort of separate application, but
 still running on top of Horizon.

 I think the 'disabled by default' approach is the wrong one. Instead, we
 should have some users with enough credentials who will have the
 feature, and others who will not.

[snip]

The thing is, as Jarda writes, that Tuskar is for managing a different
cloud than the rest of Horizon does. With TripleO you have two clouds:
one, called the Undercloud, consists of real hardware and is managed by
the datacenter administrators -- that is the one where Tuskar runs. The
other, called the Overcloud, is deployed on the Undercloud machines, and
is the cloud that the users actually use. It's managed by normal Horizon.
Those two instances of Horizon live on separate machines, in separate
networks, and have separate sets of users.

I agree that in the general case your proposal would be much better,
but in this case it doesn't really make sense to have the normal
Horizon in the Undercloud and require the admins to specially configure
the users to see Tuskar. Instead, a preconfigured installation of Horizon
with Tuskar enabled seems much better.

-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-12 Thread Radomir Dopieralski
On 11/12/13 13:33, Jiří Stránský wrote:

[snip]

 TL;DR: I believe that "As an infrastructure administrator, Anna wants a
 CLI for managing the deployment providing the same fundamental features
 as UI." With the planned architecture changes (making tuskar-api thinner
 and getting rid of proxying to other services), there's not an obvious
 way to achieve that. We need to figure this out. I present a few options
 and look forward to feedback.

[snip]

 2) Make a thicker tuskar-api and put the business logic there. (This is
 the original approach with consuming other services from tuskar-api. The
 feedback on this approach was mostly negative though.)

This is a very simple issue, actually. We don't have any choice. We need
locks. We can't make the UI, CLI and API behave in a consistent and
predictable manner when multiple people (and cron jobs on top of that)
are using them, if we don't have locks for the more complex operations.
And in order to have locks, we need to have a single point where the
locks are applied. We can't have it on the client side, or in the UI --
it has to be a single, shared place. It has to be Tuskar-API, and I
really don't see any other option.

-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-12 Thread Radomir Dopieralski
On 12/12/13 11:49, Radomir Dopieralski wrote:
 On 11/12/13 13:33, Jiří Stránský wrote:
 
 [snip]
 
  TL;DR: I believe that "As an infrastructure administrator, Anna wants a
  CLI for managing the deployment providing the same fundamental features
  as UI." With the planned architecture changes (making tuskar-api thinner
  and getting rid of proxying to other services), there's not an obvious
  way to achieve that. We need to figure this out. I present a few options
  and look forward to feedback.
 
 [snip]
 
 2) Make a thicker tuskar-api and put the business logic there. (This is
 the original approach with consuming other services from tuskar-api. The
 feedback on this approach was mostly negative though.)
 
  This is a very simple issue, actually. We don't have any choice. We need
  locks. We can't make the UI, CLI and API behave in a consistent and
  predictable manner when multiple people (and cron jobs on top of that)
  are using them, if we don't have locks for the more complex operations.
  And in order to have locks, we need to have a single point where the
  locks are applied. We can't have it on the client side, or in the UI --
  it has to be a single, shared place. It has to be Tuskar-API, and I
  really don't see any other option.
 

Ok, it seems that not everyone is convinced that we will actually need
locks, transactions, sessions or some other way of keeping the
operations synchronized, so I will give you a couple of examples. For
clarity, I will talk about what we have in Tuskar-UI right now, not
about something that is just planned. Please don't respond with "but we
will do this particular thing differently this time". We will hit the
same issue again in a different place, because the whole nature of
Tuskar is to provide large operations that abstract away the smaller
operations that could be done without Tuskar.

One example of which I spoke already is the Resource Class creation
workflow. In that workflow, in the first step we fill in the information
about the particular Resource Class -- its name, kind, network subnet,
etc. In the second step, we add the nodes that should be included in
that Resource Class. Then we hit OK: the nodes are created, one by one,
and then the nodes are assigned to the newly created Resource Class.

There are several concurrency-related problems here if you assume that
multiple users are using the UI:
* In the meantime, someone can create a Resource Class with the same
  name, but a different ID and a different set of nodes. Our new nodes
  will get created, but creating the Resource Class will fail (as the
  name has to be unique) and we will have a bunch of unassigned nodes.
* In the meantime, someone can add one of our nodes to a different
  Resource Class. The creation of nodes will fail at some point (as
  the MAC addresses need to be unique), and we will have a bunch of
  unassigned nodes, no Resource Class, and lost user input for the
  nodes that didn't get created.
* Someone adds one of our nodes to a different Resource Class, but does
  it in the moment between our creating the nodes and creating the
  Resource Class. It is hard to tell which Resource Class the node is in now.

The only way around such problems is to have a critical section there.
This can be done in multiple ways, but they all require some means of
synchronization and would be preferably implemented in a single place.
The Tuskar-API is that place.
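
A purely illustrative sketch of such a critical section on the API side
follows; none of these names are real Tuskar code, and a real service would
use a database-level or distributed lock rather than an in-process one:

    import threading

    # In a real service this would be a database-level or distributed lock.
    _resource_class_lock = threading.Lock()


    def create_resource_class_with_nodes(api, name, node_macs):
        """Create the Resource Class and its nodes as one guarded operation."""
        with _resource_class_lock:
            # The uniqueness checks and the writes happen inside the same
            # critical section, so no other request can interleave with them.
            if api.resource_class_exists(name):
                raise ValueError("Resource Class name already taken: %s" % name)
            if any(api.node_exists(mac) for mac in node_macs):
                raise ValueError("Some of the nodes are already registered")
            nodes = [api.create_node(mac) for mac in node_macs]
            return api.create_resource_class(name, nodes)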

Another example is the deploy button. When you press it, you are
presented with a list of undeployed nodes that will be deployed, and are
asked for confirmation. But if any nodes are created or deleted in
the meantime, the list you saw is not the list of nodes that are actually
going to be deployed -- you have been lied to. You may accidentally
deploy nodes that you didn't want.

This sort of problem will pop up again and again -- it's common in user
interfaces. Without a single point of synchronization where we can check
for locks, sessions, operation serial numbers and state, there is no way
to solve it. That's why we need to have all operations go through
Tuskar-API.

-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Javascript testing framework

2013-12-03 Thread Radomir Dopieralski
On 03/12/13 01:26, Maxime Vidori wrote:
 Hi!
 
 In order to improve the JavaScript quality of Horizon, we have to change the
 testing framework of the client side. QUnit is a good tool for simple tests,
 but the integration of Angular needs some powerful features which are not
 present in QUnit. So, I have made a little POC with the JavaScript testing
 library Jasmine, which is the one given as an example in the AngularJS
 documentation. I have also rewritten a QUnit test in Jasmine in order to show
 that the change is quite easy to make.
 
 Feel free to comment on this mailing list about the pros and cons of this new
 tool, and to check out my code and review it. I have also made a helper for
 quick development of Jasmine tests through Selenium.
 
 To finish, I need your opinion on a new command-line option for run_tests.sh.
 I think we should create a run_tests.sh --runserver-test target which will
 allow developers to see all the JavaScript test pages. This new option
 will allow people to avoid using the command line for running Selenium
 tests, and allow them to view their tests in a comfortable HTML interface. It
 could be interesting for the development of tests; this option will
 only be used for development purposes.
 
 Waiting for your feedback!
 
 Here is a link to the Jasmine POC: https://review.openstack.org/#/c/59580/

Hello Maxime,

thank you for this proof of concept, it looks very interesting. I left a
small question about how it's going to integrate with Selenium there.

But I thought that it would be nice if you could point us to some
resources that explain why Jasmine is better than QUnit, other than the
fact that it is used in some AngularJS examples. I'm sure that would help
win a lot of people over to the idea of switching. A quick search for
"QUnit vs Jasmine" tells me that the advantages of Jasmine are
tight integration with Ruby on Rails and a Behavior Driven Development
style syntax. As we don't use either, I'm not sure we want it.

I'm sure that pointers to resources specific to our use cases would
greatly help everyone make a decision.

Thanks,
-- 
Radomir Dopieralski


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] JavaScript use policy

2013-11-26 Thread Radomir Dopieralski
On 25/11/13 18:29, Lyle, David wrote:
 We have been having this discussion here on the mailing list [1][2] and also 
 in the Horizon team meetings [3].
 
 The overall consensus has been that we are going to move forward with a 
 JavaScript requirement.

[...]

 [1] 
 http://lists.openstack.org/pipermail/openstack-dev/2013-November/018629.html
 [2] 
 http://lists.openstack.org/pipermail/openstack-dev/2013-November/019894.html
 [3] http://eavesdrop.openstack.org/meetings/horizon/

Please forgive me for raising this subject then. The links you provided
talk about using AngularJS and node.js, and I think that's independent of
whether we will require JavaScript in the user's browser or not. In
fact, I'm following the AngularJS discussion closely, and the way it is
going to be introduced specifically allows non-JavaScript fallbacks. I
also couldn't find anything about requiring JavaScript in Horizon's
documentation, so I was under the impression that this topic was still open.

I'm very happy to hear that there is a consensus and that we have a
clear policy about it. Could we maybe put that policy in writing
somewhere in the Horizon documentation, so that other people won't make
fools of themselves like I just did?

-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] adding jshint to qunit tests (was: Javascript development improvement)

2013-11-25 Thread Radomir Dopieralski
On 22/11/13 16:08, Imre Farkas wrote:
 There's a jslint fork called jshint which is able to run in the browser
 without any node.js dependency.
 
 I created a POC patch [1] a long time ago to demonstrate its capabilities.
 It's integrated with qunit and runs automatically with the horizon test
 suite.
 
 The patch also contains a .jshintrc file for the node.js package but you
 can remove it since it's not used by the qunit+jshint test at all.

This is excellent -- just as I wanted to raise the issue of inconsistent
JavaScript style in Horizon. Running jshint as part of the qunit tests is
a great solution that I didn't even think of. I think we should
definitely use it.

For people who don't have everyday contact with JavaScript, jshint is
not just a style checker similar to pep8 or flake8.  JavaScript is a
language that wasn't fortunate enough to have its syntax designed and
tested for a long time by a crowd of very intelligent and careful
people, like Python's. It's full of traps, things that look very
innocent on the surface and seem to work without any errors, but turn
out to be horribly wrong -- and all because of a missing semicolon or an
extra comma or even a newline in a wrong spot. Jshint catches at least
the most common mistakes like that, and I honestly can't imagine writing
JavaScript without having it enabled in my editor. We definitely want to
use it sooner or later, and preferably sooner.

Whether we need node.js for it or not is a technical issue -- as Imre
demonstrated here, this one can be solved without node.js, so there is
really nothing stopping us from adopting it.
-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] JavaScript use policy

2013-11-25 Thread Radomir Dopieralski
Hello everyone,

there have been some talks about this behind the scenes for a while, but I
think that we need to have this discussion here at last and make a
decision. We need a clear, concrete and enforceable policy about the use
of client-side JavaScript in Horizon. The current guideline, "it would be
nice if it worked without JavaScript", doesn't really cut it, and because
it is not being enforced, the result is really uneven across the project.

This is going to be a difficult discussion, and it's likely to get very
emotional. There are different groups of users and developers with
different tasks and needs, and it may be hard to get everyone to agree
on a single option. But I believe it will be beneficial for the project
in the long run to have a clear policy on this.

As I see it, we have basically three options about it. The two extreme
approaches are the simplest: either require everything to work without
JavaScript, or simply refuse to work with JavaScript disabled. The third
option is in reality more like another three hundred options, because it
would basically specify what absolutely has to work without JavaScript,
and what is allowed to break. Personally I have no opinion about what
would be best, but I do have a number of questions that I would like you
to consider:

1. Are the users of Horizon likely to be in a situation, where they need
to use JavaScript-less browser and it's not more convenient to use the
command-line tools?

2. Are there users of Horizon who, for whatever reason (security,
disability, weak hardware), prefer to use a browser with JavaScript
disabled?

3. Designing for the web constrains the designs in certain ways. Are
those constraints hurting us so much? Do you know of any examples in Horizon
right now that could be redesigned if we dropped the non-JavaScript
requirements?

4. Some features are not as important as others. Some of them are nice
to have, but not really necessary. Can you think about any parts of the
Horizon user interface that are not really necessary without JavaScript?
Do they have something in common?

5. Some features are absolutely necessary, even if we had to write a
separate fallback view or render some things on the server side to
support them without JavaScript. Can you think of any in Horizon right now?

6. How much more work is it to test whether your code works without
JavaScript? Is it easier or harder to debug when something goes wrong?
Is it easier or harder to expand or modify existing code?

7. How would you test if the code conforms to the policy? Do you think
it could be at least partially automated? How could we enforce the
policy better?

8. How much more experience and knowledge is needed to create web pages
that have proper graceful degradation? Is that a common skill? Is that
skill worth having?

9. How much more work are we ready to put into supporting
JavaScript-less browsers? Is that effort that could be spent on
something else, or would it be wasted anyway?

10. How much do we need real-time updates on the web pages? What do we
replace them with when no JavaScript is available -- static and outdated
data, or do we not display that information at all?

11. If we revisit this policy next year, and decide that JavaScript can
be a requirement after all, how much work would have been wasted?

12. Are we likely to have completely different designs if we require
JavaScript? Is that a good or a bad thing?

13. Can we use any additional libraries, tools or techniques if we
decide to require JavaScript? Are they really useful, or are they just
toys?

14. How do we decide which features absolutely need to work without
JavaScript, and which can break?

15. Should we display a warning to the user when JavaScript is disabled?
Maybe we should do something else? If so, what?

16. If JavaScript is optional, would you create a single view and
enhance it with JavaScript, or would you rather create a separate
fallback view? What are the pros and cons of both approaches? How would
you test them?

17. If we make JavaScript easier to use in Horizon, will it lead to it
being used in unnecessary places and causing bloat? If so, what can be
done to prevent that?

18. Will accessibility be worse? Can it be improved by using good
practices? What would be needed for that to happen?

You don't have to answer those questions -- they are just examples of
the problems that we have to consider. Please think about it. There is
no single best answer, but I'm sure we can create a policy that will
make Horizon better in the long run.

Thank you,
-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Bad review patterns

2013-11-07 Thread Radomir Dopieralski
On 06/11/13 23:07, Robert Collins wrote:
 On 6 November 2013 21:34, Radomir Dopieralski openst...@sheep.art.pl wrote:

 [...] Firstly, things
 like code duplication are a sliding scale, and I think it's ok for a
 reviewer to say 'these look similar, give it a go please'. If the
 reviewee tries, and it's no good - that's fine. But saying 'hey, I
 won't even try' - that's not so good. Many times we get drive-by
 patches and no follow up, so there is a learnt response to require
 full satisfaction with each patch that comes in. Regular contributors
 get more leeway here I think.

This is of course a gut feeling thing, and different people have
different thresholds for when to deduplicate code. I came up with that
pattern for myself, because my feelings about the code often didn't
match the feelings of the rest of the team. Please note that the fact
that I notice a bad pattern in my review doesn't immediately mean I
shouldn't do it -- just that it is suspect and I should rethink it. So,
if it's a blatant copy-paste of code, of course I will point it out. If
the patch includes code that could be easily replaced by an existing
utility function, that the author might not know about, of course I will
point it out. But if the code is similar to some other code in some
other part of the project, I will try to treat it the same way as when I
spot a bug that is not related to the patch during the review -- either
follow up with a patch of my own, or report a bug if I don't have time
for this (this has a nice side effect of producing the low-hanging-fruit
bugs that teach refactoring to newcomers). I think it's important to
remember that you can always commit to that project, so not all changes
need to be made by that single author (especially in OpenStack, where you
have the marvelous option of amending someone else's patch).

[...]
 This is quite obvious. Just don't do it. It's OK to spend an hour
 reviewing something and then leave no comments on it, because it's
 simply fine, or because we had no means to test something (see the first
 pattern).
 
 Core reviewers look for the /comments/ from people, not just the
 votes. A +1 from someone that isn't core is meaningless unless they
 are known to be a thoughtful code reviewer. A -1 with no comment is
 also bad, because it doesn't help the reviewee get whatever the issue
 is fixed.

 It's very much not OK to spend an hour reviewing something and then +1
 with no comment: if I, and I think any +2er across the project, see a
 patch that needs an hour of review, with a commentless +1, we'll
 likely not count the +1 as meaningful.

That is a very good point! We want reviews that have useful comments in
them, so it is indeed a very good rule of thumb to always ask yourself
"Is this comment that I am posting helpful?". What I meant here is that
if I do a review and, even after spending a lot of time on it, can't
come up with anything that would actually help the author make it a
better patch, then it is fine not to post anything at all, including no
score. I suppose it would be nice to +1 it if you find it basically
fine, but we didn't have separate +1s and +2s in my previous project.

-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Bad review patterns

2013-11-06 Thread Radomir Dopieralski
[...] suspicious, and when I find myself
falling into one of them, I always rethink what I'm doing. Maybe you
have some more bad patterns that you would like to share? Reviewing code
is a difficult skill and it's always good to improve it by using
experience of others.

-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

