Great job!!

Best regards,
Matej Leško
Middleware Messaging Quality Assurance Engineer

Red Hat Czech s.r.o., Purkynova 647/111, 612 00  Brno, Czech Republic

E-mail: lesko.matej...@gmail.com
phone: +421 949 478 066
IRC:   mlesko at #brno, #messaging, #messaging-qe, #brno-TPB

On Mon, Apr 17, 2017 at 10:29 PM, Jiri Danek <jda...@redhat.com> wrote:

>  Hello folks,
>
> For a while I've been working on WebDriver (Selenium 2.0) tests for the
> Dispatch web console. The idea is to have an automatic check that the console
> is working and usable. I'd like to share it now in order to get feedback
> and possibly even adoption.
>
> This started as a learning project to get more familiar with pytest and
> WebDriver. I would be glad for any suggestions and recommendations
> regarding what I've done wrong and what should be improved.
>
> Currently there are 10 tests, essentially all of them about connecting the
> console to a router.
> Source on GitHub:
> https://github.com/jdanekrh/dispatch-console-tests/tree/update_to_9
>
> See it on Travis CI (running on Chrome and Firefox):
> https://travis-ci.org/jdanekrh/dispatch-console-tests/builds/222912530
>
> The way it runs on Travis CI is that it first downloads and runs two Docker
> images which I've created, one for the console and the other for the router.
> The Dockerfiles are in the docker/ directory.
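>
> For local experiments, the two containers could also be started from a
> session-scoped pytest fixture along these lines (the image names and ports
> below are placeholders, not the actual ones from the Travis setup):
>
> # conftest.py - illustrative sketch; image names and ports are hypothetical
> import subprocess
> import pytest
>
> @pytest.fixture(scope='session')
> def console_and_router():
>     # start detached containers; 'example/console' and 'example/router'
>     # stand in for the real image names
>     ids = [subprocess.check_output(
>                ['docker', 'run', '-d', '-p', ports, image]).decode().strip()
>            for image, ports in [('example/console', '8080:8080'),
>                                 ('example/router', '5673:5673')]]
>     yield
>     for cid in ids:
>         subprocess.check_call(['docker', 'rm', '-f', cid])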
>
> When getting up to speed on UI tests, I tried to follow the idea of the test
> pyramid [0] and chose to structure the tests around Page Objects [1][2],
> because it seems to be considered a good idea. This means a test might look
> like this:
>
> @pytest.mark.nondestructive
> @pytest.mark.parametrize("when_correct_details", [
>     lambda self, page: self.when_correct_details(page),
>     lambda self, page: page.connect_to(self.console_ip)])
> def test_correct_details(self, when_correct_details):
>     self.test_name = 'test_correct_details'
>     page = self.given_connect_page()
>     when_correct_details(self, page)
>     page.connect_button.click()
>     self.then_login_succeeds()
>     self.then_no_js_error()
>
> If you are familiar with pytest and pytest-selenium, you'll know that by
> default only tests marked as nondestructive are executed. That is the
> meaning of the first decorator/annotation. The second annotation causes the
> test to run twice, each time with a different function as an argument: the
> first function fills in both the IP and the port, the second fills in only
> the IP on the initial connect screen.
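>
> For illustration, the Page Object behind this test could look roughly like
> the sketch below. The class name, locators, and method signatures here are
> assumptions made up for the example, not the actual repository code:
>
> # hypothetical ConnectPage - an illustrative sketch, not repository code
> from selenium.webdriver.common.by import By
>
> class ConnectPage:
>     def __init__(self, driver):
>         self.driver = driver
>
>     @property
>     def connect_button(self):
>         # assumed element id; the real console may use a different locator
>         return self.driver.find_element(By.ID, 'connect-button')
>
>     def connect_to(self, ip, port=None):
>         # assumed form field names on the initial connect screen
>         self.driver.find_element(By.NAME, 'address').send_keys(ip)
>         if port is not None:
>             self.driver.find_element(By.NAME, 'port').send_keys(str(port))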
>
> Here is a screencast of a complete test run in a Chrome browser. All
> software is running locally (meaning the test, the Chrome browser, Tomcat
> with the console, and the Dispatch Router).
>
> https://www.youtube.com/watch?v=A7XFCXPcIeE (3 minutes)
>
> To run the same thing from the CLI, in the top-level directory, run
>
> $ py.test --base-url http://127.0.0.1:8080/stand-alone --console stand-alone --local-chrome
>
> To use Firefox, run
>
> $ py.test --base-url http://127.0.0.1:8080/stand-alone --console stand-alone --capability marionette true --driver Firefox --driver-path /unless/in/PATH/then/path/to/geckodriver
>
> Regarding the tests that fail in the video:
>
>    - TestConnectPage::test_wrongip,port is not reported yet; I'd expect to
>      see an error message almost immediately, the way it used to work about
>      five months ago in the hawtio version (when I tried it last)
>    - TestConnectPage::test_correct_details(when_correct_details1) is
>      reported as https://issues.apache.org/jira/browse/DISPATCH-746
>    - TestHawtioLogsPage::test_open_hawtio_logs_page should not be tested on
>      the standalone console (and it passes because of the
>      @pytest.mark.reproduces marker, as explained below)
>    - TestOverviewPage::test_expanding_tree should not be tested on the
>      standalone console
>
> There was an idea that the tests should never be failing. If there is a
> test that fails, then the test could be modified to succeed if the issue is
> present. I marked such tests with @pytest.mark.reproduces. Passing tests are
> marked with @pytest.mark.verifies. This is probably not a good idea, because
> it is a chore to maintain. It is better to fix the issue in the first place.
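>
> For completeness, a minimal sketch of how such custom marks could be
> registered so pytest knows about them (the descriptions here are mine; the
> repository may declare them differently):
>
> # conftest.py - registering the custom marks (illustrative wording)
> def pytest_configure(config):
>     # declare the marks so pytest does not complain about unknown markers
>     config.addinivalue_line(
>         'markers', 'reproduces: test passes while the referenced issue is present')
>     config.addinivalue_line(
>         'markers', 'verifies: test verifies the expected, correct behavior')
>
> # usage, e.g. on a test for the JIRA issue above:
> #     @pytest.mark.reproduces('DISPATCH-746')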
>
> Regarding CI, there is a Travis CI job linked to the test repository
> itself, and another Travis job to build the Docker images. In the future, I'd
> like to run the image-building job daily and have it trigger a job which
> runs the tests with the image. This way it will be immediately clear
> if a new test fails.
>
> If you have any suggestions regarding either the tests themselves or ideas
> about what should be tested in general, I would be glad to hear them.
>
> [0] https://testing.googleblog.com/2015/04/just-say-no-to-more-end-to-end-tests.html
> [1] https://gojko.net/2010/04/13/how-to-implement-ui-testing-without-shooting-yourself-in-the-foot-2/
> [2] https://youtu.be/7tzA2nsg1jQ?t=14m
>
> Thanks for your help,
>
>  --
> Jiri
>
