side-note: It looks like with my last commit, the testbench in the master branch should now work. I couldn't test all possible option/platform combinations, so it would be good if someone (Thomas? Anyone else?) could try to break it and report back.
Thanks, Rainer

2014-11-11 15:20 GMT+01:00 Rainer Gerhards <rgerha...@hq.adiscon.com>:
>
> 2014-11-11 14:43 GMT+01:00 David Lang <da...@lang.hm>:
>
>> On Tue, 11 Nov 2014, Thomas D. wrote:
>>
>>> Hi,
>>>
>>> On 2014-11-11 03:13, David Lang wrote:
>>>
>>>> you still need those credentials for your sandbox, right?
>>>>
>>>> If MySQL isn't installed on the system you will need to install
>>>> the package (requiring root + distro-specific knowledge); if it
>>>> is installed, you need to configure users and permissions for the
>>>> test user (requiring DB-root equivalent + distro-specific
>>>> knowledge)
>>>
>>> Yes, running tests against any third-party application requires
>>> access to those applications while testing. These tests are called
>>> "integration tests".
>>>
>>> Because you should *not* depend on those third-party applications,
>>> you often cannot run your integration tests every time you run the
>>> normal test suite.
>>>
>>> Therefore you would split your tests. And this is already done for
>>> rsyslog: the mysql tests, for example, are separated.
>>>
>>> But this doesn't mean you cannot test ommysql at all without
>>> mysql: you can write a mock so you can at least test your own
>>> code.
>>>
>>> BTW: Travis will allow us to test against a running mysql each
>>> time...
>>
>> Add in Oracle, Postgres, RabbitMQ, 0MQ, journald, ElasticSearch,
>> etc.
>>
>> The vast majority of the problems that we run into with rsyslog are
>> in the interfaces to these third-party destinations.
>
> I wouldn't go so far as to say it's the "vast majority", but it's a
> big bunch in any case. A common case, however, is that the most
> useful tests are those that are very hard to create - most notably
> those that deal with races. I judge "most useful" by actual bug
> reports which could have been avoided by a test.
>
>>>>>> But let me be clear. From this thread, it looks like these
>>>>>> should be our project priorities:
>>>>>>
>>>>>> 1. make the testbench fully automatic and self-contained
>>>>>> 2. create unit tests
>>>>>> 3. create more and better integration tests
>>>>>> 4. document
>>>>>> 5. fix bugs
>>>>>> 6. develop new features
>>>>>>
>>>>>> [...]
>>>>>
>>>>> [...]
>>>>>
>>>>> Yes, I am voting for changing the priority order like you said.
>>>>
>>>> I disagree that 1, 2, and 3 are more important than 4 and 5
>>>> (where 6 lives is subject to a lot of debate)
>>>
>>> Wait, this order is not set in stone.
>>>
>>> For example, to write tests, you have to know what a component
>>> should do and how it should be used. So you need documentation
>>> before you can write tests.
>>>
>>> <snip>
>>>
>>> Our views are not so far apart.
>>>
>>> To be clear: I don't want to establish a process above the
>>> product.
>>>
>>> But the process should be part of the product. Change the way you
>>> currently work:
>>>
>>> When developing a new feature, start with the documentation.
>>> Once you know what your new feature should look like and how it
>>> should be used, start writing tests so you can be sure your new
>>> feature actually does what you are expecting.
>>> Now you can start the actual coding...
>>>
>>> In the end you have a new feature, fully documented and tested.
>>>
>>> If you don't follow this workflow, you will end up with the
>>> current situation: you may have great features -- but nobody
>>> knows, due to missing or hard-to-use documentation.
>>> The tests will help you keep your hard work working. Imagine you
>>> spend days on a new feature but another change breaks it. Without
>>> tests you will only notice if somebody actually tries to use your
>>> feature and files a bug. But we all know that this doesn't happen
>>> that often. Also, if you are looking for a new product which may
>>> solve the problem you are currently facing, and you run into a bug
>>> which is a show stopper for you at that moment, you will skip this
>>> product.
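As a rough illustration of the test split discussed above: an integration test can probe for its external dependency and report SKIP instead of FAIL when the dependency is absent. This is a hypothetical sketch, not code from the rsyslog testbench; the test names and the `probe` helper are made up.

```shell
#!/bin/sh
# Hypothetical sketch: each integration test probes for the external
# service it depends on and reports SKIP instead of FAIL when the
# dependency is absent. In an automake-driven testbench the skip
# branch would additionally "exit 77" (automake's SKIP exit code).
probe() {
    # $1 = test name, $2 = command the test depends on
    if command -v "$2" >/dev/null 2>&1; then
        echo "RUN  $1"
    else
        echo "SKIP $1 ($2 not installed)"
    fi
}

probe "ommysql-basic"   mysql   # integration test: needs a real mysql
probe "core-queue-test" sh      # core test: depends only on the shell
```

The second line always prints `RUN  core-queue-test`; the first prints RUN or SKIP depending on whether a mysql client is present on the machine.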
>> you _are_ arguing for process over product. You are pushing for
>> strict test-driven development, and while there are a lot of
>> academics who really believe in such development, there are almost
>> no development teams in the real world (commercial or open source)
>> that work that way for very long.
>>
>> The problem with documentation isn't that no documents get written
>> (with the exception of some contributed modules), but rather that
>> the documentation has been written by someone so close to the
>> problem that there is a huge amount of implied context that isn't
>> obvious to others, and that the organization of the documentation is
>> based on how the person writing it thinks about the problems, not
>> how the people who don't know about the software think about the
>> problems. There are also cases that the author (of both software and
>> documentation) didn't think of when programming/writing, and so
>> those may not be documented.
>
> +1
>
>>> So if Rainer should decide to switch the workflow as described,
>>> this would only help new features. But we still have to deal with
>>> the past.
>>>
>>> That's why I recommend changing priorities for a moment. Like a
>>> "sprint"... like a bug hunt day/week/month where everybody focuses
>>> on squashing bugs.
>>
>> You keep missing that there isn't a lot of "everybody" in rsyslog
>> development. Rainer is the author of >90% of both the code and the
>> documentation. He is very willing to accept contributions, but we
>> need to get other people involved. The split of the documentation
>> into a different repository was to help ease the process of
>> contributing documentation, but after a brief initial flurry, the
>> outside contributions have tapered off.
>
> ... still, it's a good example. While of course I am a bit
> disappointed that we did not get any real doc contributions, James
> did a great job in helping to make contributing and doc writing much
> easier than before. He did the initial testing with sphinx, converted
> quite a bit, and educated me. So he made it pretty simple for me to
> carry on with the new system. And, yes, the new system still has
> advantages, even though I still do the vast majority of the work:
> it's easier now for me as well, and at least a little bit more fun.
>
> So even if a contribution is just a spike, it can help fuel some
> long-term changes that otherwise wouldn't happen because the initial
> setup would take too long or look too scary. If we get Travis CI up
> in a useful way, I think that would be a good addition to the
> project.
>
>> Please contribute tests, contribute documentation, contribute new
>> features with any process that you want.
>>
>> And feel free to discuss development models, but recognize that
>> there are a lot of them out there and that there is no "one true
>> way".
>>
>> Writing good tests is a lot harder than you make it out to be, and
>> frequently tests get written that look like they test one thing, but
>> actually end up testing something subtly different.
>
> ... a side effect I tend to use ;) The integration tests in the
> current test suite very often catch cases that a unit test would
> catch. But I only had to write one test...
>
>> Rainer is listening and adapting to suggestions, but he's also
>> asking for help.
>
> Let me be clear: I am really listening hard, and I do think myself
> that a good testbench, and running it frequently enough, is well
> worth it. That's why I started the initial testbench, and that's why
> I try to add tests whenever possible (see the latest elasticsearch
> additions, for example). I think it's even worth holding development
> for a bit, as I am doing now. But you can already see that I am
> paying almost no attention to user questions, not doing merge
> requests, not fixing those bugs that we know to exist (in other
> words: we wouldn't need to find new bugs in order to fix some ;)).
> So there is a price for all of this work.
> I will probably be able to concentrate on the testbench and "inner
> workings" (the ASL 2.0 conversion immediately pops into my mind)
> until the end of the year or the end of January. But at some point, I
> need to do some work that users actually notice and that helps them
> with their practical problems.
>
> On the testbench/sandbox issue, for example, I did some tests with
> vagrant yesterday, and this looks pretty cool. So I could envision a
> two-stage testbench, where testbench part 1 spawns multiple VMs and
> makes testbench part 2 execute inside these machines. Right now, I do
> part 1 manually.
>
> Based on experience, I think it will be a big win if the testbench
> runs daily on Ubuntu, CentOS 6 & 7 and openSUSE 13 and is able to
> test mysql & elasticsearch (done today) and probably postgres soon.
> This is my initial goal. I would also like to have at least the
> capability to easily write unit tests, thus I asked that question.
> Anything more is highly appreciated, but probably won't manifest
> until we get some serious contributions (read: worth a couple of
> days' work, at least). All in all, it would probably be better to
> write some running code than mailing list postings ;)
>
> What is missing from what I am currently setting up is that type of
> "staging" branch that David mentioned in his well-thought-out mail
> from last Saturday (or so). The good thing is that this is also easy
> to implement. Once I have the basic setup done, I'll probably
> implement this, so all merges would go to a candidate tree and only
> be merged into master after testbench success. The testbench success
> check & merge would for now need to be done manually, but that isn't
> a really big issue.
>
> Bottom line:
>
> It's good to contribute ideas, and all ideas are appreciated and
> *make* a difference. But contributing code (testbench, automation
> scripts, help guides) is even more useful.
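One way "part 1" of such a two-stage testbench could be sketched is a small driver that loops over target VMs. Everything here is illustrative, not an existing rsyslog script: the box names, the in-VM command, and the DRYRUN switch (which defaults to on, so the sketch only prints the vagrant commands it would run).

```shell
#!/bin/sh
# Hypothetical "part 1" of a two-stage testbench: bring up one VM per
# target platform and run the existing testbench ("part 2") inside
# each of them. DRYRUN defaults to on, so the commands are printed
# instead of executed; set DRYRUN= to actually drive vagrant.
DRYRUN=${DRYRUN-1}

run() {
    if [ -n "$DRYRUN" ]; then echo "+ $*"; else "$@"; fi
}

for box in ubuntu-14.04 centos-6 centos-7 opensuse-13; do
    run vagrant up "$box"
    run vagrant ssh "$box" -c "cd /vagrant && make check"
    run vagrant halt "$box"
done
```

In dry-run mode this just prints the twelve `vagrant` invocations, which makes the intended flow reviewable without vagrant installed.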
>
> Rainer
>
>>> If there is an upcoming "national trade show" where you want to
>>> show something new, yes... do that (but please follow the new
>>> workflow). But I am not aware of something like that in the next 4
>>> months, and I am not aware of a missing feature which must be
>>> implemented ASAP or rsyslog will go away...
>>
>> That was an example from another business of how strict adherence
>> to "tests for everything and all tests must pass" ended up pretty
>> much dooming a company.
>>
>> David Lang
>>
>> _______________________________________________
>> rsyslog mailing list
>> http://lists.adiscon.net/mailman/listinfo/rsyslog
>> http://www.rsyslog.com/professional-services/
>> What's up with rsyslog? Follow https://twitter.com/rgerhards
>> NOTE WELL: This is a PUBLIC mailing list, posts are ARCHIVED by a
>> myriad of sites beyond our control. PLEASE UNSUBSCRIBE and DO NOT
>> POST if you DON'T LIKE THAT.