On Mon, Feb 6, 2017 at 1:35 PM, Kamil Paral <kpa...@redhat.com> wrote:
> That's a good point. But do we have a good alternative here? If we depend
> on packages like that, I see only two options:
>
> a) ask the person to install pyfoo as an RPM (in the readme)
> b) ask the person to install gcc and libfoo-devel as RPMs (in the readme),
>    and pyfoo will then be compiled and installed from pypi
>
> Approach a) is somewhat easier and does not require a compilation stack
> and devel libraries. OTOH it requires using virtualenv with
> --system-site-packages, which means people get different results on
> different setups. That's exactly what I'm trying to eliminate (or at least
> reduce). E.g. https://phab.qa.fedoraproject.org/D1111, where I can run the
> test suite from the makefile and you can't, and it's quite difficult to
> figure out why.
>
> With approach b), you need a compilation stack on the system. I don't
> think that's such a huge problem, because you're a developer after all.
> The advantage is that the virtualenv can be created without
> --system-site-packages, which means locally installed libraries do not
> affect the execution/test suite results. Also, pyfoo is installed at
> exactly the right version, further reducing differences between setups.
> The only thing that can differ is the version of libfoo-devel, which can
> affect the behavior. But the likelihood of that happening is much smaller
> than having pyfoo of a different version or pulling any deps from the
> system site packages.
>
> The reason why I want to recommend `make test` for running the test suite
> (at least in the readme) is that in the makefile we can ensure that a
> clean virtualenv with the correct properties is created, and that only and
> exactly the right versions of deps from requirements.txt are installed. We
> can perform further necessary steps, like installing the project
> <https://phab.qa.fedoraproject.org/D1111>. That further increases
> reliability.
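For illustration, a `make test` target along the lines Kamil describes could look roughly like this; the target and variable names are hypothetical, not taken from any of our actual makefiles, and it assumes `virtualenv` is installed on the system:

```make
# Hypothetical sketch of a `make test` target: build a throwaway
# virtualenv WITHOUT --system-site-packages (the virtualenv default),
# install the exact pinned deps from requirements.txt, install the
# project itself, then run the test suite.
VENV := .testenv

.PHONY: test
test:
	rm -rf $(VENV)
	virtualenv $(VENV)
	$(VENV)/bin/pip install -r requirements.txt
	$(VENV)/bin/pip install -e .
	$(VENV)/bin/pytest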
> Compare this to manually running `pytest`: a custom virtualenv must be
> active; it can be configured differently than recommended in the readme,
> it can be out of date, or it can have more packages installed than needed;
> you might forget some necessary steps.

Sure, I am a devel, but not a C-devel... As I told you in our other
conversation, I see what you are trying to accomplish, but for me the gain
does not even balance the issues. With variant a), all you need to do is
make sure "these python packages are installed" to run the test suite. I'd
rather have something like `requirements_testing.txt`, where all the deps
are spelled out with the proper versions, and use that as the base for
populating the virtualenv (I guess we could easily make do with the
requirements.txt we have now). Either you have the right version on your
system (or in your own development virtualenv from which you run the
tests), or the right version will be installed for you from pip. Yes, we
might get down to people having to install a bunch of header files and gcc
if, for some reason, their system is so different that they cannot obtain
the right version any other way, but it will work most of the time.

> Of course nothing prevents you from simply running the test suite using
> `pytest`. It's the same approach that Phab will use when submitting a
> patch. However, when some issue arises, I'd like all parties to be able to
> run `make test` and have it return the same result. That should be the
> most reliable method, and if it doesn't return the same thing, it means we
> have an important problem somewhere, and not just "a wrongly configured
> project on one dev machine".
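To make variant a) concrete, such a `requirements_testing.txt` (the filename is just my suggestion, and the packages and versions below are purely illustrative, not our actual pins) would simply list exact versions:

```
# requirements_testing.txt -- hypothetical example; the package names
# and versions here are illustrative placeholders
pytest==3.0.6
mock==2.0.0
dingus==0.3.4
```

Pip can then populate any virtualenv from it with `pip install -r requirements_testing.txt`, so everyone resolves the same versions regardless of what their distro ships.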
> So, I see these main use cases for `make test` and the b) approach:
> * a good, reliable default for newcomers, the approach that's the least
>   likely to go wrong
> * determining the reason for failures that only one party sees and the
>   other doesn't
> * a `make test-ci` target, which will hopefully be used one day to perform
>   daily/per-commit CI testing of our codebases. Again, using the most
>   reliable method available.

Sure, nobody forces _me_ to do it this way, but I still fail to see the
overall general benefit. If a random _python web app_ project that I wanted
to submit a patch for asked me to install gcc and tons of -devel libs, I'd
be going next door. We talked about "accessibility" a lot with Phab, and one
of the arguments against it (not saying it was you in particular) was that
"it is complicated, and needs additional packages installed". This is an
even worse version of the same. At least to me. On top of that, who is going
to keep the versions of said packages in sync between Fedora (our target)
and requirements.txt? What release are we going to use as the target? And is
it even the right place and way to do it?

> For some codebases this is not viable anyway, e.g. libtaskotron, because
> they depend on packages not available in pypi (koji) and thus need
> --system-site-packages. But e.g. the resultsdb projects seem like they
> could go without it.

If this is not going to be the same across our whole stack, then I see even
less point in doing it. I'm not against doing the rigorous testing
somewhere. I just plain don't believe that the developer's machine is the
place for this kind of "making sure every bit of the whole setup is the
same for all the people, as in our deployment setup". I'd much rather have
some "hardened" acceptance testing done for the diff requests, in a known
env, but off the dev's machine. Let's tell the devs what we do, and let them
know how to achieve the same level of proofing, but leave the default path
simple.
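As a side note on the version-syncing question: at least the *detection* part could be mechanical. Everything below is a hypothetical sketch, not existing tooling; a small helper could parse the pinned requirements and report where the environment (whether populated from Fedora RPMs or pip) diverges:

```python
# Hypothetical helper for spotting version drift between a pinned
# requirements file and the packages actually installed. Purely an
# illustration of the idea, not code from any of our repos.
import re

def parse_pins(text):
    """Return {package: version} for 'name==version' lines,
    skipping blanks and comments."""
    pins = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        match = re.match(r'^([A-Za-z0-9_.-]+)==([A-Za-z0-9_.]+)$', line)
        if match:
            pins[match.group(1)] = match.group(2)
    return pins

def drift(pins, installed):
    """Return {package: (pinned, installed_or_None)} for every pin
    that the environment does not satisfy exactly."""
    return {name: (want, installed.get(name))
            for name, want in pins.items()
            if installed.get(name) != want}

pins = parse_pins("pytest==3.0.6\n# dev only\nmock==2.0.0\n")
print(drift(pins, {"pytest": "3.0.6", "mock": "1.3.0"}))
# -> {'mock': ('2.0.0', '1.3.0')}
```

In practice the `installed` mapping would come from the environment (e.g. pip's freeze output); the point is only that a mismatch report like this tells both parties immediately whether "works here, fails there" is a version problem.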
IMHO, 90% of the time this will be a non-issue (as it has been so far), and
in the 10% of cases that cause trouble, human interaction will be required
anyway. All in all, I absolutely see the point of what you want to achieve;
I just think this is not the ideal way to do it. From my POV, we should
stick to approach a).

J.
_______________________________________________
qa-devel mailing list -- qa-devel@lists.fedoraproject.org
To unsubscribe send an email to qa-devel-le...@lists.fedoraproject.org