On Mon, Nov 17, 2008 at 5:33 PM, Ondrej Certik <[EMAIL PROTECTED]> wrote:
> On Mon, Nov 17, 2008 at 5:18 PM, llarsen <[EMAIL PROTECTED]> wrote:
>>
>> Hi Ondrej,
>>
>> I am using mercurial 1.0.2 to access the repository. I know mercurial
>> 1.1 is out, but I was trying to stay compatible with the current
>> version of TortoiseHg. However, hg 1.0.2 does support the mercurial
>> queueing commands, so I figured using 1.0.2 would be OK. To get the
>> code, I used the hg path http://hg.sympy.org/sympy. My code appears
>> to be up to date with what is currently in the repository (according
>> to the hg update command). Incidentally, I am curious whether there
>> are two separate repositories (git and hg) which have to be
>> synchronized in some way, or is there just one repository with a
>> compatibility layer for git and hg. If there are two repositories, how
>> often is synchronization done? Just curious.
>
> With every commit. The two repositories should be identical. I have
> problems installing Mercurial on Windows. When I get to it, I'll try
> it.
>
>>
>> The test run I included previously was run against the current sympy
>> code with code changes I made. It looks like there were a set of
>> expected failures and 4 actual failures. I guess the actual failures
>> were caused by my code, and I will address any of these failures before
>> submitting my code.
>
> If you run py.test, here is the expected output:
>
> http://code.google.com/p/sympy/wiki/ExampleTestRun
>
>>
>> I am using Python 2.5.2 on Win XP. I downloaded sympy 0.6.2 and also
>> did testing and debugging against this. This had several xfails, but
>> no fails. My debugging led me to the conclusion that xfail was an
>> expected failure and could be ignored. I also ran the test cases
>> against the current repository and got similar results:
>>
>> = tests finished: 1246 passed, 2 xpass, 31 xfail, 4 skipped in 89.11
>> seconds ==
>
> Yes, this means all tests pass, see above.
>
>>
>> From what I can tell, the tests are probably working as expected (the
>> only failures are xfails). I was just confused because, from my past
>> unit test experience, if a test fails you have a problem. There was no
>> concept of an expected failure. It either fails or it doesn't, and if
>
> I agree it is confusing. That's why in our new tests using "bin/test",
> you either get a green [OK] or a red [FAIL] at the end of each test
> file, so now it should be easy to tell if all is ok, or not.
>
>> it fails you fix it. However, there may be cases where you choose to
>> defer a fix till later, so I can see an expected failure coming in
>> useful. However, my opinion is that the 'expected failure' cases
>> should be filtered out by default so that they are not included in a
>> typical test run. In other words, anyone who just grabs the code
>
> That's a nice proposition; I added it to our issues:
>
> http://code.google.com/p/sympy/issues/detail?id=1200
>
>> and runs the tests will never see an expected failure. These would
>> have to be turned on explicitly (with an option that is passed in? or
>> some variable XFAIL class?) if you are actually wanting to see the
>> expected failures. Others may have a different opinion, but from my
>> perspective, this would reduce confusion for casual developers or
>> users who are interested in the test cases for some reason.
>
> I absolutely agree with you and I hope it will be implemented soon. If
> you have other comments, we are interested in them. It's very
> valuable to receive feedback like this.
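For what it's worth, the opt-in behaviour proposed above could be sketched with a decorator along these lines. This is a minimal illustration only, not sympy's actual test runner; the `XFAIL` decorator and the `SHOW_XFAIL` environment variable are hypothetical names:

```python
import functools
import os

def XFAIL(test):
    """Hypothetical sketch: mark a test as an expected failure.

    By default the test body is skipped entirely, so a casual test run
    never reports it. Setting SHOW_XFAIL=1 in the environment runs the
    test and reports whether it failed as expected ("xfail") or
    unexpectedly passed ("xpass").
    """
    @functools.wraps(test)
    def wrapper(*args, **kwargs):
        if not os.environ.get("SHOW_XFAIL"):
            return "xfail (skipped)"    # hidden by default
        try:
            test(*args, **kwargs)
        except Exception:
            return "xfail"              # failed, as expected
        return "xpass"                  # unexpectedly passed
    return wrapper

@XFAIL
def test_not_yet_implemented():
    # A deferred fix: the test documents a known gap without
    # cluttering the default test output.
    raise NotImplementedError("deferred fix")
```

Under this scheme, anyone who just grabs the code and runs the tests would see only passes and real failures, while a developer who explicitly enables expected failures would get the xfail/xpass counts shown in the test summary above.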
More discussion is here:

http://groups.google.com/group/sympy-patches/browse_thread/thread/628ebc799ee57ec1

Ondrej