On Fri, Jan 15, 2016 at 2:18 AM, Pavel Labath via lldb-commits <lldb-commits@lists.llvm.org> wrote:
> I don't really understand the purpose of the test, but if the purpose of it
> is to check whether something appears on stdout, then a pexpect test does
> seem like the right tool for the job.
>
> I have tried entering the commands from the test manually, and the required
> text does *not* appear when using the lldb built on the buildbot, while it
> *does* appear when using the lldb I build locally. So, this appears to be a
> genuine bug caused by something in the buildbot environment, but it's not
> clear to me now what it could be.
FWIW, before I asked Ying to take this over, I observed the same. However, Ying *was* able to reproduce this on her local workstation consistently!

> That said, there is currently a big problem with the way this test is
> written, and that's the fact that the prompt expectation does not play well
> with lldb's colored prompt feature. LLDB colors the prompt by overwriting
> the text which was already written by editline, resulting in something like
> "(lldb) \33[2m(lldb) \33[22m" written to stdout. This throws pexpect
> completely off the scent, as the prompt substring appears twice, causing
> the commands and their expectations to misalign. Right now it does not
> matter, because as soon as you start looking for the pattern "this is a
> test string", the streams will align again, but it can cause you a lot of
> problems if you execute a command and then try to observe some external
> side effects of it, because pexpect will return before the command has even
> been given a chance to complete.

I suspected colors as well, but it is something different that is causing the failure on the bot machine. To spawn LLDB without colors, though, we could pass --no-use-colors as done here: http://reviews.llvm.org/D6671

> BTW: the skipIfRemote decorator is not needed. It was needed on
> TestBatchMode, because that one actually runs executables, which needs some
> plumbing to work remotely, but since this test just plays with python
> commands it will execute just fine in remote mode.
>
> pl
>
> On 14 January 2016 at 19:06, Jim Ingham via lldb-commits
> <lldb-commits@lists.llvm.org> wrote:
>> Well, actually on Unix MOST things are files; it was Plan 9 in which all
>> things are files, IIRC...
>>
>> Jim
>>
>>> On Jan 14, 2016, at 11:04 AM, Jim Ingham <jing...@apple.com> wrote:
>>>
>>> Yes, on unix all things are files and all files work the same except when
>>> they don't.
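[The prompt-doubling behavior Pavel describes above can be reproduced in isolation. This is a minimal, stdlib-only sketch (no lldb or pexpect required); the escape sequences are copied from the message above, everything else is illustrative:]

```python
# What lldb's colored prompt actually writes to the pty, per Pavel's message:
# the plain prompt, then a copy redrawn over it by editline wrapped in
# ESC[2m (faint on) ... ESC[22m (faint off).
colored_prompt = "(lldb) \x1b[2m(lldb) \x1b[22m"

# A pexpect expectation for the literal string "(lldb) " consumes the stream
# only up to the FIRST match, so the second copy is left behind to satisfy
# the *next* expect() call -- this is the misalignment being described.
matches = colored_prompt.count("(lldb) ")
print(matches)  # 2: one real prompt yields two matching substrings
```

[This is why a unique marker string, or spawning with --no-use-colors, realigns the stream.]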
>>> I'd rather test the thing we ACTUALLY care about, rather than testing
>>> something else and assuming that it is going to work in the case we care
>>> about as well.
>>>
>>> Jim
>>>
>>>> On Jan 14, 2016, at 10:58 AM, Zachary Turner <ztur...@google.com> wrote:
>>>>
>>>> Wouldn't testing with output redirected to a file still test that it is
>>>> being streamed as it is obtained rather than a big dump at the end? I
>>>> mean, that's what stdout is, right? Just a file. If you use a file on
>>>> the filesystem instead, just check the contents of the file after each
>>>> operation.
>>>>
>>>> On Thu, Jan 14, 2016 at 10:42 AM Jim Ingham <jing...@apple.com> wrote:
>>>> I worry about giving up on testing using Python's stdout for the
>>>> immediate output stream. This is a very useful feature, allowing users
>>>> to make Python-based commands that produce a bunch of output, and stream
>>>> the output as it is being gathered rather than having the command stall
>>>> and then dump a bunch of text at the end. This isn't speculative; that's
>>>> how many of the commands that the OS X kernel team ships for inspecting
>>>> kernel state work.
>>>>
>>>> So we really should be testing this feature. Maybe the flakiness on
>>>> Linux is just a pexpect bug, and streaming to stdout would always work
>>>> correctly hooked up to a "real" terminal. But until we know that, we
>>>> should presume that it is something in LLDB->Python or in the way we use
>>>> Python, and keep testing it.
>>>>
>>>> If you want to separate the two issues, then it would be fine to write
>>>> another test that just tests the type maps for FILE *, but I still think
>>>> this one is valuable.
>>>>
>>>> Jim
>>>>
>>>>> On Jan 14, 2016, at 10:16 AM, Zachary Turner via lldb-commits
>>>>> <lldb-commits@lists.llvm.org> wrote:
>>>>>
>>>>> How much time do you think it would take to determine whether or not
>>>>> using the file-based approach would work?
>>>>> Because on the surface it sounds fairly straightforward, and fixing it
>>>>> that way would allow us to not have to xfail this on more platforms for
>>>>> reasons that we don't understand.
>>>>>
>>>>> On Thu, Jan 14, 2016 at 10:15 AM Enrico Granata via lldb-commits
>>>>> <lldb-commits@lists.llvm.org> wrote:
>>>>> The log just shows a timeout happening in pexpect() - which I don’t
>>>>> have a ready explanation for.
>>>>>
>>>>> X-failing for now is the proper recourse. But you might want to debug
>>>>> this at some point, since it’s weird behavior. It looks like the
>>>>> command is not even just silently erroring out - from the log you sent,
>>>>> it looked stuck somewhere.
>>>>>
>>>>> Is there any chance you can step through and see where the hang is
>>>>> happening?
>>>>>
>>>>>> On Jan 14, 2016, at 3:03 AM, Tamas Berghammer <tbergham...@google.com>
>>>>>> wrote:
>>>>>>
>>>>>> I XFAIL-ed the test for Linux to get the build bot green again and
>>>>>> filed a bug at https://llvm.org/bugs/show_bug.cgi?id=26139
>>>>>>
>>>>>> On Thu, Jan 14, 2016 at 2:18 AM Ying Chen via lldb-commits
>>>>>> <lldb-commits@lists.llvm.org> wrote:
>>>>>> Please see attached log file.
>>>>>>
>>>>>> Thanks,
>>>>>> Ying
>>>>>>
>>>>>> On Wed, Jan 13, 2016 at 5:39 PM, Enrico Granata <egran...@apple.com>
>>>>>> wrote:
>>>>>> From the buildbot log it’s quite hard to tell what could be going on.
>>>>>>
>>>>>> Is there any chance you guys could run the test by hand with the
>>>>>> “-t -v” flags to the dotest.py driver and attach the output of the run?
>>>>>>
>>>>>> That might help figure out where the issue lies.
>>>>>>
>>>>>>> On Jan 13, 2016, at 5:35 PM, Ying Chen <chy...@google.com> wrote:
>>>>>>>
>>>>>>> Hello Enrico,
>>>>>>>
>>>>>>> The new test has been failing on the Ubuntu buildbot, but it's
>>>>>>> passing on some offline Ubuntu machines, and I don't understand what
>>>>>>> caused the difference. Could you please help to take a look?
>>>>>>> http://lab.llvm.org:8011/builders/lldb-x86_64-ubuntu-14.04-cmake/builds/10299
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Ying
>>>>>>>
>>>>>>> On Wed, Jan 13, 2016 at 11:32 AM, Enrico Granata via lldb-commits
>>>>>>> <lldb-commits@lists.llvm.org> wrote:
>>>>>>>
>>>>>>>> On Jan 13, 2016, at 11:26 AM, Zachary Turner <ztur...@google.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>> Thanks! BTW, would having the command write its output to a file
>>>>>>>> instead of stdout solve the pexpect problem as well?
>>>>>>>
>>>>>>> That’s possible - but I would need to play with it a little bit to
>>>>>>> convince myself that it really is a faithful reproduction of my
>>>>>>> original issue. It’ll take me a little while to get to it - stay
>>>>>>> tuned.
>>>>>>>
>>>>>>>> On Wed, Jan 13, 2016 at 11:22 AM Enrico Granata <egran...@apple.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> On Jan 13, 2016, at 10:34 AM, Zachary Turner <ztur...@google.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> On Wed, Jan 13, 2016 at 10:25 AM Enrico Granata
>>>>>>>>> <egran...@apple.com> wrote:
>>>>>>>>>> On Jan 13, 2016, at 10:21 AM, Zachary Turner <ztur...@google.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>> On Wed, Jan 13, 2016 at 10:15 AM Enrico Granata via lldb-commits
>>>>>>>>>> <lldb-commits@lists.llvm.org> wrote:
>>>>>>>>>> +
>>>>>>>>>> +class CommandScriptImmediateOutputTestCase (PExpectTest):
>>>>>>>>>>
>>>>>>>>>> Does the bug that you were trying to fix occur only when using the
>>>>>>>>>> command_script.py file from the lldb command line? If you load it
>>>>>>>>>> from within lldb via an LLDB command, does the problem still
>>>>>>>>>> occur? If the problem you are fixing is not specific to the LLDB
>>>>>>>>>> command line, I would prefer that you write this not as a pexpect
>>>>>>>>>> test.
>>>>>>>>>
>>>>>>>>> I would love to not touch pexpect :-) But in this case, I can’t see
>>>>>>>>> a way around it. I am trying to detect whether some text is
>>>>>>>>> “physically” printed to stdout.
>>>>>>>>> And pexpect seems the most obvious, straightforward way to get
>>>>>>>>> that to happen. Note that, in this bug, the result object is filled
>>>>>>>>> in correctly even if nothing gets printed, so looking at the result
>>>>>>>>> won’t quite cut it - it really needs to detect output to stdout.
>>>>>>>>>
>>>>>>>>> You're calling result.SetImmediateOutputFile(sys.__stdout__).
>>>>>>>>> Wouldn't it work to use a file on the file system here, and then
>>>>>>>>> open that file and look at it after running the commands? It
>>>>>>>>> wouldn't work in remote cases, but it already doesn't work in
>>>>>>>>> remote cases anyway (as you point out below).
>>>>>>>>>
>>>>>>>>>> +
>>>>>>>>>> +    mydir = TestBase.compute_mydir(__file__)
>>>>>>>>>> +
>>>>>>>>>> +    def setUp(self):
>>>>>>>>>> +        # Call super's setUp().
>>>>>>>>>> +        PExpectTest.setUp(self)
>>>>>>>>>> +
>>>>>>>>>> +    @skipIfRemote # test not remote-ready llvm.org/pr24813
>>>>>>>>>> +    @expectedFlakeyFreeBSD("llvm.org/pr25172 fails rarely on the buildbot")
>>>>>>>>>> +    @expectedFlakeyLinux("llvm.org/pr25172")
>>>>>>>>>>
>>>>>>>>>> Are these necessary? The Windows one is necessary (but not if you
>>>>>>>>>> change this to not being a pexpect test, as I've requested above),
>>>>>>>>>> but why are the other ones necessary?
>>>>>>>>>
>>>>>>>>> Do we support remote pexpect? As for FreeBSD and Linux, they might
>>>>>>>>> not be necessary, but I’d much rather remove them (or let the
>>>>>>>>> relevant platform owners remove them) in a separate commit.
>>>>>>>>>
>>>>>>>>> No remote pexpect, so the @skipIfRemote probably needs to be there.
>>>>>>>>> But I think everyone should be checking in tests, enabled by
>>>>>>>>> default, in the widest set of environments possible that they
>>>>>>>>> aren't completely sure are broken. It should be up to the platform
>>>>>>>>> holders to disable broken tests, not up to the authors to enable
>>>>>>>>> working tests.
>>>>>>>>> Because it's much easier to notice a broken test going in than it
>>>>>>>>> is to notice that a working test went in disabled (because who's
>>>>>>>>> going to think to test it out?).
>>>>>>>>
>>>>>>>> This is a fair point. I’ll enable those platforms in a subsequent
>>>>>>>> commit ASAP
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> - Enrico
>>>>>>>> 📩 egranata@.com ☎️ 27683
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> lldb-commits mailing list
>>>>>>> lldb-commits@lists.llvm.org
>>>>>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits
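[For reference, the file-based check Zachary proposes earlier in the thread can be sketched without lldb at all. `run_fake_command` below is a hypothetical stand-in for a script command; in the real test, the file handle would be passed to `result.SetImmediateOutputFile()` instead of `sys.__stdout__`:]

```python
import os
import tempfile

# Hypothetical stand-in for an LLDB Python command. In the real test the
# command would receive this handle via result.SetImmediateOutputFile(f)
# rather than writing to sys.__stdout__.
def run_fake_command(out):
    out.write("this is a test string\n")
    out.flush()

# Run the "command" with its immediate output directed to a file on disk.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    run_fake_command(f)

# After the command runs, the test inspects the file instead of scraping the
# pty with pexpect -- sidestepping the prompt/color issues entirely.
with open(path) as f:
    contents = f.read()
os.unlink(path)
print("this is a test string" in contents)  # True
```

[Note this only checks that output reaches the file; as Jim points out, it does not exercise the streaming-to-stdout path that the pexpect test covers.]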