> I guess we are not understanding each other.

Probably not. The best way to discuss this is likely a POC.
> If the testing language is AS or JS, test authors have to know how to
> deal with the runtime differences.

It depends on the tests. If the code is platform agnostic, the tests should be platform agnostic as well. If the code is platform-specific, the tests could have similar platform-specific blocks. It does not seem to me like it’s a difficult problem.

> If you want to build up a test harness of tests written in AS, I would
> recommend starting with FlexUnit…

Yup. This is the low-hanging fruit here.

> If you want to run tests that require the runtime, I think Mustella might
> be a good starting point instead of trying to re-invent it.

Maybe. Once I’m finished with the unit tests, I’ll try to figure out where I stand with integration tests. (Unless someone else gets to it first.)

It “feels” to me like the architecture I’m proposing is simpler and more powerful, but I could be wrong.

> On Nov 7, 2017, at 7:54 PM, Alex Harui <aha...@adobe.com.INVALID> wrote:
>
> I guess we are not understanding each other. If the testing language is
> AS or JS, test authors have to know how to deal with the runtime
> differences. That's why Mustella uses MXML. Automated test code
> generation could also abstract those differences from the test authors.
>
> If you want to build up a test harness of tests written in AS, I would
> recommend starting with FlexUnit (as it appears you are doing) and limit
> tests to being small units that don't require the runtime.
>
> If you want to run tests that require the runtime, I think Mustella might
> be a good starting point instead of trying to re-invent it.
>
> Of course, I could be wrong...
> -Alex
>
> On 11/7/17, 9:40 AM, "Harbs" <harbs.li...@gmail.com> wrote:
>
>> Right. I’m proposing a totally different architecture.
>>
>> In the architecture I’m proposing, the runner is a passive observer. The
>> tests would be run by *the beads themselves* and *push* the results out
>> to the runner.
>>
>> The runner would have a count of the number of tests that are supposed to
>> be run, and when all the tests return (or a fail-early test comes back)
>> the runner exits with the pass/fail result.
>>
>> To be clear, there would be *two* separate architectures.
>>
>> 1. Unit tests would be reserved for simple tests which could be run
>> without waiting for UI things to happen. That would use an active test
>> runner.
>> 2. Integration tests would allow for complex and async tests where the
>> test runner would be passive.
>>
>> Hope this is clearer…
>> Harbs
>>
>>> On Nov 7, 2017, at 7:33 PM, Alex Harui <aha...@adobe.com.INVALID> wrote:
>>>
>>> If the runner calls testBead.test(), the next line of code cannot check
>>> for results.
>>>
>>> for (i = 0; i < numTests; i++) {
>>>   testBead[i].test();
>>>   if (testBead[i].failed) {
>>>     // record failure
>>>   }
>>> }
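P.S. To make the passive-runner idea concrete, here is a rough sketch. It's in TypeScript just for brevity (the real thing would be AS), and the names (PassiveRunner, TestBead) and signatures are made up for illustration — they are not part of any framework API. The point is that the beads push results to the runner whenever they finish (sync or async), so the runner never has to poll the way the synchronous for-loop quoted above would:

```typescript
interface TestResult {
  name: string;
  passed: boolean;
}

// The runner is a passive observer: it never polls the beads. It only
// knows how many results to expect, and it exits with pass/fail once
// all of them have reported (or earlier, if fail-fast and one fails).
class PassiveRunner {
  private received = 0;

  constructor(
    private expected: number,
    private done: (passed: boolean) => void,
    private failFast: boolean = true
  ) {}

  // Beads push their results here, synchronously or asynchronously.
  report(result: TestResult): void {
    this.received++;
    if (!result.passed && this.failFast) {
      this.done(false); // fail-early: exit as soon as one test fails
      return;
    }
    if (this.received === this.expected) {
      this.done(true); // all expected tests have reported back
    }
  }
}

// A bead runs its own test and pushes the result out when it is ready;
// setTimeout stands in for waiting on UI events or the runtime.
class TestBead {
  constructor(private name: string, private runner: PassiveRunner) {}

  test(): void {
    setTimeout(() => {
      this.runner.report({ name: this.name, passed: true });
    }, 0);
  }
}

// Usage: the runner reports only after both async beads have pushed their
// results in — exactly what a synchronous loop over test() cannot observe.
const runner = new PassiveRunner(2, (passed) =>
  console.log(passed ? "PASS" : "FAIL")
);
new TestBead("strand", runner).test();
new TestBead("layout", runner).test();
```

The unit-test architecture (an active runner looping over synchronous tests) stays as-is; this only covers the integration-test side, where waiting on the runtime makes the push model necessary.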