Definitely torn on this one. On one hand, if there are features implemented on some platforms that should eventually be implemented on others, then having them fail is a constant reminder that your platform needs to implement the missing functionality. OTOH, things like camera cleanup are meant to be platform-specific, so it's nothing but an annoyance when that test fails on other platforms.
So, I think my take on it is:

1. Have them shared and failing if the API should eventually be implemented on all platforms.
2. Wrap tests in if (platform.name == 'ios') {} if they are meant to work on only one platform.

On Thu, Jun 20, 2013 at 8:44 AM, Lisa Seacat DeLuca <ldel...@us.ibm.com> wrote:

> One issue I ran into with respect to the mobile spec is that some tests
> are only applicable to certain device types. We have a couple of options
> when it comes to those types of tests:
>
> 1. Not include them in the automated tests
> 2. Include them knowing that they *might* cause failures with certain
> device types (see example)
> 3. Add JavaScript logic to check for device type before performing the
> tests
> 4. OR we could create platform-specific automated tests that should be
> run in addition to the base automated tests per device, e.g.
> automatedAndroid, automatedIOS, etc.
>
> An example is:
> https://issues.apache.org/jira/browse/CB-3484
> camera.cleanup is only supported on iOS.
>
> I added a test case to verify that the function exists. But it doesn't
> actually run camera.cleanup, so there are no failures on other platforms.
> So really there shouldn't be any harm in keeping the test.
>
> What are everyone's opinions on a good approach to handle this type of
> situation?
>
> Lisa Seacat DeLuca
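To make the two approaches concrete, here is a minimal sketch of the guarding logic. The names are illustrative stand-ins, not the actual mobile-spec API: `platform` mimics whatever object reports the platform name, and `camera` stands in for navigator.camera.

```javascript
// Sketch of the two test styles discussed above.
// `platform` and `camera` are stubs for the real mobile-spec environment.
function collectTests(platform, camera) {
  var results = [];

  // Style 1: shared existence check. It never calls the iOS-only
  // implementation, so it is safe to run on every platform -- and a
  // failure flags a platform that still needs to implement the API.
  results.push(['camera.cleanup exists',
                typeof camera.cleanup === 'function']);

  // Style 2: behavioral test wrapped in a platform guard, so it is
  // skipped entirely (not failed) on platforms that are never meant
  // to implement cleanup.
  if (platform.name === 'ios') {
    results.push(['camera.cleanup runs', camera.cleanup() === undefined]);
  }

  return results;
}

// On Android only the existence check runs; on iOS both do.
collectTests({ name: 'android' }, {});
collectTests({ name: 'ios' }, { cleanup: function () {} });
```

The design question in the thread reduces to which style each test should use: shared-and-failing for APIs that should converge across platforms, guarded for APIs that are intentionally platform-specific.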