On 07/21/2017 08:33 AM, Stefan Hajnoczi wrote:
> On Thu, Jul 20, 2017 at 11:47:27PM -0400, Cleber Rosa wrote:
>> This is a follow up to a previous discussion about reported failures when
>> running some qemu-iotests. It turns out the failures were due to missing
>> libraries, which in turn reflected the host build configuration.
>>
>> This series introduces a tool that can check both host- and target-level
>> build configurations. On top of that, it adds a function to be used
>> in qemu-iotests. Finally, as an example, it sets a test to be skipped
>> if the required feature is not enabled in the host build configuration.
>>
>> Cleber Rosa (3):
>>   scripts: introduce buildconf.py
>>   qemu-iotests: add _require_feature() function
>>   qemu-iotests: require CONFIG_LINUX_AIO for test 087
>>
>>  scripts/buildconf.py         | 278 +++++++++++++++++++++++++++++++++++++++++++
>>  tests/qemu-iotests/087       |   1 +
>>  tests/qemu-iotests/check     |   2 +
>>  tests/qemu-iotests/common.rc |   7 ++
>>  4 files changed, 288 insertions(+)
>>
>
> It should be possible to run iotests against any
> qemu/qemu-img/qemu-io/qemu-nbd binaries - even if no build root is
> available.
>
Yes, I actually overlooked that point.

> How about invoking qemu-img and tools to determine their capabilities?
> Can capabilities be consistently queried?

I would love to not depend on a build root if the same information can
be consistently queried from the binaries themselves.

> At the beginning of ./check, query the qemu/qemu-img/qemu-io/qemu-nbd
> binaries for specific features. This produces a set of available
> features and tests can say:

That would be another ad-hoc mechanism, limited to qemu-iotests. From a
test writer's perspective, what QEMU lacks is a uniform way to
introspect its capabilities.

>   _supported_feature aio_native
>
> This feature can be checked by opening an image file:
>
>   qemu-io --format raw --nocache --native-aio --cmd quit test.img

While the solution I proposed is not cheap in terms of what it runs to
query capabilities (it runs make on every query), it was cheap to
write, it sets a universal standard, and it is mostly maintenance-free.
A key point is that all build configuration (capabilities?) is
predictable and available across all subsystems and all targets.

To be honest, I think your suggestion is terribly expensive in the long
run. In the best-case scenario, it requires one explicit check to be
written for each capability, which at some point may start to look like
a test itself. The capability naming and behavior will probably end up
becoming inconsistent. I feel a lot safer relying on a "capability
statement" as the foundation of tests than writing a number of custom
"capability checks".

But I agree that the build root requirement is an issue. Would
embedding the configured capabilities in the binaries themselves be
acceptable? Something like a standard option such as
`-query-capabilities` or `-debug-build-info` that would basically list
the contents of "config-host.h" and similar files?

Thanks for reviewing the idea and pointing out this important
limitation!
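To make the "capability statement" idea concrete, here is a minimal
sketch of what such a build-configuration check could look like. The
helper names (`parse_config`, `has_feature`) and the sample input are
illustrative only; they are not the actual scripts/buildconf.py API
from the series, just a rough model of reading CONFIG_* flags from a
config-host.mak-style file:

```python
# Hypothetical sketch: check a build-time capability by parsing
# a config-host.mak-style file. Names are illustrative, not the
# real scripts/buildconf.py interface.

def parse_config(text):
    """Parse lines like 'CONFIG_LINUX_AIO=y' into a dict."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and non-assignments
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    return config

def has_feature(config, name):
    """A feature is enabled when its CONFIG_* entry is 'y'."""
    return config.get(name) == "y"

# Fabricated sample content standing in for a real config-host.mak.
sample = """
# Automatically generated by configure
CONFIG_LINUX_AIO=y
CONFIG_VNC=y
CONFIG_XEN=n
"""

conf = parse_config(sample)
print(has_feature(conf, "CONFIG_LINUX_AIO"))  # True
print(has_feature(conf, "CONFIG_XEN"))        # False
```

A test could then skip itself when `has_feature()` returns False, which
is essentially what the proposed `_require_feature()` does on the shell
side for test 087.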
--
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]
[ 7ABB 96EB 8B46 B94D 5E0F E9BB 657E 8D33 A5F2 09F3 ]