There is currently an issue open for this in the calcite-test-dataset repository[1]; however, I would like to hear more from the wider community about it.

I have created a `switch-to-docker` branch on my fork and committed a `docker-compose.yml` under the `docker` folder, but I ran into a few roadblocks and didn't have any more time to investigate.

I am currently investigating using Docker Compose to orchestrate and set up the containers.
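For reference, here is a minimal sketch of the kind of compose file I have in mind. This is illustrative only, not the contents of the committed file: the service names, image tags, credentials, database name, and port mappings are all assumptions on my part.

```yaml
# Sketch only -- images, credentials, and ports are assumptions,
# not what is committed on the switch-to-docker branch.
version: "3"
services:
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: calcite   # assumed test credentials
      MYSQL_DATABASE: foodmart       # assumed test schema name
    ports:
      - "3306:3306"
  postgres:
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: calcite
      POSTGRES_DB: foodmart
    ports:
      - "5432:5432"
  # Geode is deliberately omitted here until question 1 below
  # (how to start it and load the JSON data) is resolved.
```

With something like that in place, bringing the databases up would just be a `docker-compose up -d` from the `docker` folder, which also relates to question 3 below.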

Questions:

1. I am not very familiar with Apache Geode. I was able to start the server and locator using the official Docker image, but there does not appear to be any way to import data. In the current repository, there is some Java code in `geode-standalone-cluster`. Why do/did we need to write Java code to stand up a Geode cluster? Does anyone know of any standalone tools (preferably something with prebuilt binaries) that we could use to ingest the JSON data directly?

2. From my reading of the integration test instructions[2], calcite-test-dataset spins up a VM with databases preloaded with data, which the main Calcite repository runs its tests against. HSQLDB and H2 do not have any open ports in the VM that is spun up. How does Calcite run tests against HSQLDB and H2?

3. What is the role of Maven in the calcite-test-dataset repository? I see a lot of POMs in various subfolders such as mysql, postgresql, etc., but I am not sure what they do. If Maven is used to spin up the VM, perhaps we could remove the dependency on it and just run `docker-compose up` to start the network of containers.

4. Is there any interest in bringing the contents of calcite-test-dataset directly into the Calcite repository? The repository zips up to about 1.5 MB, so it should not add too much bloat to the Calcite repo.

Francis

[1] https://github.com/vlsi/calcite-test-dataset/issues/8

[2] https://calcite.apache.org/docs/howto.html#running-integration-tests
