Github user Fokko commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21596#discussion_r197707147

    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/json/JsonBenchmarks.scala ---
    @@ -25,8 +25,13 @@ import org.apache.spark.util.{Benchmark, Utils}

     /**
      * The benchmarks aims to measure performance of JSON parsing when encoding is set and isn't.
    - * To run this:
    - *   spark-submit --class <this class> --jars <spark sql test jar>
    + * To run:
    + *   mvn clean package -pl sql/core -DskipTests
    + *   ./dev/make-distribution.sh --name local-dist
    + *   cd dist/
    + *   ./bin/spark-submit --class org.apache.spark.sql.execution.datasources.json.JSONBenchmarks \
    + *     ../sql/core/target/spark-sql_2.11-2.4.0-SNAPSHOT-tests.jar > /tmp/output.txt
    --- End diff --

    I agree, but running this benchmark is quite specific and should only happen when something changes in the JSON functionality. I'm not sure you'd want a dedicated build step for it.