[SPARK-26032][PYTHON] Break large sql/tests.py files into smaller files

## What changes were proposed in this pull request?

This is the first official attempt to break up the huge single `tests.py` file. I tried this locally a few times before and gave up for various reasons. Currently the file makes the unit tests really hard to read and difficult to check; it is a single file of about 7,000 lines, and even scrolling through it is a bother. This is not only a readability issue: since one big test module takes most of the test time, the tests cannot fully run in parallel, although splitting does add the cost of starting and stopping the context per module.

We could pick one example and follow it. Based on my investigation, the structure below is close to NumPy's and is easier to follow; see https://github.com/numpy/numpy/tree/master/numpy.

Basically, this PR proposes to break down `pyspark/sql/tests.py` as follows:

```bash
pyspark
...
├── sql
...
│   ├── tests  # Includes all tests broken down from 'pyspark/sql/tests.py'.
│   │   │      # Each file matches a module in 'pyspark/sql'. Additionally, some logical
│   │   │      # groups can be added, for instance 'test_arrow.py', 'test_datasources.py', ...
│   │   ├── __init__.py
│   │   ├── test_appsubmit.py
│   │   ├── test_arrow.py
│   │   ├── test_catalog.py
│   │   ├── test_column.py
│   │   ├── test_conf.py
│   │   ├── test_context.py
│   │   ├── test_dataframe.py
│   │   ├── test_datasources.py
│   │   ├── test_functions.py
│   │   ├── test_group.py
│   │   ├── test_pandas_udf.py
│   │   ├── test_pandas_udf_grouped_agg.py
│   │   ├── test_pandas_udf_grouped_map.py
│   │   ├── test_pandas_udf_scalar.py
│   │   ├── test_pandas_udf_window.py
│   │   ├── test_readwriter.py
│   │   ├── test_serde.py
│   │   ├── test_session.py
│   │   ├── test_streaming.py
│   │   ├── test_types.py
│   │   ├── test_udf.py
│   │   └── test_utils.py
...
└── testing  # Includes testing utils that can be used in unittests.
    ├── __init__.py
    └── sqlutils.py
...
```

## How was this patch tested?

Existing tests should cover this. Run `cd python` and then `./run-tests-with-coverage`. I manually checked that the split tests are actually being run.
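Each split file becomes a small, self-contained unittest module. The following is a hedged sketch of what such a module's skeleton might look like; the name `ReusedSessionTestCase` is a hypothetical stand-in (the real files share a base class from `pyspark/testing/sqlutils.py`), and plain `unittest` is used here so the sketch runs without a Spark installation.

```python
# Hypothetical sketch of a split-out test module such as
# 'pyspark/sql/tests/test_column.py'. The real files reuse a shared
# base class from 'pyspark/testing/sqlutils.py' so that each module
# starts and stops a single session for all of its tests; a plain
# unittest stand-in is used here so the sketch runs without Spark.
import unittest


class ReusedSessionTestCase(unittest.TestCase):
    """Hypothetical stand-in for a base class that creates one
    session per test module."""

    @classmethod
    def setUpClass(cls):
        # In the real sqlutils base this would build a SparkSession once,
        # amortizing the startup cost mentioned in the description.
        cls.session = object()

    @classmethod
    def tearDownClass(cls):
        cls.session = None


class ColumnTests(ReusedSessionTestCase):
    def test_session_is_shared(self):
        # Every test in the module sees the session built in setUpClass.
        self.assertIsNotNone(self.session)
```

One session per module is what lets the split suites run in parallel across processes while paying the context start/stop cost only once per file.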
Each test module can also be run individually (not officially supported) via:

```
SPARK_TESTING=1 ./bin/pyspark pyspark.sql.tests.test_pandas_udf_scalar
```

Note that if you're on macOS with Python 3, you might have to set `OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES`.

Closes #23021 from HyukjinKwon/SPARK-25344.

Authored-by: hyukjinkwon <gurwls...@apache.org>
Signed-off-by: hyukjinkwon <gurwls...@apache.org>

Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/a7a331df
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/a7a331df
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/a7a331df

Branch: refs/heads/master
Commit: a7a331df6e6fbcb181caf2363bffc3e05bdfc009
Parents: f26cd18
Author: hyukjinkwon <gurwls...@apache.org>
Authored: Wed Nov 14 14:51:11 2018 +0800
Committer: hyukjinkwon <gurwls...@apache.org>
Committed: Wed Nov 14 14:51:11 2018 +0800

----------------------------------------------------------------------
 dev/sparktestsupport/modules.py                 |   25 +-
 python/pyspark/sql/tests.py                     | 7079 ------------------
 python/pyspark/sql/tests/__init__.py            |   16 +
 python/pyspark/sql/tests/test_appsubmit.py      |   96 +
 python/pyspark/sql/tests/test_arrow.py          |  399 +
 python/pyspark/sql/tests/test_catalog.py        |  199 +
 python/pyspark/sql/tests/test_column.py         |  157 +
 python/pyspark/sql/tests/test_conf.py           |   55 +
 python/pyspark/sql/tests/test_context.py        |  263 +
 python/pyspark/sql/tests/test_dataframe.py      |  737 ++
 python/pyspark/sql/tests/test_datasources.py    |  170 +
 python/pyspark/sql/tests/test_functions.py      |  278 +
 python/pyspark/sql/tests/test_group.py          |   45 +
 python/pyspark/sql/tests/test_pandas_udf.py     |  216 +
 .../sql/tests/test_pandas_udf_grouped_agg.py    |  503 ++
 .../sql/tests/test_pandas_udf_grouped_map.py    |  530 ++
 .../pyspark/sql/tests/test_pandas_udf_scalar.py |  807 ++
 .../pyspark/sql/tests/test_pandas_udf_window.py |  262 +
 python/pyspark/sql/tests/test_readwriter.py     |  153 +
 python/pyspark/sql/tests/test_serde.py          |  138 +
 python/pyspark/sql/tests/test_session.py        |  320 +
 python/pyspark/sql/tests/test_streaming.py      |  566 ++
 python/pyspark/sql/tests/test_types.py          |  944 +++
 python/pyspark/sql/tests/test_udf.py            |  654 ++
 python/pyspark/sql/tests/test_utils.py          |   54 +
 python/pyspark/testing/__init__.py              |   16 +
 python/pyspark/testing/sqlutils.py              |  268 +
 python/run-tests.py                             |    5 +-
 .../apache/spark/sql/test/ExamplePointUDT.scala |    2 +-
 29 files changed, 7874 insertions(+), 7083 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/spark/blob/a7a331df/dev/sparktestsupport/modules.py
----------------------------------------------------------------------
diff --git a/dev/sparktestsupport/modules.py b/dev/sparktestsupport/modules.py
index a63f9d8..9dbe4e4 100644
--- a/dev/sparktestsupport/modules.py
+++ b/dev/sparktestsupport/modules.py
@@ -333,6 +333,7 @@ pyspark_sql = Module(
         "python/pyspark/sql"
     ],
     python_test_goals=[
+        # doctests
         "pyspark.sql.types",
         "pyspark.sql.context",
         "pyspark.sql.session",
@@ -346,7 +347,29 @@ pyspark_sql = Module(
         "pyspark.sql.streaming",
         "pyspark.sql.udf",
         "pyspark.sql.window",
-        "pyspark.sql.tests",
+        # unittests
+        "pyspark.sql.tests.test_appsubmit",
+        "pyspark.sql.tests.test_arrow",
+        "pyspark.sql.tests.test_catalog",
+        "pyspark.sql.tests.test_column",
+        "pyspark.sql.tests.test_conf",
+        "pyspark.sql.tests.test_context",
+        "pyspark.sql.tests.test_dataframe",
+        "pyspark.sql.tests.test_datasources",
+        "pyspark.sql.tests.test_functions",
+        "pyspark.sql.tests.test_group",
+        "pyspark.sql.tests.test_pandas_udf",
+        "pyspark.sql.tests.test_pandas_udf_grouped_agg",
+        "pyspark.sql.tests.test_pandas_udf_grouped_map",
+        "pyspark.sql.tests.test_pandas_udf_scalar",
+        "pyspark.sql.tests.test_pandas_udf_window",
+        "pyspark.sql.tests.test_readwriter",
+        "pyspark.sql.tests.test_serde",
+        "pyspark.sql.tests.test_session",
+        "pyspark.sql.tests.test_streaming",
+        "pyspark.sql.tests.test_types",
+        "pyspark.sql.tests.test_udf",
+        "pyspark.sql.tests.test_utils",
     ]
 )
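The new `python_test_goals` entries are simply the dotted module names of the split files: the mapping from a test file path under `python/` to its goal name is mechanical. The helper below is hypothetical (not part of the patch) and just illustrates that mapping:

```python
# Hypothetical helper (not part of the patch) showing how a test file
# path under 'python/' maps to the dotted module name listed in
# python_test_goals in dev/sparktestsupport/modules.py.
from pathlib import PurePosixPath


def path_to_test_goal(path):
    """Convert e.g. 'python/pyspark/sql/tests/test_arrow.py'
    to 'pyspark.sql.tests.test_arrow'."""
    # Strip the '.py' suffix, then drop the leading 'python/' source root
    # and join the remaining path components with dots.
    parts = PurePosixPath(path).with_suffix("").parts
    return ".".join(parts[1:])


print(path_to_test_goal("python/pyspark/sql/tests/test_arrow.py"))
# -> pyspark.sql.tests.test_arrow
```

Keeping the goals one-per-file is what allows the test runner to schedule each module as an independent, parallelizable unit instead of one monolithic `pyspark.sql.tests` goal.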