[SPARK-26032][PYTHON] Break large sql/tests.py files into smaller files

## What changes were proposed in this pull request?

This is the first official attempt to break up the huge single `tests.py` file. 
I had tried it locally a few times before and gave up for various reasons. As it 
stands, the file makes the unit tests very hard to read and review; even 
scrolling through it is a chore. It is a single file of roughly 7,000 lines!

This is not only a readability issue. Because one big test module takes up most 
of the test time, the tests cannot be fully parallelized - although splitting 
does add the cost of starting and stopping a context per module.

We could pick an existing project as an example to follow. From my 
investigation, the proposed layout is closest to NumPy's structure and looks 
easiest to follow. Please see 
https://github.com/numpy/numpy/tree/master/numpy.

Basically, this PR proposes to break down `pyspark/sql/tests.py` into the layout below:

```bash
pyspark
...
├── sql
...
│   ├── tests  # Includes all tests broken down from 'pyspark/sql/tests.py'.
│   │   │      # Each file matches a module in 'pyspark/sql'. Additionally,
│   │   │      # some logical groups can be added, for instance
│   │   │      # 'test_arrow.py', 'test_datasources.py', ...
│   │   ├── __init__.py
│   │   ├── test_appsubmit.py
│   │   ├── test_arrow.py
│   │   ├── test_catalog.py
│   │   ├── test_column.py
│   │   ├── test_conf.py
│   │   ├── test_context.py
│   │   ├── test_dataframe.py
│   │   ├── test_datasources.py
│   │   ├── test_functions.py
│   │   ├── test_group.py
│   │   ├── test_pandas_udf.py
│   │   ├── test_pandas_udf_grouped_agg.py
│   │   ├── test_pandas_udf_grouped_map.py
│   │   ├── test_pandas_udf_scalar.py
│   │   ├── test_pandas_udf_window.py
│   │   ├── test_readwriter.py
│   │   ├── test_serde.py
│   │   ├── test_session.py
│   │   ├── test_streaming.py
│   │   ├── test_types.py
│   │   ├── test_udf.py
│   │   └── test_utils.py
...
├── testing  # Includes testing utils that can be used in unittests.
│   ├── __init__.py
│   └── sqlutils.py
...
```
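Each split module is self-contained: it imports shared fixtures from 
`pyspark/testing/sqlutils.py` and covers one logical group of tests. A minimal 
stdlib-only sketch of the pattern - the `ReusedFixtureCase` base below is a 
hypothetical stand-in for pyspark's `ReusedSQLTestCase`, which starts and stops 
a shared SparkSession once per module:

```python
import unittest


class ReusedFixtureCase(unittest.TestCase):
    """Hypothetical stand-in for pyspark.testing.sqlutils.ReusedSQLTestCase:
    expensive setup runs once per module, not once per test."""

    @classmethod
    def setUpClass(cls):
        cls.resource = {"started": True}  # e.g. creating a SparkSession

    @classmethod
    def tearDownClass(cls):
        cls.resource = None  # e.g. spark.stop()


class ColumnTests(ReusedFixtureCase):
    def test_uses_shared_fixture(self):
        # Every test in the module reuses the class-level fixture.
        self.assertTrue(self.resource["started"])
```

Each real module also ends with the usual `if __name__ == "__main__": unittest.main()` guard, which is what makes it runnable standalone.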

## How was this patch tested?

Existing tests should cover the changes.

Ran `cd python` and `./run-tests-with-coverage`, and manually checked that the 
tests are actually being run.

Each test module can (unofficially) be run individually via:

```
SPARK_TESTING=1 ./bin/pyspark pyspark.sql.tests.test_pandas_udf_scalar
```

Note that on macOS with Python 3, you might have to set 
`OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES`.
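Under the hood, running a module by its dotted name is standard `unittest` 
loading. A stdlib-only sketch of what a per-module runner does - the 
`run_named_tests` helper and the `"json"` module name below are placeholders; 
the real launcher passes names like `pyspark.sql.tests.test_arrow`:

```python
import unittest


def run_named_tests(dotted_name):
    # Load whatever TestCases the named module defines and run them,
    # mirroring how one split test module is executed at a time.
    suite = unittest.defaultTestLoader.loadTestsFromName(dotted_name)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return result.wasSuccessful()


# 'json' is a stand-in module here; it defines no TestCases, so the
# resulting suite is empty and the run trivially succeeds.
ok = run_named_tests("json")
```

Because each dotted name maps to an independent module, the test framework can hand different modules to different workers, which is what enables the fuller parallelism mentioned above.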

Closes #23021 from HyukjinKwon/SPARK-25344.

Authored-by: hyukjinkwon <gurwls...@apache.org>
Signed-off-by: hyukjinkwon <gurwls...@apache.org>


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/a7a331df
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/a7a331df
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/a7a331df

Branch: refs/heads/master
Commit: a7a331df6e6fbcb181caf2363bffc3e05bdfc009
Parents: f26cd18
Author: hyukjinkwon <gurwls...@apache.org>
Authored: Wed Nov 14 14:51:11 2018 +0800
Committer: hyukjinkwon <gurwls...@apache.org>
Committed: Wed Nov 14 14:51:11 2018 +0800

----------------------------------------------------------------------
 dev/sparktestsupport/modules.py                 |   25 +-
 python/pyspark/sql/tests.py                     | 7079 ------------------
 python/pyspark/sql/tests/__init__.py            |   16 +
 python/pyspark/sql/tests/test_appsubmit.py      |   96 +
 python/pyspark/sql/tests/test_arrow.py          |  399 +
 python/pyspark/sql/tests/test_catalog.py        |  199 +
 python/pyspark/sql/tests/test_column.py         |  157 +
 python/pyspark/sql/tests/test_conf.py           |   55 +
 python/pyspark/sql/tests/test_context.py        |  263 +
 python/pyspark/sql/tests/test_dataframe.py      |  737 ++
 python/pyspark/sql/tests/test_datasources.py    |  170 +
 python/pyspark/sql/tests/test_functions.py      |  278 +
 python/pyspark/sql/tests/test_group.py          |   45 +
 python/pyspark/sql/tests/test_pandas_udf.py     |  216 +
 .../sql/tests/test_pandas_udf_grouped_agg.py    |  503 ++
 .../sql/tests/test_pandas_udf_grouped_map.py    |  530 ++
 .../pyspark/sql/tests/test_pandas_udf_scalar.py |  807 ++
 .../pyspark/sql/tests/test_pandas_udf_window.py |  262 +
 python/pyspark/sql/tests/test_readwriter.py     |  153 +
 python/pyspark/sql/tests/test_serde.py          |  138 +
 python/pyspark/sql/tests/test_session.py        |  320 +
 python/pyspark/sql/tests/test_streaming.py      |  566 ++
 python/pyspark/sql/tests/test_types.py          |  944 +++
 python/pyspark/sql/tests/test_udf.py            |  654 ++
 python/pyspark/sql/tests/test_utils.py          |   54 +
 python/pyspark/testing/__init__.py              |   16 +
 python/pyspark/testing/sqlutils.py              |  268 +
 python/run-tests.py                             |    5 +-
 .../apache/spark/sql/test/ExamplePointUDT.scala |    2 +-
 29 files changed, 7874 insertions(+), 7083 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/a7a331df/dev/sparktestsupport/modules.py
----------------------------------------------------------------------
diff --git a/dev/sparktestsupport/modules.py b/dev/sparktestsupport/modules.py
index a63f9d8..9dbe4e4 100644
--- a/dev/sparktestsupport/modules.py
+++ b/dev/sparktestsupport/modules.py
@@ -333,6 +333,7 @@ pyspark_sql = Module(
         "python/pyspark/sql"
     ],
     python_test_goals=[
+        # doctests
         "pyspark.sql.types",
         "pyspark.sql.context",
         "pyspark.sql.session",
@@ -346,7 +347,29 @@ pyspark_sql = Module(
         "pyspark.sql.streaming",
         "pyspark.sql.udf",
         "pyspark.sql.window",
-        "pyspark.sql.tests",
+        # unittests
+        "pyspark.sql.tests.test_appsubmit",
+        "pyspark.sql.tests.test_arrow",
+        "pyspark.sql.tests.test_catalog",
+        "pyspark.sql.tests.test_column",
+        "pyspark.sql.tests.test_conf",
+        "pyspark.sql.tests.test_context",
+        "pyspark.sql.tests.test_dataframe",
+        "pyspark.sql.tests.test_datasources",
+        "pyspark.sql.tests.test_functions",
+        "pyspark.sql.tests.test_group",
+        "pyspark.sql.tests.test_pandas_udf",
+        "pyspark.sql.tests.test_pandas_udf_grouped_agg",
+        "pyspark.sql.tests.test_pandas_udf_grouped_map",
+        "pyspark.sql.tests.test_pandas_udf_scalar",
+        "pyspark.sql.tests.test_pandas_udf_window",
+        "pyspark.sql.tests.test_readwriter",
+        "pyspark.sql.tests.test_serde",
+        "pyspark.sql.tests.test_session",
+        "pyspark.sql.tests.test_streaming",
+        "pyspark.sql.tests.test_types",
+        "pyspark.sql.tests.test_udf",
+        "pyspark.sql.tests.test_utils",
     ]
 )
 

