Yikun commented on code in PR #36793:
URL: https://github.com/apache/spark/pull/36793#discussion_r894971731


##########
python/pyspark/sql/session.py:
##########
@@ -952,12 +953,29 @@ def createDataFrame(  # type: ignore[misc]
             schema = [x.encode("utf-8") if not isinstance(x, str) else x for x in schema]
 
         try:
-            import pandas
+            import pandas as pd
 
             has_pandas = True
         except Exception:
             has_pandas = False
-        if has_pandas and isinstance(data, pandas.DataFrame):
+
+        try:
+            import numpy as np
+
+            has_numpy = True
+        except Exception:
+            has_numpy = False
+
+        if has_numpy and isinstance(data, np.ndarray):
+            from pyspark.sql.pandas.utils import require_minimum_pandas_version
+
+            require_minimum_pandas_version()
+            if data.ndim not in [1, 2]:
+                raise ValueError("NumPy array input should be of 1 or 2 dimensions.")
+            column_names = ["value"] if data.ndim == 1 else ["_1", "_2"]
+            data = pd.DataFrame(data, columns=column_names)

Review Comment:
   question: For other NumPy types supported in the future, is the plan also to convert to a pandas DataFrame first, i.e. numpy --> pandas --> spark df?
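As context for the question, the numpy --> pandas --> spark df path in the diff can be sketched without Spark itself. The helper name `ndarray_to_pdf` is hypothetical, and the column naming generalizes the diff's hard-coded `["_1", "_2"]` to any column count:

```python
import numpy as np
import pandas as pd


def ndarray_to_pdf(data: np.ndarray) -> pd.DataFrame:
    """Sketch of the numpy -> pandas step that createDataFrame would reuse."""
    # Mirrors the diff: only 1-D and 2-D arrays are accepted.
    if data.ndim not in [1, 2]:
        raise ValueError("NumPy array input should be of 1 or 2 dimensions.")
    # 1-D arrays get a single "value" column; 2-D arrays get "_1", "_2", ...
    if data.ndim == 1:
        column_names = ["value"]
    else:
        column_names = ["_%d" % (i + 1) for i in range(data.shape[1])]
    return pd.DataFrame(data, columns=column_names)


pdf1 = ndarray_to_pdf(np.array([1, 2, 3]))
pdf2 = ndarray_to_pdf(np.array([[1, 2], [3, 4]]))
print(list(pdf1.columns))  # ['value']
print(list(pdf2.columns))  # ['_1', '_2']
```

The resulting pandas DataFrame then flows into the existing `has_pandas and isinstance(data, pd.DataFrame)` branch, so any future NumPy type would likely follow the same two-step path.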



##########
python/pyspark/sql/session.py:
##########
@@ -952,12 +953,29 @@ def createDataFrame(  # type: ignore[misc]
             schema = [x.encode("utf-8") if not isinstance(x, str) else x for x in schema]
 
         try:
-            import pandas
+            import pandas as pd
 
             has_pandas = True
         except Exception:
             has_pandas = False
-        if has_pandas and isinstance(data, pandas.DataFrame):
+
+        try:
+            import numpy as np
+
+            has_numpy = True
+        except Exception:
+            has_numpy = False
+
+        if has_numpy and isinstance(data, np.ndarray):
+            from pyspark.sql.pandas.utils import require_minimum_pandas_version
+
+            require_minimum_pandas_version()

Review Comment:
   nit: If only numpy is installed but pandas is not, this will only raise a "pandas is not installed" error. Users may be confused about why they need to install pandas when they only want to use numpy.
   
   So maybe raise a clear exception before this point telling users that NumPy input is first converted to a pandas DataFrame in PySpark, so pandas is required, like:
   
   ```python
   if not has_pandas:
       ...  # warn or raise a friendly exception
   
   from pyspark.sql.pandas.utils import require_minimum_pandas_version
   require_minimum_pandas_version()
   ```
   
   or at least add a note here.
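A minimal, self-contained sketch of the friendly exception suggested above; the function name `require_pandas_for_numpy` and the message wording are hypothetical, not part of the PR:

```python
def require_pandas_for_numpy(has_pandas: bool) -> None:
    """Explain why pandas is needed even when the caller only passes NumPy data."""
    if not has_pandas:
        # NumPy input is converted to a pandas DataFrame inside createDataFrame,
        # so pandas is required even for pure-NumPy callers.
        raise ImportError(
            "pandas is required to create a DataFrame from a NumPy array, "
            "because the array is first converted to a pandas DataFrame."
        )


try:
    require_pandas_for_numpy(False)
except ImportError as e:
    print(e)
```

Raising this before `require_minimum_pandas_version()` would make the failure mode self-explanatory for NumPy-only users.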
   



##########
python/pyspark/sql/session.py:
##########
@@ -952,12 +953,29 @@ def createDataFrame(  # type: ignore[misc]
             schema = [x.encode("utf-8") if not isinstance(x, str) else x for x in schema]
 
         try:
-            import pandas
+            import pandas as pd
 
             has_pandas = True
         except Exception:
             has_pandas = False
-        if has_pandas and isinstance(data, pandas.DataFrame):
+
+        try:
+            import numpy as np

Review Comment:
   Should we add `numpy` as a requirement in [setup.py](https://github.com/apache/spark/blob/master/python/setup.py#L266-L269), and mention it in the dependencies doc?
   
   [1] https://github.com/apache/spark/blob/master/python/setup.py#L266-L269
   [2] https://dist.apache.org/repos/dist/dev/spark/v3.3.0-rc1-docs/_site/api/python/getting_started/install.html#dependencies
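A rough sketch of what the suggestion could look like in the `extras_require` section of setup.py; the constant names mirror setup.py's style, but the numpy version pin shown here is hypothetical, chosen only to illustrate the shape of the change:

```python
# Hypothetical sketch; the real constants and versions live in python/setup.py.
_minimum_pandas_version = "1.0.5"
_minimum_pyarrow_version = "1.0.0"
_minimum_numpy_version = "1.15"  # hypothetical pin for illustration

extras_require = {
    "sql": [
        "pandas>=%s" % _minimum_pandas_version,
        "pyarrow>=%s" % _minimum_pyarrow_version,
        "numpy>=%s" % _minimum_numpy_version,
    ],
}

print(extras_require["sql"])
```

The dependencies page linked in [2] would then list numpy alongside pandas and pyarrow as an optional SQL dependency.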



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

