[ 
https://issues.apache.org/jira/browse/SPARK-34803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-34803:
------------------------------------

    Assignee: Apache Spark

> Util methods requiring certain versions of Pandas & PyArrow don't pass 
> through the raised ImportError
> -----------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-34803
>                 URL: https://issues.apache.org/jira/browse/SPARK-34803
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 3.1.1
>            Reporter: John Hany
>            Assignee: Apache Spark
>            Priority: Major
>
> When checking whether we can import either {{pandas}} or {{pyarrow}}, we 
> catch any {{ImportError}} and raise an error declaring the minimum version 
> of the respective package that's required to be in the Python environment.
> We don't, however, pass through the {{ImportError}} that might have been 
> thrown by the package itself. Take {{pandas}} as an example: when we call 
> {{import pandas}}, pandas itself might be in the environment, but it can 
> throw an {{ImportError}} 
> [https://github.com/pandas-dev/pandas/blob/0.24.x/pandas/compat/__init__.py#L438]
>  if another package it requires isn't there. This error wouldn't be passed 
> through, and we'd end up with a misleading error message stating that 
> {{pandas}} isn't in the environment, when in fact it is there but something 
> else prevents it from being imported.
> I believe this can be improved by chaining the exceptions, and I'm happy to 
> contribute that change. A minimal sketch of the idea follows.
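> A rough sketch of what the chained check could look like in 
> {{pyspark/sql/pandas/utils.py}} (the helper name and the version number 
> shown here are illustrative, not the exact patch):
> {code:python}
> # Sketch only: chain the original ImportError so a failure inside pandas
> # itself (e.g. a missing transitive dependency) is no longer hidden.
> def require_minimum_pandas_version():
>     minimum_pandas_version = "0.23.2"  # illustrative minimum version
>
>     try:
>         import pandas
>         have_pandas = True
>     except ImportError as error:
>         have_pandas = False
>         raised_error = error
>
>     if not have_pandas:
>         # "from raised_error" preserves the original ImportError's message
>         # and traceback instead of silently replacing it.
>         raise ImportError(
>             "Pandas >= %s must be installed; however, "
>             "it was not found." % minimum_pandas_version
>         ) from raised_error
> {code}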



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
