Github user HyukjinKwon commented on the issue:

    https://github.com/apache/spark/pull/18945
  
    Ah, I should have been clearer. I was thinking of something like ...
    
    ```python
    dtype = {}
    for field in self.schema:
        pandas_type = _to_corrected_pandas_type(field.dataType)
        if pandas_type is not None:
            dtype[field.name] = pandas_type
    
    # Columns with int + nullable from the schema.
    int_null_cols = [
        field.name for field in self.schema
        if field.nullable and isinstance(field.dataType, IntegerType)
    ]

    # Columns with int + nullable that actually contain a None.
    int_null_cols_with_none = set()

    # This generator checks each row and records which int + nullable
    # columns actually hold a None.
    def check_nulls(rows):
        for row in rows:
            for name in int_null_cols:
                if row[name] is None:
                    int_null_cols_with_none.add(name)
            yield row

    # Don't check anything if there are no int + nullable columns.
    if len(int_null_cols) > 0:
        check_func = check_nulls
    else:
        check_func = lambda rows: rows

    pdf = pd.DataFrame.from_records(check_func(self.collect()), columns=self.columns)

    # Replace int32 -> float for the columns that turned out to contain None.
    for name in int_null_cols_with_none:
        dtype[name] = np.float64

    for f, t in dtype.items():
        pdf[f] = pdf[f].astype(t, copy=False)
    return pdf
    ```
    
    So, I was thinking that checking the actual values in the data might be a way to go if we can't resolve this with the schema alone.
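    
    Just to show the motivation with plain pandas (a minimal standalone sketch, not the `toPandas` code, with made-up column names):
    
    ```python
    import numpy as np
    import pandas as pd
    
    # Rows as they might come back from collect(): an int column containing a None.
    rows = [(1, "a"), (None, "b"), (3, "c")]
    pdf = pd.DataFrame.from_records(rows, columns=["id", "name"])
    
    # Once a None is present, pandas can't keep "id" as a plain int dtype, so
    # blindly applying the schema-derived int32 with astype() is unsafe here
    # (depending on the pandas version it either raises or mangles the missing value).
    print(pdf["id"].dtype)
    
    # Falling back to float keeps the missing value as NaN.
    print(pdf["id"].astype(np.float64, copy=False))
    ```
    
    So the schema tells us which columns could hit this, but only the data tells us which ones actually do.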

