spark git commit: [SPARK-22980][PYTHON][SQL] Clarify the length of each series is of each batch within scalar Pandas UDF

2018-01-12 Thread gurwls223
Repository: spark
Updated Branches:
  refs/heads/branch-2.3 60bcb4685 -> ca27d9cb5


[SPARK-22980][PYTHON][SQL] Clarify the length of each series is of each batch 
within scalar Pandas UDF

## What changes were proposed in this pull request?

This PR proposes to add a note saying that the length of a scalar Pandas UDF's 
`Series` is not the length of the whole input column but of each internal batch.

This is fine for a group map UDF, whose usage differs from a typical UDF, but 
scalar UDFs might be confused with normal row-at-a-time UDFs.

For example, consider the following:

```python
from pyspark.sql.functions import pandas_udf, col, lit
from pyspark.sql.types import LongType

df = spark.range(1)
f = pandas_udf(lambda x, y: len(x) + y, LongType())
df.select(f(lit('text'), col('id'))).show()
```

```
+------------------+
|<lambda>(text, id)|
+------------------+
|                 1|
+------------------+
```

```python
from pyspark.sql.functions import udf, col, lit

df = spark.range(1)
f = udf(lambda x, y: len(x) + y, "long")
df.select(f(lit('text'), col('id'))).show()
```

```
+------------------+
|<lambda>(text, id)|
+------------------+
|                 4|
+------------------+
```
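The difference above can be sketched without Spark: a scalar Pandas UDF receives `pandas.Series` arguments spanning one internal batch, so `len(x)` is the batch length (one row here), while a plain `udf` receives one Python value per row, so `len(x)` is the string length of `'text'` (4). A minimal pandas-only sketch of the two calling conventions (the batch construction is illustrative, not Spark's actual internals):

```python
import pandas as pd

# The same function body as in the examples above.
udf_body = lambda x, y: len(x) + y

# Scalar Pandas UDF semantics: arguments are pandas.Series covering one batch.
x_batch = pd.Series(['text'])  # lit('text') broadcast over the batch
y_batch = pd.Series([0])       # the single 'id' value from spark.range(1)
print(udf_body(x_batch, y_batch).tolist())  # [1]: len(x) is the batch length

# Normal UDF semantics: arguments are one Python value per row.
print(udf_body('text', 0))                  # 4: len(x) is the string length
```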

## How was this patch tested?

Manually built the doc and checked the output.

Author: hyukjinkwon 

Closes #20237 from HyukjinKwon/SPARK-22980.

(cherry picked from commit cd9f49a2aed3799964976ead06080a0f7044a0c3)
Signed-off-by: hyukjinkwon 


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/ca27d9cb
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/ca27d9cb
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/ca27d9cb

Branch: refs/heads/branch-2.3
Commit: ca27d9cb5e30b6a50a4c8b7d10ac28f4f51d44ee
Parents: 60bcb46
Author: hyukjinkwon 
Authored: Sat Jan 13 16:13:44 2018 +0900
Committer: hyukjinkwon 
Committed: Sat Jan 13 16:13:57 2018 +0900

--
 python/pyspark/sql/functions.py | 5 +
 1 file changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/ca27d9cb/python/pyspark/sql/functions.py
--
diff --git a/python/pyspark/sql/functions.py b/python/pyspark/sql/functions.py
index 733e32b..e1ad659 100644
--- a/python/pyspark/sql/functions.py
+++ b/python/pyspark/sql/functions.py
@@ -2184,6 +2184,11 @@ def pandas_udf(f=None, returnType=None, functionType=None):
| 8|  JOHN DOE|  22|
+--+--++
 
+   .. note:: The length of `pandas.Series` within a scalar UDF is not that of the whole input
+       column, but is the length of an internal batch used for each call to the function.
+       Therefore, this can be used, for example, to ensure the length of each returned
+       `pandas.Series`, and can not be used as the column length.
+
 2. GROUP_MAP
 
A group map UDF defines transformation: A `pandas.DataFrame` -> A `pandas.DataFrame`
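To make the added note concrete, here is a small pandas-only sketch of batched invocation (the `apply_scalar_udf` helper and the batch size are hypothetical, simulating Spark's internal batching rather than using its actual machinery):

```python
import pandas as pd

def apply_scalar_udf(func, column, batch_size):
    # Simulates, in plain pandas, how a column is fed to a scalar Pandas
    # UDF: one call per internal batch, with results concatenated back.
    batches = [column[i:i + batch_size] for i in range(0, len(column), batch_size)]
    return pd.concat([func(b) for b in batches], ignore_index=True)

col_values = pd.Series(range(10))
# Inside the UDF, len(x) is the batch length, never the full column length.
seen_lengths = apply_scalar_udf(
    lambda x: pd.Series([len(x)] * len(x)), col_values, batch_size=3)
print(seen_lengths.tolist())  # [3, 3, 3, 3, 3, 3, 3, 3, 3, 1], not 10
```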


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
