[jira] [Comment Edited] (SPARK-22814) JDBC support date/timestamp type as partitionColumn

2019-05-04 Thread Al Johri (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-22814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16833145#comment-16833145
 ] 

Al Johri edited comment on SPARK-22814 at 5/4/19 7:37 PM:
--

Cross-posting my GitHub 
[comment|https://github.com/apache/spark/pull/21834#issuecomment-489357987] 
here: it looks like this feature does not work with PySpark 2.4.0.
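
For reference, the feature merged for 2.4.0 is driven through the JDBC data source options, with the date/timestamp bounds passed as strings. A minimal PySpark sketch of the intended usage (the employees table, hire_date column, and jdbc_url are illustrative placeholders, not taken from this issue):

{code:python}
# Sketch of the Spark 2.4.0 date/timestamp partitioning added by SPARK-22814.
# Assumes an active SparkSession named `spark` and a reachable database with an
# "employees" table holding a DATE column "hire_date"; jdbc_url is a placeholder.
df = (spark.read.format("jdbc")
      .option("url", jdbc_url)
      .option("dbtable", "employees")
      .option("partitionColumn", "hire_date")  # date/timestamp column
      .option("lowerBound", "2000-01-01")      # bounds given as strings
      .option("upperBound", "2020-01-01")
      .option("numPartitions", 10)
      .load())
{code}

Whether the same path is reachable through the DataFrameReader.jdbc() convenience wrapper, whose Python signature appears to coerce the bounds to int, is what the linked comment questions.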


was (Author: al.johri):
Cross-posting my GitHub 
[comment|https://github.com/apache/spark/pull/21834#issuecomment-489357987] 
here: it looks like this feature does not work with PySpark 2.4.0.

> JDBC support date/timestamp type as partitionColumn
> ---
>
> Key: SPARK-22814
> URL: https://issues.apache.org/jira/browse/SPARK-22814
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 1.6.2, 2.2.1
>Reporter: Yuechen Chen
>Assignee: Takeshi Yamamuro
>Priority: Major
> Fix For: 2.4.0
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> In Spark, you can partition MySQL queries by partitionColumn:
> val df = spark.read.jdbc(url = jdbcUrl,
>   table = "employees",
>   columnName = "emp_no",
>   lowerBound = 1L,
>   upperBound = 10L,
>   numPartitions = 100,
>   connectionProperties = connectionProperties)
> display(df)
> But partitionColumn must be a numeric column from the table.
> However, there are many tables that have no primary key but do have
> date/timestamp indexes.
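
For context on what those bounds do: Spark does not use lowerBound/upperBound to filter rows; it only splits [lowerBound, upperBound) into numPartitions stride ranges and issues one JDBC query per range, leaving the first and last partitions open-ended so every row is still read. A conceptual sketch of that predicate generation (illustrative only, not Spark's actual JDBCRelation code; the stride arithmetic is simplified):

{code:python}
# Conceptual sketch of how Spark's JDBC source turns a numeric partitionColumn
# plus bounds into per-partition WHERE clauses. Illustrative only -- not the
# actual JDBCRelation implementation, whose stride rounding differs slightly.
def jdbc_partition_predicates(column, lower, upper, num_partitions):
    stride = max((upper - lower) // num_partitions, 1)
    predicates, bound = [], lower + stride
    for i in range(num_partitions):
        if i == 0:
            # open-ended on the left so rows below lowerBound are kept
            predicates.append(f"{column} < {bound} OR {column} IS NULL")
        elif i == num_partitions - 1:
            # open-ended on the right so rows at or above the last bound are kept
            predicates.append(f"{column} >= {bound - stride}")
        else:
            predicates.append(f"{column} >= {bound - stride} AND {column} < {bound}")
        bound += stride
    return predicates

for p in jdbc_partition_predicates("emp_no", 1, 10, 3):
    print(p)
# emp_no < 4 OR emp_no IS NULL
# emp_no >= 4 AND emp_no < 7
# emp_no >= 7
{code}

Supporting date/timestamp columns then amounts to accepting date/timestamp literals for the bounds and rendering them correctly in these predicates for the target database.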






[jira] [Commented] (SPARK-22814) JDBC support date/timestamp type as partitionColumn

2019-05-04 Thread Al Johri (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-22814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16833145#comment-16833145
 ] 

Al Johri commented on SPARK-22814:
--

Cross-posting my GitHub 
[comment|https://github.com/apache/spark/pull/21834#issuecomment-489357987] 
here: it looks like this feature does not work with PySpark 2.4.0.

> JDBC support date/timestamp type as partitionColumn
> ---
>
> Key: SPARK-22814
> URL: https://issues.apache.org/jira/browse/SPARK-22814
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 1.6.2, 2.2.1
>Reporter: Yuechen Chen
>Assignee: Takeshi Yamamuro
>Priority: Major
> Fix For: 2.4.0
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> In Spark, you can partition MySQL queries by partitionColumn:
> val df = spark.read.jdbc(url = jdbcUrl,
>   table = "employees",
>   columnName = "emp_no",
>   lowerBound = 1L,
>   upperBound = 10L,
>   numPartitions = 100,
>   connectionProperties = connectionProperties)
> display(df)
> But partitionColumn must be a numeric column from the table.
> However, there are many tables that have no primary key but do have
> date/timestamp indexes.






[jira] [Commented] (SPARK-13330) PYTHONHASHSEED is not propagated to Python worker

2017-05-04 Thread Al Johri (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-13330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15996217#comment-15996217
 ] 

Al Johri commented on SPARK-13330:
--

Can this be backported to 2.0 or 2.1? I'm having trouble using Python 3 on 
Spark at the moment. Currently I have to set 
`SPARK_YARN_USER_ENV=PYTHONHASHSEED=0` before running spark-submit. Until 2.2 
is released, would it be best practice to put this variable into spark-env.sh?
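
For anyone hitting the same thing, a possible interim workaround is to pin the seed through Spark configuration rather than spark-env.sh. A sketch, assuming YARN and a 2.x release without the fix (spark.executorEnv.* and spark.yarn.appMasterEnv.* are standard Spark properties, but verify coverage for your deploy mode):

{code:python}
# Pin PYTHONHASHSEED on the executors (and, on YARN cluster mode, the
# application master) so string hashing is deterministic across processes.
# Sketch only -- assumes Spark 2.x on YARN without the SPARK-13330 fix.
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("pythonhashseed-workaround")
        .set("spark.executorEnv.PYTHONHASHSEED", "0")
        .set("spark.yarn.appMasterEnv.PYTHONHASHSEED", "0"))
sc = SparkContext(conf=conf)
{code}

The same two properties can also be passed on the command line with --conf at spark-submit time.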

> PYTHONHASHSEED is not propagated to Python worker
> 
>
> Key: SPARK-13330
> URL: https://issues.apache.org/jira/browse/SPARK-13330
> Project: Spark
>  Issue Type: Bug
>  Components: PySpark
>Affects Versions: 1.6.0
>Reporter: Jeff Zhang
>Assignee: Jeff Zhang
> Fix For: 2.2.0
>
>
> When using Python 3.3+, PYTHONHASHSEED is set only on the driver and is not 
> propagated to the executors, causing the following error:
> {noformat}
>   File "/Users/jzhang/github/spark/python/pyspark/rdd.py", line 74, in 
> portable_hash
> raise Exception("Randomness of hash of string should be disabled via 
> PYTHONHASHSEED")
> Exception: Randomness of hash of string should be disabled via PYTHONHASHSEED
>   at 
> org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
>   at 
> org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
>   at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
>   at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
>   at org.apache.spark.api.python.PairwiseRDD.compute(PythonRDD.scala:342)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:77)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:45)
>   at org.apache.spark.scheduler.Task.run(Task.scala:81)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> {noformat}
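
To see why the seed matters: from Python 3.3 on, str hashes are salted per interpreter process unless PYTHONHASHSEED is fixed, so the same key can hash into different partitions on different executors. A small standalone demonstration, independent of Spark:

{code:python}
import os
import subprocess
import sys

# hash("spark") computed in fresh interpreter processes: salted hashing (the
# Python 3.3+ default) differs per process, while PYTHONHASHSEED=0 is stable --
# the property pyspark's portable_hash checks for.
cmd = [sys.executable, "-c", "print(hash('spark'))"]
salted = {**os.environ, "PYTHONHASHSEED": "random"}
pinned = {**os.environ, "PYTHONHASHSEED": "0"}

print(subprocess.check_output(cmd, env=salted).strip())  # varies per process
print(subprocess.check_output(cmd, env=pinned).strip())  # same every run
print(subprocess.check_output(cmd, env=pinned).strip())
{code}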



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org