[jira] [Resolved] (SPARK-23840) PySpark error when converting a DataFrame to rdd

2018-04-29 Thread Hyukjin Kwon (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hyukjin Kwon resolved SPARK-23840.
--
Resolution: Invalid

I am leaving this resolved. It sounds like there's no way to go further without more 
information.

> PySpark error when converting a DataFrame to rdd
> 
>
> Key: SPARK-23840
> URL: https://issues.apache.org/jira/browse/SPARK-23840
> Project: Spark
>  Issue Type: Bug
>  Components: PySpark
>Affects Versions: 2.3.0
>Reporter: Uri Goren
>Priority: Major
>
> I am running code in the `pyspark` shell on an `emr` cluster, and 
> encountering an error I have never seen before...
> This line works:
> spark.read.parquet(s3_input).take(99)
> While this line causes an exception:
> spark.read.parquet(s3_input).rdd.take(99)
> With
> > TypeError: 'int' object is not iterable
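A minimal PySpark sketch along these lines may help narrow such a report down (the S3 
path and the per-column loop are illustrative, not from the original report); `.rdd` 
forces each row through Python-side conversion, so checking columns one at a time can 
show whether a specific column triggers the TypeError:

{code:python}
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
s3_input = "s3://some-bucket/some-prefix/"  # placeholder path

df = spark.read.parquet(s3_input)

# Reported to work: rows are fetched through the DataFrame API.
df.take(99)

# Reported to fail with "TypeError: 'int' object is not iterable":
# df.rdd.take(99)

# Checking one column at a time may show which column, if any, fails
# during the Python-side row conversion that .rdd triggers.
for field in df.schema.fields:
    try:
        df.select(field.name).rdd.take(99)
    except Exception as err:
        print("column %s (%s): %s" % (field.name, field.dataType, err))
{code}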






[jira] [Commented] (SPARK-24119) Add interpreted execution to SortPrefix expression

2018-04-29 Thread Kazuaki Ishizaki (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-24119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16458297#comment-16458297
 ] 

Kazuaki Ishizaki commented on SPARK-24119:
--

It seems to make sense.

It would be good to set this JIRA as a subtask of SPARK-23580.

> Add interpreted execution to SortPrefix expression
> --
>
> Key: SPARK-24119
> URL: https://issues.apache.org/jira/browse/SPARK-24119
> Project: Spark
>  Issue Type: Task
>  Components: SQL
>Affects Versions: 2.3.0
>Reporter: Bruce Robbins
>Priority: Minor
>
> [~hvanhovell] [~kiszk]
> I noticed SortPrefix did not support interpreted execution when I was testing 
> the PR for SPARK-24043. Somehow it was not covered by the umbrella Jira for 
> adding interpreted execution (SPARK-23580).
> Since I had to implement interpreted execution for SortPrefix to complete 
> testing, I am creating this Jira. If there's no good reason why eval wasn't 
> implemented, I will make the PR in a few days.
>  
>  






[jira] [Created] (SPARK-24120) Show `Jobs` page when `jobId` is missing

2018-04-29 Thread Jongyoul Lee (JIRA)
Jongyoul Lee created SPARK-24120:


 Summary: Show `Jobs` page when `jobId` is missing
 Key: SPARK-24120
 URL: https://issues.apache.org/jira/browse/SPARK-24120
 Project: Spark
  Issue Type: Improvement
  Components: Web UI
Affects Versions: 2.3.0
Reporter: Jongyoul Lee


Currently, when users try to open the {{job}} page without a {{jobid}}, the Spark UI 
shows only an error page. This is not incorrect, but it is unhelpful to users. It would 
be better to redirect to the `jobs` page so users can select the proper job. This 
actually happens when users run in YARN mode: because of a YARN bug (YARN-6615), some 
parameters aren't passed to Spark's driver UI, even with the latest version of YARN. 
It's also mentioned in SPARK-20772.






[jira] [Resolved] (SPARK-23846) samplingRatio for schema inferring of CSV datasource

2018-04-29 Thread Hyukjin Kwon (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hyukjin Kwon resolved SPARK-23846.
--
   Resolution: Fixed
Fix Version/s: 2.4.0

Issue resolved by pull request 20959
[https://github.com/apache/spark/pull/20959]

> samplingRatio for schema inferring of CSV datasource
> 
>
> Key: SPARK-23846
> URL: https://issues.apache.org/jira/browse/SPARK-23846
> Project: Spark
>  Issue Type: New Feature
>  Components: SQL
>Affects Versions: 2.3.0
>Reporter: Maxim Gekk
>Assignee: Maxim Gekk
>Priority: Major
> Fix For: 2.4.0
>
>
> The JSON datasource has the `samplingRatio` option, which allows reducing the amount 
> of data loaded for schema inference. It would be useful to have the same option for 
> the CSV datasource.
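For reference, a short sketch of how the option would read on both datasources once 
this is in (the paths and ratio are illustrative, and the CSV option name is assumed to 
match the JSON one, as the issue title suggests):

{code:python}
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Existing behaviour: JSON schema inference can sample a fraction of the input.
json_df = spark.read.option("samplingRatio", 0.1).json("/tmp/events.json")

# Added by this issue: the same option for CSV schema inference,
# which only matters when inferSchema is enabled.
csv_df = (spark.read
          .option("header", True)
          .option("inferSchema", True)
          .option("samplingRatio", 0.1)
          .csv("/tmp/events.csv"))

csv_df.printSchema()
{code}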






[jira] [Assigned] (SPARK-23846) samplingRatio for schema inferring of CSV datasource

2018-04-29 Thread Hyukjin Kwon (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-23846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hyukjin Kwon reassigned SPARK-23846:


Assignee: Maxim Gekk

> samplingRatio for schema inferring of CSV datasource
> 
>
> Key: SPARK-23846
> URL: https://issues.apache.org/jira/browse/SPARK-23846
> Project: Spark
>  Issue Type: New Feature
>  Components: SQL
>Affects Versions: 2.3.0
>Reporter: Maxim Gekk
>Assignee: Maxim Gekk
>Priority: Major
> Fix For: 2.4.0
>
>
> The JSON datasource has the `samplingRatio` option, which allows reducing the amount 
> of data loaded for schema inference. It would be useful to have the same option for 
> the CSV datasource.






[jira] [Created] (SPARK-24119) Add interpreted execution to SortPrefix expression

2018-04-29 Thread Bruce Robbins (JIRA)
Bruce Robbins created SPARK-24119:
-

 Summary: Add interpreted execution to SortPrefix expression
 Key: SPARK-24119
 URL: https://issues.apache.org/jira/browse/SPARK-24119
 Project: Spark
  Issue Type: Task
  Components: SQL
Affects Versions: 2.3.0
Reporter: Bruce Robbins


[~hvanhovell] [~kiszk]

I noticed SortPrefix did not support interpreted execution when I was testing 
the PR for SPARK-24043. Somehow it was not covered by the umbrella Jira for 
adding interpreted execution (SPARK-23580).

Since I had to implement interpreted execution for SortPrefix to complete 
testing, I am creating this Jira. If there's no good reason why eval wasn't 
implemented, I will make the PR in a few days.

 

 






[jira] [Commented] (SPARK-24118) Support lineSep format independent from encoding

2018-04-29 Thread Apache Spark (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-24118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16458143#comment-16458143
 ] 

Apache Spark commented on SPARK-24118:
--

User 'MaxGekk' has created a pull request for this issue:
https://github.com/apache/spark/pull/21192

> Support lineSep format independent from encoding
> 
>
> Key: SPARK-24118
> URL: https://issues.apache.org/jira/browse/SPARK-24118
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.3.0
>Reporter: Maxim Gekk
>Priority: Major
>
> Currently, the lineSep option of the JSON datasource depends on the encoding. It is 
> impossible, for example, to define a correct lineSep for JSON files with a BOM in 
> UTF-16 or UTF-32 encoding. We need to propose a lineSep format that represents a 
> sequence of octets (bytes) and is independent of the encoding.
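To make the problem concrete, here is a small pure-Python illustration (no Spark API 
calls, just the encodings) of why a character-level lineSep is ambiguous for UTF-16 and 
UTF-32 input:

{code:python}
# Two JSON records separated by "\n", encoded in UTF-16 variants.
records = '{"id": 1}\n{"id": 2}'

be = records.encode("utf-16-be")   # big-endian, no BOM
le = records.encode("utf-16-le")   # little-endian, no BOM
bom = records.encode("utf-16")     # BOM prepended

print(be[18:20])   # b'\x00\n' - the "\n" separator as big-endian octets
print(le[18:20])   # b'\n\x00' - the same separator as little-endian octets
print(bom[:2])     # the BOM, which also shifts every later offset by two bytes

# The same character-level separator maps to different octet sequences
# depending on encoding and byte order, which is why this ticket proposes
# a lineSep format expressed directly as a sequence of octets.
{code}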






[jira] [Assigned] (SPARK-24118) Support lineSep format independent from encoding

2018-04-29 Thread Apache Spark (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-24118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-24118:


Assignee: (was: Apache Spark)

> Support lineSep format independent from encoding
> 
>
> Key: SPARK-24118
> URL: https://issues.apache.org/jira/browse/SPARK-24118
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.3.0
>Reporter: Maxim Gekk
>Priority: Major
>
> Currently, the lineSep option of the JSON datasource depends on the encoding. It is 
> impossible, for example, to define a correct lineSep for JSON files with a BOM in 
> UTF-16 or UTF-32 encoding. We need to propose a lineSep format that represents a 
> sequence of octets (bytes) and is independent of the encoding.






[jira] [Assigned] (SPARK-24118) Support lineSep format independent from encoding

2018-04-29 Thread Apache Spark (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-24118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-24118:


Assignee: Apache Spark

> Support lineSep format independent from encoding
> 
>
> Key: SPARK-24118
> URL: https://issues.apache.org/jira/browse/SPARK-24118
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.3.0
>Reporter: Maxim Gekk
>Assignee: Apache Spark
>Priority: Major
>
> Currently, the lineSep option of the JSON datasource depends on the encoding. It is 
> impossible, for example, to define a correct lineSep for JSON files with a BOM in 
> UTF-16 or UTF-32 encoding. We need to propose a lineSep format that represents a 
> sequence of octets (bytes) and is independent of the encoding.






[jira] [Created] (SPARK-24118) Support lineSep format independent from encoding

2018-04-29 Thread Maxim Gekk (JIRA)
Maxim Gekk created SPARK-24118:
--

 Summary: Support lineSep format independent from encoding
 Key: SPARK-24118
 URL: https://issues.apache.org/jira/browse/SPARK-24118
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 2.3.0
Reporter: Maxim Gekk


Currently, the lineSep option of the JSON datasource depends on the encoding. It is 
impossible, for example, to define a correct lineSep for JSON files with a BOM in 
UTF-16 or UTF-32 encoding. We need to propose a lineSep format that represents a 
sequence of octets (bytes) and is independent of the encoding.






[jira] [Commented] (SPARK-22338) namedtuple serialization is inefficient

2018-04-29 Thread Sergei Lebedev (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-22338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16458128#comment-16458128
 ] 

Sergei Lebedev commented on SPARK-22338:


[This|https://github.com/apache/spark/pull/21180] PR fixes the issue for 
namedtuples defined in modules (as opposed to the ones defined inside functions 
or in the REPL).

> namedtuple serialization is inefficient
> ---
>
> Key: SPARK-22338
> URL: https://issues.apache.org/jira/browse/SPARK-22338
> Project: Spark
>  Issue Type: Improvement
>  Components: PySpark
>Affects Versions: 2.2.0
>Reporter: Joel Croteau
>Priority: Minor
>
> I greatly appreciate the level of hack that PySpark contains in order to make 
> namedtuples serializable, but I feel like it could be done a little better. 
> In particular, say I create a namedtuple class with a few long argument names 
> like this:
> {code:JiraShouldReallySupportPython}
> MyTuple = namedtuple('MyTuple', ('longarga', 'longargb', 'longargc'))
> {code}
> If I create an instance of this, here is how PySpark serializes it:
> {code:JiraShouldReallySupportPython}
> mytuple = MyTuple(1, 2, 3)
> pickle.dumps(mytuple, pickle.HIGHEST_PROTOCOL)
> b'\x80\x04\x95]\x00\x00\x00\x00\x00\x00\x00\x8c\x13pyspark.serializers\x94\x8c\x08_restore\x94\x93\x94\x8c\x07MyTuple\x94\x8c\x08longarga\x94\x8c\x08longargb\x94\x8c\x08longargc\x94\x87\x94K\x01K\x02K\x03\x87\x94\x87\x94R\x94.'
> {code}
> This serialization includes the name of the namedtuple class, the names of 
> each of its members, as well as references to internal functions in 
> pyspark.serializers. By comparison, this is what I get if I serialize the 
> bare tuple:
> {code:JiraShouldReallySupportPython}
> shorttuple = (1,2,3)
> pickle.dumps(shorttuple, pickle.HIGHEST_PROTOCOL)
> b'\x80\x04\x95\t\x00\x00\x00\x00\x00\x00\x00K\x01K\x02K\x03\x87\x94.'
> {code}
> Much shorter. For another comparison, here is what it looks like if I build a 
> dict with the same data and element names:
> {code:JiraShouldReallySupportPython}
> mydict = {'longarga':1, 'longargb':2, 'longargc':3}
> pickle.dumps(mydict, pickle.HIGHEST_PROTOCOL)
> b'\x80\x04\x95,\x00\x00\x00\x00\x00\x00\x00}\x94(\x8c\x08longarga\x94K\x01\x8c\x08longargb\x94K\x02\x8c\x08longargc\x94K\x03u.'
> {code}
> In other words, even using a dict is substantially shorter than using a 
> namedtuple in its current form. There shouldn't be any need for namedtuples 
> to have this much overhead in their serialization. For one thing, if the 
> class object is being broadcast to the nodes, there should be no need for 
> each namedtuple instance to include all of the field names; the class name 
> should be enough. If you use namedtuples heavily, this can create a lot of 
> overhead in memory and disk use. I am going to try and improve the 
> serialization and submit a patch if I can find the time, but I don't know the 
> pyspark code too well, so if anyone has suggestions for where to start, I 
> would love to hear them.
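As a rough illustration of the direction the linked PR takes for module-level 
namedtuples (this is plain pickle without PySpark's namedtuple hijack; the class and 
resulting sizes are illustrative): serializing the class by reference keeps the field 
names out of every instance's payload.

{code:python}
import pickle
from collections import namedtuple

# Defined at module level, so the unpickler can import it by name.
MyTuple = namedtuple('MyTuple', ('longarga', 'longargb', 'longargc'))

payload = pickle.dumps(MyTuple(1, 2, 3), pickle.HIGHEST_PROTOCOL)
print(len(payload), payload)
# The payload references the class by its qualified name and stores only the
# values (K\x01K\x02K\x03); the 'longarg*' field names do not appear.
{code}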






[jira] [Closed] (SPARK-24108) ChunkedByteBuffer.writeFully method has not reset the limit value

2018-04-29 Thread Li Yuanjian (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-24108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Yuanjian closed SPARK-24108.
---

Duplicate submission of 
[SPARK-24107|https://issues.apache.org/jira/browse/SPARK-24107], so closing it.

> ChunkedByteBuffer.writeFully method has not reset the limit value
> -
>
> Key: SPARK-24108
> URL: https://issues.apache.org/jira/browse/SPARK-24108
> Project: Spark
>  Issue Type: Bug
>  Components: Block Manager, Input/Output
>Affects Versions: 2.2.0, 2.2.1, 2.3.0
>Reporter: wangjinhai
>Priority: Major
> Fix For: 2.4.0
>
>
> The ChunkedByteBuffer.writeFully method does not reset the limit value. When a chunk 
> is larger than bufferWriteChunkSize, e.g. 80*1024*1024 bytes versus 
> config.BUFFER_WRITE_CHUNK_SIZE (64 * 1024 * 1024), the write loop runs only once and 
> 16*1024*1024 bytes are lost.
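A toy model of the described behaviour (plain Python, not the Spark code; sizes shrunk 
from MiB to small integers) shows how capping the limit without restoring it drops the 
tail of the chunk:

{code:python}
CHUNK = 64                # stands in for BUFFER_WRITE_CHUNK_SIZE (64 MiB)
data = list(range(80))    # stands in for an 80 MiB chunk

written = []
position, limit = 0, len(data)

while limit - position > 0:                # "remaining() > 0"
    limit = min(position + CHUNK, limit)   # limit is shrunk to position + CHUNK ...
    written.extend(data[position:limit])
    position = limit
    # ... and never reset to the original end, so remaining() is now 0
    # and the loop exits after a single pass.

print(len(data) - len(written))   # 16 units lost, matching the reported 16*1024*1024 bytes
{code}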






[jira] [Commented] (SPARK-24116) SparkSQL inserting overwrite table has inconsistent behavior regarding HDFS trash

2018-04-29 Thread Hyukjin Kwon (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-24116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457937#comment-16457937
 ] 

Hyukjin Kwon commented on SPARK-24116:
--

Can you describe the cases more closely? The logic there is complicated, and it would 
be nicer if we knew the specific cases.

> SparkSQL inserting overwrite table has inconsistent behavior regarding HDFS 
> trash
> -
>
> Key: SPARK-24116
> URL: https://issues.apache.org/jira/browse/SPARK-24116
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.0
>Reporter: Rui Li
>Priority: Major
>
> When overwriting a table with INSERT OVERWRITE, the old data may or may not go to the 
> HDFS trash, depending on:
>  # Data format. E.g. a text table may go to trash but a parquet table doesn't.
>  # Whether the table is partitioned. E.g. a partitioned text table doesn't go to 
> trash while a non-partitioned one does.
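For what it's worth, a minimal sketch of the two axes mentioned above might look like 
this (Hive-style tables; names, formats, and data are illustrative, and whether old 
files land in .Trash also depends on the cluster's fs.trash.interval setting):

{code:python}
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()
spark.range(10).createOrReplaceTempView("src")

# Axis 1: data format (text vs parquet), non-partitioned tables.
for fmt in ("TEXTFILE", "PARQUET"):
    name = "t_" + fmt.lower()
    spark.sql("CREATE TABLE IF NOT EXISTS {0} (id BIGINT) STORED AS {1}".format(name, fmt))
    spark.sql("INSERT OVERWRITE TABLE {0} SELECT id FROM src".format(name))
    # Overwrite again, then check on HDFS whether the previous files were
    # moved under /user/<user>/.Trash or deleted outright.
    spark.sql("INSERT OVERWRITE TABLE {0} SELECT id FROM src".format(name))

# Axis 2: repeat the comparison with tables declared PARTITIONED BY (...)
# and an INSERT OVERWRITE ... PARTITION (...) statement.
{code}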


