[jira] [Commented] (SPARK-26570) Out of memory when InMemoryFileIndex bulkListLeafFiles

2019-10-14 Thread fengchaoge (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-26570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16951437#comment-16951437
 ] 

fengchaoge commented on SPARK-26570:


That is just a snapshot at a single instant; it actually keeps growing until 
the memory overflows. Any information about the production environment can be 
exported if needed.

> Out of memory when InMemoryFileIndex bulkListLeafFiles
> --
>
> Key: SPARK-26570
> URL: https://issues.apache.org/jira/browse/SPARK-26570
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.3.2
>Reporter: deshanxiao
>Priority: Major
> Attachments: image-2019-10-13-18-41-22-090.png, 
> image-2019-10-13-18-45-33-770.png, image-2019-10-14-10-00-27-361.png, 
> image-2019-10-14-10-32-17-949.png, image-2019-10-14-10-47-47-684.png, 
> image-2019-10-14-10-50-47-567.png, image-2019-10-14-10-51-28-374.png, 
> screenshot-1.png
>
>
> *bulkListLeafFiles* collects all FileStatus objects in memory for every query, 
> which may cause an OOM on the driver. I hit this problem with Spark 2.3.2. 
> The latest version may have the same problem.
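
For readers landing here, a minimal sketch of how the listing is triggered (the 
path and table sizing are illustrative assumptions, not details from this ticket):

{code:scala}
// A minimal sketch of the reported behavior, not a test from this ticket.
// Assumption: a path-based Parquet dataset at /data/events (hypothetical)
// containing a very large number of leaf files.
import org.apache.spark.sql.SparkSession

object ListingOomSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("listing-oom-sketch")
      .getOrCreate()

    // Reading the raw path builds an InMemoryFileIndex, whose listing
    // (bulkListLeafFiles) materializes one FileStatus per leaf file on the
    // driver. With millions of files, that alone can exhaust the driver heap.
    val df = spark.read.parquet("/data/events")
    df.count()

    spark.stop()
  }
}
{code}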






[jira] [Comment Edited] (SPARK-26570) Out of memory when InMemoryFileIndex bulkListLeafFiles

2019-10-13 Thread fengchaoge (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-26570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16950658#comment-16950658
 ] 

fengchaoge edited comment on SPARK-26570 at 10/14/19 3:00 AM:
--

hello, [~srowen] [~viirya]

The jmap output is shown below, and the FileStatus count keeps rising.

!image-2019-10-14-10-00-27-361.png!

The dump file is too big; I will upload it later.

 

!image-2019-10-14-10-32-17-949.png!



was (Author: fengchaoge):
hello, [~srowen] [~viirya]

The jmap output is shown below:

!image-2019-10-14-10-00-27-361.png!

The dump file is too big; I will upload it later.

 

!image-2019-10-14-10-32-17-949.png!


> Out of memory when InMemoryFileIndex bulkListLeafFiles
> --
>
> Key: SPARK-26570
> URL: https://issues.apache.org/jira/browse/SPARK-26570
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.3.2
>Reporter: deshanxiao
>Priority: Major
> Attachments: image-2019-10-13-18-41-22-090.png, 
> image-2019-10-13-18-45-33-770.png, image-2019-10-14-10-00-27-361.png, 
> image-2019-10-14-10-32-17-949.png, image-2019-10-14-10-47-47-684.png, 
> image-2019-10-14-10-50-47-567.png, image-2019-10-14-10-51-28-374.png, 
> screenshot-1.png
>
>
> *bulkListLeafFiles* collects all FileStatus objects in memory for every query, 
> which may cause an OOM on the driver. I hit this problem with Spark 2.3.2. 
> The latest version may have the same problem.






[jira] [Updated] (SPARK-26570) Out of memory when InMemoryFileIndex bulkListLeafFiles

2019-10-13 Thread fengchaoge (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-26570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-26570:
---
Attachment: image-2019-10-14-10-51-28-374.png

> Out of memory when InMemoryFileIndex bulkListLeafFiles
> --
>
> Key: SPARK-26570
> URL: https://issues.apache.org/jira/browse/SPARK-26570
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.3.2
>Reporter: deshanxiao
>Priority: Major
> Attachments: image-2019-10-13-18-41-22-090.png, 
> image-2019-10-13-18-45-33-770.png, image-2019-10-14-10-00-27-361.png, 
> image-2019-10-14-10-32-17-949.png, image-2019-10-14-10-47-47-684.png, 
> image-2019-10-14-10-50-47-567.png, image-2019-10-14-10-51-28-374.png, 
> screenshot-1.png
>
>
> *bulkListLeafFiles* collects all FileStatus objects in memory for every query, 
> which may cause an OOM on the driver. I hit this problem with Spark 2.3.2. 
> The latest version may have the same problem.






[jira] [Commented] (SPARK-26570) Out of memory when InMemoryFileIndex bulkListLeafFiles

2019-10-13 Thread fengchaoge (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-26570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16950675#comment-16950675
 ] 

fengchaoge commented on SPARK-26570:


!image-2019-10-14-10-51-28-374.png!

> Out of memory when InMemoryFileIndex bulkListLeafFiles
> --
>
> Key: SPARK-26570
> URL: https://issues.apache.org/jira/browse/SPARK-26570
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.3.2
>Reporter: deshanxiao
>Priority: Major
> Attachments: image-2019-10-13-18-41-22-090.png, 
> image-2019-10-13-18-45-33-770.png, image-2019-10-14-10-00-27-361.png, 
> image-2019-10-14-10-32-17-949.png, image-2019-10-14-10-47-47-684.png, 
> image-2019-10-14-10-50-47-567.png, screenshot-1.png
>
>
> *bulkListLeafFiles* collects all FileStatus objects in memory for every query, 
> which may cause an OOM on the driver. I hit this problem with Spark 2.3.2. 
> The latest version may have the same problem.






[jira] [Comment Edited] (SPARK-26570) Out of memory when InMemoryFileIndex bulkListLeafFiles

2019-10-13 Thread fengchaoge (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-26570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16950672#comment-16950672
 ] 

fengchaoge edited comment on SPARK-26570 at 10/14/19 2:50 AM:
--

!image-2019-10-14-10-50-47-567.png!


was (Author: fengchaoge):
!image-2019-10-14-10-48-05-063.png!

!image-2019-10-14-10-48-16-632.png!

> Out of memory when InMemoryFileIndex bulkListLeafFiles
> --
>
> Key: SPARK-26570
> URL: https://issues.apache.org/jira/browse/SPARK-26570
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.3.2
>Reporter: deshanxiao
>Priority: Major
> Attachments: image-2019-10-13-18-41-22-090.png, 
> image-2019-10-13-18-45-33-770.png, image-2019-10-14-10-00-27-361.png, 
> image-2019-10-14-10-32-17-949.png, image-2019-10-14-10-47-47-684.png, 
> image-2019-10-14-10-50-47-567.png, screenshot-1.png
>
>
> *bulkListLeafFiles* collects all FileStatus objects in memory for every query, 
> which may cause an OOM on the driver. I hit this problem with Spark 2.3.2. 
> The latest version may have the same problem.






[jira] [Commented] (SPARK-26570) Out of memory when InMemoryFileIndex bulkListLeafFiles

2019-10-13 Thread fengchaoge (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-26570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16950672#comment-16950672
 ] 

fengchaoge commented on SPARK-26570:


!image-2019-10-14-10-48-05-063.png!

!image-2019-10-14-10-48-16-632.png!

> Out of memory when InMemoryFileIndex bulkListLeafFiles
> --
>
> Key: SPARK-26570
> URL: https://issues.apache.org/jira/browse/SPARK-26570
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.3.2
>Reporter: deshanxiao
>Priority: Major
> Attachments: image-2019-10-13-18-41-22-090.png, 
> image-2019-10-13-18-45-33-770.png, image-2019-10-14-10-00-27-361.png, 
> image-2019-10-14-10-32-17-949.png, image-2019-10-14-10-47-47-684.png, 
> screenshot-1.png
>
>
> *bulkListLeafFiles* collects all FileStatus objects in memory for every query, 
> which may cause an OOM on the driver. I hit this problem with Spark 2.3.2. 
> The latest version may have the same problem.






[jira] [Comment Edited] (SPARK-26570) Out of memory when InMemoryFileIndex bulkListLeafFiles

2019-10-13 Thread fengchaoge (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-26570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16950658#comment-16950658
 ] 

fengchaoge edited comment on SPARK-26570 at 10/14/19 2:47 AM:
--

hello, [~srowen] [~viirya]

The jmap output is shown below:

!image-2019-10-14-10-00-27-361.png!

The dump file is too big; I will upload it later.

 

!image-2019-10-14-10-32-17-949.png!



was (Author: fengchaoge):
hello, [~srowen] [~viirya]

The jmap output is shown below:

!image-2019-10-14-10-00-27-361.png!

The dump file is too big; I will upload it later.

  !image-2019-10-14-10-32-17-949.png!

> Out of memory when InMemoryFileIndex bulkListLeafFiles
> --
>
> Key: SPARK-26570
> URL: https://issues.apache.org/jira/browse/SPARK-26570
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.3.2
>Reporter: deshanxiao
>Priority: Major
> Attachments: image-2019-10-13-18-41-22-090.png, 
> image-2019-10-13-18-45-33-770.png, image-2019-10-14-10-00-27-361.png, 
> image-2019-10-14-10-32-17-949.png, screenshot-1.png
>
>
> *bulkListLeafFiles* collects all FileStatus objects in memory for every query, 
> which may cause an OOM on the driver. I hit this problem with Spark 2.3.2. 
> The latest version may have the same problem.






[jira] [Updated] (SPARK-26570) Out of memory when InMemoryFileIndex bulkListLeafFiles

2019-10-13 Thread fengchaoge (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-26570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-26570:
---
Attachment: image-2019-10-14-10-47-47-684.png

> Out of memory when InMemoryFileIndex bulkListLeafFiles
> --
>
> Key: SPARK-26570
> URL: https://issues.apache.org/jira/browse/SPARK-26570
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.3.2
>Reporter: deshanxiao
>Priority: Major
> Attachments: image-2019-10-13-18-41-22-090.png, 
> image-2019-10-13-18-45-33-770.png, image-2019-10-14-10-00-27-361.png, 
> image-2019-10-14-10-32-17-949.png, image-2019-10-14-10-47-47-684.png, 
> screenshot-1.png
>
>
> *bulkListLeafFiles* collects all FileStatus objects in memory for every query, 
> which may cause an OOM on the driver. I hit this problem with Spark 2.3.2. 
> The latest version may have the same problem.






[jira] [Comment Edited] (SPARK-26570) Out of memory when InMemoryFileIndex bulkListLeafFiles

2019-10-13 Thread fengchaoge (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-26570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16950658#comment-16950658
 ] 

fengchaoge edited comment on SPARK-26570 at 10/14/19 2:32 AM:
--

hello, [~srowen] [~viirya]

The jmap output is shown below:

!image-2019-10-14-10-00-27-361.png!

The dump file is too big; I will upload it later.

  !image-2019-10-14-10-32-17-949.png!


was (Author: fengchaoge):
hello, [~srowen] [~viirya]

The jmap output is shown below:

!image-2019-10-14-10-00-27-361.png!

The dump file is too big; I will upload it later.

 

> Out of memory when InMemoryFileIndex bulkListLeafFiles
> --
>
> Key: SPARK-26570
> URL: https://issues.apache.org/jira/browse/SPARK-26570
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.3.2
>Reporter: deshanxiao
>Priority: Major
> Attachments: image-2019-10-13-18-41-22-090.png, 
> image-2019-10-13-18-45-33-770.png, image-2019-10-14-10-00-27-361.png, 
> image-2019-10-14-10-32-17-949.png, screenshot-1.png
>
>
> *bulkListLeafFiles* collects all FileStatus objects in memory for every query, 
> which may cause an OOM on the driver. I hit this problem with Spark 2.3.2. 
> The latest version may have the same problem.






[jira] [Commented] (SPARK-26570) Out of memory when InMemoryFileIndex bulkListLeafFiles

2019-10-13 Thread fengchaoge (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-26570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16950658#comment-16950658
 ] 

fengchaoge commented on SPARK-26570:


hello, [~srowen] [~viirya]

The jmap output is shown below:

!image-2019-10-14-10-00-27-361.png!

The dump file is too big; I will upload it later.

 

> Out of memory when InMemoryFileIndex bulkListLeafFiles
> --
>
> Key: SPARK-26570
> URL: https://issues.apache.org/jira/browse/SPARK-26570
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.3.2
>Reporter: deshanxiao
>Priority: Major
> Attachments: image-2019-10-13-18-41-22-090.png, 
> image-2019-10-13-18-45-33-770.png, image-2019-10-14-10-00-27-361.png, 
> screenshot-1.png
>
>
> *bulkListLeafFiles* collects all FileStatus objects in memory for every query, 
> which may cause an OOM on the driver. I hit this problem with Spark 2.3.2. 
> The latest version may have the same problem.






[jira] [Updated] (SPARK-26570) Out of memory when InMemoryFileIndex bulkListLeafFiles

2019-10-13 Thread fengchaoge (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-26570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-26570:
---
Attachment: image-2019-10-14-10-00-27-361.png

> Out of memory when InMemoryFileIndex bulkListLeafFiles
> --
>
> Key: SPARK-26570
> URL: https://issues.apache.org/jira/browse/SPARK-26570
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.3.2
>Reporter: deshanxiao
>Priority: Major
> Attachments: image-2019-10-13-18-41-22-090.png, 
> image-2019-10-13-18-45-33-770.png, image-2019-10-14-10-00-27-361.png, 
> screenshot-1.png
>
>
> *bulkListLeafFiles* collects all FileStatus objects in memory for every query, 
> which may cause an OOM on the driver. I hit this problem with Spark 2.3.2. 
> The latest version may have the same problem.






[jira] [Comment Edited] (SPARK-26570) Out of memory when InMemoryFileIndex bulkListLeafFiles

2019-10-13 Thread fengchaoge (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-26570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16950276#comment-16950276
 ] 

fengchaoge edited comment on SPARK-26570 at 10/13/19 10:46 AM:
---

hello, [~deshanxiao] [~srowen] Spark 2.4.3 may also have the same problem. Our 
SQL program runs stably on Spark 2.1.0, generating about 70,000 tasks. After 
migrating to Spark 2.4.3, the driver memory overflows directly. The driver logs 
show: java.lang.OutOfMemoryError: GC overhead limit exceeded. The jstack output 
is shown below; it may be that all of the serializable file statuses are being 
collected.

_!image-2019-10-13-18-41-22-090.png!_

!image-2019-10-13-18-45-33-770.png!

 


was (Author: fengchaoge):
hello, [~deshanxiao] [~srowen] Spark 2.4.3 may also have the same problem. Our 
SQL program runs stably on Spark 2.1.0, generating about 70,000 tasks. After 
migrating to Spark 2.4.3, the memory overflows directly. The driver logs show: 
java.lang.OutOfMemoryError: GC overhead limit exceeded. The jstack output is 
shown below; it may be that all of the serializable file statuses are being 
collected.

_!image-2019-10-13-18-41-22-090.png!_

!image-2019-10-13-18-45-33-770.png!

 

> Out of memory when InMemoryFileIndex bulkListLeafFiles
> --
>
> Key: SPARK-26570
> URL: https://issues.apache.org/jira/browse/SPARK-26570
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.3.2
>Reporter: deshanxiao
>Priority: Major
> Attachments: image-2019-10-13-18-41-22-090.png, 
> image-2019-10-13-18-45-33-770.png, screenshot-1.png
>
>
> *bulkListLeafFiles* collects all FileStatus objects in memory for every query, 
> which may cause an OOM on the driver. I hit this problem with Spark 2.3.2. 
> The latest version may have the same problem.






[jira] [Updated] (SPARK-26570) Out of memory when InMemoryFileIndex bulkListLeafFiles

2019-10-13 Thread fengchaoge (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-26570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-26570:
---
Attachment: image-2019-10-13-18-45-33-770.png

> Out of memory when InMemoryFileIndex bulkListLeafFiles
> --
>
> Key: SPARK-26570
> URL: https://issues.apache.org/jira/browse/SPARK-26570
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.3.2
>Reporter: deshanxiao
>Priority: Major
> Attachments: image-2019-10-13-18-41-22-090.png, 
> image-2019-10-13-18-45-33-770.png, screenshot-1.png
>
>
> *bulkListLeafFiles* collects all FileStatus objects in memory for every query, 
> which may cause an OOM on the driver. I hit this problem with Spark 2.3.2. 
> The latest version may have the same problem.






[jira] [Commented] (SPARK-26570) Out of memory when InMemoryFileIndex bulkListLeafFiles

2019-10-13 Thread fengchaoge (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-26570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16950276#comment-16950276
 ] 

fengchaoge commented on SPARK-26570:


hello, [~deshanxiao] [~srowen] Spark 2.4.3 may also have the same problem. Our 
SQL program runs stably on Spark 2.1.0, generating about 70,000 tasks. After 
migrating to Spark 2.4.3, the memory overflows directly. The driver logs show: 
java.lang.OutOfMemoryError: GC overhead limit exceeded. The jstack output is 
shown below; it may be that all of the serializable file statuses are being 
collected.

_!image-2019-10-13-18-41-22-090.png!_

!image-2019-10-13-18-45-33-770.png!

 

> Out of memory when InMemoryFileIndex bulkListLeafFiles
> --
>
> Key: SPARK-26570
> URL: https://issues.apache.org/jira/browse/SPARK-26570
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.3.2
>Reporter: deshanxiao
>Priority: Major
> Attachments: image-2019-10-13-18-41-22-090.png, screenshot-1.png
>
>
> *bulkListLeafFiles* collects all FileStatus objects in memory for every query, 
> which may cause an OOM on the driver. I hit this problem with Spark 2.3.2. 
> The latest version may have the same problem.






[jira] [Updated] (SPARK-26570) Out of memory when InMemoryFileIndex bulkListLeafFiles

2019-10-13 Thread fengchaoge (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-26570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-26570:
---
Attachment: image-2019-10-13-18-41-22-090.png

> Out of memory when InMemoryFileIndex bulkListLeafFiles
> --
>
> Key: SPARK-26570
> URL: https://issues.apache.org/jira/browse/SPARK-26570
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.3.2
>Reporter: deshanxiao
>Priority: Major
> Attachments: image-2019-10-13-18-41-22-090.png, screenshot-1.png
>
>
> *bulkListLeafFiles* collects all FileStatus objects in memory for every query, 
> which may cause an OOM on the driver. I hit this problem with Spark 2.3.2. 
> The latest version may have the same problem.






[jira] [Commented] (SPARK-28990) SparkSQL invalid call to toAttribute on unresolved object, tree: *

2019-09-25 Thread fengchaoge (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-28990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16937725#comment-16937725
 ] 

fengchaoge commented on SPARK-28990:


Spark 3.0 does fix this problem, but I'd like to know what changed.
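
For context, a hedged sketch of the reproduction through the SQL API. It assumes 
a Hive-enabled session and that default.dual does not exist; on 2.4.x the 
analyzer surfaces the raw UnresolvedException instead of a plain "table not 
found" error:

{code:scala}
// A reproduction sketch under the stated assumptions, not code from the fix.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.catalyst.analysis.UnresolvedException

object CtasUnresolvedSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ctas-unresolved-sketch")
      .enableHiveSupport()
      .getOrCreate()

    try {
      // default.dual is assumed to be missing; on 2.4.3 analysis fails on the
      // Star (*) with UnresolvedException before the missing table is reported.
      spark.sql("create table default.spark as select * from default.dual")
    } catch {
      case e: UnresolvedException[_] => println(s"analyzer error: ${e.getMessage}")
    }

    spark.stop()
  }
}
{code}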

> SparkSQL invalid call to toAttribute on unresolved object, tree: *
> --
>
> Key: SPARK-28990
> URL: https://issues.apache.org/jira/browse/SPARK-28990
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.4.3
>Reporter: fengchaoge
>Priority: Major
>
> A SparkSQL CREATE TABLE AS SELECT from a table that may not exist throws an 
> exception like:
> {code}
> org.apache.spark.sql.catalyst.analysis.UnresolvedException: Invalid call to 
> toAttribute on unresolved object, tree:
> {code}
> This is not friendly; a Spark user may have no idea what's wrong.
> A simple SQL statement can reproduce it:
> {code}
> spark-sql (default)> create table default.spark as select * from default.dual;
> {code}
> {code}
> 2019-09-05 16:27:24,127 INFO (main) [Logging.scala:logInfo(54)] - Parsing 
> command: create table default.spark as select * from default.dual
> 2019-09-05 16:27:24,772 ERROR (main) [Logging.scala:logError(91)] - Failed in 
> [create table default.spark as select * from default.dual]
> org.apache.spark.sql.catalyst.analysis.UnresolvedException: Invalid call to 
> toAttribute on unresolved object, tree: *
> at 
> org.apache.spark.sql.catalyst.analysis.Star.toAttribute(unresolved.scala:245)
> at 
> org.apache.spark.sql.catalyst.plans.logical.Project$$anonfun$output$1.apply(basicLogicalOperators.scala:52)
> at 
> org.apache.spark.sql.catalyst.plans.logical.Project$$anonfun$output$1.apply(basicLogicalOperators.scala:52)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.immutable.List.foreach(List.scala:392)
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> at scala.collection.immutable.List.map(List.scala:296)
> at 
> org.apache.spark.sql.catalyst.plans.logical.Project.output(basicLogicalOperators.scala:52)
> at 
> org.apache.spark.sql.hive.HiveAnalysis$$anonfun$apply$3.applyOrElse(HiveStrategies.scala:160)
> at 
> org.apache.spark.sql.hive.HiveAnalysis$$anonfun$apply$3.applyOrElse(HiveStrategies.scala:148)
> at 
> org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$2.apply(AnalysisHelper.scala:108)
> at 
> org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$2.apply(AnalysisHelper.scala:108)
> at 
> org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
> at 
> org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:107)
> at 
> org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:106)
> at 
> org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
> at 
> org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsDown(AnalysisHelper.scala:106)
> at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29)
> at 
> org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperators(AnalysisHelper.scala:73)
> at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:29)
> at org.apache.spark.sql.hive.HiveAnalysis$.apply(HiveStrategies.scala:148)
> at org.apache.spark.sql.hive.HiveAnalysis$.apply(HiveStrategies.scala:147)
> at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:87)
> at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:84)
> at 
> scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:57)
> at 
> scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:66)
> at scala.collection.mutable.ArrayBuffer.foldLeft(ArrayBuffer.scala:48)
> at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:84)
> at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:76)
> at scala.collection.immutable.List.foreach(List.scala:392)
> at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:76)
> at 
> org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:127)
> at 

[jira] [Commented] (SPARK-28990) SparkSQL invalid call to toAttribute on unresolved object, tree: *

2019-09-24 Thread fengchaoge (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-28990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936778#comment-16936778
 ] 

fengchaoge commented on SPARK-28990:


[~726575...@qq.com] hello daile, can you send a link? Thanks

> SparkSQL invalid call to toAttribute on unresolved object, tree: *
> --
>
> Key: SPARK-28990
> URL: https://issues.apache.org/jira/browse/SPARK-28990
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.4.3
>Reporter: fengchaoge
>Priority: Major
>
> A SparkSQL CREATE TABLE AS SELECT from a table that may not exist throws an 
> exception like:
> {code}
> org.apache.spark.sql.catalyst.analysis.UnresolvedException: Invalid call to 
> toAttribute on unresolved object, tree:
> {code}
> This is not friendly; a Spark user may have no idea what's wrong.
> A simple SQL statement can reproduce it:
> {code}
> spark-sql (default)> create table default.spark as select * from default.dual;
> {code}
> {code}
> 2019-09-05 16:27:24,127 INFO (main) [Logging.scala:logInfo(54)] - Parsing 
> command: create table default.spark as select * from default.dual
> 2019-09-05 16:27:24,772 ERROR (main) [Logging.scala:logError(91)] - Failed in 
> [create table default.spark as select * from default.dual]
> org.apache.spark.sql.catalyst.analysis.UnresolvedException: Invalid call to 
> toAttribute on unresolved object, tree: *
> at 
> org.apache.spark.sql.catalyst.analysis.Star.toAttribute(unresolved.scala:245)
> at 
> org.apache.spark.sql.catalyst.plans.logical.Project$$anonfun$output$1.apply(basicLogicalOperators.scala:52)
> at 
> org.apache.spark.sql.catalyst.plans.logical.Project$$anonfun$output$1.apply(basicLogicalOperators.scala:52)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.immutable.List.foreach(List.scala:392)
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> at scala.collection.immutable.List.map(List.scala:296)
> at 
> org.apache.spark.sql.catalyst.plans.logical.Project.output(basicLogicalOperators.scala:52)
> at 
> org.apache.spark.sql.hive.HiveAnalysis$$anonfun$apply$3.applyOrElse(HiveStrategies.scala:160)
> at 
> org.apache.spark.sql.hive.HiveAnalysis$$anonfun$apply$3.applyOrElse(HiveStrategies.scala:148)
> at 
> org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$2.apply(AnalysisHelper.scala:108)
> at 
> org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$2.apply(AnalysisHelper.scala:108)
> at 
> org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
> at 
> org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:107)
> at 
> org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:106)
> at 
> org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
> at 
> org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsDown(AnalysisHelper.scala:106)
> at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29)
> at 
> org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperators(AnalysisHelper.scala:73)
> at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:29)
> at org.apache.spark.sql.hive.HiveAnalysis$.apply(HiveStrategies.scala:148)
> at org.apache.spark.sql.hive.HiveAnalysis$.apply(HiveStrategies.scala:147)
> at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:87)
> at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:84)
> at 
> scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:57)
> at 
> scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:66)
> at scala.collection.mutable.ArrayBuffer.foldLeft(ArrayBuffer.scala:48)
> at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:84)
> at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:76)
> at scala.collection.immutable.List.foreach(List.scala:392)
> at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:76)
> at 
> org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:127)
> at 

[jira] [Updated] (SPARK-28990) SparkSQL invalid call to toAttribute on unresolved object, tree: *

2019-09-05 Thread fengchaoge (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-28990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-28990:
---
   Fix Version/s: 3.0.0
Target Version/s: 2.4.4
 Description: 
SparkSQL CREATE TABLE AS SELECT from a table that may not exist throws an 
exception like "org.apache.spark.sql.catalyst.analysis.UnresolvedException: 
Invalid call to toAttribute on unresolved object, tree: *", which is not 
friendly; a Spark user may have no idea what's wrong.

A simple SQL statement can reproduce it:

create table default.spark as select * from default.dual;

 

spark-sql (default)> create table default.spark as select * from default.dual;

2019-09-05 16:27:24,127 INFO (main) [Logging.scala:logInfo(54)] - Parsing 
command: create table default.spark as select * from default.dual

2019-09-05 16:27:24,772 ERROR (main) [Logging.scala:logError(91)] - Failed in 
[create table default.spark as select * from default.dual]

org.apache.spark.sql.catalyst.analysis.UnresolvedException: Invalid call to 
toAttribute on unresolved object, tree: *

at org.apache.spark.sql.catalyst.analysis.Star.toAttribute(unresolved.scala:245)

at 
org.apache.spark.sql.catalyst.plans.logical.Project$$anonfun$output$1.apply(basicLogicalOperators.scala:52)

at 
org.apache.spark.sql.catalyst.plans.logical.Project$$anonfun$output$1.apply(basicLogicalOperators.scala:52)

at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)

at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)

at scala.collection.immutable.List.foreach(List.scala:392)

at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)

at scala.collection.immutable.List.map(List.scala:296)

at 
org.apache.spark.sql.catalyst.plans.logical.Project.output(basicLogicalOperators.scala:52)

at 
org.apache.spark.sql.hive.HiveAnalysis$$anonfun$apply$3.applyOrElse(HiveStrategies.scala:160)

at 
org.apache.spark.sql.hive.HiveAnalysis$$anonfun$apply$3.applyOrElse(HiveStrategies.scala:148)

at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$2.apply(AnalysisHelper.scala:108)

at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$2.apply(AnalysisHelper.scala:108)

at 
org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)

at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:107)

at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:106)

at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)

at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsDown(AnalysisHelper.scala:106)

at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29)

at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperators(AnalysisHelper.scala:73)

at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:29)

at org.apache.spark.sql.hive.HiveAnalysis$.apply(HiveStrategies.scala:148)

at org.apache.spark.sql.hive.HiveAnalysis$.apply(HiveStrategies.scala:147)

at 
org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:87)

at 
org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:84)

at 
scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:57)

at 
scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:66)

at scala.collection.mutable.ArrayBuffer.foldLeft(ArrayBuffer.scala:48)

at 
org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:84)

at 
org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:76)

at scala.collection.immutable.List.foreach(List.scala:392)

at 
org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:76)

at 
org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:127)

at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:121)

at 
org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:106)

at 
org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:105)

at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:201)

at 
org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:105)

at 

[jira] [Commented] (SPARK-28990) SparkSQL invalid call to toAttribute on unresolved object, tree: *

2019-09-05 Thread fengchaoge (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-28990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16923462#comment-16923462
 ] 

fengchaoge commented on SPARK-28990:


Thank you very much. I have some idea about it: the Analyzer's executeAndCheck 
method throws an UnresolvedException which is not caught.
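
In other words, a hedged sketch of that idea (illustrative only, not the actual 
Spark 3.0 patch; runAnalysisFriendly is a hypothetical helper):

{code:scala}
// The real fix lives inside Spark's analyzer, and AnalysisException's
// constructor is not public, so a plain RuntimeException stands in for a
// friendlier error here.
import org.apache.spark.sql.catalyst.analysis.UnresolvedException

def runAnalysisFriendly[T](body: => T): T =
  try body
  catch {
    case e: UnresolvedException[_] =>
      throw new RuntimeException(
        s"query could not be resolved (is the source table missing?): ${e.getMessage}", e)
  }
{code}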

> SparkSQL invalid call to toAttribute on unresolved object, tree: *
> --
>
> Key: SPARK-28990
> URL: https://issues.apache.org/jira/browse/SPARK-28990
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.4.3
> Environment: Any
>Reporter: fengchaoge
>Priority: Major
>
> h6. SparkSQL CREATE TABLE AS SELECT from a table that may not exist throws an 
> exception like "org.apache.spark.sql.catalyst.analysis.UnresolvedException: 
> Invalid call to toAttribute on unresolved object, tree: *", which is not 
> friendly; a Spark user may have no idea what's wrong.
> h6. A simple SQL statement can reproduce it:
> ^create table default.spark as select * from default.dual;^
> ~spark-sql (default)> create table default.spark as select * from 
> default.dual;~
>  ~2019-09-05 16:27:24,127 INFO (main) [Logging.scala:logInfo(54)] - Parsing 
> command: create table default.spark as select * from default.dual~
>  ~2019-09-05 16:27:24,772 ERROR (main) [Logging.scala:logError(91)] - Failed 
> in [create table default.spark as select * from default.dual]~
>  ~org.apache.spark.sql.catalyst.analysis.UnresolvedException: Invalid call to 
> toAttribute on unresolved object, tree: *~
>  ~at 
> org.apache.spark.sql.catalyst.analysis.Star.toAttribute(unresolved.scala:245)~
>  ~at 
> org.apache.spark.sql.catalyst.plans.logical.Project$$anonfun$output$1.apply(basicLogicalOperators.scala:52)~
>  ~at 
> org.apache.spark.sql.catalyst.plans.logical.Project$$anonfun$output$1.apply(basicLogicalOperators.scala:52)~
>  ~at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)~
>  ~at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)~
>  ~at scala.collection.immutable.List.foreach(List.scala:392)~
>  ~at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)~
>  ~at scala.collection.immutable.List.map(List.scala:296)~
>  ~at 
> org.apache.spark.sql.catalyst.plans.logical.Project.output(basicLogicalOperators.scala:52)~
>  ~at 
> org.apache.spark.sql.hive.HiveAnalysis$$anonfun$apply$3.applyOrElse(HiveStrategies.scala:160)~
>  ~at 
> org.apache.spark.sql.hive.HiveAnalysis$$anonfun$apply$3.applyOrElse(HiveStrategies.scala:148)~
>  ~at 
> org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$2.apply(AnalysisHelper.scala:108)~
>  ~at 
> org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$2.apply(AnalysisHelper.scala:108)~
>  ~at 
> org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)~
>  ~at 
> org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:107)~
>  ~at 
> org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:106)~
>  ~at 
> org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)~
>  ~at 
> org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsDown(AnalysisHelper.scala:106)~
>  ~at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29)~
>  ~at 
> org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperators(AnalysisHelper.scala:73)~
>  ~at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:29)~
>  ~at org.apache.spark.sql.hive.HiveAnalysis$.apply(HiveStrategies.scala:148)~
>  ~at org.apache.spark.sql.hive.HiveAnalysis$.apply(HiveStrategies.scala:147)~
>  ~at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:87)~
>  ~at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:84)~
>  ~at 
> scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:57)~
>  ~at 
> scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:66)~
>  ~at scala.collection.mutable.ArrayBuffer.foldLeft(ArrayBuffer.scala:48)~
>  ~at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:84)~
>  ~at 
> org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:76)~
>  ~at scala.collection.immutable.List.foreach(List.scala:392)~
>  ~at 
> 

[jira] [Updated] (SPARK-28990) SparkSQL invalid call to toAttribute on unresolved object, tree: *

2019-09-05 Thread fengchaoge (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-28990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-28990:
---
Description: 
h6. SparkSQL CREATE TABLE AS SELECT from a table that may not exist throws an 
exception like "org.apache.spark.sql.catalyst.analysis.UnresolvedException: 
Invalid call to toAttribute on unresolved object, tree: *", which is not 
friendly; a Spark user may have no idea what's wrong.
h6. A simple SQL statement can reproduce it:

^create table default.spark as select * from default.dual;^

~spark-sql (default)> create table default.spark as select * from default.dual;~
 ~2019-09-05 16:27:24,127 INFO (main) [Logging.scala:logInfo(54)] - Parsing 
command: create table default.spark as select * from default.dual~
 ~2019-09-05 16:27:24,772 ERROR (main) [Logging.scala:logError(91)] - Failed in 
[create table default.spark as select * from default.dual]~
 ~org.apache.spark.sql.catalyst.analysis.UnresolvedException: Invalid call to 
toAttribute on unresolved object, tree: *~
 ~at 
org.apache.spark.sql.catalyst.analysis.Star.toAttribute(unresolved.scala:245)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.Project$$anonfun$output$1.apply(basicLogicalOperators.scala:52)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.Project$$anonfun$output$1.apply(basicLogicalOperators.scala:52)~
 ~at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)~
 ~at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)~
 ~at scala.collection.immutable.List.foreach(List.scala:392)~
 ~at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)~
 ~at scala.collection.immutable.List.map(List.scala:296)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.Project.output(basicLogicalOperators.scala:52)~
 ~at 
org.apache.spark.sql.hive.HiveAnalysis$$anonfun$apply$3.applyOrElse(HiveStrategies.scala:160)~
 ~at 
org.apache.spark.sql.hive.HiveAnalysis$$anonfun$apply$3.applyOrElse(HiveStrategies.scala:148)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$2.apply(AnalysisHelper.scala:108)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$2.apply(AnalysisHelper.scala:108)~
 ~at 
org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:107)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:106)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsDown(AnalysisHelper.scala:106)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperators(AnalysisHelper.scala:73)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:29)~
 ~at org.apache.spark.sql.hive.HiveAnalysis$.apply(HiveStrategies.scala:148)~
 ~at org.apache.spark.sql.hive.HiveAnalysis$.apply(HiveStrategies.scala:147)~
 ~at 
org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:87)~
 ~at 
org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:84)~
 ~at 
scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:57)~
 ~at 
scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:66)~
 ~at scala.collection.mutable.ArrayBuffer.foldLeft(ArrayBuffer.scala:48)~
 ~at 
org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:84)~
 ~at 
org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:76)~
 ~at scala.collection.immutable.List.foreach(List.scala:392)~
 ~at 
org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:76)~
 ~at 
org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:127)~
 ~at 
org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:121)~
 ~at 
org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:106)~
 ~at 
org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:105)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:201)~
 ~at 
org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:105)~
 ~at 

[jira] [Updated] (SPARK-28990) SparkSQL invalid call to toAttribute on unresolved object, tree: *

2019-09-05 Thread fengchaoge (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-28990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-28990:
---
Description: 
h6. SparkSQL CREATE TABLE AS SELECT from a table that may not exist throws an 
exception like "org.apache.spark.sql.catalyst.analysis.UnresolvedException: 
Invalid call to toAttribute on unresolved object, tree: *", which is not 
friendly; a Spark user may have no idea what's wrong.
h6. A simple SQL statement can reproduce it:

^create table default.spark as select * from default.dual;^

~spark-sql (default)> create table default.spark as select * from default.dual;~
 ~2019-09-05 16:27:24,127 INFO (main) [Logging.scala:logInfo(54)] - Parsing 
command: create table default.spark as select * from default.dual~
 ~2019-09-05 16:27:24,772 ERROR (main) [Logging.scala:logError(91)] - Failed in 
[create table default.spark as select * from default.dual]~
 ~org.apache.spark.sql.catalyst.analysis.UnresolvedException: Invalid call to 
toAttribute on unresolved object, tree: *~
 ~at 
org.apache.spark.sql.catalyst.analysis.Star.toAttribute(unresolved.scala:245)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.Project$$anonfun$output$1.apply(basicLogicalOperators.scala:52)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.Project$$anonfun$output$1.apply(basicLogicalOperators.scala:52)~
 ~at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)~
 ~at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)~
 ~at scala.collection.immutable.List.foreach(List.scala:392)~
 ~at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)~
 ~at scala.collection.immutable.List.map(List.scala:296)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.Project.output(basicLogicalOperators.scala:52)~
 ~at 
org.apache.spark.sql.hive.HiveAnalysis$$anonfun$apply$3.applyOrElse(HiveStrategies.scala:160)~
 ~at 
org.apache.spark.sql.hive.HiveAnalysis$$anonfun$apply$3.applyOrElse(HiveStrategies.scala:148)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$2.apply(AnalysisHelper.scala:108)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$2.apply(AnalysisHelper.scala:108)~
 ~at 
org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:107)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:106)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsDown(AnalysisHelper.scala:106)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperators(AnalysisHelper.scala:73)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:29)~
 ~at org.apache.spark.sql.hive.HiveAnalysis$.apply(HiveStrategies.scala:148)~
 ~at org.apache.spark.sql.hive.HiveAnalysis$.apply(HiveStrategies.scala:147)~
 ~at 
org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:87)~
 ~at 
org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:84)~
 ~at 
scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:57)~
 ~at 
scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:66)~
 ~at scala.collection.mutable.ArrayBuffer.foldLeft(ArrayBuffer.scala:48)~
 ~at 
org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:84)~
 ~at 
org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:76)~
 ~at scala.collection.immutable.List.foreach(List.scala:392)~
 ~at 
org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:76)~
 ~at 
org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:127)~
 ~at 
org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:121)~
 ~at 
org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:106)~
 ~at 
org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:105)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:201)~
 ~at 
org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:105)~
 ~at 

[jira] [Updated] (SPARK-28990) SparkSQL invalid call to toAttribute on unresolved object, tree: *

2019-09-05 Thread fengchaoge (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-28990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-28990:
---
Description: 
SparkSQL CREATE TABLE AS SELECT from a table that may not exist throws an 
exception like "org.apache.spark.sql.catalyst.analysis.UnresolvedException: 
Invalid call to toAttribute on unresolved object, tree: *", which is not 
friendly; a Spark user may have no idea what's wrong.

A simple SQL statement can reproduce it:

create table default.spark as select * from default.dual;

~spark-sql (default)> create table default.spark as select * from default.dual;~
 ~2019-09-05 16:27:24,127 INFO (main) [Logging.scala:logInfo(54)] - Parsing 
command: create table default.spark as select * from default.dual~
 ~2019-09-05 16:27:24,772 ERROR (main) [Logging.scala:logError(91)] - Failed in 
[create table default.spark as select * from default.dual]~
 ~org.apache.spark.sql.catalyst.analysis.UnresolvedException: Invalid call to 
toAttribute on unresolved object, tree: *~
 ~at 
org.apache.spark.sql.catalyst.analysis.Star.toAttribute(unresolved.scala:245)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.Project$$anonfun$output$1.apply(basicLogicalOperators.scala:52)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.Project$$anonfun$output$1.apply(basicLogicalOperators.scala:52)~
 ~at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)~
 ~at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)~
 ~at scala.collection.immutable.List.foreach(List.scala:392)~
 ~at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)~
 ~at scala.collection.immutable.List.map(List.scala:296)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.Project.output(basicLogicalOperators.scala:52)~
 ~at 
org.apache.spark.sql.hive.HiveAnalysis$$anonfun$apply$3.applyOrElse(HiveStrategies.scala:160)~
 ~at 
org.apache.spark.sql.hive.HiveAnalysis$$anonfun$apply$3.applyOrElse(HiveStrategies.scala:148)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$2.apply(AnalysisHelper.scala:108)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$2.apply(AnalysisHelper.scala:108)~
 ~at 
org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:107)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:106)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsDown(AnalysisHelper.scala:106)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperators(AnalysisHelper.scala:73)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:29)~
 ~at org.apache.spark.sql.hive.HiveAnalysis$.apply(HiveStrategies.scala:148)~
 ~at org.apache.spark.sql.hive.HiveAnalysis$.apply(HiveStrategies.scala:147)~
 ~at 
org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:87)~
 ~at 
org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:84)~
 ~at 
scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:57)~
 ~at 
scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:66)~
 ~at scala.collection.mutable.ArrayBuffer.foldLeft(ArrayBuffer.scala:48)~
 ~at 
org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:84)~
 ~at 
org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:76)~
 ~at scala.collection.immutable.List.foreach(List.scala:392)~
 ~at 
org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:76)~
 ~at 
org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:127)~
 ~at 
org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:121)~
 ~at 
org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:106)~
 ~at 
org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:105)~
 ~at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:201)~
 ~at 
org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:105)~
 ~at 

[jira] [Updated] (SPARK-28990) SparkSQL invalid call to toAttribute on unresolved object, tree: *

2019-09-05 Thread fengchaoge (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-28990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-28990:
---
Description: 
SparkSQL CREATE TABLE AS SELECT from a table that may not exist throws an 
exception like "org.apache.spark.sql.catalyst.analysis.UnresolvedException: 
Invalid call to toAttribute on unresolved object, tree: *", which is not 
friendly; a Spark user may have no idea what's wrong.

A simple SQL statement can reproduce it:

create table default.spark as select * from default.dual;

~spark-sql (default)> create table default.spark as select * from default.dual;~
 ~2019-09-05 16:27:24,127 INFO (main) [Logging.scala:logInfo(54)] - Parsing command: create table default.spark as select * from default.dual~
 ~2019-09-05 16:27:24,772 ERROR (main) [Logging.scala:logError(91)] - Failed in [create table default.spark as select * from default.dual]~
 ~org.apache.spark.sql.catalyst.analysis.UnresolvedException: Invalid call to toAttribute on unresolved object, tree: *~
 ~at org.apache.spark.sql.catalyst.analysis.Star.toAttribute(unresolved.scala:245)~
 ~at org.apache.spark.sql.catalyst.plans.logical.Project$$anonfun$output$1.apply(basicLogicalOperators.scala:52)~
 ~at org.apache.spark.sql.catalyst.plans.logical.Project$$anonfun$output$1.apply(basicLogicalOperators.scala:52)~
 ~at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)~
 ~at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)~
 ~at scala.collection.immutable.List.foreach(List.scala:392)~
 ~at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)~
 ~at scala.collection.immutable.List.map(List.scala:296)~
 ~at org.apache.spark.sql.catalyst.plans.logical.Project.output(basicLogicalOperators.scala:52)~
 ~at org.apache.spark.sql.hive.HiveAnalysis$$anonfun$apply$3.applyOrElse(HiveStrategies.scala:160)~
 ~at org.apache.spark.sql.hive.HiveAnalysis$$anonfun$apply$3.applyOrElse(HiveStrategies.scala:148)~
 ~at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$2.apply(AnalysisHelper.scala:108)~
 ~at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$2.apply(AnalysisHelper.scala:108)~
 ~at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)~
 ~at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:107)~
 ~at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:106)~
 ~at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)~
 ~at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsDown(AnalysisHelper.scala:106)~
 ~at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29)~
 ~at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperators(AnalysisHelper.scala:73)~
 ~at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:29)~
 ~at org.apache.spark.sql.hive.HiveAnalysis$.apply(HiveStrategies.scala:148)~
 ~at org.apache.spark.sql.hive.HiveAnalysis$.apply(HiveStrategies.scala:147)~
 ~at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:87)~
 ~at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:84)~
 ~at scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:57)~
 ~at scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:66)~
 ~at scala.collection.mutable.ArrayBuffer.foldLeft(ArrayBuffer.scala:48)~
 ~at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:84)~
 ~at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:76)~
 ~at scala.collection.immutable.List.foreach(List.scala:392)~
 ~at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:76)~
 ~at org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:127)~
 ~at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:121)~
 ~at org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:106)~
 ~at org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:105)~
 ~at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:201)~
 ~at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:105)~
 ~at 
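
For context on why the error is so confusing: the trace above shows HiveAnalysis calling Project.output while the plan is still unresolved, so the Star ('*') left behind by the missing table blows up in Star.toAttribute. Below is a minimal, hypothetical sketch of a guard that would surface a readable error instead; the pattern mirrors the classes named in the trace, but the guard and its message are assumptions, not Spark's actual fix.

{code:java}
import org.apache.spark.sql.AnalysisException
import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
import org.apache.spark.sql.catalyst.rules.Rule
import org.apache.spark.sql.execution.command.DDLUtils
import org.apache.spark.sql.execution.datasources.CreateTable
import org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand

// Hypothetical sketch only (not the actual Spark patch): refuse to rewrite a
// Hive CTAS whose SELECT part is still unresolved, instead of calling
// query.output and tripping over Star.toAttribute as in the trace above.
object HiveAnalysisSketch extends Rule[LogicalPlan] {
  override def apply(plan: LogicalPlan): LogicalPlan = plan resolveOperators {
    case CreateTable(tableDesc, mode, Some(query)) if DDLUtils.isHiveTable(tableDesc) =>
      if (!query.resolved) {
        // A real fix would let the analyzer keep iterating toward its fixed
        // point; throwing here only shows where a friendly message would fit.
        throw new AnalysisException(
          s"Cannot resolve the SELECT part of CTAS for ${tableDesc.identifier}; " +
            "check that all referenced tables exist")
      }
      CreateHiveTableAsSelectCommand(tableDesc, query, query.output.map(_.name), mode)
  }
}
{code}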

[jira] [Updated] (SPARK-28990) SparkSQL invalid call to toAttribute on unresolved object, tree: *

2019-09-05 Thread fengchaoge (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-28990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-28990:
---
Description: 
SparkSQL CREATE TABLE AS SELECT from a table which may not exist throws an 
exception like 
"org.apache.spark.sql.catalyst.analysis.UnresolvedException: Invalid call to 
toAttribute on unresolved object, tree: *". This is not friendly; a Spark user 
may have no idea what's wrong.


  was:
 SparkSQL CREATE TABLE AS SELECT from a table which may not exist throws an 
exception like "org.apache.spark.sql.catalyst.analysis.UnresolvedException: 
Invalid call to toAttribute on unresolved object, tree: *". This is not 
friendly; a Spark user may have no idea what's wrong.



> SparkSQL invalid call to toAttribute on unresolved object, tree: *
> --
>
> Key: SPARK-28990
> URL: https://issues.apache.org/jira/browse/SPARK-28990
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.4.3
> Environment: A simple SQL statement can reproduce it, like this:
> create table default.spark as select * from default.dual;
> spark-sql (default)> create table default.spark as select * from default.dual;
> 2019-09-05 16:27:24,127 INFO  (main) [Logging.scala:logInfo(54)] - Parsing command: create table default.spark as select * from default.dual
> 2019-09-05 16:27:24,772 ERROR (main) [Logging.scala:logError(91)] - Failed in [create table default.spark as select * from default.dual]
> org.apache.spark.sql.catalyst.analysis.UnresolvedException: Invalid call to toAttribute on unresolved object, tree: *
> at org.apache.spark.sql.catalyst.analysis.Star.toAttribute(unresolved.scala:245)
> at org.apache.spark.sql.catalyst.plans.logical.Project$$anonfun$output$1.apply(basicLogicalOperators.scala:52)
> at org.apache.spark.sql.catalyst.plans.logical.Project$$anonfun$output$1.apply(basicLogicalOperators.scala:52)
> at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.immutable.List.foreach(List.scala:392)
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> at scala.collection.immutable.List.map(List.scala:296)
> at org.apache.spark.sql.catalyst.plans.logical.Project.output(basicLogicalOperators.scala:52)
> at org.apache.spark.sql.hive.HiveAnalysis$$anonfun$apply$3.applyOrElse(HiveStrategies.scala:160)
> at org.apache.spark.sql.hive.HiveAnalysis$$anonfun$apply$3.applyOrElse(HiveStrategies.scala:148)
> at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$2.apply(AnalysisHelper.scala:108)
> at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$2.apply(AnalysisHelper.scala:108)
> at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
> at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:107)
> at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:106)
> at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
> at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsDown(AnalysisHelper.scala:106)
> at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29)
> at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperators(AnalysisHelper.scala:73)
> at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:29)
> at org.apache.spark.sql.hive.HiveAnalysis$.apply(HiveStrategies.scala:148)
> at org.apache.spark.sql.hive.HiveAnalysis$.apply(HiveStrategies.scala:147)
> at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:87)
> at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:84)
> at scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:57)
> at scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:66)
> at scala.collection.mutable.ArrayBuffer.foldLeft(ArrayBuffer.scala:48)
> at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:84)
> at 

[jira] [Created] (SPARK-28990) SparkSQL invalid call to toAttribute on unresolved object, tree: *

2019-09-05 Thread fengchaoge (Jira)
fengchaoge created SPARK-28990:
--

 Summary: SparkSQL invalid call to toAttribute on unresolved 
object, tree: *
 Key: SPARK-28990
 URL: https://issues.apache.org/jira/browse/SPARK-28990
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 2.4.3
 Environment: A simple SQL statement can reproduce it, like this:
create table default.spark as select * from default.dual;

spark-sql (default)> create table default.spark as select * from default.dual;
2019-09-05 16:27:24,127 INFO  (main) [Logging.scala:logInfo(54)] - Parsing command: create table default.spark as select * from default.dual
2019-09-05 16:27:24,772 ERROR (main) [Logging.scala:logError(91)] - Failed in [create table default.spark as select * from default.dual]
org.apache.spark.sql.catalyst.analysis.UnresolvedException: Invalid call to toAttribute on unresolved object, tree: *
at org.apache.spark.sql.catalyst.analysis.Star.toAttribute(unresolved.scala:245)
at org.apache.spark.sql.catalyst.plans.logical.Project$$anonfun$output$1.apply(basicLogicalOperators.scala:52)
at org.apache.spark.sql.catalyst.plans.logical.Project$$anonfun$output$1.apply(basicLogicalOperators.scala:52)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.immutable.List.foreach(List.scala:392)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.immutable.List.map(List.scala:296)
at org.apache.spark.sql.catalyst.plans.logical.Project.output(basicLogicalOperators.scala:52)
at org.apache.spark.sql.hive.HiveAnalysis$$anonfun$apply$3.applyOrElse(HiveStrategies.scala:160)
at org.apache.spark.sql.hive.HiveAnalysis$$anonfun$apply$3.applyOrElse(HiveStrategies.scala:148)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$2.apply(AnalysisHelper.scala:108)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$2.apply(AnalysisHelper.scala:108)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:107)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:106)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsDown(AnalysisHelper.scala:106)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperators(AnalysisHelper.scala:73)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:29)
at org.apache.spark.sql.hive.HiveAnalysis$.apply(HiveStrategies.scala:148)
at org.apache.spark.sql.hive.HiveAnalysis$.apply(HiveStrategies.scala:147)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:87)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:84)
at scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:57)
at scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:66)
at scala.collection.mutable.ArrayBuffer.foldLeft(ArrayBuffer.scala:48)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:84)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:76)
at scala.collection.immutable.List.foreach(List.scala:392)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:76)
at org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:127)
at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:121)
at org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:106)
at org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:105)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:201)
at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:105)

[jira] [Commented] (SPARK-21918) HiveClient shouldn't share Hive object between different thread

2018-06-21 Thread fengchaoge (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-21918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519149#comment-16519149
 ] 

fengchaoge commented on SPARK-21918:


Hello Hu Liu, can you share your patch? We are suffering from DDL and DML 
problems in the STS. We would much appreciate it if you could provide the 
patch; the Spark community would be honored.
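
For anyone following along, here is a minimal sketch of the per-thread caching that the description below proposes, assuming Hive's Hive.get(HiveConf) API; the class and field names are illustrative, not the actual patch:

{code:java}
import org.apache.hadoop.hive.conf.HiveConf
import org.apache.hadoop.hive.ql.metadata.Hive

// Illustrative sketch (assumed names, not the real patch): cache one Hive
// client per thread instead of one shared instance, so each impersonated
// session thread talks to the metastore as the right user. An
// InheritableThreadLocal also hands the parent thread's client down to child
// threads, matching the "pass the Hive object of the parent thread to the
// child thread" idea from the description.
class PerThreadHiveClient(conf: HiveConf) {
  private val cachedHive = new InheritableThreadLocal[Hive]()

  def client: Hive = {
    val cached = cachedHive.get()
    if (cached != null) {
      cached
    } else {
      val c = Hive.get(conf) // bound to the calling thread's user context
      cachedHive.set(c)
      c
    }
  }
}
{code}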

> HiveClient shouldn't share Hive object between different thread
> ---
>
> Key: SPARK-21918
> URL: https://issues.apache.org/jira/browse/SPARK-21918
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.2.0
>Reporter: Hu Liu,
>Priority: Major
>
> I'm testing the Spark Thrift Server and found that all the DDL statements are 
> run as user hive even if hive.server2.enable.doAs=true.
> The root cause is that the Hive object is shared between different threads in 
> HiveClientImpl:
> {code:java}
>   private def client: Hive = {
> if (clientLoader.cachedHive != null) {
>   clientLoader.cachedHive.asInstanceOf[Hive]
> } else {
>   val c = Hive.get(conf)
>   clientLoader.cachedHive = c
>   c
> }
>   }
> {code}
> But in impersonation mode, we should share the Hive object only within a 
> single thread so that the metastore client in Hive is associated with the 
> right user.
> We can fix this by passing the parent thread's Hive object to the child 
> thread when running the SQL.
> I already have an initial patch for review, and I'm glad to work on it if 
> anyone could assign it to me.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-21918) HiveClient shouldn't share Hive object between different thread

2018-05-31 Thread fengchaoge (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-21918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497456#comment-16497456
 ] 

fengchaoge commented on SPARK-21918:


Hu Liu, are you still around?

> HiveClient shouldn't share Hive object between different thread
> ---
>
> Key: SPARK-21918
> URL: https://issues.apache.org/jira/browse/SPARK-21918
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.2.0
>Reporter: Hu Liu,
>Priority: Major
>
> I'm testing the Spark Thrift Server and found that all the DDL statements are 
> run as user hive even if hive.server2.enable.doAs=true.
> The root cause is that the Hive object is shared between different threads in 
> HiveClientImpl:
> {code:java}
>   private def client: Hive = {
> if (clientLoader.cachedHive != null) {
>   clientLoader.cachedHive.asInstanceOf[Hive]
> } else {
>   val c = Hive.get(conf)
>   clientLoader.cachedHive = c
>   c
> }
>   }
> {code}
> But in impersonation mode, we should share the Hive object only within a 
> single thread so that the metastore client in Hive is associated with the 
> right user.
> We can fix this by passing the parent thread's Hive object to the child 
> thread when running the SQL.
> I already have an initial patch for review, and I'm glad to work on it if 
> anyone could assign it to me.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2018-04-27 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Attachment: (was: t1.zip)

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
>Priority: Major
> Fix For: 2.1.1
>
> Attachments: test.JPG, test1.JPG, test2.JPG
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2018-04-27 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Attachment: t1.zip

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
>Priority: Major
> Fix For: 2.1.1
>
> Attachments: test.JPG, test1.JPG, test2.JPG
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2018-04-25 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Attachment: (was: 1tes.zip)

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
>Priority: Major
> Fix For: 2.1.1
>
> Attachments: test.JPG, test1.JPG, test2.JPG
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2018-04-25 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Attachment: 1tes.zip

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
>Priority: Major
> Fix For: 2.1.1
>
> Attachments: 1tes.zip, test.JPG, test1.JPG, test2.JPG
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2018-04-08 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Attachment: (was: login.controller.js)

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
>Priority: Major
> Fix For: 2.1.1
>
> Attachments: test.JPG, test1.JPG, test2.JPG
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2018-04-08 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Attachment: login.controller.js

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
>Priority: Major
> Fix For: 2.1.1
>
> Attachments: login.controller.js, test.JPG, test1.JPG, test2.JPG
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2018-04-07 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Attachment: (was: pom.xml)

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
>Priority: Major
> Fix For: 2.1.1
>
> Attachments: test.JPG, test1.JPG, test2.JPG
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2018-04-07 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Attachment: pom.xml

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
>Priority: Major
> Fix For: 2.1.1
>
> Attachments: pom.xml, test.JPG, test1.JPG, test2.JPG
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2018-03-30 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Attachment: HmsClient.bak

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
>Priority: Major
> Fix For: 2.1.1
>
> Attachments: test.JPG, test1.JPG, test2.JPG
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2018-03-30 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Attachment: (was: HmsClient.bak)

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
>Priority: Major
> Fix For: 2.1.1
>
> Attachments: test.JPG, test1.JPG, test2.JPG
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2018-03-29 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Attachment: (was: duibi1.zip)

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
>Priority: Major
> Fix For: 2.1.1
>
> Attachments: test.JPG, test1.JPG, test2.JPG
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2018-03-29 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Attachment: (was: duibi2.zip)

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
>Priority: Major
> Fix For: 2.1.1
>
> Attachments: test.JPG, test1.JPG, test2.JPG
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2018-03-29 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Attachment: duibi2.zip

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
>Priority: Major
> Fix For: 2.1.1
>
> Attachments: duibi1.zip, duibi2.zip, test.JPG, test1.JPG, test2.JPG
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2018-03-29 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Attachment: duibi1.zip

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
>Priority: Major
> Fix For: 2.1.1
>
> Attachments: duibi1.zip, duibi2.zip, test.JPG, test1.JPG, test2.JPG
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2018-03-29 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Attachment: (was: shiro.ini)

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
>Priority: Major
> Fix For: 2.1.1
>
> Attachments: test.JPG, test1.JPG, test2.JPG
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2018-03-29 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Attachment: (was: SecurityRestApi.java)

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
>Priority: Major
> Fix For: 2.1.1
>
> Attachments: test.JPG, test1.JPG, test2.JPG
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2018-03-29 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Attachment: (was: zeppelin-site.xml)

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
>Priority: Major
> Fix For: 2.1.1
>
> Attachments: test.JPG, test1.JPG, test2.JPG
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2018-03-29 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Attachment: (was: ZeppelinConfiguration.java)

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
>Priority: Major
> Fix For: 2.1.1
>
> Attachments: test.JPG, test1.JPG, test2.JPG
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2018-03-29 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Attachment: (was: SecurityUtils.java)

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
>Priority: Major
> Fix For: 2.1.1
>
> Attachments: test.JPG, test1.JPG, test2.JPG
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2018-03-29 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Attachment: (was: LoginRestApi.java)

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
>Priority: Major
> Fix For: 2.1.1
>
> Attachments: SecurityRestApi.java, SecurityUtils.java, 
> ZeppelinConfiguration.java, shiro.ini, test.JPG, test1.JPG, test2.JPG, 
> zeppelin-site.xml
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2018-03-29 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Attachment: (was: GetUserList.java)

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
>Priority: Major
> Fix For: 2.1.1
>
> Attachments: SecurityRestApi.java, SecurityUtils.java, 
> ZeppelinConfiguration.java, shiro.ini, test.JPG, test1.JPG, test2.JPG, 
> zeppelin-site.xml
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2018-03-29 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Attachment: (was: GbdLdapRealm.java)

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
>Priority: Major
> Fix For: 2.1.1
>
> Attachments: SecurityRestApi.java, SecurityUtils.java, 
> ZeppelinConfiguration.java, shiro.ini, test.JPG, test1.JPG, test2.JPG, 
> zeppelin-site.xml
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2018-03-29 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Description: (was: !test.JPG! !test2.JPG! When there are large 'case when' 
expressions in Spark SQL, the CodeGenerator fails to compile them. 
The error message is followed by a huge dump of generated source code 
before it finally fails.

java.util.concurrent.ExecutionException: java.lang.Exception: failed to 
compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
"apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V"
 of class 
"org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection"
 grows beyond 64 KB.

It seems that SPARK-13242 solved this problem in spark-1.6.2; however, it 
appears in spark-2.1.1 again. 
https://issues.apache.org/jira/browse/SPARK-13242.

Is there something wrong?)

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
>Priority: Major
> Fix For: 2.1.1
>
> Attachments: GbdLdapRealm.java, GetUserList.java, LoginRestApi.java, 
> SecurityRestApi.java, SecurityUtils.java, ZeppelinConfiguration.java, 
> shiro.ini, test.JPG, test1.JPG, test2.JPG, zeppelin-site.xml
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2018-03-29 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Attachment: zeppelin-site.xml
ZeppelinConfiguration.java
shiro.ini
SecurityUtils.java
SecurityRestApi.java
LoginRestApi.java
GetUserList.java
GbdLdapRealm.java

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
>Priority: Major
> Fix For: 2.1.1
>
> Attachments: GbdLdapRealm.java, GetUserList.java, LoginRestApi.java, 
> SecurityRestApi.java, SecurityUtils.java, ZeppelinConfiguration.java, 
> shiro.ini, test.JPG, test1.JPG, test2.JPG, zeppelin-site.xml
>
>
> !test.JPG! !test2.JPG! When there are large 'case when' expressions in 
> Spark SQL, the CodeGenerator fails to compile them. 
> The error message is followed by a huge dump of generated source code 
> before it finally fails.
> java.util.concurrent.ExecutionException: java.lang.Exception: failed to 
> compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
> "apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V"
>  of class 
> "org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection"
>  grows beyond 64 KB.
> It seems that SPARK-13242 solved this problem in spark-1.6.2; however, it 
> appears in spark-2.1.1 again. 
> https://issues.apache.org/jira/browse/SPARK-13242.
> Is there something wrong?
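
For context, the failure is straightforward to reproduce without the 
reporter's production tables: any single expression with thousands of 
branches can push the generated SpecificUnsafeProjection method past the 
JVM's 64 KB bytecode-per-method limit. A minimal sketch (hypothetical code, 
not taken from this thread; assumes a local Spark 2.1.x build):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, lit, when}

object WideCaseWhenRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("wide-case-when-repro")
      .master("local[*]")
      .getOrCreate()
    val df = spark.range(100).toDF("id")
    // Chain a few thousand WHEN branches into one CASE expression; on
    // affected versions the generated projection method grows beyond
    // 64 KB and compilation fails with a JaninoRuntimeException.
    val wideCase = (1 to 3000).foldLeft(when(col("id") === 0, lit(0))) {
      (acc, i) => acc.when(col("id") === i, lit(i))
    }.otherwise(lit(-1))
    df.select(wideCase.as("bucket")).collect()
    spark.stop()
  }
}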



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2018-03-29 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Description: 
!test.JPG! !test2.JPG! When there are large 'case when' expressions in 
Spark SQL, the CodeGenerator fails to compile them. 
The error message is followed by a huge dump of generated source code 
before it finally fails.

java.util.concurrent.ExecutionException: java.lang.Exception: failed to 
compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
"apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V"
 of class 
"org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection"
 grows beyond 64 KB.

It seems that SPARK-13242 solved this problem in spark-1.6.2; however, it 
appears in spark-2.1.1 again. 
https://issues.apache.org/jira/browse/SPARK-13242.

Is there something wrong?

  was:
When there are large 'case when' expressions in Spark SQL, the CodeGenerator 
fails to compile them. 
The error message is followed by a huge dump of generated source code 
before it finally fails.

java.util.concurrent.ExecutionException: java.lang.Exception: failed to 
compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V
 of class 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection
 grows beyond 64 KB.

It seems that SPARK-13242 solved this problem in spark-1.6.2; however, it 
appears in spark-2.1.1 again. 
https://issues.apache.org/jira/browse/SPARK-13242.

Is there something wrong?


> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
>Priority: Major
> Fix For: 2.1.1
>
> Attachments: test.JPG, test1.JPG, test2.JPG
>
>
> !test.JPG! !test2.JPG! When there are large 'case when' expressions in 
> Spark SQL, the CodeGenerator fails to compile them. 
> The error message is followed by a huge dump of generated source code 
> before it finally fails.
> java.util.concurrent.ExecutionException: java.lang.Exception: failed to 
> compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
> "apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V"
>  of class 
> "org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection"
>  grows beyond 64 KB.
> It seems that SPARK-13242 solved this problem in spark-1.6.2; however, it 
> appears in spark-2.1.1 again. 
> https://issues.apache.org/jira/browse/SPARK-13242.
> Is there something wrong?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2017-07-11 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Attachment: (was: test.xml)

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
> Fix For: 2.1.1
>
> Attachments: test1.JPG, test2.JPG, test.JPG
>
>
> When there are large 'case when' expressions in Spark SQL, the CodeGenerator 
> fails to compile them. 
> The error message is followed by a huge dump of generated source code 
> before it finally fails.
> java.util.concurrent.ExecutionException: java.lang.Exception: failed to 
> compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
> apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V
>  of class 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection
>  grows beyond 64 KB.
> It seems that SPARK-13242 solved this problem in spark-1.6.2; however, it 
> appears in spark-2.1.1 again. 
> https://issues.apache.org/jira/browse/SPARK-13242.
> Is there something wrong?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2017-07-11 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Attachment: test.xml

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
> Fix For: 2.1.1
>
> Attachments: test1.JPG, test2.JPG, test.JPG, test.xml
>
>
> When there are large 'case when' expressions in Spark SQL, the CodeGenerator 
> fails to compile them. 
> The error message is followed by a huge dump of generated source code 
> before it finally fails.
> java.util.concurrent.ExecutionException: java.lang.Exception: failed to 
> compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
> apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V
>  of class 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection
>  grows beyond 64 KB.
> It seems that SPARK-13242 solved this problem in spark-1.6.2; however, it 
> appears in spark-2.1.1 again. 
> https://issues.apache.org/jira/browse/SPARK-13242.
> Is there something wrong?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Issue Comment Deleted] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2017-07-10 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Comment: was deleted

(was: Thank you very much. What should I do next? Thank you for your 
guidance.

The table is like this:
CREATE TABLE app_claim_assess_rule_granularity(
  report_no string,   
  case_times string,  
  id_clm_channel_process string,  
  loss_object_no string,  
  assess_times string,
  loss_name string,   
  max_loss_amount string, 
  impairment_amount string,   
  rule_code string,   
  rule_name string,   
  application_code string,
  brand_name string,  
  manufacturer_name string,   
  series_name string, 
  group_name string,  
  model_name string,  
  end_case_date string,   
  updated_date string,
  assess_um string,   
  car_mark string,
  garage_code string, 
  garage_name string  , 
  garage_type string  , 
  privilege_group_name string  , 
  small_type string,
  is_transfer string,  
  praepostor_type string,  
  channel_type string,  
  channel_flag string,  
  loss_type string, 
  loss_agree_amount string,  
  loss_count_agree string,   
  department_code string,   
  department_code_01 string,  
  department_code_02_v string,   
  department_code_03 string,   
  department_code_04 string,  
  department_code_name_01 string,   
  department_code_name_02 string,  
  department_code_name_03 string,  
  department_code_name_04 string,  
  assess_dept_code string,   
  verify_department_code_01 string,  
  verify_department_code_02 string,  
  verify_department_code_03 string,  
  verify_department_code_04 string,   
  verify_department_code_name_01 string,  
  verify_department_code_name_02 string,  
  verify_department_code_name_03 string,   
  verify_department_code_name_04 string,  
  assess_quote_price_um string,  
  assess_guide_um string,  
  assess_center_guide_um string,  
  rule_type string,  
  loss_count_assess string,  
  loss_name_rank string,  
  loss_name_rule_rank string,   
  both_trigger string)
PARTITIONED BY ( 
  department_code_02 string)
ROW FORMAT SERDE 
  'org.apache.hadoop.hive.serde2.columnar.LazyBinaryColumnarSerDe' 
WITH SERDEPROPERTIES ( 
  'field.delim'='\u0001', 
  'serialization.format'='\u0001') 
STORED AS INPUTFORMAT 
  'org.apache.hadoop.hive.ql.io.RCFileInputFormat' 
OUTPUTFORMAT 
  'org.apache.hadoop.hive.ql.io.RCFileOutputFormat'
LOCATION
  
'hdfs://hdp-hdfs01/user/hive/warehouse/gbd_dm_pac_safe.db/app_claim_assess_rule_granularity'
TBLPROPERTIES (
  'transient_lastDdlTime'='1499412897'))

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
> Fix For: 2.1.1
>
> Attachments: test1.JPG, test2.JPG, test.JPG
>
>
> When there are large 'case when' expressions in Spark SQL, the CodeGenerator 
> fails to compile them. 
> The error message is followed by a huge dump of generated source code 
> before it finally fails.
> java.util.concurrent.ExecutionException: java.lang.Exception: failed to 
> compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
> apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V
>  of class 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection
>  grows beyond 64 KB.
> It seems that SPARK-13242 solved this problem in spark-1.6.2; however, it 
> appears in spark-2.1.1 again. 
> https://issues.apache.org/jira/browse/SPARK-13242.
> Is there something wrong?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Issue Comment Deleted] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2017-07-10 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Comment: was deleted

(was: 1. create database GBD_DM_PAC_SAFE;
2. use GBD_DM_PAC_SAFE;
3. create table app_claim_assess_rule_granularity;
The SQL is like this, just for a test:

SELECT x___sql___.2jjg AS cjjg, x___sql___.3jjg AS djjg, (((CASE WHEN ((CASE 
WHEN (CASE WHEN (x___sql___.id_clm_channel_process IS NULL) THEN 0 WHEN NOT 
(x___sql___.id_clm_channel_process IS NULL) THEN 1 ELSE CAST(NULL AS INT) END) 
= 0 THEN CAST(NULL AS DOUBLE) ELSE CAST(x___sql___.jcbj AS DOUBLE) / (CASE WHEN 
(x___sql___.id_clm_channel_process IS NULL) THEN 0 WHEN NOT 
(x___sql___.id_clm_channel_process IS NULL) THEN 1 ELSE CAST(NULL AS INT) END) 
END) < 0.20001) THEN 1 WHEN ((CASE WHEN (CASE WHEN 
(x___sql___.id_clm_channel_process IS NULL) THEN 0 WHEN NOT 
(x___sql___.id_clm_channel_process IS NULL) THEN 1 ELSE CAST(NULL AS INT) END) 
= 0 THEN CAST(NULL AS DOUBLE) ELSE CAST(x___sql___.jcbj AS DOUBLE) / (CASE WHEN 
(x___sql___.id_clm_channel_process IS NULL) THEN 0 WHEN NOT 
(x___sql___.id_clm_channel_process IS NULL) THEN 1 ELSE CAST(NULL AS INT) END) 
END) < 0.5) THEN 2 ELSE 3 END) * 10) + (CASE WHEN ((CASE WHEN x___sql___.jcbj = 
0 THEN CAST(NULL AS DOUBLE) ELSE x___sql___.impairment_amount / x___sql___.jcbj 
END) < 300) THEN 1 WHEN ((CASE WHEN x___sql___.jcbj = 0 THEN CAST(NULL AS 
DOUBLE) ELSE x___sql___.impairment_amount / x___sql___.jcbj END) < 2000) THEN 2 
ELSE 3 END)) AS calculation_0290210162047568, x___sql___.updated_date AS 
calculation_0910125090644141, (CASE WHEN (x___sql___.small_type = '01') THEN 
'人工报价' ELSE (CASE WHEN (x___sql___.small_type = '02') THEN '指导人' ELSE 
x___sql___.small_type END) END) AS calculation_1700125090616887, 
x___sql___.impairment_amount AS calculation_1750125100625463, (CASE WHEN ((CASE 
WHEN (CASE WHEN (x___sql___.id_clm_channel_process IS NULL) THEN 0 WHEN NOT 
(x___sql___.id_clm_channel_process IS NULL) THEN 1 ELSE CAST(NULL AS INT) END) 
= 0 THEN CAST(NULL AS DOUBLE) ELSE CAST(x___sql___.jcbj AS DOUBLE) / (CASE WHEN 
(x___sql___.id_clm_channel_process IS NULL) THEN 0 WHEN NOT 
(x___sql___.id_clm_channel_process IS NULL) THEN 1 ELSE CAST(NULL AS INT) END) 
END) < 0.20001) THEN 1 WHEN ((CASE WHEN (CASE WHEN 
(x___sql___.id_clm_channel_process IS NULL) THEN 0 WHEN NOT 
(x___sql___.id_clm_channel_process IS NULL) THEN 1 ELSE CAST(NULL AS INT) END) 
= 0 THEN CAST(NULL AS DOUBLE) ELSE CAST(x___sql___.jcbj AS DOUBLE) / (CASE WHEN 
(x___sql___.id_clm_channel_process IS NULL) THEN 0 WHEN NOT 
(x___sql___.id_clm_channel_process IS NULL) THEN 1 ELSE CAST(NULL AS INT) END) 
END) < 0.5) THEN 2 ELSE 3 END) AS calculation_2170210160935298, (CASE WHEN 
(x___sql___.id_clm_channel_process IS NULL) THEN 0 WHEN NOT 
(x___sql___.id_clm_channel_process IS NULL) THEN 1 ELSE CAST(NULL AS INT) END) 
AS calculation_2390124154901057, (CASE WHEN (x___sql___.application_code = 
'DSFS') THEN '定损发送规则' ELSE x___sql___.application_code END) AS 
calculation_2770125090429540, x___sql___.rule_name AS 
calculation_3060125090537403, (CASE WHEN ((CASE WHEN 
(x___sql___.id_clm_channel_process IS NULL) THEN 0 WHEN NOT 
(x___sql___.id_clm_channel_process IS NULL) THEN 1 ELSE CAST(NULL AS INT) END) 
< 10) THEN '暂不考虑规则' ELSE (CASE WHEN CASE WHEN ((CASE WHEN (CASE WHEN 
(x___sql___.id_clm_channel_process IS NULL) THEN 0 WHEN NOT 
(x___sql___.id_clm_channel_process IS NULL) THEN 1 ELSE CAST(NULL AS INT) END) 
= 0 THEN CAST(NULL AS DOUBLE) ELSE CAST(x___sql___.jcbj AS DOUBLE) / (CASE WHEN 
(x___sql___.id_clm_channel_process IS NULL) THEN 0 WHEN NOT 
(x___sql___.id_clm_channel_process IS NULL) THEN 1 ELSE CAST(NULL AS INT) END) 
END) < 0.20001) THEN 1 WHEN ((CASE WHEN (CASE WHEN 
(x___sql___.id_clm_channel_process IS NULL) THEN 0 WHEN NOT 
(x___sql___.id_clm_channel_process IS NULL) THEN 1 ELSE CAST(NULL AS INT) END) 
= 0 THEN CAST(NULL AS DOUBLE) ELSE CAST(x___sql___.jcbj AS DOUBLE) / (CASE WHEN 
(x___sql___.id_clm_channel_process IS NULL) THEN 0 WHEN NOT 
(x___sql___.id_clm_channel_process IS NULL) THEN 1 ELSE CAST(NULL AS INT) END) 
END) < 0.5) THEN 2 ELSE 3 END) * 10) + (CASE WHEN ((CASE WHEN x___sql___.jcbj = 
0 THEN CAST(NULL AS DOUBLE) ELSE x___sql___.impairment_amount / x___sql___.jcbj 
END) < 300) THEN 1 WHEN ((CASE WHEN x___sql___.jcbj = 0 THEN CAST(NULL AS 
DOUBLE) ELSE x___sql___.impairment_amount / x___sql___.jcbj END) < 2000) THEN 2 
ELSE 3 END)) = 13) THEN '最紧要优化规则' WHEN ((CASE WHEN ((CASE WHEN (CASE WHEN 
(x___sql___.id_clm_channel_process IS NULL) THEN 0 WHEN NOT 
(x___sql___.id_clm_channel_process IS NULL) THEN 1 ELSE CAST(NULL AS INT) END) 
= 0 THEN CAST(NULL AS DOUBLE) ELSE CAST(x___sql___.jcbj AS DOUBLE) / (CASE WHEN 
(x___sql___.id_clm_channel_process IS NULL) THEN 0 WHEN NOT 
(x___sql___.id_clm_channel_process IS NULL) THEN 1 ELSE CAST(NULL AS INT) END) 
END) < 

[jira] [Issue Comment Deleted] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2017-07-09 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Comment: was deleted

(was: !http://example.com/image.png!)

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
> Fix For: 2.1.1
>
> Attachments: test1.JPG, test2.JPG, test.JPG
>
>
> When there are large 'case when' expressions in Spark SQL, the CodeGenerator 
> fails to compile them. 
> The error message is followed by a huge dump of generated source code 
> before it finally fails.
> java.util.concurrent.ExecutionException: java.lang.Exception: failed to 
> compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
> apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V
>  of class 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection
>  grows beyond 64 KB.
> It seems that SPARK-13242 solved this problem in spark-1.6.2; however, it 
> appears in spark-2.1.1 again. 
> https://issues.apache.org/jira/browse/SPARK-13242.
> Is there something wrong?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2017-07-09 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Attachment: test2.JPG

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
> Fix For: 2.1.1
>
> Attachments: test1.JPG, test2.JPG, test.JPG
>
>
> When there are large 'case when' expressions in Spark SQL, the CodeGenerator 
> fails to compile them. 
> The error message is followed by a huge dump of generated source code 
> before it finally fails.
> java.util.concurrent.ExecutionException: java.lang.Exception: failed to 
> compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
> apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V
>  of class 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection
>  grows beyond 64 KB.
> It seems that SPARK-13242 solved this problem in spark-1.6.2; however, it 
> appears in spark-2.1.1 again. 
> https://issues.apache.org/jira/browse/SPARK-13242.
> Is there something wrong?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2017-07-09 Thread fengchaoge (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079799#comment-16079799
 ] 

fengchaoge commented on SPARK-21337:


OK, I will have a try. Thank you very much.

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
> Fix For: 2.1.1
>
> Attachments: test1.JPG, test.JPG
>
>
> When there are large 'case when' expressions in Spark SQL, the CodeGenerator 
> fails to compile them. 
> The error message is followed by a huge dump of generated source code 
> before it finally fails.
> java.util.concurrent.ExecutionException: java.lang.Exception: failed to 
> compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
> apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V
>  of class 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection
>  grows beyond 64 KB.
> It seems that SPARK-13242 solved this problem in spark-1.6.2; however, it 
> appears in spark-2.1.1 again. 
> https://issues.apache.org/jira/browse/SPARK-13242.
> Is there something wrong?
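
One workaround often suggested for this class of failure (an assumption on 
the editor's part, not advice given in this thread) is to stop encoding the 
mapping as one giant CASE WHEN and instead join against a small lookup 
table, which keeps each generated method small. A sketch with hypothetical 
names (df carries the key column id; buckets holds one row per former WHEN 
branch):

import org.apache.spark.sql.functions.broadcast
import spark.implicits._

// Replace the WHEN branches with (id, bucket) rows in a tiny table.
val buckets = Seq((0, 0), (1, 1), (2, 2)).toDF("id", "bucket")
// A broadcast left join yields the same bucket column without generating
// a single huge projection method.
val result = df.join(broadcast(buckets), Seq("id"), "left")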



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2017-07-09 Thread fengchaoge (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079773#comment-16079773
 ] 

fengchaoge commented on SPARK-21337:


1. create database GBD_DM_PAC_SAFE;
2. use GBD_DM_PAC_SAFE;
3. create table app_claim_assess_rule_granularity;
The SQL is like this, just for a test:

SELECT x___sql___.2jjg AS cjjg, x___sql___.3jjg AS djjg, (((CASE WHEN ((CASE 
WHEN (CASE WHEN (x___sql___.id_clm_channel_process IS NULL) THEN 0 WHEN NOT 
(x___sql___.id_clm_channel_process IS NULL) THEN 1 ELSE CAST(NULL AS INT) END) 
= 0 THEN CAST(NULL AS DOUBLE) ELSE CAST(x___sql___.jcbj AS DOUBLE) / (CASE WHEN 
(x___sql___.id_clm_channel_process IS NULL) THEN 0 WHEN NOT 
(x___sql___.id_clm_channel_process IS NULL) THEN 1 ELSE CAST(NULL AS INT) END) 
END) < 0.20001) THEN 1 WHEN ((CASE WHEN (CASE WHEN 
(x___sql___.id_clm_channel_process IS NULL) THEN 0 WHEN NOT 
(x___sql___.id_clm_channel_process IS NULL) THEN 1 ELSE CAST(NULL AS INT) END) 
= 0 THEN CAST(NULL AS DOUBLE) ELSE CAST(x___sql___.jcbj AS DOUBLE) / (CASE WHEN 
(x___sql___.id_clm_channel_process IS NULL) THEN 0 WHEN NOT 
(x___sql___.id_clm_channel_process IS NULL) THEN 1 ELSE CAST(NULL AS INT) END) 
END) < 0.5) THEN 2 ELSE 3 END) * 10) + (CASE WHEN ((CASE WHEN x___sql___.jcbj = 
0 THEN CAST(NULL AS DOUBLE) ELSE x___sql___.impairment_amount / x___sql___.jcbj 
END) < 300) THEN 1 WHEN ((CASE WHEN x___sql___.jcbj = 0 THEN CAST(NULL AS 
DOUBLE) ELSE x___sql___.impairment_amount / x___sql___.jcbj END) < 2000) THEN 2 
ELSE 3 END)) AS calculation_0290210162047568, x___sql___.updated_date AS 
calculation_0910125090644141, (CASE WHEN (x___sql___.small_type = '01') THEN 
'人工报价' ELSE (CASE WHEN (x___sql___.small_type = '02') THEN '指导人' ELSE 
x___sql___.small_type END) END) AS calculation_1700125090616887, 
x___sql___.impairment_amount AS calculation_1750125100625463, (CASE WHEN ((CASE 
WHEN (CASE WHEN (x___sql___.id_clm_channel_process IS NULL) THEN 0 WHEN NOT 
(x___sql___.id_clm_channel_process IS NULL) THEN 1 ELSE CAST(NULL AS INT) END) 
= 0 THEN CAST(NULL AS DOUBLE) ELSE CAST(x___sql___.jcbj AS DOUBLE) / (CASE WHEN 
(x___sql___.id_clm_channel_process IS NULL) THEN 0 WHEN NOT 
(x___sql___.id_clm_channel_process IS NULL) THEN 1 ELSE CAST(NULL AS INT) END) 
END) < 0.20001) THEN 1 WHEN ((CASE WHEN (CASE WHEN 
(x___sql___.id_clm_channel_process IS NULL) THEN 0 WHEN NOT 
(x___sql___.id_clm_channel_process IS NULL) THEN 1 ELSE CAST(NULL AS INT) END) 
= 0 THEN CAST(NULL AS DOUBLE) ELSE CAST(x___sql___.jcbj AS DOUBLE) / (CASE WHEN 
(x___sql___.id_clm_channel_process IS NULL) THEN 0 WHEN NOT 
(x___sql___.id_clm_channel_process IS NULL) THEN 1 ELSE CAST(NULL AS INT) END) 
END) < 0.5) THEN 2 ELSE 3 END) AS calculation_2170210160935298, (CASE WHEN 
(x___sql___.id_clm_channel_process IS NULL) THEN 0 WHEN NOT 
(x___sql___.id_clm_channel_process IS NULL) THEN 1 ELSE CAST(NULL AS INT) END) 
AS calculation_2390124154901057, (CASE WHEN (x___sql___.application_code = 
'DSFS') THEN '定损发送规则' ELSE x___sql___.application_code END) AS 
calculation_2770125090429540, x___sql___.rule_name AS 
calculation_3060125090537403, (CASE WHEN ((CASE WHEN 
(x___sql___.id_clm_channel_process IS NULL) THEN 0 WHEN NOT 
(x___sql___.id_clm_channel_process IS NULL) THEN 1 ELSE CAST(NULL AS INT) END) 
< 10) THEN '暂不考虑规则' ELSE (CASE WHEN CASE WHEN ((CASE WHEN (CASE WHEN 
(x___sql___.id_clm_channel_process IS NULL) THEN 0 WHEN NOT 
(x___sql___.id_clm_channel_process IS NULL) THEN 1 ELSE CAST(NULL AS INT) END) 
= 0 THEN CAST(NULL AS DOUBLE) ELSE CAST(x___sql___.jcbj AS DOUBLE) / (CASE WHEN 
(x___sql___.id_clm_channel_process IS NULL) THEN 0 WHEN NOT 
(x___sql___.id_clm_channel_process IS NULL) THEN 1 ELSE CAST(NULL AS INT) END) 
END) < 0.20001) THEN 1 WHEN ((CASE WHEN (CASE WHEN 
(x___sql___.id_clm_channel_process IS NULL) THEN 0 WHEN NOT 
(x___sql___.id_clm_channel_process IS NULL) THEN 1 ELSE CAST(NULL AS INT) END) 
= 0 THEN CAST(NULL AS DOUBLE) ELSE CAST(x___sql___.jcbj AS DOUBLE) / (CASE WHEN 
(x___sql___.id_clm_channel_process IS NULL) THEN 0 WHEN NOT 
(x___sql___.id_clm_channel_process IS NULL) THEN 1 ELSE CAST(NULL AS INT) END) 
END) < 0.5) THEN 2 ELSE 3 END) * 10) + (CASE WHEN ((CASE WHEN x___sql___.jcbj = 
0 THEN CAST(NULL AS DOUBLE) ELSE x___sql___.impairment_amount / x___sql___.jcbj 
END) < 300) THEN 1 WHEN ((CASE WHEN x___sql___.jcbj = 0 THEN CAST(NULL AS 
DOUBLE) ELSE x___sql___.impairment_amount / x___sql___.jcbj END) < 2000) THEN 2 
ELSE 3 END)) = 13) THEN '最紧要优化规则' WHEN ((CASE WHEN ((CASE WHEN (CASE WHEN 
(x___sql___.id_clm_channel_process IS NULL) THEN 0 WHEN NOT 
(x___sql___.id_clm_channel_process IS NULL) THEN 1 ELSE CAST(NULL AS INT) END) 
= 0 THEN CAST(NULL AS DOUBLE) ELSE CAST(x___sql___.jcbj AS DOUBLE) / (CASE WHEN 
(x___sql___.id_clm_channel_process IS NULL) THEN 0 WHEN NOT 
(x___sql___.id_clm_channel_process IS NULL) THEN 1 ELSE CAST(NULL AS INT) END) 
END) < 

[jira] [Commented] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2017-07-09 Thread fengchaoge (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079790#comment-16079790
 ] 

fengchaoge commented on SPARK-21337:


The attachments show what actually happened. I have no idea about code 
generation; can someone help? Thanks very much.

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
> Fix For: 2.1.1
>
> Attachments: test1.JPG, test.JPG
>
>
> When there are large 'case when' expressions in Spark SQL, the CodeGenerator 
> fails to compile them. 
> The error message is followed by a huge dump of generated source code 
> before it finally fails.
> java.util.concurrent.ExecutionException: java.lang.Exception: failed to 
> compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
> apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V
>  of class 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection
>  grows beyond 64 KB.
> It seems that SPARK-13242 solved this problem in spark-1.6.2; however, it 
> appears in spark-2.1.1 again. 
> https://issues.apache.org/jira/browse/SPARK-13242.
> Is there something wrong?
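
For anyone who, like the reporter, wants to see what the code generator 
actually produced: Spark 2.x ships a debugging helper that prints the 
generated Java source for a plan. A sketch, assuming a spark-shell session 
with some DataFrame df already defined:

import org.apache.spark.sql.execution.debug._

// Prints each codegen subtree of df's physical plan together with the
// Java source that Janino is asked to compile, which makes it easy to
// spot a single method that has grown out of control.
df.debugCodegen()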



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2017-07-09 Thread fengchaoge (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079785#comment-16079785
 ] 

fengchaoge commented on SPARK-21337:


Thank you very much. What should I do next? Thank you for your guidance.

The table is like this:
CREATE TABLE app_claim_assess_rule_granularity(
  report_no string,   
  case_times string,  
  id_clm_channel_process string,  
  loss_object_no string,  
  assess_times string,
  loss_name string,   
  max_loss_amount string, 
  impairment_amount string,   
  rule_code string,   
  rule_name string,   
  application_code string,
  brand_name string,  
  manufacturer_name string,   
  series_name string, 
  group_name string,  
  model_name string,  
  end_case_date string,   
  updated_date string,
  assess_um string,   
  car_mark string,
  garage_code string, 
  garage_name string  , 
  garage_type string  , 
  privilege_group_name string  , 
  small_type string,
  is_transfer string,  
  praepostor_type string,  
  channel_type string,  
  channel_flag string,  
  loss_type string, 
  loss_agree_amount string,  
  loss_count_agree string,   
  department_code string,   
  department_code_01 string,  
  department_code_02_v string,   
  department_code_03 string,   
  department_code_04 string,  
  department_code_name_01 string,   
  department_code_name_02 string,  
  department_code_name_03 string,  
  department_code_name_04 string,  
  assess_dept_code string,   
  verify_department_code_01 string,  
  verify_department_code_02 string,  
  verify_department_code_03 string,  
  verify_department_code_04 string,   
  verify_department_code_name_01 string,  
  verify_department_code_name_02 string,  
  verify_department_code_name_03 string,   
  verify_department_code_name_04 string,  
  assess_quote_price_um string,  
  assess_guide_um string,  
  assess_center_guide_um string,  
  rule_type string,  
  loss_count_assess string,  
  loss_name_rank string,  
  loss_name_rule_rank string,   
  both_trigger string)
PARTITIONED BY ( 
  department_code_02 string)
ROW FORMAT SERDE 
  'org.apache.hadoop.hive.serde2.columnar.LazyBinaryColumnarSerDe' 
WITH SERDEPROPERTIES ( 
  'field.delim'='\u0001', 
  'serialization.format'='\u0001') 
STORED AS INPUTFORMAT 
  'org.apache.hadoop.hive.ql.io.RCFileInputFormat' 
OUTPUTFORMAT 
  'org.apache.hadoop.hive.ql.io.RCFileOutputFormat'
LOCATION
  
'hdfs://hdp-hdfs01/user/hive/warehouse/gbd_dm_pac_safe.db/app_claim_assess_rule_granularity'
TBLPROPERTIES (
  'transient_lastDdlTime'='1499412897')

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
> Fix For: 2.1.1
>
> Attachments: test1.JPG, test.JPG
>
>
> When there are large 'case when' expressions in Spark SQL, the CodeGenerator 
> fails to compile them. 
> The error message is followed by a huge dump of generated source code 
> before it finally fails.
> java.util.concurrent.ExecutionException: java.lang.Exception: failed to 
> compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
> apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V
>  of class 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection
>  grows beyond 64 KB.
> It seems that SPARK-13242 solved this problem in spark-1.6.2; however, it 
> appears in spark-2.1.1 again. 
> https://issues.apache.org/jira/browse/SPARK-13242.
> Is there something wrong?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2017-07-09 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Attachment: test1.JPG
test.JPG

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
> Fix For: 2.1.1
>
> Attachments: test1.JPG, test.JPG
>
>
> When there are large 'case when' expressions in Spark SQL, the CodeGenerator 
> fails to compile them. 
> The error message is followed by a huge dump of generated source code 
> before it finally fails.
> java.util.concurrent.ExecutionException: java.lang.Exception: failed to 
> compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
> apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V
>  of class 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection
>  grows beyond 64 KB.
> It seems that SPARK-13242 solved this problem in spark-1.6.2; however, it 
> appears in spark-2.1.1 again. 
> https://issues.apache.org/jira/browse/SPARK-13242.
> Is there something wrong?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2017-07-07 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Description: 
When there are large 'case when' expressions in Spark SQL, the CodeGenerator 
fails to compile them. 
The error message is followed by a huge dump of generated source code 
before it finally fails.

java.util.concurrent.ExecutionException: java.lang.Exception: failed to 
compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V
 of class 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection
 grows beyond 64 KB.

It seems that SPARK-13242 solved this problem in spark-1.6.2; however, it 
appears in spark-2.1.1 again. 
https://issues.apache.org/jira/browse/SPARK-13242.

Is there something wrong?

  was:
When there are large 'case when' expressions in Spark SQL, the CodeGenerator 
fails to compile them. 
The error message is followed by a huge dump of generated source code 
before it finally fails.

java.util.concurrent.ExecutionException: java.lang.Exception: failed to 
compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V
 of class 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection
 grows beyond 64 KB.

It seems that SPARK-13242 solved this problem in spark-1.6.1; however, it 
appears in spark-2.1.1 again. 
https://issues.apache.org/jira/browse/SPARK-13242.

Is there something wrong?


> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
> Fix For: 2.1.1
>
>
> When there are large 'case when' expressions in Spark SQL, the CodeGenerator 
> fails to compile them. 
> The error message is followed by a huge dump of generated source code 
> before it finally fails.
> java.util.concurrent.ExecutionException: java.lang.Exception: failed to 
> compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
> apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V
>  of class 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection
>  grows beyond 64 KB.
> It seems that SPARK-13242 solved this problem in spark-1.6.2; however, it 
> appears in spark-2.1.1 again. 
> https://issues.apache.org/jira/browse/SPARK-13242.
> Is there something wrong?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2017-07-07 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Description: 
When there are large 'case when' expressions in Spark SQL, the CodeGenerator 
fails to compile them. 
The error message is followed by a huge dump of generated source code 
before it finally fails.

java.util.concurrent.ExecutionException: java.lang.Exception: failed to 
compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V
 of class 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection
 grows beyond 64 KB.

It seems that SPARK-13242 solved this problem in spark-1.6.1; however, it 
appears in spark-2.1.1 again. 
https://issues.apache.org/jira/browse/SPARK-13242.

Is there something wrong?

  was:
When there are large 'case when' expressions in Spark SQL, the CodeGenerator 
fails to compile them. 
The error message is followed by a huge dump of generated source code 
before it finally fails.

java.util.concurrent.ExecutionException: java.lang.Exception: failed to 
compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V
 of class 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection
 grows beyond 64 KB.

It seems like SPARK-13242 solved this problem in spark-1.6.1; however, it 
appears in spark-2.1.1 again. 
https://issues.apache.org/jira/browse/SPARK-13242.

Is there something wrong?


> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
> Fix For: 2.1.1
>
>
> When there are large 'case when' expressions in Spark SQL, the CodeGenerator 
> fails to compile them. 
> The error message is followed by a huge dump of generated source code 
> before it finally fails.
> java.util.concurrent.ExecutionException: java.lang.Exception: failed to 
> compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
> apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V
>  of class 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection
>  grows beyond 64 KB.
> It seems that SPARK-13242 solved this problem in spark-1.6.1; however, it 
> appears in spark-2.1.1 again. 
> https://issues.apache.org/jira/browse/SPARK-13242.
> Is there something wrong?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2017-07-07 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Description: 
When there are large 'case when' expressions in Spark SQL, the CodeGenerator 
fails to compile them. 
The error message is followed by a huge dump of generated source code 
before it finally fails.

java.util.concurrent.ExecutionException: java.lang.Exception: failed to 
compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V
 of class 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection
 grows beyond 64 KB.

It seems like SPARK-13242 solved this problem in spark-1.6.1; however, it 
appears in spark-2.1.1 again. 
https://issues.apache.org/jira/browse/SPARK-13242.

Is there something wrong?

  was:
When there are large 'case when' expressions in Spark SQL, the CodeGenerator 
fails to compile them. 

ERROR INFO:
java.util.concurrent.ExecutionException: java.lang.Exception: failed to 
compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V
 of class 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection
 grows beyond 64 KB.

It seems like SPARK-13242 solved this problem in spark-1.6.1; however, it 
appears in spark-2.1.1 again. 
https://issues.apache.org/jira/browse/SPARK-13242.

Is there something wrong?


> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
> Fix For: 2.1.1
>
>
> When there are large 'case when' expressions in Spark SQL, the CodeGenerator 
> fails to compile them. 
> The error message is followed by a huge dump of generated source code 
> before it finally fails.
> java.util.concurrent.ExecutionException: java.lang.Exception: failed to 
> compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
> apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V
>  of class 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection
>  grows beyond 64 KB.
> It seems like SPARK-13242 solved this problem in spark-1.6.1; however, it 
> appears in spark-2.1.1 again. 
> https://issues.apache.org/jira/browse/SPARK-13242.
> Is there something wrong?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2017-07-07 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Description: 
When there are large 'case when' expressions in Spark SQL, the CodeGenerator 
fails to compile them. 

ERROR INFO:
java.util.concurrent.ExecutionException: java.lang.Exception: failed to 
compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V
 of class 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection
 grows beyond 64 KB.

It seems like SPARK-13242 solved this problem in spark-1.6.1; however, it 
appears in spark-2.1.1 again. 
https://issues.apache.org/jira/browse/SPARK-13242.

Is there something wrong?

  was:
When there are large 'case when' expressions in Spark SQL, the CodeGenerator 
fails to compile them. 

ERROR INFO:
java.util.concurrent.ExecutionException: java.lang.Exception: failed to 
compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V
 of class 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection
 grows beyond 64 KB.

SPARK-13242 solved this problem in spark-1.6.1; however, it appears in 
spark-2.1.1 again. https://issues.apache.org/jira/browse/SPARK-13242.

Is there something wrong?


> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
> Fix For: 2.1.1
>
>
> When there are large 'case when' expressions in Spark SQL, the CodeGenerator 
> fails to compile them. 
> ERROR INFO:
> java.util.concurrent.ExecutionException: java.lang.Exception: failed to 
> compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
> apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V
>  of class 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection
>  grows beyond 64 KB.
> It seems like SPARK-13242 solved this problem in spark-1.6.1; however, it 
> appears in spark-2.1.1 again. 
> https://issues.apache.org/jira/browse/SPARK-13242.
> Is there something wrong?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2017-07-07 Thread fengchaoge (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077730#comment-16077730
 ] 

fengchaoge commented on SPARK-21337:


!http://example.com/image.png!

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
> Fix For: 2.1.1
>
>
> When there are large 'case when' expressions in Spark SQL, the CodeGenerator 
> fails to compile them. 
> ERROR INFO:
> java.util.concurrent.ExecutionException: java.lang.Exception: failed to 
> compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
> apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V
>  of class 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection
>  grows beyond 64 KB.
> SPARK-13242 solved this problem in spark-1.6.1; however, it appears in 
> spark-2.1.1 again. https://issues.apache.org/jira/browse/SPARK-13242.
> Is there something wrong?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2017-07-07 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Description: 
When there are large 'case when' expressions in Spark SQL, the CodeGenerator 
fails to compile them. 

ERROR INFO:
java.util.concurrent.ExecutionException: java.lang.Exception: failed to 
compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V
 of class 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection
 grows beyond 64 KB.

SPARK-13242 solved this problem in spark-1.6.1; however, it appears in 
spark-2.1.1 again. https://issues.apache.org/jira/browse/SPARK-13242.

Is there something wrong?

  was:java.util.concurrent.ExecutionException: java.lang.Exception: failed to 
compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V
 of class 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection
 grows beyond 64 KB


> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
> Fix For: 2.1.1
>
>
> When there are large 'case when' expressions in Spark SQL, the CodeGenerator 
> fails to compile them. 
> ERROR INFO:
> java.util.concurrent.ExecutionException: java.lang.Exception: failed to 
> compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
> apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V
>  of class 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection
>  grows beyond 64 KB.
> SPARK-13242 solved this problem in spark-1.6.1; however, it appears in 
> spark-2.1.1 again. https://issues.apache.org/jira/browse/SPARK-13242.
> Is there something wrong?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2017-07-07 Thread fengchaoge (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchaoge updated SPARK-21337:
---
Description: java.util.concurrent.ExecutionException: java.lang.Exception: 
failed to compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V
 of class 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection
 grows beyond 64 KB  (was: java.util.concurrent.ExecutionException: 
java.lang.Exception: failed to compile: 
org.codehaus.janino.JaninoRuntimeException: Code of method 
apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V
 of class 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection
 grows beyond 64 KB!attachment-name.jpg|thumbnail!)

> SQL which has large ‘case when’ expressions may cause code generation beyond 
> 64KB
> -
>
> Key: SPARK-21337
> URL: https://issues.apache.org/jira/browse/SPARK-21337
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.1.1
> Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>Reporter: fengchaoge
> Fix For: 2.1.1
>
>
> java.util.concurrent.ExecutionException: java.lang.Exception: failed to 
> compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
> apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V
>  of class 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection
>  grows beyond 64 KB



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-21337) SQL which has large ‘case when’ expressions may cause code generation beyond 64KB

2017-07-07 Thread fengchaoge (JIRA)
fengchaoge created SPARK-21337:
--

 Summary: SQL which has large ‘case when’ expressions may cause 
code generation beyond 64KB
 Key: SPARK-21337
 URL: https://issues.apache.org/jira/browse/SPARK-21337
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 2.1.1
 Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
Reporter: fengchaoge
 Fix For: 2.1.1


java.util.concurrent.ExecutionException: java.lang.Exception: failed to 
compile: org.codehaus.janino.JaninoRuntimeException: Code of method 
apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V
 of class 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection
 grows beyond 64 KB



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-21010) Spark-Sql Can't Handle char() type Well

2017-06-07 Thread fengchaoge (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042107#comment-16042107
 ] 

fengchaoge commented on SPARK-21010:


thank you 

> Spark-Sql Can't  Handle char() type Well
> 
>
> Key: SPARK-21010
> URL: https://issues.apache.org/jira/browse/SPARK-21010
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 1.6.1, 2.1.0, 2.1.1
> Environment: spark1.6.1 hadoop-2.6.0-cdh5.4.2
>Reporter: fengchaoge
>
> We create a table in spark-sql like this:
> 1. create table cid_test (name string,id char(20)) ROW FORMAT DELIMITED 
> FIELDS TERMINATED BY ' ' stored as textfile;
> 2. LOAD DATA LOCAL INPATH '/home/hadoop/id.txt' OVERWRITE INTO TABLE  
> cid_test;
> content for id.txt:
> fengchaoge 41302219990808
> 3. select * from cid_test where id='41302219990808'; 
> 4. select * from cid_test where id='41302219990808  ';
> In the third step we get nothing, but in the fourth step we get the right 
> result: we must append two spaces to the id literal to get the right value.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-21010) Spark-Sql Can't Handle char() type Well

2017-06-07 Thread fengchaoge (JIRA)
fengchaoge created SPARK-21010:
--

 Summary: Spark-Sql Can't  Handle char() type Well
 Key: SPARK-21010
 URL: https://issues.apache.org/jira/browse/SPARK-21010
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 2.1.1, 2.1.0, 1.6.1
 Environment: spark1.6.1 hadoop-2.6.0-cdh5.4.2
Reporter: fengchaoge
 Fix For: 2.1.1


We create a table in spark-sql like this:
1. create table cid_test (name string,id char(20)) ROW FORMAT DELIMITED FIELDS 
TERMINATED BY ' ' stored as textfile;

2. LOAD DATA LOCAL INPATH '/home/hadoop/id.txt' OVERWRITE INTO TABLE  cid_test;

content for id.txt:
fengchaoge 41302219990808

3. select * from cid_test where id='41302219990808'; 

4. select * from cid_test where id='41302219990808  ';

In the third step we get nothing, but in the fourth step we get the right 
result: we must append two spaces to the id literal to get the right value.
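
One plausible explanation, sketched under the assumption that Hive pads 
CHAR(20) values with trailing spaces while Spark compares against the unpadded 
literal; rtrim() is a workaround, not a fix for the underlying bug:

// Hedged workaround sketch; assumes a Spark 2.x Hive-enabled session and
// the cid_test table created in the steps above.
import org.apache.spark.sql.SparkSession

object CharPaddingWorkaround {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("char-padding")
      .enableHiveSupport()
      .getOrCreate()

    // If the stored CHAR(20) value carries trailing spaces, an unpadded
    // literal never matches; trimming the column before comparing does.
    spark.sql("SELECT * FROM cid_test WHERE rtrim(id) = '41302219990808'").show()

    spark.stop()
  }
}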



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-16647) sparksql1.6.2 on yarn with hive metastore1.0.0 throws "alter_table_with_cascade" exception

2016-08-28 Thread fengchaoge (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-16647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15443140#comment-15443140
 ] 

fengchaoge commented on SPARK-16647:


Have you resolved this problem? I have the same problem; my Hive version is 
0.13.1.
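
In case it helps, the usual fix is to pin Spark's metastore client to the Hive 
version actually deployed. A hedged spark-defaults.conf sketch for Hive 0.13.1; 
the classpath below is an assumption and must include the Hive jars plus the 
matching Hadoop jars:

# Hedged sketch: match Spark's metastore client to the deployed Hive version.
spark.sql.hive.metastore.version  0.13.1
spark.sql.hive.metastore.jars     /path/to/hive-0.13.1/lib/*:/path/to/hadoop-client/*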

> sparksql1.6.2 on yarn with hive metastore1.0.0 throws 
> "alter_table_with_cascade" exception
> -
>
> Key: SPARK-16647
> URL: https://issues.apache.org/jira/browse/SPARK-16647
> Project: Spark
>  Issue Type: Bug
>Reporter: zhangshuxin
>
> my spark version is 1.6.2 (also 1.5.2 and 1.5.0) and my hive version is 1.0.0
> When I execute SQL like 'create table tbl1 as select * from tbl2' or 
> 'insert overwrite table tbl1 select * from tbl2', I get the following 
> exception:
> 16/07/20 10:14:13 WARN metastore.RetryingMetaStoreClient: MetaStoreClient 
> lost connection. Attempting to reconnect.
> org.apache.thrift.TApplicationException: Invalid method name: 
> 'alter_table_with_cascade'
> at 
> org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
> at 
> org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_alter_table_with_cascade(ThriftHiveMetastore.java:1374)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.alter_table_with_cascade(ThriftHiveMetastore.java:1358)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_table(HiveMetaStoreClient.java:340)
> at 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.alter_table(SessionHiveMetaStoreClient.java:251)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156)
> at com.sun.proxy.$Proxy27.alter_table(Unknown Source)
> at org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:496)
> at org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:484)
> at org.apache.hadoop.hive.ql.metadata.Hive.loadTable(Hive.java:1668)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.spark.sql.hive.client.Shim_v0_14.loadTable(HiveShim.scala:441)
> at 
> org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$loadTable$1.apply$mcV$sp(ClientWrapper.scala:489)
> at 
> org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$loadTable$1.apply(ClientWrapper.scala:489)
> at 
> org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$loadTable$1.apply(ClientWrapper.scala:489)
> at 
> org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$withHiveState$1.apply(ClientWrapper.scala:256)
> at 
> org.apache.spark.sql.hive.client.ClientWrapper.retryLocked(ClientWrapper.scala:211)
> at 
> org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:248)
> at 
> org.apache.spark.sql.hive.client.ClientWrapper.loadTable(ClientWrapper.scala:488)
> at 
> org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult$lzycompute(InsertIntoHiveTable.scala:243)
> at 
> org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult(InsertIntoHiveTable.scala:127)
> at 
> org.apache.spark.sql.hive.execution.InsertIntoHiveTable.doExecute(InsertIntoHiveTable.scala:263)
> at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:140)
> at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:138)
> at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
> at 
> org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:138)
> at 
> org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:933)
> at 
> org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:933)
> at 
> org.apache.spark.sql.hive.execution.CreateTableAsSelect.run(CreateTableAsSelect.scala:89)
> at 
> org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57)
> at 
> 

[jira] [Commented] (SPARK-15817) Spark client picking hive 1.2.1 by default which failed to alter a table name

2016-07-27 Thread fengchaoge (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-15817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15395261#comment-15395261
 ] 

fengchaoge commented on SPARK-15817:


Set spark.sql.hive.metastore.jars to 
/your_path/spark_assembly-1.6.1-hadoopx.x.x.jar and try again.
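
Spelled out as a hedged spark-defaults.conf sketch: note that 
spark.sql.hive.metastore.jars takes a JVM classpath rather than a bare 
directory, which may be why the setting quoted below had no effect. The paths 
are placeholders:

# Hedged sketch for Spark 1.6 against a Hive 0.14 metastore; the classpath
# must cover all Hive jars plus the matching Hadoop jars.
spark.sql.hive.metastore.version  0.14.0
spark.sql.hive.metastore.jars     /usr/hdp/current/hive-client/lib/*:/usr/hdp/current/hadoop-client/*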

> Spark client picking hive 1.2.1 by default which failed to alter a table name
> -
>
> Key: SPARK-15817
> URL: https://issues.apache.org/jira/browse/SPARK-15817
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Shell
>Affects Versions: 1.6.1
>Reporter: Nataraj Gorantla
>
> Some of our Scala scripts are failing with the error below. 
> FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.DDLTask. Unable to alter table. Invalid
> method name: 'alter_table_with_cascade'
> msg: org.apache.spark.sql.execution.QueryExecutionException: FAILED:
> Spark, when invoked, tries to initialize Hive 1.2.1 by default. We have Hive 
> 0.14 installed. Some background investigation on our side explained this. 
> Analysis
> The "alter_table_with_cascade" error occurs because of a metastore version 
> mismatch in Spark. 
> To correct this error, set the proper metastore version in the Spark config.
> I tried adding a couple of parameters to the spark-defaults.conf file. 
> spark.sql.hive.metastore.version 0.14.0
> #spark.sql.hive.metastore.jars maven
> spark.sql.hive.metastore.jars =/usr/hdp/current/hive-client/lib
> I still see issues. Can you please let me know if you have an alternative to 
> fix this issue? 
> Thanks,
> Nataraj G



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-15703) Spark UI doesn't show all tasks as completed when it should

2016-07-09 Thread fengchaoge (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-15703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15369416#comment-15369416
 ] 

fengchaoge commented on SPARK-15703:


Thomas Graves, in the AsynchronousListenerBus class the capacity of the event 
queue, EVENT_QUEUE_CAPACITY, is fixed; under high concurrency this value needs 
to be changed, maybe to 2 or higher.
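
For later releases, where the hard-coded constant was made configurable, a 
hedged spark-defaults.conf sketch; the value is illustrative, and the property 
name varies by version (spark.scheduler.listenerbus.eventqueue.size in early 
2.x, spark.scheduler.listenerbus.eventqueue.capacity, default 10000, from 2.3):

# Hedged sketch: raise the listener-bus event queue size if events are
# being dropped under high load.
spark.scheduler.listenerbus.eventqueue.capacity  20000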


> Spark UI doesn't show all tasks as completed when it should
> ---
>
> Key: SPARK-15703
> URL: https://issues.apache.org/jira/browse/SPARK-15703
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.0.0
>Reporter: Thomas Graves
>Priority: Critical
> Attachments: Screen Shot 2016-06-01 at 11.21.32 AM.png, Screen Shot 
> 2016-06-01 at 11.23.48 AM.png
>
>
> The Spark UI doesn't seem to be showing all the tasks and metrics.
> I ran a job with 10 tasks but the stage detail page says it completed 93029:
> Summary Metrics for 93029 Completed Tasks
> The "Stages for all jobs" page lists that only 89519/10 tasks finished, but 
> it's completed.  The metrics for shuffle write and input are also incorrect.
> I will attach screen shots.
> I checked the logs and it does show that all the tasks actually finished.
> 16/06/01 16:15:42 INFO TaskSetManager: Finished task 59880.0 in stage 2.0 
> (TID 54038) in 265309 ms on 10.213.45.51 (10/10)
> 16/06/01 16:15:42 INFO YarnClusterScheduler: Removed TaskSet 2.0, whose tasks 
> have all completed, from pool



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org