[jira] [Commented] (SPARK-38265) Update comments of ExecutorAllocationClient

2022-02-20 Thread Shockang (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-38265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17495269#comment-17495269
 ] 

Shockang commented on SPARK-38265:
--

Working on this.

> Update comments of ExecutorAllocationClient
> ---
>
> Key: SPARK-38265
> URL: https://issues.apache.org/jira/browse/SPARK-38265
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 3.2.1
>Reporter: Shockang
>Priority: Trivial
> Fix For: 3.3.0
>
>
> The class comment of ExecutorAllocationClient is out of date.
> {code:java}
> This is currently supported only in YARN mode. {code}
> Nowadays, this is supported in the following modes: Spark standalone, YARN 
> (client and cluster deploy modes), Mesos, and Kubernetes.
>  
> In my opinion, this comment should be updated.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-38265) Update comments of ExecutorAllocationClient

2022-02-20 Thread Shockang (Jira)
Shockang created SPARK-38265:


 Summary: Update comments of ExecutorAllocationClient
 Key: SPARK-38265
 URL: https://issues.apache.org/jira/browse/SPARK-38265
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Affects Versions: 3.2.1
Reporter: Shockang
 Fix For: 3.3.0


The class comment of ExecutorAllocationClient is out of date.
{code:java}
This is currently supported only in YARN mode. {code}
Nowadays, this is supported in the following modes: Spark standalone, YARN 
(client and cluster deploy modes), Mesos, and Kubernetes.

 

In my opinion, this comment should be updated.
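For discussion, one possible rewording of the class comment. This is only a sketch, not the actual patch: the first sentence is the class comment's existing opening line, and the mode list simply restates the description above.

{code:java}
/**
 * A client that communicates with the cluster manager to request or kill executors.
 * This is currently supported in Spark standalone, YARN (client and cluster
 * deploy modes), Mesos, and Kubernetes.
 */
{code}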





[jira] [Resolved] (SPARK-37030) Maven build failed in windows!

2021-11-03 Thread Shockang (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-37030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shockang resolved SPARK-37030.
--
Resolution: Done

> Maven build failed in windows!
> --
>
> Key: SPARK-37030
> URL: https://issues.apache.org/jira/browse/SPARK-37030
> Project: Spark
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 3.2.0
> Environment: OS: Windows 10 Professional
> OS Version: 21H1
> Maven Version: 3.6.3
>  
>Reporter: Shockang
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: image-2021-10-17-22-18-16-616.png
>
>
> I pulled the latest Spark master code on my local Windows 10 computer and 
> executed the following command:
> {code:java}
> mvn -DskipTests clean install{code}
> Build failed!
> !image-2021-10-17-22-18-16-616.png!
> {code:java}
> Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.8:run 
> (default) on project spark-core_2.12: An Ant BuildException has occured: 
> Execute failed: java.io.IOException: Cannot run program "bash" (in directory 
> "C:\bigdata\spark\core"): CreateProcess error=2{code}
> It seems that the maven-antrun-plugin cannot run because there is no bash on 
> Windows.
> The following code comes from the pom.xml of the spark-core module.
> {code:xml}
> <plugin>
>   <groupId>org.apache.maven.plugins</groupId>
>   <artifactId>maven-antrun-plugin</artifactId>
>   <executions>
>     <execution>
>       <phase>generate-resources</phase>
>       <configuration>
>         <!-- Execute the shell script to generate the spark build information. -->
>         <target>
>           <exec executable="bash">
>             <arg value="${project.basedir}/../build/spark-build-info"/>
>             <arg value="${project.build.directory}/extra-resources"/>
>             <arg value="${project.version}"/>
>           </exec>
>         </target>
>       </configuration>
>       <goals>
>         <goal>run</goal>
>       </goals>
>     </execution>
>   </executions>
> </plugin>
> {code}
>  





[jira] [Commented] (SPARK-37030) Maven build failed in windows!

2021-11-03 Thread Shockang (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-37030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17438017#comment-17438017
 ] 

Shockang commented on SPARK-37030:
--

[~hyukjin.kwon] Thank you for your suggestion. This problem has been solved.






[jira] [Commented] (SPARK-37030) Maven build failed in windows!

2021-11-02 Thread Shockang (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-37030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17437168#comment-17437168
 ] 

Shockang commented on SPARK-37030:
--

[~hyukjin.kwon] Even if the community does not support Spark on Windows, why does 
no one care about the programmers who use Windows...






[jira] [Updated] (SPARK-36853) Code failing on checkstyle

2021-10-17 Thread Shockang (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shockang updated SPARK-36853:
-
Attachment: image-2021-10-18-01-57-00-714.png

> Code failing on checkstyle
> --
>
> Key: SPARK-36853
> URL: https://issues.apache.org/jira/browse/SPARK-36853
> Project: Spark
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 3.3.0
>Reporter: Abhinav Kumar
>Priority: Trivial
> Attachments: image-2021-10-18-01-57-00-714.png, 
> spark_mvn_clean_install_skip_tests_in_windows.log
>
>
> There are more - just pasting a sample.
>  
> [INFO] There are 32 errors reported by Checkstyle 8.43 with 
> dev/checkstyle.xml ruleset.
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF11.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 107).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF12.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 116).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF13.java:[28] (sizes) 
> LineLength: Line is longer than 100 characters (found 104).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF13.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 125).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF14.java:[28] (sizes) 
> LineLength: Line is longer than 100 characters (found 109).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF14.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 134).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF15.java:[28] (sizes) 
> LineLength: Line is longer than 100 characters (found 114).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF15.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 143).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF16.java:[28] (sizes) 
> LineLength: Line is longer than 100 characters (found 119).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF16.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 152).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF17.java:[28] (sizes) 
> LineLength: Line is longer than 100 characters (found 124).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF17.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 161).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF18.java:[28] (sizes) 
> LineLength: Line is longer than 100 characters (found 129).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF18.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 170).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF19.java:[28] (sizes) 
> LineLength: Line is longer than 100 characters (found 134).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF19.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 179).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF20.java:[28] (sizes) 
> LineLength: Line is longer than 100 characters (found 139).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF20.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 188).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF21.java:[28] (sizes) 
> LineLength: Line is longer than 100 characters (found 144).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF21.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 197).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF22.java:[28] (sizes) 
> LineLength: Line is longer than 100 characters (found 149).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF22.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 206).
> [ERROR] src\main\java\org\apache\spark\sql\streaming\Trigger.java:[44,25] 
> (naming) MethodName: Method name 'ProcessingTime' must match pattern 
> '^[a-z][a-z0-9][a-zA-Z0-9_]*$'.
> [ERROR] src\main\java\org\apache\spark\sql\streaming\Trigger.java:[60,25] 
> (naming) MethodName: Method name 'ProcessingTime' must match pattern 
> '^[a-z][a-z0-9][a-zA-Z0-9_]*$'.
> [ERROR] src\main\java\org\apache\spark\sql\streaming\Trigger.java:[75,25] 
> (naming) MethodName: Method name 'ProcessingTime' must match pattern 
> '^[a-z][a-z0-9][a-zA-Z0-9_]*$'.
> [ERROR] src\main\java\org\apache\spark\sql\streaming\Trigger.java:[88,25] 
> (naming) MethodName: Method name 'ProcessingTime' must match pattern 
> '^[a-z][a-z0-9][a-zA-Z0-9_]*$'.
> [ERROR] src\main\java\org\apache\spark\sql\streaming\Trigger.java:[100,25] 
> (naming) MethodName: Method name 'Once' must match pattern 
> '^[a-z][a-z0-9][a-zA-Z0-9_]*$'.
> [ERROR] src\main\java\org\apache\spark\sql\streaming\Trigger.java:[110,25] 
> (naming) MethodName: Method name 'AvailableNow' must match pattern 

[jira] [Commented] (SPARK-36853) Code failing on checkstyle

2021-10-17 Thread Shockang (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-36853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17429743#comment-17429743
 ] 

Shockang commented on SPARK-36853:
--

Because of the following issue: 
[SPARK-37030|https://issues.apache.org/jira/browse/SPARK-37030], the Maven build 
failed on Windows.

I commented out the suspect bash-related code and re-executed the command:
{code:java}
mvn -DskipTests clean install
{code}
!image-2021-10-18-01-57-00-714.png!

For your reference, I have attached the build log.

[~hyukjin.kwon] Can this issue be split into multiple subtasks, since there 
are 131 errors?
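Most of the reported errors are LineLength and MethodName violations. The MethodName pattern quoted in the Checkstyle output can be checked directly; the class below is illustrative only (it is not part of the Spark build) and shows why names like ProcessingTime and Once are flagged:

```java
import java.util.regex.Pattern;

public class MethodNamePatternDemo {
    // Pattern copied from the Checkstyle MethodName errors in the log.
    static final Pattern METHOD_NAME =
        Pattern.compile("^[a-z][a-z0-9][a-zA-Z0-9_]*$");

    static boolean ok(String name) {
        return METHOD_NAME.matcher(name).matches();
    }

    public static void main(String[] args) {
        // Lower-camel-case passes; an upper-case first letter fails.
        System.out.println(ok("processingTime")); // true
        System.out.println(ok("ProcessingTime")); // false
        System.out.println(ok("Once"));           // false
    }
}
```

Names like these in Trigger.java are part of the public API, so renaming them is unlikely; a Checkstyle suppression is the more typical fix for such cases.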


[jira] [Updated] (SPARK-36853) Code failing on checkstyle

2021-10-17 Thread Shockang (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shockang updated SPARK-36853:
-
Attachment: spark_mvn_clean_install_skip_tests_in_windows.log


[jira] [Updated] (SPARK-37030) Maven build failed in windows!

2021-10-17 Thread Shockang (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-37030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shockang updated SPARK-37030:
-
Description: 
I pulled the latest Spark master code on my local Windows 10 computer and 
executed the following command:
{code:java}
mvn -DskipTests clean install{code}
Build failed!

!image-2021-10-17-22-18-16-616.png!
{code:java}
Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.8:run 
(default) on project spark-core_2.12: An Ant BuildException has occured: 
Execute failed: java.io.IOException: Cannot run program "bash" (in directory 
"C:\bigdata\spark\core"): CreateProcess error=2{code}
It seems that the maven-antrun-plugin cannot run because there is no bash on 
Windows.

The following code comes from the pom.xml of the spark-core module.
{code:xml}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <phase>generate-resources</phase>
      <configuration>
        <!-- Execute the shell script to generate the spark build information. -->
        <target>
          <exec executable="bash">
            <arg value="${project.basedir}/../build/spark-build-info"/>
            <arg value="${project.build.directory}/extra-resources"/>
            <arg value="${project.version}"/>
          </exec>
        </target>
      </configuration>
      <goals>
        <goal>run</goal>
      </goals>
    </execution>
  </executions>
</plugin>
{code}
 

  was:
I pulled the latest Spark master code on my local Windows 10 computer and 
executed the following command:
{code:java}
mvn -DskipTests clean install{code}
Build failed!

!image-2021-10-17-21-55-33-844.png!
{code:java}

Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.8:run 
(default) on project spark-core_2.12: An Ant BuildException has occured: 
Execute failed: java.io.IOException: Cannot run program "bash" (in directory 
"C:\bigdata\spark\core"): CreateProcess error=2{code}
It seems that the maven-antrun-plugin cannot run because there is no bash on 
Windows.

The following code comes from the pom.xml of the spark-core module.
{code:xml}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <phase>generate-resources</phase>
      <configuration>
        <!-- Execute the shell script to generate the spark build information. -->
        <target>
          <exec executable="bash">
            <arg value="${project.basedir}/../build/spark-build-info"/>
            <arg value="${project.build.directory}/extra-resources"/>
            <arg value="${project.version}"/>
          </exec>
        </target>
      </configuration>
      <goals>
        <goal>run</goal>
      </goals>
    </execution>
  </executions>
</plugin>
{code}
 







[jira] [Updated] (SPARK-37030) Maven build failed in windows!

2021-10-17 Thread Shockang (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-37030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shockang updated SPARK-37030:
-
Attachment: image-2021-10-17-22-18-16-616.png






[jira] [Created] (SPARK-37030) Maven build failed in windows!

2021-10-17 Thread Shockang (Jira)
Shockang created SPARK-37030:


 Summary: Maven build failed in windows!
 Key: SPARK-37030
 URL: https://issues.apache.org/jira/browse/SPARK-37030
 Project: Spark
  Issue Type: Bug
  Components: Build
Affects Versions: 3.2.0
 Environment: OS: Windows 10 Professional

OS Version: 21H1

Maven Version: 3.6.3

 
Reporter: Shockang
 Fix For: 3.2.0


I pulled the latest Spark master code on my local Windows 10 computer and 
executed the following command:
{code:java}
mvn -DskipTests clean install{code}
Build failed!

!image-2021-10-17-21-55-33-844.png!
{code:java}

Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.8:run 
(default) on project spark-core_2.12: An Ant BuildException has occured: 
Execute failed: java.io.IOException: Cannot run program "bash" (in directory 
"C:\bigdata\spark\core"): CreateProcess error=2{code}
It seems that the maven-antrun-plugin cannot run because there is no bash on 
Windows.

The following code comes from the pom.xml of the spark-core module.
{code:xml}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <phase>generate-resources</phase>
      <configuration>
        <!-- Execute the shell script to generate the spark build information. -->
        <target>
          <exec executable="bash">
            <arg value="${project.basedir}/../build/spark-build-info"/>
            <arg value="${project.build.directory}/extra-resources"/>
            <arg value="${project.version}"/>
          </exec>
        </target>
      </configuration>
      <goals>
        <goal>run</goal>
      </goals>
    </execution>
  </executions>
</plugin>
{code}
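CreateProcess error=2 is Windows' "file not found" error: Ant asks Java to start bash, Java cannot find it on the PATH, and the lookup failure surfaces as an IOException. The same failure mode can be reproduced on any OS by starting a command that does not exist; the command name below is made up for illustration.

```java
import java.io.IOException;

public class MissingCommandDemo {
    // Returns true when the command cannot even be started, which is
    // exactly what happens with "bash" on a Windows box without bash.
    static boolean failsToStart(String command) {
        try {
            Process p = new ProcessBuilder(command).start();
            p.destroy();
            return false;
        } catch (IOException e) {
            // On Windows the cause reports "CreateProcess error=2";
            // on POSIX systems it reports "No such file or directory".
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(failsToStart("no-such-command-xyz"));
    }
}
```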
 





[jira] [Commented] (SPARK-36853) Code failing on checkstyle

2021-09-26 Thread Shockang (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-36853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17420438#comment-17420438
 ] 

Shockang commented on SPARK-36853:
--

OK

> [ERROR] src\main\java\org\apache\spark\sql\streaming\Trigger.java:[100,25] 
> (naming) MethodName: Method name 'Once' must match pattern 
> '^[a-z][a-z0-9][a-zA-Z0-9_]*$'.
> [ERROR] src\main\java\org\apache\spark\sql\streaming\Trigger.java:[110,25] 
> (naming) MethodName: Method name 'AvailableNow' must match pattern 
> '^[a-z][a-z0-9][a-zA-Z0-9_]*$'.
> [ERROR] src\main\java\org\apache\spark\sql\streaming\Trigger.java:[120,25] 
> 

[jira] [Commented] (SPARK-36853) Code failing on checkstyle

2021-09-25 Thread Shockang (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-36853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17420207#comment-17420207
 ] 

Shockang commented on SPARK-36853:
--

[~abhinavofficial] Can you tell me how you triggered this error? If it were a 
real checkstyle violation, CI would normally report it and the full test suite 
would not pass.

> Code failing on checkstyle
> --
>
> Key: SPARK-36853
> URL: https://issues.apache.org/jira/browse/SPARK-36853
> Project: Spark
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 3.3.0
>Reporter: Abhinav Kumar
>Priority: Trivial
> Fix For: 3.3.0
>
>
> There are more errors; just pasting a sample.
>  
> [INFO] There are 32 errors reported by Checkstyle 8.43 with 
> dev/checkstyle.xml ruleset.
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF11.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 107).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF12.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 116).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF13.java:[28] (sizes) 
> LineLength: Line is longer than 100 characters (found 104).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF13.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 125).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF14.java:[28] (sizes) 
> LineLength: Line is longer than 100 characters (found 109).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF14.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 134).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF15.java:[28] (sizes) 
> LineLength: Line is longer than 100 characters (found 114).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF15.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 143).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF16.java:[28] (sizes) 
> LineLength: Line is longer than 100 characters (found 119).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF16.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 152).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF17.java:[28] (sizes) 
> LineLength: Line is longer than 100 characters (found 124).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF17.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 161).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF18.java:[28] (sizes) 
> LineLength: Line is longer than 100 characters (found 129).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF18.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 170).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF19.java:[28] (sizes) 
> LineLength: Line is longer than 100 characters (found 134).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF19.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 179).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF20.java:[28] (sizes) 
> LineLength: Line is longer than 100 characters (found 139).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF20.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 188).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF21.java:[28] (sizes) 
> LineLength: Line is longer than 100 characters (found 144).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF21.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 197).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF22.java:[28] (sizes) 
> LineLength: Line is longer than 100 characters (found 149).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF22.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 206).
> [ERROR] src\main\java\org\apache\spark\sql\streaming\Trigger.java:[44,25] 
> (naming) MethodName: Method name 'ProcessingTime' must match pattern 
> '^[a-z][a-z0-9][a-zA-Z0-9_]*$'.
> [ERROR] src\main\java\org\apache\spark\sql\streaming\Trigger.java:[60,25] 
> (naming) MethodName: Method name 'ProcessingTime' must match pattern 
> '^[a-z][a-z0-9][a-zA-Z0-9_]*$'.
> [ERROR] src\main\java\org\apache\spark\sql\streaming\Trigger.java:[75,25] 
> (naming) MethodName: Method name 'ProcessingTime' must match pattern 
> '^[a-z][a-z0-9][a-zA-Z0-9_]*$'.
> [ERROR] src\main\java\org\apache\spark\sql\streaming\Trigger.java:[88,25] 
> (naming) MethodName: Method name 'ProcessingTime' must match pattern 
> '^[a-z][a-z0-9][a-zA-Z0-9_]*$'.
> [ERROR] src\main\java\org\apache\spark\sql\streaming\Trigger.java:[100,25] 
> (naming) MethodName: Method name 'Once' must match pattern 
> '^[a-z][a-z0-9][a-zA-Z0-9_]*$'.
> [ERROR] 
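For context, the MethodName violations quoted above come from a plain regex check: checkstyle matches each method name against the pattern `^[a-z][a-z0-9][a-zA-Z0-9_]*$`, so names that start with an upper-case letter (like Trigger's `ProcessingTime`, `Once`, and `AvailableNow` factory methods) are rejected. A minimal self-contained sketch of that check (the class and helper names here are illustrative, not Spark or checkstyle code):

```java
import java.util.regex.Pattern;

public class MethodNameCheck {
    // The MethodName pattern from the checkstyle errors above
    static final Pattern METHOD_NAME =
        Pattern.compile("^[a-z][a-z0-9][a-zA-Z0-9_]*$");

    static boolean isValid(String name) {
        return METHOD_NAME.matcher(name).matches();
    }

    public static void main(String[] args) {
        // Names flagged in Trigger.java start with an upper-case letter
        assert !isValid("ProcessingTime");
        assert !isValid("Once");
        assert !isValid("AvailableNow");
        // Their camelCase equivalents satisfy the pattern
        assert isValid("processingTime");
        assert isValid("once");
        System.out.println("all checks passed");
    }
}
```

Run with assertions enabled (`java -ea MethodNameCheck`). Note the pattern also requires at least two characters, since the first two character classes are mandatory.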

[jira] [Comment Edited] (SPARK-36843) Add an iterator method to Dataset

2021-09-25 Thread Shockang (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-36843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17420199#comment-17420199
 ] 

Shockang edited comment on SPARK-36843 at 9/26/21, 3:53 AM:


[~lxian2] You mean that a single job collects all the data and returns an 
iterator of byte arrays?


was (Author: shockang):
[~lxian2] You mean that a job collects all the data and returns an iterator of 
byte arrays.

> Add an iterator method to Dataset
> -
>
> Key: SPARK-36843
> URL: https://issues.apache.org/jira/browse/SPARK-36843
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.3.0
>Reporter: Li Xian
>Priority: Minor
>
> The current org.apache.spark.sql.Dataset#toLocalIterator submits multiple 
> jobs, one per partition.
> In my case, I would like to collect all partitions at once to save the 
> job-scheduling cost, and also to have an iterator that saves memory during 
> deserialization (instead of deserializing all rows at once, only one row 
> should be deserialized at a time during the iteration).
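The behaviour requested above can be sketched in plain Java: the rows are collected once as serialized bytes, but each row is deserialized lazily on `next()`. This is a hypothetical model of the proposal, not Spark's actual `toLocalIterator` implementation; the class name and the byte-array "rows" are illustrative.

```java
import java.util.Iterator;
import java.util.List;

// Hypothetical sketch: data is collected once (a single job in the proposal),
// but each row is deserialized only when next() is called, so at most one
// deserialized row is held in memory at a time.
public class LazyRowIterator implements Iterator<String> {
    private final Iterator<byte[]> serialized;

    public LazyRowIterator(List<byte[]> collected) {
        this.serialized = collected.iterator();
    }

    @Override
    public boolean hasNext() {
        return serialized.hasNext();
    }

    @Override
    public String next() {
        // Deserialization happens per element, not up front
        return new String(serialized.next());
    }

    public static void main(String[] args) {
        Iterator<String> it =
            new LazyRowIterator(List.of("row1".getBytes(), "row2".getBytes()));
        while (it.hasNext()) {
            System.out.println(it.next());
        }
    }
}
```

The trade-off it illustrates: one job-scheduling round trip instead of one per partition, while keeping deserialization memory bounded to a single row.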



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-36843) Add an iterator method to Dataset

2021-09-25 Thread Shockang (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-36843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17420199#comment-17420199
 ] 

Shockang commented on SPARK-36843:
--

[~lxian2] You mean that a job collects all the data and returns an iterator of 
byte arrays.

> Add an iterator method to Dataset
> -
>
> Key: SPARK-36843
> URL: https://issues.apache.org/jira/browse/SPARK-36843
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.3.0
>Reporter: Li Xian
>Priority: Minor
>
> The current org.apache.spark.sql.Dataset#toLocalIterator submits multiple 
> jobs, one per partition.
> In my case, I would like to collect all partitions at once to save the 
> job-scheduling cost, and also to have an iterator that saves memory during 
> deserialization (instead of deserializing all rows at once, only one row 
> should be deserialized at a time during the iteration).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-36853) Code failing on checkstyle

2021-09-25 Thread Shockang (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-36853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17420194#comment-17420194
 ] 

Shockang commented on SPARK-36853:
--

Working on this.

> Code failing on checkstyle
> --
>
> Key: SPARK-36853
> URL: https://issues.apache.org/jira/browse/SPARK-36853
> Project: Spark
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 3.3.0
>Reporter: Abhinav Kumar
>Priority: Trivial
> Fix For: 3.3.0
>
>
> There are more errors; just pasting a sample.
>  
> [INFO] There are 32 errors reported by Checkstyle 8.43 with 
> dev/checkstyle.xml ruleset.
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF11.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 107).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF12.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 116).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF13.java:[28] (sizes) 
> LineLength: Line is longer than 100 characters (found 104).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF13.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 125).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF14.java:[28] (sizes) 
> LineLength: Line is longer than 100 characters (found 109).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF14.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 134).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF15.java:[28] (sizes) 
> LineLength: Line is longer than 100 characters (found 114).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF15.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 143).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF16.java:[28] (sizes) 
> LineLength: Line is longer than 100 characters (found 119).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF16.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 152).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF17.java:[28] (sizes) 
> LineLength: Line is longer than 100 characters (found 124).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF17.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 161).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF18.java:[28] (sizes) 
> LineLength: Line is longer than 100 characters (found 129).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF18.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 170).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF19.java:[28] (sizes) 
> LineLength: Line is longer than 100 characters (found 134).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF19.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 179).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF20.java:[28] (sizes) 
> LineLength: Line is longer than 100 characters (found 139).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF20.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 188).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF21.java:[28] (sizes) 
> LineLength: Line is longer than 100 characters (found 144).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF21.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 197).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF22.java:[28] (sizes) 
> LineLength: Line is longer than 100 characters (found 149).
> [ERROR] src\main\java\org\apache\spark\sql\api\java\UDF22.java:[29] (sizes) 
> LineLength: Line is longer than 100 characters (found 206).
> [ERROR] src\main\java\org\apache\spark\sql\streaming\Trigger.java:[44,25] 
> (naming) MethodName: Method name 'ProcessingTime' must match pattern 
> '^[a-z][a-z0-9][a-zA-Z0-9_]*$'.
> [ERROR] src\main\java\org\apache\spark\sql\streaming\Trigger.java:[60,25] 
> (naming) MethodName: Method name 'ProcessingTime' must match pattern 
> '^[a-z][a-z0-9][a-zA-Z0-9_]*$'.
> [ERROR] src\main\java\org\apache\spark\sql\streaming\Trigger.java:[75,25] 
> (naming) MethodName: Method name 'ProcessingTime' must match pattern 
> '^[a-z][a-z0-9][a-zA-Z0-9_]*$'.
> [ERROR] src\main\java\org\apache\spark\sql\streaming\Trigger.java:[88,25] 
> (naming) MethodName: Method name 'ProcessingTime' must match pattern 
> '^[a-z][a-z0-9][a-zA-Z0-9_]*$'.
> [ERROR] src\main\java\org\apache\spark\sql\streaming\Trigger.java:[100,25] 
> (naming) MethodName: Method name 'Once' must match pattern 
> '^[a-z][a-z0-9][a-zA-Z0-9_]*$'.
> [ERROR] src\main\java\org\apache\spark\sql\streaming\Trigger.java:[110,25] 
> (naming) MethodName: Method name 'AvailableNow' must match pattern 
> '^[a-z][a-z0-9][a-zA-Z0-9_]*$'.
> [ERROR] 

[jira] [Commented] (SPARK-36767) ArrayMin/ArrayMax/SortArray/ArraySort add comment and UT

2021-09-15 Thread Shockang (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-36767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17415588#comment-17415588
 ] 

Shockang commented on SPARK-36767:
--

[~angerszhuuu] OK

>  ArrayMin/ArrayMax/SortArray/ArraySort add comment and UT
> -
>
> Key: SPARK-36767
> URL: https://issues.apache.org/jira/browse/SPARK-36767
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.0.3, 3.1.2, 3.2.1
>Reporter: angerszhu
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-36767) ArrayMin/ArrayMax/SortArray/ArraySort add comment and UT

2021-09-15 Thread Shockang (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-36767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17415583#comment-17415583
 ] 

Shockang commented on SPARK-36767:
--

Working on this.

>  ArrayMin/ArrayMax/SortArray/ArraySort add comment and UT
> -
>
> Key: SPARK-36767
> URL: https://issues.apache.org/jira/browse/SPARK-36767
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.0.3, 3.1.2, 3.2.1
>Reporter: angerszhu
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Issue Comment Deleted] (SPARK-30175) Eliminate warnings: part 5

2021-09-15 Thread Shockang (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shockang updated SPARK-30175:
-
Comment: was deleted

(was: I'm working on this.)

> Eliminate warnings: part 5
> --
>
> Key: SPARK-30175
> URL: https://issues.apache.org/jira/browse/SPARK-30175
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.1.0
>Reporter: jobit mathew
>Priority: Minor
>
> sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/sources/WriteToMicroBatchDataSource.scala
> {code:java}
> Warning:Warning:line (36)class WriteToDataSourceV2 in package v2 is 
> deprecated (since 2.4.0): Use specific logical plans like AppendData instead
>   def createPlan(batchId: Long): WriteToDataSourceV2 = {
> Warning:Warning:line (37)class WriteToDataSourceV2 in package v2 is 
> deprecated (since 2.4.0): Use specific logical plans like AppendData instead
> WriteToDataSourceV2(new MicroBatchWrite(batchId, write), query)
> {code}
> -sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala-
> {code:java}
>  Warning:Warning:line (703)a pure expression does nothing in statement 
> position; multiline expressions might require enclosing parentheses
>   q1
> {code}
> -sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingAggregationSuite.scala-
> {code:java}
> Warning:Warning:line (285)object typed in package scalalang is deprecated 
> (since 3.0.0): please use untyped builtin aggregate functions.
> val aggregated = 
> inputData.toDS().groupByKey(_._1).agg(typed.sumLong(_._2))
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-30175) Eliminate warnings: part 5

2021-09-15 Thread Shockang (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shockang updated SPARK-30175:
-
Description: 
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/sources/WriteToMicroBatchDataSource.scala
{code:java}
Warning:Warning:line (36)class WriteToDataSourceV2 in package v2 is deprecated 
(since 2.4.0): Use specific logical plans like AppendData instead
  def createPlan(batchId: Long): WriteToDataSourceV2 = {
Warning:Warning:line (37)class WriteToDataSourceV2 in package v2 is 
deprecated (since 2.4.0): Use specific logical plans like AppendData instead
WriteToDataSourceV2(new MicroBatchWrite(batchId, write), query)
{code}
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala
{code:java}
 Warning:Warning:line (703)a pure expression does nothing in statement 
position; multiline expressions might require enclosing parentheses
  q1
{code}
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingAggregationSuite.scala
{code:java}
Warning:Warning:line (285)object typed in package scalalang is deprecated 
(since 3.0.0): please use untyped builtin aggregate functions.
val aggregated = inputData.toDS().groupByKey(_._1).agg(typed.sumLong(_._2))
{code}

  was:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/sources/WriteToMicroBatchDataSource.scala
{code:java}
Warning:Warning:line (36)class WriteToDataSourceV2 in package v2 is deprecated 
(since 2.4.0): Use specific logical plans like AppendData instead
  def createPlan(batchId: Long): WriteToDataSourceV2 = {
Warning:Warning:line (37)class WriteToDataSourceV2 in package v2 is 
deprecated (since 2.4.0): Use specific logical plans like AppendData instead
WriteToDataSourceV2(new MicroBatchWrite(batchId, write), query)
{code}
-sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala-
{code:java}
 Warning:Warning:line (703)a pure expression does nothing in statement 
position; multiline expressions might require enclosing parentheses
  q1
{code}
-sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingAggregationSuite.scala-
{code:java}
Warning:Warning:line (285)object typed in package scalalang is deprecated 
(since 3.0.0): please use untyped builtin aggregate functions.
val aggregated = inputData.toDS().groupByKey(_._1).agg(typed.sumLong(_._2))
{code}


> Eliminate warnings: part 5
> --
>
> Key: SPARK-30175
> URL: https://issues.apache.org/jira/browse/SPARK-30175
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.1.0
>Reporter: jobit mathew
>Priority: Minor
>
> sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/sources/WriteToMicroBatchDataSource.scala
> {code:java}
> Warning:Warning:line (36)class WriteToDataSourceV2 in package v2 is 
> deprecated (since 2.4.0): Use specific logical plans like AppendData instead
>   def createPlan(batchId: Long): WriteToDataSourceV2 = {
> Warning:Warning:line (37)class WriteToDataSourceV2 in package v2 is 
> deprecated (since 2.4.0): Use specific logical plans like AppendData instead
> WriteToDataSourceV2(new MicroBatchWrite(batchId, write), query)
> {code}
> sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala
> {code:java}
>  Warning:Warning:line (703)a pure expression does nothing in statement 
> position; multiline expressions might require enclosing parentheses
>   q1
> {code}
> sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingAggregationSuite.scala
> {code:java}
> Warning:Warning:line (285)object typed in package scalalang is deprecated 
> (since 3.0.0): please use untyped builtin aggregate functions.
> val aggregated = 
> inputData.toDS().groupByKey(_._1).agg(typed.sumLong(_._2))
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
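The warnings tracked in SPARK-30175 all follow the same shape: a call site uses an API marked `@Deprecated`, and the fix is to migrate the caller to the documented replacement (e.g. `WriteToDataSourceV2` → specific logical plans like `AppendData`). A minimal self-contained sketch of that migration pattern in plain Java; the `oldPlan`/`newPlan` names and the returned string are hypothetical stand-ins, not Spark APIs:

```java
public class DeprecationExample {
    /** Old entry point, kept only for compatibility; delegates to the replacement. */
    @Deprecated
    static String oldPlan(long batchId) {
        return newPlan(batchId);
    }

    /** Replacement API that callers should migrate to. */
    static String newPlan(long batchId) {
        return "AppendData(batch=" + batchId + ")";
    }

    public static void main(String[] args) {
        // Migrating the call site from oldPlan to newPlan removes the
        // compiler's deprecation warning while preserving behaviour.
        assert oldPlan(7).equals(newPlan(7));
        System.out.println(newPlan(7));
    }
}
```

Compiling a caller of `oldPlan` with `javac -Xlint:deprecation` surfaces the same kind of warning quoted in the issue description; switching the call to `newPlan` silences it.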



[jira] [Commented] (SPARK-30177) Eliminate warnings: part 7

2021-09-15 Thread Shockang (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17415564#comment-17415564
 ] 

Shockang commented on SPARK-30177:
--

After careful inspection, I found that everything except the last problem had 
already been fixed. The last one, however, is by design a test of deprecated 
APIs.

> Eliminate warnings: part 7
> --
>
> Key: SPARK-30177
> URL: https://issues.apache.org/jira/browse/SPARK-30177
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.1.0
>Reporter: ABHISHEK KUMAR GUPTA
>Priority: Minor
>
> -/mllib/src/test/scala/org/apache/spark/ml/clustering/BisectingKMeansSuite.scala-
>  -Warning:Warning:line (108)method computeCost in class BisectingKMeansModel 
> is deprecated (since 3.0.0): This method is deprecated and will be removed in 
> future versions. Use ClusteringEvaluator instead. You can also get the cost 
> on the training dataset in the summary.-
>  -assert(model.computeCost(dataset) < 0.1)-
>  -Warning:Warning:line (135)method computeCost in class BisectingKMeansModel 
> is deprecated (since 3.0.0): This method is deprecated and will be removed in 
> future versions. Use ClusteringEvaluator instead. You can also get the cost 
> on the training dataset in the summary.-
>  -assert(model.computeCost(dataset) == summary.trainingCost)-
>  -Warning:Warning:line (195)method computeCost in class BisectingKMeansModel 
> is deprecated (since 3.0.0): This method is deprecated and will be removed in 
> future versions. Use ClusteringEvaluator instead. You can also get the cost 
> on the training dataset in the summary.-
>  -model.computeCost(dataset)-
> -/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/jsonExpressions.scala-
>  -Warning:Warning:line (105)Java enum ALLOW_UNQUOTED_CONTROL_CHARS in Java 
> enum Feature is deprecated: see corresponding Javadoc for more information.-
>  -jsonFactory.enable(JsonParser.Feature.ALLOW_UNQUOTED_CONTROL_CHARS)-
> -/sql/core/src/test/java/test/org/apache/spark/sql/Java8DatasetAggregatorSuite.java-
>  -Warning:Warning:line (28)java: 
> org.apache.spark.sql.expressions.javalang.typed in 
> org.apache.spark.sql.expressions.javalang has been deprecated-
>  -Warning:Warning:line (37)java: 
> org.apache.spark.sql.expressions.javalang.typed in 
> org.apache.spark.sql.expressions.javalang has been deprecated-
>  -Warning:Warning:line (46)java: 
> org.apache.spark.sql.expressions.javalang.typed in 
> org.apache.spark.sql.expressions.javalang has been deprecated-
>  -Warning:Warning:line (55)java: 
> org.apache.spark.sql.expressions.javalang.typed in 
> org.apache.spark.sql.expressions.javalang has been deprecated-
>  -Warning:Warning:line (64)java: 
> org.apache.spark.sql.expressions.javalang.typed in 
> org.apache.spark.sql.expressions.javalang has been deprecated-
> -/sql/core/src/test/java/test/org/apache/spark/sql/JavaTestUtils.java-
>  -Information:Information:java: 
> /Users/maxim/proj/eliminate-warning/sql/core/src/test/java/test/org/apache/spark/sql/JavaTestUtils.java
>  uses unchecked or unsafe operations.-
>  -Information:Information:java: Recompile with -Xlint:unchecked for details.-
> /sql/core/src/test/java/test/org/apache/spark/sql/JavaDataFrameSuite.java
>  Warning:Warning:line (478)java: 
> json(org.apache.spark.api.java.JavaRDD) in 
> org.apache.spark.sql.DataFrameReader has been deprecated



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Issue Comment Deleted] (SPARK-30177) Eliminate warnings: part 7

2021-09-15 Thread Shockang (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shockang updated SPARK-30177:
-
Comment: was deleted

(was: After careful inspection, I found that except for the last one, 
everything else had been fixed.)

> Eliminate warnings: part 7
> --
>
> Key: SPARK-30177
> URL: https://issues.apache.org/jira/browse/SPARK-30177
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.1.0
>Reporter: ABHISHEK KUMAR GUPTA
>Priority: Minor
>
> -/mllib/src/test/scala/org/apache/spark/ml/clustering/BisectingKMeansSuite.scala-
>  -Warning:Warning:line (108)method computeCost in class BisectingKMeansModel 
> is deprecated (since 3.0.0): This method is deprecated and will be removed in 
> future versions. Use ClusteringEvaluator instead. You can also get the cost 
> on the training dataset in the summary.-
>  -assert(model.computeCost(dataset) < 0.1)-
>  -Warning:Warning:line (135)method computeCost in class BisectingKMeansModel 
> is deprecated (since 3.0.0): This method is deprecated and will be removed in 
> future versions. Use ClusteringEvaluator instead. You can also get the cost 
> on the training dataset in the summary.-
>  -assert(model.computeCost(dataset) == summary.trainingCost)-
>  -Warning:Warning:line (195)method computeCost in class BisectingKMeansModel 
> is deprecated (since 3.0.0): This method is deprecated and will be removed in 
> future versions. Use ClusteringEvaluator instead. You can also get the cost 
> on the training dataset in the summary.-
>  -model.computeCost(dataset)-
> -/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/jsonExpressions.scala-
>  -Warning:Warning:line (105)Java enum ALLOW_UNQUOTED_CONTROL_CHARS in Java 
> enum Feature is deprecated: see corresponding Javadoc for more information.-
>  -jsonFactory.enable(JsonParser.Feature.ALLOW_UNQUOTED_CONTROL_CHARS)-
> -/sql/core/src/test/java/test/org/apache/spark/sql/Java8DatasetAggregatorSuite.java-
>  -Warning:Warning:line (28)java: 
> org.apache.spark.sql.expressions.javalang.typed in 
> org.apache.spark.sql.expressions.javalang has been deprecated-
>  -Warning:Warning:line (37)java: 
> org.apache.spark.sql.expressions.javalang.typed in 
> org.apache.spark.sql.expressions.javalang has been deprecated-
>  -Warning:Warning:line (46)java: 
> org.apache.spark.sql.expressions.javalang.typed in 
> org.apache.spark.sql.expressions.javalang has been deprecated-
>  -Warning:Warning:line (55)java: 
> org.apache.spark.sql.expressions.javalang.typed in 
> org.apache.spark.sql.expressions.javalang has been deprecated-
>  -Warning:Warning:line (64)java: 
> org.apache.spark.sql.expressions.javalang.typed in 
> org.apache.spark.sql.expressions.javalang has been deprecated-
> -/sql/core/src/test/java/test/org/apache/spark/sql/JavaTestUtils.java-
>  -Information:Information:java: 
> /Users/maxim/proj/eliminate-warning/sql/core/src/test/java/test/org/apache/spark/sql/JavaTestUtils.java
>  uses unchecked or unsafe operations.-
>  -Information:Information:java: Recompile with -Xlint:unchecked for details.-
> /sql/core/src/test/java/test/org/apache/spark/sql/JavaDataFrameSuite.java
>  Warning:Warning:line (478)java: 
> json(org.apache.spark.api.java.JavaRDD) in 
> org.apache.spark.sql.DataFrameReader has been deprecated



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Issue Comment Deleted] (SPARK-30177) Eliminate warnings: part 7

2021-09-15 Thread Shockang (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shockang updated SPARK-30177:
-
Comment: was deleted

(was: I'm working on this.)

> Eliminate warnings: part 7
> --
>
> Key: SPARK-30177
> URL: https://issues.apache.org/jira/browse/SPARK-30177
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.1.0
>Reporter: ABHISHEK KUMAR GUPTA
>Priority: Minor
>
> -/mllib/src/test/scala/org/apache/spark/ml/clustering/BisectingKMeansSuite.scala-
>  -Warning:Warning:line (108)method computeCost in class BisectingKMeansModel 
> is deprecated (since 3.0.0): This method is deprecated and will be removed in 
> future versions. Use ClusteringEvaluator instead. You can also get the cost 
> on the training dataset in the summary.-
>  -assert(model.computeCost(dataset) < 0.1)-
>  -Warning:Warning:line (135)method computeCost in class BisectingKMeansModel 
> is deprecated (since 3.0.0): This method is deprecated and will be removed in 
> future versions. Use ClusteringEvaluator instead. You can also get the cost 
> on the training dataset in the summary.-
>  -assert(model.computeCost(dataset) == summary.trainingCost)-
>  -Warning:Warning:line (195)method computeCost in class BisectingKMeansModel 
> is deprecated (since 3.0.0): This method is deprecated and will be removed in 
> future versions. Use ClusteringEvaluator instead. You can also get the cost 
> on the training dataset in the summary.-
>  -model.computeCost(dataset)-
> -/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/jsonExpressions.scala-
>  -Warning:Warning:line (105)Java enum ALLOW_UNQUOTED_CONTROL_CHARS in Java 
> enum Feature is deprecated: see corresponding Javadoc for more information.-
>  -jsonFactory.enable(JsonParser.Feature.ALLOW_UNQUOTED_CONTROL_CHARS)-
> -/sql/core/src/test/java/test/org/apache/spark/sql/Java8DatasetAggregatorSuite.java-
>  -Warning:Warning:line (28)java: 
> org.apache.spark.sql.expressions.javalang.typed in 
> org.apache.spark.sql.expressions.javalang has been deprecated-
>  -Warning:Warning:line (37)java: 
> org.apache.spark.sql.expressions.javalang.typed in 
> org.apache.spark.sql.expressions.javalang has been deprecated-
>  -Warning:Warning:line (46)java: 
> org.apache.spark.sql.expressions.javalang.typed in 
> org.apache.spark.sql.expressions.javalang has been deprecated-
>  -Warning:Warning:line (55)java: 
> org.apache.spark.sql.expressions.javalang.typed in 
> org.apache.spark.sql.expressions.javalang has been deprecated-
>  -Warning:Warning:line (64)java: 
> org.apache.spark.sql.expressions.javalang.typed in 
> org.apache.spark.sql.expressions.javalang has been deprecated-
> -/sql/core/src/test/java/test/org/apache/spark/sql/JavaTestUtils.java-
>  -Information:Information:java: 
> /Users/maxim/proj/eliminate-warning/sql/core/src/test/java/test/org/apache/spark/sql/JavaTestUtils.java
>  uses unchecked or unsafe operations.-
>  -Information:Information:java: Recompile with -Xlint:unchecked for details.-
> /sql/core/src/test/java/test/org/apache/spark/sql/JavaDataFrameSuite.java
>  Warning:Warning:line (478)java: 
> json(org.apache.spark.api.java.JavaRDD) in 
> org.apache.spark.sql.DataFrameReader has been deprecated
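The deprecated `computeCost` calls above can be migrated exactly as the deprecation message suggests. A minimal Scala sketch, assuming `dataset` is a DataFrame with a `features` vector column; note that the silhouette score from `ClusteringEvaluator` is a different metric from the training cost, which remains available on the model summary:

```scala
import org.apache.spark.ml.clustering.BisectingKMeans
import org.apache.spark.ml.evaluation.ClusteringEvaluator

val model = new BisectingKMeans().setK(2).setSeed(1).fit(dataset)

// Deprecated since 3.0.0:
//   model.computeCost(dataset)

// The cost on the training dataset is available on the summary:
val trainingCost = model.summary.trainingCost

// ClusteringEvaluator computes a silhouette score on predictions
// (a different metric: higher is better, in [-1, 1]):
val predictions = model.transform(dataset)
val silhouette = new ClusteringEvaluator().evaluate(predictions)
```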



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-30175) Eliminate warnings: part 5

2021-09-15 Thread Shockang (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shockang updated SPARK-30175:
-
Description: 
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/sources/WriteToMicroBatchDataSource.scala
{code:java}
Warning:Warning:line (36)class WriteToDataSourceV2 in package v2 is deprecated 
(since 2.4.0): Use specific logical plans like AppendData instead
  def createPlan(batchId: Long): WriteToDataSourceV2 = {
Warning:Warning:line (37)class WriteToDataSourceV2 in package v2 is 
deprecated (since 2.4.0): Use specific logical plans like AppendData instead
WriteToDataSourceV2(new MicroBatchWrite(batchId, write), query)
{code}
-sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala-
{code:java}
 Warning:Warning:line (703)a pure expression does nothing in statement 
position; multiline expressions might require enclosing parentheses
  q1
{code}
-sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingAggregationSuite.scala-
{code:java}
Warning:Warning:line (285)object typed in package scalalang is deprecated 
(since 3.0.0): please use untyped builtin aggregate functions.
val aggregated = inputData.toDS().groupByKey(_._1).agg(typed.sumLong(_._2))
{code}

  was:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/sources/WriteToMicroBatchDataSource.scala

{code:java}
Warning:Warning:line (36)class WriteToDataSourceV2 in package v2 is deprecated 
(since 2.4.0): Use specific logical plans like AppendData instead
  def createPlan(batchId: Long): WriteToDataSourceV2 = {
Warning:Warning:line (37)class WriteToDataSourceV2 in package v2 is 
deprecated (since 2.4.0): Use specific logical plans like AppendData instead
WriteToDataSourceV2(new MicroBatchWrite(batchId, write), query)
{code}

sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala

{code:java}
 Warning:Warning:line (703)a pure expression does nothing in statement 
position; multiline expressions might require enclosing parentheses
  q1
{code}

sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingAggregationSuite.scala

{code:java}
Warning:Warning:line (285)object typed in package scalalang is deprecated 
(since 3.0.0): please use untyped builtin aggregate functions.
val aggregated = inputData.toDS().groupByKey(_._1).agg(typed.sumLong(_._2))
{code}


> Eliminate warnings: part 5
> --
>
> Key: SPARK-30175
> URL: https://issues.apache.org/jira/browse/SPARK-30175
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.1.0
>Reporter: jobit mathew
>Priority: Minor
>
> sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/sources/WriteToMicroBatchDataSource.scala
> {code:java}
> Warning:Warning:line (36)class WriteToDataSourceV2 in package v2 is 
> deprecated (since 2.4.0): Use specific logical plans like AppendData instead
>   def createPlan(batchId: Long): WriteToDataSourceV2 = {
> Warning:Warning:line (37)class WriteToDataSourceV2 in package v2 is 
> deprecated (since 2.4.0): Use specific logical plans like AppendData instead
> WriteToDataSourceV2(new MicroBatchWrite(batchId, write), query)
> {code}
> -sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala-
> {code:java}
>  Warning:Warning:line (703)a pure expression does nothing in statement 
> position; multiline expressions might require enclosing parentheses
>   q1
> {code}
> -sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingAggregationSuite.scala-
> {code:java}
> Warning:Warning:line (285)object typed in package scalalang is deprecated 
> (since 3.0.0): please use untyped builtin aggregate functions.
> val aggregated = 
> inputData.toDS().groupByKey(_._1).agg(typed.sumLong(_._2))
> {code}
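The deprecated typed aggregator in the last warning can be replaced with the untyped builtin aggregate, as the message suggests. A hedged sketch, assuming `inputData` holds `(String, Long)` pairs as in the test; the untyped form returns a DataFrame rather than a typed Dataset:

```scala
import org.apache.spark.sql.functions.sum

// Deprecated since 3.0.0:
//   inputData.toDS().groupByKey(_._1).agg(typed.sumLong(_._2))

// Untyped builtin equivalent:
val aggregated = inputData.toDF("key", "value")
  .groupBy("key")
  .agg(sum("value").as("sum"))
```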






[jira] [Updated] (SPARK-30177) Eliminate warnings: part 7

2021-09-15 Thread Shockang (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shockang updated SPARK-30177:
-
Description: 
-/mllib/src/test/scala/org/apache/spark/ml/clustering/BisectingKMeansSuite.scala-
 -Warning:Warning:line (108)method computeCost in class BisectingKMeansModel is 
deprecated (since 3.0.0): This method is deprecated and will be removed in 
future versions. Use ClusteringEvaluator instead. You can also get the cost on 
the training dataset in the summary.-
 -assert(model.computeCost(dataset) < 0.1)-
 -Warning:Warning:line (135)method computeCost in class BisectingKMeansModel is 
deprecated (since 3.0.0): This method is deprecated and will be removed in 
future versions. Use ClusteringEvaluator instead. You can also get the cost on 
the training dataset in the summary.-
 -assert(model.computeCost(dataset) == summary.trainingCost)-
 -Warning:Warning:line (195)method computeCost in class BisectingKMeansModel is 
deprecated (since 3.0.0): This method is deprecated and will be removed in 
future versions. Use ClusteringEvaluator instead. You can also get the cost on 
the training dataset in the summary.-
 -model.computeCost(dataset)-

-/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/jsonExpressions.scala-
 -Warning:Warning:line (105)Java enum ALLOW_UNQUOTED_CONTROL_CHARS in Java enum 
Feature is deprecated: see corresponding Javadoc for more information.-
 -jsonFactory.enable(JsonParser.Feature.ALLOW_UNQUOTED_CONTROL_CHARS)-

-/sql/core/src/test/java/test/org/apache/spark/sql/Java8DatasetAggregatorSuite.java-
 -Warning:Warning:line (28)java: 
org.apache.spark.sql.expressions.javalang.typed in 
org.apache.spark.sql.expressions.javalang has been deprecated-
 -Warning:Warning:line (37)java: 
org.apache.spark.sql.expressions.javalang.typed in 
org.apache.spark.sql.expressions.javalang has been deprecated-
 -Warning:Warning:line (46)java: 
org.apache.spark.sql.expressions.javalang.typed in 
org.apache.spark.sql.expressions.javalang has been deprecated-
 -Warning:Warning:line (55)java: 
org.apache.spark.sql.expressions.javalang.typed in 
org.apache.spark.sql.expressions.javalang has been deprecated-
 -Warning:Warning:line (64)java: 
org.apache.spark.sql.expressions.javalang.typed in 
org.apache.spark.sql.expressions.javalang has been deprecated-

-/sql/core/src/test/java/test/org/apache/spark/sql/JavaTestUtils.java-
 -Information:Information:java: 
/Users/maxim/proj/eliminate-warning/sql/core/src/test/java/test/org/apache/spark/sql/JavaTestUtils.java
 uses unchecked or unsafe operations.-
 -Information:Information:java: Recompile with -Xlint:unchecked for details.-

/sql/core/src/test/java/test/org/apache/spark/sql/JavaDataFrameSuite.java
 Warning:Warning:line (478)java: 
json(org.apache.spark.api.java.JavaRDD) in 
org.apache.spark.sql.DataFrameReader has been deprecated

  was:
/mllib/src/test/scala/org/apache/spark/ml/clustering/BisectingKMeansSuite.scala
Warning:Warning:line (108)method computeCost in class BisectingKMeansModel 
is deprecated (since 3.0.0): This method is deprecated and will be removed in 
future versions. Use ClusteringEvaluator instead. You can also get the cost on 
the training dataset in the summary.
assert(model.computeCost(dataset) < 0.1)
Warning:Warning:line (135)method computeCost in class BisectingKMeansModel 
is deprecated (since 3.0.0): This method is deprecated and will be removed in 
future versions. Use ClusteringEvaluator instead. You can also get the cost on 
the training dataset in the summary.
assert(model.computeCost(dataset) == summary.trainingCost)
Warning:Warning:line (195)method computeCost in class BisectingKMeansModel 
is deprecated (since 3.0.0): This method is deprecated and will be removed in 
future versions. Use ClusteringEvaluator instead. You can also get the cost on 
the training dataset in the summary.
  model.computeCost(dataset)
  
/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/jsonExpressions.scala
Warning:Warning:line (105)Java enum ALLOW_UNQUOTED_CONTROL_CHARS in Java 
enum Feature is deprecated: see corresponding Javadoc for more information.
  jsonFactory.enable(JsonParser.Feature.ALLOW_UNQUOTED_CONTROL_CHARS)

/sql/core/src/test/java/test/org/apache/spark/sql/Java8DatasetAggregatorSuite.java
Warning:Warning:line (28)java: 
org.apache.spark.sql.expressions.javalang.typed in 
org.apache.spark.sql.expressions.javalang has been deprecated
Warning:Warning:line (37)java: 
org.apache.spark.sql.expressions.javalang.typed in 
org.apache.spark.sql.expressions.javalang has been deprecated
Warning:Warning:line (46)java: 
org.apache.spark.sql.expressions.javalang.typed in 
org.apache.spark.sql.expressions.javalang has been deprecated
Warning:Warning:line (55)java: 
org.apache.spark.sql.expressions.javalang.typed in 
org.apache.spark.sql.expressions.javalang has been deprecated

[jira] [Commented] (SPARK-30177) Eliminate warnings: part 7

2021-09-15 Thread Shockang (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17415551#comment-17415551
 ] 

Shockang commented on SPARK-30177:
--

I'm working on this.

> Eliminate warnings: part 7
> --
>
> Key: SPARK-30177
> URL: https://issues.apache.org/jira/browse/SPARK-30177
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.1.0
>Reporter: ABHISHEK KUMAR GUPTA
>Priority: Minor
>
> /mllib/src/test/scala/org/apache/spark/ml/clustering/BisectingKMeansSuite.scala
> Warning:Warning:line (108)method computeCost in class 
> BisectingKMeansModel is deprecated (since 3.0.0): This method is deprecated 
> and will be removed in future versions. Use ClusteringEvaluator instead. You 
> can also get the cost on the training dataset in the summary.
> assert(model.computeCost(dataset) < 0.1)
> Warning:Warning:line (135)method computeCost in class 
> BisectingKMeansModel is deprecated (since 3.0.0): This method is deprecated 
> and will be removed in future versions. Use ClusteringEvaluator instead. You 
> can also get the cost on the training dataset in the summary.
> assert(model.computeCost(dataset) == summary.trainingCost)
> Warning:Warning:line (195)method computeCost in class 
> BisectingKMeansModel is deprecated (since 3.0.0): This method is deprecated 
> and will be removed in future versions. Use ClusteringEvaluator instead. You 
> can also get the cost on the training dataset in the summary.
>   model.computeCost(dataset)
> 
> /sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/jsonExpressions.scala
> Warning:Warning:line (105)Java enum ALLOW_UNQUOTED_CONTROL_CHARS in Java 
> enum Feature is deprecated: see corresponding Javadoc for more information.
>   jsonFactory.enable(JsonParser.Feature.ALLOW_UNQUOTED_CONTROL_CHARS)
> /sql/core/src/test/java/test/org/apache/spark/sql/Java8DatasetAggregatorSuite.java
> Warning:Warning:line (28)java: 
> org.apache.spark.sql.expressions.javalang.typed in 
> org.apache.spark.sql.expressions.javalang has been deprecated
> Warning:Warning:line (37)java: 
> org.apache.spark.sql.expressions.javalang.typed in 
> org.apache.spark.sql.expressions.javalang has been deprecated
> Warning:Warning:line (46)java: 
> org.apache.spark.sql.expressions.javalang.typed in 
> org.apache.spark.sql.expressions.javalang has been deprecated
> Warning:Warning:line (55)java: 
> org.apache.spark.sql.expressions.javalang.typed in 
> org.apache.spark.sql.expressions.javalang has been deprecated
> Warning:Warning:line (64)java: 
> org.apache.spark.sql.expressions.javalang.typed in 
> org.apache.spark.sql.expressions.javalang has been deprecated
> /sql/core/src/test/java/test/org/apache/spark/sql/JavaTestUtils.java
> Information:Information:java: 
> /Users/maxim/proj/eliminate-warning/sql/core/src/test/java/test/org/apache/spark/sql/JavaTestUtils.java
>  uses unchecked or unsafe operations.
> Information:Information:java: Recompile with -Xlint:unchecked for details.
> /sql/core/src/test/java/test/org/apache/spark/sql/JavaDataFrameSuite.java
> Warning:Warning:line (478)java: 
> json(org.apache.spark.api.java.JavaRDD) in 
> org.apache.spark.sql.DataFrameReader has been deprecated






[jira] [Commented] (SPARK-30177) Eliminate warnings: part 7

2021-09-15 Thread Shockang (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17415550#comment-17415550
 ] 

Shockang commented on SPARK-30177:
--

After careful inspection, I found that except for the last one, everything else 
had been fixed.

> Eliminate warnings: part 7
> --
>
> Key: SPARK-30177
> URL: https://issues.apache.org/jira/browse/SPARK-30177
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.1.0
>Reporter: ABHISHEK KUMAR GUPTA
>Priority: Minor
>
> /mllib/src/test/scala/org/apache/spark/ml/clustering/BisectingKMeansSuite.scala
> Warning:Warning:line (108)method computeCost in class 
> BisectingKMeansModel is deprecated (since 3.0.0): This method is deprecated 
> and will be removed in future versions. Use ClusteringEvaluator instead. You 
> can also get the cost on the training dataset in the summary.
> assert(model.computeCost(dataset) < 0.1)
> Warning:Warning:line (135)method computeCost in class 
> BisectingKMeansModel is deprecated (since 3.0.0): This method is deprecated 
> and will be removed in future versions. Use ClusteringEvaluator instead. You 
> can also get the cost on the training dataset in the summary.
> assert(model.computeCost(dataset) == summary.trainingCost)
> Warning:Warning:line (195)method computeCost in class 
> BisectingKMeansModel is deprecated (since 3.0.0): This method is deprecated 
> and will be removed in future versions. Use ClusteringEvaluator instead. You 
> can also get the cost on the training dataset in the summary.
>   model.computeCost(dataset)
> 
> /sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/jsonExpressions.scala
> Warning:Warning:line (105)Java enum ALLOW_UNQUOTED_CONTROL_CHARS in Java 
> enum Feature is deprecated: see corresponding Javadoc for more information.
>   jsonFactory.enable(JsonParser.Feature.ALLOW_UNQUOTED_CONTROL_CHARS)
> /sql/core/src/test/java/test/org/apache/spark/sql/Java8DatasetAggregatorSuite.java
> Warning:Warning:line (28)java: 
> org.apache.spark.sql.expressions.javalang.typed in 
> org.apache.spark.sql.expressions.javalang has been deprecated
> Warning:Warning:line (37)java: 
> org.apache.spark.sql.expressions.javalang.typed in 
> org.apache.spark.sql.expressions.javalang has been deprecated
> Warning:Warning:line (46)java: 
> org.apache.spark.sql.expressions.javalang.typed in 
> org.apache.spark.sql.expressions.javalang has been deprecated
> Warning:Warning:line (55)java: 
> org.apache.spark.sql.expressions.javalang.typed in 
> org.apache.spark.sql.expressions.javalang has been deprecated
> Warning:Warning:line (64)java: 
> org.apache.spark.sql.expressions.javalang.typed in 
> org.apache.spark.sql.expressions.javalang has been deprecated
> /sql/core/src/test/java/test/org/apache/spark/sql/JavaTestUtils.java
> Information:Information:java: 
> /Users/maxim/proj/eliminate-warning/sql/core/src/test/java/test/org/apache/spark/sql/JavaTestUtils.java
>  uses unchecked or unsafe operations.
> Information:Information:java: Recompile with -Xlint:unchecked for details.
> /sql/core/src/test/java/test/org/apache/spark/sql/JavaDataFrameSuite.java
> Warning:Warning:line (478)java: 
> json(org.apache.spark.api.java.JavaRDD) in 
> org.apache.spark.sql.DataFrameReader has been deprecated






[jira] [Commented] (SPARK-30175) Eliminate warnings: part 5

2021-09-15 Thread Shockang (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17415542#comment-17415542
 ] 

Shockang commented on SPARK-30175:
--

I'm working on this.

> Eliminate warnings: part 5
> --
>
> Key: SPARK-30175
> URL: https://issues.apache.org/jira/browse/SPARK-30175
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.1.0
>Reporter: jobit mathew
>Priority: Minor
>
> sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/sources/WriteToMicroBatchDataSource.scala
> {code:java}
> Warning:Warning:line (36)class WriteToDataSourceV2 in package v2 is 
> deprecated (since 2.4.0): Use specific logical plans like AppendData instead
>   def createPlan(batchId: Long): WriteToDataSourceV2 = {
> Warning:Warning:line (37)class WriteToDataSourceV2 in package v2 is 
> deprecated (since 2.4.0): Use specific logical plans like AppendData instead
> WriteToDataSourceV2(new MicroBatchWrite(batchId, write), query)
> {code}
> sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala
> {code:java}
>  Warning:Warning:line (703)a pure expression does nothing in statement 
> position; multiline expressions might require enclosing parentheses
>   q1
> {code}
> sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingAggregationSuite.scala
> {code:java}
> Warning:Warning:line (285)object typed in package scalalang is deprecated 
> (since 3.0.0): please use untyped builtin aggregate functions.
> val aggregated = 
> inputData.toDS().groupByKey(_._1).agg(typed.sumLong(_._2))
> {code}






[jira] [Commented] (SPARK-36328) HadoopRDD#getPartitions fetches FileSystem Delegation Token for every partition

2021-07-29 Thread Shockang (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-36328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17389960#comment-17389960
 ] 

Shockang commented on SPARK-36328:
--

I'm working on it.

> HadoopRDD#getPartitions fetches FileSystem Delegation Token for every 
> partition
> ---
>
> Key: SPARK-36328
> URL: https://issues.apache.org/jira/browse/SPARK-36328
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 3.1.2
>Reporter: Prabhu Joseph
>Priority: Major
>
> Spark Job creates a separate JobConf for every RDD (every hive table 
> partition) in HadoopRDD#getPartitions.
> {code}
>   override def getPartitions: Array[Partition] = {
> val jobConf = getJobConf()
> // add the credentials here as this can be called before SparkContext 
> initialized
> SparkHadoopUtil.get.addCredentials(jobConf)
> {code}
> Hadoop FileSystem fetches FileSystem Delegation Token and sets into the 
> Credentials which is part of JobConf. On further requests, will reuse the 
> token from the Credentials if already exists.
> {code}
>if (serviceName != null) { // fs has token, grab it
>   final Text service = new Text(serviceName);
>   Token token = credentials.getToken(service);
>   if (token == null) {
> token = getDelegationToken(renewer);
> if (token != null) {
>   tokens.add(token);
>   credentials.addToken(service, token);
> }
>   }
> }
> {code}
>  But since Spark Job creates a new JobConf (which will have a new 
> Credentials) for every hive table partition, the token is not reused and gets 
> fetched for every partition. This is slowing down the query as each 
> delegation token has to go through KDC and SSL handshake on Secure Clusters.
> *Improvement:*
> Spark can add the credentials from previous JobConf into the new JobConf to 
> reuse the FileSystem Delegation Token similar to how the User Credentials are 
> added into JobConf after construction.
> {code}
>  val jobConf = getJobConf()
> // add the credentials here as this can be called before SparkContext 
> initialized
> SparkHadoopUtil.get.addCredentials(jobConf)
> {code}
> *Repro*
> {code}
> beeline>
> create table parttable (key char(1), value int) partitioned by (p int);
> insert into table parttable partition(p=100) values ('d', 1), ('e', 2), ('f', 
> 3);
> insert into table parttable partition(p=200) values ('d', 1), ('e', 2), ('f', 
> 3);
> insert into table parttable partition(p=300) values ('d', 1), ('e', 2), ('f', 
> 3);
> spark-sql>
> select value, count(*) from parttable group by value
> {code}
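The improvement described above could be sketched as follows. This is a hypothetical illustration, not the actual patch: `previousJobConf` stands for a cached reference to the JobConf built for an earlier partition, and the idea is to copy its credentials forward so the delegation token is fetched once rather than once per partition:

```scala
val jobConf = getJobConf()
// Reuse tokens already fetched for a previous partition's JobConf
// (hypothetical `previousJobConf`), so the fresh Credentials object in
// each new JobConf does not trigger another KDC round trip:
Option(previousJobConf).foreach { prev =>
  jobConf.getCredentials.addAll(prev.getCredentials)
}
// add the credentials here as this can be called before SparkContext initialized
SparkHadoopUtil.get.addCredentials(jobConf)
```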






[jira] [Commented] (SPARK-36099) Group exception messages in core/util

2021-07-27 Thread Shockang (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-36099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17388107#comment-17388107
 ] 

Shockang commented on SPARK-36099:
--

It would have been good to wait a day. I was going to submit the PR tonight. What a coincidence! [~dc-heros]

> Group exception messages in core/util
> -
>
> Key: SPARK-36099
> URL: https://issues.apache.org/jira/browse/SPARK-36099
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core
>Affects Versions: 3.3.0
>Reporter: Allison Wang
>Priority: Major
>
> 'core/src/main/scala/org/apache/spark/util'
> || Filename ||   Count ||
> | AccumulatorV2.scala  |   4 |
> | ClosureCleaner.scala |   1 |
> | DependencyUtils.scala|   1 |
> | KeyLock.scala|   1 |
> | ListenerBus.scala|   1 |
> | NextIterator.scala   |   1 |
> | SerializableBuffer.scala |   2 |
> | ThreadUtils.scala|   4 |
> | Utils.scala  |  16 |
> 'core/src/main/scala/org/apache/spark/util/collection'
> || Filename  ||   Count ||
> | AppendOnlyMap.scala   |   1 |
> | CompactBuffer.scala   |   1 |
> | ImmutableBitSet.scala |   6 |
> | MedianHeap.scala  |   1 |
> | OpenHashSet.scala |   2 |
> 'core/src/main/scala/org/apache/spark/util/io'
> || Filename||   Count ||
> | ChunkedByteBuffer.scala |   1 |
> 'core/src/main/scala/org/apache/spark/util/logging'
> || Filename   ||   Count ||
> | DriverLogger.scala |   1 |
> 'core/src/main/scala/org/apache/spark/util/random'
> || Filename||   Count ||
> | RandomSampler.scala |   1 |
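Spark already groups SQL exception messages behind central error objects (e.g. `QueryExecutionErrors`); this task applies the same pattern to core/util. A minimal sketch of the pattern, with an illustrative (hypothetical) method name:

```scala
// Central object collecting exception construction for core/util,
// following the pattern of the SQL-side error objects.
private[spark] object SparkCoreErrors {
  // Illustrative method; the real grouping defines one method per
  // distinct message counted in the tables above.
  def unsupportedOperationError(op: String): Throwable =
    new UnsupportedOperationException(s"Operation is not supported: $op")
}

// A call site in, say, Utils.scala then becomes:
//   throw SparkCoreErrors.unsupportedOperationError("...")
```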






[jira] [Commented] (SPARK-36099) Group exception messages in core/util

2021-07-27 Thread Shockang (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-36099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17388101#comment-17388101
 ] 

Shockang commented on SPARK-36099:
--

I'm sorry for my gaffe. [~dc-heros]

> Group exception messages in core/util
> -
>
> Key: SPARK-36099
> URL: https://issues.apache.org/jira/browse/SPARK-36099
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core
>Affects Versions: 3.3.0
>Reporter: Allison Wang
>Priority: Major
>
> 'core/src/main/scala/org/apache/spark/util'
> || Filename ||   Count ||
> | AccumulatorV2.scala  |   4 |
> | ClosureCleaner.scala |   1 |
> | DependencyUtils.scala|   1 |
> | KeyLock.scala|   1 |
> | ListenerBus.scala|   1 |
> | NextIterator.scala   |   1 |
> | SerializableBuffer.scala |   2 |
> | ThreadUtils.scala|   4 |
> | Utils.scala  |  16 |
> 'core/src/main/scala/org/apache/spark/util/collection'
> || Filename  ||   Count ||
> | AppendOnlyMap.scala   |   1 |
> | CompactBuffer.scala   |   1 |
> | ImmutableBitSet.scala |   6 |
> | MedianHeap.scala  |   1 |
> | OpenHashSet.scala |   2 |
> 'core/src/main/scala/org/apache/spark/util/io'
> || Filename||   Count ||
> | ChunkedByteBuffer.scala |   1 |
> 'core/src/main/scala/org/apache/spark/util/logging'
> || Filename   ||   Count ||
> | DriverLogger.scala |   1 |
> 'core/src/main/scala/org/apache/spark/util/random'
> || Filename||   Count ||
> | RandomSampler.scala |   1 |






[jira] [Commented] (SPARK-36099) Group exception messages in core/util

2021-07-27 Thread Shockang (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-36099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17388095#comment-17388095
 ] 

Shockang commented on SPARK-36099:
--

I think you should have asked my permission first; otherwise my time is wasted. [~dc-heros]

> Group exception messages in core/util
> -
>
> Key: SPARK-36099
> URL: https://issues.apache.org/jira/browse/SPARK-36099
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core
>Affects Versions: 3.3.0
>Reporter: Allison Wang
>Priority: Major
>
> 'core/src/main/scala/org/apache/spark/util'
> || Filename ||   Count ||
> | AccumulatorV2.scala  |   4 |
> | ClosureCleaner.scala |   1 |
> | DependencyUtils.scala|   1 |
> | KeyLock.scala|   1 |
> | ListenerBus.scala|   1 |
> | NextIterator.scala   |   1 |
> | SerializableBuffer.scala |   2 |
> | ThreadUtils.scala|   4 |
> | Utils.scala  |  16 |
> 'core/src/main/scala/org/apache/spark/util/collection'
> || Filename  ||   Count ||
> | AppendOnlyMap.scala   |   1 |
> | CompactBuffer.scala   |   1 |
> | ImmutableBitSet.scala |   6 |
> | MedianHeap.scala  |   1 |
> | OpenHashSet.scala |   2 |
> 'core/src/main/scala/org/apache/spark/util/io'
> || Filename||   Count ||
> | ChunkedByteBuffer.scala |   1 |
> 'core/src/main/scala/org/apache/spark/util/logging'
> || Filename   ||   Count ||
> | DriverLogger.scala |   1 |
> 'core/src/main/scala/org/apache/spark/util/random'
> || Filename||   Count ||
> | RandomSampler.scala |   1 |






[jira] [Commented] (SPARK-36099) Group exception messages in core/util

2021-07-27 Thread Shockang (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-36099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17388090#comment-17388090
 ] 

Shockang commented on SPARK-36099:
--

You should have told me in advance. I have already written more than 200 lines of code and was preparing to submit a PR… [~dc-heros]

> Group exception messages in core/util
> -
>
> Key: SPARK-36099
> URL: https://issues.apache.org/jira/browse/SPARK-36099
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core
>Affects Versions: 3.3.0
>Reporter: Allison Wang
>Priority: Major
>
> 'core/src/main/scala/org/apache/spark/util'
> || Filename ||   Count ||
> | AccumulatorV2.scala  |   4 |
> | ClosureCleaner.scala |   1 |
> | DependencyUtils.scala|   1 |
> | KeyLock.scala|   1 |
> | ListenerBus.scala|   1 |
> | NextIterator.scala   |   1 |
> | SerializableBuffer.scala |   2 |
> | ThreadUtils.scala|   4 |
> | Utils.scala  |  16 |
> 'core/src/main/scala/org/apache/spark/util/collection'
> || Filename  ||   Count ||
> | AppendOnlyMap.scala   |   1 |
> | CompactBuffer.scala   |   1 |
> | ImmutableBitSet.scala |   6 |
> | MedianHeap.scala  |   1 |
> | OpenHashSet.scala |   2 |
> 'core/src/main/scala/org/apache/spark/util/io'
> || Filename||   Count ||
> | ChunkedByteBuffer.scala |   1 |
> 'core/src/main/scala/org/apache/spark/util/logging'
> || Filename   ||   Count ||
> | DriverLogger.scala |   1 |
> 'core/src/main/scala/org/apache/spark/util/random'
> || Filename||   Count ||
> | RandomSampler.scala |   1 |






[jira] [Commented] (SPARK-36099) Group exception messages in core/util

2021-07-16 Thread Shockang (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-36099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17381874#comment-17381874
 ] 

Shockang commented on SPARK-36099:
--

[~allisonwang-db] Could I work on this issue?

> Group exception messages in core/util
> -
>
> Key: SPARK-36099
> URL: https://issues.apache.org/jira/browse/SPARK-36099
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core
>Affects Versions: 3.3.0
>Reporter: Allison Wang
>Priority: Major
>
> 'core/src/main/scala/org/apache/spark/util'
> || Filename ||   Count ||
> | AccumulatorV2.scala  |   4 |
> | ClosureCleaner.scala |   1 |
> | DependencyUtils.scala|   1 |
> | KeyLock.scala|   1 |
> | ListenerBus.scala|   1 |
> | NextIterator.scala   |   1 |
> | SerializableBuffer.scala |   2 |
> | ThreadUtils.scala|   4 |
> | Utils.scala  |  16 |
> 'core/src/main/scala/org/apache/spark/util/collection'
> || Filename  ||   Count ||
> | AppendOnlyMap.scala   |   1 |
> | CompactBuffer.scala   |   1 |
> | ImmutableBitSet.scala |   6 |
> | MedianHeap.scala  |   1 |
> | OpenHashSet.scala |   2 |
> 'core/src/main/scala/org/apache/spark/util/io'
> || Filename||   Count ||
> | ChunkedByteBuffer.scala |   1 |
> 'core/src/main/scala/org/apache/spark/util/logging'
> || Filename   ||   Count ||
> | DriverLogger.scala |   1 |
> 'core/src/main/scala/org/apache/spark/util/random'
> || Filename||   Count ||
> | RandomSampler.scala |   1 |






[jira] [Commented] (SPARK-35508) job group and description do not apply on broadcasts

2021-07-16 Thread Shockang (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-35508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17381790#comment-17381790
 ] 

Shockang commented on SPARK-35508:
--

Putting this issue on hold; the fix will wait for the complete plan for supporting multiple job groups.

> job group and description do not apply on broadcasts
> 
>
> Key: SPARK-35508
> URL: https://issues.apache.org/jira/browse/SPARK-35508
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Lior Chaga
>Priority: Minor
> Attachments: spark2-image.png, spark3-image.png
>
>
> Given the following code:
> {code:java}
> SparkContext context = new SparkContext("local", "test");
> SparkSession session = new SparkSession(context);
> List<String> strings = Lists.newArrayList("a", "b", "c");
> List<String> otherString = Lists.newArrayList("b", "c", "d");
> Dataset<Row> broadcastedDf = session.createDataset(strings,
> Encoders.STRING()).toDF();
> Dataset<Row> dataframe = session.createDataset(otherString,
> Encoders.STRING()).toDF();
> context.setJobGroup("my group", "my job", false);
> dataframe.join(broadcast(broadcastedDf), "value").count();
> {code}
> The job group and description do not apply to the broadcasted dataframe. 
> With Spark 2.x, broadcast creation is given the same job description as the 
> query itself. 
> This seems to be broken with Spark 3.x.
> See the attached images:
>  !spark3-image.png!  !spark2-image.png! 






[jira] [Commented] (SPARK-35508) job group and description do not apply on broadcasts

2021-07-10 Thread Shockang (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-35508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17378483#comment-17378483
 ] 

Shockang commented on SPARK-35508:
--

It seems that this bug was introduced by this PR: 
[https://github.com/apache/spark/pull/24595], which overrides the job group and 
job description settings made in user code. Let me fix this issue.
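The propagation problem can be illustrated without Spark. As far as I 
understand, SparkContext carries the job group and description in thread-local 
properties (backed by an InheritableThreadLocal), and an InheritableThreadLocal 
is only copied into a child thread at the moment that thread is created. So a 
job submitted from a pre-existing thread pool, such as a broadcast-exchange 
pool whose worker threads were created before setJobGroup was called, never 
sees the properties. A minimal plain-Java sketch of that mechanism (all names 
here are illustrative, not Spark's actual internals):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class JobGroupPropagation {
    // Stand-in for SparkContext's thread-local job-group property.
    static final InheritableThreadLocal<String> jobGroup = new InheritableThreadLocal<>();

    public static void main(String[] args) throws Exception {
        // Pool created BEFORE the job group is set: its worker thread
        // snapshots the (empty) inheritable locals at creation time.
        ExecutorService earlyPool = Executors.newFixedThreadPool(1);
        earlyPool.submit(() -> {}).get(); // force worker-thread creation now

        jobGroup.set("my group");

        Future<String> fromEarlyPool = earlyPool.submit(jobGroup::get);
        System.out.println("early pool sees: " + fromEarlyPool.get()); // null

        // A worker thread created AFTER the set inherits the value.
        ExecutorService latePool = Executors.newFixedThreadPool(1);
        Future<String> fromLatePool = latePool.submit(jobGroup::get);
        System.out.println("late pool sees: " + fromLatePool.get()); // my group

        earlyPool.shutdown();
        latePool.shutdown();
    }
}
```

This is why a fix that forwards the caller's local properties into the 
broadcast job (rather than relying on thread inheritance) is needed.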



