[incubator-uniffle] branch master updated: Rename DISCLAIMER to DISCLAIMER-WIP (#258)

2022-10-10 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git


The following commit(s) were added to refs/heads/master by this push:
 new 13f61cd3 Rename DISCLAIMER to DISCLAIMER-WIP (#258)
13f61cd3 is described below

commit 13f61cd35b130928ddb2d1de8bf0605ed005f741
Author: roryqi 
AuthorDate: Tue Oct 11 09:37:34 2022 +0800

Rename DISCLAIMER to DISCLAIMER-WIP (#258)

Co-authored-by: roryqi 
---
 DISCLAIMER     | 11 ---
 DISCLAIMER-WIP | 21 +
 pom.xml        |  2 +-
 3 files changed, 22 insertions(+), 12 deletions(-)

diff --git a/DISCLAIMER b/DISCLAIMER
deleted file mode 100644
index 3e401182..
--- a/DISCLAIMER
+++ /dev/null
@@ -1,11 +0,0 @@
-Apache Uniffle (incubating) is an effort undergoing incubation at The Apache
-Software Foundation (ASF), sponsored by the Apache Incubator PMC.
-
-Incubation is required of all newly accepted projects until a further review
-indicates that the infrastructure, communications, and decision-making process
-have stabilized in a manner consistent with other successful ASF projects.
-
-While incubation status is not necessarily a reflection of the completeness
-or stability of the code, it does indicate that the project has yet to be
-fully endorsed by the ASF.
-
diff --git a/DISCLAIMER-WIP b/DISCLAIMER-WIP
new file mode 100644
index ..23df370b
--- /dev/null
+++ b/DISCLAIMER-WIP
@@ -0,0 +1,21 @@
+Apache Uniffle (incubating) is an effort undergoing incubation at The Apache
+Software Foundation (ASF), sponsored by the Apache Incubator PMC.
+
+Incubation is required of all newly accepted projects until a further review
+indicates that the infrastructure, communications, and decision-making process
+have stabilized in a manner consistent with other successful ASF projects.
+
+While incubation status is not necessarily a reflection of the completeness
+or stability of the code, it does indicate that the project has yet to be
+fully endorsed by the ASF.
+
+Some of the incubating project’s releases may not be fully compliant with ASF policy.
+For example, releases may have incomplete or un-reviewed licensing conditions.
+What follows is a list of issues the project is currently aware of (this list is likely to be incomplete):
+
+1. Releases may have incomplete licensing conditions
+
+If you are planning to incorporate this work into your product/project,please be aware that
+you will need to conduct a thorough licensing review to determine the overall implications of
+including this work.For the current status of this project through the Apache Incubator,
+visit: https://incubator.apache.org/projects/uniffle.html
diff --git a/pom.xml b/pom.xml
index b8e57475..c18d1be7 100644
--- a/pom.xml
+++ b/pom.xml
@@ -925,7 +925,7 @@
 
   
 LICENSE
-DISCLAIMER
+DISCLAIMER-WIP
 NOTICE
 **/target/**
 src/test/resources/empty



[incubator-uniffle] branch master updated: Change url of total lines badge in README (#222)

2022-09-15 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git


The following commit(s) were added to refs/heads/master by this push:
 new 8be68ab4 Change url of total lines badge in README (#222)
8be68ab4 is described below

commit 8be68ab42de921e36073024c9bd2f08ae4814b23
Author: Kaijie Chen 
AuthorDate: Thu Sep 15 18:49:06 2022 +0800

Change url of total lines badge in README (#222)
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 8794e016..4ab67422 100644
--- a/README.md
+++ b/README.md
@@ -24,7 +24,7 @@ Currently it supports [Apache Spark](https://spark.apache.org) and [MapReduce](h
 
 [![Build](https://github.com/apache/incubator-uniffle/actions/workflows/build.yml/badge.svg?branch=master=push)](https://github.com/apache/incubator-uniffle/actions/workflows/build.yml)
 [![Codecov](https://codecov.io/gh/apache/incubator-uniffle/branch/master/graph/badge.svg)](https://codecov.io/gh/apache/incubator-uniffle)
-[![Total Lines](https://img.shields.io/tokei/lines/github/apache/incubator-uniffle)](https://github.com/apache/incubator-uniffle)
+[![](https://sloc.xyz/github/apache/incubator-uniffle)](https://github.com/apache/incubator-uniffle)
 [![Code Quality](https://img.shields.io/lgtm/grade/java/github/apache/incubator-uniffle?label=code%20quality)](https://lgtm.com/projects/g/apache/incubator-uniffle/)
 [![License](https://img.shields.io/github/license/apache/incubator-uniffle)](https://github.com/apache/incubator-uniffle/blob/master/LICENSE)
 



[incubator-uniffle] branch master updated: Add more badges in README (#219)

2022-09-15 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git


The following commit(s) were added to refs/heads/master by this push:
 new 3b210c0c Add more badges in README (#219)
3b210c0c is described below

commit 3b210c0cf2ea5e9cd23ce759a267c6c5b3eb302d
Author: Kaijie Chen 
AuthorDate: Thu Sep 15 15:51:43 2022 +0800

Add more badges in README (#219)
---
 README.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/README.md b/README.md
index e3521acc..8794e016 100644
--- a/README.md
+++ b/README.md
@@ -24,6 +24,9 @@ Currently it supports [Apache Spark](https://spark.apache.org) and [MapReduce](h
 
 [![Build](https://github.com/apache/incubator-uniffle/actions/workflows/build.yml/badge.svg?branch=master=push)](https://github.com/apache/incubator-uniffle/actions/workflows/build.yml)
 [![Codecov](https://codecov.io/gh/apache/incubator-uniffle/branch/master/graph/badge.svg)](https://codecov.io/gh/apache/incubator-uniffle)
+[![Total Lines](https://img.shields.io/tokei/lines/github/apache/incubator-uniffle)](https://github.com/apache/incubator-uniffle)
+[![Code Quality](https://img.shields.io/lgtm/grade/java/github/apache/incubator-uniffle?label=code%20quality)](https://lgtm.com/projects/g/apache/incubator-uniffle/)
+[![License](https://img.shields.io/github/license/apache/incubator-uniffle)](https://github.com/apache/incubator-uniffle/blob/master/LICENSE)
 
 ## Architecture
 ![Rss Architecture](docs/asset/rss_architecture.png)



[incubator-uniffle] branch master updated: Add Notice and DISCLAMER file (#215)

2022-09-14 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git


The following commit(s) were added to refs/heads/master by this push:
 new dcdf8ae5 Add Notice and DISCLAMER file (#215)
dcdf8ae5 is described below

commit dcdf8ae55a774adbd5126919868b4fa5376f99ab
Author: frankliee 
AuthorDate: Wed Sep 14 15:25:21 2022 +0800

Add Notice and DISCLAMER file (#215)
---
 DISCLAIMER | 2 +-
 NOTICE | 7 +++
 pom.xml| 1 +
 3 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/DISCLAIMER b/DISCLAIMER
index 805a8e84..3e401182 100644
--- a/DISCLAIMER
+++ b/DISCLAIMER
@@ -1,4 +1,4 @@
-Apache Uniffle (Incubating) is an effort undergoing incubation at The Apache
+Apache Uniffle (incubating) is an effort undergoing incubation at The Apache
 Software Foundation (ASF), sponsored by the Apache Incubator PMC.
 
 Incubation is required of all newly accepted projects until a further review
diff --git a/NOTICE b/NOTICE
new file mode 100644
index ..2cfb9fb7
--- /dev/null
+++ b/NOTICE
@@ -0,0 +1,7 @@
+Apache Uniffle (incubating)
+Copyright 2022 and onwards The Apache Software Foundation.
+
+This product includes software developed at
+The Apache Software Foundation (https://www.apache.org/).
+
+The initial codebase was donated to the ASF by Tencent, copyright 2020-2022.
diff --git a/pom.xml b/pom.xml
index 7db56f06..327d614f 100644
--- a/pom.xml
+++ b/pom.xml
@@ -926,6 +926,7 @@
   
 LICENSE
 DISCLAIMER
+NOTICE
 **/target/**
 src/test/resources/empty
 **/dependency-reduced-pom.xml



[incubator-uniffle-website] branch master updated: Update Slack invitation link (#4)

2022-09-08 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle-website.git


The following commit(s) were added to refs/heads/master by this push:
 new e2fb0e5  Update Slack invitation link (#4)
e2fb0e5 is described below

commit e2fb0e5f1ca9c6d42e4b6b862bae2aed3bebd714
Author: Kaijie Chen 
AuthorDate: Thu Sep 8 19:28:40 2022 +0800

Update Slack invitation link (#4)
---
 docusaurus.config.js | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docusaurus.config.js b/docusaurus.config.js
index a1aaf70..66b86e5 100644
--- a/docusaurus.config.js
+++ b/docusaurus.config.js
@@ -117,7 +117,7 @@ const config = {
 items: [
 {
 label: 'Slack',
-href: 'https://github.com/apache/incubator-uniffle/issues',
+href: 'https://join.slack.com/t/the-asf/shared_invite/zt-1fm9561yr-uzTpjqg3jf5nxSJV5AE3KQ',
 },
 {
 label: 'Issue Tracker',



[incubator-uniffle-website] 01/01: first commit

2022-08-25 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle-website.git

commit d7af0cfa6d7a831188b8f6f79c7626ede2e600d9
Author: Jerry Shao 
AuthorDate: Fri Aug 26 11:42:04 2022 +0800

first commit
---
 README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.md b/README.md
new file mode 100644
index 000..ecb1a1c
--- /dev/null
+++ b/README.md
@@ -0,0 +1 @@
+# incubator-uniffle-website



[incubator-uniffle] branch master updated: [TYPO] Fix misspelled word "integration" (#34)

2022-07-05 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git


The following commit(s) were added to refs/heads/master by this push:
 new 49f1a16  [TYPO] Fix misspelled word "integration" (#34)
49f1a16 is described below

commit 49f1a16a3bcf33429307b6326d77f782ec9eb79d
Author: Kaijie Chen 
AuthorDate: Wed Jul 6 10:26:10 2022 +0800

[TYPO] Fix misspelled word "integration" (#34)
---
 integration-test/common/pom.xml   | 2 +-
 integration-test/mr/pom.xml   | 2 +-
 integration-test/spark-common/pom.xml | 2 +-
 integration-test/spark2/pom.xml   | 2 +-
 integration-test/spark3/pom.xml   | 2 +-
 5 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/integration-test/common/pom.xml b/integration-test/common/pom.xml
index deeb403..c4dc048 100644
--- a/integration-test/common/pom.xml
+++ b/integration-test/common/pom.xml
@@ -31,7 +31,7 @@
 rss-integration-common-test
 0.6.0-snapshot
 jar
-Apache Uniffle Intergration Test (Common)
+Apache Uniffle Integration Test (Common)
 
 
 
diff --git a/integration-test/mr/pom.xml b/integration-test/mr/pom.xml
index cc9e9c1..2199759 100644
--- a/integration-test/mr/pom.xml
+++ b/integration-test/mr/pom.xml
@@ -30,7 +30,7 @@
 rss-integration-mr-test
 0.6.0-snapshot
 jar
-Apache Uniffle Intergration Test (MapReduce)
+Apache Uniffle Integration Test (MapReduce)
 
 
 
diff --git a/integration-test/spark-common/pom.xml b/integration-test/spark-common/pom.xml
index 3a7b56a..42890d3 100644
--- a/integration-test/spark-common/pom.xml
+++ b/integration-test/spark-common/pom.xml
@@ -31,7 +31,7 @@
   rss-integration-spark-common-test
   0.6.0-snapshot
   jar
-  Apache Uniffle Intergration Test (Spark Common)
+  Apache Uniffle Integration Test (Spark Common)
 
   
 
diff --git a/integration-test/spark2/pom.xml b/integration-test/spark2/pom.xml
index 08557d8..c384fda 100644
--- a/integration-test/spark2/pom.xml
+++ b/integration-test/spark2/pom.xml
@@ -31,7 +31,7 @@
   rss-integration-spark2-test
   0.6.0-snapshot
   jar
-  Apache Uniffle Intergration Test (Spark 2)
+  Apache Uniffle Integration Test (Spark 2)
 
   
 
diff --git a/integration-test/spark3/pom.xml b/integration-test/spark3/pom.xml
index c166979..0075522 100644
--- a/integration-test/spark3/pom.xml
+++ b/integration-test/spark3/pom.xml
@@ -31,7 +31,7 @@
 rss-integration-spark3-test
 0.6.0-snapshot
 jar
-Apache Uniffle Intergration Test (Spark 3)
+Apache Uniffle Integration Test (Spark 3)
 
 
 



[incubator-uniffle] branch master updated: Improve asf.yaml to reduce the notifications (#25)

2022-07-05 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git


The following commit(s) were added to refs/heads/master by this push:
 new 0d7dfdb  Improve asf.yaml to reduce the notifications (#25)
0d7dfdb is described below

commit 0d7dfdbcc382aee4bdfa6924afd2bfe56d0a0bf5
Author: Saisai Shao 
AuthorDate: Tue Jul 5 15:17:18 2022 +0800

Improve asf.yaml to reduce the notifications (#25)
---
 .asf.yaml | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/.asf.yaml b/.asf.yaml
index bff9c79..5137082 100644
--- a/.asf.yaml
+++ b/.asf.yaml
@@ -23,6 +23,7 @@ github:
 - mapreduce
 - shuffle
 - remote-shuffle-service
+- rss
   features:
 # Enable wiki for documentation
 wiki: true
@@ -43,6 +44,6 @@ github:
 required_approving_review_count: 1
 
   notifications:
-  commits: notificati...@uniffle.apache.org
-  issues: d...@uniffle.apache.org
-  pullrequests: notificati...@uniffle.apache.org
+commits: commits@uniffle.apache.org
+issues: d...@uniffle.apache.org
+pullrequests: iss...@uniffle.apache.org



[incubator-uniffle] branch master updated: Add asf yaml

2022-07-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git


The following commit(s) were added to refs/heads/master by this push:
 new e5dd0ea  Add asf yaml
e5dd0ea is described below

commit e5dd0eaf1651680420f081b3fc456f1c7be3d316
Author: Jerry Shao 
AuthorDate: Fri Jul 1 15:56:56 2022 +0800

Add asf yaml
---
 .asf.yaml | 48 
 1 file changed, 48 insertions(+)

diff --git a/.asf.yaml b/.asf.yaml
new file mode 100644
index 000..bff9c79
--- /dev/null
+++ b/.asf.yaml
@@ -0,0 +1,48 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+github:
+  description: Uniffle is a high performance, general purpose Remote Shuffle Service.
+  homepage: https://uniffle.apache.org/
+  labels:
+- spark
+- mapreduce
+- shuffle
+- remote-shuffle-service
+  features:
+# Enable wiki for documentation
+wiki: true
+# Enable issues management
+issues: true
+# Enable projects for project management boards
+projects: true
+  enabled_merge_buttons:
+squash: true
+merge: false
+rebase: false
+  protected_branches:
+master:
+  required_status_checks:
+strict: true
+  required_pull_request_reviews:
+dismiss_stale_reviews: true
+required_approving_review_count: 1
+
+  notifications:
+  commits: notificati...@uniffle.apache.org
+  issues: d...@uniffle.apache.org
+  pullrequests: notificati...@uniffle.apache.org



[incubator-uniffle] branch branch-0.3.0 created (now 1d69058)

2022-07-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a change to branch branch-0.3.0
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git


  at 1d69058  [Bugfix] Fix uncorrect index file (#92) (#93)

This branch includes the following new commits:

 new 1d69058  [Bugfix] Fix uncorrect index file (#92) (#93)

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.




[incubator-uniffle] 01/01: [Bugfix] Fix uncorrect index file (#92) (#93)

2022-07-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch branch-0.3.0
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git

commit 1d69058c32f8f943e1694cfe182fb19d55943a11
Author: roryqi 
AuthorDate: Tue Mar 8 17:21:55 2022 +0800

[Bugfix] Fix uncorrect index file (#92) (#93)

backport 0.3.0
### What changes were proposed in this pull request?
Modify the method that calculate the offset in the index file.

### Why are the changes needed?
If we don't have this patch, we run 10TB tpcds, query24a will fail.
Screenshot: https://user-images.githubusercontent.com/8159038/157178756-d8a39b3f-0ea6-4864-ac68-ee382a88bb0f.png
When we write a lot of data to dataOutputStream, dataOutputStream.size() won't increase again; dataOutputStream.size() will
always be Integer.MAX_VALUE.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Add new uts.

Co-authored-by: roryqi 
---
 .../rss/storage/handler/impl/LocalFileWriter.java   |  6 ++
 .../rss/storage/handler/impl/LocalFileHandlerTest.java  | 17 +
 2 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/storage/src/main/java/com/tencent/rss/storage/handler/impl/LocalFileWriter.java b/storage/src/main/java/com/tencent/rss/storage/handler/impl/LocalFileWriter.java
index 10185a4..609db7e 100644
--- a/storage/src/main/java/com/tencent/rss/storage/handler/impl/LocalFileWriter.java
+++ b/storage/src/main/java/com/tencent/rss/storage/handler/impl/LocalFileWriter.java
@@ -30,21 +30,19 @@ public class LocalFileWriter implements Closeable {
 
   private DataOutputStream dataOutputStream;
   private FileOutputStream fileOutputStream;
-  private long initSize;
   private long nextOffset;
 
   public LocalFileWriter(File file) throws IOException {
 fileOutputStream = new FileOutputStream(file, true);
 // init fsDataOutputStream
 dataOutputStream = new DataOutputStream(fileOutputStream);
-initSize = file.length();
-nextOffset = initSize;
+nextOffset = file.length();
   }
 
   public void writeData(byte[] data) throws IOException {
 if (data != null && data.length > 0) {
   dataOutputStream.write(data);
-  nextOffset = initSize + dataOutputStream.size();
+  nextOffset = nextOffset + data.length;
 }
   }
 
diff --git a/storage/src/test/java/com/tencent/rss/storage/handler/impl/LocalFileHandlerTest.java b/storage/src/test/java/com/tencent/rss/storage/handler/impl/LocalFileHandlerTest.java
index 32b7ace..846ab20 100644
--- a/storage/src/test/java/com/tencent/rss/storage/handler/impl/LocalFileHandlerTest.java
+++ b/storage/src/test/java/com/tencent/rss/storage/handler/impl/LocalFileHandlerTest.java
@@ -39,6 +39,7 @@ import com.tencent.rss.storage.handler.api.ServerReadHandler;
 import com.tencent.rss.storage.handler.api.ShuffleWriteHandler;
 import com.tencent.rss.storage.util.ShuffleStorageUtils;
 import java.io.File;
+import java.io.IOException;
 import java.util.List;
 import java.util.Map;
 import java.util.Random;
@@ -53,6 +54,7 @@ public class LocalFileHandlerTest {
   @Test
   public void writeTest() throws Exception {
 File tmpDir = Files.createTempDir();
+tmpDir.deleteOnExit();
 File dataDir1 = new File(tmpDir, "data1");
 File dataDir2 = new File(tmpDir, "data2");
 String[] basePaths = new String[]{dataDir1.getAbsolutePath(),
@@ -111,6 +113,21 @@ public class LocalFileHandlerTest {
 }
   }
 
+  @Test
+  public void writeBigDataTest() throws IOException  {
+File tmpDir = Files.createTempDir();
+tmpDir.deleteOnExit();
+File writeFile = new File(tmpDir, "writetest");
+LocalFileWriter writer = new LocalFileWriter(writeFile);
+int  size = Integer.MAX_VALUE / 100;
+byte[] data = new byte[size];
+for (int i = 0; i < 200; i++) {
+  writer.writeData(data);
+}
+long totalSize = 200L * size;
+assertEquals(writer.nextOffset(), totalSize);
+  }
+
 
   private void writeTestData(
   ShuffleWriteHandler writeHandler,
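The root cause above is that java.io.DataOutputStream keeps its byte counter in an int and pins it at Integer.MAX_VALUE once it overflows, so any offset derived from dataOutputStream.size() stops growing past 2 GB. The sketch below mirrors the approach of the patch, advancing a long offset by data.length on every write; the class and method names are illustrative, not the actual com.tencent.rss.storage.handler.impl.LocalFileWriter.

```java
import java.io.Closeable;
import java.io.DataOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

// Minimal sketch of the fix: track the next write offset in a long advanced by
// data.length, instead of relying on DataOutputStream.size(), whose int counter
// is pinned at Integer.MAX_VALUE after it overflows.
public class OffsetTrackingWriter implements Closeable {

  private final DataOutputStream dataOutputStream;
  private long nextOffset;

  public OffsetTrackingWriter(File file) throws IOException {
    dataOutputStream = new DataOutputStream(new FileOutputStream(file, true));
    nextOffset = file.length();      // resume from the current file size
  }

  public void writeData(byte[] data) throws IOException {
    if (data != null && data.length > 0) {
      dataOutputStream.write(data);
      nextOffset += data.length;     // long arithmetic, safe past 2 GB
    }
  }

  public long nextOffset() {
    return nextOffset;
  }

  @Override
  public void close() throws IOException {
    dataOutputStream.close();
  }
}
```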



[incubator-uniffle] branch branch-0.4.0 created (now 6a4295a)

2022-07-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a change to branch branch-0.4.0
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git


  at 6a4295a  upgrade to 0.4.1

No new revisions were added by this update.



[incubator-uniffle] 04/04: [Bugfix] [0.5] Fix MR don't have remote storage information when we use dynamic conf and MEMORY_LOCALE_HDFS storageType (#195) (#196)

2022-07-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch branch-0.5.0
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git

commit 55cb16fb6b9f494f657068721ca81c74043a4bb9
Author: roryqi 
AuthorDate: Thu Jun 23 10:52:59 2022 +0800

[Bugfix] [0.5] Fix MR don't have remote storage information when we use dynamic conf and MEMORY_LOCALE_HDFS storageType (#195) (#196)

backport 0.5

### What changes were proposed in this pull request?
We should acquire the storageType from extraConf.
### Why are the changes needed?
If we don't have this patch, MR doesn't work when we use dynamic conf and MEMORY_LOCALE_HDFS storageType.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Manual test
---
 .../main/java/org/apache/hadoop/mapreduce/v2/app/RssMRAppMaster.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/client-mr/src/main/java/org/apache/hadoop/mapreduce/v2/app/RssMRAppMaster.java b/client-mr/src/main/java/org/apache/hadoop/mapreduce/v2/app/RssMRAppMaster.java
index 7511104..976b03c 100644
--- a/client-mr/src/main/java/org/apache/hadoop/mapreduce/v2/app/RssMRAppMaster.java
+++ b/client-mr/src/main/java/org/apache/hadoop/mapreduce/v2/app/RssMRAppMaster.java
@@ -180,7 +180,7 @@ public class RssMRAppMaster extends MRAppMaster {
 RssMRUtils.applyDynamicClientConf(extraConf, clusterClientConf);
   }
 
-  String storageType = conf.get(RssMRConfig.RSS_STORAGE_TYPE);
+  String storageType = RssMRUtils.getString(extraConf, conf, RssMRConfig.RSS_STORAGE_TYPE);
   RemoteStorageInfo defaultRemoteStorage =
   new RemoteStorageInfo(conf.get(RssMRConfig.RSS_REMOTE_STORAGE_PATH, ""));
   RemoteStorageInfo remoteStorage = ClientUtils.fetchRemoteStorage(
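The one-line change above switches the storage type lookup so that the dynamic (coordinator-provided) client configuration is consulted before the plain job configuration. The sketch below illustrates that lookup order under the assumption that RssMRUtils.getString(extraConf, conf, key) behaves as a "dynamic conf first, job conf as fallback" helper; the getString shown here is a hypothetical stand-in, not the real implementation.

```java
import org.apache.hadoop.conf.Configuration;

// Illustrative sketch of the lookup order described in the commit message:
// read a key from the dynamic (extra) configuration first, then fall back to
// the job configuration. Not the real RssMRUtils.
public final class ConfLookup {

  private ConfLookup() {
  }

  public static String getString(Configuration extraConf, Configuration conf, String key) {
    String value = extraConf.get(key);   // dynamic client conf wins
    if (value == null) {
      value = conf.get(key);             // fall back to the job conf
    }
    return value;
  }

  public static void main(String[] args) {
    Configuration extraConf = new Configuration(false);
    Configuration jobConf = new Configuration(false);
    jobConf.set("mapreduce.rss.storage.type", "MEMORY_LOCALFILE");
    extraConf.set("mapreduce.rss.storage.type", "MEMORY_LOCALFILE_HDFS");
    // Prints MEMORY_LOCALFILE_HDFS: the dynamic conf overrides the job conf.
    System.out.println(getString(extraConf, jobConf, "mapreduce.rss.storage.type"));
  }
}
```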



[incubator-uniffle] 01/04: [Bugfix] [0.5] Fix spark2 executor stop NPE problem (#188)

2022-07-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch branch-0.5.0
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git

commit 59856687f8e17b20f206815cbcf31bbbaacf4292
Author: roryqi 
AuthorDate: Wed Jun 22 14:50:40 2022 +0800

[Bugfix] [0.5] Fix spark2 executor stop NPE problem (#188)

backport 0.5.0

### What changes were proposed in this pull request?
We need to check whether heartbeatExecutorService is null when we stop it.

### Why are the changes needed?
PR #177 introduced this problem: when we run Spark applications on our cluster, the executor will throw NPE when method `stop` is called.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Manual test
---
 .../src/main/java/org/apache/spark/shuffle/RssShuffleManager.java | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/client-spark/spark2/src/main/java/org/apache/spark/shuffle/RssShuffleManager.java b/client-spark/spark2/src/main/java/org/apache/spark/shuffle/RssShuffleManager.java
index f1f2a36..2970489 100644
--- a/client-spark/spark2/src/main/java/org/apache/spark/shuffle/RssShuffleManager.java
+++ b/client-spark/spark2/src/main/java/org/apache/spark/shuffle/RssShuffleManager.java
@@ -370,7 +370,9 @@ public class RssShuffleManager implements ShuffleManager {
 
   @Override
   public void stop() {
-heartBeatScheduledExecutorService.shutdownNow();
+if (heartBeatScheduledExecutorService != null) {
+  heartBeatScheduledExecutorService.shutdownNow();
+}
 threadPoolExecutor.shutdownNow();
 shuffleWriteClient.close();
   }
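The guard above only matters because heartBeatScheduledExecutorService is not created on every code path (per the commit message, executors hit the null case), so an unconditional shutdownNow() throws a NullPointerException in stop(). A minimal, self-contained sketch of the same null-guarded shutdown pattern, with illustrative names rather than the real RssShuffleManager:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

// Sketch: the heartbeat executor may never have been created, so stop() must
// null-check it before calling shutdownNow().
public class StoppableComponent {

  private ScheduledExecutorService heartBeatScheduledExecutorService;

  public StoppableComponent(boolean startHeartbeat) {
    if (startHeartbeat) {
      heartBeatScheduledExecutorService = Executors.newSingleThreadScheduledExecutor();
    }
  }

  public void stop() {
    if (heartBeatScheduledExecutorService != null) {
      heartBeatScheduledExecutorService.shutdownNow();
    }
  }

  public static void main(String[] args) {
    new StoppableComponent(false).stop();   // no NPE even though the executor was never created
    new StoppableComponent(true).stop();
  }
}
```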



[incubator-uniffle] 12/17: [Improvement] Move detailed client configuration to individual doc (#201)

2022-07-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git

commit 2c1c554bb9a47a25e56164d1af2efa1acff66cd8
Author: frankliee 
AuthorDate: Tue Jun 28 11:02:00 2022 +0800

[Improvement] Move detailed client configuration to individual doc (#201)

 ### What changes were proposed in this pull request?
1.  Put detailed configuration to doc subdirectory.
2. Add doc for client quorum setting.

### Why are the changes needed?
Update doc

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Just doc.
---
 README.md |  22 +--
 docs/client_guide.md  | 148 ++
 docs/coordinator_guide.md |   8 +++
 docs/index.md |   8 +++
 docs/pageA.md |   7 ---
 docs/server_guide.md  |   7 +++
 6 files changed, 173 insertions(+), 27 deletions(-)

diff --git a/README.md b/README.md
index 51a1ed0..eba4fd3 100644
--- a/README.md
+++ b/README.md
@@ -233,27 +233,9 @@ The important configuration is listed as following.
 |rss.server.flush.cold.storage.threshold.size|64M| The threshold of data size for LOACALFILE and HDFS if MEMORY_LOCALFILE_HDFS is used|
 
 
-### Spark Client
+### Shuffle Client
 
-|Property Name|Default|Description|
-|---|---|---|
-|spark.rss.writer.buffer.size|3m|Buffer size for single partition data|
-|spark.rss.writer.buffer.spill.size|128m|Buffer size for total partition data|
-|spark.rss.coordinator.quorum|-|Coordinator quorum|
-|spark.rss.storage.type|-|Supports MEMORY_LOCALFILE, MEMORY_HDFS, MEMORY_LOCALFILE_HDFS|
-|spark.rss.client.send.size.limit|16m|The max data size sent to shuffle server|
-|spark.rss.client.read.buffer.size|32m|The max data size read from storage|
-|spark.rss.client.send.threadPool.size|10|The thread size for send shuffle data to shuffle server|
-
-
-### MapReduce Client
-
-|Property Name|Default|Description|
-|---|---|---|
-|mapreduce.rss.coordinator.quorum|-|Coordinator quorum|
-|mapreduce.rss.storage.type|-|Supports MEMORY_LOCALFILE, MEMORY_HDFS, MEMORY_LOCALFILE_HDFS|
-|mapreduce.rss.client.max.buffer.size|3k|The max buffer size in map side|
-|mapreduce.rss.client.read.buffer.size|32m|The max data size read from storage|
+For more details of advanced configuration, please see [Firestorm Shuffle Client Guide](https://github.com/Tencent/Firestorm/blob/master/docs/client_guide.md).
 
 ## LICENSE
 
diff --git a/docs/client_guide.md b/docs/client_guide.md
new file mode 100644
index 000..95b960b
--- /dev/null
+++ b/docs/client_guide.md
@@ -0,0 +1,148 @@
+---
+layout: page
+displayTitle: Firestorm Shuffle Client Guide
+title: Firestorm Shuffle Client Guide
+description: Firestorm Shuffle Client Guide
+---
+# Firestorm Shuffle Client Guide
+
+Firestorm is designed as a unified shuffle engine for multiple computing frameworks, including Apache Spark and Apache Hadoop.
+Firestorm has provided pluggable client plugins to enable remote shuffle in Spark and MapReduce.
+
+## Deploy
+This document will introduce how to deploy Firestorm client plugins with Spark and MapReduce.
+
+### Deploy Spark Client Plugin
+
+1. Add client jar to Spark classpath, eg, SPARK_HOME/jars/
+
+   The jar for Spark2 is located in /jars/client/spark2/rss-client-X-shaded.jar
+
+   The jar for Spark3 is located in /jars/client/spark3/rss-client-X-shaded.jar
+
+2. Update Spark conf to enable Firestorm, eg,
+
+   ```
+   spark.shuffle.manager org.apache.spark.shuffle.RssShuffleManager
+   spark.rss.coordinator.quorum :1,:1
+   # Note: For Spark2, spark.sql.adaptive.enabled should be false because Spark2 doesn't support AQE.
+   ```
+
+### Support Spark Dynamic Allocation
+
+To support spark dynamic allocation with Firestorm, spark code should be updated.
+There are 2 patches for spark-2.4.6 and spark-3.1.2 in spark-patches folder for reference.
+
+After apply the patch and rebuild spark, add following configuration in spark conf to enable dynamic allocation:
+  ```
+  spark.shuffle.service.enabled false
+  spark.dynamicAllocation.enabled true
+  ```
+
+### Deploy MapReduce Client Plugin
+
+1. Add client jar to the classpath of each NodeManager, e.g., /share/hadoop/mapreduce/
+
+The jar for MapReduce is located in /jars/client/mr/rss-client-mr-X-shaded.jar
+
+2. Update MapReduce conf to enable Firestorm, eg,
+
+   ```
+   -Dmapreduce.rss.coordinator.quorum=:1,:1
+   -Dyarn.app.mapreduce.am.command-opts=org.apache.hadoop.mapreduce.v2.app.RssMRAppMaster
+   -Dmapreduce.job.map.output.collector.class=org.apache.hadoop.mapred.RssMapOutputCollector
+   -Dmapreduce.job.reduce.shuffle.consumer.plugin.class=org.apache.hadoop.mapreduce.task.reduce.RssShuffle
+   ```
+Note that the RssMRAppMaster will automatically disable slow start (i.e., `mapreduce.job.reduce.slowstart.completedmaps

[incubator-uniffle] 02/04: [Doc] Update readme with features like multiple remote storage support etc (#192)

2022-07-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch branch-0.5.0
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git

commit af92c1ca1339d3353ba3f80d5d97ee0658977397
Author: Colin 
AuthorDate: Wed Jun 22 17:16:53 2022 +0800

[Doc] Update readme with features like multiple remote storage support etc (#192)

### What changes were proposed in this pull request?
Update Readme for latest features, eg, multiple remote storage support, dynamic client conf etc.

### Why are the changes needed?
Doc should be updated


### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
No need
---
 README.md | 46 ++
 1 file changed, 34 insertions(+), 12 deletions(-)

diff --git a/README.md b/README.md
index e134f0f..50903ce 100644
--- a/README.md
+++ b/README.md
@@ -11,7 +11,7 @@ Coordinator will collect status of shuffle server and do the assignment for the
 
 Shuffle server will receive the shuffle data, merge them and write to storage.
 
-Depend on different situation, Firestorm supports Memory & Local, Memory & Remote Storage(eg, HDFS), Local only, Remote Storage only.
+Depend on different situation, Firestorm supports Memory & Local, Memory & Remote Storage(eg, HDFS), Memory & Local & Remote Storage(recommendation for production environment).
 
 ## Shuffle Process with Firestorm
 
@@ -74,9 +74,25 @@ rss-xxx.tgz will be generated for deployment
  rss.coordinator.server.heartbeat.timeout 3
  rss.coordinator.app.expired 6
  rss.coordinator.shuffle.nodes.max 5
- rss.coordinator.exclude.nodes.file.path RSS_HOME/conf/exclude_nodes
-   ```
-4. start Coordinator
+ # enable dynamicClientConf, and coordinator will be responsible for most of client conf
+ rss.coordinator.dynamicClientConf.enabled true
+ # config the path of client conf
+ rss.coordinator.dynamicClientConf.path /conf/dynamic_client.conf
+ # config the path of excluded shuffle server
+ rss.coordinator.exclude.nodes.file.path /conf/exclude_nodes
+   ```
+4. update /conf/dynamic_client.conf, rss client will get default conf from coordinator eg,
+   ```
+# MEMORY_LOCALFILE_HDFS is recommandation for production environment
+rss.storage.type MEMORY_LOCALFILE_HDFS
+# multiple remote storages are supported, and client will get assignment from coordinator
+rss.coordinator.remote.storage.path hdfs://cluster1/path,hdfs://cluster2/path
+rss.writer.require.memory.retryMax 1200
+rss.client.retry.max 100
+rss.writer.send.check.timeout 60
+rss.client.read.buffer.size 14m
+   ```
+5. start Coordinator
```
 bash RSS_HOME/bin/start-coordnator.sh
```
@@ -90,14 +106,17 @@ rss-xxx.tgz will be generated for deployment
  HADOOP_HOME=
  XMX_SIZE="80g"
```
-3. update RSS_HOME/conf/server.conf, the following demo is for memory + local storage only, eg,
+3. update RSS_HOME/conf/server.conf, eg,
```
  rss.rpc.server.port 1
  rss.jetty.http.port 19998
  rss.rpc.executor.size 2000
- rss.storage.type MEMORY_LOCALFILE
+ # it should be configed the same as in coordinator
+ rss.storage.type MEMORY_LOCALFILE_HDFS
  rss.coordinator.quorum :1,:1
+ # local storage path for shuffle server
  rss.storage.basePath /data1/rssdata,/data2/rssdata
+ # it's better to config thread num according to local disk num
  rss.server.flush.thread.alive 5
  rss.server.flush.threadPool.size 10
  rss.server.buffer.capacity 40g
@@ -108,6 +127,10 @@ rss-xxx.tgz will be generated for deployment
  rss.server.preAllocation.expired 12
  rss.server.commit.timeout 60
  rss.server.app.expired.withoutHeartbeat 12
+ # note: the default value of rss.server.flush.cold.storage.threshold.size is 64m
+ # there will be no data written to DFS if set it as 100g even rss.storage.type=MEMORY_LOCALFILE_HDFS
+ # please set proper value if DFS is used, eg, 64m, 128m.
+ rss.server.flush.cold.storage.threshold.size 100g
```
 4. start Shuffle Server
```
@@ -121,12 +144,11 @@ rss-xxx.tgz will be generated for deployment
 
   The jar for Spark3 is located in /jars/client/spark3/rss-client-X-shaded.jar
 
-2. Update Spark conf to enable Firestorm, the following demo is for local storage only, eg,
+2. Update Spark conf to enable Firestorm, eg,
 
```
spark.shuffle.manager org.apache.spark.shuffle.RssShuffleManager
spark.rss.coordinator.quorum :1,:1
-   spark.rss.storage.type MEMORY_LOCALFILE
```
 
 ### Support Spark dynamic allocation
@@ -140,17 +162,16 @@ After apply the patch and rebuild spark, add following configuration in spark co
   spark.dynamicAllocation.enabled true
   ```
 
-## Deploy MapReduce Client
+### Deploy MapReduce Client
 
 1. Add c

[incubator-uniffle] branch branch-0.5.0 created (now 55cb16f)

2022-07-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a change to branch branch-0.5.0
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git


  at 55cb16f  [Bugfix] [0.5] Fix MR don't have remote storage information when we use dynamic conf and MEMORY_LOCALE_HDFS storageType (#195) (#196)

This branch includes the following new commits:

 new 5985668  [Bugfix] [0.5] Fix spark2 executor stop NPE problem (#188)
 new af92c1c  [Doc] Update readme with features like multiple remote storage support etc (#192)
 new e049863  upgrade to 0.5.0 (#189)
 new 55cb16f  [Bugfix] [0.5] Fix MR don't have remote storage information when we use dynamic conf and MEMORY_LOCALE_HDFS storageType (#195) (#196)

The 4 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.




[incubator-uniffle] 03/04: upgrade to 0.5.0 (#189)

2022-07-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch branch-0.5.0
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git

commit e049863dec022d86a3aa95706c2bb93896a94c4f
Author: roryqi 
AuthorDate: Wed Jun 22 17:17:55 2022 +0800

upgrade to 0.5.0 (#189)

### What changes were proposed in this pull request?
upgrade version number

### Why are the changes needed?
upgrade to 0.5.0

### Does this PR introduce _any_ user-facing change?
no

### How was this patch tested?
no
---
 client-mr/pom.xml | 4 ++--
 client-spark/common/pom.xml   | 4 ++--
 client-spark/spark2/pom.xml   | 4 ++--
 client-spark/spark3/pom.xml   | 4 ++--
 client/pom.xml| 4 ++--
 common/pom.xml| 2 +-
 coordinator/pom.xml   | 2 +-
 integration-test/common/pom.xml   | 4 ++--
 integration-test/mr/pom.xml   | 4 ++--
 integration-test/spark-common/pom.xml | 4 ++--
 integration-test/spark2/pom.xml   | 4 ++--
 integration-test/spark3/pom.xml   | 4 ++--
 internal-client/pom.xml   | 4 ++--
 pom.xml   | 2 +-
 proto/pom.xml | 2 +-
 server/pom.xml| 2 +-
 storage/pom.xml   | 2 +-
 17 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/client-mr/pom.xml b/client-mr/pom.xml
index c15ffba..1dc433e 100644
--- a/client-mr/pom.xml
+++ b/client-mr/pom.xml
@@ -23,13 +23,13 @@
 
 rss-main
 com.tencent.rss
-0.5.0-snapshot
+0.5.0
 ../pom.xml
 
 
 com.tencent.rss
 rss-client-mr
-0.5.0-snapshot
+0.5.0
 jar
 
 
diff --git a/client-spark/common/pom.xml b/client-spark/common/pom.xml
index 61c4b1f..fdf3b84 100644
--- a/client-spark/common/pom.xml
+++ b/client-spark/common/pom.xml
@@ -25,12 +25,12 @@
 
 rss-main
 com.tencent.rss
-0.5.0-snapshot
+0.5.0
 ../../pom.xml
 
 
 rss-client-spark-common
-0.5.0-snapshot
+0.5.0
 jar
 
 
diff --git a/client-spark/spark2/pom.xml b/client-spark/spark2/pom.xml
index 41a4432..bef2028 100644
--- a/client-spark/spark2/pom.xml
+++ b/client-spark/spark2/pom.xml
@@ -24,13 +24,13 @@
   
 com.tencent.rss
 rss-main
-0.5.0-snapshot
+0.5.0
 ../../pom.xml
   
 
   com.tencent.rss
   rss-client-spark2
-  0.5.0-snapshot
+  0.5.0
   jar
 
   
diff --git a/client-spark/spark3/pom.xml b/client-spark/spark3/pom.xml
index 5674613..acc4fd7 100644
--- a/client-spark/spark3/pom.xml
+++ b/client-spark/spark3/pom.xml
@@ -24,13 +24,13 @@
 
 rss-main
 com.tencent.rss
-0.5.0-snapshot
+0.5.0
 ../../pom.xml
 
 
 com.tencent.rss
 rss-client-spark3
-0.5.0-snapshot
+0.5.0
 jar
 
 
diff --git a/client/pom.xml b/client/pom.xml
index e6134ce..a6ebf91 100644
--- a/client/pom.xml
+++ b/client/pom.xml
@@ -24,12 +24,12 @@
   
 com.tencent.rss
 rss-main
-0.5.0-snapshot
+0.5.0
   
 
   com.tencent.rss
   rss-client
-  0.5.0-snapshot
+  0.5.0
   jar
 
   
diff --git a/common/pom.xml b/common/pom.xml
index b4b65f8..6bf0143 100644
--- a/common/pom.xml
+++ b/common/pom.xml
@@ -22,7 +22,7 @@
   
 com.tencent.rss
 rss-main
-0.5.0-snapshot
+0.5.0
 ../pom.xml
   
 
diff --git a/coordinator/pom.xml b/coordinator/pom.xml
index e860a50..ceefda3 100644
--- a/coordinator/pom.xml
+++ b/coordinator/pom.xml
@@ -24,7 +24,7 @@
   
 com.tencent.rss
 rss-main
-0.5.0-snapshot
+0.5.0
 ../pom.xml
   
 
diff --git a/integration-test/common/pom.xml b/integration-test/common/pom.xml
index 2a759a4..773f383 100644
--- a/integration-test/common/pom.xml
+++ b/integration-test/common/pom.xml
@@ -24,13 +24,13 @@
 
 com.tencent.rss
 rss-main
-0.5.0-snapshot
+0.5.0
 ../../pom.xml
 
 
 com.tencent.rss
 rss-integration-common-test
-0.5.0-snapshot
+0.5.0
 jar
 
 
diff --git a/integration-test/mr/pom.xml b/integration-test/mr/pom.xml
index 489ffd5..4879eea 100644
--- a/integration-test/mr/pom.xml
+++ b/integration-test/mr/pom.xml
@@ -22,14 +22,14 @@
 
 rss-main
 com.tencent.rss
-0.5.0-snapshot
+0.5.0
 ../../pom.xml
 
 4.0.0
 
 com.tencent.rss
 rss-integration-mr-test
-0.5.0-snapshot
+0.5.0
 jar
 
 
diff --git a/integration-test/spark-common/pom.xml b/integration-test/spark-common/pom.xml
index 284ca2b..f82e915 100644
--- a/integration-test/spark-common/pom.xml
+++ b/integration-test/spark-common/pom.xml
@@ -23,14 +23,14 @@
   
 rss-main
 com.tencent.rss
-0.5.0-snapshot
+0.5.0
 ../../pom.xml
   
   4.0.0
 
   com.tencent.rss
   rss-integration-spark-common-test
-  0.5.0-snapshot
+  0.5.0
   jar
 
   
diff --git

[incubator-uniffle] 15/17: [Minor] Make clearResourceThread and processEventThread daemon (#207)

2022-07-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git

commit ba47aa017f67e681af7c311c4ef8578eef740d4b
Author: Zhen Wang <643348...@qq.com>
AuthorDate: Thu Jun 30 14:56:54 2022 +0800

[Minor] Make clearResourceThread and processEventThread daemon (#207)

### What changes were proposed in this pull request?
Make clearResourceThread daemon and processEventThread daemon.

### Why are the changes needed?
`clearResourceThread` and `processEventThread` never exit, so we can make them daemon threads.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Nod
---
 .../java/com/tencent/rss/server/ShuffleFlushManager.java | 12 
 .../main/java/com/tencent/rss/server/ShuffleTaskManager.java |  1 +
 2 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/server/src/main/java/com/tencent/rss/server/ShuffleFlushManager.java b/server/src/main/java/com/tencent/rss/server/ShuffleFlushManager.java
index e246b02..be941ac 100644
--- a/server/src/main/java/com/tencent/rss/server/ShuffleFlushManager.java
+++ b/server/src/main/java/com/tencent/rss/server/ShuffleFlushManager.java
@@ -29,6 +29,7 @@ import com.google.common.annotations.VisibleForTesting;
 import com.google.common.collect.Maps;
 import com.google.common.collect.Queues;
 import com.google.common.collect.RangeMap;
+import com.google.common.util.concurrent.ThreadFactoryBuilder;
 import com.google.common.util.concurrent.Uninterruptibles;
 import org.apache.hadoop.conf.Configuration;
 import org.roaringbitmap.longlong.Roaring64NavigableMap;
@@ -60,7 +61,6 @@ public class ShuffleFlushManager {
  private Map>> handlers = Maps.newConcurrentMap();
  // appId -> shuffleId -> committed shuffle blockIds
  private Map> committedBlockIds = Maps.newConcurrentMap();
-  private Runnable processEventThread;
  private final int retryMax;
 
  private final StorageManager storageManager;
@@ -84,11 +84,12 @@ public class ShuffleFlushManager {
 BlockingQueue waitQueue = Queues.newLinkedBlockingQueue(waitQueueSize);
 int poolSize = shuffleServerConf.getInteger(ShuffleServerConf.SERVER_FLUSH_THREAD_POOL_SIZE);
 long keepAliveTime = shuffleServerConf.getLong(ShuffleServerConf.SERVER_FLUSH_THREAD_ALIVE);
-threadPoolExecutor = new ThreadPoolExecutor(poolSize, poolSize, keepAliveTime, TimeUnit.SECONDS, waitQueue);
+threadPoolExecutor = new ThreadPoolExecutor(poolSize, poolSize, keepAliveTime, TimeUnit.SECONDS, waitQueue,
+new ThreadFactoryBuilder().setDaemon(true).setNameFormat("FlushEventThreadPool").build());
 storageBasePaths = shuffleServerConf.getString(ShuffleServerConf.RSS_STORAGE_BASE_PATH).split(",");
 pendingEventTimeoutSec = shuffleServerConf.getLong(ShuffleServerConf.PENDING_EVENT_TIMEOUT_SEC);
 // the thread for flush data
-processEventThread = () -> {
+Runnable processEventRunnable = () -> {
   while (true) {
 try {
   ShuffleDataFlushEvent event = flushQueue.take();
@@ -103,7 +104,10 @@ public class ShuffleFlushManager {
 }
   }
 };
-new Thread(processEventThread).start();
+Thread processEventThread = new Thread(processEventRunnable);
+processEventThread.setName("ProcessEventThread");
+processEventThread.setDaemon(true);
+processEventThread.start();
 // todo: extract a class named Service, and support stop method
 Thread thread = new Thread("PendingEventProcessThread") {
   @Override
diff --git a/server/src/main/java/com/tencent/rss/server/ShuffleTaskManager.java b/server/src/main/java/com/tencent/rss/server/ShuffleTaskManager.java
index e847779..fc37a19 100644
--- a/server/src/main/java/com/tencent/rss/server/ShuffleTaskManager.java
+++ b/server/src/main/java/com/tencent/rss/server/ShuffleTaskManager.java
@@ -123,6 +123,7 @@ public class ShuffleTaskManager {
 };
 Thread thread = new Thread(clearResourceThread);
 thread.setName("clearResourceThread");
+thread.setDaemon(true);
 thread.start();
   }
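Both changes follow the same pattern: mark long-lived worker threads as daemons so a loop that never exits cannot keep the JVM alive on shutdown. A minimal sketch of that pattern, using Guava's ThreadFactoryBuilder as in the diff above but with otherwise illustrative names and pool sizes:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

import com.google.common.util.concurrent.ThreadFactoryBuilder;

// Sketch: a daemon ThreadFactory for the pool plus an explicit daemon flag on the
// never-exiting event loop thread, so the JVM can exit when main() returns.
public class DaemonThreadsSketch {

  public static void main(String[] args) {
    ThreadPoolExecutor flushPool = new ThreadPoolExecutor(
        2, 2, 120L, TimeUnit.SECONDS, new LinkedBlockingQueue<>(100),
        new ThreadFactoryBuilder().setDaemon(true).setNameFormat("FlushEventThreadPool").build());

    Thread processEventThread = new Thread(() -> {
      while (true) {
        try {
          Thread.sleep(1000L);   // stand-in for flushQueue.take() and handling the event
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
          return;
        }
      }
    });
    processEventThread.setName("ProcessEventThread");
    processEventThread.setDaemon(true);
    processEventThread.start();

    flushPool.execute(() -> System.out.println("flush event handled"));
    // main() returns here; because the pool threads and the event loop are daemons,
    // the JVM can exit instead of hanging.
  }
}
```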
 



[incubator-uniffle] 10/17: [MINOR] Close clusterManager resources (#202)

2022-07-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git

commit 8b5f363fa296312042130b73c8dd8f5a15b5e0ae
Author: Junfan Zhang 
AuthorDate: Mon Jun 27 17:34:13 2022 +0800

[MINOR] Close clusterManager resources (#202)

### What changes were proposed in this pull request?
1. Change the method of shutdown to close
2. Close resources of clustermanager in test cases

### Why are the changes needed?
Close resources to reduce the resource occupying in test cases.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Test cases
---
 .../java/com/tencent/rss/coordinator/ClusterManager.java|  5 ++---
 .../java/com/tencent/rss/coordinator/CoordinatorServer.java |  2 +-
 .../com/tencent/rss/coordinator/SimpleClusterManager.java   | 10 --
 .../rss/coordinator/BasicAssignmentStrategyTest.java|  5 -
 .../coordinator/PartitionBalanceAssignmentStrategyTest.java |  4 +++-
 .../tencent/rss/coordinator/SimpleClusterManagerTest.java   | 13 +++--
 .../test/java/com/tencent/rss/test/CoordinatorGrpcTest.java |  1 +
 7 files changed, 30 insertions(+), 10 deletions(-)

diff --git a/coordinator/src/main/java/com/tencent/rss/coordinator/ClusterManager.java b/coordinator/src/main/java/com/tencent/rss/coordinator/ClusterManager.java
index 4249a03..9f5915e 100644
--- a/coordinator/src/main/java/com/tencent/rss/coordinator/ClusterManager.java
+++ b/coordinator/src/main/java/com/tencent/rss/coordinator/ClusterManager.java
@@ -18,10 +18,11 @@
 
 package com.tencent.rss.coordinator;
 
+import java.io.Closeable;
 import java.util.List;
 import java.util.Set;
 
-public interface ClusterManager {
+public interface ClusterManager extends Closeable {
 
   /**
* Add a server to the cluster.
@@ -49,6 +50,4 @@ public interface ClusterManager {
   List list();
 
   int getShuffleNodesMax();
-
-  void shutdown();
 }
diff --git a/coordinator/src/main/java/com/tencent/rss/coordinator/CoordinatorServer.java b/coordinator/src/main/java/com/tencent/rss/coordinator/CoordinatorServer.java
index 7ba7e1c..3b79221 100644
--- a/coordinator/src/main/java/com/tencent/rss/coordinator/CoordinatorServer.java
+++ b/coordinator/src/main/java/com/tencent/rss/coordinator/CoordinatorServer.java
@@ -94,7 +94,7 @@ public class CoordinatorServer {
   jettyServer.stop();
 }
 if (clusterManager != null) {
-  clusterManager.shutdown();
+  clusterManager.close();
 }
 if (accessManager != null) {
   accessManager.close();
diff --git a/coordinator/src/main/java/com/tencent/rss/coordinator/SimpleClusterManager.java b/coordinator/src/main/java/com/tencent/rss/coordinator/SimpleClusterManager.java
index 10af74d..fcfd1dc 100644
--- a/coordinator/src/main/java/com/tencent/rss/coordinator/SimpleClusterManager.java
+++ b/coordinator/src/main/java/com/tencent/rss/coordinator/SimpleClusterManager.java
@@ -21,6 +21,7 @@ package com.tencent.rss.coordinator;
 import java.io.BufferedReader;
 import java.io.File;
 import java.io.FileReader;
+import java.io.IOException;
 import java.util.List;
 import java.util.Map;
 import java.util.Set;
@@ -186,8 +187,13 @@ public class SimpleClusterManager implements ClusterManager {
   }
 
   @Override
-  public void shutdown() {
-scheduledExecutorService.shutdown();
+  public void close() throws IOException {
+if (scheduledExecutorService != null) {
+  scheduledExecutorService.shutdown();
+}
+if (checkNodesExecutorService != null) {
+  checkNodesExecutorService.shutdown();
+}
   }
 
   @Override
diff --git a/coordinator/src/test/java/com/tencent/rss/coordinator/BasicAssignmentStrategyTest.java b/coordinator/src/test/java/com/tencent/rss/coordinator/BasicAssignmentStrategyTest.java
index 97afabf..7a95d76 100644
--- a/coordinator/src/test/java/com/tencent/rss/coordinator/BasicAssignmentStrategyTest.java
+++ b/coordinator/src/test/java/com/tencent/rss/coordinator/BasicAssignmentStrategyTest.java
@@ -24,6 +24,8 @@ import static org.junit.jupiter.api.Assertions.assertNull;
 import static org.junit.jupiter.api.Assertions.assertTrue;
 import com.google.common.collect.Sets;
 import com.tencent.rss.common.PartitionRange;
+
+import java.io.IOException;
 import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
@@ -49,8 +51,9 @@ public class BasicAssignmentStrategyTest {
   }
 
   @AfterEach
-  public void tearDown() {
+  public void tearDown() throws IOException {
 clusterManager.clear();
+clusterManager.close();
   }
 
   @Test
diff --git 
a/coordinator/src/test/java/com/tencent/rss/coordinator/PartitionBalanceAssignmentStrategyTest.java
 
b/coordinator/src/test/java/com/tencent/rss/coordinator/PartitionBalanceAssignmentStrategyTest.java
index 018aa62..9ca4146 100644
--- 
a/coordinator/src/test/java/com/tencent/rss
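The diff above (truncated here) turns ClusterManager into a java.io.Closeable and null-checks its executors in close(), which lets tests release the scheduled threads deterministically. A minimal sketch of that pattern, with illustrative names rather than the real SimpleClusterManager, including a try-with-resources caller:

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

// Sketch: implement Closeable and null-check each executor in close(), so callers
// (including test cases) can release resources with try-with-resources.
public class SketchClusterManager implements Closeable {

  private ScheduledExecutorService scheduledExecutorService;
  private ScheduledExecutorService checkNodesExecutorService;

  public SketchClusterManager() {
    scheduledExecutorService = Executors.newSingleThreadScheduledExecutor();
    checkNodesExecutorService = Executors.newSingleThreadScheduledExecutor();
  }

  @Override
  public void close() throws IOException {
    if (scheduledExecutorService != null) {
      scheduledExecutorService.shutdown();
    }
    if (checkNodesExecutorService != null) {
      checkNodesExecutorService.shutdown();
    }
  }

  public static void main(String[] args) throws IOException {
    try (SketchClusterManager manager = new SketchClusterManager()) {
      System.out.println("using " + manager);
      // close() runs automatically here and stops both executors
    }
  }
}
```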

[incubator-uniffle] 16/17: Support using remote fs path to specify the excludeNodesFilePath (#200)

2022-07-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git

commit 5ec04b89348ca9c28c9ddce571ffa528969d2f8a
Author: Junfan Zhang 
AuthorDate: Thu Jun 30 19:12:36 2022 +0800

Support using remote fs path to specify the excludeNodesFilePath (#200)

What changes were proposed in this pull request?
Support using remote fs path to specify the excludeNodesFilePath

Why are the changes needed?
When two coordinators are serving online, we hope they can read a consistent exclude nodes file instead of syncing the local file manually.

Does this PR introduce any user-facing change?
Yes. It's an incompatible change.

When the default fs is HDFS in the core-site.xml, and the excludeFilePath is specified as "/user/x" in the coordinator server,
after applying this patch the filesystem will be initialized to remote HDFS because the path lacks a scheme.

How was this patch tested?
Unit tests.
---
 .../rss/coordinator/ClusterManagerFactory.java | 10 +++-
 .../tencent/rss/coordinator/CoordinatorServer.java |  2 +-
 .../rss/coordinator/SimpleClusterManager.java  | 68 +-
 .../coordinator/BasicAssignmentStrategyTest.java   |  6 +-
 .../PartitionBalanceAssignmentStrategyTest.java|  6 +-
 .../rss/coordinator/SimpleClusterManagerTest.java  | 13 +++--
 6 files changed, 63 insertions(+), 42 deletions(-)

diff --git a/coordinator/src/main/java/com/tencent/rss/coordinator/ClusterManagerFactory.java b/coordinator/src/main/java/com/tencent/rss/coordinator/ClusterManagerFactory.java
index 2ec2b12..b2723f9 100644
--- a/coordinator/src/main/java/com/tencent/rss/coordinator/ClusterManagerFactory.java
+++ b/coordinator/src/main/java/com/tencent/rss/coordinator/ClusterManagerFactory.java
@@ -18,15 +18,19 @@
 
 package com.tencent.rss.coordinator;
 
+import org.apache.hadoop.conf.Configuration;
+
 public class ClusterManagerFactory {
 
   CoordinatorConf conf;
+  Configuration hadoopConf;
 
-  public ClusterManagerFactory(CoordinatorConf conf) {
+  public ClusterManagerFactory(CoordinatorConf conf, Configuration hadoopConf) {
 this.conf = conf;
+this.hadoopConf = hadoopConf;
   }
 
-  public ClusterManager getClusterManager() {
-return new SimpleClusterManager(conf);
+  public ClusterManager getClusterManager() throws Exception {
+return new SimpleClusterManager(conf, hadoopConf);
   }
 }
diff --git a/coordinator/src/main/java/com/tencent/rss/coordinator/CoordinatorServer.java b/coordinator/src/main/java/com/tencent/rss/coordinator/CoordinatorServer.java
index 3b79221..2dbe06f 100644
--- a/coordinator/src/main/java/com/tencent/rss/coordinator/CoordinatorServer.java
+++ b/coordinator/src/main/java/com/tencent/rss/coordinator/CoordinatorServer.java
@@ -111,7 +111,7 @@ public class CoordinatorServer {
 registerMetrics();
 this.applicationManager = new ApplicationManager(coordinatorConf);
 
-ClusterManagerFactory clusterManagerFactory = new ClusterManagerFactory(coordinatorConf);
+ClusterManagerFactory clusterManagerFactory = new ClusterManagerFactory(coordinatorConf, new Configuration());
 this.clusterManager = clusterManagerFactory.getClusterManager();
 this.clientConfManager = new ClientConfManager(coordinatorConf, new Configuration(), applicationManager);
 AssignmentStrategyFactory assignmentStrategyFactory =
diff --git a/coordinator/src/main/java/com/tencent/rss/coordinator/SimpleClusterManager.java b/coordinator/src/main/java/com/tencent/rss/coordinator/SimpleClusterManager.java
index fcfd1dc..972ea5f 100644
--- a/coordinator/src/main/java/com/tencent/rss/coordinator/SimpleClusterManager.java
+++ b/coordinator/src/main/java/com/tencent/rss/coordinator/SimpleClusterManager.java
@@ -19,9 +19,10 @@
 package com.tencent.rss.coordinator;
 
 import java.io.BufferedReader;
-import java.io.File;
-import java.io.FileReader;
+import java.io.DataInputStream;
+import java.io.FileNotFoundException;
 import java.io.IOException;
+import java.io.InputStreamReader;
 import java.util.List;
 import java.util.Map;
 import java.util.Set;
@@ -36,6 +37,10 @@ import com.google.common.collect.Maps;
 import com.google.common.collect.Sets;
 import com.google.common.util.concurrent.ThreadFactoryBuilder;
 import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -52,8 +57,9 @@ public class SimpleClusterManager implements ClusterManager {
   private int shuffleNodesMax;
   private ScheduledExecutorService scheduledExecutorService;
   private ScheduledExecutorService checkNodesExecutorService;
+  private FileSystem hadoopFileSystem;
 
-  public SimpleClus
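The diff is truncated above, but the commit message describes the idea: resolve the exclude-nodes file through the Hadoop FileSystem API so that a path without a scheme follows fs.defaultFS (possibly a remote HDFS) instead of always being read as a local file. A sketch of that approach under those assumptions; ExcludeNodesReader and its method are illustrative, not the actual SimpleClusterManager code:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: open the exclude-nodes file via the FileSystem resolved from the path,
// so local paths, hdfs:// paths, and scheme-less paths (fs.defaultFS) all work.
public final class ExcludeNodesReader {

  private ExcludeNodesReader() {
  }

  public static Set<String> readExcludeNodes(String excludeNodesPath, Configuration hadoopConf)
      throws IOException {
    Path path = new Path(excludeNodesPath);
    FileSystem fileSystem = path.getFileSystem(hadoopConf);   // chosen by scheme or fs.defaultFS
    Set<String> nodes = new HashSet<>();
    try (BufferedReader reader = new BufferedReader(
        new InputStreamReader(fileSystem.open(path), StandardCharsets.UTF_8))) {
      String line;
      while ((line = reader.readLine()) != null) {
        if (!line.trim().isEmpty()) {
          nodes.add(line.trim());
        }
      }
    }
    return nodes;
  }
}
```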

[incubator-uniffle] 13/17: [Improvement] Add RSS_IP environment variable support for K8S (#204)

2022-07-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git

commit 6937631876052425b8d808d26caf78c79b24536a
Author: roryqi 
AuthorDate: Wed Jun 29 10:06:31 2022 +0800

[Improvement] Add RSS_IP environment variable support for K8S (#204)

### What changes were proposed in this pull request?
Method `getHostIp` can acquire IP by environment variable.

### Why are the changes needed?
For K8S, there are too many IPs, it's hard to decide which one we should use. So we use the environment variable to tell RSS which one to use.

### Does this PR introduce _any_ user-facing change?
NO

### How was this patch tested?
UT
---
 .../java/com/tencent/rss/common/util/RssUtils.java | 10 +
 .../com/tencent/rss/common/util/RssUtilsTest.java  | 26 ++
 2 files changed, 36 insertions(+)

diff --git a/common/src/main/java/com/tencent/rss/common/util/RssUtils.java b/common/src/main/java/com/tencent/rss/common/util/RssUtils.java
index 1b7200e..7ecae6b 100644
--- a/common/src/main/java/com/tencent/rss/common/util/RssUtils.java
+++ b/common/src/main/java/com/tencent/rss/common/util/RssUtils.java
@@ -41,6 +41,7 @@ import java.util.Map;
 import java.util.Properties;
 
 import com.google.common.collect.Lists;
+import com.google.common.net.InetAddresses;
 import org.roaringbitmap.longlong.Roaring64NavigableMap;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -102,6 +103,15 @@ public class RssUtils {
  // loop back, etc.). If the network interface in the machine is more than one, we
  // will choose the first IP.
  public static String getHostIp() throws Exception {
+// For K8S, there are too many IPs, it's hard to decide which we should use.
+// So we use the environment variable to tell RSS to use which one.
+String ip = System.getenv("RSS_IP");
+if (ip != null) {
+  if (!InetAddresses.isInetAddress(ip)) {
+throw new RuntimeException("Environment RSS_IP: " + ip + " is wrong format");
+  }
+  return ip;
+}
 Enumeration nif = NetworkInterface.getNetworkInterfaces();
 String siteLocalAddress = null;
 while (nif.hasMoreElements()) {
diff --git a/common/src/test/java/com/tencent/rss/common/util/RssUtilsTest.java b/common/src/test/java/com/tencent/rss/common/util/RssUtilsTest.java
index 95fd55f..220cb5c 100644
--- a/common/src/test/java/com/tencent/rss/common/util/RssUtilsTest.java
+++ b/common/src/test/java/com/tencent/rss/common/util/RssUtilsTest.java
@@ -18,6 +18,7 @@
 
 package com.tencent.rss.common.util;
 
+import java.lang.reflect.Field;
 import java.net.InetAddress;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
@@ -62,6 +63,18 @@ public class RssUtilsTest {
   if (!address.equals("127.0.0.1")) {
 assertEquals(address, realIp);
   }
+  setEnv("RSS_IP", "8.8.8.8");
+  assertEquals("8.8.8.8", RssUtils.getHostIp());
+  setEnv("RSS_IP", "");
+  boolean isException = false;
+  try {
+RssUtils.getHostIp();
+  } catch (Exception e) {
+isException = true;
+  }
+  setEnv("RSS_IP", realIp);
+  RssUtils.getHostIp();
+  assertTrue(isException);
 } catch (Exception e) {
   fail(e.getMessage());
 }
@@ -185,6 +198,19 @@ public class RssUtilsTest {
 }
   }
 
+  public static void setEnv(String key, String value) {
+try {
+  Map<String, String> env = System.getenv();
+  Class<?> cl = env.getClass();
+  Field field = cl.getDeclaredField("m");
+  field.setAccessible(true);
+  Map<String, String> writableEnv = (Map<String, String>) field.get(env);
+  writableEnv.put(key, value);
+} catch (Exception e) {
+  throw new IllegalStateException("Failed to set environment variable", e);
+}
+  }
+
   public static class RssUtilTestDummySuccess implements RssUtilTestDummy {
 private final String s;
 



[incubator-uniffle] 07/17: [Minor] Remove serverNode from tags structure when heartbeat timeout (#193)

2022-07-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git

commit d92208ddb1edca13fcb6cb31a8980b2052f29d7b
Author: Junfan Zhang 
AuthorDate: Thu Jun 23 15:30:19 2022 +0800

[Minor] Remove serverNode from tags structure when heartbeat timeout (#193)

### What changes were proposed in this pull request?
Remove the serverNode from the tags structure when its heartbeat times out.

### Why are the changes needed?
Remove the serverNode from the tags structure when its heartbeat times out.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
UT
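
Illustration only (not part of this commit): a toy Java sketch of the inconsistency being fixed, using plain maps instead of the real ServerNode/SimpleClusterManager types; the tag name "GRPC" is just an example. If a timed-out node is removed from the id map but not from the tag map, tag-based lookups keep returning it.

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy model: the coordinator keeps two views of the cluster, id -> node and
// tag -> nodes. Removing a timed-out node only from the first view leaves a
// stale entry behind in the second one.
public class TagCleanupSketch {
  public static void main(String[] args) {
    Map<String, String> servers = new HashMap<>();
    Map<String, Set<String>> tagToNodes = new HashMap<>();
    servers.put("sn3", "sn3");
    tagToNodes.computeIfAbsent("GRPC", t -> new HashSet<>()).add("sn3");

    String removed = servers.remove("sn3");            // heartbeat timed out
    if (removed != null) {
      for (Set<String> nodesWithTag : tagToNodes.values()) {
        nodesWithTag.remove(removed);                  // the cleanup this PR adds
      }
    }
    System.out.println(tagToNodes.get("GRPC").isEmpty()); // true only with the cleanup
  }
}
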
---
 .../com/tencent/rss/coordinator/ServerNode.java|  7 ++
 .../rss/coordinator/SimpleClusterManager.java  |  9 ++--
 .../rss/coordinator/SimpleClusterManagerTest.java  | 27 ++
 3 files changed, 41 insertions(+), 2 deletions(-)

diff --git 
a/coordinator/src/main/java/com/tencent/rss/coordinator/ServerNode.java 
b/coordinator/src/main/java/com/tencent/rss/coordinator/ServerNode.java
index ef09298..816f080 100644
--- a/coordinator/src/main/java/com/tencent/rss/coordinator/ServerNode.java
+++ b/coordinator/src/main/java/com/tencent/rss/coordinator/ServerNode.java
@@ -115,6 +115,13 @@ public class ServerNode implements Comparable {
 + ", healthy[" + isHealthy + "]";
   }
 
+  /**
+   * Only for test case
+   */
+  void setTimestamp(long timestamp) {
+this.timestamp = timestamp;
+  }
+
   @Override
   public int compareTo(ServerNode other) {
 if (availableMemory > other.getAvailableMemory()) {
diff --git 
a/coordinator/src/main/java/com/tencent/rss/coordinator/SimpleClusterManager.java
 
b/coordinator/src/main/java/com/tencent/rss/coordinator/SimpleClusterManager.java
index d3fe789..10af74d 100644
--- 
a/coordinator/src/main/java/com/tencent/rss/coordinator/SimpleClusterManager.java
+++ 
b/coordinator/src/main/java/com/tencent/rss/coordinator/SimpleClusterManager.java
@@ -72,7 +72,7 @@ public class SimpleClusterManager implements ClusterManager {
 }
   }
 
-  private void nodesCheck() {
+  void nodesCheck() {
 try {
   long timestamp = System.currentTimeMillis();
   Set deleteIds = Sets.newHashSet();
@@ -83,7 +83,12 @@ public class SimpleClusterManager implements ClusterManager {
 }
   }
   for (String serverId : deleteIds) {
-servers.remove(serverId);
+ServerNode sn = servers.remove(serverId);
+if (sn != null) {
+  for (Set<ServerNode> nodesWithTag : tagToNodes.values()) {
+nodesWithTag.remove(sn);
+  }
+}
   }
 
   CoordinatorMetrics.gaugeTotalServerNum.set(servers.size());
diff --git 
a/coordinator/src/test/java/com/tencent/rss/coordinator/SimpleClusterManagerTest.java
 
b/coordinator/src/test/java/com/tencent/rss/coordinator/SimpleClusterManagerTest.java
index a5040bf..bed9081 100644
--- 
a/coordinator/src/test/java/com/tencent/rss/coordinator/SimpleClusterManagerTest.java
+++ 
b/coordinator/src/test/java/com/tencent/rss/coordinator/SimpleClusterManagerTest.java
@@ -27,6 +27,7 @@ import java.util.Set;
 
 import com.google.common.collect.Sets;
 import org.junit.jupiter.api.AfterEach;
+import org.junit.jupiter.api.Assertions;
 import org.junit.jupiter.api.BeforeEach;
 import org.junit.jupiter.api.Test;
 
@@ -142,6 +143,32 @@ public class SimpleClusterManagerTest {
 assertEquals(0, serverNodes.size());
   }
 
+  @Test
+  public void testGetCorrectServerNodesWhenOneNodeRemoved() {
+CoordinatorConf ssc = new CoordinatorConf();
+ssc.setLong(CoordinatorConf.COORDINATOR_HEARTBEAT_TIMEOUT, 30 * 1000L);
+SimpleClusterManager clusterManager = new SimpleClusterManager(ssc);
+ServerNode sn1 = new ServerNode("sn1", "ip", 0, 100L, 50L, 20,
+10, testTags, true);
+ServerNode sn2 = new ServerNode("sn2", "ip", 0, 100L, 50L, 21,
+10, testTags, true);
+ServerNode sn3 = new ServerNode("sn3", "ip", 0, 100L, 50L, 20,
+11, testTags, true);
+clusterManager.add(sn1);
+clusterManager.add(sn2);
+clusterManager.add(sn3);
+List serverNodes = clusterManager.getServerList(testTags);
+assertEquals(3, serverNodes.size());
+
+sn3.setTimestamp(System.currentTimeMillis() - 60 * 1000L);
+clusterManager.nodesCheck();
+
+Map<String, Set<ServerNode>> tagToNodes = clusterManager.getTagToNodes();
+List<ServerNode> serverList = clusterManager.getServerList(testTags);
+Assertions.assertEquals(2, tagToNodes.get(testTags.iterator().next()).size());
+Assertions.assertEquals(2, serverList.size());
+  }
+
   @Test
   public void updateExcludeNodesTest() throws Exception {
 String excludeNodesFolder = (new 
File(ClassLoader.getSystemResource("empty").getFile())).getParent();



[incubator-uniffle] 17/17: [Improvement] Modify configuration template (#209)

2022-07-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git

commit 166f3f8c7c5f14eb75daca843f992e908bd3c938
Author: roryqi 
AuthorDate: Fri Jul 1 11:49:34 2022 +0800

[Improvement] Modify configuration template (#209)

### What changes were proposed in this pull request?
I modified the files `conf/server.conf` and `conf/coordinator.conf`. Some of the
configurations there are no longer recommended, so I updated them.

### Why are the changes needed?
Give users a better configuration template

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
No need.
---
 conf/coordinator.conf |  2 +-
 conf/server.conf  | 16 
 2 files changed, 5 insertions(+), 13 deletions(-)

diff --git a/conf/coordinator.conf b/conf/coordinator.conf
index 294f14e..c66e302 100644
--- a/conf/coordinator.conf
+++ b/conf/coordinator.conf
@@ -21,4 +21,4 @@ rss.jetty.http.port 19998
 rss.coordinator.server.heartbeat.timeout 3
 rss.coordinator.app.expired 6
 rss.coordinator.shuffle.nodes.max 13
-rss.coordinator.exclude.nodes.file.path /xxx
+rss.coordinator.exclude.nodes.file.path file:///xxx
diff --git a/conf/server.conf b/conf/server.conf
index 3c347e1..6ab6571 100644
--- a/conf/server.conf
+++ b/conf/server.conf
@@ -19,18 +19,10 @@
 rss.rpc.server.port 1
 rss.jetty.http.port 19998
 rss.storage.basePath /xxx,/xxx
-rss.storage.type LOCALFILE_AND_HDFS
+rss.storage.type MEMORY_LOCALFILE_HDFS
 rss.coordinator.quorum xxx:1,xxx:1
 rss.server.buffer.capacity 40gb
-rss.server.buffer.spill.threshold 22gb
-rss.server.partition.buffer.size 150mb
 rss.server.read.buffer.capacity 20gb
-rss.server.flush.thread.alive 50
-rss.server.flush.threadPool.size 100
-
-# multistorage config
-rss.server.multistorage.enable true
-rss.server.uploader.enable true
-rss.server.uploader.base.path hdfs://xxx
-rss.server.uploader.thread.number 32
-rss.server.disk.capacity 1011550697553
+rss.server.flush.thread.alive 5
+rss.server.flush.threadPool.size 10
+rss.server.disk.capacity 1t



[incubator-uniffle] 01/17: [Improvement] Avoid using the default forkjoin pool by parallelStream directly (#180)

2022-07-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git

commit 46b62b2406a547dca6f6b933ee187047e3618202
Author: Junfan Zhang 
AuthorDate: Tue Jun 21 14:15:59 2022 +0800

[Improvement] Avoid using the default forkjoin pool by parallelStream 
directly (#180)

### What changes were proposed in this pull request?
As we know, parallelStream uses the JVM-wide default ForkJoin pool. To avoid that,
use a custom pool and allow its size to be specified.

### Why are the changes needed?
Use a separate ForkJoin pool to send shuffle data.

### Does this PR introduce _any_ user-facing change?
Yes, it introduces a configuration to control the size of the ForkJoin pool:
mapreduce.rss.client.data.transfer.pool.size for MapReduce and
spark.rss.client.data.transfer.pool.size for Spark.

### How was this patch tested?
GA passed.
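
Illustration only (the diff below is truncated before the client change itself): a minimal Java sketch of the technique the description refers to, i.e. running a parallelStream inside a dedicated ForkJoinPool so it does not occupy the JVM-wide common pool. Names and the pool size are illustrative, not the project's actual ShuffleWriteClientImpl code.

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ForkJoinPool;

// Sketch: a parallelStream submitted from inside a custom ForkJoinPool runs
// its work on that pool's threads instead of ForkJoinPool.commonPool().
public class TransferPoolSketch {
  public static void main(String[] args) throws Exception {
    int dataTransferPoolSize = 8; // in the clients this comes from *.rss.client.data.transfer.pool.size
    ForkJoinPool dataTransferPool = new ForkJoinPool(dataTransferPoolSize);
    List<Integer> blocks = Arrays.asList(1, 2, 3, 4);
    try {
      dataTransferPool.submit(() ->
          blocks.parallelStream().forEach(TransferPoolSketch::sendBlock)
      ).get();
    } finally {
      dataTransferPool.shutdown();
    }
  }

  private static void sendBlock(int block) {
    System.out.println("sending block " + block + " on " + Thread.currentThread().getName());
  }
}
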
---
 .../org/apache/hadoop/mapreduce/RssMRConfig.java   |  4 
 .../org/apache/hadoop/mapreduce/RssMRUtils.java|  5 -
 .../org/apache/spark/shuffle/RssSparkConfig.java   |  4 
 .../apache/spark/shuffle/RssShuffleManager.java|  5 -
 .../apache/spark/shuffle/RssShuffleManager.java| 14 ++---
 .../rss/client/factory/ShuffleClientFactory.java   |  4 ++--
 .../rss/client/impl/ShuffleWriteClientImpl.java| 24 ++
 .../tencent/rss/client/util/RssClientConfig.java   |  2 ++
 .../client/impl/ShuffleWriteClientImplTest.java|  2 +-
 .../test/java/com/tencent/rss/test/QuorumTest.java |  2 +-
 .../tencent/rss/test/ShuffleServerGrpcTest.java|  2 +-
 .../tencent/rss/test/ShuffleWithRssClientTest.java |  2 +-
 12 files changed, 50 insertions(+), 20 deletions(-)

diff --git 
a/client-mr/src/main/java/org/apache/hadoop/mapreduce/RssMRConfig.java 
b/client-mr/src/main/java/org/apache/hadoop/mapreduce/RssMRConfig.java
index a191e2f..3447f09 100644
--- a/client-mr/src/main/java/org/apache/hadoop/mapreduce/RssMRConfig.java
+++ b/client-mr/src/main/java/org/apache/hadoop/mapreduce/RssMRConfig.java
@@ -52,6 +52,10 @@ public class RssMRConfig {
   RssClientConfig.RSS_DATA_REPLICA_READ_DEFAULT_VALUE;
   public static final String RSS_DATA_REPLICA_SKIP_ENABLED =
   MR_RSS_CONFIG_PREFIX + RssClientConfig.RSS_DATA_REPLICA_SKIP_ENABLED;
+  public static final String RSS_DATA_TRANSFER_POOL_SIZE =
+  MR_RSS_CONFIG_PREFIX + RssClientConfig.RSS_DATA_TRANSFER_POOL_SIZE;
+  public static final int RSS_DATA_TRANSFER_POOL_SIZE_DEFAULT_VALUE =
+  RssClientConfig.RSS_DATA_TRANFER_POOL_SIZE_DEFAULT_VALUE;
   public static final String RSS_CLIENT_SEND_THREAD_NUM =
   MR_RSS_CONFIG_PREFIX + RssClientConfig.RSS_CLIENT_SEND_THREAD_NUM;
   public static final int RSS_CLIENT_DEFAULT_SEND_THREAD_NUM =
diff --git 
a/client-mr/src/main/java/org/apache/hadoop/mapreduce/RssMRUtils.java 
b/client-mr/src/main/java/org/apache/hadoop/mapreduce/RssMRUtils.java
index 1d8b4d6..16613e1 100644
--- a/client-mr/src/main/java/org/apache/hadoop/mapreduce/RssMRUtils.java
+++ b/client-mr/src/main/java/org/apache/hadoop/mapreduce/RssMRUtils.java
@@ -90,10 +90,13 @@ public class RssMRUtils {
 RssMRConfig.RSS_DATA_REPLICA_DEFAULT_VALUE);
 boolean replicaSkipEnabled = jobConf.getBoolean(RssMRConfig.RSS_DATA_REPLICA_SKIP_ENABLED,
 RssMRConfig.RSS_DATA_REPLICA_SKIP_ENABLED_DEFAULT_VALUE);
+int dataTransferPoolSize = jobConf.getInt(RssMRConfig.RSS_DATA_TRANSFER_POOL_SIZE,
+RssMRConfig.RSS_DATA_TRANSFER_POOL_SIZE_DEFAULT_VALUE);
 ShuffleWriteClient client = ShuffleClientFactory
 .getInstance()
 .createShuffleWriteClient(clientType, retryMax, retryIntervalMax,
-heartBeatThreadNum, replica, replicaWrite, replicaRead, replicaSkipEnabled);
+heartBeatThreadNum, replica, replicaWrite, replicaRead, replicaSkipEnabled,
+dataTransferPoolSize);
 return client;
   }
 
diff --git 
a/client-spark/common/src/main/java/org/apache/spark/shuffle/RssSparkConfig.java
 
b/client-spark/common/src/main/java/org/apache/spark/shuffle/RssSparkConfig.java
index 9720ff0..8d5dda9 100644
--- 
a/client-spark/common/src/main/java/org/apache/spark/shuffle/RssSparkConfig.java
+++ 
b/client-spark/common/src/main/java/org/apache/spark/shuffle/RssSparkConfig.java
@@ -106,6 +106,10 @@ public class RssSparkConfig {
   public static final int RSS_DATA_REPLICA_READ_DEFAULT_VALUE = 
RssClientConfig.RSS_DATA_REPLICA_READ_DEFAULT_VALUE;
   public static final String RSS_DATA_REPLICA_SKIP_ENABLED =
   SPARK_RSS_CONFIG_PREFIX + RssClientConfig.RSS_DATA_REPLICA_SKIP_ENABLED;
+  public static final String RSS_DATA_TRANSFER_POOL_SIZE =
+  SPARK_RSS_CONFIG_PREFIX + RssClientConfig.RSS_DATA_TRANSFER_POOL_SIZE;
+  public static final int RSS_DATA_TRANSFER_POOL_SIZE_DEFAULT_VALUE =
+  RssClientConfig.RSS_DATA_TRANFER_POOL_SIZE_DEFAULT_VALUE

[incubator-uniffle] 02/17: [Bugfix] Fix spark2 executor stop NPE problem (#187)

2022-07-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git

commit 7fa8b52e5739a0c2ded7f2eca84b086713765418
Author: roryqi 
AuthorDate: Wed Jun 22 14:30:15 2022 +0800

[Bugfix] Fix spark2 executor stop NPE problem (#187)

backport 0.5.0

### What changes were proposed in this pull request?
We need to check whether heartbeatExecutorService is null before stopping it.

### Why are the changes needed?
PR #177 introduced this problem: when we run Spark applications on our
cluster, the executor throws an NPE when the `stop` method is called.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Manual test
---
 .../src/main/java/org/apache/spark/shuffle/RssShuffleManager.java | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git 
a/client-spark/spark2/src/main/java/org/apache/spark/shuffle/RssShuffleManager.java
 
b/client-spark/spark2/src/main/java/org/apache/spark/shuffle/RssShuffleManager.java
index 5d11c39..8a2c385 100644
--- 
a/client-spark/spark2/src/main/java/org/apache/spark/shuffle/RssShuffleManager.java
+++ 
b/client-spark/spark2/src/main/java/org/apache/spark/shuffle/RssShuffleManager.java
@@ -373,7 +373,9 @@ public class RssShuffleManager implements ShuffleManager {
 
   @Override
   public void stop() {
-heartBeatScheduledExecutorService.shutdownNow();
+if (heartBeatScheduledExecutorService != null) {
+  heartBeatScheduledExecutorService.shutdownNow();
+}
 threadPoolExecutor.shutdownNow();
 shuffleWriteClient.close();
   }



[incubator-uniffle] 14/17: [Improvement] Close coordinatorClients when DelegationRssShuffleManager stops (#205)

2022-07-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git

commit 15a6ea65ede6a2bc07824855801573a5d0cad512
Author: Zhen Wang <643348...@qq.com>
AuthorDate: Thu Jun 30 11:34:40 2022 +0800

[Improvement] Close coordinatorClients when DelegationRssShuffleManager 
stops (#205)

### What changes were proposed in this pull request?
Close coordinatorClients when DelegationRssShuffleManager stops.

### Why are the changes needed?
The coordinatorClients in DelegationRssShuffleManager are never closed.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
No
---
 .../main/java/org/apache/spark/shuffle/DelegationRssShuffleManager.java  | 1 +
 .../main/java/org/apache/spark/shuffle/DelegationRssShuffleManager.java  | 1 +
 2 files changed, 2 insertions(+)

diff --git 
a/client-spark/spark2/src/main/java/org/apache/spark/shuffle/DelegationRssShuffleManager.java
 
b/client-spark/spark2/src/main/java/org/apache/spark/shuffle/DelegationRssShuffleManager.java
index e0a30e7..03320c0 100644
--- 
a/client-spark/spark2/src/main/java/org/apache/spark/shuffle/DelegationRssShuffleManager.java
+++ 
b/client-spark/spark2/src/main/java/org/apache/spark/shuffle/DelegationRssShuffleManager.java
@@ -173,6 +173,7 @@ public class DelegationRssShuffleManager implements 
ShuffleManager {
   @Override
   public void stop() {
 delegate.stop();
+coordinatorClients.forEach(CoordinatorClient::close);
   }
 
   @Override
diff --git 
a/client-spark/spark3/src/main/java/org/apache/spark/shuffle/DelegationRssShuffleManager.java
 
b/client-spark/spark3/src/main/java/org/apache/spark/shuffle/DelegationRssShuffleManager.java
index 4ed6cce..32d58d2 100644
--- 
a/client-spark/spark3/src/main/java/org/apache/spark/shuffle/DelegationRssShuffleManager.java
+++ 
b/client-spark/spark3/src/main/java/org/apache/spark/shuffle/DelegationRssShuffleManager.java
@@ -248,6 +248,7 @@ public class DelegationRssShuffleManager implements 
ShuffleManager {
   @Override
   public void stop() {
 delegate.stop();
+coordinatorClients.forEach(CoordinatorClient::close);
   }
 
   @Override



[incubator-uniffle] branch master created (now 166f3f8)

2022-07-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git


  at 166f3f8  [Improvement] Modify configuration template (#209)

This branch includes the following new commits:

 new 46b62b2  [Improvement] Avoid using the default forkjoin pool by 
parallelStream directly (#180)
 new 7fa8b52  [Bugfix] Fix spark2 executor stop NPE problem (#187)
 new 924dac7  [Bugfix] Fix spark2 executor stop NPE problem (#186)
 new 11a8594  [Doc] Update readme with features like multiple remote 
storage support etc (#191)
 new 8d8e6bf  upgrade to 0.6.0-snapshot (#190)
 new cf731f2  [Bugfix] Fix MR don't have remote storage information when we 
use dynamic conf and MEMORY_LOCALE_HDFS storageType (#195)
 new d92208d  [Minor] Remove serverNode from tags structure when heartbeat 
timeout (#193)
 new 6bdf49e  [Improvement] Check ADAPTIVE_EXECUTION_ENABLED in 
RssShuffleManager (#197)
 new a253b1f  [Improvement] Add dynamic allocation patch for Spark 3.2 
(#199)
 new 8b5f363  [MINOR] Close clusterManager resources (#202)
 new 392c881  Support build_distribution.sh to specify different mvn build 
options for Spark2 and Spark3 (#203)
 new 2c1c554  [Improvement] Move detailed client configuration to 
individual doc (#201)
 new 6937631  [Improvement] Add RSS_IP environment variable support for K8S 
(#204)
 new 15a6ea6  [Improvement] Close coordinatorClients when 
DelegationRssShuffleManager stops (#205)
 new ba47aa0  [Minor] Make clearResourceThread and processEventThread 
daemon (#207)
 new 5ec04b8  Support using remote fs path to specify the 
excludeNodesFilePath (#200)
 new 166f3f8  [Improvement] Modify configuration template (#209)

The 17 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.




[incubator-uniffle] 11/17: Support build_distribution.sh to specify different mvn build options for Spark2 and Spark3 (#203)

2022-07-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git

commit 392c88129f2706043ebb87cc89e9e2cde5733647
Author: cxzl25 
AuthorDate: Tue Jun 28 10:09:01 2022 +0800

Support build_distribution.sh to specify different mvn build options for 
Spark2 and Spark3 (#203)

What changes were proposed in this pull request?
Add --spark2-mvn and --spark3-mvn parameters to build_distribution.sh to 
support compiling different profiles; we can pass in different Maven 
options, such as the profile or the Spark version.
Add a --help option to build_distribution.sh, and fix a typo.
.gitignore now ignores the tar package generated by the build.
The README now explains how to use build_distribution.sh.
Why are the changes needed?
If we build with a command like the one below, Spark2 will also be compiled 
against the Spark3 version, so we'd better distinguish the build options of the two versions.

./build_distribution.sh -Pspark3.2
Does this PR introduce any user-facing change?
No

How was this patch tested?
local test
---
 .gitignore|  1 +
 README.md | 16 
 build_distribution.sh | 53 +++
 pom.xml   |  4 ++--
 4 files changed, 68 insertions(+), 6 deletions(-)

diff --git a/.gitignore b/.gitignore
index 5c39d59..b6164b2 100644
--- a/.gitignore
+++ b/.gitignore
@@ -20,4 +20,5 @@ reports/
 metastore_db/
 derby.log
 dependency-reduced-pom.xml
+rss-*.tgz
 
diff --git a/README.md b/README.md
index 9ad8299..51a1ed0 100644
--- a/README.md
+++ b/README.md
@@ -50,10 +50,26 @@ To build it, run:
 
 mvn -DskipTests clean package
 
+Build against profile Spark2(2.4.6)
+
+mvn -DskipTests clean package -Pspark2
+
+Build against profile Spark3(3.1.2)
+
+mvn -DskipTests clean package -Pspark3
+
+Build against Spark 3.2.x
+
+mvn -DskipTests clean package -Pspark3.2
+
 To package the Firestorm, run:
 
 ./build_distribution.sh
 
+Package against Spark 3.2.x, run:
+
+./build_distribution.sh --spark3-profile 'spark3.2'
+
 rss-xxx.tgz will be generated for deployment
 
 ## Deploy
diff --git a/build_distribution.sh b/build_distribution.sh
index baf50e4..214a2ed 100755
--- a/build_distribution.sh
+++ b/build_distribution.sh
@@ -32,12 +32,57 @@ RSS_HOME="$(
 
 function exit_with_usage() {
   set +x
-  echo "$0 - tool for making binary distributions of Rmote Shuffle Service"
+  echo "./build_distribution.sh - Tool for making binary distributions of 
Remote Shuffle Service"
   echo ""
-  echo "usage:"
+  echo "Usage:"
+  echo 
"+--+"
+  echo "| ./build_distribution.sh [--spark2-profile ] 
[--spark2-mvn ] |"
+  echo "| [--spark3-profile ] 
[--spark3-mvn ] |"
+  echo "| 
|"
+  echo 
"+--+"
   exit 1
 }
 
+SPARK2_PROFILE_ID="spark2"
+SPARK2_MVN_OPTS=""
+SPARK3_PROFILE_ID="spark3"
+SPARK3_MVN_OPTS=""
+while (( "$#" )); do
+  case $1 in
+--spark2-profile)
+  SPARK2_PROFILE_ID="$2"
+  shift
+  ;;
+--spark2-mvn)
+  SPARK2_MVN_OPTS=$2
+  shift
+  ;;
+--spark3-profile)
+  SPARK3_PROFILE_ID="$2"
+  shift
+  ;;
+--spark3-mvn)
+  SPARK3_MVN_OPTS=$2
+  shift
+  ;;
+--help)
+  exit_with_usage
+  ;;
+--*)
+  echo "Error: $1 is not supported"
+  exit_with_usage
+  ;;
+-*)
+  break
+  ;;
+*)
+  echo "Error: $1 is not supported"
+  exit_with_usage
+  ;;
+  esac
+  shift
+done
+
 cd $RSS_HOME
 
 if [ -z "$JAVA_HOME" ]; then
@@ -99,7 +144,7 @@ cp "${RSS_HOME}"/coordinator/target/jars/* 
${COORDINATOR_JAR_DIR}
 CLIENT_JAR_DIR="${DISTDIR}/jars/client"
 mkdir -p $CLIENT_JAR_DIR
 
-BUILD_COMMAND_SPARK2=("$MVN" clean package -Pspark2 -pl client-spark/spark2 
-DskipTests -am $@)
+BUILD_COMMAND_SPARK2=("$MVN" clean package -P$SPARK2_PROFILE_ID -pl 
client-spark/spark2 -DskipTests -am $@ $SPARK2_MVN_OPTS)
 
 # Actually build the jar
 echo -e "\nBuilding with..."
@@ -114,7 +159,7 @@ 
SPARK_CLIENT2_JAR="${RSS_HOME}/client-spark/spark2/target/shaded/rss-client-spar
 echo "copy $SPARK_CLIENT2_JAR to ${SPARK_CLIENT2_JAR_DIR}"
 cp $SPARK_CLIENT2_JAR ${SPARK_CLIENT2_JAR_DIR}
 
-BUILD_COMMAND_SPARK3=("$MVN" clean package -Pspark3 -pl client-spark/spark3 
-DskipTests -am $@)
+BUILD_COMMAND_SPARK3=("

[incubator-uniffle] 09/17: [Improvement] Add dynamic allocation patch for Spark 3.2 (#199)

2022-07-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git

commit a253b1fed2e947e397b45b1db8f56d856eabc9fc
Author: roryqi 
AuthorDate: Mon Jun 27 10:07:13 2022 +0800

[Improvement] Add dynamic allocation patch for Spark 3.2 (#199)

### What changes were proposed in this pull request?
Add the dynamic allocation patch for Spark 3.2, solve issue #106

### Why are the changes needed?
If we don't have this patch, users can't use dynamic allocation in Spark 
3.2.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Manual test
---
 README.md  |  2 +-
 .../spark-3.2.1_dynamic_allocation_support.patch   | 92 ++
 2 files changed, 93 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 0fb65e5..9ad8299 100644
--- a/README.md
+++ b/README.md
@@ -155,7 +155,7 @@ rss-xxx.tgz will be generated for deployment
 ### Support Spark dynamic allocation
 
 To support spark dynamic allocation with Firestorm, spark code should be 
updated.
-There are 2 patches for spark-2.4.6 and spark-3.1.2 in spark-patches folder 
for reference.
+There are 3 patches for spark (2.4.6/3.1.2/3.2.1) in spark-patches folder for 
reference.
 
 After apply the patch and rebuild spark, add following configuration in spark 
conf to enable dynamic allocation:
   ```
diff --git a/spark-patches/spark-3.2.1_dynamic_allocation_support.patch 
b/spark-patches/spark-3.2.1_dynamic_allocation_support.patch
new file mode 100644
index 000..1e195df
--- /dev/null
+++ b/spark-patches/spark-3.2.1_dynamic_allocation_support.patch
@@ -0,0 +1,92 @@
+diff --git a/core/src/main/scala/org/apache/spark/Dependency.scala 
b/core/src/main/scala/org/apache/spark/Dependency.scala
+index 1b4e7ba5106..95818ff72ca 100644
+--- a/core/src/main/scala/org/apache/spark/Dependency.scala
 b/core/src/main/scala/org/apache/spark/Dependency.scala
+@@ -174,8 +174,10 @@ class ShuffleDependency[K: ClassTag, V: ClassTag, C: 
ClassTag](
+   !rdd.isBarrier()
+   }
+ 
+-  _rdd.sparkContext.cleaner.foreach(_.registerShuffleForCleanup(this))
+-  _rdd.sparkContext.shuffleDriverComponents.registerShuffle(shuffleId)
++  if (!_rdd.context.getConf.isRssEnable()) {
++_rdd.sparkContext.cleaner.foreach(_.registerShuffleForCleanup(this))
++_rdd.sparkContext.shuffleDriverComponents.registerShuffle(shuffleId)
++  }
+ }
+ 
+ 
+diff --git 
a/core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala 
b/core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala
+index c4b619300b5..821a01985d9 100644
+--- a/core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala
 b/core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala
+@@ -207,7 +207,9 @@ private[spark] class ExecutorAllocationManager(
+   // If dynamic allocation shuffle tracking or worker decommissioning 
along with
+   // storage shuffle decommissioning is enabled we have *experimental* 
support for
+   // decommissioning without a shuffle service.
+-  if (conf.get(config.DYN_ALLOCATION_SHUFFLE_TRACKING_ENABLED) ||
++  if (conf.isRssEnable()) {
++logInfo("Dynamic allocation will use remote shuffle service")
++  } else if (conf.get(config.DYN_ALLOCATION_SHUFFLE_TRACKING_ENABLED) ||
+   (decommissionEnabled &&
+ conf.get(config.STORAGE_DECOMMISSION_SHUFFLE_BLOCKS_ENABLED))) {
+ logWarning("Dynamic allocation without a shuffle service is an 
experimental feature.")
+diff --git a/core/src/main/scala/org/apache/spark/SparkConf.scala 
b/core/src/main/scala/org/apache/spark/SparkConf.scala
+index 5f37a1abb19..af4bee1e1bb 100644
+--- a/core/src/main/scala/org/apache/spark/SparkConf.scala
 b/core/src/main/scala/org/apache/spark/SparkConf.scala
+@@ -580,6 +580,10 @@ class SparkConf(loadDefaults: Boolean) extends Cloneable 
with Logging with Seria
+ Utils.redact(this, getAll).sorted.map { case (k, v) => k + "=" + v 
}.mkString("\n")
+   }
+ 
++  /**
++   * Return true if remote shuffle service is enabled.
++   */
++  def isRssEnable(): Boolean = get("spark.shuffle.manager", 
"sort").contains("RssShuffleManager")
+ }
+ 
+ private[spark] object SparkConf extends Logging {
+diff --git a/core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala 
b/core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
+index a82d261d545..72e54940ca2 100644
+--- a/core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
 b/core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
+@@ -2231,7 +2231,8 @@ private[spark] class DAGScheduler(
+ // if the cluster manager explicitly tells us that the entire worker was 
lost, then
+ // we know to un

[incubator-uniffle] 04/17: [Doc] Update readme with features like multiple remote storage support etc (#191)

2022-07-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git

commit 11a8594e868db3aaf55af9baa1903e8cbd17413e
Author: Colin 
AuthorDate: Wed Jun 22 16:38:27 2022 +0800

[Doc] Update readme with features like multiple remote storage support etc 
(#191)

What changes were proposed in this pull request?
Update Readme for latest features, eg, multiple remote storage support, 
dynamic client conf etc.

Why are the changes needed?
Doc should be updated

Does this PR introduce any user-facing change?
No

How was this patch tested?
No need
---
 README.md | 46 ++
 1 file changed, 34 insertions(+), 12 deletions(-)

diff --git a/README.md b/README.md
index e134f0f..50903ce 100644
--- a/README.md
+++ b/README.md
@@ -11,7 +11,7 @@ Coordinator will collect status of shuffle server and do the 
assignment for the
 
 Shuffle server will receive the shuffle data, merge them and write to storage.
 
-Depend on different situation, Firestorm supports Memory & Local, Memory & 
Remote Storage(eg, HDFS), Local only, Remote Storage only.
+Depend on different situation, Firestorm supports Memory & Local, Memory & 
Remote Storage(eg, HDFS), Memory & Local & Remote Storage(recommendation for 
production environment).
 
 ## Shuffle Process with Firestorm
 
@@ -74,9 +74,25 @@ rss-xxx.tgz will be generated for deployment
  rss.coordinator.server.heartbeat.timeout 3
  rss.coordinator.app.expired 6
  rss.coordinator.shuffle.nodes.max 5
- rss.coordinator.exclude.nodes.file.path RSS_HOME/conf/exclude_nodes
-   ```
-4. start Coordinator
+ # enable dynamicClientConf, and coordinator will be responsible for most 
of client conf
+ rss.coordinator.dynamicClientConf.enabled true
+ # config the path of client conf
+ rss.coordinator.dynamicClientConf.path /conf/dynamic_client.conf
+ # config the path of excluded shuffle server
+ rss.coordinator.exclude.nodes.file.path /conf/exclude_nodes
+   ```
+4. update /conf/dynamic_client.conf, rss client will get default 
conf from coordinator eg,
+   ```
+# MEMORY_LOCALFILE_HDFS is recommandation for production environment
+rss.storage.type MEMORY_LOCALFILE_HDFS
+# multiple remote storages are supported, and client will get assignment 
from coordinator
+rss.coordinator.remote.storage.path 
hdfs://cluster1/path,hdfs://cluster2/path
+rss.writer.require.memory.retryMax 1200
+rss.client.retry.max 100
+rss.writer.send.check.timeout 60
+rss.client.read.buffer.size 14m
+   ```
+5. start Coordinator
```
 bash RSS_HOME/bin/start-coordnator.sh
```
@@ -90,14 +106,17 @@ rss-xxx.tgz will be generated for deployment
  HADOOP_HOME=
  XMX_SIZE="80g"
```
-3. update RSS_HOME/conf/server.conf, the following demo is for memory + local 
storage only, eg,
+3. update RSS_HOME/conf/server.conf, eg,
```
  rss.rpc.server.port 1
  rss.jetty.http.port 19998
  rss.rpc.executor.size 2000
- rss.storage.type MEMORY_LOCALFILE
+ # it should be configed the same as in coordinator
+ rss.storage.type MEMORY_LOCALFILE_HDFS
  rss.coordinator.quorum :1,:1
+ # local storage path for shuffle server
  rss.storage.basePath /data1/rssdata,/data2/rssdata
+ # it's better to config thread num according to local disk num
  rss.server.flush.thread.alive 5
  rss.server.flush.threadPool.size 10
  rss.server.buffer.capacity 40g
@@ -108,6 +127,10 @@ rss-xxx.tgz will be generated for deployment
  rss.server.preAllocation.expired 12
  rss.server.commit.timeout 60
  rss.server.app.expired.withoutHeartbeat 12
+ # note: the default value of rss.server.flush.cold.storage.threshold.size 
is 64m
+ # there will be no data written to DFS if set it as 100g even 
rss.storage.type=MEMORY_LOCALFILE_HDFS
+ # please set proper value if DFS is used, eg, 64m, 128m.
+ rss.server.flush.cold.storage.threshold.size 100g
```
 4. start Shuffle Server
```
@@ -121,12 +144,11 @@ rss-xxx.tgz will be generated for deployment
 
The jar for Spark3 is located in 
/jars/client/spark3/rss-client-X-shaded.jar
 
-2. Update Spark conf to enable Firestorm, the following demo is for local 
storage only, eg,
+2. Update Spark conf to enable Firestorm, eg,
 
```
spark.shuffle.manager org.apache.spark.shuffle.RssShuffleManager
spark.rss.coordinator.quorum :1,:1
-   spark.rss.storage.type MEMORY_LOCALFILE
```
 
 ### Support Spark dynamic allocation
@@ -140,17 +162,16 @@ After apply the patch and rebuild spark, add following 
configuration in spark co
   spark.dynamicAllocation.enabled true
   ```
 
-## Deploy MapReduce Client
+### Deploy MapReduce Client
 
 1. Add client jar to the classpath of each

[incubator-uniffle] 08/17: [Improvement] Check ADAPTIVE_EXECUTION_ENABLED in RssShuffleManager (#197)

2022-07-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git

commit 6bdf49e1a68131545a8385123da558be287a196f
Author: xunxunmimi5577 <52647492+xunxunmimi5...@users.noreply.github.com>
AuthorDate: Fri Jun 24 02:12:40 2022 +0800

[Improvement] Check ADAPTIVE_EXECUTION_ENABLED in RssShuffleManager (#197)

### What changes were proposed in this pull request?
 1. Add checking of spark.sql.adaptive.enabled=false in RssShuffleManager's 
constructor for spark2.
 2. Add a description of this parameter in the Deploy Spark Client section 
of the readme.

### Why are the changes needed?
 When using Firestorm with Spark2 and spark.sql.adaptive.enabled=true, the result is 
wrong, but we didn't get any hint.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Manual test
---
 README.md  | 1 +
 .../src/main/java/org/apache/spark/shuffle/RssShuffleManager.java  | 3 +++
 2 files changed, 4 insertions(+)

diff --git a/README.md b/README.md
index 50903ce..0fb65e5 100644
--- a/README.md
+++ b/README.md
@@ -149,6 +149,7 @@ rss-xxx.tgz will be generated for deployment
```
spark.shuffle.manager org.apache.spark.shuffle.RssShuffleManager
spark.rss.coordinator.quorum :1,:1
+   # Note: For Spark2, spark.sql.adaptive.enabled should be false because 
Spark2 doesn't support AQE.
```
 
 ### Support Spark dynamic allocation
diff --git 
a/client-spark/spark2/src/main/java/org/apache/spark/shuffle/RssShuffleManager.java
 
b/client-spark/spark2/src/main/java/org/apache/spark/shuffle/RssShuffleManager.java
index 8a2c385..28f1a8d 100644
--- 
a/client-spark/spark2/src/main/java/org/apache/spark/shuffle/RssShuffleManager.java
+++ 
b/client-spark/spark2/src/main/java/org/apache/spark/shuffle/RssShuffleManager.java
@@ -136,6 +136,9 @@ public class RssShuffleManager implements ShuffleManager {
   };
 
   public RssShuffleManager(SparkConf sparkConf, boolean isDriver) {
+if (sparkConf.getBoolean("spark.sql.adaptive.enabled", false)) {
+  throw new IllegalArgumentException("Spark2 doesn't support AQE, 
spark.sql.adaptive.enabled should be false.");
+}
 this.sparkConf = sparkConf;
 
 // set & check replica config



[incubator-uniffle] 03/17: [Bugfix] Fix spark2 executor stop NPE problem (#186)

2022-07-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git

commit 924dac7f093d0b3f581e521fc71bc30ea0963907
Author: roryqi 
AuthorDate: Wed Jun 22 14:34:06 2022 +0800

[Bugfix] Fix spark2 executor stop NPE problem (#186)

### What changes were proposed in this pull request?
We need to check whether heartbeatExecutorService is null before stopping it.

### Why are the changes needed?
PR #177 introduced this problem: when we run Spark applications on our
cluster, the executor throws an NPE when the `stop` method is called.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Manual test



[incubator-uniffle] 05/17: upgrade to 0.6.0-snapshot (#190)

2022-07-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git

commit 8d8e6bf81ebf0bbb669642a46d13581927f9cec9
Author: roryqi 
AuthorDate: Wed Jun 22 17:36:33 2022 +0800

upgrade to 0.6.0-snapshot (#190)

### What changes were proposed in this pull request?
upgrade version number

### Why are the changes needed?
upgrade to 0.6.0-snapshot

### Does this PR introduce _any_ user-facing change?
no

### How was this patch tested?
no
---
 client-mr/pom.xml | 4 ++--
 client-spark/common/pom.xml   | 4 ++--
 client-spark/spark2/pom.xml   | 4 ++--
 client-spark/spark3/pom.xml   | 4 ++--
 client/pom.xml| 4 ++--
 common/pom.xml| 2 +-
 coordinator/pom.xml   | 2 +-
 integration-test/common/pom.xml   | 4 ++--
 integration-test/mr/pom.xml   | 4 ++--
 integration-test/spark-common/pom.xml | 4 ++--
 integration-test/spark2/pom.xml   | 4 ++--
 integration-test/spark3/pom.xml   | 4 ++--
 internal-client/pom.xml   | 4 ++--
 pom.xml   | 2 +-
 proto/pom.xml | 2 +-
 server/pom.xml| 2 +-
 storage/pom.xml   | 2 +-
 17 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/client-mr/pom.xml b/client-mr/pom.xml
index c15ffba..650a771 100644
--- a/client-mr/pom.xml
+++ b/client-mr/pom.xml
@@ -23,13 +23,13 @@
 
 rss-main
 com.tencent.rss
-0.5.0-snapshot
+0.6.0-snapshot
 ../pom.xml
 
 
 com.tencent.rss
 rss-client-mr
-0.5.0-snapshot
+0.6.0-snapshot
 jar
 
 
diff --git a/client-spark/common/pom.xml b/client-spark/common/pom.xml
index 61c4b1f..e79a671 100644
--- a/client-spark/common/pom.xml
+++ b/client-spark/common/pom.xml
@@ -25,12 +25,12 @@
 
 rss-main
 com.tencent.rss
-0.5.0-snapshot
+0.6.0-snapshot
 ../../pom.xml
 
 
 rss-client-spark-common
-0.5.0-snapshot
+0.6.0-snapshot
 jar
 
 
diff --git a/client-spark/spark2/pom.xml b/client-spark/spark2/pom.xml
index 41a4432..54434d5 100644
--- a/client-spark/spark2/pom.xml
+++ b/client-spark/spark2/pom.xml
@@ -24,13 +24,13 @@
   
 com.tencent.rss
 rss-main
-0.5.0-snapshot
+0.6.0-snapshot
 ../../pom.xml
   
 
   com.tencent.rss
   rss-client-spark2
-  0.5.0-snapshot
+  0.6.0-snapshot
   jar
 
   
diff --git a/client-spark/spark3/pom.xml b/client-spark/spark3/pom.xml
index 5674613..8cd091e 100644
--- a/client-spark/spark3/pom.xml
+++ b/client-spark/spark3/pom.xml
@@ -24,13 +24,13 @@
 
 rss-main
 com.tencent.rss
-0.5.0-snapshot
+0.6.0-snapshot
 ../../pom.xml
 
 
 com.tencent.rss
 rss-client-spark3
-0.5.0-snapshot
+0.6.0-snapshot
 jar
 
 
diff --git a/client/pom.xml b/client/pom.xml
index e6134ce..1b4e3d7 100644
--- a/client/pom.xml
+++ b/client/pom.xml
@@ -24,12 +24,12 @@
   
 com.tencent.rss
 rss-main
-0.5.0-snapshot
+0.6.0-snapshot
   
 
   com.tencent.rss
   rss-client
-  0.5.0-snapshot
+  0.6.0-snapshot
   jar
 
   
diff --git a/common/pom.xml b/common/pom.xml
index b4b65f8..9d6b2df 100644
--- a/common/pom.xml
+++ b/common/pom.xml
@@ -22,7 +22,7 @@
   
 com.tencent.rss
 rss-main
-0.5.0-snapshot
+0.6.0-snapshot
 ../pom.xml
   
 
diff --git a/coordinator/pom.xml b/coordinator/pom.xml
index e860a50..28b5b5c 100644
--- a/coordinator/pom.xml
+++ b/coordinator/pom.xml
@@ -24,7 +24,7 @@
   
 com.tencent.rss
 rss-main
-0.5.0-snapshot
+0.6.0-snapshot
 ../pom.xml
   
 
diff --git a/integration-test/common/pom.xml b/integration-test/common/pom.xml
index 2a759a4..179ecb8 100644
--- a/integration-test/common/pom.xml
+++ b/integration-test/common/pom.xml
@@ -24,13 +24,13 @@
 
 com.tencent.rss
 rss-main
-0.5.0-snapshot
+0.6.0-snapshot
 ../../pom.xml
 
 
 com.tencent.rss
 rss-integration-common-test
-0.5.0-snapshot
+0.6.0-snapshot
 jar
 
 
diff --git a/integration-test/mr/pom.xml b/integration-test/mr/pom.xml
index 489ffd5..6ae8a17 100644
--- a/integration-test/mr/pom.xml
+++ b/integration-test/mr/pom.xml
@@ -22,14 +22,14 @@
 
 rss-main
 com.tencent.rss
-0.5.0-snapshot
+0.6.0-snapshot
 ../../pom.xml
 
 4.0.0
 
 com.tencent.rss
 rss-integration-mr-test
-0.5.0-snapshot
+0.6.0-snapshot
 jar
 
 
diff --git a/integration-test/spark-common/pom.xml 
b/integration-test/spark-common/pom.xml
index 284ca2b..8f642a5 100644
--- a/integration-test/spark-common/pom.xml
+++ b/integration-test/spark-common/pom.xml
@@ -23,14 +23,14 @@
   
 rss-main
 com.tencent.rss
-0.5.0

[incubator-uniffle] 06/17: [Bugfix] Fix MR don't have remote storage information when we use dynamic conf and MEMORY_LOCALE_HDFS storageType (#195)

2022-07-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git

commit cf731f24ef3f10bb24c57475131c04355c9d7e64
Author: roryqi 
AuthorDate: Thu Jun 23 09:49:16 2022 +0800

[Bugfix] Fix MR don't have remote storage information when we use dynamic 
conf and MEMORY_LOCALE_HDFS storageType (#195)

### What changes were proposed in this pull request?
We should acquire the storageType from extraConf.
### Why are the changes needed?
Without this patch, MR doesn't work when we use dynamic conf and the
MEMORY_LOCALFILE_HDFS storage type.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Manual test
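
Illustration only (not part of this commit): a hedged Java sketch of the lookup order the one-line fix relies on, i.e. prefer the value delivered through the dynamic client conf (extraConf) and only then fall back to the job Configuration. The helper name, its signature, and the key string are illustrative; the real code uses RssMRUtils.getString as shown in the diff.

import org.apache.hadoop.conf.Configuration;

// Sketch: dynamic client conf (extraConf) takes precedence over the job conf.
public class StorageTypeLookupSketch {
  static String getString(Configuration extraConf, Configuration conf, String key) {
    String value = extraConf.get(key);
    return value != null ? value : conf.get(key);
  }

  public static void main(String[] args) {
    Configuration extraConf = new Configuration(false);
    Configuration conf = new Configuration(false);
    // The job conf may not carry the storage type at all when dynamic conf is on.
    extraConf.set("mapreduce.rss.storage.type", "MEMORY_LOCALFILE_HDFS");
    System.out.println(getString(extraConf, conf, "mapreduce.rss.storage.type"));
  }
}
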
---
 .../main/java/org/apache/hadoop/mapreduce/v2/app/RssMRAppMaster.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/client-mr/src/main/java/org/apache/hadoop/mapreduce/v2/app/RssMRAppMaster.java
 
b/client-mr/src/main/java/org/apache/hadoop/mapreduce/v2/app/RssMRAppMaster.java
index 7511104..976b03c 100644
--- 
a/client-mr/src/main/java/org/apache/hadoop/mapreduce/v2/app/RssMRAppMaster.java
+++ 
b/client-mr/src/main/java/org/apache/hadoop/mapreduce/v2/app/RssMRAppMaster.java
@@ -180,7 +180,7 @@ public class RssMRAppMaster extends MRAppMaster {
 RssMRUtils.applyDynamicClientConf(extraConf, clusterClientConf);
   }
 
-  String storageType = conf.get(RssMRConfig.RSS_STORAGE_TYPE);
+  String storageType = RssMRUtils.getString(extraConf, conf, 
RssMRConfig.RSS_STORAGE_TYPE);
   RemoteStorageInfo defaultRemoteStorage =
   new RemoteStorageInfo(conf.get(RssMRConfig.RSS_REMOTE_STORAGE_PATH, 
""));
   RemoteStorageInfo remoteStorage = ClientUtils.fetchRemoteStorage(



[incubator-uniffle] branch branch-0.1.0 created (now 36343ec)

2022-07-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a change to branch branch-0.1.0
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git


  at 36343ec  Upgrade the version to 0.1.0

No new revisions were added by this update.



[incubator-uniffle] 01/02: [Feature] [0.2] Support Spark 3.2 (#88)

2022-07-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch branch-0.2.0
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git

commit 7f3c44a9a051310e991034162ef53e2835490e71
Author: roryqi 
AuthorDate: Tue Mar 1 20:33:34 2022 +0800

[Feature] [0.2] Support Spark 3.2 (#88)

### What changes were proposed in this pull request?
Support Spark 3.2

### Why are the changes needed?
We need to support more Spark versions.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
GA passed (including profiles spark2, spark3, spark3.0, spark3.1, spark3.2).

Co-authored-by: roryqi 
---
 README.md  |   2 +-
 .../spark/shuffle/writer/WriteBufferManager.java   |   3 +-
 .../spark/shuffle/writer/RssShuffleWriter.java |   5 +
 .../tencent/rss/test/SparkIntegrationTestBase.java |   4 +
 integration-test/spark3/pom.xml|   2 +
 pom.xml| 106 -
 6 files changed, 119 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index a785f47..ac3e92a 100644
--- a/README.md
+++ b/README.md
@@ -36,7 +36,7 @@ The shuffle data is stored with index file and data file. 
Data file has all bloc
 ![Rss Shuffle_Write](docs/asset/rss_data_format.png)
 
 ## Supported Spark Version
-Current support Spark 2.3.x, Spark 2.4.x, Spark3.0.x, Spark 3.1.x
+Current support Spark 2.3.x, Spark 2.4.x, Spark3.0.x, Spark 3.1.x, Spark 3.2.x
 
 Note: To support dynamic allocation, the patch(which is included in 
client-spark/patch folder) should be applied to Spark
 
diff --git 
a/client-spark/common/src/main/java/org/apache/spark/shuffle/writer/WriteBufferManager.java
 
b/client-spark/common/src/main/java/org/apache/spark/shuffle/writer/WriteBufferManager.java
index 1b26f0b..91cc6a7 100644
--- 
a/client-spark/common/src/main/java/org/apache/spark/shuffle/writer/WriteBufferManager.java
+++ 
b/client-spark/common/src/main/java/org/apache/spark/shuffle/writer/WriteBufferManager.java
@@ -28,6 +28,7 @@ import com.google.common.annotations.VisibleForTesting;
 import com.google.common.collect.Maps;
 import org.apache.spark.executor.ShuffleWriteMetrics;
 import org.apache.spark.memory.MemoryConsumer;
+import org.apache.spark.memory.MemoryMode;
 import org.apache.spark.memory.TaskMemoryManager;
 import org.apache.spark.serializer.SerializationStream;
 import org.apache.spark.serializer.Serializer;
@@ -86,7 +87,7 @@ public class WriteBufferManager extends MemoryConsumer {
   Map> partitionToServers,
   TaskMemoryManager taskMemoryManager,
   ShuffleWriteMetrics shuffleWriteMetrics) {
-super(taskMemoryManager);
+super(taskMemoryManager, taskMemoryManager.pageSizeBytes(), 
MemoryMode.ON_HEAP);
 this.bufferSize = bufferManagerOptions.getBufferSize();
 this.spillSize = bufferManagerOptions.getBufferSpillThreshold();
 this.instance = serializer.newInstance();
diff --git 
a/client-spark/spark3/src/main/java/org/apache/spark/shuffle/writer/RssShuffleWriter.java
 
b/client-spark/spark3/src/main/java/org/apache/spark/shuffle/writer/RssShuffleWriter.java
index 2a4beb6..a7e4480 100644
--- 
a/client-spark/spark3/src/main/java/org/apache/spark/shuffle/writer/RssShuffleWriter.java
+++ 
b/client-spark/spark3/src/main/java/org/apache/spark/shuffle/writer/RssShuffleWriter.java
@@ -171,6 +171,11 @@ public class RssShuffleWriter extends 
ShuffleWriter {
 + bufferManager.getManagerCostInfo());
   }
 
+  // only push-based shuffle use this interface, but rss won't be used when 
push-based shuffle is enabled.
+  public long[] getPartitionLengths() {
+return new long[0];
+  }
+
   private void processShuffleBlockInfos(List 
shuffleBlockInfoList, Set blockIds) {
 if (shuffleBlockInfoList != null && !shuffleBlockInfoList.isEmpty()) {
   shuffleBlockInfoList.forEach(sbi -> {
diff --git 
a/integration-test/spark-common/src/test/java/com/tencent/rss/test/SparkIntegrationTestBase.java
 
b/integration-test/spark-common/src/test/java/com/tencent/rss/test/SparkIntegrationTestBase.java
index 06789d2..1e15ba6 100644
--- 
a/integration-test/spark-common/src/test/java/com/tencent/rss/test/SparkIntegrationTestBase.java
+++ 
b/integration-test/spark-common/src/test/java/com/tencent/rss/test/SparkIntegrationTestBase.java
@@ -21,6 +21,9 @@ package com.tencent.rss.test;
 import static org.junit.Assert.assertEquals;
 
 import java.util.Map;
+import java.util.concurrent.TimeUnit;
+
+import com.google.common.util.concurrent.Uninterruptibles;
 import org.apache.spark.SparkConf;
 import org.apache.spark.shuffle.RssClientConfig;
 import org.apache.spark.sql.SparkSession;
@@ -50,6 +53,7 @@ public abstract class SparkIntegrationTestBase extends 
IntegrationTestBase {
 Map resultWithoutRss = runSparkApp(sparkConf, fileName);
 long durationWithoutRss = System.currentTimeMi

[incubator-uniffle] branch branch-0.2.0 created (now 75b5376)

2022-07-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a change to branch branch-0.2.0
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git


  at 75b5376  [Bugfix] Fix uncorrect index file (#92)

This branch includes the following new commits:

 new 7f3c44a  [Feature] [0.2] Support Spark 3.2 (#88)
 new 75b5376  [Bugfix] Fix uncorrect index file (#92)

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.




[incubator-uniffle] 02/02: [Bugfix] Fix uncorrect index file (#92)

2022-07-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch branch-0.2.0
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git

commit 75b537661f1a29291f199974c6e7fa1e39197d72
Author: roryqi 
AuthorDate: Tue Mar 8 16:31:33 2022 +0800

[Bugfix] Fix uncorrect index file (#92)

### What changes were proposed in this pull request?
Modify the method that calculate the offset in the index file.

### Why are the changes needed?
Without this patch, query24a fails when we run 10TB TPC-DS.
(screenshot: https://user-images.githubusercontent.com/8159038/157178756-d8a39b3f-0ea6-4864-ac68-ee382a88bb0f.png)
When we write a lot of data to dataOutputStream, dataOutputStream.size() stops
increasing: once it overflows it stays at Integer.MAX_VALUE.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Add new uts.

Co-authored-by: roryqi 
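
Illustration only (not part of this commit): a self-contained Java sketch of the overflow the description mentions. DataOutputStream keeps its byte count in an int and caps it at Integer.MAX_VALUE, so size() cannot track offsets past 2 GB, while accumulating data.length into a long (as the fix does) stays correct. The no-op OutputStream is only there to keep the example cheap to run.

import java.io.DataOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Sketch: size() saturates at Integer.MAX_VALUE; a long accumulator does not.
public class OffsetOverflowSketch {
  public static void main(String[] args) throws IOException {
    OutputStream devNull = new OutputStream() {
      @Override public void write(int b) {}
      @Override public void write(byte[] b, int off, int len) {}
    };
    DataOutputStream out = new DataOutputStream(devNull);
    byte[] data = new byte[Integer.MAX_VALUE / 100];
    long nextOffset = 0L;
    for (int i = 0; i < 200; i++) {
      out.write(data);
      nextOffset += data.length;                    // what LocalFileWriter now does
    }
    System.out.println("size()     = " + out.size());   // capped at Integer.MAX_VALUE
    System.out.println("nextOffset = " + nextOffset);   // ~4.29 GB, the real offset
  }
}
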
---
 .../rss/storage/handler/impl/LocalFileWriter.java   |  6 ++
 .../rss/storage/handler/impl/LocalFileHandlerTest.java  | 17 +
 2 files changed, 19 insertions(+), 4 deletions(-)

diff --git 
a/storage/src/main/java/com/tencent/rss/storage/handler/impl/LocalFileWriter.java
 
b/storage/src/main/java/com/tencent/rss/storage/handler/impl/LocalFileWriter.java
index 10185a4..609db7e 100644
--- 
a/storage/src/main/java/com/tencent/rss/storage/handler/impl/LocalFileWriter.java
+++ 
b/storage/src/main/java/com/tencent/rss/storage/handler/impl/LocalFileWriter.java
@@ -30,21 +30,19 @@ public class LocalFileWriter implements Closeable {
 
   private DataOutputStream dataOutputStream;
   private FileOutputStream fileOutputStream;
-  private long initSize;
   private long nextOffset;
 
   public LocalFileWriter(File file) throws IOException {
 fileOutputStream = new FileOutputStream(file, true);
 // init fsDataOutputStream
 dataOutputStream = new DataOutputStream(fileOutputStream);
-initSize = file.length();
-nextOffset = initSize;
+nextOffset = file.length();
   }
 
   public void writeData(byte[] data) throws IOException {
 if (data != null && data.length > 0) {
   dataOutputStream.write(data);
-  nextOffset = initSize + dataOutputStream.size();
+  nextOffset = nextOffset + data.length;
 }
   }
 
diff --git 
a/storage/src/test/java/com/tencent/rss/storage/handler/impl/LocalFileHandlerTest.java
 
b/storage/src/test/java/com/tencent/rss/storage/handler/impl/LocalFileHandlerTest.java
index 969944d..ce8915b 100644
--- 
a/storage/src/test/java/com/tencent/rss/storage/handler/impl/LocalFileHandlerTest.java
+++ 
b/storage/src/test/java/com/tencent/rss/storage/handler/impl/LocalFileHandlerTest.java
@@ -39,6 +39,7 @@ import com.tencent.rss.storage.handler.api.ServerReadHandler;
 import com.tencent.rss.storage.handler.api.ShuffleWriteHandler;
 import com.tencent.rss.storage.util.ShuffleStorageUtils;
 import java.io.File;
+import java.io.IOException;
 import java.util.List;
 import java.util.Map;
 import java.util.Random;
@@ -53,6 +54,7 @@ public class LocalFileHandlerTest {
   @Test
   public void writeTest() throws Exception {
 File tmpDir = Files.createTempDir();
+tmpDir.deleteOnExit();
 File dataDir1 = new File(tmpDir, "data1");
 File dataDir2 = new File(tmpDir, "data2");
 String[] basePaths = new String[]{dataDir1.getAbsolutePath(),
@@ -111,6 +113,21 @@ public class LocalFileHandlerTest {
 }
   }
 
+  @Test
+  public void writeBigDataTest() throws IOException  {
+File tmpDir = Files.createTempDir();
+tmpDir.deleteOnExit();
+File writeFile = new File(tmpDir, "writetest");
+LocalFileWriter writer = new LocalFileWriter(writeFile);
+int  size = Integer.MAX_VALUE / 100;
+byte[] data = new byte[size];
+for (int i = 0; i < 200; i++) {
+  writer.writeData(data);
+}
+long totalSize = 200L * size;
+assertEquals(writer.nextOffset(), totalSize);
+  }
+
 
   private void writeTestData(
   ShuffleWriteHandler writeHandler,



svn commit: r46186 - /dev/incubator/livy/0.7.1-incubating-rc1/ /release/incubator/livy/0.7.1-incubating/

2021-02-18 Thread jshao
Author: jshao
Date: Fri Feb 19 01:57:45 2021
New Revision: 46186

Log:
Livy 0.7.1-incubating release

Added:
release/incubator/livy/0.7.1-incubating/
  - copied from r46185, dev/incubator/livy/0.7.1-incubating-rc1/
Removed:
dev/incubator/livy/0.7.1-incubating-rc1/



svn commit: r45798 - /dev/incubator/livy/0.7.1-incubating-rc1/

2021-02-03 Thread jshao
Author: jshao
Date: Thu Feb  4 05:00:19 2021
New Revision: 45798

Log:
Apache Livy 0.7.1-incubating-rc1

Added:
dev/incubator/livy/0.7.1-incubating-rc1/

dev/incubator/livy/0.7.1-incubating-rc1/apache-livy-0.7.1-incubating-bin.zip   
(with props)

dev/incubator/livy/0.7.1-incubating-rc1/apache-livy-0.7.1-incubating-bin.zip.asc
   (with props)

dev/incubator/livy/0.7.1-incubating-rc1/apache-livy-0.7.1-incubating-bin.zip.sha512

dev/incubator/livy/0.7.1-incubating-rc1/apache-livy-0.7.1-incubating-src.zip   
(with props)

dev/incubator/livy/0.7.1-incubating-rc1/apache-livy-0.7.1-incubating-src.zip.asc
   (with props)

dev/incubator/livy/0.7.1-incubating-rc1/apache-livy-0.7.1-incubating-src.zip.sha512

Added: 
dev/incubator/livy/0.7.1-incubating-rc1/apache-livy-0.7.1-incubating-bin.zip
==
Binary file - no diff available.

Propchange: 
dev/incubator/livy/0.7.1-incubating-rc1/apache-livy-0.7.1-incubating-bin.zip
--
svn:mime-type = application/zip

Added: 
dev/incubator/livy/0.7.1-incubating-rc1/apache-livy-0.7.1-incubating-bin.zip.asc
==
Binary file - no diff available.

Propchange: 
dev/incubator/livy/0.7.1-incubating-rc1/apache-livy-0.7.1-incubating-bin.zip.asc
--
svn:mime-type = application/pgp-signature

Added: 
dev/incubator/livy/0.7.1-incubating-rc1/apache-livy-0.7.1-incubating-bin.zip.sha512
==
--- 
dev/incubator/livy/0.7.1-incubating-rc1/apache-livy-0.7.1-incubating-bin.zip.sha512
 (added)
+++ 
dev/incubator/livy/0.7.1-incubating-rc1/apache-livy-0.7.1-incubating-bin.zip.sha512
 Thu Feb  4 05:00:19 2021
@@ -0,0 +1,4 @@
+apache-livy-0.7.1-incubating-bin.zip: C4987855 FDCD7220 ABC0FA19 63359019
+  34B2AB6C 76BF54C3 7AF14D97 4FD0BB44
+  05D58AD3 B10C64B8 1E1C0B73 5017822E
+  2030CB57 41C232B3 4E492181 E49002A4

Added: 
dev/incubator/livy/0.7.1-incubating-rc1/apache-livy-0.7.1-incubating-src.zip
==
Binary file - no diff available.

Propchange: 
dev/incubator/livy/0.7.1-incubating-rc1/apache-livy-0.7.1-incubating-src.zip
--
svn:mime-type = application/zip

Added: 
dev/incubator/livy/0.7.1-incubating-rc1/apache-livy-0.7.1-incubating-src.zip.asc
==
Binary file - no diff available.

Propchange: 
dev/incubator/livy/0.7.1-incubating-rc1/apache-livy-0.7.1-incubating-src.zip.asc
--
svn:mime-type = application/pgp-signature

Added: 
dev/incubator/livy/0.7.1-incubating-rc1/apache-livy-0.7.1-incubating-src.zip.sha512
==
--- 
dev/incubator/livy/0.7.1-incubating-rc1/apache-livy-0.7.1-incubating-src.zip.sha512
 (added)
+++ 
dev/incubator/livy/0.7.1-incubating-rc1/apache-livy-0.7.1-incubating-src.zip.sha512
 Thu Feb  4 05:00:19 2021
@@ -0,0 +1,4 @@
+apache-livy-0.7.1-incubating-src.zip: 03E6F489 518930F5 906F793D A88A6DC0
+  F9735D87 5BCE0E2F 1818AEAA B1C0150D
+  EA9FEB69 9690938A FA6C1648 291FC90D
+  6A9AF132 D4E88C8B CFF2F327 A9CF8AB1




[incubator-livy] branch branch-0.7 updated: [BUILD] Update version for 0.7.2-incubating-SNAPSHOT

2021-02-03 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch branch-0.7
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/branch-0.7 by this push:
 new 972b600  [BUILD] Update version for 0.7.2-incubating-SNAPSHOT
972b600 is described below

commit 972b600d72629884140aec315ea925858eb67884
Author: jerryshao 
AuthorDate: Thu Feb 4 11:00:54 2021 +0800

[BUILD] Update version for 0.7.2-incubating-SNAPSHOT
---
 api/pom.xml  | 4 ++--
 assembly/pom.xml | 4 ++--
 client-common/pom.xml| 4 ++--
 client-http/pom.xml  | 4 ++--
 core/pom.xml | 4 ++--
 core/scala-2.11/pom.xml  | 4 ++--
 coverage/pom.xml | 4 ++--
 examples/pom.xml | 4 ++--
 integration-test/pom.xml | 4 ++--
 pom.xml  | 2 +-
 python-api/pom.xml   | 4 ++--
 python-api/setup.py  | 2 +-
 repl/pom.xml | 4 ++--
 repl/scala-2.11/pom.xml  | 4 ++--
 rsc/pom.xml  | 2 +-
 scala-api/pom.xml| 4 ++--
 scala-api/scala-2.11/pom.xml | 4 ++--
 scala/pom.xml| 4 ++--
 server/pom.xml   | 4 ++--
 test-lib/pom.xml | 4 ++--
 thriftserver/client/pom.xml  | 2 +-
 thriftserver/server/pom.xml  | 2 +-
 thriftserver/session/pom.xml | 2 +-
 23 files changed, 40 insertions(+), 40 deletions(-)

diff --git a/api/pom.xml b/api/pom.xml
index fbd8496..e160690 100644
--- a/api/pom.xml
+++ b/api/pom.xml
@@ -20,12 +20,12 @@
   
 org.apache.livy
 livy-main
-0.7.1-incubating
+0.7.2-incubating-SNAPSHOT
   
 
   org.apache.livy
   livy-api
-  0.7.1-incubating
+  0.7.2-incubating-SNAPSHOT
   jar
 
   
diff --git a/assembly/pom.xml b/assembly/pom.xml
index 36bb48c..113a704 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -20,12 +20,12 @@
   
 org.apache.livy
 livy-main
-0.7.1-incubating
+0.7.2-incubating-SNAPSHOT
 ../pom.xml
   
 
   livy-assembly
-  0.7.1-incubating
+  0.7.2-incubating-SNAPSHOT
   pom
 
   
diff --git a/client-common/pom.xml b/client-common/pom.xml
index 3897c2b..540d68d 100644
--- a/client-common/pom.xml
+++ b/client-common/pom.xml
@@ -20,12 +20,12 @@
   
 org.apache.livy
 livy-main
-0.7.1-incubating
+0.7.2-incubating-SNAPSHOT
   
 
   org.apache.livy
   livy-client-common
-  0.7.1-incubating
+  0.7.2-incubating-SNAPSHOT
   jar
 
   
diff --git a/client-http/pom.xml b/client-http/pom.xml
index a053d8d..1625a54 100644
--- a/client-http/pom.xml
+++ b/client-http/pom.xml
@@ -20,12 +20,12 @@
   
 org.apache.livy
 livy-main
-0.7.1-incubating
+0.7.2-incubating-SNAPSHOT
   
 
   org.apache.livy
   livy-client-http
-  0.7.1-incubating
+  0.7.2-incubating-SNAPSHOT
   jar
 
   
diff --git a/core/pom.xml b/core/pom.xml
index 2b21dec..6c76db4 100644
--- a/core/pom.xml
+++ b/core/pom.xml
@@ -22,12 +22,12 @@
   
 org.apache.livy
 multi-scala-project-root
-0.7.1-incubating
+0.7.2-incubating-SNAPSHOT
 ../scala/pom.xml
   
 
   livy-core-parent
-  0.7.1-incubating
+  0.7.2-incubating-SNAPSHOT
   pom
 
   
diff --git a/core/scala-2.11/pom.xml b/core/scala-2.11/pom.xml
index 5100e19..e703896 100644
--- a/core/scala-2.11/pom.xml
+++ b/core/scala-2.11/pom.xml
@@ -19,13 +19,13 @@
   4.0.0
   org.apache.livy
   livy-core_2.11
-  0.7.1-incubating
+  0.7.2-incubating-SNAPSHOT
   jar
 
   
 org.apache.livy
 livy-core-parent
-0.7.1-incubating
+0.7.2-incubating-SNAPSHOT
 ../pom.xml
   
 
diff --git a/coverage/pom.xml b/coverage/pom.xml
index 9c23dca..d358671 100644
--- a/coverage/pom.xml
+++ b/coverage/pom.xml
@@ -23,11 +23,11 @@
 org.apache.livy
 livy-main
 ../pom.xml
-0.7.1-incubating
+0.7.2-incubating-SNAPSHOT
   
 
   livy-coverage-report
-  0.7.1-incubating
+  0.7.2-incubating-SNAPSHOT
   pom
 
   
diff --git a/examples/pom.xml b/examples/pom.xml
index 9692224..7ddc525 100644
--- a/examples/pom.xml
+++ b/examples/pom.xml
@@ -23,13 +23,13 @@
   
 org.apache.livy
 livy-main
-0.7.1-incubating
+0.7.2-incubating-SNAPSHOT
 ../pom.xml
   
 
   org.apache.livy
   livy-examples
-  0.7.1-incubating
+  0.7.2-incubating-SNAPSHOT
   jar
 
   
diff --git a/integration-test/pom.xml b/integration-test/pom.xml
index 80a9c29..a658b81 100644
--- a/integration-test/pom.xml
+++ b/integration-test/pom.xml
@@ -23,11 +23,11 @@
 org.apache.livy
 livy-main
 ../pom.xml
-0.7.1-incubating
+0.7.2-incubating-SNAPSHOT
   
 
   livy-integration-test
-  0.7.1-incubating
+  0.7.2-incubating-SNAPSHOT
   jar
 
   
diff --git a/pom.xml b/pom.xml
index 20b1a55..9eb1967 100644
--- a/pom.xml
+++ b/pom.xml
@@ -22,7 +22,7 @@
 
   org.apache.livy
   livy-main
-  0.7.1-incubating
+  0.7.2-incubating-SNAPSHOT
   pom
   Livy Project Parent POM
   Livy Project
diff --git a/python-api/pom.xml b/python-api/pom.xml
index 3d7b178..2679ab3 100644

[incubator-livy] tag v0.7.1-incubating-rc1 created (now 7c3d341)

2021-02-03 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a change to tag v0.7.1-incubating-rc1
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git.


  at 7c3d341  (commit)
No new revisions were added by this update.



[incubator-livy] branch branch-0.7 updated: [BUILD] Update version for 0.7.1-incubating

2021-02-03 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch branch-0.7
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/branch-0.7 by this push:
 new 7c3d341  [BUILD] Update version for 0.7.1-incubating
7c3d341 is described below

commit 7c3d341926db69fb57a4978b15d4e96f06312267
Author: jerryshao 
AuthorDate: Thu Feb 4 10:31:32 2021 +0800

[BUILD] Update version for 0.7.1-incubating
---
 api/pom.xml  | 4 ++--
 assembly/pom.xml | 4 ++--
 client-common/pom.xml| 4 ++--
 client-http/pom.xml  | 4 ++--
 core/pom.xml | 4 ++--
 core/scala-2.11/pom.xml  | 4 ++--
 coverage/pom.xml | 4 ++--
 examples/pom.xml | 4 ++--
 integration-test/pom.xml | 4 ++--
 pom.xml  | 2 +-
 python-api/pom.xml   | 4 ++--
 python-api/setup.py  | 2 +-
 repl/pom.xml | 4 ++--
 repl/scala-2.11/pom.xml  | 4 ++--
 rsc/pom.xml  | 2 +-
 scala-api/pom.xml| 4 ++--
 scala-api/scala-2.11/pom.xml | 4 ++--
 scala/pom.xml| 4 ++--
 server/pom.xml   | 4 ++--
 test-lib/pom.xml | 4 ++--
 thriftserver/client/pom.xml  | 2 +-
 thriftserver/server/pom.xml  | 2 +-
 thriftserver/session/pom.xml | 2 +-
 23 files changed, 40 insertions(+), 40 deletions(-)

diff --git a/api/pom.xml b/api/pom.xml
index 66f175c..fbd8496 100644
--- a/api/pom.xml
+++ b/api/pom.xml
@@ -20,12 +20,12 @@
   
 org.apache.livy
 livy-main
-0.7.1-incubating-SNAPSHOT
+0.7.1-incubating
   
 
   org.apache.livy
   livy-api
-  0.7.1-incubating-SNAPSHOT
+  0.7.1-incubating
   jar
 
   
diff --git a/assembly/pom.xml b/assembly/pom.xml
index b94f0da..36bb48c 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -20,12 +20,12 @@
   
 org.apache.livy
 livy-main
-0.7.1-incubating-SNAPSHOT
+0.7.1-incubating
 ../pom.xml
   
 
   livy-assembly
-  0.7.1-incubating-SNAPSHOT
+  0.7.1-incubating
   pom
 
   
diff --git a/client-common/pom.xml b/client-common/pom.xml
index dac522c..3897c2b 100644
--- a/client-common/pom.xml
+++ b/client-common/pom.xml
@@ -20,12 +20,12 @@
   
 org.apache.livy
 livy-main
-0.7.1-incubating-SNAPSHOT
+0.7.1-incubating
   
 
   org.apache.livy
   livy-client-common
-  0.7.1-incubating-SNAPSHOT
+  0.7.1-incubating
   jar
 
   
diff --git a/client-http/pom.xml b/client-http/pom.xml
index ad31b41..a053d8d 100644
--- a/client-http/pom.xml
+++ b/client-http/pom.xml
@@ -20,12 +20,12 @@
   
 org.apache.livy
 livy-main
-0.7.1-incubating-SNAPSHOT
+0.7.1-incubating
   
 
   org.apache.livy
   livy-client-http
-  0.7.1-incubating-SNAPSHOT
+  0.7.1-incubating
   jar
 
   
diff --git a/core/pom.xml b/core/pom.xml
index 5623220..2b21dec 100644
--- a/core/pom.xml
+++ b/core/pom.xml
@@ -22,12 +22,12 @@
   
 org.apache.livy
 multi-scala-project-root
-0.7.1-incubating-SNAPSHOT
+0.7.1-incubating
 ../scala/pom.xml
   
 
   livy-core-parent
-  0.7.1-incubating-SNAPSHOT
+  0.7.1-incubating
   pom
 
   
diff --git a/core/scala-2.11/pom.xml b/core/scala-2.11/pom.xml
index 041f9c1..5100e19 100644
--- a/core/scala-2.11/pom.xml
+++ b/core/scala-2.11/pom.xml
@@ -19,13 +19,13 @@
   4.0.0
   org.apache.livy
   livy-core_2.11
-  0.7.1-incubating-SNAPSHOT
+  0.7.1-incubating
   jar
 
   
 org.apache.livy
 livy-core-parent
-0.7.1-incubating-SNAPSHOT
+0.7.1-incubating
 ../pom.xml
   
 
diff --git a/coverage/pom.xml b/coverage/pom.xml
index 6419bc4..9c23dca 100644
--- a/coverage/pom.xml
+++ b/coverage/pom.xml
@@ -23,11 +23,11 @@
 org.apache.livy
 livy-main
 ../pom.xml
-0.7.1-incubating-SNAPSHOT
+0.7.1-incubating
   
 
   livy-coverage-report
-  0.7.1-incubating-SNAPSHOT
+  0.7.1-incubating
   pom
 
   
diff --git a/examples/pom.xml b/examples/pom.xml
index 1f4aa32..9692224 100644
--- a/examples/pom.xml
+++ b/examples/pom.xml
@@ -23,13 +23,13 @@
   
 org.apache.livy
 livy-main
-0.7.1-incubating-SNAPSHOT
+0.7.1-incubating
 ../pom.xml
   
 
   org.apache.livy
   livy-examples
-  0.7.1-incubating-SNAPSHOT
+  0.7.1-incubating
   jar
 
   
diff --git a/integration-test/pom.xml b/integration-test/pom.xml
index 9fa230b..80a9c29 100644
--- a/integration-test/pom.xml
+++ b/integration-test/pom.xml
@@ -23,11 +23,11 @@
 org.apache.livy
 livy-main
 ../pom.xml
-0.7.1-incubating-SNAPSHOT
+0.7.1-incubating
   
 
   livy-integration-test
-  0.7.1-incubating-SNAPSHOT
+  0.7.1-incubating
   jar
 
   
diff --git a/pom.xml b/pom.xml
index 938bdbf..20b1a55 100644
--- a/pom.xml
+++ b/pom.xml
@@ -22,7 +22,7 @@
 
   org.apache.livy
   livy-main
-  0.7.1-incubating-SNAPSHOT
+  0.7.1-incubating
   pom
   Livy Project Parent POM
   Livy Project
diff --git a/python-api/pom.xml b/python-api/pom.xml
index 62850c0..3d7b178 100644
--- a/python-api/pom.xml

[incubator-livy] branch branch-0.7 updated: Add html escape to session name

2021-02-03 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch branch-0.7
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/branch-0.7 by this push:
 new 9f1ba47  Add html escape to session name
9f1ba47 is described below

commit 9f1ba47a2f0d8accc435b133b42c3a76aa9ac846
Author: Marco Gaido 
AuthorDate: Fri Aug 14 17:25:54 2020 -0700

Add html escape to session name

## What changes were proposed in this pull request?

The PR adds HTML escaping to session names.

## How was this patch tested?

Manual test.

Author: Marco Gaido 

Closes #302 from mgaido91/escape_html.
---
 .../org/apache/livy/server/ui/static/js/all-sessions.js| 10 +++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git 
a/server/src/main/resources/org/apache/livy/server/ui/static/js/all-sessions.js 
b/server/src/main/resources/org/apache/livy/server/ui/static/js/all-sessions.js
index 6e35702..d8a84a7 100644
--- 
a/server/src/main/resources/org/apache/livy/server/ui/static/js/all-sessions.js
+++ 
b/server/src/main/resources/org/apache/livy/server/ui/static/js/all-sessions.js
@@ -15,13 +15,17 @@
  * limitations under the License.
  */
 
+function escapeHtml(unescapedText) {
+  return $("").text(unescapedText).html()
+}
+
 function loadSessionsTable(sessions) {
   $.each(sessions, function(index, session) {
 $("#interactive-sessions .sessions-table-body").append(
   "" +
 tdWrap(uiLink("session/" + session.id, session.id)) +
 tdWrap(appIdLink(session)) +
-tdWrap(session.name) +
+tdWrap(escapeHtml(session.name)) +
 tdWrap(session.owner) +
 tdWrap(session.proxyUser) +
 tdWrap(session.kind) +
@@ -38,7 +42,7 @@ function loadBatchesTable(sessions) {
   "" +
 tdWrap(session.id) +
 tdWrap(appIdLink(session)) +
-tdWrap(session.name) +
+tdWrap(escapeHtml(session.name)) +
 tdWrap(session.owner) +
 tdWrap(session.proxyUser) +
 tdWrap(session.state) +
@@ -79,4 +83,4 @@ $(document).ready(function () {
   $("#all-sessions").append('No Sessions or Batches have been created 
yet.');
 }
   });
-});
\ No newline at end of file
+});
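
As a side note on the change above: the escapeHtml helper lets the browser convert the session name into safe HTML before it is appended to the sessions table. A minimal sketch of the same idea outside the browser, using Python's standard library (the function name below is hypothetical and only for illustration):

    import html

    def escape_session_name(name: str) -> str:
        # html.escape turns &, <, > and quotes into HTML entities, so a
        # malicious session name cannot inject markup into the sessions table.
        return html.escape(name, quote=True)

    print(escape_session_name('<script>alert(1)</script> my-session'))
    # prints: &lt;script&gt;alert(1)&lt;/script&gt; my-session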



[incubator-livy] branch master updated: [LIVY-756] Add Spark 3.0 and Scala 2.12 support

2020-07-02 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/master by this push:
 new 97cf2f7  [LIVY-756] Add Spark 3.0 and Scala 2.12 support
97cf2f7 is described below

commit 97cf2f75929ef6c152afc468adbead269bd0758f
Author: jerryshao 
AuthorDate: Thu Jul 2 15:44:12 2020 +0800

[LIVY-756] Add Spark 3.0 and Scala 2.12 support

## What changes were proposed in this pull request?

This PR is based on tprelle's PR #289 and addresses all the remaining issues in 
that PR:

1. Multi-Scala version support in one build (Scala 2.11 and 2.12).
2. Make SparkR work.

It also reverts most of the unnecessary changes. In addition, this PR removes the 
builds for Spark versions below 2.4 (2.2 and 2.3), since Spark 2.2 and 2.3 ship 
only with Scala 2.11 and maintaining multiple versions is hard. Users can still 
run against Spark 2.2 and 2.3 without changes.

All credit to tprelle.

## How was this patch tested?

Ran UT and IT with Spark 2.4.5 and 3.0.0 locally.

Author: jerryshao 

Closes #300 from jerryshao/LIVY-756.
---
 .gitignore |  1 +
 .rat-excludes  |  1 +
 .travis.yml| 24 +++---
 README.md  |  4 +-
 assembly/assembly.xml  |  7 ++
 assembly/pom.xml   | 23 ++
 client-common/pom.xml  |  2 +-
 .../org/apache/livy/client/common/Serializer.java  |  8 +-
 {client-common => core/scala-2.12}/pom.xml | 52 ++---
 .../org/apache/livy/LivyBaseUnitTestSuite.scala|  4 +-
 coverage/pom.xml   | 35 +
 .../org/apache/livy/examples/WordCountApp.scala|  2 +-
 integration-test/pom.xml   |  2 +-
 integration-test/src/test/resources/rtest.R|  9 +--
 .../scala/org/apache/livy/test/InteractiveIT.scala |  6 +-
 .../src/test/spark2/scala/Spark2JobApiIT.scala | 26 +--
 pom.xml| 88 +-
 repl/pom.xml   |  3 +
 repl/scala-2.11/pom.xml|  1 +
 .../org/apache/livy/repl/SparkInterpreter.scala|  5 +-
 repl/{scala-2.11 => scala-2.12}/pom.xml| 11 +--
 .../org/apache/livy/repl/SparkInterpreter.scala| 17 ++---
 .../apache/livy/repl/SparkInterpreterSpec.scala| 68 +
 .../main/scala/org/apache/livy/repl/Session.scala  |  4 +-
 .../org/apache/livy/repl/SQLInterpreterSpec.scala  |  4 +-
 rsc/pom.xml|  6 +-
 .../org/apache/livy/rsc/driver/SparkEntries.java   |  7 +-
 .../org/apache/livy/rsc/rpc/KryoMessageCodec.java  |  7 --
 {repl/scala-2.11 => scala-api/scala-2.12}/pom.xml  | 17 ++---
 scala-api/src/main/resources/build.marker  |  0
 .../org/apache/livy/scalaapi/ScalaJobHandle.scala  |  8 ++
 server/pom.xml |  9 ++-
 .../org/apache/livy/server/SessionServlet.scala|  2 +-
 .../server/interactive/InteractiveSession.scala|  6 +-
 .../org/apache/livy/utils/LivySparkUtils.scala |  4 +-
 .../apache/livy/server/BaseJsonServletSpec.scala   |  3 +-
 .../apache/livy/server/SessionServletSpec.scala|  2 +-
 .../livy/server/batch/BatchServletSpec.scala   |  2 +-
 .../livy/server/batch/BatchSessionSpec.scala   |  6 +-
 .../InteractiveSessionServletSpec.scala|  3 +-
 .../interactive/InteractiveSessionSpec.scala   |  2 +-
 .../livy/server/interactive/JobApiSpec.scala   |  2 +-
 .../server/interactive/SessionHeartbeatSpec.scala  |  2 +-
 .../server/recovery/FileSystemStateStoreSpec.scala |  2 +-
 .../livy/server/recovery/SessionStoreSpec.scala|  2 +-
 .../livy/server/recovery/StateStoreSpec.scala  |  2 -
 .../server/recovery/ZooKeeperStateStoreSpec.scala  |  2 +-
 .../apache/livy/sessions/SessionManagerSpec.scala  |  2 +-
 .../apache/livy/utils/LivySparkUtilsSuite.scala|  5 ++
 .../org/apache/livy/utils/SparkYarnAppSpec.scala   |  2 +-
 .../org/apache/livy/test/jobs/SQLGetTweets.java|  2 +-
 .../livy/thriftserver/types/DataTypeUtils.scala|  5 +-
 .../livy/thriftserver/ThriftServerSuites.scala |  3 +-
 thriftserver/session/pom.xml   | 13 
 .../thriftserver/session/ColumnBufferTest.java | 16 ++--
 55 files changed, 362 insertions(+), 189 deletions(-)

diff --git a/.gitignore b/.gitignore
index d46d49f..b1045ea 100644
--- a/.gitignore
+++ b/.gitignore
@@ -24,6 +24,7 @@ metastore_db/
 derby.log
 dependency-reduced-pom.xml
 release-staging/
+venv/
 
 # For python setup.py, which pollutes the source dirs.
 python-api/dist
diff --git a/.rat-excludes b/.rat-excludes
index ac29fe6

[incubator-livy] branch master updated: [MINOR] Modify the description of POST /sessions/{sessionId}/completion

2020-03-26 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/master by this push:
 new ee7fdfc  [MINOR] Modify the description of POST 
/sessions/{sessionId}/completion
ee7fdfc is described below

commit ee7fdfc45d90c0478dcd446bc8a19a217eebe04d
Author: Shingo Furuyama 
AuthorDate: Thu Mar 26 14:59:21 2020 +0800

[MINOR] Modify the description of POST /sessions/{sessionId}/completion

## What changes were proposed in this pull request?

Just modified a description of POST /sessions/{sessionId}/completion in the 
api-doc.

## How was this patch tested?

Since the change is quite small, I didn't test the patch. If given 
instructions, I will follow them.

Author: Shingo Furuyama 

Closes #285 from marblejenka/mod-doc-completion.
---
 docs/rest-api.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/rest-api.md b/docs/rest-api.md
index cca937f..d80e77d 100644
--- a/docs/rest-api.md
+++ b/docs/rest-api.md
@@ -312,7 +312,7 @@ Cancel the specified statement in this session.
 
 ### POST /sessions/{sessionId}/completion
 
-Runs a statement in a session.
+Returns code completion candidates for the specified code in the session.
 
  Request Body
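
For readers of the updated doc, a hedged sketch of calling this endpoint from Python follows. The host, session id, request fields (code, kind, cursor) and the candidates response field are assumptions based on the surrounding rest-api doc, not on this diff.

    import requests  # third-party HTTP client, assumed to be available

    LIVY = "http://localhost:8998"   # assumed Livy server address
    session_id = 0                   # assumed existing interactive session

    payload = {
        "code": "spark.crea",  # partial statement to complete
        "kind": "scala",       # kind of code the snippet is written in
        "cursor": "10",        # cursor position within the code
    }
    resp = requests.post(f"{LIVY}/sessions/{session_id}/completion", json=payload)
    resp.raise_for_status()
    print(resp.json().get("candidates"))  # list of completion proposals, if any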
 



[incubator-livy] branch master updated: [LIVY-751] Livy server should allow to customize LIVY_CLASSPATH

2020-03-26 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/master by this push:
 new e39d8fe  [LIVY-751] Livy server should allow to customize 
LIVY_CLASSPATH
e39d8fe is described below

commit e39d8fee43adbddf88acb2e04b470aa14b713785
Author: Shingo Furuyama 
AuthorDate: Thu Mar 26 14:07:42 2020 +0800

[LIVY-751] Livy server should allow to customize LIVY_CLASSPATH

## What changes were proposed in this pull request?

The purpose and background are described in https://issues.apache.org/jira/browse/LIVY-751

## How was this patch tested?

I tested the following two scenarios manually.

1. To confirm there is no degradation, I ran the 0.7.0-incubating Livy server 
with the sources in this PR. I also ran an example job, and it completed without 
error.
2. To confirm our workaround works, I built the 0.7.0-incubating branch with the 
sources in this PR, specifying `-Dhadoop.scope=provided`. After that, I 
added `export LIVY_CLASSPATH="$LIVY_HOME/jars/*:$(hadoop classpath)"` to 
conf/livy-env.sh and booted the Livy server. I also ran an example job, and it 
completed without error.

Author: Shingo Furuyama 
Author: Shingo Furuyama 

Closes #282 from marblejenka/livy-classpath.
---
 bin/livy-server   | 2 +-
 conf/livy-env.sh.template | 1 +
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/bin/livy-server b/bin/livy-server
index 8d27d4e..a0e2fb7 100755
--- a/bin/livy-server
+++ b/bin/livy-server
@@ -90,7 +90,7 @@ start_livy_server() {
 fi
   fi
 
-  LIVY_CLASSPATH="$LIBDIR/*:$LIVY_CONF_DIR"
+  LIVY_CLASSPATH="${LIVY_CLASSPATH:-${LIBDIR}/*:${LIVY_CONF_DIR}}"
 
   if [ -n "$SPARK_CONF_DIR" ]; then
 LIVY_CLASSPATH="$LIVY_CLASSPATH:$SPARK_CONF_DIR"
diff --git a/conf/livy-env.sh.template b/conf/livy-env.sh.template
index 7cba5c3..14f22c3 100644
--- a/conf/livy-env.sh.template
+++ b/conf/livy-env.sh.template
@@ -30,3 +30,4 @@
 # names. (Default: name of the user starting Livy).
 # - LIVY_MAX_LOG_FILES Max number of log file to keep in the log directory. 
(Default: 5.)
 # - LIVY_NICENESS   Niceness of the Livy server process when running in the 
background. (Default: 0.)
+# - LIVY_CLASSPATH  Override if the additional classpath is required.



[incubator-livy] branch master updated: [MINOR] Add description of POST /batches

2020-03-25 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/master by this push:
 new d07d103  [MINOR] Add description of POST /batches
d07d103 is described below

commit d07d103f22941525d3cfa2f07f647e310ffb34a1
Author: Shingo Furuyama 
AuthorDate: Thu Mar 26 13:55:51 2020 +0800

[MINOR] Add description of POST /batches

## What changes were proposed in this pull request?

Just added a description of POST /batches in the api-doc.

## How was this patch tested?

Since the change is quite small, I didn't test the patch. If given 
instructions, I will follow them.

Author: Shingo Furuyama 

Closes #283 from marblejenka/add-description.
---
 docs/rest-api.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/docs/rest-api.md b/docs/rest-api.md
index f1ff9b4..cca937f 100644
--- a/docs/rest-api.md
+++ b/docs/rest-api.md
@@ -389,6 +389,8 @@ Returns all the active batch sessions.
 
 ### POST /batches
 
+Creates a new batch session.   
+
  Request Body
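
To complement the one-line description added above, a hedged example of creating a batch session over REST; the server address, application path, and class name are placeholders, and the field names follow the surrounding rest-api doc rather than this diff.

    import requests  # third-party HTTP client, assumed to be available

    LIVY = "http://localhost:8998"  # assumed Livy server address

    batch = {
        "file": "/path/to/app.jar",           # application file to execute (placeholder)
        "className": "com.example.MainApp",   # hypothetical main class
        "args": ["--input", "/data/in"],      # hypothetical application arguments
    }
    resp = requests.post(f"{LIVY}/batches", json=batch)
    resp.raise_for_status()
    print(resp.json())  # the new batch session, including its id and state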
 
 



[incubator-livy] branch master updated (3a26856 -> 06a8d4f)

2020-03-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git.


from 3a26856  [LIVY-745] Ensure that a single RSCClientFactory gets loaded.
 add 06a8d4f  [LIVY-748] Add support for running Livy Integration tests 
against secure external clusters

No new revisions were added by this update.

Summary of changes:
 .../apache/livy/client/http/LivyConnection.java|   5 +
 integration-test/pom.xml   |   4 +-
 .../test/framework/BaseIntegrationTestSuite.scala  |  57 ++-
 .../org/apache/livy/test/framework/Cluster.scala   |  44 +++-
 .../livy/test/framework/ExternalCluster.scala  | 103 +++
 .../livy/test/framework/LivyRestClient.scala   | 113 +
 .../apache/livy/test/framework/MiniCluster.scala   |  60 +++
 .../resources/{rtest.R => cluster.spec.template}   |  36 ---
 .../src/test/resources/test_python_api.py  |  34 +--
 .../test/scala/org/apache/livy/test/BatchIT.scala  |   2 +-
 .../scala/org/apache/livy/test/InteractiveIT.scala |   8 +-
 .../test/scala/org/apache/livy/test/JobApiIT.scala |  21 +++-
 .../src/test/spark2/scala/Spark2JobApiIT.scala |  17 +++-
 pom.xml|   6 +-
 14 files changed, 401 insertions(+), 109 deletions(-)
 create mode 100644 
integration-test/src/main/scala/org/apache/livy/test/framework/ExternalCluster.scala
 copy integration-test/src/test/resources/{rtest.R => cluster.spec.template} 
(52%)



[incubator-livy] 01/01: [BUILD] Update version for 0.7.1-incubating-SNAPSHOT

2020-01-07 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch branch-0.7
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git

commit c563bd403896743006f5d350b0ee3b7a0698b8f3
Author: jerryshao 
AuthorDate: Tue Jan 7 20:40:02 2020 +0800

[BUILD] Update version for 0.7.1-incubating-SNAPSHOT
---
 api/pom.xml  | 4 ++--
 assembly/pom.xml | 4 ++--
 client-common/pom.xml| 4 ++--
 client-http/pom.xml  | 4 ++--
 core/pom.xml | 4 ++--
 core/scala-2.11/pom.xml  | 4 ++--
 coverage/pom.xml | 4 ++--
 docs/_data/project.yml   | 2 +-
 examples/pom.xml | 4 ++--
 integration-test/pom.xml | 4 ++--
 pom.xml  | 2 +-
 python-api/pom.xml   | 4 ++--
 python-api/setup.py  | 2 +-
 repl/pom.xml | 4 ++--
 repl/scala-2.11/pom.xml  | 4 ++--
 rsc/pom.xml  | 2 +-
 scala-api/pom.xml| 4 ++--
 scala-api/scala-2.11/pom.xml | 4 ++--
 scala/pom.xml| 4 ++--
 server/pom.xml   | 4 ++--
 test-lib/pom.xml | 4 ++--
 thriftserver/client/pom.xml  | 2 +-
 thriftserver/server/pom.xml  | 2 +-
 thriftserver/session/pom.xml | 2 +-
 24 files changed, 41 insertions(+), 41 deletions(-)

diff --git a/api/pom.xml b/api/pom.xml
index dc3a5af..66f175c 100644
--- a/api/pom.xml
+++ b/api/pom.xml
@@ -20,12 +20,12 @@
   
 org.apache.livy
 livy-main
-0.7.0-incubating
+0.7.1-incubating-SNAPSHOT
   
 
   org.apache.livy
   livy-api
-  0.7.0-incubating
+  0.7.1-incubating-SNAPSHOT
   jar
 
   
diff --git a/assembly/pom.xml b/assembly/pom.xml
index 41cca2b..b94f0da 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -20,12 +20,12 @@
   
 org.apache.livy
 livy-main
-0.7.0-incubating
+0.7.1-incubating-SNAPSHOT
 ../pom.xml
   
 
   livy-assembly
-  0.7.0-incubating
+  0.7.1-incubating-SNAPSHOT
   pom
 
   
diff --git a/client-common/pom.xml b/client-common/pom.xml
index 8bc52b9..dac522c 100644
--- a/client-common/pom.xml
+++ b/client-common/pom.xml
@@ -20,12 +20,12 @@
   
 org.apache.livy
 livy-main
-0.7.0-incubating
+0.7.1-incubating-SNAPSHOT
   
 
   org.apache.livy
   livy-client-common
-  0.7.0-incubating
+  0.7.1-incubating-SNAPSHOT
   jar
 
   
diff --git a/client-http/pom.xml b/client-http/pom.xml
index b3d5848..ad31b41 100644
--- a/client-http/pom.xml
+++ b/client-http/pom.xml
@@ -20,12 +20,12 @@
   
 org.apache.livy
 livy-main
-0.7.0-incubating
+0.7.1-incubating-SNAPSHOT
   
 
   org.apache.livy
   livy-client-http
-  0.7.0-incubating
+  0.7.1-incubating-SNAPSHOT
   jar
 
   
diff --git a/core/pom.xml b/core/pom.xml
index a367bc7..5623220 100644
--- a/core/pom.xml
+++ b/core/pom.xml
@@ -22,12 +22,12 @@
   
 org.apache.livy
 multi-scala-project-root
-0.7.0-incubating
+0.7.1-incubating-SNAPSHOT
 ../scala/pom.xml
   
 
   livy-core-parent
-  0.7.0-incubating
+  0.7.1-incubating-SNAPSHOT
   pom
 
   
diff --git a/core/scala-2.11/pom.xml b/core/scala-2.11/pom.xml
index 6e2062b..041f9c1 100644
--- a/core/scala-2.11/pom.xml
+++ b/core/scala-2.11/pom.xml
@@ -19,13 +19,13 @@
   4.0.0
   org.apache.livy
   livy-core_2.11
-  0.7.0-incubating
+  0.7.1-incubating-SNAPSHOT
   jar
 
   
 org.apache.livy
 livy-core-parent
-0.7.0-incubating
+0.7.1-incubating-SNAPSHOT
 ../pom.xml
   
 
diff --git a/coverage/pom.xml b/coverage/pom.xml
index e4c508a..6419bc4 100644
--- a/coverage/pom.xml
+++ b/coverage/pom.xml
@@ -23,11 +23,11 @@
 org.apache.livy
 livy-main
 ../pom.xml
-0.7.0-incubating
+0.7.1-incubating-SNAPSHOT
   
 
   livy-coverage-report
-  0.7.0-incubating
+  0.7.1-incubating-SNAPSHOT
   pom
 
   
diff --git a/docs/_data/project.yml b/docs/_data/project.yml
index 3b4bbd4..3228413 100644
--- a/docs/_data/project.yml
+++ b/docs/_data/project.yml
@@ -16,6 +16,6 @@
 # Apache Project configurations
 #
 name: Apache Livy
-version: 0.7.0-incubating
+version: 0.7.1-incubating-SNAPSHOT
 
 podling: true
diff --git a/examples/pom.xml b/examples/pom.xml
index 9782e08..1f4aa32 100644
--- a/examples/pom.xml
+++ b/examples/pom.xml
@@ -23,13 +23,13 @@
   
 org.apache.livy
 livy-main
-0.7.0-incubating
+0.7.1-incubating-SNAPSHOT
 ../pom.xml
   
 
   org.apache.livy
   livy-examples
-  0.7.0-incubating
+  0.7.1-incubating-SNAPSHOT
   jar
 
   
diff --git a/integration-test/pom.xml b/integration-test/pom.xml
index f652c64..9fa230b 100644
--- a/integration-test/pom.xml
+++ b/integration-test/pom.xml
@@ -23,11 +23,11 @@
 org.apache.livy
 livy-main
 ../pom.xml
-0.7.0-incubating
+0.7.1-incubating-SNAPSHOT
   
 
   livy-integration-test
-  0.7.0-incubating
+  0.7.1-incubating-SNAPSHOT
   jar
 
   
diff --git a/pom.xml b/pom.xml
index 6b4d5a4..938bdbf 100644
--- a/pom.xml
+++ b/pom.xml
@@ -22,7 +22,7 @@
 
   org.apache.livy
   livy-main
-  0.7.0-incubating

[incubator-livy] branch branch-0.7 updated: [MINOR] Update NOTICE file to reflect the correct date

2020-01-05 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch branch-0.7
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/branch-0.7 by this push:
 new a4baa62  [MINOR] Update NOTICE file to reflect the correct date
a4baa62 is described below

commit a4baa62c7e02ec34919a432d72780c20e3e88c58
Author: jerryshao 
AuthorDate: Mon Jan 6 15:52:59 2020 +0800

[MINOR] Update NOTICE file to reflect the correct date

Author: jerryshao 

Closes #271 from jerryshao/notice-fix.

(cherry picked from commit 277b0c5af019bf944a17a7f870d4998d948d7917)
Signed-off-by: jerryshao 
---
 NOTICE | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/NOTICE b/NOTICE
index 7eeebe8..9d023c2 100644
--- a/NOTICE
+++ b/NOTICE
@@ -1,5 +1,5 @@
 Apache Livy
-Copyright 2018 The Apache Software Foundation
+Copyright 2018 and onwards The Apache Software Foundation
 
 This product includes software developed at
 The Apache Software Foundation (http://www.apache.org/).



[incubator-livy] branch master updated: [LIVY-727] Fix session state always be idle though the yarn application has been killed after restart livy

2019-12-18 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/master by this push:
 new 25542e4  [LIVY-727] Fix session state always be idle though the yarn 
application has been killed after restart livy
25542e4 is described below

commit 25542e4e78b39a3c9b9426a70a92ca7c183daea3
Author: runzhiwang 
AuthorDate: Thu Dec 19 14:29:01 2019 +0800

[LIVY-727] Fix session state always be idle though the yarn application has 
been killed after restart livy

## What changes were proposed in this pull request?

[LIVY-727] Fix session state always be idle though the yarn application has 
been killed after restart livy.

The following steps reproduce the problem:
1. Set livy.server.recovery.mode=recovery, and create a session in 
yarn-cluster mode.
2. Restart Livy.
3. Kill the YARN application of the session.
4. The session state stays idle and never changes to killed or 
dead. As shown in the image, livy-session-16 has been killed in YARN, but the state 
is still idle.

![image](https://user-images.githubusercontent.com/51938049/70371088-92695c80-1909-11ea-875c-73696db693ce.png)

The causes of the problem are as follows:
1. When recovering a session, Livy does not start the driver again, so 
driverProcess is None.
2. SparkYarnApp is not created in `driverProcess.map { _ => 
SparkApp.create(appTag, appId, driverProcess, livyConf, Some(this)) }` when 
driverProcess is None.
3. So the session's yarnAppMonitorThread never starts, and the session 
state never changes.

How the bug is fixed:
1. If Livy runs on YARN, the SparkApp is created even though driverProcess 
is None.
2. If not running on YARN, the SparkApp is not created, because the code requires 
driverProcess to be non-None at 
https://github.com/apache/incubator-livy/blob/master/server/src/main/scala/org/apache/livy/utils/SparkApp.scala#L93,
  and I don't want to change that behavior.

## How was this patch tested?

1. Set livy.server.recovery.mode=recovery, and create a session in 
yarn-cluster mode.
2. Restart Livy.
3. Kill the YARN application of the session.
4. The session state changes to killed.

Author: runzhiwang 

Closes #265 from runzhiwang/session-state.
---
 .../org/apache/livy/server/interactive/InteractiveSession.scala| 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git 
a/server/src/main/scala/org/apache/livy/server/interactive/InteractiveSession.scala
 
b/server/src/main/scala/org/apache/livy/server/interactive/InteractiveSession.scala
index 4b318b8..790bd5a 100644
--- 
a/server/src/main/scala/org/apache/livy/server/interactive/InteractiveSession.scala
+++ 
b/server/src/main/scala/org/apache/livy/server/interactive/InteractiveSession.scala
@@ -399,7 +399,12 @@ class InteractiveSession(
 app = mockApp.orElse {
   val driverProcess = client.flatMap { c => Option(c.getDriverProcess) }
 .map(new LineBufferedProcess(_, 
livyConf.getInt(LivyConf.SPARK_LOGS_SIZE)))
-  driverProcess.map { _ => SparkApp.create(appTag, appId, driverProcess, 
livyConf, Some(this)) }
+
+  if (livyConf.isRunningOnYarn() || driverProcess.isDefined) {
+Some(SparkApp.create(appTag, appId, driverProcess, livyConf, 
Some(this)))
+  } else {
+None
+  }
 }
 
 if (client.isEmpty) {
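
To observe the behaviour described in the reproduction steps above, one way is to poll the session state over the REST API while killing the YARN application. A small sketch under assumed defaults (host, port, and session id are placeholders):

    import time
    import requests  # third-party HTTP client, assumed to be available

    LIVY = "http://localhost:8998"  # assumed Livy server address
    session_id = 16                 # placeholder: the recovered session to watch

    # Before this fix, a recovered session whose YARN application was killed kept
    # reporting "idle"; with the fix it should move to "killed" or "dead".
    for _ in range(10):
        state = requests.get(f"{LIVY}/sessions/{session_id}/state").json()["state"]
        print(state)
        time.sleep(5)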



svn commit: r37274 - /dev/incubator/livy/0.7.0-incubating-rc3/

2019-12-17 Thread jshao
Author: jshao
Date: Wed Dec 18 02:36:42 2019
New Revision: 37274

Log:
Apache Livy 0.7.0-incubating-rc3

Added:
dev/incubator/livy/0.7.0-incubating-rc3/

dev/incubator/livy/0.7.0-incubating-rc3/apache-livy-0.7.0-incubating-bin.zip   
(with props)

dev/incubator/livy/0.7.0-incubating-rc3/apache-livy-0.7.0-incubating-bin.zip.asc
   (with props)

dev/incubator/livy/0.7.0-incubating-rc3/apache-livy-0.7.0-incubating-bin.zip.sha512

dev/incubator/livy/0.7.0-incubating-rc3/apache-livy-0.7.0-incubating-src.zip   
(with props)

dev/incubator/livy/0.7.0-incubating-rc3/apache-livy-0.7.0-incubating-src.zip.asc
   (with props)

dev/incubator/livy/0.7.0-incubating-rc3/apache-livy-0.7.0-incubating-src.zip.sha512

Added: 
dev/incubator/livy/0.7.0-incubating-rc3/apache-livy-0.7.0-incubating-bin.zip
==
Binary file - no diff available.

Propchange: 
dev/incubator/livy/0.7.0-incubating-rc3/apache-livy-0.7.0-incubating-bin.zip
--
svn:mime-type = application/zip

Added: 
dev/incubator/livy/0.7.0-incubating-rc3/apache-livy-0.7.0-incubating-bin.zip.asc
==
Binary file - no diff available.

Propchange: 
dev/incubator/livy/0.7.0-incubating-rc3/apache-livy-0.7.0-incubating-bin.zip.asc
--
svn:mime-type = application/pgp-signature

Added: 
dev/incubator/livy/0.7.0-incubating-rc3/apache-livy-0.7.0-incubating-bin.zip.sha512
==
--- 
dev/incubator/livy/0.7.0-incubating-rc3/apache-livy-0.7.0-incubating-bin.zip.sha512
 (added)
+++ 
dev/incubator/livy/0.7.0-incubating-rc3/apache-livy-0.7.0-incubating-bin.zip.sha512
 Wed Dec 18 02:36:42 2019
@@ -0,0 +1,4 @@
+apache-livy-0.7.0-incubating-bin.zip: 570FBBEA D736C61A AAA4268D 2EF9C441
+  176E76E9 B86A69CF BF5DC3E0 40AEC460
+  C913ACBC C23A6B59 27E7717C 3F7E8CB6
+  F52E6CF2 73811F74 01163956 8D099036

Added: 
dev/incubator/livy/0.7.0-incubating-rc3/apache-livy-0.7.0-incubating-src.zip
==
Binary file - no diff available.

Propchange: 
dev/incubator/livy/0.7.0-incubating-rc3/apache-livy-0.7.0-incubating-src.zip
--
svn:mime-type = application/zip

Added: 
dev/incubator/livy/0.7.0-incubating-rc3/apache-livy-0.7.0-incubating-src.zip.asc
==
Binary file - no diff available.

Propchange: 
dev/incubator/livy/0.7.0-incubating-rc3/apache-livy-0.7.0-incubating-src.zip.asc
--
svn:mime-type = application/pgp-signature

Added: 
dev/incubator/livy/0.7.0-incubating-rc3/apache-livy-0.7.0-incubating-src.zip.sha512
==
--- 
dev/incubator/livy/0.7.0-incubating-rc3/apache-livy-0.7.0-incubating-src.zip.sha512
 (added)
+++ 
dev/incubator/livy/0.7.0-incubating-rc3/apache-livy-0.7.0-incubating-src.zip.sha512
 Wed Dec 18 02:36:42 2019
@@ -0,0 +1,4 @@
+apache-livy-0.7.0-incubating-src.zip: AD6CEE1C 97E39A7A DFC51215 E9E3DE86
+  41ED8EE0 DB2F1D7B BA909D08 34962DF1
+  AE0D0D70 443F0A7D 186B41D7 C1AB66FC
+  E07280B5 67B27086 0E28576C 5EB81356




[incubator-livy] 01/01: [BUILD] Update version for 0.7.1-incubating-SNAPSHOT

2019-12-17 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch branch-0.7
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git

commit c3e5188799e361abd5b8342ad86259725a0a7706
Author: jerryshao 
AuthorDate: Wed Dec 18 10:06:47 2019 +0800

[BUILD] Update version for 0.7.1-incubating-SNAPSHOT
---
 api/pom.xml  | 4 ++--
 assembly/pom.xml | 4 ++--
 client-common/pom.xml| 4 ++--
 client-http/pom.xml  | 4 ++--
 core/pom.xml | 4 ++--
 core/scala-2.11/pom.xml  | 4 ++--
 coverage/pom.xml | 4 ++--
 docs/_data/project.yml   | 2 +-
 examples/pom.xml | 4 ++--
 integration-test/pom.xml | 4 ++--
 pom.xml  | 2 +-
 python-api/pom.xml   | 4 ++--
 python-api/setup.py  | 2 +-
 repl/pom.xml | 4 ++--
 repl/scala-2.11/pom.xml  | 4 ++--
 rsc/pom.xml  | 2 +-
 scala-api/pom.xml| 4 ++--
 scala-api/scala-2.11/pom.xml | 4 ++--
 scala/pom.xml| 4 ++--
 server/pom.xml   | 4 ++--
 test-lib/pom.xml | 4 ++--
 thriftserver/client/pom.xml  | 2 +-
 thriftserver/server/pom.xml  | 2 +-
 thriftserver/session/pom.xml | 2 +-
 24 files changed, 41 insertions(+), 41 deletions(-)

diff --git a/api/pom.xml b/api/pom.xml
index dc3a5af..66f175c 100644
--- a/api/pom.xml
+++ b/api/pom.xml
@@ -20,12 +20,12 @@
   
 org.apache.livy
 livy-main
-0.7.0-incubating
+0.7.1-incubating-SNAPSHOT
   
 
   org.apache.livy
   livy-api
-  0.7.0-incubating
+  0.7.1-incubating-SNAPSHOT
   jar
 
   
diff --git a/assembly/pom.xml b/assembly/pom.xml
index 41cca2b..b94f0da 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -20,12 +20,12 @@
   
 org.apache.livy
 livy-main
-0.7.0-incubating
+0.7.1-incubating-SNAPSHOT
 ../pom.xml
   
 
   livy-assembly
-  0.7.0-incubating
+  0.7.1-incubating-SNAPSHOT
   pom
 
   
diff --git a/client-common/pom.xml b/client-common/pom.xml
index 8bc52b9..dac522c 100644
--- a/client-common/pom.xml
+++ b/client-common/pom.xml
@@ -20,12 +20,12 @@
   
 org.apache.livy
 livy-main
-0.7.0-incubating
+0.7.1-incubating-SNAPSHOT
   
 
   org.apache.livy
   livy-client-common
-  0.7.0-incubating
+  0.7.1-incubating-SNAPSHOT
   jar
 
   
diff --git a/client-http/pom.xml b/client-http/pom.xml
index b3d5848..ad31b41 100644
--- a/client-http/pom.xml
+++ b/client-http/pom.xml
@@ -20,12 +20,12 @@
   
 org.apache.livy
 livy-main
-0.7.0-incubating
+0.7.1-incubating-SNAPSHOT
   
 
   org.apache.livy
   livy-client-http
-  0.7.0-incubating
+  0.7.1-incubating-SNAPSHOT
   jar
 
   
diff --git a/core/pom.xml b/core/pom.xml
index a367bc7..5623220 100644
--- a/core/pom.xml
+++ b/core/pom.xml
@@ -22,12 +22,12 @@
   
 org.apache.livy
 multi-scala-project-root
-0.7.0-incubating
+0.7.1-incubating-SNAPSHOT
 ../scala/pom.xml
   
 
   livy-core-parent
-  0.7.0-incubating
+  0.7.1-incubating-SNAPSHOT
   pom
 
   
diff --git a/core/scala-2.11/pom.xml b/core/scala-2.11/pom.xml
index 6e2062b..041f9c1 100644
--- a/core/scala-2.11/pom.xml
+++ b/core/scala-2.11/pom.xml
@@ -19,13 +19,13 @@
   4.0.0
   org.apache.livy
   livy-core_2.11
-  0.7.0-incubating
+  0.7.1-incubating-SNAPSHOT
   jar
 
   
 org.apache.livy
 livy-core-parent
-0.7.0-incubating
+0.7.1-incubating-SNAPSHOT
 ../pom.xml
   
 
diff --git a/coverage/pom.xml b/coverage/pom.xml
index e4c508a..6419bc4 100644
--- a/coverage/pom.xml
+++ b/coverage/pom.xml
@@ -23,11 +23,11 @@
 org.apache.livy
 livy-main
 ../pom.xml
-0.7.0-incubating
+0.7.1-incubating-SNAPSHOT
   
 
   livy-coverage-report
-  0.7.0-incubating
+  0.7.1-incubating-SNAPSHOT
   pom
 
   
diff --git a/docs/_data/project.yml b/docs/_data/project.yml
index 3b4bbd4..3228413 100644
--- a/docs/_data/project.yml
+++ b/docs/_data/project.yml
@@ -16,6 +16,6 @@
 # Apache Project configurations
 #
 name: Apache Livy
-version: 0.7.0-incubating
+version: 0.7.1-incubating-SNAPSHOT
 
 podling: true
diff --git a/examples/pom.xml b/examples/pom.xml
index 9782e08..1f4aa32 100644
--- a/examples/pom.xml
+++ b/examples/pom.xml
@@ -23,13 +23,13 @@
   
 org.apache.livy
 livy-main
-0.7.0-incubating
+0.7.1-incubating-SNAPSHOT
 ../pom.xml
   
 
   org.apache.livy
   livy-examples
-  0.7.0-incubating
+  0.7.1-incubating-SNAPSHOT
   jar
 
   
diff --git a/integration-test/pom.xml b/integration-test/pom.xml
index f652c64..9fa230b 100644
--- a/integration-test/pom.xml
+++ b/integration-test/pom.xml
@@ -23,11 +23,11 @@
 org.apache.livy
 livy-main
 ../pom.xml
-0.7.0-incubating
+0.7.1-incubating-SNAPSHOT
   
 
   livy-integration-test
-  0.7.0-incubating
+  0.7.1-incubating-SNAPSHOT
   jar
 
   
diff --git a/pom.xml b/pom.xml
index 1d9d0f1..1322aeb 100644
--- a/pom.xml
+++ b/pom.xml
@@ -22,7 +22,7 @@
 
   org.apache.livy
   livy-main
-  0.7.0-incubating

[incubator-livy] branch branch-0.7 updated (3ae0e65 -> 9c638c4)

2019-12-15 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a change to branch branch-0.7
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git.


from 3ae0e65  [Minor] Revert unnecessary local changes
 add eebb6ec  [BUILD] Update version for 0.7.0-incubating
 new 9c638c4  [BUILD] Update version for 0.7.1-incubating-SNAPSHOT

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:



[incubator-livy] 01/01: [BUILD] Update version for 0.7.0-incubating

2019-12-15 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to tag v0.7.0-incubating-rc2
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git

commit eebb6ecfe5ec021244c69096fa240e84f933b2ca
Author: jerryshao 
AuthorDate: Mon Dec 16 14:43:12 2019 +0800

[BUILD] Update version for 0.7.0-incubating
---
 api/pom.xml  | 4 ++--
 assembly/pom.xml | 4 ++--
 client-common/pom.xml| 4 ++--
 client-http/pom.xml  | 4 ++--
 core/pom.xml | 4 ++--
 core/scala-2.11/pom.xml  | 4 ++--
 coverage/pom.xml | 4 ++--
 docs/_data/project.yml   | 2 +-
 examples/pom.xml | 4 ++--
 integration-test/pom.xml | 4 ++--
 pom.xml  | 2 +-
 python-api/pom.xml   | 4 ++--
 python-api/setup.py  | 2 +-
 repl/pom.xml | 4 ++--
 repl/scala-2.11/pom.xml  | 4 ++--
 rsc/pom.xml  | 2 +-
 scala-api/pom.xml| 4 ++--
 scala-api/scala-2.11/pom.xml | 4 ++--
 scala/pom.xml| 4 ++--
 server/pom.xml   | 4 ++--
 test-lib/pom.xml | 4 ++--
 thriftserver/client/pom.xml  | 2 +-
 thriftserver/server/pom.xml  | 2 +-
 thriftserver/session/pom.xml | 2 +-
 24 files changed, 41 insertions(+), 41 deletions(-)

diff --git a/api/pom.xml b/api/pom.xml
index 66f175c..dc3a5af 100644
--- a/api/pom.xml
+++ b/api/pom.xml
@@ -20,12 +20,12 @@
   
 org.apache.livy
 livy-main
-0.7.1-incubating-SNAPSHOT
+0.7.0-incubating
   
 
   org.apache.livy
   livy-api
-  0.7.1-incubating-SNAPSHOT
+  0.7.0-incubating
   jar
 
   
diff --git a/assembly/pom.xml b/assembly/pom.xml
index b94f0da..41cca2b 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -20,12 +20,12 @@
   
 org.apache.livy
 livy-main
-0.7.1-incubating-SNAPSHOT
+0.7.0-incubating
 ../pom.xml
   
 
   livy-assembly
-  0.7.1-incubating-SNAPSHOT
+  0.7.0-incubating
   pom
 
   
diff --git a/client-common/pom.xml b/client-common/pom.xml
index dac522c..8bc52b9 100644
--- a/client-common/pom.xml
+++ b/client-common/pom.xml
@@ -20,12 +20,12 @@
   
 org.apache.livy
 livy-main
-0.7.1-incubating-SNAPSHOT
+0.7.0-incubating
   
 
   org.apache.livy
   livy-client-common
-  0.7.1-incubating-SNAPSHOT
+  0.7.0-incubating
   jar
 
   
diff --git a/client-http/pom.xml b/client-http/pom.xml
index ad31b41..b3d5848 100644
--- a/client-http/pom.xml
+++ b/client-http/pom.xml
@@ -20,12 +20,12 @@
   
 org.apache.livy
 livy-main
-0.7.1-incubating-SNAPSHOT
+0.7.0-incubating
   
 
   org.apache.livy
   livy-client-http
-  0.7.1-incubating-SNAPSHOT
+  0.7.0-incubating
   jar
 
   
diff --git a/core/pom.xml b/core/pom.xml
index 5623220..a367bc7 100644
--- a/core/pom.xml
+++ b/core/pom.xml
@@ -22,12 +22,12 @@
   
 org.apache.livy
 multi-scala-project-root
-0.7.1-incubating-SNAPSHOT
+0.7.0-incubating
 ../scala/pom.xml
   
 
   livy-core-parent
-  0.7.1-incubating-SNAPSHOT
+  0.7.0-incubating
   pom
 
   
diff --git a/core/scala-2.11/pom.xml b/core/scala-2.11/pom.xml
index 041f9c1..6e2062b 100644
--- a/core/scala-2.11/pom.xml
+++ b/core/scala-2.11/pom.xml
@@ -19,13 +19,13 @@
   4.0.0
   org.apache.livy
   livy-core_2.11
-  0.7.1-incubating-SNAPSHOT
+  0.7.0-incubating
   jar
 
   
 org.apache.livy
 livy-core-parent
-0.7.1-incubating-SNAPSHOT
+0.7.0-incubating
 ../pom.xml
   
 
diff --git a/coverage/pom.xml b/coverage/pom.xml
index 6419bc4..e4c508a 100644
--- a/coverage/pom.xml
+++ b/coverage/pom.xml
@@ -23,11 +23,11 @@
 org.apache.livy
 livy-main
 ../pom.xml
-0.7.1-incubating-SNAPSHOT
+0.7.0-incubating
   
 
   livy-coverage-report
-  0.7.1-incubating-SNAPSHOT
+  0.7.0-incubating
   pom
 
   
diff --git a/docs/_data/project.yml b/docs/_data/project.yml
index 3228413..3b4bbd4 100644
--- a/docs/_data/project.yml
+++ b/docs/_data/project.yml
@@ -16,6 +16,6 @@
 # Apache Project configurations
 #
 name: Apache Livy
-version: 0.7.1-incubating-SNAPSHOT
+version: 0.7.0-incubating
 
 podling: true
diff --git a/examples/pom.xml b/examples/pom.xml
index 1f4aa32..9782e08 100644
--- a/examples/pom.xml
+++ b/examples/pom.xml
@@ -23,13 +23,13 @@
   
 org.apache.livy
 livy-main
-0.7.1-incubating-SNAPSHOT
+0.7.0-incubating
 ../pom.xml
   
 
   org.apache.livy
   livy-examples
-  0.7.1-incubating-SNAPSHOT
+  0.7.0-incubating
   jar
 
   
diff --git a/integration-test/pom.xml b/integration-test/pom.xml
index 9fa230b..f652c64 100644
--- a/integration-test/pom.xml
+++ b/integration-test/pom.xml
@@ -23,11 +23,11 @@
 org.apache.livy
 livy-main
 ../pom.xml
-0.7.1-incubating-SNAPSHOT
+0.7.0-incubating
   
 
   livy-integration-test
-  0.7.1-incubating-SNAPSHOT
+  0.7.0-incubating
   jar
 
   
diff --git a/pom.xml b/pom.xml
index 1322aeb..1d9d0f1 100644
--- a/pom.xml
+++ b/pom.xml
@@ -22,7 +22,7 @@
 
   org.apache.livy
   livy-main
-  0.7.1-incubating

[incubator-livy] tag v0.7.0-incubating-rc2 created (now eebb6ec)

2019-12-15 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a change to tag v0.7.0-incubating-rc2
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git.


  at eebb6ec  (commit)
This tag includes the following new commits:

 new eebb6ec  [BUILD] Update version for 0.7.0-incubating

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.




[incubator-livy] branch branch-0.7 updated: [Minor] Revert unnecessary local changes

2019-12-15 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch branch-0.7
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/branch-0.7 by this push:
 new 3ae0e65  [Minor] Revert unnecessary local changes
3ae0e65 is described below

commit 3ae0e6575b0c5858e081e72f69ff23e042424fcf
Author: jerryshao 
AuthorDate: Mon Dec 16 14:38:39 2019 +0800

[Minor] Revert unnecessary local changes
---
 dev/release-build.sh | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/dev/release-build.sh b/dev/release-build.sh
index a1a30a5..5c3bbc2 100755
--- a/dev/release-build.sh
+++ b/dev/release-build.sh
@@ -122,8 +122,8 @@ if [[ "$1" == "package" ]]; then
   echo "Packaging release tarballs"
   cp -r incubator-livy $ARCHIVE_NAME_PREFIX
   zip -r $SRC_ARCHIVE $ARCHIVE_NAME_PREFIX
-  echo $GPG_PASSPHRASE | $GPG --passphrase-fd 0 --armour --output 
$SRC_ARCHIVE.asc --detach-sig $SRC_ARCHIVE
-  echo $GPG_PASSPHRASE | $GPG --passphrase-fd 0 --print-md SHA512 $SRC_ARCHIVE 
> $SRC_ARCHIVE.sha512
+  echo "" | $GPG --passphrase-fd 0 --armour --output $SRC_ARCHIVE.asc 
--detach-sig $SRC_ARCHIVE
+  echo "" | $GPG --passphrase-fd 0 --print-md SHA512 $SRC_ARCHIVE > 
$SRC_ARCHIVE.sha512
   rm -rf $ARCHIVE_NAME_PREFIX
 
   # Updated for binary build
@@ -135,8 +135,8 @@ if [[ "$1" == "package" ]]; then
 
 echo "Copying and signing regular binary distribution"
 cp assembly/target/$BIN_ARCHIVE .
-echo $GPG_PASSPHRASE | $GPG --passphrase-fd 0 --armour --output 
$BIN_ARCHIVE.asc --detach-sig $BIN_ARCHIVE
-echo $GPG_PASSPHRASE | $GPG --passphrase-fd 0 --print-md SHA512 
$BIN_ARCHIVE > $BIN_ARCHIVE.sha512
+echo "" | $GPG --passphrase-fd 0 --armour --output $BIN_ARCHIVE.asc 
--detach-sig $BIN_ARCHIVE
+echo "" | $GPG --passphrase-fd 0 --print-md SHA512 $BIN_ARCHIVE > 
$BIN_ARCHIVE.sha512
 
 cp $BIN_ARCHIVE* ../
 cd ..
@@ -192,7 +192,7 @@ if [[ "$1" == "publish-release" ]]; then
   echo "Creating hash and signature files"
   for file in $(find . -type f)
   do
-echo $GPG_PASSPHRASE | $GPG --passphrase-fd 0 --output $file.asc \
+echo "" | $GPG --passphrase-fd 0 --output $file.asc \
   --detach-sig --armour $file;
 if [ $(command -v md5) ]; then
   # Available on OS X; -q to keep only hash



[incubator-livy] branch master updated: [MINOR][BUILD] Update docs version for 0.8.0-incubating-SNAPSHOT

2019-12-15 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/master by this push:
 new 393c162  [MINOR][BUILD] Update docs version for 
0.8.0-incubating-SNAPSHOT
393c162 is described below

commit 393c16201a0594defa0e04e1c1b526096212921a
Author: jerryshao 
AuthorDate: Mon Dec 16 14:26:26 2019 +0800

[MINOR][BUILD] Update docs version for 0.8.0-incubating-SNAPSHOT
---
 docs/_data/project.yml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/_data/project.yml b/docs/_data/project.yml
index 7af6fdd..8568f32 100644
--- a/docs/_data/project.yml
+++ b/docs/_data/project.yml
@@ -16,6 +16,6 @@
 # Apache Project configurations
 #
 name: Apache Livy
-version: 0.7.0-incubating-SNAPSHOT
+version: 0.8.0-incubating-SNAPSHOT
 
-podling: true
\ No newline at end of file
+podling: true



[incubator-livy] 01/01: [BUILD] Update version for 0.7.1-incubating-SNAPSHOT

2019-12-15 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch branch-0.7
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git

commit 909b99035ee3bff5f6b807039518470c2c8bef82
Author: jerryshao 
AuthorDate: Mon Dec 16 14:23:23 2019 +0800

[BUILD] Update version for 0.7.1-incubating-SNAPSHOT
---
 api/pom.xml  | 4 ++--
 assembly/pom.xml | 4 ++--
 client-common/pom.xml| 4 ++--
 client-http/pom.xml  | 4 ++--
 core/pom.xml | 4 ++--
 core/scala-2.11/pom.xml  | 4 ++--
 coverage/pom.xml | 4 ++--
 docs/_data/project.yml   | 2 +-
 examples/pom.xml | 4 ++--
 integration-test/pom.xml | 4 ++--
 pom.xml  | 2 +-
 python-api/pom.xml   | 4 ++--
 python-api/setup.py  | 2 +-
 repl/pom.xml | 4 ++--
 repl/scala-2.11/pom.xml  | 4 ++--
 rsc/pom.xml  | 2 +-
 scala-api/pom.xml| 4 ++--
 scala-api/scala-2.11/pom.xml | 4 ++--
 scala/pom.xml| 4 ++--
 server/pom.xml   | 4 ++--
 test-lib/pom.xml | 4 ++--
 thriftserver/client/pom.xml  | 2 +-
 thriftserver/server/pom.xml  | 2 +-
 thriftserver/session/pom.xml | 2 +-
 24 files changed, 41 insertions(+), 41 deletions(-)

diff --git a/api/pom.xml b/api/pom.xml
index dc3a5af..66f175c 100644
--- a/api/pom.xml
+++ b/api/pom.xml
@@ -20,12 +20,12 @@
   
 org.apache.livy
 livy-main
-0.7.0-incubating
+0.7.1-incubating-SNAPSHOT
   
 
   org.apache.livy
   livy-api
-  0.7.0-incubating
+  0.7.1-incubating-SNAPSHOT
   jar
 
   
diff --git a/assembly/pom.xml b/assembly/pom.xml
index 41cca2b..b94f0da 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -20,12 +20,12 @@
   
 org.apache.livy
 livy-main
-0.7.0-incubating
+0.7.1-incubating-SNAPSHOT
 ../pom.xml
   
 
   livy-assembly
-  0.7.0-incubating
+  0.7.1-incubating-SNAPSHOT
   pom
 
   
diff --git a/client-common/pom.xml b/client-common/pom.xml
index 8bc52b9..dac522c 100644
--- a/client-common/pom.xml
+++ b/client-common/pom.xml
@@ -20,12 +20,12 @@
   
 org.apache.livy
 livy-main
-0.7.0-incubating
+0.7.1-incubating-SNAPSHOT
   
 
   org.apache.livy
   livy-client-common
-  0.7.0-incubating
+  0.7.1-incubating-SNAPSHOT
   jar
 
   
diff --git a/client-http/pom.xml b/client-http/pom.xml
index b3d5848..ad31b41 100644
--- a/client-http/pom.xml
+++ b/client-http/pom.xml
@@ -20,12 +20,12 @@
   
 org.apache.livy
 livy-main
-0.7.0-incubating
+0.7.1-incubating-SNAPSHOT
   
 
   org.apache.livy
   livy-client-http
-  0.7.0-incubating
+  0.7.1-incubating-SNAPSHOT
   jar
 
   
diff --git a/core/pom.xml b/core/pom.xml
index a367bc7..5623220 100644
--- a/core/pom.xml
+++ b/core/pom.xml
@@ -22,12 +22,12 @@
   
 org.apache.livy
 multi-scala-project-root
-0.7.0-incubating
+0.7.1-incubating-SNAPSHOT
 ../scala/pom.xml
   
 
   livy-core-parent
-  0.7.0-incubating
+  0.7.1-incubating-SNAPSHOT
   pom
 
   
diff --git a/core/scala-2.11/pom.xml b/core/scala-2.11/pom.xml
index 6e2062b..041f9c1 100644
--- a/core/scala-2.11/pom.xml
+++ b/core/scala-2.11/pom.xml
@@ -19,13 +19,13 @@
   4.0.0
   org.apache.livy
   livy-core_2.11
-  0.7.0-incubating
+  0.7.1-incubating-SNAPSHOT
   jar
 
   
 org.apache.livy
 livy-core-parent
-0.7.0-incubating
+0.7.1-incubating-SNAPSHOT
 ../pom.xml
   
 
diff --git a/coverage/pom.xml b/coverage/pom.xml
index e4c508a..6419bc4 100644
--- a/coverage/pom.xml
+++ b/coverage/pom.xml
@@ -23,11 +23,11 @@
 org.apache.livy
 livy-main
 ../pom.xml
-0.7.0-incubating
+0.7.1-incubating-SNAPSHOT
   
 
   livy-coverage-report
-  0.7.0-incubating
+  0.7.1-incubating-SNAPSHOT
   pom
 
   
diff --git a/docs/_data/project.yml b/docs/_data/project.yml
index 3b4bbd4..3228413 100644
--- a/docs/_data/project.yml
+++ b/docs/_data/project.yml
@@ -16,6 +16,6 @@
 # Apache Project configurations
 #
 name: Apache Livy
-version: 0.7.0-incubating
+version: 0.7.1-incubating-SNAPSHOT
 
 podling: true
diff --git a/examples/pom.xml b/examples/pom.xml
index 9782e08..1f4aa32 100644
--- a/examples/pom.xml
+++ b/examples/pom.xml
@@ -23,13 +23,13 @@
   
 org.apache.livy
 livy-main
-0.7.0-incubating
+0.7.1-incubating-SNAPSHOT
 ../pom.xml
   
 
   org.apache.livy
   livy-examples
-  0.7.0-incubating
+  0.7.1-incubating-SNAPSHOT
   jar
 
   
diff --git a/integration-test/pom.xml b/integration-test/pom.xml
index f652c64..9fa230b 100644
--- a/integration-test/pom.xml
+++ b/integration-test/pom.xml
@@ -23,11 +23,11 @@
 org.apache.livy
 livy-main
 ../pom.xml
-0.7.0-incubating
+0.7.1-incubating-SNAPSHOT
   
 
   livy-integration-test
-  0.7.0-incubating
+  0.7.1-incubating-SNAPSHOT
   jar
 
   
diff --git a/pom.xml b/pom.xml
index 1d9d0f1..1322aeb 100644
--- a/pom.xml
+++ b/pom.xml
@@ -22,7 +22,7 @@
 
   org.apache.livy
   livy-main
-  0.7.0-incubating

[incubator-livy] branch branch-0.7 updated (0a527e8 -> 909b990)

2019-12-15 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a change to branch branch-0.7
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git.


from 0a527e8  [LIVY-717] introduce maven property to set ZooKeeper version
 add 1918f6a  [BUILD] Update version for 0.7.0-incubating
 new 909b990  [BUILD] Update version for 0.7.1-incubating-SNAPSHOT

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 api/pom.xml  |  4 ++--
 assembly/pom.xml |  4 ++--
 client-common/pom.xml|  4 ++--
 client-http/pom.xml  |  4 ++--
 core/pom.xml |  4 ++--
 core/scala-2.11/pom.xml  |  4 ++--
 coverage/pom.xml |  4 ++--
 dev/release-build.sh | 10 +-
 docs/_data/project.yml   |  4 ++--
 examples/pom.xml |  4 ++--
 integration-test/pom.xml |  4 ++--
 pom.xml  |  2 +-
 python-api/pom.xml   |  4 ++--
 python-api/setup.py  |  2 +-
 repl/pom.xml |  4 ++--
 repl/scala-2.11/pom.xml  |  4 ++--
 rsc/pom.xml  |  2 +-
 scala-api/pom.xml|  4 ++--
 scala-api/scala-2.11/pom.xml |  4 ++--
 scala/pom.xml|  4 ++--
 server/pom.xml   |  4 ++--
 test-lib/pom.xml |  4 ++--
 thriftserver/client/pom.xml  |  2 +-
 thriftserver/server/pom.xml  |  2 +-
 thriftserver/session/pom.xml |  2 +-
 25 files changed, 47 insertions(+), 47 deletions(-)



[incubator-livy] 01/01: [BUILD] Update version for 0.7.0-incubating

2019-12-15 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to tag v0.7.0-incubating-rc1
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git

commit 1918f6a510805eb4422f2d23075c4825ce9d23a1
Author: jerryshao 
AuthorDate: Mon Dec 16 14:18:46 2019 +0800

[BUILD] Update version for 0.7.0-incubating
---
 api/pom.xml  |  4 ++--
 assembly/pom.xml |  4 ++--
 client-common/pom.xml|  4 ++--
 client-http/pom.xml  |  4 ++--
 core/pom.xml |  4 ++--
 core/scala-2.11/pom.xml  |  4 ++--
 coverage/pom.xml |  4 ++--
 dev/release-build.sh | 10 +-
 docs/_data/project.yml   |  4 ++--
 examples/pom.xml |  4 ++--
 integration-test/pom.xml |  4 ++--
 pom.xml  |  2 +-
 python-api/pom.xml   |  4 ++--
 python-api/setup.py  |  2 +-
 repl/pom.xml |  4 ++--
 repl/scala-2.11/pom.xml  |  4 ++--
 rsc/pom.xml  |  2 +-
 scala-api/pom.xml|  4 ++--
 scala-api/scala-2.11/pom.xml |  4 ++--
 scala/pom.xml|  4 ++--
 server/pom.xml   |  4 ++--
 test-lib/pom.xml |  4 ++--
 thriftserver/client/pom.xml  |  2 +-
 thriftserver/server/pom.xml  |  2 +-
 thriftserver/session/pom.xml |  2 +-
 25 files changed, 47 insertions(+), 47 deletions(-)

diff --git a/api/pom.xml b/api/pom.xml
index f66e82a..dc3a5af 100644
--- a/api/pom.xml
+++ b/api/pom.xml
@@ -20,12 +20,12 @@
   
 org.apache.livy
 livy-main
-0.7.0-incubating-SNAPSHOT
+0.7.0-incubating
   
 
   org.apache.livy
   livy-api
-  0.7.0-incubating-SNAPSHOT
+  0.7.0-incubating
   jar
 
   
diff --git a/assembly/pom.xml b/assembly/pom.xml
index c0d0b26..41cca2b 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -20,12 +20,12 @@
   
 org.apache.livy
 livy-main
-0.7.0-incubating-SNAPSHOT
+0.7.0-incubating
 ../pom.xml
   
 
   livy-assembly
-  0.7.0-incubating-SNAPSHOT
+  0.7.0-incubating
   pom
 
   
diff --git a/client-common/pom.xml b/client-common/pom.xml
index 0d9a27e..8bc52b9 100644
--- a/client-common/pom.xml
+++ b/client-common/pom.xml
@@ -20,12 +20,12 @@
   
 org.apache.livy
 livy-main
-0.7.0-incubating-SNAPSHOT
+0.7.0-incubating
   
 
   org.apache.livy
   livy-client-common
-  0.7.0-incubating-SNAPSHOT
+  0.7.0-incubating
   jar
 
   
diff --git a/client-http/pom.xml b/client-http/pom.xml
index 8401950..b3d5848 100644
--- a/client-http/pom.xml
+++ b/client-http/pom.xml
@@ -20,12 +20,12 @@
   
 org.apache.livy
 livy-main
-0.7.0-incubating-SNAPSHOT
+0.7.0-incubating
   
 
   org.apache.livy
   livy-client-http
-  0.7.0-incubating-SNAPSHOT
+  0.7.0-incubating
   jar
 
   
diff --git a/core/pom.xml b/core/pom.xml
index 6179a7a..a367bc7 100644
--- a/core/pom.xml
+++ b/core/pom.xml
@@ -22,12 +22,12 @@
   
 org.apache.livy
 multi-scala-project-root
-0.7.0-incubating-SNAPSHOT
+0.7.0-incubating
 ../scala/pom.xml
   
 
   livy-core-parent
-  0.7.0-incubating-SNAPSHOT
+  0.7.0-incubating
   pom
 
   
diff --git a/core/scala-2.11/pom.xml b/core/scala-2.11/pom.xml
index 417f2e2..6e2062b 100644
--- a/core/scala-2.11/pom.xml
+++ b/core/scala-2.11/pom.xml
@@ -19,13 +19,13 @@
   4.0.0
   org.apache.livy
   livy-core_2.11
-  0.7.0-incubating-SNAPSHOT
+  0.7.0-incubating
   jar
 
   
 org.apache.livy
 livy-core-parent
-0.7.0-incubating-SNAPSHOT
+0.7.0-incubating
 ../pom.xml
   
 
diff --git a/coverage/pom.xml b/coverage/pom.xml
index 3887965..e4c508a 100644
--- a/coverage/pom.xml
+++ b/coverage/pom.xml
@@ -23,11 +23,11 @@
 org.apache.livy
 livy-main
 ../pom.xml
-0.7.0-incubating-SNAPSHOT
+0.7.0-incubating
   
 
   livy-coverage-report
-  0.7.0-incubating-SNAPSHOT
+  0.7.0-incubating
   pom
 
   
diff --git a/dev/release-build.sh b/dev/release-build.sh
index 5c3bbc2..a1a30a5 100755
--- a/dev/release-build.sh
+++ b/dev/release-build.sh
@@ -122,8 +122,8 @@ if [[ "$1" == "package" ]]; then
   echo "Packaging release tarballs"
   cp -r incubator-livy $ARCHIVE_NAME_PREFIX
   zip -r $SRC_ARCHIVE $ARCHIVE_NAME_PREFIX
-  echo "" | $GPG --passphrase-fd 0 --armour --output $SRC_ARCHIVE.asc 
--detach-sig $SRC_ARCHIVE
-  echo "" | $GPG --passphrase-fd 0 --print-md SHA512 $SRC_ARCHIVE > 
$SRC_ARCHIVE.sha512
+  echo $GPG_PASSPHRASE | $GPG --passphrase-fd 0 --armour --output 
$SRC_ARCHIVE.asc --detach-sig $SRC_ARCHIVE
+  echo $GPG_PASSPHRASE | $GPG --passphrase-fd 0 --print-md SHA512 $SRC_ARCHIVE 
> $SRC_ARCHIVE.sha512
   rm -rf $ARCHIVE_NAME_PREFIX
 
   # Updated for binary build
@@ -135,8 +135,8 @@ if [[ "$1" == "package" ]]; then
 
 echo "Copying and signing regular binary distribution"
 cp assembly/target/$BIN_ARCHIVE .
-echo "" | $GPG --passphrase-fd 0 --armour --output $BIN_ARCHIVE.asc

[incubator-livy] tag v0.7.0-incubating-rc1 created (now 1918f6a)

2019-12-15 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a change to tag v0.7.0-incubating-rc1
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git.


  at 1918f6a  (commit)
This tag includes the following new commits:

 new 1918f6a  [BUILD] Update version for 0.7.0-incubating

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.




[incubator-livy] branch master updated (0a527e8 -> 319386a)

2019-12-04 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git.


from 0a527e8  [LIVY-717] introduce maven property to set ZooKeeper version
 add 319386a  [MINOR] Bump version to 0.8.0-incubating-SNAPSHOT for master 
branch

No new revisions were added by this update.

Summary of changes:
 api/pom.xml  | 4 ++--
 assembly/pom.xml | 4 ++--
 client-common/pom.xml| 4 ++--
 client-http/pom.xml  | 4 ++--
 core/pom.xml | 4 ++--
 core/scala-2.11/pom.xml  | 4 ++--
 coverage/pom.xml | 4 ++--
 examples/pom.xml | 4 ++--
 integration-test/pom.xml | 4 ++--
 pom.xml  | 2 +-
 python-api/pom.xml   | 4 ++--
 python-api/setup.py  | 2 +-
 repl/pom.xml | 4 ++--
 repl/scala-2.11/pom.xml  | 4 ++--
 rsc/pom.xml  | 2 +-
 scala-api/pom.xml| 4 ++--
 scala-api/scala-2.11/pom.xml | 4 ++--
 scala/pom.xml| 4 ++--
 server/pom.xml   | 4 ++--
 test-lib/pom.xml | 4 ++--
 thriftserver/client/pom.xml  | 2 +-
 thriftserver/server/pom.xml  | 2 +-
 thriftserver/session/pom.xml | 2 +-
 23 files changed, 40 insertions(+), 40 deletions(-)



[incubator-livy] branch branch-0.7 created (now 0a527e8)

2019-12-04 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a change to branch branch-0.7
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git.


  at 0a527e8  [LIVY-717] introduce maven property to set ZooKeeper version

No new revisions were added by this update.



[incubator-livy] branch master updated (cccba94 -> 0a527e8)

2019-11-25 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git.


from cccba94  [LIVY-714][SERVER] Fix cannot remove the app in leakedAppTags 
when timeout
 add 0a527e8  [LIVY-717] introduce maven property to set ZooKeeper version

No new revisions were added by this update.

Summary of changes:
 integration-test/pom.xml | 37 +
 pom.xml  |  2 ++
 server/pom.xml   | 28 
 3 files changed, 67 insertions(+)



[incubator-livy] branch master updated: [LIVY-714][SERVER] Fix cannot remove the app in leakedAppTags when timeout

2019-11-24 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/master by this push:
 new cccba94  [LIVY-714][SERVER] Fix cannot remove the app in leakedAppTags 
when timeout
cccba94 is described below

commit cccba9480e2db821d6cc67f580eeb67f2fac4e95
Author: runzhiwang 
AuthorDate: Mon Nov 25 11:44:37 2019 +0800

[LIVY-714][SERVER] Fix cannot remove the app in leakedAppTags when timeout

## What changes were proposed in this pull request?
1. `var isRemoved = false` should be declared inside `while(iter.hasNext)`. Otherwise, if there are two apps where the first is killed via killApplication and the second has timed out, then after the first app is removed `isRemoved` stays `true`, so the second app never passes the `if (!isRemoved)` check and is only deleted in the next sweep.

2. `entry.getValue - now` is always negative, so it can never exceed `sessionLeakageCheckTimeout`; the comparison should be `now - entry.getValue` (see the sketch below).


![image](https://user-images.githubusercontent.com/51938049/69202431-99a81080-0b7c-11ea-8084-9801af5a75bd.png)
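
To make the two points concrete, here is a minimal Scala sketch of the corrected sweep; LeakSweepSketch, gcLeakedTags and runningTags are made-up names for illustration, not Livy's actual code (the real fix is in SparkYarnApp's leakedAppsGCThread, shown in the diff below).

```
// Illustrative sweep over leaked app tags; names are hypothetical, not Livy's API.
object LeakSweepSketch {
  def gcLeakedTags(
      leakedAppTags: java.util.concurrent.ConcurrentHashMap[String, Long],
      runningTags: Set[String],
      timeoutMs: Long): Unit = {
    val now = System.currentTimeMillis()
    val iter = leakedAppTags.entrySet().iterator()
    while (iter.hasNext) {
      var isRemoved = false            // reset for every entry, not once per sweep
      val entry = iter.next()
      if (runningTags.contains(entry.getKey)) {
        // the real code kills the YARN application here before dropping the tag
        iter.remove()
        isRemoved = true
      }
      if (!isRemoved && (now - entry.getValue) > timeoutMs) { // elapsed time, so the timeout can actually trigger
        iter.remove()
      }
    }
  }
}
```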

## How was this patch tested?

Existing ITs and UTs.

Author: runzhiwang 

Closes #259 from runzhiwang/leakapp.
---
 .../scala/org/apache/livy/utils/SparkYarnApp.scala | 27 --
 .../org/apache/livy/utils/SparkYarnAppSpec.scala   | 21 +
 2 files changed, 41 insertions(+), 7 deletions(-)

diff --git a/server/src/main/scala/org/apache/livy/utils/SparkYarnApp.scala 
b/server/src/main/scala/org/apache/livy/utils/SparkYarnApp.scala
index 14af9fa..a245823 100644
--- a/server/src/main/scala/org/apache/livy/utils/SparkYarnApp.scala
+++ b/server/src/main/scala/org/apache/livy/utils/SparkYarnApp.scala
@@ -37,7 +37,8 @@ import org.apache.livy.{LivyConf, Logging, Utils}
 
 object SparkYarnApp extends Logging {
 
-  def init(livyConf: LivyConf): Unit = {
+  def init(livyConf: LivyConf, client: Option[YarnClient] = None): Unit = {
+mockYarnClient = client
 sessionLeakageCheckInterval = 
livyConf.getTimeAsMs(LivyConf.YARN_APP_LEAKAGE_CHECK_INTERVAL)
 sessionLeakageCheckTimeout = 
livyConf.getTimeAsMs(LivyConf.YARN_APP_LEAKAGE_CHECK_TIMEOUT)
 leakedAppsGCThread.setDaemon(true)
@@ -45,6 +46,8 @@ object SparkYarnApp extends Logging {
 leakedAppsGCThread.start()
   }
 
+  private var mockYarnClient: Option[YarnClient] = None
+
   // YarnClient is thread safe. Create once, share it across threads.
   lazy val yarnClient = {
 val c = YarnClient.createYarnClient()
@@ -59,9 +62,9 @@ object SparkYarnApp extends Logging {
   private def getYarnPollInterval(livyConf: LivyConf): FiniteDuration =
 livyConf.getTimeAsMs(LivyConf.YARN_POLL_INTERVAL) milliseconds
 
-  private val appType = Set("SPARK").asJava
+  private[utils] val appType = Set("SPARK").asJava
 
-  private val leakedAppTags = new 
java.util.concurrent.ConcurrentHashMap[String, Long]()
+  private[utils] val leakedAppTags = new 
java.util.concurrent.ConcurrentHashMap[String, Long]()
 
   private var sessionLeakageCheckTimeout: Long = _
 
@@ -69,24 +72,34 @@ object SparkYarnApp extends Logging {
 
   private val leakedAppsGCThread = new Thread() {
 override def run(): Unit = {
+  val client = {
+mockYarnClient match {
+  case Some(client) => client
+  case None => yarnClient
+}
+  }
+
   while (true) {
 if (!leakedAppTags.isEmpty) {
   // kill the app if found it and remove it if exceeding a threshold
   val iter = leakedAppTags.entrySet().iterator()
-  var isRemoved = false
   val now = System.currentTimeMillis()
-  val apps = yarnClient.getApplications(appType).asScala
+  val apps = client.getApplications(appType).asScala
+
   while(iter.hasNext) {
+var isRemoved = false
 val entry = iter.next()
+
 apps.find(_.getApplicationTags.contains(entry.getKey))
   .foreach({ e =>
 info(s"Kill leaked app ${e.getApplicationId}")
-yarnClient.killApplication(e.getApplicationId)
+client.killApplication(e.getApplicationId)
 iter.remove()
 isRemoved = true
   })
+
 if (!isRemoved) {
-  if ((entry.getValue - now) > sessionLeakageCheckTimeout) {
+  if ((now - entry.getValue) > sessionLeakageCheckTimeout) {
 iter.remove()
 info(s"Remove leaked yarn app tag ${entry.getKey}")
   }
diff --git a/server/src/test/scala/org/apache/livy/utils/SparkYarnAppSpec.scala 
b/server/src/test/scala/org/apache/livy/utils/SparkYarnAppSpec.scala
index 064bb77..d43125d 100644
--- a/server/src/test/scala/org/apache/livy/utils/SparkYarnAppSpec.scala
+++ b/server/src/test/scala/org/

[incubator-livy] branch master updated: [LIVY-715][DOC] The configuration in the template is inconsistent with LivyConf.scala

2019-11-21 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/master by this push:
 new 8f1e898  [LIVY-715][DOC] The configuration in the template is 
inconsistent with LivyConf.scala
8f1e898 is described below

commit 8f1e8986b2fa8a5d4047c900c224478bb1829489
Author: captainzmc 
AuthorDate: Fri Nov 22 10:39:20 2019 +0800

[LIVY-715][DOC] The configuration in the template is inconsistent with 
LivyConf.scala

## What changes were proposed in this pull request?

While testing Livy impersonation, I found that in livy.conf.template the value of livy.impersonation.enabled is true, so I assumed impersonation was enabled by default.
However, impersonation was not turned on in our tests: the actual default in LivyConf.scala is false.
This can mislead users.

## How was this patch tested?

No test needed; this is a documentation-only change.

Author: captainzmc 

Closes #261 from captainzmc/apache-livy-master.
---
 conf/livy.conf.template | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/conf/livy.conf.template b/conf/livy.conf.template
index de7c248..1fe6047 100644
--- a/conf/livy.conf.template
+++ b/conf/livy.conf.template
@@ -62,7 +62,7 @@
 # livy.server.session.state-retain.sec = 600s
 
 # If livy should impersonate the requesting users when creating a new session.
-# livy.impersonation.enabled = true
+# livy.impersonation.enabled = false
 
 # Logs size livy can cache for each session/batch. 0 means don't cache the 
logs.
 # livy.cache-log.size = 200



[incubator-livy] branch master updated: [LIVY-707] Add audit log for SqlJobs from ThriftServer

2019-11-13 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/master by this push:
 new 6261c57  [LIVY-707] Add audit log for SqlJobs from ThriftServer
6261c57 is described below

commit 6261c57be8df66d5f3fc3ccdaa15f8c4e1989d1d
Author: BoneAn 
AuthorDate: Thu Nov 14 09:51:48 2019 +0800

[LIVY-707] Add audit log for SqlJobs from ThriftServer

## What changes were proposed in this pull request?

We should add audit logs to the Thrift server so that admins can easily track and manage operations.

## How was this patch tested?

An example audit log entry is shown below:

```
19/11/06 16:38:30 INFO ThriftServerAudit$: user: test ipAddress: 10.25.22.46 query: select count(*) from test1 beforeExecute: 1573029416951 afterExecute: 1573029510972 time spent: 94021
```
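
As a rough illustration of the pattern the patch follows (capture a timestamp before executing the statement, capture another afterwards, then emit a single audit line), here is a small self-contained Scala sketch; AuditSketch, logAudit and timed are hypothetical names, not Livy's ThriftServerAudit API.

```
// Hypothetical helper mirroring the before/after timing pattern; not Livy's actual code.
object AuditSketch {
  def logAudit(user: String, ip: String, query: String, start: Long, end: Long): Unit =
    println(s"user: $user ipAddress: $ip query: ${query.replace('\n', ' ')} " +
      s"beforeExecute: $start afterExecute: $end time spent: ${end - start}")

  // Runs the statement body and always emits an audit line, even if the body throws.
  def timed[T](user: String, ip: String, query: String)(body: => T): T = {
    val before = System.currentTimeMillis()
    try body
    finally logAudit(user, ip, query, before, System.currentTimeMillis())
  }
}
```

For example, `AuditSketch.timed("test", "10.25.22.46", "select count(*) from test1") { /* run the statement */ }` would print a line shaped like the one above.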

Author: BoneAn 

Closes #255 from huianyi/LIVY-707.
---
 .../LivyExecuteStatementOperation.scala|  6 
 .../livy/thriftserver/ThriftServerAudit.scala  | 36 ++
 2 files changed, 42 insertions(+)

diff --git 
a/thriftserver/server/src/main/scala/org/apache/livy/thriftserver/LivyExecuteStatementOperation.scala
 
b/thriftserver/server/src/main/scala/org/apache/livy/thriftserver/LivyExecuteStatementOperation.scala
index ebb8e1d..f7d6c16 100644
--- 
a/thriftserver/server/src/main/scala/org/apache/livy/thriftserver/LivyExecuteStatementOperation.scala
+++ 
b/thriftserver/server/src/main/scala/org/apache/livy/thriftserver/LivyExecuteStatementOperation.scala
@@ -137,6 +137,8 @@ class LivyExecuteStatementOperation(
 }
 setState(OperationState.RUNNING)
 
+val before = System.currentTimeMillis()
+
 try {
   rpcClient.executeSql(sessionHandle, statementId, statement).get()
 } catch {
@@ -147,6 +149,10 @@ class LivyExecuteStatementOperation(
 throw new HiveSQLException(e)
 }
 setState(OperationState.FINISHED)
+
+val sessionInfo = sessionManager.getSessionInfo(sessionHandle)
+val after = System.currentTimeMillis()
+ThriftServerAudit.audit(sessionInfo.username, sessionInfo.ipAddress, 
statement, before, after)
   }
 
   def close(): Unit = {
diff --git 
a/thriftserver/server/src/main/scala/org/apache/livy/thriftserver/ThriftServerAudit.scala
 
b/thriftserver/server/src/main/scala/org/apache/livy/thriftserver/ThriftServerAudit.scala
new file mode 100644
index 000..5bf7760
--- /dev/null
+++ 
b/thriftserver/server/src/main/scala/org/apache/livy/thriftserver/ThriftServerAudit.scala
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.livy.thriftserver
+
+import org.apache.livy.Logging
+
+object ThriftServerAudit extends Logging {
+
+  def audit(
+  user: String,
+  ipAddress: String,
+  query: String,
+  startTime: Long,
+  endTime: Long): Unit = {
+info(
+  s"user: $user ipAddress: $ipAddress query: ${query.replace('\n', ' ')} " 
+
+s"start time: ${startTime} end time: ${endTime} " +
+s"time spent: ${Math.round((endTime - startTime) / 1000)}s")
+  }
+
+}



[incubator-livy] branch master updated: [LIVY-711][TEST] Fix Travis fails to build on Ubuntu16.04

2019-11-11 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/master by this push:
 new 6c7df77  [LIVY-711][TEST] Fix Travis fails to build on Ubuntu16.04
6c7df77 is described below

commit 6c7df77204a5a7bfb04beb9789253120d8d7db6c
Author: runzhiwang 
AuthorDate: Tue Nov 12 15:35:34 2019 +0800

[LIVY-711][TEST] Fix Travis fails to build on Ubuntu16.04

## What changes were proposed in this pull request?

Fix Travis fails to build on Ubuntu16.04

## How was this patch tested?

1. Previously the `dist` in .travis.yml was `xenial`, which pointed to Ubuntu 14.04.5 LTS, and the Travis build succeeded.

![image](https://user-images.githubusercontent.com/51938049/68647534-10c81e00-0559-11ea-9152-362711b30946.png)


![image](https://user-images.githubusercontent.com/51938049/68647646-669cc600-0559-11ea-9413-9c29860d63f5.png)

2. However, `xenial` now points to Ubuntu 16.04.6 LTS, which needs a JDK newer than 8, but Livy needs JDK 8, so the Travis build failed.

![image](https://user-images.githubusercontent.com/51938049/68647880-edea3980-0559-11ea-8135-f5a68b3d303d.png)

![image](https://user-images.githubusercontent.com/51938049/68648187-baf47580-055a-11ea-90cf-8592e628a32c.png)

![image](https://user-images.githubusercontent.com/51938049/68647919-0e19f880-055a-11ea-94b9-4f19099654cb.png)

3. So I changed `dist` to `trusty`, which points to Ubuntu 14.04.5 LTS according to the Travis docs, and the Travis build succeeds.

![image](https://user-images.githubusercontent.com/51938049/68648018-4d484980-055a-11ea-8358-c4234d7cf56c.png)

![image](https://user-images.githubusercontent.com/51938049/68648028-5507ee00-055a-11ea-8f6e-7c7efcbc6390.png)

Author: runzhiwang 

Closes #257 from runzhiwang/travis-build-error.
---
 .travis.yml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/.travis.yml b/.travis.yml
index bc26f82..c2c0ffd 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -16,7 +16,7 @@
 #
 
 sudo: required
-dist: xenial
+dist: trusty
 language: scala
   - 2.11.12
 



[incubator-livy] branch master updated: [LIVY-708][SERVER] Align curator jars version

2019-11-11 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/master by this push:
 new 1f64264  [LIVY-708][SERVER] Align curator jars version
1f64264 is described below

commit 1f64264135409f4a9c7f094660c7d4d6e4130a34
Author: yihengwang 
AuthorDate: Tue Nov 12 09:53:44 2019 +0800

[LIVY-708][SERVER] Align curator jars version

## What changes were proposed in this pull request?
The Livy server depends on Apache Curator through the Hadoop client. However, the versions of the Curator jars are not aligned. Here are the Curator jars after a build:

* curator-client-2.7.1.jar
* curator-framework-2.7.1.jar
* curator-recipes-2.6.0.jar
This misalignment can cause a method-not-found issue in some cases:
```
Exception in thread "main" java.lang.NoSuchMethodError:
org.apache.curator.utils.PathUtils.validatePath(Ljava/lang/String;)V
```
This patch pins the version of curator-recipes to 2.7.1.

## How was this patch tested?
Manually tested in the environment where the NoSuchMethodError was thrown.
Existing UTs/ITs.

Author: yihengwang 

Closes #256 from yiheng/fix_708.
---
 pom.xml| 1 +
 server/pom.xml | 6 ++
 2 files changed, 7 insertions(+)

diff --git a/pom.xml b/pom.xml
index 5bcf1a3..2f6dd09 100644
--- a/pom.xml
+++ b/pom.xml
@@ -124,6 +124,7 @@
 
 2.0.0-M21
 1.0.0-M33
+2.7.1
 
 

[incubator-livy] branch master updated: [MINOR][DOC] Add missing session kind in UI

2019-10-30 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/master by this push:
 new c7313c5  [MINOR][DOC] Add missing session kind in UI
c7313c5 is described below

commit c7313c5648c63089413464b47f33c9749dbabb20
Author: Yishuang Lu 
AuthorDate: Thu Oct 31 10:47:46 2019 +0800

[MINOR][DOC] Add missing session kind in UI

## What changes were proposed in this pull request?

Add missing session kind in UI

## How was this patch tested?

No need to test

Author: Yishuang Lu 

Closes #248 from lys0716/dev-1.
---
 .../org/apache/livy/server/ui/static/html/sessions-table.html | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git 
a/server/src/main/resources/org/apache/livy/server/ui/static/html/sessions-table.html
 
b/server/src/main/resources/org/apache/livy/server/ui/static/html/sessions-table.html
index d832e07..6818f76 100644
--- 
a/server/src/main/resources/org/apache/livy/server/ui/static/html/sessions-table.html
+++ 
b/server/src/main/resources/org/apache/livy/server/ui/static/html/sessions-table.html
@@ -52,7 +52,7 @@
 
 
   
+title="Session kind (spark, pyspark, pyspark3, sparkr or sql)">
 Session Kind
   
 
@@ -72,4 +72,4 @@
   
   
   
-
\ No newline at end of file
+



[incubator-livy] branch master updated (85837e3 -> ba12b51)

2019-10-21 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git.


from 85837e3  [LIVY-695] Upgrade JQuery to 3.4.1 and Bootstrap to 3.4.1
 add ba12b51  [LIVY-697] Rsc client cannot resolve the hostname of driver 
in yarn-cluster mode

No new revisions were added by this update.

Summary of changes:
 rsc/src/main/java/org/apache/livy/rsc/ContextLauncher.java | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)



[incubator-livy] branch master updated: [LIVY-690][THRIFT] Exclude curator in thrift server pom to avoid conflict jars

2019-09-29 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/master by this push:
 new 0804c8e  [LIVY-690][THRIFT] Exclude curator in thrift server pom to 
avoid conflict jars
0804c8e is described below

commit 0804c8ea8ece67d01ababec616c9ad8e3b15dc9f
Author: Yiheng Wang 
AuthorDate: Sun Sep 29 16:24:30 2019 +0800

[LIVY-690][THRIFT] Exclude curator in thrift server pom to avoid conflict 
jars

## What changes were proposed in this pull request?
Currently, the Thrift server depends on curator-client:2.12.0 through the Hive service. After the build, a `curator-client-2.12.0.jar` file ends up in the `jars` folder. It conflicts with the `curator-client-2.7.1.jar` file used by the Livy server.

We observed that on some JDKs, `curator-client-2.12.0.jar` is loaded before `curator-client-2.7.1.jar`, which crashes a recovery-enabled Livy server.

In this patch, we exclude the `org.apache.curator` modules from the hive 
dependency.

## How was this patch tested?
Manual test and existing UT/ITs

Author: Yiheng Wang 

Closes #239 from yiheng/exclude_curator.
---
 thriftserver/server/pom.xml | 4 
 1 file changed, 4 insertions(+)

diff --git a/thriftserver/server/pom.xml b/thriftserver/server/pom.xml
index fe17f96..86f0b86 100644
--- a/thriftserver/server/pom.xml
+++ b/thriftserver/server/pom.xml
@@ -58,6 +58,10 @@
   org.eclipse.jetty
   *
 
+
+  org.apache.curator
+  *
+
   
 
 



[incubator-livy] branch master updated: [LIVY-688] Error message of BypassJobStatus should contains cause information of Exception

2019-09-29 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/master by this push:
 new 0ea4a14  [LIVY-688] Error message of BypassJobStatus should contains 
cause information of Exception
0ea4a14 is described below

commit 0ea4a14fcd77a954898f58b4a624d0d515b52eda
Author: weiwenda 
AuthorDate: Sun Sep 29 16:19:41 2019 +0800

[LIVY-688] Error message of BypassJobStatus should contains cause 
information of Exception

## What changes were proposed in this pull request?
Change the implementation of org.apache.livy.rsc.Utils.stackTraceAsString to behave like Guava's Throwables.getStackTraceAsString, so that users receive the detailed error message, including the cause, when calling org.apache.livy.client.http.JobHandleImpl.get.

https://issues.apache.org/jira/browse/LIVY-688

Author: weiwenda 

Closes #237 from WeiWenda/livy-err-clear.
---
 rsc/src/main/java/org/apache/livy/rsc/Utils.java | 12 +---
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/rsc/src/main/java/org/apache/livy/rsc/Utils.java 
b/rsc/src/main/java/org/apache/livy/rsc/Utils.java
index d2c0059..3c8a5e6 100644
--- a/rsc/src/main/java/org/apache/livy/rsc/Utils.java
+++ b/rsc/src/main/java/org/apache/livy/rsc/Utils.java
@@ -17,6 +17,8 @@
 
 package org.apache.livy.rsc;
 
+import java.io.PrintWriter;
+import java.io.StringWriter;
 import java.util.concurrent.ThreadFactory;
 import java.util.concurrent.atomic.AtomicInteger;
 
@@ -91,13 +93,9 @@ public class Utils {
   }
 
   public static String stackTraceAsString(Throwable t) {
-StringBuilder sb = new StringBuilder();
-sb.append(t.getClass().getName()).append(": ").append(t.getMessage());
-for (StackTraceElement e : t.getStackTrace()) {
-  sb.append("\n");
-  sb.append(e.toString());
-}
-return sb.toString();
+StringWriter stringWriter = new StringWriter();
+t.printStackTrace(new PrintWriter(stringWriter));
+return stringWriter.toString();
   }
 
   public static  void addListener(Future future, final FutureListener 
lsnr) {



[incubator-livy] branch master updated: [LIVY-658] RSCDriver should catch exception if cancel job failed during shutdown

2019-09-28 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/master by this push:
 new 6c53d2b  [LIVY-658] RSCDriver should catch exception if cancel job 
failed during shutdown
6c53d2b is described below

commit 6c53d2b41975f5b0171ce45320deb4b69f8ddea7
Author: Jeffrey(Xilang) Yan <7855100+yan...@users.noreply.github.com>
AuthorDate: Sun Sep 29 10:09:10 2019 +0800

[LIVY-658] RSCDriver should catch exception if cancel job failed during 
shutdown

## What changes were proposed in this pull request?

Currently, if startup hits an exception, that exception triggers a Spark shutdown, which in turn triggers job cancellation; but cancelling the job throws another exception because Spark was never initialized. The new exception then swallows the original one.

https://issues.apache.org/jira/browse/LIVY-658

Before changes:
![cancel job 
exception](https://user-images.githubusercontent.com/7855100/64118287-f0961900-cdc9-11e9-9b72-d051fb4bdbdf.jpg)

After changes:
![cancel job exception after 
fix](https://user-images.githubusercontent.com/7855100/64118295-f4c23680-cdc9-11e9-9a2d-38efa0770a99.jpg)

## How was this patch tested?

Tested manually, and added a unit test.

Please review https://livy.incubator.apache.org/community/ before opening a 
pull request.

Author: Jeffrey(Xilang) Yan <7855100+yan...@users.noreply.github.com>

Closes #223 from yantzu/initialize_exception_swallow_by_shutdown_exception.
---
 .../java/org/apache/livy/rsc/driver/RSCDriver.java |  6 +++-
 .../org/apache/livy/rsc/driver/TestRSCDriver.java  | 36 ++
 2 files changed, 41 insertions(+), 1 deletion(-)

diff --git a/rsc/src/main/java/org/apache/livy/rsc/driver/RSCDriver.java 
b/rsc/src/main/java/org/apache/livy/rsc/driver/RSCDriver.java
index eeba300..0d8eec5 100644
--- a/rsc/src/main/java/org/apache/livy/rsc/driver/RSCDriver.java
+++ b/rsc/src/main/java/org/apache/livy/rsc/driver/RSCDriver.java
@@ -124,7 +124,11 @@ public class RSCDriver extends BaseProtocol {
 
 // Cancel any pending jobs.
 for (JobWrapper job : activeJobs.values()) {
-  job.cancel();
+  try {
+job.cancel();
+  } catch (Exception e) {
+LOG.warn("Error during cancel job.", e);
+  }
 }
 
 try {
diff --git a/rsc/src/test/java/org/apache/livy/rsc/driver/TestRSCDriver.java 
b/rsc/src/test/java/org/apache/livy/rsc/driver/TestRSCDriver.java
new file mode 100644
index 000..df6ccea
--- /dev/null
+++ b/rsc/src/test/java/org/apache/livy/rsc/driver/TestRSCDriver.java
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.livy.rsc.driver;
+
+import org.apache.spark.SparkConf;
+import org.junit.Test;
+
+import org.apache.livy.rsc.BaseProtocol;
+import org.apache.livy.rsc.RSCConf;
+
+public class TestRSCDriver {
+  @Test(expected = IllegalArgumentException.class)
+  public void testCancelJobAfterInitializeFailed()
+  throws Exception {
+//use empty Conf to trigger initialize throw IllegalArgumentException
+RSCDriver rscDriver = new RSCDriver(new SparkConf(), new RSCConf());
+//add asynchronous dummy job request to trigger cancel job failure
+rscDriver.handle(null, new BaseProtocol.BypassJobRequest("RequestId", 
null, null, false));
+rscDriver.run();
+  }
+}



[incubator-livy] branch master updated: [LIVY-644][TEST] Flaky test: Failed to execute goal org.jacoco:jacoco-maven-plugin:0.8.2:report-aggregate (jacoco-report) on project livy-coverage-report

2019-09-18 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/master by this push:
 new b8251eb  [LIVY-644][TEST] Flaky test: Failed to execute goal 
org.jacoco:jacoco-maven-plugin:0.8.2:report-aggregate (jacoco-report) on 
project livy-coverage-report
b8251eb is described below

commit b8251eb9b3d63c77c61a5950dee5958b654d9633
Author: yihengwang 
AuthorDate: Thu Sep 19 10:35:26 2019 +0800

[LIVY-644][TEST] Flaky test: Failed to execute goal 
org.jacoco:jacoco-maven-plugin:0.8.2:report-aggregate (jacoco-report) on 
project livy-coverage-report

## What changes were proposed in this pull request?
This patch fixes the flaky test: Failed to execute goal 
org.jacoco:jacoco-maven-plugin:0.8.2:report-aggregate (jacoco-report) on 
project livy-coverage-report.

When the JVM does not shut down gracefully in a test, the code coverage data file generated by JaCoCo may be corrupt, and JaCoCo then throws an exception when generating the coverage report.

In the Livy integration tests, two test cases shut down non-gracefully (one of them uses System.exit). Random failures occur when JaCoCo processes the coverage data files generated by those test cases.

In this patch, we turn off code coverage analysis for these two test cases.

## How was this patch tested?
Compared the JaCoCo data files generated by the integration tests: before the fix there are 18 files, and after the fix there are 16, which means the fix works.

Run 10 builds on Travis each before and after the fix:
1. Before the fix: 3 builds failed due to the jacoco code coverage exception
2. After the fix: No build failed

Existing UTs and ITs.

Author: yihengwang 

Closes #229 from yiheng/fix_644.
---
 .../src/test/scala/org/apache/livy/test/InteractiveIT.scala | 6 --
 rsc/src/main/java/org/apache/livy/rsc/ContextLauncher.java  | 2 +-
 rsc/src/main/java/org/apache/livy/rsc/RSCConf.java  | 1 +
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git 
a/integration-test/src/test/scala/org/apache/livy/test/InteractiveIT.scala 
b/integration-test/src/test/scala/org/apache/livy/test/InteractiveIT.scala
index 0613bf3..0c3d632 100644
--- a/integration-test/src/test/scala/org/apache/livy/test/InteractiveIT.scala
+++ b/integration-test/src/test/scala/org/apache/livy/test/InteractiveIT.scala
@@ -114,14 +114,16 @@ class InteractiveIT extends BaseIntegrationTestSuite {
   }
 
   test("application kills session") {
-withNewSession(Spark) { s =>
+val noCodeCoverageConf = 
s"${RSCConf.Entry.TEST_NO_CODE_COVERAGE_ANALYSIS.key()}"
+withNewSession(Spark, Map(noCodeCoverageConf -> "true")) { s =>
   s.runFatalStatement("System.exit(0)")
 }
   }
 
   test("should kill RSCDriver if it doesn't respond to end session") {
 val testConfName = 
s"${RSCConf.LIVY_SPARK_PREFIX}${RSCConf.Entry.TEST_STUCK_END_SESSION.key()}"
-withNewSession(Spark, Map(testConfName -> "true")) { s =>
+val noCodeCoverageConf = 
s"${RSCConf.Entry.TEST_NO_CODE_COVERAGE_ANALYSIS.key()}"
+withNewSession(Spark, Map(testConfName -> "true", noCodeCoverageConf -> 
"true")) { s =>
   val appId = s.appId()
   s.stop()
   val appReport = cluster.yarnClient.getApplicationReport(appId)
diff --git a/rsc/src/main/java/org/apache/livy/rsc/ContextLauncher.java 
b/rsc/src/main/java/org/apache/livy/rsc/ContextLauncher.java
index 5a819d5..d67b78a 100644
--- a/rsc/src/main/java/org/apache/livy/rsc/ContextLauncher.java
+++ b/rsc/src/main/java/org/apache/livy/rsc/ContextLauncher.java
@@ -207,7 +207,7 @@ class ContextLauncher {
 if (!conf.getBoolean(CLIENT_IN_PROCESS) &&
 // For tests which doesn't shutdown RscDriver gracefully, JaCoCo exec 
isn't dumped properly.
 // Disable JaCoCo for this case.
-!conf.getBoolean(TEST_STUCK_END_SESSION)) {
+!conf.getBoolean(TEST_NO_CODE_COVERAGE_ANALYSIS)) {
   // For testing; propagate jacoco settings so that we also do coverage 
analysis
   // on the launched driver. We replace the name of the main file 
("main.exec")
   // so that we don't end up fighting with the main test launcher.
diff --git a/rsc/src/main/java/org/apache/livy/rsc/RSCConf.java 
b/rsc/src/main/java/org/apache/livy/rsc/RSCConf.java
index d2496b5..4c45956 100644
--- a/rsc/src/main/java/org/apache/livy/rsc/RSCConf.java
+++ b/rsc/src/main/java/org/apache/livy/rsc/RSCConf.java
@@ -71,6 +71,7 @@ public class RSCConf extends ClientConf {
 SASL_MECHANISMS("rpc.sasl.mechanisms", "DIGEST-MD5"),
 SASL_QOP("rpc.sasl.qop", null),
 
+
TEST_NO_CODE_COVERAGE_ANALYSIS("test.d

[spark] branch master updated (eef5e6d -> 0b6775e)

2019-09-18 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from eef5e6d  [SPARK-29113][DOC] Fix some annotation errors and remove 
meaningless annotations in project
 add 0b6775e  [SPARK-29112][YARN] Expose more details when 
ApplicationMaster reporter faces a fatal exception

No new revisions were added by this update.

Summary of changes:
 .../main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[incubator-livy] branch master updated (145cc2b -> e2e966b)

2019-09-17 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git.


from 145cc2b  [LIVY-633][SERVER] session should not be gc-ed for long 
running queries
 add e2e966b  [LIVY-657][TEST] Fix travis failed on should not create 
sessions with duplicate names

No new revisions were added by this update.

Summary of changes:
 .../test/scala/org/apache/livy/sessions/SessionManagerSpec.scala  | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)



[incubator-livy] branch master updated: [LIVY-633][SERVER] session should not be gc-ed for long running queries

2019-09-17 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/master by this push:
 new 145cc2b  [LIVY-633][SERVER] session should not be gc-ed for long 
running queries
145cc2b is described below

commit 145cc2b77db4c7d7bdd8f953dc3a16856d9fcf0f
Author: yihengwang 
AuthorDate: Tue Sep 17 17:21:28 2019 +0800

[LIVY-633][SERVER] session should not be gc-ed for long running queries

## What changes were proposed in this pull request?
Currently, Livy records the session's last activity time before statement execution. If a statement runs long enough to exceed the session timeout, the session is garbage collected right after the statement finishes.

This is not the expected behavior: statement execution time should not be counted as idle time. We should update the last activity time after the statement execution.

The last activity time cannot be updated in the Session class when the session changes state from busy to idle. So in this patch, we add a replLastActivity field to the RSC client, which is updated on repl state changes: when the repl goes from busy to idle, the field captures the time and is ultimately reflected in the session's last activity (see the sketch below).
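
A minimal sketch of that busy-to-idle bookkeeping, assuming a component that is notified on every repl state change; ReplActivityTracker and onReplStateChange are illustrative names, while the real change lives in RSCClient and InteractiveSession as shown in the diff below.

```
// Illustrative tracker: only a busy -> idle transition refreshes the last-activity timestamp.
class ReplActivityTracker {
  @volatile private var replState: String = "idle"
  @volatile private var replLastActivity: Long = System.nanoTime()

  def onReplStateChange(newState: String): Unit = {
    if (replState == "busy" && newState == "idle") {
      // The statement just finished, so idle time starts counting from now.
      replLastActivity = System.nanoTime()
    }
    replState = newState
  }

  def lastActivity: Long = replLastActivity
}
```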

## How was this patch tested?
Manual test. Also added a new unit test.

Existing unit tests and integration tests.

Author: yihengwang 
Author: Yiheng Wang 

Closes #224 from yiheng/fix_633.
---
 rsc/pom.xml|  7 +++
 rsc/src/main/java/org/apache/livy/rsc/RSCClient.java   | 18 ++
 .../livy/server/interactive/InteractiveSession.scala   | 12 
 .../server/interactive/InteractiveSessionSpec.scala| 15 +++
 4 files changed, 52 insertions(+)

diff --git a/rsc/pom.xml b/rsc/pom.xml
index dcb58a6..1f3d6a3 100644
--- a/rsc/pom.xml
+++ b/rsc/pom.xml
@@ -49,6 +49,13 @@
   ${project.version}
   test
 
+
+  org.apache.livy
+  livy-core_${scala.binary.version}
+  ${project.version}
+  
+  provided
+
 
 
   com.esotericsoftware.kryo
diff --git a/rsc/src/main/java/org/apache/livy/rsc/RSCClient.java 
b/rsc/src/main/java/org/apache/livy/rsc/RSCClient.java
index f2879b8..c1c9534 100644
--- a/rsc/src/main/java/org/apache/livy/rsc/RSCClient.java
+++ b/rsc/src/main/java/org/apache/livy/rsc/RSCClient.java
@@ -44,6 +44,7 @@ import org.apache.livy.client.common.BufferUtils;
 import org.apache.livy.rsc.driver.AddFileJob;
 import org.apache.livy.rsc.driver.AddJarJob;
 import org.apache.livy.rsc.rpc.Rpc;
+import org.apache.livy.sessions.SessionState;
 
 import static org.apache.livy.rsc.RSCConf.Entry.*;
 
@@ -64,6 +65,8 @@ public class RSCClient implements LivyClient {
   private Process driverProcess;
   private volatile boolean isAlive;
   private volatile String replState;
+  // Record the last activity timestamp of the repl
+  private volatile long replLastActivity = System.nanoTime();
 
   RSCClient(RSCConf conf, Promise ctx, Process driverProcess) 
throws IOException {
 this.conf = conf;
@@ -315,6 +318,16 @@ public class RSCClient implements LivyClient {
 return replState;
   }
 
+  /**
+   * Get the timestamp of the last activity of the repl. It will be updated 
when the repl state
+   * changed from busy to idle
+   *
+   * @return last activity timestamp
+   */
+  public long getReplLastActivity() {
+return replLastActivity;
+  }
+
   private class ClientProtocol extends BaseProtocol {
 
  JobHandleImpl submit(Job job) {
@@ -411,6 +424,11 @@ public class RSCClient implements LivyClient {
 
 private void handle(ChannelHandlerContext ctx, ReplState msg) {
   LOG.trace("Received repl state for {}", msg.state);
+  // Update last activity timestamp when state change is from busy to idle.
+  if (SessionState.Busy$.MODULE$.state().equals(replState) && msg != null 
&&
+SessionState.Idle$.MODULE$.state().equals(msg.state)) {
+replLastActivity = System.nanoTime();
+  }
   replState = msg.state;
 }
   }
diff --git 
a/server/src/main/scala/org/apache/livy/server/interactive/InteractiveSession.scala
 
b/server/src/main/scala/org/apache/livy/server/interactive/InteractiveSession.scala
index bccdb4d..cdeddda 100644
--- 
a/server/src/main/scala/org/apache/livy/server/interactive/InteractiveSession.scala
+++ 
b/server/src/main/scala/org/apache/livy/server/interactive/InteractiveSession.scala
@@ -626,4 +626,16 @@ class InteractiveSession(
   }
 
   override def infoChanged(appInfo: AppInfo): Unit = { this.appInfo = appInfo }
+
+  override def lastActivity: Long = {
+val serverSideLastActivity = super.lastActivity
+if (serverSideState == SessionState.Running) {
+  // If the rsc client is running, w

[incubator-livy] branch master updated (3cdb7b2 -> 3c4eab9)

2019-09-06 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git.


from 3cdb7b2  [LIVY-640] Add tests for ThriftServer
 add 3c4eab9  [LIVY-659][TEST] Fix travis failed on can kill spark-submit 
while it's running

No new revisions were added by this update.

Summary of changes:
 .../org/apache/livy/utils/SparkYarnAppSpec.scala   | 22 ++
 1 file changed, 18 insertions(+), 4 deletions(-)



[incubator-livy] branch master updated: [LIVY-645] Add Session Name, Owner, Proxy User information to Web UI

2019-09-05 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/master by this push:
 new c4bb55b  [LIVY-645] Add Session Name, Owner, Proxy User information to 
Web UI
c4bb55b is described below

commit c4bb55b12f5978ba151d92e1f1e655f91233138b
Author: Jeffrey(Xilang) Yan <7855100+yan...@users.noreply.github.com>
AuthorDate: Thu Sep 5 14:14:17 2019 +0800

[LIVY-645] Add Session Name, Owner, Proxy User information to Web UI

## What changes were proposed in this pull request?

1. Web UI: add Session Name to the Interactive Sessions list
2. Web UI: add Session Name, Owner, and Proxy User to the Batch Sessions list
3. ~~Fix the issue that the Thrift server session doesn't set Session Name.~~ Moved to PR #218

## How was this patch tested?

Updated the existing unit tests and tested manually.


![AddName](https://user-images.githubusercontent.com/7855100/63513238-3de7d000-c518-11e9-8b18-613874ed635a.jpg)

![AddName2](https://user-images.githubusercontent.com/7855100/63513246-42ac8400-c518-11e9-9019-679bb82e4bb0.jpg)

Author: Jeffrey(Xilang) Yan <7855100+yan...@users.noreply.github.com>

Closes #207 from yantzu/add_more_info_to_ui.
---
 .../apache/livy/server/ui/static/html/batches-table.html | 16 
 .../livy/server/ui/static/html/sessions-table.html   |  6 ++
 .../org/apache/livy/server/ui/static/js/all-sessions.js  |  4 
 .../org/apache/livy/server/ui/static/js/session.js   |  1 +
 .../apache/livy/server/batch/BatchSessionServlet.scala   |  6 --
 .../org/apache/livy/server/batch/BatchServletSpec.scala  |  7 ++-
 6 files changed, 37 insertions(+), 3 deletions(-)

diff --git 
a/server/src/main/resources/org/apache/livy/server/ui/static/html/batches-table.html
 
b/server/src/main/resources/org/apache/livy/server/ui/static/html/batches-table.html
index aeab23b..b198e98 100644
--- 
a/server/src/main/resources/org/apache/livy/server/ui/static/html/batches-table.html
+++ 
b/server/src/main/resources/org/apache/livy/server/ui/static/html/batches-table.html
@@ -35,6 +35,22 @@
 
 
   
+Name
+  
+
+
+  
+Owner
+  
+
+
+  
+Proxy User
+  
+
+
+  
 State
diff --git 
a/server/src/main/resources/org/apache/livy/server/ui/static/html/sessions-table.html
 
b/server/src/main/resources/org/apache/livy/server/ui/static/html/sessions-table.html
index 8c64100..d832e07 100644
--- 
a/server/src/main/resources/org/apache/livy/server/ui/static/html/sessions-table.html
+++ 
b/server/src/main/resources/org/apache/livy/server/ui/static/html/sessions-table.html
@@ -35,6 +35,12 @@
   
 
 
+  
+Name
+  
+
+
   
 Owner
   
diff --git 
a/server/src/main/resources/org/apache/livy/server/ui/static/js/all-sessions.js 
b/server/src/main/resources/org/apache/livy/server/ui/static/js/all-sessions.js
index 90de331..6e35702 100644
--- 
a/server/src/main/resources/org/apache/livy/server/ui/static/js/all-sessions.js
+++ 
b/server/src/main/resources/org/apache/livy/server/ui/static/js/all-sessions.js
@@ -21,6 +21,7 @@ function loadSessionsTable(sessions) {
   "" +
 tdWrap(uiLink("session/" + session.id, session.id)) +
 tdWrap(appIdLink(session)) +
+tdWrap(session.name) +
 tdWrap(session.owner) +
 tdWrap(session.proxyUser) +
 tdWrap(session.kind) +
@@ -37,6 +38,9 @@ function loadBatchesTable(sessions) {
   "" +
 tdWrap(session.id) +
 tdWrap(appIdLink(session)) +
+tdWrap(session.name) +
+tdWrap(session.owner) +
+tdWrap(session.proxyUser) +
 tdWrap(session.state) +
 tdWrap(logLinks(session, "batch")) +
 ""
diff --git 
a/server/src/main/resources/org/apache/livy/server/ui/static/js/session.js 
b/server/src/main/resources/org/apache/livy/server/ui/static/js/session.js
index d49e0ec..c87e5ca 100644
--- a/server/src/main/resources/org/apache/livy/server/ui/static/js/session.js
+++ b/server/src/main/resources/org/apache/livy/server/ui/static/js/session.js
@@ -88,6 +88,7 @@ function appendSummary(session) {
 "Session " + session.id + "" +
 "" +
   sumWrap("Application Id", appIdLink(session)) +
+  sumWrap("Name", session.name) +
   sumWrap("Owner", session.owner) +
   sumWrap("Proxy User", session.proxyUser) +
   sumWrap("Session Kind", session.kind) +
diff --git 
a/server/src/main/scala/org/apache/livy/server/batch/BatchSessionServlet.scala 
b/server/src/main/scala/org/apache/livy/server/batch/BatchSessionServlet.scala
index 0bf6799..d14e649 100644
--- 
a/server/src/main/scala/org/a

[incubator-livy] branch master updated: [LIVY-652] Thrifserver doesn't set session name correctly

2019-09-03 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/master by this push:
 new e6b6ec0  [LIVY-652] Thrifserver doesn't set session name correctly
e6b6ec0 is described below

commit e6b6ec060d18d33a300ffa7d766ad46d6c4c70cf
Author: Jeffrey(Xilang) Yan <7855100+yan...@users.noreply.github.com>
AuthorDate: Wed Sep 4 11:08:58 2019 +0800

[LIVY-652] Thrifserver doesn't set session name correctly

## What changes were proposed in this pull request?

The Thrift server should set the session name to the value passed in livy.session.name, but it currently always sets it to None.

## How was this patch tested?

Added an integration test.

https://issues.apache.org/jira/browse/LIVY-652

Author: Jeffrey(Xilang) Yan <7855100+yan...@users.noreply.github.com>

Closes #218 from yantzu/thrift_session_has_no_name.
---
 .../livy/thriftserver/LivyThriftSessionManager.scala|  2 +-
 .../apache/livy/thriftserver/ThriftServerSuites.scala   | 17 +
 2 files changed, 18 insertions(+), 1 deletion(-)

diff --git 
a/thriftserver/server/src/main/scala/org/apache/livy/thriftserver/LivyThriftSessionManager.scala
 
b/thriftserver/server/src/main/scala/org/apache/livy/thriftserver/LivyThriftSessionManager.scala
index bc62084..67c8265 100644
--- 
a/thriftserver/server/src/main/scala/org/apache/livy/thriftserver/LivyThriftSessionManager.scala
+++ 
b/thriftserver/server/src/main/scala/org/apache/livy/thriftserver/LivyThriftSessionManager.scala
@@ -228,7 +228,7 @@ class LivyThriftSessionManager(val server: 
LivyThriftServer, val livyConf: LivyC
   createInteractiveRequest.kind = Spark
   val newSession = InteractiveSession.create(
 server.livySessionManager.nextId(),
-None,
+createInteractiveRequest.name,
 username,
 None,
 server.livyConf,
diff --git 
a/thriftserver/server/src/test/scala/org/apache/livy/thriftserver/ThriftServerSuites.scala
 
b/thriftserver/server/src/test/scala/org/apache/livy/thriftserver/ThriftServerSuites.scala
index 438d86c..48750da 100644
--- 
a/thriftserver/server/src/test/scala/org/apache/livy/thriftserver/ThriftServerSuites.scala
+++ 
b/thriftserver/server/src/test/scala/org/apache/livy/thriftserver/ThriftServerSuites.scala
@@ -396,6 +396,23 @@ class BinaryThriftServerSuite extends ThriftServerBaseTest 
with CommonThriftTest
   getTypeInfoTest(c)
 }
   }
+
+  test("LIVY-652: should set session name correctly") {
+val livySessionManager = 
LivyThriftServer.getInstance.get.livySessionManager
+val testSessionName = "MySessionName"
+assert(livySessionManager.get(testSessionName).isEmpty)
+withJdbcConnection("default", 
Seq(s"livy.session.name=${testSessionName}")) { c =>
+  // execute a statement and block until session is ready
+  val statement = c.createStatement()
+  try {
+statement.executeQuery("select current_database()")
+  } finally {
+statement.close()
+  }
+
+  assert(livySessionManager.get(testSessionName).get.name.get == 
testSessionName)
+}
+  }
 }
 
 class HttpThriftServerSuite extends ThriftServerBaseTest with 
CommonThriftTests {



[incubator-livy] branch master updated: [LIVY-519][TEST] Fix travis failed on should kill yarn app

2019-09-02 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/master by this push:
 new 830d740  [LIVY-519][TEST] Fix travis failed on should kill yarn app
830d740 is described below

commit 830d740db5193314f469fa7bcbd4f6f93cbfc08b
Author: runzhiwang 
AuthorDate: Tue Sep 3 12:58:51 2019 +0800

[LIVY-519][TEST] Fix travis failed on should kill yarn app

## What changes were proposed in this pull request?

Fix the Travis failure on "should kill yarn app".

The cause of the failure is as follows:
1. When a SparkYarnApp is created, the yarnAppMonitorThread is started, and it may change the app state to FAILED. Before the recent commit https://github.com/apache/incubator-livy/commit/a90f4fac8be27a38cc961c24043a494a739ff188, the state pair mocked in the test was not defined in mapYarnState, so the app state was changed to FAILED.

2. The test then kills the app, which calls killApplication only when the app is running. However, since the app was already changed to FAILED in step 1, killApplication is not called and verify(mockYarnClient).killApplication(appId) fails.

3. So if the yarnAppMonitorThread changes the app state before the main thread kills the app, the test fails; otherwise it succeeds.

4. Although the recent commit https://github.com/apache/incubator-livy/commit/a90f4fac8be27a38cc961c24043a494a739ff188 fixed the bug incidentally, it is still necessary to ensure the app is running before killing it (a poll-until sketch follows this list).
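
A rough sketch of the poll-until helper idea the fix relies on; the shape of Livy's own Utils.waitUntil may differ, so treat this signature as an assumption.

```
import scala.concurrent.duration._

object WaitSketch {
  // Polls a condition until it holds or the deadline passes; an assumed shape, not Livy's exact helper.
  def waitUntil(condition: () => Boolean, atMost: FiniteDuration, pollIntervalMs: Long = 100): Unit = {
    val deadline = System.nanoTime() + atMost.toNanos
    while (!condition()) {
      if (System.nanoTime() > deadline) {
        throw new java.util.concurrent.TimeoutException(s"Condition not met within $atMost")
      }
      Thread.sleep(pollIntervalMs)
    }
  }
}
```

The test then waits with something like `waitUntil(() => app.isRunning, Duration(10, TimeUnit.SECONDS))` before killing the app, as the diff below shows.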

## How was this patch tested?

Existing UTs and ITs.

Author: runzhiwang 

Closes #221 from runzhiwang/LIVY-519.
---
 server/src/main/scala/org/apache/livy/utils/SparkYarnApp.scala | 3 ++-
 server/src/test/scala/org/apache/livy/utils/SparkYarnAppSpec.scala | 3 ++-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/server/src/main/scala/org/apache/livy/utils/SparkYarnApp.scala 
b/server/src/main/scala/org/apache/livy/utils/SparkYarnApp.scala
index d51af62..14af9fa 100644
--- a/server/src/main/scala/org/apache/livy/utils/SparkYarnApp.scala
+++ b/server/src/main/scala/org/apache/livy/utils/SparkYarnApp.scala
@@ -209,7 +209,8 @@ class SparkYarnApp private[utils] (
   .getOrElse(IndexedSeq.empty)
   }
 
-  private def isRunning: Boolean = {
+  // Exposed for unit test.
+  private[utils] def isRunning: Boolean = {
 state != SparkApp.State.FAILED && state != SparkApp.State.FINISHED &&
   state != SparkApp.State.KILLED
   }
diff --git a/server/src/test/scala/org/apache/livy/utils/SparkYarnAppSpec.scala 
b/server/src/test/scala/org/apache/livy/utils/SparkYarnAppSpec.scala
index 823ae72..672444f 100644
--- a/server/src/test/scala/org/apache/livy/utils/SparkYarnAppSpec.scala
+++ b/server/src/test/scala/org/apache/livy/utils/SparkYarnAppSpec.scala
@@ -35,7 +35,7 @@ import org.mockito.stubbing.Answer
 import org.scalatest.FunSpec
 import org.scalatest.mock.MockitoSugar.mock
 
-import org.apache.livy.{LivyBaseUnitTestSuite, LivyConf}
+import org.apache.livy.{LivyBaseUnitTestSuite, LivyConf, Utils}
 import org.apache.livy.utils.SparkApp._
 
 class SparkYarnAppSpec extends FunSpec with LivyBaseUnitTestSuite {
@@ -145,6 +145,7 @@ class SparkYarnAppSpec extends FunSpec with 
LivyBaseUnitTestSuite {
 
when(mockYarnClient.getApplicationReport(appId)).thenReturn(mockAppReport)
 
 val app = new SparkYarnApp(appTag, appIdOption, None, None, livyConf, 
mockYarnClient)
+Utils.waitUntil({ () => app.isRunning }, Duration(10, 
TimeUnit.SECONDS))
 cleanupThread(app.yarnAppMonitorThread) {
   app.kill()
   appKilled = true



[incubator-livy] branch master updated (a90f4fa -> 4521ef9)

2019-08-30 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git.


from a90f4fa  [LIVY-642] Fix a rare status happened in yarn cause SparkApp 
change into error state
 add 4521ef9  [LIVY-586] Fix batch state from starting to dead when startup 
fail

No new revisions were added by this update.

Summary of changes:
 .../scala/org/apache/livy/utils/SparkYarnApp.scala | 10 -
 .../org/apache/livy/utils/SparkYarnAppSpec.scala   | 24 ++
 2 files changed, 33 insertions(+), 1 deletion(-)



[incubator-livy] branch master updated: [LIVY-617] Livy session leak on Yarn when creating session duplicated names

2019-08-28 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/master by this push:
 new 4ec3b9b  [LIVY-617] Livy session leak on Yarn when creating session 
duplicated names
4ec3b9b is described below

commit 4ec3b9b47b556390b2f738df62b5b277fa02f6ef
Author: Shanyu Zhao 
AuthorDate: Wed Aug 28 19:04:57 2019 +0800

[LIVY-617] Livy session leak on Yarn when creating session duplicated names

## What changes were proposed in this pull request?
When creating a session with a duplicated name, we should stop the session rather than only throwing an exception in the SessionManager.register() method. Otherwise the session's driver process keeps running and ends up as a leaked Yarn application.

https://issues.apache.org/jira/browse/LIVY-617

## How was this patch tested?

This is a simple fix, verified with a manual end-to-end test.

Author: Shanyu Zhao 

Closes #187 from shanyu/shanyu.
---
 .../src/main/scala/org/apache/livy/sessions/SessionManager.scala | 9 -
 server/src/test/scala/org/apache/livy/sessions/MockSession.scala | 5 -
 .../test/scala/org/apache/livy/sessions/SessionManagerSpec.scala | 2 ++
 3 files changed, 14 insertions(+), 2 deletions(-)

diff --git 
a/server/src/main/scala/org/apache/livy/sessions/SessionManager.scala 
b/server/src/main/scala/org/apache/livy/sessions/SessionManager.scala
index f8f98a2..f2548ac 100644
--- a/server/src/main/scala/org/apache/livy/sessions/SessionManager.scala
+++ b/server/src/main/scala/org/apache/livy/sessions/SessionManager.scala
@@ -98,7 +98,10 @@ class SessionManager[S <: Session, R <: RecoveryMetadata : 
ClassTag](
 synchronized {
   session.name.foreach { sessionName =>
 if (sessionsByName.contains(sessionName)) {
-  throw new IllegalArgumentException(s"Duplicate session name: 
${session.name}")
+  val errMsg = s"Duplicate session name: ${session.name} for session 
${session.id}"
+  error(errMsg)
+  session.stop()
+  throw new IllegalArgumentException(errMsg)
 } else {
   sessionsByName.put(sessionName, session)
 }
@@ -106,6 +109,7 @@ class SessionManager[S <: Session, R <: RecoveryMetadata : 
ClassTag](
   sessions.put(session.id, session)
   session.start()
 }
+info(s"Registered new session ${session.id}")
 session
   }
 
@@ -122,6 +126,7 @@ class SessionManager[S <: Session, R <: RecoveryMetadata : 
ClassTag](
   }
 
   def delete(session: S): Future[Unit] = {
+info(s"Deleting session ${session.id}")
 session.stop().map { case _ =>
   try {
 sessionStore.remove(sessionType, session.id)
@@ -133,6 +138,8 @@ class SessionManager[S <: Session, R <: RecoveryMetadata : 
ClassTag](
 case NonFatal(e) =>
   error("Exception was thrown during stop session:", e)
   throw e
+  } finally {
+info(s"Deleted session ${session.id}")
   }
 }
   }
diff --git a/server/src/test/scala/org/apache/livy/sessions/MockSession.scala 
b/server/src/test/scala/org/apache/livy/sessions/MockSession.scala
index ddcbd4b..f9609b1 100644
--- a/server/src/test/scala/org/apache/livy/sessions/MockSession.scala
+++ b/server/src/test/scala/org/apache/livy/sessions/MockSession.scala
@@ -27,7 +27,10 @@ class MockSession(id: Int, owner: String, conf: LivyConf, 
name: Option[String] =
 
   override def start(): Unit = ()
 
-  override protected def stopSession(): Unit = ()
+  var stopped = false
+  override protected def stopSession(): Unit = {
+stopped = true
+  }
 
   override def logLines(): IndexedSeq[String] = IndexedSeq()
 
diff --git 
a/server/src/test/scala/org/apache/livy/sessions/SessionManagerSpec.scala 
b/server/src/test/scala/org/apache/livy/sessions/SessionManagerSpec.scala
index a5e9ffa..100c756 100644
--- a/server/src/test/scala/org/apache/livy/sessions/SessionManagerSpec.scala
+++ b/server/src/test/scala/org/apache/livy/sessions/SessionManagerSpec.scala
@@ -92,6 +92,8 @@ class SessionManagerSpec extends FunSpec with Matchers with 
LivyBaseUnitTestSuit
   an[IllegalArgumentException] should be thrownBy 
manager.register(session2)
   manager.get(session1.id).isDefined should be(true)
   manager.get(session2.id).isDefined should be(false)
+  assert(!session1.stopped)
+  assert(session2.stopped)
   manager.shutdown()
 }
 



[incubator-livy] branch master updated (3626382 -> 256702e)

2019-08-26 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git.


from 3626382  [LIVY-574][TESTS][THRIFT] Add tests for metadata operations
 add 256702e  [LIVY-639][REPL] add start time and completion time and 
duration to the statements web ui

No new revisions were added by this update.

Summary of changes:
 docs/rest-api.md   | 15 +
 .../main/scala/org/apache/livy/repl/Session.scala  |  3 ++
 .../java/org/apache/livy/rsc/driver/Statement.java |  2 ++
 .../server/ui/static/html/statements-table.html| 18 ++
 .../org/apache/livy/server/ui/static/js/session.js | 38 ++
 5 files changed, 76 insertions(+)



[incubator-livy] branch master updated: [LIVY-622][LIVY-623][LIVY-624][LIVY-625][THRIFT] Support GetFunctions, GetSchemas, GetTables, GetColumns in Livy thrift server

2019-08-15 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/master by this push:
 new cae9d97  [LIVY-622][LIVY-623][LIVY-624][LIVY-625][THRIFT] Support 
GetFunctions, GetSchemas, GetTables, GetColumns in Livy thrift server
cae9d97 is described below

commit cae9d97185bf371912dcd863dff5babfd9cb704a
Author: yihengwang 
AuthorDate: Fri Aug 16 10:32:11 2019 +0800

[LIVY-622][LIVY-623][LIVY-624][LIVY-625][THRIFT] Support GetFunctions, 
GetSchemas, GetTables, GetColumns in Livy thrift server

## What changes were proposed in this pull request?
In this patch, we add the implementations of GetSchemas, GetFunctions, 
GetTables, and GetColumns in Livy Thrift server.

https://issues.apache.org/jira/browse/LIVY-622
https://issues.apache.org/jira/browse/LIVY-623
https://issues.apache.org/jira/browse/LIVY-624
https://issues.apache.org/jira/browse/LIVY-625

## How was this patch tested?
Added new unit tests and an integration test, and ran them together with the existing tests.
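
As context for what these operations expose to clients, a hedged sketch of
exercising them through standard JDBC metadata calls (which the Hive JDBC
driver maps onto these Thrift operations); the URL, port, database and
credentials below are placeholder assumptions, not values taken from this
patch:

```
import java.sql.DriverManager

object ThriftMetadataSketch extends App {
  // Placeholder connection details; the Hive JDBC driver must be on the classpath.
  val conn = DriverManager.getConnection("jdbc:hive2://localhost:10090/default", "user", "")
  val md = conn.getMetaData

  // GetSchemas
  val schemas = md.getSchemas()
  while (schemas.next()) println(schemas.getString("TABLE_SCHEM"))

  // GetTables
  val tables = md.getTables(null, "default", "%", Array("TABLE"))
  while (tables.next()) println(tables.getString("TABLE_NAME"))

  // GetColumns
  val columns = md.getColumns(null, "default", "%", "%")
  while (columns.next())
    println(columns.getString("COLUMN_NAME") + ": " + columns.getString("TYPE_NAME"))

  // GetFunctions
  val functions = md.getFunctions(null, null, "%")
  while (functions.next()) println(functions.getString("FUNCTION_NAME"))

  conn.close()
}
```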

Author: yihengwang 

Closes #194 from yiheng/fix_575.
---
 .../apache/livy/thriftserver/LivyCLIService.scala  |  16 +--
 .../livy/thriftserver/LivyOperationManager.scala   |  63 
 .../livy/thriftserver/cli/ThriftCLIService.scala   |  17 ++-
 .../operation/GetColumnsOperation.scala| 102 +
 .../operation/GetFunctionsOperation.scala  |  94 
 .../operation/GetSchemasOperation.scala|  63 
 .../operation/GetTablesOperation.scala |  73 ++
 .../thriftserver/operation/MetadataOperation.scala |   6 +
 .../operation/SparkCatalogOperation.scala  | 119 
 .../livy/thriftserver/ThriftServerSuites.scala | 158 -
 .../livy/thriftserver/session/CatalogJobState.java |  28 
 .../session/CleanupCatalogResultJob.java   |  37 +
 .../livy/thriftserver/session/ColumnBuffer.java|  36 +
 .../session/FetchCatalogResultJob.java |  51 +++
 .../livy/thriftserver/session/GetColumnsJob.java   |  93 
 .../livy/thriftserver/session/GetFunctionsJob.java |  67 +
 .../livy/thriftserver/session/GetSchemasJob.java   |  47 ++
 .../livy/thriftserver/session/GetTablesJob.java|  92 
 .../livy/thriftserver/session/SparkCatalogJob.java |  50 +++
 .../livy/thriftserver/session/SparkUtils.java  | 113 +++
 .../thriftserver/session/ThriftSessionState.java   |  32 +
 .../thriftserver/session/ThriftSessionTest.java|  53 ++-
 22 files changed, 1395 insertions(+), 15 deletions(-)

diff --git 
a/thriftserver/server/src/main/scala/org/apache/livy/thriftserver/LivyCLIService.scala
 
b/thriftserver/server/src/main/scala/org/apache/livy/thriftserver/LivyCLIService.scala
index 725bdc8..3c84b4a 100644
--- 
a/thriftserver/server/src/main/scala/org/apache/livy/thriftserver/LivyCLIService.scala
+++ 
b/thriftserver/server/src/main/scala/org/apache/livy/thriftserver/LivyCLIService.scala
@@ -215,8 +215,8 @@ class LivyCLIService(server: LivyThriftServer)
   sessionHandle: SessionHandle,
   catalogName: String,
   schemaName: String): OperationHandle = {
-// TODO
-throw new HiveSQLException("Operation GET_SCHEMAS is not yet supported")
+sessionManager.operationManager.getSchemas(
+  sessionHandle, catalogName, schemaName)
   }
 
   @throws[HiveSQLException]
@@ -226,8 +226,8 @@ class LivyCLIService(server: LivyThriftServer)
   schemaName: String,
   tableName: String,
   tableTypes: util.List[String]): OperationHandle = {
-// TODO
-throw new HiveSQLException("Operation GET_TABLES is not yet supported")
+sessionManager.operationManager.getTables(
+  sessionHandle, catalogName, schemaName, tableName, tableTypes)
   }
 
   @throws[HiveSQLException]
@@ -243,8 +243,8 @@ class LivyCLIService(server: LivyThriftServer)
   schemaName: String,
   tableName: String,
   columnName: String): OperationHandle = {
-// TODO
-throw new HiveSQLException("Operation GET_COLUMNS is not yet supported")
+sessionManager.operationManager.getColumns(
+  sessionHandle, catalogName, schemaName, tableName, columnName)
   }
 
   @throws[HiveSQLException]
@@ -253,8 +253,8 @@ class LivyCLIService(server: LivyThriftServer)
   catalogName: String,
   schemaName: String,
   functionName: String): OperationHandle = {
-// TODO
-throw new HiveSQLException("Operation GET_FUNCTIONS is not yet supported")
+sessionManager.operationManager.getFunctions(
+  sessionHandle, catalogName, schemaName, functionName)
   }
 
   @throws[HiveSQLException]
diff --git 
a/thriftserver/server/src/main/scala/org/apache/livy/thriftserver/LivyOperationManager.scala
 

[incubator-livy] branch master updated: [LIVY-547][SERVER] Livy kills session after livy.server.session.timeout even if the session is active

2019-08-09 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/master by this push:
 new e7f23e0  [LIVY-547][SERVER] Livy kills session after 
livy.server.session.timeout even if the session is active
e7f23e0 is described below

commit e7f23e06606ff79aec1c22e79a96f959cb89a8be
Author: Shanyu Zhao 
AuthorDate: Fri Aug 9 14:51:08 2019 +0800

[LIVY-547][SERVER] Livy kills session after livy.server.session.timeout 
even if the session is active

## What changes were proposed in this pull request?
Add a new configuration:
  livy.server.session.timeout-check.skip-busy
It indicates whether to skip the timeout check for a busy session and defaults
to false for backward compatibility.
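
A condensed sketch of the resulting timeout decision; the types below are
simplified stand-ins for Livy's session state and configuration handling, not
the actual SessionManager code:

```
object TimeoutCheckSketch extends App {
  sealed trait SessionState
  case object Idle extends SessionState
  case object Busy extends SessionState

  def shouldTimeout(
      state: SessionState,
      idleNanos: Long,
      timeoutNanos: Long,
      timeoutCheck: Boolean = true,  // livy.server.session.timeout-check
      skipBusy: Boolean = false      // livy.server.session.timeout-check.skip-busy
    ): Boolean = {
    if (!timeoutCheck) false
    else if (state == Busy && skipBusy) false  // busy sessions never time out
    else idleNanos > timeoutNanos
  }

  // With skip-busy enabled, a busy session is kept alive regardless of idle time.
  assert(!shouldTimeout(Busy, idleNanos = Long.MaxValue, timeoutNanos = 1L, skipBusy = true))
  // With it disabled (the default), the same session would time out.
  assert(shouldTimeout(Busy, idleNanos = Long.MaxValue, timeoutNanos = 1L))
}
```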

https://issues.apache.org/jira/browse/LIVY-547

## How was this patch tested?

Manually tested the configuration.

Author: Shanyu Zhao 

Closes #190 from shanyu/shanyu-547.
---
 conf/livy.conf.template  |  6 +-
 server/src/main/scala/org/apache/livy/LivyConf.scala |  3 +++
 .../org/apache/livy/sessions/SessionManager.scala|  4 
 .../scala/org/apache/livy/sessions/MockSession.scala |  3 ++-
 .../apache/livy/sessions/SessionManagerSpec.scala| 20 ++--
 5 files changed, 32 insertions(+), 4 deletions(-)

diff --git a/conf/livy.conf.template b/conf/livy.conf.template
index 2590e87..de7c248 100644
--- a/conf/livy.conf.template
+++ b/conf/livy.conf.template
@@ -50,8 +50,12 @@
 
 # Enabled to check whether timeout Livy sessions should be stopped.
 # livy.server.session.timeout-check = true
+#
+# Whether or not to skip timeout check for a busy session
+# livy.server.session.timeout-check.skip-busy = false
 
-# Time in milliseconds on how long Livy will wait before timing out an idle 
session.
+# Time in milliseconds on how long Livy will wait before timing out an 
inactive session.
+# Note that the inactive session could be busy running jobs.
 # livy.server.session.timeout = 1h
 #
 # How long a finished session state should be kept in LivyServer for query.
diff --git a/server/src/main/scala/org/apache/livy/LivyConf.scala 
b/server/src/main/scala/org/apache/livy/LivyConf.scala
index 32b3522..dec8e4a 100644
--- a/server/src/main/scala/org/apache/livy/LivyConf.scala
+++ b/server/src/main/scala/org/apache/livy/LivyConf.scala
@@ -216,6 +216,9 @@ object LivyConf {
   // Whether session timeout should be checked, by default it will be checked, 
which means inactive
   // session will be stopped after "livy.server.session.timeout"
   val SESSION_TIMEOUT_CHECK = Entry("livy.server.session.timeout-check", true)
+  // Whether session timeout check should skip busy sessions, if set to true, 
then busy sessions
+  // that have jobs running will never timeout.
+  val SESSION_TIMEOUT_CHECK_SKIP_BUSY = 
Entry("livy.server.session.timeout-check.skip-busy", false)
   // How long will an inactive session be gc-ed.
   val SESSION_TIMEOUT = Entry("livy.server.session.timeout", "1h")
   // How long a finished session state will be kept in memory
diff --git 
a/server/src/main/scala/org/apache/livy/sessions/SessionManager.scala 
b/server/src/main/scala/org/apache/livy/sessions/SessionManager.scala
index a63cab3..f8f98a2 100644
--- a/server/src/main/scala/org/apache/livy/sessions/SessionManager.scala
+++ b/server/src/main/scala/org/apache/livy/sessions/SessionManager.scala
@@ -77,6 +77,8 @@ class SessionManager[S <: Session, R <: RecoveryMetadata : 
ClassTag](
 
 
   private[this] final val sessionTimeoutCheck = 
livyConf.getBoolean(LivyConf.SESSION_TIMEOUT_CHECK)
+  private[this] final val sessionTimeoutCheckSkipBusy =
+livyConf.getBoolean(LivyConf.SESSION_TIMEOUT_CHECK_SKIP_BUSY)
   private[this] final val sessionTimeout =
 
TimeUnit.MILLISECONDS.toNanos(livyConf.getTimeAsMs(LivyConf.SESSION_TIMEOUT))
   private[this] final val sessionStateRetainedInSec =
@@ -153,6 +155,8 @@ class SessionManager[S <: Session, R <: RecoveryMetadata : 
ClassTag](
 case _ =>
   if (!sessionTimeoutCheck) {
 false
+  } else if (session.state == SessionState.Busy && 
sessionTimeoutCheckSkipBusy) {
+false
   } else if (session.isInstanceOf[BatchSession]) {
 false
   } else {
diff --git a/server/src/test/scala/org/apache/livy/sessions/MockSession.scala 
b/server/src/test/scala/org/apache/livy/sessions/MockSession.scala
index 3d0cc26..ddcbd4b 100644
--- a/server/src/test/scala/org/apache/livy/sessions/MockSession.scala
+++ b/server/src/test/scala/org/apache/livy/sessions/MockSession.scala
@@ -31,7 +31,8 @@ class MockSession(id: Int, owner: String, conf: LivyConf, 
name: Option[String] =
 
  override def logLines(): IndexedSeq[String] = IndexedSeq()

[spark] branch master updated: [SPARK-28475][CORE] Add regex MetricFilter to GraphiteSink

2019-08-02 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 6d32dee  [SPARK-28475][CORE] Add regex MetricFilter to GraphiteSink
6d32dee is described below

commit 6d32deeecc5a80230158e12982a5c1ea3f70d89d
Author: Nick Karpov 
AuthorDate: Fri Aug 2 17:50:15 2019 +0800

[SPARK-28475][CORE] Add regex MetricFilter to GraphiteSink

## What changes were proposed in this pull request?

Today all registered metric sources are reported to GraphiteSink with no 
filtering mechanism, although the codahale project does support it.

GraphiteReporter (a ScheduledReporter) from the codahale project requires you
to implement and supply the MetricFilter interface (by default the codahale
project ships only a single implementation, MetricFilter.ALL).

This PR proposes adding an additional regex config to the GraphiteSink that
matches and filters metric names.
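
A hedged sketch of the underlying mechanism, a MetricFilter backed by a regex;
the pattern and metric names below are made up for illustration, and the real
sink reads the pattern from its "regex" property instead:

```
import com.codahale.metrics.{Metric, MetricFilter, MetricRegistry}

object GraphiteRegexFilterSketch extends App {
  // Example pattern only; GraphiteSink takes it from the sink's "regex" property.
  val pattern = "executor\\..*".r

  val filter: MetricFilter = new MetricFilter {
    override def matches(name: String, metric: Metric): Boolean =
      pattern.findFirstMatchIn(name).isDefined
  }

  val registry = new MetricRegistry()
  registry.counter("executor.bytesRead").inc()
  registry.counter("driver.jvm.heap.used").inc()

  // Only metric names matching the regex pass the filter.
  println(registry.getCounters(filter).keySet())  // [executor.bytesRead]
}
```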

## How was this patch tested?

Included a GraphiteSinkSuite that tests:

1. Absence of regex filter (existing default behavior maintained)
2. Presence of `regex=` correctly filters metric keys

Closes #25232 from nkarpov/graphite_regex.

Authored-by: Nick Karpov 
Signed-off-by: jerryshao 
---
 conf/metrics.properties.template   |  1 +
 .../apache/spark/metrics/sink/GraphiteSink.scala   | 13 +++-
 .../spark/metrics/sink/GraphiteSinkSuite.scala | 84 ++
 docs/monitoring.md |  1 +
 4 files changed, 98 insertions(+), 1 deletion(-)

diff --git a/conf/metrics.properties.template b/conf/metrics.properties.template
index 23407e1..da0b06d 100644
--- a/conf/metrics.properties.template
+++ b/conf/metrics.properties.template
@@ -121,6 +121,7 @@
 #   unit  seconds   Unit of the poll period
 #   prefixEMPTY STRING  Prefix to prepend to every metric's name
 #   protocol  tcp   Protocol ("tcp" or "udp") to use
+#   regex NONE  Optional filter to send only metrics matching this 
regex string
 
 # org.apache.spark.metrics.sink.StatsdSink
 #   Name: Default:  Description:
diff --git 
a/core/src/main/scala/org/apache/spark/metrics/sink/GraphiteSink.scala 
b/core/src/main/scala/org/apache/spark/metrics/sink/GraphiteSink.scala
index 21b4dfb..05d553e 100644
--- a/core/src/main/scala/org/apache/spark/metrics/sink/GraphiteSink.scala
+++ b/core/src/main/scala/org/apache/spark/metrics/sink/GraphiteSink.scala
@@ -20,7 +20,7 @@ package org.apache.spark.metrics.sink
 import java.util.{Locale, Properties}
 import java.util.concurrent.TimeUnit
 
-import com.codahale.metrics.MetricRegistry
+import com.codahale.metrics.{Metric, MetricFilter, MetricRegistry}
 import com.codahale.metrics.graphite.{Graphite, GraphiteReporter, GraphiteUDP}
 
 import org.apache.spark.SecurityManager
@@ -38,6 +38,7 @@ private[spark] class GraphiteSink(val property: Properties, 
val registry: Metric
   val GRAPHITE_KEY_UNIT = "unit"
   val GRAPHITE_KEY_PREFIX = "prefix"
   val GRAPHITE_KEY_PROTOCOL = "protocol"
+  val GRAPHITE_KEY_REGEX = "regex"
 
   def propertyToOption(prop: String): Option[String] = 
Option(property.getProperty(prop))
 
@@ -72,10 +73,20 @@ private[spark] class GraphiteSink(val property: Properties, 
val registry: Metric
 case Some(p) => throw new Exception(s"Invalid Graphite protocol: $p")
   }
 
+  val filter = propertyToOption(GRAPHITE_KEY_REGEX) match {
+case Some(pattern) => new MetricFilter() {
+  override def matches(name: String, metric: Metric): Boolean = {
+pattern.r.findFirstMatchIn(name).isDefined
+  }
+}
+case None => MetricFilter.ALL
+  }
+
   val reporter: GraphiteReporter = GraphiteReporter.forRegistry(registry)
   .convertDurationsTo(TimeUnit.MILLISECONDS)
   .convertRatesTo(TimeUnit.SECONDS)
   .prefixedWith(prefix)
+  .filter(filter)
   .build(graphite)
 
   override def start() {
diff --git 
a/core/src/test/scala/org/apache/spark/metrics/sink/GraphiteSinkSuite.scala 
b/core/src/test/scala/org/apache/spark/metrics/sink/GraphiteSinkSuite.scala
new file mode 100644
index 000..2369218
--- /dev/null
+++ b/core/src/test/scala/org/apache/spark/metrics/sink/GraphiteSinkSuite.scala
@@ -0,0 +1,84 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless requi

[incubator-livy] branch master updated: [MINOR] Remove unused guava import

2019-08-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/master by this push:
 new 788767e  [MINOR] Remove unused guava import
788767e is described below

commit 788767e4cced78f18798f73a651f3ec2d087f938
Author: jerryshao 
AuthorDate: Thu Aug 1 19:41:23 2019 +0800

[MINOR] Remove unused guava import

## What changes were proposed in this pull request?

PR #181 removed the guava dependency from the Livy server, but left an unused
guava import behind. This minor fix removes that unused import.

## How was this patch tested?

Existing UTs.

Author: jerryshao 

Closes #191 from jerryshao/remove-guava-minor.
---
 server/src/main/scala/org/apache/livy/utils/LineBufferedStream.scala | 2 --
 1 file changed, 2 deletions(-)

diff --git 
a/server/src/main/scala/org/apache/livy/utils/LineBufferedStream.scala 
b/server/src/main/scala/org/apache/livy/utils/LineBufferedStream.scala
index c792b61..69e8d19 100644
--- a/server/src/main/scala/org/apache/livy/utils/LineBufferedStream.scala
+++ b/server/src/main/scala/org/apache/livy/utils/LineBufferedStream.scala
@@ -23,8 +23,6 @@ import java.util.concurrent.locks.ReentrantLock
 
 import scala.io.Source
 
-import com.google.common.collect.EvictingQueue
-
 import org.apache.livy.Logging
 
 class CircularQueue[T](var capacity: Int) extends util.LinkedList[T] {



[incubator-livy] branch master updated: [LIVY-587] Remove unused guava dependency

2019-07-29 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/master by this push:
 new 1ce266d  [LIVY-587] Remove unused guava dependency
1ce266d is described below

commit 1ce266d78fb3475af6154262629ff92f5d342ae6
Author: runzhiwang <1769910...@qq.com>
AuthorDate: Mon Jul 29 16:29:23 2019 +0800

[LIVY-587] Remove unused guava dependency

## What changes were proposed in this pull request?

Guava is not used any more, and it's too heavy to include, so remove the
guava dependency.
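
Among the changes, guava's EvictingQueue in LineBufferedStream is replaced by a
small LinkedList-based queue; a self-contained sketch of that replacement, with
an illustrative usage example added here:

```
import java.util

// Drop the oldest element once capacity is reached (guava-free EvictingQueue stand-in).
class CircularQueue[T](var capacity: Int) extends util.LinkedList[T] {
  override def add(t: T): Boolean = {
    if (size >= capacity) removeFirst
    super.add(t)
  }
}

object CircularQueueSketch extends App {
  val q = new CircularQueue[String](2)
  q.add("a"); q.add("b"); q.add("c")
  // The oldest entry "a" was evicted; only "b" and "c" remain.
  assert(q.size == 2 && q.getFirst == "b" && q.getLast == "c")
}
```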

## How was this patch tested?

Existing unit tests.

Author: runzhiwang <1769910...@qq.com>

Closes #181 from runzhiwang/livy-587.
---
 pom.xml|  7 ---
 server/pom.xml |  5 -
 .../apache/livy/server/interactive/InteractiveSession.scala|  2 --
 .../main/scala/org/apache/livy/utils/LineBufferedStream.scala  | 10 +-
 4 files changed, 9 insertions(+), 15 deletions(-)

diff --git a/pom.xml b/pom.xml
index 23909d1..f98071c 100644
--- a/pom.xml
+++ b/pom.xml
@@ -84,7 +84,6 @@
 ${spark.scala-2.11.version}
 3.0.0
 1.9
-15.0
 4.5.3
 4.4.4
 2.9.9
@@ -284,12 +283,6 @@
   
 
   
-com.google.guava
-guava
-${guava.version}
-  
-
-  
 io.dropwizard.metrics
 metrics-core
 ${metrics.version}
diff --git a/server/pom.xml b/server/pom.xml
index d043dfe..e708964 100644
--- a/server/pom.xml
+++ b/server/pom.xml
@@ -75,11 +75,6 @@
 
 
 
-  com.google.guava
-  guava
-
-
-
   io.dropwizard.metrics
   metrics-core
 
diff --git 
a/server/src/main/scala/org/apache/livy/server/interactive/InteractiveSession.scala
 
b/server/src/main/scala/org/apache/livy/server/interactive/InteractiveSession.scala
index 9529ed3..bccdb4d 100644
--- 
a/server/src/main/scala/org/apache/livy/server/interactive/InteractiveSession.scala
+++ 
b/server/src/main/scala/org/apache/livy/server/interactive/InteractiveSession.scala
@@ -31,7 +31,6 @@ import scala.concurrent.duration.{Duration, FiniteDuration}
 import scala.util.{Random, Try}
 
 import com.fasterxml.jackson.annotation.JsonIgnoreProperties
-import com.google.common.annotations.VisibleForTesting
 import org.apache.hadoop.fs.Path
 import org.apache.spark.launcher.SparkLauncher
 
@@ -155,7 +154,6 @@ object InteractiveSession extends Logging {
   mockApp)
   }
 
-  @VisibleForTesting
   private[interactive] def prepareBuilderProp(
 conf: Map[String, String],
 kind: Kind,
diff --git 
a/server/src/main/scala/org/apache/livy/utils/LineBufferedStream.scala 
b/server/src/main/scala/org/apache/livy/utils/LineBufferedStream.scala
index 6896d2f..c792b61 100644
--- a/server/src/main/scala/org/apache/livy/utils/LineBufferedStream.scala
+++ b/server/src/main/scala/org/apache/livy/utils/LineBufferedStream.scala
@@ -18,6 +18,7 @@
 package org.apache.livy.utils
 
 import java.io.InputStream
+import java.util
 import java.util.concurrent.locks.ReentrantLock
 
 import scala.io.Source
@@ -26,9 +27,16 @@ import com.google.common.collect.EvictingQueue
 
 import org.apache.livy.Logging
 
+class CircularQueue[T](var capacity: Int) extends util.LinkedList[T] {
+  override def add(t: T): Boolean = {
+if (size >= capacity) removeFirst
+super.add(t)
+  }
+}
+
 class LineBufferedStream(inputStream: InputStream, logSize: Int) extends 
Logging {
 
-  private[this] val _lines: EvictingQueue[String] = 
EvictingQueue.create[String](logSize)
+  private[this] val _lines: CircularQueue[String] = new 
CircularQueue[String](logSize)
 
   private[this] val _lock = new ReentrantLock()
   private[this] val _condition = _lock.newCondition()



[incubator-livy] branch master updated: [LIVY-613][REPL] Livy can't handle the java.sql.Date type correctly.

2019-07-26 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/master by this push:
 new 92062e1  [LIVY-613][REPL] Livy can't handle the java.sql.Date type 
correctly.
92062e1 is described below

commit 92062e1659db2af85711b1f35c50ff4050fec675
Author: yangping.wyp 
AuthorDate: Fri Jul 26 19:34:42 2019 +0800

[LIVY-613][REPL] Livy can't handle the java.sql.Date type correctly.

## What changes were proposed in this pull request?

When a Spark table has a `java.sql.Date` column, Livy can't handle the
`java.sql.Date` type correctly, e.g.
```
create table test(
name string,
birthday date
);

insert into test values ('Livy', '2019-07-24')

curl -H "Content-Type:application/json" -X POST -d '{"code":"select * from 
test", "kind":"sql"}' 192.168.1.6:8998/sessions/48/statements
{"id":1,"code":"select * from 
test","state":"waiting","output":null,"progress":0.0}

curl 192.168.1.6:8998/sessions/48/statements/1
{"id":1,"code":"select * from 
test","state":"available","output":{"status":"ok","execution_count":1,"data":{"application/json":{"schema":{"type":"struct","fields":[{"name":"name","type":"string","nullable":true,"metadata":{}},{"name":"birthday","type":"date","nullable":true,"metadata":{}}]},"data":[["Livy",{}]]}}},"progress":1.0}
```
As you can see, the output of `select * from test` is `["Livy",{}]`; the
birthday column's value isn't handled correctly.

The reason is that json4s can't handle `java.sql.Date`, so we should define
a `CustomSerializer` for `java.sql.Date`.

This PR adds a `DateSerializer` to support parsing and serializing `java.sql.Date`.
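
A standalone sketch of the serializer this patch adds, together with an
illustrative usage example (the Person case class and the printed output here
are for demonstration only):

```
import java.sql.Date
import org.json4s._
import org.json4s.JsonAST.{JNull, JString}
import org.json4s.jackson.Serialization

object DateSerializerSketch extends App {
  case object DateSerializer extends CustomSerializer[Date](_ => ( {
    case JString(s) => Date.valueOf(s)
    case JNull => null
  }, {
    case d: Date => JString(d.toString)
  }))

  case class Person(name: String, birthday: Date)

  implicit val formats: Formats = DefaultFormats + DateSerializer

  // Without DateSerializer, json4s renders the Date field as {}; with it:
  println(Serialization.write(Person("Livy", Date.valueOf("2019-07-24"))))
  // prints {"name":"Livy","birthday":"2019-07-24"}
}
```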

## How was this patch tested?


Author: yangping.wyp 

Closes #186 from 397090770/master.
---
 .../org/apache/livy/repl/SQLInterpreter.scala  | 11 +++-
 .../org/apache/livy/repl/SQLInterpreterSpec.scala  | 69 +-
 2 files changed, 78 insertions(+), 2 deletions(-)

diff --git a/repl/src/main/scala/org/apache/livy/repl/SQLInterpreter.scala 
b/repl/src/main/scala/org/apache/livy/repl/SQLInterpreter.scala
index 5a7b606..9abbf2c 100644
--- a/repl/src/main/scala/org/apache/livy/repl/SQLInterpreter.scala
+++ b/repl/src/main/scala/org/apache/livy/repl/SQLInterpreter.scala
@@ -18,6 +18,7 @@
 package org.apache.livy.repl
 
 import java.lang.reflect.InvocationTargetException
+import java.sql.Date
 
 import scala.util.control.NonFatal
 
@@ -25,6 +26,7 @@ import org.apache.spark.SparkConf
 import org.apache.spark.sql.Row
 import org.apache.spark.sql.SparkSession
 import org.json4s._
+import org.json4s.JsonAST.{JNull, JString}
 import org.json4s.JsonDSL._
 import org.json4s.jackson.JsonMethods._
 
@@ -66,7 +68,14 @@ class SQLInterpreter(
 rscConf: RSCConf,
 sparkEntries: SparkEntries) extends Interpreter with Logging {
 
-  private implicit def formats = DefaultFormats
+  case object DateSerializer extends CustomSerializer[Date](_ => ( {
+case JString(s) => Date.valueOf(s)
+case JNull => null
+  }, {
+case d: Date => JString(d.toString)
+  }))
+
+  private implicit def formats: Formats = DefaultFormats + DateSerializer
 
   private var spark: SparkSession = null
 
diff --git a/repl/src/test/scala/org/apache/livy/repl/SQLInterpreterSpec.scala 
b/repl/src/test/scala/org/apache/livy/repl/SQLInterpreterSpec.scala
index 37c9594..781ed72 100644
--- a/repl/src/test/scala/org/apache/livy/repl/SQLInterpreterSpec.scala
+++ b/repl/src/test/scala/org/apache/livy/repl/SQLInterpreterSpec.scala
@@ -17,17 +17,20 @@
 
 package org.apache.livy.repl
 
+import java.sql.Date
+
 import scala.util.Try
 
 import org.apache.spark.SparkConf
 import org.json4s.{DefaultFormats, JValue}
-import org.json4s.JsonAST.JArray
+import org.json4s.JsonAST.{JArray, JNull}
 import org.json4s.JsonDSL._
 
 import org.apache.livy.rsc.RSCConf
 import org.apache.livy.rsc.driver.SparkEntries
 
 case class People(name: String, age: Int)
+case class Person(name: String, birthday: Date)
 
 class SQLInterpreterSpec extends BaseInterpreterSpec {
 
@@ -43,6 +46,7

[spark] branch master updated (b94fa97 -> be4a552)

2019-07-16 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from b94fa97  [SPARK-28345][SQL][PYTHON] PythonUDF predicate should be able 
to pushdown to join
 add be4a552  [SPARK-28106][SQL] When Spark SQL use  "add jar" ,  before 
add to SparkContext, check  jar path exist first.

No new revisions were added by this update.

Summary of changes:
 .../main/scala/org/apache/spark/SparkContext.scala | 34 ++
 .../scala/org/apache/spark/SparkContextSuite.scala | 11 +++
 2 files changed, 40 insertions(+), 5 deletions(-)





[incubator-livy] branch master updated: [LIVY-582][TESTS] Hostname in python-api test should be lower case to avoid test failures

2019-07-15 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/master by this push:
 new f796192  [LIVY-582][TESTS] Hostname in python-api test should be lower 
case to avoid test failures
f796192 is described below

commit f7961924e4bed4d1257817cd871bfe0555d1cca0
Author: Yiheng Wang 
AuthorDate: Tue Jul 16 10:22:11 2019 +0800

[LIVY-582][TESTS] Hostname in python-api test should be lower case to avoid 
test failures

## What changes were proposed in this pull request?
In the python-api test code, when returning a mocked response, the mock library
compares the URL with the predefined URLs case-sensitively. However, the
`Request` library used in the Livy Python API changes the URL to lower case.
This causes test failures on a machine with an upper-case hostname.

This patch turns the hostname in the python-api test code into lower case
to avoid such test failures.

## How was this patch tested?
Existing tests, run specifically on a machine with an upper-case hostname.

Author: Yiheng Wang 

Closes #180 from yiheng/fix_582.
---
 python-api/src/test/python/livy-tests/client_test.py | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/python-api/src/test/python/livy-tests/client_test.py 
b/python-api/src/test/python/livy-tests/client_test.py
index 5e47cb0..b6426ae 100644
--- a/python-api/src/test/python/livy-tests/client_test.py
+++ b/python-api/src/test/python/livy-tests/client_test.py
@@ -26,7 +26,8 @@ from livy.client import HttpClient
 
 session_id = 0
 job_id = 1
-base_uri = 'http://{0}:{1}'.format(socket.gethostname(), 8998)
+# Make sure host name is lower case. See LIVY-582
+base_uri = 'http://{0}:{1}'.format(socket.gethostname().lower(), 8998)
 client_test = None
 invoked_queued_callback = False
 invoked_running_callback = False



[incubator-livy] branch master updated: [LIVY-603][BUILD] Upgrade build spark version to 2.4.3

2019-07-11 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/master by this push:
 new 6ad9ae9  [LIVY-603][BUILD] Upgrade build spark version to 2.4.3
6ad9ae9 is described below

commit 6ad9ae9607e8369d5787d0d0c07eb3bd7b060bfc
Author: yihengwang 
AuthorDate: Fri Jul 12 10:11:25 2019 +0800

[LIVY-603][BUILD] Upgrade build spark version to 2.4.3

## What changes were proposed in this pull request?
Bump Spark 2.4 minor version to 2.4.3.

## How was this patch tested?
Existing test

Author: yihengwang 

Closes #179 from yiheng/bump_spark_2_4_3.
---
 pom.xml | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/pom.xml b/pom.xml
index fcc5b87..6bfdcc6 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1050,15 +1050,15 @@
 
   
   
-2.4.0
+2.4.3
 ${spark.scala-2.11.version}
 4.1.17.Final
 1.8
 0.10.7
 
-  
https://archive.apache.org/dist/spark/spark-2.4.0/spark-2.4.0-bin-hadoop2.7.tgz
+  
https://archive.apache.org/dist/spark/spark-2.4.3/spark-2.4.3-bin-hadoop2.7.tgz
 
-spark-2.4.0-bin-hadoop2.7
+spark-2.4.3-bin-hadoop2.7
   
 
 



[incubator-livy] branch master updated: [MINOR] Fix spark 2.4.0 tgz file dead download link in pom.xml

2019-07-09 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-livy.git


The following commit(s) were added to refs/heads/master by this push:
 new ee94065  [MINOR] Fix spark 2.4.0 tgz file dead download link in pom.xml
ee94065 is described below

commit ee94065a630fa932488229365de68ab5e053d3a0
Author: yihengwang 
AuthorDate: Tue Jul 9 19:16:53 2019 +0800

[MINOR] Fix spark 2.4.0 tgz file dead download link in pom.xml

## What changes were proposed in this pull request?
The download link for the spark 2.4.0 tgz file in pom.xml is dead. This patch
changes all the spark tgz download links to the official spark release archive
site, which should be more stable.

## How was this patch tested?
Existing test.

Author: yihengwang 

Closes #178 from yiheng/fix_travis.
---
 pom.xml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/pom.xml b/pom.xml
index 43e698d..fcc5b87 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1036,7 +1036,7 @@
 ${spark.scala-2.11.version}
 4.1.17.Final
 
-  
http://mirrors.advancedhosters.com/apache/spark/spark-2.3.3/spark-2.3.3-bin-hadoop2.7.tgz
+  
https://archive.apache.org/dist/spark/spark-2.3.3/spark-2.3.3-bin-hadoop2.7.tgz
 
 spark-2.3.3-bin-hadoop2.7
   
@@ -1056,7 +1056,7 @@
 1.8
 0.10.7
 
-  
http://mirrors.advancedhosters.com/apache/spark/spark-2.4.0/spark-2.4.0-bin-hadoop2.7.tgz
+  
https://archive.apache.org/dist/spark/spark-2.4.0/spark-2.4.0-bin-hadoop2.7.tgz
 
 spark-2.4.0-bin-hadoop2.7
   



[spark] branch master updated: [SPARK-28202][CORE][TEST] Avoid noises of system props in SparkConfSuite

2019-07-01 Thread jshao
This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 378ed67  [SPARK-28202][CORE][TEST] Avoid noises of system props in 
SparkConfSuite
378ed67 is described below

commit 378ed677a865af47a4ab2853df2f71c36a4c378a
Author: ShuMingLi 
AuthorDate: Tue Jul 2 10:04:42 2019 +0800

[SPARK-28202][CORE][TEST] Avoid noises of system props in SparkConfSuite

When the SPARK_HOME environment variable is set and points at a distribution
with a specific `spark-defaults.conf`, the
`org.apache.spark.util.loadDefaultSparkProperties` method may pollute the
system properties. As a result, running the `core/test` module can fail in
`SparkConfSuite`.

It's easy to repair by constructing the `SparkConf` with `loadDefaults` set to
false.
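
A hedged illustration of what that flag changes; the property name and values
below are made up for the example:

```
import org.apache.spark.SparkConf

object SparkConfDefaultsSketch extends App {
  // Simulate ambient noise such as what loadDefaultSparkProperties leaves behind.
  sys.props("spark.example.leftover") = "from-spark-defaults"  // made-up key

  val ambient  = new SparkConf()       // loadDefaults = true: picks up spark.* system properties
  val isolated = new SparkConf(false)  // ignores system properties, starts empty

  assert(ambient.contains("spark.example.leftover"))
  assert(!isolated.contains("spark.example.leftover"))
}
```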

```
[info] - deprecated configs *** FAILED *** (79 milliseconds)
[info] 7 did not equal 4 (SparkConfSuite.scala:266)
[info] org.scalatest.exceptions.TestFailedException:
[info] at 
org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:528)
[info] at 
org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:527)
[info] at 
org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1560)
[info] at 
org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:501)
[info] at 
org.apache.spark.SparkConfSuite.$anonfun$new$26(SparkConfSuite.scala:266)
[info] at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
[info] at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
[info] at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[info] at org.scalatest.Transformer.apply(Transformer.scala:22)
[info] at org.scalatest.Transformer.apply(Transformer.scala:20)
[info] at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
[info] at 
org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:149)
[info] at 
org.scalatest.FunSuiteLike.invokeWithFixture$1(FunSuiteLike.scala:184)
[info] at 
org.scalatest.FunSuiteLike.$anonfun$runTest$1(FunSuiteLike.scala:196)
[info] at org.scalatest.SuperEngine.runTestImpl(Engine.scala:289)
```

Closes #24998 from LiShuMing/SPARK-28202.

Authored-by: ShuMingLi 
Signed-off-by: jerryshao 
---
 core/src/test/scala/org/apache/spark/SparkConfSuite.scala | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/core/src/test/scala/org/apache/spark/SparkConfSuite.scala 
b/core/src/test/scala/org/apache/spark/SparkConfSuite.scala
index 74f5854..6be1fed 100644
--- a/core/src/test/scala/org/apache/spark/SparkConfSuite.scala
+++ b/core/src/test/scala/org/apache/spark/SparkConfSuite.scala
@@ -242,7 +242,7 @@ class SparkConfSuite extends SparkFunSuite with 
LocalSparkContext with ResetSyst
   }
 
   test("deprecated configs") {
-val conf = new SparkConf()
+val conf = new SparkConf(false)
 val newName = UPDATE_INTERVAL_S.key
 
 assert(!conf.contains(newName))





incubator-livy git commit: [LIVY-526] Upgrade jetty version

2018-10-13 Thread jshao
Repository: incubator-livy
Updated Branches:
  refs/heads/master d87a34872 -> a068363a0


[LIVY-526] Upgrade jetty version

## What changes were proposed in this pull request?

Upgrade the jetty patch version to a more recent version that has fixes for a 
few security issues.

## How was this patch tested?

Existing unit tests

Author: Arun Mahadevan 

Closes #120 from arunmahadevan/jetty-upgrade.


Project: http://git-wip-us.apache.org/repos/asf/incubator-livy/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-livy/commit/a068363a
Tree: http://git-wip-us.apache.org/repos/asf/incubator-livy/tree/a068363a
Diff: http://git-wip-us.apache.org/repos/asf/incubator-livy/diff/a068363a

Branch: refs/heads/master
Commit: a068363a0c097ab167e8fb6de1015930e71b4cb8
Parents: d87a348
Author: Arun Mahadevan 
Authored: Sat Oct 13 15:47:18 2018 +0800
Committer: jerryshao 
Committed: Sat Oct 13 15:47:18 2018 +0800

--
 .../scala/org/apache/livy/client/http/LivyConnectionSpec.scala | 2 +-
 pom.xml| 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/incubator-livy/blob/a068363a/client-http/src/test/scala/org/apache/livy/client/http/LivyConnectionSpec.scala
--
diff --git 
a/client-http/src/test/scala/org/apache/livy/client/http/LivyConnectionSpec.scala
 
b/client-http/src/test/scala/org/apache/livy/client/http/LivyConnectionSpec.scala
index 110eb35..01db5d5 100644
--- 
a/client-http/src/test/scala/org/apache/livy/client/http/LivyConnectionSpec.scala
+++ 
b/client-http/src/test/scala/org/apache/livy/client/http/LivyConnectionSpec.scala
@@ -104,7 +104,7 @@ class LivyConnectionSpec extends FunSpecLike with 
BeforeAndAfterAll with LivyBas
 .set(LivyConf.RESPONSE_HEADER_SIZE, 1024)
   val pwd = "test-password" * 100
   val exception = intercept[IOException](test(pwd, livyConf))
-  exception.getMessage.contains("Request Entity Too Large") should be(true)
+  exception.getMessage.contains("Request Header Fields Too Large") should 
be(true)
 }
 
 it("should be succeeded with configured header size") {

http://git-wip-us.apache.org/repos/asf/incubator-livy/blob/a068363a/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 32a6b89..d9adacb 100644
--- a/pom.xml
+++ b/pom.xml
@@ -89,7 +89,7 @@
 4.4.4
 2.9.5
 3.1.0
-9.3.8.v20160314
+9.3.24.v20180605
 3.2.11
 4.11
 0.9.3


