[jira] [Commented] (HADOOP-11935) Provide optional native implementation of stat syscall.

2019-04-28 Thread Yeliang Cang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-11935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16827965#comment-16827965
 ] 

Yeliang Cang commented on HADOOP-11935:
---

I think this has been resolved by HADOOP-14600.

> Provide optional native implementation of stat syscall.
> ---
>
> Key: HADOOP-11935
> URL: https://issues.apache.org/jira/browse/HADOOP-11935
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, native
>Reporter: Chris Nauroth
>Priority: Major
> Attachments: HADOOP-11935-NativeIO-prelim.patch
>
>
> Currently, 
> {{RawLocalFileSystem.DeprecatedRawLocalFileStatus#loadPermissionInfo}} is 
> implemented as forking an {{ls}} command and parsing the output.  This was 
> observed to be a bottleneck in YARN-3491.  This issue proposes an optional 
> native implementation of a {{stat}} syscall through JNI.  We would maintain 
> the existing code as a fallback for systems where the native code is not 
> available.
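
As an illustration of the direction (a sketch, not the actual Hadoop patch, and using java.nio rather than JNI), a single syscall-backed attribute read can retrieve the same permission info that {{loadPermissionInfo}} currently parses out of {{ls}} output; all names below are illustrative:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFileAttributes;
import java.nio.file.attribute.PosixFilePermissions;

public class StatSketch {
    // Return the rwx permission string for a path with one
    // syscall-backed attribute read, instead of forking `ls -ld`
    // and parsing its output.
    static String permissions(Path p) throws IOException {
        PosixFileAttributes attrs =
            Files.readAttributes(p, PosixFileAttributes.class);
        return PosixFilePermissions.toString(attrs.permissions());
    }

    public static void main(String[] args) throws Exception {
        Path p = Files.createTempFile("stat-sketch", ".tmp");
        System.out.println(permissions(p));  // e.g. rw-------
        Files.delete(p);
    }
}
```

On non-POSIX platforms {{readAttributes}} throws {{UnsupportedOperationException}}, which mirrors why the issue proposes keeping the existing {{ls}}-based code as a fallback.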



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15913) xml parsing error in a heavily multi-threaded environment

2018-11-08 Thread Yeliang Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated HADOOP-15913:
--
Description: 
We met this problem in a production environment; the stack trace looks like this:
{code}ERROR org.apache.hadoop.hive.ql.exec.Task: Ended Job = 
job_1541600895081_0580 with exception 'java.lang.NullPointerException(Inflater 
has been closed)'
java.lang.NullPointerException: Inflater has been closed
at java.util.zip.Inflater.ensureOpen(Inflater.java:389)
at java.util.zip.Inflater.inflate(Inflater.java:257)
at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:152)
at java.io.FilterInputStream.read(FilterInputStream.java:133)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
at java.io.InputStreamReader.read(InputStreamReader.java:184)
at java.io.BufferedReader.fill(BufferedReader.java:154)
at java.io.BufferedReader.readLine(BufferedReader.java:317)
at java.io.BufferedReader.readLine(BufferedReader.java:382)
at 
javax.xml.parsers.FactoryFinder.findJarServiceProvider(FactoryFinder.java:319)
at javax.xml.parsers.FactoryFinder.find(FactoryFinder.java:255)
at 
javax.xml.parsers.DocumentBuilderFactory.newInstance(DocumentBuilderFactory.java:121)
at 
org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2524)
at 
org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2501)
at 
org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2407)
at org.apache.hadoop.conf.Configuration.get(Configuration.java:983)
at 
org.apache.hadoop.mapred.JobConf.checkAndWarnDeprecation(JobConf.java:2007)
at org.apache.hadoop.mapred.JobConf.(JobConf.java:479)
at org.apache.hadoop.mapred.JobConf.(JobConf.java:469)
at org.apache.hadoop.mapreduce.Cluster.getJob(Cluster.java:188)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:601)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:599)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at 
org.apache.hadoop.mapred.JobClient.getJobUsingCluster(JobClient.java:599)
at org.apache.hadoop.mapred.JobClient.getJobInner(JobClient.java:609)
at org.apache.hadoop.mapred.JobClient.getJob(JobClient.java:639)
at 
org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:294)
at 
org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:558)
at 
org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:457)
at 
org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:141)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197){code}
We can reproduce it in our test environment with the steps below:
1. set configs:
{code}
hive.server2.async.exec.threads  = 50
hive.server2.async.exec.wait.queue.size = 100
{code}
2. Open 4 beeline terminals on 4 different nodes.
3. Run 30 queries in each beeline terminal. Each query includes "add jar 
xxx.jar", like this:
{code}
add jar mykeytest-1.0-SNAPSHOT.jar;
create temporary function ups as 'com.xxx.manager.GetCommentNameOrId';
insert into test partition(tjrq = ${my_no}, ywtx = '${my_no2}' )
select  dt.d_year as i_brand
   ,item.i_brand_id as i_item_sk
   ,ups(item.i_brand) as i_product_name
   ,sum(ss_ext_sales_price) as i_category_id
 from  date_dim dt
  ,store_sales
  ,item
 where dt.d_date_sk = store_sales.ss_sold_date_sk
   and store_sales.ss_item_sk = item.i_item_sk
   and item.i_manufact_id = 436
   and dt.d_moy=12
 group by dt.d_year
  ,item.i_brand
  ,item.i_brand_id
 order by dt.d_year
{code}
All these 120 queries connect to one HiveServer2 instance.

Run all the queries concurrently, and you will see the stack trace above in the 
HiveServer2 log.



  was:
We met this problem in a production environment; the stack trace looks like this:
{code}ERROR org.apache.hadoop.hive.ql.exec.Task: Ended Job = 
job_1541600895081_0580 with exception 'java.lang.NullPointerException(Inflater 
has been closed)'
java.lang.NullPointerException: Inflater has been closed
at java.util.zip.Inflater.ensureOpen(Inflater.java:389)
at java.util.zip.Inflater.inflate(Inflater.java:257)
at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:152)
at java.io.FilterInputStream.read(FilterInputStream.java:133)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
at sun.nio.cs.StreamDecoder.implRead(

[jira] [Updated] (HADOOP-15913) xml parsing error in a heavily multi-threaded environment

2018-11-08 Thread Yeliang Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated HADOOP-15913:
--
Priority: Critical  (was: Major)

> xml parsing error in a heavily multi-threaded environment
> -
>
> Key: HADOOP-15913
> URL: https://issues.apache.org/jira/browse/HADOOP-15913
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yeliang Cang
>Priority: Critical
>







[jira] [Updated] (HADOOP-15913) xml parsing error in a heavily multi-threaded environment

2018-11-08 Thread Yeliang Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated HADOOP-15913:
--
Description: 
We met this problem in a production environment; the stack trace looks like this:
{code}ERROR org.apache.hadoop.hive.ql.exec.Task: Ended Job = 
job_1541600895081_0580 with exception 'java.lang.NullPointerException(Inflater 
has been closed)'
java.lang.NullPointerException: Inflater has been closed
at java.util.zip.Inflater.ensureOpen(Inflater.java:389)
at java.util.zip.Inflater.inflate(Inflater.java:257)
at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:152)
at java.io.FilterInputStream.read(FilterInputStream.java:133)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
at java.io.InputStreamReader.read(InputStreamReader.java:184)
at java.io.BufferedReader.fill(BufferedReader.java:154)
at java.io.BufferedReader.readLine(BufferedReader.java:317)
at java.io.BufferedReader.readLine(BufferedReader.java:382)
at 
javax.xml.parsers.FactoryFinder.findJarServiceProvider(FactoryFinder.java:319)
at javax.xml.parsers.FactoryFinder.find(FactoryFinder.java:255)
at 
javax.xml.parsers.DocumentBuilderFactory.newInstance(DocumentBuilderFactory.java:121)
at 
org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2524)
at 
org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2501)
at 
org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2407)
at org.apache.hadoop.conf.Configuration.get(Configuration.java:983)
at 
org.apache.hadoop.mapred.JobConf.checkAndWarnDeprecation(JobConf.java:2007)
at org.apache.hadoop.mapred.JobConf.(JobConf.java:479)
at org.apache.hadoop.mapred.JobConf.(JobConf.java:469)
at org.apache.hadoop.mapreduce.Cluster.getJob(Cluster.java:188)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:601)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:599)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at 
org.apache.hadoop.mapred.JobClient.getJobUsingCluster(JobClient.java:599)
at org.apache.hadoop.mapred.JobClient.getJobInner(JobClient.java:609)
at org.apache.hadoop.mapred.JobClient.getJob(JobClient.java:639)
at 
org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:294)
at 
org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:558)
at 
org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:457)
at 
org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:141)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197){code}
We can reproduce it in our test environment with the steps below:
1. set configs:
{code}
hive.server2.async.exec.threads  = 50
hive.server2.async.exec.wait.queue.size = 100
{code}
2. Open 4 beeline terminals on 4 different nodes.
3. Run 30 queries in each beeline terminal; all 120 queries 
connect to one HiveServer2 instance.



> xml parsing error in a heavily multi-threaded environment
> -
>
> Key: HADOOP-15913
> URL: https://issues.apache.org/jira/browse/HADOOP-15913
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.3
>Reporter: Yeliang Cang
>Priority: Critical
>
> We met this problem in a production environment; the stack trace looks like this:
> {code}ERROR org.apache.hadoop.hive.ql.exec.Task: Ended Job = 
> job_1541600895081_0580 with exception 
> 'java.lang.NullPointerException(Inflater has been closed)'
> java.lang.NullPointerException: Inflater has been closed
> at java.util.zip.Inflater.ensureOpen(Inflater.java:389)
> at java.util.zip.Inflater.inflate(Inflater.java:257)
> at 
> java.util.zip.InflaterInputStream.read(InflaterInputStream.java:152)
> at java.io.FilterInputStream.read(FilterInputStream.java:133)
> at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
> at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
> at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
> at java.io.InputStreamReader.read(InputStreamReader.java:184)
> at java.io.BufferedReader.fill(BufferedReader.java:154)
> at java.io.BufferedReader.readLine(BufferedReader.java:317)
> at java.io.

[jira] [Updated] (HADOOP-15913) xml parsing error in a heavily multi-threaded environment

2018-11-08 Thread Yeliang Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated HADOOP-15913:
--
Component/s: common

> xml parsing error in a heavily multi-threaded environment
> -
>
> Key: HADOOP-15913
> URL: https://issues.apache.org/jira/browse/HADOOP-15913
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.3
>Reporter: Yeliang Cang
>Priority: Critical
>







[jira] [Created] (HADOOP-15913) xml parsing error in a heavily multi-threaded environment

2018-11-08 Thread Yeliang Cang (JIRA)
Yeliang Cang created HADOOP-15913:
-

 Summary: xml parsing error in a heavily multi-threaded environment
 Key: HADOOP-15913
 URL: https://issues.apache.org/jira/browse/HADOOP-15913
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Yeliang Cang









[jira] [Commented] (HADOOP-15913) xml parsing error in a heavily multi-threaded environment

2018-11-08 Thread Yeliang Cang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16679875#comment-16679875
 ] 

Yeliang Cang commented on HADOOP-15913:
---

We have already applied https://issues.apache.org/jira/browse/HADOOP-12404 and 
still see the error.
Based on the comments in https://github.com/mikiobraun/jblas/issues/103, the 
cause appears to be misuse of a ZipFile object across multiple threads.
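
The failure mode can be shown deterministically in miniature (a sketch of the mechanism, not the HiveServer2 race itself): close a ZipFile while a decompressing stream obtained from it is still in use. Depending on JDK version and timing this surfaces as the reported "java.lang.NullPointerException: Inflater has been closed" or as an IOException, but either way the read fails:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import java.util.zip.ZipOutputStream;

public class InflaterCloseSketch {
    // Read from a ZipFile-backed stream after the ZipFile is closed and
    // report how the read fails ("ok" would mean it did not fail).
    static String readAfterClose() throws Exception {
        File f = File.createTempFile("sketch", ".zip");
        try (ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(f))) {
            zos.putNextEntry(new ZipEntry("a.bin"));
            zos.write(new byte[64 * 1024]);
            zos.closeEntry();
        }
        ZipFile zf = new ZipFile(f);
        InputStream in = zf.getInputStream(zf.getEntry("a.bin"));
        in.read();      // start decompressing
        zf.close();     // frees the shared Inflater out from under the reader
        try {
            in.read(new byte[1024]);
            return "ok";
        } catch (Exception e) {  // NPE "Inflater has been closed" or IOException
            return e.getClass().getSimpleName();
        } finally {
            f.delete();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(readAfterClose());
    }
}
```

In the reported stack trace the stream comes from the JAXP service-provider lookup reading a resource out of an "add jar" jar whose underlying ZipFile is closed by another thread.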

> xml parsing error in a heavily multi-threaded environment
> -
>
> Key: HADOOP-15913
> URL: https://issues.apache.org/jira/browse/HADOOP-15913
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.3
>Reporter: Yeliang Cang
>Priority: Critical
>
> We met this problem in a production environment; the stack trace looks like this:
> {code}ERROR org.apache.hadoop.hive.ql.exec.Task: Ended Job = 
> job_1541600895081_0580 with exception 
> 'java.lang.NullPointerException(Inflater has been closed)'
> java.lang.NullPointerException: Inflater has been closed
> at java.util.zip.Inflater.ensureOpen(Inflater.java:389)
> at java.util.zip.Inflater.inflate(Inflater.java:257)
> at 
> java.util.zip.InflaterInputStream.read(InflaterInputStream.java:152)
> at java.io.FilterInputStream.read(FilterInputStream.java:133)
> at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
> at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
> at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
> at java.io.InputStreamReader.read(InputStreamReader.java:184)
> at java.io.BufferedReader.fill(BufferedReader.java:154)
> at java.io.BufferedReader.readLine(BufferedReader.java:317)
> at java.io.BufferedReader.readLine(BufferedReader.java:382)
> at 
> javax.xml.parsers.FactoryFinder.findJarServiceProvider(FactoryFinder.java:319)
> at javax.xml.parsers.FactoryFinder.find(FactoryFinder.java:255)
> at 
> javax.xml.parsers.DocumentBuilderFactory.newInstance(DocumentBuilderFactory.java:121)
> at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2524)
> at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2501)
> at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2407)
> at org.apache.hadoop.conf.Configuration.get(Configuration.java:983)
> at 
> org.apache.hadoop.mapred.JobConf.checkAndWarnDeprecation(JobConf.java:2007)
> at org.apache.hadoop.mapred.JobConf.(JobConf.java:479)
> at org.apache.hadoop.mapred.JobConf.(JobConf.java:469)
> at org.apache.hadoop.mapreduce.Cluster.getJob(Cluster.java:188)
> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:601)
> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:599)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
> at 
> org.apache.hadoop.mapred.JobClient.getJobUsingCluster(JobClient.java:599)
> at org.apache.hadoop.mapred.JobClient.getJobInner(JobClient.java:609)
> at org.apache.hadoop.mapred.JobClient.getJob(JobClient.java:639)
> at 
> org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:294)
> at 
> org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:558)
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:457)
> at 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:141)
> at 
> org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197){code}
>  We can reproduce it in our test environment with the steps below:
> 1. set configs:
> {code}
> hive.server2.async.exec.threads  = 50
> hive.server2.async.exec.wait.queue.size = 100
> {code}
> 2. Open 4 beeline terminals on 4 different nodes.
> 3. Run 30 queries in each beeline terminal. Each query includes "add jar 
> xxx.jar", like this:
> {code}
> add jar mykeytest-1.0-SNAPSHOT.jar;
> create temporary function ups as 'com.xxx.manager.GetCommentNameOrId';
> insert into test partition(tjrq = ${my_no}, ywtx = '${my_no2}' )
> select  dt.d_year as i_brand
>,item.i_brand_id as i_item_sk
>,ups(item.i_brand) as i_product_name
>,sum(ss_ext_sales_price) as i_category_id
>  from  date_dim dt
>   ,store_sales
>   ,item
>  where dt.d_date_sk = store_sales.ss_sold_date_sk
>and store_sales.ss_item_sk = item.i_item_sk
>and item.i_manufact_id = 436
>and dt.d_moy=12
>  group by dt.d_year
>   ,item.i_brand
>   ,item.i_b

[jira] [Updated] (HADOOP-15913) xml parsing error in a heavily multi-threaded environment

2018-11-08 Thread Yeliang Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated HADOOP-15913:
--
Affects Version/s: 2.7.3

> xml parsing error in a heavily multi-threaded environment
> -
>
> Key: HADOOP-15913
> URL: https://issues.apache.org/jira/browse/HADOOP-15913
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.3
>Reporter: Yeliang Cang
>Priority: Critical
>







[jira] [Commented] (HADOOP-15913) xml parsing error in a heavily multi-threaded environment

2018-11-08 Thread Yeliang Cang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16679882#comment-16679882
 ] 

Yeliang Cang commented on HADOOP-15913:
---

 [~jeagles], [~zxu], [~arun.sur...@gmail.com], [~ajisakaa], what do you think 
about this bug I encountered? Any thoughts would be greatly appreciated!

> xml parsing error in a heavily multi-threaded environment
> -
>
> Key: HADOOP-15913
> URL: https://issues.apache.org/jira/browse/HADOOP-15913
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.3
>Reporter: Yeliang Cang
>Priority: Critical
>
> We met this problem in a production environment; the stack trace looks like this:
> {code}ERROR org.apache.hadoop.hive.ql.exec.Task: Ended Job = 
> job_1541600895081_0580 with exception 
> 'java.lang.NullPointerException(Inflater has been closed)'
> java.lang.NullPointerException: Inflater has been closed
> at java.util.zip.Inflater.ensureOpen(Inflater.java:389)
> at java.util.zip.Inflater.inflate(Inflater.java:257)
> at 
> java.util.zip.InflaterInputStream.read(InflaterInputStream.java:152)
> at java.io.FilterInputStream.read(FilterInputStream.java:133)
> at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
> at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
> at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
> at java.io.InputStreamReader.read(InputStreamReader.java:184)
> at java.io.BufferedReader.fill(BufferedReader.java:154)
> at java.io.BufferedReader.readLine(BufferedReader.java:317)
> at java.io.BufferedReader.readLine(BufferedReader.java:382)
> at 
> javax.xml.parsers.FactoryFinder.findJarServiceProvider(FactoryFinder.java:319)
> at javax.xml.parsers.FactoryFinder.find(FactoryFinder.java:255)
> at 
> javax.xml.parsers.DocumentBuilderFactory.newInstance(DocumentBuilderFactory.java:121)
> at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2524)
> at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2501)
> at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2407)
> at org.apache.hadoop.conf.Configuration.get(Configuration.java:983)
> at 
> org.apache.hadoop.mapred.JobConf.checkAndWarnDeprecation(JobConf.java:2007)
> at org.apache.hadoop.mapred.JobConf.(JobConf.java:479)
> at org.apache.hadoop.mapred.JobConf.(JobConf.java:469)
> at org.apache.hadoop.mapreduce.Cluster.getJob(Cluster.java:188)
> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:601)
> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:599)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
> at 
> org.apache.hadoop.mapred.JobClient.getJobUsingCluster(JobClient.java:599)
> at org.apache.hadoop.mapred.JobClient.getJobInner(JobClient.java:609)
> at org.apache.hadoop.mapred.JobClient.getJob(JobClient.java:639)
> at 
> org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:294)
> at 
> org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:558)
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:457)
> at 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:141)
> at 
> org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197){code}
>  We can reproduce it in our test environment with the steps below:
> 1. set configs:
> {code}
> hive.server2.async.exec.threads  = 50
> hive.server2.async.exec.wait.queue.size = 100
> {code}
> 2. Open 4 beeline terminals on 4 different nodes.
> 3. Run 30 queries in each beeline terminal. Each query includes "add jar 
> xxx.jar", like this:
> {code}
> add jar mykeytest-1.0-SNAPSHOT.jar;
> create temporary function ups as 'com.xxx.manager.GetCommentNameOrId';
> insert into test partition(tjrq = ${my_no}, ywtx = '${my_no2}' )
> select  dt.d_year as i_brand
>,item.i_brand_id as i_item_sk
>,ups(item.i_brand) as i_product_name
>,sum(ss_ext_sales_price) as i_category_id
>  from  date_dim dt
>   ,store_sales
>   ,item
>  where dt.d_date_sk = store_sales.ss_sold_date_sk
>and store_sales.ss_item_sk = item.i_item_sk
>and item.i_manufact_id = 436
>and dt.d_moy=12
>  group by dt.d_year
>   ,item.i_brand
>   ,item.i_brand_id
>  order by dt.d_year
> {code}
> and all these 120 queries connect to one hiveserve

[jira] [Commented] (HADOOP-14635) Javadoc correction for AccessControlList#buildACL

2017-07-09 Thread Yeliang Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16079836#comment-16079836
 ] 

Yeliang Cang commented on HADOOP-14635:
---

[~bibinchundatt], I have modified the javadoc. Please have a look, thank you!

> Javadoc correction for AccessControlList#buildACL
> -
>
> Key: HADOOP-14635
> URL: https://issues.apache.org/jira/browse/HADOOP-14635
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14635-001.patch
>
>
> {{AccessControlList#buildACL}} 
> {code}
>   /**
>* Build ACL from the given two Strings.
>* The Strings contain comma separated values.
>*
>* @param aclString build ACL from array of Strings
>*/
>   private void buildACL(String[] userGroupStrings) {
> {code}
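
The correction needed is that the javadoc documents a parameter named {{aclString}} while the method takes {{userGroupStrings}}. A corrected javadoc, shown on a simplified stand-in for the method (the real AccessControlList implementation is more involved), might read:

```java
import java.util.Arrays;
import java.util.Set;
import java.util.TreeSet;

public class AclSketch {
    final Set<String> users = new TreeSet<>();
    final Set<String> groups = new TreeSet<>();

    /**
     * Build ACL from the given array of two Strings.
     * The Strings contain comma separated values.
     *
     * @param userGroupStrings array of Strings to build the ACL from:
     *                         element 0 lists users, element 1 lists groups
     */
    void buildACL(String[] userGroupStrings) {
        // Simplified stand-in for the Hadoop method body.
        users.addAll(Arrays.asList(userGroupStrings[0].split(",")));
        groups.addAll(Arrays.asList(userGroupStrings[1].split(",")));
    }

    public static void main(String[] args) {
        AclSketch acl = new AclSketch();
        acl.buildACL(new String[] {"alice,bob", "admins"});
        System.out.println(acl.users + " " + acl.groups);
        // prints [alice, bob] [admins]
    }
}
```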



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14635) Javadoc correction for AccessControlList#buildACL

2017-07-09 Thread Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated HADOOP-14635:
--
Status: Patch Available  (was: Open)

> Javadoc correction for AccessControlList#buildACL
> -
>
> Key: HADOOP-14635
> URL: https://issues.apache.org/jira/browse/HADOOP-14635
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14635-001.patch
>
>
> {{AccessControlList#buildACL}} 
> {code}
>   /**
>* Build ACL from the given two Strings.
>* The Strings contain comma separated values.
>*
>* @param aclString build ACL from array of Strings
>*/
>   private void buildACL(String[] userGroupStrings) {
> {code}






[jira] [Assigned] (HADOOP-14635) Javadoc correction for AccessControlList#buildACL

2017-07-09 Thread Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang reassigned HADOOP-14635:
-

Assignee: Yeliang Cang

> Javadoc correction for AccessControlList#buildACL
> -
>
> Key: HADOOP-14635
> URL: https://issues.apache.org/jira/browse/HADOOP-14635
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Yeliang Cang
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14635-001.patch
>
>
> {{AccessControlList#buildACL}} 
> {code}
>   /**
>* Build ACL from the given two Strings.
>* The Strings contain comma separated values.
>*
>* @param aclString build ACL from array of Strings
>*/
>   private void buildACL(String[] userGroupStrings) {
> {code}






[jira] [Updated] (HADOOP-14635) Javadoc correction for AccessControlList#buildACL

2017-07-10 Thread Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated HADOOP-14635:
--
Attachment: HADOOP-14635-001.patch

> Javadoc correction for AccessControlList#buildACL
> -
>
> Key: HADOOP-14635
> URL: https://issues.apache.org/jira/browse/HADOOP-14635
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14635-001.patch
>
>
> {{AccessControlList#buildACL}} 
> {code}
>   /**
>* Build ACL from the given two Strings.
>* The Strings contain comma separated values.
>*
>* @param aclString build ACL from array of Strings
>*/
>   private void buildACL(String[] userGroupStrings) {
> {code}






[jira] [Updated] (HADOOP-14690) RetryInvocationHandler$RetryInfo should override toString()

2017-07-28 Thread Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated HADOOP-14690:
--
Attachment: HADOOP-14690-001.patch

Hi [~ajisakaa], I have submitted a patch as you suggested. Please take a look!

> RetryInvocationHandler$RetryInfo should override toString()
> ---
>
> Key: HADOOP-14690
> URL: https://issues.apache.org/jira/browse/HADOOP-14690
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Priority: Minor
>  Labels: newbie, supportability
> Attachments: HADOOP-14690-001.patch
>
>
> {code:title=RetryInvocationHandler.java}
>   LOG.trace("#{} processRetryInfo: retryInfo={}, waitTime={}",
>   callId, retryInfo, waitTime);
> {code}
> RetryInfo is used for logging but it does not output useful information.
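
The fix is the standard pattern of overriding {{toString()}} to expose the fields the log line needs; the field names below are illustrative, not the actual RetryInfo internals:

```java
public class RetryInfoSketch {
    // Illustrative fields; the real RetryInfo holds different state.
    private final long delay;
    private final long expectedFailoverCount;

    RetryInfoSketch(long delay, long expectedFailoverCount) {
        this.delay = delay;
        this.expectedFailoverCount = expectedFailoverCount;
    }

    @Override
    public String toString() {
        // Without this override, the SLF4J {} placeholder prints only the
        // default Object representation, e.g. RetryInfoSketch@1b6d3586.
        return "RetryInfoSketch{delay=" + delay
            + ", expectedFailoverCount=" + expectedFailoverCount + "}";
    }

    public static void main(String[] args) {
        System.out.println(new RetryInfoSketch(1000, 2));
        // prints RetryInfoSketch{delay=1000, expectedFailoverCount=2}
    }
}
```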






[jira] [Updated] (HADOOP-14690) RetryInvocationHandler$RetryInfo should override toString()

2017-07-28 Thread Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated HADOOP-14690:
--
Assignee: Yeliang Cang
  Status: Patch Available  (was: Open)

> RetryInvocationHandler$RetryInfo should override toString()
> ---
>
> Key: HADOOP-14690
> URL: https://issues.apache.org/jira/browse/HADOOP-14690
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Yeliang Cang
>Priority: Minor
>  Labels: newbie, supportability
> Attachments: HADOOP-14690-001.patch
>
>
> {code:title=RetryInvocationHandler.java}
>   LOG.trace("#{} processRetryInfo: retryInfo={}, waitTime={}",
>   callId, retryInfo, waitTime);
> {code}
> RetryInfo is used for logging but it does not output useful information.






[jira] [Commented] (HADOOP-14690) RetryInvocationHandler$RetryInfo should override toString()

2017-07-30 Thread Yeliang Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16106827#comment-16106827
 ] 

Yeliang Cang commented on HADOOP-14690:
---

Thank you for the review, [~ajisakaa]!

> RetryInvocationHandler$RetryInfo should override toString()
> ---
>
> Key: HADOOP-14690
> URL: https://issues.apache.org/jira/browse/HADOOP-14690
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Yeliang Cang
>Priority: Minor
>  Labels: newbie, supportability
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14690-001.patch
>
>
> {code:title=RetryInvocationHandler.java}
>   LOG.trace("#{} processRetryInfo: retryInfo={}, waitTime={}",
>   callId, retryInfo, waitTime);
> {code}
> RetryInfo is used for logging but it does not output useful information.






[jira] [Updated] (HADOOP-14784) [KMS] Improve KeyAuthorizationKeyProvider#toString()

2017-08-17 Thread Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated HADOOP-14784:
--
Status: Patch Available  (was: Open)

> [KMS] Improve KeyAuthorizationKeyProvider#toString()
> 
>
> Key: HADOOP-14784
> URL: https://issues.apache.org/jira/browse/HADOOP-14784
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-14784.001.patch
>
>
> When the KMS server starts, it loads KeyProviderCryptoExtension and prints the 
> following message:
> {noformat}
> 2017-08-17 04:57:13,348 INFO 
> org.apache.hadoop.crypto.key.kms.server.KMSWebApp: Initialized 
> KeyProviderCryptoExtension EagerKeyGeneratorKeyProviderCryptoExtension: 
> KeyProviderCryptoExtension: CachingKeyProvider: 
> jceks://file@/var/lib/kms/kms.keystore
> {noformat}
> However, this is confusing because KeyAuthorizationKeyProvider is loaded but 
> not shown in this message. KeyAuthorizationKeyProvider#toString should be 
> improved so that, in addition to its internal provider, it also prints its own 
> class name when loaded.
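
One way to address it (a sketch of the pattern, not the committed patch) is for the wrapper's toString() to name itself before delegating to the inner provider, so every decorator layer appears in the startup log:

```java
public class ProviderToStringSketch {
    interface KeyProvider { }

    // Decorator whose toString() prepends its own class name and then
    // delegates, making the whole wrapping chain visible in log output.
    static class KeyAuthorizationKeyProvider implements KeyProvider {
        private final KeyProvider provider;
        KeyAuthorizationKeyProvider(KeyProvider provider) {
            this.provider = provider;
        }
        @Override public String toString() {
            return getClass().getSimpleName() + ": " + provider;
        }
    }

    public static void main(String[] args) {
        KeyProvider inner = new KeyProvider() {
            @Override public String toString() {
                return "CachingKeyProvider: jceks://file@/var/lib/kms/kms.keystore";
            }
        };
        System.out.println(new KeyAuthorizationKeyProvider(inner));
        // prints KeyAuthorizationKeyProvider: CachingKeyProvider: jceks://file@/var/lib/kms/kms.keystore
    }
}
```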






[jira] [Updated] (HADOOP-14784) [KMS] Improve KeyAuthorizationKeyProvider#toString()

2017-08-17 Thread Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated HADOOP-14784:
--
Attachment: HADOOP-14784.001.patch

> [KMS] Improve KeyAuthorizationKeyProvider#toString()
> 
>
> Key: HADOOP-14784
> URL: https://issues.apache.org/jira/browse/HADOOP-14784
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-14784.001.patch
>
>
> When the KMS server starts, it loads KeyProviderCryptoExtension and prints the 
> following message:
> {noformat}
> 2017-08-17 04:57:13,348 INFO 
> org.apache.hadoop.crypto.key.kms.server.KMSWebApp: Initialized 
> KeyProviderCryptoExtension EagerKeyGeneratorKeyProviderCryptoExtension: 
> KeyProviderCryptoExtension: CachingKeyProvider: 
> jceks://file@/var/lib/kms/kms.keystore
> {noformat}
> However, this is confusing, as KeyAuthorizationKeyProvider is loaded but not 
> shown in this message. KeyAuthorizationKeyProvider#toString should be 
> improved so that, in addition to its internal provider, it also prints its 
> own class name when loaded.






[jira] [Commented] (HADOOP-14784) [KMS] Improve KeyAuthorizationKeyProvider#toString()

2017-08-17 Thread Yeliang Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16131627#comment-16131627
 ] 

Yeliang Cang commented on HADOOP-14784:
---

Hi, [~jojochuang], I have submitted a patch as you suggested. Please check it!


> [KMS] Improve KeyAuthorizationKeyProvider#toString()
> 
>
> Key: HADOOP-14784
> URL: https://issues.apache.org/jira/browse/HADOOP-14784
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-14784.001.patch
>
>
> When the KMS server starts, it loads KeyProviderCryptoExtension and prints the 
> following message:
> {noformat}
> 2017-08-17 04:57:13,348 INFO 
> org.apache.hadoop.crypto.key.kms.server.KMSWebApp: Initialized 
> KeyProviderCryptoExtension EagerKeyGeneratorKeyProviderCryptoExtension: 
> KeyProviderCryptoExtension: CachingKeyProvider: 
> jceks://file@/var/lib/kms/kms.keystore
> {noformat}
> However, this is confusing, as KeyAuthorizationKeyProvider is loaded but not 
> shown in this message. KeyAuthorizationKeyProvider#toString should be 
> improved so that, in addition to its internal provider, it also prints its 
> own class name when loaded.






[jira] [Commented] (HADOOP-14271) Correct spelling of 'occurred' and variants

2017-04-04 Thread Yeliang Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955042#comment-15955042
 ] 

Yeliang Cang commented on HADOOP-14271:
---

[~chris.douglas], thanks for your review!

> Correct spelling of 'occurred' and variants
> ---
>
> Key: HADOOP-14271
> URL: https://issues.apache.org/jira/browse/HADOOP-14271
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
>Priority: Trivial
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6427.001.patch
>
>
> I have found some spelling mistakes in both the hdfs and yarn components. The 
> word "occured" should be "occurred".



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
