[jira] [Created] (YARN-5605) Preempt containers (all on one node) to meet the requirement of starved applications

2016-08-30 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created YARN-5605:
--

 Summary: Preempt containers (all on one node) to meet the 
requirement of starved applications
 Key: YARN-5605
 URL: https://issues.apache.org/jira/browse/YARN-5605
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla


Required items:
# Identify starved applications
# Identify a node that has enough containers from applications over their 
fairshare.
# Preempt those containers
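
The three steps above can be sketched as follows. This is a minimal illustration of the node-local selection idea only, not the actual FairScheduler code; the function name and data layout are assumptions.

```python
def pick_node_to_preempt(nodes, requested_cores):
    """Pick one node whose preemptable containers can satisfy the request.

    nodes: dict mapping node id -> list of (app, cores) pairs, where each
    pair is a container belonging to an application over its fair share.
    Returns (node_id, containers_to_preempt) or None if no single node fits.
    """
    for node_id, containers in nodes.items():
        chosen, total = [], 0
        for container in sorted(containers, key=lambda c: c[1]):  # smallest first
            if total >= requested_cores:
                break
            chosen.append(container)
            total += container[1]
        if total >= requested_cores:
            return node_id, chosen
    return None

# A starved app asking for 4 cores: only n1 has enough preemptable capacity.
result = pick_node_to_preempt(
    {"n1": [("appA", 1)] * 4, "n2": [("appA", 1)] * 2}, 4)
```

The point of the sketch is that all chosen containers come from one node, so the freed capacity is actually schedulable as a single large container.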



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[jira] [Resolved] (YARN-3997) An Application requesting multiple core containers can't preempt running application made of single core containers

2016-08-30 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla resolved YARN-3997.

Resolution: Duplicate

> An Application requesting multiple core containers can't preempt running 
> application made of single core containers
> ---
>
> Key: YARN-3997
> URL: https://issues.apache.org/jira/browse/YARN-3997
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.1
> Environment: Ubuntu 14.04, Hadoop 2.7.1, Physical Machines
>Reporter: Dan Shechter
>Assignee: Arun Suresh
>Priority: Critical
>
> When our cluster is configured with preemption, and is fully loaded with an 
> application consuming 1-core containers, it will not kill off these 
> containers when a new application kicks in requesting containers with a size 
> > 1, for example 4 core containers.
> When the "second" application attempts to use 1-core containers as well, 
> preemption proceeds as planned and everything works properly.
> It is my assumption that the fair scheduler, while recognizing it needs to 
> kill off some containers to make room for the new application, fails to find a 
> SINGLE container satisfying the request for a 4-core container (since all 
> existing containers are 1-core containers), and isn't "smart" enough to 
> realize it needs to kill off 4 single-core containers (in this case) on a 
> single node for the new application to be able to proceed.
> The exhibited effect is that the new application hangs indefinitely and 
> never gets the resources it requires.
> This can easily be replicated with any yarn application.
> Our "goto" scenario in this case is running pyspark with 1-core executors 
> (containers) while trying to launch h20.ai framework which INSISTS on having 
> at least 4 cores per container.






[jira] [Resolved] (YARN-2457) FairScheduler: Handle preemption to help starved parent queues

2016-08-30 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla resolved YARN-2457.

Resolution: Duplicate

> FairScheduler: Handle preemption to help starved parent queues
> --
>
> Key: YARN-2457
> URL: https://issues.apache.org/jira/browse/YARN-2457
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Affects Versions: 2.5.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>
> YARN-2395/YARN-2394 add preemption timeout and threshold per queue, but don't 
> check for parent queue starvation. 
> We need to check that. 






[jira] [Created] (YARN-5604) Add versioning for FederationStateStore

2016-08-30 Thread Subru Krishnan (JIRA)
Subru Krishnan created YARN-5604:


 Summary: Add versioning for FederationStateStore
 Key: YARN-5604
 URL: https://issues.apache.org/jira/browse/YARN-5604
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Subru Krishnan
Assignee: Giovanni Matteo Fumarola


Currently we don't have versioning (null version) for the 
FederationStateStore. This JIRA proposes adding versioning support, which is 
needed to support upgrades.
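
As a sketch of what a version check on store load might look like (the tuple format and names here are assumptions for illustration, not the eventual FederationStateStore API):

```python
CURRENT_VERSION = (1, 0)  # (major, minor) written with every new store

def check_store_version(stored_version):
    # A pre-versioning ("null version") store is treated as 1.0; a store
    # written by a different major version is rejected rather than being
    # silently misread, which is what makes upgrades safe to reason about.
    stored_version = stored_version or (1, 0)
    if stored_version[0] != CURRENT_VERSION[0]:
        raise RuntimeError(f"incompatible state store version {stored_version}")
    return stored_version
```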






[jira] [Created] (YARN-5603) Metrics for Federation entities like StateStore/Router/AMRMProxy

2016-08-30 Thread Subru Krishnan (JIRA)
Subru Krishnan created YARN-5603:


 Summary: Metrics for Federation entities like 
StateStore/Router/AMRMProxy
 Key: YARN-5603
 URL: https://issues.apache.org/jira/browse/YARN-5603
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Subru Krishnan
Assignee: Giovanni Matteo Fumarola


This JIRA proposes the addition of metrics for Federation entities such as the 
StateStore, Router, and AMRMProxy.






[jira] [Created] (YARN-5602) Utils for Federation State and Policy Store

2016-08-30 Thread Giovanni Matteo Fumarola (JIRA)
Giovanni Matteo Fumarola created YARN-5602:
--

 Summary: Utils for Federation State and Policy Store
 Key: YARN-5602
 URL: https://issues.apache.org/jira/browse/YARN-5602
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Giovanni Matteo Fumarola
Assignee: Giovanni Matteo Fumarola









[jira] [Created] (YARN-5601) Make the RM epoch base value configurable

2016-08-30 Thread Subru Krishnan (JIRA)
Subru Krishnan created YARN-5601:


 Summary: Make the RM epoch base value configurable
 Key: YARN-5601
 URL: https://issues.apache.org/jira/browse/YARN-5601
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Subru Krishnan
Assignee: Subru Krishnan


Currently the epoch always starts from zero. This can cause container ids to 
conflict for an application under Federation that spans multiple RMs 
concurrently. This JIRA proposes to make the RM epoch base value configurable 
which will allow us to avoid conflicts by setting different values for each RM.
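
To illustrate why distinct bases avoid conflicts (the id format below is simplified for illustration and is not the exact YARN ContainerId encoding):

```python
def container_id(epoch_base, epoch, cluster_ts, app_seq, container_seq):
    # The epoch component leads the id, so two RMs configured with
    # different epoch bases can never mint the same container id, even
    # for identical application and container sequence numbers.
    return (f"container_e{epoch_base + epoch:03d}_"
            f"{cluster_ts}_{app_seq:04d}_{container_seq:06d}")

# Same app/container numbers on two federated RMs, different epoch bases:
rm1 = container_id(0, 3, 1472000000000, 1, 1)
rm2 = container_id(1000, 3, 1472000000000, 1, 1)
```

With a base of zero on both RMs, `rm1` and `rm2` would be identical strings; with distinct bases the epoch components differ for every container.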






Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-08-30 Thread Zhe Zhang
+1 (non-binding)

Did the following on 7 RHEL 6.6 servers
- Downloaded and built from source
- Downloaded and verified checksum of the binary tar.gz file
- Setup a cluster with 1 NN and 6 DNs
- Tried regular HDFS commands
- Tried EC commands (listPolicies, getPolicy, setPolicy), they work fine
- Verified that with a 3-2 policy, 1.67x capacity is used. Below is the
output after copying the binary tar.gz file into an EC folder. The file is
318MB.

Configured Capacity: 3221225472 (3 GB)
Present Capacity: 3215348743 (2.99 GB)
DFS Remaining: 2655666176 (2.47 GB)
DFS Used: 559682567 (533.75 MB)
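
Those numbers are consistent with a 3-2 (Reed-Solomon-style) policy: three data units plus two parity units store each block group at 5/3 of its logical size. A quick arithmetic check (the small remainder above 530 MB would be metadata and padding):

```python
file_mb = 318                    # logical size of the copied tar.gz
data_units, parity_units = 3, 2  # the "3-2" erasure coding policy
raw_mb = file_mb * (data_units + parity_units) / data_units
overhead = raw_mb / file_mb      # expected capacity multiplier, ~1.67x
```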

Thanks Allen for clarifying on the markdown files. I also verified the site
html files (content of the index.html, randomly selected some links).


On Tue, Aug 30, 2016 at 2:20 PM Eric Badger 
wrote:

> Well that's embarrassing. I had accidentally slightly renamed my
> log4j.properties file in my conf directory, so it was there, just not being
> read. Apologies for the unnecessary spam. With this and the public key from
> Andrew, I give my non-binding +1.
>
> Eric
>
>
>
> On Tuesday, August 30, 2016 4:11 PM, Allen Wittenauer <
> a...@effectivemachines.com> wrote:
>
>
> > On Aug 30, 2016, at 2:06 PM, Eric Badger 
> wrote:
> >
> >
> > WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be
> incomplete.
>
> ^^
>
>
> >
> > After running the above command, the RM UI showed a successful job, but
> as you can see, I did not have anything printed onto the command line.
> Hopefully this is just a misconfiguration on my part, but I figured that I
> would point it out just in case.
>
>
> It gave you a very important message in the output ...
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>


[jira] [Created] (YARN-5600) Add a parameter to ContainerLaunchContext to emulate yarn.nodemanager.delete.debug-delay-sec on a per-application basis

2016-08-30 Thread Daniel Templeton (JIRA)
Daniel Templeton created YARN-5600:
--

 Summary: Add a parameter to ContainerLaunchContext to emulate 
yarn.nodemanager.delete.debug-delay-sec on a per-application basis
 Key: YARN-5600
 URL: https://issues.apache.org/jira/browse/YARN-5600
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager
Reporter: Daniel Templeton
Assignee: Daniel Templeton


To make debugging application launch failures simpler, I'd like to add a 
parameter to the CLC to allow an application owner to request delayed deletion 
of the application's launch artifacts.

This JIRA solves largely the same problem as YARN-5599, but for cases where ATS 
is not in use, e.g. branch-2.






[jira] [Created] (YARN-5599) Post AM launcher artifacts to ATS

2016-08-30 Thread Daniel Templeton (JIRA)
Daniel Templeton created YARN-5599:
--

 Summary: Post AM launcher artifacts to ATS
 Key: YARN-5599
 URL: https://issues.apache.org/jira/browse/YARN-5599
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Daniel Templeton


To aid in debugging launch failures, it would be valuable to have an 
application's launch script and logs posted to ATS.  Because the application's 
command line may contain private credentials or other secure information, 
access to the data in ATS should be restricted to the job owner, including the 
at-rest data.

Along with making the data available through ATS, the configuration parameter 
introduced in YARN-5549 and the log line that it guards should be removed.






[jira] [Created] (YARN-5598) [YARN-3368] Fix create-release to be able to generate bits for the new yarn-ui

2016-08-30 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-5598:


 Summary: [YARN-3368] Fix create-release to be able to generate 
bits for the new yarn-ui
 Key: YARN-5598
 URL: https://issues.apache.org/jira/browse/YARN-5598
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: yarn-ui-v2, yarn
Reporter: Wangda Tan
Assignee: Wangda Tan









Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-08-30 Thread Allen Wittenauer

> On Aug 30, 2016, at 2:20 PM, Eric Badger  wrote:
> 
> Well that's embarrassing. I had accidentally slightly renamed my 
> log4j.properties file in my conf directory, so it was there, just not being 
> read.

Nah.  You were just testing out the shell rewrite's ability to detect a 
common error. ;) 

BTW, something else.. instead of doing env|grep HADOOP, you can do 
'hadoop envvars' to get most of the good stuff.



Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-08-30 Thread Eric Badger
Well that's embarrassing. I had accidentally slightly renamed my 
log4j.properties file in my conf directory, so it was there, just not being 
read. Apologies for the unnecessary spam. With this and the public key from 
Andrew, I give my non-binding +1. 

Eric



On Tuesday, August 30, 2016 4:11 PM, Allen Wittenauer 
 wrote:


> On Aug 30, 2016, at 2:06 PM, Eric Badger  
> wrote:
> 
> 
> WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.

^^


> 
> After running the above command, the RM UI showed a successful job, but as 
> you can see, I did not have anything printed onto the command line. Hopefully 
> this is just a misconfiguration on my part, but I figured that I would point 
> it out just in case.


It gave you a very important message in the output ...

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-08-30 Thread Andrew Wang
Hi Eric, thanks for trying this out,

I tried this gpg command to get my key, seemed to work:

# gpg --keyserver pgp.mit.edu --recv-keys 7501105C
gpg: requesting key 7501105C from hkp server pgp.mit.edu
gpg: /root/.gnupg/trustdb.gpg: trustdb created
gpg: key 7501105C: public key "Andrew Wang (CODE SIGNING KEY) <
andrew.w...@cloudera.com>" imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg:   imported: 1  (RSA: 1)

Also found via search:
http://pgp.mit.edu/pks/lookup?search=wang%40apache.org&op=index


On Tue, Aug 30, 2016 at 2:06 PM, Eric Badger  wrote:

> I don't know why my email client keeps getting rid of all of my spacing.
> Resending the same email so that it is actually legible...
>
> All on OSX 10.11.6:
> - Verified the hashes. However, Andrew, I don't know where to find your
> public key, so I wasn't able to verify that they were signed by you.
> - Built from source
> - Deployed a pseudo-distributed cluster
> - Ran a few sample jobs
> - Poked around the RM UI
> - Poked around the attached website locally via the tarball
>
>
> I did find one odd thing, though. It could be a misconfiguration on my
> system, but I've never had this problem before with other releases (though
> I deal almost exclusively in 2.x and so I imagine things might be
> different). When I run a sleep job, I do not see any
> diagnostics/logs/counters printed out by the client. Initially I ran the
> job like I would on 2.7 and it failed (because I had not set
> yarn.app.mapreduce.am.env and mapreduce.admin.user.env), but I didn't see
> anything until I looked at the RM UI. There I was able to see all of the
> logs for the failed job and diagnose the issue. Then, once I fixed my
> parameters and ran the job again, I still didn't see any
> diagnostics/logs/counters.
>
>
> ebadger@foo: env | grep HADOOP
> HADOOP_HOME=/Users/ebadger/Downloads/hadoop-3.0.0-alpha1-
> src/hadoop-dist/target/hadoop-3.0.0-alpha1/
> HADOOP_CONF_DIR=/Users/ebadger/conf
> ebadger@foo: $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/share/hadoop/
> mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha1-tests.jar sleep
> -Dyarn.app.mapreduce.am.env="HADOOP_MAPRED_HOME=$HADOOP_HOME"
> -Dmapreduce.admin.user.env="HADOOP_MAPRED_HOME=$HADOOP_HOME" -mt 1 -rt 1
> -m 1 -r 1
> WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.
> ebadger@foo:
>
>
> After running the above command, the RM UI showed a successful job, but as
> you can see, I did not have anything printed onto the command line.
> Hopefully this is just a misconfiguration on my part, but I figured that I
> would point it out just in case.
>
>
> Thanks,
>
>
> Eric
>
>
>
> On Tuesday, August 30, 2016 4:00 PM, Eric Badger
>  wrote:
>
>
>
> All on OSX 10.11.6:
> - Verified the hashes. However, Andrew, I don't know where to find your
> public key, so I wasn't able to verify that they were signed by you.
> - Built from source
> - Deployed a pseudo-distributed cluster
> - Ran a few sample jobs
> - Poked around the RM UI
> - Poked around the attached website locally via the tarball
> I did find one odd thing, though. It could be a misconfiguration on my
> system, but I've never had this problem before with other releases (though
> I deal almost exclusively in 2.x and so I imagine things might be
> different). When I run a sleep job, I do not see any
> diagnostics/logs/counters printed out by the client. Initially I ran the
> job like I would on 2.7 and it failed (because I had not set
> yarn.app.mapreduce.am.env and mapreduce.admin.user.env), but I didn't see
> anything until I looked at the RM UI. There I was able to see all of the
> logs for the failed job and diagnose the issue. Then, once I fixed my
> parameters and ran the job again, I still didn't see any
> diagnostics/logs/counters.
> ebadger@foo: env | grep HADOOP
> HADOOP_HOME=/Users/ebadger/Downloads/hadoop-3.0.0-alpha1-
> src/hadoop-dist/target/hadoop-3.0.0-alpha1/
> HADOOP_CONF_DIR=/Users/ebadger/conf
> ebadger@foo: $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/share/hadoop/
> mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha1-tests.jar sleep
> -Dyarn.app.mapreduce.am.env="HADOOP_MAPRED_HOME=$HADOOP_HOME"
> -Dmapreduce.admin.user.env="HADOOP_MAPRED_HOME=$HADOOP_HOME" -mt 1 -rt 1
> -m 1 -r 1
> WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.
> ebadger@foo:
> After running the above command, the RM UI showed a successful job, but as
> you can see, I did not have anything printed onto the command line.
> Hopefully this is just a misconfiguration on my part, but I figured that I
> would point it out just in case.
> Thanks,
> Eric
>
>
>
> On Tuesday, August 30, 2016 12:58 PM, Andrew Wang <
> andrew.w...@cloudera.com> wrote:
>
>
> I'll put my own +1 on it:
>
> * Built from source
> * Started pseudo cluster and ran Pi job successfully
>
> On Tue, Aug 30, 2016 at 10:17 AM, Zhe Zhang  wrote:
>
> >
> > Thanks Andrew for the great work! It's really exciting to finally see a
> > Hadoop 3 RC.
> >
> > I noticed

Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-08-30 Thread Allen Wittenauer

> On Aug 30, 2016, at 2:06 PM, Eric Badger  
> wrote:
> 
> 
> WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.

^^


> 
> After running the above command, the RM UI showed a successful job, but as 
> you can see, I did not have anything printed onto the command line. Hopefully 
> this is just a misconfiguration on my part, but I figured that I would point 
> it out just in case.


It gave you a very important message in the output ...





Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-08-30 Thread Eric Badger
I don't know why my email client keeps getting rid of all of my spacing. 
Resending the same email so that it is actually legible...

All on OSX 10.11.6:
- Verified the hashes. However, Andrew, I don't know where to find your public 
key, so I wasn't able to verify that they were signed by you.
- Built from source
- Deployed a pseudo-distributed cluster
- Ran a few sample jobs
- Poked around the RM UI
- Poked around the attached website locally via the tarball


I did find one odd thing, though. It could be a misconfiguration on my system, 
but I've never had this problem before with other releases (though I deal 
almost exclusively in 2.x and so I imagine things might be different). When I 
run a sleep job, I do not see any diagnostics/logs/counters printed out by the 
client. Initially I ran the job like I would on 2.7 and it failed (because I 
had not set yarn.app.mapreduce.am.env and mapreduce.admin.user.env), but I 
didn't see anything until I looked at the RM UI. There I was able to see all of 
the logs for the failed job and diagnose the issue. Then, once I fixed my 
parameters and ran the job again, I still didn't see any 
diagnostics/logs/counters.


ebadger@foo: env | grep HADOOP
HADOOP_HOME=/Users/ebadger/Downloads/hadoop-3.0.0-alpha1-src/hadoop-dist/target/hadoop-3.0.0-alpha1/
HADOOP_CONF_DIR=/Users/ebadger/conf
ebadger@foo: $HADOOP_HOME/bin/hadoop jar 
$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha1-tests.jar
 sleep -Dyarn.app.mapreduce.am.env="HADOOP_MAPRED_HOME=$HADOOP_HOME" 
-Dmapreduce.admin.user.env="HADOOP_MAPRED_HOME=$HADOOP_HOME" -mt 1 -rt 1 -m 1 
-r 1
WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.
ebadger@foo:


After running the above command, the RM UI showed a successful job, but as you 
can see, I did not have anything printed onto the command line. Hopefully this 
is just a misconfiguration on my part, but I figured that I would point it out 
just in case.


Thanks,


Eric



On Tuesday, August 30, 2016 4:00 PM, Eric Badger 
 wrote:



All on OSX 10.11.6:
- Verified the hashes. However, Andrew, I don't know where to find your public 
key, so I wasn't able to verify that they were signed by you.
- Built from source
- Deployed a pseudo-distributed cluster
- Ran a few sample jobs
- Poked around the RM UI
- Poked around the attached website locally via the tarball
I did find one odd thing, though. It could be a misconfiguration on my system, 
but I've never had this problem before with other releases (though I deal 
almost exclusively in 2.x and so I imagine things might be different). When I 
run a sleep job, I do not see any diagnostics/logs/counters printed out by the 
client. Initially I ran the job like I would on 2.7 and it failed (because I 
had not set yarn.app.mapreduce.am.env and mapreduce.admin.user.env), but I 
didn't see anything until I looked at the RM UI. There I was able to see all of 
the logs for the failed job and diagnose the issue. Then, once I fixed my 
parameters and ran the job again, I still didn't see any 
diagnostics/logs/counters.
ebadger@foo: env | grep HADOOP
HADOOP_HOME=/Users/ebadger/Downloads/hadoop-3.0.0-alpha1-src/hadoop-dist/target/hadoop-3.0.0-alpha1/
HADOOP_CONF_DIR=/Users/ebadger/conf
ebadger@foo: $HADOOP_HOME/bin/hadoop jar 
$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha1-tests.jar
 sleep -Dyarn.app.mapreduce.am.env="HADOOP_MAPRED_HOME=$HADOOP_HOME" 
-Dmapreduce.admin.user.env="HADOOP_MAPRED_HOME=$HADOOP_HOME" -mt 1 -rt 1 -m 1 
-r 1
WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.
ebadger@foo:
After running the above command, the RM UI showed a successful job, but as you 
can see, I did not have anything printed onto the command line. Hopefully this 
is just a misconfiguration on my part, but I figured that I would point it out 
just in case.
Thanks,
Eric



On Tuesday, August 30, 2016 12:58 PM, Andrew Wang 
 wrote:


I'll put my own +1 on it:

* Built from source
* Started pseudo cluster and ran Pi job successfully

On Tue, Aug 30, 2016 at 10:17 AM, Zhe Zhang  wrote:

>
> Thanks Andrew for the great work! It's really exciting to finally see a
> Hadoop 3 RC.
>
> I noticed CHANGES and RELEASENOTES markdown files which were not in
> previous RCs like 2.7.3. What are good tools to verify them? I tried
> reading them on IntelliJ but format looks odd.
>
> I'm still testing the RC:
> - Downloaded and verified checksum
> - Built from source
> - Will start small cluster and test simple programs, focusing on EC
> functionalities
>
> -- Zhe
>
> On Tue, Aug 30, 2016 at 8:51 AM Andrew Wang 
> wrote:
>
>> Hi all,
>>
>> Thanks to the combined work of many, many contributors, here's an RC0 for
>> 3.0.0-alpha1:
>>
>> http://home.apache.org/~wang/3.0.0-alpha1-RC0/
>>
>> alpha1 is the first in a series of planned alpha releases leading up to
>> GA.
>> The objective is to get an artifact out to downstreams for testing and to
>> iterate quickly based on the

Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-08-30 Thread Eric Badger
All on OSX 10.11.6:
- Verified the hashes. However, Andrew, I don't know where to find your public 
key, so I wasn't able to verify that they were signed by you.
- Built from source
- Deployed a pseudo-distributed cluster
- Ran a few sample jobs
- Poked around the RM UI
- Poked around the attached website locally via the tarball
I did find one odd thing, though. It could be a misconfiguration on my system, 
but I've never had this problem before with other releases (though I deal 
almost exclusively in 2.x and so I imagine things might be different). When I 
run a sleep job, I do not see any diagnostics/logs/counters printed out by the 
client. Initially I ran the job like I would on 2.7 and it failed (because I 
had not set yarn.app.mapreduce.am.env and mapreduce.admin.user.env), but I 
didn't see anything until I looked at the RM UI. There I was able to see all of 
the logs for the failed job and diagnose the issue. Then, once I fixed my 
parameters and ran the job again, I still didn't see any 
diagnostics/logs/counters.
ebadger@foo: env | grep HADOOP
HADOOP_HOME=/Users/ebadger/Downloads/hadoop-3.0.0-alpha1-src/hadoop-dist/target/hadoop-3.0.0-alpha1/
HADOOP_CONF_DIR=/Users/ebadger/conf
ebadger@foo: $HADOOP_HOME/bin/hadoop jar 
$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha1-tests.jar
 sleep -Dyarn.app.mapreduce.am.env="HADOOP_MAPRED_HOME=$HADOOP_HOME" 
-Dmapreduce.admin.user.env="HADOOP_MAPRED_HOME=$HADOOP_HOME" -mt 1 -rt 1 -m 1 
-r 1
WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.
ebadger@foo:
After running the above command, the RM UI showed a successful job, but as you 
can see, I did not have anything printed onto the command line. Hopefully this 
is just a misconfiguration on my part, but I figured that I would point it out 
just in case.
Thanks,
Eric


On Tuesday, August 30, 2016 12:58 PM, Andrew Wang 
 wrote:
 

 I'll put my own +1 on it:

* Built from source
* Started pseudo cluster and ran Pi job successfully

On Tue, Aug 30, 2016 at 10:17 AM, Zhe Zhang  wrote:

>
> Thanks Andrew for the great work! It's really exciting to finally see a
> Hadoop 3 RC.
>
> I noticed CHANGES and RELEASENOTES markdown files which were not in
> previous RCs like 2.7.3. What are good tools to verify them? I tried
> reading them on IntelliJ but format looks odd.
>
> I'm still testing the RC:
> - Downloaded and verified checksum
> - Built from source
> - Will start small cluster and test simple programs, focusing on EC
> functionalities
>
> -- Zhe
>
> On Tue, Aug 30, 2016 at 8:51 AM Andrew Wang 
> wrote:
>
>> Hi all,
>>
>> Thanks to the combined work of many, many contributors, here's an RC0 for
>> 3.0.0-alpha1:
>>
>> http://home.apache.org/~wang/3.0.0-alpha1-RC0/
>>
>> alpha1 is the first in a series of planned alpha releases leading up to
>> GA.
>> The objective is to get an artifact out to downstreams for testing and to
>> iterate quickly based on their feedback. So, please keep that in mind when
>> voting; hopefully most issues can be addressed by future alphas rather
>> than
>> future RCs.
>>
>> Sorry for getting this out on a Tuesday, but I'd still like this vote to
>> run the normal 5 days, thus ending Saturday (9/3) at 9AM PDT. I'll extend
>> if we lack the votes.
>>
>> Please try it out and let me know what you think.
>>
>> Best,
>> Andrew
>>
>


   

[jira] [Created] (YARN-5597) YARN Federation phase 2

2016-08-30 Thread Subru Krishnan (JIRA)
Subru Krishnan created YARN-5597:


 Summary: YARN Federation phase 2
 Key: YARN-5597
 URL: https://issues.apache.org/jira/browse/YARN-5597
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Subru Krishnan


This umbrella JIRA tracks a set of improvements over the YARN Federation MVP 
(YARN-2915).






[jira] [Resolved] (YARN-3665) Federation subcluster membership mechanisms

2016-08-30 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan resolved YARN-3665.
--
  Resolution: Implemented
Hadoop Flags: Reviewed

Closing this as YARN-3671 includes this too.

> Federation subcluster membership mechanisms
> ---
>
> Key: YARN-3665
> URL: https://issues.apache.org/jira/browse/YARN-3665
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
>
> The member YARN RMs continuously heartbeat to the state store to keep alive 
> and publish their current capability/load information. This JIRA tracks these 
> mechanisms.






[jira] [Created] (YARN-5596) TestDockerContainerRuntime fails on the mac

2016-08-30 Thread Sidharta Seethana (JIRA)
Sidharta Seethana created YARN-5596:
---

 Summary: TestDockerContainerRuntime fails on the mac
 Key: YARN-5596
 URL: https://issues.apache.org/jira/browse/YARN-5596
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager, yarn
Reporter: Sidharta Seethana
Assignee: Sidharta Seethana
Priority: Minor


/sys/fs/cgroup doesn't exist on the Mac, and the tests seem to fail because of 
this.

{code}
Failed tests:
  TestDockerContainerRuntime.testContainerLaunchWithCustomNetworks:456 
expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
/]test_container_local...> but was:<...ET_BIND_SERVICE -v 
/[]test_container_local...>
  TestDockerContainerRuntime.testContainerLaunchWithNetworkingDefaults:401 
expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
/]test_container_local...> but was:<...ET_BIND_SERVICE -v 
/[]test_container_local...>
  TestDockerContainerRuntime.testDockerContainerLaunch:297 
expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
/]test_container_local...> but was:<...ET_BIND_SERVICE -v 
/[]test_container_local...>

Tests run: 19, Failures: 3, Errors: 0, Skipped: 0
{code}
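
One way to make such tests portable is to compute the expected bind-mount arguments from the host instead of hard-coding them. A hedged sketch of the idea (the function name is illustrative, not the actual test or runtime code):

```python
import os

def cgroup_mount_args(cgroup_dir="/sys/fs/cgroup"):
    # Only expect Docker to bind-mount the cgroup filesystem when the host
    # actually has one; on macOS the directory is absent, which is why a
    # hard-coded "-v /sys/fs/cgroup:/sys/fs/cgroup:ro" expectation fails.
    if os.path.isdir(cgroup_dir):
        return ["-v", f"{cgroup_dir}:{cgroup_dir}:ro"]
    return []
```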






[jira] [Created] (YARN-5595) Update documentation and Javadoc to match change to NodeHealthScriptRunner#reportHealthStatus

2016-08-30 Thread Ray Chiang (JIRA)
Ray Chiang created YARN-5595:


 Summary: Update documentation and Javadoc to match change to 
NodeHealthScriptRunner#reportHealthStatus
 Key: YARN-5595
 URL: https://issues.apache.org/jira/browse/YARN-5595
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Ray Chiang









Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-08-30 Thread Allen Wittenauer

> On Aug 30, 2016, at 10:17 AM, Zhe Zhang  wrote:
> 
> Thanks Andrew for the great work! It's really exciting to finally see a
> Hadoop 3 RC.
> 
> I noticed CHANGES and RELEASENOTES markdown files which were not in
> previous RCs like 2.7.3. What are good tools to verify them? I tried
> reading them on IntelliJ but format looks odd.


The site tarball has them converted to HTML.  I've also re-run the 
versions that I keep on my gitlab account.  (Since the data comes from JIRA, 
the content should be the same but the format and ordering might be different 
since I use the master branch of Yetus.) 
https://gitlab.com/_a__w_/eco-release-metadata/tree/master/HADOOP/3.0.0-alpha1

It also looks like IntelliJ has a few different markdown plug-ins.  
You'll want one that supports what is generally referred to as MultiMarkdown or 
Github-Flavored Markdown (GFM) since releasedocmaker uses the table extension 
format found in that specification.  (It's an extremely common extension so I'm 
sure one of them supports it.)



Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-08-30 Thread Andrew Wang
I'll put my own +1 on it:

* Built from source
* Started pseudo cluster and ran Pi job successfully

On Tue, Aug 30, 2016 at 10:17 AM, Zhe Zhang  wrote:

>
> Thanks Andrew for the great work! It's really exciting to finally see a
> Hadoop 3 RC.
>
> I noticed CHANGES and RELEASENOTES markdown files which were not in
> previous RCs like 2.7.3. What are good tools to verify them? I tried
> reading them on IntelliJ but format looks odd.
>
> I'm still testing the RC:
> - Downloaded and verified checksum
> - Built from source
> - Will start small cluster and test simple programs, focusing on EC
> functionalities
>
> -- Zhe
>
> On Tue, Aug 30, 2016 at 8:51 AM Andrew Wang 
> wrote:
>
>> Hi all,
>>
>> Thanks to the combined work of many, many contributors, here's an RC0 for
>> 3.0.0-alpha1:
>>
>> http://home.apache.org/~wang/3.0.0-alpha1-RC0/
>>
>> alpha1 is the first in a series of planned alpha releases leading up to
>> GA.
>> The objective is to get an artifact out to downstreams for testing and to
>> iterate quickly based on their feedback. So, please keep that in mind when
>> voting; hopefully most issues can be addressed by future alphas rather
>> than
>> future RCs.
>>
>> Sorry for getting this out on a Tuesday, but I'd still like this vote to
>> run the normal 5 days, thus ending Saturday (9/3) at 9AM PDT. I'll extend
>> if we lack the votes.
>>
>> Please try it out and let me know what you think.
>>
>> Best,
>> Andrew
>>
>


Re: [REMINDER] How to set fix versions when committing

2016-08-30 Thread Andrew Wang
Hi Junping,

On Tue, Aug 30, 2016 at 4:30 AM, Junping Du  wrote:

> Hi Andrew and all,
> Thanks for the notice on the change. I am still concerned that this rule
> change may cause some confusion by conflicting with our previous rule - no
> need to set the trunk version if a change lands on a 2.x branch. As we can
> see, there are 4 cases of version settings for JIRAs landing on trunk and
> branch-2 now:
> 1. JIRA with fix version set to 2.x only, before the 3.0.0-alpha1 cut-off.
> 2. JIRA with fix version set to 2.x only, after the 3.0.0-alpha1 cut-off.
> 3. JIRA with fix version set to 2.x and 3.0.0-alpha1, after the
> 3.0.0-alpha1 cut-off.
> 4. JIRA with fix version set to 2.x and 3.0.0-alpha2, after the
> 3.0.0-alpha1 cut-off.
>
> Cases 3 and 4 can be easily distinguished, but cases 1 and 2 go against
> our rule change here and are hard to differentiate, unless we want to mark
> all previous JIRAs with a fix version of 2.x only to also include
> 3.0.0-alpha1/3.0.0-alpha2. That's a tremendous effort, and I doubt it
> should be our option.
>

I believe (1) was handled by the bulk fix version update I did. It added
the 3.0.0-alpha1 fix version for all JIRAs committed to trunk after the
release of 2.7.0. I filtered out branch-2 only commits based on git log.

(2) was addressed by rebranching branch-3.0.0-alpha1 after the bulk fix
version update. It should be easy to track mistakes via a JIRA query
similar to the one used to do the bulk fix version update. That code is all
posted on my github: https://github.com/umbrant/jira-update

Best,
Andrew


Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-08-30 Thread Zhe Zhang
Thanks Andrew for the great work! It's really exciting to finally see a
Hadoop 3 RC.

I noticed CHANGES and RELEASENOTES markdown files which were not in
previous RCs like 2.7.3. What are good tools to verify them? I tried
reading them on IntelliJ but format looks odd.

I'm still testing the RC:
- Downloaded and verified checksum
- Built from source
- Will start small cluster and test simple programs, focusing on EC
functionalities

-- Zhe

On Tue, Aug 30, 2016 at 8:51 AM Andrew Wang 
wrote:

> Hi all,
>
> Thanks to the combined work of many, many contributors, here's an RC0 for
> 3.0.0-alpha1:
>
> http://home.apache.org/~wang/3.0.0-alpha1-RC0/
>
> alpha1 is the first in a series of planned alpha releases leading up to GA.
> The objective is to get an artifact out to downstreams for testing and to
> iterate quickly based on their feedback. So, please keep that in mind when
> voting; hopefully most issues can be addressed by future alphas rather than
> future RCs.
>
> Sorry for getting this out on a Tuesday, but I'd still like this vote to
> run the normal 5 days, thus ending Saturday (9/3) at 9AM PDT. I'll extend
> if we lack the votes.
>
> Please try it out and let me know what you think.
>
> Best,
> Andrew
>


[VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-08-30 Thread Andrew Wang
Hi all,

Thanks to the combined work of many, many contributors, here's an RC0 for
3.0.0-alpha1:

http://home.apache.org/~wang/3.0.0-alpha1-RC0/

alpha1 is the first in a series of planned alpha releases leading up to GA.
The objective is to get an artifact out to downstreams for testing and to
iterate quickly based on their feedback. So, please keep that in mind when
voting; hopefully most issues can be addressed by future alphas rather than
future RCs.

Sorry for getting this out on a Tuesday, but I'd still like this vote to
run the normal 5 days, thus ending Saturday (9/3) at 9AM PDT. I'll extend
if we lack the votes.

Please try it out and let me know what you think.

Best,
Andrew


ApacheCon Seville CFP closes September 9th

2016-08-30 Thread Rich Bowen
It's traditional. We wait for the last minute to get our talk proposals
in for conferences.

Well, the last minute has arrived. The CFP for ApacheCon Seville closes
on September 9th, which is less than 2 weeks away. It's time to get your
talks in, so that we can make this the best ApacheCon yet.

It's also time to discuss with your developer and user community whether
there's a track of talks that you might want to propose, so that you
have more complete coverage of your project than a talk or two.

For Apache Big Data, the relevant URLs are:
Event details:
http://events.linuxfoundation.org/events/apache-big-data-europe
CFP:
http://events.linuxfoundation.org/events/apache-big-data-europe/program/cfp

For ApacheCon Europe, the relevant URLs are:
Event details: http://events.linuxfoundation.org/events/apachecon-europe
CFP: http://events.linuxfoundation.org/events/apachecon-europe/program/cfp

This year, we'll be reviewing papers "blind" - that is, looking at the
abstracts without knowing who the speaker is. This has been shown to
eliminate the "me and my buddies" nature of many tech conferences,
producing more diversity, and more new speakers. So make sure your
abstracts clearly explain what you'll be talking about.

For further updates about ApacheCon, follow us on Twitter, @ApacheCon,
or drop by our IRC channel, #apachecon on the Freenode IRC network.

-- 
Rich Bowen
WWW: http://apachecon.com/
Twitter: @ApacheCon




[jira] [Created] (YARN-5594) Handle old data format while recovering RM

2016-08-30 Thread Tatyana But (JIRA)
Tatyana But created YARN-5594:
-

 Summary: Handle old data format while recovering RM
 Key: YARN-5594
 URL: https://issues.apache.org/jira/browse/YARN-5594
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.7.0
Reporter: Tatyana But


We got this error after upgrading the cluster from v2.5.1 to v2.7.0.

2016-08-25 17:20:33,293 ERROR
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Failed to
load/recover state
com.google.protobuf.InvalidProtocolBufferException: Protocol message contained
an invalid tag (zero).
at 
com.google.protobuf.InvalidProtocolBufferException.invalidTag(InvalidProtocolBufferException.java:89)
at com.google.protobuf.CodedInputStream.readTag(CodedInputStream.java:108)
at 
org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto.&lt;init&gt;(YarnServerResourceManagerRecoveryProtos.java:4680)
at 
org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto.&lt;init&gt;(YarnServerResourceManagerRecoveryProtos.java:4644)
at 
org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$1.parsePartialFrom(YarnServerResourceManagerRecoveryProtos.java:4740)
at 
org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$1.parsePartialFrom(YarnServerResourceManagerRecoveryProtos.java:4735)
at 
org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$Builder.mergeFrom(YarnServerResourceManagerRecoveryProtos.java:5075)
at 
org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$Builder.mergeFrom(YarnServerResourceManagerRecoveryProtos.java:4955)
at 
com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:337)
at 
com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:267)
at 
com.google.protobuf.AbstractMessageLite$Builder.mergeFrom(AbstractMessageLite.java:210)
at 
com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:904)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.records.RMDelegationTokenIdentifierData.readFields(RMDelegationTokenIdentifierData.java:43)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.loadRMDTSecretManagerState(FileSystemRMStateStore.java:355)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.loadState(FileSystemRMStateStore.java:199)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:587)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:1007)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1048)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1044)

The reason for this problem is that these Hadoop versions use different
formats for the files
/var/mapr/cluster/yarn/rm/system/FSRMStateRoot/RMDTSecretManagerRoot/RMDelegationToken*.

This fix handles the old data format during RM recovery if an
InvalidProtocolBufferException occurs.
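A minimal sketch of that fallback pattern, assuming the only available signal is the thrown exception; the Parser interface and both parsers here are hypothetical placeholders, not the real protobuf-generated readers:

```java
import java.io.IOException;

public class FormatFallback {
    @FunctionalInterface
    interface Parser<T> { T parse(byte[] data) throws IOException; }

    // Try the current (protobuf) format first; if it throws (e.g. an
    // InvalidProtocolBufferException, which extends IOException), retry
    // with the reader for the legacy format. Both parsers are caller-supplied.
    static <T> T parseWithFallback(byte[] data, Parser<T> current, Parser<T> legacy)
            throws IOException {
        try {
            return current.parse(data);
        } catch (IOException e) {
            return legacy.parse(data);
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] oldFormat = "old".getBytes();
        String token = parseWithFallback(oldFormat,
            d -> { throw new IOException("Protocol message contained an invalid tag (zero)."); },
            d -> new String(d));
        System.out.println(token); // falls back to the legacy parser and prints "old"
    }
}
```

The real fix would plug the protobuf reader and the pre-2.6 reader into the two slots; the point is only that the fallback is keyed off the parse exception.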






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2016-08-30 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/

[Aug 29, 2016 3:55:38 PM] (jlowe) HADOOP-13552. RetryInvocationHandler logs all 
remote exceptions.
[Aug 29, 2016 4:14:55 PM] (jlowe) YARN-5560. Clean up bad exception catching 
practices in TestYarnClient.
[Aug 29, 2016 4:26:46 PM] (aengineer) HADOOP-7363. 
TestRawLocalFileSystemContract is needed. Contributed by
[Aug 29, 2016 5:15:34 PM] (liuml07) HDFS-10807. Doc about upgrading to a 
version of HDFS with snapshots may
[Aug 29, 2016 7:56:09 PM] (jlowe) MAPREDUCE-6768. TestRecovery.testSpeculative 
failed with NPE.
[Aug 29, 2016 8:04:28 PM] (liuml07) HADOOP-13559. Remove close() within 
try-with-resources in
[Aug 29, 2016 8:59:54 PM] (yzhang) HDFS-10625. VolumeScanner to report why a 
block is found bad.
[Aug 29, 2016 9:46:00 PM] (zhz) YARN-5550. TestYarnCLI#testGetContainers should 
format according to
[Aug 29, 2016 10:30:49 PM] (wang) HADOOP-12608. Fix exception message in WASB 
when connecting with
[Aug 29, 2016 10:55:33 PM] (rchiang) YARN-5567. Fix script exit code checking in
[Aug 30, 2016 12:48:08 AM] (xiao) HDFS-4210. Throw helpful exception when DNS 
entry for JournalNode cannot
[Aug 30, 2016 6:37:26 AM] (zhz) HDFS-10814. Add assertion for 
getNumEncryptionZones when no EZ is




-1 overall


The following subsystems voted -1:
asflicense unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 

Failed junit tests :

   hadoop.hdfs.TestRollingUpgrade 
   hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 

Timed out junit tests :

   org.apache.hadoop.http.TestHttpServerLifecycle 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/diff-compile-javac-root.txt
  [168K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/diff-checkstyle-root.txt
  [16M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/diff-patch-pylint.txt
  [16K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/diff-javadoc-javadoc-root.txt
  [2.2M]

   CTEST:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt
  [24K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [120K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [148K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [268K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt
  [120K]

   asflicense:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org




[jira] [Created] (YARN-5593) [Umbrella] Add support for YARN Allocation composed of multiple containers/processes

2016-08-30 Thread Arun Suresh (JIRA)
Arun Suresh created YARN-5593:
-

 Summary: [Umbrella] Add support for YARN Allocation composed of 
multiple containers/processes
 Key: YARN-5593
 URL: https://issues.apache.org/jira/browse/YARN-5593
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Arun Suresh
Assignee: Arun Suresh


Opening this to explicitly call out and track some of the ideas that were 
discussed in YARN-1040. Specifically, the concept of an {{Allocation}} 
against which an AM can start multiple {{Containers}}, as long as the sum of 
resources used by all containers {{fitsIn()}} the Resources leased to the 
{{Allocation}}.
This is especially useful for AMs that might want to target certain operations 
(like upgrade / restart) on specific containers / processes within an 
Allocation without fear of losing the allocation.
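A minimal sketch of that invariant, with simplified stand-in types (these are illustrative classes, not YARN's actual Resource/Allocation API):

```java
import java.util.ArrayList;
import java.util.List;

public class AllocationSketch {
    // A resource vector of memory (MB) and virtual cores.
    static class Resource {
        final long memoryMb;
        final int vcores;
        Resource(long memoryMb, int vcores) { this.memoryMb = memoryMb; this.vcores = vcores; }
        // True when this resource fits entirely inside the other.
        boolean fitsIn(Resource other) {
            return memoryMb <= other.memoryMb && vcores <= other.vcores;
        }
    }

    // An Allocation leases a fixed Resource; containers may start against it
    // as long as the sum of their resources still fits in the lease.
    static class Allocation {
        final Resource lease;
        final List<Resource> containers = new ArrayList<>();
        Allocation(Resource lease) { this.lease = lease; }

        boolean tryStartContainer(Resource r) {
            long mem = r.memoryMb;
            int cores = r.vcores;
            for (Resource c : containers) { mem += c.memoryMb; cores += c.vcores; }
            if (new Resource(mem, cores).fitsIn(lease)) {
                containers.add(r);
                return true;
            }
            return false;
        }
    }

    public static void main(String[] args) {
        Allocation a = new Allocation(new Resource(4096, 4));
        System.out.println(a.tryStartContainer(new Resource(2048, 2))); // true: fits
        System.out.println(a.tryStartContainer(new Resource(2048, 2))); // true: fits exactly
        System.out.println(a.tryStartContainer(new Resource(1, 1)));    // false: exceeds lease
    }
}
```

Restarting or upgrading one container inside the Allocation would then just remove it from the list and start a replacement, without giving the lease back to the scheduler.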






[jira] [Created] (YARN-5592) Add support for dynamic resource updates with multiple resource types

2016-08-30 Thread Varun Vasudev (JIRA)
Varun Vasudev created YARN-5592:
---

 Summary: Add support for dynamic resource updates with multiple 
resource types
 Key: YARN-5592
 URL: https://issues.apache.org/jira/browse/YARN-5592
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Varun Vasudev









[jira] [Created] (YARN-5590) Add support for increase and decrease of container resources with resource profiles

2016-08-30 Thread Varun Vasudev (JIRA)
Varun Vasudev created YARN-5590:
---

 Summary: Add support for increase and decrease of container 
resources with resource profiles
 Key: YARN-5590
 URL: https://issues.apache.org/jira/browse/YARN-5590
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Varun Vasudev
Assignee: Varun Vasudev









[jira] [Created] (YARN-5589) Update CapacitySchedulerConfiguration minimum and maximum calculations to consider all resource types

2016-08-30 Thread Varun Vasudev (JIRA)
Varun Vasudev created YARN-5589:
---

 Summary: Update CapacitySchedulerConfiguration minimum and maximum 
calculations to consider all resource types
 Key: YARN-5589
 URL: https://issues.apache.org/jira/browse/YARN-5589
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Varun Vasudev
Assignee: Varun Vasudev









[jira] [Created] (YARN-5591) Update web UIs to reflect multiple resource types

2016-08-30 Thread Varun Vasudev (JIRA)
Varun Vasudev created YARN-5591:
---

 Summary: Update web UIs to reflect multiple resource types
 Key: YARN-5591
 URL: https://issues.apache.org/jira/browse/YARN-5591
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Varun Vasudev
Assignee: Varun Vasudev









[jira] [Created] (YARN-5588) Add support for resource profiles in distributed shell and MapReduce

2016-08-30 Thread Varun Vasudev (JIRA)
Varun Vasudev created YARN-5588:
---

 Summary: Add support for resource profiles in distributed shell 
and MapReduce
 Key: YARN-5588
 URL: https://issues.apache.org/jira/browse/YARN-5588
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Varun Vasudev
Assignee: Varun Vasudev









[jira] [Created] (YARN-5587) Add support for resource profiles

2016-08-30 Thread Varun Vasudev (JIRA)
Varun Vasudev created YARN-5587:
---

 Summary: Add support for resource profiles
 Key: YARN-5587
 URL: https://issues.apache.org/jira/browse/YARN-5587
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Varun Vasudev
Assignee: Varun Vasudev


Add support for resource profiles on the RM side to allow users to use 
shorthands to specify resource requirements.
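A minimal sketch of what such a shorthand lookup might look like; the profile names and resource values here are hypothetical, not YARN defaults:

```java
import java.util.Map;

public class ResourceProfiles {
    // Hypothetical profile table: shorthand name -> {memory MB, vcores}.
    static final Map<String, long[]> PROFILES = Map.of(
        "minimum", new long[] {1024, 1},
        "default", new long[] {2048, 2},
        "maximum", new long[] {8192, 8});

    // Resolve a shorthand into a concrete resource vector, failing loudly
    // on unknown names so typos don't silently fall back to a default.
    static long[] resolve(String profile) {
        long[] r = PROFILES.get(profile);
        if (r == null) {
            throw new IllegalArgumentException("Unknown resource profile: " + profile);
        }
        return r;
    }

    public static void main(String[] args) {
        long[] r = resolve("default");
        System.out.println("memory-mb=" + r[0] + " vcores=" + r[1]);
    }
}
```

In the real feature the table would presumably be loaded from RM configuration rather than hard-coded.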






[jira] [Created] (YARN-5586) Update the Resources class to consider all resource types

2016-08-30 Thread Varun Vasudev (JIRA)
Varun Vasudev created YARN-5586:
---

 Summary: Update the Resources class to consider all resource types
 Key: YARN-5586
 URL: https://issues.apache.org/jira/browse/YARN-5586
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Varun Vasudev
Assignee: Varun Vasudev


The Resources class provides a bunch of useful functions like clone, addTo, 
etc. These need to be updated to consider all resource types instead of just 
memory and cpu.
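As a sketch of the generalized behavior, addTo over an arbitrary set of typed resources might look like this (a plain map stands in for YARN's Resource class; this is illustrative, not the Resources implementation):

```java
import java.util.HashMap;
import java.util.Map;

public class MultiTypeResources {
    // Generalized addTo: add every resource type present in `rhs` into `lhs`,
    // not just memory and cpu. Types missing from lhs are treated as zero.
    static Map<String, Long> addTo(Map<String, Long> lhs, Map<String, Long> rhs) {
        for (Map.Entry<String, Long> e : rhs.entrySet()) {
            lhs.merge(e.getKey(), e.getValue(), Long::sum);
        }
        return lhs;
    }

    public static void main(String[] args) {
        Map<String, Long> a = new HashMap<>(Map.of("memory-mb", 1024L, "vcores", 2L));
        Map<String, Long> b = Map.of("memory-mb", 512L, "vcores", 1L, "gpu", 1L);
        // Sums per type: memory-mb=1536, vcores=3, gpu=1
        System.out.println(addTo(a, b));
    }
}
```

clone, subtractFrom, fitsIn, etc. would iterate over the same union of resource types instead of touching two hard-coded fields.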






[jira] [Created] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-08-30 Thread Rohith Sharma K S (JIRA)
Rohith Sharma K S created YARN-5585:
---

 Summary: [Atsv2] Add a new filter fromId in REST endpoints
 Key: YARN-5585
 URL: https://issues.apache.org/jira/browse/YARN-5585
 Project: Hadoop YARN
  Issue Type: Bug
  Components: timelinereader
Reporter: Rohith Sharma K S
Assignee: Rohith Sharma K S


The TimelineReader REST APIs provide a lot of filters to retrieve 
applications. Along with those, it would be good to add a new filter, fromId, 
so that entities can be retrieved starting after the given fromId.

Example: if the applications stored in the database are app-1, app-2, ... 
app-10, *getApps?limit=5* gives app-1 to app-5, but retrieving the next 5 
apps is difficult.

So the proposal is to support fromId in the filter, like 
*getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
app-10.

This is very useful for pagination in the web UI.
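The proposed paging contract can be sketched as follows; the in-memory sorted list stands in for the backing store and is not the TimelineReader implementation:

```java
import java.util.ArrayList;
import java.util.List;

public class FromIdPaging {
    // Return up to `limit` ids that come strictly after `fromId`.
    // A null fromId starts from the beginning; an unknown fromId also
    // falls back to the beginning (indexOf returns -1, so start is 0).
    static List<String> getApps(List<String> ids, int limit, String fromId) {
        List<String> page = new ArrayList<>();
        int start = (fromId == null) ? 0 : ids.indexOf(fromId) + 1;
        for (int i = start; i < ids.size() && page.size() < limit; i++) {
            page.add(ids.get(i));
        }
        return page;
    }

    public static void main(String[] args) {
        List<String> ids = new ArrayList<>();
        for (int i = 1; i <= 10; i++) ids.add("app-" + i);
        System.out.println(getApps(ids, 5, null));    // first page: app-1..app-5
        System.out.println(getApps(ids, 5, "app-5")); // next page: app-6..app-10
    }
}
```

The web UI would feed the last id of the current page back as fromId to fetch the next page.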






Re: [REMINDER] How to set fix versions when committing

2016-08-30 Thread Junping Du
Hi Andrew and all,
 Thanks for the notice on the change. I am still concerned that this rule 
change may cause some confusion by conflicting with our previous rule - no 
need to set the trunk version if a change lands on a 2.x branch. As we can 
see, there are 4 cases of version settings for JIRAs landing on trunk and 
branch-2 now:
1. JIRA with fix version set to 2.x only, before the 3.0.0-alpha1 cut-off.
2. JIRA with fix version set to 2.x only, after the 3.0.0-alpha1 cut-off.
3. JIRA with fix version set to 2.x and 3.0.0-alpha1, after the 3.0.0-alpha1 
cut-off.
4. JIRA with fix version set to 2.x and 3.0.0-alpha2, after the 3.0.0-alpha1 
cut-off.

Cases 3 and 4 can be easily distinguished, but cases 1 and 2 go against our 
rule change here and are hard to differentiate, unless we want to mark all 
previous JIRAs with a fix version of 2.x only to also include 
3.0.0-alpha1/3.0.0-alpha2. That's a tremendous effort, and I doubt it should 
be our option.
My preference is to update our rule a bit: assume all JIRAs marked with a 
2.x-only fix version also landed in 3.0.0-alpha1, and in the meanwhile 
monitor all JIRAs that come after the 3.0.0-alpha1 cut-off to make sure they 
include 3.0.0-alpha2 (or the latest trunk version).
Thoughts?


Thanks,

Junping


From: Andrew Wang 
Sent: Tuesday, August 30, 2016 2:57 AM
To: common-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org; 
yarn-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org
Subject: [REMINDER] How to set fix versions when committing

Hi all,

I finished the bulk fix version update and just rebranched
branch-3.0.0-alpha1 off of trunk. So, a reminder that the procedure for
setting fix versions has changed slightly from before.

Everything is fully detailed here, the example in particular should help
clarify things:

https://hadoop.apache.org/versioning.html

The short of it though is that if a JIRA is going into trunk or
branch-3.0.0-alpha1, it should also have a 3.0.0-alpha1 or 3.0.0-alpha2
fixVersion set.

Thanks,
Andrew




[jira] [Created] (YARN-5584) Include name of JAR/Tar/Zip on failure to expand artifact on download

2016-08-30 Thread Steve Loughran (JIRA)
Steve Loughran created YARN-5584:


 Summary: Include name of JAR/Tar/Zip on failure to expand artifact 
on download
 Key: YARN-5584
 URL: https://issues.apache.org/jira/browse/YARN-5584
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager
Affects Versions: 2.7.2
Reporter: Steve Loughran


If YARN can't expand a JAR/ZIP/tar file on download, the exception is passed 
back to the AM, but not the name of the file that failed. This makes it 
harder to track down the problem than one would like.
{code}
java.util.zip.ZipException: invalid CEN header (bad signature)
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.&lt;init&gt;(ZipFile.java:215)
at java.util.zip.ZipFile.&lt;init&gt;(ZipFile.java:145)
at java.util.zip.ZipFile.&lt;init&gt;(ZipFile.java:159)
at org.apache.hadoop.fs.FileUtil.unZip(FileUtil.java:589)
at org.apache.hadoop.yarn.util.FSDownload.unpack(FSDownload.java:277)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:362)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
{code}
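One hedged sketch of the requested improvement: wrap the unpack call and rethrow with the artifact name attached, keeping the original exception as the cause. The helper and its names are hypothetical, not FSDownload's actual code:

```java
import java.io.IOException;

public class UnpackErrors {
    @FunctionalInterface
    interface Unpacker { void unpack() throws IOException; }

    // Run the unpack and, on failure, rethrow with the artifact name
    // attached, preserving the original exception as the cause.
    static void unpackWithContext(String artifact, Unpacker u) throws IOException {
        try {
            u.unpack();
        } catch (IOException e) {
            throw new IOException("Failed to expand " + artifact + ": " + e.getMessage(), e);
        }
    }

    public static void main(String[] args) {
        try {
            unpackWithContext("lib/job.jar", () -> {
                // ZipException extends IOException, so this matches the catch above.
                throw new java.util.zip.ZipException("invalid CEN header (bad signature)");
            });
        } catch (IOException e) {
            System.out.println(e.getMessage());
            // prints: Failed to expand lib/job.jar: invalid CEN header (bad signature)
        }
    }
}
```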






[jira] [Created] (YARN-5583) [YARN-3368] Fix paths in .gitignore

2016-08-30 Thread Sreenath Somarajapuram (JIRA)
Sreenath Somarajapuram created YARN-5583:


 Summary: [YARN-3368] Fix paths in .gitignore
 Key: YARN-5583
 URL: https://issues.apache.org/jira/browse/YARN-5583
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Sreenath Somarajapuram
Assignee: Sreenath Somarajapuram


The npm-debug.log and testem.log paths are specified incorrectly.






[jira] [Created] (YARN-5582) SchedulerUtils#validate vcores even for DefaultResourceCalculator

2016-08-30 Thread Bibin A Chundatt (JIRA)
Bibin A Chundatt created YARN-5582:
--

 Summary: SchedulerUtils#validate vcores even for 
DefaultResourceCalculator
 Key: YARN-5582
 URL: https://issues.apache.org/jira/browse/YARN-5582
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Bibin A Chundatt
Assignee: Bibin A Chundatt


Configure memory = 20 GB, vcores = 3.
Submit a request for 5 containers, each with 4 GB memory and 5 vcores, from a 
MapReduce application.

{noformat}
Caused by: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException):
 Invalid resource request, requested virtual cores < 0, or requested virtual 
cores > max configured, requestedVirtualCores=5, maxVirtualCores=3
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:274)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:234)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndvalidateRequest(SchedulerUtils.java:250)
at 
org.apache.hadoop.yarn.server.resourcemanager.RMServerUtils.normalizeAndValidateRequests(RMServerUtils.java:105)
at 
org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:703)
at 
org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:65)
at 
org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:115)
{noformat}

Vcores should not be validated when the resource calculator is 
{{org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator}}.
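A sketch of the proposed behavior, with the calculator reduced to a boolean flag for illustration; this mirrors SchedulerUtils#validateResourceRequest in spirit only, not its actual signature:

```java
public class VcoreValidationSketch {
    // Only validate vcores when the resource calculator actually accounts
    // for them (e.g. DominantResourceCalculator). With the memory-only
    // DefaultResourceCalculator, the vcore check is skipped entirely.
    static void validateRequest(int requestedVcores, int maxVcores,
                                boolean calculatorUsesVcores) {
        if (!calculatorUsesVcores) {
            return; // DefaultResourceCalculator: memory-only, skip vcore check
        }
        if (requestedVcores < 0 || requestedVcores > maxVcores) {
            throw new IllegalArgumentException(
                "Invalid resource request, requested virtual cores < 0, or "
                + "requested virtual cores > max configured, requestedVirtualCores="
                + requestedVcores + ", maxVirtualCores=" + maxVcores);
        }
    }

    public static void main(String[] args) {
        validateRequest(5, 3, false); // DefaultResourceCalculator: accepted
        try {
            validateRequest(5, 3, true); // DRF-style calculator: rejected
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```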


