Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-08-30 Thread Zhe Zhang
+1 (non-binding)

Did the following on 7 RHEL 6.6 servers
- Downloaded and built from source
- Downloaded and verified checksum of the binary tar.gz file
- Setup a cluster with 1 NN and 6 DNs
- Tried regular HDFS commands
- Tried EC commands (listPolicies, getPolicy, setPolicy); they worked fine
- Verified that with a 3-2 policy, 1.67x raw capacity is used. Below is the
output after copying the binary tar.gz file into an EC folder. The file is
318 MB.

Configured Capacity: 3221225472 (3 GB)
Present Capacity: 3215348743 (2.99 GB)
DFS Remaining: 2655666176 (2.47 GB)
DFS Used: 559682567 (533.75 MB)
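
The 1.67x figure follows directly from the Reed-Solomon layout: a 3-2 policy writes 2 parity units for every 3 data units. A quick sketch of the arithmetic (the 318 MB file size comes from the message above; the small gap to the reported 533.75 MB DFS Used is presumably block metadata and other files):

```python
# Sketch: expected raw storage overhead for a Reed-Solomon (3 data, 2 parity)
# erasure coding policy, matching the verification described above.
def ec_overhead(data_units, parity_units):
    """Raw-to-logical storage ratio for an EC policy."""
    return (data_units + parity_units) / data_units

ratio = ec_overhead(3, 2)   # 5/3, i.e. ~1.67x
file_mb = 318               # size of the binary tar.gz copied into the EC folder
raw_mb = file_mb * ratio    # raw capacity the file should consume

print(round(ratio, 2))      # 1.67
print(round(raw_mb))        # 530
```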

Thanks, Allen, for clarifying the markdown files. I also verified the site
HTML files (the content of index.html, plus some randomly selected links).


On Tue, Aug 30, 2016 at 2:20 PM Eric Badger 
wrote:

> Well that's embarrassing. I had accidentally slightly renamed my
> log4j.properties file in my conf directory, so it was there, just not being
> read. Apologies for the unnecessary spam. With this and the public key from
> Andrew, I give my non-binding +1.
>
> Eric
>
>
>
> On Tuesday, August 30, 2016 4:11 PM, Allen Wittenauer <
> a...@effectivemachines.com> wrote:
>
>
> > On Aug 30, 2016, at 2:06 PM, Eric Badger 
> wrote:
> >
> >
> > WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be
> incomplete.
>
> ^^
>
>
> >
> > After running the above command, the RM UI showed a successful job, but
> as you can see, I did not have anything printed onto the command line.
> Hopefully this is just a misconfiguration on my part, but I figured that I
> would point it out just in case.
>
>
> It gave you a very important message in the output ...
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>


[jira] [Created] (HADOOP-13564) modify mapred to use hadoop_subcommand_opts

2016-08-30 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-13564:
-

 Summary: modify mapred to use hadoop_subcommand_opts
 Key: HADOOP-13564
 URL: https://issues.apache.org/jira/browse/HADOOP-13564
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Allen Wittenauer






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-08-30 Thread Allen Wittenauer

> On Aug 30, 2016, at 2:20 PM, Eric Badger  wrote:
> 
> Well that's embarrassing. I had accidentally slightly renamed my 
> log4j.properties file in my conf directory, so it was there, just not being 
> read.

Nah.  You were just testing out the shell rewrite's ability to detect a 
common error. ;) 

BTW, something else: instead of doing env | grep HADOOP, you can run
'hadoop envvars' to get most of the good stuff.
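
The warning Allen points to comes from the rewritten shell scripts. Here is a minimal sketch of that kind of startup sanity check, using a hypothetical check_log4j helper (an illustration, not the actual Hadoop code):

```shell
# Hypothetical sketch of the kind of check the Hadoop 3 shell rewrite
# performs at startup: warn when log4j.properties is absent from the
# configuration directory.
check_log4j() {
  if [ ! -f "$1/log4j.properties" ]; then
    echo "WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete."
  fi
}

check_log4j "/nonexistent/conf"   # prints the warning
```

In Eric's case the file existed under a different name, so the -f test fails and the warning fires even though the conf directory itself is present.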
-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-08-30 Thread Eric Badger
Well that's embarrassing. I had accidentally slightly renamed my 
log4j.properties file in my conf directory, so it was there, just not being 
read. Apologies for the unnecessary spam. With this and the public key from 
Andrew, I give my non-binding +1. 

Eric



On Tuesday, August 30, 2016 4:11 PM, Allen Wittenauer 
 wrote:


> On Aug 30, 2016, at 2:06 PM, Eric Badger  
> wrote:
> 
> 
> WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.

^^


> 
> After running the above command, the RM UI showed a successful job, but as 
> you can see, I did not have anything printed onto the command line. Hopefully 
> this is just a misconfiguration on my part, but I figured that I would point 
> it out just in case.


It gave you a very important message in the output ...

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-08-30 Thread Andrew Wang
Hi Eric, thanks for trying this out,

I tried this gpg command to get my key, seemed to work:

# gpg --keyserver pgp.mit.edu --recv-keys 7501105C
gpg: requesting key 7501105C from hkp server pgp.mit.edu
gpg: /root/.gnupg/trustdb.gpg: trustdb created
gpg: key 7501105C: public key "Andrew Wang (CODE SIGNING KEY) <
andrew.w...@cloudera.com>" imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg:   imported: 1  (RSA: 1)

Also found via search:
http://pgp.mit.edu/pks/lookup?search=wang%40apache.org&op=index


On Tue, Aug 30, 2016 at 2:06 PM, Eric Badger  wrote:

> I don't know why my email client keeps getting rid of all of my spacing.
> Resending the same email so that it is actually legible...
>
> All on OSX 10.11.6:
> - Verified the hashes. However, Andrew, I don't know where to find your
> public key, so I wasn't able to verify that they were signed by you.
> - Built from source
> - Deployed a pseudo-distributed cluster
> - Ran a few sample jobs
> - Poked around the RM UI
> - Poked around the attached website locally via the tarball
>
>
> I did find one odd thing, though. It could be a misconfiguration on my
> system, but I've never had this problem before with other releases (though
> I deal almost exclusively in 2.x and so I imagine things might be
> different). When I run a sleep job, I do not see any
> diagnostics/logs/counters printed out by the client. Initially I ran the
> job like I would on 2.7 and it failed (because I had not set
> yarn.app.mapreduce.am.env and mapreduce.admin.user.env), but I didn't see
> anything until I looked at the RM UI. There I was able to see all of the
> logs for the failed job and diagnose the issue. Then, once I fixed my
> parameters and ran the job again, I still didn't see any
> diagnostics/logs/counters.
>
>
> ebadger@foo: env | grep HADOOP
> HADOOP_HOME=/Users/ebadger/Downloads/hadoop-3.0.0-alpha1-
> src/hadoop-dist/target/hadoop-3.0.0-alpha1/
> HADOOP_CONF_DIR=/Users/ebadger/conf
> ebadger@foo: $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/share/hadoop/
> mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha1-tests.jar sleep
> -Dyarn.app.mapreduce.am.env="HADOOP_MAPRED_HOME=$HADOOP_HOME"
> -Dmapreduce.admin.user.env="HADOOP_MAPRED_HOME=$HADOOP_HOME" -mt 1 -rt 1
> -m 1 -r 1
> WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.
> ebadger@foo:
>
>
> After running the above command, the RM UI showed a successful job, but as
> you can see, I did not have anything printed onto the command line.
> Hopefully this is just a misconfiguration on my part, but I figured that I
> would point it out just in case.
>
>
> Thanks,
>
>
> Eric
>
>
>
> On Tuesday, August 30, 2016 4:00 PM, Eric Badger
>  wrote:
>
>
>
> All on OSX 10.11.6:
> - Verified the hashes. However, Andrew, I don't know where to find your
> public key, so I wasn't able to verify that they were signed by you.
> - Built from source
> - Deployed a pseudo-distributed cluster
> - Ran a few sample jobs
> - Poked around the RM UI
> - Poked around the attached website locally via the tarball
> I did find one odd thing, though. It could be a misconfiguration on my
> system, but I've never had this problem before with other releases (though
> I deal almost exclusively in 2.x and so I imagine things might be
> different). When I run a sleep job, I do not see any
> diagnostics/logs/counters printed out by the client. Initially I ran the
> job like I would on 2.7 and it failed (because I had not set
> yarn.app.mapreduce.am.env and mapreduce.admin.user.env), but I didn't see
> anything until I looked at the RM UI. There I was able to see all of the
> logs for the failed job and diagnose the issue. Then, once I fixed my
> parameters and ran the job again, I still didn't see any
> diagnostics/logs/counters.
> ebadger@foo: env | grep HADOOP
> HADOOP_HOME=/Users/ebadger/Downloads/hadoop-3.0.0-alpha1-src/hadoop-dist/target/hadoop-3.0.0-alpha1/
> HADOOP_CONF_DIR=/Users/ebadger/conf
> ebadger@foo: $HADOOP_HOME/bin/hadoop jar
> $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha1-tests.jar
> sleep -Dyarn.app.mapreduce.am.env="HADOOP_MAPRED_HOME=$HADOOP_HOME"
> -Dmapreduce.admin.user.env="HADOOP_MAPRED_HOME=$HADOOP_HOME" -mt 1 -rt 1
> -m 1 -r 1
> WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.
> ebadger@foo:
> After running the above command, the RM UI showed a successful job, but as
> you can see, I did not have anything printed onto the command line.
> Hopefully this is just a misconfiguration on my part, but I figured that I
> would point it out just in case.
> Thanks,
> Eric
>
>
>
> On Tuesday, August 30, 2016 12:58 PM, Andrew Wang <
> andrew.w...@cloudera.com> wrote:
>
>
> I'll put my own +1 on it:
>
> * Built from source
> * Started pseudo cluster and ran Pi job successfully
>
> On Tue, Aug 30, 2016 at 10:17 AM, Zhe Zhang  wrote:
>
> >
> > Thanks Andrew for the great work! 

Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-08-30 Thread Allen Wittenauer

> On Aug 30, 2016, at 2:06 PM, Eric Badger  
> wrote:
> 
> 
> WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.

^^


> 
> After running the above command, the RM UI showed a successful job, but as 
> you can see, I did not have anything printed onto the command line. Hopefully 
> this is just a misconfiguration on my part, but I figured that I would point 
> it out just in case.


It gave you a very important message in the output ...


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-08-30 Thread Eric Badger
I don't know why my email client keeps getting rid of all of my spacing. 
Resending the same email so that it is actually legible...

All on OSX 10.11.6:
- Verified the hashes. However, Andrew, I don't know where to find your public 
key, so I wasn't able to verify that they were signed by you.
- Built from source
- Deployed a pseudo-distributed cluster
- Ran a few sample jobs
- Poked around the RM UI
- Poked around the attached website locally via the tarball


I did find one odd thing, though. It could be a misconfiguration on my system, 
but I've never had this problem before with other releases (though I deal 
almost exclusively in 2.x and so I imagine things might be different). When I 
run a sleep job, I do not see any diagnostics/logs/counters printed out by the 
client. Initially I ran the job like I would on 2.7 and it failed (because I 
had not set yarn.app.mapreduce.am.env and mapreduce.admin.user.env), but I 
didn't see anything until I looked at the RM UI. There I was able to see all of 
the logs for the failed job and diagnose the issue. Then, once I fixed my 
parameters and ran the job again, I still didn't see any 
diagnostics/logs/counters.


ebadger@foo: env | grep HADOOP
HADOOP_HOME=/Users/ebadger/Downloads/hadoop-3.0.0-alpha1-src/hadoop-dist/target/hadoop-3.0.0-alpha1/
HADOOP_CONF_DIR=/Users/ebadger/conf
ebadger@foo: $HADOOP_HOME/bin/hadoop jar 
$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha1-tests.jar
 sleep -Dyarn.app.mapreduce.am.env="HADOOP_MAPRED_HOME=$HADOOP_HOME" 
-Dmapreduce.admin.user.env="HADOOP_MAPRED_HOME=$HADOOP_HOME" -mt 1 -rt 1 -m 1 
-r 1
WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.
ebadger@foo:


After running the above command, the RM UI showed a successful job, but as you 
can see, I did not have anything printed onto the command line. Hopefully this 
is just a misconfiguration on my part, but I figured that I would point it out 
just in case.


Thanks,


Eric



On Tuesday, August 30, 2016 4:00 PM, Eric Badger 
 wrote:



All on OSX 10.11.6:
- Verified the hashes. However, Andrew, I don't know where to find your public
key, so I wasn't able to verify that they were signed by you.
- Built from source
- Deployed a pseudo-distributed cluster
- Ran a few sample jobs
- Poked around the RM UI
- Poked around the attached website locally via the tarball

I did find one odd thing, though. It could be a misconfiguration on my system, 
but I've never had this problem before with other releases (though I deal 
almost exclusively in 2.x and so I imagine things might be different). When I 
run a sleep job, I do not see any diagnostics/logs/counters printed out by the 
client. Initially I ran the job like I would on 2.7 and it failed (because I 
had not set yarn.app.mapreduce.am.env and mapreduce.admin.user.env), but I 
didn't see anything until I looked at the RM UI. There I was able to see all of 
the logs for the failed job and diagnose the issue. Then, once I fixed my 
parameters and ran the job again, I still didn't see any 
diagnostics/logs/counters.

ebadger@foo: env | grep HADOOP
HADOOP_HOME=/Users/ebadger/Downloads/hadoop-3.0.0-alpha1-src/hadoop-dist/target/hadoop-3.0.0-alpha1/
HADOOP_CONF_DIR=/Users/ebadger/conf
ebadger@foo: $HADOOP_HOME/bin/hadoop jar
$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha1-tests.jar
sleep -Dyarn.app.mapreduce.am.env="HADOOP_MAPRED_HOME=$HADOOP_HOME"
-Dmapreduce.admin.user.env="HADOOP_MAPRED_HOME=$HADOOP_HOME" -mt 1 -rt 1 -m 1
-r 1
WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.
ebadger@foo:

After running the above command, the RM UI showed a successful job, but as you 
can see, I did not have anything printed onto the command line. Hopefully this 
is just a misconfiguration on my part, but I figured that I would point it out 
just in case.
Thanks,
Eric



On Tuesday, August 30, 2016 12:58 PM, Andrew Wang 
 wrote:


I'll put my own +1 on it:

* Built from source
* Started pseudo cluster and ran Pi job successfully

On Tue, Aug 30, 2016 at 10:17 AM, Zhe Zhang  wrote:

>
> Thanks Andrew for the great work! It's really exciting to finally see a
> Hadoop 3 RC.
>
> I noticed CHANGES and RELEASENOTES markdown files which were not in
> previous RCs like 2.7.3. What are good tools to verify them? I tried
> reading them on IntelliJ but format looks odd.
>
> I'm still testing the RC:
> - Downloaded and verified checksum
> - Built from source
> - Will start small cluster and test simple programs, focusing on EC
> functionalities
>
> -- Zhe
>
> On Tue, Aug 30, 2016 at 8:51 AM Andrew Wang 
> wrote:
>
>> Hi all,
>>
>> Thanks to the combined work of many, many contributors, here's an RC0 for
>> 3.0.0-alpha1:
>>
>> http://home.apache.org/~wang/3.0.0-alpha1-RC0/
>>
>> alpha1 is the first in a series of planned alpha releases leading up to
>> GA.
>> The 

Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-08-30 Thread Eric Badger
All on OSX 10.11.6:
- Verified the hashes. However, Andrew, I don't know where to find your public
key, so I wasn't able to verify that they were signed by you.
- Built from source
- Deployed a pseudo-distributed cluster
- Ran a few sample jobs
- Poked around the RM UI
- Poked around the attached website locally via the tarball

I did find one odd thing, though. It could be a misconfiguration on my system, 
but I've never had this problem before with other releases (though I deal 
almost exclusively in 2.x and so I imagine things might be different). When I 
run a sleep job, I do not see any diagnostics/logs/counters printed out by the 
client. Initially I ran the job like I would on 2.7 and it failed (because I 
had not set yarn.app.mapreduce.am.env and mapreduce.admin.user.env), but I 
didn't see anything until I looked at the RM UI. There I was able to see all of 
the logs for the failed job and diagnose the issue. Then, once I fixed my 
parameters and ran the job again, I still didn't see any 
diagnostics/logs/counters.

ebadger@foo: env | grep HADOOP
HADOOP_HOME=/Users/ebadger/Downloads/hadoop-3.0.0-alpha1-src/hadoop-dist/target/hadoop-3.0.0-alpha1/
HADOOP_CONF_DIR=/Users/ebadger/conf
ebadger@foo: $HADOOP_HOME/bin/hadoop jar
$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha1-tests.jar
sleep -Dyarn.app.mapreduce.am.env="HADOOP_MAPRED_HOME=$HADOOP_HOME"
-Dmapreduce.admin.user.env="HADOOP_MAPRED_HOME=$HADOOP_HOME" -mt 1 -rt 1 -m 1
-r 1
WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.
ebadger@foo:

After running the above command, the RM UI showed a successful job, but as you 
can see, I did not have anything printed onto the command line. Hopefully this 
is just a misconfiguration on my part, but I figured that I would point it out 
just in case.
Thanks,
Eric


On Tuesday, August 30, 2016 12:58 PM, Andrew Wang 
 wrote:
 

 I'll put my own +1 on it:

* Built from source
* Started pseudo cluster and ran Pi job successfully

On Tue, Aug 30, 2016 at 10:17 AM, Zhe Zhang  wrote:

>
> Thanks Andrew for the great work! It's really exciting to finally see a
> Hadoop 3 RC.
>
> I noticed CHANGES and RELEASENOTES markdown files which were not in
> previous RCs like 2.7.3. What are good tools to verify them? I tried
> reading them on IntelliJ but format looks odd.
>
> I'm still testing the RC:
> - Downloaded and verified checksum
> - Built from source
> - Will start small cluster and test simple programs, focusing on EC
> functionalities
>
> -- Zhe
>
> On Tue, Aug 30, 2016 at 8:51 AM Andrew Wang 
> wrote:
>
>> Hi all,
>>
>> Thanks to the combined work of many, many contributors, here's an RC0 for
>> 3.0.0-alpha1:
>>
>> http://home.apache.org/~wang/3.0.0-alpha1-RC0/
>>
>> alpha1 is the first in a series of planned alpha releases leading up to
>> GA.
>> The objective is to get an artifact out to downstreams for testing and to
>> iterate quickly based on their feedback. So, please keep that in mind when
>> voting; hopefully most issues can be addressed by future alphas rather
>> than
>> future RCs.
>>
>> Sorry for getting this out on a Tuesday, but I'd still like this vote to
>> run the normal 5 days, thus ending Saturday (9/3) at 9AM PDT. I'll extend
>> if we lack the votes.
>>
>> Please try it out and let me know what you think.
>>
>> Best,
>> Andrew
>>
>


   

[jira] [Reopened] (HADOOP-13357) Modify common to use hadoop_subcommand_opts

2016-08-30 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reopened HADOOP-13357:
---

> Modify common to use hadoop_subcommand_opts
> ---
>
> Key: HADOOP-13357
> URL: https://issues.apache.org/jira/browse/HADOOP-13357
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Fix For: HADOOP-13341
>
> Attachments: HADOOP-13357-HADOOP-13341.00.patch
>
>
> Add support for hadoop common commands



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-13356) Add a function to handle command_subcommand_OPTS

2016-08-30 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reopened HADOOP-13356:
---

> Add a function to handle command_subcommand_OPTS
> 
>
> Key: HADOOP-13356
> URL: https://issues.apache.org/jira/browse/HADOOP-13356
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Fix For: HADOOP-13341
>
> Attachments: HADOOP-13356-HADOOP-13341.00.patch
>
>
> Build the framework part for handling HADOOP_command_OPTS



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-08-30 Thread Allen Wittenauer

> On Aug 30, 2016, at 10:17 AM, Zhe Zhang  wrote:
> 
> Thanks Andrew for the great work! It's really exciting to finally see a
> Hadoop 3 RC.
> 
> I noticed CHANGES and RELEASENOTES markdown files which were not in
> previous RCs like 2.7.3. What are good tools to verify them? I tried
> reading them on IntelliJ but format looks odd.


The site tarball has them converted to HTML.  I've also re-run the 
versions that I keep on my gitlab account.  (Since the data comes from JIRA, 
the content should be the same but the format and ordering might be different 
since I use the master branch of Yetus.) 
https://gitlab.com/_a__w_/eco-release-metadata/tree/master/HADOOP/3.0.0-alpha1

It also looks like IntelliJ has a few different markdown plug-ins.  
You'll want one that supports what is generally referred to as MultiMarkdown or 
Github-Flavored Markdown (GFM) since releasedocmaker uses the table extension 
format found in that specification.  (It's an extremely common extension so I'm 
sure one of them supports it.)
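
For reference, the table extension in question renders pipe-delimited tables like the one below. This is a generic GFM sample (the JIRA key and summary are placeholders, not taken from the actual release notes):

```markdown
| JIRA | Summary | Priority |
|:-----|:--------|:---------|
| HADOOP-NNNNN | Placeholder summary line | Major |
```

A markdown renderer without the tables extension shows this as literal pipe characters, which matches the "format looks odd" symptom.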
-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-08-30 Thread Andrew Wang
I'll put my own +1 on it:

* Built from source
* Started pseudo cluster and ran Pi job successfully

On Tue, Aug 30, 2016 at 10:17 AM, Zhe Zhang  wrote:

>
> Thanks Andrew for the great work! It's really exciting to finally see a
> Hadoop 3 RC.
>
> I noticed CHANGES and RELEASENOTES markdown files which were not in
> previous RCs like 2.7.3. What are good tools to verify them? I tried
> reading them on IntelliJ but format looks odd.
>
> I'm still testing the RC:
> - Downloaded and verified checksum
> - Built from source
> - Will start small cluster and test simple programs, focusing on EC
> functionalities
>
> -- Zhe
>
> On Tue, Aug 30, 2016 at 8:51 AM Andrew Wang 
> wrote:
>
>> Hi all,
>>
>> Thanks to the combined work of many, many contributors, here's an RC0 for
>> 3.0.0-alpha1:
>>
>> http://home.apache.org/~wang/3.0.0-alpha1-RC0/
>>
>> alpha1 is the first in a series of planned alpha releases leading up to
>> GA.
>> The objective is to get an artifact out to downstreams for testing and to
>> iterate quickly based on their feedback. So, please keep that in mind when
>> voting; hopefully most issues can be addressed by future alphas rather
>> than
>> future RCs.
>>
>> Sorry for getting this out on a Tuesday, but I'd still like this vote to
>> run the normal 5 days, thus ending Saturday (9/3) at 9AM PDT. I'll extend
>> if we lack the votes.
>>
>> Please try it out and let me know what you think.
>>
>> Best,
>> Andrew
>>
>


[jira] [Created] (HADOOP-13563) hadoop_subcommand_opts should print name not actual content during debug

2016-08-30 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-13563:
-

 Summary: hadoop_subcommand_opts should print name not actual 
content during debug
 Key: HADOOP-13563
 URL: https://issues.apache.org/jira/browse/HADOOP-13563
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Allen Wittenauer






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [REMINDER] How to set fix versions when committing

2016-08-30 Thread Andrew Wang
Hi Junping,

On Tue, Aug 30, 2016 at 4:30 AM, Junping Du  wrote:

> Hi Andrew and all,
>  Thanks for the notice on the change. I'm still concerned that this rule
> change may cause confusion by conflicting with our previous rule - no need
> to set the trunk version when a JIRA lands on a 2.x branch. As we can see,
> there are now 4 cases of version setting for JIRAs landing on trunk and
> branch-2:
> 1. JIRA with fixed version set to 2.x only before 3.0.0-alpha1 cut off.
> 2. JIRA with fixed version set to 2.x only after 3.0.0-alpha1 cut off.
> 3. JIRA with fixed version set to 2.x and 3.0.0-alpha1 after 3.0.0-alpha1
> cut off.
> 4. JIRA with fixed version set to 2.x and 3.0.0-alpha2 after 3.0.0-alpha1
> cut off
>
> Cases 3 and 4 can be easily distinguished, but cases 1 and 2 go against the
> rule change here and are hard to differentiate unless we mark all previous
> JIRAs with a 2.x-only fix version to also include
> 3.0.0-alpha1/3.0.0-alpha2. That's a tremendous effort and I doubt it should
> be our option.
>

I believe (1) was handled by the bulk fix version update I did. It added
the 3.0.0-alpha1 fix version for all JIRAs committed to trunk after the
release of 2.7.0. I filtered out branch-2 only commits based on git log.

(2) was addressed by rebranching branch-3.0.0-alpha1 after the bulk fix
version update. It should be easy to track mistakes via a JIRA query
similar to the one used to do the bulk fix version update. That code is all
posted on my github: https://github.com/umbrant/jira-update
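
The trunk-vs-branch-2 filtering Andrew describes amounts to a set difference over JIRA keys parsed from commit subjects. A hypothetical sketch of that logic (for illustration only; the real tooling is at the github link above, and the commit subjects here are made-up inputs):

```python
import re

# Match Hadoop-family JIRA keys in commit subject lines.
JIRA_KEY = re.compile(r"\b(?:HADOOP|HDFS|YARN|MAPREDUCE)-\d+\b")

def jira_keys(subjects):
    """Collect the set of JIRA keys mentioned in a list of commit subjects."""
    return {m.group(0) for s in subjects for m in JIRA_KEY.finditer(s)}

# Example inputs standing in for `git log --oneline` output on each branch.
trunk = ["HADOOP-13552. RetryInvocationHandler logs all remote exceptions.",
         "YARN-5560. Clean up bad exception catching practices."]
branch2 = ["YARN-5560. Clean up bad exception catching practices."]

# JIRAs that landed on trunk but not branch-2 need the 3.0.0-alpha1 fix version.
trunk_only = jira_keys(trunk) - jira_keys(branch2)
print(sorted(trunk_only))  # ['HADOOP-13552']
```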

Best,
Andrew


Re: [VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-08-30 Thread Zhe Zhang
Thanks Andrew for the great work! It's really exciting to finally see a
Hadoop 3 RC.

I noticed the CHANGES and RELEASENOTES markdown files, which were not in
previous RCs like 2.7.3. What are good tools to verify them? I tried
reading them in IntelliJ but the formatting looks odd.

I'm still testing the RC:
- Downloaded and verified checksum
- Built from source
- Will start small cluster and test simple programs, focusing on EC
functionalities

-- Zhe

On Tue, Aug 30, 2016 at 8:51 AM Andrew Wang 
wrote:

> Hi all,
>
> Thanks to the combined work of many, many contributors, here's an RC0 for
> 3.0.0-alpha1:
>
> http://home.apache.org/~wang/3.0.0-alpha1-RC0/
>
> alpha1 is the first in a series of planned alpha releases leading up to GA.
> The objective is to get an artifact out to downstreams for testing and to
> iterate quickly based on their feedback. So, please keep that in mind when
> voting; hopefully most issues can be addressed by future alphas rather than
> future RCs.
>
> Sorry for getting this out on a Tuesday, but I'd still like this vote to
> run the normal 5 days, thus ending Saturday (9/3) at 9AM PDT. I'll extend
> if we lack the votes.
>
> Please try it out and let me know what you think.
>
> Best,
> Andrew
>


[VOTE] Release Apache Hadoop 3.0.0-alpha1 RC0

2016-08-30 Thread Andrew Wang
Hi all,

Thanks to the combined work of many, many contributors, here's an RC0 for
3.0.0-alpha1:

http://home.apache.org/~wang/3.0.0-alpha1-RC0/

alpha1 is the first in a series of planned alpha releases leading up to GA.
The objective is to get an artifact out to downstreams for testing and to
iterate quickly based on their feedback. So, please keep that in mind when
voting; hopefully most issues can be addressed by future alphas rather than
future RCs.

Sorry for getting this out on a Tuesday, but I'd still like this vote to
run the normal 5 days, thus ending Saturday (9/3) at 9AM PDT. I'll extend
if we lack the votes.

Please try it out and let me know what you think.

Best,
Andrew


ApacheCon Seville CFP closes September 9th

2016-08-30 Thread Rich Bowen
It's traditional. We wait for the last minute to get our talk proposals
in for conferences.

Well, the last minute has arrived. The CFP for ApacheCon Seville closes
on September 9th, which is less than 2 weeks away. It's time to get your
talks in, so that we can make this the best ApacheCon yet.

It's also time to discuss with your developer and user community whether
there's a track of talks that you might want to propose, so that you
have more complete coverage of your project than a talk or two.

For Apache Big Data, the relevant URLs are:
Event details:
http://events.linuxfoundation.org/events/apache-big-data-europe
CFP:
http://events.linuxfoundation.org/events/apache-big-data-europe/program/cfp

For ApacheCon Europe, the relevant URLs are:
Event details: http://events.linuxfoundation.org/events/apachecon-europe
CFP: http://events.linuxfoundation.org/events/apachecon-europe/program/cfp

This year, we'll be reviewing papers "blind" - that is, looking at the
abstracts without knowing who the speaker is. This has been shown to
eliminate the "me and my buddies" nature of many tech conferences,
producing more diversity, and more new speakers. So make sure your
abstracts clearly explain what you'll be talking about.

For further updates about ApacheCon, follow us on Twitter, @ApacheCon,
or drop by our IRC channel, #apachecon on the Freenode IRC network.

-- 
Rich Bowen
WWW: http://apachecon.com/
Twitter: @ApacheCon

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2016-08-30 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/

[Aug 29, 2016 3:55:38 PM] (jlowe) HADOOP-13552. RetryInvocationHandler logs all 
remote exceptions.
[Aug 29, 2016 4:14:55 PM] (jlowe) YARN-5560. Clean up bad exception catching 
practices in TestYarnClient.
[Aug 29, 2016 4:26:46 PM] (aengineer) HADOOP-7363. 
TestRawLocalFileSystemContract is needed. Contributed by
[Aug 29, 2016 5:15:34 PM] (liuml07) HDFS-10807. Doc about upgrading to a 
version of HDFS with snapshots may
[Aug 29, 2016 7:56:09 PM] (jlowe) MAPREDUCE-6768. TestRecovery.testSpeculative 
failed with NPE.
[Aug 29, 2016 8:04:28 PM] (liuml07) HADOOP-13559. Remove close() within 
try-with-resources in
[Aug 29, 2016 8:59:54 PM] (yzhang) HDFS-10625. VolumeScanner to report why a 
block is found bad.
[Aug 29, 2016 9:46:00 PM] (zhz) YARN-5550. TestYarnCLI#testGetContainers should 
format according to
[Aug 29, 2016 10:30:49 PM] (wang) HADOOP-12608. Fix exception message in WASB 
when connecting with
[Aug 29, 2016 10:55:33 PM] (rchiang) YARN-5567. Fix script exit code checking in
[Aug 30, 2016 12:48:08 AM] (xiao) HDFS-4210. Throw helpful exception when DNS 
entry for JournalNode cannot
[Aug 30, 2016 6:37:26 AM] (zhz) HDFS-10814. Add assertion for 
getNumEncryptionZones when no EZ is




-1 overall


The following subsystems voted -1:
asflicense unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed CTEST tests:

   test_test_libhdfs_threaded_hdfs_static
   test_test_libhdfs_zerocopy_hdfs_static

Failed junit tests:

   hadoop.hdfs.TestRollingUpgrade
   hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices
   hadoop.yarn.server.TestContainerManagerSecurity
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization

Timed out junit tests:

   org.apache.hadoop.http.TestHttpServerLifecycle

   cc:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/diff-compile-javac-root.txt [168K]

   checkstyle:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/diff-checkstyle-root.txt [16M]

   pylint:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/diff-patch-pylint.txt [16K]

   shellcheck:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/diff-patch-shellcheck.txt [20K]

   shelldocs:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/diff-patch-shelldocs.txt [16K]

   whitespace:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/whitespace-eol.txt [12M]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/whitespace-tabs.txt [1.3M]

   javadoc:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/diff-javadoc-javadoc-root.txt [2.2M]

   CTEST:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt [24K]

   unit:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [120K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [148K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt [8.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt [12K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt [268K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt [120K]

   asflicense:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/149/artifact/out/patch-asflicense-problems.txt [4.0K]

Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org



-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org

[jira] [Created] (HADOOP-13562) Change hadoop_subcommand_opts to use only uppercase

2016-08-30 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-13562:
-

 Summary: Change hadoop_subcommand_opts to use only uppercase
 Key: HADOOP-13562
 URL: https://issues.apache.org/jira/browse/HADOOP-13562
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: scripts
Reporter: Allen Wittenauer






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Created] (HADOOP-13560) make sure s3 blob >5GB files copies, with metadata

2016-08-30 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13560:
---

 Summary: make sure s3 blob >5GB files copies, with metadata
 Key: HADOOP-13560
 URL: https://issues.apache.org/jira/browse/HADOOP-13560
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.9.0
Reporter: Steve Loughran
Priority: Minor


An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights 
that metadata isn't copied on large copies.

1. Add a test that does a large copy/rename and verifies that the copy really works.
1. Verify that metadata makes it over.

Verifying large-file rename is important on its own, as it is needed for very 
large commit operations by committers that use rename.
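The failure mode the SDK issue describes can be illustrated with a toy model. This is not the real AWS SDK API: plain Python dicts stand in for S3 buckets, and `copy_object` with a 5 GB multipart threshold is an assumption made purely for illustration.

```python
# Toy model of the reported bug: a multipart "copy" rebuilds the object
# from parts and silently drops the source's user metadata unless the
# caller re-supplies it explicitly. (Hypothetical names throughout.)
MULTIPART_THRESHOLD = 5 * 1024**3  # 5 GB, the S3 single-request copy limit

def copy_object(store, src, dst, metadata=None):
    """Copy src -> dst inside a dict-backed 'bucket'.

    Small objects go through a single server-side copy, so metadata
    travels along; large objects go through a simulated multipart copy,
    which keeps only whatever metadata the caller passes in.
    """
    size, meta = store[src]
    if size < MULTIPART_THRESHOLD:
        store[dst] = (size, dict(meta))            # single COPY: metadata preserved
    else:
        store[dst] = (size, dict(metadata or {}))  # multipart: metadata must be re-sent
    return store[dst]

store = {"small": (1024, {"owner": "alice"}),
         "huge": (6 * 1024**3, {"owner": "alice"})}

copy_object(store, "small", "small-copy")   # metadata survives
copy_object(store, "huge", "huge-copy")     # metadata silently lost
copy_object(store, "huge", "huge-copy2", metadata={"owner": "alice"})
```

A test for the real code would do the copy both ways and assert the metadata survives, which is exactly what the toy model's large-object branch would fail without the explicit re-send.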






[jira] [Created] (HADOOP-13561) make sure s3 blob >5GB files copies, with metadata

2016-08-30 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13561:
---

 Summary: make sure s3 blob >5GB files copies, with metadata
 Key: HADOOP-13561
 URL: https://issues.apache.org/jira/browse/HADOOP-13561
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.9.0
Reporter: Steve Loughran
Priority: Minor


An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights 
that metadata isn't copied on large copies.

1. Add a test that does a large copy/rename and verifies that the copy really works.
1. Verify that metadata makes it over.

Verifying large-file rename is important on its own, as it is needed for very 
large commit operations by committers that use rename.






Re: [REMINDER] How to set fix versions when committing

2016-08-30 Thread Junping Du
Hi Andrew and all,
 Thanks for the notice on the change. I'm still concerned that this rule 
change may cause some confusion by conflicting with our previous rule - no need 
to set a trunk version if a JIRA lands on a 2.x branch. As it stands, there are 
4 cases of version settings for JIRAs landing on trunk and branch-2:
1. JIRA with fix version set to 2.x only, before the 3.0.0-alpha1 cut-off.
2. JIRA with fix version set to 2.x only, after the 3.0.0-alpha1 cut-off.
3. JIRA with fix version set to 2.x and 3.0.0-alpha1, after the 3.0.0-alpha1 
cut-off.
4. JIRA with fix version set to 2.x and 3.0.0-alpha2, after the 3.0.0-alpha1 
cut-off.

Cases 3 and 4 are easy to distinguish, but cases 1 and 2 run counter to the 
rule change here and are hard to tell apart, unless we want to go back and mark 
every previous JIRA with a 2.x-only fix version to also include 
3.0.0-alpha1/3.0.0-alpha2. That would be a tremendous effort, and I doubt it 
should be our option.
My preference instead is to update the rule slightly: assume that every JIRA 
marked with a 2.x-only fix version landed in 3.0.0-alpha1, and in the meantime 
monitor all JIRAs that come in after the 3.0.0-alpha1 cut-off to make sure they 
include 3.0.0-alpha2 (or the latest trunk version).
Thoughts?


Thanks,

Junping


From: Andrew Wang 
Sent: Tuesday, August 30, 2016 2:57 AM
To: common-dev@hadoop.apache.org; mapreduce-...@hadoop.apache.org; 
yarn-...@hadoop.apache.org; hdfs-...@hadoop.apache.org
Subject: [REMINDER] How to set fix versions when committing

Hi all,

I finished the bulk fix version update and just rebranched
branch-3.0.0-alpha1 off of trunk. So, a reminder that the procedure for
setting fix versions has changed slightly from before.

Everything is fully detailed here, the example in particular should help
clarify things:

https://hadoop.apache.org/versioning.html

The short of it though is that if a JIRA is going into trunk or
branch-3.0.0-alpha1, it should also have a 3.0.0-alpha1 or 3.0.0-alpha2
fixVersion set.
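The rule above can be sketched as a small helper. This is a hypothetical illustration, not an official Hadoop tool; the branch names match the thread, but `required_fix_versions` and the "2.9.0" placeholder for the next 2.x release are assumptions.

```python
# Hypothetical sketch of the fix-version rule: given the branches a
# commit landed on, return the fixVersions that should be set on the JIRA.
def required_fix_versions(branches_committed_to):
    versions = set()
    if "branch-2" in branches_committed_to:
        versions.add("2.9.0")            # next 2.x release (placeholder value)
    if "branch-3.0.0-alpha1" in branches_committed_to:
        versions.add("3.0.0-alpha1")     # landed before the alpha1 cut
    elif "trunk" in branches_committed_to:
        versions.add("3.0.0-alpha2")     # trunk now maps to the *next* alpha
    return sorted(versions)

print(required_fix_versions(["trunk", "branch-2"]))
print(required_fix_versions(["trunk"]))
```

Under this sketch, a trunk-only commit after the rebranch picks up 3.0.0-alpha2, while anything cherry-picked to branch-3.0.0-alpha1 keeps 3.0.0-alpha1 instead.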

Thanks,
Andrew
