Re: Aggregation service start

2014-02-16 Thread Zhijie Shen
"But when the job complete, then I click History of Tracking UI
http://172.11.12.6:8088/cluster again, it raise following error:

Firefox can't establish a connection to the server at master:19888."

This is a problem other than log aggregation. After a MapReduce job
completes, the tracking URL points to the MapReduce JobHistory Server. It is
very likely that the history server is not running on your machine, which is
why you did not get a response from it.
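
If so, a minimal sketch of bringing it up, assuming the hadoop-2.2.0 install
path and scripts that appear elsewhere in this thread:

  # paths are assumptions taken from this thread
  cd /home/software/hadoop-2.2.0/sbin
  ./mr-jobhistory-daemon.sh start historyserver   # starts the MapReduce JobHistory Server
  jps | grep JobHistoryServer                     # confirm the daemon is running

The history web UI is served on the port set by mapreduce.jobhistory.webapp.address
in mapred-site.xml, which defaults to 19888, the port the browser is failing to reach above.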

- Zhijie


On Sun, Feb 16, 2014 at 10:03 PM, EdwardKing  wrote:

> Thanks for your help. I set up yarn-site.xml as you told me, as follows:
>
> [hadoop@master hadoop]$ cat yarn-site.xml
> <?xml version="1.0"?>
> <configuration>
>
> <property>
>   <name>yarn.resourcemanager.resource-tracker.address</name>
>   <value>master:8990</value>
>   <description>host is the hostname of the resource manager and port is
> the port on which the NodeManagers contact the Resource Manager.</description>
> </property>
> <property>
>   <name>yarn.resourcemanager.scheduler.address</name>
>   <value>master:8991</value>
>   <description>host is the hostname of the resourcemanager and port is the
> port on which the Applications in the cluster talk to the Resource Manager.</description>
> </property>
> <property>
>   <name>yarn.resourcemanager.scheduler.class</name>
>   <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
>   <description>In case you do not want to use the default scheduler</description>
> </property>
> <property>
>   <name>yarn.resourcemanager.address</name>
>   <value>master:8993</value>
>   <description>the host is the hostname of the ResourceManager and port is
> the port on which the clients can talk to the Resource Manager</description>
> </property>
> <property>
>   <name>yarn.nodemanager.local-dirs</name>
>   <value>/home/software/tmp/node</value>
>   <description>the local directories used by the nodemanager</description>
> </property>
> <property>
>   <name>yarn.nodemanager.address</name>
>   <value>master:8994</value>
>   <description>the nodemanagers bind to this port</description>
> </property>
> <property>
>   <name>yarn.nodemanager.resource.memory-mb</name>
>   <value>5120</value>
>   <description>the amount of memory on the NodeManager in MB</description>
> </property>
> <property>
>   <name>yarn.nodemanager.remote-app-log-dir</name>
>   <value>/home/software/tmp/app-logs</value>
>   <description>directory on hdfs where the application logs are moved to</description>
> </property>
> <property>
>   <name>yarn.nodemanager.log-dirs</name>
>   <value>/home/software/tmp/node</value>
>   <description>the directories used by the Nodemanager as log directories</description>
> </property>
> <property>
>   <name>yarn.nodemanager.aux-services</name>
>   <value>mapreduce_shuffle</value>
>   <description>shuffle service that needs to be set for Map Reduce to run</description>
> </property>
>
> <property>
>   <name>yarn.log-aggregation-enable</name>
>   <value>true</value>
> </property>
>
> </configuration>
>
> Then I submit a job. While the job is running, I click History in the
> Tracking UI at http://172.11.12.6:8088/cluster
> and I can view all the log information. It works fine.
> But when the job completes and I click History in the Tracking UI at
> http://172.11.12.6:8088/cluster again, it raises the following error:
>
> Firefox can't establish a connection to the server at master:19888.
>
> Am I missing some configuration in my xml file?  How can I
> correct this?  Thanks in advance.
>
>
>
>
>
>
>
>
> - Original Message -
> From: Zhijie Shen
> To: user@hadoop.apache.org
> Sent: Monday, February 17, 2014 11:11 AM
> Subject: Re: Aggregation service start
>
>
> Please set
> <property>
>   <name>yarn.log-aggregation-enable</name>
>   <value>true</value>
> </property>
> in yarn-site.xml to enable log aggregation.
> -Zhijie
>
> On Feb 16, 2014 6:15 PM, "EdwardKing"  wrote:
>
> I am running hadoop 2.2.0. I want to view the Tracking UI, so I visit
> http://172.11.12.6:8088/cluster,
> then I click the History link of a completed job, such as the following:
>
> MapReduce Job job_1392601388579_0001
> Attempt Number  Start Time                    Node         Logs
> 1               Sun Feb 16 17:44:57 PST 2014  master:8042  logs
>
> Then I click logs, but it fails with:
> Aggregation is not enabled. Try the nodemanager at master:8994
>
> I guess some service has not been started. Which command do I need to execute
> under /home/software/hadoop-2.2.0/sbin ?  Thanks.
> [hadoop@node1 sbin]$ ls
> distribute-exclude.sh    start-all.cmd        stop-all.sh
> hadoop-daemon.sh         start-all.sh         stop-balancer.sh
> hadoop-daemons.sh        start-balancer.sh    stop-dfs.cmd
> hdfs-config.cmd          start-dfs.cmd        stop-dfs.sh
> hdfs-config.sh           start-dfs.sh         stop-secure-dns.sh
> httpfs.sh                start-secure-dns.sh  stop-yarn.cmd
> mr-jobhistory-daemon.sh  start-yarn.cmd       stop-yarn.sh
> refresh-namenodes.sh     start-yarn.sh        yarn-daemon.sh
> slaves.sh                stop-all.cmd         yarn-daemons.sh
>
>
>
>
>
>
>
>

Re: Aggregation service start

2014-02-16 Thread EdwardKing
Thanks for your help. I set up yarn-site.xml as you told me, as follows:

[hadoop@master hadoop]$ cat yarn-site.xml
<?xml version="1.0"?>
<configuration>

<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>master:8990</value>
  <description>host is the hostname of the resource manager and port is
the port on which the NodeManagers contact the Resource Manager.</description>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>master:8991</value>
  <description>host is the hostname of the resourcemanager and port is the
port on which the Applications in the cluster talk to the Resource Manager.</description>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
  <description>In case you do not want to use the default scheduler</description>
</property>
<property>
  <name>yarn.resourcemanager.address</name>
  <value>master:8993</value>
  <description>the host is the hostname of the ResourceManager and port is
the port on which the clients can talk to the Resource Manager</description>
</property>
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/home/software/tmp/node</value>
  <description>the local directories used by the nodemanager</description>
</property>
<property>
  <name>yarn.nodemanager.address</name>
  <value>master:8994</value>
  <description>the nodemanagers bind to this port</description>
</property>
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>5120</value>
  <description>the amount of memory on the NodeManager in MB</description>
</property>
<property>
  <name>yarn.nodemanager.remote-app-log-dir</name>
  <value>/home/software/tmp/app-logs</value>
  <description>directory on hdfs where the application logs are moved to</description>
</property>
<property>
  <name>yarn.nodemanager.log-dirs</name>
  <value>/home/software/tmp/node</value>
  <description>the directories used by the Nodemanager as log directories</description>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
  <description>shuffle service that needs to be set for Map Reduce to run</description>
</property>

<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>

</configuration>

Then I submit a job. While the job is running, I click History in the Tracking UI at
http://172.11.12.6:8088/cluster
and I can view all the log information. It works fine.
But when the job completes and I click History in the Tracking UI at
http://172.11.12.6:8088/cluster again, it raises the following error:

Firefox can't establish a connection to the server at master:19888.

Am I missing some configuration in my xml file?  How can I correct this?
Thanks in advance.








- Original Message - 
From: Zhijie Shen 
To: user@hadoop.apache.org 
Sent: Monday, February 17, 2014 11:11 AM
Subject: Re: Aggregation service start


Please set
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
in yarn-site.xml to enable log aggregation.
-Zhijie

On Feb 16, 2014 6:15 PM, "EdwardKing"  wrote:

I am running hadoop 2.2.0. I want to view the Tracking UI, so I visit
http://172.11.12.6:8088/cluster,
then I click the History link of a completed job, such as the following:

MapReduce Job job_1392601388579_0001
Attempt Number  Start Time                    Node         Logs
1               Sun Feb 16 17:44:57 PST 2014  master:8042  logs

Then I click logs, but it fails with:
Aggregation is not enabled. Try the nodemanager at master:8994

I guess some service has not been started. Which command do I need to execute
under /home/software/hadoop-2.2.0/sbin ?  Thanks.
[hadoop@node1 sbin]$ ls
distribute-exclude.sh    start-all.cmd        stop-all.sh
hadoop-daemon.sh         start-all.sh         stop-balancer.sh
hadoop-daemons.sh        start-balancer.sh    stop-dfs.cmd
hdfs-config.cmd          start-dfs.cmd        stop-dfs.sh
hdfs-config.sh           start-dfs.sh         stop-secure-dns.sh
httpfs.sh                start-secure-dns.sh  stop-yarn.cmd
mr-jobhistory-daemon.sh  start-yarn.cmd       stop-yarn.sh
refresh-namenodes.sh     start-yarn.sh        yarn-daemon.sh
slaves.sh                stop-all.cmd         yarn-daemons.sh








Re: Yarn - specify hosts in ContainerRequest

2014-02-16 Thread Krishna Kishore Bonagiri
Hi Anand,

  Which version of Hadoop are you using? This works from 2.2.0 onwards.

Try it like this, and it should work. I am using this feature on 2.2.0:

    // ContainerRequest(capability, nodes, racks, priority, relaxLocality)
    String[] hosts = new String[1];
    hosts[0] = node_name;
    ContainerRequest request = new ContainerRequest(capability, hosts,
        null, p, false);


Thanks,
Kishore


On Fri, Feb 14, 2014 at 11:43 PM, Anand Mundada wrote:

> Hi All,
>
> How can I launch a container on a particular host?
> I tried specifying the host name in
> *new ContainerRequest()*
>
> Thanks,
> Anand
>


RE: Bigdata - MapR, Cloudera, Hortonworks, OracleBigdata appliances

2014-02-16 Thread Nirmal Kumar
Hi All,

The information and comparison in the inadvertently shared link
(https://docs.google.com/spreadsheet/ccc?key=0AjfuzftCi_w7dFNVQXBnSHUtc25NV1UxSHppN2dvckE#gid=0)
has NOT been written or endorsed by me or anybody at Impetus, my employer. Neither I
nor my employer participated in creating this content, nor do we vouch for the accuracy
of its contents. The report comes from a publicly available document on the internet by a
company called flux7.com; please contact flux7.com with any concerns about the report's
contents. I apologize if the validity of the shared information has caused concern to
anybody.

Please also note that all information and opinions I share on this user list are
my own and do not represent my employer's opinion or thought process in
any way.

Thanks,
-Nirmal

From: Nirmal Kumar
Sent: Saturday, February 15, 2014 1:44 PM
To: 'user@hadoop.apache.org'
Cc: 'vjshal...@gmail.com'
Subject: RE: Bigdata - MapR, Cloudera, Hortonworks, OracleBigdata appliances

All,

Apologies for any inconvenience caused.

Kindly *ignore* the information I provided below.

From: Nirmal Kumar
Sent: Friday, February 14, 2014 6:01 PM
To: Hadoop User Mailer List
Cc: 'vjshal...@gmail.com'
Subject: RE: Bigdata - MapR, Cloudera, Hortonworks, OracleBigdata appliances

Detailed comparison of various Hadoop distros:

https://docs.google.com/spreadsheet/ccc?key=0AjfuzftCi_w7dFNVQXBnSHUtc25NV1UxSHppN2dvckE#gid=0

-Nirmal

From: VJ Shalish [mailto:vjshal...@gmail.com]
Sent: Friday, February 14, 2014 5:20 PM
To: Hadoop User Mailer List
Subject: Bigdata - MapR, Cloudera, Hortonworks, OracleBigdata appliances

Can anyone send me valid comparison points or links covering big data offerings:
MapR, Cloudera, Hortonworks, Oracle Big Data Appliance, etc.?
Thanks
Shalish.










Re: Aggregation service start

2014-02-16 Thread Zhijie Shen
Please set

<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>

in yarn-site.xml to enable log aggregation.
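
A minimal sketch of applying the change and then reading the aggregated logs,
assuming the install path and scripts shown elsewhere in this thread (the
application id below corresponds to the job id you posted):

  # restart YARN so the NodeManagers pick up the yarn-site.xml change
  cd /home/software/hadoop-2.2.0
  sbin/stop-yarn.sh
  sbin/start-yarn.sh
  # after the next job finishes, its aggregated logs can also be fetched from the command line
  bin/yarn logs -applicationId application_1392601388579_0001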

-Zhijie
 On Feb 16, 2014 6:15 PM, "EdwardKing"  wrote:

> I am running hadoop 2.2.0. I want to view the Tracking UI, so I visit
> http://172.11.12.6:8088/cluster,
> then I click the History link of a completed job, such as the following:
>
> MapReduce Job job_1392601388579_0001
> Attempt Number  Start Time                    Node         Logs
> 1               Sun Feb 16 17:44:57 PST 2014  master:8042  logs
>
> Then I click logs, but it fails with:
> Aggregation is not enabled. Try the nodemanager at master:8994
>
> I guess some service has not been started. Which command do I need to execute
> under /home/software/hadoop-2.2.0/sbin ?  Thanks.
> [hadoop@node1 sbin]$ ls
> distribute-exclude.sh    start-all.cmd        stop-all.sh
> hadoop-daemon.sh         start-all.sh         stop-balancer.sh
> hadoop-daemons.sh        start-balancer.sh    stop-dfs.cmd
> hdfs-config.cmd          start-dfs.cmd        stop-dfs.sh
> hdfs-config.sh           start-dfs.sh         stop-secure-dns.sh
> httpfs.sh                start-secure-dns.sh  stop-yarn.cmd
> mr-jobhistory-daemon.sh  start-yarn.cmd       stop-yarn.sh
> refresh-namenodes.sh     start-yarn.sh        yarn-daemon.sh
> slaves.sh                stop-all.cmd         yarn-daemons.sh
>
>
>
>
>
>
>
>
>



Aggregation service start

2014-02-16 Thread EdwardKing
I am running hadoop 2.2.0. I want to view the Tracking UI, so I visit
http://172.11.12.6:8088/cluster,
then I click the History link of a completed job, such as the following:

MapReduce Job job_1392601388579_0001
Attempt Number  Start Time                    Node         Logs
1               Sun Feb 16 17:44:57 PST 2014  master:8042  logs

Then I click logs, but it fails with:
Aggregation is not enabled. Try the nodemanager at master:8994

I guess some service has not been started. Which command do I need to execute
under /home/software/hadoop-2.2.0/sbin ?  Thanks.
[hadoop@node1 sbin]$ ls
distribute-exclude.sh    start-all.cmd        stop-all.sh
hadoop-daemon.sh         start-all.sh         stop-balancer.sh
hadoop-daemons.sh        start-balancer.sh    stop-dfs.cmd
hdfs-config.cmd          start-dfs.cmd        stop-dfs.sh
hdfs-config.sh           start-dfs.sh         stop-secure-dns.sh
httpfs.sh                start-secure-dns.sh  stop-yarn.cmd
mr-jobhistory-daemon.sh  start-yarn.cmd       stop-yarn.sh
refresh-namenodes.sh     start-yarn.sh        yarn-daemon.sh
slaves.sh                stop-all.cmd         yarn-daemons.sh









Re: Start hadoop service

2014-02-16 Thread EdwardKing
My OS is CentOS (kernel 2.6.18); the master IP is 172.11.12.6 and the slave IP is 172.11.12.7.
First I start the hadoop services on the master, as follows:
[hadoop@master ~]$ cd /home/software/hadoop-2.2.0/sbin
[hadoop@master ~]$./start-dfs.sh
14/02/16 17:24:16 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Starting namenodes on [master]
master: starting namenode, logging to 
/home/software/hadoop-2.2.0/logs/hadoop-hadoop-namenode-master.out
master: starting datanode, logging to 
/home/software/hadoop-2.2.0/logs/hadoop-hadoop-datanode-master.out
Starting secondary namenodes [master]
master: starting secondarynamenode, logging to 
/home/software/hadoop-2.2.0/logs/hadoop-hadoop-secondarynamenode-master.out
14/02/16 17:24:43 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable

[hadoop@master ~]$./start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to 
/home/software/hadoop-2.2.0/logs/yarn-hadoop-resourcemanager-master.out
master: starting nodemanager, logging to 
/home/software/hadoop-2.2.0/logs/yarn-hadoop-nodemanager-master.out

[hadoop@master ~]$./mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to 
/home/software/hadoop-2.2.0/logs/mapred-hadoop-historyserver-master.out

[hadoop@master ~]$jps
4439 DataNode
5400 Jps
4884 NodeManager
4331 NameNode
5342 JobHistoryServer
4595 SecondaryNameNode

Then I open Firefox to view the hadoop web UIs, as follows:
http://172.11.12.6:9002/dfshealth.jsp
NameNode 'master:9000' (active)
..
Live Nodes  : 1 (Decommissioned: 0)


http://172.11.12.6:8088/cluster
Firefox can't establish a connection to the server at 172.11.12.6:8088.

http://172.11.12.6:9002/dfsnodelist.jsp?whatNodes=LIVE
Live Datanodes : 1
master 172.11.12.6:50010 ..
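
The 8088 page is served by the ResourceManager web UI, so one quick check
(standard Linux tools; the log path is the one printed by start-yarn.sh above) is
whether that daemon is actually up and listening:

  jps | grep ResourceManager                 # should list a ResourceManager process
  netstat -tln | grep 8088                   # is anything listening on the cluster UI port?
  tail -n 50 /home/software/hadoop-2.2.0/logs/yarn-hadoop-resourcemanager-master.out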

If I also start the hadoop services on the slave, then the page in Firefox shows
them correctly, as follows:
http://172.11.12.6:9002/dfsnodelist.jsp?whatNodes=LIVE
Live Datanodes : 2
master 172.11.12.6:50010 ..
slave  172.11.12.7:

Why do I need to start the hadoop services again on the slave? Which configuration
file is wrong? Thanks
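
One thing worth checking: start-dfs.sh and start-yarn.sh only ssh into the hosts
listed in the slaves file, so the slave has to appear there and passwordless ssh
from the master must work. A minimal sketch (the install path is an assumption
taken from this thread):

  # on the master
  cat /home/software/hadoop-2.2.0/etc/hadoop/slaves   # should list the slave hostname, e.g. "slave"
  ssh slave jps                                       # passwordless ssh must work for the start scripts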



- Original Message - 
From: Anand Mundada 
To: user@hadoop.apache.org 
Cc:  
Sent: Monday, February 17, 2014 9:12 AM
Subject: Re: Start hadoop service 


No. 
You don't need to.
Master will start all required daemons on slave.


Check all daemons using jps command.

Sent from my iPhone

On Feb 16, 2014, at 7:03 PM, EdwardKing  wrote:


I have installed hadoop-2.2.0 on two machines, one the master and the other the
slave. Then I start the hadoop services on the master machine.
[hadoop@master ~]$./start-dfs.sh
[hadoop@master ~]$./start-yarn.sh
[hadoop@master ~]$./mr-jobhistory-daemon.sh start historyserver

My question is whether I need to start the hadoop services on the slave machine
again. Thanks.
[hadoop@slave ~]$./start-dfs.sh
[hadoop@slave ~]$./start-yarn.sh
[hadoop@slave ~]$./mr-jobhistory-daemon.sh start historyserver








Re: Start hadoop service

2014-02-16 Thread Anand Mundada
No. 
You don't need to.
Master will start all required daemons on slave.

Check all daemons using jps command.
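
A minimal sketch of that check, assuming passwordless ssh and the hostnames used
in this thread:

  jps             # on the master: expect NameNode, SecondaryNameNode, ResourceManager, JobHistoryServer, ...
  ssh slave jps   # on the slave: expect DataNode and NodeManager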

Sent from my iPhone

> On Feb 16, 2014, at 7:03 PM, EdwardKing  wrote:
> 
> I have installed hadoop-2.2.0 on two machines, one the master and the other the
> slave. Then I start the hadoop services on the master machine.
> [hadoop@master ~]$./start-dfs.sh
> [hadoop@master ~]$./start-yarn.sh
> [hadoop@master ~]$./mr-jobhistory-daemon.sh start historyserver
>  
> My question is whether I need to start the hadoop services on the slave machine
> again. Thanks.
> [hadoop@slave ~]$./start-dfs.sh
> [hadoop@slave ~]$./start-yarn.sh
> [hadoop@slave ~]$./mr-jobhistory-daemon.sh start historyserver
>  
>  
>  
>  
>  
>  


Start hadoop service

2014-02-16 Thread EdwardKing
I have installed hadoop-2.2.0 on two machines, one the master and the other the
slave. Then I start the hadoop services on the master machine.
[hadoop@master ~]$./start-dfs.sh
[hadoop@master ~]$./start-yarn.sh
[hadoop@master ~]$./mr-jobhistory-daemon.sh start historyserver

My question is whether I need to start the hadoop services on the slave machine
again. Thanks.
[hadoop@slave ~]$./start-dfs.sh
[hadoop@slave ~]$./start-yarn.sh
[hadoop@slave ~]$./mr-jobhistory-daemon.sh start historyserver







Re: How to submit the patch MAPREDUCE-4490.patch which works for branch-1.2, not trunk?

2014-02-16 Thread sam liu
Hi Arpit,

Thanks for your guidance!  As a new contributor, I still have the following two
questions that need your help:

1) So I guess I should run 'test-patch.sh' in my local environment against
branch-1.2, not on the Apache Hadoop test server, right?
2) On branch-1.2, I found that 'test-patch.sh' is at
./src/test/bin/test-patch.sh, not ./dev-support/test-patch.sh. My command
was 'sh ./src/test/bin/test-patch.sh MAPREDUCE-4490.patch', but it failed
with the message 'ERROR: usage ./src/test/bin/test-patch.sh HUDSON [args] |
DEVELOPER [args]'. What is the correct way to run 'test-patch.sh' manually?
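
For the patch file itself, a minimal sketch of regenerating it against
branch-1.2 (the plain p0-style diff format is a common convention for branch-1
patches and an assumption here, not something stated in this thread):

  # assumes the fix is present as uncommitted changes in a branch-1.2 checkout
  git checkout branch-1.2
  git diff --no-prefix > MAPREDUCE-4490.patch   # p0-style diff, applies with "patch -p0"
  # attach the patch on the JIRA and paste the local test-patch.sh output as a comment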




2014-02-15 5:25 GMT+08:00 Arpit Agarwal :

> Hi Sam,
>
> Hadoop Jenkins does not accept patches for 1.x.
>
> You can manually run 'test-patch.sh' to verify there are no regressions
> introduced by your patch and copy-paste the results into a Jira comment.
>
>
> On Thu, Feb 13, 2014 at 10:50 PM, sam liu  wrote:
>
>> Hi Experts,
>>
>> I have been working on the JIRA
>> https://issues.apache.org/jira/browse/MAPREDUCE-4490 and have attached
>> MAPREDUCE-4490.patch, which fixes it. I would like to contribute
>> my patch to the community, but I have encountered some issues.
>>
>> MAPREDUCE-4490 is an issue on the Hadoop-1.x versions, and my patch is based on
>> the latest code of origin/branch-1.2. However, the current trunk is based on YARN
>> and no longer has this issue. So my patch cannot be applied to the
>> current trunk code, and there is actually no need to generate a similar patch
>> on trunk at all.
>>
>> How can I submit MAPREDUCE-4490.patch only to origin/branch-1.2,
>> not trunk? Is that allowed by Apache Hadoop?
>>
>> Thanks!
>>
>
>