[jira] [Resolved] (HADOOP-9551) Backport common utils introduced with HADOOP-9413 to branch-1-win

2013-08-10 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic resolved HADOOP-9551.


   Resolution: Fixed
Fix Version/s: 1-win

> Backport common utils introduced with HADOOP-9413 to branch-1-win
> -
>
> Key: HADOOP-9551
> URL: https://issues.apache.org/jira/browse/HADOOP-9551
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1-win
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
> Fix For: 1-win
>
> Attachments: HADOOP-9551.branch-1-win.common.2.patch, 
> HADOOP-9551.branch-1-win.common.3.patch, 
> HADOOP-9551.branch-1-win.common.4.patch
>
>
> Branch-1-win has the same set of problems described in HADOOP-9413. With this 
> Jira I plan to prepare a branch-1-win compatible patch.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[VOTE] Release Apache Hadoop 2.0.6-alpha

2013-08-10 Thread Konstantin Boudnik
All,

I have created a release candidate (rc0) for hadoop-2.0.6-alpha that I would
like to release.

This is a stabilization release that includes fixes for a couple of issues
as outlined on the security list.

The RC is available at: http://people.apache.org/~cos/hadoop-2.0.6-alpha-rc0/
The RC tag in svn is here: 
http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.0.6-alpha-rc0

The maven artifacts are available via repository.apache.org.

Please try the release bits and vote; the vote will run for the usual 7 days.

Thanks for voting,
  Cos



signature.asc
Description: Digital signature


Re: Upgrade to protobuf 2.5.0 for the 2.1.0 release, HADOOP-9845

2013-08-10 Thread Alejandro Abdelnur
thanks giri, how do we set 2.4 or 2.5? what is the path to both so we can use
an env var to set it in the jobs?

thx

Alejandro
(phone typing)

On Aug 9, 2013, at 23:10, Giridharan Kesavan  wrote:

> build slaves hadoop1-hadoop9 now have libprotoc 2.5.0
> 
> 
> 
> -Giri
> 
> 
>> On Fri, Aug 9, 2013 at 10:56 PM, Giridharan Kesavan <gkesa...@hortonworks.com> wrote:
> 
>> Alejandro,
>> 
>> I'm upgrading protobuf on slaves hadoop1-hadoop9.
>> 
>> -Giri
>> 
>> 
>> On Fri, Aug 9, 2013 at 1:15 PM, Alejandro Abdelnur wrote:
>> 
>>> pinging again, I need help from somebody with sudo access to the hadoop
>>> jenkins boxes to do this, or to get sudo access for a couple of hours to
>>> set it up myself.
>>> 
>>> Please!!!
>>> 
>>> thx
>>> 
>>> 
>>> On Thu, Aug 8, 2013 at 2:29 PM, Alejandro Abdelnur wrote:
>>> 
 To move forward with this we need protoc 2.5.0 in the apache hadoop
 jenkins boxes.
 
 Who can help with this? I assume somebody at Y!, right?
 
 Thx
 
 
 On Thu, Aug 8, 2013 at 2:24 PM, Elliott Clark wrote:
 
> In HBase land we've pretty well discovered that we'll need to have the
> same version of protobuf that the HDFS/Yarn/MR servers are running.
> That is to say there are issues with ever having 2.4.x and 2.5.x on
> the same class path.
> 
> Upgrading to 2.5.x would be great, as it brings some new classes we
> could use.  With that said HBase is getting pretty close to a rather
> large release (0.96.0 aka The Singularity) so getting this in sooner
> rather than later would be great.  If we could get this into 2.1.0 it
> would be great as that would allow us to have a pretty easy story to
> users with regards to protobuf version.
> 
>> On Thu, Aug 8, 2013 at 8:18 AM, Kihwal Lee wrote:
>> Sorry to hijack the thread but, I also wanted to mention Avro. See
> HADOOP-9672.
>> The version we are using has memory leak and inefficiency issues.
>>> We've
> seen users running into it.
>> 
>> Kihwal
>> 
>> 
>> 
>> From: Tsuyoshi OZAWA 
>> To: "common-dev@hadoop.apache.org" 
>> Cc: "hdfs-...@hadoop.apache.org" ; "
> yarn-...@hadoop.apache.org" ; "
> mapreduce-...@hadoop.apache.org" 
>> Sent: Thursday, August 8, 2013 1:59 AM
>> Subject: Re: Upgrade to protobuf 2.5.0 for the 2.1.0 release,
> HADOOP-9845
>> 
>> 
>> Hi,
>> 
>> About Hadoop, Harsh is dealing with this problem in HADOOP-9346.
>> For more detail, please see the JIRA ticket:
>> https://issues.apache.org/jira/browse/HADOOP-9346
>> 
>> - Tsuyoshi
>> 
>> On Thu, Aug 8, 2013 at 1:49 AM, Alejandro Abdelnur <t...@cloudera.com> wrote:
>>> I'd like to upgrade to protobuf 2.5.0 for the 2.1.0 release.
>>> 
>>> As mentioned in HADOOP-9845, Protobuf 2.5 has significant benefits
>>> to
>>> justify the upgrade.
>>> 
>>> Doing the upgrade now, with the first beta, will make things easier
>>> for
>>> downstream projects (like HBase) using protobuf and adopting Hadoop
>>> 2.
> If
>>> we do the upgrade later, downstream projects will have to support 2
>>> different versions and they may get into nasty waters due to classpath
> issues.
>>> 
>>> I've locally tested the patch in a pseudo deployment of 2.1.0-beta
> branch
>>> and it works fine (something is broken in trunk in the RPC layer
> YARN-885).
>>> 
>>> Now, to do this it will require a few things:
>>> 
>>> * Make sure protobuf 2.5.0 is available in the jenkins box
>>> * A follow up email to dev@ aliases indicating developers should
> install
>>> locally protobuf 2.5.0
>>> 
>>> Thanks.
>>> 
>>> --
>>> Alejandro
 
 
 
 --
 Alejandro
>>> 
>>> 
>>> 
>>> --
>>> Alejandro
>> 
>> 
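
A minimal sketch of the classpath point in the thread above, illustrative only (the ParserCheck class below is hypothetical and not part of Hadoop or HBase): sources generated by protoc 2.5.0 reference the com.google.protobuf.Parser type, which does not exist in protobuf-java 2.4.x, so one classpath cannot serve both generations of generated code. The snippet compiles and runs against protobuf-java 2.5.0, and fails against 2.4.1 with the same "cannot find symbol: class Parser" error seen in the build log below.

{code}
// Illustrative sketch only (hypothetical ParserCheck class, not Hadoop/HBase code).
// com.google.protobuf.Parser was introduced in protobuf 2.5.0; sources generated by
// protoc 2.5.0 reference it, so compiling or running them against a 2.4.x
// protobuf-java jar fails with "cannot find symbol: class Parser".
import com.google.protobuf.DescriptorProtos;
import com.google.protobuf.Parser;

public class ParserCheck {
  public static void main(String[] args) {
    // PARSER is a static field that protoc 2.5.0 adds to every generated message class.
    Parser<DescriptorProtos.FileDescriptorProto> parser =
        DescriptorProtos.FileDescriptorProto.PARSER;
    System.out.println("Resolved " + parser.getClass().getName()
        + " from protobuf-java 2.5.0");
  }
}
{code}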


Build failed in Jenkins: Hadoop-Common-trunk #856

2013-08-10 Thread Apache Jenkins Server
See 

Changes:

[cnauroth] HADOOP-9857. Tests block and sometimes timeout on Windows due to 
invalid entropy source. Contributed by Chris Nauroth.

[brandonli] update CHANGES.txt for HADOOP-8814

[sandy] YARN-1046. Disable mem monitoring by default in MiniYARNCluster 
(Karthik Kambatla via Sandy Ryza)

[sandy] YARN-656. In scheduler UI, including reserved memory in Memory Total 
can make it exceed cluster capacity. (Sandy Ryza)

[sandy] Reverting 1512475 (labeled as YARN-656). Accidentally committed two 
patches together

[sandy] YARN-656. In scheduler UI, including reserved memory in Memory Total 
can make it exceed cluster capacity. (Sandy Ryza)

[daryn] HADOOP-9757. Har metadata cache can grow without limit (Cristina Abad 
via daryn)

[kihwal] HDFS-4993. Fsck can fail if a file is renamed or deleted. Contributed 
by Robert Parker.

[cmccabe] add HADOOP-9675 to CHANGES.txt

[cmccabe] HADOOP-9675. use svn:eol-style native for html to prevent line ending 
issues (Colin Patrick McCabe)

[daryn] HADOOP-9789. Support server advertised kerberos principals (daryn)

[kihwal] HADOOP-9672. Upgrade Avro dependency to 1.7.4. Contributed by Sandy 
Ryza.

--
[...truncated 18671 lines...]
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] :[1217,30] cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] :[1074,37] cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] :[1085,30] cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] :[1993,37] cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] :[2004,30] cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] :[2912,37] cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] :[2923,30] cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] :[726,37] cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] :[737,30] cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] :[292,37] cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] :[303,30] cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] :[1450,37] cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 


[jira] [Created] (HADOOP-9861) Invert ReflectionUtils' stack trace

2013-08-10 Thread Harsh J (JIRA)
Harsh J created HADOOP-9861:
---

 Summary: Invert ReflectionUtils' stack trace
 Key: HADOOP-9861
 URL: https://issues.apache.org/jira/browse/HADOOP-9861
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Affects Versions: 2.0.5-alpha
Reporter: Harsh J


Often an MR task (as an example) may fail at the configure stage due to a 
misconfiguration or whatever, and because MR pulls only a limited number of bytes 
of the diagnostic error data, the only thing a user gets is the top part of the 
stack trace:

{code}
java.lang.RuntimeException: Error in configuring object
at 
org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
at 
org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:432)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
{code}

This is absolutely useless to a user, who then goes ahead and blames the 
framework for having an issue, rather than thinking (non-intuitively) to go look at 
the whole task log for the full trace, especially the last part.

Hundreds of times it's been a mere class that's missing, etc., but there's just too 
much pain involved here to troubleshoot.

It would be much better if we inverted the trace. For example, here's what 
Hive could return if we did so, for a random problem I pulled from the web:

{code}
java.lang.RuntimeException: Error in configuring object
Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector.toString(StructObjectInspector.java:64)
at java.lang.String.valueOf(String.java:2826)
at java.lang.StringBuilder.append(StringBuilder.java:115)
at 
org.apache.hadoop.hive.ql.exec.UnionOperator.initializeOp(UnionOperator.java:110)
at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:375)
at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:451)
at 
org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:407)
at 
org.apache.hadoop.hive.ql.exec.TableScanOperator.initializeOp(TableScanOperator.java:186)
at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:375)
at 
org.apache.hadoop.hive.ql.exec.MapOperator.initializeOp(MapOperator.java:563)
at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:375)
at org.apache.hadoop.hive.ql.exec.ExecMapper.configure(ExecMapper.java:100)
... 22 more
{code}

This way the user can at least be sure which part is really failing, and not get 
lost trying to work their way through ReflectionUtils and upwards/downwards.
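
For illustration, a minimal sketch (a hypothetical standalone InvertedTrace helper, not a proposed patch for ReflectionUtils) of one way to do the inversion: walk the getCause() chain and print the innermost cause first:

{code}
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal sketch only (hypothetical InvertedTrace helper, not a proposed patch):
// walk the getCause() chain and print the innermost (root) cause first.
public class InvertedTrace {

  static String invert(Throwable t) {
    Deque<Throwable> chain = new ArrayDeque<Throwable>();
    for (Throwable cur = t; cur != null; cur = cur.getCause()) {
      chain.push(cur); // after the loop the root cause sits at the head of the deque
    }
    StringBuilder sb = new StringBuilder();
    for (Throwable cur : chain) { // iterates head (root cause) to tail (outermost wrapper)
      sb.append(cur).append('\n');
      for (StackTraceElement frame : cur.getStackTrace()) {
        sb.append("\tat ").append(frame).append('\n');
      }
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    try {
      try {
        throw new NullPointerException("root cause");
      } catch (NullPointerException npe) {
        throw new RuntimeException("Error in configuring object", npe);
      }
    } catch (RuntimeException e) {
      System.err.print(invert(e)); // root cause is printed first, wrappers follow
    }
  }
}
{code}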

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira