Re: confirm unsubscribe from common-dev@hadoop.apache.org

2015-02-08 Thread Tao Xie
unsubscribe

2013-08-14 9:15 GMT+08:00 :

> Quoting common-dev-h...@hadoop.apache.org:
>
>  Hi! This is the ezmlm program. I'm managing the
>> common-dev@hadoop.apache.org mailing list.
>>
>> I'm working for my owner, who can be reached
>> at common-dev-ow...@hadoop.apache.org.
>>
>> To confirm that you would like
>>
>>sya...@stevendyates.com
>>
>> removed from the common-dev mailing list, please send a short reply
>> to this address:
>>
>>common-dev-uc.1376436889.aopejedillbfiiahoneg-syates=st
>> evendyates@hadoop.apache.org
>>
>> Usually, this happens when you just hit the "reply" button.
>> If this does not work, simply copy the address and paste it into
>> the "To:" field of a new message.
>>
>> or click here:
>> mailto:common-dev-uc.1376436889.aopejedillbfiiahoneg-syates=st
>> evendyates@hadoop.apache.org
>>
>> I haven't checked whether your address is currently on the mailing list.
>> To see what address you used to subscribe, look at the messages you are
>> receiving from the mailing list. Each message has your address hidden
>> inside its return path; for example, m...@xdd.ff.com receives messages
>> with return path: -mary=
>> xdd.ff@hadoop.apache.org.
>>
>> Some mail programs are broken and cannot handle long addresses. If you
>> cannot reply to this request, instead send a message to
>>  and put the entire address listed
>> above
>> into the "Subject:" line.
>>
>>
>> --- Administrative commands for the common-dev list ---
>>
>> I can handle administrative requests automatically. Please
>> do not send them to the list address! Instead, send
>> your message to the correct command address:
>>
>> To subscribe to the list, send a message to:
>>
>>
>> To remove your address from the list, send a message to:
>>
>>
>> Send mail to the following for info and FAQ for this list:
>>
>>
>>
>> Similar addresses exist for the digest list:
>>
>>
>>
>> To get messages 123 through 145 (a maximum of 100 per request), mail:
>>
>>
>> To get an index with subject and author for messages 123-456 , mail:
>>
>>
>> They are always returned as sets of 100, max 2000 per request,
>> so you'll actually get 100-499.
>>
>> To receive all messages with the same subject as message 12345,
>> send a short message to:
>>
>>
>> The messages should contain one line or word of text to avoid being
>> treated as sp@m, but I will ignore their content.
>> Only the ADDRESS you send to is important.
>>
>> You can start a subscription for an alternate address,
>> for example "john@host.domain", just add a hyphen and your
>> address (with '=' instead of '@') after the command word:
>> 
>>
>> To stop subscription for this address, mail:
>> 
>>
>> In both cases, I'll send a confirmation message to that address. When
>> you receive it, simply reply to it to complete your subscription.
>>
>> If despite following these instructions, you do not get the
>> desired results, please contact my owner at
>> common-dev-ow...@hadoop.apache.org. Please be patient, my owner is a
>> lot slower than I am ;-)
>>
>> --- Enclosed is a copy of the request I received.
>>
>> Return-Path: 
>> Received: (qmail 68133 invoked by uid 99); 13 Aug 2013 23:34:49 -
>> Received: from nike.apache.org (HELO nike.apache.org) (192.87.106.230)
>> by apache.org (qpsmtpd/0.29) with ESMTP; Tue, 13 Aug 2013 23:34:49
>> +
>> X-ASF-Spam-Status: No, hits=-6.7 required=10.0
>> tests=ASF_EMPTY_LIST_OPS,ASF_LIST_OPS,ASF_LIST_UNSUB_A,DCC_
>> CHECK,EMPTY_MESSAGE,HTML_MESSAGE,MIME_HTML_MOSTLY,MISSING_SUBJECT
>> X-Spam-Check-By: apache.org
>> Received-SPF: error (nike.apache.org: local policy)
>> Received: from [69.89.23.142] (HELO gproxy4-pub.mail.unifiedlayer.com)
>> (69.89.23.142)
>> by apache.org (qpsmtpd/0.29) with SMTP; Tue, 13 Aug 2013 23:34:42
>> +
>> Received: (qmail 10263 invoked by uid 0); 13 Aug 2013 23:34:00 -
>> Received: from unknown (HELO mailchannelsproxy4.unifiedlayer.com)
>> (66.147.243.73)
>>   by gproxy4.unifiedlayer.com with SMTP; 13 Aug 2013 23:34:00 -
>> X-Sender-Id: {1135:just81.justhost.com:stevend4:stevendyates.com}
>> {sentby:smtp auth 101.119.15.112 authed with syates+stevendyates.com}
>> Received: from just81.justhost.com (just81.justhost.com [173.254.28.81])
>> by 0.0.0.0:2500 (trex/4.8.85);
>> Tue, 13 Aug 2013 23:34:00 GMT
>> DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=
>> stevendyates.com; s=default;
>> h=Content-Type:MIME-Version:Reply-To:To:From:Message-ID:Date; bh=
>> VzTPxlbEJyUU4Gyzuy6t5FvvwFzGNWBhEE6KBMccX3Q=;
>> b=LA0/yTL0AuyXPSNJBYPd1nkEIbfJcd0wOb
>> XixEPvB/ztF3nr/YMV5nQaLb+EbJg+kQW0PyhPVgkBjzVUOwZLaAHMIrkpM/K0P0JSt/
>> wmgmeSKkokRtMrYdJ5PTmG6vdc;
>> Received: from [101.119.15.112] (port=33907 helo=[100.125.141.91])
>> by just81.justhost.com with esmtpsa (TLSv1:RC4-SHA:128)
>> (Exim 4.80)
>> (envelope-from )
>> id 1V9Mw6-0005u8-MR
>> for common-dev-unsubscr...@h
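Stripped of the raw mail headers, the ezmlm addressing scheme described in the
message above is mechanical: the command word is appended to the list name, and
an optional target address follows after a hyphen with its '@' replaced by '='.
A minimal sketch (hypothetical helper class; ezmlm itself is a qmail add-on,
not a Java library):

```java
// Sketch of how ezmlm-style command addresses are formed, per the message above.
public class EzmlmAddress {

    /**
     * Build a command address such as common-dev-subscribe@hadoop.apache.org.
     * If a target address is given, it is appended after a hyphen with its
     * '@' replaced by '=', e.g. ...-subscribe-john=host.domain@...
     */
    static String command(String list, String host, String cmd, String target) {
        StringBuilder sb = new StringBuilder(list).append('-').append(cmd);
        if (target != null) {
            sb.append('-').append(target.replace('@', '='));
        }
        return sb.append('@').append(host).toString();
    }

    public static void main(String[] args) {
        System.out.println(command("common-dev", "hadoop.apache.org", "subscribe", null));
        System.out.println(command("common-dev", "hadoop.apache.org",
                "subscribe", "john@host.domain"));
    }
}
```

Replying to such an address (rather than putting anything in the body) is what
triggers the action, which is why the confirmation message only cares about the
address you send to.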

Re: Patch review process

2015-02-08 Thread Steve Loughran
On 7 February 2015 at 02:14:39, Colin P. McCabe 
(cmcc...@apache.org) wrote:
I think it's healthy to have lots of JIRAs that are "patch available."
It means that there is a lot of interest in the project and people
want to contribute. It would be unhealthy if JIRAs that really needed
to get in were not getting in. But beyond a few horror stories, that
usually doesn't seem to happen.


I believe it is easier for you or me to assert that than it is for someone to 
submit a patch which really matters to them, only to find it languishes, 
ignored, because it doesn't appear to matter to anyone who has the rights to 
get it into the code.


I agree that we should make an effort to review things that come from
new contributors. I always set aside some time each week to look
through the new JIRAs on the list and review ones that I feel like I
can do.

I think the "patch manager" for a patch should be the person who
submitted it. As Chris suggested, if nobody is reviewing, email
people who reviewed earlier and ask why. Or email the list and ask if
this is the right approach, and bring attention to the issue.

Is the fact that you have to keep asking people to look at your patch a good one? 
It's certainly a sign that the submitter feels it matters, but it also shows 
there's no active queue management.

I suspect it also tends to be easier to pull off if you are already known in 
the community. I know a certain AW will now note that it helps to share 
employers with other committers, but we also tend to review and +1 code by 
people we already know and are reasonably good at working with (i.e. we don't 
fear their code, and trust them to care about issues like compatibility, testing, 
etc.). Certainly I appreciate Alan's +1s for my languishing patches.


If you aren't known, and you have just one patch which appears to only surface 
in your environment, there's a risk of neglect.

example:

https://issues.apache.org/jira/browse/HADOOP-3426 "Datanode does not start up 
if the local machines DNS isn't working right and 
dfs.datanode.dns.interface==default"

my home LAN, my broken /etc/resolv.conf, my patch. And until it was in Hadoop, my 
private branch needed it to work. Now it's in, I'm happy with that specific 
problem being addressed.

Except there's one nearby, about failing better in an IPv6 world, that's been 
around for a while and that nobody has looked at:

https://issues.apache.org/jira/browse/HADOOP-3619

It's little ones like that which I think can fall by the wayside (I'm looking at 
it now). Here's someone pushing the boundaries: running without IPv6 disabled. 
Instead of us picking up the early lessons, they are being ignored 
unless/until they become issues in the run-up to a release.

And, we are trying to be a community here, which means encouraging more 
contributions. Those of us working full time on it should be able to allocate 
some time, even if only weekends outside the release phase, to catching up with 
the work queue.

There's an article here that makes this point: that OSS projects should be 
inclusive, not exclusive, which means encouraging a more diverse set of 
contributors.

http://www.curiousefficiency.org/posts/2015/01/abuse-is-not-ok.html

We can't do that if we restrict our reviews to work by known people.

The other issue I find with the "harass people until they commit it" strategy 
is that it scales badly: not just with the number of people submitting patches, 
but with the number of patches. If I have a small 4-line patch, is it worth the 
effort of chasing people round to get it in, or should I save my effort for the 
more transformational patches?

Furthermore, as a recipient of such emails, after a while I get more ruthless 
about ignoring them. Though I think I'll look at a few today, including one 
that a colleague of Colin's has been asking for (HADOOP-11293), as I feel sorry 
for anyone attempting a minor-but-wide-reaching bit of code cleanup.



I do like the idea of cleaning up old JIRAs that no longer apply or
that have been abandoned. And perhaps picking up on a few issues that
we have forgotten about.


+1


But it is part of release management in my
mind.
The release manager decides that we need to get features and
bugfixes X, Y, and Z in release Q, and then pushes on the JIRAs and
committers responsible for making this happen. Since JIRAs implement
features and bugfixes they naturally fall under release management.
This is how several companies that I've worked at have done it
internally...


At release time it's too late to do things that are important yet whose 
roll-out is considered a threat to the code. If you look at the history 
of any JIRA related to updating Jetty you can see this: we know the problems, 
but don't want to go there, especially near release time. And, given the 
stress induced by the "great protobuf upgrade of 2013", I agree. Except that 
now it's not release time, and still nobody has gone near Jetty.


Anyway, I'm going to review some patches this weekend.

Re: Patch review process

2015-02-08 Thread Karthik Kambatla
On Fri, Feb 6, 2015 at 6:14 PM, Colin P. McCabe  wrote:

> I think it's healthy to have lots of JIRAs that are "patch available."
>  It means that there is a lot of interest in the project and people
> want to contribute.  It would be unhealthy if JIRAs that really needed
> to get in were not getting in.  But beyond a few horror stories, that
> usually doesn't seem to happen.
>
> I agree that we should make an effort to review things that come from
> new contributors.  I always set aside some time each week to look
> through the new JIRAs on the list and review ones that I feel like I
> can do.
>
> I think the "patch manager" for a patch should be the person who
> submitted it.  As Chris suggested, if nobody is reviewing, email
> people who reviewed earlier and ask why.  Or email the list and ask if
> this is the right approach, and bring attention to the issue.
>

It is definitely great if contributors reach out to potential
reviewers and follow up. However, newer contributors find it hard to figure
out who to reach out to, and leaving it on them is not very welcoming.

Also, people often find one issue, post a fix out of good will and move on.
They might not be motivated enough to be the "patch manager" for it. I
understand that if it is an important patch, someone else will take it up
and get it in. If it is not, or if it is a duplicate, that JIRA/patch needs
to be cleaned up.


>
> I do like the idea of cleaning up old JIRAs that no longer apply or
> that have been abandoned.  And perhaps picking up on a few issues that
> we have forgotten about.  But it is part of release management in my
> mind.  The release manager decides that we need to get features and
> bugfixes X, Y, and Z in release Q, and then pushes on the JIRAs and
> committers responsible for making this happen.  Since JIRAs implement
> features and bugfixes they naturally fall under release management.
> This is how several companies that I've worked at have done it
> internally...
>

Getting a release out (for a release manager) is already some work. Adding
more responsibilities to the RM (a voluntary role) makes it less enticing.
And, distributing the work among multiple workers (with domain knowledge)
might be more efficient.


>
> cheers,
> Colin
>
> On Thu, Feb 5, 2015 at 4:18 PM, Akira AJISAKA
>  wrote:
> > I'm thinking it's unhealthy to have over 1000 JIRAs in patch-available state.
> > Reviewers should be more welcoming and should review patches from
> > everywhere, to increase the number of developers and future reviewers.
> >
> > I'm not completely sure patch managers will make it healthy; however,
> > changing the process (and this discussion) would help improve our
> > mindsets.
> >
> > @Committers: Let's review more patches!
> > @Developers: You can also review patches you are interested in. Your
> > comments will help committers to review and merge them.
> > (As you can see, the above comments don't have any enforcement.)
> >
> > Regards,
> > Akira
> >
> >
> > On 2/4/15 13:52, Karthik Kambatla wrote:
> >>
> >> +1 to patch managers per component.
> >>
> >>
> >> On Wed, Feb 4, 2015 at 12:29 PM, Allen Wittenauer 
> >> wrote:
> >>
> >>>
> >>>  Is process really the problem?  Or, more directly, how does
> any
> >>> of
> >>> this actually increase the pool beyond the (I’m feeling generous today)
> >>> 10
> >>> or so committers (never mind PMC) that actually review patches that
> come
> >>> from outside their employers on a regular basis?
> >>>
> >>
> >> Process might not be the source of the problem, however process will
> help
> >> with alleviating the current situation.
> >>
> >> It would definitely help to increase the number of active committers.
> >> Might
> >> not be very hard to add committers, but I don't know of a way to make
> them
> >> active.
> >>
> >>
> >>>
> >>>  To put this in perspective, there are over 1000 JIRAs in patch
> >>> available status across all three projects right now. That’s not even
> >>> counting the ones that I know I’ve personally removed the PA status on
> >>> because the patch no longer applies...
> >>>
> >>>
> >>> On Feb 4, 2015, at 12:10 PM, Chris Douglas 
> wrote:
> >>>
>  Release managers are just committers trying to roll releases; it's not
>  an enduring role. A patch manager is just someone helping to track
>  work and direct reviewers to issues. The job doesn't come with a hat.
>  We could look into a badge and gun if that would help.
> >>>
> >>>
> >>
> >> Badge and gun will ensure a single patch-manager per component.
> >>
> >>
> 
>  This doesn't require a lot of hand-wringing or diagnosis. If you're
>  concerned about the queue, then start trying to find reviewers for
>  viable patches.
> 
>  We should also close issues that require too much work to fix, or at
>  least mark them for "Later". Not every idea needs to end in a commit,
>  but silence is frustrating for contributors. -C
> >>>
> >>>
> >>
> >> +1.
> >>
> >>
> 
>  On Wed, F

Fwd: [Hadoop] AvroRecord cannot be resolved

2015-02-08 Thread Priyank Kapadia
Hi all,
I am new to Hadoop. I was trying to load all the projects in Eclipse. After
building Hadoop successfully using Maven 3, I tried to import Hadoop
into Eclipse Luna. But to my surprise, even after building the project,
Eclipse gives me a lot of compilation errors, of which the first few are "Avro
Record cannot be resolved." Does anyone have an idea how to solve this
problem so I can build and run the whole project in Eclipse successfully?

Thank You.

Priyank Kapadia


Jenkins build is back to normal : Hadoop-Common-trunk #1399

2015-02-08 Thread Apache Jenkins Server
See 



[jira] [Created] (HADOOP-11563) Add the missed entry for CHANGES.txt

2015-02-08 Thread Kai Zheng (JIRA)
Kai Zheng created HADOOP-11563:
--

 Summary: Add the missed entry for CHANGES.txt
 Key: HADOOP-11563
 URL: https://issues.apache.org/jira/browse/HADOOP-11563
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: HDFS-EC
Reporter: Kai Zheng
Assignee: Kai Zheng
Priority: Trivial
 Fix For: HDFS-EC


When HADOOP-11541 was committed, the 
hadoop-common/CHANGES-HDFS-EC-7285.txt file was not updated. This is to add the missed entry. 
Thanks [~hitliuyi] for pointing this out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-10032) Backport hadoop-openstack to branch 1

2015-02-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-10032.
-
Resolution: Won't Fix

> Backport hadoop-openstack to branch 1
> -
>
> Key: HADOOP-10032
> URL: https://issues.apache.org/jira/browse/HADOOP-10032
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs
>Affects Versions: 1.3.0
>Reporter: Steve Loughran
> Attachments: HADOOP-10032-1.patch
>
>
> Backport the hadoop-openstack module from trunk to branch-1.
> This will need a build.xml file to build it, ivy set up to add any extra 
> dependencies and testing. There's one extra {{FileSystem}} method in 2.x that 
> we can drop for branch-1.
> FWIW I've already built and tested hadoop-openstack against branch-1 by 
> editing the .pom file and having that module build against 1.  Before the 
> move from {{isDir()}} to {{isDirectory()}}, it compiled and ran fine.





[jira] [Resolved] (HADOOP-9457) add an SCM-ignored XML filename to keep secrets in (auth-keys.xml?)

2015-02-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-9457.

   Resolution: Fixed
Fix Version/s: 2.6.0

the {{hadoop-aws}} and {{hadoop-openstack}} modules both do this

> add an SCM-ignored XML filename to keep secrets in (auth-keys.xml?)
> ---
>
> Key: HADOOP-9457
> URL: https://issues.apache.org/jira/browse/HADOOP-9457
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Priority: Minor
> Fix For: 2.6.0
>
>
> to avoid accidentally checking in secrets, I keep auth keys for things like 
> AWS in a file called {{auth-keys.xml}} alongside the 
> {{test/resources/core-site.xml}} file, then XInclude them. I also have a 
> global gitignore set up to ignore files with that name.
> I propose having a standard name for XML files containing such secrets (we 
> could use auth-keys.xml or something else) and setting up 
> {{hadoop-trunk/.gitignore}} and {{svn:ignore}} to ignore them. That way, 
> nobody else will check them in by accident
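The XInclude arrangement described in the issue can be sketched as below; the
file name follows the convention proposed above, and the exact location is
illustrative:

```xml
<?xml version="1.0"?>
<!-- test/resources/core-site.xml: the secrets live in auth-keys.xml,
     which is listed in .gitignore / svn:ignore and never committed. -->
<configuration xmlns:xi="http://www.w3.org/2001/XInclude">
  <xi:include href="auth-keys.xml"/>
</configuration>
```

auth-keys.xml then holds the credential properties in the same
`<configuration>`/`<property>` format, so a missing file is the only thing that
distinguishes a fresh checkout from a configured test environment.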





[jira] [Resolved] (HADOOP-5731) IPC call can raise security exceptions when the remote node is running under a security manager

2015-02-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-5731.

Resolution: Won't Fix

Nobody else is seeing/complaining about this. WONTFIX

> IPC call can raise security exceptions when the remote node is running under 
> a security manager
> ---
>
> Key: HADOOP-5731
> URL: https://issues.apache.org/jira/browse/HADOOP-5731
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> I'm getting a security exception (java.lang.reflect.ReflectPermission 
> suppressAccessChecks) in RPC.Server.call(), when calling a datanode brought 
> up under a security manager, in method.setAccessible(true)





[jira] [Resolved] (HADOOP-5093) Configuration default resource handling needs to be able to remove default resources

2015-02-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-5093.

   Resolution: Won't Fix
Fix Version/s: 2.7.0

The real problem is that invalid resources are picked up without any checks. WONTFIX

> Configuration default resource handling needs to be able to remove default 
> resources 
> -
>
> Key: HADOOP-5093
> URL: https://issues.apache.org/jira/browse/HADOOP-5093
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Priority: Minor
> Fix For: 2.7.0
>
>
> There's a way to add default resources, but not remove them. This allows 
> someone to push an invalid resource into the default list, and for the rest 
> of the JVM's life, any Conf file loaded with quietMode set will fail.





[jira] [Resolved] (HADOOP-9089) start-all.sh references a missing file start-mapred.sh

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-9089.
--
Resolution: Cannot Reproduce

This has been fixed at some point. Closing.

> start-all.sh references a missing file start-mapred.sh
> --
>
> Key: HADOOP-9089
> URL: https://issues.apache.org/jira/browse/HADOOP-9089
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Affects Versions: 0.23.4
>Reporter: Yevgen Yampolskiy
>Priority: Minor
>
> start-mapred.sh is not included in the 0.23.4 release. 
> I do not know if it is an intended change; however, start-all.sh generates 
> the message:
> This script is Deprecated. Instead use start-dfs.sh and start-mapred.sh
> Either the message in start-all.sh needs to be changed, or start-all.sh should be 
> removed, or start-mapred.sh should be put back into the distribution





[jira] [Resolved] (HADOOP-8938) add option to do better diags of startup configuration

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-8938.
--
Resolution: Not a Problem

Lots of shell changes, and in particular HADOOP-11013, have been implemented to 
help here.  HADOOP-7947 is also in progress at present, which will help on the 
XML bits.

> add option to do better diags of startup configuration
> --
>
> Key: HADOOP-8938
> URL: https://issues.apache.org/jira/browse/HADOOP-8938
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: bin
>Affects Versions: 1.1.0, 2.0.2-alpha
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: hdiag.py
>
>
> HADOOP-8931 shows a symptom of a larger problem: we need better diagnostics 
> of all the environment variables and settings going through the hadoop 
> scripts, to find out why something isn't working. 
> Ideally some command line parameter to the scripts (or even a new 
> environment variable) could trigger more display of critical parameters.





[jira] [Resolved] (HADOOP-8792) hadoop-daemon doesn't handle chown failures

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-8792.
--
Resolution: Not a Problem

This was fixed by HADOOP-9902.  At this time, the hadoop shell code no longer 
executes a chown or does much with usernames other than use them for the names of 
logs. Closing as Not a Problem.

> hadoop-daemon doesn't handle chown failures
> ---
>
> Key: HADOOP-8792
> URL: https://issues.apache.org/jira/browse/HADOOP-8792
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Affects Versions: 1.0.3
> Environment: Whirr deployment onto existing VM
>Reporter: Steve Loughran
>
> A whirr deployment of the JT failed; it looks like the hadoop user wasn't 
> there. This didn't get picked up by whirr (WHIRR-651) as the hadoop-daemon 
> script doesn't check the return value of its chown operation: this should be 
> converted into a failure.





[jira] [Created] (HADOOP-11564) Fix Multithreaded correctness Warnings in BackupImage.java

2015-02-08 Thread Rakesh R (JIRA)
Rakesh R created HADOOP-11564:
-

 Summary: Fix Multithreaded correctness Warnings in BackupImage.java
 Key: HADOOP-11564
 URL: https://issues.apache.org/jira/browse/HADOOP-11564
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R


Inconsistent synchronization of 
org.apache.hadoop.hdfs.server.namenode.BackupImage.namesystem; locked 60% of 
time
{code}
Bug type IS2_INCONSISTENT_SYNC (click for details) 
In class org.apache.hadoop.hdfs.server.namenode.BackupImage
Field org.apache.hadoop.hdfs.server.namenode.BackupImage.namesystem
Synchronized 60% of the time
Unsynchronized access at BackupImage.java:[line 97]
Unsynchronized access at BackupImage.java:[line 261]
Synchronized access at BackupImage.java:[line 197]
Synchronized access at BackupImage.java:[line 212]
Synchronized access at BackupImage.java:[line 295]
{code}

https://builds.apache.org/job/PreCommit-HDFS-Build/9493//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html#Details
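A minimal illustration of the IS2_INCONSISTENT_SYNC pattern reported above
(hypothetical classes, not the actual BackupImage code): the warning fires when
a field is guarded by a monitor on some code paths but accessed without it on
others, and the usual fix is to guard every access with the same monitor:

```java
// Reduced sketch of the FindBugs warning; field guarded on some paths only.
class BackupImageSketch {
    private Object namesystem;

    // Unsynchronized write: combined with the synchronized reader below,
    // FindBugs reports IS2_INCONSISTENT_SYNC ("synchronized 60% of the time").
    void setNamesystem(Object ns) {
        namesystem = ns;
    }

    synchronized Object getNamesystem() {
        return namesystem;
    }
}

// The usual fix: every access to the field takes the same monitor.
class BackupImageFixed {
    private Object namesystem;

    synchronized void setNamesystem(Object ns) {
        namesystem = ns;
    }

    synchronized Object getNamesystem() {
        return namesystem;
    }
}
```

(Where the unsynchronized accesses are provably safe, e.g. in a constructor
before the object is published, suppressing the warning is also an option; the
report above does not say which case applies.)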





[jira] [Resolved] (HADOOP-8494) bin/hadoop dfs -help tries to connect to NameNode instead of just printing help

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-8494.
--
Resolution: Duplicate

> bin/hadoop dfs -help tries to connect to NameNode instead of just printing 
> help
> ---
>
> Key: HADOOP-8494
> URL: https://issues.apache.org/jira/browse/HADOOP-8494
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1.0.3
> Environment: ubuntu 12.04   hadoop-1.0.3
>Reporter: robin
>
> {code}
> szx@ubuntu1:/opt/hadoop$ bin/hadoop dfs -help
> 12/06/07 23:18:51 INFO ipc.Client: Retrying connect to server: 
> ubuntu1/192.168.200.135:9000. Already tried 0 time(s).
> 12/06/07 23:18:52 INFO ipc.Client: Retrying connect to server: 
> ubuntu1/192.168.200.135:9000. Already tried 1 time(s).
> 12/06/07 23:18:53 INFO ipc.Client: Retrying connect to server: 
> ubuntu1/192.168.200.135:9000. Already tried 2 time(s).
> 12/06/07 23:18:54 INFO ipc.Client: Retrying connect to server: 
> ubuntu1/192.168.200.135:9000. Already tried 3 time(s).
> 12/06/07 23:18:55 INFO ipc.Client: Retrying connect to server: 
> ubuntu1/192.168.200.135:9000. Already tried 4 time(s).
> 12/06/07 23:18:56 INFO ipc.Client: Retrying connect to server: 
> ubuntu1/192.168.200.135:9000. Already tried 5 time(s).
> 12/06/07 23:18:57 INFO ipc.Client: Retrying connect to server: 
> ubuntu1/192.168.200.135:9000. Already tried 6 time(s).
> 12/06/07 23:18:58 INFO ipc.Client: Retrying connect to server: 
> ubuntu1/192.168.200.135:9000. Already tried 7 time(s).
> 12/06/07 23:18:59 INFO ipc.Client: Retrying connect to server: 
> ubuntu1/192.168.200.135:9000. Already tried 8 time(s).
> 12/06/07 23:19:00 INFO ipc.Client: Retrying connect to server: 
> ubuntu1/192.168.200.135:9000. Already tried 9 time(s).
> Bad connection to FS. command aborted. exception: Call to 
> ubuntu1/192.168.200.135:9000 failed on connection exception: 
> java.net.ConnectException: Connection refused
> {code}





[jira] [Resolved] (HADOOP-7543) hadoop-config.sh is missing in HADOOP_COMMON_HOME/bin after mvn'ization

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-7543.
--
Resolution: Not a Problem

> hadoop-config.sh is missing in HADOOP_COMMON_HOME/bin after mvn'ization
> ---
>
> Key: HADOOP-7543
> URL: https://issues.apache.org/jira/browse/HADOOP-7543
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Arun C Murthy
>
> hadoop-config.sh is missing in $HADOOP_COMMON_HOME/bin after mvn'ization, 
> it's only in $HADOOP_COMMON_HOME/libexec which breaks bin/hdfs





[jira] [Resolved] (HADOOP-8650) /bin/hadoop-daemon.sh to add "-f " arg for forced shutdowns

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-8650.
--
Resolution: Won't Fix

Fixed in trunk, 1.x is dead. 2.x might as well be.

> /bin/hadoop-daemon.sh to add "-f " arg for forced shutdowns 
> -
>
> Key: HADOOP-8650
> URL: https://issues.apache.org/jira/browse/HADOOP-8650
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 1.0.3, 2.0.2-alpha
>Reporter: Steve Loughran
>Priority: Minor
>
> Add a timeout for the daemon script to trigger a kill -9 if the clean 
> shutdown fails.





[jira] [Resolved] (HADOOP-8072) bin/hadoop leaks pids when running a non-detached datanode via jsvc

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-8072.
--
Resolution: Not a Problem

Long since fixed. Closing.

> bin/hadoop leaks pids when running a non-detached datanode via jsvc
> ---
>
> Key: HADOOP-8072
> URL: https://issues.apache.org/jira/browse/HADOOP-8072
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.20.205.0
> Environment: Centos 5 or 6, but affects all platforms
>Reporter: Peter Linnell
> Attachments: fix-leaking-pid-s.patch
>
>
> See: https://issues.cloudera.org/browse/DISTRO-53  





[jira] [Resolved] (HADOOP-7894) bin and sbin commands don't use JAVA_HOME when run from the tarball

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-7894.
--
Resolution: Not a Problem

Fixed in trunk.

> bin and sbin commands don't use  JAVA_HOME when run from the tarball 
> -
>
> Key: HADOOP-7894
> URL: https://issues.apache.org/jira/browse/HADOOP-7894
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.23.0
>Reporter: Eli Collins
>
> When running eg ./sbin/start-dfs.sh from a tarball the scripts complain 
> JAVA_HOME is not set and could not be found even if the env var is set.
> {noformat}
> hadoop-0.24.0-SNAPSHOT $ echo $JAVA_HOME
> /home/eli/toolchain/jdk1.6.0_24-x64
> hadoop-0.24.0-SNAPSHOT $ ./sbin/start-dfs.sh 
> log4j:ERROR Could not find value for key log4j.appender.NullAppender
> log4j:ERROR Could not instantiate appender named "NullAppender".
> Starting namenodes on [localhost]
> localhost: Error: JAVA_HOME is not set and could not be found.
> {noformat}
> I have to explicitly set this via hadoop-env.





[jira] [Resolved] (HADOOP-8033) HADOOP_JAVA_PLATFORM_OPS is no longer respected

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-8033.
--
Resolution: Won't Fix

> HADOOP_JAVA_PLATFORM_OPS is no longer respected
> ---
>
> Key: HADOOP-8033
> URL: https://issues.apache.org/jira/browse/HADOOP-8033
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 0.22.0, 0.23.0
>Reporter: Eli Collins
>Priority: Minor
>
> HADOOP-6284 introduced HADOOP_JAVA_PLATFORM_OPS and it's in branch-1; however, 
> it's not in trunk, 0.22, or 0.23. It's referenced in hadoop-env.sh (commented out) 
> but not actually used anywhere; I'm not sure when it was removed from bin/hadoop. 
> Perhaps the intention was to just use HADOOP_OPTS?





[jira] [Resolved] (HADOOP-7895) HADOOP_LOG_DIR has to be set explicitly when running from the tarball

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-7895.
--
Resolution: Not a Problem

Fixed in trunk.

> HADOOP_LOG_DIR has to be set explicitly when running from the tarball
> -
>
> Key: HADOOP-7895
> URL: https://issues.apache.org/jira/browse/HADOOP-7895
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.23.0
>Reporter: Eli Collins
>
> When running bin and sbin commands from the tarball, if HADOOP_LOG_DIR is not 
> explicitly set in hadoop-env.sh, it no longer defaults to HADOOP_HOME/logs as it 
> used to; instead it picks a wrong dir:
> {noformat}
> localhost: mkdir: cannot create directory `/eli': Permission denied
> localhost: chown: cannot access `/eli/eli': No such file or directory
> {noformat}
> We should have it default to HADOOP_HOME/logs, or at least fail with a message 
> if the dir doesn't exist or the env var isn't set.
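The suggested fallback can be sketched in shell. Variable handling here is an illustration only, not the actual Hadoop script code:

```shell
# Sketch of the proposed behaviour: fall back to $HADOOP_HOME/logs when
# HADOOP_LOG_DIR is unset, and print a clear error if the resulting dir
# does not exist rather than mkdir-ing a bogus path like /eli above.
HADOOP_HOME="${HADOOP_HOME:-$PWD}"
HADOOP_LOG_DIR="${HADOOP_LOG_DIR:-${HADOOP_HOME}/logs}"
if [ ! -d "$HADOOP_LOG_DIR" ]; then
  echo "Error: HADOOP_LOG_DIR ($HADOOP_LOG_DIR) does not exist" >&2
fi
```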





[jira] [Resolved] (HADOOP-8448) Java options being duplicated several times

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-8448.
--
Resolution: Not a Problem

Fixed in trunk.

> Java options being duplicated several times
> ---
>
> Key: HADOOP-8448
> URL: https://issues.apache.org/jira/browse/HADOOP-8448
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf, scripts
>Affects Versions: 1.0.2
> Environment: VirtualBox 4.1.14 r77440
> Linux slack 2.6.37.6 #3 SMP Sat Apr 9 22:49:32 CDT 2011 x86_64 Intel(R) 
> Core(TM)2 Quad CPUQ8300  @ 2.50GHz GenuineIntel GNU/Linux 
> java version "1.7.0_04"
> Java(TM) SE Runtime Environment (build 1.7.0_04-b20)
> Java HotSpot(TM) 64-Bit Server VM (build 23.0-b21, mixed mode)
> Hadoop 1.0.2
> Subversion 
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0.2 -r 
> 1304954 Compiled by hortonfo on Sat Mar 24 23:58:21 UTC 2012
> From source with checksum c198b04303cfa626a38e13154d2765a9
> Hadoop is running in pseudo-distributed mode, configured according to 
> http://hadoop.apache.org/common/docs/r1.0.3/single_node_setup.html#PseudoDistributed
>Reporter: Evgeny Rusak
>
> After adding an additional java option to HADOOP_JOBTRACKER_OPTS like 
> the following
>  export HADOOP_JOBTRACKER_OPTS="$HADOOP_JOBTRACKER_OPTS -Dxxx=yyy"
> and starting the hadoop instance with start-all.sh, the added option is 
> attached several times, as shown by the command
>  ps ax | grep jobtracker 
> which prints 
> .
> 29824 ?Sl22:29 home/hduser/apps/jdk/jdk1.7.0_04/bin/java  
>-Dproc_jobtracker -XX:MaxPermSize=256m 
> -Xmx600m -Dxxx=yyy -Dxxx=yyy
> -Dxxx=yyy -Dxxx=yyy -Dxxx=yyy 
> -Dhadoop.log.dir=/home/hduser/apps/hadoop/hadoop-1.0.2/libexec/../logs
> ..
> This unexpected behaviour causes a severe issue when specifying the 
> "-agentpath:" option, because the duplicated agents are treated as distinct 
> agents and are instantiated several times at once.
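One common guard against this class of bug is to make the append idempotent, so that sourcing the env file repeatedly cannot duplicate the option. This is a sketch, not the project's actual fix; only the variable name is taken from the report:

```shell
# Append -Dxxx=yyy to HADOOP_JOBTRACKER_OPTS only if it is not already
# present, so re-sourcing hadoop-env.sh cannot duplicate it.
case " $HADOOP_JOBTRACKER_OPTS " in
  *" -Dxxx=yyy "*) ;;  # already present: do nothing
  *) HADOOP_JOBTRACKER_OPTS="$HADOOP_JOBTRACKER_OPTS -Dxxx=yyy" ;;
esac
export HADOOP_JOBTRACKER_OPTS
```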





[jira] [Resolved] (HADOOP-7927) Can't build packages 205+ on OSX

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-7927.
--
Resolution: Won't Fix

Long since fixed in newer versions of Hadoop.

> Can't build packages 205+ on OSX
> 
>
> Key: HADOOP-7927
> URL: https://issues.apache.org/jira/browse/HADOOP-7927
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.205.0, 1.0.0
>Reporter: Jakob Homan
>
> Currently the ant build script tries to reference the native directories, 
> which are not built on OSX, breaking the build:
> {noformat}bin-package:
> [mkdir] Created dir: 
> /Users/jhoman/repos/hadoop-common/build/hadoop-0.20.205.1
> [mkdir] Created dir: 
> /Users/jhoman/repos/hadoop-common/build/hadoop-0.20.205.1/bin
> [mkdir] Created dir: 
> /Users/jhoman/repos/hadoop-common/build/hadoop-0.20.205.1/etc/hadoop
> [mkdir] Created dir: 
> /Users/jhoman/repos/hadoop-common/build/hadoop-0.20.205.1/lib
> [mkdir] Created dir: 
> /Users/jhoman/repos/hadoop-common/build/hadoop-0.20.205.1/libexec
> [mkdir] Created dir: 
> /Users/jhoman/repos/hadoop-common/build/hadoop-0.20.205.1/sbin
> [mkdir] Created dir: 
> /Users/jhoman/repos/hadoop-common/build/hadoop-0.20.205.1/share/hadoop/contrib
> [mkdir] Created dir: 
> /Users/jhoman/repos/hadoop-common/build/hadoop-0.20.205.1/share/hadoop/webapps
> [mkdir] Created dir: 
> /Users/jhoman/repos/hadoop-common/build/hadoop-0.20.205.1/share/hadoop/templates/conf
>  [copy] Copying 11 files to 
> /Users/jhoman/repos/hadoop-common/build/hadoop-0.20.205.1/share/hadoop/templates/conf
>  [copy] Copying 39 files to 
> /Users/jhoman/repos/hadoop-common/build/hadoop-0.20.205.1/share/hadoop/lib
>  [copy] Copying 15 files to 
> /Users/jhoman/repos/hadoop-common/build/hadoop-0.20.205.1/share/hadoop/lib
> BUILD FAILED
> /Users/jhoman/repos/hadoop-common/build.xml:1611: 
> /Users/jhoman/repos/hadoop-common/build/hadoop-0.20.205.1/native does not 
> exist.
> {noformat}
> Once one fixes this, one discovers the build also tries to build the 
> linux task controller, regardless of whether or not the native flag is set, 
> which also fails.





[jira] [Resolved] (HADOOP-3438) NPE if job tracker started and system property hadoop.log.dir is not set

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-3438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-3438.
--
Resolution: Won't Fix

> NPE if job tracker started and system property hadoop.log.dir is not set
> 
>
> Key: HADOOP-3438
> URL: https://issues.apache.org/jira/browse/HADOOP-3438
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.18.0
> Environment: amd64 ubuntu, jrockit 1.6
>Reporter: Steve Loughran
>  Labels: newbie
>
> This is a regression. If the system property "hadoop.log.dir" is not set, the 
> job tracker throws an NPE instead of starting up.





[jira] [Resolved] (HADOOP-10935) Cleanup HadoopKerberosName for public consumption

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-10935.
---
Resolution: Duplicate

> Cleanup HadoopKerberosName for public consumption
> -
>
> Key: HADOOP-10935
> URL: https://issues.apache.org/jira/browse/HADOOP-10935
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Allen Wittenauer
>Priority: Minor
>  Labels: newbie
>
> It would be good if we pulled HadoopKerberosName out of the closet and into 
> the light so that others may bask in its glorious usefulness.
> Missing:
> * Documentation
> * Shell short cut
> * CLI help when run without arguments





[jira] [Resolved] (HADOOP-7893) Sort out tarball conf directories

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-7893.
--
Resolution: Won't Fix

Closing as Won't Fix since none of it exists in trunk anymore.

> Sort out tarball conf directories
> -
>
> Key: HADOOP-7893
> URL: https://issues.apache.org/jira/browse/HADOOP-7893
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.23.0
>Reporter: Eli Collins
>Assignee: Eric Yang
>
> The conf directory situation in the tarball (generated by mvn package -Dtar) 
> is a mess. The top-level conf directory just contains the mr2 conf, and there 
> are two other incomplete conf dirs:
> {noformat}
> hadoop-0.24.0-SNAPSHOT $ ls conf/
> slaves  yarn-env.sh  yarn-site.xml
> hadoop-0.24.0-SNAPSHOT $ find . -name conf
> ./conf
> ./share/hadoop/hdfs/templates/conf
> ./share/hadoop/common/templates/conf
> {noformat}
> yet there are 4 hdfs-site.xml files:
> {noformat}
> hadoop-0.24.0-SNAPSHOT $ find . -name hdfs-site.xml
> ./etc/hadoop/hdfs-site.xml
> ./share/hadoop/hdfs/templates/conf/hdfs-site.xml
> ./share/hadoop/hdfs/templates/hdfs-site.xml
> ./share/hadoop/common/templates/conf/hdfs-site.xml
> {noformat}
> And it looks like ./share/hadoop/common/templates/conf contains the old MR1 
> style conf (e.g. mapred-site.xml).
> We should generate a tarball with a single conf directory that just has 
> common, hdfs and mr2 confs.





[jira] [Resolved] (HADOOP-11191) NativeAzureFileSystem#close() should be synchronized

2015-02-08 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HADOOP-11191.
-
Resolution: Later

> NativeAzureFileSystem#close() should be synchronized
> 
>
> Key: HADOOP-11191
> URL: https://issues.apache.org/jira/browse/HADOOP-11191
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
> public void close() throws IOException {
>   in.close();
>   closed = true;
> }
> {code}
> The other methods, such as seek(), are synchronized.
> close() should be as well.





[jira] [Created] (HADOOP-11565) Add --batch shell option

2015-02-08 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-11565:
-

 Summary: Add --batch shell option
 Key: HADOOP-11565
 URL: https://issues.apache.org/jira/browse/HADOOP-11565
 Project: Hadoop Common
  Issue Type: New Feature
  Components: scripts
Reporter: Allen Wittenauer


Add a --batch shell option to hadoop-config.sh to trigger the given command on 
slave nodes.  This is required to deprecate hadoop-daemons.sh and 
yarn-daemons.sh.
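What such a --batch mode might look like, roughly. All names below are assumptions for illustration, not the actual hadoop-config.sh design; HADOOP_SSH_CMD is a hypothetical override hook so the loop can be exercised without real ssh:

```shell
# Hypothetical sketch of a --batch helper: run one command on every host
# listed in a slaves file, normally over ssh.
run_on_slaves() {
  slaves_file="$1"
  shift
  while read -r host; do
    [ -z "$host" ] && continue            # skip blank lines
    ${HADOOP_SSH_CMD:-ssh} "$host" "$@"   # HADOOP_SSH_CMD is an assumed test hook
  done < "$slaves_file"
}
```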





Re: [Hadoop] AvroRecord cannot be resolved

2015-02-08 Thread Vinayakumar B
Hi,

I believe you have generated the Eclipse project files using the
eclipse:eclipse goal.

This generates the project files for the hadoop-common project without the
generated test sources added as a source folder.

I really don't know why this happens.

But you can resolve it by just adding the
hadoop-common-project\hadoop-common\target\generated-test-sources\java
directory as a source directory for the hadoop-common project in Eclipse.

-Vinay

On Sun, Feb 8, 2015 at 4:29 AM, Priyank Kapadia 
wrote:

> Hi all,
> I am new to Hadoop. I was trying to load all the projects in Eclipse. After
> building Hadoop successfully using Maven 3, I tried to import Hadoop
> into Eclipse Luna. But to my surprise, even after building the project,
> Eclipse gives me a lot of compilation errors, the first few of which are
> "AvroRecord cannot be resolved". Does anyone have an idea how to solve this
> problem so I can build and run the whole project in Eclipse successfully?
>
> Thank You.
>
> Priyank Kapadia
>