[jira] [Created] (HADOOP-17934) NullPointerException when no HTTP response set on AbfsRestOperation

2021-09-23 Thread Josh Elser (Jira)
Josh Elser created HADOOP-17934:
---

 Summary: NullPointerException when no HTTP response set on 
AbfsRestOperation
 Key: HADOOP-17934
 URL: https://issues.apache.org/jira/browse/HADOOP-17934
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/azure
Reporter: Josh Elser
Assignee: Josh Elser


Seen when running HBase 2.2 on top of ABFS with Hadoop 3.1ish:
{noformat}
Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.fs.azurebfs.services.AbfsClient.renameIdempotencyCheckOp(AbfsClient.java:382)
at 
org.apache.hadoop.fs.azurebfs.services.AbfsClient.renamePath(AbfsClient.java:348)
at 
org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.rename(AzureBlobFileSystemStore.java:722)
at 
org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.rename(AzureBlobFileSystem.java:327)
at 
org.apache.hadoop.fs.FilterFileSystem.rename(FilterFileSystem.java:249)
at 
org.apache.hadoop.hbase.regionserver.HRegionFileSystem.rename(HRegionFileSystem.java:1115)
 {noformat}
Digging in, it looks like the {{AbfsHttpOperation}} inside of 
{{AbfsRestOperation}} may sometimes be null, but {{AbfsClient}} will try to 
unwrap it (and read the status code from the HTTP call). I'm not sure why we 
sometimes _get_ a null HttpOperation, but it seems pretty straightforward to 
avoid the NPE.
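
For illustration, here is a minimal, self-contained sketch of the kind of 
null guard that would avoid this (the class and method names are modeled on 
the stack trace above; the real signatures in hadoop-azure differ):
{noformat}
import java.net.HttpURLConnection;

public class NullSafeStatusCheck {

  /** Hypothetical stand-in for the real AbfsHttpOperation. */
  static class HttpOperation {
    private final int statusCode;
    HttpOperation(int statusCode) { this.statusCode = statusCode; }
    int getStatusCode() { return statusCode; }
  }

  /**
   * Only consults the HTTP status when a response was actually recorded;
   * a request that never completed leaves the result null.
   */
  static boolean isNotFound(HttpOperation result) {
    return result != null
        && result.getStatusCode() == HttpURLConnection.HTTP_NOT_FOUND;
  }

  public static void main(String[] args) {
    System.out.println(isNotFound(null));                   // false, no NPE
    System.out.println(isNotFound(new HttpOperation(404))); // true
  }
}
{noformat}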

HBase got wedged after this, but I'm not sure whether that was because of this 
NPE or (perhaps) because we weren't getting any responses from ABFS itself 
(i.e., there was some ABFS outage/unavailability, or the node itself couldn't 
talk to ABFS).






[jira] [Created] (HADOOP-14911) Upgrade netty-all jar to 4.0.37.Final

2017-09-27 Thread Josh Elser (JIRA)
Josh Elser created HADOOP-14911:
---

 Summary: Upgrade netty-all jar to 4.0.37.Final
 Key: HADOOP-14911
 URL: https://issues.apache.org/jira/browse/HADOOP-14911
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Josh Elser
Assignee: Vinayakumar B
Priority: Critical


Upgrade the netty-all jar to version 4.0.37.Final to fix the latest reported 
vulnerabilities.
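
For reference, the change would look roughly like this in the relevant 
pom.xml (a sketch; Hadoop centralizes dependency versions in a parent pom, 
so the real patch may be a one-line property bump):
{noformat}
<!-- hypothetical pom.xml fragment -->
<dependency>
  <groupId>io.netty</groupId>
  <artifactId>netty-all</artifactId>
  <version>4.0.37.Final</version>
</dependency>
{noformat}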






Re: [VOTE] Release Apache Hadoop 2.8.0 (RC2)

2017-03-15 Thread Josh Elser
A tag is immutable, but you (or someone else) could remove the tag you 
pushed and re-push a new one. That's why the commit id is important -- 
it ensures that everyone else knows the exact commit being voted on.
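
For example, a voter can resolve the tag locally and compare it to the 
announced hash (a quick sketch using the tag name from this thread):
{noformat}
$ git fetch origin --tags
$ git rev-parse release-2.8.0-RC2^{commit}
# compare the printed hash with the commit id announced in the vote thread
{noformat}
If the printed hash matches the announced commit id, the tag hasn't been 
moved since the RC was cut.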


Junping Du wrote:

The latest commit on RC2 is: e51312e8e106efb2ebd4844eecacb51026fac8b7.
btw, I think tags are immutable, aren't they?

Thanks,

Junping

From: Steve Loughran
Sent: Wednesday, March 15, 2017 12:30 PM
To: Junping Du
Cc: common-dev@hadoop.apache.org
Subject: Re: [VOTE] Release Apache Hadoop 2.8.0 (RC2)


On 14 Mar 2017, at 08:41, Junping Du  wrote:

Hi all,
 With several important fixes merged last week, I've created a new 
release candidate (RC2) for Apache Hadoop 2.8.0.

 This is the next minor release following 2.7.0, which was released more 
than a year ago. It comprises 2,919 fixes, improvements, and new features, 
most of which are being released for the first time from branch-2.

  More information about the 2.8.0 release plan can be found here: 
https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.8+Release

  Please note that RC0 and RC1 were not put up for a public vote because 
significant issues were found just after their RC tags were published.

  The RC is available at: 
http://home.apache.org/~junping_du/hadoop-2.8.0-RC2

  The RC tag in git is: release-2.8.0-RC2


Given tags are so easy to move, we need to be relying on one or more of:
- the commit ID,
- the tag being signed.

Junping: what is the commit Id for the release?


  The maven artifacts are available via repository.apache.org at: 
https://repository.apache.org/content/repositories/orgapachehadoop-1056



Thanks, I'll play with these downstream, as well as checking out and trying to 
build on Windows.


  Please try the release and vote; the vote will run for the usual 5 days, 
ending on 03/20/2017 (PDT).

Thanks,

Junping





Re: Local repo sharing for maven builds

2015-09-20 Thread Josh Elser

Andrew Wang wrote:

Theoretically, we should be able to run unit tests without a full `mvn
install`, right? The "test" phase comes before "package" or "install", so I
figured it only needed class files. Maybe the multi-module-ness screws this
up.


Unless something weird is configured in the poms (which is often a smell 
on its own), the reactor (I think that's the right Maven term) is smart 
enough to pull the right code for multi-module builds.


AFAIK, you should be able to run all unit tests with a patch (hitting 
multiple modules or not) without installing all of the artifacts (e.g. 
using the package lifecycle phase).


If this isn't the case, I'd call that a build bug.
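
As a concrete example (the module path is just illustrative), Maven's 
reactor flags let you run one module's tests while building its in-tree 
dependencies from source, with nothing installed into the local repo:
{noformat}
# -pl selects the module; -am ("also make") builds its reactor
# dependencies from source rather than resolving them from ~/.m2
$ mvn test -pl hadoop-common-project/hadoop-common -am
{noformat}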


Re: [DISCUSS] project for pre-commit patch testing (was Re: upstream jenkins build broken?)

2015-06-15 Thread Josh Elser

+1

(Have been talking to Sean in private on the subject -- seems 
appropriate to voice some public support)


I'd be interested in this for Accumulo and Slider. For Accumulo, we've 
come a long way without a pre-commit build, primarily due to a CTR 
(commit-then-review) process. We have seen repeated questions of "how do I 
run the tests?", which a more automated workflow would help with, IMO. I 
think Slider could benefit for the same reasons.


I'd also be giddy to see the recent improvements in Hadoop trickle down 
into the other projects that Allen already mentioned.


Take this as record that I'd be happy to try to help out where possible.

Sean Busbey wrote:

Thank you for making a more digestible version, Allen. :)

If you're interested in soliciting feedback from other projects, I created
ASF short links to this thread in common-dev and hbase:


* http://s.apache.org/yetus-discuss-hadoop
* http://s.apache.org/yetus-discuss-hbase

While I agree that it's important to get feedback from ASF projects that
might find this useful, I can say that recently I've been involved in the
non-ASF project YCSB and both the pretest and better shell stuff would be
immensely useful over there.

On Mon, Jun 15, 2015 at 10:36 PM, Allen Wittenauer <a...@altiscale.com> wrote:


 I'm clearly +1 on this idea.  As part of the rewrite of Hadoop's
test-patch, it was amazing to see how far and wide this bit of code had
spread.  So I see consolidating everyone's efforts as a huge win for a
large number of projects.  (Especially considering how many I saw suffering
from a variety of identified bugs!)

 But….

 I think it's important for people involved in those other projects
to speak up and voice an opinion as to whether this is useful.

To summarize:

 In the short term, a single location to get/use a precommit patch
tester rather than everyone building/supporting their own in their spare
time.

  FWIW, we've already got the code base modified to be pluggable.
We've written some basic/simple plugins that support Hadoop, HBase, Tajo,
Tez, Pig, and Flink.  For HBase and Flink, this does include their custom
checks.  Adding support for other projects shouldn't be hard.  Simple
projects take almost no time after seeing the basic pattern.

 I think it's worthwhile highlighting that means support for both
JIRA and GitHub as well as Ant and Maven from the same code base.

Longer term:

 Well, we clearly have ideas of things that we want to do. Adding
more features to test-patch (Review Board? Gradle?) is obvious. But what
about teasing apart and generalizing some of the other shell bits from
projects? A common library for everything from building CLI tools to fault
injection to release documentation creation tools to …  I'd even like to see
us get as advanced as "run this program to auto-generate daemon stop/start
bits."

 I had a few chats with people about this idea at Hadoop Summit.
What's truly exciting are the ideas that people had once they realized what
kinds of problems we're trying to solve.  It's always amazing the problems
that projects have that could be solved by these types of solutions.  Let's
stop hiding our cool toys in this area.

 So, what feedback and ideas do you have in this area?  Are you a
yay or a nay?


On Jun 15, 2015, at 4:47 PM, Sean Busbey <bus...@cloudera.com> wrote:


Oof. I had meant to push on this again but life got in the way and now the
June board meeting is upon us. Sorry everyone. In the event that this ends
up contentious, hopefully one of the copied communities can give us a
branch to work in.

I know everyone is busy, so here's the short version of this email: I'd
like to move some of the code currently in Hadoop (test-patch) into a new
TLP focused on QA tooling. I'm not sure what the best format for priming
this conversation is. ORC filled in the incubator project proposal
template, but I'm not sure how much that confused the issue. So to start,
I'll just write what I'm hoping we can accomplish in general terms here.

All software development projects that are community based (that is,
accepting outside contributions) face a common QA problem for vetting
in-coming contributions. Hadoop is fortunate enough to be sufficiently
popular that the weight of the problem drove tool development (i.e.
test-patch). That tool is generalizable enough that a bunch of other TLPs
have adopted their own forks. Unfortunately, in most projects this kind of
QA work is an enabler rather than a primary concern, so the tooling is
often worked on ad hoc and few shared improvements happen across projects.
Since the tooling itself is never a primary concern, any improvement made
is rarely reused outside of ASF projects.

Over the last couple months a few of us have been working on generalizing
the tooling present in the Hadoop code base (because it was the most mature
out of all those in the various projects) and it's reached a point where we
think we can start bringing on other downstream 

[jira] [Created] (HADOOP-10927) Ran `hadoop credential` expecting usage, got NPE instead

2014-08-01 Thread Josh Elser (JIRA)
Josh Elser created HADOOP-10927:
---

 Summary: Ran `hadoop credential` expecting usage, got NPE instead
 Key: HADOOP-10927
 URL: https://issues.apache.org/jira/browse/HADOOP-10927
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Josh Elser
Priority: Minor


{noformat}
$ hadoop credential
java.lang.NullPointerException
at 
org.apache.hadoop.security.alias.CredentialShell.run(CredentialShell.java:67)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at 
org.apache.hadoop.security.alias.CredentialShell.main(CredentialShell.java:420)
{noformat}

Ran a no-arg version of {{hadoop credential}} expecting to get the usage/help 
message (like other commands do), and got the above exception instead.
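
A minimal sketch of the fail-fast behavior you'd expect instead (the class 
and usage text below are illustrative, not the actual CredentialShell code):
{noformat}
public class UsageGuard {

  /** Illustrative usage string; not CredentialShell's real message. */
  private static final String USAGE =
      "Usage: hadoop credential <subcommand> [options]";

  static int run(String[] args) {
    if (args == null || args.length == 0) {
      // Print usage and fail fast instead of dereferencing a
      // null command object further down.
      System.err.println(USAGE);
      return 1;
    }
    // ... dispatch to the requested subcommand ...
    return 0;
  }

  public static void main(String[] args) {
    System.exit(run(args));
  }
}
{noformat}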





[jira] [Created] (HADOOP-10928) Incorrect usage on `hadoop credential list`

2014-08-01 Thread Josh Elser (JIRA)
Josh Elser created HADOOP-10928:
---

 Summary: Incorrect usage on `hadoop credential list`
 Key: HADOOP-10928
 URL: https://issues.apache.org/jira/browse/HADOOP-10928
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Josh Elser
Priority: Trivial
 Attachments: HADOOP-10928.diff

{{hadoop credential list}}'s usage message states a mandatory {{alias}} 
argument, but the code does not actually accept an alias.

Fix the message.
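
To illustrate the mismatch (both usage lines below are hypothetical, not the 
command's actual output):
{noformat}
# what the current usage message implies (alias looks mandatory):
hadoop credential list <alias> [-provider <path>]

# what the code actually accepts (no alias argument):
hadoop credential list [-provider <path>]
{noformat}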


