Where do we track our Jenkins stuff

2016-08-05 Thread Sean Busbey
Apologies, but I haven't managed to figure out where we track our changes
to Jenkins.

The ASF build infra has some changes to Java and Maven installations coming,
and a fair number of Hadoop-related jobs need to be updated.

If I take on doing said updates, do I track it in a JIRA? Or is there some
particular mailing list thread or a wiki page or something?

-- 
Sean Busbey


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2016-08-05 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/

[Aug 4, 2016 7:57:34 AM] (xiao) HADOOP-13443. KMS should check the type of underlying keyprovider of
[Aug 4, 2016 8:38:34 AM] (vvasudev) YARN-5459. Add support for docker rm. Contributed by Shane Kumpf.
[Aug 4, 2016 11:13:11 AM] (aajisaka) MAPREDUCE-6730. Use StandardCharsets instead of String overload in
[Aug 4, 2016 2:07:34 PM] (kihwal) HDFS-10662. Optimize UTF8 string/byte conversions. Contributed by Daryn
[Aug 4, 2016 3:45:55 PM] (kihwal) HADOOP-13442. Optimize UGI group lookups. Contributed by Daryn Sharp.
[Aug 4, 2016 4:45:40 PM] (szetszwo) In Balancer, the target task should be removed when its size < 0.
[Aug 4, 2016 4:53:44 PM] (kihwal) HDFS-10722. Fix race condition in
[Aug 4, 2016 5:07:53 PM] (arp) HADOOP-13467. Shell#getSignalKillCommand should use the bash builtin on
[Aug 4, 2016 7:25:39 PM] (arp) HADOOP-13466. Add an AutoCloseableLock class. (Chen Liang)
[Aug 4, 2016 7:55:21 PM] (kihwal) HDFS-10343. BlockManager#createLocatedBlocks may return blocks on failed
[Aug 4, 2016 8:22:48 PM] (kai.zheng) HDFS-10718. Prefer direct ByteBuffer in native RS encoder and decoder.
[Aug 5, 2016 2:40:33 AM] (weichiu) HDFS-10588. False alarm in datanode log - ERROR - Disk Balancer is not




-1 overall


The following subsystems voted -1:
asflicense mvnsite unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.contrib.bkjournal.TestBootstrapStandbyWithBKJM
   hadoop.tracing.TestTracing
   hadoop.security.TestRefreshUserMappings
   hadoop.yarn.logaggregation.TestAggregatedLogFormat
   hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
   hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
   hadoop.yarn.server.TestContainerManagerSecurity
   hadoop.yarn.client.api.impl.TestYarnClient
   hadoop.mapreduce.v2.hs.server.TestHSAdminServer
   hadoop.contrib.bkjournal.TestBootstrapStandbyWithBKJM

   cc:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/artifact/out/diff-compile-cc-root.txt  [4.0K]

   javac:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/artifact/out/diff-compile-javac-root.txt  [172K]

   checkstyle:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/artifact/out/diff-checkstyle-root.txt  [16M]

   mvnsite:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/artifact/out/patch-mvnsite-root.txt  [112K]

   pylint:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/artifact/out/diff-patch-pylint.txt  [16K]

   shellcheck:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/artifact/out/diff-patch-shellcheck.txt  [20K]

   shelldocs:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/artifact/out/diff-patch-shelldocs.txt  [16K]

   whitespace:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/artifact/out/whitespace-eol.txt  [12M]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/artifact/out/whitespace-tabs.txt  [1.3M]

   javadoc:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/artifact/out/diff-javadoc-javadoc-root.txt  [2.3M]

   unit:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt  [316K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt  [24K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt  [36K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt  [12K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt  [268K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt  [16K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/124/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-clien

Re: [VOTE] Release Apache Hadoop 2.7.3 RC0

2016-08-05 Thread Jason Lowe
Both sound like real problems to me, and I think it's appropriate to file JIRAs 
to track them.
Jason


From: Andrew Wang
To: Karthik Kambatla
Cc: larry mccay; Vinod Kumar Vavilapalli; "common-dev@hadoop.apache.org"; "hdfs-...@hadoop.apache.org"; "yarn-...@hadoop.apache.org"; "mapreduce-...@hadoop.apache.org"
Sent: Thursday, August 4, 2016 5:56 PM
Subject: Re: [VOTE] Release Apache Hadoop 2.7.3 RC0

Could a YARN person please comment on these two issues, one of which Vinay
also hit? If someone already triaged or filed JIRAs, I missed it.

On Mon, Jul 25, 2016 at 11:52 AM, Andrew Wang 
wrote:

> I'll also add that, as a YARN newbie, I did hit two usability issues.
> These are very unlikely to be regressions, and I can file JIRAs if they
> seem fixable.
>
> * I didn't have SSH to localhost set up (new laptop), and when I tried to
> run the Pi job, it'd exit my window manager session. I feel there must be a
> more developer-friendly solution here.
> * If you start the NodeManager and not the RM, the NM has a handler for
> SIGTERM and SIGINT that blocked my Ctrl-C and kill attempts during startup.
> I had to kill -9 it.
>
> On Mon, Jul 25, 2016 at 11:44 AM, Andrew Wang 
> wrote:
>
>> I got asked this off-list, so as a reminder, only PMC votes are binding
>> on releases. Everyone is encouraged to vote on releases though!
>>
>> +1 (binding)
>>
>> * Downloaded source, built
>> * Started up HDFS and YARN
>> * Ran Pi job which as usual returned 4, and a little teragen
>>
>> On Mon, Jul 25, 2016 at 11:08 AM, Karthik Kambatla 
>> wrote:
>>
>>> +1 (binding)
>>>
>>> * Downloaded and built from source
>>> * Checked LICENSE and NOTICE
>>> * Pseudo-distributed cluster with FairScheduler
>>> * Ran MR and HDFS tests
>>> * Verified basic UI
>>>
>>> On Sun, Jul 24, 2016 at 1:07 PM, larry mccay  wrote:
>>>
>>> > +1 binding
>>> >
>>> > * downloaded and built from source
>>> > * checked LICENSE and NOTICE files
>>> > * verified signatures
>>> > * ran standalone tests
>>> > * installed pseudo-distributed instance on my mac
>>> > * ran through HDFS and mapreduce tests
>>> > * tested credential command
>>> > * tested webhdfs access through Apache Knox
>>> >
>>> >
>>> > On Fri, Jul 22, 2016 at 10:15 PM, Vinod Kumar Vavilapalli <
>>> > vino...@apache.org> wrote:
>>> >
>>> > > Hi all,
>>> > >
>>> > > I've created a release candidate RC0 for Apache Hadoop 2.7.3.
>>> > >
>>> > > As discussed before, this is the next maintenance release to follow up
>>> > > 2.7.2.
>>> > >
>>> > > The RC is available for validation at:
>>> > > http://home.apache.org/~vinodkv/hadoop-2.7.3-RC0/
>>> > >
>>> > > The RC tag in git is: release-2.7.3-RC0
>>> > >
>>> > > The maven artifacts are available via repository.apache.org at
>>> > > https://repository.apache.org/content/repositories/orgapachehadoop-1040/
>>> > >
>>> > > The release-notes are inside the tar-balls at location
>>> > > hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html. I
>>> > > hosted this at
>>> > > http://home.apache.org/~vinodkv/hadoop-2.7.3-RC0/releasenotes.html
>>> > > for your quick perusal.
>>> > >
>>> > > As you may have noted, a very long fix-cycle for the License & Notice
>>> > > issues (HADOOP-12893) caused 2.7.3 (along with every other Hadoop
>>> > > release) to slip by quite a bit. This release's related discussion
>>> > > thread is linked below: [1].
>>> > >
>>> > > Please try the release and vote; the vote will run for the usual 5
>>> > > days.
>>> > >
>>> > > Thanks,
>>> > > Vinod
>>> > >
>>> > > [1]: 2.7.3 release plan:
>>> > > https://www.mail-archive.com/hdfs-dev%40hadoop.apache.org/msg24439.html
>>> > > (also at http://markmail.org/thread/6yv2fyrs4jlepmmr)


   

Re: AWS S3AInputStream questions

2016-08-05 Thread Aaron Fabbri
On Tue, Aug 2, 2016 at 12:17 AM, Mr rty ff  wrote:
>
> Hi, I have a few questions about the implementation of the input stream in S3.
>
> 1) public synchronized long getPos() throws IOException {
>      return (nextReadPos < 0) ? 0 : nextReadPos;
>    }
>
> Why does it return nextReadPos and not pos?

My understanding is:

seek() is implemented lazily.  S3AInputStream keeps track of two seek
positions:

1. current position in underlying stream (pos)
2. next position to read (nextReadPos).

If the seek() implementation were eager, not lazy, we could do the seeking when
seek() is called.  In that case, I think we would only need to keep
track of #1 (pos).

Instead we keep track of where the next read() will start, and
lazily do the seek logic when it is actually needed.

getPos() is supposed to return the position of the next read(),
so nextReadPos is the correct value to return.
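
To make that concrete, here is a minimal sketch of the lazy-seek idea (my
own simplification, not the actual S3AInputStream code; reposition() and
readUnderlying() are hypothetical stand-ins for the S3 request plumbing):

    // Minimal sketch of lazy seek; not the real S3AInputStream.
    abstract class LazySeekSketch extends java.io.InputStream {
      private long pos = 0;          // position of the underlying stream
      private long nextReadPos = 0;  // where the next read() should start

      public synchronized void seek(long targetPos) {
        nextReadPos = targetPos;     // cheap: just remember the target
      }

      public synchronized long getPos() {
        // The "public" position is where the next read() will start,
        // which is why nextReadPos (not pos) is returned.
        return (nextReadPos < 0) ? 0 : nextReadPos;
      }

      @Override
      public synchronized int read() throws java.io.IOException {
        if (pos != nextReadPos) {
          reposition(nextReadPos);   // the real seek happens only now
          pos = nextReadPos;
        }
        int b = readUnderlying();
        if (b >= 0) {
          pos++;
          nextReadPos++;
        }
        return b;
      }

      abstract void reposition(long target) throws java.io.IOException;
      abstract int readUnderlying() throws java.io.IOException;
    }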

> In the member definition for pos:
>
>     /**
>      * This is the public position; the one set in {@link #seek(long)}
>      * and returned in {@link #getPos()}.
>      */

This is probably the source of your confusion.  Looks like this comment should
be changed.  I believe pos is the position of the underlying stream,
not the next read pos. They probably became different when
lazy seek was implemented.

> private long pos;

> 2) In the last lines of seekInStream you have:
>
>     // close the stream; if read, the object will be opened at the new pos
>     closeStream("seekInStream()", this.requestedStreamLen);
>     pos = targetPos;
>
> Why do you need this line? Shouldn't pos be updated with the actual skipped
> value, as you did earlier with:
>
>     if (skipped > 0) {
>       pos += skipped;
>     }

The skipped variable is not in scope at that point.

It is used to keep track of how far the underlying stream actually skipped.

The point of this logic is to balance performance between
(a) always reopening the stream at the newly-seeked position, and
(b) just reading forward and discarding unneeded bytes.

I believe (a) was found to be inefficient in some cases.

This code implements both approaches, depending on how far
forward the seek() is.  The code you are talking about here is
the (a) case where we reopen the stream on next read().

In this case, we just store the desired position (pos) which
will be used in the next call to read() to open the
stream at the offset 'pos' (see call to lazySeek()).
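
As a rough illustration, the policy might look like the sketch below (my
simplification, not the actual Hadoop code; wrappedStream, forwardSeekLimit,
and the threshold value are assumptions):

    // Sketch of the seek policy described above; not the real code.
    class SeekPolicySketch {
      private java.io.InputStream wrappedStream;  // assumed underlying S3 stream
      private long forwardSeekLimit = 64 * 1024;  // assumed threshold (bytes)
      private long pos;                           // underlying stream position

      void seekInStream(long targetPos) throws java.io.IOException {
        long diff = targetPos - pos;
        if (diff > 0 && diff < forwardSeekLimit) {
          // (b) short forward seek: read forward and discard bytes in-stream
          long skipped = wrappedStream.skip(diff);
          if (skipped > 0) {
            pos += skipped;          // track how far we actually moved
          }
          if (pos == targetPos) {
            return;                  // reached the target without reopening
          }
        }
        // (a) long or backward seek: close now; the next read() reopens the
        // object at the new offset.  'skipped' is out of scope here, so we
        // simply record the desired position.
        closeStream();
        pos = targetPos;
      }

      private void closeStream() throws java.io.IOException {
        if (wrappedStream != null) {
          wrappedStream.close();     // forces a reopen on the next read()
          wrappedStream = null;
        }
      }
    }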




[jira] [Resolved] (HADOOP-8625) Use GzipCodec to decompress data in ResetableGzipOutputStream test

2016-08-05 Thread Mike Percy (JIRA)

 [ https://issues.apache.org/jira/browse/HADOOP-8625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mike Percy resolved HADOOP-8625.

Resolution: Duplicate
  Assignee: Mike Percy

> Use GzipCodec to decompress data in ResetableGzipOutputStream test
> --
>
> Key: HADOOP-8625
> URL: https://issues.apache.org/jira/browse/HADOOP-8625
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Mike Percy
> Assignee: Mike Percy
>
> Use GzipCodec to decompress data in ResetableGzipOutputStream test.
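
For context, decompressing through Hadoop's codec machinery (rather than
java.util.zip directly) would look roughly like this sketch; it is my
illustration of the idea in the summary, not code from any patch:

    import java.io.InputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.util.ReflectionUtils;

    // Sketch: wrap a gzip-compressed stream with GzipCodec so a test
    // decompresses data through the same codec it exercises on write.
    public class GzipCodecDecompressSketch {
      public static InputStream decompress(InputStream compressed) throws Exception {
        CompressionCodec codec =
            ReflectionUtils.newInstance(GzipCodec.class, new Configuration());
        return codec.createInputStream(compressed);
      }
    }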



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
