[jira] [Reopened] (HADOOP-8151) Error handling in snappy decompressor throws invalid exceptions

2015-06-26 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du reopened HADOOP-8151:


> Error handling in snappy decompressor throws invalid exceptions
> ---
>
> Key: HADOOP-8151
> URL: https://issues.apache.org/jira/browse/HADOOP-8151
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, native
>Affects Versions: 1.0.2, 2.0.0-alpha
>Reporter: Todd Lipcon
>Assignee: Matt Foley
> Fix For: 1.0.3, 3.0.0
>
> Attachments: HADOOP-8151-branch-1.0.patch, HADOOP-8151.patch, 
> HADOOP-8151.patch
>
>
> SnappyDecompressor.c has the following code in a few places:
> {code}
> THROW(env, "Ljava/lang/InternalError", "Could not decompress data. Buffer 
> length is too small.");
> {code}
> This is incorrect, though, since the THROW macro does not expect the "L" 
> prefix before the class name. This results in a ClassNotFoundException for 
> Ljava.lang.InternalError being thrown instead of the intended exception.
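
For illustration, a minimal Java-side sketch of the same naming mismatch (not 
part of the patch; the native THROW macro presumably performs the analogous 
class lookup through JNI rather than Class.forName):
{code}
// Illustrative only: shows why the leading "L" turns the intended
// InternalError into a ClassNotFoundException for "Ljava.lang.InternalError".
public class ThrowNameSketch {
  public static void main(String[] args) throws Exception {
    // Plain binary name: resolves to java.lang.InternalError as intended.
    Class<?> ok = Class.forName("java.lang.InternalError");
    System.out.println("resolved: " + ok.getName());

    // Descriptor-style name with the "L" prefix: no class has that name,
    // so the lookup fails with ClassNotFoundException.
    try {
      Class.forName("Ljava.lang.InternalError");
    } catch (ClassNotFoundException e) {
      System.out.println("lookup failed: " + e.getMessage());
    }
  }
}
{code}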



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-8151) Error handling in snappy decompressor throws invalid exceptions

2015-06-26 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du resolved HADOOP-8151.

   Resolution: Fixed
Fix Version/s: 2.8.0

Committed the patch to branch-2.

> Error handling in snappy decompressor throws invalid exceptions
> ---
>
> Key: HADOOP-8151
> URL: https://issues.apache.org/jira/browse/HADOOP-8151
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, native
>Affects Versions: 1.0.2, 2.0.0-alpha
>Reporter: Todd Lipcon
>Assignee: Matt Foley
> Fix For: 3.0.0, 2.8.0, 1.0.3
>
> Attachments: HADOOP-8151-branch-1.0.patch, HADOOP-8151.patch, 
> HADOOP-8151.patch
>
>
> SnappyDecompressor.c has the following code in a few places:
> {code}
> THROW(env, "Ljava/lang/InternalError", "Could not decompress data. Buffer 
> length is too small.");
> {code}
> This is incorrect, though, since the THROW macro does not expect the "L" 
> prefix before the class name. This results in a ClassNotFoundException for 
> Ljava.lang.InternalError being thrown instead of the intended exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-12033) Reducer task failure with java.lang.NoClassDefFoundError: Ljava/lang/InternalError at org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect

2015-06-26 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du resolved HADOOP-12033.
-
Resolution: Duplicate

Thanks, Zhihai Xu, for confirming this. HADOOP-8151 has already been 
committed/merged to branch-2, so this JIRA is resolved as a duplicate.

> Reducer task failure with java.lang.NoClassDefFoundError: 
> Ljava/lang/InternalError at 
> org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect
> ---
>
> Key: HADOOP-12033
> URL: https://issues.apache.org/jira/browse/HADOOP-12033
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ivan Mitic
> Attachments: 0001-HADOOP-12033.patch
>
>
> We have noticed intermittent reducer task failures with the below exception:
> {code}
> Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in 
> shuffle in fetcher#9 at 
> org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134) at 
> org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376) at 
> org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:415) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) Caused by: 
> java.lang.NoClassDefFoundError: Ljava/lang/InternalError at 
> org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect(Native
>  Method) at 
> org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompress(SnappyDecompressor.java:239)
>  at 
> org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:88)
>  at 
> org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
>  at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192) at 
> org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput.shuffle(InMemoryMapOutput.java:97)
>  at 
> org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(Fetcher.java:534)
>  at 
> org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:329)
>  at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:193) 
> Caused by: java.lang.ClassNotFoundException: Ljava.lang.InternalError at 
> java.net.URLClassLoader$1.run(URLClassLoader.java:366) at 
> java.net.URLClassLoader$1.run(URLClassLoader.java:355) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> java.net.URLClassLoader.findClass(URLClassLoader.java:354) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:425) at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:358) ... 9 more 
> {code}
> Usually, the reduce task succeeds on retry. 
> Some of the symptoms are similar to HADOOP-8423, but that fix is already 
> included (this is on Hadoop 2.6).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-common-trunk-Java8 #240

2015-06-26 Thread Apache Jenkins Server
See 

Changes:

[devaraj] YARN-3826. Race condition in ResourceTrackerService leads to wrong

[devaraj] YARN-3745. SerializedException should also try to instantiate internal

[aajisaka] HDFS-8462. Implement GETXATTRS and LISTXATTRS operations for 
WebImageViewer. Contributed by Jagadesh Kiran N.

[arp] HDFS-8640. Make reserved RBW space visible through JMX. (Contributed by 
kanaka kumar avvaru)

[jlowe] MAPREDUCE-6413. TestLocalJobSubmission is failing with unknown host. 
Contributed by zhihai xu

[wang] HDFS-8665. Fix replication check in DFSTestUtils#waitForReplication.

[wang] HDFS-8546. Use try with resources in DataStorage and Storage.

[amareshwari] HADOOP-11203. Allow distcp to accept bandwidth in fraction 
MegaBytes. Contributed by Raju Bairishetti

--
[...truncated 5647 lines...]
Running org.apache.hadoop.io.TestMD5Hash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.181 sec - in 
org.apache.hadoop.io.TestMD5Hash
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestSequenceFileAppend
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.722 sec - in 
org.apache.hadoop.io.TestSequenceFileAppend
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestMapFile
Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.114 sec - in 
org.apache.hadoop.io.TestMapFile
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestWritableName
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.182 sec - in 
org.apache.hadoop.io.TestWritableName
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestSortedMapWritable
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.204 sec - in 
org.apache.hadoop.io.TestSortedMapWritable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestSequenceFileSync
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.726 sec - in 
org.apache.hadoop.io.TestSequenceFileSync
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.retry.TestFailoverProxy
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.097 sec - in 
org.apache.hadoop.io.retry.TestFailoverProxy
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.retry.TestDefaultRetryPolicy
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.332 sec - in 
org.apache.hadoop.io.retry.TestDefaultRetryPolicy
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.retry.TestRetryProxy
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.202 sec - in 
org.apache.hadoop.io.retry.TestRetryProxy
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestDefaultStringifier
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.341 sec - in 
org.apache.hadoop.io.TestDefaultStringifier
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestBloomMapFile
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.282 sec - in 
org.apache.hadoop.io.TestBloomMapFile
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestBytesWritable
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.187 sec - in 
org.apache.hadoop.io.TestBytesWritable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestWritableUtils
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.154 sec - in 
org.apache.hadoop.io.TestWritableUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestBooleanWritable
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.186 sec - in 
org.apache.hadoop.io.TestBooleanWritable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestDataByteBuffers
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.36 sec - in 
org.apache.hadoop.io.TestDataByteBuffers
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=76

Build failed in Jenkins: Hadoop-Common-trunk #1538

2015-06-26 Thread Apache Jenkins Server
See 

Changes:

[devaraj] YARN-3826. Race condition in ResourceTrackerService leads to wrong

[devaraj] YARN-3745. SerializedException should also try to instantiate internal

[aajisaka] HDFS-8462. Implement GETXATTRS and LISTXATTRS operations for 
WebImageViewer. Contributed by Jagadesh Kiran N.

[arp] HDFS-8640. Make reserved RBW space visible through JMX. (Contributed by 
kanaka kumar avvaru)

[jlowe] MAPREDUCE-6413. TestLocalJobSubmission is failing with unknown host. 
Contributed by zhihai xu

[wang] HDFS-8665. Fix replication check in DFSTestUtils#waitForReplication.

[wang] HDFS-8546. Use try with resources in DataStorage and Storage.

[amareshwari] HADOOP-11203. Allow distcp to accept bandwidth in fraction 
MegaBytes. Contributed by Raju Bairishetti

--
[...truncated 5185 lines...]
Running org.apache.hadoop.util.TestLightWeightGSet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.172 sec - in 
org.apache.hadoop.util.TestLightWeightGSet
Running org.apache.hadoop.util.TestPureJavaCrc32
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.59 sec - in 
org.apache.hadoop.util.TestPureJavaCrc32
Running org.apache.hadoop.util.TestStringUtils
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.326 sec - in 
org.apache.hadoop.util.TestStringUtils
Running org.apache.hadoop.util.TestProtoUtil
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.258 sec - in 
org.apache.hadoop.util.TestProtoUtil
Running org.apache.hadoop.util.TestSignalLogger
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.19 sec - in 
org.apache.hadoop.util.TestSignalLogger
Running org.apache.hadoop.util.TestDiskChecker
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.183 sec - in 
org.apache.hadoop.util.TestDiskChecker
Running org.apache.hadoop.util.TestShutdownThreadsHelper
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.171 sec - in 
org.apache.hadoop.util.TestShutdownThreadsHelper
Running org.apache.hadoop.util.TestCacheableIPList
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.293 sec - in 
org.apache.hadoop.util.TestCacheableIPList
Running org.apache.hadoop.util.TestLineReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.198 sec - in 
org.apache.hadoop.util.TestLineReader
Running org.apache.hadoop.util.TestIdentityHashStore
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.204 sec - in 
org.apache.hadoop.util.TestIdentityHashStore
Running org.apache.hadoop.util.TestClasspath
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.331 sec - in 
org.apache.hadoop.util.TestClasspath
Running org.apache.hadoop.util.TestApplicationClassLoader
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.238 sec - in 
org.apache.hadoop.util.TestApplicationClassLoader
Running org.apache.hadoop.util.TestShell
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.232 sec - in 
org.apache.hadoop.util.TestShell
Running org.apache.hadoop.util.TestShutdownHookManager
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.163 sec - in 
org.apache.hadoop.util.TestShutdownHookManager
Running org.apache.hadoop.util.TestConfTest
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.241 sec - in 
org.apache.hadoop.util.TestConfTest
Running org.apache.hadoop.util.TestHttpExceptionUtils
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.676 sec - in 
org.apache.hadoop.util.TestHttpExceptionUtils
Running org.apache.hadoop.util.TestJarFinder
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.761 sec - in 
org.apache.hadoop.util.TestJarFinder
Running org.apache.hadoop.util.hash.TestHash
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.471 sec - in 
org.apache.hadoop.util.hash.TestHash
Running org.apache.hadoop.util.TestLightWeightCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.65 sec - in 
org.apache.hadoop.util.TestLightWeightCache
Running org.apache.hadoop.util.TestNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.167 sec - in 
org.apache.hadoop.util.TestNativeCodeLoader
Running org.apache.hadoop.util.TestReflectionUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.635 sec - in 
org.apache.hadoop.util.TestReflectionUtils
Running org.apache.hadoop.crypto.TestCryptoStreams
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.744 sec - 
in org.apache.hadoop.crypto.TestCryptoStreams
Running org.apache.hadoop.crypto.TestCryptoStreamsForLocalFS
Tests run: 14, Failures: 0, Errors: 0, Skipped: 5, Time elapsed: 12.137 sec - 
in org.apache.hadoop.crypto.TestCryptoStreamsForLocalFS
Running org.apache.hadoop.crypto.TestOpensslCipher
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.194 sec - in 
org.ap

[jira] [Created] (HADOOP-12123) Add example test-patch plugin for commit message format

2015-06-26 Thread Sean Busbey (JIRA)
Sean Busbey created HADOOP-12123:


 Summary: Add example test-patch plugin for commit message format
 Key: HADOOP-12123
 URL: https://issues.apache.org/jira/browse/HADOOP-12123
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Reporter: Sean Busbey


Add an example test plugin that ensures, when we get a diff that includes a 
commit message, that the commit message starts with a JIRA ID.
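
A minimal sketch of the intended check, assuming a simple project-key pattern 
(the real test-patch plugins are shell-based; the class and regex here are 
illustrative only):
{code}
// Hypothetical illustration of the commit-subject check, not the actual plugin.
import java.util.regex.Pattern;

public class CommitMessageCheck {
  // Assumed pattern: an uppercase project key, a dash, and an issue number.
  private static final Pattern JIRA_ID =
      Pattern.compile("^[A-Z][A-Z0-9]+-\\d+\\b.*");

  public static boolean startsWithJiraId(String subject) {
    return JIRA_ID.matcher(subject).matches();
  }

  public static void main(String[] args) {
    System.out.println(startsWithJiraId("HADOOP-12123. Add example plugin")); // true
    System.out.println(startsWithJiraId("Fix typo in README"));               // false
  }
}
{code}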



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-12036) Consolidate all of the cmake extensions in one directory

2015-06-26 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-12036.
---
  Resolution: Fixed
   Fix Version/s: 2.8.0
Target Version/s: 2.8.0

> Consolidate all of the cmake extensions in one directory
> 
>
> Key: HADOOP-12036
> URL: https://issues.apache.org/jira/browse/HADOOP-12036
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Allen Wittenauer
>Assignee: Alan Burlison
> Fix For: 2.8.0
>
> Attachments: HADOOP-12036.001.patch, HADOOP-12036.002.patch, 
> HADOOP-12036.004.patch, HADOOP-12036.005.patch
>
>
> Rather than having a half-dozen redefinitions, custom extensions, etc., we 
> should move them all to one location so that the cmake environment is 
> consistent between the various native components.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12124) Add HTrace support for FsShell

2015-06-26 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-12124:
-

 Summary: Add HTrace support for FsShell
 Key: HADOOP-12124
 URL: https://issues.apache.org/jira/browse/HADOOP-12124
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


Add HTrace support for FsShell



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12125) Retrying UnknownHostException on a proxy does not actually retry hostname resolution

2015-06-26 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-12125:
---

 Summary: Retrying UnknownHostException on a proxy does not 
actually retry hostname resolution
 Key: HADOOP-12125
 URL: https://issues.apache.org/jira/browse/HADOOP-12125
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: Jason Lowe


When RetryInvocationHandler attempts to retry an UnknownHostException, the 
hostname is not resolved again.  The InetSocketAddress in the ConnectionId 
has cached the fact that the hostname is unresolvable, and when the proxy tries 
to set up a new Connection object with that ConnectionId, it checks whether the 
(cached) resolution result is unresolved and immediately throws.

The end result is that we sleep and retry for no benefit.  The hostname 
resolution is never attempted again.
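
A minimal sketch of the underlying java.net behavior being described 
(RetryInvocationHandler and ConnectionId are not reproduced here; the host 
name is hypothetical):
{code}
// Sketch of the caching behavior in java.net.InetSocketAddress, not Hadoop's
// IPC code. If the host is unresolvable when the object is constructed, the
// unresolved result is cached inside that instance for its lifetime.
import java.net.InetSocketAddress;

public class StaleResolutionSketch {
  public static void main(String[] args) throws InterruptedException {
    InetSocketAddress cached = new InetSocketAddress("badhost.example.com", 8020);
    System.out.println("unresolved? " + cached.isUnresolved());

    // Sleeping and reusing the same instance never re-resolves -- this is the
    // "sleep and retry for no benefit" described above.
    Thread.sleep(1000);
    System.out.println("still unresolved? " + cached.isUnresolved());

    // Re-resolution requires constructing a fresh InetSocketAddress.
    InetSocketAddress fresh = new InetSocketAddress("badhost.example.com", 8020);
    System.out.println("fresh attempt unresolved? " + fresh.isUnresolved());
  }
}
{code}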



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] More Maintenance Releases

2015-06-26 Thread Andrew Wang
>
> +1 for ditching CHANGES.txt.
>
> I reached out to Allen recently. He wanted to clean up his scripts more
> before dropping CHANGES.txt altogether. Should we target 2.8 for this?


I read through our previous threads on this topic, and Allen had some
compatibility concerns about dropping CHANGES.txt in a branch-2 release.

Allen, do you still have these concerns? You mentioned having written
scripts to parse CHANGES.txt. I've written scripts for similar tasks, but
what I turned to were git log and JIRA, not CHANGES.txt. I was wondering if
this is a relic from the SVN days, since svn log was dog slow. Git log is
quite a bit better.

My inclination is to just drop CHANGES.txt entirely, including in branch-2.
We already have release notes, JIRA, as well as the per-release webpage
which talks up the new features. CHANGES.txt also has never been a reliable
source of information, and I doubt it has many consumers (if any).

Best,
Andrew


[jira] [Created] (HADOOP-12126) Configuration might use ApplicationClassLoader to create XML parser

2015-06-26 Thread Laurent Goujon (JIRA)
Laurent Goujon created HADOOP-12126:
---

 Summary: Configuration might use ApplicationClassLoader to create 
XML parser
 Key: HADOOP-12126
 URL: https://issues.apache.org/jira/browse/HADOOP-12126
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Reporter: Laurent Goujon


{{org.apache.hadoop.conf.Configuration}} creates a new DocumentBuilder to parse 
the XML config files, but it doesn't specify which classloader to use to 
discover and instantiate the XML parser.

DocumentBuilderFactory relies on the service provider mechanism, which by 
default uses the context classloader. If classpath isolation is turned on, one 
might expect that Configuration will only load classes from the system 
classloader, but it turns out that the context classloader is set to 
ApplicationClassLoader, so an XML parser might be loaded from the user 
classpath.
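
A minimal sketch of one way to pin the JAXP lookup to a known classloader (the 
save/restore pattern and class name are assumptions for illustration, not the 
actual Configuration code):
{code}
// Sketch only: forces the DocumentBuilderFactory service lookup to use the
// classloader that loaded this class instead of the thread's context
// classloader (which may be ApplicationClassLoader under classpath isolation).
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;

public class PinnedParserSketch {
  public static DocumentBuilder newDocumentBuilder() throws ParserConfigurationException {
    Thread t = Thread.currentThread();
    ClassLoader saved = t.getContextClassLoader();
    try {
      t.setContextClassLoader(PinnedParserSketch.class.getClassLoader());
      return DocumentBuilderFactory.newInstance().newDocumentBuilder();
    } finally {
      // Always restore the original context classloader.
      t.setContextClassLoader(saved);
    }
  }
}
{code}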



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12127) some personalities are still using releaseaudit instead of asflicense

2015-06-26 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-12127:
-

 Summary: some personalities are still using releaseaudit instead 
of asflicense
 Key: HADOOP-12127
 URL: https://issues.apache.org/jira/browse/HADOOP-12127
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Allen Wittenauer
Priority: Trivial


Simple bug: the releaseaudit test was renamed to asflicense.  Some personalities 
are still using the old name and therefore doing the wrong thing.  We just need 
to rename the references in the personality files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12128) test-patch docker mode doesn't support non-project personalities

2015-06-26 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-12128:
-

 Summary: test-patch docker mode doesn't support non-project 
personalities
 Key: HADOOP-12128
 URL: https://issues.apache.org/jira/browse/HADOOP-12128
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Allen Wittenauer


If you provide a personality file that doesn't match the project name, docker 
mode won't use it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12129) rework test-patch bug system support

2015-06-26 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-12129:
-

 Summary: rework test-patch bug system support
 Key: HADOOP-12129
 URL: https://issues.apache.org/jira/browse/HADOOP-12129
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Allen Wittenauer


WARNING: this is a fairly big project.

See the first comment for a brain dump on the issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)