Hello,

I found an issue with HADOOP-10027: SnappyCompressor can't load the native
library properly. The test TestHadoopNative fails in hadoop-apache2 version 0.10.

Stack trace:
Caused by: java.lang.RuntimeException: native snappy library not available: SnappyCompressor has not been loaded.
    at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:72)
    at org.apache.hadoop.io.compress.SnappyCodec.getDecompressorType(SnappyCodec.java:195)
    at com.facebook.presto.hadoop.HadoopNative.loadAllCodecs(HadoopNative.java:71)
    at com.facebook.presto.hadoop.HadoopNative.requireHadoopNative(HadoopNative.java:52)
    ... 24 more
After looking into the code, I found this static initializer in SnappyCompressor.java:
static {
  if (NativeCodeLoader.isNativeCodeLoaded() &&
      NativeCodeLoader.buildSupportsSnappy()) {
    try {
      initIDs();
      nativeSnappyLoaded = true;
    } catch (Throwable t) {
      LOG.error("failed to load SnappyCompressor", t);
    }
  }
}
The native method initIDs() fails, complaining there is no such field "clazz".
Because that error is thrown inside the static block and caught by the
catch (Throwable t) clause, it is only logged: nativeSnappyLoaded never becomes
true, and the later call to SnappyCodec.checkNativeCodeLoaded() throws the
RuntimeException shown in the stack trace above.
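
To make that path concrete, here is a minimal, self-contained Java sketch. It is
hypothetical illustration code, not the actual Hadoop or hadoop-apache2 source;
the NoSuchFieldError stand-in assumes a mismatch between the Java class and the
native library around the "clazz" field, which is what the reported error
suggests:

public class SnappyLoadSketch {
    private static volatile boolean nativeSnappyLoaded = false;

    static {
        try {
            initIDsStandIn();
            nativeSnappyLoaded = true;   // never reached when loading fails
        } catch (Throwable t) {
            // Mirrors the catch (Throwable t) in SnappyCompressor: the error is
            // only logged, so the flag silently stays false.
            System.err.println("failed to load SnappyCompressor: " + t);
        }
    }

    // Stand-in for the JNI initIDs(); assumes the native side looks up a
    // "clazz" field the Java class no longer declares, which the JVM surfaces
    // as an Error rather than an Exception.
    private static void initIDsStandIn() {
        throw new NoSuchFieldError("clazz");
    }

    // Mirrors SnappyCodec.checkNativeCodeLoaded(): the swallowed error
    // resurfaces here as the RuntimeException at the top of the stack trace.
    static void checkNativeCodeLoaded() {
        if (!nativeSnappyLoaded) {
            throw new RuntimeException("native snappy library not available: "
                    + "SnappyCompressor has not been loaded.");
        }
    }

    public static void main(String[] args) {
        checkNativeCodeLoaded();
    }
}

Compiling and running this class reproduces the same "native snappy library not
available" failure with no native library involved, which is why the real root
cause only shows up in the earlier "failed to load SnappyCompressor" log line.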
> On Mar 21, 2017, at 4:32 PM, Poepping, Thomas <[email protected]> wrote:
>
>
>
> On 3/21/17, 3:17 PM, "Kuhu Shukla" <[email protected]> wrote:
>
>
> +1 (non-binding)
>
> - Verified signatures.
> - Downloaded and built from source tar.gz.
> - Deployed a pseudo-distributed cluster on Mac Sierra.
> - Ran example Sleep job successfully.
> - Deployed latest Apache Tez 0.9 and ran sample Tez orderedwordcount
> successfully.
>
> Thank you Junping and everyone else who worked on getting this release out.
>
> Warm Regards,
> Kuhu
> On Tuesday, March 21, 2017, 3:42:46 PM CDT, Eric Badger
> <[email protected]> wrote:
> +1 (non-binding)
>
> - Verified checksums and signatures of all files
> - Built from source on MacOS Sierra via JDK 1.8.0 u65
> - Deployed single-node cluster
> - Successfully ran a few sample jobs
>
> Thanks,
>
> Eric
>
> On Tuesday, March 21, 2017 2:56 PM, John Zhuge <[email protected]>
> wrote:
>
>
>
> +1. Thanks for the great effort, Junping!
>
>
> - Verified checksums and signatures of the tarballs
> - Built source code with Java 1.8.0_66-b17 on Mac OS X 10.12.3
> - Built source and native code with Java 1.8.0_111 on Centos 7.2.1511
> - Cloud connectors:
> - s3a: integration tests, basic fs commands
> - adl: live unit tests, basic fs commands. See notes below.
> - Deployed a pseudo cluster, passed the following sanity tests in
> both insecure and SSL mode:
> - HDFS: basic dfs, distcp, ACL commands
> - KMS and HttpFS: basic tests
> - MapReduce wordcount
> - balancer start/stop
>
>
> Needs the following JIRAs to pass all ADL tests:
>
> - HADOOP-14205. No FileSystem for scheme: adl. Contributed by John Zhuge.
> - HDFS-11132. Allow AccessControlException in contract tests when
> getFileStatus on subdirectory of existing files. Contributed by
> Vishwajeet Dusane
> - HADOOP-13928. TestAdlFileContextMainOperationsLive.testGetFileContext1
> runtime error. (John Zhuge via lei)
>
>
> On Mon, Mar 20, 2017 at 10:31 AM, John Zhuge <[email protected]> wrote:
>
>> Yes, it only affects ADL. There is a workaround of adding these 2
>> properties to core-site.xml:
>>
>> <property>
>>   <name>fs.adl.impl</name>
>>   <value>org.apache.hadoop.fs.adl.AdlFileSystem</value>
>> </property>
>>
>> <property>
>>   <name>fs.AbstractFileSystem.adl.impl</name>
>>   <value>org.apache.hadoop.fs.adl.Adl</value>
>> </property>
>>
>> I have the initial patch ready but hitting these live unit test failures:
>>
>> Failed tests:
>>   TestAdlFileSystemContractLive.runTest:60->FileSystemContractBaseTest.testListStatus:257
>>     expected:<1> but was:<10>
>>
>> Tests in error:
>>   TestAdlFileContextMainOperationsLive>FileContextMainOperationsBaseTest.testMkdirsFailsForSubdirectoryOfExistingFile:254
>>     » AccessControl
>>   TestAdlFileSystemContractLive.runTest:60->FileSystemContractBaseTest.testMkdirsFailsForSubdirectoryOfExistingFile:190
>>     » AccessControl
>>
>>
>> Stay tuned...
>>
>> John Zhuge
>> Software Engineer, Cloudera
>>
>> On Mon, Mar 20, 2017 at 10:02 AM, Junping Du <[email protected]> wrote:
>>
>>> Thank you for reporting the issue, John! Does this issue only affect ADL
>>> (Azure Data Lake), which is a new feature in 2.8, rather than other
>>> existing FS? If so, I think we can leave the fix to 2.8.1, given that
>>> this is not a regression but a new feature that got broken.
>>>
>>>
>>> Thanks,
>>>
>>>
>>> Junping
>>> ------------------------------
>>> *From:* John Zhuge <[email protected]>
>>> *Sent:* Monday, March 20, 2017 9:07 AM
>>> *To:* Junping Du
>>> *Cc:* [email protected]; [email protected];
>>> [email protected]; [email protected]
>>> *Subject:* Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)
>>>
>>> Discovered https://issues.apache.org/jira/browse/HADOOP-14205 "No
>>> FileSystem for scheme: adl".
>>>
>>> The issue was caused by backporting HADOOP-13037 to branch-2 and earlier.
>>> HADOOP-12666 should not be backported, but some of its changes are needed:
>>> the fs.adl.impl property in core-default.xml and hadoop-tools-dist/pom.xml.
>>>
>>> I am working on a patch.
>>>
>>>
>>> John Zhuge
>>> Software Engineer, Cloudera
>>>
>>> On Fri, Mar 17, 2017 at 2:18 AM, Junping Du <[email protected]> wrote:
>>>
>>>> Hi all,
>>>> With fix of HDFS-11431 get in, I've created a new release candidate
>>>> (RC3) for Apache Hadoop 2.8.0.
>>>>
>>>> This is the next minor release after 2.7.0, which was released more than
>>>> a year ago. It comprises 2,900+ fixes, improvements, and new features.
>>>> Most of these commits are released for the first time in branch-2.
>>>>
>>>> More information about the 2.8.0 release plan can be found here:
>>>> https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.8+Release
>>>>
>>>> New RC is available at: http://home.apache.org/~junping_du/hadoop-2.8.0-RC3
>>>>
>>>> The RC tag in git is: release-2.8.0-RC3, and the latest commit id
>>>> is: 91f2b7a13d1e97be65db92ddabc627cc29ac0009
>>>>
>>>> The maven artifacts are available via repository.apache.org at:
>>>> https://repository.apache.org/content/repositories/orgapachehadoop-1057
>>>>
>>>> Please try the release and vote; the vote will run for the usual 5
>>>> days, ending on 03/22/2017 PDT time.
>
>>>>
>>>> Thanks,
>>>>
>>>> Junping
>>>>
>>>
>>>
>>
>
>
>
> --
> John
>
>