[jira] [Resolved] (HADOOP-10558) java.net.UnknownHostException: Invalid host name: local host is: (unknown)
[ https://issues.apache.org/jira/browse/HADOOP-10558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-10558.
Resolution: Invalid

java.net.UnknownHostException: Invalid host name: local host is: (unknown)

Key: HADOOP-10558
URL: https://issues.apache.org/jira/browse/HADOOP-10558
Project: Hadoop Common
Issue Type: Bug
Environment: two-node cluster, Ubuntu 12.04 LTS and Java 1.7.0_25 on both; node 1: Core i5, 4 GB RAM; node 2: Core i3, 4 GB RAM
Reporter: Sami Abobala
Labels: debian, hadoop, hadoop-3.0.0-SNAPSHOT, mapreduce, ubuntu

I get this exception every time I try to run a map-reduce job. I went to http://wiki.apache.org/hadoop/UnknownHost and tried every possible solution, and I still get the same result:

Task Id : attempt_1398945803120_0001_m_04_0, Status : FAILED
Container launch failed for container_1398945803120_0001_01_06 : java.lang.reflect.UndeclaredThrowableException
...
Caused by: com.google.protobuf.ServiceException: java.net.UnknownHostException: Invalid host name: local host is: (unknown); destination host is: fatima-HP-ProBook-4520s:8042; java.net.UnknownHostException; For more details see: http://wiki.apache.org/hadoop/UnknownHost
...

--
This message was sent by Atlassian JIRA (v6.2#6252)
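Independent of the wiki checklist, a quick way to narrow this down is to test hostname resolution directly from the JVM: "local host is: (unknown)" normally means `InetAddress.getLocalHost()` fails on that node. A minimal sketch (the `HostCheck` class name is illustrative; the destination host is the one from the stack trace, which must also resolve from the submitting node):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class HostCheck {
    // Returns the address a hostname resolves to, or null if resolution fails.
    static String resolve(String host) {
        try {
            return InetAddress.getByName(host).getHostAddress();
        } catch (UnknownHostException e) {
            return null;
        }
    }

    public static void main(String[] args) throws Exception {
        // Hadoop reporting "local host is: (unknown)" usually means this call
        // throws; the fix is an /etc/hosts (or /etc/hostname) entry matching
        // the machine's name.
        System.out.println("local host: " + InetAddress.getLocalHost().getCanonicalHostName());
        // The destination host from the stack trace must resolve on this node too:
        System.out.println("fatima-HP-ProBook-4520s -> " + resolve("fatima-HP-ProBook-4520s"));
    }
}
```

If either lookup fails, the wiki page's /etc/hosts fixes apply before any Hadoop configuration change.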
Build failed in Jenkins: Hadoop-Common-0.23-Build #1005
See https://builds.apache.org/job/Hadoop-Common-0.23-Build/1005/changes

Changes:
[gkesavan] update hudson env
--
[...truncated 20874 lines...]
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.259 sec
Running org.apache.hadoop.util.TestHostsFileReader
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.185 sec
Running org.apache.hadoop.util.TestReflectionUtils
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.493 sec
Running org.apache.hadoop.util.TestDiskChecker
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.492 sec
Running org.apache.hadoop.util.TestShutdownHookManager
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.143 sec
Running org.apache.hadoop.util.TestStringInterner
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.117 sec
Running org.apache.hadoop.util.TestRunJar
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.129 sec
Running org.apache.hadoop.util.TestJarFinder
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.81 sec
Running org.apache.hadoop.util.TestShell
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.199 sec
Running org.apache.hadoop.util.TestPureJavaCrc32
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.292 sec
Running org.apache.hadoop.test.TestMultithreadedTestUtil
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.164 sec
Running org.apache.hadoop.test.TestTimedOutTestsListener
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.117 sec
Running org.apache.hadoop.ipc.TestRPCCompatibility
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.09 sec
Running org.apache.hadoop.ipc.TestAvroRpc
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.544 sec
Running org.apache.hadoop.ipc.TestServer
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.378 sec
Running org.apache.hadoop.ipc.TestIPC
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 72.271 sec
Running org.apache.hadoop.ipc.TestSocketFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.267 sec
Running org.apache.hadoop.ipc.TestSaslRPC
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.038 sec
Running org.apache.hadoop.ipc.TestIPCServerResponder
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.518 sec
Running org.apache.hadoop.ipc.TestRPC
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.883 sec
Running org.apache.hadoop.ipc.TestMiniRPCBenchmark
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.861 sec
Running org.apache.hadoop.record.TestBuffer
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.055 sec
Running org.apache.hadoop.record.TestRecordIO
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.209 sec
Running org.apache.hadoop.record.TestRecordVersioning
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.191 sec
Running org.apache.hadoop.metrics.TestMetricsServlet
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.083 sec
Running org.apache.hadoop.metrics.spi.TestOutputRecord
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.043 sec
Running org.apache.hadoop.metrics.ganglia.TestGangliaContext
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.156 sec
Running org.apache.hadoop.io.TestObjectWritableProtos
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.323 sec
Running org.apache.hadoop.io.TestWritableUtils
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.117 sec
Running org.apache.hadoop.io.TestArrayPrimitiveWritable
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.136 sec
Running org.apache.hadoop.io.compress.TestBlockDecompressorStream
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.083 sec
Running org.apache.hadoop.io.compress.zlib.TestZlibCompressorDecompressor
Tests run: 7, Failures: 0, Errors: 0, Skipped: 7, Time elapsed: 0.168 sec
Running org.apache.hadoop.io.compress.lz4.TestLz4CompressorDecompressor
Tests run: 12, Failures: 0, Errors: 0, Skipped: 12, Time elapsed: 0.159 sec
Running org.apache.hadoop.io.compress.TestCodec
Tests run: 23, Failures: 1, Errors: 0, Skipped: 1, Time elapsed: 58.069 sec  FAILURE!
testSnappyCodec(org.apache.hadoop.io.compress.TestCodec)  Time elapsed: 12 sec  FAILURE!
java.lang.AssertionError: Snappy native available but Hadoop native not
	at org.junit.Assert.fail(Assert.java:91)
	at org.apache.hadoop.io.compress.TestCodec.__CLR3_0_2v4ia4819ya(TestCodec.java:132)
	at org.apache.hadoop.io.compress.TestCodec.testSnappyCodec(TestCodec.java:125)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at
Re: Jenkins build fails
I'm looking into this.
-giri

On Tue, Jul 8, 2014 at 8:15 PM, Akira AJISAKA <ajisa...@oss.nttdata.co.jp> wrote:
> Filed https://issues.apache.org/jira/browse/HADOOP-10804
> Please correct me if I am wrong.
>
> Thanks,
> Akira
>
> (2014/07/09 11:24), Akira AJISAKA wrote:
>> Hi Hadoop developers,
>>
>> Jenkins is now failing with the message below. I think this is caused by the upgrade of the Jenkins server: the upgrade also updated the svn client, so the following errors occur. It can be fixed by executing 'svn upgrade' before executing other svn commands. I'll file a JIRA and create a patch shortly.
>>
>> Regards,
>> Akira
>>
>> ==
>> Testing patch for HADOOP-10661.
>> ==
>> svn: E155036: Please see the 'svn upgrade' command
>> svn: E155036: The working copy at '/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk' is too old (format 10) to work with client version '1.8.8 (r1568071)' (expects format 31). You need to upgrade the working copy first.
>> [the same pair of E155036 errors repeats three more times]
>>
>> Build step 'Execute shell' marked build as failure
>> Archiving artifacts
>> Description set: HADOOP-10661
>> Recording test results
>> Finished: FAILURE

--
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to which it is addressed and may contain information that is confidential, privileged and exempt from disclosure under applicable law. If the reader of this message is not the intended recipient, you are hereby notified that any printing, copying, dissemination, distribution, disclosure or forwarding of this communication is strictly prohibited. If you have received this communication in error, please contact the sender immediately and delete it from your system. Thank You.
Re: Jenkins build fails
I took care of the svn upgrade issue.
-giri

On Wed, Jul 9, 2014 at 5:05 AM, Giridharan Kesavan <gkesa...@hortonworks.com> wrote:
> I'm looking into this.
> -giri
>
> On Tue, Jul 8, 2014 at 8:15 PM, Akira AJISAKA <ajisa...@oss.nttdata.co.jp> wrote:
>> Filed https://issues.apache.org/jira/browse/HADOOP-10804
>> Please correct me if I am wrong.
>>
>> [...]
Re: Jenkins Build Slaves
Build jobs are now configured to run on the newer set of slaves.
-giri

On Mon, Jul 7, 2014 at 4:12 PM, Giridharan Kesavan <gkesa...@hortonworks.com> wrote:
> All,
>
> Yahoo is in the process of retiring all the hadoop jenkins build slaves, hadoop[1-9], and replacing them with a newer set of beefier hosts. These new machines are configured with ubuntu-14.04. Over the next couple of days I will be configuring the build jobs to run on these newly configured build slaves.
>
> To automate the installation of tools and build libraries I have put together ansible scripts; here is the link to the toolchain repo: https://github.com/apache/toolchain
>
> During the transition, the old build slaves will be accessible, and are expected to be shut down by 07/15. I will send out an update later this week when this transition is complete. Meanwhile, I would like to request the project owners to remove/clean up any stale jenkins jobs for their respective projects and help with any build issues to make this transition seamless.
>
> Thanks,
> Giri
Re: Jenkins Build Slaves
Hi Giri,

Is pkgconfig deployed on the new Jenkins slaves? I noticed this build failed: https://builds.apache.org/job/PreCommit-HADOOP-Build/4237/

Looking at the console output, it appears the HDFS native code failed to build due to missing pkgconfig:

[exec] CMake Error at /usr/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:108 (message):
[exec]   Could NOT find PkgConfig (missing: PKG_CONFIG_EXECUTABLE)

Chris Nauroth
Hortonworks
http://hortonworks.com/

On Wed, Jul 9, 2014 at 7:08 AM, Giridharan Kesavan <gkesa...@hortonworks.com> wrote:
> Build jobs are now configured to run on the newer set of slaves.
> -giri
>
> [...]
Re: Jenkins Build Slaves
I don't think so; let me fix that. Thanks, Chris, for pointing that out.
-giri

On Wed, Jul 9, 2014 at 9:50 AM, Chris Nauroth <cnaur...@hortonworks.com> wrote:
> Hi Giri,
>
> Is pkgconfig deployed on the new Jenkins slaves? I noticed this build failed: https://builds.apache.org/job/PreCommit-HADOOP-Build/4237/
>
> Looking at the console output, it appears the HDFS native code failed to build due to missing pkgconfig:
>
> [exec] CMake Error at /usr/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:108 (message):
> [exec]   Could NOT find PkgConfig (missing: PKG_CONFIG_EXECUTABLE)
>
> Chris Nauroth
> Hortonworks
> http://hortonworks.com/
>
> [...]
Re: Jenkins build fails
Now builds are failing because of this. Please make sure the build works with -Pnative.

[exec] CMake Error at /usr/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:108 (message):
[exec]   Could NOT find PkgConfig (missing: PKG_CONFIG_EXECUTABLE)
[exec] Call Stack (most recent call first):
[exec]   /usr/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:315 (_FPHSA_FAILURE_MESSAGE)
[exec]   /usr/share/cmake-2.8/Modules/FindPkgConfig.cmake:106 (find_package_handle_standard_args)
[exec]   main/native/fuse-dfs/CMakeLists.txt:23 (find_package)

On 7/9/14, 9:05 AM, Giridharan Kesavan <gkesa...@hortonworks.com> wrote:
> I took care of the svn upgrade issue.
> -giri
>
> On Wed, Jul 9, 2014 at 5:05 AM, Giridharan Kesavan <gkesa...@hortonworks.com> wrote:
>> I'm looking into this.
>> -giri
>>
>> [...]
Re: Branching 2.5
Folks,

We have 10 blockers for 2.5. Can the people working on them revisit and see if they are really blockers? If they are, can we try to get them in soon? It would be nice to get an RC out by the end of this week, or at least early next week.

Thanks,
Karthik

On Wed, Jul 2, 2014 at 11:32 PM, Karthik Kambatla <ka...@cloudera.com> wrote:
> I just:
> 1. moved non-blocker 2.5 JIRAs to 2.6,
> 2. created branch-2.5 and added sections for 2.6.0 in all CHANGES.txt in trunk and branch-2,
> 3. will create branch-2.5.0 when we are ready to create an RC.
>
> There are 11 pending blockers for 2.5.0: http://s.apache.org/vJg
>
> Committers - please exercise caution when merging to branch-2.5, and target non-blockers preferably to 2.6.
>
> On Wed, Jul 2, 2014 at 10:24 PM, Karthik Kambatla <ka...@cloudera.com> wrote:
>> Committers,
>>
>> I am working on branching 2.5. Will send an update as soon as I am done branching.
[jira] [Created] (HADOOP-10805) ndfs hdfsDelete should check the return boolean
Colin Patrick McCabe created HADOOP-10805:

Summary: ndfs hdfsDelete should check the return boolean
Key: HADOOP-10805
URL: https://issues.apache.org/jira/browse/HADOOP-10805
Project: Hadoop Common
Issue Type: Sub-task
Components: native
Affects Versions: HADOOP-10388
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe

The delete RPC to the NameNode returns a boolean. We need to check this in the pure native client to ensure that the delete actually succeeded.
[jira] [Resolved] (HADOOP-10805) ndfs hdfsDelete should check the return boolean
[ https://issues.apache.org/jira/browse/HADOOP-10805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin Patrick McCabe resolved HADOOP-10805.
Resolution: Fixed
Fix Version/s: HADOOP-10388
Target Version/s: HADOOP-10388

ndfs hdfsDelete should check the return boolean

Key: HADOOP-10805
URL: https://issues.apache.org/jira/browse/HADOOP-10805
Project: Hadoop Common
Issue Type: Sub-task
Components: native
Affects Versions: HADOOP-10388
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Fix For: HADOOP-10388
Attachments: HADOOP-10805-pnative.001.patch

The delete RPC to the NameNode returns a boolean. We need to check this in the pure native client to ensure that the delete actually succeeded.
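The bug class here is generic: delete APIs that report failure through a boolean return value rather than an exception are easy to misuse, because the result can be silently dropped. The native-client patch itself is not shown in this thread; as an illustration of the same pattern in plain Java (using `java.io.File#delete`, which has the same boolean-return contract; the `DeleteCheck` class name is hypothetical):

```java
import java.io.File;
import java.io.IOException;

public class DeleteCheck {
    // Wraps a boolean-returning delete so a failure cannot be silently
    // ignored - the same kind of check HADOOP-10805 adds on the RPC result.
    static void deleteOrThrow(File f) throws IOException {
        if (!f.delete()) {
            throw new IOException("delete failed for " + f);
        }
    }

    public static void main(String[] args) throws IOException {
        File tmp = File.createTempFile("hadoop-10805-demo", ".tmp");
        deleteOrThrow(tmp);          // file exists, delete() returns true
        try {
            deleteOrThrow(tmp);      // file is gone, delete() returns false
        } catch (IOException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```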
[jira] [Created] (HADOOP-10807) GenericOptionsParser needs updating for Hadoop 2.x+
Allen Wittenauer created HADOOP-10807:

Summary: GenericOptionsParser needs updating for Hadoop 2.x+
Key: HADOOP-10807
URL: https://issues.apache.org/jira/browse/HADOOP-10807
Project: Hadoop Common
Issue Type: Bug
Components: util
Reporter: Allen Wittenauer
Priority: Minor

The options presented to users, the comments, etc., are all woefully out of date and don't reflect the current reality. These should be updated for Hadoop 2.x and up.
[jira] [Created] (HADOOP-10808) Remove unused native code for munlock.
Chris Nauroth created HADOOP-10808:

Summary: Remove unused native code for munlock.
Key: HADOOP-10808
URL: https://issues.apache.org/jira/browse/HADOOP-10808
Project: Hadoop Common
Issue Type: Bug
Components: native
Affects Versions: 3.0.0, 2.5.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor

The Centralized Cache Management project added a native function for calling {{munlock}}. This function is unused though, because Centralized Cache Management calls {{munmap}}, which implicitly unlocks the memory too. Let's remove the unused code. This is a private/unstable class, so there is no backwards-compatibility concern.
[jira] [Created] (HADOOP-10809) hadoop-azure: page blob support
Mike Liddell created HADOOP-10809:

Summary: hadoop-azure: page blob support
Key: HADOOP-10809
URL: https://issues.apache.org/jira/browse/HADOOP-10809
Project: Hadoop Common
Issue Type: Improvement
Components: tools
Reporter: Mike Liddell

Azure Blob Storage provides two flavors: block blobs and page blobs. Block blobs are the general-purpose kind that support convenient APIs and are the basis for the Azure Filesystem for Hadoop (see HADOOP-9629). Page blobs are more difficult to use but provide a different feature set. Most importantly, page blobs can cope with an effectively infinite number of small accesses, whereas block blobs can only tolerate 50K appends before relatively manual rewriting of the data is necessary. The simplest analogy is that page blobs are like a normal filesystem (e.g. FAT) and their API is like a low-level device driver. See http://msdn.microsoft.com/en-us/library/azure/ee691964.aspx for some introductory material.

The primary driving scenario for page-blob support is HBase transaction log files, which require an access pattern of many small writes. Additional scenarios can also be supported.

Configuration: the Hadoop Filesystem abstraction needs a mechanism so that file-create can determine whether to create a block or page blob. To permit scenarios where application code doesn't know about the details of Azure storage, we would like the configuration to be aspect-style, i.e. configured by the administrator and transparent to the application. The current solution is to use Hadoop configuration to declare a list of page-blob folders; the Azure Filesystem for Hadoop will create files in these folders using the page-blob flavor. The configuration key is fs.azure.page.blob.dir, and its description can be found in AzureNativeFileSystemStore.java.

Code changes:
- refactor of basic Azure Filesystem code to use a general BlobWrapper and specialized BlockBlobWrapper vs PageBlobWrapper
- introduction of PageBlob support (read, write, etc.)
- miscellaneous changes such as umask handling, implementation of createNonRecursive(), flush/hflush/hsync
- new unit tests

Credit for the primary patch: Dexter Bradshaw, Mostafa Elhemali, Eric Hanson, Mike Liddell.
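A sketch of what that administrator-side configuration might look like in a Hadoop configuration file (the folder paths below are hypothetical examples; the key fs.azure.page.blob.dir is the one named in the issue description):

```xml
<!-- Hypothetical example: files created under these folders are stored
     as page blobs; everything else remains a block blob. -->
<property>
  <name>fs.azure.page.blob.dir</name>
  <value>/hbase/WALs,/app/transaction-logs</value>
</property>
```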
[jira] [Created] (HADOOP-10810) Clean up native code compilation warnings.
Chris Nauroth created HADOOP-10810:

Summary: Clean up native code compilation warnings.
Key: HADOOP-10810
URL: https://issues.apache.org/jira/browse/HADOOP-10810
Project: Hadoop Common
Issue Type: Bug
Components: native
Affects Versions: 3.0.0, 2.5.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor

There are several compilation warnings coming from the native code on both Linux and Windows.
[jira] [Resolved] (HADOOP-10804) Jenkins is failing due to the upgrade of svn client
[ https://issues.apache.org/jira/browse/HADOOP-10804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira AJISAKA resolved HADOOP-10804.
Resolution: Fixed

The Jenkins build no longer fails. Thanks [~gkesavan]!

Jenkins is failing due to the upgrade of svn client

Key: HADOOP-10804
URL: https://issues.apache.org/jira/browse/HADOOP-10804
Project: Hadoop Common
Issue Type: Bug
Components: build
Reporter: Akira AJISAKA
Assignee: Giridharan Kesavan
Priority: Blocker

Jenkins is failing with the following message:
{code}
==
Testing patch for HADOOP-10661.
==
svn: E155036: Please see the 'svn upgrade' command
svn: E155036: The working copy at '/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk' is too old (format 10) to work with client version '1.8.8 (r1568071)' (expects format 31). You need to upgrade the working copy first.
{code}
https://builds.apache.org/job/PreCommit-HADOOP-Build/4231/console
[jira] [Created] (HADOOP-10811) Allow classes to be reloaded at runtime
Chris Li created HADOOP-10811:

Summary: Allow classes to be reloaded at runtime
Key: HADOOP-10811
URL: https://issues.apache.org/jira/browse/HADOOP-10811
Project: Hadoop Common
Issue Type: New Feature
Components: conf
Affects Versions: 3.0.0
Reporter: Chris Li
Assignee: Chris Li
Priority: Minor

Currently Hadoop loads classes and caches them in the Configuration class. Even if the user swaps a class's jar at runtime, Hadoop will continue to use the cached classes when using reflection to instantiate objects. This limits the usefulness of features like HADOOP-10285, because the admin would need to restart each time they wanted to change their queue class. This patch adds a way to refresh the class cache, by creating a new refresh handler to do so (using HADOOP-10376).
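A minimal sketch of the refreshable-cache idea described above (the `ClassCache` class and its method names are hypothetical; Hadoop's actual cache lives inside Configuration, and the refresh-handler wiring comes from HADOOP-10376):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ClassCache {
    // Memoized name -> Class lookups, analogous to the cache the issue
    // describes in Configuration.
    private final Map<String, Class<?>> cache = new ConcurrentHashMap<>();

    // Resolve a class by name and cache the result for later lookups.
    public Class<?> getClassByName(String name) throws ClassNotFoundException {
        Class<?> c = cache.get(name);
        if (c == null) {
            c = Class.forName(name, true, Thread.currentThread().getContextClassLoader());
            cache.put(name, c);
        }
        return c;
    }

    // What a refresh handler would invoke: drop the cached entries so the
    // next lookup re-resolves against whatever jars are now on the classpath.
    public void refresh() {
        cache.clear();
    }
}
```

After `refresh()`, a subsequent `getClassByName` call reloads the class through the current classloader instead of returning the stale cached entry, which is the behavior the admin needs when swapping a queue-class jar without restarting.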