[jira] [Resolved] (HADOOP-8719) Workaround for kerberos-related log errors upon running any hadoop command on OSX
[ https://issues.apache.org/jira/browse/HADOOP-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Harsh J resolved HADOOP-8719. - Resolution: Fixed When this was committed, OSX was not a targeted platform for security or native support. If that has changed recently, let's revert this fix via a new JIRA - I see no issues with doing that. The fix here merely got rid of a verbose warning that appeared unnecessarily on unsecured pseudo-distributed clusters running on OSX. Re-resolving. Thanks! Workaround for kerberos-related log errors upon running any hadoop command on OSX - Key: HADOOP-8719 URL: https://issues.apache.org/jira/browse/HADOOP-8719 Project: Hadoop Common Issue Type: Improvement Affects Versions: 2.0.0-alpha Environment: Mac OS X 10.7, Java 1.6.0_26 Reporter: Jianbin Wei Priority: Trivial Fix For: 3.0.0 Attachments: HADOOP-8719.patch, HADOOP-8719.patch, HADOOP-8719.patch, HADOOP-8719.patch When starting Hadoop on OS X 10.7 (Lion) using start-all.sh, Hadoop logs the following error: 2011-07-28 11:45:31.469 java[77427:1a03] Unable to load realm info from SCDynamicStore Hadoop does seem to function properly despite this. The workaround takes only 10 minutes. There are numerous discussions about this: googling "Unable to load realm mapping info from SCDynamicStore" returns 1770 hits, each with many discussions. Assuming each discussion takes only 5 minutes, a 10-minute fix can save ~150 hours. This does not count the time spent searching for this issue and its solution/workaround, which can easily reach thousands of wasted hours! -- This message was sent by Atlassian JIRA (v6.2#6252)
'current' document links to 2.3.0
Hi, I noticed http://hadoop.apache.org/docs/current/ links to 2.3.0. Since 2.4.1 is now the latest release, would you please update the link? Thanks, Akira
Build failed in Jenkins: Hadoop-Common-0.23-Build #1007
See https://builds.apache.org/job/Hadoop-Common-0.23-Build/1007/ -- [...truncated 16202 lines...]
Running org.apache.hadoop.fs.TestFileSystemTokens
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.492 sec
Running org.apache.hadoop.fs.TestFSMainOperationsLocalFileSystem
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.684 sec
Running org.apache.hadoop.fs.TestLocalFSFileContextSymlink
Tests run: 61, Failures: 0, Errors: 0, Skipped: 3, Time elapsed: 2.618 sec
Running org.apache.hadoop.fs.TestHarFileSystem
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.305 sec
Running org.apache.hadoop.fs.TestFcLocalFsUtil
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.58 sec
Running org.apache.hadoop.fs.TestLocalDirAllocator
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.063 sec
Running org.apache.hadoop.fs.TestLocalFileSystemPermission
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.524 sec
Running org.apache.hadoop.fs.TestFileSystemCaching
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.872 sec
Running org.apache.hadoop.fs.TestLocalFSFileContextCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.774 sec
Running org.apache.hadoop.fs.TestPath
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.802 sec
Running org.apache.hadoop.fs.TestListFiles
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.6 sec
Running org.apache.hadoop.fs.TestHarFileSystemBasics
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.324 sec
Running org.apache.hadoop.fs.TestChecksumFileSystem
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.834 sec
Running org.apache.hadoop.fs.TestGetFileBlockLocations
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.732 sec
Running org.apache.hadoop.fs.TestFsShellCopy
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.32 sec
Running org.apache.hadoop.fs.TestDU
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.166 sec
Running org.apache.hadoop.fs.TestAvroFSInput
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.481 sec
Running org.apache.hadoop.fs.shell.TestPathExceptions
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.091 sec
Running org.apache.hadoop.fs.shell.TestCommandFactory
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.178 sec
Running org.apache.hadoop.fs.shell.TestPathData
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.786 sec
Running org.apache.hadoop.fs.shell.TestCopy
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.729 sec
Running org.apache.hadoop.fs.TestHardLink
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.262 sec
Running org.apache.hadoop.fs.TestFilterFileSystem
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.58 sec
Running org.apache.hadoop.fs.TestLocalFSFileContextMainOperations
Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.277 sec
Running org.apache.hadoop.fs.TestTrash
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.084 sec
Running org.apache.hadoop.fs.viewfs.TestChRootedFileSystem
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.106 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemDelegation
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.437 sec
Running org.apache.hadoop.fs.viewfs.TestFcMainOperationsLocalFs
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.775 sec
Running org.apache.hadoop.fs.viewfs.TestFcCreateMkdirLocalFs
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.046 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem
Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.928 sec
Running org.apache.hadoop.fs.viewfs.TestChRootedFs
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.061 sec
Running org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.32 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsTrash
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.96 sec
Running org.apache.hadoop.fs.viewfs.TestViewfsFileStatus
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.751 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsLocalFs
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.854 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAuthorityLocalFs
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.994 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.393 sec
Running org.apache.hadoop.fs.viewfs.TestFcPermissionsLocalFs
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time
Sqoop Import Problem ZipException
Hi, I am trying to import data from Sybase to HDFS but am getting a ZipException. It looks like some of the jars are not getting downloaded, but I am not able to trace what is going wrong. Thanks. -- * Regards,* * Vikas *
Re: Sqoop Import Problem ZipException
Can you provide the error stack trace? On Fri, Jul 11, 2014 at 8:43 PM, Vikas Jadhav vikascjadha...@gmail.com wrote: Hi I am trying import data from Sybase to HDFS but getting ZipException It looks like some the jars are not getting downloaded but not able to trace what is going wrong. Thanks. -- * Regards,* * Vikas * -- Nitin Pawar
Re: 'current' document links to 2.3.0
I just moved the symlink in the site svn repo according to step 12 in [1], dunno when it'll get propagated. The release notes for 2.3.0+ also still talk about federation and MRv2 being new features. I think it's generated from the release tarball, so we probably should have fixed these notes before releasing. Not sure how we'd go about fixing this. One thing we can do though is have proper notes for 2.5 :) [1] http://wiki.apache.org/hadoop/HowToRelease On Thu, Jul 10, 2014 at 11:40 PM, Akira AJISAKA ajisa...@oss.nttdata.co.jp wrote: Hi, I noticed http://hadoop.apache.org/docs/current/ linked to 2.3.0. Now 2.4.1 is the latest release, would you please update the link? Thanks, Akira
[jira] [Created] (HADOOP-10816) key shell returns -1 to the shell on error, should be 1
Mike Yoder created HADOOP-10816: --- Summary: key shell returns -1 to the shell on error, should be 1 Key: HADOOP-10816 URL: https://issues.apache.org/jira/browse/HADOOP-10816 Project: Hadoop Common Issue Type: Bug Components: security Affects Versions: 3.0.0 Reporter: Mike Yoder I've seen this in several places now - commands returning -1 on failure to the shell. It's a bug. Someone confused posix-style library returns (0 on success, -1 on failure) with process exit statuses, which are an unsigned byte. Thus, a return of -1 actually becomes 255 to the shell. {noformat} $ hadoop key create happykey2 --provider kms://http@localhost:16000/kms --attr a=a --attr a=b Each attribute must correspond to only one value: atttribute a was repeated ... $ echo $? 255 {noformat} A return value of 1 instead of -1 does the right thing. -- This message was sent by Atlassian JIRA (v6.2#6252)
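The 255 comes from ordinary truncation: the shell sees only the low 8 bits of the value a process exits with. A minimal sketch of the arithmetic (the `wrappedExitStatus` helper is illustrative only, not a Hadoop API):

```java
public class ExitStatusDemo {
    // A process exit status is an unsigned byte: the shell observes
    // only the low 8 bits of the value passed to exit().
    public static int wrappedExitStatus(int code) {
        return code & 0xFF;
    }

    public static void main(String[] args) {
        System.out.println(wrappedExitStatus(-1)); // prints 255
        System.out.println(wrappedExitStatus(1));  // prints 1
    }
}
```

This is why returning 1 "does the right thing" while -1 does not: 1 survives the truncation unchanged, whereas -1 (all bits set in two's complement) becomes 255.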
[jira] [Created] (HADOOP-10817) ProxyUsers configuration should support configurable prefixes
Alejandro Abdelnur created HADOOP-10817: --- Summary: ProxyUsers configuration should support configurable prefixes Key: HADOOP-10817 URL: https://issues.apache.org/jira/browse/HADOOP-10817 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: 3.0.0 Reporter: Alejandro Abdelnur Currently {{ProxyUsers}} and the {{ImpersonationProvider}} are hardcoded to use {{hadoop.proxyuser.}} prefixes for loading proxy user configuration. Adding the possibility of using a custom prefix will enable reusing the {{ProxyUsers}} class from other components (i.e. HttpFS and KMS). -- This message was sent by Atlassian JIRA (v6.2#6252)
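The reuse the JIRA describes can be sketched as a lookup parameterized on the prefix instead of hardcoding {{hadoop.proxyuser.}}. The class and method names below are hypothetical, for illustration only; they are not the actual `ProxyUsers` or `ImpersonationProvider` API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: proxy-user configuration lookup with a
// configurable prefix, so HttpFS or KMS could pass their own
// (e.g. "hadoop.kms.proxyuser.") instead of "hadoop.proxyuser.".
public class PrefixedProxyConfig {
    private final Map<String, String> conf;
    private final String prefix;

    public PrefixedProxyConfig(Map<String, String> conf, String prefix) {
        this.conf = conf;
        this.prefix = prefix;
    }

    // Keys are built as <prefix><user>.hosts / <prefix><user>.groups
    public String allowedHosts(String user) {
        return conf.get(prefix + user + ".hosts");
    }

    public String allowedGroups(String user) {
        return conf.get(prefix + user + ".groups");
    }
}
```

With this shape, the same class serves both the classic `hadoop.proxyuser.` configuration and a component-specific one, which is the reuse the JIRA is after.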
Re: Meetup invitation: Consensus based replication in Hadoop
One more update: it seems that for people in SF, who oftentimes might not even have a car, getting to San Ramon can represent a certain difficulty. So we'll do a shuttle pickup from the West Dublin BART station if there are at least a few people who want to use the option. Please respond directly to me if you're interested before the end of Sunday the 13th. Cheers, Cos On Tue, Jul 08, 2014 at 12:23PM, Konstantin Boudnik wrote: All, Re-sending this announcement in case it fell through over the long weekend when people were away. We still have seats left, so register soon. Regards, Cos On Wed, Jul 02, 2014 at 06:37PM, Konstantin Boudnik wrote: We'd like to invite you to the Consensus based replication in Hadoop: A deep dive event that we are happy to hold in our San Ramon office on July 15th at noon. We'd like to accommodate as many people as possible, but I think we are physically limited to 30 (+/- a few), so please RSVP to this Eventbrite invitation: https://www.eventbrite.co.uk/e/consensus-based-replication-in-hadoop-a-deep-dive-tickets-12158236613 We'll provide pizza and beverages (feel free to express your special dietary requirements, if any). See you soon! With regards, Cos On Wed, Jun 18, 2014 at 08:45PM, Konstantin Boudnik wrote: Guys, In the last couple of weeks, we had a very good and productive initial round of discussions on the JIRAs. I think it is worth keeping the momentum going and having a more detailed conversation. For that, we'd like to host a Hadoop developers meetup to get into the bowels of the consensus-based coordination implementation for HDFS. The proposed venue is our office in San Ramon, CA. Considering that it is already mid-week and the following one looks short because of the holidays, how does the week of July 7th look for y'all? Tuesday or Thursday look pretty good on our end. Please chime in with your preference either here or reach out directly to me. Once I have a few RSVPs I will set up an event on Eventbrite or similar. Looking forward to your input. Regards, Cos On Thu, May 29, 2014 at 02:09PM, Konstantin Shvachko wrote: Hello hadoop developers, I just opened two jiras proposing to introduce ConsensusNode into HDFS and a Coordination Engine into Hadoop Common. The latter should benefit HDFS and HBase as well as potentially other projects. See HDFS-6469 and HADOOP-10641 for details. The effort is based on the system we built at Wandisco with my colleagues, who are glad to contribute it to Apache, as quite a few people in the community expressed interest in these ideas and their potential applications. We should probably keep technical discussions in the jiras. Here on the dev list I wanted to touch base on any logistic issues / questions. - First of all, any ideas and help are very much welcome. - We would like to set up a meetup to discuss this if people are interested. Hadoop Summit next week may be a potential time and place to meet. Not sure in what form. If not, we can organize one in our San Ramon office later on. - The effort may take a few months depending on the contributors' schedules. Would it make sense to open a branch for the ConsensusNode work? - The APIs and the implementation of the Coordination Engine should be fairly independent, so it may be reasonable to add it directly to Hadoop Common trunk. Thanks, --Konstantin
Re: Meetup invitation: Consensus based replication in Hadoop
A few people asked about pickup from BART. We can organize pickup from either West Dublin/Pleasanton Station or Walnut Creek Station, whichever gets more requests by Monday 07/14. Please ping me directly if you want to be picked up: s...@wandisco.com Thanks, --Konst
Re: Meetup invitation: Consensus based replication in Hadoop
Ok, or Cos. On Fri, Jul 11, 2014 at 4:41 PM, Konstantin Shvachko shv.had...@gmail.com wrote: Few people asked about pick up from Bart. We can organize pick up from either West Dublin/Pleasanton Station or Walnut Creek Station. Whichever gets more requests until Monday 07/14. Please ping me directly if you want to be picked up: s...@wandisco.com Thanks, --Konst
[jira] [Resolved] (HADOOP-10806) ndfs: need to implement umask, pass permission bits to hdfsCreateDirectory
[ https://issues.apache.org/jira/browse/HADOOP-10806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin Patrick McCabe resolved HADOOP-10806. --- Resolution: Fixed Fix Version/s: HADOOP-10388 ndfs: need to implement umask, pass permission bits to hdfsCreateDirectory -- Key: HADOOP-10806 URL: https://issues.apache.org/jira/browse/HADOOP-10806 Project: Hadoop Common Issue Type: Sub-task Components: native Affects Versions: HADOOP-10388 Reporter: Colin Patrick McCabe Assignee: Colin Patrick McCabe Fix For: HADOOP-10388 Attachments: HADOOP-10806-pnative.001.patch, HADOOP-10806-pnative.002.patch We need to pass in permission bits to {{hdfsCreateDirectory}}. Also, we need to read {{fs.permissions.umask-mode}} so that we know what to mask off of the permission bits (umask is always implemented client-side). -- This message was sent by Atlassian JIRA (v6.2#6252)
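Client-side umask application is a bitwise clear of the umask bits from the requested mode. A hedged sketch of the arithmetic in Java (the `applyUmask` helper is illustrative only; the actual change lives in the native client):

```java
public class UmaskDemo {
    // Clear the umask bits from the requested mode before the create
    // call, mirroring what a client-side umask implementation does.
    public static int applyUmask(int requestedMode, int umask) {
        return requestedMode & ~umask;
    }

    public static void main(String[] args) {
        // 0777 requested under the common umask of 022 yields 0755.
        System.out.println(Integer.toOctalString(applyUmask(0777, 022)));
    }
}
```

Because the mask is applied before the RPC is sent, the server never needs to know the client's `fs.permissions.umask-mode` value.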
[jira] [Created] (HADOOP-10818) native client: refactor URI code to be clearer
Colin Patrick McCabe created HADOOP-10818: - Summary: native client: refactor URI code to be clearer Key: HADOOP-10818 URL: https://issues.apache.org/jira/browse/HADOOP-10818 Project: Hadoop Common Issue Type: Sub-task Components: native Affects Versions: HADOOP-10388 Reporter: Colin Patrick McCabe Assignee: Colin Patrick McCabe Refactor the {{common/uri.c}} code to be a bit clearer. We should just be able to refer to user_info, auth, port, path, etc. fields in the structure, rather than calling accessors. {{hdfsBuilder}} should just have a connection URI rather than separate fields for all these things. -- This message was sent by Atlassian JIRA (v6.2#6252)
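The decomposition the refactor wants, plain user_info/auth/port/path fields instead of accessors, can be illustrated with `java.net.URI`. This is Java only for illustration (the native client code in `common/uri.c` is C); the example URI is made up:

```java
import java.net.URI;

public class UriFieldsDemo {
    public static void main(String[] args) {
        // A typical HDFS connection URI, split into the pieces the
        // refactored structure would carry as plain fields.
        URI u = URI.create("hdfs://alice@namenode:8020/user/alice");
        System.out.println(u.getUserInfo()); // alice
        System.out.println(u.getHost());     // namenode
        System.out.println(u.getPort());     // 8020
        System.out.println(u.getPath());     // /user/alice
    }
}
```

Collapsing {{hdfsBuilder}}'s separate host/port/user fields into one such connection URI means there is a single source of truth for where the client connects.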
[jira] [Resolved] (HADOOP-10734) Implement high-performance secure random number sources
[ https://issues.apache.org/jira/browse/HADOOP-10734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin Patrick McCabe resolved HADOOP-10734. --- Resolution: Fixed Implement high-performance secure random number sources --- Key: HADOOP-10734 URL: https://issues.apache.org/jira/browse/HADOOP-10734 Project: Hadoop Common Issue Type: Sub-task Components: security Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134) Reporter: Yi Liu Assignee: Yi Liu Fix For: fs-encryption (HADOOP-10150 and HDFS-6134) Attachments: HADOOP-10734-fs-enc.004.patch, HADOOP-10734.1.patch, HADOOP-10734.2.patch, HADOOP-10734.3.patch, HADOOP-10734.4.patch, HADOOP-10734.5.patch, HADOOP-10734.patch This JIRA is to implement a secure random source using JNI to OpenSSL; the implementation should be thread-safe. It utilizes RdRand to return random numbers from a hardware random number generator. RdRand is a TRNG (true random number generator) with much higher performance than {{java.security.SecureRandom}}. https://wiki.openssl.org/index.php/Random_Numbers http://en.wikipedia.org/wiki/RdRand https://software.intel.com/en-us/articles/performance-impact-of-intel-secure-key-on-openssl -- This message was sent by Atlassian JIRA (v6.2#6252)
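For context, {{java.security.SecureRandom}} is the baseline the OpenSSL/RdRand-backed source aims to outperform. A minimal usage sketch of that baseline (plain JDK API, not the new implementation from this JIRA):

```java
import java.security.SecureRandom;

public class SecureRandomBaseline {
    // The JDK's SecureRandom is the slower, software-only baseline;
    // its instances are safe for use by multiple threads, which is
    // the thread-safety bar the JIRA sets for the new source.
    public static byte[] randomBytes(int n) {
        byte[] buf = new byte[n];
        new SecureRandom().nextBytes(buf);
        return buf;
    }

    public static void main(String[] args) {
        System.out.println(randomBytes(16).length); // 16
    }
}
```

A drop-in replacement backed by OpenSSL only needs to preserve this fill-a-buffer contract while sourcing entropy from the hardware generator.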