[jira] [Commented] (HADOOP-7982) UserGroupInformation fails to login if thread's context classloader can't load HadoopLoginModule
[ https://issues.apache.org/jira/browse/HADOOP-7982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13188947#comment-13188947 ]

Eli Collins commented on HADOOP-7982:
-------------------------------------

+1 to the branch-1 patch

> UserGroupInformation fails to login if thread's context classloader can't
> load HadoopLoginModule
> --------------------------------------------------------------------------
>
> Key: HADOOP-7982
> URL: https://issues.apache.org/jira/browse/HADOOP-7982
> Project: Hadoop Common
> Issue Type: Bug
> Components: security
> Affects Versions: 0.23.0, 1.0.0
> Reporter: Todd Lipcon
> Assignee: Todd Lipcon
> Attachments: hadoop-7982-branch-1.txt
>
> In a few hard-to-reproduce situations, we've seen a problem where the UGI
> login call causes a failure to login exception with the following cause:
>
> Caused by: javax.security.auth.login.LoginException: unable to find
> LoginModule class: org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule
>
> After a bunch of debugging, I determined that this happens when the login
> occurs in a thread whose Context ClassLoader has been set to null.
[jira] [Commented] (HADOOP-7938) HA: the FailoverController should optionally fence the active during failover
[ https://issues.apache.org/jira/browse/HADOOP-7938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13188943#comment-13188943 ]

Todd Lipcon commented on HADOOP-7938:
-------------------------------------

{code}
+    // Try to fence fromSvc
+    if (fencer != null) {
+      if (!fencer.fence()) {
+        throw new FailoverFailedException("Unable to fence " + fromSvcName);
+      }
+    }
{code}

Shouldn't you only fence the old node in the case that you got an exception in the {{transitionToStandby}} call? Then you also wouldn't need to make the user specify whether or not to fence - it would automatically fence in the case that there was a problem with graceful failover.

If the {{transitionToActive}} call fails, we need to be careful before doing a failback. For example, what if it was a timeout, and in fact the new active is still in the process of failing over? Then we need to fence "toSvc" before going back to "fromSvc".

> HA: the FailoverController should optionally fence the active during failover
> ------------------------------------------------------------------------------
>
> Key: HADOOP-7938
> URL: https://issues.apache.org/jira/browse/HADOOP-7938
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: ha
> Affects Versions: HA Branch (HDFS-1623)
> Reporter: Eli Collins
> Assignee: Eli Collins
> Fix For: HA Branch (HDFS-1623)
>
> Attachments: hadoop-7938.txt
>
> The FailoverController in HADOOP-7924 needs to be able to fence off the
> current active in case it fails to transition to standby (or the user
> requests it for sanity). This is needed even for manual failover (the CLI
> should use the configured fencing mechanism). The FC needs to access the
> HDFS-specific implementations (HDFS-2179); we could add a common fencing
> interface (or just shell out, but we may not always want to do that).
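For concreteness, here is a minimal sketch of the flow suggested in that comment, where fencing happens only when the graceful {{transitionToStandby}} call fails. The {{fromSvc}}/{{toSvc}}/{{fencer}}/{{fromSvcName}} names follow the snippet under discussion; the surrounding method and exception handling are assumptions, not the attached patch.

{code}
// Illustrative only: fence the old active just when graceful handoff fails.
public void failover() throws FailoverFailedException {
  try {
    fromSvc.transitionToStandby();       // graceful handoff
  } catch (Exception e) {
    // Graceful transition failed; forcibly fence before promoting toSvc.
    if (fencer == null || !fencer.fence()) {
      throw new FailoverFailedException("Unable to fence " + fromSvcName);
    }
  }
  // Per the second point above: if this times out, toSvc may still be
  // mid-failover, so it would need fencing before any failback to fromSvc.
  toSvc.transitionToActive();
}
{code}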
[jira] [Commented] (HADOOP-6801) io.sort.mb and io.sort.factor were renamed and moved to mapreduce but are still in CommonConfigurationKeysPublic.java and used in SequenceFile.java
[ https://issues.apache.org/jira/browse/HADOOP-6801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13188915#comment-13188915 ]

Harsh J commented on HADOOP-6801:
---------------------------------

Hey folks, can someone pitch in and give this mostly-docfix a quick review?

> io.sort.mb and io.sort.factor were renamed and moved to mapreduce but are
> still in CommonConfigurationKeysPublic.java and used in SequenceFile.java
> --------------------------------------------------------------------------
>
> Key: HADOOP-6801
> URL: https://issues.apache.org/jira/browse/HADOOP-6801
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 0.22.0
> Reporter: Erik Steffl
> Assignee: Harsh J
> Priority: Minor
> Attachments: HADOOP-6801.r1.diff, HADOOP-6801.r2.diff
>
> The following configuration keys in CommonConfigurationKeysPublic.java
> (formerly CommonConfigurationKeys.java):
>
> public static final String IO_SORT_MB_KEY = "io.sort.mb";
> public static final String IO_SORT_FACTOR_KEY = "io.sort.factor";
>
> are partially moved:
> - they were renamed to mapreduce.task.io.sort.mb and
> mapreduce.task.io.sort.factor respectively
> - they were moved to the mapreduce project, documented in mapred-default.xml
>
> However:
> - they are still listed in CommonConfigurationKeysPublic.java as quoted above
> - the strings "io.sort.mb" and "io.sort.factor" are used in SequenceFile.java
> in the Hadoop Common project
>
> Not sure what the solution is; these constants should probably be removed
> from CommonConfigurationKeysPublic.java, but I am not sure what's the best
> solution for SequenceFile.java.
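As a sketch of one conventional shape for this kind of migration (an assumption on my part, not the attached diffs): register the old common-side names as deprecated aliases of the mapreduce-side names, so existing configs keep working while SequenceFile reads only the new keys. {{Configuration.addDeprecation}} is the real deprecation hook on 0.22+/trunk; the wrapper class and methods below are hypothetical.

{code}
import org.apache.hadoop.conf.Configuration;

// Hypothetical migration helper, not the attached patch.
public class SortKeyCompat {
  static {
    // Old common-side keys become deprecated aliases of the new names.
    Configuration.addDeprecation("io.sort.mb",
        new String[] { "mapreduce.task.io.sort.mb" });
    Configuration.addDeprecation("io.sort.factor",
        new String[] { "mapreduce.task.io.sort.factor" });
  }

  // SequenceFile-style readers would then consult only the new keys.
  static int sortFactor(Configuration conf) {
    return conf.getInt("mapreduce.task.io.sort.factor", 10); // 10: historical default
  }

  static int sortMb(Configuration conf) {
    return conf.getInt("mapreduce.task.io.sort.mb", 100);    // 100: historical default
  }
}
{code}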
[jira] [Commented] (HADOOP-1381) The distance between sync blocks in SequenceFiles should be configurable rather than hard coded to 2000 bytes
[ https://issues.apache.org/jira/browse/HADOOP-1381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13188908#comment-13188908 ]

Harsh J commented on HADOOP-1381:
---------------------------------

Todd/others, are there any other comments you'd like me to address?

> The distance between sync blocks in SequenceFiles should be configurable
> rather than hard coded to 2000 bytes
> -------------------------------------------------------------------------
>
> Key: HADOOP-1381
> URL: https://issues.apache.org/jira/browse/HADOOP-1381
> Project: Hadoop Common
> Issue Type: Improvement
> Components: io
> Affects Versions: 0.22.0
> Reporter: Owen O'Malley
> Assignee: Harsh J
> Fix For: 0.24.0
>
> Attachments: HADOOP-1381.r1.diff, HADOOP-1381.r2.diff, HADOOP-1381.r3.diff, HADOOP-1381.r4.diff, HADOOP-1381.r5.diff
>
> Currently SequenceFiles put in sync blocks every 2000 bytes. It would be much
> better if it was configurable with a much higher default (1mb or so?).
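For readers following along, the hard-coding at issue looks roughly like the constants below (a 4-byte escape plus a 16-byte hash written every ~2000 bytes; names abridged from SequenceFile), and the improvement would read the interval from the job {{Configuration}} instead. The config key name is purely illustrative - the attached patches may use a different one.

{code}
import org.apache.hadoop.conf.Configuration;

// Sketch only: today's hard-coded interval vs. a hypothetical configurable one.
public class SyncIntervalSketch {
  private static final int SYNC_HASH_SIZE = 16;              // MD5 hash bytes
  private static final int SYNC_SIZE = 4 + SYNC_HASH_SIZE;   // 20 bytes per marker
  public static final int SYNC_INTERVAL = 100 * SYNC_SIZE;   // hard-coded: 2000 bytes

  // "io.seqfile.sync.interval" is an illustration, not the patch's choice.
  public static int syncInterval(Configuration conf) {
    return conf.getInt("io.seqfile.sync.interval", SYNC_INTERVAL);
  }
}
{code}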
[jira] [Created] (HADOOP-7983) HA: failover should be able to pass args to fencers
HA: failover should be able to pass args to fencers
---------------------------------------------------

Key: HADOOP-7983
URL: https://issues.apache.org/jira/browse/HADOOP-7983
Project: Hadoop Common
Issue Type: Sub-task
Components: ha
Affects Versions: HA Branch (HDFS-1623)
Reporter: Eli Collins
Assignee: Eli Collins

Currently, fencing method args are passed in the config (e.g. "sshfence(host1,8022)" indicates to fence the service running on port 8022 on host1). The target service to fence should be determined by the failover (we fence the currently active service), not configured statically.
[jira] [Updated] (HADOOP-7982) UserGroupInformation fails to login if thread's context classloader can't load HadoopLoginModule
[ https://issues.apache.org/jira/browse/HADOOP-7982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon updated HADOOP-7982:
--------------------------------

    Attachment: hadoop-7982-branch-1.txt

Here's a branch-1 patch which fixes the issue. Unfortunately I was unable to write a test case for it, since the login functionality uses a lot of static state, etc.

> UserGroupInformation fails to login if thread's context classloader can't
> load HadoopLoginModule
> --------------------------------------------------------------------------
>
> Key: HADOOP-7982
> URL: https://issues.apache.org/jira/browse/HADOOP-7982
> Project: Hadoop Common
> Issue Type: Bug
> Components: security
> Affects Versions: 0.23.0, 1.0.0
> Reporter: Todd Lipcon
> Assignee: Todd Lipcon
> Attachments: hadoop-7982-branch-1.txt
>
> In a few hard-to-reproduce situations, we've seen a problem where the UGI
> login call causes a failure to login exception with the following cause:
>
> Caused by: javax.security.auth.login.LoginException: unable to find
> LoginModule class: org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule
>
> After a bunch of debugging, I determined that this happens when the login
> occurs in a thread whose Context ClassLoader has been set to null.
[jira] [Commented] (HADOOP-7982) UserGroupInformation fails to login if thread's context classloader can't load HadoopLoginModule
[ https://issues.apache.org/jira/browse/HADOOP-7982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13188871#comment-13188871 ]

Todd Lipcon commented on HADOOP-7982:
-------------------------------------

In particular, I think Hive might have a bug which sets a thread's ContextClassLoader to null.

The JAAS LoginContext constructor saves off the thread's Context ClassLoader into a member variable, and then uses that CL when loading the HadoopLoginModule class. In the case that it's a null CL, it fails because HadoopLoginModule isn't on the bootstrap classpath.

A workaround which fixes it is to temporarily change the thread's CCL to match the UGI class's ClassLoader, then flip it back. I don't know quite enough about correct/conventional classloader usage to know if this is the right fix or a hack.

> UserGroupInformation fails to login if thread's context classloader can't
> load HadoopLoginModule
> --------------------------------------------------------------------------
>
> Key: HADOOP-7982
> URL: https://issues.apache.org/jira/browse/HADOOP-7982
> Project: Hadoop Common
> Issue Type: Bug
> Components: security
> Affects Versions: 0.23.0, 1.0.0
> Reporter: Todd Lipcon
> Assignee: Todd Lipcon
>
> In a few hard-to-reproduce situations, we've seen a problem where the UGI
> login call causes a failure to login exception with the following cause:
>
> Caused by: javax.security.auth.login.LoginException: unable to find
> LoginModule class: org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule
>
> After a bunch of debugging, I determined that this happens when the login
> occurs in a thread whose Context ClassLoader has been set to null.
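A minimal sketch of that workaround pattern, assuming the shape rather than quoting the actual patch (the JAAS {{LoginContext}} constructor is real; the wrapper class and method are illustrative):

{code}
import javax.security.auth.Subject;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;
import org.apache.hadoop.security.UserGroupInformation;

// Sketch: save the (possibly null) context classloader, point it at the
// loader that can see HadoopLoginModule, construct the login, flip it back.
public class CclWorkaroundSketch {
  static LoginContext newLoginContext(String entry, Subject subject)
      throws LoginException {
    Thread t = Thread.currentThread();
    ClassLoader oldCCL = t.getContextClassLoader();   // null in the failing case
    t.setContextClassLoader(UserGroupInformation.class.getClassLoader());
    try {
      // JAAS captures the thread's CCL here and uses it to load login modules.
      return new LoginContext(entry, subject);
    } finally {
      t.setContextClassLoader(oldCCL);                // restore whatever was there
    }
  }
}
{code}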
[jira] [Created] (HADOOP-7982) UserGroupInformation fails to login if thread's context classloader can't load HadoopLoginModule
UserGroupInformation fails to login if thread's context classloader can't load HadoopLoginModule
-------------------------------------------------------------------------------------------------

Key: HADOOP-7982
URL: https://issues.apache.org/jira/browse/HADOOP-7982
Project: Hadoop Common
Issue Type: Bug
Components: security
Affects Versions: 1.0.0, 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon

In a few hard-to-reproduce situations, we've seen a problem where the UGI login call causes a failure to login exception with the following cause:

Caused by: javax.security.auth.login.LoginException: unable to find LoginModule class: org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule

After a bunch of debugging, I determined that this happens when the login occurs in a thread whose Context ClassLoader has been set to null.
[jira] [Updated] (HADOOP-7973) DistributedFileSystem close has severe consequences
[ https://issues.apache.org/jira/browse/HADOOP-7973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Daryn Sharp updated HADOOP-7973:
--------------------------------

    Attachment: HADOOP-7973-3.patch

The idea is basically to allow MR to be backward-compatible by maintaining cached FS objects that are distinct from the FS objects used by user code. That's exactly how it used to work, due to the DFS port-stripping bug.

I pulled in minimal changes from trunk to allow a unique id in the FS cache key. I didn't pull in the new APIs, but rather smuggled a unique id in via a config setting. It's not the cleanest, but I'm open to alternatives. I've probably missed a few spots in the MR framework, but I'd like comments on the approach before I go further.

> DistributedFileSystem close has severe consequences
> ----------------------------------------------------
>
> Key: HADOOP-7973
> URL: https://issues.apache.org/jira/browse/HADOOP-7973
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs
> Affects Versions: 1.0.0
> Reporter: Daryn Sharp
> Assignee: Daryn Sharp
> Priority: Blocker
> Attachments: HADOOP-7973-2.patch, HADOOP-7973-3.patch, HADOOP-7973.patch
>
> The way {{FileSystem#close}} works is very problematic. Since the
> {{FileSystems}} are cached, any {{close}} by any caller will cause problems
> for every other reference to it. Will add more detail in the comments.
[jira] [Commented] (HADOOP-7973) DistributedFileSystem close has severe consequences
[ https://issues.apache.org/jira/browse/HADOOP-7973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13188725#comment-13188725 ]

Daryn Sharp commented on HADOOP-7973:
-------------------------------------

I'm going to try another approach of making MR get unique filesystems w/o relying on the fluke caused by the hdfs bug.

> DistributedFileSystem close has severe consequences
> ----------------------------------------------------
>
> Key: HADOOP-7973
> URL: https://issues.apache.org/jira/browse/HADOOP-7973
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs
> Affects Versions: 1.0.0
> Reporter: Daryn Sharp
> Assignee: Daryn Sharp
> Priority: Blocker
> Attachments: HADOOP-7973-2.patch, HADOOP-7973.patch
>
> The way {{FileSystem#close}} works is very problematic. Since the
> {{FileSystems}} are cached, any {{close}} by any caller will cause problems
> for every other reference to it. Will add more detail in the comments.
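To illustrate the hazard the description alludes to (a minimal sketch, not Daryn's patch; {{newInstance}} is the uncached alternative on versions that provide it):

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// Both get() calls below return the same cached object for a given URI/user,
// so closing either handle breaks every other holder of it.
public class CachedCloseDemo {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs1 = FileSystem.get(conf);
    FileSystem fs2 = FileSystem.get(conf);   // same instance as fs1 (cache hit)
    fs2.close();                             // fs1 is now unusable as well

    // newInstance() hands back a private, uncached FileSystem - the kind of
    // isolation the unique-cache-key idea aims to give the MR framework.
    FileSystem privateFs = FileSystem.newInstance(conf);
    privateFs.close();                       // safe: affects only this handle
  }
}
{code}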
[jira] [Commented] (HADOOP-7981) Improve documentation for org.apache.hadoop.io.compress.Decompressor.getRemaining
[ https://issues.apache.org/jira/browse/HADOOP-7981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13188688#comment-13188688 ]

Jonathan Eagles commented on HADOOP-7981:
-----------------------------------------

* Make clear when this API will be called by the system.
* Make clear what actions the system will take based on the return value.
* Make clear the distinction between getRemaining == 0 and isFinished == true.

> Improve documentation for
> org.apache.hadoop.io.compress.Decompressor.getRemaining
> --------------------------------------------------------------
>
> Key: HADOOP-7981
> URL: https://issues.apache.org/jira/browse/HADOOP-7981
> Project: Hadoop Common
> Issue Type: Bug
> Components: io
> Affects Versions: 0.23.1
> Reporter: Jonathan Eagles
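One possible shape for the requested Javadoc, covering those three points (the wording below is illustrative, not the committed documentation):

{code}
/**
 * Returns the number of bytes of compressed input remaining in this
 * decompressor's internal buffer that have not yet been decompressed.
 *
 * When it is called: the surrounding decompressor stream consults this
 * after decompress() produces no output, to decide whether to call
 * decompress() again before setting any new input.
 *
 * What the system does with the value: a non-zero return means buffered
 * input is still pending, so the stream will not feed new input yet.
 *
 * getRemaining() == 0 vs. finished() == true: the former only says the
 * internal buffer is drained; the latter says the end of the compressed
 * stream itself has been reached.
 */
public int getRemaining();
{code}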
[jira] [Created] (HADOOP-7981) Improve documentation for org.apache.hadoop.io.compress.Decompressor.getRemaining
Improve documentation for org.apache.hadoop.io.compress.Decompressor.getRemaining
----------------------------------------------------------------------------------

Key: HADOOP-7981
URL: https://issues.apache.org/jira/browse/HADOOP-7981
Project: Hadoop Common
Issue Type: Bug
Components: io
Affects Versions: 0.23.1
Reporter: Jonathan Eagles
[jira] [Assigned] (HADOOP-7978) TestHostnameFilter should work with localhost or localhost.localdomain
[ https://issues.apache.org/jira/browse/HADOOP-7978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alejandro Abdelnur reassigned HADOOP-7978:
------------------------------------------

    Assignee: Alejandro Abdelnur

> TestHostnameFilter should work with localhost or localhost.localdomain
> -----------------------------------------------------------------------
>
> Key: HADOOP-7978
> URL: https://issues.apache.org/jira/browse/HADOOP-7978
> Project: Hadoop Common
> Issue Type: Test
> Components: fs, test
> Affects Versions: 0.23.0
> Reporter: Eli Collins
> Assignee: Alejandro Abdelnur
>
> TestHostnameFilter may currently fail with the following:
> {noformat}
> Error Message
>
> null expected: but was:
>
> Stacktrace
>
> junit.framework.ComparisonFailure: null expected: but was:
>   at junit.framework.Assert.assertEquals(Assert.java:81)
>   at junit.framework.Assert.assertEquals(Assert.java:87)
>   at org.apache.hadoop.lib.servlet.TestHostnameFilter$1.doFilter(TestHostnameFilter.java:50)
>   at org.apache.hadoop.lib.servlet.HostnameFilter.doFilter(HostnameFilter.java:68)
>   at org.apache.hadoop.lib.servlet.TestHostnameFilter.hostname(TestHostnameFilter.java:58)
> {noformat}
[jira] [Reopened] (HADOOP-7978) TestHostnameFilter should work with localhost or localhost.localdomain
[ https://issues.apache.org/jira/browse/HADOOP-7978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alejandro Abdelnur reopened HADOOP-7978:
----------------------------------------

It seems there is no standard for what goes first; we'll have to change the test to test for both.

> TestHostnameFilter should work with localhost or localhost.localdomain
> -----------------------------------------------------------------------
>
> Key: HADOOP-7978
> URL: https://issues.apache.org/jira/browse/HADOOP-7978
> Project: Hadoop Common
> Issue Type: Test
> Components: fs, test
> Affects Versions: 0.23.0
> Reporter: Eli Collins
>
> TestHostnameFilter may currently fail with the following:
> {noformat}
> Error Message
>
> null expected: but was:
>
> Stacktrace
>
> junit.framework.ComparisonFailure: null expected: but was:
>   at junit.framework.Assert.assertEquals(Assert.java:81)
>   at junit.framework.Assert.assertEquals(Assert.java:87)
>   at org.apache.hadoop.lib.servlet.TestHostnameFilter$1.doFilter(TestHostnameFilter.java:50)
>   at org.apache.hadoop.lib.servlet.HostnameFilter.doFilter(HostnameFilter.java:68)
>   at org.apache.hadoop.lib.servlet.TestHostnameFilter.hostname(TestHostnameFilter.java:58)
> {noformat}
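A sketch of what "test for both" could look like (illustrative only; {{HostnameFilter.get()}} is an assumed accessor inferred from the filter class in the stack trace, and the committed fix may differ):

{code}
import static junit.framework.Assert.assertTrue;

// Accept either resolver answer instead of asserting one canonical name.
public void testHostname() {
  String hostname = HostnameFilter.get();   // assumption: the resolved name
  assertTrue("Unexpected hostname: " + hostname,
      "localhost".equals(hostname) || "localhost.localdomain".equals(hostname));
}
{code}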
[jira] [Resolved] (HADOOP-7980) API Compatibility between 0.23 and 1.0 in org.apache.hadoop.io.compress.Decompressor
[ https://issues.apache.org/jira/browse/HADOOP-7980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Eagles resolved HADOOP-7980.
-------------------------------------

    Resolution: Won't Fix

@Tom's solution is an acceptable workaround.

> API Compatibility between 0.23 and 1.0 in
> org.apache.hadoop.io.compress.Decompressor
> --------------------------------------------
>
> Key: HADOOP-7980
> URL: https://issues.apache.org/jira/browse/HADOOP-7980
> Project: Hadoop Common
> Issue Type: Bug
> Components: io
> Affects Versions: 0.23.1
> Reporter: Jonathan Eagles
>
> HADOOP-6835 introduced the public int getRemaining() API in
> org.apache.hadoop.io.compress.Decompressor. This forces custom decompressors
> to implement the new API in order to continue to be used.
[jira] [Commented] (HADOOP-7980) API Compatibility between 0.23 and 1.0 in org.apache.hadoop.io.compress.Decompressor
[ https://issues.apache.org/jira/browse/HADOOP-7980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13188565#comment-13188565 ]

Jonathan Eagles commented on HADOOP-7980:
-----------------------------------------

Essentially, that will be the recommended action if we mark this behavior as acceptable. It also seems that Decompressor is an interface, preventing option 2 of providing a default definition.

> API Compatibility between 0.23 and 1.0 in
> org.apache.hadoop.io.compress.Decompressor
> --------------------------------------------
>
> Key: HADOOP-7980
> URL: https://issues.apache.org/jira/browse/HADOOP-7980
> Project: Hadoop Common
> Issue Type: Bug
> Components: io
> Affects Versions: 0.23.1
> Reporter: Jonathan Eagles
>
> HADOOP-6835 introduced the public int getRemaining() API in
> org.apache.hadoop.io.compress.Decompressor. This forces custom decompressors
> to implement the new API in order to continue to be used.
[jira] [Commented] (HADOOP-7980) API Compatibility between 0.23 and 1.0 in org.apache.hadoop.io.compress.Decompressor
[ https://issues.apache.org/jira/browse/HADOOP-7980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13188557#comment-13188557 ]

Tom White commented on HADOOP-7980:
-----------------------------------

Implementations of custom decompressors can add the new method (but not mark it with @Override) and it will work in both 1.0 and 0.23, I think. Would that solve your issue?

> API Compatibility between 0.23 and 1.0 in
> org.apache.hadoop.io.compress.Decompressor
> --------------------------------------------
>
> Key: HADOOP-7980
> URL: https://issues.apache.org/jira/browse/HADOOP-7980
> Project: Hadoop Common
> Issue Type: Bug
> Components: io
> Affects Versions: 0.23.1
> Reporter: Jonathan Eagles
>
> HADOOP-6835 introduced the public int getRemaining() API in
> org.apache.hadoop.io.compress.Decompressor. This forces custom decompressors
> to implement the new API in order to continue to be used.
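To make that suggestion concrete, here is a skeletal sketch (the codec logic is a do-nothing placeholder; only the compatibility trick is the point): declare {{getRemaining()}} without {{@Override}} so the same source compiles against both branch-1, where {{Decompressor}} lacks the method, and 0.23, where the interface requires it.

{code}
import java.io.IOException;
import org.apache.hadoop.io.compress.Decompressor;

// Skeletal custom decompressor, illustrating the dual-version trick only.
public class MyDecompressor implements Decompressor {
  private boolean finished;

  public void setInput(byte[] b, int off, int len) { /* buffer input here */ }
  public boolean needsInput() { return true; }
  public void setDictionary(byte[] b, int off, int len) { }
  public boolean needsDictionary() { return false; }
  public boolean finished() { return finished; }
  public int decompress(byte[] b, int off, int len) throws IOException {
    finished = true;   // placeholder: real codecs fill b and return bytes written
    return 0;
  }
  public void reset() { finished = false; }
  public void end() { }

  // New in 0.23 (HADOOP-6835). No @Override: under 1.0 this is simply an
  // extra public method, so one source tree serves both branches.
  public int getRemaining() { return 0; }
}
{code}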
[jira] [Commented] (HADOOP-7980) API Compatibility between 0.23 and 1.0 in org.apache.hadoop.io.compress.Decompressor
[ https://issues.apache.org/jira/browse/HADOOP-7980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13188553#comment-13188553 ]

Jonathan Eagles commented on HADOOP-7980:
-----------------------------------------

Possible resolutions:
* Reimplement in such a way that the new API is not needed
* Create a default definition so that only a recompile is needed
* Mark this as acceptable

> API Compatibility between 0.23 and 1.0 in
> org.apache.hadoop.io.compress.Decompressor
> --------------------------------------------
>
> Key: HADOOP-7980
> URL: https://issues.apache.org/jira/browse/HADOOP-7980
> Project: Hadoop Common
> Issue Type: Bug
> Components: io
> Affects Versions: 0.23.1
> Reporter: Jonathan Eagles
>
> HADOOP-6835 introduced the public int getRemaining() API in
> org.apache.hadoop.io.compress.Decompressor. This forces custom decompressors
> to implement the new API in order to continue to be used.
[jira] [Created] (HADOOP-7980) API Compatibility between 0.23 and 1.0 in org.apache.hadoop.io.compress.Decompressor
API Compatibility between 0.23 and 1.0 in org.apache.hadoop.io.compress.Decompressor
--------------------------------------------------------------------------------------

Key: HADOOP-7980
URL: https://issues.apache.org/jira/browse/HADOOP-7980
Project: Hadoop Common
Issue Type: Bug
Components: io
Affects Versions: 0.23.1
Reporter: Jonathan Eagles

HADOOP-6835 introduced the public int getRemaining() API in org.apache.hadoop.io.compress.Decompressor. This forces custom decompressors to implement the new API in order to continue to be used.
[jira] [Commented] (HADOOP-7973) DistributedFileSystem close has severe consequences
[ https://issues.apache.org/jira/browse/HADOOP-7973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13188546#comment-13188546 ]

Daryn Sharp commented on HADOOP-7973:
-------------------------------------

I thought about that myself. If close removes a cached object from the cache, it's still hard to know exactly when to close it since other references may exist. I have a much better idea in mind, will post example shortly.

> DistributedFileSystem close has severe consequences
> ----------------------------------------------------
>
> Key: HADOOP-7973
> URL: https://issues.apache.org/jira/browse/HADOOP-7973
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs
> Affects Versions: 1.0.0
> Reporter: Daryn Sharp
> Assignee: Daryn Sharp
> Priority: Blocker
> Attachments: HADOOP-7973-2.patch, HADOOP-7973.patch
>
> The way {{FileSystem#close}} works is very problematic. Since the
> {{FileSystems}} are cached, any {{close}} by any caller will cause problems
> for every other reference to it. Will add more detail in the comments.
[jira] [Commented] (HADOOP-7939) Improve Hadoop subcomponent integration in Hadoop 0.23
[ https://issues.apache.org/jira/browse/HADOOP-7939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13188529#comment-13188529 ]

Arun C Murthy commented on HADOOP-7939:
---------------------------------------

Thanks for the discussion Doug. I agree a long-drawn argument isn't productive. However, please indulge me a little longer (at least for my own education *smile*).

I agree that having downstream packagers is useful and very common. However, it is uncommon for downstream packagers to seek changes upstream, particularly to start/stop scripts. They typically maintain their own, i.e. *carry the burden of maintenance*. It would not be unreasonable for Bigtop to do the same, i.e. maintain their own bin/hadoop etc. Not that I would prefer this.

Yes, historically packaging is done downstream, but not in Hadoop's case. We have had our own scripts and packaging (tarballs, rpms etc.) for a long while, and we need to continue to support them for compatibility.

Also, Bigtop is an Apache project, and is very different from a random downstream packager/distro. It seems we could do better here at the ASF by collaborating more closely between the two communities.

OTOH, we are currently debating adding features here which Apache Hadoop will never use, and then we are assuming the burden of maintenance. If the argument comes down to 'Hadoop scripts are a mess, there is no harm adding some more', then I have very little sympathy, as much as I agree we can do better.

Seems to me we could eat our own dogfood all the time by merging the communities for the 'packaging' (alone), reducing dead code and increasing collaboration. Clearly Bigtop is more than just packaging, i.e. it does stack validation etc., which belongs in a separate project.

My primary interest is to have as little 'dead' code in Hadoop as possible, and it seems to me we are adding a fair number of variables (features) we'll never use in Hadoop. By having Bigtop contribute the packaging back to the project we could all share the burden of maintenance.

Clearly, taking away features is always harder than adding them, and we should be careful to do so. Thus, it would be useful for folks in the Apache Bigtop project to share why they feel they cannot collaborate with Apache Hadoop, leading to two different implementations of packaging for Hadoop within the ASF.

Again, I appreciate this healthy discussion.

> Improve Hadoop subcomponent integration in Hadoop 0.23
> -------------------------------------------------------
>
> Key: HADOOP-7939
> URL: https://issues.apache.org/jira/browse/HADOOP-7939
> Project: Hadoop Common
> Issue Type: Improvement
> Components: build, conf, documentation, scripts
> Affects Versions: 0.23.0
> Reporter: Roman Shaposhnik
> Assignee: Roman Shaposhnik
> Fix For: 0.23.1
>
> Attachments: HADOOP-7939.patch.txt, hadoop-layout.sh
>
> h1. Introduction
>
> For the rest of this proposal it is assumed that the current set
> of Hadoop subcomponents is:
> * hadoop-common
> * hadoop-hdfs
> * hadoop-yarn
> * hadoop-mapreduce
>
> It must be noted that this is an open ended list, though. For example,
> implementations of additional frameworks on top of yarn (e.g. MPI) would
> also be considered a subcomponent.
>
> h1. Problem statement
>
> Currently there's an unfortunate coupling and hard-coding present at the
> level of launcher scripts, configuration scripts and Java implementation
> code that prevents us from treating all subcomponents of Hadoop independently
> of each other. In a lot of places it is assumed that bits and pieces
> from individual subcomponents *must* be located at predefined places
> and they can not be dynamically registered/discovered during the runtime.
> This prevents a truly flexible deployment of Hadoop 0.23.
>
> h1. Proposal
>
> NOTE: this is NOT a proposal for redefining the layout from HADOOP-6255.
> The goal here is to keep as much of that layout in place as possible,
> while permitting different deployment layouts.
>
> The aim of this proposal is to introduce the needed level of indirection and
> flexibility in order to accommodate the current assumed layout of Hadoop tarball
> deployments and all the other styles of deployments as well. To this end the
> following set of environment variables needs to be uniformly used in all of
> the subcomponent's launcher scripts, configuration scripts and Java code
> (<SUBCOMPONENT> stands for a literal name of a subcomponent). These variables are
> expected to be defined by <subcomponent>-env.sh scripts and sourcing those files is
> expected to have the desired effect of setting the environment up correctly.
> # HADOOP_<SUBCOMPONENT>_HOME
> ## root of the subtree in a filesystem where a subcomponent is expected to be installed
> ## default value: $0/..
> # HADOOP_<SUBCOMPONENT>_JARS
> ## a subdirectory with all of the jar fil
[jira] [Commented] (HADOOP-7979) Native code: configure LDFLAGS and CXXFLAGS to fix the build on systems like Ubuntu 11.10
[ https://issues.apache.org/jira/browse/HADOOP-7979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13188500#comment-13188500 ]

Arun C Murthy commented on HADOOP-7979:
---------------------------------------

Thanks for the patch Michael! I'll commit this; can you please check if we need this for native code in other parts of Hadoop? E.g. linux-container-executor in MR. Thanks.

> Native code: configure LDFLAGS and CXXFLAGS to fix the build on systems like
> Ubuntu 11.10
> -----------------------------------------------------------------------------
>
> Key: HADOOP-7979
> URL: https://issues.apache.org/jira/browse/HADOOP-7979
> Project: Hadoop Common
> Issue Type: Bug
> Components: build
> Affects Versions: 0.24.0
> Environment: Ubuntu 11.10+
> Reporter: Michael Noll
> Assignee: Michael Noll
> Fix For: 0.24.0
>
> Attachments: HADOOP-7979.trunk.v1.txt
>
> I noticed that the build of Hadoop trunk (0.24) and the 1.0/0.20.20x branches
> fail on Ubuntu 11.10 when trying to include the native code in the build. The
> reason is that the default behavior of {{ld}} was changed in Ubuntu 11.10.
>
> *Background*
>
> From [Ubuntu 11.10 Release Notes|https://wiki.ubuntu.com/OneiricOcelot/ReleaseNotes#GCC_4.6_Toolchain]:
> {code}
> The compiler passes by default two additional flags to the linker:
> [...snipp...]
> -Wl,--as-needed   with this option the linker will only add a DT_NEEDED tag
> for a dynamic library mentioned on the command line if the library is
> actually used.
> {code}
> This was apparently planned to be changed already back in 11.04 but was
> eventually reverted in the final release. From [11.04 Toolchain Transition|https://wiki.ubuntu.com/NattyNarwhal/ToolchainTransition#Indirect_Linking_for_Shared_Libraries]:
> {quote}
> Also in Natty, ld runs with the {{\--as-needed}} option enabled by default.
> This means that, in the example above, if no symbols from {{libwheel}} were
> needed by racetrack, then {{libwheel}} would not be linked even if it was
> explicitly included in the command-line compiler flags. NOTE: The ld
> {{\--as-needed}} default was reverted for the final natty release, and will
> be re-enabled in the o-series.
> {quote}
> I already ran into the same issue with Hadoop-LZO
> (https://github.com/kevinweil/hadoop-lzo/issues/33). See the link and the
> patch for more details. For Hadoop, the problematic configure script is
> {{native/configure}}.
>
> *How to reproduce*
>
> There are two ways to reproduce, depending on the OS you have at hand.
>
> 1. Use a stock Ubuntu 11.10 box and run a build that also compiles the native libs:
> {code}
> # in the top level directory of the 'hadoop-common' repo,
> # i.e. where the BUILDING.txt file resides
> $ mvn -Pnative compile
> {code}
> 2. If you do not have Ubuntu 11.10 at hand, simply add {{-Wl,\--as-needed}}
> explicitly to {{LDFLAGS}}. This configures {{ld}} to work like Ubuntu
> 11.10's default behavior.
>
> *Error message (for trunk/0.24)*
>
> Running the above build command will produce the following output (I added
> {{-e -X}} switches to mvn).
> {code}
> [DEBUG] Executing: /bin/sh -l -c cd /home/mnoll/programming/git/hadoop/hadoop-common/hadoop-common-project/hadoop-common/target/native && make DESTDIR=/home/mnoll/programming/git/hadoop/hadoop-common/hadoop-common-project/hadoop-common/target/native/target install
> [INFO] /bin/bash ./libtool --tag=CC --mode=compile gcc -DHAVE_CONFIG_H -I. -I/usr/lib/jvm/default-java/include -I/usr/lib/jvm/default-java/include/linux -I/home/mnoll/programming/git/hadoop/hadoop-common/hadoop-common-project/hadoop-common/target/native/src -I/home/mnoll/programming/git/hadoop/hadoop-common/hadoop-common-project/hadoop-common/target/native/javah -I/usr/local/include -g -Wall -fPIC -O2 -m64 -g -O2 -MT ZlibCompressor.lo -MD -MP -MF .deps/ZlibCompressor.Tpo -c -o ZlibCompressor.lo `test -f 'src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c' || echo './'`src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
> [INFO] libtool: compile: gcc -DHAVE_CONFIG_H -I. -I/usr/lib/jvm/default-java/include -I/usr/lib/jvm/default-java/include/linux -I/home/mnoll/programming/git/hadoop/hadoop-common/hadoop-common-project/hadoop-common/target/native/src -I/home/mnoll/programming/git/hadoop/hadoop-common/hadoop-common-project/hadoop-common/target/native/javah -I/usr/local/include -g -Wall -fPIC -O2 -m64 -g -O2 -MT ZlibCompressor.lo -MD -MP -MF .deps/ZlibCompressor.Tpo -c src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c -fPIC -DPIC -o .libs/ZlibCompressor.o
> [INFO] src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c: In function 'Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_initIDs':
> [INFO] src/org/apache/had
[jira] [Assigned] (HADOOP-7979) Native code: configure LDFLAGS and CXXFLAGS to fix the build on systems like Ubuntu 11.10
[ https://issues.apache.org/jira/browse/HADOOP-7979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy reassigned HADOOP-7979:
-------------------------------------

    Assignee: Michael Noll

> Native code: configure LDFLAGS and CXXFLAGS to fix the build on systems like
> Ubuntu 11.10
> -----------------------------------------------------------------------------
>
> Key: HADOOP-7979
> URL: https://issues.apache.org/jira/browse/HADOOP-7979
> Project: Hadoop Common
> Issue Type: Bug
> Components: build
> Affects Versions: 0.24.0
> Environment: Ubuntu 11.10+
> Reporter: Michael Noll
> Assignee: Michael Noll
> Fix For: 0.24.0
>
> Attachments: HADOOP-7979.trunk.v1.txt
[jira] [Commented] (HADOOP-7979) Native code: configure LDFLAGS and CXXFLAGS to fix the build on systems like Ubuntu 11.10
[ https://issues.apache.org/jira/browse/HADOOP-7979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13188390#comment-13188390 ]

Michael Noll commented on HADOOP-7979:
--------------------------------------

> Please justify why no new tests are needed for this patch.

Since it's a fix of the build config, I don't think there is a good way to test it apart from running the build itself successfully.

> Also please list what manual steps were performed to verify this patch.

The patched version of trunk (as of commit 78147a9 from two days ago) builds successfully on our side on Ubuntu 11.10. We don't have Macs here, so I can't directly test the Maven profile that triggers on Macs only. However, the activation trigger of the Mac profile follows the Maven docs ([1], and also [2], which defines the possible values for OS families, of which "mac" is used for Macs). Also, I verified that the Mac Maven profile setup properly disables the LDFLAGS option when it (i.e. the profile) is actually triggered.

[1] http://maven.apache.org/guides/introduction/introduction-to-profiles.html
[2] http://maven.apache.org/plugins/maven-enforcer-plugin/rules/requireOS.html

> Native code: configure LDFLAGS and CXXFLAGS to fix the build on systems like
> Ubuntu 11.10
> -----------------------------------------------------------------------------
>
> Key: HADOOP-7979
> URL: https://issues.apache.org/jira/browse/HADOOP-7979
> Project: Hadoop Common
> Issue Type: Bug
> Components: build
> Affects Versions: 0.24.0
> Environment: Ubuntu 11.10+
> Reporter: Michael Noll
> Fix For: 0.24.0
>
> Attachments: HADOOP-7979.trunk.v1.txt
[jira] [Commented] (HADOOP-7979) Native code: configure LDFLAGS and CXXFLAGS to fix the build on systems like Ubuntu 11.10
[ https://issues.apache.org/jira/browse/HADOOP-7979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13188379#comment-13188379 ]

Hadoop QA commented on HADOOP-7979:
-----------------------------------

-1 overall. Here are the results of testing the latest attachment
  http://issues.apache.org/jira/secure/attachment/12510965/HADOOP-7979.trunk.v1.txt
  against trunk revision .

    +1 @author. The patch does not contain any @author tags.

    -1 tests included. The patch doesn't appear to include any new or modified tests.
       Please justify why no new tests are needed for this patch.
       Also please list what manual steps were performed to verify this patch.

    -1 javadoc. The javadoc tool appears to have generated 7 warning messages.

    +1 javac. The applied patch does not increase the total number of javac compiler warnings.

    +1 eclipse:eclipse. The patch built with eclipse:eclipse.

    +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    +1 release audit. The applied patch does not increase the total number of release audit warnings.

    +1 core tests. The patch passed unit tests in .

    +1 contrib tests. The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/517//testReport/
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/517//console

This message is automatically generated.

> Native code: configure LDFLAGS and CXXFLAGS to fix the build on systems like
> Ubuntu 11.10
> -----------------------------------------------------------------------------
>
> Key: HADOOP-7979
> URL: https://issues.apache.org/jira/browse/HADOOP-7979
> Project: Hadoop Common
> Issue Type: Bug
> Components: build
> Affects Versions: 0.24.0
> Environment: Ubuntu 11.10+
> Reporter: Michael Noll
> Fix For: 0.24.0
>
> Attachments: HADOOP-7979.trunk.v1.txt
[jira] [Updated] (HADOOP-7979) Native code: configure LDFLAGS and CXXFLAGS to fix the build on systems like Ubuntu 11.10
[ https://issues.apache.org/jira/browse/HADOOP-7979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Noll updated HADOOP-7979:
---------------------------------

        Fix Version/s: 0.24.0
    Affects Version/s: (was: 1.0.0)
                       (was: 0.20.203.0)
               Status: Patch Available (was: Open)

> Native code: configure LDFLAGS and CXXFLAGS to fix the build on systems like
> Ubuntu 11.10
> -----------------------------------------------------------------------------
>
> Key: HADOOP-7979
> URL: https://issues.apache.org/jira/browse/HADOOP-7979
> Project: Hadoop Common
> Issue Type: Bug
> Components: build
> Affects Versions: 0.24.0
> Environment: Ubuntu 11.10+
> Reporter: Michael Noll
> Fix For: 0.24.0
>
> Attachments: HADOOP-7979.trunk.v1.txt
[jira] [Updated] (HADOOP-7979) Native code: configure LDFLAGS and CXXFLAGS to fix the build on systems like Ubuntu 11.10
[ https://issues.apache.org/jira/browse/HADOOP-7979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Noll updated HADOOP-7979: - Attachment: HADOOP-7979.trunk.v1.txt Patch for Hadoop trunk (0.24). > Native code: configure LDFLAGS and CXXFLAGS to fix the build on systems like > Ubuntu 11.10 > - > > Key: HADOOP-7979 > URL: https://issues.apache.org/jira/browse/HADOOP-7979 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 0.20.203.0, 0.24.0, 1.0.0 > Environment: Ubuntu 11.10+ >Reporter: Michael Noll > Attachments: HADOOP-7979.trunk.v1.txt > > > I noticed that the build of Hadoop trunk (0.24) and the 1.0/0.20.20x branches > fail on Ubuntu 11.10 when trying to include the native code in the build. The > reason is that the default behavior of {{ld}} was changed in Ubuntu 11.10. > *Background* > From [Ubuntu 11.10 Release > Notes|https://wiki.ubuntu.com/OneiricOcelot/ReleaseNotes#GCC_4.6_Toolchain]: > {code} > The compiler passes by default two additional flags to the linker: > [...snipp...] > -Wl,--as-needed with this option the linker will only add a DT_NEEDED tag > for a dynamic library mentioned on the command line if if the library is > actually used. > {code} > This was apparently planned to be changed already back in 11.04 but was > eventually reverted in the final release. From [11.04 Toolchain > Transition|https://wiki.ubuntu.com/NattyNarwhal/ToolchainTransition#Indirect_Linking_for_Shared_Libraries]: > {quote} > Also in Natty, ld runs with the {{\--as-needed}} option enabled by default. > This means that, in the example above, if no symbols from {{libwheel}} were > needed by racetrack, then {{libwheel}} would not be linked even if it was > explicitly included in the command-line compiler flags. NOTE: The ld > {{\--as-needed}} default was reverted for the final natty release, and will > be re-enabled in the o-series. > {quote} > I already run into the same issue with Hadoop-LZO > (https://github.com/kevinweil/hadoop-lzo/issues/33). See the link and the > patch for more details. For Hadoop, the problematic configure script is > {{native/configure}}. > *How to reproduce* > There are two ways to reproduce, depending on the OS you have at hand. > 1. Use a stock Ubuntu 11.10 box and run a build that also compiles the native > libs: > {code} > # in the top level directory of the 'hadoop-common' repo, > # i.e. where the BUILDING.txt file resides > $ mvn -Pnative compile > {code} > 2. If you do not have Ubuntu 11.10 at hand, simply add {{-Wl,\--as-needed}} > explicitly to {{LDFLAGS}}. This configures {{ld}} to work like Ubuntu > 11.10's default behavior. > *Error message (for trunk/0.24)* > Running the above build command will produce the following output (I added > {{-e -X}} switches to mvn). > {code} > [DEBUG] Executing: /bin/sh -l -c cd > /home/mnoll/programming/git/hadoop/hadoop-common/hadoop-common-project/hadoop-common/target/native > && make > DESTDIR=/home/mnoll/programming/git/hadoop/hadoop-common/hadoop-common-project/hadoop-common/target/native/target > install > [INFO] /bin/bash ./libtool --tag=CC --mode=compile gcc -DHAVE_CONFIG_H -I. 
> -I/usr/lib/jvm/default-java/include > -I/usr/lib/jvm/default-java/include/linux > -I/home/mnoll/programming/git/hadoop/hadoop-common/hadoop-common-project/hadoop-common/target/native/src > > -I/home/mnoll/programming/git/hadoop/hadoop-common/hadoop-common-project/hadoop-common/target/native/javah > -I/usr/local/include -g -Wall -fPIC -O2 -m64 -g -O2 -MT ZlibCompressor.lo > -MD -MP -MF .deps/ZlibCompressor.Tpo -c -o ZlibCompressor.lo `test -f > 'src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c' || echo > './'`src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c > [INFO] libtool: compile: gcc -DHAVE_CONFIG_H -I. > -I/usr/lib/jvm/default-java/include -I/usr/lib/jvm/default-java/include/linux > -I/home/mnoll/programming/git/hadoop/hadoop-common/hadoop-common-project/hadoop-common/target/native/src > > -I/home/mnoll/programming/git/hadoop/hadoop-common/hadoop-common-project/hadoop-common/target/native/javah > -I/usr/local/include -g -Wall -fPIC -O2 -m64 -g -O2 -MT ZlibCompressor.lo > -MD -MP -MF .deps/ZlibCompressor.Tpo -c > src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c -fPIC -DPIC -o > .libs/ZlibCompressor.o > [INFO] src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c: In function > 'Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_initIDs': > [INFO] src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c:71:41: error: > expected expression before ',' token > [INFO] make: *** [ZlibCompressor.lo] Error 1 > {code} > *How to fix* > The fix involves adding proper settings for {{LDFLAGS}} to the build config. > In trunk, this is {{hadoop-common-project/hadoop-common/pom.xml}}. In > branches 1.0 and 0.20.20x, this is {{build.xml}}. Basically, the fix > explicitly adds {{-Wl,\--no-as-needed}} to {{LDFLAGS}}. Special care must be > taken not to add this option when running on Mac OS, as its version of > {{ld}} does not support it (and does not need it, because it already behaves > as desired by default).
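Option 2 of the reproduce steps above boils down to a one-off environment override. A minimal sketch, assuming the native build's configure step picks {{LDFLAGS}} up from the environment (only the flag itself comes from the report):
{code}
# Emulate the Ubuntu 11.10 linker default on any GNU toolchain,
# then rebuild the native libs from the top of the hadoop-common repo.
$ export LDFLAGS="-Wl,--as-needed"
$ mvn -Pnative compile
# Expected result: the native build fails with the ZlibCompressor
# error shown in the log above.
{code}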
[jira] [Updated] (HADOOP-7979) Native code: configure LDFLAGS and CXXFLAGS to fix the build on systems like Ubuntu 11.10
[ https://issues.apache.org/jira/browse/HADOOP-7979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Noll updated HADOOP-7979: - Description: I noticed that the build of Hadoop trunk (0.24) and the 1.0/0.20.20x branches fail on Ubuntu 11.10 when trying to include the native code in the build. The reason is that the default behavior of {{ld}} was changed in Ubuntu 11.10. *Background* From [Ubuntu 11.10 Release Notes|https://wiki.ubuntu.com/OneiricOcelot/ReleaseNotes#GCC_4.6_Toolchain]: {code} The compiler passes by default two additional flags to the linker: [...snipp...] -Wl,--as-needed with this option the linker will only add a DT_NEEDED tag for a dynamic library mentioned on the command line if the library is actually used. {code} This change was apparently already planned for 11.04 but was eventually reverted in the final release. From [11.04 Toolchain Transition|https://wiki.ubuntu.com/NattyNarwhal/ToolchainTransition#Indirect_Linking_for_Shared_Libraries]: {quote} Also in Natty, ld runs with the {{\--as-needed}} option enabled by default. This means that, in the example above, if no symbols from {{libwheel}} were needed by racetrack, then {{libwheel}} would not be linked even if it was explicitly included in the command-line compiler flags. NOTE: The ld {{\--as-needed}} default was reverted for the final natty release, and will be re-enabled in the o-series. {quote} I already ran into the same issue with Hadoop-LZO (https://github.com/kevinweil/hadoop-lzo/issues/33). See the link and the patch for more details. For Hadoop, the problematic configure script is {{native/configure}}. *How to reproduce* There are two ways to reproduce, depending on the OS you have at hand. 1. Use a stock Ubuntu 11.10 box and run a build that also compiles the native libs: {code} # in the top level directory of the 'hadoop-common' repo, # i.e. where the BUILDING.txt file resides $ mvn -Pnative compile {code} 2. If you do not have Ubuntu 11.10 at hand, simply add {{-Wl,\--as-needed}} explicitly to {{LDFLAGS}}. This configures {{ld}} to behave like Ubuntu 11.10's default. *Error message (for trunk/0.24)* Running the above build command will produce the following output (I added {{-e -X}} switches to mvn). {code} [DEBUG] Executing: /bin/sh -l -c cd /home/mnoll/programming/git/hadoop/hadoop-common/hadoop-common-project/hadoop-common/target/native && make DESTDIR=/home/mnoll/programming/git/hadoop/hadoop-common/hadoop-common-project/hadoop-common/target/native/target install [INFO] /bin/bash ./libtool --tag=CC --mode=compile gcc -DHAVE_CONFIG_H -I. -I/usr/lib/jvm/default-java/include -I/usr/lib/jvm/default-java/include/linux -I/home/mnoll/programming/git/hadoop/hadoop-common/hadoop-common-project/hadoop-common/target/native/src -I/home/mnoll/programming/git/hadoop/hadoop-common/hadoop-common-project/hadoop-common/target/native/javah -I/usr/local/include -g -Wall -fPIC -O2 -m64 -g -O2 -MT ZlibCompressor.lo -MD -MP -MF .deps/ZlibCompressor.Tpo -c -o ZlibCompressor.lo `test -f 'src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c' || echo './'`src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c [INFO] libtool: compile: gcc -DHAVE_CONFIG_H -I. 
-I/usr/lib/jvm/default-java/include -I/usr/lib/jvm/default-java/include/linux -I/home/mnoll/programming/git/hadoop/hadoop-common/hadoop-common-project/hadoop-common/target/native/src -I/home/mnoll/programming/git/hadoop/hadoop-common/hadoop-common-project/hadoop-common/target/native/javah -I/usr/local/include -g -Wall -fPIC -O2 -m64 -g -O2 -MT ZlibCompressor.lo -MD -MP -MF .deps/ZlibCompressor.Tpo -c src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c -fPIC -DPIC -o .libs/ZlibCompressor.o [INFO] src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c: In function 'Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_initIDs': [INFO] src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c:71:41: error: expected expression before ',' token [INFO] make: *** [ZlibCompressor.lo] Error 1 {code} *How to fix* The fix involves adding proper settings for {{LDFLAGS}} to the build config. In trunk, this is {{hadoop-common-project/hadoop-common/pom.xml}}. In branches 1.0 and 0.20.20x, this is {{build.xml}}. Basically, the fix explicitly adds {{-Wl,\--no-as-needed}} to {{LDFLAGS}}. Special care must be taken not to add this option when running on Mac OS, as its version of {{ld}} does not support it (and does not need it, because it already behaves as desired by default).
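The gist of the fix can be sketched in shell terms. This is illustrative only: the actual patch edits the Maven/Ant build files named above, and the {{uname}} check here is an assumed stand-in for however those builds detect the platform:
{code}
# Add the GNU-ld-only flag on every platform except Mac OS X,
# whose linker neither supports nor needs --no-as-needed.
if [ "$(uname -s)" != "Darwin" ]; then
  LDFLAGS="$LDFLAGS -Wl,--no-as-needed"
fi
export LDFLAGS
{code}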
[jira] [Created] (HADOOP-7979) Native code: configure LDFLAGS and CXXFLAGS to fix the build on systems like Ubuntu 11.10
Native code: configure LDFLAGS and CXXFLAGS to fix the build on systems like Ubuntu 11.10 - Key: HADOOP-7979 URL: https://issues.apache.org/jira/browse/HADOOP-7979 Project: Hadoop Common Issue Type: Bug Components: build Affects Versions: 1.0.0, 0.20.203.0, 0.24.0 Environment: Ubuntu 11.10+ Reporter: Michael Noll I noticed that the build of Hadoop trunk (0.24) and the 1.0/0.20.20x branches fail on Ubuntu 11.10 when trying to include the native code in the build. The reason is that the default behavior of {{ld}} was changed in Ubuntu 11.10. *Background* From [Ubuntu 11.10 Release Notes|https://wiki.ubuntu.com/OneiricOcelot/ReleaseNotes#GCC_4.6_Toolchain]: {code} The compiler passes by default two additional flags to the linker: [...snipp...] -Wl,--as-needed with this option the linker will only add a DT_NEEDED tag for a dynamic library mentioned on the command line if the library is actually used. {code} This change was apparently already planned for 11.04 but was eventually reverted in the final release. From [11.04 Toolchain Transition|https://wiki.ubuntu.com/NattyNarwhal/ToolchainTransition#Indirect_Linking_for_Shared_Libraries]: {quote} Also in Natty, ld runs with the {{\--as-needed}} option enabled by default. This means that, in the example above, if no symbols from {{libwheel}} were needed by racetrack, then {{libwheel}} would not be linked even if it was explicitly included in the command-line compiler flags. NOTE: The ld {{\--as-needed}} default was reverted for the final natty release, and will be re-enabled in the o-series. {quote} I already ran into the same issue with Hadoop-LZO (https://github.com/kevinweil/hadoop-lzo/issues/33). See the link and the patch for more details. For Hadoop, the problematic configure script is {{native/configure}}. *How to reproduce* There are two ways to reproduce, depending on the OS you have at hand. 1. Use a stock Ubuntu 11.10 box and run a build that also compiles the native libs: {code} # in the top level directory of the 'hadoop-common' repo, # i.e. where the BUILDING.txt file resides $ mvn -Pnative compile {code} 2. If you do not have Ubuntu 11.10 at hand, simply add {{-Wl,\--as-needed}} explicitly to {{LDFLAGS}}. This configures {{ld}} to behave like Ubuntu 11.10's default. *Error message (for trunk/0.24)* Running the above build command will produce the following output (I added {{-e -X}} switches to mvn). {code} [DEBUG] Executing: /bin/sh -l -c cd /home/mnoll/programming/git/hadoop/hadoop-common/hadoop-common-project/hadoop-common/target/native && make DESTDIR=/home/mnoll/programming/git/hadoop/hadoop-common/hadoop-common-project/hadoop-common/target/native/target install [INFO] /bin/bash ./libtool --tag=CC --mode=compile gcc -DHAVE_CONFIG_H -I. -I/usr/lib/jvm/default-java/include -I/usr/lib/jvm/default-java/include/linux -I/home/mnoll/programming/git/hadoop/hadoop-common/hadoop-common-project/hadoop-common/target/native/src -I/home/mnoll/programming/git/hadoop/hadoop-common/hadoop-common-project/hadoop-common/target/native/javah -I/usr/local/include -g -Wall -fPIC -O2 -m64 -g -O2 -MT ZlibCompressor.lo -MD -MP -MF .deps/ZlibCompressor.Tpo -c -o ZlibCompressor.lo `test -f 'src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c' || echo './'`src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c [INFO] libtool: compile: gcc -DHAVE_CONFIG_H -I. 
-I/usr/lib/jvm/default-java/include -I/usr/lib/jvm/default-java/include/linux -I/home/mnoll/programming/git/hadoop/hadoop-common/hadoop-common-project/hadoop-common/target/native/src -I/home/mnoll/programming/git/hadoop/hadoop-common/hadoop-common-project/hadoop-common/target/native/javah -I/usr/local/include -g -Wall -fPIC -O2 -m64 -g -O2 -MT ZlibCompressor.lo -MD -MP -MF .deps/ZlibCompressor.Tpo -c src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c -fPIC -DPIC -o .libs/ZlibCompressor.o [INFO] src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c: In function 'Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_initIDs': [INFO] src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c:71:41: error: expected expression before ',' token [INFO] make: *** [ZlibCompressor.lo] Error 1 {code} *How to fix* The fix involves adding proper settings for {{LDFLAGS}} to the build config. In trunk, this is {{hadoop-common/pom.xml}}. In branches 1.0 and 0.20.20x, this is {{build.xml}}. Basically, the fix explicitly adds {{-Wl,\--no-as-needed}} to {{LDFLAGS}}. Special care must be taken not to add this option when running on Mac OS, as its version of {{ld}} does not support it (and does not need it, because it already behaves as desired by default). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
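To see the changed {{ld}} default in isolation, here is a standalone experiment (assuming gcc, binutils, and the zlib development files are installed; the file names are invented for the demo):
{code}
# A trivial program that links against zlib but never calls into it.
$ echo 'int main(void) { return 0; }' > asneeded_demo.c
# Old default: the unused library still gets a DT_NEEDED entry.
$ gcc -Wl,--no-as-needed asneeded_demo.c -lz -o demo_old
$ readelf -d demo_old | grep NEEDED   # libz.so.1 is listed
# Ubuntu 11.10 default: the unused library is dropped.
$ gcc -Wl,--as-needed asneeded_demo.c -lz -o demo_new
$ readelf -d demo_new | grep NEEDED   # libz.so.1 is gone (libc remains)
{code}
Note that {{\--as-needed}} is positional: it only affects libraries that appear after it on the link line, which is why the flag precedes {{-lz}} in the demo.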