[jira] [Updated] (HADOOP-14278) [JDK9] TestDFSClientFailover is using the removed sun.net.spi.nameservice package
[ https://issues.apache.org/jira/browse/HADOOP-14278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andres Perez updated HADOOP-14278: -- Affects Version/s: 3.0.0-alpha2 > [JDK9] TestDFSClientFailover is using the removed sun.net.spi.nameservice > package > - > > Key: HADOOP-14278 > URL: https://issues.apache.org/jira/browse/HADOOP-14278 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha2 >Reporter: Andres Perez >Priority: Minor > > In JDK9 sun.net.spi.nameservice package has been removed > http://hg.openjdk.java.net/jdk9/jdk9/jdk/rev/d11ad4b19348 making it fail when > running {{mvn clean install -DskipTests}} with error: > {code} > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile > (default-testCompile) on project hadoop-hdfs: Compilation failure: > Compilation failure: > [ERROR] > /root/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientFailover.java:[66,31] > package sun.net.spi.nameservice does not exist > [ERROR] > /root/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientFailover.java:[233,11] > cannot find symbol > [ERROR] symbol: class NameService > [ERROR] location: class org.apache.hadoop.hdfs.TestDFSClientFailover > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14278) [JDK9] TestDFSClientFailover is using the removed sun.net.spi.nameservice package
[ https://issues.apache.org/jira/browse/HADOOP-14278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andres Perez updated HADOOP-14278: -- Tags: (was: jdk9) > [JDK9] TestDFSClientFailover is using the removed sun.net.spi.nameservice > package > - > > Key: HADOOP-14278 > URL: https://issues.apache.org/jira/browse/HADOOP-14278 > Project: Hadoop Common > Issue Type: Bug >Reporter: Andres Perez > > In JDK9 sun.net.spi.nameservice package has been removed > http://hg.openjdk.java.net/jdk9/jdk9/jdk/rev/d11ad4b19348 making it fail when > running {{mvn clean install -DskipTests}} with error: > {code} > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile > (default-testCompile) on project hadoop-hdfs: Compilation failure: > Compilation failure: > [ERROR] > /root/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientFailover.java:[66,31] > package sun.net.spi.nameservice does not exist > [ERROR] > /root/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientFailover.java:[233,11] > cannot find symbol > [ERROR] symbol: class NameService > [ERROR] location: class org.apache.hadoop.hdfs.TestDFSClientFailover > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14278) [JDK9] TestDFSClientFailover is using the removed sun.net.spi.nameservice package
[ https://issues.apache.org/jira/browse/HADOOP-14278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andres Perez updated HADOOP-14278: -- Priority: Minor (was: Major) > [JDK9] TestDFSClientFailover is using the removed sun.net.spi.nameservice > package > - > > Key: HADOOP-14278 > URL: https://issues.apache.org/jira/browse/HADOOP-14278 > Project: Hadoop Common > Issue Type: Bug >Reporter: Andres Perez >Priority: Minor > > In JDK9 sun.net.spi.nameservice package has been removed > http://hg.openjdk.java.net/jdk9/jdk9/jdk/rev/d11ad4b19348 making it fail when > running {{mvn clean install -DskipTests}} with error: > {code} > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile > (default-testCompile) on project hadoop-hdfs: Compilation failure: > Compilation failure: > [ERROR] > /root/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientFailover.java:[66,31] > package sun.net.spi.nameservice does not exist > [ERROR] > /root/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientFailover.java:[233,11] > cannot find symbol > [ERROR] symbol: class NameService > [ERROR] location: class org.apache.hadoop.hdfs.TestDFSClientFailover > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14278) [JDK9] TestDFSClientFailover is using the removed sun.net.spi.nameservice package
Andres Perez created HADOOP-14278: - Summary: [JDK9] TestDFSClientFailover is using the removed sun.net.spi.nameservice package Key: HADOOP-14278 URL: https://issues.apache.org/jira/browse/HADOOP-14278 Project: Hadoop Common Issue Type: Bug Reporter: Andres Perez In JDK 9 the sun.net.spi.nameservice package has been removed (http://hg.openjdk.java.net/jdk9/jdk9/jdk/rev/d11ad4b19348), making the build fail when running {{mvn clean install -DskipTests}} with the error: {code} [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile (default-testCompile) on project hadoop-hdfs: Compilation failure: Compilation failure: [ERROR] /root/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientFailover.java:[66,31] package sun.net.spi.nameservice does not exist [ERROR] /root/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientFailover.java:[233,11] cannot find symbol [ERROR] symbol: class NameService [ERROR] location: class org.apache.hadoop.hdfs.TestDFSClientFailover {code}
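Since {{sun.net.spi.nameservice}} is a JDK-internal API, a build that must span JDK 8 and JDK 9 cannot import it directly. As a rough illustration only (this is not the HADOOP-14278 patch; the class name below is real but the probe is a hypothetical workaround), the test could look the class up reflectively and skip the DNS-spoofing cases when it is absent instead of failing at compile time:

```java
// Sketch: probe for the JDK-internal NameService SPI at runtime rather
// than at import time, so the same source compiles on JDK 8 and JDK 9+.
// The package name mirrors the one removed by the changeset linked above.
public class NameServiceProbe {

    /** Returns true when sun.net.spi.nameservice.NameService is loadable. */
    public static boolean nameServiceSpiAvailable() {
        try {
            Class.forName("sun.net.spi.nameservice.NameService");
            return true;
        } catch (ClassNotFoundException e) {
            return false; // removed in JDK 9
        }
    }

    public static void main(String[] args) {
        System.out.println("NameService SPI present: " + nameServiceSpiAvailable());
    }
}
```

A JUnit test could then guard its DNS-spoofing setup with {{Assume.assumeTrue(NameServiceProbe.nameServiceSpiAvailable())}}, so compilation no longer depends on the internal package.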
[jira] [Comment Edited] (HADOOP-13986) UGI.UgiMetrics.renewalFailureTotal is not printable
[ https://issues.apache.org/jira/browse/HADOOP-13986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15832229#comment-15832229 ] Andres Perez edited comment on HADOOP-13986 at 1/20/17 6:44 PM: Could I work on this? Will it be easier to add the abstract signature to the {{MutableMetric}}? was (Author: aaperezl): Could I work on this? And should we include those that extend {{AbstractMetric}} as well as those that extend {{MutableMetric}}? > UGI.UgiMetrics.renewalFailureTotal is not printable > --- > > Key: HADOOP-13986 > URL: https://issues.apache.org/jira/browse/HADOOP-13986 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.8.0, 3.0.0-alpha2 >Reporter: Wei-Chiu Chuang >Priority: Minor > > The metrics (renewalFailures and renewalFailuresTotal) in the following code > snippet are not printable. > {code:title=UserGroupInformation.java} > metrics.renewalFailuresTotal.incr(); > final long tgtEndTime = tgt.getEndTime().getTime(); > LOG.warn("Exception encountered while running the renewal " > + "command for {}. (TGT end time:{}, renewalFailures: {}," > + "renewalFailuresTotal: {})", getUserName(), tgtEndTime, > metrics.renewalFailures, metrics.renewalFailuresTotal, ie); > {code} > The output of the code is like the following: > {quote} > 2017-01-12 12:23:14,062 WARN security.UserGroupInformation > (UserGroupInformation.java:run(1012)) - Exception encountered while running > the renewal command for f...@example.com. (TGT end time:148425260, > renewalFailures: > org.apache.hadoop.metrics2.lib.MutableGaugeInt@323aa7f9,renewalFailuresTotal: > org.apache.hadoop.metrics2.lib.MutableGaugeLong@c8af058) > ExitCodeException exitCode=1: kinit: krb5_cc_get_principal: No credentials > cache file found > {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13986) UGI.UgiMetrics.renewalFailureTotal is not printable
[ https://issues.apache.org/jira/browse/HADOOP-13986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15832229#comment-15832229 ] Andres Perez commented on HADOOP-13986: --- Could I work on this? And should we include those that extend {{AbstractMetric}} as well as those that extend {{MutableMetric}}? > UGI.UgiMetrics.renewalFailureTotal is not printable > --- > > Key: HADOOP-13986 > URL: https://issues.apache.org/jira/browse/HADOOP-13986 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.8.0, 3.0.0-alpha2 >Reporter: Wei-Chiu Chuang >Priority: Minor > > The metrics (renewalFailures and renewalFailuresTotal) in the following code > snippet are not printable. > {code:title=UserGroupInformation.java} > metrics.renewalFailuresTotal.incr(); > final long tgtEndTime = tgt.getEndTime().getTime(); > LOG.warn("Exception encountered while running the renewal " > + "command for {}. (TGT end time:{}, renewalFailures: {}," > + "renewalFailuresTotal: {})", getUserName(), tgtEndTime, > metrics.renewalFailures, metrics.renewalFailuresTotal, ie); > {code} > The output of the code is like the following: > {quote} > 2017-01-12 12:23:14,062 WARN security.UserGroupInformation > (UserGroupInformation.java:run(1012)) - Exception encountered while running > the renewal command for f...@example.com. (TGT end time:148425260, > renewalFailures: > org.apache.hadoop.metrics2.lib.MutableGaugeInt@323aa7f9,renewalFailuresTotal: > org.apache.hadoop.metrics2.lib.MutableGaugeLong@c8af058) > ExitCodeException exitCode=1: kinit: krb5_cc_get_principal: No credentials > cache file found > {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
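For reference, the gauges print as {{ClassName@hashcode}} because {{MutableGaugeInt}}/{{MutableGaugeLong}} inherit the default {{Object.toString()}}, which the SLF4J-style {} placeholders fall back to. A minimal, self-contained sketch (hypothetical class, not the Hadoop one) of the kind of {{toString()}} override that would make the metric printable:

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal stand-in for a metrics2 mutable gauge. Without the toString()
// override below, using the object in a log line or string concatenation
// yields "ClassName@hashcode" instead of the value, exactly as in the
// warning output quoted above.
public class PrintableGauge {
    private final AtomicLong value = new AtomicLong();

    public void incr() { value.incrementAndGet(); }

    public long value() { return value.get(); }

    @Override
    public String toString() { return String.valueOf(value.get()); }
}
```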
[jira] [Commented] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user
[ https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830410#comment-15830410 ] Andres Perez commented on HADOOP-12953: --- Retesting this patch > New API for libhdfs to get FileSystem object as a proxy user > > > Key: HADOOP-12953 > URL: https://issues.apache.org/jira/browse/HADOOP-12953 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.7.2 >Reporter: Uday Kale >Assignee: Uday Kale > Attachments: HADOOP-12953.001.patch, HADOOP-12953.002.patch > > > Secure impersonation in HDFS needs users to create proxy users and work with > those. In libhdfs, the hdfsBuilder accepts a userName but calls > FileSytem.get() or FileSystem.newInstance() with the user name to connect as. > But, both these interfaces use getBestUGI() to get the UGI for the given > user. This is not necessarily true for all services whose end-users would not > access HDFS directly, but go via the service to first get authenticated with > LDAP, then the service owner can impersonate the end-user to eventually > provide the underlying data. > For such services that authenticate end-users via LDAP, the end users are not > authenticated by Kerberos, so their authentication details wont be in the > Kerberos ticket cache. HADOOP_PROXY_USER is not a thread-safe way to get this > either. > Hence the need for the new API for libhdfs to get the FileSystem object as a > proxy user using the 'secure impersonation' recommendations. This approach is > secure since HDFS authenticates the service owner and then validates the > right for the service owner to impersonate the given user as allowed by > hadoop.proxyusers.* parameters of HDFS config. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
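On the Java side, the 'secure impersonation' flow the proposed libhdfs API maps to is {{UserGroupInformation.createProxyUser(...)}} followed by {{doAs(...)}}. Below is a self-contained stand-in (an entirely hypothetical class with no Hadoop dependency) that only models the shape of that call; the real UGI performs Kerberos checks and the NameNode enforces hadoop.proxyusers.* on the server side:

```java
import java.util.function.Supplier;

// Hypothetical stand-in modelling the proxy-user call shape:
// the service owner (Kerberos-authenticated login user) impersonates an
// LDAP-authenticated end user and runs filesystem actions as that user.
public class ProxyUserSketch {
    private final String realUser;    // service owner
    private final String proxiedUser; // impersonated end user

    private ProxyUserSketch(String realUser, String proxiedUser) {
        this.realUser = realUser;
        this.proxiedUser = proxiedUser;
    }

    /** Mirrors UserGroupInformation.createProxyUser(user, loginUser). */
    public static ProxyUserSketch createProxyUser(String user, String loginUser) {
        return new ProxyUserSketch(loginUser, user);
    }

    /** Runs the action under the impersonated identity. */
    public <T> T doAs(Supplier<T> action) {
        return action.get();
    }

    public String effectiveUser() { return proxiedUser; }

    public String realUser() { return realUser; }
}
```

Unlike HADOOP_PROXY_USER, which is process-global, this object-per-user shape is what makes the approach thread-safe.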
[jira] [Commented] (HADOOP-13957) prevent bad PATHs
[ https://issues.apache.org/jira/browse/HADOOP-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830376#comment-15830376 ] Andres Perez commented on HADOOP-13957: --- I guess you are right; I was thinking more from a dev-environment point of view, but even then, having world-writable directories on the PATH doesn't make sense. > prevent bad PATHs > - > > Key: HADOOP-13957 > URL: https://issues.apache.org/jira/browse/HADOOP-13957 > Project: Hadoop Common > Issue Type: New Feature > Components: security >Affects Versions: 3.0.0-alpha2 >Reporter: Allen Wittenauer > > Apache Hadoop daemons should fail to start if the shell PATH contains world > writable directories or '.' (cwd). Doing so would close an attack vector on > misconfigured systems.
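The proposed startup check could be sketched like this (illustrative Java, not the eventual Hadoop shell-level implementation; the permission probe is POSIX-only and the class name is hypothetical):

```java
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermission;
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Sketch of the proposed check: scan each PATH entry and flag '.' (the
// cwd, including an empty entry, which the shell also treats as cwd) and
// world-writable directories.
public class PathSanityCheck {

    /** Returns the offending entries of a PATH-style string; empty if clean. */
    public static List<String> badEntries(String path) {
        List<String> bad = new ArrayList<>();
        for (String entry : path.split(File.pathSeparator)) {
            if (entry.isEmpty() || entry.equals(".")) {
                bad.add(".");
                continue;
            }
            try {
                Set<PosixFilePermission> perms =
                    Files.getPosixFilePermissions(Paths.get(entry));
                if (perms.contains(PosixFilePermission.OTHERS_WRITE)) {
                    bad.add(entry);
                }
            } catch (Exception ignored) {
                // nonexistent entry or non-POSIX filesystem: not flagged here
            }
        }
        return bad;
    }
}
```

A daemon could refuse to start whenever {{badEntries(System.getenv("PATH"))}} is non-empty, which is exactly the fail-fast behaviour the issue asks for.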
[jira] [Commented] (HADOOP-13957) prevent bad PATHs
[ https://issues.apache.org/jira/browse/HADOOP-13957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15819843#comment-15819843 ] Andres Perez commented on HADOOP-13957: --- Maybe this should be implemented as a configuration option that you can enable/disable this check. {{hadoop.security.check-path = true|false}} > prevent bad PATHs > - > > Key: HADOOP-13957 > URL: https://issues.apache.org/jira/browse/HADOOP-13957 > Project: Hadoop Common > Issue Type: New Feature > Components: security >Affects Versions: 3.0.0-alpha2 >Reporter: Allen Wittenauer > > Apache Hadoop daemons should fail to start if the shell PATH contains world > writable directories or '.' (cwd). Doing so would close an attack vector on > misconfigured systems. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-13873) log DNS addresses on s3a init
[ https://issues.apache.org/jira/browse/HADOOP-13873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15733093#comment-15733093 ] Andres Perez edited comment on HADOOP-13873 at 12/8/16 7:00 PM: Would adding this to {{S3AFileSystem#initialize}} be enough? {code} ... bucket = name.getHost(); if(LOG.isDebugEnabled()) { LOG.debug("Bucket endpoint: " + InetAddress.getByName(bucket).toString()); } ... {code} was (Author: aaperezl): Would adding this to {{S3AFileSystem#initialize}} be enough: {code} ... bucket = name.getHost(); if(LOG.isDebugEnabled()) { LOG.debug("Bucket endpoint: " + InetAddress.getByName(bucket).toString()); } ... {code} > log DNS addresses on s3a init > - > > Key: HADOOP-13873 > URL: https://issues.apache.org/jira/browse/HADOOP-13873 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Priority: Minor > > HADOOP-13871 has shown that network problems can kill perf, and that it's v. > hard to track down, even if you turn up the logging in hadoop.fs.s3a and > com.amazon layers to debug. > we could maybe improve things by printing out the IPAddress of the s3 > endpoint, as that could help with the network tracing. Printing from within > hadoop shows the one given to S3a, not a different one returned by any load > balancer. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13873) log DNS addresses on s3a init
[ https://issues.apache.org/jira/browse/HADOOP-13873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15733093#comment-15733093 ] Andres Perez commented on HADOOP-13873: --- Would adding this to {{S3AFileSystem#initialize}} be enough: {code} ... bucket = name.getHost(); if(LOG.isDebugEnabled()) { LOG.debug("Bucket endpoint: " + InetAddress.getByName(bucket).toString()); } ... {code} > log DNS addresses on s3a init > - > > Key: HADOOP-13873 > URL: https://issues.apache.org/jira/browse/HADOOP-13873 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Priority: Minor > > HADOOP-13871 has shown that network problems can kill perf, and that it's v. > hard to track down, even if you turn up the logging in hadoop.fs.s3a and > com.amazon layers to debug. > we could maybe improve things by printing out the IPAddress of the s3 > endpoint, as that could help with the network tracing. Printing from within > hadoop shows the one given to S3a, not a different one returned by any load > balancer. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
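Building on the snippet above, one might resolve and log every address for the endpoint, since a load balancer can return several A records; here is a sketch with a hypothetical helper (plain string formatting standing in for the debug logger):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.StringJoiner;

// Sketch of the proposed debug line: resolve all records for the bucket
// endpoint so a load-balanced endpoint shows every address, not just the
// first one returned by getByName().
public class EndpointLogger {

    /** Formats "host -> addr1, addr2, ..." for a DNS name. */
    public static String describe(String host) {
        try {
            StringJoiner addrs = new StringJoiner(", ");
            for (InetAddress a : InetAddress.getAllByName(host)) {
                addrs.add(a.getHostAddress());
            }
            return host + " -> " + addrs;
        } catch (UnknownHostException e) {
            return host + " -> unresolved (" + e.getMessage() + ")";
        }
    }
}
```

In {{S3AFileSystem#initialize}} this would sit behind the same {{LOG.isDebugEnabled()}} guard as in the comment above, since the extra lookup is only worth paying for when debugging.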
[jira] [Commented] (HADOOP-13860) ZKFailoverController.ElectorCallbacks should have a non-trivial implementation for enterNeutralMode
[ https://issues.apache.org/jira/browse/HADOOP-13860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726511#comment-15726511 ] Andres Perez commented on HADOOP-13860: --- Hi [~kasha], I would like to take care of this issue if possible. Are we more inclined to leave it as it is and document it or create the implementation? In case of the latter, should we follow a similar approach as YARN-5677 did? > ZKFailoverController.ElectorCallbacks should have a non-trivial > implementation for enterNeutralMode > --- > > Key: HADOOP-13860 > URL: https://issues.apache.org/jira/browse/HADOOP-13860 > Project: Hadoop Common > Issue Type: Bug >Reporter: Karthik Kambatla > > ZKFailoverController.ElectorCallbacks implements enterNeutralMode trivially. > This can lead to a master staying active for longer than necessary, unless > the fencing scheme ensures the first active is transitioned to standby before > transitioning another master to active (e.g. ssh fencing). > YARN-5677 does this for YARN in EmbeddedElectorService. If we choose not to > implement, we should at least document this so any user of > ZKFailoverController in the future is aware. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
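To make the trade-off concrete, here is a sketch of what a non-trivial {{enterNeutralMode}} could look like (a hypothetical stand-in class; the real callbacks would coordinate with ZKFailoverController state and fencing rather than a local enum):

```java
// Sketch of an elector callback whose enterNeutralMode() actively steps
// the local master down instead of doing nothing, so a node that has lost
// its ZooKeeper session does not linger in the active role waiting for
// fencing to catch up.
public class ElectorCallbacksSketch {
    public enum State { ACTIVE, STANDBY, NEUTRAL }

    private State state = State.STANDBY;

    public void becomeActive() { state = State.ACTIVE; }

    /** The trivial version would leave state untouched; here we step down. */
    public void enterNeutralMode() {
        if (state == State.ACTIVE) {
            state = State.NEUTRAL;  // relinquish the active role immediately
            transitionToStandby();  // standby before re-election, as YARN-5677 does
        }
    }

    private void transitionToStandby() { state = State.STANDBY; }

    public State state() { return state; }
}
```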
[jira] [Commented] (HADOOP-12329) io.file.buffer.size is only for SequenceFiles
[ https://issues.apache.org/jira/browse/HADOOP-12329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15707138#comment-15707138 ] Andres Perez commented on HADOOP-12329: --- I think this can easily be solved by removing the first sentence and reorganizing the description a little to look like this: {code:xml} Determines how much data is buffered during read and write operations. The size of this buffer should probably be a multiple of hardware page size (4096 on Intel x86). {code} > io.file.buffer.size is only for SequenceFiles > - > > Key: HADOOP-12329 > URL: https://issues.apache.org/jira/browse/HADOOP-12329 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation >Affects Versions: 2.7.1 >Reporter: Kun Yan >Priority: Trivial > Labels: documentation > > the core-site.xml io.file.buffer.size description: Size of read/write buffer > used in SequenceFiles. > Is this parameter only for SequenceFiles? Other issues suggest > that is not true. If it does not only affect SequenceFiles, the current description could lead users to ignore > this parameter. URL > location: http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/ClusterSetup.html
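For context, the reworded description would slot into the existing {{core-default.xml}} entry roughly like this (4096 is the shipped default value):

```xml
<property>
  <name>io.file.buffer.size</name>
  <value>4096</value>
  <description>Determines how much data is buffered during read and write
  operations. The size of this buffer should probably be a multiple of
  hardware page size (4096 on Intel x86).</description>
</property>
```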
[jira] [Commented] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user
[ https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15674612#comment-15674612 ] Andres Perez commented on HADOOP-12953: --- This patch provides a good solution, given that it doesn't modify the signature of existing methods and just adds additional functionality. This is something that is still relevant in 3.0.0-alpha. > New API for libhdfs to get FileSystem object as a proxy user > > > Key: HADOOP-12953 > URL: https://issues.apache.org/jira/browse/HADOOP-12953 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.7.2 >Reporter: Uday Kale >Assignee: Uday Kale > Attachments: HADOOP-12953.001.patch, HADOOP-12953.002.patch > > > Secure impersonation in HDFS needs users to create proxy users and work with > those. In libhdfs, the hdfsBuilder accepts a userName but calls > FileSystem.get() or FileSystem.newInstance() with the user name to connect as. > But, both these interfaces use getBestUGI() to get the UGI for the given > user. This is not necessarily true for all services whose end-users would not > access HDFS directly, but go via the service to first get authenticated with > LDAP, then the service owner can impersonate the end-user to eventually > provide the underlying data. > For such services that authenticate end-users via LDAP, the end users are not > authenticated by Kerberos, so their authentication details won't be in the > Kerberos ticket cache. HADOOP_PROXY_USER is not a thread-safe way to get this > either. > Hence the need for the new API for libhdfs to get the FileSystem object as a > proxy user using the 'secure impersonation' recommendations. This approach is > secure since HDFS authenticates the service owner and then validates the > right for the service owner to impersonate the given user as allowed by > hadoop.proxyusers.* parameters of HDFS config.
[jira] [Commented] (HADOOP-13781) ZKFailoverController#initZK should use the ActiveStanbyElector constructor with failFast as false
[ https://issues.apache.org/jira/browse/HADOOP-13781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15651837#comment-15651837 ] Andres Perez commented on HADOOP-13781: --- [~templedf] [~arpitagarwal] can you please review the patch and provide any possible feedback? > ZKFailoverController#initZK should use the ActiveStanbyElector constructor > with failFast as false > - > > Key: HADOOP-13781 > URL: https://issues.apache.org/jira/browse/HADOOP-13781 > Project: Hadoop Common > Issue Type: Improvement > Components: ha >Affects Versions: 2.8.0, 3.0.0-alpha2 >Reporter: Andres Perez >Priority: Minor > Attachments: HADOOP-13781.2.patch, HADOOP-13781.3.patch, > HADOOP-13781.4.patch, HADOOP-13781.5.patch, HADOOP-13781.6.patch, > HADOOP-13781.patch > > > YARN-4243 introduced the logic that lets retry establishing the connection > when initializing the {{ActiveStandbyElector}} adding the parameter > {{failFast}}. > {{ZKFailoverController#initZK}} should use this constructor with {{failFast}} > set to false, to let the ZFKC wait longer for the Zookeeper server be in a > ready state when first initializing it. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13781) ZKFailoverController#initZK should use the ActiveStanbyElector constructor with failFast as false
[ https://issues.apache.org/jira/browse/HADOOP-13781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andres Perez updated HADOOP-13781: -- Attachment: HADOOP-13781.6.patch Fixed newly generated checkstyle issues > ZKFailoverController#initZK should use the ActiveStanbyElector constructor > with failFast as false > - > > Key: HADOOP-13781 > URL: https://issues.apache.org/jira/browse/HADOOP-13781 > Project: Hadoop Common > Issue Type: Improvement > Components: ha >Affects Versions: 3.0.0-alpha2 >Reporter: Andres Perez >Priority: Minor > Attachments: HADOOP-13781.2.patch, HADOOP-13781.3.patch, > HADOOP-13781.4.patch, HADOOP-13781.5.patch, HADOOP-13781.6.patch, > HADOOP-13781.patch > > > YARN-4243 introduced the logic that lets retry establishing the connection > when initializing the {{ActiveStandbyElector}} adding the parameter > {{failFast}}. > {{ZKFailoverController#initZK}} should use this constructor with {{failFast}} > set to false, to let the ZFKC wait longer for the Zookeeper server be in a > ready state when first initializing it.
[jira] [Updated] (HADOOP-13781) ZKFailoverController#initZK should use the ActiveStanbyElector constructor with failFast as false
[ https://issues.apache.org/jira/browse/HADOOP-13781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andres Perez updated HADOOP-13781: -- Affects Version/s: (was: 3.0.0-alpha1) (was: 2.8.0) 3.0.0-alpha2 > ZKFailoverController#initZK should use the ActiveStanbyElector constructor > with failFast as false > - > > Key: HADOOP-13781 > URL: https://issues.apache.org/jira/browse/HADOOP-13781 > Project: Hadoop Common > Issue Type: Improvement > Components: ha >Affects Versions: 3.0.0-alpha2 >Reporter: Andres Perez >Priority: Minor > Attachments: HADOOP-13781.2.patch, HADOOP-13781.3.patch, > HADOOP-13781.4.patch, HADOOP-13781.5.patch, HADOOP-13781.patch > > > YARN-4243 introduced the logic that lets retry establishing the connection > when initializing the {{ActiveStandbyElector}} adding the parameter > {{failFast}}. > {{ZKFailoverController#initZK}} should use this constructor with {{failFast}} > set to false, to let the ZFKC wait longer for the Zookeeper server be in a > ready state when first initializing it. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13781) ZKFailoverController#initZK should use the ActiveStanbyElector constructor with failFast as false
[ https://issues.apache.org/jira/browse/HADOOP-13781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andres Perez updated HADOOP-13781: -- Attachment: HADOOP-13781.5.patch Added a test ensuring that the {{ActiveStandbyElector}} constructor throws the expected exception when trying to connect with {{failFast = false}} and does not fail silently, ensuring {{ZKFailoverController#initZK}} works properly. Also, the log now records the number of attempts, not just the exception thrown > ZKFailoverController#initZK should use the ActiveStanbyElector constructor > with failFast as false > - > > Key: HADOOP-13781 > URL: https://issues.apache.org/jira/browse/HADOOP-13781 > Project: Hadoop Common > Issue Type: Improvement > Components: ha >Affects Versions: 2.8.0, 3.0.0-alpha1 >Reporter: Andres Perez >Priority: Minor > Attachments: HADOOP-13781.2.patch, HADOOP-13781.3.patch, > HADOOP-13781.4.patch, HADOOP-13781.5.patch, HADOOP-13781.patch > > > YARN-4243 introduced the logic that lets retry establishing the connection > when initializing the {{ActiveStandbyElector}} adding the parameter > {{failFast}}. > {{ZKFailoverController#initZK}} should use this constructor with {{failFast}} > set to false, to let the ZFKC wait longer for the Zookeeper server be in a > ready state when first initializing it.
[jira] [Updated] (HADOOP-13781) ZKFailoverController#initZK should use the ActiveStanbyElector constructor with failFast as false
[ https://issues.apache.org/jira/browse/HADOOP-13781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andres Perez updated HADOOP-13781: -- Affects Version/s: 2.8.0 > ZKFailoverController#initZK should use the ActiveStanbyElector constructor > with failFast as false > - > > Key: HADOOP-13781 > URL: https://issues.apache.org/jira/browse/HADOOP-13781 > Project: Hadoop Common > Issue Type: Improvement > Components: ha >Affects Versions: 2.8.0, 3.0.0-alpha1 >Reporter: Andres Perez >Priority: Minor > Attachments: HADOOP-13781.2.patch, HADOOP-13781.3.patch, > HADOOP-13781.4.patch, HADOOP-13781.patch > > > YARN-4243 introduced the logic that lets retry establishing the connection > when initializing the {{ActiveStandbyElector}} adding the parameter > {{failFast}}. > {{ZKFailoverController#initZK}} should use this constructor with {{failFast}} > set to false, to let the ZFKC wait longer for the Zookeeper server be in a > ready state when first initializing it. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-13781) ZKFailoverController#initZK should use the ActiveStanbyElector constructor with failFast as false
[ https://issues.apache.org/jira/browse/HADOOP-13781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15627182#comment-15627182 ] Andres Perez edited comment on HADOOP-13781 at 11/2/16 1:04 AM: Waiting before throwing the exception in {{ActiveStandbyElector#reEstablishSession}} is just a waste of time; inverting the order in this patch was (Author: aaperezl): Waiting before throwing the exception in {{ActiveStandbyElector#reEstablishSession}} is just a waste of time; reverting the order in this patch > ZKFailoverController#initZK should use the ActiveStanbyElector constructor > with failFast as false > - > > Key: HADOOP-13781 > URL: https://issues.apache.org/jira/browse/HADOOP-13781 > Project: Hadoop Common > Issue Type: Improvement > Components: ha >Affects Versions: 3.0.0-alpha1 >Reporter: Andres Perez >Priority: Minor > Attachments: HADOOP-13781.2.patch, HADOOP-13781.3.patch, > HADOOP-13781.4.patch, HADOOP-13781.patch > > > YARN-4243 introduced the logic that lets retry establishing the connection > when initializing the {{ActiveStandbyElector}} adding the parameter > {{failFast}}. > {{ZKFailoverController#initZK}} should use this constructor with {{failFast}} > set to false, to let the ZFKC wait longer for the Zookeeper server be in a > ready state when first initializing it.
[jira] [Updated] (HADOOP-13781) ZKFailoverController#initZK should use the ActiveStanbyElector constructor with failFast as false
[ https://issues.apache.org/jira/browse/HADOOP-13781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andres Perez updated HADOOP-13781: -- Attachment: HADOOP-13781.4.patch Waiting before throwing the exception in {{ActiveStandbyElector#reEstablishSession}} is just a waste of time; reverting the order in this patch > ZKFailoverController#initZK should use the ActiveStanbyElector constructor > with failFast as false > - > > Key: HADOOP-13781 > URL: https://issues.apache.org/jira/browse/HADOOP-13781 > Project: Hadoop Common > Issue Type: Improvement > Components: ha >Affects Versions: 3.0.0-alpha1 >Reporter: Andres Perez >Priority: Minor > Attachments: HADOOP-13781.2.patch, HADOOP-13781.3.patch, > HADOOP-13781.4.patch, HADOOP-13781.patch > > > YARN-4243 introduced the logic that lets retry establishing the connection > when initializing the {{ActiveStandbyElector}} adding the parameter > {{failFast}}. > {{ZKFailoverController#initZK}} should use this constructor with {{failFast}} > set to false, to let the ZFKC wait longer for the Zookeeper server be in a > ready state when first initializing it.
[jira] [Updated] (HADOOP-13781) ZKFailoverController#initZK should use the ActiveStanbyElector constructor with failFast as false
[ https://issues.apache.org/jira/browse/HADOOP-13781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andres Perez updated HADOOP-13781: -- Attachment: HADOOP-13781.3.patch Added patch with checkstyle fix > ZKFailoverController#initZK should use the ActiveStanbyElector constructor > with failFast as false > - > > Key: HADOOP-13781 > URL: https://issues.apache.org/jira/browse/HADOOP-13781 > Project: Hadoop Common > Issue Type: Improvement > Components: ha >Affects Versions: 3.0.0-alpha1 >Reporter: Andres Perez >Priority: Minor > Attachments: HADOOP-13781.2.patch, HADOOP-13781.3.patch, > HADOOP-13781.patch > > > YARN-4243 introduced the logic that lets retry establishing the connection > when initializing the {{ActiveStandbyElector}} adding the parameter > {{failFast}}. > {{ZKFailoverController#initZK}} should use this constructor with {{failFast}} > set to false, to let the ZFKC wait longer for the Zookeeper server be in a > ready state when first initializing it. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13781) ZKFailoverController#initZK should use the ActiveStanbyElector constructor with failFast as false
[ https://issues.apache.org/jira/browse/HADOOP-13781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andres Perez updated HADOOP-13781: -- Attachment: HADOOP-13781.2.patch The {{ActiveStandbyElector#reEstablishSession}} method does not throw {{IOException}} or {{KeeperException}}, so the Zookeeper connection fails silently when invoking {{ZKFailoverController#initZK}}, which makes the unit test fail when the {{failFast}} parameter is set to {{false}} > ZKFailoverController#initZK should use the ActiveStanbyElector constructor > with failFast as false > - > > Key: HADOOP-13781 > URL: https://issues.apache.org/jira/browse/HADOOP-13781 > Project: Hadoop Common > Issue Type: Improvement > Components: ha >Affects Versions: 3.0.0-alpha1 >Reporter: Andres Perez >Priority: Minor > Attachments: HADOOP-13781.2.patch, HADOOP-13781.patch > > > YARN-4243 introduced the logic that lets retry establishing the connection > when initializing the {{ActiveStandbyElector}} adding the parameter > {{failFast}}. > {{ZKFailoverController#initZK}} should use this constructor with {{failFast}} > set to false, to let the ZFKC wait longer for the Zookeeper server be in a > ready state when first initializing it. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13781) ZKFailoverController#initZK should use the ActiveStanbyElector constructor with failFast as false
[ https://issues.apache.org/jira/browse/HADOOP-13781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andres Perez updated HADOOP-13781: -- Status: Patch Available (was: Open) > ZKFailoverController#initZK should use the ActiveStanbyElector constructor > with failFast as false > - > > Key: HADOOP-13781 > URL: https://issues.apache.org/jira/browse/HADOOP-13781 > Project: Hadoop Common > Issue Type: Improvement > Components: ha >Affects Versions: 3.0.0-alpha1 >Reporter: Andres Perez >Priority: Minor > Attachments: HADOOP-13781.patch > > > YARN-4243 introduced the logic that lets retry establishing the connection > when initializing the {{ActiveStandbyElector}} adding the parameter > {{failFast}}. > {{ZKFailoverController#initZK}} should use this constructor with {{failFast}} > set to false, to let the ZFKC wait longer for the Zookeeper server be in a > ready state when first initializing it. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13781) ZKFailoverController#initZK should use the ActiveStanbyElector constructor with failFast as false
[ https://issues.apache.org/jira/browse/HADOOP-13781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andres Perez updated HADOOP-13781: -- Attachment: HADOOP-13781.patch > ZKFailoverController#initZK should use the ActiveStanbyElector constructor > with failFast as false > - > > Key: HADOOP-13781 > URL: https://issues.apache.org/jira/browse/HADOOP-13781 > Project: Hadoop Common > Issue Type: Improvement > Components: ha >Affects Versions: 3.0.0-alpha1 >Reporter: Andres Perez >Priority: Minor > Attachments: HADOOP-13781.patch > > > YARN-4243 introduced the logic that lets retry establishing the connection > when initializing the {{ActiveStandbyElector}} adding the parameter > {{failFast}}. > {{ZKFailoverController#initZK}} should use this constructor with {{failFast}} > set to false, to let the ZFKC wait longer for the Zookeeper server be in a > ready state when first initializing it. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13781) ZKFailoverController#initZK should use the ActiveStanbyElector constructor with failFast as false
[ https://issues.apache.org/jira/browse/HADOOP-13781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andres Perez updated HADOOP-13781: -- Description: YARN-4243 introduced the logic that lets retry establishing the connection when initializing the {{ActiveStandbyElector}} adding the parameter {{failFast}}. {{ZKFailoverController#initZK}} should use this constructor with {{failFast}} set to false, to let the ZFKC wait longer for the Zookeeper server be in a ready state when first initializing it. was: YARN-4243 introduced the logic that lets retry establishing the connection when initializing the `ActiveStandbyElector` adding the parameter `failFast`. `ZKFailoverController#initZK` should use this constructor with `failFast` set to false, to let the ZFKC wait longer for the Zookeeper server be in a ready state when first initializing it. > ZKFailoverController#initZK should use the ActiveStanbyElector constructor > with failFast as false > - > > Key: HADOOP-13781 > URL: https://issues.apache.org/jira/browse/HADOOP-13781 > Project: Hadoop Common > Issue Type: Improvement > Components: ha >Affects Versions: 3.0.0-alpha1 >Reporter: Andres Perez >Priority: Minor > > YARN-4243 introduced the logic that lets retry establishing the connection > when initializing the {{ActiveStandbyElector}} adding the parameter > {{failFast}}. > {{ZKFailoverController#initZK}} should use this constructor with {{failFast}} > set to false, to let the ZFKC wait longer for the Zookeeper server be in a > ready state when first initializing it. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-13781) ZKFailoverController#initZK should use the ActiveStanbyElector constructor with failFast as false
Andres Perez created HADOOP-13781: - Summary: ZKFailoverController#initZK should use the ActiveStanbyElector constructor with failFast as false Key: HADOOP-13781 URL: https://issues.apache.org/jira/browse/HADOOP-13781 Project: Hadoop Common Issue Type: Improvement Components: ha Affects Versions: 3.0.0-alpha1 Reporter: Andres Perez Priority: Minor YARN-4243 introduced the logic that lets retry establishing the connection when initializing the `ActiveStandbyElector` adding the parameter `failFast`. `ZKFailoverController#initZK` should use this constructor with `failFast` set to false, to let the ZFKC wait longer for the Zookeeper server be in a ready state when first initializing it. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
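The failFast behavior discussed in this issue can be illustrated with a small sketch. This is not the actual Hadoop {{ActiveStandbyElector}} API; the names ({{Connector}}, {{connectWithRetries}}, {{maxRetries}}) are illustrative, assuming only the behavior described above: with {{failFast}} set to false, connection establishment is retried instead of throwing on the first failure.

```java
import java.io.IOException;

// Hypothetical sketch of the failFast idea from YARN-4243 / HADOOP-13781:
// failFast=true gives up after the first failed attempt, failFast=false
// keeps retrying (up to a bound) so a slow-starting ZooKeeper server has
// time to become ready. Names here are illustrative, not Hadoop's API.
public class FailFastSketch {
    public interface Connector {
        boolean tryConnect();
    }

    public static boolean connectWithRetries(Connector c, boolean failFast,
                                             int maxRetries) throws IOException {
        int attempts = failFast ? 1 : maxRetries;
        for (int i = 0; i < attempts; i++) {
            if (c.tryConnect()) {
                return true; // connection established
            }
            // the real elector would sleep/back off between attempts here
        }
        throw new IOException("could not connect to ZooKeeper");
    }
}
```

Under this sketch, a connector that only succeeds on its third attempt connects when {{failFast}} is false but throws when it is true, which is the difference the ZKFC initialization path cares about.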
[jira] [Updated] (HADOOP-10230) GSetByHashMap breaks contract of GSet
[ https://issues.apache.org/jira/browse/HADOOP-10230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andres Perez updated HADOOP-10230: -- Attachment: HADOOP-10230.002.patch I added the methods that were missing. > GSetByHashMap breaks contract of GSet > - > > Key: HADOOP-10230 > URL: https://issues.apache.org/jira/browse/HADOOP-10230 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.2.0 >Reporter: Hiroshi Ikeda >Assignee: Andres Perez >Priority: Trivial > Attachments: HADOOP-10230.001.patch, HADOOP-10230.002.patch > > > The contract of GSet says it is ensured to throw NullPointerException if a > given argument is null for many methods, but GSetByHashMap doesn't. I think > just writing non-null preconditions for GSet are required. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-10230) GSetByHashMap breaks contract of GSet
[ https://issues.apache.org/jira/browse/HADOOP-10230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andres Perez updated HADOOP-10230: -- Attachment: HADOOP-10230.001.patch > GSetByHashMap breaks contract of GSet > - > > Key: HADOOP-10230 > URL: https://issues.apache.org/jira/browse/HADOOP-10230 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.2.0 >Reporter: Hiroshi Ikeda >Assignee: Andres Perez >Priority: Trivial > Attachments: HADOOP-10230.001.patch > > > The contract of GSet says it is ensured to throw NullPointerException if a > given argument is null for many methods, but GSetByHashMap doesn't. I think > just writing non-null preconditions for GSet are required. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-10230) GSetByHashMap breaks contract of GSet
[ https://issues.apache.org/jira/browse/HADOOP-10230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andres Perez updated HADOOP-10230: -- Status: Patch Available (was: Open) Changed the exception type to NullPointerException as specified in the GSet contract. > GSetByHashMap breaks contract of GSet > - > > Key: HADOOP-10230 > URL: https://issues.apache.org/jira/browse/HADOOP-10230 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.2.0 >Reporter: Hiroshi Ikeda >Assignee: Andres Perez >Priority: Trivial > > The contract of GSet says it is ensured to throw NullPointerException if a > given argument is null for many methods, but GSetByHashMap doesn't. I think > just writing non-null preconditions for GSet are required. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-10230) GSetByHashMap breaks contract of GSet
[ https://issues.apache.org/jira/browse/HADOOP-10230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andres Perez reassigned HADOOP-10230: - Assignee: Andres Perez > GSetByHashMap breaks contract of GSet > - > > Key: HADOOP-10230 > URL: https://issues.apache.org/jira/browse/HADOOP-10230 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.2.0 >Reporter: Hiroshi Ikeda >Assignee: Andres Perez >Priority: Trivial > > The contract of GSet says it is ensured to throw NullPointerException if a > given argument is null for many methods, but GSetByHashMap doesn't. I think > just writing non-null preconditions for GSet are required. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
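The non-null preconditions the issue asks for can be sketched as follows. This is a minimal illustration, not the actual {{GSetByHashMap}} class; the class and method names are assumptions, showing only the pattern of guarding each map-backed method so a null argument throws {{NullPointerException}} as the GSet contract requires.

```java
import java.util.HashMap;

// Illustrative sketch (not Hadoop's GSetByHashMap): every public method
// checks its argument up front, so nulls fail with NullPointerException
// instead of whatever the backing HashMap would silently do.
public class NullCheckedSet<K, E> {
    private final HashMap<K, E> map = new HashMap<>();

    private static <T> T checkNotNull(T ref, String what) {
        if (ref == null) {
            throw new NullPointerException(what + " == null");
        }
        return ref;
    }

    public boolean contains(K key) {
        return map.containsKey(checkNotNull(key, "key"));
    }

    public E put(K key, E element) {
        checkNotNull(key, "key");
        return map.put(key, checkNotNull(element, "element"));
    }

    public E remove(K key) {
        return map.remove(checkNotNull(key, "key"));
    }
}
```

The same effect is commonly achieved with Guava's {{Preconditions.checkNotNull}}, which Hadoop already uses elsewhere; the hand-rolled check above just keeps the sketch dependency-free.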
[jira] [Commented] (HADOOP-12057) swiftfs rename on partitioned file attempts to consolidate partitions
[ https://issues.apache.org/jira/browse/HADOOP-12057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263295#comment-15263295 ] Andres Perez commented on HADOOP-12057: --- I linked the wrong section, [Large Object Direct API|http://docs.openstack.org/developer/swift/overview_large_objects.html#direct-api] > swiftfs rename on partitioned file attempts to consolidate partitions > - > > Key: HADOOP-12057 > URL: https://issues.apache.org/jira/browse/HADOOP-12057 > Project: Hadoop Common > Issue Type: Bug > Components: fs/swift >Reporter: David Dobbins >Assignee: David Dobbins > Attachments: HADOOP-12057-006.patch, HADOOP-12057-008.patch, > HADOOP-12057.007.patch, HADOOP-12057.patch, HADOOP-12057.patch, > HADOOP-12057.patch, HADOOP-12057.patch, HADOOP-12057.patch > > > In the swift filesystem for openstack, a rename operation on a partitioned > file uses the swift COPY operation, which attempts to consolidate all of the > partitions into a single object. This causes the rename to fail when the > total size of all the partitions exceeds the maximum object size for swift. > Since partitioned files are primarily created to allow a file to exceed the > maximum object size, this bug makes writing to swift extremely unreliable. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12057) swiftfs rename on partitioned file attempts to consolidate partitions
[ https://issues.apache.org/jira/browse/HADOOP-12057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263293#comment-15263293 ] Andres Perez commented on HADOOP-12057: --- I used a Bluemix Object Store, where you don't have access to the swift.conf file to change the default maximum object size limit. I think this is good because it removes the dependency on changing something in Swift for Hadoop to work. This just uses Swift's [Large Object Direct API|http://docs.openstack.org/developer/swift/overview_large_objects.html#additional-notes], which in theory allows storing 1 TB files in several segments without merging them, while still providing a single file that downloads and combines all the pieces together. The only issue is that if you {{hdfs dfs -ls swift://container.store/}} you will see the file displayed with size 0, which is also expected per the documentation linked above. > swiftfs rename on partitioned file attempts to consolidate partitions > - > > Key: HADOOP-12057 > URL: https://issues.apache.org/jira/browse/HADOOP-12057 > Project: Hadoop Common > Issue Type: Bug > Components: fs/swift >Reporter: David Dobbins >Assignee: David Dobbins > Attachments: HADOOP-12057-006.patch, HADOOP-12057-008.patch, > HADOOP-12057.007.patch, HADOOP-12057.patch, HADOOP-12057.patch, > HADOOP-12057.patch, HADOOP-12057.patch, HADOOP-12057.patch > > > In the swift filesystem for openstack, a rename operation on a partitioned > file uses the swift COPY operation, which attempts to consolidate all of the > partitions into a single object. This causes the rename to fail when the > total size of all the partitions exceeds the maximum object size for swift. > Since partitioned files are primarily created to allow a file to exceed the > maximum object size, this bug makes writing to swift extremely unreliable.
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
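The Large Object Direct API referenced above works by uploading segments individually and then PUTting a JSON manifest that lists them, instead of copying the segments into one object. The sketch below only builds that manifest body; it contacts no Swift endpoint, and the segment paths and class name are illustrative assumptions, not Hadoop or Swift code.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: build the JSON manifest body used by Swift's static
// large object direct API (PUT <object>?multipart-manifest=put). Each entry
// names an already-uploaded segment by path, etag, and size; Swift then
// serves the segments as one logical file without merging them.
public class SloManifest {
    /** segments: each String[3] is {path, etag, size_bytes}. */
    public static String manifestJson(List<String[]> segments) {
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < segments.size(); i++) {
            String[] s = segments.get(i);
            if (i > 0) {
                sb.append(",");
            }
            sb.append("{\"path\":\"").append(s[0])
              .append("\",\"etag\":\"").append(s[1])
              .append("\",\"size_bytes\":").append(s[2])
              .append("}");
        }
        return sb.append("]").toString();
    }
}
```

Because the manifest object itself is near-empty, a plain listing reports the file with size 0, which matches the {{hdfs dfs -ls}} behavior observed in the comment above.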
[jira] [Commented] (HADOOP-12057) swiftfs rename on partitioned file attempts to consolidate partitions
[ https://issues.apache.org/jira/browse/HADOOP-12057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15263238#comment-15263238 ] Andres Perez commented on HADOOP-12057: --- I deployed this change in my cluster and was able to successfully put and get files >5GB from a swift container. > swiftfs rename on partitioned file attempts to consolidate partitions > - > > Key: HADOOP-12057 > URL: https://issues.apache.org/jira/browse/HADOOP-12057 > Project: Hadoop Common > Issue Type: Bug > Components: fs/swift >Reporter: David Dobbins >Assignee: David Dobbins > Attachments: HADOOP-12057-006.patch, HADOOP-12057-008.patch, > HADOOP-12057.007.patch, HADOOP-12057.patch, HADOOP-12057.patch, > HADOOP-12057.patch, HADOOP-12057.patch, HADOOP-12057.patch > > > In the swift filesystem for openstack, a rename operation on a partitioned > file uses the swift COPY operation, which attempts to consolidate all of the > partitions into a single object. This causes the rename to fail when the > total size of all the partitions exceeds the maximum object size for swift. > Since partitioned files are primarily created to allow a file to exceed the > maximum object size, this bug makes writing to swift extremely unreliable. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org