[jira] [Created] (HADOOP-12965) Pull QuotaException from HDFS into Common
Plamen Jeliazkov created HADOOP-12965: - Summary: Pull QuotaException from HDFS into Common Key: HADOOP-12965 URL: https://issues.apache.org/jira/browse/HADOOP-12965 Project: Hadoop Common Issue Type: Wish Affects Versions: 3.0.0 Reporter: Plamen Jeliazkov Priority: Minor Although QuotaException is currently HDFS-specific, there is little reason other FileSystems could not raise it, or an FS-agnostic client could not attempt to handle it. To enable this, we should move QuotaException to the hadoop-common project. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12842) LocalFileSystem checksum file creation fails when source filename contains a colon
[ https://issues.apache.org/jira/browse/HADOOP-12842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168437#comment-15168437 ] Plamen Jeliazkov commented on HADOOP-12842: --- [~iwasakims], thanks for the link there. If that is the intended specification then it is not being enforced; but that is a separate issue. Shall I close this JIRA then or re-purpose it? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Work started] (HADOOP-12842) LocalFileSystem checksum file creation fails when source filename contains a colon
[ https://issues.apache.org/jira/browse/HADOOP-12842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HADOOP-12842 started by Plamen Jeliazkov. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-12842) LocalFileSystem checksum file creation fails when source filename contains a colon
[ https://issues.apache.org/jira/browse/HADOOP-12842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Plamen Jeliazkov updated HADOOP-12842: -- Attachment: HADOOP-12842_trunk.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-12842) LocalFileSystem checksum file creation fails when source filename contains a colon
[ https://issues.apache.org/jira/browse/HADOOP-12842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Plamen Jeliazkov updated HADOOP-12842: -- Status: Patch Available (was: In Progress) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HADOOP-12842) LocalFileSystem checksum file creation fails when source filename contains a colon
Plamen Jeliazkov created HADOOP-12842:
-
Summary: LocalFileSystem checksum file creation fails when source filename contains a colon
Key: HADOOP-12842
URL: https://issues.apache.org/jira/browse/HADOOP-12842
Project: Hadoop Common
Issue Type: Bug
Affects Versions: 2.6.4
Reporter: Plamen Jeliazkov
Assignee: Plamen Jeliazkov
Priority: Minor

In most FileSystems you can create a file with a colon character in it, including HDFS. If you try to use the LocalFileSystem implementation (which extends ChecksumFileSystem) to create a file with a colon character in it you get a URISyntaxException during the creation of the checksum file because of the use of {code}new Path(path, checksumFile){code} where checksumFile will be considered as a relative path during URI parsing due to starting with a "." and containing a ":" in the path.

Running the following test inside TestLocalFileSystem causes the failure:
{code}
@Test
public void testColonFilePath() throws Exception {
  FileSystem fs = fileSys;
  Path file = new Path(TEST_ROOT_DIR + Path.SEPARATOR + "fileWith:InIt");
  fs.delete(file, true);
  FSDataOutputStream out = fs.create(file);
  try {
    out.write("text1".getBytes());
  } finally {
    out.close();
  }
}
{code}
With the following stack trace:
{code}
java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: .fileWith:InIt.crc
  at java.net.URI.checkPath(URI.java:1804)
  at java.net.URI.<init>(URI.java:752)
  at org.apache.hadoop.fs.Path.initialize(Path.java:201)
  at org.apache.hadoop.fs.Path.<init>(Path.java:170)
  at org.apache.hadoop.fs.Path.<init>(Path.java:92)
  at org.apache.hadoop.fs.ChecksumFileSystem.getChecksumFile(ChecksumFileSystem.java:88)
  at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:397)
  at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:456)
  at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:435)
  at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:921)
  at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:902)
  at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:798)
  at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:787)
  at org.apache.hadoop.fs.TestLocalFileSystem.testColonFilePath(TestLocalFileSystem.java:625)
{code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
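The parsing failure above can be reproduced with plain java.net.URI, independent of Hadoop: a leading-dot name with a colon looks like a malformed scheme, while putting a slash before the colon forces relative-path parsing. A minimal sketch (the class and method names are hypothetical, and the "./" prefix is one common workaround, not necessarily the fix used in the attached patch):

```java
import java.net.URI;
import java.net.URISyntaxException;

// Hypothetical helper demonstrating the URI parsing behavior behind the bug.
class ColonPathDemo {

    /** True if java.net.URI accepts the string as given. */
    static boolean parses(String s) {
        try {
            new URI(s);
            return true;
        } catch (URISyntaxException e) {
            return false;
        }
    }

    /**
     * Workaround sketch: prefixing "./" puts a slash before the colon, so the
     * parser treats the string as a relative path rather than hunting for a
     * scheme. Returns the parsed path, or null if parsing still fails.
     */
    static String safeRelative(String name) {
        try {
            return new URI("./" + name).getPath();
        } catch (URISyntaxException e) {
            return null;
        }
    }
}
```

For example, `.fileWith:InIt.crc` is rejected outright, while `./.fileWith:InIt.crc` parses cleanly as a relative path.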
[jira] [Updated] (HADOOP-10641) Introduce Coordination Engine interface
[ https://issues.apache.org/jira/browse/HADOOP-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Plamen Jeliazkov updated HADOOP-10641: -- Status: Open (was: Patch Available)
> Introduce Coordination Engine interface
> ---
>
> Key: HADOOP-10641
> URL: https://issues.apache.org/jira/browse/HADOOP-10641
> Project: Hadoop Common
> Issue Type: New Feature
> Affects Versions: 3.0.0
> Reporter: Konstantin Shvachko
> Assignee: Plamen Jeliazkov
> Attachments: HADOOP-10641.patch, HADOOP-10641.patch, HADOOP-10641.patch, HADOOP-10641.patch, HADOOP-10641.patch, NNThroughputBenchmark Results.pdf, ce-tla.zip, hadoop-coordination.patch, zkbench.pdf
>
> A Coordination Engine (CE) is a system that lets the members of a distributed system agree on a sequence of events. To be reliable, the CE should itself be distributed.
> A Coordination Engine can be based on different algorithms (Paxos, Raft, 2PC, ZAB) and have different implementations, depending on use cases and on reliability, availability, and performance requirements.
> The CE should have a common API so that it can serve as a pluggable component in different projects. The immediate beneficiaries are HDFS (HDFS-6469) and HBase (HBASE-10909).
> The first implementation is proposed to be based on ZooKeeper.
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HADOOP-10641) Introduce Coordination Engine interface
[ https://issues.apache.org/jira/browse/HADOOP-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14107750#comment-14107750 ] Plamen Jeliazkov commented on HADOOP-10641: --- The test, testSimpleProposals, was also updated to validate that the CoordinationEngine's GlobalSequenceNumber increments monotonically for each Agreement reached. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HADOOP-10641) Introduce Coordination Engine interface
[ https://issues.apache.org/jira/browse/HADOOP-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Plamen Jeliazkov updated HADOOP-10641: -- Attachment: HADOOP-10641.patch
Attaching a new patch. Here are the major updates:
# Added ZK implementation interfaces to make Agreement handling clearer.
# ZKCoordinationEngine takes a collection of ZKAgreementHandlers. It is the job of the handlers to type-cast the Agreements. The prerequisite for the cast is that ZKAgreementHandler.handles(Agreement) must return true for that very Agreement.
# ZKCoordinationEngine executes each Agreement against all of the ZKAgreementHandlers; see ZKCoordinationEngine.executeAllHandlers().
# SampleHandler sets the GlobalSequenceNumber of the SampleProposal before executing it.
-- This message was sent by Atlassian JIRA (v6.2#6252)
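The handles-then-cast dispatch described in that update can be sketched roughly as follows; the interface and class names mirror the comment, but the code itself is a hypothetical illustration, not the attached patch:

```java
import java.util.List;

// Hypothetical types illustrating the handler-dispatch pattern described above.
interface Agreement { }

interface AgreementHandler {
    /** Must return true before the handler may down-cast the agreement. */
    boolean handles(Agreement agreement);

    void executeAgreement(Agreement agreement);
}

/** A concrete agreement type and a handler that claims only that type. */
class SampleAgreement implements Agreement { }

class SampleHandler implements AgreementHandler {
    int executedCount = 0;

    @Override
    public boolean handles(Agreement agreement) {
        return agreement instanceof SampleAgreement;
    }

    @Override
    public void executeAgreement(Agreement agreement) {
        // The down-cast is safe only because handles() returned true.
        SampleAgreement sample = (SampleAgreement) agreement;
        executedCount++;
    }
}

class CoordinationDispatcher {
    private final List<AgreementHandler> handlers;

    CoordinationDispatcher(List<AgreementHandler> handlers) {
        this.handlers = handlers;
    }

    /** Offer the agreement to every handler; return how many executed it. */
    int executeAllHandlers(Agreement agreement) {
        int executed = 0;
        for (AgreementHandler handler : handlers) {
            if (handler.handles(agreement)) {
                handler.executeAgreement(agreement);
                executed++;
            }
        }
        return executed;
    }
}
```

The point of the handles() precondition is that handlers, not the engine, own the type knowledge: the engine stays agnostic of concrete agreement classes.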
[jira] [Updated] (HADOOP-10641) Introduce Coordination Engine interface
[ https://issues.apache.org/jira/browse/HADOOP-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Plamen Jeliazkov updated HADOOP-10641: -- Attachment: HADOOP-10641.patch
Attaching a new patch; here are the changes I've made:
# Moved updateCurrentGSN() to after executeAgreement(). We will replay the agreement if we crashed before updating the GSN in ZK.
# Removed the ProposalReturnCode(s). We simply return if submission was successful and throw an exception if it was not.
# Renamed ProposalNotAcceptedException to ProposalSubmissionException.
# NoQuorumException now extends ProposalSubmissionException.
-- This message was sent by Atlassian JIRA (v6.2#6252)
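The first change (advance the durable GSN only after applying the agreement) is what makes crash recovery replay-based rather than skip-based. A minimal sketch of that ordering, with hypothetical names and an in-memory stand-in for the GSN stored in ZK:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: the durable GSN is advanced only AFTER an agreement is
// applied, so a crash between the two steps replays the agreement on restart
// instead of silently losing it. Replayed entries are detected and skipped.
class ReplayLog {
    private long durableGsn = -1;            // stands in for the GSN persisted in ZK
    final List<Long> applied = new ArrayList<>();

    /** Apply agreements in GSN order, advancing the durable GSN after each. */
    void apply(long gsn) {
        if (gsn <= durableGsn) {
            return;                          // already covered by durable progress: a replay, skip it
        }
        applied.add(gsn);                    // step 1: execute the agreement
        durableGsn = gsn;                    // step 2: only then record progress
    }

    long durableGsn() {
        return durableGsn;
    }
}
```

Had the order been reversed (record, then execute), a crash between the steps would advance the GSN past an agreement that was never applied.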
[jira] [Commented] (HADOOP-10641) Introduce Coordination Engine
[ https://issues.apache.org/jira/browse/HADOOP-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14063127#comment-14063127 ] Plamen Jeliazkov commented on HADOOP-10641: --- We hosted a meet-up at the WANdisco office in San Ramon today. Thank you to everyone who came. I'd especially like to thank [~atm] and [~sanjay.radia] for taking the time to connect with us. I took the liberty of recording some of the comments / concerns people raised during the meet-up. I will list all of them here and provide a few responses.
* Are NoQuorumException and ProposalNotAcceptedException enough? Are there other exceptions CoordinationEngine might throw?
** My own feeling is that these two in particular were the most general and universal. We could always add IOException, if desired.
* In submitProposal() there is a ProposalReturnCode return value and a possible Exception to be thrown. It is unclear which one we should use.
** I agree. Konstantin looked at me for an answer during this but I remained silent. The intent was for ProposalReturnCode to convey a deterministic result (NoQuorum corresponds to a deterministic event: the Proposal was not sent), and to treat the Exception case as something wrong with the Proposal itself (i.e., it doesn't implement equals() or hashCode() correctly, or cannot be serialized properly). I understand the confusion, and we could do better with just the Exception case.
* ConsensusNode is non-specific. Consider renaming the project to ConsensusNameNode.
** This applies to HDFS-6469. I think ConsensusNameNode is a good name. I'll probably always continue to call them CNodes though. :)
* Concern about PAXOS's ability to effectively load-balance clients; two round trips make writes slow.
* CNodeProxyProvider should allow for deterministic host selection. Consider a round-robin approach.
* We are weakening read semantics to provide the fast read path. This makes stale reads possible.
** Konstantin discussed the 'coordinated read' mechanism and how we ensure clients talk to up-to-date NameNodes via Proposals.
* Sub-namespace WAN replication is highly desirable, but double-journaling in the CoordinationEngine and the EditsLog is concerning.
* The community would like the impact on write performance to be addressed.
* HBase is coming up with a WAL plugin for possible coordination. Wary of membership coordination (multiple Distributed State Machines) for HBase WALs.
* A small separate project might make it more likely for people to import CE into their own projects and build their own CoordinationEngines. A separate branch is also possible.
Some of these clearly correspond to the HDFS and HBase projects and not just the CoordinationEngine itself. Apologies if I missed anyone's concern / point; I'm pretty sure I captured everybody though.
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HADOOP-10641) Introduce Coordination Engine
[ https://issues.apache.org/jira/browse/HADOOP-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Plamen Jeliazkov updated HADOOP-10641: -- Status: Patch Available (was: In Progress) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HADOOP-10641) Introduce Coordination Engine
[ https://issues.apache.org/jira/browse/HADOOP-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14021473#comment-14021473 ] Plamen Jeliazkov commented on HADOOP-10641: --- Hi Lohit, thanks for your comments!
# checkQuorum is an optimization some coordination engines may choose to implement in order to fail fast on client requests. In the NameNode case, if quorum loss were suspected, that NameNode could start issuing StandbyExceptions.
# You are correct that the ZKCoordinationEngine does not currently implement ZNode clean-up. That is because it was written as a proof of concept for the CoordinationEngine API. Nonetheless, proper clean-up can be implemented: all one has to do is delete the ZNodes that everyone else has already learned about.
## Suppose you have Nodes A, B, and C, and Agreements 1, 2, 3, 4, and 5.
## Nodes A and B learn Agreement 1 first. Node C is a lagging node. A & B contain 1; C contains nothing.
## Nodes A and B continue onwards, learning up to Agreement 4. A & B now contain 1, 2, 3, and 4; C contains nothing.
## Node C finally learns Agreement 1. A & B contain 1, 2, 3, and 4; C contains 1.
## We can now discard Agreement 1 from persistence, because we know that all the Nodes (A, B, and C) have safely learned about and applied Agreement 1.
## We can apply this process to all other Agreements.
-- This message was sent by Atlassian JIRA (v6.2#6252)
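The clean-up rule in that comment reduces to "delete every agreement whose GSN is at or below the minimum of the per-node learned GSNs". A hypothetical in-memory sketch of that rule (the real implementation would delete ZNodes rather than map entries):

```java
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

// Hypothetical sketch of agreement garbage collection: an agreement may be
// discarded once EVERY node has learned it, i.e. once its GSN is at or below
// the minimum of the per-node learned GSNs.
class AgreementLog {
    private final NavigableMap<Long, String> agreements = new TreeMap<>();

    void record(long gsn, String agreement) {
        agreements.put(gsn, agreement);
    }

    /** Drop every agreement that all nodes have already learned; return the count dropped. */
    int discardLearned(Map<String, Long> learnedGsnPerNode) {
        long min = learnedGsnPerNode.values().stream()
                .mapToLong(Long::longValue)
                .min()
                .orElse(-1L);
        int before = agreements.size();
        agreements.headMap(min, true).clear();   // deletes all GSNs <= min
        return before - agreements.size();
    }

    int size() {
        return agreements.size();
    }
}
```

In the A/B/C scenario above: with A and B at Agreement 4 but C only at Agreement 1, the minimum is 1, so only Agreement 1 is discarded; once C catches up to 4, Agreements 2 through 4 become discardable as well.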
[jira] [Commented] (HADOOP-10641) Introduce Coordination Engine
[ https://issues.apache.org/jira/browse/HADOOP-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14020426#comment-14020426 ] Plamen Jeliazkov commented on HADOOP-10641: --- [~szetszwo], thanks for the review! It was nice to finally see a face at the Summit as well. :) [~atm], thanks for the comments! I think I am outside of that discussion; most likely [~cos] or [~shv] can comment better on where to take the project. I posted a new patch around the same time your review came in; there were mistakes in the way agreement executions work.
* ProtoBuf is certainly a nice choice for serialization. However, we shouldn't need to bind ourselves to any one serialization format, which is why we use Serializable. It is certainly possible to have the writeObject call write out a ProtoBuf of the proposal itself, for example, and read the values back using ProtoBuf as well. This is feasible with the current interfaces.
* Good point on version compatibility. AFAIK, version compatibility checks would take place once the quorum is established, as prior to that there is no communication between the engines. So the coordination engine, as part of bootstrap, should perform a version check against its quorum peers. Perhaps this means extending the API, or making it part of a larger interface (VersionedCoordinationEngine)? [~shv] might be able to comment better.
* Please see my new patch. The idea is indeed to make the agreement execute on some callback object, SampleLearner in this case. The new patch should show the test making use of it.
* Yes, we can probably do some refactoring here. I'll work on a new patch.
* Yes, we can add details for ZkCoordinationEngine. I am unsure of any clear advantages and disadvantages. The only thing that comes to mind right away is that it may be possible to build Paxos directly into the CoordinationEngine implementation, thus co-locating the coordination service with the server / application itself, rather than having to make RPC calls and wait for responses, as with ZooKeeper. I don't think the intent of this work is to compare any one coordination mechanism with another so much as to provide a common interface against which one can implement whichever they prefer.
-- This message was sent by Atlassian JIRA (v6.2#6252)
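The first bullet's point, that implementing Serializable does not bind the wire format, can be illustrated with Java's writeObject/readObject hooks: the fields are marked transient and encoded by hand, and the hand-rolled section could just as well emit protobuf bytes. A hypothetical sketch (SampleProposal here is illustrative, not the class from the patch):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Hypothetical proposal: Serializable on the outside, custom encoding inside.
class SampleProposal implements Serializable {
    private static final long serialVersionUID = 1L;

    transient String user;   // transient: excluded from default serialization
    transient int value;     // and written out by hand below instead

    SampleProposal(String user, int value) {
        this.user = user;
        this.value = value;
    }

    private void writeObject(ObjectOutputStream out) throws IOException {
        out.defaultWriteObject();
        out.writeUTF(user);  // custom encoding; protobuf bytes could go here instead
        out.writeInt(value);
    }

    private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        user = in.readUTF();
        value = in.readInt();
    }

    /** Serialize and deserialize through a byte buffer (for demonstration). */
    static SampleProposal roundTrip(SampleProposal proposal) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(bytes);
            out.writeObject(proposal);
            out.flush();
            ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()));
            return (SampleProposal) in.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```

The engine only ever sees java.io.Serializable; swapping the body of writeObject/readObject changes the wire format without touching the CoordinationEngine API.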
[jira] [Updated] (HADOOP-10641) Introduce Coordination Engine
[ https://issues.apache.org/jira/browse/HADOOP-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Plamen Jeliazkov updated HADOOP-10641: -- Attachment: HADOOP-10641.patch New patch. Not clear what I was doing earlier. Indeed the correct usage here is to have agreement executions call the learner, not the other way around. # Factored out SampleLearner. # SampleProposal now does . # Added setCurrentUser to SampleProposal. # Made SampleLearner use UserGroupInformation.doAs when executing agreement. # Removed LICENSE and NOTICE files. Sorry about that. # Unit tests now correctly wait for all agreements to arrive (they were not prior). # Made use of ClientBase in MiniZooKeeperCluster. > Introduce Coordination Engine > - > > Key: HADOOP-10641 > URL: https://issues.apache.org/jira/browse/HADOOP-10641 > Project: Hadoop Common > Issue Type: New Feature >Affects Versions: 3.0.0 >Reporter: Konstantin Shvachko >Assignee: Plamen Jeliazkov > Attachments: HADOOP-10641.patch, HADOOP-10641.patch, > HADOOP-10641.patch > > > Coordination Engine (CE) is a system, which allows to agree on a sequence of > events in a distributed system. In order to be reliable CE should be > distributed by itself. > Coordination Engine can be based on different algorithms (paxos, raft, 2PC, > zab) and have different implementations, depending on use cases, reliability, > availability, and performance requirements. > CE should have a common API, so that it could serve as a pluggable component > in different projects. The immediate beneficiaries are HDFS (HDFS-6469) and > HBase (HBASE-10909). > First implementation is proposed to be based on ZooKeeper. -- This message was sent by Atlassian JIRA (v6.2#6252)
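The corrected direction described above, agreement executions calling the learner rather than the other way around, has roughly this shape. All names here are illustrative stand-ins for the patch's classes, and the in-memory engine "agrees" immediately rather than after a quorum round:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical learner callback: once a proposal is agreed, the engine
// pushes it into the learner instead of the learner pulling from the engine.
interface Learner {
    void learn(String agreement);
}

class CallbackEngine {
    private final List<Learner> learners = new ArrayList<>();

    void registerLearner(Learner learner) {
        learners.add(learner);
    }

    // A real engine would invoke the learners only after the quorum agrees;
    // this sketch delivers every proposal straight away for illustration.
    void propose(String proposal) {
        for (Learner learner : learners) {
            learner.learn(proposal);
        }
    }
}
```

The test's SampleLearner plays the role of the Learner here, and running the execution inside UserGroupInformation.doAs (as the patch notes) would happen inside learn().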
[jira] [Updated] (HADOOP-10641) Introduce Coordination Engine
[ https://issues.apache.org/jira/browse/HADOOP-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Plamen Jeliazkov updated HADOOP-10641: -- Attachment: HADOOP-10641.patch Attaching new patch based on Konstantin's suggestions. # Added private static class SampleLearner in unit test. # SampleLearner executes SampleProposals and LOG.info's the return value. # Added NOTICE and LICENSE files. # Reduced the code of MiniZooKeeperCluster. > Introduce Coordination Engine > - > > Key: HADOOP-10641 > URL: https://issues.apache.org/jira/browse/HADOOP-10641 > Project: Hadoop Common > Issue Type: New Feature >Affects Versions: 3.0.0 >Reporter: Konstantin Shvachko >Assignee: Plamen Jeliazkov > Attachments: HADOOP-10641.patch, HADOOP-10641.patch > > > Coordination Engine (CE) is a system, which allows to agree on a sequence of > events in a distributed system. In order to be reliable CE should be > distributed by itself. > Coordination Engine can be based on different algorithms (paxos, raft, 2PC, > zab) and have different implementations, depending on use cases, reliability, > availability, and performance requirements. > CE should have a common API, so that it could serve as a pluggable component > in different projects. The immediate beneficiaries are HDFS (HDFS-6469) and > HBase (HBASE-10909). > First implementation is proposed to be based on ZooKeeper. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Work started] (HADOOP-10641) Introduce Coordination Engine
[ https://issues.apache.org/jira/browse/HADOOP-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HADOOP-10641 started by Plamen Jeliazkov. > Introduce Coordination Engine > - > > Key: HADOOP-10641 > URL: https://issues.apache.org/jira/browse/HADOOP-10641 > Project: Hadoop Common > Issue Type: New Feature >Affects Versions: 3.0.0 >Reporter: Konstantin Shvachko >Assignee: Plamen Jeliazkov > Attachments: HADOOP-10641.patch > > > Coordination Engine (CE) is a system, which allows to agree on a sequence of > events in a distributed system. In order to be reliable CE should be > distributed by itself. > Coordination Engine can be based on different algorithms (paxos, raft, 2PC, > zab) and have different implementations, depending on use cases, reliability, > availability, and performance requirements. > CE should have a common API, so that it could serve as a pluggable component > in different projects. The immediate beneficiaries are HDFS (HDFS-6469) and > HBase (HBASE-10909). > First implementation is proposed to be based on ZooKeeper. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HADOOP-10641) Introduce Coordination Engine
[ https://issues.apache.org/jira/browse/HADOOP-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Plamen Jeliazkov updated HADOOP-10641: -- Attachment: HADOOP-10641.patch Attaching initial patch. Initial implementation shows using ZooKeeper as a Coordination Engine. The mechanism for sequencing transactions is done by using a single persistent-sequential Znode. The ZooKeeper connection thread is utilized for learning of agreements by constantly checking against the single Znode mentioned above for different sequence values, and reading them one by one. Proposing values and learning about them happen in parallel. > Introduce Coordination Engine > - > > Key: HADOOP-10641 > URL: https://issues.apache.org/jira/browse/HADOOP-10641 > Project: Hadoop Common > Issue Type: New Feature >Affects Versions: 3.0.0 >Reporter: Konstantin Shvachko >Assignee: Plamen Jeliazkov > Attachments: HADOOP-10641.patch > > > Coordination Engine (CE) is a system, which allows to agree on a sequence of > events in a distributed system. In order to be reliable CE should be > distributed by itself. > Coordination Engine can be based on different algorithms (paxos, raft, 2PC, > zab) and have different implementations, depending on use cases, reliability, > availability, and performance requirements. > CE should have a common API, so that it could serve as a pluggable component > in different projects. The immediate beneficiaries are HDFS (HDFS-6469) and > HBase (HBASE-10909). > First implementation is proposed to be based on ZooKeeper. -- This message was sent by Atlassian JIRA (v6.2#6252)
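The sequencing mechanism described above (a persistent-sequential znode assigning each proposal a monotonically increasing number, with the connection thread reading agreements back one sequence value at a time) can be simulated without a live ZooKeeper ensemble. The class below is an in-memory stand-in for illustration only; it mirrors the ordering behavior, not the actual ZooKeeper calls:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.atomic.AtomicLong;

// In-memory stand-in for the persistent-sequential znode pattern: each
// proposal is stored under the next sequence number, and the learner side
// consumes agreements strictly in order, never skipping a number.
class SequencedLog {
    private final AtomicLong nextSeq = new AtomicLong();
    private final ConcurrentSkipListMap<Long, String> log = new ConcurrentSkipListMap<>();
    private long lastApplied = -1;

    // Analogous to creating a PERSISTENT_SEQUENTIAL child znode.
    long propose(String value) {
        long seq = nextSeq.getAndIncrement();
        log.put(seq, value);
        return seq;
    }

    // Analogous to the connection thread checking for new sequence values
    // and reading them one by one; stops at the first gap.
    List<String> learnNew() {
        List<String> agreed = new ArrayList<>();
        String value;
        while ((value = log.get(lastApplied + 1)) != null) {
            lastApplied++;
            agreed.add(value);
        }
        return agreed;
    }
}
```

Proposing and learning can run in parallel here too: proposers append at the tail while the learner drains from its last-applied position.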
[jira] [Assigned] (HADOOP-10641) Introduce Coordination Engine
[ https://issues.apache.org/jira/browse/HADOOP-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Plamen Jeliazkov reassigned HADOOP-10641: - Assignee: Plamen Jeliazkov > Introduce Coordination Engine > - > > Key: HADOOP-10641 > URL: https://issues.apache.org/jira/browse/HADOOP-10641 > Project: Hadoop Common > Issue Type: New Feature >Affects Versions: 3.0.0 >Reporter: Konstantin Shvachko >Assignee: Plamen Jeliazkov > > Coordination Engine (CE) is a system, which allows to agree on a sequence of > events in a distributed system. In order to be reliable CE should be > distributed by itself. > Coordination Engine can be based on different algorithms (paxos, raft, 2PC, > zab) and have different implementations, depending on use cases, reliability, > availability, and performance requirements. > CE should have a common API, so that it could serve as a pluggable component > in different projects. The immediate beneficiaries are HDFS (HDFS-6469) and > HBase (HBASE-10909). > First implementation is proposed to be based on ZooKeeper. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HADOOP-9870) Mixed configurations for JVM -Xmx in hadoop command
[ https://issues.apache.org/jira/browse/HADOOP-9870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13824960#comment-13824960 ] Plamen Jeliazkov commented on HADOOP-9870: -- Hi Jayesh, The patch looks good, but it looks like you are only changing the HADOOP_CLIENT_OPTS here. Were you planning to address JAVA_HEAP_MAX here too? If not, then this patch looks good from what Konstantin and I noticed in HADOOP-9211. > Mixed configurations for JVM -Xmx in hadoop command > --- > > Key: HADOOP-9870 > URL: https://issues.apache.org/jira/browse/HADOOP-9870 > Project: Hadoop Common > Issue Type: Bug >Reporter: Wei Yan > Attachments: HADOOP-9870.patch > > > When we use hadoop command to launch a class, there are two places setting > the -Xmx configuration. > *1*. The first place is located in file > {{hadoop-common-project/hadoop-common/src/main/bin/hadoop}}. > {code} > exec "$JAVA" $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@" > {code} > Here $JAVA_HEAP_MAX is configured in hadoop-config.sh > ({{hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh}}). The > default value is "-Xmx1000m". > *2*. The second place is set with $HADOOP_OPTS in file > {{hadoop-common-project/hadoop-common/src/main/bin/hadoop}}. > {code} > HADOOP_OPTS="$HADOOP_OPTS $HADOOP_CLIENT_OPTS" > {code} > Here $HADOOP_CLIENT_OPTS is set in hadoop-env.sh > ({{hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh}}) > {code} > export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS" > {code} > Currently the final default java command looks like: > {code}java -Xmx1000m -Xmx512m CLASS_NAME ARGUMENTS"{code} > And if users also specify the -Xmx in the $HADOOP_CLIENT_OPTS, there will be > three -Xmx configurations. > The hadoop setup tutorial only discusses hadoop-env.sh, and it looks that > users should not make any change in hadoop-config.sh. 
> We should make hadoop smart enough to choose the right one before launching the java > command, instead of leaving it to the JVM to make the decision. -- This message was sent by Atlassian JIRA (v6.1#6144)
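The reason `java -Xmx1000m -Xmx512m CLASS` still works is that HotSpot resolves duplicate options by letting the last occurrence win, so the 512m from HADOOP_CLIENT_OPTS silently overrides JAVA_HEAP_MAX. The helper below just mimics that resolution rule to make the behavior concrete; it is not part of any Hadoop script:

```java
// Illustrates the JVM rule the report relies on: when -Xmx appears more
// than once on the command line, the last occurrence takes effect.
class HeapFlagResolver {
    static String effectiveXmx(String... jvmArgs) {
        String last = null;
        for (String arg : jvmArgs) {
            if (arg.startsWith("-Xmx")) {
                last = arg;  // later occurrences override earlier ones
            }
        }
        return last;
    }
}
```

So with the default shell composition, the HADOOP_CLIENT_OPTS value always wins, which is exactly why HADOOP_HEAPSIZE appears to be ignored.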
[jira] [Updated] (HADOOP-9211) HADOOP_CLIENT_OPTS default setting fixes max heap size at 128m, disregards HADOOP_HEAPSIZE
[ https://issues.apache.org/jira/browse/HADOOP-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Plamen Jeliazkov updated HADOOP-9211: - Status: Patch Available (was: Open) > HADOOP_CLIENT_OPTS default setting fixes max heap size at 128m, disregards > HADOOP_HEAPSIZE > -- > > Key: HADOOP-9211 > URL: https://issues.apache.org/jira/browse/HADOOP-9211 > Project: Hadoop Common > Issue Type: Bug > Components: conf >Affects Versions: 2.0.2-alpha >Reporter: Sarah Weissman >Assignee: Plamen Jeliazkov > Attachments: HADOOP-9211.patch, hadoop-xmx.patch > > Original Estimate: 1m > Remaining Estimate: 1m > > hadoop-env.sh as included in the 2.0.2alpha release tarball contains: > export HADOOP_CLIENT_OPTS="-Xmx128m $HADOOP_CLIENT_OPTS" > This overrides any heap settings in HADOOP_HEAPSIZE. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9211) HADOOP_CLIENT_OPTS default setting fixes max heap size at 128m, disregards HADOOP_HEAPSIZE
[ https://issues.apache.org/jira/browse/HADOOP-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Plamen Jeliazkov updated HADOOP-9211: - Attachment: HADOOP-9211.patch > HADOOP_CLIENT_OPTS default setting fixes max heap size at 128m, disregards > HADOOP_HEAPSIZE > -- > > Key: HADOOP-9211 > URL: https://issues.apache.org/jira/browse/HADOOP-9211 > Project: Hadoop Common > Issue Type: Bug > Components: conf >Affects Versions: 2.0.2-alpha >Reporter: Sarah Weissman > Attachments: HADOOP-9211.patch, hadoop-xmx.patch > > Original Estimate: 1m > Remaining Estimate: 1m > > hadoop-env.sh as included in the 2.0.2alpha release tarball contains: > export HADOOP_CLIENT_OPTS="-Xmx128m $HADOOP_CLIENT_OPTS" > This overrides any heap settings in HADOOP_HEAPSIZE. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Assigned] (HADOOP-9211) HADOOP_CLIENT_OPTS default setting fixes max heap size at 128m, disregards HADOOP_HEAPSIZE
[ https://issues.apache.org/jira/browse/HADOOP-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Plamen Jeliazkov reassigned HADOOP-9211: Assignee: Plamen Jeliazkov > HADOOP_CLIENT_OPTS default setting fixes max heap size at 128m, disregards > HADOOP_HEAPSIZE > -- > > Key: HADOOP-9211 > URL: https://issues.apache.org/jira/browse/HADOOP-9211 > Project: Hadoop Common > Issue Type: Bug > Components: conf >Affects Versions: 2.0.2-alpha >Reporter: Sarah Weissman >Assignee: Plamen Jeliazkov > Attachments: HADOOP-9211.patch, hadoop-xmx.patch > > Original Estimate: 1m > Remaining Estimate: 1m > > hadoop-env.sh as included in the 2.0.2alpha release tarball contains: > export HADOOP_CLIENT_OPTS="-Xmx128m $HADOOP_CLIENT_OPTS" > This overrides any heap settings in HADOOP_HEAPSIZE. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9448) Reimplement things
[ https://issues.apache.org/jira/browse/HADOOP-9448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13619460#comment-13619460 ] Plamen Jeliazkov commented on HADOOP-9448: -- Patch is failing to apply. Computer is questioning my actions. Dog is barking. Can't sleep. Actually patch just finished applying. All unit tests passed. +One. > Reimplement things > -- > > Key: HADOOP-9448 > URL: https://issues.apache.org/jira/browse/HADOOP-9448 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.0.4-alpha >Reporter: Alejandro Abdelnur >Assignee: Alejandro Abdelnur >Priority: Blocker > Attachments: remove-trunk.patch > > > We've got to the point we need to reimplement things from scratch. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9244) Upgrade servlet-api dependency from version 2.5 to 3.0.
[ https://issues.apache.org/jira/browse/HADOOP-9244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13562115#comment-13562115 ] Plamen Jeliazkov commented on HADOOP-9244: -- Steve, Boudnik: Is it not Jetty 8 that is supposed to be fully compatible with Servlet API 3.0? Not Jetty 7? Perhaps that will be too far forward. > Upgrade servlet-api dependency from version 2.5 to 3.0. > --- > > Key: HADOOP-9244 > URL: https://issues.apache.org/jira/browse/HADOOP-9244 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.0.3-alpha >Reporter: Plamen Jeliazkov >Assignee: Plamen Jeliazkov >Priority: Minor > Fix For: 3.0.0, 2.0.3-alpha > > Attachments: HDFS-4422.patch > > > Please update the servlet-api jar from 2.5 to javax.servlet 3.0 via Maven: > > <dependency> > <groupId>javax.servlet</groupId> > <artifactId>javax.servlet-api</artifactId> > <version>3.0.1</version> > <scope>provided</scope> > </dependency> > > I am running a 2.0.3 dev-cluster and can confirm compatibility. I have > removed the servlet-api-2.5.jar file and replaced it with > javax.servlet-3.0.jar file. I am using javax.servlet-3.0 because it > implements methods that I use for a filter, namely the > HttpServletResponse.getStatus() method. > I believe it is a gain to have this dependency as it allows more > functionality and has so far proven to be backwards compatible. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9244) Upgrade servlet-api dependency from version 2.5 to 3.0.
[ https://issues.apache.org/jira/browse/HADOOP-9244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13562105#comment-13562105 ] Plamen Jeliazkov commented on HADOOP-9244: -- Suresh: I have run the full unit test suite locally, and I operate a distributed cluster that uses the 3.0 API in custom filters. I have not seen any disruptions or drops of any of the HTTP servers with the dependency upgrade installed and the cluster in use. I have not tested with MapReduce / YARN, however. All: If you are more comfortable with an upgrade to jetty7 and servlet3, that is fine. > Upgrade servlet-api dependency from version 2.5 to 3.0. > --- > > Key: HADOOP-9244 > URL: https://issues.apache.org/jira/browse/HADOOP-9244 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.0.3-alpha >Reporter: Plamen Jeliazkov >Assignee: Plamen Jeliazkov >Priority: Minor > Fix For: 3.0.0, 2.0.3-alpha > > Attachments: HDFS-4422.patch > > > Please update the servlet-api jar from 2.5 to javax.servlet 3.0 via Maven: > > <dependency> > <groupId>javax.servlet</groupId> > <artifactId>javax.servlet-api</artifactId> > <version>3.0.1</version> > <scope>provided</scope> > </dependency> > > I am running a 2.0.3 dev-cluster and can confirm compatibility. I have > removed the servlet-api-2.5.jar file and replaced it with > javax.servlet-3.0.jar file. I am using javax.servlet-3.0 because it > implements methods that I use for a filter, namely the > HttpServletResponse.getStatus() method. > I believe it is a gain to have this dependency as it allows more > functionality and has so far proven to be backwards compatible. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-7568) SequenceFile should not print into stdout
[ https://issues.apache.org/jira/browse/HADOOP-7568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Plamen Jeliazkov updated HADOOP-7568: - Attachment: HADOOP-7568.023.patch Patch for 0.23 SequenceFile.java > SequenceFile should not print into stdout > - > > Key: HADOOP-7568 > URL: https://issues.apache.org/jira/browse/HADOOP-7568 > Project: Hadoop Common > Issue Type: Bug > Components: io >Affects Versions: 0.22.0 >Reporter: Konstantin Shvachko >Assignee: Plamen Jeliazkov > Fix For: 0.22.0 > > Attachments: HADOOP-7568.023.patch, HADOOP-7568.patch, > HADOOP-7568.patch, HADOOP-7568.r2.patch > > > The following line in {{SequenceFile.Reader.initialize()}} should be removed: > {code} > System.out.println("Setting end to " + end); > {code} -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
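The patch itself simply deletes the stray println. If the message were worth keeping, the idiomatic replacement would be a guarded debug-level log call so it never reaches stdout. This is a sketch only, using java.util.logging to stay self-contained; Hadoop of this era used Commons Logging (LOG.debug) instead:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch: instead of System.out.println("Setting end to " + end), log at a
// debug level. java.util.logging's default handler writes to stderr, and
// FINE is disabled by default, so stdout stays clean either way.
class ReaderInit {
    private static final Logger LOG = Logger.getLogger(ReaderInit.class.getName());

    static void initialize(long end) {
        if (LOG.isLoggable(Level.FINE)) {
            LOG.fine("Setting end to " + end);
        }
    }
}
```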
[jira] [Commented] (HADOOP-7568) SequenceFile should not print into stdout
[ https://issues.apache.org/jira/browse/HADOOP-7568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13102873#comment-13102873 ] Plamen Jeliazkov commented on HADOOP-7568: -- Thanks Harsh. :) Yeah, I was wondering what was going on there. I will provide a patch for 0.23; no need to up it, you can leave it here and I will attach a patch for 0.23 asap. > SequenceFile should not print into stdout > - > > Key: HADOOP-7568 > URL: https://issues.apache.org/jira/browse/HADOOP-7568 > Project: Hadoop Common > Issue Type: Bug > Components: io >Affects Versions: 0.22.0 >Reporter: Konstantin Shvachko >Assignee: Plamen Jeliazkov > Fix For: 0.22.0 > > Attachments: HADOOP-7568.patch, HADOOP-7568.patch, > HADOOP-7568.r2.patch > > > The following line in {{SequenceFile.Reader.initialize()}} should be removed: > {code} > System.out.println("Setting end to " + end); > {code} -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-7568) SequenceFile should not print into stdout
[ https://issues.apache.org/jira/browse/HADOOP-7568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Plamen Jeliazkov updated HADOOP-7568: - Attachment: HADOOP-7568.r2.patch Last one, I promise. > SequenceFile should not print into stdout > - > > Key: HADOOP-7568 > URL: https://issues.apache.org/jira/browse/HADOOP-7568 > Project: Hadoop Common > Issue Type: Bug > Components: io >Affects Versions: 0.22.0 >Reporter: Konstantin Shvachko >Assignee: Plamen Jeliazkov > Fix For: 0.22.0 > > Attachments: HADOOP-7568.patch, HADOOP-7568.patch, > HADOOP-7568.r2.patch > > > The following line in {{SequenceFile.Reader.initialize()}} should be removed: > {code} > System.out.println("Setting end to " + end); > {code} -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-7568) SequenceFile should not print into stdout
[ https://issues.apache.org/jira/browse/HADOOP-7568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Plamen Jeliazkov updated HADOOP-7568: - Attachment: HADOOP-7568.patch Patch fixed. > SequenceFile should not print into stdout > - > > Key: HADOOP-7568 > URL: https://issues.apache.org/jira/browse/HADOOP-7568 > Project: Hadoop Common > Issue Type: Bug > Components: io >Affects Versions: 0.22.0 >Reporter: Konstantin Shvachko >Assignee: Plamen Jeliazkov > Fix For: 0.22.0 > > Attachments: HADOOP-7568.patch, HADOOP-7568.patch > > > The following line in {{SequenceFile.Reader.initialize()}} should be removed: > {code} > System.out.println("Setting end to " + end); > {code} -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-7568) SequenceFile should not print into stdout
[ https://issues.apache.org/jira/browse/HADOOP-7568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Plamen Jeliazkov updated HADOOP-7568: - Attachment: HADOOP-7568.patch Patch fix. > SequenceFile should not print into stdout > - > > Key: HADOOP-7568 > URL: https://issues.apache.org/jira/browse/HADOOP-7568 > Project: Hadoop Common > Issue Type: Bug > Components: io >Affects Versions: 0.22.0 >Reporter: Konstantin Shvachko > Fix For: 0.22.0 > > Attachments: HADOOP-7568.patch > > > The following line in {{SequenceFile.Reader.initialize()}} should be removed: > {code} > System.out.println("Setting end to " + end); > {code} -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-7568) SequenceFile should not print into stdout
[ https://issues.apache.org/jira/browse/HADOOP-7568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Plamen Jeliazkov updated HADOOP-7568: - Status: Patch Available (was: Open) > SequenceFile should not print into stdout > - > > Key: HADOOP-7568 > URL: https://issues.apache.org/jira/browse/HADOOP-7568 > Project: Hadoop Common > Issue Type: Bug > Components: io >Affects Versions: 0.22.0 >Reporter: Konstantin Shvachko > Fix For: 0.22.0 > > Attachments: HADOOP-7568.patch > > > The following line in {{SequenceFile.Reader.initialize()}} should be removed: > {code} > System.out.println("Setting end to " + end); > {code} -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira