I hope the following information will be helpful.

From the perspective of compilation, Hadoop on JDK17 currently doesn't face
many obstacles.

The code on the trunk branch can already be compiled directly on JDK17.

I will verify the situation for hadoop-3.4.1 and then provide feedback.

If we want to fully support Hadoop on JDK17, there is still a lot of work
to be done. In the next quarter, I will also dedicate some effort to
complete this task.

If we need to test the JDK17 version of Hadoop, it should be feasible.

Best Regards,
Shilun Fan.

On Thu, Oct 3, 2024 at 3:05 AM Steve Loughran <ste...@cloudera.com> wrote:

> Are you using the hadoop thirdparty jar? There is a 1.3.0 release out.
>
> On Wed, 2 Oct 2024 at 17:01, Wei-Chiu Chuang wrote:
>
> > The HBase project is adding support for Hadoop 3.4.0, and I had to add a few
> > changes on top of that to let the HBase shading checks pass (license issues
> > due to transitive dependencies and so on). Those are quite common when
> > updating to a new Hadoop version.
> >
> > But apart from that, it builds and the unit tests passed:
> > https://github.com/apache/hbase/pull/6331 (there was one failure, but it
> > passes locally for me).
> > One more thing to add: HBase master now requires JDK17 or higher to
> > build. That just works out of the box.
> >
> > Ozone is a separate story.
> >
> https://github.com/jojochuang/ozone/actions/runs/11134281812/job/30942713712
> > I had to make a code change due to Ozone's use of Hadoop's non-public
> > static variables. So that's okay.
> > I am having trouble with the unit tests (the docker-based acceptance tests
> > don't work yet due to the lack of Hadoop 3.4.1 images), apparently because
> > of mixed protobuf versions (or so I thought).
> >
> > There are failures like this that look similar to HADOOP-9845,
> > so I suspect it's due to the protobuf version being updated from 3.7
> > to 3.25. I guess I can update Ozone's protobuf version to match what's
> > in Hadoop thirdparty.
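One way to sketch the version matching suggested above is a Maven `dependencyManagement` pin in Ozone's parent pom. This is illustrative only, not Ozone's actual build change; the version `3.25.3` is an assumption and would need to be whatever hadoop-thirdparty 1.3.0 actually shades:

```xml
<!-- pom.xml (hypothetical): force one protobuf-java version across all
     modules so that generated stubs and the runtime library agree. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.google.protobuf</groupId>
      <artifactId>protobuf-java</artifactId>
      <version>3.25.3</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```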
> >
> > com.google.protobuf.ServiceException:
> > java.lang.UnsupportedOperationException: This is supposed to be overridden by subclasses.
> >   at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:264)
> >   at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:132)
> >   at com.sun.proxy.$Proxy94.submitRequest(Unknown Source)
> >   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >   at java.lang.reflect.Method.invoke(Method.java:498)
> >   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:437)
> >   at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:170)
> >   at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:162)
> >   at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:100)
> >   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:366)
> >   at com.sun.proxy.$Proxy94.submitRequest(Unknown Source)
> >   at org.apache.hadoop.ozone.om.protocolPB.Hadoop3OmTransport.submitRequest(Hadoop3OmTransport.java:80)
> >   at org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.submitRequest(OzoneManagerProtocolClientSideTranslatorPB.java:338)
> >   at org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.getServiceInfo(OzoneManagerProtocolClientSideTranslatorPB.java:1863)
> >   at org.apache.hadoop.ozone.client.rpc.RpcClient.<init>(RpcClient.java:273)
> >   at org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:248)
> >   at org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:231)
> >   at org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:151)
> >   at org.apache.hadoop.ozone.om.OmTestManagers.<init>(OmTestManagers.java:124)
> >   at org.apache.hadoop.ozone.om.OmTestManagers.<init>(OmTestManagers.java:83)
> >   at org.apache.hadoop.ozone.security.acl.TestOzoneNativeAuthorizer.setup(TestOzoneNativeAuthorizer.java:147)
> >   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >   at java.lang.reflect.Method.invoke(Method.java:498)
> >   at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:728)
> >   at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
> >   at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
> >   at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
> >   at org.junit.jupiter.engine.extension.TimeoutExtension.interceptLifecycleMethod(TimeoutExtension.java:128)
> >   at org.junit.jupiter.engine.extension.TimeoutExtension.interceptBeforeAllMethod(TimeoutExtension.java:70)
> >   at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
> >   at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
> >   at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
> >   at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
> >   at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
> >   at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
> >   at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
> >   at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
> >   at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeBeforeAllMethods$13(ClassBasedTestDescriptor.java:412)
> >   at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> >   at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeBeforeAllMethods(ClassBasedTestDescriptor.java:410)
> >   at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.before(ClassBasedTestDescriptor.java:216)
> >   at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.before(ClassBasedTestDescriptor.java:85)
> >   at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:148)
> >   at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> >   at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
> >   at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
> >   at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
> >   at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> >   at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
> >   at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
> >   at java.util.ArrayList.forEach(ArrayList.java:1259)
> >   at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
> >   at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
> >   at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> >   at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
> >   at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
> >   at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
> >   at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> >   at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
> >   at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
> >   at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35)
> >   at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
> >   at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:54)
> >   at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:198)
> >   at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:169)
> >   at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:93)
> >   at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:58)
> >   at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:141)
> >   at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:57)
> >   at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:103)
> >   at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:85)
> >   at org.junit.platform.launcher.core.DelegatingLauncher.execute(DelegatingLauncher.java:47)
> >   at org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:63)
> >   at com.intellij.junit5.JUnit5IdeaTestRunner.startRunnerWithArgs(JUnit5IdeaTestRunner.java:57)
> >   at com.intellij.rt.junit.IdeaTestRunner$Repeater$1.execute(IdeaTestRunner.java:38)
> >   at com.intellij.rt.execution.junit.TestsRepeater.repeat(TestsRepeater.java:11)
> >   at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:35)
> >   at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:232)
> >   at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:55)
> > Caused by: java.lang.UnsupportedOperationException: This is supposed to be overridden by subclasses.
> >   at org.apache.hadoop.thirdparty.protobuf.GeneratedMessageV3.getUnknownFields(GeneratedMessageV3.java:280)
> >   at org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcRequestHeaderProto.getSerializedSize(RpcHeaderProtos.java:2381)
> >   at org.apache.hadoop.thirdparty.protobuf.AbstractMessageLite.writeDelimitedTo(AbstractMessageLite.java:88)
> >   at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:428)
> >   at org.apache.hadoop.ipc.Client.lambda$getConnection$1(Client.java:1633)
> >   at java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1660)
> >   at org.apache.hadoop.ipc.Client.getConnection(Client.java:1632)
> >   at org.apache.hadoop.ipc.Client.call(Client.java:1473)
> >   at org.apache.hadoop.ipc.Client.call(Client.java:1426)
> >   at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:250)
> >   ... 82 more
> >
> > On Wed, Oct 2, 2024 at 7:51 AM Steve Loughran wrote:
> >
> >>
> >> Please do!
> >>
> >> On Tue, 1 Oct 2024 at 20:54, Wei-Chiu Chuang wrote:
> >>
> >>> Hi, I'm late to the party, but I'd like to build and test this release
> >>> with Ozone and HBase.
> >>>
> >>> On Tue, Oct 1, 2024 at 2:12 AM Mukund Madhav Thakur
> >>> wrote:
> >>>
> >>> > Thanks @Dongjoon Hyun for trying out the RC
> >>> > and finding this bug. This has to be fixed.
> >>> > It would be great if others could give the RC a try so that we know
> >>> > of any issues earlier.
> >>> >
> >>> > Thanks
> >>> > Mukund
> >>> >
> >>> > On Tue, Oct 1, 2024 at 2:21 AM Steve Loughran
> >>> <ste...@cloudera.com.invalid
> >>> > >
> >>> > wrote:
> >>> >
> >>> > > ok, we will have to consider that a -1
> >>> > >
> >>> > > Interestingly, we haven't seen that on any of our internal QE; maybe
> >>> > > none of the requests were overlapping.
> >>> > >
> >>> > > I was just looking towards an =0 because of
> >>> > >
> >>> > > https://issues.apache.org/jira/browse/HADOOP-19295
> >>> > >
> >>> > > *Unlike the v1 SDK, PUT/POST of data now shares the same timeout as
> >>> > > all other requests, and on a slow network connection requests time out.
> >>> > > Furthermore, large file uploads can generate the same failure
> >>> > > condition because the competing block uploads reduce the bandwidth
> >>> > > for the others.*
> >>> > >
> >>> > > I'll describe more on the JIRA. The fix is straightforward: have a much
> >>> > > longer timeout, such as 15 minutes. It does mean that problems with
> >>> > > other calls will not time out for that same length of time.
> >>> > >
> >>> > > Note that in previous releases that request timeout *did not* apply to
> >>> > > the big upload. This has been reverted.
> >>> > >
> >>> > > This is not a regression from 3.4.0; that release had the same problem,
> >>> > > just nobody had noticed. That's what comes from doing a lot of the
> >>> > > testing within AWS, and other people doing the testing (me) not trying
> >>> > > to upload files > 1GB. I have now.
> >>> > >
> >>> > > Anyway, I do not consider that a -1 because it wasn't a regression
> >>> and
> >>> > it's
> >>> > > straightforward to work around in a site configuration.
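The site-configuration workaround mentioned above can be sketched as follows. The property name `fs.s3a.connection.request.timeout` is my assumption for the shared request timeout being discussed in HADOOP-19295; check the JIRA and the hadoop-aws docs for the exact key and value syntax in your release:

```xml
<!-- core-site.xml (hypothetical sketch): extend the S3A request timeout so
     large PUT/POST uploads on slow links don't hit the shared timeout. -->
<property>
  <name>fs.s3a.connection.request.timeout</name>
  <value>15m</value>
</property>
```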
> >>> > >
> >>> > > Other than that, my findings were:
> >>> > > -Pnative breaks enforcer on macOS (build only; the fix is to upgrade
> >>> > > the enforcer version)
> >>> > >
> >>> > > -native code probes on my Ubuntu Raspberry Pi 5 (don't laugh, this is
> >>> > > the most powerful computer I personally own) warn about a missing link
> >>> > > in the native checks.
> >>> > > I haven't yet set up OpenSSL bindings for s3a and abfs to see if they
> >>> > > actually work.
> >>> > >
> >>> > > [hadoopq] 2024-09-27 19:52:16,544 WARN crypto.OpensslCipher: Failed to load OpenSSL Cipher.
> >>> > > [hadoopq] java.lang.UnsatisfiedLinkError: EVP_CIPHER_CTX_block_size
> >>> > > [hadoopq]   at org.apache.hadoop.crypto.OpensslCipher.initIDs(Native Method)
> >>> > > [hadoopq]   at org.apache.hadoop.crypto.OpensslCipher.(OpensslCipher.java:90)
> >>> > > [hadoopq]   at org.apache.hadoop.util.NativeLibraryChecker.main(NativeLibraryChecker.
> >>> > > Yours looks like it is. Pity, but thank you for the testing. Give it a
> >>> > > couple more days to see if people report any other issues.
> >>> > >
> >>> > > Mukund has been doing all the work on this; I'll see how much I can
> >>> do
> >>> > > myself to share the joy.
> >>> > >
> >>> > > On Sun, 29 Sept 2024 at 06:24, Dongjoon Hyun
> >>> > wrote:
> >>> > >
> >>> > > > Unfortunately, it turns out to be a regression in addition to a
> >>> > > > breaking change.
> >>> > > >
> >>> > > > In short, HADOOP-19098 (or more) makes Hadoop 3.4.1 fail even when
> >>> > > > users give disjoint ranges.
> >>> > > >
> >>> > > > I filed a Hadoop JIRA issue and a PR. Please take a look at that.
> >>> > > >
> >>> > > > - HADOOP-19291. `CombinedFileRange.merge` should not convert
> >>> > > > disjoint ranges into overlapped ones
> >>> > > > - https://github.com/apache/hadoop/pull/7079
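The invariant at stake in HADOOP-19291 can be illustrated with a small self-contained sketch. This is not Hadoop's actual `CombinedFileRange` code; the `Range` type and `maxGap` parameter are stand-ins for illustration only. The point is that a merge pass over sorted, non-overlapping ranges may coalesce near-neighbours, but must never turn disjoint input ranges into overlapping output ranges.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for Hadoop's FileRange; offset/length in bytes.
record Range(long offset, long length) {
    long end() { return offset + length; }
}

class RangeMerger {
    // Merge sorted ranges whose gap is at most maxGap bytes.
    // Ranges further apart than maxGap stay separate, so disjoint
    // inputs can never become overlapping outputs.
    static List<Range> merge(List<Range> sorted, long maxGap) {
        List<Range> out = new ArrayList<>();
        for (Range r : sorted) {
            Range last = out.isEmpty() ? null : out.get(out.size() - 1);
            if (last != null && r.offset() - last.end() <= maxGap) {
                // Coalesce: extend the previous combined range to cover r.
                out.set(out.size() - 1,
                        new Range(last.offset(), r.end() - last.offset()));
            } else {
                out.add(r);
            }
        }
        return out;
    }
}
```

With `maxGap = 16`, ranges at offsets 0 and 100 stay separate, while ranges at offsets 0 and 12 coalesce into one combined range covering both.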
> >>> > > >
> >>> > > > I believe this is a Hadoop release blocker from both the Apache ORC
> >>> > > > and Apache Parquet project perspectives.
> >>> > > >
> >>> > > > Dongjoon.
> >>> > > >
> >>> > > > On 2024/09/29 03:16:18 Dongjoon Hyun wrote:
> >>> > > > > Thank you for 3.4.1 RC2.
> >>> > > > >
> >>> > > > > HADOOP-19098 (Vector IO: consistent specified rejection of
> >>> > > > > overlapping ranges) seems to be a hard breaking change at 3.4.1.
> >>> > > > >
> >>> > > > > Do you think we can have an option to handle the overlapping ranges
> >>> > > > > in the Hadoop layer instead of introducing a breaking change for
> >>> > > > > users in a maintenance release?
> >>> > > > >
> >>> > > > > Dongjoon.
> >>> > > > >
> >>> > > > > On 2024/09/25 20:13:48 Mukund Madhav Thakur wrote:
> >>> > > > > > Apache Hadoop 3.4.1
> >>> > > > > >
> >>> > > > > >
> >>> > > > > > With help from Steve I have put together a release candidate
> >>> (RC2)
> >>> > > for
> >>> > > > > > Hadoop 3.4.1.
> >>> > > > > >
> >>> > > > > >
> >>> > > > > > What we would like is for anyone who can to verify the tarballs,
> >>> > > > > > especially anyone who can try the arm64 binaries, as we want to
> >>> > > > > > include them too.
> >>> > > > > >
> >>> > > > > >
> >>> > > > > > The RC is available at:
> >>> > > > > >
> >>> > > > > >
> >>> https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.4.1-RC2/
> >>> > > > > >
> >>> > > > > >
> >>> > > > > > The git tag is release-3.4.1-RC2, commit
> >>> > > > > > b3a4b582eeb729a0f48eca77121dd5e2983b2004
> >>> > > > > >
> >>> > > > > >
> >>> > > > > > The maven artifacts are staged at
> >>> > > > > >
> >>> > > > > >
> >>> > > >
> >>> >
> >>>
> https://repository.apache.org/content/repositories/orgapachehadoop-1426
> >>> > > > > >
> >>> > > > > >
> >>> > > > > > You can find my public key at:
> >>> > > > > >
> >>> > > > > >
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> >>> > > > > >
> >>> > > > > >
> >>> > > > > > Change log
> >>> > > > > >
> >>> > > > > >
> >>> > > >
> >>> > >
> >>> >
> >>>
> https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.4.1-RC2/CHANGELOG.md
> >>> > > > > >
> >>> > > > > >
> >>> > > > > > Release notes
> >>> > > > > >
> >>> > > > > >
> >>> > > >
> >>> > >
> >>> >
> >>>
> https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.4.1-RC2/RELEASENOTES.md
> >>> > > > > >
> >>> > > > > >
> >>> > > > > > This is off branch-3.4.
> >>> > > > > >
> >>> > > > > >
> >>> > > > > > Key changes include
> >>> > > > > >
> >>> > > > > >
> >>> > > > > > * Bulk Delete API.
> >>> > > https://issues.apache.org/jira/browse/HADOOP-18679
> >>> > > > > >
> >>> > > > > > * Fixes and enhancements in Vectored IO API.
> >>> > > > > >
> >>> > > > > > * Improvements in Hadoop Azure connector.
> >>> > > > > >
> >>> > > > > > * Fixes and improvements post upgrade to AWS V2 SDK in
> >>> > S3AConnector.
> >>> > > > > >
> >>> > > > > > * This release includes Arm64 binaries. Please can anyone
> with
> >>> > > > > >
> >>> > > > > > compatible systems validate these.
> >>> > > > > >
> >>> > > > > >
> >>> > > > > > Note, because the arm64 binaries are built separately on a
> >>> > > > > > different platform and JVM, their jar files may not match those of
> >>> > > > > > the x86 release, and therefore the maven artifacts. I don't think
> >>> > > > > > this is an issue (the ASF actually releases source tarballs; the
> >>> > > > > > binaries are there for help only, though with the maven repo that's
> >>> > > > > > a bit blurred).
> >>> > > > > >
> >>> > > > > >
> >>> > > > > > The only way to be consistent would be to actually untar the
> >>> > > > > > x86.tar.gz, overwrite its binaries with the arm stuff, retar, sign
> >>> > > > > > and push out for the vote. Even automating that would be risky.
> >>> > > > > >
> >>> > > > > >
> >>> > > > > > Please try the release and vote. The vote will run for 5
> days.
> >>> > > > > >
> >>> > > > > >
> >>> > > > > >
> >>> > > > > > Thanks,
> >>> > > > > >
> >>> > > > > > Mukund
> >>> > > > > >
> >>> > > > >
> >>> > > > >
> >>> > > > > ---------------------------------------------------------------------
> >>> > > > > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> >>> > > > > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> >>> > > > >
> >>> > > > >
> >>> > > >
> >>> > > >
> >>> > > >
> >>> > > >
> >>> > >
> >>> >
> >>>
> >>
>
>
