Re: Connecting Hadoop HA cluster via java client

2016-10-18 Thread 권병창
That configuration is needed when using webhdfs://${nameservice}.
Try: hdfs dfs -ls webhdfs://${nameservice}/some/files
 
 
-Original Message-
From: "Pushparaj Motamari"pushpara...@gmail.com 
To: "권병창"magnu...@navercorp.com; 
Cc: user@hadoop.apache.org; 
Sent: 2016-10-18 (화) 23:02:14
Subject: Re: Connecting Hadoop HA cluster via java client
 
Hi,
 Following are not required I guess. I am able to connect to cluster without 
these. Is there any reason to include them?
dfs.namenode.http-address.${dfs.nameservices}.nn1 
dfs.namenode.http-address.${dfs.nameservices}.nn2 
Regards
Pushparaj 
 
On Wed, Oct 12, 2016 at 6:39 AM, 권병창 magnu...@navercorp.com wrote:
Hi.
 
1. The minimal configuration needed to connect to an HA NameNode is the set of properties below (a programmatic sketch follows the list);
ZooKeeper information is not necessary.
 
dfs.nameservices
dfs.ha.namenodes.${dfs.nameservices}
dfs.namenode.rpc-address.${dfs.nameservices}.nn1 
dfs.namenode.rpc-address.${dfs.nameservices}.nn2
dfs.namenode.http-address.${dfs.nameservices}.nn1 
dfs.namenode.http-address.${dfs.nameservices}.nn2
dfs.client.failover.proxy.provider.${dfs.nameservices}=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
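
As a rough illustration (not from the original mail), the same properties can
be set programmatically; the nameservice name "mycluster" and the host names
and ports below are placeholders:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class HaClientConfig {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://mycluster");
    conf.set("dfs.nameservices", "mycluster");
    conf.set("dfs.ha.namenodes.mycluster", "nn1,nn2");
    conf.set("dfs.namenode.rpc-address.mycluster.nn1", "machine1.example.com:8020");
    conf.set("dfs.namenode.rpc-address.mycluster.nn2", "machine2.example.com:8020");
    conf.set("dfs.namenode.http-address.mycluster.nn1", "machine1.example.com:50070");
    conf.set("dfs.namenode.http-address.mycluster.nn2", "machine2.example.com:50070");
    conf.set("dfs.client.failover.proxy.provider.mycluster",
        "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
    // The logical URI "hdfs://mycluster" is resolved by the failover proxy
    // provider, so no physical NameNode host appears in client code.
    FileSystem fs = FileSystem.get(URI.create("hdfs://mycluster"), conf);
    System.out.println(fs.getUri());
  }
}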
  
 
2. The client tries the configured NameNodes in a round-robin manner to select the active one.
 
 
-Original Message-
From: "Pushparaj Motamari"pushpara...@gmail.com 
To: user@hadoop.apache.org; 
Cc: 
Sent: 2016-10-12 (수) 03:20:53
Subject: Connecting Hadoop HA cluster via java client
 
Hi,
I have two questions pertaining to accessing the Hadoop HA cluster from a Java
client.

1. Is it necessary to supply
conf.set("dfs.ha.automatic-failover.enabled",true);
and
conf.set("ha.zookeeper.quorum","zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181");

in addition to the other properties set in the code below?
private Configuration initHAConf(URI journalURI, Configuration conf) {
  conf.set(DFSConfigKeys.DFS_NAMENODE_SHARED_EDITS_DIR_KEY,
  journalURI.toString());
  
  String address1 = "127.0.0.1:" + NN1_IPC_PORT;
  String address2 = "127.0.0.1:" + NN2_IPC_PORT;
  conf.set(DFSUtil.addKeySuffixes(DFS_NAMENODE_RPC_ADDRESS_KEY,
  NAMESERVICE, NN1), address1);
  conf.set(DFSUtil.addKeySuffixes(DFS_NAMENODE_RPC_ADDRESS_KEY,
  NAMESERVICE, NN2), address2);
  conf.set(DFSConfigKeys.DFS_NAMESERVICES, NAMESERVICE);
  conf.set(DFSUtil.addKeySuffixes(DFS_HA_NAMENODES_KEY_PREFIX, NAMESERVICE),
  NN1 + "," + NN2);
  conf.set(DFS_CLIENT_FAILOVER_PROXY_PROVIDER_KEY_PREFIX + "." + NAMESERVICE,
  ConfiguredFailoverProxyProvider.class.getName());
  conf.set("fs.defaultFS", "hdfs://" + NAMESERVICE);
  
  return conf;
}

2. If we supply the ZooKeeper configuration details mentioned in question 1, is
it still necessary to set the active and standby NameNode addresses as in the
code above? Since we have given the ZooKeeper connection details, the client
should be able to figure out the active NameNode's connection details.


Regards

Pushparaj




 




Re: Connecting Hadoop HA cluster via java client

2016-10-18 Thread Rakesh Radhakrishnan
Hi,

dfs.namenode.http-address is the fully qualified HTTP address each NameNode
listens on. As with the rpc-address configuration, set the addresses of both
NameNodes' HTTP servers (Web UI) so that you can browse the status of the
Active/Standby NameNodes in a web browser. HDFS also supports a secure HTTP
server address and port; use "dfs.namenode.https-address" for that.

For example, assuming the dfs.nameservices config item (the logical name for
your nameservice) is set to "mycluster":


<property>
  <name>dfs.namenode.http-address.mycluster.nn1</name>
  <value>machine1.example.com:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn2</name>
  <value>machine2.example.com:50070</value>
</property>
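
Once those addresses are configured, each NameNode's HTTP endpoint can also be
polled programmatically. A hedged sketch (host names are the ones assumed
above; verify the JMX bean name against your Hadoop version):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class NameNodeHttpStatus {
  public static void main(String[] args) throws Exception {
    // Values of dfs.namenode.http-address.mycluster.nn1 / .nn2.
    String[] httpAddresses = {"machine1.example.com:50070", "machine2.example.com:50070"};
    for (String addr : httpAddresses) {
      // NameNodeStatus is the JMX bean that normally exposes the Active/Standby "State".
      URL url = new URL("http://" + addr + "/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus");
      HttpURLConnection conn = (HttpURLConnection) url.openConnection();
      try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
        String line;
        while ((line = in.readLine()) != null) {
          System.out.println(addr + ": " + line);
        }
      }
    }
  }
}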


Regards,
Rakesh

On Tue, Oct 18, 2016 at 7:32 PM, Pushparaj Motamari wrote:

> Hi,
>
> Following are not required I guess. I am able to connect to cluster
> without these. Is there any reason to include them?
>
> dfs.namenode.http-address.${dfs.nameservices}.nn1
>
> dfs.namenode.http-address.${dfs.nameservices}.nn2
>
> Regards
>
> Pushparaj
>
>
>
> On Wed, Oct 12, 2016 at 6:39 AM, 권병창  wrote:
>
>> Hi.
>>
>>
>>
>> 1. minimal configuration to connect HA namenode is below properties.
>>
>> zookeeper information does not necessary.
>>
>>
>>
>> dfs.nameservices
>>
>> dfs.ha.namenodes.${dfs.nameservices}
>>
>> dfs.namenode.rpc-address.${dfs.nameservices}.nn1
>>
>> dfs.namenode.rpc-address.${dfs.nameservices}.nn2
>>
>> dfs.namenode.http-address.${dfs.nameservices}.nn1
>>
>> dfs.namenode.http-address.${dfs.nameservices}.nn2
>> dfs.client.failover.proxy.provider.c3=org.apache.hadoop.hdfs
>> .server.namenode.ha.ConfiguredFailoverProxyProvider
>>
>>
>>
>>
>>
>> 2. client use round robin manner for selecting active namenode.
>>
>>
>>
>>
>>
>> -Original Message-
>> *From:* "Pushparaj Motamari"
>> *To:* ;
>> *Cc:*
>> *Sent:* 2016-10-12 (수) 03:20:53
>> *Subject:* Connecting Hadoop HA cluster via java client
>>
>> Hi,
>>
>> I have two questions pertaining to accessing the hadoop ha cluster from
>> java client.
>>
>> 1. Is  it necessary to supply
>>
>> conf.set("dfs.ha.automatic-failover.enabled",true);
>>
>> and
>>
>> conf.set("ha.zookeeper.quorum","zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181");
>>
>> in addition to the other properties set in the code below?
>>
>> private Configuration initHAConf(URI journalURI, Configuration conf) {
>>   conf.set(DFSConfigKeys.DFS_NAMENODE_SHARED_EDITS_DIR_KEY,
>>   journalURI.toString());
>>
>>   String address1 = "127.0.0.1:" + NN1_IPC_PORT;
>>   String address2 = "127.0.0.1:" + NN2_IPC_PORT;
>>   conf.set(DFSUtil.addKeySuffixes(DFS_NAMENODE_RPC_ADDRESS_KEY,
>>   NAMESERVICE, NN1), address1);
>>   conf.set(DFSUtil.addKeySuffixes(DFS_NAMENODE_RPC_ADDRESS_KEY,
>>   NAMESERVICE, NN2), address2);
>>   conf.set(DFSConfigKeys.DFS_NAMESERVICES, NAMESERVICE);
>>   conf.set(DFSUtil.addKeySuffixes(DFS_HA_NAMENODES_KEY_PREFIX, NAMESERVICE),
>>   NN1 + "," + NN2);
>>   conf.set(DFS_CLIENT_FAILOVER_PROXY_PROVIDER_KEY_PREFIX + "." + NAMESERVICE,
>>   ConfiguredFailoverProxyProvider.class.getName());
>>   conf.set("fs.defaultFS", "hdfs://" + NAMESERVICE);
>>
>>   return conf;}
>>
>> 2. If we supply zookeeper configuration details as mentioned in the question 
>> 1 is it necessary to set the primary and secondary namenode addresses as 
>> mentioned in the code above? Since we have
>> given zookeeper connection details the client should be able to figure out 
>> the active namenode connection details.
>>
>>
>> Regards
>>
>> Pushparaj
>>
>>
>


Re: S3AFileSystem & read-after-write consistency

2016-10-18 Thread Chris Nauroth
Hello Dave,

You are correct that S3A currently may suffer unexpected effects from eventual 
consistency due to negative caching on the S3 side for the initial HEAD 
request.  In practice, I have never seen any negative consequences from this 
particular aspect of S3 eventual consistency, but in theory the problem is 
possible.

If you are interested in mitigating the effects of S3 eventual consistency, 
then you might be interested in watching development of the S3Guard project, 
tracked in Apache JIRA HADOOP-13345.

https://issues.apache.org/jira/browse/HADOOP-13345

To summarize, we plan to support use of an external store with strong 
consistency guarantees for S3A file system metadata.  In the interaction you 
described, we could consult the consistent metadata store instead of sending a 
HEAD request to S3 to determine if the object already exists.

--Chris Nauroth

From: Dave Maughan 
Date: Thursday, October 6, 2016 at 4:07 AM
To: "user@hadoop.apache.org" 
Subject: S3AFileSystem & read-after-write consistency

Hi,

I'm investigating S3's read-after-write consistency model with S3AFileSystem 
and something is not quite clear to me, so I'm hoping someone more 
knowledgeable can clarify it for me.

Amazon state (http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html):

"Amazon S3 provides read-after-write consistency for PUTS of new objects in 
your S3 bucket in all regions with one caveat. The caveat is that if you make a 
HEAD or GET request to the key name (to find if the object exists) before 
creating the object, Amazon S3 provides eventual consistency for 
read-after-write".

In S3AFileSystem, create -> exists -> getFileStatus -> AmazonS3Client.getObjectMetadata (HEAD).
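
For concreteness, a hedged sketch of the client-side sequence that exercises
that path (the bucket and key are hypothetical, and fs.s3a.* credentials are
assumed to be configured elsewhere):

import java.net.URI;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class S3ACreateSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(URI.create("s3a://my-bucket/"), conf);
    Path key = new Path("s3a://my-bucket/new-object.txt");
    // create() first checks whether the key already exists
    // (exists -> getFileStatus -> HEAD), which is the pre-creation HEAD that
    // Amazon's caveat describes.
    try (FSDataOutputStream out = fs.create(key, /* overwrite */ false)) {
      out.write("hello".getBytes(StandardCharsets.UTF_8));
    }
    // In theory a read straight after the write could still miss the object,
    // because that earlier HEAD on the not-yet-existing key may be negatively cached.
    System.out.println("visible after write? " + fs.exists(key));
  }
}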

Does this mean that currently, S3AFileSystem cannot take advantage of S3's 
read-after-write consistency?

Thanks
- Dave



Re: Connecting Hadoop HA cluster via java client

2016-10-18 Thread Pushparaj Motamari
Hi,

Following are not required I guess. I am able to connect to cluster without
these. Is there any reason to include them?

dfs.namenode.http-address.${dfs.nameservices}.nn1

dfs.namenode.http-address.${dfs.nameservices}.nn2

Regards

Pushparaj



On Wed, Oct 12, 2016 at 6:39 AM, 권병창  wrote:

> Hi.
>
>
>
> 1. minimal configuration to connect HA namenode is below properties.
>
> zookeeper information does not necessary.
>
>
>
> dfs.nameservices
>
> dfs.ha.namenodes.${dfs.nameservices}
>
> dfs.namenode.rpc-address.${dfs.nameservices}.nn1
>
> dfs.namenode.rpc-address.${dfs.nameservices}.nn2
>
> dfs.namenode.http-address.${dfs.nameservices}.nn1
>
> dfs.namenode.http-address.${dfs.nameservices}.nn2
> dfs.client.failover.proxy.provider.c3=org.apache.hadoop.
> hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
>
>
>
>
>
> 2. client use round robin manner for selecting active namenode.
>
>
>
>
>
> -Original Message-
> *From:* "Pushparaj Motamari"
> *To:* ;
> *Cc:*
> *Sent:* 2016-10-12 (수) 03:20:53
> *Subject:* Connecting Hadoop HA cluster via java client
>
> Hi,
>
> I have two questions pertaining to accessing the hadoop ha cluster from
> java client.
>
> 1. Is  it necessary to supply
>
> conf.set("dfs.ha.automatic-failover.enabled",true);
>
> and
>
> conf.set("ha.zookeeper.quorum","zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181");
>
> in addition to the other properties set in the code below?
>
> private Configuration initHAConf(URI journalURI, Configuration conf) {
>   conf.set(DFSConfigKeys.DFS_NAMENODE_SHARED_EDITS_DIR_KEY,
>   journalURI.toString());
>
>   String address1 = "127.0.0.1:" + NN1_IPC_PORT;
>   String address2 = "127.0.0.1:" + NN2_IPC_PORT;
>   conf.set(DFSUtil.addKeySuffixes(DFS_NAMENODE_RPC_ADDRESS_KEY,
>   NAMESERVICE, NN1), address1);
>   conf.set(DFSUtil.addKeySuffixes(DFS_NAMENODE_RPC_ADDRESS_KEY,
>   NAMESERVICE, NN2), address2);
>   conf.set(DFSConfigKeys.DFS_NAMESERVICES, NAMESERVICE);
>   conf.set(DFSUtil.addKeySuffixes(DFS_HA_NAMENODES_KEY_PREFIX, NAMESERVICE),
>   NN1 + "," + NN2);
>   conf.set(DFS_CLIENT_FAILOVER_PROXY_PROVIDER_KEY_PREFIX + "." + NAMESERVICE,
>   ConfiguredFailoverProxyProvider.class.getName());
>   conf.set("fs.defaultFS", "hdfs://" + NAMESERVICE);
>
>   return conf;}
>
> 2. If we supply zookeeper configuration details as mentioned in the question 
> 1 is it necessary to set the primary and secondary namenode addresses as 
> mentioned in the code above? Since we have
> given zookeeper connection details the client should be able to figure out 
> the active namenode connection details.
>
>
> Regards
>
> Pushparaj
>
>


RE: LeaseExpiredException: No lease on /user/biadmin/analytic‐root/SX5XPWPPDPQH/.

2016-10-18 Thread Brahma Reddy Battula
Can you trace the NameNode logs to check whether this file was deleted/renamed
(maybe the parent folder) before this reducer ran?




--Brahma Reddy Battula

From: Zhang Jianfeng [mailto:jzhang...@gmail.com]
Sent: 18 October 2016 18:55
To: Gaurav Kumar
Cc: user.hadoop; Rakesh Radhakrishnan
Subject: Re: LeaseExpiredException: No lease on 
/user/biadmin/analytic‐root/SX5XPWPPDPQH/.

Thanks Gaurav. For my case, I called the HDFS API to write the reducer result 
into HDFS directly, not using Spark.

2016-10-17 23:24 GMT+08:00 Gaurav Kumar:

Hi,

Please also check for coalesced RDD. I encountered the same error while writing 
a coalesced rdd/df to HDFS. If this is the case, please use repartition instead.

Sent from OnePlus 3

Thanks & Regards,
Gaurav Kumar

On Oct 17, 2016 11:22 AM, "Zhang Jianfeng" wrote:
Thanks Rakesh for your kind help. Actually, during the job only one reducer
result file (for example part-r-2) had this error; the other reducers worked well.

Best Regards,
Jian Feng

2016-10-17 11:49 GMT+08:00 Rakesh Radhakrishnan:
Hi Jian Feng,

Could you please check your code for any possibility of simultaneous access to
the same file? This situation mostly happens when multiple clients try to
access the same file.

Code Reference:- 
https://github.com/apache/hadoop/blob/branch-2.2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L2737
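
To make the failure mode concrete, here is a hedged sketch (path and values are
hypothetical) of one way two clients racing on the same path can produce this
exception:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LeaseRaceSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path p = new Path("/tmp/lease-race-demo");

    // Writer: holds the lease on the file while it is under construction.
    FileSystem writerFs = FileSystem.newInstance(conf);
    FSDataOutputStream out = writerFs.create(p, true);
    out.write(new byte[]{1, 2, 3});

    // A second client deletes (or renames) the same path in the meantime.
    FileSystem otherFs = FileSystem.newInstance(conf);
    otherFs.delete(p, false);

    // The writer's close() now has no lease on the removed file and is
    // expected to fail with a LeaseExpiredException from the NameNode.
    out.close();
  }
}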

Best Regards,
Rakesh
Intel

On Mon, Oct 17, 2016 at 7:16 AM, Zhang Jianfeng wrote:
Hi,

I hit a weird error: on our Hadoop cluster (2.2.0), a LeaseExpiredException is
occasionally thrown.

The stacktrace is as below:


org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /user/biadmin/analytic‐root/SX5XPWPPDPQH/.executions/.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2737)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:2801)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:2783)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:611)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:428)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59586)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
at java.security.AccessController.doPrivileged(AccessController.java:310)
at javax.security.auth.Subject.doAs(Subject.java:573)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1502)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
at org.apache.hadoop.ipc.Client.call(Client.java:1347)
at org.apache.hadoop.ipc.Client.call(Client.java:1300)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at $Proxy7.complete(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
at java.lang.reflect.Method.invoke(Method.java:611)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at $Proxy7.complete(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.complete(ClientNamenodeProtocolTranslatorPB.java:371)
at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:1894)
at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:1881)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:71)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:104)
at java.io.FilterOutputStream.close(FilterOutputStream.java:154)
Any help will be appreciated!

--
Best Regards,
Jian Feng






Re: LeaseExpiredException: No lease on /user/biadmin/analytic‐root/SX5XPWPPDPQH/.

2016-10-18 Thread Zhang Jianfeng
Thanks Gaurav. For my case, I called the HDFS API to write the reducer
result into HDFS directly, not using Spark.

2016-10-17 23:24 GMT+08:00 Gaurav Kumar :

> Hi,
>
> Please also check for coalesced RDD. I encountered the same error while
> writing a coalesced rdd/df to HDFS. If this is the case, please use
> repartition instead.
>
> Sent from OnePlus 3
>
> Thanks & Regards,
> Gaurav Kumar
>
> On Oct 17, 2016 11:22 AM, "Zhang Jianfeng"  wrote:
>
> Thanks Rakesh for your kind help. Actually during the job only one
> reducer result file (for example part-r-2) had this error, other reducers
> worked well.
>
> Best Regards,
> Jian Feng
>
> 2016-10-17 11:49 GMT+08:00 Rakesh Radhakrishnan :
>
>> Hi Jian Feng,
>>
>> Could you please check your code and see any possibilities of
>> simultaneous access to the same file. Mostly this situation happens when
>> multiple clients tries to access the same file.
>>
>> Code Reference:- https://github.com/apache/hadoop/blob/branch-2.2
>> /hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/ha
>> doop/hdfs/server/namenode/FSNamesystem.java#L2737
>>
>> Best Regards,
>> Rakesh
>> Intel
>>
>> On Mon, Oct 17, 2016 at 7:16 AM, Zhang Jianfeng 
>> wrote:
>>
>>> Hi ,
>>>
>>> I hit an wired error. On our hadoop cluster (2.2.0), occasionally a
>>> LeaseExpiredException is thrown.
>>>
>>> The stacktrace is as below:
>>>
>>>
>>> *org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
>>> No lease on /user/biadmin/analytic‐root/SX5XPWPPDPQH/.executions/.at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2737)*
>>>
>>> *at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:2801)*
>>>
>>> *at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:2783)*
>>>
>>> *at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.com
>>> plete(NameNodeRpcServer.java:611)*
>>>
>>> *at
>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:428)*
>>>
>>> *at
>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59586)*
>>>
>>> *at
>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)*
>>>
>>> *at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)*
>>>
>>> *at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)*
>>>
>>> *at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)*
>>>
>>> *at
>>> java.security.AccessController.doPrivileged(AccessController.java:310)*
>>>
>>> *at javax.security.auth.Subject.do
>>> As(Subject.java:573)*
>>>
>>> *at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1502)*
>>>
>>> *at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)*
>>>
>>> *at org.apache.hadoop.ipc.Client.call(Client.java:1347)*
>>>
>>> *at org.apache.hadoop.ipc.Client.call(Client.java:1300)*
>>>
>>> *at
>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)*
>>>
>>> *at $Proxy7.complete(Unknown Source)*
>>>
>>> *at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)*
>>>
>>> *at
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)*
>>>
>>> *at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)*
>>>
>>> at java.lang.reflect.Method.invoke(Method.java:611)
>>>
>>> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMeth
>>> od(RetryInvocationHandler.java:186)
>>>
>>> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(Ret
>>> ryInvocationHandler.java:102)
>>>
>>> at $Proxy7.complete(Unknown Source)
>>>
>>> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTran
>>> slatorPB.complete(ClientNamenodeProtocolTranslatorPB.java:371)
>>>
>>> at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutpu
>>> tStream.java:1894)
>>>
>>> at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream
>>> .java:1881)
>>>
>>> at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(
>>> FSDataOutputStream.java:71)
>>>
>>> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputSt
>>> ream.java:104)
>>>
>>> at java.io.FilterOutputStream.close(FilterOutputStream.java:154)
>>>
>>> Any help will be appreciated!
>>>
>>> --
>>> Best Regards,
>>> Jian Feng
>>>
>>
>>
>
>
> --
> Best Regards,
> Jian Feng
>
>
>


-- 
Best Regards,
Jian Feng