[jira] [Created] (HDFS-15572) RBF: Quota is updated only once every minute; if I set a quota of 50 on a mount path, I can write more files than the quota within that minute.

2020-09-11 Thread Harshakiran Reddy (Jira)
Harshakiran Reddy created HDFS-15572:


 Summary: RBF: Quota is updated only once every minute; if I set a 
quota of 50 on a mount path, I can write more files than the quota within 
that minute. 
 Key: HDFS-15572
 URL: https://issues.apache.org/jira/browse/HDFS-15572
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: rbf
Affects Versions: 3.1.1
Reporter: Harshakiran Reddy


In the Router, the State Store quota is refreshed only once every minute. If I 
set a quota of 50 on a mount path, I can write more than the quota into the 
mount path before the refresh, so the quota is not enforced.


{noformat}
1. Create destination directories in the namespaces
2. Create a mount path with multiple destinations
3. Set a quota of 50 on the mount path
4. Write more than 50 files to the mount path within a minute (see the sketch below)
{noformat}
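
A minimal shell sketch of the repro (the mount path, quota value, and file names are illustrative assumptions):
{noformat}
# assumes a mount point /mnt with multiple destinations already exists
./hdfs dfsrouteradmin -setQuota /mnt -nsQuota 50
# within the ~1 min State Store refresh window, these writes succeed past the quota
for i in $(seq 1 60); do ./hdfs dfs -put localfile /mnt/file$i; done
{noformat}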

*Expected Result:-*
After the quota is set, the mount path should never allow writes beyond it.







[jira] [Created] (HDFS-15571) RBF: When we set a quota on a mount path that has 4 destinations, the count command shows a directory count of 4; it should be 1, matching the mount entry.

2020-09-11 Thread Harshakiran Reddy (Jira)
Harshakiran Reddy created HDFS-15571:


 Summary: RBF: When we set a quota on a mount path that has 4 
destinations, the count command shows a directory count of 4; it should be 1, 
matching the mount entry.
 Key: HDFS-15571
 URL: https://issues.apache.org/jira/browse/HDFS-15571
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: rbf
Affects Versions: 3.1.1
 Environment: 
{code:java}
1. Create directories in 4 nameservices.
2. Create a mount path with 4 destinations with order HASH_ALL
{code}

Reporter: Harshakiran Reddy


When we set a quota on a mount path that has 4 destinations, the count command 
shows a directory count of 4; from the end user's perspective it should be 1.
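
A hedged illustration of the observed output (the mount path /mnt4 is hypothetical):
{noformat}
./hdfs dfs -count -v /mnt4
# DIR_COUNT is reported as 4 (one per destination) instead of 1
{noformat}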






[jira] [Created] (HDFS-15543) RBF: Writes should be allowed when a subcluster is unavailable for RANDOM mount points with fault tolerance enabled.

2020-08-27 Thread Harshakiran Reddy (Jira)
Harshakiran Reddy created HDFS-15543:


 Summary: RBF: Writes should be allowed when a subcluster is 
unavailable for RANDOM mount points with fault tolerance enabled. 
 Key: HDFS-15543
 URL: https://issues.apache.org/jira/browse/HDFS-15543
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: rbf
Affects Versions: 3.1.1
 Environment: FI_MultiDestination_client]# *hdfs dfsrouteradmin -ls /test_ec*
*Mount Table Entries:*
Source      Destinations                              Owner  Group     Mode       Quota/Usage
*/test_ec*  *hacluster->/tes_ec,hacluster1->/tes_ec*  test   ficommon  rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]

Reporter: Harshakiran Reddy


A RANDOM mount point should allow creating new files when one subcluster is 
down and fault tolerance is enabled, but here it fails.
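
A hedged sketch of how such a mount point would be created (the -faulttolerant flag is assumed to be available in this build):
{noformat}
./hdfs dfsrouteradmin -add /test_ec hacluster,hacluster1 /tes_ec -order RANDOM -faulttolerant
{noformat}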

*File write threw the exception:-*

2020-08-26 19:13:21,839 WARN hdfs.DataStreamer: Abandoning blk_1073743375_2551
 2020-08-26 19:13:21,877 WARN hdfs.DataStreamer: Excluding datanode 
DatanodeInfoWithStorage[DISK]
 2020-08-26 19:13:21,878 WARN hdfs.DataStreamer: DataStreamer Exception
 java.io.IOException: Unable to create new block.
 at 
org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1758)
 at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:718)
 2020-08-26 19:13:21,879 WARN hdfs.DataStreamer: Could not get block locations. 
Source file "/test_ec/f1._COPYING_" - Aborting...block==null
 put: Could not get block locations. Source file "/test_ec/f1._COPYING_" - 
Aborting...block==null






[jira] [Created] (HDFS-15093) RENAME.TO_TRASH is ignored when RENAME.OVERWRITE is specified

2020-01-06 Thread Harshakiran Reddy (Jira)
Harshakiran Reddy created HDFS-15093:


 Summary: RENAME.TO_TRASH is ignored when RENAME.OVERWRITE is 
specified
 Key: HDFS-15093
 URL: https://issues.apache.org/jira/browse/HDFS-15093
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Harshakiran Reddy


When the Rename.OVERWRITE flag is specified, the Rename.TO_TRASH option is 
silently ignored.






[jira] [Created] (HDFS-14808) EC: Improper size values for corrupt ec block in LOG

2019-08-31 Thread Harshakiran Reddy (Jira)
Harshakiran Reddy created HDFS-14808:


 Summary: EC: Improper size values for corrupt ec block in LOG 
 Key: HDFS-14808
 URL: https://issues.apache.org/jira/browse/HDFS-14808
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Harshakiran Reddy


If the reason a block is marked corrupt is a size mismatch, the values shown 
and compared in the log are ambiguous.






[jira] [Created] (HDFS-14807) SetTimes updates all negative values apart from -1

2019-08-31 Thread Harshakiran Reddy (Jira)
Harshakiran Reddy created HDFS-14807:


 Summary: SetTimes updates all negative values apart from -1
 Key: HDFS-14807
 URL: https://issues.apache.org/jira/browse/HDFS-14807
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Harshakiran Reddy


The setTimes API applies every negative time value except -1; only -1 is 
treated as "leave unchanged", while other negative values are stored instead 
of being rejected.
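
For context, a hedged WebHDFS illustration of the documented -1 semantics (NameNode host and path are placeholders); the bug report is that the underlying setTimes API also accepts other negative values instead of rejecting them:
{noformat}
# -1 leaves the corresponding time unchanged
curl -X PUT "http://<nn>:9870/webhdfs/v1/tmp/f?op=SETTIMES&modificationtime=1559294343288&accesstime=-1"
{noformat}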






[jira] [Resolved] (HDFS-14636) SBN: Even with the default proxy provider configured, read requests still go to the Observer NameNode only.

2019-07-22 Thread Harshakiran Reddy (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harshakiran Reddy resolved HDFS-14636.
--
Resolution: Duplicate

Resolved as duplicate; HDFS-14660 has further discussion on this.

> SBN: Even with the default proxy provider configured, read requests still go 
> to the Observer NameNode only.
> -
>
> Key: HDFS-14636
> URL: https://issues.apache.org/jira/browse/HDFS-14636
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.1
>Reporter: Harshakiran Reddy
>Assignee: Ranith Sardar
>Priority: Major
>  Labels: SBN
>
> {noformat}
> In an Observer cluster, even when the default proxy provider is configured 
> instead of "org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider", 
> read requests still go to the Observer NameNode only.{noformat}






[jira] [Created] (HDFS-14655) SBN: NameNode crashes if one of the JNs is down

2019-07-15 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-14655:


 Summary: SBN: NameNode crashes if one of the JNs is down
 Key: HDFS-14655
 URL: https://issues.apache.org/jira/browse/HDFS-14655
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Harshakiran Reddy



{noformat}
2019-07-04 17:35:54,064 | INFO  | Logger channel (from parallel executor) to 
XXX/XXX | Retrying connect to server: XXX/XXX. Already tried 9 
time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, 
sleepTime=1000 MILLISECONDS) | Client.java:975
2019-07-04 17:35:54,087 | FATAL | Edit log tailer | Unknown error encountered 
while tailing edits. Shutting down standby NN. | EditLogTailer.java:474
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:717)
at 
java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
at 
java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1378)
at 
com.google.common.util.concurrent.MoreExecutors$ListeningDecorator.execute(MoreExecutors.java:440)
at 
com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:56)
at 
org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.getJournaledEdits(IPCLoggerChannel.java:565)
at 
org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.getJournaledEdits(AsyncLoggerSet.java:272)
at 
org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectRpcInputStreams(QuorumJournalManager.java:533)
at 
org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:508)
at 
org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:275)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1681)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1714)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:307)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$300(EditLogTailer.java:410)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
at 
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:483)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
2019-07-04 17:35:54,112 | INFO  | Edit log tailer | Exiting with status 1: 
java.lang.OutOfMemoryError: unable to create new native thread | 
ExitUtil.java:210
{noformat}







[jira] [Created] (HDFS-14636) SBN: Even with the default proxy provider configured, read requests still go to the Observer NameNode only.

2019-07-08 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-14636:


 Summary: SBN: Even with the default proxy provider configured, 
read requests still go to the Observer NameNode only.
 Key: HDFS-14636
 URL: https://issues.apache.org/jira/browse/HDFS-14636
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.1.1
Reporter: Harshakiran Reddy


{noformat}
In an Observer cluster, even when the default proxy provider is configured 
instead of "org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider", 
read requests still go to the Observer NameNode only.{noformat}






[jira] [Created] (HDFS-14529) NPE while Loading the Editlogs

2019-05-31 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-14529:


 Summary: NPE while Loading the Editlogs
 Key: HDFS-14529
 URL: https://issues.apache.org/jira/browse/HDFS-14529
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 3.1.1
Reporter: Harshakiran Reddy


{noformat}
2019-05-31 15:15:42,397 ERROR namenode.FSEditLogLoader: Encountered exception 
on operation TimesOp [length=0, 
path=/testLoadSpace/dir0/dir0/dir0/dir2/_file_9096763, mtime=-1, 
atime=1559294343288, opCode=OP_TIMES, txid=18927893]
java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.unprotectedSetTimes(FSDirAttrOp.java:490)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:711)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:286)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:181)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:924)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:771)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:331)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1105)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:726)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.doRecovery(NameNode.java:1558)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1640)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1725){noformat}






[jira] [Reopened] (HDFS-14077) DFSAdmin report datanode count does not match when datanodes are in Decommissioned state

2019-04-15 Thread Harshakiran Reddy (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harshakiran Reddy reopened HDFS-14077:
--

> DFSAdmin report datanode count does not match when datanodes are in 
> Decommissioned state
> -
>
> Key: HDFS-14077
> URL: https://issues.apache.org/jira/browse/HDFS-14077
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.1.1
>Reporter: Harshakiran Reddy
>Assignee: Ranith Sardar
>Priority: Major
>
> {noformat}
> The live datanode count shown in the DFSAdmin report is incorrect when some 
> datanodes are in Decommissioned state
> {noformat}






[jira] [Resolved] (HDFS-14077) DFSAdmin report datanode count does not match when datanodes are in Decommissioned state

2019-04-15 Thread Harshakiran Reddy (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harshakiran Reddy resolved HDFS-14077.
--
Resolution: Not A Problem

> DFSAdmin report datanode count does not match when datanodes are in 
> Decommissioned state
> -
>
> Key: HDFS-14077
> URL: https://issues.apache.org/jira/browse/HDFS-14077
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.1.1
>Reporter: Harshakiran Reddy
>Assignee: Ranith Sardar
>Priority: Major
>
> {noformat}
> The live datanode count shown in the DFSAdmin report is incorrect when some 
> datanodes are in Decommissioned state
> {noformat}






[jira] [Resolved] (HDFS-14077) DFSAdmin report datanode count does not match when datanodes are in Decommissioned state

2019-04-15 Thread Harshakiran Reddy (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harshakiran Reddy resolved HDFS-14077.
--
Resolution: Fixed

> DFSAdmin report datanode count does not match when datanodes are in 
> Decommissioned state
> -
>
> Key: HDFS-14077
> URL: https://issues.apache.org/jira/browse/HDFS-14077
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.1.1
>Reporter: Harshakiran Reddy
>Assignee: Ranith Sardar
>Priority: Major
>
> {noformat}
> The live datanode count shown in the DFSAdmin report is incorrect when some 
> datanodes are in Decommissioned state
> {noformat}






[jira] [Created] (HDFS-14266) EC: Unable to get datanode info for EC blocks if one block is not available.

2019-02-11 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-14266:


 Summary: EC: Unable to get datanode info for EC blocks if one 
block is not available.
 Key: HDFS-14266
 URL: https://issues.apache.org/jira/browse/HDFS-14266
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Harshakiran Reddy


If one block is removed from the block group, the datanode information for the 
block group shows null.

 
{noformat}
Block Id: blk_-9223372036854775792
Block belongs to: /ec/file1
No. of Expected Replica: 2
No. of live Replica: 2
No. of excess Replica: 0
No. of stale Replica: 0
No. of decommissioned Replica: 0
No. of decommissioning Replica: 0
No. of corrupted Replica: 0
null

{noformat}
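
A hedged sketch of the block query that produces output like the above (the block ID is taken from the example):
{noformat}
./hdfs fsck -blockId blk_-9223372036854775792
{noformat}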






[jira] [Created] (HDFS-14265) Extra TATs are printed in the WebHDFS output

2019-02-10 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-14265:


 Summary: Extra TATs are printed in the WebHDFS output
 Key: HDFS-14265
 URL: https://issues.apache.org/jira/browse/HDFS-14265
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 3.1.1
Reporter: Harshakiran Reddy


{noformat}
bin> curl -i -k -X PUT --negotiate -u: 
"http://NNIP:9864/webhdfs/v1/file1?op=CREATE=hacluster1==true=false;
HTTP/1.1 100 Continue

HTTP/1.1 403 Forbidden
Content-Type: application/json; charset=utf-8
Content-Length: 2110
Connection: close

{"RemoteException":{"exception":"AccessControlException","javaClassName":"org.apache.hadoop.security.AccessControlException","message":"Permission
 denied: user=dr.who, access=WRITE, 
inode=\"/\":securedn:supergroup:drwxr-xr-x\n\tat 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:399)\n\tat
 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:255)\n\tat
 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:193)\n\tat
 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1904)\n\tat
 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1888)\n\tat
 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1847)\n\tat
 
org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.resolvePathForStartFile(FSDirWriteFileOp.java:376)\n\tat
 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2418)\n\tat
 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2362)\n\tat
 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:775)\n\tat
 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:490)\n\tat
 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat
 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)\n\tat
 org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)\n\tat 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)\n\tat 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)\n\tat 
java.security.AccessController.doPrivileged(Native Method)\n\tat 
javax.security.auth.Subject.doAs(Subject.java:422)\n\tat 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)\n\tat
 org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)\n"}}
/bin>
{noformat}






[jira] [Created] (HDFS-14257) RBF: NPE when an invalid path is given to create the target dir

2019-02-05 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-14257:


 Summary: RBF: NPE when an invalid path is given to create the target dir
 Key: HDFS-14257
 URL: https://issues.apache.org/jira/browse/HDFS-14257
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Harshakiran Reddy



bin> ./hdfs dfs -mkdir hdfs://{color:red}hacluster2 /hacluster1{color}dest2/
{noformat}
-mkdir: Fatal internal error
java.lang.NullPointerException
at org.apache.hadoop.fs.FileSystem.fixRelativePart(FileSystem.java:2714)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.fixRelativePart(DistributedFileSystem.java:3229)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1618)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1742)
at 
org.apache.hadoop.fs.shell.Mkdir.processNonexistentPath(Mkdir.java:74)
at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:287)
at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:269)
at 
org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:121)
at org.apache.hadoop.fs.shell.Command.run(Command.java:176)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:391)
bin>
{noformat}






[jira] [Created] (HDFS-14255) Tail Follow Interval Should Allow Specifying The Sleep Interval To Save Unnecessary RPCs

2019-02-04 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-14255:


 Summary: Tail Follow Interval Should Allow Specifying The Sleep 
Interval To Save Unnecessary RPCs 
 Key: HDFS-14255
 URL: https://issues.apache.org/jira/browse/HDFS-14255
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Harshakiran Reddy


As of now, tail -f polls every 5 seconds. We should allow a parameter to 
specify this sleep interval; Linux tail makes this configurable via its -s 
parameter (see the sketch below).
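
A hedged sketch of the proposal (GNU tail's -s is existing behavior; the -s option for hdfs dfs -tail is hypothetical):
{noformat}
# GNU tail: poll every 10 seconds instead of the default 1
tail -f -s 10 app.log
# proposed (hypothetical) HDFS equivalent
./hdfs dfs -tail -f -s 10 /logs/app.log
{noformat}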






[jira] [Created] (HDFS-14197) Invalid exit codes for invalid fsck report

2019-01-10 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-14197:


 Summary: Invalid exit codes for invalid fsck report 
 Key: HDFS-14197
 URL: https://issues.apache.org/jira/browse/HDFS-14197
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Harshakiran Reddy



{noformat}
bin> ./hdfs fsck /harsha/file1 -blocks -files -locations
FileSystem is inaccessible due to:
java.io.FileNotFoundException: File does not exist: /harsha/file1
DFSck exiting.
bin> echo $?
0
bin>
{noformat}

Expected Result: 

The exit code should be 1.







[jira] [Created] (HDFS-14178) Crypto admin should support ViewFs

2018-12-28 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-14178:


 Summary: Crypto admin should support ViewFs
 Key: HDFS-14178
 URL: https://issues.apache.org/jira/browse/HDFS-14178
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: federation
Reporter: Harshakiran Reddy



{noformat}
bin>./hdfs crypto -getFileEncryptionInfo -path /src
IllegalArgumentException: FileSystem viewfs://ClusterX/ is not an HDFS file 
system
{noformat}







[jira] [Created] (HDFS-14163) Debug Admin Command Should Support Generic Options.

2018-12-20 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-14163:


 Summary: Debug Admin Command Should Support Generic Options.
 Key: HDFS-14163
 URL: https://issues.apache.org/jira/browse/HDFS-14163
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Harshakiran Reddy


{noformat}
namenode> ./bin/hdfs debug -fs hdfs://hacluster recoverLease -path /dest/file2
Usage: hdfs debug  [arguments]

These commands are for advanced users only.

Incorrect usages may result in data loss. Use at your own risk.

verifyMeta -meta  [-block ]
computeMeta -block  -out 
recoverLease -path  [-retries ]
namenode>
{noformat}






[jira] [Created] (HDFS-14157) RBF: refreshServiceAcl command fails with Router

2018-12-17 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-14157:


 Summary: RBF: refreshServiceAcl command fails with Router
 Key: HDFS-14157
 URL: https://issues.apache.org/jira/browse/HDFS-14157
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.1.1
Reporter: Harshakiran Reddy


{noformat}
namenode> ./bin/hdfs dfsadmin -refreshServiceAcl
Refresh service acl failed for host:
Refresh service acl failed for host:
refreshServiceAcl: 2 exceptions 
[org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RpcNoSuchProtocolException):
 Unknown protocol: 
org.apache.hadoop.security.authorize.RefreshAuthorizationPolicyProtocol
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.getProtocolImpl(ProtobufRpcEngine.java:444)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:502)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
, java.net.ConnectException: Call From host1 to host2: failed on connection 
exception: java.net.ConnectException: Connection refused; For more details see: 
 http://wiki.apache.org/hadoop/ConnectionRefused]
namenode>
{noformat}







[jira] [Created] (HDFS-14156) RBF: rollEdits command fails with Router

2018-12-17 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-14156:


 Summary: RBF: rollEdits command fails with Router
 Key: HDFS-14156
 URL: https://issues.apache.org/jira/browse/HDFS-14156
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.1.1
Reporter: Harshakiran Reddy


{noformat}
bin> ./hdfs dfsadmin -rollEdits
rollEdits: Cannot cast java.lang.Long to long
bin>
{noformat}

Trace :-
{noformat}
org.apache.hadoop.ipc.RemoteException(java.lang.ClassCastException): Cannot 
cast java.lang.Long to long
at java.lang.Class.cast(Class.java:3369)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1085)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:982)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol.rollEdits(RouterClientProtocol.java:900)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.rollEdits(RouterRpcServer.java:862)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rollEdits(ClientNamenodeProtocolServerSideTranslatorPB.java:899)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)

at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1520)
at org.apache.hadoop.ipc.Client.call(Client.java:1466)
at org.apache.hadoop.ipc.Client.call(Client.java:1376)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy11.rollEdits(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.rollEdits(ClientNamenodeProtocolTranslatorPB.java:804)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy12.rollEdits(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.rollEdits(DFSClient.java:2350)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.rollEdits(DistributedFileSystem.java:1550)
at org.apache.hadoop.hdfs.tools.DFSAdmin.rollEdits(DFSAdmin.java:850)
at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:2353)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.hdfs.tools.DFSAdmin.main(DFSAdmin.java:2568)
{noformat}






[jira] [Resolved] (HDFS-14143) RBF: After clrQuota, the mount point does not allow creating new files

2018-12-13 Thread Harshakiran Reddy (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harshakiran Reddy resolved HDFS-14143.
--
Resolution: Duplicate

> RBF: After clrQuota, the mount point does not allow creating new files 
> 
>
> Key: HDFS-14143
> URL: https://issues.apache.org/jira/browse/HDFS-14143
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Harshakiran Reddy
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
>
> {noformat}
> bin> ./hdfs dfsrouteradmin -setQuota /src10 -nsQuota 3
> Successfully set quota for mount point /src10
> bin> ./hdfs dfsrouteradmin -clrQuota /src10
> Successfully clear quota for mount point /src10
> bin> ./hdfs dfs -put harsha /dest10/file1
> bin> ./hdfs dfs -put harsha /dest10/file2
> bin> ./hdfs dfs -put harsha /dest10/file3
> put: The NameSpace quota (directories and files) of directory /dest10 is 
> exceeded: quota=3 file count=4
> bin> ./hdfs dfsrouteradmin -ls /src10
> Mount Table Entries:
> Source    Destinations          Owner  Group   Mode       Quota/Usage
> /src10    hacluster->/dest10    hdfs   hadoop  rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]
> bin>
> {noformat}






[jira] [Created] (HDFS-14143) RBF: After clrQuota, the mount point does not allow creating new files

2018-12-12 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-14143:


 Summary: RBF: After clrQuota, the mount point does not allow 
creating new files 
 Key: HDFS-14143
 URL: https://issues.apache.org/jira/browse/HDFS-14143
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.1.1
Reporter: Harshakiran Reddy


{noformat}
bin> ./hdfs dfsrouteradmin -setQuota /src10 -nsQuota 3
Successfully set quota for mount point /src10
bin> ./hdfs dfsrouteradmin -clrQuota /src10
Successfully clear quota for mount point /src10
bin> ./hdfs dfs -put harsha /dest10/file1
bin> ./hdfs dfs -put harsha /dest10/file2
bin> ./hdfs dfs -put harsha /dest10/file3
put: The NameSpace quota (directories and files) of directory /dest10 is 
exceeded: quota=3 file count=4
bin> ./hdfs dfsrouteradmin -ls /src10
Mount Table Entries:
Source    Destinations          Owner  Group   Mode       Quota/Usage
/src10    hacluster->/dest10    hdfs   hadoop  rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]
bin>

{noformat}






[jira] [Created] (HDFS-14087) RBF: In the Router UI, the NameNode heartbeat prints negative values

2018-11-18 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-14087:


 Summary: RBF: In the Router UI, the NameNode heartbeat prints 
negative values 
 Key: HDFS-14087
 URL: https://issues.apache.org/jira/browse/HDFS-14087
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.1.1
Reporter: Harshakiran Reddy









[jira] [Created] (HDFS-14077) DFSAdmin report datanode count does not match when datanodes are in Decommissioned state

2018-11-14 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-14077:


 Summary: DFSAdmin report datanode count does not match when 
datanodes are in Decommissioned state
 Key: HDFS-14077
 URL: https://issues.apache.org/jira/browse/HDFS-14077
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.1.1
Reporter: Harshakiran Reddy


{noformat}
The live datanode count shown in the DFSAdmin report is incorrect when some 
datanodes are in Decommissioned state
{noformat}






[jira] [Created] (HDFS-14066) upgradeDomain: Datanode shown as stopped after re-configuring the datanodes in the upgrade domain script file

2018-11-12 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-14066:


 Summary: upgradeDomain: Datanode shown as stopped after 
re-configuring the datanodes in the upgrade domain script file 
 Key: HDFS-14066
 URL: https://issues.apache.org/jira/browse/HDFS-14066
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.1.1
Reporter: Harshakiran Reddy


{{Steps:-}}

{noformat}
1. Create 3 upgrade domain groups with 2 datanodes each:
   UD1->DN1,DN2
   UD2->DN3,DN4
   UD3->DN5,DN6
2. Remove DN4 and DN6 from the JSON script file (a sample is sketched below)
3. Verify the status of DN4 and DN6
4. Add those 2 datanodes back to their respective upgrade domains
5. Verify the status of DN4 and DN6 again
{noformat}
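
A hedged sample of the combined-hosts JSON file used for upgrade domains (hostnames and domain names are illustrative; this format assumes dfs.namenode.hosts.provider.classname is set to the CombinedHostFileManager):
{noformat}
[
  {"hostName": "DN1", "upgradeDomain": "UD1"},
  {"hostName": "DN2", "upgradeDomain": "UD1"},
  {"hostName": "DN3", "upgradeDomain": "UD2"},
  {"hostName": "DN5", "upgradeDomain": "UD3"}
]
{noformat}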

{{Actual Output:-}}
{noformat}
The datanode status shows as stopped on the Datanode UI page, but the datanode 
service is running on that node
{noformat}

{{Expected Output:-}}
{noformat}
The datanodes should show as running, and when the 2 datanodes are 
re-configured, the change should take effect properly
{noformat}






[jira] [Created] (HDFS-14046) There is no icon for Maintenance on the Datanode UI page, and after a datanode moves into a Maintenance state, its mark status is still empty in the Datanode UI.

2018-11-02 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-14046:


 Summary: There is no icon for Maintenance on the Datanode UI page, 
and after a datanode moves into a Maintenance state, its mark status is still 
empty in the Datanode UI.
 Key: HDFS-14046
 URL: https://issues.apache.org/jira/browse/HDFS-14046
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.1.1
Reporter: Harshakiran Reddy









[jira] [Created] (HDFS-13918) A raw exception is printed to the console when an invalid IPC port is given to the triggerBlockReport command

2018-09-17 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-13918:


 Summary: A raw exception is printed to the console when an invalid 
IPC port is given to the triggerBlockReport command
 Key: HDFS-13918
 URL: https://issues.apache.org/jira/browse/HDFS-13918
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Harshakiran Reddy


{noformat}
/namenode/bin > ./hdfs dfsadmin -triggerBlockReport Datanodeip:9864
2018-09-17 12:58:43,866 WARN net.NetUtils: Unable to wrap exception of type 
class org.apache.hadoop.ipc.RpcException: it has no (String) constructor
java.lang.NoSuchMethodException: 
org.apache.hadoop.ipc.RpcException.<init>(java.lang.String)
at java.lang.Class.getConstructor0(Class.java:3082)
at java.lang.Class.getConstructor(Class.java:1825)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:830)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1503)
at org.apache.hadoop.ipc.Client.call(Client.java:1445)
at org.apache.hadoop.ipc.Client.call(Client.java:1355)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy8.triggerBlockReport(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.triggerBlockReport(ClientDatanodeProtocolTranslatorPB.java:327)
at 
org.apache.hadoop.hdfs.tools.DFSAdmin.triggerBlockReport(DFSAdmin.java:732)
at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:2406)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.hdfs.tools.DFSAdmin.main(DFSAdmin.java:2567)
triggerBlockReport error: java.io.IOException: Failed on local exception: 
org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length; 
Host Details : local host is: "Datanodeip/Datanodeip"; destination host is: 
"datanodeip":9864;
namenode/bin >
{noformat}

{{Actual Output:-}}

The console prints the raw exception below:

{color:#14892c}2018-09-17 12:58:43,866 WARN net.NetUtils: Unable to wrap 
exception of type class org.apache.hadoop.ipc.RpcException: it has no (String) 
constructor
java.lang.NoSuchMethodException: 
org.apache.hadoop.ipc.RpcException.<init>(java.lang.String){color}

{{Expected Output:-}}

The raw exception should not be printed to the console.
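
For reference, a hedged sketch of the command with the datanode IPC port (9867 is the default datanode IPC port in Hadoop 3.x; 9864 is the HTTP port that triggers this error):
{noformat}
./hdfs dfsadmin -triggerBlockReport <datanode_host>:9867
{noformat}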








[jira] [Created] (HDFS-13897) DiskBalancer: print a WARN message in the console output for invalid configurations when executing DiskBalancer commands.

2018-09-06 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-13897:


 Summary: DiskBalancer: print a WARN message in the console output 
for invalid configurations when executing DiskBalancer commands.
 Key: HDFS-13897
 URL: https://issues.apache.org/jira/browse/HDFS-13897
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: diskbalancer
Reporter: Harshakiran Reddy


{{Scenario:-}}

1. Configure an invalid value for any disk balancer configuration and restart 
the Datanode
2. Run the disk balancer commands

{{Actual output:-}}

The commands silently continue with the default configuration.

{{Expected output:-}} 

A WARN message should be printed to the console, such as *configured value is 
invalid; taking the default value for this configuration*, so the user knows 
their configuration did not take effect for the current disk balancer run; 
otherwise they will assume it was applied.






[jira] [Created] (HDFS-13893) DiskBalancer: no validations for Disk balancer commands

2018-09-03 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-13893:


 Summary: DiskBalancer: no validations for Disk balancer commands 
 Key: HDFS-13893
 URL: https://issues.apache.org/jira/browse/HDFS-13893
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: diskbalancer
Reporter: Harshakiran Reddy


{{Scenario:-}}

1. Run the disk balancer commands with extra arguments:

{noformat} 
hadoopclient> hdfs diskbalancer -plan hostname --thresholdPercentage 2 
*sgfsdgfs*
2018-08-31 14:57:35,454 INFO planner.GreedyPlanner: Starting plan for Node : 
hostname:50077
2018-08-31 14:57:35,457 INFO planner.GreedyPlanner: Disk Volume set 
fb67f00c-e333-4f38-a3a6-846a30d4205a Type : DISK plan completed.
2018-08-31 14:57:35,457 INFO planner.GreedyPlanner: Compute Plan for Node : 
hostname:50077 took 23 ms
2018-08-31 14:57:35,457 INFO command.Command: Writing plan to:
2018-08-31 14:57:35,457 INFO command.Command: 
/system/diskbalancer/2018-Aug-31-14-57-35/hostname.plan.json
Writing plan to:
/system/diskbalancer/2018-Aug-31-14-57-35/hostname.plan.json
{noformat} 

Expected Output:- 
=
Disk balancer commands should fail if any invalid or extra arguments are 
passed.






[jira] [Created] (HDFS-13892) Disk Balancer: Invalid exit code for the disk balancer execute command

2018-09-03 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-13892:


 Summary: Disk Balancer: Invalid exit code for the disk balancer 
execute command
 Key: HDFS-13892
 URL: https://issues.apache.org/jira/browse/HDFS-13892
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: diskbalancer
Reporter: Harshakiran Reddy


{{Scenario:-}}

1. Write some 5GB of data with one disk
2. Add one more non-empty disk to the above datanode
3. Run the plan command for that specific datanode
4. Run the execute command with the above plan file; the execution does not 
happen, per the datanode log:
{noformat}
ERROR org.apache.hadoop.hdfs.server.datanode.DiskBalancer: Destination volume: 
file:/Test_Disk/DISK2/ does not have enough space to accommodate a block. Block 
Size: 268435456 Exiting from copyBlocks.
{noformat}
5. Check the exit code of the execute command; it displays 0

{{Expected Result:-}}

1. The exit code should be 1, since the execute command did not take effect
2. In this scenario, the error message should be printed to the console so the 
user knows the execution did not happen







[jira] [Created] (HDFS-13841) RBF: After clicking another tab on the Router federation UI page, it does not redirect to the new tab

2018-08-21 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-13841:


 Summary: RBF: After clicking another tab on the Router federation 
UI page, it does not redirect to the new tab 
 Key: HDFS-13841
 URL: https://issues.apache.org/jira/browse/HDFS-13841
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: federation
Affects Versions: 3.1.0
Reporter: Harshakiran Reddy


{{Scenario :-}}

1. Open the federation UI
2. Click different tabs such as {{Mount Tables}}, {{Router}}, or {{Subcluster}}
3. Clicking the mount table tab (or any other tab) should redirect to that 
page, but it does not; the redirect only happens after refreshing the page or 
pressing Enter.

{{Expected Result :-}}

It should redirect to the clicked tab's page immediately.






[jira] [Created] (HDFS-13817) HDFSRouterFederation: a mount point created with the RANDOM policy and 2 nameservices does not work properly

2018-08-10 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-13817:


 Summary: HDFSRouterFederation: a mount point created with the 
RANDOM policy and 2 nameservices does not work properly 
 Key: HDFS-13817
 URL: https://issues.apache.org/jira/browse/HDFS-13817
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: federation
Reporter: Harshakiran Reddy


{{Scenario:-}} 

# Create a mount point with the RANDOM policy and 2 nameservices.
# List the target mount path of the global path.

Actual Output: 
=== 
{{ls: `/apps5': No such file or directory}}

Expected Output: 
=

{{If files are available, list them; if the path is empty, display nothing.}}

{noformat} 
bin> ./hdfs dfsrouteradmin -add /apps5 hacluster,ns2 /tmp10 -order RANDOM 
-owner securedn -group hadoop
Successfully added mount point /apps5
bin> ./hdfs dfs -ls /apps5
ls: `/apps5': No such file or directory
bin> ./hdfs dfs -ls /apps3
Found 2 items
drwxrwxrwx   - user group 0 2018-08-09 19:55 /apps3/apps1
-rw-r--r--   3   - user group  4 2018-08-10 11:55 /apps3/ttt
 {noformat}

{{Please refer to the mount information below.}}

{{/apps3 is tagged with the HASH policy}}
{{/apps5 is tagged with the RANDOM policy}}

{noformat}
/bin> ./hdfs dfsrouteradmin -ls

Mount Table Entries:
Source    Destinations                   Owner     Group  Mode       Quota/Usage

/apps3    hacluster->/tmp3,ns2->/tmp4    securedn  users  rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]

/apps5    hacluster->/tmp5,ns2->/tmp5    securedn  users  rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]

{noformat}






[jira] [Created] (HDFS-13482) Crypto command should give proper exception when user is trying to create an EZ with the same key with which it is already created

2018-04-19 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-13482:


 Summary: Crypto command should give proper exception when user is 
trying to create an EZ with the same key with which it is already created
 Key: HDFS-13482
 URL: https://issues.apache.org/jira/browse/HDFS-13482
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs, kms
Affects Versions: 2.8.3
Reporter: Harshakiran Reddy


{{Scenario:}}
 # Create a dir
 # Create an EZ for the above dir with key1
 # Try again to create a zone for the same directory with the same key1

{noformat}
hadoopclient> hadoop key list
Listing keys for KeyProvider: 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider@152aa092
key2
key1
hadoopclient> hdfs dfs -mkdir /kms
hadoopclient> hdfs crypto -createZone -keyName key1 -path /kms
Added encryption zone /kms
hadoopclient> hdfs crypto -createZone -keyName key1 -path /kms
RemoteException: Attempt to create an encryption zone for a non-empty 
directory.{noformat}
Actual Output:
 ===
 {{RemoteException: Attempt to create an encryption zone for a non-empty 
directory}}

Expected Output:
 =
 The exception should be like {{EZ is already created with the same key}}






[jira] [Created] (HDFS-13483) Crypto command should give proper exception when user is trying to create an EZ with the same key with which it is already created

2018-04-19 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-13483:


 Summary: Crypto command should give proper exception when user is 
trying to create an EZ with the same key with which it is already created
 Key: HDFS-13483
 URL: https://issues.apache.org/jira/browse/HDFS-13483
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs, kms
Affects Versions: 2.8.3
Reporter: Harshakiran Reddy


{{Scenario:}}
 # Create a dir
 # Create an EZ for the above dir with key1
 # Try again to create a zone for the same directory with the same key1

{noformat}
hadoopclient> hadoop key list
Listing keys for KeyProvider: 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider@152aa092
key2
key1
hadoopclient> hdfs dfs -mkdir /kms
hadoopclient> hdfs crypto -createZone -keyName key1 -path /kms
Added encryption zone /kms
hadoopclient> hdfs crypto -createZone -keyName key1 -path /kms
RemoteException: Attempt to create an encryption zone for a non-empty 
directory.{noformat}
Actual Output:
 ===
 {{RemoteException: Attempt to create an encryption zone for a non-empty 
directory}}

Expected Output:
 =
 The exception should be like {{EZ is already created with the same key}}






[jira] [Created] (HDFS-13369) FSCK Report broken with RequestHedgingProxyProvider

2018-03-29 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-13369:


 Summary: FSCK Report broken with RequestHedgingProxyProvider 
 Key: HDFS-13369
 URL: https://issues.apache.org/jira/browse/HDFS-13369
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 2.8.3
Reporter: Harshakiran Reddy


Scenario:-

1. Configure the RequestHedgingProxyProvider (a configuration sketch follows)

2. Write some files to the file system

3. Take an FSCK report for the above files
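
A hedged sketch of the step-1 client configuration (the nameservice name "hacluster" is illustrative):
{noformat}
<property>
  <name>dfs.client.failover.proxy.provider.hacluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider</value>
</property>
{noformat}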

 
{noformat}
bin> hdfs fsck /file1 -locations -files -blocks
Exception in thread "main" java.lang.ClassCastException: 
org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider$RequestHedgingInvocationHandler
 cannot be cast to org.apache.hadoop.ipc.RpcInvocationHandler
at org.apache.hadoop.ipc.RPC.getConnectionIdForProxy(RPC.java:626)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.getConnectionId(RetryInvocationHandler.java:438)
at org.apache.hadoop.ipc.RPC.getConnectionIdForProxy(RPC.java:628)
at org.apache.hadoop.ipc.RPC.getServerAddress(RPC.java:611)
at org.apache.hadoop.hdfs.HAUtil.getAddressOfActive(HAUtil.java:263)
at org.apache.hadoop.hdfs.tools.DFSck.getCurrentNamenodeAddress(DFSck.java:257)
at org.apache.hadoop.hdfs.tools.DFSck.doWork(DFSck.java:319)
at org.apache.hadoop.hdfs.tools.DFSck.access$000(DFSck.java:72)
at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:156)
at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:153)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:152)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:385){noformat}
 






[jira] [Created] (HDFS-13292) Crypto command should give a proper exception when creating an EZ for a dir that already has an EZ with a different key

2018-03-15 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-13292:


 Summary: Crypto command should give a proper exception when 
creating an EZ for a dir that already has an EZ with a different key
 Key: HDFS-13292
 URL: https://issues.apache.org/jira/browse/HDFS-13292
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: kms
Affects Versions: 2.8.3
Reporter: Harshakiran Reddy


{{Scenario:}}
 # Create a dir
 # Create an EZ for the above dir with key1
 # Try again to create a zone for the same directory with a different key, i.e. key2

Actual Output:
===
{{RemoteException: Attempt to create an encryption zone for a non-empty 
directory}}

Expected Output:
=
The exception should be like {{Dir already has a zone; creating a new zone on 
this dir is not allowed}}






[jira] [Created] (HDFS-13154) WebHDFS: update the document for allow/disallow snapshots

2018-02-15 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-13154:


 Summary: WebHDFS: update the document for allow/disallow snapshots
 Key: HDFS-13154
 URL: https://issues.apache.org/jira/browse/HDFS-13154
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs, webhdfs
Affects Versions: 2.8.2
Reporter: Harshakiran Reddy


There is no documentation for the allow/disallow snapshot operations:

http://hadoop.apache.org/docs/r2.8.3/hadoop-project-dist/hadoop-hdfs/WebHDFS.html
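
For reference, a hedged sketch of the undocumented operations (NameNode address and path are placeholders; 50070 is the default NameNode HTTP port in 2.8.x):
{noformat}
curl -X PUT "http://<NN>:50070/webhdfs/v1/<PATH>?op=ALLOWSNAPSHOT"
curl -X PUT "http://<NN>:50070/webhdfs/v1/<PATH>?op=DISALLOWSNAPSHOT"
{noformat}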






[jira] [Created] (HDFS-12897) Path not found when we get the ec policy for a .snapshot dir

2017-12-05 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-12897:


 Summary: Path not found when we get the ec policy for a .snapshot 
dir
 Key: HDFS-12897
 URL: https://issues.apache.org/jira/browse/HDFS-12897
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding, hdfs, snapshots
Affects Versions: 3.0.0-alpha1
Reporter: Harshakiran Reddy


Scenario:-
---

bin> ./hdfs ec -getPolicy -path /dir/
2017-12-06 12:51:35,324 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
RS-3-2-1024k
bin> ./hdfs ec -getPolicy -path /dir/.snapshot/
2017-12-06 12:19:52,204 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
{{FileNotFoundException: Path not found: /dir/.snapshot}}
bin> ./hdfs dfs -ls /dir/.snapshot/
2017-12-06 12:20:12,956 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
Found 2 items
drwxr-xr-x   - user group  0 2017-12-05 12:27 /dir/.snapshot/s1
drwxr-xr-x   - user group  0 2017-12-05 12:28 /dir/.snapshot/s2
bin> ./hdfs storagepolicies -getStoragePolicy -path /dir/.snapshot/
2017-12-06 12:24:10,729 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
{{The storage policy of /dir/.snapshot/ is unspecified}}
bin> ./hdfs storagepolicies -getStoragePolicy -path /dir/
2017-12-06 12:50:17,010 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
The storage policy of /dir/:
BlockStoragePolicy{COLD:2, storageTypes=[ARCHIVE], creationFallbacks=[], 
replicationFallbacks=[]}






[jira] [Created] (HDFS-12854) Centralized Cache Management should support ViewFileSystem.

2017-11-22 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-12854:


 Summary: Centralized Cache Management should support 
ViewFileSystem.
 Key: HDFS-12854
 URL: https://issues.apache.org/jira/browse/HDFS-12854
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: caching, hdfs
Affects Versions: 3.0.0-alpha1
Reporter: Harshakiran Reddy


Currently, {{ViewFileSystem}} does not support Centralized Cache Management; it 
throws an {{IllegalArgumentException}}.

{noformat} 
./hdfs cacheadmin -listPools
IllegalArgumentException: FileSystem viewfs://ClusterX/ is not an HDFS file 
system 
{noformat}
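
As a sketch of the expected behavior: the same commands work when 
fs.defaultFS points at a concrete HDFS namespace rather than the viewfs 
mount (pool1 is a placeholder pool name):

{noformat}
# works against a concrete namespace such as hdfs://ns1
hdfs cacheadmin -addPool pool1
hdfs cacheadmin -listPools
# fails with IllegalArgumentException when fs.defaultFS is viewfs://ClusterX/
{noformat}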



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-12833) In Distcp, the -delete option does not have a proper usage message.

2017-11-17 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-12833:


 Summary: In Distcp, the -delete option does not have a proper usage 
message.
 Key: HDFS-12833
 URL: https://issues.apache.org/jira/browse/HDFS-12833
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: distcp, hdfs
Affects Versions: 3.0.0-alpha1
Reporter: Harshakiran Reddy
Priority: Minor


The -delete option is applicable only together with the -update or -overwrite 
options, but when I tried it as the usage message suggests, I got the 
exception below.

{noformat}
bin:> ./hadoop distcp -delete /Dir1/distcpdir /Dir/distcpdir5
2017-11-17 20:48:09,828 ERROR tools.DistCp: Invalid arguments:
java.lang.IllegalArgumentException: Delete missing is applicable only with 
update or overwrite options
at 
org.apache.hadoop.tools.DistCpOptions$Builder.validate(DistCpOptions.java:528)
at 
org.apache.hadoop.tools.DistCpOptions$Builder.build(DistCpOptions.java:487)
at org.apache.hadoop.tools.OptionsParser.parse(OptionsParser.java:233)
at org.apache.hadoop.tools.DistCp.run(DistCp.java:141)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
Invalid arguments: Delete missing is applicable only with update or overwrite 
options
usage: distcp OPTIONS [source_path...] <target_path>
              OPTIONS
 -append                Reuse existing data in target files and
                        append new data to them if possible
 -async                 Should distcp execution be blocking
 -atomic                Commit all changes or none
 -bandwidth <arg>       Specify bandwidth per map in MB, accepts
                        bandwidth as a fraction.
 -blocksperchunk <arg>  If set to a positive value, files with more
                        blocks than this value will be split into
                        chunks of <arg> blocks to be
                        transferred in parallel, and reassembled on
                        the destination. By default, <arg>
                        is 0 and the files will be
                        transmitted in their entirety without
                        splitting. This switch is only applicable
                        when the source file system implements
                        getBlockLocations method and the target
                        file system implements concat method
 -copybuffersize <arg>  Size of the copy buffer to use. By default
                        <arg> is 8192B.
 -delete                Delete from target, files missing in source
 -diff <arg>            Use snapshot diff report to identify the
                        difference between source and target
{noformat}

The documentation also does not show the proper usage for this option.
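
For reference, a working invocation for the same paths would combine -delete 
with -update (a sketch; the paths are taken from the failing command above):

{noformat}
hadoop distcp -update -delete /Dir1/distcpdir /Dir/distcpdir5
{noformat}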



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-12826) Balancer document says the RPC port, but the IPC port is required.

2017-11-16 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-12826:


 Summary: Balancer document says the RPC port, but the IPC port is 
required.
 Key: HDFS-12826
 URL: https://issues.apache.org/jira/browse/HDFS-12826
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer & mover, documentation
Affects Versions: 3.0.0-beta1
Reporter: Harshakiran Reddy
Priority: Minor


In {{Adding a new Namenode to an existing HDFS cluster}}, the refreshNamenodes 
command requires the IPC port, but the documentation says the RPC port.

http://hadoop.apache.org/docs/r3.0.0-beta1/hadoop-project-dist/hadoop-hdfs/Federation.html#Balancer

{noformat} 
bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin 
-refreshNamenodes host-name:65110
refreshNamenodes: Unknown protocol: 
org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
bin.:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin 
-refreshNamenodes
Usage: hdfs dfsadmin [-refreshNamenodes datanode-host:ipc_port]
bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin 
-refreshNamenodes host-name:50077
bin>:~/hdfsdata/HA/install/hadoop/datanode/bin>
{noformat} 
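
For reference, the datanode IPC port can be looked up from the configuration 
before running the command (a sketch; 50077 is the custom IPC port from the 
transcript above):

{noformat}
# the IPC port comes from dfs.datanode.ipc.address, not the RPC port
hdfs getconf -confKey dfs.datanode.ipc.address
hdfs dfsadmin -refreshNamenodes host-name:50077
{noformat}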



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-12825) After a block is corrupted, the FSCK report prints the raw configuration key.

2017-11-16 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-12825:


 Summary: After a block is corrupted, the FSCK report prints the raw 
configuration key.
 Key: HDFS-12825
 URL: https://issues.apache.org/jira/browse/HDFS-12825
 Project: Hadoop HDFS
  Issue Type: Wish
  Components: hdfs
Affects Versions: 3.0.0-alpha1
Reporter: Harshakiran Reddy
Priority: Minor


Scenario:
Corrupt a block on any datanode.
Take the *FSCK* report for that file.

Actual Output:
==
The fsck report prints the raw configuration key:

{{dfs.namenode.replication.min}}

Expected Output:

It should be a readable label such as {{MINIMAL BLOCK REPLICATION}}.
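
A repro sketch (/file1 is a placeholder for a file with a corrupt block; the 
excerpted summary lines are approximate and may differ slightly by version):

{noformat}
hdfs fsck /file1 -files -blocks
# summary excerpt:
#  UNDER MIN REPL'D BLOCKS:      1 (100.0 %)
#  dfs.namenode.replication.min: 1
{noformat}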



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-12824) ViewFileSystem should support EC.

2017-11-16 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-12824:


 Summary: ViewFileSystem should support EC.
 Key: HDFS-12824
 URL: https://issues.apache.org/jira/browse/HDFS-12824
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: erasure-coding, fs
Affects Versions: 3.0.0-alpha1
Reporter: Harshakiran Reddy


Currently, ViewFileSystem does not support EC; it throws an 
IllegalArgumentException.
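
A sketch of the failure, assuming fs.defaultFS is viewfs://ClusterX/ and the 
path is a placeholder (the message mirrors the cacheadmin case reported 
above):

{noformat}
hdfs ec -getPolicy -path /data/dir1
IllegalArgumentException: FileSystem viewfs://ClusterX/ is not an HDFS file 
system
{noformat}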



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-12782) After unsetting the EC policy for a directory, files inside the directory still have the EC policy

2017-11-06 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-12782:


 Summary: After unsetting the EC policy for a directory, files inside 
the directory still have the EC policy
 Key: HDFS-12782
 URL: https://issues.apache.org/jira/browse/HDFS-12782
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding
Affects Versions: 3.0.0-alpha1
Reporter: Harshakiran Reddy


Scenario:

Set the EC policy for a dir.
Write a file and check the EC policy for that file.
Unset the EC policy for the above dir.
Check the policy for the file (see the repro sketch below).

Actual Output:
==
The file still has the EC policy.

Expected Output:

All files inside the dir should release the EC policy when it is unset on the 
top-level dir.
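
A repro sketch (RS-6-3-1024k and the paths are placeholders; any enabled 
policy should reproduce this):

{noformat}
hdfs ec -setPolicy -path /ecdir -policy RS-6-3-1024k
hdfs dfs -put localfile /ecdir/file1
hdfs ec -getPolicy -path /ecdir/file1   # shows RS-6-3-1024k
hdfs ec -unsetPolicy -path /ecdir
hdfs ec -getPolicy -path /ecdir/file1   # still shows RS-6-3-1024k
{noformat}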




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-12781) After a Datanode goes down, the Datanode tab in the Namenode UI throws a warning message.

2017-11-05 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-12781:


 Summary: After a Datanode goes down, the Datanode tab in the Namenode 
UI throws a warning message.
 Key: HDFS-12781
 URL: https://issues.apache.org/jira/browse/HDFS-12781
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0-alpha1
Reporter: Harshakiran Reddy


Scenario:

Stop one Datanode.
Refresh or click on the Datanode tab in the Namenode UI.

Actual Output:
==
It throws the following warning message:

DataTables warning: table id=table-datanodes - Requested unknown parameter '7' 
for row 2. For more information about this error, please see 
http://datatables.net/tn/4

Expected Output:

Whenever you click on the Datanode tab, it should display the datanode 
information.




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-12717) 'hdfs namenode -help' command does not work when the Namenode is running

2017-10-26 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-12717:


 Summary: 'hdfs namenode -help' command does not work when the 
Namenode is running
 Key: HDFS-12717
 URL: https://issues.apache.org/jira/browse/HDFS-12717
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: scripts
Affects Versions: 3.0.0-alpha1
Reporter: Harshakiran Reddy


Scenario:

Start the namenode.
Execute "hdfs namenode -help".

Actual Output:
=
namenode is running as process 85785.  Stop it first.

Expected Output:
===
It should print the help message (see the repro sketch below).
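
A repro sketch (the pid is from the report above; presumably the launcher 
script's pid-file check fires before the -help argument ever reaches the 
NameNode class):

{noformat}
hdfs --daemon start namenode
hdfs namenode -help
# prints: namenode is running as process 85785.  Stop it first.
{noformat}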




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org