[jira] [Updated] (HDFS-14277) [SBN read] Observer benchmark results

2019-08-28 Thread Jonathan Eagles (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HDFS-14277:
---
Labels: release-blocker  (was: )

> [SBN read] Observer benchmark results
> -
>
> Key: HDFS-14277
> URL: https://issues.apache.org/jira/browse/HDFS-14277
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: ha, namenode
>Affects Versions: 2.10.0, 3.3.0
> Environment: Hardware: 4-node cluster; each node has 4 cores (Xeon 
> 2.5 GHz) and 25 GB memory.
> Software: CentOS 7.4, CDH 6.0 + Consistent Reads from Standby, Kerberos, SSL, 
> RPC encryption + Data Transfer Encryption, Cloudera Navigator.
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
>  Labels: release-blocker
> Attachments: Observer profiler.png, Screen Shot 2019-02-14 at 
> 11.50.37 AM.png, observer RPC queue processing time.png
>
>
> Ran a few benchmarks and a profiler (VisualVM) today on an Observer-enabled 
> cluster, and would like to share the results with the community. The cluster 
> has one Observer node.
> h2. NNThroughputBenchmark
> Generate 1 million files and send fileStatus RPCs.
> {code:java}
> hadoop org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark -fs 
>   -op fileStatus -threads 100 -files 100 -useExisting 
> -keepResults
> {code}
> h3. Kerberos, SSL, RPC encryption, Data Transfer Encryption enabled:
> ||Node||fileStatus (Ops per sec)||
> |Active NameNode|4865|
> |Observer|3996|
> h3. Kerberos, SSL:
> ||Node||fileStatus (Ops per sec)||
> |Active NameNode|7078|
> |Observer|6459|
> Observations:
>  * Due to the edit-tailing overhead, the Observer node consumes ~30% CPU 
> even when the cluster is idle.
>  * While the Active NN has an RPC processing time under 1 ms, the Observer 
> node's is over 5 ms. I am still looking for the source of the longer 
> processing time, which may be the cause of the performance degradation 
> relative to the Active NN. Note that the cluster has Cloudera Navigator 
> installed, which adds overhead to RPC processing time.
>  * {{GlobalStateIdContext#isCoordinatedCall()}} shows up as one of the top 
> hotspots in the profiler. 
>  
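> As an editorial sketch (not from the original report), assuming the standard
> consistent-reads-from-standby client setup with an HA nameservice already
> configured ("mycluster" is a placeholder): this is how a client routes read
> RPCs such as getFileStatus through the Observer.
> {code:java}
> import java.net.URI;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileStatus;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.hdfs.HdfsConfiguration;
>
> public class ObserverReadExample {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new HdfsConfiguration();
>     // Send read RPCs to the Observer; writes still go to the Active NN.
>     conf.set("dfs.client.failover.proxy.provider.mycluster",
>         "org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider");
>     FileSystem fs = FileSystem.get(URI.create("hdfs://mycluster"), conf);
>     FileStatus st = fs.getFileStatus(new Path("/bench/file0")); // coordinated read
>     System.out.println(st.getLen());
>   }
> }
> {code}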



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7765) FSOutputSummer throwing ArrayIndexOutOfBoundsException

2018-09-18 Thread Jonathan Eagles (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-7765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16619602#comment-16619602
 ] 

Jonathan Eagles commented on HDFS-7765:
---

[~janmejay], I haven't seen much activity on this jira recently. Do you mind if 
I take it over? No worries if you are going to continue working on it. If I 
don't hear back either way in a few weeks, I'll assume you have stepped away 
from this jira.

> FSOutputSummer throwing ArrayIndexOutOfBoundsException
> --
>
> Key: HDFS-7765
> URL: https://issues.apache.org/jira/browse/HDFS-7765
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
> Environment: CentOS 6, OpenJDK 7, Amazon EC2, Accumulo 1.6.2RC4
>Reporter: Keith Turner
>Assignee: Janmejay Singh
>Priority: Major
> Attachments: 
> 0001-PATCH-HDFS-7765-FSOutputSummer-throwing-ArrayIndexOu.patch, 
> HDFS-7765.patch
>
>
> While running an Accumulo test, we saw exceptions like the following while 
> trying to write to the write-ahead log in HDFS. 
> The exception occurs at 
> [FSOutputSummer.java:76|https://github.com/apache/hadoop/blob/release-2.6.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java#L76]
>  which is attempting to update a byte array.
> {noformat}
> 2015-02-06 19:46:49,769 [log.DfsLogger] WARN : Exception syncing 
> java.lang.reflect.InvocationTargetException
> java.lang.ArrayIndexOutOfBoundsException: 4608
> at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:76)
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:50)
> at java.io.DataOutputStream.write(DataOutputStream.java:88)
> at java.io.DataOutputStream.writeByte(DataOutputStream.java:153)
> at 
> org.apache.accumulo.tserver.logger.LogFileKey.write(LogFileKey.java:87)
> at org.apache.accumulo.tserver.log.DfsLogger.write(DfsLogger.java:526)
> at 
> org.apache.accumulo.tserver.log.DfsLogger.logFileData(DfsLogger.java:540)
> at 
> org.apache.accumulo.tserver.log.DfsLogger.logManyTablets(DfsLogger.java:573)
> at 
> org.apache.accumulo.tserver.log.TabletServerLogger$6.write(TabletServerLogger.java:373)
> at 
> org.apache.accumulo.tserver.log.TabletServerLogger.write(TabletServerLogger.java:274)
> at 
> org.apache.accumulo.tserver.log.TabletServerLogger.logManyTablets(TabletServerLogger.java:365)
> at 
> org.apache.accumulo.tserver.TabletServer$ThriftClientHandler.flush(TabletServer.java:1667)
> at 
> org.apache.accumulo.tserver.TabletServer$ThriftClientHandler.closeUpdate(TabletServer.java:1754)
> at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.accumulo.trace.instrument.thrift.RpcServerInvocationHandler.invoke(RpcServerInvocationHandler.java:46)
> at 
> org.apache.accumulo.server.util.RpcWrapper$1.invoke(RpcWrapper.java:47)
> at com.sun.proxy.$Proxy22.closeUpdate(Unknown Source)
> at 
> org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$closeUpdate.getResult(TabletClientService.java:2370)
> at 
> org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$closeUpdate.getResult(TabletClientService.java:2354)
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
> at 
> org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:168)
> at 
> org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.invoke(AbstractNonblockingServer.java:516)
> at 
> org.apache.accumulo.server.util.CustomNonBlockingServer$1.run(CustomNonBlockingServer.java:77)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at 
> org.apache.accumulo.trace.instrument.TraceRunnable.run(TraceRunnable.java:47)
> at 
> org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
> at java.lang.Thread.run(Thread.java:744)
> 2015-02-06 19:46:49,769 [log.DfsLogger] WARN : Exception syncing 
> java.lang.reflect.InvocationTargetException
> 2015-02-06 19:46:49,772 [log.DfsLogger] ERROR: 
> java.lang.ArrayIndexOutOfBoundsException: 4609
> java.lang.ArrayIndexOutOfBoundsException: 4609
> at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:76)
> at 
> 

[jira] [Commented] (HDFS-7765) FSOutputSummer throwing ArrayIndexOutOfBoundsException

2018-04-06 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16428532#comment-16428532
 ] 

Jonathan Eagles commented on HDFS-7765:
---

[~janmejay], I think [~wankunde]'s assessment matches my experience of when 
this issue happens: once an IOException occurs at max buffer size, this class 
becomes unusable.

Much like this other Apache stream class, linked below for reference: flush if 
we can't write, then write. That way the buffer state is not modified until it 
is safe to do so.
https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/output/ByteArrayOutputStream.java#L171
{code}
  public synchronized void write(int b) throws IOException {
    int newcount = count + 1;
    // Flush *before* touching the buffer: if the flush throws, count and
    // buf are unchanged and the stream remains usable.
    if (newcount > buf.length) {
      flushBuffer();
    }
    buf[count++] = (byte) b;
  }
{code}
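
For contrast, the write path at the 2.6.0 line linked in the description is 
roughly the reverse (paraphrased, not an exact copy): it mutates the buffer 
first and flushes after, so a flush that throws leaves {{count}} at 
{{buf.length}} and every subsequent write hits the 
ArrayIndexOutOfBoundsException above.
{code}
  // Paraphrase of the problematic write-then-flush ordering:
  public synchronized void write(int b) throws IOException {
    buf[count++] = (byte) b;   // state changed before the risky call...
    if (count == buf.length) {
      flushBuffer();           // ...an IOException here strands count == buf.length
    }
  }
{code}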

I haven't checked the rest of FSOutputSummer for correctness; that is worth 
doing.

> FSOutputSummer throwing ArrayIndexOutOfBoundsException
> --
>
> Key: HDFS-7765
> URL: https://issues.apache.org/jira/browse/HDFS-7765
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
> Environment: CentOS 6, OpenJDK 7, Amazon EC2, Accumulo 1.6.2RC4
>Reporter: Keith Turner
>Assignee: Janmejay Singh
>Priority: Major
> Attachments: 
> 0001-PATCH-HDFS-7765-FSOutputSummer-throwing-ArrayIndexOu.patch, 
> HDFS-7765.patch
>
>
> While running an Accumulo test, we saw exceptions like the following while 
> trying to write to the write-ahead log in HDFS. 
> The exception occurs at 
> [FSOutputSummer.java:76|https://github.com/apache/hadoop/blob/release-2.6.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java#L76]
>  which is attempting to update a byte array.
> {noformat}
> 2015-02-06 19:46:49,769 [log.DfsLogger] WARN : Exception syncing 
> java.lang.reflect.InvocationTargetException
> java.lang.ArrayIndexOutOfBoundsException: 4608
> at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:76)
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:50)
> at java.io.DataOutputStream.write(DataOutputStream.java:88)
> at java.io.DataOutputStream.writeByte(DataOutputStream.java:153)
> at 
> org.apache.accumulo.tserver.logger.LogFileKey.write(LogFileKey.java:87)
> at org.apache.accumulo.tserver.log.DfsLogger.write(DfsLogger.java:526)
> at 
> org.apache.accumulo.tserver.log.DfsLogger.logFileData(DfsLogger.java:540)
> at 
> org.apache.accumulo.tserver.log.DfsLogger.logManyTablets(DfsLogger.java:573)
> at 
> org.apache.accumulo.tserver.log.TabletServerLogger$6.write(TabletServerLogger.java:373)
> at 
> org.apache.accumulo.tserver.log.TabletServerLogger.write(TabletServerLogger.java:274)
> at 
> org.apache.accumulo.tserver.log.TabletServerLogger.logManyTablets(TabletServerLogger.java:365)
> at 
> org.apache.accumulo.tserver.TabletServer$ThriftClientHandler.flush(TabletServer.java:1667)
> at 
> org.apache.accumulo.tserver.TabletServer$ThriftClientHandler.closeUpdate(TabletServer.java:1754)
> at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.accumulo.trace.instrument.thrift.RpcServerInvocationHandler.invoke(RpcServerInvocationHandler.java:46)
> at 
> org.apache.accumulo.server.util.RpcWrapper$1.invoke(RpcWrapper.java:47)
> at com.sun.proxy.$Proxy22.closeUpdate(Unknown Source)
> at 
> org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$closeUpdate.getResult(TabletClientService.java:2370)
> at 
> org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$closeUpdate.getResult(TabletClientService.java:2354)
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
> at 
> org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:168)
> at 
> org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.invoke(AbstractNonblockingServer.java:516)
> at 
> org.apache.accumulo.server.util.CustomNonBlockingServer$1.run(CustomNonBlockingServer.java:77)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at 
> org.apache.accumulo.trace.instrument.TraceRunnable.run(TraceRunnable.java:47)
> at 
> 

[jira] [Commented] (HDFS-12591) [READ] Implement LevelDBFileRegionFormat

2017-10-04 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16192007#comment-16192007
 ] 

Jonathan Eagles commented on HDFS-12591:


I can't comment on the jira as a whole, but it looks like there is a memory 
leak introduced by obtaining a DBIterator without ever calling close on it, 
similar to the work done in YARN-5368.
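
As a sketch of the fix pattern (illustrative code, not the actual patch): the 
{{DBIterator}} returned by {{DB.iterator()}} holds native resources and must 
be closed, which try-with-resources guarantees on every path.
{code:java}
import java.io.IOException;
import org.iq80.leveldb.DB;
import org.iq80.leveldb.DBIterator;

public class RegionScan {
  // Iterate all entries without leaking the iterator's native handle.
  static int countEntries(DB db) throws IOException {
    int n = 0;
    try (DBIterator it = db.iterator()) {
      for (it.seekToFirst(); it.hasNext(); it.next()) {
        n++;                     // a real reader would decode it.peekNext() here
      }
    }                            // close() runs even if decoding throws
    return n;
  }
}
{code}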

> [READ] Implement LevelDBFileRegionFormat
> 
>
> Key: HDFS-12591
> URL: https://issues.apache.org/jira/browse/HDFS-12591
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Ewan Higgs
>Priority: Minor
> Attachments: HDFS-12591-HDFS-9806.001.patch
>
>
> The existing work for HDFS-9806 uses an implementation of {{FileRegion}} 
> storage that is read from a CSV file. This is good for testability and 
> diagnostic purposes, but it is not very efficient for larger systems.
> There should be a version, similar to the {{TextFileRegionFormat}}, that 
> uses LevelDB instead.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11754) Make FsServerDefaults cache configurable.

2017-05-18 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16016412#comment-16016412
 ] 

Jonathan Eagles commented on HDFS-11754:


[~erofeev], I gave you the right to be assigned to HDFS issues.

> Make FsServerDefaults cache configurable.
> -
>
> Key: HDFS-11754
> URL: https://issues.apache.org/jira/browse/HDFS-11754
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Rushabh S Shah
>Assignee: Mikhail Erofeev
>Priority: Minor
>  Labels: newbie
>
> DFSClient caches the FsServerDefaults result for 60 minutes,
> but that 60-minute period is not configurable.
> Continuing the discussion from HDFS-11702, it would be nice to make this 
> configurable while keeping the default at 60 minutes.
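> As an editorial sketch of the caching pattern in question (field and class
> names are illustrative, not the exact DFSClient code):
> {code:java}
> import java.io.IOException;
> import org.apache.hadoop.fs.FsServerDefaults;
> import org.apache.hadoop.hdfs.protocol.ClientProtocol;
> import org.apache.hadoop.util.Time;
>
> class ServerDefaultsCache {
>   private final ClientProtocol namenode;
>   private FsServerDefaults serverDefaults;
>   private long serverDefaultsLastUpdate;
>   // Hard-coded 60-minute validity; the request is to make this configurable.
>   private static final long VALIDITY_PERIOD_MS = 60 * 60 * 1000L;
>
>   ServerDefaultsCache(ClientProtocol namenode) { this.namenode = namenode; }
>
>   synchronized FsServerDefaults get() throws IOException {
>     long now = Time.monotonicNow();
>     if (serverDefaults == null
>         || now - serverDefaultsLastUpdate > VALIDITY_PERIOD_MS) {
>       serverDefaults = namenode.getServerDefaults(); // refresh from the NameNode
>       serverDefaultsLastUpdate = now;
>     }
>     return serverDefaults;
>   }
> }
> {code}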



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6305) WebHdfs response decoding may throw RuntimeExceptions

2014-05-14 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HDFS-6305:
--

   Resolution: Fixed
Fix Version/s: 2.5.0
   3.0.0
   Status: Resolved  (was: Patch Available)

With the +1 from [~cnauroth] and [~hadoopqa], committing this to branch-2 
and trunk. I'm also +1 on this issue. Thanks, [~daryn].

 WebHdfs response decoding may throw RuntimeExceptions
 -

 Key: HDFS-6305
 URL: https://issues.apache.org/jira/browse/HDFS-6305
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Fix For: 3.0.0, 2.5.0

 Attachments: HDFS-6305.patch


 WebHdfs does not guard against exceptions while decoding the response 
 payload. The JSON parser will throw RuntimeExceptions on malformed 
 responses, and the JSON decoding routines do not validate that the expected 
 fields are present, which may cause NPEs.
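
 As an editorial sketch of the guarding this calls for (not the actual 
 WebHdfs code; "FileStatus" and "length" are field names from the WebHDFS 
 REST spec):
 {code:java}
 import java.io.IOException;
 import java.util.Map;

 final class SafeDecode {
   // Convert parser RuntimeExceptions and missing fields into IOExceptions
   // instead of letting them escape to the caller.
   static long fileLength(Map<String, Object> json) throws IOException {
     try {
       Object status = json.get("FileStatus");
       if (!(status instanceof Map)) {
         throw new IOException("Missing FileStatus in response: " + json);
       }
       Object len = ((Map<?, ?>) status).get("length");
       if (!(len instanceof Number)) {
         throw new IOException("Missing length in response: " + json);
       }
       return ((Number) len).longValue();
     } catch (RuntimeException e) { // malformed payload
       throw new IOException("Response decoding failed", e);
     }
   }
 }
 {code}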



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6305) WebHdfs response decoding may throw RuntimeExceptions

2014-05-13 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13995737#comment-13995737
 ] 

Jonathan Eagles commented on HDFS-6305:
---

Kicked Jenkins. There are 3 or 4 builds ahead of this one, so it may be late 
tonight before we see the results.

Jon

 WebHdfs response decoding may throw RuntimeExceptions
 -

 Key: HDFS-6305
 URL: https://issues.apache.org/jira/browse/HDFS-6305
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HDFS-6305.patch


 WebHdfs does not guard against exceptions while decoding the response 
 payload. The JSON parser will throw RuntimeExceptions on malformed 
 responses, and the JSON decoding routines do not validate that the expected 
 fields are present, which may cause NPEs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6269) NameNode Audit Log should differentiate between webHDFS open and HDFS open.

2014-04-29 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HDFS-6269:
--

   Resolution: Fixed
Fix Version/s: 2.5.0
   3.0.0
   Status: Resolved  (was: Patch Available)

 NameNode Audit Log should differentiate between webHDFS open and HDFS open.
 ---

 Key: HDFS-6269
 URL: https://issues.apache.org/jira/browse/HDFS-6269
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode, webhdfs
Affects Versions: 2.4.0
Reporter: Eric Payne
Assignee: Eric Payne
 Fix For: 3.0.0, 2.5.0

 Attachments: HDFS-6269-AuditLogWebOpen.txt, 
 HDFS-6269-AuditLogWebOpen.txt, HDFS-6269-AuditLogWebOpen.txt


 To enhance traceability, the NameNode audit log should use a different string 
 for open in the cmd= part of the audit entry.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6269) NameNode Audit Log should differentiate between webHDFS open and HDFS open.

2014-04-29 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13984864#comment-13984864
 ] 

Jonathan Eagles commented on HDFS-6269:
---

+1. Putting this into trunk and branch-2

 NameNode Audit Log should differentiate between webHDFS open and HDFS open.
 ---

 Key: HDFS-6269
 URL: https://issues.apache.org/jira/browse/HDFS-6269
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode, webhdfs
Affects Versions: 2.4.0
Reporter: Eric Payne
Assignee: Eric Payne
 Fix For: 3.0.0, 2.5.0

 Attachments: HDFS-6269-AuditLogWebOpen.txt, 
 HDFS-6269-AuditLogWebOpen.txt, HDFS-6269-AuditLogWebOpen.txt


 To enhance traceability, the NameNode audit log should use a different string 
 for open in the cmd= part of the audit entry.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6215) Wrong error message for upgrade

2014-04-09 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13964321#comment-13964321
 ] 

Jonathan Eagles commented on HDFS-6215:
---

+1. Thanks, Kihwal. It is indeed an annoyance. Committing to trunk, branch-2 
and branch-2.4.

 Wrong error message for upgrade
 ---

 Key: HDFS-6215
 URL: https://issues.apache.org/jira/browse/HDFS-6215
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Minor
 Attachments: HDFS-6215.patch


 UPGRADE is printed instead of -upgrade.
 {panel}
 File system image contains an old layout version -51.
 An upgrade to version -56 is required.
 Please restart NameNode with the -rollingUpgrade started option if a rolling
 upgraded is already started; or restart NameNode with the UPGRADE to start 
 a new upgrade.
 {panel}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6215) Wrong error message for upgrade

2014-04-09 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HDFS-6215:
--

   Resolution: Fixed
Fix Version/s: 2.4.1
   2.5.0
   3.0.0
   Status: Resolved  (was: Patch Available)

 Wrong error message for upgrade
 ---

 Key: HDFS-6215
 URL: https://issues.apache.org/jira/browse/HDFS-6215
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Minor
 Fix For: 3.0.0, 2.5.0, 2.4.1

 Attachments: HDFS-6215.patch


 UPGRADE is printed instead of -upgrade.
 {panel}
 File system image contains an old layout version -51.
 An upgrade to version -56 is required.
 Please restart NameNode with the -rollingUpgrade started option if a rolling
 upgraded is already started; or restart NameNode with the UPGRADE to start 
 a new upgrade.
 {panel}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5870) The legacy local reader stops working on block token expiration

2014-04-03 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13958912#comment-13958912
 ] 

Jonathan Eagles commented on HDFS-5870:
---

+1. Thanks, Kihwal. Since 2.4.0 is already in RC, I will commit this to trunk 
and branch-2.

jeagles

 The legacy local reader stops working on block token expiration
 ---

 Key: HDFS-5870
 URL: https://issues.apache.org/jira/browse/HDFS-5870
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HDFS-5870.patch


 HDFS-5637 fixed the issue for 2.0, but with the new domain-socket-based 
 local reader, the fix does not cover the old legacy short-circuit local 
 reader. This is because DFSInputStream#getBlockReader() catches IOException 
 and disables the legacy reader when calling 
 BlockReaderFactory.getLegacyBlockReaderLocal().



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5023) TestSnapshotPathINodes.testAllowSnapshot is failing in branch-2

2013-12-11 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13845567#comment-13845567
 ] 

Jonathan Eagles commented on HDFS-5023:
---

+1, Jing. If I don't hear anything on this issue today, I'll check this in 
tomorrow.

 TestSnapshotPathINodes.testAllowSnapshot is failing in branch-2
 ---

 Key: HDFS-5023
 URL: https://issues.apache.org/jira/browse/HDFS-5023
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: snapshots, test
Affects Versions: 2.4.0
Reporter: Ravi Prakash
Assignee: Mit Desai
  Labels: test
 Attachments: HDFS-5023.patch, HDFS-5023.patch, HDFS-5023.patch, 
 TEST-org.apache.hadoop.hdfs.server.namenode.TestSnapshotPathINodes.xml, 
 org.apache.hadoop.hdfs.server.namenode.TestSnapshotPathINodes-output.txt


 The assertion on line 91 is failing. I am using Fedora 19 + JDK7. 



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (HDFS-5023) TestSnapshotPathINodes.testAllowSnapshot is failing in branch-2

2013-12-11 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HDFS-5023:
--

Affects Version/s: 3.0.0

 TestSnapshotPathINodes.testAllowSnapshot is failing in branch-2
 ---

 Key: HDFS-5023
 URL: https://issues.apache.org/jira/browse/HDFS-5023
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: snapshots, test
Affects Versions: 3.0.0, 2.4.0
Reporter: Ravi Prakash
Assignee: Mit Desai
  Labels: java7, test
 Attachments: HDFS-5023.patch, HDFS-5023.patch, HDFS-5023.patch, 
 TEST-org.apache.hadoop.hdfs.server.namenode.TestSnapshotPathINodes.xml, 
 org.apache.hadoop.hdfs.server.namenode.TestSnapshotPathINodes-output.txt


 The assertion on line 91 is failing. I am using Fedora 19 + JDK7. 



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (HDFS-5023) TestSnapshotPathINodes.testAllowSnapshot is failing with jdk7

2013-12-11 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HDFS-5023:
--

Summary: TestSnapshotPathINodes.testAllowSnapshot is failing with jdk7  
(was: TestSnapshotPathINodes.testAllowSnapshot is failing in branch-2)

 TestSnapshotPathINodes.testAllowSnapshot is failing with jdk7
 -

 Key: HDFS-5023
 URL: https://issues.apache.org/jira/browse/HDFS-5023
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: snapshots, test
Affects Versions: 3.0.0, 2.4.0
Reporter: Ravi Prakash
Assignee: Mit Desai
  Labels: java7, test
 Attachments: HDFS-5023.patch, HDFS-5023.patch, HDFS-5023.patch, 
 TEST-org.apache.hadoop.hdfs.server.namenode.TestSnapshotPathINodes.xml, 
 org.apache.hadoop.hdfs.server.namenode.TestSnapshotPathINodes-output.txt


 The assertion on line 91 is failing. I am using Fedora 19 + JDK7. 



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (HDFS-5023) TestSnapshotPathINodes.testAllowSnapshot is failing with jdk7

2013-12-11 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HDFS-5023:
--

Labels: java7 test  (was: test)

 TestSnapshotPathINodes.testAllowSnapshot is failing with jdk7
 -

 Key: HDFS-5023
 URL: https://issues.apache.org/jira/browse/HDFS-5023
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: snapshots, test
Affects Versions: 3.0.0, 2.4.0
Reporter: Ravi Prakash
Assignee: Mit Desai
  Labels: java7, test
 Attachments: HDFS-5023.patch, HDFS-5023.patch, HDFS-5023.patch, 
 TEST-org.apache.hadoop.hdfs.server.namenode.TestSnapshotPathINodes.xml, 
 org.apache.hadoop.hdfs.server.namenode.TestSnapshotPathINodes-output.txt


 The assertion on line 91 is failing. I am using Fedora 19 + JDK7. 



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (HDFS-5023) TestSnapshotPathINodes.testAllowSnapshot is failing with jdk7

2013-12-11 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HDFS-5023:
--

   Resolution: Fixed
Fix Version/s: 2.4.0
   3.0.0
   Status: Resolved  (was: Patch Available)

 TestSnapshotPathINodes.testAllowSnapshot is failing with jdk7
 -

 Key: HDFS-5023
 URL: https://issues.apache.org/jira/browse/HDFS-5023
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: snapshots, test
Affects Versions: 3.0.0, 2.4.0
Reporter: Ravi Prakash
Assignee: Mit Desai
  Labels: java7, test
 Fix For: 3.0.0, 2.4.0

 Attachments: HDFS-5023.patch, HDFS-5023.patch, HDFS-5023.patch, 
 TEST-org.apache.hadoop.hdfs.server.namenode.TestSnapshotPathINodes.xml, 
 org.apache.hadoop.hdfs.server.namenode.TestSnapshotPathINodes-output.txt


 The assertion on line 91 is failing. I am using Fedora 19 + JDK7. 



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HDFS-5023) TestSnapshotPathINodes.testAllowSnapshot is failing in branch-2

2013-12-06 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13841447#comment-13841447
 ] 

Jonathan Eagles commented on HDFS-5023:
---

Mit, thanks for the patch. It looks like you left some debugging code in the 
submitted patch: 

//testSnapshotPathINodesAfterModification();


 TestSnapshotPathINodes.testAllowSnapshot is failing in branch-2
 ---

 Key: HDFS-5023
 URL: https://issues.apache.org/jira/browse/HDFS-5023
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: snapshots, test
Affects Versions: 2.4.0
Reporter: Ravi Prakash
Assignee: Mit Desai
  Labels: test
 Attachments: HDFS-5023.patch, 
 TEST-org.apache.hadoop.hdfs.server.namenode.TestSnapshotPathINodes.xml, 
 org.apache.hadoop.hdfs.server.namenode.TestSnapshotPathINodes-output.txt


 The assertion on line 91 is failing. I am using Fedora 19 + JDK7. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-1386) TestJMXGet fails in jdk7

2013-11-19 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HDFS-1386:
--

Target Version/s: 3.0.0, 2.3.0  (was: 3.0.0, 2.3.0, 0.23.10)

 TestJMXGet fails in jdk7
 

 Key: HDFS-1386
 URL: https://issues.apache.org/jira/browse/HDFS-1386
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, namenode, test
Affects Versions: 0.22.0
Reporter: Tanping Wang
Assignee: Jonathan Eagles
Priority: Blocker
  Labels: java7
 Attachments: HDFS-1386.patch, HDFS-1386.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-1386) TestJMXGet fails in jdk7

2013-11-19 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HDFS-1386:
--

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks, Kihwal, for the review. I will file the two JIRAs and post them here.

 TestJMXGet fails in jdk7
 

 Key: HDFS-1386
 URL: https://issues.apache.org/jira/browse/HDFS-1386
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, namenode, test
Affects Versions: 0.22.0
Reporter: Tanping Wang
Assignee: Jonathan Eagles
Priority: Blocker
  Labels: java7
 Attachments: HDFS-1386.patch, HDFS-1386.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5530) HDFS Components are unable to unregister from DefaultMetricsSystem

2013-11-19 Thread Jonathan Eagles (JIRA)
Jonathan Eagles created HDFS-5530:
-

 Summary: HDFS Components are unable to unregister from 
DefaultMetricsSystem
 Key: HDFS-5530
 URL: https://issues.apache.org/jira/browse/HDFS-5530
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0, 2.3.0
Reporter: Jonathan Eagles






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-1386) TestJMXGet fails in jdk7

2013-11-19 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13827053#comment-13827053
 ] 

Jonathan Eagles commented on HDFS-1386:
---

 HDFS-5530 and YARN-1426 were filed. Thanks again, Kihwal.

 TestJMXGet fails in jdk7
 

 Key: HDFS-1386
 URL: https://issues.apache.org/jira/browse/HDFS-1386
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, namenode, test
Affects Versions: 0.22.0
Reporter: Tanping Wang
Assignee: Jonathan Eagles
Priority: Blocker
  Labels: java7
 Attachments: HDFS-1386.patch, HDFS-1386.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-1386) TestJMXGet fails in jdk7

2013-11-18 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HDFS-1386:
--

Status: Open  (was: Patch Available)

Thanks for the review, Kihwal. Working on the proposed additions.

 TestJMXGet fails in jdk7
 

 Key: HDFS-1386
 URL: https://issues.apache.org/jira/browse/HDFS-1386
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, namenode, test
Affects Versions: 0.22.0
Reporter: Tanping Wang
Assignee: Jonathan Eagles
Priority: Blocker
  Labels: java7
 Attachments: HDFS-1386.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-1386) TestJMXGet fails in jdk7

2013-11-18 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HDFS-1386:
--

Attachment: HDFS-1386.patch

 TestJMXGet fails in jdk7
 

 Key: HDFS-1386
 URL: https://issues.apache.org/jira/browse/HDFS-1386
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, namenode, test
Affects Versions: 0.22.0
Reporter: Tanping Wang
Assignee: Jonathan Eagles
Priority: Blocker
  Labels: java7
 Attachments: HDFS-1386.patch, HDFS-1386.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-1386) TestJMXGet fails in jdk7

2013-11-18 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HDFS-1386:
--

Status: Patch Available  (was: Open)

Uploaded a new patch that removes the JournalNodeInfo bean. The journal node 
is tricky to clean up all the way, since it registers with the default metrics 
system, which allows registration but offers no easy way to unregister. If you 
are OK with this, I can file separate JIRAs: one for YARN and one for default 
metrics system unregistration.

 TestJMXGet fails in jdk7
 

 Key: HDFS-1386
 URL: https://issues.apache.org/jira/browse/HDFS-1386
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, namenode, test
Affects Versions: 0.22.0
Reporter: Tanping Wang
Assignee: Jonathan Eagles
Priority: Blocker
  Labels: java7
 Attachments: HDFS-1386.patch, HDFS-1386.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Reopened] (HDFS-1386) unregister namenode datanode info MXBean

2013-11-14 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles reopened HDFS-1386:
---

  Assignee: Jonathan Eagles  (was: Tanping Wang)

Reopening this issue since it has been causing test failures in jdk7.

 unregister namenode datanode info MXBean
 

 Key: HDFS-1386
 URL: https://issues.apache.org/jira/browse/HDFS-1386
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, namenode, test
Affects Versions: 0.22.0
Reporter: Tanping Wang
Assignee: Jonathan Eagles
Priority: Blocker
 Fix For: 0.22.0






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-1386) TestJMXGet fails in jdk7

2013-11-14 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HDFS-1386:
--

Summary: TestJMXGet fails in jdk7  (was: unregister namenode datanode info 
MXBean)

 TestJMXGet fails in jdk7
 

 Key: HDFS-1386
 URL: https://issues.apache.org/jira/browse/HDFS-1386
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, namenode, test
Affects Versions: 0.22.0
Reporter: Tanping Wang
Assignee: Jonathan Eagles
Priority: Blocker
 Fix For: 0.22.0






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-1386) TestJMXGet fails in jdk7

2013-11-14 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823021#comment-13823021
 ] 

Jonathan Eagles commented on HDFS-1386:
---

Upon MiniDFSCluster shutdown, it turns out that DataNode, FSNamesystem, and 
NameNode don't unregister all of their beans. During a new MiniDFSCluster 
startup, those same systems then fail to register the beans, causing 
NullPointerExceptions when beans left over from clusters that have already 
been shut down are invoked.
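
A minimal sketch of the register/unregister pairing the fix needs (bean and 
class names are illustrative, not the actual patch):
{code:java}
import javax.management.ObjectName;
import org.apache.hadoop.metrics2.util.MBeans;

class DataNodeInfoBeanLifecycle {
  private ObjectName mxBeanName;

  void register(Object bean) {
    // Keep the ObjectName so shutdown can remove the bean later.
    mxBeanName = MBeans.register("DataNode", "DataNodeInfo", bean);
  }

  void shutdown() {
    if (mxBeanName != null) {
      // Without this, a stale bean survives MiniDFSCluster shutdown and the
      // next cluster started in the same JVM trips over it.
      MBeans.unregister(mxBeanName);
      mxBeanName = null;
    }
  }
}
{code}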

 TestJMXGet fails in jdk7
 

 Key: HDFS-1386
 URL: https://issues.apache.org/jira/browse/HDFS-1386
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, namenode, test
Affects Versions: 0.22.0
Reporter: Tanping Wang
Assignee: Jonathan Eagles
Priority: Blocker
 Fix For: 0.22.0






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-1386) TestJMXGet fails in jdk7

2013-11-14 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HDFS-1386:
--

   Fix Version/s: (was: 0.22.0)
  Labels: java7  (was: )
Target Version/s: 3.0.0, 2.3.0, 0.23.10
  Status: Patch Available  (was: Reopened)

 TestJMXGet fails in jdk7
 

 Key: HDFS-1386
 URL: https://issues.apache.org/jira/browse/HDFS-1386
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, namenode, test
Affects Versions: 0.22.0
Reporter: Tanping Wang
Assignee: Jonathan Eagles
Priority: Blocker
  Labels: java7
 Attachments: HDFS-1386.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-1386) TestJMXGet fails in jdk7

2013-11-14 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HDFS-1386:
--

Attachment: HDFS-1386.patch

 TestJMXGet fails in jdk7
 

 Key: HDFS-1386
 URL: https://issues.apache.org/jira/browse/HDFS-1386
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, namenode, test
Affects Versions: 0.22.0
Reporter: Tanping Wang
Assignee: Jonathan Eagles
Priority: Blocker
  Labels: java7
 Attachments: HDFS-1386.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-1386) TestJMXGet fails in jdk7

2013-11-14 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823032#comment-13823032
 ] 

Jonathan Eagles commented on HDFS-1386:
---

While this fix is listed as part of HADOOP-6728 and HDFS-1117, no fix looks 
to be coming soon from those JIRAs. It is needed more urgently than before, 
since failures have been occurring regularly in tests running with jdk7.

 TestJMXGet fails in jdk7
 

 Key: HDFS-1386
 URL: https://issues.apache.org/jira/browse/HDFS-1386
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, namenode, test
Affects Versions: 0.22.0
Reporter: Tanping Wang
Assignee: Jonathan Eagles
Priority: Blocker
  Labels: java7
 Attachments: HDFS-1386.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5514) FSNamesystem's fsLock should allow custom implementation

2013-11-14 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823259#comment-13823259
 ] 

Jonathan Eagles commented on HDFS-5514:
---

Hi, Daryn. I like what is going on here. One minor nit in addition to the 
above feedback: please remove the unnecessary _import 
java.util.concurrent.locks.ReadWriteLock_ from TestFSNamesystem.java.

 FSNamesystem's fsLock should allow custom implementation
 

 Key: HDFS-5514
 URL: https://issues.apache.org/jira/browse/HDFS-5514
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HDFS-5514.patch


 Changing {{fsLock}} from a {{ReentrantReadWriteLock}} to an API-compatible 
 class that encapsulates the rwLock will allow more sophisticated locking 
 implementations, such as fine-grained locking.
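
 An editorial sketch of the encapsulation idea (not the actual patch): keep 
 the coarse rwLock behind the same read/write API so callers are untouched, 
 while finer-grained implementations can be swapped in later.
 {code:java}
 import java.util.concurrent.locks.Lock;
 import java.util.concurrent.locks.ReentrantReadWriteLock;

 class FSNamesystemLock {
   // Today a single coarse lock; an alternative implementation could hand
   // out finer-grained locks without changing callers.
   private final ReentrantReadWriteLock coarseLock = new ReentrantReadWriteLock(true);

   Lock readLock()  { return coarseLock.readLock(); }
   Lock writeLock() { return coarseLock.writeLock(); }
 }
 {code}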



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-4511) Cover package org.apache.hadoop.hdfs.tools with unit test

2013-10-18 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HDFS-4511:
--

   Resolution: Fixed
Fix Version/s: 2.3.0
   3.0.0
   Status: Resolved  (was: Patch Available)

I put this into branch-2 and trunk. Thanks everybody.

 Cover package org.apache.hadoop.hdfs.tools with unit test
 -

 Key: HDFS-4511
 URL: https://issues.apache.org/jira/browse/HDFS-4511
 Project: Hadoop HDFS
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Vadim Bondarev
Assignee: Andrey Klochkov
 Fix For: 3.0.0, 2.3.0

 Attachments: HADOOP-4511-branch-0.23-a.patch, 
 HADOOP-4511-branch-2-a.patch, HADOOP-4511-trunk-a.patch, 
 HDFS-4511-branch-2--N2.patch, HDFS-4511--n6.patch, HDFS-4511-trunk--N4.patch, 
 HDFS-4511-trunk--n5.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5357) TestFileSystemAccessService failures in JDK7

2013-10-14 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13794519#comment-13794519
 ] 

Jonathan Eagles commented on HDFS-5357:
---

+1. Thanks, Rob. This patch is only for branch 0.23, so I am ignoring the 
Hadoop QA results. This test now runs consistently with this change.

 TestFileSystemAccessService failures in JDK7
 

 Key: HDFS-5357
 URL: https://issues.apache.org/jira/browse/HDFS-5357
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.9
Reporter: Robert Parker
Assignee: Robert Parker
 Attachments: HDFS-5357v1.patch


 junit.framework.AssertionFailedError: Expected Exception: ServiceException 
 got: ExceptionInInitializerError
   at junit.framework.Assert.fail(Assert.java:47)
   at 
 org.apache.hadoop.test.TestExceptionHelper$1.evaluate(TestExceptionHelper.java:56)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray2(ReflectionUtils.java:208)
   at 
 org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:159)
   at 
 org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:87)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:95)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-4329) DFSShell issues with directories with spaces in name

2013-08-23 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13748602#comment-13748602
 ] 

Jonathan Eagles commented on HDFS-4329:
---

Putting this change in before the patches go stale. Thanks, [~cabad]!

 DFSShell issues with directories with spaces in name
 

 Key: HDFS-4329
 URL: https://issues.apache.org/jira/browse/HDFS-4329
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 3.0.0, 0.23.10, 2.1.1-beta
Reporter: Andy Isaacson
Assignee: Cristina L. Abad
 Attachments: 4329.branch-0.23.patch, 4329.branch-0.23.v3.patch, 
 4329.branch-2.patch, 4329.trunk.patch, 4329.trunk.v2.patch, 
 4329.trunk.v3.patch


 This bug was discovered by Casey Ching.
 The command {{dfs -put /foo/hello.txt dir}} is supposed to create 
 {{dir/hello.txt}} on HDFS.  It doesn't work right if dir has a space in it:
 {code}
 [adi@haus01 ~]$ hdfs dfs -mkdir 'space cat'
 [adi@haus01 ~]$ hdfs dfs -put /etc/motd 'space cat'
 [adi@haus01 ~]$ hdfs dfs -cat 'space cat/motd'
 cat: `space cat/motd': No such file or directory
 [adi@haus01 ~]$ hdfs dfs -ls space\*
 Found 1 items
 -rw-r--r--   2 adi supergroup251 2012-12-20 11:16 space%2520cat/motd
 [adi@haus01 ~]$ hdfs dfs -cat 'space%20cat/motd'
 Welcome to Ubuntu 12.04.1 LTS (GNU/Linux 3.2.0-30-generic x86_64)
 ...
 {code}
 Note that the {{dfs -ls}} output re-encodes the already wrongly encoded 
 directory name, turning {{%20}} into {{%2520}}. It does the same thing with 
 a literal space:
 {code}
 [adi@haus01 ~]$ hdfs dfs -touchz 'space cat/foo'
 [adi@haus01 ~]$ hdfs dfs -ls 'space cat'
 Found 1 items
 -rw-r--r--   2 adi supergroup  0 2012-12-20 11:36 space%20cat/foo
 {code}
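
 An editorial illustration of the double encoding at work (plain JDK, not the 
 DFSShell code): percent-encoding an already-encoded name is exactly what 
 turns {{%20}} into {{%2520}} in the listings above.
 {code:java}
 import java.net.URI;

 public class DoubleEncode {
   public static void main(String[] args) throws Exception {
     // URI's multi-arg constructor percent-encodes illegal characters,
     // including '%' itself, hence the second pass corrupts the name.
     String once  = new URI(null, null, "space cat", null).getRawPath(); // space%20cat
     String twice = new URI(null, null, once, null).getRawPath();        // space%2520cat
     System.out.println(once + " -> " + twice);
   }
 }
 {code}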

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4329) DFSShell issues with directories with spaces in name

2013-08-23 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HDFS-4329:
--

   Resolution: Fixed
Fix Version/s: 2.1.1-beta
   0.23.10
   2.3.0
   3.0.0
   Status: Resolved  (was: Patch Available)

 DFSShell issues with directories with spaces in name
 

 Key: HDFS-4329
 URL: https://issues.apache.org/jira/browse/HDFS-4329
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 3.0.0, 0.23.10, 2.1.1-beta
Reporter: Andy Isaacson
Assignee: Cristina L. Abad
 Fix For: 3.0.0, 2.3.0, 0.23.10, 2.1.1-beta

 Attachments: 4329.branch-0.23.patch, 4329.branch-0.23.v3.patch, 
 4329.branch-2.patch, 4329.trunk.patch, 4329.trunk.v2.patch, 
 4329.trunk.v3.patch


 This bug was discovered by Casey Ching.
 The command {{dfs -put /foo/hello.txt dir}} is supposed to create 
 {{dir/hello.txt}} on HDFS.  It doesn't work right if dir has a space in it:
 {code}
 [adi@haus01 ~]$ hdfs dfs -mkdir 'space cat'
 [adi@haus01 ~]$ hdfs dfs -put /etc/motd 'space cat'
 [adi@haus01 ~]$ hdfs dfs -cat 'space cat/motd'
 cat: `space cat/motd': No such file or directory
 [adi@haus01 ~]$ hdfs dfs -ls space\*
 Found 1 items
 -rw-r--r--   2 adi supergroup251 2012-12-20 11:16 space%2520cat/motd
 [adi@haus01 ~]$ hdfs dfs -cat 'space%20cat/motd'
 Welcome to Ubuntu 12.04.1 LTS (GNU/Linux 3.2.0-30-generic x86_64)
 ...
 {code}
 Note that the {{dfs -ls}} output re-encodes the already wrongly encoded 
 directory name, turning {{%20}} into {{%2520}}. It does the same thing with 
 a literal space:
 {code}
 [adi@haus01 ~]$ hdfs dfs -touchz 'space cat/foo'
 [adi@haus01 ~]$ hdfs dfs -ls 'space cat'
 Found 1 items
 -rw-r--r--   2 adi supergroup  0 2012-12-20 11:36 space%20cat/foo
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4836) Update Tomcat version for httpfs to 6.0.37

2013-05-17 Thread Jonathan Eagles (JIRA)
Jonathan Eagles created HDFS-4836:
-

 Summary: Update Tomcat version for httpfs to 6.0.37
 Key: HDFS-4836
 URL: https://issues.apache.org/jira/browse/HDFS-4836
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jonathan Eagles


Tomcat has released a new version with security fixes:

http://tomcat.apache.org/security-6.html#Fixed_in_Apache_Tomcat_6.0.37

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4836) Update Tomcat version for httpfs to 6.0.37

2013-05-17 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HDFS-4836:
--

Attachment: HDFS-4836.patch

 Update Tomcat version for httpfs to 6.0.37
 --

 Key: HDFS-4836
 URL: https://issues.apache.org/jira/browse/HDFS-4836
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jonathan Eagles
 Attachments: HDFS-4836.patch


 Tomcat has released a new version with security fixes:
 http://tomcat.apache.org/security-6.html#Fixed_in_Apache_Tomcat_6.0.37

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4836) Update Tomcat version for httpfs to 6.0.37

2013-05-17 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HDFS-4836:
--

Assignee: Jonathan Eagles

 Update Tomcat version for httpfs to 6.0.37
 --

 Key: HDFS-4836
 URL: https://issues.apache.org/jira/browse/HDFS-4836
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jonathan Eagles
Assignee: Jonathan Eagles
 Attachments: HDFS-4836.patch


 Tomcat has released a new version with security fixes:
 http://tomcat.apache.org/security-6.html#Fixed_in_Apache_Tomcat_6.0.37

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4836) Update Tomcat version for httpfs to 6.0.37

2013-05-17 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HDFS-4836:
--

Status: Patch Available  (was: Open)

 Update Tomcat version for httpfs to 6.0.37
 --

 Key: HDFS-4836
 URL: https://issues.apache.org/jira/browse/HDFS-4836
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jonathan Eagles
Assignee: Jonathan Eagles
 Attachments: HDFS-4836.patch


 Tomcat has released a new version with security fixes:
 http://tomcat.apache.org/security-6.html#Fixed_in_Apache_Tomcat_6.0.37

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4836) Update Tomcat version for httpfs to 6.0.37

2013-05-17 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HDFS-4836:
--

Priority: Trivial  (was: Major)

 Update Tomcat version for httpfs to 6.0.37
 --

 Key: HDFS-4836
 URL: https://issues.apache.org/jira/browse/HDFS-4836
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jonathan Eagles
Assignee: Jonathan Eagles
Priority: Trivial
 Attachments: HDFS-4836.patch


 Tomcat has released a new version with security fixes:
 http://tomcat.apache.org/security-6.html#Fixed_in_Apache_Tomcat_6.0.37

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4836) Update Tomcat version for httpfs to 6.0.37

2013-05-17 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13661040#comment-13661040
 ] 

Jonathan Eagles commented on HDFS-4836:
---

Test failures are due to the ongoing issue described in HDFS-4825. The current 
tests are adequate to exercise the new version of Tomcat.

 Update Tomcat version for httpfs to 6.0.37
 --

 Key: HDFS-4836
 URL: https://issues.apache.org/jira/browse/HDFS-4836
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jonathan Eagles
Assignee: Jonathan Eagles
Priority: Trivial
 Attachments: HDFS-4836.patch


 Tomcat has released a new version with security fixes:
 http://tomcat.apache.org/security-6.html#Fixed_in_Apache_Tomcat_6.0.37

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-2359) NPE found in Datanode log while Disk failed during different HDFS operation

2011-09-26 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HDFS-2359:
--

Attachment: HDFS-2359-branch-0.20-security.patch

 NPE found in Datanode log while Disk failed during different HDFS operation
 ---

 Key: HDFS-2359
 URL: https://issues.apache.org/jira/browse/HDFS-2359
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.20.205.0
Reporter: Rajit
Assignee: Jonathan Eagles
 Attachments: HDFS-2359-branch-0.20-security.patch


 Scenario:
 I have a cluster of 4 DNs, each of which has 12 disks.
 In hdfs-site.xml I have dfs.datanode.failed.volumes.tolerated=3.
 During the execution of distcp (hdfs-hdfs), I fail 3 disks in one Datanode by 
 setting the data directory permissions to 000. The distcp job is successful, 
 but I am getting some NullPointerExceptions in the Datanode log.
 In one thread
 $hadoop distcp  /user/$HADOOPQA_USER/data1 /user/$HADOOPQA_USER/data3
 In another thread in a datanode
 $ chmod 000 /xyz/{0,1,2}/hadoop/var/hdfs/data
 where [ dfs.data.dir is set as /xyz/{0..11}/hadoop/var/hdfs/data ]
 Log Snippet from the Datanode
 =
 2011-09-19 12:43:40,314 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 Unexpected error trying to delete block
 blk_7065198814142552283_62557. BlockInfo not found in volumeMap.
 2011-09-19 12:43:40,314 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 Unexpected error trying to delete block
 blk_7066946313092770579_39189. BlockInfo not found in volumeMap.
 2011-09-19 12:43:40,314 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 Unexpected error trying to delete block
 blk_7070305189404753930_49359. BlockInfo not found in volumeMap.
 2011-09-19 12:43:40,327 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 Error processing datanode Command
 java.io.IOException: Error in deleting blocks.
 at 
 org.apache.hadoop.hdfs.server.datanode.FSDataset.invalidate(FSDataset.java:1820)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.processCommand(DataNode.java:1074)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.processCommand(DataNode.java:1036)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:891)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1419)
 at java.lang.Thread.run(Thread.java:619)
 2011-09-19 12:43:41,304 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode:
 DatanodeRegistration(xx.xxx.xxx.xxx:, 
 storageID=xx--xx.xxx.xxx.xxx--xxx, infoPort=1006,
 ipcPort=8020):DataXceiver
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hdfs.server.datanode.DataBlockScanner$LogFileHandler.appendLine(DataBlockScanner.java:788)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.updateScanStatusInternal(DataBlockScanner.java:365)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.verifiedByClient(DataBlockScanner.java:308)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:205)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:99)
 at java.lang.Thread.run(Thread.java:619)
 2011-09-19 12:43:43,313 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 Unexpected error trying to delete block
 blk_7071818644980664768_40827. BlockInfo not found in volumeMap.
 2011-09-19 12:43:43,313 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 Unexpected error trying to delete block
 blk_7073840977856837621_62108. BlockInfo not found in volumeMap.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2359) NPE found in Datanode log while Disk failed during different HDFS operation

2011-09-26 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13114789#comment-13114789
 ] 

Jonathan Eagles commented on HDFS-2359:
---

No tests were added due to the difficulty of testing a private method of a 
static inner class.

 NPE found in Datanode log while Disk failed during different HDFS operation
 ---

 Key: HDFS-2359
 URL: https://issues.apache.org/jira/browse/HDFS-2359
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.20.205.0
Reporter: Rajit
Assignee: Jonathan Eagles
 Attachments: HDFS-2359-branch-0.20-security.patch


 Scenario:
 I have a cluster of 4 DNs, each of which has 12 disks.
 In hdfs-site.xml I have dfs.datanode.failed.volumes.tolerated=3 
 During the execution of distcp (HDFS-to-HDFS), I fail 3 disks in one 
 Datanode by setting the data directory permissions to 000. The distcp job 
 succeeds, but I get NullPointerExceptions in the Datanode log.
 In one shell:
 $hadoop distcp  /user/$HADOOPQA_USER/data1 /user/$HADOOPQA_USER/data3
 In another shell on a datanode:
 $ chmod 000 /xyz/{0,1,2}/hadoop/var/hdfs/data
 where [ dfs.data.dir is set as /xyz/{0..11}/hadoop/var/hdfs/data ]
 Log Snippet from the Datanode
 =
 2011-09-19 12:43:40,314 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 Unexpected error trying to delete block
 blk_7065198814142552283_62557. BlockInfo not found in volumeMap.
 2011-09-19 12:43:40,314 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 Unexpected error trying to delete block
 blk_7066946313092770579_39189. BlockInfo not found in volumeMap.
 2011-09-19 12:43:40,314 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 Unexpected error trying to delete block
 blk_7070305189404753930_49359. BlockInfo not found in volumeMap.
 2011-09-19 12:43:40,327 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 Error processing datanode Command
 java.io.IOException: Error in deleting blocks.
 at 
 org.apache.hadoop.hdfs.server.datanode.FSDataset.invalidate(FSDataset.java:1820)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.processCommand(DataNode.java:1074)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.processCommand(DataNode.java:1036)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:891)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1419)
 at java.lang.Thread.run(Thread.java:619)
 2011-09-19 12:43:41,304 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode:
 DatanodeRegistration(xx.xxx.xxx.xxx:, 
 storageID=xx--xx.xxx.xxx.xxx--xxx, infoPort=1006,
 ipcPort=8020):DataXceiver
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hdfs.server.datanode.DataBlockScanner$LogFileHandler.appendLine(DataBlockScanner.java:788)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.updateScanStatusInternal(DataBlockScanner.java:365)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.verifiedByClient(DataBlockScanner.java:308)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:205)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:99)
 at java.lang.Thread.run(Thread.java:619)
 2011-09-19 12:43:43,313 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 Unexpected error trying to delete block
 blk_7071818644980664768_40827. BlockInfo not found in volumeMap.
 2011-09-19 12:43:43,313 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 Unexpected error trying to delete block
 blk_7073840977856837621_62108. BlockInfo not found in volumeMap.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2359) NPE found in Datanode log while Disk failed during different HDFS operation

2011-09-26 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HDFS-2359:
--

Status: Patch Available  (was: Open)

 NPE found in Datanode log while Disk failed during different HDFS operation
 ---

 Key: HDFS-2359
 URL: https://issues.apache.org/jira/browse/HDFS-2359
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.20.205.0
Reporter: Rajit
Assignee: Jonathan Eagles
 Attachments: HDFS-2359-branch-0.20-security.patch



--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2359) NPE found in Datanode log while Disk failed during different HDFS operation

2011-09-26 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13114805#comment-13114805
 ] 

Jonathan Eagles commented on HDFS-2359:
---

 [exec] -1 overall.
 [exec] 
 [exec] +1 @author.  The patch does not contain any @author tags.
 [exec] 
 [exec] -1 tests included.  The patch doesn't appear to include any new or modified tests.
 [exec]                     Please justify why no tests are needed for this patch.
 [exec] 
 [exec] +1 javadoc.  The javadoc tool did not generate any warning messages.
 [exec] 
 [exec] +1 javac.  The applied patch does not increase the total number of javac compiler warnings.
 [exec] 
 [exec] +1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.


 NPE found in Datanode log while Disk failed during different HDFS operation
 ---

 Key: HDFS-2359
 URL: https://issues.apache.org/jira/browse/HDFS-2359
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.20.205.0
Reporter: Rajit
Assignee: Jonathan Eagles
 Attachments: HDFS-2359-branch-0.20-security.patch



--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2359) NPE found in Datanode log while Disk failed during different HDFS operation

2011-09-26 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13114807#comment-13114807
 ] 

Jonathan Eagles commented on HDFS-2359:
---

See above comment on why no tests were provided. This patch should go into 
branch-0.20-security and branch-0.20-security-205.

 NPE found in Datanode log while Disk failed during different HDFS operation
 ---

 Key: HDFS-2359
 URL: https://issues.apache.org/jira/browse/HDFS-2359
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.20.205.0
Reporter: Rajit
Assignee: Jonathan Eagles
 Attachments: HDFS-2359-branch-0.20-security.patch



--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (HDFS-2359) NPE found in Datanode log while Disk failed during different HDFS operation

2011-09-25 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles reassigned HDFS-2359:
-

Assignee: Jonathan Eagles

 NPE found in Datanode log while Disk failed during different HDFS operation
 ---

 Key: HDFS-2359
 URL: https://issues.apache.org/jira/browse/HDFS-2359
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.20.205.0
Reporter: Rajit
Assignee: Jonathan Eagles


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-1331) dfs -test should work like /bin/test

2011-03-25 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13011352#comment-13011352
 ] 

Jonathan Eagles commented on HDFS-1331:
---

e) where is -l?

 dfs -test should work like /bin/test
 

 Key: HDFS-1331
 URL: https://issues.apache.org/jira/browse/HDFS-1331
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 0.20.2
Reporter: Allen Wittenauer
Assignee: Daryn Sharp
Priority: Minor

 hadoop dfs -test doesn't act like its shell equivalent, making it difficult 
 to actually use if you are used to the real test command:
 hadoop:
 $hadoop dfs -test -d /nonexist; echo $?
 test: File does not exist: /nonexist
 255
 shell:
 $ test -d /nonexist; echo $?
 1
 a) Why is it spitting out a message? Even so, why is it saying file instead 
 of directory when I used -d?
 b) Why is the return code 255? I realize this is documented as '0' if true, 
 but the docs basically say the value is undefined if it isn't.
 c) where is -f?
 d) Why is empty -z instead of -s? Was it a misunderstanding of the man page?
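
 For contrast, the /bin/test-like contract being requested would behave 
 roughly as follows (hypothetical transcript; /existing stands in for any 
 path that exists, and the silent 0/1 exit codes are the ask, not current 
 behavior):

 $ hadoop dfs -test -d /nonexist; echo $?
 1
 $ hadoop dfs -test -d /existing; echo $?
 0
 $ test -d /nonexist; echo $?
 1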

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-1788) FsShell ls: Show symlinks properties

2011-03-25 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HDFS-1788:
--

Description: The ls FsShell command implementation has been consistent with 
the Linux implementation of ls \-l. With the addition of symlinks, I would 
expect the ability to show file type 'd' for directory, '\-' for file, and 
'l' for symlink. In addition, following the linkname entry for symlinks, I 
would expect the ability to show '\-> link target'. In Linux, the default is 
to show the properties of the link and not of the link target. In Linux, the 
'-L' option allows for the dereferencing of symlinks to show link target 
properties, but it is not the default.  (was: The ls FsShell command 
implementation has been consistent with the Linux implementation of ls -l. 
With the addition of symlinks, I would expect the ability to show file type 
'd' for directory, '-' for file, and 'l' for symlink. In addition, following 
the linkname entry for symlinks, I would expect the ability to show '-> link 
target'. In Linux, the default is to show the properties of the link and not 
of the link target. In Linux, the '-L' option allows for the dereferencing of 
symlinks to show link target properties, but it is not the default.)

 FsShell ls: Show symlinks properties
 

 Key: HDFS-1788
 URL: https://issues.apache.org/jira/browse/HDFS-1788
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Reporter: Jonathan Eagles
Priority: Minor

 The ls FsShell command implementation has been consistent with the Linux 
 implementation of ls \-l. With the addition of symlinks, I would expect the 
 ability to show file type 'd' for directory, '\-' for file, and 'l' for 
 symlink. In addition, following the linkname entry for symlinks, I would 
 expect the ability to show '\-> link target'. In Linux, the default is to 
 show the properties of the link and not of the link target. In Linux, the 
 '-L' option allows for the dereferencing of symlinks to show link target 
 properties, but it is not the default.
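
 As an illustration of the requested listing (a mock-up modeled on GNU ls 
 \-l, not existing FsShell output; the user and paths are invented):

 $ hadoop fs -ls /user/jdoe
 drwxr-xr-x   - jdoe supergroup   0 2011-03-25 10:00 /user/jdoe/dir
 -rw-r--r--   3 jdoe supergroup  42 2011-03-25 10:01 /user/jdoe/file
 lrwxrwxrwx   - jdoe supergroup   0 2011-03-25 10:02 /user/jdoe/link -> /user/jdoe/file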

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira