Hadoop-Hdfs-0.23-Build - Build # 757 - Still Failing

2013-10-11 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/757/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7876 lines...]
[ERROR] location: class 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpWriteBlockProto
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3330,10]
 cannot find symbol
[ERROR] symbol  : method 
ensureFieldAccessorsInitialized(java.lang.Class<org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpWriteBlockProto>,java.lang.Class<org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpWriteBlockProto.Builder>)
[ERROR] location: class com.google.protobuf.GeneratedMessage.FieldAccessorTable
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3335,31]
 cannot find symbol
[ERROR] symbol  : class AbstractParser
[ERROR] location: package com.google.protobuf
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[3344,4]
 method does not override or implement a method from a supertype
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[4098,12]
 cannot find symbol
[ERROR] symbol  : method 
ensureFieldAccessorsInitialized(java.lang.Class<org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpWriteBlockProto>,java.lang.Class<org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpWriteBlockProto.Builder>)
[ERROR] location: class com.google.protobuf.GeneratedMessage.FieldAccessorTable
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[4371,104]
 cannot find symbol
[ERROR] symbol  : method getUnfinishedMessage()
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5264,8]
 getUnknownFields() in 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto 
cannot override getUnknownFields() in com.google.protobuf.GeneratedMessage; 
overridden method is final
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5284,19]
 cannot find symbol
[ERROR] symbol  : method 
parseUnknownField(com.google.protobuf.CodedInputStream,com.google.protobuf.UnknownFieldSet.Builder,com.google.protobuf.ExtensionRegistryLite,int)
[ERROR] location: class 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5314,15]
 cannot find symbol
[ERROR] symbol  : method 
setUnfinishedMessage(org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto)
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5317,27]
 cannot find symbol
[ERROR] symbol  : method 
setUnfinishedMessage(org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto)
[ERROR] location: class com.google.protobuf.InvalidProtocolBufferException
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5323,8]
 cannot find symbol
[ERROR] symbol  : method makeExtensionsImmutable()
[ERROR] location: class 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpTransferBlockProto
[ERROR] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5334,10]
 cannot find symbol
[ERROR] symbol  : method 

Build failed in Jenkins: Hadoop-Hdfs-0.23-Build #757

2013-10-11 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/757/changes

Changes:

[jlowe] svn merge -c 1520964 FIXES: MAPREDUCE-5414. TestTaskAttempt fails in 
JDK7 with NPE. Contributed by Nemon Lou

[jlowe] svn merge -c 1377943 FIXES: MAPREDUCE-4579. Split TestTaskAttempt into 
two so as to pass tests on jdk7. Contributed by Thomas Graves

[jlowe] YARN-155. TestAppManager intermittently fails with jdk7. Contributed by 
Thomas Graves

[jlowe] svn merge -c 1511464 FIXES: MAPREDUCE-5425. Junit in 
TestJobHistoryServer failing in jdk 7. Contributed by Robert Parker

[jlowe] svn merge -c 1457061 FIXES: MAPREDUCE-4571. TestHsWebServicesJobs fails 
on jdk7. Contributed by Thomas Graves

[jlowe] svn merge -c 1457065 FIXES: MAPREDUCE-4716. 
TestHsWebServicesJobsQuery.testJobsQueryStateInvalid fails with jdk7. 
Contributed by Thomas Graves

--
[...truncated 7683 lines...]
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[10533,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[10544,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[8357,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[8368,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[12641,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[12652,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[9741,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[9752,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[1781,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[1792,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5338,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[5349,30]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:[6290,37]
 cannot find symbol
[ERROR] symbol  : class Parser
[ERROR] location: package com.google.protobuf
[ERROR] 
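
The missing symbols above (com.google.protobuf.Parser, AbstractParser,
GeneratedMessage.makeExtensionsImmutable(),
InvalidProtocolBufferException.getUnfinishedMessage()/setUnfinishedMessage(),
and FieldAccessorTable.ensureFieldAccessorsInitialized()) all first appeared in
protobuf-java 2.5.0, so this failure pattern is consistent with sources
generated by protoc 2.5 being compiled against an older protobuf-java jar. A
minimal classpath probe, assuming nothing beyond a JDK and the protobuf jar
(the class name here is hypothetical):

{code}
// Hypothetical probe, not part of the build: com.google.protobuf.Parser
// exists only in protobuf-java 2.5.0+, so its absence explains the
// "cannot find symbol" errors against protoc-2.5-generated sources.
public class ProtobufVersionProbe {
  public static void main(String[] args) {
    try {
      Class.forName("com.google.protobuf.Parser");
      System.out.println("protobuf-java 2.5.0+ is on the classpath");
    } catch (ClassNotFoundException e) {
      System.out.println("pre-2.5 protobuf-java found; regenerate sources"
          + " with a matching protoc or upgrade the jar");
    }
  }
}
{code}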

[jira] [Created] (HDFS-5345) NPE in block_info_xml JSP if the block has been deleted

2013-10-11 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-5345:


 Summary: NPE in block_info_xml JSP if the block has been deleted
 Key: HDFS-5345
 URL: https://issues.apache.org/jira/browse/HDFS-5345
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.2.0
Reporter: Steve Loughran
Priority: Minor


If you ask for a block info report on a block that has been deleted, you see a 
stack trace and a 500 error.

Steps to replicate:
# create a file
# browse to it
# get the block info
# delete the file
# reload the block info page

Maybe a 404 is the response to raise instead.
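
A minimal sketch of that behavior, with hypothetical names (not the actual
patch), showing the null check the JSP code path would need:

{code}
// Illustrative only: answer 404 instead of NPE when the block is gone.
import java.io.IOException;
import javax.servlet.http.HttpServletResponse;

final class BlockInfoXmlHelper {
  /** Sends 404 and returns false when the requested block is unknown. */
  static boolean ensureBlockExists(Object block, HttpServletResponse resp)
      throws IOException {
    if (block == null) {
      resp.sendError(HttpServletResponse.SC_NOT_FOUND,
          "Block not found; it may have been deleted");
      return false;
    }
    return true;
  }
}
{code}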



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5346) Replication queues should not be initialized in the middle of IBR processing.

2013-10-11 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-5346:


 Summary: Replication queues should not be initialized in the 
middle of IBR processing.
 Key: HDFS-5346
 URL: https://issues.apache.org/jira/browse/HDFS-5346
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.9, 2.3.0
Reporter: Kihwal Lee






--
This message was sent by Atlassian JIRA
(v6.1#6144)


Re: hdfs project separation

2013-10-11 Thread Milind Bhandarkar
Let me add a bit more about the feasibility of this.

I have been doing some experiments by duplicating some common code in 
hdfs-only and yarn/MR-only builds, and am able to build and use hdfs 
independently.

Now that bigtop has matured, we can still do a single distro in apache with 
independently released mr/yarn and hdfs.

That will enable parallel development, and will also reduce the stabilization 
overload at mega-release time.

If HDFS is released independently, with its own RPC and protocol versions, 
features such as pluggable namespaces will not have to wait for the next 
mega-release of the entire stack.

Would love to hear what hdfs developers think about this.

- milind

Sent from my iPhone

 On Oct 10, 2013, at 20:31, Milind Bhandarkar mbhandar...@gopivotal.com 
 wrote:
 
 (this message is not intended for specific folks by mistake, but for all of 
 the hdfs-dev list, deliberately ;)
 
 Hello Folks,
 
 I do not want to scratch the already bleeding wounds, and want to resolve 
 these issues amicably, without causing a big inter-vendor confrontation.
 
 So, these are the facts, as I (and several others in the hadoop community) 
 see this.
 
 1. there was an attempt to separate different hadoop projects, such as 
 common, hdfs, mapreduce.
 
 2. that attempt was aborted because of several things, with common ownership, 
 i.e. committership, being the biggest issue.
 
 3. in the meantime, several important, release-worthy hdfs improvements were 
 committed to Hadoop. (That's why I supported Konst's appeal for 0.22.) These 
 were also incorporated into Hadoop products by the largest hadoop ecosystem 
 contributor, and several others.
 
 4. All the apache hadoop bylaws were followed, to get these improvements into 
 Hadoop project.
 
 5. Yet, the common project, which is not even a top-level project since the 
 awkward re-merge happened, got an incompatible wire-protocol change, which 
 was accepted and promoted by a specific section, in spite of the kicking and 
 screaming of (what I think of as) a representative of a large hadoop user 
 community.
 
 6. That, and other such changes, have created a big issue for the part of the 
 community which has tested the hdfs part of 2.x and has spent a lot of effort 
 to stabilize hdfs, since this was the main target of the assault from 
 proprietary storage systems, such as You-Know-Who.
 
 I would like to raise this issue as an individual, regardless of my 
 affiliation, so that we can make hdfs worthy of its association with the top 
 level ecosystem, without being closely associated with it.
 
 What do the hdfs developers think? 
 
 - milind
 
 Sent from my iPhone


Re: hdfs project separation

2013-10-11 Thread Doug Cutting
On Fri, Oct 11, 2013 at 9:14 AM, Milind Bhandarkar
mbhandar...@gopivotal.com wrote:
 If HDFS is released independently, with its own RPC and protocol versions, 
 features such as pluggable namespaces will not have to wait for the next 
 mega-release of the entire stack.

The plan as I understand it is to eventually be able to release
common/hdfs & yarn/mr independently, as two, three or perhaps four
different products.  Once we've got that down we can consider
splitting into multiple TLPs.  For this to transpire requires folks to
volunteer to create an independent release, establishing a plan,
helping to make the required changes, calling the vote, etc.  Someone
could propose doing this first with HDFS, YARN or whatever someone
thinks is best.  It would take concerted effort by a few folks, along
with consent of the rest of the project.

Do you have a detailed plan?  If so, you could share it and start
trying to build consensus around it.

Doug


[jira] [Created] (HDFS-5347) add HDFS NFS user guide

2013-10-11 Thread Brandon Li (JIRA)
Brandon Li created HDFS-5347:


 Summary: add HDFS NFS user guide
 Key: HDFS-5347
 URL: https://issues.apache.org/jira/browse/HDFS-5347
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: documentation
Reporter: Brandon Li
Assignee: Brandon Li






--
This message was sent by Atlassian JIRA
(v6.1#6144)


RE: hdfs project separation

2013-10-11 Thread Milind Bhandarkar
Doug,

Your understanding is correct. But I would like to start with a less
ambitious plan first. By duplicating common rpc etc code, and renaming
packages in common, we can independently build two different artifacts from
the same repo, one for hdfs and one for Yarn+MR. Then we can decide whether
we want to separate these projects completely, making independent releases.

I believe the last split of the project failed because of common dependencies
in both MR and HDFS, which meant that changes to RPC etc. were affecting both
upper-level projects. I think we should avoid that by duplicating the needed
common code.

I would like to see what the community thinks, before making detailed plans.

- milind


-Original Message-
From: Doug Cutting [mailto:cutt...@apache.org]
Sent: Friday, October 11, 2013 11:12 AM
To: hdfs-dev@hadoop.apache.org
Subject: Re: hdfs project separation

On Fri, Oct 11, 2013 at 9:14 AM, Milind Bhandarkar
mbhandar...@gopivotal.com wrote:
 If HDFS is released independently, with its own RPC and protocol versions,
 features such as pluggable namespaces will not have to wait for the next
 mega-release of the entire stack.

The plan as I understand it is to eventually be able to release common/hdfs
& yarn/mr independently, as two, three or perhaps four different products.
Once we've got that down we can consider splitting into multiple TLPs.  For
this to transpire requires folks to volunteer to create an independent
release, establishing a plan, helping to make the required changes, calling
the vote, etc.  Someone could propose doing this first with HDFS, YARN or
whatever someone thinks is best.  It would take concerted effort by a few
folks, along with consent of the rest of the project.

Do you have a detailed plan?  If so, you could share it and start trying to
build consensus around it.

Doug


[jira] [Resolved] (HDFS-5224) Refactor PathBasedCache* methods to use a Path rather than a String

2013-10-11 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HDFS-5224.
-

   Resolution: Fixed
Fix Version/s: HDFS-4949
 Hadoop Flags: Reviewed

I've committed this to the HDFS-4949 branch.  Thanks for the reviews!

 Refactor PathBasedCache* methods to use a Path rather than a String
 ---

 Key: HDFS-5224
 URL: https://issues.apache.org/jira/browse/HDFS-5224
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Affects Versions: HDFS-4949
Reporter: Andrew Wang
Assignee: Chris Nauroth
 Fix For: HDFS-4949

 Attachments: HDFS-5224.1.patch, HDFS-5224.2.patch, HDFS-5224.3.patch


 As discussed in HDFS-5213, we should refactor PathBasedCacheDirective and 
 related methods in DistributedFileSystem to use a Path to represent paths to 
 cache, rather than a String.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5348) Fix error message when dfs.datanode.max.locked.memory is improperly configured

2013-10-11 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-5348:
--

 Summary: Fix error message when dfs.datanode.max.locked.memory is 
improperly configured
 Key: HDFS-5348
 URL: https://issues.apache.org/jira/browse/HDFS-5348
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-4949
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


We need to fix the error message when dfs.datanode.max.locked.memory is 
improperly configured.  Currently it says the size is less than the datanode's 
available RLIMIT_MEMLOCK limit when it really means more.
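
A sketch of the corrected wording, with illustrative variable names (the 
actual patch may differ):

{code}
// Hypothetical check: the configured value must not exceed the ulimit,
// so the message should say "more than", not "less than".
if (maxLockedMemory > memlockUlimit) {
  throw new RuntimeException("Cannot start datanode because the configured"
      + " max locked memory size (" + maxLockedMemory + ") is more than the"
      + " datanode's available RLIMIT_MEMLOCK ulimit of " + memlockUlimit);
}
{code}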



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5349) DNA_CACHE and DNA_UNCACHE should be by blockId only

2013-10-11 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-5349:
--

 Summary: DNA_CACHE and DNA_UNCACHE should be by blockId only 
 Key: HDFS-5349
 URL: https://issues.apache.org/jira/browse/HDFS-5349
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-4949
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-5349-caching.001.patch

DNA_CACHE and DNA_UNCACHE should be by blockId only.  We don't need length and 
genstamp to know what the NN asked us to cache.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HDFS-5348) Fix error message when dfs.datanode.max.locked.memory is improperly configured

2013-10-11 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HDFS-5348.
---

   Resolution: Fixed
Fix Version/s: HDFS-4949
 Hadoop Flags: Reviewed

Committed to branch.

 Fix error message when dfs.datanode.max.locked.memory is improperly configured
 --

 Key: HDFS-5348
 URL: https://issues.apache.org/jira/browse/HDFS-5348
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-4949
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: HDFS-4949

 Attachments: HDFS-5348-caching.001.patch


 We need to fix the error message when dfs.datanode.max.locked.memory is 
 improperly configured.  Currently it says the size is less than the 
 datanode's available RLIMIT_MEMLOCK limit when it really means more.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5350) Name Node should report fsimage transfer time as a metric

2013-10-11 Thread Rob Weltman (JIRA)
Rob Weltman created HDFS-5350:
-

 Summary: Name Node should report fsimage transfer time as a metric
 Key: HDFS-5350
 URL: https://issues.apache.org/jira/browse/HDFS-5350
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Rob Weltman


If the (Secondary) Name Node reported fsimage transfer times (perhaps the last 
ten of them), monitoring tools could detect slowdowns that might jeopardize 
cluster stability.
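
One possible shape for this, using the existing metrics2 API; the class and 
metric names are assumptions, not a committed design:

{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableRate;

// Hypothetical metrics source; MutableRate keeps a count and average,
// so monitoring tools can watch for slowdowns across recent transfers.
@Metrics(about = "FsImage transfer metrics", context = "dfs")
public class FsImageTransferMetrics {
  @Metric("Time taken to transfer the fsimage, in milliseconds")
  MutableRate fsImageTransferTime;

  public void recordTransfer(long durationMs) {
    fsImageTransferTime.add(durationMs);
  }
}
{code}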




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5351) Name Nodes should shut down if no image directories

2013-10-11 Thread Rob Weltman (JIRA)
Rob Weltman created HDFS-5351:
-

 Summary: Name Nodes should shut down if no image directories
 Key: HDFS-5351
 URL: https://issues.apache.org/jira/browse/HDFS-5351
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.1.1-beta
Reporter: Rob Weltman


If, for whatever reason, there are no image directories to write to, all Name 
Node instances should shut down.
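
A hedged illustration of the intended behavior (the variable names are 
assumed, not the actual patch):

{code}
// If every image directory has failed, fail fast instead of keeping a
// NameNode running that can no longer persist its namespace.
if (availableImageDirs.isEmpty()) {
  String msg = "No usable image directories remain; shutting down NameNode";
  LOG.fatal(msg);
  org.apache.hadoop.util.ExitUtil.terminate(1, msg);
}
{code}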




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5352) Server#initLog() doesn't close InputStream

2013-10-11 Thread Ted Yu (JIRA)
Ted Yu created HDFS-5352:


 Summary: Server#initLog() doesn't close InputStream
 Key: HDFS-5352
 URL: https://issues.apache.org/jira/browse/HDFS-5352
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor
 Attachments: hdfs-5352.patch

Here is related code snippet in 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/server/Server.java:
{code}
  Properties props = new Properties();
  try {
InputStream is = getResource(DEFAULT_LOG4J_PROPERTIES);
props.load(is);
  } catch (IOException ex) {
throw new ServerException(ServerException.ERROR.S03, 
DEFAULT_LOG4J_PROPERTIES, ex.getMessage(), ex);
  }
{code}
is should be closed after loading.
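
One way to do that, sketched against the snippet above (the attached patch may 
differ):

{code}
Properties props = new Properties();
InputStream is = null;
try {
  is = getResource(DEFAULT_LOG4J_PROPERTIES);
  props.load(is);
} catch (IOException ex) {
  throw new ServerException(ServerException.ERROR.S03,
      DEFAULT_LOG4J_PROPERTIES, ex.getMessage(), ex);
} finally {
  // Close unconditionally; IOUtils.closeStream swallows close() failures.
  org.apache.hadoop.io.IOUtils.closeStream(is);
}
{code}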



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5353) Short circuit reads fail when dfs.encrypt.data.transfer is enabled

2013-10-11 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-5353:


 Summary: Short circuit reads fail when dfs.encrypt.data.transfer 
is enabled
 Key: HDFS-5353
 URL: https://issues.apache.org/jira/browse/HDFS-5353
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai


DataXceiver tries to establish secure channels via SASL when 
dfs.encrypt.data.transfer is turned on. However, domain socket traffic seems to 
be unencrypted; therefore the client cannot communicate with the datanode via 
domain sockets, which makes short-circuit reads non-functional.



--
This message was sent by Atlassian JIRA
(v6.1#6144)