[jira] [Created] (HADOOP-10722) Standby NN continuing as standby when active NN machine got shutdown.

2014-06-19 Thread surendra singh lilhore (JIRA)
surendra singh lilhore created HADOOP-10722:
---

 Summary: Standby NN continuing as standby when active NN machine 
got shutdown.
 Key: HADOOP-10722
 URL: https://issues.apache.org/jira/browse/HADOOP-10722
 Project: Hadoop Common
  Issue Type: Bug
  Components: auto-failover, ha
Affects Versions: 2.4.0
Reporter: surendra singh lilhore


I have an HA cluster with 3 ZK nodes and 3 QJM nodes.
My active NN machine got shut down, but my standby NN is still in standby state.
It should have become active.

ZKFC logs


{noformat}
2014-06-19 13:39:30,810 INFO org.apache.hadoop.ha.NodeFencer: == Beginning Service Fencing Process... ==
2014-06-19 13:39:30,810 INFO org.apache.hadoop.ha.NodeFencer: Trying method 1/1: org.apache.hadoop.ha.SshFenceByTcpPort(null)
2014-06-19 13:39:30,811 INFO org.apache.hadoop.ha.SshFenceByTcpPort: Connecting to host-10-18-40-101...
2014-06-19 13:39:30,811 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: Connecting to host-10-18-40-101 port 22
2014-06-19 13:39:33,814 WARN org.apache.hadoop.ha.SshFenceByTcpPort: Unable to connect to host-10-18-40-101 as user myuser
com.jcraft.jsch.JSchException: java.net.NoRouteToHostException: No route to host
    at com.jcraft.jsch.Util.createSocket(Util.java:386)
    at com.jcraft.jsch.Session.connect(Session.java:182)
    at org.apache.hadoop.ha.SshFenceByTcpPort.tryFence(SshFenceByTcpPort.java:100)
    at org.apache.hadoop.ha.NodeFencer.fence(NodeFencer.java:97)
    at org.apache.hadoop.ha.ZKFailoverController.doFence(ZKFailoverController.java:521)
    at org.apache.hadoop.ha.ZKFailoverController.fenceOldActive(ZKFailoverController.java:494)
    at org.apache.hadoop.ha.ZKFailoverController.access$1100(ZKFailoverController.java:59)
    at org.apache.hadoop.ha.ZKFailoverController$ElectorCallbacks.fenceOldActive(ZKFailoverController.java:837)
    at org.apache.hadoop.ha.ActiveStandbyElector.fenceOldActive(ActiveStandbyElector.java:901)
    at org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:800)
    at org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:415)
    at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:596)
    at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495)
2014-06-19 13:39:33,814 WARN org.apache.hadoop.ha.NodeFencer: Fencing method org.apache.hadoop.ha.SshFenceByTcpPort(null) was unsuccessful.
{noformat}
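For context, a hedged sketch (not from this report, and why the issue was later resolved "Not a Problem"): when the old active's machine is powered off, sshfence can never succeed, so a fallback fencing method that always succeeds is normally configured. A minimal example, assuming the standard dfs.ha.fencing.methods setting in hdfs-site.xml:

```xml
<!-- Sketch only: sshfence is tried first; if the old active's host is
     unreachable (e.g. NoRouteToHostException because the machine is down),
     the shell method succeeds unconditionally so the standby can still
     transition to active. -->
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence
shell(/bin/true)</value>
</property>
```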



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-10722) Standby NN continuing as standby when active NN machine got shutdown.

2014-06-19 Thread surendra singh lilhore (JIRA)

 [ https://issues.apache.org/jira/browse/HADOOP-10722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

surendra singh lilhore resolved HADOOP-10722.
-

Resolution: Not a Problem

 Standby NN continuing as standby when active NN machine got shutdown.
 -

 Key: HADOOP-10722
 URL: https://issues.apache.org/jira/browse/HADOOP-10722
 Project: Hadoop Common
  Issue Type: Bug
  Components: auto-failover, ha
Affects Versions: 2.4.0
Reporter: surendra singh lilhore






[jira] [Created] (HADOOP-10723) FileSystem deprecated filesystem name warning : Make error message HCFS compliant

2014-06-19 Thread jay vyas (JIRA)
jay vyas created HADOOP-10723:
-

 Summary: FileSystem deprecated filesystem name warning : Make 
error message HCFS compliant
 Key: HADOOP-10723
 URL: https://issues.apache.org/jira/browse/HADOOP-10723
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: jay vyas


We've found that if we have an alternative filesystem (e.g. xyz://) and we 
enter it without slashes, we get an hdfs-specific error message: 

{{hadoop fs -fs xyz: -mkdir -p /foo/bar}} 

yields an error message which suggests an hdfs URI.

{noformat}
# hadoop fs -fs xyzfs: -mkdir -p /foo/bar
14/06/12 17:57:24 WARN fs.FileSystem: xyz: is a deprecated filesystem name. 
Use hdfs://xyz:/ instead.
{noformat}

It would be better if this threw an Exception (as suggested in the comments) 
and the message did not hardcode the hdfs URI at the beginning, as it is very 
confusing when running on any alternative filesystem.
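A scheme-agnostic message could be derived from what the user actually typed. A hypothetical sketch (class and method names are illustrative, not the actual FileSystem patch):

```java
// Sketch: build the deprecation warning from the scheme the user typed,
// rather than hardcoding "hdfs://" into the suggested replacement.
public class FsNameWarning {
    static String deprecationWarning(String rawName) {
        // rawName is e.g. "xyzfs:"; strip a trailing colon to get the scheme
        String scheme = rawName.endsWith(":")
                ? rawName.substring(0, rawName.length() - 1)
                : rawName;
        return rawName + " is a deprecated filesystem name. Use "
                + scheme + ":/// instead.";
    }

    public static void main(String[] args) {
        // prints: xyzfs: is a deprecated filesystem name. Use xyzfs:/// instead.
        System.out.println(deprecationWarning("xyzfs:"));
    }
}
```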





[VOTE] Release Apache Hadoop 0.23.11

2014-06-19 Thread Thomas Graves
Hey Everyone,

There have been various bug fixes that have gone into
branch-0.23 since the 0.23.10 release.  We think it's time to do a 0.23.11.

This is also the last release we plan to make from branch-0.23.

The RC is available at:
http://people.apache.org/~tgraves/hadoop-0.23.11-candidate-0/


The RC Tag in svn is here:
http://svn.apache.org/viewvc/hadoop/common/tags/release-0.23.11-rc0/

The maven artifacts are available via repository.apache.org.

Please try the release and vote; the vote will run for the usual 7 days,
until June 26th.

I am +1 (binding).

thanks,
Tom Graves






[jira] [Created] (HADOOP-10724) `hadoop fs -du -h` incorrectly formatted

2014-06-19 Thread Sam Steingold (JIRA)
Sam Steingold created HADOOP-10724:
--

 Summary: `hadoop fs -du -h` incorrectly formatted
 Key: HADOOP-10724
 URL: https://issues.apache.org/jira/browse/HADOOP-10724
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Sam Steingold


{{hadoop fs -du -h}} prints sizes with a space between the number and the unit:

{code}
$ hadoop fs -du -h . 
91.7 G   
583.1 M  
97.6 K   .
{code}

The standard unix {{du -h}} does not:

{code}
$ du -h
400K...
404K
480K.
{code}

The result is that the output of {{du -h}} is properly sorted by {{sort -h}}, 
while the output of {{hadoop fs -du -h}} is *not*.

Please see 

* [sort|http://linux.die.net/man/1/sort]: -h --human-numeric-sort
compare human readable numbers (e.g., 2K 1G) 
* [du|http://linux.die.net/man/1/du]: -h, --human-readable
print sizes in human readable format (e.g., 1K 234M 2G) 
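A sort-compatible format simply omits the space before the unit. A hypothetical standalone sketch (not Hadoop's actual formatting code; class name and sample input are illustrative):

```java
import java.util.Locale;

// Sketch: format a byte count with no space before the unit, so the
// output sorts correctly under GNU `sort -h`, like GNU `du -h` output.
public class HumanReadable {
    private static final String[] UNITS = {"", "K", "M", "G", "T", "P"};

    static String format(long bytes) {
        double value = bytes;
        int unit = 0;
        while (value >= 1024 && unit < UNITS.length - 1) {
            value /= 1024;
            unit++;
        }
        // "91.7G" rather than "91.7 G"
        return unit == 0 ? String.valueOf(bytes)
                         : String.format(Locale.ROOT, "%.1f%s", value, UNITS[unit]);
    }

    public static void main(String[] args) {
        System.out.println(format(1536)); // prints 1.5K
    }
}
```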





Is Building on Windows 7 Pro (NOT Windows Server!) supported in some way?

2014-06-19 Thread javadba
The following link from January states that Windows 7 (NOT Server) should
work.  Has anyone been successful with this and have any comments on the
process? 

https://wiki.apache.org/hadoop/Hadoop2OnWindows

 Windows Vista and Windows 7 are also likely to work because of the Win32
 API similarities with the respective server SKUs 

The present lack of support by HortonWorks for anything but Windows SERVER
was a really ugly discovery for my team. We do not have any option to use
that OS per our (very large) company's IT policies, so support for Windows 7
Pro would be a big help.




--
View this message in context: 
http://hadoop.6.n7.nabble.com/Is-Building-on-Windows-7-Pro-NOT-Windows-Server-supported-in-some-way-tp72137.html
Sent from the common-dev mailing list archive at Nabble.com.



Re: Is Building on Windows 7 Pro (NOT Windows Server!) supported in some way?

2014-06-19 Thread Steve Loughran
On 19 June 2014 10:07, javadba java...@gmail.com wrote:

 The following link from January states that Windows 7 (NOT Server) should
 work.  Has anyone been successful with this and have any comments on the
 process?

 https://wiki.apache.org/hadoop/Hadoop2OnWindows

  Windows Vista and Windows 7 are also likely to work because of the Win32
  API similarities with the respective server SKUs


I haven't tried it, so I don't have opinions. Why not follow the
instructions, and if something doesn't build (and it's not something basic
like tools not being installed), discuss it here?



 The present lack of support by HortonWorks for anything but Windows SERVER
 was a really ugly discovery for my team. We do not have any option to use
 that O/S per our (v large) company IT policies. So support for Windows 7
 Pro
 would be a big help.


I understand large IT policies, having experienced them. It's just that
Windows Server is the only place people will run Hadoop on Windows, and
there may be enough differences between the desktop and server editions
that doing everything server-side is what makes it work.

If you have an MSDN license, you should be able to bring up a 64-bit
Windows Server VM; that is how I test against Hadoop on Windows and Linux
from my OS X laptop.

-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.


[jira] [Created] (HADOOP-10725) Implement listStatus and getFileInfo in the native client

2014-06-19 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-10725:
-

 Summary: Implement listStatus and getFileInfo in the native client
 Key: HADOOP-10725
 URL: https://issues.apache.org/jira/browse/HADOOP-10725
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: HADOOP-10388
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


Implement listStatus and getFileInfo in the native client.





Re: [VOTE] Release Apache Hadoop 2.4.1

2014-06-19 Thread Akira AJISAKA
I think we should include this issue in 2.4.1, so I uploaded a patch to 
fix it. I'd appreciate your review.


Thanks,
Akira

(2014/06/18 12:13), Vinod Kumar Vavilapalli wrote:


There is one item [MAPREDUCE-5830 HostUtil.getTaskLogUrl is not backwards 
binary compatible with 2.3] marked for 2.4. Should we include it?

There is no patch there yet, it doesn't really help much other than letting 
older clients compile - even if we put the API back in, the URL returned is 
invalid.

+Vinod

On Jun 16, 2014, at 9:27 AM, Arun C Murthy a...@hortonworks.com wrote:


Folks,

I've created a release candidate (rc0) for hadoop-2.4.1 (bug-fix release) that 
I would like to push out.

The RC is available at: http://people.apache.org/~acmurthy/hadoop-2.4.1-rc0
The RC tag in svn is here: 
https://svn.apache.org/repos/asf/hadoop/common/tags/release-2.4.1-rc0

The maven artifacts are available via repository.apache.org.

Please try the release and vote; the vote will run for the usual 7 days.

thanks,
Arun



--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/hdp/










Re: Plans of moving towards JDK7 in trunk

2014-06-19 Thread Andrew Purtell
There are a number of security (algorithm, not vulnerability) and
performance improvements that landed in 8, not 7. As a runtime for the
performance-conscious, it might be recommendable. I've come across GC
issues in 6 or 7 where, talking with some Java platform folks, the first
suggested course of action was to try again with 8. Would it be more of the
current moment if this discussion were about setting guidelines that
prescribe when and when not to use 8+ language features, or concurrency
library improvements that rely on intrinsics only available in the 8
runtime? Has the Java 6 ship sailed? Just set the minimum supported JDK and
runtime at 7 at the next release? Use of the diamond operator or multicatch
wouldn't and shouldn't need to be debated; they are quite minor. On the
other hand, I would imagine discussion and debate on what 8+ language
features might be useful to use at some future time could be a lively one.



On Wed, Jun 18, 2014 at 3:03 PM, Colin McCabe cmcc...@alumni.cmu.edu
wrote:

 In CDH5, Cloudera encourages people to use JDK7.  JDK6 has been EOL
 for a while now and is not something we recommend.

 As we discussed before, everyone is in favor of upgrading to JDK7.
 Every cluster operator of a reasonably modern Hadoop should do it
 whatever distro or release you run.  As developers, we run JDK7 as
 well.

 I'd just like to see a plan for when branch-2 (or some other branch)
 will create a stable release that drops support for JDK1.6.  If we
 don't have such a plan, I feel like it's too early to talk about this
 stuff.

 If we drop support for 1.6 in trunk but not in branch-2, we are
 fragmenting the project.  People will start writing unreleasable code
 (because it doesn't work on branch-2) and we'll be back to the bad old
 days of Hadoop version fragmentation that branch-2 was intended to
 solve.  Backports will become harder.  The biggest problem is that
 trunk will start to depend on libraries or Maven plugins that branch-2
 can't even use, because they're JDK7+-only.

 Steve wrote: if someone actually did file a bug on something on
 branch-2 which didn't work on Java 6 but went away on Java7+, we'd
 probably close it as a WORKSFORME.

 Steve, if this is true, we should just bump the minimum supported
 version for branch-2 to 1.7 today and resolve this.  If we truly
 believe that there are no issues here, then let's just decide to drop
 1.6 in a specific future release of Hadoop 2.  If there are issues
 with releasing JDK1.7+ only code, then let's figure out what they are
 before proceeding.

 best,
 Colin


 On Wed, Jun 18, 2014 at 1:41 PM, Sandy Ryza sandy.r...@cloudera.com
 wrote:
  We do release warnings when we are aware of vulnerabilities in our
  dependencies.
 
  However, unless I'm grossly misunderstanding, the vulnerability that you
  point out is not a vulnerability within the context of our software.
   Hadoop doesn't try to sandbox within JVMs.  In a secure setup, any JVM
  running non-trusted user code is running as that user, so breaking out
  doesn't offer the ability to do anything malicious.
 
  -Sandy
 
  On Wed, Jun 18, 2014 at 1:30 PM, Ottenheimer, Davi 
 davi.ottenhei...@emc.com
  wrote:
 
  Andrew,
 
 
 
  “I don't see any point to switching” is an interesting perspective, given
  the well-known risks of running unsafe software. Clearly customer best
  interest is stability. JDK6 is in a known unsafe state. The longer anyone
  delays the necessary transition to safety, the longer the door is left
  open to predictable disaster.
 
 
 
  You also said we still test and support JDK6. I searched but have not
  been able to find Cloudera critical security fixes for JDK6.
 
 
 
  Can you clarify, for example, Java 6 Update 51 for CVE-2013-2465? In other
  words, did you release to your customers any kind of public alert or
  warning of this CVSS 10.0 event as part of your JDK6 support?
 
 
 
  http://www.cvedetails.com/cve/CVE-2013-2465/
 
 
 
  If you are not releasing your own security fixes for JDK6 post-EOL, would
  it perhaps be safer to say Cloudera is hands-off: it neither supports nor
  opposes the known-insecure, deprecated/unpatched JDK?
 
 
 
  I mentioned before in this thread the Oracle support timeline:
 
 
 
  - official public EOL (end of life) was more than a year ago
 
  - premier support ended more than six months ago
 
  - extended support may get critical security fixes until the end of 2016
 
 
 
  Given this timeline, does Cloudera officially take responsibility for
  Hadoop customer safety? Are you going to be releasing critical security
  fixes to a known unsafe JDK?
 
 
 
  Davi
 
 
 
 
 
 
 
   -Original Message-
 
   From: Andrew Wang [mailto:andrew.w...@cloudera.com]
 
   Sent: Wednesday, June 18, 2014 12:33 PM
 
   To: common-dev@hadoop.apache.org
 
   Subject: Re: Plans of moving towards JDK7 in trunk
 
  
 
   Actually, a lot of our customers are still on JDK6, so if anything, its
   popularity hasn't significantly decreased. We 

[jira] [Created] (HADOOP-10727) GraphiteSink metric names should not contain .

2014-06-19 Thread Babak Behzad (JIRA)
Babak Behzad created HADOOP-10727:
-

 Summary: GraphiteSink metric names should not contain .
 Key: HADOOP-10727
 URL: https://issues.apache.org/jira/browse/HADOOP-10727
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Babak Behzad
Priority: Minor


Sometimes the names of the metrics sent to Graphite contain "." in them (such 
as hostnames). Graphite interprets these dots as directory separators, 
causing long directory hierarchies. It would be better to replace them with 
"_" in order to have easier-to-read metric names.
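The proposed sanitization can be sketched as follows (hypothetical helper, not the actual GraphiteSink patch; the sample hostname is made up):

```java
// Sketch: replace dots inside a single metric-name component, such as a
// hostname, with underscores so Graphite does not split the component
// into nested directories.
public class GraphiteNames {
    static String sanitize(String component) {
        return component.replace('.', '_');
    }

    public static void main(String[] args) {
        // prints: host-10-18-40-101_example_com
        System.out.println(sanitize("host-10-18-40-101.example.com"));
    }
}
```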





[jira] [Created] (HADOOP-10728) Metrics system for Windows Azure Storage Filesystem

2014-06-19 Thread Mike Liddell (JIRA)
Mike Liddell created HADOOP-10728:
-

 Summary: Metrics system for Windows Azure Storage Filesystem
 Key: HADOOP-10728
 URL: https://issues.apache.org/jira/browse/HADOOP-10728
 Project: Hadoop Common
  Issue Type: New Feature
  Components: tools
Reporter: Mike Liddell
Assignee: Mike Liddell


Add a metrics2 source for the Windows Azure Filesystem driver that was 
introduced with HADOOP-9629.

AzureFileSystemInstrumentation is the new MetricsSource.  

AzureNativeFilesystemStore and NativeAzureFilesystem have been modified to 
record statistics and some machinery is added for the accumulation of 'rolling 
average' statistics.

Primary new code appears in org.apache.hadoop.fs.azure.metrics namespace.

h2. Credits and history
Credit for this work goes to the early team: [~minwei], [~davidlao], 
[~lengningliu] and [~stojanovic] as well as multiple people who have taken over 
this work since then (hope I don't forget anyone): [~dexterb], Johannes Klein, 
[~ivanmi], Michael Rys, [~mostafae], [~brian_swan], [~mikelid], [~xifang], and 
[~chuanliu].






[jira] [Created] (HADOOP-10729) Add tests for PB RPC in case version mismatch of client and server

2014-06-19 Thread Junping Du (JIRA)
Junping Du created HADOOP-10729:
---

 Summary: Add tests for PB RPC in case version mismatch of client 
and server
 Key: HADOOP-10729
 URL: https://issues.apache.org/jira/browse/HADOOP-10729
 Project: Hadoop Common
  Issue Type: Test
  Components: ipc
Affects Versions: 2.4.0
Reporter: Junping Du
Assignee: Junping Du


We have ProtocolInfo specified on protocol interfaces with version info, but we 
don't have a unit test to verify if/how it works. We should add tests that 
verify this annotation works as expected.


