[jira] [Resolved] (HADOOP-6253) Add a Ceph FileSystem interface.

2014-11-04 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-6253.
---
Resolution: Won't Fix

Resolving as 'Won't Fix' as no changes have been committed

 Add a Ceph FileSystem interface.
 

 Key: HADOOP-6253
 URL: https://issues.apache.org/jira/browse/HADOOP-6253
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Reporter: Gregory Farnum
Assignee: Gregory Farnum
Priority: Minor
  Labels: ceph
 Attachments: HADOOP-6253.patch, HADOOP-6253.patch, HADOOP-6253.patch, 
 HADOOP-6253.patch, HADOOP-6253.patch


 The experimental distributed filesystem Ceph does not have a single point of 
 failure, uses a distributed metadata cluster instead of a single in-memory 
 server, and might be of use to some Hadoop users.
 http://ceph.com/docs/wip-hadoop-doc/cephfs/hadoop/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HADOOP-6253) Add a Ceph FileSystem interface.

2014-11-04 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B reopened HADOOP-6253:
---

 Add a Ceph FileSystem interface.
 

 Key: HADOOP-6253
 URL: https://issues.apache.org/jira/browse/HADOOP-6253
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Reporter: Gregory Farnum
Assignee: Gregory Farnum
Priority: Minor
  Labels: ceph
 Attachments: HADOOP-6253.patch, HADOOP-6253.patch, HADOOP-6253.patch, 
 HADOOP-6253.patch, HADOOP-6253.patch


 The experimental distributed filesystem Ceph does not have a single point of 
 failure, uses a distributed metadata cluster instead of a single in-memory 
 server, and might be of use to some Hadoop users.
 http://ceph.com/docs/wip-hadoop-doc/cephfs/hadoop/





[jira] [Reopened] (HADOOP-11227) error when building hadoop on windows

2014-11-04 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B reopened HADOOP-11227:


 error when building hadoop on windows  
 ---

 Key: HADOOP-11227
 URL: https://issues.apache.org/jira/browse/HADOOP-11227
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: milq
Assignee: milq

 [INFO] 
 
 [INFO] Building hadoop-mapreduce-client-app 2.2.0
 [INFO] 
 
 [INFO]
 [INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-mapreduce-client-app ---
 [INFO] Executing tasks
 main:
 [INFO] Executed tasks
 [INFO]
 [INFO] --- maven-resources-plugin:2.2:resources (default-resources) @ hadoop-mapreduce-client-app ---
 [INFO] Using default encoding to copy filtered resources.
 [INFO]
 [INFO] --- maven-compiler-plugin:2.5.1:compile (default-compile) @ hadoop-mapreduce-client-app ---
 [INFO] Nothing to compile - all classes are up to date
 [INFO]
 [INFO] --- maven-resources-plugin:2.2:testResources (default-testResources) @ hadoop-mapreduce-client-app ---
 [INFO] Using default encoding to copy filtered resources.
 [INFO]
 [INFO] --- maven-compiler-plugin:2.5.1:testCompile (default-testCompile) @ hadoop-mapreduce-client-app ---
 [INFO] Compiling 29 source files to D:\hdfs\hadoop-mapreduce-project\hadoop-mapreduce-client\hadoop-mapreduce-client-app\target\test-classes
 [INFO] -
 [ERROR] COMPILATION ERROR :
 [INFO] -
 [ERROR] D:\hdfs\hadoop-mapreduce-project\hadoop-mapreduce-client\hadoop-mapreduce-client-app\src\test\java\org\apache\hadoop\mapreduce\v2\app\TestRecovery.java:[1491,60] incomparable types: java.lang.Enum<capture#698 of ?> and org.apache.hadoop.mapreduce.JobCounter
 [ERROR] D:\hdfs\hadoop-mapreduce-project\hadoop-mapreduce-client\hadoop-mapreduce-client-app\src\test\java\org\apache\hadoop\mapreduce\v2\app\TestRecovery.java:[1495,67] incomparable types: java.lang.Enum<capture#215 of ?> and org.apache.hadoop.mapreduce.JobCounter
 [INFO] 2 errors
 [INFO] -
 [INFO] 
 
 [INFO] Reactor Summary:
 [INFO]
 [INFO] Apache Hadoop Main  SUCCESS [7.601s]
 [INFO] Apache Hadoop Project POM . SUCCESS [7.254s]
 [INFO] Apache Hadoop Annotations . SUCCESS [7.177s]
 [INFO] Apache Hadoop Assemblies .. SUCCESS [0.604s]
 [INFO] Apache Hadoop Project Dist POM  SUCCESS [6.864s]
 [INFO] Apache Hadoop Maven Plugins ... SUCCESS [8.371s]
 [INFO] Apache Hadoop Auth  SUCCESS [5.966s]
 [INFO] Apache Hadoop Auth Examples ... SUCCESS [4.492s]
 [INFO] Apache Hadoop Common .. SUCCESS [7:26.231s]
 [INFO] Apache Hadoop NFS . SUCCESS [20.858s]
 [INFO] Apache Hadoop Common Project .. SUCCESS [0.093s]
 [INFO] Apache Hadoop HDFS  SUCCESS [8:10.985s]
 [INFO] Apache Hadoop HttpFS .. SUCCESS [1:00.932s]
 [INFO] Apache Hadoop HDFS BookKeeper Journal . SUCCESS [17.207s]
 [INFO] Apache Hadoop HDFS-NFS  SUCCESS [12.950s]
 [INFO] Apache Hadoop HDFS Project  SUCCESS [0.104s]
 [INFO] hadoop-yarn ... SUCCESS [1.943s]
 [INFO] hadoop-yarn-api ... SUCCESS [2:39.214s]
 [INFO] hadoop-yarn-common  SUCCESS [1:15.391s]
 [INFO] hadoop-yarn-server  SUCCESS [0.278s]
 [INFO] hadoop-yarn-server-common . SUCCESS [14.293s]
 [INFO] hadoop-yarn-server-nodemanager  SUCCESS [25.848s]
 [INFO] hadoop-yarn-server-web-proxy .. SUCCESS [5.866s]
 [INFO] hadoop-yarn-server-resourcemanager  SUCCESS [39.821s]
 [INFO] hadoop-yarn-server-tests .. SUCCESS [0.645s]
 [INFO] hadoop-yarn-client  SUCCESS [6.714s]
 [INFO] hadoop-yarn-applications .. SUCCESS [0.454s]
 [INFO] hadoop-yarn-applications-distributedshell . SUCCESS [3.555s]
 [INFO] hadoop-mapreduce-client ... SUCCESS [0.292s]
 [INFO] hadoop-mapreduce-client-core .. SUCCESS [1:05.441s]
 [INFO] 

[jira] [Resolved] (HADOOP-11227) error when building hadoop on windows

2014-11-04 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-11227.

Resolution: Not a Problem

Closing as 'Not a Problem'

 error when building hadoop on windows  
 ---

 Key: HADOOP-11227
 URL: https://issues.apache.org/jira/browse/HADOOP-11227
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.3.0
Reporter: milq
Assignee: milq


Re: unable to use hdfs haadmin -ns nameserviceId on HDP2.1 , HDFS2.4.0.2.1

2014-11-04 Thread Devopam Mittra
hi All,
Please find a very simple script attached that helps me to find the current
'active' namenode fqdn on my HDP2.1 HA cluster . I needed this information
for the ETL jobs to run properly.
Sharing it with everyone in case it may be of use to you as well.

Thanks to Brahma for the initial pointers once again.
Do let me know if you face any issues executing the same.

regards
Devopam


On Mon, Nov 3, 2014 at 6:06 PM, Devopam Mittra devo...@gmail.com wrote:

 hi Brahma,
 Thanks so much for the pointers.
 I am able to achieve my goal by extending the help provided and proceed
 with the automated script I am trying to build.
 Shall share for common benefit after initial testing.

 warm regards
 Devopam


 On Mon, Nov 3, 2014 at 1:40 PM, Brahma Reddy Battula 
 brahmareddy.batt...@huawei.com wrote:

 Yes, you can get them, but you need to use two commands, like the following:

 1) Get the configured nameservices
 2) Get dfs.ha.namenodes.${dfs.nameservices}


 [root@linux156 bin]# ./hdfs getconf -confKey dfs.nameservices
 hacluster
 [root@linux156 bin]# ./hdfs getconf -confKey dfs.ha.namenodes.hacluster
 100,101
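
 The two getconf calls above can be combined into a small probe that asks each
 configured namenode for its HA state and prints the active one. This is only a
 sketch of the approach discussed in this thread (it is not the
 findActiveNameNode.sh attachment); the service ids come from your cluster's
 configuration, and get_state is a thin wrapper so the haadmin call can be
 stubbed out:

```shell
#!/bin/sh
# get_state wraps the real call so it can be replaced in testing; in real use
# it simply shells out to the HDFS HA admin tool.
get_state() {
  hdfs haadmin -getServiceState "$1" 2>/dev/null
}

# find_active_nn: given a comma-separated list of namenode service ids
# (e.g. "nn1,nn2" or "100,101"), print the first one reporting "active".
find_active_nn() {
  for sid in $(echo "$1" | tr ',' ' '); do
    if [ "$(get_state "$sid")" = "active" ]; then
      echo "$sid"
      return 0
    fi
  done
  return 1
}

# Real usage (requires a configured HDFS client on the PATH):
#   ns=$(hdfs getconf -confKey dfs.nameservices)
#   ids=$(hdfs getconf -confKey "dfs.ha.namenodes.${ns}")
#   find_active_nn "$ids"
```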




 Thanks & Regards
 Brahma Reddy Battula

 
 From: Devopam Mittra [devo...@gmail.com]
 Sent: Monday, November 03, 2014 1:23 PM
 To: common-dev@hadoop.apache.org
 Subject: Re: unable to use hdfs haadmin -ns nameserviceId on HDP2.1 ,
 HDFS2.4.0.2.1

 hi Brahma,
 Thanks a lot for the clarification.

 This brings me to my next question: Is there a way I can systematically
 retrieve the serviceId s using any CLI command , instead of reading it
 from
 conf xmls ?


 regards
 Devopam


 On Mon, Nov 3, 2014 at 12:51 PM, Brahma Reddy Battula 
 brahmareddy.batt...@huawei.com wrote:

  Hi Mittra
 
 
  As far as I know, hdfs haadmin -ns is not available.. (please, someone
  correct me if I am wrong..)
 
 
  Please check following help command for haadmin..
 
  [root@linux bin]# ./hdfs haadmin
  No GC_PROFILE is given. Defaults to medium.
  Usage: DFSHAAdmin [-ns nameserviceId]
  [-transitionToActive serviceId [--forceactive]]
  [-transitionToStandby serviceId]
  [-failover [--forcefence] [--forceactive] serviceId serviceId]
  [-getServiceState serviceId]
  [-checkHealth serviceId]
  [-help command]
 
 
  Example:
 
  if your service id's are like nn1,nn2
 
  then you can try: hdfs haadmin -getServiceState nn1
 
 
  Thanks & Regards
  Brahma Reddy Battula
  
  From: Devopam Mittra [devo...@gmail.com]
  Sent: Monday, November 03, 2014 10:33 AM
  To: common-dev@hadoop.apache.org
  Subject: unable to use hdfs haadmin -ns nameserviceId on HDP2.1 ,
  HDFS2.4.0.2.1
 
  hi All,
  I wanted to systematically probe and list all my namenode(s) via CLI.
  An easier way was to maintain a masters conf file , but I want to
  dynamically find the same leveraging the system metadata.
 
  In process of trying to find the same , I tried to use
  hdfs haadmin -ns ... but to my dismay the command always returns 'Missing
  Command' on the console as if I am doing something syntactically wrong.
 
  I know my nameserviceId and I tried both capital letters and all small
  (just to be sure to not commit a silly mistake).
 
  Can you please help me in trying to figure out what I am doing wrong
 here.
 
 
  --
  Devopam Mittra
  Life and Relations are not binary
 







-- 
Devopam Mittra
Life and Relations are not binary


findActiveNameNode.sh
Description: Bourne shell script


[jira] [Resolved] (HADOOP-11263) NativeS3FileSystem doesn't work with hadoop-client

2014-11-04 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-11263.
-
Resolution: Won't Fix

 NativeS3FileSystem doesn't work with hadoop-client
 --

 Key: HADOOP-11263
 URL: https://issues.apache.org/jira/browse/HADOOP-11263
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Sangjin Lee

 When you start using the NativeS3FileSystem (which is in hadoop-common) based 
 on the hadoop-client set of jars, it fails with a NoClassDefFoundError:
 {noformat}
 Caused by: java.lang.NoClassDefFoundError: org/jets3t/service/ServiceException
   at 
 org.apache.hadoop.fs.s3native.NativeS3FileSystem.createDefaultStore(NativeS3FileSystem.java:280)
   at 
 org.apache.hadoop.fs.s3native.NativeS3FileSystem.initialize(NativeS3FileSystem.java:270)
   at 
 com.twitter.twadoop.util.hadoop.NativeS3FileSystemWrapper.initialize(NativeS3FileSystemWrapper.java:34)
   at 
 org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2438)
   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:90)
   at 
 org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2472)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2454)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:384)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
 {noformat}
 NativeS3FileSystem depends on a library called jets3t, which is not found in 
 the hadoop-client build. It turns out that this library was specifically 
 excluded in the hadoop-client pom.xml:
 {noformat}
 <exclusion>
   <groupId>net.java.dev.jets3t</groupId>
   <artifactId>jets3t</artifactId>
 </exclusion>
 {noformat}
 This strikes me as an issue, as a component that's part of hadoop-common 
 cannot run with a hadoop-client build.
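
 One possible workaround (not proposed in this issue; the version shown is an
 assumption) is for the application to re-add the excluded library as a direct
 dependency in its own pom.xml:

```xml
<!-- Hypothetical application pom.xml fragment: re-adds the jets3t library
     that the hadoop-client pom excludes. The version is illustrative; match
     it to the jets3t version your Hadoop release was built against. -->
<dependency>
  <groupId>net.java.dev.jets3t</groupId>
  <artifactId>jets3t</artifactId>
  <version>0.9.0</version>
</dependency>
```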





[jira] [Resolved] (HADOOP-11235) execute maven plugin(compile-protoc) failed

2014-11-04 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee resolved HADOOP-11235.
-
Resolution: Not a Problem

 execute maven plugin(compile-protoc) failed
 ---

 Key: HADOOP-11235
 URL: https://issues.apache.org/jira/browse/HADOOP-11235
 Project: Hadoop Common
  Issue Type: Bug
 Environment: ubuntu 14.04
 jdk 1.7
 eclipse 4.4.1
 m2e 1.5
Reporter: ccin

 [ERROR] Failed to execute goal 
 org.apache.hadoop:hadoop-maven-plugins:2.5.1:protoc (compile-protoc) on 
 project hadoop-common: org.apache.maven.plugin.MojoExecutionException: 
 'protoc --version' did not return a version - [Help 1]
 org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
 goal org.apache.hadoop:hadoop-maven-plugins:2.5.1:protoc (compile-protoc) on 
 project hadoop-common: org.apache.maven.plugin.MojoExecutionException: 
 'protoc --version' did not return a version
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:216)
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
   at 
 org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:108)
   at 
 org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:76)
   at 
 org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
   at 
 org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:116)
   at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:361)
   at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:155)
   at org.apache.maven.cli.MavenCli.execute(MavenCli.java:584)
   at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:213)
   at org.apache.maven.cli.MavenCli.main(MavenCli.java:157)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
   at 
 org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
   at 
 org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
   at 
 org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
 Caused by: org.apache.maven.plugin.MojoExecutionException: 
 org.apache.maven.plugin.MojoExecutionException: 'protoc --version' did not 
 return a version
   at 
 org.apache.hadoop.maven.plugin.protoc.ProtocMojo.execute(ProtocMojo.java:105)
   at 
 org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:133)
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
   ... 19 more
 Caused by: org.apache.maven.plugin.MojoExecutionException: 'protoc --version' 
 did not return a version
   at 
 org.apache.hadoop.maven.plugin.protoc.ProtocMojo.execute(ProtocMojo.java:68)
   ... 21 more
 [ERROR] 
 [ERROR] 
 [ERROR] For more information about the errors and possible solutions, please 
 read the following articles:
 [ERROR] [Help 1] 
 http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException





[jira] [Created] (HADOOP-11265) Credential and Key Shell Commands not available on Windows

2014-11-04 Thread Larry McCay (JIRA)
Larry McCay created HADOOP-11265:


 Summary: Credential and Key Shell Commands not available on Windows
 Key: HADOOP-11265
 URL: https://issues.apache.org/jira/browse/HADOOP-11265
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Larry McCay
Assignee: Larry McCay
 Fix For: 3.0.0


Must add the credential and key commands to the hadoop.cmd file for Windows 
environments.





[jira] [Resolved] (HADOOP-10133) winutils detection on windows-cygwin fails

2014-11-04 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-10133.
-
Resolution: Won't Fix

Hadoop on Windows doesn't depend on Cygwin any more; resolving as WONTFIX.

 winutils detection on windows-cygwin fails
 --

 Key: HADOOP-10133
 URL: https://issues.apache.org/jira/browse/HADOOP-10133
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.2.0
 Environment: windows 7, cygwin
Reporter: Franjo Markovic
   Original Estimate: 1h
  Remaining Estimate: 1h

 java.io.IOException: Could not locate executable null\bin\winutils.exe in the 
 Hadoop binaries.
 at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278)
  at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300)
 at org.apache.hadoop.util.Shell.<clinit>(Shell.java:293)





[jira] [Resolved] (HADOOP-11259) Hadoop /Common directory is missing from all download mirrors I checked

2014-11-04 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-11259.
-
Resolution: Fixed

 Hadoop /Common  directory is missing from all download mirrors I checked
 

 Key: HADOOP-11259
 URL: https://issues.apache.org/jira/browse/HADOOP-11259
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.2.0, 2.3.0, 2.4.0, 2.5.0, 2.4.1, 2.5.1
Reporter: Nick Kanellos
Assignee: Steve Loughran
Priority: Blocker

 I checked several download mirrors.  They all seem to be missing the /common 
 folder. The only thing I see there is .../dist/hadoop/chukwa/.  This is a 
 blocker since I can't download Hadoop at all.





[jira] [Created] (HADOOP-11266) Remove no longer supported activation properties for packaging from pom

2014-11-04 Thread Masatake Iwasaki (JIRA)
Masatake Iwasaki created HADOOP-11266:
-

 Summary: Remove no longer supported activation properties for 
packaging from pom
 Key: HADOOP-11266
 URL: https://issues.apache.org/jira/browse/HADOOP-11266
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Trivial


Packaging rpm and deb are no longer supported.





[jira] [Created] (HADOOP-11267) TestSecurityUtil fails when run with JDK8 because of empty principal names

2014-11-04 Thread Stephen Chu (JIRA)
Stephen Chu created HADOOP-11267:


 Summary: TestSecurityUtil fails when run with JDK8 because of 
empty principal names
 Key: HADOOP-11267
 URL: https://issues.apache.org/jira/browse/HADOOP-11267
 Project: Hadoop Common
  Issue Type: Bug
  Components: security, test
Affects Versions: 2.3.0
Reporter: Stephen Chu
Assignee: Stephen Chu
Priority: Minor


Running {{TestSecurityUtil}} on JDK8 will fail:

{code}
java.lang.IllegalArgumentException: Empty nameString not allowed
at sun.security.krb5.PrincipalName.validateNameStrings(PrincipalName.java:171)
at sun.security.krb5.PrincipalName.<init>(PrincipalName.java:393)
at sun.security.krb5.PrincipalName.<init>(PrincipalName.java:460)
at javax.security.auth.kerberos.KerberosPrincipal.<init>(KerberosPrincipal.java:120)
at org.apache.hadoop.security.TestSecurityUtil.isOriginalTGTReturnsCorrectValues(TestSecurityUtil.java:57)
{code}

In JDK8, PrincipalName validates that its name strings are not empty and throws an 
IllegalArgumentException when one is. JDK6/7 did not perform this check.
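
A test-side guard for this stricter behavior could validate principal strings
before constructing a KerberosPrincipal. This is only an illustrative sketch
(the class and method names are assumptions, not the actual HADOOP-11267 patch):

```java
// Guards against the stricter JDK8 behavior described above: JDK8's
// PrincipalName rejects empty name components, where JDK6/7 accepted them.
public class PrincipalNames {
    /**
     * Returns true if the principal string ("primary/instance@REALM") has
     * no empty components before the realm.
     */
    public static boolean isValidPrincipal(String name) {
        if (name == null || name.isEmpty()) {
            return false;
        }
        // Strip the realm (everything after the first '@'), then check
        // every slash-separated component that remains.
        String withoutRealm = name.split("@", 2)[0];
        if (withoutRealm.isEmpty()) {
            return false;
        }
        for (String part : withoutRealm.split("/", -1)) {
            if (part.isEmpty()) {
                return false;
            }
        }
        return true;
    }
}
```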





[jira] [Created] (HADOOP-11268) Update BUILDING.txt to remove the workaround for tools.jar

2014-11-04 Thread Haohui Mai (JIRA)
Haohui Mai created HADOOP-11268:
---

 Summary: Update BUILDING.txt to remove the workaround for tools.jar
 Key: HADOOP-11268
 URL: https://issues.apache.org/jira/browse/HADOOP-11268
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
Priority: Minor


Now that HADOOP-10563 has landed in branch-2, the workaround for tools.jar documented 
in BUILDING.txt is no longer required.

We should update the document to reflect this change.





[jira] [Created] (HADOOP-11269) Add java 8 profile for hadoop-annotations

2014-11-04 Thread Haohui Mai (JIRA)
Haohui Mai created HADOOP-11269:
---

 Summary: Add java 8 profile for hadoop-annotations
 Key: HADOOP-11269
 URL: https://issues.apache.org/jira/browse/HADOOP-11269
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu


hadoop-annotations fails to build out-of-the-box under Java 8 because it lacks 
the profile to add {{tools.jar}} into the classpath of {{javac}}. This jira 
proposes to add a new build profile for Java 8 in hadoop-annotations.





[jira] [Created] (HADOOP-11270) Seek behavior difference between NativeS3FsInputStream and DFSInputStream

2014-11-04 Thread Venkata Puneet Ravuri (JIRA)
Venkata Puneet Ravuri created HADOOP-11270:
--

 Summary: Seek behavior difference between NativeS3FsInputStream 
and DFSInputStream
 Key: HADOOP-11270
 URL: https://issues.apache.org/jira/browse/HADOOP-11270
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Reporter: Venkata Puneet Ravuri
Assignee: Venkata Puneet Ravuri


There is a difference in behavior while seeking a given file present
in S3 using NativeS3FileSystem$NativeS3FsInputStream and a file present in HDFS 
using DFSInputStream.

If we seek to the end of the file in the case of NativeS3FsInputStream, it fails 
with the exception 'java.io.EOFException: Attempted to seek or read past the end of 
the file'. That is because a getObject request is issued on the S3 object with the 
range start set to the length of the file.

This is the complete exception stack:
Caused by: java.io.EOFException: Attempted to seek or read past the end of the 
file
at 
org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.processException(Jets3tNativeFileSystemStore.java:462)
at 
org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.handleException(Jets3tNativeFileSystemStore.java:411)
at 
org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.retrieve(Jets3tNativeFileSystemStore.java:234)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at org.apache.hadoop.fs.s3native.$Proxy17.retrieve(Unknown Source)
at 
org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.seek(NativeS3FileSystem.java:205)
at 
org.apache.hadoop.fs.BufferedFSInputStream.seek(BufferedFSInputStream.java:96)
at 
org.apache.hadoop.fs.BufferedFSInputStream.skip(BufferedFSInputStream.java:67)
at java.io.DataInputStream.skipBytes(DataInputStream.java:220)
at org.apache.hadoop.hive.ql.io.RCFile$ValueBuffer.readFields(RCFile.java:739)
at 
org.apache.hadoop.hive.ql.io.RCFile$Reader.currentValueBuffer(RCFile.java:1720)
at org.apache.hadoop.hive.ql.io.RCFile$Reader.getCurrentRow(RCFile.java:1898)
at 
org.apache.hadoop.hive.ql.io.RCFileRecordReader.next(RCFileRecordReader.java:149)
at 
org.apache.hadoop.hive.ql.io.RCFileRecordReader.next(RCFileRecordReader.java:44)
at 
org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:339)
... 15 more
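
The difference can be summarized as a pair of boundary rules. The class below is
purely illustrative (it is not the actual Hadoop code): DFSInputStream tolerates
seeking exactly to the file length (a position just past the last byte), while
NativeS3FsInputStream turns every seek into a ranged S3 GET, and a range starting
at the file length has no bytes to return:

```java
// Models the reported seek-boundary difference (illustrative names only).
public class SeekSemantics {
    // HDFS-like rule: only positions strictly beyond EOF are invalid,
    // so seek(fileLength) succeeds (subsequent reads return -1).
    static boolean hdfsSeekAllowed(long pos, long fileLength) {
        return pos <= fileLength;
    }

    // S3N-like rule: the seek itself issues a ranged GET starting at pos,
    // so seeking exactly to the end already fails with an EOFException.
    static boolean s3nSeekAllowed(long pos, long fileLength) {
        return pos < fileLength;
    }
}
```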





[jira] [Created] (HADOOP-11271) Use Time.monotonicNow() in Shell.java instead of Time.now()

2014-11-04 Thread Vinayakumar B (JIRA)
Vinayakumar B created HADOOP-11271:
--

 Summary: Use Time.monotonicNow() in Shell.java instead of 
Time.now()
 Key: HADOOP-11271
 URL: https://issues.apache.org/jira/browse/HADOOP-11271
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Minor


Use {{Time.monotonicNow()}} instead of {{Time.now()}} in Shell.java to keep 
track of the last executed time.

Using {{Time.monotonicNow()}} in elapsed-time calculation use cases is accurate 
and safe against system time changes.
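
The idea is that a wall-clock source like Time.now() can jump backwards when the
system clock is adjusted (NTP step, manual change), producing negative or wildly
wrong intervals, whereas a monotonic source cannot. A minimal sketch of the
distinction, using plain JDK calls rather than Hadoop's actual Time class:

```java
// Illustrates why a monotonic clock is the right tool for measuring elapsed
// time. monotonicNow() here mirrors the idea of Hadoop's Time.monotonicNow():
// milliseconds from System.nanoTime(), meaningful only for intervals.
public class MonotonicClock {
    public static long monotonicNow() {
        // nanoTime() is guaranteed monotonic within a JVM; its absolute
        // value is arbitrary, so only differences are meaningful.
        return System.nanoTime() / 1_000_000L;
    }

    public static void main(String[] args) throws InterruptedException {
        long start = monotonicNow();
        Thread.sleep(10);
        long elapsed = monotonicNow() - start;
        // Unlike System.currentTimeMillis() deltas, this can never be
        // negative, even if the wall clock is stepped backwards meanwhile.
        assert elapsed >= 0 : "monotonic elapsed time must be non-negative";
    }
}
```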


