[jira] [Resolved] (HDFS-7033) dfs.web.authentication.filter should be documented

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-7033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-7033.

Resolution: Won't Fix

> dfs.web.authentication.filter should be documented
> --
>
> Key: HDFS-7033
> URL: https://issues.apache.org/jira/browse/HDFS-7033
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, security
>Affects Versions: 2.4.0
>Reporter: Allen Wittenauer
>Assignee: Srikanth Upputuri
>Priority: Major
>
> HDFS-5716 added dfs.web.authentication.filter but this doesn't appear to be 
> documented anywhere.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-7231) rollingupgrade needs some guard rails

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-7231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-7231.

Resolution: Won't Fix

> rollingupgrade needs some guard rails
> -
>
> Key: HDFS-7231
> URL: https://issues.apache.org/jira/browse/HDFS-7231
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Allen Wittenauer
>Priority: Critical
>
> See first comment.






[jira] [Resolved] (HDFS-7307) Need 'force close'

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-7307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-7307.

Resolution: Won't Fix

> Need 'force close'
> --
>
> Key: HDFS-7307
> URL: https://issues.apache.org/jira/browse/HDFS-7307
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>Priority: Major
>
> Until HDFS-4882 and HDFS-7306 get real fixes, operations teams need a way to 
> force close files.  DNs are essentially held hostage by broken clients that 
> never close.  This situation will get worse as long-running and permanently 
> running jobs become more common.






[jira] [Resolved] (HDFS-7777) Consolidate the HA NN documentation down to one

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-7777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-7777.

Resolution: Won't Fix

> Consolidate the HA NN documentation down to one
> ---
>
> Key: HDFS-7777
> URL: https://issues.apache.org/jira/browse/HDFS-7777
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Allen Wittenauer
>Priority: Major
>
> These are nearly the same document now.  Let's consolidate.






[jira] [Resolved] (HDFS-7850) distribute-excludes and refresh-namenodes update to new shell framework

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-7850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-7850.

  Resolution: Won't Fix
Target Version/s: (was: )

> distribute-excludes and refresh-namenodes update to new shell framework
> ---
>
> Key: HDFS-7850
> URL: https://issues.apache.org/jira/browse/HDFS-7850
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Priority: Major
>
> These need to get updated to use new shell framework.






[jira] [Resolved] (HDFS-7904) NFS hard codes ShellBasedIdMapping

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-7904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-7904.

Resolution: Won't Fix

> NFS hard codes ShellBasedIdMapping
> --
>
> Key: HDFS-7904
> URL: https://issues.apache.org/jira/browse/HDFS-7904
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Reporter: Allen Wittenauer
>Priority: Major
>
> The current NFS doesn't allow one to configure an alternative to the 
> shell-based id mapping provider.  






[jira] [Resolved] (HDFS-7983) HTTPFS proxy server needs pluggable-auth support

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-7983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-7983.

  Resolution: Won't Fix
Target Version/s: (was: )

> HTTPFS proxy server needs pluggable-auth support
> 
>
> Key: HDFS-7983
> URL: https://issues.apache.org/jira/browse/HDFS-7983
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> Now that WebHDFS has been fixed to support pluggable auth, the httpfs proxy 
> server also needs support.






[jira] [Resolved] (HDFS-9055) WebHDFS REST v2

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-9055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-9055.

  Resolution: Won't Fix
Target Version/s: (was: )

> WebHDFS REST v2
> ---
>
> Key: HDFS-9055
> URL: https://issues.apache.org/jira/browse/HDFS-9055
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Priority: Major
>
> There are starting to be enough changes to fix and add missing functionality to 
> webhdfs that we should probably update to REST v2.  This also gives us an 
> opportunity to deal with some incompatibility issues.






[jira] [Resolved] (HDFS-9058) enable find via WebHDFS

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-9058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-9058.

  Resolution: Won't Fix
Target Version/s: (was: )

> enable find via WebHDFS
> ---
>
> Key: HDFS-9058
> URL: https://issues.apache.org/jira/browse/HDFS-9058
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Brahma Reddy Battula
>Priority: Major
>
> It'd be useful to implement find over webhdfs rather than forcing the client 
> to grab a lot of data.






[jira] [Resolved] (HDFS-9031) libhdfs should use doxygen plugin to generate mvn site output

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-9031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-9031.

Resolution: Won't Fix

> libhdfs should use doxygen plugin to generate mvn site output
> -
>
> Key: HDFS-9031
> URL: https://issues.apache.org/jira/browse/HDFS-9031
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> Rather than point people to the hdfs.h file, we should take advantage of the 
> doxyfile and actually generate docs for mvn site so they show up on the website.






[jira] [Resolved] (HDFS-9056) add set/remove quota capability to webhdfs

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-9056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-9056.

  Resolution: Won't Fix
Target Version/s: (was: )

> add set/remove quota capability to webhdfs
> --
>
> Key: HDFS-9056
> URL: https://issues.apache.org/jira/browse/HDFS-9056
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Priority: Major
>
> It would be nice to be able to set and remove quotas via WebHDFS.






[jira] [Resolved] (HDFS-9059) Expose lssnapshottabledir via WebHDFS

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-9059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-9059.

  Resolution: Won't Fix
Target Version/s: (was: )

> Expose lssnapshottabledir via WebHDFS
> -
>
> Key: HDFS-9059
> URL: https://issues.apache.org/jira/browse/HDFS-9059
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Jagadesh Kiran N
>Priority: Major
>
> lssnapshottabledir should be exposed via WebHDFS.






[jira] [Resolved] (HDFS-9061) hdfs groups should be exposed via WebHDFS

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-9061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-9061.

  Resolution: Won't Fix
Target Version/s: (was: )

> hdfs groups should be exposed via WebHDFS
> -
>
> Key: HDFS-9061
> URL: https://issues.apache.org/jira/browse/HDFS-9061
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Jagadesh Kiran N
>Priority: Major
>
> It would be extremely useful from a REST perspective to expose which groups 
> the NN says the user belongs to.






[jira] [Resolved] (HDFS-9464) Documentation needs to be exposed

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-9464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-9464.

Resolution: Won't Fix

> Documentation needs to be exposed
> -
>
> Key: HDFS-9464
> URL: https://issues.apache.org/jira/browse/HDFS-9464
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> From the few builds I've done, there doesn't appear to be any user-facing 
> documentation that is actually exposed when mvn site is built.  HDFS-8745 
> allegedly added doxygen support, but even those docs aren't tied into the 
> documentation and/or site build.






[jira] [Resolved] (HDFS-9465) No header files in mvn package

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-9465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-9465.

Resolution: Won't Fix

> No header files in mvn package
> --
>
> Key: HDFS-9465
> URL: https://issues.apache.org/jira/browse/HDFS-9465
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> The current build appears to only include the shared library and no header 
> files to actually use the library in the final maven binary build.






[jira] [Resolved] (HDFS-9778) Add liberasurecode support

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-9778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-9778.

Resolution: Won't Fix

> Add liberasurecode support
> --
>
> Key: HDFS-9778
> URL: https://issues.apache.org/jira/browse/HDFS-9778
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Reporter: Allen Wittenauer
>Priority: Major
>
> It would be beneficial to use liberasurecode either as a supplement to or in 
> lieu of ISA-L, in order to provide the widest possible hardware/OS platform 
> and out-of-the-box support.  Major software platforms appear to be converging 
> on this library and we should too.






[jira] [Resolved] (HDFS-10509) httpfs generates docs in bin tarball

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-10509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-10509.
-
Resolution: Won't Fix

> httpfs generates docs in bin tarball 
> -
>
> Key: HDFS-10509
> URL: https://issues.apache.org/jira/browse/HDFS-10509
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, documentation
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Priority: Major
>
> When building a release, httpfs generates a share/doc/hadoop/httpfs dir with 
> content when it shouldn't.






[jira] [Reopened] (HDFS-11356) figure out what to do about hadoop-hdfs-project/hadoop-hdfs/src/main/native

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-11356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reopened HDFS-11356:
-

> figure out what to do about hadoop-hdfs-project/hadoop-hdfs/src/main/native
> ---
>
> Key: HDFS-11356
> URL: https://issues.apache.org/jira/browse/HDFS-11356
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HDFS-11356.001.patch
>
>
> The move of code to hdfs-client-native caused all sorts of loose ends, and 
> this is just another one.






[jira] [Resolved] (HDFS-7913) HADOOP_HDFS_LOG_DIR should be HDFS_LOG_DIR in deprecations

2018-02-06 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-7913.

Resolution: Won't Fix

> HADOOP_HDFS_LOG_DIR should be HDFS_LOG_DIR in deprecations
> --
>
> Key: HDFS-7913
> URL: https://issues.apache.org/jira/browse/HDFS-7913
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Critical
> Attachments: HDFS-7913-01.patch, HDFS-7913.patch
>
>
> The wrong variable is deprecated in hdfs-config.sh.  It should be 
> HDFS_LOG_DIR, not HADOOP_HDFS_LOG_DIR.  This is breaking backward 
> compatibility.
> It might be worthwhile to double-check the other deprecations to make sure 
> they are correct as well.
> Also, the release notes for the deprecation jira should be updated to reflect 
> this change.






[jira] [Created] (HDFS-12743) Fix env vars (again)

2017-10-28 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-12743:
---

 Summary: Fix env vars (again)
 Key: HDFS-12743
 URL: https://issues.apache.org/jira/browse/HDFS-12743
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer


Ozone's environment variables are out of whack with the rest of Hadoop again.








[jira] [Created] (HDFS-12711) deadly hdfs test

2017-10-25 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-12711:
---

 Summary: deadly hdfs test
 Key: HDFS-12711
 URL: https://issues.apache.org/jira/browse/HDFS-12711
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Allen Wittenauer









[jira] [Resolved] (HDFS-12253) CLONE - Pretty-format the output for DFSIO

2017-08-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-12253.
-
   Resolution: Duplicate
Fix Version/s: (was: 3.0.0-alpha1)
               (was: 2.8.0)

> CLONE - Pretty-format the output for DFSIO
> --
>
> Key: HDFS-12253
> URL: https://issues.apache.org/jira/browse/HDFS-12253
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Dennis Huo
>Assignee: Kai Zheng
>
> Ref. the following DFSIO output: I was surprised the test throughput was only 
> {{17}} MB/s, which doesn't make sense for a real cluster.  Maybe it's used for 
> another purpose?  For users, it may make more sense to give the throughput 1610 
> MB/s (1228800/763), calculated by *Total MBytes processed / Test exec time*.
> {noformat}
> 15/09/28 11:42:23 INFO fs.TestDFSIO: - TestDFSIO - : write
> 15/09/28 11:42:23 INFO fs.TestDFSIO:Date & time: Mon Sep 28 
> 11:42:23 CST 2015
> 15/09/28 11:42:23 INFO fs.TestDFSIO:Number of files: 100
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Total MBytes processed: 1228800.0
> 15/09/28 11:42:23 INFO fs.TestDFSIO:  Throughput mb/sec: 
> 17.457387239456878
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Average IO rate mb/sec: 17.57563018798828
> 15/09/28 11:42:23 INFO fs.TestDFSIO:  IO rate std deviation: 
> 1.7076328985378455
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Test exec time sec: 762.697
> 15/09/28 11:42:23 INFO fs.TestDFSIO: 
> {noformat}
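As a quick sanity check on the arithmetic proposed above (this check is not part of the original message), the suggested figure falls directly out of the report's own numbers, using the exec time rounded to 763 seconds as the reporter did:

```shell
# Proposed metric: Total MBytes processed / Test exec time sec
# 1228800 / 763 rounds to the 1610 MB/s quoted in the description
awk 'BEGIN { printf "%.0f MB/s\n", 1228800 / 763 }'
```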






[jira] [Reopened] (HDFS-12253) CLONE - Pretty-format the output for DFSIO

2017-08-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reopened HDFS-12253:
-

> CLONE - Pretty-format the output for DFSIO
> --
>
> Key: HDFS-12253
> URL: https://issues.apache.org/jira/browse/HDFS-12253
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Dennis Huo
>Assignee: Kai Zheng
> Fix For: 2.8.0, 3.0.0-alpha1
>
>
> Ref. the following DFSIO output: I was surprised the test throughput was only 
> {{17}} MB/s, which doesn't make sense for a real cluster.  Maybe it's used for 
> another purpose?  For users, it may make more sense to give the throughput 1610 
> MB/s (1228800/763), calculated by *Total MBytes processed / Test exec time*.
> {noformat}
> 15/09/28 11:42:23 INFO fs.TestDFSIO: - TestDFSIO - : write
> 15/09/28 11:42:23 INFO fs.TestDFSIO:Date & time: Mon Sep 28 
> 11:42:23 CST 2015
> 15/09/28 11:42:23 INFO fs.TestDFSIO:Number of files: 100
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Total MBytes processed: 1228800.0
> 15/09/28 11:42:23 INFO fs.TestDFSIO:  Throughput mb/sec: 
> 17.457387239456878
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Average IO rate mb/sec: 17.57563018798828
> 15/09/28 11:42:23 INFO fs.TestDFSIO:  IO rate std deviation: 
> 1.7076328985378455
> 15/09/28 11:42:23 INFO fs.TestDFSIO: Test exec time sec: 762.697
> 15/09/28 11:42:23 INFO fs.TestDFSIO: 
> {noformat}






[jira] [Created] (HDFS-12220) hdfs' parallel tests don't work for Windows

2017-07-28 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-12220:
---

 Summary: hdfs' parallel tests don't work for Windows
 Key: HDFS-12220
 URL: https://issues.apache.org/jira/browse/HDFS-12220
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Affects Versions: 3.0.0-beta1
 Environment: Windows
Reporter: Allen Wittenauer


create-parallel-tests-dirs in hadoop-hdfs-project/hadoop-hdfs/pom.xml fails with:

{code}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-antrun-plugin:1.7:run 
(create-parallel-tests-dirs) on project hadoop-hdfs: An Ant BuildException has 
occured: Directory 
F:\jenkins\jenkins-slave\workspace\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\jenkinsjenkins-slaveworkspacehadoop-trunk-winshadoop-hdfs-projecthadoop-hdfs
arget\test\data\1 creation was not successful for an unknown reason
[ERROR] around Ant part ...

[jira] [Created] (HDFS-11972) cblockserver uses wrong OPT env var

2017-06-13 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-11972:
---

 Summary: cblockserver uses wrong OPT env var
 Key: HDFS-11972
 URL: https://issues.apache.org/jira/browse/HDFS-11972
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Allen Wittenauer


Current codebase does:

{code}
  hadoop_debug "Appending HADOOP_CBLOCK_OPTS onto HADOOP_OPTS" 
 HADOOP_OPTS="${HADOOP_OPTS} ${HADOOP_CBLOCK_OPTS}"
{code}

This code block breaks consistency with the rest of the shell scripts:

a) It should be HDFS_CBLOCKSERVER_OPTS.
b) HDFS_CBLOCKSERVER_OPTS is already appended automatically; there is no need 
to do it explicitly.
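As a sketch of the convention being referenced (illustrative only: `hadoop_subcmd_opts_var` and the `hadoop_debug` stub are hypothetical stand-ins; the real logic lives in Hadoop's shell framework and is more involved), the framework derives the per-subcommand _OPTS variable name from the program and subcommand, so no hand-written append is needed:

```shell
#!/usr/bin/env bash
# Illustrative sketch of the naming convention only.
hadoop_debug() { echo "DEBUG: $*" >&2; }   # stand-in for the framework function

# Derive the per-subcommand _OPTS variable name, e.g.
# "hdfs cblockserver" -> HDFS_CBLOCKSERVER_OPTS
hadoop_subcmd_opts_var() {
  local program=$1 subcmd=$2
  echo "$(tr '[:lower:]' '[:upper:]' <<< "${program}_${subcmd}")_OPTS"
}

optsvar=$(hadoop_subcmd_opts_var hdfs cblockserver)
hadoop_debug "Appending ${optsvar} onto HADOOP_OPTS"
HADOOP_OPTS="${HADOOP_OPTS:-} ${!optsvar:-}"   # indirect expansion of the derived name
echo "${optsvar}"
```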






[jira] [Resolved] (HDFS-11599) distcp interrupt does not kill hadoop job

2017-05-23 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-11599.
-
Resolution: Not A Problem

Yes.  The program running on the command line is just a client after job 
launch. To kill the program actually doing the work, you'll need to use the 
yarn or mapred commands.

Closing as "Not a problem"

> distcp interrupt does not kill hadoop job
> -
>
> Key: HDFS-11599
> URL: https://issues.apache.org/jira/browse/HDFS-11599
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: David Fagnan
>
> keyboard interrupt for example leaves the hadoop job & copy still running, is 
> this intended behavior?






[jira] [Created] (HDFS-11724) libhdfs compilation is broken on OS X

2017-04-28 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-11724:
---

 Summary: libhdfs compilation is broken on OS X
 Key: HDFS-11724
 URL: https://issues.apache.org/jira/browse/HDFS-11724
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs
Affects Versions: 3.0.0-alpha3
Reporter: Allen Wittenauer
Priority: Blocker


Looks like HDFS-11529 added an include for malloc.h, which isn't available on 
OS X and likely other operating systems.  Many OSes, including OS X, use 
sys/malloc.h instead.






[jira] [Created] (HDFS-11356) figure out what to do about hadoop-hdfs-project/hadoop-hdfs/src/main/native

2017-01-20 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-11356:
---

 Summary: figure out what to do about 
hadoop-hdfs-project/hadoop-hdfs/src/main/native
 Key: HDFS-11356
 URL: https://issues.apache.org/jira/browse/HDFS-11356
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build, documentation
Affects Versions: 3.0.0-alpha2
Reporter: Allen Wittenauer
Priority: Critical


The hdfs-client-native creation caused all sorts of loose ends, and this is 
just another one.  






[jira] [Created] (HDFS-10509) httpfs generates docs in bin tarball

2016-06-08 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-10509:
---

 Summary: httpfs generates docs in bin tarball 
 Key: HDFS-10509
 URL: https://issues.apache.org/jira/browse/HDFS-10509
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build, documentation
Affects Versions: 3.0.0-alpha1
Reporter: Allen Wittenauer


When building a release, httpfs generates a share/doc/hadoop/httpfs dir with 
content when it shouldn't.






[jira] [Created] (HDFS-10486) "Cannot start secure datanode with unprivileged HTTP ports" should give config param

2016-06-03 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-10486:
---

 Summary: "Cannot start secure datanode with unprivileged HTTP 
ports" should give config param
 Key: HDFS-10486
 URL: https://issues.apache.org/jira/browse/HDFS-10486
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, security
Affects Versions: 3.0.0-alpha1
Reporter: Allen Wittenauer
Priority: Trivial


The "Cannot start secure datanode with unprivileged HTTP ports" error should 
really give users a hint as to which parameter should get changed.






[jira] [Resolved] (HDFS-10483) hadoop-hdfs-native-client tests asking for JDK6 on OS X

2016-06-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-10483.
-
Resolution: Not A Bug

> hadoop-hdfs-native-client tests asking for JDK6 on OS X
> ---
>
> Key: HDFS-10483
> URL: https://issues.apache.org/jira/browse/HDFS-10483
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: native
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Priority: Critical
>
> Running the native tests in hadoop-hdfs-native-client causes a dialog box to 
> pop up asking for JDK6 on my dev box.






[jira] [Created] (HDFS-10483) hadoop-hdfs-native-client tests asking for JDK6 on OS X

2016-06-03 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-10483:
---

 Summary: hadoop-hdfs-native-client tests asking for JDK6 on OS X
 Key: HDFS-10483
 URL: https://issues.apache.org/jira/browse/HDFS-10483
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: native
Affects Versions: 3.0.0-alpha1
Reporter: Allen Wittenauer
Priority: Critical


Running the native tests in hadoop-hdfs-native-client causes a dialog box to 
pop up asking for JDK6 on my dev box.






[jira] [Created] (HDFS-10436) dfs.block.access.token.enable should default on when security is !simple

2016-05-19 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-10436:
---

 Summary: dfs.block.access.token.enable should default on when 
security is !simple
 Key: HDFS-10436
 URL: https://issues.apache.org/jira/browse/HDFS-10436
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, namenode
Affects Versions: 3.0.0-alpha1
Reporter: Allen Wittenauer


Unless there is a valid configuration where dfs.block.access.token.enable is 
off and security is on, rather than shutting down we should just enable the 
block access tokens.






[jira] [Resolved] (HDFS-9030) libwebhdfs lacks headers, documentation; not part of mvn package

2016-04-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-9030.

Resolution: Won't Fix

libwebhdfs has been removed. closing as won't fix.

> libwebhdfs lacks headers, documentation; not part of mvn package
> 
>
> Key: HDFS-9030
> URL: https://issues.apache.org/jira/browse/HDFS-9030
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> This library is useless without header files to include and documentation on 
> how to use it.  Both appear to be missing from the mvn package and site 
> documentation.





[jira] [Resolved] (HDFS-2852) Jenkins pre-commit build does not pick up the correct attachment.

2016-03-20 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-2852.

Resolution: Won't Fix

This is by design. Closing as won't fix.

> Jenkins pre-commit build does not pick up the correct attachment.
> -
>
> Key: HDFS-2852
> URL: https://issues.apache.org/jira/browse/HDFS-2852
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.0-alpha
>Reporter: Kihwal Lee
>
> When two files are attached to a jira, slaves build twice but only the latest 
> attachment.
> For example, the patch_tested.txt from PreCommit-Admin shows correct 
> attachment numbers for HDFS-2784.
> From 
> https://builds.apache.org/job/PreCommit-Admin/56284/artifact/patch_tested.txt
> {noformat}
> ...
> HBASE-5271,12511722
> HDFS-2784,12511725
> HDFS-2836,12511727
> HDFS-2784,12511726
> {noformat}
> But the Jenkins build slaves had built #12511726 twice.





[jira] [Resolved] (HDFS-1519) HDFS build is broken, ivy-resolve-common does not find hadoop-common

2016-03-19 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-1519.

Resolution: Won't Fix

Stale.

> HDFS build is broken, ivy-resolve-common does not find hadoop-common
> 
>
> Key: HDFS-1519
> URL: https://issues.apache.org/jira/browse/HDFS-1519
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.21.0
> Environment: openSUSE 11.1, Linux roisin 2.6.27.48-0.2-default #1 SMP 
> 2010-07-29 20:06:52 +0200 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Jon Wilson
>
> HADOOP_DIR/hdfs$ ant ivy-resolve-common
> Buildfile: build.xml
> ivy-download:
>   [get] Getting: 
> http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
>   [get] To: /usr/products/hadoop/v0_21_0/ANY/hdfs/ivy/ivy-2.1.0.jar
>   [get] Not modified - so not downloaded
> ivy-init-dirs:
> ivy-probe-antlib:
> ivy-init-antlib:
> ivy-init:
> [ivy:configure] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ ::
> [ivy:configure] :: loading settings :: file = 
> /usr/products/hadoop/v0_21_0/ANY/hdfs/ivy/ivysettings.xml
> ivy-resolve-common:
> [ivy:resolve] 
> [ivy:resolve] :: problems summary ::
> [ivy:resolve]  WARNINGS
> [ivy:resolve] module not found: 
> org.apache.hadoop#hadoop-common;0.21.0
> [ivy:resolve]  apache-snapshot: tried
> [ivy:resolve]   
> https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.pom
> [ivy:resolve]   -- artifact 
> org.apache.hadoop#hadoop-common;0.21.0!hadoop-common.jar:
> [ivy:resolve]   
> https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.jar
> [ivy:resolve]  maven2: tried
> [ivy:resolve]   
> http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.pom
> [ivy:resolve]   -- artifact 
> org.apache.hadoop#hadoop-common;0.21.0!hadoop-common.jar:
> [ivy:resolve]   
> http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/0.21.0/hadoop-common-0.21.0.jar
> [ivy:resolve] ::
> [ivy:resolve] ::  UNRESOLVED DEPENDENCIES ::
> [ivy:resolve] ::
> [ivy:resolve] :: org.apache.hadoop#hadoop-common;0.21.0: not 
> found
> [ivy:resolve] ::
> [ivy:resolve] 
> [ivy:resolve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS
> BUILD FAILED
> /usr/products/hadoop/v0_21_0/ANY/hdfs/build.xml:1549: impossible to resolve 
> dependencies:
>   resolve failed - see output for details
> Total time: 3 seconds





[jira] [Resolved] (HDFS-2276) src/test/unit tests not being run in mavenized HDFS

2016-03-19 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-2276.

  Resolution: Fixed
Target Version/s:   (was: )

looks fixed. closing.

> src/test/unit tests not being run in mavenized HDFS
> ---
>
> Key: HDFS-2276
> URL: https://issues.apache.org/jira/browse/HDFS-2276
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, test
>Affects Versions: 2.0.0-alpha
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Critical
> Attachments: hdfs-2276.txt
>
>
> There are about 5 tests in src/test/unit that are no longer being run.





[jira] [Resolved] (HDFS-1494) Adding new target to build.xml to run test-core without compiling

2016-03-19 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-1494.

Resolution: Won't Fix

stale.

> Adding new target to build.xml to run test-core without compiling
> -
>
> Key: HDFS-1494
> URL: https://issues.apache.org/jira/browse/HDFS-1494
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: build
>Affects Versions: 0.21.0
> Environment: SLE v. 11, Apache Harmony 6
>Reporter: Guillermo Cabrera
>Priority: Minor
> Attachments: HDFS-1494.patch
>
>
> While testing Apache Harmony Select (a lightweight version of Harmony) with 
> Hadoop HDFS, we had to first build with Harmony and then run the tests under 
> Harmony Select via the test-core target. This was done in an effort to 
> investigate any issues with Harmony Select in running common. However, the 
> test-core target also compiles the classes, which we are unable to do with 
> Harmony Select. A new target is proposed that only runs the tests without 
> compiling them.





[jira] [Resolved] (HDFS-3316) jsvc should be listed as a dependency for "package" and "bin-package"

2016-03-18 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-3316.

Resolution: Won't Fix

closing as stale, as branch-1 is effectively EOLed.


> jsvc should be listed as a dependency for "package" and "bin-package"
> -
>
> Key: HDFS-3316
> URL: https://issues.apache.org/jira/browse/HDFS-3316
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Reporter: Owen O'Malley
>Assignee: Giridharan Kesavan
>Priority: Minor
> Attachments: hdfs-3316.patch
>
>
> The dependency on jsvc of targets "package" and "bin-package" would be much 
> clearer if made explicit, as in the proposed patch.  (However, the larger 
> issue of HADOOP-8364 should be addressed first.)





[jira] [Created] (HDFS-9778) Add liberasurecode support

2016-02-08 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-9778:
--

 Summary: Add liberasurecode support
 Key: HDFS-9778
 URL: https://issues.apache.org/jira/browse/HDFS-9778
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding
Reporter: Allen Wittenauer


It would be beneficial to use liberasurecode either as a supplement to or in lieu 
of ISA-L, in order to provide the widest possible platform and out-of-the-box support.





[jira] [Created] (HDFS-9610) cmake tests don't fail when they should?

2016-01-04 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-9610:
--

 Summary: cmake tests don't fail when they should?
 Key: HDFS-9610
 URL: https://issues.apache.org/jira/browse/HDFS-9610
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Allen Wittenauer
 Attachments: LastTest.log

While playing around with adding ctest output support to Yetus, I stumbled upon a 
case where the tests throw errors left and right but claim success.





[jira] [Reopened] (HDFS-402) Display the server version in dfsadmin -report

2015-12-16 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reopened HDFS-402:
---

With rolling upgrade, this is very important now.  The fact that this wasn't 
completed as part of that JIRA speaks to the level of 'fit and finish' that seems 
to be permeating Hadoop over the past few years.

> Display the server version in dfsadmin -report
> --
>
> Key: HDFS-402
> URL: https://issues.apache.org/jira/browse/HDFS-402
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jakob Homan
>Assignee: Uma Maheswara Rao G
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-402.patch, HDFS-402.patch, HDFS-402.patch, 
> hdfs-402.txt
>
>
> As part of HADOOP-5094, it was requested to include the server version in the 
> dfsadmin -report, to avoid the need to screen scrape to get this information:
> bq. Please do provide the server version, so there is a quick and non-taxing 
> way of determine what is the current running version on the namenode.
> Currently there is nothing in the dfs client protocol to query this 
> information.





[jira] [Created] (HDFS-9553) unit tests are leaving files undeletable by jenkins in target dir

2015-12-12 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-9553:
--

 Summary: unit tests are leaving files undeletable by jenkins in 
target dir
 Key: HDFS-9553
 URL: https://issues.apache.org/jira/browse/HDFS-9553
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Allen Wittenauer
Priority: Blocker


Once again we have a 'stuck' jenkins slave because a unit test is leaving files 
around that git clean can't remove:

From https://builds.apache.org/job/PreCommit-HDFS-Build/13851/console:

{code}
stderr: warning: failed to remove 
hadoop-hdfs-project/hadoop-hdfs/target/test/data/2
{code}

The last time this happened: INFRA-10785





[jira] [Reopened] (HDFS-9525) hadoop utilities need to support provided delegation tokens

2015-12-11 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reopened HDFS-9525:


> hadoop utilities need to support provided delegation tokens
> ---
>
> Key: HDFS-9525
> URL: https://issues.apache.org/jira/browse/HDFS-9525
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: HeeSoo Kim
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HDFS-7984.001.patch, HDFS-7984.002.patch, 
> HDFS-7984.003.patch, HDFS-7984.004.patch, HDFS-7984.005.patch, 
> HDFS-7984.006.patch, HDFS-7984.007.patch, HDFS-7984.patch
>
>
> When using the webhdfs:// filesystem (especially from distcp), we need the 
> ability to inject a delegation token rather than having webhdfs initialize its 
> own.  This would allow for cross-authentication-zone file system accesses.





[jira] [Created] (HDFS-9465) No header files in mvn package

2015-11-24 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-9465:
--

 Summary: No header files in mvn package
 Key: HDFS-9465
 URL: https://issues.apache.org/jira/browse/HDFS-9465
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Allen Wittenauer
Priority: Blocker


The current build appears to only include the shared library and no header 
files to actually use the library in the final maven binary build.





[jira] [Created] (HDFS-9464) Documentation needs to be exposed

2015-11-24 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-9464:
--

 Summary: Documentation needs to be exposed
 Key: HDFS-9464
 URL: https://issues.apache.org/jira/browse/HDFS-9464
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Allen Wittenauer
Priority: Blocker


From the few builds I've done, there doesn't appear to be any user-facing 
documentation that is actually exposed when mvn site is built.  HDFS-8745 
allegedly added doxygen support, but even those docs aren't tied into the docs 
and/or site build. 





[jira] [Reopened] (HDFS-9416) Respect OpenSSL and protobuf definitions in maven configuration when building libhdfspp

2015-11-24 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reopened HDFS-9416:


> Respect OpenSSL and protobuf definitions in maven configuration when building 
> libhdfspp
> ---
>
> Key: HDFS-9416
> URL: https://issues.apache.org/jira/browse/HDFS-9416
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Haohui Mai
>Assignee: Xiaobing Zhou
>Priority: Blocker
> Attachments: HDFS-9416.004.patch, HDFS-9416.HDFS-8707.004.patch, 
> HDFS-9416.HDFS-8707.005.patch
>
>
> As discovered in HDFS-9380 the current pom.xml / CMakeLists.txt in libhdfspp 
> does not respect the configuration from the maven command line. Subsequently 
> it breaks the Jenkins build.
> Both pom.xml and CMakeLists.txt need to be fixed to get Jenkins working again.





[jira] [Resolved] (HDFS-3909) Change Jenkins setting to test libwebhdfs

2015-11-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-3909.

Resolution: Fixed

Closing as fixed. Yetus now compiles libwebhdfs.

> Change Jenkins setting to test libwebhdfs
> -
>
> Key: HDFS-3909
> URL: https://issues.apache.org/jira/browse/HDFS-3909
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Jing Zhao
>
> HDFS-2656 adds libwebhdfs but Jenkins is not yet able to build it and test it.





[jira] [Resolved] (HDFS-4764) TestBlockReaderLocalLegacy flakes in MiniDFSCluster#shutdown

2015-11-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-4764.

Resolution: Cannot Reproduce

Closing this as stale.

> TestBlockReaderLocalLegacy flakes in MiniDFSCluster#shutdown
> 
>
> Key: HDFS-4764
> URL: https://issues.apache.org/jira/browse/HDFS-4764
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>
> I've seen this fail on two test-patch runs, and I'm pretty sure it's 
> unrelated.
> {noformat}
> Error Message
> Test resulted in an unexpected exit
> Stacktrace
> java.lang.AssertionError: Test resulted in an unexpected exit
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1416)
>   at 
> org.apache.hadoop.hdfs.TestBlockReaderLocalLegacy.testBothOldAndNewShortCircuitConfigured(TestBlockReaderLocalLegacy.java:152)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
> {noformat}





[jira] [Resolved] (HDFS-2881) org.apache.hadoop.hdfs.TestDatanodeBlockScanner Fails Intermittently

2015-11-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-2881.

Resolution: Not A Problem

Closing this jira. Likely already fixed.

> org.apache.hadoop.hdfs.TestDatanodeBlockScanner Fails Intermittently
> 
>
> Key: HDFS-2881
> URL: https://issues.apache.org/jira/browse/HDFS-2881
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0-alpha
>Reporter: Robert Joseph Evans
> Attachments: 
> TEST-org.apache.hadoop.hdfs.TestDatanodeBlockScanner.xml, 
> org.apache.hadoop.hdfs.TestDatanodeBlockScanner-output.txt, 
> org.apache.hadoop.hdfs.TestDatanodeBlockScanner.txt
>
>
> org.apache.hadoop.hdfs.TestDatanodeBlockScanner fails intermittently durring 
> test-patch.





[jira] [Resolved] (HDFS-9246) TestGlobPaths#pTestCurlyBracket is failing

2015-10-15 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-9246.

Resolution: Fixed

I've reverted the patch that caused this issue, so closing this one.

> TestGlobPaths#pTestCurlyBracket is failing
> --
>
> Key: HDFS-9246
> URL: https://issues.apache.org/jira/browse/HDFS-9246
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>
> com.google.re2j.PatternSyntaxException: error parsing regexp: Unclosed group 
> at pos 10: `myuser}{bc`
>   at org.apache.hadoop.fs.GlobPattern.error(GlobPattern.java:168)
>   at org.apache.hadoop.fs.GlobPattern.set(GlobPattern.java:154)
>   at org.apache.hadoop.fs.GlobPattern.<init>(GlobPattern.java:42)
>   at org.apache.hadoop.fs.GlobFilter.init(GlobFilter.java:67)
>   at org.apache.hadoop.fs.GlobFilter.<init>(GlobFilter.java:50)
>   at org.apache.hadoop.fs.Globber.doGlob(Globber.java:209)
>   at org.apache.hadoop.fs.Globber.glob(Globber.java:148)
>   at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1664)
>   at 
> org.apache.hadoop.fs.TestGlobPaths.prepareTesting(TestGlobPaths.java:758)
>   at 
> org.apache.hadoop.fs.TestGlobPaths.pTestCurlyBracket(TestGlobPaths.java:724)





[jira] [Created] (HDFS-9199) rename dfs.namenode.replication.min to dfs.replication.min

2015-10-05 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-9199:
--

 Summary: rename dfs.namenode.replication.min to dfs.replication.min
 Key: HDFS-9199
 URL: https://issues.apache.org/jira/browse/HDFS-9199
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Allen Wittenauer


dfs.namenode.replication.min should be dfs.replication.min to match the other 
dfs.replication config knobs.





[jira] [Created] (HDFS-9066) expose truncate via webhdfs

2015-09-11 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-9066:
--

 Summary: expose truncate via webhdfs
 Key: HDFS-9066
 URL: https://issues.apache.org/jira/browse/HDFS-9066
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: webhdfs
Affects Versions: 3.0.0
Reporter: Allen Wittenauer


Truncate should be exposed to WebHDFS.





[jira] [Created] (HDFS-9061) hdfs groups should be exposed via WebHDFS

2015-09-11 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-9061:
--

 Summary: hdfs groups should be exposed via WebHDFS
 Key: HDFS-9061
 URL: https://issues.apache.org/jira/browse/HDFS-9061
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: webhdfs
Affects Versions: 3.0.0
Reporter: Allen Wittenauer


It would be extremely useful from a REST perspective to expose which groups the 
NN says the user belongs to.





[jira] [Created] (HDFS-9060) expose snapshotdiff via WebHDFS

2015-09-11 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-9060:
--

 Summary: expose snapshotdiff via WebHDFS
 Key: HDFS-9060
 URL: https://issues.apache.org/jira/browse/HDFS-9060
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Allen Wittenauer


snapshotDiff should be exposed via webhdfs.





[jira] [Created] (HDFS-9059) Expose lssnapshottabledir via WebHDFS

2015-09-11 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-9059:
--

 Summary: Expose lssnapshottabledir via WebHDFS
 Key: HDFS-9059
 URL: https://issues.apache.org/jira/browse/HDFS-9059
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: webhdfs
Affects Versions: 3.0.0
Reporter: Allen Wittenauer


lssnapshottabledir should be exposed via WebHDFS.





[jira] [Created] (HDFS-9058) enable find via WebHDFS

2015-09-11 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-9058:
--

 Summary: enable find via WebHDFS
 Key: HDFS-9058
 URL: https://issues.apache.org/jira/browse/HDFS-9058
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: webhdfs
Affects Versions: 3.0.0
Reporter: Allen Wittenauer


It'd be useful to implement find over webhdfs rather than forcing the client to 
grab a lot of data.





[jira] [Created] (HDFS-9057) allow/disallow snapshots via webhdfs

2015-09-11 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-9057:
--

 Summary: allow/disallow snapshots via webhdfs
 Key: HDFS-9057
 URL: https://issues.apache.org/jira/browse/HDFS-9057
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: webhdfs
Affects Versions: 3.0.0
Reporter: Allen Wittenauer


We should be able to allow and disallow directories for snapshotting via 
WebHDFS.





[jira] [Created] (HDFS-9056) add set/remove quota capability to webhdfs

2015-09-11 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-9056:
--

 Summary: add set/remove quota capability to webhdfs
 Key: HDFS-9056
 URL: https://issues.apache.org/jira/browse/HDFS-9056
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: webhdfs
Affects Versions: 3.0.0
Reporter: Allen Wittenauer


It would be nice to be able to set and remove quotas via WebHDFS.





[jira] [Created] (HDFS-9055) WebHDFS REST v2

2015-09-11 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-9055:
--

 Summary: WebHDFS REST v2
 Key: HDFS-9055
 URL: https://issues.apache.org/jira/browse/HDFS-9055
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: webhdfs
Affects Versions: 3.0.0
Reporter: Allen Wittenauer


Enough changes are piling up to fix WebHDFS that we should probably 
update to a REST v2 API.





[jira] [Created] (HDFS-9051) webhdfs should support recursive list

2015-09-10 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-9051:
--

 Summary: webhdfs should support recursive list
 Key: HDFS-9051
 URL: https://issues.apache.org/jira/browse/HDFS-9051
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Reporter: Allen Wittenauer


There currently doesn't appear to be a way to recursively list a directory via 
WebHDFS without making an individual LISTSTATUS call per directory.
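As a sketch of the client-side workaround the issue describes, the snippet below walks a tree by issuing one LISTSTATUS call per directory. The `list_status` callable is a stand-in for an HTTP GET against `/webhdfs/v1/<path>?op=LISTSTATUS`; the response shape (`FileStatuses` / `FileStatus` / `pathSuffix` / `type`) follows the WebHDFS REST API, while the in-memory "namenode" is purely illustrative:

```python
def recursive_list(path, list_status):
    """Yield every file path under `path`, one LISTSTATUS call per directory.

    `list_status` is any callable mapping a path to a WebHDFS-style
    LISTSTATUS response dict; in real use it would perform
    GET /webhdfs/v1/<path>?op=LISTSTATUS against the namenode.
    """
    for status in list_status(path)["FileStatuses"]["FileStatus"]:
        child = path.rstrip("/") + "/" + status["pathSuffix"]
        if status["type"] == "DIRECTORY":
            # one extra round trip per subdirectory -- the cost this
            # issue proposes to eliminate with a server-side recursive list
            yield from recursive_list(child, list_status)
        else:
            yield child

# Tiny in-memory stand-in for the namenode, for illustration only.
FAKE_FS = {
    "/data": [{"pathSuffix": "a", "type": "DIRECTORY"},
              {"pathSuffix": "f1", "type": "FILE"}],
    "/data/a": [{"pathSuffix": "f2", "type": "FILE"}],
}

def fake_list_status(path):
    return {"FileStatuses": {"FileStatus": FAKE_FS.get(path, [])}}
```

With the fake tree above, `list(recursive_list("/data", fake_list_status))` returns `['/data/a/f2', '/data/f1']`, costing one LISTSTATUS call per directory.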





[jira] [Resolved] (HDFS-9029) libwebhdfs is not in the mvn package and likely missing from all distributions

2015-09-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-9029.

Resolution: Duplicate

> libwebhdfs is not in the mvn package and likely missing from all distributions
> --
>
> Key: HDFS-9029
> URL: https://issues.apache.org/jira/browse/HDFS-9029
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> libwebhdfs is not in the tar.gz generated by maven.





[jira] [Created] (HDFS-9031) libhdfs should use doxygen plugin to generate mvn site output

2015-09-06 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-9031:
--

 Summary: libhdfs should use doxygen plugin to generate mvn site 
output
 Key: HDFS-9031
 URL: https://issues.apache.org/jira/browse/HDFS-9031
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Allen Wittenauer
Priority: Blocker


Rather than point people to the hdfs.h file, we should take advantage of the 
Doxyfile and actually generate the API documentation during mvn site so it shows 
up on the website.





[jira] [Created] (HDFS-9030) libwebhdfs lacks headers and documentation

2015-09-06 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-9030:
--

 Summary: libwebhdfs lacks headers and documentation
 Key: HDFS-9030
 URL: https://issues.apache.org/jira/browse/HDFS-9030
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Allen Wittenauer
Priority: Blocker


This library is useless without header files to include and documentation on 
how to use it.





[jira] [Created] (HDFS-9029) libwebhdfs is not in the mvn package

2015-09-06 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-9029:
--

 Summary: libwebhdfs is not in the mvn package
 Key: HDFS-9029
 URL: https://issues.apache.org/jira/browse/HDFS-9029
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
 Environment: OS X
Reporter: Allen Wittenauer
Priority: Blocker


libwebhdfs is not in the tar.gz generated by maven.





[jira] [Reopened] (HDFS-7728) Avoid updating quota usage while loading edits

2015-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reopened HDFS-7728:


> Avoid updating quota usage while loading edits
> --
>
> Key: HDFS-7728
> URL: https://issues.apache.org/jira/browse/HDFS-7728
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: 2.8.0
>
> Attachments: HDFS-7728.000.patch, HDFS-7728.001.patch, 
> HDFS-7728.002.patch, HDFS-7728.003.patch
>
>
> Per the discussion 
> [here|https://issues.apache.org/jira/browse/HDFS-7611?focusedCommentId=14292454&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14292454],
>  currently we call {{INode#addSpaceConsumed}} during file/dir/snapshot 
> deletion, even if this is still in the edits-loading process. This is 
> unnecessary and can cause issues like HDFS-7611. We should collect the quota 
> changes and call {{FSDirectory#updateCount}} at the end of the operation.
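The fix described above — collecting the quota deltas while processing the deletion and applying them once at the end — can be sketched as follows (Python for brevity; every name here is an illustrative stand-in for the Java INode/FSDirectory code, not the actual patch):

```python
class QuotaDelta:
    """Accumulates namespace/storagespace changes instead of applying them eagerly."""

    def __init__(self):
        self.namespace = 0
        self.storagespace = 0

    def add(self, ns, ss):
        self.namespace += ns
        self.storagespace += ss


def delete_subtree(inodes, usage):
    """Delete a batch of inodes, updating the quota `usage` exactly once.

    `inodes` is an iterable of (namespace_cost, storagespace_cost) pairs and
    `usage` is a dict with 'namespace' and 'storagespace' counters.
    """
    delta = QuotaDelta()
    for ns, ss in inodes:       # e.g. while replaying a delete op from the edit log
        delta.add(-ns, -ss)     # collect the change; don't touch live usage yet
    # a single update at the end, in the spirit of FSDirectory#updateCount
    usage["namespace"] += delta.namespace
    usage["storagespace"] += delta.storagespace
    return usage
```

Deleting two inodes of sizes 100 and 50 from a tree whose usage is `{"namespace": 10, "storagespace": 1000}` yields `{"namespace": 8, "storagespace": 850}` with one quota update instead of two.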





[jira] [Resolved] (HDFS-7728) Avoid updating quota usage while loading edits

2015-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-7728.

Resolution: Fixed

> Avoid updating quota usage while loading edits
> --
>
> Key: HDFS-7728
> URL: https://issues.apache.org/jira/browse/HDFS-7728
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: 2.8.0
>
> Attachments: HDFS-7728.000.patch, HDFS-7728.001.patch, 
> HDFS-7728.002.patch, HDFS-7728.003.patch
>
>
> Per the discussion 
> [here|https://issues.apache.org/jira/browse/HDFS-7611?focusedCommentId=14292454&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14292454],
>  currently we call {{INode#addSpaceConsumed}} during file/dir/snapshot 
> deletion, even if this is still in the edits-loading process. This is 
> unnecessary and can cause issues like HDFS-7611. We should collect the quota 
> changes and call {{FSDirectory#updateCount}} at the end of the operation.





[jira] [Reopened] (HDFS-7745) HDFS should have its own daemon command and not rely on the one in common

2015-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reopened HDFS-7745:


> HDFS should have its own daemon command  and not rely on the one in common
> --
>
> Key: HDFS-7745
> URL: https://issues.apache.org/jira/browse/HDFS-7745
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Sanjay Radia
>
> HDFS should have its own daemon command and not rely on the one in common.  
> BTW, YARN split out its own daemon command during the project split. Note that 
> the hdfs command does have a --daemon flag, and hence the daemon script is 
> merely a wrapper. 





[jira] [Resolved] (HDFS-7745) HDFS should have its own daemon command and not rely on the one in common

2015-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-7745.

Resolution: Fixed

> HDFS should have its own daemon command  and not rely on the one in common
> --
>
> Key: HDFS-7745
> URL: https://issues.apache.org/jira/browse/HDFS-7745
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Sanjay Radia
>
> HDFS should have its own daemon command and not rely on the one in common.  
> BTW, YARN split out its own daemon command during the project split. Note that 
> the hdfs command does have a --daemon flag, and hence the daemon script is 
> merely a wrapper. 





[jira] [Resolved] (HDFS-8504) Some hdfs admin operations from client should have audit logs

2015-06-01 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-8504.

Resolution: Won't Fix

These should be getting logged in the NN log. Closing as won't fix.

> Some hdfs admin operations from client should have audit logs 
> --
>
> Key: HDFS-8504
> URL: https://issues.apache.org/jira/browse/HDFS-8504
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.7.1
>Reporter: Bob
>Priority: Minor
>
> The "hdfs dfsadmin xxx" commands below should have audit logs printed, because
> those operations are helpful for an administrator checking what happened to the
> HDFS service.
> *hdfs dfsadmin commands*
> {noformat}
> hdfs dfsadmin -safemode enter
> hdfs dfsadmin -safemode leave
> hdfs dfsadmin -rollEdits
> hdfs dfsadmin -refreshNodes
> hdfs dfsadmin -refreshServiceAcl
> hdfs dfsadmin -refreshUserToGroupsMappings
> hdfs dfsadmin -refreshSuperUserGroupsConfiguration
> hdfs dfsadmin -refreshCallQueue
> hdfs dfsadmin -shutdownDatanode
> {noformat}
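The resolution notes these operations already appear in the NameNode log. For illustration, NameNode audit entries are commonly a tab-separated sequence of key=value fields; the sketch below parses one such line. The sample line and field layout are illustrative assumptions, not output captured from this cluster.

```python
def parse_audit_entry(line: str) -> dict:
    """Parse the tab-separated key=value payload of an HDFS-style audit line."""
    fields = {}
    for part in line.strip().split("\t"):
        key, _, value = part.partition("=")
        fields[key] = value
    return fields

# Hypothetical sample entry, for illustration only.
sample = ("allowed=true\tugi=hdfs (auth:KERBEROS)\tip=/10.0.0.5"
          "\tcmd=refreshNodes\tsrc=null\tdst=null\tperm=null")
entry = parse_audit_entry(sample)
print(entry["cmd"])  # refreshNodes
```

An admin grepping hdfs-audit.log for `cmd=refreshNodes` would find the same fields this parser extracts.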





[jira] [Reopened] (HDFS-8485) Transparent Encryption Fails to work with Yarn/MapReduce

2015-05-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reopened HDFS-8485:


> Transparent Encryption Fails to work with Yarn/MapReduce
> 
>
> Key: HDFS-8485
> URL: https://issues.apache.org/jira/browse/HDFS-8485
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: RHEL-7, Kerberos 5
>Reporter: Ambud Sharma
>Priority: Critical
> Attachments: core-site.xml, hdfs-site.xml, kms-site.xml, 
> mapred-site.xml, yarn-site.xml
>
>
> Running a simple MapReduce job that writes to a path configured as an
> encryption zone throws the following exception:
> 11:26:26,343 INFO  [org.apache.hadoop.mapreduce.Job] (pool-14-thread-1) Task 
> Id : attempt_1432740034176_0001_m_00_2, Status : FAILED
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1) Error: java.io.IOException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:424)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:710)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1358)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1457)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1442)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:400)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:393)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:393)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:337)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> com.s3.ingestion.S3ImportMR$S3ImportMapper.map(S3ImportMR.java:112)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> com.s3.ingestion.S3ImportMR$S3ImportMapper.map(S3ImportMR.java:43)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> java.security.AccessController.doPrivileged(Native Method)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> javax.security.auth.Subject.doAs(Subject.java:422)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> 11:26:26,348 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> 11:26:26,348 ERROR [stderr] (pool-14-thread-1) Caused by: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> 11:26:26,348 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:306)
> 11:26:26,348 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.security.authenti

[jira] [Resolved] (HDFS-8485) Transparent Encryption Fails to work with Yarn/MapReduce

2015-05-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-8485.

Resolution: Not A Problem

> Transparent Encryption Fails to work with Yarn/MapReduce
> 
>
> Key: HDFS-8485
> URL: https://issues.apache.org/jira/browse/HDFS-8485
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: RHEL-7, Kerberos 5
>Reporter: Ambud Sharma
>Priority: Critical
> Attachments: core-site.xml, hdfs-site.xml, kms-site.xml, 
> mapred-site.xml, yarn-site.xml
>
>
> Running a simple MapReduce job that writes to a path configured as an
> encryption zone throws the following exception:
> 11:26:26,343 INFO  [org.apache.hadoop.mapreduce.Job] (pool-14-thread-1) Task 
> Id : attempt_1432740034176_0001_m_00_2, Status : FAILED
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1) Error: java.io.IOException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:424)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:710)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1358)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1457)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1442)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:400)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:393)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:393)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:337)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> com.s3.ingestion.S3ImportMR$S3ImportMapper.map(S3ImportMR.java:112)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> com.s3.ingestion.S3ImportMR$S3ImportMapper.map(S3ImportMR.java:43)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> java.security.AccessController.doPrivileged(Native Method)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> javax.security.auth.Subject.doAs(Subject.java:422)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> 11:26:26,348 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> 11:26:26,348 ERROR [stderr] (pool-14-thread-1) Caused by: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> 11:26:26,348 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:306)
> 11:26:26,348 ERROR [stderr] (pool-14-thread-1)at 
> org.a

[jira] [Resolved] (HDFS-8436) Changing the replication factor for a directory should apply to new files under the directory too

2015-05-19 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-8436.

Resolution: Won't Fix

Closing as won't fix.

This is working as designed. Directories don't have a replication factor of their
own, so when you set the replication factor on one, you are actually setting it on
the files in that directory.
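The semantics described here can be sketched with a toy model (a pure illustration; the class, paths, and default below are made up and are not HDFS APIs): setting replication on a directory rewrites existing files only, while files created later fall back to the configured default.

```python
DFS_REPLICATION_DEFAULT = 3  # stand-in for dfs.replication

class ToyNamespace:
    """Toy model: setrep on a directory prefix updates existing files
    only; new files still get the cluster-wide default."""
    def __init__(self):
        self.files = {}  # path -> replication factor

    def create(self, path):
        self.files[path] = DFS_REPLICATION_DEFAULT

    def setrep(self, prefix, factor):
        for path in self.files:
            if path.startswith(prefix):
                self.files[path] = factor

ns = ToyNamespace()
ns.create("/data/a")
ns.setrep("/data/", 2)   # existing file now has replication 2
ns.create("/data/b")     # new file still gets the default, 3
print(ns.files)          # {'/data/a': 2, '/data/b': 3}
```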

> Changing the replication factor for a directory should apply to new files 
> under the directory too
> -
>
> Key: HDFS-8436
> URL: https://issues.apache.org/jira/browse/HDFS-8436
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Mala Chikka Kempanna
>
> Changing the replication factor for a directory only affects the existing
> files; new files under the directory are created with the cluster's default
> replication factor (dfs.replication from hdfs-site.xml).
> I would expect new files written under a directory to have the same
> replication factor as the one set for the directory itself.





[jira] [Resolved] (HDFS-3064) Allow datanodes to start with non-privileged ports for testing.

2015-05-15 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-3064.

Resolution: Fixed

This has since been fixed. Closing.

> Allow datanodes to start with non-privileged ports for testing.
> ---
>
> Key: HDFS-3064
> URL: https://issues.apache.org/jira/browse/HDFS-3064
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HDFS-3064.trunk.patch
>
>
> HADOOP-8078 allows enabling security in unit tests. However, datanodes still 
> can't be started because they require privileged ports. We should allow 
> datanodes to come up on non-privileged ports ONLY for testing. This part of 
> the code will be removed anyway, when HDFS-2856 is committed.





[jira] [Resolved] (HDFS-2745) unclear to users which command to use to access the filesystem

2015-05-15 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-2745.

Resolution: Incomplete

I'm going to close this as stale, given the doc changes, etc. that have 
happened.

> unclear to users which command to use to access the filesystem
> --
>
> Key: HDFS-2745
> URL: https://issues.apache.org/jira/browse/HDFS-2745
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.0, 1.2.0, 2.0.2-alpha
>Reporter: Thomas Graves
>Assignee: Andrew Wang
>Priority: Critical
> Attachments: hdfs-2745-1.patch, hdfs-2745-2.patch
>
>
> It's unclear to users which command to use to access the filesystem. We need some
> background, and then we can fix things accordingly. We have 3 choices:
> hadoop dfs -> says it's deprecated and to use hdfs. If I run hdfs usage, it
> doesn't list any options like -ls in the usage, although there is an hdfs dfs
> command.
> hdfs dfs -> not in the usage of hdfs. If we recommend it when running hadoop
> dfs, it should at least be in the usage.
> hadoop fs -> seems like the one to use; it appears generic for any filesystem.
> Any input on what the recommended way to do this is? Based on that we
> can fix up the other issues.





[jira] [Resolved] (HDFS-2142) Namenode in trunk has much slower performance than Namenode in MR-279 branch

2015-05-15 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-2142.

Resolution: Cannot Reproduce

Stale.

> Namenode in trunk has much slower performance than Namenode in MR-279 branch
> 
>
> Key: HDFS-2142
> URL: https://issues.apache.org/jira/browse/HDFS-2142
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 0.23.0
>Reporter: Eric Payne
>
> I am measuring the performance of the namenode by running the
> org.apache.hadoop.fs.loadGenerator.LoadGenerator application. This
> application shows there is a very large slowdown in the processing of opens,
> writes, closes, and operations per second in trunk when compared to the
> MR-279 branch.
> There have been some race conditions and locking issues fixed in trunk, which
> is a very good thing because these race conditions were causing the namenode
> to crash under load conditions (see HDFS-1257). However, the slowdown to the
> namenode is considerable.
> I am still trying to verify which changes caused the slowdown. It was
> originally suggested that HDFS-988 may have caused the slowdown, but I
> don't think it was the culprit. I have checked out and built from SVN 3
> revisions previous to HDFS-988 and they all have about the same performance.
> Here is my environment:
> Host0: namenode daemon
> Host1-9: simulate many datanodes using org.apache.hadoop.hdfs.DataNodeCluster
>  
> LoadGenerator output on MR-279 branch:
> Average open execution time: 1.8496516782773909ms
> Average deletion execution time: 2.956340167046317ms
> Average create execution time: 3.725259427992913ms
> Average write_close execution time: 11.151860288534548ms
> Average operations per second: 1053.3ops/s
> LoadGenerator output on trunk:
> Average open execution time: 28.603515625ms
> Average deletion execution time: 32.20792079207921ms
> Average create execution time: 32.37326732673267ms
> Average write_close execution time: 82.84752475247525ms
> Average operations per second: 135.13ops/s





[jira] [Created] (HDFS-8414) test-patch.sh should only run javadoc if comments are touched

2015-05-15 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-8414:
--

 Summary: test-patch.sh should only run javadoc if comments are 
touched
 Key: HDFS-8414
 URL: https://issues.apache.org/jira/browse/HDFS-8414
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Allen Wittenauer


I suspect an optimization could be made: when a patch to Java code only
touches comments, the javac and unit test checks shouldn't fire off.





[jira] [Resolved] (HDFS-7745) HDFS should have its own daemon command and not rely on the one in common

2015-05-14 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-7745.

Resolution: Pending Closed

Yeah, I'm guessing Sanjay opened this not knowing about all the work happening 
in trunk to generalize all of this stuff, deprecating all of the daemon 
commands, etc, etc.

If he feels differently, he can always re-open.

> HDFS should have its own daemon command  and not rely on the one in common
> --
>
> Key: HDFS-7745
> URL: https://issues.apache.org/jira/browse/HDFS-7745
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Sanjay Radia
>
> HDFS should have its own daemon command and not rely on the one in common.
> BTW, YARN split out its own daemon command during the project split. Note that the
> hdfs command does have a --daemon flag, and hence the daemon script is merely a
> wrapper.





[jira] [Resolved] (HDFS-5745) Unnecessary disk check triggered when socket operation has problem.

2015-05-12 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-5745.

Resolution: Won't Fix

branch-1 is effectively dead and it sounds like this has been fixed in recent 
releases.  Closing this as stale based upon the previous analysis.


> Unnecessary disk check triggered when socket operation has problem.
> ---
>
> Key: HDFS-5745
> URL: https://issues.apache.org/jira/browse/HDFS-5745
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 1.2.1
>Reporter: MaoYuan Xian
>Assignee: jun aoki
> Attachments: HDFS-5745.patch
>
>
> When a BlockReceiver data transfer fails, SocketOutputStream
> translates the exception into an IOException with the message "The stream is
> closed":
> 2014-01-06 11:48:04,716 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> IOException in BlockReceiver.run():
> java.io.IOException: The stream is closed
> at org.apache.hadoop.net.SocketOutputStream.write
> at java.io.BufferedOutputStream.flushBuffer
> at java.io.BufferedOutputStream.flush
> at java.io.DataOutputStream.flush
> at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run
> at java.lang.Thread.run
> This causes the checkDiskError method of DataNode to be called, which triggers a
> disk scan.
> Can we make a modification like the one below in checkDiskError to avoid these
> unnecessary disk scan operations?
> {code}
> --- a/src/hdfs/org/apache/hadoop/hdfs/server/datanode/DataNode.java
> +++ b/src/hdfs/org/apache/hadoop/hdfs/server/datanode/DataNode.java
> @@ -938,7 +938,8 @@ public class DataNode extends Configured
>   || e.getMessage().startsWith("An established connection was 
> aborted")
>   || e.getMessage().startsWith("Broken pipe")
>   || e.getMessage().startsWith("Connection reset")
> - || e.getMessage().contains("java.nio.channels.SocketChannel")) {
> + || e.getMessage().contains("java.nio.channels.SocketChannel")
> + || e.getMessage().startsWith("The stream is closed")) {
>LOG.info("Not checking disk as checkDiskError was called on a network" 
> +
>  " related exception"); 
>return;
> {code}
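The intent of the patch above, i.e. treating certain exception messages as network-related so they do not trigger a disk scan, can be sketched as follows. This is a simplified Python rendering of the string matching in the Java diff, not actual DataNode code.

```python
NETWORK_MESSAGE_PREFIXES = (
    "An established connection was aborted",
    "Broken pipe",
    "Connection reset",
    "The stream is closed",  # the prefix the patch proposes adding
)

def is_network_related(message) -> bool:
    """Mirror of the checkDiskError matching: exception messages that
    look network-related should not trigger a disk scan."""
    if message is None:
        return False
    return (message.startswith(NETWORK_MESSAGE_PREFIXES)
            or "java.nio.channels.SocketChannel" in message)

print(is_network_related("The stream is closed"))     # True
print(is_network_related("No space left on device"))  # False
```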





[jira] [Created] (HDFS-8317) test-patch.sh should be documented

2015-05-04 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-8317:
--

 Summary: test-patch.sh should be documented
 Key: HDFS-8317
 URL: https://issues.apache.org/jira/browse/HDFS-8317
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Allen Wittenauer


It might be useful to have all of test-patch.sh's functionality documented: how
to use it, power-user hints, etc.





[jira] [Resolved] (HDFS-2031) request HDFS test-patch to support coordinated change in COMMON jar, for post-patch build only

2015-05-01 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-2031.

Resolution: Fixed

This has effectively been fixed.

> request HDFS test-patch to support coordinated change in COMMON jar, for 
> post-patch build only
> --
>
> Key: HDFS-2031
> URL: https://issues.apache.org/jira/browse/HDFS-2031
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: build
>Reporter: Matt Foley
>
> For dev testing, we need to test an HDFS patch that depends on a modified
> COMMON jar.
> For casual testing, one can build in COMMON with "ant mvn-install", then 
> build in HDFS with "ant -Dresolvers=internal", and the modified COMMON jar 
> from the local maven cache (~/.m2/) will be used in the HDFS build.  This 
> works fine.
> However, running test-patch locally should build:
> * pre-patch: build unmodified HDFS with reference to generic Apache COMMON 
> jar (because the modified COMMON jar may be incompatible with the unmodified 
> HDFS)
> * post-patch:  build modified HDFS with reference to custom local COMMON jar
> Currently, each developer has their favorite way to hack build.xml to make
> this work. It would be nice if an ant build switch were available for this
> use case. It seems to me the easiest way to accommodate it would be to make
> "-Dresolvers=internal" effective only for the post-patch build of
> test-patch, and let the pre-patch build use the generic Apache jar.
> Of course the same thing applies to MAPREDUCE test-patch when dependent on 
> modified COMMON and/or HDFS jars.





[jira] [Resolved] (HDFS-2009) test-patch comment doesn't show names of failed FI tests

2015-04-25 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-2009.

Resolution: Won't Fix

stale.

> test-patch comment doesn't show names of failed FI tests
> 
>
> Key: HDFS-2009
> URL: https://issues.apache.org/jira/browse/HDFS-2009
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, test
>Reporter: Todd Lipcon
>
> Looks like test-patch.sh only looks at the build/test/*xml test results, but 
> it should also look at build-fi/test/*xml I think





[jira] [Created] (HDFS-8251) Move the synthetic load generator into its own package

2015-04-25 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-8251:
--

 Summary: Move the synthetic load generator into its own package
 Key: HDFS-8251
 URL: https://issues.apache.org/jira/browse/HDFS-8251
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Allen Wittenauer


It doesn't really make sense for the HDFS load generator to be a part of the
(extremely large) mapreduce jobclient package. It should be pulled out and put in
its own package, probably in hadoop-tools.





[jira] [Resolved] (HDFS-8003) hdfs has 3 new shellcheck warnings and the related code change is questionable

2015-03-30 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-8003.

Resolution: Duplicate

Duping this to HDFS-7991.

> hdfs has 3 new shellcheck warnings and the related code change is questionable
> --
>
> Key: HDFS-8003
> URL: https://issues.apache.org/jira/browse/HDFS-8003
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>
> HDFS-6353 introduced three new shellcheck warnings due to an unprotected
> ${HADOOP_OPTS}.





[jira] [Created] (HDFS-8003) hdfs has 3 new shellcheck warnings

2015-03-27 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-8003:
--

 Summary: hdfs has 3 new shellcheck warnings
 Key: HDFS-8003
 URL: https://issues.apache.org/jira/browse/HDFS-8003
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Allen Wittenauer


HDFS-6353 introduced three new shellcheck warnings due to an unprotected
${HADOOP_OPTS}.







[jira] [Created] (HDFS-7984) WebHDFS needs to support

2015-03-24 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-7984:
--

 Summary: WebHDFS needs to support 
 Key: HDFS-7984
 URL: https://issues.apache.org/jira/browse/HDFS-7984
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Priority: Blocker


When using the webhdfs:// filesystem (especially from distcp), we need the
ability to inject a delegation token rather than have webhdfs initialize its own.
This would allow for cross-authentication-zone filesystem access.





[jira] [Created] (HDFS-7983) HTTPFS proxy server needs pluggable-auth support

2015-03-24 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-7983:
--

 Summary: HTTPFS proxy server needs pluggable-auth support
 Key: HDFS-7983
 URL: https://issues.apache.org/jira/browse/HDFS-7983
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Priority: Blocker


Now that WebHDFS has been fixed to support pluggable auth, the httpfs proxy 
server also needs support.





[jira] [Resolved] (HDFS-2771) Move Federation and WebHDFS documentation into HDFS project

2015-03-16 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-2771.

  Resolution: Implemented
Target Version/s:   (was: )

fixed eons ago

> Move Federation and WebHDFS documentation into HDFS project
> ---
>
> Key: HDFS-2771
> URL: https://issues.apache.org/jira/browse/HDFS-2771
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: webhdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Todd Lipcon
>  Labels: newbie
>
> For some strange reason, the WebHDFS and Federation documentation is 
> currently in the hadoop-yarn site. This is counter-intuitive. We should move 
> these documents to an hdfs site, or if we think that all documentation should 
> go on one site, it should go into the hadoop-common project somewhere.





[jira] [Reopened] (HDFS-2360) Ugly stacktrace when quota exceeds

2015-03-16 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reopened HDFS-2360:


OK, then let me re-open it.

Having oodles of useless stack trace here is *incredibly* user-unfriendly.  
Users do miss this message very very often because, believe it or not, they 
aren't Java programmers who are used to reading these things.
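One way to make the failure friendlier is a client-side summarizer that keeps only the exception message and drops the "at ..." frames. The sketch below is a hypothetical illustration, not existing HDFS code; the sample trace is abbreviated from the one quoted in this issue.

```python
def summarize_trace(trace: str) -> str:
    """Keep only the message lines of a Java-style stack trace,
    dropping 'at ...' frames and 'Caused by:' repetitions."""
    keep = []
    for line in trace.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith(("at ", "Caused by:")):
            continue
        keep.append(stripped)
    return "\n".join(keep)

trace = (
    "org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: "
    "The DiskSpace quota of /user/hdfsqa/testDir is exceeded\n"
    "        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyQuota(FSDirectory.java:1609)\n"
    "        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:1383)\n"
)
print(summarize_trace(trace))
```

The user would then see only the quota message, which is the actionable part.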

> Ugly stacktrace when quota exceeds
> --
>
> Key: HDFS-2360
> URL: https://issues.apache.org/jira/browse/HDFS-2360
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 0.23.0
>Reporter: Rajit Saha
>Priority: Minor
>
> Would it be better to catch the exception and show a small, reasonable message
> to the user when they exceed the quota?
> $hdfs  dfs -mkdir testDir
> $hdfs  dfsadmin -setSpaceQuota 191M  testDir
> $hdfs dfs -count -q testDir
> none  inf  200278016  200278016  1  0  0  hdfs://:/user/hdfsqa/testDir
> $hdfs dfs -put /etc/passwd /user/hadoopqa/testDir 
> 11/09/19 08:08:15 WARN hdfs.DFSClient: DataStreamer Exception
> org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota 
> of /user/hdfsqa/testDir is exceeded:
> quota=191.0m diskspace consumed=768.0m
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectoryWithQuota.verifyQuota(INodeDirectoryWithQuota.java:159)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyQuota(FSDirectory.java:1609)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:1383)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.addBlock(FSDirectory.java:370)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.allocateBlock(FSNamesystem.java:1681)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1476)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:389)
> at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:365)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1496)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1492)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1135)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1490)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1100)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:972)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:454)
> Caused by: org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The 
> DiskSpace quota of /user/hdfsqa/testDir is
> exceeded: quota=191.0m diskspace consumed=768.0m
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectoryWithQuota.verifyQuota(INodeDirectoryWithQuota.java:159)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyQuota(FSDirectory.java:1609)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:1383)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.addBlock(FSDirectory.java:370)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.allocateBlock(FSNamesystem.java:1681)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1476)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:389)
> at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMe

[jira] [Resolved] (HDFS-2748) fstime of image file newer than fstime of editlog

2015-03-11 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-2748.

Resolution: Fixed

This is likely a stale issue. Closing. Reopen if you feel otherwise.


> fstime of image file newer than fstime of editlog
> -
>
> Key: HDFS-2748
> URL: https://issues.apache.org/jira/browse/HDFS-2748
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.20.2
>Reporter: Denny Ye
>  Labels: hdfs
>
> 1.1 first shutdown, then restart and then the fsimage was loaded and saved to 
> disk and editlog was cleared.
> 1.2 shutdown again when in safe mode to make sure no change in editlog, then 
> restart and then the fsimage was loaded and save to disk again, but the 
> editlog was not refreshed because it was empty.
> 1.3 shutdown again when in safe mode to make sure no change in editlog, the 
> restart and then an ERROR printed in log which basically was saying fstime of 
> fsimage was larger then fstime of editlog (which was obviously caused by 
> saving fsimage again and again when no change in editlog), and then the 
> editlog would be discarded (this is OK, the editlog was empty), and current 
> fsimage would be loaded as the latest fsimage. And again save fsimage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-1029) Image corrupt with number of files = 1

2015-03-11 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-1029.

Resolution: Fixed

This is likely a stale issue. Closing. Reopen if you feel otherwise.


> Image corrupt with number of files = 1
> --
>
> Key: HDFS-1029
> URL: https://issues.apache.org/jira/browse/HDFS-1029
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 0.20.1
>Reporter: Todd Lipcon
>
> Last week I recovered a corrupt namenode image that was completely sane 
> except that the "number of files" in the header was set to 1, rather than the 
> correct number (many million). The NN in question had been running for some 
> time, so I believe the 2NN uploaded this broken image as a checkpoint. After 
> this point, of course, no further checkpoints occurred, and the NN failed to 
> load its image upon restart.
> Not sure how this happens - my only thought is that we may need to add 
> synchronization on the nsCount field in INodeDirectoryWithQuota, but that's a 
> long shot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7913) HADOOP_HDFS_LOG_DIR should be HDFS_LOG_DIR in deprecations

2015-03-10 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-7913:
--

 Summary: HADOOP_HDFS_LOG_DIR should be HDFS_LOG_DIR in deprecations
 Key: HDFS-7913
 URL: https://issues.apache.org/jira/browse/HDFS-7913
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Priority: Critical


The wrong variable is deprecated in hdfs-env.sh.  It should be HDFS_LOG_DIR, 
not HADOOP_HDFS_LOG_DIR.  This is breaking backward compatibility.

It might be worthwhile to double-check the other deprecations to make sure they 
are correct as well.

Also, release notes for the deprecation jira should be updated to reflect this 
change.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-1056) Multi-node RPC deadlocks during block recovery

2015-03-09 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-1056.

  Resolution: Unresolved
Target Version/s:   (was: )

closing as stale.

> Multi-node RPC deadlocks during block recovery
> --
>
> Key: HDFS-1056
> URL: https://issues.apache.org/jira/browse/HDFS-1056
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 0.20.2, 0.21.0, 0.22.0
>Reporter: Todd Lipcon
> Fix For: 0.20-append
>
> Attachments: 
> 0013-HDFS-1056.-Fix-possible-multinode-deadlocks-during-b.patch
>
>
> Believe it or not, I'm seeing HADOOP-3657 / HADOOP-3673 in a 5-node 0.20 
> cluster. I have many concurrent writes on the cluster, and when I kill a DN, 
> some percentage of the time I get one of these cross-node deadlocks among 3 
> of the nodes (replication 3). All of the DN RPC server threads are tied up 
> waiting on RPC clients to other datanodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-7605) hadoop distcp hftp://192.168.80.31:50070/user/wp hdfs://192.168.210.10:8020/

2015-03-09 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-7605.

Resolution: Cannot Reproduce

> hadoop distcp hftp://192.168.80.31:50070/user/wp hdfs://192.168.210.10:8020/
> 
>
> Key: HDFS-7605
> URL: https://issues.apache.org/jira/browse/HDFS-7605
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 2.6.0
> Environment: between hadoop 1.1.2 and hadoop 2.6.0, distcp on CentOS 6.4
>Reporter: weipan
>
> Error: java.io.IOException: File copy failed: 
> hftp://192.168.80.31:50070/user/wp/test.txt --> 
> hdfs://192.168.210.10:8020/wp/test.txt
>   at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:284)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:252)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.io.IOException: Couldn't run retriable-command: Copying 
> hftp://192.168.80.31:50070/user/wp/test.txt to 
> hdfs://192.168.210.10:8020/wp/test.txt
>   at 
> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
>   at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:280)
>   ... 10 more
> Caused by: 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand$CopyReadException: 
> java.net.SocketTimeoutException: connect timed out
>   at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.getInputStream(RetriableFileCopyCommand.java:303)
>   at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyBytes(RetriableFileCopyCommand.java:248)
>   at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyToFile(RetriableFileCopyCommand.java:184)
>   at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:124)
>   at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:100)
>   at 
> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
>   ... 11 more
> Caused by: java.net.SocketTimeoutException: connect timed out
>   at java.net.PlainSocketImpl.socketConnect(Native Method)
>   at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)
>   at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213)
>   at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200)
>   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
>   at java.net.Socket.connect(Socket.java:529)
>   at sun.net.NetworkClient.doConnect(NetworkClient.java:158)
>   at sun.net.www.http.HttpClient.openServer(HttpClient.java:411)
>   at sun.net.www.http.HttpClient.openServer(HttpClient.java:525)
>   at sun.net.www.http.HttpClient.<init>(HttpClient.java:208)
>   at sun.net.www.http.HttpClient.New(HttpClient.java:291)
>   at sun.net.www.http.HttpClient.New(HttpClient.java:310)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:987)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:923)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:841)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.followRedirect(HttpURLConnection.java:2156)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1390)
>   at 
> java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:379)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem$RangeHeaderUrlOpener.connect(HftpFileSystem.java:370)
>   at 
> org.apache.hadoop.hdfs.web.ByteRangeInputStream.openInputStream(ByteRangeInputStream.java:120)
>   at 
> org.apache.hadoop.hdfs.web.ByteRangeInputStream.getInputStream(ByteRangeInputStream.java:104)
>   at 
> org.apache.hadoop.hdfs.web.ByteRangeInputStream.<init>(ByteRangeInputStream.java:89)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem$RangeHeaderInputStream.<init>(HftpFileSystem.java:383)
>   at 
> org.apache.hadoop.hdfs.web.HftpFileSystem$RangeHeaderInputStream.<init>(HftpFileSystem.java:388)

[jira] [Resolved] (HDFS-1951) Null pointer exception comes when Namenode recovery happens and there is no response from client to NN more than the hardlimit for NN recovery and the current block is mo

2015-03-09 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-1951.

Resolution: Won't Fix

> Null pointer exception comes when Namenode recovery happens and there is no 
> response from client to NN more than the hardlimit for NN recovery and the 
> current block is more than the prev block size in NN 
> 
>
> Key: HDFS-1951
> URL: https://issues.apache.org/jira/browse/HDFS-1951
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 0.20-append
>Reporter: ramkrishna.s.vasudevan
> Attachments: HDFS-1951.patch
>
>
> Null pointer exception comes when Namenode recovery happens and there is no 
> response from client to NN more than the hardlimit for NN recovery and the 
> current block is more than the prev block size in NN 
> 1. Write using a client to 2 datanodes
> 2. Kill one data node and allow pipeline recovery.
> 3. write somemore data to the same block
> 4. Parallely allow the namenode recovery to happen
> Null pointer exception will come in addStoreBlock api.
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-2075) Add "Number of Reporting Nodes" to namenode web UI

2015-03-09 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-2075.

Resolution: Unresolved

closing as stale

> Add "Number of Reporting Nodes" to namenode web UI
> --
>
> Key: HDFS-2075
> URL: https://issues.apache.org/jira/browse/HDFS-2075
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, tools
>Affects Versions: 0.20.1, 0.20.2
>Reporter: Xing Jin
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-2075.patch.txt
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The namenode web UI misses some information when safemode is on (e.g., the 
> number of reporting nodes). These information will help us understand the 
> system status.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-2723) tomcat tar in the apache archive is corrupted

2015-03-09 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-2723.

Resolution: Cannot Reproduce

> tomcat tar in the apache archive is corrupted
> -
>
> Key: HDFS-2723
> URL: https://issues.apache.org/jira/browse/HDFS-2723
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.0
> Environment: linux , Windows
>Reporter: chackaravarthy
>
> when running mvn package , getting the following error and hence not able to 
> create tarball
> {noformat}
> [mkdir] Created dir: 
> /root/ravi/mapred/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/tomcat.exp
>  [exec] Current OS is Linux
>  [exec] Executing 'sh' with arguments:
>  [exec] './tomcat-untar.sh'
>  [exec] 
>  [exec] The ' characters around the executable and arguments are
>  [exec] not part of the command.
> Execute:Java13CommandLauncher: Executing 'sh' with arguments:
> './tomcat-untar.sh'
> The ' characters around the executable and arguments are
> not part of the command.
>  [exec] 
>  [exec] gzip: stdin: unexpected end of file
>  [exec] tar: Unexpected EOF in archive
>  [exec] tar: Unexpected EOF in archive
>  [exec] tar: Error is not recoverable: exiting now
> [INFO] 
> 
> [INFO] Reactor Summary:
> [INFO] 
> [INFO] Apache Hadoop Main  SUCCESS [7.984s]
> [INFO] Apache Hadoop Project POM . SUCCESS [3.028s]
> [INFO] Apache Hadoop Annotations . SUCCESS [5.451s]
> [INFO] Apache Hadoop Assemblies .. SUCCESS [2.987s]
> [INFO] Apache Hadoop Project Dist POM  SUCCESS [13.675s]
> [INFO] Apache Hadoop Auth  SUCCESS [5.766s]
> [INFO] Apache Hadoop Auth Examples ... SUCCESS [6.258s]
> [INFO] Apache Hadoop Common .. SUCCESS [3:50.945s]
> [INFO] Apache Hadoop Common Project .. SUCCESS [0.539s]
> [INFO] Apache Hadoop HDFS  SUCCESS [3:01.761s]
> [INFO] Apache Hadoop HttpFS .. FAILURE [30.532s]
> {noformat}
> It is because the tomcat tarball available in 
> "http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.32/bin/apache-tomcat-6.0.32.tar.gz";
>  is corrupted.
> Getting "Unexpected End of Archive" when trying to untar this tarball.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-2774) Use TestDFSIO to test HDFS, and Failed with the exception: All datanodes are bad. Aborting...

2015-03-09 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-2774.

Resolution: Won't Fix

> Use TestDFSIO to test HDFS, and Failed with the exception: All datanodes are 
> bad. Aborting...
> -
>
> Key: HDFS-2774
> URL: https://issues.apache.org/jira/browse/HDFS-2774
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.20.2
> Environment: 20 nodes with 2-core CPU, 1G RAM, 20G hard disk, 1 
> switch
>Reporter: bdsyq
>  Labels: hadoop
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> use TestDFSIO to test the HDFS
> use the command:  hadoop jar TestDFSIO -write -nrFiles 10 -fileSize 500
> when running ,errors occurs:
> 12/01/09 16:00:45 INFO mapred.JobClient: Task Id : 
> attempt_201201091556_0001_m_06_2, Status : FAILED
> java.io.IOException: All datanodes 192.168.0.17:50010 are bad. Aborting...
>  at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2556)
>  at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2102)
>  at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2265)
> attempt_201201091637_0002_m_05_0: log4j:WARN No appenders could be found 
> for logger (org.apache.hadoop.hdfs.DFSClient).
> attempt_201201091637_0002_m_05_0: log4j:WARN Please initialize the log4j 
> system properly.
> I don't know why?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-1961) New architectural documentation created

2015-03-09 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-1961.

Resolution: Won't Fix

Closing as stale.

> New architectural documentation created
> ---
>
> Key: HDFS-1961
> URL: https://issues.apache.org/jira/browse/HDFS-1961
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.21.0
>Reporter: Rick Kazman
>Assignee: Rick Kazman
>  Labels: architecture, hadoop, newbie
> Attachments: HDFS ArchDoc.Jira.docx, 
> HDFS-1961_ArchDoc.comments.RK.052011.docx, HDFS-1961_ArchDoc.comments.docx
>
>
> This material provides an overview of the HDFS architecture and is intended 
> for contributors. The goal of this document is to provide a guide to the 
> overall structure of the HDFS code so that contributors can more effectively 
> understand how changes that they are considering can be made, and the 
> consequences of those changes. The assumption is that the reader has a basic 
> understanding of HDFS, its purpose, and how it fits into the Hadoop project 
> suite. 
> An HTML version of the architectural documentation can be found at:  
> http://kazman.shidler.hawaii.edu/ArchDoc.html
> All comments and suggestions for improvements are appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-2300) TestFileAppend4 and TestMultiThreadedSync fail on 20.append and 20-security.

2015-03-09 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-2300.

  Resolution: Fixed
Target Version/s: 0.20.205.0, 0.20-append  (was: 0.20-append, 0.20.205.0)

> TestFileAppend4 and TestMultiThreadedSync fail on 20.append and 20-security.
> 
>
> Key: HDFS-2300
> URL: https://issues.apache.org/jira/browse/HDFS-2300
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.20-append
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Fix For: 0.20.205.0
>
> Attachments: HDFS-2300.20-append.1.patch
>
>
> TestFileAppend4 and TestMultiThreadedSync fail on the 20.append and 
> 20-security branch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

