[jira] [Created] (HADOOP-18514) Remove the legacy Ozone website

2022-10-28 Thread Arpit Agarwal (Jira)
Arpit Agarwal created HADOOP-18514:
--

 Summary: Remove the legacy Ozone website
 Key: HADOOP-18514
 URL: https://issues.apache.org/jira/browse/HADOOP-18514
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Arpit Agarwal


Let's remove the old Ozone website: https://hadoop.apache.org/ozone/

Ozone moved to a separate TLP long ago and has its own website.






[jira] [Created] (HADOOP-16992) Update download links

2020-04-16 Thread Arpit Agarwal (Jira)
Arpit Agarwal created HADOOP-16992:
--

 Summary: Update download links
 Key: HADOOP-16992
 URL: https://issues.apache.org/jira/browse/HADOOP-16992
 Project: Hadoop Common
  Issue Type: Improvement
  Components: website
Reporter: Arpit Agarwal


The download links for signatures/checksums/KEYS should be updated from 
dist.apache.org to https://downloads.apache.org/hadoop/ozone/.






[jira] [Created] (HADOOP-16688) Update Hadoop website to mention Ozone mailing lists

2019-11-06 Thread Arpit Agarwal (Jira)
Arpit Agarwal created HADOOP-16688:
--

 Summary: Update Hadoop website to mention Ozone mailing lists
 Key: HADOOP-16688
 URL: https://issues.apache.org/jira/browse/HADOOP-16688
 Project: Hadoop Common
  Issue Type: Improvement
  Components: website
Reporter: Arpit Agarwal


Now that Ozone has its own mailing lists, let's list them on the Hadoop 
website.

https://hadoop.apache.org/mailing_lists.html

Thanks to [~ayushtkn] for suggesting this.






[jira] [Reopened] (HADOOP-13386) Upgrade Avro to 1.8.x

2019-03-25 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HADOOP-13386:


Reopening since HADOOP-14992 upgraded Avro to 1.7.7. This jira requested an 
upgrade to 1.8.x.

> Upgrade Avro to 1.8.x
> -
>
> Key: HADOOP-13386
> URL: https://issues.apache.org/jira/browse/HADOOP-13386
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Ben McCann
>Priority: Major
>
> Avro 1.8.x makes generated classes serializable, which makes them much easier 
> to use with Spark. It would be great to upgrade Avro to 1.8.x.






[jira] [Created] (HADOOP-15867) Allow registering MBeans without additional jmx properties

2018-10-21 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-15867:
--

 Summary: Allow registering MBeans without additional jmx properties
 Key: HADOOP-15867
 URL: https://issues.apache.org/jira/browse/HADOOP-15867
 Project: Hadoop Common
  Issue Type: Improvement
  Components: metrics
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


HDDS and Ozone use the MBeans.register overload added by HADOOP-15339. This is 
missing in Apache Hadoop 3.1.0 and earlier. This prevents us from building 
Ozone with earlier versions of Hadoop. More commonly, we see runtime exceptions 
if an earlier version of Hadoop happens to be in the classpath.

Let's add a reflection-based switch to invoke the right version of the API so 
we can build and use Ozone with Apache Hadoop 3.1.0.
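For illustration, a minimal sketch of such a reflection-based switch (the 
overload shape follows HADOOP-15339; the wrapper class is hypothetical):

{code}
import java.lang.reflect.Method;
import java.util.Map;

import javax.management.ObjectName;

import org.apache.hadoop.metrics2.util.MBeans;

public final class MBeansCompat {
  /** Register via the newer overload when present, else fall back. */
  public static ObjectName register(String service, String name,
      Map<String, String> properties, Object mbean) {
    try {
      // Probe for the jmx-properties overload added by HADOOP-15339.
      Method m = MBeans.class.getMethod("register", String.class,
          String.class, Map.class, Object.class);
      return (ObjectName) m.invoke(null, service, name, properties, mbean);
    } catch (ReflectiveOperationException e) {
      // Older Hadoop (3.1.0 and earlier): use the original overload.
      return MBeans.register(service, name, mbean);
    }
  }
}
{code}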






[jira] [Created] (HADOOP-15727) Missing dependency errors from dist-tools-hooks-maker

2018-09-06 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-15727:
--

 Summary: Missing dependency errors from dist-tools-hooks-maker
 Key: HADOOP-15727
 URL: https://issues.apache.org/jira/browse/HADOOP-15727
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.2.0
Reporter: Arpit Agarwal


Building Hadoop with -Pdist -Dtar generates the following errors. These don't 
cause the build to fail, though.

{code}
ERROR: hadoop-azure has missing dependencies: 
jetty-util-ajax-9.3.19.v20170502.jar
ERROR: hadoop-resourceestimator has missing dependencies: 
hadoop-yarn-common-3.2.0-SNAPSHOT.jar
ERROR: hadoop-resourceestimator has missing dependencies: 
hadoop-hdfs-client-3.2.0-SNAPSHOT.jar
ERROR: hadoop-resourceestimator has missing dependencies: okhttp-2.7.5.jar
ERROR: hadoop-resourceestimator has missing dependencies: okio-1.6.0.jar
ERROR: hadoop-resourceestimator has missing dependencies: jersey-client-1.19.jar
ERROR: hadoop-resourceestimator has missing dependencies: guice-servlet-4.0.jar
ERROR: hadoop-resourceestimator has missing dependencies: guice-4.0.jar
ERROR: hadoop-resourceestimator has missing dependencies: aopalliance-1.0.jar
ERROR: hadoop-resourceestimator has missing dependencies: jersey-guice-1.19.jar
ERROR: hadoop-resourceestimator has missing dependencies: 
jackson-module-jaxb-annotations-2.9.5.jar
ERROR: hadoop-resourceestimator has missing dependencies: 
jackson-jaxrs-json-provider-2.9.5.jar
ERROR: hadoop-resourceestimator has missing dependencies: 
jackson-jaxrs-base-2.9.5.jar
ERROR: hadoop-resourceestimator has missing dependencies: 
hadoop-yarn-api-3.2.0-SNAPSHOT.jar
ERROR: hadoop-resourceestimator has missing dependencies: 
hadoop-yarn-server-resourcemanager-3.2.0-SNAPSHOT.jar
ERROR: hadoop-resourceestimator has missing dependencies: 
jetty-util-ajax-9.3.19.v20170502.jar
ERROR: hadoop-resourceestimator has missing dependencies: 
hadoop-yarn-server-common-3.2.0-SNAPSHOT.jar
ERROR: hadoop-resourceestimator has missing dependencies: 
hadoop-yarn-registry-3.2.0-SNAPSHOT.jar
ERROR: hadoop-resourceestimator has missing dependencies: 
commons-daemon-1.0.13.jar
ERROR: hadoop-resourceestimator has missing dependencies: dnsjava-2.1.7.jar
ERROR: hadoop-resourceestimator has missing dependencies: 
geronimo-jcache_1.0_spec-1.0-alpha-1.jar
ERROR: hadoop-resourceestimator has missing dependencies: ehcache-3.3.1.jar
ERROR: hadoop-resourceestimator has missing dependencies: 
HikariCP-java7-2.4.12.jar
ERROR: hadoop-resourceestimator has missing dependencies: 
hadoop-yarn-server-applicationhistoryservice-3.2.0-SNAPSHOT.jar
ERROR: hadoop-resourceestimator has missing dependencies: objenesis-1.0.jar
ERROR: hadoop-resourceestimator has missing dependencies: fst-2.50.jar
ERROR: hadoop-resourceestimator has missing dependencies: java-util-1.9.0.jar
ERROR: hadoop-resourceestimator has missing dependencies: json-io-2.5.1.jar
ERROR: hadoop-resourceestimator has missing dependencies: 
hadoop-yarn-server-web-proxy-3.2.0-SNAPSHOT.jar
ERROR: hadoop-resourceestimator has missing dependencies: leveldbjni-all-1.8.jar
ERROR: hadoop-resourceestimator has missing dependencies: javax.inject-1.jar
{code}






[jira] [Reopened] (HADOOP-12558) distcp documentation is woefully out of date

2018-09-05 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HADOOP-12558:


Reopening to evaluate if this needs a fix and bumping priority.

> distcp documentation is woefully out of date
> 
>
> Key: HADOOP-12558
> URL: https://issues.apache.org/jira/browse/HADOOP-12558
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, tools/distcp
>Reporter: Allen Wittenauer
>Priority: Major
>  Labels: newbie
>
> There are a ton of distcp tune-ables that have zero documentation outside of 
> the source code.  This should be fixed.






[jira] [Created] (HADOOP-15493) DiskChecker should handle disk full situation

2018-05-24 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-15493:
--

 Summary: DiskChecker should handle disk full situation
 Key: HADOOP-15493
 URL: https://issues.apache.org/jira/browse/HADOOP-15493
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


DiskChecker#checkDirWithDiskIo creates a file to verify that the disk is 
writable.

However, the check should not fail when file creation fails because the disk is 
full. This avoids marking full disks as _failed_.

Reported by [~kihwal] and [~daryn] in HADOOP-15450. 
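A minimal sketch of the intended behavior, assuming the full-disk case is 
recognized from the ENOSPC message text (class and helper names illustrative, 
not the actual patch):

{code}
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

class DiskFullTolerantCheck {
  static void checkDirWithDiskIo(File dir) throws IOException {
    File probe = new File(dir, "probe." + System.nanoTime());
    try (FileOutputStream out = new FileOutputStream(probe)) {
      out.write(1);
    } catch (IOException ioe) {
      String msg = ioe.getMessage();
      // ENOSPC surfaces as "No space left on device". A full disk is
      // still a working disk, so only propagate other IO failures.
      if (msg == null || !msg.contains("No space left on device")) {
        throw ioe;
      }
    } finally {
      probe.delete();
    }
  }
}
{code}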






[jira] [Created] (HADOOP-15451) Avoid fsync storm triggered by DiskChecker and handle disk full situation

2018-05-08 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-15451:
--

 Summary: Avoid fsync storm triggered by DiskChecker and handle 
disk full situation
 Key: HADOOP-15451
 URL: https://issues.apache.org/jira/browse/HADOOP-15451
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Arpit Agarwal


Fix the disk checker issues reported by [~kihwal] in HADOOP-13738:
# When space is low, the OS returns ENOSPC. Instead of writes simply stopping, 
the drive is marked bad and replication happens. This makes a cluster-wide 
space problem worse. If the number of "failed" drives exceeds the DFIP limit, 
the datanode shuts down.
# There are non-HDFS users of DiskChecker who use it proactively, not just on 
failures. This was fine before, but now it incurs heavy I/O due to the 
introduction of fsync() in the code.






[jira] [Created] (HADOOP-15450) Avoid fsync storm triggered by DiskChecker and handle disk full situation

2018-05-07 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-15450:
--

 Summary: Avoid fsync storm triggered by DiskChecker and handle 
disk full situation
 Key: HADOOP-15450
 URL: https://issues.apache.org/jira/browse/HADOOP-15450
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


Fix the disk checker issues reported by [~kihwal] in HADOOP-13738:
1. When space is low, the OS returns ENOSPC. Instead of writes simply stopping, 
the drive is marked bad and replication happens. This makes a cluster-wide 
space problem worse. If the number of "failed" drives exceeds the DFIP limit, 
the datanode shuts down.
1. There are non-HDFS users of DiskChecker who use it proactively, not just on 
failures. This was fine before, but now it incurs heavy I/O due to the 
introduction of fsync() in the code.






[jira] [Created] (HADOOP-15334) Upgrade Maven surefire plugin

2018-03-21 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-15334:
--

 Summary: Upgrade Maven surefire plugin
 Key: HADOOP-15334
 URL: https://issues.apache.org/jira/browse/HADOOP-15334
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


Recent versions of the Surefire plugin suppress the summary test execution 
output in quiet mode. This was fixed in plugin version 2.21.0 (via 
SUREFIRE-1436).
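For reference, a pom.xml sketch pinning the plugin to the fixed release 
(standard Surefire coordinates):

{code}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <!-- 2.21.0 restores the summary output in quiet mode (SUREFIRE-1436) -->
  <version>2.21.0</version>
</plugin>
{code}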






[jira] [Resolved] (HADOOP-15128) TestViewFileSystem tests are broken in trunk

2018-01-02 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-15128.

Resolution: Not A Problem

Reverted HADOOP-10054; let's make the right fix there.

> TestViewFileSystem tests are broken in trunk
> 
>
> Key: HADOOP-15128
> URL: https://issues.apache.org/jira/browse/HADOOP-15128
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 3.1.0
>Reporter: Anu Engineer
>Assignee: Hanisha Koneru
>
> The fix in HADOOP-10054 seems to have caused a test failure. Please take a 
> look. Thanks [~eyang] for reporting this.






[jira] [Created] (HADOOP-15066) Spurious error stopping secure datanode

2017-11-22 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-15066:
--

 Summary: Spurious error stopping secure datanode
 Key: HADOOP-15066
 URL: https://issues.apache.org/jira/browse/HADOOP-15066
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 3.0.0
Reporter: Arpit Agarwal


Looks like there is a spurious error when stopping a secure datanode.

{code}
# hdfs --daemon stop datanode
cat: /var/run/hadoop/hdfs//hadoop-hdfs-root-datanode.pid: No such file or 
directory
WARNING: pid has changed for datanode, skip deleting pid file
cat: /var/run/hadoop/hdfs//hadoop-hdfs-root-datanode.pid: No such file or 
directory
WARNING: daemon pid has changed for datanode, skip deleting daemon pid file
{code}







[jira] [Created] (HADOOP-14287) Compiling trunk with -DskipShade fails

2017-04-06 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-14287:
--

 Summary: Compiling trunk with -DskipShade fails 
 Key: HADOOP-14287
 URL: https://issues.apache.org/jira/browse/HADOOP-14287
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0-alpha3
Reporter: Arpit Agarwal


I get the following errors when compiling trunk with -DskipShade. The build 
succeeds with shading enabled.

{code}
[ERROR] COMPILATION ERROR :
[ERROR] 
/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java:[41,30]
 cannot find symbol
  symbol:   class HdfsConfiguration
  location: package org.apache.hadoop.hdfs
[ERROR] 
/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java:[45,34]
 cannot find symbol
  symbol:   class WebHdfsConstants
  location: package org.apache.hadoop.hdfs.web
[ERROR] 
/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java:[71,36]
 cannot find symbol
  symbol:   class HdfsConfiguration
  location: class org.apache.hadoop.example.ITUseMiniCluster
[ERROR] 
/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java:[85,53]
 cannot access org.apache.hadoop.hdfs.DistributedFileSystem
  class file for org.apache.hadoop.hdfs.DistributedFileSystem not found
[ERROR] 
/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java:[109,38]
 cannot find symbol
  symbol:   variable WebHdfsConstants
{code}






[jira] [Resolved] (HADOOP-7880) The Single Node and Cluster Setup docs don't cover HDFS

2017-03-16 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-7880.
---
Resolution: Not A Problem

This is covered by our docs now, resolving.
https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html
https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/ClusterSetup.html

> The Single Node and Cluster Setup docs don't cover HDFS
> ---
>
> Key: HADOOP-7880
> URL: https://issues.apache.org/jira/browse/HADOOP-7880
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.23.0
>Reporter: Eli Collins
>
> The main docs page (http://hadoop.apache.org/common/docs/r0.23.0) only has 
> HDFS docs for federation. Only MR2 is covered in the single node and cluster 
> setup documentation.






[jira] [Created] (HADOOP-14121) Fix occasional BindException in TestNameNodeMetricsLogger

2017-02-24 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-14121:
--

 Summary: Fix occasional BindException in TestNameNodeMetricsLogger
 Key: HADOOP-14121
 URL: https://issues.apache.org/jira/browse/HADOOP-14121
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


TestNameNodeMetricsLogger occasionally hits BindException even though it uses 
ServerSocketUtil.getPort to get a random port number.

It's better to specify a port number of 0 and let the OS allocate an unused 
port.
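A minimal standalone sketch of the port-0 approach (not the actual test code):

{code}
import java.net.ServerSocket;

class EphemeralPortExample {
  public static void main(String[] args) throws Exception {
    // Binding to port 0 lets the OS pick an unused ephemeral port,
    // avoiding the race of choosing a "random" port and binding later.
    try (ServerSocket socket = new ServerSocket(0)) {
      System.out.println("Bound to port " + socket.getLocalPort());
    }
  }
}
{code}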






[jira] [Resolved] (HADOOP-14002) Document -DskipShade property in BUILDING.txt

2017-01-19 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-14002.

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha3 (was: 3.0.0-alpha2)

Thanks for the review [~asuresh]. Committed this to trunk.

> Document -DskipShade property in BUILDING.txt
> -
>
> Key: HADOOP-14002
> URL: https://issues.apache.org/jira/browse/HADOOP-14002
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14002.000.patch, HADOOP-14002.001.patch
>
>
> HADOOP-13999 added a maven profile to disable client jar shading. This 
> property should be documented in BUILDING.txt.






[jira] [Resolved] (HADOOP-6751) hadoop daemonlog does not work from command line with security enabled

2016-12-07 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-6751.
---
Resolution: Duplicate

> hadoop daemonlog does not work from command line with security enabled
> --
>
> Key: HADOOP-6751
> URL: https://issues.apache.org/jira/browse/HADOOP-6751
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jitendra Nath Pandey
>
> The daemonlog command line is not working with security enabled.
> We need to support both the browser interface and the command line with 
> security enabled for daemonlog.






[jira] [Reopened] (HADOOP-6751) hadoop daemonlog does not work from command line with security enabled

2016-12-07 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HADOOP-6751:
---

> hadoop daemonlog does not work from command line with security enabled
> --
>
> Key: HADOOP-6751
> URL: https://issues.apache.org/jira/browse/HADOOP-6751
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jitendra Nath Pandey
>
> The daemonlog command line is not working with security enabled.
> We need to support both the browser interface and the command line with 
> security enabled for daemonlog.






[jira] [Reopened] (HADOOP-13737) Cleanup DiskChecker interface

2016-10-19 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HADOOP-13737:

  Assignee: Arpit Agarwal

Resolved the wrong issue!

> Cleanup DiskChecker interface
> -
>
> Key: HADOOP-13737
> URL: https://issues.apache.org/jira/browse/HADOOP-13737
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13737.01.patch
>
>
> The DiskChecker class has a few unused public methods. We can remove them.






[jira] [Created] (HADOOP-13738) DiskChecker should perform some file IO

2016-10-19 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-13738:
--

 Summary: DiskChecker should perform some file IO
 Key: HADOOP-13738
 URL: https://issues.apache.org/jira/browse/HADOOP-13738
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


DiskChecker can fail to detect total disk/controller failures indefinitely. We 
have seen this in real clusters. DiskChecker performs simple permissions-based 
checks on directories which do not guarantee that any disk IO will be attempted.

A simple improvement is to write some data and flush it to the disk.
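A minimal sketch of the idea (probe file name and write size illustrative):

{code}
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

class DiskIoProbe {
  static void probe(File dir) throws IOException {
    File f = new File(dir, "probe." + System.nanoTime());
    try (FileOutputStream out = new FileOutputStream(f)) {
      out.write(new byte[512]);  // force a real write...
      out.getFD().sync();        // ...and flush it through to the device
    } finally {
      f.delete();
    }
  }
}
{code}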






[jira] [Created] (HADOOP-13737) Cleanup DiskChecker interface

2016-10-19 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-13737:
--

 Summary: Cleanup DiskChecker interface
 Key: HADOOP-13737
 URL: https://issues.apache.org/jira/browse/HADOOP-13737
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


The DiskChecker class has a few unused public methods. We can remove them.






[jira] [Created] (HADOOP-13668) Make InstrumentedLock require ReentrantLock

2016-09-28 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-13668:
--

 Summary: Make InstrumentedLock require ReentrantLock
 Key: HADOOP-13668
 URL: https://issues.apache.org/jira/browse/HADOOP-13668
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


Make InstrumentedLock use ReentrantLock instead of Lock, so nested 
acquire/release calls can be instrumented correctly.
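A minimal sketch of why ReentrantLock helps (wrapper name illustrative): 
{{getHoldCount()}} distinguishes the outermost acquire/release from nested 
reentrant ones, which a plain {{Lock}} cannot do.

{code}
import java.util.concurrent.locks.ReentrantLock;

class InstrumentedLockSketch {
  private final ReentrantLock lock = new ReentrantLock();

  void lock() {
    lock.lock();
    if (lock.getHoldCount() == 1) {
      // Outermost acquire: start the hold-time clock here.
    }
  }

  void unlock() {
    boolean outermost = lock.getHoldCount() == 1;
    lock.unlock();
    if (outermost) {
      // Outermost release: record/report the hold time here.
    }
  }
}
{code}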






[jira] [Created] (HADOOP-13467) Shell#getSignalKillCommand should use the bash builtin on Linux

2016-08-03 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-13467:
--

 Summary: Shell#getSignalKillCommand should use the bash builtin on 
Linux
 Key: HADOOP-13467
 URL: https://issues.apache.org/jira/browse/HADOOP-13467
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


HADOOP-13434 inadvertently undid the fix made in HADOOP-12441.

The use of the bash builtin for kill was intentional, so let's restore that 
behavior.






[jira] [Reopened] (HADOOP-13434) Add quoting to Shell class

2016-08-02 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HADOOP-13434:


Reopening to attach branch-2.7 patch.

> Add quoting to Shell class
> --
>
> Key: HADOOP-13434
> URL: https://issues.apache.org/jira/browse/HADOOP-13434
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Fix For: 2.8.0
>
> Attachments: HADOOP-13434.patch, HADOOP-13434.patch, 
> HADOOP-13434.patch
>
>
> The Shell class makes assumptions that the parameters won't have spaces or 
> other special characters, even when it invokes bash.






[jira] [Created] (HADOOP-13457) Remove hardcoded absolute path for shell executable

2016-08-02 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-13457:
--

 Summary: Remove hardcoded absolute path for shell executable
 Key: HADOOP-13457
 URL: https://issues.apache.org/jira/browse/HADOOP-13457
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


Shell.java has a hardcoded path to /bin/bash which is not correct on all 
platforms. 

Pointed out by [~aw] while reviewing HADOOP-13434.






[jira] [Resolved] (HADOOP-13424) namenode connect time out in cluster with 65 machines

2016-07-25 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-13424.

Resolution: Invalid

[~wanglaichao] Jira is not a support channel. Please use u...@hadoop.apache.org.

> namenode connect time out in cluster with 65 machines
> --
>
> Key: HADOOP-13424
> URL: https://issues.apache.org/jira/browse/HADOOP-13424
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.4.1
> Environment: hadoop 2.4.1
>Reporter: wanglaichao
>
> Before, our cluster had 50 nodes and ran OK. Recently we added 15 nodes, and 
> it always reports connection timeout errors. Who can help me? Thanks.






[jira] [Created] (HADOOP-12903) Add IPC Server support for suppressing exceptions by type, suppress 'server too busy' messages

2016-03-07 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-12903:
--

 Summary: Add IPC Server support for suppressing exceptions by 
type, suppress 'server too busy' messages
 Key: HADOOP-12903
 URL: https://issues.apache.org/jira/browse/HADOOP-12903
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 2.7.2
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


HADOOP-10597 added support for RPC congestion control by sending retriable 
'server too busy' exceptions to clients. 

However, every backoff results in a log message. We've seen these log messages 
slow down the NameNode.
{code}
2016-03-07 15:02:23,272 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 for 
port 8020: readAndProcess from client 127.0.0.1 threw exception 
[org.apache.hadoop.ipc.RetriableException: Server is too busy.]
{code}

We already have a metric that tracks the number of backoff events. This log 
message adds nothing useful.
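A minimal sketch of the idea (names hypothetical; not the actual ipc.Server 
API): exceptions whose types are in the suppressed set are logged at DEBUG 
instead of INFO.

{code}
import java.util.Set;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

class SuppressedExceptionLogging {
  private static final Log LOG =
      LogFactory.getLog(SuppressedExceptionLogging.class);

  private final Set<Class<? extends Throwable>> suppressed;

  SuppressedExceptionLogging(Set<Class<? extends Throwable>> suppressed) {
    this.suppressed = suppressed;
  }

  void logException(String context, Throwable t) {
    if (suppressed.contains(t.getClass())) {
      // Suppressed type: keep it out of the INFO log.
      LOG.debug(context + " threw exception [" + t + "]");
    } else {
      LOG.info(context + " threw exception [" + t + "]");
    }
  }
}
{code}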





[jira] [Created] (HADOOP-12746) ReconfigurableBase should update the cached configuration consistently

2016-01-27 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-12746:
--

 Summary: ReconfigurableBase should update the cached configuration 
consistently
 Key: HADOOP-12746
 URL: https://issues.apache.org/jira/browse/HADOOP-12746
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


{{ReconfigurableBase}} does not always update the cached configuration after a 
property is reconfigured.

The older {{#reconfigureProperty}} does so; however, {{ReconfigurationThread}} 
does not.

See discussion on HDFS-7035 for more background.





[jira] [Created] (HADOOP-12665) Document hadoop.security.token.service.use_ip

2015-12-21 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-12665:
--

 Summary: Document hadoop.security.token.service.use_ip
 Key: HADOOP-12665
 URL: https://issues.apache.org/jira/browse/HADOOP-12665
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.8.0
Reporter: Arpit Agarwal


{{hadoop.security.token.service.use_ip}} is not documented in 2.x/trunk.





[jira] [Created] (HADOOP-12664) UGI auto-renewer does not verify kinit availability during initialization

2015-12-21 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-12664:
--

 Summary: UGI auto-renewer does not verify kinit availability 
during initialization
 Key: HADOOP-12664
 URL: https://issues.apache.org/jira/browse/HADOOP-12664
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Arpit Agarwal
Priority: Minor


UGI auto-renewer does not verify that {{hadoop.kerberos.kinit.command}} is in 
the path during initialization. If not available, the auto-renewal thread will 
hit an error during TGT renewal. We recently saw a case where it manifests as 
transient errors during client program execution which can be hard to track 
down without UGI logging.

It seems like {{kinit}} availability should be verified during initialization 
to make the behavior more predictable.
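A minimal sketch of such a check (helper name hypothetical):

{code}
import java.io.File;

class KinitAvailabilityCheck {
  /** True if the configured kinit command resolves to an executable. */
  static boolean isExecutable(String command) {
    if (command.contains(File.separator)) {
      return new File(command).canExecute();  // explicit path configured
    }
    String path = System.getenv("PATH");
    if (path == null) {
      return false;
    }
    for (String dir : path.split(File.pathSeparator)) {
      if (new File(dir, command).canExecute()) {
        return true;
      }
    }
    return false;
  }
}
{code}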





[jira] [Created] (HADOOP-12522) Simplify adding NN service RPC port to an existing HA cluster with ZKFCs

2015-10-27 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-12522:
--

 Summary: Simplify adding NN service RPC port to an existing HA 
cluster with ZKFCs
 Key: HADOOP-12522
 URL: https://issues.apache.org/jira/browse/HADOOP-12522
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.7.1
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


ZKFCs fail the following check in {{DFSZKFailoverController#dataToTarget}} if 
an NN service RPC port is added to an existing cluster.

{code}
  protected HAServiceTarget dataToTarget(byte[] data) {
    ...
    if (!addressFromProtobuf.equals(ret.getAddress())) {
      throw new RuntimeException("Mismatched address stored in ZK for " +
          ret + ": Stored protobuf was " + proto + ", address from our own " +
          "configuration for this NameNode was " + ret.getAddress());
    }

{code}

The NN address stored in the znode had the common client+service RPC port 
number whereas the configuration now returns an address with the service RPC 
port. The workaround is to reformat the ZKFC state in ZK with {{hdfs zkfc 
-formatZK}}.





[jira] [Reopened] (HADOOP-10571) Use Log.*(Object, Throwable) overload to log exceptions

2015-08-31 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HADOOP-10571:

  Assignee: (was: Arpit Agarwal)

Thanks Steve. The original patch is quite out of date. I'll post an updated 
patch if I get some time. Leaving unassigned for now.

> Use Log.*(Object, Throwable) overload to log exceptions
> ---
>
> Key: HADOOP-10571
> URL: https://issues.apache.org/jira/browse/HADOOP-10571
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Arpit Agarwal
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10571.01.patch
>
>
> When logging an exception, we often convert the exception to string or call 
> {{.getMessage}}. Instead we can use the log method overloads which take 
> {{Throwable}} as a parameter.
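For illustration, the difference the overload makes (message text illustrative):

{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

class LoggingOverloadExample {
  private static final Log LOG =
      LogFactory.getLog(LoggingOverloadExample.class);

  static void handle(Exception e) {
    // Loses the stack trace:
    LOG.error("Operation failed: " + e.getMessage());

    // Keeps the full stack trace via the (Object, Throwable) overload:
    LOG.error("Operation failed", e);
  }
}
{code}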





[jira] [Resolved] (HADOOP-10571) Use Log.*(Object, Throwable) overload to log exceptions

2015-08-25 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-10571.

Resolution: Won't Fix

> Use Log.*(Object, Throwable) overload to log exceptions
> ---
>
> Key: HADOOP-10571
> URL: https://issues.apache.org/jira/browse/HADOOP-10571
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10571.01.patch
>
>
> When logging an exception, we often convert the exception to string or call 
> {{.getMessage}}. Instead we can use the log method overloads which take 
> {{Throwable}} as a parameter.





[jira] [Created] (HADOOP-12272) Refactor ipc.Server and implementations to reduce constructor bloat

2015-07-24 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-12272:
--

 Summary: Refactor ipc.Server and implementations to reduce 
constructor bloat
 Key: HADOOP-12272
 URL: https://issues.apache.org/jira/browse/HADOOP-12272
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: Arpit Agarwal


{{ipc.Server}} and its implementations have constructors taking a large number 
of parameters. This code can be simplified quite a bit by moving RPC.Builder to 
the Server class and passing the builder object to the constructors.

The refactoring should be safe based on the class annotations, but we need to 
confirm that no dependent components will break.
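A minimal sketch of the direction (class and field names illustrative, not the 
final API):

{code}
class ServerSketch {
  static class Builder {
    private String bindAddress = "0.0.0.0";
    private int port;
    private int numHandlers = 1;

    Builder setBindAddress(String bindAddress) {
      this.bindAddress = bindAddress;
      return this;
    }
    Builder setPort(int port) { this.port = port; return this; }
    Builder setNumHandlers(int n) { this.numHandlers = n; return this; }
    ServerSketch build() { return new ServerSketch(this); }
  }

  private final String bindAddress;
  private final int port;
  private final int numHandlers;

  // A single builder-taking constructor replaces the N-argument ones.
  private ServerSketch(Builder b) {
    this.bindAddress = b.bindAddress;
    this.port = b.port;
    this.numHandlers = b.numHandlers;
  }
}
{code}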





[jira] [Created] (HADOOP-12250) Enable RPC Congestion control by default

2015-07-17 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-12250:
--

 Summary: Enable RPC Congestion control by default
 Key: HADOOP-12250
 URL: https://issues.apache.org/jira/browse/HADOOP-12250
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


We propose enabling RPC congestion control introduced by HADOOP-10597 by 
default. We enabled it on a couple of large clusters a few weeks ago and it has 
helped keep the namenodes responsive under load.
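For reference, a core-site.xml sketch, assuming the per-port key introduced by 
HADOOP-10597 and a NameNode RPC port of 8020:

{code}
<property>
  <!-- Key pattern is ipc.<port>.backoff.enable -->
  <name>ipc.8020.backoff.enable</name>
  <value>true</value>
</property>
{code}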





[jira] [Resolved] (HADOOP-12212) Hi, I am trying to start the namenode but it keeps showing: Failed to start namenode. java.net.BindException: Address already in use

2015-07-09 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-12212.

  Resolution: Auto Closed
Target Version/s:   (was: 2.7.0)

Hi Joel, I am closing this. You probably want to send your setup questions to 
u...@hadoop.apache.org. Thanks.

> Hi, I am trying to start the namenode but it keeps showing: Failed to start 
> namenode. java.net.BindException: Address already in use
> 
>
> Key: HADOOP-12212
> URL: https://issues.apache.org/jira/browse/HADOOP-12212
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.7.0
> Environment: Ubuntu 14.04 trusty
>Reporter: Joel
>  Labels: hadoop, hdfs, namenode
>
> Hi, I am trying to start the namenode but it keeps showing: Failed to start 
> namenode. java.net.BindException: Address already in use. netstat -a | grep 
> 9000 returns
> tcp    0    0 *:9000      *:*      LISTEN
> tcp6   0    0 [::]:9000   [::]:*   LISTEN
> Is this normal or do I need to kill one of the processes?
> The hdfs-site.xml is given below:
> dfs.replication = 1
> dfs.namenode.name.dir = file:///usr/local/hdfs/namenode
> dfs.datanode.data.dir = file:///usr/local/hdfs/datanode
> namenode logs are given below:
> --
> 2015-07-10 00:27:02,513 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> registered UNIX signal handlers for [TERM, HUP, INT]
> 2015-07-10 00:27:02,538 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> createNameNode []
> 2015-07-10 00:27:07,549 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: 
> loaded properties from hadoop-metrics2.properties
> 2015-07-10 00:27:09,284 INFO 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period 
> at 10 second(s).
> 2015-07-10 00:27:09,285 INFO 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system 
> started
> 2015-07-10 00:27:09,339 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> fs.defaultFS is hdfs://localhost:9000
> 2015-07-10 00:27:09,340 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> Clients are to use localhost:9000 to access this namenode/service.
> 2015-07-10 00:27:12,475 WARN org.apache.hadoop.util.NativeCodeLoader: Unable 
> to load native-hadoop library for your platform... using builtin-java classes 
> where applicable
> 2015-07-10 00:27:16,632 INFO org.apache.hadoop.hdfs.DFSUtil: Starting 
> Web-server for hdfs at: http://0.0.0.0:50070
> 2015-07-10 00:27:17,491 INFO org.mortbay.log: Logging to 
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via 
> org.mortbay.log.Slf4jLog
> 2015-07-10 00:27:17,702 INFO 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable 
> to initialize FileSignerSecretProvider, falling back to use random secrets.
> 2015-07-10 00:27:17,876 INFO org.apache.hadoop.http.HttpRequestLog: Http 
> request log for http.requests.namenode is not defined
> 2015-07-10 00:27:17,941 INFO org.apache.hadoop.http.HttpServer2: Added global 
> filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
> 2015-07-10 00:27:17,977 INFO org.apache.hadoop.http.HttpServer2: Added filter 
> static_user_filter 
> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
> context hdfs
> 2015-07-10 00:27:17,977 INFO org.apache.hadoop.http.HttpServer2: Added filter 
> static_user_filter 
> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
> context static
> 2015-07-10 00:27:17,977 INFO org.apache.hadoop.http.HttpServer2: Added filter 
> static_user_filter 
> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
> context logs
> 2015-07-10 00:27:18,441 INFO org.apache.hadoop.http.HttpServer2: Added filter 
> 'org.apache.hadoop.hdfs.web.AuthFilter' 
> (class=org.apache.hadoop.hdfs.web.AuthFilter)
> 2015-07-10 00:27:18,525 INFO org.apache.hadoop.http.HttpServer2: 
> addJerseyResourcePackage: 
> packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources,
>  pathSpec=/webhdfs/v1/*
> 2015-07-10 00:27:18,747 INFO org.apache.hadoop.http.HttpServer2: Jetty bound 
> to port 50070
> 2015-07-10 00:27:18,760 INFO org.mortbay.log: jetty-6.1.26
> 2015-07-10 00:27:20,832 INFO org.mortbay.log: Started 
> HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
> 2015-07-10 00:27:23,404 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage 
> directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack 
> of redundant storage dir

[jira] [Resolved] (HADOOP-12179) Test Jira, please ignore

2015-07-07 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-12179.

Resolution: Pending Closed

> Test Jira, please ignore
> 
>
> Key: HADOOP-12179
> URL: https://issues.apache.org/jira/browse/HADOOP-12179
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-12179.01.patch
>
>






[jira] [Created] (HADOOP-12179) Test Jira, please ignore

2015-07-02 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-12179:
--

 Summary: Test Jira, please ignore
 Key: HADOOP-12179
 URL: https://issues.apache.org/jira/browse/HADOOP-12179
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal








[jira] [Created] (HADOOP-12163) Add xattr APIs to the FileSystem specification

2015-06-30 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-12163:
--

 Summary: Add xattr APIs to the FileSystem specification
 Key: HADOOP-12163
 URL: https://issues.apache.org/jira/browse/HADOOP-12163
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Arpit Agarwal


The following xattr APIs should be added to the [FileSystem 
specification|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/filesystem.html]:
# setXAttr
# getXAttr
# getXAttrs
# listXAttrs
# removeXAttr





[jira] [Created] (HADOOP-12162) Add ACL APIs to the FileSystem specification

2015-06-30 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-12162:
--

 Summary: Add ACL APIs to the FileSystem specification
 Key: HADOOP-12162
 URL: https://issues.apache.org/jira/browse/HADOOP-12162
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Arpit Agarwal


The following ACL APIs should be added to the [FileSystem 
specification|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/filesystem.html]
# modifyAclEntries
# removeAclEntries
# removeDefaultAcl
# removeAcl
# setAcl
# getAclStatus 





[jira] [Created] (HADOOP-12161) Add getStoragePolicy API to the FileSystem interface

2015-06-30 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-12161:
--

 Summary: Add getStoragePolicy API to the FileSystem interface
 Key: HADOOP-12161
 URL: https://issues.apache.org/jira/browse/HADOOP-12161
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Arpit Agarwal


HDFS-8345 added {{FileSystem#getAllStoragePolicies}} and 
{{FileSystem#setStoragePolicy}}. This jira is to:

# Add a corresponding {{FileSystem#getStoragePolicy}} to query the storage 
policy for a given file/directory (see the usage sketch below).
# Add the corresponding implementation for HDFS, i.e. 
{{DistributedFileSystem#getStoragePolicy}}.
# Update the FileSystem contract docs. This will require editing 
_hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md_.
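For illustration, a sketch of how the proposed API would be used once added 
(method name as proposed above; the path is illustrative):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class StoragePolicyQuery {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Query the effective storage policy for a file/directory.
    System.out.println(fs.getStoragePolicy(new Path("/data/hot")));
  }
}
{code}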





[jira] [Created] (HADOOP-12160) Document snapshot APIs exposed by the FileSystem interface

2015-06-30 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-12160:
--

 Summary: Document snapshot APIs exposed by the FileSystem interface
 Key: HADOOP-12160
 URL: https://issues.apache.org/jira/browse/HADOOP-12160
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.1
Reporter: Arpit Agarwal


The snapshot APIs supported by the {{FileSystem}} interface should be added to 
the interface docs.
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/filesystem.html





[jira] [Created] (HADOOP-12075) Document that DN max locked memory must be configured to use RAM disk

2015-06-08 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-12075:
--

 Summary: Document that DN max locked memory must be configured to 
use RAM disk
 Key: HADOOP-12075
 URL: https://issues.apache.org/jira/browse/HADOOP-12075
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.8.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


HDFS-6599 introduced the requirement that max locked memory must be configured 
to use RAM disk storage via the LAZY_PERSIST storage policy.

We need to document it.
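For reference, an hdfs-site.xml sketch of the setting in question (the value is 
illustrative and must not exceed the datanode user's RLIMIT_MEMLOCK):

{code}
<property>
  <name>dfs.datanode.max.locked.memory</name>
  <!-- bytes; 256 MB here -->
  <value>268435456</value>
</property>
{code}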





[jira] [Created] (HADOOP-12069) Document that NFS gateway does not work with rpcbind on SLES 11

2015-06-05 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-12069:
--

 Summary: Document that NFS gateway does not work with rpcbind on 
SLES 11
 Key: HADOOP-12069
 URL: https://issues.apache.org/jira/browse/HADOOP-12069
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


The NFS gateway does not work with the system rpcbind service on SLES 11. It 
does work with the hadoop portmap. We'll add a short note to the NFS 
documentation about it.





[jira] [Created] (HADOOP-11981) Add storage policy APIs to filesystem docs

2015-05-15 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-11981:
--

 Summary: Add storage policy APIs to filesystem docs
 Key: HADOOP-11981
 URL: https://issues.apache.org/jira/browse/HADOOP-11981
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Arpit Agarwal


HDFS-8345 exposed the storage policy APIs via the FileSystem.

The FileSystem docs should be updated accordingly.
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html





[jira] [Created] (HADOOP-11833) Split TestLazyPersistFiles into multiple tests

2015-04-14 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-11833:
--

 Summary: Split TestLazyPersistFiles into multiple tests
 Key: HADOOP-11833
 URL: https://issues.apache.org/jira/browse/HADOOP-11833
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.7.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


TestLazyPersistFiles has grown too large and includes both NN and DN tests. We 
can split up related tests into smaller files to keep the test cases manageable.





[jira] [Resolved] (HADOOP-11809) Building hadoop on windows 64 bit, windows 7.1 SDK : \hadoop-common\target\findbugsXml.xml does not exist

2015-04-07 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-11809.

Resolution: Invalid

Hi [~kantum...@yahoo.com], please use the dev mailing list for questions.

Resolving as Invalid.

> Building hadoop on windows 64 bit, windows 7.1 SDK : 
> \hadoop-common\target\findbugsXml.xml does not exist
> -
>
> Key: HADOOP-11809
> URL: https://issues.apache.org/jira/browse/HADOOP-11809
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.6.0
>Reporter: Umesh Kant
>
> I am trying to build hadoop 2.6.0 on Windows 7 64 bit, Windows 7.1 SDK. I 
> have gone through Build.txt file and have did follow all the pre-requisites 
> for build on windows. Still when I try to build, I am getting following error:
> Maven command: mvn package -X -Pdist -Pdocs -Psrc -Dtar -DskipTests 
> -Pnative-win findbugs:findbugs
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 04:35 min
> [INFO] Finished at: 2015-04-03T23:16:57-04:00
> [INFO] Final Memory: 123M/1435M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:
> run (site) on project hadoop-common: An Ant BuildException has occured: input 
> fi
> le 
> C:\H\hadoop-2.6.0-src\hadoop-common-project\hadoop-common\target\findbugsXml.
> xml does not exist
> [ERROR] around Ant part ... in="C:\H\hadoop-2.6.0-src\hadoop-common-project
> \hadoop-common\target/findbugsXml.xml" 
> style="C:\findbugs-3.0.1/src/xsl/default.
> xsl" 
> out="C:\H\hadoop-2.6.0-src\hadoop-common-project\hadoop-common\target/site/
> findbugs.html"/>... @ 44:232 in 
> C:\H\hadoop-2.6.0-src\hadoop-common-project\hado
> op-common\target\antrun\build-main.xml
> [ERROR] -> [Help 1]
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
> goal o
> rg.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project 
> hadoop-com
> mon: An Ant BuildException has occured: input file 
> C:\H\hadoop-2.6.0-src\hadoop-
> common-project\hadoop-common\target\findbugsXml.xml does not exist
> around Ant part ... in="C:\H\hadoop-2.6.0-src\hadoop-common-project\hadoop-
> common\target/findbugsXml.xml" style="C:\findbugs-3.0.1/src/xsl/default.xsl" 
> out
> ="C:\H\hadoop-2.6.0-src\hadoop-common-project\hadoop-common\target/site/findbugs
> .html"/>... @ 44:232 in 
> C:\H\hadoop-2.6.0-src\hadoop-common-project\hadoop-commo
> n\target\antrun\build-main.xml
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor
> .java:216)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor
> .java:153)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor
> .java:145)
> at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProje
> ct(LifecycleModuleBuilder.java:116)
> at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProje
> ct(LifecycleModuleBuilder.java:80)
> at 
> org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThre
> adedBuilder.build(SingleThreadedBuilder.java:51)
> at 
> org.apache.maven.lifecycle.internal.LifecycleStarter.execute(Lifecycl
> eStarter.java:128)
> at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
> at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193)
> at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106)
> at org.apache.maven.cli.MavenCli.execute(MavenCli.java:862)
> at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:286)
> at org.apache.maven.cli.MavenCli.main(MavenCli.java:197)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.
> java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces
> sorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:483)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Laun
> cher.java:289)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.jav
> a:229)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(La
> uncher.java:415)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:
> 356)
> Caused by: org.apache.maven.plugin.MojoExecutionException: An Ant 
> BuildException
>  has occured: input file 
> C:\H\hadoop-2.6.0-src\hadoop-common-project\hadoop-comm
> on\target\findbugsXml.xml does not exi

[jira] [Created] (HADOOP-11725) Minor cleanup of BlockPoolManager#getAllNamenodeThreads

2015-03-17 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-11725:
--

 Summary: Minor cleanup of BlockPoolManager#getAllNamenodeThreads
 Key: HADOOP-11725
 URL: https://issues.apache.org/jira/browse/HADOOP-11725
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
Priority: Minor


{{BlockPoolManager#getAllNamenodeThreads}} can avoid unnecessary list-to-array 
conversion and vice versa by returning an unmodifiable list. Since NN 
addition/removal is relatively rare, we can just use a CopyOnWriteArrayList for 
concurrency.
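A minimal sketch of the cleanup (element types simplified):

{code}
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

class BlockPoolManagerSketch {
  // Adds/removes are rare, so copy-on-write is cheap and reads are
  // lock-free and always see a consistent snapshot.
  private final List<Object> bpServices = new CopyOnWriteArrayList<>();

  List<Object> getAllNamenodeThreads() {
    // Hand out an unmodifiable view instead of converting to an
    // array and back.
    return Collections.unmodifiableList(bpServices);
  }
}
{code}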





[jira] [Created] (HADOOP-11513) Artifact errors with Maven build on Linux

2015-01-27 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-11513:
--

 Summary: Artifact errors with Maven build on Linux
 Key: HADOOP-11513
 URL: https://issues.apache.org/jira/browse/HADOOP-11513
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.7.0
Reporter: Arpit Agarwal


I recently started getting the following errors with _mvn -q clean compile 
install_

{code}
[ERROR] Artifact: org.xerial.snappy:snappy-java:jar:1.0.4.1 has no file.
[ERROR] Artifact: xerces:xercesImpl:jar:2.9.1 has no file.
[ERROR] Artifact: xml-apis:xml-apis:jar:1.3.04 has no file.
[ERROR] Artifact: xmlenc:xmlenc:jar:0.52 has no file.
[ERROR] Artifact: org.xerial.snappy:snappy-java:jar:1.0.4.1 has no file.
[ERROR] Artifact: xerces:xercesImpl:jar:2.9.1 has no file.
[ERROR] Artifact: xml-apis:xml-apis:jar:1.3.04 has no file.
[ERROR] Artifact: xmlenc:xmlenc:jar:0.52 has no file.
{code}

mvn --version reports:
{code}
Apache Maven 3.2.5 (12a6b3acb947671f09b81f49094c53f426d8cea1; 
2014-12-14T09:29:23-08:00)
Maven home: /home/vagrant/usr/share/maven
Java version: 1.7.0_65, vendor: Oracle Corporation
Java home: /usr/lib/jvm/java-7-openjdk-amd64/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.13.0-24-generic", arch: "amd64", family: "unix"
{code}





[jira] [Created] (HADOOP-11451) TestLazyPersistFiles#testDnRestartWithSavedReplicas is flaky on Windows

2014-12-26 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-11451:
--

 Summary: TestLazyPersistFiles#testDnRestartWithSavedReplicas is 
flaky on Windows
 Key: HADOOP-11451
 URL: https://issues.apache.org/jira/browse/HADOOP-11451
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.6.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


*Error Message*

Expected: is 
 but: was 

*Stacktrace*

java.lang.AssertionError: 
Expected: is 
 but: was 
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
at org.junit.Assert.assertThat(Assert.java:865)
at org.junit.Assert.assertThat(Assert.java:832)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.ensureFileReplicasOnStorageType(LazyPersistTestCase.java:129)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles.testDnRestartWithSavedReplicas(TestLazyPersistFiles.java:668)






[jira] [Created] (HADOOP-11428) Remove obsolete reference to Cygwin in BUILDING.txt

2014-12-18 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-11428:
--

 Summary: Remove obsolete reference to Cygwin in BUILDING.txt
 Key: HADOOP-11428
 URL: https://issues.apache.org/jira/browse/HADOOP-11428
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.6.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


The 'Building on Windows' section of BUILDING.txt has an obsolete reference to 
Cygwin. It should be removed to avoid confusion.





[jira] [Created] (HADOOP-10977) Periodically dump RPC metrics to logs

2014-08-18 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-10977:
--

 Summary: Periodically dump RPC metrics to logs
 Key: HADOOP-10977
 URL: https://issues.apache.org/jira/browse/HADOOP-10977
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 2.5.0
Reporter: Arpit Agarwal


It would be useful to periodically dump RPC/other metrics to a log file. We 
could use a separate async log stream to avoid contending with logging on hot 
paths.

Placeholder Jira, this needs more thought.





[jira] [Reopened] (HADOOP-10335) An ip whilelist based implementation to resolve Sasl properties per connection

2014-08-17 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HADOOP-10335:



> An ip whilelist based implementation to resolve Sasl properties per connection
> --
>
> Key: HADOOP-10335
> URL: https://issues.apache.org/jira/browse/HADOOP-10335
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Benoy Antony
>Assignee: Benoy Antony
> Fix For: 3.0.0, 2.6.0
>
> Attachments: HADOOP-10335.patch, HADOOP-10335.patch, 
> HADOOP-10335.patch, HADOOP-10335.patch, HADOOP-10335.patch, HADOOP-10335.pdf
>
>
> As noted in HADOOP-10221, it is sometimes required for a Hadoop Server to 
> communicate with some client over encrypted channel and with some other 
> clients over unencrypted channel. 
> Hadoop-10221 introduced an interface _SaslPropertiesResolver_  and the 
> changes required to plugin and use _SaslPropertiesResolver_  to identify the 
> SaslProperties to be used for a connection. 
> In this jira, an ip-whitelist based implementation of 
> _SaslPropertiesResolver_  is attempted.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-10960) hadoop cause system crash with “soft lock” and “hard lock”

2014-08-11 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-10960.


Resolution: Invalid

Hadoop core has no kernel-mode components, so it cannot cause a kernel panic. 
You likely have a buggy device driver or hit a kernel bug.

Resolving as Invalid.

> hadoop cause system crash with “soft lock” and “hard lock”
> --
>
> Key: HADOOP-10960
> URL: https://issues.apache.org/jira/browse/HADOOP-10960
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.2.0
> Environment: redhat rhel 6.3,6,4,6.5
> jdk1.7.0_45
> hadoop2.2
>Reporter: linbao111
>Priority: Critical
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> I am running hadoop2.2 on redhat 6.3-6.5, and all of my machines crashed after 
> a while. /var/log/messages shows repeatedly:
> Aug 11 06:30:42 jn4_73_128 kernel: BUG: soft lockup - CPU#1 stuck for 67s! 
> [jsvc:11508]
> Aug 11 06:30:42 jn4_73_128 kernel: Modules linked in: bridge stp llc 
> iptable_filter ip_tables mptctl mptbase xfs exportfs power_meter microcode 
> dcdbas serio_raw iTCO_w
> dt iTCO_vendor_support i7core_edac edac_core sg bnx2 ext4 mbcache jbd2 sd_mod 
> crc_t10dif wmi mpt2sas scsi_transport_sas raid_class dm_mirror dm_region_hash 
> dm_log dm_m
> od [last unloaded: scsi_wait_scan]
> Aug 11 06:30:42 jn4_73_128 kernel: CPU 1 
> Aug 11 06:30:42 jn4_73_128 kernel: Modules linked in: bridge stp llc 
> iptable_filter ip_tables mptctl mptbase xfs exportfs power_meter microcode 
> dcdbas serio_raw iTCO_w
> dt iTCO_vendor_support i7core_edac edac_core sg bnx2 ext4 mbcache jbd2 sd_mod 
> crc_t10dif wmi mpt2sas scsi_transport_sas raid_class dm_mirror dm_region_hash 
> dm_log dm_m
> od [last unloaded: scsi_wait_scan]
> Aug 11 06:30:42 jn4_73_128 kernel: 
> Aug 11 06:30:42 jn4_73_128 kernel: Pid: 11508, comm: jsvc Tainted: GW 
>  ---2.6.32-279.el6.x86_64 #1 Dell Inc. PowerEdge R510/084YMW
> Aug 11 06:30:42 jn4_73_128 kernel: RIP: 0010:[]  
> [] wait_for_rqlock+0x28/0x40
> Aug 11 06:30:42 jn4_73_128 kernel: RSP: 0018:8807786c3ee8  EFLAGS: 
> 0202
> Aug 11 06:30:42 jn4_73_128 kernel: RAX: f6e9f6e1 RBX: 
> 8807786c3ee8 RCX: 880028216680
> Aug 11 06:30:42 jn4_73_128 kernel: RDX: f6e9 RSI: 
> 88061cd29370 RDI: 0286
> Aug 11 06:30:42 jn4_73_128 kernel: RBP: 8100bc0e R08: 
> 0001 R09: 0001
> Aug 11 06:30:42 jn4_73_128 kernel: R10:  R11: 
>  R12: 0286
> Aug 11 06:30:42 jn4_73_128 kernel: R13: 8807786c3eb8 R14: 
> 810e0f6e R15: 8807786c3e48
> Aug 11 06:30:42 jn4_73_128 kernel: FS:  () 
> GS:88002820() knlGS:
> Aug 11 06:30:42 jn4_73_128 kernel: CS:  0010 DS:  ES:  CR0: 
> 80050033
> Aug 11 06:30:42 jn4_73_128 kernel: CR2: 00e5bd70 CR3: 
> 01a85000 CR4: 06e0
> Aug 11 06:30:42 jn4_73_128 kernel: DR0:  DR1: 
>  DR2: 
> Aug 11 06:30:42 jn4_73_128 kernel: DR3:  DR6: 
> 0ff0 DR7: 0400
> Aug 11 06:30:42 jn4_73_128 kernel: Process jsvc (pid: 11508, threadinfo 
> 8807786c2000, task 880c1def3500)
> Aug 11 06:30:42 jn4_73_128 kernel: Stack:
> Aug 11 06:30:42 jn4_73_128 kernel: 8807786c3f68 8107091b 
>  8807786c3f28
> Aug 11 06:30:42 jn4_73_128 kernel:  880701735260 880c1def39c8 
> 880c1def39c8 
> Aug 11 06:30:42 jn4_73_128 kernel:  8807786c3f28 8807786c3f28 
> 8807786c3f78 7f092d0ad700
> Aug 11 06:30:42 jn4_73_128 kernel: Call Trace:
> Aug 11 06:30:42 jn4_73_128 kernel: [] ? do_exit+0x5ab/0x870
> Aug 11 06:30:42 jn4_73_128 kernel: [] ? sys_exit+0x17/0x20
> Aug 11 06:30:42 jn4_73_128 kernel: [] ? 
> system_call_fastpath+0x16/0x1b
> Aug 11 06:30:42 jn4_73_128 kernel: Code: ff ff 90 55 48 89 e5 0f 1f 44 00 00 
> 48 c7 c0 80 66 01 00 65 48 8b 0c 25 b0 e0 00 00 0f ae f0 48 01 c1 eb 09 0f 1f 
> 80 00 00 00 00  90 8b 01 89 c2 c1 fa 10 66 39 c2 75 f2 c9 c3 0f 1f 84 00 
> 00 
> Aug 11 06:30:42 jn4_73_128 kernel: Call Trace:
> Aug 11 06:30:42 jn4_73_128 kernel: [] ? do_exit+0x5ab/0x870
> Aug 11 06:30:42 jn4_73_128 kernel: [] ? sys_exit+0x17/0x20
> Aug 11 06:30:42 jn4_73_128 kernel: [] ? 
> system_call_fastpath+0x16/0x1b
> 
> and finally crashed
> crash /usr/lib/debug/lib/modules/2.6.32-431.5.1.el6.x86_64/vmlinux  
> /opt/crash/127.0.0.1-2014-08-10-09\:47\:38/vmcore
> crash 6.1.0-5.el6
> Copyright (C) 2002-2012  Red Hat, Inc.
> Copyright (C) 2004, 2005, 2006, 2010  IBM Corporation
> Copyright (C) 1999-2006  Hewlett-Packard Co
> Copyright (C) 2005, 2006, 2011, 2012  Fujitsu Limited
> Copyright (C) 2006, 2007  

[jira] [Resolved] (HADOOP-8069) Enable TCP_NODELAY by default for IPC

2014-07-28 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-8069.
---

  Resolution: Fixed
   Fix Version/s: 2.6.0
  3.0.0
Target Version/s: 2.6.0  (was: 2.0.0-alpha, 3.0.0)
Release Note: This change enables the TCP_NODELAY flag for all Hadoop 
IPC connections, hence bypassing TCP Nagling. Nagling interacts poorly with TCP 
delayed ACKs, especially for request-response protocols.
Hadoop Flags: Reviewed

Committed to trunk and branch-2, resolving with release note.

> Enable TCP_NODELAY by default for IPC
> -
>
> Key: HADOOP-8069
> URL: https://issues.apache.org/jira/browse/HADOOP-8069
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 0.23.0, 2.4.1
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: 3.0.0, 2.6.0
>
> Attachments: hadoop-8069.txt
>
>
> I think we should switch the default for the IPC client and server NODELAY 
> options to true. As wikipedia says:
> {quote}
> In general, since Nagle's algorithm is only a defense against careless 
> applications, it will not benefit a carefully written application that takes 
> proper care of buffering; the algorithm has either no effect, or negative 
> effect on the application.
> {quote}
> Since our IPC layer is well contained and does its own buffering, we 
> shouldn't be careless.
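
For reference, the flag itself is a one-liner on {{java.net.Socket}}. A minimal illustration, not the committed patch (the host and port below are made up):

{code:java}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class NoDelayExample {
  public static void main(String[] args) throws IOException {
    Socket socket = new Socket();
    // Disable Nagle's algorithm: small request/response messages go out
    // immediately instead of waiting on delayed ACKs or a full segment.
    socket.setTcpNoDelay(true);
    socket.connect(new InetSocketAddress("example.com", 8020));
    socket.close();
  }
}
{code}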



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10892) Suppress 'proprietary API' warnings

2014-07-23 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-10892:
--

 Summary: Suppress 'proprietary API' warnings
 Key: HADOOP-10892
 URL: https://issues.apache.org/jira/browse/HADOOP-10892
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Arpit Agarwal


The 'proprietary API' warnings provide no useful information and clutter up the 
build output, hiding legitimate warnings in the noise.

Most of the warnings appear to be about OutputFormat, XMLSerializer and Unsafe. 
I don't think these APIs are going away any time soon, and if they do we can 
deal with it when it happens.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-10083) Fix commons logging warning in Hadoop build

2014-05-02 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-10083.


Resolution: Not a Problem

This has been fixed as a side effect of HDFS-6252. Resolving.

> Fix commons logging warning in Hadoop build
> ---
>
> Key: HADOOP-10083
> URL: https://issues.apache.org/jira/browse/HADOOP-10083
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Arpit Agarwal
>Priority: Minor
>
> A clean build spews multiple instances of the following warning on my OS X 
> dev machine.
> {code}
> WARN: The method class 
> org.apache.commons.logging.impl.SLF4JLogFactory#release() was invoked.
> WARN: Please see http://www.slf4j.org/codes.html for an explanation.
> {code}
> I couldn't find another bug mentioning these so filing one to get these 
> cleaned up.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10571) Use Log.*(Object, Throwable) overload to log exceptions

2014-05-02 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-10571:
--

 Summary: Use Log.*(Object, Throwable) overload to log exceptions
 Key: HADOOP-10571
 URL: https://issues.apache.org/jira/browse/HADOOP-10571
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Arpit Agarwal


When logging an exception, we often convert the exception to string or call 
{{.getMessage}}. Instead we can use the log method overloads which take 
{{Throwable}} as a parameter.
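
A minimal before/after illustration (the class and logger names are arbitrary):

{code:java}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class LogThrowableExample {
  private static final Log LOG = LogFactory.getLog(LogThrowableExample.class);

  void handle(Exception e) {
    // Anti-pattern: the stack trace is lost.
    LOG.error("Operation failed: " + e.getMessage());

    // Preferred: the (Object, Throwable) overload logs the full stack trace.
    LOG.error("Operation failed", e);
  }
}
{code}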



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Reopened] (HADOOP-10538) NumberFormatException happened when hadoop 1.2.1 running on Cygwin

2014-04-24 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HADOOP-10538:



Never mind, I missed that this is Hadoop 1.2.1 although it is right there in 
the title.

I would recommend upgrading to Hadoop 2.4 unless you have a compelling reason 
to stick with 1.x. It is stable and works natively on Windows.

> NumberFormatException happened  when hadoop 1.2.1 running on Cygwin
> ---
>
> Key: HADOOP-10538
> URL: https://issues.apache.org/jira/browse/HADOOP-10538
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1.2.1
> Environment: OS: windows 7 / Cygwin
>Reporter: peter xie
>
> The TaskTracker always fails to start up when running on Cygwin. The 
> error info logged in xxx-tasktracker-.log is:
> 2014-04-21 22:13:51,439 DEBUG org.apache.hadoop.mapred.TaskRunner: putting 
> jobToken file name into environment 
> D:/hadoop/mapred/local/taskTracker/pxie/jobcache/job_201404212205_0001/jobToken
> 2014-04-21 22:13:51,439 INFO org.apache.hadoop.mapred.JvmManager: Killing 
> JVM: jvm_201404212205_0001_m_1895177159
> 2014-04-21 22:13:51,439 WARN org.apache.hadoop.mapred.TaskRunner: 
> attempt_201404212205_0001_m_00_0 : Child Error
> java.lang.NumberFormatException: For input string: ""
>   at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>   at java.lang.Integer.parseInt(Integer.java:504)
>   at java.lang.Integer.parseInt(Integer.java:527)
>   at 
> org.apache.hadoop.mapred.JvmManager$JvmManagerForType$JvmRunner.kill(JvmManager.java:552)
>   at 
> org.apache.hadoop.mapred.JvmManager$JvmManagerForType.killJvmRunner(JvmManager.java:314)
>   at 
> org.apache.hadoop.mapred.JvmManager$JvmManagerForType.reapJvm(JvmManager.java:378)
>   at 
> org.apache.hadoop.mapred.JvmManager$JvmManagerForType.access$000(JvmManager.java:189)
>   at org.apache.hadoop.mapred.JvmManager.launchJvm(JvmManager.java:122)
>   at 
> org.apache.hadoop.mapred.TaskRunner.launchJvmAndWait(TaskRunner.java:292)
>   at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:251)
> 2014-04-21 22:13:51,511 DEBUG org.apache.hadoop.ipc.Server: IPC Server 
> listener on 59983: disconnecting client 127.0.0.1:60154. Number of active 
> connections: 1
> 2014-04-21 22:13:51,531 WARN org.apache.hadoop.fs.FileUtil: Failed to set 
> permissions of path: 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-10538) NumberFormatException happened when hadoop 1.2.1 running on Cygwin

2014-04-24 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-10538.


Resolution: Won't Fix

Hadoop is not supported on Cygwin.

Please see [these instructions|https://wiki.apache.org/hadoop/Hadoop2OnWindows] 
for how to run Hadoop 2.2+ on Windows natively.

> NumberFormatException happened  when hadoop 1.2.1 running on Cygwin
> ---
>
> Key: HADOOP-10538
> URL: https://issues.apache.org/jira/browse/HADOOP-10538
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1.2.1
> Environment: OS: windows 7 / Cygwin
>Reporter: peter xie
>
> The TaskTracker always fails to start up when running on Cygwin. The 
> error info logged in xxx-tasktracker-.log is:
> 2014-04-21 22:13:51,439 DEBUG org.apache.hadoop.mapred.TaskRunner: putting 
> jobToken file name into environment 
> D:/hadoop/mapred/local/taskTracker/pxie/jobcache/job_201404212205_0001/jobToken
> 2014-04-21 22:13:51,439 INFO org.apache.hadoop.mapred.JvmManager: Killing 
> JVM: jvm_201404212205_0001_m_1895177159
> 2014-04-21 22:13:51,439 WARN org.apache.hadoop.mapred.TaskRunner: 
> attempt_201404212205_0001_m_00_0 : Child Error
> java.lang.NumberFormatException: For input string: ""
>   at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>   at java.lang.Integer.parseInt(Integer.java:504)
>   at java.lang.Integer.parseInt(Integer.java:527)
>   at 
> org.apache.hadoop.mapred.JvmManager$JvmManagerForType$JvmRunner.kill(JvmManager.java:552)
>   at 
> org.apache.hadoop.mapred.JvmManager$JvmManagerForType.killJvmRunner(JvmManager.java:314)
>   at 
> org.apache.hadoop.mapred.JvmManager$JvmManagerForType.reapJvm(JvmManager.java:378)
>   at 
> org.apache.hadoop.mapred.JvmManager$JvmManagerForType.access$000(JvmManager.java:189)
>   at org.apache.hadoop.mapred.JvmManager.launchJvm(JvmManager.java:122)
>   at 
> org.apache.hadoop.mapred.TaskRunner.launchJvmAndWait(TaskRunner.java:292)
>   at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:251)
> 2014-04-21 22:13:51,511 DEBUG org.apache.hadoop.ipc.Server: IPC Server 
> listener on 59983: disconnecting client 127.0.0.1:60154. Number of active 
> connections: 1
> 2014-04-21 22:13:51,531 WARN org.apache.hadoop.fs.FileUtil: Failed to set 
> permissions of path: 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-10518) TestSaslRPC fails on Windows

2014-04-17 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-10518.


Resolution: Duplicate

Okay that does look similar to what I encountered.

I will dup this against HADOOP-8980.

> TestSaslRPC fails on Windows
> 
>
> Key: HADOOP-10518
> URL: https://issues.apache.org/jira/browse/HADOOP-10518
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.4.0
> Environment: Windows + Oracle Java 7.
>Reporter: Arpit Agarwal
>
> {{TestSaslRPC}} fails with exceptions such as the following:
> {code}
> Tests run: 85, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 36.765 sec 
> <<< FAILURE! - in org.apache.hadoop.ipc.TestSaslRPC
> testTokenOnlyServer[0](org.apache.hadoop.ipc.TestSaslRPC)  Time elapsed: 
> 0.092 sec  <<< FAILURE!
> java.lang.AssertionError: 
> expected:<.*RemoteException.*AccessControlException.*: SIMPLE authentication 
> is not enabled.*> but was:<java.io.IOException: An established connection was aborted by the software in 
> your host machine; Host Details : local host is: "WIN-Q5VLNTLIBJ0/10.0.2.15"; 
> destination host is: "WIN-Q5VLNTLIBJ0":49623; >
>   at org.junit.Assert.fail(Assert.java:93)
>   at org.junit.Assert.failNotEquals(Assert.java:647)
>   at org.junit.Assert.assertEquals(Assert.java:128)
>   at org.junit.Assert.assertEquals(Assert.java:147)
>   at 
> org.apache.hadoop.ipc.TestSaslRPC.assertAuthEquals(TestSaslRPC.java:978)
>   at 
> org.apache.hadoop.ipc.TestSaslRPC.testTokenOnlyServer(TestSaslRPC.java:782)
> {code}
> The exact location/number of failures varies by run.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10518) TestSaslRPC fails on Windows

2014-04-17 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-10518:
--

 Summary: TestSaslRPC fails on Windows
 Key: HADOOP-10518
 URL: https://issues.apache.org/jira/browse/HADOOP-10518
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 2.4.0
Reporter: Arpit Agarwal


{{TestSaslRPC}} fails with exceptions such as the following:

{code}
Tests run: 85, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 36.765 sec <<< 
FAILURE! - in org.apache.hadoop.ipc.TestSaslRPC
testTokenOnlyServer[0](org.apache.hadoop.ipc.TestSaslRPC)  Time elapsed: 0.092 
sec  <<< FAILURE!
java.lang.AssertionError: 
expected:<.*RemoteException.*AccessControlException.*: SIMPLE authentication is 
not enabled.*> but was:<java.io.IOException: An established connection was 
aborted by the software in your host machine; Host Details : local host is: 
"WIN-Q5VLNTLIBJ0/10.0.2.15"; destination host is: "WIN-Q5VLNTLIBJ0":49623; >
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.hadoop.ipc.TestSaslRPC.assertAuthEquals(TestSaslRPC.java:978)
at 
org.apache.hadoop.ipc.TestSaslRPC.testTokenOnlyServer(TestSaslRPC.java:782)
{code}

The exact location/number of failures varies by run.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10413) Log statements must include pid and tid information

2014-03-18 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-10413:
--

 Summary: Log statements must include pid and tid information
 Key: HADOOP-10413
 URL: https://issues.apache.org/jira/browse/HADOOP-10413
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.3.0, 3.0.0
Reporter: Arpit Agarwal


Log statements do not include process IDs and thread IDs, which makes debugging 
hard when the output of multiple requests is interleaved. It's even worse when 
looking at the output of test runs, because the logs from multiple daemons are 
interleaved in the same file.

Log4j does not provide a built-in mechanism for this, so we'd likely have to 
write some extra code. One possible solution is to initialize the IDs in the 
[MDC|https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/MDC.html] 
and extract them by updating the {{ConversionPattern}} as [described 
here|http://stackoverflow.com/a/12202124].
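
A sketch of what the initialization could look like. The {{pid}}/{{tid}} key names and the init hook are hypothetical, and since the MDC is per-thread, {{tid}} would need to be set on each thread:

{code:java}
import java.lang.management.ManagementFactory;

import org.apache.log4j.MDC;

public class LogIdSetup {
  public static void initLogIds() {
    // On common JVMs the runtime name looks like "12345@hostname".
    String pid = ManagementFactory.getRuntimeMXBean().getName().split("@")[0];
    MDC.put("pid", pid);
    MDC.put("tid", String.valueOf(Thread.currentThread().getId()));
  }
}
{code}

The values would then be extracted with a pattern such as {{%d %X{pid} %X{tid} %-5p %c: %m%n}}.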



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10408) TestMetricsSystemImpl fails occasionally

2014-03-13 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-10408:
--

 Summary: TestMetricsSystemImpl fails occasionally
 Key: HADOOP-10408
 URL: https://issues.apache.org/jira/browse/HADOOP-10408
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Arpit Agarwal


{{TestMetricsSystemImpl#testMultiThreadedPublish}} fails occasionally due to 
dropped events. Exception details in comment below.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10395) TestCallQueueManager is flaky

2014-03-08 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-10395:
--

 Summary: TestCallQueueManager is flaky
 Key: HADOOP-10395
 URL: https://issues.apache.org/jira/browse/HADOOP-10395
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.4.0
Reporter: Arpit Agarwal


{{TestCallQueueManager#testSwapUnderContention}} fails occasionally on a test 
VM with this assert.
{code}
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.hadoop.ipc.TestCallQueueManager.testSwapUnderContention(TestCallQueueManager.java:193)
{code}

I think the issue is that the assert is probabilistic: although it is extremely 
unlikely, it is possible for the queue to be intermittently empty while the 
putters and getters are running.
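
One way to harden the test would be to sample the condition repeatedly rather than asserting a single observation. A rough helper sketch (the name and retry bounds are arbitrary):

{code:java}
import java.util.concurrent.Callable;

import static org.junit.Assert.fail;

public class EventualAssert {
  /** Retry a condition with a bounded number of samples before failing. */
  static void assertEventually(Callable<Boolean> condition) throws Exception {
    for (int i = 0; i < 50; i++) {
      if (condition.call()) {
        return;
      }
      Thread.sleep(10);
    }
    fail("condition never held across all samples");
  }
}
{code}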




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10394) TestAuthenticationFilter is flaky

2014-03-07 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-10394:
--

 Summary: TestAuthenticationFilter is flaky
 Key: HADOOP-10394
 URL: https://issues.apache.org/jira/browse/HADOOP-10394
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.3.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


We have seen this assert cause occasional failures on Ubuntu.

{code}
Assert.assertEquals(System.currentTimeMillis() + 1000 * 1000,
 token.getExpires(), 100);
{code}

The expected fudge is up to 100ms; we have seen up to ~110ms in practice.
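
A more robust variant would bracket the token creation with timestamps instead of relying on a fixed fudge. A sketch, where {{createToken()}} is a hypothetical stand-in for however the test obtains the token:

{code:java}
long before = System.currentTimeMillis();
AuthenticationToken token = createToken();  // hypothetical factory
long after = System.currentTimeMillis();

// The expiry (creation time + 1000s TTL) must land between the two
// observed timestamps, so the assertion holds however slow the machine is.
assertTrue(token.getExpires() >= before + 1000 * 1000);
assertTrue(token.getExpires() <= after + 1000 * 1000);
{code}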



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-10347) branch-2 fails to compile

2014-02-14 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-10347.


Resolution: Not A Problem

> branch-2 fails to compile
> -
>
> Key: HADOOP-10347
> URL: https://issues.apache.org/jira/browse/HADOOP-10347
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.4.0
>Reporter: Arpit Agarwal
>Priority: Critical
>
> I get the following error compiling branch-2.
> {code}
> Picked up _JAVA_OPTIONS: -Djava.awt.headless=true
> [ERROR] COMPILATION ERROR :
> [ERROR] 
> /Users/aagarwal/src/hdp2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java:[223,20]
>  cannot find symbol
> symbol  : method isSecure()
> location: class org.apache.hadoop.http.HttpConfig
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:2.5.1:compile 
> (default-compile) on project hadoop-common: Compilation failure
> [ERROR] 
> /Users/aagarwal/src/hdp2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java:[223,20]
>  cannot find symbol
> [ERROR] symbol  : method isSecure()
> [ERROR] location: class org.apache.hadoop.http.HttpConfig
> {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Resolved] (HADOOP-10347) branch-2 fails to compile

2014-02-14 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-10347.


Resolution: Duplicate

Yep Haohui just pointed out the same. Thanks Andrew, resolving as a dupe.

> branch-2 fails to compile
> -
>
> Key: HADOOP-10347
> URL: https://issues.apache.org/jira/browse/HADOOP-10347
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.4.0
>Reporter: Arpit Agarwal
>Priority: Critical
>
> I get the following error compiling branch-2.
> {code}
> Picked up _JAVA_OPTIONS: -Djava.awt.headless=true
> [ERROR] COMPILATION ERROR :
> [ERROR] 
> /Users/aagarwal/src/hdp2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java:[223,20]
>  cannot find symbol
> symbol  : method isSecure()
> location: class org.apache.hadoop.http.HttpConfig
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:2.5.1:compile 
> (default-compile) on project hadoop-common: Compilation failure
> [ERROR] 
> /Users/aagarwal/src/hdp2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java:[223,20]
>  cannot find symbol
> [ERROR] symbol  : method isSecure()
> [ERROR] location: class org.apache.hadoop.http.HttpConfig
> {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Reopened] (HADOOP-10347) branch-2 fails to compile

2014-02-14 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HADOOP-10347:



> branch-2 fails to compile
> -
>
> Key: HADOOP-10347
> URL: https://issues.apache.org/jira/browse/HADOOP-10347
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.4.0
>Reporter: Arpit Agarwal
>Priority: Critical
>
> I get the following error compiling branch-2.
> {code}
> Picked up _JAVA_OPTIONS: -Djava.awt.headless=true
> [ERROR] COMPILATION ERROR :
> [ERROR] 
> /Users/aagarwal/src/hdp2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java:[223,20]
>  cannot find symbol
> symbol  : method isSecure()
> location: class org.apache.hadoop.http.HttpConfig
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:2.5.1:compile 
> (default-compile) on project hadoop-common: Compilation failure
> [ERROR] 
> /Users/aagarwal/src/hdp2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java:[223,20]
>  cannot find symbol
> [ERROR] symbol  : method isSecure()
> [ERROR] location: class org.apache.hadoop.http.HttpConfig
> {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HADOOP-10330) TestFrameDecoder fails if it cannot bind port 12345

2014-02-06 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-10330:
--

 Summary: TestFrameDecoder fails if it cannot bind port 12345
 Key: HADOOP-10330
 URL: https://issues.apache.org/jira/browse/HADOOP-10330
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.2.0, 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


{{TestFrameDecoder}} fails if port 12345 is in use.
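
One way to avoid the hard-coded port is to ask the OS for a free ephemeral one; a minimal sketch:

{code:java}
import java.io.IOException;
import java.net.ServerSocket;

public class FreePort {
  /** Bind to port 0 so the OS picks a free ephemeral port. */
  static int getFreePort() throws IOException {
    ServerSocket probe = new ServerSocket(0);
    try {
      return probe.getLocalPort();
    } finally {
      probe.close();
    }
  }
}
{code}

Note there is a small race between closing the probe socket and re-binding; where the API allows it, binding the server under test directly to port 0 and querying the bound port avoids that entirely.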



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HADOOP-10273) Fix 'maven site'

2014-01-23 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-10273:
--

 Summary: Fix 'maven site'
 Key: HADOOP-10273
 URL: https://issues.apache.org/jira/browse/HADOOP-10273
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Arpit Agarwal


'mvn site' is broken - it gives the following error.

{code}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-site-plugin:3.0:site (default-site) on project 
hadoop-main: Execution default-site of goal 
org.apache.maven.plugins:maven-site-plugin:3.0:site failed: A required class 
was missing while executing 
org.apache.maven.plugins:maven-site-plugin:3.0:site: 
org/sonatype/aether/graph/DependencyFilter

[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/AetherClassNotFound
{code}

Looks related to 
https://cwiki.apache.org/confluence/display/MAVEN/AetherClassNotFound

Bumping the maven-site-plugin version should fix it.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Resolved] (HADOOP-10217) Unable to run 'hadoop' commands, after installing on Cygwin

2014-01-10 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-10217.


Resolution: Invalid

Anand, Cygwin is not supported.

Hadoop 2.0 has native support for Windows. There is no official Windows 
package yet and some features are a work in progress, but the good news is that 
most of the functionality is in place and building Windows packages is easy.

I suggest looking at BUILDING.txt in the source distribution for instructions. 
If you run into any issues, please send an email to the mailing list. Resolving 
as 'Invalid'.


> Unable to run 'hadoop' commands, after installing on Cygwin
> ---
>
> Key: HADOOP-10217
> URL: https://issues.apache.org/jira/browse/HADOOP-10217
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.2.0
> Environment: Installed on Cygwin (latest version) running Window XP. 
> Set Java 1.7.0_45 path (JDK) to /cygdrive/e/JDK. Installed ssh (on 
> /cygdrive/e/Openssh-6.4p1), created all keys and stored on 
> /home/admin.Installed hadoop-2.2.0 on /cygdrive/e/hadoop-2.2.0
>Reporter: Anand Murali
>  Labels: test
>
> Did following
> 1. export JAVA_HOME=/cygdrive/e/JDK
> 2. export HADOOP_INSTALL=/cygdrive/e/hadoop-2.2.0
> 3. export 
> PATH=:$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin:$HADOOP_INSTALL/etc:$HADOOP_INSTALL/share:$HADOOP_INSTALL/lib:$HADOOP_INSTALL/libexec
> $hadoop version
> Error: Could not find or load main class org.apache.hadoop.util.VersionInfo.
> Cannot run any more commands. I am unable to detect what path problem is 
> causing this error.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HADOOP-10066) Cleanup ant dependencies

2013-10-23 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-10066:
--

 Summary: Cleanup ant dependencies
 Key: HADOOP-10066
 URL: https://issues.apache.org/jira/browse/HADOOP-10066
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Arpit Agarwal


The Maven build seems to pull in multiple versions of some ant plugins. Filing 
a Jira to address this.

{code}
~/.m2$ find ./ -name "ant"
./repository/org/apache/maven/plugins/maven-antrun-plugin
./repository/org/apache/maven/plugins/maven-antrun-plugin/1.7/maven-antrun-plugin-1.7.jar.sha1
./repository/org/apache/maven/plugins/maven-antrun-plugin/1.7/maven-antrun-plugin-1.7.jar
./repository/org/apache/maven/plugins/maven-antrun-plugin/1.7/maven-antrun-plugin-1.7.pom
./repository/org/apache/maven/plugins/maven-antrun-plugin/1.7/maven-antrun-plugin-1.7.pom.sha1
./repository/org/apache/ant
./repository/org/apache/ant/ant
./repository/org/apache/ant/ant/1.8.2/ant-1.8.2.pom.sha1
./repository/org/apache/ant/ant/1.8.2/ant-1.8.2.jar.sha1
./repository/org/apache/ant/ant/1.8.2/ant-1.8.2.jar
./repository/org/apache/ant/ant/1.8.2/ant-1.8.2.pom
./repository/org/apache/ant/ant/1.8.1/ant-1.8.1.jar
./repository/org/apache/ant/ant/1.8.1/ant-1.8.1.pom
./repository/org/apache/ant/ant/1.8.1/ant-1.8.1.pom.sha1
./repository/org/apache/ant/ant/1.8.1/ant-1.8.1.jar.sha1
./repository/org/apache/ant/ant/1.7.1/ant-1.7.1.pom.sha1
./repository/org/apache/ant/ant/1.7.1/ant-1.7.1.pom
./repository/org/apache/ant/ant/1.7.1/ant-1.7.1.jar
./repository/org/apache/ant/ant/1.7.1/ant-1.7.1.jar.sha1
./repository/org/apache/ant/ant-launcher
./repository/org/apache/ant/ant-launcher/1.8.2/ant-launcher-1.8.2.jar.sha1
./repository/org/apache/ant/ant-launcher/1.8.2/ant-launcher-1.8.2.jar
./repository/org/apache/ant/ant-launcher/1.8.2/ant-launcher-1.8.2.pom.sha1
./repository/org/apache/ant/ant-launcher/1.8.2/ant-launcher-1.8.2.pom
./repository/org/apache/ant/ant-launcher/1.8.1/ant-launcher-1.8.1.pom.sha1
./repository/org/apache/ant/ant-launcher/1.8.1/ant-launcher-1.8.1.pom
./repository/org/apache/ant/ant-launcher/1.8.1/ant-launcher-1.8.1.jar.sha1
./repository/org/apache/ant/ant-launcher/1.8.1/ant-launcher-1.8.1.jar
./repository/org/apache/ant/ant-launcher/1.7.1/ant-launcher-1.7.1.jar.sha1
./repository/org/apache/ant/ant-launcher/1.7.1/ant-launcher-1.7.1.pom.sha1
./repository/org/apache/ant/ant-launcher/1.7.1/ant-launcher-1.7.1.jar
./repository/org/apache/ant/ant-launcher/1.7.1/ant-launcher-1.7.1.pom
./repository/org/apache/ant/ant-parent
./repository/org/apache/ant/ant-parent/1.8.2/ant-parent-1.8.2.pom.sha1
./repository/org/apache/ant/ant-parent/1.8.2/ant-parent-1.8.2.pom
./repository/org/apache/ant/ant-parent/1.8.1/ant-parent-1.8.1.pom.sha1
./repository/org/apache/ant/ant-parent/1.8.1/ant-parent-1.8.1.pom
./repository/org/apache/ant/ant-parent/1.7.1/ant-parent-1.7.1.pom
./repository/org/apache/ant/ant-parent/1.7.1/ant-parent-1.7.1.pom.sha1
./repository/ant
./repository/ant/ant
./repository/ant/ant/1.6.5/ant-1.6.5.pom
./repository/ant/ant/1.6.5/ant-1.6.5.jar.sha1
./repository/ant/ant/1.6.5/ant-1.6.5.jar
./repository/ant/ant/1.6.5/ant-1.6.5.pom.sha1
{code}

(As pointed out by Jonathan Eagles on HADOOP-10064)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HADOOP-10064) Upgrade to maven antrun plugin version 1.7

2013-10-22 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-10064:
--

 Summary: Upgrade to maven antrun plugin version 1.7
 Key: HADOOP-10064
 URL: https://issues.apache.org/jira/browse/HADOOP-10064
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0
Reporter: Arpit Agarwal


v1.6 does not respect 'mvn -q'. 

I have been building with 1.7 on my dev machine and haven't encountered any 
problems so far.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HADOOP-9527) Add symlink support to LocalFileSystem on Windows

2013-08-12 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-9527.
---

Resolution: Fixed

> Add symlink support to LocalFileSystem on Windows
> -
>
> Key: HADOOP-9527
> URL: https://issues.apache.org/jira/browse/HADOOP-9527
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, test
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 3.0.0, 2.1.0-beta, 2.3.0
>
> Attachments: HADOOP-9527.001.patch, HADOOP-9527.002.patch, 
> HADOOP-9527.003.patch, HADOOP-9527.004.patch, HADOOP-9527.005.patch, 
> HADOOP-9527.006.patch, HADOOP-9527.007.patch, HADOOP-9527.008.patch, 
> HADOOP-9527.009.patch, HADOOP-9527.010.branch-2.1.0-beta.patch, 
> HADOOP-9527.010.patch, HADOOP-9527.011.patch, 
> HADOOP-9527.012-branch-2.1-beta.patch, HADOOP-9527.012.patch, RenameLink.java
>
>
> Multiple test cases are broken. I didn't look at each failure in detail.
> The main cause of the failures appears to be that RawLocalFS.readLink() does 
> not work on Windows. We need "winutils readlink" to fix the test.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Reopened] (HADOOP-9527) Add symlink support to LocalFileSystem on Windows

2013-08-08 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HADOOP-9527:
---


> Add symlink support to LocalFileSystem on Windows
> -
>
> Key: HADOOP-9527
> URL: https://issues.apache.org/jira/browse/HADOOP-9527
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, test
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HADOOP-9527.001.patch, HADOOP-9527.002.patch, 
> HADOOP-9527.003.patch, HADOOP-9527.004.patch, HADOOP-9527.005.patch, 
> HADOOP-9527.006.patch, HADOOP-9527.007.patch, HADOOP-9527.008.patch, 
> HADOOP-9527.009.patch, HADOOP-9527.010.patch, HADOOP-9527.011.patch, 
> HADOOP-9527.012.patch, RenameLink.java
>
>
> Multiple test cases are broken. I didn't look at each failure in detail.
> The main cause of the failures appears to be that RawLocalFS.readLink() does 
> not work on Windows. We need "winutils readlink" to fix the test.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9807) Fix TestSymlinkLocalFSFileSystem on Windows

2013-08-01 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-9807.
---

Resolution: Duplicate
  Assignee: Arpit Agarwal

Incremental fixes on top of HADOOP-9527 turned out to be quite trivial, so I am 
dup'ing this bug.

> Fix TestSymlinkLocalFSFileSystem on Windows
> ---
>
> Key: HADOOP-9807
> URL: https://issues.apache.org/jira/browse/HADOOP-9807
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> TestSymlinkLocalFSFileSystem is broken on Windows. Placeholder Jira to fix 
> it. Details to follow.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9819) FileSystem#rename is broken, deletes target when renaming link to itself

2013-08-01 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-9819:
-

 Summary: FileSystem#rename is broken, deletes target when renaming 
link to itself
 Key: HADOOP-9819
 URL: https://issues.apache.org/jira/browse/HADOOP-9819
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 3.0.0
Reporter: Arpit Agarwal


Uncovered while fixing TestSymlinkLocalFsFileSystem on Windows.

This block of code deletes the symlink, the correct behavior is to do nothing.

{code:java}
try {
  dstStatus = getFileLinkStatus(dst);
} catch (IOException e) {
  dstStatus = null;
}
if (dstStatus != null) {
  if (srcStatus.isDirectory() != dstStatus.isDirectory()) {
throw new IOException("Source " + src + " Destination " + dst
+ " both should be either file or directory");
  }
  if (!overwrite) {
throw new FileAlreadyExistsException("rename destination " + dst
+ " already exists.");
  }
  // Delete the destination that is a file or an empty directory
  if (dstStatus.isDirectory()) {
FileStatus[] list = listStatus(dst);
if (list != null && list.length != 0) {
  throw new IOException(
  "rename cannot overwrite non empty destination directory " + dst);
}
  }
  delete(dst, false);
{code}
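
For illustration, a sketch of the missing guard in the style of the snippet above; {{makeQualified}} mirrors how FileSystem normalizes paths, and exact equality semantics for links would need more care than shown:

{code:java}
Path qSrc = makeQualified(src);
Path qDst = makeQualified(dst);
if (qSrc.equals(qDst)) {
  // Renaming a file (or a link) onto itself is a no-op;
  // nothing must be deleted.
  return true;
}
{code}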

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9807) Fix TestSymlinkLocalFSFileSystem on Windows

2013-07-31 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-9807:
-

 Summary: Fix TestSymlinkLocalFSFileSystem on Windows
 Key: HADOOP-9807
 URL: https://issues.apache.org/jira/browse/HADOOP-9807
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 3.0.0
Reporter: Arpit Agarwal


TestSymlinkLocalFSFileSystem is broken on Windows. Placeholder Jira to fix it. 
Details to follow.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9782) Datanode daemon cannot be started on OS X

2013-07-29 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-9782.
---

Resolution: Fixed

> Datanode daemon cannot be started on OS X
> -
>
> Key: HADOOP-9782
> URL: https://issues.apache.org/jira/browse/HADOOP-9782
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
> Environment: OS X
>Reporter: Arpit Agarwal
>
> Datanode fails to start with the following exception on OS X.
> {code}
> java.lang.UnsupportedOperationException: stat is not supported on this 
> platform
> at org.apache.hadoop.fs.Stat.getExecString(Stat.java:91)
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:405)
> at org.apache.hadoop.util.Shell.run(Shell.java:400)
> at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:65)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:792)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:523)
> at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:398)
> at 
> org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:129)
> at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:146)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:1782)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getDataDirsFromURIs(DataNode.java:1829)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1807)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1726)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1190)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:665)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:334)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:316)
> at 
> org.apache.hadoop.mapreduce.v2.TestMRJobs.setup(TestMRJobs.java:112)
> {code}
> It appears to be caused by {{Stat#getExecString}} not supporting OS X.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Reopened] (HADOOP-9782) Datanode daemon cannot be started on OS X

2013-07-29 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HADOOP-9782:
---


> Datanode daemon cannot be started on OS X
> -
>
> Key: HADOOP-9782
> URL: https://issues.apache.org/jira/browse/HADOOP-9782
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
> Environment: OS X
>Reporter: Arpit Agarwal
>
> Datanode fails to start with the following exception on OS X.
> {code}
> java.lang.UnsupportedOperationException: stat is not supported on this 
> platform
> at org.apache.hadoop.fs.Stat.getExecString(Stat.java:91)
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:405)
> at org.apache.hadoop.util.Shell.run(Shell.java:400)
> at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:65)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:792)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:523)
> at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:398)
> at 
> org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:129)
> at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:146)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:1782)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getDataDirsFromURIs(DataNode.java:1829)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1807)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1726)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1190)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:665)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:334)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:316)
> at 
> org.apache.hadoop.mapreduce.v2.TestMRJobs.setup(TestMRJobs.java:112)
> {code}
> It appears to be caused by {{Stat#getExecString}} not supporting OS X.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9782) Datanode daemon cannot be started on OS X

2013-07-29 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-9782.
---

Resolution: Not A Problem

> Datanode daemon cannot be started on OS X
> -
>
> Key: HADOOP-9782
> URL: https://issues.apache.org/jira/browse/HADOOP-9782
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
> Environment: OS X
>Reporter: Arpit Agarwal
>
> Datanode fails to start with the following exception on OS X.
> {code}
> java.lang.UnsupportedOperationException: stat is not supported on this 
> platform
> at org.apache.hadoop.fs.Stat.getExecString(Stat.java:91)
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:405)
> at org.apache.hadoop.util.Shell.run(Shell.java:400)
> at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:65)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:792)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:523)
> at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:398)
> at 
> org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:129)
> at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:146)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:1782)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getDataDirsFromURIs(DataNode.java:1829)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1807)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1726)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1190)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:665)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:334)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:316)
> at 
> org.apache.hadoop.mapreduce.v2.TestMRJobs.setup(TestMRJobs.java:112)
> {code}
> It appears to be caused by {{Stat#getExecString}} not supporting OS X.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9627) TestSocketIOTimeout should be rewritten without platform-specific assumptions

2013-06-06 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-9627:
-

 Summary: TestSocketIOTimeout should be rewritten without 
platform-specific assumptions
 Key: HADOOP-9627
 URL: https://issues.apache.org/jira/browse/HADOOP-9627
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0, 2.3.0
Reporter: Arpit Agarwal


TestSocketIOTimeout makes some assumptions about the behavior of file channels 
wrt partial writes that do not appear to hold true on Windows [details in 
HADOOP-8982].

Currently part of the test is skipped on Windows.

This bug is to track fixing the test.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9553) Few tests that use timeouts are broken on Windows

2013-05-08 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-9553:
-

 Summary: Few tests that use timeouts are broken on Windows
 Key: HADOOP-9553
 URL: https://issues.apache.org/jira/browse/HADOOP-9553
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
 Fix For: 3.0.0


The following tests that use timeouts are broken on Windows. From a quick glance 
these appear to be test issues, but more investigation is needed.

# TestAuthenticationToken
# TestSocketIOWithTimeout



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9537) Backport AIX patches to branch-1

2013-05-01 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-9537:
-

 Summary: Backport AIX patches to branch-1
 Key: HADOOP-9537
 URL: https://issues.apache.org/jira/browse/HADOOP-9537
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.3.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 1.3.0


Backport a couple of trivial Jiras to branch-1.

HADOOP-9305  Add support for running the Hadoop client on 64-bit AIX
HADOOP-9283  Add support for running the Hadoop client on AIX


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9527) TestLocalFSFileContextSymlink is broken on Windows

2013-04-30 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-9527:
-

 Summary: TestLocalFSFileContextSymlink is broken on Windows
 Key: HADOOP-9527
 URL: https://issues.apache.org/jira/browse/HADOOP-9527
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
 Fix For: 3.0.0


Multiple test cases are broken. I didn't look at each failure in detail.

The main cause of the failures appears to be that RawLocalFS.readLink() does 
not work on Windows. We need "winutils readlink" to fix the test.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9526) TestShellCommandFencer fails on Windows

2013-04-29 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-9526:
-

 Summary: TestShellCommandFencer fails on Windows
 Key: HADOOP-9526
 URL: https://issues.apache.org/jira/browse/HADOOP-9526
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
 Fix For: 3.0.0


The following tests fail on Windows.
# testTargetAsEnvironment
# testConfAsEnvironment
# testTargetAsEnvironment

All failures look like test issues.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9524) Fix ShellCommandFencer to work on Windows

2013-04-28 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-9524:
-

 Summary: Fix ShellCommandFencer to work on Windows
 Key: HADOOP-9524
 URL: https://issues.apache.org/jira/browse/HADOOP-9524
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 3.0.0
 Environment: Windows
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0


ShellCommandFencer has a hard-coded dependency on bash. Since we no longer 
require Cygwin/bash on Windows, we must fix it to use cmd.exe instead.
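
A sketch of OS-conditional command construction; {{Shell.WINDOWS}} is the existing Hadoop flag, and the exact argv layout here is illustrative rather than the final patch:

{code:java}
import org.apache.hadoop.util.Shell;

public class FenceCommandBuilder {
  static String[] buildFenceCommand(String cmd) {
    if (Shell.WINDOWS) {
      // cmd.exe /c runs the command and then terminates.
      return new String[] { "cmd.exe", "/c", cmd };
    } else {
      // -e makes bash exit on the first failing command.
      return new String[] { "bash", "-e", "-c", cmd };
    }
  }
}
{code}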

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9508) chmod -R behaves unexpectedly on Windows

2013-04-25 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-9508.
---

Resolution: Invalid

I was repeatedly misreading the code, this is not a bug. Resolving as such.

> chmod -R behaves unexpectedly on Windows
> 
>
> Key: HADOOP-9508
> URL: https://issues.apache.org/jira/browse/HADOOP-9508
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 3.0.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 3.0.0
>
> Attachments: HADOOP-9508-test-to-repro.patch
>
>
> FileUtil.chmod behaves unexpectedly on Windows (it uses "winutils chmod -R" 
> under the covers).
> The problem can be manually reproduced with winutils or with the attached 
> test case (TestNativeIO#testWindowsChmodRecursive)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9508) chmod -R behaves unexpectedly on Windows

2013-04-25 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-9508:
-

 Summary: chmod -R behaves unexpectedly on Windows
 Key: HADOOP-9508
 URL: https://issues.apache.org/jira/browse/HADOOP-9508
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0


FileUtil.chmod behaves unexpectedly on Windows (it uses "winutils chmod -R" 
under the covers).

The problem can be manually reproduced with winutils or with the attached test 
case.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9499) TestWebHdfsUrl timeouts too conservative for Windows

2013-04-24 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-9499:
-

 Summary: TestWebHdfsUrl timeouts too conservative for Windows
 Key: HADOOP-9499
 URL: https://issues.apache.org/jira/browse/HADOOP-9499
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0


TestWebHdfsUrl#testSecureAuthParamsInUrl fails with timeout. The 4 second 
timeout is too low when I test in a Windows VM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9428) TestNativeIO#testRenameTo is broken on Windows

2013-03-26 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-9428.
---

Resolution: Duplicate

> TestNativeIO#testRenameTo is broken on Windows
> --
>
> Key: HADOOP-9428
> URL: https://issues.apache.org/jira/browse/HADOOP-9428
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>  Labels: windows
> Fix For: 3.0.0
>
>
> Exception details:
> testRenameTo(org.apache.hadoop.io.nativeio.TestNativeIO)  Time elapsed: 16 
> sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
>   at org.junit.Assert.fail(Assert.java:91)
>   at org.junit.Assert.failNotEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:126)
>   at org.junit.Assert.assertEquals(Assert.java:145)
>   at 
> org.apache.hadoop.io.nativeio.TestNativeIO.testRenameTo(TestNativeIO.java:423)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9428) TestNativeIO#testRenameTo is broken on Windows

2013-03-21 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-9428:
-

 Summary: TestNativeIO#testRenameTo is broken on Windows
 Key: HADOOP-9428
 URL: https://issues.apache.org/jira/browse/HADOOP-9428
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9427) use jUnit assumptions to skip platform-specific tests

2013-03-21 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-9427:
-

 Summary: use jUnit assumptions to skip platform-specific tests
 Key: HADOOP-9427
 URL: https://issues.apache.org/jira/browse/HADOOP-9427
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
 Fix For: 3.0.0


Certain tests for platform-specific functionality are either executed only on 
Windows or bypassed on Windows using checks like "if (Path.WINDOWS)", e.g. 
TestNativeIO.

Prefer using JUnit assumptions instead, so that skipped tests are reported as 
skipped rather than passing vacuously.
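
A minimal sketch of the suggested pattern, assuming JUnit 4 (the class and 
test names here are illustrative, not from an actual patch):
{code}
import static org.junit.Assume.assumeTrue;

import org.apache.hadoop.fs.Path;
import org.junit.Test;

public class WindowsAssumptionExample {
  // assumeTrue() makes the runner report this test as skipped on non-Windows
  // platforms, instead of letting it pass vacuously behind an if-guard.
  @Test(timeout = 30000)
  public void testWindowsSpecificBehavior() {
    assumeTrue(Path.WINDOWS);
    // ... Windows-specific assertions would go here ...
  }
}
{code}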

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9400) Investigate emulating sticky bit directory permissions on Windows

2013-03-12 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-9400:
-

 Summary: Investigate emulating sticky bit directory permissions on 
Windows
 Key: HADOOP-9400
 URL: https://issues.apache.org/jira/browse/HADOOP-9400
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
 Environment: Windows
Reporter: Arpit Agarwal
 Fix For: 3.0.0


It should be possible to emulate sticky bit permissions on Windows.
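
For context: on POSIX filesystems the sticky bit on a directory means that 
entries can only be deleted or renamed by the entry's owner, the directory's 
owner, or the superuser. Any Windows emulation, presumably via NTFS ACLs (an 
assumption; this issue doesn't commit to an approach), would need to enforce 
the same rule.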

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9398) Fix TestDFSShell failures on Windows

2013-03-12 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-9398:
-

 Summary: Fix TestDFSShell failures on Windows
 Key: HADOOP-9398
 URL: https://issues.apache.org/jira/browse/HADOOP-9398
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
 Environment: Windows
Reporter: Arpit Agarwal


A list of the failed tests, with exceptions, follows. Filing this under Hadoop 
since some of the fixes will need to be in Hadoop Common.
{code}
testFilePermissions(org.apache.hadoop.hdfs.TestDFSShell)  Time elapsed: 284 sec 
 <<< FAILURE!
org.junit.ComparisonFailure: expected: but was:
at org.junit.Assert.assertEquals(Assert.java:123)
at org.junit.Assert.assertEquals(Assert.java:145)
at 
org.apache.hadoop.hdfs.TestDFSShell.confirmPermissionChange(TestDFSShell.java:934)
at org.apache.hadoop.hdfs.TestDFSShell.testChmod(TestDFSShell.java:901)
at 
org.apache.hadoop.hdfs.TestDFSShell.testFilePermissions(TestDFSShell.java:955)

testCopyCommandsWithForceOption(org.apache.hadoop.hdfs.TestDFSShell)  Time 
elapsed: 765 sec  <<< ERROR!
java.lang.IllegalArgumentException: Pathname 
/C:/hdp2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/ForceTestDir from 
C:/hdp2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/ForceTestDir is not a 
valid DFS filename.
at 
org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:171)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:350)
at 
org.apache.hadoop.hdfs.TestDFSShell.testCopyCommandsWithForceOption(TestDFSShell.java:1654)

testServerConfigRespected(org.apache.hadoop.hdfs.TestDFSShell)  Time elapsed: 
14 sec  <<< ERROR!
java.io.IOException: Could not fully delete 
C:\hdp2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:759)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:644)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:334)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:316)
at 
org.apache.hadoop.hdfs.TestDFSShell.deleteFileUsingTrash(TestDFSShell.java:1673)
at 
org.apache.hadoop.hdfs.TestDFSShell.testServerConfigRespected(TestDFSShell.java:1725)

testServerConfigRespectedWithClient(org.apache.hadoop.hdfs.TestDFSShell)  Time 
elapsed: 11 sec  <<< ERROR!
java.io.IOException: Could not fully delete 
C:\hdp2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:759)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:644)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:334)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:316)
at 
org.apache.hadoop.hdfs.TestDFSShell.deleteFileUsingTrash(TestDFSShell.java:1673)
at 
org.apache.hadoop.hdfs.TestDFSShell.testServerConfigRespectedWithClient(TestDFSShell.java:1734)

testClientConfigRespected(org.apache.hadoop.hdfs.TestDFSShell)  Time elapsed: 
11 sec  <<< ERROR!
java.io.IOException: Could not fully delete 
C:\hdp2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:759)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:644)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:334)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:316)
at 
org.apache.hadoop.hdfs.TestDFSShell.deleteFileUsingTrash(TestDFSShell.java:1673)
at 
org.apache.hadoop.hdfs.TestDFSShell.testClientConfigRespected(TestDFSShell.java:1743)

testNoTrashConfig(org.apache.hadoop.hdfs.TestDFSShell)  Time elapsed: 12 sec  
<<< ERROR!
java.io.IOException: Could not fully delete 
C:\hdp2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:759)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:644)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:334)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:316)
at 
org.apache.hadoop.hdfs.TestDFSShell.deleteFileUsingTrash(TestDFSShell.java:1673)
at 
org.apache.hadoop.hdfs.TestDFSShell.testNoTrashConfig(TestDFSShell.java:1751){code}
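
The testCopyCommandsWithForceOption failure above shows the recurring Windows 
pattern: a local directory (C:/...) is passed to DFS as a path, and the drive 
letter makes it an invalid DFS filename. A sketch of the usual remedy, which 
is my assumption rather than the committed fix, is to keep test paths inside 
the DFS namespace:
{code}
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DfsTestPathExample {
  // Build the test working directory from a fixed DFS-absolute path instead
  // of the local test.build.data directory, which on Windows expands to a
  // C:\... path that DistributedFileSystem.getPathName() rejects.
  static Path testDir(FileSystem dfs) {
    Path dir = new Path("/test/ForceTestDir");  // no drive letter
    return dfs.makeQualified(dir);
  }
}
{code}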

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9383) Windows build fails without install goal

2013-03-07 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-9383:
-

 Summary: Windows build fails without install goal
 Key: HADOOP-9383
 URL: https://issues.apache.org/jira/browse/HADOOP-9383
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
 Fix For: 3.0.0


'mvn clean compile' fails on Windows with the following error:

[ERROR] Could not find goal 'protoc' in plugin 
org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT among available goals 
-> [Help 1]

The build succeeds if the install goal is specified.
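
The error suggests that the 'protoc' goal is provided by the in-tree 
hadoop-maven-plugins module, which has to be present in the local Maven 
repository before other modules can bind it, and a compile-only build never 
installs it. Consistent with the observation above, running "mvn install 
-DskipTests" once (so hadoop-maven-plugins gets installed) should let 
subsequent "mvn clean compile" invocations resolve the goal. This is my 
reading of the error, not a documented fix.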

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9368) Add timeouts to new tests in branch-trunk-win

2013-03-06 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-9368.
---

Resolution: Fixed

Resolving. HADOOP-9372 tracks the annotation fixes.

> Add timeouts to new tests in branch-trunk-win
> -
>
> Key: HADOOP-9368
> URL: https://issues.apache.org/jira/browse/HADOOP-9368
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: trunk-win
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: trunk-win
>
> Attachments: HADOOP-9368.patch, HADOOP-9368.patch, HADOOP-9368.patch
>
>
> Add timeouts to the new tests so they can be integrated into trunk.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9372) Fix bad timeout annotations on tests

2013-03-06 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-9372:
-

 Summary: Fix bad timeout annotations on tests
 Key: HADOOP-9372
 URL: https://issues.apache.org/jira/browse/HADOOP-9372
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


The following two tests have bad timeout annotations:

org.apache.hadoop.util.TestWinUtils
org.apache.hadoop.mapreduce.v2.TestMRJobs




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

