[jira] [Created] (HADOOP-15221) Swift driver should not fail if JSONUtils reports UnknowPropertyException

2018-02-12 Thread Chen He (JIRA)
Chen He created HADOOP-15221:


 Summary: Swift driver should not fail if JSONUtils reports 
UnknowPropertyException
 Key: HADOOP-15221
 URL: https://issues.apache.org/jira/browse/HADOOP-15221
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/swift
Reporter: Chen He
Assignee: Chen He


org.apache.hadoop.fs.swift.exceptions.SwiftJsonMarshallingException: 
org.codehaus.jackson.map.exc.UnrecognizedPropertyException: Unrecognized field 
We know the system keeps evolving and new fields will be added. However, from a 
compatibility point of view, an extra field added to the JSON should be logged 
but, for robustness, should not lead to a failure.
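As a minimal, stdlib-only sketch of the proposed log-and-continue behavior (the field names and the KNOWN set below are hypothetical, not the driver's actual schema; in the real driver this would amount to configuring the Jackson mapper to tolerate unknown properties):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class LenientBinder {
    // Hypothetical set of fields the client knows about.
    static final Set<String> KNOWN = Set.of("name", "bytes", "content_type");

    // Keep the known fields, log the unrecognized ones, and never throw.
    static Map<String, String> bind(Map<String, String> raw) {
        Map<String, String> known = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : raw.entrySet()) {
            if (KNOWN.contains(e.getKey())) {
                known.put(e.getKey(), e.getValue());
            } else {
                System.err.println("Ignoring unrecognized field: " + e.getKey());
            }
        }
        return known;
    }

    public static void main(String[] args) {
        Map<String, String> raw = new LinkedHashMap<>();
        raw.put("name", "foo");
        raw.put("new_server_field", "x"); // added by a newer server version
        System.out.println(bind(raw).keySet()); // [name]
    }
}
```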



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14716) SwiftNativeFileSystem should not eat the exception when rename

2017-08-01 Thread Chen He (JIRA)
Chen He created HADOOP-14716:


 Summary: SwiftNativeFileSystem should not eat the exception when 
rename
 Key: HADOOP-14716
 URL: https://issues.apache.org/jira/browse/HADOOP-14716
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 3.0.0-alpha3, 2.8.1
Reporter: Chen He
Assignee: Chen He
Priority: Minor


Currently, "rename" in SwiftNativeFileSystem eats exceptions and returns 
"false". It is not easy for users to find the root cause of why rename failed. 
It should, at least, write out some logs instead of silently swallowing these 
exceptions.






[jira] [Created] (HADOOP-14641) hadoop-openstack driver reports input stream leaking

2017-07-10 Thread Chen He (JIRA)
Chen He created HADOOP-14641:


 Summary: hadoop-openstack driver reports input stream leaking
 Key: HADOOP-14641
 URL: https://issues.apache.org/jira/browse/HADOOP-14641
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.3
Reporter: Chen He


[2017-07-07 14:51:07,052] ERROR Input stream is leaking handles by not being 
closed() properly: HttpInputStreamWithRelease working with https://url/logs 
released=false dataConsumed=false 
(org.apache.hadoop.fs.swift.snative.SwiftNativeInputStream:259)
[2017-07-07 14:51:07,052] DEBUG Releasing connection to https://url/logs:  
finalize() (org.apache.hadoop.fs.swift.http.HttpInputStreamWithRelease:101)
java.lang.Exception: stack
at org.apache.hadoop.fs.swift.http.HttpInputStreamWithRelease.<init>(HttpInputStreamWithRelease.java:71)
at 
org.apache.hadoop.fs.swift.http.SwiftRestClient$10.extractResult(SwiftRestClient.java:1523)
at 
org.apache.hadoop.fs.swift.http.SwiftRestClient$10.extractResult(SwiftRestClient.java:1520)
at 
org.apache.hadoop.fs.swift.http.SwiftRestClient.perform(SwiftRestClient.java:1406)
at 
org.apache.hadoop.fs.swift.http.SwiftRestClient.doGet(SwiftRestClient.java:1520)
at 
org.apache.hadoop.fs.swift.http.SwiftRestClient.getData(SwiftRestClient.java:679)
at 
org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystemStore.getObject(SwiftNativeFileSystemStore.java:276)
at org.apache.hadoop.fs.swift.snative.SwiftNativeInputStream.<init>(SwiftNativeInputStream.java:104)
at 
org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem.open(SwiftNativeFileSystem.java:555)
at 
org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem.open(SwiftNativeFileSystem.java:536)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:769)
at 
com.oracle.kafka.connect.swift.SwiftStorage.exists(SwiftStorage.java:74)
at io.confluent.connect.hdfs.DataWriter.createDir(DataWriter.java:371)
at io.confluent.connect.hdfs.DataWriter.<init>(DataWriter.java:175)
at 
com.oracle.kafka.connect.swift.SwiftSinkTask.start(SwiftSinkTask.java:78)
at 
org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart(WorkerSinkTask.java:231)
at 
org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:145)
at 
org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:139)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:182)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)






[jira] [Resolved] (HADOOP-13570) Hadoop Swift driver should use new Apache httpclient

2016-09-01 Thread Chen He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen He resolved HADOOP-13570.
--
Resolution: Duplicate

Duplicate of HADOOP-11614; closing this one.

> Hadoop Swift driver should use new Apache httpclient
> 
>
> Key: HADOOP-13570
> URL: https://issues.apache.org/jira/browse/HADOOP-13570
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/swift
>Affects Versions: 2.7.3, 2.6.4
>Reporter: Chen He
>
> The current Hadoop openstack module is still using Apache httpclient v1.x, 
> which is too old. We need to update it to a newer version to catch up in performance.






[jira] [Created] (HADOOP-13570) Hadoop swift Driver should use new Apache httpclient

2016-08-31 Thread Chen He (JIRA)
Chen He created HADOOP-13570:


 Summary: Hadoop swift Driver should use new Apache httpclient
 Key: HADOOP-13570
 URL: https://issues.apache.org/jira/browse/HADOOP-13570
 Project: Hadoop Common
  Issue Type: New Feature
Affects Versions: 2.6.4, 2.7.3
Reporter: Chen He


The current Hadoop openstack module is still using Apache httpclient v1.x, which 
is too old. We need to update it to a newer version to catch up in performance.






[jira] [Created] (HADOOP-13211) Swift driver should have a configurable retry feature when ecounter 5xx error

2016-05-26 Thread Chen He (JIRA)
Chen He created HADOOP-13211:


 Summary: Swift driver should have a configurable retry feature 
when ecounter 5xx error
 Key: HADOOP-13211
 URL: https://issues.apache.org/jira/browse/HADOOP-13211
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs/swift
Affects Versions: 2.7.2
Reporter: Chen He
Assignee: Chen He


In the current code, if the Swift driver meets an HTTP 5xx, it throws an 
exception and stops. As a driver, it would be more robust to retry a 
configurable number of times before reporting failure. There are two reasons I 
can imagine:

1. If the server is really busy, it may drop some requests to protect itself 
against a DDoS attack.

2. If the server is accidentally unavailable for a short period of time and 
comes back again, we may not need to fail the whole driver. Just recording the 
exception and retrying may be more flexible.
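A rough sketch of such a configurable retry loop (withRetries, maxRetries, and backoffMs are made-up names, not the driver's actual API; a real implementation would retry only on HTTP 5xx status codes rather than on every exception):

```java
import java.util.concurrent.Callable;

public class Retry {
    /** Invoke the call, retrying up to maxRetries times on failure. */
    static <T> T withRetries(Callable<T> call, int maxRetries, long backoffMs)
            throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return call.call();
            } catch (Exception e) { // in the driver: only retry on HTTP 5xx
                last = e;
                if (attempt < maxRetries) {
                    Thread.sleep(backoffMs << attempt); // exponential backoff
                }
            }
        }
        throw last; // report failure only after all retries are exhausted
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        String r = withRetries(() -> {
            if (++calls[0] < 3) throw new RuntimeException("HTTP 503");
            return "ok";
        }, 5, 1);
        System.out.println(r + " after " + calls[0] + " attempts");
    }
}
```

The exponential backoff between attempts also helps the busy-server case, since it spaces out the retries instead of hammering the server.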






[jira] [Created] (HADOOP-13021) Hadoop swift driver unit test should use unique directory each run

2016-04-12 Thread Chen He (JIRA)
Chen He created HADOOP-13021:


 Summary: Hadoop swift driver unit test should use unique directory 
each run
 Key: HADOOP-13021
 URL: https://issues.apache.org/jira/browse/HADOOP-13021
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/swift
Affects Versions: 2.7.2
Reporter: Chen He
Assignee: Chen He


Since all "unit tests" in the swift package are actually functional tests, they 
require the server's information from the core-site.xml file. However, multiple 
unit test runs on different machines using the same core-site.xml file will 
result in some unit test failures. For example, in 
TestSwiftFileSystemBasicOps.java:

  public void testMkDir() throws Throwable {
    Path path = new Path("/test/MkDir");
    fs.mkdirs(path);
    //success then -so try a recursive operation
    fs.delete(path, true);
  }

It is possible that machines A and B run "mvn clean install" with the same 
core-site.xml file, machine A runs testMkDir() first and deletes the dir, and 
machine B then tries fs.delete(path, true) and reports a failure. This is just 
one example; there are many similar cases in the unit test suite. I would 
propose we use a unique dir for each unit test run instead of 
using "Path path = new Path("/test/MkDir")" for all concurrent runs.
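One possible shape for this, sketched with a random UUID per run (the helper name is hypothetical):

```java
import java.util.UUID;

public class UniqueTestDir {
    // Base path shared by all tests in one run; concurrent runs against the
    // same Swift endpoint get different bases and cannot collide.
    static final String RUN_ID = UUID.randomUUID().toString();

    static String testPath(String name) {
        return "/test-" + RUN_ID + "/" + name;
    }

    public static void main(String[] args) {
        // e.g. /test-3f1c.../MkDir -- unique per "mvn clean install" invocation
        System.out.println(testPath("MkDir"));
    }
}
```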





[jira] [Created] (HADOOP-12623) Swift should support more flexible container name than RFC952

2015-12-07 Thread Chen He (JIRA)
Chen He created HADOOP-12623:


 Summary: Swift should support more flexible container name than 
RFC952
 Key: HADOOP-12623
 URL: https://issues.apache.org/jira/browse/HADOOP-12623
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs/swift
Affects Versions: 2.6.2, 2.7.1
Reporter: Chen He


Just a thought. 

It would be great if the Hadoop swift driver could support more flexible 
container names. The current Hadoop swift driver requires container names to 
follow RFC952 and reports an error if a name does not:

"Invalid swift hostname 'test.1.serviceName': hostname must in form 
container.service"

However, users can use any other Swift object store client (cURL, Cyberduck, 
JOSS, the swift python client, etc.) to upload data to the object store, yet 
the current hadoop swift driver cannot recognize those containers whose names 
do not follow RFC952.

I dug into the source code and figured out that the cause is in 
RestClientBindings.java:

  public static String extractContainerName(URI uri) throws
      SwiftConfigurationException {
    return extractContainerName(uri.getHost());
  }

and URI.java line 3143 gives "host = null".

We may need to find a better way to do the container name parsing.
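The null host is easy to reproduce with plain java.net.URI: a name that is a valid Swift container but not a valid hostname (for example, one containing an underscore) fails server-based authority parsing, so getHost() returns null while getAuthority() still carries the raw text. Parsing from getAuthority() may be one avenue for the fix (the container name below is made up):

```java
import java.net.URI;

public class HostVsAuthority {
    public static void main(String[] args) {
        URI u = URI.create("swift://my_container.service/dir/file");
        // The underscore makes the authority an invalid hostname, so host is null...
        System.out.println(u.getHost());      // null
        // ...but the raw container.service text is still available here.
        System.out.println(u.getAuthority()); // my_container.service
    }
}
```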





[jira] [Created] (HADOOP-12501) Enable SwiftNativeFileSystem to preserve user, group, permission

2015-10-21 Thread Chen He (JIRA)
Chen He created HADOOP-12501:


 Summary: Enable SwiftNativeFileSystem to preserve user, group, 
permission
 Key: HADOOP-12501
 URL: https://issues.apache.org/jira/browse/HADOOP-12501
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs/swift
Affects Versions: 2.7.1
Reporter: Chen He
Assignee: Chen He


Currently, if a user copies a file/dir from the local FS or HDFS to the Swift 
object store, the user/group/permission (u/g/p) information is lost. There 
should be a way to preserve u/g/p; it would benefit transfers of large numbers 
of files/dirs between HDFS/localFS and the Swift object store. We also need to 
be careful, since Hadoop prevents general users from changing u/g/p, especially 
when Kerberos is enabled.





[jira] [Created] (HADOOP-12471) Support Swift file (> 5GB) continuious uploading where there is a failure

2015-10-08 Thread Chen He (JIRA)
Chen He created HADOOP-12471:


 Summary: Support Swift file (> 5GB) continuious uploading where 
there is a failure
 Key: HADOOP-12471
 URL: https://issues.apache.org/jira/browse/HADOOP-12471
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs/swift
Affects Versions: 2.7.1
Reporter: Chen He


The current Swift FileSystem supports files larger than 5GB. A file is chunked 
into pieces as large as 4.6GB (configurable). For example, a 46GB file "foo" in 
swift is stored as:

foo/01
foo/02
foo/03
...
foo/10

Users will not see those numbered chunk objects unless they explicitly ask for 
them. That means, if a user does:

  hadoop fs -ls swift://container.serviceProvidor/foo

it only shows:

  dwr-r--r--  4.6GB  foo

However, in my test, if there is a failure during the upload of the foo file, 
the previously uploaded chunks are left in the object store. It would be good 
to support resuming the upload based on the previous leftover chunks.
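A sketch of what resuming could look like, assuming the uploader can list the leftover chunk objects first (the naming scheme follows the foo/01..foo/10 layout above; the helper is hypothetical):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class ResumeUploadSketch {
    /** Given the already-uploaded chunk names (foo/01, foo/02, ...), return
     *  the index of the first chunk that still needs uploading. */
    static int firstMissingChunk(Set<String> uploaded, String base, int total) {
        for (int i = 1; i <= total; i++) {
            if (!uploaded.contains(String.format("%s/%02d", base, i))) {
                return i;
            }
        }
        return total + 1; // everything is already in the object store
    }

    public static void main(String[] args) {
        Set<String> leftover =
            new HashSet<>(Arrays.asList("foo/01", "foo/02", "foo/03"));
        // Resume the 46GB upload at chunk 4 instead of starting over.
        System.out.println(firstMissingChunk(leftover, "foo", 10)); // 4
    }
}
```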





[jira] [Created] (HADOOP-12461) Swift driver should have the ability to renew token if server has timeout

2015-10-06 Thread Chen He (JIRA)
Chen He created HADOOP-12461:


 Summary: Swift driver should have the ability to renew token if 
server has timeout
 Key: HADOOP-12461
 URL: https://issues.apache.org/jira/browse/HADOOP-12461
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/swift
Affects Versions: 2.7.1
Reporter: Chen He


The current swift driver will encounter authentication failures if the swift 
server has a token timeout. It would be good if the driver could automatically 
renew the token once it expires.





[jira] [Created] (HADOOP-12343) Error message of Swift driver should be more clear when there is mal-format of hostname and service

2015-08-19 Thread Chen He (JIRA)
Chen He created HADOOP-12343:


 Summary: Error message of Swift driver should be more clear when 
there is mal-format of hostname and service
 Key: HADOOP-12343
 URL: https://issues.apache.org/jira/browse/HADOOP-12343
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/swift
Affects Versions: 2.7.1
Reporter: Chen He
Assignee: Chen He


The Swift driver reports "Invalid swift hostname 'null', hostname must in form 
container.service" if the container name does not follow RFC952. However, the 
container or service name is not 'null'. The error message should be clearer.





[jira] [Created] (HADOOP-12086) Swift driver reports NPE if user try to create a dir without name

2015-06-11 Thread Chen He (JIRA)
Chen He created HADOOP-12086:


 Summary: Swift driver reports NPE if user try to create a dir 
without name
 Key: HADOOP-12086
 URL: https://issues.apache.org/jira/browse/HADOOP-12086
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/swift
Affects Versions: 2.3.0
Reporter: Chen He
Assignee: Chen He


hadoop fs -mkdir swift://container.Provider/
-mkdir: Fatal internal error
java.lang.NullPointerException
at 
org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem.makeAbsolute(SwiftNativeFileSystem.java:691)
at 
org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem.getFileStatus(SwiftNativeFileSystem.java:197)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1400)
at org.apache.hadoop.fs.shell.Mkdir.processNonexistentPath(Mkdir.java:73)
at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:262)
at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:244)
at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:190)
at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)





[jira] [Created] (HADOOP-12046) Avoid creating "._COPYING_" temporary file when copying file to Swift file system

2015-05-31 Thread Chen He (JIRA)
Chen He created HADOOP-12046:


 Summary: Avoid creating "._COPYING_" temporary file when copying 
file to Swift file system
 Key: HADOOP-12046
 URL: https://issues.apache.org/jira/browse/HADOOP-12046
 Project: Hadoop Common
  Issue Type: New Feature
Affects Versions: 2.7.0
Reporter: Chen He
Assignee: Chen He


When copying a file from HDFS or the local FS to another file system 
implementation, CommandWithDestination.java creates a temp file by adding the 
suffix "._COPYING_". Once the file is successfully copied, it removes the 
suffix via rename():

  try {
    PathData tempTarget = target.suffix("._COPYING_");
    targetFs.setWriteChecksum(writeChecksum);
    targetFs.writeStreamToFile(in, tempTarget, lazyPersist);
    targetFs.rename(tempTarget, target);
  } finally {
    targetFs.close(); // last ditch effort to ensure temp file is removed
  }

This is not costly in HDFS. However, when copying to the Swift file system, the 
rename is implemented by creating a new object, which is inefficient if users 
copy a lot of files to swift. In my tests, copying a 1GB file to swift took 
about 10% more time. We should perform the copy only once for the Swift file 
system; changes should be limited to the Swift driver level.
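The proposed change boils down to choosing the write target up front instead of always renaming. A trivial self-contained sketch of that decision (the renameIsCopy flag is an assumption for illustration, not an existing FileSystem API):

```java
public class DirectCopySketch {
    /** Object stores where rename is a server-side copy write directly to the
     *  final name; other filesystems keep the temp-file + rename dance. */
    static String writeTarget(String target, boolean renameIsCopy) {
        return renameIsCopy ? target : target + "._COPYING_";
    }

    public static void main(String[] args) {
        System.out.println(writeTarget("/data/foo", true));  // /data/foo
        System.out.println(writeTarget("/data/foo", false)); // /data/foo._COPYING_
    }
}
```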





[jira] [Created] (HADOOP-12038) SwiftNativeOutputStream should check whether a file exists or not before deleting

2015-05-27 Thread Chen He (JIRA)
Chen He created HADOOP-12038:


 Summary: SwiftNativeOutputStream should check whether a file 
exists or not before deleting
 Key: HADOOP-12038
 URL: https://issues.apache.org/jira/browse/HADOOP-12038
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Chen He
Assignee: Chen He
Priority: Minor


15/05/27 15:27:03 WARN snative.SwiftNativeOutputStream: Could not delete 
/tmp/hadoop-root/output-3695386887711395289.tmp

It should check whether the file exists or not before deleting. 
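For the local temp file, java.nio.file already offers exactly this semantics; a small stdlib illustration:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SafeDelete {
    public static void main(String[] args) throws IOException {
        Path tmp = Paths.get("no-such-output-12345.tmp");
        // deleteIfExists returns false (instead of failing or logging a
        // warning) when the temp file is already gone.
        boolean deleted = Files.deleteIfExists(tmp);
        System.out.println(deleted); // false: nothing to delete, no warning
    }
}
```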





[jira] [Created] (HADOOP-11811) Fix typos in hadoop-project/pom.xml

2015-04-07 Thread Chen He (JIRA)
Chen He created HADOOP-11811:


 Summary: Fix typos in hadoop-project/pom.xml
 Key: HADOOP-11811
 URL: https://issues.apache.org/jira/browse/HADOOP-11811
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Chen He
Priority: Trivial





etc. 





[jira] [Created] (HADOOP-11786) Fix Javadoc typos in org.apache.hadoop.fs.FileSystem

2015-04-01 Thread Chen He (JIRA)
Chen He created HADOOP-11786:


 Summary: Fix Javadoc typos in org.apache.hadoop.fs.FileSystem
 Key: HADOOP-11786
 URL: https://issues.apache.org/jira/browse/HADOOP-11786
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.6.0
Reporter: Chen He
Assignee: Yanjun Wang
Priority: Trivial


/**
 * Resets all statistics to 0.
 *
 * In order to reset, we add up all the thread-local statistics data, and
 * set rootData to the negative of that.
 *
 * This may seem like a counterintuitive way to reset the statsitics.  Why
 * can't we just zero out all the thread-local data?  Well, thread-local
 * data can only be modified by the thread that owns it.  If we tried to
 * modify the thread-local data from this thread, our modification might get
 * interleaved with a read-modify-write operation done by the thread that
 * owns the data.  That would result in our update getting lost.
 *
 * The approach used here avoids this problem because it only ever reads
 * (not writes) the thread-local data.  Both reads and writes to rootData
 * are done under the lock, so we're free to modify rootData from any thread
 * that holds the lock.
 */

etc.





[jira] [Created] (HADOOP-11762) Enable swift distcp to secure HDFS

2015-03-26 Thread Chen He (JIRA)
Chen He created HADOOP-11762:


 Summary: Enable swift distcp to secure HDFS
 Key: HADOOP-11762
 URL: https://issues.apache.org/jira/browse/HADOOP-11762
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/swift
Affects Versions: 2.5.1, 2.6.0, 2.4.1, 2.5.0, 2.4.0, 2.3.0
Reporter: Chen He
Assignee: Chen He


Even though we can use "dfs -put" or "dfs -cp" to move data between swift and 
secured HDFS, it is impractical for moving a huge amount of data, like 10TB or 
larger.

The current Hadoop code results in "java.lang.IllegalArgumentException: 
java.net.UnknownHostException: container.swiftdomain". 

Since SwiftNativeFileSystem does not support the token feature right now, it 
would be reasonable to override the "getCanonicalServiceName" method as other 
filesystem extensions (S3FileSystem, S3AFileSystem) do.
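A minimal illustration of the proposed override, using a stand-in base class so the snippet is self-contained (FakeFileSystem substitutes for Hadoop's real FileSystem; the actual change would go into SwiftNativeFileSystem):

```java
// FakeFileSystem stands in for org.apache.hadoop.fs.FileSystem here.
abstract class FakeFileSystem {
    public String getCanonicalServiceName() {
        return "host:port"; // default: advertise a token service
    }
}

class SwiftLikeFileSystem extends FakeFileSystem {
    // Returning null tells TokenCache not to request a delegation token,
    // which is what the S3FileSystem-style overrides achieve.
    @Override
    public String getCanonicalServiceName() {
        return null;
    }
}

public class Demo {
    public static void main(String[] args) {
        System.out.println(new SwiftLikeFileSystem().getCanonicalServiceName()); // null
    }
}
```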






[jira] [Created] (HADOOP-11760) Typo in DistCp.java

2015-03-26 Thread Chen He (JIRA)
Chen He created HADOOP-11760:


 Summary: Typo in DistCp.java
 Key: HADOOP-11760
 URL: https://issues.apache.org/jira/browse/HADOOP-11760
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Chen He
Priority: Trivial


/**
   * Create a default working folder for the job, under the
   * job staging directory
   *
   * @return Returns the working folder information
   * @throws Exception - EXception if any
   */
  private Path createMetaFolderPath() throws Exception {





[jira] [Created] (HADOOP-11759) TockenCache doc has minor problem

2015-03-26 Thread Chen He (JIRA)
Chen He created HADOOP-11759:


 Summary: TockenCache doc has minor problem
 Key: HADOOP-11759
 URL: https://issues.apache.org/jira/browse/HADOOP-11759
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0, 3.0.0
Reporter: Chen He
Priority: Trivial


/**
   * get delegation token for a specific FS
   * @param fs
   * @param credentials
   * @param p
   * @param conf
   * @throws IOException
   */
  static void obtainTokensForNamenodesInternal(FileSystem fs, 
  Credentials credentials, Configuration conf) throws IOException {





[jira] [Created] (HADOOP-11750) distcp fails if we copy data from swift to secure HDFS

2015-03-25 Thread Chen He (JIRA)
Chen He created HADOOP-11750:


 Summary: distcp fails if we copy data from swift to secure HDFS
 Key: HADOOP-11750
 URL: https://issues.apache.org/jira/browse/HADOOP-11750
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/swift
Affects Versions: 2.3.0
Reporter: Chen He
Assignee: Chen He


ERROR tools.DistCp: Exception encountered
java.lang.IllegalArgumentException: java.net.UnknownHostException: 
babynames.main
at 
org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:373)
at 
org.apache.hadoop.security.SecurityUtil.buildDTServiceName(SecurityUtil.java:258)
at org.apache.hadoop.fs.FileSystem.getCanonicalServiceName(FileSystem.java:301)
at org.apache.hadoop.fs.FileSystem.collectDelegationTokens(FileSystem.java:523)
at org.apache.hadoop.fs.FileSystem.addDelegationTokens(FileSystem.java:507)
at 
org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:121)
at 
org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
at 
org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
at 
org.apache.hadoop.tools.SimpleCopyListing.validatePaths(SimpleCopyListing.java:133)
at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:83)
at 
org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:90)
at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:84)
at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:353)
at org.apache.hadoop.tools.DistCp.execute(DistCp.java:160)
at org.apache.hadoop.tools.DistCp.run(DistCp.java:121)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.tools.DistCp.main(DistCp.java:401)
Caused by: java.net.UnknownHostException: babynames.main
... 17 more





[jira] [Created] (HADOOP-11292) "mvm package" reports error when using Java 1.8

2014-11-10 Thread Chen He (JIRA)
Chen He created HADOOP-11292:


 Summary: "mvm package" reports error when using Java 1.8 
 Key: HADOOP-11292
 URL: https://issues.apache.org/jira/browse/HADOOP-11292
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Chen He
Assignee: Chen He


"mvn package -Pdist -Dtar -DskipTests" reports the following error on the 
latest trunk:

[INFO] BUILD FAILURE

[INFO] 

[INFO] Total time: 11.010 s

[INFO] Finished at: 2014-11-10T11:23:49-08:00

[INFO] Final Memory: 51M/555M

[INFO] 

[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-javadoc-plugin:2.8.1:jar (module-javadocs) on 
project hadoop-maven-plugins: MavenReportException: Error while creating 
archive:

[ERROR] Exit code: 1 - 
./hadoop/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/Exec.java:45:
 error: unknown tag: String

[ERROR] * @param command List containing command and all arguments

[ERROR] ^

[ERROR] 
./develop/hadoop/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/Exec.java:46:
 error: unknown tag: String

[ERROR] * @param output List in/out parameter to receive command output

[ERROR] ^

[ERROR] 
./hadoop/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/FileSetUtils.java:50:
 error: unknown tag: File

[ERROR] * @return List containing every element of the FileSet as a File

[ERROR] ^

[ERROR] 

[ERROR] Command line was: 
/Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/bin/javadoc 
-J-Dhttp.proxySet=true -J-Dhttp.proxyHost=www-proxy.us.oracle.com 
-J-Dhttp.proxyPort=80 @options @packages

[ERROR] 

[ERROR] Refer to the generated Javadoc files in 
'./hadoop/hadoop/hadoop-maven-plugins/target' dir.

[ERROR] -> [Help 1]

[ERROR] 

[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.

[ERROR] Re-run Maven using the -X switch to enable full debug logging.

[ERROR] 

[ERROR] For more information about the errors and possible solutions, please 
read the following articles:

[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException

[ERROR] 

[ERROR] After correcting the problems, you can resume the build with the command

[ERROR]   mvn  -rf :hadoop-maven-plugins





[jira] [Resolved] (HADOOP-11020) TestRefreshUserMappings fails

2014-09-10 Thread Chen He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen He resolved HADOOP-11020.
--
Resolution: Duplicate

> TestRefreshUserMappings fails
> -
>
> Key: HADOOP-11020
> URL: https://issues.apache.org/jira/browse/HADOOP-11020
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chen He
>
> Error Message
> /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build%402/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/testGroupMappingRefresh_rsrc.xml
>  (No such file or directory)
> Stacktrace
> java.io.FileNotFoundException: 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build%402/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/testGroupMappingRefresh_rsrc.xml
>  (No such file or directory)
>   at java.io.FileOutputStream.open(Native Method)
>   at java.io.FileOutputStream.<init>(FileOutputStream.java:194)
>   at java.io.FileOutputStream.<init>(FileOutputStream.java:84)
>   at 
> org.apache.hadoop.security.TestRefreshUserMappings.addNewConfigResource(TestRefreshUserMappings.java:242)
>   at 
> org.apache.hadoop.security.TestRefreshUserMappings.testRefreshSuperUserGroupsConfiguration(TestRefreshUserMappings.java:203)





[jira] [Created] (HADOOP-11020) TestRefreshUserMappings fails

2014-08-28 Thread Chen He (JIRA)
Chen He created HADOOP-11020:


 Summary: TestRefreshUserMappings fails
 Key: HADOOP-11020
 URL: https://issues.apache.org/jira/browse/HADOOP-11020
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Chen He


Error Message

/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build%402/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/testGroupMappingRefresh_rsrc.xml
 (No such file or directory)
Stacktrace

java.io.FileNotFoundException: 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build%402/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/testGroupMappingRefresh_rsrc.xml
 (No such file or directory)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:194)
at java.io.FileOutputStream.<init>(FileOutputStream.java:84)
at 
org.apache.hadoop.security.TestRefreshUserMappings.addNewConfigResource(TestRefreshUserMappings.java:242)
at 
org.apache.hadoop.security.TestRefreshUserMappings.testRefreshSuperUserGroupsConfiguration(TestRefreshUserMappings.java:203)





[jira] [Created] (HADOOP-10664) TestNetUtils.testNormalizeHostName failes

2014-06-04 Thread Chen He (JIRA)
Chen He created HADOOP-10664:


 Summary: TestNetUtils.testNormalizeHostName failes
 Key: HADOOP-10664
 URL: https://issues.apache.org/jira/browse/HADOOP-10664
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Chen He


java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertFalse(Assert.java:64)
at org.junit.Assert.assertFalse(Assert.java:74)
at 
org.apache.hadoop.net.TestNetUtils.testNormalizeHostName(TestNetUtils.java:617)





[jira] [Created] (HADOOP-10148) backport hadoop-10107 to branch-0.23

2013-12-06 Thread Chen He (JIRA)
Chen He created HADOOP-10148:


 Summary: backport hadoop-10107 to branch-0.23
 Key: HADOOP-10148
 URL: https://issues.apache.org/jira/browse/HADOOP-10148
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Reporter: Chen He
Assignee: Kihwal Lee
 Fix For: 2.4.0


Found this in [build 
#5440|https://builds.apache.org/job/PreCommit-HDFS-Build/5440/testReport/junit/org.apache.hadoop.hdfs.server.blockmanagement/TestUnderReplicatedBlocks/testSetrepIncWithUnderReplicatedBlocks/]

Caused by: java.lang.NullPointerException
at org.apache.hadoop.ipc.Server.getNumOpenConnections(Server.java:2434)
at 
org.apache.hadoop.ipc.metrics.RpcMetrics.numOpenConnections(RpcMetrics.java:74)


