[ https://issues.apache.org/jira/browse/HADOOP-14469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16055869#comment-16055869 ]
Hongyuan Li edited comment on HADOOP-14469 at 6/20/17 2:56 PM:
---------------------------------------------------------------
Resubmitted a new patch to correct the following:
1. fix the related findbugs warning
2. make the unit tests cleaner and more readable
*Update/Correction*
3. this bug has affected the {{FTPFileSystem}} since version 2.6.0
*Update* the latest patch corrects the spelling error

was (Author: hongyuan li):
Resubmitted a new patch to correct the following:
1. fix the related findbugs warning
2. make the unit tests cleaner and more readable
*Update/Correction*
3. this bug has affected the {{FTPFileSystem}} since version 2.6.0

> FTPFileSystem#listStatus gets currentPath and parentPath at the same time,
> causing the recursive list action to loop endlessly
> -----------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-14469
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14469
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs, tools/distcp
>    Affects Versions: 2.6.0
>        Environment: FTP server built with Windows 7 + Serv-U_64 12.1.0.8;
>                     the client code runs on any OS
>            Reporter: Hongyuan Li
>            Assignee: Hongyuan Li
>            Priority: Critical
>        Attachments: HADOOP-14469-001.patch, HADOOP-14469-002.patch,
> HADOOP-14469-003.patch, HADOOP-14469-004.patch, HADOOP-14469-005.patch,
> HADOOP-14469-006.patch, HADOOP-14469-007.patch
>
>
> For some FTP servers (Serv-U, for example), the LIST reply contains entries
> for the current and parent directories, so listStatus returns
> new Path(".") and new Path(".."), which makes a recursive list operation
> loop endlessly. The relevant logic is shown below:
> {code}
> private FileStatus[] listStatus(FTPClient client, Path file)
>     throws IOException {
>   ……
>   FileStatus[] fileStats = new FileStatus[ftpFiles.length];
>   for (int i = 0; i < ftpFiles.length; i++) {
>     fileStats[i] = getFileStatus(ftpFiles[i], absolute);
>   }
>   return fileStats;
> }
> {code}
> Test code:
> {code}
> public void test() throws Exception {
>   FTPFileSystem ftpFileSystem = new FTPFileSystem();
>   ftpFileSystem.initialize(
>       new Path("ftp://test:123456@192.168.44.1/").toUri(),
>       new Configuration());
>   FileStatus[] fileStatus = ftpFileSystem.listStatus(new Path("/new"));
>   for (FileStatus fileStatus1 : fileStatus) {
>     System.out.println(fileStatus1);
>   }
> }
> {code}
> Running the test code above produces the listing below:
> {code}
> FileStatus{path=ftp://test:123456@192.168.44.1/new; isDirectory=true;
> modification_time=1496716980000; access_time=0; owner=user; group=group;
> permission=---------; isSymlink=false}
> FileStatus{path=ftp://test:123456@192.168.44.1/; isDirectory=true;
> modification_time=1496716980000; access_time=0; owner=user; group=group;
> permission=---------; isSymlink=false}
> FileStatus{path=ftp://test:123456@192.168.44.1/new/hadoop; isDirectory=true;
> modification_time=1496716980000; access_time=0; owner=user; group=group;
> permission=---------; isSymlink=false}
> FileStatus{path=ftp://test:123456@192.168.44.1/new/HADOOP-14431-002.patch;
> isDirectory=false; length=2036; replication=1; blocksize=4096;
> modification_time=1495797780000; access_time=0; owner=user; group=group;
> permission=---------; isSymlink=false}
> FileStatus{path=ftp://test:123456@192.168.44.1/new/HADOOP-14486-001.patch;
> isDirectory=false; length=1322; replication=1; blocksize=4096;
> modification_time=1496716980000; access_time=0; owner=user; group=group;
> permission=---------; isSymlink=false}
> FileStatus{path=ftp://test:123456@192.168.44.1/new/hadoop-main;
> isDirectory=true; modification_time=1495797120000; access_time=0; owner=user;
> group=group; permission=---------; isSymlink=false}
> {code}
> In the results above, {{FileStatus{path=ftp://test:123456@192.168.44.1/new; ……}}
> is obviously the current path, and
> {{FileStatus{path=ftp://test:123456@192.168.44.1/;……}} is obviously the
> parent path.
> So any attempt to walk the directory tree recursively will get stuck.
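A straightforward way to break the loop is to drop the "." and ".." entries a server returns before building the FileStatus array. The standalone sketch below shows only that filtering idea; the class and method names are illustrative and are not taken from the attached patches:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class DotEntryFilter {

    // Drop the "." and ".." entries some FTP servers (e.g. Serv-U) include
    // in a LIST reply, so a recursive walk never revisits the current or
    // parent directory.
    static String[] filterDotEntries(String[] names) {
        List<String> kept = new ArrayList<>();
        for (String name : names) {
            if (!".".equals(name) && !"..".equals(name)) {
                kept.add(name);
            }
        }
        return kept.toArray(new String[0]);
    }

    public static void main(String[] args) {
        // Simulated raw LIST reply containing the problematic entries.
        String[] raw = {".", "..", "hadoop", "HADOOP-14431-002.patch"};
        System.out.println(Arrays.toString(filterDotEntries(raw)));
        // → [hadoop, HADOOP-14431-002.patch]
    }
}
```

In FTPFileSystem#listStatus, the equivalent check would be applied to each FTPFile's name inside the loop that builds `fileStats`, skipping the entry instead of calling getFileStatus on it.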
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org