I am not opposed to this since we are obviously dependent on Hadoop.  

-- 
Brock Noland
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)


On Wednesday, August 8, 2012 at 6:22 PM, Jarek Jarcec Cecho wrote:

> Maybe let me add a few words - I know that it might seem that this method is 
> meant to be used only for HDFS since it's part of Hadoop, but I was using it 
> on LocalFileSystem as well without any issues.
> 
> Jarcec
> 
> On Thu, Aug 09, 2012 at 12:44:57AM +0200, Jarek Jarcec Cecho wrote:
> > Another possibility would be to utilize FileSystem.delete() from Hadoop:
> > 
> > http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/fs/FileSystem.html
> > 
> > This class seems to be present in our current dependencies: hadoop-core in 
> > the hadoop1 profile and hadoop-common in the hadoop2 profile.
> > 
> > Jarcec
> > 
> > On Wed, Aug 08, 2012 at 11:24:05PM +0100, Dave Beech wrote:
> > > In a few places now in MRUnit we're creating temp files / directories as
> > > part of testing (e.g. dist cache testing, MockMapredOutputFormat).
> > > 
> > > These will obviously need to be cleaned up as part of the test execution.
> > > But, from experience I've found deleting files/folders from Java to be
> > > pretty unreliable (especially if folders are not empty), so usually I'd use
> > > commons-io FileUtils.forceDelete() to get the job done.
> > > 
> > > I'd really like to be able to use this method in MRUnit, but adding a new
> > > dependency to the POM for one method just seems... wrong.
> > > 
> > > I don't know what's worse. Adding the dependency, or re-implementing some
> > > file deletion code that's already been done "properly" elsewhere.
> > > 
> > > What's your opinion?
> > > 
> > > Thanks,
> > > Dave
> > > 
> > 
> > 
> 
> 
> 

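For context on the trade-off discussed above (adding commons-io versus re-implementing deletion), a recursive delete can be done with the JDK alone via java.nio.file (Java 7+), handling the non-empty-directory case that makes plain File.delete() unreliable. This is only a sketch; the class and method names are illustrative, not anything from MRUnit:

```java
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

/** Sketch of a JDK-only recursive delete (names are illustrative). */
public class RecursiveDelete {

    /** Deletes {@code root} and everything under it; no-op if it doesn't exist. */
    public static void deleteRecursively(Path root) throws IOException {
        if (!Files.exists(root)) {
            return;
        }
        Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs)
                    throws IOException {
                Files.delete(file); // delete each regular file as we reach it
                return FileVisitResult.CONTINUE;
            }

            @Override
            public FileVisitResult postVisitDirectory(Path dir, IOException exc)
                    throws IOException {
                if (exc != null) {
                    throw exc; // propagate any traversal error
                }
                Files.delete(dir); // children are gone, so the directory is empty
                return FileVisitResult.CONTINUE;
            }
        });
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("mrunit-tmp");
        Files.createDirectories(tmp.resolve("a/b"));
        Files.write(tmp.resolve("a/b/data.txt"), "hello".getBytes());
        deleteRecursively(tmp);
        System.out.println(Files.exists(tmp)); // prints "false"
    }
}
```

The Hadoop alternative mentioned in the thread, FileSystem.delete(path, true), achieves the same recursive behavior and also works against LocalFileSystem, so either route avoids the extra commons-io dependency.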