Thanks Jarek,
   How would I do that?  Do I need to set fs.defaultFS in core-site.xml, or
is it something else?  Is there a document somewhere that describes this?
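
For example, is it something like this in core-site.xml?  (Just my guess at
the relevant property names for the s3n filesystem - please correct me if
I have them wrong.)

  <!-- make S3 the default filesystem instead of HDFS -->
  <property>
    <name>fs.defaultFS</name>
    <value>s3n://bucketname/</value>
  </property>
  <!-- credentials for the s3n connector -->
  <property>
    <name>fs.s3n.awsAccessKeyId</name>
    <value>MYS3APIKEY</value>
  </property>
  <property>
    <name>fs.s3n.awsSecretAccessKey</name>
    <value>MYS3SECRETKEY</value>
  </property>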

yours,
imran


On Mon, Feb 3, 2014 at 9:31 PM, Jarek Jarcec Cecho <[email protected]> wrote:

> Would you mind trying to set the S3 filesystem as the default one for
> Sqoop?
>
> Jarcec
>
> On Mon, Feb 03, 2014 at 10:25:50AM -0800, Imran Akbar wrote:
> > Hi,
> >     I've been able to sqoop from MySQL into HDFS, but I was wondering if
> > it was possible to send the data directly to S3 instead.  I've read some
> > posts on this forum and others that indicate that it's not possible to
> > do this - could someone confirm?
> >
> > I tried to get it to work by setting either of these options:
> > --warehouse-dir s3n://MYS3APIKEY:MYS3SECRETKEY@bucketname/folder/
> > or
> > --target-dir s3n://MYS3APIKEY:MYS3SECRETKEY@bucketname/folder/
> > but I get the error:
> > ERROR tool.ImportTool: Imported Failed: This file system object
> > (hdfs://10.168.22.133:9000) does not support access to the request path
> > 's3n://****:****@iakbar.emr/new-hive-output/_logs' You possibly called
> > FileSystem.get(conf) when you should have called FileSystem.get(uri, conf)
> > to obtain a file system supporting your path
> >
> > If it's not possible to do this, should I just import to HDFS and then
> > output to S3?  Is there an easy way to do this without having to specify
> > the schema of the whole table again?
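> > (I'm imagining something like a plain distcp after the import finishes -
> > paths made up:
> >
> >   hadoop distcp hdfs:///user/imran/mytable \
> >     s3n://MYS3APIKEY:MYS3SECRETKEY@bucketname/folder/
> >
> > - but maybe there's a better way.)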
> >
> > thanks,
> > imran
>
