Hi,

Thanks a lot, all. I understand that my problem comes from the *Hadoop
version*; I moved to the Spark 1.6.0 *Hadoop 2.4* build and there is no problem.

Best,
yasemin

2016-03-15 17:31 GMT+02:00 Gourav Sengupta <gourav.sengu...@gmail.com>:

> Once again, please use roles; there is no situation in which you have to
> specify the access keys in the URI. Please read the Amazon documentation
> and it will say the same. The only situation in which you would put the
> access keys in the URI is when you have not read the Amazon documentation :)
>
> Regards,
> Gourav
>
> On Tue, Mar 15, 2016 at 3:22 PM, Sabarish Sasidharan <
> sabarish.sasidha...@manthan.com> wrote:
>
>> There are many solutions to a problem.
>>
>> Also understand that sometimes your situation demands it. For example, what
>> if you are accessing S3 from a Spark job running on your continuous
>> integration server sitting in your data center, or maybe on a box under your
>> desk? And sometimes you are just trying something out.
>>
>> Also understand that sometimes you want answers that solve the problem at
>> hand without being redirected to something else. What you suggested is an
>> appropriate way of doing it, and one I have proposed myself before, but it
>> doesn't solve the OP's problem at hand.
>>
>> Regards
>> Sab
>> On 15-Mar-2016 8:27 pm, "Gourav Sengupta" <gourav.sengu...@gmail.com>
>> wrote:
>>
>>> Oh!!! What the hell!!!!
>>>
>>> Please never use the URI
>>>
>>> *s3n://AWS_ACCESS_KEY_ID:AWS_SECRET_ACCESS_KEY*. That is a major cause
>>> of pain, security issues, and code maintenance issues, and of course
>>> something that Amazon strongly suggests we do not use. Please use roles and
>>> you will not have to worry about security.
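>>>
>>> A minimal sketch of the role-based approach (assuming the cluster runs on
>>> EMR or another setup where the S3 filesystem resolves the instance's IAM
>>> role; the class name is hypothetical and the bucket is the one from this
>>> thread):
>>>
>>> import org.apache.spark.SparkConf;
>>> import org.apache.spark.api.java.JavaRDD;
>>> import org.apache.spark.api.java.JavaSparkContext;
>>>
>>> public class S3RoleRead {
>>>     public static void main(String[] args) {
>>>         SparkConf conf = new SparkConf().setAppName("S3RoleRead");
>>>         JavaSparkContext sc = new JavaSparkContext(conf);
>>>         // No credentials in the URI: the instance's IAM role supplies
>>>         // temporary credentials, so nothing secret lands in the code.
>>>         JavaRDD<String> lines = sc.textFile("s3://yasemindeneme/deneme.txt");
>>>         System.out.println(lines.count());
>>>         sc.stop();
>>>     }
>>> }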
>>>
>>> Regards,
>>> Gourav Sengupta
>>>
>>> On Tue, Mar 15, 2016 at 2:38 PM, Sabarish Sasidharan <
>>> sabarish....@gmail.com> wrote:
>>>
>>>> You have a slash before the bucket name. It should be @<bucket name>.
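>>>>
>>>> For example, assuming sc is the JavaSparkContext from your gist and the
>>>> keys are placeholders:
>>>>
>>>> // Note: '@' is followed directly by the bucket name, no slash.
>>>> JavaRDD<String> lines = sc.textFile(
>>>>     "s3n://AWS_ACCESS_KEY_ID:AWS_SECRET_ACCESS_KEY@yasemindeneme/deneme.txt");
>>>> System.out.println(lines.count());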
>>>>
>>>> Regards
>>>> Sab
>>>> On 15-Mar-2016 4:03 pm, "Yasemin Kaya" <godo...@gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I am using Spark 1.6.0 standalone, and I want to read a text file from an
>>>>> S3 bucket named yasemindeneme; my file name is deneme.txt. But I am
>>>>> getting this error. Here is the simple code
>>>>> <https://gist.github.com/anonymous/6d174f8587f0f3fd2334>
>>>>> Exception in thread "main" java.lang.IllegalArgumentException: Invalid
>>>>> hostname in URI s3n://AWS_ACCESS_KEY_ID:AWS_SECRET_ACCESS_KEY@
>>>>> /yasemindeneme/deneme.txt
>>>>> at
>>>>> org.apache.hadoop.fs.s3.S3Credentials.initialize(S3Credentials.java:45)
>>>>> at
>>>>> org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.initialize(Jets3tNativeFileSystemStore.java:55)
>>>>>
>>>>>
>>>>> I tried two options:
>>>>> *sc.hadoopConfiguration()* and
>>>>> *sc.textFile("s3n://AWS_ACCESS_KEY_ID:AWS_SECRET_ACCESS_KEY@/yasemindeneme/deneme.txt/");*
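>>>>>
>>>>> The first option looked roughly like this (keys replaced with
>>>>> placeholders; hadoopConf is an org.apache.hadoop.conf.Configuration):
>>>>>
>>>>> Configuration hadoopConf = sc.hadoopConfiguration();
>>>>> hadoopConf.set("fs.s3n.awsAccessKeyId", "AWS_ACCESS_KEY_ID");
>>>>> hadoopConf.set("fs.s3n.awsSecretAccessKey", "AWS_SECRET_ACCESS_KEY");
>>>>> JavaRDD<String> lines = sc.textFile("s3n://yasemindeneme/deneme.txt");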
>>>>>
>>>>> I also did export AWS_ACCESS_KEY_ID= ..... and
>>>>> export AWS_SECRET_ACCESS_KEY= .....
>>>>> but there is no change in the error.
>>>>>
>>>>> Could you please help me with this issue?
>>>>>
>>>>>
>>>>> --
>>>>> hiç ender hiç
>>>>>
>>>>
>>>
>


-- 
hiç ender hiç
