Re: Riak-S2 javascript aws-sdk failing on multi-part uploads

2016-01-14 Thread Shunichi Shinohara
Hi John,

I tested multipart upload with aws-sdk-js, with the patch you mentioned,
against riak_cs (s2); uploads finished without errors for objects up to
1 GB. The environment is all local on my laptop, so latency is small.
The script I used is in [1].

As Luke mentioned, HAProxy would be the first thing to investigate.
Alternatively, if it's possible to get a packet capture, one can identify
which side closes the TCP connection actively by finding an anomalous TCP
packet, such as an RST or a premature (from the point of view of HTTP) FIN.
In that case, packets should be captured on the client box, the HAProxy box
and the riak cs box.

[1] https://gist.github.com/shino/ac7d56398557fb936899
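
For reference, a minimal aws-sdk-js sketch of this kind of multipart test
(endpoint, credentials, bucket and sizes are placeholders, not the actual
contents of the gist):

var AWS = require('aws-sdk');

var s3 = new AWS.S3({
  endpoint: 'http://127.0.0.1:8080',  // assumed local Riak CS listener
  s3ForcePathStyle: true,             // path-style requests, no bucket subdomain
  sslEnabled: false,
  signatureVersion: 'v2',             // Riak CS speaks signature v2
  accessKeyId: 'ADMIN-KEY',
  secretAccessKey: 'ADMIN-SECRET'
});

var body = Buffer.alloc(100 * 1024 * 1024);  // 100 MB of zeros

// upload() (ManagedUpload) switches to multipart automatically when the
// body is larger than partSize (default 5 MB).
s3.upload({Bucket: 'test', Key: 'big-object', Body: body},
          {partSize: 5 * 1024 * 1024, queueSize: 4},
          function (err, data) { console.log(err || data); });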

Thanks,
Shino


2016-01-14 8:51 GMT+09:00 Luke Bakken :
> Hi John,
>
> Thanks for the info. I'm very curious to see what's in the haproxy
> logs with regard to TCP.
> --
> Luke Bakken
> Engineer
> lbak...@basho.com
>
>
> On Wed, Jan 13, 2016 at 3:50 PM, John Fanjoy  wrote:
>> Luke,
>>
>> As a test I’ve already increased all timeouts to 5 minutes, but the failure
>> occurs in under 1 minute, so it doesn’t appear to be timeout related. I'll
>> change the logs to tcplog tomorrow and let you know if I find anything.
>>
>> Thanks
>>
>> John Fanjoy
>> Systems Engineer
>> jfan...@inetu.net
>>
>>
>>
>>
>>
>> On 1/13/16, 6:05 PM, "Luke Bakken"  wrote:
>>
>>>haproxy ships with some "short" default timeouts. If CyberDuck is able
>>>to upload these files faster than aws-sdk, it may be doing so within
>>>the default haproxy timeouts.
>>>
>>>You can also look at haproxy's log to see if you find any TCP
>>>connections that it has closed.
>>>--
>>>Luke Bakken
>>>Engineer
>>>lbak...@basho.com
>>>
>>>
>>>On Wed, Jan 13, 2016 at 3:02 PM, John Fanjoy  wrote:
 Luke,

 I may be able to do that. The only problem is that without haproxy I have no
 way to inject the CORS headers which the browser requires, but I may be able
 to write up a small nodejs app to get past that and see if it is somehow
 related to haproxy. The fact that these errors are not present when using
 Cyberduck, which is also talking to haproxy, leads me to believe that’s not
 the cause, but it’s definitely worth testing.

 --
 John Fanjoy
 Systems Engineer
 jfan...@inetu.net





 On 1/13/16, 5:55 PM, "Luke Bakken"  wrote:

>John -
>
>The following error indicates that the connection was unexpectedly
>closed by something outside of Riak while the chunk is uploading:
>
>{badmatch,{error,closed}}
>
>Is it possible to remove haproxy to test using the the aws-sdk?
>
>That is my first thought as to the cause of this issue, especially
>since writing to S3 works with the same code.
>
>--
>Luke Bakken
>Engineer
>lbak...@basho.com
>
>On Wed, Jan 13, 2016 at 2:46 PM, John Fanjoy  wrote:
>> Luke,
>>
>> Yes on both parts. To confirm cyberduck was using multi-part I actually 
>> tailed the console.log while it was uploading the file, and it uploaded 
>> the file in approx. 40 parts. Afterwards the parts were reassembled as 
>> you would expect. The AWS-SDK for javascript has an object called 
>> ManagedUpload which automatically switches to multi-part when the input 
>> is larger than the maxpartsize (default 5mb). I have confirmed that it 
>> is splitting the files up, but so far I’ve only ever seen one part get 
>> successfully uploaded before the others failed at which point it removes 
>> the upload (DELETE call) automatically. I also verified that the 
>> javascript I have in place does work with an actual AWS S3 bucket to 
>> rule out coding issues on my end and the same >400mb file was 
>> successfully uploaded to the bucket I created there without issue.
>>
>> A few things worth mentioning that I missed before. I am running riak-s2 
>> behind haproxy. Haproxy is handling ssl and enabling CORS for browser 
>> based requests. I have tested smaller files (~4-5mb) and GET requests 
>> using the browser client and everything works with my current haproxy 
>> configuration, but the larger files are failing, usually after 1 part is 
>> successfully uploaded. I can also list bucket contents and delete 
>> existing contents. The only feature that is not working appears to be 
>> the multi-part uploads. We are running centOS 7 (kernel version 
>> 3.10.0-327.4.4.el7.x86_64). Please let me know if you have any further 
>> questions.
>>
>> --
>> John Fanjoy
>> Systems Engineer
>> jfan...@inetu.net
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

Re: Riak S2 error

2016-01-06 Thread Shunichi Shinohara
Hi Michael,

Could you provide the full output of "riak config generate -l debug", as well as
riak.conf and advanced.config?

Thanks,
Shino

2016-01-07 9:05 GMT+09:00 Michael Walsh :
> All,
>
> I'm trying to set up a Riak S2 instance to integrate with KV and I'm getting
> the following cuttlefish error out of advanced.config. Any suggestions on
> where I'm going wrong?
>
> notes: erlang error was fixed. All paths are correct.
>
> Thanks!
>
> -Michael Walsh
>
> $riak config generate -l debug
>
> ...
>  {riak_kv,
>   [{add_paths,
>
> "/usr/lib64/riak-cs/lib/riak_cs-2.1.0/ebin"},
>{storage_backend,riak_cs_kv_multi_backend},
>{multi_backend_prefix_list,
> [{<<"0b:">>,be_blocks}]},
>{multi_backend_default,be_default},
>{multi_backend,
> [{be_default,riak_kv_eleveldb_backend,
>   [{total_leveldb_mem_percent,30},
>{data_root,"/var/lib/riak/leveldb"}]},
>  {be_blocks,riak_kv_bitcask_backend,
>
> [{data_root,"/var/lib/riak/bitcask"}]}]}]}) (lists.erl, line 1247)
>   in function  cuttlefish_escript:engage_cuttlefish/1
> (src/cuttlefish_escript.erl, line 375)
>   in call from cuttlefish_escript:generate/1 (src/cuttlefish_escript.erl,
> line 258)
>   in call from escript:run/2 (escript.erl, line 747)
>   in call from escript:start/1 (escript.erl, line 277)
>   in call from init:start_it/1
>   in call from init:start_em/1
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: RiakCS - AWS-CLI command works but Node.js API fails

2015-12-03 Thread Shunichi Shinohara
Thanks for the update. Then, please let me ask some questions:

- What was the actual error message?
- Could you confirm that your code / SDK generates network communication
  to Riak CS?

Shino

2015-12-01 17:18 GMT+09:00 Dattaraj J Rao <dattaraj...@yahoo.com>:
> Thanks Shino for your response.
>
> I tried providing the bucket url as endpoint - also tried setting s3endpoint
> to true. Same problem.
>
> Surprisingly the command line tool works fine.
>
> Regards,
> Dattaraj
> http://in.linkedin.com/in/dattarajrao
>
> -----Original Message-
> From: Shunichi Shinohara <sh...@basho.com>
> Date: Mon, 30 Nov 2015 10:26:48
> To: Dattaraj Rao<dattaraj...@yahoo.com>
> Cc: riak-users@lists.basho.com<riak-users@lists.basho.com>
> Subject: Re: RiakCS - AWS-CLI command works but Node.js API fails
>
> Hi Dattaraj,
>
> I'm not sure how AWS SDK JS works in detail, I'm wondering whether
> it's good to include
> S3/CS bucket name in endpoint string.  One example of the doc [1], it does not
> have bucket name part.
>
> [1] http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Endpoint.html
>
> Thanks,
> Shino
>
> 2015-11-28 23:13 GMT+09:00 Dattaraj Rao <dattaraj...@yahoo.com>:
>> Hello,
>> I am trying to access a RiakCS data store. I can access it using following
>> command in AWS-CLI:
>>
>> $ aws s3 --endpoint-url https://my-riak-address.io cp my-local-file
>> s3://service-instance-e689c062-dee6-45d7-90fe-39e63256915f
>>
>> However when I try connecting to same repository using Node JS and AWS-SDK
> bundle - it does not connect. Says endpoint not exposed.
>>
>> var AWS = require('aws-sdk');
>> AWS.config.update({accessKeyId: 'mykey', secretAccessKey: 'mysecret'});
>>
>> var ep = new
>> AWS.Endpoint('https://my-riak-address.io/service-instance-e689c062-dee6-45d7-90fe-39e63256915f');
>> var s3 = new AWS.S3({endpoint: ep});
>>
>>
>> Regards,
>> Dattaraj
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: RiakCS - AWS-CLI command works but Node.js API fails

2015-12-03 Thread Shunichi Shinohara
It seems to be related to a problem with server certificates that use a
wildcard domain, combined with client-side verification. A dot (".") in the
"wildcard part" ("service-instance-ee57eed6-6f95-4de3-b2c7-6b787c11e922.riakcs"
in your example) may make things complicated... but I'm sorry, I'm not
very good in this area.
A quick search turns up things that may or may not be related, e.g. [1]

[1] https://github.com/Automattic/knox/issues/153
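
If the client can be changed, one way to sidestep the wildcard mismatch is
path-style addressing, so the bucket never becomes a subdomain. A minimal
sketch with aws-sdk-js (credentials are placeholders; untested against your
deployment):

var AWS = require('aws-sdk');
AWS.config.update({accessKeyId: 'YOUR-KEY', secretAccessKey: 'YOUR-SECRET'});

var s3 = new AWS.S3({
  endpoint: 'https://riakcs.system.aws-usw02-pr.ice.predix.io/',
  s3ForcePathStyle: true  // request /<bucket>/<key> instead of <bucket>.<host>
});

s3.getObject({Bucket: 'service-instance-ee57eed6-6f95-4de3-b2c7-6b787c11e922',
              Key: 'lvision_1'},
             function (err, data) { console.log(err || data); });

With path-style requests the TLS hostname is just
riakcs.system.aws-usw02-pr.ice.predix.io, which the wildcard certificate
does match.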

Thanks,
Shino

2015-12-03 18:22 GMT+09:00 Dattaraj Rao <dattaraj...@yahoo.com>:
> Error message is:
>
> Error: Hostname/IP doesn't match certificate's altnames: "Host:
> service-instance-ee57eed6-6f95-4de3-b2c7-6b787c11e922.riakcs.system.aws-usw02-pr.ice.predix.io.
> is not in the cert's altnames: DNS:*.system.aws-usw02-pr.ice.predix.io,
> DNS:system.aws-usw02-pr.ice.predix.io"
>
> My code is - i also tried on tonicdev - same error:
> var AWS = require('aws-sdk');
>
> AWS.config.update({sslEnabled: true, accessKeyId: 'DJDTSN2GITBEL4QMPXKN',
> secretAccessKey: 'z2NeYH7R3VfNOOARvHHe5MAKM7pGkc66MWU_VA==', endpoint:
> 'https://riakcs.system.aws-usw02-pr.ice.predix.io/'});
>
> var s3 = new AWS.S3();
>
> s3.getObject({Bucket:
> 'service-instance-ee57eed6-6f95-4de3-b2c7-6b787c11e922', Key: 'lvision_1'},
> function (err, data) {
>   if(err)
> console.log("Error - ", err);
>   if(data)
> console.log("Data - ", data);
> });
>
> console.log('test');
>
>
> Regards,
> Dattaraj Jagdish Rao
> http://www.linkedin.com/in/dattarajrao
>
>
>
>
> On Thursday, December 3, 2015 2:43 PM, Shunichi Shinohara <sh...@basho.com>
> wrote:
>
>
> Thanks for update. Then, please let me ask some questions:
>
> - What was the actual error message?
> - Could you confirm your code / SDK generate network communication
>   to Riak CS?
>
> Shino
>
> 2015-12-01 17:18 GMT+09:00 Dattaraj J Rao <dattaraj...@yahoo.com>:
>> Thanks Shino for your response.
>>
>> I tried providing the bucket url as endpoint - also tried setting
>> s3endpoint to true. Same problem.
>>
>> Surprisingly the command line tool works fine.
>>
>> Regards,
>> Dattaraj
>> http://in.linkedin.com/in/dattarajrao
>>
>> -Original Message-
>> From: Shunichi Shinohara <sh...@basho.com>
>> Date: Mon, 30 Nov 2015 10:26:48
>> To: Dattaraj Rao<dattaraj...@yahoo.com>
>> Cc: riak-users@lists.basho.com<riak-users@lists.basho.com>
>> Subject: Re: RiakCS - AWS-CLI command works but Node.js API fails
>>
>> Hi Dattaraj,
>>
>> I'm not sure how AWS SDK JS works in detail, I'm wondering whether
>> it's good to include
>> S3/CS bucket name in endpoint string.  One example of the doc [1], it does
>> not
>> have bucket name part.
>>
>> [1] http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Endpoint.html
>>
>> Thanks,
>> Shino
>>
>> 2015-11-28 23:13 GMT+09:00 Dattaraj Rao <dattaraj...@yahoo.com>:
>>> Hello,
>>> I am trying to access a RiakCS data store. I can access it using
>>> following
>>> command in AWS-CLI:
>>>
>>> $ aws s3 --endpoint-url https://my-riak-address.io cp my-local-file
>>> s3://service-instance-e689c062-dee6-45d7-90fe-39e63256915f
>>>
>>> However when I try connecting to same repository using Node JS and
>>> AWS-SDK
>>> bundle - it does not connect. Says endpoint not exposed.
>>>
>>> var AWS = require('aws-sdk');
>>> AWS.config.update({accessKeyId: 'mykey', secretAccessKey: 'mysecret'});
>>>
>>> var ep = new
>>>
>>> AWS.Endpoint('https://my-riak-address.io/service-instance-e689c062-dee6-45d7-90fe-39e63256915f');
>>> var s3 = new AWS.S3({endpoint: ep});
>>>
>>>
>>> Regards,
>>> Dattaraj
>>>
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>>
>
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: RiakCS - AWS-CLI command works but Node.js API fails

2015-11-29 Thread Shunichi Shinohara
Hi Dattaraj,

I'm not sure how the AWS SDK for JS works in detail, but I wonder whether
it's a good idea to include the S3/CS bucket name in the endpoint string.
The example in the docs [1] does not have a bucket-name part.

[1] http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Endpoint.html
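
In other words, keep the endpoint bare and pass the bucket per request. A
minimal sketch (reusing the names from your own snippet; s3ForcePathStyle is
an extra suggestion to avoid bucket-subdomain DNS issues):

var AWS = require('aws-sdk');
AWS.config.update({accessKeyId: 'mykey', secretAccessKey: 'mysecret'});

// The endpoint holds only the host; the bucket goes into each request.
var ep = new AWS.Endpoint('https://my-riak-address.io');
var s3 = new AWS.S3({endpoint: ep, s3ForcePathStyle: true});

s3.listObjects({Bucket: 'service-instance-e689c062-dee6-45d7-90fe-39e63256915f'},
               function (err, data) { console.log(err || data); });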

Thanks,
Shino

2015-11-28 23:13 GMT+09:00 Dattaraj Rao :
> Hello,
> I am trying to access a RiakCS data store. I can access it using following
> command in AWS-CLI:
>
> $ aws s3 --endpoint-url https://my-riak-address.io cp my-local-file
> s3://service-instance-e689c062-dee6-45d7-90fe-39e63256915f
>
> However when I try connecting to same repository using Node JS and AWS-SDK
> bundle - it does not connect. Says endpoint not exposed.
>
> var AWS = require('aws-sdk');
> AWS.config.update({accessKeyId: 'mykey', secretAccessKey: 'mysecret'});
>
> var ep = new
> AWS.Endpoint('https://my-riak-address.io/service-instance-e689c062-dee6-45d7-90fe-39e63256915f');
> var s3 = new AWS.S3({endpoint: ep});
>
>
> Regards,
> Dattaraj
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: riak-cs sync buckets from S3 to Riak-cs

2015-11-16 Thread Shunichi Shinohara
Hi Alberto,

I didn't look into the boto implementation, but I suspect that the COPY
Object API does NOT work between different S3-like systems.
The actual interface definition of the API is [1]: the source bucket/key
is just a string in the x-amz-copy-source header. The request went to the
system that includes rk02.ejemplo.com in your example, but that system knows
nothing about the source bucket/key because it does not hold them.
The object contents have to be transferred in some other way, e.g. GET Object
from the source and PUT Object (or Multipart Upload for large objects) to the
target system.

[1] http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
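
A minimal aws-sdk-js sketch of that get-then-put approach (the same logic
applies to boto; hosts, ports and credentials are placeholders, everything
is buffered in memory, and listObjects pagination is ignored, so this is for
small buckets and small objects only):

var AWS = require('aws-sdk');

// One client per cluster.
var src = new AWS.S3({endpoint: 'http://rk04.ejemplo.com:8080',
                      accessKeyId: 'SRC-KEY', secretAccessKey: 'SRC-SECRET',
                      sslEnabled: false, s3ForcePathStyle: true});
var dst = new AWS.S3({endpoint: 'http://rk02.ejemplo.com:8080',
                      accessKeyId: 'DST-KEY', secretAccessKey: 'DST-SECRET',
                      sslEnabled: false, s3ForcePathStyle: true});

// GET each object from the source bucket, then PUT it to the target bucket.
src.listObjects({Bucket: 'testbucket'}, function (err, list) {
  if (err) return console.log(err);
  list.Contents.forEach(function (obj) {
    src.getObject({Bucket: 'testbucket', Key: obj.Key}, function (err, data) {
      if (err) return console.log(obj.Key, err);
      dst.putObject({Bucket: 'testbucket', Key: obj.Key, Body: data.Body},
                    function (err) { console.log(obj.Key, err || 'copied'); });
    });
  });
});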

Thanks,
Shino

2015-11-16 21:28 GMT+09:00 Alberto Ayllon :
> Hello.
>
> Thanks for your help Dmitri.
>
> Perhaps this is not the correct place to ask this, but maybe someone has had
> the same problem.
>
> I built a test environment with two nodes of RIAK-CS, not in a cluster, and
> I'm trying to move objects from one to the other, to a bucket with the same
> name (testbucket).
> I'm using BOTO, and the copy_key method, but it fails. I guess the problem
> is in the HEAD call: this call checks if the key exists in the destination
> bucket before making the PUT that copies the key.
>
> Here is the code I'm using.
>
> from boto.s3.key import Key
> from boto.s3.connection import S3Connection
> from boto.s3.connection import OrdinaryCallingFormat
>
> apikey04='GFR3O0HFPXQ-BWSXEMAG'
> secretkey04='eIiigR4Rov2O2kxuSHNW7WPoJE2KmrtMpzzqlg=='
>
> apikey02='J0TT_C9MJPWPGHW-KEWY'
> secretkey02='xcLOt3ANqyNJ0kAjP8Mxx68qr7kgyXG3eqJuMA=='
> cf=OrdinaryCallingFormat()
>
> conn04=S3Connection(aws_access_key_id=apikey04,aws_secret_access_key=secretkey04,
>
> is_secure=False,host='rk04.ejemplo.com',port=8080,calling_format=cf)
>
> conn02=S3Connection(aws_access_key_id=apikey02,aws_secret_access_key=secretkey02,
>
> is_secure=False,host='rk02.ejemplo.com',port=8080,calling_format=cf)
>
> bucket04=conn04.get_bucket('testbucket')
> bucket02=conn02.get_bucket('testbucket')
>
> rs04 = bucket04.list()
>
> for k in rs04:
>     print k.name
>     bucket02.copy_key(k.key, bucket04, k.key)
>
>
> When this script is executed it returns:
>
> Traceback (most recent call last):
>   File "s3_connect_2.py", line 38, in 
> bucket02.copy_key(k.key, bucket04, k.key)
>   File
> "/home/alberto/.virtualenvs/boto/local/lib/python2.7/site-packages/boto/s3/bucket.py",
> line 888, in copy_key
> response.reason, body)
> boto.exception.S3ResponseError: S3ResponseError: 404 Not Found
> <?xml version="1.0" encoding="UTF-8"?><Error><Code>NoSuchKey</Code><Message>The specified key
> does not exist.</Message>...</Error>Bucket: testbucket$
>
>
>
> The idea is copy all keys in testbucket from rk04.ejemplo.com, to testbucket
> in test02.ejemplo.com, maybe someone can help me.
>
>
> Thanks a lot.
>
>
> 2015-11-13 17:06 GMT+01:00 Dmitri Zagidulin :
>>
>> Hi Alberto,
>>
>> From what I understand, the state of the art in terms of migration of
>> objects from Amazon S3 to Riak CS is -- writing migration scripts.
>> Either as shell scripts (using s3cmd), or language-specific libraries like
>> boto (or even just the S3 SDKs).
>> And the scripts would consist of:
>> 1) get a list of the buckets you want to migrate
>> 2) List the keys in those buckets
>> 3) Migrate each object from AWS to CS.
>>
>> You're right that mounting buckets as filesystems is a (distant)
>> possibility, but we have not seen much successful use of those (though if
>> anybody's made that work, let us know).
>>
>>
>>
>> On Thu, Nov 12, 2015 at 12:40 PM, Alberto Ayllon 
>> wrote:
>>>
>>> Hello.
>>>
>>> I'm new using Riak and Riak-cs, I have installed a Riak-cs cluster with 4
>>> nodes and it works fine,
>>>
>>> Here is my question,  the company where I work has some buckets in Amazon
>>> s3, and I would like migrate objects from these buckets to our Riak-cs
>>> installation, as far as I know I can do it using S3FUSE or S3BACKER,
>>> mounting buckets as a filesystem, but would like avoid mount it as
>>> filesystem. I tried it with boto python library, using the copy_key method,
>>> but it doesn't work.
>>>
>>> Has anybody try with success synchronize buckets from AS3 to Riak-CS?
>>>
>>> Thanks.
>>>
>>> P:D: Excuse for my English.
>>>
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>>
>>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Mecking riakc_pb_socket:get/3

2015-11-16 Thread Shunichi Shinohara
Hi Michael,

Sorry for the very late response.
I tried mecking the module in an erl shell.
In the list-passing style for expected arguments, the Pid in the call must
be the same term you passed in the expect call; otherwise function_clause
is thrown.

> Pid = self().
<0.43.0>
> meck:expect(riakc_pb_socket, get, [Pid, {<<"t">>, <<"b">>}, <<"k">>], {ok, dummy}).
> riakc_pb_socket:get(Pid, {<<"t">>, <<"b">>}, <<"k">>).
{ok,dummy}
> Pid2 = spawn(fun() -> receive forever -> ok end end).
<0.57.0>
> riakc_pb_socket:get(Pid2, {<<"t">>, <<"b">>}, <<"k">>).
** exception error: function_clause

If you do not want to match the first argument, the fun style can be used.

9> meck:expect(riakc_pb_socket, get, fun(_Pid, {<<"t">>, <<"b">>}, <<"k">>) -> {ok, dummy} end).
10> riakc_pb_socket:get(Pid2, {<<"t">>, <<"b">>}, <<"k">>).
{ok,dummy}
11> riakc_pb_socket:get(not_actually_pid, {<<"t">>, <<"b">>}, <<"k">>).
{ok,dummy}

Thanks,
Shino

2015-11-03 0:22 GMT+09:00 Michael Martin :
> Hi all,
>
> I'm trying to meck riakc_pb_socket:get/3 in my eunit tests, and consistently
> get a function_clause error.
>
> The meck:expect looks like:
>
> meck:expect(riakc_pb_socket, get, [Pid, ?RIAK_TYPE_AND_BUCKET, ?TestOid],
> {ok, ?TestRiakObject}),
>
> where Pid is a pid, the macro ?RIAK_TYPE_AND_BUCKET evaluates to
> {<<"buckettype">>, <<"bucketname">>},
> and ?TestOid evalutes to
> <<"809876fd89ac405680b7251c2e57faa30004524100486220">>).
>
> With the exception of the Pid, the other arguments, as well as the expected
> response, are taken from a live,
> running system, where the call to riakc_pb_socket:get/3 works as expected.
>
> Looking at the source for riakc_pb_socket:get/3, I see that the -spec looks
> like:
>
> -spec get(pid(), bucket(), key()) -> {ok, riakc_obj()} | {error, term()}.
>
> and the types bucket() and key() are defined as:
> -type bucket() :: binary(). %% A bucket name.
> -type key() :: binary(). %% A key name.
>
> In reality, when using bucket types, shouldn't the bucket() type be a tuple?
> At any rate, changing it
> to tuple() didn't help my case any.
>
> Can anyone show me an example of a working meck:expect for
> riakc_pb_socket:get/3?
>
> Thanks,
> Michael
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: crash in riak-cs

2015-10-15 Thread Shunichi Shinohara
Hi Gautam,

Hmm... it seems like a bug. List Objects fails when the "delimiter" query
parameter is empty, like:
   http://foo.s3.amazonaws.com/?delimiter=

May I ask some questions?
- What client (s3cmd, s3curl, java sdk, etc...) do you use?
- Can you control the query parameter and remove it when its value is empty?
  (A sketch of the difference follows below.)
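
If the client is an AWS SDK, the fix is usually to omit the parameter rather
than pass an empty string. A hedged aws-sdk-js illustration (assuming the
SDK serializes an empty string as an empty query value; endpoint and
credentials are placeholders):

var AWS = require('aws-sdk');
var s3 = new AWS.S3({endpoint: 'http://127.0.0.1:8080',  // assumed CS listener
                     s3ForcePathStyle: true, sslEnabled: false,
                     accessKeyId: 'KEY', secretAccessKey: 'SECRET'});

// Delimiter: '' would produce "?delimiter=", the empty value that crashes.
s3.listObjects({Bucket: 'foo', Delimiter: ''}, function (err, data) {});

// Omitting the key sends no delimiter parameter at all.
s3.listObjects({Bucket: 'foo'}, function (err, data) {});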

Thanks,
Shino

2015-10-15 2:44 GMT+09:00 Gautam Pulla :
> Hello,
>
>
>
> I’m seeing the following error in the riak-cs logs on sending a list-buckets
> request. The request fails as well. Is this a known/fixed issue? I am
> running version 2.0.1-1.
>
>
>
> 2015-10-10 18:53:55.828 [error] <0.768.5> CRASH REPORT Process <0.768.5>
> with 1 neighbours exited with reason: bad argument in call to
> binary:match(<<"2011/06/10/12/clicks_tdpartnerid_v2_2011-06-10T12_2011-06-10T13.log.gz">>,
> [<<>>]) in riak_cs_list_objects_utils:extract_group/2 line 197 in
> gen_fsm:terminate/7 line 622
>
>
>
> Thanks,
>
> Gautam
>
>
>
> PS: This is a re-send of my first message sent on 10/10. The original
> message appears delayed in moderation, so I’m resending after joining the
> list.
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak CS - Unable to create/view bucket details using dragon disk

2015-10-04 Thread Shunichi Shinohara
Thanks for the detailed information.
You got a 403, so this is an authentication or authorization failure.
The first thing to check is the StringToSign on both sides. s3cmd with the
debug flag shows the StringToSign on the "DEBUG: SignHeaders" line. For Riak
CS, you can see it by setting the log level to debug, like:
   [debug] STS:
["GET","\n",[],"\n",[],"\n","\n",[["x-amz-date",":",<<"Mon, 05 Oct
2015 01:37:43 +">>,"\n"]],["/",[]]]
This is represented as Erlang-style iodata(), which is a deep list of
strings and chars.

If the StringToSign on both sides is the same and the secret key on both
sides is the same, authentication should succeed. Otherwise, it fails and
there will be a debug log:
   [debug] bad_auth
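
As a cross-check, a signature-v2 value is just base64(HMAC-SHA1(secret,
StringToSign)), so you can recompute what the client should have sent with a
small Node sketch (the key and StringToSign below are placeholders; paste in
your own values):

var crypto = require('crypto');

var secretKey = 'YOUR-SECRET-KEY';
var stringToSign = 'GET\n\n\n\nx-amz-date:Mon, 05 Oct 2015 01:37:43 +0000\n/';

// base64(HMAC-SHA1(secret, StringToSign)) is the value after the colon in
// the "Authorization: AWS <access_key>:<signature>" header.
var signature = crypto.createHmac('sha1', secretKey)
                      .update(stringToSign, 'utf8')
                      .digest('base64');
console.log(signature);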

Thanks,
Shino

2015-10-04 3:35 GMT+09:00 Johan Sommerfeld :
> Seems like the response is 403 Forbidden; I have no idea why. Did you get
> any stack trace this time? The response seems legit, and it looks like it
> managed to parse it?
>
> /J
>
> On 2 October 2015 at 18:35, G  wrote:
>> Hey Johan,
>>
>> I have executed s3cmd with --debug option.
>>
>> Please find the output.
>>
>> Test access with supplied credentials? [Y/n] Y
>> Please wait, attempting to list all buckets...
>> DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Fri, 02 Oct 2015 16:32:44
>> +\n/'
>> DEBUG: CreateRequest: resource[uri]=/
>> DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Fri, 02 Oct 2015 16:32:44
>> +\n/'
>> DEBUG: Processing request, please wait...
>> DEBUG: get_hostname(None): 172.31.44.38
>> DEBUG: format_uri(): http://172.31.44.38/?delimiter=/
>> DEBUG: Sending request method_string='GET',
>> uri='http://172.31.44.38/?delimiter=/', headers={'content-length': '0',
>> 'Authorization': 'AWS access-key:OeIcaJSH58eWgIlLiyQBgzN16Pc=',
>> 'x-amz-date': 'Fri, 02 Oct 2015 16:32:44 +'}, body=(0 bytes)
>> DEBUG: Response: {'status': 403, 'headers': {'date': 'Fri, 02 Oct 2015
>> 16:32:44 GMT', 'content-length': '154', 'content-type': 'application/xml',
>> 'server': 'Riak CS'}, 'reason': 'Forbidden', 'data': '<?xml version="1.0"
>> encoding="UTF-8"?><Error><Code>AccessDenied</Code><Message>Access
>> Denied</Message></Error>'}
>> DEBUG: S3Error: 403 (Forbidden)
>> DEBUG: HttpHeader: date: Fri, 02 Oct 2015 16:32:44 GMT
>> DEBUG: HttpHeader: content-length: 154
>> DEBUG: HttpHeader: content-type: application/xml
>> DEBUG: HttpHeader: server: Riak CS
>> DEBUG: ErrorXML: Code: 'AccessDenied'
>> DEBUG: ErrorXML: Message: 'Access Denied'
>> DEBUG: ErrorXML: Resource: None
>> DEBUG: ErrorXML: RequestId: None
>> ERROR: Test failed: 403 (AccessDenied): Access Denied
>>
>> Do you have any idea about this error?
>>
>>
>>
>>
>> --
>> View this message in context: 
>> http://riak-users.197444.n3.nabble.com/Riak-CS-Unable-to-create-view-bucket-details-using-dragon-disk-tp4033494p4033524.html
>> Sent from the Riak Users mailing list archive at Nabble.com.
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
>
> --
> Johan Sommerfeld
> tel: +46 (0) 70 769 15 73
> S2HC Sweden AB
> Litsbyvägen 56
> 187 46 Täby
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak CS - Unable to create/view bucket details using dragon disk

2015-09-30 Thread Shunichi Shinohara
I asked some questions, but the error stack does not answer any of them.

The s/Dragon disk/s3cmd/ version of my questions:

- Does s3cmd actually attempt to connect to Riak CS
  (instead of AWS S3)?
- If yes, how does the TCP communication go?
- If TCP is ok, how do the HTTP request/response look?
- If HTTP is ok, what is the HTTP response status?

2015-10-01 12:44 GMT+09:00 G :
> I get below error when I try to create a bucket using s3cmd
>
> Test access with supplied credentials? [Y/n] Y
> Please wait, attempting to list all buckets...
> Success. Your access key and secret key worked fine :-)
>
> Now verifying that encryption works...
> Success. Encryption and decryption worked fine :-)
>
> Save settings? [y/N] y
> Configuration saved to '/root/.s3cfg'
>
>
>
> root@ip-172-31-44-38:/etc/stanchion# s3cmd mb s3://test-bucket
>
> !
> An unexpected error has occurred.
>   Please report the following lines to:
>s3tools-b...@lists.sourceforge.net
> !
>
> Problem: ParseError: mismatched tag: line 1, column 165
> S3cmd:   1.1.0-beta3
>
> Traceback (most recent call last):
>   File "/usr/bin/s3cmd", line 1800, in <module>
> main()
>   File "/usr/bin/s3cmd", line 1741, in main
> cmd_func(args)
>   File "/usr/bin/s3cmd", line 158, in cmd_bucket_create
> response = s3.bucket_create(uri.bucket(), cfg.bucket_location)
>   File "/usr/share/s3cmd/S3/S3.py", line 263, in bucket_create
> response = self.send_request(request, body)
>   File "/usr/share/s3cmd/S3/S3.py", line 624, in send_request
> raise S3Error(response)
>   File "/usr/share/s3cmd/S3/Exceptions.py", line 48, in __init__
> tree = getTreeFromXml(response["data"])
>   File "/usr/share/s3cmd/S3/Utils.py", line 69, in getTreeFromXml
> tree = ET.fromstring(xml)
>   File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1300, in XML
> parser.feed(text)
>   File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1642, in feed
> self._raiseerror(v)
>   File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1506, in
> _raiseerror
> raise err
> ParseError: mismatched tag: line 1, column 165
>
> !
> An unexpected error has occurred.
> Please report the above lines to:
>s3tools-b...@lists.sourceforge.net
> !
>
>
>
>
> --
> View this message in context: 
> http://riak-users.197444.n3.nabble.com/Riak-CS-Unable-to-create-view-bucket-details-using-dragon-disk-tp4033494p4033502.html
> Sent from the Riak Users mailing list archive at Nabble.com.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: adding auth user not valid

2015-09-28 Thread Shunichi Shinohara
Hi Outback,

Sorry for the very late response.
It seems that riak-cs doesn't/can't communicate with the stanchion node.
There are a couple of possible reasons:

- admin.key and admin.secret must be set to the same values in both riak-cs
  and stanchion
- stanchion_host must be set to match the stanchion node's listen configuration
- the servers on which the riak-cs node and the stanchion node run must be
  able to communicate over TCP.

I think a packet capture (with tcpdump or wireshark or ...) is useful to
investigate the communication between riak-cs and stanchion.

Thanks,
Shino

2015-09-15 13:21 GMT+09:00 Outback Dingo :
> seems the docs with auth creds is invalid... or im still broken...
>
>
>
> curl -XPOST http://localhost:8080/riak-cs/user \
>   -H 'Content-Type: application/json' \
>   -d '{"email":"ad...@admin.com", "name":"admin"}'
>
>
> returns
>
> curl -H 'Content-Type: application/json' -X POST
> http://localhost:8080/riak-cs/user --data '{"email":"ad...@admin.com",
> "name":"admin"}'
>
> <html><head><title>500 Internal Server
> Error</title></head><body><h1>Internal Server Error</h1>The server
> encountered an error while processing this
> request</body></html>root@vmbsd:/usr/local/etc/riak-cs #
>
> and the log shows
> 2015-09-15 14:20:33.678 [error] <0.9846.4> Webmachine error at path
> "/riak-cs/user" :
> {error,{error,{badmatch,{error,malformed_xml}},[{riak_cs_s3_response,xml_error_code,1,[{file,"src/riak_cs_s3_response.erl"},{line,335}]},{riak_cs_s3_r
> esponse,error_response,1,[{file,"src/riak_cs_s3_response.erl"},{line,249}]},{riak_cs_wm_user,accept_json,2,[{file,"src/riak_cs_wm_user.erl"},{line,119}]},{webmachine_resource,resource_call,3,[{file,"src/webmachine_resource.erl"},{line,1
> 86}]},{webmachine_resource,do,3,[{file,"src/webmachine_resource.erl"},{line,142}]},{webmachine_decision_core,resource_call,...},...]}}
> in riak_cs_s3_response:xml_error_code/1 line 335
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: admin_key_undefined creating user with riak-cs escript

2015-09-23 Thread Shunichi Shinohara
Hi Gautam,

Sorry for the late response.
The function riak_cs_user:create_user() must be executed on the riak-cs node.
If you want to call it from an escript, you can use rpc:call to execute it
there.

Shino

On Sat, Sep 19, 2015 at 7:43 AM, Gautam Pulla
 wrote:
> Hello,
>
>
>
> I’d like to create a riak-cs user with a pre-set key/secret and am following
> the workaround from https://github.com/basho/riak_cs/issues/565. This is
> works when I type it into the interactive riak-cs console, however I want to
> script this in a non-interactive fashion.
>
>
>
> I’ve created a small erlang script and am using riak-cs escript to run it (I
> don’t know erlang at all so there are probably basic errors).
>
>
>
> This is what I get when I run the script:
>
>
>
> # riak-cs escript /tmp/createuser.erl
>
> {error,admin_key_undefined}
>
>
>
> Here is my script:
>
>
>
> # cat /tmp/createuser.erl
>
> #!/usr/bin/env escript
> -import(riak_cs_user, [create_user/4]).
>
> main(_) ->
>     R = riak_cs_user:create_user("test", "t...@ttd.com", "blah", "foo"),
>     Rs = lists:flatten(io_lib:format("~p", [R])),
>     io:format(Rs),
>     io:format("\n").
>
>
>
> Any pointers?
>
>
>
> Thanks!
>
> Gautam
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: riak-cs auth_module riak_cs_s3_passthru_auth startup problem

2015-09-23 Thread Shunichi Shinohara
Hi Kent,

riak_cs_s3_passthru_auth is for internal use and does not work as an
auth_module.
It would be possible to make it work as an auth_module, but that would take
some refactoring and (maybe) some additional implementation.

As a workaround, you can use a bucket policy to permit GET Object and
PUT Object for anonymous users.
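
A hedged sketch of such a policy, set through aws-sdk-js (the bucket name
and credentials are placeholders, and I have not verified this exact policy
document against this Riak CS version; Riak CS supports only a subset of the
S3 policy grammar):

var AWS = require('aws-sdk');
var s3 = new AWS.S3({endpoint: 'http://127.0.0.1:8080',  // assumed CS listener
                     s3ForcePathStyle: true, sslEnabled: false,
                     accessKeyId: 'OWNER-KEY', secretAccessKey: 'OWNER-SECRET'});

// Allow anonymous GET and PUT on every key in the bucket.
var policy = {
  Version: '2008-10-17',
  Statement: [{
    Sid: 'AnonymousReadWrite',
    Effect: 'Allow',
    Principal: {AWS: '*'},
    Action: ['s3:GetObject', 's3:PutObject'],
    Resource: 'arn:aws:s3:::mybucket/*'
  }]
};

s3.putBucketPolicy({Bucket: 'mybucket', Policy: JSON.stringify(policy)},
                   function (err, data) { console.log(err || data); });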

Thanks,
Shino

On Wed, Sep 23, 2015 at 4:40 AM, KENT JR., LARRY G  wrote:
> Using riak 2.1.1/riak-cs 2.0.1
> I want to use CS as a large object store without authentication.
> Riak-cs will not start when I use riak_cs_s3_passthru_auth
>
> auth_module = riak_cs_s3_passthru_auth
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: s3cmd error: access to bucket was denied

2015-08-31 Thread Shunichi Shinohara
Hmm.. it seems the situation is not good.

GET Bucket (aka List Objects) in Riak CS uses a kind of coverage operation
of Riak (KV), and it is timing out. There may be some logs on the riak
nodes. You have to look into the logs on all nodes, because all nodes
participate in coverage operations.

Could you try some further commands?
1. For a sanity check,
   s3cmd -c s3cfg1 ls
2. To try listing objects in a completely empty bucket,
   s3cmd -c s3cfg1 mb s3://
   s3cmd -c s3cfg1 ls s3://
3. Another coverage operation, without riak cs:
   curl -v 'http://127.0.0.1:8098/buckets/foobar/keys?keys=true'
   # 127.0.0.1 and 8098 should be changed to the HTTP host and port
   # (*NOT* the PB host/port) of one of the riak nodes

Thanks,
Shino

On Thu, Aug 27, 2015 at 6:41 PM, changmao wang <wang.chang...@gmail.com> wrote:
> Shunichi,
>
> Just now, I followed your direction to change fold_objects_for_list_keys to
> true, and restarted the riak-cs service.
>
>
> sed -i '/fold_objects_for_list_keys/ s/false/true/g'
> /etc/riak-cs/app.config; riak-cs restart
>
> After that, I run below command and got same error.
>
> s3cmd -c s3cfg1 ls s3://stock/XSHE/0/50/2008/XSHE-50-20080102 -d
>
> ...
> DEBUG: Unicodising 'ls' using UTF-8
> DEBUG: Unicodising 's3://stock/XSHE/0/50/2008/XSHE-50-20080102'
> using UTF-8
> DEBUG: Command: ls
> DEBUG: Bucket 's3://stock':
> DEBUG: String 'XSHE/0/50/2008/XSHE-50-20080102' encoded to
> 'XSHE/0/50/2008/XSHE-50-20080102'
> DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Thu, 27 Aug 2015 09:35:51
> +\n/stock/'
> DEBUG: CreateRequest: resource[uri]=/
> DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Thu, 27 Aug 2015 09:35:51
> +\n/stock/'
> DEBUG: Processing request, please wait...
> DEBUG: get_hostname(stock): stock.api2.cloud-datayes.com
> DEBUG: format_uri():
> http://stock.api2.cloud-datayes.com/?prefix=XSHE/0/50/2008/XSHE-50-20080102&delimiter=/
> WARNING: Retrying failed request:
> /?prefix=XSHE/0/50/2008/XSHE-50-20080102&delimiter=/ (timed out)
> WARNING: Waiting 3 sec...
> DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Thu, 27 Aug 2015 09:37:34
> +\n/stock/'
> DEBUG: Processing request, please wait...
> DEBUG: get_hostname(stock): stock.api2.cloud-datayes.com
> DEBUG: format_uri():
> http://stock.api2.cloud-datayes.com/?prefix=XSHE/0/50/2008/XSHE-50-20080102&delimiter=/
>
> below was last 10 errors from "/var/log/riak-cs/console.log"
> 2015-08-27 17:35:40.085 [error]
> <0.27146.26>@riak_cs_wm_common:maybe_create_user:223 Retrieval of user
> record for s3 failed. Reason: no_user_key
> 2015-08-27 17:37:34.744 [error]
> <0.27147.26>@riak_cs_wm_common:maybe_create_user:223 Retrieval of user
> record for s3 failed. Reason: no_user_key
> 2015-08-27 17:37:49.356 [error]
> <0.27146.26>@riak_cs_wm_common:maybe_create_user:223 Retrieval of user
> record for s3 failed. Reason: no_user_key
> 2015-08-27 17:39:49.249 [error]
> <0.27147.26>@riak_cs_wm_common:maybe_create_user:223 Retrieval of user
> record for s3 failed. Reason: no_user_key
> 2015-08-27 17:39:54.811 [error]
> <0.27146.26>@riak_cs_wm_common:maybe_create_user:223 Retrieval of user
> record for s3 failed. Reason: no_user_key
>
>
>
> On Wed, Aug 26, 2015 at 5:35 PM, Shunichi Shinohara <sh...@basho.com> wrote:
>>
>> Sorry for delay and thanks for further information.
>> The log of Riak CS shows timeout in list query to Riak:
>>2015-08-24 16:59:49.717 [error] <0.18027.7> gen_fsm <0.18027.7> in
>> state
>>waiting_list_keys terminated with reason: <<"timeout">>
>>
>> There are some kinds of possibility for timeout, e.g. hardware resource.
>> But if the timeout is occured by large number of objects (manifests) in
>> your
>> bucket, improvement from 1.4.0 may help.
>> It can be used by setting
>>{fold_objects_for_list_keys, true}
>> in your app.config's riak_cs section or executing
>>application:set_env(riak_cs, fold_objects_for_list_keys, true).
>> by attaching to riak-cs node.
>> For more information about it, please refer the original PR [1].
>>
>> [1] https://github.com/basho/riak_cs/pull/600
>> --
>> Shunichi Shinohara
>> Basho Japan KK
>>
>>
>> On Wed, Aug 26, 2015 at 2:04 PM, Stanislav Vlasov
>> <stanislav@gmail.com> wrote:
>> > 2015-08-25 11:03 GMT+05:00 changmao wang <wang.chang...@gmail.com>:
>> >> Any ideas on this issue?
>> >
>> > Can you check credentials with another client?
>> > s3curl, for example?
>> >
>> > I got some bugs in s3cmd after debian upgrade, so if another client
>> > works, than s3cmd has bug.
>

Re: s3cmd error: access to bucket was denied

2015-08-26 Thread Shunichi Shinohara
Sorry for the delay, and thanks for the further information.
The log of Riak CS shows a timeout in a list query to Riak:
   2015-08-24 16:59:49.717 [error] <0.18027.7> gen_fsm <0.18027.7> in state
   waiting_list_keys terminated with reason: <<"timeout">>

There are several possible causes for a timeout, e.g. hardware resources.
But if the timeout is caused by a large number of objects (manifests) in your
bucket, an improvement available from 1.4.0 may help.
It can be enabled by setting
   {fold_objects_for_list_keys, true}
in the riak_cs section of your app.config, or by executing
   application:set_env(riak_cs, fold_objects_for_list_keys, true).
after attaching to the riak-cs node.
For more information about it, please refer to the original PR [1].

[1] https://github.com/basho/riak_cs/pull/600
--
Shunichi Shinohara
Basho Japan KK


On Wed, Aug 26, 2015 at 2:04 PM, Stanislav Vlasov
stanislav@gmail.com wrote:
 2015-08-25 11:03 GMT+05:00 changmao wang wang.chang...@gmail.com:
 Any ideas on this issue?

 Can you check credentials with another client?
 s3curl, for example?

 I got some bugs in s3cmd after debian upgrade, so if another client
 works, than s3cmd has bug.

 On Mon, Aug 24, 2015 at 5:09 PM, changmao wang wang.chang...@gmail.com
 wrote:

 Please check attached file for details.

 On Mon, Aug 24, 2015 at 4:48 PM, Shunichi Shinohara sh...@basho.com
 wrote:

 Then, back to my first questions:
 Could you provide results following commands with s3cfg1?
 - s3cmd ls
 - s3cmd info s3://stock

 From log file, gc index queries timed out again and again.
 Not sure but it may be subtle situation...

 --
 Shunichi Shinohara
 Basho Japan KK


 On Mon, Aug 24, 2015 at 11:03 AM, changmao wang wang.chang...@gmail.com
 wrote:
  1. root@cluster1-hd10:~# grep cs_root_host /etc/riak-cs/app.config
   {cs_root_host, "api2.cloud-datayes.com"},
  root@cluster1-hd10:~# grep host_base .s3cfg
  host_base = api2.cloud-datayes.com
  root@cluster1-hd10:~# grep host_base s3cfg1
  host_base = api2.cloud-datayes.com
 
  2. please check attached file for s3cmd -d output and
  '/etc/riak-cs/console.log'.
 
 
  On Mon, Aug 24, 2015 at 9:54 AM, Shunichi Shinohara sh...@basho.com
  wrote:
 
  What is api2.cloud-datayes.com? Your s3cfg attached at the first one
  in this email thread
  does not include it. Please make sure you provide correct / consistent
  information to
  debug the issue.
 
  - What is your riak cs config cs_root_host?
  - What is your host_base in s3cfg that you USE?
  - What is your host_bucket in s3cfg?
 
  Also, please attach s3cmd debug output AND riak cs console log at the
  same
  time
  interval.
  --
  Shunichi Shinohara
  Basho Japan KK
 
 
  On Mon, Aug 24, 2015 at 10:42 AM, changmao wang
  wang.chang...@gmail.com
  wrote:
   I'm not sure who created it. This's a legacy production system.
  
   Just now, I used another s3cfg file to access it. Below is my
   output:
   root@cluster1-hd10:~# s3cmd -c s3cfg1 info
   s3://stock/XSHE/0/50/2008/XSHE-50-20080102
   s3://stock/XSHE/0/50/2008/XSHE-50-20080102 (object):
  File size: 397535
  Last mod:  Thu, 05 Dec 2013 03:19:00 GMT
  MIME type: binary/octet-stream
  MD5 sum:   feb2609ecfc9bb21549f2401a5c9477d
  ACL:   stockwrite: FULL_CONTROL
  ACL:   *anon*: READ
  URL:
  
   http://stock.s3.amazonaws.com/XSHE/0/50/2008/XSHE-50-20080102
   root@cluster1-hd10:~# s3cmd -c s3cfg1 ls
   s3://stock/XSHE/0/50/2008/XSHE-50-20080102 -d
   DEBUG: ConfigParser: Reading file 's3cfg1'
   DEBUG: ConfigParser: access_key->TE...17_chars...0
   DEBUG: ConfigParser: bucket_location->US
   DEBUG: ConfigParser: cloudfront_host->cloudfront.amazonaws.com
   DEBUG: ConfigParser: cloudfront_resource->/2010-07-15/distribution
   DEBUG: ConfigParser: default_mime_type->binary/octet-stream
   DEBUG: ConfigParser: delete_removed->False
   DEBUG: ConfigParser: dry_run->False
   DEBUG: ConfigParser: encoding->UTF-8
   DEBUG: ConfigParser: encrypt->False
   DEBUG: ConfigParser: follow_symlinks->False
   DEBUG: ConfigParser: force->False
   DEBUG: ConfigParser: get_continue->False
   DEBUG: ConfigParser: gpg_command->/usr/bin/gpg
   DEBUG: ConfigParser: gpg_decrypt->%(gpg_command)s -d --verbose
   --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o
   %(output_file)s %(input_file)s
   DEBUG: ConfigParser: gpg_encrypt->%(gpg_command)s -c --verbose
   --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o
   %(output_file)s %(input_file)s
   DEBUG: ConfigParser: gpg_passphrase->...-3_chars...
   DEBUG: ConfigParser: guess_mime_type->True
   DEBUG: ConfigParser: host_base->api2.cloud-datayes.com
   DEBUG: ConfigParser: host_bucket->%(bucket)s.api2.cloud-datayes.com
   DEBUG: ConfigParser: human_readable_sizes->False
   DEBUG: ConfigParser: list_md5->False
   DEBUG: ConfigParser: log_target_prefix->
   DEBUG: ConfigParser: preserve_attrs->True
   DEBUG: ConfigParser: progress_meter->True
   DEBUG: ConfigParser: proxy_host->10.21.136.81
   DEBUG: ConfigParser: proxy_port

Re: s3cmd error: access to bucket was denied

2015-08-24 Thread Shunichi Shinohara
Then, back to my first questions:
could you provide the results of the following commands with s3cfg1?
- s3cmd ls
- s3cmd info s3://stock

From the log file, GC index queries timed out again and again.
I'm not sure, but it may be a subtle situation...

--
Shunichi Shinohara
Basho Japan KK


On Mon, Aug 24, 2015 at 11:03 AM, changmao wang wang.chang...@gmail.com wrote:
 1. root@cluster1-hd10:~# grep cs_root_host /etc/riak-cs/app.config
   {cs_root_host, "api2.cloud-datayes.com"},
 root@cluster1-hd10:~# grep host_base .s3cfg
 host_base = api2.cloud-datayes.com
 root@cluster1-hd10:~# grep host_base s3cfg1
 host_base = api2.cloud-datayes.com

 2. please check attached file for s3cmd -d output and
 '/etc/riak-cs/console.log'.


 On Mon, Aug 24, 2015 at 9:54 AM, Shunichi Shinohara sh...@basho.com wrote:

 What is api2.cloud-datayes.com? Your s3cfg attached at the first one
 in this email thread
 does not include it. Please make sure you provide correct / consistent
 information to
 debug the issue.

 - What is your riak cs config cs_root_host?
 - What is your host_base in s3cfg that you USE?
 - What is your host_bucket in s3cfg?

 Also, please attach s3cmd debug output AND riak cs console log at the same
 time
 interval.
 --
 Shunichi Shinohara
 Basho Japan KK


 On Mon, Aug 24, 2015 at 10:42 AM, changmao wang wang.chang...@gmail.com
 wrote:
  I'm not sure who created it. This's a legacy production system.
 
  Just now, I used another s3cfg file to access it. Below is my output:
  root@cluster1-hd10:~# s3cmd -c s3cfg1 info
  s3://stock/XSHE/0/50/2008/XSHE-50-20080102
  s3://stock/XSHE/0/50/2008/XSHE-50-20080102 (object):
 File size: 397535
 Last mod:  Thu, 05 Dec 2013 03:19:00 GMT
 MIME type: binary/octet-stream
 MD5 sum:   feb2609ecfc9bb21549f2401a5c9477d
 ACL:   stockwrite: FULL_CONTROL
 ACL:   *anon*: READ
 URL:
  http://stock.s3.amazonaws.com/XSHE/0/50/2008/XSHE-50-20080102
  root@cluster1-hd10:~# s3cmd -c s3cfg1 ls
  s3://stock/XSHE/0/50/2008/XSHE-50-20080102 -d
  DEBUG: ConfigParser: Reading file 's3cfg1'
 DEBUG: ConfigParser: access_key->TE...17_chars...0
 DEBUG: ConfigParser: bucket_location->US
 DEBUG: ConfigParser: cloudfront_host->cloudfront.amazonaws.com
 DEBUG: ConfigParser: cloudfront_resource->/2010-07-15/distribution
 DEBUG: ConfigParser: default_mime_type->binary/octet-stream
 DEBUG: ConfigParser: delete_removed->False
 DEBUG: ConfigParser: dry_run->False
 DEBUG: ConfigParser: encoding->UTF-8
 DEBUG: ConfigParser: encrypt->False
 DEBUG: ConfigParser: follow_symlinks->False
 DEBUG: ConfigParser: force->False
 DEBUG: ConfigParser: get_continue->False
 DEBUG: ConfigParser: gpg_command->/usr/bin/gpg
 DEBUG: ConfigParser: gpg_decrypt->%(gpg_command)s -d --verbose
 --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o
 %(output_file)s %(input_file)s
 DEBUG: ConfigParser: gpg_encrypt->%(gpg_command)s -c --verbose
 --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o
 %(output_file)s %(input_file)s
 DEBUG: ConfigParser: gpg_passphrase->...-3_chars...
 DEBUG: ConfigParser: guess_mime_type->True
 DEBUG: ConfigParser: host_base->api2.cloud-datayes.com
 DEBUG: ConfigParser: host_bucket->%(bucket)s.api2.cloud-datayes.com
 DEBUG: ConfigParser: human_readable_sizes->False
 DEBUG: ConfigParser: list_md5->False
 DEBUG: ConfigParser: log_target_prefix->
 DEBUG: ConfigParser: preserve_attrs->True
 DEBUG: ConfigParser: progress_meter->True
 DEBUG: ConfigParser: proxy_host->10.21.136.81
 DEBUG: ConfigParser: proxy_port->8080
 DEBUG: ConfigParser: recursive->False
 DEBUG: ConfigParser: recv_chunk->4096
 DEBUG: ConfigParser: reduced_redundancy->False
 DEBUG: ConfigParser: secret_key->Hk...37_chars...=
 DEBUG: ConfigParser: send_chunk->4096
 DEBUG: ConfigParser: simpledb_host->sdb.amazonaws.com
 DEBUG: ConfigParser: skip_existing->False
 DEBUG: ConfigParser: socket_timeout->100
 DEBUG: ConfigParser: urlencoding_mode->normal
 DEBUG: ConfigParser: use_https->False
 DEBUG: ConfigParser: verbosity->WARNING
 DEBUG: Updating Config.Config encoding -> UTF-8
 DEBUG: Updating Config.Config follow_symlinks -> False
 DEBUG: Updating Config.Config verbosity -> 10
  DEBUG: Unicodising 'ls' using UTF-8
  DEBUG: Unicodising 's3://stock/XSHE/0/50/2008/XSHE-50-20080102'
  using UTF-8
  DEBUG: Command: ls
  DEBUG: Bucket 's3://stock':
  DEBUG: String 'XSHE/0/50/2008/XSHE-50-20080102' encoded to
  'XSHE/0/50/2008/XSHE-50-20080102'
  DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Mon, 24 Aug 2015 01:36:01
  +\n/stock/'
  DEBUG: CreateRequest: resource[uri]=/
  DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Mon, 24 Aug 2015 01:36:01
  +\n/stock/'
  DEBUG: Processing request, please wait...
  DEBUG: get_hostname(stock): stock.api2.cloud-datayes.com
  DEBUG: format_uri():
 
 http://stock.api2.cloud-datayes.com/?prefix=XSHE/0/50/2008/XSHE-50-20080102&delimiter=/
  WARNING: Retrying failed request:
  /?prefix=XSHE/0/50/2008

Re: s3cmd error: access to bucket was denied

2015-08-23 Thread Shunichi Shinohara
The error message in console.log shows there is no user with the access_key
in your s3cfg. Could you provide the results of the following commands?

- s3cmd ls
- s3cmd info s3://stock

If an error happens, the debug print switch -d of s3cmd might help.

[1] 
http://docs.basho.com/riakcs/latest/cookbooks/Account-Management/#Creating-a-User-Account

--
Shunichi Shinohara
Basho Japan KK


On Fri, Aug 21, 2015 at 10:00 AM, changmao wang wang.chang...@gmail.com wrote:
 Kazuhiro,

 Maybe that's not the key point. I'm using riak 1.4.2 and follow below docs
 to configure s3cfg file.
 http://docs.basho.com/riakcs/1.4.2/cookbooks/configuration/Configuring-an-S3-Client/#Sample-s3cmd-Configuration-File-for-Production-Use

 There's no signature_v2 parameter in s3cfg. However, I added this
 parameter to s3cfg and tried again with same errors.




 On Thu, Aug 20, 2015 at 10:31 PM, Kazuhiro Suzuki k...@basho.com wrote:

 Hi Changmao,

 It seems your s3cmd config should include 2 items:

 signature_v2 = True
 host_base  = api2.cloud-datayes.com

 Riak CS requires signature_v2 = True since Riak CS has not supported
 s3 authentication version 4 yet.
 You can find a sample configuration of s3cmd here to interact with Riak CS
 [1].

 [1]:
 http://docs.basho.com/riakcs/2.0.1/cookbooks/configuration/Configuring-an-S3-Client/#Sample-s3cmd-Configuration-File-for-Production-Use

 Thanks,

 On Thu, Aug 20, 2015 at 7:44 PM, changmao wang wang.chang...@gmail.com
 wrote:
  Just now, I used admin_key and admin_secret from
  /etc/riak-cs/app.config
  to run s3cmd -c s3-stock ls  s3://stock/XSHE/0/000600
  and I got the below error:
  ERROR: Access to bucket 'stock' was denied
 
  Below is abstract from /var/log/riak-cs/console.log
  2015-08-20 18:40:22.790 [debug]
  <0.28085.18>@riak_cs_s3_auth:calculate_signature:129 STS:
  ["GET","\n",[],"\n",[],"\n","\n",[["x-amz-date",":",<<"Thu, 20 Aug 2015
  10:40:22 +">>,"\n"]],["/stock/",[]]]
  2015-08-20 18:40:32.861 [error]
  0.28153.18@riak_cs_wm_common:maybe_create_user:223 Retrieval of user
  record for s3 failed. Reason: no_user_key
  2015-08-20 18:40:32.861 [debug]
  0.28153.18@riak_cs_wm_common:post_authentication:452 No user key
  2015-08-20 18:40:32.969 [debug] 0.28189.18@riak_cs_get_fsm:prepare:406
  Manifest:
 
  {lfs_manifest_v3,3,1048576,{pipeline,100,97,116,97,121,101,115,47,112,105,112,101,108,105,110,101,47,100,97,116,97,47,114,101,112,111,114,116,47,115,122,47,83,90,48,48,50,49,55,53,67,78,47,50,48,49,48,95,50,48,49,48,45,48,52,45,50,52,95,229,133,172,229,143,184,231,171,160,231,168,139,239,188,136,50,48,49,48,229,185,180,52,230,156,136,239,188,137,46,80,68,70},[],2013-12-16T23:01:12.000Z,192,71,150,153,181,181,77,61,186,41,100,32,5,91,197,166,255387,application/pdf,55,41,141,170,187,226,47,223,183,95,105,129,155,154,210,202,active,{1387,234872,598819},{1387,234872,918555},[],undefined,undefined,undefined,undefined,{acl_v2,{pipelinewrite,ef38ca69e145a40c1f8378633994192dace4539339315e6b42d7d1e6e2d2de51,AVG2DHZ4UNUYFAZ8F4WR},[{{pipelinewrite,ef38ca69e145a40c1f8378633994192dace4539339315e6b42d7d1e6e2d2de51},['FULL_CONTROL']},{'AllUsers',['READ']}],{1387,234872,598546}},[],undefined}
  2015-08-20 18:40:33.043 [debug]
  0.28189.18@riak_cs_lfs_utils:range_blocks:118 InitialBlock: 0,
  FinalBlock:
  0
  2015-08-20 18:40:33.043 [debug]
  0.28189.18@riak_cs_lfs_utils:range_blocks:120 SkipInitial: 0,
  KeepFinal:
  255387
  2015-08-20 18:40:33.050 [debug]
  0.28189.18@riak_cs_get_fsm:waiting_continue_or_stop:229 Block Servers:
  [0.28191.18]
  2015-08-20 18:40:33.079 [debug]
  0.28189.18@riak_cs_get_fsm:waiting_chunks:307 Retrieved block
  {192,71,150,153,181,181,77,61,186,41,100,32,5,91,197,166,0}
  2015-08-20 18:40:33.079 [debug]
  0.28189.18@riak_cs_get_fsm:perhaps_send_to_user:280 Returning block
  {192,71,150,153,181,181,77,61,186,41,100,32,5,91,197,166,0} to
  client
  2015-08-20 18:40:38.218 [error]
  0.28086.18@riak_cs_wm_common:maybe_create_user:223 Retrieval of user
  record for s3 failed. Reason: no_user_key
  2015-08-20 18:40:38.218 [debug]
  0.28086.18@riak_cs_wm_common:post_authentication:452 No user key
  2015-08-20 18:40:38.226 [debug] 0.28210.18@riak_cs_get_fsm:prepare:406
  Manifest:
 
  {lfs_manifest_v3,3,1048576,{pipeline,100,97,116,97,121,101,115,47,112,105,112,101,108,105,110,101,47,100,97,116,97,47,114,101,112,111,114,116,47,115,104,47,83,72,54,48,48,55,53,48,67,78,47,50,48,48,55,95,50,48,48,55,45,49,49,45,50,49,95,230,177,159,228,184,173,232,141,175,228,184,154,229,133,179,228,186,142,73,66,69,95,53,232,141,175,229,147,129,232,142,183,229,190,151,228,186,140,230,156,159,228,184,180,229,186,138,230,137,185,230,150,135,231,154,132,229,133,172,229,145,138,229,143,138,233,163,142,233,153,169,230,143,144,231,164,186,46,112,100,102},[],2013-12-15T09:04:48.000Z,201,247,249,158,95,22,64,242,161,118,253,64,120,187,205,105,89863,application/pdf,139,151,203,173,6,111,222,48,17,81,102,170,216,66,193,77,active,{1387,98288,545827},{1387,98288,618409},[],undefined,undefined,undefined,undefined,{acl_v2,{pipelinewrite

Re: s3cmd error: access to bucket was denied

2015-08-23 Thread Shunichi Shinohara
What is api2.cloud-datayes.com? The s3cfg attached to the first message
in this email thread does not include it. Please make sure you provide
correct / consistent information for debugging the issue.

- What is cs_root_host in your riak cs config?
- What is host_base in the s3cfg that you USE?
- What is host_bucket in your s3cfg?

Also, please attach the s3cmd debug output AND the riak cs console log for
the same time interval.
--
Shunichi Shinohara
Basho Japan KK


On Mon, Aug 24, 2015 at 10:42 AM, changmao wang wang.chang...@gmail.com wrote:
 I'm not sure who created it. This's a legacy production system.

 Just now, I used another s3cfg file to access it. Below is my output:
 root@cluster1-hd10:~# s3cmd -c s3cfg1 info
 s3://stock/XSHE/0/50/2008/XSHE-50-20080102
 s3://stock/XSHE/0/50/2008/XSHE-50-20080102 (object):
File size: 397535
Last mod:  Thu, 05 Dec 2013 03:19:00 GMT
MIME type: binary/octet-stream
MD5 sum:   feb2609ecfc9bb21549f2401a5c9477d
ACL:   stockwrite: FULL_CONTROL
ACL:   *anon*: READ
URL:
 http://stock.s3.amazonaws.com/XSHE/0/50/2008/XSHE-50-20080102
 root@cluster1-hd10:~# s3cmd -c s3cfg1 ls
 s3://stock/XSHE/0/50/2008/XSHE-50-20080102 -d
 DEBUG: ConfigParser: Reading file 's3cfg1'
 DEBUG: ConfigParser: access_key->TE...17_chars...0
 DEBUG: ConfigParser: bucket_location->US
 DEBUG: ConfigParser: cloudfront_host->cloudfront.amazonaws.com
 DEBUG: ConfigParser: cloudfront_resource->/2010-07-15/distribution
 DEBUG: ConfigParser: default_mime_type->binary/octet-stream
 DEBUG: ConfigParser: delete_removed->False
 DEBUG: ConfigParser: dry_run->False
 DEBUG: ConfigParser: encoding->UTF-8
 DEBUG: ConfigParser: encrypt->False
 DEBUG: ConfigParser: follow_symlinks->False
 DEBUG: ConfigParser: force->False
 DEBUG: ConfigParser: get_continue->False
 DEBUG: ConfigParser: gpg_command->/usr/bin/gpg
 DEBUG: ConfigParser: gpg_decrypt->%(gpg_command)s -d --verbose
 --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o
 %(output_file)s %(input_file)s
 DEBUG: ConfigParser: gpg_encrypt->%(gpg_command)s -c --verbose
 --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o
 %(output_file)s %(input_file)s
 DEBUG: ConfigParser: gpg_passphrase->...-3_chars...
 DEBUG: ConfigParser: guess_mime_type->True
 DEBUG: ConfigParser: host_base->api2.cloud-datayes.com
 DEBUG: ConfigParser: host_bucket->%(bucket)s.api2.cloud-datayes.com
 DEBUG: ConfigParser: human_readable_sizes->False
 DEBUG: ConfigParser: list_md5->False
 DEBUG: ConfigParser: log_target_prefix->
 DEBUG: ConfigParser: preserve_attrs->True
 DEBUG: ConfigParser: progress_meter->True
 DEBUG: ConfigParser: proxy_host->10.21.136.81
 DEBUG: ConfigParser: proxy_port->8080
 DEBUG: ConfigParser: recursive->False
 DEBUG: ConfigParser: recv_chunk->4096
 DEBUG: ConfigParser: reduced_redundancy->False
 DEBUG: ConfigParser: secret_key->Hk...37_chars...=
 DEBUG: ConfigParser: send_chunk->4096
 DEBUG: ConfigParser: simpledb_host->sdb.amazonaws.com
 DEBUG: ConfigParser: skip_existing->False
 DEBUG: ConfigParser: socket_timeout->100
 DEBUG: ConfigParser: urlencoding_mode->normal
 DEBUG: ConfigParser: use_https->False
 DEBUG: ConfigParser: verbosity->WARNING
 DEBUG: Updating Config.Config encoding -> UTF-8
 DEBUG: Updating Config.Config follow_symlinks -> False
 DEBUG: Updating Config.Config verbosity -> 10
 DEBUG: Unicodising 'ls' using UTF-8
 DEBUG: Unicodising 's3://stock/XSHE/0/50/2008/XSHE-50-20080102'
 using UTF-8
 DEBUG: Command: ls
 DEBUG: Bucket 's3://stock':
 DEBUG: String 'XSHE/0/50/2008/XSHE-50-20080102' encoded to
 'XSHE/0/50/2008/XSHE-50-20080102'
 DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Mon, 24 Aug 2015 01:36:01
 +\n/stock/'
 DEBUG: CreateRequest: resource[uri]=/
 DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Mon, 24 Aug 2015 01:36:01
 +\n/stock/'
 DEBUG: Processing request, please wait...
 DEBUG: get_hostname(stock): stock.api2.cloud-datayes.com
 DEBUG: format_uri():
 http://stock.api2.cloud-datayes.com/?prefix=XSHE/0/50/2008/XSHE-50-20080102&delimiter=/
 WARNING: Retrying failed request:
 /?prefix=XSHE/0/50/2008/XSHE-50-20080102&delimiter=/ ('')
 WARNING: Waiting 3 sec...
 DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Mon, 24 Aug 2015 01:37:05
 +\n/stock/'
 DEBUG: Processing request, please wait...
 DEBUG: get_hostname(stock): stock.api2.cloud-datayes.com
 DEBUG: format_uri():
 http://stock.api2.cloud-datayes.com/?prefix=XSHE/0/50/2008/XSHE-50-20080102&delimiter=/
 WARNING: Retrying failed request:
 /?prefix=XSHE/0/50/2008/XSHE-50-20080102&delimiter=/ ('')
 WARNING: Waiting 6 sec...
 




Re: s3cmd error: access to bucket was denied

2015-08-23 Thread Shunichi Shinohara
The result of s3cmd ls (aka, GET Service API) indicates there
is no bucket with name stock:

 root@cluster-s3-hd1:~# s3cmd ls
 2013-12-01 06:45  s3://test

Have you created it?

--
Shunichi Shinohara
Basho Japan KK


On Mon, Aug 24, 2015 at 10:14 AM, changmao wang wang.chang...@gmail.com wrote:
 Shunichi,

 Thanks for your reply. Below is my command result:
 root@cluster-s3-hd1:~# s3cmd ls
 2013-12-01 06:45  s3://test
 root@cluster-s3-hd1:~# s3cmd info s3://stock
 ERROR: Access to bucket 'stock' was denied
 root@cluster-s3-hd1:~# s3cmd info s3://stock -d
 DEBUG: ConfigParser: Reading file '/root/.s3cfg'
 DEBUG: ConfigParser: access_key->M2...17_chars...K
 DEBUG: ConfigParser: bucket_location->US
 DEBUG: ConfigParser: cloudfront_host->cloudfront.amazonaws.com
 DEBUG: ConfigParser: cloudfront_resource->/2010-07-15/distribution
 DEBUG: ConfigParser: default_mime_type->binary/octet-stream
 DEBUG: ConfigParser: delete_removed->False
 DEBUG: ConfigParser: dry_run->False
 DEBUG: ConfigParser: encoding->UTF-8
 DEBUG: ConfigParser: encrypt->False
 DEBUG: ConfigParser: follow_symlinks->False
 DEBUG: ConfigParser: force->False
 DEBUG: ConfigParser: get_continue->False
 DEBUG: ConfigParser: gpg_command->/usr/bin/gpg
 DEBUG: ConfigParser: gpg_decrypt->%(gpg_command)s -d --verbose
 --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o
 %(output_file)s %(input_file)s
 DEBUG: ConfigParser: gpg_encrypt->%(gpg_command)s -c --verbose
 --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o
 %(output_file)s %(input_file)s
 DEBUG: ConfigParser: gpg_passphrase->...-3_chars...
 DEBUG: ConfigParser: guess_mime_type->True
 DEBUG: ConfigParser: host_base->api2.cloud-datayes.com
 DEBUG: ConfigParser: host_bucket->%(bucket)s.api2.cloud-datayes.com
 DEBUG: ConfigParser: human_readable_sizes->False
 DEBUG: ConfigParser: list_md5->False
 DEBUG: ConfigParser: log_target_prefix->
 DEBUG: ConfigParser: preserve_attrs->True
 DEBUG: ConfigParser: progress_meter->True
 DEBUG: ConfigParser: proxy_host->10.21.136.81
 DEBUG: ConfigParser: proxy_port->8080
 DEBUG: ConfigParser: recursive->False
 DEBUG: ConfigParser: recv_chunk->4096
 DEBUG: ConfigParser: reduced_redundancy->False
 DEBUG: ConfigParser: secret_key->1u...37_chars...=
 DEBUG: ConfigParser: send_chunk->4096
 DEBUG: ConfigParser: simpledb_host->sdb.amazonaws.com
 DEBUG: ConfigParser: skip_existing->False
 DEBUG: ConfigParser: socket_timeout->10
 DEBUG: ConfigParser: urlencoding_mode->normal
 DEBUG: ConfigParser: use_https->False
 DEBUG: ConfigParser: verbosity->WARNING
 DEBUG: Updating Config.Config encoding -> UTF-8
 DEBUG: Updating Config.Config follow_symlinks -> False
 DEBUG: Updating Config.Config verbosity -> 10
 DEBUG: Unicodising 'info' using UTF-8
 DEBUG: Unicodising 's3://stock' using UTF-8
 DEBUG: Command: info
 DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Mon, 24 Aug 2015 01:11:09
 +0000\n/stock/?location'
 DEBUG: CreateRequest: resource[uri]=/?location
 DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Mon, 24 Aug 2015 01:11:09
 +0000\n/stock/?location'
 DEBUG: Processing request, please wait...
 DEBUG: get_hostname(stock): stock.api2.cloud-datayes.com
 DEBUG: format_uri(): http://stock.api2.cloud-datayes.com/?location
 DEBUG: Response: {'status': 403, 'headers': {'date': 'Mon, 24 Aug 2015
 01:11:09 GMT', 'content-length': '160', 'content-type': 'application/xml',
 'server': 'Riak CS'}, 'reason': 'Forbidden', 'data': '<?xml version="1.0"
 encoding="UTF-8"?><Error><Code>AccessDenied</Code><Message>Access
 Denied</Message><Resource>/stock</Resource><RequestId></RequestId></Error>'}
 DEBUG: S3Error: 403 (Forbidden)
 DEBUG: HttpHeader: date: Mon, 24 Aug 2015 01:11:09 GMT
 DEBUG: HttpHeader: content-length: 160
 DEBUG: HttpHeader: content-type: application/xml
 DEBUG: HttpHeader: server: Riak CS
 DEBUG: ErrorXML: Code: 'AccessDenied'
 DEBUG: ErrorXML: Message: 'Access Denied'
 DEBUG: ErrorXML: Resource: '/stock'
 DEBUG: ErrorXML: RequestId: None
 ERROR: Access to bucket 'stock' was denied

 On Mon, Aug 24, 2015 at 9:04 AM, Shunichi Shinohara sh...@basho.com wrote:

 The error message in console.log shows there is no user with the access_key
 in your s3cfg.
 Could you provide the results of the following commands?

 - s3cmd ls
 - s3cmd info s3://stock

 If an error happens, the debug print switch -d of s3cmd might help.

 [1]
 http://docs.basho.com/riakcs/latest/cookbooks/Account-Management/#Creating-a-User-Account

 --
 Shunichi Shinohara
 Basho Japan KK


 On Fri, Aug 21, 2015 at 10:00 AM, changmao wang wang.chang...@gmail.com
 wrote:
  Kazuhiro,
 
  Maybe that's not the key point. I'm using riak 1.4.2 and followed the
  docs below to configure the s3cfg file.
 
  http://docs.basho.com/riakcs/1.4.2/cookbooks/configuration/Configuring-an-S3-Client/#Sample-s3cmd-Configuration-File-for-Production-Use
 
  There's no signature_v2 parameter in that s3cfg. However, I added the
  parameter to s3cfg and tried again, with the same errors.
 
 
 
 
  On Thu, Aug 20, 2015 at 10:31 PM, Kazuhiro Suzuki k...@basho.com wrote:
 
  Hi Changmao,
 
  It seems your s3cmd config should include 2

Re: is it possible to start riak_kv as foreground process?

2015-07-12 Thread Shunichi Shinohara
Hi Roman,

FWIW, the -noinput option of erl [1] makes beam not read input and disables
the interactive shell. The runner scripts that rebar generates pass extra
args of the console command to erl (actually erlexec), so one can add the
option as:

  riak console -noinput

Note: some features cannot be used, e.g. the pid file or riak attach-direct.

[1] http://erlang.org/doc/man/erl.html

Thanks,
Shino



Re: Riak CS 2 backend config confusion

2015-06-03 Thread Shunichi Shinohara
Hi Toby,

Current versions for the Riak CS system are Riak CS 2.0.x, tested with
Riak 2.0.x. Sorry for the confusion, but the document you pointed to is
for that combination. You can use the configuration steps using
advanced.config in the doc.
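
For example, a minimal sketch of the riak side of that configuration,
assuming a Riak CS 2.0.x install path and default data directories (both
the ebin path and the data_root values will differ on your system):

  {riak_kv, [
      %% make the custom backend module shipped with Riak CS loadable
      {add_paths, ["/usr/lib/riak-cs/lib/riak_cs-2.0.1/ebin"]},
      {storage_backend, riak_cs_kv_multi_backend},
      %% route block data (keys prefixed "0b:") to bitcask,
      %% everything else to leveldb
      {multi_backend_prefix_list, [{<<"0b:">>, be_blocks}]},
      {multi_backend_default, be_default},
      {multi_backend, [
          {be_default, riak_kv_eleveldb_backend,
              [{data_root, "/var/lib/riak/leveldb"}]},
          {be_blocks, riak_kv_bitcask_backend,
              [{data_root, "/var/lib/riak/bitcask"}]}
      ]}
  ]}

This goes in the riak_kv section of advanced.config, per the doc above.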

We are now developing Riak CS 2.1, in which prefix_multi [1] and
cs_version will become available. They will reduce configuration
complexity, but please wait for a while :)

[1] https://github.com/basho/riak_kv/pull/1082
--
Shunichi Shinohara
Basho Japan KK


On Tue, Jun 2, 2015 at 2:55 PM, Toby Corkindale t...@dryft.net wrote:
 Hi
 I'm in the process of moving from a Riak 1.4.x w/CS installation to a Riak
 CS 2 installation, and I'm confused by the comments in riak.conf compared
 to the online documentation for Riak CS, regarding the backend.

 riak.conf contains the following lines:

 ## Specifies the storage engine used for Riak's key-value data
 ## and secondary indexes (if supported).
 ##
 ## Default: bitcask
 ##
 ## Acceptable values:
 ##   - one of: bitcask, leveldb, memory, multi, prefix_multi
 storage_backend = bitcask

 ## Simplify prefix_multi configuration for Riak CS. Keep this
 ## commented out unless Riak is configured for Riak CS.
 ##
 ## Acceptable values:
 ##   - an integer
 ## cs_version = 2

 What is that bit about cs_version and prefix_multi there for?
 There's no mention of that here:
 http://docs.basho.com/riakcs/latest/cookbooks/configuration/Configuring-Riak/


 Cheers,
 Toby



Re: RiakCS large file uploads fail with 403/AccessDenied and 400/InvalidDigest

2015-03-12 Thread Shunichi Shinohara
Hi Niels,

I made a PR to aws-sdk-js that fixes the 403 on Multipart Upload Part
requests [1]. I hope you can patch your aws-sdk installation with its diff.

[1] https://github.com/aws/aws-sdk-js/pull/530

Thanks,
Shino

On Wed, Mar 11, 2015 at 5:29 PM, Niels O niel...@gmail.com wrote:
 I was testing some more... and now the 400 issue (files from 1024-8191K) is
 solved... the 403 issue indeed is not yet solved (files >= 8192K)

 so indeed still an issue :-(

 here is a pcap of the 403 issue (with the -w option this time :-)
 http://we.tl/AFhslBBhGo

 On Wed, Mar 11, 2015 at 8:02 AM, Shunichi Shinohara sh...@basho.com wrote:

 Congrats :)

 Just my two cents,

 tcpdump 'host 172.16.3.21' -s 65535 -i eth0 > /opt/dump.pcap

tcpdump's option -w file.pcap is helpful because the dump contains
not only header information but also raw packet data.

How about the 403 - AccessDenied case? Is it also solved by the version
upgrade, or is it still an issue?

 Thanks,
 Shino





Re: RiakCS large file uploads fail with 403/AccessDenied and 400/InvalidDigest

2015-03-11 Thread Shunichi Shinohara
Congrats :)

Just my two cents,

 tcpdump 'host 172.16.3.21' -s 65535 -i eth0 > /opt/dump.pcap

tcpdump's option -w file.pcap is helpful because the dump contains
not only header information but also raw packet data.

How about the 403 - AccessDenied case? Is it also solved by the version
upgrade, or is it still an issue?

Thanks,
Shino



Re: RiakCS large file uploads fail with 403/AccessDenied and 400/InvalidDigest

2015-03-10 Thread Shunichi Shinohara
Niels,

I tested PUT Object with your script (slightly modified for keys etc.) and
it succeeded.
My environment:
- Riak CS, both 1.5 branch and develop branch
- node.js v0.10.25
- npm 1.3.10
- % npm ls
/home/shino/b/g/riak_cs-2.0
└─┬ aws-sdk@2.1.16
  ├─┬ xml2js@0.2.6
  │ └── sax@0.4.2
  └── xmlbuilder@0.4.2
- script https://gist.github.com/shino/36f02377a687f8312631

Maybe it is a version difference of node or the aws sdk(?)

Thanks,
Shino

On Wed, Mar 11, 2015 at 11:13 AM, Shunichi Shinohara sh...@basho.com wrote:
 ngrep does not show some bytes. tcpdump can dump network data in pcap format.

 ex: sudo tcpdump -s 65535 -w /tmp/out.pcap -i eth0 'port 8080'
 --
 Shunichi Shinohara
 Basho Japan KK


 On Tue, Mar 10, 2015 at 7:30 PM, Niels O niel...@gmail.com wrote:
 Hello Shino,

 I was uploading the attached file to riakCS, so the correct MD5 digest
 should be calculable.

 I don't know how to generate a pcap-formatted file on linux, but I made an
 ngrep capture which might also do the job? ...

 the ngrep below:

 interface: eth0 (172.16.0.0/255.255.248.0)
 filter: (ip or ip6) and ( host 172.16.3.21 )
 
 T 172.16.2.99:35151 -> 172.16.3.21:8080 [AP]
   PUT http://testje.s3.amazonaws.com:443/4096k HTTP/1.1..User-Agent:
 aws-sdk-nodejs/2.1.8 linux/v0.10.20..Content-Type:
 application/octet-stream..Content-MD5:
 0BsQLab2tMEzr8IWoS2m5w==..Content-Length: 4194304..Host: testje.s3.
   amazonaws.com..Expect: 100-continue..X-Amz-Date: Tue, 10 Mar 2015 10:16:36
 GMT..Authorization: AWS
 GHSEZVCH4NYD359IZUEX:E2Ur8OR+po687h1c6/PUBx6gYzQ=..Connection: close
 ##
 T 172.16.2.99:35151 -> 172.16.3.21:8080 [AP]
   PUT http://testje.s3.amazonaws.com:443/4096k HTTP/1.1..User-Agent:
 aws-sdk-nodejs/2.1.8 linux/v0.10.20..Content-Type:
 application/octet-stream..Content-MD5:
 0BsQLab2tMEzr8IWoS2m5w==..Content-Length: 4194304..Host: testje.s3.
   amazonaws.com..Expect: 100-continue..X-Amz-Date: Tue, 10 Mar 2015 10:16:36
 GMT..Authorization: AWS
 GHSEZVCH4NYD359IZUEX:E2Ur8OR+po687h1c6/PUBx6gYzQ=..Connection: close
 ##
 T 172.16.3.21:8080 -> 172.16.2.99:35151 [AP]
   HTTP/1.1 100 Continue
 ##
 T 172.16.2.99:35151 -> 172.16.3.21:8080 [A]

 2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2... (the rest of the request body, a long
 run of the repeated '2.' payload, is elided here; the capture is truncated
 in the archive)

Re: RiakCS large file uploads fail with 403/AccessDenied and 400/InvalidDigest

2015-03-10 Thread Shunichi Shinohara
ngrep does not show some bytes. tcpdump can dump network data in pcap format.

ex: sudo tcpdump -s 65535 -w /tmp/out.pcap -i eth0 'port 8080'
--
Shunichi Shinohara
Basho Japan KK


On Tue, Mar 10, 2015 at 7:30 PM, Niels O niel...@gmail.com wrote:
 Hello Shino,

 I was uploading the attached file to riakCS so the correct MD5 digest should
 be calculatable

 I don't know how to generate a pcap formatted file from linux, but I made an
 ngrep which might also do the job? ...

 the ngrep below:

 interface: eth0 (172.16.0.0/255.255.248.0)
 filter: (ip or ip6) and ( host 172.16.3.21 )
 
 T 172.16.2.99:35151 -> 172.16.3.21:8080 [AP]
   PUT http://testje.s3.amazonaws.com:443/4096k HTTP/1.1..User-Agent:
 aws-sdk-nodejs/2.1.8 linux/v0.10.20..Content-Type:
 application/octet-stream..Content-MD5:
 0BsQLab2tMEzr8IWoS2m5w==..Content-Length: 4194304..Host: testje.s3.
   amazonaws.com..Expect: 100-continue..X-Amz-Date: Tue, 10 Mar 2015 10:16:36
 GMT..Authorization: AWS
 GHSEZVCH4NYD359IZUEX:E2Ur8OR+po687h1c6/PUBx6gYzQ=..Connection: close
 ##
 T 172.16.2.99:35151 -> 172.16.3.21:8080 [AP]
   PUT http://testje.s3.amazonaws.com:443/4096k HTTP/1.1..User-Agent:
 aws-sdk-nodejs/2.1.8 linux/v0.10.20..Content-Type:
 application/octet-stream..Content-MD5:
 0BsQLab2tMEzr8IWoS2m5w==..Content-Length: 4194304..Host: testje.s3.
   amazonaws.com..Expect: 100-continue..X-Amz-Date: Tue, 10 Mar 2015 10:16:36
 GMT..Authorization: AWS
 GHSEZVCH4NYD359IZUEX:E2Ur8OR+po687h1c6/PUBx6gYzQ=..Connection: close
 ##
 T 172.16.3.21:8080 -> 172.16.2.99:35151 [AP]
   HTTP/1.1 100 Continue
 ##
 T 172.16.2.99:35151 -> 172.16.3.21:8080 [A]

 2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2... (a long run of the repeated '2.'
 payload, elided here)
 ##
 T 172.16.2.99:35151 -> 172.16.3.21:8080 [AP]

 2.2.2.2.2.2.2.2.2.2.2.2.2.2.2.2... (more of the repeated '2.' payload,
 truncated in the archive)

Re: RiakCS large file uploads fail with 403/AccessDenied and 400/InvalidDigest

2015-03-09 Thread Shunichi Shinohara
Hi Niels,

Thank you for your interest on Riak CS.

Some questions about 400 - InvalidDigest:

- Can you confirm which MD5 was correct for the log
  2015-02-11 16:34:17.854 [debug]
 <0.23568.18>@riak_cs_put_fsm:is_digest_valid:326
 Calculated = pIFX5fpeo7+sPPNjtSBWBg==,
 Reported = 0BsQLab2tMEzr8IWoS2m5w==
- What was the transfer-encoding? I want to confirm chunked encoding
was NOT used.
- Hopefully, packet capture (e.g. by pcap format) will be helpful to debug

Thanks,
Shino

On Tue, Mar 10, 2015 at 10:25 AM, Kota Uenishi k...@basho.com wrote:
 Sorry for being late. I thought I had replied to you, but it was in a very
 close thread where I think you're hitting the same problem as this:
 
  http://lists.basho.com/pipermail/riak-users_lists.basho.com/2015-February/016845.html
 
  Riak CS often includes '=' in the uploadId of multipart uploads while S3
  doesn't (no spec is described in the official documents).

 On Thu, Feb 12, 2015 at 12:41 AM, Niels O niel...@gmail.com wrote:
 Hello everyone,

 I have just installed riakcs and have the s3cmd and nodejs (the official
 amazon) plugin working.

  with the same credentials (accesskey & secret) I CAN upload big files
  with S3CMD but I CANNOT with the AWS/S3 nodejs plugin? (downloading very
  big files is no problem b.t.w.)


 with the nodejs plugin

 - until 992k, (I tested with 32 KiB increases) everything works
 - starting at 1024 KiB I get [400 InvalidDigest: The Content-MD5 you
 specified was invalid.]
 - from 8192 KiB and beyond I get [403 AccessDenied] back from riakcs.

 this while -again- with s3cmd I am able to upload files of over 1 GiB size
 easily  .. same machine, same creds

 any ideas?





 (below some riakcs debug logging from both the 400 and 403)  ...


 400 - InvalidDigest:

  2015-02-11 16:34:16.911 [debug]
  <0.17889.18>@riak_cs_s3_auth:calculate_signature:129 STS:
  [PUT,\n,0BsQLab2tMEzr8IWoS2m5w==,\n,application/octet-stream,\n,\n,[[x-amz-date,:,Wed,
  11 Feb 2015 15:34:16 GMT,\n]],[/testje/4096k,[]]]
  2015-02-11 16:34:17.854 [debug]
  <0.23568.18>@riak_cs_put_fsm:is_digest_valid:326 Calculated =
  pIFX5fpeo7+sPPNjtSBWBg==, Reported = 0BsQLab2tMEzr8IWoS2m5w==
  2015-02-11 16:34:17.860 [debug] <0.23568.18>@riak_cs_put_fsm:done:303
  Invalid digest in the PUT FSM


 403 - AccessDenied

  2015-02-11 16:36:00.448 [debug]
  <0.22889.18>@riak_cs_s3_auth:calculate_signature:129 STS:
  [POST,\n,[],\n,application/octet-stream,\n,\n,[[x-amz-date,:,Wed,
  11 Feb 2015 15:36:00 GMT,\n]],[/testje/8192k,?uploads]]
  2015-02-11 16:36:00.484 [debug]
  <0.23539.18>@riak_cs_s3_auth:calculate_signature:129 STS:
  [PUT,\n,sq5d2PIhC7I1xxT8Rp9cVg==,\n,application/octet-stream,\n,\n,[[x-amz-date,:,Wed,
  11 Feb 2015 15:36:00
  GMT,\n]],[/testje/8192k,?partNumber=1&uploadId=TXR2AuCeRDWwc2bviLPcOg==]]
  2015-02-11 16:36:00.484 [debug]
  <0.23539.18>@riak_cs_wm_common:post_authentication:471 bad_auth
  2015-02-11 16:36:00.494 [debug]
  <0.23543.18>@riak_cs_s3_auth:calculate_signature:129 STS:
  [DELETE,\n,[],\n,application/octet-stream,\n,\n,[[x-amz-date,:,Wed,
  11 Feb 2015 15:36:00
  GMT,\n]],[/testje/8192k,?uploadId=TXR2AuCeRDWwc2bviLPcOg==]]
  2015-02-11 16:36:00.494 [debug]
  <0.23543.18>@riak_cs_wm_common:post_authentication:471 bad_auth






 --
 Kota UENISHI / @kuenishi
 Basho Japan KK




Re: RIAK-CS Unable to create bucket using s3cmd - AccessDenied

2015-01-12 Thread Shunichi Shinohara
Hi Sellmy,

New versions of s3cmd use AWS v4 authentication [1], but Riak CS
does not support it yet [2].
For now, please add the following line to your .s3cfg file:
signature_v2 = True

[1] https://github.com/s3tools/s3cmd/issues/402
[2] https://github.com/basho/riak_cs/issues/897

Thanks,
Shino



Re: riak-cs-gc won't start

2014-07-11 Thread Shunichi Shinohara
It seems GC works well :)

 I try to delete those file and check the size on the server won't reduce

If you care about the disk usage of the server that riak is running on,
there is another factor: you need backend compaction to reduce disk usage.
That requires putting/deleting many objects or large objects and running GC.

For details of backend compaction, please refer to the documents [1] [2].
Riak CS uses, as you probably know, both bitcask and leveldb.

[1] http://docs.basho.com/riak/latest/ops/advanced/backends/bitcask
[2] http://docs.basho.com/riak/latest/ops/advanced/backends/leveldb/
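
For bitcask, compaction happens when data files are merged. As an
illustrative sketch only (the threshold values below are hypothetical, not
recommendations), merges can be made more aggressive in the bitcask section
of riak's app.config:

  {bitcask, [
      {data_root, "/var/lib/riak/bitcask"},
      %% merge when a data file is 40% fragmented (default: 60)
      {frag_merge_trigger, 40},
      %% merge when a data file holds 256MB of dead bytes (default: 512MB)
      {dead_bytes_merge_trigger, 268435456}
  ]}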

Shino



Re: riak-cs-gc won't start

2014-07-10 Thread Shunichi Shinohara
Hi Adhi,

Could you please specify the version of riak cs?

riak-cs-gc batch just triggers GC and does not wait for its completion.
After GC finishes, there will be a line in console.log that looks like

  Finished garbage collection: 0 seconds, 1 batch_count, 0
batch_skips, 1 manif_count, 1 block_count

Are there such lines in your console.log?

GC collects deleted objects only after leeway_seconds (default: 24 hours)
have passed since deletion.
If you deleted objects and ran GC within 24 hours, the objects will
not be collected.
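
If you just want to verify GC end-to-end in a test environment, here is a
sketch of a riak-cs app.config fragment with a shortened leeway (do not use
such a small value in production):

  {riak_cs, [
      %% collect deleted blocks after 60 seconds instead of 24 hours
      {leeway_seconds, 60}
  ]}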

Thanks,
Shino

On Fri, Jul 11, 2014 at 1:34 PM, Adhi Priharmanto adhi@gmail.com wrote:
 Hi,
 I'm new to riak-cs, so I tried to follow the documentation on the riak-cs
 web and set up a single riak-cs node.
 After successfully setting it up and storing data via the DragonDisk S3
 client, I tried to delete those files, but the size on the server won't
 reduce.
 Some search results on Google suggested running riak-cs-gc, but it looks
 like riak-cs-gc won't run

 root@riakcs:~# riak-cs-gc status
 There is no garbage collection in progress
   The current garbage collection interval is: 300
   Last run started at: undefined
   Next run scheduled for: 00440710T041127Z
 root@riakcs:~# riak-cs-gc batch
 Garbage collection batch started.
 root@riakcs:~# riak-cs-gc status
 There is no garbage collection in progress
   The current garbage collection interval is: 300
   Last run started at: undefined
   Next run scheduled for: 00440710T041225Z
 root@riakcs:~#



 no error in the log; any suggestions?


 --
 Cheers,

 Adhi Priharmanto






Re: Riak CS: Undeletable broken buckets

2014-07-07 Thread Shunichi Shinohara
Hi Toby,

There is a rarely used option, disable_local_bucket_check, in Riak CS.
I don't know whether it solves your case, but let me mention it.

To use it, first set it in app.config of riak-cs (or via
application:set_env/3 in the shell):
{riak_cs, [...
   {disable_local_bucket_check, true},
   ...]},
Then create the bucket as usual (e.g. s3cmd mb ...).

These steps solve one of the partial-update patterns that Andrew mentioned.

Thanks,
Shino

On Mon, Jul 7, 2014 at 2:12 PM, Toby Corkindale t...@dryft.net wrote:
 Hi Andrew,
 Thanks for the details.
 The Puppet config should never have let it be setup with
 allow_mult=false, but as this is a test cluster, it's possible
 something went awry there at some point.

 If it's not really a bug that needs reporting then I can let it go.
 Thanks,,
 Toby

 On 7 July 2014 12:21, Andrew Stone ast...@basho.com wrote:
 Hi Toby,

 We've seen this scenario before. It occurs because riak-cs stores bucket
 information in 2 places on disk:
   1) Inside the user record (for bucket permissions)
   2) Inside a global list of buckets, since each bucket must be unique

 What has most likely happened is that the bucket is no longer stored for the
 given user, but is still in the global list of buckets. It shows up in bucket
 lists, but the current user doesn't have permission to actually do anything
 with it. Essentially you have partially written (or partially deleted) data.
 I believe the only time we saw this was when Riak was configured with
 {allow_mult, false} which is an invalid setting when used with riak-cs.
 Riak-cs uses siblings intelligently to merge conflicting data, and without
 that it's possible to end up in these types of scenarios. Later versions of
 riak-cs should refuse to run with {allow_mult, false}. I'd check your riak
 config to see if that is the case here.

 We actually have scripts to detect and remove the bad buckets that we've
 used in support. We can probably get you a copy if you want. Just let me
 know. And make sure when running in production that allow_mult = true.
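
 For reference, a sketch of the relevant riak app.config fragment (assuming
 bucket properties are set via riak_core's default_bucket_props, as in a
 typical CS setup):

   {riak_core, [
       %% Riak CS relies on siblings to merge conflicting data;
       %% never run it with allow_mult set to false
       {default_bucket_props, [{allow_mult, true}]}
   ]}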

 -Andrew



 On Sun, Jul 6, 2014 at 9:59 PM, Toby Corkindale t...@dryft.net wrote:

 Hi,
 At some point we've managed to create a couple of buckets that don't
 work and can't be deleted (in a development/testing cluster, not
 production).
 They show up with both 's3cmd ls' or by querying the HTTP API for a
 user's buckets.
 However attempting to list files in the bucket, or removing the
 bucket, or recreating the bucket, fails.

 It's not in a production cluster so it's not a huge concern to me, but
 thought I'd report the bug here in case it's of interest to you.
 Riak 1.4.9-1 and Riak-CS 1.4.5-1 on Ubuntu 12.04 LTS.

 $ s3cmd ls
 2014-02-07 00:07  s3://test5403
 2013-12-13 07:25  s3://test9857

 $ s3cmd ls s3://test5403
 ERROR: Bucket 'test5403' does not exist
 tobyc@adonai:~$ s3cmd ls s3://test9857
 ERROR: Bucket 'test9857' does not exist

 $ s3cmd rb s3://test5403
 ERROR: Bucket 'test5403' does not exist
 Bucket 's3://test5403/' removed

 $ s3cmd ls
 2014-02-07 00:07  s3://test5403
 2013-12-13 07:25  s3://test9857

 $ s3cmd mb s3://test5403
 Bucket 's3://test5403/' created

 $ s3cmd ls s3://test5403
 ERROR: Bucket 'test5403' does not exist






 --
 Turning and turning in the widening gyre
 The falcon cannot hear the falconer
 Things fall apart; the center cannot hold
 Mere anarchy is loosed upon the world




Re: Download stops at certain megabyte boundary (RiakCS)

2013-09-10 Thread Shunichi Shinohara
Martin,

I think you have already read this page:
http://docs.basho.com/riakcs/latest/cookbooks/configuration/Configuring-Riak/#Setting-up-the-Proper-Riak-Backend

As it says, you can use add_paths to set load path.
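
For example, a sketch of the riak_kv section of riak's app.config, using
the ebin path from your find output below (other entries elided):

  {riak_kv, [
      %% let riak load riak_cs_kv_multi_backend shipped with Riak CS
      {add_paths, ["/usr/lib/riak-cs/lib/riak_cs-1.4.1/ebin"]},
      {storage_backend, riak_cs_kv_multi_backend}
  ]}
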
--
Shunichi Shinohara
Basho Japan KK


On Wed, Sep 11, 2013 at 2:05 AM, Martin Alpers martin-alp...@web.de wrote:
 Thank you Shino for your patient assistance.

 I did follow Seth's advice, and upgrading resulted in smooth downloads, as 
 stated in my reply to his mail.
 Additionally, I upgraded the one node I am working on to the very recent
 1.4.1 version. (Riak is 1.4.2)
 I still use the packages for Debian Squeeze.

 Outputs of console.log and crash.log are attached to this email.
 Although I am not an expert with these, the term invalid_storage_backend 
 appears in both of them, and riak used to start just fine before.

 I also tried to find the files:
 root@riak2:~# find /usr/lib/riak -name *kv_multi*
 /usr/lib/riak/lib/riak_kv-1.4.2-0-g61ac9d8/ebin/riak_kv_multi_backend.beam
 root@riak2:~# find /usr/lib/riak-cs -name *kv_multi*
 /usr/lib/riak-cs/lib/riak_cs-1.4.1/ebin/riak_cs_kv_multi_backend.beam

 So is adding -pz /usr/lib/riak-cs/lib/riak_cs-1.4.1/ebin/ to 
 /etc/riak/vm.args, as suggested here:
 http://lists.basho.com/pipermail/riak-users_lists.basho.com/2010-October/002211.html
 the right thing to do?

 Kind regards,
 Martin

 On 13/09/10.10:48:1378806537, Shunichi Shinohara wrote:
 Martin,

 Thank you for more information.
 I found a mis-configuration of riak's app.config. The backend of riak for
 Riak CS is riak_cs_kv_multi_backend (look out for _cs_). This is the custom
 riak_kv backend for Riak CS. The error of s3cmd ls was caused by this.

 As for the MB boundary stop, the issue
 https://github.com/basho/riak_cs/issues/613
 may be related. Although not a direct way to treat it, here are some points
 which may help:

 - As Seth said, upgrade to 1.4, which can reduce network communication
 between riak nodes
 - Set pb_backlog to 256 or higher in riak_api section of riak's app.config

 Regards,
 Shino


 --
 Greetings, Martin Alpers
 --
 martin-alp...@web.de
 Mobile: 0176/66185173, but I prefer typing to talking (:
 Jabber: martin.alp...@jabber.org
 My mails are signed using GPG to verify their origin; request my public key 
 (10216CFB).
 See also: http://apps.opendatacity.de/stasi-vs-nsa/



Re: Download stops at certain megabyte boundary (RiakCS)

2013-09-09 Thread Shunichi Shinohara
Martin,

Thank you for more information.
I found a mis-configuration of riak's app.config. The backend of riak for
Riak CS is riak_cs_kv_multi_backend (look out for _cs_). This is the custom
riak_kv backend for Riak CS. The error of s3cmd ls was caused by this.

As for the MB boundary stop, the issue https://github.com/basho/riak_cs/issues/613
may be related. Although not a direct way to treat it, here are some points which may help:

- As Seth said, upgrade to 1.4, which can reduce network communication
between riak nodes
- Set pb_backlog to 256 or higher in riak_api section of riak's app.config
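
For the second point, a sketch of the riak_api section of riak's app.config
(256 here is just the suggested starting value):

  {riak_api, [
      %% allow more pending protocol buffer connections
      {pb_backlog, 256}
  ]}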

Regards,
Shino



Re: Download stops at certain megabyte boundary (RiakCS)

2013-09-04 Thread Shunichi Shinohara
Hi Martin,

Thank you for sharing detailed information.

On Wed, Sep 4, 2013 at 11:20 PM, Martin Alpers martin-alp...@web.de wrote:
 * downloading from the frist node stopped at exactly 3MB, i.e. 3145728 bytes
 * downloading from the second node stopped at exactly 22MB
 [snip]

I also tried your URL and found the sudden stop at an MB boundary.
Could you please add some information:

- Riak version
- Riak CS version
- app.config and vm.args of Riak
- app.config and vm.args of Riak CS
- log of Riak and Riak CS if there is output around download

# If log files are large, please send them to me (sh...@basho.com).

 I have another problem that might or might not be related, and is much
 easier to describe:
 s3cmd ls s3://abc
 WARNING: Retrying failed request: /?delimiter=/ ('')

Logs might be useful for this issue as well.

 One remaining question:
 root@riak3:~# riak-admin diag
 ...
 [sh: 1: exec: sysctl: not found],

It may be better to separate this from the above ones ;)
Would you file this issue at https://github.com/basho/riak/issues ?

Regards,
Shino
