I'm not sure who created the bucket; this is a legacy production system. Just now, I used another "s3cfg" file to access it. Below is my output:

root@cluster1-hd10:~# s3cmd -c s3cfg1 info s3://stock/XSHE/0/000050/2008/XSHE-000050-20080102
s3://stock/XSHE/0/000050/2008/XSHE-000050-20080102 (object):
   File size: 397535
   Last mod:  Thu, 05 Dec 2013 03:19:00 GMT
   MIME type: binary/octet-stream
   MD5 sum:   feb2609ecfc9bb21549f2401a5c9477d
   ACL:       stockwrite: FULL_CONTROL
   ACL:       *anon*: READ
   URL:       http://stock.s3.amazonaws.com/XSHE/0/000050/2008/XSHE-000050-20080102

root@cluster1-hd10:~# s3cmd -c s3cfg1 ls s3://stock/XSHE/0/000050/2008/XSHE-000050-20080102 -d
DEBUG: ConfigParser: Reading file 's3cfg1'
DEBUG: ConfigParser: access_key->TE...17_chars...0
DEBUG: ConfigParser: bucket_location->US
DEBUG: ConfigParser: cloudfront_host->cloudfront.amazonaws.com
DEBUG: ConfigParser: cloudfront_resource->/2010-07-15/distribution
DEBUG: ConfigParser: default_mime_type->binary/octet-stream
DEBUG: ConfigParser: delete_removed->False
DEBUG: ConfigParser: dry_run->False
DEBUG: ConfigParser: encoding->UTF-8
DEBUG: ConfigParser: encrypt->False
DEBUG: ConfigParser: follow_symlinks->False
DEBUG: ConfigParser: force->False
DEBUG: ConfigParser: get_continue->False
DEBUG: ConfigParser: gpg_command->/usr/bin/gpg
DEBUG: ConfigParser: gpg_decrypt->%(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_encrypt->%(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_passphrase->...-3_chars...
DEBUG: ConfigParser: guess_mime_type->True
DEBUG: ConfigParser: host_base->api2.cloud-datayes.com
DEBUG: ConfigParser: host_bucket->%(bucket)s.api2.cloud-datayes.com
DEBUG: ConfigParser: human_readable_sizes->False
DEBUG: ConfigParser: list_md5->False
DEBUG: ConfigParser: log_target_prefix->
DEBUG: ConfigParser: preserve_attrs->True
DEBUG: ConfigParser: progress_meter->True
DEBUG: ConfigParser: proxy_host->10.21.136.81
DEBUG: ConfigParser: proxy_port->8080
DEBUG: ConfigParser: recursive->False
DEBUG: ConfigParser: recv_chunk->4096
DEBUG: ConfigParser: reduced_redundancy->False
DEBUG: ConfigParser: secret_key->Hk...37_chars...=
DEBUG: ConfigParser: send_chunk->4096
DEBUG: ConfigParser: simpledb_host->sdb.amazonaws.com
DEBUG: ConfigParser: skip_existing->False
DEBUG: ConfigParser: socket_timeout->100
DEBUG: ConfigParser: urlencoding_mode->normal
DEBUG: ConfigParser: use_https->False
DEBUG: ConfigParser: verbosity->WARNING
DEBUG: Updating Config.Config encoding -> UTF-8
DEBUG: Updating Config.Config follow_symlinks -> False
DEBUG: Updating Config.Config verbosity -> 10
DEBUG: Unicodising 'ls' using UTF-8
DEBUG: Unicodising 's3://stock/XSHE/0/000050/2008/XSHE-000050-20080102' using UTF-8
DEBUG: Command: ls
DEBUG: Bucket 's3://stock':
DEBUG: String 'XSHE/0/000050/2008/XSHE-000050-20080102' encoded to 'XSHE/0/000050/2008/XSHE-000050-20080102'
DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Mon, 24 Aug 2015 01:36:01 +0000\n/stock/'
DEBUG: CreateRequest: resource[uri]=/
DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Mon, 24 Aug 2015 01:36:01 +0000\n/stock/'
DEBUG: Processing request, please wait...
DEBUG: get_hostname(stock): stock.api2.cloud-datayes.com
DEBUG: format_uri(): http://stock.api2.cloud-datayes.com/?prefix=XSHE/0/000050/2008/XSHE-000050-20080102&delimiter=/
WARNING: Retrying failed request: /?prefix=XSHE/0/000050/2008/XSHE-000050-20080102&delimiter=/ ('')
WARNING: Waiting 3 sec...
DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Mon, 24 Aug 2015 01:37:05 +0000\n/stock/'
DEBUG: Processing request, please wait...
DEBUG: get_hostname(stock): stock.api2.cloud-datayes.com
DEBUG: format_uri(): http://stock.api2.cloud-datayes.com/?prefix=XSHE/0/000050/2008/XSHE-000050-20080102&delimiter=/
WARNING: Retrying failed request: /?prefix=XSHE/0/000050/2008/XSHE-000050-20080102&delimiter=/ ('')
WARNING: Waiting 6 sec...
....
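For reference, the s3cfg I'm testing with follows the sample from the Basho configuration page linked earlier in this thread; stripped down (keys redacted, signature_v2 added per Kazuhiro's suggestion) it looks like this:

    [default]
    # placeholder credentials: as I understand it this key pair must belong to a
    # Riak CS user that actually has rights on the bucket (the object ACL above
    # names "stockwrite"), not just any key
    access_key = <KEY_ID>
    secret_key = <SECRET_KEY>
    host_base = api2.cloud-datayes.com
    host_bucket = %(bucket)s.api2.cloud-datayes.com
    proxy_host = 10.21.136.81
    proxy_port = 8080
    use_https = False
    signature_v2 = True

If the real problem is that my key pair doesn't map to any Riak CS user (the "no_user_key" errors in console.log suggest that), I plan to use admin_key/admin_secret from /etc/riak-cs/app.config to look up which user owns the "stockwrite" grant through the Riak CS user API. If I read the Account Management docs correctly the listing endpoint is /riak-cs/users and the request has to be signed with the admin credentials, so something like the following via s3curl (please correct me if the path is different on 1.4.2; adjust for the proxy as needed):

    # assumes an "admin" id in ~/.s3curl configured with admin_key/admin_secret
    ./s3curl.pl --id admin -- -s http://api2.cloud-datayes.com/riak-cs/users

That should tell me which access_key/secret_key to put into s3cfg.
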
On Mon, Aug 24, 2015 at 9:17 AM, Shunichi Shinohara <sh...@basho.com> wrote:
> The result of "s3cmd ls" (aka, GET Service API) indicates there
> is no bucket with name "stock":
>
> > root@cluster-s3-hd1:~# s3cmd ls
> > 2013-12-01 06:45 s3://test
>
> Have you created it?
> --
> Shunichi Shinohara
> Basho Japan KK
>
> On Mon, Aug 24, 2015 at 10:14 AM, changmao wang <wang.chang...@gmail.com> wrote:
> > Shunichi,
> >
> > Thanks for your reply. Below is my command result:
> > root@cluster-s3-hd1:~# s3cmd ls
> > 2013-12-01 06:45 s3://test
> > root@cluster-s3-hd1:~# s3cmd info s3://stock
> > ERROR: Access to bucket 'stock' was denied
> > root@cluster-s3-hd1:~# s3cmd info s3://stock -d
> > DEBUG: ConfigParser: Reading file '/root/.s3cfg'
> > DEBUG: ConfigParser: access_key->M2...17_chars...K
> > DEBUG: ConfigParser: bucket_location->US
> > DEBUG: ConfigParser: cloudfront_host->cloudfront.amazonaws.com
> > DEBUG: ConfigParser: cloudfront_resource->/2010-07-15/distribution
> > DEBUG: ConfigParser: default_mime_type->binary/octet-stream
> > DEBUG: ConfigParser: delete_removed->False
> > DEBUG: ConfigParser: dry_run->False
> > DEBUG: ConfigParser: encoding->UTF-8
> > DEBUG: ConfigParser: encrypt->False
> > DEBUG: ConfigParser: follow_symlinks->False
> > DEBUG: ConfigParser: force->False
> > DEBUG: ConfigParser: get_continue->False
> > DEBUG: ConfigParser: gpg_command->/usr/bin/gpg
> > DEBUG: ConfigParser: gpg_decrypt->%(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
> > DEBUG: ConfigParser: gpg_encrypt->%(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
> > DEBUG: ConfigParser: gpg_passphrase->...-3_chars...
> > DEBUG: ConfigParser: guess_mime_type->True
> > DEBUG: ConfigParser: host_base->api2.cloud-datayes.com
> > DEBUG: ConfigParser: host_bucket->%(bucket)s.api2.cloud-datayes.com
> > DEBUG: ConfigParser: human_readable_sizes->False
> > DEBUG: ConfigParser: list_md5->False
> > DEBUG: ConfigParser: log_target_prefix->
> > DEBUG: ConfigParser: preserve_attrs->True
> > DEBUG: ConfigParser: progress_meter->True
> > DEBUG: ConfigParser: proxy_host->10.21.136.81
> > DEBUG: ConfigParser: proxy_port->8080
> > DEBUG: ConfigParser: recursive->False
> > DEBUG: ConfigParser: recv_chunk->4096
> > DEBUG: ConfigParser: reduced_redundancy->False
> > DEBUG: ConfigParser: secret_key->1u...37_chars...=
> > DEBUG: ConfigParser: send_chunk->4096
> > DEBUG: ConfigParser: simpledb_host->sdb.amazonaws.com
> > DEBUG: ConfigParser: skip_existing->False
> > DEBUG: ConfigParser: socket_timeout->10
> > DEBUG: ConfigParser: urlencoding_mode->normal
> > DEBUG: ConfigParser: use_https->False
> > DEBUG: ConfigParser: verbosity->WARNING
> > DEBUG: Updating Config.Config encoding -> UTF-8
> > DEBUG: Updating Config.Config follow_symlinks -> False
> > DEBUG: Updating Config.Config verbosity -> 10
> > DEBUG: Unicodising 'info' using UTF-8
> > DEBUG: Unicodising 's3://stock' using UTF-8
> > DEBUG: Command: info
> > DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Mon, 24 Aug 2015 01:11:09 +0000\n/stock/?location'
> > DEBUG: CreateRequest: resource[uri]=/?location
> > DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Mon, 24 Aug 2015 01:11:09 +0000\n/stock/?location'
> > DEBUG: Processing request, please wait...
> > DEBUG: get_hostname(stock): stock.api2.cloud-datayes.com
> > DEBUG: format_uri(): http://stock.api2.cloud-datayes.com/?location
> > DEBUG: Response: {'status': 403, 'headers': {'date': 'Mon, 24 Aug 2015 01:11:09 GMT', 'content-length': '160', 'content-type': 'application/xml', 'server': 'Riak CS'}, 'reason': 'Forbidden', 'data': '<?xml version="1.0" encoding="UTF-8"?><Error><Code>AccessDenied</Code><Message>Access Denied</Message><Resource>/stock</Resource><RequestId></RequestId></Error>'}
> > DEBUG: S3Error: 403 (Forbidden)
> > DEBUG: HttpHeader: date: Mon, 24 Aug 2015 01:11:09 GMT
> > DEBUG: HttpHeader: content-length: 160
> > DEBUG: HttpHeader: content-type: application/xml
> > DEBUG: HttpHeader: server: Riak CS
> > DEBUG: ErrorXML: Code: 'AccessDenied'
> > DEBUG: ErrorXML: Message: 'Access Denied'
> > DEBUG: ErrorXML: Resource: '/stock'
> > DEBUG: ErrorXML: RequestId: None
> > ERROR: Access to bucket 'stock' was denied
> >
> > On Mon, Aug 24, 2015 at 9:04 AM, Shunichi Shinohara <sh...@basho.com> wrote:
> >>
> >> The error message in console.log shows no user with the access_key in your
> >> s3cfg.
> >> Could you provide the results of the following commands?
> >>
> >> - s3cmd ls
> >> - s3cmd info s3://stock
> >>
> >> If an error happens, the debug print switch "-d" of s3cmd might help.
> >>
> >> [1]
> >> http://docs.basho.com/riakcs/latest/cookbooks/Account-Management/#Creating-a-User-Account
> >>
> >> --
> >> Shunichi Shinohara
> >> Basho Japan KK
> >>
> >> On Fri, Aug 21, 2015 at 10:00 AM, changmao wang <wang.chang...@gmail.com> wrote:
> >> > Kazuhiro,
> >> >
> >> > Maybe that's not the key point. I'm using Riak 1.4.2 and followed the docs below
> >> > to configure the "s3cfg" file:
> >> >
> >> > http://docs.basho.com/riakcs/1.4.2/cookbooks/configuration/Configuring-an-S3-Client/#Sample-s3cmd-Configuration-File-for-Production-Use
> >> >
> >> > There's no "signature_v2" parameter in "s3cfg". However, I added this
> >> > parameter to "s3cfg" and tried again with the same errors.
> >> >
> >> > On Thu, Aug 20, 2015 at 10:31 PM, Kazuhiro Suzuki <k...@basho.com> wrote:
> >> >>
> >> >> Hi Changmao,
> >> >>
> >> >> It seems your s3cmd config should include 2 items:
> >> >>
> >> >> signature_v2 = True
> >> >> host_base = api2.cloud-datayes.com
> >> >>
> >> >> Riak CS requires "signature_v2 = True" since Riak CS has not supported
> >> >> s3 authentication version 4 yet.
> >> >> You can find a sample configuration of s3cmd here to interact with Riak CS [1].
> >> >>
> >> >> [1]:
> >> >> http://docs.basho.com/riakcs/2.0.1/cookbooks/configuration/Configuring-an-S3-Client/#Sample-s3cmd-Configuration-File-for-Production-Use
> >> >>
> >> >> Thanks,
> >> >>
> >> >> On Thu, Aug 20, 2015 at 7:44 PM, changmao wang <wang.chang...@gmail.com> wrote:
> >> >> > Just now, I used "admin_key" and "admin_secret" from /etc/riak-cs/app.config
> >> >> > to run "s3cmd -c s3-stock ls s3://stock/XSHE/0/000600"
> >> >> > and I got the error below:
> >> >> > ERROR: Access to bucket 'stock' was denied
> >> >> >
> >> >> > Below is an excerpt from "/var/log/riak-cs/console.log":
> >> >> > 2015-08-20 18:40:22.790 [debug] <0.28085.18>@riak_cs_s3_auth:calculate_signature:129 STS: ["GET","\n",[],"\n",[],"\n","\n",[["x-amz-date",":",<<"Thu, 20 Aug 2015 10:40:22 +0000">>,"\n"]],["/stock/",[]]]
> >> >> > 2015-08-20 18:40:32.861 [error] <0.28153.18>@riak_cs_wm_common:maybe_create_user:223 Retrieval of user record for s3 failed. Reason: no_user_key
> >> >> > 2015-08-20 18:40:32.861 [debug] <0.28153.18>@riak_cs_wm_common:post_authentication:452 No user key
> >> >> > 2015-08-20 18:40:32.969 [debug] <0.28189.18>@riak_cs_get_fsm:prepare:406 Manifest:
> >> >> > {lfs_manifest_v3,3,1048576,{<<"pipeline">>,<<100,97,116,97,121,101,115,47,112,105,112,101,108,105,110,101,47,100,97,116,97,47,114,101,112,111,114,116,47,115,122,47,83,90,48,48,50,49,55,53,67,78,47,50,48,49,48,95,50,48,49,48,45,48,52,45,50,52,95,229,133,172,229,143,184,231,171,160,231,168,139,239,188,136,50,48,49,48,229,185,180,52,230,156,136,239,188,137,46,80,68,70>>},[],"2013-12-16T23:01:12.000Z",<<192,71,150,153,181,181,77,61,186,41,100,32,5,91,197,166>>,255387,<<"application/pdf">>,<<55,41,141,170,187,226,47,223,183,95,105,129,155,154,210,202>>,active,{1387,234872,598819},{1387,234872,918555},[],undefined,undefined,undefined,undefined,{acl_v2,{"pipelinewrite","ef38ca69e145a40c1f8378633994192dace4539339315e6b42d7d1e6e2d2de51","AVG2DHZ4UNUYFAZ8F4WR"},[{{"pipelinewrite","ef38ca69e145a40c1f8378633994192dace4539339315e6b42d7d1e6e2d2de51"},['FULL_CONTROL']},{'AllUsers',['READ']}],{1387,234872,598546}},[],undefined}
> >> >> > 2015-08-20 18:40:33.043 [debug] <0.28189.18>@riak_cs_lfs_utils:range_blocks:118 InitialBlock: 0, FinalBlock: 0
> >> >> > 2015-08-20 18:40:33.043 [debug] <0.28189.18>@riak_cs_lfs_utils:range_blocks:120 SkipInitial: 0, KeepFinal: 255387
> >> >> > 2015-08-20 18:40:33.050 [debug] <0.28189.18>@riak_cs_get_fsm:waiting_continue_or_stop:229 Block Servers: [<0.28191.18>]
> >> >> > 2015-08-20 18:40:33.079 [debug] <0.28189.18>@riak_cs_get_fsm:waiting_chunks:307 Retrieved block {<<192,71,150,153,181,181,77,61,186,41,100,32,5,91,197,166>>,0}
> >> >> > 2015-08-20 18:40:33.079 [debug] <0.28189.18>@riak_cs_get_fsm:perhaps_send_to_user:280 Returning block {<<192,71,150,153,181,181,77,61,186,41,100,32,5,91,197,166>>,0} to client
> >> >> > 2015-08-20 18:40:38.218 [error] <0.28086.18>@riak_cs_wm_common:maybe_create_user:223 Retrieval of user record for s3 failed. Reason: no_user_key
> >> >> > 2015-08-20 18:40:38.218 [debug] <0.28086.18>@riak_cs_wm_common:post_authentication:452 No user key
> >> >> > 2015-08-20 18:40:38.226 [debug] <0.28210.18>@riak_cs_get_fsm:prepare:406 Manifest:
> >> >> > {lfs_manifest_v3,3,1048576,{<<"pipeline">>,<<100,97,116,97,121,101,115,47,112,105,112,101,108,105,110,101,47,100,97,116,97,47,114,101,112,111,114,116,47,115,104,47,83,72,54,48,48,55,53,48,67,78,47,50,48,48,55,95,50,48,48,55,45,49,49,45,50,49,95,230,177,159,228,184,173,232,141,175,228,184,154,229,133,179,228,186,142,73,66,69,95,53,232,141,175,229,147,129,232,142,183,229,190,151,228,186,140,230,156,159,228,184,180,229,186,138,230,137,185,230,150,135,231,154,132,229,133,172,229,145,138,229,143,138,233,163,142,233,153,169,230,143,144,231,164,186,46,112,100,102>>},[],"2013-12-15T09:04:48.000Z",<<201,247,249,158,95,22,64,242,161,118,253,64,120,187,205,105>>,89863,<<"application/pdf">>,<<139,151,203,173,6,111,222,48,17,81,102,170,216,66,193,77>>,active,{1387,98288,545827},{1387,98288,618409},[],undefined,undefined,undefined,undefined,{acl_v2,{"pipelinewrite","ef38ca69e145a40c1f8378633994192dace4539339315e6b42d7d1e6e2d2de51","AVG2DHZ4UNUYFAZ8F4WR"},[{{"pipelinewrite","ef38ca69e145a40c1f8378633994192dace4539339315e6b42d7d1e6e2d2de51"},['FULL_CONTROL']},{'AllUsers',['READ']}],{1387,98288,545618}},[],undefined}
> >> >> > 2015-08-20 18:40:38.280 [debug] <0.28210.18>@riak_cs_lfs_utils:range_blocks:118 InitialBlock: 0, FinalBlock: 0
> >> >> > 2015-08-20 18:40:38.280 [debug] <0.28210.18>@riak_cs_lfs_utils:range_blocks:120 SkipInitial: 0, KeepFinal: 89863
> >> >> > 2015-08-20 18:40:38.280 [debug] <0.28210.18>@riak_cs_get_fsm:waiting_continue_or_stop:229 Block Servers: [<0.28212.18>]
> >> >> > 2015-08-20 18:40:38.343 [debug] <0.28210.18>@riak_cs_get_fsm:waiting_chunks:307 Retrieved block {<<201,247,249,158,95,22,64,242,161,118,253,64,120,187,205,105>>,0}
> >> >> > 2015-08-20 18:40:38.344 [debug] <0.28210.18>@riak_cs_get_fsm:perhaps_send_to_user:280 Returning block {<<201,247,249,158,95,22,64,242,161,118,253,64,120,187,205,105>>,0} to client
> >> >> >
> >> >> > On Thu, Aug 20, 2015 at 6:04 PM, Stanislav Vlasov <stanislav....@gmail.com> wrote:
> >> >> >>
> >> >> >> 2015-08-20 14:47 GMT+05:00 changmao wang <wang.chang...@gmail.com>:
> >> >> >>
> >> >> >> > what do you mean by the domain name of /etc/riak-cs/app.config and ~/.s3cfg?
> >> >> >> > I guess it's the cs_root_host parameter from /etc/riak-cs/app.config and
> >> >> >> > host_base from '~/.s3cfg'.
> >> >> >> > If so, they're the same: "api2.cloud-datayes.com".
> >> >> >>
> >> >> >> Yes, that is what I meant, but I see it is not your case.
> >> >> >> Try to set {level, debug} in the lager_file_backend section for console.log.
> >> >> >>
> >> >> >> > However, I can not ping this host from localhost.
> >> >> >>
> >> >> >> That's ok if you put the proper proxy_host and proxy_port in .s3cfg.
> >> >> >>
> >> >> >> > On Thu, Aug 20, 2015 at 5:23 PM, Stanislav Vlasov <stanislav....@gmail.com> wrote:
> >> >> >> >>
> >> >> >> >> 2015-08-20 13:57 GMT+05:00 changmao wang <wang.chang...@gmail.com>:
> >> >> >> >> > somebody watching on this?
> >> >> >> >>
> >> >> >> >> Did you set up the same domain in riak-cs.conf and in .s3cfg?
> >> >> >> >> I got such an error in this case.
> >> >> >> >>
> >> >> >> >> > On Wed, Aug 19, 2015 at 9:01 AM, changmao wang <wang.chang...@gmail.com> wrote:
> >> >> >> >> >>
> >> >> >> >> >> Matthew,
> >> >> >> >> >>
> >> >> >> >> >> I used s3cmd --configure to generate the ".s3cfg" config file and then
> >> >> >> >> >> accessed the RIAK service with s3cmd.
> >> >> >> >> >> The access_key and secret_key from ".s3cfg" are the same as admin_key and
> >> >> >> >> >> admin_secret from "/etc/riak-cs/app.config".
> >> >> >> >> >>
> >> >> >> >> >> However, I got the error below when using s3cmd to access one bucket:
> >> >> >> >> >>
> >> >> >> >> >> root@cluster-s3-hd1:~# s3cmd -c /root/.s3cfg ls s3://pipeline/article/111.pdf
> >> >> >> >> >> ERROR: Access to bucket 'pipeline' was denied
> >> >> >> >> >>
> >> >> >> >> >> By the way, I used Riak and Riak-CS 1.4.2 on Ubuntu. The current production
> >> >> >> >> >> cluster is a legacy system with no documentation for co-workers.
> >> >> >> >> >>
> >> >> >> >> >> The attached file is the "s3cfg" generated by "s3cmd --configure".
> >> >> >> >> >> --
> >> >> >> >> >> Amao Wang
> >> >> >> >> >> Best & Regards
> >> >> >> >> >
> >> >> >> >> > --
> >> >> >> >> > Amao Wang
> >> >> >> >> > Best & Regards
> >> >> >> >> >
> >> >> >> >> > _______________________________________________
> >> >> >> >> > riak-users mailing list
> >> >> >> >> > riak-users@lists.basho.com
> >> >> >> >> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> >> >> >> >>
> >> >> >> >> --
> >> >> >> >> Stanislav
> >> >> >> >
> >> >> >> > --
> >> >> >> > Amao Wang
> >> >> >> > Best & Regards
> >> >> >>
> >> >> >> --
> >> >> >> Stanislav
> >> >> >
> >> >> > --
> >> >> > Amao Wang
> >> >> > Best & Regards
> >> >> >
> >> >> > _______________________________________________
> >> >> > riak-users mailing list
> >> >> > riak-users@lists.basho.com
> >> >> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> >> >>
> >> >> --
> >> >> Kazuhiro Suzuki | Basho Japan KK
> >> >
> >> > --
> >> > Amao Wang
> >> > Best & Regards
> >> >
> >> > _______________________________________________
> >> > riak-users mailing list
> >> > riak-users@lists.basho.com
> >> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> >>
> >
> > --
> > Amao Wang
> > Best & Regards
>

--
Amao Wang
Best & Regards
_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com