An outbound handoff of partition error enotconn

2015-07-31 Thread changmao wang
Hi Riak-users,

I have found some errors related to partition handoff in
/etc/riak/log/errors. Details are below:

2015-07-30 16:04:33.643 [error]
<0.12872.15>@riak_core_handoff_sender:start_fold:262 ownership_transfer
transfer of riak_kv_vnode from 'riak@10.21.136.76'
45671926166590716193865151022383844364247891968 to 'riak@10.21.136.93'
45671926166590716193865151022383844364247891968 failed because of enotconn
2015-07-30 16:04:33.643 [error]
<0.197.0>@riak_core_handoff_manager:handle_info:289 An outbound handoff of
partition riak_kv_vnode 45671926166590716193865151022383844364247891968 was
terminated for reason: {shutdown,{error,enotconn}}



I searched for it with Google and found a related article; however, there is
no solution in it:
http://lists.basho.com/pipermail/riak-users_lists.basho.com/2014-October/016052.html
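
(To check whether the handoffs are still being retried, the standard
riak-admin inspection commands can be used; for example:)

riak-admin transfers      # partitions waiting to hand off, plus active transfers
riak-admin ring-status    # pending ownership changes and unreachable nodes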

-- 
Amao Wang
Best & Regards
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Fwd: riak-users post from wang.chang...@gmail.com requires approval

2015-08-04 Thread changmao wang
-- Forwarded message --
From: changmao wang 
Date: Tue, Aug 4, 2015 at 6:37 PM
Subject: Re: riak-users post from wang.chang...@gmail.com requires approval
To: Matthew Brender 


Hi riak-users,

Thanks for your reply. However, I have a problem with my riak cluster: it has
been stuck in "leaving" status for 7 days since I removed the 4 newly added
nodes.

root@cluster-s3-hd1:~# riak-admin member-status
================================= Membership ==================================
Status     Ring       Pending    Node
-------------------------------------------------------------------------------
leaving    10.9%      10.9%      'riak@10.21.136.91'
leaving     9.4%      10.9%      'riak@10.21.136.92'
leaving     7.8%      10.9%      'riak@10.21.136.93'
leaving     7.8%      10.9%      'riak@10.21.136.94'
valid      10.9%      10.9%      'riak@10.21.136.66'
valid      10.9%      10.9%      'riak@10.21.136.71'
valid      14.1%      10.9%      'riak@10.21.136.76'
valid      17.2%      12.5%      'riak@10.21.136.81'
valid      10.9%      10.9%      'riak@10.21.136.86'

Below is the error log from 'riak@10.21.136.81':
2015-08-04 01:04:03.883 [error]
<0.27021.262>@riak_core_handoff_sender:start_fold:262 ownership_transfer
transfer of riak_kv_vnode from 'riak@10.21.136.81'
296867520082839655260123481645494988367611297792 to 'riak@10.21.136.92'
296867520082839655260123481645494988367611297792 failed because of enotconn
2015-08-04 01:04:03.883 [error]
<0.195.0>@riak_core_handoff_manager:handle_info:289 An outbound handoff of
partition riak_kv_vnode 296867520082839655260123481645494988367611297792
was terminated for reason: {shutdown,{error,enotconn}}
2015-08-04 04:38:39.512 [error] <0.15702.1> Trailing data, discarding (2753
bytes)
2015-08-04 08:06:39.080 [error]
<0.26581.272>@riak_core_handoff_sender:start_fold:262 ownership_transfer
transfer of riak_kv_vnode from 'riak@10.21.136.81'
411047335499316445744786359201454599278231027712 to 'riak@10.21.136.93'
411047335499316445744786359201454599278231027712 failed because of enotconn
2015-08-04 08:06:39.081 [error]
<0.195.0>@riak_core_handoff_manager:handle_info:289 An outbound handoff of
partition riak_kv_vnode 411047335499316445744786359201454599278231027712
was terminated for reason: {shutdown,{error,enotconn}}
2015-08-04 18:07:53.040 [error]
<0.6973.342>@riak_api_pb_server:handle_info:141 Unrecognized message
{63685090,{error,timeout}}

Today I detached the network bond (we used bond mode 6/0) on several nodes of
the riak cluster. After that, "ifconfig" showed a decreased "dropped" packet
count: only 1~2 dropped packets during the last four hours.
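
(A sketch of how the counters can also be watched without ifconfig, assuming
the bond interface is named bond0:)

cat /sys/class/net/bond0/statistics/rx_dropped
cat /sys/class/net/bond0/statistics/tx_dropped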

However, I want to know: what is the current status of the Riak cluster?
I have started the riak-cs service on the four new nodes; that's OK, they can
serve requests.
How many hours will those four nodes take to leave the cluster?






On Tue, Aug 4, 2015 at 4:54 AM, Matthew Brender  wrote:

> Hey Amao,
>
> Excuse the delay on this message being posted. You hit our upper limit on
> size (see below). When possible, please link to a gist or another external
> service to prevent this from being filtered.
>
> Thanks for your patience!
> Matt
>
> *Matt Brender | Developer Advocacy Lead*
> Basho Technologies
> t: @mjbrender <https://twitter.com/mjbrender>
>
>
> On Thu, Jul 30, 2015 at 5:35 AM,  wrote:
>
>> As list administrator, your authorization is requested for the
>> following mailing list posting:
>>
>> List:    riak-users@lists.basho.com
>> From:    wang.chang...@gmail.com
>> Subject: riak handoff of partition error
>> Reason:  Message body is too big: 3105110 bytes with a limit of 40 KB
>>
>> At your convenience, visit:
>>
>> http://lists.basho.com/mailman/admindb/riak-users_lists.basho.com
>>
>> to approve or deny the request.
>>
>>
>> -- Forwarded message --
>> From: changmao wang 
>> To: riak-users@lists.basho.com
>> Cc:
>> Date: Thu, 30 Jul 2015 17:35:13 +0800
>> Subject: riak handoff of partition error
>> Hi Riak-users group,
>>
>> I have found some errors related to handoff of partition in
>> /etc/riak/log/errors.
>> Details are as below:
>>
>> 2015-07-30 16:04:33.643 [error]
>> <0.12872.15>@riak_core_handoff_sender:start_fold:262 ownership_transfer
>> transfer of riak_kv_vnode from 'riak@10.21.136.76'
>> 45671926166590716193865151022383844364247891968 to 'riak@10.21.136.93'
>> 45671926166590716193865151022383844364247891968 failed because of enotconn
>> 2015-07-30 16:04:33.643 [error]
>> <0.197.0>@riak_core_ha

Re: why leaving riak cluster so slowly and how to accelerate the speed

2015-08-09 Thread changmao wang
Are there any ideas on how to fix this issue?

Amao

On Fri, Aug 7, 2015 at 6:55 AM, changmao wang 
wrote:

> Dmitri,
>
> Thanks for your quick reply.
> my questions are as below:
> 1. What is the current status of the whole cluster? Is it doing data
> rebalancing?
> 2. There are so many errors in one node's error log. How should I handle
> them?
> 2015-08-05 01:38:59.717 [error]
> <0.23000.298>@riak_core_handoff_sender:start_fold:262 ownership_transfer
> transfer of riak_kv_vnode from 'riak@10.21.136.81'
> 525227150915793236229449236757414210188850757632 to 'riak@10.21.136.94'
> 525227150915793236229449236757414210188850757632 failed because of enotconn
> 2015-08-05 01:38:59.718 [error]
> <0.195.0>@riak_core_handoff_manager:handle_info:289 An outbound handoff of
> partition riak_kv_vnode 525227150915793236229449236757414210188850757632
> was terminated for reason: {shutdown,{error,enotconn}}
>
> During the last 5 days, there have been no changes in the "riak-admin
> member-status" output.
> 3. How can I accelerate the data rebalancing?
>
>
> On Fri, Aug 7, 2015 at 6:41 AM, Dmitri Zagidulin 
> wrote:
>
>> Ok, I think I understand so far. So what's the question?
>>
>> On Thursday, August 6, 2015, Changmao.Wang 
>> wrote:
>>
>>> Hi Riak users,
>>>
>>> Before adding new nodes, the cluster had only five nodes. The member
>>> list is as below:
>>> 10.21.136.66, 10.21.136.71, 10.21.136.76, 10.21.136.81, 10.21.136.86.
>>> We did not set up an http proxy for the cluster; only one node of the
>>> cluster provides the http service, so the CPU load is always high on
>>> that node.
>>>
>>> After that, I added four nodes (10.21.136.[91-94]) to the cluster.
>>> During the ring/data rebalancing, each node failed (riak stopped)
>>> because of a disk that was 100% full.
>>> I had set a multi-disk path for the "data_root" parameter in
>>> '/etc/riak/app.config'. Each disk is only 580MB in size.
>>> As you know, the bitcask storage engine does not support a multi-disk
>>> path: after one of the disks is 100% full, it cannot switch to the next
>>> idle disk, so the "riak" service goes down.
>>>
>>> After that, I removed the four newly added nodes from the active nodes
>>> with "riak-admin cluster leave 'riak@10.21.136.91'" (and the same for
>>> the other three), then stopped the "riak" service on the other active
>>> new nodes and reformatted those new nodes with LVM disk management
>>> (binding the 6 disks into one volume group).
>>> I replaced the "data_root" parameter with a single folder and then
>>> started the "riak" service again. After that, the cluster began the
>>> data rebalancing again. That's the whole story.
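>>>
>>> For reference, the relevant bitcask section of '/etc/riak/app.config'
>>> after the rebuild would look roughly like this (a sketch; the path is
>>> an example):
>>>
>>> {bitcask, [
>>>     %% single data_root on the LVM volume group; bitcask cannot
>>>     %% spread one vnode's data across multiple paths
>>>     {data_root, "/var/lib/riak/bitcask"}
>>> ]},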
>>>
>>>
>>> Amao
>>>
>>> --
>>> *From: *"Dmitri Zagidulin" 
>>> *To: *"Changmao.Wang" 
>>> *Sent: *Thursday, August 6, 2015 10:46:59 PM
>>> *Subject: *Re: why leaving riak cluster so slowly and how to accelerate
>>> the speed
>>>
>>> Hi Amao,
>>>
>>> Can you explain a bit more which steps you've taken, and what the
>>> problem is?
>>>
>>> Which nodes have been added, and which nodes are leaving the cluster?
>>>
>>> On Tue, Jul 28, 2015 at 11:03 PM, Changmao.Wang <
>>> changmao.w...@datayes.com> wrote:
>>>
>>>> Hi Riak user group,
>>>>
>>>>  I'm using riak and riak-cs 1.4.2. Last weekend, I added four nodes to a
>>>> cluster of 5 nodes. However, it failed with one of the disks 100% full.
>>>> As you know, the bitcask storage engine cannot support multiple folders.
>>>>
>>>> After that, I restarted "riak" and left the cluster with the commands
>>>> "riak-admin cluster leave" and "riak-admin cluster plan", and then the
>>>> commit.
>>>> However, riak has been doing KV rebalancing ever since I submitted the
>>>> leave command. I guess it is still working through the join process.
>>>>
>>>> Could you show us how to accelerate the leaving process? I have tuned
>>>> the "transfer-limit" parameter on all 9 nodes.
>>>>
>>>> Below is some command output:
>>>> riak-admin member-status
>>>> ================================= Membership ==================================
>>>> Status     Ring       Pending    Node
>>>> -------------------------------------------------------------------------------

riak-admin diag errors

2015-08-10 Thread changmao wang
Hi Riak-users,

When I run the riak-admin diag command, sometimes it's OK and sometimes it
reports runtime errors like the one below:

root@cluster1-hd13:~# riak-admin diag
RPC to 'riak@10.21.136.81' failed: {'EXIT',
    {function_clause,
        [{lists,zip,
             [['riak@10.21.136.94'],[]],
             [{file,"lists.erl"},{line,321}]},
         {lists,zip,2,[{file,"lists.erl"},{line,321}]},
         {lists,zip,2,[{file,"lists.erl"},{line,321}]},
         {riaknostic_check_search,check,0,
             [{file,"src/riaknostic_check_search.erl"},{line,49}]},
         {riaknostic_check,check,1,
             [{file,"src/riaknostic_check.erl"},{line,74}]},
         {riaknostic,'-run/1-fun-1-',2,
             [{file,"src/riaknostic.erl"},{line,106}]},
         {lists,foldl,3,[{file,"lists.erl"},{line,1197}]},
         {riaknostic,run,1,
             [{file,"src/riaknostic.erl"},{line,105}]}]}}
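
(The function_clause above is lists:zip/2 being called with lists of different
lengths; a minimal reproduction in an Erlang shell, with the node name taken
from the trace:)

1> lists:zip(['riak@10.21.136.94'], []).
** exception error: no function clause matching
                    lists:zip(['riak@10.21.136.94'],[]) (lists.erl, line 321)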

By the way, I'm using Riak 1.4.2 on Ubuntu with 9 nodes (four nodes were
added last month).
Do you want to know any more background?

-- 
Amao Wang
Best & Regards


Re: why leaving riak cluster so slowly and how to accelerate the speed

2015-08-11 Thread changmao wang
1. About backing up the four new nodes and then using 'riak-admin
force-replace': what will the status of the newly added nodes be?
As you know, we want to replace one of the leaving nodes.

2. What is the risk of 'riak-admin force-remove' of 'riak@10.21.136.91'
without a backup?
As you know, the node (riak@10.21.136.91) is currently a member of the
cluster and holds almost 2.5TB of data, maybe 10 percent of the whole cluster.
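
(For safety, a minimal backup sketch before any force-remove; the paths are
examples, and riak should be stopped while copying bitcask files:)

riak stop
tar czf /backup/riak-10.21.136.91-data.tar.gz /var/lib/riak
riak start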



On Tue, Aug 11, 2015 at 7:32 PM, Dmitri Zagidulin 
wrote:

> 1. How to force leave "leaving"'s nodes without data loss?
>
> This depends on - did you back up the data directory of the 4 new nodes,
> before you reformatted them?
> If you backed them up (and then restored the data directory once you
> reformatted them), you can try:
>
> riak-admin force-replace 'riak@10.21.136.91' 'riak@<whatever the new address is for that node>'
> (same for the other 3)
>
> If you did not back up those nodes, the only thing you can do is force
> them to leave, and then join the new ones. So, for each of the 4:
>
> riak-admin force-remove 'riak@10.21.136.91' 'riak@10.21.136.66'
> (same for the other 3)
>
> In either case, after force-replacing or force-removing, you have to join
> the new nodes to the cluster, before you commit.
>
> riak-admin join 'riak@<new node>' 'riak@10.21.136.66'
> (same for the other 3)
> and finally:
> riak-admin cluster plan
> riak-admin cluster commit
>
> As for the error: the reason you're seeing it is that the other nodes
> can't contact the 4 that are supposed to be leaving (since you wiped them).
> The amount of time that has passed doesn't matter; the cluster will wait
> for those nodes to leave indefinitely, unless you force-remove or
> force-replace.
>
>
>
> On Tue, Aug 11, 2015 at 1:32 AM, changmao wang 
> wrote:
>
>> HI Dmitri,
>>
>> For your question,
>> 3) Re-formatted those four nodes and re-installed Riak. Here is where it
>> gets tricky though. Several questions for you:
>> - Did you attempt to re-join those 4 reinstalled nodes into the cluster?
>> What was the output of the cluster join and cluster plan commands?
>> - Did the IP address change, after they were reformatted? If so, you
>> probably need to use something like 'reip' at this point:
>> http://docs.basho.com/riak/latest/ops/running/tools/riak-admin/#reip
>>
>> I did NOT try to re-join those 4 reinstalled nodes into the cluster. As
>> you know, member-status shows they're "leaving", as below:
>> riak-admin member-status
>> ================================= Membership ==================================
>> Status     Ring       Pending    Node
>> -------------------------------------------------------------------------------
>> leaving    10.9%      10.9%      'riak@10.21.136.91'
>> leaving     9.4%      10.9%      'riak@10.21.136.92'
>> leaving     7.8%      10.9%      'riak@10.21.136.93'
>> leaving     7.8%      10.9%      'riak@10.21.136.94'
>> valid      10.9%      10.9%      'riak@10.21.136.66'
>> valid      10.9%      10.9%      'riak@10.21.136.71'
>> valid      14.1%      10.9%      'riak@10.21.136.76'
>> valid      17.2%      12.5%      'riak@10.21.136.81'
>> valid      10.9%      10.9%      'riak@10.21.136.86'
>> -------------------------------------------------------------------------------
>> Valid:5 / Leaving:4 / Exiting:0 / Joining:0 / Down:0
>>
>> Two weeks have elapsed and 'riak-admin member-status' shows the same
>> result. I don't know at which step the ring handoff is stuck.
>>
>> I did not change the IP addresses of the four newly added nodes.
>>
>> My questions:
>>
>> 1. How can I force the "leaving" nodes to leave without data loss?
>> 2. I have found some errors related to partition handoff in
>> /etc/riak/log/errors.
>> Details are below:
>>
>> 2015-07-30 16:04:33.643 [error]
>> <0.12872.15>@riak_core_handoff_sender:start_fold:262 ownership_transfer
>> transfer of riak_kv_vnode from 'riak@10.21.136.76'
>> 45671926166590716193865151022383844364247891968 to 'riak@10.21.136.93'
>> 45671926166590716193865151022383844364247891968 failed because of enotconn
>> 2015-07-30 16:04:33.643 [error]
>> <0.197.0>@riak_core_handoff_manager:handle_info:289 An outbound handoff of
>> partition riak_kv_vnode 45671926166590716193865151022383844364247891968 was
>> terminated for reason: {shutdown,{error,enotconn}}
>>
>>
>>
>> I have search

Re: why leaving riak cluster so slowly and how to accelerate the speed

2015-08-14 Thread changmao wang
During the last three days, I set up a development riak cluster with five
nodes and used "s3cmd" to upload 18GB of test data (maybe 20 thousand files).
After that, I let one node leave the cluster, then shut it down and marked it
down, replaced its IP address, and joined it to the cluster again. The whole
process was successful, roughly as sketched below. However, I'm not sure
whether or not it can be done on the production environment.
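
(A sketch of that sequence, using the dev-cluster node names; the detailed
steps are in the renaming doc linked below:)

riak-admin cluster leave 'riak@10.21.236.185'
riak-admin cluster plan
riak-admin cluster commit
# once handoff finishes, on the leaving node:
riak stop
# from a remaining node:
riak-admin down 'riak@10.21.236.185'
# change the -name entry in /etc/riak/vm.args to the new IP, start riak,
# then on the renamed node:
riak-admin cluster join 'riak@10.21.236.181'
riak-admin cluster plan
riak-admin cluster commit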

I followed the doc below to do the above steps:

http://docs.basho.com/riak/latest/ops/running/nodes/renaming/

After I ran "riak-admin cluster leave 'riak@x.x.x.x'", "riak-admin cluster
plan", and "riak-admin cluster commit", I checked the member-status. The main
difference between leaving the cluster on the production and development
environments is as below:

root@cluster-s3-dev-hd1:~# riak-admin member-status
================================= Membership ==================================
Status     Ring       Pending    Node
-------------------------------------------------------------------------------
leaving    18.8%       0.0%      'riak@10.21.236.185'
valid      21.9%      25.0%      'riak@10.21.236.181'
valid      21.9%      25.0%      'riak@10.21.236.182'
valid      18.8%      25.0%      'riak@10.21.236.183'
valid      18.8%      25.0%      'riak@10.21.236.184'
-------------------------------------------------------------------------------

Several minutes later, I checked the status again:


root@cluster-s3-dev-hd1:~# riak-admin member-status
================================= Membership ==================================
Status     Ring       Pending    Node
-------------------------------------------------------------------------------
leaving    12.5%       0.0%      'riak@10.21.236.185'
valid      21.9%      25.0%      'riak@10.21.236.181'
valid      28.1%      25.0%      'riak@10.21.236.182'
valid      18.8%      25.0%      'riak@10.21.236.183'
valid      18.8%      25.0%      'riak@10.21.236.184'
-------------------------------------------------------------------------------
Valid:4 / Leaving:1 / Exiting:0 / Joining:0 / Down:0

After that, I shut riak down with "riak stop" and marked it down from the
active nodes.
My question is: what is the meaning of "Pending 0.0%"?

On production cluster, the status are as below:
root@cluster1-hd12:/root/scripts# riak-admin transfers
'riak@10.21.136.94' waiting to handoff 5 partitions
'riak@10.21.136.93' waiting to handoff 5 partitions
'riak@10.21.136.92' waiting to handoff 5 partitions
'riak@10.21.136.91' waiting to handoff 5 partitions
'riak@10.21.136.86' waiting to handoff 5 partitions
'riak@10.21.136.81' waiting to handoff 2 partitions
'riak@10.21.136.76' waiting to handoff 3 partitions
'riak@10.21.136.71' waiting to handoff 5 partitions
'riak@10.21.136.66' waiting to handoff 5 partitions

And there are active transfers. On the development environment, there were no
active transfers after I ran "riak-admin cluster commit".
Can I follow the same steps as on the development environment when I run this
on the production cluster?



On Wed, Aug 12, 2015 at 10:39 PM, Dmitri Zagidulin 
wrote:

> Responses inline.
>
>
> On Tue, Aug 11, 2015 at 12:53 PM, changmao wang 
> wrote:
>
>> 1. About backuping new nodes of four and then using 'riak-admin
>> force-replace'. what's the status of new added nodes?
>> as you know, we want to replace one of leaving nodes.
>>
>
> I don't understand the question. Doing 'riak-admin force-replace' on one
> of the nodes that's leaving should overwrite the leave request and tell it
> to change its node id / ip address. (If that doesn't work, stop the leaving
> node, and do a 'riak-admin reip' command instead).
>
>
>
>> 2. what's the risk of 'riak-admin force-remove' 'riak@10.21.136.91'
>> without backup?
>> As you know, now the node(riak@10.21.136.91) is a member of the cluster,
>> and keeping almost 2.5TB data, maybe 10 percent of the whole cluster.
>>
>
> The only reason I asked about backup is because it sounded like you
> cleared the disk on it. If it currently has the data, then it'll be fine.
> Force-remove just changes the IP address, and doesn't delete the data or
> anything.
>
>
> On Tue, Aug 11, 2015 at 7:32 PM, Dmitri Zagidulin 
> wrote:
>
>> 1. How to force leave "leaving"'s nodes without data loss?
>>
>> This depends on - did you back up the data directory of the 4 new nodes,
>> before you reformatted them?
>> If you backed them up (and then restored the data directory once you
>> reformatted them), you can try:
>>

Re: why leaving riak cluster so slowly and how to accelerate the speed

2015-08-18 Thread changmao wang
Dmitri,

I got it. Following the steps tested on the development environment, I
finished the change on the production environment.
Now the production cluster status is as below:

root@cluster1-hd13:~# riak-admin member-status
================================= Membership ==================================
Status     Ring       Pending    Node
-------------------------------------------------------------------------------
valid      10.9%      10.9%      'riak@10.21.136.66'
valid      10.9%      10.9%      'riak@10.21.136.71'
valid      14.1%      10.9%      'riak@10.21.136.76'
valid      17.2%      12.5%      'riak@10.21.136.81'
valid      10.9%      10.9%      'riak@10.21.136.86'
valid       7.8%      10.9%      'riak@10.21.136.95'
valid       7.8%      10.9%      'riak@10.21.136.96'
valid       9.4%      10.9%      'riak@10.21.136.97'
valid      10.9%      10.9%      'riak@10.21.136.98'
-------------------------------------------------------------------------------
Valid:9 / Leaving:0 / Exiting:0 / Joining:0 / Down:0

Thanks for your great help; we can close this ticket.
There are several issues related to transferring partitions; I'll raise them
in another email.


On Mon, Aug 17, 2015 at 11:01 PM, Dmitri Zagidulin 
wrote:

> Amao,
>
> As I've mentioned, those pending transfers are going to stay there
> indefinitely. They will keep showing up on the 'status' list, until you do
> a 'force-replace' or 'force-remove'.
>
>
>


-- 
Amao Wang
Best & Regards


s3cmd error: access to bucket was denied

2015-08-18 Thread changmao wang
Matthew,

I used "s3cmd --configure" to generate the ".s3cfg" config file and then
accessed the RIAK service with s3cmd.
The access_key and secret_key in ".s3cfg" are the same as the admin_key
and admin_secret in "/etc/riak-cs/app.config".
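
(For reference, the relevant lines of '~/.s3cfg', with the key values elided;
the host and proxy values are the ones shown in the debug output in this
thread:)

access_key = <same as admin_key in /etc/riak-cs/app.config>
secret_key = <same as admin_secret in /etc/riak-cs/app.config>
host_base = api2.cloud-datayes.com
host_bucket = %(bucket)s.api2.cloud-datayes.com
proxy_host = 10.21.136.81
proxy_port = 8080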

However, I got the error below when using s3cmd to access one bucket.

root@cluster-s3-hd1:~# s3cmd -c /root/.s3cfg ls
s3://pipeline/article/111.pdf
ERROR: Access to bucket 'pipeline' was denied

By the way, I'm using Riak and Riak-CS 1.4.2 on Ubuntu. The current production
cluster is a legacy system without documentation for co-workers.

The attached file is the "s3cfg" generated by "s3cmd --configure".
-- 
Amao Wang
Best & Regards


.s3cfg
Description: Binary data


Re: s3cmd error: access to bucket was denied

2015-08-20 Thread changmao wang
Is anybody looking at this?

On Wed, Aug 19, 2015 at 9:01 AM, changmao wang 
wrote:

> Matthew,
>
> I used s3cmd --configure to generate ".s3cfg" config file and then access
> RIAK service by s3cmd.
> The access_key and secret_key from ".s3cfg" is same as admin_key
> and admin_secret from "/etc/riak-cs/app.config".
>
> However, I got error as below using s3cmd to access one bucket.
>
> root@cluster-s3-hd1:~# s3cmd -c /root/.s3cfg ls
> s3://pipeline/article/111.pdf
> ERROR: Access to bucket 'pipeline' was denied
>
> By the way, I used Riak and Riak-CS 1.4.2 on Ubuntu. Current production
> cluster is a legacy system without documents for co-workers.
>
> Attached file is "s3cfg" generated by "s3cmd --configure".
> --
> Amao Wang
> Best & Regards
>



-- 
Amao Wang
Best & Regards


Re: s3cmd error: access to bucket was denied

2015-08-20 Thread changmao wang
Stanislav,

What do you mean by the domain name in /etc/riak-cs/app.config and ~/.s3cfg?
I guess you mean the cs_root_host parameter from /etc/riak-cs/app.config
and host_base from '~/.s3cfg'.
If so, they are the same: "api2.cloud-datayes.com". However, I cannot ping
this host from localhost.
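
(Note: since this s3cfg points s3cmd at the proxy 10.21.136.81:8080, the
client does not need to resolve the name itself; otherwise an /etc/hosts
entry aimed at a Riak CS node would do, e.g.:)

10.21.136.81 api2.cloud-datayes.com stock.api2.cloud-datayes.com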

On Thu, Aug 20, 2015 at 5:23 PM, Stanislav Vlasov 
wrote:

> 2015-08-20 13:57 GMT+05:00 changmao wang :
> > somebody watching on this?
>
> Did you set up the same domain in riak-cs.conf and in .s3cfg?
> I got such an error in that case.
>
> > On Wed, Aug 19, 2015 at 9:01 AM, changmao wang 
> > wrote:
> >>
> >> Matthew,
> >>
> >> I used s3cmd --configure to generate ".s3cfg" config file and then
> access
> >> RIAK service by s3cmd.
> >> The access_key and secret_key from ".s3cfg" is same as admin_key and
> >> admin_secret from "/etc/riak-cs/app.config".
> >>
> >> However, I got error as below using s3cmd to access one bucket.
> >>
> >> root@cluster-s3-hd1:~# s3cmd -c /root/.s3cfg ls
> >> s3://pipeline/article/111.pdf
> >> ERROR: Access to bucket 'pipeline' was denied
> >>
> >> By the way, I used Riak and Riak-CS 1.4.2 on Ubuntu. Current production
> >> cluster is a legacy system without documents for co-workers.
> >>
> >> Attached file is "s3cfg" generated by "s3cmd --configure".
> >> --
> >> Amao Wang
> >> Best & Regards
> >
> >
> >
> >
> > --
> > Amao Wang
> > Best & Regards
> >
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> >
>
>
>
> --
> Stanislav
>



-- 
Amao Wang
Best & Regards


Re: s3cmd error: access to bucket was denied

2015-08-20 Thread changmao wang
Just now, I used "admin_key" and "admin_secret" from
/etc/riak-cs/app.config to run "s3cmd -c s3-stock ls
 s3://stock/XSHE/0/000600"
and I got the below error:
ERROR: Access to bucket 'stock' was denied

Below is an excerpt from "/var/log/riak-cs/console.log":
2015-08-20 18:40:22.790 [debug]
<0.28085.18>@riak_cs_s3_auth:calculate_signature:129 STS:
["GET","\n",[],"\n",[],"\n","\n",[["x-amz-date",":",<<"Thu, 20 Aug 2015
10:40:22 +">>,"\n"]],["/stock/",[]]]
2015-08-20 18:40:32.861 [error]
<0.28153.18>@riak_cs_wm_common:maybe_create_user:223 Retrieval of user
record for s3 failed. Reason: no_user_key
2015-08-20 18:40:32.861 [debug]
<0.28153.18>@riak_cs_wm_common:post_authentication:452 No user key
2015-08-20 18:40:32.969 [debug] <0.28189.18>@riak_cs_get_fsm:prepare:406
Manifest:
{lfs_manifest_v3,3,1048576,{<<"pipeline">>,<<100,97,116,97,121,101,115,47,112,105,112,101,108,105,110,101,47,100,97,116,97,47,114,101,112,111,114,116,47,115,122,47,83,90,48,48,50,49,55,53,67,78,47,50,48,49,48,95,50,48,49,48,45,48,52,45,50,52,95,229,133,172,229,143,184,231,171,160,231,168,139,239,188,136,50,48,49,48,229,185,180,52,230,156,136,239,188,137,46,80,68,70>>},[],"2013-12-16T23:01:12.000Z",<<192,71,150,153,181,181,77,61,186,41,100,32,5,91,197,166>>,255387,<<"application/pdf">>,<<55,41,141,170,187,226,47,223,183,95,105,129,155,154,210,202>>,active,{1387,234872,598819},{1387,234872,918555},[],undefined,undefined,undefined,undefined,{acl_v2,{"pipelinewrite","ef38ca69e145a40c1f8378633994192dace4539339315e6b42d7d1e6e2d2de51","AVG2DHZ4UNUYFAZ8F4WR"},[{{"pipelinewrite","ef38ca69e145a40c1f8378633994192dace4539339315e6b42d7d1e6e2d2de51"},['FULL_CONTROL']},{'AllUsers',['READ']}],{1387,234872,598546}},[],undefined}
2015-08-20 18:40:33.043 [debug]
<0.28189.18>@riak_cs_lfs_utils:range_blocks:118 InitialBlock: 0,
FinalBlock: 0
2015-08-20 18:40:33.043 [debug]
<0.28189.18>@riak_cs_lfs_utils:range_blocks:120 SkipInitial: 0, KeepFinal:
255387
2015-08-20 18:40:33.050 [debug]
<0.28189.18>@riak_cs_get_fsm:waiting_continue_or_stop:229 Block Servers:
[<0.28191.18>]
2015-08-20 18:40:33.079 [debug]
<0.28189.18>@riak_cs_get_fsm:waiting_chunks:307 Retrieved block
{<<192,71,150,153,181,181,77,61,186,41,100,32,5,91,197,166>>,0}
2015-08-20 18:40:33.079 [debug]
<0.28189.18>@riak_cs_get_fsm:perhaps_send_to_user:280 Returning block
{<<192,71,150,153,181,181,77,61,186,41,100,32,5,91,197,166>>,0} to client
2015-08-20 18:40:38.218 [error]
<0.28086.18>@riak_cs_wm_common:maybe_create_user:223 Retrieval of user
record for s3 failed. Reason: no_user_key
2015-08-20 18:40:38.218 [debug]
<0.28086.18>@riak_cs_wm_common:post_authentication:452 No user key
2015-08-20 18:40:38.226 [debug] <0.28210.18>@riak_cs_get_fsm:prepare:406
Manifest:
{lfs_manifest_v3,3,1048576,{<<"pipeline">>,<<100,97,116,97,121,101,115,47,112,105,112,101,108,105,110,101,47,100,97,116,97,47,114,101,112,111,114,116,47,115,104,47,83,72,54,48,48,55,53,48,67,78,47,50,48,48,55,95,50,48,48,55,45,49,49,45,50,49,95,230,177,159,228,184,173,232,141,175,228,184,154,229,133,179,228,186,142,73,66,69,95,53,232,141,175,229,147,129,232,142,183,229,190,151,228,186,140,230,156,159,228,184,180,229,186,138,230,137,185,230,150,135,231,154,132,229,133,172,229,145,138,229,143,138,233,163,142,233,153,169,230,143,144,231,164,186,46,112,100,102>>},[],"2013-12-15T09:04:48.000Z",<<201,247,249,158,95,22,64,242,161,118,253,64,120,187,205,105>>,89863,<<"application/pdf">>,<<139,151,203,173,6,111,222,48,17,81,102,170,216,66,193,77>>,active,{1387,98288,545827},{1387,98288,618409},[],undefined,undefined,undefined,undefined,{acl_v2,{"pipelinewrite","ef38ca69e145a40c1f8378633994192dace4539339315e6b42d7d1e6e2d2de51","AVG2DHZ4UNUYFAZ8F4WR"},[{{"pipelinewrite","ef38ca69e145a40c1f8378633994192dace4539339315e6b42d7d1e6e2d2de51"},['FULL_CONTROL']},{'AllUsers',['READ']}],{1387,98288,545618}},[],undefined}
2015-08-20 18:40:38.280 [debug]
<0.28210.18>@riak_cs_lfs_utils:range_blocks:118 InitialBlock: 0,
FinalBlock: 0
2015-08-20 18:40:38.280 [debug]
<0.28210.18>@riak_cs_lfs_utils:range_blocks:120 SkipInitial: 0, KeepFinal:
89863
2015-08-20 18:40:38.280 [debug]
<0.28210.18>@riak_cs_get_fsm:waiting_continue_or_stop:229 Block Servers:
[<0.28212.18>]
2015-08-20 18:40:38.343 [debug]
<0.28210.18>@riak_cs_get_fsm:waiting_chunks:307 Retrieved block
{<<201,247,249,158,95,22,64,242,161,118,253,64,120,187,205,105>>,0

Re: s3cmd error: access to bucket was denied

2015-08-20 Thread changmao wang
Kazuhiro,

Maybe that's not the key point. I'm using riak 1.4.2 and followed the doc
below to configure the "s3cfg" file:
http://docs.basho.com/riakcs/1.4.2/cookbooks/configuration/Configuring-an-S3-Client/#Sample-s3cmd-Configuration-File-for-Production-Use

There was no "signature_v2" parameter in "s3cfg". However, I added this
parameter to "s3cfg" and tried again, with the same errors.




On Thu, Aug 20, 2015 at 10:31 PM, Kazuhiro Suzuki  wrote:

> Hi Changmao,
>
> It seems your s3cmd config should include 2 items:
>
> signature_v2 = True
> host_base  = api2.cloud-datayes.com
>
> Riak CS requires "signature_v2 = True" since Riak CS does not support
> s3 authentication version 4 yet.
> You can find a sample configuration of s3cmd here to interact with Riak CS
> [1].
>
> [1]:
> http://docs.basho.com/riakcs/2.0.1/cookbooks/configuration/Configuring-an-S3-Client/#Sample-s3cmd-Configuration-File-for-Production-Use
>
> Thanks,
>
> On Thu, Aug 20, 2015 at 7:44 PM, changmao wang 
> wrote:
> > Just now, I used "admin_key" and "admin_secret" from
> /etc/riak-cs/app.config
> > to run "s3cmd -c s3-stock ls  s3://stock/XSHE/0/000600"
> > and I got the below error:
> > ERROR: Access to bucket 'stock' was denied
> >
> > Below is abstract from "/var/log/riak-cs/console.log"
> > 2015-08-20 18:40:22.790 [debug]
> > <0.28085.18>@riak_cs_s3_auth:calculate_signature:129 STS:
> > ["GET","\n",[],"\n",[],"\n","\n",[["x-amz-date",":",<<"Thu, 20 Aug 2015
> > 10:40:22 +">>,"\n"]],["/stock/",[]]]
> > 2015-08-20 18:40:32.861 [error]
> > <0.28153.18>@riak_cs_wm_common:maybe_create_user:223 Retrieval of user
> > record for s3 failed. Reason: no_user_key
> > 2015-08-20 18:40:32.861 [debug]
> > <0.28153.18>@riak_cs_wm_common:post_authentication:452 No user key
> > 2015-08-20 18:40:32.969 [debug] <0.28189.18>@riak_cs_get_fsm:prepare:406
> > Manifest:
> >
> {lfs_manifest_v3,3,1048576,{<<"pipeline">>,<<100,97,116,97,121,101,115,47,112,105,112,101,108,105,110,101,47,100,97,116,97,47,114,101,112,111,114,116,47,115,122,47,83,90,48,48,50,49,55,53,67,78,47,50,48,49,48,95,50,48,49,48,45,48,52,45,50,52,95,229,133,172,229,143,184,231,171,160,231,168,139,239,188,136,50,48,49,48,229,185,180,52,230,156,136,239,188,137,46,80,68,70>>},[],"2013-12-16T23:01:12.000Z",<<192,71,150,153,181,181,77,61,186,41,100,32,5,91,197,166>>,255387,<<"application/pdf">>,<<55,41,141,170,187,226,47,223,183,95,105,129,155,154,210,202>>,active,{1387,234872,598819},{1387,234872,918555},[],undefined,undefined,undefined,undefined,{acl_v2,{"pipelinewrite","ef38ca69e145a40c1f8378633994192dace4539339315e6b42d7d1e6e2d2de51","AVG2DHZ4UNUYFAZ8F4WR"},[{{"pipelinewrite","ef38ca69e145a40c1f8378633994192dace4539339315e6b42d7d1e6e2d2de51"},['FULL_CONTROL']},{'AllUsers',['READ']}],{1387,234872,598546}},[],undefined}
> > 2015-08-20 18:40:33.043 [debug]
> > <0.28189.18>@riak_cs_lfs_utils:range_blocks:118 InitialBlock: 0,
> FinalBlock:
> > 0
> > 2015-08-20 18:40:33.043 [debug]
> > <0.28189.18>@riak_cs_lfs_utils:range_blocks:120 SkipInitial: 0,
> KeepFinal:
> > 255387
> > 2015-08-20 18:40:33.050 [debug]
> > <0.28189.18>@riak_cs_get_fsm:waiting_continue_or_stop:229 Block Servers:
> > [<0.28191.18>]
> > 2015-08-20 18:40:33.079 [debug]
> > <0.28189.18>@riak_cs_get_fsm:waiting_chunks:307 Retrieved block
> > {<<192,71,150,153,181,181,77,61,186,41,100,32,5,91,197,166>>,0}
> > 2015-08-20 18:40:33.079 [debug]
> > <0.28189.18>@riak_cs_get_fsm:perhaps_send_to_user:280 Returning block
> > {<<192,71,150,153,181,181,77,61,186,41,100,32,5,91,197,166>>,0} to client
> > 2015-08-20 18:40:38.218 [error]
> > <0.28086.18>@riak_cs_wm_common:maybe_create_user:223 Retrieval of user
> > record for s3 failed. Reason: no_user_key
> > 2015-08-20 18:40:38.218 [debug]
> > <0.28086.18>@riak_cs_wm_common:post_authentication:452 No user key
> > 2015-08-20 18:40:38.226 [debug] <0.28210.18>@riak_cs_get_fsm:prepare:406
> > Manifest:
> >
> {lfs_manifest_v3,3,1048576,{<<"pipeline">>,<<100,97,116,97,121,101,115,47,112,105,112,101,108,105,110,101,47,100,97,116,97,47,114,101,112,111,114,116,47,115,104,47,83,72,54,48,48,55,53,48,67,78,47,50,48,48,55,95,50,48,48,55,45,49,49,45,50,49,95,230,177,159,228,184,173,23

Re: s3cmd error: access to bucket was denied

2015-08-23 Thread changmao wang
Shunichi,

Thanks for your reply. Below are my command results:
root@cluster-s3-hd1:~# s3cmd ls
2013-12-01 06:45  s3://test
root@cluster-s3-hd1:~# s3cmd info s3://stock
ERROR: Access to bucket 'stock' was denied
root@cluster-s3-hd1:~# s3cmd info s3://stock -d
DEBUG: ConfigParser: Reading file '/root/.s3cfg'
DEBUG: ConfigParser: access_key->M2...17_chars...K
DEBUG: ConfigParser: bucket_location->US
DEBUG: ConfigParser: cloudfront_host->cloudfront.amazonaws.com
DEBUG: ConfigParser: cloudfront_resource->/2010-07-15/distribution
DEBUG: ConfigParser: default_mime_type->binary/octet-stream
DEBUG: ConfigParser: delete_removed->False
DEBUG: ConfigParser: dry_run->False
DEBUG: ConfigParser: encoding->UTF-8
DEBUG: ConfigParser: encrypt->False
DEBUG: ConfigParser: follow_symlinks->False
DEBUG: ConfigParser: force->False
DEBUG: ConfigParser: get_continue->False
DEBUG: ConfigParser: gpg_command->/usr/bin/gpg
DEBUG: ConfigParser: gpg_decrypt->%(gpg_command)s -d --verbose
--no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o
%(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_encrypt->%(gpg_command)s -c --verbose
--no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o
%(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_passphrase->...-3_chars...
DEBUG: ConfigParser: guess_mime_type->True
DEBUG: ConfigParser: host_base->api2.cloud-datayes.com
DEBUG: ConfigParser: host_bucket->%(bucket)s.api2.cloud-datayes.com
DEBUG: ConfigParser: human_readable_sizes->False
DEBUG: ConfigParser: list_md5->False
DEBUG: ConfigParser: log_target_prefix->
DEBUG: ConfigParser: preserve_attrs->True
DEBUG: ConfigParser: progress_meter->True
DEBUG: ConfigParser: proxy_host->10.21.136.81
DEBUG: ConfigParser: proxy_port->8080
DEBUG: ConfigParser: recursive->False
DEBUG: ConfigParser: recv_chunk->4096
DEBUG: ConfigParser: reduced_redundancy->False
DEBUG: ConfigParser: secret_key->1u...37_chars...=
DEBUG: ConfigParser: send_chunk->4096
DEBUG: ConfigParser: simpledb_host->sdb.amazonaws.com
DEBUG: ConfigParser: skip_existing->False
DEBUG: ConfigParser: socket_timeout->10
DEBUG: ConfigParser: urlencoding_mode->normal
DEBUG: ConfigParser: use_https->False
DEBUG: ConfigParser: verbosity->WARNING
DEBUG: Updating Config.Config encoding -> UTF-8
DEBUG: Updating Config.Config follow_symlinks -> False
DEBUG: Updating Config.Config verbosity -> 10
DEBUG: Unicodising 'info' using UTF-8
DEBUG: Unicodising 's3://stock' using UTF-8
DEBUG: Command: info
DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Mon, 24 Aug 2015 01:11:09
+\n/stock/?location'
DEBUG: CreateRequest: resource[uri]=/?location
DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Mon, 24 Aug 2015 01:11:09
+\n/stock/?location'
DEBUG: Processing request, please wait...
DEBUG: get_hostname(stock): stock.api2.cloud-datayes.com
DEBUG: format_uri(): http://stock.api2.cloud-datayes.com/?location
DEBUG: Response: {'status': 403, 'headers': {'date': 'Mon, 24 Aug 2015
01:11:09 GMT', 'content-length': '160', 'content-type': 'application/xml',
'server': 'Riak CS'}, 'reason': 'Forbidden', 'data': 'AccessDeniedAccess
Denied/stock'}
DEBUG: S3Error: 403 (Forbidden)
DEBUG: HttpHeader: date: Mon, 24 Aug 2015 01:11:09 GMT
DEBUG: HttpHeader: content-length: 160
DEBUG: HttpHeader: content-type: application/xml
DEBUG: HttpHeader: server: Riak CS
DEBUG: ErrorXML: Code: 'AccessDenied'
DEBUG: ErrorXML: Message: 'Access Denied'
DEBUG: ErrorXML: Resource: '/stock'
DEBUG: ErrorXML: RequestId: None
ERROR: Access to bucket 'stock' was denied

On Mon, Aug 24, 2015 at 9:04 AM, Shunichi Shinohara  wrote:

> The error message in console.log shows there is no user with the
> access_key in your s3cfg.
> Could you provide the results of the following commands?
>
> - s3cmd ls
> - s3cmd info s3://stock
>
> If an error happens, the debug switch "-d" of s3cmd might help.
>
> [1]
> http://docs.basho.com/riakcs/latest/cookbooks/Account-Management/#Creating-a-User-Account
>
> --
> Shunichi Shinohara
> Basho Japan KK
>
>
> On Fri, Aug 21, 2015 at 10:00 AM, changmao wang 
> wrote:
> > Kazuhiro,
> >
> > Maybe that's not the key point. I'm using riak 1.4.2 and follow below
> docs
> > to configure "s3cfg" file.
> >
> http://docs.basho.com/riakcs/1.4.2/cookbooks/configuration/Configuring-an-S3-Client/#Sample-s3cmd-Configuration-File-for-Production-Use
> >
> > There's no "signature_v2" parameter in "s3cfg". However, I added this
> > parameter to "s3cfg" and tried again with same errors.
> >
> >
> >
> >
&

Re: s3cmd error: access to bucket was denied

2015-08-24 Thread changmao wang
1. root@cluster1-hd10:~# grep cs_root_host /etc/riak-cs/app.config
  {cs_root_host, "api2.cloud-datayes.com"},
root@cluster1-hd10:~# grep host_base .s3cfg
host_base = api2.cloud-datayes.com
root@cluster1-hd10:~# grep host_base s3cfg1
host_base = api2.cloud-datayes.com

2. Please check the attached file for the "s3cmd -d" output and
'/var/log/riak-cs/console.log'.


On Mon, Aug 24, 2015 at 9:54 AM, Shunichi Shinohara  wrote:

> What is "api2.cloud-datayes.com"? Your s3cfg attached at the first one
> in this email thread
> does not include it. Please make sure you provide correct / consistent
> information to
> debug the issue.
>
> - What is your riak cs config "cs_root_host"?
> - What is your host_base in s3cfg that you USE?
> - What is your host_bucket in s3cfg?
>
> Also, please attach s3cmd debug output AND riak cs console log at the same
> time
> interval.
> --
> Shunichi Shinohara
> Basho Japan KK
>
>
> On Mon, Aug 24, 2015 at 10:42 AM, changmao wang 
> wrote:
> > I'm not sure who created it. This's a legacy production system.
> >
> > Just now, I used another "s3cfg" file to access it. Below is my output:
> > root@cluster1-hd10:~# s3cmd -c s3cfg1 info
> > s3://stock/XSHE/0/50/2008/XSHE-50-20080102
> > s3://stock/XSHE/0/50/2008/XSHE-50-20080102 (object):
> >File size: 397535
> >Last mod:  Thu, 05 Dec 2013 03:19:00 GMT
> >MIME type: binary/octet-stream
> >MD5 sum:   feb2609ecfc9bb21549f2401a5c9477d
> >ACL:   stockwrite: FULL_CONTROL
> >ACL:   *anon*: READ
> >URL:
> > http://stock.s3.amazonaws.com/XSHE/0/50/2008/XSHE-50-20080102
> > root@cluster1-hd10:~# s3cmd -c s3cfg1 ls
> > s3://stock/XSHE/0/50/2008/XSHE-50-20080102 -d
> > DEBUG: ConfigParser: Reading file 's3cfg1'
> > DEBUG: ConfigParser: access_key->TE...17_chars...0
> > DEBUG: ConfigParser: bucket_location->US
> > DEBUG: ConfigParser: cloudfront_host->cloudfront.amazonaws.com
> > DEBUG: ConfigParser: cloudfront_resource->/2010-07-15/distribution
> > DEBUG: ConfigParser: default_mime_type->binary/octet-stream
> > DEBUG: ConfigParser: delete_removed->False
> > DEBUG: ConfigParser: dry_run->False
> > DEBUG: ConfigParser: encoding->UTF-8
> > DEBUG: ConfigParser: encrypt->False
> > DEBUG: ConfigParser: follow_symlinks->False
> > DEBUG: ConfigParser: force->False
> > DEBUG: ConfigParser: get_continue->False
> > DEBUG: ConfigParser: gpg_command->/usr/bin/gpg
> > DEBUG: ConfigParser: gpg_decrypt->%(gpg_command)s -d --verbose
> > --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o
> > %(output_file)s %(input_file)s
> > DEBUG: ConfigParser: gpg_encrypt->%(gpg_command)s -c --verbose
> > --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o
> > %(output_file)s %(input_file)s
> > DEBUG: ConfigParser: gpg_passphrase->...-3_chars...
> > DEBUG: ConfigParser: guess_mime_type->True
> > DEBUG: ConfigParser: host_base->api2.cloud-datayes.com
> > DEBUG: ConfigParser: host_bucket->%(bucket)s.api2.cloud-datayes.com
> > DEBUG: ConfigParser: human_readable_sizes->False
> > DEBUG: ConfigParser: list_md5->False
> > DEBUG: ConfigParser: log_target_prefix->
> > DEBUG: ConfigParser: preserve_attrs->True
> > DEBUG: ConfigParser: progress_meter->True
> > DEBUG: ConfigParser: proxy_host->10.21.136.81
> > DEBUG: ConfigParser: proxy_port->8080
> > DEBUG: ConfigParser: recursive->False
> > DEBUG: ConfigParser: recv_chunk->4096
> > DEBUG: ConfigParser: reduced_redundancy->False
> > DEBUG: ConfigParser: secret_key->Hk...37_chars...=
> > DEBUG: ConfigParser: send_chunk->4096
> > DEBUG: ConfigParser: simpledb_host->sdb.amazonaws.com
> > DEBUG: ConfigParser: skip_existing->False
> > DEBUG: ConfigParser: socket_timeout->100
> > DEBUG: ConfigParser: urlencoding_mode->normal
> > DEBUG: ConfigParser: use_https->False
> > DEBUG: ConfigParser: verbosity->WARNING
> > DEBUG: Updating Config.Config encoding -> UTF-8
> > DEBUG: Updating Config.Config follow_symlinks -> False
> > DEBUG: Updating Config.Config verbosity -> 10
> > DEBUG: Unicodising 'ls' using UTF-8
> > DEBUG: Unicodising 's3://stock/XSHE/0/50/2008/XSHE-50-20080102'
> > using UTF-8
> > DEBUG: Command: ls
> > DEBUG: Bucket 's3://stock':
> > DEBUG: String 'XSHE/0/50/2008/XSHE-50-20080102' encoded to
> > 'XSHE/0/50/2008

Re: s3cmd error: access to bucket was denied

2015-08-24 Thread changmao wang
I'm not sure who created it. This is a legacy production system.

Just now, I used another "s3cfg" file to access it. Below is my output:
root@cluster1-hd10:~# s3cmd -c s3cfg1 info
s3://stock/XSHE/0/50/2008/XSHE-50-20080102
s3://stock/XSHE/0/50/2008/XSHE-50-20080102 (object):
   File size: 397535
   Last mod:  Thu, 05 Dec 2013 03:19:00 GMT
   MIME type: binary/octet-stream
   MD5 sum:   feb2609ecfc9bb21549f2401a5c9477d
   ACL:   stockwrite: FULL_CONTROL
   ACL:   *anon*: READ
   URL:
http://stock.s3.amazonaws.com/XSHE/0/50/2008/XSHE-50-20080102
root@cluster1-hd10:~# s3cmd -c s3cfg1 ls
s3://stock/XSHE/0/50/2008/XSHE-50-20080102 -d
DEBUG: ConfigParser: Reading file 's3cfg1'
DEBUG: ConfigParser: access_key->TE...17_chars...0
DEBUG: ConfigParser: bucket_location->US
DEBUG: ConfigParser: cloudfront_host->cloudfront.amazonaws.com
DEBUG: ConfigParser: cloudfront_resource->/2010-07-15/distribution
DEBUG: ConfigParser: default_mime_type->binary/octet-stream
DEBUG: ConfigParser: delete_removed->False
DEBUG: ConfigParser: dry_run->False
DEBUG: ConfigParser: encoding->UTF-8
DEBUG: ConfigParser: encrypt->False
DEBUG: ConfigParser: follow_symlinks->False
DEBUG: ConfigParser: force->False
DEBUG: ConfigParser: get_continue->False
DEBUG: ConfigParser: gpg_command->/usr/bin/gpg
DEBUG: ConfigParser: gpg_decrypt->%(gpg_command)s -d --verbose
--no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o
%(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_encrypt->%(gpg_command)s -c --verbose
--no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o
%(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_passphrase->...-3_chars...
DEBUG: ConfigParser: guess_mime_type->True
DEBUG: ConfigParser: host_base->api2.cloud-datayes.com
DEBUG: ConfigParser: host_bucket->%(bucket)s.api2.cloud-datayes.com
DEBUG: ConfigParser: human_readable_sizes->False
DEBUG: ConfigParser: list_md5->False
DEBUG: ConfigParser: log_target_prefix->
DEBUG: ConfigParser: preserve_attrs->True
DEBUG: ConfigParser: progress_meter->True
DEBUG: ConfigParser: proxy_host->10.21.136.81
DEBUG: ConfigParser: proxy_port->8080
DEBUG: ConfigParser: recursive->False
DEBUG: ConfigParser: recv_chunk->4096
DEBUG: ConfigParser: reduced_redundancy->False
DEBUG: ConfigParser: secret_key->Hk...37_chars...=
DEBUG: ConfigParser: send_chunk->4096
DEBUG: ConfigParser: simpledb_host->sdb.amazonaws.com
DEBUG: ConfigParser: skip_existing->False
DEBUG: ConfigParser: socket_timeout->100
DEBUG: ConfigParser: urlencoding_mode->normal
DEBUG: ConfigParser: use_https->False
DEBUG: ConfigParser: verbosity->WARNING
DEBUG: Updating Config.Config encoding -> UTF-8
DEBUG: Updating Config.Config follow_symlinks -> False
DEBUG: Updating Config.Config verbosity -> 10
DEBUG: Unicodising 'ls' using UTF-8
DEBUG: Unicodising 's3://stock/XSHE/0/50/2008/XSHE-50-20080102'
using UTF-8
DEBUG: Command: ls
DEBUG: Bucket 's3://stock':
DEBUG: String 'XSHE/0/50/2008/XSHE-50-20080102' encoded to
'XSHE/0/50/2008/XSHE-50-20080102'
DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Mon, 24 Aug 2015 01:36:01
+\n/stock/'
DEBUG: CreateRequest: resource[uri]=/
DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Mon, 24 Aug 2015 01:36:01
+\n/stock/'
DEBUG: Processing request, please wait...
DEBUG: get_hostname(stock): stock.api2.cloud-datayes.com
DEBUG: format_uri():
http://stock.api2.cloud-datayes.com/?prefix=XSHE/0/50/2008/XSHE-50-20080102&delimiter=/
WARNING: Retrying failed request:
/?prefix=XSHE/0/50/2008/XSHE-50-20080102&delimiter=/ ('')
WARNING: Waiting 3 sec...
DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Mon, 24 Aug 2015 01:37:05
+\n/stock/'
DEBUG: Processing request, please wait...
DEBUG: get_hostname(stock): stock.api2.cloud-datayes.com
DEBUG: format_uri():
http://stock.api2.cloud-datayes.com/?prefix=XSHE/0/50/2008/XSHE-50-20080102&delimiter=/
WARNING: Retrying failed request:
/?prefix=XSHE/0/50/2008/XSHE-50-20080102&delimiter=/ ('')
WARNING: Waiting 6 sec...




On Mon, Aug 24, 2015 at 9:17 AM, Shunichi Shinohara  wrote:

> The result of "s3cmd ls" (aka, GET Service API) indicates there
> is no bucket with name "stock":
>
> > root@cluster-s3-hd1:~# s3cmd ls
> > 2013-12-01 06:45  s3://test
>
> Have you created it?
>
> --
> Shunichi Shinohara
> Basho Japan KK
>
>
> On Mon, Aug 24, 2015 at 10:14 AM, changmao wang 
> wrote:
> > Shunichi,
> >
> > Thanks for your reply. Below is my command result:
> > root@cluster-s3-hd1:~# s3cmd ls
> > 2013-12-01 06:45  s3://test
> > root@cluster-s3-hd1:~# s3cmd info s3://stock
> > ERROR: Access to

Re: s3cmd error: access to bucket was denied

2015-08-24 Thread changmao wang
Please check attached file for details.

On Mon, Aug 24, 2015 at 4:48 PM, Shunichi Shinohara  wrote:

> Then, back to my first questions:
> Could you provide the results of the following commands with s3cfg1?
> - s3cmd ls
> - s3cmd info s3://stock
>
> From the log file, gc index queries timed out again and again.
> Not sure, but it may be a subtle situation...
>
> --
> Shunichi Shinohara
> Basho Japan KK
>
>
> On Mon, Aug 24, 2015 at 11:03 AM, changmao wang 
> wrote:
> > 1. root@cluster1-hd10:~# grep cs_root_host /etc/riak-cs/app.config
> >   {cs_root_host, "api2.cloud-datayes.com"},
> > root@cluster1-hd10:~# grep host_base .s3cfg
> > host_base = api2.cloud-datayes.com
> > root@cluster1-hd10:~# grep host_base s3cfg1
> > host_base = api2.cloud-datayes.com
> >
> > 2. please check attached file for "s3cmd -d" output and
> > '/etc/riak-cs/console.log'.
> >
> >
> > On Mon, Aug 24, 2015 at 9:54 AM, Shunichi Shinohara 
> wrote:
> >>
> >> What is "api2.cloud-datayes.com"? Your s3cfg attached at the first one
> >> in this email thread
> >> does not include it. Please make sure you provide correct / consistent
> >> information to
> >> debug the issue.
> >>
> >> - What is your riak cs config "cs_root_host"?
> >> - What is your host_base in s3cfg that you USE?
> >> - What is your host_bucket in s3cfg?
> >>
> >> Also, please attach s3cmd debug output AND riak cs console log at the
> same
> >> time
> >> interval.
> >> --
> >> Shunichi Shinohara
> >> Basho Japan KK
> >>
> >>
> >> On Mon, Aug 24, 2015 at 10:42 AM, changmao wang <
> wang.chang...@gmail.com>
> >> wrote:
> >> > I'm not sure who created it. This's a legacy production system.
> >> >
> >> > Just now, I used another "s3cfg" file to access it. Below is my
> output:
> >> > root@cluster1-hd10:~# s3cmd -c s3cfg1 info
> >> > s3://stock/XSHE/0/50/2008/XSHE-50-20080102
> >> > s3://stock/XSHE/0/50/2008/XSHE-50-20080102 (object):
> >> >File size: 397535
> >> >Last mod:  Thu, 05 Dec 2013 03:19:00 GMT
> >> >MIME type: binary/octet-stream
> >> >MD5 sum:   feb2609ecfc9bb21549f2401a5c9477d
> >> >ACL:   stockwrite: FULL_CONTROL
> >> >ACL:   *anon*: READ
> >> >URL:
> >> > http://stock.s3.amazonaws.com/XSHE/0/50/2008/XSHE-50-20080102
> >> > root@cluster1-hd10:~# s3cmd -c s3cfg1 ls
> >> > s3://stock/XSHE/0/50/2008/XSHE-50-20080102 -d
> >> > DEBUG: ConfigParser: Reading file 's3cfg1'
> >> > DEBUG: ConfigParser: access_key->TE...17_chars...0
> >> > DEBUG: ConfigParser: bucket_location->US
> >> > DEBUG: ConfigParser: cloudfront_host->cloudfront.amazonaws.com
> >> > DEBUG: ConfigParser: cloudfront_resource->/2010-07-15/distribution
> >> > DEBUG: ConfigParser: default_mime_type->binary/octet-stream
> >> > DEBUG: ConfigParser: delete_removed->False
> >> > DEBUG: ConfigParser: dry_run->False
> >> > DEBUG: ConfigParser: encoding->UTF-8
> >> > DEBUG: ConfigParser: encrypt->False
> >> > DEBUG: ConfigParser: follow_symlinks->False
> >> > DEBUG: ConfigParser: force->False
> >> > DEBUG: ConfigParser: get_continue->False
> >> > DEBUG: ConfigParser: gpg_command->/usr/bin/gpg
> >> > DEBUG: ConfigParser: gpg_decrypt->%(gpg_command)s -d --verbose
> >> > --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o
> >> > %(output_file)s %(input_file)s
> >> > DEBUG: ConfigParser: gpg_encrypt->%(gpg_command)s -c --verbose
> >> > --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o
> >> > %(output_file)s %(input_file)s
> >> > DEBUG: ConfigParser: gpg_passphrase->...-3_chars...
> >> > DEBUG: ConfigParser: guess_mime_type->True
> >> > DEBUG: ConfigParser: host_base->api2.cloud-datayes.com
> >> > DEBUG: ConfigParser: host_bucket->%(bucket)s.api2.cloud-datayes.com
> >> > DEBUG: ConfigParser: human_readable_sizes->False
> >> > DEBUG: ConfigParser: list_md5->False
> >> > DEBUG: ConfigParser: log_target_prefix->
> >> > DEBUG: ConfigParser: preserve_attrs->True
> >> > DEBUG: ConfigParser: progress_meter->True
> >> >

Re: s3cmd error: access to bucket was denied

2015-08-25 Thread changmao wang
Any ideas on this issue?

On Mon, Aug 24, 2015 at 5:09 PM, changmao wang 
wrote:

> Please check attached file for details.
>
> On Mon, Aug 24, 2015 at 4:48 PM, Shunichi Shinohara 
> wrote:
>
>> Then, back to my first questions:
>> Could you provide results following commands with s3cfg1?
>> - s3cmd ls
>> - s3cmd info s3://stock
>>
>> From log file, gc index queries timed out again and again.
>> Not sure but it may be subtle situation...
>>
>> --
>> Shunichi Shinohara
>> Basho Japan KK
>>
>>
>> On Mon, Aug 24, 2015 at 11:03 AM, changmao wang 
>> wrote:
>> > 1. root@cluster1-hd10:~# grep cs_root_host /etc/riak-cs/app.config
>> >   {cs_root_host, "api2.cloud-datayes.com"},
>> > root@cluster1-hd10:~# grep host_base .s3cfg
>> > host_base = api2.cloud-datayes.com
>> > root@cluster1-hd10:~# grep host_base s3cfg1
>> > host_base = api2.cloud-datayes.com
>> >
>> > 2. please check attached file for "s3cmd -d" output and
>> > '/etc/riak-cs/console.log'.
>> >
>> >
>> > On Mon, Aug 24, 2015 at 9:54 AM, Shunichi Shinohara 
>> wrote:
>> >>
>> >> What is "api2.cloud-datayes.com"? Your s3cfg attached at the first one
>> >> in this email thread
>> >> does not include it. Please make sure you provide correct / consistent
>> >> information to
>> >> debug the issue.
>> >>
>> >> - What is your riak cs config "cs_root_host"?
>> >> - What is your host_base in s3cfg that you USE?
>> >> - What is your host_bucket in s3cfg?
>> >>
>> >> Also, please attach s3cmd debug output AND riak cs console log at the
>> same
>> >> time
>> >> interval.
>> >> --
>> >> Shunichi Shinohara
>> >> Basho Japan KK
>> >>
>> >>
>> >> On Mon, Aug 24, 2015 at 10:42 AM, changmao wang <
>> wang.chang...@gmail.com>
>> >> wrote:
>> >> > I'm not sure who created it. This's a legacy production system.
>> >> >
>> >> > Just now, I used another "s3cfg" file to access it. Below is my
>> output:
>> >> > root@cluster1-hd10:~# s3cmd -c s3cfg1 info
>> >> > s3://stock/XSHE/0/50/2008/XSHE-50-20080102
>> >> > s3://stock/XSHE/0/50/2008/XSHE-50-20080102 (object):
>> >> >File size: 397535
>> >> >Last mod:  Thu, 05 Dec 2013 03:19:00 GMT
>> >> >MIME type: binary/octet-stream
>> >> >MD5 sum:   feb2609ecfc9bb21549f2401a5c9477d
>> >> >ACL:   stockwrite: FULL_CONTROL
>> >> >ACL:   *anon*: READ
>> >> >URL:
>> >> >
>> http://stock.s3.amazonaws.com/XSHE/0/50/2008/XSHE-50-20080102
>> >> > root@cluster1-hd10:~# s3cmd -c s3cfg1 ls
>> >> > s3://stock/XSHE/0/50/2008/XSHE-50-20080102 -d
>> >> > DEBUG: ConfigParser: Reading file 's3cfg1'
>> >> > DEBUG: ConfigParser: access_key->TE...17_chars...0
>> >> > DEBUG: ConfigParser: bucket_location->US
>> >> > DEBUG: ConfigParser: cloudfront_host->cloudfront.amazonaws.com
>> >> > DEBUG: ConfigParser: cloudfront_resource->/2010-07-15/distribution
>> >> > DEBUG: ConfigParser: default_mime_type->binary/octet-stream
>> >> > DEBUG: ConfigParser: delete_removed->False
>> >> > DEBUG: ConfigParser: dry_run->False
>> >> > DEBUG: ConfigParser: encoding->UTF-8
>> >> > DEBUG: ConfigParser: encrypt->False
>> >> > DEBUG: ConfigParser: follow_symlinks->False
>> >> > DEBUG: ConfigParser: force->False
>> >> > DEBUG: ConfigParser: get_continue->False
>> >> > DEBUG: ConfigParser: gpg_command->/usr/bin/gpg
>> >> > DEBUG: ConfigParser: gpg_decrypt->%(gpg_command)s -d --verbose
>> >> > --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o
>> >> > %(output_file)s %(input_file)s
>> >> > DEBUG: ConfigParser: gpg_encrypt->%(gpg_command)s -c --verbose
>> >> > --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o
>> >> > %(output_file)s %(input_file)s
>> >> > DEBUG: ConfigParser: gpg_passphrase->...-3_chars...
>> >> > DEBUG: ConfigParser: guess_mime_type->True
>> >> > DEBUG: Confi

Re: s3cmd error: access to bucket was denied

2015-08-30 Thread changmao wang
Shunichi,

Just now, I followed your direction to change fold_objects_for_list_keys to
true and restarted the riak-cs service:


sed -i '/fold_objects_for_list_keys/ s/false/true/g' /etc/riak-cs/app.config; riak-cs restart
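
(So the riak_cs section of /etc/riak-cs/app.config now contains, roughly:)

{riak_cs, [
    %% ...other settings unchanged...
    {fold_objects_for_list_keys, true}
]}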

After that, I ran the command below and got the same error.

s3cmd -c s3cfg1 ls s3://stock/XSHE/0/50/2008/XSHE-50-20080102 -d

...
DEBUG: Unicodising 'ls' using UTF-8
DEBUG: Unicodising 's3://stock/XSHE/0/50/2008/XSHE-50-20080102'
using UTF-8
DEBUG: Command: ls
DEBUG: Bucket 's3://stock':
DEBUG: String 'XSHE/0/50/2008/XSHE-50-20080102' encoded to
'XSHE/0/50/2008/XSHE-50-20080102'
DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Thu, 27 Aug 2015 09:35:51
+\n/stock/'
DEBUG: CreateRequest: resource[uri]=/
DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Thu, 27 Aug 2015 09:35:51
+\n/stock/'
DEBUG: Processing request, please wait...
DEBUG: get_hostname(stock): stock.api2.cloud-datayes.com
DEBUG: format_uri():
http://stock.api2.cloud-datayes.com/?prefix=XSHE/0/50/2008/XSHE-50-20080102&delimiter=/
WARNING: Retrying failed request:
/?prefix=XSHE/0/50/2008/XSHE-50-20080102&delimiter=/ (timed out)
WARNING: Waiting 3 sec...
DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Thu, 27 Aug 2015 09:37:34
+\n/stock/'
DEBUG: Processing request, please wait...
DEBUG: get_hostname(stock): stock.api2.cloud-datayes.com
DEBUG: format_uri():
http://stock.api2.cloud-datayes.com/?prefix=XSHE/0/50/2008/XSHE-50-20080102&delimiter=/

Below are the last errors from "/var/log/riak-cs/console.log":
2015-08-27 17:35:40.085 [error]
<0.27146.26>@riak_cs_wm_common:maybe_create_user:223 Retrieval of user
record for s3 failed. Reason: no_user_key
2015-08-27 17:37:34.744 [error]
<0.27147.26>@riak_cs_wm_common:maybe_create_user:223 Retrieval of user
record for s3 failed. Reason: no_user_key
2015-08-27 17:37:49.356 [error]
<0.27146.26>@riak_cs_wm_common:maybe_create_user:223 Retrieval of user
record for s3 failed. Reason: no_user_key
2015-08-27 17:39:49.249 [error]
<0.27147.26>@riak_cs_wm_common:maybe_create_user:223 Retrieval of user
record for s3 failed. Reason: no_user_key
2015-08-27 17:39:54.811 [error]
<0.27146.26>@riak_cs_wm_common:maybe_create_user:223 Retrieval of user
record for s3 failed. Reason: no_user_key



On Wed, Aug 26, 2015 at 5:35 PM, Shunichi Shinohara  wrote:

> Sorry for the delay, and thanks for the further information.
> The log of Riak CS shows timeout in list query to Riak:
>2015-08-24 16:59:49.717 [error] <0.18027.7> gen_fsm <0.18027.7> in state
>waiting_list_keys terminated with reason: <<"timeout">>
>
> There are several possible causes of a timeout, e.g. hardware resources.
> But if the timeout is caused by a large number of objects (manifests) in
> your bucket, an improvement from 1.4.0 may help.
> It can be enabled by setting
>    {fold_objects_for_list_keys, true}
> in the riak_cs section of your app.config, or by executing
>    application:set_env(riak_cs, fold_objects_for_list_keys, true).
> after attaching to the riak-cs node.
> For more information about it, please refer to the original PR [1].
> For more information about it, please refer the original PR [1].
>
> [1] https://github.com/basho/riak_cs/pull/600
> --
> Shunichi Shinohara
> Basho Japan KK
>
>
> On Wed, Aug 26, 2015 at 2:04 PM, Stanislav Vlasov
>  wrote:
> > 2015-08-25 11:03 GMT+05:00 changmao wang :
> >> Any ideas on this issue?
> >
> > Can you check credentials with another client?
> > s3curl, for example?
> >
> > I got some bugs in s3cmd after debian upgrade, so if another client
> > works, than s3cmd has bug.
> >
> >> On Mon, Aug 24, 2015 at 5:09 PM, changmao wang  >
> >> wrote:
> >>>
> >>> Please check attached file for details.
> >>>
> >>> On Mon, Aug 24, 2015 at 4:48 PM, Shunichi Shinohara 
> >>> wrote:
> >>>>
> >>>> Then, back to my first questions:
> >>>> Could you provide results following commands with s3cfg1?
> >>>> - s3cmd ls
> >>>> - s3cmd info s3://stock
> >>>>
> >>>> From the log file, gc index queries timed out again and again.
> >>>> Not sure, but it may be a subtle situation...
> >>>>
> >>>> --
> >>>> Shunichi Shinohara
> >>>> Basho Japan KK
> >>>>
> >>>>
> >>>> On Mon, Aug 24, 2015 at 11:03 AM, changmao wang <
> wang.chang...@gmail.com>
> >>>> wrote:
> >>>> > 1. root@cluster1-hd10:~# grep cs_root_host /etc/riak-cs/app.config
>   {cs_root_host, "api2.cloud-datayes.com"},

Re: s3cmd error: access to bucket was denied

2015-09-04 Thread changmao wang
root@cluster1-hd10:~# s3cmd -c s3cfg1 info
s3://stock/XSHE/0/50/2008/XSHE-50-20080102
s3://stock/XSHE/0/50/2008/XSHE-50-20080102 (object):
   File size: 397535
   Last mod:  Thu, 05 Dec 2013 03:19:00 GMT
   MIME type: binary/octet-stream
   MD5 sum:   feb2609ecfc9bb21549f2401a5c9477d
   ACL:   stockwrite: FULL_CONTROL
   ACL:   *anon*: READ
   URL:
http://stock.s3.amazonaws.com/XSHE/0/50/2008/XSHE-50-20080102
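
Since the ACL above grants *anon* READ, a plain unsigned GET of the
same key is a useful cross-check: if it returns 200, single-key reads
work end to end and the timeouts are specific to list (coverage)
queries. A sketch, with the hostname taken from the debug output above:

curl -s -o /dev/null -w '%{http_code}\n' \
  'http://stock.api2.cloud-datayes.com/XSHE/0/50/2008/XSHE-50-20080102'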

s3cmd -c s3cfg1 ls s3://stock/XSHE/0/50/2008/XSHE-50-20080102 -d

After running the above ls command, I checked "/var/log/riak-cs/console.log"
on all nodes and got the errors below:
2015-09-02 09:45:03.405 [error]
<0.22108.130>@riak_cs_wm_common:maybe_create_user:223 Retrieval of user
record for s3 failed. Reason: no_user_key
2015-09-02 09:45:03.581 [error]
<0.31347.130>@riak_cs_wm_common:maybe_create_user:223 Retrieval of user
record for s3 failed. Reason: no_user_key
root@cluster1-hd10:~#
..

root@cluster1-hd10:~# s3cmd -c s3cfg1 ls
2013-12-03 08:18  s3://stock
root@cluster1-hd10:~# s3cmd -c s3cfg1 mb s3://logfile

!
An unexpected error has occurred.
  Please report the following lines to:
   s3tools-b...@lists.sourceforge.net
!

Problem: AttributeError: 'NoneType' object has no attribute 'getchildren'
S3cmd:   1.0.0
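
The getchildren AttributeError in s3cmd 1.0.0 usually means the client
received a response body it could not parse as XML; rerunning with
--debug should show the raw response (a guess worth checking, and a
newer s3cmd may simply report the underlying error instead of crashing):

s3cmd -c s3cfg1 --debug mb s3://logfile 2>&1 | tail -40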

On Tue, Sep 1, 2015 at 8:57 AM, Shunichi Shinohara  wrote:

> Hmm.. it seems the situation is not good.
>
> GET Bucket (aka List Objects) in Riak CS uses a kind of coverage
> operation in Riak (KV), and that operation is what is timing out.
> There may be some logs on the riak nodes. You have to look into the
> logs on all nodes, because all nodes participate in coverage
> operations.
>
> Could you try some further commands?
> 1. for sanity check,
>s3cmd -c s3cfg1 ls
> 2. To try list objects for completely empty buckets,
>s3cmd -c s3cfg1 mb s3://
>s3cmd -c s3cfg1 ls s3://
> 3. Another coverage operation, without riak cs
>curl -v 'http://127.0.0.1:8098/buckets/foobar/keys?keys=true'
># 127.0.0.1 and 8098 should be changed to HTTP host and port
># (*NOT* PB host/port) of one of riak nodes
>
> Thanks,
> Shino
>
> On Thu, Aug 27, 2015 at 6:41 PM, changmao wang 
> wrote:
> > Shunichi,
> >
> > Just now, I followed your direction to change fold_objects_for_list_keys
> > to true, and restarted the riak-cs service.
> >
> >
> > sed -i '/fold_objects_for_list_keys/ s/false/true/g'
> > /etc/riak-cs/app.config; riak-cs restart
> >
> > After that, I ran the command below and got the same error.
> >
> > s3cmd -c s3cfg1 ls s3://stock/XSHE/0/50/2008/XSHE-50-20080102 -d
> >
> > ...
> > DEBUG: Unicodising 'ls' using UTF-8
> > DEBUG: Unicodising 's3://stock/XSHE/0/50/2008/XSHE-50-20080102'
> > using UTF-8
> > DEBUG: Command: ls
> > DEBUG: Bucket 's3://stock':
> > DEBUG: String 'XSHE/0/50/2008/XSHE-50-20080102' encoded to
> > 'XSHE/0/50/2008/XSHE-50-20080102'
> > DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Thu, 27 Aug 2015 09:35:51
> > +\n/stock/'
> > DEBUG: CreateRequest: resource[uri]=/
> > DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Thu, 27 Aug 2015 09:35:51
> > +\n/stock/'
> > DEBUG: Processing request, please wait...
> > DEBUG: get_hostname(stock): stock.api2.cloud-datayes.com
> > DEBUG: format_uri():
> >
> http://stock.api2.cloud-datayes.com/?prefix=XSHE/0/50/2008/XSHE-50-20080102&delimiter=/
> > WARNING: Retrying failed request:
> > /?prefix=XSHE/0/50/2008/XSHE-50-20080102&delimiter=/ (timed out)
> > WARNING: Waiting 3 sec...
> > DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Thu, 27 Aug 2015 09:37:34
> > +\n/stock/'
> > DEBUG: Processing request, please wait...
> > DEBUG: get_hostname(stock): stock.api2.cloud-datayes.com
> > DEBUG: format_uri():
> >
> http://stock.api2.cloud-datayes.com/?prefix=XSHE/0/50/2008/XSHE-50-20080102&delimiter=/
> >
> > Below are the last 10 errors from "/var/log/riak-cs/console.log":
> > 2015-08-27 17:35:40.085 [error]
> > <0.27146.26>@riak_cs_wm_common:maybe_create_user:223 Retrieval of user
> > record for s3 failed. Reason: no_user_key
> > 2015-08-27 17:37:34.744 [error]
> > <0.27147.26>@riak_cs_wm_common:maybe_create_user:223 Retrieval of user
> > record for s3 failed. Reason: no_user_key
> > 2015-08-27 17:37:49.356 [error]
> > <0.27146.26>@riak_cs_wm_common:maybe_create_user:223 Retrieval of user
> > record for s3 failed. Reason: no_user_key
> > 2015-08-27 1

Re: s3cmd error: access to bucket was denied

2015-09-04 Thread changmao wang
A slightly strange thing happened, as shown below:

root@cluster1-hd10:~# s3cmd -c s3cfg1 del s3://stock/XSHE/0/50/2008/XSHE-50-20080102
File s3://stock/XSHE/0/50/2008/XSHE-50-20080102 deleted
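
One hedged reading: DELETE (a single-key operation) succeeds while only
the prefix listing times out, which fits the coverage-query diagnosis
below. The deletion itself can be double-checked with another
single-key request:

s3cmd -c s3cfg1 info s3://stock/XSHE/0/50/2008/XSHE-50-20080102
# should now fail with a 404 / object-not-found error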



On Wed, Sep 2, 2015 at 9:50 AM, changmao wang 
wrote:

> root@cluster1-hd10:~# s3cmd -c s3cfg1 info
> s3://stock/XSHE/0/50/2008/XSHE-50-20080102
> s3://stock/XSHE/0/50/2008/XSHE-50-20080102 (object):
>File size: 397535
>Last mod:  Thu, 05 Dec 2013 03:19:00 GMT
>MIME type: binary/octet-stream
>MD5 sum:   feb2609ecfc9bb21549f2401a5c9477d
>ACL:   stockwrite: FULL_CONTROL
>ACL:   *anon*: READ
>URL:
> http://stock.s3.amazonaws.com/XSHE/0/50/2008/XSHE-50-20080102
>
> s3cmd -c s3cfg1 ls s3://stock/XSHE/0/50/2008/XSHE-50-20080102 -d
>
> After running the above ls command, I checked "/var/log/riak-cs/console.log"
> on all nodes and got the errors below:
> 2015-09-02 09:45:03.405 [error]
> <0.22108.130>@riak_cs_wm_common:maybe_create_user:223 Retrieval of user
> record for s3 failed. Reason: no_user_key
> 2015-09-02 09:45:03.581 [error]
> <0.31347.130>@riak_cs_wm_common:maybe_create_user:223 Retrieval of user
> record for s3 failed. Reason: no_user_key
> root@cluster1-hd10:~#
> ..
>
> root@cluster1-hd10:~# s3cmd -c s3cfg1 ls
> 2013-12-03 08:18  s3://stock
> root@cluster1-hd10:~# s3cmd -c s3cfg1 mb s3://logfile
>
> !
> An unexpected error has occurred.
>   Please report the following lines to:
>s3tools-b...@lists.sourceforge.net
> !
>
> Problem: AttributeError: 'NoneType' object has no attribute 'getchildren'
> S3cmd:   1.0.0
>
> On Tue, Sep 1, 2015 at 8:57 AM, Shunichi Shinohara 
> wrote:
>
>> Hmm.. it seems the situation is not good.
>>
>> GET Bucket (aka List Objects) in Riak CS uses a kind of coverage
>> operation in Riak (KV), and that operation is what is timing out.
>> There may be some logs on the riak nodes. You have to look into the
>> logs on all nodes, because all nodes participate in coverage
>> operations.
>>
>> Could you try some further commands?
>> 1. for sanity check,
>>s3cmd -c s3cfg1 ls
>> 2. To try list objects for completely empty buckets,
>>s3cmd -c s3cfg1 mb s3://
>>s3cmd -c s3cfg1 ls s3://
>> 3. Another coverage operation, without riak cs
>>curl -v 'http://127.0.0.1:8098/buckets/foobar/keys?keys=true'
>># 127.0.0.1 and 8098 should be changed to HTTP host and port
>># (*NOT* PB host/port) of one of riak nodes
>>
>> Thanks,
>> Shino
>>
>> On Thu, Aug 27, 2015 at 6:41 PM, changmao wang 
>> wrote:
>> > Shunichi,
>> >
>> > Just now, I followed your direction to change fold_objects_for_list_keys
>> > to true, and restarted the riak-cs service.
>> >
>> >
>> > sed -i '/fold_objects_for_list_keys/ s/false/true/g'
>> > /etc/riak-cs/app.config; riak-cs restart
>> >
>> > After that, I ran the command below and got the same error.
>> >
>> > s3cmd -c s3cfg1 ls s3://stock/XSHE/0/50/2008/XSHE-50-20080102 -d
>> >
>> > ...
>> > DEBUG: Unicodising 'ls' using UTF-8
>> > DEBUG: Unicodising 's3://stock/XSHE/0/50/2008/XSHE-50-20080102'
>> > using UTF-8
>> > DEBUG: Command: ls
>> > DEBUG: Bucket 's3://stock':
>> > DEBUG: String 'XSHE/0/50/2008/XSHE-50-20080102' encoded to
>> > 'XSHE/0/50/2008/XSHE-50-20080102'
>> > DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Thu, 27 Aug 2015 09:35:51
>> > +\n/stock/'
>> > DEBUG: CreateRequest: resource[uri]=/
>> > DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Thu, 27 Aug 2015 09:35:51
>> > +\n/stock/'
>> > DEBUG: Processing request, please wait...
>> > DEBUG: get_hostname(stock): stock.api2.cloud-datayes.com
>> > DEBUG: format_uri():
>> >
>> http://stock.api2.cloud-datayes.com/?prefix=XSHE/0/50/2008/XSHE-50-20080102&delimiter=/
>> > WARNING: Retrying failed request:
>> > /?prefix=XSHE/0/50/2008/XSHE-50-20080102&delimiter=/ (timed out)
>> > WARNING: Waiting 3 sec...
>> > DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Thu, 27 Aug 2015 09:37:34
>> > +\n/stock/'
>> > DEBUG: Processing request, please wait...
>> > DEBUG: get_hostname(stock): stock.api2.cloud-datayes.com
>> > DEBUG: format_uri():
>>