Re: repair-2i stops with "bad argument in call to eleveldb:async_write"

2014-07-29 Thread Effenberg, Simon
I have now tried it with one partition on each of 6 different machines, and everywhere I get 
the same result: index_scan_timeout, plus the info "bad argument in call to 
eleveldb:async_get" (2x) or async_write (4x).


Sent from Samsung Mobile


 Original message 
From: "Effenberg, Simon"
Date: 30.07.2014 07:49 (GMT+01:00)
To: bryan hunt
Cc: riak-users@lists.basho.com
Subject: Re: repair-2i stops with "bad argument in call to eleveldb:async_write"

Hi,

 I tried it on two different nodes with one partition each, multiple times both 
before the upgrade and after the upgrade.

I will try it on other machines in a minute, but since I already tried it on 
two different nodes, and one of them is only 2 weeks old and stored on an HP 3PAR, 
I doubt that this is a disk corruption issue.

Simon


Sent from Samsung Mobile


 Original message 
From: bryan hunt
Date: 29.07.2014 18:21 (GMT+01:00)
To: "Effenberg, Simon"
Cc: riak-users@lists.basho.com
Subject: Re: repair-2i stops with "bad argument in call to eleveldb:async_write"

Hi Simon,

Does the problem persist if you run it again?

Does it happen if you run it against any other partition?

Best Regards,

Bryan



Bryan Hunt - Client Services Engineer - Basho Technologies Limited - Registered 
Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431

On 29 Jul 2014, at 09:35, Effenberg, Simon  wrote:

> Hi,
>
> we have some issues with 2i queries like that:
>
> seffenberg@kriak46-1:~$ while :; do curl -s 
> localhost:8098/buckets/conversation/index/createdat_int/0/23182680 | ruby 
> -rjson -e "o = JSON.parse(STDIN.read); puts o['keys'].size"; sleep 1; done
>
> 13853
> 13853
> 0
> 557
> 557
> 557
> 13853
> 0
>
>
> ...
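The fluctuating counts above come from repeatedly running the identical query. As an illustrative sketch only, a few lines of Ruby can quantify such inconsistency by tallying the distinct answers; the `poll` helper assumes a live node at the URL from the report and is not exercised here:

```ruby
require 'json'
require 'net/http'

# Tally distinct key counts seen across repeated runs of the same 2i query.
def tally(counts)
  counts.each_with_object(Hash.new(0)) { |c, h| h[c] += 1 }
end

# Hypothetical poller against the endpoint quoted above (needs a live node).
def poll(url, times)
  Array.new(times) do
    body = Net::HTTP.get(URI(url))
    JSON.parse(body)['keys'].size
  end
end

# The counts observed in the report:
observed = [13853, 13853, 0, 557, 557, 557, 13853, 0]
tally(observed)  # => {13853=>3, 0=>2, 557=>3}
```

A healthy index should tally to a single entry; several entries mean different runs are returning different key sets.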
>
> So I tried to start a repair-2i first on one vnode/partition on one node
> (which is quite new in the cluster, 2 weeks or so).
>
> The command is failing with the following log entries:
>
> seffenberg@kriak46-7:~$ sudo riak-admin repair-2i 
> 22835963083295358096932575511191922182123945984
> Will repair 2i on these partitions:
>22835963083295358096932575511191922182123945984
> Watch the logs for 2i repair progress reports
> seffenberg@kriak46-7:~$ 2014-07-29 08:20:22.729 UTC [info] 
> <0.5929.1061>@riak_kv_2i_aae:init:139 Starting 2i repair at speed 100 for 
> partitions [22835963083295358096932575511191922182123945984]
> 2014-07-29 08:20:22.729 UTC [info] 
> <0.5930.1061>@riak_kv_2i_aae:repair_partition:257 Acquired lock on partition 
> 22835963083295358096932575511191922182123945984
> 2014-07-29 08:20:22.729 UTC [info] 
> <0.5930.1061>@riak_kv_2i_aae:repair_partition:259 Repairing indexes in 
> partition 22835963083295358096932575511191922182123945984
> 2014-07-29 08:20:22.740 UTC [info] 
> <0.5930.1061>@riak_kv_2i_aae:create_index_data_db:324 Creating temporary 
> database of 2i data in /var/lib/riak/anti_entropy/2i/tmp_db
> 2014-07-29 08:20:22.751 UTC [info] 
> <0.5930.1061>@riak_kv_2i_aae:create_index_data_db:361 Grabbing all index data 
> for partition 22835963083295358096932575511191922182123945984
> 2014-07-29 08:25:22.752 UTC [info] 
> <0.5929.1061>@riak_kv_2i_aae:next_partition:160 Finished 2i repair:
>Total partitions: 1
>Finished partitions: 1
>Speed: 100
>Total 2i items scanned: 0
>Total tree objects: 0
>Total objects fixed: 0
> With errors:
> Partition: 22835963083295358096932575511191922182123945984
> Error: index_scan_timeout
>
>
> 2014-07-29 08:25:22.752 UTC [error] <0.4711.1061> gen_server <0.4711.1061> 
> terminated with reason: bad argument in call to 
> eleveldb:async_write(#Ref<0.0.10120.211816>, <<>>, 
> [{put,<<131,104,2,109,0,0,0,20,99,111,110,118,101,114,115,97,116,105,111,110,95,115,101,99,114,...>>,...}],
>  []) in eleveldb:write/3 line 155
> 2014-07-29 08:25:22.753 UTC [error] <0.4711.1061> CRASH REPORT Process 
> <0.4711.1061> with 0 neighbours exited with reason: bad argument in call to 
> eleveldb:async_write(#Ref<0.0.10120.211816>, <<>>, 
> [{put,<<131,104,2,109,0,0,0,20,99,111,110,118,101,114,115,97,116,105,111,110,95,115,101,99,114,...>>,...}],
>  []) in eleveldb:write/3 line 155 in gen_server:terminate/6 line 747
> 2014-07-29 08:25:22.753 UTC [error] <0.1031.0> Supervisor 
> {<0.1031.0>,poolboy_sup} had child riak_core_vnode_worker started with 
> {riak_core_vnode_worker,start_link,undefined} at <0.4711.1061> exit with 
> reason bad argument in call to eleveldb:async_write(#Ref<0.0.10120.211816>, 
> <<>>, 
> [{put,<<131,104,2,109,0,0,0,20,99,111,110,118,101,114,115,97,116,105,111,110,95,115,101,99,114,...>>,...}],
>  []) in eleveldb:write/3 line 155 in context child_terminated
>
>
> Anything I can do about that? What's the issue here?
>
> I'm using Riak 1.4.8 (.deb package).
>
> Cheers
> Simon
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




Re: repair-2i stops with "bad argument in call to eleveldb:async_write"

2014-07-29 Thread bryan hunt
Sounds like disk corruption to me.



Bryan Hunt - Client Services Engineer - Basho Technologies Limited - Registered 
Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431

On 29 Jul 2014, at 13:06, Effenberg, Simon  wrote:

> Sad to say, the issue stays the same, even after the upgrade to
> 1.4.10.
> 
> Any ideas what is happening here?
> 
> Cheers
> Simon
> 
> On Tue, Jul 29, 2014 at 08:46:42AM +, Effenberg, Simon wrote:
>> Already started to prepare everything for it.. :)
>> 
>> On Tue, Jul 29, 2014 at 09:43:22AM +0100, Guido Medina wrote:
>>> Hi Simon,
>>> 
>>> There are some (maybe related) LevelDB fixes in 1.4.9 and 1.4.10. I don't
>>> think there is any harm in doing a rolling upgrade, since nothing major
>>> changed, just bug fixes. Here is the release notes link for
>>> reference:
>>> 
>>> https://github.com/basho/riak/blob/1.4/RELEASE-NOTES.md
>>> 
>>> Best regards,
>>> 
>>> Guido.
>>> 

Re: riak 2.0 - java client 'unavailable' exception

2014-07-29 Thread Joseph Blomstedt
> Is the requirement for having AAE enabled now removed for strong consistency?

Yes. AAE is no longer required to use strong consistency as of RC1.
The strong consistency subsystem now maintains its own completely
separate set of Merkle trees, which are used very differently from how
normal Riak AAE works. This new approach allows the consistency
subsystem to detect and recover from a variety of Byzantine faults.

However, AAE is still necessary to detect and repair any corrupted
cold data (e.g. data that you never read or write).

The consistency subsystem detects corruption on reads/writes and
automatically repairs data, but there is no mechanism that scans over
all data periodically to re-verify it. However, that's precisely what
normal Riak AAE does on a weekly basis, and normal Riak AAE can repair
both eventually consistent and strongly consistent keys.

Thus, you have the same choice with consistent data as you do with
eventually consistent data. Enable AAE and have Riak guarantee the
integrity of all data against bit-rot / silent on-disk corruption, or
disable AAE for improved performance but lose the ability for Riak to
detect/repair corruption of cold data (hot data is always protected).

-Joe
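To make the Merkle-tree mechanism described above concrete, here is a toy Ruby sketch. This is not Riak's actual hashtree implementation (which is persistent and segmented); it only illustrates the idea that comparing roots is cheap and that descending to leaf hashes pinpoints divergent keys:

```ruby
require 'digest'

# Hash every value, keyed and sorted by key, to form the tree's leaves.
def leaf_hashes(kv)
  kv.sort.to_h.transform_values { |v| Digest::SHA1.hexdigest(v) }
end

# A single root hash over all leaf hashes stands in for the tree's root.
def root(leaves)
  Digest::SHA1.hexdigest(leaves.values.join)
end

# Compare two replicas: if the roots match, no scan is needed at all;
# otherwise, walk the leaves to find exactly which keys disagree.
def divergent_keys(a, b)
  la, lb = leaf_hashes(a), leaf_hashes(b)
  return [] if root(la) == root(lb)   # cheap top-level comparison
  (la.keys | lb.keys).select { |k| la[k] != lb[k] }
end

replica1 = { 'k1' => 'v1', 'k2' => 'v2' }
replica2 = { 'k1' => 'v1', 'k2' => 'stale' }
divergent_keys(replica1, replica2)  # => ["k2"]
```

The same shape explains the trade-off in the message above: with the trees (AAE on), divergence in cold data is found by periodic root comparisons; without them, only keys that happen to be read or written get checked.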


On Tue, Jul 29, 2014 at 2:29 PM, Sargun Dhillon  wrote:
> Is the requirement for having AAE enabled now removed for strong consistency?
>
> On Mon, Jul 28, 2014 at 4:55 PM, Joseph Blomstedt  wrote:
>> This means the consistency sub-system is not enabled/active. You can
>> verify this with the output of `riak-admin ensemble-status`.
>>
>> To enable strong consistency you must:
>>
>> 1) Set 'strong_consistency = on' in riak.conf.
>> 2) Have at least a 3 node cluster.
>>
>> You can address #2 by setting up 3+ local developer nodes as detailed
>> in the 5 minute tutorial:
>> http://docs.basho.com/riak/2.0.0/quickstart
>>
>> Alternatively, you can override the need for 3 nodes and use 1 node.
>>
>> To do that,
>>
>> 1) Run 'riak attach' to attach to your Riak's node console
>> 2) Enter (including the period): riak_ensemble_manager:enable().
>> 3) Enter (including the period): riak_core_ring_manager:force_update().
>> 4) Detach from the console using: Ctrl-C a
>>
>> After either approach, re-check `riak-admin ensemble-status`. It may
>> take up to a minute for the consistency sub-system to be enabled.
>>
>> If you haven't already, please take a look at the temporary (until we
>> finish updating docs.basho.com) strong consistency related
>> documentation (linked from the 2.0 RC1 release notes) here:
>> https://github.com/basho/riak_ensemble/blob/wip/riak-2.0-user-docs/riak_consistent_user_docs.md
>>
>> Regards,
>> Joe
>>
>> On Mon, Jul 28, 2014 at 3:05 PM, Jason W  wrote:
>>> Hi,
>>>
>>> I am trying out 2.0 with just one local node and created a strongly consistent
>>> bucket type, but I keep getting the exception below.  If I just use the default
>>> bucket type, everything works fine.  Here is the bucket type detail with the
>>> consistency bit on.
>>>
>>> young_vclock: 20
>>> w: quorum
>>> small_vclock: 50
>>> rw: quorum
>>> r: quorum
>>> pw: 0
>>> precommit: []
>>> pr: 0
>>> postcommit: []
>>> old_vclock: 86400
>>> notfound_ok: true
>>> n_val: 1
>>> linkfun: {modfun,riak_kv_wm_link_walker,mapreduce_linkfun}
>>> last_write_wins: false
>>> dw: quorum
>>> dvv_enabled: true
>>> chash_keyfun: {riak_core_util,chash_std_keyfun}
>>> big_vclock: 50
>>> basic_quorum: false
>>> allow_mult: true
>>> consistent: true
>>> active: true
>>> claimant: 'riak@0.0.0.0'
>>>
>>> Here is the java code
>>>
>>> List<String> addresses = new LinkedList<String>();
>>> addresses.add("172.16.0.254");
>>> RiakClient riakClient = RiakClient.newClient(addresses);
>>>
>>> try {
>>>     Location wildeGeniusQuote = new Location(
>>>         new Namespace("strongly_consistent2", "sample"), emp.getId());
>>>     BinaryValue text =
>>>         BinaryValue.create(objectMapper.writeValueAsBytes(sampleObj));
>>>     RiakObject obj = new RiakObject()
>>>         .setContentType("text/plain")
>>>         .setValue(text);
>>>     StoreValue store = new StoreValue.Builder(obj)
>>>         .withLocation(wildeGeniusQuote)
>>>         .withOption(Option.ASIS, true)
>>>         .withOption(Option.DW, new Quorum(1))
>>>         .withOption(Option.IF_NONE_MATCH, true)
>>>         .withOption(Option.IF_NOT_MODIFIED, true)
>>>         .withOption(Option.PW, new Quorum(1))
>>>         .withOption(Option.N_VAL, 1)
>>>         .withOption(Option.RETURN_BODY, true)
>>>         .withOption(Option.RETURN_HEAD, true)
>>>         .withOption(Option.SLOPPY_QUORUM, true)
>>>         .withOption(Option.TIMEOUT, 1000)
>>>         .withOption(Option.W, new Quorum(1))
>>>         .build();
>>>     riakClient.execute(store);
>>> } catch (Exception e) {
>>>     e.printStackTrace();
>>>     return null;
>>> }
>>>
>>> Am I still missing something? Thanks.
>>>
>>>
>>>
>>> Caused by: com.basho.riak.client.core.netty.RiakResponseException:
>>> unavailable
>>>
>>> at
>>> com.basho.riak.client.core.netty.RiakResponseHandler.channelRead(RiakResponseHandler.java:52)

Re: riak 2.0 - java client 'unavailable' exception

2014-07-29 Thread Eric Redmond
In short, not really. Solr is at best near real-time (NRT). By default, it takes 
Solr one second to soft-commit a value to its index. This timing can be 
lowered, but note that the lower the number, the more it impacts overall system 
performance. One second is the general top speed, but feel free to experiment. 
You can read a bit more about NRT here: 
https://cwiki.apache.org/confluence/display/solr/Near+Real+Time+Searching

Eric
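For reference, the soft-commit interval Eric mentions is a standard Solr setting in solrconfig.xml. A snippet like the following is the usual knob; the exact file location and defaults in a Riak 2.0 / Yokozuna install are an assumption here, so verify against your version before relying on it:

```xml
<!-- solrconfig.xml: how long (in ms) before newly indexed docs become searchable -->
<autoSoftCommit>
  <maxTime>1000</maxTime>
</autoSoftCommit>
```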


On Jul 29, 2014, at 2:24 PM, Jason W  wrote:

> Thanks Joseph and Sean!
> I gave it a quick try with the override approach, and 'riak-admin 
> ensemble-status' did return enabled/active... but when I tried to insert a new 
> entry the server just crashed.  After a few trials and server restarts, the 
> setting somehow got reverted automatically.  Now I am back on a non-consistent 
> bucket type and it works fine. 
> 
> One thing I did notice: when I insert an entry into a bucket through the Java 
> client, I can see the new entry in the bucket through an HTTP curl command 
> immediately, BUT I do not get the new entry's key/id through a Solr query.  
> I wonder if there is a setting I can tune to sort of make the Solr 
> indexing real-time?  Thanks.
> 
> 
> On Mon, Jul 28, 2014 at 8:52 PM, Sean Cribbs  wrote:
> Furthermore, you should almost never use the 'ASIS' option. It does not mean 
> what you think it means (and probably means nothing to strong consistency).
> 
> 

Re: riak 2.0 - java client 'unavailable' exception

2014-07-29 Thread Sargun Dhillon
Is the requirement for having AAE enabled now removed for strong consistency?

On Mon, Jul 28, 2014 at 4:55 PM, Joseph Blomstedt  wrote:
> This means the consistency sub-system is not enabled/active. You can
> verify this with the output of `riak-admin ensemble-status`.
>
> To enable strong consistency you must:
>
> 1) Set 'strong_consistency = on' in riak.conf.
> 2) Have at least a 3 node cluster.
>
> You can address #2 by setting up 3+ local developer nodes as detailed
> in the 5 minute tutorial:
> http://docs.basho.com/riak/2.0.0/quickstart
>
> Alternatively, you can override the need for 3 nodes and use 1 node.
>
> To do that,
>
> 1) Run 'riak attach' to attach to your Riak's node console
> 2) Enter (including the period): riak_ensemble_manager:enable().
> 3) Enter (including the period): riak_core_ring_manager:force_update().
> 3) Detach from the console using: Ctrl-C a
>
> After either approach, re-check `riak-admin ensemble-status`. It may
> take up to a minute for the consistency sub-system to be enabled.
>
> If you haven't already, please take a look at the temporary (until we
> finish updating docs.basho.com) strong consistency related
> documentation (linked from the 2.0 RC1 release notes) here:
> https://github.com/basho/riak_ensemble/blob/wip/riak-2.0-user-docs/riak_consistent_user_docs.md
>
> Regards,
> Joe
>
> On Mon, Jul 28, 2014 at 3:05 PM, Jason W  wrote:
>> Hi,
>>
>> I am trying out 2.0 w/ just one local node, created a strongly consistent
>> bucket type.  But keep getting below exception.  If I just use the default
>> bucket type, everything works fine.  Here is the bucket type detail with
>> consistency bit on.
>>
>> young_vclock: 20
>> w: quorum
>> small_vclock: 50
>> rw: quorum
>> r: quorum
>> pw: 0
>> precommit: []
>> pr: 0
>> postcommit: []
>> old_vclock: 86400
>> notfound_ok: true
>> n_val: 1
>> linkfun: {modfun,riak_kv_wm_link_walker,mapreduce_linkfun}
>> last_write_wins: false
>> dw: quorum
>> dvv_enabled: true
>> chash_keyfun: {riak_core_util,chash_std_keyfun}
>> big_vclock: 50
>> basic_quorum: false
>> allow_mult: true
>> consistent: true
>> active: true
>> claimant: 'riak@0.0.0.0'
>>
>> Here is the java code
>>
>> List addresses = new LinkedList();
>>
>> addresses.add("172.16.0.254");
>>
>> RiakClient  riakClient = RiakClient.newClient(addresses);
>>
>> try {
>>
>> Location wildeGeniusQuote = new Location(new
>> Namespace("strongly_consistent2", "sample"), emp.getId());
>>
>> BinaryValue text =
>> BinaryValue.create(objectMapper.writeValueAsBytes(sampleObj));
>>
>> RiakObject obj = new RiakObject()
>>
>> .setContentType("text/plain")
>>
>> .setValue(text);
>>
>> StoreValue store = new
>> StoreValue.Builder(obj).withLocation(wildeGeniusQuote)
>>
>> .withOption(Option.ASIS, true)
>>
>> .withOption(Option.DW, new Quorum(1))
>>
>> .withOption(Option.IF_NONE_MATCH, true)
>>
>> .withOption(Option.IF_NOT_MODIFIED, true)
>>
>> .withOption(Option.PW, new Quorum(1))
>>
>> .withOption(Option.N_VAL, 1)
>>
>> .withOption(Option.RETURN_BODY, true)
>>
>> .withOption(Option.RETURN_HEAD, true)
>>
>> .withOption(Option.SLOPPY_QUORUM, true)
>>
>> .withOption(Option.TIMEOUT, 1000)
>>
>> .withOption(Option.W, new Quorum(1))
>>
>> .build();
>>
>> riakClient.execute(store);
>>
>> } catch (Exception e) {
>>
>> e.printStackTrace();
>>
>> return null;
>>
>> }
>>
>> Am I still missing something? Thanks.
>>
>>
>>
>> Caused by: com.basho.riak.client.core.netty.RiakResponseException:
>> unavailable
>>
>> at
>> com.basho.riak.client.core.netty.RiakResponseHandler.channelRead(RiakResponseHandler.java:52)
>>
>> at
>> io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:340)
>>
>> at
>> io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:326)
>>
>> at
>> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:155)
>>
>> at
>> io.netty.handler.codec.ByteToMessageCodec.channelRead(ByteToMessageCodec.java:108)
>>
>> at
>> io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:340)
>>
>> at
>> io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:326)
>>
>> at
>> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:785)
>>
>> at
>> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:116)
>>
>> at
>> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:494)
>>
>> at
>> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:461)
>>
>> at
>> io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:378)
>>
>> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:350)
>>
>> at
>> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:101)
>>
>> ... 1 more
>>
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

Re: riak 2.0 - java client 'unavailable' exception

2014-07-29 Thread Jason W
Thanks Joseph and Sean!
I did give it a quick try with the override approach, and 'riak-admin
ensemble-status' did return enabled/active... but when I try to insert a new
entry the server just crashes.  After a few trials and server restarts, the
setting somehow got auto-reverted.  Now I am back with the non-consistent
bucket type and it works fine.

One thing I did notice: when I insert an entry into a bucket
through the Java client, I can see the new entry in the bucket through an
HTTP curl command immediately, but I do not get the new entry's key/id through
a Solr query.  I wonder if there is a setting I can tune to make the
Solr indexing (near) real-time?  Thanks.


On Mon, Jul 28, 2014 at 8:52 PM, Sean Cribbs  wrote:

> Furthermore, you should almost never use the 'ASIS' option. It does not
> mean what you think it means (and probably means nothing to strong
> consistency).
>
>
> On Mon, Jul 28, 2014 at 6:55 PM, Joseph Blomstedt  wrote:
>
>> This means the consistency sub-system is not enabled/active. You can
>> verify this with the output of `riak-admin ensemble-status`.
>>
>> To enable strong consistency you must:
>>
>> 1) Set 'strong_consistency = on' in riak.conf.
>> 2) Have at least a 3 node cluster.
>>
>> You can address #2 by setting up 3+ local developer nodes as detailed
>> in the 5 minute tutorial:
>> http://docs.basho.com/riak/2.0.0/quickstart
>>
>> Alternatively, you can override the need for 3 nodes and use 1 node.
>>
>> To do that,
>>
>> 1) Run 'riak attach' to attach to your Riak's node console
>> 2) Enter (including the period): riak_ensemble_manager:enable().
>> 3) Enter (including the period): riak_core_ring_manager:force_update().
>> 4) Detach from the console using: Ctrl-C a
>>
>> After either approach, re-check `riak-admin ensemble-status`. It may
>> take up to a minute for the consistency sub-system to be enabled.
>>
>> If you haven't already, please take a look at the temporary (until we
>> finish updating docs.basho.com) strong consistency related
>> documentation (linked from the 2.0 RC1 release notes) here:
>>
>> https://github.com/basho/riak_ensemble/blob/wip/riak-2.0-user-docs/riak_consistent_user_docs.md
>>
>> Regards,
>> Joe
>>
>> On Mon, Jul 28, 2014 at 3:05 PM, Jason W  wrote:
>> > Hi,
>> >
>> > I am trying out 2.0 w/ just one local node, created a strongly
>> consistent
>> > bucket type.  But keep getting below exception.  If I just use the
>> default
>> > bucket type, everything works fine.  Here is the bucket type detail with
>> > consistency bit on.
>> >
>> > young_vclock: 20
>> > w: quorum
>> > small_vclock: 50
>> > rw: quorum
>> > r: quorum
>> > pw: 0
>> > precommit: []
>> > pr: 0
>> > postcommit: []
>> > old_vclock: 86400
>> > notfound_ok: true
>> > n_val: 1
>> > linkfun: {modfun,riak_kv_wm_link_walker,mapreduce_linkfun}
>> > last_write_wins: false
>> > dw: quorum
>> > dvv_enabled: true
>> > chash_keyfun: {riak_core_util,chash_std_keyfun}
>> > big_vclock: 50
>> > basic_quorum: false
>> > allow_mult: true
>> > consistent: true
>> > active: true
>> > claimant: 'riak@0.0.0.0'
>> >
>> > Here is the java code
>> >
>> > List<String> addresses = new LinkedList<String>();
>> >
>> > addresses.add("172.16.0.254");
>> >
>> > RiakClient  riakClient = RiakClient.newClient(addresses);
>> >
>> > try {
>> >
>> > Location wildeGeniusQuote = new Location(new
>> > Namespace("strongly_consistent2", "sample"), emp.getId());
>> >
>> > BinaryValue text =
>> > BinaryValue.create(objectMapper.writeValueAsBytes(sampleObj));
>> >
>> > RiakObject obj = new RiakObject()
>> >
>> > .setContentType("text/plain")
>> >
>> > .setValue(text);
>> >
>> > StoreValue store = new
>> > StoreValue.Builder(obj).withLocation(wildeGeniusQuote)
>> >
>> > .withOption(Option.ASIS, true)
>> >
>> > .withOption(Option.DW, new Quorum(1))
>> >
>> > .withOption(Option.IF_NONE_MATCH, true)
>> >
>> > .withOption(Option.IF_NOT_MODIFIED, true)
>> >
>> > .withOption(Option.PW, new Quorum(1))
>> >
>> > .withOption(Option.N_VAL, 1)
>> >
>> > .withOption(Option.RETURN_BODY, true)
>> >
>> > .withOption(Option.RETURN_HEAD, true)
>> >
>> > .withOption(Option.SLOPPY_QUORUM, true)
>> >
>> > .withOption(Option.TIMEOUT, 1000)
>> >
>> > .withOption(Option.W, new Quorum(1))
>> >
>> > .build();
>> >
>> > riakClient.execute(store);
>> >
>> > } catch (Exception e) {
>> >
>> > e.printStackTrace();
>> >
>> > return null;
>> >
>> > }
>> >
>> > Am I still missing something? Thanks.
>> >
>> >
>> >
>> > Caused by: com.basho.riak.client.core.netty.RiakResponseException:
>> > unavailable
>> >
>> > at
>> >
>> com.basho.riak.client.core.netty.RiakResponseHandler.channelRead(RiakResponseHandler.java:52)
>> >
>> > at
>> >
>> io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:340)
>> >
>> > at
>> >
>> io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:326)
>> >
>> > at
>> >
>> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:155)

Re: repair-2i stops with "bad argument in call to eleveldb:async_write"

2014-07-29 Thread Effenberg, Simon
Sad to say, the issue stays the same, even after the upgrade to
1.4.10.

Any ideas what is happening here?

Cheers
Simon

On Tue, Jul 29, 2014 at 08:46:42AM +, Effenberg, Simon wrote:
> Already started to prepare everything for it.. :)
> 
> On Tue, Jul 29, 2014 at 09:43:22AM +0100, Guido Medina wrote:
> > Hi Simon,
> > 
> > There are some (maybe related) LevelDB fixes in 1.4.9 and 1.4.10; I don't
> > think there is any harm in doing a rolling upgrade, since nothing
> > major changed, just bug fixes. Here is the release notes link for
> > reference:
> > 
> > https://github.com/basho/riak/blob/1.4/RELEASE-NOTES.md
> > 
> > Best regards,
> > 
> > Guido.
> > 
> > On 29/07/14 09:35, Effenberg, Simon wrote:
> > >Hi,
> > >
> > >we have some issues with 2i queries like that:
> > >
> > >seffenberg@kriak46-1:~$ while :; do curl -s 
> > >localhost:8098/buckets/conversation/index/createdat_int/0/23182680 | ruby 
> > >-rjson -e "o = JSON.parse(STDIN.read); puts o['keys'].size"; sleep 1; done
> > >
> > >13853
> > >13853
> > >0
> > >557
> > >557
> > >557
> > >13853
> > >0
> > >
> > >
> > >...
> > >
> > >So I tried to start a repair-2i first on one vnode/partition on one node
> > >(which is quite new in the cluster.. 2 weeks or so).
> > >
> > >The command is failing with the following log entries:
> > >
> > >seffenberg@kriak46-7:~$ sudo riak-admin repair-2i 
> > >22835963083295358096932575511191922182123945984
> > >Will repair 2i on these partitions:
> > > 22835963083295358096932575511191922182123945984
> > >Watch the logs for 2i repair progress reports
> > >seffenberg@kriak46-7:~$ 2014-07-29 08:20:22.729 UTC [info] 
> > ><0.5929.1061>@riak_kv_2i_aae:init:139 Starting 2i repair at speed 100 for 
> > >partitions [22835963083295358096932575511191922182123945984]
> > >2014-07-29 08:20:22.729 UTC [info] 
> > ><0.5930.1061>@riak_kv_2i_aae:repair_partition:257 Acquired lock on 
> > >partition 22835963083295358096932575511191922182123945984
> > >2014-07-29 08:20:22.729 UTC [info] 
> > ><0.5930.1061>@riak_kv_2i_aae:repair_partition:259 Repairing indexes in 
> > >partition 22835963083295358096932575511191922182123945984
> > >2014-07-29 08:20:22.740 UTC [info] 
> > ><0.5930.1061>@riak_kv_2i_aae:create_index_data_db:324 Creating temporary 
> > >database of 2i data in /var/lib/riak/anti_entropy/2i/tmp_db
> > >2014-07-29 08:20:22.751 UTC [info] 
> > ><0.5930.1061>@riak_kv_2i_aae:create_index_data_db:361 Grabbing all index 
> > >data for partition 22835963083295358096932575511191922182123945984
> > >2014-07-29 08:25:22.752 UTC [info] 
> > ><0.5929.1061>@riak_kv_2i_aae:next_partition:160 Finished 2i repair:
> > > Total partitions: 1
> > > Finished partitions: 1
> > > Speed: 100
> > > Total 2i items scanned: 0
> > > Total tree objects: 0
> > > Total objects fixed: 0
> > >With errors:
> > >Partition: 22835963083295358096932575511191922182123945984
> > >Error: index_scan_timeout
> > >
> > >
> > >2014-07-29 08:25:22.752 UTC [error] <0.4711.1061> gen_server <0.4711.1061> 
> > >terminated with reason: bad argument in call to 
> > >eleveldb:async_write(#Ref<0.0.10120.211816>, <<>>, 
> > >[{put,<<131,104,2,109,0,0,0,20,99,111,110,118,101,114,115,97,116,105,111,110,95,115,101,99,114,...>>,...}],
> > > []) in eleveldb:write/3 line 155
> > >2014-07-29 08:25:22.753 UTC [error] <0.4711.1061> CRASH REPORT Process 
> > ><0.4711.1061> with 0 neighbours exited with reason: bad argument in call 
> > >to eleveldb:async_write(#Ref<0.0.10120.211816>, <<>>, 
> > >[{put,<<131,104,2,109,0,0,0,20,99,111,110,118,101,114,115,97,116,105,111,110,95,115,101,99,114,...>>,...}],
> > > []) in eleveldb:write/3 line 155 in gen_server:terminate/6 line 747
> > >2014-07-29 08:25:22.753 UTC [error] <0.1031.0> Supervisor 
> > >{<0.1031.0>,poolboy_sup} had child riak_core_vnode_worker started with 
> > >{riak_core_vnode_worker,start_link,undefined} at <0.4711.1061> exit with 
> > >reason bad argument in call to 
> > >eleveldb:async_write(#Ref<0.0.10120.211816>, <<>>, 
> > >[{put,<<131,104,2,109,0,0,0,20,99,111,110,118,101,114,115,97,116,105,111,110,95,115,101,99,114,...>>,...}],
> > > []) in eleveldb:write/3 line 155 in context child_terminated
> > >
> > >
> > >Anything I can do about that? What's the issue here?
> > >
> > >I'm using Riak 1.4.8 (.deb package).
> > >
> > >Cheers
> > >Simon
> > 
> > 
> 

Re: repair-2i stops with "bad argument in call to eleveldb:async_write"

2014-07-29 Thread Effenberg, Simon
Already started to prepare everything for it.. :)

On Tue, Jul 29, 2014 at 09:43:22AM +0100, Guido Medina wrote:
> Hi Simon,
> 
> There are some (maybe related) LevelDB fixes in 1.4.9 and 1.4.10; I don't
> think there is any harm in doing a rolling upgrade, since nothing
> major changed, just bug fixes. Here is the release notes link for
> reference:
> 
> https://github.com/basho/riak/blob/1.4/RELEASE-NOTES.md
> 
> Best regards,
> 
> Guido.
> 
> On 29/07/14 09:35, Effenberg, Simon wrote:
> >Hi,
> >
> >we have some issues with 2i queries like that:
> >
> >seffenberg@kriak46-1:~$ while :; do curl -s 
> >localhost:8098/buckets/conversation/index/createdat_int/0/23182680 | ruby 
> >-rjson -e "o = JSON.parse(STDIN.read); puts o['keys'].size"; sleep 1; done
> >
> >13853
> >13853
> >0
> >557
> >557
> >557
> >13853
> >0
> >
> >
> >...
> >
> >So I tried to start a repair-2i first on one vnode/partition on one node
> >(which is quite new in the cluster.. 2 weeks or so).
> >
> >The command is failing with the following log entries:
> >
> >seffenberg@kriak46-7:~$ sudo riak-admin repair-2i 
> >22835963083295358096932575511191922182123945984
> >Will repair 2i on these partitions:
> > 22835963083295358096932575511191922182123945984
> >Watch the logs for 2i repair progress reports
> >seffenberg@kriak46-7:~$ 2014-07-29 08:20:22.729 UTC [info] 
> ><0.5929.1061>@riak_kv_2i_aae:init:139 Starting 2i repair at speed 100 for 
> >partitions [22835963083295358096932575511191922182123945984]
> >2014-07-29 08:20:22.729 UTC [info] 
> ><0.5930.1061>@riak_kv_2i_aae:repair_partition:257 Acquired lock on partition 
> >22835963083295358096932575511191922182123945984
> >2014-07-29 08:20:22.729 UTC [info] 
> ><0.5930.1061>@riak_kv_2i_aae:repair_partition:259 Repairing indexes in 
> >partition 22835963083295358096932575511191922182123945984
> >2014-07-29 08:20:22.740 UTC [info] 
> ><0.5930.1061>@riak_kv_2i_aae:create_index_data_db:324 Creating temporary 
> >database of 2i data in /var/lib/riak/anti_entropy/2i/tmp_db
> >2014-07-29 08:20:22.751 UTC [info] 
> ><0.5930.1061>@riak_kv_2i_aae:create_index_data_db:361 Grabbing all index 
> >data for partition 22835963083295358096932575511191922182123945984
> >2014-07-29 08:25:22.752 UTC [info] 
> ><0.5929.1061>@riak_kv_2i_aae:next_partition:160 Finished 2i repair:
> > Total partitions: 1
> > Finished partitions: 1
> > Speed: 100
> > Total 2i items scanned: 0
> > Total tree objects: 0
> > Total objects fixed: 0
> >With errors:
> >Partition: 22835963083295358096932575511191922182123945984
> >Error: index_scan_timeout
> >
> >
> >2014-07-29 08:25:22.752 UTC [error] <0.4711.1061> gen_server <0.4711.1061> 
> >terminated with reason: bad argument in call to 
> >eleveldb:async_write(#Ref<0.0.10120.211816>, <<>>, 
> >[{put,<<131,104,2,109,0,0,0,20,99,111,110,118,101,114,115,97,116,105,111,110,95,115,101,99,114,...>>,...}],
> > []) in eleveldb:write/3 line 155
> >2014-07-29 08:25:22.753 UTC [error] <0.4711.1061> CRASH REPORT Process 
> ><0.4711.1061> with 0 neighbours exited with reason: bad argument in call to 
> >eleveldb:async_write(#Ref<0.0.10120.211816>, <<>>, 
> >[{put,<<131,104,2,109,0,0,0,20,99,111,110,118,101,114,115,97,116,105,111,110,95,115,101,99,114,...>>,...}],
> > []) in eleveldb:write/3 line 155 in gen_server:terminate/6 line 747
> >2014-07-29 08:25:22.753 UTC [error] <0.1031.0> Supervisor 
> >{<0.1031.0>,poolboy_sup} had child riak_core_vnode_worker started with 
> >{riak_core_vnode_worker,start_link,undefined} at <0.4711.1061> exit with 
> >reason bad argument in call to eleveldb:async_write(#Ref<0.0.10120.211816>, 
> ><<>>, 
> >[{put,<<131,104,2,109,0,0,0,20,99,111,110,118,101,114,115,97,116,105,111,110,95,115,101,99,114,...>>,...}],
> > []) in eleveldb:write/3 line 155 in context child_terminated
> >
> >
> >Anything I can do about that? What's the issue here?
> >
> >I'm using Riak 1.4.8 (.deb package).
> >
> >Cheers
> >Simon
> 
> 

-- 
Simon Effenberg | Site Op | mobile.international GmbH

Phone:+ 49. 30. 8109. 7173
M-Phone:  + 49. 151. 5266. 1558
Mail: seffenb...@team.mobile.de
Web:  www.mobile.de

Marktplatz 1 | 14532 Europarc Dreilinden | Germany

__
Geschäftsführer: Malte Krüger
HRB Nr.: 18517 P, Amtsgericht Potsdam
Sitz der Gesellschaft: Kleinmachnow
__

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: repair-2i stops with "bad argument in call to eleveldb:async_write"

2014-07-29 Thread Guido Medina

Hi Simon,

There are some (maybe related) LevelDB fixes in 1.4.9 and 1.4.10; I 
don't think there is any harm in doing a rolling upgrade, since 
nothing major changed, just bug fixes. Here is the release notes link 
for reference:


https://github.com/basho/riak/blob/1.4/RELEASE-NOTES.md

Best regards,

Guido.

On 29/07/14 09:35, Effenberg, Simon wrote:

Hi,

we have some issues with 2i queries like that:

seffenberg@kriak46-1:~$ while :; do curl -s 
localhost:8098/buckets/conversation/index/createdat_int/0/23182680 | ruby -rjson -e 
"o = JSON.parse(STDIN.read); puts o['keys'].size"; sleep 1; done

13853
13853
0
557
557
557
13853
0


...

So I tried to start a repair-2i first on one vnode/partition on one node
(which is quite new in the cluster.. 2 weeks or so).

The command is failing with the following log entries:

seffenberg@kriak46-7:~$ sudo riak-admin repair-2i 
22835963083295358096932575511191922182123945984
Will repair 2i on these partitions:
 22835963083295358096932575511191922182123945984
Watch the logs for 2i repair progress reports
seffenberg@kriak46-7:~$ 2014-07-29 08:20:22.729 UTC [info] 
<0.5929.1061>@riak_kv_2i_aae:init:139 Starting 2i repair at speed 100 for 
partitions [22835963083295358096932575511191922182123945984]
2014-07-29 08:20:22.729 UTC [info] 
<0.5930.1061>@riak_kv_2i_aae:repair_partition:257 Acquired lock on partition 
22835963083295358096932575511191922182123945984
2014-07-29 08:20:22.729 UTC [info] 
<0.5930.1061>@riak_kv_2i_aae:repair_partition:259 Repairing indexes in 
partition 22835963083295358096932575511191922182123945984
2014-07-29 08:20:22.740 UTC [info] 
<0.5930.1061>@riak_kv_2i_aae:create_index_data_db:324 Creating temporary 
database of 2i data in /var/lib/riak/anti_entropy/2i/tmp_db
2014-07-29 08:20:22.751 UTC [info] 
<0.5930.1061>@riak_kv_2i_aae:create_index_data_db:361 Grabbing all index data 
for partition 22835963083295358096932575511191922182123945984
2014-07-29 08:25:22.752 UTC [info] 
<0.5929.1061>@riak_kv_2i_aae:next_partition:160 Finished 2i repair:
 Total partitions: 1
 Finished partitions: 1
 Speed: 100
 Total 2i items scanned: 0
 Total tree objects: 0
 Total objects fixed: 0
With errors:
Partition: 22835963083295358096932575511191922182123945984
Error: index_scan_timeout


2014-07-29 08:25:22.752 UTC [error] <0.4711.1061> gen_server <0.4711.1061> terminated with reason: bad 
argument in call to eleveldb:async_write(#Ref<0.0.10120.211816>, <<>>, 
[{put,<<131,104,2,109,0,0,0,20,99,111,110,118,101,114,115,97,116,105,111,110,95,115,101,99,114,...>>,...}], 
[]) in eleveldb:write/3 line 155
2014-07-29 08:25:22.753 UTC [error] <0.4711.1061> CRASH REPORT Process <0.4711.1061> with 0 neighbours exited 
with reason: bad argument in call to eleveldb:async_write(#Ref<0.0.10120.211816>, <<>>, 
[{put,<<131,104,2,109,0,0,0,20,99,111,110,118,101,114,115,97,116,105,111,110,95,115,101,99,114,...>>,...}], 
[]) in eleveldb:write/3 line 155 in gen_server:terminate/6 line 747
2014-07-29 08:25:22.753 UTC [error] <0.1031.0> Supervisor {<0.1031.0>,poolboy_sup} had child riak_core_vnode_worker 
started with {riak_core_vnode_worker,start_link,undefined} at <0.4711.1061> exit with reason bad argument in call to 
eleveldb:async_write(#Ref<0.0.10120.211816>, <<>>, 
[{put,<<131,104,2,109,0,0,0,20,99,111,110,118,101,114,115,97,116,105,111,110,95,115,101,99,114,...>>,...}], []) in 
eleveldb:write/3 line 155 in context child_terminated


Anything I can do about that? What's the issue here?

I'm using Riak 1.4.8 (.deb package).

Cheers
Simon



___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: repair-2i stops with "bad argument in call to eleveldb:async_write"

2014-07-29 Thread Effenberg, Simon
One thing I realized just now: the new node is on wheezy whereas all
12 other nodes are running squeeze. But that shouldn't be a problem, right?

On Tue, Jul 29, 2014 at 08:35:15AM +, Effenberg, Simon wrote:
> Hi,
> 
> we have some issues with 2i queries like that:
> 
> seffenberg@kriak46-1:~$ while :; do curl -s 
> localhost:8098/buckets/conversation/index/createdat_int/0/23182680 | ruby 
> -rjson -e "o = JSON.parse(STDIN.read); puts o['keys'].size"; sleep 1; done
> 
> 13853
> 13853
> 0
> 557
> 557
> 557
> 13853
> 0
> 
> 
> ...
> 
> So I tried to start a repair-2i first on one vnode/partition on one node
> (which is quite new in the cluster.. 2 weeks or so).
> 
> The command is failing with the following log entries:
> 
> seffenberg@kriak46-7:~$ sudo riak-admin repair-2i 
> 22835963083295358096932575511191922182123945984
> Will repair 2i on these partitions:
> 22835963083295358096932575511191922182123945984
> Watch the logs for 2i repair progress reports
> seffenberg@kriak46-7:~$ 2014-07-29 08:20:22.729 UTC [info] 
> <0.5929.1061>@riak_kv_2i_aae:init:139 Starting 2i repair at speed 100 for 
> partitions [22835963083295358096932575511191922182123945984]
> 2014-07-29 08:20:22.729 UTC [info] 
> <0.5930.1061>@riak_kv_2i_aae:repair_partition:257 Acquired lock on partition 
> 22835963083295358096932575511191922182123945984
> 2014-07-29 08:20:22.729 UTC [info] 
> <0.5930.1061>@riak_kv_2i_aae:repair_partition:259 Repairing indexes in 
> partition 22835963083295358096932575511191922182123945984
> 2014-07-29 08:20:22.740 UTC [info] 
> <0.5930.1061>@riak_kv_2i_aae:create_index_data_db:324 Creating temporary 
> database of 2i data in /var/lib/riak/anti_entropy/2i/tmp_db
> 2014-07-29 08:20:22.751 UTC [info] 
> <0.5930.1061>@riak_kv_2i_aae:create_index_data_db:361 Grabbing all index data 
> for partition 22835963083295358096932575511191922182123945984
> 2014-07-29 08:25:22.752 UTC [info] 
> <0.5929.1061>@riak_kv_2i_aae:next_partition:160 Finished 2i repair:
> Total partitions: 1
> Finished partitions: 1
> Speed: 100
> Total 2i items scanned: 0
> Total tree objects: 0
> Total objects fixed: 0
> With errors:
> Partition: 22835963083295358096932575511191922182123945984
> Error: index_scan_timeout
> 
> 
> 2014-07-29 08:25:22.752 UTC [error] <0.4711.1061> gen_server <0.4711.1061> 
> terminated with reason: bad argument in call to 
> eleveldb:async_write(#Ref<0.0.10120.211816>, <<>>, 
> [{put,<<131,104,2,109,0,0,0,20,99,111,110,118,101,114,115,97,116,105,111,110,95,115,101,99,114,...>>,...}],
>  []) in eleveldb:write/3 line 155
> 2014-07-29 08:25:22.753 UTC [error] <0.4711.1061> CRASH REPORT Process 
> <0.4711.1061> with 0 neighbours exited with reason: bad argument in call to 
> eleveldb:async_write(#Ref<0.0.10120.211816>, <<>>, 
> [{put,<<131,104,2,109,0,0,0,20,99,111,110,118,101,114,115,97,116,105,111,110,95,115,101,99,114,...>>,...}],
>  []) in eleveldb:write/3 line 155 in gen_server:terminate/6 line 747
> 2014-07-29 08:25:22.753 UTC [error] <0.1031.0> Supervisor 
> {<0.1031.0>,poolboy_sup} had child riak_core_vnode_worker started with 
> {riak_core_vnode_worker,start_link,undefined} at <0.4711.1061> exit with 
> reason bad argument in call to eleveldb:async_write(#Ref<0.0.10120.211816>, 
> <<>>, 
> [{put,<<131,104,2,109,0,0,0,20,99,111,110,118,101,114,115,97,116,105,111,110,95,115,101,99,114,...>>,...}],
>  []) in eleveldb:write/3 line 155 in context child_terminated
> 
> 
> Anything I can do about that? What's the issue here?
> 
> I'm using Riak 1.4.8 (.deb package).
> 
> Cheers
> Simon

-- 
Simon Effenberg | Site Op | mobile.international GmbH

Phone:+ 49. 30. 8109. 7173
M-Phone:  + 49. 151. 5266. 1558
Mail: seffenb...@team.mobile.de
Web:  www.mobile.de

Marktplatz 1 | 14532 Europarc Dreilinden | Germany

__
Geschäftsführer: Malte Krüger
HRB Nr.: 18517 P, Amtsgericht Potsdam
Sitz der Gesellschaft: Kleinmachnow
__

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


repair-2i stops with "bad argument in call to eleveldb:async_write"

2014-07-29 Thread Effenberg, Simon
Hi,

we have some issues with 2i queries like that:

seffenberg@kriak46-1:~$ while :; do curl -s 
localhost:8098/buckets/conversation/index/createdat_int/0/23182680 | ruby 
-rjson -e "o = JSON.parse(STDIN.read); puts o['keys'].size"; sleep 1; done

13853
13853
0
557
557
557
13853
0


...
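The inline Ruby one-liner in the loop above can be pulled into a small helper for clarity — a sketch that assumes only that the 2i response body has the shape `{"keys": [...]}`; the helper name `count_2i_keys` is introduced here for illustration:

```ruby
require 'json'

# Count the keys returned by a 2i range query response body.
# Returns 0 when the "keys" field is missing or null.
def count_2i_keys(body)
  parsed = JSON.parse(body)
  (parsed['keys'] || []).size
end
```

In the loop above, the `-rjson -e "..."` one-liner is then equivalent to `puts count_2i_keys(STDIN.read)`.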

So I tried to start a repair-2i first on one vnode/partition on one node
(which is quite new in the cluster.. 2 weeks or so).

The command is failing with the following log entries:

seffenberg@kriak46-7:~$ sudo riak-admin repair-2i 
22835963083295358096932575511191922182123945984
Will repair 2i on these partitions:
22835963083295358096932575511191922182123945984
Watch the logs for 2i repair progress reports
seffenberg@kriak46-7:~$ 2014-07-29 08:20:22.729 UTC [info] 
<0.5929.1061>@riak_kv_2i_aae:init:139 Starting 2i repair at speed 100 for 
partitions [22835963083295358096932575511191922182123945984]
2014-07-29 08:20:22.729 UTC [info] 
<0.5930.1061>@riak_kv_2i_aae:repair_partition:257 Acquired lock on partition 
22835963083295358096932575511191922182123945984
2014-07-29 08:20:22.729 UTC [info] 
<0.5930.1061>@riak_kv_2i_aae:repair_partition:259 Repairing indexes in 
partition 22835963083295358096932575511191922182123945984
2014-07-29 08:20:22.740 UTC [info] 
<0.5930.1061>@riak_kv_2i_aae:create_index_data_db:324 Creating temporary 
database of 2i data in /var/lib/riak/anti_entropy/2i/tmp_db
2014-07-29 08:20:22.751 UTC [info] 
<0.5930.1061>@riak_kv_2i_aae:create_index_data_db:361 Grabbing all index data 
for partition 22835963083295358096932575511191922182123945984
2014-07-29 08:25:22.752 UTC [info] 
<0.5929.1061>@riak_kv_2i_aae:next_partition:160 Finished 2i repair:
Total partitions: 1
Finished partitions: 1
Speed: 100
Total 2i items scanned: 0
Total tree objects: 0
Total objects fixed: 0
With errors:
Partition: 22835963083295358096932575511191922182123945984
Error: index_scan_timeout


2014-07-29 08:25:22.752 UTC [error] <0.4711.1061> gen_server <0.4711.1061> 
terminated with reason: bad argument in call to 
eleveldb:async_write(#Ref<0.0.10120.211816>, <<>>, 
[{put,<<131,104,2,109,0,0,0,20,99,111,110,118,101,114,115,97,116,105,111,110,95,115,101,99,114,...>>,...}],
 []) in eleveldb:write/3 line 155
2014-07-29 08:25:22.753 UTC [error] <0.4711.1061> CRASH REPORT Process 
<0.4711.1061> with 0 neighbours exited with reason: bad argument in call to 
eleveldb:async_write(#Ref<0.0.10120.211816>, <<>>, 
[{put,<<131,104,2,109,0,0,0,20,99,111,110,118,101,114,115,97,116,105,111,110,95,115,101,99,114,...>>,...}],
 []) in eleveldb:write/3 line 155 in gen_server:terminate/6 line 747
2014-07-29 08:25:22.753 UTC [error] <0.1031.0> Supervisor 
{<0.1031.0>,poolboy_sup} had child riak_core_vnode_worker started with 
{riak_core_vnode_worker,start_link,undefined} at <0.4711.1061> exit with reason 
bad argument in call to eleveldb:async_write(#Ref<0.0.10120.211816>, <<>>, 
[{put,<<131,104,2,109,0,0,0,20,99,111,110,118,101,114,115,97,116,105,111,110,95,115,101,99,114,...>>,...}],
 []) in eleveldb:write/3 line 155 in context child_terminated
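The `<<131,104,2,109,...>>` in the crash report is an Erlang external-term-format (`term_to_binary`) encoding of the key being put: 131 is the format magic, 104/2 a 2-tuple header, and 109 a 20-byte binary whose visible bytes spell out a bucket name. The log truncates the rest, so only the prefix is recoverable — a sketch decoding just that prefix:

```ruby
# Bytes copied verbatim from the truncated put key in the crash report above.
PREFIX = [131, 104, 2, 109, 0, 0, 0, 20,
          99, 111, 110, 118, 101, 114, 115, 97, 116, 105, 111, 110,
          95, 115, 101, 99, 114]

def decode_prefix(bytes)
  raise 'not external term format' unless bytes[0] == 131  # ETF version magic
  raise 'not a small tuple'        unless bytes[1] == 104  # SMALL_TUPLE_EXT
  arity = bytes[2]                                         # tuple arity
  raise 'not a binary'             unless bytes[3] == 109  # BINARY_EXT
  len = bytes[4..7].inject(0) { |n, b| (n << 8) | b }      # 32-bit big-endian length
  { arity: arity,
    binary_length: len,
    visible_text: bytes[8..-1].pack('C*') }                # whatever survived the log
end
```

Decoding `PREFIX` shows a 2-tuple whose first element is a 20-byte binary beginning `conversation_secr`, i.e. the failing write targets an entry for the `conversation` bucket family mentioned in the 2i query above.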


Anything I can do about that? What's the issue here?

I'm using Riak 1.4.8 (.deb package).

Cheers
Simon
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com