Re: Riak - deleted objects not going away

2016-05-11 Thread Daniel Abrahamsson
Hi David,

Look at the documentation for object deletion:
http://docs.basho.com/riak/kv/2.1.4/using/cluster-operations/object-deletion/

There is also a blog post talking about this:
http://basho.com/posts/technical/riaks-config-behaviors-part-3/

When deleting an object, Riak will first write a tombstone value. By
default, the tombstone lives for 3 seconds before the object is removed by
the backend (which may create a second, backend-specific form of
tombstone). If you do a GET on the key of the deleted object before
these 3 seconds have passed, you will get a not_found as expected.
However, the key will still show up in key listings, map-reduce jobs,
etc.
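To make the window concrete, here is a toy model of the behaviour described
above (plain Python, not Riak code; the class name and the explicit `now`
clock are illustrative, and the 3-second constant mirrors the default delay):

```python
class ToyStore:
    """Toy model of Riak's delete window: a delete first writes a
    tombstone, which is only reaped after a delay (3s by default)."""

    DELETE_MODE_SECS = 3  # illustrative stand-in for the default delay

    def __init__(self):
        self.backend = {}  # key -> (value, tombstone_deadline or None)

    def put(self, key, value):
        self.backend[key] = (value, None)

    def get(self, key, now):
        self._reap(now)
        entry = self.backend.get(key)
        if entry is None or entry[1] is not None:
            return "not_found"  # a tombstone reads as not_found
        return entry[0]

    def delete(self, key, now):
        if key in self.backend:
            # replace the value with a tombstone instead of removing the key
            self.backend[key] = (None, now + self.DELETE_MODE_SECS)

    def list_keys(self, now):
        self._reap(now)
        # listings still include tombstoned keys until they are reaped
        return sorted(self.backend)

    def _reap(self, now):
        for k, (_v, deadline) in list(self.backend.items()):
            if deadline is not None and now >= deadline:
                del self.backend[k]

store = ToyStore()
store.put("k1", "v1")
store.delete("k1", now=0)
print(store.get("k1", now=1))   # reads as not_found right after the delete
print(store.list_keys(now=1))   # but the key still shows up in listings
print(store.list_keys(now=10))  # gone once the window has passed
```

This is exactly the window David is hitting below: GETs see not_found
immediately, while key listings keep returning the tombstoned keys until
they are reaped.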

See the links above for more details.

//Daniel

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Precommit hook function - no error log - how to debug?

2016-05-11 Thread Sanket Agrawal
Luke, yep, that is right. I did that smoke test when debugging the precommit
issue. Each of the three nodes has access to the compiled beam files it needs
(in a shared folder that each node is configured for, via add_paths in
advanced.config, to make sure they have the same beam files). I verified it
manually in the console for each node.

On Wed, May 11, 2016 at 4:20 PM, Luke Bakken  wrote:

> Hi Sanket,
>
> Another thing I'd like to confirm - you have installed your compiled
> .beam file on all Riak nodes and can confirm via "m(precommit)" that
> the code is available?
> --
> Luke Bakken
> Engineer
> lbak...@basho.com
>
>
> On Wed, May 11, 2016 at 12:12 PM, Sanket Agrawal
>  wrote:
> > Yep, I can always load the module in question in riak console fine. I did
> > all my testing in riak console, before trying to turn on precommit hook.
> > Here is the output for loading the module that you asked for - if sasl
> > logging is not working, perhaps something is broken about commit hooks
> > then:
> >
> >> $ ~/riak/riak1/bin/riak attach
> >> (riak1@127.0.0.1)1> m(precommit).
> >> Module precommit compiled: Date: May 11 2016, Time: 02.05
> >> Compiler options:  []
> >> Object file: /home/ec2-user/riak/customcode/precommit.beam
> >> Exports:
> >>  module_info/0
> >>  module_info/1
> >>  pre_all/1
> >>  pre_uauth/1
> >>  pre_uprofile/1
> >>  pre_uuid/1
> >> ok
> >
> >
> > On Wed, May 11, 2016 at 2:55 PM, Douglas Rohrer 
> wrote:
> >>
> >> Huh - I get a huge amount of logging when I turn on sasl using
> >> advanced.config - specifically, I have:
> >>
> >> {sasl,[{sasl_error_logger,{file, "/tmp/sasl-1.log"}}]}
> >>
> >> in my advanced.config, and for just a startup/shutdown cycle I get a
> >> 191555 byte file.
> >>
> >> Just to confirm that you can, in fact, load the modules in question:
> >> what happens if you do a `riak-admin attach` and then `m(precommit).`?
> >> What do you see?
> >>
> >> Doug
> >>
> >> On Wed, May 11, 2016 at 11:32 AM Sanket Agrawal <
> sanket.agra...@gmail.com>
> >> wrote:
> >>>
> >>> Thanks, Doug. I have enabled sasl logging now through advanced.config
> >>> though it doesn't seem to be creating any log yet.
> >>>
> >>> If this might help you folks with debugging the precommit issue, what I
> >>> have observed is that the erl-reload command doesn't load the precommit
> >>> modules on any of the three nodes (though the precommit hook has been
> >>> enabled on one of the buckets for testing).
> 
>  $ ~/riak/riak1/bin/riak-admin erl-reload
>  Module precommit not yet loaded, skipped.
>  Module rutils not yet loaded, skipped.
> >>>
> >>>
> >>>
> >>> On Wed, May 11, 2016 at 2:05 PM, Douglas Rohrer 
> >>> wrote:
> 
>  As to the SASL logging, unfortunately it's not "on by default" and the
>  setting in riak.conf, as you found out, doesn't work correctly. However,
>  you can enable SASL via adding a setting to your advanced.config:
> 
>  {sasl,[{sasl_error_logger,tty}]} %% Enable TTY output for the SASL app
>  {sasl,[{sasl_error_logger,{file, "/path/to/log"}}]} %% Enable SASL and
>  output to "/path/to/log" file
> 
>  We're evaluating if we shouldn't just remove the sasl setting from
>  riak.conf altogether, as you're the first person (that we know of) since
>  2012 that has tried to turn it on and noticed this bug.
> 
>  Doug
> 
>  On Wed, May 11, 2016 at 10:14 AM Luke Bakken 
> wrote:
> >
> > Hi Sanket -
> >
> > I'd like to confirm some details. Is this a one-node cluster? Did you
> > install an official package or build from source?
> >
> > Thanks -
> > --
> > Luke Bakken
> > Engineer
> > lbak...@basho.com
> >
> >
> > On Tue, May 10, 2016 at 6:49 PM, Sanket Agrawal
> >  wrote:
> > > One more thing - I set up the hooks by bucket, not bucket type. The
> > > documentation for 2.1.4 says that hooks are defined on the bucket
> > > level.
> > > Here is how I set up precommit hook (derived from "Riak Handbook"
> > > p95):
> > >
> > > curl -X PUT localhost:8098/types/test_kv_wo/buckets/uuid_log/props -H
> > > 'Content-Type: application/json' -d '{ "props": { "precommit": [{"mod":
> > > "precommit", "fun": "pre_uuid"}]}}' -v
> > >
> > >
> > > On Tue, May 10, 2016 at 9:15 PM, Sanket Agrawal
> > > 
> > > wrote:
> > >>
> > >> I just set up a precommit hook function in dev environment (KV 2.1.4)
> > >> which doesn't seem to be triggering off at all. The object is being
> > >> stored in the bucket, but the precommit logic is not kicking off. I
> > >> checked a couple of things as listed below but came up with no error -
> > >> so, it is a head-scratcher why 

Re: Precommit hook function - no error log - how to debug?

2016-05-11 Thread Luke Bakken
Hi Sanket,

Another thing I'd like to confirm - you have installed your compiled
.beam file on all Riak nodes and can confirm via "m(precommit)" that
the code is available?
--
Luke Bakken
Engineer
lbak...@basho.com


On Wed, May 11, 2016 at 12:12 PM, Sanket Agrawal
 wrote:
> Yep, I can always load the module in question in the riak console fine. I
> did all my testing in the riak console before trying to turn on the
> precommit hook. Here is the output for loading the module that you asked
> for - if sasl logging is not working, then perhaps something is broken
> about commit hooks:
>
>> $ ~/riak/riak1/bin/riak attach
>> (riak1@127.0.0.1)1> m(precommit).
>> Module precommit compiled: Date: May 11 2016, Time: 02.05
>> Compiler options:  []
>> Object file: /home/ec2-user/riak/customcode/precommit.beam
>> Exports:
>>  module_info/0
>>  module_info/1
>>  pre_all/1
>>  pre_uauth/1
>>  pre_uprofile/1
>>  pre_uuid/1
>> ok
>
>
> On Wed, May 11, 2016 at 2:55 PM, Douglas Rohrer  wrote:
>>
>> Huh - I get a huge amount of logging when I turn on sasl using
>> advanced.config - specifically, I have:
>>
>> {sasl,[{sasl_error_logger,{file, "/tmp/sasl-1.log"}}]}
>>
>> in my advanced.config, and for just a startup/shutdown cycle I get a
>> 191555 byte file.
>>
>> Just to confirm that you can, in fact, load the modules in question, what
>> happens if you do a `riak-admin attach` and do `m(precommit).` what do you
>> see?
>>
>> Doug
>>
>> On Wed, May 11, 2016 at 11:32 AM Sanket Agrawal 
>> wrote:
>>>
>>> Thanks, Doug. I have enabled sasl logging now through advanced.config
>>> though it doesn't seem to be creating any log yet.
>>>
>>> If this might help you folks with debugging precommit issue, what I have
>>> observed is that erl-reload command doesn't load the precommit modules for
>>> any of the three nodes (though precommit hook has been enabled on one of the
>>> buckets for testing).

 $ ~/riak/riak1/bin/riak-admin erl-reload
 Module precommit not yet loaded, skipped.
 Module rutils not yet loaded, skipped.
>>>
>>>
>>>
>>> On Wed, May 11, 2016 at 2:05 PM, Douglas Rohrer 
>>> wrote:

 As to the SASL logging, unfortunately it's not "on by default" and the
 setting in riak.conf, as you found out, doesn't work correctly. However,
 you can enable SASL via adding a setting to your advanced.config:

 {sasl,[{sasl_error_logger,tty}]} %% Enable TTY output for the SASL app
 {sasl,[{sasl_error_logger,{file, "/path/to/log"}}]} %% Enable SASL and
 output to "/path/to/log" file

 We're evaluating if we shouldn't just remove the sasl setting from
 riak.conf altogether, as you're the first person (that we know of) since
 2012 that has tried to turn it on and noticed this bug.

 Doug

 On Wed, May 11, 2016 at 10:14 AM Luke Bakken  wrote:
>
> Hi Sanket -
>
> I'd like to confirm some details. Is this a one-node cluster? Did you
> install an official package or build from source?
>
> Thanks -
> --
> Luke Bakken
> Engineer
> lbak...@basho.com
>
>
> On Tue, May 10, 2016 at 6:49 PM, Sanket Agrawal
>  wrote:
> > One more thing - I set up the hooks by bucket, not bucket type. The
> > documentation for 2.1.4 says that hooks are defined on the bucket
> > level.
> > Here is how I set up precommit hook (derived from "Riak Handbook"
> > p95):
> >
> > curl -X PUT localhost:8098/types/test_kv_wo/buckets/uuid_log/props -H
> > 'Content-Type: application/json' -d '{ "props": { "precommit":
> > [{"mod":
> > "precommit", "fun": "pre_uuid"}]}}' -v
> >
> >
> > On Tue, May 10, 2016 at 9:15 PM, Sanket Agrawal
> > 
> > wrote:
> >>
> >> I just set up a precommit hook function in dev environment (KV 2.1.4)
> >> which doesn't seem to be triggering off at all. The object is being
> >> stored in the bucket, but the precommit logic is not kicking off. I
> >> checked a couple of things as listed below but came up with no error -
> >> so, it is a head-scratcher why precommit hook is not triggering:
> >>>
> >>> - Verify precommit is set in bucket properties - snippet from curl
> >>> query
> >>> for bucket props below:
> >>> "precommit":[{"mod":"precommit","fun":"pre_uuid"}]
> >>>
> >>> - check there is no error in logs
> >>>
> >>> - check riak-console for commit errors:
> >>> $ ./riak1/bin/riak-admin status|grep commit
> >>> postcommit_fail : 0
> >>> precommit_fail : 0
> >>>
> >>> - Run the precommit function manually on Riak console itself with a
> >>> riak
> >>> object (that the hook failed to trigger on), and verify it works
> >>
> >>
> >>
> >> 

Riak - deleted objects not going away

2016-05-11 Thread AWS

I have a test database with 158 items in it. Some keys are Riak generated and 
some are explicitly set. I have two methods. One does a single delete of an 
object whilst the other goes through a loop and deletes all of them.

When I run the deleteAllFromBucket: aBucket it loops through all 158. If I then 
look at the number of keys, there are still 158. If I run the 
deleteAllFromBucket: aBucket again, it still loops over 158 but I get a 404 
response for each.

If I do a single delete and specify the key, the next time I run 
deleteAllFromBucket: aBucket, I get one less in the count.

Why would a single delete remove the objects entirely when the loop doesn't?

The loop calls the single delete so I am using the same code for both.

I am a newbie to Riak (1 week), so any explanation would be helpful.

Long Haired David AKA David Pennington 



Re: Schemas, is worth to store values in SOLR?

2016-05-11 Thread Alexander Sicular
That's a great use case because it's not ad hoc (the worst case). Your
precompute/cache solution will work whichever approach you take. The
question then just becomes space vs. compute.

On Wednesday, May 11, 2016, Alex De la rosa  wrote:

> My use case for searching is mainly for internal purposes, rankings and
> statistics (all that data is pre-compiled and stored into final objects for
> the app to display)... so I think it is best not to store anything in SOLR and
> just fetch keys to compile the data when required.
>
> Thanks,
> Alex
>
> On Wed, May 11, 2016 at 10:40 PM, Alexander Sicular  > wrote:
>
>> Those are exactly the two options, and opinions vary generally based on
>> use case. Storing the data not only takes up more space but also more I/O,
>> which makes things slower not only at read time but, more crucially, at
>> write time.
>>
>> Often people will take a hybrid approach and store certain elements:
>> say, for blog posts, the author, publish date, and title fields. Yet
>> they will leave the body out of the Solr index. That way you can quickly
>> generate lists of posts by title and only fetch the body when the post is
>> clicked through.
>>
>> What is your use case?
>>
>> Best,
>> Alexander
>>
>> On Wednesday, May 11, 2016, Alex De la rosa > > wrote:
>>
>>> Hi all,
>>>
>>> When creating a SOLR schema for Riak Search, we can choose whether or
>>> not to store the data we are indexing, for example:
>>>
>>> 
>>>
>>> I know that the point of storing the value is to have it returned
>>> automatically when doing a search query... but that implies using more
>>> disk to store data that may never be searched, and making the return
>>> slower as more bytes are required to get the data.
>>>
>>> Would it be better to just index data but not store the values,
>>> returning only Riak IDs (_yz_id) and then doing a multi-get in the
>>> client/API to fetch the objects for the final response?
>>>
>>> Or would it be better to store the values in SOLR so they will be
>>> already fetched when searching?
>>>
>>> What would give better performance, or make more sense in terms of disk
>>> space, for an application that won't normally use search much (all data
>>> is more or less discoverable without searching, using GETs)?
>>>
>>> Thanks and Best Regards,
>>> Alex
>>>
>>
>>
>> --
>>
>>
>> Alexander Sicular
>> Solutions Architect
>> Basho Technologies
>> 9175130679
>> @siculars
>>
>>
>

-- 


Alexander Sicular
Solutions Architect
Basho Technologies
9175130679
@siculars


Re: Precommit hook function - no error log - how to debug?

2016-05-11 Thread Sanket Agrawal
Yep, I can always load the module in question in the riak console fine. I did
all my testing in the riak console before trying to turn on the precommit
hook. Here is the output for loading the module that you asked for - if sasl
logging is not working, then perhaps something is broken about commit hooks:

> $ ~/riak/riak1/bin/riak attach
> (riak1@127.0.0.1)1> m(precommit).
> Module precommit compiled: Date: May 11 2016, Time: 02.05
> Compiler options:  []
> Object file: /home/ec2-user/riak/customcode/precommit.beam
> Exports:
>  module_info/0
>  module_info/1
>  pre_all/1
>  pre_uauth/1
>  pre_uprofile/1
>  pre_uuid/1
> ok


On Wed, May 11, 2016 at 2:55 PM, Douglas Rohrer  wrote:

> Huh - I get a huge amount of logging when I turn on sasl using
> advanced.config - specifically, I have:
>
> {sasl,[{sasl_error_logger,{file, "/tmp/sasl-1.log"}}]}
>
> in my advanced.config, and for just a startup/shutdown cycle I get
> a 191555 byte file.
>
> Just to confirm that you can, in fact, load the modules in question, what
> happens if you do a `riak-admin attach` and do `m(precommit).` what do you
> see?
>
> Doug
>
> On Wed, May 11, 2016 at 11:32 AM Sanket Agrawal 
> wrote:
>
>> Thanks, Doug. I have enabled sasl logging now through advanced.config
>> though it doesn't seem to be creating any log yet.
>>
>> If this might help you folks with debugging precommit issue, what I have
>> observed is that erl-reload command doesn't load the precommit modules for
>> any of the three nodes (though precommit hook has been enabled on one of
>> the buckets for testing).
>>
>>> $ ~/riak/riak1/bin/riak-admin erl-reload
>>> Module precommit not yet loaded, skipped.
>>> Module rutils not yet loaded, skipped.
>>
>>
>>
>> On Wed, May 11, 2016 at 2:05 PM, Douglas Rohrer 
>> wrote:
>>
>>> As to the SASL logging, unfortunately it's not "on by default" and the
>>> setting in riak.conf, as you found out, doesn't work correctly. However,
>>> you can enable SASL via adding a setting to your advanced.config:
>>>
>>> {sasl,[{sasl_error_logger,tty}]} %% Enable TTY output for the SASL app
>>> {sasl,[{sasl_error_logger,{file, "/path/to/log"}}]} %% Enable SASL and
>>> output to "/path/to/log" file
>>>
>>> We're evaluating if we shouldn't just remove the sasl setting from
>>> riak.conf altogether, as you're the first person (that we know of) since
>>> 2012 that has tried to turn it on and noticed this bug.
>>>
>>> Doug
>>>
>>> On Wed, May 11, 2016 at 10:14 AM Luke Bakken  wrote:
>>>
 Hi Sanket -

 I'd like to confirm some details. Is this a one-node cluster? Did you
 install an official package or build from source?

 Thanks -
 --
 Luke Bakken
 Engineer
 lbak...@basho.com


 On Tue, May 10, 2016 at 6:49 PM, Sanket Agrawal
  wrote:
 > One more thing - I set up the hooks by bucket, not bucket type. The
 > documentation for 2.1.4 says that hooks are defined on the bucket
 level.
 > Here is how I set up precommit hook (derived from "Riak Handbook"
 p95):
 >
 > curl -X PUT localhost:8098/types/test_kv_wo/buckets/uuid_log/props -H
 > 'Content-Type: application/json' -d '{ "props": { "precommit":
 [{"mod":
 > "precommit", "fun": "pre_uuid"}]}}' -v
 >
 >
 > On Tue, May 10, 2016 at 9:15 PM, Sanket Agrawal <
 sanket.agra...@gmail.com>
 > wrote:
 >>
 >> I just set up a precommit hook function in dev environment (KV 2.1.4)
 >> which doesn't seem to be triggering off at all. The object is being
 >> stored in the bucket, but the precommit logic is not kicking off. I
 >> checked a couple of things as listed below but came up with no error -
 >> so, it is a head-scratcher why precommit hook is not triggering:
 >>>
 >>> - Verify precommit is set in bucket properties - snippet from curl
 query
 >>> for bucket props below:
 >>> "precommit":[{"mod":"precommit","fun":"pre_uuid"}]
 >>>
 >>> - check there is no error in logs
 >>>
 >>> - check riak-console for commit errors:
 >>> $ ./riak1/bin/riak-admin status|grep commit
 >>> postcommit_fail : 0
 >>> precommit_fail : 0
 >>>
 >>> - Run the precommit function manually on Riak console itself with a
 riak
 >>> object (that the hook failed to trigger on), and verify it works
 >>
 >>
 >>
 >> Also, there is no sasl-error.log. "sasl = on" doesn't work in 2.1.4
 >> because it fails with bad_config error. So, I am assuming sasl
 logging is
 >> enabled by default.
 >>
 >> Here is what precommit function does:
 >> - For the object (an immutable log append of JSON), calculate the
 location
 >> of a LWW bucket, and update a easily calculated key with that JSON
 body. It
 >> works fine from Riak console itself. Code below - we call 

Re: Precommit hook function - no error log - how to debug?

2016-05-11 Thread Douglas Rohrer
Huh - I get a huge amount of logging when I turn on sasl using
advanced.config - specifically, I have:

{sasl,[{sasl_error_logger,{file, "/tmp/sasl-1.log"}}]}

in my advanced.config, and for just a startup/shutdown cycle I get a 191555
byte file.

Just to confirm that you can, in fact, load the modules in question: what
happens if you do a `riak-admin attach` and then `m(precommit).`? What do
you see?

Doug

On Wed, May 11, 2016 at 11:32 AM Sanket Agrawal 
wrote:

> Thanks, Doug. I have enabled sasl logging now through advanced.config
> though it doesn't seem to be creating any log yet.
>
> If this might help you folks with debugging precommit issue, what I have
> observed is that erl-reload command doesn't load the precommit modules for
> any of the three nodes (though precommit hook has been enabled on one of
> the buckets for testing).
>
>> $ ~/riak/riak1/bin/riak-admin erl-reload
>> Module precommit not yet loaded, skipped.
>> Module rutils not yet loaded, skipped.
>
>
>
> On Wed, May 11, 2016 at 2:05 PM, Douglas Rohrer  wrote:
>
>> As to the SASL logging, unfortunately it's not "on by default" and the
>> setting in riak.conf, as you found out, doesn't work correctly. However,
>> you can enable SASL via adding a setting to your advanced.config:
>>
>> {sasl,[{sasl_error_logger,tty}]} %% Enable TTY output for the SASL app
>> {sasl,[{sasl_error_logger,{file, "/path/to/log"}}]} %% Enable SASL and
>> output to "/path/to/log" file
>>
>> We're evaluating if we shouldn't just remove the sasl setting from
>> riak.conf altogether, as you're the first person (that we know of) since
>> 2012 that has tried to turn it on and noticed this bug.
>>
>> Doug
>>
>> On Wed, May 11, 2016 at 10:14 AM Luke Bakken  wrote:
>>
>>> Hi Sanket -
>>>
>>> I'd like to confirm some details. Is this a one-node cluster? Did you
>>> install an official package or build from source?
>>>
>>> Thanks -
>>> --
>>> Luke Bakken
>>> Engineer
>>> lbak...@basho.com
>>>
>>>
>>> On Tue, May 10, 2016 at 6:49 PM, Sanket Agrawal
>>>  wrote:
>>> > One more thing - I set up the hooks by bucket, not bucket type. The
>>> > documentation for 2.1.4 says that hooks are defined on the bucket
>>> level.
>>> > Here is how I set up precommit hook (derived from "Riak Handbook" p95):
>>> >
>>> > curl -X PUT localhost:8098/types/test_kv_wo/buckets/uuid_log/props -H
>>> > 'Content-Type: application/json' -d '{ "props": { "precommit": [{"mod":
>>> > "precommit", "fun": "pre_uuid"}]}}' -v
>>> >
>>> >
>>> > On Tue, May 10, 2016 at 9:15 PM, Sanket Agrawal <
>>> sanket.agra...@gmail.com>
>>> > wrote:
>>> >>
>>> >> I just set up a precommit hook function in dev environment (KV 2.1.4)
>>> >> which doesn't seem to be triggering off at all. The object is being
>>> >> stored in the bucket, but the precommit logic is not kicking off. I
>>> >> checked a couple of things as listed below but came up with no error -
>>> >> so, it is a head-scratcher why precommit hook is not triggering:
>>> >>>
>>> >>> - Verify precommit is set in bucket properties - snippet from curl
>>> query
>>> >>> for bucket props below:
>>> >>> "precommit":[{"mod":"precommit","fun":"pre_uuid"}]
>>> >>>
>>> >>> - check there is no error in logs
>>> >>>
>>> >>> - check riak-console for commit errors:
>>> >>> $ ./riak1/bin/riak-admin status|grep commit
>>> >>> postcommit_fail : 0
>>> >>> precommit_fail : 0
>>> >>>
>>> >>> - Run the precommit function manually on Riak console itself with a
>>> riak
>>> >>> object (that the hook failed to trigger on), and verify it works
>>> >>
>>> >>
>>> >>
>>> >> Also, there is no sasl-error.log. "sasl = on" doesn't work in 2.1.4
>>> >> because it fails with bad_config error. So, I am assuming sasl
>>> logging is
>>> >> enabled by default.
>>> >>
>>> >> Here is what precommit function does:
>>> >> - For the object (an immutable log append of JSON), calculate the
>>> location
>>> >> of a LWW bucket, and update a easily calculated key with that JSON
>>> body. It
>>> >> works fine from Riak console itself. Code below - we call pre_uuid in
>>> >> precommit hook - both precommit.beam (where the function is) and
>>> rutils.beam
>>> >> have been copied to the relevant location as set in riak config, are
>>> >> accessible through Riak console and work fine if manually executed on
>>> an
>>> >> object:
>>> >>
>>> >> %% Preprocess JSON, and copy to a LWW bucket type
>>> >> preprocessJ(RObj,B,Choplen) ->
>>> >>   Bn = {rutils:calcBLWWType(RObj),B}, %% this returns the location of
>>> >>   %% the LWW bucket - works fine in riak console
>>> >>   %% We store uuid map in  key - we take out timestamp of
>>> >>   %% length 32 including "_"
>>> >>   K = riak_object:key(RObj),
>>> >>   Kn = binary:part(K,0,byte_size(K) - Choplen),
>>> >>   NObj = riak_object:new(Bn,Kn,riak_object:get_value(RObj),
>>> >>                          riak_object:get_metadata(RObj)),
>>> >>   {ok, C} = riak:local_client(),
>>> >>   case C:put(NObj) of

Re: Schemas, is worth to store values in SOLR?

2016-05-11 Thread Alex De la rosa
My use case for searching is mainly for internal purposes, rankings and
statistics (all that data is pre-compiled and stored into final objects for
the app to display)... so I think it is best not to store anything in SOLR and
just fetch keys to compile the data when required.
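A sketch of that flow (plain Python with stand-in functions, not the actual
Riak client API: `search_ids` represents a query returning only `_yz_rk`
keys, and `fetch` a per-key GET; the data and field names are made up):

```python
def compile_report(search_ids, fetch, query):
    """Index-only pattern: the search index returns just document IDs;
    the objects themselves come from a follow-up multi-get, so the
    values never need to be stored in Solr."""
    ids = search_ids(query)                       # e.g. _yz_rk keys only
    objects = [fetch(key) for key in ids]         # multi-get from Riak KV
    return [o for o in objects if o is not None]  # drop stale index hits

# Fake in-memory backends to show the flow; real code would call a client.
DATA = {"u1": {"name": "ana", "score": 9}, "u2": {"name": "bob", "score": 7}}
INDEX = {"score:[5 TO *]": ["u1", "u2", "u3"]}    # "u3" is a stale entry

hits = compile_report(
    search_ids=lambda q: INDEX.get(q, []),
    fetch=DATA.get,
    query="score:[5 TO *]",
)
print(hits)  # the two live objects; the stale "u3" hit is dropped
```

The extra round trip of GETs is the cost; the saving is that Solr holds only
the index, not a second copy of every value.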

Thanks,
Alex

On Wed, May 11, 2016 at 10:40 PM, Alexander Sicular 
wrote:

> Those are exactly the two options, and opinions vary generally based on use
> case. Storing the data not only takes up more space but also more I/O, which
> makes things slower not only at read time but, more crucially, at write
> time.
>
> Often people will take a hybrid approach and store certain elements: say,
> for blog posts, the author, publish date, and title fields. Yet they
> will leave the body out of the Solr index. That way you can quickly
> generate lists of posts by title and only fetch the body when the post is
> clicked through.
>
> What is your use case?
>
> Best,
> Alexander
>
> On Wednesday, May 11, 2016, Alex De la rosa 
> wrote:
>
>> Hi all,
>>
>> When creating a SOLR schema for Riak Search, we can choose whether or not
>> to store the data we are indexing, for example:
>>
>> 
>>
>> I know that the point of storing the value is to have it returned
>> automatically when doing a search query... but that implies using more
>> disk to store data that may never be searched, and making the return
>> slower as more bytes are required to get the data.
>>
>> Would it be better to just index data but not store the values, returning
>> only Riak IDs (_yz_id) and then doing a multi-get in the client/API to
>> fetch the objects for the final response?
>>
>> Or would it be better to store the values in SOLR so they will be already
>> fetched when searching?
>>
>> What would give better performance, or make more sense in terms of disk
>> space, for an application that won't normally use search much (all data
>> is more or less discoverable without searching, using GETs)?
>>
>> Thanks and Best Regards,
>> Alex
>>
>
>
> --
>
>
> Alexander Sicular
> Solutions Architect
> Basho Technologies
> 9175130679
> @siculars
>
>


Re: Schemas, is worth to store values in SOLR?

2016-05-11 Thread Alexander Sicular
Those are exactly the two options, and opinions vary generally based on use
case. Storing the data not only takes up more space but also more I/O, which
makes things slower not only at read time but, more crucially, at write
time.

Often people will take a hybrid approach and store certain elements: say,
for blog posts, the author, publish date, and title fields. Yet they
will leave the body out of the Solr index. That way you can quickly
generate lists of posts by title and only fetch the body when the post is
clicked through.
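A minimal illustrative Solr schema fragment for this hybrid approach (the
field names and types here are made up for the example; the point is the
`stored` attribute on each field):

```xml
<!-- Stored: cheap fields used to render result lists directly -->
<field name="author"       type="string"       indexed="true" stored="true"/>
<field name="title"        type="text_general" indexed="true" stored="true"/>
<field name="publish_date" type="date"         indexed="true" stored="true"/>

<!-- Indexed only: searchable, but fetched from Riak KV on click-through -->
<field name="body" type="text_general" indexed="true" stored="false"/>
```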

What is your use case?

Best,
Alexander

On Wednesday, May 11, 2016, Alex De la rosa  wrote:

> Hi all,
>
> When creating a SOLR schema for Riak Search, we can choose whether or not
> to store the data we are indexing, for example:
>
> 
>
> I know that the point of storing the value is to have it returned
> automatically when doing a search query... but that implies using more
> disk to store data that may never be searched, and making the return
> slower as more bytes are required to get the data.
>
> Would it be better to just index data but not store the values, returning
> only Riak IDs (_yz_id) and then doing a multi-get in the client/API to
> fetch the objects for the final response?
>
> Or would it be better to store the values in SOLR so they will be already
> fetched when searching?
>
> What would give better performance, or make more sense in terms of disk
> space, for an application that won't normally use search much (all data is
> more or less discoverable without searching, using GETs)?
>
> Thanks and Best Regards,
> Alex
>


-- 


Alexander Sicular
Solutions Architect
Basho Technologies
9175130679
@siculars


Re: Precommit hook function - no error log - how to debug?

2016-05-11 Thread Sanket Agrawal
Thanks, Doug. I have enabled sasl logging now through advanced.config,
though it doesn't seem to be creating any log yet.

In case it helps you folks with debugging the precommit issue, what I have
observed is that the erl-reload command doesn't load the precommit modules
on any of the three nodes (though the precommit hook has been enabled on one
of the buckets for testing).

> $ ~/riak/riak1/bin/riak-admin erl-reload
> Module precommit not yet loaded, skipped.
> Module rutils not yet loaded, skipped.



On Wed, May 11, 2016 at 2:05 PM, Douglas Rohrer  wrote:

> As to the SASL logging, unfortunately it's not "on by default" and the
> setting in riak.conf, as you found out, doesn't work correctly. However,
> you can enable SASL via adding a setting to your advanced.config:
>
> {sasl,[{sasl_error_logger,tty}]} %% Enable TTY output for the SASL app
> {sasl,[{sasl_error_logger,{file, "/path/to/log"}}]} %% Enable SASL and
> output to "/path/to/log" file
>
> We're evaluating if we shouldn't just remove the sasl setting from
> riak.conf altogether, as you're the first person (that we know of) since
> 2012 that has tried to turn it on and noticed this bug.
>
> Doug
>
> On Wed, May 11, 2016 at 10:14 AM Luke Bakken  wrote:
>
>> Hi Sanket -
>>
>> I'd like to confirm some details. Is this a one-node cluster? Did you
>> install an official package or build from source?
>>
>> Thanks -
>> --
>> Luke Bakken
>> Engineer
>> lbak...@basho.com
>>
>>
>> On Tue, May 10, 2016 at 6:49 PM, Sanket Agrawal
>>  wrote:
>> > One more thing - I set up the hooks by bucket, not bucket type. The
>> > documentation for 2.1.4 says that hooks are defined on the bucket level.
>> > Here is how I set up precommit hook (derived from "Riak Handbook" p95):
>> >
>> > curl -X PUT localhost:8098/types/test_kv_wo/buckets/uuid_log/props -H
>> > 'Content-Type: application/json' -d '{ "props": { "precommit": [{"mod":
>> > "precommit", "fun": "pre_uuid"}]}}' -v
>> >
>> >
>> > On Tue, May 10, 2016 at 9:15 PM, Sanket Agrawal <
>> sanket.agra...@gmail.com>
>> > wrote:
>> >>
>> >> I just set up a precommit hook function in dev environment (KV 2.1.4)
>> >> which doesn't seem to be triggering off at all. The object is being
>> >> stored in the bucket, but the precommit logic is not kicking off. I
>> >> checked a couple of things as listed below but came up with no error -
>> >> so, it is a head-scratcher why precommit hook is not triggering:
>> >>>
>> >>> - Verify precommit is set in bucket properties - snippet from curl
>> query
>> >>> for bucket props below:
>> >>> "precommit":[{"mod":"precommit","fun":"pre_uuid"}]
>> >>>
>> >>> - check there is no error in logs
>> >>>
>> >>> - check riak-console for commit errors:
>> >>> $ ./riak1/bin/riak-admin status|grep commit
>> >>> postcommit_fail : 0
>> >>> precommit_fail : 0
>> >>>
>> >>> - Run the precommit function manually on Riak console itself with a
>> riak
>> >>> object (that the hook failed to trigger on), and verify it works
>> >>
>> >>
>> >>
>> >> Also, there is no sasl-error.log. "sasl = on" doesn't work in 2.1.4
>> >> because it fails with bad_config error. So, I am assuming sasl logging
>> is
>> >> enabled by default.
>> >>
>> >> Here is what precommit function does:
>> >> - For the object (an immutable log append of JSON), calculate the
>> location
>> >> of a LWW bucket, and update a easily calculated key with that JSON
>> body. It
>> >> works fine from Riak console itself. Code below - we call pre_uuid in
>> >> precommit hook - both precommit.beam (where the function is) and
>> rutils.beam
>> >> have been copied to the relevant location as set in riak config, are
>> >> accessible through Riak console and work fine if manually executed on
>> an
>> >> object:
>> >>
>> >>> %% Preprocess JSON, and copy to a LWW bucket type
>> >>> preprocessJ(RObj,B,Choplen) ->
>> >>>   Bn = {rutils:calcBLWWType(RObj),B}, %% this returns the location of
>> >>>   %% the LWW bucket - works fine in riak console
>> >>>   %% We store uuid map in  key - we take out timestamp of
>> >>>   %% length 32 including "_"
>> >>>   K = riak_object:key(RObj),
>> >>>   Kn = binary:part(K,0,byte_size(K) - Choplen),
>> >>>   NObj = riak_object:new(Bn,Kn,riak_object:get_value(RObj),
>> >>>                          riak_object:get_metadata(RObj)),
>> >>>   {ok, C} = riak:local_client(),
>> >>>   case C:put(NObj) of
>> >>>     ok -> RObj;
>> >>>     _ -> {fail,<<"Error when trying to process in precommit hook">>}
>> >>>   end.
>> >>>
>> >>> pre_uuid(RObj) -> preprocessJ(RObj,<<"uuid_latest">>,32).
>> >>
>> >>
>> >> Below is a manual execution from riak console of precommit function -
>> >> first we execute it to confirm it is returning the original object:
>> >>>
>> >>> (riak1@127.0.0.1)5> precommit:pre_uuid(O1).
>> >>> {r_object,{<<"test_kv_wo">>,<<"uuid_log">>},
>> >>>   <<"ahmed_2016-05-10T20%3a47%3a47.346299Z">>,
>> >>>   [{r_content,{dict,3,16,16,8,80,48,
>> >>>
>> >>> 

Riak-TS 1.3 - how to Drop an existing table and enable Authorization for HTTP APIs

2016-05-11 Thread Nguyen, Kyle
Hi all,

We're doing a POC on Riak-TS 1.3 and want to find out whether there is an
option to drop an existing table, and whether authorization can be enabled for
the HTTP APIs.

Thanks

-kyle-



The information contained in this message may be confidential and legally 
protected under applicable law. The message is intended solely for the 
addressee(s). If you are not the intended recipient, you are hereby notified 
that any use, forwarding, dissemination, or reproduction of this message is 
strictly prohibited and may be unlawful. If you are not the intended recipient, 
please contact the sender by return e-mail and destroy all copies of the 
original message.
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Schemas: is it worth storing values in SOLR?

2016-05-11 Thread Alex De la rosa
Hi all,

When creating a SOLR schema for Riak Search, we can choose whether or not to
store the data we are indexing.

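The choice is made per field with the stored attribute in the schema; a
hypothetical field definition (the field name and type here are only
illustrative) would look like:

```xml
<field name="username_s" type="string" indexed="true" stored="true" />
```
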
I know that the point of having the value stored is to get it returned
automatically when doing a search query... but that implies using more disk to
hold data that may never be searched, and a slower return, as more bytes must
be transferred to deliver the results.

Would it be better to just index the data but not store the values, returning
only Riak IDs (_yz_id), and then do a multi-get in the client/API to fetch the
objects for the final response?
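The multi-get option can be sketched with the riak-erlang-client (an
assumption here; the index name, query, and connection details are made up for
illustration). Search returns the _yz_* coordinate fields, and the client then
fetches each object:

```erlang
%% Sketch only -- assumes the riak-erlang-client (riakc) and a reachable node.
search_then_multiget() ->
    {ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
    {ok, {search_results, Docs, _MaxScore, _NumFound}} =
        riakc_pb_socket:search(Pid, <<"my_index">>, <<"uname_s:ahmed">>),
    %% Each returned doc carries the object's coordinates in _yz_* fields.
    [begin
         {_, BType}  = lists:keyfind(<<"_yz_rt">>, 1, Fields),
         {_, Bucket} = lists:keyfind(<<"_yz_rb">>, 1, Fields),
         {_, Key}    = lists:keyfind(<<"_yz_rk">>, 1, Fields),
         {ok, Obj}   = riakc_pb_socket:get(Pid, {BType, Bucket}, Key),
         Obj
     end || {_Index, Fields} <- Docs].
```

Which approach wins depends mostly on object size and result counts, so it is
worth benchmarking both against your own data.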

Or would it be better to store the values in SOLR so they will be already
fetched when searching?

Which would give better performance, or make more sense in terms of disk
space, for an application that normally won't use search much (all data is
more or less discoverable with GETs, without searching)?

Thanks and Best Regards,
Alex


Re: Precommit hook function - no error log - how to debug?

2016-05-11 Thread Douglas Rohrer
As to the SASL logging, unfortunately it's not "on by default" and the
setting in riak.conf, as you found out, doesn't work correctly. However,
you can enable SASL via adding a setting to your advanced.config:

{sasl,[{sasl_error_logger,tty}]} %% Enable TTY output for the SASL app
{sasl,[{sasl_error_logger,{file, "/path/to/log"}}]} %% Enable SASL and
output to "/path/to/log" file
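For context, advanced.config holds a single Erlang term: a list of
{App, Settings} tuples terminated by a period. A minimal sketch with the SASL
entry added (the log path is only an example) would be:

```erlang
%% advanced.config -- one Erlang term; note the trailing period.
[
 {sasl,
  [{sasl_error_logger, {file, "/var/log/riak/sasl-error.log"}}]}
].
```

The node needs a restart for the setting to take effect.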

We're evaluating whether we should just remove the sasl setting from
riak.conf altogether, as you're the first person (that we know of) since
2012 who has tried to turn it on and noticed this bug.

Doug

On Wed, May 11, 2016 at 10:14 AM Luke Bakken  wrote:

> Hi Sanket -
>
> I'd like to confirm some details. Is this a one-node cluster? Did you
> install an official package or build from source?
>
> Thanks -
> --
> Luke Bakken
> Engineer
> lbak...@basho.com
>
>
> On Tue, May 10, 2016 at 6:49 PM, Sanket Agrawal
>  wrote:
> > One more thing - I set up the hooks by bucket, not bucket type. The
> > documentation for 2.1.4 says that hooks are defined on the bucket level.
> > Here is how I set up precommit hook (derived from "Riak Handbook" p95):
> >
> > curl -X PUT localhost:8098/types/test_kv_wo/buckets/uuid_log/props -H
> > 'Content-Type: application/json' -d '{ "props": { "precommit": [{"mod":
> > "precommit", "fun": "pre_uuid"}]}}' -v
> >
> >
> > On Tue, May 10, 2016 at 9:15 PM, Sanket Agrawal <
> sanket.agra...@gmail.com>
> > wrote:
> >>
> >> I just set up a precommit hook function in dev environment (KV 2.1.4)
> >> which doesn't seem to be triggering off at all. The object is being
> stored
> >> in the bucket, but the precommit logic is not kicking off. I checked
> couple
> >> of things as listed below but came up with no error - so, it is a
> >> head-scratcher why precommit hook is not triggering:
> >>>
> >>> - Verify precommit is set in bucket properties - snippet from curl
> query
> >>> for bucket props below:
> >>> "precommit":[{"mod":"precommit","fun":"pre_uuid"}]
> >>>
> >>> - check there is no error in logs
> >>>
> >>> - check riak-console for commit errors:
> >>> $ ./riak1/bin/riak-admin status|grep commit
> >>> postcommit_fail : 0
> >>> precommit_fail : 0
> >>>
> >>> - Run the precommit function manually on Riak console itself with a
> riak
> >>> object (that the hook failed to trigger on), and verify it works
> >>
> >>
> >>
> >> Also, there is no sasl-error.log. "sasl = on" doesn't work in 2.1.4
> >> because it fails with bad_config error. So, I am assuming sasl logging
> is
> >> enabled by default.
> >>
> >> Here is what precommit function does:
> >> - For the object (an immutable log append of JSON), calculate the
> location
> >> of a LWW bucket, and update a easily calculated key with that JSON
> body. It
> >> works fine from Riak console itself. Code below - we call pre_uuid in
> >> precommit hook - both precommit.beam (where the function is) and
> rutils.beam
> >> have been copied to the relevant location as set in riak config, are
> >> accessible through Riak console and work fine if manually executed on an
> >> object:
> >>
> >>> %% Preprocess JSON, and copy to a LWW bucket type
> >>> preprocessJ(RObj,B,Choplen) ->
> >>>   Bn = {rutils:calcBLWWType(RObj),B}, %%this returns the location of
> LWW
> >>> bucket - works fine in riak console
> >>>   %% We store uuid map in  key - we take out timestamp of
> >>> length 32 including "_"
> >>>   K = riak_object:key(RObj),
> >>>   Kn = binary:part(K,0,byte_size(K) - Choplen),
> >>>   NObj =
> >>>
> riak_object:new(Bn,Kn,riak_object:get_value(RObj),riak_object:get_metadata(RObj)),
> >>>   {ok, C} = riak:local_client(),
> >>>   case C:put(NObj) of
> >>> ok -> RObj;
> >>> _ -> {fail,<<"Error when trying to process in precommit hook">>}
> >>>   end.
> >>>
> >>> pre_uuid(RObj) -> preprocessJ(RObj,<<"uuid_latest">>,32).
> >>
> >>
> >> Below is a manual execution from riak console of precommit function -
> >> first we execute it to confirm it is returning the original object:
> >>>
> >>> (riak1@127.0.0.1)5> precommit:pre_uuid(O1).
> >>> {r_object,{<<"test_kv_wo">>,<<"uuid_log">>},
> >>>   <<"ahmed_2016-05-10T20%3a47%3a47.346299Z">>,
> >>>   [{r_content,{dict,3,16,16,8,80,48,
> >>>
> >>> {[],[],[],[],[],[],[],[],[],[],[],[],[],[],...},
> >>>
> >>> {{[],[],[],[],[],[],[],[],[],[],[[...]|...],[],...}}},
> >>>
> >>>
> <<"{\"uname\":\"ahmed\",\"uuid\":\"df8c10e0-381d-5f65-bf43-cb8b4cb806fc\",\"timestamp\":\"2016-05-"...>>}],
> >>>   [{<<0>>,{1,63630132467}}],
> >>>   {dict,1,16,16,8,80,48,
> >>> {[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],...},
> >>> {{[],[],[],[],[],[],[],[],[],[],[],[],[],...}}},
> >>>   undefined}
> >>
> >>
> >> Now, we check if the object has been written to test_lww/uuid_latest
> >> bucket type:
> >>>
> >>> (riak1@127.0.0.1)6>
> >>> C:get({<<"test_lww">>,<<"uuid_latest">>},<<"ahmed">>).
> >>>
> >>> 

Re: Precommit hook function - no error log - how to debug?

2016-05-11 Thread Sanket Agrawal
Hi Luke,

This is a three-node cluster on Amazon EC2 Linux. I built Riak from the KV
2.1.4 source using the Erlang runtime that Basho provides (REPL header: Erlang
R16B02_basho8 (erts-5.10.3) [source] [64-bit] [async-threads:10] [hipe]
[kernel-poll:false]).

Also, the riak-admin output below confirms it is a 3-node cluster:

> ~/riak/riak1/bin/riak-admin member-status
> ================================= Membership ==================================
> Status     Ring       Pending    Node
> -------------------------------------------------------------------------------
> valid      34.4%      --         'riak1@127.0.0.1'
> valid      32.8%      --         'riak2@127.0.0.1'
> valid      32.8%      --         'riak3@127.0.0.1'
> -------------------------------------------------------------------------------
> Valid:3 / Leaving:0 / Exiting:0 / Joining:0 / Down:0


Will appreciate help with debugging and fixing this.
Thanks!

On Wed, May 11, 2016 at 1:13 PM, Luke Bakken  wrote:

> Hi Sanket -
>
> I'd like to confirm some details. Is this a one-node cluster? Did you
> install an official package or build from source?
>
> Thanks -
> --
> Luke Bakken
> Engineer
> lbak...@basho.com
>
>
> On Tue, May 10, 2016 at 6:49 PM, Sanket Agrawal
>  wrote:
> > One more thing - I set up the hooks by bucket, not bucket type. The
> > documentation for 2.1.4 says that hooks are defined on the bucket level.
> > Here is how I set up precommit hook (derived from "Riak Handbook" p95):
> >
> > curl -X PUT localhost:8098/types/test_kv_wo/buckets/uuid_log/props -H
> > 'Content-Type: application/json' -d '{ "props": { "precommit": [{"mod":
> > "precommit", "fun": "pre_uuid"}]}}' -v
> >
> >
> > On Tue, May 10, 2016 at 9:15 PM, Sanket Agrawal <
> sanket.agra...@gmail.com>
> > wrote:
> >>
> >> I just set up a precommit hook function in dev environment (KV 2.1.4)
> >> which doesn't seem to be triggering off at all. The object is being
> stored
> >> in the bucket, but the precommit logic is not kicking off. I checked
> couple
> >> of things as listed below but came up with no error - so, it is a
> >> head-scratcher why precommit hook is not triggering:
> >>>
> >>> - Verify precommit is set in bucket properties - snippet from curl
> query
> >>> for bucket props below:
> >>> "precommit":[{"mod":"precommit","fun":"pre_uuid"}]
> >>>
> >>> - check there is no error in logs
> >>>
> >>> - check riak-console for commit errors:
> >>> $ ./riak1/bin/riak-admin status|grep commit
> >>> postcommit_fail : 0
> >>> precommit_fail : 0
> >>>
> >>> - Run the precommit function manually on Riak console itself with a
> riak
> >>> object (that the hook failed to trigger on), and verify it works
> >>
> >>
> >>
> >> Also, there is no sasl-error.log. "sasl = on" doesn't work in 2.1.4
> >> because it fails with bad_config error. So, I am assuming sasl logging
> is
> >> enabled by default.
> >>
> >> Here is what precommit function does:
> >> - For the object (an immutable log append of JSON), calculate the
> location
> >> of a LWW bucket, and update a easily calculated key with that JSON
> body. It
> >> works fine from Riak console itself. Code below - we call pre_uuid in
> >> precommit hook - both precommit.beam (where the function is) and
> rutils.beam
> >> have been copied to the relevant location as set in riak config, are
> >> accessible through Riak console and work fine if manually executed on an
> >> object:
> >>
> >>> %% Preprocess JSON, and copy to a LWW bucket type
> >>> preprocessJ(RObj,B,Choplen) ->
> >>>   Bn = {rutils:calcBLWWType(RObj),B}, %%this returns the location of
> LWW
> >>> bucket - works fine in riak console
> >>>   %% We store uuid map in  key - we take out timestamp of
> >>> length 32 including "_"
> >>>   K = riak_object:key(RObj),
> >>>   Kn = binary:part(K,0,byte_size(K) - Choplen),
> >>>   NObj =
> >>>
> riak_object:new(Bn,Kn,riak_object:get_value(RObj),riak_object:get_metadata(RObj)),
> >>>   {ok, C} = riak:local_client(),
> >>>   case C:put(NObj) of
> >>> ok -> RObj;
> >>> _ -> {fail,<<"Error when trying to process in precommit hook">>}
> >>>   end.
> >>>
> >>> pre_uuid(RObj) -> preprocessJ(RObj,<<"uuid_latest">>,32).
> >>
> >>
> >> Below is a manual execution from riak console of precommit function -
> >> first we execute it to confirm it is returning the original object:
> >>>
> >>> (riak1@127.0.0.1)5> precommit:pre_uuid(O1).
> >>> {r_object,{<<"test_kv_wo">>,<<"uuid_log">>},
> >>>   <<"ahmed_2016-05-10T20%3a47%3a47.346299Z">>,
> >>>   [{r_content,{dict,3,16,16,8,80,48,
> >>>
> >>> {[],[],[],[],[],[],[],[],[],[],[],[],[],[],...},
> >>>
> >>> {{[],[],[],[],[],[],[],[],[],[],[[...]|...],[],...}}},
> >>>
> >>>
> <<"{\"uname\":\"ahmed\",\"uuid\":\"df8c10e0-381d-5f65-bf43-cb8b4cb806fc\",\"timestamp\":\"2016-05-"...>>}],
> >>>   [{<<0>>,{1,63630132467}}],
> >>>   {dict,1,16,16,8,80,48,
> >>> {[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],...},
> >>>

Re: Precommit hook function - no error log - how to debug?

2016-05-11 Thread Luke Bakken
Hi Sanket -

I'd like to confirm some details. Is this a one-node cluster? Did you
install an official package or build from source?

Thanks -
--
Luke Bakken
Engineer
lbak...@basho.com


On Tue, May 10, 2016 at 6:49 PM, Sanket Agrawal
 wrote:
> One more thing - I set up the hooks by bucket, not bucket type. The
> documentation for 2.1.4 says that hooks are defined on the bucket level.
> Here is how I set up precommit hook (derived from "Riak Handbook" p95):
>
> curl -X PUT localhost:8098/types/test_kv_wo/buckets/uuid_log/props -H
> 'Content-Type: application/json' -d '{ "props": { "precommit": [{"mod":
> "precommit", "fun": "pre_uuid"}]}}' -v
>
>
> On Tue, May 10, 2016 at 9:15 PM, Sanket Agrawal 
> wrote:
>>
>> I just set up a precommit hook function in dev environment (KV 2.1.4)
>> which doesn't seem to be triggering off at all. The object is being stored
>> in the bucket, but the precommit logic is not kicking off. I checked couple
>> of things as listed below but came up with no error - so, it is a
>> head-scratcher why precommit hook is not triggering:
>>>
>>> - Verify precommit is set in bucket properties - snippet from curl query
>>> for bucket props below:
>>> "precommit":[{"mod":"precommit","fun":"pre_uuid"}]
>>>
>>> - check there is no error in logs
>>>
>>> - check riak-console for commit errors:
>>> $ ./riak1/bin/riak-admin status|grep commit
>>> postcommit_fail : 0
>>> precommit_fail : 0
>>>
>>> - Run the precommit function manually on Riak console itself with a riak
>>> object (that the hook failed to trigger on), and verify it works
>>
>>
>>
>> Also, there is no sasl-error.log. "sasl = on" doesn't work in 2.1.4
>> because it fails with bad_config error. So, I am assuming sasl logging is
>> enabled by default.
>>
>> Here is what precommit function does:
>> - For the object (an immutable log append of JSON), calculate the location
>> of a LWW bucket, and update a easily calculated key with that JSON body. It
>> works fine from Riak console itself. Code below - we call pre_uuid in
>> precommit hook - both precommit.beam (where the function is) and rutils.beam
>> have been copied to the relevant location as set in riak config, are
>> accessible through Riak console and work fine if manually executed on an
>> object:
>>
>>> %% Preprocess JSON, and copy to a LWW bucket type
>>> preprocessJ(RObj,B,Choplen) ->
>>>   Bn = {rutils:calcBLWWType(RObj),B}, %%this returns the location of LWW
>>> bucket - works fine in riak console
>>>   %% We store uuid map in  key - we take out timestamp of
>>> length 32 including "_"
>>>   K = riak_object:key(RObj),
>>>   Kn = binary:part(K,0,byte_size(K) - Choplen),
>>>   NObj =
>>> riak_object:new(Bn,Kn,riak_object:get_value(RObj),riak_object:get_metadata(RObj)),
>>>   {ok, C} = riak:local_client(),
>>>   case C:put(NObj) of
>>> ok -> RObj;
>>> _ -> {fail,<<"Error when trying to process in precommit hook">>}
>>>   end.
>>>
>>> pre_uuid(RObj) -> preprocessJ(RObj,<<"uuid_latest">>,32).
>>
>>
>> Below is a manual execution from riak console of precommit function -
>> first we execute it to confirm it is returning the original object:
>>>
>>> (riak1@127.0.0.1)5> precommit:pre_uuid(O1).
>>> {r_object,{<<"test_kv_wo">>,<<"uuid_log">>},
>>>   <<"ahmed_2016-05-10T20%3a47%3a47.346299Z">>,
>>>   [{r_content,{dict,3,16,16,8,80,48,
>>>
>>> {[],[],[],[],[],[],[],[],[],[],[],[],[],[],...},
>>>
>>> {{[],[],[],[],[],[],[],[],[],[],[[...]|...],[],...}}},
>>>
>>> <<"{\"uname\":\"ahmed\",\"uuid\":\"df8c10e0-381d-5f65-bf43-cb8b4cb806fc\",\"timestamp\":\"2016-05-"...>>}],
>>>   [{<<0>>,{1,63630132467}}],
>>>   {dict,1,16,16,8,80,48,
>>> {[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],...},
>>> {{[],[],[],[],[],[],[],[],[],[],[],[],[],...}}},
>>>   undefined}
>>
>>
>> Now, we check if the object has been written to test_lww/uuid_latest
>> bucket type:
>>>
>>> (riak1@127.0.0.1)6>
>>> C:get({<<"test_lww">>,<<"uuid_latest">>},<<"ahmed">>).
>>>
>>> {ok,{r_object,{<<"hnm_fsm_lww">>,<<"uuid_latest">>},
>>>   <<"ahmed">>,
>>>   [{r_content,{dict,4,16,16,8,80,48,
>>>
>>> {[],[],[],[],[],[],[],[],[],[],[],[],...},
>>> {{[],[],[],[],[],[],[],[],[],[],...}}},
>>>
>>> <<"{\"uname\":\"ahmed\",\"uuid\":\"df8c10e0-381d-5f65-bf43-cb8b4cb806fc\",\"timestamp\":\""...>>}],
>>>   [{<<153,190,230,200,210,126,212,127,0,0,156,65>>,
>>> {1,63630148036}}],
>>>   {dict,1,16,16,8,80,48,
>>> {[],[],[],[],[],[],[],[],[],[],[],[],[],...},
>>> {{[],[],[],[],[],[],[],[],[],[],[],...}}},
>>>   undefined}}
>>
>>
>> Will appreciate pointer on how to debug precommit hook.
>
>
>