Re: Riak 1.4x counter examples using riakc (erlang-riak-client)

2013-11-21 Thread Russell Brown
Hi Mark,

The Counter is just a riak_object under the hood (at the Riak end of things); to
the erlang client, though, it is modelled as an integer and operations on that
integer.

We’ll get around to the README, sorry about that.

Using the counter is pretty simple. First you need to set allow_mult=true on
whatever bucket you wish to store the counter in. Then just use the increment or
fetch functions.

Here is an example session: https://gist.github.com/russelldb/7596268

Please note, you can also use the regular R,W,PR,PW etc options on increment 
and fetch.
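
For reference, a rough sketch of that flow with riakc (the bucket and key names
below are just placeholders; the gist above shows a real session):

%% Sketch only: enable allow_mult on the bucket, then increment and fetch.
{ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
ok = riakc_pb_socket:set_bucket(Pid, <<"counters">>, [{allow_mult, true}]),
ok = riakc_pb_socket:counter_incr(Pid, <<"counters">>, <<"page_views">>, 1),
%% counter_incr/5 and counter_val/4 also accept options such as [{w, 2}] or [{r, 1}].
{ok, Value} = riakc_pb_socket:counter_val(Pid, <<"counters">>, <<"page_views">>).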

Sorry about the lack of examples in the README, hope this makes up for it a 
little.

Cheers

Russell

On 21 Nov 2013, at 00:17, Mark Allen  wrote:

> Hi -
> 
> I'm trying to puzzle through how to use the PN counters in Riak 1.4.x via the 
> riakc client.  It *looks* like
> a 1.4 counter is a special kind of riak_obj metadata tuple.  So do you just 
> set the tuple in a riakc_obj with
> an undefined value?
> 
> The README in the client repo also doesn't seem to have any PN counter 
> examples.  Any guidance would
> be appreciated.
> 
> Thanks.
> 
> Mark
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com



___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Links and uniqueness

2013-11-21 Thread Matt Black
Thanks Brian, I suspected it was a constraint applied in the client tools
(although in my case the Python ones).


On 22 November 2013 13:05, Brian Roach  wrote:

> Matt -
>
> This has never been a restriction in Riak itself AFAIK. I fixed the
> same issue in the Java client over a year ago - it was using a hashmap
> for links so duplicates were discarded;
> https://github.com/basho/riak-java-client/pull/165
>
> - Roach
>
> On Thu, Nov 21, 2013 at 7:00 PM, Matt Black 
> wrote:
> > Apologies for the bump!
> >
> > Basho guys, can I get a confirmation on the uniqueness of links between
> two
> > objects please? (Before I go and modify the code in my app to suit)
> >
> > Thanks
> > Matt
> >
> >
> >
> > On 19 November 2013 14:31, Matt Black  wrote:
> >>
> >> Hello list,
> >>
> >> Once upon a time, a link from one object to another was unique - you
> >> couldn't add two links from object A onto object B. I know this as I
> had to
> >> code around it in our app.
> >>
> >> At some stage that limitation has been removed - in either the Python
> >> bindings or Riak itself.
> >>
> >> Can anyone else confirm this? Basho peeps, are non-unique links the
> >> intended behaviour?
> >>
> >> Thanks
> >> Matt Black
> >>
> >
> >
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> >
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Links and uniqueness

2013-11-21 Thread Brian Roach
Matt -

This has never been a restriction in Riak itself AFAIK. I fixed the
same issue in the Java client over a year ago - it was using a hashmap
for links so duplicates were discarded;
https://github.com/basho/riak-java-client/pull/165

- Roach

On Thu, Nov 21, 2013 at 7:00 PM, Matt Black  wrote:
> Apologies for the bump!
>
> Basho guys, can I get a confirmation on the uniqueness of links between two
> objects please? (Before I go and modify the code in my app to suit)
>
> Thanks
> Matt
>
>
>
> On 19 November 2013 14:31, Matt Black  wrote:
>>
>> Hello list,
>>
>> Once upon a time, a link from one object to another was unique - you
>> couldn't add two links from object A onto object B. I know this as I had to
>> code around it in our app.
>>
>> At some stage that limitation has been removed - in either the Python
>> bindings or Riak itself.
>>
>> Can anyone else confirm this? Basho peeps, are non-unique links the
>> intended behaviour?
>>
>> Thanks
>> Matt Black
>>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Links and uniqueness

2013-11-21 Thread Matt Black
Apologies for the bump!

Basho guys, can I get a confirmation on the uniqueness of links between two
objects please? (Before I go and modify the code in my app to suit)

Thanks
Matt


On 19 November 2013 14:31, Matt Black  wrote:

> Hello list,
>
> Once upon a time, a link from one object to another was unique - you
> couldn't add two links from object A onto object B. I know this as I had to
> code around it in our app.
>
> At some stage that limitation has been removed - in either the Python
> bindings or Riak itself.
>
> Can anyone else confirm this? Basho peeps, are non-unique links the
> intended behaviour?
>
> Thanks
> Matt Black
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Using Riak to perform aggregate queries

2013-11-21 Thread NC
Our use-case is very similar to what Chris has described so far. I am new
to the riak store and have a background in RDBMS.

Going over this thread, there was a suggestion to pre-compute things. I am
trying to understand what pre-compute exactly means. Does it mean using pre
or post commit hooks to perform aggregation as different events enter our
system? Or does it mean running map reduce jobs in the background to
precompute the aggregations?

A brief background on our use-case. We have vendors in our system that get
millions of events every week. Every two weeks, we sum the amount on all the
events for the vendor to generate an invoice. Querying millions of events
for the vendor using secondary indices or key filters doesn't seem feasible
in riak. I am wondering if we can use post-commit hooks so that as events
enter our system, we maintain a real-time account for the vendor, adding and
subtracting things on the go. When the time comes to create an invoice, we
just look at the account to find the amount to pay to the vendor.
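
(For concreteness, a rough sketch of that running-account idea using the erlang
client's 1.4 counter API; the bucket name and helper functions here are made up,
and this is plain application code rather than a commit hook:)

%% Hypothetical: keep a per-vendor running total in a 1.4 PN counter as events
%% arrive. VendorId is a binary key; negative amounts subtract (e.g. refunds).
record_event(Pid, VendorId, AmountCents) ->
    ok = riakc_pb_socket:counter_incr(Pid, <<"vendor_totals">>, VendorId, AmountCents).

invoice_total(Pid, VendorId) ->
    {ok, Total} = riakc_pb_socket:counter_val(Pid, <<"vendor_totals">>, VendorId),
    Total.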

My questions are: can we even use post-commit hook in that manner where we
insert / update multiple records? Is there a different way to design such a
schema that I am missing?

Thanks.



--
View this message in context: 
http://riak-users.197444.n3.nabble.com/Using-Riak-to-perform-aggregate-queries-tp4027668p4029900.html
Sent from the Riak Users mailing list archive at Nabble.com.

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Search Crashes

2013-11-21 Thread Ryan Zezeski
On Wed, Nov 20, 2013 at 2:38 PM, Gabriel Littman  wrote:
>
>
> 1) We are installed via deb package
>  ii  riak 1.4.1-1
> Riak is a distributed data store
>

There's a 1.4.2 out but your issue doesn't seem to have anything to do with a
specific 1.4.1 bug.


>
> 2) We did recently upgrade our riak python library to 2.0 but I also have
> a cluster still on the 1.4 client that has similar problems.
>

Okay, so for now we assume the client upgrade didn't cause the issues
either.


>
> 3) We less recently upgraded riak itself from 1.2.x to 1.4.  We ended up
> starting with an empty riak store in the process.  Honestly we've had
> many problems with search index even under 1.2.  Mostly riak would get into
> a state where it would continuously crash after startup until we
> deleted /var/lib/riak/merge_index on the node and then rebuilt the search
> index via read/write.  The particular problems I'm having now I cannot
> confirm if they were happening under riak 1.2 or not.
>

The 1.2 issues may very well have been caused by a corruption bug that was
fixed in 1.4.0 [1].


>
> looks like allow_mult is false, but I just confirmed with my colleague
> that *it was previously set to true* so it could be that we have a holdover
> issue from that.
> $ curl 'http://10.1.2.95:8098/buckets/ctv_tvdata/props'
>
> {"props":{"allow_mult":false,"basic_quorum":false,"big_vclock":50,"chash_keyfun":{"mod":"riak_core_util","fun":"chash_std_keyfun"},"dw":0,"last_write_wins":false,"linkfun":{"mod":"riak_kv_wm_link_walker","fun":"mapreduce_linkfun"},"n_val":3,"name":"ctv_tvdata","notfound_ok":false,"old_vclock":86400,"postcommit":[],"pr":0,"precommit":[{"fun":"precommit","mod":"riak_search_kv_hook"},{"mod":"riak_search_kv_hook","fun":"precommit"}],"pw":0,"r":1,"rw":1,"search":true,"small_vclock":50,"w":1,"young_vclock":20}}
>

So after setting allow_mult back to false you'd have to make sure to resolve
any siblings, but that should be done automatically for you now that allow_mult
is false again. However, the commit hook will also crash if you have allow_mult
set to true on Riak Search's special "proxy object" bucket. Looking at your
original insert crash message I notice the problem is actually with the proxy
objects stored in this bucket [2]. What does the following curl show you:

curl 'http://host:port/buckets/_rsid_ctv_tvdata/props'

I bet $5 it has allow_mult set to true. Try setting that to false and see
what happens.
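
(If it does, one quick way to flip it back is from an Erlang shell with the PB
client; a PUT of {"props":{"allow_mult":false}} to that same /props resource
works just as well. The host and port below are assumptions:)

%% Sketch only: clear allow_mult on the Riak Search proxy-object bucket.
{ok, Pid} = riakc_pb_socket:start_link("10.1.2.95", 8087),  %% 8087 = default PB port
ok = riakc_pb_socket:set_bucket(Pid, <<"_rsid_ctv_tvdata">>, [{allow_mult, false}]).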



>
> Since it is now set to false would you have a suggestion on how to
> clear the problem?  (Delete merge_index?)
>

You shouldn't have to delete merge index files unless they are corrupted.
Let's see if we can fix your insert/index problem first. Then we can work
on search if it is still broken.

-Z


[1]: https://github.com/basho/merge_index/pull/30

[2]: It's not easy to see, but there is the atom 'riak_idx_doc' which
indicates this is a "proxy object" created by Riak Search. If you squint
hard enough you can see the analyzed fields as well. I should have looked
more closely the first time. This is not an obvious error. I wouldn't
expect many people to pick up on it.
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: [ANN] Python client 2.0.2 release

2013-11-21 Thread Dave Martorana
Hi Sean!

I was wondering if you were starting to work on Riak 2.0 features, and if
so, which branch I might follow development?

Cheers,

Dave


On Mon, Nov 18, 2013 at 4:44 PM, Sean Cribbs  wrote:

> Hi riak-users,
>
> I've just released version 2.0.2 of the official Python client for
> Riak[1]. This includes a minor feature addition that was included in the
> 1.4.1 release of Riak, namely client-specified timeouts on 2i
> operations[2]. The documentation site has also been updated.
>
> Happy hacking,
>
> --
> Sean Cribbs 
> Software Engineer
> Basho Technologies, Inc.
> http://basho.com/
>
> [1] https://pypi.python.org/pypi/riak/2.0.2
> [2]
> http://basho.github.io/riak-python-client/client.html#riak.client.RiakClient.get_index
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: [ANN] Python client 2.0.2 release

2013-11-21 Thread Sean Cribbs
Some of the new Search (Yokozuna) features have already landed on master
thanks to Eric Redmond. I will be hacking on the 2.0 features in earnest
next month.


On Thu, Nov 21, 2013 at 2:02 PM, Dave Martorana  wrote:

> Hi Sean!
>
> I was wondering if you were starting to work on Riak 2.0 features, and if
> so, which branch I might follow development?
>
> Cheers,
>
> Dave
>
>
> On Mon, Nov 18, 2013 at 4:44 PM, Sean Cribbs  wrote:
>
>> Hi riak-users,
>>
>> I've just released version 2.0.2 of the official Python client for
>> Riak[1]. This includes a minor feature addition that was included in the
>> 1.4.1 release of Riak, namely client-specified timeouts on 2i
>> operations[2]. The documentation site has also been updated.
>>
>> Happy hacking,
>>
>> --
>> Sean Cribbs 
>> Software Engineer
>> Basho Technologies, Inc.
>> http://basho.com/
>>
>> [1] https://pypi.python.org/pypi/riak/2.0.2
>> [2]
>> http://basho.github.io/riak-python-client/client.html#riak.client.RiakClient.get_index
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>>
>


-- 
Sean Cribbs 
Software Engineer
Basho Technologies, Inc.
http://basho.com/
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Search Map Reduce error

2013-11-21 Thread Roger Diller
Ok, I'm not exactly sure what will be most helpful to you, but I am
attaching an example of a Key/Value.

What you see in the file, we would have thousands of those. Probably would
get to at least a million or so KV's like that.

The bucket is search enabled, so all those JSON fields are going to be
indexed.

However, we are primarily search filtering on the systemId, indexId, &
fullText fields in the value.

We currently have a 6 node cluster with each server having approx 16 cores
& in most cases at least 10 GB (20 GB free on most servers probably) free
memory with all the process running (including Riak). Computing power
should not be an issue period!

Let me know what else would be useful to provide.



On Wed, Nov 20, 2013 at 12:46 PM, Todd Tyree  wrote:

> Hi Roger,
>
> Sorry, meant to reply to the mailing list, but accidentally replied
> directly to you.
>
> Before I can say whether or not secondary indexes are suitable, I need to
> know more about your data, access patterns and query patterns.
>
> Can you share this information with me here?  What kind of data is being
> searched and how frequently is it updated?
>
> Best,
> Todd
>
>
> On Wed, Nov 20, 2013 at 5:05 PM, Roger Diller <
> ro...@flexrentalsolutions.com> wrote:
>
>> Do you have any other suggestions on how we can find data in real time
>> from a bucket (without moving to 2.0)? What about secondary indexes?
>>
>>
>> On Wed, Nov 20, 2013 at 11:59 AM, Todd Tyree  wrote:
>>
>>> Hi Roger,
>>>
>>> You are essentially correct in that map reduce was never designed as a
>>> realtime query tool.
>>>
>>> However, we do have a solution in a technology preview release stage
>>> that may solve this problem for you: Yokozuna [0].  It is a tight
>>> integration of Riak and Solr and brings the best of both technologies
>>> together.
>>>
>>> It is currently scheduled for release as part of Riak 2.0, but you can
>>> clone the repo and build it now if you would like.  Just be aware it is
>>> still undergoing development and the API may be subject to change before
>>> the final release.
>>>
>>> [0] https://github.com/basho/yokozuna
>>>
>>> Best,
>>> Todd
>>>
>>
>
> On Wed, Nov 20, 2013 at 4:45 PM, Roger Diller <
> ro...@flexrentalsolutions.com> wrote:
>
>> I could dig up all our nitty gritty Riak details but I don't think that
>> will help really.
>>
>> The point I think is this: Using search map reduce is not a viable way to
>> do real time search queries. Especially ones that may have 2000+ plus
>> results each. Couple that with search requests coming in every few seconds
>> from 300+ customer app instances and you literally bring Riak to its
>> knees.
>>
>> Not that Riak is the problem really, it's just we are using it in a way
>> it was not designed for. In essence, we are using Riak as a search engine
>> for our application data. Correct me if I'm wrong but Riak is more for
>> storing large amounts of KV data, but not really for finding that data in a
>> search sense.
>>
>> Am I missing something here? Is there a viable way for doing real time
>> search queries on a bucket with 1 million keys?
>>
>>
>> On Mon, Nov 18, 2013 at 5:29 PM, Alexander Sicular wrote:
>>
>>> More info please...
>>>
>>> Version
>>> Current config
>>> Hardware
>>> Data size
>>> Search Schema
>>> Etc.
>>>
>>> But I would probably say that your search is returning too many keys to
>>> your mr. More inline.
>>>
>>> @siculars
>>> http://siculars.posthaven.com
>>>
>>> Sent from my iRotaryPhone
>>>
>>> On Nov 18, 2013, at 13:59, Roger Diller 
>>> wrote:
>>>
>>> Using the Riak Java client, I am executing a search map reduce like this:
>>>
>>> MapReduceResult result = riakClient.mapReduce(SEARCH_BUCKET,
>>> search).execute();
>>>
>>>
>>> ^is this part a typo. Cause otherwise it looks like you do a s>mr, set
>>> the search and then another s>mr.
>>>
>>>
>>> String search = "systemId:" + systemName + " AND indexId:" + indexId;
>>>
>>> MapReduceResult result = riakClient.mapReduce(SEARCH_BUCKET,
>>> search).execute();
>>>
>>> This worked fine when the bucket contained a few thousand keys. Now that
>>> we have far more data stored in the bucket (at least 250K keys), it's
>>> throwing this generic error:
>>>
>>> com.basho.riak.client.RiakException: java.io.IOException:
>>> {"error":"map_reduce_error"}
>>>
>>> We've also noticed that storing new key/values in the bucket has slowed
>>> WAY down.
>>>
>>> Any idea what's going on?
>>>
>>>
>>> Your data set is incorrectly sized to your production config.
>>>
>>> Are there limitations to Search Map Reduce?
>>>
>>>
>>> Certainly
>>>
>>> Are there configuration options that need changed?
>>>
>>>
>>> Possibly
>>>
>>> Any help would be greatly appreciated.
>>>
>>>
>>> --
>>> Roger Diller
>>> Flex Rental Solutions, LLC
>>> Email: ro...@flexrentalsolutions.com
>>> Skype: rogerdiller
>>> Time Zone: Eastern Time
>>>
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/

Re: 404 Error: Object Not Found

2013-11-21 Thread Hector Castro
Hi Ari,

I applied the same changes you described to a local single instance
Riak setup and was unable to reproduce your issue.

Can you please answer/provide the following?

- Double-check your configuration changes
- Did you change IP addresses in app.config or vm.args?
- Did you startup Riak before making those IP changes and then attempt
to restart it again afterwards?
- Provide the exact commands you're using to PUT and GET data
- Provide the output of `curl http://localhost:10018/buckets//props`

--
Hector


On Wed, Nov 20, 2013 at 5:29 PM, Ari King  wrote:
> Update:
>
> I'm able to "GET" information of the buckets I created using the java PB
> client. In both the error and console log files I find the same error
> message. I've searched online but I haven't found anything that explains how
> to resolve the issue. Does anyone know what is the issue and how to solve
> it?
>
> In the error log file I find the following repeated numerous times:
>
> 2013-11-20 19:22:13.715 [error] <0.2067.0> CRASH REPORT Process <0.2067.0>
> with 0 neighbours exited with reason: no match of right hand value
> {error,{db_open,"IO error:
> ./data/anti_entropy/411047335499316445744786359201454599278231027712/MANIFEST-01:
> Invalid argument"}} in hashtree:new_segment_store/2 line 499 in
> gen_server:init_it/6 line 328
>
> And in the console log file I once again find a similar error notice:
>
> 2013-11-20 21:54:50.796 [info]
> <0.700.0>@riak_kv_vnode:maybe_create_hashtrees:142
> riak_kv/205523667749658222872393179600727299639115513856: unable to start
> index_hashtree: {error,{{badmatch,{error,{db_open,"IO error:
> ./data/anti_entropy/205523667749658222872393179600727299639115513856/MANIFEST-01:
> Invalid
> argument"}}},[{hashtree,new_segment_store,2,[{file,"src/hashtree.erl"},{line,499}]},{hashtree,new,2,[{file,"src/hashtree.erl"},{line,215}]},{riak_kv_index_hashtree,do_new_tree,2,[{file,"src/riak_kv_index_hashtree.erl"},{line,426}]},{lists,foldl,3,[{file,"lists.erl"},{line,1197}]},{riak_kv_index_hashtree,init_trees,2,[{file,"src/riak_kv_index_hashtree.erl"},{line,366}]},{riak_kv_index_hashtree,init,1,[{file,"src/riak_kv_index_hashtree.erl"},{line,226}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,304}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,227}]}]}}
>
>
>
> -- Forwarded message --
> From: Ari King 
> Date: Wed, Nov 20, 2013 at 4:11 PM
> Subject: 404 Error: Object Not Found
> To: riak-users@lists.basho.com
>
>
> I've installed a single node Riak cluster using the 5 minute install guide
> and modified the app.config riak_core to include {target_n_val, 1} and
> {default_bucket_props, [{n_val, 1}]}. I've also set an IP address of
> 192.168.2.25, and set the HTTP port to 10018 respectively.
>
> With this setup, no matter what HTTP request I execute, I receive the error
> below.
>
> Does anyone know what I've done wrong?
>
> * About to connect() to 192.168.2.25 port 10018 (#0)
> *   Trying 192.168.2.25... connected
>> PUT /riak/test/abc123?returnbody=true HTTP/1.1
>> User-Agent: curl/7.22.0 (i686-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1
>> zlib/
>> 1.2.3.4 libidn/1.23 librtmp/2.3
>> Host: 192.168.2.25:10018
>> Accept: */*
>> Content-Type: application/json
>> Content-Length: 47
>>
> * upload completely sent off: 47out of 47 bytes
> < HTTP/1.1 404 Object Not Found
> < Server: MochiWeb/1.1 WebMachine/1.10.0 (never breaks eye contact)
> < Date: Wed, 20 Nov 2013 20:48:30 GMT
> < Content-Type: text/html
> < Content-Length: 193
> <
> * Connection #0 to host 192.168.2.25 left intact
> * Closing connection #0
>
> Thanks.
>
> -Ari
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: [Confusing search docs] Enabling search on bucket in Riak 2.0

2013-11-21 Thread Ryan Zezeski
On Wed, Nov 20, 2013 at 3:48 PM, Kartik Thakore  wrote:

> Thank you.
>
> I am creating indexes with:
>
> curl -i -XPUT http://192.168.1.10:8098/yz/index/allLogs \
>-H 'content-type: application/json' \
>   -d '{"schema" : "_yz_default", "bucket" : "logs" }'
>
>
> But when I check the index with:
>
>  curl -i  http://192.168.1.10:8098/yz/index/allLogs
>
> It drops the bucket association
>
> HTTP/1.1 200 OK
> Server: MochiWeb/1.1 WebMachine/1.10.5 (jokes are better explained)
> Date: Wed, 20 Nov 2013 20:45:21 GMT
> Content-Type: application/json
> Content-Length: 41
>
> {"name":"allLogs","schema":"_yz_default"}
>

Sorry, that documentation is out of date.

To associate an index to a bucket you need to set the bucket's properties.

curl -XPUT -H 'content-type: application/json' '
http://localhost:8098/buckets/logs/props' -d
'{"props":{"yz_index":"allLogs"}}'

You can perform a GET on that same resource to check the yz_index property
is set.


> Also
>
> what is going on here
>
> curl -XPUT -H'content-type:application/json'
> http://localhost:8098/buckets/people/keys/me \
> -d'{ "name_s" : "kartik" }'
>
> Why not:
> curl -XPUT -H'content-type:application/json'
> http://localhost:8098/riak/people/me
>  \
> -d'{ "name_s" : "kartik" }'
>


In Riak 1.0.0 we changed the resource from '/riak/<bucket>/<key>' to
'/buckets/<bucket>/keys/<key>'. We were supposed to deprecate and eventually
remove the old resource but we never did. You can still use the old style but I
would recommend using the new style, as it is what we use in the official docs
and there is a chance the old resource won't stay up to date with the latest
features.


-Z
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Hard time with Riak 2.0.0pre5 features

2013-11-21 Thread Valter Balegas
Thanks, it works now!
I didn't realize that the bucket type had to be specified as a type. But I did
say that I didn't know what that meant. :)
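
(For anyone hitting the same thing, a minimal sketch of what "specifying the
bucket as a type" looks like with the 2.0 erlang client; with bucket types the
bucket is passed as a {Type, Bucket} tuple. "orders" below is just a made-up
bucket name inside the "bucket" type created in the commands quoted below:)

%% Sketch only: write into a bucket that lives under the consistent bucket type.
%% Pid is a riakc_pb_socket connection, as in the session quoted below.
Obj = riakc_obj:new({<<"bucket">>, <<"orders">>}, <<"key">>, <<"my binary data">>),
riakc_pb_socket:put(Pid, Obj, [return_body]).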

I hope i can do all my experiments now,
Valter

On 20/11/2013, at 21:04, Valter Balegas  wrote:

> Hello,
> 
> Here is the sequence of commands:
> 
> --Compiled Riak with "make rel" and riak-erlang-client with “make". Erlang 
> version R16B02; using the latest version on GitHub or the riak2.0.0.5pre 
> package.
> 
> ./bin/riak stop
> 
> --changed the consensus flag to true on ./etc/riak.conf
> 
> ./bin/riak start
> 
> ./bin/riak-admin bucket-type create bucket '{"props": {"consistent": true}}'
> ./bin/riak-admin bucket-type activate bucket
> 
> ./bin/riak stop
> ./bin/riak start
> 
> on a erlang console, initialized with:
> 
> ./erts-5.10.3/bin/erl -pa ../riak-erlang-client/ebin 
> ../riak-erlang-client/deps/*/ebin
> 
> f(Pid), {ok, Pid} = riakc_pb_socket:start_link("localhost", 8087).
> NewA = riakc_obj:new(<<"bucket">>, <<"key">>, <<"my binary data">>).
> NewB = riakc_obj:new(<<"bucket">>, <<"key">>, <<"my other binary data">>).
> riakc_pb_socket:put(Pid, NewA, [return_body]).
> {ok,{riakc_obj,<<"bucket">>,<<"key">>,
><<107,206,97,96,96,96,204,96,202,5,82,28,202,156,255,126,
>  6,245,74,255,202,96,74,...>>,
>[{{dict,2,16,16,8,80,48,
>{[],[],[],[],[],[],[],[],[],[],[],[],...},
>{{[],[],[],[],[],[],[],[],[],[],...}}},
>  <<"my binary data">>}],
>undefined,undefined}}
> 5> riakc_pb_socket:put(Pid, NewB, [return_body]).
> {ok,{riakc_obj,<<"bucket">>,<<"key">>,
><<107,206,97,96,96,96,204,96,202,5,82,28,202,156,255,126,
>  6,245,74,255,202,96,74,...>>,
>[{{dict,2,16,16,8,80,48,
>{[],[],[],[],[],[],[],[],[],[],[],[],...},
>{{[],[],[],[],[],[],[],[],[],[],...}}},
>  <<"my binary data">>},
> {{dict,2,16,16,8,80,48,
>{[],[],[],[],[],[],[],[],[],[],[],...},
>{{[],[],[],[],[],[],[],[],[],...}}},
>  <<"my other binary data">>}],
>undefined,undefined}}
> 
> bin/riak-admin bucket-type status bucket
> bucket is active
> 
> young_vclock: 20
> w: quorum
> small_vclock: 50
> rw: quorum
> r: quorum
> pw: 0
> precommit: []
> pr: 0
> postcommit: []
> old_vclock: 86400
> notfound_ok: true
> n_val: 3
> linkfun: {modfun,riak_kv_wm_link_walker,mapreduce_linkfun}
> last_write_wins: false
> dw: quorum
> consistent: true
> chash_keyfun: {riak_core_util,chash_std_keyfun}
> big_vclock: 50
> basic_quorum: false
> allow_mult: true
> active: true
> claimant: 'riak@127.0.0.1'
> 
> bin/riak-admin bucket-type list
> bucket (active)
> 
> 
> 
> On 20/11/2013, at 17:23, Jordan West  wrote:
> 
>> Hi Valter,
>> 
>> Could you provide the code you are using to generate the concurrent requests 
>> in addition to the output of `riak-admin bucket-type list` and `riak-admin 
>> bucket-type status ` where  is the name of the 
>> strongly consistent bucket you created (from one node should be sufficient)? 
>> I've been using this feature in a personal project and just tested on a 
>> local cluster using curl and was unable to reproduce (the cluster was a bit 
>> behind develop but there have been no recent changes to the feature).
>> 
>> Cheers,
>> Jordan 
>> 
>> 
>> On Wed, Nov 20, 2013 at 5:30 AM, Valter Balegas  wrote:
>> Hello,
>> 
>> I was able to activate strong consistency, but the node keeps generating
>> siblings when I execute two concurrent writes (store two objects without a
>> vector clock).
>> I tried with the Riak 2.0.0.5pre package and the latest version on the
>> repository. Am I missing anything? I was expecting operations to fail in
>> this case.
>> 
>> It seems that the error was caused by my Erlang version. The creation of the 
>> bucket type didn’t fail with R16B02.
>> 
>> Valter
>> 
>> On 20/11/2013, at 03:13, Jordan West  wrote:
>> 
>>> Valter,
>>> 
>>> You mentioned you are using a recent develop. Would you be able to pull 
>>> down any recent changes and update-deps (or build a fresh devrel)? I just 
>>> tried a fresh clone of develop and was unable to reproduce the same error. 
>>> I, also, tried with R15B01 (Riak 2.0 is slated to use R16B02 right now). 
>>> 
>>> Alternatively, you should be able to workaround this by doing a `riak 
>>> attach-direct` and defining the atom at the erlang shell:
>>> 
>>> 1> consistent.
>>> consistent
>>> 
>>> Jordan
>>> 
>>> 
>>> On Tue, Nov 19, 2013 at 3:06 PM, Valter Balegas  wrote:
>>> I can’t create the bucket type:
>>> 
>>> RPC to 'riak@127.0.0.1' failed: {'EXIT',
>>>  {badarg,
>>>   [{erlang,list_to_existing_atom,
>>> ["consistent"],
>>> []},
>>>

Re: Auto-expiring blobs on Riak CS

2013-11-21 Thread Siddhu Warrier (siwarrie)
Hi Kota,

Thank you for the information. 

Thanks,

Siddhu

Sent from my iPhone

> On 21 Nov 2013, at 07:09, "Kota Uenishi"  wrote:
> 
> Hi,
> Unfortunately, there is no support for timed expiry of objects right now
> - "object lifecycle" [0] would have been the best candidate if we had
> decided to implement that functionality, but unfortunately it is just
> on our future roadmap for broader S3 API coverage.
> 
> [0] http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
> 
> On Wed, Oct 23, 2013 at 12:41 AM, Siddhu Warrier (siwarrie)
>  wrote:
>> First off, apologies if this was documented, but I couldn't seem to find it.
>> 
>> I have a use-case whereby I need to remove blobs written into Riak CS every
>> 90 days. I was wondering if there was a way I could set {expiry_secs} in my
>> Riak CS multi_kv backend config, like I can with Riak's bitcask backend? Can
>> I set the Expires header using the AWS S3 Java SDK, for instance?
>> 
>> Thanks,
>> 
>> Siddhu
>> 
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 
> 
> 
> -- 
> Kota UENISHI / @kuenishi
> Basho Japan KK

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com