David, what client are you using to access Riak? We are using the Python client
and the procedure for working with 2i (or anything else) is exactly the same
whether you are using the http transport (riak.RiakHttpTransport) or the pbc
transport (riak.RiakPbcTransport).
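Under the hood, both transports end up issuing the same 2i query; here is a minimal sketch of the range-query URL the HTTP transport ultimately targets (helper name is hypothetical; URL layout follows the Riak 1.x HTTP API, the pbc transport sends the equivalent protobuf message):

```python
# Hypothetical helper illustrating the 2i query URL behind the HTTP
# transport; client code is identical regardless of transport.
def index_query_path(bucket, index, start, end=None):
    # exact-match query when no end of range is given
    if end is None:
        return "/buckets/%s/index/%s/%s" % (bucket, index, start)
    return "/buckets/%s/index/%s/%s/%s" % (bucket, index, start, end)
```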
--gordon
On Nov 5, 2012,
Pavel, as an alternative to rewriting the objects to cause them to be indexed,
you may invoke what I call a map operation with side-effects.
You define an Erlang map-phase function as follows:
map_reindex({error,notfound}, _, _) ->
[];
map_reindex(RiakObject, _, _) ->
riak_search_kv_hoo
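For reference, invoking a named Erlang map phase like the one above from any client boils down to a map/reduce job spec; this sketch builds the raw JSON body submitted to the /mapred endpoint (the module and function names here are placeholders, not the actual ones):

```python
import json

# Sketch of the raw map/reduce job that invokes a named Erlang function;
# "my_indexing" and "map_reindex" are placeholder module/function names.
def reindex_job(bucket):
    return json.dumps({
        "inputs": bucket,
        "query": [
            {"map": {"language": "erlang",
                     "module": "my_indexing",
                     "function": "map_reindex",
                     "keep": False}}
        ],
    })
```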
Great news! Thanks Sean!!
--g
On Aug 29, 2012, at 13:58 , Sean Cribbs wrote:
> Hey riak-users,
>
> We've just officially released the official Riak Python Client,
> version 1.5.0, to pypi.python.org. The primary updates in this release
> are related to Riak 1.2 compatibility, so if you're usin
Mark, you sure can.
If you have a bucket named "foo", you can PUT your schema to:
/riak/_rs_schema/foo
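A sketch of the full URL for a default dev node (host and port are assumptions; adjust for your cluster):

```python
# Hypothetical helper showing where a search schema for a bucket gets PUT;
# 127.0.0.1:8098 is the default dev-node HTTP listener.
def schema_url(bucket, host="127.0.0.1", port=8098):
    return "http://%s:%d/riak/_rs_schema/%s" % (host, port, bucket)
```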
--gordon
On Aug 13, 2012, at 09:58 , Mark Volkmann wrote:
> Is it possible to set the schema for a bucket using HTTP instead of
> search-cmd?
>
> --
> R. Mark Volkmann
> Object Computing,
Sean, we use the Python client extensively and I don't see anything in what you
are proposing that will cause us any pain. :-)
--gordon
On Jul 24, 2012, at 08:53 , Sean Cribbs wrote:
> Hey riak-users,
>
> I've begun adding Riak 1.2-related features to the Python client. As I am
> doing so, I
Jared, this is great news. There are some supremely useful additions and
enhancements in this release!
--gordon
On Jul 13, 2012, at 19:16 , Jared Morrow wrote:
> Riak Users,
>
> Today we are really excited to make available Riak 1.2 release candidate 1.
> Riak 1.2 is a huge release for us an
Reid, I really like your proposal. It gives some of the same expressiveness to
2i that one gets with CouchDB views. The ability to retrieve results sorted in
descending order would be very cool as well.
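In the meantime, a trivial client-side sketch of the descending retrieval wished for above (2i returns matching keys unsorted, so sorting after the fact is the workaround):

```python
# Client-side workaround: sort the keys a 2i query returns in
# descending order, since the server does not (yet) do it for us.
def keys_descending(index_result):
    return sorted(index_result, reverse=True)
```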
--gordon
On May 17, 2012, at 15:13 , Reid Draper wrote:
> Riak Users,
>
> I've been working on some proposed
Charles, by default search is disabled. Take a look in your app.config file and
make sure you have set the following:
{riak_search, [
%% To enable Search functionality set this 'true'.
{enabled, true}
]},
--gordon
On Jan 24, 2012, at 19:18 , char
Shuhao, try embedding the search term in single quotes and see if that makes a
difference:
client.search(bucketname, "name:'mrrow'").run()
--g
On Dec 15, 2011, at 09:37 , Shuhao Wu wrote:
> I'm trying to use riak search from the python-client,
>
> client.search(bucketname, 'name:"mrrow"').r
011 at 9:45 AM, Gordon Tillman wrote:
>> I'm really interested in being able to implement distributed
>> reduce phases (specifically to do a partial sort) and then have that output
>> handle by a final reduce phase that could perform an efficient merge sort
>> and s
I forgot to CC the mailing list with this response.
--g
From: Gordon Tillman
Subject: Re: Secondary Indexes - Feedback?
Date: November 16, 2011 14:55:00 CST
To: Rusty Klophaus
On Nov 16, 2011, at 13:53 , Rusty Klophaus wrote:
> Hi Gordon,
>
> Thanks for your feedback! Some
On Nov 16, 2011, at 11:57 , Rusty Klophaus wrote:
> Now that you've had a few weeks to investigate and experiment with
> Secondary Indexes, I'm hoping to hear about your experiences to help
> us focus future development efforts most effectively:
> Have you tried Secondary Indexes?
> Does the featu
I do believe that you can use Riak very well to handle what your application
requires.
Give me a shout off-list if you like and I'll put together a working
example to get you started.
--gordon
On Nov 12, 2011, at 17:43 , Keith Irwin wrote:
> On Nov 12, 2011, at 2:32 PM, Gordon Tillman w
Keith, I have an idea that might work for you. This is a bit vague, but I would
be glad to put together a more concrete example if you like.
Use secondary indexes to tag each entry with the device id.
You can then find all of the entries for a given device by using the
secondary index to fe
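The tagging scheme above can be illustrated with an in-memory sketch (Riak evaluates the index server-side; this just shows the data model, and the index name follows the usual `_bin` suffix convention):

```python
# In-memory sketch of 2i tagging: each entry carries a device_id_bin
# index, and a lookup scans that index to collect matching keys.
entries = {}  # key -> (value, indexes)

def put_entry(key, value, device_id):
    entries[key] = (value, {"device_id_bin": device_id})

def keys_for_device(device_id):
    return sorted(k for k, (_, idx) in entries.items()
                  if idx["device_id_bin"] == device_id)
```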
Jon, you can also install directly with Homebrew as follows:
brew install riak --HEAD
--g
On Nov 9, 2011, at 12:48 , Hector Castro wrote:
> Hi Jon,
>
> I did the following to solve the OTP release issue with Homebrew:
>
> cd /usr/local
> # Checkout commit right before r14b04 releas
ackup and restore of search data is coming, but it is just not there
> yet. Hope this helps.
>
> Kelly
>
> On Oct 28, 2011, at 12:46 PM, Gordon Tillman wrote:
>
>> Howdy Gang,
>>
>> We have a 3 node Riak 1.0.1 with search enabled and are seeing the following
>>
Howdy Gang,
We have a 3 node Riak 1.0.1 with search enabled and are seeing the following
errors in the Riak log file:
==> /var/log/riak/console.log <==
2011-10-28 11:04:21.900 [error] <0.993.0> gen_fsm <0.993.0> in state initialize
terminated with reason: bad argument in call to erlang:hd([]) i
> -Ryan
>
> [1]: Checkout the section titled "Get/Put Improvements" in the release notes.
> https://github.com/basho/riak/blob/riak-1.0.0/RELEASE-NOTES.org
>
> [2]: http://wiki.basho.com/Vector-Clocks.html#Siblings
>
> On Fri, Sep 2, 2011 at 11:33 AM, Gord
, respectively.
I know that there are various performance optimizations that are based upon
certain default bucket settings and was wondering if this change would
adversely affect any of those.
Many thanks!
--gordon tillman
___
riak-users mailing list
riak
Greetings all,
After an extended datacenter power outage, a 3-node Riak cluster shut down.
When the power was restored, two of the three nodes came back up. We don't know
what is going on with the third node, but in the meantime we have removed the
dead node from the ring. The two remaining node
Many Thanks Ryan,
You the man!!
--g
On Jun 24, 2011, at 10:58 , Ryan Zezeski wrote:
Gordon, Gilbert, and all you Search fans out there,
I've patched this bug in the riak_search-0.14 branch. Below you'll find a link
to the pull request.
The bug was a little tricky to find but is fairly "obv
> On Tue, Jun 7, 2011 at 1:33 PM, Gordon Tillman wrote:
>> Guys I have put together a simple test to reproduce the error that we are
>> seeing.
>> It is on github here:
>> https://github.com/gordyt/riaksearch-test
>> This is a multi-threaded test that connects to
and uploads
one small json object.
Thanks very much for any input you might have.
Regards,
--gordon
On Jun 6, 2011, at 10:01 , Gordon Tillman wrote:
Good Morning Gilbert,
I have posted this gist:
https://gist.github.com/1010384
It is a minor update w
Hi Dave,
First of all I think it is a great idea to combine riak and riak_search!
We are not using the java-based analyzers and so would have no problem with
them being omitted from riak_search.
Regards,
--gordon
On Jun 6, 2011, at 17:33 , David Smith wrote:
> Hi all,
>
> One of the things
Glåns wrote:
Gordon,
Great news! Much appreciated.
Gilbert
On Tue, May 31, 2011 at 2:25 PM, Gordon Tillman <gtill...@mezeo.com> wrote:
Howdy Gilbert,
Hey we are testing a fix now. If this works I will send you a copy of the
update file.
--gordon
On May 31, 2011, at 12:55 , G
On May 27, 2011, at 20:10 , Gilbert Glåns wrote:
>>
>> Gordon,
>> Could you try:
>>
>> erlang:process_info(list_to_pid("<0.16614.32>"), [messages,
>> current_function, initial_call, links, memory, status]).
>>
>> in a riak search
curious to see if you are having the same systemic
memory consumption I am experiencing.
Gilbert
On Fri, May 27, 2011 at 5:15 PM, Gordon Tillman <gtill...@mezeo.com> wrote:
Howdy Gang,
We are having a bit of an issue with our 3-node riaksearch cluster. What is
happening is this:
Clu
On May 27, 2011, at 21:22 , Dan Reverri wrote:
What are the steps to reproduce the issue?
Thanks,
Dan
Daniel Reverri
Developer Advocate
Basho Technologies, Inc.
d...@basho.com
On Fri, May 27, 2011 at 6:44 PM, Gordon Tillman <gtill...@mezeo.com> wrot
614.32>"), [messages,
> current_function, initial_call, links, memory, status]).
>
> in a riak search console for one/some of those mailboxes and share the
> results? I am curious to see if you are having the same systemic
> memory consumption I am experiencing.
>
> Gilbert
>
Howdy Gang,
We are having a bit of an issue with our 3-node riaksearch cluster. What is
happening is this:
Cluster is up and running. We start testing our application against it. As
the application runs the Erlang process consumes more and more memory without
ever releasing it.
In trying to
Thanks Grant!
--gordon
On May 25, 2011, at 15:00 , Grant Schofield wrote:
> Yes, that is perfectly safe.
>
> Grant
>
> On May 25, 2011, at 2:30 PM, Gordon Tillman wrote:
>
>> Howdy Folks,
>>
>> Assuming one has re-written all their M/R code and pre-com
Howdy Folks,
Assuming one has re-written all their M/R code and pre-commit hooks in Erlang,
is it OK to set all these to 0 to save memory:
map_js_vm_count
reduce_js_vm_count
hook_js_vm_count
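For reference, a sketch of how that might look in app.config, assuming these settings sit in the riak_kv section (confirm the placement against your own config):

```erlang
%% app.config sketch: disabling the Javascript VM pools once all
%% M/R phases and hooks are written in Erlang.
{riak_kv, [
    {map_js_vm_count, 0},
    {reduce_js_vm_count, 0},
    {hook_js_vm_count, 0}
]}
```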
Thanks!
--gordon
er_size) is about one-half the memory you wish for merge_index to
consume, hopefully somewhere between 1M and 10M. The rest of the memory will be
used by in-memory offset tables, compaction processes, and during query
operations.
Hope that helps.
Best,
Rusty
On Mon, May 23, 2011 at 2:05 PM,
Greetings!
We are working with a riaksearch cluster that uses innostore as the primary
backend in tandem with merge_index that is required by search. From reading
the Basho wiki it looks like the following are the most important factors
affecting memory and performance:
• innostore
ates that the search queues up a very large
result set before it applies the row limit. Is there something I'm missing
here?
thanks,
Daniel
Basically, I'm wondering if my query time will remain
On Thu, Apr 14, 2011 at 7:53 AM, Gordon Tillman <gtill...@mezeo.com> wrote
Daniel, the max_search_results setting only applies to searches done via the Solr
interface. From
http://lists.basho.com/pipermail/riak-users_lists.basho.com/2011-January/002974.html:
- System now aborts queries that would queue up too many documents in
a result set. This is controlled by a 'max_searc
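A sketch of a Solr-interface query with explicit paging, where that limit applies (path layout follows the /solr endpoint; the helper itself is illustrative):

```python
from urllib.parse import quote

# Illustrative helper building a Solr-interface search path with the
# standard Solr paging parameters rows and start.
def solr_select_path(bucket, query, rows=10, start=0):
    return ("/solr/%s/select?q=%s&rows=%d&start=%d"
            % (bucket, quote(query), rows, start))
```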
odes. (I stress "connected nodes" here
because if a network partition results in an unreachable--but still
running--node, then that node will continue to use the old cached version of
the schema.)
Best,
Rusty
On Wed, Apr 6, 2011 at 3:39 PM, Gordon Tillman <gtill...@mezeo.com>
Howdy Gang,
During our various development iterations I have had to alter the schema that
riaksearch uses for our application's bucket. I have noticed that when I do
that -- when I assign a new schema to a bucket -- even if I completely purge
all data and then repopulate the bucket -- the new
Daniel, AFAIK the mapreduce endpoint doesn't support the SOLR query parameters.
You need to add a reduce function to slice the results.
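A pure-Python sketch of such a slicing reduce (in a real job this logic would run as the reduce phase, with start/rows passed in as the phase argument):

```python
# Sketch of a slicing reduce phase: given the combined result list and
# an argument like {"start": s, "rows": n}, keep only that window,
# mimicking Solr-style paging inside map/reduce.
def reduce_slice(values, arg):
    start = arg.get("start", 0)
    rows = arg.get("rows", len(values))
    return values[start:start + rows]
```

One caveat: reduce phases may be invoked more than once on partial results (re-reduce), so a naive slice like this is approximate unless the phase is arranged to run once over the full set.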
--gordon
Sent from my iPhone
On Apr 2, 2011, at 6:34 PM, "Daniel Rathbone" <dan.rathb...@gmail.com> wrote:
Hi list,
I've got a simple question of syntax
Howdy Folks,
The other day Grant had helped out someone that was having trouble with
map/reduce functions in Javascript. He made mention of an issue with caching.
I was wondering if this also applies to m/r functions written in Erlang?
Thanks!
--gordon
Greetings All,
I just have a couple of quick comments regarding using riak-search with
innostore. Thought it may save someone else a bit of trouble.
OK, starting with a standard dev install of riak-search (latest version). I
installed innostore and configured as per the wiki. So I had the fo
, 2011, at 08:57, Sean Cribbs wrote:
I'm not sure why that's crashing (I suspect it's string_to_int on 0-prefixed
numbers), but your phase has to have "keep":true to return any data to the
client.
Sean Cribbs <s...@basho.com>
Developer Advocate
Basho Tech
On Jan 25, 2011, at 9:53 AM, Gordon Tillman wrote:
Sean thanks again for th
t,{66195534,1278813932664540053428224228626747642198940975104,done}}]
links: []
dictionary: []
trap_exit: true
status: running
heap_size: 2584
stack_size: 24
reductions: 94951
neighbours:
--gordon
On Jan 25, 2011, at 07:24, Sean Cribbs wrote:
Use a reduce phase i
Greetings All,
I have a use case for our app where I need to fetch a list of keys that match
some pattern and was hoping to be able to use key filters for that.
In my test I defined a key filter for the input phase of mapred and then
defined just a single map phase that returns the object key.
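A sketch of the key-filter input spec for such a job (the "matches" filter keeps keys matching a regex before the map phase runs):

```python
# Sketch of map/reduce inputs using a key filter; filters are lists of
# [operation, args...], here a regex match applied to each key.
def filtered_inputs(bucket, pattern):
    return {"bucket": bucket,
            "key_filters": [["matches", pattern]]}
```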
Greetings All,
It is my understanding the only backend that is compatible with Riak Search
indexes is the merge_index_backend.
I am wondering if merge_index backend has a similar memory footprint as
bitcask; i.e., must the keydir structure for merge_index fit entirely in RAM as
is the case wit
Howdy Folks,
Is there a way to configure Riak search so that only objects with certain
content-types (e.g., application/json) are to be indexed?
I have installed a schema on a test bucket that contains field definitions for
the (JSON) fields that I wish to have indexed. At then end I include a