Re: Riak and the demise of Basho
+1 to Apache 2 for MDC and other relevant EE repos.

On Wed, Sep 6, 2017 at 9:02 AM DeadZen wrote:
> Sept 16 is Software Freedom Day; a release that day might be a nice idea.
>
> On Wed, Sep 6, 2017 at 8:37 AM, Christopher Meiklejohn wrote:
> > Given that all of the code except for MDC (and maybe JMX) is under
> > Apache 2, I would assume that those components would follow the
> > already existing license of the other components.
> >
> > Is there a plan to change the original license on the open source
> > product from Apache 2?
> >
> > Thanks,
> > Christopher
> >
> > On Wed, Sep 6, 2017 at 2:13 PM, wrote:
> >> Hi,
> >>
> >> A quick status update. Today bet365 have signed the agreement to
> >> purchase all Basho IP. We expect the agreement to be ratified in the US
> >> courts next week. Once cleared, our intention is to open source all
> >> code, help rebuild the community, and collaboratively take the
> >> development of Riak forward.
> >>
> >> In the coming weeks we will hopefully answer the questions people have
> >> and will be calling on the community to help forge the initial Riak
> >> roadmap.
> >>
> >> One of the initial questions we have for the community is which OSS
> >> license people would like applied to the code. Our thought is the most
> >> open and permissive.
> >>
> >> Andy.
> >>
> >> Andrew Deane
> >> Systems Development Manager - Middleware
> >> Hillside (Technology) Limited
> >> andrew.de...@bet365.com
> >> bet365.com
> >>
> >> -----Original Message-----
> >> From: riak-users [mailto:riak-users-boun...@lists.basho.com] On Behalf
> >> Of martin@bet365.com
> >> Sent: 24 August 2017 16:18
> >> To: riak-users@lists.basho.com
> >> Cc: Martin Davies
> >> Subject: FW: Riak and the demise of Basho
> >>
> >> Hi,
> >>
> >> I have been asked to forward the below message from Martin Davies, the
> >> CEO of Technology for bet365.
> >>
> >> Kind Regards,
> >>
> >> Martin Cox
> >>
> >> From: Martin Davies
> >> Sent: 24 August 2017 16:11
> >> To: Martin Cox
> >> Subject: Riak and the demise of Basho
> >>
> >> Hi,
> >>
> >> I have been wanting to make you aware for a few weeks now that we have
> >> reached an agreement, in principle, to buy all of Basho's remaining
> >> assets (except support contracts) from the receiver. Up until this
> >> afternoon, I was constrained by some confidentiality needs of the
> >> receiver and was unable to speak.
> >>
> >> We have agreed a price for the assets and are almost at the end of
> >> sorting out the legal agreement. Once this is complete, it will need to
> >> be processed through the courts which, I am advised, should take a week
> >> or so.
> >>
> >> It is our intention to open source all of Basho's products and all of
> >> the source code that they have been working on. We'll do this as
> >> quickly as we are able to organise it, and we would appreciate some
> >> input from the community on how you would like this done.
> >>
> >> Martin Davies
> >> Chief Executive Officer - Technology
> >> Hillside (Technology) Limited
> >> bet365.com
Re: dc/os stuff
I am indeed at Mesosphere now; I've reached out to Charles directly. Cheers!

Drew

On Thu, Jul 6, 2017 at 4:33 PM Christopher Meiklejohn
<christopher.meiklej...@gmail.com> wrote:
> The maintainer no longer works at Basho, but I believe he does work at
> Mesosphere now, so it might be worth reaching out to him.
>
> Christopher
>
> On Thu, Jul 6, 2017 at 11:01 AM, Charles Solar wrote:
> > Getting Riak running in DC/OS was great - nice and easy with good
> > documentation.
> >
> > But I'm having an issue with the riak-mesos-director
> > (https://github.com/basho-labs/riak-mesos-director).
> >
> > I opened an issue on this repo, but it looks like the maintainer doesn't
> > work for Basho anymore?
> >
> > Anyway, it seems the new data types in Riak (sets, counters, HLLs) don't
> > work through the director.
> >
> > I assume it's a pretty easy fix, so I just wanted to bring it to the
> > attention of someone at Basho.
> >
> > Thanks
Re: Riak search on an index limited to only 1 bucket
@Alex, please take a look at the default Solr schema for Riak Search. You
should have based your custom schema on this (if you've created a custom
schema):

https://docs.basho.com/riak/kv/2.1.4/developing/usage/search-schemas/ ->
https://raw.githubusercontent.com/basho/yokozuna/develop/priv/default_schema.xml

Specifically, take a look at these lines:

https://github.com/basho/yokozuna/blob/develop/priv/default_schema.xml#L124-L131
(this is where _yz_rt/_yz_rb/_yz_rk are defined to be indexed)

And these:

https://github.com/basho/yokozuna/blob/develop/priv/default_schema.xml#L101-L104
- these dynamic fields catch all Riak data types, because the Solr field
names of data types automatically get their type name appended to the end
(as you noticed with your reference to "likes_counter" in your own index).
As you can see in the default schema, all sets are automatically indexed as
multivalued.

Hopefully this info takes away some of the magic for you ;-)

Drew

On Fri, May 13, 2016 at 12:16 PM Vitaly <13vitam...@gmail.com> wrote:
> In general, Riak/Solr is capable of indexing multi-valued properties
> (e.g. lists). You're right thinking that multiValued="true" should be
> used for it. That said, check that it works with your client library
> (it's Python, isn't it?). I believe it does.
>
> Regards,
> Vitaly
>
> On Fri, May 13, 2016 at 9:59 PM, Alex De la rosa wrote:
> > Another question... if I have a set of tags for the elements, like
> > photo.set['tags'] with things like ["holidays", "Hawaii", "2016"],
> > will it be indexed like this?
> >
> > <field ... multiValued="true" />
> >
> > Thanks,
> > Alex
> >
> > On Fri, May 13, 2016 at 10:52 PM, Alex De la rosa
> > <alex.rosa@gmail.com> wrote:
> >> Oh!! Silly me... _yz_rb and _yz_rt... how didn't I think of that?
> >>
> >> Thanks also for the "*:*" tip ; )
> >>
> >> Thanks!
> >> Alex
> >>
> >> On Fri, May 13, 2016 at 10:50 PM, Vitaly <13vitam...@gmail.com> wrote:
> >>> Hi Alex,
> >>>
> >>> 'likes_counter:[100 TO *] AND _yz_rb:photos' will limit query results
> >>> to the photos bucket only. Similarly, "_yz_rt" is for a bucket type.
> >>> Searching for anything in an index can be done with "*:*" (any field,
> >>> any value).
> >>>
> >>> Regards,
> >>> Vitaly
> >>>
> >>> On Fri, May 13, 2016 at 9:40 PM, Alex De la rosa
> >>> <alex.rosa@gmail.com> wrote:
> >>>> Hi all,
> >>>>
> >>>> Imagine I have an index called "posts" where I index the following
> >>>> fields:
> >>>>
> >>>> <field ... />
> >>>> <field ... stored="false" />
> >>>>
> >>>> and I reuse the index in 3 buckets: "status", "photos" and "videos".
> >>>> Then I do the following:
> >>>>
> >>>> results = client.fulltext_search('posts', 'likes_counter:[100 TO *]',
> >>>>     sort='likes_counter desc', rows=10)
> >>>>
> >>>> This query would give me the top 10 most-liked items (statuses,
> >>>> photos, or videos) with at least 100 likes. But how could I limit the
> >>>> result set to only the "photos" bucket? The goal is to get the top 10
> >>>> liked photos without creating a separate index for them, as it's good
> >>>> to also be able to query the top 10 items in general. Any way to do
> >>>> it?
> >>>>
> >>>> On a related note, does anybody know how to do the same query without
> >>>> the [100 TO *] range? Do I just leave it empty?
> >>>>
> >>>> results = client.fulltext_search('posts', '',
> >>>>     sort='likes_counter desc', rows=10)
> >>>>
> >>>> Thanks,
> >>>> Alex
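[Editor's note: for readers following this thread in Erlang rather than
Python, here is a minimal sketch of the bucket-restricted query using the
Riak Erlang client. The index, field, and bucket names are taken from the
thread; the host/port are placeholders, and the option names should be
checked against your riak-erlang-client version.]

    %% Sketch: restrict a query on the shared "posts" index to the
    %% "photos" bucket by filtering on the _yz_rb field.
    {ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
    {ok, Results} = riakc_pb_socket:search(
        Pid,
        <<"posts">>,
        <<"likes_counter:[100 TO *] AND _yz_rb:photos">>,
        [{sort, <<"likes_counter desc">>}, {rows, 10}]).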
Re: cuttlefish
Hi Michael/Luke,

This thread is old, but after fighting through similar issues with
cuttlefish, clique, and node_package, I put together a simple skeleton app
using all of those for myself and others:

https://github.com/drewkerrigan/erlang-app-skeleton

Hopefully someone finds it useful as well.

Drew Kerrigan

On Thu, Oct 29, 2015, 10:08 AM Michael Martin wrote:
> It's not open source yet. It needs quite a bit more testing before I'm
> comfortable releasing it into the wild, but that will happen soon, I hope.
>
> On 10/29/2015 11:42 AM, Luke Bakken wrote:
> > Hi Michael,
> >
> > Please use "Reply All" so that riak-users is included in this
> > discussion.
> >
> > I am assuming that the important part, for Riak, is here:
> >
> > https://github.com/basho/riak/blob/develop/rel/vars.config#L55-L59
> >
> > Is your application open source? I would be interested in getting relx
> > support into cuttlefish.
> >
> > https://github.com/basho/cuttlefish/wiki/Cuttlefish-for-non-node_package-users
> >
> > --
> > Luke Bakken
> > Engineer
> > lbak...@basho.com
> >
> > On Thu, Oct 29, 2015 at 9:32 AM, Michael Martin wrote:
> >> Hi Luke,
> >>
> >> I understand now how to create a new .conf file. But I still don't see
> >> how to make my application aware of that .conf file and properly parse
> >> it into an app.config. Starting the cuttlefish application in my
> >> application has no effect.
> >>
> >> Any ideas?
> >>
> >> Thanks,
> >> Michael
> >>
> >> On 10/29/2015 10:18 AM, Luke Bakken wrote:
> >>> Hi Michael,
> >>>
> >>> I'm figuring this out as I go as well. I searched for "riak.conf"
> >>> (https://github.com/basho/riak/search?q=riak.conf) and got a hit here:
> >>>
> >>> https://github.com/basho/riak/blob/develop/rel/rebar.config
> >>>
> >>> I suspect this configuration is what causes "riak.conf" to be
> >>> generated at build time, via this code:
> >>>
> >>> https://github.com/basho/cuttlefish/blob/develop/src/cuttlefish_rebar_plugin.erl
> >>>
> >>> --
> >>> Luke Bakken
> >>> Engineer
> >>> lbak...@basho.com
> >>>
> >>> On Thu, Oct 29, 2015 at 8:12 AM, Michael Martin wrote:
> >>>> Hi Luke,
> >>>>
> >>>> Thanks for the quick reply.
> >>>>
> >>>> I've been through the "Cuttlefish-for-Erlang-Developers" doc a couple
> >>>> of times now. I must be missing something.
> >>>>
> >>>> The main README states:
> >>>>
> >>>> "When we build Riak, Cuttlefish generates a riak.conf file that
> >>>> contains the default shipping configuration of Riak. When a script to
> >>>> start Riak is run, a Cuttlefish escript is spun up, reads the
> >>>> riak.conf file and combines that with the Schema to generate an
> >>>> app.config."
> >>>>
> >>>> But I see no reference to how I make Cuttlefish generate my script,
> >>>> nor where to find the Cuttlefish escript that reads the .conf file
> >>>> and generates the app.config. I don't see this information in the
> >>>> Wiki, either.
> >>>>
> >>>> Thanks,
> >>>> Michael
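[Editor's note: to make the piece Michael was missing concrete: a
cuttlefish schema is just a file of Erlang terms that maps flat .conf keys
to application-env settings, and the cuttlefish escript combines it with
the .conf file to emit an app.config. Below is a minimal, hypothetical
schema sketch; the "myapp" key names are made up, and the exact escript
invocation should be verified against the cuttlefish wiki linked above.]

    %% priv/myapp.schema - a minimal cuttlefish schema sketch.
    %% Each {mapping, ConfKey, AppEnvKey, Attrs} term maps a flat key
    %% that users write in myapp.conf to an Erlang application-env key.

    %% @doc TCP port the service listens on.
    {mapping, "myapp.listen_port", "myapp.port", [
        {default, 8080},
        {datatype, integer}
    ]}.

    %% An optional translation post-processes the raw value before it
    %% is written into the generated app.config.
    {translation, "myapp.port",
     fun(Conf) ->
         cuttlefish:conf_get("myapp.listen_port", Conf)
     end}.

With a schema like this in place, running the cuttlefish escript against
myapp.conf (as node_package-based start scripts do for riak.conf) produces
the app.config your release boots with.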
Re: Riak Recap - August 31, 2015
Hi Christopher,

The Vagrantfile was moved to a separate repo:
https://github.com/basho-labs/vagrant-riak-mesos

That being said, this project is a moving target, and you can expect
changes over the next few days. I intend to upload the precompiled
artifacts this week to enable builds to happen on other platforms (such as
Mac OS X).

Thanks,
Drew

On Wed, Sep 2, 2015 at 2:09 PM Christopher Meiklejohn
<christopher.meiklej...@gmail.com> wrote:
> On Tue, Sep 1, 2015 at 11:52 AM, Matthew Brender wrote:
> > Hey Christopher,
> >
> > You're running into the fact that this project is an experimental
> > demo. Please continue down the right steps by using issues on the
> > repo, as you've done with #60 [0]. It may help to know the Vagrantfile
> > in the project spins up without issue [1].
> >
> > [0] https://github.com/basho-labs/riak-mesos/issues/60
> > [1] https://github.com/basho-labs/riak-mesos/blob/master/build/ubuntu/Vagrantfile
>
> Hi Matthew,
>
> The link you provided to the Vagrant file returns a 404.
>
> Thanks,
> - Christopher
Re: For erlang proplist object, extractor is needed?
Hello Hao,

Riak object values must be supplied to Riak as binary. Are you attempting
to store the proplist using term_to_binary/1? If so, you would need to
create a custom search extractor for your values. Here is a small tutorial
on creating extractors for yokozuna:

http://docs.basho.com/riak/latest/dev/search/custom-extractors/

If you don't require the value to be specifically a proplist, a simpler
solution would be to convert your proplist to JSON, which already has an
extractor built into Riak Search. A good module for JSON encoding /
decoding is mochijson2, which comes with the
https://github.com/mochi/mochiweb repo.

> SimpleProplist = [{key1, <<"value1">>}, {key2, <<"value2">>}].
...
> mochijson2:encode(SimpleProplist).
...
> RiakObjectValue = list_to_binary(lists:flatten(mochijson2:encode(SimpleProplist))).
<<"{\"key1\":\"value1\",\"key2\":\"value2\"}">>

Cheers!
Drew

On Sat, Aug 8, 2015 at 2:06 AM Hao wrote:
> If I want to save an Erlang proplist into Riak and use Riak Search 2.0,
> do I need to create an Erlang proplist extractor for Solr to be able to
> index it?
>
> --
> Hao
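[Editor's note: if the value really must stay a term_to_binary'd proplist,
the extractor route looks roughly like the sketch below. This is a hedged
sketch based on the extract/1,2 shape shown in the custom-extractors
tutorial linked above; the module name is made up, and the exact callback
contract and registration steps should be confirmed against your yokozuna
version.]

    %% Hypothetical yokozuna extractor for values stored via
    %% term_to_binary/1. Extractors return a list of {FieldName, Value}
    %% pairs for Solr to index.
    -module(yz_proplist_extractor).
    -export([extract/1, extract/2]).

    extract(Value) ->
        extract(Value, []).

    extract(Value, _Opts) when is_binary(Value) ->
        Proplist = binary_to_term(Value),
        %% One Solr field per proplist entry, e.g. {key1, <<"value1">>}
        %% becomes {<<"key1">>, <<"value1">>}.
        [{atom_to_binary(Key, utf8), Val} || {Key, Val} <- Proplist].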
Re: Recommended way to delete keys
Another idea for a large-scale, one-time removal of data, as well as an
opportunity for a fresh start, would be to:

1. Set up multi-data center replication between 2 clusters.
2. Implement a recv/2 hook on the sink which refuses data from the buckets
   / keys you would like to ignore / delete.
3. Trigger a full-sync replication.
4. Start using the sink as your new source of data, sans the ignored data.

Obviously this is costly, but it should have a fairly minimal impact on
existing production users, other than the moment you switch traffic from
the old cluster to the new one.

Caveats: not all Riak features are supported with MDC (search indexes and
strong consistency in particular).

On Wed, Jun 3, 2015 at 2:11 PM Peter Herndon wrote:
> Sadly, this is a production cluster already using leveldb as the backend.
> With that constraint in mind, and rebuilding the cluster not really being
> an option to enable multi-backends or bitcask, what would our best
> approach be?
>
> Thanks!
>
> —Peter
>
> > On Jun 3, 2015, at 12:09 PM, Alexander Sicular wrote:
> >
> > We are actively investigating better options for deletion of large
> > amounts of keys. As Sargun mentioned, deleting the data dir for an
> > entire backend via an operationalized rolling restart is probably the
> > best approach right now for killing large amounts of keys.
> >
> > But if your key space can fit in memory, the best way to kill keys is
> > to use bitcask TTL, if that's an option: 1. if you can even use bitcask
> > in your environment due to the memory overhead, and 2. if your use case
> > allows for TTLs, which it may, considering you may already be using
> > time-bound buckets.
> >
> > -Alexander
> >
> > @siculars
> > http://siculars.posthaven.com
> >
> > Sent from my iRotaryPhone
> >
> > On Jun 3, 2015, at 09:54, Sargun Dhillon wrote:
> >
> >> You could map your keys to a given bucket, and that bucket to a given
> >> backend using multi_backend. There is some cost to having lots of
> >> backends (memory overhead, FDs, etc.). When you want to do a mass
> >> drop, you could down the node, delete that given backend, and bring it
> >> back up. Caveat: neither AAE, MDC, nor mutable data play well with
> >> this scenario.
> >>
> >> On Wed, Jun 3, 2015 at 10:43 AM, Peter Herndon wrote:
> >>> Hi list,
> >>>
> >>> We're looking for the best way to handle large-scale expiration of
> >>> no-longer-useful data stored in Riak. We asked a while back, and the
> >>> recommendation was to store the data in time-segmented buckets
> >>> (bucket per day or per month), query on the current buckets, and use
> >>> the streaming list keys API to handle slowly deleting the buckets
> >>> that have aged out.
> >>>
> >>> Is that still the best approach for doing this kind of task? Or is
> >>> there a better approach?
> >>>
> >>> Thanks!
> >>>
> >>> —Peter Herndon
> >>> Sr. Application Engineer
> >>> @Bitly
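[Editor's note: for the streaming list keys approach mentioned in the
original question, a minimal sketch with the Riak Erlang client might look
like the following. The function names other than the riakc calls are made
up; listing keys is expensive on a live cluster, so run this off-peak and
consider pacing the deletes.]

    %% Sketch: drain an aged-out bucket via streaming list-keys + delete.
    %% riakc_pb_socket:stream_list_keys/2 delivers {ReqId, {keys, Keys}}
    %% batches to the calling process, followed by {ReqId, done}.
    delete_bucket_keys(Pid, Bucket) ->
        {ok, ReqId} = riakc_pb_socket:stream_list_keys(Pid, Bucket),
        drain(Pid, Bucket, ReqId).

    drain(Pid, Bucket, ReqId) ->
        receive
            {ReqId, {keys, Keys}} ->
                [ok = riakc_pb_socket:delete(Pid, Bucket, Key) || Key <- Keys],
                drain(Pid, Bucket, ReqId);
            {ReqId, done} ->
                ok
        after 60000 ->
            {error, timeout}
        end.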
Re: SolrSpatial Problem
Hello Sinh,

Here is a working example of the setup for a "find all records within X
miles of a lat,lon point" query:

https://gist.github.com/drewkerrigan/c7cabcbc46c10957248e

Without digging too much into the exact problem you're encountering, I
think at a high level you should not be attempting to modify the default
schema (http://localhost:8098/search/schema/_yz_default). Any custom
schema should be named something other than _yz_default, like
"my_geo_schema".

Thanks,
Drew

On Mon, Jun 1, 2015 at 3:37 PM sinh nguyen wrote:
> My goal is to retrieve all locations within a boundary using the Solr
> function IsWithin(Polygon()). I am using 2.1.1 and following this
> documentation from Solr:
> https://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4
>
> First, I downloaded the schema from Basho's GitHub:
>
> https://raw.githubusercontent.com/basho/yokozuna/develop/priv/default_schema.xml
>
> Then I added a new fieldType to schema.xml:
>
> <fieldType name="..." class="solr.SpatialRecursivePrefixTreeFieldType"
>     spatialContextFactory="com.spatial4j.core.context.jts.JtsSpatialContextFactory"
>     distErrPct="0.025"
>     maxDistErr="0.09"
>     units="degrees" />
>
> I added new fields to schema.xml:
>
> <field ... stored="true" multiValued="true" />
> <field ... multiValued="true" />
>
> I uploaded it to the server:
>
> curl -i -XPUT http://localhost:8098/search/schema/_yz_default \
>   -H 'content-type: application/xml' \
>   --data-binary @schema.xml
>
> I created a new object:
>
> curl -i -H 'content-type: application/json' -X PUT \
>   'http://localhost:8098/types/geo_type/buckets/stuff/keys/sf' \
>   -d '{"name_tssd":"San Francisco",
>        "loc_geotest":"37.774929,-122.419416","myloc":"37.774929,-122.419416"}'
>
> But Solr only indexes "name_tssd"; neither "loc_geotest" nor "myloc" is
> indexed.
>
> Please help!
>
> PS: How do you remove a custom schema.xml?
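[Editor's note: once a correctly named custom schema and index are in
place, a spatial query of the kind Sinh wants can be issued through the
Riak Erlang client as a Solr filter query. This is a hedged sketch: the
index name, field name, host/port, and polygon are illustrative, and the
IsWithin syntax comes from the Solr spatial adapter wiki page linked
above.]

    %% Hypothetical sketch: find everything whose spatially typed "myloc"
    %% field falls inside a polygon, using a Solr filter query via riakc.
    {ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
    Polygon = <<"POLYGON((-123 37, -121 37, -121 39, -123 39, -123 37))">>,
    Filter = <<"myloc:\"IsWithin(", Polygon/binary, ") distErrPct=0\"">>,
    {ok, Results} = riakc_pb_socket:search(Pid, <<"my_geo_index">>,
                                           <<"*:*">>, [{filter, Filter}]).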
Re: Manipulating CRDTs inside commit hooks/mapreduces
Hi Cezary,

I've created some sample code that creates a CRDT from within a postcommit
hook:

https://github.com/drewkerrigan/riak_snippets/tree/master/hooks/crdts

It should be noted, however, that this is not ideal usage of Riak; there's
a reason that examples of this type of behavior are not easy to find. If /
when postcommit hooks fail, you will get almost no notice that such a
failure occurred, because the PUT to the bucket which has the commit hook
will return success to the client whether or not the commit hook succeeds.

Drew Kerrigan
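[Editor's note: given that silent-failure caveat, any postcommit hook body
is worth wrapping defensively. A postcommit hook is an Erlang function that
receives the written riak_object, and its return value is ignored, so
unlogged exceptions simply vanish. Below is a hedged sketch of the general
shape; the module name and the CRDT-update step are placeholders, and the
linked repo shows the actual sample code.]

    %% Hypothetical postcommit hook skeleton. Catch and log errors
    %% explicitly, or failures disappear without a trace.
    -module(my_postcommit_hook).
    -export([update_crdt/1]).

    update_crdt(Object) ->
        Bucket = riak_object:bucket(Object),
        Key = riak_object:key(Object),
        try
            %% ... update a CRDT derived from Bucket/Key here (see the
            %% linked riak_snippets repo for a concrete approach) ...
            ok
        catch
            Class:Reason ->
                error_logger:error_msg(
                  "postcommit hook failed for ~p/~p: ~p:~p~n",
                  [Bucket, Key, Class, Reason])
        end.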