>>>>> Looks like Basho has the trademark on Riak.
>>>>> http://www.trademarkia.com/riak-77954950.html
>>>>
>>>> They sure do. And if you look at the Apache2 license you’ll see that it
>>>> grants use of the name t
> On Jul 14, 2017, at 11:32, Russell Brown wrote:
>
> What do you mean “encumbered”? Riak is the name of an Apache2-licensed open
> source database, so it can continue to be used to describe that database.
> Please don’t spread FUD.
You willing to
I love your enthusiasm, Lloyd. How about starting with a Slack channel
and taking it from there...
Things that would kinda need to happen for Riak to grow beyond where it is now:
- complete rebranding. Basho is dead. The term "Riak" may be encumbered.
- new everything. name. domain. github repo.
Abandon hope all ye who enter here.
On Thu, Jul 13, 2017 at 3:15 PM, Tom Santero wrote:
> RICON: A New Hope
>
> On Thu, Jul 13, 2017 at 4:00 PM, Russell Brown
> wrote:
>>
>> We have talked about it. Let's do it!
>>
>> On Jul 13, 2017 7:56 PM,
In case you haven't seen it...
https://www.theregister.co.uk/2017/07/13/will_the_last_person_at_basho_get_the_lights_oh_too_late/
Cheers,
Alexander
On Thu, Jul 13, 2017 at 1:56 PM, wrote:
> Hmmm--- I wonder if anyone has given thought to a conference focused on the
>
I'm not 100% certain, but I do not believe that is the case. Part of the reason
for structured data is efficient retrieval. I believe the data is read, but only
the selected data leaves the leveldb backend; unselected data never leaves
leveldb, so there's no overhead when passing data from level
Also, when using default settings, Riak will write three copies of your data to
your cluster - even if you only have a cluster of one machine.
-Alexander
@siculars
http://siculars.posthaven.com
Sent from my iRotaryPhone
> On Mar 27, 2017, at 11:25, Luke Bakken wrote:
>
>
etting, for one thing.
>
> Daniel -
>
> The storage_backend setting in advanced.config will *override*
> storage_backend in riak.conf. If you wish to ensure the riak.conf
> setting is overridden, you may comment it out in that file.
>
> --
> Luke Bakken
> Engineer
> lbak...@basho
Hi Daniel,
Riak CS uses multi by default. By default the manifests are stored in leveldb
and the blobs/chunks are stored in bitcask. If you're looking to force
everything to level you should remove multi and use level as the backend
setting. As Luke noted elsewhere, this configuration hasn't
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
--
Variables that
> I've tried to change:
>- vnode_management_timer from 10s to 1s
>- transfer_limit from 2 to 100
> But transfers still take about a minute. Any other variables that I should
> take a look at?
>
> 2017-02-22 21:12 GMT+03:00 Alexander Sicular <sicul...@bash
his process finishes, I read stale data on both sides
> of ex-netsplit.
>
>
> --
> Thanks,
> Andrey
Please don't do that. Don't point the internet at your database. Have the nodes
communicate with each other on internal IPs and route public traffic through a
proxy / middleware.
-Alexander
@siculars
http://siculars.posthaven.com
Sent from my iRotaryPhone
> On Feb 13, 2017, at 04:00, AWS
sibility of switching to Riak TS.
> But I guess the question is still valid, isn't it? Should I divide millions
> of keys across different buckets? Does it make any difference in performance,
> memory, or space?
>
>
> Br,
> Alex
>
> 2017-01-27 2:50 GMT+01:00 Alexander Sicular
Hi, you should consider using Riak TS for this use case.
-Alexander
@siculars
http://siculars.posthaven.com
Sent from my iRotaryPhone
> On Jan 27, 2017, at 01:54, Alex Feng wrote:
>
> Hi,
>
> I am wondering if there are some best practice or recommendation for how
Riak CS stores data chunks in bitcask and the index/metadata file in leveldb.
Bitcask, as noted, has no compression. When you force Riak to use level for the
data chunks you get compression for that data which may or may not be good for
your use case. If it's not good for your use case I
Hi Ricardo,
Riak itself won't do this. Afaik, the client libraries return language-specific,
array-like data structures. Actually converting those to CSV is an exercise for
the developer. Thankfully most languages have a readily available array-to-CSV
library which will basically do it for
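As a sketch of the idea in Python, using only the standard library (this assumes the fetched values arrive as a list of rows, which is an assumption about your client and data shape):

```python
import csv
import io

def rows_to_csv(rows):
    """Serialize a list of rows (e.g. values fetched via a Riak client) to CSV."""
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue()

print(rows_to_csv([["id", "name"], ["1", "ricardo"]]))
```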
are
reflected in Solr [23]. Unfortunately there is no atomic way to do that
natively, afaik.
-
Arun Rajagopalan wants to know the status of CS compatibility with Riak
KV 2.2.0 [24].
Have a fabulous new year and, most importantly, a great weekend!
-Alexander Sicular
Solution
the cluster. It
is specific to the data stored on each node.
-Alexander
Alexander Sicular
Solutions Architect
Basho Technologies
9175130679
@siculars
On Fri, Jan 6, 2017 at 11:31 AM, Alex Feng <sweden.f...@gmail.com> wrote:
>
> Hi Riak Users,
>
> I am a little
!
-Alexander Sicular
Solution Architect, Basho
@siculars
[0] http://basho.com/blog/
[1] http://docs.basho.com/riak/ts/1.5.0/
[2] http://docs.basho.com/riak/ts/1.5.0/releasenotes/
[3] http://docs.basho.com/riak/ts/1.5.0/downloads/
[4] http://repo1.maven.org/maven2/com/basho/riak/riak-client
Hi Joe,
You could do that. Riak currently supports millisecond resolution. Look at the
primary key composition. There are two lines, the first is the partition key
and the second is the local key. The local key denotes the sort order and is
the actual unique key for that grouping (quanta).
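For illustration, a hypothetical table definition showing those two lines of the primary key (the table and column names are made up, and the 15-minute quantum is just an example):

```sql
CREATE TABLE device_readings (
  device_id  VARCHAR    NOT NULL,
  time       TIMESTAMP  NOT NULL,
  reading    DOUBLE,
  PRIMARY KEY (
    -- line one, the partition key: groups rows into 15-minute quanta
    (device_id, QUANTUM(time, 15, 'm')),
    -- line two, the local key: sort order and uniqueness within a quantum
    device_id, time
  )
)
```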
].
Have a cozy weekend,
-Alexander Sicular
Solution Architect, Basho
@siculars
[0] http://basho.com/blog/
[1]
http://basho.com/posts/business/iot-roundtable-participants-basho-ibm-and-raytheon-discuss-iot-at-the-edge-part-1-of-3/
[2] http://basho.com/resources/video/
[3] https://www.youtube.com
f using riak ts and riak kv
> in the same cluster. Nothing but tradeoffs.
>
>> On Tue, Nov 22, 2016 at 12:29 PM Alexander Sicular <sicul...@basho.com>
>> wrote:
>> Hi Daniel,
>>
>> Ya, I'm not surprised you're having issues. 4GB ram is woefully underspec
> information from a running cluster so I can give you more accurate
> information?
>
>
>> On Tue, Nov 22, 2016 at 2:42 AM, Alexander Sicular <sicul...@gmail.com>
>> wrote:
>> Hi Daniel,
>>
>> How many nodes?
>> -You should be using 5 minimum if you
Hi Daniel,
How many nodes?
-You should be using 5 minimum if you're using the default config. There
are reasons.
How much ram per node?
-As you noted, in Riak CS, 1MB file chunks are stored in bitcask.
Their key names and some overhead consume memory.
How many objects (files)? What is the average
in an Android environment [67].
-
Jing Liu is looking for best practices on how to load test Riak [68].
Magnus Kessler asks some more q’s and offers guidance [69].
-
Toby Corkindale has concerns about Riak CS logging [70].
Have a fantastical weekend,
-Alexander Sicular
Solution
Hi Ryan, yes, you can change a number of settings. Have you had a look
at http://docs.basho.com/riak/kv/2.1.4/using/admin/riak-admin/#transfer-limit
and
http://lists.basho.com/pipermail/riak-users_lists.basho.com/2014-July/015529.html
?
-Alexander
On Tue, Nov 1, 2016 at 2:43 AM, Ryan Maclear
plan
discussions, Guillaume Boddaert opens a new line of questioning around
tuning the coverage plan itself [36].
Have a spooktacular weekend,
-Alexander Sicular
Solution Architect, Basho
@siculars
[0] https://academy.basho.com/
[1] http://basho.com/blog/
[2] https://academy.basho.com
>> - Memory seems to be ok as well:
>> KiB Mem: 15400916 total, 14493744 used, 907172 free, 36244 buffers
>>
>> - Disk is ok
>>
>> /dev/xvda1 20G 4.1G 15G 22% / # root device
>>
>> /dev/xvdb 148G 69G 72G 49% /mnt/riak-data
ame: []
> exception error: {{case_clause,{ok,{http_error,
> "exit\r\n"},<<>>}},[{mochiweb_http,request,3,[{file,"src/
> mochiweb_http.erl"},{line,107}]},{proc_lib,init_p_do_apply,
> 3,[{file,"proc_lib.erl"},{line,227}]}]}
Boddaert opens a new line of questioning around
tuning the coverage plan itself [27].
Have a safe weekend,
-Alexander Sicular
Solution Architect, Basho
@siculars
[0] https://opensource.com/life/16/9/time-series-analysis-riak-ts
[1] https://allthingsopen.org/speakers/craig-vitter/
[2] ht
Hi Vikram,
Bucket maximums aside, why are you modeling in this fashion? How will you
retrieve individual keys if you don't know the time stamp in advance? Do you
> have a lookup somewhere else? Doable with lookup keys, CRDTs, or other systems.
Are you relying on listing all keys in a bucket?
].
Have a fantastic weekend,
-Alexander Sicular
Solution Architect, Basho
@siculars
[0] https://github.com/basho/spark-riak-connector/releases/v1.6.0
[1] https://github.com/basho/spark-riak-connector/blob/master/README.md
[2] https://github.com/basho-labs/riak-mesos/releases/tag/1.3.0
[3
increase performance if you are not already doing it.
-alexander
[0] http://docs.basho.com/riak/kv/2.1.4/configuring/load-balancing-proxy/
Alexander Sicular
Solutions Architect
Basho Technologies
9175130679
@siculars
On Wed, Aug 31, 2016 at 10:41 AM, Travis Kirstine <
tki
programmatically vs parsing the command line output [17].
-
Chris Johnson runs into an overload error in Riak TS [18] and continues
a conversation on how best to deal with it, take a look at the github issue
[19].
Have a fantastic weekend,
-Alexander Sicular
Solution Architect
[21].
-
Michael Gnatz may have found a bug in Riak TS, Gordon Guthrie asks
Michael to file a bug report [22].
-
Guido Medina is looking for word on an updated Java client library that
supports Netty-4.1.x [23].
Have a rejuvenating weekend,
-Alexander Sicular
Solution Archit
Hi churcho,
LIMIT has not yet been implemented in Riak TS. It is on the roadmap
for a future release.
-Alexander
On Thu, Jul 28, 2016 at 5:25 AM, churcho wrote:
> Due to the nature of what I want to build, I would like to be able to limit
> the keys I fetch from a bucket. Is
z may have found a bug in Riak TS, Gordon Guthrie asks
Michael to file a bug report [23].
-
Guido Medina is looking for word on an updated Java client library that
supports Netty-4.1.x [24].
Have a fantastical weekend,
-Alexander Sicular
Solution Architect, Basho
@siculars
Take a look at the "pw" and "pr" tunable consistency options for gets and puts.
The base level of abstraction in Riak is the virtual node - not the physical
machine. When data is replicated it is replicated to a replica set of virtual
nodes. Those virtual nodes have primary and secondary (due
. Magnus Kessler tries to help him out [37].
Have an outstanding weekend,
-Alexander Sicular
Solution Architect, Basho
@siculars
[0] http://docs.basho.com/riak/ts/1.3.1/releasenotes/
[1] http://docs.basho.com/riak/ts/1.3.1/downloads/
[2] https://github.com/basho/riak-nodejs-client/releases/tag
to
another [26].
Have a phenomenal holiday weekend (for those in the States!),
-Alexander Sicular
Solution Architect, Basho
@siculars
[0] http://docs.basho.com/riak/kv/2.0.7/release-notes
[1] http://docs.basho.com/community/productadvisories/leveldbsegfault/
[2]
http://docs.basho.com/community
use cases over Riak KV for his social network
project [31].
-
Michael is looking for some guidance on properly sizing a Riak S2
cluster [32].
-
Psterk is looking for help on using hadoop's distcp to copy files from
Riak S2 to hadoop [33][34].
Peace and love,
-Alexander
Hi Gianluca, I'll answer inline. My question to you is what is your
use case? In general, the methods you mention could all work but there
are pros and cons with each. -Alexander
On Wed, Jun 15, 2016 at 12:46 PM, Gianluca Padovani wrote:
> Hi all,
> I'm exploring RiakKV and
for his social network
project [27].
-
Michael is looking for some guidance on properly sizing a Riak S2
cluster [28].
-
Psterk is looking for help on using hadoop's distcp to copy files from
Riak S2 to hadoop [29][30].
Do have yourselves a great weekend,
-Alexander Sicular
Solr config is a bit of an acquired skill, a black art even. Would you be
willing to shed some light on what you find to work best for you with your use
case?
Thank you,
Alexander
@siculars
http://siculars.posthaven.com
Sent from my iRotaryPhone
> On May 31, 2016, at 10:54, Steve Garon
and
>>>> the third is on a second node. That runs contrary to my understanding (link
>>>> here: http://docs.basho.com/riak/kv/2.1.4/learn/concepts/clusters/)
>>>> that the data is spread out across partitions in such a way that the
>>>> partitions are on dif
)
> Vary: Accept-Encoding
>
> {
>"props": {
>"active": true,
>"allow_mult": false,
>"basic_quorum": false,
>"big_vclock": 50,
>"chash_keyfun": {
>"fun": &
Hi Vikram,
If you're using the defaults, two of the copies may be on the same
machine. When using the default values (ring_size=64, n_val=3) you are
not guaranteed copies on distinct physical machines. Implement a
back-off retry design pattern; aka, fail once, try again with r=1.
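A minimal sketch of that back-off pattern in Python, with a stand-in `fetch(key, r)` callable in place of a real Riak client get (it is assumed to raise when the read quorum can't be met):

```python
def get_with_backoff(fetch, key, quorum_r=2, fallback_r=1):
    """Quorum read first; on failure, retry once with a relaxed r value."""
    try:
        return fetch(key, r=quorum_r)
    except Exception:
        # Fail once, try again with r=1: accept the first replica that answers.
        return fetch(key, r=fallback_r)
```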
Also, a read will
].
-
A number of folks contribute to a great thread helping Alex on his
question about write performance when writing sequential alpha-numeric keys
[33]. There is a sidebar on CRDT’s vs write_once buckets in there as well
[34].
Have a great weekend,
-Alexander Sicular
Solution
On Wed, May 11, 2016 at 10:40 PM, Alexander Sicular <sicul...@basho.com
> <javascript:_e(%7B%7D,'cvml','sicul...@basho.com');>> wrote:
>
>> Those are exactly the two options and opinions vary generally based on
>> use case. Storing the data not only takes up more space
in terms of disc space on
> an application that normally you won't be using much searching (all data is
> more or less discoverable without searching using GETs)
>
> Thanks and Best Regards,
> Alex
>
--
Alexander Sicular
Solutions Architect
Basho Technologies
9175130679
@s
in a talk entitled “Easy Time Series Analysis with Riak TS, Python, Pandas
& Jupyter” [4]. This will be a great introduction to working with Riak TS.
-
Alexander Sicular will be speaking in Dallas on May 13th about Riak KV
and assorted friends, Solr, Redis and Spark at the Global
c keys and
> what we can change to improve performance?
>
> Thanks!
>
> --
> View this message in context: How to increase Riak write performance for
> sequential alpha-numeric keys
> <http://riak-users.197444.n3.nabble.com/How-to-increase-Riak
I believe you should be looking for the get_fsm_objsize stats listed here:
http://docs.basho.com/riak/kv/2.1.4/using/cluster-operations/inspecting-node/#get-fsm-objsize
. Unless you are using consistent bucket types or write once bucket types.
-Alexander
Alexander Sicular
Solutions Architect
for some guidance on properly sizing a Riak S2
cluster [31].
## Jobs at Basho
Interested in working on distributed computing related problems? Perhaps
these open positions at Basho may be of interest:
-
Client Services Engineer (USA) [32]
Till next time,
-Alexander Sicular
g Riak2.0 and bitcask as backend, and we are writing the keys once
> into bucket, keep it for a few hours, then delete it, never update the
> existing data.
>
>
>> On Wed, Apr 20, 2016 at 11:32 AM, Alexander Sicular <sicul...@gmail.com>
>> wrote:
>> He
]
Are you working on something Riak related and would like to be highlighted?
Send me a note and let me know what you’re up to.
Weekend time!
-Alexander Sicular
@siculars
[1]
https://www.erlang-solutions.com/blog/mongooseim-1-6-2-is-out-time-to-upgrade.html
[2]
http://lists.basho.com/pipermail
>
>
> On 21-Mar-2016, at 10:18 PM, Alexander Sicular <sicul...@basho.com> wrote:
>
> Hm. Not that I know of and I hadn't heard of JugglingDB previously.
> What do you find interesting about it, Shailesh?
>
> On Monday, March 21, 2016, Shailesh Mangal <shail
//www.npmjs.com/package/jugglingdb> adapter for RIAK?
>
> - Shailesh Mangal
>
> On 18-Mar-2016, at 10:29 AM, Alexander Sicular <sicul...@basho.com
> <javascript:_e(%7B%7D,'cvml','sicul...@basho.com');>> wrote:
>
> Hello All!
>
> Here’s the latest summary of the
in working on distributed computing related problems? Perhaps
these open positions at Basho may be of interest:
-
Developer Advocate (London and US) [22]
-
Client Services Engineer (USA) [23]
-
Consulting Engineer (USA) [24]
Enjoy your weekend!
Alexander Sicular
Hi Qiang,
Check out RYOW (read-your-own-writes) semantics in this blog post, part 2 of a
4-part series:
http://basho.com/posts/technical/riaks-config-behaviors-part-2/
-Alexander
@siculars
http://siculars.posthaven.com
Sent from my iRotaryPhone
> On Mar 3, 2016, at 20:12, Qiang Cao
Hi Travis,
Beyond performance reasons, this architecture is a bad idea from an
availability perspective. If you lose one physical machine you'll lose two
segments of your Riak cluster. And that's generally "not a good thing."
-Alexander
@siculars
http://siculars.posthaven.com
Sent from my
Patrice, since Riak is distributed it uses a quorum (default 2 when
n_val is 3) to read your data. Since you're using a post-commit hook
to read your data immediately after you write it, there may be a delay,
due simply to timing, in getting the new value from two of the three
copies written. So the rule of
Besides just plainly writing a key, you could also do something like (pseudo
code):
Riak.put(canaryKey, pw=n_val) {
    if ok -> cool!
    if borked -> sad face
}
The important bit is that pw (primary write) equals your replication value. This
means that all copies in the virtual node replica set
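A sketch of that canary write in Python; `put(key, value, pw)` is a stand-in for a real Riak client put and is assumed to raise when fewer than `pw` primary vnodes acknowledge:

```python
def canary_write(put, key, n_val=3):
    """Write a canary key demanding acknowledgement from all primary replicas.

    Success implies every primary vnode in the key's replica set is up.
    """
    try:
        put(key, b"canary", pw=n_val)  # pw = replication value: all primaries
        return True   # cool!
    except Exception:
        return False  # sad face: at least one primary vnode was unavailable
```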
> wrote:
>
> Hi Alexander,
>
> I’m parsing the file and storing each row with own key in a map datatype
> bucket and each column is a register.
>
> Thanks,
> Dennis
>
> From: Alexander Sicular [mailto:sicul...@gmail.com]
> Sent: Tuesday, October 2
Hi Dennis,
It's a bit unclear what you are trying to do here. Are you 1. uploading the
entire file and saving it to one key with the value being the file? Or are you
2. parsing the file and storing each row as a register in a map?
Neither of those approaches is appropriate in Riak KV. For
Seconded. This makes your cluster so fresh, so clean for new tests.
for node in nodes:
    stop node
    do stuff (i.e. delete the data directory)
    start node
That general pattern is known as rolling restarts and is more or less how Basho
recommends doing maintenance on a Riak cluster.
Regards,
Greetings and salutations, Vanessa.
I am obliged to point out that running an n_val of 3 (or more) is highly
detrimental in a cluster smaller than 5 nodes. Needless to say, it is
not recommended. Let's talk about why that is the case for a moment.
The default level of abstraction in Riak is
That could be a problem if your box(es) is(are) dual/multi homed. Be awares!
-Alexander
@siculars
http://siculars.posthaven.com
Sent from my iRotaryPhone
On Jul 27, 2015, at 02:16, Toby Corkindale t...@dryft.net wrote:
Hi Roman,
I just set the IP to 0.0.0.0 as then it'll bind to
Hi All,
Praveen sent me the necessary config files and logs. Together with our
support team (thanks Jimmy) we were able to identify a probable cause of
this issue. Firstly, we do not recommend doing any serious testing on a
single machine with an n_val=1 environment (default replica count or
an option to enable multi-backends or bitcask,
what would our best approach be?
Thanks!
—Peter
On Jun 3, 2015, at 12:09 PM, Alexander Sicular
sicul...@gmail.com
wrote:
We are actively investigating better options for deletion of
large
amounts of keys. As Sargun mentioned, deleting
We are actively investigating better options for deletion of large amounts of
keys. As Sargun mentioned, deleting the data dir for an entire backend via an
operationalized rolling restart is probably the best approach right now for
killing large amounts of keys.
But if your key space can fit
Hi Alex!
Yes, each node will use a roughly equal amount of space, regardless of the
maximum disk space available on each.
The evenness has to do with Riak's uniform data distribution, due to the sha1
consistent hashing algorithm. The output of sha1 is a number in the range 0
to 2^160 - 1. That range is
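The distribution can be sketched in Python. This is a simplification (Riak actually hashes an Erlang term encoding of the bucket/key pair; the `/` separator here is made up), but it shows how the sha1 range maps uniformly onto the ring:

```python
import hashlib

def partition_for(bucket, key, ring_size=64):
    """Map a bucket/key pair onto the SHA-1 ring.

    sha1 yields an integer in 0 .. 2^160 - 1; the ring divides that
    range into ring_size equal slices, one per partition.
    """
    h = int(hashlib.sha1(bucket + b"/" + key).hexdigest(), 16)
    return h // (2 ** 160 // ring_size)

print(partition_for(b"users", b"alice"))
```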
Hi Jonathan,
staging (3 servers across NA)
If this means you're spreading your cluster across North America I would
suggest you reconsider. A Riak cluster is meant to be deployed in one data
center, more specifically in one LAN. Connecting Riak nodes over a WAN
introduces network latencies.
That should work but are you sure you want to expose your riak server to the
Internet? I would recommend against that and instead suggest your app talk to
some middleware.
-Alexander
@siculars
http://siculars.posthaven.com
Sent from my iRotaryPhone
On Apr 14, 2015, at 12:54, Gustavo
Not sure off the top of my head. Just look for something that does that via
Amazon S3... cause that's how you talk to RiakCS.
-Alexander
@siculars
http://siculars.posthaven.com
Sent from my iRotaryPhone
On Apr 3, 2015, at 15:04, Shankar Dhanasekaran shan...@opendrops.com wrote:
Is
Hi Alex,
It basically works the same way. Shut down riak. Locate the data folder and
delete all the stuff in it.
-Alexander
@siculars
http://siculars.posthaven.com
Sent from my iRotaryPhone
On Mar 30, 2015, at 04:41, Alex De la rosa alex.rosa@gmail.com wrote:
Hi there,
I have a
, Mar 30, 2015 at 4:14 PM, Alexander Sicular sicul...@gmail.com
javascript:_e(%7B%7D,'cvml','sicul...@gmail.com'); wrote:
Hi Alex,
It basically works the same way. Shut down riak. Locate the data folder
and delete all the stuff in it.
-Alexander
@siculars
http://siculars.posthaven.com
I'll second what Chris said. Afaik, Solr does not solve this problem for you.
Riak won't either. I just googled for sanitize solr query inputs in java and
there are quite a few hits. I'd use that as a starting point but I'm a bit
surprised there isn't a lib somewhere that makes this a non
Seconded. Deterministic materialized keys at specific time granularities are
definitely the way to go. If your frequency is high enough you could r/w data
at second or ms resolution directly into memory and then roll those up into
higher time resolutions on disk. The value, as noted, could be
Map/reduce aside, in the general case, I do time series in Riak with
deterministic materialized keys at specific time granularities, i.e.
/devices/deviceID_MMDDHHMM[SS]
So my device or app stack will drop data into a one second resolution key (if
second resolution is needed) into Riak
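A sketch of that key scheme in Python; note I've added the year to the pattern above so keys stay unique across years (my assumption, not part of the original scheme):

```python
from datetime import datetime, timezone

def series_key(device_id, ts, granularity="minute"):
    """Build a deterministic time-bucketed key, e.g. devices/<id>_YYYYMMDDHHMM[SS]."""
    fmt = "%Y%m%d%H%M%S" if granularity == "second" else "%Y%m%d%H%M"
    return "devices/%s_%s" % (device_id, ts.strftime(fmt))

ts = datetime(2015, 3, 10, 4, 41, 7, tzinfo=timezone.utc)
print(series_key("sensor-42", ts))             # devices/sensor-42_201503100441
print(series_key("sensor-42", ts, "second"))   # devices/sensor-42_20150310044107
```

Any reader or roll-up job can then reconstruct the same key from a device ID and a timestamp, with no listing or index lookup required.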
I would probably take a look at the ulimit docs:
http://docs.basho.com/riak/latest/ops/tuning/open-files-limit/ . This becomes
more acute the more nodes you run on the same os.
-Alexander
@siculars
http://siculars.posthaven.com
Sent from my iRotaryPhone
On Feb 5, 2015, at 01:35, YouBarco
I would probably add them all in one go so you have one vnode migration plan
that gets executed. What is your ring size? How much data are we talking about?
It's not necessarily the number of keys but rather the total amount of data and
how quickly that data can move en masse between machines.
Hi Ildar,
Please take a look at the docs,
http://docs.basho.com/riak/latest/ops/building/basic-cluster-setup/ , you need
to set up your IP address most likely.
-Alexander
@siculars
http://siculars.posthaven.com
Sent from my iRotaryPhone
On Jan 12, 2015, at 06:56, Ildar Alishev
?
Thank you!
12 янв. 2015 г., в 16:37, Alexander Sicular sicul...@gmail.com
javascript:_e(%7B%7D,'cvml','sicul...@gmail.com'); написал(а):
Hi Ildar,
Please take a look at the docs,
http://docs.basho.com/riak/latest/ops/building/basic-cluster-setup/ , you
need to set up your IP address most
Same client code writing to all 5 clusters?
How does the config of the 5th cluster differ from the first 4?
Quick notes:
Minimum of 5 nodes for a production deployment to ensure the default 3 replicas
are all on different physical nodes. Which is a good segue into the fact that
you shouldn't
You basically need to read and re-write each key as Geoff says. Hopefully this
gets automated in the future in some fashion.
-Alexander
@siculars
http://siculars.posthaven.com
Sent from my iRotaryPhone
On Nov 25, 2014, at 12:31, Geoff Garbers ge...@totalsend.com wrote:
Hey Zhenguo.
because both have a name stats1
and both have some value 1.
Andrew
On Sun, Oct 26, 2014 at 12:50 AM, Alexander Sicular sicul...@gmail.com
wrote:
Haven't tried it out but should stats be an array?
And the query would be something like
Stats_name = stat1 and stats_value 1
I
Haven't tried it out but should stats be an array?
And the query would be something like
Stats_name = stat1 and stats_value 1
I think the extractor flattens everything and separates with underscores.
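The flattening idea can be sketched like this in Python (a sketch of the concept only, not the actual Yokozuna extractor code):

```python
def flatten(doc, prefix=""):
    """Flatten nested JSON-style dicts, joining nested field names with underscores."""
    flat = {}
    for k, v in doc.items():
        name = prefix + "_" + k if prefix else k
        if isinstance(v, dict):
            flat.update(flatten(v, name))
        else:
            flat[name] = v
    return flat

print(flatten({"stats": {"name": "stat1", "value": 1}}))
# → {'stats_name': 'stat1', 'stats_value': 1}
```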
-Alexander
@siculars
http://siculars.posthaven.com
Sent from my iRotaryPhone
On Oct
Look into vector clock growth and siblings. You may have siblings set to true.
@siculars
http://siculars.posthaven.com
Sent from my iRotaryPhone
On Sep 25, 2014, at 03:54, riakuser01 jethrorev...@hotmail.com wrote:
Hi,
I am currently using an in-memory backend on Riak 1.4.8, and I am
So if we go by your suggestion to set up multiple nodes per physical server
- say 6 nodes per - that would be 36 Riak nodes and a corresponding Solr
instance - each hosting about 21M documents.
Something to be careful with is that as you ramp up the number of Riak
nodes on a single physical
Resizing the ring is a well known issue. A singleton riak with stock 64 ring
size should only consume more fd's.
What concerns me is the reluctance to change the n_val (replication factor). I
was under the impression this was not an issue and could be changed at any
time. I thought read
Congrats to the whole Basho team. Great achievement! -Alexander
On Tue, Sep 2, 2014 at 5:30 PM, Jared Morrow ja...@basho.com wrote:
Riak Users,
We are overjoyed to announce the final release of Riak 2.0.0.
The documentation page http://docs.basho.com/riak/latest/ has been
completely
Re. Riak pipes. What's the latest regarding accessing the pipe framework?
Haven't heard too much about it lately, admittedly haven't been listening
too hard either. The thought would be to do stormish stream processing in
situ.
@siculars
http://siculars.posthaven.com
Sent from my
This just showed up on HN:
Show HN: Decentralized, k-ordered unique IDs in Clojure
https://github.com/maxcountryman/flake
https://news.ycombinator.com/item?id=8202284
On Wed, Aug 20, 2014 at 1:23 PM, Shailesh Mangal
shailesh.man...@getzephyr.com wrote:
Hi,
I wanted to ask what are
And, afaik, a single index.xml file with multiple docs should probably be
broken up into one file per doc to make better use of the parallelism already
mentioned.
Regards,
Alexander
@siculars
http://siculars.posthaven.com
Sent from my iRotaryPhone
On Aug 14, 2014, at 10:43, Eric Redmond
Not that I know of. I believe keys are independent in this regard. Basho is
introducing sets in riak 2.0 but I don't think they will be sorted sets like
in redis.
-Alexander
@siculars
http://siculars.posthaven.com
Sent from my iRotaryPhone
On Jun 29, 2014, at 15:54, Alex De la rosa
I think the number of characters preceding an asterisk is related to the
tokenizer: whitespace or standard. One of them allows a one-character search,
the other requires three, I believe. And this worked in Riak 1.x.
As pointed out elsewhere, the docs you really should be referring to for search
I'm not sure what looking up entries... in batches of 100 from Riak devolves
into in the java client but riak doesn't have a native multiget. It either does
100 get ops or a [search]mapreduce. That might inform some of your performance
issues.
-Alexander
@siculars
pairs and push
out an array of responses. But either way, it's not really a show stopper.
Ya sugar is nice but, as you know, eventually you crash.
-Alexander Sicular
@siculars
On Mar 26, 2014, at 2:10 PM, Elias Levy fearsome.lucid...@gmail.com wrote:
On Wed, Mar 26, 2014 at 10:36 AM, Eric