Hi Joe,
You could do that. Riak currently supports millisecond resolution. Look at the
primary key composition. There are two lines, the first is the partition key
and the second is the local key. The local key denotes the sort order and is
the actual unique key for that grouping (quanta).
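For reference, the two-line key layout described above shows up directly in the table DDL. A hypothetical Riak TS table (the table and column names here are made up for illustration) might look like:

```sql
CREATE TABLE sensor_readings (
   family    VARCHAR   NOT NULL,
   series    VARCHAR   NOT NULL,
   time      TIMESTAMP NOT NULL,
   value     DOUBLE,
   PRIMARY KEY (
     (family, series, QUANTUM(time, 15, 'm')),  -- line 1: partition key
      family, series, time                      -- line 2: local key
   )
)
```

The first line groups rows into 15-minute quanta; the second determines sort order and uniqueness within each quantum.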
You
Hi, Luke,
It worked! Thanks a lot for the help and the feedback!
Felipe Esteves
Tecnologia
felipe.este...@b2wdigital.com
Tel.: (21) 3504-7162 ramal 57162
Skype: felipe2esteves
2016-12-15 14:19 GMT-02:00 Luke Bakken :
> What is the output of this command:
>
> curl 127.0.1.1:8098/types/ldb/b
What is the output of this command:
curl 127.0.1.1:8098/types/ldb/buckets?buckets=true
I see that our docs do *not* give an example of listing buckets in a
bucket type:
http://docs.basho.com/riak/kv/2.2.0/developing/api/http/list-buckets/
I have opened an issue here to improve the documentation
Hi, Luke,
The bucket_type with leveldb is called *ldb*. Buckets "teste" and "books"
already existed.
*Python3:*
>>> import riak
>>> myClient = riak.RiakClient(http_port=8098, protocol='http',
host='127.0.1.1')
>>> myBucket = myClient.bucket('foo', bucket_type='ldb')
>>> keyb = myBucket.new('bar', data='foobar')
>
> But I can't find the created bucket when I run buckets?buckets=true
> Seems to me it isn't being persisted, I'm investigating.
What is the command you are running? Please provide the complete command.
--
Luke Bakken
Engineer
lbak...@basho.com
Hi,
I've managed to correct this error of mine by using the correct backend name,
that is, *leveldb_mult* instead of *leveldb*.
Now I have another problem: the Python client returns no error, and the Riak
log is also clean.
But I can't find the created bucket when I run buckets?buckets=true.
Seems to me it
I'd completely forgotten leveldb had that advantage. Russell is correct.
Sent from my iPhone
> On Dec 8, 2016, at 4:51 PM, Russell Brown wrote:
>
> Depends on what backend you are running, no? If leveldb then this list keys
> operation can be pretty cheap.
>
> It’s a coverage query, but if it’s
Depends on what backend you are running, no? If leveldb then this list keys
operation can be pretty cheap.
It’s a coverage query, but if it’s leveldb at least you will seek to the start
of the bucket and iterate over only the keys in that bucket.
Cheers
Russell
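Russell's point about leveldb can be illustrated with sorted keys: because an ordered backend keeps keys sorted, listing one bucket's keys is a seek plus a bounded iteration rather than a scan of every key in the cluster. A rough sketch in Python (illustrative only, not leveldb's actual API; the bucket and key names are made up):

```python
import bisect

# Illustrative only: composite (bucket, key) pairs kept in sorted order,
# the way an ordered backend such as leveldb lays keys out on disk.
keys = sorted([
    ("accounts", "a1"), ("accounts", "a2"),
    ("logs", "2016-12-01"), ("logs", "2016-12-02"), ("logs", "2016-12-03"),
    ("users", "u9"),
])

def list_bucket(keys, bucket):
    """Seek to the first key in `bucket`, then iterate until the bucket ends."""
    i = bisect.bisect_left(keys, (bucket, ""))     # seek: O(log n)
    out = []
    while i < len(keys) and keys[i][0] == bucket:  # touch only this bucket's keys
        out.append(keys[i][1])
        i += 1
    return out

print(list_bucket(keys, "logs"))  # → ['2016-12-01', '2016-12-02', '2016-12-03']
```

With a hash-ordered backend like bitcask there is no such locality, which is why every key must be examined there.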
On 8 Dec 2016, at 21:19, John D
The size of the bucket has no real impact on the cost of a list keys operation
because each key on the cluster must be examined to determine whether it
resides in the relevant bucket.
-John
> On Dec 8, 2016, at 4:17 PM, Arun Rajagopalan
> wrote:
>
> Hello Riak Users
>
> I have a use case w
The process is (typically) beam.smp, though you may have multiple on your
machine, if for example, you are connected to riak via the console, or if you
are running administrative commands (e.g., riak-admin). For the ports (if that
is also what you are looking for) see:
http://docs.basho.com/ri
Hi Ricardo,
Yes, we do plan to add PERCENTILE aggregation function to the next version
of Riak TS, it is high on our to-do list. We also want to add several other
aggregators, like MEDIAN, LAST, TOP, BOTTOM, etc.
You probably noticed we have a number of aggregation functions already,
they are des
Hi Gal,
I am the product manager for Riak TS and Spark Connector. Thanks for
your questions and PRs.
We have two versions of Riak - KV and TS. They share most of the code,
but Riak TS is optimised for time series operations, while Riak KV is
optimised for key-value operations. It is correct that most KV ope
Just to follow up on this:
Andrew Thompson and I are going to start hosting a monthly lager issue/PR
triage meeting on freenode, in #lager every third Thursday of the month
starting 19 January 2017 at 1600 US/Central time, 1400 US/Pacific time, 2200
UTC for about an hour or so.
During that time
Hi Henning,
So normally, a bucket's custom properties are stored in the ring file.
It's this file which is gossiped around regularly in a cluster. When users
create hundreds/thousands of buckets with custom properties (it's been done),
it can grind the cluster almost to a standstill.
A bucket-type on
Hi guys,
I still struggle with bucket types and have some questions. Going back
a year I could not find many threads about it, but forgive me if I
missed something and am asking already-answered questions.
## Cluster-awareness
I've understood so far that bucket types are used as part of the
name
and has no
> knowledge of how long each value has persisted.
>
>
>>
>> Kind Regards,
>>
>> Jan Paulus
>>
>>
>>
>> *Von:* Andrei Zavada [mailto:azav...@contractor.basho.com]
>> *Gesendet:* Montag, 14. November 2016 16:19
>> *A
Hi Konstantin,
The RiakClient class is reentrant and thread-safe, so you should be able to
share it among the different workers. You may have to adjust the min / max
connection settings to get the most performance, but that's relatively
easy.
One other thing to notice is RiakClient's cleanup() me
Subject: Re: Uneven distribution of partitions in RIAK cluster
Hi Ray,
Riak's partition distribution is automatically calculated using our
nondeterministic `claim` algorithm. That syst
Hi Neo,
1) When building from source, the version numbers never show. They only
appear in the packaged versions. That's why you're not seeing them.
2) After creating the admin user - did you change anonymous_user_creation
back to 'off'?
3) I'm not entirely clear on what you did there - did you
On 7 November 2016 at 16:07, Luca Favatella
wrote:
>
> Hi All,
>
> What file format easy to read on an Android mobile device would you recommend
> for representing a snapshot of part of a live Riak KV store? The main aim is
> minimizing development effort on the Android device while keeping batt
Hi Alexander,
Thanks a lot for your input. I have a few follow-up questions.
> Stupid math:
>
> 3e7 x 3 (replication) / 9 = 1e7 minimum objects per node ( absolutely more
> due to obj > 1MB size )
>
> 1e7 x ~400 bytes per obj in ram = 4e9 ram per node just for bitcask. Aka 4
> GB.
>
> You alread
Hi Toby -
Thanks for reporting this. We can continue the discussion via GH issue #689.
--
Luke Bakken
Engineer
lbak...@basho.com
On Wed, Nov 23, 2016 at 9:58 PM, Toby Corkindale wrote:
> Hi,
> I'm using the Java client via protocol buffers to Riak.
> (Actually I'm using it via Scala 2.11.8 on O
Hello DeadZen,
Yes, networking interconnect becomes a bigger issue with more nodes in the
cluster. A Riak cluster is actually a fully meshed network of erlang virtual
machines. Multiple 1/10 gig nics dedicated to inter/intra networking are your
friends. That said, we have many customers running
ok I loled at this. then got worried trump could win a node election.
anyways. 24gigs per riak server is not a bad safe bet.
Erlang in general is ram heavy. It uses it more effectively than most
languages wrt concurrency, but ram is the fuel for concurrency and buffer
for operations, especially du
oman
> Automation Engineer
> Energy Metrics
> e. andre.ro...@energymetricsllc.com
> www.energymetricsllc.com | LinkedIn | Twitter
>
> -Original Message-
> From: riak-users [mailto:riak-users-boun...@lists.basho.com] On Behalf Of
> Luke Bakken
> Sent: Wednesday, November 23, 2016 10:
Hi Andre,
If you remove the load balancer, does it work?
--
Luke Bakken
Engineer
lbak...@basho.com
On Tue, Nov 22, 2016 at 10:56 AM, A R wrote:
> To whom it may concern,
>
>
> I've set up a 2 riak ts nodes and a load-balancer on independent machines.
> I'm able to successfully create tables, li
Hi Alexander,
Thanks for responding.
> How many nodes?
We currently have 9 nodes in our cluster.
> How much ram per node?
Each node has 4GB of ram and 4GB of swap. The memory levels (ram + swap) on
each node are currently between 4GB and 5.5GB.
> How many objects (files)? What is the average
Hi Daniel,
Ya, I'm not surprised you're having issues. 4GB ram is woefully underspecced. 😔
🤓Stupid math:
3e7 x 3 (replication) / 9 = 1e7 minimum objects per node ( absolutely more due
to obj > 1MB size )
1e7 x ~400 bytes per obj in ram = 4e9 ram per node just for bitcask. Aka 4 GB.
You already
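Alexander's back-of-the-envelope estimate above can be reproduced directly; note that the inputs (3e7 objects, replication factor 3, 9 nodes, ~400 bytes of bitcask keydir overhead per object) are his stated assumptions, not measurements:

```python
objects = 3e7          # total objects (files) in the cluster
n_val = 3              # replication factor
nodes = 9              # cluster size
bytes_per_obj = 400    # assumed bitcask keydir overhead per object, in RAM

objs_per_node = objects * n_val / nodes       # 1e7 objects per node, minimum
ram_per_node = objs_per_node * bytes_per_obj  # 4e9 bytes, i.e. ~4 GB

print(objs_per_node, ram_per_node / 1e9)  # → 10000000.0 4.0
```

Since bitcask keeps every key's keydir entry in RAM, this 4 GB floor already consumes the entire 4 GB per node described in the thread, before any other memory use.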
Hi Daniel,
How many nodes?
-You should be using 5 minimum if you're using the default config. There
are reasons.
How much ram per node?
-As you noted, in Riak CS, 1MB file chunks are stored in bitcask.
Their key names and some overhead consume memory.
How many objects (files)? What is the average f
I found a similar question from over a year ago (
http://lists.basho.com/pipermail/riak-users_lists.basho.com/2015-July/017327.html),
and it sounds like leveldb is the way to go, although possibly not well
tested. Has anything changed with regard to Basho's (or anyone else)
experience with using le
Hi Mav,
I opened the following issue to continue investigation:
https://github.com/basho/riak_kv/issues/1541
That would be the best place to continue discussion. I'll find time to
reproduce what you have reported.
Thanks -
--
Luke Bakken
Engineer
lbak...@basho.com
On Fri, Nov 18, 2016 at 4:57
No luck :(
I set up a bucket type called test-bucket-type. I did NOT set data type.
I set the hooks
Ran your curl -X PUT. The hook was not called. Tried several times, no luck.
I changed the curl to hit my non-typed bucket, and the commit hook fired.
$ riak-admin bucket-type list
default (active)
tes
Thanks for correcting that. Everything looks set up correctly.
How are you saving objects? If you're using HTTP, what is the URL?
Can you associate your precommit hook with a bucket type
("test-bucket-type" below) that is *not* set up for the "map" data
type and see if your hook is called correct
Here you go ...
For the second url - I think you meant testbucket without the hyphen. Also,
I think you have an extra "props" in there.
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 8098 (#0)
> GET /types/maps/props HTTP/1.1
> User-Agent
What is the output of these commands?
curl -4vvv localhost:8098/types/maps/props
curl -4vvv localhost:8098/types/maps/props/buckets/test-bucket/props
On Fri, Nov 18, 2016 at 2:21 PM, Mav erick wrote:
> Luke
>
> I was able to change the properties with your URL, but still the hooks are
> not bei
Mav -
You're not using the correct HTTP URL. You can use this command:
http://docs.basho.com/riak/kv/2.1.4/using/reference/bucket-types/#updating-a-bucket-type
Or this URL:
curl -XPUT localhost:8098/types/maps/props -H 'Content-Type:
application/json' -d
'{"props":{"precommit":[{"mod":"myhooks"
Hi Luke
I tried that and it didn't work for a bucket with bucket type = maps. My
erlang code below does work for buckets without types.
But I think it's because I didn't set the hook for the typed bucket
correctly. Could you check my curl below, please?
I did this to set the hook
curl -X PUT localho
Mav -
Please remember to use "Reply All" so that the riak-users list can
learn from what you find out. Thanks.
Thebucket = riak_object:bucket(Object),
Can you check to see if "Thebucket" is really a two-tuple of
"{BucketType, Bucket}"? I believe that is what is returned.
--
Luke Bakken
Engineer
Mav -
Can you go into more detail? The subject of your message is
"initializing a commit hook".
--
Luke Bakken
Engineer
lbak...@basho.com
On Thu, Nov 17, 2016 at 9:09 AM, Mav erick wrote:
> Folks
>
> Is there way RIAK can call an erlang function in a module when RIAK starts
> up ?
>
> Thanks
>
ailable 24 X 7
From: Alexander Sicular [mailto:sicul...@basho.com]
Sent: 18 November 2016 10:03
To: Krishnamurthy,R,Rajaa,TAB13 C; riak-users@lists.basho.com
Cc: Balaji,H,Hari,TAB13 C
Subject: Re: Riak TS Agility on handling Petabytes of Data
Hi Rajaa,
What's your retention policy
Hi Rajaa,
What's your retention policy? At the moment, TS supports a global TTL.
What's your read pattern? Is this a metrics or logging use case, i.e. can
you downsample?
Thanks,
Alexander
On Fri, Nov 18, 2016 at 03:25 wrote:
> Dear Team,
>
>
>
> As a process of validation, we would like to kno
This is awesome...! Many thanks Magnus - much appreciated...
Must have overlooked some of these details in my initial analysis but am
sure I have a very good starting point / details now!
Thanks again!
On Thu, Nov 17, 2016 at 7:48 AM, Magnus Kessler wrote:
> On 16 November 2016 at 17:40, Vikra
On 16 November 2016 at 17:40, Vikram Lalit wrote:
> Hi - I am trying to leverage CRDT sets to store chat messages that my
> distributed Riak infrastructure would store. Given the intrinsic
> conflict-resolution, I thought this might be more beneficial than me
> putting together a merge implemen
Cheers Luca, easy when you know how ;-)
PR has been made.
//Sean.
On Tue, Nov 15, 2016 at 9:31 AM, Luca Favatella <
luca.favate...@erlang-solutions.com> wrote:
> On 15 November 2016 at 09:17, sean mcevoy wrote:
> [...]
>
>> Hi Basho guys,
>>
>> What's your procedure on reporting documentation b
On 15 November 2016 at 09:17, sean mcevoy wrote:
[...]
> Hi Basho guys,
>
> What's your procedure on reporting documentation bugs?
>
>
>
Hi Sean,
I understand the source of the docs is at
https://github.com/basho/basho_docs and the usual pull requests workflow
applies.
Regards
Luca
Thank you Magnus.
On Mon, Nov 14, 2016 at 7:06 AM, Magnus Kessler wrote:
> On 12 November 2016 at 00:08, Johnny Tan wrote:
>
>> When doing a node replace (http://docs.basho.com/riak/1.
>> 4.12/ops/running/nodes/replacing/), after commit-ing the plan, how does
>> the cluster handle reads/writes?
Hi Arun -
When you install Riak it installs the Erlang VM to a well-known location,
like /usr/lib/riak/erts-5.9.1
You can use /usr/lib/riak/erts-5.9.1/bin/erlc and know that it is the same
Erlang that Riak is using.
--
Luke Bakken
Engineer
lbak...@basho.com
On Mon, Nov 14, 2016 at 11:20 AM, Aru
Hi Ray,
Riak's partition distribution is automatically calculated using our
nondeterministic `claim` algorithm. That system is able to re-balance
clusters, but is typically only run during membership operations; joining,
leaving, or replacing nodes. The uneven partition distribution won
Hi Anthony,
Looking at the linked issue it appears that the 503 response can be
returned erroneously when communication between a Riak CS and Riak
node has an error ("If the first member of the preflist is down").
Is there anything predictable about these errors? You say they come
from 1 client o
> *Von:* Andrei Zavada [mailto:azav...@contractor.basho.com]
> *Gesendet:* Montag, 14. November 2016 16:19
> *An:* Jan Paulus
> *Cc:* riak-users@lists.basho.com
> *Betreff:* Re: Query data with Riak TS
>
>
>
> Hello Jan,
>
>
>
> Replying to your questions inline:
Hello Jan,
Replying to your questions inline:
> Hi,
we are testing Riak TS for our application right now. I have a couple of
> questions about how to query the data. We are measuring electric power which
> comes in at odd time intervals.
>
> 1. Is it possible to query the value which has been r
On 12 November 2016 at 00:08, Johnny Tan wrote:
> When doing a node replace (http://docs.basho.com/riak/1.
> 4.12/ops/running/nodes/replacing/), after commit-ing the plan, how does
> the cluster handle reads/writes? Do I include the new node in my app's
> config as soon as I commit, and let riak
On 12 November 2016 at 03:00, Jing Liu wrote:
> Hi,
>
> When I try to simply test the throughput of Riak in a setting where I
> just start a single node and use two clients to issue requests, I
> get connection refused once the number of client threads sending GET
> requests exceeds about 400. Actu
Hi Mav,
I've never written any commit hooks myself really, but it looks like we
have some examples you could check out here:
https://github.com/basho/riak_function_contrib
Additionally, if you haven't already, you should check out the official
Basho commit hook docs here:
http://docs.basho.com/
/2.1.4/developing/usage/commit-hooks/ ?
Thanks,
Pavel
*Pavel Hardak* | Director of Product Management @ Basho
-- Forwarded message --
> From: Matthew Von-Maszewski
> Date: Tue, Nov 1, 2016 at 1:01 PM
> Subject: Re: RiakTS TTL
> To: Joe Olson
> Cc: riak-users
1. The global expiry module is an external C++ module that is open source.
There is no definition at this time for an Erlang callback, but the design
supports it. You can patch the open source code now.
2. The TTL has two components: when the record is written and number of
minutes until e
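The two TTL components Matthew describes (a write timestamp plus a minutes-until-expiry value) can be sketched as follows; the function and field names here are made up for illustration and are not leveldb's actual on-disk format:

```python
from datetime import datetime, timedelta

def is_expired(written_at, ttl_minutes, now):
    """A record expires once `ttl_minutes` have elapsed since it was written."""
    return now >= written_at + timedelta(minutes=ttl_minutes)

written = datetime(2016, 11, 1, 12, 0)
print(is_expired(written, 60, datetime(2016, 11, 1, 12, 59)))  # → False
print(is_expired(written, 60, datetime(2016, 11, 1, 13, 0)))   # → True
```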
Hi Alexander,
Excellent! Thanks for the feedback - I will see what I can find there.
Regards,
Ryan
On Tue, Nov 1, 2016 at 11:06 AM, Alexander Sicular
wrote:
> Hi Ryan, yes, you can change a number of settings. Have you had a look
> at http://docs.basho.com/riak/kv/2.1.4/using/admin/riak-admin
Hi Ryan, yes, you can change a number of settings. Have you had a look
at http://docs.basho.com/riak/kv/2.1.4/using/admin/riak-admin/#transfer-limit
and
http://lists.basho.com/pipermail/riak-users_lists.basho.com/2014-July/015529.html
?
-Alexander
On Tue, Nov 1, 2016 at 2:43 AM, Ryan Maclear wr
On 29 October 2016 at 19:59, vmalhotra wrote:
> We run an 8 node RIAK cluster in our Prod environment. A lot of the time, the
> RIAK process stops and we also notice out-of-memory issues. Typically, we
> restart the affected node to recover from the issue. I thought of using
> Supervisor to control the
Hi Pratik,
From the exception message, you are missing the Joda-Time jar; download one
and put it in your classpath.
If you use Maven, it will download the dependency for you automatically.
Hope this helps.
Ajax
On Friday, 28 October 2016, Pratik Kulkarni wrote:
> Hi All,
>
> I am working on a distributed file
Take a look at the AAE settings here:
http://docs.basho.com/riak/kv/latest/using/cluster-operations/active-anti-entropy/
-Alexander
@siculars
http://siculars.posthaven.com
Sent from my iRotaryPhone
> On Oct 26, 2016, at 16:17, Steven Joseph wrote:
>
> I don't think you should disable AAE, y
Hi Rohit,
Mochiweb's max connections are set as an argument to the start()
function. I don't believe there is a way to increase it at run time.
If you're hitting the listen backlog, your servers aren't able to keep
up with the request workload. Are you doing any listing or mapreduce
operations?
-
I don't think you should disable AAE, you can tune its frequency.
Steven
On Thu, 27 Oct 2016 03:50 Ricardo Mayerhofer wrote:
> Yes, I'll check if the problem is the AAE! I will disable it and see the
> results.
>
> Thanks Steven!
>
> On Tue, Oct 25, 2016 at 6:54 PM, Steven Joseph
> wrote:
>
>
Yes, I'll check if the problem is the AAE! I will disable it and see the
results.
Thanks Steven!
On Tue, Oct 25, 2016 at 6:54 PM, Steven Joseph wrote:
> Hi Ricardo,
>
> If you are using systemd you might have to check LimitNOFILE for your units.
> Active anti entropy runs periodically.
>
> Steven
>
Hi Ricardo,
If you are using systemd you might have to check LimitNOFILE for your units.
Active anti entropy runs periodically.
Steven
On Wed, 26 Oct 2016 04:36 Ricardo Mayerhofer wrote:
> What's weird is that the node crashes every minute at the same second. Is
> there anything Riak may be runnin
What's weird is that the node crashes every minute at the same second. Is
there anything Riak may be running every minute?
On Mon, Oct 24, 2016 at 8:28 PM, Ricardo Mayerhofer
wrote:
> I'm also pasting the free -m:
>
>              total       used       free     shared    buffers     cached
> Me
I'm also pasting the free -m:
             total       used       free     shared    buffers     cached
Mem:         15039      14557        482          0         37       4594
-/+ buffers/cache:       9925       5114
Swap:            0          0          0
On Mon, Oct 24, 2016 at 8:24 PM, Rica
Hi Alexander,
Thanks for your response. We use multi-backend with bitcask and leveldb.
- File descriptors seems to be ok, at least the config.
ubuntu@ip-10-2-58-5:/var/log/riak$ sudo su riak
sudo: unable to resolve host ip-10-2-58-5
riak@ip-10-2-58-5:/var/log/riak$ ulimit -n
65535
- Memory seems
Disk, memory or file descriptors would be my guess. Bitcask?
On Monday, October 24, 2016, Ricardo Mayerhofer
wrote:
> Hi all,
> I have a Riak 1.4 where the nodes seems to be constantly crashing. All 5
> nodes are affected.
>
> However it seems Riak manage to get them up again.
>
> Any idea on wh
On 21 October 2016 at 03:45, AJAX DoneBy Jack wrote:
> Hello Basho,
>
> Today I turned on security on my cluster but riak_explorer stopped working
> after that.
> Anything I need to check on riak_explorer to make it works again?
>
> Thanks,
> Ajax
>
>
Hi Ajax,
After turning on Riak security, all
Hi Alex,
I retry again and this time it works, here is the query string:
"{!type=edismax qf='title_s content_s'}riak solr"
Thanks,
Ajax
> On Oct 17, 2016, at 10:55 AM, Alex Moore wrote:
>
> Hey Ajax,
>
> Have you tried adding those parameters to the LocalParameters {!dismax} block?
>
> e.g
One of the catches regarding the quantum limit is that unless the
query starts exactly on a boundary, the effective limit is one fewer
because it is determined by the number of partitions the query has to
touch.
I suspect that's the behavior you're seeing.
Sent from my iPhone
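The boundary effect described above is easy to check numerically: a query range of a given length touches one more quantum partition when it does not start exactly on a boundary. A small sketch (the function name is made up):

```python
def quanta_touched(start_ms, end_ms, quantum_ms):
    """Number of quantum partitions a [start, end) time range overlaps."""
    first = start_ms // quantum_ms
    last = (end_ms - 1) // quantum_ms
    return last - first + 1

q = 15 * 60 * 1000                      # 15-minute quantum, in milliseconds
print(quanta_touched(0, 4 * q, q))      # aligned on a boundary → 4 quanta
print(quanta_touched(1, 4 * q + 1, q))  # same length, shifted 1 ms → 5 quanta
```

This is why, for a misaligned query, the effective quantum limit is one fewer than the configured maximum.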
> On Oct 17, 2016,
Hi Alex,
I did try that in the Java client using PB, but I'm getting an exception; I
can paste the error here tonight.
Thanks,
Ajax
On Monday, 17 October 2016, Alex Moore wrote:
> Hey Ajax,
>
> Have you tried adding those parameters to the LocalParameters {!dismax}
> block?
>
> e.g.: {!type=dismax qf=
Hey Ajax,
Have you tried adding those parameters to the LocalParameters {!dismax}
block?
e.g.: {!type=dismax qf='myfield yourfield'}solr rocks
http://wiki.apache.org/solr/LocalParams#Basic_Syntax
Thanks,
Alex
On Fri, Oct 14, 2016 at 3:18 PM, AJAX DoneBy Jack
wrote:
> Hello Basho,
>
> I am v
The internal solr API will not use the distributed queries generated from
coverage plans. You will only get results from the local node. Theoretically,
you could aggregate and de-duplicate across multiple nodes, but that would
result in more data movement than necessary, as it does not leverag
Hi Magnus,
So you suggest using the HTTP API, right? That day I was thinking of querying
the internal Solr HTTP endpoint directly. Could you advise what the
difference is between the Riak HTTP API and the internal Solr HTTP API? What
are the pros and cons of using each?
Thanks,
Ajax
On Monday, 17 October 2016, Magnu
On 14 October 2016 at 20:18, AJAX DoneBy Jack wrote:
> Hello Basho,
>
> I am very new to Riak Search. I know I can add {!dismax} before the query
> string to use it, but I don't know how to specify qf or other dismax-related
> parameters in the Riak Java Client. Could you advise?
>
> Thanks,
> Ajax
>
Hi Ajax
On 12 October 2016 at 19:07, Travis Kirstine <
tkirst...@firstbasesolutions.com> wrote:
> Does the riak claimant node have higher load than the other nodes?
>
Hi Travis,
The role of the claimant node is simply to coordinate certain cluster
related operations that involve changes to the ring, suc
For what it's worth, I've successfully used KV features inside of Riak
TS and tested it quite a lot, including with a heavy load. As John said,
I didn't use multi-backend and I disabled AAE.
Riak TS was happy when using Gets, Sets, bucket types, and including
CRDTs (I tested only the Set CRDTs).
There are several important KV components such as AAE and Search with which TS
integration has not been sufficiently tested. At this time we still recommend
running multiple clusters as mixed use workloads in Riak TS are currently not
supported.
Your internal testing may reveal that small supp
So are you suggesting that it's not advisable to use even KV backed by LevelDB
in a TS instance, or is it more that performance is unknown and therefore not
guaranteed / supported? If the data is otherwise "safe", this may still be a
better option for us than running separate clusters.
Are ot
We have not done any work to support the multi-backend, hence the error you’re
seeing. TS depends exclusively on leveldb.
We’re not recommending the use of KV functionality in the TS product yet,
because the latter is still changing rapidly and we will need to go back and
fix some basic KV mech
Travis -
What are the client failures you are seeing? What Riak client library
are you using, and are you using the PB or HTTP interface to Riak?
The error message you provided indicates that the ping request
returned from Riak after haproxy closed the socket for the request.
One cause would be v
Hi Brandon -
The riak_object module exports a type() function that will return the
bucket type of an object in Riak
(https://github.com/basho/riak_kv/blob/develop/src/riak_object.erl#L589-L592).
MapReduce docs:
http://docs.basho.com/riak/kv/2.1.4/developing/app-guide/advanced-mapreduce/
In addit
Thanks for your response Charlie! Riak KV is a great product, but as any
other product it must evolve to keep ahead of its competitors.
Glad to hear that improvements are coming :)
On Tue, Oct 4, 2016 at 10:18 AM, Charlie Voiselle
wrote:
> Ricardo:
>
> Thank you for your question, and I want to
Ricardo:
Thank you for your question, and I want to reassure you that Riak KV is still
very much under active development. Work that is being done in the Riak TS
codebase is being used to improve Riak KV where it applies. Riak KV 2.2 is
coming soon and will include these new features:
Global O
Hiya Alexander,
Thanks much indeed for the detailed note... very interesting insights...
As you deduced, I actually omitted some pieces from my email for the sake
of simplicity. I'm actually leveraging a transient / stateless chat server
(ejabberd) wherein messages get delivered on live sessions
Hi Luke - many thanks... actually I was planning to have different bucket
types have a different n_val. Or I might end up doing so... the thinking
being that I intend to start my production workloads with fewer
replications, but as the system matures / stabilizes (and also increases in
userbase!),
Hi Vikram,
Bucket maximums aside, why are you modeling in this fashion? How will you
retrieve individual keys if you don't know the time stamp in advance? Do you
have a lookup somewhere else? Doable as lookup keys or crdts or other systems.
Are you relying on listing all keys in a bucket? Defin
Hi Vikram,
If all of your buckets use the same bucket type with your custom
n_val, there won't be a performance issue. Just be sure to set n_val
on the bucket type, and that all buckets are part of that bucket type.
http://docs.basho.com/riak/kv/2.1.4/developing/usage/bucket-types/
--
Luke Bakke
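Luke's advice about setting n_val on the bucket type rather than on individual buckets looks roughly like this on the command line (the type name "chat" is made up for illustration):

```shell
riak-admin bucket-type create chat '{"props":{"n_val":5}}'
riak-admin bucket-type activate chat
riak-admin bucket-type status chat
```

Every bucket created under that type then inherits the custom n_val without adding per-bucket properties to the ring.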
On 28 September 2016 at 17:59, Nguyen, Kyle wrote:
> Thank you for your quick reply, Magnus! We're considering using bucket
> types to support multi-tenancy in our system. Hence, all objects stored
> within the namespace of the bucket type, and the bucket type itself, need to
> be removed once the client has d
[mailto:mkess...@basho.com]
Sent: Wednesday, September 28, 2016 2:13 AM
To: Nguyen, Kyle
Cc: Riak Users
Subject: Re: Delete bucket type
On 27 September 2016 at 20:50, Nguyen, Kyle
mailto:kyle.ngu...@philips.com>> wrote:
Hi all,
Is deleting bucket type possible in version 2.1.4? If not, is
On 27 September 2016 at 20:50, Nguyen, Kyle wrote:
> Hi all,
>
>
>
> Is deleting bucket type possible in version 2.1.4? If not, is there any
> workaround or available script/code that we can do this in a production
> environment without too much performance impact?
>
>
>
> Thanks
>
>
>
> -Kyle-
>
On Mon, Sep 26, 2016 at 10:13 AM, Matthew Von-Maszewski
wrote:
> Neither.
>
> The leveldb instance creates a snapshot of the current files and generates a
> working MANIFEST to go with them. That means the snapshot is in “ready to
> run” condition. This is based upon hard links for the .sst tabl
Neither.
The leveldb instance creates a snapshot of the current files and generates a
working MANIFEST to go with them. That means the snapshot is in “ready to run”
condition. This is based upon hard links for the .sst table files.
The user can then choose to copy that snapshot elsewhere, poi
Is there a backup tool that uses this yet? Or is this meant more to be used
with snapshots provided through xfs/zfs?
On Monday, September 26, 2016, Matthew Von-Maszewski
wrote:
> Here are notes on the new hot backup:
>
> https://github.com/basho/leveldb/wiki/mv-hot-backup
>
> This sound like what y
nice post, not a big fan of the proxy design.
On Monday, September 26, 2016, Andra Dinu
wrote:
> Hi,
>
> This post is a story about investigating a struggling Riak cluster,
> finding out why Riak's usual self-healing processes got stuck, and how our
> operations and maintenances tool WombatOAM c