Re: Doc typo

2016-11-15 Thread Luca Favatella
On 15 November 2016 at 09:17, sean mcevoy  wrote:
[...]

> Hi Basho guys,
>
> What's your procedure on reporting documentation bugs?
>
>
>
Hi Sean,

I understand the source of the docs is at
https://github.com/basho/basho_docs and the usual pull request workflow
applies.

Regards
Luca


Re: reads/writes during node replacement

2016-11-14 Thread Johnny Tan
Thank you Magnus.

On Mon, Nov 14, 2016 at 7:06 AM, Magnus Kessler  wrote:

> On 12 November 2016 at 00:08, Johnny Tan  wrote:
>
>> When doing a node replace (http://docs.basho.com/riak/1.4.12/ops/running/nodes/replacing/),
>> after commit-ing the plan, how does
>> the cluster handle reads/writes? Do I include the new node in my app's
>> config as soon as I commit, and let riak internally handle which node(s)
>> will do the reads/writes? Or do I wait until the ringready on the new node
>> before being able to do reads/writes to it?
>>
>> johnny
>>
>>
> Hi Johnny,
>
> As soon as a node has been joined to the cluster it is capable of taking
> on requests. `riak-admin ringready` returns true after a join or leave
> operation when the new ring state has been communicated successfully to all
> nodes in the cluster.
>
> During a replacement operation, the leaving node will hand off [0] all its
> partitions to the joining node. Both nodes can handle requests during this
> phase and store data in the partitions they own. Once the leaving node has
> handed off all its partitions, it will automatically shut down. Please keep
> this in mind when configuring your clients or load balancers. Clients
> should deal with nodes being temporarily or permanently unavailable.
>
> Kind Regards,
>
> Magnus
>
> [0]: http://docs.basho.com/riak/kv/2.1.4/using/reference/handoff/
>
> --
> Magnus Kessler
> Client Services Engineer
> Basho Technologies Limited
>
> Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
>


Re: Calling erlang shell from the command line

2016-11-14 Thread Luke Bakken
Hi Arun -

When you install Riak it installs the Erlang VM to a well-known location,
like /usr/lib/riak/erts-5.9.1

You can use /usr/lib/riak/erts-5.9.1/bin/erlc and know that it is the same
Erlang that Riak is using.
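
For example (just a sketch; the erts version and install path vary by package
and platform), compiling from your project directory with that bundled
compiler would look like:

/usr/lib/riak/erts-5.9.1/bin/erlc myprog.erl

which leaves a myprog.beam built with the same Erlang/OTP release Riak itself
runs on.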

--
Luke Bakken
Engineer
lbak...@basho.com

On Mon, Nov 14, 2016 at 11:20 AM, Arun Rajagopalan <
arun.v.rajagopa...@gmail.com> wrote:
> Hi RIAK users
>
> I would like to attach to the riak shell and compile an .erl program and
> quit. The reason is I want to be absolutely sure I am building the erl
> program with the version of erlang that my riak installation has
>
> Something like
> riak attach c(myprog.erl).
> Ctrl-C a


Re: Uneven distribution of partitions in RIAK cluster

2016-11-14 Thread Drew Pirrone-Brusse
Hi Ray,

Riak's partition distribution is calculated automatically by our
nondeterministic `claim` algorithm. That system is able to re-balance
clusters, but it typically only runs during membership operations: joining,
leaving, or replacing nodes. The uneven partition distribution won't
self-heal unless you add a new node to this cluster.

We can force a re-balance of this sort of uneven distribution by
temporarily switching from `claim_v2` to `claim_v3`, and triggering a
membership recalculation. `claim_v3` is still an experimental system that
is much more aggressive about avoiding preflist violations and lumpy
claims, without much regard for limiting the scope of membership changes.
With `claim_v2`, the addition of a new node to an existing cluster will
almost always only involve moving partitions off of existing nodes and onto
the new node. With `claim_v3`, it's somewhat common to see partitions also
being moved between existing nodes in order to prevent lumpy claims.

These unpredictable spikes in transfer activity during membership changes
have caused serious problems for our customers in the past, and they are
nearly impossible to plan for, so we don't advise using `claim_v3` for the
majority of operations.

To enable `claim_v3` and trigger a re-balance of the ring,

1. Enable the use of `claim_v3` by opening a `riak attach` session on any
node in this cluster, and running the below snippets,

rpc:multicall(application, set_env, [riak_core, wants_claim_fun,
{riak_core_claim, wants_claim_v3}]).
rpc:multicall(application, set_env, [riak_core, choose_claim_fun,
{riak_core_claim, choose_claim_v3}]).

(Please note, the `.`s are syntactically significant in Erlang, and you can
exit `attach` sessions with `ctrl+g, q, enter`.)
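
If you want to confirm that both settings took effect on every node before
moving on, an optional sanity check from the same `riak attach` session is:

rpc:multicall(application, get_env, [riak_core, wants_claim_fun]).
rpc:multicall(application, get_env, [riak_core, choose_claim_fun]).

Each node should report {ok,{riak_core_claim,wants_claim_v3}} and
{ok,{riak_core_claim,choose_claim_v3}} respectively.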

2. Determine which node is currently the Claimant by running `riak-admin
ring-status` on any node in the cluster. Look for the line similar to
`Claimant: 'dev2@127.0.0.1'`.

3. Stop the claimant. In this case I would run `riak stop` on dev2@127.0.0.1
.

4. Trigger the election of a new claimant by marking the current claimant
DOWN in the ring. In this case, I would run `riak-admin down dev2@127.0.0.1`
on any active node in this cluster.

5. Verify the reelection with `riak-admin ring-status` (checking to make
sure the claimant has changed), and restart the node that was previously
stopped.

At this time the rebalance should have occurred and membership transfers
started.
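
(You can watch the progress of those transfers with `riak-admin transfers` on
any node.)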

6. To disable `claim_v3`, open another `riak attach` session on any node in
this cluster, and run the below snippets,

rpc:multicall(application, set_env, [riak_core, wants_claim_fun,
{riak_core_claim, default_wants_claim}]).
rpc:multicall(application, set_env, [riak_core, choose_claim_fun,
{riak_core_claim, default_choose_claim}]).

This can be done while the transfers are in-flight. The new plan will have
already been injected into the ring.

I hope this helps.
Best regards,
-Drew

On Fri, Nov 11, 2016 at 2:13 PM, Semov, Raymond <rse...@ebay.com> wrote:

> I have a 5-node cluster with 12 partitions in 4 of the nodes and 16
> partitions in node #5. That is causing dangerously high disk utilization in
> that node. I plowed thru the documentation and Googled the hell out of it
> but I can’t find info on how to rebalance the extra 4 partitions onto the 4
> underutilized nodes. The docs say the cluster balances itself but that’s
> apparently not the case here. Can anyone give any suggestions?
> I run RIAK version 1.4.8 on Linux kernel 3.13
> Ray
>


Re: Riak-CS False 503 Service Unavailable Error on Client

2016-11-14 Thread Luke Bakken
Hi Anthony,

Looking at the linked issue, it appears that the 503 response can be
returned erroneously when communication between a Riak CS node and a Riak
node fails ("If the first member of the preflist is down").

Is there anything predictable about these errors? You say they come
from 1 client only?
--
Luke Bakken
Engineer
lbak...@basho.com


On Tue, Nov 1, 2016 at 7:51 AM, Valenti, Anthony
 wrote:
> We are having a lot of 503 Service Unavailable errors for 1 particular
> application client(s) when connecting to Riak-CS.  Everything looks fine in
> Riak/Riak-CS and when I check the Riak-CS access logs, I can see access from
> other applications to other buckets before, during and after the reported
> error time.  We have a 5 node cluster that are load balanced and all of them
> seem to be operating normally and they should be able to handle the incoming
> connections.  We did find this Jira in Github which looks like our exact
> problem (https://github.com/basho/riak_cs/issues/1283) , but there is 1
> comment and it was closed and I’m not sure of the fix/workaround/resolution.
> Has this been resolved in a later version than we are using – riak cs
> 1.5.3-1?  Is there a way to correct the issue from the client side?
>
>
>
> Thanks,
>
> Anthony
>
>


Re: Query data with Riak TS

2016-11-14 Thread Andrei Zavada
On Mon, Nov 14, 2016 at 5:44 PM, Jan Paulus <pau...@next-kraftwerke.de>
wrote:

> Hello Andrei,
>
>
>
> thanks for the answer. It is good news that 1.5 will get the ODER BY and
> LIMIT functionality.
>
> Unfortunately the current AVG implementation is not exactly what we are
> looking for. The current implementation will sum up all points in the time
> range and divide it by the number of points. Currently Riak TS returns
> 4.5 for my example. We are looking for an average which takes into account
> how long a value was persistent.
>
>
This seems to be a peculiar requirement that Riak TS cannot accommodate
(sorry for not reading your explanation carefully enough).  Indeed, the AVG
function operates on numeric values as isolated scalars, and has no
knowledge of how long each value has persisted.


>
> Kind Regards,
>
> Jan Paulus
>
>
>
> *Von:* Andrei Zavada [mailto:azav...@contractor.basho.com]
> *Gesendet:* Montag, 14. November 2016 16:19
> *An:* Jan Paulus
> *Cc:* riak-users@lists.basho.com
> *Betreff:* Re: Query data with Riak TS
>
>
>
> Hello Jan,
>
>
>
> Replying to your questions inline:
>
>
>
>  Hi,
>
> we are testing Riak TS for our Application right now. I have a couple of
> questions about how to query the data. We are measuring electric power which
> comes in at odd time intervals.
>
>
>
> 1. Is it possible to query the value which has been received before a
> given time? In other words: Get the current / last value of a series.
>
>
>
> In Riak TS 1.4 (the latest release), you still have to specify the exact
> range ​in the WHERE clause.  Because of that, if you are only interested
> in a single record that was added last before a certain time, you have to
> do some guesswork to arrive at the suitable boundaries.  Depending on the
> rate of data ingress, the selection returned may contain one, a few, too
> many, or no results at all.  One workaround is to issue a sequence of
> SELECTs, starting from a small range, with the lower boundary of the range
> progressively moved back until you have the last record included (and
> hopefully not many previous records which you are going to discard anyway).
>
>
>
> In 1.5, we will support ORDER BY and LIMIT clauses which will allow a much
> simpler solution, e.g.:
>
>
>
>  SELECT value FROM table
>  WHERE time > 0 AND time < $now
>  ORDER BY time DESC LIMIT 1
>
>
>
> 2. Is it possible to query the average in respect to time?
> For instance you have such a measurement reading:
> time| value
> -
> 09:31   |  4
> 10:02   |  6
> 10:05   |  3
> And you want to query the average from 10:00 to 10:10. I would expect a
> value of 4.1
> 4 * 20% + 6 * 30% + 3 * 50% = 4.1
>
>
>
> It is definitely possible:
>
>
>
>  SELECT AVG(value) FROM table
>  WHERE time >= '2016-11-22 10:00' AND time < '2016-11-22 10:10'
>
>
> Note that you can specify timestamp values in ISO8601 format:
> http://docs.basho.com/riak/ts/1.4.0/using/timerepresentations
>
>
>
> ​
>
> 3. Is it possible to query 15-minute average values for the last day? I
> would expect 96 values with the average as described in question 2.
>
>
>
> No, you will have to issue 96 separate queries, each similar to the one in
> (2) but with a different 15-min range.​
>
>
>
>
>
> Thanks a lot.
> Jan Paulus
>
>
>


Re: Query data with Riak TS

2016-11-14 Thread Andrei Zavada
Hello Jan,

Replying to your questions inline:


>  Hi,

> we are testing Riak TS for our Application right now. I have a couple of
> questions about how to query the data. We are measuring electric power which
> comes in at odd time intervals.
>


> 1. Is it possible to query the value which has been received before a
> given time? In other words: Get the current / last value of a series.
>

In Riak TS 1.4 (the latest release), you still have to specify the exact
range ​in the WHERE clause.  Because of that, if you are only interested in
a single record that was added last before a certain time, you have to do
some guesswork to arrive at the suitable boundaries.  Depending on the rate
of data ingress, the selection returned may contain one, a few, too many,
or no results at all.  One workaround is to issue a sequence of SELECTs,
starting from a small range, with the lower boundary of the range
progressively moved back until you have the last record included (and
hopefully not many previous records which you are going to discard anyway).

In 1.5, we will support ORDER BY and LIMIT clauses which will allow a much
simpler solution, e.g.:

 SELECT value FROM table
 WHERE time > 0 AND time < $now
 ORDER BY time DESC LIMIT 1


> 2. Is it possible to query the average in respect to time?
> For instance you have such a measurement reading:
> time| value
> -
> 09:31   |  4
> 10:02   |  6
> 10:05   |  3
> And you want to query the average from 10:00 to 10:10. I would expect a
> value of 4.1
> 4 * 20% + 6 * 30% + 3 * 50% = 4.1
>

It is definitely possible:

 SELECT AVG(value) FROM table
 WHERE time >= '2016-11-22 10:00' AND time < '2016-11-22 10:10'

Note that you can specify timestamp values in ISO8601 format:
http://docs.basho.com/riak/ts/1.4.0/using/timerepresentations

​

> 3. Is it possible to query 15-minute average values for the last day? I
> would expect 96 values with the average as described in question 2.
>

No, you will have to issue 96 separate queries, each similar to the one in
(2) but with a different 15-min range.
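
If it helps, here is a rough, untested sketch of generating those 96 windows
from an Erlang shell, assuming the official Erlang client with TS support
(riakc_pb_socket / riakc_ts) on the code path, a table literally named
`table` with a millisecond `time` column as in the example above, and default
host/port; if your table's key also contains family/series columns, add them
to the WHERE clause:

{ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
{Mega, Sec, _} = os:timestamp(),
NowMs = (Mega * 1000000 + Sec) * 1000,
Slot = 15 * 60 * 1000,   %% 15 minutes in milliseconds
Averages =
    [ riakc_ts:query(Pid,
          lists:flatten(io_lib:format(
              "SELECT AVG(value) FROM table WHERE time >= ~b AND time < ~b",
              [NowMs - N * Slot, NowMs - (N - 1) * Slot])))
      || N <- lists:seq(1, 96) ].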



> Thanks a lot.
> Jan Paulus


Re: reads/writes during node replacement

2016-11-14 Thread Magnus Kessler
On 12 November 2016 at 00:08, Johnny Tan  wrote:

> When doing a node replace (http://docs.basho.com/riak/1.4.12/ops/running/nodes/replacing/),
> after commit-ing the plan, how does
> the cluster handle reads/writes? Do I include the new node in my app's
> config as soon as I commit, and let riak internally handle which node(s)
> will do the reads/writes? Or do I wait until the ringready on the new node
> before being able to do reads/writes to it?
>
> johnny
>
>
Hi Johnny,

As soon as a node has been joined to the cluster it is capable of taking on
requests. `riak-admin ringready` returns true after a join or leave
operation when the new ring state has been communicated successfully to all
nodes in the cluster.

During a replacement operation, the leaving node will hand off [0] all its
partitions to the joining node. Both nodes can handle requests during this
phase and store data in the partitions they own. Once the leaving node has
handed off all its partitions, it will automatically shut down. Please keep
this in mind when configuring your clients or load balancers. Clients
should deal with nodes being temporarily or permanently unavailable.
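
For what it's worth, you can watch the hand-off progress at any point with
`riak-admin transfers` on any node, and `riak-admin member-status` will show
when the leaving node has finally dropped out of the ring.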

Kind Regards,

Magnus

[0]: http://docs.basho.com/riak/kv/2.1.4/using/reference/handoff/

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431


Re: throughput test & connection fail

2016-11-14 Thread Magnus Kessler
On 12 November 2016 at 03:00, Jing Liu  wrote:

> Hi,
>
> When I try to simply test the throughput of Riak in a setting where I
> just start a single node and use two clients to issue requests, I
> get connection refused once the number of client threads sending GET
> requests exceeds about 400. Actually the server crashed. Why is this and
> how can I fix it?
>
> Thanks
> J.
>

Hi Jing,

You do not specify exactly how you perform your throughput test, so let me
first ask a few questions. How many concurrent requests are active against
the server at any given time? Can you clarify what you mean by "the server
crashed"? Did the riak process actually terminate?

What hardware environment does Riak run on? Please provide some information
about CPU, memory and most importantly your disk IO subsystem.

How big are the objects that you send to / request from Riak?

Even on moderate hardware, a single node should be able to serve hundreds
of requests per second. However, every system can be pushed to the limits
of what the hardware can support, and Riak is no exception. Depending on
the bottlenecks, an overload can manifest itself in a multitude of
different ways.

For load tests, I recommend starting with a small, sustainable load and
ramping it up to establish which subsystem is the bottleneck. Please monitor
the OS and Riak's performance carefully during the test. Riak exposes many
performance metrics as JSON through its HTTP stats endpoint
(http://<node-ip>:8098/stats).
Consider ingesting these into your favourite monitoring solution. OS
metrics like CPU, network and IO usage should also be collected and
graphed. One easy open source solution that's gaining traction recently is
netdata (https://github.com/firehol/netdata).
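
For a quick look during a run, something like `curl -s
http://<node-ip>:8098/stats` (substitute your node's address) returns that
JSON document; counters such as node_gets, node_puts and the
node_get_fsm_time_* / node_put_fsm_time_* percentiles are the ones I would
watch first.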

Kind Regards,

Magnus

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431


Re: A simple commit hook

2016-11-09 Thread Nick Marino
Hi Mav,

I've never written any commit hooks myself really, but it looks like we
have some examples you could check out here:

https://github.com/basho/riak_function_contrib

Additionally, if you haven't already, you should check out the official
Basho commit hook docs here:

http://docs.basho.com/riak/kv/2.1.4/developing/usage/commit-hooks/

Since commit hooks are written in Erlang and loaded directly into the Riak
node, you should be able to achieve any of those things that you mentioned
just by writing the appropriate Erlang code to do it. I know that may not
be super helpful to you if you don't know Erlang, but I believe that's the
only language we support for commit hooks at this time, so hopefully that's
not a showstopper.
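
To make that concrete, here is a minimal, untested sketch of a post-commit
hook that forwards each written key to an external HTTP endpoint with a PUT.
The module name, the endpoint URL and the assumption of a plain (untyped,
binary-named) bucket are mine, not from the docs:

-module(key_forward_hook).
-export([forward_key/1]).

%% Post-commit hooks are called with the riak_object that was just written;
%% the return value is ignored, so a failure here never blocks the write.
forward_key(Object) ->
    Bucket = riak_object:bucket(Object),
    Key = riak_object:key(Object),
    Body = iolist_to_binary([Bucket, <<"/">>, Key]),
    application:start(inets),  %% harmless if inets is already running
    httpc:request(put,
                  {"http://127.0.0.1:9999/riak-keys", [], "text/plain", Body},
                  [], []),
    ok.

Compile it with the erlc that ships with Riak, put the .beam somewhere on the
node's code path (the add_paths setting), and register it in the bucket's
postcommit property, e.g. {"postcommit": [{"mod": "key_forward_hook", "fun":
"forward_key"}]}. Keep in mind the HTTP call runs synchronously in the put
path, so keep the endpoint fast or spawn the request instead.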

Hope that helps!
Nick

On Mon, Nov 7, 2016 at 9:20 AM, Mav erick  wrote:

> Hello Folks
>
> I am looking for a really simple hook (pre or post commit) that will
> send an HTTP PUT of the key to a server. I could also live with some simple
> send of the key via a tcp connection, or for that matter pipe the key value
> to a local process
>
> Any pointers on how to create such a hook or an example would be greatly
> appreciated
>
> Regards
> Mav
>


Re: riak-users Digest, Vol 87, Issue 16

2016-11-03 Thread Pratik Kulkarni
Hi,

Thank you, Alex. The all-in-one Riak client jar for Java works very well.

Best!
Pratik

On Oct 30, 2016 9:22 PM, "Alex Moore" <amo...@basho.com> wrote:

> Hi Pratik,
>
> You should try our All-In-One / Uber jar:
> http://riak-java-client.s3.amazonaws.com/index.html
>
> It will contain all the dependencies that the Riak Java Client needs to
> operate.
>
> Thanks,
> Alex
>
> On Oct 30, 2016, at 1:19 PM, Pratik Kulkarni <pratik.kulka...@sjsu.edu>
> wrote:
>
> I added the joda time jar . Then it throws some time xxx jar and keeps on
> throwing this. The problem with maven is i am using ant to build my project
>
> Thanks!
>
> On Oct 30, 2016 9:00 AM, <riak-users-requ...@lists.basho.com> wrote:
>
>>
>> Today's Topics:
>>
>>1. Riak Java client API (Pratik Kulkarni)
>>2. Re: Riak Java client API (AJAX DoneBy Jack)
>>
>>
>> --
>>
>> Message: 1
>> Date: Fri, 28 Oct 2016 11:47:00 -0700
>> From: Pratik Kulkarni <pratik1...@icloud.com>
>> To: riak-users@lists.basho.com
>> Subject: Riak Java client API
>> Message-ID: <437945ad-b9e1-4e4f-9e8e-5aac04894...@icloud.com>
>> Content-Type: text/plain; charset="us-ascii"
>>
>> Hi All,
>>
>> I am working on a distributed file storage system using the Java Netty
>> framework. For this purpose i have Raik KV as an in memory  storage
>> solution.
>> Following jar dependencies are present in my build path :
>>
>> jackson-all-1.8.5.jar
>> netty-all-4.0.15.Final.jar
>> slf4j-api-1.7.2.jar
>> slf4j-simple-1.7.2.jar
>> protobuf-java-2.6.1.jar
>> json-20160212.jar
>> riak-client-2.0.5.jar
>>
>> When i try initiate connection with the riak node. The connection attempt
>> is successful but when i try to store the object in Riak KV. I keep getting
>> the following NoClassDefFoundError. I am not sure why these errors arrive
>> though i have included all the jars. Do we require apart from riak-client
>> X.X jar any more dependencies. As per the terminal output I tried to add
>> the dependencies by downloading the jars. But it just keeps giving me new
>> dependencies error every time.  Kindly help ?
>>
>> Please see the riak client code in java to store the file object
>>
>>
>> package gash.router.inmemory;
>>
>> import com.basho.riak.client.api.RiakClient;
>> import com.basho.riak.client.api.commands.kv.DeleteValue;
>> import com.basho.riak.client.api.commands.kv.FetchValue;
>> import com.basho.riak.client.api.commands.kv.StoreValue;
>> import com.basho.riak.client.core.RiakCluster;
>> import com.basho.riak.client.core.RiakNode;
>> import com.basho.riak.client.core.query.Location;
>> import com.basho.riak.client.core.query.Namespace;
>> import com.basho.riak.client.core.query.RiakObject;
>> import com.basho.riak.client.core.util.BinaryValue;
>>
>> import java.net.UnknownHostException;
>>
>> public class RiakClientHandler {
>>
>> private static RiakCluster setUpCluster() throws
>> UnknownHostException{
>> // This example will use only one node listening on
>> localhost:8098--default config
>>
>> RiakNode node = new RiakNode.Builder()
>> .withRemoteAddress("127.0.0.1")
>> .withRemotePort(8098)
>> .build();
>>  // This cluster object takes our one node as an argument
>> RiakCluster cluster = new RiakCluster.Builder(node)
>> .build();
>>
>> // The cluster must be started to work, otherwise you will see
>> errors
>> cluster.start();
>>
>> return cluster;
>> }
>>
>> private static class RiakFile{
>>
>> public String filename;
>> public byte[] byteData;
>>   

Re: RiakTS TTL

2016-11-01 Thread Pavel Hardak
Hi Joe,

1. I am curious why you would need it. Can you please elaborate on the use
case?
2. In addition to Matthew's response, please see
http://docs.basho.com/riak/ts/1.4.0/using/global-object-expiration
(a rough riak.conf sketch follows below).
3. Do you refer to something like post-commit hook in Riak KV:
http://docs.basho.com/riak/kv/2.1.4/developing/usage/commit-hooks/ ?
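
On (2), as far as I recall the page above, global expiration comes down to a
couple of leveldb settings in riak.conf, roughly along these lines -- please
treat the option names and values as an unverified sketch and check them
against the docs for your TS version:

leveldb.expiration = on
leveldb.expiration.retention_time = 30d

Note this is global per backend rather than per table; per-table TTL is what
Matthew mentions as coming in a later release.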

Thanks,
Pavel



*Pavel Hardak* | Director of Product Management @ Basho


-- Forwarded message --
> From: Matthew Von-Maszewski <matth...@basho.com>
> Date: Tue, Nov 1, 2016 at 1:01 PM
> Subject: Re: RiakTS TTL
> To: Joe Olson <technol...@nododos.com>
> Cc: riak-users <riak-users@lists.basho.com>
>
>
> 1.  The global expiry module is an external C++ module that is open
> source.  There is no definition at this time for an Erlang callback, but
> the design supports it.  You can patch the open source code now.
>
> 2.  The TTL has two components:  when the record is written and number of
> minutes until expiry.  The write time goes into each record.  The number of
> minutes until expiry is loaded at start.  A change to the global minutes
> until expiry impacts the evaluation of all records.  A subsequent release
> will allow distinct TTL by table.
>
> 3.  Not my area of the code.
>
> Matthew
>
> On Nov 1, 2016, at 3:46 PM, Joe Olson <technol...@nododos.com> wrote:
>
> Two questions about the RiakTS TTL functionality (and its future
> direction):
>
> 1. Is it possible to replace the standard delete upon TTL expiry with a
> user defined delete?
> 2. Can the current global setting for the TTL timeout be changed? Will
> that affect new records going forward?
>
> Bonus question:
> 3. Are there any plans to implement general triggering in RiakTS?
>
>
> --
> *Dorothy Pults - **Sr. Director of Product Marketing*
> 425 435 8198
> Bellevue, WA
> www.basho.com
>


Re: RiakTS TTL

2016-11-01 Thread Matthew Von-Maszewski
1.  The global expiry module is an external C++ module that is open source.  
There is no definition at this time for an Erlang callback, but the design 
supports it.  You can patch the open source code now.

2.  The TTL has two components:  when the record is written and number of 
minutes until expiry.  The write time goes into each record.  The number of 
minutes until expiry is loaded at start.  A change to the global minutes until 
expiry impacts the evaluation of all records.  A subsequent release will allow 
distinct TTL by table.

3.  Not my area of the code.

Matthew

> On Nov 1, 2016, at 3:46 PM, Joe Olson  wrote:
> 
> Two questions about the RiakTS TTL functionality (and its future direction):
> 
> 1. Is it possible to replace the standard delete upon TTL expiry with a user 
> defined delete?
> 2. Can the current global setting for the TTL timeout be changed? Will that 
> affect new records going forward?
> 
> Bonus question:
> 3. Are there any plans to implement general triggering in RiakTS?
> 
> 


Re: Hinted handoff failed because of tcp errors

2016-11-01 Thread Ryan Maclear
Hi Alexander,

Excellent!  Thanks for the feedback - I will see what I can find there.

Regards,
Ryan

On Tue, Nov 1, 2016 at 11:06 AM, Alexander Sicular 
wrote:

> Hi Ryan, yes, you can change a number of settings. Have you had a look
> at http://docs.basho.com/riak/kv/2.1.4/using/admin/riak-admin/#
> transfer-limit
> and http://lists.basho.com/pipermail/riak-users_lists.
> basho.com/2014-July/015529.html
> ?
>
> -Alexander
>
> On Tue, Nov 1, 2016 at 2:43 AM, Ryan Maclear 
> wrote:
> > Good Day,
> >
> > We have a 4 node riak cluster running inside AWS. The riak is riak-kv
> 2.1.2
> > with AAE enabled on Ubuntu 14.04.4 LTS
> >
> > We are in the process of replacing one node with another using the
> process
> > described here:
> >
> > http://docs.basho.com/riak/kv/2.1.4/using/cluster-
> operations/replacing-node/
> >
> > We have successfully replaced two of the nodes so far but we are having a
> > problem with the third. If we look at /var/log/riak/console.log we see
> the
> > start of the hinted handoff, and some time later (sometimes minutes and
> > sometimes hours) we see:
> >
> > 2016-10-31 06:30:40.090 [error]
> > <0.19834.2101>@riak_core_handoff_sender:start_fold:272 hinted transfer
> of
> > riak_kv_vnode from 'r...@aew54.miranetworks.net'
> > 274031556999544297163190906134303066185487351808 to
> > 'r...@aew75.miranetworks.net'
> > 274031556999544297163190906134303066185487351808 failed because of TCP
> recv
> > timeout
> > 2016-10-31 06:30:40.090 [error]
> > <0.187.0>@riak_core_handoff_manager:handle_info:303 An outbound handoff
> of
> > partition riak_kv_vnode 274031556999544297163190906134303066185487351808
> was
> > terminated for reason: {shutdown,timeout}
> >
> > So the handoff was terminated due to a tcp timeout. The handoff then
> starts
> > again.
> >
> > This has been going on for some times (about two weeks now).
> >
> > The current member status is as follows:
> >
> > riak-admin member-status
> > = Membership
> > ==
> > Status RingPendingNode
> > 
> ---
> > leaving 0.0%  --  'r...@aew54.miranetworks.net'
> > valid  25.0%  --  'r...@aew59.miranetworks.net'
> > valid  25.0%  --  'r...@aew73.miranetworks.net'
> > valid  25.0%  --  'r...@aew74.miranetworks.net'
> > valid  25.0%  --  'r...@aew75.miranetworks.net'
> > 
> ---
> > Valid:4 / Leaving:1 / Exiting:0 / Joining:0 / Down:0
> >
> >
> > Here are some questions:
> >
> > 1. What is the default tcp timeout?
> > 2. Is there any way to increase this timeout?
> > 3. Is there any way to increase the rate of handoff?
> > 4. Are there any other parameters we can tune to try and avoid this?
> >
> > The output from riak-admin transfers is as follows:
> >
> > 'r...@aew54.miranetworks.net' waiting to handoff 1 partitions
> >
> > Active Transfers:
> >
> > transfer type: hinted
> > vnode type: riak_kv_vnode
> > partition: 274031556999544297163190906134303066185487351808
> > started: 2016-11-01 05:30:47 [2.10 hr ago]
> > last update: 2016-11-01 07:36:51 [3.03 s ago]
> > total size: 78393086512 bytes
> > objects transferred: 11440967
> >
> >  1513 Objs/s
> > riak@aew54.miranetworks.n  ===>  riak@aew75.miranetworks.n
> > et   et
> > |== |  15%
> >   1.53 MB/s
> >
> >
> > Thanks,
> > Ryan Maclear
> >


Re: Hinted handoff failed because of tcp errors

2016-11-01 Thread Alexander Sicular
Hi Ryan, yes, you can change a number of settings. Have you had a look
at http://docs.basho.com/riak/kv/2.1.4/using/admin/riak-admin/#transfer-limit
and 
http://lists.basho.com/pipermail/riak-users_lists.basho.com/2014-July/015529.html
?
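
For example, `riak-admin transfer-limit <node> <limit>` (or `riak-admin
transfer-limit <limit>` to set it cluster-wide) raises the number of
concurrent handoffs from the default of 2, which addresses your question 3
about the handoff rate.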

-Alexander

On Tue, Nov 1, 2016 at 2:43 AM, Ryan Maclear  wrote:
> Good Day,
>
> We have a 4 node riak cluster running inside AWS. The riak is riak-kv 2.1.2
> with AAE enabled on Ubuntu 14.04.4 LTS
>
> We are in the process of replacing one node with another using the process
> described here:
>
> http://docs.basho.com/riak/kv/2.1.4/using/cluster-operations/replacing-node/
>
> We have successfully replaced two of the nodes so far but we are having a
> problem with the third. If we look at /var/log/riak/console.log we see the
> start of the hinted handoff, and some time later (sometimes minutes and
> sometimes hours) we see:
>
> 2016-10-31 06:30:40.090 [error]
> <0.19834.2101>@riak_core_handoff_sender:start_fold:272 hinted transfer of
> riak_kv_vnode from 'r...@aew54.miranetworks.net'
> 274031556999544297163190906134303066185487351808 to
> 'r...@aew75.miranetworks.net'
> 274031556999544297163190906134303066185487351808 failed because of TCP recv
> timeout
> 2016-10-31 06:30:40.090 [error]
> <0.187.0>@riak_core_handoff_manager:handle_info:303 An outbound handoff of
> partition riak_kv_vnode 274031556999544297163190906134303066185487351808 was
> terminated for reason: {shutdown,timeout}
>
> So the handoff was terminated due to a tcp timeout. The handoff then starts
> again.
>
> This has been going on for some times (about two weeks now).
>
> The current member status is as follows:
>
> riak-admin member-status
> = Membership
> ==
> Status RingPendingNode
> ---
> leaving 0.0%  --  'r...@aew54.miranetworks.net'
> valid  25.0%  --  'r...@aew59.miranetworks.net'
> valid  25.0%  --  'r...@aew73.miranetworks.net'
> valid  25.0%  --  'r...@aew74.miranetworks.net'
> valid  25.0%  --  'r...@aew75.miranetworks.net'
> ---
> Valid:4 / Leaving:1 / Exiting:0 / Joining:0 / Down:0
>
>
> Here are some questions:
>
> 1. What is the default tcp timeout?
> 2. Is there any way to increase this timeout?
> 3. Is there any way to increase the rate of handoff?
> 4. Are there any other parameters we can tune to try and avoid this?
>
> The output from riak-admin transfers is as follows:
>
> 'r...@aew54.miranetworks.net' waiting to handoff 1 partitions
>
> Active Transfers:
>
> transfer type: hinted
> vnode type: riak_kv_vnode
> partition: 274031556999544297163190906134303066185487351808
> started: 2016-11-01 05:30:47 [2.10 hr ago]
> last update: 2016-11-01 07:36:51 [3.03 s ago]
> total size: 78393086512 bytes
> objects transferred: 11440967
>
>  1513 Objs/s
> riak@aew54.miranetworks.n  ===>  riak@aew75.miranetworks.n
> et   et
> |== |  15%
>   1.53 MB/s
>
>
> Thanks,
> Ryan Maclear
>


Re: Monitor RIAK process with Supervisord

2016-10-31 Thread Magnus Kessler
On 29 October 2016 at 19:59, vmalhotra  wrote:

> We run an 8-node RIAK cluster in our Prod environment. A lot of the time, a RIAK
> process stops, and we have also noticed out-of-memory issues. Typically, we
> restart the affected node to recover from the issue. I thought of using
> Supervisor to control the RIAK processes, so the idea is that if any of the
> processes crash, the SupervisorD daemon will automatically restart that
> process on a crash.
>
> Wanted to know what you guys think? Can it cause any other issue or it
> should work fine?
>

Hi Varun,

I would recommend against blindly restarting Riak nodes, in particular if
these were shut down uncleanly, as may happen in out of memory situations.
There is a risk that an unclean shutdown leaves behind corrupted files and
that a subsequent restart is unsuccessful.

You should instead investigate why Riak stops being responsive.

Please have a look at the documentation, in particular memory
requirements[0], and OS tuning[1].

Kind Regards,

Magnus

[0]: http://docs.basho.com/riak/kv/2.1.4/setup/planning/cluster-capacity/
[1]: http://docs.basho.com/riak/kv/2.1.4/using/performance/




>
> Thanks in advance.
>
>
>
> --
> View this message in context: http://riak-users.197444.n3.
> nabble.com/Monitor-RIAK-process-with-Supervisord-tp4034655.html
> Sent from the Riak Users mailing list archive at Nabble.com.
>



-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431


Re: riak-users Digest, Vol 87, Issue 16

2016-10-30 Thread Alex Moore
Hi Pratik,

You should try our All-In-One / Uber jar: 
http://riak-java-client.s3.amazonaws.com/index.html

It will contain all the dependencies that the Riak Java Client needs to operate.

Thanks,
Alex

> On Oct 30, 2016, at 1:19 PM, Pratik Kulkarni <pratik.kulka...@sjsu.edu> wrote:
> 
> I added the joda time jar . Then it throws some time xxx jar and keeps on 
> throwing this. The problem with maven is i am using ant to build my project 
> 
> Thanks!
> 
> 
>> On Oct 30, 2016 9:00 AM, <riak-users-requ...@lists.basho.com> wrote:
>> 
>> Today's Topics:
>> 
>>1. Riak Java client API (Pratik Kulkarni)
>>2. Re: Riak Java client API (AJAX DoneBy Jack)
>> 
>> 
>> --
>> 
>> Message: 1
>> Date: Fri, 28 Oct 2016 11:47:00 -0700
>> From: Pratik Kulkarni <pratik1...@icloud.com>
>> To: riak-users@lists.basho.com
>> Subject: Riak Java client API
>> Message-ID: <437945ad-b9e1-4e4f-9e8e-5aac04894...@icloud.com>
>> Content-Type: text/plain; charset="us-ascii"
>> 
>> Hi All,
>> 
>> I am working on a distributed file storage system using the Java Netty 
>> framework. For this purpose i have Raik KV as an in memory  storage solution.
>> Following jar dependencies are present in my build path :
>> 
>> jackson-all-1.8.5.jar
>> netty-all-4.0.15.Final.jar
>> slf4j-api-1.7.2.jar
>> slf4j-simple-1.7.2.jar
>> protobuf-java-2.6.1.jar
>> json-20160212.jar
>> riak-client-2.0.5.jar
>> 
>> When i try initiate connection with the riak node. The connection attempt is 
>> successful but when i try to store the object in Riak KV. I keep getting the 
>> following NoClassDefFoundError. I am not sure why these errors arrive though 
>> i have included all the jars. Do we require apart from riak-client X.X jar 
>> any more dependencies. As per the terminal output I tried to add the 
>> dependencies by downloading the jars. But it just keeps giving me new 
>> dependencies error every time.  Kindly help ?
>> 
>> Please see the riak client code in java to store the file object
>> 
>> 
>> package gash.router.inmemory;
>> 
>> import com.basho.riak.client.api.RiakClient;
>> import com.basho.riak.client.api.commands.kv.DeleteValue;
>> import com.basho.riak.client.api.commands.kv.FetchValue;
>> import com.basho.riak.client.api.commands.kv.StoreValue;
>> import com.basho.riak.client.core.RiakCluster;
>> import com.basho.riak.client.core.RiakNode;
>> import com.basho.riak.client.core.query.Location;
>> import com.basho.riak.client.core.query.Namespace;
>> import com.basho.riak.client.core.query.RiakObject;
>> import com.basho.riak.client.core.util.BinaryValue;
>> 
>> import java.net.UnknownHostException;
>> 
>> public class RiakClientHandler {
>> 
>> private static RiakCluster setUpCluster() throws 
>> UnknownHostException{
>> // This example will use only one node listening on 
>> localhost:8098--default config
>> 
>> RiakNode node = new RiakNode.Builder()
>> .withRemoteAddress("127.0.0.1")
>> .withRemotePort(8098)
>> .build();
>>  // This cluster object takes our one node as an argument
>> RiakCluster cluster = new RiakCluster.Builder(node)
>> .build();
>> 
>> // The cluster must be started to work, otherwise you will see errors
>> cluster.start();
>> 
>> return cluster;
>> }
>> 
>> private static class RiakFile{
>> 
>> public String filename;
>> public byte[] byteData;
>> }
>> 
>> public static void saveFile(String filename,byte[] byteData)
>> {
>> try{
>> System.out.println("

Re: riak-users Digest, Vol 87, Issue 16

2016-10-30 Thread Pratik Kulkarni
I added the joda-time jar. Then it throws an error about some other jar and
keeps on doing this. The problem with Maven is that I am using Ant to build my project.

Thanks!

On Oct 30, 2016 9:00 AM, <riak-users-requ...@lists.basho.com> wrote:

>
> Today's Topics:
>
>1. Riak Java client API (Pratik Kulkarni)
>2. Re: Riak Java client API (AJAX DoneBy Jack)
>
>
> --
>
> Message: 1
> Date: Fri, 28 Oct 2016 11:47:00 -0700
> From: Pratik Kulkarni <pratik1...@icloud.com>
> To: riak-users@lists.basho.com
> Subject: Riak Java client API
> Message-ID: <437945ad-b9e1-4e4f-9e8e-5aac04894...@icloud.com>
> Content-Type: text/plain; charset="us-ascii"
>
> Hi All,
>
> I am working on a distributed file storage system using the Java Netty
> framework. For this purpose i have Raik KV as an in memory  storage
> solution.
> Following jar dependencies are present in my build path :
>
> jackson-all-1.8.5.jar
> netty-all-4.0.15.Final.jar
> slf4j-api-1.7.2.jar
> slf4j-simple-1.7.2.jar
> protobuf-java-2.6.1.jar
> json-20160212.jar
> riak-client-2.0.5.jar
>
> When i try initiate connection with the riak node. The connection attempt
> is successful but when i try to store the object in Riak KV. I keep getting
> the following NoClassDefFoundError. I am not sure why these errors arrive
> though i have included all the jars. Do we require apart from riak-client
> X.X jar any more dependencies. As per the terminal output I tried to add
> the dependencies by downloading the jars. But it just keeps giving me new
> dependencies error every time.  Kindly help ?
>
> Please see the riak client code in java to store the file object
>
>
> package gash.router.inmemory;
>
> import com.basho.riak.client.api.RiakClient;
> import com.basho.riak.client.api.commands.kv.DeleteValue;
> import com.basho.riak.client.api.commands.kv.FetchValue;
> import com.basho.riak.client.api.commands.kv.StoreValue;
> import com.basho.riak.client.core.RiakCluster;
> import com.basho.riak.client.core.RiakNode;
> import com.basho.riak.client.core.query.Location;
> import com.basho.riak.client.core.query.Namespace;
> import com.basho.riak.client.core.query.RiakObject;
> import com.basho.riak.client.core.util.BinaryValue;
>
> import java.net.UnknownHostException;
>
> public class RiakClientHandler {
>
> private static RiakCluster setUpCluster() throws
> UnknownHostException{
> // This example will use only one node listening on
> localhost:8098--default config
>
> RiakNode node = new RiakNode.Builder()
> .withRemoteAddress("127.0.0.1")
> .withRemotePort(8098)
> .build();
>  // This cluster object takes our one node as an argument
> RiakCluster cluster = new RiakCluster.Builder(node)
> .build();
>
> // The cluster must be started to work, otherwise you will see
> errors
> cluster.start();
>
> return cluster;
> }
>
> private static class RiakFile{
>
> public String filename;
> public byte[] byteData;
> }
>
> public static void saveFile(String filename,byte[] byteData)
> {
> try{
> System.out.println("Inside Riak handler");
> RiakCluster cluster = setUpCluster();
> RiakClient client = new RiakClient(cluster);
> RiakFile newFile = createRiakFile(filename, byteData);
> System.out.println("Riak file created");
> Namespace fileBucket = new Namespace("files");
> Location fileLocation = new Location(fileBucket, filename);
> StoreValue storeFile = new StoreValue.Builder(newFile).
> withLocation(fileLocation).build();
> client.execute(storeFile);
> System.out.println("File saved to riak ");
> cluster.shutdown();
> }
> catch(Exception e){
> e.printStackTra

Re: Riak Java client API

2016-10-29 Thread AJAX DoneBy Jack
Hi Pratik,

From the exception message you are missing the joda-time jar; download it and
put it in your classpath.
If you use Maven it will download the dependency for you automatically.

Hope this helps.
Ajax

On Friday, 28 October 2016, Pratik Kulkarni  wrote:

> Hi All,
>
> I am working on a distributed file storage system using the Java Netty
> framework. For this purpose i have Raik KV as an in memory  storage
> solution.
> Following jar dependencies are present in my build path :
>
> jackson-all-1.8.5.jar
> netty-all-4.0.15.Final.jar
> slf4j-api-1.7.2.jar
> slf4j-simple-1.7.2.jar
> protobuf-java-2.6.1.jar
> json-20160212.jar
> riak-client-2.0.5.jar
>
> When i try initiate connection with the riak node. The connection attempt
> is successful but when i try to store the object in *Riak KV.* I keep
> getting the following NoClassDefFoundError. I am not sure why these errors
> arrive though i have included all the jars. Do we require apart from*
> riak-client X.X jar* any more dependencies. As per the terminal output I
> tried to add the dependencies by downloading the jars. But it just keeps
> giving me new dependencies error every time.  Kindly help ?
>
> *Please see the riak client code in java to store the file object *
>
>
> package gash.router.inmemory;
>
> import com.basho.riak.client.api.RiakClient;
> import com.basho.riak.client.api.commands.kv.DeleteValue;
> import com.basho.riak.client.api.commands.kv.FetchValue;
> import com.basho.riak.client.api.commands.kv.StoreValue;
> import com.basho.riak.client.core.RiakCluster;
> import com.basho.riak.client.core.RiakNode;
> import com.basho.riak.client.core.query.Location;
> import com.basho.riak.client.core.query.Namespace;
> import com.basho.riak.client.core.query.RiakObject;
> import com.basho.riak.client.core.util.BinaryValue;
>
> import java.net.UnknownHostException;
>
> public class RiakClientHandler {
> private static RiakCluster setUpCluster() throws UnknownHostException{
> // This example will use only one node listening on
> localhost:8098--default config
>
> RiakNode node = new RiakNode.Builder()
> .withRemoteAddress("127.0.0.1")
> .withRemotePort(8098)
> .build();
>  // This cluster object takes our one node as an argument
> RiakCluster cluster = new RiakCluster.Builder(node)
> .build();
>
> // The cluster must be started to work, otherwise you will see
> errors
> cluster.start();
>
> return cluster;
> }
> private static class RiakFile{
> public String filename;
> public byte[] byteData;
> }
> public static void saveFile(String filename,byte[] byteData)
> {
> try{
> System.out.println("Inside Riak handler");
> RiakCluster cluster = setUpCluster();
> RiakClient client = new RiakClient(cluster);
> RiakFile newFile = createRiakFile(filename, byteData);
> System.out.println("Riak file created");
> Namespace fileBucket = new Namespace("files");
> Location fileLocation = new Location(fileBucket, filename);
> StoreValue storeFile = new StoreValue.Builder(newFile).
> withLocation(fileLocation).build();
> client.execute(storeFile);
> System.out.println("File saved to riak ");
> cluster.shutdown();
> }
> catch(Exception e){
> e.printStackTrace();
> }
> }
> private static RiakFile createRiakFile(String filename, byte[] byteData)
> {
> RiakFile file=new RiakFile();
> file.filename=filename;
> file.byteData=byteData;
> return file;
> }
>
> }
>
>
> *The terminal Output error:*
>
>
>
>
>


Re: Riak nodes constantly crashing

2016-10-27 Thread Alexander Sicular
Take a look at the AAE settings here:

http://docs.basho.com/riak/kv/latest/using/cluster-operations/active-anti-entropy/
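
On Riak 1.4 these live in the riak_kv section of app.config; a rough,
untested sketch of "keep AAE on but calm it down" (option names as I recall
them from the 1.4 AAE docs -- verify against the page above for your version)
would be:

{riak_kv, [
    %% keep active anti-entropy enabled
    {anti_entropy, {on, []}},
    %% limit how many AAE tree builds/exchanges run at once on a node
    {anti_entropy_concurrency, 2},
    %% build at most one hash tree per hour per node
    {anti_entropy_build_limit, {1, 3600000}}
]}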

-Alexander 

@siculars
http://siculars.posthaven.com

Sent from my iRotaryPhone

> On Oct 26, 2016, at 16:17, Steven Joseph  wrote:
> 
> I don't think you should disable AAE, you can tune its frequency.
> 
> Steven
> 
> 
>> On Thu, 27 Oct 2016 03:50 Ricardo Mayerhofer  wrote:
>> Yes, I'll check if the problem is the AAE! I will disable it and see the 
>> results.
>> 
>> Thanks Steven!
>> 
>> On Tue, Oct 25, 2016 at 6:54 PM, Steven Joseph  wrote:
>> Hi Ricardo,
>> 
>> If you are using systemd might have to check LimitNOFILE for your units. 
>> Active anti entropy runs periodically. 
>> 
>> Steven
>> 
>> 
>> On Wed, 26 Oct 2016 04:36 Ricardo Mayerhofer  wrote:
>> What's weird is that the node crashes every minute at the same second. Is 
>> there anything Riak may be running every minute? 
>> 
>> On Mon, Oct 24, 2016 at 8:28 PM, Ricardo Mayerhofer  
>> wrote:
>> I'm also pasting the free -m:
>> 
>>  total   used   free sharedbuffers cached
>> Mem: 15039  14557482  0 37   4594
>> -/+ buffers/cache:   9925   5114
>> Swap:0  0  0
>> 
>> On Mon, Oct 24, 2016 at 8:24 PM, Ricardo Mayerhofer  
>> wrote:
>> Hi Alexander,
>> Thanks for your response. We use multi-backend with bitcask and leveldb.
>> 
>> - File descriptors seems to be ok, at least the config.
>> 
>> ubuntu@ip-10-2-58-5:/var/log/riak$ sudo su riak 
>> sudo: unable to resolve host ip-10-2-58-5
>> riak@ip-10-2-58-5:/var/log/riak$ ulimit -n
>> 65535
>> 
>> - Memory seems to be ok as well:
>> KiB Mem:  15400916 total, 14493744 used,   907172 free,36244 buffers
>> 
>> - Disk is ok
>> 
>> /dev/xvda1   20G  4.1G   15G  22% / # root device
>> 
>> /dev/xvdb   148G   69G   72G  49% /mnt/riak-data  # bitcask and riak 
>> data disk
>> /dev/xvdc   296G   23G  258G   8% /mnt/riak-data/leveldb #leveldb disk
>> 
>> Any other idea? Thanks.
>> 
>> On Mon, Oct 24, 2016 at 8:06 PM, Alexander Sicular  
>> wrote:
>> Disk, memory or file descriptors would be my guess. Bitcask?
>> 
>> 
>> On Monday, October 24, 2016, Ricardo Mayerhofer  
>> wrote:
>> Hi all,
>> I have a Riak 1.4 where the nodes seems to be constantly crashing. All 5 
>> nodes are affected. 
>> 
>> However it seems Riak manage to get them up again.
>> 
>> Any idea on whats going on? Erros logs below.
>> 
>> Thanks.
>> 
>> error.log
>> ...
>> 2016-10-24 21:57:29.185 [error] <0.24570.1174> CRASH REPORT Process 
>> <0.24570.1174> with 0 neighbours crashed with reason: no case clause 
>> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3 line 
>> 107
>> 2016-10-24 21:58:29.187 [error] <0.7109.1175> CRASH REPORT Process 
>> <0.7109.1175> with 0 neighbours crashed with reason: no case clause matching 
>> {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3 line 107
>> 2016-10-24 21:59:29.228 [error] <0.19612.1175> CRASH REPORT Process 
>> <0.19612.1175> with 0 neighbours crashed with reason: no case clause 
>> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3 line 
>> 107
>> 2016-10-24 22:00:29.218 [error] <0.1356.1176> CRASH REPORT Process 
>> <0.1356.1176> with 0 neighbours crashed with reason: no case clause matching 
>> {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3 line 107
>> 2016-10-24 22:01:29.197 [error] <0.11380.1176> CRASH REPORT Process 
>> <0.11380.1176> with 0 neighbours crashed with reason: no case clause 
>> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3 line 
>> 107
>> 2016-10-24 22:02:29.231 [error] <0.24279.1176> CRASH REPORT Process 
>> <0.24279.1176> with 0 neighbours crashed with reason: no case clause 
>> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3 line 
>> 107
>> 
>> crash.log
>> 2016-10-24 21:51:56 =CRASH REPORT
>>   crasher:
>> initial call: mochiweb_acceptor:init/3
>> pid: <0.28136.1621>
>> registered_name: []
>> exception error: 
>> {{case_clause,{ok,{http_error,"exit\r\n"},<<>>}},[{mochiweb_http,request,3,[{file,"src/mochiweb_http.erl"},{line,107}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,227}]}]}
>> ancestors: ['http_0.0.0.0:8098_mochiweb',riak_core_sup,<0.148.0>]
>> messages: []
>> links: [<0.201.0>,#Port<0.235869290>]
>> dictionary: []
>> trap_exit: false
>> status: running
>> heap_size: 377
>> stack_size: 24
>> reductions: 423
>>   neighbours:
>> 2016-10-24 21:52:56 =CRASH REPORT
>>   crasher:
>> initial call: mochiweb_acceptor:init/3
>> pid: <0.7845.1622>
>> registered_name: []
>> exception error: 
>> 

Re: Increasing listen() backlog on riak's HTTP api

2016-10-27 Thread Luke Bakken
Hi Rohit,

Mochiweb's max connections are set as an argument to the start()
function. I don't believe there is a way to increase it at run time.

If you're hitting the listen backlog, your servers aren't able to keep
up with the request workload. Are you doing any listing or mapreduce
operations?
--
Luke Bakken
Engineer
lbak...@basho.com


On Wed, Oct 19, 2016 at 10:20 PM, Rohit Sanbhadti
 wrote:
> Hi all,
>
> I’ve been trying to increase the backlog that riak uses when opening a 
> listening socket with HTTP, as I’ve seen a fair number of backlog overflow 
> errors in my use case (we have a 10 node riak cluster which takes a lot of 
> traffic, and we certainly expect the peak of concurrent traffic to exceed the 
> default backlog size of 128). I just found out that there appears to be no 
> way to customize the backlog that riak passes to webmachine/mochiweb, as 
> indicated by this issue (https://github.com/basho/riak_api/issues/108).  Can 
> anyone recommend a way to increase this backlog without having to modify and 
> recompile the riak_api, or without switching to protocol buffers? Is there 
> any set of erlang commands I can run from the attachable riak console to 
> change the backlog and restart the listening socket?
>
> --
> Rohit S.
>


Re: Riak nodes constantly crashing

2016-10-27 Thread Steven Joseph
I don't think you should disable AAE, you can tune its frequency.

Steven

On Thu, 27 Oct 2016 03:50 Ricardo Mayerhofer  wrote:

> Yes, I'll check if the problem is the AAE! I will disable it and see the
> results.
>
> Thanks Steven!
>
> On Tue, Oct 25, 2016 at 6:54 PM, Steven Joseph 
> wrote:
>
> Hi Ricardo,
>
> If you are using systemd might have to check LimitNOFILE for your units.
> Active anti entropy runs periodically.
>
> Steven
>
> On Wed, 26 Oct 2016 04:36 Ricardo Mayerhofer 
> wrote:
>
> What's weird is that the node crashes every minute at the same second. Is
> there anything Riak may be running every minute?
>
> On Mon, Oct 24, 2016 at 8:28 PM, Ricardo Mayerhofer  > wrote:
>
> I'm also pasting the free -m:
>
>  total   used   free sharedbuffers cached
> Mem: 15039  14557482  0 37   4594
> -/+ buffers/cache:   9925   5114
> Swap:0  0  0
>
> On Mon, Oct 24, 2016 at 8:24 PM, Ricardo Mayerhofer  > wrote:
>
> Hi Alexander,
> Thanks for your response. We use multi-backend with bitcask and leveldb.
>
> - File descriptors seems to be ok, at least the config.
>
> ubuntu@ip-10-2-58-5:/var/log/riak$ sudo su riak
> sudo: unable to resolve host ip-10-2-58-5
> riak@ip-10-2-58-5:/var/log/riak$ ulimit -n
> 65535
>
> - Memory seems to be ok as well:
>
> KiB Mem: * 15400916 *total,* 14493744 *used,*   907172 *free,*36244 *
> buffers
>
> - Disk is ok
>
> /dev/xvda1   20G  4.1G   15G  22% / # root device
> /dev/xvdb   148G   69G   72G  49% /mnt/riak-data  # bitcask and riak
> data disk
> /dev/xvdc   296G   23G  258G   8% /mnt/riak-data/leveldb #leveldb disk
>
> Any other idea? Thanks.
>
> On Mon, Oct 24, 2016 at 8:06 PM, Alexander Sicular 
> wrote:
>
> Disk, memory or file descriptors would be my guess. Bitcask?
>
>
> On Monday, October 24, 2016, Ricardo Mayerhofer 
> wrote:
>
> Hi all,
> I have a Riak 1.4 where the nodes seems to be constantly crashing. All 5
> nodes are affected.
>
> However it seems Riak manage to get them up again.
>
> Any idea on whats going on? Erros logs below.
>
> Thanks.
>
> error.log
> ...
> 2016-10-24 21:57:29.185 [error] <0.24570.1174> CRASH REPORT Process
> <0.24570.1174> with 0 neighbours crashed with reason: no case clause
> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3 line
> 107
> 2016-10-24 21:58:29.187 [error] <0.7109.1175> CRASH REPORT Process
> <0.7109.1175> with 0 neighbours crashed with reason: no case clause
> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3 line
> 107
> 2016-10-24 21:59:29.228 [error] <0.19612.1175> CRASH REPORT Process
> <0.19612.1175> with 0 neighbours crashed with reason: no case clause
> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3 line
> 107
> 2016-10-24 22:00:29.218 [error] <0.1356.1176> CRASH REPORT Process
> <0.1356.1176> with 0 neighbours crashed with reason: no case clause
> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3 line
> 107
> 2016-10-24 22:01:29.197 [error] <0.11380.1176> CRASH REPORT Process
> <0.11380.1176> with 0 neighbours crashed with reason: no case clause
> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3 line
> 107
> 2016-10-24 22:02:29.231 [error] <0.24279.1176> CRASH REPORT Process
> <0.24279.1176> with 0 neighbours crashed with reason: no case clause
> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3 line
> 107
>
> crash.log
> 2016-10-24 21:51:56 =CRASH REPORT
>   crasher:
> initial call: mochiweb_acceptor:init/3
> pid: <0.28136.1621>
> registered_name: []
> exception error:
> {{case_clause,{ok,{http_error,"exit\r\n"},<<>>}},[{mochiweb_http,request,3,[{file,"src/mochiweb_http.erl"},{line,107}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,227}]}]}
> ancestors: ['http_0.0.0.0:8098_mochiweb',riak_core_sup,<0.148.0>]
> messages: []
> links: [<0.201.0>,#Port<0.235869290>]
> dictionary: []
> trap_exit: false
> status: running
> heap_size: 377
> stack_size: 24
> reductions: 423
>   neighbours:
> 2016-10-24 21:52:56 =CRASH REPORT
>   crasher:
> initial call: mochiweb_acceptor:init/3
> pid: <0.7845.1622>
> registered_name: []
> exception error:
> {{case_clause,{ok,{http_error,"exit\r\n"},<<>>}},[{mochiweb_http,request,3,[{file,"src/mochiweb_http.erl"},{line,107}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,227}]}]}
> ancestors: ['http_0.0.0.0:8098_mochiweb',riak_core_sup,<0.148.0>]
> messages: []
> links: [<0.201.0>,#Port<0.235879110>]
> dictionary: []
> trap_exit: false
> status: running
> heap_size: 377
> stack_size: 24
> reductions: 406
> --
> Ricardo Mayerhofer
>
>
>
> --
>
>
> 

Re: Riak nodes constantly crashing

2016-10-26 Thread Ricardo Mayerhofer
Yes, I'll check whether the problem is AAE! I will disable it and see the
results.

Thanks Steven!

On Tue, Oct 25, 2016 at 6:54 PM, Steven Joseph  wrote:

> Hi Ricardo,
>
> If you are using systemd might have to check LimitNOFILE for your units.
> Active anti entropy runs periodically.
>
> Steven
>
> On Wed, 26 Oct 2016 04:36 Ricardo Mayerhofer 
> wrote:
>
>> What's weird is that the node crashes every minute at the same second. Is
>> there anything Riak may be running every minute?
>>
>> On Mon, Oct 24, 2016 at 8:28 PM, Ricardo Mayerhofer <
>> ricardo@gmail.com> wrote:
>>
>> I'm also pasting the free -m:
>>
>>  total   used   free sharedbuffers cached
>> Mem: 15039  14557482  0 37   4594
>> -/+ buffers/cache:   9925   5114
>> Swap:0  0  0
>>
>> On Mon, Oct 24, 2016 at 8:24 PM, Ricardo Mayerhofer <
>> ricardo@gmail.com> wrote:
>>
>> Hi Alexander,
>> Thanks for your response. We use multi-backend with bitcask and leveldb.
>>
>> - File descriptors seems to be ok, at least the config.
>>
>> ubuntu@ip-10-2-58-5:/var/log/riak$ sudo su riak
>> sudo: unable to resolve host ip-10-2-58-5
>> riak@ip-10-2-58-5:/var/log/riak$ ulimit -n
>> 65535
>>
>> - Memory seems to be ok as well:
>>
>> KiB Mem: * 15400916 *total,* 14493744 *used,*   907172 *free,*36244 *
>> buffers
>>
>> - Disk is ok
>>
>> /dev/xvda1   20G  4.1G   15G  22% / # root device
>> /dev/xvdb   148G   69G   72G  49% /mnt/riak-data  # bitcask and riak
>> data disk
>> /dev/xvdc   296G   23G  258G   8% /mnt/riak-data/leveldb #leveldb disk
>>
>> Any other idea? Thanks.
>>
>> On Mon, Oct 24, 2016 at 8:06 PM, Alexander Sicular 
>> wrote:
>>
>> Disk, memory or file descriptors would be my guess. Bitcask?
>>
>>
>> On Monday, October 24, 2016, Ricardo Mayerhofer 
>> wrote:
>>
>> Hi all,
>> I have a Riak 1.4 where the nodes seems to be constantly crashing. All 5
>> nodes are affected.
>>
>> However it seems Riak manage to get them up again.
>>
>> Any idea on whats going on? Erros logs below.
>>
>> Thanks.
>>
>> error.log
>> ...
>> 2016-10-24 21:57:29.185 [error] <0.24570.1174> CRASH REPORT Process
>> <0.24570.1174> with 0 neighbours crashed with reason: no case clause
>> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3
>> line 107
>> 2016-10-24 21:58:29.187 [error] <0.7109.1175> CRASH REPORT Process
>> <0.7109.1175> with 0 neighbours crashed with reason: no case clause
>> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3
>> line 107
>> 2016-10-24 21:59:29.228 [error] <0.19612.1175> CRASH REPORT Process
>> <0.19612.1175> with 0 neighbours crashed with reason: no case clause
>> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3
>> line 107
>> 2016-10-24 22:00:29.218 [error] <0.1356.1176> CRASH REPORT Process
>> <0.1356.1176> with 0 neighbours crashed with reason: no case clause
>> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3
>> line 107
>> 2016-10-24 22:01:29.197 [error] <0.11380.1176> CRASH REPORT Process
>> <0.11380.1176> with 0 neighbours crashed with reason: no case clause
>> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3
>> line 107
>> 2016-10-24 22:02:29.231 [error] <0.24279.1176> CRASH REPORT Process
>> <0.24279.1176> with 0 neighbours crashed with reason: no case clause
>> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3
>> line 107
>>
>> crash.log
>> 2016-10-24 21:51:56 =CRASH REPORT
>>   crasher:
>> initial call: mochiweb_acceptor:init/3
>> pid: <0.28136.1621>
>> registered_name: []
>> exception error: {{case_clause,{ok,{http_error,
>> "exit\r\n"},<<>>}},[{mochiweb_http,request,3,[{file,"src/
>> mochiweb_http.erl"},{line,107}]},{proc_lib,init_p_do_apply,
>> 3,[{file,"proc_lib.erl"},{line,227}]}]}
>> ancestors: ['http_0.0.0.0:8098_mochiweb',riak_core_sup,<0.148.0>]
>> messages: []
>> links: [<0.201.0>,#Port<0.235869290>]
>> dictionary: []
>> trap_exit: false
>> status: running
>> heap_size: 377
>> stack_size: 24
>> reductions: 423
>>   neighbours:
>> 2016-10-24 21:52:56 =CRASH REPORT
>>   crasher:
>> initial call: mochiweb_acceptor:init/3
>> pid: <0.7845.1622>
>> registered_name: []
>> exception error: {{case_clause,{ok,{http_error,
>> "exit\r\n"},<<>>}},[{mochiweb_http,request,3,[{file,"src/
>> mochiweb_http.erl"},{line,107}]},{proc_lib,init_p_do_apply,
>> 3,[{file,"proc_lib.erl"},{line,227}]}]}
>> ancestors: ['http_0.0.0.0:8098_mochiweb',riak_core_sup,<0.148.0>]
>> messages: []
>> links: [<0.201.0>,#Port<0.235879110>]
>> dictionary: []
>> trap_exit: false
>> status: running
>> heap_size: 377
>> stack_size: 24
>> reductions: 406
>> --
>> Ricardo Mayerhofer
>>
>>
>>
>> --
>>
>>
>> Alexander Sicular
>> 

Re: Riak nodes constantly crashing

2016-10-26 Thread Steven Joseph
Hi Ricardo,

If you are using systemd you might have to check LimitNOFILE for your units.
Active anti entropy runs periodically.

Steven

On Wed, 26 Oct 2016 04:36 Ricardo Mayerhofer  wrote:

> What's weird is that the node crashes every minute at the same second. Is
> there anything Riak may be running every minute?
>
> On Mon, Oct 24, 2016 at 8:28 PM, Ricardo Mayerhofer  > wrote:
>
> I'm also pasting the free -m:
>
>  total   used   free sharedbuffers cached
> Mem: 15039  14557482  0 37   4594
> -/+ buffers/cache:   9925   5114
> Swap:0  0  0
>
> On Mon, Oct 24, 2016 at 8:24 PM, Ricardo Mayerhofer  > wrote:
>
> Hi Alexander,
> Thanks for your response. We use multi-backend with bitcask and leveldb.
>
> - File descriptors seems to be ok, at least the config.
>
> ubuntu@ip-10-2-58-5:/var/log/riak$ sudo su riak
> sudo: unable to resolve host ip-10-2-58-5
> riak@ip-10-2-58-5:/var/log/riak$ ulimit -n
> 65535
>
> - Memory seems to be ok as well:
>
> KiB Mem: * 15400916 *total,* 14493744 *used,*   907172 *free,*36244 *
> buffers
>
> - Disk is ok
>
> /dev/xvda1   20G  4.1G   15G  22% / # root device
> /dev/xvdb   148G   69G   72G  49% /mnt/riak-data  # bitcask and riak
> data disk
> /dev/xvdc   296G   23G  258G   8% /mnt/riak-data/leveldb #leveldb disk
>
> Any other idea? Thanks.
>
> On Mon, Oct 24, 2016 at 8:06 PM, Alexander Sicular 
> wrote:
>
> Disk, memory or file descriptors would be my guess. Bitcask?
>
>
> On Monday, October 24, 2016, Ricardo Mayerhofer 
> wrote:
>
> Hi all,
> I have a Riak 1.4 where the nodes seems to be constantly crashing. All 5
> nodes are affected.
>
> However it seems Riak manage to get them up again.
>
> Any idea on whats going on? Erros logs below.
>
> Thanks.
>
> error.log
> ...
> 2016-10-24 21:57:29.185 [error] <0.24570.1174> CRASH REPORT Process
> <0.24570.1174> with 0 neighbours crashed with reason: no case clause
> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3 line
> 107
> 2016-10-24 21:58:29.187 [error] <0.7109.1175> CRASH REPORT Process
> <0.7109.1175> with 0 neighbours crashed with reason: no case clause
> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3 line
> 107
> 2016-10-24 21:59:29.228 [error] <0.19612.1175> CRASH REPORT Process
> <0.19612.1175> with 0 neighbours crashed with reason: no case clause
> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3 line
> 107
> 2016-10-24 22:00:29.218 [error] <0.1356.1176> CRASH REPORT Process
> <0.1356.1176> with 0 neighbours crashed with reason: no case clause
> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3 line
> 107
> 2016-10-24 22:01:29.197 [error] <0.11380.1176> CRASH REPORT Process
> <0.11380.1176> with 0 neighbours crashed with reason: no case clause
> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3 line
> 107
> 2016-10-24 22:02:29.231 [error] <0.24279.1176> CRASH REPORT Process
> <0.24279.1176> with 0 neighbours crashed with reason: no case clause
> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3 line
> 107
>
> crash.log
> 2016-10-24 21:51:56 =CRASH REPORT
>   crasher:
> initial call: mochiweb_acceptor:init/3
> pid: <0.28136.1621>
> registered_name: []
> exception error:
> {{case_clause,{ok,{http_error,"exit\r\n"},<<>>}},[{mochiweb_http,request,3,[{file,"src/mochiweb_http.erl"},{line,107}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,227}]}]}
> ancestors: ['http_0.0.0.0:8098_mochiweb',riak_core_sup,<0.148.0>]
> messages: []
> links: [<0.201.0>,#Port<0.235869290>]
> dictionary: []
> trap_exit: false
> status: running
> heap_size: 377
> stack_size: 24
> reductions: 423
>   neighbours:
> 2016-10-24 21:52:56 =CRASH REPORT
>   crasher:
> initial call: mochiweb_acceptor:init/3
> pid: <0.7845.1622>
> registered_name: []
> exception error:
> {{case_clause,{ok,{http_error,"exit\r\n"},<<>>}},[{mochiweb_http,request,3,[{file,"src/mochiweb_http.erl"},{line,107}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,227}]}]}
> ancestors: ['http_0.0.0.0:8098_mochiweb',riak_core_sup,<0.148.0>]
> messages: []
> links: [<0.201.0>,#Port<0.235879110>]
> dictionary: []
> trap_exit: false
> status: running
> heap_size: 377
> stack_size: 24
> reductions: 406
> --
> Ricardo Mayerhofer
>
>
>
> --
>
>
> Alexander Sicular
> Solutions Architect
> Basho Technologies
> 9175130679
> @siculars
>
>
>
>
> --
> Ricardo Mayerhofer
>
>
>
>
> --
> Ricardo Mayerhofer
>
>
>
>
> --
> Ricardo Mayerhofer
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

Re: Riak nodes constantly crashing

2016-10-25 Thread Ricardo Mayerhofer
What's weird is that the node crashes every minute at the same second. Is
there anything Riak may be running every minute?

On Mon, Oct 24, 2016 at 8:28 PM, Ricardo Mayerhofer 
wrote:

> I'm also pasting the free -m:
>
>  total   used   free sharedbuffers cached
> Mem: 15039  14557482  0 37   4594
> -/+ buffers/cache:   9925   5114
> Swap:0  0  0
>
> On Mon, Oct 24, 2016 at 8:24 PM, Ricardo Mayerhofer  > wrote:
>
>> Hi Alexander,
>> Thanks for your response. We use multi-backend with bitcask and leveldb.
>>
>> - File descriptors seems to be ok, at least the config.
>>
>> ubuntu@ip-10-2-58-5:/var/log/riak$ sudo su riak
>> sudo: unable to resolve host ip-10-2-58-5
>> riak@ip-10-2-58-5:/var/log/riak$ ulimit -n
>> 65535
>>
>> - Memory seems to be ok as well:
>>
>> KiB Mem: * 15400916 *total,* 14493744 *used,*   907172 *free,*36244 *
>> buffers
>>
>> - Disk is ok
>>
>> /dev/xvda1   20G  4.1G   15G  22% / # root device
>> /dev/xvdb   148G   69G   72G  49% /mnt/riak-data  # bitcask and riak
>> data disk
>> /dev/xvdc   296G   23G  258G   8% /mnt/riak-data/leveldb #leveldb disk
>>
>> Any other idea? Thanks.
>>
>> On Mon, Oct 24, 2016 at 8:06 PM, Alexander Sicular 
>> wrote:
>>
>>> Disk, memory or file descriptors would be my guess. Bitcask?
>>>
>>>
>>> On Monday, October 24, 2016, Ricardo Mayerhofer 
>>> wrote:
>>>
 Hi all,
 I have a Riak 1.4 where the nodes seems to be constantly crashing. All
 5 nodes are affected.

 However it seems Riak manage to get them up again.

 Any idea on whats going on? Erros logs below.

 Thanks.

 error.log
 ...
 2016-10-24 21:57:29.185 [error] <0.24570.1174> CRASH REPORT Process
 <0.24570.1174> with 0 neighbours crashed with reason: no case clause
 matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3
 line 107
 2016-10-24 21:58:29.187 [error] <0.7109.1175> CRASH REPORT Process
 <0.7109.1175> with 0 neighbours crashed with reason: no case clause
 matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3
 line 107
 2016-10-24 21:59:29.228 [error] <0.19612.1175> CRASH REPORT Process
 <0.19612.1175> with 0 neighbours crashed with reason: no case clause
 matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3
 line 107
 2016-10-24 22:00:29.218 [error] <0.1356.1176> CRASH REPORT Process
 <0.1356.1176> with 0 neighbours crashed with reason: no case clause
 matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3
 line 107
 2016-10-24 22:01:29.197 [error] <0.11380.1176> CRASH REPORT Process
 <0.11380.1176> with 0 neighbours crashed with reason: no case clause
 matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3
 line 107
 2016-10-24 22:02:29.231 [error] <0.24279.1176> CRASH REPORT Process
 <0.24279.1176> with 0 neighbours crashed with reason: no case clause
 matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3
 line 107

 crash.log
 2016-10-24 21:51:56 =CRASH REPORT
   crasher:
 initial call: mochiweb_acceptor:init/3
 pid: <0.28136.1621>
 registered_name: []
 exception error: {{case_clause,{ok,{http_error,
 "exit\r\n"},<<>>}},[{mochiweb_http,request,3,[{file,"src/moc
 hiweb_http.erl"},{line,107}]},{proc_lib,init_p_do_apply,3,[{
 file,"proc_lib.erl"},{line,227}]}]}
 ancestors: ['http_0.0.0.0:8098_mochiweb',riak_core_sup,<0.148.0>]
 messages: []
 links: [<0.201.0>,#Port<0.235869290>]
 dictionary: []
 trap_exit: false
 status: running
 heap_size: 377
 stack_size: 24
 reductions: 423
   neighbours:
 2016-10-24 21:52:56 =CRASH REPORT
   crasher:
 initial call: mochiweb_acceptor:init/3
 pid: <0.7845.1622>
 registered_name: []
 exception error: {{case_clause,{ok,{http_error,
 "exit\r\n"},<<>>}},[{mochiweb_http,request,3,[{file,"src/moc
 hiweb_http.erl"},{line,107}]},{proc_lib,init_p_do_apply,3,[{
 file,"proc_lib.erl"},{line,227}]}]}
 ancestors: ['http_0.0.0.0:8098_mochiweb',riak_core_sup,<0.148.0>]
 messages: []
 links: [<0.201.0>,#Port<0.235879110>]
 dictionary: []
 trap_exit: false
 status: running
 heap_size: 377
 stack_size: 24
 reductions: 406
 --
 Ricardo Mayerhofer

>>>
>>>
>>> --
>>>
>>>
>>> Alexander Sicular
>>> Solutions Architect
>>> Basho Technologies
>>> 9175130679
>>> @siculars
>>>
>>>
>>
>>
>> --
>> Ricardo Mayerhofer
>>
>
>
>
> --
> Ricardo Mayerhofer
>



-- 
Ricardo Mayerhofer
___
riak-users mailing list

Re: Riak nodes constantly crashing

2016-10-24 Thread Ricardo Mayerhofer
I'm also pasting the free -m:

 total   used   free sharedbuffers cached
Mem: 15039  14557482  0 37   4594
-/+ buffers/cache:   9925   5114
Swap:0  0  0

On Mon, Oct 24, 2016 at 8:24 PM, Ricardo Mayerhofer 
wrote:

> Hi Alexander,
> Thanks for your response. We use multi-backend with bitcask and leveldb.
>
> - File descriptors seems to be ok, at least the config.
>
> ubuntu@ip-10-2-58-5:/var/log/riak$ sudo su riak
> sudo: unable to resolve host ip-10-2-58-5
> riak@ip-10-2-58-5:/var/log/riak$ ulimit -n
> 65535
>
> - Memory seems to be ok as well:
>
> KiB Mem: * 15400916 *total,* 14493744 *used,*   907172 *free,*36244 *
> buffers
>
> - Disk is ok
>
> /dev/xvda1   20G  4.1G   15G  22% / # root device
> /dev/xvdb   148G   69G   72G  49% /mnt/riak-data  # bitcask and riak
> data disk
> /dev/xvdc   296G   23G  258G   8% /mnt/riak-data/leveldb #leveldb disk
>
> Any other idea? Thanks.
>
> On Mon, Oct 24, 2016 at 8:06 PM, Alexander Sicular 
> wrote:
>
>> Disk, memory or file descriptors would be my guess. Bitcask?
>>
>>
>> On Monday, October 24, 2016, Ricardo Mayerhofer 
>> wrote:
>>
>>> Hi all,
>>> I have a Riak 1.4 where the nodes seems to be constantly crashing. All 5
>>> nodes are affected.
>>>
>>> However it seems Riak manage to get them up again.
>>>
>>> Any idea on whats going on? Erros logs below.
>>>
>>> Thanks.
>>>
>>> error.log
>>> ...
>>> 2016-10-24 21:57:29.185 [error] <0.24570.1174> CRASH REPORT Process
>>> <0.24570.1174> with 0 neighbours crashed with reason: no case clause
>>> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3
>>> line 107
>>> 2016-10-24 21:58:29.187 [error] <0.7109.1175> CRASH REPORT Process
>>> <0.7109.1175> with 0 neighbours crashed with reason: no case clause
>>> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3
>>> line 107
>>> 2016-10-24 21:59:29.228 [error] <0.19612.1175> CRASH REPORT Process
>>> <0.19612.1175> with 0 neighbours crashed with reason: no case clause
>>> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3
>>> line 107
>>> 2016-10-24 22:00:29.218 [error] <0.1356.1176> CRASH REPORT Process
>>> <0.1356.1176> with 0 neighbours crashed with reason: no case clause
>>> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3
>>> line 107
>>> 2016-10-24 22:01:29.197 [error] <0.11380.1176> CRASH REPORT Process
>>> <0.11380.1176> with 0 neighbours crashed with reason: no case clause
>>> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3
>>> line 107
>>> 2016-10-24 22:02:29.231 [error] <0.24279.1176> CRASH REPORT Process
>>> <0.24279.1176> with 0 neighbours crashed with reason: no case clause
>>> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3
>>> line 107
>>>
>>> crash.log
>>> 2016-10-24 21:51:56 =CRASH REPORT
>>>   crasher:
>>> initial call: mochiweb_acceptor:init/3
>>> pid: <0.28136.1621>
>>> registered_name: []
>>> exception error: {{case_clause,{ok,{http_error,
>>> "exit\r\n"},<<>>}},[{mochiweb_http,request,3,[{file,"src/moc
>>> hiweb_http.erl"},{line,107}]},{proc_lib,init_p_do_apply,3,[{
>>> file,"proc_lib.erl"},{line,227}]}]}
>>> ancestors: ['http_0.0.0.0:8098_mochiweb',riak_core_sup,<0.148.0>]
>>> messages: []
>>> links: [<0.201.0>,#Port<0.235869290>]
>>> dictionary: []
>>> trap_exit: false
>>> status: running
>>> heap_size: 377
>>> stack_size: 24
>>> reductions: 423
>>>   neighbours:
>>> 2016-10-24 21:52:56 =CRASH REPORT
>>>   crasher:
>>> initial call: mochiweb_acceptor:init/3
>>> pid: <0.7845.1622>
>>> registered_name: []
>>> exception error: {{case_clause,{ok,{http_error,
>>> "exit\r\n"},<<>>}},[{mochiweb_http,request,3,[{file,"src/moc
>>> hiweb_http.erl"},{line,107}]},{proc_lib,init_p_do_apply,3,[{
>>> file,"proc_lib.erl"},{line,227}]}]}
>>> ancestors: ['http_0.0.0.0:8098_mochiweb',riak_core_sup,<0.148.0>]
>>> messages: []
>>> links: [<0.201.0>,#Port<0.235879110>]
>>> dictionary: []
>>> trap_exit: false
>>> status: running
>>> heap_size: 377
>>> stack_size: 24
>>> reductions: 406
>>> --
>>> Ricardo Mayerhofer
>>>
>>
>>
>> --
>>
>>
>> Alexander Sicular
>> Solutions Architect
>> Basho Technologies
>> 9175130679
>> @siculars
>>
>>
>
>
> --
> Ricardo Mayerhofer
>



-- 
Ricardo Mayerhofer
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak nodes constantly crashing

2016-10-24 Thread Ricardo Mayerhofer
Hi Alexander,
Thanks for your response. We use multi-backend with bitcask and leveldb.

- File descriptors seem to be OK, at least according to the config.

ubuntu@ip-10-2-58-5:/var/log/riak$ sudo su riak
sudo: unable to resolve host ip-10-2-58-5
riak@ip-10-2-58-5:/var/log/riak$ ulimit -n
65535

- Memory seems to be ok as well:

KiB Mem: 15400916 total, 14493744 used, 907172 free, 36244 buffers

- Disk is ok

/dev/xvda1   20G  4.1G   15G  22% / # root device
/dev/xvdb   148G   69G   72G  49% /mnt/riak-data  # bitcask and riak
data disk
/dev/xvdc   296G   23G  258G   8% /mnt/riak-data/leveldb #leveldb disk

Any other idea? Thanks.

On Mon, Oct 24, 2016 at 8:06 PM, Alexander Sicular 
wrote:

> Disk, memory or file descriptors would be my guess. Bitcask?
>
>
> On Monday, October 24, 2016, Ricardo Mayerhofer 
> wrote:
>
>> Hi all,
>> I have a Riak 1.4 where the nodes seems to be constantly crashing. All 5
>> nodes are affected.
>>
>> However it seems Riak manage to get them up again.
>>
>> Any idea on whats going on? Erros logs below.
>>
>> Thanks.
>>
>> error.log
>> ...
>> 2016-10-24 21:57:29.185 [error] <0.24570.1174> CRASH REPORT Process
>> <0.24570.1174> with 0 neighbours crashed with reason: no case clause
>> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3
>> line 107
>> 2016-10-24 21:58:29.187 [error] <0.7109.1175> CRASH REPORT Process
>> <0.7109.1175> with 0 neighbours crashed with reason: no case clause
>> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3
>> line 107
>> 2016-10-24 21:59:29.228 [error] <0.19612.1175> CRASH REPORT Process
>> <0.19612.1175> with 0 neighbours crashed with reason: no case clause
>> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3
>> line 107
>> 2016-10-24 22:00:29.218 [error] <0.1356.1176> CRASH REPORT Process
>> <0.1356.1176> with 0 neighbours crashed with reason: no case clause
>> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3
>> line 107
>> 2016-10-24 22:01:29.197 [error] <0.11380.1176> CRASH REPORT Process
>> <0.11380.1176> with 0 neighbours crashed with reason: no case clause
>> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3
>> line 107
>> 2016-10-24 22:02:29.231 [error] <0.24279.1176> CRASH REPORT Process
>> <0.24279.1176> with 0 neighbours crashed with reason: no case clause
>> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3
>> line 107
>>
>> crash.log
>> 2016-10-24 21:51:56 =CRASH REPORT
>>   crasher:
>> initial call: mochiweb_acceptor:init/3
>> pid: <0.28136.1621>
>> registered_name: []
>> exception error: {{case_clause,{ok,{http_error,
>> "exit\r\n"},<<>>}},[{mochiweb_http,request,3,[{file,"src/moc
>> hiweb_http.erl"},{line,107}]},{proc_lib,init_p_do_apply,3,[{
>> file,"proc_lib.erl"},{line,227}]}]}
>> ancestors: ['http_0.0.0.0:8098_mochiweb',riak_core_sup,<0.148.0>]
>> messages: []
>> links: [<0.201.0>,#Port<0.235869290>]
>> dictionary: []
>> trap_exit: false
>> status: running
>> heap_size: 377
>> stack_size: 24
>> reductions: 423
>>   neighbours:
>> 2016-10-24 21:52:56 =CRASH REPORT
>>   crasher:
>> initial call: mochiweb_acceptor:init/3
>> pid: <0.7845.1622>
>> registered_name: []
>> exception error: {{case_clause,{ok,{http_error,
>> "exit\r\n"},<<>>}},[{mochiweb_http,request,3,[{file,"src/moc
>> hiweb_http.erl"},{line,107}]},{proc_lib,init_p_do_apply,3,[{
>> file,"proc_lib.erl"},{line,227}]}]}
>> ancestors: ['http_0.0.0.0:8098_mochiweb',riak_core_sup,<0.148.0>]
>> messages: []
>> links: [<0.201.0>,#Port<0.235879110>]
>> dictionary: []
>> trap_exit: false
>> status: running
>> heap_size: 377
>> stack_size: 24
>> reductions: 406
>> --
>> Ricardo Mayerhofer
>>
>
>
> --
>
>
> Alexander Sicular
> Solutions Architect
> Basho Technologies
> 9175130679
> @siculars
>
>


-- 
Ricardo Mayerhofer
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak nodes constantly crashing

2016-10-24 Thread Alexander Sicular
Disk, memory or file descriptors would be my guess. Bitcask?

On Monday, October 24, 2016, Ricardo Mayerhofer 
wrote:

> Hi all,
> I have a Riak 1.4 where the nodes seems to be constantly crashing. All 5
> nodes are affected.
>
> However it seems Riak manage to get them up again.
>
> Any idea on whats going on? Erros logs below.
>
> Thanks.
>
> error.log
> ...
> 2016-10-24 21:57:29.185 [error] <0.24570.1174> CRASH REPORT Process
> <0.24570.1174> with 0 neighbours crashed with reason: no case clause
> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3
> line 107
> 2016-10-24 21:58:29.187 [error] <0.7109.1175> CRASH REPORT Process
> <0.7109.1175> with 0 neighbours crashed with reason: no case clause
> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3
> line 107
> 2016-10-24 21:59:29.228 [error] <0.19612.1175> CRASH REPORT Process
> <0.19612.1175> with 0 neighbours crashed with reason: no case clause
> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3
> line 107
> 2016-10-24 22:00:29.218 [error] <0.1356.1176> CRASH REPORT Process
> <0.1356.1176> with 0 neighbours crashed with reason: no case clause
> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3
> line 107
> 2016-10-24 22:01:29.197 [error] <0.11380.1176> CRASH REPORT Process
> <0.11380.1176> with 0 neighbours crashed with reason: no case clause
> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3
> line 107
> 2016-10-24 22:02:29.231 [error] <0.24279.1176> CRASH REPORT Process
> <0.24279.1176> with 0 neighbours crashed with reason: no case clause
> matching {ok,{http_error,"exit\r\n"},<<>>} in mochiweb_http:request/3
> line 107
>
> crash.log
> 2016-10-24 21:51:56 =CRASH REPORT
>   crasher:
> initial call: mochiweb_acceptor:init/3
> pid: <0.28136.1621>
> registered_name: []
> exception error: {{case_clause,{ok,{http_error,
> "exit\r\n"},<<>>}},[{mochiweb_http,request,3,[{file,"src/
> mochiweb_http.erl"},{line,107}]},{proc_lib,init_p_do_apply,
> 3,[{file,"proc_lib.erl"},{line,227}]}]}
> ancestors: ['http_0.0.0.0:8098_mochiweb',riak_core_sup,<0.148.0>]
> messages: []
> links: [<0.201.0>,#Port<0.235869290>]
> dictionary: []
> trap_exit: false
> status: running
> heap_size: 377
> stack_size: 24
> reductions: 423
>   neighbours:
> 2016-10-24 21:52:56 =CRASH REPORT
>   crasher:
> initial call: mochiweb_acceptor:init/3
> pid: <0.7845.1622>
> registered_name: []
> exception error: {{case_clause,{ok,{http_error,
> "exit\r\n"},<<>>}},[{mochiweb_http,request,3,[{file,"src/
> mochiweb_http.erl"},{line,107}]},{proc_lib,init_p_do_apply,
> 3,[{file,"proc_lib.erl"},{line,227}]}]}
> ancestors: ['http_0.0.0.0:8098_mochiweb',riak_core_sup,<0.148.0>]
> messages: []
> links: [<0.201.0>,#Port<0.235879110>]
> dictionary: []
> trap_exit: false
> status: running
> heap_size: 377
> stack_size: 24
> reductions: 406
> --
> Ricardo Mayerhofer
>


-- 


Alexander Sicular
Solutions Architect
Basho Technologies
9175130679
@siculars
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak_explorer stopped working after turn on security on cluster

2016-10-21 Thread Magnus Kessler
On 21 October 2016 at 03:45, AJAX DoneBy Jack  wrote:

> Hello Basho,
>
> Today I turned on security on my cluster but riak_explorer stopped working
> after that.
> Anything I need to check on riak_explorer to make it work again?
>
> Thanks,
> Ajax
>
>
Hi Ajax,

After turning on Riak security, all clients must communicate with Riak over
a TLS secured connection, and must also send valid security credentials
with each request. AFAIK, this functionality has not yet been added to Riak
Explorer. There is an open github issue to add this functionality [0].

Kind Regards,

Magnus

[0]: https://github.com/basho-labs/riak_explorer/issues/91

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: How to specify dismax related parameters like qf

2016-10-17 Thread Ajax Done
Hi Alex,

I retried it again and this time it works; here is the query string:
"{!type=edismax qf='title_s content_s'}riak solr"

Thanks,
Ajax



> On Oct 17, 2016, at 10:55 AM, Alex Moore  wrote:
> 
> Hey Ajax,
> 
> Have you tried adding those parameters to the LocalParameters {!dismax} block?
> 
> e.g.: {!type=dismax qf='myfield yourfield'}solr rocks
> 
> http://wiki.apache.org/solr/LocalParams#Basic_Syntax 
> 
> 
> Thanks,
> Alex
> 
> On Fri, Oct 14, 2016 at 3:18 PM, AJAX DoneBy Jack  > wrote:
> Hello Basho,
> 
> I am very new on Riak Search, I know can add {!dismax}before query string to 
> use it, but don't know how to specify qf or other dismax related parameters 
> in Riak Java Client. Could you advise?
> 
> Thanks,
> Ajax 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com 
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com 
> 
> 
> 

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Inconsistency in Querying Riak TS?

2016-10-17 Thread John Daily
One of the catches regarding the quantum limit is that unless the
query starts exactly on a boundary, the effective limit is one fewer
because it is determined by the number of partitions the query has to
touch.

I suspect that's the behavior you're seeing.
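
A quick arithmetic sketch of that effect (plain Java; the 15-minute quantum
and the limit of 5 quanta are just the documented defaults, and the numbers
are made up for illustration):

public class QuantaTouched {
    // Number of quantum partitions a [startMs, endMs] range has to touch.
    static long quantaTouched(long startMs, long endMs, long quantumMs) {
        return (endMs / quantumMs) - (startMs / quantumMs) + 1;
    }

    public static void main(String[] args) {
        long quantum = 15 * 60 * 1000L;        // 15-minute quantum in ms
        long range = 75 * 60 * 1000L;          // 75-minute query range in ms

        // Starting exactly on a boundary: 5 quanta, allowed with the default limit of 5.
        System.out.println(quantaTouched(0, range - 1, quantum));                 // prints 5

        // The same 75-minute range starting 5 minutes into a quantum: 6 quanta,
        // so the query is rejected with too_many_subqueries.
        long offset = 5 * 60 * 1000L;
        System.out.println(quantaTouched(offset, offset + range - 1, quantum));   // prints 6
    }
}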

Sent from my iPhone

> On Oct 17, 2016, at 9:38 PM, Joe Olson  wrote:
>
> According to the documentation at
>
> https://docs.basho.com/riak/ts/1.4.0/using/querying/guidelines/
>
> "A query covering more than a certain number of quanta (5 by default) will 
> generate the error too_many_subqueries and the query system will refuse to 
> run it. Assuming a default quantum of 15 minutes, the maximum query time 
> range is 75 minutes."
>
> However, the example shows a table of quantum 15 seconds. After the example:
>
> "The maximum time range we can query is 60s, anything beyond will fail."
>
> which seems to contradict the first assertion.
>
> Furthermore, I have been getting inconsistant behavior using the quantum.
>
> I have a code snippet placed here, demonstrating this behavior:
>
> https://gist.github.com/anonymous/a4a4ccb8617a00d38fb47a6b11571d81
>
> In this example, I set up two tables, one with a quantum of 6 hours, another 
> with a quantum of 12 days.
>
> I am using the default range (5) on my cluster.
>
> A query spanning 5 quantum partitions is allowed on the 6 hour table, a query 
> spanning 5 quantum partitions
> on the 12 day table fails with the 'too many subqueries' error.
>
> Are the number of allowed subqueries different depending on the quantum size?
>
> If so, is there more detailed documentation on the subject?
>
> Thanks!
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: How to specify dismax related parameters like qf

2016-10-17 Thread AJAX DoneBy Jack
Hi Alex,

I did try that in the Java client using PB, but got an exception; I can
paste the error here tonight.

Thanks,
Ajax

On Monday, 17 October 2016, Alex Moore  wrote:

> Hey Ajax,
>
> Have you tried adding those parameters to the LocalParameters {!dismax}
>  block?
>
> e.g.: {!type=dismax qf='myfield yourfield'}solr rocks
>
> http://wiki.apache.org/solr/LocalParams#Basic_Syntax
>
> Thanks,
> Alex
>
> On Fri, Oct 14, 2016 at 3:18 PM, AJAX DoneBy Jack  > wrote:
>
>> Hello Basho,
>>
>> I am very new on Riak Search, I know can add {!dismax}before query string
>> to use it, but don't know how to specify qf or other dismax related
>> parameters in Riak Java Client. Could you advise?
>>
>> Thanks,
>> Ajax
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> 
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: How to specify dismax related parameters like qf

2016-10-17 Thread Alex Moore
Hey Ajax,

Have you tried adding those parameters to the LocalParameters {!dismax}
 block?

e.g.: {!type=dismax qf='myfield yourfield'}solr rocks

http://wiki.apache.org/solr/LocalParams#Basic_Syntax

Thanks,
Alex

On Fri, Oct 14, 2016 at 3:18 PM, AJAX DoneBy Jack 
wrote:

> Hello Basho,
>
> I am very new on Riak Search, I know can add {!dismax}before query string
> to use it, but don't know how to specify qf or other dismax related
> parameters in Riak Java Client. Could you advise?
>
> Thanks,
> Ajax
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: How to specify dismax related parameters like qf

2016-10-17 Thread Fred Dushin
The internal solr API will not use the distributed queries generated from 
coverage plans.  You will only get results from the local node.  Theoretically, 
you could aggregate and de-duplicate across multiple nodes, but that would 
result in more data movement than necessary, as it does not leverage the "r=1" 
behavior you get from cover sets and which you get automatically from the Riak 
HTTP API.

-Fred

> On Oct 17, 2016, at 10:17 AM, AJAX DoneBy Jack  wrote:
> 
> Hi Magnus,
> 
> So you suggest to use http API right? That day I were thinking query the 
> internal Solr http by sending request. Could you advise what's the difference 
> between Riak http API and internal Solr http API? What's the pros and cons to 
> use them?
> 
> Thanks,
> Ajax
> 
> On Monday, 17 October 2016, Magnus Kessler  > wrote:
> On 14 October 2016 at 20:18, AJAX DoneBy Jack  > wrote:
> Hello Basho,
> 
> I am very new on Riak Search, I know can add {!dismax}before query string to 
> use it, but don't know how to specify qf or other dismax related parameters 
> in Riak Java Client. Could you advise?
> 
> Thanks,
> Ajax
> 
> Hi Ajax,
> 
> The Riak Java Client, as most other Riak clients, uses the Protocol Buffer 
> API to communicate with Riak. Yokozuna's implementation of the Protocol 
> Buffer API allows only for a small set of query parameters [0], which have 
> been chosen to support the standard query parser. As such, there is currently 
> no easy way to use the extended set of query parameters through the java api.
> 
> However, you may have better luck if you talk directly to HTTP API, exposed 
> at http://:8098/search/query/. This will accept all queries 
> supported by Solr 4.7. Please be aware, though, that some query results that 
> require accumulating data from all Solr nodes (such as stats queries), may 
> not work as expected. Yokozuna constructs a new coverage query very 
> frequently, and the actual results returned depend on which nodes are chosen 
> in this query.
> 
> Kind Regards,
> 
> Magnus
> 
> [0]: 
> https://github.com/basho/yokozuna/blob/develop/src/yz_pb_search.erl#L144-L150 
> 
> 
>  
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com 
> 
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com 
> 
> 
> 
> 
> 
> -- 
> Magnus Kessler
> Client Services Engineer
> Basho Technologies Limited
> 
> Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: How to specify dismax related parameters like qf

2016-10-17 Thread AJAX DoneBy Jack
Hi Magnus,

So you suggest using the HTTP API, right? The other day I was thinking of
querying the internal Solr HTTP endpoint directly by sending requests to it.
Could you advise what the difference is between the Riak HTTP API and the
internal Solr HTTP API? What are the pros and cons of using each?

Thanks,
Ajax

On Monday, 17 October 2016, Magnus Kessler  wrote:

> On 14 October 2016 at 20:18, AJAX DoneBy Jack  > wrote:
>
>> Hello Basho,
>>
>> I am very new on Riak Search, I know can add {!dismax}before query string
>> to use it, but don't know how to specify qf or other dismax related
>> parameters in Riak Java Client. Could you advise?
>>
>> Thanks,
>> Ajax
>>
>
> Hi Ajax,
>
> The Riak Java Client, as most other Riak clients, uses the Protocol Buffer
> API to communicate with Riak. Yokozuna's implementation of the Protocol
> Buffer API allows only for a small set of query parameters [0], which have
> been chosen to support the standard query parser. As such, there is
> currently no easy way to use the extended set of query parameters through
> the java api.
>
> However, you may have better luck if you talk directly to HTTP API,
> exposed at http://:8098/search/query/. This will accept
> all queries supported by Solr 4.7. Please be aware, though, that some query
> results that require accumulating data from all Solr nodes (such as stats
> queries), may not work as expected. Yokozuna constructs a new coverage
> query very frequently, and the actual results returned depend on which
> nodes are chosen in this query.
>
> Kind Regards,
>
> Magnus
>
> [0]: https://github.com/basho/yokozuna/blob/develop/src/yz_
> pb_search.erl#L144-L150
>
>
>
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> 
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>>
>
>
> --
> Magnus Kessler
> Client Services Engineer
> Basho Technologies Limited
>
> Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: How to specify dismax related parameters like qf

2016-10-17 Thread Magnus Kessler
On 14 October 2016 at 20:18, AJAX DoneBy Jack  wrote:

> Hello Basho,
>
> I am very new on Riak Search, I know can add {!dismax}before query string
> to use it, but don't know how to specify qf or other dismax related
> parameters in Riak Java Client. Could you advise?
>
> Thanks,
> Ajax
>

Hi Ajax,

The Riak Java Client, as most other Riak clients, uses the Protocol Buffer
API to communicate with Riak. Yokozuna's implementation of the Protocol
Buffer API allows only for a small set of query parameters [0], which have
been chosen to support the standard query parser. As such, there is
currently no easy way to use the extended set of query parameters through
the java api.

However, you may have better luck if you talk directly to HTTP API, exposed
at http://:8098/search/query/. This will accept all
queries supported by Solr 4.7. Please be aware, though, that some query
results that require accumulating data from all Solr nodes (such as stats
queries), may not work as expected. Yokozuna constructs a new coverage
query very frequently, and the actual results returned depend on which
nodes are chosen in this query.
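
As a rough sketch, querying that endpoint needs nothing beyond the Java
standard library; the host, port, index name and field names below are
placeholder assumptions:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class SolrHttpQueryExample {
    public static void main(String[] args) throws Exception {
        // Placeholder host/port and index name -- adjust for your cluster.
        String base = "http://127.0.0.1:8098/search/query/my_index";
        // Full Solr syntax, including edismax LocalParams, is accepted here.
        String q = URLEncoder.encode("{!type=edismax qf='title_s content_s'}riak solr", "UTF-8");
        URL url = new URL(base + "?wt=json&q=" + q);

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // raw Solr JSON response
            }
        } finally {
            conn.disconnect();
        }
    }
}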

Kind Regards,

Magnus

[0]:
https://github.com/basho/yokozuna/blob/develop/src/yz_pb_search.erl#L144-L150



>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>


-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: riak claimant

2016-10-17 Thread Magnus Kessler
On 12 October 2016 at 19:07, Travis Kirstine <
tkirst...@firstbasesolutions.com> wrote:

> Does the riak claimant node have higher load than the other nodes?
>

Hi Travis,

The role of the claimant node is simply to coordinate certain cluster
related operations that involve changes to the ring, such as nodes joining
or leaving the cluster. Otherwise this node has no special role during
operations that manipulate data stored in the cluster.

Kind Regards,

Magnus



>
>
> Travis Kirstine
>
> *Project Supervisor *
>
> 140 Renfrew Drive, Suite 100
>
> Markham, Ontario L3R 6B3 Canada
>
> 
> T: 905.477.3600 Ext 267 | C: 647
>


-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: RiakTS and Multi-Backend

2016-10-13 Thread Damien Krotkine

For what it's worth, I've successfully used KV features inside of Riak
TS and tested it quite a lot, including with a heavy load. As John said,
I didn't use multi-backend and I disabled AAE.

Riak TS was happy when using Gets, Sets, bucket types, and including
CRDTs (I tested only the Set CRDTs). So you can go ahead and test these
features; they should work fine, at least for now.

That said, I would not recommend using this in production in the long
term. Instead, use a different cluster for KV. However, different
cluster doesn't have to mean different hardware. You could set up Riak KV
and Riak TS alongside each other on the same nodes (you need some work to have
separate configuration, data directories, different cookies, etc., but it's not a
lot of work). Granted, both clusters will share resources, but having
them running on the same nodes has a lot of benefits in terms of
maintenance and administration.

John Daily writes:

> There are several important KV components such as AAE and Search with which 
> TS integration has not been sufficiently tested.  At this time we still 
> recommend running multiple clusters as mixed use workloads in Riak TS are 
> currently not supported.
>
> Your internal testing may reveal that small supporting KV datasets may work 
> as expected and allow you to augment your TS data in a single cluster.  We 
> are interested in your results as this will help us to improve and expand 
> Riak TS moving forward.
>
> -John
>
>> On Oct 12, 2016, at 1:04 PM, Junk, Damion A  wrote:
>>
>> So are you suggesting that it's not advisable to use even KV backed by 
>> LevelDB in a TS instance, or is it more that performance is unknown and 
>> therefore not guaranteed / supported?  If the data is otherwise "safe", this 
>> may still be a better option for us than running separate clusters.
>>
>> Are others who need both a general KV store and the TS functionality running 
>> multiple clusters to handle this use case?
>>
>> Thanks for the quick reply!
>>
>>
>> Damion
>>
>>> On Oct 12, 2016, at 11:53 AM, John Daily  wrote:
>>>
>>> We have not done any work to support the multi-backend, hence the error 
>>> you’re seeing. TS depends exclusively on leveldb.
>>>
>>> We’re not recommending the use of KV functionality in the TS product yet, 
>>> because the latter is still changing rapidly and we will need to go back 
>>> and fix some basic KV mechanisms. We’re also not yet sure of the 
>>> performance characteristics if both KV and TS are in use under heavy load.
>>>
>>> In short: I apologize, but we’re not really ready to support your use case 
>>> yet.
>>>
>>> -John
>>>
 On Oct 12, 2016, at 12:17 PM, Junk, Damion A  wrote:

 Hello all -

 I was wondering if it is possible to use RiakTS with a multi-backend 
 configuration.

 I have an existing set of applications using RiakKV and Bitcask, but we're 
 now wanting to start using some of the TS features on a new project. 
 Setting up the multi-backend configuration with TS seems to work fine (our 
 app existing app reads the Bitcask buckets and SOLR indices without 
 issue), and we can even create a TS schema. We have LevelDB as the default 
 backend, so presumably this is what was used during the "create table" 
 query.

 When trying to query the TS table, we're seeing:

 2016-10-12 10:02:41.372 [error] <0.1410.0> gen_fsm <0.1410.0> in state 
 active terminated with reason: call to undefined function 
 riak_kv_multi_backend:range_scan/4 from riak_kv_vnode:list/7 line 1875

 And then in the client code, no return until finally an Exception (Riak 
 Java Client).


 Thanks for any assistance!


 Damion
 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>>
>>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


--
Cheers,
Damien Krotkine

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: RiakTS and Multi-Backend

2016-10-12 Thread John Daily
There are several important KV components such as AAE and Search with which TS 
integration has not been sufficiently tested.  At this time we still recommend 
running multiple clusters as mixed use workloads in Riak TS are currently not 
supported.

Your internal testing may reveal that small supporting KV datasets may work as 
expected and allow you to augment your TS data in a single cluster.  We are 
interested in your results as this will help us to improve and expand Riak TS 
moving forward.

-John

> On Oct 12, 2016, at 1:04 PM, Junk, Damion A  wrote:
> 
> So are you suggesting that it's not advisable to use even KV backed by 
> LevelDB in a TS instance, or is it more that performance is unknown and 
> therefore not guaranteed / supported?  If the data is otherwise "safe", this 
> may still be a better option for us than running separate clusters. 
> 
> Are others who need both a general KV store and the TS functionality running 
> multiple clusters to handle this use case?
> 
> Thanks for the quick reply!
> 
> 
> Damion
> 
>> On Oct 12, 2016, at 11:53 AM, John Daily  wrote:
>> 
>> We have not done any work to support the multi-backend, hence the error 
>> you’re seeing. TS depends exclusively on leveldb.
>> 
>> We’re not recommending the use of KV functionality in the TS product yet, 
>> because the latter is still changing rapidly and we will need to go back and 
>> fix some basic KV mechanisms. We’re also not yet sure of the performance 
>> characteristics if both KV and TS are in use under heavy load.
>> 
>> In short: I apologize, but we’re not really ready to support your use case 
>> yet.
>> 
>> -John
>> 
>>> On Oct 12, 2016, at 12:17 PM, Junk, Damion A  wrote:
>>> 
>>> Hello all -
>>> 
>>> I was wondering if it is possible to use RiakTS with a multi-backend 
>>> configuration. 
>>> 
>>> I have an existing set of applications using RiakKV and Bitcask, but we're 
>>> now wanting to start using some of the TS features on a new project. 
>>> Setting up the multi-backend configuration with TS seems to work fine (our 
>>> app existing app reads the Bitcask buckets and SOLR indices without issue), 
>>> and we can even create a TS schema. We have LevelDB as the default backend, 
>>> so presumably this is what was used during the "create table" query.
>>> 
>>> When trying to query the TS table, we're seeing:
>>> 
>>> 2016-10-12 10:02:41.372 [error] <0.1410.0> gen_fsm <0.1410.0> in state 
>>> active terminated with reason: call to undefined function 
>>> riak_kv_multi_backend:range_scan/4 from riak_kv_vnode:list/7 line 1875
>>> 
>>> And then in the client code, no return until finally an Exception (Riak 
>>> Java Client).
>>> 
>>> 
>>> Thanks for any assistance!
>>> 
>>> 
>>> Damion
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>> 
> 


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: RiakTS and Multi-Backend

2016-10-12 Thread Junk, Damion A
So are you suggesting that it's not advisable to use even KV backed by LevelDB 
in a TS instance, or is it more that performance is unknown and therefore not 
guaranteed / supported?  If the data is otherwise "safe", this may still be a 
better option for us than running separate clusters. 

Are others who need both a general KV store and the TS functionality running 
multiple clusters to handle this use case?

Thanks for the quick reply!


Damion

> On Oct 12, 2016, at 11:53 AM, John Daily  wrote:
> 
> We have not done any work to support the multi-backend, hence the error 
> you’re seeing. TS depends exclusively on leveldb.
> 
> We’re not recommending the use of KV functionality in the TS product yet, 
> because the latter is still changing rapidly and we will need to go back and 
> fix some basic KV mechanisms. We’re also not yet sure of the performance 
> characteristics if both KV and TS are in use under heavy load.
> 
> In short: I apologize, but we’re not really ready to support your use case 
> yet.
> 
> -John
> 
>> On Oct 12, 2016, at 12:17 PM, Junk, Damion A  wrote:
>> 
>> Hello all -
>> 
>> I was wondering if it is possible to use RiakTS with a multi-backend 
>> configuration. 
>> 
>> I have an existing set of applications using RiakKV and Bitcask, but we're 
>> now wanting to start using some of the TS features on a new project. Setting 
>> up the multi-backend configuration with TS seems to work fine (our app 
>> existing app reads the Bitcask buckets and SOLR indices without issue), and 
>> we can even create a TS schema. We have LevelDB as the default backend, so 
>> presumably this is what was used during the "create table" query.
>> 
>> When trying to query the TS table, we're seeing:
>> 
>> 2016-10-12 10:02:41.372 [error] <0.1410.0> gen_fsm <0.1410.0> in state 
>> active terminated with reason: call to undefined function 
>> riak_kv_multi_backend:range_scan/4 from riak_kv_vnode:list/7 line 1875
>> 
>> And then in the client code, no return until finally an Exception (Riak Java 
>> Client).
>> 
>> 
>> Thanks for any assistance!
>> 
>> 
>> Damion
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: RiakTS and Multi-Backend

2016-10-12 Thread John Daily
We have not done any work to support the multi-backend, hence the error you’re 
seeing. TS depends exclusively on leveldb.

We’re not recommending the use of KV functionality in the TS product yet, 
because the latter is still changing rapidly and we will need to go back and 
fix some basic KV mechanisms. We’re also not yet sure of the performance 
characteristics if both KV and TS are in use under heavy load.

In short: I apologize, but we’re not really ready to support your use case yet.

-John

> On Oct 12, 2016, at 12:17 PM, Junk, Damion A  wrote:
> 
> Hello all -
> 
> I was wondering if it is possible to use RiakTS with a multi-backend 
> configuration. 
> 
> I have an existing set of applications using RiakKV and Bitcask, but we're 
> now wanting to start using some of the TS features on a new project. Setting 
> up the multi-backend configuration with TS seems to work fine (our app 
> existing app reads the Bitcask buckets and SOLR indices without issue), and 
> we can even create a TS schema. We have LevelDB as the default backend, so 
> presumably this is what was used during the "create table" query.
> 
> When trying to query the TS table, we're seeing:
> 
> 2016-10-12 10:02:41.372 [error] <0.1410.0> gen_fsm <0.1410.0> in state active 
> terminated with reason: call to undefined function 
> riak_kv_multi_backend:range_scan/4 from riak_kv_vnode:list/7 line 1875
> 
> And then in the client code, no return until finally an Exception (Riak Java 
> Client).
> 
> 
> Thanks for any assistance!
> 
> 
> Damion
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: enotconn error

2016-10-12 Thread Luke Bakken
Travis -

What are the client failures you are seeing? What Riak client library
are you using, and are you using the PB or HTTP interface to Riak?

The error message you provided indicates that the ping request
returned from Riak after haproxy closed the socket for the request.
One cause would be very high server load causing timeouts.
--
Luke Bakken
Engineer
lbak...@basho.com


On Wed, Sep 14, 2016 at 7:23 AM, Travis Kirstine
 wrote:
> Hi all,
>
>
>
> I’m using haproxy with a riak 2.1.4 in a 5 node cluster.  I’m getting fairly
> consistent enotconn errors in riak which happen to coincide with client
> failures.  We’ve setup haproxy as recommended
> (https://gist.github.com/gburd/1507077) see below.  I’m running a leveldb
> backend with 9 GB max memory (I can go higher if needed).  I’m not sure at
> this point if I have a network issue or leveldb / riak issue.
>
>
>
> 2016-09-14 08:10:16 =CRASH REPORT
>
>   crasher:
>
> initial call: mochiweb_acceptor:init/3
>
> pid: <0.28104.442>
>
> registered_name: []
>
> exception error:
> {function_clause,[{webmachine_request,peer_from_peername,[{error,enotconn},{webmachine_request,{wm_reqstate,#Port<0.6539192>,[],undefined,undefined,undefined,{wm_reqdata,'GET',http,{1,0},"defined_in_wm_req_srv_init","defined_in_wm_req_srv_init",defined_on_call,defined_in_load_dispatch_data,"/ping","/ping",[],defined_in_load_dispatch_data,"defined_in_load_dispatch_data",500,1073741824,67108864,[],[],{0,nil},not_fetched_yet,false,{0,nil},<<>>,follow_request,undefined,undefined,[]},undefined,undefined,undefined}}],[{file,"src/webmachine_request.erl"},{line,150}]},{webmachine_request,get_peer,1,[{file,"src/webmachine_request.erl"},{line,124}]},{webmachine,new_request,2,[{file,"src/webmachine.erl"},{line,69}]},{webmachine_mochiweb,loop,2,[{file,"src/webmachine_mochiweb.erl"},{line,49}]},{mochiweb_http,headers,5,[{file,"src/mochiweb_http.erl"},{line,96}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}
>
> ancestors: ['http://192.168.18.64:8098_mochiweb',riak_api_sup,<0.319.0>]
>
> messages: []
>
> links: [<0.325.0>,#Port<0.6539192>]
>
> dictionary: []
>
> trap_exit: false
>
> status: running
>
> heap_size: 987
>
> stack_size: 27
>
> reductions: 963
>
>   neighbours:
>
>
>
>
>
> # haproxy.cfg
>
> global
>
> log 127.0.0.1 local2 info
>
> chroot  /var/lib/haproxy
>
> pidfile /var/run/haproxy.pid
>
> maxconn 256000
>
> userhaproxy
>
> group   haproxy
>
> spread-checks   5
>
> daemon
>
> quiet
>
> stats socket /var/lib/haproxy/stats
>
> defaults
>
> log global
>
> option  httplog
>
> option  dontlognull
>
> option  redispatch
>
> timeout connect 5000
>
> maxconn 256000
>
>
>
> frontend  main *:80
>
> modehttp
>
> acl url_static   path_beg   -i /static /images /javascript
> /stylesheets
>
> acl url_static   path_end   -i .jpg .gif .png .css .js
>
>
>
> backend static
>
> balance roundrobin
>
> server  static 127.0.0.1:4331 check
>
>
>
> backend app
>
> modehttp
>
> balance roundrobin
>
> server  wmts1riak 192.168.18.72:80 check
>
> server  wmts2riak 192.168.18.73:80 check
>
>
>
> backend riak_rest_backend
>
>mode   http
>
>balanceroundrobin
>
>option httpchk GET /ping
>
>option httplog
>
>server riak1 192.168.18.64:8098 weight 1 maxconn 1024  check
>
>server riak2 192.168.18.65:8098 weight 1 maxconn 1024  check
>
>server riak3 192.168.18.66:8098 weight 1 maxconn 1024  check
>
>server riak4 192.168.18.67:8098 weight 1 maxconn 1024  check
>
>server riak5 192.168.18.68:8098 weight 1 maxconn 1024  check
>
>
>
> frontend riak_rest
>
>bind   *:8098
>
>mode   http
>
>option contstats
>
>default_backendriak_rest_backend
>
>
>
> backend riak_protocol_buffer_backend
>
>balanceleastconn
>
>mode   tcp
>
>option tcpka
>
>option srvtcpka
>
>server riak1 192.168.18.64:8087 weight 1 maxconn 1024  check
>
>server riak2 192.168.18.65:8087 weight 1 maxconn 1024  check
>
>server riak3 192.168.18.66:8087 weight 1 maxconn 1024  check
>
>server riak4 192.168.18.67:8087 weight 1 maxconn 1024  check
>
>server riak5 192.168.18.68:8087 weight 1 maxconn 1024  check
>
>
>
> frontend riak_protocol_buffer
>
>bind   *:8087
>
>mode   tcp
>
>option tcplog
>
>option contstats
>
>mode   tcp
>
>option tcpka
>

Re: Erlang client map reduce?

2016-10-12 Thread Luke Bakken
Hi Brandon -

The riak_object module exports a type() function that will return the
bucket type of an object in Riak
(https://github.com/basho/riak_kv/blob/develop/src/riak_object.erl#L589-L592).

MapReduce docs:
http://docs.basho.com/riak/kv/2.1.4/developing/app-guide/advanced-mapreduce/

In addition, here is a repository containing some example map/reduce
code: https://github.com/basho/riak_function_contrib/wiki

Having said all that, your use case may be better suited to Riak
Search. MapReduce is best run on an intermittent basis due to the load
it places on a cluster. Is your query something that will frequently
be run or could the output of the query be saved in Riak on a daily
basis, for instance?
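
If the Riak Search route fits, a minimal sketch with the Riak Java Client is
below (purely illustrative and not specific to the Erlang client: the index
name "my_index" and the assumption that createTime is indexed as a numeric
Solr field named createTime_l are mine, not from the original post):

import com.basho.riak.client.api.RiakClient;
import com.basho.riak.client.api.commands.search.Search;
import com.basho.riak.client.core.operations.SearchOperation;

public class RecentDocsSearchExample {
    public static void main(String[] args) throws Exception {
        RiakClient client = RiakClient.newClient("127.0.0.1");
        try {
            // Assumes createTime is stored as seconds since the epoch.
            long cutoff = (System.currentTimeMillis() / 1000L) - 24 * 60 * 60;
            String query = "createTime_l:[" + cutoff + " TO *]";
            Search search = new Search.Builder("my_index", query).withNumRows(100).build();
            SearchOperation.Response response = client.execute(search);
            System.out.println("docs from the last 24h: " + response.getAllResults().size());
        } finally {
            client.shutdown();
        }
    }
}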

--
Luke Bakken
Engineer
lbak...@basho.com

On Thu, Sep 29, 2016 at 5:57 PM, Brandon Martin  wrote:
> So I am trying to figure out how to do a map reduce on a bucket type with
> the erlang client in erl. I didn’t see in the documentation how to do a map
> reduce with a bucket type. I have the bucket type and the bucket. I want to
> map reduce to basically filter out any documents whose createTime(which is
> just int/number) is less then 24 hours and return those. I have only been
> using riak for a few weeks and erlang for about a day. Any pointers or help
> would be appreciated.
>
> Thanks
>
> --
> Brandon Martin

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak KV evolution

2016-10-04 Thread Ricardo Mayerhofer
Thanks for your response Charlie! Riak KV is a great product, but as any
other product it must evolve to keep ahead of its competitors.

Glad to hear that improvements are coming :)

On Tue, Oct 4, 2016 at 10:18 AM, Charlie Voiselle 
wrote:

> Ricardo:
>
> Thank you for your question, and I want to reassure you that Riak KV is
> still very much under active development. Work that is being done in the
> Riak TS codebase is being used to improve Riak KV where it applies. Riak KV
> 2.2 is coming soon and will include these new features:
>
>- Global Object TTL in eLevelDB
>- LZ4 Compression in eLevelDB
>- Debian 8 and Ubuntu 16 support
>- Switches to disable certain operations in Riak (Key Listing, 2i,
>Search)
>- HyperLogLog Data Type
>
> We have also been working to enhance performance and pay-down technical
> debt in the product. Some examples of this are:
>
>- Inconsistent hashing of equivalent objects with AAE
>- Riak Search performance optimizations
>- Upgrading Riak Search's internal Solr version to 4.10.4
>
> As to the future, while we don't provide public roadmaps, some of the
> themes coming in Riak will be:
>
>- Advanced data lifecycle management
>- Enhancements to Multi-data Center Replication
>
> I sincerely appreciate your feedback.
>
> Regards,
> Charlie Voiselle
> Product Manager, Riak KV and Clients
>
>
> On Oct 3, 2016, at 1:34 PM, Ricardo Mayerhofer 
> wrote:
>
> Hi everyone,
> Despite seeing new components being added to Basho data solutions, which
> is great (e.g. Riak TS) I wonder about Riak KV evolution.
>
> Since 2013 there's no major release of Riak KV and not many features were
> added since then. Is Riak KV still a priority to Basho?
>
> Some features that I miss:
>
>- TTL per bucket or per key,
>- Secondary index in Bitcask or TTL in LevelDB.
>- Be able to drop an entire bucket.
>
> Is there a Roadmap of Riak KV or a list of features coming?
>
> --
> Ricardo Mayerhofer
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
>


-- 
Ricardo Mayerhofer
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak KV evolution

2016-10-04 Thread Charlie Voiselle
Ricardo:

Thank you for your question, and I want to reassure you that Riak KV is still 
very much under active development. Work that is being done in the Riak TS 
codebase is being used to improve Riak KV where it applies. Riak KV 2.2 is 
coming soon and will include these new features:

- Global Object TTL in eLevelDB
- LZ4 Compression in eLevelDB
- Debian 8 and Ubuntu 16 support
- Switches to disable certain operations in Riak (Key Listing, 2i, Search)
- HyperLogLog Data Type

We have also been working to enhance performance and pay-down technical debt in
the product. Some examples of this are:

- Inconsistent hashing of equivalent objects with AAE
- Riak Search performance optimizations
- Upgrading Riak Search's internal Solr version to 4.10.4

As to the future, while we don't provide public roadmaps, some of the themes
coming in Riak will be:

- Advanced data lifecycle management
- Enhancements to Multi-data Center Replication

I sincerely appreciate your feedback.

Regards,
Charlie Voiselle
Product Manager, Riak KV and Clients



> On Oct 3, 2016, at 1:34 PM, Ricardo Mayerhofer  wrote:
> 
> Hi everyone,
> Despite seeing new components being added to Basho data solutions, which is 
> great (e.g. Riak TS) I wonder about Riak KV evolution.
> 
> Since 2013 there's no major release of Riak KV and not many features were 
> added since then. Is Riak KV still a priority to Basho?
> 
> Some features that I miss:
> - TTL per bucket or per key,
> - Secondary index in Bitcask or TTL in LevelDB.
> - Be able to drop an entire bucket.
> 
> Is there a Roadmap of Riak KV or a list of features coming?
> 
> -- 
> Ricardo Mayerhofer
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: High number of Riak buckets

2016-09-30 Thread Vikram Lalit
Hiya Alexander,

Thanks much indeed for the detailed note... very interesting insights...

As you deduced, I actually omitted some pieces from my email for the sake
of simplicity. I'm actually leveraging a transient / stateless chat server
(ejabberd) wherein messages get delivered on live sessions / streams
without the client having to do look-ups. So the storage in Riak is
actually a post-facto delivery / archival rather than prior to the client
receiving them. Hence determining the time key for the look-up isn't going
to be an issue unless I run some analytics where I query all keys (which
would be an issue as I now understand from your comments).

There is of course the question of offline messages whose delivery would
depend on look-ups, but ejabberd there uses the username (the offline
storage is with the secondary index as well on leveldb) and hence the
timestamp not being important. Riak TS sure looks promising there but I'll
check further whether the change would be justified for only offline
messages, or in case other use cases crop up...
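
(For reference, that kind of username-based secondary-index look-up with the
Erlang client is roughly the sketch below; the bucket name and the 2i field
are placeholders for illustration, not the actual ejabberd schema:)

    {ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
    %% Fetch the keys of all offline messages stored for one user via 2i.
    {ok, Results} = riakc_pb_socket:get_index_eq(Pid,
                        <<"offline_messages">>,        %% bucket name (assumed)
                        {binary_index, "username"},    %% 2i field (assumed)
                        <<"alice">>),
    %% Results is an index_results record whose keys field lists the matches.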

Makes sense on the listing all keys in a bucket being expensive though -
let me see how I can model my data for that!!!

Thanks again for your inputs... very informative...

Cheers.
Vikram


On Fri, Sep 30, 2016 at 12:23 PM, Alexander Sicular 
wrote:

> Hi Vikram,
>
> Bucket maximums aside, why are you modeling in this fashion? How will you
> retrieve individual keys if you don't know the time stamp in advance? Do
> you have a lookup somewhere else? Doable as lookup keys or crdts or other
> systems. Are you relying on listing all keys in a bucket? Definitely don't
> do that.
>
> Yes, there is a better way. Use Riak TS. Create a table with a composite
> primary key of topic and time. You can then retrieve by topic equality and
> time range. You can then cache those results in deterministic keys as
> necessary.
>
> If you don't already know, Riak TS is basically (there are some notable
> differences) Riak KV plus the time series data model. Riak TS makes all
> sorts of time series oriented projects easier than modeling them against
> KV. Oh, and you can also leverage KV buckets alongside TS (resource
> limitations not withstanding.)
>
> Would love to hear more,
> Alexander
>
> @siculars
> http://siculars.posthaven.com
>
> Sent from my iRotaryPhone
>
> > On Sep 29, 2016, at 19:42, Vikram Lalit  wrote:
> >
> > Hi - I am creating a messaging platform wherein am modeling each topic
> to serve as a separate bucket. That means there can potentially be millions
> of buckets, with each message from a user becoming a value on a distinct
> timestamp key.
> >
> > My question is there any downside to modeling my data in such a manner?
> Or can folks advise a better way of storing the same in Riak?
> >
> > Secondly, I would like to modify the default bucket properties (n_val) -
> I understand that such 'custom' buckets have a higher performance overhead
> due to the extra load on the gossip protocol. Is there a way the default
> n_val of newly created buckets be changed so that even if I have the above
> said high number of buckets, there is no performance degrade? Believe there
> was such a config allowed in app.config but not sure that file is leveraged
> any more after riak.conf was introduced.
> >
> > Thanks much.
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: High number of Riak buckets

2016-09-30 Thread Vikram Lalit
Hi Luke - many thanks... actually I was planning to have different bucket
types have a different n_val. Or I might end up doing so... the thinking
being that I intend to start my production workloads with fewer
replications, but as the system matures / stabilizes (and also increases in
userbase!), I would want to increase n_val.

In my testing that I had done a few weeks ago, each time I tried to
increase the n_val of an existing bucket, I've found conflicting results
(prior question here:
http://lists.basho.com/pipermail/riak-users_lists.basho.com/2016-July/018631.html)
- perhaps due to read-repair taking time - not sure. Understood though from
various Riak papers that decreasing n_val should not be done, but couldn't
conclude yet as to why would increasing be an issue...

So to avoid the scenario, I've been thinking that as the system criticality
increases, I would create a new bucket (with a higher n_val) and then start
pushing newer conversations on to that bucket. Still not sure how this
would behave, but let me test further with bucket types as you suggest...

Do let know please if there's something glaring I'm missing as am trying to
clarify the thought-process to myself as well!!!

Cheers.

On Fri, Sep 30, 2016 at 12:07 PM, Luke Bakken  wrote:

> Hi Vikram,
>
> If all of your buckets use the same bucket type with your custom
> n_val, there won't be a performance issue. Just be sure to set n_val
> on the bucket type, and that all buckets are part of that bucket type.
>
> http://docs.basho.com/riak/kv/2.1.4/developing/usage/bucket-types/
>
> --
> Luke Bakken
> Engineer
> lbak...@basho.com
>
> On Thu, Sep 29, 2016 at 4:42 PM, Vikram Lalit 
> wrote:
> > Hi - I am creating a messaging platform wherein am modeling each topic to
> > serve as a separate bucket. That means there can potentially be millions
> of
> > buckets, with each message from a user becoming a value on a distinct
> > timestamp key.
> >
> > My question is there any downside to modeling my data in such a manner?
> Or
> > can folks advise a better way of storing the same in Riak?
> >
> > Secondly, I would like to modify the default bucket properties (n_val) -
> I
> > understand that such 'custom' buckets have a higher performance overhead
> due
> > to the extra load on the gossip protocol. Is there a way the default
> n_val
> > of newly created buckets be changed so that even if I have the above said
> > high number of buckets, there is no performance degrade? Believe there
> was
> > such a config allowed in app.config but not sure that file is leveraged
> any
> > more after riak.conf was introduced.
> >
> > Thanks much.
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: High number of Riak buckets

2016-09-30 Thread Alexander Sicular
Hi Vikram,

Bucket maximums aside, why are you modeling in this fashion? How will you 
retrieve individual keys if you don't know the time stamp in advance? Do you 
have a lookup somewhere else? Doable as lookup keys or crdts or other systems. 
Are you relying on listing all keys in a bucket? Definitely don't do that.  

Yes, there is a better way. Use Riak TS. Create a table with a composite 
primary key of topic and time. You can then retrieve by topic equality and time 
range. You can then cache those results in deterministic keys as necessary. 
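
As a rough sketch (the table name, fields and quantum below are made up for
illustration, and the exact client calls may differ between TS releases), the
data model and the topic-plus-time-range query could look like this with the
Erlang client:

    %% Table DDL, created once via riak-admin or (on newer TS releases) a
    %% CREATE TABLE query:
    %%
    %%   CREATE TABLE messages (
    %%       topic  VARCHAR   NOT NULL,
    %%       time   TIMESTAMP NOT NULL,
    %%       body   VARCHAR,
    %%       PRIMARY KEY ((topic, QUANTUM(time, 1, 'd')), topic, time)
    %%   )
    {ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
    ok = riakc_ts:put(Pid, "messages",
                      [[<<"chat-room-42">>, 1474290000000, <<"hello">>]]),
    %% Returns the matching column names and rows for one topic and time range.
    QueryResult = riakc_ts:query(Pid,
        "SELECT * FROM messages "
        "WHERE topic = 'chat-room-42' "
        "AND time > 1474200000000 AND time < 1474300000000").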

If you don't already know, Riak TS is basically (there are some notable 
differences) Riak KV plus the time series data model. Riak TS makes all sorts 
of time series oriented projects easier than modeling them against KV. Oh, and 
you can also leverage KV buckets alongside TS (resource limitations not 
withstanding.)

Would love to hear more,
Alexander 

@siculars
http://siculars.posthaven.com

Sent from my iRotaryPhone

> On Sep 29, 2016, at 19:42, Vikram Lalit  wrote:
> 
> Hi - I am creating a messaging platform wherein am modeling each topic to 
> serve as a separate bucket. That means there can potentially be millions of 
> buckets, with each message from a user becoming a value on a distinct 
> timestamp key.
> 
> My question is there any downside to modeling my data in such a manner? Or 
> can folks advise a better way of storing the same in Riak?
> 
> Secondly, I would like to modify the default bucket properties (n_val) - I 
> understand that such 'custom' buckets have a higher performance overhead due 
> to the extra load on the gossip protocol. Is there a way the default n_val of 
> newly created buckets be changed so that even if I have the above said high 
> number of buckets, there is no performance degrade? Believe there was such a 
> config allowed in app.config but not sure that file is leveraged any more 
> after riak.conf was introduced.
> 
> Thanks much.
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: High number of Riak buckets

2016-09-30 Thread Luke Bakken
Hi Vikram,

If all of your buckets use the same bucket type with your custom
n_val, there won't be a performance issue. Just be sure to set n_val
on the bucket type, and that all buckets are part of that bucket type.

http://docs.basho.com/riak/kv/2.1.4/developing/usage/bucket-types/
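
As a minimal sketch (the type name "msgs" and the key scheme are just
placeholders), the type is created and activated once, after which every
bucket addressed under it inherits the custom n_val:

    %% One-time setup from any node in the cluster:
    %%
    %%   riak-admin bucket-type create msgs '{"props":{"n_val":3}}'
    %%   riak-admin bucket-type activate msgs
    %%
    %% From the Erlang client a typed bucket is just a {Type, Bucket} pair:
    {ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
    Obj = riakc_obj:new({<<"msgs">>, <<"topic-123">>},   %% {bucket type, bucket}
                        <<"1474290000-alice">>,          %% per-message key (assumed)
                        <<"hello">>),
    ok = riakc_pb_socket:put(Pid, Obj).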

--
Luke Bakken
Engineer
lbak...@basho.com

On Thu, Sep 29, 2016 at 4:42 PM, Vikram Lalit  wrote:
> Hi - I am creating a messaging platform wherein am modeling each topic to
> serve as a separate bucket. That means there can potentially be millions of
> buckets, with each message from a user becoming a value on a distinct
> timestamp key.
>
> My question is there any downside to modeling my data in such a manner? Or
> can folks advise a better way of storing the same in Riak?
>
> Secondly, I would like to modify the default bucket properties (n_val) - I
> understand that such 'custom' buckets have a higher performance overhead due
> to the extra load on the gossip protocol. Is there a way the default n_val
> of newly created buckets be changed so that even if I have the above said
> high number of buckets, there is no performance degrade? Believe there was
> such a config allowed in app.config but not sure that file is leveraged any
> more after riak.conf was introduced.
>
> Thanks much.

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Delete bucket type

2016-09-29 Thread Magnus Kessler
On 28 September 2016 at 17:59, Nguyen, Kyle  wrote:

> Thank you for your quick reply, Magnus! We’re considering using bucket
> type to support multi-tenancy in our system. Hence, all objects stored
> within the namespace of the bucket type and bucket type need to be removed
> once the client has decided to opt-out.
>
>
>
> Thanks
>
>
>
> -Kyle-
>
>
>

Hi Kyle,

Riak uses bucket-types and buckets primarily as a name space. In a default
configuration, the bucket type and bucket names are hashed together with
the key, and this hash determines the location of a given object on the
ring. This concept is known as consistent hashing and ensures that there
are no hot spots and that data is evenly distributed across the partitions.

Riak does *NOT* keep data stored under different bucket types or buckets
separate from each other. Therefore, in order to delete all data stored
under a given bucket type or bucket, it is necessary to delete each
matching object individually. Furthermore, Riak does not keep an index of
objects stored in a bucket type or bucket.

It is possible to obtain lists of these objects via key-listing or
mapreduce. However, these are very expensive operations in Riak and should
be avoided in a high availability production cluster.
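
As an illustration only, such a clean-up pass with the Erlang client could be
sketched as below; note again that list_keys touches every key in the cluster,
so it should be run rarely and outside peak hours (the bucket type and bucket
names are placeholders):

    {ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
    TypedBucket = {<<"tenant_a">>, <<"documents">>},
    {ok, Keys} = riakc_pb_socket:list_keys(Pid, TypedBucket),
    lists:foreach(fun(Key) ->
                      ok = riakc_pb_socket:delete(Pid, TypedBucket, Key)
                  end, Keys).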

You should also be aware of how deletion in Riak works. When an object is
deleted, a tombstone is placed into the database (effectively an empty
object with some additional metadata). Tombstones are eventually reaped
during merging (bitcask) or compaction (leveldb) phases of the backend
storage. However, there is no guarantee when this actually happens, and in
particular with leveldb it can take a long time for any particular object
to be actually deleted from disk.

Please let me know if you have any additional questions regarding this
topic.

Kind Regards,

Magnus


-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


RE: Delete bucket type

2016-09-28 Thread Nguyen, Kyle
Thank you for your quick reply, Magnus! We’re considering using bucket type to 
support multi-tenancy in our system. Hence, all objects stored within the 
namespace of the bucket type, and the bucket type itself, need to be removed once the client 
has decided to opt-out.

Thanks

-Kyle-

From: Magnus Kessler [mailto:mkess...@basho.com]
Sent: Wednesday, September 28, 2016 2:13 AM
To: Nguyen, Kyle
Cc: Riak Users
Subject: Re: Delete bucket type

On 27 September 2016 at 20:50, Nguyen, Kyle 
<kyle.ngu...@philips.com> wrote:
Hi all,

Is deleting bucket type possible in version 2.1.4? If not, is there any 
workaround or available script/code that we can do this in a production 
environment without too much performance impact?

Thanks

-Kyle-


Hi Kyle,

There is currently no option to delete bucket types once they have been 
created. Are you trying to delete a bucket type and any objects stored within 
the namespace of the bucket type, or just remove a previously configured bucket 
type?

In Riak, bucket types have very little operational overhead. They get stored in 
the cluster meta data and take up a small amount of disk space there. Bucket 
types are not gossiped around the ring on a regular basis, though, and 
therefore have not got the negative impact a large number of custom buckets 
would have. With Riak-2.x we generally recommend storing any configuration in 
bucket types, rather than creating custom buckets, even if there is only one 
bucket using that specific configuration.


Kind Regards,

Magnus




--
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431


The information contained in this message may be confidential and legally 
protected under applicable law. The message is intended solely for the 
addressee(s). If you are not the intended recipient, you are hereby notified 
that any use, forwarding, dissemination, or reproduction of this message is 
strictly prohibited and may be unlawful. If you are not the intended recipient, 
please contact the sender by return e-mail and destroy all copies of the 
original message.
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Delete bucket type

2016-09-28 Thread Magnus Kessler
On 27 September 2016 at 20:50, Nguyen, Kyle  wrote:

> Hi all,
>
>
>
> Is deleting bucket type possible in version 2.1.4? If not, is there any
> workaround or available script/code that we can do this in a production
> environment without too much performance impact?
>
>
>
> Thanks
>
>
>
> -Kyle-
>
>
>

Hi Kyle,

There is currently no option to delete bucket types once they have been
created. Are you trying to delete a bucket type and any objects stored
within the namespace of the bucket type, or just remove a previously
configured bucket type?

In Riak, bucket types have very little operational overhead. They get
stored in the cluster meta data and take up a small amount of disk space
there. Bucket types are not gossiped around the ring on a regular basis,
though, and therefore have not got the negative impact a large number of
custom buckets would have. With Riak-2.x we generally recommend storing any
configuration in bucket types, rather than creating custom buckets, even if
there is only one bucket using that specific configuration.


Kind Regards,

Magnus




-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Seeking Backup solution for live nodes

2016-09-26 Thread Outback Dingo
On Mon, Sep 26, 2016 at 10:13 AM, Matthew Von-Maszewski
 wrote:
> Neither.
>
> The leveldb instance creates a snapshot of the current files and generates a
> working MANIFEST to go with them.  That means the snapshot is in “ready to
> run” condition.  This is based upon hard links for the .sst table files.
>
> The user can then choose to copy that snapshot elsewhere, point a backup
> tool at it, and/or just leave it there for the five snapshot rotation.
>
> One line cron job can kick off the snapshot.
>
> Similar to using mySql hot backup.  You have the system generate the backup
> data, then you decide how to manage / store it.
>
> Matthew
>
>
> On Sep 26, 2016, at 10:07 AM, DeadZen  wrote:
>
> there a backup tool that uses this yet? or is this meant more to be used
> with snapshots provided through xfs/zfs?
>
> On Monday, September 26, 2016, Matthew Von-Maszewski 
> wrote:
>>
>> Here are notes on the new hot backup:
>>
>> https://github.com/basho/leveldb/wiki/mv-hot-backup
>>
>> This sound like what you need?
>>
>> Matthew
>>
>> Sent from my iPad
>>
>> > On Sep 26, 2016, at 5:39 AM, Niels Christian Sorensen
>> >  wrote:
>> >
>> > Hi,
>> >
>> > We use Riak-kv Enterprise Edition as base for Riak CS to store files in.
>> > Each customer has a separate bucket in the cluster(s) and all data is 
>> > stored
>> > multi site in 3 copies. Thus the "i lost a node" situation is fully 
>> > covered.
>> >
>> > I need however, a solution for providing customers with a "single
>> > instance" backup of their data.
>> >
>> > I am aware of the possibility of tar, cp, scp, what-ever-copy of the
>> > data - but this require me to take system off-line according to this:
>> >
>> > https://docs.basho.com/riak/kv/2.1.4/using/cluster-operations/backing-up/
>> >
>> > Also I would still have to restore all customers data (and all involved
>> > nodes) - this is not trivial!
>> >
>> > Also I realize that the "riak-admin backup" is deprecated and should be
>> > avoided - It seemed like an easy solution to my problem but
>> >
>> > The s3cmd will allow me to pull out the data and I could most likely
>> > write a fantastic automatic script based system to use this - I do not have
>> > the time for that and are therefor looking for a commercial or "pre-build -
>> > adjustable" solution that will allow me to pull out a single copy of all
>> > data stored in a bucket and keep it elsewhere.
>> >
>> > Any ideas / solutions / quotes (as external consultant) on a solution to
>> > this problem?
>> >
>> > A plan for recovery is obviously also needed ;-)
>> >

also note that there are S3 capable backup tools such as restic and
rclone .. maybe simpler to have clients use them. I'd steer away
from s3cmd.



>> > Thanks in advance
>> >
>> > /Christian
>> >
>> > ___
>> > riak-users mailing list
>> > riak-users@lists.basho.com
>> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Seeking Backup solution for live nodes

2016-09-26 Thread Matthew Von-Maszewski
Neither.

The leveldb instance creates a snapshot of the current files and generates a 
working MANIFEST to go with them.  That means the snapshot is in “ready to run” 
condition.  This is based upon hard links for the .sst table files.

The user can then choose to copy that snapshot elsewhere, point a backup tool 
at it, and/or just leave it there for the five snapshot rotation.

One line cron job can kick off the snapshot.

Similar to using mySql hot backup.  You have the system generate the backup 
data, then you decide how to manage / store it.

Matthew


> On Sep 26, 2016, at 10:07 AM, DeadZen  wrote:
> 
> there a backup tool that uses this yet? or is this meant more to be used with 
> snapshots provided through xfs/zfs?
> 
> On Monday, September 26, 2016, Matthew Von-Maszewski  > wrote:
> Here are notes on the new hot backup:
> 
> https://github.com/basho/leveldb/wiki/mv-hot-backup 
> 
> 
> This sound like what you need?
> 
> Matthew
> 
> Sent from my iPad
> 
> > On Sep 26, 2016, at 5:39 AM, Niels Christian Sorensen 
> > > wrote:
> >
> > Hi,
> >
> > We use Riak-kv Enterprise Edition as base for Riak CS to store files in. 
> > Each customer has a separate bucket in the cluster(s) and all data is 
> > stored multi site in 3 copies. Thus the "i lost a node" situation is fully 
> > covered.
> >
> > I need however, a solution for providing customers with a "single instance" 
> > backup of their data.
> >
> > I am aware of the possibility of tar, cp, scp, what-ever-copy of the data - 
> > but this require me to take system off-line according to this:
> > https://docs.basho.com/riak/kv/2.1.4/using/cluster-operations/backing-up/ 
> > 
> >
> > Also I would still have to restore all customers data (and all involved 
> > nodes) - this is not trivial!
> >
> > Also I realize that the "riak-admin backup" is deprecated and should be 
> > avoided - It seemed like an easy solution to my problem but
> >
> > The s3cmd will allow me to pull out the data and I could most likely write 
> > a fantastic automatic script based system to use this - I do not have the 
> > time for that and are therefor looking for a commercial or "pre-build - 
> > adjustable" solution that will allow me to pull out a single copy of all 
> > data stored in a bucket and keep it elsewhere.
> >
> > Any ideas / solutions / quotes (as external consultant) on a solution to 
> > this problem?
> >
> > A plan for recovery is obviously also needed ;-)
> >
> > Thanks in advance
> >
> > /Christian
> >
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com 
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com 
> > 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com 
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com 
> 

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Seeking Backup solution for live nodes

2016-09-26 Thread DeadZen
there a backup tool that uses this yet? or is this meant more to be used
with snapshots provided through xfs/zfs?

On Monday, September 26, 2016, Matthew Von-Maszewski 
wrote:

> Here are notes on the new hot backup:
>
> https://github.com/basho/leveldb/wiki/mv-hot-backup
>
> This sound like what you need?
>
> Matthew
>
> Sent from my iPad
>
> > On Sep 26, 2016, at 5:39 AM, Niels Christian Sorensen <
> n...@corporateencryption.com > wrote:
> >
> > Hi,
> >
> > We use Riak-kv Enterprise Edition as base for Riak CS to store files in.
> Each customer has a separate bucket in the cluster(s) and all data is
> stored multi site in 3 copies. Thus the "i lost a node" situation is fully
> covered.
> >
> > I need however, a solution for providing customers with a "single
> instance" backup of their data.
> >
> > I am aware of the possibility of tar, cp, scp, what-ever-copy of the
> data - but this require me to take system off-line according to this:
> > https://docs.basho.com/riak/kv/2.1.4/using/cluster-
> operations/backing-up/
> >
> > Also I would still have to restore all customers data (and all involved
> nodes) - this is not trivial!
> >
> > Also I realize that the "riak-admin backup" is deprecated and should be
> avoided - It seemed like an easy solution to my problem but
> >
> > The s3cmd will allow me to pull out the data and I could most likely
> write a fantastic automatic script based system to use this - I do not have
> the time for that and are therefor looking for a commercial or "pre-build -
> adjustable" solution that will allow me to pull out a single copy of all
> data stored in a bucket and keep it elsewhere.
> >
> > Any ideas / solutions / quotes (as external consultant) on a solution to
> this problem?
> >
> > A plan for recovery is obviously also needed ;-)
> >
> > Thanks in advance
> >
> > /Christian
> >
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com 
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com 
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Operational nightmare fun: dealing with misconfigured Riak Clusters - blog post

2016-09-26 Thread DeadZen
nice post, not a big fan of the proxy design.

On Monday, September 26, 2016, Andra Dinu 
wrote:

> Hi,
>
> This post is a story about investigating a struggling Riak cluster,
> finding out why Riak's usual self-healing processes got stuck, and how our
> operations and maintenances tool WombatOAM can help with the struggle:
> https://www.erlang-solutions.com/blog/operational-nightmare-fun-
> dealing-with-misconfigured-riak-clusters.html
>
> Thanks,
> Andra
>
> *Andra Dinu*
> Community  & Social
>
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Seeking Backup solution for live nodes

2016-09-26 Thread Matthew Von-Maszewski
Here are notes on the new hot backup:

https://github.com/basho/leveldb/wiki/mv-hot-backup

This sound like what you need?

Matthew

Sent from my iPad

> On Sep 26, 2016, at 5:39 AM, Niels Christian Sorensen 
>  wrote:
> 
> Hi,
> 
> We use Riak-kv Enterprise Edition as base for Riak CS to store files in. Each 
> customer has a separate bucket in the cluster(s) and all data is stored multi 
> site in 3 copies. Thus the "i lost a node" situation is fully covered.
> 
> I need however, a solution for providing customers with a "single instance" 
> backup of their data.
> 
> I am aware of the possibility of tar, cp, scp, what-ever-copy of the data - 
> > but this requires me to take the system off-line according to this:
> https://docs.basho.com/riak/kv/2.1.4/using/cluster-operations/backing-up/
> 
> Also I would still have to restore all customers data (and all involved 
> nodes) - this is not trivial!
> 
> Also I realize that the "riak-admin backup" is deprecated and should be 
> avoided - It seemed like an easy solution to my problem but
> 
> The s3cmd will allow me to pull out the data and I could most likely write a 
> fantastic automatic script based system to use this - I do not have the time 
> > for that and am therefore looking for a commercial or "pre-build - 
> adjustable" solution that will allow me to pull out a single copy of all data 
> stored in a bucket and keep it elsewhere.
> 
> Any ideas / solutions / quotes (as external consultant) on a solution to this 
> problem?
> 
> A plan for recovery is obviously also needed ;-)
> 
> Thanks in advance
> 
> /Christian
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Using cursorMark with '_yz_rk'

2016-09-21 Thread Fred Dushin
Okay, I probably spoke too soon.

While Solr 4.7 supports cursor marks, we do have an issue in Riak (or Yokozuna) 
whereby it is actually impractical to use cursor marks for query.  The problem 
is that while Yokozuna uses coverage plans generate a filter query that will 
guarantee that we get no replicas in a result set, these coverage plans change 
every few seconds, in order to ensure we are not constantly querying a subset 
of the cluster (thus possibly creating hot zones in the cluster, especially for 
query-heavy work loads).

Theoretically you could change the interval by which these coverage plans are 
updated (by setting the yokozuna cover_tick configuration setting in 
advanced.config [1]), which would be okay in a development or test environment, 
but which would be unsuitable in production.
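
For reference, such an override lives in the yokozuna section of
advanced.config; the value below and its unit (assumed to be milliseconds)
are illustrative only, so check yz_cover.erl for the authoritative default
before relying on it:

    {yokozuna, [
        %% How often Yokozuna refreshes its query coverage plan (assumed ms).
        {cover_tick, 60000}
    ]}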

The solution is to pin a query to a coverage plan, so that subsequent 
iterations of the query with the next cursor will use the same filter, and 
hence will give you proper result sets.  We do not currently have this 
implemented in Yokozuna.

-Fred

[1] https://github.com/basho/yokozuna/blob/2.0.4/src/yz_cover.erl#L285

> On Sep 21, 2016, at 10:40 AM, Guillaume Boddaert 
>  wrote:
> 
> I'm very curious of your cursorMark implementation, I'm in deep need of that 
> feature.
> 
> From my experience I wasn't even able to trigger a query with my riak version 
> as it was not yet supported by the Solr bundled with it. But I might missed a 
> point with that.
> 
> I'm using 2.1.2.
> 
> Guillaume
> 
> On 21/09/2016 03:28, Vipin Sharma wrote:
>> Hi all,
>>  
>> In our system we have default implementation of querying the data from riak 
>> using “pagination”.
>> For some of the queries, with huge number of resulting records (into the 
>> tunes of 10,000+) , it is becoming an issue and hence we wanted to change it 
>> to use “cursorMark” as suggested here : 
>> https://cwiki.apache.org/confluence/display/solr/Pagination+of+Results 
>> 
>>  
>> While using cursorMark, 
>> -  It asks for unique key in the sort field. We didn’t have a unique 
>> key of our own so wanted to use “_yz_rk”  but It gives error mentioned below.
>> -  Query is accepted when sort parameter is changed to use “_yz_id” 
>> instead but  It gives redundant / duplicate records. It is probably a known 
>> issue as mentioned here 
>>  ( Pagination 
>> Warning). Solution recommended is to use { _yz_rt asc, _yz_rb asc, _yz_rk 
>> asc } instead but for each of them query is returning the following error :
>>  
>> "error":{"msg":"Cursor functionality requires a sort 
>> containing a uniqueKey field tie breaker","code":400}
>>  
>> Can somebody please share some suggestions on this.
>>  
>> Thanks
>> Vipin 
>>  
>>  
>>  
>>  
>> 
>> 
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com 
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com 
>> 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com 
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com 
> 

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Solr search performance

2016-09-21 Thread sean mcevoy
Hi Fred,

Thanks for the pointer! 'cursorMark' is a lot more performant alright,
though apparently it doesn't suit our use case.

I've written a loop function using OTP's httpc that reads each page, gets
the cursorMark and repeats, and it returns all 147 pages with consistent
times in the 40-60ms bracket which is an excellent improvement!
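
For anyone curious, a bare-bones version of that loop looks roughly like the
sketch below. The index name and query are placeholders, and it assumes the
jsx JSON library is on the code path and that the node's HTTP listener is on
localhost:8098; note the sort includes _yz_id, since cursor marks require the
uniqueKey as a tie-breaker:

    -module(cursor_walk_example).
    -export([run/0]).

    run() ->
        ok = application:ensure_started(inets),
        walk(<<"*">>, 0).

    walk(Cursor, Count) ->
        Url = "http://127.0.0.1:8098/search/query/member_idx?wt=json"
              "&q=" ++ http_uri:encode("status_s:active")
              ++ "&sort=" ++ http_uri:encode("first_name_s asc, _yz_id asc")
              ++ "&rows=100&cursorMark=" ++ http_uri:encode(binary_to_list(Cursor)),
        {ok, {{_, 200, _}, _Hdrs, Body}} = httpc:request(get, {Url, []}, [], []),
        Decoded = jsx:decode(list_to_binary(Body), [return_maps]),
        Docs = maps:get(<<"docs">>, maps:get(<<"response">>, Decoded)),
        Next = maps:get(<<"nextCursorMark">>, Decoded),
        case Docs =:= [] orelse Next =:= Cursor of
            true  -> Count + length(Docs);            %% last page reached
            false -> walk(Next, Count + length(Docs))
        end.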

I would have been asking about the effort involved in making the protocol
buffers client support this, but instead our GUI guys insist that they need
to request a page number as sometimes they want to start in the middle of a
set of data.

So I'm almost back to square one.
Can you shed any light on the internal workings of SOLR that produce the
slow-down in my original question?
I'm hoping I can find a way to restructure my index data without having to
change the higher-level API's that I support.

Cheers,
//Sean.


On Mon, Sep 19, 2016 at 10:00 PM, Fred Dushin  wrote:

> All great questions, Sean.
>
> A few things.  First off, for result sets that are that large, you are
> probably going to want to use Solr cursor marks [1], which are supported in
> the current version of Solr we ship.  Riak allows queries using cursor
> marks through the HTTP interface.  At present, it does not support cursors
> using the protobuf API, due to some internal limitations of the server-side
> protobuf library, but we do hope to fix that in the future.
>
> Secondly, we have found sorting with distributed queries to be far more
> performant using Solr 4.10.4.  Currently released versions of Riak use Solr
> 4.7, but as you can see on github [2], Solr 4.10.4 support has been merged
> into the develop-2.2 branch, and is in the pipeline for release.  I can't
> say when the next version of Riak is that will ship with this version
> because of indeterminacy around bug triage, but it should not be too long.
>
> I would start to look at using cursor marks and measure their relative
> performance in your scenario.  My guess is that you should see some
> improvement there.
>
> -Fred
>
> [1] https://cwiki.apache.org/confluence/display/solr/Pagination+of+Results
> [2] https://github.com/basho/yokozuna/commit/
> f64e19cef107d982082f5b95ed598da96fb419b0
>
>
> > On Sep 19, 2016, at 4:48 PM, sean mcevoy  wrote:
> >
> > Hi All,
> >
> > We have an index with ~548,000 entries, ~14,000 of which match one of
> our queries.
> > We read these in a paginated search and the first page (of 100 hits)
> returns quickly in ~70ms.
> > This response time seems to increase exponentially as we walk through
> the pages:
> > the 4th page takes ~200ms,
> > the 8th page takes ~1200ms
> > the 12th page takes ~2100ms
> > the 16th page takes ~6100ms
> > the 20th page takes ~24000ms
> >
> > And by the time we're searching for the 22nd page it regularly times out
> at the default 60 seconds.
> >
> > I have a good understanding of riak KV internals but absolutely nothing
> of Lucene which I think is what's most relevant here. If anyone in the know
> can point me towards any relevant resource or can explain what's happening
> I'd be much obliged :-)
> > As I would also be if anyone with experience of using Riak/Lucene can
> tell me:
> > - Is 500K a crazy number of entries to put into one index?
> > - Is 14K a crazy number of entries to expect to be returned?
> > - Are there any methods we can use to make the search time more constant
> across the full search?
> > I read one blog post on inlining but it was a bit old & not very obvious
> how to implement using riakc_pb_socket calls.
> >
> > And out of curiosity, do we not traverse the full range of hits for each
> page? I naively thought that because I'm sorting the returned values we'd
> have to get them all first and then sort, but the response times suggests
> otherwise. Does Lucene store the data sorted by each field just in case a
> query asks for it? Or what other magic is going on?
> >
> >
> > For the technical details, we use the "_yz_default" schema and all the
> fields stored are strings:
> > - entry_id_s: unique within the DB, the aim of the query is to gather a
> list of these
> > - type_s: has one of 2 values
> > - sub_category_id_s: in the query described above all 14K hits will
> match on this, in the DB of ~500K entries there are ~43K different values
> for this field, with each category typically having 2-6 sub categories
> > - category_id_s: not matched in this query, in the DB of ~500K entries
> there are ~13K different values for this field
> - status_s: has one of 2 values, in the query described above all hits
> will have the value "active"
> > - user_id_s: unique within the DB but not matched in this query
> > - first_name_s: almost unique within the DB, this query will sort by
> this field
> > - last_name_s: almost unique within the DB, this query will sort by this
> field
> >
> > This search query looks like:
> > <<"sub_category_id_s:test_1 AND status_s:active AND
> type_s:sub_category">>
> >
> > Our options 

Re: Using cursorMark with '_yz_rk'

2016-09-21 Thread Guillaume Boddaert
I'm very curious of your cursorMark implementation, I'm in deep need of 
that feature.


From my experience I wasn't even able to trigger such a query with my riak
version, as it was not yet supported by the Solr bundled with it. But I
might have missed a point there.


I'm using 2.1.2.

Guillaume

On 21/09/2016 03:28, Vipin Sharma wrote:


Hi all,

In our system we have default implementation of querying the data from 
riak using “pagination”.


For some of the queries, with huge number of resulting records (into 
the tunes of 10,000+) , it is becoming an issue and hence we wanted to 
change it to use “cursorMark” as suggested here : 
https://cwiki.apache.org/confluence/display/solr/Pagination+of+Results


While using cursorMark,

- It asks for a unique key in the sort field. We didn’t have a unique key
of our own, so we wanted to use “_yz_rk”, but it gives the error mentioned below.

- Query is accepted when the sort parameter is changed to use “_yz_id”
instead, but it gives redundant / duplicate records. It is probably a
known issue as mentioned here
 ( 
Pagination Warning). The recommended solution is to use { _yz_rt
asc, _yz_rb asc, _yz_rk asc } instead, but for each of them the query
returns the following error:


"error":{"msg":"Cursor functionality requires a sort containing a 
uniqueKey field tie breaker","code":400}


Can somebody please share some suggestions on this.

Thanks

Vipin



___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Joining node answers to requests

2016-09-20 Thread Guillaume Boddaert
Well, I guess that's on me. Solr is unable to create my index due to an 
extension I'm using, so riak-4 doesn't know of a working index for the 
bucket type.


Guillaume

On 20/09/2016 14:48, Guillaume Boddaert wrote:

Hi,

I'm currently adding two new nodes to my production cluster. Yet they seem to 
answer my requests while joining, and I get strange answers with bad props 
for my bucket (index_name in my case).


admin@riak-1:~$ sudo riak-admin member-status
= Membership 
==

Status RingPendingNode
--- 


joining 0.0%  --  'r...@riak-4.riak.wisel.io'
joining 0.0%  --  'r...@riak-5.riak.wisel.io'
valid  32.8%  --  'r...@riak-1.riak.wisel.io'
valid  34.4%  --  'r...@riak-2.riak.wisel.io'
valid  32.8%  --  'r...@riak-3.riak.wisel.io'
--- 


Valid:3 / Leaving:0 / Exiting:0 / Joining:2 / Down:0

This is a sample response from Python when I ask for properties; 
index_name is missing and the claimant is riak-4:


{'n_val': 2, 'r': 'quorum', 'dw': 'quorum', 'pr': 0, 'claimant': 
'r...@riak-4.riak.wisel.io', 'w': 'quorum', 'precommit': [], 
'dvv_enabled': True, 'big_vclock': 50, 'last_write_wins': False, 
'notfound_ok': True, 'basic_quorum': False, 'linkfun': {'fun': 
'mapreduce_linkfun', 'mod': 'riak_kv_wm_link_walker'}, 'pw': 0, 
'small_vclock': 50, 'rw': 'quorum', 'young_vclock': 20, 
'chash_keyfun': {'fun': 'chash_std_keyfun', 'mod': 'riak_core_util'}, 
'allow_mult': True, 'datatype': 'map', 'active': True, 'postcommit': 
[], 'old_vclock': 86400}


I'm pretty sure I first reached one of my 3 valid nodes for the 
original query.
How can I exclude my two joining nodes from serving any response to my 
clients?


Guillaume

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com



___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Solr search performance

2016-09-19 Thread Fred Dushin
All great questions, Sean.

A few things.  First off, for result sets that are that large, you are probably 
going to want to use Solr cursor marks [1], which are supported in the current 
version of Solr we ship.  Riak allows queries using cursor marks through the 
HTTP interface.  At present, it does not support cursors using the protobuf 
API, due to some internal limitations of the server-side protobuf library, but 
we do hope to fix that in the future.

Secondly, we have found sorting with distributed queries to be far more 
performant using Solr 4.10.4.  Currently released versions of Riak use Solr 
4.7, but as you can see on github [2], Solr 4.10.4 support has been merged into 
the develop-2.2 branch, and is in the pipeline for release.  I can't say when 
the next version of Riak is that will ship with this version because of 
indeterminacy around bug triage, but it should not be too long.

I would start to look at using cursor marks and measure their relative 
performance in your scenario.  My guess is that you should see some improvement 
there.

-Fred

[1] https://cwiki.apache.org/confluence/display/solr/Pagination+of+Results
[2] 
https://github.com/basho/yokozuna/commit/f64e19cef107d982082f5b95ed598da96fb419b0


> On Sep 19, 2016, at 4:48 PM, sean mcevoy  wrote:
> 
> Hi All,
> 
> We have an index with ~548,000 entries, ~14,000 of which match one of our 
> queries.
> We read these in a paginated search and the first page (of 100 hits) returns 
> quickly in ~70ms.
> This response time seems to increase exponentially as we walk through the 
> pages:
> the 4th page takes ~200ms,
> the 8th page takes ~1200ms
> the 12th page takes ~2100ms
> the 16th page takes ~6100ms
> the 20th page takes ~24000ms
> 
> And by the time we're searching for the 22nd page it regularly times out at 
> the default 60 seconds.
> 
> I have a good understanding of riak KV internals but absolutely nothing of 
> Lucene which I think is what's most relevant here. If anyone in the know can 
> point me towards any relevant resource or can explain what's happening I'd be 
> much obliged :-)
> As I would also be if anyone with experience of using Riak/Lucene can tell me:
> - Is 500K a crazy number of entries to put into one index?
> - Is 14K a crazy number of entries to expect to be returned?
> - Are there any methods we can use to make the search time more constant 
> across the full search?
> I read one blog post on inlining but it was a bit old & not very obvious how 
> to implement using riakc_pb_socket calls.
> 
> And out of curiosity, do we not traverse the full range of hits for each 
> page? I naively thought that because I'm sorting the returned values we'd 
> have to get them all first and then sort, but the response times suggests 
> otherwise. Does Lucene store the data sorted by each field just in case a 
> query asks for it? Or what other magic is going on?
> 
> 
> For the technical details, we use the "_yz_default" schema and all the fields 
> stored are strings:
> - entry_id_s: unique within the DB, the aim of the query is to gather a list 
> of these
> - type_s: has one of 2 values
> - sub_category_id_s: in the query described above all 14K hits will match on 
> this, in the DB of ~500K entries there are ~43K different values for this 
> field, with each category typically having 2-6 sub categories
> - category_id_s: not matched in this query, in the DB of ~500K entries there 
> are ~13K different values for this field
> - status_s: has one of 2 values, in the query described above all hits will 
> have the value "active"
> - user_id_s: unique within the DB but not matched in this query
> - first_name_s: almost unique within the DB, this query will sort by this 
> field
> - last_name_s: almost unique within the DB, this query will sort by this field
> 
> This search query looks like:
> <<"sub_category_id_s:test_1 AND status_s:active AND type_s:sub_category">>
> 
> Our options parameter has the sort directive:
> {sort, <<"first_name_s asc, last_name_s asc">>}
> 
> The query was run on a 5-node cluster with n_val of 3.
> 
> Thanks in advance for any pointers!
> //Sean.
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: app.config missing?

2016-09-19 Thread DeadZen
Nope, app.config is actually generated by riak.conf, through an
obscure process known as cuttlefishing ;p

On Mon, Sep 19, 2016 at 3:44 AM, Alex De la rosa
 wrote:
> Ok, documentation was confusing, i thought i had to add the data in both
> riak.conf and app.config
>
> Thanks,
> Alex
>
> On Mon, Sep 19, 2016 at 11:42 AM, Magnus Kessler  wrote:
>>
>> On 18 September 2016 at 07:51, Alex De la rosa 
>> wrote:
>>>
>>> Hi there,
>>>
>>> I'm trying to locate the app.config file in Riak 2.1.4-1 to add the
>>> following:
>>>
>>> { kernel, [
>>> {inet_dist_listen_min, 6000},
>>> {inet_dist_listen_max, 7999}
>>>   ]},
>>>
>>> as explained at http://docs.basho.com/riak/kv/2.1.4/using/security but I
>>> can't find it.
>>>
>>> Thanks,
>>> Alex
>>>
>>
>>
>> Hi Alex,
>>
>> With Riak 2.x we recommend using the new configuration mechanism (a.k.a
>> cuttlefish). Please use the instructions for using riak.conf on the page you
>> quoted.
>>
>> erlang.distribution.port_range.minimum = 6000
>> erlang.distribution.port_range.maximum = 7999
>>
>> For more information about Riak's configuration system, please see the
>> configuration reference documentation [0].
>>
>> Kind Regards,
>>
>> Magnus
>>
>> [0]: http://docs.basho.com/riak/kv/2.1.4/configuring/reference/
>>
>>  --
>> Magnus Kessler
>> Client Services Engineer
>> Basho Technologies Limited
>>
>> Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: app.config missing?

2016-09-19 Thread Alex De la rosa
Ok, documentation was confusing, i thought i had to add the data in both
riak.conf and app.config

Thanks,
Alex

On Mon, Sep 19, 2016 at 11:42 AM, Magnus Kessler  wrote:

> On 18 September 2016 at 07:51, Alex De la rosa 
> wrote:
>
>> Hi there,
>>
>> I'm trying to locate the app.config file in Riak 2.1.4-1 to add the
>> following:
>>
>> { kernel, [
>> {inet_dist_listen_min, 6000},
>> {inet_dist_listen_max, 7999}
>>   ]},
>>
>> as explained at http://docs.basho.com/riak/kv/2.1.4/using/security but I
>> can't find it.
>>
>> Thanks,
>> Alex
>>
>
>
> Hi Alex,
>
> With Riak 2.x we recommend using the new configuration mechanism (a.k.a
> cuttlefish). Please use the instructions for using riak.conf on the page
> you quoted.
>
> erlang.distribution.port_range.minimum = 6000 
> erlang.distribution.port_range.maximum
> = 7999
>
> For more information about Riak's configuration system, please see the
> configuration reference documentation [0].
>
> Kind Regards,
>
> Magnus
>
> [0]: http://docs.basho.com/riak/kv/2.1.4/configuring/reference/
>
>  --
> Magnus Kessler
> Client Services Engineer
> Basho Technologies Limited
>
> Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak cluster protected by firewall

2016-09-18 Thread DeadZen
Looks right, jmx not imx ;),
and yes provided the erlang kernel options are given to limit dist
comm range to 6000-7999

you can check this from the node (to make sure) with:
> [ application:get_env(kernel, X) || X <- 
> [inet_dist_listen_min,inet_dist_listen_max] ].
[{ok,6000},{ok,7999}]

On Sun, Sep 18, 2016 at 2:42 AM, Alex De la rosa
 wrote:
> So mainly the ports are:
>
> epmd listener: TCP:4369
> handoff_port listener: TCP:8099
> http: TCP:8098
> protocol buffers: TCP: 8087
> solr: TCP:8093
> solr imx: TCP:8985
> erlang range: TCP:6000~7999 (if configured in riak's configuration)
>
> Is that alright? am I missing any? or is there any of them that is not
> needed to add in the firewall?
>
> Thanks,
> Alex
>
> On Sun, Sep 18, 2016 at 5:57 AM, John Daily  wrote:
>>
>> You should find most of what you need here:
>> http://docs.basho.com/riak/kv/2.1.4/using/security/
>>
>> Sent from my iPhone
>>
>> On Sep 17, 2016, at 1:26 PM, Alex De la rosa 
>> wrote:
>>
>> Hi all,
>>
>> I have a cluster of 5 nodes connected to each other and now I want to use
>> UFW to deny any  external incoming traffic into them but i will allow each
>> node to access between themselves. Which ports should i open
>> (pb_port,http_port,solr,...)? I connect via pbc but i may need more ports
>> open i guess.
>>
>> A configurations like this (assuming is node_1):
>>
>> ufw default deny incoming
>> ufw default allow outgoing
>> ufw allow 22 --> SSH (private keys)
>> ufw allow from  to any port 443 --> HTTPS (API that talks
>> with Riak locally via Python client)
>>
>> ufw allow from  to any port 
>> ufw allow from  to any port 
>> ufw allow from  to any port 
>> ufw allow from  to any port 
>>
>> Thanks!
>> Alex
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak cluster protected by firewall

2016-09-18 Thread Alex De la rosa
So mainly the ports are:

epmd listener: TCP:4369
handoff_port listener: TCP:8099
http: TCP:8098
protocol buffers: TCP: 8087
solr: TCP:8093
solr imx: TCP:8985
erlang range: TCP:6000~7999 (if configured in riak's configuration)

Is that alright? Am I missing any? Or is there any of them that is not
needed to add in the firewall?

Thanks,
Alex

On Sun, Sep 18, 2016 at 5:57 AM, John Daily  wrote:

> You should find most of what you need here: http://docs.basho.com/
> riak/kv/2.1.4/using/security/
>
> Sent from my iPhone
>
> On Sep 17, 2016, at 1:26 PM, Alex De la rosa 
> wrote:
>
> Hi all,
>
> I have a cluster of 5 nodes connected to each other and now I want to use
> UFW to deny any  external incoming traffic into them but i will allow each
> node to access between themselves. Which ports should i open
> (pb_port,http_port,solr,...)? I connect via pbc but i may need more ports
> open i guess.
>
> A configurations like this (assuming is node_1):
>
> ufw default deny incoming
> ufw default allow outgoing
> ufw allow 22 --> SSH (private keys)
> ufw allow from  to any port 443 --> HTTPS (API that talks
> with Riak locally via Python client)
>
> ufw allow from  to any port 
> ufw allow from  to any port 
> ufw allow from  to any port 
> ufw allow from  to any port 
>
> Thanks!
> Alex
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak cluster protected by firewall

2016-09-17 Thread John Daily
You should find most of what you need here:
http://docs.basho.com/riak/kv/2.1.4/using/security/

Sent from my iPhone

On Sep 17, 2016, at 1:26 PM, Alex De la rosa 
wrote:

Hi all,

I have a cluster of 5 nodes connected to each other and now I want to use
UFW to deny any external incoming traffic into them, but I will allow each
node to access the others. Which ports should I open
(pb_port, http_port, solr, ...)? I connect via pbc but I may need more ports
open, I guess.

A configurations like this (assuming is node_1):

ufw default deny incoming
ufw default allow outgoing
ufw allow 22 --> SSH (private keys)
ufw allow from  to any port 443 --> HTTPS (API that talks
with Riak locally via Python client)

ufw allow from  to any port 
ufw allow from  to any port 
ufw allow from  to any port 
ufw allow from  to any port 

Thanks!
Alex

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak 2.1.3 - Multiple indexes created by Solr for the same Riak object

2016-09-14 Thread Weixi Yen
> So yes, it is possible to find several additional objects in Solr for
each KV object. When querying Solr through Riak/Yokozuna, the internal
queries are structured in a way that only one replica is returned. Quering
Solr nodes directly will typically lack these filters and may return more
than one copy of an object.

Got it, that's what's happening.

> You can perform a GET/PUT cycle through Riak KV on an object.

Perfect, will do this to fix when I notice the dupes coming in, thanks!

On Tue, Sep 13, 2016 at 9:21 AM, Weixi Yen <we...@sleeperbot.com> wrote:

> > So yes, it is possible to find several additional objects in Solr for
> each KV object. When querying Solr through Riak/Yokozuna, the internal
> queries are structured in a way that only one replica is returned. Quering
> Solr nodes directly will typically lack these filters and may return more
> than one copy of an object.
>
> Got it, that's what's happening.
>
> > You can perform a GET/PUT cycle through Riak KV on an object.
>
> Perfect, will do this to fix when I notice the dupes coming in, thanks!
>
> On Tue, Sep 13, 2016 at 4:35 AM, Magnus Kessler <mkess...@basho.com>
> wrote:
>
>> On 11 September 2016 at 02:27, Weixi Yen <we...@blitzchat.com> wrote:
>>
>>> Sort of a unique case, my app was under heavy stress and one of my riak
>>> nodes got backed up (other 4 nodes were fine).
>>>
>>> I think this caused Riak.update to create an extra index in Solr for the
>>> same object when users began running .update on that object.
>>>
>>
>> Hi Weixi,
>>
>> Can you please confirm what you mean by "extra index"? Do you mean that
>> an object was indexed more than once and gets counted / returned by Solr
>> queries? If that's the case, can you please let me know how you query Solr?
>>
>>
>>
>>>
>>> I have basically 2 questions:
>>>
>>> 1) Is what I'm describing something that is possible?
>>>
>>
>> Riak/Yokozuna indexes each replica of a Riak object into Solr. With the
>> default n_val of 3, there will be 3 copies of any given object indexed in
>> Solr. Depending on the version of Riak you are using, it's also possible
>> that siblings of Riak objects get indexed independently. So yes, it is
>> possible to find several additional objects in Solr for each KV object.
>> When querying Solr through Riak/Yokozuna, the internal queries are
>> structured in a way that only one replica is returned. Quering Solr nodes
>> directly will typically lack these filters and may return more than one
>> copy of an object.
>>
>>
>>>
>>> 2) Is there a way to tell Solr to re-index one single item and get rid
>>> of all other indexes of that item?
>>>
>>
>> You can perform a GET/PUT cycle through Riak KV on an object. This will
>> result in n_val copies of the objects across the Solr instances, that
>> replace previous versions. It is not possible to have just 1 copy, unless
>> the n_val for the object is exactly 1. AFAIK, there have been some fixes to
>> Yokozuna in 2.0.7 and the upcoming 2.2 release that deal better with
>> indexed siblings. Discrepancies between KV objects and their Solr
>> counterparts should be detected and resolved by active anti-entropy (AAE).
>>
>>
>>>
>>> Considering RiakTS to resolve these issues long term, but have to stick
>>> with Solr for at least the next 3 months, would appreciate any insight into
>>> how to solve this duplicate index problem.
>>>
>>> Thanks,
>>>
>>> Weixi
>>>
>>>
>> Regards,
>>
>> Magnus
>>
>> --
>> Magnus Kessler
>> Client Services Engineer
>> Basho Technologies Limited
>>
>> Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


RE: RIAK TS installed nodes not connecting

2016-09-14 Thread Agtmaal, Joris van
Hi Stephen

I'm using an emr-4.7.1 (Spark 1.6.1) EMR cluster to run pyspark through Zeppelin. 
I’ve downloaded the spark-riak-connector Jar file (Version: 1.6.0 ( 16d483 | 
zip | jar ) / Date: 2016-09-07 / License: Apache-2.0 / Scala version: 2.10) 
from https://spark-packages.org/package/basho/spark-riak-connector onto the EMR 
master node.

My complete zeppelin script is as follows:

%dep
z.reset()
z.load("/home/hadoop/spark-riak-connector_2.10-1.6.0.jar")

%pyspark
import riak, datetime, time, random

host='172.31.00.00'
pb_port = '8087'
hostAndPort = ":".join([host, pb_port])
client = riak.RiakClient(host=host, pb_port=pb_port)
table=client.table("test")

%pyspark
site = 'AA'
species = 'fff'
start_date = int(time.time())
events = []
for i in range(9):
measurementDate = start_date + i
value = random.uniform(-20, 110)
events.append([site, species, measurementDate, value])

end_date = measurementDate

for e in events:
print e


testRDD = sc.parallelize(events)
df = testRDD.toDF(['site', 'species','measurementDate','value'])
df.show()

   %pyspark
df.write \
.format('org.apache.spark.sql.riak') \
.option('spark.riak.connection.host', '172.31.41.86:8087') \
.mode('Append') \
.save('test')

If I run pyspark from the master node directly with --jars 
"/home/hadoop/spark-riak-connector_2.10-1.6.0.jar" it gives me the same error 
message as above.
If I run pyspark from the master node directly without the --jars argument 
it will give the error as below (same in Zeppelin), so it seems the package is 
found.

py4j.protocol.Py4JJavaError: An error occurred while calling o61.save.
: java.lang.ClassNotFoundException: Failed to find data source: 
org.apache.spark.sql.riak. Please find packages at http://spark-packages.org
at 
org.apache.spark.sql.execution.datasources.ResolvedDataSource$.lookupDataSource(ResolvedDataSource.scala:77)
at 
org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:219)
...

Thanks for the help!

Kind Regards,
Joris van Agtmaal
+(0) 6 25 39 39 06

From: Stephen Etheridge [mailto:setheri...@basho.com]
Sent: 13 September 2016 14:45
To: Agtmaal, Joris van <joris.vanagtm...@wartsila.com>
Cc: riak-users@lists.basho.com; Manu Marchal <emarc...@basho.com>
Subject: Re: RIAK TS installed nodes not connecting

Hi Joris,

I have looked at the tutorial you have been following but I confess I am 
confused.  In the example you are following I do not see where the spark and 
sql contexts are created.  I use PySpark through the Jupyter notebook and I 
have to specify a path to the connector on invoking the jupyter notebook. Is it 
possible for you to share all your code (and how you are invoking zeppelin) 
with me so I can trace everything through?

regards
Stephen

On Mon, Sep 12, 2016 at 3:27 PM, Agtmaal, Joris van 
<joris.vanagtm...@wartsila.com<mailto:joris.vanagtm...@wartsila.com>> wrote:
Hi

I’m new to Riak and followed the installation instructions to get it working on 
an AWS cluster (3 nodes).

So far ive been able to use Riak in pyspark (zeppelin) to create/read/write 
tables, but i would like to use the dataframes directly from spark, using the 
Spark-Riak Connector.
When following the example found here: 
http://docs.basho.com/riak/ts/1.4.0/add-ons/spark-riak-connector/quick-start/#python
But i run into trouble on this last part:

host= my_ip_adress_of_riak_node
pb_port = '8087'
hostAndPort = ":".join([host, pb_port])
client = riak.RiakClient(host=host, pb_port=pb_port)

df.write \
.format('org.apache.spark.sql.riak') \
.option('spark.riak.connection.host', hostAndPort) \
.mode('Append') \
.save('test')

Important to note that i’m using a local download of the Jar file that is 
loaded into the pyspark interpreter in zeppeling through:
%dep
z.reset()
z.load("/home/hadoop/spark-riak-connector_2.10-1.6.0.jar")

Here is the error message i get back:
Py4JJavaError: An error occurred while calling o569.save. : 
java.lang.NoClassDefFoundError: com/basho/riak/client/core/util/HostAndPort at 
com.basho.riak.spark.rdd.connector.RiakConnectorConf$.apply(RiakConnectorConf.scala:76)
 at 
com.basho.riak.spark.rdd.connector.RiakConnectorConf$.apply(RiakConnectorConf.scala:89)
 at org.apache.spark.sql.riak.RiakRelation$.apply(RiakRelation.scala:115) at 
org.apache.spark.sql.riak.DefaultSource.createRelation(DefaultSource.scala:51) 
at 
org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:222)
 at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:148) at 
org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:139) at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(Deleg

RE: RIAK TS installed nodes not connecting

2016-09-14 Thread Agtmaal, Joris van
Hi Alex

I had just figured out that I was probably missing something, so I tried 
installing an earlier version using the Uber jar. This worked fine, so my next 
step is to get back to the latest version of the Uber jar, and I expect that 
was indeed my mistake.

Thanks for the help, and your patience with my rookie mistakes.
;-)

Kind Regards,
Joris van Agtmaal
+(0) 6 25 39 39 06

From: Alex Moore [mailto:amo...@basho.com]
Sent: 13 September 2016 15:35
To: Stephen Etheridge <setheri...@basho.com>
Cc: Agtmaal, Joris van <joris.vanagtm...@wartsila.com>; 
riak-users@lists.basho.com; Manu Marchal <emarc...@basho.com>
Subject: Re: RIAK TS installed nodes not connecting

Joris,

One thing to check - since you are using a downloaded jar, are you using the 
Uber jar that contains all the dependencies?
http://search.maven.org/remotecontent?filepath=com/basho/riak/spark-riak-connector_2.10/1.6.0/spark-riak-connector_2.10-1.6.0-uber.jar

Thanks,
Alex

On Tue, Sep 13, 2016 at 8:44 AM, Stephen Etheridge 
<setheri...@basho.com<mailto:setheri...@basho.com>> wrote:
Hi Joris,

I have looked at the tutorial you have been following but I confess I am 
confused.  In the example you are following I do not see where the spark and 
sql contexts are created.  I use PySpark through the Jupyter notebook and I 
have to specify a path to the connector on invoking the jupyter notebook. Is it 
possible for you to share all your code (and how you are invoking zeppelin) 
with me so I can trace everything through?

regards
Stephen

On Mon, Sep 12, 2016 at 3:27 PM, Agtmaal, Joris van 
<joris.vanagtm...@wartsila.com<mailto:joris.vanagtm...@wartsila.com>> wrote:
Hi

I’m new to Riak and followed the installation instructions to get it working on 
an AWS cluster (3 nodes).

So far ive been able to use Riak in pyspark (zeppelin) to create/read/write 
tables, but i would like to use the dataframes directly from spark, using the 
Spark-Riak Connector.
When following the example found here: 
http://docs.basho.com/riak/ts/1.4.0/add-ons/spark-riak-connector/quick-start/#python
But i run into trouble on this last part:

host= my_ip_adress_of_riak_node
pb_port = '8087'
hostAndPort = ":".join([host, pb_port])
client = riak.RiakClient(host=host, pb_port=pb_port)

df.write \
.format('org.apache.spark.sql.riak') \
.option('spark.riak.connection.host', hostAndPort) \
.mode('Append') \
.save('test')

Important to note that i’m using a local download of the Jar file that is 
loaded into the pyspark interpreter in zeppeling through:
%dep
z.reset()
z.load("/home/hadoop/spark-riak-connector_2.10-1.6.0.jar")

Here is the error message i get back:
Py4JJavaError: An error occurred while calling o569.save. : 
java.lang.NoClassDefFoundError: com/basho/riak/client/core/util/HostAndPort at 
com.basho.riak.spark.rdd.connector.RiakConnectorConf$.apply(RiakConnectorConf.scala:76)
 at 
com.basho.riak.spark.rdd.connector.RiakConnectorConf$.apply(RiakConnectorConf.scala:89)
 at org.apache.spark.sql.riak.RiakRelation$.apply(RiakRelation.scala:115) at 
org.apache.spark.sql.riak.DefaultSource.createRelation(DefaultSource.scala:51) 
at 
org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:222)
 at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:148) at 
org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:139) at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606) at 
py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231) at 
py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381) at 
py4j.Gateway.invoke(Gateway.java:259) at 
py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133) at 
py4j.commands.CallCommand.execute(CallCommand.java:79) at 
py4j.GatewayConnection.run(GatewayConnection.java:209) at 
java.lang.Thread.run(Thread.java:745) (<type 'py4j.protocol.Py4JJavaError'>, 
Py4JJavaError(u'An error occurred while calling o569.save.\n', JavaObject 
id=o570), <traceback object at 0x7f7021bb0200>)

Hope somebody can help out.
thanks, joris

___
riak-users mailing list
riak-users@lists.basho.com<mailto:riak-users@lists.basho.com>
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com



--
{ "name" : "Stephen Etheridge",
   "title" : "Solution Architect, EMEA",
   "Organisation" : "Basho Technologies, Inc",
   "Telephone" : "07814 406662",
   "email" : "mailto:setheri...@basho.com<mailto:setheri...@basho.com>",
   "github" : "http://github.com/datalemming;,
   "twitter" : "@datalemming"}


___
ria

Re: RIAK TS installed nodes not connecting

2016-09-13 Thread Alex Moore
Joris,

One thing to check - since you are using a downloaded jar, are you using
the Uber jar that contains all the dependencies?
http://search.maven.org/remotecontent?filepath=com/basho/riak/spark-riak-connector_2.10/1.6.0/spark-riak-connector_2.10-1.6.0-uber.jar

Thanks,
Alex

On Tue, Sep 13, 2016 at 8:44 AM, Stephen Etheridge 
wrote:

> Hi Joris,
>
> I have looked at the tutorial you have been following but I confess I am
> confused.  In the example you are following I do not see where the spark
> and sql contexts are created.  I use PySpark through the Jupyter notebook
> and I have to specify a path to the connector on invoking the jupyter
> notebook. Is it possible for you to share all your code (and how you are
> invoking zeppelin) with me so I can trace everything through?
>
> regards
> Stephen
>
> On Mon, Sep 12, 2016 at 3:27 PM, Agtmaal, Joris van <
> joris.vanagtm...@wartsila.com> wrote:
>
>> Hi
>>
>>
>>
>> I’m new to Riak and followed the installation instructions to get it
>> working on an AWS cluster (3 nodes).
>>
>>
>>
>> So far ive been able to use Riak in pyspark (zeppelin) to
>> create/read/write tables, but i would like to use the dataframes directly
>> from spark, using the Spark-Riak Connector.
>>
>> When following the example found here: http://docs.basho.com/riak/ts/
>> 1.4.0/add-ons/spark-riak-connector/quick-start/#python
>>
>> But i run into trouble on this last part:
>>
>>
>>
>> host= my_ip_adress_of_riak_node
>>
>> pb_port = '8087'
>>
>> hostAndPort = ":".join([host, pb_port])
>>
>> client = riak.RiakClient(host=host, pb_port=pb_port)
>>
>>
>>
>> df.write \
>>
>> .format('org.apache.spark.sql.riak') \
>>
>> .option('spark.riak.connection.host', hostAndPort) \
>>
>> .mode('Append') \
>>
>> .save('test')
>>
>>
>>
>> Important to note that i’m using a local download of the Jar file that is
>> loaded into the pyspark interpreter in zeppeling through:
>>
>> %dep
>>
>> z.reset()
>>
>> z.load("/home/hadoop/spark-riak-connector_2.10-1.6.0.jar")
>>
>>
>>
>> Here is the error message i get back:
>>
>> Py4JJavaError: An error occurred while calling o569.save. :
>> java.lang.NoClassDefFoundError: com/basho/riak/client/core/util/HostAndPort
>> at 
>> com.basho.riak.spark.rdd.connector.RiakConnectorConf$.apply(RiakConnectorConf.scala:76)
>> at 
>> com.basho.riak.spark.rdd.connector.RiakConnectorConf$.apply(RiakConnectorConf.scala:89)
>> at org.apache.spark.sql.riak.RiakRelation$.apply(RiakRelation.scala:115)
>> at 
>> org.apache.spark.sql.riak.DefaultSource.createRelation(DefaultSource.scala:51)
>> at org.apache.spark.sql.execution.datasources.ResolvedDataSourc
>> e$.apply(ResolvedDataSource.scala:222) at org.apache.spark.sql.DataFrame
>> Writer.save(DataFrameWriter.scala:148) at org.apache.spark.sql.DataFrame
>> Writer.save(DataFrameWriter.scala:139) at 
>> sun.reflect.NativeMethodAccessorImpl.invoke0(Native
>> Method) at 
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> at 
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:606) at
>> py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231) at
>> py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381) at
>> py4j.Gateway.invoke(Gateway.java:259) at py4j.commands.AbstractCommand.
>> invokeMethod(AbstractCommand.java:133) at 
>> py4j.commands.CallCommand.execute(CallCommand.java:79)
>> at py4j.GatewayConnection.run(GatewayConnection.java:209) at
>> java.lang.Thread.run(Thread.java:745) (> 'py4j.protocol.Py4JJavaError'>, Py4JJavaError(u'An error occurred while
>> calling o569.save.\n', JavaObject id=o570), > 0x7f7021bb0200>)
>>
>>
>>
>> Hope somebody can help out.
>>
>> thanks, joris
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>>
>
>
> --
> { "name" : "Stephen Etheridge",
>"title" : "Solution Architect, EMEA",
>"Organisation" : "Basho Technologies, Inc",
>"Telephone" : "07814 406662",
>"email" : "mailto:setheri...@basho.com;,
>"github" : "http://github.com/datalemming;,
>"twitter" : "@datalemming"}
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: RIAK TS installed nodes not connecting

2016-09-13 Thread Stephen Etheridge
Hi Joris,

I have looked at the tutorial you have been following but I confess I am
confused.  In the example you are following I do not see where the spark
and sql contexts are created.  I use PySpark through the Jupyter notebook
and I have to specify a path to the connector on invoking the jupyter
notebook. Is it possible for you to share all your code (and how you are
invoking zeppelin) with me so I can trace everything through?

regards
Stephen

On Mon, Sep 12, 2016 at 3:27 PM, Agtmaal, Joris van <
joris.vanagtm...@wartsila.com> wrote:

> Hi
>
>
>
> I’m new to Riak and followed the installation instructions to get it
> working on an AWS cluster (3 nodes).
>
>
>
> So far ive been able to use Riak in pyspark (zeppelin) to
> create/read/write tables, but i would like to use the dataframes directly
> from spark, using the Spark-Riak Connector.
>
> When following the example found here: http://docs.basho.com/riak/ts/
> 1.4.0/add-ons/spark-riak-connector/quick-start/#python
>
> But i run into trouble on this last part:
>
>
>
> host= my_ip_adress_of_riak_node
>
> pb_port = '8087'
>
> hostAndPort = ":".join([host, pb_port])
>
> client = riak.RiakClient(host=host, pb_port=pb_port)
>
>
>
> df.write \
>
> .format('org.apache.spark.sql.riak') \
>
> .option('spark.riak.connection.host', hostAndPort) \
>
> .mode('Append') \
>
> .save('test')
>
>
>
> Important to note that i’m using a local download of the Jar file that is
> loaded into the pyspark interpreter in zeppeling through:
>
> %dep
>
> z.reset()
>
> z.load("/home/hadoop/spark-riak-connector_2.10-1.6.0.jar")
>
>
>
> Here is the error message i get back:
>
> Py4JJavaError: An error occurred while calling o569.save. : 
> java.lang.NoClassDefFoundError:
> com/basho/riak/client/core/util/HostAndPort at com.basho.riak.spark.rdd.
> connector.RiakConnectorConf$.apply(RiakConnectorConf.scala:76) at
> com.basho.riak.spark.rdd.connector.RiakConnectorConf$.
> apply(RiakConnectorConf.scala:89) at org.apache.spark.sql.riak.
> RiakRelation$.apply(RiakRelation.scala:115) at org.apache.spark.sql.riak.
> DefaultSource.createRelation(DefaultSource.scala:51) at
> org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:222)
> at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:148)
> at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:139)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43) at 
> java.lang.reflect.Method.invoke(Method.java:606)
> at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231) at
> py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381) at
> py4j.Gateway.invoke(Gateway.java:259) at py4j.commands.AbstractCommand.
> invokeMethod(AbstractCommand.java:133) at 
> py4j.commands.CallCommand.execute(CallCommand.java:79)
> at py4j.GatewayConnection.run(GatewayConnection.java:209) at
> java.lang.Thread.run(Thread.java:745) ( 'py4j.protocol.Py4JJavaError'>, Py4JJavaError(u'An error occurred while
> calling o569.save.\n', JavaObject id=o570),  0x7f7021bb0200>)
>
>
>
> Hope somebody can help out.
>
> thanks, joris
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>


-- 
{ "name" : "Stephen Etheridge",
   "title" : "Solution Architect, EMEA",
   "Organisation" : "Basho Technologies, Inc",
   "Telephone" : "07814 406662",
   "email" : "mailto:setheri...@basho.com;,
   "github" : "http://github.com/datalemming;,
   "twitter" : "@datalemming"}
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak 2.1.3 - Multiple indexes created by Solr for the same Riak object

2016-09-13 Thread Magnus Kessler
On 11 September 2016 at 02:27, Weixi Yen <we...@blitzchat.com> wrote:

> Sort of a unique case, my app was under heavy stress and one of my riak
> nodes got backed up (other 4 nodes were fine).
>
> I think this caused Riak.update to create an extra index in Solr for the
> same object when users began running .update on that object.
>

Hi Weixi,

Can you please confirm what you mean by "extra index"? Do you mean that an
object was indexed more than once and gets counted / returned by Solr
queries? If that's the case, can you please let me know how you query Solr?



>
> I have basically 2 questions:
>
> 1) Is what I'm describing something that is possible?
>

Riak/Yokozuna indexes each replica of a Riak object into Solr. With the
default n_val of 3, there will be 3 copies of any given object indexed in
Solr. Depending on the version of Riak you are using, it's also possible
that siblings of Riak objects get indexed independently. So yes, it is
possible to find several additional objects in Solr for each KV object.
When querying Solr through Riak/Yokozuna, the internal queries are
structured in a way that only one replica is returned. Querying Solr nodes
directly will typically lack these filters and may return more than one
copy of an object.


>
> 2) Is there a way to tell Solr to re-index one single item and get rid of
> all other indexes of that item?
>

You can perform a GET/PUT cycle through Riak KV on an object. This will
result in n_val copies of the objects across the Solr instances, that
replace previous versions. It is not possible to have just 1 copy, unless
the n_val for the object is exactly 1. AFAIK, there have been some fixes to
Yokozuna in 2.0.7 and the upcoming 2.2 release that deal better with
indexed siblings. Discrepancies between KV objects and their Solr
counterparts should be detected and resolved by active anti-entropy (AAE).
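
For example, with the official Python client that GET/PUT cycle is just a read
followed by an unmodified write-back (bucket and key names below are
placeholders):

# Re-put an object so that all n_val replicas, and the Solr documents derived
# from them, are rewritten. Bucket and key names are placeholders.
import riak

client = riak.RiakClient(host='127.0.0.1', pb_port=8087)
bucket = client.bucket('my_bucket')

obj = bucket.get('my_key')   # fetch the current value (resolve siblings first if any)
obj.store()                  # write it back unchanged; Yokozuna re-indexes each replica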


>
> Considering RiakTS to resolve these issues long term, but have to stick
> with Solr for at least the next 3 months, would appreciate any insight into
> how to solve this duplicate index problem.
>
> Thanks,
>
> Weixi
>
>
Regards,

Magnus

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak KV memory backend did't discarding oldest objects when met max memory

2016-09-12 Thread 周磊
Dear friends,

Any idea about this?

Best Regards & Thanks

John

2016-09-01 11:58 GMT+08:00 周磊 :

> Dear Friends,
> I'm sorry to disturb you, could you help on this?
>
> ENV
>
> OS: CenterOS 6.5
> Riak KV:2.1.4
> Installed by:
>
> wget http://s3.amazonaws.com/downloads.basho.com/riak/2.1/
> 2.1.4/rhel/6/riak-2.1.4-1.el6.x86_64.rpm
> sudo rpm -Uvh riak-2.1.4-1.el6.x86_64.rpm
>
> My Setting:
>
> storage_backend = multi
> multi_backend.bitcask_multi.storage_backend = bitcask
> multi_backend.bitcask_multi.bitcask.data_root = /var/lib/riak/bitcask_mult
> multi_backend.memory_multi.storage_backend = memory
> multi_backend.memory_multi.memory_backend.max_memory_per_vnode = 2MB
> multi_backend.default = memory_multi
>
> Document
>
> http://docs.basho.com/riak/kv/2.1.4/setup/planning/backend/memory/
> 
>
> When the threshold value that you set has been met in a particular vnode, Riak
> will begin discarding objects, beginning with the *oldest* object and
> proceeding until memory usage returns below the allowable threshold.
> You can configure maximum memory using the memory_backend.max_memory_per_vnode
> setting. You can specify max_memory_per_vnode however you’d like, using
> kilobytes, megabytes, or even gigabytes.
>
> Steps
>
> 1. Use a shell script to loop POSTing 512 KB files (0.ts, index.m3u8, 1.ts,
> index.m3u8, 2.ts, index.m3u8)
> 2. When the 2 MB max memory is reached, some objects are discarded, but not
> beginning with the oldest object (index.m3u8 is discarded)
> curl.sh.txt 
>
> http://10.20.122.45:8098/buckets/test/keys?keys=true
>
> {"keys":["217.ts","212.ts","203.ts","210.ts","173.ts","
> 166.ts","200.ts","199.ts","192.ts","215.ts","129.ts","
> 124.ts","208.ts","179.ts","198.ts","196.ts","185.ts","97.
> ts","219.ts","114.ts","201.ts","190.ts","165.ts","223.ts","
> 220.ts","214.ts","222.ts","211.ts","209.ts","213.ts","
> 183.ts","143.ts","205.ts","171.ts","139.ts","66.ts","142.
> ts","122.ts","216.ts","170.ts","162.ts","194.ts","119.ts","
> 105.ts","178.ts","160.ts","158.ts","221.ts","193.ts","
> 187.ts","197.ts","159.ts","155.ts","218.ts","207.ts","
> 184.ts","188.ts","181.ts","176.ts","206.ts","202.ts","195.ts"]}
>
>
>
> Best Regards & Thanks
>
> John
>
>
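
One detail worth keeping in mind with the documentation quoted above:
max_memory_per_vnode and the oldest-first eviction are applied independently
per vnode, not per bucket or per node. Keys are spread over the ring by
consistent hashing, so the discard order observed across a whole bucket will
generally not follow the global insertion order. A rough capacity estimate,
assuming the 2 MB limit and the ~512 KB segment size from the post above and
ignoring per-object overhead:

max_memory_per_vnode = 2 * 1024 * 1024        # bytes
object_size = 512 * 1024                      # bytes per .ts segment
print(max_memory_per_vnode // object_size)    # => 4 objects per vnode before eviction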
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak 2.1.3 - Multiple indexes created by Solr for the same Riak object

2016-09-12 Thread Fred Dushin
Hi Weixi,

You might have to try describing your use case in more detail.  Solr Indices 
are independent from Riak objects.  They are, instead, associated with riak 
buckets (or bucket types), and an object (key/value) can only be associated 
with one bucket.  Therefore, a Riak object can only be associated with one Solr 
index.  A Solr index can be associated with multiple buckets, but in general 
the mapping from Riak objects to Solr indices is injective.

Is it possible that you changed the index associated with a bucket at some 
point in the bucket or bucket type lifecycle?

-Fred

> On Sep 10, 2016, at 9:27 PM, Weixi Yen <we...@blitzchat.com> wrote:
> 
> Sort of a unique case, my app was under heavy stress and one of my riak nodes 
> got backed up (other 4 nodes were fine).
> 
> I think this caused Riak.update to create an extra index in Solr for the same 
> object when users began running .update on that object.
> 
> I have basically 2 questions:
> 
> 1) Is what I'm describing something that is possible?
> 
> 2) Is there a way to tell Solr to re-index one single item and get rid of all 
> other indexes of that item?
> 
> Considering RiakTS to resolve these issues long term, but have to stick with 
> Solr for at least the next 3 months, would appreciate any insight into how to 
> solve this duplicate index problem.
> 
> Thanks,
> 
> Weixi
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: a weird error while post request to server for store object

2016-09-12 Thread Christopher Mancini
Hi Alan,

So, the quick answer is that CRDTs and secondary indexes are not supported
on Riak TS at the moment, which explains why you are getting errors. To
complete the example you were working on, you would need to run it against
Riak KV.

The long answer is that our goal is to have TS and KV merged together so you
can have the features of both on one instance of Riak, but unfortunately I do
not have a timeline for when that will be available.

Chris
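
For reference, the equivalent map update against a Riak KV cluster looks
roughly like this with the official Python client (host, bucket type and field
values below are placeholders, the bucket type must have been created and
activated with datatype = map, and method names can differ slightly between
client versions):

# Rough Riak KV equivalent of the blog-post map update from the tutorial.
# Assumes a bucket type "cms" created and activated with datatype = map.
import riak
from riak.datatypes import Map

client = riak.RiakClient(host='192.168.111.2', pb_port=8087)
bucket = client.bucket_type('cms').bucket('cat_pics')

post = Map(bucket, 'blog1')
post.registers['title'].assign('This one is so lulz!')
post.registers['author'].assign('Cat Stevens')
post.registers['content'].assign('Please check out these cat pics!')
post.sets['keywords'].add('adorbs')
post.sets['keywords'].add('cheshire')
post.flags['published'].enable()
post.store()   # send the accumulated operations to Riak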

On Wed, Sep 7, 2016 at 9:43 PM HQS^∞^ <hqs...@qq.com> wrote:

> Hi Luke :
> I'm sorry, I did not describe clearly the development environment I
> mentioned. I deployed three Riak TS servers (at least version 1.3) in
> separate VMware virtual machines; the PHP client library version is 2.0 and
> the Riak TS version is 1.3.0.
>
> Regards
>Alan
>
> -- 原始邮件 --
> *发件人:* "Luke Bakken";<lbak...@basho.com>;
> *发送时间:* 2016年9月7日(星期三) 晚上9:27
> *收件人:* "HQS^∞^"<hqs...@qq.com>;
> *抄送:* "riak-users"<riak-users@lists.basho.com>;
> *主题:* Re: a weird error while post request to server for store object
>
> Hello Alan -
>
> Which PHP client library are you using?
>
> --
> Luke Bakken
> Engineer
> lbak...@basho.com
>
> On Tue, Sep 6, 2016 at 10:29 PM, HQS^∞^ <hqs...@qq.com> wrote:
> > dear everyone:
> > I follow the tutorial at
> > http://docs.basho.com/riak/kv/2.1.4/developing/usage/document-store/   ,
> > Step by Step Practice  , when I've Post a request for store object , but
> the
> > riak server respond 400 (Bad Request)  ,  I review my code again and
> again ,
> > but  no problem found . see below:
> >
> >  >
> >
> > class BlogPost {
> >  var  $_title = '';
> >  var  $_author = '';
> >  var  $_content = '';
> >  var  $_keywords = [];
> >  var  $_datePosted = '';
> >  var  $_published = false;
> >  var  $_bucketType = "cms";
> >  var  $_bucket = null;
> >  var  $_riak = null;
> >  var  $_location = null;
> >   public function __construct(Riak $riak, $bucket, $title, $author,
> > $content, array $keywords, $date, $published)
> >   {
> > $this->_riak = $riak;
> > $this->_bucket = new Bucket($bucket, "cms");
> > $this->_location = new Riak\Location('blog1',$this->_bucket,"cms");
> > $this->_title = $title;
> > $this->_author = $author;
> > $this->_content = $content;
> > $this->_keywords = $keywords;
> > $this->_datePosted = $date;
> > $this->_published = $published;
> >   }
> >
> >   public function store()
> >   {
> > $setBuilder = (new UpdateSet($this->_riak));
> >
> > foreach($this->_keywords as $keyword) {
> >   $setBuilder->add($keyword);
> > }
> > /*
> >(new UpdateMap($this->_riak))
> >   ->updateRegister('title', $this->_title)
> >   ->updateRegister('author', $this->_author)
> >   ->updateRegister('content', $this->_content)
> >   ->updateRegister('date', $this->_datePosted)
> >   ->updateFlag('published', $this->_published)
> >   ->updateSet('keywords', $setBuilder)
> >   ->withBucket($this->_bucket)
> >   ->build()
> >   ->execute();
> >
> > */
> >$response = (new UpdateMap($this->_riak))
> >   ->updateRegister('title', $this->_title)
> >   ->updateRegister('author', $this->_author)
> >   ->updateRegister('content', $this->_content)
> >   ->updateRegister('date', $this->_datePosted)
> >   ->updateFlag('published', $this->_published)
> >   ->updateSet('keywords', $setBuilder)
> >   ->atLocation($this->_location)
> >   ->build()
> >   ->execute();
> >
> > echo '';
> >   var_dump($response);
> > echo '';
> >   }
> > }
> >
> >  $node = (new Node\Builder)
> > ->atHost('192.168.111.2')
> > ->onPort(8098)
> > ->build();
> >
> > $riak = new Riak([$node]);
> >
> >
> > $keywords = ['adorbs', 'cheshire'];
> > $date = new \DateTime('now');
> >
> >
> > $post1 = new BlogPost(
> >   $riak,
> >   'cat_pics', // bucket
> >   'This one is so lulz!', // title
> >   'Cat Stevens', // author
> >   'Please check out these cat pics!', // content
> >   $keywords, 

Re: Spaces in the search string

2016-09-08 Thread sean mcevoy
Hi Alexander,
Unfortunately it didn't shake with any satisfaction.
I'm sure there's an easy answer, and I hope I'll get back to search for it
some day.
But for now me & my pragmatic overlords have gone for a work-around
solution that avoids the problem.
//Sean.
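
For anyone who hits the same wall later: Solr's complexphrase query parser
allows wildcards inside a quoted phrase, so a query along the following lines
may cover at least the trailing-wildcard case (untested here, and it assumes
that parser is available in the Solr version bundled with Riak; leading
wildcards inside the phrase are a further complication). Shown with the
official Python client purely for illustration:

# Hypothetical wildcard-inside-phrase search via Solr's complexphrase parser.
# Index and field names match the test case in this thread.
import riak

client = riak.RiakClient(host='127.0.0.1', pb_port=8087)
results = client.fulltext_search(
    'test_index',
    '{!complexphrase inOrder=true}name_s:"my test na*" AND age_i:2'
)
print(results['num_found'], results['docs'])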


On Wed, Sep 7, 2016 at 2:06 PM, Alexander Sicular 
wrote:

> Hi Sean, Familiarize yourself with the default schema[0], if that is what
> you're using. Also check details around this specific type of search around
> the web[1].
>
> Let us know how it shakes out,
> -Alexander
>
>
> [0] https://raw.githubusercontent.com/basho/yokozuna/develop/priv/default_
> schema.xml
> [1] http://stackoverflow.com/questions/10023133/solr-
> wildcard-query-with-whitespace
>
>
>
> On Wednesday, September 7, 2016, sean mcevoy 
> wrote:
>
>> Hi again!
>>
>> Apologies for the premature post earlier. I thought I had a solution when
>> I didn't get the error but when I got around to plugging it into my
>> application it's still not doing everything that I need.
>> I've narrowed it down to this minimal testcase, first setup the index &
>> insert the data:
>>
>>
>> {ok,Pid} = riakc_pb_socket:start("127.0.0.1", 10017).
>> ok = riakc_pb_socket:create_search_index(Pid, <<"test_index">>,
>> <<"_yz_default">>, []).
>> ok = riakc_pb_socket:set_search_index(Pid, <<"test_bucket">>,
>> <<"test_index">>).
>> RO = riakc_obj:new(<<"test_bucket">>, <<"test_key">>,
>> <<"{\"name_s\":\"my test name\",\"age_i\":2}">>, "application/json").
>> ok = riakc_pb_socket:put(Pid, RO).
>>
>>
>> Now I can get the hit when search for a partial name with wildcards & no
>> escapes or spaces:
>> 521>
>> 521> riakc_pb_socket:search(Pid, <<"test_index">>, <<"name_s:*test* AND
>> age_i:2">>, []).
>> {ok,{search_results,[{<<"test_index">>,
>>   [{<<"score">>,<<"1.227760798549e+00">>},
>>{<<"_yz_rb">>,<<"test_bucket">>},
>>{<<"_yz_rt">>,<<"default">>},
>>{<<"_yz_rk">>,<<"test_key">>},
>>{<<"_yz_id">>,<<"1*default*tes
>> t_bucket*test_key*57">>},
>>{<<"name_s">>,<<"my test name">>},
>>{<<"age_i">>,<<"2">>}]}],
>> 1.2277607917785645,1}}
>>
>>
>> And I can get the hit when I search for the full name with spaces & the
>> escaped quotes:
>> 522>
>> 522> riakc_pb_socket:search(Pid, <<"test_index">>, <<"name_s:\"my test
>> name\" AND age_i:2">>, []).
>> {ok,{search_results,[{<<"test_index">>,
>>   [{<<"score">>,<<"1.007369608719e+00">>},
>>{<<"_yz_rb">>,<<"test_bucket">>},
>>{<<"_yz_rt">>,<<"default">>},
>>{<<"_yz_rk">>,<<"test_key">>},
>>{<<"_yz_id">>,<<"1*default*tes
>> t_bucket*test_key*58">>},
>>{<<"name_s">>,<<"my test name">>},
>>{<<"age_i">>,<<"2">>}]}],
>> 1.0073696374893188,1}}
>>
>>
>> But how can I search for a partial name with spaces:
>> 523>
>> 523> riakc_pb_socket:search(Pid, <<"test_index">>, <<"name_s:\"*y test
>> na*\" AND age_i:2">>, []).
>> {ok,{search_results,[],0.0,0}}
>> 524>
>> 524>
>>
>>
>> I get the feeling that I'm missing something really obvious but can't see
>> it. Any more pointers appreciated!
>>
>> //Sean.
>>
>>
>> On Wed, Sep 7, 2016 at 10:11 AM, sean mcevoy 
>> wrote:
>>
>>> Hi Jason,
>>>
>>> Thanks for the kick, I just needed to look closer!
>>> Yes, had tried escaping but one of my utility functions for dynamically
>>> building the search string had been stripping it out again. D'oh!
>>>
>>> Curiously, just escaping the space doesn't work as in the example in the
>>> stackoverflow post.
>>> Putting the search term in an inner string and escaping its quotes both
>>> feels more natural and does work so I'm going with something more like:
>>>
>>> 409>
>>> 409>
>>> 409> riakc_pb_socket:search(Pid, <<"test_index">>, <<"name_s:\"we rt\"
>>> AND age_i:0">>, []).
>>> {ok,{search_results,[],0.0,0}}
>>> 410>
>>> 410>
>>> 410> riakc_pb_socket:search(Pid, <<"test_index">>, <<"name_s:we\ rt AND
>>> age_i:0">>, []).
>>> {error,<<"Query unsuccessful check the logs.">>}
>>> 411>
>>> 411>
>>>
>>> Cheers,
>>> //Sean.
>>>
>>>
>>> On Tue, Sep 6, 2016 at 2:48 PM, Jason Voegele 
>>> wrote:
>>>
 Hi Sean,

 Have you tried escaping the space in your query?

 http://stackoverflow.com/questions/10023133/solr-wildcard-qu
 ery-with-whitespace


 On Sep 5, 2016, at 6:24 PM, sean mcevoy  wrote:

 Hi List,

 We have a solr index where we store something like:
 <<"{\"key_s\":\"ID\",\"body_s\":\"some test string\"}">>}],

 Then we try to do a riakc_pb_socket:search with the pattern:
 <<"body_s:*test str*">>

 The request will fail with an error message 

Re: Using Riak KV with Amazon ML

2016-09-08 Thread Matt Digan
Hi Ricardo,

Full bucket read is not supported yet by Riak KV. If you're able to
upgrade, Riak TS does support that feature.

Otherwise, with Riak KV, you could use a range of 2i indexes. See
https://github.com/basho/spark-riak-connector/blob/master/docs/using-connector.md#reading-data-from-kv-bucket
for more info.

--Matt
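
As a concrete illustration of the 2i route, the built-in $key index can be
walked in pages and streamed out to a CSV file with the official Python client.
Bucket name, page size, key range and the CSV columns below are placeholders,
and the LevelDB backend is required for 2i:

# Sketch: export a bucket to CSV by paginating over the built-in $key index.
import csv
import riak

client = riak.RiakClient(host='127.0.0.1', pb_port=8087)
bucket = client.bucket('events')

with open('events.csv', 'w') as fh:
    writer = csv.writer(fh)
    writer.writerow(['key', 'user_id', 'amount'])        # hypothetical columns
    continuation = None
    while True:
        page = bucket.get_index('$key', '0', '~', max_results=1000,
                                continuation=continuation)
        for key in page.results:
            doc = bucket.get(key).data                   # JSON values decode to dicts
            writer.writerow([key, doc.get('user_id'), doc.get('amount')])
        if not page.continuation:
            break
        continuation = page.continuation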

On Thu, Sep 1, 2016 at 4:52 PM, Ricardo Mayerhofer 
wrote:

> Hi all,
> I've some events stored in Riak KV which I'd like to use as input to
> Amazon Machine Learning.
> What I want to accomplish is to export the JSON entries from Riak
> transform it to a CSV file and upload it to S3.
>
> What is the best way to export a full bucket (or a 2i subset) from Riak?
>
> I've read there were some improvements in Riak in this sense in order to
> build Apache Spark Connector (https://databricks.com/blog/
> 2016/08/11/the-quest-for-hidden-treasure-an-apache-
> spark-connector-for-the-riak-nosql-database.html)
>
> Is there a way to take advantage of the new full bucket read and of the
> distributed export?
>
> Thanks.
>
> --
> Ricardo Mayerhofer
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>


-- 
Matt Digan
Engineering Director
Basho Technologies
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: a weird error while post request to server for store object

2016-09-07 Thread Luke Bakken
Hello Alan -

Which PHP client library are you using?

--
Luke Bakken
Engineer
lbak...@basho.com

On Tue, Sep 6, 2016 at 10:29 PM, HQS^∞^  wrote:
> dear everyone:
> I follow the tutorial at
> http://docs.basho.com/riak/kv/2.1.4/developing/usage/document-store/   ,
> Step by Step Practice  , when I've Post a request for store object , but the
> riak server respond 400 (Bad Request)  ,  I review my code again and again ,
> but  no problem found . see below:
>
> 
>
> class BlogPost {
>  var  $_title = '';
>  var  $_author = '';
>  var  $_content = '';
>  var  $_keywords = [];
>  var  $_datePosted = '';
>  var  $_published = false;
>  var  $_bucketType = "cms";
>  var  $_bucket = null;
>  var  $_riak = null;
>  var  $_location = null;
>   public function __construct(Riak $riak, $bucket, $title, $author,
> $content, array $keywords, $date, $published)
>   {
> $this->_riak = $riak;
> $this->_bucket = new Bucket($bucket, "cms");
> $this->_location = new Riak\Location('blog1',$this->_bucket,"cms");
> $this->_title = $title;
> $this->_author = $author;
> $this->_content = $content;
> $this->_keywords = $keywords;
> $this->_datePosted = $date;
> $this->_published = $published;
>   }
>
>   public function store()
>   {
> $setBuilder = (new UpdateSet($this->_riak));
>
> foreach($this->_keywords as $keyword) {
>   $setBuilder->add($keyword);
> }
> /*
>(new UpdateMap($this->_riak))
>   ->updateRegister('title', $this->_title)
>   ->updateRegister('author', $this->_author)
>   ->updateRegister('content', $this->_content)
>   ->updateRegister('date', $this->_datePosted)
>   ->updateFlag('published', $this->_published)
>   ->updateSet('keywords', $setBuilder)
>   ->withBucket($this->_bucket)
>   ->build()
>   ->execute();
>
> */
>$response = (new UpdateMap($this->_riak))
>   ->updateRegister('title', $this->_title)
>   ->updateRegister('author', $this->_author)
>   ->updateRegister('content', $this->_content)
>   ->updateRegister('date', $this->_datePosted)
>   ->updateFlag('published', $this->_published)
>   ->updateSet('keywords', $setBuilder)
>   ->atLocation($this->_location)
>   ->build()
>   ->execute();
>
> echo '';
>   var_dump($response);
> echo '';
>   }
> }
>
>  $node = (new Node\Builder)
> ->atHost('192.168.111.2')
> ->onPort(8098)
> ->build();
>
> $riak = new Riak([$node]);
>
>
> $keywords = ['adorbs', 'cheshire'];
> $date = new \DateTime('now');
>
>
> $post1 = new BlogPost(
>   $riak,
>   'cat_pics', // bucket
>   'This one is so lulz!', // title
>   'Cat Stevens', // author
>   'Please check out these cat pics!', // content
>   $keywords, // keywords
>   $date, // date posted
>   true // published
> );
> $post1->store();
>
> the wireshark captured packet :
>
>  192.168.171.124(client ip)  =>  192.168.111.2(riak server ip)HTTP
> 511POST /types/cms/buckets/cat_pics/datatypes/alldoc? HTTP/1.1
> (application/json)
>  192.168.111.2192.168.171.124HTTP251HTTP/1.1 400 Bad Request
>
>  GET http://192.168.111.2:8098//types/cms/buckets/cat_pics/props
> {"props":{"name":"cat_pics","young_vclock":20,"w":"quorum","small_vclock":50,"search_index":"blog_posts","rw":"quorum","r":"quorum","pw":0,"precommit":[],"pr":0,"postcommit":[],"old_vclock":86400,"notfound_ok":true,"n_val":3,"linkfun":{"mod":"riak_kv_wm_link_walker","fun":"mapreduce_linkfun"},"last_write_wins":false,"dw":"quorum","dvv_enabled":true,"chash_keyfun":{"mod":"riak_core_util","fun":"chash_std_keyfun"},"big_vclock":50,"basic_quorum":false,"allow_mult":true,"datatype":"map","active":true,"claimant":"node1@192.168.111.1"}}
>
>please help me catch the bugs  thanks in advance!
>
> regards
>
> Alan

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Spaces in the search string

2016-09-07 Thread sean mcevoy
Hi again!

Apologies for the premature post earlier. I thought I had a solution when I
didn't get the error but when I got around to plugging it into my
application it's still not doing everything that I need.
I've narrowed it down to this minimal testcase, first setup the index &
insert the data:


{ok,Pid} = riakc_pb_socket:start("127.0.0.1", 10017).
ok = riakc_pb_socket:create_search_index(Pid, <<"test_index">>,
<<"_yz_default">>, []).
ok = riakc_pb_socket:set_search_index(Pid, <<"test_bucket">>,
<<"test_index">>).
RO = riakc_obj:new(<<"test_bucket">>, <<"test_key">>, <<"{\"name_s\":\"my
test name\",\"age_i\":2}">>, "application/json").
ok = riakc_pb_socket:put(Pid, RO).


Now I can get the hit when search for a partial name with wildcards & no
escapes or spaces:
521>
521> riakc_pb_socket:search(Pid, <<"test_index">>, <<"name_s:*test* AND
age_i:2">>, []).
{ok,{search_results,[{<<"test_index">>,
  [{<<"score">>,<<"1.227760798549e+00">>},
   {<<"_yz_rb">>,<<"test_bucket">>},
   {<<"_yz_rt">>,<<"default">>},
   {<<"_yz_rk">>,<<"test_key">>},

{<<"_yz_id">>,<<"1*default*test_bucket*test_key*57">>},
   {<<"name_s">>,<<"my test name">>},
   {<<"age_i">>,<<"2">>}]}],
1.2277607917785645,1}}


And I can get the hit when I search for the full name with spaces & the
escaped quotes:
522>
522> riakc_pb_socket:search(Pid, <<"test_index">>, <<"name_s:\"my test
name\" AND age_i:2">>, []).
{ok,{search_results,[{<<"test_index">>,
  [{<<"score">>,<<"1.007369608719e+00">>},
   {<<"_yz_rb">>,<<"test_bucket">>},
   {<<"_yz_rt">>,<<"default">>},
   {<<"_yz_rk">>,<<"test_key">>},

{<<"_yz_id">>,<<"1*default*test_bucket*test_key*58">>},
   {<<"name_s">>,<<"my test name">>},
   {<<"age_i">>,<<"2">>}]}],
1.0073696374893188,1}}


But how can I search for a partial name with spaces:
523>
523> riakc_pb_socket:search(Pid, <<"test_index">>, <<"name_s:\"*y test
na*\" AND age_i:2">>, []).
{ok,{search_results,[],0.0,0}}
524>
524>


I get the feeling that I'm missing something really obvious but can't see
it. Any more pointers appreciated!

//Sean.


On Wed, Sep 7, 2016 at 10:11 AM, sean mcevoy  wrote:

> Hi Jason,
>
> Thanks for the kick, I just needed to look closer!
> Yes, had tried escaping but one of my utility functions for dynamically
> building the search string had been stripping it out again. D'oh!
>
> Curiously, just escaping the space doesn't work as in the example in the
> stackoverflow post.
> Putting the search term in an inner string and escaping its quotes both
> feels more natural and does work so I'm going with something more like:
>
> 409>
> 409>
> 409> riakc_pb_socket:search(Pid, <<"test_index">>, <<"name_s:\"we rt\" AND
> age_i:0">>, []).
> {ok,{search_results,[],0.0,0}}
> 410>
> 410>
> 410> riakc_pb_socket:search(Pid, <<"test_index">>, <<"name_s:we\ rt AND
> age_i:0">>, []).
> {error,<<"Query unsuccessful check the logs.">>}
> 411>
> 411>
>
> Cheers,
> //Sean.
>
>
> On Tue, Sep 6, 2016 at 2:48 PM, Jason Voegele  wrote:
>
>> Hi Sean,
>>
>> Have you tried escaping the space in your query?
>>
>> http://stackoverflow.com/questions/10023133/solr-wildcard-
>> query-with-whitespace
>>
>>
>> On Sep 5, 2016, at 6:24 PM, sean mcevoy  wrote:
>>
>> Hi List,
>>
>> We have a solr index where we store something like:
>> <<"{\"key_s\":\"ID\",\"body_s\":\"some test string\"}">>}],
>>
>> Then we try to do a riakc_pb_socket:search with the pattern:
>> <<"body_s:*test str*">>
>>
>> The request will fail with an error message telling us to check the logs
>> and in there we find:
>>
>> 2016-09-05 13:37:29.271 [error] <0.12067.10>@yz_pb_search:maybe_process:107
>> {solr_error,{400,"http://localhost:10014/internal_solr/crm_
>> db.campaign_index/select",<<"{\"error\":{\"msg\":\"no field name
>> specified in query and no default specified via 'df'
>> param\",\"code\":400}}\n">>}} [{yz_solr,search,3,[{file,"src
>> /yz_solr.erl"},{line,284}]},{yz_pb_search,maybe_process,3,[
>> {file,"src/yz_pb_search.erl"},{line,78}]},{riak_api_pb_
>> server,process_message,4,[{file,"src/riak_api_pb_server.erl"
>> },{line,388}]},{riak_api_pb_server,connected,2,[{file,"src
>> /riak_api_pb_server.erl"},{line,226}]},{riak_api_pb_server,
>> decode_buffer,2,[{file,"src/riak_api_pb_server.erl"},{
>> line,364}]},{gen_fsm,handle_msg,7,[{file,"gen_fsm.erl"},{lin
>> e,505}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]
>>
>>
>> Through experiment I've figured out that it doesn't like the space as it
>> seems to think the part of the search string after that space is a new key
>> to search for. Which seems fair enough.
>>
>> Anyone know of a work-around? Or am I formatting my request 

Re: Spaces in the search string

2016-09-07 Thread sean mcevoy
Hi Jason,

Thanks for the kick, I just needed to look closer!
Yes, had tried escaping but one of my utility functions for dynamically
building the search string had been stripping it out again. D'oh!

Curiously, just escaping the space doesn't work as in the example in the
stackoverflow post.
Putting the search term in an inner string and escaping its quotes both
feels more natural and does work so I'm going with something more like:

409>
409>
409> riakc_pb_socket:search(Pid, <<"test_index">>, <<"name_s:\"we rt\" AND
age_i:0">>, []).
{ok,{search_results,[],0.0,0}}
410>
410>
410> riakc_pb_socket:search(Pid, <<"test_index">>, <<"name_s:we\ rt AND
age_i:0">>, []).
{error,<<"Query unsuccessful check the logs.">>}
411>
411>

Cheers,
//Sean.


On Tue, Sep 6, 2016 at 2:48 PM, Jason Voegele  wrote:

> Hi Sean,
>
> Have you tried escaping the space in your query?
>
> http://stackoverflow.com/questions/10023133/solr-
> wildcard-query-with-whitespace
>
>
> On Sep 5, 2016, at 6:24 PM, sean mcevoy  wrote:
>
> Hi List,
>
> We have a solr index where we store something like:
> <<"{\"key_s\":\"ID\",\"body_s\":\"some test string\"}">>}],
>
> Then we try to do a riakc_pb_socket:search with the pattern:
> <<"body_s:*test str*">>
>
> The request will fail with an error message telling us to check the logs
> and in there we find:
>
> 2016-09-05 13:37:29.271 [error] <0.12067.10>@yz_pb_search:maybe_process:107
> {solr_error,{400,"http://localhost:10014/internal_solr/
> crm_db.campaign_index/select",<<"{\"error\":{\"msg\":\"no field name
> specified in query and no default specified via 'df'
> param\",\"code\":400}}\n">>}} [{yz_solr,search,3,[{file,"
> src/yz_solr.erl"},{line,284}]},{yz_pb_search,maybe_process,
> 3,[{file,"src/yz_pb_search.erl"},{line,78}]},{riak_api_
> pb_server,process_message,4,[{file,"src/riak_api_pb_server.
> erl"},{line,388}]},{riak_api_pb_server,connected,2,[{file,"
> src/riak_api_pb_server.erl"},{line,226}]},{riak_api_pb_
> server,decode_buffer,2,[{file,"src/riak_api_pb_server.erl"},
> {line,364}]},{gen_fsm,handle_msg,7,[{file,"gen_fsm.erl"},{
> line,505}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.
> erl"},{line,239}]}]
>
>
> Through experiment I've figured out that it doesn't like the space as it
> seems to think the part of the search string after that space is a new key
> to search for. Which seems fair enough.
>
> Anyone know of a work-around? Or am I formatting my request incorrectly?
>
> Thanks in advance.
> //Sean.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
>
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: speeding up bulk loading

2016-09-06 Thread Guido Medina

Hi Travis,

I have done similar things using the Java client, but I will assume you 
have access to change certain settings in the C client. Assuming you 
have RW = 2 and N = 3, your client returns to you once 2 writes are 
made, but an asynchronous write is still pending, which will eventually 
create lots of back pressure and overload your Riak nodes.


Concurrency-wise, don't use more than 8 writer threads for this, and 
set RW = N (in your case 3) for these tasks so that each write blocks; 
that way you won't build up high back pressure.


Have some sort of blocking queue where you can only put up to (as an 
example) 500 tasks, with 4 to 8 writer threads consuming from that queue. 
Here is a brief description of what I recommend:


 * RW = N (in this case 3) so that your write operation blocks and
   doesn't leave asynchronous tasks pending for the Riak cluster.
 * Don't exceed, say, 8 writers (I have been there; more threads won't
   really help, and sometimes 4 will just write faster than 8).
 * Have a bounded blocking queue (for example, max size = 500) where you
   schedule the tasks which the writer threads consume.

Try that and see if it helps; I'm quite certain that with such a 
mechanism the load will be consistent and your Riak nodes should never crash.
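
A minimal Python sketch of that bounded-queue approach (the 500-task limit, 8
writers and w = 3 come from the advice above; host, bucket name and the
source_of_tiles() generator are placeholders):

import queue
import threading
import riak

client = riak.RiakClient(host='127.0.0.1', pb_port=8087)
bucket = client.bucket('tiles')

tasks = queue.Queue(maxsize=500)   # bounded: put() blocks once 500 tasks are queued
STOP = object()

def writer():
    while True:
        item = tasks.get()
        if item is STOP:
            break
        key, payload = item
        obj = bucket.new(key, encoded_data=payload, content_type='image/png')
        obj.store(w=3)             # w = n_val, so the call waits for all replicas
        tasks.task_done()

threads = [threading.Thread(target=writer) for _ in range(8)]
for t in threads:
    t.start()

for key, payload in source_of_tiles():   # placeholder generator over the image files
    tasks.put((key, payload))

for _ in threads:
    tasks.put(STOP)
for t in threads:
    t.join()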


Best regards,

Guido.

On 06/09/16 16:42, Travis Kirstine wrote:


Thank Alexander, we’re using HAproxy

*From:*Alexander Sicular [mailto:sicul...@basho.com]
*Sent:* September-06-16 11:09 AM
*To:* Travis Kirstine <tkirst...@firstbasesolutions.com>
*Cc:* Magnus Kessler <mkess...@basho.com>; riak-users@lists.basho.com
*Subject:* Re: speeding up bulk loading

Hi Travis,

I also want to confirm that you are spreading your load amongst all 
nodes in the cluster. You should be connecting your C client to Riak 
via a proxy like nginx/HAproxy/F5 [0]. The proxy will do a round 
robin/least connections distribution to all nodes in the cluster. This 
will greatly increase performance if you are not already doing it.


-alexander

[0] http://docs.basho.com/riak/kv/2.1.4/configuring/load-balancing-proxy/




Alexander Sicular

Solutions Architect

Basho Technologies
9175130679

@siculars

On Wed, Aug 31, 2016 at 10:41 AM, Travis Kirstine 
<tkirst...@firstbasesolutions.com 
<mailto:tkirst...@firstbasesolutions.com>> wrote:


Magnus

Thanks for your reply.  We’re are using the riack C client library
for riak (https://github.com/trifork/riack) which is used within
an application called MapCache to store 256x256 px images with a
corresponding key within riak.  Currently we have 75 million
images to transfer from disk into riak which is being done
concurrently.  Periodically this transfer process will crash

Riak is setup using n=3 on 5 nodes with a leveldb backend.  Each
server has 45GB of memory and 16 cores with  standard hard
drives.  We made no significant modification to the riak.conf
except upping the leveldb.maximum_memory.percent to 70 and
tweeking the sysctl.conf as follows

vm.swappiness = 0

net.ipv4.tcp_max_syn_backlog = 4

net.core.somaxconn = 4

net.core.wmem_default = 8388608

net.core.rmem_default = 8388608

net.ipv4.tcp_sack = 1

net.ipv4.tcp_window_scaling = 1

net.ipv4.tcp_fin_timeout = 15

net.ipv4.tcp_keepalive_intvl = 30

net.ipv4.tcp_tw_reuse = 1

net.ipv4.tcp_moderate_rcvbuf = 1

# Increase the open file limit

# fs.file-max = 65536 # current setting

I have seen this error in the logs

2016-08-30 22:26:07.180 [error] <0.20777.512> CRASH REPORT Process
<0.20777.512> with 0 neighbours crashed with reason: no function
clause matching
webmachine_request:peer_from_peername({error,enotconn},

{webmachine_request,{wm_reqstate,#Port<0.2817336>,[],undefined,undefined,undefined,{wm_reqdata,'GET',...},...}})
line 150

Regards

*From:*Magnus Kessler [mailto:mkess...@basho.com
<mailto:mkess...@basho.com>]
*Sent:* August-31-16 4:08 AM
*To:* Travis Kirstine <tkirst...@firstbasesolutions.com
<mailto:tkirst...@firstbasesolutions.com>>
*Cc:* riak-users@lists.basho.com <mailto:riak-users@lists.basho.com>
*Subject:* Re: speeding up bulk loading

On 26 August 2016 at 22:20, Travis Kirstine
<tkirst...@firstbasesolutions.com
<mailto:tkirst...@firstbasesolutions.com>> wrote:

Is there any way to speed up bulk loading?  I wondering if I
should be tweeking the erlang, aae or other config options?

Hi Travis,

Excuse the late reply; your message had been stuck in the
moderation queue. Please consider subscribing to this list.

Without knowing more about how you perform bulk uploads, it's
difficult to recommend any changes. Are you using the HTTP REST
API or one of the client libraries, which use protocol buffers by
default? What concerns do you have about the upload perf

RE: speeding up bulk loading

2016-09-06 Thread Travis Kirstine
Thanks Alexander, we're using HAproxy

From: Alexander Sicular [mailto:sicul...@basho.com]
Sent: September-06-16 11:09 AM
To: Travis Kirstine <tkirst...@firstbasesolutions.com>
Cc: Magnus Kessler <mkess...@basho.com>; riak-users@lists.basho.com
Subject: Re: speeding up bulk loading

Hi Travis,

I also want to confirm that you are spreading your load amongst all nodes in 
the cluster. You should be connecting your C client to Riak via a proxy like 
nginx/HAproxy/F5 [0]. The proxy will do a round robin/least connections 
distribution to all nodes in the cluster. This will greatly increase 
performance if you are not already doing it.

-alexander

[0] http://docs.basho.com/riak/kv/2.1.4/configuring/load-balancing-proxy/




Alexander Sicular
Solutions Architect
Basho Technologies
9175130679
@siculars

On Wed, Aug 31, 2016 at 10:41 AM, Travis Kirstine 
<tkirst...@firstbasesolutions.com<mailto:tkirst...@firstbasesolutions.com>> 
wrote:
Magnus

Thanks for your reply.  We're using the riack C client library for riak 
(https://github.com/trifork/riack) which is used within an application called 
MapCache to store 256x256 px images with a corresponding key within riak.  
Currently we have 75 million images to transfer from disk into riak which is 
being done concurrently.  Periodically this transfer process will crash

Riak is set up using n=3 on 5 nodes with a LevelDB backend.  Each server has 
45GB of memory and 16 cores with standard hard drives.  We made no significant 
modification to the riak.conf except upping the leveldb.maximum_memory.percent 
to 70 and tweaking the sysctl.conf as follows

vm.swappiness = 0
net.ipv4.tcp_max_syn_backlog = 4
net.core.somaxconn = 4
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_moderate_rcvbuf = 1
# Increase the open file limit
# fs.file-max = 65536 # current setting

I have seen this error in the logs
2016-08-30 22:26:07.180 [error] <0.20777.512> CRASH REPORT Process 
<0.20777.512> with 0 neighbours crashed with reason: no function clause 
matching webmachine_request:peer_from_peername({error,enotconn}, 
{webmachine_request,{wm_reqstate,#Port<0.2817336>,[],undefined,undefined,undefined,{wm_reqdata,'GET',...},...}})
 line 150

Regards

From: Magnus Kessler [mailto:mkess...@basho.com<mailto:mkess...@basho.com>]
Sent: August-31-16 4:08 AM
To: Travis Kirstine 
<tkirst...@firstbasesolutions.com<mailto:tkirst...@firstbasesolutions.com>>
Cc: riak-users@lists.basho.com<mailto:riak-users@lists.basho.com>
Subject: Re: speeding up bulk loading

On 26 August 2016 at 22:20, Travis Kirstine 
<tkirst...@firstbasesolutions.com<mailto:tkirst...@firstbasesolutions.com>> 
wrote:
Is there any way to speed up bulk loading?  I wondering if I should be tweeking 
the erlang, aae or other config options?



Hi Travis,

Excuse the late reply; your message had been stuck in the moderation queue. 
Please consider subscribing to this list.

Without knowing more about how you perform bulk uploads, it's difficult to 
recommend any changes. Are you using the HTTP REST API or one of the client 
libraries, which use protocol buffers by default? What concerns do you have 
about the upload performance? Please let us know a bit more about your setup.

Kind Regards,

Magnus


--
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431

___
riak-users mailing list
riak-users@lists.basho.com<mailto:riak-users@lists.basho.com>
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: speeding up bulk loading

2016-09-06 Thread Alexander Sicular
Hi Travis,

I also want to confirm that you are spreading your load amongst all nodes
in the cluster. You should be connecting your C client to Riak via a proxy
like nginx/HAproxy/F5 [0]. The proxy will do a round robin/least
connections distribution to all nodes in the cluster. This will greatly
increase performance if you are not already doing it.

-alexander

[0] http://docs.basho.com/riak/kv/2.1.4/configuring/load-balancing-proxy/
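
If you don't already have a config handy, a minimal sketch of a TCP listener
for the protocol buffer port might look like the following. The port, server
names and addresses are placeholders, not taken from this thread; see [0] for
the full recommended configuration:

listen riak_pb
    bind 0.0.0.0:8087
    mode tcp
    balance leastconn
    option tcplog
    server riak1 10.0.0.1:8087 check
    server riak2 10.0.0.2:8087 check
    server riak3 10.0.0.3:8087 check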




Alexander Sicular
Solutions Architect
Basho Technologies
9175130679
@siculars

On Wed, Aug 31, 2016 at 10:41 AM, Travis Kirstine <
tkirst...@firstbasesolutions.com> wrote:

> Magnus
>
>
>
> Thanks for your reply.  We’re are using the riack C client library for
> riak (https://github.com/trifork/riack) which is used within an
> application called MapCache to store 256x256 px images with a corresponding
> key within riak.  Currently we have 75 million images to transfer from disk
> into riak which is being done concurrently.  Periodically this transfer
> process will crash
>
>
>
> Riak is setup using n=3 on 5 nodes with a leveldb backend.  Each server
> has 45GB of memory and 16 cores with  standard hard drives.  We made no
> significant modification to the riak.conf except upping the
> leveldb.maximum_memory.percent to 70 and tweeking the sysctl.conf as follows
>
>
>
> vm.swappiness = 0
>
> net.ipv4.tcp_max_syn_backlog = 4
>
> net.core.somaxconn = 4
>
> net.core.wmem_default = 8388608
>
> net.core.rmem_default = 8388608
>
> net.ipv4.tcp_sack = 1
>
> net.ipv4.tcp_window_scaling = 1
>
> net.ipv4.tcp_fin_timeout = 15
>
> net.ipv4.tcp_keepalive_intvl = 30
>
> net.ipv4.tcp_tw_reuse = 1
>
> net.ipv4.tcp_moderate_rcvbuf = 1
>
> # Increase the open file limit
>
> # fs.file-max = 65536 # current setting
>
>
>
> I have seen this error in the logs
>
> 2016-08-30 22:26:07.180 [error] <0.20777.512> CRASH REPORT Process
> <0.20777.512> with 0 neighbours crashed with reason: no function clause
> matching webmachine_request:peer_from_peername({error,enotconn},
> {webmachine_request,{wm_reqstate,#Port<0.2817336>,[],
> undefined,undefined,undefined,{wm_reqdata,'GET',...},...}}) line 150
>
>
>
> Regards
>
>
>
> *From:* Magnus Kessler [mailto:mkess...@basho.com]
> *Sent:* August-31-16 4:08 AM
> *To:* Travis Kirstine <tkirst...@firstbasesolutions.com>
> *Cc:* riak-users@lists.basho.com
> *Subject:* Re: speeding up bulk loading
>
>
>
> On 26 August 2016 at 22:20, Travis Kirstine <tkirstine@firstbasesolutions.
> com> wrote:
>
> Is there any way to speed up bulk loading?  I wondering if I should be
> tweeking the erlang, aae or other config options?
>
>
>
>
>
>
>
> Hi Travis,
>
>
>
> Excuse the late reply; your message had been stuck in the moderation
> queue. Please consider subscribing to this list.
>
>
>
> Without knowing more about how you perform bulk uploads, it's difficult to
> recommend any changes. Are you using the HTTP REST API or one of the client
> libraries, which use protocol buffers by default? What concerns do you have
> about the upload performance? Please let us know a bit more about your
> setup.
>
>
>
> Kind Regards,
>
>
>
> Magnus
>
>
>
>
>
> --
>
> Magnus Kessler
>
> Client Services Engineer
>
> Basho Technologies Limited
>
>
>
> Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Spaces in the search string

2016-09-06 Thread Jason Voegele
Hi Sean,

Have you tried escaping the space in your query?

http://stackoverflow.com/questions/10023133/solr-wildcard-query-with-whitespace
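
With the Erlang client that would look something like the untested sketch
below. The backslash escape is the approach from the Stack Overflow answer
above, the index name is taken from your error log, and whether the wildcard
actually matches depends on how body_s is indexed:

%% "\\ " in the binary yields a literal backslash-space, so Solr receives
%% body_s:*test\ str* and treats the space as part of the term.
Query = <<"body_s:*test\\ str*">>,
{ok, Results} = riakc_pb_socket:search(Pid, <<"crm_db.campaign_index">>, Query).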


> On Sep 5, 2016, at 6:24 PM, sean mcevoy  wrote:
> 
> Hi List,
> 
> We have a solr index where we store something like:
> <<"{\"key_s\":\"ID\",\"body_s\":\"some test string\"}">>}],
> 
> Then we try to do a riakc_pb_socket:search with the pattern:
> <<"body_s:*test str*">>
> 
> The request will fail with an error message telling us to check the logs and 
> in there we find:
> 
> 2016-09-05 13:37:29.271 [error] <0.12067.10>@yz_pb_search:maybe_process:107 
> {solr_error,{400,"http://localhost:10014/internal_solr/crm_db.campaign_index/select
>  
> ",<<"{\"error\":{\"msg\":\"no
>  field name specified in query and no default specified via 'df' 
> param\",\"code\":400}}\n">>}} 
> [{yz_solr,search,3,[{file,"src/yz_solr.erl"},{line,284}]},{yz_pb_search,maybe_process,3,[{file,"src/yz_pb_search.erl"},{line,78}]},{riak_api_pb_server,process_message,4,[{file,"src/riak_api_pb_server.erl"},{line,388}]},{riak_api_pb_server,connected,2,[{file,"src/riak_api_pb_server.erl"},{line,226}]},{riak_api_pb_server,decode_buffer,2,[{file,"src/riak_api_pb_server.erl"},{line,364}]},{gen_fsm,handle_msg,7,[{file,"gen_fsm.erl"},{line,505}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]
> 
> 
> Through experiment I've figured out that it doesn't like the space as it 
> seems to think the part of the search string after that space is a new key to 
> search for. Which seems fair enough.
> 
> Anyone know of a work-around? Or am I formatting my request incorrectly?
> 
> Thanks in advance.
> //Sean.
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com





___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Need help with Riak-KV (2.1.4) certificate based authentication using Java client

2016-08-31 Thread Luke Bakken
Kyle -

Verify return code: 19 (self signed certificate in certificate chain)

Since your server cert is self-signed, there's not much more that can
be done at this point I believe. My security tests use a dedicated CA
where the Root cert is available for validation
(https://github.com/basho/riak-client-tools/tree/master/test-ca)

--
Luke Bakken
Engineer
lbak...@basho.com

On Wed, Aug 31, 2016 at 3:11 PM, Nguyen, Kyle  wrote:
> Hi Luke,
>
> I am getting the following information:
>
> Verify return code: 19 (self signed certificate in certificate chain)

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


RE: Need help with Riak-KV (2.1.4) certificate based authentication using Java client

2016-08-31 Thread Nguyen, Kyle
ert - www.getacert.com
 1 s:/C=US/ST=Washington/L=Seattle/O=getaCert - www.getacert.com
   i:/C=US/ST=Washington/L=Seattle/O=getaCert - www.getacert.com
---
Server certificate
-BEGIN CERTIFICATE-
MIIDHDCCAgSgAwIBAgICE3cwDQYJKoZIhvcNAQELBQAwWjELMAkGA1UEBhMCVVMx
EzARBgNVBAgTCldhc2hpbmd0b24xEDAOBgNVBAcTB1NlYXR0bGUxJDAiBgNVBAoT
G2dldGFDZXJ0IC0gd3d3LmdldGFjZXJ0LmNvbTAeFw0xNjA4MjIyMjQyNDBaFw0x
NjEwMjEyMjQyNDBaMBkxFzAVBgNVBAMMDnJpYWtAMTI3LjAuMC4xMIIBIjANBgkq
hkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAiWEXAA193kK/9jVx77/v3grXazowaYD4
oUc78Sk6CfAc8kaRlyrIyvZk+yz8xmnZWhL/UVlEtYkXRTjJ3c8RCCF332TW5C4L
VJfdh+l0yXOfB4dsmzfJ9S67PMT6pwGIoj/4khhbHuLKjuCwo8iPjZBs2m1agxwH
XFBAZx9NOBgdCKm/NDK3qsx60CaAIe4l8PRVyV6WKYMyt9m3+H5vEXz46907Bbku
8gspGvUjL51xpXeev8osNLFrEAOxHRYjEjzlZVqroxwdBfv00LCh+A/scaWpJ5bi
BIFQ9F1RVTIuEIfsjSyfXO/e+ZYpJSSrfwFWv2eSrDQPlepQFapyDQIDAQABoy0w
KzAJBgNVHRMEAjAAMBEGCWCGSAGG+EIBAQQEAwIE8DALBgNVHQ8EBAMCBSAwDQYJ
KoZIhvcNAQELBQADggEBAGh6mMvE3QizwNQGyL5f4ynegLaR7hE+Td2PaEuty/2t
I2y4aCkKV+R/TTZDkFpZ+Mv3ZZyfzECrEdeGmSMqRbYMD/uHTiMZGBjqcrsVpp5U
BtdrIWRkJ4kMhyVUY/gp6rYTomqJWcr03w0kI9hBJUYpJ7To21eZGL0Wqz8daFRD
QaoHwPJFe2qAaco+lJqMc/8hwAuVMJ1+Tn34fWU6tUYPSBosvzZzMR90jfVK7AGF
GY/5cu+HbDwZlACHTp9XDJrR2xtLA8xC1ZtUULBG0CIQUpt5fixjdI4g4nORAuOd
3vVTd+vRDilYcpFiUfgZ2TkzJzY1hElNBFM2XNwZTw0=
-END CERTIFICATE-
subject=/CN=riak@127.0.0.1
issuer=/C=US/ST=Washington/L=Seattle/O=getaCert - www.getacert.com
---
Acceptable client certificate CA names
/C=US/ST=Washington/L=Seattle/O=getaCert - www.getacert.com
---
SSL handshake has read 2123 bytes and written 665 bytes
---
New, TLSv1/SSLv3, Cipher is AES128-SHA256
Server public key is 2048 bit
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
SSL-Session:
Protocol  : TLSv1.2
Cipher: AES128-SHA256
Session-ID: D556E6A1910D375558B3FDA7A69A16E65907336F84B002DD6AB2BD47CAD2DA3A
Session-ID-ctx:
Master-Key: 
A1B04B4C00A411B47CE8F0A5EDE4E72448E109D9549246E814BEC1B997DC69C2599D61A904340B5185DD4EC798D66729
Key-Arg   : None
PSK identity: None
PSK identity hint: None
SRP username: None
Start Time: 1472681389
Timeout   : 300 (sec)
Verify return code: 19 (self signed certificate in certificate chain)
---


-Original Message-
From: Luke Bakken [mailto:lbak...@basho.com]
Sent: Tuesday, August 30, 2016 2:21 PM
To: Nguyen, Kyle
Cc: Riak Users
Subject: Re: Need help with Riak-KV (2.1.4) certificate based authentication 
using Java client

This command will show the handshake used for HTTPS. It will show if the 
server's certificate (the same one used for TLS) can be validated.

Using "openssl s_client" is a good way to start diagnosing what's actually 
happening when SSL/TLS is enabled in Riak.

--
Luke Bakken
Engineer
lbak...@basho.com

On Tue, Aug 30, 2016 at 2:18 PM, Nguyen, Kyle <kyle.ngu...@philips.com> wrote:
> Hi Luke,
>
> I am using TLS for protocol buffer - not sure if you're thinking of HTTP only.
>
> Thanks
>
> -Kyle-
>
> -Original Message-
> From: Luke Bakken [mailto:lbak...@basho.com]
> Sent: Tuesday, August 30, 2016 2:14 PM
> To: Nguyen, Kyle
> Cc: Riak Users
> Subject: Re: Need help with Riak-KV (2.1.4) certificate based
> authentication using Java client
>
> Kyle,
>
> I would be interested to see the output of this command run on the same 
> server as your Riak node:
>
> openssl s_client -debug -connect localhost:8098
>
> Please replace "8098" with the HTTPS port used in this configuration setting 
> in your /etc/riak.conf file:
>
> listener.https.internal


The information contained in this message may be confidential and legally 
protected under applicable law. The message is intended solely for the 
addressee(s). If you are not the intended recipient, you are hereby notified 
that any use, forwarding, dissemination, or reproduction of this message is 
strictly prohibited and may be unlawful. If you are not the intended recipient, 
please contact the sender by return e-mail and destroy all copies of the 
original message.
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: speeding up bulk loading

2016-08-31 Thread Matthew Von-Maszewski
Travis,

I cannot address the crash error you see in the logs.  Someone else will have 
to review that problem.

I want to point you to this wiki article:  
https://github.com/basho/leveldb/wiki/riak-tuning-2

The article details how you can potentially increase Riak's throughput by 
restricting the number of schedulers (CPUs) assigned solely to Erlang.  Heavy 
bulk loading scenarios demonstrated 20 to 30% throughput gain.

Per the wiki article, you would create an (or add to your existing) 
advanced.config with the following based upon your 16 core machines:

[
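  %% give Erlang 14 of the 16 schedulers, leaving 2 cores for leveldb's
  %% background threads (see the wiki article above)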
{vm_args, [{"+S", "14:14"}]}
].
Matthew
> On Aug 31, 2016, at 11:41 AM, Travis Kirstine 
> <tkirst...@firstbasesolutions.com> wrote:
> 
> Magnus
>  
> Thanks for your reply.  We’re are using the riack C client library for riak 
> (https://github.com/trifork/riack <https://github.com/trifork/riack>) which 
> is used within an application called MapCache to store 256x256 px images with 
> a corresponding key within riak.  Currently we have 75 million images to 
> transfer from disk into riak which is being done concurrently.  Periodically 
> this transfer process will crash
>  
> Riak is setup using n=3 on 5 nodes with a leveldb backend.  Each server has 
> 45GB of memory and 16 cores with  standard hard drives.  We made no 
> significant modification to the riak.conf except upping the 
> leveldb.maximum_memory.percent to 70 and tweeking the sysctl.conf as follows
>  
> vm.swappiness = 0
> net.ipv4.tcp_max_syn_backlog = 4
> net.core.somaxconn = 4
> net.core.wmem_default = 8388608
> net.core.rmem_default = 8388608
> net.ipv4.tcp_sack = 1
> net.ipv4.tcp_window_scaling = 1
> net.ipv4.tcp_fin_timeout = 15
> net.ipv4.tcp_keepalive_intvl = 30
> net.ipv4.tcp_tw_reuse = 1
> net.ipv4.tcp_moderate_rcvbuf = 1
> # Increase the open file limit
> # fs.file-max = 65536 # current setting
>  
> I have seen this error in the logs
> 2016-08-30 22:26:07.180 [error] <0.20777.512> CRASH REPORT Process 
> <0.20777.512> with 0 neighbours crashed with reason: no function clause 
> matching webmachine_request:peer_from_peername({error,enotconn}, 
> {webmachine_request,{wm_reqstate,#Port<0.2817336>,[],undefined,undefined,undefined,{wm_reqdata,'GET',...},...}})
>  line 150
>  
> Regards
>  
> From: Magnus Kessler [mailto:mkess...@basho.com <mailto:mkess...@basho.com>] 
> Sent: August-31-16 4:08 AM
> To: Travis Kirstine <tkirst...@firstbasesolutions.com 
> <mailto:tkirst...@firstbasesolutions.com>>
> Cc: riak-users@lists.basho.com <mailto:riak-users@lists.basho.com>
> Subject: Re: speeding up bulk loading
>  
> On 26 August 2016 at 22:20, Travis Kirstine <tkirst...@firstbasesolutions.com 
> <mailto:tkirst...@firstbasesolutions.com>> wrote:
> Is there any way to speed up bulk loading?  I wondering if I should be 
> tweeking the erlang, aae or other config options?
>  
>  
>  
> Hi Travis,
>  
> Excuse the late reply; your message had been stuck in the moderation queue. 
> Please consider subscribing to this list.
>  
> Without knowing more about how you perform bulk uploads, it's difficult to 
> recommend any changes. Are you using the HTTP REST API or one of the client 
> libraries, which use protocol buffers by default? What concerns do you have 
> about the upload performance? Please let us know a bit more about your setup.
>  
> Kind Regards,
>  
> Magnus
>  
>  
> -- 
> Magnus Kessler
> Client Services Engineer
> Basho Technologies Limited
>  
> Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
> ___
> riak-users mailing list
> riak-users@lists.basho.com <mailto:riak-users@lists.basho.com>
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com 
> <http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


RE: speeding up bulk loading

2016-08-31 Thread Travis Kirstine
Magnus

Thanks for your reply.  We’re using the riack C client library for riak 
(https://github.com/trifork/riack), which is used within an application called 
MapCache to store 256x256 px images with a corresponding key within riak.  
Currently we have 75 million images to transfer from disk into riak, which is 
being done concurrently.  Periodically this transfer process will crash.

Riak is set up using n=3 on 5 nodes with a leveldb backend.  Each server has 
45GB of memory and 16 cores with standard hard drives.  We made no significant 
modification to the riak.conf except upping the leveldb.maximum_memory.percent 
to 70 and tweaking the sysctl.conf as follows

vm.swappiness = 0
net.ipv4.tcp_max_syn_backlog = 4
net.core.somaxconn = 4
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_moderate_rcvbuf = 1
# Increase the open file limit
# fs.file-max = 65536 # current setting

I have seen this error in the logs
2016-08-30 22:26:07.180 [error] <0.20777.512> CRASH REPORT Process 
<0.20777.512> with 0 neighbours crashed with reason: no function clause 
matching webmachine_request:peer_from_peername({error,enotconn}, 
{webmachine_request,{wm_reqstate,#Port<0.2817336>,[],undefined,undefined,undefined,{wm_reqdata,'GET',...},...}})
 line 150

Regards

From: Magnus Kessler [mailto:mkess...@basho.com]
Sent: August-31-16 4:08 AM
To: Travis Kirstine <tkirst...@firstbasesolutions.com>
Cc: riak-users@lists.basho.com
Subject: Re: speeding up bulk loading

On 26 August 2016 at 22:20, Travis Kirstine 
<tkirst...@firstbasesolutions.com<mailto:tkirst...@firstbasesolutions.com>> 
wrote:
Is there any way to speed up bulk loading?  I wondering if I should be tweeking 
the erlang, aae or other config options?



Hi Travis,

Excuse the late reply; your message had been stuck in the moderation queue. 
Please consider subscribing to this list.

Without knowing more about how you perform bulk uploads, it's difficult to 
recommend any changes. Are you using the HTTP REST API or one of the client 
libraries, which use protocol buffers by default? What concerns do you have 
about the upload performance? Please let us know a bit more about your setup.

Kind Regards,

Magnus


--
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


RE: Using Riak CS with Hadoop

2016-08-31 Thread Valenti, Anthony
I did try it with the command below, followed by the error.  I am not sure how 
to specify that it should go to our S3 cluster, but I tried putting our endpoint 
host after the @ sign, followed by the bucket.  It goes to Amazon, which seems 
to be the default.  I know it isn't an issue with Riak and that technically it 
should work, but I was looking for someone who might have tried this and gotten 
it to work.  It seems like a configuration issue in Hadoop more than anything, 
so I am asking there also.

hadoop distcp -update /user/test/client/part-m-0 
s3a://access_key:secret_...@s3.test.com/test-bucket

16/08/31 09:37:38 INFO s3a.S3AFileSystem: Caught an AmazonServiceException, 
which means your request made it to Amazon S3, but was rejected with an error 
response for some reason.
16/08/31 09:37:38 INFO s3a.S3AFileSystem: Error Message: Status Code: 403, AWS 
Service: Amazon S3, AWS Request ID: 94AE01A87CE82C24, AWS Error Code: null, AWS 
Error Message: Forbidden
16/08/31 09:37:38 INFO s3a.S3AFileSystem: HTTP Status Code: 403
16/08/31 09:37:38 INFO s3a.S3AFileSystem: AWS Error Code: null
16/08/31 09:37:38 INFO s3a.S3AFileSystem: Error Type: Client
16/08/31 09:37:38 INFO s3a.S3AFileSystem: Request ID: 94AE01A87CE82C24
16/08/31 09:37:38 INFO s3a.S3AFileSystem: Class Name: 
com.amazonaws.services.s3.model.AmazonS3Exception
16/08/31 09:37:38 ERROR tools.DistCp: Invalid arguments:
com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 403, AWS 
Service: Amazon S3, AWS Request ID: 94AE01A87CE82C24, AWS Error Code: null, AWS 
Error Message: Forbidden, S3 Extended Request ID: 
qYOsZBJ5O3cFaKpbdKEpbou0Zx5hSgsHJHum/kPvjZu6CAvJ2jqVoUVUUXpquEtw
at 
com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
at 
com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
at 
com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:976)
at 
com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:956)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:892)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:77)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1424)
at org.apache.hadoop.tools.DistCp.setTargetPathExists(DistCp.java:217)
at org.apache.hadoop.tools.DistCp.run(DistCp.java:116)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.tools.DistCp.main(DistCp.java:430)
Invalid arguments: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: 
94AE01A87CE82C24, AWS Error Code: null, AWS Error Message: Forbidden
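
For completeness, the s3a connector can usually be pointed at a non-Amazon
endpoint through core-site.xml instead of the URL. A hedged sketch of the
properties I believe are involved (these are standard s3a settings, but
support varies by Hadoop version - fs.s3a.path.style.access in particular only
exists in newer releases - and the endpoint and credentials are placeholders):

<property>
  <name>fs.s3a.endpoint</name>
  <value>s3.test.com</value>
</property>
<property>
  <name>fs.s3a.access.key</name>
  <value>YOUR_ACCESS_KEY</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>YOUR_SECRET_KEY</value>
</property>
<property>
  <name>fs.s3a.path.style.access</name>
  <value>true</value>
</property>

With the endpoint set this way, the distcp target could then be written as
s3a://test-bucket/ without embedding credentials in the URL.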


-Original Message-
From: Luke Bakken [mailto:lbak...@basho.com] 
Sent: Wednesday, August 31, 2016 1:18 PM
To: Valenti, Anthony
Cc: riak-users@lists.basho.com
Subject: Re: Using Riak CS with Hadoop

Riak CS provides an S3 capable API, so theoretically it could work.
Have you tried? If so and you're having issues, follow up here.

--
Luke Bakken
Engineer
lbak...@basho.com

On Wed, Aug 31, 2016 at 7:38 AM, Valenti, Anthony <anthony.vale...@inmar.com> 
wrote:
> Has anyone setup Hadoop to be able use Raik CS as an S3 
> source/destination instead of or in addition to Amazon S3?  Hadoop 
> assumes that it should go to Amazon S3 by default.  Specifically, I am 
> trying to use Hadoop distcp to copy files to Riak CS.



Inmar Confidentiality Note:  This e-mail and any attachments are confidential 
and intended to be viewed and used solely by the intended recipient.  If you 
are not the intended recipient, be aware that any disclosure, dissemination, 
distribution, copying or use of this e-mail or any attachment is prohibited.  
If you received this e-mail in error, please notify us immediately by returning 
it to the sender and delete this copy and all attachments from your system and 
destroy any printed copies.  Thank you for your cooperation.

Notice of Protected Rights:  The removal of any copyright, trademark, or 
proprietary legend contained in this e-mail or any attachment is prohibited 
without the express, written permission of Inmar, Inc.  Furthermore, the 
intended recipient must maintain all copyright notices, trademarks, and 
proprietary legends within this e-mail and any attachments in their original 
form and location if the e-mail or any attachments are reproduced, printed or 
distributed.



___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: [ANN] Riak TS v1.4 is released

2016-08-31 Thread Chris.Johnson
Hey Eric,

Very cool. Thank you!

From: riak-users <riak-users-boun...@lists.basho.com> on behalf of Eric Johnson 
<ejohn...@basho.com>
Date: Monday, August 29, 2016 at 8:34 AM
To: "riak-users@lists.basho.com" <riak-users@lists.basho.com>
Subject: Re: [ANN] Riak TS v1.4 is released

Hey Chris, thanks for the heads up.

We’re looking into issues #2 and #4, but we went ahead and fixed #1 and #3 
here<http://docs.basho.com/riak/ts/1.4.0/setup/upgrading/>. Let us know if 
anything else comes up!

On Fri, Aug 26, 2016 at 2:03 PM, 
<chris.john...@vaisala.com<mailto:chris.john...@vaisala.com>> wrote:
We ran this rolling upgrade yesterday and it seemed to mostly work!

A couple problems we encountered:

1.   The command under item 6 on this page 
http://docs.basho.com/riak/ts/1.4.0/setup/upgrading/ did not work for us. We 
had to replace “riak_ts” with “riak_kv”.

2.   There were some transient errors from our clients about the server not 
supporting CRDTs (specifically sets):

a.   Riak::ProtobuffsErrorResponse: Expected success from Riak but received 
0. `set` is not a supported type

3.   The upgrade instructions weren’t clear about the fact that you need to 
update your riak.conf if you have any non-default settings before doing the 
“riak start” step.

4.   There was a unit change for riak_kv.query.timeseries.timeout 
(previously timeseries_query_timeout_ms) in riak.conf that wasn’t clear to us 
from the upgrade instructions. We were under the impression that only the names 
had changed.

chris

From: riak-users 
<riak-users-boun...@lists.basho.com<mailto:riak-users-boun...@lists.basho.com>> 
on behalf of Pavel Hardak <pa...@basho.com<mailto:pa...@basho.com>>
Date: Thursday, August 25, 2016 at 7:27 PM
To: "riak-users@lists.basho.com<mailto:riak-users@lists.basho.com>" 
<riak-users@lists.basho.com<mailto:riak-users@lists.basho.com>>
Cc: Product Team <prod...@basho.com<mailto:prod...@basho.com>>
Subject: [ANN] Riak TS v1.4 is released


Dear Riak users,


Earlier today we have released Riak TS v1.4. This is well-rounded release, 
combining several new features, number of incremental improvements and bug 
fixes. Please upgrade your Riak TS clusters or if you still did not, take it 
for the spin. Any feedback, good or bad, is welcome. We do listen.



Release notes: http://docs.basho.com/riak/ts/1.4.0/releasenotes/


The new and improved features are:

•  (NEW) GROUP BY statement can be used in SELECT queries. This allows you to 
condense the result set using one of Riak TS aggregation functions. [1] (A 
hedged example sketch follows this list.)

•  (NEW) Users can now use textual date/time representations (per ISO 8601) in 
Riak TS SQL statements, without the need to translate to milliseconds before 
issuing the commands. [2]

•  (NEW) Global object expiration (a.k.a. TTL - time to live) at the backend 
(LevelDB) level allows efficient removal of aged data from Riak TS. [3]

•  (NEW) Added SHOW TABLES command to list Riak TS tables from Riak Shell. This 
is in addition to riak-admin ‘bucket-type list’, which also works. [4]

•  (NEW) Added official support for Debian 8 Jessie, dropped Debian 6. [6]

•  (IMPROVED) DESCRIBE  command now returns quantum information, in 
addition to the columns names, types and primary key. [5]

•  (IMPROVED) Fine grained security using Riak TS specific permissions. [7]

•  (IMPROVED) Rolling upgrade and downgrade to and from Riak TS 1.3.1. [8]

•  … many documentation and performance improvements.
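
As a quick, hedged illustration of the first two items above: the table and
values are borrowed from the GeoCheckin example in the Riak TS docs, not from
this release note, and the exact GROUP BY and time-literal rules are described
in [1] and [2]:

SELECT weather, AVG(temperature)
FROM GeoCheckin
WHERE region = 'South Atlantic' AND state = 'SC'
  AND time >= '2016-08-20 00:00:00' AND time < '2016-08-21 00:00:00'
GROUP BY weather;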


[1] http://docs.basho.com/riak/ts/1.4.0/using/querying/select/group-by/

[2] http://docs.basho.com/riak/ts/1.4.0/using/timerepresentations/

[3] http://docs.basho.com/riak/ts/1.4.0/using/global-object-expiration/

[4] http://docs.basho.com/riak/ts/1.4.0/using/querying/show-tables/

[5] http://docs.basho.com/riak/ts/1.4.0/downloads/#debian

[6] http://docs.basho.com/riak/ts/1.4.0/using/querying/describe/

[7] http://docs.basho.com/riak/ts/1.4.0/using/security/user-management

[8] http://docs.basho.com/riak/ts/1.4.0/setup/upgrading/


More info:

•  Riak® TS<http://basho.com/products/riak-ts/>

•  Documentation<http://docs.basho.com/>

•  Basho Blog<http://basho.com/blog/>

•  Downloads<http://docs.basho.com/riak/ts/1.4.0/downloads/>

•  Source Code<https://github.com/basho/riak/releases/tag/riak_ts-1.4.0>
Pavel Hardak,
Director of Product Management, Basho Technologies

___
riak-users mailing list
riak-users@lists.basho.com<mailto:riak-users@lists.basho.com>
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Using Riak CS with Hadoop

2016-08-31 Thread Luke Bakken
Riak CS provides an S3 capable API, so theoretically it could work.
Have you tried? If so and you're having issues, follow up here.

--
Luke Bakken
Engineer
lbak...@basho.com

On Wed, Aug 31, 2016 at 7:38 AM, Valenti, Anthony
 wrote:
> Has anyone setup Hadoop to be able use Raik CS as an S3 source/destination
> instead of or in addition to Amazon S3?  Hadoop assumes that it should go to
> Amazon S3 by default.  Specifically, I am trying to use Hadoop distcp to
> copy files to Riak CS.

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: speeding up bulk loading

2016-08-31 Thread Magnus Kessler
On 26 August 2016 at 22:20, Travis Kirstine <
tkirst...@firstbasesolutions.com> wrote:

> Is there any way to speed up bulk loading?  I wondering if I should be
> tweeking the erlang, aae or other config options?
>
>
>
>
>
Hi Travis,

Excuse the late reply; your message had been stuck in the moderation queue.
Please consider subscribing to this list.

Without knowing more about how you perform bulk uploads, it's difficult to
recommend any changes. Are you using the HTTP REST API or one of the client
libraries, which use protocol buffers by default? What concerns do you have
about the upload performance? Please let us know a bit more about your
setup.

Kind Regards,

Magnus


-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Need help with Riak-KV (2.1.4) certificate based authentication using Java client

2016-08-30 Thread Luke Bakken
This command will show the handshake used for HTTPS. It will show if
the server's certificate (the same one used for TLS) can be validated.

Using "openssl s_client" is a good way to start diagnosing what's
actually happening when SSL/TLS is enabled in Riak.

--
Luke Bakken
Engineer
lbak...@basho.com

On Tue, Aug 30, 2016 at 2:18 PM, Nguyen, Kyle <kyle.ngu...@philips.com> wrote:
> Hi Luke,
>
> I am using TLS for protocol buffer - not sure if you're thinking of HTTP only.
>
> Thanks
>
> -Kyle-
>
> -Original Message-
> From: Luke Bakken [mailto:lbak...@basho.com]
> Sent: Tuesday, August 30, 2016 2:14 PM
> To: Nguyen, Kyle
> Cc: Riak Users
> Subject: Re: Need help with Riak-KV (2.1.4) certificate based authentication 
> using Java client
>
> Kyle,
>
> I would be interested to see the output of this command run on the same 
> server as your Riak node:
>
> openssl s_client -debug -connect localhost:8098
>
> Please replace "8098" with the HTTPS port used in this configuration setting 
> in your /etc/riak.conf file:
>
> listener.https.internal

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


RE: Need help with Riak-KV (2.1.4) certificate based authentication using Java client

2016-08-30 Thread Nguyen, Kyle
Hi Luke,

I am using TLS for protocol buffer - not sure if you're thinking of HTTP only.

Thanks

-Kyle-

-Original Message-
From: Luke Bakken [mailto:lbak...@basho.com]
Sent: Tuesday, August 30, 2016 2:14 PM
To: Nguyen, Kyle
Cc: Riak Users
Subject: Re: Need help with Riak-KV (2.1.4) certificate based authentication 
using Java client

Kyle,

I would be interested to see the output of this command run on the same server 
as your Riak node:

openssl s_client -debug -connect localhost:8098

Please replace "8098" with the HTTPS port used in this configuration setting in 
your /etc/riak.conf file:

listener.https.internal

--
Luke Bakken
Engineer
lbak...@basho.com


On Tue, Aug 30, 2016 at 12:01 PM, Nguyen, Kyle <kyle.ngu...@philips.com> wrote:
> Hi Luke,
>
> I believe this is not the case. The Java riak-client (version 2.0.6) that I 
> used does validate the server's cert but not checking on server's CN. If I 
> replaced getACert CA in the trustor with another unknown CA then SSL will 
> fail with "unable to find valid certification path to requested target". I 
> don't even see an option to ignore server cert validation on the client side. 
> I am wondering if you can help provide some details related to SSL 
> certification validation configuration.
>
> My riak node builder code:
> RiakNode.Builder builder = new 
> RiakNode.Builder().withRemoteAddress("127.0.0.1").withRemotePort(8087);
> builder.withAuth(username, password, trustStore, keyStore,
> keyPasswd);
>
> Thanks
>
> -Kyle-


The information contained in this message may be confidential and legally 
protected under applicable law. The message is intended solely for the 
addressee(s). If you are not the intended recipient, you are hereby notified 
that any use, forwarding, dissemination, or reproduction of this message is 
strictly prohibited and may be unlawful. If you are not the intended recipient, 
please contact the sender by return e-mail and destroy all copies of the 
original message.
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Need help with Riak-KV (2.1.4) certificate based authentication using Java client

2016-08-30 Thread Luke Bakken
Kyle,

I would be interested to see the output of this command run on the
same server as your Riak node:

openssl s_client -debug -connect localhost:8098

Please replace "8098" with the HTTPS port used in this configuration
setting in your /etc/riak.conf file:

listener.https.internal

--
Luke Bakken
Engineer
lbak...@basho.com


On Tue, Aug 30, 2016 at 12:01 PM, Nguyen, Kyle  wrote:
> Hi Luke,
>
> I believe this is not the case. The Java riak-client (version 2.0.6) that I 
> used does validate the server's cert but not checking on server's CN. If I 
> replaced getACert CA in the trustor with another unknown CA then SSL will 
> fail with "unable to find valid certification path to requested target". I 
> don't even see an option to ignore server cert validation on the client side. 
> I am wondering if you can help provide some details related to SSL 
> certification validation configuration.
>
> My riak node builder code:
> RiakNode.Builder builder = new 
> RiakNode.Builder().withRemoteAddress("127.0.0.1").withRemotePort(8087);
> builder.withAuth(username, password, trustStore, keyStore, 
> keyPasswd);
>
> Thanks
>
> -Kyle-

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


RE: Need help with Riak-KV (2.1.4) certificate based authentication using Java client

2016-08-30 Thread Nguyen, Kyle
Hi Luke,

I believe this is not the case. The Java riak-client (version 2.0.6) that I 
used does validate the server's cert, but it does not check the server's CN. If 
I replace the getACert CA in the truststore with another, unknown CA, then SSL 
will fail with "unable to find valid certification path to requested target". I 
don't even see an option to ignore server cert validation on the client side. I 
am wondering if you can help provide some details related to SSL certificate 
validation configuration.

My riak node builder code:
RiakNode.Builder builder = new 
RiakNode.Builder().withRemoteAddress("127.0.0.1").withRemotePort(8087);
builder.withAuth(username, password, trustStore, keyStore, 
keyPasswd);

Thanks

-Kyle-


-Original Message-
From: Luke Bakken [mailto:lbak...@basho.com]
Sent: Tuesday, August 30, 2016 7:14 AM
To: Nguyen, Kyle
Cc: Riak Users
Subject: Re: Need help with Riak-KV (2.1.4) certificate based authentication 
using Java client

Kyle -

The CN should be either the DNS-resolvable host name of the Riak node, or its 
IP address (without "riak@"). Then, the Java client should be configured to use 
that to connect to the node (either DNS or IP).
Without doing that, I really don't have any idea how the Java client is 
validating the server certificate during TLS handshake. Did you configure the 
client to *not* validate the server cert?

--
Luke Bakken
Engineer
lbak...@basho.com


On Mon, Aug 29, 2016 at 3:18 PM, Nguyen, Kyle <kyle.ngu...@philips.com> wrote:
> Hi Luke,
>
> The CN for client's certificate is "kyle" and the CN for riak cert 
> (ssl.certfile) is "riak@127.0.0.1" which matches the nodename in the 
> riak.conf. Riak ssl.cacertfile.pem contains the same CA (getACert) which I 
> used to sign both client and riak public keys. It appears that riak also 
> validated the client certificate following this SSL debug info. I do see *** 
> CertificateVerify (toward the end) after the client certificate is requested 
> by Riak. Please let me know if it looks right to you.


The information contained in this message may be confidential and legally 
protected under applicable law. The message is intended solely for the 
addressee(s). If you are not the intended recipient, you are hereby notified 
that any use, forwarding, dissemination, or reproduction of this message is 
strictly prohibited and may be unlawful. If you are not the intended recipient, 
please contact the sender by return e-mail and destroy all copies of the 
original message.
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak search, post schema change reindexation

2016-08-29 Thread Guillaume Boddaert

Hi Fred, thanks for your answer.

I'm using Riak 2.1 (see attached status export).
I'm working on a single cluster, and from time to time I need to update 
some of the search indexes on all nodes.
As a cloud user, I can consider buying a spare host for a few days in 
order to achieve a complete rollout.


I can understand your plan to remove a host from production while it 
rebuilds its index. From my point of view your solution can only be 
applied to a broken Solr index that needs to be rebuilt from scratch on 
a single host.
In my case, I need to reindex my documents because I have updated my Solr 
schema, which requires wiping the existing index beforehand (create a new 
index, change the bucket's index_name prop, drop the old index) on all hosts, 
since that's a bucket type property that I need to update.


Fred, can your plan really be applied to an « I want to update my 
search schema on my full cluster » scenario?


At the moment, I have already created the new index and destroyed the old one, 
but I am unable to use a slow Python script to force all items to be 
written again (and subsequently pushed to Solr), since I get regular 
timeouts on the key stream API (both protobuf and HTTP).
Is there a way to run a program inside the Riak nodes (not HTTP, not 
protobuf) to achieve this simple algorithm:


for key in bucket.stream_keys():
  obj = bucket.get(key)
  bucket.store(obj)
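
In Erlang terms I imagine something like the following, run from `riak attach` 
with the internal riak_client API. This is a rough, untested sketch: the bucket 
type and bucket name are placeholders, and list_keys over 30 million keys will 
itself be slow and expensive.

{ok, C} = riak:local_client(),
Bucket = {<<"my_type">>, <<"my_bucket">>},
{ok, Keys} = C:list_keys(Bucket),
lists:foreach(fun(Key) ->
                  %% rewriting the unchanged object pushes it back through
                  %% the put path, and so back into the Solr index
                  {ok, Obj} = C:get(Bucket, Key),
                  ok = C:put(Obj)
              end, Keys).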

I really fear that I will not be able to restore my index any time soon. I 
am not stressed out because we are not in production yet, and I still have 
plenty of time to fix this as new data arrives. But the kind of complex 
operations required by an index update really freaks me out.


Guillaume


On 29/08/2016 14:41, Fred Dushin wrote:

Hi Guillame,

A few questions.

What version of Riak?

Does the reindexing need to occur across the entire cluster, or just 
on one node?


What are the expectations about query-ability while re-indexing is 
going on?


If you can afford to take a node out of commission for query, then one 
approach would be to delete your YZ data and YZ AAE trees, and let AAE 
sync your 30 million documents from Riak.  You can increase AAE tree 
rebuild and exchange concurrency to make that occur more quickly than 
it does by default, but that will put a fairly significant load on 
that node.  Moreover, because you have deleted indexed data on one 
node, you will get inconsistent search results from Yokozuna, as the 
node being reindexed will still show up as part of a coverage plan. 
 Depending on the version of Riak, however, you may be able to 
manually remove that node from coverage plans through the Riak console 
while re-indexing is going on.  The node is still available for Riak 
get/put operations (including indexing new entries into Solr), but it 
will be excluded from any cover set when a query plan is generated.  I 
can't guarantee that this would take less than 5 days, however.


-Fred

On Aug 29, 2016, at 3:56 AM, Guillaume Boddaert 
<guilla...@lighthouse-analytics.co 
<mailto:guilla...@lighthouse-analytics.co>> wrote:


Hi,

I recently needed to alter my Riak Search schema for a bucket type 
that contains ~30 millions rows. As a result, my index was wiped 
since we are waiting for a Riak Search 2.2 feature that will sync 
Riak storage with Solr index on such an occasion.


I adapted a since script suggested by Evren Esat Özkan there 
(https://github.com/basho/yokozuna/issues/130#issuecomment-196189344). It 
is a simple python script that will stream keys and trigger a store 
action for any items. Unfortunately it failed past 178k items due to 
time out on the key stream. I calculated that this kind of 
reindexation mechanism would take up to 5 days without a crash to 
succeed.


I was wondering if there would be a pure Erlang mean to achieve a 
complete forced rewrite of every single element in my bucket type 
rather that an error prone and very long python process.


How would you guys reindex a 30 million item bucket type in a fast 
and reliable way ?


Thanks, Guillaume
___
riak-users mailing list
riak-users@lists.basho.com <mailto:riak-users@lists.basho.com>
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




riak_auth_mods_version : <<"2.1.0-0-g31b8b30">>
erlydtl_version : <<"0.7.0">>
riak_control_version : <<"2.1.2-0-gab3f924">>
cluster_info_version : <<"2.0.3-0-g76c73fc">>
yokozuna_version : <<"2.1.2-0-g3520d11">>
ibrowse_version : <<"4.0.2">>
riak_search_version : <<"2.1.1-0-gffe2113">>
merge_index_version : <<"2.0.1-0-g0c8f77c">>
riak_kv_version : <<"2.1.2-0-gf969bba">>
riak_api_version : <<"2.1.2-0-gd8d510f">>
riak_pb_version : <<"2.1.0.2-0-g620bc70">>
protobuffs_version : <<"0.8.1p5-0-gf

Re: [ANN] Riak TS v1.4 is released

2016-08-29 Thread Eric Johnson
Hey Chris, thanks for the heads up.

We’re looking into issues #2 and #4, but we went ahead and fixed #1 and #3
here . Let us know if
anything else comes up!

On Fri, Aug 26, 2016 at 2:03 PM,  wrote:

> We ran this rolling upgrade yesterday and it seemed to mostly work!
>
>
>
> A couple problems we encountered:
>
> 1.   The command under item 6 on this page
> http://docs.basho.com/riak/ts/1.4.0/setup/upgrading/ did not work for us.
> We had to replace “riak_ts” with “riak_kv”.
>
> 2.   There were some transient errors from our clients about the
> server not supporting CRDTs (specifically sets):
>
> a.   Riak::ProtobuffsErrorResponse: Expected success from Riak but
> received 0. `set` is not a supported type
>
> 3.   The upgrade instructions weren’t clear about the fact that you
> need to update your riak.conf if you have any non-default settings before
> doing the “riak start” step.
>
> 4.   There was a unit change for riak_kv.query.timeseries.timeout 
> (previously
> timeseries_query_timeout_ms) in riak.conf that wasn’t clear to us from the
> upgrade instructions. We were under the impression that only the names had
> changed.
>
>
>
> chris
>
>
>
> *From: *riak-users  on behalf of
> Pavel Hardak 
> *Date: *Thursday, August 25, 2016 at 7:27 PM
> *To: *"riak-users@lists.basho.com" 
> *Cc: *Product Team 
> *Subject: *[ANN] Riak TS v1.4 is released
>
>
>
> Dear Riak users,
>
>
>
> Earlier today we have released Riak TS v1.4. This is well-rounded release,
> combining several new features, number of incremental improvements and bug
> fixes. Please upgrade your Riak TS clusters or if you still did not, take
> it for the spin. Any feedback, good or bad, is welcome. We do listen.
>
>
>
> Release notes: http://docs.basho.com/riak/ts/1.4.0/releasenotes/
>
>
>
> The new and improved features are:
>
> ·  (NEW) GROUP BY statement can be used in SELECT queries. This allows to
> condense the result set using one of Riak TS aggregation functions. [1]
>
> ·  (NEW) Users can now use textual date/time representations (per ISO
> 8601) in Riak TS SQL statements, without the need to translate to
> milliseconds before issuing the commands. [2]
>
> ·  (NEW) Global object expiration (a.k.a. TTL - time to leave) at the
> backend (LevelDB) level allows efficiently remove aged data from Riak TS.
> [3]
>
> ·  (NEW) Added SHOW TABLES command to list Riak TS tables from Riak
> Shell. This is in addition to riak-admin ‘bucket-type list’, which also
> works. [4]
>
> ·  (NEW) Added official support for Debian 8 Jessie, dropped Debian 6. [6]
>
> ·  (IMPROVED) DESCRIBE  command now returns quantum information,
> in addition to the columns names, types and primary key. [5]
>
> ·  (IMPROVED) Fine grained security using Riak TS specific permissions.
> [7]
>
> ·  (IMPROVED) Rolling upgrade and downgrade to and from Riak TS 1.3.1.
> [8]
>
> ·  … many documentation and performance improvements.
>
>
>
> [1] http://docs.basho.com/riak/ts/1.4.0/using/querying/select/group-by/
>
> [2] http://docs.basho.com/riak/ts/1.4.0/using/timerepresentations/
>
> [3] http://docs.basho.com/riak/ts/1.4.0/using/global-object-expiration/
>
> [4] http://docs.basho.com/riak/ts/1.4.0/using/querying/show-tables/
>
> [5] *http://docs.basho.com/riak/ts/1.4.0/downloads/#debian
> *
>
> [6] *http://docs.basho.com/riak/ts/1.4.0/using/querying/describe/
> *
>
> [7] http://docs.basho.com/riak/ts/1.4.0/using/security/user-management
>
> [8] http://docs.basho.com/riak/ts/1.4.0/setup/upgrading/
>
>
>
> More info:
>
> ·  Riak® TS 
>
> ·  Documentation 
>
> ·  Basho Blog 
>
> ·  Downloads 
>
> ·  Source Code 
>
> Pavel Hardak,
>
> Director of Product Management, Basho Technologies
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak search, post schema change reindexation

2016-08-29 Thread Fred Dushin
Hi Guillame,

A few questions.

What version of Riak?

Does the reindexing need to occur across the entire cluster, or just on one 
node?

What are the expectations about query-ability while re-indexing is going on?

If you can afford to take a node out of commission for query, then one approach 
would be to delete your YZ data and YZ AAE trees, and let AAE sync your 30 
million documents from Riak.  You can increase AAE tree rebuild and exchange 
concurrency to make that occur more quickly than it does by default, but that 
will put a fairly significant load on that node.  Moreover, because you have 
deleted indexed data on one node, you will get inconsistent search results from 
Yokozuna, as the node being reindexed will still show up as part of a coverage 
plan.  Depending on the version of Riak, however, you may be able to manually 
remove that node from coverage plans through the Riak console while re-indexing 
is going on.  The node is still available for Riak get/put operations 
(including indexing new entries into Solr), but it will be excluded from any 
cover set when a query plan is generated.  I can't guarantee that this would 
take less than 5 days, however.
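
For the delete-and-let-AAE-rebuild step itself, the per-node shape is roughly
the following. This is a hedged sketch only: the paths assume the default
Linux package layout (platform_data_dir = /var/lib/riak) and should be checked
against your search.root_dir and search.anti_entropy.data_dir settings before
touching anything:

riak stop
mv /var/lib/riak/yz /var/lib/riak/yz.old
mv /var/lib/riak/yz_anti_entropy /var/lib/riak/yz_anti_entropy.old
riak start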

-Fred

> On Aug 29, 2016, at 3:56 AM, Guillaume Boddaert 
> <guilla...@lighthouse-analytics.co> wrote:
> 
> Hi,
> 
> I recently needed to alter my Riak Search schema for a bucket type that 
> contains ~30 millions rows. As a result, my index was wiped since we are 
> waiting for a Riak Search 2.2 feature that will sync Riak storage with Solr 
> index on such an occasion.
> 
> I adapted a since script suggested by Evren Esat Özkan there 
> (https://github.com/basho/yokozuna/issues/130#issuecomment-196189344 
> <https://github.com/basho/yokozuna/issues/130#issuecomment-196189344>). It is 
> a simple python script that will stream keys and trigger a store action for 
> any items. Unfortunately it failed past 178k items due to time out on the key 
> stream. I calculated that this kind of reindexation mechanism would take up 
> to 5 days without a crash to succeed. 
> 
> I was wondering if there would be a pure Erlang mean to achieve a complete 
> forced rewrite of every single element in my bucket type rather that an error 
> prone and very long python process.
> 
> How would you guys reindex a 30 million item bucket type in a fast and 
> reliable way ?
> 
> Thanks, Guillaume
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

