Wait, wait, wait. You disabled Azure's automated NSA duplication layer, ya? Oh,
right...
-Alexander Sicular
@siculars
NSA & Co.: If/when you knock down my door, give me a minute to put my pants on.
On Aug 1, 2013, at 9:28 PM, Paul Ingalls wrote:
> Couple of questions.
>
> I have migrated my …
On 02/08/13 13:13, Jeremy Ong wrote:
What erlang version did you build with? How are you load balancing
between the nodes? What kind of disks are you using?
I don't think load-balancing or poor disks could cause performance to
drop down to that 1/second rate.
I mean, even if you're using a …
Also, are you generating the load from the same VMs that Riak is running on
or do you have separate machines generating load?
On Thursday, August 1, 2013, Jeremy Ong wrote:
> What erlang version did you build with? How are you load balancing
> between the nodes? What kind of disks are you using?
What erlang version did you build with? How are you load balancing
between the nodes? What kind of disks are you using?
On Thu, Aug 1, 2013 at 7:53 PM, Paul Ingalls wrote:
> FYI, two more nodes died at the end of the last test. Storm, which I'm
> using to put data in, kills the topology a bit abruptly …
FYI, two more nodes died at the end of the last test. Storm, which I'm using
to put data in, kills the topology a bit abruptly; perhaps the nodes don't like
a client going away like that?
log from one of the nodes:
2013-08-02 02:27:23 =ERROR REPORT
Error in process <0.4959.0> on node 'riak
I should say that I built Riak from the master branch of the git repository.
Perhaps that was a bad idea?
Paul Ingalls
Founder & CEO Fanzo
p...@fanzo.me
@paulingalls
http://www.linkedin.com/in/paulingalls
On Aug 1, 2013, at 7:47 PM, Paul Ingalls wrote:
> Thanks for the quick response Matthew!
Thanks for the quick response Matthew!
I gave that a shot, and if anything the performance was worse. When I picked
128 I ran through the calculations on this page:
http://docs.basho.com/riak/latest/ops/advanced/backends/leveldb/#Parameter-Planning
and thought that would work, but it sounds like …
Try cutting your max open files in half. I am working from my iPad, not my
workstation, so my numbers are rough. Will get better ones to you in the
morning.
The math goes like this:
- vnode/partition heap usage is (4 MB * (max_open_files - 10)) + 8 MB
- you have 18 vnodes per server (multi…
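The arithmetic above can be sketched as a quick back-of-the-envelope check (all figures in MB; the 18-vnode count comes from spreading 128 partitions over 7 nodes):

```python
# Rough LevelDB heap planning, following the formula quoted above.
# Constants are Matthew's rough numbers, not exact measurements.

def vnode_heap_mb(max_open_files):
    # per-vnode heap: (4 MB * (max_open_files - 10)) + 8 MB
    return 4 * (max_open_files - 10) + 8

VNODES_PER_SERVER = 18  # ~128 partitions / 7 nodes

for mof in (128, 64):
    total = vnode_heap_mb(mof) * VNODES_PER_SERVER
    print(f"max_open_files={mof}: {vnode_heap_mb(mof)} MB/vnode, {total} MB/server")
```

With max_open_files=128 this works out to roughly 8.6 GB per server, which overruns a 7 GB Azure Large instance; halving it to 64 brings the estimate down to around 4 GB, which is presumably why cutting the setting in half was suggested.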
I should add more details about the nodes that crashed. I ran this for the
first time for all of 10 minutes.
Here is the log from the first one:
2013-08-02 00:09:44 =ERROR REPORT
** State machine <0.2368.0> terminating
** Last event in was unregistered
** When State == active
** Data
Couple of questions.
I have migrated my system to use Riak on the back end. I have set up a 1.4
cluster with 128 partitions on 7 nodes with LevelDB as the store. Each node
looks like:
Azure Large instance (4 CPU, 7 GB RAM)
data directory is on a RAID 0
max files is set to 128
async thread on the …
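For reference, a setup like the one described would typically be expressed in each node's `app.config` roughly as follows. This is a hedged sketch: the sections and values are assumptions reconstructed from the description above, not Paul's actual file, and the `data_root` path is made up.

```erlang
%% app.config excerpt (sketch): 128-partition ring, LevelDB backend,
%% max_open_files as described above.
[
 {riak_core, [
   {ring_creation_size, 128}
 ]},
 {riak_kv, [
   {storage_backend, riak_kv_eleveldb_backend}
 ]},
 {eleveldb, [
   {data_root, "/data/leveldb"},   %% hypothetical RAID 0 mount point
   {max_open_files, 128}
 ]}
].
```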
On Thu, Aug 1, 2013 at 10:46 PM, Jeremiah Peschka <
jeremiah.pesc...@gmail.com> wrote:
> What's the underlying goal of getting this count of records in a bucket?
> Do you want to just have a live count or will you be eventually performing
> additional filters on the count?
>
The main goal was to …
What's the underlying goal of getting this count of records in a bucket? Do
you want to just have a live count or will you be eventually performing
additional filters on the count?
One option might be to use counters [1] to hold these counts, instead of
attempting to compute them on the fly.
In d…
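As a sketch of what the counter suggestion could look like over Riak's HTTP interface: 1.4 counters live under a `/buckets/<bucket>/counters/<key>` path, where POSTing an integer increments the counter and a GET reads its value. The helper below only builds the request, and the host name and key names are made up for illustration:

```python
# Sketch: build Riak 1.4 HTTP counter requests (no network calls here).
# The base URL is a hypothetical node address.
BASE = "http://riak.example.com:8098"

def counter_url(bucket, key):
    return f"{BASE}/buckets/{bucket}/counters/{key}"

def increment_request(bucket, key, delta=1):
    # POSTing the integer delta as the body increments the counter;
    # a GET on the same URL returns the current value.
    return ("POST", counter_url(bucket, key), str(delta))

method, url, body = increment_request("entries", "entry_count", 1)
print(method, url, body)
```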
On Wed, Jul 31, 2013 at 9:54 AM, Christian Rosnes <
christian.ros...@gmail.com> wrote:
> I have a 4-node Riak 1.4 test cluster on Azure
> (Large: 4-core, 7 GB RAM instances).
>
>
Ran 7 slightly different Erlang map-reduce jobs overnight to count the
118 million records in the 'entries' bucket. There …
Hi Mahesh,
Can you please confirm that you're executing steps similar to what's
in this Gist? [0]
If your steps are similar, please provide your versions of Riak and
Riak CS, along with the `app.config` files.
--
Hector
[0] https://gist.github.com/hectcastro/126b5657f228096775c6
On Thu, Aug 1, …
Yokozuna returns keys and, optionally, matching fields. If you use highlighting,
you can get a portion of the value, but no, it still returns keys, just like
normal Solr (because it is normal Solr).
YZ does support pagination in the sense that you can skip and limit, but it
doesn't have a cursor or …
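In Solr terms, that skip-and-limit style maps onto the `start` and `rows` query parameters. A small sketch of building such a query; the `/search/<index>` path is an assumption about the HTTP endpoint:

```python
from urllib.parse import urlencode

def search_query(index, q, page, per_page=100):
    # Solr-style paging: start = offset of the first hit, rows = page size.
    params = {"q": q, "start": page * per_page, "rows": per_page}
    return f"/search/{index}?{urlencode(params)}"

# page 0 -> hits 1-100, page 1 -> hits 101-200
print(search_query("entries", "text:riak", 1))
```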
At its current stage, does a Yokozuna search query return only the matching
records' keys, or is it also able to retrieve the record values?
Additionally, does Yokozuna support pagination of results? In other words,
get me the top 100 hits, or get me hits 101-200, etc.
Thanks!
Hello,
I am a little bit stuck with the Riak bytes in / bytes out.
The 'moss.access' bucket is not getting created when we start
riak-cs-storage manually.
I have 14 MB of data in one bucket.
What could be the solution? Or is this a configuration issue?
--
Thanks & Regards,
Mahesh Shitole
Hello, dear list.
Recently we had an issue: one of our four Riak nodes went down
('riak@10.0.1.192'). As expected, the three other nodes took over its work
and everything was OK.
After some time we were able to recover this node and put it back into the
cluster.
But now I see some strange things in the logs:
# riak-a…