What I failed to say was to make the copy after you populate and stop, but before
you attempt to start Riak again.
Matthew
> On Feb 26, 2016, at 11:19 AM, Joe Olson wrote:
> […]
Joe,
If the sample data is not confidential, how about creating a tar file of the
entire leveldb data directory and either emailing it to me directly or posting
it somewhere I can download it? No need to copy the entire mailing list on the
file or download location.
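A minimal sketch of creating that tarball. The /var/lib/riak/data/leveldb path and the vnode directory names are assumptions (check platform_data_dir in riak.conf); the script builds a throwaway stand-in layout so the commands can be tried anywhere:

```shell
#!/bin/sh
# Sketch: snapshot the whole leveldb data directory after 'riak stop',
# before the node is started again. DATA_DIR below is a stand-in for the
# real path, which on a default package install is usually
# /var/lib/riak/data/leveldb.
set -eu

WORK="$(mktemp -d)"
DATA_DIR="$WORK/leveldb"          # stand-in for the real data directory
mkdir -p "$DATA_DIR/0" "$DATA_DIR/182687704666362864775460604089535377456991567872"
: > "$DATA_DIR/0/LOG"             # placeholder vnode file

# Stop the node first so the files are quiescent, e.g.:  riak stop

# One tarball of the entire directory tree, with paths kept relative.
tar -czf leveldb-snapshot.tar.gz -C "$WORK" leveldb

# List what went into the archive.
tar -tzf leveldb-snapshot.tar.gz
```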
Matthew
> On Feb 26, 2016, at 1
Thanks for the quick response! I am using 2.1.3 and will check out that
tutorial. I can see everything in the logs but want to repair the indexes
programmatically. It sounds like Jason's solution is what I'm looking for.
Cheers,
Colin
On 26 Feb 2016 8:17 a.m., "Jason Voegele" wrote:
> On Feb 26
Negative.
I have ring size set to 8, leveldb split across two sets of drives ("fast" and
"slow", but meaningless on the test Vagrant box...just two separate
directories). I checked all of the ../leveldb/* directories. All LOG files are
identical, and no errors in any of them.
I will try to
Regarding the coverage plan, is there any way to get it with the protocol
buffers API? The RpbSearchQueryResp message doesn't seem to contain
anything but docs:
message RpbSearchQueryResp {
  repeated RpbSearchDoc docs      = 1;  // Result documents
  optional float        max_score = 2;  // Maximum score
On Feb 26, 2016, at 10:40 AM, Colin Walker wrote:
> […]
Joe,
Are there any error messages in the leveldb LOG and/or LOG.old files? These
files are located within each vnode's directory, likely
/var/lib/riak/data/leveldb/*/LOG* on your machine.
The LOG files are not to be confused with 000xxx.log files. The lower case
*.log files are the recovery
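A hedged sketch of that check. The real files live under each vnode directory, e.g. /var/lib/riak/data/leveldb/*/LOG*; here a throwaway stand-in tree with a fabricated error line is built so the grep can be tried anywhere:

```shell
#!/bin/sh
# Sketch: scan every vnode's upper-case LOG / LOG.old files for error lines.
# BASE is a stand-in for /var/lib/riak/data/leveldb on a real install.
set -eu

BASE="$(mktemp -d)"
mkdir -p "$BASE/0" "$BASE/730750818665451459101842416358141509827966271488"
printf 'compacting table\n' > "$BASE/0/LOG"
printf 'Compaction error: Corruption: bad block\n' \
    > "$BASE/730750818665451459101842416358141509827966271488/LOG.old"

# -i: case-insensitive, -l: print only files that contain a match.
# The LOG* glob matches LOG and LOG.old, not the lower-case 000xxx.log
# recovery files, which are a different thing.
grep -il error "$BASE"/*/LOG* || echo "no errors found"
```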
Hey Joe,
I will do my best to help, but I am not the most experienced with Riak
operations. Your best bet to get to a solution as fast as possible is to
include the full users group, which I have added to the recipients of this
message.
1. Are the Riak data directories within Vagrant shared direc
Hey Colin,
Do you see any errors in your solr log that would give you the info on the
bad entries?
Thanks,
Alex
On Fri, Feb 26, 2016 at 10:40 AM, Colin Walker wrote:
> […]
Yes, AAE is enabled:
anti_entropy = active
anti_entropy.use_background_manager = on
handoff.use_background_manager = on
anti_entropy.throttle.tier1.mailbox_size = 0
anti_entropy.throttle.tier1.delay = 5ms
anti_entropy.throttle.tier2.mailbox_size = 50
anti_entropy.throttle.tier2.delay = 50ms
an
Hey again everyone,
Due to bad planning on my part, Solr is having trouble indexing some of the
fields I am sending to it, specifically, I ended up with some string fields
in a numerical field. Is there a way to retrieve the records from Riak that
have thrown errors in solr?
Cheers,
Colin
I would check the coverage plans that are being used for the different queries,
which you can usually see in the headers of the resulting document. When you
run a search query through yokozuna, it will use a coverage plan from riak core
to find a minimal set of nodes (and partitions) to query to
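For the HTTP interface, one way to look is to dump the response headers with curl. The host, port, and index name ("my_index") below are assumptions for illustration, and whether coverage details actually show up in the headers depends on your Riak version and setup:

```shell
#!/bin/sh
# Sketch: dump the response headers of a yokozuna HTTP search query.
# "my_index" is a hypothetical index name; adjust host/port to your node.
curl -s -D - -o /dev/null --max-time 2 \
  "http://localhost:8098/search/query/my_index?wt=json&q=*:*" \
  || echo "riak HTTP endpoint not reachable on localhost:8098"
```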
Did some investigation:
- this problem seems to be the same as described in
https://github.com/basho/riak/issues/415 (and others)
- using the Erlang interface I retrieved the bucket list as:
[<<"moss.buckets">>,<<"moss.access">>,<<"moss.users">>,
<<48,111,58,16,121,107,99,149,64,231,6,8,234,204
Hi!
Riak 2.1.3
Having a stable data set (no documents deleted in months), I'm receiving
inconsistent search results with Yokozuna. For example, the first query can
return num_found: 3000 (correct); the same query repeated in the next seconds
can return 2998, or 2995, then 3000 again. Similar inconsistency
To second Vitaly, losing data on restart is not normal. What OS are you
running it on? Is it in a VM or on bare metal? How did you install it? Did
you change any riak.conf vars? With this info, we should be able to help
you troubleshoot this issue better.
Chris
On Fri, Feb 26, 2016 at 12:39 AM V
Hello,
I'm kind of new to the world of Riak, but I succeeded in installing and
configuring riak_2.1.3 (5 instances on 5 machines running Debian Jessie),
riak-cs_2.1.1, riak-cs-control_1.0.2, and stanchion_2.1.1.
Everything works fine, but I'm observing a strange behavior:
when I had finished install