Hi,
I am trying to get started with Riak, so I installed
'riak_1.1.4-1_i386.deb' on an Ubuntu 12.04 LTS 32-bit system running in a
VirtualBox VM. I do not have a cluster.
>
> I have a script which does the following:
>
> 1. It creates 50 child processes which run in parallel
> 2. Each child pr
I literally don't have logs. All the logs are empty (as this is a new install).
Shuhao
On Wed, Jul 25, 2012 at 11:01 PM, Joe Caswell wrote:
> riak start makes a daemon call to riak console with input and output
> redirected. riak console calls logger -t "$SCRIPT[$$]" "Starting up"; do
> you see
I also tried starting with riak console. That works perfectly fine for me.
Shuhao
Sent from my phone.
On Jul 25, 2012 10:51 PM, "Shuhao Wu" wrote:
> Yup. /tmp is 777 and /tmp/riak is 755, owned by riak:riak
>
> Shuhao
> Sent from my phone.
> On Jul 25, 2012 10:42 PM, "Joe Caswell" wrote:
>
>> We ha
Yup. /tmp is 777 and /tmp/riak is 755, owned by riak:riak
Shuhao
Sent from my phone.
On Jul 25, 2012 10:42 PM, "Joe Caswell" wrote:
> We have seen this a couple of times. A common cause is that the riak user
> needs write access to $PIPE_DIR in order to start properly. PIPE_DIR is
> usually /tmp/r
We have seen this a couple of times. A common cause is that the riak user
needs write access to $PIPE_DIR in order to start properly. PIPE_DIR is
usually /tmp/riak/; check your riak script to make sure. If the directory
doesn't exist, it will be created with unix perm 755. Verify that the
riak user
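A quick way to check that is a few shell commands. This is only a sketch: the /tmp/riak path and the riak:riak owner are assumptions taken from the description above, so confirm both against your own riak init script first.

```shell
# PIPE_DIR is usually /tmp/riak -- confirm against your riak script.
PIPE_DIR=/tmp/riak

# Create it if missing and give the riak user ownership, matching
# the 755 mode the script itself would use.
sudo mkdir -p "$PIPE_DIR"
sudo chown riak:riak "$PIPE_DIR"
sudo chmod 755 "$PIPE_DIR"

# Verify: owner should be riak:riak and mode 755 (owner-writable).
stat -c '%U:%G %a' "$PIPE_DIR"
```

If stat reports a different owner (root:root is the usual culprit when the directory was created by hand), riak start will fail while riak console, run as your own user, may still work.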
This seems to be a somewhat common issue nowadays. I have this on one of
my servers as well, though that one is Debian 6 64-bit.
Cheers,
Shuhao
On Wed, Jul 25, 2012 at 12:29 PM, David Montgomery
wrote:
> Hi,
>
> I am new to riak.
>
> I followed the instructions for a cluster setup at
> https://wiki
I've tried deleting all the files in merge_index; it's still crashing.
--
View this message in context:
http://riak-users.197444.n3.nabble.com/Riak-Crashing-Constantly-tp4024737p4024779.html
Sent from the Riak Users mailing list archive at Nabble.com.
I'll toss our Twisted/Riak component into the conversation:
https://bitbucket.org/asi/txriak
Note that this was a minimal port of the original Riak Python code
into a Twisted/deferred model. As such, we kept as close as possible to the
original
Python code -- just introduced the asynchronous
Alright. I just took a look at my old code base. It seems like I tried
to hack the async stuff into the methods that call networking (in the
transports) and used some threading to do it. It's ugly and it pretty
much breaks everything, which is probably why I abandoned it at the
time.
Just want to g
Hi --
I'm seeing an issue with timeouts for map/reduces. We're running Erlang files
via a curl command as
part of a Haskell job. In the curl data we specify the timeout to be one hour
(3,600,000 milliseconds --
see the example below). However, the job crashes (times out) after well less
tha
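For reference, the timeout is passed as a top-level field of the MapReduce request body over HTTP. A minimal sketch, assuming a local node on the default port; the bucket, module, and function names here are placeholders, not the poster's actual job:

```shell
# Submit a MapReduce job with a one-hour (3,600,000 ms) timeout.
# "mybucket", "mymod", and "mymap" are illustrative placeholders.
curl -s -X POST http://127.0.0.1:8098/mapred \
  -H 'Content-Type: application/json' \
  -d '{
    "inputs": "mybucket",
    "query": [
      {"map": {"language": "erlang", "module": "mymod", "function": "mymap"}}
    ],
    "timeout": 3600000
  }'
```

If a job still dies early with the top-level timeout set, it is worth checking whether some per-node setting in app.config is cutting individual phases short.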
Using key filters on a big bucket can cause performance problems.
On Jul 25, 2012 9:53 PM, "Andrew Kondratovich" <
andrew.kondratov...@gmail.com> wrote:
> Yeap.. half a thousand requests to Riak isn't cool =( I'm looking for some
> strategy of storing data so that I could fetch all items with one request
Sorry, I missed this one.
As Ryan notes, you simply rewrite the object to Riak with the new indexes.
There's no need to delete the object beforehand.
With the Java client you can do this via a Mutation that would be used when you
call StoreObject.execute().
Thanks,
- Roach
On Jul 25, 2012, a
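For anyone not on the Java client, the HTTP equivalent of that rewrite is just a PUT that carries the index headers you want to end up with. A sketch; the bucket, key, field names, and values below are invented for illustration:

```shell
# Re-write the object, supplying the desired secondary indexes as
# x-riak-index-* headers. Riak replaces this key's old index entries
# with exactly the set supplied here, so include every index you want
# to keep, not just the changed one.
curl -s -X PUT http://127.0.0.1:8098/riak/users/kaspar \
  -H 'Content-Type: application/json' \
  -H 'x-riak-index-email_bin: kaspar@example.com' \
  -d '{"name": "Kaspar", "email": "kaspar@example.com"}'
```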
Hi,
I am new to riak.
I followed the instructions for a cluster setup at
https://wiki.basho.com/Basic-Cluster-Setup.html. It did not work. I
am using Ubuntu 64-bit.
I changed the IP in app.config and vm.args and stopped the service.
Then I ran the command below:
riak-admin reip riak@127.0.0.1 r...@xxx.xx
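For comparison, the usual reip sequence looks roughly like this. The node names are placeholders (the actual target name is elided above), and the node must be stopped before reip is run:

```shell
# Rename a stopped node. Both names here are placeholders.
riak stop
riak-admin reip riak@127.0.0.1 riak@10.0.0.5

# vm.args (-name) and app.config must already carry the new
# name/address before starting again:
riak start
```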
Yeap.. half a thousand requests to Riak isn't cool =( I'm looking for some
strategy of storing data so that I could fetch all items with one request.
I could use an index M/R each time and filter results in the map phase. I
could use special keys built from the 'from' data and use key filters (with
'time' filtering in the map phase).
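The key-filter variant of that idea would look something like this over HTTP. Everything here is an assumption for illustration: a key layout of "<from>-<time>", a bucket named "items", and a sample 'from' value of 42:

```shell
# MapReduce whose input is a key filter: tokenize each key on "-",
# take the first token (the hypothetical "from" part), and keep only
# keys whose from-value equals "42".
curl -s -X POST http://127.0.0.1:8098/mapred \
  -H 'Content-Type: application/json' \
  -d '{
    "inputs": {
      "bucket": "items",
      "key_filters": [["tokenize", "-", 1], ["eq", "42"]]
    },
    "query": [
      {"map": {"language": "javascript", "name": "Riak.mapValuesJson"}}
    ]
  }'
```

As noted elsewhere in the thread, key filters still list every key in the bucket, so this gets expensive on big buckets.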
Kaspar,
I don't know the details of the Java API but re-writing the object should
suffice. Riak will remove the old indexes for you and create the new ones.
-Z
On Wed, Jul 18, 2012 at 5:00 AM, Kaspar Thommen wrote:
> Hi,
>
> Say I have a 'users' bucket that stores user data (name, email) and I
One or more of your merge index buffer files is corrupt. Merge index is
the backend that stores Riak Search indexes. Unfortunately this error message
doesn't tell you which partitions are corrupt. Depending on how
comfortable you are and if this is a production environment you could edit
the files b
Is that a realistic strategy for low latency requirements? Imagine this
were some web service, and people generate this query at some reasonable
frequency.
(not that I know what Andrew is looking for, exactly)
2012/7/25 Yousuf Fauzan
> Since 500 is not that big a number, I think you can run tha
Mick,
This is a bug. At one time I fixed it, but the fix had to be reverted because
it broke rolling upgrade [1]. It has languished ever since. To work around it,
explicitly put AND in the query, e.g.
q=nickname:Ring%20AND%20breed:Shepherd
-Z
[1]:
https://github.com/basho/riak_search/commit/67ca6efca7
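Spelled out as a full request against the Search HTTP interface, the workaround is just the explicit query string. The bucket name "dogs" is an assumption; the nickname/breed fields come from the query quoted above:

```shell
# Explicit AND instead of relying on the broken implicit conjunction.
curl -s 'http://127.0.0.1:8098/solr/dogs/select?q=nickname:Ring%20AND%20breed:Shepherd'
```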
Since 500 is not that big a number, I think you can run that many M/Rs with
each emitting only records having "time" greater than specified. Input
would be {index, <<"bucket">>, <<"from_bin">>, <<"from_field_value">>}
If you decide to split the data into separate buckets based on "from"
field, inp
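Over HTTP, one of those per-'from' M/R runs would be shaped roughly like this. A sketch under stated assumptions: bucket "items", index "from_bin", a sample from-value, and a 'time' cutoff passed in as the map argument are all placeholders:

```shell
# One M/R per "from" value: the 2i index match feeds the job, and the
# map phase keeps only records with time greater than the arg.
curl -s -X POST http://127.0.0.1:8098/mapred \
  -H 'Content-Type: application/json' \
  -d '{
    "inputs": {"bucket": "items", "index": "from_bin", "key": "12345"},
    "query": [
      {"map": {
        "language": "javascript",
        "source": "function(v, kd, arg) { var o = JSON.parse(v.values[0].data); return o.time > arg ? [o] : []; }",
        "arg": 1343000000
      }}
    ]
  }'
```

Running ~500 of these stays proportional to the matching items rather than to the whole bucket, which is the advantage over key filters here.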
Hello, Yousuf.
Thanks for your reply.
We have several million items. There are about 10,000 unique 'from'
fields (about 1,000 items each). Usually we need to get items for about
500 'from' identifiers with a 'time' limit (about 5% of the items
match).
On Wed, Jul 25, 2012 at 1:02 PM,
Good afternoon.
I am considering several storage solutions for my project, and right now I
am looking at Riak.
We work with the following pattern of data:
{
time: unixtime
from: int
data: binary
...
}
The amount of data is several million items for now, but it's
growing. It is necessary to han