Re: Build otp_src_R16B02-basho5 on Heroku

2015-02-05 Thread Corentin Jechoux
Yes, I clearly have a problem writing a file.
You're right, I will contact Heroku for that.

May I ask what steps you followed to build this release on Heroku?

Regards

Corentin

2015-02-05 18:31 GMT+01:00 Christopher Meiklejohn :

>
> On Feb 4, 2015, at 10:59 PM, Corentin Jechoux <
> corentin.jechoux@gmail.com> wrote:
>
> Thank you Christopher for your fast answer. However, I did not mention
> that I was using the "cedar-14" stack, which has an ephemeral file system: the
> file system allows write operations, but written files are not preserved when a
> dyno restarts [1]. Moreover, I can write files, because the "make" command did
> not fail at the first compilation; it fails, each time, on the same file.
>
> cd lib && \
>   ERL_TOP=/app/otp_src_R16B02-basho5
> PATH=/app/otp_src_R16B02-basho5/bootstrap/bin:"${PATH}" \
> make opt SECONDARY_BOOTSTRAP=true
> make[1]: Entering directory `/app/otp_src_R16B02-basho5/lib'
> make[2]: Entering directory `/app/otp_src_R16B02-basho5/lib/hipe'
> Makefile:71: warning: overriding commands for target `clean'
> /app/otp_src_R16B02-basho5/make/otp_subdir.mk:28: warning: ignoring old
> commands for target `clean'
> === Entering application hipe
> make[3]: Entering directory `/app/otp_src_R16B02-basho5/lib/hipe/rtl'
> (cd ../main && make hipe.hrl)
> make[4]: Entering directory `/app/otp_src_R16B02-basho5/lib/hipe/main'
> sed -e "s;%VSN%;3.10.2.1;" ../../hipe/main/hipe.hrl.src >
> ../../hipe/main/hipe.hrl
> make[4]: Leaving directory `/app/otp_src_R16B02-basho5/lib/hipe/main'
> erlc -W  +debug_info +inline +warn_unused_import +warn_exported_vars
> -o../ebin hipe_rtl.erl
> /app/otp_src_R16B02-basho5/lib/hipe/rtl/../ebin/hipe_rtl.bea#: error
> writing file
>
>
> You’re clearly having a problem writing a file, but I’m not sure why. I’m able
> to build this release without running into the same issue.
>
> Have you tried contacting Heroku?
>
> - Chris
>
> Christopher Meiklejohn
> Senior Software Engineer
> Basho Technologies, Inc.
> cmeiklej...@basho.com
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
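The `error writing file` from erlc could be either a permissions problem or an exhausted disk or inode quota on the dyno. A quick way to tell the two apart, sketched as a hypothetical probe from an Erlang shell on the build machine (the directory comes from the build log above; the probe file name is made up):

```erlang
%% Hypothetical write probe for the directory the build fails in.
%% file:write_file/2 returns POSIX error atoms, so the printed Reason
%% (eacces, enospc, edquot, erofs, ...) narrows the cause down.
Dir = "/app/otp_src_R16B02-basho5/lib/hipe/ebin",
Probe = filename:join(Dir, "write_probe.tmp"),
case file:write_file(Probe, <<"x">>) of
    ok ->
        ok = file:delete(Probe),
        io:format("~s is writable~n", [Dir]);
    {error, Reason} ->
        io:format("write failed in ~s: ~p~n", [Dir, Reason])
end.
```

If the probe succeeds but the build still fails on the same file, the failure is more likely a quota hit mid-build than a static permissions issue.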


Re: Adding nodes to cluster

2015-02-05 Thread Edgar Veiga
It is expected that the total amount of data per node drops quite a lot,
correct? I'm doubling the size of the cluster (6 more nodes).

I ask this because the current 6 machines have 1.5 TB in disks, but the new ones
(for now) have only 1 TB.

Best regards

—
Sent from my iPhone

On Sat, Jan 24, 2015 at 9:49 PM, Edgar Veiga 
wrote:

> Yeah, after sending the email I realized both! :)
> Thanks! Have a nice weekend
> On 24 January 2015 at 21:46, Sargun Dhillon  wrote:
>> 1) Potentially re-enable AAE after migration. As your cluster gets
>> bigger, the likelihood of any node failing in the cluster goes up.
>> Replica divergence only becomes scarier in light of this. Losing data
>> != awesome.
>>
>> 6) There shouldn't be any problems, but for safe measures you should
>> probably upgrade the old ones before the migration.
>>
>>
>>
>> On Sat, Jan 24, 2015 at 1:31 PM, Edgar Veiga 
>> wrote:
>> > Sargun,
>> >
>> > Regarding 1) - AAE is disabled. We had problems with it and there are a lot
>> > of threads here in the mailing list about it. AAE wouldn't stop using
>> > more and more disk space and the only solution was disabling it! Since
>> then
>> > the cluster has been pretty stable...
>> >
>> > Regarding 6) Can you or anyone at Basho confirm that there won't be any
>> > problems using the latest (1.4.12) version of riak in the new nodes and
>> only
>> > upgrading the old ones after this process is completed?
>> >
>> > Thanks a lot for the other tips, you've been very helpful!
>> >
>> > Best regards,
>> > Edgar
>> >
>> > On 24 January 2015 at 21:09, Sargun Dhillon  wrote:
>> >>
>> >> Several things:
>> >> 1) If you have data at rest that doesn't change, make sure you have
>> >> AAE enabled, and that it has run before your cluster is manipulated. Given that
>> >> you're running at 85% space, I would be a little worried to turn it
>> >> on, because you might run out of disk space. You can also pretty
>> >> reasonably put the AAE trees on magnetic storage. AAE is nice in the
>> >> sense that you _know_ your cluster is consistent at a point in time.
>> >>
>> >> 2) Make sure you're getting SSDs of roughly the same quality. I've
>> >> seen enterprise SSDs get higher and higher latency as time goes on,
>> >> due to greater data protection features. We don't need any of that.
>> >> Basho_bench is your friend if you have the time.
>> >>
>> >> 3) Do it all in one go. This will enable handoffs more cleanly, and all
>> at
>> >> once.
>> >>
>> >> 4) Do not add the new nodes to the load balancer until handoff is
>> >> done. At least experimentally, latency increases slightly on the
>> >> original cluster, but the target nodes have pretty awful latency.
>> >>
>> >> 5) Start with a handoff_limit of 1. If things look good, you can easily
>> >> raise it. We're not optimizing for the total time to handoff; we really
>> >> should be optimizing for individual vnode handoff time.
>> >>
>> >> 6) If you're using Leveldb, upgrade to the most recent version of Riak
>> >> 1.4. There have been some improvements. 1.4.9 made me happier. I think
>> >> it's reasonable for the new nodes to start on 1.4.12, and the old
>> >> nodes to be switched over later.
>> >>
>> >> 7) Watch your network utilization. Keep your disk latency flat. Stop
>> >> it if it spikes. Start from enabling one node with the lowest usage
>> >> and see if it works.
>> >>
>> >>
>> >> These are the things I can think of immediately.
>> >>
>> >> On Sat, Jan 24, 2015 at 12:42 PM, Alexander Sicular
>> >> wrote:
>> >> > I would probably add them all in one go so you have one vnode
>> migration
>> >> > plan that gets executed. What is your ring size? How much data are we
>> >> > talking about? It's not necessarily the number of keys but rather the
>> total
>> >> > amount of data and how quickly that data can move en mass between
>> machines.
>> >> >
>> >> > -Alexander
>> >> >
>> >> >
>> >> > @siculars
>> >> > http://siculars.posthaven.com
>> >> >
>> >> > Sent from my iRotaryPhone
>> >> >
>> >> >> On Jan 24, 2015, at 15:37, Ed  wrote:
>> >> >>
>> >> >> Hi everyone!
>> >> >>
>> >> >> I have a riak cluster, working in production for about one year, with
>> >> >> the following characteristics:
>> >> >> - Version 1.4.8
>> >> >> - 6 nodes
>> >> >> - leveldb backend
>> >> >> - replication (n) = 3
>> >> >> ~ 3 billion keys
>> >> >>
>> >> >> My ssd's are reaching 85% of capacity and we have decided to buy 6
>> more
>> >> >> nodes to expand the cluster.
>> >> >>
>> >> >> Have you got any kind of advice on executing this operation or
>> should I
>> >> >> just follow the documentation on adding new nodes to a cluster?
>> >> >>
>> >> >> Best regards!
>> >> >> Edgar
>> >> >>
>> >> >
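Sargun's point 5 — start with a handoff_limit of 1 and raise it only if things look good — is normally applied with `riak-admin transfer-limit`. A hedged sketch of the equivalent from an attached console (`riak attach`), assuming riak_core_handoff_manager:set_concurrency/1 is available, which is the function that command delegates to in the 1.4 line:

```erlang
%% Hedged sketch: set handoff concurrency to 1 on the local node and on
%% every connected node; raise it later if disk latency stays flat.
riak_core_handoff_manager:set_concurrency(1),
rpc:multicall(nodes(), riak_core_handoff_manager, set_concurrency, [1]).
```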

Re: Mapred unable to find keys that use bucket types

2015-02-05 Thread Mikhail Pustovalov
Thank you again Christopher,
An example would be awesome! Although I really need only one line of code
that shows how to pass specific keys to mapred function to operate on (in
the case when they use bucket type).
As to my map function: it really does nothing - simply returns key-value
pairs, because I use mapred functionality to simply fetch multiple objects
in one query. Here it is:

map_kv_pairs ( Obj, _, _ ) ->
  case riak_object:is_robject(Obj) of
true -> [ {riak_object:key(Obj), riak_object:get_value(Obj)} ] ;
false -> []
  end.
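Since the map phase only echoes key/value pairs, the same "fetch several objects in one call" goal can be met with plain gets, sidestepping the typed-bucket MapReduce issue entirely. A hedged sketch using the Erlang client (fetch_many/3 is a made-up helper name; it assumes single-valued objects, so riakc_obj:get_value/1 is enough):

```erlang
%% Hypothetical helper: fetch a list of keys from a (typed) bucket with
%% plain gets instead of a MapReduce job. Sequential for simplicity;
%% missing keys come back tagged as notfound.
fetch_many(Pid, TypedBucket, Keys) ->
    [case riakc_pb_socket:get(Pid, TypedBucket, Key) of
         {ok, Obj}         -> {Key, riakc_obj:get_value(Obj)};
         {error, notfound} -> {Key, notfound}
     end || Key <- Keys].
```

Called as, e.g., fetch_many(Pid, {<<"bucket_type">>, <<"bucket">>}, Keys).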

On Fri Feb 06 2015 at 1:24:50 AM Christopher Meiklejohn <
cmeiklej...@basho.com> wrote:

>
> On Feb 5, 2015, at 12:47 PM, Mikhail Pustovalov 
> wrote:
>
> Hi Chris,
>
> Thank you for the prompt reply.
> Although that is exactly what I do. I've noticed that a bucket can now be
> either a binary or a tuple {binary, binary} where the first element is bucket
> type and the second is bucket. And it works for put/get operations and for
> mapred_bucket which traverses the whole bucket. But what I am trying to
> achieve is traverse only specified keys and that doesn't seem to work for
> mapred queries. Here are the commands that work:
> simple get:
> riakc_pb_socket:get(Pid,{<<"bucket_type">>,<<"bucket">>},<>).
> mapred over the whole bucket:
> riakc_pb_socket:mapred_bucket(Pid, {<<"bucket_type">>,<<"bucket">>},
> [{map, {modfun, rc_mapred, map_kv_pairs}, none, true}]).
> but this one fails with the result {ok,[{0,[{error,notfound}]}]}:
> riakc_pb_socket:mapred(Pid, 
> [{{<<"bucket_type">>,<<"bucket">>},<>}],[{map,
> {modfun, rc_mapred, map_kv_pairs}, none, true}]).
>
> If you can run a mapred query over specified keys could you please show me
> an example?
>
>
> I can try to put together an example.
>
> Do you mind sharing what your map function is doing?
>
> - Chris
>
> Christopher Meiklejohn
> Senior Software Engineer
> Basho Technologies, Inc.
> cmeiklej...@basho.com
>


Re: Riak Search Production Configuration

2015-02-05 Thread Shawn Debnath
The only thing I can think of is that your flattened full name is not being 
matched.  Also looking at Basho’s default schema, it should be “_yz_str” and 
not “yz_str”, and that only works if you actually have it defined as:


  



in your schema file. The data types for solr are all class names as in solr.*.  
I would double check your mappings for name, type, and if they are being 
indexed and stored. If stored is false, you won’t get the data back, but you can
still query on it.
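One quick way to see whether documents are reaching the index at all is a catch-all query against it, checking the match count rather than the stored fields. Sketched here with the Erlang client for brevity (the thread itself uses the Java client; the index name test_idx is taken from Nirav's snippets, and the #search_results{} record comes from riakc.hrl):

```erlang
%% Hedged sketch: count documents in the "test_idx" search index.
%% If num_found stays at 0 while Riak itself holds data, objects are not
%% being indexed (schema field names/types don't match, or the bucket is
%% not associated with the index).
%% Requires: -include_lib("riakc/include/riakc.hrl").
{ok, Results} = riakc_pb_socket:search(Pid, <<"test_idx">>, <<"*:*">>),
io:format("documents indexed: ~p~n", [Results#search_results.num_found]).
```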


On 2/5/15, 2:01 PM, "Nirav Shah" <niravis...@yahoo.com> wrote:

Hi Shawn,
I am using plain old types. Some of the fields we index end with Id, like
execId, verId, orderId, and are defined as long. There are some that hold random
strings, which are defined as yz_str. Do you think these fields can cause issues?

Surprisingly, some data is present in Solr and some is not. I could understand
if no data were found for some fields, but I am seeing part of the data.

Also, are there any suggestions around AAE configuration for a production cluster?


-Nirav


From: Shawn Debnath <sh...@debnath.net>
To: Nirav Shah <niravis...@yahoo.com>; Luc Perkins <lperk...@basho.com>
Cc: "riak-users@lists.basho.com" <riak-users@lists.basho.com>
Sent: Thursday, February 5, 2015 1:49 PM
Subject: Re: Riak Search Production Configuration

Nirav, are you using CRDTs or plain old types with Riak? The definition for 
field names makes a big difference in what gets archived and solr will not 
complain if it couldn’t find matching fields, it just won’t index them. You can 
take a peek at the data dir on the Riak instance to see what, if any, is being 
indexed.




On 2/5/15, 12:18 PM, "Nirav Shah" <niravis...@yahoo.com> wrote:

Hi Luc,
Thanks for the response.

Here are the steps I performed at application start-up. I am using the
default bucket type in my application.

1. Create Custom Schema

    String schema = Source.fromInputStream(inputStream, "UTF-8").mkString();
    // inputStream comes from a schema file that I read
    YokozunaSchema yokozunaSchema = new YokozunaSchema("test", schema.toString());
    StoreSchema storeSchema = new StoreSchema.Builder(yokozunaSchema).build();
    riakClient.execute(storeSchema);


2.Post creation of Schema

I created an index for my object based on the required fields mentioned in
the Riak Search docs, plus other fields mapped to my object. I have only a
few fields set to be indexed and stored, as I only want to search on them.

YokozunaIndex yokozunaIndex = new YokozunaIndex("test_idx", "test");
StoreIndex storeIndex = new StoreIndex.Builder(yokozunaIndex).build();
riakClient.execute(storeIndex);


3.Set Bucket Properties

I then associate my bucket with the index as part of the same application
start-up. Before I attach the index to my bucket, I verify whether it has
already been attached, using a fetch.
StoreBucketProperties sbp =
new StoreBucketProperties.Builder(namespace)
.withAllowMulti(false)
.withLastWriteWins(true)
.withSearchIndex("test_idx")
.build();
riakClient.execute(sbp);



Regards,
Nirav



From: Luc Perkins <lperk...@basho.com>
To: Nirav Shah <niravis...@yahoo.com>
Cc: Shawn Debnath <sh...@debnath.net>; "riak-users@lists.basho.com" <riak-users@lists.basho.com>
Sent: Thursday, February 5, 2015 11:04 AM
Subject: Re: Riak Search Production Configuration

Nirav,

Could you possibly detail the steps you used to upload the schema, adjust the 
bucket properties, etc.? That would help us identify the issue.

Luc

On Thu, Feb 5, 2015 at 9:42 AM, Nirav Shah <niravis...@yahoo.com> wrote:


Hi Shawn,
Thanks for the response. To give you some background

1. We are using custom schema with default bucket type
2. I have the search set to on :)
3. I have associated BucketProperties/Index to my buckets.
4. What I am seeing is that I am getting data back, but for some reason I am not
getting the entire set. When I query Riak I see the data; however, when I query
the Solr indexes, that data is missing. At this point, I don't know what can
cause this and am looking for people who might have faced similar issues.

My only config changes are setting search = on in riak.conf and adjusting the
JVM settings for Solr there.

I would appreciate any pointers and best practices around Riak Search and AAE
settings that folks have running in production clusters.



Regards,
Nirav



From: Shawn Debnath mailto:sh...@debnath.net>>
To: Nirav Shah mailto:niravis...@yahoo.com>>; 
"riak-users@lists.basho.com

Re: Mapred unable to find keys that use bucket types

2015-02-05 Thread Christopher Meiklejohn

> On Feb 5, 2015, at 12:47 PM, Mikhail Pustovalov  wrote:
> 
> Hi Chris,
> 
> Thank you for the prompt reply.
> Although that is exactly what I do. I've noticed that a bucket can now be either
> a binary or a tuple {binary, binary} where the first element is bucket type and
> the second is bucket. And it works for put/get operations and for 
> mapred_bucket which traverses the whole bucket. But what I am trying to 
> achieve is traverse only specified keys and that doesn't seem to work for 
> mapred queries. Here are the commands that work:
> simple get:
> riakc_pb_socket:get(Pid,{<<"bucket_type">>,<<"bucket">>},<>).
> mapred over the whole bucket:
> riakc_pb_socket:mapred_bucket(Pid, {<<"bucket_type">>,<<"bucket">>}, [{map, 
> {modfun, rc_mapred, map_kv_pairs}, none, true}]).
> but this one fails with the result {ok,[{0,[{error,notfound}]}]}:
> riakc_pb_socket:mapred(Pid, 
> [{{<<"bucket_type">>,<<"bucket">>},<>}],[{map, {modfun, rc_mapred, 
> map_kv_pairs}, none, true}]).
> 
> If you can run a mapred query over specified keys could you please show me an 
> example?

I can try to put together an example.

Do you mind sharing what your map function is doing?

- Chris

Christopher Meiklejohn
Senior Software Engineer
Basho Technologies, Inc.
cmeiklej...@basho.com


Re: Riak Search Production Configuration

2015-02-05 Thread Nirav Shah
Hi Shawn,
I am using plain old types. Some of the fields we index end with Id, like
execId, verId, orderId, and are defined as long. There are some that hold random
strings, which are defined as yz_str. Do you think these fields can cause issues?

Surprisingly, some data is present in Solr and some is not. I could understand
if no data were found for some fields, but I am seeing part of the data.

Also, are there any suggestions around AAE configuration for a production cluster?

-Nirav
From: Shawn Debnath <sh...@debnath.net>
To: Nirav Shah <niravis...@yahoo.com>; Luc Perkins <lperk...@basho.com>
Cc: "riak-users@lists.basho.com" <riak-users@lists.basho.com>
Sent: Thursday, February 5, 2015 1:49 PM
Subject: Re: Riak Search Production Configuration
Nirav, are you using CRDTs or plain old types with Riak? The definition for 
field names makes a big difference in what gets archived and solr will not 
complain if it couldn’t find matching fields, it just won’t index them. You can 
take a peek at the data dir on the Riak instance to see what, if any, is being 
indexed.



On 2/5/15, 12:18 PM, "Nirav Shah" <niravis...@yahoo.com> wrote:

Hi Luc,
Thanks for the response.

Here are the steps I performed at application start-up. I am using the
default bucket type in my application.

1. Create Custom Schema

    String schema = Source.fromInputStream(inputStream, "UTF-8").mkString();
    // inputStream comes from a schema file that I read
    YokozunaSchema yokozunaSchema = new YokozunaSchema("test", schema.toString());
    StoreSchema storeSchema = new StoreSchema.Builder(yokozunaSchema).build();
    riakClient.execute(storeSchema);

2. Post creation of Schema

I created an index for my object based on the required fields mentioned in
the Riak Search docs, plus other fields mapped to my object. I have only a
few fields set to be indexed and stored, as I only want to search on them.

    YokozunaIndex yokozunaIndex = new YokozunaIndex("test_idx", "test");
    StoreIndex storeIndex = new StoreIndex.Builder(yokozunaIndex).build();
    riakClient.execute(storeIndex);

3. Set Bucket Properties

I then associate my bucket with the index as part of the same application
start-up. Before I attach the index to my bucket, I verify whether it has
already been attached, using a fetch.

    StoreBucketProperties sbp =
        new StoreBucketProperties.Builder(namespace)
            .withAllowMulti(false)
            .withLastWriteWins(true)
            .withSearchIndex("test_idx")
            .build();
    riakClient.execute(sbp);

Regards,
Nirav
From: Luc Perkins <lperk...@basho.com>
To: Nirav Shah <niravis...@yahoo.com>
Cc: Shawn Debnath <sh...@debnath.net>; "riak-users@lists.basho.com" <riak-users@lists.basho.com>
Sent: Thursday, February 5, 2015 11:04 AM
Subject: Re: Riak Search Production Configuration

Nirav,
Could you possibly detail the steps you used to upload the schema, adjust the 
bucket properties, etc.? That would help us identify the issue.
Luc
On Thu, Feb 5, 2015 at 9:42 AM, Nirav Shah  wrote:



Hi Shawn,
Thanks for the response. To give you some background:

1. We are using custom schema with default bucket type
2. I have the search set to on :)
3. I have associated BucketProperties/Index to my buckets.
4. What I am seeing is that I am getting data back, but for some reason I am not
getting the entire set. When I query Riak I see the data; however, when I query
the Solr indexes, that data is missing. At this point, I don't know what can
cause this and am looking for people who might have faced similar issues.

My only config changes are setting search = on in riak.conf and adjusting the
JVM settings for Solr there.

I would appreciate any pointers and best practices around Riak Search and AAE
settings that folks have running in production clusters.

Regards,
Nirav

From: Shawn Debnath <sh...@debnath.net>
To: Nirav Shah <niravis...@yahoo.com>; "riak-users@lists.basho.com" <riak-users@lists.basho.com>
Sent: Thursday, February 5, 2015 9:13 AM
Subject: Re: Riak Search Production Configuration

Hi Nirav,
About your last point: just yesterday I started playing with Search 2.0 (Solr)
and Riak. Basho did a good job of integrating the Solr platform, but the docs are
sometimes misleading. One thing I found out was that, using the default schema
provided by Basho, if you are using CRDTs, your fields are suffixed with
_register, _counter, _set. This link
(http://docs.basho.com/riak/latest/dev/search/search-data-types/) has a good
set of examples, but it is best to experiment. I ended up diving into the data
dir of Solr and grep’ed for parts of my field names to figure out what a field
actually was. When running queries, Solr/Riak will not let you know that fields
are incorrect; it just doesn’t have any data for those, so it returns no search
results.
Good luck.
Shawn
PS. Be sure to have search=on in riak.conf :)


On 2/5/15, 7:34 AM, "Nirav Shah" <niravis...@yahoo.com> wrote:

Hi All,
Just wanted to check what kind of configuration settings d

Re: Riak Search Production Configuration

2015-02-05 Thread Shawn Debnath
Nirav, are you using CRDTs or plain old types with Riak? The definition for 
field names makes a big difference in what gets archived and solr will not 
complain if it couldn’t find matching fields, it just won’t index them. You can 
take a peek at the data dir on the Riak instance to see what, if any, is being 
indexed.


On 2/5/15, 12:18 PM, "Nirav Shah" <niravis...@yahoo.com> wrote:

Hi Luc,
Thanks for the response.

Here are the steps I performed at application start-up. I am using the
default bucket type in my application.

1. Create Custom Schema

    String schema = Source.fromInputStream(inputStream, "UTF-8").mkString();
    // inputStream comes from a schema file that I read
    YokozunaSchema yokozunaSchema = new YokozunaSchema("test", schema.toString());
    StoreSchema storeSchema = new StoreSchema.Builder(yokozunaSchema).build();
    riakClient.execute(storeSchema);


2.Post creation of Schema

I created an index for my object based on the required fields mentioned in
the Riak Search docs, plus other fields mapped to my object. I have only a
few fields set to be indexed and stored, as I only want to search on them.

YokozunaIndex yokozunaIndex = new YokozunaIndex("test_idx", "test");
StoreIndex storeIndex = new StoreIndex.Builder(yokozunaIndex).build();
riakClient.execute(storeIndex);


3.Set Bucket Properties

I then associate my bucket with the index as part of the same application
start-up. Before I attach the index to my bucket, I verify whether it has
already been attached, using a fetch.
StoreBucketProperties sbp =
new StoreBucketProperties.Builder(namespace)
.withAllowMulti(false)
.withLastWriteWins(true)
.withSearchIndex("test_idx")
.build();
riakClient.execute(sbp);



Regards,
Nirav



From: Luc Perkins <lperk...@basho.com>
To: Nirav Shah <niravis...@yahoo.com>
Cc: Shawn Debnath <sh...@debnath.net>; "riak-users@lists.basho.com" <riak-users@lists.basho.com>
Sent: Thursday, February 5, 2015 11:04 AM
Subject: Re: Riak Search Production Configuration

Nirav,

Could you possibly detail the steps you used to upload the schema, adjust the 
bucket properties, etc.? That would help us identify the issue.

Luc

On Thu, Feb 5, 2015 at 9:42 AM, Nirav Shah <niravis...@yahoo.com> wrote:


Hi Shawn,
Thanks for the response. To give you some background

1. We are using custom schema with default bucket type
2. I have the search set to on :)
3. I have associated BucketProperties/Index to my buckets.
4. What I am seeing is that I am getting data back, but for some reason I am not
getting the entire set. When I query Riak I see the data; however, when I query
the Solr indexes, that data is missing. At this point, I don't know what can
cause this and am looking for people who might have faced similar issues.

My only config changes are setting search = on in riak.conf and adjusting the
JVM settings for Solr there.


I would appreciate any pointers and best practices around Riak Search and AAE
settings that folks have running in production clusters.



Regards,
Nirav



From: Shawn Debnath <sh...@debnath.net>
To: Nirav Shah <niravis...@yahoo.com>; "riak-users@lists.basho.com" <riak-users@lists.basho.com>
Sent: Thursday, February 5, 2015 9:13 AM

Subject: Re: Riak Search Production Configuration

Hi Nirav,

About your last point: just yesterday I started playing with Search 2.0 (Solr)
and Riak. Basho did a good job of integrating the Solr platform, but the docs are
sometimes misleading. One thing I found out was that, using the default schema
provided by Basho, if you are using CRDTs, your fields are suffixed with
_register, _counter, _set. This link
(http://docs.basho.com/riak/latest/dev/search/search-data-types/) has a good
set of examples, but it is best to experiment. I ended up diving into the data
dir of Solr and grep’ed for parts of my field names to figure out what a field
actually was. When running queries, Solr/Riak will not let you know that fields
are incorrect; it just doesn’t have any data for those, so it returns no search
results.

Good luck.

Shawn

PS. Be sure to have search=on in riak.conf :)



On 2/5/15, 7:34 AM, "Nirav Shah" <niravis...@yahoo.com> wrote:

Hi All,
Just wanted to check what kind of configuration settings everyone uses in a
production clustered environment for Riak Search/AAE, and whether someone can
share some experience with it? We currently have 2 GB of memory allocated to Solr
and are currently just using the default parameters from riak.conf.

What we have seen so far is that there is data in Riak but somehow Solr/Riak
Search does not return it.

Re: Riak client returnTerm and regexp

2015-02-05 Thread Daniel Iwan
By the look of it, it seems returnTerm is available in 1.3+ and regexp
matching got merged into 2.0?
Also is there any documentation what subset of Perl regexp is supported?

Thanks
Daniel



--
View this message in context: 
http://riak-users.197444.n3.nabble.com/Riak-client-returnTerm-and-regexp-tp4032539p4032556.html
Sent from the Riak Users mailing list archive at Nabble.com.



Re: Mapred unable to find keys that use bucket types

2015-02-05 Thread Mikhail Pustovalov
Hi Chris,

Thank you for the prompt reply.
Although that is exactly what I do. I've noticed that a bucket can now be
either a binary or a tuple {binary, binary} where the first element is bucket
type and the second is bucket. And it works for put/get operations and for
mapred_bucket which traverses the whole bucket. But what I am trying to
achieve is traverse only specified keys and that doesn't seem to work for
mapred queries. Here are the commands that work:
simple get:
riakc_pb_socket:get(Pid,{<<"bucket_type">>,<<"bucket">>},<>).
mapred over the whole bucket:
riakc_pb_socket:mapred_bucket(Pid, {<<"bucket_type">>,<<"bucket">>}, [{map,
{modfun, rc_mapred, map_kv_pairs}, none, true}]).
but this one fails with the result {ok,[{0,[{error,notfound}]}]}:
riakc_pb_socket:mapred(Pid, [{{<<"bucket_type">>,<<"bucket">>},<>}],[{map,
{modfun, rc_mapred, map_kv_pairs}, none, true}]).

If you can run a mapred query over specified keys could you please show me
an example?

Thanks,
Michael


On Thu Feb 05 2015 at 10:16:32 PM Christopher Meiklejohn <
cmeiklej...@basho.com> wrote:

>
> > On Feb 5, 2015, at 10:55 AM, Mikhail Pustovalov 
> wrote:
> >
> > Hello,
> > I am using MapReduce just as a way to get multiple keys in one query (I
> couldn't find a better way). My code used to work with Riak v.1.4 but now
> when I try to run it against the latest version (2.0.4) mapred queries
> return {error, notfound} for each key supplied.
> > I have created a bucket type, put my keys inside a bucket in that type.
> Simple 'put' and 'get' work fine. This line returns requested object:
> > riakc_pb_socket:get(Pid,{<<"avs_n2">>,<<"avatars">>},<<
> 145,3,100,41,46>>).
> > This line though:
> > riakc_pb_socket:mapred(Pid, 
> > [{{<<"avs_n2">>,<<"avatars">>},<<145,3,100,41,46>>}],[{map,
> {modfun, rc_mapred, map_kv_pairs}, none, true}]).
> > returns this:
> > {ok,[{0,[{error,notfound}]}]}
> > Seems like mapred functions are unable to query using bucket types.
> Without bucket types everything still works fine.
> > Also mapred_bucket over a whole bucket also works fine.
> > Please, advise. Is it possible to use mapred with newly introduced
> bucket types when I want only specific keys and not the full scan of a
> bucket?
>
> Hi Mikhail,
>
> You’ll need to specify the bucket type as part of the bucket name when
> performing the map reduce, for example, for inputs for the “maps” bucket
> type and “users” bucket, you should use {<<“maps”>>, <<“users”>>} as the
> bucket name for riakc_pb_socket.
>
> Thanks,
> - Chris
>
> Christopher Meiklejohn
> Senior Software Engineer
> Basho Technologies, Inc.
> cmeiklej...@basho.com


Re: Riak Search Production Configuration

2015-02-05 Thread Nirav Shah
Hi Luc,
Thanks for the response.

Here are the steps I performed at application start-up. I am using the
default bucket type in my application.

1. Create Custom Schema

    String schema = Source.fromInputStream(inputStream, "UTF-8").mkString();
    // inputStream comes from a schema file that I read
    YokozunaSchema yokozunaSchema = new YokozunaSchema("test", schema.toString());
    StoreSchema storeSchema = new StoreSchema.Builder(yokozunaSchema).build();
    riakClient.execute(storeSchema);

2. Post creation of Schema

I created an index for my object based on the required fields mentioned in
the Riak Search docs, plus other fields mapped to my object. I have only a
few fields set to be indexed and stored, as I only want to search on them.

    YokozunaIndex yokozunaIndex = new YokozunaIndex("test_idx", "test");
    StoreIndex storeIndex = new StoreIndex.Builder(yokozunaIndex).build();
    riakClient.execute(storeIndex);

3. Set Bucket Properties

I then associate my bucket with the index as part of the same application
start-up. Before I attach the index to my bucket, I verify whether it has
already been attached, using a fetch.

    StoreBucketProperties sbp =
        new StoreBucketProperties.Builder(namespace)
            .withAllowMulti(false)
            .withLastWriteWins(true)
            .withSearchIndex("test_idx")
            .build();
    riakClient.execute(sbp);

Regards,
Nirav
From: Luc Perkins <lperk...@basho.com>
To: Nirav Shah <niravis...@yahoo.com>
Cc: Shawn Debnath <sh...@debnath.net>; "riak-users@lists.basho.com" <riak-users@lists.basho.com>
Sent: Thursday, February 5, 2015 11:04 AM
Subject: Re: Riak Search Production Configuration
Nirav,
Could you possibly detail the steps you used to upload the schema, adjust the 
bucket properties, etc.? That would help us identify the issue.
Luc
On Thu, Feb 5, 2015 at 9:42 AM, Nirav Shah  wrote:



Hi Shawn,
Thanks for the response. To give you some background:

1. We are using custom schema with default bucket type
2. I have the search set to on :)
3. I have associated BucketProperties/Index to my buckets.
4. What I am seeing is that I am getting data back, but for some reason I am not
getting the entire set. When I query Riak I see the data; however, when I query
the Solr indexes, that data is missing. At this point, I don't know what can
cause this and am looking for people who might have faced similar issues.

My only config changes are setting search = on in riak.conf and adjusting the
JVM settings for Solr there.

I would appreciate any pointers and best practices around Riak Search and AAE
settings that folks have running in production clusters.

Regards,
Nirav

From: Shawn Debnath <sh...@debnath.net>
To: Nirav Shah <niravis...@yahoo.com>; "riak-users@lists.basho.com" <riak-users@lists.basho.com>
Sent: Thursday, February 5, 2015 9:13 AM
Subject: Re: Riak Search Production Configuration
Hi Nirav,
About your last point: just yesterday I started playing with Search 2.0 (Solr)
and Riak. Basho did a good job of integrating the Solr platform, but the docs are
sometimes misleading. One thing I found out was that, using the default schema
provided by Basho, if you are using CRDTs, your fields are suffixed with
_register, _counter, _set. This link
(http://docs.basho.com/riak/latest/dev/search/search-data-types/) has a good
set of examples, but it is best to experiment. I ended up diving into the data
dir of Solr and grep’ed for parts of my field names to figure out what a field
actually was. When running queries, Solr/Riak will not let you know that fields
are incorrect; it just doesn’t have any data for those, so it returns no search
results.
Good luck.
Shawn
PS. Be sure to have search=on in riak.conf :)
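To illustrate the suffix convention Shawn describes, a query against Riak's
HTTP search endpoint might look like the following sketch (the host, index
name, and field name are hypothetical; only the _register suffix behavior is
the point):

```shell
# Hypothetical example: objects in the indexed bucket are maps with a
# "name" register. Under the default schema the Solr field is
# "name_register", so that is the field to query:
curl "http://localhost:8098/search/query/my_index?wt=json&q=name_register:Joe"

# Querying the unsuffixed field name silently returns zero results
# rather than an error, which is the trap described above:
curl "http://localhost:8098/search/query/my_index?wt=json&q=name:Joe"
```

Both requests succeed at the HTTP level; only the first returns documents,
which is why a missing suffix is easy to mistake for missing data.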


On 2/5/15, 7:34 AM, "Nirav Shah"  wrote:

Hi All,
Just wanted to check what configuration settings everyone uses in a clustered
production environment for Riak Search/AAE, and whether someone can share some
experience with it? We currently have 2 GB of memory allocated to Solr and are
currently just using the default parameters from riak.conf.

What we have seen so far is that there is data in Riak but somehow Solr/Riak
Search does not return it. I am trying to find out what can cause this and
whether I am missing some configuration settings.

Any response would be appreciated.

Regards,
Nirav


   
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com





  ___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Mapred unable to find keys that use bucket types

2015-02-05 Thread Christopher Meiklejohn

> On Feb 5, 2015, at 10:55 AM, Mikhail Pustovalov  wrote:
> 
> Hello,
> I am using MapReduce just as a way to get multiple keys in one query (I 
> couldn't find a better way). My code used to work with Riak v.1.4 but now 
> when I try to run it against the latest version (2.0.4) mapred queries return 
> {error, notfound} for each key supplied.
> I have created a bucket type, put my keys inside a bucket in that type. 
> Simple 'put' and 'get' work fine. This line returns requested object:
> riakc_pb_socket:get(Pid,{<<"avs_n2">>,<<"avatars">>},<<145,3,100,41,46>>).
> This line though:
> riakc_pb_socket:mapred(Pid, 
> [{{<<"avs_n2">>,<<"avatars">>},<<145,3,100,41,46>>}],[{map, {modfun, 
> rc_mapred, map_kv_pairs}, none, true}]).
> returns this:
> {ok,[{0,[{error,notfound}]}]}
> Seems like mapred functions are unable to query using bucket types. Without 
> bucket types everything still works fine.
> Also mapred_bucket over a whole bucket also works fine.
> Please, advise. Is it possible to use mapred with newly introduced bucket 
> types when I want only specific keys and not the full scan of a bucket?

Hi Mikhail,

You'll need to specify the bucket type as part of the bucket name when
performing the MapReduce. For example, for inputs targeting the "maps" bucket
type and "users" bucket, you should use {<<"maps">>, <<"users">>} as the bucket
name for riakc_pb_socket.

Thanks,
- Chris

Christopher Meiklejohn
Senior Software Engineer
Basho Technologies, Inc.
cmeiklej...@basho.com
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Search Production Configuration

2015-02-05 Thread Luc Perkins
Nirav,

Could you possibly detail the steps you used to upload the schema, adjust
the bucket properties, etc.? That would help us identify the issue.

Luc

On Thu, Feb 5, 2015 at 9:42 AM, Nirav Shah  wrote:

> Hi Shawn,
> Thanks for the response. To give you some background
>
> 1. We are using a custom schema with the default bucket type.
> 2. I have search set to on :)
> 3. I have associated bucket properties/index with my buckets.
> 4. What I am seeing is that I am getting data back, but for some reason I am
> not getting the entire set. When I query Riak I see the data; however, when
> I query the Solr indexes, that data is missing. At this point, I don't know
> what can cause this and am looking for people who might have faced similar
> issues.
>
> My default config is just changing search=on in riak.conf, changed the JVM
> settings in riak.conf for Solr.
>
>
> Would appreciate any pointers and best practice around settings for Riak
> Search and AAE in production cluster that i should add that folks have
> running in production cluster.
>
>
>
> Regards,
> Nirav
>
>
>   --
>  *From:* Shawn Debnath 
> *To:* Nirav Shah ; "riak-users@lists.basho.com" <
> riak-users@lists.basho.com>
> *Sent:* Thursday, February 5, 2015 9:13 AM
>
> *Subject:* Re: Riak Search Production Configuration
>
>  Hi Nirav,
>
>  About your last point: just yesterday I started playing with Search 2.0
> (Solr) and Riak. Basho did a good job of integrating the Solr platform, but
> the docs are sometimes misleading. One thing I found out is that, with the
> default schema provided by Basho, if you are using CRDTs your fields are
> suffixed with _register, _counter, _set. This link (
> http://docs.basho.com/riak/latest/dev/search/search-data-types/) has a
> good set of examples, but it is best to experiment. I ended up diving into
> the data dir of Solr and grep'ed for parts of my field names to figure out
> what they actually were. When running queries, Solr/Riak will not let you
> know that fields are incorrect; it just doesn't have any data for those, so
> it returns no search results.
>
>  Good luck.
>
>  Shawn
>
>  PS. Be sure to have search=on in riak.conf :)
>
>
>
> On 2/5/15, 7:34 AM, "Nirav Shah"  wrote:
>
>    Hi All,
> Just wanted to check what configuration settings everyone uses in a
> clustered production environment for Riak Search/AAE, and whether someone
> can share some experience with it? We currently have 2 GB of memory
> allocated to Solr and are currently just using the default parameters from
> riak.conf.
>
>  What we have seen so far is that there is data in Riak but somehow
> Solr/Riak Search does not return it. I am trying to find out what can cause
> this and whether I am missing some configuration settings.
>
>
>
>  Any response would be appreciated.
>
>
>  Regards,
> Nirav
>
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Mapred unable to find keys that use bucket types

2015-02-05 Thread Mikhail Pustovalov
Hello,
I am using MapReduce just as a way to get multiple keys in one query (I
couldn't find a better way). My code used to work with Riak v.1.4 but now
when I try to run it against the latest version (2.0.4) mapred queries
return {error, notfound} for each key supplied.
I have created a bucket type, put my keys inside a bucket in that type.
Simple 'put' and 'get' work fine. This line returns requested object:
riakc_pb_socket:get(Pid,{<<"avs_n2">>,<<"avatars">>},<<145,3,100,41,46>>).
This line though:
riakc_pb_socket:mapred(Pid,
[{{<<"avs_n2">>,<<"avatars">>},<<145,3,100,41,46>>}],[{map, {modfun,
rc_mapred, map_kv_pairs}, none, true}]).
returns this:
{ok,[{0,[{error,notfound}]}]}
Seems like mapred functions are unable to query using bucket types. Without
bucket types everything still works fine.
Also mapred_bucket over a whole bucket also works fine.
Please, advise. Is it possible to use mapred with newly introduced bucket
types when I want only specific keys and not the full scan of a bucket?

Kind regards,
Michael
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Search Production Configuration

2015-02-05 Thread Nirav Shah
Hi Shawn,
Thanks for the response. To give you some background:

1. We are using a custom schema with the default bucket type.
2. I have search set to on :)
3. I have associated bucket properties/index with my buckets.
4. What I am seeing is that I am getting data back, but for some reason I am
not getting the entire set. When I query Riak I see the data; however, when I
query the Solr indexes, that data is missing. At this point, I don't know what
can cause this and am looking for people who might have faced similar issues.

My only config change is search = on in riak.conf, plus the JVM settings for
Solr in riak.conf.

I would appreciate any pointers and best practices around settings for Riak
Search and AAE in a production cluster.

Regards,
Nirav

  From: Shawn Debnath 
 To: Nirav Shah ; "riak-users@lists.basho.com" 
 
 Sent: Thursday, February 5, 2015 9:13 AM
 Subject: Re: Riak Search Production Configuration
   
Hi Nirav,
About your last point: just yesterday I started playing with Search 2.0 (Solr)
and Riak. Basho did a good job of integrating the Solr platform, but the docs
are sometimes misleading. One thing I found out is that, with the default
schema provided by Basho, if you are using CRDTs your fields are suffixed with
_register, _counter, _set. This link
(http://docs.basho.com/riak/latest/dev/search/search-data-types/) has a good
set of examples, but it is best to experiment. I ended up diving into the data
dir of Solr and grep'ed for parts of my field names to figure out what they
actually were. When running queries, Solr/Riak will not let you know that
fields are incorrect; it just doesn't have any data for those, so it returns
no search results.
Good luck.
Shawn
PS. Be sure to have search=on in riak.conf :)


On 2/5/15, 7:34 AM, "Nirav Shah"  wrote:

Hi All,
Just wanted to check what configuration settings everyone uses in a clustered
production environment for Riak Search/AAE, and whether someone can share some
experience with it? We currently have 2 GB of memory allocated to Solr and are
currently just using the default parameters from riak.conf.

What we have seen so far is that there is data in Riak but somehow Solr/Riak
Search does not return it. I am trying to find out what can cause this and
whether I am missing some configuration settings.

Any response would be appreciated.

Regards,
Nirav


  ___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Build otp_src_R16B02-basho5 on Heroku

2015-02-05 Thread Christopher Meiklejohn

> On Feb 4, 2015, at 10:59 PM, Corentin Jechoux 
>  wrote:
> 
> Thank you Christopher for your fast answer. However, I did not mention
> that I was using the "cedar-14" stack, which has an ephemeral filesystem:
> the filesystem allows write operations, but written files are not saved when
> a dyno restarts [1]. Moreover, I can write files, because the "make" command
> did not fail at the first compilation; yet it fails, each time, with the
> same file.
> 
> cd lib && \
>   ERL_TOP=/app/otp_src_R16B02-basho5 
> PATH=/app/otp_src_R16B02-basho5/bootstrap/bin:"${PATH}" \
> make opt SECONDARY_BOOTSTRAP=true
> make[1]: Entering directory `/app/otp_src_R16B02-basho5/lib'
> make[2]: Entering directory `/app/otp_src_R16B02-basho5/lib/hipe'
> Makefile:71: warning: overriding commands for target `clean'
> /app/otp_src_R16B02-basho5/make/otp_subdir.mk:28: warning: ignoring old
> commands for target `clean'
> === Entering application hipe
> make[3]: Entering directory `/app/otp_src_R16B02-basho5/lib/hipe/rtl'
> (cd ../main && make hipe.hrl)
> make[4]: Entering directory `/app/otp_src_R16B02-basho5/lib/hipe/main'
> sed -e "s;%VSN%;3.10.2.1;" ../../hipe/main/hipe.hrl.src > 
> ../../hipe/main/hipe.hrl
> make[4]: Leaving directory `/app/otp_src_R16B02-basho5/lib/hipe/main'
> erlc -W  +debug_info +inline +warn_unused_import +warn_exported_vars 
> -o../ebin hipe_rtl.erl
> /app/otp_src_R16B02-basho5/lib/hipe/rtl/../ebin/hipe_rtl.bea#: error writing 
> file

You’re clearly having a problem writing a file, but I’m not sure why.  I’m able 
to 
build this release without running into the same issue.

Have you tried contacting Heroku?

- Chris

Christopher Meiklejohn
Senior Software Engineer
Basho Technologies, Inc.
cmeiklej...@basho.com
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Search Production Configuration

2015-02-05 Thread Shawn Debnath
Hi Nirav,

About your last point: just yesterday I started playing with Search 2.0 (Solr)
and Riak. Basho did a good job of integrating the Solr platform, but the docs
are sometimes misleading. One thing I found out is that, with the default
schema provided by Basho, if you are using CRDTs your fields are suffixed with
_register, _counter, _set. This link
(http://docs.basho.com/riak/latest/dev/search/search-data-types/) has a good
set of examples, but it is best to experiment. I ended up diving into the data
dir of Solr and grep'ed for parts of my field names to figure out what they
actually were. When running queries, Solr/Riak will not let you know that
fields are incorrect; it just doesn't have any data for those, so it returns
no search results.

Good luck.

Shawn

PS. Be sure to have search=on in riak.conf :)
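As a concrete starting point, the relevant riak.conf lines might look like the
following sketch (the setting names are from Riak 2.0's riak.conf; the JVM
values are illustrative, not a sizing recommendation):

```ini
## Enable Riak Search; Solr must be able to start for the node to come up
search = on

## JVM options passed to the embedded Solr process; size the heap (-Xmx)
## to your index workload rather than leaving the shipped default
search.solr.jvm_options = -d64 -Xms1g -Xmx2g -XX:+UseCompressedOops
```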

On 2/5/15, 7:34 AM, "Nirav Shah" 
mailto:niravis...@yahoo.com>> wrote:

Hi All,
Just wanted to check what configuration settings everyone uses in a clustered
production environment for Riak Search/AAE, and whether someone can share some
experience with it? We currently have 2 GB of memory allocated to Solr and are
currently just using the default parameters from riak.conf.

What we have seen so far is that there is data in Riak but somehow Solr/Riak
Search does not return it. I am trying to find out what can cause this and
whether I am missing some configuration settings.



Any response would be appreciated.


Regards,
Nirav
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Build otp_src_R16B02-basho5 on Heroku

2015-02-05 Thread Corentin Jechoux
Thank you Christopher for your fast answer. However, I did not mention
that I was using the "cedar-14" stack, which has an ephemeral filesystem: the
filesystem allows write operations, but written files are not saved when a
dyno restarts [1]. Moreover, I can write files, because the "make" command did
not fail at the first compilation; yet it fails, each time, with the same file.

cd lib && \
  ERL_TOP=/app/otp_src_R16B02-basho5
PATH=/app/otp_src_R16B02-basho5/bootstrap/bin:"${PATH}" \
make opt SECONDARY_BOOTSTRAP=true
make[1]: Entering directory `/app/otp_src_R16B02-basho5/lib'
make[2]: Entering directory `/app/otp_src_R16B02-basho5/lib/hipe'
Makefile:71: warning: overriding commands for target `clean'
/app/otp_src_R16B02-basho5/make/otp_subdir.mk:28: warning: ignoring old
commands for target `clean'
=== Entering application hipe
make[3]: Entering directory `/app/otp_src_R16B02-basho5/lib/hipe/rtl'
(cd ../main && make hipe.hrl)
make[4]: Entering directory `/app/otp_src_R16B02-basho5/lib/hipe/main'
sed -e "s;%VSN%;3.10.2.1;" ../../hipe/main/hipe.hrl.src >
../../hipe/main/hipe.hrl
make[4]: Leaving directory `/app/otp_src_R16B02-basho5/lib/hipe/main'
erlc -W  +debug_info +inline +warn_unused_import +warn_exported_vars
-o../ebin hipe_rtl.erl
/app/otp_src_R16B02-basho5/lib/hipe/rtl/../ebin/hipe_rtl.bea#: error
writing file
make[3]: *** [../ebin/hipe_rtl.beam] Error 1
make[3]: Leaving directory `/app/otp_src_R16B02-basho5/lib/hipe/rtl'
make[2]: *** [opt] Error 2
make[2]: Leaving directory `/app/otp_src_R16B02-basho5/lib/hipe'
make[1]: *** [opt] Error 2
make[1]: Leaving directory `/app/otp_src_R16B02-basho5/lib'
make: *** [secondary_bootstrap_build] Error 2

Regards

Corentin

[1] https://devcenter.heroku.com/articles/dynos#ephemeral-filesystem

2015-02-03 22:30 GMT+01:00 Christopher Meiklejohn :

>
> On Feb 3, 2015, at 7:45 AM, Corentin Jechoux <
> corentin.jechoux@gmail.com> wrote:
>
> Hello,
>
> I try to build Basho's Erlang version on Heroku server.
>
> The commands used are
> ./configure
> make
>
> However I face an issue :
>
> cd lib && \
>   ERL_TOP=/app/otp_src_R16B02-basho5
> PATH=/app/otp_src_R16B02-basho5/bootstrap/bin:"${PATH}" \
> make opt SECONDARY_BOOTSTRAP=true
> make[1]: Entering directory `/app/otp_src_R16B02-basho5/lib'
> make[2]: Entering directory `/app/otp_src_R16B02-basho5/lib/hipe'
> Makefile:71: warning: overriding commands for target `clean'
> /app/otp_src_R16B02-basho5/make/otp_subdir.mk:28: warning: ignoring old
> commands for target `clean'
> === Entering application hipe
> make[3]: Entering directory `/app/otp_src_R16B02-basho5/lib/hipe/rtl'
> erlc -W  +debug_info +inline +warn_unused_import +warn_exported_vars
> -o../ebin hipe_rtl.erl
> /app/otp_src_R16B02-basho5/lib/hipe/rtl/../ebin/hipe_rtl.bea#: error
> writing file
> make[3]: *** [../ebin/hipe_rtl.beam] Error 1
> make[3]: Leaving directory `/app/otp_src_R16B02-basho5/lib/hipe/rtl'
> make[2]: *** [opt] Error 2
> make[2]: Leaving directory `/app/otp_src_R16B02-basho5/lib/hipe'
> make[1]: *** [opt] Error 2
> make[1]: Leaving directory `/app/otp_src_R16B02-basho5/lib'
> make: *** [secondary_bootstrap_build] Error 2
>
>
> Heroku’s dyno's provide a read-only filesystem.  There are two locations
> that provide ephemeral storage, specifically documented here [1], where you
> need to compile Erlang.
>
> Thanks,
> - Chris
>
> [1] https://devcenter.heroku.com/articles/read-only-filesystem
>
> Christopher Meiklejohn
> Senior Software Engineer
> Basho Technologies, Inc.
> cmeiklej...@basho.com
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Riak Search Production Configuration

2015-02-05 Thread Nirav Shah
Hi All,
Just wanted to check what configuration settings everyone uses in a clustered
production environment for Riak Search/AAE, and whether someone can share some
experience with it? We currently have 2 GB of memory allocated to Solr and are
currently just using the default parameters from riak.conf.

What we have seen so far is that there is data in Riak but somehow Solr/Riak
Search does not return it. I am trying to find out what can cause this and
whether I am missing some configuration settings.

Any response would be appreciated.

Regards,
Nirav
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: [riak-user]Cannot startup riak node correctly after successful installation

2015-02-05 Thread Ryan Zezeski

YouBarco writes:

> Hello,
>
> My OS is ubuntu 14.04 64bit, and installed erlang from source with version 
> R16B as following:

That's your problem. You MUST use the custom Basho branch of Erlang/OTP
with Riak. If you insist on building Erlang/Riak from source then follow
this guide for Erlang:

http://docs.basho.com/riak/latest/ops/building/installing/erlang/
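For reference, building the patched branch follows the usual OTP
source procedure, just starting from Basho's tarball rather than vanilla
Erlang. A rough sketch (the tarball name matches the one in this thread; the
guide above is authoritative for flags and prerequisites):

```shell
# Unpack Basho's patched OTP source tree
tar -xzf otp_src_R16B02-basho5.tar.gz
cd otp_src_R16B02-basho5

# Configure, build, and install; Riak needs this branch, not vanilla R16B,
# because of scheduler flags like -sfwi and other fixes
./configure
make
sudo make install

# Confirm which erl is now first on the PATH before building Riak
which erl
```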

> bad scheduling option -sfwi

This flag was added by Basho to the 16B series. IIRC, vanilla 16B02
includes this flag, but you should still use Basho's custom branch since
it has other fixes required by Riak.

-Z

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: [riak-user]Cannot startup riak node correctly after successful installation

2015-02-05 Thread Ildar Alishev
I don't think the problem is the file limit. The problem is that Riak cannot
start due to an Erlang issue (or something like it). I had the same error before.
What did I do?

I deleted everything related to Riak and Erlang,
installed OTP_R16B02,

then installed Riak 2.0+,
and it works :)

Ildar


> On Feb 5, 2015, at 12:35, YouBarco  wrote:
> 
> Hello,
> 
> My OS is ubuntu 14.04 64bit, and installed erlang from source with version 
> R16B as following:
> ---
> ubuntu@riak1:~/riak-2.0.4/dev$ erl
> Erlang R16B (erts-5.10.1) [source] [64-bit] [smp:4:4] [async-threads:10] 
> [kernel-poll:false] [dtrace]
> 
> Eshell V5.10.1  (abort with ^G)
> 1> 
> 
> I downloaded riak-2.0.4 source and compile it successfully, then do following:
> >make devrel DEVNODES=2
> >cd dev
> >dev1/bin/riak start
> 
>  WARNING: ulimit -n is 1024; 65536 is the recommended minimum.
> 
> riak failed to start within 15 seconds,
> see the output of 'riak console' for more information.
> If you want to wait longer, set the environment variable
> WAIT_FOR_ERLANG to the number of seconds to wait.
> ubuntu@riak1:~/riak-2.0.4/dev$ dev1/bin/riak console
> config is OK
> -config 
> /home/ubuntu/riak-2.0.4/dev/dev1/data/generated.configs/app.2015.02.05.17.26.32.config
>  -args_file 
> /home/ubuntu/riak-2.0.4/dev/dev1/data/generated.configs/vm.2015.02.05.17.26.32.args
>  -vm_args 
> /home/ubuntu/riak-2.0.4/dev/dev1/data/generated.configs/vm.2015.02.05.17.26.32.args
> 
>  WARNING: ulimit -n is 1024; 65536 is the recommended minimum.
> 
> Exec:  /home/ubuntu/riak-2.0.4/dev/dev1/bin/../erts-5.10.1/bin/erlexec -boot 
> /home/ubuntu/riak-2.0.4/dev/dev1/bin/../releases/2.0.4/riak   
> -config 
> /home/ubuntu/riak-2.0.4/dev/dev1/data/generated.configs/app.2015.02.05.17.26.32.config
>  -args_file 
> /home/ubuntu/riak-2.0.4/dev/dev1/data/generated.configs/vm.2015.02.05.17.26.32.args
>  -vm_args 
> /home/ubuntu/riak-2.0.4/dev/dev1/data/generated.configs/vm.2015.02.05.17.26.32.args
>   -pa /home/ubuntu/riak-2.0.4/dev/dev1/bin/../lib/basho-patches 
> -- console
> Root: /home/ubuntu/riak-2.0.4/dev/dev1/bin/..
> bad scheduling option -sfwi
> Usage: beam.smp [flags] [ -- [init_args] ]
> The flags are:
> 
> -a size suggested stack size in kilo words for threads
> in the async-thread pool, valid range is [16-8192]
> -A number   set number of threads in async thread pool,
> valid range is [0-1024]
> -B[c|d|i]   c to have Ctrl-c interrupt the Erlang shell,
> d (or no extra option) to disable the break
> handler, i to ignore break signals
> -c  disable continuous date/time correction with
> respect to uptime
> -d  don't write a crash dump for internally detected errors
> (halt(String) will still produce a crash dump)
> -fn[u|a|l]  Control how filenames are interpreted
> -hms size   set minimum heap size in words (default 233)
> -hmbs size  set minimum binary virtual heap size in words (default 32768)
> -K boolean  enable or disable kernel poll
> -n[s|a|d]   Control behavior of signals to ports
> Note that this flag is deprecated!
> -Mmemory allocator switches,
> see the erts_alloc(3) documentation for more info.
> -pcControl what characters are considered printable (default latin1)
> -P number   set maximum number of processes on this node,
> valid range is [1024-134217727]
> -Q number   set maximum number of ports on this node,
> valid range is [1024-134217727]
> -R number   set compatibility release number,
> valid range [14-16]
> -r  force ets memory block to be moved on realloc
> -rg amount  set reader groups limit
> -sbt type   set scheduler bind type, valid types are:
> -stbt type  u|ns|ts|ps|s|nnts|nnps|tnnps|db
> -sbwt val   set scheduler busy wait threshold, valid values are:
> none|very_short|short|medium|long|very_long.
> -scl bool   enable/disable compaction of scheduler load,
> see the erl(1) documentation for more info.
> -sct cput   set cpu topology,
> see the erl(1) documentation for more info.
> -sws valset scheduler wakeup strategy, valid values are:
> default|legacy.
> -swt valset scheduler wakeup threshold, valid values are:
> very_low|low|medium|high|very_high.
> -sss size   suggested stack size in kilo words for scheduler threads,
> valid range is [4-8192]
> -spp Bool   set port parallelism scheduling hint
> -S n1:n2set number of schedulers (n1), and number of
> schedulers online (n2), valid range for both
> numbers are [1-1024]
> -t size set the maxim

Re: [riak-user]Cannot startup riak node correctly after successful installation

2015-02-05 Thread Alexander Sicular
I would probably take a look at the open-files-limit docs:
http://docs.basho.com/riak/latest/ops/tuning/open-files-limit/. This becomes
more acute the more nodes you run on the same OS.
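For a quick check before chasing anything else, something like this helps (the
persistent fix belongs in /etc/security/limits.conf or the node's init script,
as the linked docs describe):

```shell
# Show the current per-process open-file limit; the warning in this
# thread fires when this reports 1024
ulimit -n

# Raise it for the current shell before starting the node; raising it
# above the hard limit requires root / pam_limits configuration
ulimit -n 65536 2>/dev/null || echo "hard limit too low; see limits.conf"
```

Note that the limit applies per shell session, so it must be raised in the
same shell (or service environment) that launches riak.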

-Alexander 

@siculars
http://siculars.posthaven.com

Sent from my iRotaryPhone

> On Feb 5, 2015, at 01:35, YouBarco  wrote:
> 
> Hello,
> 
> My OS is ubuntu 14.04 64bit, and installed erlang from source with version 
> R16B as following:
> ---
> ubuntu@riak1:~/riak-2.0.4/dev$ erl
> Erlang R16B (erts-5.10.1) [source] [64-bit] [smp:4:4] [async-threads:10] 
> [kernel-poll:false] [dtrace]
> 
> Eshell V5.10.1  (abort with ^G)
> 1> 
> 
> I downloaded riak-2.0.4 source and compile it successfully, then do following:
> >make devrel DEVNODES=2
> >cd dev
> >dev1/bin/riak start
> 
>  WARNING: ulimit -n is 1024; 65536 is the recommended minimum.
> 
> riak failed to start within 15 seconds,
> see the output of 'riak console' for more information.
> If you want to wait longer, set the environment variable
> WAIT_FOR_ERLANG to the number of seconds to wait.
> ubuntu@riak1:~/riak-2.0.4/dev$ dev1/bin/riak console
> config is OK
> -config 
> /home/ubuntu/riak-2.0.4/dev/dev1/data/generated.configs/app.2015.02.05.17.26.32.config
>  -args_file 
> /home/ubuntu/riak-2.0.4/dev/dev1/data/generated.configs/vm.2015.02.05.17.26.32.args
>  -vm_args 
> /home/ubuntu/riak-2.0.4/dev/dev1/data/generated.configs/vm.2015.02.05.17.26.32.args
> 
>  WARNING: ulimit -n is 1024; 65536 is the recommended minimum.
> 
> Exec:  /home/ubuntu/riak-2.0.4/dev/dev1/bin/../erts-5.10.1/bin/erlexec -boot 
> /home/ubuntu/riak-2.0.4/dev/dev1/bin/../releases/2.0.4/riak   
> -config 
> /home/ubuntu/riak-2.0.4/dev/dev1/data/generated.configs/app.2015.02.05.17.26.32.config
>  -args_file 
> /home/ubuntu/riak-2.0.4/dev/dev1/data/generated.configs/vm.2015.02.05.17.26.32.args
>  -vm_args 
> /home/ubuntu/riak-2.0.4/dev/dev1/data/generated.configs/vm.2015.02.05.17.26.32.args
>   -pa /home/ubuntu/riak-2.0.4/dev/dev1/bin/../lib/basho-patches 
> -- console
> Root: /home/ubuntu/riak-2.0.4/dev/dev1/bin/..
> bad scheduling option -sfwi
> Usage: beam.smp [flags] [ -- [init_args] ]
> The flags are:
> 
> -a size suggested stack size in kilo words for threads
> in the async-thread pool, valid range is [16-8192]
> -A number   set number of threads in async thread pool,
> valid range is [0-1024]
> -B[c|d|i]   c to have Ctrl-c interrupt the Erlang shell,
> d (or no extra option) to disable the break
> handler, i to ignore break signals
> -c  disable continuous date/time correction with
> respect to uptime
> -d  don't write a crash dump for internally detected errors
> (halt(String) will still produce a crash dump)
> -fn[u|a|l]  Control how filenames are interpreted
> -hms size   set minimum heap size in words (default 233)
> -hmbs size  set minimum binary virtual heap size in words (default 32768)
> -K boolean  enable or disable kernel poll
> -n[s|a|d]   Control behavior of signals to ports
> Note that this flag is deprecated!
> -Mmemory allocator switches,
> see the erts_alloc(3) documentation for more info.
> -pcControl what characters are considered printable (default latin1)
> -P number   set maximum number of processes on this node,
> valid range is [1024-134217727]
> -Q number   set maximum number of ports on this node,
> valid range is [1024-134217727]
> -R number   set compatibility release number,
> valid range [14-16]
> -r  force ets memory block to be moved on realloc
> -rg amount  set reader groups limit
> -sbt type   set scheduler bind type, valid types are:
> -stbt type  u|ns|ts|ps|s|nnts|nnps|tnnps|db
> -sbwt val   set scheduler busy wait threshold, valid values are:
> none|very_short|short|medium|long|very_long.
> -scl bool   enable/disable compaction of scheduler load,
> see the erl(1) documentation for more info.
> -sct cput   set cpu topology,
> see the erl(1) documentation for more info.
> -sws valset scheduler wakeup strategy, valid values are:
> default|legacy.
> -swt valset scheduler wakeup threshold, valid values are:
> very_low|low|medium|high|very_high.
> -sss size   suggested stack size in kilo words for scheduler threads,
> valid range is [4-8192]
> -spp Bool   set port parallelism scheduling hint
> -S n1:n2set number of schedulers (n1), and number of
> schedulers online (n2), valid range for both
> numbers are [1-1024]
> -t size set the maximum number of atoms the emulator can handle
> 

[riak-user]Cannot startup riak node correctly after successful installation

2015-02-05 Thread YouBarco
Hello,

My OS is Ubuntu 14.04 64-bit, and I installed Erlang from source, version
R16B, as follows:
---
ubuntu@riak1:~/riak-2.0.4/dev$ erl
Erlang R16B (erts-5.10.1) [source] [64-bit] [smp:4:4] [async-threads:10] 
[kernel-poll:false] [dtrace]

Eshell V5.10.1  (abort with ^G)
1> 

I downloaded the riak-2.0.4 source and compiled it successfully, then did the following:
>make devrel DEVNODES=2
>cd dev
>dev1/bin/riak start

 WARNING: ulimit -n is 1024; 65536 is the recommended minimum.

riak failed to start within 15 seconds,
see the output of 'riak console' for more information.
If you want to wait longer, set the environment variable
WAIT_FOR_ERLANG to the number of seconds to wait.
ubuntu@riak1:~/riak-2.0.4/dev$ dev1/bin/riak console
config is OK
-config 
/home/ubuntu/riak-2.0.4/dev/dev1/data/generated.configs/app.2015.02.05.17.26.32.config
 -args_file 
/home/ubuntu/riak-2.0.4/dev/dev1/data/generated.configs/vm.2015.02.05.17.26.32.args
 -vm_args 
/home/ubuntu/riak-2.0.4/dev/dev1/data/generated.configs/vm.2015.02.05.17.26.32.args

 WARNING: ulimit -n is 1024; 65536 is the recommended minimum.

Exec:  /home/ubuntu/riak-2.0.4/dev/dev1/bin/../erts-5.10.1/bin/erlexec -boot 
/home/ubuntu/riak-2.0.4/dev/dev1/bin/../releases/2.0.4/riak   
-config 
/home/ubuntu/riak-2.0.4/dev/dev1/data/generated.configs/app.2015.02.05.17.26.32.config
 -args_file 
/home/ubuntu/riak-2.0.4/dev/dev1/data/generated.configs/vm.2015.02.05.17.26.32.args
 -vm_args 
/home/ubuntu/riak-2.0.4/dev/dev1/data/generated.configs/vm.2015.02.05.17.26.32.args
  -pa /home/ubuntu/riak-2.0.4/dev/dev1/bin/../lib/basho-patches -- 
console
Root: /home/ubuntu/riak-2.0.4/dev/dev1/bin/..
bad scheduling option -sfwi
Usage: beam.smp [flags] [ -- [init_args] ]
The flags are:

-a size suggested stack size in kilo words for threads
in the async-thread pool, valid range is [16-8192]
-A number   set number of threads in async thread pool,
valid range is [0-1024]
-B[c|d|i]   c to have Ctrl-c interrupt the Erlang shell,
d (or no extra option) to disable the break
handler, i to ignore break signals
-c  disable continuous date/time correction with
respect to uptime
-d  don't write a crash dump for internally detected errors
(halt(String) will still produce a crash dump)
-fn[u|a|l]  Control how filenames are interpreted
-hms size   set minimum heap size in words (default 233)
-hmbs size  set minimum binary virtual heap size in words (default 32768)
-K boolean  enable or disable kernel poll
-n[s|a|d]   Control behavior of signals to ports
Note that this flag is deprecated!
-Mmemory allocator switches,
see the erts_alloc(3) documentation for more info.
-pcControl what characters are considered printable (default latin1)
-P number   set maximum number of processes on this node,
valid range is [1024-134217727]
-Q number   set maximum number of ports on this node,
valid range is [1024-134217727]
-R number   set compatibility release number,
valid range [14-16]
-r  force ets memory block to be moved on realloc
-rg amount  set reader groups limit
-sbt type   set scheduler bind type, valid types are:
-stbt type  u|ns|ts|ps|s|nnts|nnps|tnnps|db
-sbwt val   set scheduler busy wait threshold, valid values are:
none|very_short|short|medium|long|very_long.
-scl bool   enable/disable compaction of scheduler load,
see the erl(1) documentation for more info.
-sct cput   set cpu topology,
see the erl(1) documentation for more info.
-sws valset scheduler wakeup strategy, valid values are:
default|legacy.
-swt valset scheduler wakeup threshold, valid values are:
very_low|low|medium|high|very_high.
-sss size   suggested stack size in kilo words for scheduler threads,
valid range is [4-8192]
-spp Bool   set port parallelism scheduling hint
-S n1:n2set number of schedulers (n1), and number of
schedulers online (n2), valid range for both
numbers are [1-1024]
-t size set the maximum number of atoms the emulator can handle
valid range is [8192-0]
-T number   set modified timing level,
valid range is [0-9]
-V  print Erlang version
-v  turn on chatty mode (GCs will be reported etc)
-W set error logger warnings mapping,
see error_logger documentation for details
-zdbbl size set the distribution buffer busy limit in kilobytes
valid range is [1-2097151]

Note that if the emulator is started with erlexec (typically
from the erl script), these flags should be specified