Stopping solrq supervisor in log; What is this?

2017-06-22 Thread Robert Latko

Anybody seen this in their logs?

yz_solrq_sup:stop_queue_pair:130 Stopping solrq supervisor for index 
<<"search1">>


Searching using the index still works; however, I am getting these log
notices for most of my indexes.


If you know, please tell me...  Thank you!

Sincerely,

Robert



___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak install on instance with multiple OTP's

2016-05-23 Thread Robert Latko

Hi Sargun,

I installed it from source on Ubuntu 14.04 LTS.

I'll take a look at kerl as well; right now this issue is more academic
than anything else.


Sincerely,

Robert

On 05/21/2016 12:42 PM, Sargun Dhillon wrote:

How did you install OTP18? When dealing with Erlang, and multiple
installs of it, I might suggest using kerl
(https://github.com/kerl/kerl). It's an excellent tool for dealing
with the problem.
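
For example, a minimal kerl workflow for this might look like the below
(a sketch only - the Basho OTP git tag and install path here are
assumptions you would adapt):

kerl build git https://github.com/basho/otp.git OTP_R16B02_basho8 r16b02-basho8
kerl install r16b02-basho8 ~/erlang/r16b02-basho8
. ~/erlang/r16b02-basho8/activate    # this shell now uses OTP16
make rel                             # build Riak against the activated OTP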


On Sat, May 21, 2016 at 12:01 PM, Robert Latko <rob...@lmi-global.com> wrote:

Hi all,

Quick question:

I have an instance with OTP18 and I want to make it a Riak node. I
downloaded/installed OTP16 patch 8 for Riak 2.1.4. How then do I get make
rel to use OTP16 instead of the default OTP18?


Thanks in advance.

Sincerely,

Robert




___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Riak install on instance with multiple OTP's

2016-05-21 Thread Robert Latko

Hi all,

Quick question:

I have an instance with OTP18 and I want to make it a Riak node. I
downloaded/installed OTP16 patch 8 for Riak 2.1.4. How then do I get make
rel to use OTP16 instead of the default OTP18?
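
(What I have in mind is something like the below sketch, where the install
path is just a placeholder, so that make picks up OTP16 ahead of OTP18:)

export PATH=/opt/erlang/r16b02/bin:$PATH   # put OTP16 first on PATH
which erl                                  # should now resolve to OTP16
make rel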



Thanks in advance.

Sincerely,

Robert

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: cannot start solr, no logs

2016-04-04 Thread Robert Latko

Hi Saran,

I have a 5-node AWS c3.2xlarge cluster running Ubuntu 14.04 LTS, and I
have my Solr JVM options set to the below (yeah I know, but -Xmx will
never get the full 16GB!):


search.solr.jvm_options = -d64 -Xms1g -Xmx16g -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:G1HeapRegionSize=8m -XX:MaxGCPauseMillis=200 -XX:+UseLargePages -XX:+AggressiveOpts

I changed out the default Java and loaded 8u45 (jdk-8u45-linux-x64.tar.gz).

Solr will spin up just fine and processes heavy queries under load like a
champ.


Maybe on smaller instances, changing out the default Java to 8u45+ and
setting -Xms and -Xmx accordingly would work for you?


Also, check console.log for errors regarding solr.

Lastly, make sure the proper ports are open for Solr to communicate
across your nodes (e.g. search.solr.jmx_port = 8985).
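
For reference, the Solr-related port settings in riak.conf look like the
below (these are the Riak 2.x defaults as I understand them - check your
own config before opening firewall rules):

search.solr.port = 8093
search.solr.jmx_port = 8985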



Robert





On 04/04/2016 09:52 PM, Vitaly E wrote:

Hi Saran,

Just noticed the link at the bottom. You should have given the context 
in your message.


In any case, I would check the total memory your machine has and how much
of it is utilized by other processes. Try DECREASING the heap allocation
for Solr, just to see if it starts - it looks like the requested memory
simply is not available.
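
For example, something like this in riak.conf (the sizes are only
placeholders to experiment with):

search.solr.jvm_options = -d64 -Xms1g -Xmx1g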


Regards,
Vitaly

On Tue, Apr 5, 2016 at 7:45 AM, Vitaly E <13vitam...@gmail.com> wrote:


Hi,

Could you give some more details? Which version of Riak? Your
subject line suggests that you cannot start Solr, but what do you
mean by this? Can you start Riak successfully? Where are you
looking for logs?

Do you have search enabled in riak.conf (search = on)?

Regards,
Vitaly

On Tue, Apr 5, 2016 at 7:32 AM, saran wrote:

Hello,

I have the same problem.
I increased the heap (-d64 -Xms2.5g -Xmx2.5g) and am still having the same
problem.
My ulimit is also good.
ulimit -Sn 4096
ulimit -Hn 65536

Please assist.

Thanks













___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: ring_creation_size & ring_partition_number error joining 2.1.3 node to 2.1.1 cluster

2016-03-06 Thread Robert Latko

Hi all,

Found the problem/solution.

With the first cluster, I set it up with a ring size of 64. Prior to
loading data, I stopped the node(s), changed the ring size to 256, then
restarted.


Therefore, if I want to add a new node, I must FIRST start the node with
a ring size of 64, then stop it, change the conf to a ring size of 256,
then start it again, then stage, plan, and commit.
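
Roughly, that sequence on the new node looks like this (a sketch - the
seed node below is the one from my cluster):

./bin/riak start        # first start, with ring_size = 64 in riak.conf
./bin/riak stop
# now edit riak.conf: ring_size = 256
./bin/riak start
./bin/riak-admin cluster join a-backend@10.0.0.199
./bin/riak-admin cluster plan
./bin/riak-admin cluster commit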


I am still not sure of the relationship between ring size and ring
partitions.

Sincerely,

Robert



On 03/06/2016 02:10 PM, Robert Latko wrote:

Hi All,

I've got a cluster on 2.1.1 and want to join a 2.1.3 node to it.

I am getting the following error:

# ./bin/riak-admin cluster join a-backend@10.0.0.199
Failed: a-backend@10.0.0.199 has a different ring_creation_size

So... for the new 2.1.3
# ./bin/riak-admin status | grep ring
ring_creation_size : 256
ring_members : ['node-template@10.0.0.54']
*ring_num_partitions : 256*
ring_ownership : <<"[{'node-template@10.0.0.54',256}]">>
rings_reconciled : 0
rings_reconciled_total : 0

AND for node a-backend@10.0.0.199
# ./bin/riak-admin status | grep ring
ring_creation_size : 256
ring_members : ['a-backend@10.0.0.199','b-backend@10.0.0.200',
*ring_num_partitions : 64*
ring_ownership : <<"[{'b-backend@10.0.0.200',13},\n 
{'c-backend@10.0.0.201',13},\n {'d-backend@10.0.0.202',13},\n 
{'e-backend@10.0.0.203',12},\n {'a-backend@10.0.0.199',13}]">>

rings_reconciled : 0
rings_reconciled_total : 32

The 'ring_num_partitions' values are different, which leads me to believe
this is the cause.


A.) How do I increase the 'ring_num_partitions' to 256 in the 2.1.1 
cluster  **OR**

B.) How do I decrease the 'ring_num_partitions' to 64 in the 2.1.3 node?

Any ideas or recommendations?


Thanks in advance,


Robert











___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


ring_creation_size & ring_partition_number error joining 2.1.3 node to 2.1.1 cluster

2016-03-06 Thread Robert Latko

Hi All,

I've got a cluster on 2.1.1 and want to join a 2.1.3 node to it.

I am getting the following error:

# ./bin/riak-admin cluster join a-backend@10.0.0.199
Failed: a-backend@10.0.0.199 has a different ring_creation_size

So... for the new 2.1.3
# ./bin/riak-admin status | grep ring
ring_creation_size : 256
ring_members : ['node-template@10.0.0.54']
*ring_num_partitions : 256*
ring_ownership : <<"[{'node-template@10.0.0.54',256}]">>
rings_reconciled : 0
rings_reconciled_total : 0

AND for node a-backend@10.0.0.199
# ./bin/riak-admin status | grep ring
ring_creation_size : 256
ring_members : ['a-backend@10.0.0.199','b-backend@10.0.0.200',
*ring_num_partitions : 64*
ring_ownership : <<"[{'b-backend@10.0.0.200',13},\n 
{'c-backend@10.0.0.201',13},\n {'d-backend@10.0.0.202',13},\n 
{'e-backend@10.0.0.203',12},\n {'a-backend@10.0.0.199',13}]">>

rings_reconciled : 0
rings_reconciled_total : 32

The 'ring_num_partitions' values are different, which leads me to believe
this is the cause.


A.) How do I increase the 'ring_num_partitions' to 256 in the 2.1.1 
cluster  **OR**

B.) How do I decrease the 'ring_num_partitions' to 64 in the 2.1.3 node?

Any ideas or recommendations?


Thanks in advance,


Robert







___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Riak 2.1.3 with levelDB and EC2 instances

2016-02-09 Thread Robert Latko

Hi all,

I have used M3.2xlarge instances in the past because of the compute
(8 CPU) / memory (30GB RAM) combo. I also played around with C3.8xlarge's
against the same dataset and was able to double the performance.


I am about to create a new cluster with 1.5 million records occupying
~15GB.

I am looking for general community feedback on what successes people have
had with EC2 instance types for their Riak/LevelDB deployments.


What is a good instance for this? High Memory? High Compute? or Both?

I am thinking of using 5 x M4.2xlarge (8 CPU/32GB RAM) with Provisioned
IOPS, but am considering stepping down to 5 x M4.xlarge (4 CPU/16GB)
instead, which would cost ~$1000/month less. I'll build it today, init the
data, and start load testing. As with Riak, I can always add the
appropriate EC2 node types later and subtract the inadequate ones.


Thanks in advance,

Robert













___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Riak2.1.1/java.lang.OutOfMemoryError

2015-06-02 Thread Robert Latko

Hi all,

I am new to the list and have been playing around with Riak over the last
couple of months. I've taken on a client project to evaluate an
alternative DB to their MS SQL Server for a specific issue they are
having.


I am having some big problems with Riak... so any guidance, help, 
advice, wisdom is much appreciated.


I have set up a 5-node AWS EC2 m3.2xlarge (8 vCPU, 30GB RAM) cluster
behind an ELB.

I am using Riak v2.1.1 with openjdk-7-jdk for the Java VM.

I am initializing the cluster with 1.6 million XML records in a 15GB
file. I am using PHP XMLReader to stream the file and SimpleXML to parse
individual 'records' (each record has about 120 fields; I am only using 16
for the Riak DB). Going through the entire 15GB file takes ~500 seconds
for just the parsing. (Is there a bulk insert tool available?)
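
The parsing loop is essentially the classic XMLReader + SimpleXML pattern
sketched below (the 'record' element name and the field names are
placeholders, not my actual schema):

<?php
$reader = new XMLReader();
$reader->open('listings.xml');

// advance to the first <record> element
while ($reader->read() && $reader->name !== 'record') { }

while ($reader->name === 'record') {
    // materialize only this record, never the whole 15GB file
    $rec = new SimpleXMLElement($reader->readOuterXML());

    // keep just the fields destined for Riak
    $doc = [
        'listing_id' => (string) $rec->listing_id,
        'price'      => (float)  $rec->price,
    ];
    // ... hand $doc to the riak-php-client insert here ...

    $reader->next('record');   // jump to the next sibling <record>
}
$reader->close();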


I am using riak-php-client to insert records into a map datatype. I have
a custom schema: 1 field with type=location_rpt indexed=true stored=true,
and 15 fields of type float, int, or string with indexed=false
stored=true.
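
For illustration, the corresponding schema.xml field entries look something
like this (the field names are placeholders):

<field name="geo"   type="location_rpt" indexed="true"  stored="true"/>
<field name="price" type="float"        indexed="false" stored="true"/>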


*THE PROBLEM*: After 250K records, inserts come to a slow crawl - going
from 1 insert in 0.02 seconds to 1 insert in 1 second. Finally, it starts
hitting java.lang.OutOfMemoryError: GC overhead limit exceeded errors!!


Some riak.conf settings...
storage_backend = leveldb
search = on
leveldb.maximum_memory.percent = 90
ring_size = 128
search.solr.jvm_options = -d64 -Xms1g -Xmx22g -XX:+UseStringCache -XX:+UseCompressedOops


Did you note the **-Xmx22g** Heap Size?!?!

I've searched Solr performance documents and Java VM garbage collection
documents (should I try Java 8 with the G1 collector, -XX:+UseG1GC?).
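
Something like the below is what I would be trying (a sketch only - the
sizes are placeholders, and -Xmx would have to leave room for leveldb's
memory share):

search.solr.jvm_options = -d64 -Xms2g -Xmx8g -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:MaxGCPauseMillis=200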


So prior to the java.lang.OutOfMemoryError: GC overhead limit exceeded, 
I start accruing the below errors...


2015-06-02 13:01:04.300 [error] <0.1482.0>@yz_kv:index:219 failed to index object {{activelistings,listings},aa164f5cc4c50c61bea377b4a82a26dc} with error {Failed to index docs,{error,req_timedout}}

because [{yz_solr,index,3,[{file,src/yz_solr.erl},{line,199}]},
{yz_kv,index,7,[{file,src/yz_kv.erl},{line,269}]},
{yz_kv,index,3,[{file,src/yz_kv.erl},{line,206}]},
{riak_kv_vnode,actual_put,6,[{file,src/riak_kv_vnode.erl},{line,1582}]},
{riak_kv_vnode,perform_put,3,[{file,src/riak_kv_vnode.erl},{line,1570}]},
{riak_kv_vnode,do_put,7,[{file,src/riak_kv_vnode.erl},{line,1361}]},
{riak_kv_vnode,handle_command,3,[{file,src/riak_kv_vnode.erl},{line,543}]},
{riak_core_vnode,vnode_command,3,[{file,src/riak_core_vnode.erl},{line,345}]}]

And then, after a buildup of the above, I get the
java.lang.OutOfMemoryError: GC overhead limit exceeded error:


2015-06-02 13:01:57.735 [info] <0.711.0>@yz_solr_proc:handle_info:135 solr stdout/err: at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:518)

Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
at org.apache.lucene.store.RAMFile.newBuffer(RAMFile.java:75)
at org.apache.lucene.store.RAMFile.addBuffer(RAMFile.java:48)
at org.apache.lucene.store.RAMOutputStream.switchCurrentBuffer(RAMOutputStream.java:152)
at org.apache.lucene.store.RAMOutputStream.writeByte(RAMOutputStream.java:127)
at org.apache.lucene.store.DataOutput.writeVInt(DataOutput.java:192)
at org.apache.lucene.codecs.lucene41.Lucene41PostingsWriter.encodeTerm(Lucene41PostingsWriter.java:572)
at org.apache.lucene.codecs.BlockTreeTermsWriter$TermsWriter.writeBlock(BlockTreeTermsWriter.java:882)
at org.apache.lucene.codecs.BlockTreeTermsWriter$TermsWriter.writeBlocks(BlockTreeTermsWriter.java:579)


Sincerely,


Robert Latko

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com