Re: Precommit hook function - no error log - how to debug?

2016-05-16 Thread Sanket Agrawal
Luke, just tested and confirmed that precommit hooks are still not
triggered for normal buckets. Objects get written but precommit hooks don't
trigger. Manually simulating a trigger by executing the function on riak
console works fine.
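(For reference, a precommit hook module of the shape this thread assumes looks roughly like the sketch below. The real rtriggers code is not shown anywhere in the thread, so the body here is illustrative only; the contract — a 1-arity function that returns the object, or `fail`/{fail, Reason} to reject the write — is the documented Riak KV hook interface.)

```erlang
%% Illustrative sketch only -- the real rtriggers module is not shown in
%% this thread. A Riak KV precommit hook receives the riak_object and must
%% return it (possibly modified), or `fail` / {fail, Reason} to reject
%% the write.
-module(rtriggers).
-export([pre_all/1]).

pre_all(Object) ->
    %% Log so we can see whether the hook fired at all.
    error_logger:info_msg("precommit fired for key ~p~n",
                          [riak_object:key(Object)]),
    Object.
```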

Since the sasl log is not being written despite being turned on, isn't that a
sign that something is wrong with the precommit/postcommit setup? This is what
Douglas Rohrer said earlier in the thread:

> Huh - I get a huge amount of logging when I turn on sasl using
> advanced.config - specifically, I have:
> {sasl,[{sasl_error_logger,{file, "/tmp/sasl-1.log"}}]}
> in my advanced.config, and for just a startup/shutdown cycle I get
> a 191555 byte file.
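(For context, that sasl stanza lives alongside the other application sections in advanced.config, which holds a single Erlang term: a list of {App, Env} pairs. A minimal file matching the quoted snippet might look like this — the log path is just an example.)

```erlang
%% Sketch of a minimal advanced.config enabling sasl error logging
%% to a file. The file contains one Erlang term terminated by a dot.
[
 {sasl,
  [{sasl_error_logger, {file, "/tmp/sasl-1.log"}}]}
].
```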


On Mon, May 16, 2016 at 11:29 AM, Luke Bakken  wrote:

> Sanket,
>
> I can't speak to the output of cluster-info with regard to the precommit
> hook.
>
> If you save an object to the "test_kv" bucket, does
> rtriggers:pre_all() get called?
>
> I have confirmed that precommit hooks are *NOT* triggered for buckets
> where "write_once" is true, and filed the following docs issue:
> https://github.com/basho/basho_docs/issues/2076
>
> --
> Luke Bakken
> Engineer
> lbak...@basho.com
>
>
> On Mon, May 16, 2016 at 8:20 AM, Sanket Agrawal
>  wrote:
> > Shouldn't precommit body show up in cluster-info first before we try to
> test
> > it with an object? I just set the precommit hook for test_kv bucket type
> > which has no custom property set. Curl on props shows precommit, but
> > cluster-info doesn't (took another dump after 5 minutes of precommit hook
> > creation). No sasl logging is triggered either though the debug mode is
> on.
> >
> > Curl output for test_kv props:
> >
> >
> >>
> >> $ curl localhost:8098/types/test_kv/props
> >>
> >> {"props":{"active":true,"allow_mult":true,"basic_quorum":false,"big_vclock":50,"chash_keyfun":{"mod":"riak_core_util","fun":"chash_std_keyfun"},"claimant":"riak1@127.0.0.1","dvv_enabled":true,"dw":"quorum","last_write_wins":false,"linkfun":{"mod":"riak_kv_wm_link_walker","fun":"mapreduce_linkfun"},"n_val":3,"notfound_ok":true,"old_vclock":86400,"postcommit":[],"pr":0,"precommit":[{"mod":"rtriggers","fun":"pre_all"}],"pw":0,"r":"quorum","rw":"quorum","small_vclock":50,"w":"quorum","young_vclock":20}}
> >>
> >
> >
> > Cluster-info dump after setting the props - precommit body is still
> empty in
> > the cluster-info for test_kv bucket type:
> >
> >>
> >> grep -A15 test_kv cluster_info.txt |grep precommit
> >> {precommit,[]},
> >> {precommit,[]},
> >> {precommit,[]},
> >> {precommit,[]},
> >> {precommit,[]},
> >> {precommit,[]},
> >> {precommit,[]},
> >> {precommit,[]},
> >> {precommit,[]},
> >> {precommit,[]},
> >> {precommit,[]},
> >> {precommit,[]},
> >
> >
> > On Mon, May 16, 2016 at 10:48 AM, Luke Bakken  wrote:
> >>
> >> Hi Sanket,
> >>
> >> Thanks for providing that information, this is the first time that the
> >> "write_once" bucket type property was mentioned.
> >>
> >> If you set the precommit hook on a different bucket type where there
> >> are *no other* custom bucket type properties set, does the precommit
> >> hook run successfully?
> >>
> >> --
> >> Luke Bakken
> >> Engineer
> >> lbak...@basho.com
> >>
> >> On Mon, May 16, 2016 at 7:17 AM, Sanket Agrawal
> >>  wrote:
> >> > Hi Luke,
> >> >
> >> > More on the precommit hook debugging I just did - I might have a
> handle
> >> > on
> >> > what the problem is. I took a cluster-info dump for all the nodes in
> the
> >> > cluster. In the dump, precommit is set to [] for all the buckets in
> the
> >> > cluster. The reason for that seems to be that the bucket type with
> >> > "write-once" property where precommit hook is set, is not in the
> cluster
> >> > somehow!
> >> >
> >> > The bucket type name is "test_kv_wo". Let us grep for it in
> >> > cluster_info.txt
> >> > - it doesn't show up at all (another bucket type "test_kv" shows up
> >> > instead):
> >> >>
> >> >> $ grep test_kv cluster_info.txt | sort -u
> >> >> [[{bucket,<<"test_kv">>}|
> >> >> {name,<<"test_kv">>},
> >> >
> >> >
> >> > Now, let us verify we can get the props for the bucket type fine from
> >> > curl:
> >> >>
> >> >> $ curl localhost:8098/types/test_kv_wo/buckets/uuid_log/props:
> >> >>
> >> >> {"props":{"name":"uuid_log","active":true,"allow_mult":true,"basic_quorum":false,"big_vclock":50,"chash_keyfun":{"mod":"riak_core_util","fun":"chash_std_keyfun"},"claimant":"riak1@127.0.0.1","dvv_enabled":true,"dw":"quorum","last_write_wins":false,"linkfun":{"mod":"riak_kv_wm_link_walker","fun":"mapreduce_linkfun"},"n_val":3,"name":"uuid_log","notfound_ok":true,"old_vclock":86400,"postcommit":[],"pr":0,"precommit":[{"mod":"rtriggers","fun":"pre_uuid"}],"pw":0,"r":"quorum","rw":"quorum","small_vclock":50,"w":"quorum","write_once":true,"young_vclock":20}}
> >> >
> >> >
> 

Re: RIAK TS - FreeBSD

2016-05-16 Thread Luke Bakken
Hello,

There is no plan for official FreeBSD support in Riak TS. Support for
FreeBSD packages for Riak KV will be discontinued in the future.

--
Luke Bakken
Engineer
lbak...@basho.com

On Sun, May 15, 2016 at 5:41 PM, Outback Dingo  wrote:
> Curious why there is no FreeBSD pkg for RIAK TS on the web site.
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>



Re: Precommit hook function - no error log - how to debug?

2016-05-16 Thread Luke Bakken
Sanket,

I can't speak to the output of cluster-info with regard to the precommit hook.

If you save an object to the "test_kv" bucket, does
rtriggers:pre_all() get called?

I have confirmed that precommit hooks are *NOT* triggered for buckets
where "write_once" is true, and filed the following docs issue:
https://github.com/basho/basho_docs/issues/2076
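(One hedged way to cross-check what the cluster itself believes, rather than what the HTTP props endpoint reports, is to query the bucket type metadata from an attached console. This is a sketch; it assumes riak_core_bucket_type:get/1 is available on the Riak version in use.)

```erlang
%% Run from `riak attach`. Returns the property list the cluster holds
%% for the type, or `undefined` if the type is unknown to the cluster.
%% If the type is active, entries such as {precommit, ...} and
%% {write_once, true} should appear in the result.
riak_core_bucket_type:get(<<"test_kv_wo">>).
```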

--
Luke Bakken
Engineer
lbak...@basho.com


On Mon, May 16, 2016 at 8:20 AM, Sanket Agrawal
 wrote:
> Shouldn't precommit body show up in cluster-info first before we try to test
> it with an object? I just set the precommit hook for test_kv bucket type
> which has no custom property set. Curl on props shows precommit, but
> cluster-info doesn't (took another dump after 5 minutes of precommit hook
> creation). No sasl logging is triggered either though the debug mode is on.
>
> Curl output for test_kv props:
>
>
>>
>> $ curl localhost:8098/types/test_kv/props
>>
>> {"props":{"active":true,"allow_mult":true,"basic_quorum":false,"big_vclock":50,"chash_keyfun":{"mod":"riak_core_util","fun":"chash_std_keyfun"},"claimant":"riak1@127.0.0.1","dvv_enabled":true,"dw":"quorum","last_write_wins":false,"linkfun":{"mod":"riak_kv_wm_link_walker","fun":"mapreduce_linkfun"},"n_val":3,"notfound_ok":true,"old_vclock":86400,"postcommit":[],"pr":0,"precommit":[{"mod":"rtriggers","fun":"pre_all"}],"pw":0,"r":"quorum","rw":"quorum","small_vclock":50,"w":"quorum","young_vclock":20}}
>>
>>
>
>
> Cluster-info dump after setting the props - precommit body is still empty in
> the cluster-info for test_kv bucket type:
>
>>
>> grep -A15 test_kv cluster_info.txt |grep precommit
>> {precommit,[]},
>> {precommit,[]},
>> {precommit,[]},
>> {precommit,[]},
>> {precommit,[]},
>> {precommit,[]},
>> {precommit,[]},
>> {precommit,[]},
>> {precommit,[]},
>> {precommit,[]},
>> {precommit,[]},
>> {precommit,[]},
>
>
> On Mon, May 16, 2016 at 10:48 AM, Luke Bakken  wrote:
>>
>> Hi Sanket,
>>
>> Thanks for providing that information, this is the first time that the
>> "write_once" bucket type property was mentioned.
>>
>> If you set the precommit hook on a different bucket type where there
>> are *no other* custom bucket type properties set, does the precommit
>> hook run successfully?
>>
>> --
>> Luke Bakken
>> Engineer
>> lbak...@basho.com
>>
>> On Mon, May 16, 2016 at 7:17 AM, Sanket Agrawal
>>  wrote:
>> > Hi Luke,
>> >
>> > More on the precommit hook debugging I just did - I might have a handle
>> > on
>> > what the problem is. I took a cluster-info dump for all the nodes in the
>> > cluster. In the dump, precommit is set to [] for all the buckets in the
>> > cluster. The reason for that seems to be that the bucket type with
>> > "write-once" property where precommit hook is set, is not in the cluster
>> > somehow!
>> >
>> > The bucket type name is "test_kv_wo". Let us grep for it in
>> > cluster_info.txt
>> > - it doesn't show up at all (another bucket type "test_kv" shows up
>> > instead):
>> >>
>> >> $ grep test_kv cluster_info.txt | sort -u
>> >> [[{bucket,<<"test_kv">>}|
>> >> {name,<<"test_kv">>},
>> >
>> >
>> > Now, let us verify we can get the props for the bucket type fine from
>> > curl:
>> >>
>> >> $ curl localhost:8098/types/test_kv_wo/buckets/uuid_log/props:
>> >>
>> >>
>> >> {"props":{"name":"uuid_log","active":true,"allow_mult":true,"basic_quorum":false,"big_vclock":50,"chash_keyfun":{"mod":"riak_core_util","fun":"chash_std_keyfun"},"claimant":"riak1@127.0.0.1","dvv_enabled":true,"dw":"quorum","last_write_wins":false,"linkfun":{"mod":"riak_kv_wm_link_walker","fun":"mapreduce_linkfun"},"n_val":3,"name":"uuid_log","notfound_ok":true,"old_vclock":86400,"postcommit":[],"pr":0,"precommit":[{"mod":"rtriggers","fun":"pre_uuid"}],"pw":0,"r":"quorum","rw":"quorum","small_vclock":50,"w":"quorum","write_once":true,"young_vclock":20}}
>> >
>> >
>> >
>> > So, the problem might be missing bucket types in the cluster since they
>> > don't show up in cluster-info at all? The same thing happens with LWW
>> > bucket
>> > type that I created, test_kv_lww. It doesn't show up either. CRDT bucket
>> > types show up fine.
>
>



Re: Precommit hook function - no error log - how to debug?

2016-05-16 Thread Sanket Agrawal
Shouldn't precommit body show up in cluster-info first before we try to
test it with an object? I just set the precommit hook for test_kv bucket
type which has no custom property set. Curl on props shows precommit, but
cluster-info doesn't (took another dump after 5 minutes of precommit hook
creation). No sasl logging is triggered either, even though debug mode is on.

Curl output for test_kv props:
>
>

> $ curl localhost:8098/types/test_kv/props
>
> {"props":{"active":true,"allow_mult":true,"basic_quorum":false,"big_vclock":50,"chash_keyfun":{"mod":"riak_core_util","fun":"chash_std_keyfun"},"claimant":"riak1@127.0.0.1","dvv_enabled":true,"dw":"quorum","last_write_wins":false,"linkfun":{"mod":"riak_kv_wm_link_walker","fun":"mapreduce_linkfun"},"n_val":3,"notfound_ok":true,"old_vclock":86400,"postcommit":[],"pr":0,"precommit":[{"mod":"rtriggers","fun":"pre_all"}],"pw":0,"r":"quorum","rw":"quorum","small_vclock":50,"w":"quorum","young_vclock":20}}




Cluster-info dump after setting the props - precommit body is still empty
in the cluster-info for test_kv bucket type:


> grep -A15 test_kv cluster_info.txt |grep precommit
> {precommit,[]},
> {precommit,[]},
> {precommit,[]},
> {precommit,[]},
> {precommit,[]},
> {precommit,[]},
> {precommit,[]},
> {precommit,[]},
> {precommit,[]},
> {precommit,[]},
> {precommit,[]},
> {precommit,[]},


On Mon, May 16, 2016 at 10:48 AM, Luke Bakken  wrote:

> Hi Sanket,
>
> Thanks for providing that information, this is the first time that the
> "write_once" bucket type property was mentioned.
>
> If you set the precommit hook on a different bucket type where there
> are *no other* custom bucket type properties set, does the precommit
> hook run successfully?
>
> --
> Luke Bakken
> Engineer
> lbak...@basho.com
>
> On Mon, May 16, 2016 at 7:17 AM, Sanket Agrawal
>  wrote:
> > Hi Luke,
> >
> > More on the precommit hook debugging I just did - I might have a handle
> on
> > what the problem is. I took a cluster-info dump for all the nodes in the
> > cluster. In the dump, precommit is set to [] for all the buckets in the
> > cluster. The reason for that seems to be that the bucket type with
> > "write-once" property where precommit hook is set, is not in the cluster
> > somehow!
> >
> > The bucket type name is "test_kv_wo". Let us grep for it in
> cluster_info.txt
> > - it doesn't show up at all (another bucket type "test_kv" shows up
> > instead):
> >>
> >> $ grep test_kv cluster_info.txt | sort -u
> >> [[{bucket,<<"test_kv">>}|
> >> {name,<<"test_kv">>},
> >
> >
> > Now, let us verify we can get the props for the bucket type fine from
> curl:
> >>
> >> $ curl localhost:8098/types/test_kv_wo/buckets/uuid_log/props:
> >>
> >> {"props":{"name":"uuid_log","active":true,"allow_mult":true,"basic_quorum":false,"big_vclock":50,"chash_keyfun":{"mod":"riak_core_util","fun":"chash_std_keyfun"},"claimant":"riak1@127.0.0.1","dvv_enabled":true,"dw":"quorum","last_write_wins":false,"linkfun":{"mod":"riak_kv_wm_link_walker","fun":"mapreduce_linkfun"},"n_val":3,"name":"uuid_log","notfound_ok":true,"old_vclock":86400,"postcommit":[],"pr":0,"precommit":[{"mod":"rtriggers","fun":"pre_uuid"}],"pw":0,"r":"quorum","rw":"quorum","small_vclock":50,"w":"quorum","write_once":true,"young_vclock":20}}
> >
> >
> >
> > So, the problem might be missing bucket types in the cluster since they
> > don't show up in cluster-info at all? The same thing happens with LWW
> bucket
> > type that I created, test_kv_lww. It doesn't show up either. CRDT bucket
> > types show up fine.
>


Re: Querying SOLR outside of Riak

2016-05-16 Thread Jean Chassoul
Hi Luke,

great advice, reminds me of what Joe Armstrong says from time to time:
measure, don't guess.

(=

On Mon, May 16, 2016 at 7:55 AM, Luke Bakken  wrote:

> Hi Alex,
>
> Benchmarking is the only sure way to know if you need to add this
> additional complexity to your system for your own use-case or if
> search in Riak 2.0 will suffice. I suspect the latter will be true.
>
> --
> Luke Bakken
> Engineer
> lbak...@basho.com
>
>
> On Mon, May 16, 2016 at 6:08 AM, Alex De la rosa
>  wrote:
> > Hi Fred,
> >
> > Yeah, I realised that on my testing node with n_val of 3 I was getting
> the
> > triple of results in the count... that is not ideal.
> >
> > I was just concerned on how much extra-work would get Riak to talk with
> SOLR
> > and compile data against hitting SOLR directly... For my tests these
> days,
> > seems the /search interface is pretty fast and it may not be a real
> problem
> > for Riak... but still have my fears from Riak 0.14 and Riak 1.4
> >
> > Thanks,
> > Alex
> >
> > On Mon, May 16, 2016 at 4:49 PM, Fred Dushin  wrote:
> >>
> >> Hi Alex,
> >>
> >> Other people have chimed in, but let me repeat that while the
> >> internal_solr interface is accessible via HTTP (and needs to be, at
> least
> >> from Riak processes), you cannot use that interface to query Solr and
> expect
> >> a correct result set (unless you are using a single node cluster with an
> >> n_val of 1).
> >>
> >> When you run your queries through Riak, Yokozuna, the component that
> >> interfaces with Solr, will use a riak_core coverage plan to generate a
> >> distributed Solr filter query across the entire cluster that guarantees
> that
> >> for any document stored on all Solr nodes in the cluster, the query will
> >> select one (and only one) replica.  If you were to run your query
> locally
> >> using the internal_solr interface, your query would not span the cluster
> >> (likely missing documents on other nodes) and may have duplicates
> (e.g., in
> >> degenerate cases where you have more than one replica on the same node).
> >>
> >> I hope that helps explain why using the internal_solr interface is not
> >> only not recommended, it's also not going to give you the results you
> >> expect.
> >>
> >> -Fred
>


Re: Querying SOLR outside of Riak

2016-05-16 Thread Alex De la rosa
Hi Luke,

Yes, I think I will go with /search instead; should be enough.

Thanks,
Alex

On Mon, May 16, 2016 at 6:55 PM, Luke Bakken  wrote:

> Hi Alex,
>
> Benchmarking is the only sure way to know if you need to add this
> additional complexity to your system for your own use-case or if
> search in Riak 2.0 will suffice. I suspect the latter will be true.
>
> --
> Luke Bakken
> Engineer
> lbak...@basho.com
>
>
> On Mon, May 16, 2016 at 6:08 AM, Alex De la rosa
>  wrote:
> > Hi Fred,
> >
> > Yeah, I realised that on my testing node with n_val of 3 I was getting
> the
> > triple of results in the count... that is not ideal.
> >
> > I was just concerned on how much extra-work would get Riak to talk with
> SOLR
> > and compile data against hitting SOLR directly... For my tests these
> days,
> > seems the /search interface is pretty fast and it may not be a real
> problem
> > for Riak... but still have my fears from Riak 0.14 and Riak 1.4
> >
> > Thanks,
> > Alex
> >
> > On Mon, May 16, 2016 at 4:49 PM, Fred Dushin  wrote:
> >>
> >> Hi Alex,
> >>
> >> Other people have chimed in, but let me repeat that while the
> >> internal_solr interface is accessible via HTTP (and needs to be, at
> least
> >> from Riak processes), you cannot use that interface to query Solr and
> expect
> >> a correct result set (unless you are using a single node cluster with an
> >> n_val of 1).
> >>
> >> When you run your queries through Riak, Yokozuna, the component that
> >> interfaces with Solr, will use a riak_core coverage plan to generate a
> >> distributed Solr filter query across the entire cluster that guarantees
> that
> >> for any document stored on all Solr nodes in the cluster, the query will
> >> select one (and only one) replica.  If you were to run your query
> locally
> >> using the internal_solr interface, your query would not span the cluster
> >> (likely missing documents on other nodes) and may have duplicates
> (e.g., in
> >> degenerate cases where you have more than one replica on the same node).
> >>
> >> I hope that helps explain why using the internal_solr interface is not
> >> only not recommended, it's also not going to give you the results you
> >> expect.
> >>
> >> -Fred
>


Re: Querying SOLR outside of Riak

2016-05-16 Thread Luke Bakken
Hi Alex,

Benchmarking is the only sure way to know if you need to add this
additional complexity to your system for your own use-case or if
search in Riak 2.0 will suffice. I suspect the latter will be true.
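(A hedged sketch of how one might act on this advice with Basho's own load generator, basho_bench. All values below are placeholders to adapt; check the basho_bench README for the exact driver and generator options on your version.)

```erlang
%% Sketch of a basho_bench config file (e.g. bench.config), run with:
%%   basho_bench bench.config
{mode, max}.                            %% drive as fast as possible
{duration, 5}.                          %% minutes
{concurrent, 10}.                       %% worker processes
{driver, basho_bench_driver_riakc_pb}.  %% protocol buffers client driver
{riakc_pb_ips, [{127,0,0,1}]}.          %% node(s) under test
{key_generator, {int_to_bin_bigendian, {uniform_int, 100000}}}.
{value_generator, {fixed_bin, 1000}}.   %% 1 KB values
{operations, [{get, 4}, {put, 1}]}.     %% 4:1 read/write mix
```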

--
Luke Bakken
Engineer
lbak...@basho.com


On Mon, May 16, 2016 at 6:08 AM, Alex De la rosa
 wrote:
> Hi Fred,
>
> Yeah, I realised that on my testing node with n_val of 3 I was getting the
> triple of results in the count... that is not ideal.
>
> I was just concerned on how much extra-work would get Riak to talk with SOLR
> and compile data against hitting SOLR directly... For my tests these days,
> seems the /search interface is pretty fast and it may not be a real problem
> for Riak... but still have my fears from Riak 0.14 and Riak 1.4
>
> Thanks,
> Alex
>
> On Mon, May 16, 2016 at 4:49 PM, Fred Dushin  wrote:
>>
>> Hi Alex,
>>
>> Other people have chimed in, but let me repeat that while the
>> internal_solr interface is accessible via HTTP (and needs to be, at least
>> from Riak processes), you cannot use that interface to query Solr and expect
>> a correct result set (unless you are using a single node cluster with an
>> n_val of 1).
>>
>> When you run your queries through Riak, Yokozuna, the component that
>> interfaces with Solr, will use a riak_core coverage plan to generate a
>> distributed Solr filter query across the entire cluster that guarantees that
>> for any document stored on all Solr nodes in the cluster, the query will
>> select one (and only one) replica.  If you were to run your query locally
>> using the internal_solr interface, your query would not span the cluster
>> (likely missing documents on other nodes) and may have duplicates (e.g., in
>> degenerate cases where you have more than one replica on the same node).
>>
>> I hope that helps explain why using the internal_solr interface is not
>> only not recommended, it's also not going to give you the results you
>> expect.
>>
>> -Fred



Re: Precommit hook function - no error log - how to debug?

2016-05-16 Thread Luke Bakken
Hi Sanket,

Thanks for providing that information, this is the first time that the
"write_once" bucket type property was mentioned.

If you set the precommit hook on a different bucket type where there
are *no other* custom bucket type properties set, does the precommit
hook run successfully?

--
Luke Bakken
Engineer
lbak...@basho.com

On Mon, May 16, 2016 at 7:17 AM, Sanket Agrawal
 wrote:
> Hi Luke,
>
> More on the precommit hook debugging I just did - I might have a handle on
> what the problem is. I took a cluster-info dump for all the nodes in the
> cluster. In the dump, precommit is set to [] for all the buckets in the
> cluster. The reason for that seems to be that the bucket type with
> "write-once" property where precommit hook is set, is not in the cluster
> somehow!
>
> The bucket type name is "test_kv_wo". Let us grep for it in cluster_info.txt
> - it doesn't show up at all (another bucket type "test_kv" shows up
> instead):
>>
>> $ grep test_kv cluster_info.txt | sort -u
>> [[{bucket,<<"test_kv">>}|
>> {name,<<"test_kv">>},
>
>
> Now, let us verify we can get the props for the bucket type fine from curl:
>>
>> $ curl localhost:8098/types/test_kv_wo/buckets/uuid_log/props:
>>
>> {"props":{"name":"uuid_log","active":true,"allow_mult":true,"basic_quorum":false,"big_vclock":50,"chash_keyfun":{"mod":"riak_core_util","fun":"chash_std_keyfun"},"claimant":"riak1@127.0.0.1","dvv_enabled":true,"dw":"quorum","last_write_wins":false,"linkfun":{"mod":"riak_kv_wm_link_walker","fun":"mapreduce_linkfun"},"n_val":3,"name":"uuid_log","notfound_ok":true,"old_vclock":86400,"postcommit":[],"pr":0,"precommit":[{"mod":"rtriggers","fun":"pre_uuid"}],"pw":0,"r":"quorum","rw":"quorum","small_vclock":50,"w":"quorum","write_once":true,"young_vclock":20}}
>
>
>
> So, the problem might be missing bucket types in the cluster since they
> don't show up in cluster-info at all? The same thing happens with LWW bucket
> type that I created, test_kv_lww. It doesn't show up either. CRDT bucket
> types show up fine.



Re: Querying SOLR outside of Riak

2016-05-16 Thread Alex De la rosa
Hi Fred,

Yeah, I realised that on my testing node with n_val of 3 I was getting the
triple of results in the count... that is not ideal.

I was just concerned on how much extra-work would get Riak to talk with
SOLR and compile data against hitting SOLR directly... For my tests these
days, seems the /search interface is pretty fast and it may not be a real
problem for Riak... but still have my fears from Riak 0.14 and Riak 1.4

Thanks,
Alex

On Mon, May 16, 2016 at 4:49 PM, Fred Dushin  wrote:

> Hi Alex,
>
> Other people have chimed in, but let me repeat that while the
> internal_solr interface is accessible via HTTP (and needs to be, at least
> from Riak processes), you cannot use that interface to query Solr and
> expect a correct result set (unless you are using a single node cluster
> with an n_val of 1).
>
> When you run your queries through Riak, Yokozuna, the component that
> interfaces with Solr, will use a riak_core coverage plan to generate a
> distributed Solr filter query across the entire cluster that guarantees
> that for any document stored on all Solr nodes in the cluster, the query
> will select one (and only one) replica.  If you were to run your query
> locally using the internal_solr interface, your query would not span the
> cluster (likely missing documents on other nodes) and may have duplicates
> (e.g., in degenerate cases where you have more than one replica on the same
> node).
>
> I hope that helps explain why using the internal_solr interface is not
> only not recommended, it's also not going to give you the results you
> expect.
>
> -Fred
>
> On May 15, 2016, at 4:18 AM, Alex De la rosa 
> wrote:
>
> Hi Vitaly,
>
> I know that you can access search via HTTP through Riak like this:
>
> http://localhost:8098/search/query/famous?wt=json&q=leader:true AND
> age_i:[25 TO *]
>
> I didn't find documentation about this, but according to your words I
> could access SOLR directly like this?
>
> http://localhost:8093/internal_solr/famous/select?wt=json&q=leader:true
> AND age_i:[25 TO *]
>
> If I go through "8098/search" would it be adding extra stress into the
> Riak cluster? Or is recommended to go through "8098/search" instead of
> "8093/internal_solr"??
>
> I just want to see if I can make use of SOLR with an external mapreduce
> platform (Disco) without giving extra stress to Riak.
>
> Thanks,
> Rohman
>
> On Sun, May 15, 2016 at 12:07 PM, Vitaly <13vitam...@gmail.com> wrote:
>
>> There is, you can *query *Solr directly via HTTP, at least as of Riak
>> 2.0.x
>>
>> Have a look at http://:8093/internal_solr/#/ and
>> http://docs.basho.com/riak/kv/2.1.4/developing/usage/search/#querying
>>
>> Vitaly
>>
>>
>> On Sun, May 15, 2016 at 10:49 AM, Alex De la rosa <
>> alex.rosa@gmail.com> wrote:
>>
>>> Nobody knows if there is a way to access SOLR right away without going
>>> through RIAK's interface?
>>>
>>> Thanks,
>>> Alex
>>>
>>> On Fri, May 13, 2016 at 11:07 PM, Alex De la rosa <
>>> alex.rosa@gmail.com> wrote:
>>>
 Hi all,

 If I want to create a Disco cluster [ http://discoproject.org ] to
 build statistics and compile data attacking Riak's SOLR directly without
 using Riak, how can I do it?

 In this way, I would leave Riak mainly for data IO (post/get) and leave
 the heavy duty of searching and compiling data to Disco; so Riak's
 performance shouldn't be affected for searching as mainly it will store and
 retrieve data only.

 Thanks,
 Alex

>>>
>>>
>>>
>>>
>>
>
>
>
>
>


Re: Querying SOLR outside of Riak

2016-05-16 Thread Fred Dushin
Hi Alex,

Other people have chimed in, but let me repeat that while the internal_solr 
interface is accessible via HTTP (and needs to be, at least from Riak 
processes), you cannot use that interface to query Solr and expect a correct 
result set (unless you are using a single node cluster with an n_val of 1).

When you run your queries through Riak, Yokozuna, the component that interfaces 
with Solr, will use a riak_core coverage plan to generate a distributed Solr 
filter query across the entire cluster that guarantees that for any document 
stored on all Solr nodes in the cluster, the query will select one (and only 
one) replica.  If you were to run your query locally using the internal_solr 
interface, your query would not span the cluster (likely missing documents on 
other nodes) and may have duplicates (e.g., in degenerate cases where you have 
more than one replica on the same node).

I hope that helps explain why using the internal_solr interface is not only not 
recommended, it's also not going to give you the results you expect.

-Fred
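(Concretely, queries should go through Riak's own search endpoints rather than internal_solr. With the official Riak Erlang client that looks roughly like the sketch below; the index name and query are taken from the thread and are placeholders for your own.)

```erlang
%% Sketch using the Riak Erlang client (riakc). riakc_pb_socket:search/3
%% routes the query through Yokozuna, so the coverage-plan filtering
%% described above is applied and each replica is counted once.
{ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
{ok, Results} =
    riakc_pb_socket:search(Pid, <<"famous">>,
                           <<"leader:true AND age_i:[25 TO *]">>),
io:format("~p~n", [Results]).
```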

> On May 15, 2016, at 4:18 AM, Alex De la rosa  wrote:
> 
> Hi Vitaly,
> 
> I know that you can access search via HTTP through Riak like this:
> 
> http://localhost:8098/search/query/famous?wt=json&q=leader:true AND
> age_i:[25 TO *]
> 
> I didn't find documentation about this, but according to your words I could 
> access SOLR directly like this?
> 
> http://localhost:8093/internal_solr/famous/select?wt=json&q=leader:true AND
> age_i:[25 TO *]
> 
> If I go through "8098/search" would it be adding extra stress into the Riak 
> cluster? Or is recommended to go through "8098/search" instead of 
> "8093/internal_solr"??
> 
> I just want to see if I can make use of SOLR with an external mapreduce 
> platform (Disco) without giving extra stress to Riak.
> 
> Thanks,
> Rohman
> 
> On Sun, May 15, 2016 at 12:07 PM, Vitaly <13vitam...@gmail.com> wrote:
> There is, you can query Solr directly via HTTP, at least as of Riak 2.0.x
> 
> Have a look at http://:8093/internal_solr/#/ and
> http://docs.basho.com/riak/kv/2.1.4/developing/usage/search/#querying
> 
> 
> Vitaly
> 
> 
> On Sun, May 15, 2016 at 10:49 AM, Alex De la rosa  wrote:
> Nobody knows if there is a way to access SOLR right away without going 
> through RIAK's interface?
> 
> Thanks,
> Alex
> 
> On Fri, May 13, 2016 at 11:07 PM, Alex De la rosa  wrote:
> Hi all,
> 
> If I want to create a Disco cluster [ http://discoproject.org ] to
> build statistics and compile data attacking
> Riak's SOLR directly without using Riak, how can I do it?
> 
> In this way, I would leave Riak mainly for data IO (post/get) and leave the 
> heavy duty of searching and compiling data to Disco; so Riak's performance 
> shouldn't be affected for searching as mainly it will store and retrieve data 
> only.
> 
> Thanks,
> Alex
> 
> 
> 
> 
> 
> 



Re: Compilation issues with OTP 18.3: "Failed to load erlang_js_drv.so"

2016-05-16 Thread Humberto Rodríguez Avila

> On 16 May 2016, at 11:51, Magnus Kessler  wrote:
> 
> On 15 May 2016 at 23:38, Humberto Rodríguez Avila  wrote:
> Hello, I have been trying to compile erlang_js with OTP 18.3, but always I
> get this message "Failed to load erlang_js_drv.so". I tried in Ubuntu 14.04
> and OSX 10.11.4.
> 
> Here you can find the full log of my error: 
> https://gist.github.com/rhumbertgz/ee0bf432edfa89ffa0a47405f3250fcd 
> 
> Any suggestion?
> 
> Thanks in advance
> 
> 
> Hi Humberto,
> 
> I saw that you also opened a github issue about this
> (https://github.com/basho/erlang_js/issues/61), and that you were pointed
> to this pull request (https://github.com/basho/erlang_js/pull/58) that may
> fix the issue for you.
> 
> The development engineers are working on a OTP-18 compatible code base and a 
> release planned for later this year should work with OTP-18. According to the 
> program managers this will also then include OTP-18 compatible versions of 
> Erlang based client libraries.
> 
> Please bear with us while the development work is being done.
> 
> Kind Regards,
> 
> Magnus
> 
> -- 
> Magnus Kessler
> Client Services Engineer
> Basho Technologies Limited
> 
> Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431

Hi Magnus,

Thanks, that pull request fixed my problem, now it works in both Linux and OSX 
:)

Best regards,
Humberto



Re: Compilation issues with OTP 18.3: "Failed to load erlang_js_drv.so"

2016-05-16 Thread Magnus Kessler
On 15 May 2016 at 23:38, Humberto Rodríguez Avila 
wrote:

> Hello, I have been trying to compile erlang_js with OTP 18.3, but always
> I get this message "Failed to load erlang_js_drv.so". I tried in Ubuntu
> 14.04 and OSX 10.11.4.
>
> Here you can find the full log of my error:
> https://gist.github.com/rhumbertgz/ee0bf432edfa89ffa0a47405f3250fcd
>
> Any suggestion?
> Thanks in advance
>
>
Hi Humberto,

I saw that you also opened a github issue about this (
https://github.com/basho/erlang_js/issues/61), and that you were pointed to
this pull request (https://github.com/basho/erlang_js/pull/58) that may fix
the issue for you.

The development engineers are working on a OTP-18 compatible code base and
a release planned for later this year should work with OTP-18. According to
the program managers this will also then include OTP-18 compatible versions
of Erlang based client libraries.

Please bear with us while the development work is being done.

Kind Regards,

Magnus

-- 
Magnus Kessler
Client Services Engineer
Basho Technologies Limited

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431


Compilation issues with OTP 18.3: "Failed to load erlang_js_drv.so"

2016-05-16 Thread Humberto Rodríguez Avila
Hello, I have been trying to compile erlang_js with OTP 18.3, but always I get
this message "Failed to load erlang_js_drv.so". I tried in Ubuntu 14.04 and
OSX 10.11.4.

Here you can find the full log of my error: 
https://gist.github.com/rhumbertgz/ee0bf432edfa89ffa0a47405f3250fcd 

Any suggestion?

Thanks in advance