Thanks. Looks like it is in the Solr ClientUtils.java class. Now need to
decide whether to import solr into client code or dup the function.
Jason
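If you go the dup route, here is a minimal standalone sketch modeled on Solr's ClientUtils.escapeQueryChars (the escaped character set below is based on solr-solrj; verify it against the Solr version Riak Search ships with — class and demo names here are made up for illustration):

```java
public class QueryEscapeDemo {

    // Escapes Lucene/Solr query-syntax characters in a raw user-supplied
    // term so it is treated literally. Modeled on Solr's
    // ClientUtils.escapeQueryChars, duplicated to avoid pulling the full
    // solr-solrj dependency into client code.
    public static String escapeQueryChars(String s) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            // These characters have meaning in the Lucene query syntax and
            // must be prefixed with a backslash.
            if (c == '\\' || c == '+' || c == '-' || c == '!' || c == '('
                    || c == ')' || c == ':' || c == '^' || c == '[' || c == ']'
                    || c == '"' || c == '{' || c == '}' || c == '~' || c == '*'
                    || c == '?' || c == '|' || c == '&' || c == ';' || c == '/'
                    || Character.isWhitespace(c)) {
                sb.append('\\');
            }
            sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Untrusted input: without escaping, the ')' and ':' would let a
        // caller break out of the intended email clause.
        String userEmail = "foo@example.com) OR _yz_rb:secrets";
        String query = "_yz_rb:accounts AND email:" + escapeQueryChars(userEmail);
        System.out.println(query);
    }
}
```

Note this only neutralizes query syntax; it is not a substitute for validating that the input looks like an email address in the first place.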
On Mon, Mar 23, 2015 at 2:59 PM, Alexander Sicular
wrote:
I'll second what Chris said. Afaik, Solr does not solve this problem for you.
Riak won't either. I just googled for "sanitize solr query inputs in java" and
there are quite a few hits. I'd use that as a starting point, but I'm a bit
surprised there isn't a lib somewhere that makes this a non-problem.
Thanks Chris. I meant the query injection. I was really looking for an API
that takes a parametrized query in the Riak Java client; do you know whether Solr
provides that? It would not be an easy task to write a 100% secure sanitize
function, and the above query is really just a simple use case.
Jason
On Mon,
> On Mar 22, 2015, at 7:03 PM, Jason W wrote:
>
> Hello,
>
> I try to use the riak search java client, specifically the Search.Builder
> class, like the following
>
> Search search = new Search.Builder("test", "_yz_rb:accounts AND email:" +
> [user-email]).
>
>
>
> "[user-email]" is what
> On Mar 22, 2015, at 8:54 PM, Toby Corkindale wrote:
>
> Hi,
> I wondered if anyone has written a stats gathering plugin for Prometheus?
> It doesn't look like it'd be too hard to do; but I'm still lazy enough to
> hope that someone else has done it first :)
Hi Toby,
I looked around and it d
Thank you for the help. And yes, I'll keep that in mind for the future!
Matt.
On Mon, Mar 23, 2015 at 12:21 PM Zeeshan Lakhani wrote:
And Matt, please choose one platform to discuss the matter, preferably here :).
It makes things easier.
Thanks.
Zeeshan Lakhani
programmer |
software engineer at @basho |
org. member/founder of @papers_we_love | paperswelove.org
twitter => @zeeshanlakhani
> On Mar 23, 2015, at 12:14 PM, Zees
Hey Matt,
Remember, with the reload, it’s only new objects in *new* fields that will take
the updates. Depending on how big your dataset is, the quick fix may be to
upload the *updated* schema under a new name and tie the index to that. The K/V
pairs should then work correctly when PUT.
Zeesh
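That workflow (upload the updated schema under a new name, then tie a fresh index to it) can be sketched over Riak's HTTP API — an ops fragment, assuming a node on localhost:8098; the schema, index, and bucket names are placeholders:

```shell
# Upload the updated schema under a new name
curl -XPUT http://localhost:8098/search/schema/accounts_schema_v2 \
  -H 'Content-Type: application/xml' \
  --data-binary @accounts_schema_v2.xml

# Create a new index that uses the new schema
curl -XPUT http://localhost:8098/search/index/accounts_idx_v2 \
  -H 'Content-Type: application/json' \
  -d '{"schema":"accounts_schema_v2"}'

# Point the bucket at the new index; subsequent PUTs index under the new schema
curl -XPUT http://localhost:8098/buckets/accounts/props \
  -H 'Content-Type: application/json' \
  -d '{"props":{"search_index":"accounts_idx_v2"}}'
```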
Zeeshan,
Is there a way to reload the index manually? I tried running
"rp(yz_index:reload(<<"index_name">>))." on each node as suggested on a
GitHub issue, and though I got an "ok" response back listing each node in
the cluster, I still could not query using the updated schema. I even
deleted all
Hi all,
I recently used Riak’s Strong Consistency functionality to get
auto-incrementing IDs for a feature of an application I’m working on, and
although this worked great in dev (5 nodes in 1 VM) and staging (3 servers
across NA) environments, I’ve run into some odd behaviour in production
(o