--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
That's interesting that you were able to set up OpenLink. At Alfresco we've
done quite a bit of work on Solr's JDBC driver to integrate it with the
Alfresco repository, which uses Solr. But we haven't yet tackled the ODBC
setup. That will come very soon. To really take advantage of Tableau's
Hi Erik
I have explored facet.query but it does not really help. Thank you for
your suggestion.
On 12/6/2018 7:49 PM, Erik Hatcher wrote:
Derek -
One trick I like to do is try various forms of a query all in one go. With
facet=on, you can:
facet.query=big brown bear
facet.query=big brown
facet.query=brown
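Written out as full request parameters, the trick might look something like the following (the q value and rows=0 are illustrative additions; only the facet counts come back):

```
q=big brown bear&rows=0&facet=on&facet.query=big brown bear&facet.query=big brown&facet.query=brown
```

Each facet.query is reported with its own hit count in the facet_queries section of the response, so one request shows how many documents match each variant of the query.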
Seems like the Highlight feature could help, but with some workarounds.
Will need to explore it more. Thank you.
On 12/6/2018 5:32 PM, Alessandro Benedetti wrote:
I would recommend looking into the Highlight feature [1].
There are a few implementations, and they should all be all right for your user
You may have to DISABLEBUFFER on the source to get rid of tlogs.
On Mon, Jun 18, 2018 at 6:13 PM, Brian Yee wrote:
> So I've read a bunch of stuff on hard/soft commits and tlogs. As I
> understand, after a hard commit, solr is supposed to delete old tlogs
> depending on the numRecordsToKeep and
Asher:
Please follow the instructions here:
http://lucene.apache.org/solr/community.html#mailing-lists-irc. You
must use the _exact_ same e-mail as you used to subscribe.
If the initial try doesn't work and following the suggestions at the
"problems" link doesn't work for you, let us know. But
unsubscribe
On Mon, Jun 18, 2018 at 3:37 PM, Aroop Ganguly wrote:
> Ok I was able to set up the ODBC bridge (using OpenLink) and I see the
> collections popping up in Tableau too.
> But I am unable to actually get data flowing into Tableau reports because,
> Tableau keeps creating inner queries
Hi,
So, for context, I have very little experience with Kerberos. The
environment has SolrCloud configured, and I am using SolrJ libraries from
Solr 7.0.0 and attempting to set up my application to be able to make Solr
requests when Kerberos is enabled. Specifically, I am making a request to
add
Ok I was able to set up the ODBC bridge (using OpenLink) and I see the
collections popping up in Tableau too.
But I am unable to actually get data flowing into Tableau reports because,
Tableau keeps creating inner queries and Solr seems to hate inner queries.
Is there a way to do inner queries in
So I've read a bunch of stuff on hard/soft commits and tlogs. As I understand,
after a hard commit, Solr is supposed to delete old tlogs depending on the
numRecordsToKeep and maxNumLogsToKeep values in the updateLog settings in
solrconfig.xml. I am occasionally seeing Solr fail to do this and
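For reference, the retention settings mentioned above live in the updateLog block of solrconfig.xml; a minimal sketch (the numeric values are illustrative, not recommendations):

```xml
<updateLog>
  <str name="dir">${solr.ulog.dir:}</str>
  <!-- keep at least this many records across tlogs -->
  <int name="numRecordsToKeep">500</int>
  <!-- keep at most this many tlog files -->
  <int name="maxNumLogsToKeep">20</int>
</updateLog>
```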
Hi,
I think this might give some clue.
I tried to reproduce the issue with a collection called testCloud.
fetch(testCloud1,
      search(testCloud1, q="*:*", fq="type:name", fl="parentId", sort="parentId asc"),
      fl="id,name",
      on="parentId=id")
The expression above produces 3 log
Hi Everyone
I am not sure if something has been done on this yet, though I did see a JIRA
with links to the Parallel SQL documentation, but I do not think that answers
the question.
I love the JDBC driver and it works well for many UIs, but there are other
systems that need an ODBC driver.
From the error, I think the issue is with your zookeeperList definition.
Try changing:
zookeeperList.add("http://100.12.119.10:2281");
zookeeperList.add("http://100.12.119.10:2282");
zookeeperList.add("http://100.12.119.10:2283");
to
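The suggested replacement is cut off above. The usual fix is that ZooKeeper connection entries are bare host:port pairs with no http:// scheme; a sketch under that assumption (ports copied from the snippet above, and whether a chroot is also needed depends on the deployment):

```java
import java.util.ArrayList;
import java.util.List;

public class ZkHosts {
    // Builds a ZooKeeper host list the way SolrJ expects it:
    // bare host:port entries, no URL scheme.
    static List<String> zkHosts() {
        List<String> zookeeperList = new ArrayList<>();
        zookeeperList.add("100.12.119.10:2281");
        zookeeperList.add("100.12.119.10:2282");
        zookeeperList.add("100.12.119.10:2283");
        return zookeeperList;
    }

    public static void main(String[] args) {
        // Joined form, as a ZooKeeper connect string would look.
        System.out.println(String.join(",", zkHosts()));
    }
}
```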
Hi,
I am trying to find some help with the format of the collection name under
permissions.
The collection can be "*", "null", "collection_name". Is there a way to
group a set of collections?
example:
This format is not working.
{"collection": ["colname1, colname2, colname3"], "path":
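If the authorization plugin accepts an array value here, each collection name should presumably be its own string rather than one comma-joined string; a sketch of that shape (path, name, and role values are illustrative):

```json
{
  "collection": ["colname1", "colname2", "colname3"],
  "path": "/select",
  "name": "read-three-collections",
  "role": "reader"
}
```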
Thanks. So I tried what you had; however, you are not specifying _zkChroot,
so I don't know what to put there. I don't know what that value would be for
me, or if I even need it. So I commented out most of your example to:
Optional chrootOption = null;
// if
Hi,
Yes, Andy has the right idea. If no zk-chroot is being used,
"Optional.empty()" is the correct value to specify, not null.
This API is a bit trappy (or at least unintuitive), and we're in the
process of hashing out some doc improvements (and/or API changes). If
you're curious or would
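A minimal sketch of deriving that value, as a null-safe helper (this is only the null-to-Optional mapping; passing the result to CloudSolrClient.Builder is as discussed above, and the helper name is made up for illustration):

```java
import java.util.Optional;

public class ChrootOption {
    // Maps a possibly-null/blank zkChroot string to the Optional the
    // builder expects: Optional.empty() when no chroot is in use,
    // never null.
    static Optional<String> chrootOption(String zkChroot) {
        return (zkChroot == null || zkChroot.trim().isEmpty())
                ? Optional.empty()
                : Optional.of(zkChroot);
    }

    public static void main(String[] args) {
        System.out.println(chrootOption(null));    // Optional.empty
        System.out.println(chrootOption("/solr")); // Optional[/solr]
    }
}
```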
I am using the following (Solr 7.3.1) successfully:
import java.util.Optional;
Optional<String> chrootOption = null;
if (StringUtils.isNotBlank(_zkChroot))
{
    chrootOption = Optional.of(_zkChroot);
}
else
{
    chrootOption = Optional.empty();
}
Hello,
I am using Solr 7.3 and ZooKeeper 3.4.10. I have custom client code that is
supposed to connect to a ZooKeeper cluster. For the sake of clarity, the
main code in focus:
private synchronized void initSolrClient()
{
List zookeeperList = new
Sorry, I meant to an IP address as a numeric value; for example, in MySQL:
https://dev.mysql.com/doc/refman/8.0/en/miscellaneous-functions.html#function_inet-aton
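Along those lines, here is a client-side sketch of MySQL's INET_ATON for IPv4: it produces a number that could be indexed into a long-valued Solr field so that sorting on that field sorts addresses numerically (the choice of field type is an assumption; any long-valued field would do):

```java
public class IpSort {
    // Converts a dotted-quad IPv4 address to its 32-bit numeric value,
    // the same value MySQL's INET_ATON("a.b.c.d") returns.
    static long inetAton(String ip) {
        String[] parts = ip.split("\\.");
        if (parts.length != 4) throw new IllegalArgumentException(ip);
        long value = 0;
        for (String part : parts) {
            int octet = Integer.parseInt(part);
            if (octet < 0 || octet > 255) throw new IllegalArgumentException(ip);
            value = (value << 8) | octet; // shift in one octet at a time
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(inetAton("192.168.0.1")); // 3232235521
    }
}
```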
On Mon, Jun 18, 2018 at 12:35 PM, root23 wrote:
> I am sorry, I am not sure what you mean by store as atom. Is that a
> fieldType
> in solr
I am sorry, I am not sure what you mean by store as atom. Is that a fieldType
in Solr?
I couldn't find it here:
https://lucene.apache.org/solr/guide/6_6/field-types-included-with-solr.html
Store it as an atom rather than an IP address.
> On Jun 18, 2018, at 12:14 PM, root23 wrote:
>
> Hi all,
> is there a built-in data type which I can use for IP addresses which can
> provide sorting based on the class? If not, then what is the
> best way to sort based on IP address
Hi all,
is there a built-in data type which I can use for IP addresses which can
provide sorting based on the class? If not, then what is the
best way to sort based on IP address?
Hi Shawn et al,
As a follow-up to this - then how would you solve the issue? I tried to use
the instructions to set up basic authentication in solr (per a Stack
Overflow post) and it worked to secure things, but the web app couldn't
access solr. Tampering with the app code - which is the solr
Hi,
I have the following scenario: a shared cluster Solr installation
environment (app server 1 / app server 2, load balanced) which
has 4 Solr instances.
After reviewing the space audit, we have noticed that the partition where
the installation resides is too big versus what is used in
Indeed, you first configure it in solrconfig.xml (manually).
Then you can query and parse the response as you like with the SolrJ client
library.
Cheers
-
---
Alessandro Benedetti
Search Consultant, R&D Software Engineer, Director
Sease Ltd. - www.sease.io
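For reference, a minimal suggester setup in solrconfig.xml might look like the following, along the lines of the SuggestComponent example in the Solr Reference Guide (the component, dictionary, field, and analyzer names are illustrative):

```xml
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">mySuggester</str>
    <str name="lookupImpl">FuzzyLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <str name="field">title</str>
    <str name="suggestAnalyzerFieldType">text_general</str>
  </lst>
</searchComponent>

<requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
  <lst name="defaults">
    <str name="suggest">true</str>
    <str name="suggest.dictionary">mySuggester</str>
  </lst>
  <arr name="components">
    <str>suggest</str>
  </arr>
</requestHandler>
```

Once this is in place, the SolrJ client only queries the /suggest handler and parses the response; the configuration itself is not done through SolrJ.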
Hi, we are using Solr 6.3.0, and a collection has 3 of 4 replicas down and 1
up and serving.
I see a single-line error repeating in the logs, as below; nothing else
specific apart from it. Wondering what the message below is saying: is it
the cause of the nodes being down? But saw that this
You are doing things correctly. I was incorrect about the behavior of the
group() operation.
I think the behavior you are looking for should be done using reduce() but
we'll need to create a reduce operation that does this. If you want to
create a ticket we can work through exactly how the
There is a test case working that is basically the same construct that you
are having issues with. So, I think the next step is to try and reproduce
the problem that you are seeing in a test case.
If you have a small sample test dataset I can use to reproduce the error
please create a jira ticket
> On 18 Jun 2018, at 14:02, Jan Høydahl wrote:
>
> Is there still a valid reason to keep the inactive shards around?
> If shard splitting is robust, could not the split operation delete the
> inactive shard once the new shards are successfully loaded, just like what
> happens during an
Is there still a valid reason to keep the inactive shards around?
If shard splitting is robust, could not the split operation delete the inactive
shard once the new shards are successfully loaded, just like what happens
during an automated merge of segments?
--
Jan Høydahl, search solution
If I’m not mistaken, the weird accounting of “inactive” shard cores is also
caused by the fact that individual cores that constitute replicas in the inactive
shard are still loaded, so they still affect the number of active cores. If
that’s the case then we should probably fix this to prevent
Hi,
Solr ships a UIMA component and examples that haven't worked for a
while. Details are in:
https://issues.apache.org/jira/browse/SOLR-11694
The choices for developers are:
1) Rip UIMA out (and save space)
2) Update UIMA to latest 2.x version
3) Update UIMA to super-latest possibly-breaking
Hi,
Thank you for your help. As far as I understood, I cannot configure and
enable the suggester through the SolrJ client. The configuration should be
done manually.
Arunan
On 18 June 2018 at 14:50, Alessandro Benedetti wrote:
> Hi,
> Tommaso and I contributed this a few years ago. [1]
> You can
Hi,
Tommaso and I contributed this a few years ago. [1]
You can easily get the suggester response now from the Solr response.
Of course you need to configure and enable the suggester first.[2][3][4]
[1] https://issues.apache.org/jira/browse/SOLR-7719
[2]
Hi List,
the documentation of 'cursorMark' recommends fetching until a query
returns the cursorMark that was passed in to the request.
But that always requires an additional request at the end, so I wonder
if I can stop earlier, if a request returns fewer results than requested
(num rows).
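To make the trade-off concrete, here is a small stand-alone simulation of the two stopping rules, with a fake paged source standing in for Solr (an integer index stands in for the opaque cursorMark string; this only counts requests and says nothing about whether stopping early is safe if the index changes mid-pagination):

```java
import java.util.List;

public class CursorDemo {
    // Fake "request": returns {numReturned, nextCursor} for up to
    // `rows` docs starting at offset `cursor`.
    static int[] page(List<Integer> docs, int cursor, int rows) {
        int end = Math.min(cursor + rows, docs.size());
        return new int[] { end - cursor, end };
    }

    // Documented rule: stop when nextCursorMark equals the cursorMark
    // that was sent. Always costs one final (empty) request.
    static int requestsUntilCursorRepeats(List<Integer> docs, int rows) {
        int cursor = 0, requests = 0;
        while (true) {
            int[] r = page(docs, cursor, rows);
            requests++;
            if (r[1] == cursor) break; // cursor did not advance: done
            cursor = r[1];
        }
        return requests;
    }

    // Proposed shortcut: stop as soon as a page comes back short.
    static int requestsUntilShortPage(List<Integer> docs, int rows) {
        int cursor = 0, requests = 0;
        while (true) {
            int[] r = page(docs, cursor, rows);
            requests++;
            if (r[0] < rows) break; // fewer results than requested: done
            cursor = r[1];
        }
        return requests;
    }

    public static void main(String[] args) {
        List<Integer> docs = new java.util.ArrayList<>();
        for (int i = 0; i < 25; i++) docs.add(i);
        System.out.println(requestsUntilCursorRepeats(docs, 10)); // 4
        System.out.println(requestsUntilShortPage(docs, 10));     // 3
    }
}
```

With 25 documents and rows=10, the shortcut saves exactly one request (the final empty page); when the result count is an exact multiple of rows, both rules cost the same.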
On 17/06/2018 03:10, Umadevi Nalluri wrote:
I am getting Connection refused (Connection refused) when I am running
reload_core with dsetool after we set up JMX. This issue has been happening
since the DSE upgrade to 5.0.12. Can someone please help with this issue? Is
this a bug? Is there a work
Hi Erick,
Thanks for the response.
First thing, we are not indexing on the slave. And we are not re-indexing/optimizing
the entire core on the master node.
The only warning which I see in the log is "Unable clean the unused index
directory so starting full copy".
That one I can understand, and I don't