Hi,
I have two fields in the index, company and year. The following surround query
finds comput* and appli* within 5 words of each other, and it works fine with
the surround query parser.
{!surround maxBasicQueries=10}company:5N(comput*, appli*)
Now, if I add another boolean query
Hi Tag,
I don't see any query (q) given for execution in the firstSearcher and
newSearcher event listeners. Can you add a query term:
<str name="q">query term here</str>
Check your logs: they will show that the firstSearcher event executed and print a
message with invertedIndex and the number of facet items.
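As a sketch, the warming query would be added to the firstSearcher listener in solrconfig.xml along these lines (the query text is just a placeholder; use a query that warms the caches you actually care about):

```xml
<listener event="firstSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst>
      <str name="q">query term here</str>
    </lst>
  </arr>
</listener>
```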
Hi,
What are these square brackets, backslashes, and quotes?
Are they part of the JSON output? Can you paste the human-readable XML response
writer output?
Thanks,
Ahmet
On Friday, June 20, 2014 12:17 AM, Ethan eh198...@gmail.com wrote:
Ahmet,
Assuming there is a multiValued field called Name of
Hello,
special field name _query_ is your friend.
+_query_:{!surround maxBasicQueries=10}company:5N(comput*, appli*)
+_query_:{!lucene}year:[2005 TO *]
http://searchhub.org/2009/03/31/nested-queries-in-solr/
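As a quick sketch, the two nested `_query_` clauses above can be combined into a single q parameter and URL-encoded before sending; the host and collection name below are placeholders:

```python
from urllib.parse import urlencode

# Combine the surround clause and the range clause into one q parameter,
# exactly as in the two +_query_: lines above.
q = ('+_query_:"{!surround maxBasicQueries=10}company:5N(comput*, appli*)" '
     '+_query_:"{!lucene}year:[2005 TO *]"')

# URL-encode before sending; host and collection are placeholders.
params = urlencode({"q": q, "wt": "xml"})
url = "http://localhost:8983/solr/collection1/select?" + params
```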
Ahmet
On Friday, June 20, 2014 9:39 AM, Shyamsunder R Mutcha
Hi Experts,
I have configured SolrCloud 4.8 with ZooKeeper and Tomcat in a 3-node
cluster configuration. We have a requirement to search table data
stored in HBase tables. For this I have configured the setup below:
1. Edited solrconfig.xml and added the contrib lib and
Hi,
I think this might be a silly question, but I want to make it clear.
What is a query parser? What does it do? I know it's used for converting a
query, but from what to what? What are the input and the output of a
query parser? And where exactly can this feature be used?
If possible please
I have the following problem with Solr 4.5.1, with a cloud install with 4
shards, no replication, using the built-in zookeeper on one Solr:
I have updated a document via the Solr console (select a core, then select
Documents). I used the CSV format to upload the document, including the
document
Hello,
we have a Solr Cloud 4.7 with 2 shards with 2 nodes each one.
Until now it was working fine, but since yesterday we have this error in
almost all the updates:
org.apache.solr.common.SolrException: Fallo en lectura de Conector (Connector read failure)
at
I am going to have a go at this. Maybe others can add/correct.
When you make a request to Solr, it hits a request handler first. E.g.
a /select request handler. That's defined in solrconfig.xml
The request handler can change your request with some defaults,
required and overriding parameters.
Alexandre's response is very thorough, so I'm really simplifying things, I
confess but here's my query parsers for dummies. :)
In terms of inputs/outputs, a QueryParser takes a string (generally assumed
to be human generated i.e. something a user might type in, so maybe a
sentence, a set of
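To make the input/output concrete, here is a toy sketch (not Solr's actual code) of the transformation a query parser performs: a human-typed string goes in, a structured query object comes out; in Lucene the output would be a Query such as MatchAllDocsQuery or TermQuery.

```python
# Toy illustration only -- NOT Solr's real parser. It mimics the shape of
# the transformation: user string in, structured query object out.
def toy_parse(q: str) -> dict:
    if q == "*:*":
        # The real parser produces a MatchAllDocsQuery here.
        return {"type": "MatchAllDocsQuery"}
    field, _, value = q.partition(":")
    # The real parser would produce a TermQuery (or a range, phrase, ...).
    return {"type": "TermQuery", "field": field, "term": value}
```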
Alex,
Thank you for the quick response. Apologies for my delay.
Yes, we'll use edismax. That won't solve the issue of multilingual documents... I
don't think... unless we index every document as every language.
Let's say a predominantly English document contains a Chinese sentence. If the
Hi,
I know how to send solrconfig.xml and schema.xml files to Solr using curl
commands.
But my problem is that I want to send them with Java, and I can't find a
way to do so.
I used HttpComponents and got HTTP headers before the file begins, which the SAX
parser does not like at all.
What is the best
Hi Daniel,
You said inputs are human-generated and outputs are lucene objects. So
my question is what does the below query mean. Does this fall under
human-generated one or lucene.?
http://localhost:8983/solr/collection1/select?q=*%3A*&wt=xml&indent=true
Thanks,
Vivek
On Fri, Jun 20, 2014 at
That's *:* and a special case. There is no scoring here, nor searching.
Just a dump of documents. Not even filtering or faceting. I sure hope you
have more interesting examples.
Regards,
Alex
On 20/06/2014 6:40 pm, Vivekanand Ittigi vi...@biginfolabs.com wrote:
Hi Daniel,
You said inputs
All right let me put this.
http://192.168.1.78:8983/solr/collection1/select?q=inStock:false&facet=true&facet.field=popularity&wt=xml&indent=true
.
I just want to know what form this is. Is it a Lucene query, or should this
query go through a query parser to get converted to a Lucene query?
Thanks,
Vivek
I would say *:* is a human-readable/writable query, as is
inStock:false. The former will be converted by the query parser into a
MatchAllDocsQuery, which is what Lucene understands. The latter will be
converted (again by the query parser) into some query. Now this is where
*which* query parser
I am upgrading an index from Solr 3.6 to 4.2.0.
Everything has been picked up except for the old DateFields.
I read some posts that due to the extra functionality of the TrieDateField
you would need to re-index for those fields.
To avoid re-indexing I was trying to do a Partial Update
On Fri, Jun 20, 2014 at 12:36 AM, Andy angelf...@yahoo.com.invalid wrote:
Congrats! Any idea when native faceting and the off-heap FieldCache will be
available for multivalued fields? Most of my fields are multivalued, so that's
the big one for me.
Hopefully within the next month or so
If anyone
On 6/20/2014 5:16 AM, Frederic Esnault wrote:
I know how to send solrconfig.xml and schema.xml files to SolR using curl
commands.
But my problem is that i want to send them with java, and i can't find a
way to do so.
I used HttpComponentsand got http headers before the file begins, which SAX
Yonik,
Does this native code use docValues in any way?
In the past I was forced to index a big portion of my data with docValues
enabled; OOM problems with large term dictionaries and GC were my main problem.
Another good optimization could be to do facet aggregations off the heap to
minimize
If I have documents with a person and his email address:
u...@domain.com
How can I configure Solr (4.6) so that the email address source field is
indexed as
- the user part of the address (e.g., user) is in Lucene index X
- the domain part of the
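The split itself is simple; as a sketch (the address below is a made-up example), the index-time transformation just has to partition on the '@':

```python
def split_email(addr: str):
    # Partition an address into (user, domain); assumes a single '@'.
    user, _, domain = addr.partition("@")
    return user, domain
```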
On Fri, Jun 20, 2014 at 10:15 AM, Yago Riveiro yago.rive...@gmail.com wrote:
Yonik,
This native code uses in any way the docValues?
Nope... not yet. It is something I think we should look into in the
future though.
In the past I was forced to indexed a big portion of my data with docValues
Hi Shawn,
First thank you for taking the time to answer me.
Actually I tried looking for a way to use SolrJ to upload my files, but I
cannot find any information anywhere about how to create nodes with their
config files using SolrJ.
All the websites, blogs and docs I found seem to be based on the
On Fri, Jun 20, 2014 at 9:46 PM, Frederic Esnault fesna...@serenzia.com wrote:
Actually i tried looking for a way to use SolrJ to upload my files, but i
cannot find anywhere informations about how to create nodes with their
config files using SolrJ.
Is this something solvable with configsets?
Hi Alexandre,
Nope, I cannot access the server (well, I can actually, but my users won't
be able to), and I can't rely on an HTTP curl call.
As for the final HTTP call indicated in the link you gave, that is my last
step, but before that I need my solrconfig.xml and schema.xml uploaded via
Will these awesome features be implemented in Solr soon?
On 2014/6/20 at 10:43 PM, Yonik Seeley yo...@heliosearch.com wrote:
On Fri, Jun 20, 2014 at 10:15 AM, Yago Riveiro yago.rive...@gmail.com
wrote:
Yonik,
This native code uses in any way the docValues?
Nope... not yet. It is something I
On 6/19/2014 4:51 PM, Huang, Roger wrote:
If I have documents with a person and his email address:
u...@domain.com
How can I configure Solr (4.6) so that the email address source field is
indexed as
- the user part of the address (e.g., user) is in Lucene
On 6/20/2014 8:46 AM, Frederic Esnault wrote:
First thank you for taking the time to answer me.
Actually i tried looking for a way to use SolrJ to upload my files, but i
cannot find anywhere informations about how to create nodes with their
config files using SolrJ.
All websites, blogs and
Hi Shawn,
Actually I should say that I'm using DSE Search (i.e. DataStax Enterprise
with Solr enabled).
With cURL, I'm doing it like this:
$ curl http://localhost:8983/solr/resource/nhanes_ks.nhanes/solrconfig.xml
--data-binary @solrconfig.xml -H 'Content-type:text/xml;
charset=utf-8'
$ curl
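For what it's worth, the same POST the curl command makes can be built in code without HttpComponents; here is a minimal sketch using Python's stdlib for illustration (the same shape applies in Java with HttpURLConnection). The URL is the one from the curl example; the body is a placeholder for the real file contents:

```python
from urllib.request import Request

# Placeholder for the real solrconfig.xml contents read from disk.
body = b"<config/>"

req = Request(
    "http://localhost:8983/solr/resource/nhanes_ks.nhanes/solrconfig.xml",
    data=body,
    headers={"Content-type": "text/xml; charset=utf-8"},
    method="POST",
)
# urllib.request.urlopen(req) would send it; not executed here
# because it needs a live server.
```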
Shawn,
Thanks for your response.
Due to security requirements, I do need the name and domain parts of the email
address stored in separate Lucene indexes.
How do you recommend doing this? What are the challenges?
Once the name and domain parts of the email address are in different Lucene
On Fri, Jun 20, 2014 at 11:16 AM, Floyd Wu floyd...@gmail.com wrote:
Will these awesome features being implemented in Solr soon
On 2014/6/20 at 10:43 PM, Yonik Seeley yo...@heliosearch.com wrote:
Given the current makeup of the joint Lucene/Solr PMC, it's unclear.
I'm not worrying about that for now,
On 6/20/2014 10:04 AM, Huang, Roger wrote:
Due to security requirements, I do need the name and domain parts of the
email address stored in separate Lucene indexes.
How do you recommend doing this? What are the challenges?
Once the name and domain parts of the email address are in different
Hi Yonik, I don't understand the relationship between Solr and Heliosearch,
since you were a committer on Solr?
I'm just curious.
On 2014/6/21 at 12:07 AM, Yonik Seeley yo...@heliosearch.com wrote:
On Fri, Jun 20, 2014 at 11:16 AM, Floyd Wu floyd...@gmail.com wrote:
Will these awesome features being
On Fri, Jun 20, 2014 at 12:36 PM, Floyd Wu floyd...@gmail.com wrote:
Hi Yonik, i dont' understand the relationship between solr and heliosearch
since you were committer of solr?
Heliosearch is a Solr fork that will hopefully find its way back to
the ASF in the future.
Here's the original
Hi Sameer, thanks for looking at the post. Below are the two variables read from
the XML file in my tool.
<add key="JavaPath" value="%JAVA_HOME%\bin\java.exe" />
<add key="JavaArgument" value=" -Xms128m -Xmx256m -Durl=http://localhost:8983/solr/{0}/update -jar F:/DataDump/Tools/post.jar" />
On the command line
On 6/20/2014 12:17 PM, Huang, Roger wrote:
How would you recommend storing the name and domain parts of the email
address in separate Lucene indexes?
To query, would I use the Solr cross-core join, fromIndex, toIndex?
I have absolutely no idea how to use Solr's join functionality. It is
not
I'd like to discuss moving nextCursorMark to the beginning of a query
response. That way one can fetch another result set before completely
downloading the response. Currently, it's placed last in the Solr response.
I figure this is just coincidence, because it's a recent addition to
Solr.
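Regardless of where nextCursorMark lands in the response, the client-side loop is the same; here is a sketch of the cursorMark contract, where `fetch` is a stand-in for whatever issues the actual Solr request:

```python
def drain(fetch, start_cursor="*"):
    """Fetch all results using cursorMark pagination.
    `fetch(cursor)` must return (docs, next_cursor); per the cursorMark
    contract, iteration stops when the cursor stops changing."""
    cursor, out = start_cursor, []
    while True:
        docs, next_cursor = fetch(cursor)
        out.extend(docs)
        if next_cursor == cursor:
            break
        cursor = next_cursor
    return out
```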
If you update to a specific core, I suspect you're getting the doc
indexed on two shards which leads to duplicate documents being
returned. So it depends on which core happens to answer the request...
Fundamentally, all versions of a document must go to the same shard in
order for the new version
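The "same shard" rule holds because, with default document routing, the target shard is a pure function of the document id; a toy sketch of that idea (Solr's compositeId router actually uses MurmurHash3, MD5 stands in here for illustration):

```python
import hashlib

def shard_for(doc_id: str, num_shards: int) -> int:
    # Hash the id and map it onto a shard; every update of the same id
    # therefore lands on the same shard. (Toy stand-in for Solr's
    # MurmurHash3-based compositeId routing.)
    h = int(hashlib.md5(doc_id.encode("utf-8")).hexdigest(), 16)
    return h % num_shards
```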
Hi,
I have the following situation using SolrCloud:
deleteCollection foo - Could not find collection:foo
createCollection foo - Error CREATEing SolrCore 'foo_shard1_replica1':
Could not create a new core in solr/foo_shard1_replica1/ as another core is
already defined there
unload
On 6/20/2014 1:24 PM, John Smodic wrote:
I have the following situation using SolrCloud:
deleteCollection foo - Could not find collection:foo
createCollection foo - Error CREATEing SolrCore 'foo_shard1_replica1':
Could not create a new core in solr/foo_shard1_replica1/ as another core is
On 06/20/2014 04:04 AM, Allison, Timothy B. wrote:
Let's say a predominantly English document contains a Chinese sentence. If the
English field uses the WhitespaceTokenizer with a basic WordDelimiterFilter,
the Chinese sentence could be tokenized as one big token (if it doesn't have
any
Please post this issue on StackOverflow and one of us DataStax guys will
deal with it there, since nobody here would know much about the specialized
way that DataStax uses for dynamic schema and config loading.
Check your DSE server log for the 500 exception - but post it on SO since it
is
Hi Jack, actually I posted on SO first, but got no answer.
Check here :
https://stackoverflow.com/questions/24296014/datastax-dse-search-how-to-post-solrconfig-xml-and-schema-xml-using-java
I can't see any exception in cassandra/system.log at the moment of the
error. :(
*Frédéric Esnault*
CTO /
Oops! Sorry I missed it. Please post the rest of the info on SO as well.
We'll get to it!
-- Jack Krupansky
-Original Message-
From: Frederic Esnault
Sent: Friday, June 20, 2014 7:03 PM
To: solr-user@lucene.apache.org
Subject: Re: Question about sending solrconfig and schema files
Hi Tim,
I'm working on a similar project with some differences, and maybe we can
share our knowledge in this area:
1) I have no problem with the Chinese characters. You can try this link :
http://123.100.239.158:8983/solr/collection1/browse?q=%E4%B8%AD%E5%9B%BD
Solr can find the record even
Hi
As the title says: I am using Solr 4.6 with SolrCloud. One of my leader cores
within a shard has been unloaded, yet a ping to the unloaded core
returns OK.
Is that normal?
How do I send the right ping request to the core and get a not-OK response?