Does it work like this?
edismax
SignalImpl.baureihe^1011 text^0.1
Another option:
How about just giving the desired fields a high boost factor while adding
the field to the document, using Solr?!
Can this work?
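The boosts above would normally go into the qf parameter of the (e)dismax handler. A sketch of the request-handler defaults in solrconfig.xml — the handler name and surrounding config are assumptions, only the two boost values come from the message above:

```xml
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">edismax</str>
    <!-- per-field query-time boosts: field^boost, space-separated -->
    <str name="qf">SignalImpl.baureihe^1011 text^0.1</str>
  </lst>
</requestHandler>
```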
--
View this message in context:
http://lucene.472066.n3.nabble.com/Boosting-a-fiel
We have just one more problem:
When we search explicitly, like *:* or partNumber:A32783627, we still don't
get any results.
What are we doing wrong here?
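One possible cause, offered as a guess rather than a diagnosis of this setup: the plain dismax parser does not treat *:* or fielded input like partNumber:A32783627 as query syntax. A common workaround is to send the match-all query through q.alt, or to switch to edismax, which does parse fielded queries:

```text
q.alt=*:*&defType=dismax
```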
Perfect!!! THANKS A LOT
That was the mistake.
From: Jack Krupansky-2 [via Lucene]
[mailto:ml-node+s472066n409590...@n3.nabble.com]
Sent: Wednesday, 16 October 2013 14:55
To: uwe72
Subject: Re: Boosting a field with defType:dismax --> No results at all
Get rid of the newlines bef
Hi there,
I want to boost a field, see below.
If I add defType:dismax I don't get any results at all anymore.
What am I doing wrong?
Regards
Uwe
true
text
AND
default
true
Unfortunately I didn't understand at all.
We are using Tomcat for the Solr server.
How exactly can I prevent users from accessing the Solr admin page?
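One common approach with Tomcat, sketched here as an assumption about the deployment rather than a tested config, is a BASIC-auth security constraint on the admin path in the Solr webapp's web.xml (the role name is a placeholder and must also be defined in tomcat-users.xml):

```xml
<!-- Restrict the Solr admin UI to authenticated users with the solr-admin role -->
<security-constraint>
  <web-resource-collection>
    <web-resource-name>Solr admin</web-resource-name>
    <url-pattern>/admin/*</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <role-name>solr-admin</role-name>
  </auth-constraint>
</security-constraint>
<login-config>
  <auth-method>BASIC</auth-method>
  <realm-name>Solr admin</realm-name>
</login-config>
```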
Hi there,
How can I prevent everybody who knows the URL of our Solr admin page
from being able to access it?
Thanks in advance!
Uwe
Erick, I think he didn't add the validate=false to a field, but globally to
the schema.xml/solrconfig.xml (I don't remember where exactly this is defined
globally).
From: Erick Erickson [via Lucene]
[mailto:ml-node+s472066n4070067...@n3.nabble.com]
Sent: Thursday, 13 June 2013 00:51
How can I load these custom properties with SolrJ?
From: Erick Erickson [via Lucene]
[mailto:ml-node+s472066n4070068...@n3.nabble.com]
Sent: Thursday, 13 June 2013 00:53
To: uwe72
Subject: Re: SOLR-4641: Schema now throws exception on illegal field
parameters.
But see Steve Rowe
Is there a way to tell Solr that it should not check these parameters?
We added our own parameters, which we load at runtime for other
purposes.
Thanks in advance!
I have very big documents in the index.
I want to update a multivalued field of a document without loading the whole
document.
How can I do this?
Is there good documentation somewhere?
Regards
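For what it's worth, Solr 4's atomic updates can modify a single multivalued field without resending the whole document. A minimal JSON sketch posted to /update — the id and field name here are placeholders — where "add" appends a value and "set" would replace the field:

```json
[{"id": "doc1", "navigateTo": {"add": "id4"}}]
```

Note that atomic updates rely on the updateLog being enabled in solrconfig.xml and on the other fields being stored, since Solr rebuilds the document internally from stored values.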
Erik, what do you mean by this parameter? I can't find it.
OK, it seems this works:
import org.apache.tika.Tika;
// Tika's facade parses the file and returns its plain-text content
Tika tika = new Tika();
String tokens = tika.parseToString(file);
Yes, I don't really want to index/store the PDF document in Lucene.
I just need the parsed tokens for other things.
So you mean I can use ExtractingRequestHandler.java to retrieve the items?
Does anybody have a piece of code doing that?
Actually, I give the PDF as input and want the parsed items (the
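If it helps: the extract handler has an extractOnly flag that returns the parsed content in the response instead of indexing it. A minimal SolrJ sketch, assuming a running Solr 4.x server with the extract handler enabled at its usual path (URL and file name are placeholders, not verified against this setup):

```java
import java.io.File;

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;
import org.apache.solr.common.util.NamedList;

public class ExtractOnlyExample {
    public static void main(String[] args) throws Exception {
        SolrServer server = new HttpSolrServer("http://localhost:8983/solr");
        File pdfFile = new File("example.pdf"); // hypothetical input file

        // extractOnly=true: Tika parses the file, nothing is added to the index;
        // the extracted text comes back in the response instead.
        ContentStreamUpdateRequest req =
                new ContentStreamUpdateRequest("/update/extract");
        req.addFile(pdfFile, "application/pdf");
        req.setParam("extractOnly", "true");
        NamedList<Object> response = server.request(req);
        System.out.println(response); // contains the extracted content and metadata
    }
}
```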
I have a bit of a strange use case.
When I index a PDF to Solr I use ContentStreamUpdateRequest.
The Lucene document then contains in the "text" field all contained items
(the parsed items of the physical PDF).
I also need to add these parsed items to another Lucene document.
Is there a way to recei
A Lucene 4.0 document now returns a string value for a Date field, instead of
a Date object:
"2009-10-29T00:00:009Z"
Solr 3.6 --> Date instance
Can this be set somewhere in the config?
I prefer to receive a Date instance.
Wasn't it the stack trace in my posting before?
It is the same behavior when I use HttpSolrServer.java.
Here is the console output of the Solr server:
03.01.2013 11:32:31 org.apache.solr.core.SolrDeletionPolicy updateCommits
INFO: newest commit = 1
03.01.2013 11:32:31 org.apache.solr.update.pr
Hi there,
How can I add a date field to a PDF document?
ContentStreamUpdateRequest up = new
ContentStreamUpdateRequest("/update/extract");
up.addFile(pdfFile, "application/octet-stream");
up.setParam("literal." + SolrConstants.ID, solrPDFId);
Regards
Uwe
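In case it helps, additional literal.* parameters can carry extra field values the same way as the id. A sketch continuing the snippet above — the field name myDate and its declaration as a date type in schema.xml are assumptions — using the UTC ISO-8601 format Solr date fields expect:

```java
// Hypothetical date field "myDate"; Solr date fields expect UTC ISO-8601
up.setParam("literal.myDate", "2013-01-03T11:32:31Z");
```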
We have more than a hundred fields... I don't want to put them all into the fl
parameter.
Is there another way, like saying "return all fields except these fields"...?
Anyhow, I will change the field to stored=false in the schema.
>>> your query-time fl parameter.
Does that mean "don't return" this field?
Because we have many, many fields, so probably now I use the default and all
fields are loaded. I just want to tell the query not to load the
"text" field. Do I do this with the fl parameter?
Hi,
I am indexing PDF documents to Solr via Tika.
When I do the query in the client with SolrJ, the performance is very bad (40
seconds to load 100 documents).
Probably because it loads all the content, which I don't need. How can
I tell the query not to load the content?
Or other reasons w
You mean this:
stats: entries_count : 24
entry#0 :
'NIOFSIndexInput(path="/home/connect/ConnectPORTAL/preview/solr-home/data/index/_2f3.frq")'=>'WiringDiagramSheetImpl.pageNumber',class
org.apache.lucene.search.FieldCache$StringIndex,null=>org.apache.lucene.search.FieldCache$StringIndex#32159051
My design is like this at the moment:
Documents in general have relations to each other.
So a document has an id, some attributes, and a multivalued field
"navigateTo".
E.g.
Document1: id1, some attributes, navigateToAllDocumentsWhenColor:red,
navigateTo: id2, id3
Document2: id2, some attribute
Yes, it works when I increase maxBooleanClauses.
But in any case I have to think about how to redesign the document structure.
I have big problems with the relations between documents.
Also, a document can be changed; then I have to update the many documents
which have a relation to the modified one.
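For reference, the limit mentioned above is set in solrconfig.xml; a sketch with an example value (the default is 1024, the value here is just an illustration):

```xml
<!-- Maximum number of clauses allowed in a BooleanQuery -->
<maxBooleanClauses>4096</maxBooleanClauses>
```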
I have already:
My query is like this, see below. I already use a POST request.
I got a Solr exception:
org.apache.solr.client.solrj.SolrServerException: Server at
http://server:7056/solr returned non ok status:400, message:Bad Request
Is there a way to prevent this?
id:("ModuleImpl@20117" OR "ModuleImpl
Hi there,
I have a fundamental question.
We have around 5 million Lucene documents.
At the beginning we have around 4000 XML files which we transform into
SolrInputDocuments using SolrJ and add to the index.
A document is also related to other documents, so while adding a document we
Thanks Andrew!
In parallel I also found this thread:
http://grokbase.com/t/lucene/solr-user/117m8e9n8t/solr-3-3-exception-in-thread-lucene-merge-thread-1
They are talking about the same
We just started the importer again with the unlimited flag (ulimit -v
unlimited); then we will see.
Today the same exception:
INFO: [] webapp=/solr path=/update
params={waitSearcher=true&commit=true&wt=javabin&waitFlush=true&version=2}
status=0 QTime=1009
Nov 13, 2012 2:02:27 PM org.apache.solr.core.SolrDeletionPolicy onInit
INFO: SolrDeletionPolicy.onInit: commits:num=1
commit{dir=/net/smtcax
Kernel: 2.6.32.29-0.3-default #1 SMP 2011-02-25 13:36:59 +0100 x86_64
x86_64 x86_64 GNU/Linux
SUSE Linux Enterprise Server 11 SP1 (x86_64)
physical Memory: 4 GB
portadm@smtcax0033:/srv/connect/tomcat/instances/SYSTEST_Portal_01/bin>
java -version
java version "1.6.0_33"
Java(TM) SE Runtime Envi
Thanks Eric. We are using:
export JAVA_OPTS="-XX:MaxPermSize=400m -Xmx2000m -Xms200M
-Dsolr.solr.home=/home/connect/ConnectPORTAL/preview/solr-home"
We have around 5 million documents. The index size is around 50 GB.
Before we add a document we delete the same id in the cache; it doesn't matter
i
While adding a Lucene document we got this problem. What can we do here?
Nov 12, 2012 3:25:09 PM org.apache.solr.update.DirectUpdateHandler2 commit
INFO: start
commit(optimize=false,waitFlush=true,waitSearcher=true,expungeDeletes=false)
Exception in thread "Lucene Merge Thread #0"
org.apache.lucene