I think you are looking for the 'fl' param.
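For illustration, a request using fl might look like this (host, core, and field names are assumptions, not from the original thread):

```shell
# fl restricts the response to the listed fields only;
# everything here is illustrative
SOLR="http://localhost:8983/solr"
QUERY="${SOLR}/select?q=*:*&fl=id,title"
echo "$QUERY"
# then fetch it with: curl "$QUERY"
```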
From: pcmanprogrammeur [via Lucene]
[mailto:ml-node+761818-821639313-124...@n3.nabble.com]
Sent: Wednesday, April 28, 2010 12:38 AM
To: caman
Subject: Only one field in the result
Hello,
In my schema.xml, I have some fields stored and
On Tue, Apr 27, 2010 at 9:56 PM, Chris Hostetter
hossman_luc...@fucit.org wrote:
but as i understand the new cloud stuff (by which i mean: i don't
understand the new cloud stuff, but i've heard rumors) this will be
possible with that functionality.
Yeah, that should be the goal.
The
What about changing the schema.xml and classifying the field as stored=false?
E.g.:
<field name="myfieldname" type="string" indexed="true" stored="false"/> --
searchable, but the value is not returned in the Solr response.
HTH,
Sven
-Original Message-
Von: Darren Govoni
On Apr 27, 2010, at 4:02 PM, Király Péter wrote:
Dear Solr users,
I am interested in whether it is possible to get date facets without
intersecting ranges. Currently, the documents which stand on the boundaries of
ranges are covered by both ranges. An example:
facet result (from Solr):
int
Dear solr-user,
Using a quartz scheduler, I want to index all documents inside a specific
folder with Solr(J). To perform the actual indexing I selected the
org.apache.solr.handler.extraction.ExtractingRequestHandler. The request
handler functions perfectly when the request is sent via curl: curl
What error are you getting? Does
http://www.lucidimagination.com/blog/2009/09/14/posting-rich-documents-to-apache-solr-using-solrj-and-solr-cell-apache-tika/
work for you?
On Apr 28, 2010, at 6:44 AM, Jeroen van Schagen wrote:
Dear solr-user,
Using a quartz scheduler, I want to index all
Grant Ingersoll said:
You should be able to do inclusive/exclusive ranges using the query parser
by mixing matching brackets [] and braces {}.
See
See
http://lucene.apache.org/java/2_9_1/queryparsersyntax.html#Range%20Searches
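As a sketch of the syntax described above, a range term mixing an inclusive '[' lower bound with an exclusive '}' upper bound could look like this (field name and dates are made up):

```shell
# '[' includes the endpoint, '}' excludes it; field name and
# dates are illustrative
RANGE='date:[2010-01-01T00:00:00Z TO 2010-02-01T00:00:00Z}'
echo "$RANGE"
```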
Hi Grant,
Thanks for your answer, but my problem is not that how to
Hi,
Several possible solutions are discussed in
http://lucene.472066.n3.nabble.com/Date-Faceting-and-Double-Counting-td502014.html
Regards,
gwk
On 4/27/2010 10:02 PM, Király Péter wrote:
Dear Solr users,
I am interested in whether it is possible to get date facets without
intersecting
On second thought, the curl command doesn't index the file content on this
system either. It worked on my home system (Mac) but isn't working anymore
on my work system (Windows); could this have anything to do with the issue?
I'm not getting any error messages, and the file identifier is properly indexed
Hi Péter,
There has been a fair amount of discussion on this topic, and there are some
pending things being looked at with regards inclusive/exclusive combinations
etc.
For today, the easiest way to get the results you want is to use a granular
time that allows you to specify a non-overlapping
Thanks for all the help guys. I now have it up and running.
Jon Baer wrote:
I would not use this layout, you are putting important Solr config files
outside onto the docroot (presuming we are looking @ the webapps folder) ...
here is my current Tomcat project (if it helps):
Hi,
Can highlights be returned when using the dismax request handler?
I read in the below post that I can use a workaround with qf?
http://lucene.472066.n3.nabble.com/bug-No-highlighting-results-with-dismax-and-q-alt-td498132.html
Any advice is greatly appreciated.
Regards, Will
--
Thanks,
that's great!
Péter
- Original Message -
From: gwk g...@eyefi.nl
To: solr-user@lucene.apache.org
Sent: Wednesday, April 28, 2010 3:05 PM
Subject: Re: date facets without intersections
Hi,
Several possible solutions are discussed in
I guess I didn't explain it properly. I want to create a core on
the master, and then have N slaves also (aka replicate) create
those new core(s) on the slave servers, then of course, begin to
replicate (yeah, got that part). There doesn't appear to be
anything today that does this, it's unclear
On Wed, Apr 28, 2010 at 10:14 AM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
I guess I didn't explain it properly. I want to create a core on
the master, and then have N slaves also (aka replicate) create
those new core(s) on the slave servers, then of course, begin to
replicate (yeah,
Because I need to store it to get highlighting to work from the
extracted text.
The problem now is that unless I specify only the fields I want, I will
get back the stored text field, which is huge.
I use a lot of dynamic fields, so I cannot simply tell Solr which fields
I want in the query.
Maybe this is a very important feature to be considered for the next releases
of Solr.
I agree, it's glaringly important. I'm quite surprised that this feature isn't
already available. Can anyone elaborate on the reason that it isn't
available? (Out of interest.)
Sometimes these kinds of cases would come
Thanks all,
Tom, your results are interesting. We both have about 5 million documents, but
my index is 20 gigs vs. your 2 TB. I imagine we'll have a much easier time
getting quick responses against these small documents compared to your
multi-second queries. As for index/search disk
Hi,
Does anyone have an idea about the performance benefits of searching across
floats compared to strings? I have one multi-valued field that contains about
3000 distinct IDs across 5 million documents. I am going to run a lot of queries
like q=id:102 OR id:303 OR id:305, etc. Right now it is
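A disjunction over many IDs like the one in the question can be assembled mechanically; this sketch uses illustrative IDs and the field name from the example above:

```shell
# build id:(102 OR 303 OR 305) from a space-separated list;
# the IDs are illustrative
IDS="102 303 305"
Q="id:($(echo "$IDS" | sed 's/ / OR /g'))"
echo "$Q"
```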
Hi!
I'm currently trying to implement a full text search for CouchDB using
Solr. I went through the tutorial and also some of the examples
(slashdot rss feed import, hsql import,..) within the downloadable
distribution.
Since CouchDB works with REST + plaintext JSON and Solr is looking for
Hi Patrick,
I don't know much about couch, but if you want to return JSON from Solr (which I
think couch would understand) you can do that with wt=json in the query string
when querying Solr. See here for more details:
http://wiki.apache.org/solr/SolJSON
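A minimal sketch of such a request (host and core are assumptions):

```shell
# wt=json asks Solr for a JSON response instead of the default XML;
# host and core are illustrative
QUERY="http://localhost:8983/solr/select?q=*:*&wt=json"
echo "$QUERY"
# curl "$QUERY" would return the results as JSON
```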
HTH a little
Brendan
On Apr 28, 2010, at
Hey Brendan!
Thanks for your response.
I don't know much about couch, but if you want to return JSON from Solr
(which I think couch would understand) you can do that with wt=json
in the query string when querying Solr. See here for more details:
http://wiki.apache.org/solr/SolJSON
Actually I'm
In this case, there's an index/core created per day. Each new day's core
needs to replicate to the slave fairly immediately after creation, for
queries. Is there some other mechanism in trunk/cloud that would
solve this?
On Wed, Apr 28, 2010 at 9:22 AM, Yonik Seeley
yo...@lucidimagination.com
Correct me if I'm wrong, but I think the problem here is that while there is a
fetchindex command in replication, the handler and the master/slave setup
pertain to the core config.
For example, for this to work properly the solr.xml configuration would need to
set up some type of global replication
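For reference, the per-core fetchindex request mentioned above would look roughly like this (slave host and core name are made up):

```shell
# command=fetchindex tells this core's replication handler to pull
# the index from its configured master; names are illustrative
SLAVE="http://slave-host:8983/solr/core20100428"
QUERY="${SLAVE}/replication?command=fetchindex"
echo "$QUERY"
```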
Hi,
Setting up CouchDB-Lucene is quite easy, but you don't want that, I guess. You
could construct a show function to convert input to Solr-accepted XML; it
should be very straightforward. You just need some program to fetch from
CouchDB and push it into Solr.
Cheers,
-Original
Setting up CouchDB-Lucene is quite easy, but you don't want that, I
guess.
Yeah, I was thinking about CouchDB-Lucene too (also found it in the
CouchDB wiki). It's not like I HAVE to make it work with Solr. If it
turns out that it's not possible or a pain in the ass, I'll probably go
for the
Are there any actual errors in the log? What occurs after the last line in
your log below? What happens if you send in an extract-only request? Do you
get content back out?
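An extract-only request of the kind suggested above can be sketched like this (host and file name are assumptions):

```shell
# extractOnly=true returns the extracted content in the response
# instead of indexing it; host and file name are illustrative
URL="http://localhost:8983/solr/update/extract?extractOnly=true"
echo "curl \"$URL\" -F \"myfile=@document.pdf\""
```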
On Apr 28, 2010, at 9:10 AM, Jeroen van Schagen wrote:
On second thought, the curl command doesn't index the file
Whether you need Solr depends on if you require some features such as
highlighting, faceting, more-like-this etc. They will not work with
CouchDB-Lucene, nor can you, at this moment, use CoucDB-Lucene behind
CouchDB-Lounge although a seperate shard can have a sharded Lucene index, you
cannot
On 4/27/10 12:04 PM, Chris Hostetter wrote:
: SEVERE: Could not start SOLR. Check solr/home property
it means something went horribly wrong when starting Solr, and since this
is frequently caused by either an incorrect explicit solr/home or an
incorrect implicitly guessed solr home, that is
Multiple spellcheckers may be specified by name in solrconfig, such as
<str name="name">jarowinkler</str>; however, how does one make a
request to this particular spellchecker, as opposed to the one named
"default"?
Multiple spellcheckers may be specified by name in solrconfig, such as
<str name="name">jarowinkler</str>; however, how does one make a request to
this particular spellchecker, as opposed to the one named "default"?
With the spellcheck.dictionary parameter.
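A request selecting the named spellchecker might look like this (host, query, and handler path are illustrative):

```shell
# spellcheck.dictionary selects the named spellchecker instead of
# the default one; host and query are illustrative
QUERY="http://localhost:8983/solr/select?q=helo&spellcheck=true&spellcheck.dictionary=jarowinkler"
echo "$QUERY"
```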
Ahmet, thanks; however, it's unintuitive. Shouldn't it be spellchecker.name?
On Wed, Apr 28, 2010 at 12:01 PM, Ahmet Arslan iori...@yahoo.com wrote:
Multiple spellcheckers may be specified by name in solrconfig, such as
<str name="name">jarowinkler</str>; however, how does one make a request to this
Hello,
I'm new on the list.
I searched the list a lot, but I didn't find an answer to my question.
I'm using Solr 1.4 on Windows with an Oracle 10g database.
I am able to do full-import without any problem, but I'm not able to get
delta-import working.
I have the following in the
You should end up w/ a file like conf/dataimport.properties @ full import
time, might be that it did not get written out?
- Jon
On Apr 28, 2010, at 3:05 PM, safl wrote:
Hello,
I'm new on the list.
I searched the list a lot, but I didn't find an answer to my question.
I'm
Jon Baer wrote:
You should end up w/ a file like conf/dataimport.properties @ full
import time, might be that it did not get written out?
- Jon
That's right. I get dataimport.properties in the conf directory after full
import, and the deltaQuery looks like:
select objectid from table
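Once that file exists, a delta-import is typically triggered with a request like this (host and handler path are assumptions):

```shell
# command=delta-import runs the delta queries configured in
# data-config.xml; host and path are illustrative
QUERY="http://localhost:8983/solr/dataimport?command=delta-import"
echo "$QUERY"
```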
Jumping in late here, but if you're interested, we're currently
implementing an LCF connector for couchdb at JTeam (http://www.jteam.nl).
We'll make it available online and try to contribute it back to LCF.
We'll also soon publish a blog post about it as an example of how to
develop custom
Is there anything wrong with wrapping the text content of all fields
in CDATA, whether they are analyzed, not analyzed, indexed, not indexed,
etc.? I have a script that creates update XML documents, and it's
just simpler to wrap all text content in all fields in CDATA. From my
brief tests it
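For what it's worth, a CDATA-wrapped update document of the kind described might look like this (field names are made up):

```shell
# CDATA lets field values contain raw markup characters without
# XML-escaping them; field names are illustrative
cat > /tmp/doc.xml <<'EOF'
<add>
  <doc>
    <field name="id"><![CDATA[doc-1]]></field>
    <field name="body"><![CDATA[Text with <tags> & ampersands]]></field>
  </doc>
</add>
EOF
grep -c 'CDATA' /tmp/doc.xml  # prints 2
```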
Actually, it's simply a matter of adding a core via the core admin handler.
It's not hard to accomplish, though I guess it's not a part of Solr
today to perform this task.
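A sketch of that CoreAdmin CREATE request (host, core name, and instanceDir are illustrative):

```shell
# the CoreAdmin CREATE action adds a new core at runtime;
# all names here are made up
QUERY="http://localhost:8983/solr/admin/cores?action=CREATE&name=core20100428&instanceDir=core20100428"
echo "$QUERY"
```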
On Wed, Apr 28, 2010 at 8:50 AM, Jon Baer jonb...@gmail.com wrote:
Correct me if I'm wrong, but I think the problem here is that
This thread spurred me on enough to follow through with the idea i posted
in SOLR-397 a while back. I've attached a patch to that issue if you want
to try it out...
https://issues.apache.org/jira/browse/SOLR-397
It adds a facet.date.include param that supports the following
options: all,
: Could this be a needed capability in Solr? It seems like a problem that
: a -field operator would help, that simply does not return those
: fields.
the broader issue is more complicated than that...
http://wiki.apache.org/solr/FieldAliasesAndGlobsInParams
there is nothing to prevent
On Wed, Apr 28, 2010 at 2:23 PM, Jon Baer jonb...@gmail.com wrote:
From what I understand Cassandra uses a generic gossip protocol for node
discovery (custom), will the Solr-Cloud have something similar?
SolrCloud uses zookeeper, so node discovery is a simple matter of
looking there. Nodes
Hi,
I am trying the NOT clause with the dismax query type, and it is not working. It
works with the standard query type. I tried the following query:
q=NOT+keyword&fq=&qt=dismax&start=0&rows=26&fl=score
But when I replace it with qt=standard, it works.
How do I make it work with the dismax query type?
regards,
Naga Darbha wrote:
Hi,
I am trying the NOT clause with the dismax query type, and it is not working. It
works with the standard query type. I tried the following query:
q=NOT+keyword&fq=&qt=dismax&start=0&rows=26&fl=score
But when I replace it with qt=standard, it works.
How do I make it work with dismax
I tried '-' even, but it works only with standard queries, not dismax.
regards,
Naga
-Original Message-
From: Koji Sekiguchi [mailto:k...@r.email.ne.jp]
Sent: Thursday, April 29, 2010 9:46 AM
To: solr-user@lucene.apache.org
Subject: Re: NOT keyword - doesn't work with dismax?
Naga
Naga Darbha wrote:
I tried '-' even, but it works only with standard queries, not dismax.
regards,
Naga
-Original Message-
From: Koji Sekiguchi [mailto:k...@r.email.ne.jp]
Sent: Thursday, April 29, 2010 9:46 AM
To: solr-user@lucene.apache.org
Subject: Re: NOT keyword - doesn't work
Hello, I have Solr 1.4 deployed in WebSphere 6.1. I'm trying to add a
URL-based security constraint to my project, but if I specify the core name
in the constraint, the path to the admin of each core gives a 404 error.
Does anyone have any experience with this, or suggestions for how I can
work around it?