Hello Solr,
Do we have to specify double quotes for a single term (if the term is
camelcase, e.g., OrientalTradingCo) while querying? I am using apache-solr-3.3.0.
For example the query :
q=OrientalTradingCo&debugQuery=true gives the debugging response as ---
OrientalTradingCo
OrientalTradingC
Tried using the ord() function, but it was the same as the standard sort.
Do I just need to bite the bullet and reindex everything?
Thanks!
Pete
On Oct 21, 2011, at 5:26 PM, Tomás Fernández Löbbe wrote:
> I don't know if you'll find exactly what you need, but you can sort by any
> field or Fun
Hi,
I am implementing a Solr solution where I want to use some field values
from the main query output as input when building a facet. How do I do that?
Eg:
Response from main query:
name1
200
name1
400
I want to build a facet for the query where "prod_id:200 prod_id:400". I'd like
to do all this i
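One way to sketch the two-pass approach: run the main query, collect the prod_id values from its response, then issue a second request that facets only over those products. The "category" facet field below is a made-up assumption; prod_id comes from the question.

```python
from urllib.parse import urlencode

def build_facet_request(prod_ids):
    """Build the query string for the second pass, restricted to the
    prod_id values harvested from the main query's response."""
    # OR the ids together so the second pass matches only those products
    q = " OR ".join("prod_id:%s" % pid for pid in prod_ids)
    return urlencode({"q": q, "facet": "true",
                      "facet.field": "category", "rows": 0})

# ids taken from the main query's output
print(build_facet_request([200, 400]))
```

The alternative is to do it in one round trip with facet.query parameters, but that requires knowing the ids up front.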
Hi,
I've started to use Solr to build up a search service, but I have
encountered a problem here.
However, when I use this URL, it always returns "sort param could not be
parsed as a query, and is not a field that exists in the index: geodist()"
http://localhost:8080/solr/select/?indent=tru
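For what it's worth, geodist() sorting (Solr 3.1+) typically needs the sfield and pt parameters to be set so the function can resolve; a sketch with assumed field/point values:

```python
from urllib.parse import urlencode

# Field and point values below are assumptions -- adjust to your schema.
params = {
    "q": "*:*",
    "sfield": "store",        # the LatLonType field to measure from
    "pt": "45.15,-93.85",     # the origin point
    "sort": "geodist() asc",  # parseable once sfield/pt are supplied
}
print("http://localhost:8080/solr/select/?" + urlencode(params))
```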
> ?q.alt=*:* worked for me -- how do I make sure that the
> standard query
> parser is configured.
You can append defType=lucene to your search URL.
A more permanent way is to set defType as a default parameter in solrconfig.xml.
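A sketch of what that default could look like in solrconfig.xml (the handler name and the q.alt default here are assumptions; adjust to your actual request handler):

```xml
<requestHandler name="/select" class="solr.SearchHandler" default="true">
  <lst name="defaults">
    <!-- make the standard (lucene) query parser the default -->
    <str name="defType">lucene</str>
    <str name="q.alt">*:*</str>
  </lst>
</requestHandler>
```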
Hi,
I want to use Solr 3.1 to index the content of a website. Rather than using a
web crawler to fetch the content and load it into Solr I want to use the DIH to
get the data from the Content Management Database that supports the website.
It would be possible to write SQL to obtain a complete s
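Not knowing the actual CMS schema, a minimal DIH data-config.xml along those lines might be sketched like this (driver, connection details, table, and column names are all placeholders):

```xml
<dataConfig>
  <!-- connection details are placeholders -->
  <dataSource driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost/cms"
              user="reader" password="..."/>
  <document>
    <!-- one entity per SQL query; columns map onto schema fields -->
    <entity name="page"
            query="SELECT id, title, body FROM pages">
      <field column="id" name="id"/>
      <field column="title" name="title"/>
      <field column="body" name="text"/>
    </entity>
  </document>
</dataConfig>
```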
Greetings guys,
Is there a good front-end application or interface for Solr?
Features I'm looking for are:
configure the query interface (using non-programmatic features)
configure pagination
configure bookmarking of results
export results of a query to a csv or other format (JSON, etc.)
Is
Thanks,
?q.alt=*:* worked for me -- how do I make sure that the standard query
parser is configured?
Thanks.
MM.
On Mon, Oct 24, 2011 at 2:47 AM, Ahmet Arslan wrote:
> > 2. If I send Solr the following query:
> > q=*:*
> >
> > I get nothing just:
> > > name="response" numFound="0" star
I have 2 types of docs, users and posts.
I want to view all the docs that belong to certain users by joining posts
and users together. I have to filter the users with a filter query of
"is_active_boolean:true" so that the score is not affected, but since I do a
join, I have to move the filter query
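A sketch of what pushing the filter through a join-aware fq might look like with the 4.0-style join parser (the field names here are pure assumptions; fq clauses do not influence scoring):

```python
from urllib.parse import urlencode

# Hypothetical schema: posts carry author_id, users carry id.
params = {
    "q": "type:post",
    # the filter lives in fq, so it cannot affect the score;
    # the join maps matching users onto their posts
    "fq": "{!join from=id to=author_id}type:user AND is_active_boolean:true",
}
print(urlencode(params))
```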
JRJ,
We did check the official Solr website but found it really technical;
since we are not on the developer side, we just want some basic
information or numbers about its usage.
Thanks for your answer, anyway.
2011/10/24 Jaeger, Jay - DOT
> 1. Solr, proper, does not index "files".
Is this really a stumper? This is my first experience with Solr, and having
spent only an hour or so with it I hit this barrier (below). I'm sure *I* am
doing something completely wrong; just hoping someone more familiar with the
platform can help me identify and fix it.
For starters...what's "Coul
Maybe put them in a single string field (or any other field type that is not
analyzed -- certainly not text) using some character separator that will
connect them, but won't confuse the Solr query parser?
So maybe you start out with key value pairs of
Key1 value1
Key2 value2
Key3 value3
Prepro
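A minimal sketch of that preprocessing step, assuming '|' never occurs in the keys or values (otherwise escape it first); each joined string would then go into one value of a multivalued, non-analyzed string field:

```python
SEP = "|"  # any character that never occurs in keys or values

pairs = {"Key1": "value1", "Key2": "value2", "Key3": "value3"}
# one separator-joined string per pair, keeping key and value connected
joined = ["%s%s%s" % (k, SEP, v) for k, v in pairs.items()]
print(joined)
```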
1. Solr, proper, does not index "files". An adjunct called Solr Cell can. See
http://wiki.apache.org/solr/ExtractingRequestHandler . That article describes
which kinds of files Solr Cell can handle.
2. I have no idea what you mean by "incidents per year". Please explain.
3. Even though
Hi all,
I am doing a student project on search engine research. Right now I have
some basic questions about Solr.
1. How many types of data files can Solr support (an estimate)? I.e., the
number of file types Solr can look at for indexing and searching.
2. What is the estimated cost of incidents per year f
I have not spent a lot of time researching it, but one would expect that the OS
RAM requirement for optimization of an index to be minimal.
My understanding is that during optimization an essentially new index is built.
Once complete it switches out the indexes and will throw away the old one.
Sure. Just facet on a tokenized field of the tweet text. You'll want to tune
the analysis configuration to suit your desires, but no problem getting counts
back using &facet=on&facet.field=tweet_text kinda thing.
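Turning those facet counts into a tag cloud is then just client-side scaling; a minimal sketch (the counts below are made up):

```python
def tag_cloud_weights(facet_counts, min_size=10, max_size=40):
    """Map term -> font size, scaled linearly between the smallest
    and largest facet count."""
    lo, hi = min(facet_counts.values()), max(facet_counts.values())
    span = (hi - lo) or 1  # avoid division by zero when all counts match
    return {term: min_size + (count - lo) * (max_size - min_size) // span
            for term, count in facet_counts.items()}

# counts as they might come back from facet.field=tweet_text
print(tag_cloud_weights({"solr": 120, "lucene": 60, "search": 30}))
```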
Erik
On Oct 24, 2011, at 13:14 , Rohit wrote:
> I have saved tweets rel
I have saved tweets related to some keywords in solr, can Solr be used to
generate the tag cloud of important words from these tweets?
Regards,
Rohit
I am currently running into the exact same exception, but I'm not using
Maven. What are my options to fix the issue?
--
View this message in context:
http://lucene.472066.n3.nabble.com/java-lang-NoSuchMethodError-org-slf4j-spi-LocationAwareLogger-log-tp3435001p3447968.html
Sent from the Solr - User mailing list archive at Nabble.com.
Thanks Koji. I found it; I should find the solution there.
Xue-Feng
From: Koji Sekiguchi
To: solr-user@lucene.apache.org
Sent: Monday, October 24, 2011 7:30:01 AM
Subject: Re: help needed on solr-uima integration
(11/10/24 17:42), Xue-Feng Yang wrote:
> Hi,
>
> Whe
On Oct 24, 2011, at 1:41pm, jame vaalet wrote:
> hi,
> in my use case i have list of key value pairs in each document object, if i
> index them as separate index fields then in the result doc object i will get
> two arrays corresponding to my keys and values. The problem i face here is
> that the
Hi Jame,
Preserve order in index fields:
If you don't want to use phrase queries on keys or values, this order is the
token "position".
If you use phrase queries but no value has more than 50 tokens, you could also
use position and start each pair at position 100, 200, 300 ...
Otherwise you could use paylo
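At the schema level, the gap idea can be sketched with positionIncrementGap, which inserts the per-pair gap automatically when each pair is indexed as one value of a multivalued field (the field type name and tokenizer here are assumptions):

```xml
<!-- gap of 100 positions between successive values of a multivalued field -->
<fieldType name="kv_text" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
  </analyzer>
</fieldType>
```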
Thanks Karsten.
Can we preserve order within an index field? If yes, I can index them
separately and map them using their order.
On 24 October 2011 17:32, wrote:
> Hi Jame,
>
> you can
> - generate one token for each pair (key, value) --> key_value
> - insert a gap between each pair and use phras
Hi Jame,
you can
- generate one token for each pair (key, value) --> key_value
- insert a gap between each pair and use phrase queries
- use key as field-name (if you have a restricted set of keys)
- wait for joins in Solr 4.0 (http://wiki.apache.org/solr/Join)
- use position or payloads to co
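The first option (one token per pair) can be sketched like this; the pair data is invented, and the underscore is just one choice of glue character:

```python
# Collapse each (key, value) pair into a single token so that a search
# for key_value only matches when both occur in the same pair.
pairs = [("color", "red"), ("size", "xl")]
tokens = ["%s_%s" % (k, v) for k, v in pairs]
print(tokens)
```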
Hi,
In my use case I have a list of key-value pairs in each document object. If I
index them as separate index fields, then in the result doc object I will get
two arrays corresponding to my keys and values. The problem I face here is
that there won't be any mapping between those keys and values.
do w
(11/10/24 17:42), Xue-Feng Yang wrote:
Hi,
Where can I find test code for solr-uima component?
You should find them under:
solr/contrib/uima/src/test
koji
--
Check out "Query Log Visualizer" for Apache Solr
http://www.rondhuit-demo.com/loganalyzer/loganalyzer.html
http://www.rondhuit.com/en/
Hi Erick,
You're right, I think. On resources we gain a little bit on:
disk (a production implementation with live data would be 500 mb saved
in disk usage on each slave and master)
some reduction in network traffic on replication (we do a full
re-index every 24 hours at present)
On design we gain a
Hi Radha Krishna,
try command "full-import" instead of "fullimport"
see
http://wiki.apache.org/solr/DataImportHandler#Commands
Best regards
Karsten
Original message
> Date: Mon, 24 Oct 2011 11:10:22 +0530
> From: Radha Krishna Reddy
> To: solr-user@lucene.apache.org
> Subject:
Ok, I'll surely check out what I can!
Thanks for quick response
I am working on a Windows machine and I also need to post text, zip, PDFs,
images, etc. It would be great if you could help me out with multiple file
types on Windows.
Thanks
Jagdish
Date: Mon, 24 Oct 2011 09:30:49 +0200
Subject: Re: Using CURL to index directory
From:
Hi,
Where can I find test code for solr-uima component?
Thanks,
Xue-Feng
From: Xue-Feng Yang
To: "solr-user@lucene.apache.org"
Sent: Sunday, October 23, 2011 3:43:58 AM
Subject: help needed on solr-uima integration
Hi,
After googling online, some parts in th
Don't get too excited; I don't know what state that patch is in. It's on my
long TODO list to go back and look some more. If you want to work on it and
bring it up to snuff, please feel free to do so and submit a modernized patch!
Erick
On Mon, Oct 24, 2011 at 9:44 AM, samuele.mattiuzz
Thanks Erik! I'll be reading that issue; it's pretty much everything I need!
Hey guys,
Your responses are welcome, but I still haven't seen much improvement.
*Are you posting through HTTP/SOLRJ?*
I am using the RSolr gem, which internally uses a Ruby HTTP lib to POST
documents to Solr
*Your script time 'T' includes time between sending POST request -to-
the response fetch
Hi,
Try the attached post-text.sh file.
It was not written by me; it's part of a great tutorial written by Avi
Rappoport that you can find at:
http://www.lucidimagination.com/devzone/technical-articles/whitepapers/indexing-text-and-html-files-solr
Regards,
On Mon, Oct 24, 2011 at 9:13 AM, Jagdis
Thanks for the reminder - I had that set to 214xxx... (the max), but
performance was terrible when I injected large files.
So what's the max recommended field size in kb? I can try chopping up the
syslogs into arbitrarily small pieces, but would love to know where to start.
Thanks!
Sent from my iPhone
Hi
I have been using curl to index individual files; does any of you know how to
index an entire directory using curl?
Thanks
Jagdish
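curl itself has no directory mode, so the usual workaround is to loop over the files and post each one; here is a sketch that just builds the commands (the server URL is an assumption, and Solr Cell's /update/extract handler is assumed for the rich file types):

```python
import os

SOLR = "http://localhost:8983/solr"  # assumed server location

def curl_commands(filenames):
    """Build one curl invocation per file (Solr Cell extracts the
    content), plus a final commit."""
    cmds = ['curl "%s/update/extract?literal.id=%s" -F "file=@%s"'
            % (SOLR, os.path.basename(f), f) for f in filenames]
    cmds.append('curl "%s/update?commit=true"' % SOLR)
    return cmds

for cmd in curl_commands(["docs/a.pdf", "docs/b.zip"]):
    print(cmd)
```

On Windows the same loop could live in a batch file or PowerShell script rather than a shell for-loop.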