What Erick is saying is that the facet.query seen by Solr is
price_min:[*+TO+1300]
rather than
price_min:[* TO 1300]
Having done this sort of thing myself, my guess is that you're probably
applying one more urlencode operation than you should (on the facet.query
value).
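For illustration, a minimal sketch of letting the HTTP client do the encoding
exactly once, e.g. with curl (the host, port, and collection name here are
assumptions, not from the thread):

curl "http://localhost:8983/solr/collection1/select" \
  --data-urlencode "q=*:*" \
  --data-urlencode "sort=price_min asc,update_date desc" \
  --data-urlencode "facet=true" \
  --data-urlencode "facet.query=price_min:[* TO 1300]" \
  --data-urlencode "wt=json"

--data-urlencode encodes each value once, so the spaces in the range arrive
as real spaces rather than pre-encoded pluses.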
On Fri, Jan 17, 2014 at
I have no idea Mr. Eric :(
thanks,
Rachun
Hmmm, it looks like you have already encoded the spaces as '+' (as
Raymond pointed out). Then they're further encoded as %2B, which sends
a literal plus through, and that's invalid.
Best,
Erick
On Thu, Jan 16, 2014 at 9:51 PM, rachun wrote:
> this is what the log says:
> INFO - 2014-01-17 09:50:14.4
this is what the log says:
INFO - 2014-01-17 09:50:14.448; org.apache.solr.core.SolrCore;
[collection1] webapp=/solr path=/select
params={start=0&q=???%26sort%3Dprice_min+asc,update_date+desc%26facet.query%3Dprice_min:[*%2BTO%2B1300]&json.nl=map&wt=json&rows=100}
status=400 QTime=2
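Decoding those params (my own reading of the log line above): %26 is '&',
%3D is '=' and %2B is '+', so everything after 'q=' arrived as a single q
parameter:

q = ???&sort=price_min asc,update_date desc&facet.query=price_min:[*+TO+1300]

In other words, sort and facet.query never reached Solr as separate
parameters, and the range still contains literal plus signs; hence the 400.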
could you plea
Thank you Jorge. We looked at phrase suggestions from previous user
queries, but they're not so useful in our case. However, I have a follow-up
question about similar functionality that I'll post shortly.
The list might like to know that I've come up with a quick and exceedingly
dirty hack solutio
Hi, Will,
Have you investigated not using EBS volumes at all? I'm not sure what node
size you're using, but for example, you can build a RAID 0 out of the four
instance volumes on an m1.xlarge and get lots of disk bandwidth. Also,
there are some nice SSD instances available now. http://www.ec2instan
We currently have a SolrCloud cluster that contains two collections which we
toggle between for querying and indexing. When bulk indexing to our “offline”
collection, our query performance from the “online” collection suffers
somewhat. When segment merges occur, it gets downright abysmal. We hav
Hello Wiki admin,
I would like to add some valuable links. Can you please add me? My user name is
Baruch Labunski
Thank You,
Baruch!
Okay. I had used that previously and I just tried it again. The following
generated no errors:
bin/nutch solrindex http://localhost/solr/ crawl/crawldb -linkdb crawl/linkdb
-dir crawl/segments/
Solr is still not getting an anchor field and the outlinks are not appearing in
the index anywhere e
cool, np.
Thanks,
Kranti K. Parisa
http://www.linkedin.com/in/krantiparisa
On Thu, Jan 16, 2014 at 11:30 AM, heaven wrote:
> Nvm, figured it out.
>
> To match profiles that have "test entry" in own attributes or in related
> rss
> entries it is possible to use ({!join from=profile_ids_im to=i
Nvm, figured it out.
To match profiles that have "test entry" in own attributes or in related rss
entries it is possible to use ({!join from=profile_ids_im to=id_i
v=$rssQuery}Test entry) OR Test entry in the "q" parameter, not in "fq".
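For what it's worth, a sketch of how that request might look with curl (the
endpoint is an assumption, and I'm guessing rssQuery holds something like
type:RssEntry, as in the earlier message):

curl "http://localhost:8983/solr/collection1/select" \
  --data-urlencode 'q=({!join from=profile_ids_im to=id_i v=$rssQuery}Test entry) OR Test entry' \
  --data-urlencode 'rssQuery=type:RssEntry' \
  --data-urlencode 'wt=json'

The single quotes keep the shell from expanding $rssQuery, so Solr receives
the parameter reference literally.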
Thanks again for the help,
Alex
Usage: SolrIndexer <solr url> <crawldb> [-linkdb <linkdb>] [-params
k1=v1&k2=v2...] (<segment> ... | -dir <segments>) [-noCommit] [-deleteGone]
[-deleteRobotsNoIndex] [-deleteSkippedByIndexingFilter] [-filter] [-normalize]
You must point to the linkdb via the -linkdb parameter.
-Original message-
> From:Teague James
> Sent: Thur
Hi, thanks for the response. It seems I've almost figured things out.
Since both Profiles and RssEntries are in the same index (same core), it is
possible to either use `v=` param or specify `type:RssEntry` right after the
closing `}`. Both will work:
{!join from=profile_ids_im to=id_i}type:RssEntry
or
{!
Hello,
I can't say anything from this thread dump, but these stacks look really
suspicious:
java.lang.Thread.State: RUNNABLE
        at java.util.WeakHashMap.get(WeakHashMap.java:355)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:347)
java.lang.Thread.State: RUN
Okay. I changed my solrindex to this:
bin/nutch solrindex http://localhost/solr/ crawl/crawldb crawl/linkdb
crawl/segments/20140115143147
I got the same errors:
Indexer: org.apache.hadoop.mapred.InvalidInputException: Input path does not
exist: file:/.../crawl/linkdb/crawl_fetch
Input path does
To start with, you have "+"-coded spaces in the range part, but the sort
parameter has an unencoded space character.
Not sure if this is the reason that it fails, but it is certainly a reason to
look closer at how you encode your queries...
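Encoded consistently (spaces as '+' everywhere, brackets escaped, and no
pre-encoded '+' inside the range), the URL would look something like this
(host and collection assumed):

http://localhost:8983/solr/collection1/select?q=*:*&sort=price_min+asc,update_date+desc&facet=true&facet.query=price_min:%5B*+TO+1300%5D&wt=json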
On 16 Jan 2014, at 12:29 , rachun wrote:
> Hi Gurus
What do you get in your Solr logs? Because this looks reasonable.
My guess is that the URL is incorrect, but it's only a guess. Tail -f
the log, submit this, and you should see something interesting.
Best,
Erick
On Thu, Jan 16, 2014 at 6:29 AM, rachun wrote:
> Hi Gurus,
>
> Please help...
> I just
Hi - you cannot use wildcards for segments. You need to give one segment or a
-dir segments_dir. Check the usage of your indexer command.
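Per the usage message quoted elsewhere in this thread, either of these forms
should work (paths taken from the earlier messages):

bin/nutch solrindex http://localhost/solr/ crawl/crawldb -linkdb crawl/linkdb crawl/segments/20140115143147
bin/nutch solrindex http://localhost/solr/ crawl/crawldb -linkdb crawl/linkdb -dir crawl/segments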
-Original message-
> From:Teague James
> Sent: Thursday 16th January 2014 16:43
> To: solr-user@lucene.apache.org
> Subject: RE: Indexing URLs from
Hello Markus,
I do get a linkdb folder in the crawl folder that gets created - but it is
created at the time that I execute the command automatically by Nutch. I just
tried to use solrindex against yesterday's crawl and did not get any errors, but
did not get the anchor field or any of the outli
Reload the core. See: http://wiki.apache.org/solr/CoreAdmin#RELOAD
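For example (the port is an assumption; core_name is the placeholder from
the question):

curl "http://localhost:8983/solr/admin/cores?action=RELOAD&core=core_name"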
Best,
Erick
On Thu, Jan 16, 2014 at 9:48 AM, ranaivomahaleo
wrote:
> I installed SOLR 4.1 in standalone and I launched it under
> /solr_root_folder/example folder using the command: java -jar start.jar
>
> I'd like to update the
When the JVM is out of memory, you get OOM exceptions; that's a different
thing from the operating system running low on memory.
I'd guess that you're not actually in the same environment on both
machines. The Solr admin page will tell you how much memory Solr
_thinks_ it has allocated to the JVM, it's worth checking just
At a glance, this looks like you're seeing autowarming; is there any
process that could be indexing documents when you see this?
Here's a quick test... set all your autowarm counts in solrconfig to 0
and if the problem goes away that's a smoking gun.
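For reference, a sketch of what the relevant solrconfig.xml entries might
look like with autowarming disabled (the cache classes and sizes here are
illustrative defaults, not taken from your config):

<filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="0"/>
<queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>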
Or look through your Solr logs around the time
I installed SOLR 4.1 in standalone and I launched it under
/solr_root_folder/example folder using the command: java -jar start.jar
I'd like to update the configuration file related to a particular core (here
for instance core_name), which is solrconfig.xml (under
/solr_root_folder/example/solr/cor
Hi Shawn,
Thanks for the helpful and thorough response. While I understand all of the
factors that you've outlined for memory requirements (in fact, I'd
previously read your page on Solr performance problems), it is baffling to
me why two identical SolrCloud instances, each sharded across 3 machi
In a custom application we have, we use a separate core (under Solr 3.6.1) to
store the queries used by the users and then provide the autocomplete feature.
In our case we need to filter out some phrases that we don't want suggested
to the users. I built a custom UpdateRequestProcessor to i
I think that https://issues.apache.org/jira/browse/SOLR-5623 should be
ready to go. Would someone please commit from the PR? If there's a
preference, I can attach a patch as well.
On Fri, Jan 10, 2014 at 1:37 PM, Benson Margulies wrote:
> Thanks, that's the recipe that I need.
>
> On Fri, Jan 10,
Plus, the admin analysis page nicely displays the intermediate tokens produced
by each component. A very nice feature, I think. If you plug in a Lucene
analyzer directly, you won't be able to see the intermediate results.
Ahmet
On Thursday, January 16, 2014 5:59 AM, Otis Gospodnetic
wrote:
But the latter gives users
I know it's "on the roadmap", but it's always a resource problem...
Any help appreciated, of course
Best,
Erick
On Wed, Jan 15, 2014 at 10:57 PM, Otis Gospodnetic
wrote:
> Hi,
>
> I think this is a known issue and I don't know of anyone working on
> changing this.
>
> Otis
> --
> Performanc
Hi Gurus,
Please help...
I just want to query documents within a price_min range, and I do it like this
q=...&sort=price_min asc,update_date desc&facet.query=price_min:[*+TO+1300]
I got an error:
'400' Status: Bad Request
what's wrong with this?
Thank you very much.
-Original message-
> From:Teague James
> Sent: Wednesday 15th January 2014 22:01
> To: solr-user@lucene.apache.org
> Subject: Re: Indexing URLs from websites
>
> I am still unsuccessful in getting this to work. My expectation is that the
> index-anchor plugin should produce values for t
If you need a framework to build your enhancement pipeline on, I think
Apache UIMA [1] is good, as it's also able to store annotated documents in
Lucene and Solr, so it may be a good fit for your needs. Just consider that
you have to learn how to use / develop on top of it; it's not a big deal
but n
Hi,
You can have a look at OpenNLP.
http://opennlp.apache.org/
Thanks,
Parnab
On Thu, Jan 16, 2014 at 1:12 PM, Philippe de Rochambeau wrote:
> Hello,
>
> can anyone suggest alternatives to GATE (http://gate.ac.uk/download/)? I
> would like to index place and person names in PDFs using gazett
On 16/01/2014 07:42, Philippe de Rochambeau wrote:
Hello,
can anyone suggest alternatives to GATE
(http://gate.ac.uk/download/)? I would like to index place and person
names in PDFs using gazetteers (i.e., dictionaries) and normalize dates
(e.g., December 1st, 2001 will be indexed as 20011201) and
Can someone help me out with my earlier query?
In short:
Can we change the QueryParser.jj file to identify the SpanNot query as a
boolean clause?
Can we use the ComplexPhraseQuery parser to support SpanOr and SpanNot queries?
On Tue, Oct 15, 2013 at 11:27 PM, Ankit Kumar wrote:
> *I have a business use ca