On 3/20/2014 12:55 PM, solr2020 wrote:
Thanks Shawn. When we run any SolrJ application, the message below is
displayed:
org.apache.solr.client.solrj.impl.HttpClientUtil createClient
INFO: Creating new http client,
config:maxConnections=128&maxConnectionsPerHost=32&followRedirects=false
Those
Hi,
If you are not on windows, you can try to disable the tracking of clones in
the MMapDirectory by setting unmap to false in your solrconfig.xml:
<directoryFactory name="DirectoryFactory"
                  class="solr.MMapDirectoryFactory">
  <bool name="unmap">false</bool>
</directoryFactory>
The MMapDirectory
Hey Group,
I am trying to use SOLR with TYPO3.
It works so far. But I get a ?sword_list[]=endometrial&no_cache=1 appended to
the end of each link, causing the linking not to work. How do I remove
that? Do I have to configure this within RealUrl?
Thanks for your help.
On 21 March 2014 13:54, Bernhard Prange m...@bernhard-prange.de wrote:
[...]
Hi,
Erick, I do not get your point. What kind of servlet container settings do
you mean, and why do you think they might be related? I'm using Jetty and
never set any limit on packet size. My query fails only when it contains
double quotes and a space between words. Why? It works in other cases
I found a good page explaining the debug output, but it is still unclear to me
why the field plain_text is not worth anything when the query term was found 3
times.
you can see it here: http://explain.solr.pl/explains/a90aze3o
ao...@hispeed.ch wrote:
I want the info simplified so that
Are you sure that SOLR is rounding incorrectly, and not simply differently
from what you expect? I was surprised myself at some of the rounding
behaviour I saw with SOLR, but according to
http://en.wikipedia.org/wiki/Rounding , the results were valid (just not
the round-up-from-half that I naively
For now I am going with 64KB and results seem good. Thanks for the useful
feedback.
On Wed, Mar 19, 2014 at 9:30 PM, Shawn Heisey s...@elyograg.org wrote:
On 3/19/2014 12:09 AM, Salman Akram wrote:
Thanks for the info. The articles were really useful, but it still seems I
have
to do my own
That looks right. Have you mistakenly added the synonym filter on the
indexing side as well? You can use the solr admin analysis page (maybe at
http://localhost:8983/solr/#/collection1/analysis) to debug.
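For reference, a minimal sketch of a field type with the synonym filter applied
on the query side only (the field-type name, tokenizer choice, and synonyms.txt
file are assumptions for illustration, not from this thread):

```xml
<fieldType name="text_syn" class="solr.TextField" positionIncrementGap="100">
  <!-- index side: no synonym expansion -->
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <!-- query side: synonyms expanded at query time only -->
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
            ignoreCase="true" expand="true"/>
  </analyzer>
</fieldType>
```

If the synonym filter also appears under the index-side analyzer, the admin
analysis page will show the expansion happening twice.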
Niki
On 21 March 2014 00:03, bbi123 bbar...@gmail.com wrote:
I need some
I see properties to enable term vectors, positions and offsets, but didn't
find one for payloads? Did I just miss it? If not, is this something
that may be added in the future?
Thanks
Broken pipe errors are generally caused by unexpected disconnections and are
sometimes hard to track down. Given the stack traces you've provided, it's hard
to point to any one thing and I suspect the relevant information was snipped
out in the long dump of document fields. You might grab the
I've seen this on 4.6.
Thanks,
Greg
On Mar 20, 2014, at 11:58 PM, Shalin Shekhar Mangar shalinman...@gmail.com
wrote:
That's not right. Which Solr versions are you on (question for both
William and Chris)?
On Fri, Mar 21, 2014 at 8:07 AM, William Bell billnb...@gmail.com wrote:
Yeah.
My index is 20 GB and I have issued the solr backup command. The backup is
in progress and taking too much time, so how can I stop it?
Recently fixed in Lucene - should be able to find the issue if you dig a little.
--
Mark Miller
about.me/markrmiller
On March 21, 2014 at 10:25:56 AM, Greg Walters (greg.walt...@answers.com) wrote:
[...]
I just managed to track this down -- as you said the disconnect was a
red herring.
Ultimately the problem was caused by a custom analysis component we
wrote that was raising an IOException -- it was missing some
configuration files it relies on.
What might be interesting for solr devs to
Daniel, I had a similar issue. The option is not in the normal FieldType,
so I had to create my own FieldType in a Solr plugin that enabled payloads
in term vectors. This mostly involved extending TextField, copy-pasting
createField from FieldType (TextField's parent class), and adding the one
line
Doug, thanks for the quick reply!
I have a separate question:
The FieldType I was using when I ran into this happened to be a
PreAnalyzedField. I tried to specify an alternate parser to be used
instead of the default JsonPreAnalyzedParser. To do this, I added the
parameter parserImpl to my
You may try this
(({!join from=inner_id to=outer_id fromIndex=othercore v=$joinQuery}
And pass another parameter joinQuery=(city:Stara Zagora AND prod:214)
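As a rough illustration, here is how those two parameters might be assembled
into a request query string (the urlencode approach and the quoting of the
city phrase are my additions, not from the thread; quoting keeps the phrase a
single value):

```python
# Sketch: build the Solr query string for the join plus its referenced
# parameter. The phrase is quoted here as an assumption to keep it intact.
from urllib.parse import urlencode

params = {
    "q": "{!join from=inner_id to=outer_id fromIndex=othercore v=$joinQuery}",
    "joinQuery": 'city:"Stara Zagora" AND prod:214',
}
query_string = urlencode(params)
print(query_string)
```

The resulting string would be appended to a select URL such as
http://localhost:8983/solr/collection1/select? (endpoint is an assumption).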
Thanks,
Kranti K. Parisa
http://www.linkedin.com/in/krantiparisa
On Fri, Mar 21, 2014 at 4:47 AM, Marcin Rzewucki
Hi,
just started to move my SolrJ queries over to our SolrCloud environment and I
want to know how to do a query where you combine multiple indexes.
Previously I had a string called shards which links all the indexes together
and adds them to the query.
String shards =
Hi,
got a nice talk on IRC about this. The right thing to do is to start with a
clean SOLR cluster (no cores) and then create all the proper collections
with the Collections API.
Ugo
On Thu, Mar 20, 2014 at 7:26 PM, Jeff Wartes jwar...@whitepages.com wrote:
Please note that although the
I am trying to write a POC about indexing URL's with Solr using solrJ and
solrCell. (The code is written in groovy).
The relevant code is here
ContentStreamUpdateRequest req = new
ContentStreamUpdateRequest("/update/extract");
req.setParam("literal.id", p.id.toString())
I've never tried indexing via groovy or using solrCell but I think you might be
working a bit too low level in solrj if you're just adding documents. You might
try checking out https://wiki.apache.org/solr/Solrj#Adding_Data_to_Solr and I
might be way off base :)
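As a hedged illustration of the "just add documents" route (separate from
SolrJ itself): Solr's /update handler also accepts a JSON array of documents,
so a document can be built and serialized like this (the field names "id" and
"title" are assumptions for the example):

```python
# Sketch: the JSON body one would POST to /update with
# Content-Type: application/json. Field names are assumptions.
import json

doc = {"id": "doc-1", "title": "Example page fetched from a URL"}
payload = json.dumps([doc])  # /update accepts a JSON array of documents
print(payload)
```

In SolrJ the equivalent is building a SolrInputDocument and calling add(), as
shown on the wiki page linked above.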
Thanks,
Greg
On Mar 21, 2014,
Hi,
I have a two shard collection running and I'm getting this error on each
query:
2014-03-21 17:08:42,018 [qtp-75] ERROR
org.apache.solr.servlet.SolrDispatchFilter -
*null:java.lang.IllegalArgumentException:
numHits must be > 0; please use TotalHitCountCollector if you just need the
total hit
The extractOnly option simply tells you what the raw metadata is, while
normal non-extractOnly mode indexes the metadata exactly as you have requested
it to be indexed. You haven't shown us any of your parameters that describe
how you want the metadata indexed. If you didn't specify any
Hi Chris,
Thanks for the link to Patrick's github (looks like some good stuff in there).
One thing to try (and this isn't the final word on this, but is helpful) is to
go into the tree view in the Cloud panel and find out which node is hosting the
Overseer (/overseer_elect/leader). When
I suspect that this is a bug in the implementation of the parsing of
embedded nested query parsers. That's a fairly new feature compared to
non-embedded nested query parsers - maybe Yonik could shed some light. This
may date from when he made a copy of the Lucene query parser for Solr and
On March 21, 2014 at 1:46:13 PM, Tim Potter (tim.pot...@lucidworks.com) wrote:
We've seen instances where you end up restarting the overseer node each time
you restart the cluster, which causes all kinds of craziness.
That would be a great test to add to the suite.
--
Mark Miller
Correct. This is only a limitation of embedding a local-params style
subquery within lucene syntax.
The parser, not knowing the syntax of the embedded query, currently
assumes the query text ends at whitespace or other special punctuation
such as ).
Original:
(({!join from=inner_id to=outer_id
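To make the limitation concrete, a sketch of the failing versus working forms
(parameter values are taken from earlier in the thread; the quoting of the
phrase is an assumption):

```
# Fails: the inline subquery text is assumed to end at the first space,
# so "Zagora" falls outside the embedded query
q=({!join from=inner_id to=outer_id fromIndex=othercore}city:Stara Zagora)

# Works: the subquery lives in its own parameter via v=$joinQuery,
# so its whitespace never reaches the outer lucene parser
q=({!join from=inner_id to=outer_id fromIndex=othercore v=$joinQuery})
joinQuery=city:"Stara Zagora" AND prod:214
```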
Thanks Tim. I would definitely try that next time. I have seen a few
instances where the overseer_queue was not getting processed, but that looks
like an existing bug which got fixed in 4.6 (overseer doesn't process
requests when reload collection fails).
One question: Assuming our cluster can tolerate
My example should also work, am I missing something?
q=({!join from=inner_id to=outer_id fromIndex=othercore
v=$joinQuery})&joinQuery=(city:Stara Zagora AND prod:214)
Thanks,
Kranti K. Parisa
http://www.linkedin.com/in/krantiparisa
On Fri, Mar 21, 2014 at 2:11 PM, Yonik Seeley
Sorry for the piecemeal approach, but I had another question. I have a 3-zk
ensemble. Does making 2 of the zks observers help speed up boot-up of solr
(due to a decrease in the time it takes to elect leaders for shards)?
On Fri, Mar 21, 2014 at 11:49 AM, Chris W chris1980@gmail.com wrote:
Thanks
What is the default value for the required attribute of a field element in a
schema? I've just looked everywhere I can think of in the wiki, the reference
manual, and the JavaDoc. Most of the documentation doesn't even mention that
attribute.
Once we answer this, it should be added to the
Good afternoon,
I'm using solr 4.0 Final.
I have an IBM atom feed I'm trying to index but it won't work.
There are no errors in the log.
All the other DIH I've created consumed RSS 2.0
Does it NOT work with an atom feed?
here's my configuration:
<?xml version="1.0" encoding="UTF-8" ?>
<dataConfig>
Hi Russell;
You say that:
| CloudSolrServer server = new CloudSolrServer("solrServer1:
2111,solrServer2:2111,solrServer2:2111");
but I should mention that those are not Solr servers being passed into a
CloudSolrServer. They are ZooKeeper host:port pairs, optionally including a
chroot parameter
false
alexei martchenko
Facebook http://www.facebook.com/alexeiramone |
LinkedIn http://br.linkedin.com/in/alexeimartchenko |
Steam http://steamcommunity.com/id/alexeiramone/ |
4sq https://pt.foursquare.com/alexeiramone | Skype: alexeiramone |
Github https://github.com/alexeiramone | (11) 9
On 22 March 2014 02:55, eShard zim...@yahoo.com wrote:
[...]
Atom is