Hi, All.
I update some fields via SolrJ atomic updates, but in a
particular case an error occurred.
When I try to set the value "2017-01-01" on a date field
via a SolrJ atomic update, the following error message appears:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from
server a
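One likely cause, independent of the atomic-update mechanics: Solr date fields require a full ISO-8601 instant, so a bare "2017-01-01" is rejected at parse time. A quick self-contained check of the two formats (plain java.time, no SolrJ involved — the field value shown is from the report above):

```java
import java.time.Instant;
import java.time.format.DateTimeParseException;

public class SolrDateFormatCheck {
    public static void main(String[] args) {
        // A bare date has no time component and is not a valid ISO-8601 instant.
        try {
            Instant.parse("2017-01-01");
            System.out.println("accepted: 2017-01-01");
        } catch (DateTimeParseException e) {
            System.out.println("rejected: 2017-01-01");
        }
        // The fully qualified form is what Solr date fields expect.
        System.out.println("accepted: " + Instant.parse("2017-01-01T00:00:00Z"));
    }
}
```

If the fully qualified form still fails, the problem is elsewhere in the atomic update itself.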
WARNING: I've never really tried this, but I don't see why it wouldn't work.
What does the full query look like?
suggest.q=*:*&suggest.cfq=[2017-05-15T0:0:0.0Z TO *] maybe? And I'm
assuming that the contextField here is a tdate type and you're using
one of the Document*Dictionary implementations f
Hi Joel,
Regarding the implementation, I am wrapping the topmost TupleStream in a
ParallelStream and executing it on the worker cluster (one of the joined
clusters doubles up as the worker cluster). ParallelStream does submit the
query to the /stream handler.
For #2, e.g., I am creating 2 CloudSolrStreams
I'd like to use a date (TrieDate) for the contextField in my SuggestComponent
with an AnalyzingInfixLookupFactory. Basically I am trying to narrow my
suggestions by a relevant date range, something like
suggest.cfq=[2017-05-15T0:0:0.0Z TO *]
Doesn't seem to work, so before trying further I wondered
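For reference, a suggester definition with a contextField looks roughly like the sketch below (suggester and field names here are illustrative, not from the original mail; whether a range query works against a tdate contextField is exactly the open question above):

```xml
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">mySuggester</str>
    <str name="lookupImpl">AnalyzingInfixLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <str name="field">title</str>
    <!-- the field the suggest.cfq filter is applied against -->
    <str name="contextField">publish_date</str>
    <str name="suggestAnalyzerFieldType">text_general</str>
  </lst>
</searchComponent>
```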
Hi, Damien Kamerman,
Thanks for your reply. The problem is when we add a parent document which
doesn't contain child info yet.
Later we will add the same parent document with child documents.
But this would cause 2 parent documents with the same id in the Solr index.
I work around this issue by
Does this fl help?
fl=*,[child childFilter="docType:child" parentFilter=docType:parent]
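For context, that [child] transformer presupposes the parent and its children were indexed together as one nested block; a sketch of the assumed document shape in a JSON update body (field names follow the childFilter/parentFilter above):

```json
{
  "id": "parent-1",
  "docType": "parent",
  "_childDocuments_": [
    { "id": "child-1", "docType": "child" }
  ]
}
```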
On 14 May 2017 at 16:16, Jeffery Yuan wrote:
> Nested documents is quite useful to model structural hierarchy data.
>
> Sometimes, we only have parent document which doesn't have child documents
> yet, we wa
Most likely you're searching against your default field, often "text".
A frequent problem is that you enter a search like
q=content:University of Wisconsin
and the search is actually
q=content:university text:of text:wisconsin
Try your debug=query with the original maybe?
In fact, somehow you'
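A quick sanity check along those lines: either quote the phrase or group the terms so they all hit the intended field (field and terms here mirror the example above):

```
q=content:"University of Wisconsin"
q=content:(University of Wisconsin)
```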
Can you upload your schema to some site like Dropbox etc. so we can take a
look, and send the query which you are using that returns no results?
Thanks,
Susheel
On Mon, May 15, 2017 at 1:46 PM, Chip Calhoun wrote:
> I'm creating a new Solr core to crawl a local site. We have a page on
> "University of Wiscon
I'm creating a new Solr core to crawl a local site. We have a page on
"University of Wisconsin--Madison", but a search for that name in any form
won't appear within the first 10 results. The page is indexed, and I can search
for it by filename. Termfreq(title) shows 0s for search terms which are
Björn
Yes, at query time you could downcase the names. Not in Solr, but in the
front-end web app you have in front of Solr. It needs to be a bit smart, so it
can downcase the field names but not the query terms.
I assume you do not expose Solr directly to the web.
This downcasing might be easie
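A minimal sketch of that front-end rewriting, assuming field names appear as simple `name:` prefixes in the raw query string (a hypothetical helper; real queries with quoted strings or escaped colons would need a proper parser):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FieldNameDowncaser {
    private static final Pattern FIELD_PREFIX = Pattern.compile("\\b(\\w+):");

    // Lowercase only the field-name prefix, leaving the query terms untouched.
    static String lowercaseFieldNames(String q) {
        Matcher m = FIELD_PREFIX.matcher(q);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            m.appendReplacement(sb, m.group(1).toLowerCase() + ":");
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(lowercaseFieldNames("ID:1 AND Title:wisconsin"));
    }
}
```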
Hi,
I have a few doubts regarding the documentation at https://cwiki.apache.org/
confluence/display/solr/Making+and+Restoring+Backups for backing up the
indexes to a HDFS filesystem
1) How frequently are the indexes backed up?
2) Is there a possibility of data-loss if Solr crashes between two bac
So do you have _users_ directly entering Solr queries? And are they
totally trusted to be
1> not malicious
2> already know your schema?
Because direct access to the Solr URL allows me to delete all your
data. Usually there are drop-downs or other UI "stuff" that allows you
to programmatically assig
As you're using the extended dismax parser, it has an option for per-field
aliasing:
https://cwiki.apache.org/confluence/display/solr/The+Extended+DisMax+Query+Parser
You could include this in your solr requesthandler config, e.g.
id
Which would direct ID:1 to instead search id:1
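For reference, the per-field alias from that wiki page is set with an `f.<alias>.qf` parameter; a sketch of how it might look in a request handler's defaults (handler name is illustrative, the alias mirrors the ID/id example above):

```xml
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">edismax</str>
    <!-- Treat the alias "ID" as the real field "id" -->
    <str name="f.ID.qf">id</str>
  </lst>
</requestHandler>
```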
Geraint
I'd pick the one I was most comfortable with and just try it.
Best,
Erick
On Mon, May 15, 2017 at 6:26 AM, Mithu Tokder wrote:
> Hi,
> I have one question regarding stopping Solr instance.
> Solr is deployed in three machines(cluster deployment). I have configured
> STOP.PORT and STOP.KEY in sta
Hi,
I have one question regarding stopping Solr instance.
Solr is deployed in three machines(cluster deployment). I have configured
STOP.PORT and STOP.KEY in start script, accordingly configured STOP.PORT
and STOP.KEY in stop script.
There are three sets of start & stop scripts, one for each machine.
No
Sorry, Thomas. I am unable to download the Solr 6.5.1 version right now.
What I would suggest is to try creating the shard on two different machines /
running two Solr instances on different ports. There may be some issue with
this version.
Anyone insight with 6.5.1?
Thanks,
Susheel
On Mon, May 15, 2017 at
OK, please do report any issues you run into. This is quite a good bug
report.
I reviewed the code and I believe I see the problem. The problem seems to
be that output code from the /stream handler is not properly accounting for
client disconnects and closing the underlying stream. What I see in th
Good test. That tells us something. Let me also run the same on my end.
On Mon, May 15, 2017 at 9:33 AM, Thomas Porschberg
wrote:
> Hi,
>
> I get no error message and the shard is created when I use
> numShards=1
> in the url.
>
> http://localhost:8983/solr/admin/collections?action=
> CREATE&name=kar
Hi,
I get no error message and the shard is created when I use
numShards=1
in the url.
http://localhost:8983/solr/admin/collections?action=CREATE&name=karpfen&numShards=1&replicationFactor=1&maxShardsPerNode=1&collection.configName=karpfen
--> success
http://localhost:8983/solr/admin/collectio
Hi Rick,
thank you for your reply! I really meant field *names*, since our values are
already processed by a lower case filter (both index and query). However, our
users are confused because they can search for "id:1" but not for "ID:1".
Furthermore, we employ the EDisMax query parser, so then
What happens if you create just one shard? Just use this command directly
in the browser or through curl. Empty the contents of
/home/pberg/solr_new2/solr-6.5.1/server/data before running
http://localhost:8983/solr/admin/collections?action=CREATE&name=karpfen&numShards=1&replicationFactor=1&maxSha
1.
So I think it is a Spark problem first
(https://issues.apache.org/jira/browse/SPARK-10413). What we can do is to
create our own model (cf.
https://github.com/apache/lucene-solr/tree/master/solr/contrib/ltr/src/java/org/apache/solr/ltr/model)
that applies the prediction; it should be easy to do
Björn
Field names or values? I assume values. Your analysis chain in schema.xml
probably downcases chars, if not then that could be your problem.
Field _names_? Then you might have to copyField the field to a new field with
the desired case. Avoid doing that if you can. Cheers -- Rick
On May 15,
Hi all,
I'm fairly new at using Solr and I need to configure our instance to accept
field names in both uppercase and lowercase (they are defined as lowercase in
our configuration). Is there a simple way to achieve this?
Thanks in advance,
Björn
Björn Peemöller
IT & IT Operations
BERENBERG
Jo
On Sun, May 14, 2017 at 7:40 PM, Chris Troullis wrote:
> Hi,
>
> I've been experimenting with various sharding strategies with Solr cloud
> (6.5.1), and am seeing some odd behavior when using the implicit router. I
> am probably either doing something wrong or misinterpreting what I am
> seeing in