On Thu, 2017-05-25 at 15:56 -0700, Nawab Zada Asad Iqbal wrote:
> I have a 31-machine cluster with 3 shards on each (93 shards). Each
> machine has ~250 GB RAM and a 3 TB SSD for the search index (there is another
> drive for the OS and stuff). One Solr process runs for each shard with a
> 48G heap. So we have 3 l
Currently the data is processed by R, which pushes it to Solr using an R
package called Solrium. I'm running a webserver with Banana to visualize
the events.
R re-formats the event created time to "yyyy-MM-dd'T'HH:mm:ss'Z'". And if I
process the timezone to UTC (subtracting 8 hours in m
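For what it's worth, a small sketch of that conversion in Python rather than R (the UTC-8 offset is taken from the message above; the timestamp itself is illustrative). Hard-coding the offset ignores daylight saving, so a real timezone database is safer in practice:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical event time in a fixed UTC-8 zone (offset assumed from the post above)
local = datetime(2017, 5, 25, 15, 56, 0, tzinfo=timezone(timedelta(hours=-8)))

# Convert to UTC and render in Solr's canonical date format
solr_date = local.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
print(solr_date)  # 2017-05-25T23:56:00Z
```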
What are your autocommit settings? Are you waiting for that interval
to expire? Does the doc ever show up?
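For reference, a minimal sketch of what those settings look like in solrconfig.xml (the interval values here are illustrative, not a recommendation):

```xml
<!-- Hard commit: flushes to stable storage but does not open a new searcher -->
<autoCommit>
  <maxTime>15000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<!-- Soft commit: makes recently indexed docs visible to searches -->
<autoSoftCommit>
  <maxTime>60000</maxTime>
</autoSoftCommit>
```

With openSearcher=false on the hard commit, documents only become searchable when a soft commit (or an explicit commit) fires, which is why a doc can legitimately not show up until that interval expires.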
What leads you to suspect "timezone changes in 6x"? Solr tries to be
timezone-agnostic so this would be a surprise.
Best,
Erick
On Thu, May 25, 2017 at 8:26 PM, Adline Dsilva wrote:
> Hi A
Hi All,
I recently upgraded Solr from 5.5.x to 6.4.x and I'm facing a few issues with
date filtering. A few queries which work on 5.x fail to work on 6.x, and I
assume it's related to timezone changes in 6.x.
Currently I'm using solr to store & monitor Event data and incoming data is
"created
ZK 3.5 isn't officially released. It has been in alpha/beta for years. I wouldn't
use it in production.
The setup I proposed:
DC1: 3 nodes, all are non-observers.
DC2: 3 nodes, 2 are non-observers and 1 is an observer.
This means only 5 nodes participate in voting and 3 nodes make a quorum. If
DC1 goes do
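A sketch of what that layout might look like in zoo.cfg (hostnames and ports are placeholders):

```
server.1=dc1-zk1:2888:3888
server.2=dc1-zk2:2888:3888
server.3=dc1-zk3:2888:3888
server.4=dc2-zk1:2888:3888
server.5=dc2-zk2:2888:3888
server.6=dc2-zk3:2888:3888:observer
```

On the observer node itself you would also set `peerType=observer` in its own zoo.cfg, so it joins the ensemble without voting.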
Hi Toke,
I don't have any blog, but here is a high level idea:
I have a 31-machine cluster with 3 shards on each (93 shards). Each machine
has ~250 GB RAM and a 3 TB SSD for the search index (there is another drive for
the OS and stuff). One Solr process runs for each shard with a 48G heap. So we have
3 large fi
Hi,
I'm currently running 6.5.1 with a tiny index, less than 1MB.
When I restart another app on the same server as Solr, Solr occasionally
dies, but there is no solr_oom_killer.log file.
Heap size is 256 MB (~30 MB used); physical RAM is 2 GB, typically 1.5 GB used.
How else can I debug what's causing it?
Thanks for the tip Pushkar,
> A setup I have used in the past was to have an observer in DC2. If DC1 one
I was not aware that ZK 3.4 supports observers; I thought it was a 3.5 feature.
So do you set up followers only in DC1 (3x), and observers only in DC2 (3x), and
then point each Solr node to all 6
This is in regards to changing a field type from string to
text_en_splitting, re-indexing all documents, even optimizing to give the
index a chance to merge segments and rewrite itself entirely, and then
getting this error when running a phrase query:
java.lang.IllegalStateException: field "blah" w
> Hi - Again, hiring a simple VM at a third location without a Solr cloud
> sounds like the simplest solution. It keeps the quorum tight and sound. This
> simple solution is the one i would try first.
My only worry here is latency, and perhaps complexity, security and cost.
- Latency if you have
Hi All,
I am trying to integrate OpenNLP-UIMA with Solr. I have installed the PEAR
package generated by building the opennlp-uima source.
I have analyzed the text files using *CAS Visual Debugger* by loading the
respective AE and tokens are annotated as expected.
*Solrconfig:*
Ok, I could see the images in the "Approve this email" admin UI :-) You
still need to resend it for others.
But what I can see is that the problem seems to be with the quoted phrase
search. And my first thought was whether you typed those double
quotes (") yourself or copy/pasted them. Beca
The images do not come through on this list (most of the time).
Can you put them on some hosting and link to them. And/or show the actual
query 1 and query 2 as text, please.
Regards,
Alex.
http://www.solr-start.com/ - Resources for Solr users, new and experienced
On 25 May 2017 at 14:0
Hi team
We are facing a strange issue with Solr. When we query with a particular
field, the query is not returning any values.
Query 1: I am specifying an ISBN number and extracting all the values available
in the publisher field, which is working as expected.
[inline image: screenshot of query 1]
Or why not just index it all into the same core/collection and use
fq=date:[date1 TO date2] to restrict the searches?
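As a sketch of that suggestion (the collection name, field name, and endpoints are placeholders):

```
http://localhost:8983/solr/events/select?q=*:*&fq=date:[2017-04-01T00:00:00Z TO 2017-05-01T00:00:00Z]
```

A side benefit is that the fq is cached separately from the main query, so repeated searches over the same date window can reuse the cached filter.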
Best,
Erick
On Thu, May 25, 2017 at 8:18 AM, Susheel Kumar wrote:
> Didn't understand what exact use case you have and why you need two
> different index. Can't same monthly index
Can you use Google as grep? Not so much. That's because the use cases
are very different. grep is for linear search through unparsed content
(whitespace and all), and it primarily finds the matched string. There
is no ranking and no long-range matching.
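That said, Solr's standard query parser does support term-level regular expressions wrapped in forward slashes. A sketch of building such a query follows (host, core, and field names are assumptions); note the regex matches individual analyzed terms, not raw lines the way grep does:

```python
from urllib.parse import urlencode

# /gr.p/ is a term-level regex: it matches single indexed terms like "grep" or "grip"
params = {"q": "content:/gr.p/", "rows": "10"}
query_string = urlencode(params)  # URL-encodes the ':' and '/' characters
url = "http://localhost:8983/solr/mycore/select?" + query_string
print(url)
```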
Solr - similar to Google - is for processed/normal
Hi everyone,
Is there a way to set up Solr so the search commands I send it and the
searches it does are similar to the way "grep" works (i.e., regex)? If not,
how close can Solr be set up to mimic "grep"?
Thanks in advance.
Steve
I didn't understand what exact use case you have and why you need two
different indexes. Can't the same monthly indexing and daily indexing jobs
update the same index? Or you could create a different core / collection and
utilise collection aliasing to switch back and forth.
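A sketch of the aliasing approach via the Collections API (collection and alias names are placeholders):

```
http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=events&collections=events_2017_05
```

Clients always query the stable alias ("events" here); rolling over to a new monthly collection is then a single CREATEALIAS call, with no client-side changes.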
On Thu, May 25, 2017 at 3:13 AM, ankur.168
If the exception can't be reproduced each time for the same query, then I
think it does point to some intermittent network issue, possibly timeout
related.
Let's open a ticket for this and investigate what might be the cause.
Joel Bernstein
http://joelsolr.blogspot.com/
On Thu, May 25, 2017 at 9
You rock Shawn, thanks! Some follow up questions.
Using our existing Apache or AWS setup, could we prevent those complex/slow
denial-of-service queries?
Could we use the same setup to only allow our JavaScript Ajax calls direct
access, or is a light API layer required, and we then lock down Sol
Can you describe a little more about your setup, like how many shards, whether
there are shard replicas, how many ZooKeeper nodes, etc.? How often does this
happen, and do the Solr logs show any sign of errors/exceptions when it is happening?
How do you know it is creating a new index and deleting the old one? Do you
se
Nope, this has happened since 6.3.0 (when I started using CloudSolrStream); now
I'm using 6.5.1 code.
Normally this happens with streams of more than 4M documents.
Can it be related to the network? Is there any TTL in CloudSolrStream at the
connection level?
--
/Yago Riveiro
On 25 May 2017 13:14 +0
Thank you Shawn for the detailed explanation.
I've posted this also in SO (with examples):
https://stackoverflow.com/questions/44181274/solr-mapping-types-sharing-field-name-but-different-datatype
this will shed some light on your mystery :)
On Thu, May 25, 2017 at 4:01 PM, Shawn Heisey wrote:
Hi Shawn,
Yes, I couldn't use more than one field for the df parameter.
What I am worried about is the increase in the size of the index, and
affecting overall performance when using copyField, which is why we are not
very keen on using that.
When using dynamic fields for copyField, is it true t
Hi Rick,
Thanks for the advice.
Regards,
Edwin
On 25 May 2017 at 15:11, Rick Leir wrote:
> Edwin,
>
> Use copyfield to copy all fields to one field, which could be your default
> field. This is a common pattern in Solr installations.
>
> cheers -- Rick
>
> https://cwiki.apache.org/confluence/
On 5/25/2017 1:49 AM, Yarden Bar wrote:
> A question about collection (or mapping, I'm an ES user).
>
> I have a use-case where I index AWS CloudTrail logs. currently, in ES, I
> use the CT eventName as document type for dynamic mapping.
> There are several event types which use the same field name
On 5/25/2017 12:45 AM, Zheng Lin Edwin Yeo wrote:
> Would like to check, is it possible to set a configuration in
> solrconfig.xml whereby the search will go through all the fields in the
> collections?
>
> Currently, I am defining the fields to be search under the "df" setting,
> but unlike "fl",
>>> It is relatively easy to downgrade to an earlier release within the
>>> same major version. We have not switched to 6.5.1 simply because we
>>> have no pressing need for it - Solr 6.3 works well for us.
>
>> That strikes me as a little bit dangerous, unless your indexes are very
>> static. Th
I've never seen this error. Is this something you just started seeing
recently?
Joel Bernstein
http://joelsolr.blogspot.com/
On Thu, May 25, 2017 at 7:10 AM, Yago Riveiro
wrote:
> I have a process that uses the CloudSolrStream to run a streaming
> expression
> and I can see this exception fr
I have a process that uses the CloudSolrStream to run a streaming expression
and I can see this exception frequently:
Caused by: org.apache.http.TruncatedChunkException: Truncated chunk
(expected size: 32768; actual size: 1100)
        at org.apache.http.impl.io.ChunkedInputStream.read(ChunkedInp
Hi Yonik,
I like your work on Solr very much, and I'm hoping it can deliver what we are
looking to achieve here... and apologies for the direct approach, but I don't
have a choice; I've submitted the request below to the mailing list and I still
haven't had a reply... and part of me wondering it
We have Apache Solr 6.3.0.
Some of the nodes go into recovery mode randomly.
When I SSH to the node, I see that it is creating a new index and deleting
the existing one; once this is done, the node comes back online.
We have sharding, and each shard has two nodes, primary and secondary.
Can you please help us in iden
Hi all,
A question about collections (or mappings; I'm an ES user).
I have a use-case where I index AWS CloudTrail logs. Currently, in ES, I
use the CT eventName as the document type for dynamic mapping.
There are several event types which use the same field name but with
different datatypes.
Example: ev
David,
The articles by Yonik were written around the time that he and others
were developing the features. This one,
http://yonik.com/solr-nested-objects/, says 5.3 and later. Considering
that development and bugfixes spanned several versions, you would do
well to test your configuration with
Hi All,
I am using Solr 4.6, and I am running a Quartz job to trigger Solr indexing. I
have a requirement to maintain indexes in different locations for different
jobs. For example, I have a daily indexing job and a monthly indexing job, so I
want to maintain 2 different index locations for both. Is there a way we can ch
Edwin,
Use copyfield to copy all fields to one field, which could be your
default field. This is a common pattern in Solr installations.
cheers -- Rick
https://cwiki.apache.org/confluence/display/solr/Copying+Fields
https://drupal.stackexchange.com/questions/54538/search-api-solr-how-to-index
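A minimal sketch of that pattern in the schema (the field and type names are assumptions; check them against your own schema):

```xml
<!-- Catch-all field: indexed for search, not stored, multiValued since many
     source fields feed into it -->
<field name="_text_" type="text_general" indexed="true" stored="false" multiValued="true"/>
<copyField source="*" dest="_text_"/>
```

Then point the default field at it, e.g. `<str name="df">_text_</str>` in the request handler defaults in solrconfig.xml.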