Why would someone stay on 5.3.x instead of upgrading to 5.4? Why backport
when you can just upgrade?
On Tue, Dec 22, 2015 at 6:33 PM, Noble Paul wrote:
> A 5.3.2 release is coming up which will backport the fixes introduced in
> 5.4
> On Dec 17, 2015 10:25 PM, "tine-2" wrote:
>
> > Noble Paul
I agree that when using timeAllowed there should be an entry in the header
info that indicates timeAllowed was triggered.
This is the only reason why we have not used timeAllowed, so this is a
great suggestion. Something like: 1 ??
That would be great.
[XML response snippet with tags stripped by the archive: 0, 1, 107, *:*, 1000]
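For what it's worth, recent Solr versions do flag this: when timeAllowed expires before the query finishes, a partialResults entry appears in the responseHeader. A minimal client-side check, assuming the JSON response writer; the sample responses below are made up for illustration:

```python
import json

def hit_time_limit(response_text):
    """Return True if Solr marked the response as truncated by timeAllowed.

    Assumption: the Solr version in use sets "partialResults" in the
    responseHeader when timeAllowed expires (recent versions do)."""
    header = json.loads(response_text).get("responseHeader", {})
    return bool(header.get("partialResults", False))

# Made-up sample responses for illustration:
partial = '{"responseHeader":{"status":0,"QTime":1000,"partialResults":true}}'
complete = '{"responseHeader":{"status":0,"QTime":107}}'
print(hit_time_limit(partial))   # True
print(hit_time_limit(complete))  # False
```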
On Tue, Dec 22, 2015 at 6
Sematext.com has a service for this...
Or just curl "http://localhost:8983/solr//select?q=*:*" to see
if it returns ?
On Tue, Dec 22, 2015 at 12:15 PM, Tiwari, Shailendra <
shailendra.tiw...@macmillan.com> wrote:
> Hi,
>
> Last week our Solr Search was un-responsive and we need to re-boot the
>
Last week our Solr Search was un-responsive and we need to re-boot the
server, but we were able to find out after customer complained about it.
What's best way to monitor that search is working?
It may not be the best way, but you can write a class which keeps checking
the status of all the nodes o
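A minimal sketch of such a checker in Python, assuming the standard /admin/ping handler; the host and collection name are hypothetical. Run it from cron and alert when check() returns False:

```python
import json
from urllib.request import urlopen

# Hypothetical host and collection; adjust to your deployment.
PING_URL = "http://localhost:8983/solr/mycollection/admin/ping?wt=json"

def ping_ok(raw_json):
    """Parse a ping response body and report whether the core answered OK."""
    return json.loads(raw_json).get("status") == "OK"

def check(url=PING_URL, timeout=5):
    """Return True if the ping handler answers OK within `timeout` seconds."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return ping_ok(resp.read().decode("utf-8"))
    except OSError:
        return False  # connection refused, timed out, HTTP error, ...
```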
timeAllowed was designed to handle queries that by themselves consume lots
of resources, not to try to handle situations with large numbers of
requests that starve other requests from accessing CPU and I/O resources.
The usual technique for handling large numbers of requests is replication,
making
Yesterday I found there are some slow join operations in another collection
whose "from" index is the collection with many searchers opened.
Those slow join operations are auto-warmed when that collection is soft
committed. The auto-warm time is about 120s but the soft commit
interval is 30s. So th
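A mismatch like that (roughly 120s of warming against a 30s soft-commit interval) means each new searcher is requested before the previous one finishes warming. The two knobs involved live in solrconfig.xml; the values below are illustrative, not a recommendation:

```xml
<!-- Soft commits open a new searcher every 30s (30000 ms). -->
<autoSoftCommit>
  <maxTime>30000</maxTime>
</autoSoftCommit>

<!-- Lowering autowarmCount (here to 0) shortens warming so it can
     finish inside the soft-commit interval. -->
<filterCache class="solr.FastLRUCache" size="512"
             initialSize="512" autowarmCount="0"/>
```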
Hi,
I'm also facing the same issue you faced 2 months back: I'm able
to extract the image content if the images are in .jpg or .png format, but not
able to extract the images in PDFs, even after setting "extractInlineImages
true" in PDFParser.properties.
Have you managed to find alternativ
The HTTP parameter "stream" was recently changed to "expr" in SOLR-8443.
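For anyone hitting this after upgrading: only the parameter name changed, not the expression syntax. A sketch of the before/after request shapes (collection name and expression are placeholders):

```
# before SOLR-8443
curl --data-urlencode 'stream=search(mycoll,q="*:*",fl="id",sort="id asc")' \
  "http://localhost:8983/solr/mycoll/stream"

# after SOLR-8443
curl --data-urlencode 'expr=search(mycoll,q="*:*",fl="id",sort="id asc")' \
  "http://localhost:8983/solr/mycoll/stream"
```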
Joel Bernstein
http://joelsolr.blogspot.com/
On Tue, Dec 22, 2015 at 8:45 PM, Jason Gerlowski
wrote:
> I'll preface this email by saying that I wasn't sure which mailing list it
> belonged on. It might fit on the dev list
I'll preface this email by saying that I wasn't sure which mailing list it
belonged on. It might fit on the dev list (since it involves a potential
Solr bug), but maybe the solr-users list is a better choice (since I'm
probably just misusing Solr). I settled on the solr-users list. Sorry if
I ch
Well... I can write everything, but really all this is just to understand
when the timeAllowed
parameter triggers a partial answer? I mean, isn't anything set in the
response when it is partial?
On Wed, Dec 23, 2015 at 2:38 AM, Walter Underwood
wrote:
> We need to know a LOT more about your site. Num
Dear solr-user,
I use two shards of Solr 5.3.1; every node has 16G memory.
I use facet.pivot on four int fields, but I can't get any data; it's so
slow.
Then I used a single machine with 16G and gave it one shard; the index size
is the same as before.
I executed the same facet.pivot, and it's
We need to know a LOT more about your site. Number of documents, size of index,
frequency of updates, length of queries, approximate size of server (CPUs, RAM,
type of disk), version of Solr, version of Java, and features you are using
(faceting, highlighting, etc.).
After that, we’ll have more
A 5.3.2 release is coming up which will backport the fixes introduced in
5.4
On Dec 17, 2015 10:25 PM, "tine-2" wrote:
> Noble Paul നോബിള് नोब्ळ् wrote
> > It works as designed.
> >
> > Protect the read path [...]
>
> Works as described in 5.4.0, didn't work in 5.3.1, see
> https://issues.apa
Hi All,
my website is under pressure; there is a large number of concurrent searches.
When too many users are connected, the searches become so slow that in
some cases users have to wait many seconds.
The queue of searches becomes so long that, in some cases, servers are
blocked trying to serve
I'm happy to report that we are seeing significant speed-ups in our queries
with JSON facets on 5.4 vs regular facets on 5.1. Our queries contain mostly
terms facets, many of them with exclusion tags and prefix filtering.
Nice work!
Hi guys,
just went through this use case:
I have one field A with analysis_A (for example an edgeNGram tokenised
text).
Then I have one copy field copy_A with analysis_B (a simple text_general
would fit).
At this point I should be able to store the term vector for the fields at
my pleasure ( I
On 12/22/2015 6:46 AM, Bram Van Dam wrote:
> This indexing job has been running for about 5 days now, and is pretty
> much IO-bound. CPU usage is ~50%. The load average, on the other hand,
> has been 128 for 5 days straight. Which is high, but fine: the machine
> is responsive.
A load average of 1
Thanks, Jack, for the various points. A question: when you have hundreds of
fields from different sources and you also have a lot of copyField
instructions for facets, sorting, a catch-all field etc., you suffer some
performance hit during ingestion as many of the copy instructions would just
be executing but doin
Hi,
Last week our Solr Search was unresponsive and we needed to reboot the server,
but we only found out after a customer complained about it. What's the best
way to monitor that search is working?
We can always add Gomez alerts from UI.
What are the best practices?
Thanks
Shail
If you have GC logs, check if you have long GC pauses that make ZooKeeper
think that node(s) are going down. If this is the case then your nodes are
going into recovery, and based on your settings in
solr.xml you may end up in a situation where no node gets promoted to be a
leader.
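For reference, a typical set of JVM flags to produce such GC logs on a Java 8 JVM; these go into the Solr start script's JVM options, and the log path is illustrative:

```
-Xloggc:/var/log/solr/gc.log
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
-XX:+PrintGCApplicationStoppedTime
-XX:+UseGCLogFileRotation
-XX:NumberOfGCLogFiles=9
-XX:GCLogFileSize=20M
```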
On 22 D
I had previously piggybacked on another post, but I think it may have been
lost there. I had a need to do UnInvertedField based faceting in the
FacetsComponent and as such started looking at what would be required to
implement something similar to what the JSON Facets based API does in this
regard
Step one is to refine and more clearly state the requirements. Sure,
sometimes (most of the time?) the end user really doesn't know exactly what
they expect or want other than "Gee, I want to search for everything, isn't
that obvious??!!", but that simply means that an analyst is needed to
interven
Hello,
I am going through a few use cases where we have multiple disparate data
sources which in general don't have many common fields, and I was thinking
to design a different schema/index/collection for each of them, query each
of them separately, and provide different result sets to the cli
Here's a live example
[yago@dev-1 ~]$ time curl -g
"http://dev-1:8983/solr/collection-perf/query?rows=0&q=date:[20150101%20TO%2020150115]&json.facet={label:{type:terms,field:url_encoded,limit:-1,sort:{index:asc},facet:{user:'hll(user_id)'}}}"
> dump
% Total % Received % Xferd Average
Hi Alex,
Can you let us know what you mean by
"'timestamps' are truly atomic and not local clock-based"?
Thanks,
On Mon, Dec 14, 2015 at 10:53 PM, Alexandre Rafalovitch
wrote:
> At the first glance, this sounds like a perfect match to
>
> https://cwiki.apache.org/confluence/display/so
The collection is 12 shards distributed across 12 physical nodes (24G heap
each, 32G RAM), no replication. All caches are disabled in solrconfig.xml;
the rate of indexing is about 2000 docs/s, which makes the caches useless.
At the time of the perf test the number of docs was 34M (now it is 54 but t
I’m on 5.3.1.
I’m waiting a while before upgrading to 5.4 to see if any nasty bugs are
reported.
But after hitting this issue I think that I should upgrade ...
—/Yago Riveiro
On Tue, Dec 22, 2015 at 3:17 PM, Yonik Seeley wrote:
> OK found the issue:
> https://issues.apache.org/jira/browse/SOL
On Tue, Dec 22, 2015 at 6:06 AM, Yago Riveiro wrote:
> I’m surprised by the difference in speed between DV and stream: the same
> query (aggregating 7M unique keys) takes 21s with the stream method and
> about 3 minutes with DV ...
Wow - is this a "real" DV field, or one that was built on-demand
Yes.
-Original Message-
From: Yago Riveiro [mailto:yago.rive...@gmail.com]
Sent: Tuesday, December 22, 2015 5:51 AM
To: solr-user@lucene.apache.org
Subject: Indexing using a collection alias
Hi,
Is it possible to index documents using the alias and not the collection name,
if the alias onl
OK found the issue:
https://issues.apache.org/jira/browse/SOLR-5971
Fixed in 5.4
-Yonik
On Tue, Dec 22, 2015 at 10:15 AM, Yonik Seeley wrote:
> This was a generic query-forwarding bug in Solr, that was recently fixed.
> Not sure the JIRA now... what version are you using?
> -Yonik
>
>
> On Tue,
This was a generic query-forwarding bug in Solr, that was recently fixed.
Not sure the JIRA now... what version are you using?
-Yonik
On Tue, Dec 22, 2015 at 10:11 AM, Yago Riveiro wrote:
> Hi,
>
> I'm hitting an error when I try to run a JSON facet query on a node that
> doesn't have any shard
Hi,
I'm hitting an error when I try to run a JSON facet query on a node that
doesn't have any shard that belongs to the collection. The same query using
the legacy facet method works.
http://devel-16:8983/solr/collection-perf/query?rows=0&q=*:*&json.facet={label:{type:terms,field:url,limit:-1,s
Hi folks,
Been doing some SolrCloud testing and I've been experiencing some
problems. I'll try to be relatively brief, but feel free to ask for
additional information.
I've added about 200 million documents to a SolrCloud. The cloud
contains 3 collections, and all documents were added to all thre
If you're on Solr 5.4.0, this is a bug:
https://issues.apache.org/jira/browse/SOLR-8418
On Tue, 22 Dec 2015, 19:01 CrazyDiamond wrote:
> I use a MoreLikeThis query and I need to boost some documents at query
> time.
> I've tried to use fq and ^ boost; it does not work.
>
>
>
> --
> View this mes
I use a MoreLikeThis query and I need to boost some documents at query time.
I've tried to use fq and ^ boost; it does not work.
--
View this message in context:
http://lucene.472066.n3.nabble.com/mlt-and-document-boost-tp4246522.html
Sent from the Solr - User mailing list archive at Nabble.com
Just did a quick review of the InnerJoinStream and it appears that it
should handle one-to-one, one-to-many, many-to-one and many-to-many joins.
It will take a closer review of the tests to see if all these cases are
covered. So the innerJoin is designed to handle the case you describe. If
it doesn
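A sketch of what such a join could look like as a streaming expression, assuming the syntax from that patch; the collection, field, and query values are made up, and note that both sides must be sorted on their join keys:

```
innerJoin(
  search(people, q="*:*", fl="personId,name",  sort="personId asc"),
  search(pets,   q="*:*", fl="petId,ownerId",  sort="ownerId asc"),
  on="personId=ownerId"
)
```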
Hi,
Is it possible to index documents using the alias and not the collection name,
if the alias only points to one collection?
The Solr Collections API doesn't allow renaming a collection, so I want to
know if I can achieve this functionality with aliases.
All documentation that I googled uses the alias
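For context, an alias pointing at a single collection is created with the Collections API's CREATEALIAS action; the host and names below are placeholders:

```
curl "http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=books&collections=books_v1"
```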
Ok,
I’m surprised by the difference in speed between DV and stream: the same
query (aggregating 7M unique keys) takes 21s with the stream method and
about 3 minutes with DV ...
—/Yago Riveiro
On Tue, Dec 22, 2015 at 1:46 AM, Yonik Seeley wrote:
> On Mon, Dec 21, 2015 at 6:56 PM, Yago Riv
Hi,
I tried a straightforward join against something that is connected to
many things but didn't get the results I expected - I wanted to check
whether my expectations are off, and whether I can do anything in Solr to
do what I want. So given the data:
id,type,e1,e2,text
1,ABC,,,John Smith
2,
Hi,
Thanks, Erick, for your input. I've added GC logging, but it was normal
when the error came again this morning. I was adding a large collection
(27 GB): on the first server all went well. When I created the
core on a second server, it was almost immediately disconnected from the
cloud. Th
Those of you who are planning to upgrade to Solr 5.4.0, be aware that
there's a bug in the MoreLikeThis handler that makes it fail with
boosting. There's a Solr issue with a patch thanks to Jens Wille:
https://issues.apache.org/jira/browse/SOLR-8418. I really hope this gets
into 5.4.1, for us i
Oh OK. I see. So you want both the docs to have the same score?
In that case, I don't think there's any other way of going about it other
than writing your own custom similarity class.
Maybe someone else can suggest something better.
On Tue, 22 Dec 2015, 13:08 elisabeth benoit
wrote:
> hello,
>