Hi - no, I don't think so; it doesn't happen all the time, but too frequently.
The machine running the tests has a high powered CPU, plenty of cores and RAM.
Markus
-Original message-
> From:Mark Miller
> Sent: Monday 5th October 2015 19:52
> To:
Help please?
On Sun, Oct 4, 2015 at 5:07 PM, Siddhartha Singh Sandhu <
sandhus...@gmail.com> wrote:
> Hi Shawn and Andrew,
>
> I am on the same page with you guys about the SSH authentication and communicating
> with the APIs that Solr has to provide. I simply don't want the GUI as it
> is; nobody will
: value? I need the sort such that ascending will have the NULL values first
: and descending will have the NULL values last (i.e. sortMissingFirst="false"
: and sortMissingLast="false").
You can configure a default="X" attribute on your field such that X is the
minimum legal value for your
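For illustration, a minimal schema.xml sketch of that suggestion (the field name and type here are made-up examples, not from the thread; note that `default` is applied at index time, so existing documents must be reindexed):

```xml
<!-- Hypothetical field: "price" and type "int" are assumptions.
     Filling missing values with the minimum legal value makes documents
     without a price sort first ascending and last descending. -->
<field name="price" type="int" indexed="true" stored="true"
       default="-2147483648"/>
```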
If it's always when using https as in your examples, perhaps it's SOLR-5776.
- mark
On Mon, Oct 5, 2015 at 10:36 AM Markus Jelsma
wrote:
> Hmmm, I tried that just now but I sometimes get tons of Connection reset
> errors. The tests then end with "There are still
You understand that disabling the admin API will leave you with an
unmaintainable Solr installation, right? You might not even be able to diagnose
the problem.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Oct 5, 2015, at 11:34 AM, Siddhartha
Well, there's a difference between disabling the UI and disabling the
API. The UI can be disabled (I think) by deleting the contents of
server/solr-webapp/webapp (leaving behind the WEB-INF directory). But
really, all that is doing is hiding a heap of code that is public
already.
As has been
http://www.slideshare.net/lucidworks/high-performance-solr-and-jvm-tuning-strategies-used-for-map-quests-search-ahead-darren-spehr
See ArrayBlockingQueue.
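For context, a minimal Java sketch of the bounded-queue behavior that ArrayBlockingQueue provides (this is the structure behind Jetty's accept queue; the capacity of 2 is just for illustration):

```java
import java.util.concurrent.ArrayBlockingQueue;

public class AcceptQueueSketch {
    public static void main(String[] args) {
        // Bounded queue: once full, offer() returns false instead of
        // growing, which is how an accept queue sheds load under pressure.
        ArrayBlockingQueue<String> queue = new ArrayBlockingQueue<>(2);
        System.out.println(queue.offer("conn-1")); // true
        System.out.println(queue.offer("conn-2")); // true
        System.out.println(queue.offer("conn-3")); // false: queue is full
        System.out.println(queue.poll());          // conn-1 (FIFO order)
    }
}
```

Raising `acceptQueueSize` raises that capacity, so bursts of connections queue instead of being refused.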
What would this help with?
--
Bill Bell
billnb...@gmail.com
cell 720-256-8076
What should this be set to?
Do you set it with -Dsolr.jetty.https.acceptQueueSize=5000 ?
--
Bill Bell
billnb...@gmail.com
cell 720-256-8076
Just put my solr on a private subnet. Nobody can reach it unless I will it.
I am just a bit concerned whether the Solr request handler stands up to
pen-test logic.
Thank you for the support everyone. Appreciate it.
On Mon, Oct 5, 2015 at 2:43 PM, Upayavira wrote:
> Well,
Hi,
I pressed the optimize switch. Wasn't the best decision I made today. The
aftermath of it was that when I tried to index more documents, curl just
waited and waited.
I pinged my SOLR and all is well. I am able to access the admin console
also. I can query the SOLR machine too. But, I
I'm looking for a way to delete term vectors from an existing index. The schema
was changed to 'termVectors="false"' and optimization was performed after that,
but the index size remains the same (I'm totally sure that the optimization was
successful).
I've also tried to add some new documents to the existing index
Hello,
Could you please try the same search query on your machine to check if it
matches?
Thanks.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Keyword-match-distance-rule-issue-tp4231624p4232713.html
Sent from the Solr - User mailing list archive at Nabble.com.
Dear Solr Users,
I am facing a problem with highlighting on ngram fields.
Highlighting works well, except for words with the German character
"ß".
E.g.: with q=rosen&
"highlighting": {
"gcl3r:12723710:6643": {
"textng": [
"Rosensteinpark (Métro), Stuttgart
Hello,
I have several implementations of AbstractFullDistribZkTestBase of Solr 5.3.0.
Sometimes a test fails with either "There are still nodes recoverying - waited
for 30 seconds" or "IOException occured when talking to server at:
https://127.0.0.1:44474/collection1", so usually at least one
Best tool for this job really depends on your needs, but one option:
I have a dev tool for Solr log analysis:
https://github.com/markrmiller/SolrLogReader
If you use the -o option, it will spill out just the queries to a file with
qtimes.
- Mark
On Wed, Sep 23, 2015 at 8:16 PM Tarala, Magesh
I restarted my SOLR and it is now not reloading the configuration.
Is my solr index corrupted?
Sid.
On Mon, Oct 5, 2015 at 5:09 PM, Erick Erickson
wrote:
> You should be able to insert while optimizing. Do be aware that
> optimize will probably require that your disk
Thank you, Erick. I think your explanation was indeed the reason.
Following up on that: Would having an SSD make considerable difference in
speed?
On Mon, Oct 5, 2015 at 5:18 PM, Siddhartha Singh Sandhu <
sandhus...@gmail.com> wrote:
> Scrap the last one. It just took 10 mins to load. I panicked too
You should be able to insert while optimizing. Do be aware that
optimize will probably require that your disk have free _at least_ as
much space as the index takes up.
It may just be that the disk is so busy with the optimize (it's mostly
just writing from one file to another) that it's appearing
Scrap the last one. It just took 10 minutes to load. I panicked too quickly.
On Mon, Oct 5, 2015 at 5:15 PM, Siddhartha Singh Sandhu <
sandhus...@gmail.com> wrote:
> I restarted my SOLR and it is now not reloading the configuration.
> Is my solr index corrupted?
>
> Sid.
>
> On Mon, Oct 5, 2015 at
Hi Alessandro, thanks for the reply!
I wasn’t aware of nested documents; as you say, it seems precisely what I need
... in fact it looks like I plagiarised that article while writing my
description, hehe. Upgrading from 4.10 is a bit of work, but it might just be
worth it.
Many Thanks!
Douglas
bq: Would having an SSD make considerable difference in
speed
Almost certainly. Optimize is rarely necessary, especially if
you're indexing relatively constantly so just avoiding that might
do the trick ;).
But reloading shouldn't be taking 10 minutes. Before 5.2, if you
had suggesters
Not sure what that means :)
SOLR-5776 would not happen all the time, but too frequently. It also
wouldn't matter how powerful the CPU is or how many cores and how much RAM
you have :)
Whether you see failures without https is what you want to check.
- mark
On Mon, Oct 5, 2015 at 2:16 PM Markus Jelsma
wrote:
I'd make two guesses:
Looks like you are using JRockit? I don't think that is common or well
tested at this point.
There are a billion or so bug fixes from 4.6.1 to 5.3.2. Given the pace of
SolrCloud, you are dealing with something fairly ancient and so it will be
harder to find help with older
On 10/4/2015 3:07 PM, Siddhartha Singh Sandhu wrote:
> I am on the same page with you guys about the SSH authentication and communicating
> with the APIs that Solr has to provide. I simply don't want the GUI as it
> is; nobody will be able to access it once I set the policy on my server
> except for
So the FieldCache was removed from Solr 5.
What is the implication of this? Should we move all facets to DocValues
when we have high cardinality (lots of values)? Are we adding it back?
Other ideas to improve performance?
>From Mike M:
FieldCache is gone (moved to a dedicated
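For reference, moving a facet field onto DocValues is a schema.xml change like the one below (the field name `category` is a made-up example; the change requires a full reindex to take effect):

```xml
<!-- Hypothetical example field. docValues="true" builds the column-oriented
     on-disk structure that replaces the old on-heap FieldCache for
     faceting and sorting. -->
<field name="category" type="string" indexed="true" stored="false"
       docValues="true"/>
```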
Hi Douglas!
Your use case is a really good fit for nested objects in Solr [1].
After you model your problem with nested objects, you should play a little
bit with faceting at different levels (parent/children).
Pivot faceting can be good in some scenarios, probably not in yours.
I would probably
What makes you believe there is a good way to remove the term vectors
without re-indexing?
It makes sense that the simple optimise did not do the job; it is what I
would expect.
I agree with you that term vectors are a separate data structure in the
index, but I doubt there is a way to
Hello,
Could you please add me to the ContributorsGroup of the Solr Wiki? I have
made Serbian analyzer for Solr [
https://issues.apache.org/jira/browse/LUCENE-6053] and would now like to
write about some Serbian search considerations.
My wiki username is NikolaSmolenski.
--
Nikola Smolenski
I was doing some studies and analysis, and am just wondering which
approach, in your opinion, is best for indexing into Solr to reach the best
possible throughput.
I know that a lot of factors affect indexing time, so let's only
focus on the feeding approach.
Let's isolate different
Hi Anil,
what makes you think that bridgewater~2 is not matching bridwater?
Are you sure bridwater is in your index?
Are you sure it is in the field where you are looking for bridgewater?
I would verify that, because it does not make sense, as they both have the
same distance to bridwater.
Are
Hi Remi,
Your use-case is more-or-less exactly what I wrote luwak for:
https://github.com/flaxsearch/luwak. You register your queries with a Monitor
object, and then match documents against them. The monitor analyzes the
documents that are passed in and tries to filter out queries that it
SolrJ tends to be faster for several reasons, not the least of which
is that it sends packets to Solr in a more efficient binary format.
Batching is critical. I did some rough tests using SolrJ and sending
docs one at a time gave a throughput of < 400 docs/second.
Sending 10 gave 2,300 or so.
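The batching idea can be sketched without a running Solr. The partitioning helper below is generic Java (the batch size of 10 mirrors the numbers above); in SolrJ, each batch would then go out as a single `client.add(batch)` call instead of one call per document:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSketch {
    // Split docs into batches of at most batchSize; each batch stands in
    // for one bulk add() call to Solr instead of one call per document.
    static <T> List<List<T>> partition(List<T> docs, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < docs.size(); i += batchSize) {
            batches.add(new ArrayList<>(
                docs.subList(i, Math.min(i + batchSize, docs.size()))));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> docs = new ArrayList<>();
        for (int i = 0; i < 25; i++) docs.add(i);
        // 25 docs in batches of 10 -> 3 round trips instead of 25
        System.out.println(partition(docs, 10).size()); // 3
    }
}
```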
Hello Troy,
What a challenge!!
On Thu, Oct 1, 2015 at 3:42 PM, Troy Edwards
wrote:
>
> 2) It appears that I cannot have fromIndex=Contracts because it is very
> large and has to be sharded. Per my understanding SolrCloud join does not
> support multiple shards
>
..
Hmmm, I tried that just now but I sometimes get tons of Connection reset
errors. The tests then end with "There are still nodes recoverying - waited for
30 seconds".
[RecoveryThread-collection1] ERROR org.apache.solr.cloud.RecoveryStrategy -
Error while trying to
Done, thanks!
On Mon, Oct 5, 2015 at 3:24 AM, Nikola Smolenski wrote:
> Hello,
>
> Could you please add me to the ContributorsGroup of the Solr Wiki? I have
> made Serbian analyzer for Solr [
> https://issues.apache.org/jira/browse/LUCENE-6053] and would now like to
> write
Right, I'm assuming you're creating a cluster somewhere.
Try calling (from memory) waitForRecoveriesToFinish in
AbstractDistribZkTestBase after creating the collection
to ensure that the nodes are up and running before you
index to them.
Shot in the dark
Erick
On Mon, Oct 5, 2015 at 1:36 AM,
Hi Remi,
I'm not sure what you mean by filtering on the fly? With the percolator, if
you're going to do filtering at match time, you still need to have added the
terms to filter on when you add the query. And you can actually do the same
sort of thing in luwak, using a
Hi Alan,
I became aware of Luwak a few months ago and I'm planning on using it in
the future. The only reason I couldn’t use it for my specific scenario was
the fact that I needed the possibility to filter on the fly and not
necessarily include filtering while building the query index. Apparently
Any takers on this? Any kinda clue would help. Thanks.
On 10/4/15 10:14 AM, Rallavagu wrote:
As there were no responses so far, I assume that this is not a very
common issue that folks come across. So, I went into source (4.6.1) to
see if I can figure out what could be the cause.
The thread
Thanks Erick,
you confirmed my impressions!
Thank you very much for the insights, an other opinion is welcome :)
Cheers
2015-10-05 14:55 GMT+01:00 Erick Erickson :
> SolrJ tends to be faster for several reasons, not the least of which
> is that it sends packets to Solr