Solr import doubling space on disk

2018-05-18 Thread Darko Todoric

Hi guys,

We have about 250 GB of Solr data on one server, and when we start a full 
import, Solr doubles the space used on disk... This is a problem for us 
because we have a 500 GB SSD on this server, and we hit almost 100% disk 
usage while the full import is running.
Since we don't use the "clean" option, is there a way to tell the 
full/delta import to update data immediately, instead of waiting until the 
import finishes and then updating everything? That way, the full import 
would not need to create this temporary extra ~250 GB on disk.
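One possible mitigation to sketch, assuming the DataImportHandler and a stock solrconfig.xml: configure a hard autoCommit with openSearcher=false, so segments are flushed periodically during the import and obsolete ones can be merged away, instead of all index state accumulating until a single commit at the end. The interval values below are illustrative, not recommendations:

```xml
<!-- solrconfig.xml: flush segments periodically during a long import.
     maxDocs/maxTime values here are illustrative only. -->
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxDocs>100000</maxDocs>
    <maxTime>60000</maxTime>           <!-- 60 seconds -->
    <openSearcher>false</openSearcher> <!-- don't expose partial data -->
  </autoCommit>
</updateHandler>
```

Note that a full import without "clean" still rewrites every document, so some transient extra space is unavoidable; this only bounds how much uncommitted state accumulates between flushes.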


Kind regards,
Darko Todoric


Re: Search by similarity?

2017-08-29 Thread Darko Todoric
nt=34584)\n 0.88 = tfNorm, 
computed from:\n 1.0 = termFreq=1.0\n 1.2 = parameter k1\n 0.75 = 
parameter b\n 3.0 = avgFieldLength\n 4.0 = fieldLength\n", 
"29030":"\n13.699375 = sum of:\n 13.699375 = sum of:\n 13.699375 = max 
of:\n 13.699375 = weight(abstract:titl in 28959) [], result of:\n 
13.699375 = score(doc=28959,freq=2.0 = termFreq=2.0\n), product of:\n 
2.0 = boost\n 5.503748 = idf(docFreq=74, docCount=18297)\n 1.2445496 = 
tfNorm, computed from:\n 2.0 = termFreq=2.0\n 1.2 = parameter k1\n 0.75 
= parameter b\n 186.49593 = avgFieldLength\n 256.0 = fieldLength\n 
3.816711E-5 = weight(title:titl in 28959) [], result of:\n 3.816711E-5 = 
score(doc=28959,freq=1.0 = termFreq=1.0\n), product of:\n 3.0 = boost\n 
1.4457239E-5 = idf(docFreq=34584, docCount=34584)\n 0.88 = tfNorm, 
computed from:\n 1.0 = termFreq=1.0\n 1.2 = parameter k1\n 0.75 = 
parameter b\n 3.0 = avgFieldLength\n 4.0 = fieldLength\n", 
"31444":"\n13.699375 = sum of:\n 13.699375 = sum of:\n 13.699375 = max 
of:\n 13.699375 = weight(abstract:titl in 31373) [], result of:\n 
13.699375 = score(doc=31373,freq=2.0 = termFreq=2.0\n), product of:\n 
2.0 = boost\n 5.503748 = idf(docFreq=74, docCount=18297)\n 1.2445496 = 
tfNorm, computed from:\n 2.0 = termFreq=2.0\n 1.2 = parameter k1\n 0.75 
= parameter b\n 186.49593 = avgFieldLength\n 256.0 = fieldLength\n 
3.816711E-5 = weight(title:titl in 31373) [], result of:\n 3.816711E-5 = 
score(doc=31373,freq=1.0 = termFreq=1.0\n), product of:\n 3.0 = boost\n 
1.4457239E-5 = idf(docFreq=34584, docCount=34584)\n 0.88 = tfNorm, 
computed from:\n 1.0 = termFreq=1.0\n 1.2 = parameter k1\n 0.75 = 
parameter b\n 3.0 = avgFieldLength\n 4.0 = fieldLength\n", 
"30621":"\n13.096554 = sum of:\n 13.096554 = sum of:\n 13.096554 = max 
of:\n 13.096554 = weight(abstract:titl in 30550) [], result of:\n 
13.096554 = score(doc=30550,freq=1.0 = termFreq=1.0\n), product of:\n 
2.0 = boost\n 5.503748 = idf(docFreq=74, docCount=18297)\n 1.189785 = 
tfNorm, computed from:\n 1.0 = termFreq=1.0\n 1.2 = parameter k1\n 0.75 
= parameter b\n 186.49593 = avgFieldLength\n 113.8 = fieldLength\n 
3.816711E-5 = weight(title:titl in 30550) [], result of:\n 3.816711E-5 = 
score(doc=30550,freq=1.0 = termFreq=1.0\n), product of:\n 3.0 = boost\n 
1.4457239E-5 = idf(docFreq=34584, docCount=34584)\n 0.88 = tfNorm, 
computed from:\n 1.0 = termFreq=1.0\n 1.2 = parameter k1\n 0.75 = 
parameter b\n 3.0 = avgFieldLength\n 4.0 = fieldLength\n", 
"32067":"\n13.096554 = sum of:\n 13.096554 = sum of:\n 13.096554 = max 
of:\n 13.096554 = weight(abstract:titl in 31996) [], result of:\n 
13.096554 = score(doc=31996,freq=1.0 = termFreq=1.0\n), product of:\n 
2.0 = boost\n 5.503748 = idf(docFreq=74, docCount=18297)\n 1.189785 = 
tfNorm, computed from:\n 1.0 = termFreq=1.0\n 1.2 = parameter k1\n 0.75 
= parameter b\n 186.49593 = avgFieldLength\n 113.8 = fieldLength\n 
3.816711E-5 = weight(title:titl in 31996) [], result of:\n 3.816711E-5 = 
score(doc=31996,freq=1.0 = termFreq=1.0\n), product of:\n 3.0 = boost\n 
1.4457239E-5 = idf(docFreq=34584, docCount=34584)\n 0.88 = tfNorm, 
computed from:\n 1.0 = termFreq=1.0\n 1.2 = parameter k1\n 0.75 = 
parameter b\n 3.0 = avgFieldLength\n 4.0 = fieldLength\n", 
"1935":"\n11.583146 = sum of:\n 11.583146 = sum of:\n 11.583146 = max 
of:\n 11.583146 = weight(abstract:titl in 1934) [], result of:\n 
11.583146 = score(doc=1934,freq=1.0 = termFreq=1.0\n), product of:\n 2.0 
= boost\n 5.503748 = idf(docFreq=74, docCount=18297)\n 1.0522962 = 
tfNorm, computed from:\n 1.0 = termFreq=1.0\n 1.2 = parameter k1\n 0.75 
= parameter b\n 186.49593 = avgFieldLength\n 163.84 = fieldLength\n 
3.816711E-5 = weight(title:titl in 1934) [], result of:\n 3.816711E-5 = 
score(doc=1934,freq=1.0 = termFreq=1.0\n), product of:\n 3.0 = boost\n 
1.4457239E-5 = idf(docFreq=34584, docCount=34584)\n 0.88 = tfNorm, 
computed from:\n 1.0 = termFreq=1.0\n 1.2 = parameter k1\n 0.75 = 
parameter b\n 3.0 = avgFieldLength\n 4.0 = fieldLength\n"}, 
"QParser":"DisMaxQParser", "altquerystring":null, "boostfuncs":null,


Kind regards,
Darko Todoric

On 08/28/2017 06:35 PM, Erick Erickson wrote:

What are the results of adding &debug=query to the URL? The parsed
query will be especially illuminating.

Best,
Erick

On Mon, Aug 28, 2017 at 4:37 AM, Emir Arnautovic
 wrote:

Hi Darko,

The issue is one of wrong expectations: "title-1-end" is parsed into 3 tokens
(guessing), and mm=99% of 3 tokens is 2.97, which is rounded down to 2. Since
all your documents contain the 'title' and 'end' tokens, they all match. If you
want to round up, you can use mm=-1%: that will result in zero matches (or one
match if you do not filter out the original document).

You have to play with your tokenizers and decide what your similarity match
percentage should be (if you want to stick with mm).
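The arithmetic above can be sketched in a few lines. This is a rough model of how a percentage mm spec resolves to a required clause count (a sketch of the documented rounding behavior, not Solr's actual code):

```python
def min_should_match(num_clauses: int, mm: str) -> int:
    """Resolve a Solr-style percentage mm spec to a clause count.

    Positive percent: that fraction of the clauses is required,
    rounded down. Negative percent: that fraction of the clauses
    may be *missing* (rounded down); the rest are required.
    """
    pct = int(mm.rstrip("%"))
    if pct >= 0:
        return int(num_clauses * pct / 100)   # floor for positive values
    optional = int(num_clauses * -pct / 100)  # clauses allowed to be absent
    return num_clauses - optional

# "title-1-end" tokenizes to 3 clauses; mm=99% only requires 2 of them
min_should_match(3, "99%")   # 2
min_should_match(3, "-1%")   # 3 (all tokens required)
```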

Regards,
Emir



On 28.08.2017 09:17, Darko Todoric wr

Re: Search by similarity?

2017-08-28 Thread Darko Todoric

Hm... I cannot make this DisMax work on my Solr...

In Solr I have documents with titles:
 - "title-1-end"
 - "title-2-end"
 - "title-3-end"
 - ...
 - ...
 - "title-312-end"

and when I run the query 
"http://localhost:8983/solr/SciLit/select?defType=dismax&indent=on&mm=99%&q=title:"title-123123123-end"&wt=json" 
I get all documents from Solr :\

What am I doing wrong?

Also, I don't know if it affects the results, but on the "title" field I 
use "WhitespaceTokenizerFactory".
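One hedged thing worth checking with a query like the above: the percent sign in mm=99% is a reserved character in URLs and must itself be encoded as %25, or the parameter may not arrive as intended. A minimal Python sketch of building the same request URL (host, core, and field names copied from the message):

```python
from urllib.parse import urlencode

# Rebuild the query from the message above with proper URL encoding.
# A literal "%" in a parameter value must travel as "%25" on the wire,
# so mm=99% becomes mm=99%25.
base = "http://localhost:8983/solr/SciLit/select"
params = {
    "defType": "dismax",
    "indent": "on",
    "mm": "99%",
    "q": 'title:"title-123123123-end"',
    "wt": "json",
}
url = base + "?" + urlencode(params)
```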


Kind regards,
Darko


On 08/25/2017 06:38 PM, Junte Zhang wrote:

If you already have the title of the document, then you could run that title as 
a new query against the whole index and exclude the source document from the 
results as a filter.

You could use the DisMax query parser: 
https://cwiki.apache.org/confluence/display/solr/The+DisMax+Query+Parser

And then set the minimum match ratio of the OR clauses to 90%.
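A minimal sketch of that suggestion, assuming a core named SciLit, a title field, and a unique-key field named id (all illustrative): build a DisMax query from the source document's title, set mm, and add a filter query that excludes the source document itself.

```python
from urllib.parse import urlencode

def similar_titles_query(title: str, source_id: str, mm: str = "90%") -> str:
    """Build a Solr DisMax query URL that finds documents with a title
    similar to `title`, excluding the source document itself.
    Core, field, and host names are illustrative assumptions."""
    params = {
        "defType": "dismax",
        "qf": "title",             # search against the title field
        "mm": mm,                  # minimum-should-match ratio of OR clauses
        "q": title,
        "fq": f"-id:{source_id}",  # filter query excluding the source doc
        "wt": "json",
    }
    return "http://localhost:8983/solr/SciLit/select?" + urlencode(params)
```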

/JZ

-Original Message-
From: Darko Todoric [mailto:todo...@mdpi.com]
Sent: Friday, August 25, 2017 5:49 PM
To: solr-user@lucene.apache.org
Subject: Search by similarity?

Hi,


I have 90,000,000 documents in Solr, and I need to compare the "title" of a 
document and get all documents with more than 80% similarity. PHP has 
"similar_text", but loading 90M documents into an array is not practical...
Can I run a query in Solr that will give me documents with more than 80% similarity?


Kind regards,
Darko Todoric

--
Darko Todoric
Web Engineer, MDPI DOO
Veljka Dugosevica 54, 11060 Belgrade, Serbia
+381 65 43 90 620
www.mdpi.com

Disclaimer: The information and files contained in this message are 
confidential and intended solely for the use of the individual or entity to 
whom they are addressed.
If you have received this message in error, please notify me and delete this 
message from your system.
You may not copy this message in its entirety or in part, or disclose its 
contents to anyone.






Search by similarity?

2017-08-25 Thread Darko Todoric

Hi,


I have 90,000,000 documents in Solr, and I need to compare the "title" of 
a document and get all documents with more than 80% similarity. PHP has 
"similar_text", but loading 90M documents into an array is not 
practical...

Can I run a query in Solr that will give me documents with more than 80% similarity?


Kind regards,
Darko Todoric
