RE: 20180917-Need Apache SOLR support

2018-09-18 Thread Liu, Daphne
You have to increase your RAM. We have upgraded our Solr cluster to 12 Solr 
nodes, each with 64GB RAM. Our shard size is around 25GB, and each server hosts 
only one shard (leader or replica). Performance is very good.
For better performance, memory needs to be larger than your shard size.


Kind regards,

Daphne Liu
BI Architect • Big Data - Matrix SCM

CEVA Logistics / 10751 Deerwood Park Blvd, Suite 200, Jacksonville, FL 32256 
USA / www.cevalogistics.com
T 904.928.1448 / F 904.928.1525 / daphne@cevalogistics.com

Making business flow

-Original Message-
From: zhenyuan wei 
Sent: Tuesday, September 18, 2018 3:12 AM
To: solr-user@lucene.apache.org
Subject: Re: 20180917-Need Apache SOLR support

I have 6 machines, and each machine runs a Solr server; each Solr server uses 
18GB of RAM. The total document count is 3.2 billion, 1.4TB.
My collection's replication factor is 1. The collection has 60 shards;
currently each shard is 20-30GB.
There are 15 fields per document. The query rate is slow now, maybe 100-500 
requests per second.




Re: 20180917-Need Apache SOLR support

2018-09-18 Thread Shawn Heisey

On 9/18/2018 1:11 AM, zhenyuan wei wrote:

I have 6 machines, and each machine runs a Solr server; each Solr server uses
18GB of RAM. The total document count is 3.2 billion, 1.4TB.
My collection's replication factor is 1. The collection has 60 shards;
currently each shard is 20-30GB.
There are 15 fields per document. The query rate is slow now, maybe 100-500
requests per second.


That is NOT a slow query rate.  In the recent past, I was the 
administrator of a Solr install.  When things got *REALLY BUSY*, the 
servers would see as many as five requests per second. Usually the 
request rate was less than one per second.  A high request rate can 
drastically impact overall performance.


I have heard of big Solr installs that handle thousands of requests per 
second, which is certainly larger than yours ... but 100-500 is NOT 
slow.  I'm surprised that you can get acceptable performance on an index 
that big, with that many queries, and only six machines.  Congratulations.


Despite appearances, I wasn't actually asking you for this information.  
I was telling you that those things would all be factors in the decision 
about how many shards you should have.  Perhaps I should have worded the 
message differently.


See this page for a discussion about how total memory size and index 
size affect performance:


https://wiki.apache.org/solr/SolrPerformanceProblems

Thanks,
Shawn



Re: 20180917-Need Apache SOLR support

2018-09-18 Thread zhenyuan wei
I have 6 machines, and each machine runs a Solr server; each Solr server uses
18GB of RAM. The total document count is 3.2 billion, 1.4TB.
My collection's replication factor is 1. The collection has 60 shards;
currently each shard is 20-30GB.
There are 15 fields per document. The query rate is slow now, maybe 100-500
requests per second.



Re: 20180917-Need Apache SOLR support

2018-09-17 Thread Shawn Heisey

On 9/17/2018 9:05 PM, zhenyuan wei wrote:

Does that mean that a small number of shards gives better performance?
I also have a use case which contains 3 billion documents; the collection
contains 60 shards now. Would 10 shards be better than 60 shards?


There is no definite answer to this question.  It depends on a bunch of 
things.  How big is each shard once it's finally built?  What's your 
query rate?  How many machines do you have, and how much memory do those 
machines have?


Thanks,
Shawn



Re: 20180917-Need Apache SOLR support

2018-09-17 Thread zhenyuan wei
Does that mean that a small number of shards gives better performance?
I also have a use case which contains 3 billion documents; the collection
contains 60 shards now. Would 10 shards be better than 60 shards?





Re: 20180917-Need Apache SOLR support

2018-09-17 Thread Ere Maijala



Shawn Heisey kirjoitti 17.9.2018 klo 19.03:

7.   If I have billions of indexes, and the "start" parameter is the 10
millionth index and the "end" parameter is start+100, will any performance
issue be raised in this case?


Let's say that you send a request with these parameters, and the index 
has three shards:


start=10000000&rows=100

Every shard in the index is going to return a result to the coordinating 
node of ten million plus 100.  That's thirty million individual 
results.  The coordinating node will combine those results, sort them, 
and then request full documents for the 100 specific rows that were 
requested.  This takes a lot of time and a lot of memory.


What Shawn says above means that even if you give Solr a heap big enough 
to handle that, you'll run into serious performance issues even with a 
light load, since these huge allocations easily lead to 
stop-the-world garbage collections that kill performance. I've tried it 
and it was bad.
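The cost described above can be sketched in a few lines of Python (an illustration only, not Solr code; the shard sizes and offset are made-up small numbers so it runs instantly):

```python
# Why deep paging hurts in a sharded index: every shard must return its own
# top (start + rows) candidates, because any of them could fall inside the
# requested window once results are merged globally.

def distributed_page(shards, start, rows):
    """shards: list of per-shard result lists of (score, doc_id),
    each already sorted by descending score."""
    per_shard = start + rows          # each shard must send this many
    candidates = []
    for shard in shards:
        candidates.extend(shard[:per_shard])
    # The coordinator merges and sorts everything, then keeps the window.
    merged = sorted(candidates, key=lambda t: t[0], reverse=True)
    return merged[start:start + rows], len(candidates)

# Three shards of 1,000 docs each; fetch 10 rows at offset 500.
shards = [[(3 * i + s, f"shard{s}-doc{i}") for i in range(1000, 0, -1)]
          for s in range(3)]
page, transferred = distributed_page(shards, start=500, rows=10)
print(transferred)   # 3 shards x 510 candidates = 1530 results merged
print(len(page))     # but only 10 documents actually reach the client
```

With start=10,000,000 and three shards, the same arithmetic gives the thirty million merged results described above.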


If you are thinking of a user interface that allows jumping to an 
arbitrary result page, you'll have to limit it to some sensible number 
of results (10,000 is probably safe, 100,000 may also work) or use 
something other than Solr. Cursor mark or streaming are great options, 
but only if you want to process all the records. Often the deep-paging 
need is in practice the need to see the last results, and that can also 
be achieved by allowing reverse sorting.


Regards,
Ere

--
Ere Maijala
Kansalliskirjasto / The National Library of Finland


Re: 20180917-Need Apache SOLR support

2018-09-17 Thread Shawn Heisey

On 9/17/2018 7:04 AM, KARTHICKRM wrote:

Dear SOLR Team,

We are beginners to Apache SOLR. We need the following clarifications from you.


Much of what I'm going to say is a mirror of what you were already told 
by Jan.  All of Jan's responses are good.



1.  In SolrCloud, how can we install more than one shard on a single PC?


One Solr instance can run multiple indexes.  Except for one specific 
scenario that I hope you don't run into, you should NOT run multiple 
Solr instances per server.  There should only be one.  If your query 
rate is very low, then you can get good performance from multiple shards 
per node, but with a high query rate, you'll only want one shard per node.



2.  What is the maximum number of shards that can be added under one SolrCloud?


There is no practical limit.  If you create enough of them (more than a 
few hundred), you can end up with severe scalability problems related to 
SolrCloud's interaction with ZooKeeper.



3.  My application has no need of ACID properties; other than that, can I
use Solr as a complete database?


Solr is NOT a database.  All of its capability and all the optimizations 
it contains are all geared towards search.  If you try to use it as a 
database, you're going to be disappointed with it.



4.  On which OS will we see better performance, Windows Server OS or
Linux?


From those two choices, I would strongly recommend Linux. If you have 
an open source operating system that you prefer to Linux, go with that.



5.  If a SOLR Core contains 2 Billion indexes, what is the recommended
RAM size and Java heap space for better performance?


I hope you mean 2 billion documents here, not 2 billion indexes.  Even 
though technically speaking there's nothing preventing SolrCloud from 
handling that many indexes, you'll run into scalability problems long 
before you reach that many.


If you do mean documents ... don't put that many documents in one core.  
That number includes deleted documents, which means there's a good 
possibility of going beyond the actual limit if you try to have 2 
billion documents that haven't been deleted.



6.  I have 20 fields per document; what is the maximum number of documents
that can be inserted / retrieved in a single request?


There's no limit to the number that can be retrieved.  But because the 
entire response must be built in memory, you can run your Solr install 
out of heap memory by trying to build a large response.  Streaming 
expressions can be used for really large results to avoid the memory issues.


As for the number of documents that can be inserted by a single request 
... Solr defaults to a maximum POST body size of 2 megabytes.  This can 
be increased through an option in solrconfig.xml.  Unless your documents 
are huge, this is usually enough to send several thousand at once, which 
should be plenty.
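For reference, that limit lives in the <requestParsers> element of solrconfig.xml; below is a sketch with the default values (in kilobytes, so 2048 KB = 2 MB). Verify the attribute names against the reference guide for your Solr version.

```xml
<!-- solrconfig.xml: POST body size limits, expressed in KB -->
<requestDispatcher>
  <requestParsers enableRemoteStreaming="false"
                  multipartUploadLimitInKB="2048"
                  formdataUploadLimitInKB="2048"/>
</requestDispatcher>
```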



7.   If I have billions of indexes, and the "start" parameter is the 10
millionth index and the "end" parameter is start+100, will any performance
issue be raised in this case?


Let's say that you send a request with these parameters, and the index 
has three shards:


start=10000000&rows=100

Every shard in the index is going to return a result to the coordinating 
node of ten million plus 100.  That's thirty million individual 
results.  The coordinating node will combine those results, sort them, 
and then request full documents for the 100 specific rows that were 
requested.  This takes a lot of time and a lot of memory.


For deep paging, use cursorMark.  For large result sets, use streaming 
expressions.  I have used cursorMark ... its only disadvantage is that 
you can't jump straight to an arbitrary page, you must go through all of 
the earlier pages too.  But a deep page will be just as fast as page 1.  I 
have never used streaming expressions.
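The difference between offset paging and a cursor can be sketched in plain Python (a toy model only; real cursorMark works on a sort key plus a uniqueKey tiebreaker, which this ignores):

```python
# Toy contrast: offset paging must gather start + rows results for every
# page, while a cursor only asks for the next rows after the last value
# it has already seen, so the cost does not grow with page depth.

docs = sorted(range(10_000))   # stand-in for an index sorted on one field

def offset_page(start, rows):
    collected = docs[:start + rows]     # everything up to the window
    return collected[start:], len(collected)

def cursor_page(cursor, rows):
    page = [d for d in docs if d > cursor][:rows]
    next_cursor = page[-1] if page else cursor
    return page, next_cursor

deep_page, cost = offset_page(start=9_000, rows=10)
page1, mark = cursor_page(cursor=-1, rows=10)
page2, mark = cursor_page(cursor=mark, rows=10)
print(cost)       # offset paging handled 9010 results for 10 documents
print(page2[0])   # the cursor resumed exactly after page1's last doc
```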



8.  Which .net client is best for SOLR?


No idea.  The only client produced by this project is the Java client.  
All other clients are third-party, including .NET clients.



9.  Is there any limitation for a single field, I mean regarding the size
of blob data?


There are technically no limitations here.  But if your data is big 
enough, it begins to cause scalability problems.  It takes time to read 
data off the disk, for the CPU to process it, etc.


In conclusion, I have much the same thing to say as Jan said.  It sounds 
to me like you're not after a search engine, and that Solr might not be 
the right product for what you're trying to accomplish.  I'll say this 
again: Solr is NOT a database.


Thanks,
Shawn



Re: 20180917-Need Apache SOLR support

2018-09-17 Thread Walter Underwood
Do not use Solr as a database. It was never designed to be a database.
It is missing a lot of features that are normal in databases.

* no transactions
* no rollback (in Solr Cloud)
* no session isolation (one client’s commit will commit all data in progress)
* no schema migration
* no version migration
* no real backups (Solr backup is a cold server, not a dump/load)
* no dump/load
* no general modify-record support (atomic updates are a subset of this)

Solr assumes you can always reload all the data from a repository. This is done
instead of migration or backups.

If you use Solr as a database and lose all your data, don’t blame us. It was
never designed to do that.

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)


Re: 20180917-Need Apache SOLR support

2018-09-17 Thread Susheel Kumar
I'd highly advise using the Java library/SolrJ to connect to Solr rather
than .NET.  Many things are taken care of by CloudSolrClient and other
classes when communicating with a Solr Cloud that has shards/replicas, and
if your .NET port of SolrJ is not up to date or is missing functionality
(which I suspect), you may run into issues.

Thnx



Re: 20180917-Need Apache SOLR support

2018-09-17 Thread Jan Høydahl
> We are beginners to Apache SOLR. We need the following clarifications from you.
> 
> 
> 
> 1.  In SolrCloud, how can we install more than one shard on a single PC?

You typically have one installation of Solr on each server. Then you can add a 
collection with multiple shards, specifying how many shards you wish when 
creating the collection, e.g.

bin/solr create -c mycoll -shards 4

Although possible, it is normally not advised to install multiple instances of 
Solr on the same server.

> 2.  What is the maximum number of shards that can be added under one SolrCloud?

There is no limit. You should find a good number based on the number of 
documents, the size of your data, the number of servers in your cluster, 
available RAM and disk size and the required performance.

In practice you will guess the initial #shards and then benchmark a few 
different settings before you decide.
Note that you can also adjust the number of shards as you go through 
CREATESHARD / SPLITSHARD APIs, so even if you start out with few shards you can 
grow later.
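To make "grow later" concrete: shard splitting is driven by the Collections API over HTTP. Here is a small Python sketch that only builds the request URL (the host, collection, and shard names are made-up examples, and it does not contact a server; check the parameter list against the reference guide for your Solr version):

```python
# Build a Collections API SPLITSHARD request URL. SPLITSHARD divides one
# existing shard of a collection into two new sub-shards.
from urllib.parse import urlencode

def splitshard_url(host, collection, shard):
    params = urlencode({
        "action": "SPLITSHARD",
        "collection": collection,
        "shard": shard,
    })
    return f"http://{host}/solr/admin/collections?{params}"

url = splitshard_url("localhost:8983", "mycoll", "shard1")
print(url)
# http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=mycoll&shard=shard1
```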

> 3.  My application has no need of ACID properties; other than that, can I
> use Solr as a complete database?

You COULD, but Solr is not intended to be your primary data store. You should 
always design your system so that you can re-index all content from some source 
(does not need to be a database) when needed. There are several use cases for a 
complete re-index that you should consider.

> 4.  On which OS will we see better performance, Windows Server OS or
> Linux?

I'd say Linux if you can. If you HAVE to, then you could also run on Windows :-)

> 5.  If a SOLR Core contains 2 Billion indexes, what is the recommended
> RAM size and Java heap space for better performance? 

It depends. It is not likely that you will ever put 2bn docs in one single 
core. Normally you would have sharded long before that number.
The amount of physical RAM and the amount of Java heap to allocate to Solr must 
be calculated and decided on a per case basis.
You could also benchmark this - test if a larger RAM size improves performance 
due to caching. Depending on your bottlenecks, adding more RAM may be a way to 
scale further before needing to add more servers.

Sounds like you should consult with a Solr expert to dive deep into your exact 
usecase and architect the optimal setup for your case, if you have these 
amounts of data.

> 6.  I have 20 fields per document; what is the maximum number of documents
> that can be inserted / retrieved in a single request?

No limit. But there are practical limits.
For indexing (update), try various batch sizes and find which gives the 
best performance for you. It is just as important to do inserts (updates) in 
many parallel connections as in large batches.

For searching, why would you want to know a maximum? Normally the usecase for 
search is to get TOP N docs, not a maximum number?
If you need to retrieve thousands of results, you should have a look at /export 
handler and/or streaming expressions.

> 7.   If I have billions of indexes, and the "start" parameter is the 10
> millionth index and the "end" parameter is start+100, will any performance
> issue be raised in this case?

Don't do it!
This is a warning sign that you are using Solr in a wrong way.

If you need to scroll through all docs in the index, have a look at streaming 
expressions or cursorMark instead!

> 8.  Which .net client is best for SOLR?

The only I'm aware of is SolrNET. There may be others. None of them are 
supported by the Solr project.

> 9.  Is there any limitation for a single field, I mean regarding the size
> of blob data?

I think there is some default cutoff for very large values.

Why would you want to put very large blobs into documents?
This is a warning flag that you may be using the search index in a wrong way. 
Consider storing large blobs outside of the search index and reference them 
from the docs.


In general, it would help a lot if you start telling us WHAT you intend to use 
Solr for, what you try to achieve, what performance goals/requirements you have 
etc, instead of a lot of very specific max/min questions. There are very seldom 
hard limits, and if there are, it is usually not a good idea to approach them :)

Jan