Re: disable auto-commit

2018-12-12 Thread Danilo Tomasoni
Hello, I tried setting both autoCommit and autoSoftCommit to -1, but I
still see the documents just seconds after indexing them.


These are the actual configurations in /conf/solrconfig.xml:

<autoCommit>
  <maxTime>${solr.autoCommit.maxTime:999}</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

<autoSoftCommit>
  <maxTime>${solr.autoSoftCommit.maxTime:999}</maxTime>
</autoSoftCommit>


But even that way, after every single POST to the /update request handler, if
I search for * I see 1K more documents (I index in chunks of 1K documents).


Do you have any idea of why this happens?


On 12/12/18 17:16, Erick Erickson wrote:

The answer to your question is to set the interval to -1.

However, that's generally a really bad idea. Why do you think
this will help with OOM errors? _Querying_ usually is the place OOMs
are generated, especially if you do things like facet on very
high-cardinality fields and/or do _not_ have docValues enabled for
fields you facet, group, or sort on.



I have a single machine where I just index data; no concurrent querying
is happening. That's why I don't care about visibility, just about
speed and not crashing.


I'm planning to make a single hard commit at the end (roughly once every
500.000 docs) and then copy the final index to a clone machine where all
the querying happens, to avoid the OOMs presumably generated by concurrent
indexing/querying.


I thought this could help lower the Solr memory requirements.

We don't facet, group, or sort. The default Solr sorting by relevance is OK
for us.


We just have big edismax queries containing sub-edismax queries with
different mm values. Every sub-edismax query has a lot (on the order of
thousands) of alternative words/phrases.




If you do disable hard commits, your TLOG sizes will grow without
bound until your entire indexing run is complete. Worse, if the TLOG
replays due to abnormal restart, it would try to re-index everything.
Hard commits with openSearcher=false are recommended.



Yes, I know, but I want control over when the hard commit is triggered.


It would also be nice to know when Solr finishes the hard commit, so
that I can stop sending POST requests in that timeframe, but I haven't
seen any API for that.

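One way to get that control (a sketch; the collection name is a placeholder)
is to trigger the hard commit explicitly from the indexing client. The
update request does not return until the commit has finished, so the client
knows when it is safe to resume sending documents:

curl 'http://localhost:8983/solr/mycollection/update?commit=true&openSearcher=false'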


Thank you for your help

Danilo


Best,
Erick
On Wed, Dec 12, 2018 at 4:44 AM Danilo Tomasoni  wrote:

I want to disable even that.

I saw here

https://lucene.apache.org/solr/guide/6_6/updatehandlers-in-solrconfig.html


that to achieve what I want I probably just need to comment out the
autoCommit tag. Correct?

What do you think about disabling autocommit/autosoftcommit?

Can it lower the system requirements while indexing?


What about transaction logs? Can they be disabled?

When solr crashes I always reimport from scratch because I don't expect
that the documents accepted by solr between the last hard commit and the
crash will be saved somewhere.

But this article

https://lucidworks.com/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/

says that Solr is capable of restoring documents even if they weren't
committed. Is that still correct?


Thank you

Danilo


On 12/12/18 13:33, Mikhail Khludnev wrote:

What about autoSoftCommit ?

On Wed, Dec 12, 2018 at 3:24 PM Danilo Tomasoni  wrote:


Hello, I'm experiencing oom while indexing a big amount of documents.

The main idea to avoid OOM is to avoid commit (just one big commit at
the end).

Is this a correct idea?

How can I disable autocommit?

I've set


<autoCommit>
  <maxTime>${solr.autoCommit.maxTime:-1}</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

in solrconfig.xml

but it's not sufficient: while indexing I still see documents.

Thank you

Danilo


--
Danilo Tomasoni
COSBI




--
Danilo Tomasoni
COSBI



--
Danilo Tomasoni
COSBI


Increasing Fault Tolerance of SOLR Cloud and Zookeeper

2018-12-12 Thread Stephen Lewis Bianamara
Hello SOLR Community!

I have a SOLR cluster which recently hit this error (full error below):
"Cannot talk to ZooKeeper - Updates are disabled." I'm running Solr 6.6.2 and
ZooKeeper 3.4.6. The first time this happened, we replaced a node within
our cluster. The second time, we followed the advice in this post and just
restarted the SOLR service, which resolved the issue. I traced this down
(at least the second time) to this message: "WARN
(zkCallback-4-thread-31-processing-n:<>:<>_solr) [ ]
o.a.s.c.c.ConnectionManager Watcher
org.apache.solr.common.cloud.ConnectionManager@4586a480 name:
ZooKeeperConnection Watcher:zookeeper-1.dns.domain.foo:1234,zookeeper-2.dns.domain.foo:1234,zookeeper-3.dns.domain.foo:1234
got event WatchedEvent state:Disconnected type:None path:null path: null
type: None".

I'm wondering a few things. First, can you help me understand what this
error means in this context? Did the Zookeepers themselves experience an
issue, or just the SOLR node trying to talk to the zookeepers? There was
only one SOLR node affected, which was the leader, and thus stopped all
writes. Any way to trace this to a specific resource limitation? Our ZK
cluster looks to be rather low utilization, but perhaps I'm missing
something.

The second, what steps can I take to make the SOLR-zookeeper interaction
more fault tolerant in general? It seems to me like we might want to (a)
Increase the Zookeeper SyncLimit to provide more flexibility within the ZK
quorum, but this would only help if the issue was truly on the zk side. We
could also increase the tolerance on the SOLR side of things; would this be
controlled via the zkClientTimeout? Any other thoughts?
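
For concreteness, a sketch of both knobs (the values are illustrative, not
recommendations):

# zoo.cfg (ZooKeeper side): how many ticks a follower may lag before it is dropped
syncLimit=5

# solr.in.sh (Solr side): ZK session timeout for Solr, in milliseconds
ZK_CLIENT_TIMEOUT="30000"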

The third, is there some more fault-tolerant ZK connection string than
listing out all three ZK nodes? I *think*, and please correct me if I'm
wrong, this will require all three ZK nodes to be reporting as healthy for
the SOLR node to consider the connection healthy. Is that true? Maybe
including all three does mean only a 2/3 quorum need be maintained. If the
connection health is based on quorum, is moving a busy cluster to 5 nodes
for a 3/5 quorum desirable? Any other recommendations to make this
healthier?

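For what it's worth, the usual connection-string form (hostnames as in the
log above, with an optional chroot suffix) lists every ensemble member, and
the client connects to any one reachable server from the list:

zookeeper-1.dns.domain.foo:1234,zookeeper-2.dns.domain.foo:1234,zookeeper-3.dns.domain.foo:1234/solr
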
Fourth, is any of the fault tolerance in this area improved in later
SOLR/Zookeeper versions?

Finally, this looks to be connected to an existing Jira issue. The issue
doesn't appear to be very actionable unfortunately, but it appears people
have wondered about this before. Are there any plans in the works to allow
for recovery? We found our ZK cluster was healthy and restarting the Solr
service fixed the issue, so it seems a reasonable feature to add
auto-recovery on the SOLR side when the ZK cluster returns to healthy.
Would you agree?

Thanks for your help!!
Stephen


Not able to perform case insensitive sorting with SortableTextField

2018-12-12 Thread Ritesh Kumar
Hello Team,

I am facing a problem with sorting on a field with the type mentioned below
(the fieldType XML was stripped by the mail archive).

The field may contain numerical as well as alphabetical values. My
Solr version is 7.5.0.

If I sort by "fieldName desc", this field sorts lowercase values first,
followed by uppercase and then digits.

I want to be able to sort values irrespective of case, and I also don't
want to change the class of the field (SortableTextField), as this type is
also used to perform a case-insensitive search on this very field, which
is working fine.

I supposed LowerCaseFilterFactory would be enough for this scenario. Is
there anything I am missing here?

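For reference, a minimal sketch of the kind of definition in question (the
original XML was stripped by the archive; the name and analyzer details
here are assumptions):

<fieldType name="text_sort" class="solr.SortableTextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
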
Best,
Ritesh Kumar


is Solr version 6.6.2 supported by SolrCloud on AWS EC2

2018-12-12 Thread abhishek_itengg
Hi,

I am reviewing the ref guide for Solr 7.2 for SolrCloud on AWS EC2.

https://lucene.apache.org/solr/guide/7_2/aws-solrcloud-tutorial.html (Solr guide 7.2)

The ask is to implement SolrCloud on AWS for the Sitecore 9.0 application,
which only supports Solr 6.6.2.

Questions:-
(I) Does SolrCloud on AWS EC2 support version 6.6.2?
(II) Where can I calculate the cost accrued for a 3-node ZooKeeper ensemble,
one in each availability zone?

Please pardon my ignorance here; I am new to the AWS world.

Thanks,
Abhishek






Re: Solr recovery issue in 7.5

2018-12-12 Thread shamik
Erick,

   Thanks for your input. All our fields (for facet, group & sort) have had
docValues enabled since 6.5. That includes the id field. Here's the field
cache entry:

CACHE.core.fieldCache.entries_count:0
CACHE.core.fieldCache.total_size:  0 bytes

Based on whatever I've seen so far, I think zookeeper is not the culprit
here. All the nodes, including zookeeper, were set up recently. They are all
inside the same VPC within the same AZ. The instances talk to each other
through a dedicated network. Both the zookeeper and Solr instances have SSDs.

Here's what's happening based on my observation. Whenever an instance is
restarted, it initiates a PREPRECOVERY command to its leader or a
different node in the other shard. The node which receives the recovery
request is the one which is due to go down next. Within a few minutes, the
heap size (old gen) reaches the max allocated heap, thus stalling the
process. I guess due to this, it fails to send the credentials for a
zookeeper session within the stipulated timeframe, which is why zookeeper
terminates the session. Here's the startup log:

2018-12-13 04:02:34.910 INFO 
(recoveryExecutor-4-thread-1-processing-n:x.x.193.244:8983_solr
x:knowledge_shard2_replica_n4 c:knowledge s:shard2 r:core_node9)
[c:knowledge s:shard2 r:core_node9 x:knowledge_shard2_replica_n4]
o.a.s.c.RecoveryStrategy Sending prep recovery command to
[http://x.x.240.225:8983/solr]; [WaitForState:
action=PREPRECOVERY&core=knowledge_shard2_replica_n6&nodeName=x.x.x.244:8983_solr&coreNodeName=core_node9&state=recovering&checkLive=true&onlyIfLeader=true&onlyIfLeaderActive=true]

The node sends a recovery command to its replica, which immediately causes
the G1 Old Gen JVM pool to reach the max heap size. Please see the
screenshot below, which shows the sudden jump in heap size. We've made sure
that the indexing process is completely switched off at this point, so
there's no commit happening.

JVM Pool --> https://www.dropbox.com/s/5s0igznhrol6c05/jvm_pool_1.png?dl=0

I'm totally puzzled by this weird behavior; I've never seen something like
this before. Could the G1GC settings be contributing to this issue?

From the zookeeper log:

2018-12-13 03:47:27,905 [myid:1] - INFO 
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@215] -
Accepted socket connection from /10.0.0.160:58376
2018-12-13 03:47:27,905 [myid:1] - INFO 
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@215] -
Accepted socket connection from /10.0.0.160:58378
2018-12-13 03:47:27,905 [myid:1] - INFO 
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@938] - Client
attempting to establish new session at /10.0.0.160:58376
2018-12-13 03:47:27,905 [myid:1] - WARN 
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@376] - Unable to
read additional data from client sessionid 0x0, likely client has closed
socket
2018-12-13 03:47:27,905 [myid:1] - INFO 
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1040] - Closed
socket connection for client /10.0.0.160:58378 (no session established for
client)
2018-12-13 03:47:27,907 [myid:1] - INFO 
[CommitProcessor:1:ZooKeeperServer@683] - Established session
0x100c46d01440072 with negotiated timeout 1 for client /10.0.0.160:58376
2018-12-13 03:47:39,386 [myid:1] - INFO 
[CommitProcessor:1:NIOServerCnxn@1040] - Closed socket connection for client
/10.0.0.160:58376 which had sessionid 0x100c46d01440072





Re: Solr recovery issue in 7.5

2018-12-12 Thread Erick Erickson
Whenever I see "memory consumption changed", my first question is: are
any fields that sort, group or facet set with docValues="false"? I
consider this unlikely, since one of the recent changes was to default to
"true" for primitive types, but it's worth checking. The easiest way to
check (and this does not have to be on a node that's having problems; any
node will do that's been serving queries for a while) is to go into the
admin UI>>select_a_core>>plugins/stats>>cache>>fieldCache. Are there any
entries there?

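The same check is available over HTTP via the Metrics API (a sketch; host
and port are placeholders):

curl 'http://localhost:8983/solr/admin/metrics?group=core&prefix=CACHE.core.fieldCache'

An empty entry set here means nothing is being uninverted onto the heap at
query time.
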
This part of your logs is very suspicious:

2018-12-12 19:57:13.730 WARN
(recoveryExecutor-4-thread-3-processing-n:x.x.23.51:8983_solr
x:knowledge_shard1_replica_n1 c:knowledge s:shard1 r:core_node7)
[c:knowledge s:shard1 r:core_node7 x:knowledge_shard1_replica_n1]
o.a.s.c.ZkController Unable to read
/collections/knowledge/leader_initiated_recovery/shard1/core_node7 due
to: org.apache.zookeeper.KeeperException$SessionExpiredException:
KeeperErrorCode = Session expired for
/collections/knowledge/leader_initiated_recovery/shard1/core_node7

Solr 7.5 uses ZK as part of knowing whether shards are up to date; there
was a presentation at Activate by Dat Cao Manh about the details. I think
if your ZK connection is timing out, that may be the underlying cause.
With a 60 second timeout it would be odd for this to be just a timeout
issue, so that's puzzling.

Have you done anything odd with ZooKeeper? Like replace nodes or the
like? ZK had an issue at one point where resolving the DNS names for
reconfigured ZK nodes didn't work well. If you've restarted all your
instances (zk first of course) that wouldn't be relevant.

Just a few random thoughts...

Best,
Erick
On Wed, Dec 12, 2018 at 2:09 PM Shamik Bandopadhyay  wrote:
>
> Hi,
>
>   We recently upgraded Solr from 6.5 to 7.5. We are observing some weird
> issues in terms of recovery and memory usage. We've a cluster of 6 physical
> nodes with 2 shards having two replica each. 7.5 seemed to have a higher
> memory consumption where the average heap utilization hovers around 18 gb.
> Couple of days back, one of the replicas went down as heap (30 gb) was
> exhausted. Upon restart, the instance came back quickly but then started a
> spiral effect where one of the nodes in the cluster kept going down one
> after the other. So at any point of time, there were 5 instances available
> instead of 6. Every time we would bring the bad instance back up, it would
> be functional immediately but the shard it was recovering from will
> eventually (within minutes) go down . This cycle (of restarting instances)
> went for almost an hour before all the nodes were finally started active.
> It again occurred today where we are observing similar behavior. We even
> stopped the indexing pipeline to make sure that recovery is minimal. But it
> didn't make any difference, the error is consistent, the affected node goes
> into a recovery (not sure why) and encounters session time out in the
> process.
>
> 2018-12-12 19:59:16.813 ERROR
> (recoveryExecutor-4-thread-3-processing-n:x.x.23.51:8983_solr
> x:knowledge_shard1_replica_n1 c:knowledge s:shard1 r:core_node7)
> [c:knowledge s:shard1 r:core_node7 x:knowledge_shard1_replica_n1]
> o.a.s.c.RecoveryStrategy Error while trying to recover.
> core=knowledge_shard1_replica_n1:org.apache.zookeeper.KeeperException$SessionExpiredException:
> KeeperErrorCode = Session expired for /overseer/queue/qn-
>  at org.apache.zookeeper.KeeperException.create(KeeperException.java:130)
>  at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
>  at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:786)
>  at
> org.apache.solr.common.cloud.SolrZkClient.lambda$create$7(SolrZkClient.java:398)
>  at
> org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)
>  at org.apache.solr.common.cloud.SolrZkClient.create(SolrZkClient.java:398)
>  at
> org.apache.solr.cloud.ZkDistributedQueue.offer(ZkDistributedQueue.java:321)
>  at org.apache.solr.cloud.ZkController.publish(ZkController.java:1548)
>  at org.apache.solr.cloud.ZkController.publish(ZkController.java:1436)
>  at
> org.apache.solr.cloud.RecoveryStrategy.doSyncOrReplicateRecovery(RecoveryStrategy.java:549)
>  at
> org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:310)
>  at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:294)
>  at
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
>  at
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)
>  at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>  at
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
>  at
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
>  at
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
>  at java.base/

Solr recovery issue in 7.5

2018-12-12 Thread Shamik Bandopadhyay
Hi,

  We recently upgraded Solr from 6.5 to 7.5. We are observing some weird
issues in terms of recovery and memory usage. We have a cluster of 6 physical
nodes with 2 shards having two replicas each. 7.5 seems to have a higher
memory consumption, where the average heap utilization hovers around 18 GB.
A couple of days back, one of the replicas went down as the heap (30 GB) was
exhausted. Upon restart, the instance came back quickly but then started a
spiral effect where one of the nodes in the cluster kept going down one
after the other. So at any point of time, there were 5 instances available
instead of 6. Every time we would bring the bad instance back up, it would
be functional immediately, but the shard it was recovering from would
eventually (within minutes) go down. This cycle (of restarting instances)
went on for almost an hour before all the nodes finally became active.
It occurred again today, where we are observing similar behavior. We even
stopped the indexing pipeline to make sure that recovery is minimal. But it
didn't make any difference; the error is consistent: the affected node goes
into recovery (not sure why) and encounters a session timeout in the
process.

2018-12-12 19:59:16.813 ERROR
(recoveryExecutor-4-thread-3-processing-n:x.x.23.51:8983_solr
x:knowledge_shard1_replica_n1 c:knowledge s:shard1 r:core_node7)
[c:knowledge s:shard1 r:core_node7 x:knowledge_shard1_replica_n1]
o.a.s.c.RecoveryStrategy Error while trying to recover.
core=knowledge_shard1_replica_n1:org.apache.zookeeper.KeeperException$SessionExpiredException:
KeeperErrorCode = Session expired for /overseer/queue/qn-
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:130)
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
 at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:786)
 at
org.apache.solr.common.cloud.SolrZkClient.lambda$create$7(SolrZkClient.java:398)
 at
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)
 at org.apache.solr.common.cloud.SolrZkClient.create(SolrZkClient.java:398)
 at
org.apache.solr.cloud.ZkDistributedQueue.offer(ZkDistributedQueue.java:321)
 at org.apache.solr.cloud.ZkController.publish(ZkController.java:1548)
 at org.apache.solr.cloud.ZkController.publish(ZkController.java:1436)
 at
org.apache.solr.cloud.RecoveryStrategy.doSyncOrReplicateRecovery(RecoveryStrategy.java:549)
 at
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:310)
 at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:294)
 at
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
 at
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)
 at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
 at
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
 at
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
 at
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
 at java.base/java.lang.Thread.run(Thread.java:844)

I've added the dropbox links to the relevant error from solr log, solr gc
log, top screenshot and heap usage from spm monitor.

Solr log --> https://www.dropbox.com/s/als26je0dgrp10r/solr.log?dl=0

Solr gc log 1 -->
https://www.dropbox.com/s/m0ikb3kc9enme4f/solr_gc.log.0?dl=0

Solr gc log 2 -->
https://www.dropbox.com/s/jfs7wcjyult5ud8/solr_gc.log.1?dl=0

Top --> https://www.dropbox.com/s/x6f0zwqfbabybd0/top.png?dl=0

SPM monitor screenshots:

JVM pool --> https://www.dropbox.com/s/nbko83eflp8y2tp/jvm_pool.png?dl=0

GC screenshot --> https://www.dropbox.com/s/6zofkvgfknxwjgd/gc.png?dl=0

Solr Cache --> https://www.dropbox.com/s/o6zsxwal6pzspve/cache.png?dl=0

Here are the relevant entries from solr startup script:

SOLR_JAVA_MEM="-Xms40g -Xmx40g"

GC_LOG_OPTS='-Xlog:gc*'

GC_TUNE="-XX:+UseG1GC \
-XX:+PerfDisableSharedMem \
-XX:+ParallelRefProcEnabled \
-XX:G1HeapRegionSize=8m \
-XX:MaxGCPauseMillis=250 \
-XX:InitiatingHeapOccupancyPercent=75 \
-XX:+UseLargePages \
-XX:+AggressiveOpts \
-XX:OnOutOfMemoryError=/mnt/ebs2/solrhome/bin/oom_solr.sh"

ZK_CLIENT_TIMEOUT="6"

SOLR_WAIT_FOR_ZK="180"

SOLR_OPTS="$SOLR_OPTS -Dsolr.autoSoftCommit.maxTime=12"
SOLR_OPTS="$SOLR_OPTS -Dsolr.autoCommit.maxTime=60"
SOLR_OPTS="$SOLR_OPTS -Djute.maxbuffer=0x20"
SOLR_OPTS="$SOLR_OPTS -Dpkiauth.ttl=12"

All our nodes are running on AWS with 16 vCPUs and 128 GB of RAM. We started
with a 30 GB heap as the average utilization was between 18-19 GB. For this
recovery issue, we tried bumping it up to 40 GB, but it didn't make any
difference. We are using JDK 9.0.4+11 with 6 Solr nodes and a 3-node
ZooKeeper (3.4.10) quorum.

Our index has close to 10 million documents with a 55 GB index size. Not sure
if it's relevant, but we have noticed that filter cache utilization has
drastically reduced (0.17) while document cache

Re: Infrastructure required for SOLR 7.5

2018-12-12 Thread Toke Eskildsen
Priya Krishnasamy  wrote:
> Can anyone help me with the infrastructure needed for SOLR 7.5 ?

Your question is very vague. The basic requirement is just Java 8
https://lucene.apache.org/solr/7_5_0//SYSTEM_REQUIREMENTS.html

The reference guide has some details on setup
http://lucene.apache.org/solr/guide/7_5/taking-solr-to-production.html

If you are thinking about hardware, then the requirements depend on what you
are trying to do. It can be a discarded laptop or 10 of the beefiest AWS
machines; we have no way of guessing.


Try and describe in more detail what you want to build and what you are unsure 
of.

- Toke Eskildsen


Re: Keyword field with tabs in Solr 7.4

2018-12-12 Thread Erick Erickson
Good to hear. Actually, though, this is a little bit of an XY problem.
I think a better solution would be to use
PatternReplaceCharFilterFactory on the field to replace all sequences
of whitespace characters with a single space at both query and index
time.

That charfilter replaces whatever you tell it to _before_
tokenization. Assuming, of course, that this fits your problem ;)
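
A sketch of what that could look like in the analyzer (the surrounding
fieldType is whatever you already use; the charFilter goes at the top of
both the index-time and query-time analyzers):

<charFilter class="solr.PatternReplaceCharFilterFactory"
            pattern="\s+" replacement=" "/>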

Best,
Erick
On Wed, Dec 12, 2018 at 12:41 AM Michael Aleythe, Sternwald
 wrote:
>
> Hey Erik,
>
> thanks a lot for your suggestion. It led me on the right path. What actually
> did the trick was sending the tab as unicode:
> IPTC_2_080_KY:"\u0009bus\u0009bahn" matched perfectly.
>
> Best,
> Michael
>
> -Ursprüngliche Nachricht-
> Von: Erick Erickson 
> Gesendet: Dienstag, 11. Dezember 2018 18:45
> An: solr-user 
> Betreff: Re: Keyword field with tabs in Solr 7.4
>
> You are probably in "url-encoding hell". Add &debug=query to your search and 
> check the parsed query returned to see what Solr actually sees. Try 
> url-encoding the backslash *%5C" maybe?
>
> Best,
> Erick
> On Tue, Dec 11, 2018 at 1:40 AM Michael Aleythe, Sternwald 
>  wrote:
> >
> > Hey everybody,
> >
> > I have a Solr keyword field defined as follows (the fieldType XML was
> > stripped by the mail archive; only this fragment of the field definition
> > survives):
> >
> > <field ... stored="true" termVectors="false" multiValued="false" />
> >
> > Some documents have tabs (\t) indexed in this field, e.g. 
> > IPTC_2_080_KY:"\tbus\tbahn"
> >
> > How can I query this content? I tried "\tbus\tbahn",
> > \\tbus\\tbahn and " bus bahn" but nothing matches. Does 
> > anybody know what to do?
> >
> > Regards
> > Michael


Re: disable auto-commit

2018-12-12 Thread Erick Erickson
The answer to your question is to set the interval to -1.

However, that's generally a really bad idea. Why do you think
this will help with OOM errors? _Querying_ usually is the place OOMs
are generated, especially if you do things like facet on very
high-cardinality fields and/or do _not_ have docValues enabled for
fields you facet, group, or sort on.

If you do disable hard commits, your TLOG sizes will grow without
bound until your entire indexing run is complete. Worse, if the TLOG
replays due to abnormal restart, it would try to re-index everything.
Hard commits with openSearcher=false are recommended.

Best,
Erick
On Wed, Dec 12, 2018 at 4:44 AM Danilo Tomasoni  wrote:
>
> I want to disable even that.
>
> I saw here
>
> https://lucene.apache.org/solr/guide/6_6/updatehandlers-in-solrconfig.html
>
>
> that probably to achieve what I want I just need to comment out the
> autoCommit tag.. correct?
>
> What do you think about disabling autocommit/autosoftcommit?
>
> it can lower the system requirements while indexing?
>
>
> What about transaction logs? they can be disabled?
>
> When solr crashes I always reimport from scratch because I don't expect
> that the documents accepted by solr between the last hard commit and the
> crash will be saved somewhere.
>
> But this article
>
> https://lucidworks.com/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/
>
> says that solr is capable of restoring documents even if they weren't
> committed, is it still correct?
>
>
> Thank you
>
> Danilo
>
>
> On 12/12/18 13:33, Mikhail Khludnev wrote:
> > What about autoSoftCommit ?
> >
> > On Wed, Dec 12, 2018 at 3:24 PM Danilo Tomasoni  wrote:
> >
> >> Hello, I'm experiencing oom while indexing a big amount of documents.
> >>
> >> The main idea to avoid OOM is to avoid commit (just one big commit at
> >> the end).
> >>
> >> Is this a correct idea?
> >>
> >> How can I disable autocommit?
> >>
> >> I've set
> >>
> >> <autoCommit>
> >>   <maxTime>${solr.autoCommit.maxTime:-1}</maxTime>
> >>   <openSearcher>false</openSearcher>
> >> </autoCommit>
> >>
> >> in solrconfig.xml
> >>
> >> but it's not sufficient, while indexing I still see documents.
> >>
> >> Thank you
> >>
> >> Danilo
> >>
> >>
> >> --
> >> Danilo Tomasoni
> >> COSBI
> >>
> >>
> >>
> --
> Danilo Tomasoni
> COSBI
>
>


Help CJK OOM Errors

2018-12-12 Thread Webster Homer
Recently we had a few Japanese queries that killed our production SolrCloud
instance. Our schemas support multiple languages, with language-specific
search fields.

This query and similar ones caused OOM errors in Solr:
モノクローナル抗ニコチン性アセチルコリンレセプター(??7サブユニット)抗体 マウス宿主抗体

The query doesn’t match anything

We are running Solr 7.2 in Google cloud. The Solr cloud has 4 solr nodes (3 
zookeepers on their own nodes) holding 18 collections. The usage on most of the 
collections is currently fairly light. One of them gets a lot of traffic. This 
has 500,000 documents of which 25,000 contain some Japanese fields.
We did a lot of tests, but I think we used historical search data, which
tends to have short queries. A 44-character CJK string generates ~80 tokens.

I ran the query against a single Japanese field and it took ~30 seconds to come 
back. Removing the ?? from it made no significant difference in performance.
I’ve run other Japanese queries of a similar length and they return in ~200 
msecs.

Our solr cloud usually performs quite well, but in this case it was horrible. 
The bigram filter creates a lot of tokens, but this seems to be a fairly 
standard approach for Chinese and Japanese searches.
How can I debug what is going on with this query?
How resource intensive will searches against these fields be?
How do we estimate the additional memory they seem to require?

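As a starting point for the debugging question, the debug parameter breaks
down prepare/process time per search component (a sketch; the collection
name and query are placeholders):

curl 'http://localhost:8983/solr/mycollection/select?q=QUERY&debug=timing'
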
We have about a dozen Japanese search fields. These all have this CJKBigram
field type (the fieldType XML was stripped by the mail archive).

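For reference, the stock Solr example schema defines its CJK bigram type
roughly as follows (a sketch; our actual definition was stripped by the
archive and may differ):

<fieldType name="text_cjk" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- normalize halfwidth/fullwidth variants -->
    <filter class="solr.CJKWidthFilterFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- emit overlapping bigrams for CJK characters -->
    <filter class="solr.CJKBigramFilterFactory"/>
  </analyzer>
</fieldType>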


Re: Open file limit warning when starting solr

2018-12-12 Thread Daniel Carrasco
Hello,

Strange... the Solr user is created during the installation... What user is
your Solr running as?

> cat /etc/init.d/solr |grep -i "RUNAS="
>

Have you followed all the info in the link I sent? They also talk
about this:

You also need to edit /etc/pam.d/common-session* and add the following line
to the end:

session required pam_limits.so

I've not done this on my Debian, but maybe Ubuntu needs it.

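To verify the limit actually took effect (a sketch, assuming the solr user
exists; it may have no login shell, hence the explicit bash -c):

sudo -u solr bash -c 'ulimit -n'
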
Greetings!

On Wed, Dec 12, 2018 at 16:30, Armon, Rony () wrote:

> Probably not solr:
> rony@rony-VirtualBox:~$ sudo su -
> root@rony-VirtualBox:~# su solr -
> No passwd entry for user 'solr'
>
> I tried the solution he suggested placing the following in limits.conf
> root soft nofile 65000
> root hard nofile 65000
>
> And then with the asterix:
> *   soft nofile 65000
> *  hard nofile 65000
>
> The result is the same
>
> -Original Message-
> From: Daniel Carrasco [mailto:d.carra...@i2tic.com]
> Sent: Wednesday, December 12, 2018 5:00 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Open file limit warning when starting solr
>
> Hello,
>
> I mean change to solr user using su as sudo. For your system will be
> something like:
>
> $ sudo su -
> pasword
> # su solr -
> $ ulimit -n
> ...a file limit number...
>
> Your file limit in Ubuntu is fine, so looks like a problem with file limit
> for that user, that's why I ask you if the user you've using for run the
> Solr daemon is "solr" instead "root".
>
> Take a look at this:
>
> https://askubuntu.com/questions/162229/how-do-i-increase-the-open-files-limit-for-a-non-root-user
>
> He is changing the limit to 4096 for all users using an asterisk, but you
> can put solr instead asterisk to change the limit to solr user only, just
> like my first message.
>
> Greetings!
>
>
>
> El mié., 12 dic. 2018 a las 15:47, Armon, Rony ()
> escribió:
>
> > Tried it as well...
> >
> > rony@rony-VirtualBox:~$ sudo su -
> > [sudo] password for rony:
> > root@rony-VirtualBox:~# sysctl -a|grep -i fs.file-max fs.file-max =
> > 810202
> >
> > -Original Message-
> > From: Daniel Carrasco [mailto:d.carra...@i2tic.com]
> > Sent: Wednesday, December 12, 2018 4:04 PM
> > To: solr-user@lucene.apache.org
> > Subject: Re: Open file limit warning when starting solr
> >
> > Hello,
> >
> > The *su solr* command is important, because you change to Solr user
> > before check the limits again, then it shows its limits. Are you
> > running the daemon as solr user?
> >
> > Other command to check is:
> >
> > > # sysctl -a|grep -i fs.file-max
> > > fs.file-max = 6167826
> >
> >
> > If is low then you may increase it.
> >
> > Greetings!
> >
> > El mié., 12 dic. 2018 a las 14:44, Armon, Rony ()
> > escribió:
> >
> > > rony@rony-VirtualBox:~$ ulimit -n
> > > 1024
> > > rony@rony-VirtualBox:~/solr-7.5.0$ ulimit -n
> > > 1024
> > >
> > > -Original Message-
> > > From: Daniel Carrasco [mailto:d.carra...@i2tic.com]
> > > Sent: Wednesday, December 12, 2018 3:31 PM
> > > To: solr-user@lucene.apache.org
> > > Subject: Re: Open file limit warning when starting solr
> > >
> > > Hello,
> > >
> > > What output you get with this commands?:
> > >
> > > > root@solr-temp01:/# ulimit -n
> > > > 1024
> > > > root@solr-temp01:/# su solr
> > > > solr@solr-temp01:/$ ulimit -n
> > > > 65000
> > >
> > >
> > > Greetings!
> > >
> > > El mié., 12 dic. 2018 a las 12:53, Armon, Rony ()
> > > escribió:
> > >
> > > > Hi Daniel and thanks for the prompt reply. I tried that but I'm
> > > > still getting the file limit warning.
> > > >
> > > > -Original Message-
> > > > From: Daniel Carrasco [mailto:d.carra...@i2tic.com]
> > > > Sent: Wednesday, December 12, 2018 12:14 PM
> > > > To: solr-user@lucene.apache.org
> > > > Subject: Re: Open file limit warning when starting solr
> > > >
> > > > Hello,
> > > >
> > > > Try creating a file in /etc/security/limits.d/solr.conf with this:
> > > > solr softnofile  65000
> > > > solr hardnofile  65000
> > > > solr softnproc   65000
> > > > solr hardnproc   65000
> > > >
> > > > This worked for me on Debian 9.
> > > >
> > > > Greetings!
> > > >
> > > > El mié., 12 dic. 2018 a las 11:09, Armon, Rony ()
> > > > escribió:
> > > >
> > > > > Hello, When launching solr (Ubuntu 16.04) I'm getting:
> > > > > *   [WARN] *** Your open file limit is currently 1024.
> > > > >It should be set to 65000 to avoid
> > > > > operational disruption.
> > > > >If you no longer wish to see this warning,
> > > > > set SOLR_ULIMIT_CHECKS to false in your profile or
> > > > > solr.in.sh

RE: Open file limit warning when starting solr

2018-12-12 Thread Armon, Rony
Probably not solr:
rony@rony-VirtualBox:~$ sudo su -
root@rony-VirtualBox:~# su solr -
No passwd entry for user 'solr'

I tried the solution he suggested placing the following in limits.conf
root soft nofile 65000
root hard nofile 65000

And then with the asterisk:
*   soft nofile 65000
*  hard nofile 65000

The result is the same

-Original Message-
From: Daniel Carrasco [mailto:d.carra...@i2tic.com] 
Sent: Wednesday, December 12, 2018 5:00 PM
To: solr-user@lucene.apache.org
Subject: Re: Open file limit warning when starting solr

Hello,

I mean change to solr user using su as sudo. For your system will be something 
like:

$ sudo su -
pasword
# su solr -
$ ulimit -n
...a file limit number...

Your file limit in Ubuntu is fine, so looks like a problem with file limit for 
that user, that's why I ask you if the user you've using for run the Solr 
daemon is "solr" instead "root".

Take a look at this:
https://askubuntu.com/questions/162229/how-do-i-increase-the-open-files-limit-for-a-non-root-user

He is changing the limit to 4096 for all users using an asterisk, but you can 
put solr instead asterisk to change the limit to solr user only, just like my 
first message.

Greetings!



On Wed, Dec 12, 2018 at 15:47, Armon, Rony () wrote:

> Tried it as well...
>
> rony@rony-VirtualBox:~$ sudo su -
> [sudo] password for rony:
> root@rony-VirtualBox:~# sysctl -a|grep -i fs.file-max fs.file-max = 
> 810202
>
> -Original Message-
> From: Daniel Carrasco [mailto:d.carra...@i2tic.com]
> Sent: Wednesday, December 12, 2018 4:04 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Open file limit warning when starting solr
>
> Hello,
>
> The *su solr* command is important, because you change to Solr user 
> before check the limits again, then it shows its limits. Are you 
> running the daemon as solr user?
>
> Other command to check is:
>
> > # sysctl -a|grep -i fs.file-max
> > fs.file-max = 6167826
>
>
> If is low then you may increase it.
>
> Greetings!
>
> El mié., 12 dic. 2018 a las 14:44, Armon, Rony ()
> escribió:
>
> > rony@rony-VirtualBox:~$ ulimit -n
> > 1024
> > rony@rony-VirtualBox:~/solr-7.5.0$ ulimit -n
> > 1024
> >
> > -Original Message-
> > From: Daniel Carrasco [mailto:d.carra...@i2tic.com]
> > Sent: Wednesday, December 12, 2018 3:31 PM
> > To: solr-user@lucene.apache.org
> > Subject: Re: Open file limit warning when starting solr
> >
> > Hello,
> >
> > What output you get with this commands?:
> >
> > > root@solr-temp01:/# ulimit -n
> > > 1024
> > > root@solr-temp01:/# su solr
> > > solr@solr-temp01:/$ ulimit -n
> > > 65000
> >
> >
> > Greetings!
> >
> > El mié., 12 dic. 2018 a las 12:53, Armon, Rony ()
> > escribió:
> >
> > > Hi Daniel and thanks for the prompt reply. I tried that but I'm 
> > > still getting the file limit warning.
> > >
> > > -Original Message-
> > > From: Daniel Carrasco [mailto:d.carra...@i2tic.com]
> > > Sent: Wednesday, December 12, 2018 12:14 PM
> > > To: solr-user@lucene.apache.org
> > > Subject: Re: Open file limit warning when starting solr
> > >
> > > Hello,
> > >
> > > Try creating a file in /etc/security/limits.d/solr.conf with this:
> > > solr softnofile  65000
> > > solr hardnofile  65000
> > > solr softnproc   65000
> > > solr hardnproc   65000
> > >
> > > This worked for me on Debian 9.
> > >
> > > Greetings!
> > >
> > > El mié., 12 dic. 2018 a las 11:09, Armon, Rony ()
> > > escribió:
> > >
> > > > Hello, When launching solr (Ubuntu 16.04) I'm getting:
> > > > *   [WARN] *** Your open file limit is currently 1024.
> > > >It should be set to 65000 to avoid 
> > > > operational disruption.
> > > >If you no longer wish to see this warning, 
> > > > set SOLR_ULIMIT_CHECKS to false in your profile or
> > > > solr.in.sh
> > > > *   [WARN] ***  Your Max Processes Limit is currently 15058.
> > > >  It should be set to 65000 to avoid operational disruption.
> > > >  If you no longer wish to see this warning, set 
> > > > SOLR_ULIMIT_CHECKS to false in your profile or 
> > > > solr.in.sh

Re: Open file limit warning when starting solr

2018-12-12 Thread Daniel Carrasco
Hello,

I mean change to the solr user using su via sudo. For your system it will be
something like:

$ sudo su -
password
# su solr -
$ ulimit -n
...a file limit number...

Your file limit in Ubuntu is fine, so it looks like a problem with the file
limit for that user; that's why I asked whether the user you're using to run
the Solr daemon is "solr" instead of "root".

Take a look at this:
https://askubuntu.com/questions/162229/how-do-i-increase-the-open-files-limit-for-a-non-root-user

He is changing the limit to 4096 for all users using an asterisk, but you
can put solr instead of the asterisk to change the limit for the solr user
only, just like in my first message.

Greetings!



On Wed, Dec 12, 2018 at 15:47, Armon, Rony () wrote:

> Tried it as well...
>
> rony@rony-VirtualBox:~$ sudo su -
> [sudo] password for rony:
> root@rony-VirtualBox:~# sysctl -a|grep -i fs.file-max
> fs.file-max = 810202
>
> -Original Message-
> From: Daniel Carrasco [mailto:d.carra...@i2tic.com]
> Sent: Wednesday, December 12, 2018 4:04 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Open file limit warning when starting solr
>
> Hello,
>
> The *su solr* command is important, because you change to Solr user before
> check the limits again, then it shows its limits. Are you running the
> daemon as solr user?
>
> Other command to check is:
>
> > # sysctl -a|grep -i fs.file-max
> > fs.file-max = 6167826
>
>
> If is low then you may increase it.
>
> Greetings!
>
> El mié., 12 dic. 2018 a las 14:44, Armon, Rony ()
> escribió:
>
> > rony@rony-VirtualBox:~$ ulimit -n
> > 1024
> > rony@rony-VirtualBox:~/solr-7.5.0$ ulimit -n
> > 1024
> >
> > -Original Message-
> > From: Daniel Carrasco [mailto:d.carra...@i2tic.com]
> > Sent: Wednesday, December 12, 2018 3:31 PM
> > To: solr-user@lucene.apache.org
> > Subject: Re: Open file limit warning when starting solr
> >
> > Hello,
> >
> > What output you get with this commands?:
> >
> > > root@solr-temp01:/# ulimit -n
> > > 1024
> > > root@solr-temp01:/# su solr
> > > solr@solr-temp01:/$ ulimit -n
> > > 65000
> >
> >
> > Greetings!
> >
> > El mié., 12 dic. 2018 a las 12:53, Armon, Rony ()
> > escribió:
> >
> > > Hi Daniel and thanks for the prompt reply. I tried that but I'm
> > > still getting the file limit warning.
> > >
> > > -Original Message-
> > > From: Daniel Carrasco [mailto:d.carra...@i2tic.com]
> > > Sent: Wednesday, December 12, 2018 12:14 PM
> > > To: solr-user@lucene.apache.org
> > > Subject: Re: Open file limit warning when starting solr
> > >
> > > Hello,
> > >
> > > Try creating a file in /etc/security/limits.d/solr.conf with this:
> > > solr softnofile  65000
> > > solr hardnofile  65000
> > > solr softnproc   65000
> > > solr hardnproc   65000
> > >
> > > This worked for me on Debian 9.
> > >
> > > Greetings!
> > >
> > > El mié., 12 dic. 2018 a las 11:09, Armon, Rony ()
> > > escribió:
> > >
> > > > Hello, When launching solr (Ubuntu 16.04) I'm getting:
> > > > *   [WARN] *** Your open file limit is currently 1024.
> > > >It should be set to 65000 to avoid operational
> > > > disruption.
> > > >If you no longer wish to see this warning, set
> > > > SOLR_ULIMIT_CHECKS to false in your profile or
> > > solr.in.sh
> > > > *   [WARN] ***  Your Max Processes Limit is currently 15058.
> > > >  It should be set to 65000 to avoid operational disruption.
> > > >  If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS
> > > > to false in your profile or
> > > solr.in.sh
> > > >
> > > > This appears to be related to a known bug in Ubuntu
> > > > <https://issues.apache.org/jira/browse/SOLR-13063>
> > > > https://blog.jayway.com/2012/02/11/how-to-really-fix-the-too-many-open-files-problem-for-tomcat-in-ubuntu/

RE: Open file limit warning when starting solr

2018-12-12 Thread Armon, Rony
Tried it as well...

rony@rony-VirtualBox:~$ sudo su -
[sudo] password for rony: 
root@rony-VirtualBox:~# sysctl -a|grep -i fs.file-max
fs.file-max = 810202

-Original Message-
From: Daniel Carrasco [mailto:d.carra...@i2tic.com] 
Sent: Wednesday, December 12, 2018 4:04 PM
To: solr-user@lucene.apache.org
Subject: Re: Open file limit warning when starting solr

Hello,

The *su solr* command is important, because you change to Solr user before 
check the limits again, then it shows its limits. Are you running the daemon as 
solr user?

Other command to check is:

> # sysctl -a|grep -i fs.file-max
> fs.file-max = 6167826


If is low then you may increase it.

Greetings!

On Wed, Dec 12, 2018 at 14:44, Armon, Rony () wrote:

> rony@rony-VirtualBox:~$ ulimit -n
> 1024
> rony@rony-VirtualBox:~/solr-7.5.0$ ulimit -n
> 1024
>
> -Original Message-
> From: Daniel Carrasco [mailto:d.carra...@i2tic.com]
> Sent: Wednesday, December 12, 2018 3:31 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Open file limit warning when starting solr
>
> Hello,
>
> What output you get with this commands?:
>
> > root@solr-temp01:/# ulimit -n
> > 1024
> > root@solr-temp01:/# su solr
> > solr@solr-temp01:/$ ulimit -n
> > 65000
>
>
> Greetings!
>
> El mié., 12 dic. 2018 a las 12:53, Armon, Rony ()
> escribió:
>
> > Hi Daniel and thanks for the prompt reply. I tried that but I'm 
> > still getting the file limit warning.
> >
> > -Original Message-
> > From: Daniel Carrasco [mailto:d.carra...@i2tic.com]
> > Sent: Wednesday, December 12, 2018 12:14 PM
> > To: solr-user@lucene.apache.org
> > Subject: Re: Open file limit warning when starting solr
> >
> > Hello,
> >
> > Try creating a file in /etc/security/limits.d/solr.conf with this:
> > solr softnofile  65000
> > solr hardnofile  65000
> > solr softnproc   65000
> > solr hardnproc   65000
> >
> > This worked for me on Debian 9.
> >
> > Greetings!
> >
> > El mié., 12 dic. 2018 a las 11:09, Armon, Rony ()
> > escribió:
> >
> > > Hello, When launching solr (Ubuntu 16.04) I'm getting:
> > > *   [WARN] *** Your open file limit is currently 1024.
> > >It should be set to 65000 to avoid operational 
> > > disruption.
> > >If you no longer wish to see this warning, set 
> > > SOLR_ULIMIT_CHECKS to false in your profile or
> > > solr.in.sh
> > > *   [WARN] ***  Your Max Processes Limit is currently 15058.
> > >  It should be set to 65000 to avoid operational disruption.
> > >  If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS 
> > > to false in your profile or 
> > > solr.in.sh
> > >
> > > This appears to be related to a known bug in Ubuntu
> > > <https://issues.apache.org/jira/browse/SOLR-13063>
> > > https://blog.jayway.com/2012/02/11/how-to-really-fix-the-too-many-open-files-problem-for-tomcat-in-ubuntu/
> > > I was wondering if you have some workaround. I followed the solutions
> > > in the following threads:
> > > https://vufind.org/jira/browse/VUFIND-1290
> > >
> > > https://underyx.me/2015/05/18/raising-the-maximum-number-of-file-descriptors
> > > and was able to resolve Max Processes Limit b

Infrastructure required for SOLR 7.5

2018-12-12 Thread Priya Krishnasamy
Hi Team,

I'm a new SOLR user.

Can anyone help me with the infrastructure needed for SOLR 7.5 ?


Kind Regards,
Priya Krishnasamy


Re: Metrics

2018-12-12 Thread Jean-Marc Spaggiari
Hi Erick,

(Late) thanks for the clarification on that.

JMS

Le lun. 24 sept. 2018 à 19:06, Erick Erickson  a
écrit :

> The other caches (filterCache, queryResultCache) only have meaning for
> the current searcher opened. Whenever a soft or
> hard-commit-with-opensearcher-true, those caches are all reset to 0
> and stats start accumulating until the _next_ commit. So depending on
> when you look you might see a cache that's just been cleared because a
> new searcher has been opened or one that's been in use for a long
> time.
>
> However, there is a cumulative section for those two caches. The
> cumulative section accumulates the statistics since Solr was started.
> That section should be much more stable on a system that's been
> running for a while. Of course those all start out at zero when you
> start Solr.
>
> Whether the HDFS cache stats follow the same pattern, I don't know.
>
> Best,
> Erick
> On Mon, Sep 24, 2018 at 3:22 PM Jean-Marc Spaggiari
>  wrote:
> >
> > It might be similar to the other cache. Do you have any pointer to the
> > LRUCache documentation? I want to see what kind of metrics it uses and
> see
> > if it's similar or not
> > to solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java,,,
> >
> > Le lun. 24 sept. 2018 à 18:04, Shawn Heisey  a
> écrit :
> >
> > > On 9/24/2018 3:43 PM, Jean-Marc Spaggiari wrote:
> > > > Thanks for taking a look. My indexes are on HDFS. And I configured
> all
> > > the
> > > > solr parameters for that. The "shard page" is when I click on a SOLR
> > > server
> > > > to go to the UI, then in the dropdown on the left I select a shard (a
> > > > leader one), then I click on "plugins/stats". And then I open
> > > > "HdfsBlockCache".  There from time to time I can see some numbers,
> but
> > > when
> > > > I keep refreshing it comes back to 0 always, and I have no clues
> what is
> > > > into the cache, or even if it just work :(
> > >
> > > Apologies for jumping into a world where I can't offer any help.  I
> have
> > > never used HDFS, so things like this that are related to it are an
> > > unknown to me.  I understand most of the other things in Plugins/Stats,
> > > but not that one.
> > >
> > > Thanks,
> > > Shawn
> > >
> > >
>


Re: Open file limit warning when starting solr

2018-12-12 Thread Daniel Carrasco
Hello,

The *su solr* command is important, because you change to the Solr user
before checking the limits again, so it shows that user's limits. Are you
running the daemon as the solr user?

Other command to check is:

> # sysctl -a|grep -i fs.file-max
> fs.file-max = 6167826


If it is low then you may increase it.

Greetings!

On Wed, Dec 12, 2018 at 14:44, Armon, Rony () wrote:

> rony@rony-VirtualBox:~$ ulimit -n
> 1024
> rony@rony-VirtualBox:~/solr-7.5.0$ ulimit -n
> 1024
>
> -Original Message-
> From: Daniel Carrasco [mailto:d.carra...@i2tic.com]
> Sent: Wednesday, December 12, 2018 3:31 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Open file limit warning when starting solr
>
> Hello,
>
> What output you get with this commands?:
>
> > root@solr-temp01:/# ulimit -n
> > 1024
> > root@solr-temp01:/# su solr
> > solr@solr-temp01:/$ ulimit -n
> > 65000
>
>
> Greetings!
>
> El mié., 12 dic. 2018 a las 12:53, Armon, Rony ()
> escribió:
>
> > Hi Daniel and thanks for the prompt reply. I tried that but I'm still
> > getting the file limit warning.
> >
> > -Original Message-
> > From: Daniel Carrasco [mailto:d.carra...@i2tic.com]
> > Sent: Wednesday, December 12, 2018 12:14 PM
> > To: solr-user@lucene.apache.org
> > Subject: Re: Open file limit warning when starting solr
> >
> > Hello,
> >
> > Try creating a file in /etc/security/limits.d/solr.conf with this:
> > solr softnofile  65000
> > solr hardnofile  65000
> > solr softnproc   65000
> > solr hardnproc   65000
> >
> > This worked for me on Debian 9.
> >
> > Greetings!
> >
> > El mié., 12 dic. 2018 a las 11:09, Armon, Rony ()
> > escribió:
> >
> > > Hello, When launching solr (Ubuntu 16.04) I'm getting:
> > > *   [WARN] *** Your open file limit is currently 1024.
> > >It should be set to 65000 to avoid operational
> > > disruption.
> > >If you no longer wish to see this warning, set
> > > SOLR_ULIMIT_CHECKS to false in your profile or
> > > solr.in.sh
> > > *   [WARN] ***  Your Max Processes Limit is currently 15058.
> > >  It should be set to 65000 to avoid operational disruption.
> > >  If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS
> > > to false in your profile or
> > > solr.in.sh
> > >
> > > This appears to be related to a known bug in Ubuntu
> > > <https://issues.apache.org/jira/browse/SOLR-13063>
> > > https://blog.jayway.com/2012/02/11/how-to-really-fix-the-too-many-open-files-problem-for-tomcat-in-ubuntu/
> > > I was wondering if you have some workaround. I followed the
> > > solutions in the following threads:
> > https://vufind.org/jira/browse/VUFIND-1290
> > >
> > https://underyx.me/2015/05/18/raising-the-maximum-number-of-file-descriptors
> > and was able to resolve Max Processes Limit but not File limit:
> > > *   [WARN] *** Your open file limit is currently 1024.
> > >It should be set to 65000 to avoid operational
> > > disruption.
> > >If you no longer wish to see this warning, set
> > > SOLR_ULIMIT_CHECKS to false in your profile or
> > solr.in.sh

RE: Open file limit warning when starting solr

2018-12-12 Thread Armon, Rony
rony@rony-VirtualBox:~$ ulimit -n
1024
rony@rony-VirtualBox:~/solr-7.5.0$ ulimit -n
1024

-Original Message-
From: Daniel Carrasco [mailto:d.carra...@i2tic.com] 
Sent: Wednesday, December 12, 2018 3:31 PM
To: solr-user@lucene.apache.org
Subject: Re: Open file limit warning when starting solr

Hello,

What output do you get with these commands?

> root@solr-temp01:/# ulimit -n
> 1024
> root@solr-temp01:/# su solr
> solr@solr-temp01:/$ ulimit -n
> 65000


Greetings!

On Wed, Dec 12, 2018 at 12:53, Armon, Rony () wrote:

> Hi Daniel and thanks for the prompt reply. I tried that but I'm still 
> getting the file limit warning.
>
> -Original Message-
> From: Daniel Carrasco [mailto:d.carra...@i2tic.com]
> Sent: Wednesday, December 12, 2018 12:14 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Open file limit warning when starting solr
>
> Hello,
>
> Try creating a file in /etc/security/limits.d/solr.conf with this:
> solr soft nofile 65000
> solr hard nofile 65000
> solr soft nproc  65000
> solr hard nproc  65000
>
> This worked for me on Debian 9.
>
> Greetings!
>
> On Wed, Dec 12, 2018 at 11:09, Armon, Rony () wrote:
>
> > Hello, When launching solr (Ubuntu 16.04) I'm getting:
> > *   [WARN] *** Your open file limit is currently 1024.
> >It should be set to 65000 to avoid operational 
> > disruption.
> >If you no longer wish to see this warning, set 
> > SOLR_ULIMIT_CHECKS to false in your profile or
> > solr.in.sh
> > *   [WARN] ***  Your Max Processes Limit is currently 15058.
> >  It should be set to 65000 to avoid operational disruption.
> >  If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS 
> > to false in your profile or 
> > solr.in.sh
> >
> > This appears to be related to a known bug in Ubuntu
> > <https://issues.apache.org/jira/browse/SOLR-13063>
> > https://blog.jayway.com/2012/02/11/how-to-really-fix-the-too-many-open-files-problem-for-tomcat-in-ubuntu/
> > I was wondering if you have some workaround. I followed the
> > solutions in the following threads:
> > https://vufind.org/jira/browse/VUFIND-1290
> >
> > https://underyx.me/2015/05/18/raising-the-maximum-number-of-file-descriptors
> > and was able to resolve Max Processes Limit but not File limit:
> > *   [WARN] *** Your open file limit is currently 1024.
> >It should be set to 65000 to avoid operational 
> > disruption.
> >If you no longer wish to see this warning, set 
> > SOLR_ULIMIT_CHECKS to false in your profile or
> > solr.in.sh
> >   Waiting up to 180 seconds to see Solr running on 
> > port
> > 8983 []
> >   Started Solr server on port 8983 (pid=2843). Happy 
> > searching!
> >
> > cd /proc; cat 2843/limits:
> > Max processes             65000                65000                processes
> > Max open files            4096                 4096                 files
> >
> > The problem persisted after the upgrade to Ubuntu 18.10. Any other
> > solution would be appreciated.
> > Otherwise, can you please tell me the likely consequences of the open
> > file limit?

Re: Open file limit warning when starting solr

2018-12-12 Thread Daniel Carrasco
Hello,

What output do you get with these commands?

> root@solr-temp01:/# ulimit -n
> 1024
> root@solr-temp01:/# su solr
> solr@solr-temp01:/$ ulimit -n
> 65000


Greetings!

On Wed, Dec 12, 2018 at 12:53, Armon, Rony () wrote:

> Hi Daniel and thanks for the prompt reply. I tried that but I'm still
> getting the file limit warning.
>
> -Original Message-
> From: Daniel Carrasco [mailto:d.carra...@i2tic.com]
> Sent: Wednesday, December 12, 2018 12:14 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Open file limit warning when starting solr
>
> Hello,
>
> Try creating a file in /etc/security/limits.d/solr.conf with this:
> solr soft nofile 65000
> solr hard nofile 65000
> solr soft nproc  65000
> solr hard nproc  65000
>
> This worked for me on Debian 9.
>
> Greetings!
>
> On Wed, Dec 12, 2018 at 11:09, Armon, Rony () wrote:
>
> > Hello, When launching solr (Ubuntu 16.04) I'm getting:
> > *   [WARN] *** Your open file limit is currently 1024.
> >It should be set to 65000 to avoid operational
> > disruption.
> >If you no longer wish to see this warning, set
> > SOLR_ULIMIT_CHECKS to false in your profile or
> > solr.in.sh
> > *   [WARN] ***  Your Max Processes Limit is currently 15058.
> >  It should be set to 65000 to avoid operational disruption.
> >  If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to
> > false in your profile or
> > solr.in.sh
> >
> > This appears to be related to a known bug in Ubuntu
> > <https://issues.apache.org/jira/browse/SOLR-13063>
> > https://blog.jayway.com/2012/02/11/how-to-really-fix-the-too-many-open-files-problem-for-tomcat-in-ubuntu/
> > I was wondering if you have some workaround. I followed the
> > solutions in the following threads:
> > https://vufind.org/jira/browse/VUFIND-1290
> >
> > https://underyx.me/2015/05/18/raising-the-maximum-number-of-file-descriptors
> > and was able to resolve Max Processes Limit but not File limit:
> > *   [WARN] *** Your open file limit is currently 1024.
> >It should be set to 65000 to avoid operational
> > disruption.
> >If you no longer wish to see this warning, set
> > SOLR_ULIMIT_CHECKS to false in your profile or
> > solr.in.sh
> >   Waiting up to 180 seconds to see Solr running on
> > port
> > 8983 []
> >   Started Solr server on port 8983 (pid=2843). Happy
> > searching!
> >
> > cd /proc; cat 2843/limits:
> > Max processes             65000                65000                processes
> > Max open files            4096                 4096                 files
> >
> > The problem persisted after the upgrade to Ubuntu 18.10. Any other solution
> > would be appreciated.
> > Otherwise, can you please tell me the likely consequences of
> > the open file limit?
> >
> >
> >
> >
> >
> >

Re: disable auto-commit

2018-12-12 Thread Danilo Tomasoni

I want to disable even that.

I saw here

https://lucene.apache.org/solr/guide/6_6/updatehandlers-in-solrconfig.html


that to achieve what I want I probably just need to comment out the
autoCommit tag. Correct?
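
With autocommit disabled, a single explicit hard commit can be issued at the
end of indexing instead; a minimal sketch, assuming a core named "mycore" on
the default port:

# trigger one hard commit through the update handler
curl 'http://localhost:8983/solr/mycore/update?commit=true'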


What do you think about disabling autocommit/autosoftcommit?

Can it lower the system requirements while indexing?


What about transaction logs? Can they be disabled?

When solr crashes I always reimport from scratch because I don't expect 
that the documents accepted by solr between the last hard commit and the 
crash will be saved somewhere.
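
For reference, the transaction log is enabled by the updateLog element inside
the updateHandler in solrconfig.xml; a sketch of the stock element that would
need to be removed or commented out (assuming the default layout):

<updateLog>
  <str name="dir">${solr.ulog.dir:}</str>
</updateLog>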


But this article

https://lucidworks.com/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/

says that solr is capable of restoring documents even if they weren't
committed. Is that still correct?



Thank you

Danilo


On 12/12/18 13:33, Mikhail Khludnev wrote:

What about autoSoftCommit?

On Wed, Dec 12, 2018 at 3:24 PM Danilo Tomasoni  wrote:


Hello, I'm experiencing OOM while indexing a large number of documents.

The main idea to avoid OOM is to avoid commit (just one big commit at
the end).

Is this a correct idea?

How can I disable autocommit?

I've set


<autoCommit>
  <maxTime>${solr.autoCommit.maxTime:-1}</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

in solrconfig.xml

but it's not sufficient; while indexing I still see documents.

Thank you

Danilo


--
Danilo Tomasoni
COSBI




--
Danilo Tomasoni
COSBI




Re: disable auto-commit

2018-12-12 Thread Mikhail Khludnev
What about autoSoftCommit?
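
A sketch of what disabling it would look like, following the same maxTime=-1
pattern used for autoCommit:

<autoSoftCommit>
  <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
</autoSoftCommit>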

On Wed, Dec 12, 2018 at 3:24 PM Danilo Tomasoni  wrote:

> Hello, I'm experiencing OOM while indexing a large number of documents.
>
> The main idea to avoid OOM is to avoid commit (just one big commit at
> the end).
>
> Is this a correct idea?
>
> How can I disable autocommit?
>
> I've set
>
> <autoCommit>
>   <maxTime>${solr.autoCommit.maxTime:-1}</maxTime>
>   <openSearcher>false</openSearcher>
> </autoCommit>
>
> in solrconfig.xml
>
> but it's not sufficient; while indexing I still see documents.
>
> Thank you
>
> Danilo
>
>
> --
> Danilo Tomasoni
> COSBI
>

-- 
Sincerely yours
Mikhail Khludnev


disable auto-commit

2018-12-12 Thread Danilo Tomasoni

Hello, I'm experiencing OOM while indexing a large number of documents.

The main idea to avoid OOM is to avoid commit (just one big commit at 
the end).


Is this a correct idea?

How can I disable autocommit?

I've set


<autoCommit>
  <maxTime>${solr.autoCommit.maxTime:-1}</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

in solrconfig.xml

but it's not sufficient; while indexing I still see documents.

Thank you

Danilo


--
Danilo Tomasoni
COSBI

As for the European General Data Protection Regulation 2016/679 on the 
protection of natural persons with regard to the processing of personal data, 
we inform you that all the data we possess are object of treatement in the 
respect of the normative provided for by the cited GDPR.

It is your right to be informed on which of your data are used and how; you may 
ask for their correction, cancellation or you may oppose to their use by 
written request sent by recorded delivery to The Microsoft Research – 
University of Trento Centre for Computational and Systems Biology Scarl, Piazza 
Manifattura 1, 38068 Rovereto (TN), Italy.



RE: Open file limit warning when starting solr

2018-12-12 Thread Armon, Rony
Hi Daniel and thanks for the prompt reply. I tried that but I'm still getting 
the file limit warning. 

-Original Message-
From: Daniel Carrasco [mailto:d.carra...@i2tic.com] 
Sent: Wednesday, December 12, 2018 12:14 PM
To: solr-user@lucene.apache.org
Subject: Re: Open file limit warning when starting solr

Hello,

Try creating a file in /etc/security/limits.d/solr.conf with this:
solr soft nofile 65000
solr hard nofile 65000
solr soft nproc  65000
solr hard nproc  65000

This worked for me on Debian 9.

Greetings!

On Wed, Dec 12, 2018 at 11:09, Armon, Rony () wrote:

> Hello, When launching solr (Ubuntu 16.04) I'm getting:
> *   [WARN] *** Your open file limit is currently 1024.
>It should be set to 65000 to avoid operational 
> disruption.
>If you no longer wish to see this warning, set 
> SOLR_ULIMIT_CHECKS to false in your profile or 
> solr.in.sh
> *   [WARN] ***  Your Max Processes Limit is currently 15058.
>  It should be set to 65000 to avoid operational disruption.
>  If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to 
> false in your profile or 
> solr.in.sh
>
> This appears to be related to a known bug in Ubuntu
> <https://issues.apache.org/jira/browse/SOLR-13063>
> https://blog.jayway.com/2012/02/11/how-to-really-fix-the-too-many-open-files-problem-for-tomcat-in-ubuntu/
> I was wondering if you have some workaround. I followed the
> solutions in the following threads:
> https://vufind.org/jira/browse/VUFIND-1290
>
> https://underyx.me/2015/05/18/raising-the-maximum-number-of-file-descriptors
> and was able to resolve Max Processes Limit but not File limit:
> *   [WARN] *** Your open file limit is currently 1024.
>It should be set to 65000 to avoid operational 
> disruption.
>If you no longer wish to see this warning, set 
> SOLR_ULIMIT_CHECKS to false in your profile or 
> solr.in.sh
>   Waiting up to 180 seconds to see Solr running on 
> port
> 8983 []
>   Started Solr server on port 8983 (pid=2843). Happy 
> searching!
>
> cd /proc; cat 2843/limits:
> Max processes             65000                65000                processes
> Max open files            4096                 4096                 files
>
> The problem persisted after the upgrade to Ubuntu 18.10. Any other solution
> would be appreciated.
> Otherwise, can you please tell me the likely consequences of
> the open file limit?
>
>
>
>
>
>



Re: Open file limit warning when starting solr

2018-12-12 Thread Daniel Carrasco
Hello,

Try creating a file in /etc/security/limits.d/solr.conf with this:
solr soft nofile 65000
solr hard nofile 65000
solr soft nproc  65000
solr hard nproc  65000

This worked for me on Debian 9.
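
A quick sketch to verify what the solr user actually gets through a PAM login
(note: limits.d is applied via PAM, so a Solr started by systemd may need
LimitNOFILE/LimitNPROC in the unit file instead):

# open a login shell as solr and print the open-file and process limits
su - solr -s /bin/bash -c 'ulimit -n; ulimit -u'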

Greetings!

On Wed, Dec 12, 2018 at 11:09, Armon, Rony () wrote:

> Hello, When launching solr (Ubuntu 16.04) I'm getting:
> *   [WARN] *** Your open file limit is currently 1024.
>It should be set to 65000 to avoid operational
> disruption.
>If you no longer wish to see this warning, set
> SOLR_ULIMIT_CHECKS to false in your profile or solr.in.sh
> *   [WARN] ***  Your Max Processes Limit is currently 15058.
>  It should be set to 65000 to avoid operational disruption.
>  If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to
> false in your profile or solr.in.sh
>
> This appears to be related to a known bug in Ubuntu<
> https://issues.apache.org/jira/browse/SOLR-13063>
> https://blog.jayway.com/2012/02/11/how-to-really-fix-the-too-many-open-files-problem-for-tomcat-in-ubuntu/
> I was wondering if you have some workaround. I followed the solutions in
> the following threads:
> https://vufind.org/jira/browse/VUFIND-1290
>
> https://underyx.me/2015/05/18/raising-the-maximum-number-of-file-descriptors
> and was able to resolve Max Processes Limit but not File limit:
> *   [WARN] *** Your open file limit is currently 1024.
>It should be set to 65000 to avoid operational
> disruption.
>If you no longer wish to see this warning, set
> SOLR_ULIMIT_CHECKS to false in your profile or solr.in.sh
>   Waiting up to 180 seconds to see Solr running on port
> 8983 []
>   Started Solr server on port 8983 (pid=2843). Happy
> searching!
>
> cd /proc; cat 2843/limits:
> Max processes             65000                65000                processes
> Max open files            4096                 4096                 files
>
> The problem persisted after the upgrade to Ubuntu 18.10.
> Any other solution would be appreciated.
> Otherwise, can you please tell me the likely consequences of the
> open file limit?
>
>
>
>
>
>


-- 
_

  Daniel Carrasco Marín
  Ingeniería para la Innovación i2TIC, S.L.
  Tlf:  +34 911 12 32 84 Ext: 223
  www.i2tic.com
_


Open file limit warning when starting solr

2018-12-12 Thread Armon, Rony
Hello, When launching solr (Ubuntu 16.04) I'm getting:
*   [WARN] *** Your open file limit is currently 1024.
   It should be set to 65000 to avoid operational disruption.
   If you no longer wish to see this warning, set 
SOLR_ULIMIT_CHECKS to false in your profile or solr.in.sh
*   [WARN] ***  Your Max Processes Limit is currently 15058.
 It should be set to 65000 to avoid operational disruption.
 If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in 
your profile or solr.in.sh

This appears to be related to a known bug in 
Ubuntu <https://issues.apache.org/jira/browse/SOLR-13063>
https://blog.jayway.com/2012/02/11/how-to-really-fix-the-too-many-open-files-problem-for-tomcat-in-ubuntu/
I was wondering if you have some workaround. I followed the solutions in the 
following threads:
https://vufind.org/jira/browse/VUFIND-1290
https://underyx.me/2015/05/18/raising-the-maximum-number-of-file-descriptors
and was able to resolve Max Processes Limit but not File limit:
*   [WARN] *** Your open file limit is currently 1024.
   It should be set to 65000 to avoid operational disruption.
   If you no longer wish to see this warning, set 
SOLR_ULIMIT_CHECKS to false in your profile or solr.in.sh
  Waiting up to 180 seconds to see Solr running on port 8983 []
  Started Solr server on port 8983 (pid=2843). Happy searching!

cd /proc; cat 2843/limits:
Max processes             65000                65000                processes
Max open files            4096                 4096                 files
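
The effective limits of the running process can be checked directly with the
PID printed at startup, e.g.:

# show the open-files and processes rows of the process limits table
grep -E 'Max (open files|processes)' /proc/2843/limits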

The problem persisted after the upgrade to Ubuntu 18.10.
Any other solution would be appreciated.
Otherwise, can you please tell me the likely consequences of the open
file limit?
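
For reference, the check itself can be silenced (without raising any limit) by
setting the variable named in the warning; a sketch, assuming a service install
whose include file is /etc/default/solr.in.sh:

# skip the ulimit checks on startup
echo 'SOLR_ULIMIT_CHECKS=false' | sudo tee -a /etc/default/solr.in.sh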






**
The information in this email is confidential and may be legally privileged. It 
is intended solely for the addressee. Access to this email by anyone else is 
unauthorized. If you are not the intended recipient, any disclosure, copying, 
distribution or any action taken or omitted to be taken in reliance on it, is 
prohibited and may be unlawful. When addressed to our clients any opinions or 
advice contained in this email are subject to the terms and conditions 
expressed in the governing KPMG client engagement letter.
***


AW: Keyword field with tabs in Solr 7.4

2018-12-12 Thread Michael Aleythe, Sternwald
Hey Erick,

thanks a lot for your suggestion. It led me down the right path. What actually
did the trick was sending the tab as a Unicode escape:
IPTC_2_080_KY:"\u0009bus\u0009bahn" matched perfectly.
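
The same query sent over HTTP, with the colon, quotes and backslashes
url-encoded; a sketch assuming a core named "mycore" (debug=query shows the
parsed query, as Erick suggested):

curl 'http://localhost:8983/solr/mycore/select?q=IPTC_2_080_KY%3A%22%5Cu0009bus%5Cu0009bahn%22&debug=query'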

Best,
Michael

-Original Message-
From: Erick Erickson 
Sent: Tuesday, December 11, 2018 18:45
To: solr-user 
Subject: Re: Keyword field with tabs in Solr 7.4

You are probably in "url-encoding hell". Add &debug=query to your search and 
check the parsed query returned to see what Solr actually sees. Try 
url-encoding the backslash (%5C), maybe?

Best,
Erick
On Tue, Dec 11, 2018 at 1:40 AM Michael Aleythe, Sternwald 
 wrote:
>
> Hey everybody,
>
> I have a Solr keyword field defined as:
>
> <fieldType name="keyword" class="solr.TextField">
>   <analyzer>
>     <tokenizer class="solr.KeywordTokenizerFactory"/>
>   </analyzer>
> </fieldType>
>
> <field name="IPTC_2_080_KY" type="keyword" indexed="true"
>  stored="true" termVectors="false" multiValued="false" />
>
> Some documents have tabs (\t) indexed in this field, e.g. 
> IPTC_2_080_KY:"\tbus\tbahn"
>
> How can I query this content? I tried "\tbus\tbahn", 
> \\tbus\\tbahn and " bus bahn" but nothing matches. Does 
> anybody know what to do?
>
> Regards
> Michael