Replicas going into recovery

2020-06-11 Thread Anshuman Singh
We are running a test case, ingesting 2B records in a collection in 24 hrs.
This collection is spread across 10 solr nodes with a replication factor of
2.

We are noticing many replicas going into recovery while indexing, and it is
degrading indexing performance.
We are observing errors like:

org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:
Error from server at http://host:8983/solr/test_shard13_replica_n50:
Expected mime type application/octet-stream but got application/json.

o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: No
registered leader was found after waiting for 4000ms

Sometimes both replicas of a shard go into recovery, and the error log shows
something related to ZooKeeper being unable to elect a leader.

Also related to indexing performance: when we left a run overnight, we could see
that by morning the indexing time had degraded from <1000ms to >10s for 10k batch
insertions.
But we have noticed that restarting Solr on all nodes restores the better
performance. We are using Solr 7.4. What can be the issue here?
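
For reference, a minimal SolrJ sketch of this kind of batched indexing loop, with a
naive retry around the "No registered leader" error, might look like the following.
The ZooKeeper hosts, collection name, field names and retry policy are placeholder
assumptions, not details from this setup:

// Hedged sketch: batched indexing with CloudSolrClient and a simple retry/back-off.
// ZooKeeper hosts, collection and fields below are illustrative assumptions.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.SolrException;
import org.apache.solr.common.SolrInputDocument;

public class BatchIndexer {
    public static void main(String[] args) throws Exception {
        try (CloudSolrClient client = new CloudSolrClient.Builder(
                Arrays.asList("zk1:2181", "zk2:2181", "zk3:2181"), Optional.empty()).build()) {
            client.setDefaultCollection("test");
            List<SolrInputDocument> batch = new ArrayList<>();
            for (long i = 0; i < 1_000_000; i++) {
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", Long.toString(i));
                doc.addField("payload_s", "record-" + i);
                batch.add(doc);
                if (batch.size() == 10_000) {          // 10k batches, as in the test run
                    addWithRetry(client, batch);
                    batch.clear();
                }
            }
            if (!batch.isEmpty()) {
                addWithRetry(client, batch);
            }
            client.commit();
        }
    }

    private static void addWithRetry(CloudSolrClient client, List<SolrInputDocument> batch)
            throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                client.add(batch);
                return;
            } catch (SolrException | SolrServerException e) {  // e.g. "No registered leader was found"
                if (attempt == 3) {
                    throw e;
                }
                Thread.sleep(5_000L * attempt);  // back off so leader election can settle
            }
        }
    }
}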


Re: Filtering Solr pivot facet values

2020-06-11 Thread emerikusz
This works: &f.interests.facet.matches=hockey|soccer 
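
For anyone doing this from SolrJ, the equivalent request might look like the sketch
below; the host and collection name are assumptions for illustration, "interests" is
the field from this thread, and facet.matches takes a regular expression:

// Hedged sketch: restricting returned facet values with facet.matches.
// Host and collection ("mycollection") are assumptions.
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class FacetMatchesExample {
    public static void main(String[] args) throws Exception {
        try (HttpSolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build()) {
            SolrQuery q = new SolrQuery("*:*");
            q.setRows(0);
            q.addFacetField("interests");
            // Only facet values matching this regex are returned.
            q.set("f.interests.facet.matches", "hockey|soccer");
            QueryResponse rsp = client.query(q);
            rsp.getFacetField("interests").getValues().forEach(
                c -> System.out.println(c.getName() + " -> " + c.getCount()));
        }
    }
}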





CDCRLogReader.getVersion ClassCastException

2020-06-11 Thread Scott Woolcox
Hello all,

We are using Solr 7.7.3 (SolrCloud, 2 nodes w/zookeepers per data center) and 
today we started to receive alerts:

CdcrReplicator Failed to forward update request to target: Authors
java.lang.ClassCastException: class java.lang.Long cannot be cast to class 
java.util.List (java.lang.Long and java.util.List are in module java.base of 
loader 'bootstrap')

at 
org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.getVersion(CdcrUpdateLog.java:732)

This error repeated until we took action. Stopping and starting CDCR made the
error stop; we are still trying to verify our data. I suspect we lost something…
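
For anyone hitting the same thing, the stop/start workaround goes through the CDCR
API on the source collection; a rough, untested SolrJ sketch (the base URL and
collection name are placeholders, not taken from this report) could look like this:

// Hedged sketch of stopping and restarting CDCR via the /cdcr API (Solr 7.x).
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

public class CdcrRestart {
    public static void main(String[] args) throws Exception {
        try (HttpSolrClient client =
                 new HttpSolrClient.Builder("http://source-host:8983/solr/Authors").build()) {
            System.out.println(cdcr(client, "STOP"));
            Thread.sleep(5_000);                        // let the replicator threads wind down
            System.out.println(cdcr(client, "START"));
            System.out.println(cdcr(client, "STATUS")); // confirm the replicator state
        }
    }

    private static Object cdcr(HttpSolrClient client, String action) throws Exception {
        ModifiableSolrParams params = new ModifiableSolrParams();
        params.set("action", action);
        // Resolves to /solr/Authors/cdcr?action=... against the base URL above.
        return client.request(new GenericSolrRequest(SolrRequest.METHOD.GET, "/cdcr", params));
    }
}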

I noticed this issue was first raised in Oct. 2019 (as indicated here: 
http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201910.mbox/%3c1572288519571-0.p...@n3.nabble.com%3E)
 and we are looking for advice on this.
I have reviewed the code and it looks like the implementation has not changed,
so this issue may still occur in current master. It looks like the compiler
warning is now being ignored, though: someone has added
@SuppressWarnings({"rawtypes"}).

Any guidance is very much appreciated.

- Thank you



Re: How to determine why solr stops running?

2020-06-11 Thread Walter Underwood
1. You have a tiny heap. 536 Megabytes is not enough.
2. I stopped using the CMS GC years ago.

Here is the GC config we use on every one of our 150+ Solr hosts. We’re still 
on Java 8, but will be upgrading soon.

SOLR_HEAP=8g
# Use G1 GC  -- wunder 2017-01-23
# Settings from https://wiki.apache.org/solr/ShawnHeisey
GC_TUNE=" \
-XX:+UseG1GC \
-XX:+ParallelRefProcEnabled \
-XX:G1HeapRegionSize=8m \
-XX:MaxGCPauseMillis=200 \
-XX:+UseLargePages \
-XX:+AggressiveOpts \
"

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Jun 11, 2020, at 10:52 AM, Ryan W  wrote:
> 
> On Wed, Jun 10, 2020 at 8:35 PM Hup Chen  wrote:
> 
>> I will check "dmesg" first, to find out any hardware error message.
>> 
> 
> Here is what I see toward the end of the output from dmesg:
> 
> [1521232.781785] [118857]48 118857   108785  677 201
> 901 0 httpd
> [1521232.781787] [118860]48 118860   108785  710 201
> 881 0 httpd
> [1521232.781788] [118862]48 118862   113063 5256 210
> 725 0 httpd
> [1521232.781790] [118864]48 118864   114085 6634 212
> 703 0 httpd
> [1521232.781791] [118871]48 118871   13968732323 262
> 620 0 httpd
> [1521232.781793] [118873]48 118873   108785  821 201
> 792 0 httpd
> [1521232.781795] [118879]48 118879   14026332719 263
> 621 0 httpd
> [1521232.781796] [118903]48 118903   108785  812 201
> 771 0 httpd
> [1521232.781798] [118905]48 118905   113575 5606 211
> 660 0 httpd
> [1521232.781800] [118906]48 118906   113563 5694 211
> 626 0 httpd
> [1521232.781801] Out of memory: Kill process 117529 (httpd) score 9 or
> sacrifice child
> [1521232.782908] Killed process 117529 (httpd), UID 48, total-vm:675824kB,
> anon-rss:181844kB, file-rss:0kB, shmem-rss:0kB
> 
> Is this a relevant "Out of memory" message?  Does this suggest an OOM
> situation is the culprit?
> 
> When I grep in the solr logs for oom, I see some entries like this...
> 
> ./solr_gc.log.4.current:CommandLine flags: -XX:CICompilerCount=4
> -XX:CMSInitiatingOccupancyFraction=50 -XX:CMSMaxAbortablePrecleanTime=6000
> -XX:+CMSParallelRemarkEnabled -XX:+CMSScavengeBeforeRemark
> -XX:ConcGCThreads=4 -XX:GCLogFileSize=20971520
> -XX:InitialHeapSize=536870912 -XX:MaxHeapSize=536870912
> -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=8
> -XX:MinHeapDeltaBytes=196608 -XX:NewRatio=3 -XX:NewSize=134217728
> -XX:NumberOfGCLogFiles=9 -XX:OldPLABSize=16 -XX:OldSize=402653184
> -XX:-OmitStackTraceInFastThrow
> -XX:OnOutOfMemoryError=/opt/solr/bin/oom_solr.sh 8983 /opt/solr/server/logs
> -XX:ParallelGCThreads=4 -XX:+ParallelRefProcEnabled
> -XX:PretenureSizeThreshold=67108864 -XX:+PrintGC
> -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCDateStamps
> -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC
> -XX:+PrintTenuringDistribution -XX:SurvivorRatio=4
> -XX:TargetSurvivorRatio=90 -XX:ThreadStackSize=256
> -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers
> -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseGCLogFileRotation
> -XX:+UseParNewGC
> 
> Buried in there I see "OnOutOfMemoryError=/opt/solr/bin/oom_solr.sh". But I
> think this is just a setting that indicates what to do in case of an OOM.
> And if I look in that oom_solr.sh file, I see it would write an entry to a
> solr_oom_kill log. And there is no such log in the logs directory.
> 
> Many thanks.
> 
> 
> 
> 
>> Then use some system admin tools to monitor that server,
>> for instance, top, vmstat, lsof, iostat ... or simply install some nice
>> free monitoring tool into this system, like monit, monitorix, nagios.
>> Good luck!
>> 
>> 
>> From: Ryan W 
>> Sent: Thursday, June 11, 2020 2:13 AM
>> To: solr-user@lucene.apache.org 
>> Subject: Re: How to determine why solr stops running?
>> 
>> Hi all,
>> 
>> People keep suggesting I check the logs for errors.  What do those errors
>> look like?  Does anyone have examples of the text of a Solr oom error?  Or
>> the text of any other errors I should be looking for the next time solr
>> fails?  Are there phrases I should grep for in the logs?  Should I be
>> looking in the Solr logs for an OOM error, or in the Apache logs?
>> 
>> There is nothing failing on the server except for solr -- at least not that
>> I can see.  There is no apparent problem with the hardware or anything else
>> on the server.  The OS is Red Hat Enterprise Linux. The server has 16 GB of
>> RAM and hosts one website that does not get a huge amount of traffic.
>> 
>> When the start command is given to solr, does it first check to see if solr
>> is running, or does it always start solr whether it is already running or
>> not?
>> 
>> Many thanks!
>> Ryan
>> 
>> 
>> On Tue, Jun 9, 2020 at 7:58 AM Erick Erickson 
>> wrote:
>> 
>>> To add to what Dave said,

Re: How to determine why solr stops running?

2020-06-11 Thread Ryan W
On Wed, Jun 10, 2020 at 8:35 PM Hup Chen  wrote:

> I will check "dmesg" first, to find out any hardware error message.
>

Here is what I see toward the end of the output from dmesg:

[1521232.781785] [118857]48 118857   108785  677 201
901 0 httpd
[1521232.781787] [118860]48 118860   108785  710 201
881 0 httpd
[1521232.781788] [118862]48 118862   113063 5256 210
725 0 httpd
[1521232.781790] [118864]48 118864   114085 6634 212
703 0 httpd
[1521232.781791] [118871]48 118871   13968732323 262
620 0 httpd
[1521232.781793] [118873]48 118873   108785  821 201
792 0 httpd
[1521232.781795] [118879]48 118879   14026332719 263
621 0 httpd
[1521232.781796] [118903]48 118903   108785  812 201
771 0 httpd
[1521232.781798] [118905]48 118905   113575 5606 211
660 0 httpd
[1521232.781800] [118906]48 118906   113563 5694 211
626 0 httpd
[1521232.781801] Out of memory: Kill process 117529 (httpd) score 9 or
sacrifice child
[1521232.782908] Killed process 117529 (httpd), UID 48, total-vm:675824kB,
anon-rss:181844kB, file-rss:0kB, shmem-rss:0kB

Is this a relevant "Out of memory" message?  Does this suggest an OOM
situation is the culprit?

When I grep in the solr logs for oom, I see some entries like this...

./solr_gc.log.4.current:CommandLine flags: -XX:CICompilerCount=4
-XX:CMSInitiatingOccupancyFraction=50 -XX:CMSMaxAbortablePrecleanTime=6000
-XX:+CMSParallelRemarkEnabled -XX:+CMSScavengeBeforeRemark
-XX:ConcGCThreads=4 -XX:GCLogFileSize=20971520
-XX:InitialHeapSize=536870912 -XX:MaxHeapSize=536870912
-XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=8
-XX:MinHeapDeltaBytes=196608 -XX:NewRatio=3 -XX:NewSize=134217728
-XX:NumberOfGCLogFiles=9 -XX:OldPLABSize=16 -XX:OldSize=402653184
-XX:-OmitStackTraceInFastThrow
-XX:OnOutOfMemoryError=/opt/solr/bin/oom_solr.sh 8983 /opt/solr/server/logs
-XX:ParallelGCThreads=4 -XX:+ParallelRefProcEnabled
-XX:PretenureSizeThreshold=67108864 -XX:+PrintGC
-XX:+PrintGCApplicationStoppedTime -XX:+PrintGCDateStamps
-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC
-XX:+PrintTenuringDistribution -XX:SurvivorRatio=4
-XX:TargetSurvivorRatio=90 -XX:ThreadStackSize=256
-XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers
-XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseGCLogFileRotation
-XX:+UseParNewGC

Buried in there I see "OnOutOfMemoryError=/opt/solr/bin/oom_solr.sh". But I
think this is just a setting that indicates what to do in case of an OOM.
And if I look in that oom_solr.sh file, I see it would write an entry to a
solr_oom_kill log. And there is no such log in the logs directory.

Many thanks.




> Then use some system admin tools to monitor that server,
> for instance, top, vmstat, lsof, iostat ... or simply install some nice
> free monitoring tool into this system, like monit, monitorix, nagios.
> Good luck!
>
> 
> From: Ryan W 
> Sent: Thursday, June 11, 2020 2:13 AM
> To: solr-user@lucene.apache.org 
> Subject: Re: How to determine why solr stops running?
>
> Hi all,
>
> People keep suggesting I check the logs for errors.  What do those errors
> look like?  Does anyone have examples of the text of a Solr oom error?  Or
> the text of any other errors I should be looking for the next time solr
> fails?  Are there phrases I should grep for in the logs?  Should I be
> looking in the Solr logs for an OOM error, or in the Apache logs?
>
> There is nothing failing on the server except for solr -- at least not that
> I can see.  There is no apparent problem with the hardware or anything else
> on the server.  The OS is Red Hat Enterprise Linux. The server has 16 GB of
> RAM and hosts one website that does not get a huge amount of traffic.
>
> When the start command is given to solr, does it first check to see if solr
> is running, or does it always start solr whether it is already running or
> not?
>
> Many thanks!
> Ryan
>
>
> On Tue, Jun 9, 2020 at 7:58 AM Erick Erickson 
> wrote:
>
> > To add to what Dave said, if you have a particular machine that’s prone
> to
> > suddenly stopping, that’s usually a red flag that you should seriously
> > think about hardware issues.
> >
> > If the problem strikes different machines, then I agree with Shawn that
> > the first thing I’d be suspicious of is OOM errors.
> >
> > FWIW,
> > Erick
> >
> > > On Jun 9, 2020, at 6:05 AM, Dave  wrote:
> > >
> > > I’ll add that whenever I’ve had a solr instance shut down, for me it’s
> > been a hardware failure. Either the ram or the disk got a “glitch” and
> both
> > of these are relatively fragile and wear and tear type parts of the
> > machine, and should be expected to fail and be replaced from time to
> time.
> > Solr is pretty aggressive with its logging so there are a lot of writes
> > always happening and of

Re: Solr atomic update successful but other fields' contents got dropped

2020-06-11 Thread Erick Erickson
Are they stored=true?
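
For context on why that question matters: an atomic update rebuilds the whole
document from its stored (or docValues-retrievable) fields, so a field that is
indexed but not stored is silently lost when any other field of the document is
updated atomically. A minimal SolrJ sketch of such an update, with the URL,
collection and field names as illustrative assumptions:

// Hedged sketch of an atomic ("set") update with SolrJ; names below are illustrative.
// If "description" were indexed but not stored (and had no docValues), its value
// would be dropped when "category" is updated this way.
import java.util.Collections;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class AtomicUpdateExample {
    public static void main(String[] args) throws Exception {
        try (HttpSolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr/nutch").build()) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "https://example.com/page");
            // "set" replaces only this field; Solr rebuilds the rest from stored fields.
            doc.addField("category", Collections.singletonMap("set", "news"));
            client.add(doc);
            client.commit();
        }
    }
}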

On Thu, Jun 11, 2020, 10:14 Dongtao  wrote:

> I have a Solr atomic JSON update job that runs successfully for adding values
> to specific fields, but the contents of two fields, description and keywords
> (I am using the Nutch web crawler), are dropped after the JSON atomic update.
> I could not figure out why.
>
> Any help is appreciated.
>
>
>
>


Re: [EXTERNAL] - Re: HTTP 401 when searching on alias in secured Solr

2020-06-11 Thread Isabelle Giguere
Some extra info:
Collections have 1 shard, 1 replica.  Only 1 Solr node running.

The HTTP 401 is not intermittent, as reported in SOLR-13421 and SOLR-13510.

Any request to the alias fails.

Thanks;

Isabelle Giguère
Computational Linguist & Java Developer
Linguiste informaticienne & développeur java



De : Isabelle Giguere 
Envoyé : 10 juin 2020 16:11
À : solr-user@lucene.apache.org 
Objet : Re: [EXTERNAL] - Re: HTTP 401 when searching on alias in secured Solr

Hi Jan;

Thank you for your reply.

This is security.json as seen in Zookeeper.  Credentials are admin / admin

{
  "authentication":{
"blockUnknown":false,
"realm":"MTM Solr",
"forwardCredentials":true,
"class":"solr.BasicAuthPlugin",
"credentials":{"admin":"0rTOgObKYwzSyPoYuj2su2/90eQCfysF1aasxTx+wrc= 
+tCMmpawYYtTsp3JfkG9avb8bKZlm/IGTZirsufYvns="},
"":{"v":2}},
  "authorization":{
"class":"solr.RuleBasedAuthorizationPlugin",
"permissions":[{
"name":"all",
"role":"admin"}],
"user-role":{"admin":"admin"},
"":{"v":8}}}

Thanks for feedback

Isabelle Giguère
Computational Linguist & Java Developer
Linguiste informaticienne & développeur java
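
One way to narrow down where the 401 originates is to send the same query from SolrJ
with credentials set explicitly on the request: if the collection query succeeds but
the alias query still fails, the failure is on the internal hop between nodes. A
rough sketch, using the collection/alias names and admin/admin credentials from this
thread (the host is assumed):

// Hedged sketch: query the collection and the alias with explicit Basic Auth.
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.QueryRequest;

public class AliasAuthCheck {
    public static void main(String[] args) throws Exception {
        try (HttpSolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
            for (String target : new String[] {"test1", "test"}) {  // collection, then alias
                QueryRequest req = new QueryRequest(new SolrQuery("*:*"));
                req.setBasicAuthCredentials("admin", "admin");
                long numFound = req.process(client, target).getResults().getNumFound();
                System.out.println(target + " -> numFound=" + numFound);
            }
        }
    }
}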



De : Jan Høydahl 
Envoyé : 10 juin 2020 16:01
À : solr-user@lucene.apache.org 
Objet : [EXTERNAL] - Re: HTTP 401 when searching on alias in secured Solr

Please share your security.json file

Jan Høydahl

> 10. jun. 2020 kl. 21:53 skrev Isabelle Giguere 
> :
>
> Hi;
>
> I'm using Solr 8.5.0.  I have uploaded security.json to Zookeeper.  I can log 
> in the Solr Admin UI.  I can create collections and aliases, and I can index 
> documents in Solr.
>
> Collections : test1, test2
> Alias: test (combines test1, test2)
>
> Indexed document "solr-word.pdf" in collection test1
>
> Searching on a collection works:
> http://localhost:8983/solr/test1/select?q=*:*&wt=xml
> 
>
> But searching on an alias results in HTTP 401
> http://localhost:8983/solr/test/select?q=*:*&wt=xml
>
> Error from server at null: Expected mime type application/octet-stream but
> got text/html.
> HTTP ERROR 401 Authentication failed, Response code: 401
> URI: /solr/test1_shard1_replica_n1/select
> STATUS: 401
> MESSAGE: Authentication failed, Response code: 401
> SERVLET: default
>
> Even though https://issues.apache.org/jira/browse/SOLR-13510 is fixed in
> Solr 8.5.0, I did try to start Solr with -Dsolr.http1=true, and
> I set "forwardCredentials":true in security.json.
>
> Nothing works.  I just cannot use aliases when Solr is secured.
>
> Can anyone confirm if this may be a configuration issue, or if this could 
> possibly be a bug ?
>
> Thank you;
>
> Isabelle Giguère
> Computational Linguist & Java Developer
> Linguiste informaticienne & développeur java
>
>


Re: Facet Performance

2020-06-11 Thread James Bodkin
Could you explain why the performance is an issue for points-based fields? I've 
looked through the referenced issue (which is fixed in the version we are 
running) but I'm missing the link between the two. Is there an issue to improve 
this for points-based fields?
We're going to change the field type to a string, as our queries are always 
looking for a specific value (and not intervals/ranges) and rerun our load test.


Kind Regards,

James Bodkin

On 11/06/2020, 14:49, "Erick Erickson"  wrote:

There’s a lot of confusion about using points-based fields for faceting, 
see: https://issues.apache.org/jira/browse/SOLR-13227 for instance.

Two options you might try:
1> copyField to a string field and facet on that (won’t work, of course, 
for any kind of interval/range facet)
2> use the deprecated Trie field instead. You could use the copyField to a 
Trie field for this too.

Best,
Erick



Solr atomic update successful but other fields' contents got dropped

2020-06-11 Thread Dongtao
I have a Solr atomic JSON update job that runs successfully for adding values
to specific fields, but the contents of two fields, description and keywords
(I am using the Nutch web crawler), are dropped after the JSON atomic update.
I could not figure out why.

Any help is appreciated.





Re: Facet Performance

2020-06-11 Thread Erick Erickson
There’s a lot of confusion about using points-based fields for faceting, see: 
https://issues.apache.org/jira/browse/SOLR-13227 for instance.

Two options you might try:
1> copyField to a string field and facet on that (won’t work, of course, for 
any kind of interval/range facet)
2> use the deprecated Trie field instead. You could use the copyField to a Trie 
field for this too.

Best,
Erick
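
As a rough illustration of option 1: assuming a copyField from D_UserRatingGte into
a string field (the D_UserRatingGte_str name below is an assumption, not part of the
real schema, as are the host and collection), the facet request from SolrJ would
look something like this:

// Hedged sketch: facet on a hypothetical string copy of the points field.
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class StringFacetExample {
    public static void main(String[] args) throws Exception {
        try (HttpSolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build()) {
            SolrQuery q = new SolrQuery("*:*");
            q.setRows(0);
            q.addFacetField("D_UserRatingGte_str");  // facet on the string copy, not the point field
            q.setFacetMinCount(1);
            q.setFacetLimit(-1);
            QueryResponse rsp = client.query(q);
            System.out.println(rsp.getFacetField("D_UserRatingGte_str").getValues());
        }
    }
}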

> On Jun 11, 2020, at 9:39 AM, James Bodkin  
> wrote:
> 
> We’ve been running a load test against our index and have noticed that the 
> facet queries are significantly slower than we would like.
> Currently these types of queries are taking several seconds to execute and 
> are wondering if it would be possible to speed these up.
> Repeating the same query over and over does not improve the response time so 
> does not appear to utilise any caching.
> Ideally we would like to be targeting a response time around tens or hundreds 
> of milliseconds if possible.
> 
> An example query that is taking around 2-3 seconds to execute is:
> 
> q=*.*
> facet=true
> facet.field=D_UserRatingGte
> facet.mincount=1
> facet.limit=-1
> rows=0
> 
> "response":{"numFound":18979503,"start":0,"maxScore":1.0,"docs":[]}
> "facet_counts":{
>"facet_queries":{},
>"facet_fields":{
>  "D_UserRatingGte":[
>"1575",16614238,
>"1576",16614238,
>"1577",16614238,
>"1578",16065938,
>"1579",12079545,
>"1580",458799]},
>"facet_ranges":{},
>"facet_intervals":{},
>"facet_heatmaps":{}}}
> 
> I have also tried the equivalent query using the JSON Facet API with the same 
> outcome of slow response time.
> Additionally I have tried changing the facet method (on both facet apis) with 
> the same outcome of slow response time.
> 
> The underlying field for the above query is configured as a 
> solr.IntPointField with docValues, indexed and multiValued set to true.
> The index has just under 19 million documents and the physical size on disk 
> is 10.95GB. The index is read-only and consists of 4 segments with 0 
> deletions.
> We’re running standalone Solr 8.3.1 with a 8GB Heap and the underlying Google 
> Cloud Virtual Machine in our load test environment has 6 vCPUs, 32G RAM and 
> 100GB SSD.
> 
> Would anyone be able to point me in a direction to either improve the 
> performance or understand the current performance is expected?
> 
> Kind Regards,
> 
> James Bodkin



Re: [EXTERNAL] - SolR OOM error due to query injection

2020-06-11 Thread Michael Gibney
Guilherme,
The answer is likely to be dependent on the query parser, query parser
configuration, and analysis chains. If you post those it could aid in
helping troubleshoot. One thing that jumps to mind is the asterisks
("*") -- if they're interpreted as wildcards, that could be
problematic? More generally, it's of course true that Solr won't parse
this input as SQL, but as Isabelle pointed out, there are still
potentially lots of meta-characters (in addition to quite a few short,
common terms).
Michael
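
As a rough illustration of the pre-filtering Isabelle suggests in the quoted thread
below, something like this could sit in front of the search endpoint; the one-third
threshold is arbitrary and would need tuning so that legitimate syntax (field:value,
ids with dashes) still passes:

// Hedged sketch: reject requests whose query string is mostly non-alphanumeric
// before it ever reaches Solr. Threshold and examples are illustrative only.
public class QuerySanityCheck {

    private static final double MAX_NON_ALNUM_RATIO = 1.0 / 3.0;

    public static boolean looksLikeInjection(String q) {
        if (q == null || q.isEmpty()) {
            return false;
        }
        long nonAlnum = q.chars()
                .filter(c -> !Character.isLetterOrDigit(c) && !Character.isWhitespace(c))
                .count();
        return (double) nonAlnum / q.length() > MAX_NON_ALNUM_RATIO;
    }

    public static void main(String[] args) {
        System.out.println(looksLikeInjection("title:Apoptosis AND species:\"Homo sapiens\"")); // false
        System.out.println(looksLikeInjection("IPP\")))/**/AND/**/(SELECT/**/2*(IF((SELECT"));   // true
    }
}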


On Thu, Jun 11, 2020 at 7:43 AM Guilherme Viteri  wrote:
>
> Hi Isabelle
> Thanks for your input.
> In fact, Solr returns 30 results for these queries. Why does it behave in a
> way that causes an OOM? Also, the commands are SQL commands, and Solr would
> parse them as normal characters…
>
> Thanks
>
>
> > On 10 Jun 2020, at 22:50, Isabelle Giguere  
> > wrote:
> >
> > Hi Guilherme;
> >
> > The only thing I can think of right now is the number of non-alphanumeric 
> > characters.
> >
> > In the first 'q' in your examples, after resolving the character escapes, 
> > 1/3 of characters are non-alphanumeric (* / = , etc).
> >
> > Maybe filter-out queries that contain too many non-alphanumeric characters 
> > before sending the request to Solr ?  Whatever "too many" could be.
> >
> > Isabelle Giguère
> > Computational Linguist & Java Developer
> > Linguiste informaticienne & développeur java
> >
> >
> > 
> > De : Guilherme Viteri 
> > Envoyé : 10 juin 2020 16:57
> > À : solr-user@lucene.apache.org 
> > Objet : [EXTERNAL] - SolR OOM error due to query injection
> >
> > Hi,
> >
> > Environment: SolR 6.6.2, with org.apache.solr.solr-core:6.1.0. This setup 
> > has been running for at least 4 years without having OutOfMemory error. (it 
> > is never too late for an OOM…)
> >
> > This week, our search tool was attacked with SQL-injection-like requests, and
> > that led to an OOM. These requests weren’t aggressive in the sense of stressing
> > the server with an excessive number of hits; however, 5 to 10 requests of this
> > nature were enough to crash the server.
> >
> > I’ve come across this link:
> > https://stackoverflow.com/questions/26862474/prevent-from-solr-query-injections-when-using-solrj
> > However, that’s not what I am after. In our case we do allow Lucene query syntax
> > and field searches like title:Title, and our ids contain dashes, so if they get
> > escaped the search won’t work properly.
> >
> > Does anyone have an idea ?
> >
> > Cheers
> > G
> >
> > Here are some of the requests that appeared in the logs in relation to the 
> > attack (see below: sorry it is messy)
> > query?q=IPP%22%29%29%29%2F%2A%2A%2FAND%2F%2A%2A%2F%28SELECT%2F%2A%2A%2F2%2A%28IF%28%28SELECT%2F%2A%2A%2F%2A%2F%2A%2A%2FFROM%2F%2A%2A%2F%28SELECT%2F%2A%2A%2FCONCAT%280x717a707871%2C%28SELECT%2F%2A%2A%2F%28ELT%283235%3D3235%2C1%29%29%29%2C0x717a626271%2C0x78%29%29s%29%2C%2F%2A%2A%2F8446744073709551610%2C%2F%2A%2A%2F8446744073709551610%29%29%29%2F%2A%2A%2FAND%2F%2A%2A%2F%28%28%28%22YBXk%22%2F%2A%2A%2FLIKE%2F%2A%2A%2F%22YBXk&species=Homo%20sapiens&types=Reaction&types=Pathway&cluster=true
> >
> > q=IPP%22%29%29%29%2F%2A%2A%2FAND%2F%2A%2A%2F%28SELECT%2F%2A%2A%2F2%2A%28IF%28%28SELECT%2F%2A%2A%2F%2A%2F%2A%2A%2FFROM%2F%2A%2A%2F%28SELECT%2F%2A%2A%2FCONCAT%280x717a707871%2C%28SELECT%2F%2A%2A%2F%28ELT%283235%3D3235%2C1%29%29%29%2C0x717a626271%2C0x78%29%29s%29%2C%2F%2A%2A%2F8446744073709551610%2C%2F%2A%2A%2F8446744073709551610%29%29%29%2F%2A%2A%2FAND%2F%2A%2A%2F%28%28%28%22rDmG%22%3D%22rDmG&species=Homo%20sapiens&types=Reaction&types=Pathway&cluster=true
> >
> > q=IPP%22%29%29%29%2F%2A%2A%2FAND%2F%2A%2A%2F%28SELECT%2F%2A%2A%2F3641%2F%2A%2A%2FFROM%28SELECT%2F%2A%2A%2FCOUNT%28%2A%29%2CCONCAT%280x717a707871%2C%28SELECT%2F%2A%2A%2F%28ELT%283641%3D3641%2C1%29%29%29%2C0x717a626271%2CFLOOR%28RAND%280%29%2A2%29%29x%2F%2A%2A%2FFROM%2F%2A%2A%2FINFORMATION_SCHEMA.PLUGINS%2F%2A%2A%2FGROUP%2F%2A%2A%2FBY%2F%2A%2A%2Fx%29a%29%2F%2A%2A%2FAND%2F%2A%2A%2F%28%28%28%22dfkM%22%2F%2A%2A%2FLIKE%2F%2A%2A%2F%22dfkM&species=Homo%20sapiens&types=Reaction&types=Pathway&cluster=true
> >
> > q=IPP%22%29%29%29%2F%2A%2A%2FAND%2F%2A%2A%2F%28SELECT%2F%2A%2A%2F3641%2F%2A%2A%2FFROM%28SELECT%2F%2A%2A%2FCOUNT%28%2A%29%2CCONCAT%280x717a707871%2C%28SELECT%2F%2A%2A%2F%28ELT%283641%3D3641%2C1%29%29%29%2C0x717a626271%2CFLOOR%28RAND%280%29%2A2%29%29x%2F%2A%2A%2FFROM%2F%2A%2A%2FINFORMATION_SCHEMA.PLUGINS%2F%2A%2A%2FGROUP%2F%2A%2A%2FBY%2F%2A%2A%2Fx%29a%29%2F%2A%2A%2FAND%2F%2A%2A%2F%28%28%28%22yBhx%22%3D%22yBhx&species=Homo%20sapiens&types=Reaction&types=Pathway&cluster=true
> >
> > q=IPP%22%29%29%29%2F%2A%2A%2FAND%2F%2A%2A%2F1695%3DCTXSYS.DRITHSX.SN%28

Facet Performance

2020-06-11 Thread James Bodkin
We’ve been running a load test against our index and have noticed that the 
facet queries are significantly slower than we would like.
Currently these types of queries are taking several seconds to execute, and we are
wondering if it would be possible to speed them up.
Repeating the same query over and over does not improve the response time, so it
does not appear to utilise any caching.
Ideally we would like to be targeting a response time of tens or hundreds
of milliseconds if possible.

An example query that is taking around 2-3 seconds to execute is:

q=*.*
facet=true
facet.field=D_UserRatingGte
facet.mincount=1
facet.limit=-1
rows=0

"response":{"numFound":18979503,"start":0,"maxScore":1.0,"docs":[]}
"facet_counts":{
"facet_queries":{},
"facet_fields":{
  "D_UserRatingGte":[
"1575",16614238,
"1576",16614238,
"1577",16614238,
"1578",16065938,
"1579",12079545,
"1580",458799]},
"facet_ranges":{},
"facet_intervals":{},
"facet_heatmaps":{}}}

I have also tried the equivalent query using the JSON Facet API with the same 
outcome of slow response time.
Additionally I have tried changing the facet method (on both facet apis) with 
the same outcome of slow response time.

The underlying field for the above query is configured as a solr.IntPointField 
with docValues, indexed and multiValued set to true.
The index has just under 19 million documents and the physical size on disk is 
10.95GB. The index is read-only and consists of 4 segments with 0 deletions.
We’re running standalone Solr 8.3.1 with a 8GB Heap and the underlying Google 
Cloud Virtual Machine in our load test environment has 6 vCPUs, 32G RAM and 
100GB SSD.

Would anyone be able to point me in a direction to either improve the
performance or understand whether the current performance is expected?

Kind Regards,

James Bodkin


Re: combined multiple bf into a single bf

2020-06-11 Thread Paras Lehana
Although you can use nested maps for injecting variable values, I would
have used an intermediate script that builds the Solr URL.
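
For illustration, one untested way to fold the four bf parameters from the question
below into two (or one) is to wrap the map() calls in sum(); the field names and
boosts are the ones Derek lists, while the query text and the edismax parser are
assumptions:

// Hedged sketch: combining the bf map() boosts with sum(); untested on Solr 4.10.4.
import org.apache.solr.client.solrj.SolrQuery;

public class CombinedBoostExample {
    public static void main(String[] args) {
        SolrQuery q = new SolrQuery("some search terms");
        q.set("defType", "edismax");
        // One bf per field instead of two each:
        q.add("bf", "sum(map(response_rate,3,3,0.6,0),map(response_rate,2,2,0.3,0))");
        q.add("bf", "sum(map(response_time,4,4,0.6,0),map(response_time,3,3,0.3,0))");
        // Or everything in a single bf:
        // q.set("bf", "sum(map(response_rate,3,3,0.6,0),map(response_rate,2,2,0.3,0),"
        //     + "map(response_time,4,4,0.6,0),map(response_time,3,3,0.3,0))");
        System.out.println(q);
    }
}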

On Wed, 10 Jun 2020 at 08:58, Derek Poh  wrote:

> I have the following boost requirement using bf
>
> response_rate is 3, boost by ^0.6
> response_rate is 2, boost by ^0.3
> response_time is 4, boost by ^0.6
> response_time is 3, boost by ^0.3
>
> I am using a bf for each of the boost requirement,
>
>
> bf=map(response_rate,3,3,0.6,0)&bf=map(response_rate,2,2,0.3,0)&bf=map(response_time,4,4,0.6,0)&bf=map(response_time,3,3,0.3,0)
>
> I am trying to reduce on the number of parameters in the query.
>
> Is it possible to combined them into 1 or 2 bf?
>
> Running Solr 4.10.4.
>
> Derek
>
> --
> CONFIDENTIALITY NOTICE
>
> This e-mail (including any attachments) may contain confidential and/or
> privileged information. If you are not the intended recipient or have
> received this e-mail in error, please inform the sender immediately and
> delete this e-mail (including any attachments) from your computer, and you
> must not use, disclose to anyone else or copy this e-mail (including any
> attachments), whether in whole or in part.
>
> This e-mail and any reply to it may be monitored for security, legal,
> regulatory compliance and/or other appropriate reasons.
>
>

-- 
-- 
Regards,

*Paras Lehana* [65871]
Development Engineer, *Auto-Suggest*,
IndiaMART InterMESH Ltd,

11th Floor, Tower 2, Assotech Business Cresterra,
Plot No. 22, Sector 135, Noida, Uttar Pradesh, India 201305

Mob.: +91-9560911996
Work: 0120-4056700 | Extn:
*1196*



Re: Periodically 100% cpu and high load/IO

2020-06-11 Thread Marvin Bredal Lillehaug
This may have been a bunch of simultaneous queries, not index merging.

Managed to get more thread dumps, and they all have several threads like

> "qtp385739920-1050156" #1050156 prio=5 os_prio=0 cpu=38864.53ms
> elapsed=7414.80s tid=0x7f62080eb000 nid=0x5f8a runnable
>  [0x7f611748]
>java.lang.Thread.State: RUNNABLE
> at
> org.apache.lucene.codecs.lucene80.IndexedDISI.advance(IndexedDISI.java:384)
> at
> org.apache.lucene.codecs.lucene80.Lucene80DocValuesProducer$SparseNumericDocValues.advance(Lucene80DocValuesProducer.java:434)
> at org.apache.lucene.search.ReqExclScorer$1.matches(ReqExclScorer.java:154)
> at
> org.apache.lucene.search.DisjunctionScorer$TwoPhase.matches(DisjunctionScorer.java:159)
> at
> org.apache.lucene.search.ConjunctionDISI$ConjunctionTwoPhaseIterator.matches(ConjunctionDISI.java:345)
> at
> org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:274)
> at org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:218)
> at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:39)
> at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:661)
> at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:445)
> at
> org.apache.solr.search.DocSetUtil.createDocSetGeneric(DocSetUtil.java:151)
> at org.apache.solr.search.DocSetUtil.createDocSet(DocSetUtil.java:140)
> at
> org.apache.solr.search.SolrIndexSearcher.getDocSetNC(SolrIndexSearcher.java:1191)
> at
> org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(SolrIndexSearcher.java:831)
> at
> org.apache.solr.search.SolrIndexSearcher.getProcessedFilter(SolrIndexSearcher.java:1039)
> at
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1554)
> at
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1434)
> at
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:581)
> at
> org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1487)
> at
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:399)
> at
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:328)
> at
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:211)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2596)


 with queries like

> q=*:*
>
> &fl=docId,id,typeId,versjonsId,start_date,out_link_*,regulars,spatials,associations
> &fq=(typeId:945 AND exportId:hex0945)
> &fq=docType:object
> &fq=+start_date:[0 TO 20200611] +(end_date:{20200611 TO *] (*:*
> -end_date:[* TO *])

where all fields except start_date and end_date have docValues.

So I guess we'll just have to adjust our rate limiter so that Solr is not
overwhelmed.
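
For reference, a very small sketch of the kind of client-side rate limiting meant
above, capping the number of concurrent queries sent to Solr; the permit count is
arbitrary and not taken from this thread:

// Hedged sketch: cap in-flight Solr queries with a semaphore. The limit of 8 is arbitrary.
import java.util.concurrent.Semaphore;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.response.QueryResponse;

public class ThrottledSearcher {
    private final SolrClient client;
    private final Semaphore permits = new Semaphore(8);  // at most 8 concurrent queries

    public ThrottledSearcher(SolrClient client) {
        this.client = client;
    }

    public QueryResponse query(SolrQuery q) throws Exception {
        permits.acquire();               // blocks callers beyond the limit
        try {
            return client.query(q);
        } finally {
            permits.release();
        }
    }
}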


On Sun, Jun 7, 2020 at 8:21 PM Marvin Bredal Lillehaug <
marvin.lilleh...@gmail.com> wrote:

> We have an upgrade to 8.5.2 on the way to production, so we'll see.
>
> We are running with default merge config, and based on the description on
> https://lucene.apache.org/solr/guide/8_5/taking-solr-to-production.html#dynamic-defaults-for-concurrentmergescheduler
> I don't understand why all cpus are maxed.
>
>
> On Sun, 7 Jun 2020, 16:59 Phill Campbell,  wrote:
>
>> Can you switch to 8.5.2 and see if it still happens.
>> In my testing of 8.5.1 I had one of my machines get really hot and bring
>> the entire system to a crawl.
>> What seemed to cause my issue was memory usage. I could give the JVM
>> running Solr less heap and the problem wouldn’t manifest.
>> I haven’t seen it with 8.5.2. Just a thought.
>>
>> > On Jun 3, 2020, at 8:27 AM, Marvin Bredal Lillehaug <
>> marvin.lilleh...@gmail.com> wrote:
>> >
>> > Yes, there are light/moderate indexing most of the time.
>> > The setup has NRT replicas. And the shards are around 45GB each.
>> > Index merging has been the hypothesis for some time, but we haven't
>> dared
>> > to activate info stream logging.
>> >
>> > On Wed, Jun 3, 2020 at 2:34 PM Erick Erickson 
>> > wrote:
>> >
>> >> One possibility is merging index segments. When this happens, are you
>> >> actively indexing? And are these NRT replicas or TLOG/PULL? If the
>> latter,
>> >> are your TLOG leaders on the affected machines?
>> >>
>> >> Best,
>> >> Erick
>> >>
>> >>> On Jun 3, 2020, at 3:57 AM, Marvin Bredal Lillehaug <
>> >> marvin.lilleh...@gmail.com> wrote:
>> >>>
>> >>> Hi,
>> >>> We have a cluster with five Solr

RE: Timeout issue while doing update operations from clients (using SolrJ)

2020-06-11 Thread Kommu, Vinodh K.
Hi,

Can someone shed some light on this issue please?


Regards,
Vinodh Kumar K
Middleware Cache and Search Engineering
DTCC Chennai



From: Kommu, Vinodh K.
Sent: Wednesday, June 10, 2020 10:43 PM
To: solr-user@lucene.apache.org
Subject: RE: Timeout issue while doing update operations from clients (using 
SolrJ)

We are getting the following socket timeout exception during this error. Any idea
on this?

ERROR (updateExecutor-3-thread-1392-processing-n:hostname:1100_solr 
x:TestCollection_shard6_replica_n10 c:TestCollection s:shard6 r:core_node13) 
[c:TestCollection s:shard6 r:core_node13 x:TestCollection_shard6_replica_n10] 
o.a.s.u.SolrCmdDistributor org.apache.solr.client.solrj.SolrServerException: 
Timeout occured while waiting response from server at: 
https://hostname:1100/solr/TestCollection_shard6_replica_n34
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient.request(ConcurrentUpdateSolrClient.java:491)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1260)
at 
org.apache.solr.update.SolrCmdDistributor.doRequest(SolrCmdDistributor.java:326)
at 
org.apache.solr.update.SolrCmdDistributor.lambda$submit$0(SolrCmdDistributor.java:315)
at 
org.apache.solr.update.SolrCmdDistributor.dt_access$675(SolrCmdDistributor.java)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.dt_access$303(ExecutorUtil.java)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
at sun.security.ssl.InputRecord.read(InputRecord.java:503)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:975)
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:933)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
at 
org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
at 
org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153)
at 
org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:282)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
at 
org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
at 
org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:165)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
at 
org.apache.solr.util.stats.InstrumentedHttpRequestExecutor.execute(InstrumentedHttpRequestExecutor.java:120)
at 
org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
at 
org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at 
org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at 
org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
at 
org.apache.solr.client.solrj.impl

Re: using solr to extarct keywords from a long text?

2020-06-11 Thread David Zimmermann
Hi Mikhail

Your suggested solution does seem to work for me. Thank you so much for the 
help!

Best regards
David

For future reference, in case someone else wants to do the same, here are some more
details about the steps needed:
- The MoreLikeThis handler is not in the default solrconfig.xml anymore (I’m
using 8.2). I had to add it.
- The text needs to be sent to Solr as stream.body (/mlt?stream.body=text)
- Streaming needs to be activated inside solrconfig.xml by adding 

Re: [EXTERNAL] - SolR OOM error due to query injection

2020-06-11 Thread Guilherme Viteri
Hi Isabelle
Thanks for your input.
In fact, Solr returns 30 results for these queries. Why does it behave in a
way that causes an OOM? Also, the commands are SQL commands, and Solr would
parse them as normal characters…

Thanks


> On 10 Jun 2020, at 22:50, Isabelle Giguere  
> wrote:
> 
> Hi Guilherme;
> 
> The only thing I can think of right now is the number of non-alphanumeric 
> characters.
> 
> In the first 'q' in your examples, after resolving the character escapes, 1/3 
> of characters are non-alphanumeric (* / = , etc).
> 
> Maybe filter-out queries that contain too many non-alphanumeric characters 
> before sending the request to Solr ?  Whatever "too many" could be.
> 
> Isabelle Giguère
> Computational Linguist & Java Developer
> Linguiste informaticienne & développeur java
> 
> 
> 
> De : Guilherme Viteri 
> Envoyé : 10 juin 2020 16:57
> À : solr-user@lucene.apache.org 
> Objet : [EXTERNAL] - SolR OOM error due to query injection
> 
> Hi,
> 
> Environment: SolR 6.6.2, with org.apache.solr.solr-core:6.1.0. This setup has 
> been running for at least 4 years without having OutOfMemory error. (it is 
> never too late for an OOM…)
> 
> This week, our search tool was attacked with SQL-injection-like requests, and
> that led to an OOM. These requests weren’t aggressive in the sense of stressing
> the server with an excessive number of hits; however, 5 to 10 requests of this
> nature were enough to crash the server.
> 
> I’ve come across this link:
> https://stackoverflow.com/questions/26862474/prevent-from-solr-query-injections-when-using-solrj
> However, that’s not what I am after. In our case we do allow Lucene query syntax
> and field searches like title:Title, and our ids contain dashes, so if they get
> escaped the search won’t work properly.
> 
> Does anyone have an idea ?
> 
> Cheers
> G
> 
> Here are some of the requests that appeared in the logs in relation to the 
> attack (see below: sorry it is messy)
> query?q=IPP%22%29%29%29%2F%2A%2A%2FAND%2F%2A%2A%2F%28SELECT%2F%2A%2A%2F2%2A%28IF%28%28SELECT%2F%2A%2A%2F%2A%2F%2A%2A%2FFROM%2F%2A%2A%2F%28SELECT%2F%2A%2A%2FCONCAT%280x717a707871%2C%28SELECT%2F%2A%2A%2F%28ELT%283235%3D3235%2C1%29%29%29%2C0x717a626271%2C0x78%29%29s%29%2C%2F%2A%2A%2F8446744073709551610%2C%2F%2A%2A%2F8446744073709551610%29%29%29%2F%2A%2A%2FAND%2F%2A%2A%2F%28%28%28%22YBXk%22%2F%2A%2A%2FLIKE%2F%2A%2A%2F%22YBXk&species=Homo%20sapiens&types=Reaction&types=Pathway&cluster=true
> 
> q=IPP%22%29%29%29%2F%2A%2A%2FAND%2F%2A%2A%2F%28SELECT%2F%2A%2A%2F2%2A%28IF%28%28SELECT%2F%2A%2A%2F%2A%2F%2A%2A%2FFROM%2F%2A%2A%2F%28SELECT%2F%2A%2A%2FCONCAT%280x717a707871%2C%28SELECT%2F%2A%2A%2F%28ELT%283235%3D3235%2C1%29%29%29%2C0x717a626271%2C0x78%29%29s%29%2C%2F%2A%2A%2F8446744073709551610%2C%2F%2A%2A%2F8446744073709551610%29%29%29%2F%2A%2A%2FAND%2F%2A%2A%2F%28%28%28%22rDmG%22%3D%22rDmG&species=Homo%20sapiens&types=Reaction&types=Pathway&cluster=true
> 
> q=IPP%22%29%29%29%2F%2A%2A%2FAND%2F%2A%2A%2F%28SELECT%2F%2A%2A%2F3641%2F%2A%2A%2FFROM%28SELECT%2F%2A%2A%2FCOUNT%28%2A%29%2CCONCAT%280x717a707871%2C%28SELECT%2F%2A%2A%2F%28ELT%283641%3D3641%2C1%29%29%29%2C0x717a626271%2CFLOOR%28RAND%280%29%2A2%29%29x%2F%2A%2A%2FFROM%2F%2A%2A%2FINFORMATION_SCHEMA.PLUGINS%2F%2A%2A%2FGROUP%2F%2A%2A%2FBY%2F%2A%2A%2Fx%29a%29%2F%2A%2A%2FAND%2F%2A%2A%2F%28%28%28%22dfkM%22%2F%2A%2A%2FLIKE%2F%2A%2A%2F%22dfkM&species=Homo%20sapiens&types=Reaction&types=Pathway&cluster=true
> 
> q=IPP%22%29%29%29%2F%2A%2A%2FAND%2F%2A%2A%2F%28SELECT%2F%2A%2A%2F3641%2F%2A%2A%2FFROM%28SELECT%2F%2A%2A%2FCOUNT%28%2A%29%2CCONCAT%280x717a707871%2C%28SELECT%2F%2A%2A%2F%28ELT%283641%3D3641%2C1%29%29%29%2C0x717a626271%2CFLOOR%28RAND%280%29%2A2%29%29x%2F%2A%2A%2FFROM%2F%2A%2A%2FINFORMATION_SCHEMA.PLUGINS%2F%2A%2A%2FGROUP%2F%2A%2A%2FBY%2F%2A%2A%2Fx%29a%29%2F%2A%2A%2FAND%2F%2A%2A%2F%28%28%28%22yBhx%22%3D%22yBhx&species=Homo%20sapiens&types=Reaction&types=Pathway&cluster=true
> 
> q=IPP%22%29%29%29%2F%2A%2A%2FAND%2F%2A%2A%2F1695%3DCTXSYS.DRITHSX.SN%281695%2C%28CHR%28113%29%7C%7CCHR%28122%29%7C%7CCHR%28112%29%7C%7CCHR%28120%29%7C%7CCHR%28113%29%7C%7C%28SELECT%2F%2A%2A%2F%28CASE%2F%2A%2A%2FWHEN%2F%2A%2A%2F%281695%3D1695%29%2F%2A%2A%2FTHEN%2F%2A%2A%2F1%2F%2A%2A%2FELSE%2F%2A%2A%2F0%2F%2A%2A%2FEND%29%2F%2A%2A%2FFROM%2F%2A%2A%2FDUAL%29%7C%7CCHR%28113%29%7C%7CCHR%28122%29%7C%7CCHR%2898%29%7C%7CCHR%2898%29%7C%7CCHR%28113%29%29%29%2F%2A%2A%2FAND%2F%2A%2A%2F%28%28%28%22eEdc%22%2F%2A%2A%2FLIKE%2F%2A%2A%2F%22eEdc&species=Homo%20sapiens&types=Reaction&types=Pathway&cluster=true
> 
> q=IPP%22%29%29%29%2F%2A%2A%2FAND%2F%2A%2A%2F1695%3DCTXSYS.DRITHSX.SN%281695%2C%28CHR%28113%29%7C%7CCHR%28122%29%7C%7CCHR%28112%29%7C%7CCHR%28120%29%7C%7CCHR%