Re: [EXTERNAL] Re: Is this list alive? I need help

2024-02-28 Thread Gus Heck
Ah, sorry, my eyes flew past the long, hard-to-read link straight to the
pretty table.

Yeah, so a grouping query that returns thousands of rows in one request is not
a good idea. If you did it paginated with cursorMark you would want to play
around with trading off the number of requests vs. the size of each request.
Very likely the optimal page size is a lot smaller than that, so long as the
looping code isn't crazy inefficient, but it might be as high as 100 or even
500. There is no way to know for any particular system and query other than
testing it.
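
For what it's worth, here is a rough sketch of what that cursorMark loop could
look like with axios (untested; the page size, the field list, and the
assumption that call_id is the uniqueKey are all mine, not from your schema,
and note it drops the group parameters since cursorMark and grouping don't
combine):

const axios = require('axios');

async function fetchAllWithCursor(businessId, pageSize) {
  const url = 'http://samisolrcld.aws01.hibu.int:8983/solr/calls/select';
  let cursorMark = '*';                        // '*' = start of the result set
  const docs = [];
  while (true) {
    const rsp = await axios.get(url, {
      params: {
        q: `business_id:${businessId} AND call_day:[20230101 TO 20240101}`,
        fl: 'business_id,call_id,call_date,call_callerno,caller_name',
        // cursorMark needs a deterministic sort that ends on the uniqueKey
        // field (assumed to be call_id here), and no start parameter.
        sort: 'call_date desc, call_id asc',
        rows: pageSize,                        // the knob to experiment with
        cursorMark: cursorMark
      }
    });
    docs.push(...rsp.data.response.docs);
    const next = rsp.data.nextCursorMark;
    if (next === cursorMark) break;            // cursor didn't advance: done
    cursorMark = next;
  }
  return docs;
}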

As for how EFS could change its performance on you, check out the references
to "bursting credits" here:
https://docs.aws.amazon.com/efs/latest/ug/performance.html

On Wed, Feb 28, 2024 at 10:55 PM Beale, Jim (US-KOP)
 wrote:

> I did send the query. Here it is:
>
>
> http://samisolrcld.aws01.hibu.int:8983/solr/calls/select?indent=true=OR=business_id,call_id,call_date,call_callerno,caller_name,dialog_merged=business_id%3A7016655681%20AND%20call_day:[20230101%20TO%2020240101}=true=call_callerno=call_date%20desc=1=true
> 
>
> I suppose all the indexes are about 150 GB so you are close.
>
> I set the limit to 10,000 or 5000 for these tests. Setting the limit at 10
> or 50 would mean that there would need to be 1000-2000 requests. That seems
> like an awful lot to me.
>
> That is interesting about the export. I will look into other types of data
> collection.
>
> Also there is no quota on the EFS. It is apparently encrypted both at rest
> and in transit. But if it is fast the first time, restarting Solr shouldn't
> change how it accesses the disk.
>
>
> Jim Beale
> Lead Software Engineer
> hibu.com
> 2201 Renaissance Boulevard, King of Prussia, PA, 19406
> Office: 610-879-3864
> Mobile: 610-220-3067
>
>
>
> -Original Message-
> From: Gus Heck 
> Sent: Wednesday, February 28, 2024 9:22 PM
> To: users@solr.apache.org
> Subject: Re: [EXTERNAL] Re: Is this list alive? I need help
>
> Your description leads me to believe that at worst you have ~20M docs in
> one index. If the average doc size is 5k or so, that sounds like 100GB. This
> is smallish, and across 3 machines it ought to be fine. Your time 1 values
> are very slow to begin with. Unfortunately you didn't send us the query,
> only the code that generates the query. A key bit not shown is what value
> you are passing in for limit (which is then set for rows). It *should* be
> something like 10 or 25 or 50. It should NOT be 1000 or larger, etc... but
> the fact that you have hardcoded the start to zero makes me think you are not
> paging and you are doing something in the "NOT" realm. If you are trying to
> export ALL matches to a query you'd be better off using /export rather than
> /select (requires docValues for all fields involved) or, if you don't have
> docValues, use the cursorMark feature to iteratively fetch pages of data.
>
> If you ask for N rows, then each node sends back N documents, the
> coordinator sorts all 3N of them, and then sends the top N to the client.
>
> Note that the grouping feature you are using can be heavy too. To do that
> in an /export context you would probably have to use streaming expressions
> and even there you would have to design carefully to avoid trying to hold
> large fractions of the index in memory while you formed groups...
>
> As for the change in speed, I'm still betting on some sort of quota for
> your EFS access (R5 instances have fixed CPU availability, so that's not it).
> However, it's worth looking at your GC logs in case your (probable) large
> queries are getting you into trouble with memory/GC. As with any performance
> troubleshooting you'll want to have eyes on the CPU load, disk I/O bytes,
> disk IOPS and network bandwidth.
>
> Oh, one more thing that comes to mind. Make sure you don't configure ANY
> swap drive on your server. If the OS starts trying to put Solr's cached
> memory on a swap disk, the query times just go in the trash instantly. In
> most cases (YMMV) you would MUCH rather crash the server than have it start
> using swap (because then you know you need a bigger server, rather than
> silently serving dog-slow results while you limp along).
>
> -Gus
>
> On Wed, Feb 28, 2024 at 4:09 PM Beale, Jim (US-KOP)
>  wrote:
>
> > Here is the performance for this query on these nodes. You saw the
> > code in a previous email.
> >
> >
> >
> >
> > http://samisolrcld.aws01.hibu.int:8983/solr/calls/select?indent=true
> > .op=OR=business_id,call_id,call_date,call_callerno,caller_name,dial
> > og_merged=business_id%3A7016655681%20AND%20call_day:[20230101%20TO%2
> > 020240101}=true=call_callerno=call_date%20desc&
> > rows=1=true
> >  

Re: [EXTERNAL] Is this list alive? I need help

2024-02-28 Thread Walter Underwood
What does the CPU utilization look like while that query is executing? If it is 
using 100% of one CPU, then it is CPU limited. If it is using less than 100% of 
one CPU, then it is IO limited.

Regardless, that is a VERY expensive query.

A shared EFS disk is a poor system design for Solr. Each node should have its 
own EBS volume, preferably GP3.

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Feb 28, 2024, at 7:51 PM, Beale, Jim (US-KOP)  
> wrote:
> 
> I did send the query. Here it is:
> 
> http://samisolrcld.aws01.hibu.int:8983/solr/calls/select?indent=true=OR=business_id,call_id,call_date,call_callerno,caller_name,dialog_merged=business_id%3A7016655681%20AND%20call_day:[20230101%20TO%2020240101}=true=call_callerno=call_date%20desc=1=true
> 
> I suppose all the indexes are about 150 GB so you are close. 
> 
> I set the limit to 10,000 or 5000 for these tests. Setting the limit at 10 or 
> 50 would mean that there would need to be 1000-2000 requests. That seems like 
> an awful lot to me. 
> 
> That is interesting about the export. I will look into other types of data 
> collection.
> 
> Also there is no quota on the EFS. It is apparently encrypted both at rest and
> in transit. But if it is fast the first time, restarting Solr shouldn't change
> how it accesses the disk.
> 
> 
> Jim Beale
> Lead Software Engineer 
> hibu.com
> 2201 Renaissance Boulevard, King of Prussia, PA, 19406
> Office: 610-879-3864
> Mobile: 610-220-3067
>  
> 
> 
> -Original Message-
> From: Gus Heck  
> Sent: Wednesday, February 28, 2024 9:22 PM
> To: users@solr.apache.org
> Subject: Re: [EXTERNAL] Re: Is this list alive? I need help
> 
> Your description leads me to believe that at worst you have ~20M docs in
> one index. If the average doc size is 5k or so, that sounds like 100GB. This
> is smallish, and across 3 machines it ought to be fine. Your time 1 values
> are very slow to begin with. Unfortunately you didn't send us the query,
> only the code that generates the query. A key bit not shown is what value
> you are passing in for limit (which is then set for rows). It *should* be
> something like 10 or 25 or 50. It should NOT be 1000 or larger, etc... but
> the fact that you have hardcoded the start to zero makes me think you are not
> paging and you are doing something in the "NOT" realm. If you are trying to
> export ALL matches to a query you'd be better off using /export rather than
> /select (requires docValues for all fields involved) or, if you don't have
> docValues, use the cursorMark feature to iteratively fetch pages of data.
>
> If you ask for N rows, then each node sends back N documents, the
> coordinator sorts all 3N of them, and then sends the top N to the client.
>
> Note that the grouping feature you are using can be heavy too. To do that
> in an /export context you would probably have to use streaming expressions
> and even there you would have to design carefully to avoid trying to hold
> large fractions of the index in memory while you formed groups...
>
> As for the change in speed, I'm still betting on some sort of quota for
> your EFS access (R5 instances have fixed CPU availability, so that's not it).
> However, it's worth looking at your GC logs in case your (probable) large
> queries are getting you into trouble with memory/GC. As with any performance
> troubleshooting you'll want to have eyes on the CPU load, disk I/O bytes,
> disk IOPS and network bandwidth.
>
> Oh, one more thing that comes to mind. Make sure you don't configure ANY
> swap drive on your server. If the OS starts trying to put Solr's cached
> memory on a swap disk, the query times just go in the trash instantly. In
> most cases (YMMV) you would MUCH rather crash the server than have it start
> using swap (because then you know you need a bigger server, rather than
> silently serving dog-slow results while you limp along).
> 
> -Gus
> 
> On Wed, Feb 28, 2024 at 4:09 PM Beale, Jim (US-KOP) 
>  wrote:
> 
>> Here is the performance for this query on these nodes. You saw the 
>> code in a previous email.
>> 
>> 
>> 
>> 
>> http://samisolrcld.aws01.hibu.int:8983/solr/calls/select?indent=true
>> .op=OR=business_id,call_id,call_date,call_callerno,caller_name,dial
>> og_merged=business_id%3A7016655681%20AND%20call_day:[20230101%20TO%2
>> 020240101}=true=call_callerno=call_date%20desc&
>> rows=1=true 
>> > q.op=OR=business_id,call_id,call_date,call_callerno,caller_name,dia
>> log_merged=business_id%3A7016655681%20AND%20call_day:%5b20230101%20T
>> O%2020240101%7d=true=call_callerno=call_date%20
>> desc=1=true>
>> 
>> 
>> 
>> The two times given are right after a restart and the next day, or 
> sometime a few hours later. The only difference is how long Solr has been
> running. I can’t understand what makes it run so slowly after a

RE: [EXTERNAL] Re: Is this list alive? I need help

2024-02-28 Thread Beale, Jim (US-KOP)
I did send the query. Here it is:

http://samisolrcld.aws01.hibu.int:8983/solr/calls/select?indent=true=OR=business_id,call_id,call_date,call_callerno,caller_name,dialog_merged=business_id%3A7016655681%20AND%20call_day:[20230101%20TO%2020240101}=true=call_callerno=call_date%20desc=1=true

I suppose all the indexes are about 150 GB so you are close. 

I set the limit to 10,000 or 5000 for these tests. Setting the limit at 10 or 
50 would mean that there would need to be 1000-2000 requests. That seems like 
an awful lot to me. 

That is interesting about the export. I will look into other types of data 
collection.

Also there is no quota on the EFS. It is apparently encrypted both at rest and in
transit. But if it is fast the first time, restarting Solr shouldn't change how it
accesses the disk.


Jim Beale
Lead Software Engineer 
hibu.com
2201 Renaissance Boulevard, King of Prussia, PA, 19406
Office: 610-879-3864
Mobile: 610-220-3067
 


-Original Message-
From: Gus Heck  
Sent: Wednesday, February 28, 2024 9:22 PM
To: users@solr.apache.org
Subject: Re: [EXTERNAL] Re: Is this list alive? I need help

Your description leads me to believe that at worst you have ~20M docs in
one index. If the average doc size is 5k or so, that sounds like 100GB. This
is smallish, and across 3 machines it ought to be fine. Your time 1 values
are very slow to begin with. Unfortunately you didn't send us the query,
only the code that generates the query. A key bit not shown is what value
you are passing in for limit (which is then set for rows). It *should* be
something like 10 or 25 or 50. It should NOT be 1000 or larger, etc... but
the fact that you have hardcoded the start to zero makes me think you are not
paging and you are doing something in the "NOT" realm. If you are trying to
export ALL matches to a query you'd be better off using /export rather than
/select (requires docValues for all fields involved) or, if you don't have
docValues, use the cursorMark feature to iteratively fetch pages of data.

If you ask for N rows, then each node sends back N documents, the coordinator
sorts all 3N of them, and then sends the top N to the client.

Note that the grouping feature you are using can be heavy too. To do that in an
/export context you would probably have to use streaming expressions and even
there you would have to design carefully to avoid trying to hold large
fractions of the index in memory while you formed groups...

As for the change in speed, I'm still betting on some sort of quota for your
EFS access (R5 instances have fixed CPU availability, so that's not it).
However, it's worth looking at your GC logs in case your (probable) large
queries are getting you into trouble with memory/GC. As with any performance
troubleshooting you'll want to have eyes on the CPU load, disk I/O bytes, disk
IOPS and network bandwidth.

Oh, one more thing that comes to mind. Make sure you don't configure ANY swap
drive on your server. If the OS starts trying to put Solr's cached memory on a
swap disk, the query times just go in the trash instantly. In most cases (YMMV)
you would MUCH rather crash the server than have it start using swap (because
then you know you need a bigger server, rather than silently serving dog-slow
results while you limp along).

-Gus

On Wed, Feb 28, 2024 at 4:09 PM Beale, Jim (US-KOP) 
 wrote:

> Here is the performance for this query on these nodes. You saw the 
> code in a previous email.
>
>
>
>
> http://samisolrcld.aws01.hibu.int:8983/solr/calls/select?indent=true
> .op=OR=business_id,call_id,call_date,call_callerno,caller_name,dial
> og_merged=business_id%3A7016655681%20AND%20call_day:[20230101%20TO%2
> 020240101}=true=call_callerno=call_date%20desc&
> rows=1=true 
>  q.op=OR=business_id,call_id,call_date,call_callerno,caller_name,dia
> log_merged=business_id%3A7016655681%20AND%20call_day:%5b20230101%20T
> O%2020240101%7d=true=call_callerno=call_date%20
> desc=1=true>
>
>
>
> The two times given are right after a restart and the next day, or 
> sometime a few hours later. The only difference is how long Solr has been
> running. I can’t understand what makes it run so slowly after a short while.
>
>
>
> Business_id   Time 1   Time 2
> 7016274253    11.572   23.397
> 7010707194    21.941   21.414
> 701491         9.516   39.051
> 7029931968    10.755   59.196
> 7014676602    14.508   14.083
> 7004551760    12.873   36.856
> 7016274253     1.792   17.415
> 7010707194     5.671   25.442
> 701491         6.84    36.244
> 7029931968     6.291   38.483
> 7014676602     7.643   12.584
> 7004551760     5.669   21.977
> 7029931968     8.293   36.688
> 7008606979    16.976   30.569
> 7002264530    13.862   35.113
> 7017281920    10.1     31.914
> 701491         8.665   35.141
> 7058630709

Re: [EXTERNAL] Re: Is this list alive? I need help

2024-02-28 Thread Gus Heck
Your description leads me to believe that at worst you have ~20M docs in
one index. If the average doc size is 5k or so, that sounds like 100GB. This
is smallish, and across 3 machines it ought to be fine. Your time 1 values
are very slow to begin with. Unfortunately you didn't send us the query,
only the code that generates the query. A key bit not shown is what value
you are passing in for limit (which is then set for rows). It *should* be
something like 10 or 25 or 50. It should NOT be 1000 or larger, etc... but
the fact that you have hardcoded the start to zero makes me think you are not
paging and you are doing something in the "NOT" realm. If you are trying to
export ALL matches to a query you'd be better off using /export rather than
/select (requires docValues for all fields involved) or, if you don't have
docValues, use the cursorMark feature to iteratively fetch pages of data.
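
As a rough, untested sketch (field names copied from your query; whether they
all have docValues is an assumption on my part), pulling everything through
/export instead could look like:

const axios = require('axios');

async function exportCalls(businessId) {
  // /export streams the full result set, but every field in fl and sort
  // must have docValues, and a sort is required.
  const rsp = await axios.get('http://samisolrcld.aws01.hibu.int:8983/solr/calls/export', {
    params: {
      q: `business_id:${businessId} AND call_day:[20230101 TO 20240101}`,
      fl: 'business_id,call_id,call_date,call_callerno,caller_name',
      sort: 'call_date desc'
    }
  });
  return rsp.data.response.docs;
}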

If you ask for N rows, then each node sends back N documents, the coordinator
sorts all 3N of them, and then sends the top N to the client.

Note that the grouping feature you are using can be heavy too. To do that
in an /export context you would probably have to use streaming expressions
and even there you would have to design carefully to avoid trying to hold
large fractions of the index in memory while you formed groups...
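
Very roughly (untested, and assuming the fields involved all have docValues), a
streaming expression for the one-row-per-caller behaviour might look like the
sketch below; note it comes back ordered by caller number rather than
call_date:

const axios = require('axios');

async function uniqueCallers(businessId) {
  // unique() keeps the first tuple per call_callerno; the inner search()
  // must be sorted on the same field it de-duplicates over.
  const expr = `unique(
    search(calls,
           qt="/export",
           q="business_id:${businessId} AND call_day:[20230101 TO 20240101}",
           fl="business_id,call_id,call_date,call_callerno,caller_name",
           sort="call_callerno asc"),
    over="call_callerno")`;
  const rsp = await axios.get('http://samisolrcld.aws01.hibu.int:8983/solr/calls/stream',
                              { params: { expr } });
  return rsp.data['result-set'].docs;   // the last doc is an EOF marker
}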

As for the change in speed, I'm still betting on some sort of quota for your
EFS access (R5 instances have fixed CPU availability, so that's not it).
However, it's worth looking at your GC logs in case your (probable) large
queries are getting you into trouble with memory/GC. As with any performance
troubleshooting you'll want to have eyes on the CPU load, disk I/O bytes,
disk IOPS and network bandwidth.
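
If it helps, Solr's own metrics API can give a rough view of some of that from
the search side. A sketch only (untested; exactly which "os." gauges show up
depends on the JVM in use):

const axios = require('axios');

// Poll Solr's metrics API once a minute for the JVM/OS gauges.
setInterval(async () => {
  const rsp = await axios.get('http://samisolrcld.aws01.hibu.int:8983/solr/admin/metrics', {
    params: { group: 'jvm', prefix: 'os.' }
  });
  console.log(new Date().toISOString(), JSON.stringify(rsp.data.metrics));
}, 60000);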

Oh, one more thing that comes to mind. Make sure you don't configure ANY
swap drive on your server. If the OS starts trying to put Solr's cached
memory on a swap disk, the query times just go in the trash instantly. In
most cases (YMMV) you would MUCH rather crash the server than have it start
using swap (because then you know you need a bigger server, rather than
silently serving dog-slow results while you limp along).

-Gus

On Wed, Feb 28, 2024 at 4:09 PM Beale, Jim (US-KOP)
 wrote:

> Here is the performance for this query on these nodes. You saw the code in
> a previous email.
>
>
>
>
> http://samisolrcld.aws01.hibu.int:8983/solr/calls/select?indent=true=OR=business_id,call_id,call_date,call_callerno,caller_name,dialog_merged=business_id%3A7016655681%20AND%20call_day:[20230101%20TO%2020240101}=true=call_callerno=call_date%20desc=1=true
> 
>
>
>
> The two times given are right after a restart and the next day, or
> sometime a few hours later. The only difference is how long Solr has been
> running. I can’t understand what makes it run so slowly after a short while.
>
>
>
> Business_id   Time 1   Time 2
> 7016274253    11.572   23.397
> 7010707194    21.941   21.414
> 701491         9.516   39.051
> 7029931968    10.755   59.196
> 7014676602    14.508   14.083
> 7004551760    12.873   36.856
> 7016274253     1.792   17.415
> 7010707194     5.671   25.442
> 701491         6.84    36.244
> 7029931968     6.291   38.483
> 7014676602     7.643   12.584
> 7004551760     5.669   21.977
> 7029931968     8.293   36.688
> 7008606979    16.976   30.569
> 7002264530    13.862   35.113
> 7017281920    10.1     31.914
> 701491         8.665   35.141
> 7058630709    11.236   38.104
> 7011363889    10.977   19.72
> 7016319075    15.763   26.023
> 7053262466    10.917   48.3
> 7000313815     9.786   24.617
> 7015187150     8.312   29.485
> 7016381845    11.51    34.545
> 7016379523    10.543   29.27
> 7026102159     6.047   30.381
> 7010707194     8.298   27.069
> 7016508018     7.98    34.48
> 7016280579     5.443   26.617
> 7016302809     3.491   12.578
> 7016259866     7.723   33.462
> 7016390730    11.358   32.997
> 7013498165     8.214   26.004
> 7016392929     6.612   19.711
> 7007737612     2.198    4.19
> 7012687678     8.627   35.342
> 7016606704     5.951   21.732
> 7007870203     2.524   16.534
> 7016268227     6.296   25.651
> 7016405011     3.288   18.541
> 7016424246     9.756   31.243
> 7000336592     5.465   31.486
> 7004696397     4.713   29.528
> 7016279283     2.473   24.243
> 7016623672     6.958   35.96
> 7016582537     5.112   33.475
> 7015713947     5.162   25.972
> 7003530665     8.223   26.549
> 7012825693     7.4     16.849
> 7010707194     6.781   23.835
> 7079272278     7.793   24.686
>
>
>
> *Jim Beale*
>
> *Lead Software Engineer *
>
> *hibu.com 

Re: Multiple query parsers syntax

2024-02-28 Thread rajani m
Thank you Hoss, the slides and presentation are gold. Thank you Robi and
Mikhail for looking into this one.


On Wed, Feb 28, 2024, 1:50 PM Robi Petersen  wrote:

> Thanks Hoss! I'd forgotten that link didn't work... I'd just scrolled thru
> the slides (a long time ago)
>
> On Wed, Feb 28, 2024 at 10:25 AM Chris Hostetter  >
> wrote:
>
> >
> > : I like Hoss' breakdown in this presentation for query substitution
> > : syntax... :)
> > :
> > :  the Lucene/Solr Revolution 2016 presentation by hoss
> > :  - see slideshow link
> > at
> > : top...
> >
> > FWIW: If a talk I give is ever recorded, I add link to that video from my
> > main apache page...
> >   https://home.apache.org/~hossman/
> > ...but I typically leave the slides alone and don't add a video link to
> > them.
> >
> > So for that talk: https://home.apache.org/~hossman/rev2016/
> > The Video is: https://youtu.be/qTVi7eMGe1A
> >
> >
> > -Hoss
> > http://www.lucidworks.com/
> >
>


RE: Setting up basic authentication in Solr 8.11.1 Standalone

2024-02-28 Thread Hodder, Rick (Chief Information Office - IT)
Thanks Jan, worked like a charm!


Thanks,

RICK HODDER
Staff Software Engineer
Global Specialty

The Hartford
83 Wooster Heights Rd. | 2nd floor
Danbury, CT, 06810
W: 475-329-6251
Email: richard.hod...@thehartford.com
www.thehartford.com
www.facebook.com/thehartford
twitter.com/thehartford
 



-Original Message-
From: Jan Høydahl  
Sent: Wednesday, February 28, 2024 5:22 PM
To: users@solr.apache.org
Subject: Re: Setting up basic authentication in Solr 8.11.1 Standalone

You need a few more permissions in order for that Admin screen to work. Try 
instead the default security.json generated by bin/solr auth enable (cloud 
mode):

{
  "authentication":{
   "blockUnknown": true,
   "class":"solr.BasicAuthPlugin",
   "credentials":{"solr":"cHFNAKbTL930UaGklonJT02g/NVUSbUc0cn2ssvV5sA= 
xG5Fa6oifv6deIHWnRSus4hxfq5mOxTwdwy9GZDeHgc="}
  },
  "authorization":{
   "class":"solr.RuleBasedAuthorizationPlugin",
   "permissions":[
 {"name":"security-edit", "role":"admin"},
 {"name":"security-read", "role":"admin"},
 {"name":"config-edit", "role":"admin"},
 {"name":"config-read", "role":"admin"},
 {"name":"collection-admin-edit", "role":"admin"},
 {"name":"collection-admin-read", "role":"admin"},
 {"name":"core-admin-edit", "role":"admin"},
 {"name":"core-admin-read", "role":"admin"},
 {"name":"all", "role":"admin"}
   ],
   "user-role":{"solr":"admin"}
  }
}

Jan

> 28. feb. 2024 kl. 21:57 skrev Hodder, Rick (Chief Information Office - IT) 
> :
> 
> Hi,
>  
> I have an existing 8.11.1 standalone installation on a windows server, and I 
> have been asked to make it run under basic authentication.
>  
> I have created a security.json file in the Solr home folder using the
> contents of the sample on Configuring Authentication, Authorization
> and Audit Logging | Apache Solr Reference Guide 8.11
>  tication-and-authorization-plugins.html*enable-plugins-with-security-j
> son__;Iw!!PZ0xAML5PpHLxYfxmvfEjrhN5g!UKWTgUEO68E3z34048t6bOqHqOBx3NQrV
> ym70IktnNnoWqIeHb7EstVu_WZmylflVPxrJ4SkxKlOSevRGPa7wv7aTw$ >
>  
> {
> "authentication":{
>"class":"solr.BasicAuthPlugin",
>"credentials":{"solr":"IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= 
> Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="}
> },
> "authorization":{
>"class":"solr.RuleBasedAuthorizationPlugin",
>"permissions":[{"name":"security-edit",
>   "role":"admin"}],
>"user-role":{"solr":"admin"}
> }
> }
>  
> Which is supposed to create a user solr with the password “SolrRocks” with 
> admin privileges.
>  
> I restart SOLR and then click on Security and I am taken to a page 
> that says
>  
> Current user is not authenticated! Security panel is disabled.
> You do not have permission to view the security panel.
>  
> I don’t get a log in window or anything.
>  
> Can someone tell me what I need to do next?
>  
> Thanks,
>  
> RICK HODDER
> Staff Software Engineer
> Global Specialty
>  
> The Hartford
> 83 Wooster Heights Rd. | 2nd floor
> Danbury, CT, 06810
> W: 475-329-6251
> 
> Email: richard.hod...@thehartford.com 
> 
> http://www.thehartford.com  
> https://urldefense.com/v3/__http://www.facebook.com/thehartford__;!!PZ0xAML5PpHLxYfxmvfEjrhN5g!UKWTgUEO68E3z34048t6bOqHqOBx3NQrVym70IktnNnoWqIeHb7EstVu_WZmylflVPxrJ4SkxKlOSevRGPascqhPYQ$
>   
>   >
> https://urldefense.com/v3/__http://twitter.com/thehartford__;!!PZ0xAML5PpHLxYfxmvfEjrhN5g!UKWTgUEO68E3z34048t6bOqHqOBx3NQrVym70IktnNnoWqIeHb7EstVu_WZmylflVPxrJ4SkxKlOSevRGPZZ5L2eHg$
>   
>   >  
>  
>  
>  

Re: Setting up basic authentication in Solr 8.11.1 Standalone

2024-02-28 Thread Jan Høydahl
You need a few more permissions in order for that Admin screen to work. Try 
instead the default security.json generated by bin/solr auth enable (cloud 
mode):

{
  "authentication":{
   "blockUnknown": true,
   "class":"solr.BasicAuthPlugin",
   "credentials":{"solr":"cHFNAKbTL930UaGklonJT02g/NVUSbUc0cn2ssvV5sA= 
xG5Fa6oifv6deIHWnRSus4hxfq5mOxTwdwy9GZDeHgc="}
  },
  "authorization":{
   "class":"solr.RuleBasedAuthorizationPlugin",
   "permissions":[
 {"name":"security-edit", "role":"admin"},
 {"name":"security-read", "role":"admin"},
 {"name":"config-edit", "role":"admin"},
 {"name":"config-read", "role":"admin"},
 {"name":"collection-admin-edit", "role":"admin"},
 {"name":"collection-admin-read", "role":"admin"},
 {"name":"core-admin-edit", "role":"admin"},
 {"name":"core-admin-read", "role":"admin"},
 {"name":"all", "role":"admin"}
   ],
   "user-role":{"solr":"admin"}
  }
}
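
Once that is in place, a quick sanity check from a client could look like the
sketch below (untested; localhost and the solr/SolrRocks example credentials
from this thread are placeholders for your own setup):

const axios = require('axios');

// GET /admin/authentication returns the active authentication configuration;
// a 401 here means the credentials or security.json are not being picked up.
axios.get('http://localhost:8983/solr/admin/authentication', {
  auth: { username: 'solr', password: 'SolrRocks' }
}).then(rsp => console.log(rsp.data))
  .catch(err => console.log(err.response ? err.response.status : err.message));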

Jan

> 28. feb. 2024 kl. 21:57 skrev Hodder, Rick (Chief Information Office - IT) 
> :
> 
> Hi,
>  
> I have an existing 8.11.1 standalone installation on a windows server, and I 
> have been asked to make it run under basic authentication.
>  
> I have created a security.json file in the Solr home folder using the contents
> of the sample on Configuring Authentication, Authorization and Audit Logging
> | Apache Solr Reference Guide 8.11
> 
>  
> {
> "authentication":{
>"class":"solr.BasicAuthPlugin",
>"credentials":{"solr":"IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= 
> Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="}
> },
> "authorization":{
>"class":"solr.RuleBasedAuthorizationPlugin",
>"permissions":[{"name":"security-edit",
>   "role":"admin"}],
>"user-role":{"solr":"admin"}
> }
> }
>  
> Which is supposed to create a user solr with the password “SolrRocks” with 
> admin privileges.
>  
> I restart SOLR and then click on Security and I am taken to a page that says
>  
> Current user is not authenticated! Security panel is disabled.
> You do not have permission to view the security panel.
>  
> I don’t get a log in window or anything.
>  
> Can someone tell me what I need to do next?
>  
> Thanks,
>  
> RICK HODDER
> Staff Software Engineer
> Global Specialty
>  
> The Hartford
> 83 Wooster Heights Rd. | 2nd floor
> Danbury, CT, 06810
> W: 475-329-6251
> 
> Email: richard.hod...@thehartford.com 
> www.thehartford.com 
> www.facebook.com/thehartford 
> twitter.com/thehartford  
>  
>  
>  



Re: Backtick character in field data breaks streaming query

2024-02-28 Thread Rahul Goswami
Submitted https://github.com/apache/solr/pull/2321
I can't assign reviewers (at least it seems so), so it would be great if
somebody could please take a look. Thanks.

-Rahul

On Tue, Feb 27, 2024 at 3:10 PM Rahul Goswami  wrote:

> Thanks. Submitted https://issues.apache.org/jira/browse/SOLR-17186
> PR on the way!
>
> -Rahul
>
> On Tue, Feb 27, 2024 at 1:29 PM Gus Heck  wrote:
>
>> On Tue, Feb 27, 2024 at 12:13 PM Rahul Goswami 
>> wrote:
>>
>> >  I can submit a fix for
>> > this. Should I open a JIRA?
>> >
>>
>> Certainly!
>>
>> --
>> http://www.needhamsoftware.com (work)
>> https://a.co/d/b2sZLD9 (my fantasy fiction book)
>>
>


RE: [EXTERNAL] Re: Is this list alive? I need help

2024-02-28 Thread Beale, Jim (US-KOP)
Here is the performance for this query on these nodes. You saw the code in a 
previous email.

http://samisolrcld.aws01.hibu.int:8983/solr/calls/select?indent=true=OR=business_id,call_id,call_date,call_callerno,caller_name,dialog_merged=business_id%3A7016655681%20AND%20call_day:[20230101%20TO%2020240101}=true=call_callerno=call_date%20desc=1=true

The two times given are right after a restart and the next day, or sometime a 
few hours later. The only difference is how long Solr has been running. I can’t
understand what makes it run so slowly after a short while.

Business_id   Time 1   Time 2
7016274253    11.572   23.397
7010707194    21.941   21.414
701491         9.516   39.051
7029931968    10.755   59.196
7014676602    14.508   14.083
7004551760    12.873   36.856
7016274253     1.792   17.415
7010707194     5.671   25.442
701491         6.84    36.244
7029931968     6.291   38.483
7014676602     7.643   12.584
7004551760     5.669   21.977
7029931968     8.293   36.688
7008606979    16.976   30.569
7002264530    13.862   35.113
7017281920    10.1     31.914
701491         8.665   35.141
7058630709    11.236   38.104
7011363889    10.977   19.72
7016319075    15.763   26.023
7053262466    10.917   48.3
7000313815     9.786   24.617
7015187150     8.312   29.485
7016381845    11.51    34.545
7016379523    10.543   29.27
7026102159     6.047   30.381
7010707194     8.298   27.069
7016508018     7.98    34.48
7016280579     5.443   26.617
7016302809     3.491   12.578
7016259866     7.723   33.462
7016390730    11.358   32.997
7013498165     8.214   26.004
7016392929     6.612   19.711
7007737612     2.198    4.19
7012687678     8.627   35.342
7016606704     5.951   21.732
7007870203     2.524   16.534
7016268227     6.296   25.651
7016405011     3.288   18.541
7016424246     9.756   31.243
7000336592     5.465   31.486
7004696397     4.713   29.528
7016279283     2.473   24.243
7016623672     6.958   35.96
7016582537     5.112   33.475
7015713947     5.162   25.972
7003530665     8.223   26.549
7012825693     7.4     16.849
7010707194     6.781   23.835
7079272278     7.793   24.686

Jim Beale
Lead Software Engineer
hibu.com
2201 Renaissance Boulevard, King of Prussia, PA, 19406
Office: 610-879-3864
Mobile: 610-220-3067


From: Beale, Jim (US-KOP) 
Sent: Wednesday, February 28, 2024 3:29 PM
To: users@solr.apache.org
Subject: RE: [EXTERNAL] Re: Is this list alive? I need help

I didn't see these responses because they were buried in my clutter folder.



We have 12,541,505 docs for calls, 9,144,862 form fills, 53,838 SMS and 12,752 
social leads. These are all a single Solr 9.1 cluster of three nodes with PROD 
and UAT all on a single server. As follows:





[inline screenshot of the three Solr nodes]



The three nodes are r5.xlarge and we’re not sure if those are large enough. The 
documents are not huge, from 1K to 25K each.



samisolrcld.aws01.hibu.int is a load-balancer



The request is



async function getCalls(businessId, limit) {
    const config = {
        method: 'GET',
        url: 'http://samisolrcld.aws01.hibu.int:8983/solr/calls/select',
        params: {
            q: `business_id:${businessId} AND call_day:[20230101 TO 20240101}`,
            fl: "business_id, call_id, call_day, call_date, dialog_merged, call_callerno, call_duration, call_status, caller_name, caller_address, caller_state, caller_city, caller_zip",
            rows: limit,
            start: 0,
            group: true,
            "group.main": true,
            "group.field": "call_callerno",
            sort: "call_day desc"
        }
    };
    //console.log(config);

    let rval = [];
    while (true) {
        try {
            //console.log(config.params.start);
            const rsp = await axios(config);
            if (rsp.data && rsp.data.response) {
                let docs = rsp.data.response.docs;
                if (docs.length == 0) break;
                config.params.start += limit;
                rval = rval.concat(docs);
            }
        } catch (err) {
            console.log("Error: " + err.message);
        }
    }
    return rval;
}



You wrote:



Note that EFS is an encrypted file system, and stunnel is encrypted transport,
so for each disk read you are likely causing:



   - read raw encrypted data from disk to memory (at AWS)

   - decrypt the disk data in memory (at AWS)

   - encrypt the memory data for stunnel transport (at AWS)

   - send the data over the wire

   - decrypt the data for use by solr. (Hardware you specify)



That's guaranteed to be slow, and worse yet, you have no control at all over 
the size or loading of the hardware performing anything but the last step. You 
are completely at the mercy of AWS's cost/speed tradeoffs which are unlikely to 
be targeting the level of performance usually desired for search disk IO.



This is interesting. I can copy the data to 

Setting up basic authentication in Solr 8.11.1 Standalone

2024-02-28 Thread Hodder, Rick (Chief Information Office - IT)
Hi,

I have an existing 8.11.1 standalone installation on a windows server, and I 
have been asked to make it run under basic authentication.

I have created a security.json file in the Solr home folder using the contents
of the sample on Configuring Authentication, Authorization and Audit Logging |
Apache Solr Reference Guide 8.11

{
"authentication":{
   "class":"solr.BasicAuthPlugin",
   "credentials":{"solr":"IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= 
Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="}
},
"authorization":{
   "class":"solr.RuleBasedAuthorizationPlugin",
   "permissions":[{"name":"security-edit",
  "role":"admin"}],
   "user-role":{"solr":"admin"}
}
}

Which is supposed to create a user solr with the password "SolrRocks" with 
admin privileges.

I restart SOLR and then click on Security and I am taken to a page that says

Current user is not authenticated! Security panel is disabled.
You do not have permission to view the security panel.

I don't get a log in window or anything.

Can someone tell me what I need to do next?

Thanks,

RICK HODDER
Staff Software Engineer
Global Specialty
[The Hartford]
The Hartford
83 Wooster Heights Rd. | 2nd floor
Danbury, CT, 06810
W: 475-329-6251
Email: richard.hod...@thehartford.com
www.thehartford.com
www.facebook.com/thehartford
twitter.com/thehartford





RE: [EXTERNAL] Re: Is this list alive? I need help

2024-02-28 Thread Beale, Jim (US-KOP)
I didn't see these responses because they were buried in my clutter folder.



We have 12,541,505 docs for calls, 9,144,862 form fills, 53,838 SMS and 12,752 
social leads. These are all a single Solr 9.1 cluster of three nodes with PROD 
and UAT all on a single server. As follows:





[inline screenshot of the three Solr nodes]



The three nodes are r5.xlarge and we’re not sure if those are large enough. The 
documents are not huge, from 1K to 25K each.



samisolrcld.aws01.hibu.int is a load-balancer



The request is



async function getCalls(businessId, limit) {
    const config = {
        method: 'GET',
        url: 'http://samisolrcld.aws01.hibu.int:8983/solr/calls/select',
        params: {
            q: `business_id:${businessId} AND call_day:[20230101 TO 20240101}`,
            fl: "business_id, call_id, call_day, call_date, dialog_merged, call_callerno, call_duration, call_status, caller_name, caller_address, caller_state, caller_city, caller_zip",
            rows: limit,
            start: 0,
            group: true,
            "group.main": true,
            "group.field": "call_callerno",
            sort: "call_day desc"
        }
    };
    //console.log(config);

    let rval = [];
    while (true) {
        try {
            //console.log(config.params.start);
            const rsp = await axios(config);
            if (rsp.data && rsp.data.response) {
                let docs = rsp.data.response.docs;
                if (docs.length == 0) break;
                config.params.start += limit;
                rval = rval.concat(docs);
            }
        } catch (err) {
            console.log("Error: " + err.message);
        }
    }
    return rval;
}



You wrote:



Note that EFS is an encrypted file system, and stunnel is encrypted transport,
so for each disk read you are likely causing:



   - read raw encrypted data from disk to memory (at AWS)

   - decrypt the disk data in memory (at AWS)

   - encrypt the memory data for stunnel transport (at AWS)

   - send the data over the wire

   - decrypt the data for use by solr. (Hardware you specify)



That's guaranteed to be slow, and worse yet, you have no control at all over 
the size or loading of the hardware performing anything but the last step. You 
are completely at the mercy of AWS's cost/speed tradeoffs which are unlikely to 
be targeting the level of performance usually desired for search disk IO.



This is interesting. I can copy the data to local and try it from there.





Jim Beale

Lead Software Engineer

hibu.com

2201 Renaissance Boulevard, King of Prussia, PA, 19406

Office: 610-879-3864

Mobile: 610-220-3067







-Original Message-
From: Gus Heck 
Sent: Sunday, February 25, 2024 9:15 AM
To: users@solr.apache.org
Subject: [EXTERNAL] Re: Is this list alive? I need help





Hi Jim,



Welcome to the Solr user list, not sure why you are asking about list
liveliness? I don't see prior messages from you:

https://lists.apache.org/list?users@solr.apache.org:lte=1M:jim



Probably the most important thing you haven't told us is the current size of
your indexes. You said 20k/day input, but at the start do you have 0 days, 1
day, 10 days, 100 days, 1000 days, or 10,000 days (27y) of data on disk already?



If you are starting from zero, then there is likely a 20x or more growth in the
size of the index between the first and second measurement. Indexes do get
slower with size, though you would need fantastically large documents or some
sort of disk problem to explain it that way.



However, maybe you do have huge documents or disk issues since your query time 
at time1 is already abysmal? Either you are creating a fantastically expensive 
query, or your system is badly overloaded. New systems, properly sized with 
moderate sized documents ought to be serving simple queries in tens of 
milliseconds.



As others have said it is *critical you show us the entire query request*.

If you are doing something like attempting to return the entire index with an
enormous rows value, that would almost certainly explain your issues...



How large are your average documents (in terms of bytes)?



Also what version of Solr?



r5.xlarge only has 4 cpu and 32 GB of memory. That's not very large (despite 
the name). However since it's unclear what your total index size looks like, it 
might be OK.



What are your IOPS constraints with EFS? Are you running out of a quota there? 
(bursting mode?)



Note that EFS is an encrypted file system, and stunnel is encrypted transport,
so for each disk read you are likely causing:



   - read raw encrypted data from disk to memory (at AWS)

   - decrypt the disk data in memory (at AWS)

   - encrypt the memory data for stunnel transport (at AWS)

   - send the data over the wire

   - decrypt the data for use by solr. (Hardware you specify)



That's guaranteed to be slow, and worse 

Re: Multiple query parsers syntax

2024-02-28 Thread Robi Petersen
Thanks Hoss! I'd forgotten that link didn't work... I'd just scrolled thru
the slides (a long time ago)

On Wed, Feb 28, 2024 at 10:25 AM Chris Hostetter 
wrote:

>
> : I like Hoss' breakdown in this presentation for query substitution
> : syntax... :)
> :
> :  the Lucene/Solr Revolution 2016 presentation by hoss
> :  - see slideshow link
> at
> : top...
>
> FWIW: If a talk I give is ever recorded, I add link to that video from my
> main apache page...
>   https://home.apache.org/~hossman/
> ...but I typically leave the slides alone and don't add a video link to
> them.
>
> So for that talk: https://home.apache.org/~hossman/rev2016/
> The Video is: https://youtu.be/qTVi7eMGe1A
>
>
> -Hoss
> http://www.lucidworks.com/
>


Re: Multiple query parsers syntax

2024-02-28 Thread Chris Hostetter


: I like Hoss' breakdown in this presentation for query substitution
: syntax... :)
: 
:  the Lucene/Solr Revolution 2016 presentation by hoss
:  - see slideshow link at
: top...

FWIW: If a talk I give is ever recorded, I add link to that video from my 
main apache page...
  https://home.apache.org/~hossman/
...but I typically leave the slides alone and don't add a video link to 
them.

So for that talk: https://home.apache.org/~hossman/rev2016/
The Video is: https://youtu.be/qTVi7eMGe1A


-Hoss
http://www.lucidworks.com/


Re: Multiple query parsers syntax

2024-02-28 Thread Chris Hostetter


: I tried the following, edismax to search against the description field and
: lucene parser to search against the keywords field,  but it does not work.
: What is wrong?
: 
: host:port/solr/v9/select?q={!edismax qf=description}white roses OR
: {!lucene}keywords:(white AND roses)=true

The {!...} local param syntax used at the beginning of the q param causes
it to specify the parser used for the entire string -- so the `edismax`
parser is given the entire string...

white roses OR {!lucene}keywords:(white AND roses)

...as its input, and `edismax` does *NOT* support embedded parsers.

In order for your (implicit) `defType=lucene` parser to see that input and
recognize that you want it to consist of SHOULD clauses, one parsed by the
`edismax` parser and one by the `lucene` parser, you need to use parens --
not just for grouping, but so that the first chars of the query string
aren't `{!...` overriding the `defType` parser.

You also need to use local params to specify the `edismax` query input as 
the `v` param, because otherwise it's up to the `defType=lucene` parser to 
decide when the prefix based input to the `edismax` ends.

So this gets you an embedded edismax parser...

q=({!edismax qf=description}white roses OR keywords:(white AND roses))

...but `white` is the query string the `defType=lucene` parser gives to 
the `edismax` parser.

This is what you seem to want...

q=({!edismax qf=description v='white roses'} OR keywords:(white AND roses))

Which can also be broken out as...

q=({!edismax qf=description v=$user_input} OR keywords:(white AND roses))
user_input=white roses

(which can be important if/when you have to worry about quoting in your
user input.)
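
For example, sent from a client this might look like the sketch below; the
host, collection name, and field names are placeholders, not anything from
your setup:

const axios = require('axios');

axios.get('http://localhost:8983/solr/mycollection/select', {
  params: {
    // the whole q is parsed by the default lucene parser; the {!edismax ...}
    // clause pulls its text from the separate user_input param via v=$user_input
    q: '({!edismax qf=description v=$user_input} OR keywords:(white AND roses))',
    user_input: 'white roses',
    debugQuery: true
  }
}).then(rsp => console.log(rsp.data.response.numFound));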


-Hoss
http://www.lucidworks.com/


Re: 500 Exception at regular intervals after upgrading to 9.5.0

2024-02-28 Thread Gus Heck
*Here's the full exception:*

* org.apache.solr.common.SolrException:
org.apache.solr.client.solrj.SolrServerException*

You missed the exception message, which would be very useful (most likely on
the line above).

On Tue, Feb 27, 2024 at 6:29 AM Henrik Brautaset Aronsen <
henrik.aron...@gmail.com> wrote:

> I get a "500 Exception" error log message at regular intervals after
> upgrading from 9.4.0 to 9.5.0.  I'm using SolrCloud.
>
> Here's the full exception:
>
> org.apache.solr.common.SolrException:
> org.apache.solr.client.solrj.SolrServerException
>   at
>
> org.apache.solr.handler.component.SearchHandler.throwSolrException(SearchHandler.java:665)
>   at
>
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:583)
>   at
>
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:226)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2884)
>   at
>
> org.apache.solr.servlet.HttpSolrCall.executeCoreRequest(HttpSolrCall.java:875)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:561)
>   at
>
> org.apache.solr.servlet.SolrDispatchFilter.dispatch(SolrDispatchFilter.java:262)
>   at
>
> org.apache.solr.servlet.SolrDispatchFilter.lambda$doFilter$0(SolrDispatchFilter.java:219)
>   at
>
> org.apache.solr.servlet.ServletUtils.traceHttpRequestExecution2(ServletUtils.java:249)
>   at
>
> org.apache.solr.servlet.ServletUtils.rateLimitRequest(ServletUtils.java:215)
>   at
>
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:213)
>   at
>
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:195)
>   at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:210)
>   at
>
> org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1635)
>   at
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:527)
>   at
>
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:131)
>   at
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:598)
>   at
>
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:122)
>   at
>
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:223)
>   at
>
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1580)
>   at
>
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:221)
>   at
>
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1384)
>   at
>
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:176)
>   at
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:484)
>   at
>
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1553)
>   at
>
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:174)
>   at
>
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1306)
>   at
>
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:129)
>   at
>
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:149)
>   at
>
> org.eclipse.jetty.server.handler.InetAccessHandler.handle(InetAccessHandler.java:228)
>   at
>
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:141)
>   at
>
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:122)
>   at
>
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:301)
>   at
>
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:122)
>   at
>
> org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:822)
>   at
>
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:122)
>   at org.eclipse.jetty.server.Server.handle(Server.java:563)
>   at
>
> org.eclipse.jetty.server.HttpChannel$RequestDispatchable.dispatch(HttpChannel.java:1598)
>   at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:753)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:501)
>   at
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:287)
>   at
> org.eclipse.jetty.io
> .AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:314)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:100)
>   at
> org.eclipse.jetty.io
> .SelectableChannelEndPoint$1.run(SelectableChannelEndPoint.java:53)
>   at
>
> org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy.runTask(AdaptiveExecutionStrategy.java:421)
>   at
>
> org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy.consumeTask(AdaptiveExecutionStrategy.java:390)
>   at
>
> org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy.tryProduce(AdaptiveExecutionStrategy.java:277)
>   at
>
> org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy.run(AdaptiveExecutionStrategy.java:199)
>   at