Solr 8.1.5 Postlogs - Basic Authentication Error

2020-05-11 Thread Waheed, Imran
Is there a way to use bin/postlogs with basic authentication turned on? I am
getting an error if I do not give a username/password:

bin/postlogs http://localhost:8983/solr/logs server/logs/

Exception in thread "main" 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://localhost:8983/solr/logs: Expected mime type 
application/octet-stream but got text/html. 


HTTP ERROR 401 require authentication

URI: /solr/logs/update
STATUS: 401
MESSAGE: require authentication
SERVLET: default


I get a different error if I try:
bin/postlogs -u user:@password http://localhost:8983/solr/logs server/logs/


SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further 
details.
Exception in thread "main" java.lang.NullPointerException
at 
org.apache.solr.util.SolrLogPostTool.gatherFiles(SolrLogPostTool.java:127)
at 
org.apache.solr.util.SolrLogPostTool.main(SolrLogPostTool.java:65)
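
A possible interim workaround is to index the parsed documents with SolrJ
directly and attach basic-auth credentials per request. This is a minimal
sketch only, not the postlogs tool's own option handling; it assumes SolrJ 8.x
on the classpath, and the document field is hypothetical:

import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

public class AuthedPost {
  public static void main(String[] args) throws Exception {
    HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/logs").build();
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "log-entry-1");               // hypothetical field
    UpdateRequest req = new UpdateRequest();
    req.setBasicAuthCredentials("user", "password"); // sent as an Authorization header
    req.add(doc);
    req.setAction(UpdateRequest.ACTION.COMMIT, true, true); // commit in the same authenticated request
    req.process(client);
    client.close();
  }
}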

thank you,
Imran




Solr 6.4.1: SolrException: no locks available on NFS

2018-09-05 Thread Imran Rajjad
Hello,

I am using Solr Cloud 6.4.1. After a hard restart, the Solr nodes constantly
show as being in the DOWN state and will not go into recovery. I have also
deleted the write.lock files from all the replica folders, but the problem
persists. The error displayed in the web console is: no locks available

My replica folders reside on an NFS mount; I am using RHEL 6 / CentOS 6.8. Has
anyone ever faced this issue?
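
A note for anyone hitting this later: "no locks available" on NFS usually means
the mount cannot service the native OS locks Lucene requests. A hedged sketch
of the common mitigation in solrconfig.xml (verify against your version's
defaults before relying on it):

  <indexConfig>
    <!-- "simple" uses a plain lock file instead of OS-native locking,
         which NFS mounts often cannot provide -->
    <lockType>simple</lockType>
  </indexConfig>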

regards,
Imran

-- 
I.R


RE: deep paging in parallel sql

2017-09-06 Thread Imran Rajjad
My only concern is performance as the cursor moves forward through a result set
of approximately 2 billion records.

Regards,
Imran

Sent from Mail for Windows 10

From: Joel Bernstein
Sent: Wednesday, September 6, 2017 7:04 PM
To: solr-user@lucene.apache.org
Subject: Re: deep paging in parallel sql

Parallel SQL supports unlimited SELECT statements, which return the entire
result set. The documentation discusses the differences between the limited
and unlimited SELECT statements. Other than the LIMIT clause, there is no
support for paging yet.

Joel Bernstein
http://joelsolr.blogspot.com/
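
For reference, a hedged sketch of consuming an unlimited SELECT as a stream via
Solr's Parallel SQL JDBC driver (the ZooKeeper address, collection name, and
field here are hypothetical):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class StreamAll {
  public static void main(String[] args) throws Exception {
    // load Solr's JDBC driver; connection string is ZooKeeper host + collection
    Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl");
    String url = "jdbc:solr://localhost:9983?collection=mycollection"
               + "&aggregationMode=map_reduce";
    try (Connection con = DriverManager.getConnection(url);
         Statement stmt = con.createStatement();
         // no LIMIT clause: the whole result set is streamed rather than paged
         ResultSet rs = stmt.executeQuery("SELECT id FROM mycollection")) {
      while (rs.next()) {
        System.out.println(rs.getString("id"));
      }
    }
  }
}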

On Wed, Sep 6, 2017 at 9:11 AM, Imran Rajjad <im...@elogic.pk> wrote:

> Dear list,
>
> Is it possible to enable deep paging when querying data through Parallel
> SQL?
>
> Regards,
> Imran
>
> Sent from Mail for Windows 10
>
>



deep paging in parallel sql

2017-09-06 Thread Imran Rajjad
Dear list,

Is it possible to enable deep paging when querying data through Parallel SQL?

Regards,
Imran

Sent from Mail for Windows 10



Ambiguous response on TrieDateField

2017-08-03 Thread Imran Rajjad
Hello,

I have observed a difference of a day in a TrieDateField value when queried
from the Solr Cloud web interface versus SolrJ (Java API).

Below is the query response from the web interface:

{
  "responseHeader":{
"zkConnected":true,
"status":0,
"QTime":22,
"params":{
  "q":"id:01af04e1-83ce-4eb0-8fb5-dc737115dcce",
  "indent":"on",
  "fl":"dateTime",
  "sort":"dateTime asc, id asc",
  "rows":"100",
  "wt":"json",
  "_":"1501792144786"}},
  "response":{"numFound":1,"start":0,"docs":[
  {
"dateTime":"2017-06-17T00:00:00Z"}]
  }}

The same query run from SolrJ shows the previous day in the same field:

SolrQuery query = new SolrQuery();
query.setQuery("id:01af04e1-83ce-4eb0-8fb5-dc737115dcce");
query.setFields("dateTime");
query.addSort("dateTime", ORDER.asc);
query.addSort("id", ORDER.asc);
query.add("wt", "json");

gives
{responseHeader={zkConnected=true,status=0,QTime=24,params={q=id:01af04e1-83ce-4eb0-8fb5-dc737115dcce,_stateVer_=cdr2:818,fl=dateTime,sort=dateTime
 asc,id asc,wt=javabin,version=2}},response={numFound=1,start=0,docs=
[SolrDocument{dateTime=Fri Jun 16 17:00:00 PDT 2017}]}}

The problem was found when a filter query (dateTime:[2017-06-17T00:00:00Z 
TO 2017-06-18T00:00:00Z]) was run for the records of 17 June only; however, the 
SolrJ response also shows some documents with June 16. Running a facet query 
from the web interface shows no records from June 16.
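
The two renderings are very likely the same instant: SolrJ returns dateTime as
a java.util.Date, and Date.toString() prints in the JVM's default time zone
(PDT is UTC-7, so 2017-06-17T00:00:00Z displays as Jun 16 17:00:00 PDT). A
small sketch of rendering it back in UTC (assumes doc is a SolrDocument from
the response above):

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

Date d = (Date) doc.getFieldValue("dateTime");
SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
fmt.setTimeZone(TimeZone.getTimeZone("UTC")); // match the wt=json rendering
System.out.println(fmt.format(d));            // prints 2017-06-17T00:00:00Z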


Regards,
Imran

Sent from Mail for Windows 10



RE: Joins in Parallel SQL?

2017-07-18 Thread imran
Is it possible to contribute towards building this capability? Which part of 
the developer documentation would be a suitable starting point?

Regards,
Imran

Sent from Mail for Windows 10

From: Joel Bernstein
Sent: Thursday, July 6, 2017 7:40 AM
To: solr-user@lucene.apache.org
Subject: Re: Joins in Parallel SQL?

Joins and OFFSET are not currently supported with Parallel SQL.

The docs for parallel SQL cover all the supported features. Any syntax not
covered in the docs is likely not supported.

Joel Bernstein
http://joelsolr.blogspot.com/

On Thu, Jul 6, 2017 at 2:40 PM, <im...@elogic.pk> wrote:

>
> Is it possible to join documents from different collections through
> Parallel SQL?
>
> In addition to the LIMIT feature on Parallel SQL, can we use OFFSET to
> implement paging?
>
> Thanks,
> Imran
>
>
> Sent from Mail for Windows 10
>
>



enable fault-tolerance by default on collection?

2017-07-13 Thread imran
Is it possible to enable the shards.tolerant=true parameter by default?

We are using Spark Thrift Server's JDBC API to access a collection; during 
testing, a shard that is completely gone has been creating problems. Can this 
behavior be enabled through configuration rather than inside the query 
request?
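
One possibility, hedged: if the JDBC path goes through a handler whose defaults
apply (worth verifying for /sql and /export), the parameter can be baked into
the handler defaults in solrconfig.xml, e.g.:

  <requestHandler name="/select" class="solr.SearchHandler">
    <lst name="defaults">
      <!-- applied to every request unless the client overrides it -->
      <str name="shards.tolerant">true</str>
    </lst>
  </requestHandler>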

Regards,
Imran

Sent from Mail for Windows 10



RE: help on implicit routing

2017-07-10 Thread imran
Thanks for the reference. I am guessing this feature is not available through 
the post utility inside solr/bin.

Regards,
Imran

Sent from Mail for Windows 10

From: Jan Høydahl
Sent: Friday, July 7, 2017 1:51 AM
To: solr-user@lucene.apache.org
Subject: Re: help on implicit routing

http://lucene.apache.org/solr/guide/6_6/shards-and-indexing-data-in-solrcloud.html
 
<http://lucene.apache.org/solr/guide/6_6/shards-and-indexing-data-in-solrcloud.html>

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> On 6 Jul 2017, at 03:15, im...@elogic.pk wrote:
> 
> I am trying out the document routing feature in Solr 6.4.1. I am unable to 
> comprehend the documentation where it states that 
> “The 'implicit' router does not automatically route documents to different
> shards. Whichever shard you indicate on the indexing request (or within each
> document) will be used as the destination for those documents”
> 
> How do you specify the shard inside a document? E.g. if I have a basic 
> collection with two shards called day_1 and day_2, what value should be 
> populated in the router field to ensure the document is routed to the 
> respective shard?
> 
> Regards,
> Imran
> 
> Sent from Mail for Windows 10
> 




RE: uploading solr.xml to zk

2017-07-07 Thread imran
Thanks for the reply
This is the exact command on a RHEL 6 machine

solr-6.4.1/bin/solr cp file:/home/user1/solr/nodes/day1/solr/solr.xml 
zk:/solr.xml -z localhost:9983

I am following the documentation of 6.4.1
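
For comparison, the 6.x reference guide's form of this command includes a zk
subcommand before cp (hedged: availability may vary by 6.x release; check
bin/solr zk --help):

solr-6.4.1/bin/solr zk cp file:/home/user1/solr/nodes/day1/solr/solr.xml zk:/solr.xml -z localhost:9983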



I am assuming that if solr.xml is present in ZooKeeper, we can point to an 
empty directory to start a node?

Regards,
Imran

Sent from Mail for Windows 10

From: Jan Høydahl
Sent: Friday, July 7, 2017 2:01 AM
To: solr-user@lucene.apache.org
Subject: Re: uploading solr.xml to zk

> ERROR: cp is not a valid command!

Can you write the exact command you typed again?
Once solr.xml is in zookeeper, solr will find it automatically.

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> On 7 Jul 2017, at 21:31, im...@elogic.pk wrote:
> 
> The documentation says
> 
> If you for example would like to keep your solr.xml in ZooKeeper to avoid 
> having to copy it to every node's solr_home directory, you can push it to 
> ZooKeeper with the bin/solr utility (Unix example):
> bin/solr cp file:local/file/path/to/solr.xml zk:/solr.xml -z localhost:2181
> 
> So I'm trying to push the solr.xml to my local ZooKeeper
> 
> solr-6.4.1/bin/solr cp file:/home/user1/solr/nodes/day1/solr/solr.xml 
> zk:/solr.xml -z localhost:9983
> 
> ERROR: cp is not a valid command!
> 
> Afterwards, when starting up a node, how do we refer to the solr.xml inside 
> ZooKeeper? Any examples?
> 
> Thanks
> Imran
> 
> 
> Sent from Mail for Windows 10
> 




RE: help on implicit routing

2017-07-07 Thread imran
Thanks, that was helpful. Can this also be done without modifying the document 
when posting data through the post utility or a Java client?

Regards,
Imran

Sent from Mail for Windows 10

From: Susheel Kumar
Sent: Thursday, July 6, 2017 7:52 AM
To: solr-user@lucene.apache.org
Subject: Re: help on implicit routing

Erick has provided the details in another email. See below:



Use the _route_ field and put in "day_1" or "day_2". You've presumably
named the shards (the "shard" parameter) when you added them with the
CREATESHARD command so use the value you specified there.

Best,
Erick
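
For the Java-client half of the question, a hedged SolrJ sketch (shard names
follow the example above; the collection name is hypothetical, and it assumes
the collection was created with the implicit router):

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

CloudSolrClient client = new CloudSolrClient("localhost:9983"); // 6.x constructor form
client.setDefaultCollection("mycollection");                    // hypothetical name

SolrInputDocument doc = new SolrInputDocument();
doc.addField("id", "doc-1");

UpdateRequest req = new UpdateRequest();
req.setParam("_route_", "day_1"); // target shard; no routing field needed in the document
req.add(doc);
req.process(client);
client.close();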

On Wed, Jul 5, 2017 at 9:15 PM, <im...@elogic.pk> wrote:

> I am trying out the document routing feature in Solr 6.4.1. I am unable to
> comprehend the documentation where it states that
> “The 'implicit' router does not automatically route documents to different
> shards. Whichever shard you indicate on the indexing request (or within each
> document) will be used as the destination for those documents”
>
> How do you specify the shard inside a document? E.g. if I have a basic
> collection with two shards called day_1 and day_2, what value should be
> populated in the router field to ensure the document is routed to the
> respective shard?
>
> Regards,
> Imran
>
> Sent from Mail for Windows 10
>
>



uploading solr.xml to zk

2017-07-07 Thread imran
The documentation says

If you for example would like to keep your solr.xml in ZooKeeper to avoid 
having to copy it to every node's solr_home directory, you can push it to 
ZooKeeper with the bin/solr utility (Unix example):
bin/solr cp file:local/file/path/to/solr.xml zk:/solr.xml -z localhost:2181

So I'm trying to push the solr.xml to my local ZooKeeper

solr-6.4.1/bin/solr cp file:/home/user1/solr/nodes/day1/solr/solr.xml 
zk:/solr.xml -z localhost:9983

ERROR: cp is not a valid command!

Afterwards, when starting up a node, how do we refer to the solr.xml inside 
ZooKeeper? Any examples?

Thanks
Imran


Sent from Mail for Windows 10



Joins in Parallel SQL?

2017-07-06 Thread imran

Is it possible to join documents from different collections through Parallel 
SQL?

In addition to the LIMIT feature on Parallel SQL, can we use OFFSET to 
implement paging?

Thanks,
Imran


Sent from Mail for Windows 10



help on implicit routing

2017-07-05 Thread imran
I am trying out the document routing feature in Solr 6.4.1. I am unable to 
comprehend the documentation where it states:
“The 'implicit' router does not automatically route documents to different
shards. Whichever shard you indicate on the indexing request (or within each
document) will be used as the destination for those documents”

How do you specify the shard inside a document? E.g. if I have a basic collection 
with two shards called day_1 and day_2, what value should be populated in the 
router field to ensure the document is routed to the respective shard?

Regards,
Imran

Sent from Mail for Windows 10



How to enable Gzip compression in Solr v6.1.0 with Jetty 9.3.8.v20160314

2017-03-07 Thread Gul Imran
Hi
I am trying to upgrade Solr from v5.3 to v6.1.0, which comes with Jetty 
9.3.8.v20160314. However, after the upgrade we seem to have lost gzip 
compression, since we still have the old configuration. When I send the 
following request with the appropriate headers, I do not get a gzipped 
response:
curl -H "Accept-Encoding: gzip,deflate" 
"http://localhost:8983/solr/myApiAlias/select?wt=json=uuid:%22146c521c-9966-4f0a-94f9-465cd847b921%22=true=true=uuid=1=start+asc,definition+asc,id+asc=0=5;

I should be expecting the "Content-Encoding: gzip" header in the response.
However, I get the following response:

< HTTP/1.1 200 OK
< Content-Type: application/json; charset=UTF-8
< Content-Length: 393
Here is the previous configuration for enabling compression, in
/opt/solr/server/contexts/solr-jetty-context.xml:

--- Solr v5.3 configuration ---
[The XML was stripped by the mail archive. Recoverable details: the context
file (doctype http://www.eclipse.org/jetty/configure_9_0.dtd) pointed at
/solr-webapp/webapp and /etc/webdefault.xml, and registered
org.eclipse.jetty.servlets.GzipFilter on /* with init parameters:
mimetypes = text/html,text/xml,text/plain,text/css,text/javascript,text/json,application/x-javascript,application/javascript,application/json,application/xml,application/xml+xhtml,image/svg+xml
methods = GET,POST]
I have modified this configuration to use the GzipHandler 
(http://www.eclipse.org/jetty/documentation/9.3.x/gzip-filter.html) and updated 
solr-jetty-context.xml as follows:
--- Solr v6.1.0 configuration ---
[The XML was stripped by the mail archive. Recoverable details: the same
context file, now mapping a gzip handler to /* with the same included mime
types (text/html, text/xml, text/plain, text/css, text/javascript, text/json,
application/x-javascript, application/javascript, application/json,
application/xml, application/xml+xhtml, image/svg+xml) and methods GET and
POST]
However, when I restart Solr v6.1.0 with the new configuration, it does not 
show any errors in the logs, but the application becomes unavailable and all 
requests return a 404 Not Found response code.
I have also tried the suggestion posted on Stack Overflow 
(http://stackoverflow.com/questions/30391741/gzip-compression-not-working-in-solr-5-1).
  However, as this is not for Solr v6.1.0, it fails to work as well.
I am wondering if someone can please suggest a way to configure gzip 
compression with a Solr+Jetty installation. Many thanks.
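
A hedged sketch of the Jetty 9.3 server-level approach, mirroring Jetty's own
jetty-gzip.xml module (untested against Solr 6.1.0, and the property names
should be checked against the 9.3.8 GzipHandler javadoc). It would go in
server/etc/jetty.xml rather than the per-context file:

<Configure id="Server" class="org.eclipse.jetty.server.Server">
  <Call name="insertHandler">
    <Arg>
      <New class="org.eclipse.jetty.server.handler.gzip.GzipHandler">
        <!-- compress only these response types and request methods -->
        <Set name="includedMimeTypes">
          <Array type="String">
            <Item>text/html</Item>
            <Item>application/json</Item>
            <Item>application/xml</Item>
          </Array>
        </Set>
        <Set name="includedMethods">
          <Array type="String">
            <Item>GET</Item>
            <Item>POST</Item>
          </Array>
        </Set>
      </New>
    </Arg>
  </Call>
</Configure>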
Kind regards,
Gul



Calculating Solr document score by ignoring the boost field.

2013-07-09 Thread imran khan
Greetings,

I am using Nutch 2.x as my data source for Solr 4.3.0, and Nutch passes its
own boost field on to my Solr schema:

<field name="boost" type="float" stored="true" indexed="false"/>

For some reason I always get boost = 0.0, and because of this my Solr
document score is also always 0.0.

Is there any way to have Solr ignore the boost field's value in its
document score calculation?

Regards,
Khan


Re: DIH for multilingual index multiValued field?

2010-11-13 Thread Imran
I think a custom transformer would be of help in these scenarios
http://wiki.apache.org/solr/DIHCustomTransformer

http://wiki.apache.org/solr/DIHCustomTransformerCheers
-- Imran
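
A hedged sketch of how both requirements might look in data-config.xml (column
names follow the table in the quoted mail below; data source configuration is
omitted; untested):

<dataConfig>
  <script><![CDATA[
    function routeByLanguage(row) {
      var lang = row.get('language_code');
      row.put('text_' + lang, row.get('text')); // text_en, text_fr, text_zh, ...
      row.remove('text');
      return row;
    }
  ]]></script>
  <document>
    <entity name="documents"
            transformer="script:routeByLanguage,RegexTransformer"
            query="SELECT id, language_code, tags, text FROM documents">
      <!-- splitBy turns "blue, green, yellow" into three values
           for a multiValued tags field -->
      <field column="tags" splitBy=",\s*"/>
    </entity>
  </document>
</dataConfig>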

On Sat, Nov 13, 2010 at 8:55 PM, Andy angelf...@yahoo.com wrote:

 I have a MySQL table:

CREATE TABLE documents (
id INT NOT NULL AUTO_INCREMENT,
language_code CHAR(2),
tags CHAR(30),
text TEXT,
PRIMARY KEY (id)
);

 I have 2 questions about Solr DIH:

 1) The language_code field indicates what language the text field is
 in. And depending on the language, I want to index text to different Solr
 fields.

# pseudo code

if language_code == en:
index text to Solr field text_en
elif language_code == fr:
index text to Solr field text_fr
elif language_code == zh:
index text to Solr field text_zh
...

 Can DIH handle a usecase like this? How do I configure it to do so?

 2) The tags field needs to be indexed into a Solr multiValued field.
 Multiple values are stored in a string, separated by a comma. For example,
 if `tags` contains the string blue, green, yellow then I want to index the
 3 values blue, green, yellow into a Solr multiValued field.

 How do I do that with DIH?

 Thanks.






Re: Searching with AND + OR and spaces

2010-11-12 Thread Imran
To get a more precise result on exact matches of your terms, how about
having another string-type field for each of title and subhead, and using
dismax to boost the string-type fields more than the text-type fields?

Cheers
-- Imran
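
A hedged sketch of that idea (field names are illustrative): copy each field
into an untokenized string twin, then weight the twins higher in the dismax qf.

In schema.xml:

  <field name="title_exact" type="string" indexed="true" stored="false"/>
  <field name="subhead_exact" type="string" indexed="true" stored="false"/>
  <copyField source="title" dest="title_exact"/>
  <copyField source="subhead" dest="subhead_exact"/>

At query time:

  defType=dismax&q=Call of Duty&qf=title_exact^10 subhead_exact^10 title subhead

Note that a string field only matches when the whole field value equals the
query, which is what gives the exact hits their edge.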

On Fri, Nov 12, 2010 at 6:56 PM, Jon Drukman j...@cluttered.com wrote:

 Ahmet Arslan iorixxx at yahoo.com writes:

 
   (title:"Call of Duty" OR subhead:"Call of Duty")
  
   No matches, despite the fact that there are many documents
   that should match.
 
  Field types of  title and subhead are important here. Do you use
 stopwordfilterfactory with enable
  position increments?

<field name="title" type="text" indexed="true" stored="true"/>
   <field name="subhead" type="text" indexed="true" stored="true"/>

 text is the default that comes with schema.xml, it has the enable position
 increments stopwordfilterfactory.

  What is you solr version?

 1.4


   So I left out the quotes, and it seems to work.  But
   now when I try doing things
   like
  
   title:Call of Duty OR subhead:Call of Duty AND type:4
  
 
  Try using parenthesis.
  title:(Call of Duty) OR subhead:(Call of Duty) AND type:4

 that seems to work a lot better, thanks!!




Re: Influencing scores on values in multiValue fields

2010-11-02 Thread Imran
Thanks Mike for your suggestion; it took me down the correct route. I
basically created another multiValued field of type 'string' and boosted
that. To keep the partial matches from being penalised by length
normalisation, I set omitNorms on the 'text'-type multiValued field. The
results so far look as expected with this configuration.

Cheers
-- Imran
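
For reference, a hedged sketch of the schema shape described above (field
names are illustrative):

  <!-- tokenized field for partial matches; omitNorms switches off
       length normalisation for it -->
  <field name="labels" type="text" indexed="true" stored="true"
         multiValued="true" omitNorms="true"/>
  <!-- untokenized twin, boosted at query time so exact value matches win -->
  <field name="labels_exact" type="string" indexed="true" stored="false"
         multiValued="true"/>
  <copyField source="labels" dest="labels_exact"/>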

On Fri, Oct 29, 2010 at 1:09 PM, Michael Sokolov soko...@ifactory.com wrote:

 How about creating another field for doing exact matches (a string);
 searching both and boosting the string match?

 -Mike

  -Original Message-
  From: Imran [mailto:imranboho...@gmail.com]
  Sent: Friday, October 29, 2010 6:25 AM
  To: solr-user@lucene.apache.org
  Subject: Influencing scores on values in multiValue fields
 
  Hi All
 
  We've got an index in which we have a multiValued field per document.
 
  Assume the multivalue field values in each document to be;
 
  Doc1:
  bar lifters
 
  Doc2:
  truck tires
  back drops
  bar lifters
 
  Doc 3:
  iron bar lifters
 
  Doc 4:
  brass bar lifters
  iron bar lifters
  tire something
  truck something
  oil gas
 
  Now when we search for 'bar lifters' the expectation (based on the
  requirements) is that we get results in the order of Doc1,
  Doc 2, Doc4 and Doc3.
  Doc 1 - since there's an exact match (and only one) for the
  search terms Doc 2 - since there's an exact match amongst the
  values Doc 4 - since there's a partial match on the values
  but the number of matches are more than Doc 3 Doc 3 - since
  there's a partial match
 
  However, the results come out as Doc1, Doc3, Doc2, Doc4.
  Looking at the explanation of the result it appears Doc 2 is
  losing to Doc3 and Doc 4 is losing to Doc3 based on length
  normalisation.
 
  We think we can see the reason for that - the field length in
  doc2 is greater than in doc3, and doc4's is greater than doc3's.
  However, is there any mechanism by which I can force doc2 to beat doc3
  and doc4 to beat doc3 with this structure?
 
  We did look at using omitNorms=true, but that messes up the
  scores for all docs. The result comes out as Doc4, Doc1,
  Doc2, Doc3 (where Doc1, Doc2 and
  Doc3 gets the same score)
  This is because the fieldNorm is not taken into account anymore (as
  expected) and the termFrequence being the only contributing
  factor. So trying to avoid length normalisation through
  omitNorms is not helping.
 
  Is there any way we can influence an exact match of a
  value in a multiValue field to add on to the overall score
  whilst keeping the length normalisation?
 
  Hope that makes sense.
 
  Cheers
  -- Imran
 




Re: Searching for terms on specific fields

2010-10-29 Thread Imran
Cheers Hoss. That did it for me.

~~Sent by an Android
On 29 Oct 2010 00:39, Chris Hostetter hossman_luc...@fucit.org wrote:

 The specifics of your overall goal confuse me a bit, but drilling down to
 your core question...

 : I want to be able to use the dismax parser to search on both terms
 : (assigning slops and tie breaks). I take it the 'fq' is a candidate for
 : this,but can I add dismax capabilities to fq as well? Also my query
would be

 ...you can use any parser you want for fq, using the localparams syntax...

 http://wiki.apache.org/solr/LocalParams

 ..so you could have something like...

 q=foo:bar&fq={!dismax qf='yak zak'}baz

 ..the one thing you have to watch out for when using localparams and
 dismax is that the outer params are inherited by the inner params by
 default -- so if you are using dismax for your main query 'q' (with
 defType) and you have global params for qf, pf, bq, etc... those are
 inherited by your fq={!dismax} query unless you override them with local
 params


 -Hoss


Influencing scores on values in multiValue fields

2010-10-29 Thread Imran
Hi All

We've got an index in which we have a multiValued field per document.

Assume the multivalue field values in each document to be;

Doc1:
bar lifters

Doc2:
truck tires
back drops
bar lifters

Doc 3:
iron bar lifters

Doc 4:
brass bar lifters
iron bar lifters
tire something
truck something
oil gas

Now when we search for 'bar lifters' the expectation (based on the
requirements) is that we get results in the order of Doc1, Doc2, Doc4 and
Doc3:
Doc 1 - since there's an exact match (and only one) for the search terms
Doc 2 - since there's an exact match amongst the values
Doc 4 - since there's a partial match on the values, and the number of
matches is greater than in Doc 3
Doc 3 - since there's a partial match

However, the results come out as Doc1, Doc3, Doc2, Doc4. Looking at the
explanation of the result it appears Doc 2 is losing to Doc3 and Doc 4 is
losing to Doc3 based on length normalisation.

We think we can see the reason for that - the field length in doc2 is
greater than in doc3, and doc4's is greater than doc3's.
However, is there any mechanism by which I can force doc2 to beat doc3 and
doc4 to beat doc3 with this structure?

We did look at using omitNorms=true, but that messes up the scores for all
docs. The result comes out as Doc4, Doc1, Doc2, Doc3 (where Doc1, Doc2 and
Doc3 gets the same score)
This is because the fieldNorm is not taken into account anymore (as
expected) and the termFrequence being the only contributing factor. So
trying to avoid length normalisation through omitNorms is not helping.

Is there any way we can influence an exact match of a value in a
multiValue field to add on to the overall score whilst keeping the length
normalisation?

Hope that makes sense.

Cheers
-- Imran


Searching for terms on specific fields

2010-10-27 Thread Imran
Hi All

We need to be able to perform a search based on two search terms (from the
user) against specific fields, plus a location. For example, assume our index
(for a collection of books) has fields such as title, description, authors
(multi-valued), categories (multi-valued), and location (of course, lng and
lats). Every field is indexed.

I want to give the user two search options (one box for title, and one for
category - to find a more relevant result) along with a location option. The
user would then want to search for a book with a certain title, belonging to
a set of categories, in a given location (all of these should be ANDed). I
want to show results that ONLY match the terms for the corresponding fields.

I want to be able to use the dismax parser to search on both terms
(assigning slops and tie breaks). I take it 'fq' is a candidate for
this, but can I add dismax capabilities to fq as well? My query would also be
a spatial search, so the spatial tier would be included in the
query. What would be the best way to implement my query to match terms to
specific fields along with spatial capabilities? Appreciate your inputs.
Thanks!!

Cheers
-- Imran