Enabling auto-purging for documents that are already indexed.

2018-07-13 Thread Adarsh_infor
Hi All,

I have an index that has been in production for quite some time. Now we
need to delete documents based on a date range: documents older than 270
days should be removed. I have heard about the Time to Live (TTL) feature
and need to know a couple of things before trying it:

1. All the examples talk about seconds; can we specify a count in days?
2. TTL needs a couple of new fields in the index, which means a schema
change. Do we then need to re-index all the existing documents?
3. Is there a way to enable auto-purging without doing a re-index activity?
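
For reference, this is roughly what the TTL setup looks like in
solrconfig.xml, using Solr's DocExpirationUpdateProcessorFactory; the
field names and delete period below are illustrative assumptions, not
settings from this thread:

  <updateRequestProcessorChain name="add-ttl" default="true">
    <processor class="solr.processor.DocExpirationUpdateProcessorFactory">
      <!-- TTL values use Solr date math, so days work fine, e.g. _ttl_=+270DAYS -->
      <str name="ttlFieldName">_ttl_</str>
      <str name="expirationFieldName">_expire_at_</str>
      <!-- how often the background delete trigger fires, in seconds (daily here) -->
      <int name="autoDeletePeriodSeconds">86400</int>
    </processor>
    <processor class="solr.LogUpdateProcessorFactory"/>
    <processor class="solr.RunUpdateProcessorFactory"/>
  </updateRequestProcessorChain>

For documents that are already indexed and have a usable date field, a
periodic delete-by-query avoids re-indexing entirely, e.g. posting
<delete><query>indexed_date:[* TO NOW-270DAYS]</query></delete> to the
/update handler (indexed_date is a placeholder field name).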


thanks 
Adarsh


Re: A user-defined request handler is failing to fetch the data.

2018-07-03 Thread Adarsh_infor
Hi Shawn,

Thanks, that helped. I modified the searchHandler as below and it started
working:

 

  <requestHandler name="/filesearch" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="q">*:*</str>
      <str name="shards">localhost:8983/solr/FI_idx</str>
      <str name="shards.qt">/select</str>
    </lst>
  </requestHandler>
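
As a quick sanity check (my own suggested test, not from the thread),
the handler can then be queried directly:

  curl 'http://localhost:8983/solr/FI_idx/filesearch?q=*:*&wt=json'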

 


Regards
Adarsh 





Re: A user-defined request handler is failing to fetch the data.

2018-07-03 Thread Adarsh_infor
@Erick @Shawn

Adding to my previous comment:

The command below works:

http://localhost:8983/solr/FI_idx/select?q=*:*&distrib=true&shards=localhost:8983/solr/FI_idx

but the same will not work with the new filesearch search handler:

http://localhost:8983/solr/FI_idx/filesearch?q=*:*&distrib=true&shards=localhost:8983/solr/FI_idx

This gives me an error, so I am trying to figure out what difference
could have caused the second command to fail.

The second command does work fine if I change the luceneMatchVersion in
solrconfig to LUCENE_40, which is totally weird behaviour, because the
indexing happened with luceneMatchVersion 6.6.3 yet search works fine
with LUCENE_40.
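
One way to narrow this down (a suggestion on my side, not something
from the thread) is to ask Solr to report per-shard details, which
shows exactly which shard subrequest fails and why:

  curl 'http://localhost:8983/solr/FI_idx/filesearch?q=*:*&shards.info=true&shards=localhost:8983/solr/FI_idx'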

Thanks







Re: A user-defined request handler is failing to fetch the data.

2018-07-03 Thread Adarsh_infor
@Shawn Heisey-2

When we say recursive shards, what does that mean? My distributed node
will not have any data in it; it will just be used for searching all the
shards (nodes) where the documents are indexed, and for getting
consolidated results from them. My only problem here is that if I change
the luceneMatchVersion to LUCENE_40, everything seems to work fine, but
if I change it to 6.6.3 or LUCENE_CURRENT it starts breaking. Does that
mean distributed search is not supported from Lucene 6.* onwards?
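
For anyone reading along: "recursive shards" typically means an
aggregator handler whose shard subrequests are routed back into itself,
which is what the deeply nested errors earlier in this thread look
like. A sketch of an aggregator-core handler that avoids this by
sending subrequests to the plain /select handler (the shard hosts are
placeholders):

  <requestHandler name="/filesearch" class="solr.SearchHandler">
    <lst name="defaults">
      <!-- placeholders: list the real shard hosts/cores here -->
      <str name="shards">host1:8983/solr/FI_idx,host2:8983/solr/FI_idx</str>
      <!-- route shard subrequests to /select so they do not re-enter /filesearch -->
      <str name="shards.qt">/select</str>
    </lst>
  </requestHandler>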

Thanks 






Re: A user-defined request handler is failing to fetch the data.

2018-07-02 Thread Adarsh_infor
@Erick Erickson

Thanks for the response.

Yes, I am going to have the shards on 6 different servers, which will
later be called in my searchHandler by specifying the shards list. But
initially I was testing filesearch with a single shard, which was
supposed to work. I know SolrCloud does handle these things better, but
for now I need to use the master/slave architecture with a distributed
node in front of it. As of now, I only see the error I posted earlier
if I keep luceneMatchVersion at 6.6.3 in solrconfig.xml; if I switch
the version back to LUCENE_40, it just works fine. Is it not supposed
to work with 6.6.3? I am confused there.

Also, the logs I pasted are from solr.log, not from the client side.

Thanks 






A user-defined request handler is failing to fetch the data.

2018-06-28 Thread Adarsh_infor
Hi All,

I was running Solr 4.6.1 in a master/slave architecture on a Windows
machine. Now I am planning to migrate it to Linux and upgrade to Solr
6.6.3 (note: only the configs and schema will be migrated, not the
data). In the process I had to make some changes to the schema and
config in order to create a core without issues. After creating the
core I was getting a warning that LUCENE_40 support will be removed in
7, so to fix that warning I changed luceneMatchVersion to `6.6.3`.
After that, using DIH I indexed some documents and was able to query
them with the default `/select` request handler. But I wanted to test
distributed search, so I created one more core on the same server,
which has the request handler below defined in solrconfig.xml:


   
  <requestHandler name="/filesearch" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="q">*:*</str>
      <str name="shards">localhost:8983/solr/FI_idx</str>
      <str name="shards.qt">document</str>
    </lst>
  </requestHandler>
   


Core *FI_idx* is the core that has the indexed data in it, indexed with
luceneMatchVersion `6.6.3`. But when I query using the `/filesearch`
requestHandler I get the error below. Could anyone help me understand
what is happening?
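
A side note from my own reading of the trace below (not something
stated in the thread): the long chain of nested "Error from server at"
messages is what a shard subrequest re-entering the same distributed
handler tends to look like; the usual way out is to route shard
subrequests to a plain, non-distributed handler, e.g.:

  <str name="shards.qt">/select</str>

which matches the fix that eventually resolved this thread.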



"trace":
"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error
from server at http://localhost:8983/solr/FI_idx: Error from server at
http://localhost:8983/solr/FI_idx: Error from server at
http://localhost:8983/solr/FI_idx: Error from server at
http://localhost:8983/solr/FI_idx: Error from server at
http://localhost:8983/solr/FI_idx: Error from server at
http://localhost:8983/solr/FI_idx: Error from server at
http://localhost:8983/solr/FI_idx: Error from server at
http://localhost:8983/solr/FI_idx: Error from server at
http://localhost:8983/solr/FI_idx: Error from server at
http://localhost:8983/solr/FI_idx: Error from server at
http://localhost:8983/solr/FI_idx: Error from server at
http://localhost:8983/solr/FI_idx: Error from server at
http://localhost:8983/solr/FI_idx: Error from server at
http://localhost:8983/solr/FI_idx: Error from server at
http://localhost:8983/solr/FI_idx: Error from server at
http://localhost:8983/solr/FI_idx: Error from server at
http://localhost:8983/solr/FI_idx: Error from server at
http://localhost:8983/solr/FI_idx: Error from server at
http://localhost:8983/solr/FI_idx: Error from server at
http://localhost:8983/solr/FI_idx: Error from server at
http://localhost:8983/solr/FI_idx:
org.apache.solr.client.solrj.SolrServerException: IOException occured when
talking to server at: http://localhost:8983/solr/FI_idx\n\tat
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560)\n\tat
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:235)\n\tat
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:227)\n\tat
org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)\n\tat
org.apache.solr.handler.component.HttpShardHandler$1.call(HttpShardHandler.java:218)\n\tat
org.apache.solr.handler.component.HttpShardHandler$1.call(HttpShardHandler.java:183)\n\tat
java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat
java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:148)\n\tat
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n\tat
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n\tat
java.lang.Thread.run(Thread.java:745)\n",
"code": 500





Re: Soft commit impact on replication

2018-06-18 Thread Adarsh_infor
Hi Erick,

Thanks for the response.

First, we are not indexing on the slave. And we are not
re-indexing/optimizing the entire core on the master node.

The only warning I see in the log is "Unable to clean the unused index
directory, so starting full copy." That one I can understand, and I
don't have an issue with it, as it is normal behaviour. But most of the
time it just triggers a full copy without any details in the log.

Recently, on one of the master nodes I enabled soft commit and
monitored the corresponding slave node. What I observed is that it did
not trigger a full copy even once for almost 3 consecutive days. So I
am wondering: do we need soft commit enabled on the master for
replication to happen smoothly, and if so, what is the dependency
there?
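
For concreteness, the commit policy on a master is usually configured
in solrconfig.xml along these lines (the values here are illustrative
assumptions, not the actual settings from this thread). In master/slave
setups, replication typically triggers off hard commits, since soft
commits do not write new segments for the slave to copy:

  <updateHandler class="solr.DirectUpdateHandler2">
    <!-- hard commit: flushes segments to disk; this is what slaves replicate -->
    <autoCommit>
      <maxTime>60000</maxTime>
      <openSearcher>false</openSearcher>
    </autoCommit>
    <!-- soft commit: document visibility on the master only; not replicated -->
    <autoSoftCommit>
      <maxTime>5000</maxTime>
    </autoSoftCommit>
  </updateHandler>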


Thanks 


