Re: Implementing DIH - Using a non-datetime change tracking column to Identify delta

2017-04-07 Thread subinalex
Thanks Shawn!! :-)
I'll try this out.


On 6 Apr 2017 00:00, "Shawn Heisey-2 [via Lucene]" <
ml-node+s472066n4328519...@n3.nabble.com> wrote:

On 4/4/2017 7:40 AM, subinalex wrote:

> Can we use a non-datetime column to identify delta rows in the deltaQuery
> of a DIH configuration?
> For example, in the below deltaQuery,
>
>   deltaQuery="select ID from category where last_modified >
> '${dih.last_index_time}'"
>
> the delta rows are picked when the last_modified datetime is greater than
> last index time.
>
> I want to pick the deltas if a column value differs from the
> corresponding column value in Solr.
>
>   deltaQuery="select ID from category where md5hashcode <>
> 'indexedmd5hashcode'"

The only piece of information that DIH saves internally when it starts
an import is the current timestamp.

You can still do what you want, but you will need to be responsible for
keeping track of the information necessary to determine what's new in
your own program.  Solr will not do it for you.
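As a sketch of that kind of external bookkeeping (the index_tracking table
here is hypothetical, maintained by your own indexing program), the delta
candidates could be computed entirely in SQL:

-- index_tracking(id, indexed_md5) is hypothetical: your indexing program
-- updates it after each successful import.
SELECT c.ID
FROM category c
LEFT JOIN index_tracking t ON t.id = c.ID
WHERE t.id IS NULL                    -- never indexed
   OR c.md5hashcode <> t.indexed_md5  -- changed since last indexed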

When you start an import, you can provide any arbitrary information with
URL parameters on the request that starts the import.  Here's my full
config for DIH from one of my Solr cores showing how to use
these parameters:
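(The attached config did not survive the archive; what follows is a minimal
sketch of the idea, with illustrative names apart from the "dataView"
parameter described below.)

<dataConfig>
  <dataSource type="JdbcDataSource" driver="..." url="jdbc:..."/>
  <document>
    <entity name="item"
            query="SELECT * FROM ${dataimporter.request.dataView}
                   WHERE id BETWEEN '${dataimporter.request.minId}'
                   AND '${dataimporter.request.maxId}'"
            deltaImportQuery="SELECT * FROM ${dataimporter.request.dataView}
                   WHERE id BETWEEN '${dataimporter.request.minId}'
                   AND '${dataimporter.request.maxId}'"/>
  </document>
</dataConfig>

An import started with, e.g.,
/solr/corename/dataimport?command=full-import&dataView=v_items&minId=0&maxId=99999
would substitute those request parameters into the SQL.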



I am specifying many of the parts of the SQL query from URL parameters.
For example, I will include a "dataView" parameter to choose at import
time what view or table will be queried.  The other parameters control
what ID values will be returned.

The query and deltaImportQuery attributes are identical.  At one time,
all my indexing was done with DIH, so I used these parameters to limit
what was done by the delta-import runs.  Currently, DIH is only used for
full rebuilds; I have a SolrJ program for incremental changes.
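As a rough illustration of that kind of incremental SolrJ updater (the
endpoint, core name and field values are hypothetical; the change detection
itself lives in your own program, as described above):

import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public final class IncrementalUpdater {
  public static void main(String[] args) throws Exception {
    // Hypothetical endpoint; push only the rows your own tracking
    // logic has identified as new or changed.
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build()) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "42");
      doc.addField("md5hashcode", "9e107d9d372bb6826bd81d3542a419d6");
      client.add(doc);
      client.commit();
    }
  }
}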

Thanks,
Shawn









Re: Problems creating index for suggestions

2017-04-07 Thread Alexis Aravena Silva
Thanks Alessandro, I'll read the article.


Regards,

Alexis Aravena S.

Scrum Master & Agile Coach

Mobile: +569 69080134

Email: aarav...@itsofteg.com



From: alessandro.benedetti 
Sent: Friday, April 7, 2017 11:39:56 AM
To: solr-user@lucene.apache.org
Subject: Re: Problems creating index for suggestions

Hi Alexis,
this is not the reason for the 20Gb overhead, but you are certainly using
the suggester component in the wrong way.
You don't want the analysis chain to produce edge ngrams and then build the
FST out of those tokens.
Read the chapters related to the suggesters you are interested in [1];
they may be useful for understanding how the suggesters work.
You should use an analysis chain without the edgeNgram token filter, at least.

[1] http://alexbenedetti.blogspot.co.uk/2015/07/solr-you-complete-me.html

Cheers



-----
Alessandro Benedetti
Search Consultant, R&D Software Engineer, Director
Sease Ltd. - www.sease.io


Re: Solr with HDFS on AWS S3 - Server restart fails to load the core

2017-04-07 Thread Amarnath palavalli
Hi Kevin,
Sorry for not being clear on the response.
What I meant here is with attribute loadOnStartup=true is not helping. I
see the same issue as posted on the images 'connection to solr lost' on
choosing core.  And don't see any errors in the log with with DEBUG level.

Thanks,
Amar

On Fri, Apr 7, 2017 at 3:38 PM, Kevin Risden 
wrote:

> >
> > Thank you for the response. Setting “loadOnStartup=true“ results in
> showing
> > the connection timeout on clicking 'Core Admin' on Solr UI. Also, reload
> > does not work as the core is not loaded at all.
>
>
> Can you clarify what you mean by this? Does the core get loaded after you
> restart Solr?
>
> The initial description was the core wasn't loaded after Solr was
> restarted. What you are describing now is different I think.
>
> Kevin Risden
>
> On Fri, Apr 7, 2017 at 6:31 PM, Amarnath palavalli 
> wrote:
>
> > Hi Trey,
> >
> > Thank you for the response. Setting “loadOnStartup=true“ results in
> showing
> > the connection timeout on clicking 'Core Admin' on Solr UI. Also, reload
> > does not work as the core is not loaded at all.
> >
> > I suspect it is something to do with HTTP connection idle time; probably
> > the connection is closed before the data is pulled from S3. I see that
> > 'maxUpdateConnectionIdleTime' is 40 seconds by default; however, I don't
> > know how to change it.
> >
> > Thanks,
> > Amar
> >
> >
> >
> > On Fri, Apr 7, 2017 at 12:47 PM, Cahill, Trey 
> > wrote:
> >
> > > Hi Amarnath,
> > >
> > > It looks like you’ve set the core to not load on startup via the
> > > “loadOnStartup=false“ property.   Your response also shows that the
> core
> > is
> > > not loaded, “false“.
> > >
> > > I’m not really sure how to load cores after a restart, but possibly
> using
> > > the Core Admin Reload would do it (https://cwiki.apache.org/
> > > confluence/display/solr/CoreAdmin+API#CoreAdminAPI-RELOAD).
> > >
> > > Best of luck,
> > >
> > > Trey
> > >
> > > From: Amarnath palavalli [mailto:pamarn...@gmail.com]
> > > Sent: Friday, April 07, 2017 3:20 PM
> > > To: solr-user@lucene.apache.org
> > > Subject: Solr with HDFS on AWS S3 - Server restart fails to load the
> core
> > >
> > > Hello,
> > >
> > > I configured Solr to use HDFS, which in turn is configured to use S3N. I
> > used
> > > the information from this issue to configure:
> > > https://issues.apache.org/jira/browse/SOLR-9952
> > >
> > > Here is the command I have used to start the Solr with HDFS:
> > > bin/solr start -Dsolr.directoryFactory=HdfsDirectoryFactory
> > > -Dsolr.lock.type=hdfs -Dsolr.hdfs.home=s3n://amar-hdfs/solr
> > > -Dsolr.hdfs.confdir=/usr/local/Cellar/hadoop/2.7.3/libexec/etc/hadoop
> > > -DXX:MaxDirectMemorySize=2g
> > >
> > > I am able to create a core, with the following properties:
> > > #Written by CorePropertiesLocator
> > > #Thu Apr 06 23:08:57 UTC 2017
> > > name=amar-s3
> > > loadOnStartup=false
> > > transient=true
> > > configSet=base-config
> > >
> > > I am able to ingest messages into Solr and also query the content.
> > > Everything seems to be fine until this stage and I can see the data dir
> > on
> > > S3.
> > >
> > > However, the problem is when I restart the Solr server: that is when I
> > > see the core not loaded, even when it is accessed/queried. Here is what
> > > the admin API to get all cores gives:
> > > <response>
> > > <lst name="responseHeader">
> > > <int name="status">0</int>
> > > <int name="QTime">617</int>
> > > </lst>
> > > <lst name="status">
> > > ...
> > > <lst name="amar-s3">
> > > <str name="name">amar-s3</str>
> > > <str name="instanceDir">/Users/apalavalli/solr/solr-deployment/server/solr/amar-s3</str>
> > > <str name="dataDir">data/</str>
> > > <str name="config">solrconfig.xml</str>
> > > <str name="schema">schema.xml</str>
> > > <bool name="isLoaded">false</bool>
> > > </lst>
> > > </lst>
> > > </response>
> > >
> > > I don't see any issues reported in the log as well, but see this error
> > > from the UI:
> > >
> > > [Inline image 1]
> > >
> > >
> > > Not sure about the problem. This is happening when I ingest more than
> 40K
> > > messages in core before restarting Solr server.
> > >
> > > I am using Hadoop 2.7.3 with S3N FS. Please help me on resolving this
> > > issue.
> > >
> > > Thanks,
> > > Regards,
> > > Amar
> > >
> > >
> > >
> > >
> > >
> >
>


Re: SortingMergePolicy in solr 6.4.2

2017-04-07 Thread Dorian Hoxha
Did you get any update on this?

On Tue, Mar 14, 2017 at 11:56 AM, Sahil Agarwal 
wrote:

> The SortingMergePolicy does not seem to get applied.
>
> The csv file gets indexed without errors. But when I search for a term, the
> results returned are not sorted by Marks.
>
> Following is a toy project in Solr 6.4.2 on which I tried to use
> SortingMergePolicyFactory.
>
> I am just showing the changes that I made in the core's config files. Please
> tell me if any other info is needed.
> I used the basic_configs when creating the core:
> create_core -c corename -d basic_configs
>
>
> managed-schema
>
> 
> .
> .
> .
> (field definitions for id, Name, Subject and Marks; the XML tags were
> stripped in the archive)
>
>
> solrconfig.xml
>
> <mergePolicyFactory class="org.apache.solr.index.SortingMergePolicyFactory">
>   <str name="sort">Marks desc</str>
>   <str name="wrapped.prefix">inner</str>
>   <str name="inner.class">org.apache.solr.index.TieredMergePolicyFactory</str>
> </mergePolicyFactory>
>
> 1.csv
>
> id,Name,Subject,Marks
> 1,Sahil Agarwal,Computers,1108
> 2,Ian Roberts,Maths,7077
> 3,Karan Vatsa,English,6092
> 4,Amit Williams,Maths,3924
> 5,Vani Agarwal,Computers,4263
> 6,Brenda Gupta,Computers,2309
> ...
> (30 rows)
>
> What could be the problem?
>


Re: Solr with HDFS on AWS S3 - Server restart fails to load the core

2017-04-07 Thread Kevin Risden
>
> Thank you for the response. Setting “loadOnStartup=true“ results in showing
> the connection timeout on clicking 'Core Admin' on Solr UI. Also, reload
> does not work as the core is not loaded at all.


Can you clarify what you mean by this? Does the core get loaded after you
restart Solr?

The initial description was the core wasn't loaded after Solr was
restarted. What you are describing now is different, I think.

Kevin Risden

On Fri, Apr 7, 2017 at 6:31 PM, Amarnath palavalli 
wrote:

> Hi Trey,
>
> Thank you for the response. Setting “loadOnStartup=true“ results in showing
> the connection timeout on clicking 'Core Admin' on Solr UI. Also, reload
> does not work as the core is not loaded at all.
>
> I suspect it is something to do with HTTP connection idle time; probably the
> connection is closed before the data is pulled from S3. I see that
> 'maxUpdateConnectionIdleTime' is 40 seconds by default; however, I don't know
> how to change it.
>
> Thanks,
> Amar
>
>
>
> On Fri, Apr 7, 2017 at 12:47 PM, Cahill, Trey 
> wrote:
>
> > Hi Amarnath,
> >
> > It looks like you’ve set the core to not load on startup via the
> > “loadOnStartup=false“ property.   Your response also shows that the core
> is
> > not loaded, “false“.
> >
> > I’m not really sure how to load cores after a restart, but possibly using
> > the Core Admin Reload would do it (https://cwiki.apache.org/
> > confluence/display/solr/CoreAdmin+API#CoreAdminAPI-RELOAD).
> >
> > Best of luck,
> >
> > Trey
> >
> > From: Amarnath palavalli [mailto:pamarn...@gmail.com]
> > Sent: Friday, April 07, 2017 3:20 PM
> > To: solr-user@lucene.apache.org
> > Subject: Solr with HDFS on AWS S3 - Server restart fails to load the core
> >
> > Hello,
> >
> > I configured Solr to use HDFS, which in turn is configured to use S3N. I
> used
> > the information from this issue to configure:
> > https://issues.apache.org/jira/browse/SOLR-9952
> >
> > Here is the command I have used to start the Solr with HDFS:
> > bin/solr start -Dsolr.directoryFactory=HdfsDirectoryFactory
> > -Dsolr.lock.type=hdfs -Dsolr.hdfs.home=s3n://amar-hdfs/solr
> > -Dsolr.hdfs.confdir=/usr/local/Cellar/hadoop/2.7.3/libexec/etc/hadoop
> > -DXX:MaxDirectMemorySize=2g
> >
> > I am able to create a core, with the following properties:
> > #Written by CorePropertiesLocator
> > #Thu Apr 06 23:08:57 UTC 2017
> > name=amar-s3
> > loadOnStartup=false
> > transient=true
> > configSet=base-config
> >
> > I am able to ingest messages into Solr and also query the content.
> > Everything seems to be fine until this stage and I can see the data dir
> on
> > S3.
> >
> > However, the problem is when I restart the Solr server: that is when I see
> > the core not loaded, even when it is accessed/queried. Here is what the
> > admin API to get all cores gives:
> > <response>
> > <lst name="responseHeader">
> > <int name="status">0</int>
> > <int name="QTime">617</int>
> > </lst>
> > <lst name="status">
> > ...
> > <lst name="amar-s3">
> > <str name="name">amar-s3</str>
> > <str name="instanceDir">/Users/apalavalli/solr/solr-deployment/server/solr/amar-s3</str>
> > <str name="dataDir">data/</str>
> > <str name="config">solrconfig.xml</str>
> > <str name="schema">schema.xml</str>
> > <bool name="isLoaded">false</bool>
> > </lst>
> > </lst>
> > </response>
> >
> > I don't see any issues reported in the log as well, but see this error
> > from the UI:
> >
> > [Inline image 1]
> >
> >
> > Not sure about the problem. This is happening when I ingest more than 40K
> > messages in core before restarting Solr server.
> >
> > I am using Hadoop 2.7.3 with S3N FS. Please help me on resolving this
> > issue.
> >
> > Thanks,
> > Regards,
> > Amar
> >
> >
> >
> >
> >
>


Re: Solr with HDFS on AWS S3 - Server restart fails to load the core

2017-04-07 Thread Amarnath palavalli
Hi Trey,

Thank you for the response. Setting "loadOnStartup=true" results in a
connection timeout when clicking 'Core Admin' in the Solr UI. Also, reload
does not work, as the core is not loaded at all.

I suspect it is something to do with HTTP connection idle time; probably the
connection is closed before the data is pulled from S3. I see that
'maxUpdateConnectionIdleTime' is 40 seconds by default; however, I don't know
how to change it.

Thanks,
Amar



On Fri, Apr 7, 2017 at 12:47 PM, Cahill, Trey 
wrote:

> Hi Amarnath,
>
> It looks like you’ve set the core to not load on startup via the
> “loadOnStartup=false“ property.   Your response also shows that the core is
> not loaded, “false“.
>
> I’m not really sure how to load cores after a restart, but possibly using
> the Core Admin Reload would do it (https://cwiki.apache.org/
> confluence/display/solr/CoreAdmin+API#CoreAdminAPI-RELOAD).
>
> Best of luck,
>
> Trey
>
> From: Amarnath palavalli [mailto:pamarn...@gmail.com]
> Sent: Friday, April 07, 2017 3:20 PM
> To: solr-user@lucene.apache.org
> Subject: Solr with HDFS on AWS S3 - Server restart fails to load the core
>
> Hello,
>
> I configured Solr to use HDFS, which in turn is configured to use S3N. I used
> the information from this issue to configure:
> https://issues.apache.org/jira/browse/SOLR-9952
>
> Here is the command I have used to start the Solr with HDFS:
> bin/solr start -Dsolr.directoryFactory=HdfsDirectoryFactory
> -Dsolr.lock.type=hdfs -Dsolr.hdfs.home=s3n://amar-hdfs/solr
> -Dsolr.hdfs.confdir=/usr/local/Cellar/hadoop/2.7.3/libexec/etc/hadoop
> -DXX:MaxDirectMemorySize=2g
>
> I am able to create a core, with the following properties:
> #Written by CorePropertiesLocator
> #Thu Apr 06 23:08:57 UTC 2017
> name=amar-s3
> loadOnStartup=false
> transient=true
> configSet=base-config
>
> I am able to ingest messages into Solr and also query the content.
> Everything seems to be fine until this stage and I can see the data dir on
> S3.
>
> However, the problem is when I restart the Solr server: that is when I see
> the core not loaded, even when it is accessed/queried. Here is what the
> admin API to get all cores gives:
> <response>
> <lst name="responseHeader">
> <int name="status">0</int>
> <int name="QTime">617</int>
> </lst>
> <lst name="status">
> ...
> <lst name="amar-s3">
> <str name="name">amar-s3</str>
> <str name="instanceDir">/Users/apalavalli/solr/solr-deployment/server/solr/amar-s3</str>
> <str name="dataDir">data/</str>
> <str name="config">solrconfig.xml</str>
> <str name="schema">schema.xml</str>
> <bool name="isLoaded">false</bool>
> </lst>
> </lst>
> </response>
>
> I don't see any issues reported in the log as well, but see this error
> from the UI:
>
> [Inline image 1]
>
>
> Not sure about the problem. This is happening when I ingest more than 40K
> messages in core before restarting Solr server.
>
> I am using Hadoop 2.7.3 with S3N FS. Please help me on resolving this
> issue.
>
> Thanks,
> Regards,
> Amar
>
>
>
>
>


Re: Solr Shingle is not working properly in solr 6.5.0

2017-04-07 Thread Steve Rowe
Thanks for the reminder about the ref guide.  I've added the new field type
property to it.

--
Steve
www.lucidworks.com

> On Apr 5, 2017, at 5:56 PM, Markus Jelsma  wrote:
> 
> Hello Steve - that will do the job. I am sure it will be well documented in 
> the reference docs/cwiki as well, so we all can look this up later.
> 
> Many thanks,
> Markus
> 
> 
> 
> -Original message-
>> From:Steve Rowe 
>> Sent: Wednesday 5th April 2017 23:50
>> To: solr-user@lucene.apache.org
>> Subject: Re: Solr Shingle is not working properly in solr 6.5.0
>> 
>> Hi Markus,
>> 
>> Here’s what I included in 6.5.1’s CHANGES.txt (as well as on branch_6x and 
>> master, so it’ll be included in future releases’ CHANGES.txt too):
>> 
>> -
>> * SOLR-10423: Disable graph query production via schema configuration
>>   <fieldtype ... enableGraphQueries="false">.
>>This fixes broken queries for ShingleFilter-containing query-time 
>> analyzers when request param sow=false.
>>(Steve Rowe)
>> -
>> 
>> --
>> Steve
>> www.lucidworks.com
>> 
>>> On Apr 5, 2017, at 5:43 PM, Markus Jelsma  
>>> wrote:
>>> 
>>> Steve - please include a broad description of this feature in the next 
>>> CHANGES.txt. I will forget about this thread but need to be reminded of why 
>>> I could need it :)
>>> 
>>> Thanks,
>>> Markus
>>> 
>>> 
>>> -Original message-
 From:Steve Rowe 
 Sent: Wednesday 5th April 2017 23:26
 To: solr-user@lucene.apache.org
 Subject: Re: Solr Shingle is not working properly in solr 6.5.0
 
 Aman,
 
 In forthcoming Solr 6.5.1, this problem will be addressed by setting a new
 fieldtype option named "enableGraphQueries" to "false".

 Your fieldtype will look like this:

 -
 <fieldtype name="..." class="solr.TextField"
            positionIncrementGap="100" enableGraphQueries="false">
   <analyzer>
     <tokenizer class="..."/>
     <filter class="solr.ShingleFilterFactory" outputUnigrams="false"
             maxShingleSize="4"/>
   </analyzer>
 </fieldtype>
 -

 I've created <https://issues.apache.org/jira/browse/SOLR-10423> for this
 problem.
> 
> --
> Steve
> www.lucidworks.com
> 
>> On Mar 31, 2017, at 7:34 AM, Aman Deep Singh  
>> wrote:
>> 
>> Hi Rick,
>> Query creation is correct; the only thing causing the problem is the
>> Boolean '+' applied while building the Lucene query, which forces all
>> tokens to be matched in the document (equivalent of mm=100%). Even though
>> I use mm=1, it was producing a Boolean '+' query:
>> normal query: one plus one abc
>> Lucene query:
>> +(((+nameShingle:one plus +nameShingle:plus one +nameShingle:one abc))
>> ((+nameShingle:one plus +nameShingle:plus one abc))
>> ((+nameShingle:one plus one +nameShingle:one abc))
>> (nameShingle:one plus one abc))
>>
>> Now, since my doc contains only "one plus one", its shingles are
>> "one plus", "plus one" and "one plus one"; thus, due to the Boolean '+',
>> it was not matching.
>> Thanks,
>> Aman Deep Singh
>> 
>> On Fri, Mar 31, 2017 at 4:41 PM Rick Leir  wrote:
>> 
>>> Hi Aman
>>> Did you try the Admin Analysis tool? It will show you which filters are
>>> effective at index and query time. It will help you understand why you
>>> are not getting a match.
>>> Cheers -- Rick
>>> 
>>> On March 31, 2017 2:36:33 AM EDT, Aman Deep Singh <
>>> amandeep.coo...@gmail.com> wrote:
 Hi,
 I was trying to use the shingle filter, but it was not creating the
 query as desired.
 
 my schema is
 <fieldType name="..." class="solr.TextField" positionIncrementGap="100">
   <analyzer>
     <tokenizer class="..."/>
     <filter class="solr.ShingleFilterFactory" outputUnigrams="false"
             maxShingleSize="4"/>
   </analyzer>
 </fieldType>

 <field name="nameShingle" type="..." indexed="true" stored="true"/>
 
 my solr query is
 
 http://localhost:8983/solr/productCollection/select?defType=edismax&debugQuery=true&q=one%20plus%20one%20four&qf=nameShingle&sow=false&wt=xml
 
 and it was creating the parsed query as
 
 (+(DisjunctionMaxQuery(((+nameShingle:one plus +nameShingle:plus one +nameShingle:one four)))
 DisjunctionMaxQuery(((+nameShingle:one plus +nameShingle:plus one four)))
 DisjunctionMaxQuery(((+nameShingle:one plus one +nameShingle:one four)))
 DisjunctionMaxQuery((nameShingle:one plus one four)))~1)/no_coord


 +((((+nameShingle:one plus +nameShingle:plus one +nameShingle:one four))
 ((+nameShingle:one plus +nameShingle:plus one four))
 ((+nameShingle:one plus one +nameShingle:one four))
 (nameShingle:one plus one four))~1)
 
 
 
 So ideally token creation is perfect, but in the query it is using the
 Boolean '+' operator, which is causing the problem: if I have a document
 with name "one plus one", then according to the shingles it should match,
 as its tokens will be ("one plus", "one plus one", "plus one"

RE: Solr with HDFS on AWS S3 - Server restart fails to load the core

2017-04-07 Thread Cahill, Trey
Hi Amarnath,

It looks like you've set the core to not load on startup via the
"loadOnStartup=false" property. Your response also shows that the core is not
loaded ("false").

I’m not really sure how to load cores after a restart, but possibly using the 
Core Admin Reload would do it 
(https://cwiki.apache.org/confluence/display/solr/CoreAdmin+API#CoreAdminAPI-RELOAD).
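For example, assuming the default port, the reload call for this core would
be: http://localhost:8983/solr/admin/cores?action=RELOAD&core=amar-s3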

Best of luck,

Trey

From: Amarnath palavalli [mailto:pamarn...@gmail.com]
Sent: Friday, April 07, 2017 3:20 PM
To: solr-user@lucene.apache.org
Subject: Solr with HDFS on AWS S3 - Server restart fails to load the core

Hello,

I configured Solr to use HDFS, which in turn is configured to use S3N. I used the
information from this issue to configure:
https://issues.apache.org/jira/browse/SOLR-9952

Here is the command I have used to start the Solr with HDFS:
bin/solr start -Dsolr.directoryFactory=HdfsDirectoryFactory 
-Dsolr.lock.type=hdfs -Dsolr.hdfs.home=s3n://amar-hdfs/solr 
-Dsolr.hdfs.confdir=/usr/local/Cellar/hadoop/2.7.3/libexec/etc/hadoop  
-DXX:MaxDirectMemorySize=2g

I am able to create a core, with the following properties:
#Written by CorePropertiesLocator
#Thu Apr 06 23:08:57 UTC 2017
name=amar-s3
loadOnStartup=false
transient=true
configSet=base-config

I am able to ingest messages into Solr and also query the content. Everything 
seems to be fine until this stage and I can see the data dir on S3.

However, the problem is when I restart the Solr server: that is when I see the
core not loaded, even when it is accessed/queried. Here is what the admin API
to get all cores gives:

<response>
<lst name="responseHeader">
<int name="status">0</int>
<int name="QTime">617</int>
</lst>
<lst name="status">
...
<lst name="amar-s3">
<str name="name">amar-s3</str>
<str name="instanceDir">/Users/apalavalli/solr/solr-deployment/server/solr/amar-s3</str>
<str name="dataDir">data/</str>
<str name="config">solrconfig.xml</str>
<str name="schema">schema.xml</str>
<bool name="isLoaded">false</bool>
</lst>
</lst>
</response>

I don't see any issues reported in the log as well, but see this error from the 
UI:

[Inline image 1]


Not sure about the problem. This is happening when I ingest more than 40K 
messages in core before restarting Solr server.

I am using Hadoop 2.7.3 with S3N FS. Please help me on resolving this issue.

Thanks,
Regards,
Amar






Solr with HDFS on AWS S3 - Server restart fails to load the core

2017-04-07 Thread Amarnath palavalli
Hello,

I configured Solr to use HDFS, which in turn is configured to use S3N. I used
the information from this issue to configure it:
https://issues.apache.org/jira/browse/SOLR-9952

Here is the command I have used to start the Solr with HDFS:

bin/solr start -Dsolr.directoryFactory=HdfsDirectoryFactory
-Dsolr.lock.type=hdfs -Dsolr.hdfs.home=s3n://amar-hdfs/solr
-Dsolr.hdfs.confdir=/usr/local/Cellar/hadoop/2.7.3/libexec/etc/hadoop
-DXX:MaxDirectMemorySize=2g

I am able to create a core, with the following properties:
#Written by CorePropertiesLocator
#Thu Apr 06 23:08:57 UTC 2017
name=amar-s3
loadOnStartup=false
transient=true
configSet=base-config

I am able to ingest messages into Solr and also query the content.
Everything seems to be fine until this stage and I can see the data dir on
S3.

However, the problem is when I restart the Solr server: that is when I see
the core not loaded, even when it is accessed/queried. Here is what the
admin API to get all cores gives:

<response>
<lst name="responseHeader">
<int name="status">0</int>
<int name="QTime">617</int>
</lst>
<lst name="status">
...
<lst name="amar-s3">
<str name="name">amar-s3</str>
<str name="instanceDir">/Users/apalavalli/solr/solr-deployment/server/solr/amar-s3</str>
<str name="dataDir">data/</str>
<str name="config">solrconfig.xml</str>
<str name="schema">schema.xml</str>
<bool name="isLoaded">false</bool>
</lst>
</lst>
</response>

I don't see any issues reported in the log either, but I see this error from
the UI:

[image: Inline image 1]


I am not sure about the problem. This happens when I ingest more than 40K
messages into the core before restarting the Solr server.

I am using Hadoop 2.7.3 with S3N FS. Please help me on resolving this issue.

Thanks,
Regards,
Amar


Re: Simple sql query with where clause doesn't work

2017-04-07 Thread lazarusjohn
Did you find an answer? I am getting the same error when I use text instead
of a number in the WHERE clause. Please let me know.





Re: Problems creating index for suggestions

2017-04-07 Thread alessandro.benedetti
Hi Alexis,
this is not the reason for the 20Gb overhead, but you are certainly using
the suggester component in the wrong way.
You don't want the analysis chain to produce edge ngrams and then build the
FST out of those tokens.
Read the chapters related to the suggesters you are interested in [1];
they may be useful for understanding how the suggesters work.
You should use an analysis chain without the edgeNgram token filter, at least.
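As an illustration, a minimal suggester definition over a plain (non-ngram)
field type looks like this; the component, field and type names here are
illustrative, not from the original setup:

<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">mySuggester</str>
    <str name="lookupImpl">FuzzyLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <str name="field">title</str>
    <!-- plain analysis: the FST lookup handles prefixing, no EdgeNGram needed -->
    <str name="suggestAnalyzerFieldType">text_general</str>
    <str name="buildOnStartup">false</str>
  </lst>
</searchComponent>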

[1] http://alexbenedetti.blogspot.co.uk/2015/07/solr-you-complete-me.html

Cheers



-----
Alessandro Benedetti
Search Consultant, R&D Software Engineer, Director
Sease Ltd. - www.sease.io


Re: logging issue

2017-04-07 Thread KRIS MUSSHORN
removing the colon crushed it. thanks


The reason I'm looking at this is that the logging screen is not showing log
content; the last check shows the spinning wheel to the left.



Time (Local) | Level | Core | Logger | Message
No Events available
Last Check: 4/7/2017, 10:26:43 AM



Google Chrome, IE 10 and Firefox all show the same.


What can I do to correct and see the logs?


Kris



> 
> On April 7, 2017 at 10:04 AM Erick Erickson  
> wrote:
> 
> You also put a colon ':' in
> 
> FINEST, :
> 
> BTW, this will give you a _lot_ of output
> 
> Best,
> Erick
> 
> On Fri, Apr 7, 2017 at 6:17 AM, KRIS MUSSHORN  
> wrote:
> 
> > > 
> > SOLR 5.4.1
> > 
> > log files have this entry
> > 
> > log4j:ERROR Could not find value for key log4j.appender.: file
> > log4j:ERROR Could not instantiate appender named ": file".
> > 
> > Here is my config file and the only thing i have changed is set 
> > level to FINEST in line 3. Otherwise this is the default file.
> > 
> > # Logging level
> > solr.log=logs
> > log4j.rootLogger=FINEST,: file, CONSOLE
> > 
> > log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
> > 
> > log4j.appender.CONSOLE.layout=org.apache.log4j.EnhancedPatternLayout
> > log4j.appender.CONSOLE.layout.ConversionPattern=%-4r %-5p (%t) 
> > [%X{collection} %X{shard} %X{replica} %X{core}] %c{1.} %m%n
> > 
> > #- size rotation with log cleanup.
> > log4j.appender.file=org.apache.log4j.RollingFileAppender
> > log4j.appender.file.MaxFileSize=100MB
> > log4j.appender.file.MaxBackupIndex=9
> > 
> > #- File to log to and log format
> > log4j.appender.file.File=${solr.log}/solr.log
> > log4j.appender.file.layout=org.apache.log4j.EnhancedPatternLayout
> > log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd
> > HH:mm:ss.SSS} %-5p (%t) [%X{collection} %X{shard} %X{replica} %X{core}] 
> > %c{1.} %m\n
> > 
> > log4j.logger.org.apache.zookeeper=WARN
> > log4j.logger.org.apache.hadoop=WARN
> > 
> > # set to INFO to enable infostream log messages
> > log4j.logger.org.apache.solr.update.LoggingInfoStream=OFF
> > 
> > > 


Re: SolrIndexSearcher accumulation

2017-04-07 Thread Rick Leir
Hi Gerald
The best solution in my mind is to look at the custom code and try to find a 
way to remove it from your system. Solr queries can be complex, and I hope 
there is a way to get the results you need. Would you like to say what results 
you want to get, and what Solr queries you have tried?
I realize that in large organizations it is difficult to suggest change.
Cheers -- Rick

On April 7, 2017 9:08:19 AM EDT, Shawn Heisey  wrote:
>On 4/7/2017 3:09 AM, Gerald Reinhart wrote:
>>We have some custom code that extends SearchHandler to be able to:
>> - do an extra request
>> - merge/combine the original request and the extra request
>> results
>>
>>On Solr 5.x, our code was working very well; now with Solr 6.x we
>> have the following issue: the number of SolrIndexSearchers keeps
>> increasing (we can see them in the admin view > Plugins / Stats > Core).
>> As SolrIndexSearchers accumulate, we have the following issues:
>>- the memory used by Solr increases => OOM after a long
>> period of time in production
>>- some files in the index have been deleted from the file system but
>> the Solr JVM still holds them open => ("fake") full disk after a long
>> period of time in production
>>
>>We are wondering,
>>   - what has changed between Solr 5.x and Solr 6.x in the
>> management of the SolrIndexSearcher ?
>>   - what would be the best way, in a Solr plugin, to perform 2
>> queries and merge the results to a single SolrQueryResponse ? 
>
>I hesitated to send a reply because when it comes right down to it, I
>do
>not know a whole lot about deep Solr internals.  I tend to do my work
>with the code at a higher level, and don't dive down in the depths all
>that often.  I am slowly learning, though.  You may need to wait for a
>reply from someone who really knows those internals.
>
>It looks like you and I participated in a discussion last month where
>you were facing a similar problem with searchers -- deleted index files
>being held open.  How did that turn out?  Seems like if that problem
>were solved, it would also solve this problem.
>
>Very likely, the fact that the plugin worked correctly in 5.x was
>actually a bug in Solr related to reference counting, one that has been
>fixed in later versions.
>
>You may need to use a paste website or a file-sharing website to share
>all your plugin code so that people can get a look at it.  The list has
>a habit of deleting attachments.
>
>Thanks,
>Shawn

-- 
Sorry for being brief. Alternate email is rickleir at yahoo dot com 

Re: logging issue

2017-04-07 Thread Erick Erickson
You also put a colon ':' in

FINEST, :
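With the colon removed, that line would presumably read:

log4j.rootLogger=FINEST, file, CONSOLE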

BTW, this will give you a _lot_ of output

Best,
Erick

On Fri, Apr 7, 2017 at 6:17 AM, KRIS MUSSHORN  wrote:
> SOLR 5.4.1
>
> log files have this entry
>
>
> log4j:ERROR Could not find value for key log4j.appender.: file
> log4j:ERROR Could not instantiate appender named ": file".
>
>
> Here is my config file and the only thing i have changed is set level to 
> FINEST in line 3. Otherwise this is the default file.
>
> # Logging level
> solr.log=logs
> log4j.rootLogger=FINEST,: file, CONSOLE
>
> log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
>
> log4j.appender.CONSOLE.layout=org.apache.log4j.EnhancedPatternLayout
> log4j.appender.CONSOLE.layout.ConversionPattern=%-4r %-5p (%t) 
> [%X{collection} %X{shard} %X{replica} %X{core}] %c{1.} %m%n
>
> #- size rotation with log cleanup.
> log4j.appender.file=org.apache.log4j.RollingFileAppender
> log4j.appender.file.MaxFileSize=100MB
> log4j.appender.file.MaxBackupIndex=9
>
> #- File to log to and log format
> log4j.appender.file.File=${solr.log}/solr.log
> log4j.appender.file.layout=org.apache.log4j.EnhancedPatternLayout
> log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss.SSS} %-5p
> (%t) [%X{collection} %X{shard} %X{replica} %X{core}] %c{1.} %m\n
>
> log4j.logger.org.apache.zookeeper=WARN
> log4j.logger.org.apache.hadoop=WARN
>
> # set to INFO to enable infostream log messages
> log4j.logger.org.apache.solr.update.LoggingInfoStream=OFF
>


solr

2017-04-07 Thread Hiam Zgheib
Hi, Kindly note I have installed Solr and I have linked it to Oracle. Now I have a JSON file (extracted data only). What should I do in order to transform these data and then insert them into a data warehouse database? Best Regards,

  
  

  

  
Hiam Zgheib
Database Administrator
Regie Libanaise Des Tabacs Et Tombacs
Add. Hadat, Beirut, Lebanon | P.O.Box. 77 Beirut - Lebanon
Tel. +961 5 466471 - Ext. 1518 | Fax. +961 5 461187
www.rltt.com.lb


logging issue

2017-04-07 Thread KRIS MUSSHORN
SOLR 5.4.1 

log files have this entry


log4j:ERROR Could not find value for key log4j.appender.: file
log4j:ERROR Could not instantiate appender named ": file".


Here is my config file and the only thing i have changed is set level to FINEST 
in line 3. Otherwise this is the default file.

# Logging level
solr.log=logs
log4j.rootLogger=FINEST,: file, CONSOLE

log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender

log4j.appender.CONSOLE.layout=org.apache.log4j.EnhancedPatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%-4r %-5p (%t) [%X{collection} 
%X{shard} %X{replica} %X{core}] %c{1.} %m%n

#- size rotation with log cleanup.
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.MaxFileSize=100MB
log4j.appender.file.MaxBackupIndex=9

#- File to log to and log format
log4j.appender.file.File=${solr.log}/solr.log
log4j.appender.file.layout=org.apache.log4j.EnhancedPatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss.SSS} %-5p
(%t) [%X{collection} %X{shard} %X{replica} %X{core}] %c{1.} %m\n

log4j.logger.org.apache.zookeeper=WARN
log4j.logger.org.apache.hadoop=WARN

# set to INFO to enable infostream log messages
log4j.logger.org.apache.solr.update.LoggingInfoStream=OFF



Re: Error in Not Contains query

2017-04-07 Thread Zheng Lin Edwin Yeo
Hi Mikhail,

Thanks for your reply.

Your suggestion works, but I found that there is a slight difference in the
number of records returned between the two queries, even though the rest of
the values are the same.

I'm still finding out which of them is correct, but would the algorithm
cause some results to be returned differently?

Regards,
Edwin


On 7 April 2017 at 20:10, Mikhail Khludnev  wrote:

> Hello, Edwin.
> The {!foo} syntax is surprising when a query string contains a space. Thus,
> it's gonna be fine if there is no space in the child query. You can solve it
> with referencing:
> fq=-{!parent%20which=%22contentType_s:Header%22
> v=$childq}&childq=matNo_s:(*88060*%20*88061*)
>
> On 7 Apr 2017 at 13:43, "Zheng Lin Edwin Yeo" <
> edwinye...@gmail.com> wrote:
>
> > Hi,
> >
> > When I do the Not Contains query, I get the following error when I do it
> in
> > the following way:
> >
> > *Not working*
> > 1) When put the "-" sign before the {!parent which}
> >
> > http://localhost:8983/solr/test/select?q=contentType_s:
> > Header%20AND%20date_dt:[2016-01-01T00:00:00Z%20TO%202017-
> > 01-01T00:00:00Z]&fq=-{!parent%20which=%22contentType_s:
> > Header%22}matNo_s:(*88060*%20*88061*)&json.facet={client_s:{
> > type:terms,field:client_s,%20limit:200,%20offset:0,%
> > 20mincount:1}}&facet.threads=-1&fl=null&rows=0
> >
> > This is the error message that I get:
> > {
> >   "responseHeader":{
> > "zkConnected":true,
> > "status":400,
> > "QTime":1},
> >   "error":{
> > "metadata":[
> >   "error-class","org.apache.solr.common.SolrException",
> >   "root-error-class","org.apache.solr.parser.ParseException"],
> > "msg":"org.apache.solr.search.SyntaxError: Cannot parse
> > 'matNo_s:(*88060*': Encountered \"\" at line 1, column 28.\r\nWas
> > expecting one of:\r\n ...\r\n ...\r\n ...\r\n
> >  \"+\" ...\r\n\"-\" ...\r\n ...\r\n\"(\" ...\r\n
> >  \")\" ...\r\n\"*\" ...\r\n\"^\" ...\r\n ...\r\n
> >   ...\r\n ...\r\n ...\r\n
> >   ...\r\n ...\r\n\"[\" ...\r\n\"{\"
> > ...\r\n ...\r\n\"filter(\" ...\r\n ...\r\n
> >  ",
> > "code":400}}
> > *Not working*
> > 2) When I put the "-" sign just before the matNo
> >
> > http://localhost:8983/solr/test/select?q=contentType_s:
> > Header%20AND%20date_dt:[2016-01-01T00:00:00Z%20TO%202017-
> > 01-01T00:00:00Z]&fq={!parent%20which=%22contentType_s:
> > Header%22}-matNo_s:(*88060*%20*88061*)&json.facet={client_
> > s:{type:terms,field:client_s,%20limit:200,%20offset:0,%
> > 20mincount:1}}&facet.threads=-1&fl=null&rows=0
> >
> > There is no result returned for this:
> > {
> >   "responseHeader":{
> > "zkConnected":true,
> > "status":0,
> > "QTime":0},
> >   "response":{"numFound":0,"start":0,"docs":[]
> >   },
> >   "facets":{
> > "count":0}}
> >
> > *Working*
> > I need to include the field parentId_s:* to match all query first
> >
> > http://localhost:8983/solr/test/select?q=contentType_s:
> > Header%20AND%20date_dt:[2016-01-01T00:00:00Z%20TO%202017-
> > 01-01T00:00:00Z]&fq={!parent%20which=%22contentType_s:
> > Header%22}parentId_s:*
> > AND -matNo_s:(88060*
> > 88061*)&json.facet={client_s:{type:terms,field:client_s,%
> > 20limit:200,%20offset:0,%20mincount:1}}&facet.threads=-1&fl=null&rows=0
> >
> > However, this is an inefficient way to get the Not Contains. By right, we
> > should be able to get the results from the first two examples. Can this
> > be considered a bug?
> >
> > I'm using Solr 6.4.2.
> >
> >
> > Regards,
> > Edwin
> >
>


Re: SolrIndexSearcher accumulation

2017-04-07 Thread Shawn Heisey
On 4/7/2017 3:09 AM, Gerald Reinhart wrote:
>We have some custom code that extends SearchHandler to be able to :
> - do an extra request
> - merge/combine the original request and the extra request
> results
>
>On Solr 5.x, our code was working very well; now with Solr 6.x we
> have the following issue: the number of SolrIndexSearchers keeps
> increasing (we can see them in the admin view > Plugins / Stats > Core).
> As SolrIndexSearchers accumulate, we have the following issues:
>- the memory used by Solr increases => OOM after a long
> period of time in production
>- some files in the index have been deleted from the file system but
> the Solr JVM still holds them open => ("fake") full disk after a long
> period of time in production
>
>We are wondering,
>   - what has changed between Solr 5.x and Solr 6.x in the
> management of the SolrIndexSearcher ?
>   - what would be the best way, in a Solr plugin, to perform 2
> queries and merge the results to a single SolrQueryResponse ? 

I hesitated to send a reply because when it comes right down to it, I do
not know a whole lot about deep Solr internals.  I tend to do my work
with the code at a higher level, and don't dive down in the depths all
that often.  I am slowly learning, though.  You may need to wait for a
reply from someone who really knows those internals.

It looks like you and I participated in a discussion last month where
you were facing a similar problem with searchers -- deleted index files
being held open.  How did that turn out?  Seems like if that problem
were solved, it would also solve this problem.

Very likely, the fact that the plugin worked correctly in 5.x was
actually a bug in Solr related to reference counting, one that has been
fixed in later versions.
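For reference, the usual borrow/release pattern for a searcher inside a
plugin looks like the minimal sketch below (the wrapper class and method
are illustrative; RefCounted and SolrCore.getSearcher() are the actual APIs):

import org.apache.solr.core.SolrCore;
import org.apache.solr.search.SolrIndexSearcher;
import org.apache.solr.util.RefCounted;

public final class SearcherBorrowExample {
  // Borrow the core's registered searcher, use it, then release it.
  // A missed decref() keeps the searcher -- and the deleted-but-open
  // index files it references -- alive forever, matching the symptoms
  // described above.
  static void runExtraQuery(SolrCore core) {
    RefCounted<SolrIndexSearcher> ref = core.getSearcher();
    try {
      SolrIndexSearcher searcher = ref.get();
      // ... execute the extra query against 'searcher' here ...
    } finally {
      ref.decref();
    }
  }
}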

You may need to use a paste website or a file-sharing website to share
all your plugin code so that people can get a look at it.  The list has
a habit of deleting attachments.

Thanks,
Shawn



Re: Error in Not Contains query

2017-04-07 Thread Mikhail Khludnev
Hello, Edwin.
The {!foo} syntax is surprising when a query string contains a space. Thus,
it's gonna be fine if there is no space in the child query. You can solve it
with referencing:
fq=-{!parent%20which=%22contentType_s:Header%22
v=$childq}&childq=matNo_s:(*88060*%20*88061*)
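Decoded, those are two separate request parameters:
fq=-{!parent which="contentType_s:Header" v=$childq}
childq=matNo_s:(*88060* *88061*)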


On 7 Apr 2017 at 13:43, "Zheng Lin Edwin Yeo" <
edwinye...@gmail.com> wrote:

> Hi,
>
> When I do the Not Contains query, I get the following error when I do it in
> the following way:
>
> *Not working*
> 1) When put the "-" sign before the {!parent which}
>
> http://localhost:8983/solr/test/select?q=contentType_s:
> Header%20AND%20date_dt:[2016-01-01T00:00:00Z%20TO%202017-
> 01-01T00:00:00Z]&fq=-{!parent%20which=%22contentType_s:
> Header%22}matNo_s:(*88060*%20*88061*)&json.facet={client_s:{
> type:terms,field:client_s,%20limit:200,%20offset:0,%
> 20mincount:1}}&facet.threads=-1&fl=null&rows=0
>
> This is the error message that I get:
> {
>   "responseHeader":{
> "zkConnected":true,
> "status":400,
> "QTime":1},
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.common.SolrException",
>   "root-error-class","org.apache.solr.parser.ParseException"],
> "msg":"org.apache.solr.search.SyntaxError: Cannot parse
> 'matNo_s:(*88060*': Encountered \"\" at line 1, column 28.\r\nWas
> expecting one of:\r\n ...\r\n ...\r\n ...\r\n
>  \"+\" ...\r\n\"-\" ...\r\n ...\r\n\"(\" ...\r\n
>  \")\" ...\r\n\"*\" ...\r\n\"^\" ...\r\n ...\r\n
>   ...\r\n ...\r\n ...\r\n
>   ...\r\n ...\r\n\"[\" ...\r\n\"{\"
> ...\r\n ...\r\n\"filter(\" ...\r\n ...\r\n
>  ",
> "code":400}}
> *Not working*
> 2) When I put the "-" sign just before the matNo
>
> http://localhost:8983/solr/test/select?q=contentType_s:
> Header%20AND%20date_dt:[2016-01-01T00:00:00Z%20TO%202017-
> 01-01T00:00:00Z]&fq={!parent%20which=%22contentType_s:
> Header%22}-matNo_s:(*88060*%20*88061*)&json.facet={client_
> s:{type:terms,field:client_s,%20limit:200,%20offset:0,%
> 20mincount:1}}&facet.threads=-1&fl=null&rows=0
>
> There is no result returned for this:
> {
>   "responseHeader":{
> "zkConnected":true,
> "status":0,
> "QTime":0},
>   "response":{"numFound":0,"start":0,"docs":[]
>   },
>   "facets":{
> "count":0}}
>
> *Working*
> I need to include the field parentId_s:* to match all query first
>
> http://localhost:8983/solr/test/select?q=contentType_s:
> Header%20AND%20date_dt:[2016-01-01T00:00:00Z%20TO%202017-
> 01-01T00:00:00Z]&fq={!parent%20which=%22contentType_s:
> Header%22}parentId_s:*
> AND -matNo_s:(88060*
> 88061*)&json.facet={client_s:{type:terms,field:client_s,%
> 20limit:200,%20offset:0,%20mincount:1}}&facet.threads=-1&fl=null&rows=0
>
> However, this is an inefficient way to get the Not Contains. By right, we
> should be able to get the results from the first two examples. Can this be
> considered a bug?
>
> I'm using Solr 6.4.2.
>
>
> Regards,
> Edwin
>


Error in Not Contains query

2017-04-07 Thread Zheng Lin Edwin Yeo
Hi,

When I do the Not Contains query, I get the following error when I do it in
the following way:

*Not working*
1) When put the "-" sign before the {!parent which}

http://localhost:8983/solr/test/select?q=contentType_s:Header%20AND%20date_dt:[2016-01-01T00:00:00Z%20TO%202017-01-01T00:00:00Z]&fq=-{!parent%20which=%22contentType_s:Header%22}matNo_s:(*88060*%20*88061*)&json.facet={client_s:{type:terms,field:client_s,%20limit:200,%20offset:0,%20mincount:1}}&facet.threads=-1&fl=null&rows=0

This is the error message that I get:
{
  "responseHeader":{
"zkConnected":true,
"status":400,
"QTime":1},
  "error":{
"metadata":[
  "error-class","org.apache.solr.common.SolrException",
  "root-error-class","org.apache.solr.parser.ParseException"],
"msg":"org.apache.solr.search.SyntaxError: Cannot parse
'matNo_s:(*88060*': Encountered \"\" at line 1, column 28.\r\nWas
expecting one of:\r\n ...\r\n ...\r\n ...\r\n
 \"+\" ...\r\n\"-\" ...\r\n ...\r\n\"(\" ...\r\n
 \")\" ...\r\n\"*\" ...\r\n\"^\" ...\r\n ...\r\n
  ...\r\n ...\r\n ...\r\n
  ...\r\n ...\r\n\"[\" ...\r\n\"{\"
...\r\n ...\r\n\"filter(\" ...\r\n ...\r\n
 ",
"code":400}}
*Not working*
2) When I put the "-" sign just before the matNo

http://localhost:8983/solr/test/select?q=contentType_s:Header%20AND%20date_dt:[2016-01-01T00:00:00Z%20TO%202017-01-01T00:00:00Z]&fq={!parent%20which=%22contentType_s:Header%22}-matNo_s:(*88060*%20*88061*)&json.facet={client_s:{type:terms,field:client_s,%20limit:200,%20offset:0,%20mincount:1}}&facet.threads=-1&fl=null&rows=0

There is no result returned for this:
{
  "responseHeader":{
"zkConnected":true,
"status":0,
"QTime":0},
  "response":{"numFound":0,"start":0,"docs":[]
  },
  "facets":{
"count":0}}

*Working*
I need to include the field parentId_s:* to match all query first

http://localhost:8983/solr/test/select?q=contentType_s:Header%20AND%20date_dt:[2016-01-01T00:00:00Z%20TO%202017-01-01T00:00:00Z]&fq={!parent%20which=%22contentType_s:Header%22}parentId_s:*
AND -matNo_s:(88060*
88061*)&json.facet={client_s:{type:terms,field:client_s,%20limit:200,%20offset:0,%20mincount:1}}&facet.threads=-1&fl=null&rows=0
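Decoded, the filter in this working case reads:
fq={!parent which="contentType_s:Header"}parentId_s:* AND -matNo_s:(88060* 88061*)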

However, this is an inefficient way to get the Not Contains. By right, we
should be able to get the results from the first two examples. Can this be
considered a bug?

I'm using Solr 6.4.2.


Regards,
Edwin


SolrIndexSearcher accumulation

2017-04-07 Thread Gerald Reinhart


Hi,

   We have some custom code that extends SearchHandler to be able to :
- do an extra request
- merge/combine the original request and the extra request results

   On Solr 5.x, our code was working very well; now with Solr 6.x we
have the following issue: the number of SolrIndexSearchers keeps
increasing (we can see them in the admin view > Plugins / Stats > Core).
As SolrIndexSearchers accumulate, we have the following issues:
   - the memory used by Solr increases => OOM after a long
period of time in production
   - some files in the index have been deleted from the file system but
the Solr JVM still holds them open => ("fake") full disk after a long
period of time in production

   We are wondering,
  - what has changed between Solr 5.x and Solr 6.x in the
management of the SolrIndexSearcher ?
  - what would be the best way, in a Solr plugin, to perform 2
queries and merge the results to a single SolrQueryResponse ?

   Thanks a lot.

Gérald, Elodie, Ludo and André





Re: Is there a way to retrieve the a term's position/offset in Solr

2017-04-07 Thread forest_soup
Thanks Rick. Unfortunately we don't have that converter, so we have to count
characters in the rich text.





null's in logging

2017-04-07 Thread Dmitry Kan
Hi,

See this in 6.4.2:

2017-04-07 07:49:04.040 [recoveryExecutor-9-thread-1-processing-x:statements]
INFO org.apache.solr.search.SolrIndexSearcher *null* - Opening
[Searcher@75f8f3c5[statements] realtime]

2017-04-07 07:49:04.054 [recoveryExecutor-9-thread-1-processing-x:statements]
INFO org.apache.solr.update.DirectUpdateHandler2 *null* - Reordered DBQs
detected.


Is this a known issue to have *null* or a misconfig on our part?


Thanks,

Dmitry

-- 
Dmitry Kan
Luke Toolbox: http://github.com/DmitryKey/luke
Blog: http://dmitrykan.blogspot.com
Twitter: http://twitter.com/dmitrykan
SemanticAnalyzer: https://semanticanalyzer.info