RE: App Studio

2017-11-01 Thread Kris Musshorn
Yes please.

-Original Message-
From: Will Hayes [mailto:w...@lucidworks.com] 
Sent: Wednesday, November 1, 2017 4:04 PM
To: solr-user@lucene.apache.org
Subject: Re: App Studio

There is a community edition of App Studio for Solr and Elasticsearch being 
released by Lucidworks in November. Drop me a line if you would like to get a 
preview release.
-wh

--
Will Hayes | CEO | Lucidworks
direct. +1.415.997.9455 | email. w...@lucidworks.com

On Wed, Nov 1, 2017 at 12:54 PM, David Hastings < hastings.recurs...@gmail.com> 
wrote:

> Hey all, at the conference it was mentioned that lucidworks would 
> release app studio as its own and free project.  is that still the case?
>



Re: logging issue

2017-04-07 Thread KRIS MUSSHORN
removing the colon crushed it. thanks


The reason I'm looking at this is that the logging screen is not showing log 
content... the last check shows the spinning wheel to the left.



Time (Local)    Level    Core    Logger    Message
No Events available
Last Check: 4/7/2017, 10:26:43 AM



Google chrome, IE 10 and Firefox all show the same.


What can I do to correct and see the logs?


Kris



> 
> On April 7, 2017 at 10:04 AM Erick Erickson <erickerick...@gmail.com> 
> wrote:
> 
> You also put a colon ':' in
> 
> FINEST, :
> 
> BTW, this will give you a _lot_ of output
> 
> Best,
> Erick
> 
>     On Fri, Apr 7, 2017 at 6:17 AM, KRIS MUSSHORN <mussho...@comcast.net> 
> wrote:
> 
> > > 
> > SOLR 5.4.1
> > 
> > log files have this entry
> > 
> > log4j:ERROR Could not find value for key log4j.appender.: file
> > log4j:ERROR Could not instantiate appender named ": file".
> > 
> > Here is my config file and the only thing i have changed is set 
> > level to FINEST in line 3. Otherwise this is the default file.
> > 
> > # Logging level
> > solr.log=logs
> > log4j.rootLogger=FINEST,: file, CONSOLE
> > 
> > log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
> > 
> > log4j.appender.CONSOLE.layout=org.apache.log4j.EnhancedPatternLayout
> > log4j.appender.CONSOLE.layout.ConversionPattern=%-4r %-5p (%t) 
> > [%X{collection} %X{shard} %X{replica} %X{core}] %c{1.} %m%n
> > 
> > #- size rotation with log cleanup.
> > log4j.appender.file=org.apache.log4j.RollingFileAppender
> > log4j.appender.file.MaxFileSize=100MB
> > log4j.appender.file.MaxBackupIndex=9
> > 
> > #- File to log to and log format
> > log4j.appender.file.File=${solr.log}/solr.log
> > log4j.appender.file.layout=org.apache.log4j.EnhancedPatternLayout
> > log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd 
> > HH:mm:ss.SSS} %-5p (%t) [%X{collection} %X{shard} %X{replica} %X{core}] 
> > %c{1.} %m\n
> > 
> > log4j.logger.org.apache.zookeeper=WARN
> > log4j.logger.org.apache.hadoop=WARN
> > 
> > # set to INFO to enable infostream log messages
> > log4j.logger.org.apache.solr.update.LoggingInfoStream=OFF
> > 
> > > 

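For reference, the rootLogger line quoted above that triggered the appender errors, and, assuming the stray colon was the only problem (as the fix at the top of this message indicates), the corrected form:

# broken: log4j looks for an appender literally named ": file"
log4j.rootLogger=FINEST,: file, CONSOLE
# fixed
log4j.rootLogger=FINEST, file, CONSOLE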

logging issue

2017-04-07 Thread KRIS MUSSHORN
SOLR 5.4.1 

log files have this entry


log4j:ERROR Could not find value for key log4j.appender.: file
log4j:ERROR Could not instantiate appender named ": file".


Here is my config file and the only thing I have changed is set the level to FINEST 
in line 3. Otherwise this is the default file.

# Logging level
solr.log=logs
log4j.rootLogger=FINEST,: file, CONSOLE

log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender

log4j.appender.CONSOLE.layout=org.apache.log4j.EnhancedPatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%-4r %-5p (%t) [%X{collection} 
%X{shard} %X{replica} %X{core}] %c{1.} %m%n

#- size rotation with log cleanup.
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.MaxFileSize=100MB
log4j.appender.file.MaxBackupIndex=9

#- File to log to and log format
log4j.appender.file.File=${solr.log}/solr.log
log4j.appender.file.layout=org.apache.log4j.EnhancedPatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss.SSS} %-5p 
(%t) [%X{collection} %X{shard} %X{replica} %X{core}] %c{1.} %m\n

log4j.logger.org.apache.zookeeper=WARN
log4j.logger.org.apache.hadoop=WARN

# set to INFO to enable infostream log messages
log4j.logger.org.apache.solr.update.LoggingInfoStream=OFF



Re: Query structure

2017-02-01 Thread KRIS MUSSHORN
This was the solution. 
Thank you! 

- Original Message -

From: "Maciej Ł. PCSS" <labed...@man.poznan.pl> 
To: solr-user@lucene.apache.org 
Sent: Wednesday, February 1, 2017 7:57:05 AM 
Subject: Re: Query structure 

You should be able to put 'facetMetatagDatePrefix4:2015 OR 
facetMetatagDatePrefix4:2016' into the filtering query. 

Maciej 


W dniu 01.02.2017 o 13:43, KRIS MUSSHORN pisze: 
> I really need some guidance on this query structure issue. 
> 
> I've got to get this solved today for my employer. 
> 
> "Help me Obi-Wan. You're my only hope" 
> 
> K 
> 
> ----- Original Message - 
> 
> From: "KRIS MUSSHORN" <mussho...@comcast.net> 
> To: solr-user@lucene.apache.org 
> Sent: Tuesday, January 31, 2017 12:31:13 PM 
> Subject: Query structure 
> 
> I have a defaultSearchField and facetMetatagDatePrefix4 fields that are 
> correctly populated with values in SOLR 5.4.1. 
> 
> If I execute the query q=defaultSearchField:this text 
> I get the 7 docs that match. 
> There are three docs in 2015 and one doc in 2016 per the facet counts in the 
> results. 
> If I then q=defaultSearchField:this text AND facetMetatagDatePrefix4:2015 I 
> get the correct 3 documents. 
> 
> How would I structure my query to get defaultSearchField:this text AND 
> (facetMetatagDatePrefix4:2015 OR facetMetatagDatePrefix4:2016) and return 
> only 4 docs? 
> 
> TIA, 
> Kris 
> 
> 
> 

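A minimal sketch of the request implementing Maciej's suggestion, with the year filter passed as fq (host and core name are placeholders; field names are taken from the thread):

curl 'http://localhost:8983/solr/mycore/select?q=defaultSearchField:(this+text)&fq=facetMetatagDatePrefix4:2015+OR+facetMetatagDatePrefix4:2016&facet=true&facet.field=facetMetatagDatePrefix4&wt=json'

Because fq is cached separately from q, the same year filter can be reused across different text queries.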



Re: Query structure

2017-02-01 Thread KRIS MUSSHORN
I really need some guidance on this query structure issue. 

I've got to get this solved today for my employer. 

"Help me Obi-Wan. You're my only hope" 

K 

- Original Message -

From: "KRIS MUSSHORN" <mussho...@comcast.net> 
To: solr-user@lucene.apache.org 
Sent: Tuesday, January 31, 2017 12:31:13 PM 
Subject: Query structure 

I have a defaultSearchField and facetMetatagDatePrefix4 fields that are 
correctly populated with values in SOLR 5.4.1. 

If I execute the query q=defaultSearchField:this text 
I get the 7 docs that match. 
There are three docs in 2015 and one doc in 2016 per the facet counts in the 
results. 
If I then q=defaultSearchField:this text AND facetMetatagDatePrefix4:2015 I get 
the correct 3 documents. 

How would I structure my query to get defaultSearchField:this text AND 
(facetMetatagDatePrefix4:2015 OR facetMetatagDatePrefix4:2016) and return only 
4 docs? 

TIA, 
Kris 




Query structure

2017-01-31 Thread KRIS MUSSHORN
I have a defaultSearchField and facetMetatagDatePrefix4 fields that are 
correctly populated with values in SOLR 5.4.1. 

If I execute the query q=defaultSearchField:this text 
I get the 7 docs that match. 
There are three docs in 2015 and one doc in 2016 per the facet counts in the 
results. 
If I then q=defaultSearchField:this text AND facetMetatagDatePrefix4:2015 I get 
the correct 3 documents. 

How would I structure my query to get defaultSearchField:this text AND 
(facetMetatagDatePrefix4:2015 OR facetMetatagDatePrefix4:2016) and return only 
4 docs? 

TIA, 
Kris 



Re: How to alter the facet query limit default

2017-01-30 Thread KRIS MUSSHORN

Alexandre.. 

Once I converted to field names without dots it worked. Thank you. 

Kris 
- Original Message -

From: "Alexandre Rafalovitch" <arafa...@gmail.com> 
To: "solr-user" <solr-user@lucene.apache.org> 
Sent: Thursday, January 26, 2017 11:40:49 AM 
Subject: Re: How to alter the facet query limit default 

facet.limit? 
f.<fieldName>.facet.limit? (not sure how that would work with a field 
name that contains dots) 

Docs are at: https://cwiki.apache.org/confluence/display/solr/Faceting 

Regards, 
Alex. 
 
http://www.solr-start.com/ - Resources for Solr users, new and experienced 


On 26 January 2017 at 10:36, KRIS MUSSHORN <mussho...@comcast.net> wrote: 
> SOLR 5.4.1 i am running a query with multiple facet fields. 
> _snip_ 
> select?q=*%3A*=metatag.date.prefix4+DESC=7910=metatag.date.prefix7=json=true=true=metatag.date.prefix7
>  =metatag.date.prefix4=metatag.doctype 
> 
> field metatag.date.prefix7 has way more facets than the default of 100. 
> 
> How would I set up solr, or modify my query, to ensure that the facets return 
> all values. 
> 
> TIA, 
> 
> Kris 
> 

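A hedged sketch of the per-field override Alexandre mentions, written for a field renamed without dots (the underscored field name is hypothetical; facet.limit=-1 removes the cap):

select?q=*:*&facet=true&facet.field=facet_metatag_date_prefix7&f.facet_metatag_date_prefix7.facet.limit=-1&wt=json

The global form facet.limit=-1 works the same way when every facet field should be uncapped.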


Re: Documents issue

2017-01-30 Thread KRIS MUSSHORN
Alessandro et al. 

I tried this with no change in the results. 
"Other" is still missing when doctype is empty.. 
No dynamicFields are involved. 

 
 

 

- Original Message -

From: "alessandro.benedetti"  
To: solr-user@lucene.apache.org 
Sent: Friday, January 27, 2017 4:07:19 AM 
Subject: Re: Documents issue 

I may be wrong and don't have time to check the code in detail now, but I 
would say you need to define the default in the destination field as well. 

The copy field should take in input the plain content of the field ( which 
is null) and then pass that content to the destination field. 

Properties and attributes of the source field should not be considered at 
copy field time. 
So what happens is simply that you pass null content to the destination 
field. 

If you define the default in the destination field, it should work as 
expected. 

N.B. it's a shot in the dark, not sure if you experienced a different 
behavior previously. 

Cheers 



-- 
View this message in context: 
http://lucene.472066.n3.nabble.com/How-to-alter-the-facet-query-limit-default-tp4315939p4317514.html
 
Sent from the Solr - User mailing list archive at Nabble.com. 

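For reference, a sketch of the shape Alessandro is describing, with the default declared on the destination field as well as the source (field and type names are hypothetical, since the actual schema lines were stripped from the archived messages):

<field name="metatag.doctype" type="string" indexed="true" stored="true" default="Other"/>
<field name="facet_metatag_doctype" type="string" indexed="true" stored="true" default="Other"/>
<copyField source="metatag.doctype" dest="facet_metatag_doctype"/>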


Documents issue

2017-01-26 Thread KRIS MUSSHORN

Running the latest crawl from Nutch to SOLR 5.4.1 it seems that my copy fields 
do not work as expected anymore. 

 
 
 


Why would copyField ignore the default all of a sudden? 

I've not made any significant changes to SOLR and none at all to nutch. 
{
  "response":{"numFound":699,"start":0,"docs":[
  {
"metatag.doctype":"Articles",
"facet_metatag_doctype":"Articles"}, 
_snipped a bunch of articles _ 
{
"metatag.doctype":"Dispatches",
"facet_metatag_doctype":"Dispatches"}, 

_snipped a bunch of Dispatches_
  
  {
"metatag.doctype":"Other"},
  {
"metatag.doctype":"Other"},
  {
"metatag.doctype":"Other"},
  {
"metatag.doctype":"Other"},
  {
"metatag.doctype":"Other"} 

_snipped a bunch of Other_
  ]
  },
  "facet_counts":{
"facet_queries":{},
"facet_fields":{
  "facet_metatag_doctype":[
"Dispatches",38,
"Articles",33]},
"facet_dates":{},
"facet_ranges":{},
"facet_intervals":{},
"facet_heatmaps":{}}} 



Re: How to alter the facet query limit default

2017-01-26 Thread KRIS MUSSHORN

Alexandre, 
Thanks. 
I will refactor my schema to eliminate the period-separated values in the field 
names and try your suggestion. 
I'll let you know how it goes. 

Kris 
- Original Message -

From: "Alexandre Rafalovitch" <arafa...@gmail.com> 
To: "solr-user" <solr-user@lucene.apache.org> 
Sent: Thursday, January 26, 2017 11:40:49 AM 
Subject: Re: How to alter the facet query limit default 

facet.limit? 
f.<fieldName>.facet.limit? (not sure how that would work with a field 
name that contains dots) 

Docs are at: https://cwiki.apache.org/confluence/display/solr/Faceting 

Regards, 
Alex. 
 
http://www.solr-start.com/ - Resources for Solr users, new and experienced 


On 26 January 2017 at 10:36, KRIS MUSSHORN <mussho...@comcast.net> wrote: 
> SOLR 5.4.1 i am running a query with multiple facet fields. 
> _snip_ 
> select?q=*%3A*=metatag.date.prefix4+DESC=7910=metatag.date.prefix7=json=true=true=metatag.date.prefix7
>  =metatag.date.prefix4=metatag.doctype 
> 
> field metatag.date.prefix7 has way more facets than the default of 100. 
> 
> How would I set up solr, or modify my query, to ensure that the facets return 
> all values. 
> 
> TIA, 
> 
> Kris 
> 



How to alter the facet query limit default

2017-01-26 Thread KRIS MUSSHORN
SOLR 5.4.1 i am running a query with multiple facet fields. 
_snip_ 
select?q=*%3A*=metatag.date.prefix4+DESC=7910=metatag.date.prefix7=json=true=true=metatag.date.prefix7
 =metatag.date.prefix4=metatag.doctype 

field metatag.date.prefix7 has way more facets than the default of 100. 

How would I set up solr, or modify my query, to ensure that the facets return 
all values. 

TIA, 

Kris 



RE: reset version number

2017-01-10 Thread Kris Musshorn
Obviously deleting and rebuilding the core will work but is there another way?
K

-Original Message-
From: KRIS MUSSHORN [mailto:mussho...@comcast.net] 
Sent: Tuesday, January 10, 2017 12:00 PM
To: solr-user@lucene.apache.org
Subject: reset version number

SOLR 5.4.1 web admin interface has a version number in the selected core's 
overview. 
How does one reset this number? 

Kris 



reset version number

2017-01-10 Thread KRIS MUSSHORN
SOLR 5.4.1 web admin interface has a version number in the selected core's 
overview. 
How does one reset this number? 

Kris 


Re: update operation

2016-12-23 Thread KRIS MUSSHORN
I set the Solr logger to FINEST and reran the update script. 
It produced reams of data but no errors, just a bunch of DEBUG lines. 
Would you like me to post it? 

K 

- Original Message -

From: "Erick Erickson" <erickerick...@gmail.com> 
To: "solr-user" <solr-user@lucene.apache.org> 
Sent: Friday, December 23, 2016 1:22:37 PM 
Subject: Re: update operation 

OK, next thing. Find your Solr log and tail -f on it while you send 
your doc. That answers what Solr actually sees vs. what you think 
you're sending it ;). If anything. 

Best, 
Erick 

On Fri, Dec 23, 2016 at 10:15 AM, KRIS MUSSHORN <mussho...@comcast.net> wrote: 
> oops wrong thread in subject 
> 
> - Original Message - 
> 
> From: "KRIS MUSSHORN" <mussho...@comcast.net> 
> To: solr-user@lucene.apache.org 
> Sent: Friday, December 23, 2016 1:02:09 PM 
> Subject: Re: copying all fields to one specific single value field 
> 
> Well, I guess it's still not working. 
> I'm not getting an error but I'm not getting an update either... 
> 
>  
>  
> 
> My BASH script: 
> $UUID contains a valid, existing UUID in SOLR. 
> $CURL_RESULT is a valid UTC timestamp 
> 
> 
> 
> curl -X POST -H 'Content-Type: application/json' 
> 'https://snip/solr/TEST_CORE/update/json/docs' --data-binary 
> '{"uuid":"'$UUID'","metatag.date.single":{ "set":"'$CURL_RESULT'"}}' 
> 
> the previous curl is immediately followed by... 
> 
> curl -s 'https://snip/solr/TEST_CORE/update?commit=true' 
> 
> 
> 
> 
> Thank you all for your incredible patience. 
> 
> K 



Re: update operation

2016-12-23 Thread KRIS MUSSHORN
oops wrong thread in subject 

- Original Message -

From: "KRIS MUSSHORN" <mussho...@comcast.net> 
To: solr-user@lucene.apache.org 
Sent: Friday, December 23, 2016 1:02:09 PM 
Subject: Re: copying all fields to one specific single value field 

Well, I guess it's still not working. 
I'm not getting an error but I'm not getting an update either... 

 
 

My BASH script: 
$UUID contains a valid, existing UUID in SOLR. 
$CURL_RESULT is a valid UTC timestamp 



curl -X POST -H 'Content-Type: application/json' 
'https://snip/solr/TEST_CORE/update/json/docs' --data-binary 
'{"uuid":"'$UUID'","metatag.date.single":{ "set":"'$CURL_RESULT'"}}' 

the previous curl is immediately followed by... 

curl -s 'https://snip/solr/TEST_CORE/update?commit=true' 




Thank you all for your incredible patience. 

K 


Re: copying all fields to one specific single value field

2016-12-23 Thread KRIS MUSSHORN
Well, I guess it's still not working. 
I'm not getting an error but I'm not getting an update either... 

 
 

My BASH script: 
$UUID contains a valid, existing UUID in SOLR. 
$CURL_RESULT is a valid UTC timestamp 



curl -X POST -H 'Content-Type: application/json' 
'https://snip/solr/TEST_CORE/update/json/docs' --data-binary 
'{"uuid":"'$UUID'","metatag.date.single":{ "set":"'$CURL_RESULT'"}}' 

the previous curl is immediately followed by... 

curl -s 'https://snip/solr/TEST_CORE/update?commit=true' 




Thank you all for your incredible patience. 

K 

- Original Message -

From: "KRIS MUSSHORN" <mussho...@comcast.net> 
To: solr-user@lucene.apache.org 
Sent: Friday, December 23, 2016 10:14:59 AM 
Subject: Re: copying all fields to one specific single value field 

Work backwards and look at the type definitions for the fields named content, title, 
author, and body. One of them has a type defined as multiValued. 

- Original Message - 

From: "武井宜行" <nta...@sios.com> 
To: solr-user@lucene.apache.org 
Sent: Friday, December 23, 2016 10:05:01 AM 
Subject: copying all fields to one specific single value field 

Hi,all 

I would like to copy all fields to one specific single-value field. 
The reason is that I must use a facet query, and I think that 
the field used for the facet query needs to be single-valued, not multi-valued. 

In order to achieve this, I've tried to use copyField in schema.xml, but 
an error occurred. 

The schema is as below. 
※I'd like to use the facet query on the "suggest_facet" field. 

 
 
 
 
 
 
 
 

When I tried to index, the following error occurred. 

2016-12-22 03:47:38.139 WARN (coreLoadExecutor-6-thread-3) [ ] 
o.a.s.s.IndexSchema Field suggest_facet is not multivalued and destination 
for multiple copyFields (6) 

How do I solve this in order to copy all fields to one specific single 
value field? 




Re: copying all fields to one specific single value field

2016-12-23 Thread KRIS MUSSHORN
Work backwards and look at the type definitions for the fields named content, title, 
author, and body. One of them has a type defined as multiValued. 

- Original Message -

From: "武井宜行"  
To: solr-user@lucene.apache.org 
Sent: Friday, December 23, 2016 10:05:01 AM 
Subject: copying all fields to one specific single value field 

Hi,all 

I would like to copy all fields to one specific single-value field. 
The reason is that I must use a facet query, and I think that 
the field used for the facet query needs to be single-valued, not multi-valued. 

In order to achieve this, I've tried to use copyField in schema.xml, but 
an error occurred. 

The schema is as below. 
※I'd like to use the facet query on the "suggest_facet" field. 

 
 
 
 
 
 
 
 

When I tried to index, the following error occurred. 

2016-12-22 03:47:38.139 WARN (coreLoadExecutor-6-thread-3) [ ] 
o.a.s.s.IndexSchema Field suggest_facet is not multivalued and destination 
for multiple copyFields (6) 

How do I solve this in order to copy all fields to one specific single 
value field? 
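For context, that warning means the destination of several copyField directives has to be declared multiValued; a sketch of what such a destination could look like (hypothetical names and types, since the poster's schema lines were stripped from the archive):

<field name="suggest_facet" type="string" indexed="true" stored="true" multiValued="true"/>
<copyField source="content" dest="suggest_facet"/>
<copyField source="title" dest="suggest_facet"/>

Whether a multiValued destination still fits the poster's plan is a separate question; faceting itself does work on multiValued fields.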



Re: update operation

2016-12-22 Thread KRIS MUSSHORN
How would I explicitly commit? 
Sorry for the silly questions but I'm pretty fried. 

- Original Message -

From: "Erick Erickson" <erickerick...@gmail.com> 
To: "solr-user" <solr-user@lucene.apache.org> 
Sent: Thursday, December 22, 2016 2:49:05 PM 
Subject: Re: update operation 

Kris: 

Maybe too simple, but did you commit afterwards? 

On Thu, Dec 22, 2016 at 10:45 AM, Shawn Heisey <apa...@elyograg.org> wrote: 
> On 12/22/2016 10:18 AM, KRIS MUSSHORN wrote: 
>> UPDATE_RESULT=$( curl -s -X POST -H 'Content-Type: text/json' 
>> "https://snip/solr/TEST_CORE/update/json/docs; --data-binary 
>> '{"id":"*'$DOC_ID'","metatag.date.single":{"set":"$VAL"}}') 
>> 
>> was the only version that did not throw an error but did not update the 
>> document. 
> 
> I think that will put a literal "$VAL" in the output, rather than the 
> value of the VAL variable. It will also put an asterisk before your 
> DOC_ID ... is that what you wanted it to do? If an asterisk is not part 
> of your id value, that might be why it's not working. 
> 
> Answering the earlier email: Your command choices are add, delete, 
> commit, and optimize. An update is just an add that deletes the original. 
> 
> Thanks, 
> Shawn 
> 

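For what it's worth, an explicit commit is just another call to the update handler, e.g. (host and core name are placeholders, matching the pattern used elsewhere in this thread):

curl -s 'https://snip/solr/TEST_CORE/update?commit=true'

Alternatively, commit=true can be appended to the update request itself so the add and the commit happen in one call.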


Re: update operation

2016-12-22 Thread KRIS MUSSHORN
Shawn, 

Running: 


UPDATE_RESULT=$( curl -s -X POST -H 'Content-Type: text/json' 
"https://snip/solr/TEST_CORE/update/json/docs" --data-binary 
'{"id":"*'$DOC_ID'","metatag.date.single":{"set":"$VAL"}}') 

was the only version that did not throw an error but did not update the 
document. 


It returned: 

{"responseHeader":{"status":0,"QTime":1}} 




Where do I go from here? 




K 





- Original Message -

From: "Shawn Heisey" <apa...@elyograg.org> 
To: solr-user@lucene.apache.org 
Sent: Thursday, December 22, 2016 11:00:21 AM 
Subject: Re: update operation 

On 12/22/2016 8:45 AM, KRIS MUSSHORN wrote: 
> Here is the bash line: 
> 
> UPDATE_RESULT=$( curl -s "https://snip/solr/TEST_CORE/update?=true" 
> --data-binary '{"id":"$DOC_ID","metatag.date.single" :{"set":"$VAL"}}') 

One thing I know you need for sure with the "/update" handler is the 
Content-Type header. Without it, Solr will not know that you are 
sending JSON. 

https://cwiki.apache.org/confluence/display/solr/Uploading+Data+with+Index+Handlers#UploadingDatawithIndexHandlers-JSONFormattedIndexUpdates
 

There are some alternate update URL paths that assume JSON. See below. 
Because your JSON does not include the "add" command, but instead has a 
bare document, you *might* need to send to /update/json/docs instead of 
just /update or even /update/json. Or you can restructure it to use the 
"add" command. 

https://cwiki.apache.org/confluence/display/solr/Uploading+Data+with+Index+Handlers#UploadingDatawithIndexHandlers-JSONUpdateConveniencePaths
 

Thanks, 
Shawn 

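Putting Shawn's suggestions together, a minimal sketch (not the poster's exact script) of an atomic update sent with the Content-Type header, the document expressed as a JSON array, and an inline commit; host, core, and field names follow the ones used in this thread:

curl -s -X POST -H 'Content-Type: application/json' \
  'https://snip/solr/TEST_CORE/update?commit=true' \
  --data-binary '[{"id":"'$DOC_ID'","metatag.date.single":{"set":"'$VAL'"}}]'

The single quotes are closed around $DOC_ID and $VAL so the shell expands the variables instead of sending them literally.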



Re: update operation

2016-12-22 Thread KRIS MUSSHORN

Shawn, 

Perhaps I misunderstood the documentation, but when you include the add clause 
does it not create an entirely new document? 

K 


- Original Message -

From: "Shawn Heisey" <apa...@elyograg.org> 
To: solr-user@lucene.apache.org 
Sent: Thursday, December 22, 2016 11:00:21 AM 
Subject: Re: update operation 

On 12/22/2016 8:45 AM, KRIS MUSSHORN wrote: 
> Here is the bash line: 
> 
> UPDATE_RESULT=$( curl -s "https://snip/solr/TEST_CORE/update?=true" 
> --data-binary '{"id":"$DOC_ID","metatag.date.single" :{"set":"$VAL"}}') 

One thing I know you need for sure with the "/update" handler is the 
Content-Type header. Without it, Solr will not know that you are 
sending JSON. 

https://cwiki.apache.org/confluence/display/solr/Uploading+Data+with+Index+Handlers#UploadingDatawithIndexHandlers-JSONFormattedIndexUpdates
 

There are some alternate update URL paths that assume JSON. See below. 
Because your JSON does not include the "add" command, but instead has a 
bare document, you *might* need to send to /update/json/docs instead of 
just /update or even /update/json. Or you can restructure it to use the 
"add" command. 

https://cwiki.apache.org/confluence/display/solr/Uploading+Data+with+Index+Handlers#UploadingDatawithIndexHandlers-JSONUpdateConveniencePaths
 

Thanks, 
Shawn 




update operation

2016-12-22 Thread KRIS MUSSHORN
Merry Christmas everyone, 

I'm using solr 5.4.1 and writing a bash script to update the value in a field 
of a single document in solr. 

Here is the bash line: 



UPDATE_RESULT=$( curl -s "https://snip/solr/TEST_CORE/update?=true" 
--data-binary '{"id":"$DOC_ID","metatag.date.single" :{"set":"$VAL"}}') 




$DOC_ID is a variable in the script that contains the document UID. I have 
confirmed that this value can be found in the documents with a query. 

$VAL is the value to set. I have validated that the field I'm trying to set will 
accept the value. 

metatag.date.single is the field I want to set and it does NOT yet exist in 
Solr but is defined in schema.xml. 




When I run the line in bash I get: 

{"responseHeader":{"status":400,"QTime":0},"error":{"msg":"Unknown command 'id' 
at [5]","code":400}} 




the UID field in SOLR is named id. 




What am I doing wrong? 




Is there a better way to handle this? 




TIA, 




Kris 


Re: regex-urlfilter help

2016-12-12 Thread KRIS MUSSHORN

sorry my mistake.. sent to wrong list. 
  
- Original Message -

From: "Shawn Heisey" <apa...@elyograg.org> 
To: solr-user@lucene.apache.org 
Sent: Monday, December 12, 2016 2:36:26 PM 
Subject: Re: regex-urlfilter help 

On 12/12/2016 12:19 PM, KRIS MUSSHORN wrote: 
> I'm using nutch 1.12 and Solr 5.4.1. 
>   
> Crawling a website and indexing into nutch. 
>   
> AFAIK the regex-urlfilter.txt file will cause content to not be crawled.. 
>   
> what if I have 
> https:///inside/default.cfm  as my seed url... 
> I want the links on this page to be crawled and indexed but I do not want 
> this page to be indexed into SOLR. 
> How would I set this up? 
>   
> I'm thinking that the regex-urlfilter.txt file is NOT the right place. 

These sound like questions about how to configure Nutch.  This is a Solr 
mailing list.  Nutch is a completely separate Apache product with its 
own mailing list.  Although there may be people here who do use Nutch, 
it's not the purpose of this list.  Please use support resources for Nutch. 

http://nutch.apache.org/mailing_lists.html 

I'm reasonably certain that this cannot be controlled by Solr's 
configuration.  Solr will index anything that is sent to it, so the 
choice of what to send or not send in this situation will be decided by 
Nutch. 

Thanks, 
Shawn 




error diagnosis help.

2016-12-12 Thread KRIS MUSSHORN
I've scoured my Nutch and Solr config files and I can't find any cause. 
Suggestions? 
Monday, December 12, 2016 2:37:13 PM  ERROR  null  RequestHandlerBase  
org.apache.solr.common.SolrException: Unexpected character '&' (code 38) in 
epilog; expected '<' 
org.apache.solr.common.SolrException: Unexpected character '&' (code 38) in 
epilog; expected '<'
 at [row,col {unknown-source}]: [1,36]
at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:180)
at 
org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:95)
at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:70)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2073)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:658)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:457)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:223)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:499)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at 
org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745) 



regex-urlfilter help

2016-12-12 Thread KRIS MUSSHORN
I'm using nutch 1.12 and Solr 5.4.1. 
  
Crawling a website and indexing into nutch. 
  
AFAIK the regex-urlfilter.txt file will cause content to not be crawled.. 
  
what if I have 
https:///inside/default.cfm  as my seed url... 
I want the links on this page to be crawled and indexed but I do not want this 
page to be indexed into SOLR. 
How would I set this up? 
  
I'm thinking that the regex-urlfilter.txt file is NOT the right place. 
  
TIA 
Kris 


RE: prefix query help

2016-12-08 Thread Kris Musshorn
I think this will work. I'll try it tomorrow and let you know.
Thanks for the help Eric and Shawn
Kris

-Original Message-
From: Erik Hatcher [mailto:erik.hatc...@gmail.com] 
Sent: Thursday, December 8, 2016 2:43 PM
To: solr-user@lucene.apache.org
Subject: Re: prefix query help

It’s hard to tell how _exact_ to be here, but if you’re indexing those strings 
and your queries are literally always yyyy-MM, then do the truncation of the 
actual data into that format or via analysis techniques to index only the 
yyyy-MM piece of the incoming string.  

But given what you’ve got so far, using what the prefix examples I provided 
below, your two queries would be this:

   q={!prefix f=metatag.date v=‘2016-06'}

and

   q=({!prefix f=metatag.date v=‘2016-06’} OR {!prefix f=metatag.date 
v=‘2014-04’} )

Does that work for you?

It really should work to do this q=metadata.date:(2016-06* OR 2014-04*) as 
you’ve got it, but you said that sort of thing wasn’t working (debug out would 
help suss that issue out).

If you did index those strings cleaner as yyyy-MM to accommodate the types of 
query you’ve shown then you could do q=metadata.date:(2016-06 OR 2014-04), or 
q={!terms f=metadata.date}2016-06,2014-04

Erik




> On Dec 8, 2016, at 11:34 AM, KRIS MUSSHORN <mussho...@comcast.net> wrote:
> 
> yes I did attach rather than paste sorry. 
>   
> OK, here's an actual, truncated example of the metatag.date field contents in 
> solr. 
> NONE-NN-NN is the default setting. 
>   
> doc 1 
> " metatag.date ": [ 
>   "2016-06-15T14:51:04Z" ,
>   "2016-06-15T14:51:04Z" 
> ] 
>   
> doc 2 
> " metatag.date ": [ 
>   "2016-06-15" 
> ] 
> doc 3 
> " metatag.date ": [ 
>   "NONE-NN-NN" 
> ] 
> doc 4 
> " metatag.date ": [ 
>   "yyyy-mm-dd" 
> ] 
>   
> doc 5 
> " metatag.date ": [ 
>   "2016-07-06" 
> ] 
> 
> doc 6 
> " metatag.date ": [ 
>   "2014-04-15T14:51:06Z" , 
>   "2014-04-15T14:51:06Z" 
> ] 
>   
> q=2016-06 should return doc 2 and 1 
> q=2016-06 OR 2014-04 should return docs 1, 2 and 6 
>   
> Yes, I know it's wonky but it's what I have to deal with until the content is 
> cleaned up. 
> I can't use a date type.. that would make my life too easy. 
>   
> TIA again 
> Kris 
> 
> - Original Message -
> 
> From: "Erik Hatcher" <erik.hatc...@gmail.com> 
> To: solr-user@lucene.apache.org 
> Sent: Thursday, December 8, 2016 12:36:26 PM 
> Subject: Re: prefix query help 
> 
> Kris - 
> 
> To chain multiple prefix queries together: 
> 
> q=({!prefix f=field1 v=‘prefix1'} {!prefix f=field2 v=‘prefix2’}) 
> 
> The leading paren is needed to ensure it’s being parsed with the lucene 
> qparser (be sure not to have defType set, or a variant would be needed) and 
> that allows multiple {!…} expressions to be parsed.  The outside-the-curlys 
> value for the prefix shouldn’t be attempted with multiples, so the `v` is the 
> way to go, either inline or $referenced. 
> 
> If you do have defType set, say to edismax, then do something like this 
> instead: 
> q={!lucene v=$prefixed_queries} 
> prefixed_queries={!prefix f=field1 v='prefix1'} {!prefix f=field2 v='prefix2'} 
> // I don’t think parens are needed with prefixed_queries, but maybe.  
>  
> 
> debug=query (or debug=true) is your friend - see how things are parsed.  I 
> presume in your example that didn’t work that the dash didn’t work as you 
> expected?   or… not sure.  What’s the parsed_query output in debug on that 
> one? 
> 
> Erik 
> 
> p.s. did you really just send a Word doc to the list that could have been 
> inlined in text?  :)   
> 
> 
> 
>> On Dec 8, 2016, at 7:18 AM, KRIS MUSSHORN <mussho...@comcast.net> wrote: 
>> 
>> Im indexing data from Nutch into SOLR 5.4.1. 
>> I've got a date metatag that I have to store as text type because the data 
>> stinks. 
>> It's stored in SOLR as field metatag.date. 
>> At the source the dates are formatted (when they are entered correctly ) as 
>> YYYY-MM-DD 
>>   
>> q=metatag.date:2016-01* does not produce the correct results and returns 
>> undesirable matches, 2016-05-01 etc. as an example. 
>> q={!prefix f=metatag.date}2016-01 gives me exactly what I want for one 
>> month/year. 
>>   
>> My question is how do I chain n prefix queries together? 
>> i.e. 
>> I want all docs where metatag.date prefix is 2016-01 or 2016-07 or 2016-10 
>>   
>> TIA, 
>> Kris 
>>   
> 
> 



Re: prefix query help

2016-12-08 Thread KRIS MUSSHORN
yes I did attach rather than paste sorry. 
  
OK, here's an actual, truncated example of the metatag.date field contents in 
solr. 
NONE-NN-NN is the default setting. 
  
doc 1 
" metatag.date ": [ 
  "2016-06-15T14:51:04Z" , 
  "2016-06-15T14:51:04Z" 
    ] 
  
doc 2 
" metatag.date ": [ 
  "2016-06-15" 
    ] 
doc 3 
" metatag.date ": [ 
  "NONE-NN-NN" 
    ] 
doc 4 
" metatag.date ": [ 
  "yyyy-mm-dd" 
    ] 
  
doc 5 
" metatag.date ": [ 
  "2016-07-06" 
    ] 

doc 6 
" metatag.date ": [ 
  "2014-04-15T14:51:06Z" , 
  "2014-04-15T14:51:06Z" 
    ] 
  
q=2016-06 should return doc 2 and 1 
q=2016-06 OR 2014-04 should return docs 1, 2 and 6 
  
Yes, I know it's wonky but it's what I have to deal with until the content is 
cleaned up. 
I can't use a date type.. that would make my life too easy. 
  
TIA again 
Kris 

- Original Message -

From: "Erik Hatcher" <erik.hatc...@gmail.com> 
To: solr-user@lucene.apache.org 
Sent: Thursday, December 8, 2016 12:36:26 PM 
Subject: Re: prefix query help 

Kris - 

To chain multiple prefix queries together: 

    q=({!prefix f=field1 v=‘prefix1'} {!prefix f=field2 v=‘prefix2’}) 

The leading paren is needed to ensure it’s being parsed with the lucene qparser 
(be sure not to have defType set, or a variant would be needed) and that allows 
multiple {!…} expressions to be parsed.  The outside-the-curlys value for the 
prefix shouldn’t be attempted with multiples, so the `v` is the way to go, 
either inline or $referenced. 

If you do have defType set, say to edismax, then do something like this 
instead: 
    q={!lucene v=$prefixed_queries} 
    prefixed_queries={!prefix f=field1 v='prefix1'} {!prefix f=field2 v='prefix2'} 
       // I don’t think parens are needed with prefixed_queries, but maybe.   

debug=query (or debug=true) is your friend - see how things are parsed.  I 
presume in your example that didn’t work that the dash didn’t work as you 
expected?   or… not sure.  What’s the parsed_query output in debug on that one? 

Erik 

p.s. did you really just send a Word doc to the list that could have been 
inlined in text?  :)   



> On Dec 8, 2016, at 7:18 AM, KRIS MUSSHORN <mussho...@comcast.net> wrote: 
> 
> Im indexing data from Nutch into SOLR 5.4.1. 
> I've got a date metatag that I have to store as text type because the data 
> stinks. 
> It's stored in SOLR as field metatag.date. 
> At the source the dates are formatted (when they are entered correctly ) as 
> YYYY-MM-DD 
>   
> q=metatag.date:2016-01* does not produce the correct results and returns 
> undesirable matches, 2016-05-01 etc. as an example. 
> q={!prefix f=metatag.date}2016-01 gives me exactly what I want for one 
> month/year. 
>   
> My question is how do I chain n prefix queries together? 
> i.e. 
> I want all docs where metatag.date prefix is 2016-01 or 2016-07 or 2016-10 
>   
> TIA, 
> Kris 
>   




Re: prefix query help

2016-12-08 Thread KRIS MUSSHORN

Here is how I have the field defined... see attachment. 
  
  
- Original Message -

From: "Erick Erickson" <erickerick...@gmail.com> 
To: "solr-user" <solr-user@lucene.apache.org> 
Sent: Thursday, December 8, 2016 10:44:08 AM 
Subject: Re: prefix query help 

You'd probably be better off indexing it as a "string" type given your 
expectations. Depending on the analysis chain (do take a look at 
admin/analysis for the field in question) the tokenization can be tricky 
to get right. 

Best, 
Erick 

On Thu, Dec 8, 2016 at 7:18 AM, KRIS MUSSHORN <mussho...@comcast.net> wrote: 
> Im indexing data from Nutch into SOLR 5.4.1. 
> I've got a date metatag that I have to store as text type because the data 
> stinks. 
> It's stored in SOLR as field metatag.date. 
> At the source the dates are formatted (when they are entered correctly ) as 
> YYYY-MM-DD 
> 
> q=metatag.date:2016-01* does not produce the correct results and returns 
> undesirable matches, 2016-05-01 etc. as an example. 
> q={!prefix f=metatag.date}2016-01 gives me exactly what I want for one 
> month/year. 
> 
> My question is how do I chain n prefix queries together? 
> i.e. 
> I want all docs where metatag.date prefix is 2016-01 or 2016-07 or 2016-10 
> 
> TIA, 
> Kris 
> 



field name.docx
Description: MS-Word 2007 document


prefix query help

2016-12-08 Thread KRIS MUSSHORN
I'm indexing data from Nutch into SOLR 5.4.1. 
I've got a date metatag that I have to store as text type because the data 
stinks. 
It's stored in SOLR as field metatag.date. 
At the source the dates are formatted (when they are entered correctly ) as 
YYYY-MM-DD 
  
q=metatag.date:2016-01* does not produce the correct results and returns 
undesirable matches, 2016-05-01 etc. as an example. 
q={!prefix f=metatag.date}2016-01 gives me exactly what I want for one 
month/year. 
  
My question is how do I chain n prefix queries together? 
i.e. 
I want all docs where metatag.date prefix is 2016-01 or 2016-07 or 2016-10 
  
TIA, 
Kris 
  


RE: SOLR vs mongdb

2016-11-23 Thread Kris Musshorn
Will someone please give me a detailed scenario where solr content could 
"disappear"? 

Disappear means what exactly?

TIA,
Kris


-Original Message-
From: Walter Underwood [mailto:wun...@wunderwood.org] 
Sent: Wednesday, November 23, 2016 7:47 PM
To: solr-user@lucene.apache.org
Subject: Re: SOLR vs mongdb

Well, I didn’t actually recommend MongoDB as a repository. :-)

If you want transactions and search, buy MarkLogic. I worked there for two 
years, and that is serious non-muggle technology.

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)


> On Nov 23, 2016, at 4:43 PM, Alexandre Rafalovitch  wrote:
> 
> Actually, you need to be ok that your content will disappear when you 
> use MongoDB as well :-(
> 
> But I understand what you were trying to say.
> 
> http://www.solr-start.com/ - Resources for Solr users, new and 
> experienced
> 
> 
> On 24 November 2016 at 11:34, Walter Underwood  wrote:
>> The choice is simple. Are you OK if all your content disappears and you need 
>> to reload?
>> If so, use Solr. If not, you need some kind of repository. It can be files 
>> in Amazon S3.
>> But Solr is not designed to preserve your data.
>> 
>> wunder
>> Walter Underwood
>> wun...@wunderwood.org
>> http://observer.wunderwood.org/  (my blog)
>> 
>> 
>>> On Nov 23, 2016, at 4:12 PM, Alexandre Rafalovitch  
>>> wrote:
>>> 
>>> Solr supports automatic detection of content types for new fields.
>>> That was - unfortunately - named as schemaless mode. It still is 
>>> typed under the covers and has limitations. Such as needing all 
>>> automatically created fields to be multivalued (by the default 
>>> schemaless definition).
>>> 
>>> MongoDB is better about actually storing content, especially nested 
>>> content. Solr can store content, but that's not what it is about. 
>>> You can totally turn off all the stored flags in Solr and return 
>>> just document ids, while storing the content in MongoDB.
>>> 
>>> You can search in Mongo and you can store content in Solr, so for 
>>> simple use cases you can use either one to serve both cause. But you 
>>> can also pound nails with a brick and make holes with a hammer.
>>> 
>>> Oh, and do not read this as me endorsing MongoDB. I would probably 
>>> look at Postgress with JSON columns instead, as it is more reliable 
>>> and feature rich.
>>> 
>>> Regards,
>>>  Alex.
>>> 
>>> http://www.solr-start.com/ - Resources for Solr users, new and 
>>> experienced
>>> 
>>> 
>>> On 24 November 2016 at 07:34, Prateek Jain J 
>>>  wrote:
 Solr also supports schemaless behaviour, and my question is the same: 
 why and where should we prefer MongoDB? Web searches didn't help me on 
 this.
 
 
 Regards,
 Prateek Jain
 
 -Original Message-
 From: Rohit Kanchan [mailto:rohitkan2...@gmail.com]
 Sent: 23 November 2016 07:07 PM
 To: solr-user@lucene.apache.org
 Subject: Re: SOLR vs mongdb
 
 Hi Prateek,
 
 I think you are talking about two different animals. Solr(actually 
 embedded
 lucene) is actually a search engine where you can use different features 
 like faceting, highlighting etc but it is a document store where for each 
 text it does create an Inverted index and map that to documents.  Mongodb 
 is also document store but I think it adds basic search capability.  This 
 is my understanding. We are using mongo for temporary storage and I think 
 it is good for that where you want to store a key value document in a 
 collection without any static schema. In Solr you need to define your 
 schema. In solr you can define dynamic fields too. This is all my 
 understanding.
 
 -
 Rohit
 
 
 On Wed, Nov 23, 2016 at 10:27 AM, Prateek Jain J < 
 prateek.j.j...@ericsson.com> wrote:
 
> 
> Hi All,
> 
> I have started to use mongodb and solr recently. Please feel free 
> to correct me where my understanding is not upto the mark:
> 
> 
> 1.   Solr is indexing engine but it stores both data and indexes in
> same directory. Although we can select fields to store/persist in 
> solr via schema.xml. But in nutshell, it's not possible to 
> distinguish between data and indexes like, I can't remove all 
> indexes and still have persisted data with SOLR.
> 
> 2.   Solr indexing capabilities are far better than any other nosql db
> like mongodb etc. like faceting, weighted search.
> 
> 3.   Both support scalability via sharding.
> 
> 4.   We can have architecture where data is stored in separate db like
> mongodb or mysql. SOLR can connect with db and index data (in SOLR).
> 
> I tried googling for question "solr vs mongodb" and there are 
> various threads on sites like stackoverflow. But I still 

RE: field set up help

2016-11-17 Thread Kris Musshorn
This q={!prefix f=metatag.date}2016-10 returns zero records

-Original Message-
From: KRIS MUSSHORN [mailto:mussho...@comcast.net] 
Sent: Thursday, November 17, 2016 3:00 PM
To: solr-user@lucene.apache.org
Subject: Re: field set up help

so if the field was named metatag.date q={!prefix f=metatag.date}2016-10 

- Original Message -

From: "Erik Hatcher" <erik.hatc...@gmail.com> 
To: solr-user@lucene.apache.org 
Sent: Thursday, November 17, 2016 2:46:32 PM 
Subject: Re: field set up help 

Given what you’ve said, my hunch is you could make the query like this: 

q={!prefix f=field_name}2016-10 

tada!  ?! 

there’s nothing wrong with indexing dates as text like that, as long as your 
queries are performantly possible.   And in the case of the query type you 
mentioned, the text/string’ish indexing you’ve done is suited quite well to 
prefix queries to grab dates by year, year-month, and year-month-day.   But you 
could, if needed to get more sophisticated with date queries (DateRangeField is 
my new favorite) you can leverage ParseDateFieldUpdateProcessorFactory without 
having to change the incoming format. 

Erik 




> On Nov 17, 2016, at 1:55 PM, KRIS MUSSHORN <mussho...@comcast.net> wrote: 
> 
> 
> I have a field in solr 5.4.1 that has values like: 
> 2016-10-15 
> 2016-09-10 
> 2015-10-12 
> 2010-09-02 
>   
> Yes it is a date being stored as text. 
>   
> I am getting the data onto solr via nutch and the metatag plug in. 
>   
> The data is coming directly from the website I am crawling and I am not able 
> to change the data at the source to something more palpable. 
>   
> The field is set in solr to be of type TextField that is indexed, tokenized, 
> stored, multivalued and norms are omitted. 
>   
> Both the index and query analysis chains contain just the whitespace 
> tokenizer factory and the lowercase filter factory. 
>   
> I need to be able to query for 2016-10 and only match 2016-10-15. 
>   
> Any ideas on how to set this up? 
>   
> TIA 
>   
> Kris   
>   
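For reference, the same prefix query expressed as a full request; when it is sent with curl or pasted into a browser, the local-params braces, bang, and space need URL encoding (host and core name here are placeholders):

curl 'http://localhost:8983/solr/mycore/select?q=%7B%21prefix%20f%3Dmetatag.date%7D2016-10&wt=json'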





Re: field set up help

2016-11-17 Thread KRIS MUSSHORN
so if the field was named metatag.date q={!prefix f=metatag.date}2016-10 

- Original Message -

From: "Erik Hatcher" <erik.hatc...@gmail.com> 
To: solr-user@lucene.apache.org 
Sent: Thursday, November 17, 2016 2:46:32 PM 
Subject: Re: field set up help 

Given what you’ve said, my hunch is you could make the query like this: 

    q={!prefix f=field_name}2016-10 

tada!  ?! 

there’s nothing wrong with indexing dates as text like that, as long as your 
queries are performantly possible.   And in the case of the query type you 
mentioned, the text/string’ish indexing you’ve done is suited quite well to 
prefix queries to grab dates by year, year-month, and year-month-day.   But you 
could, if needed to get more sophisticated with date queries (DateRangeField is 
my new favorite) you can leverage ParseDateFieldUpdateProcessorFactory without 
having to change the incoming format. 

Erik 




> On Nov 17, 2016, at 1:55 PM, KRIS MUSSHORN <mussho...@comcast.net> wrote: 
> 
> 
> I have a field in solr 5.4.1 that has values like: 
> 2016-10-15 
> 2016-09-10 
> 2015-10-12 
> 2010-09-02 
>   
> Yes it is a date being stored as text. 
>   
> I am getting the data onto solr via nutch and the metatag plug in. 
>   
> The data is coming directly from the website I am crawling and I am not able 
> to change the data at the source to something more palpable. 
>   
> The field is set in solr to be of type TextField that is indexed, tokenized, 
> stored, multivalued and norms are omitted. 
>   
> Both the index and query analysis chains contain just the whitespace 
> tokenizer factory and the lowercase filter factory. 
>   
> I need to be able to query for 2016-10 and only match 2016-10-15. 
>   
> Any ideas on how to set this up? 
>   
> TIA 
>   
> Kris   
>   




field set up help

2016-11-17 Thread KRIS MUSSHORN

I have a field in solr 5.4.1 that has values like: 
2016-10-15 
2016-09-10 
2015-10-12 
2010-09-02 
  
Yes it is a date being stored as text. 
  
I am getting the data onto solr via nutch and the metatag plug in. 
  
The data is coming directly from the website I am crawling and I am not able to 
change the data at the source to something more palpable. 
  
The field is set in solr to be of type TextField that is indexed, tokenized, 
stored, multivalued and norms are omitted. 
  
Both the index and query analysis chains contain just the whitespace tokenizer 
factory and the lowercase filter factory. 
  
I need to be able to query for 2016-10 and only match 2016-10-15. 
  
Any ideas on how to set this up? 
  
TIA 
  
Kris  
  


Re: Custom user web interface for Solr

2016-11-04 Thread KRIS MUSSHORN
https://cwiki.apache.org/confluence/display/solr/Velocity+Search+UI 

You might be able to customize velocity. 

K 
- Original Message -

From: "Binoy Dalal"  
To: solr-user@lucene.apache.org 
Sent: Friday, November 4, 2016 2:33:24 PM 
Subject: Re: Custom user web interface for Solr 

See this link for more details => 
https://lucidworks.com/blog/2015/12/08/browse-new-improved-solr-5/ 

On Sat, Nov 5, 2016 at 12:02 AM Binoy Dalal  wrote: 

> Have you checked out the /browse handler? It provides a pretty rudimentary 
> UI for displaying the results. It is nowhere close to what you would want 
> to present to your users but it is a good place to start off. 
> 
> On Fri, Nov 4, 2016 at 11:32 PM tesm...@gmail.com  
> wrote: 
> 
> Hi, 
> 
> My search query comprises of more than one fields like search string, date 
> field and a one optional field). 
> 
> I need to represent these on the web interface to the users. 
> 
> Secondly, I need to represent the search data in graphical format. 
> 
> Is there some Solr web client that provides the above features or Is there 
> a way to modify the default Solr Browse interface and add above options? 
> 
> 
> 
> 
> 
> Regards, 
> 
> -- 
> Regards, 
> Binoy Dalal 
> 
-- 
Regards, 
Binoy Dalal 



Re: VPAT?

2016-10-11 Thread KRIS MUSSHORN
I'm sure someone will correct me if I am wrong, but Solr is a data-layer component, so 
508 compliance would have to be assured by the presentation layer. 

Unless you're talking about 508 compliance for the admin webapp. 

K 

- Original Message -

From: "Bill Yosmanovich"  
To: solr-user@lucene.apache.org 
Sent: Tuesday, October 11, 2016 12:26:00 PM 
Subject: VPAT? 

Would anyone happen to know if SOLR has a VPAT or where I could obtain any 
Section 508 compliance information? 

Thanks! 
Bill Yosmanovich 



seperate core from engine

2016-10-06 Thread KRIS MUSSHORN
Currently Solr ( 5.4.1 ) and its core data are all in one location. 

How would I set up Solr so that the core data could be stored somewhere else? 

Pointers to helpful instructions are appreciated 

TIA 

Kris 


Re: bash to get doc count

2016-10-05 Thread KRIS MUSSHORN

ps $SOLR_HOST and $SOLR_CORE_NAME are set correctly. 


Kris 

- Original Message -

From: "KRIS MUSSHORN" <mussho...@comcast.net> 
To: solr-user@lucene.apache.org 
Sent: Wednesday, October 5, 2016 3:17:19 PM 
Subject: bash to get doc count 

Will someone please tell me why this stores the text "numDocs" instead of 
returning the number of docs in the core? 

#!/bin/bash 
DOC_COUNT=`wget -O- -q 
$SOLR_HOST'admin/cores?action=STATUS&core='$SOLR_CORE_NAME'&wt=json&indent=true'
 | grep numDocs | tr -d '0-9'` 

TIA 

Kris 



bash to get doc count

2016-10-05 Thread KRIS MUSSHORN
Will someone please tell me why this stores the text "numDocs" instead of 
returning the number of docs in the core? 

#!/bin/bash 
DOC_COUNT=`wget -O- -q 
$SOLR_HOST'admin/cores?action=STATUS&core='$SOLR_CORE_NAME'&wt=json&indent=true'
 | grep numDocs | tr -d '0-9'` 

TIA 

Kris 


warning

2016-09-28 Thread KRIS MUSSHORN
My solr 5.4.1 solrconfig.xml is set up thus: 

 
<lockType>${solr.lock.type:native}</lockType> 
<unlockOnStartup>false</unlockOnStartup> 

yet i get a warning on starting the core... 
2016-09-28 14:24:06.049 WARN (coreLoadExecutor-6-thread-1) [ ] o.a.s.c.Config 
Solr no longer supports forceful unlocking via the 'unlockOnStartup' option. 
This is no longer neccessary for the default lockType except in situations 
where it would be dangerous and should not be done. For other lockTypes and/or 
directoryFactory options it may also be dangerous and users must resolve 
problematic locks manually. 


Any suggestions? 

Kris 
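For what it's worth, the warning appears to be triggered simply because the unlockOnStartup element is present (it fires here even with a value of false); a sketch of the relevant snippet with the line to drop marked:

<lockType>${solr.lock.type:native}</lockType>
<!-- removing the next line removes the warning; the option is no longer supported -->
<unlockOnStartup>false</unlockOnStartup>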


RE: script to get core num docs

2016-09-19 Thread Kris Musshorn
Thanks David.. got it working

-Original Message-
From: David Santamauro [mailto:david.santama...@gmail.com] 
Sent: Monday, September 19, 2016 11:55 AM
To: solr-user@lucene.apache.org
Cc: david.santama...@gmail.com
Subject: Re: script to get core num docs


https://cwiki.apache.org/confluence/display/solr/CoreAdmin+API

wget -O- -q \
 '/admin/cores?action=STATUS&core=coreName&wt=json&indent=true' \
   | grep numDocs

//

'/admin/cores?action=STATUS&core=alexandria_shard2_replica1&wt=json&indent=1' | grep numDocs | cut -f2 -d':' |

On 09/19/2016 11:22 AM, KRIS MUSSHORN wrote:
> How can I get the count of docs from a core with bash?
> Seems like I have to call Admin/Luke but can't find any specifics.
> Thanks
> Kris
>

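Putting that together, a small sketch of the whole thing (host and core name are placeholders; indent=true keeps numDocs on its own line so grep/cut can pick it out):

#!/bin/bash
# Ask the CoreAdmin STATUS API for the core and pull out the numDocs value.
DOC_COUNT=$(wget -O- -q \
  'http://localhost:8983/solr/admin/cores?action=STATUS&core=TEST_CORE&wt=json&indent=true' \
  | grep numDocs | cut -f2 -d':' | tr -d ' ,')
echo "numDocs: $DOC_COUNT"

Note that the original tr -d '0-9' deletes the digits themselves, which is why only the text "numDocs" was left in the variable.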


script to get core num docs

2016-09-19 Thread KRIS MUSSHORN
How can I get the count of docs from a core with bash? 
Seems like I have to call Admin/Luke but can't find any specifics. 
Thanks 
Kris 


extract metadata

2016-09-08 Thread KRIS MUSSHORN
How would one get all metadata/properties from a .doc/pdf/xls etc into fields 
into solr? 



query issue

2016-08-31 Thread KRIS MUSSHORN
SOLR 5.4.1. 




Executing query content:ddd on the core below in the solr web interface returns 
no documents but query content:Dispatches returns doc 1. Why does the first 
query return no documents? 





Doc 1 

{ 
"id":"https://snip/inside/news/dispatches/view.cfm?id=2571", 
"url":"https://snip/inside/news/dispatches/view.cfm?id=2571", 
"content":"Dispatches - DISPATCH Security, and 508 Notice Powered By 
Authentication"], 
"_version_":1544180960786907136, 
"entity_type":"node", 
"timestamp":"2016-08-31T12:15:22.165Z"}, 
Doc 2 
{ 
"id":"epwb95/node/3", 
"site":"https://snip/drupaldev/", 
"hash":"epwb95", 
"entity_id":3, 
"entity_type":"node", 
"bundle":"page", 
"bundle_name":"Basic page", 
"ss_language":"und", 
"path":"node/3", 
"url":"https://snip/drupaldev/me/about", 
"path_alias":"me/about", 
"label":"About", 
"spell":["About", 
" About me page testing "], 
"content":" About me page testing ", 
"teaser":" About me page testing ", 
"ss_name":"kmusshorn", 
"tos_name":"kmusshorn", 
"ss_name_formatted":"kmusshorn", 
"tos_name_formatted":"kmusshorn", 
"is_uid":6, 
"bs_status":true, 
"bs_sticky":false, 
"bs_promote":false, 
"is_tnid":0, 
"bs_translate":false, 
"ds_created":"2016-08-15T12:54:10Z", 
"ds_changed":"2016-08-16T17:31:47Z", 
"ds_last_comment_or_change":"2016-08-16T17:31:47Z", 
"_version_":1544181596777611264, 
"timestamp":"2016-08-31T12:25:28.714Z"}, 
Doc 3 
{ 
"id":"epwb95/node/10", 
"site":"https://snip/drupaldev/", 
"hash":"epwb95", 
"entity_id":10, 
"entity_type":"node", 
"bundle":"blog", 
"bundle_name":"Blog entry", 
"ss_language":"und", 
"path":"node/10", 
"url":"https://snip/drupaldev/node/10", 
"label":"Blogging", 
"spell":["Blogging", 
" kmusshorns blog d "], 
"content":" kmusshorn's blog d ", 
"teaser":" kmusshorn's blog d ", 
"ss_name":"kmusshorn", 
"tos_name":"kmusshorn", 
"ss_name_formatted":"kmusshorn", 
"tos_name_formatted":"kmusshorn", 
"is_uid":6, 
"bs_status":true, 
"bs_sticky":false, 
"bs_promote":true, 
"is_tnid":0, 
"bs_translate":false, 
"ds_created":"2016-08-16T14:18:13Z", 
"ds_changed":"2016-08-16T14:18:13Z", 
"ds_last_comment_or_change":"2016-08-16T14:18:13Z", 
"_version_":1544181596783902720, 
"timestamp":"2016-08-31T12:25:28.714Z"}] 
}}