Solr for Content Management

2018-06-07 Thread Moenieb Davids
Hi All, Background: I am currently testing a deployment of a content management framework where I am trying to punt Solr as the tool of choice for ingestion and searching. Current status: I have deployed SolrCloud across multiple servers, with multiple shards and a replication factor of 2. In term
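A minimal sketch of the Collections API call that produces a setup like the one described (multiple shards, replication factor of 2). The host, collection name, and shard count are invented placeholders, not details from the message:

```python
# Build the SolrCloud Collections API URL that creates a sharded,
# replicated collection. All names/values here are illustrative.
from urllib.parse import urlencode

def create_collection_url(base, name, num_shards, replication_factor):
    """Return the Collections API CREATE URL for a sharded collection."""
    params = urlencode({
        "action": "CREATE",
        "name": name,
        "numShards": num_shards,
        "replicationFactor": replication_factor,
    })
    return f"{base}/solr/admin/collections?{params}"

url = create_collection_url("http://localhost:8983", "cms_docs", 4, 2)
print(url)
```

Issuing this URL (e.g. with curl) against any node of the cluster creates the collection across the cloud; the replication factor of 2 gives each shard one extra copy, matching the poster's setup.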

Re: [ANNOUNCE] Apache Solr 5.5.5 released

2017-10-24 Thread Moenieb Davids
:27 PM, Steve Rowe wrote: Yes. -- Steve, www.lucidworks.com. On Oct 24, 2017, at 12:25 PM, Moenieb Davids wrote: Solr 5.5.5? On 24 Oct 2017 17:34, "Steve Rowe" wrote: 24 Octob

Re: [ANNOUNCE] Apache Solr 5.5.5 released

2017-10-24 Thread Moenieb Davids
Solr 5.5.5? On 24 Oct 2017 17:34, "Steve Rowe" wrote: 24 October 2017, Apache Solr™ 5.5.5 available. The Lucene PMC is pleased to announce the release of Apache Solr 5.5.5. Solr is the popular, blazing fast, open source NoSQL search platform from the Apache Lucene project. Its major

Deeply nested search return

2017-09-02 Thread Moenieb Davids
Hi All, I would like to know if anybody has done deeply nested searches. I am currently sitting with the use case below. Successfully indexed document: Level1_Doc (ID, DocType) > Level2_Doc (ID, DocType) > Level3_Doc (ID, DocType) > Level4_Doc (ID, DocType). What is the
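One way Solr handles nested documents is the block-join parent query parser combined with the `[child]` document transformer. A hedged sketch of the query string that approach would take, using the `DocType` field and `Level*_Doc` values from the message (everything else is illustrative):

```python
# Build query parameters for a block-join parent query that also
# returns the matching parent's nested children via [child].
from urllib.parse import urlencode

def nested_query_params(child_match):
    """Match children, return their Level1_Doc parents with child docs attached."""
    return urlencode({
        "q": '{!parent which="DocType:Level1_Doc"}' + child_match,
        "fl": '*,[child parentFilter="DocType:Level1_Doc" limit=100]',
    })

qs = nested_query_params("DocType:Level4_Doc")
print(qs)
```

Note that block joins require the whole parent/child block to have been indexed together as one unit; they do not work on independently indexed documents.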

RE: Solr Search Handler Suggestion

2017-01-26 Thread Moenieb Davids
It's not effective to include java-user@ in this thread. Also, this proposal is purposed for DIH; that's worth mentioning in the subject. Then, this config looks like it will issue a Solr request per every parent row, which is deadly inefficient. On Wed, Jan 25, 2017 at 10:53 AM, Moenieb

Solr Search Handler Suggestion

2017-01-24 Thread Moenieb Davids
Hi Guys, Just an idea for easier config of search handlers: would it be feasible to configure a search handler that has its own schema based on the current core, as well as inserting nested objects from cross-core queries? Example (for illustration purposes, ignore syntax :) ) htt
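The closest existing mechanism to the cross-core lookup described is Solr's join query parser with `fromIndex`. A hedged sketch of the query it would take; the core and field names are invented placeholders, not from the original message:

```python
# Build a cross-core join query: match docs in core2 on fieldx,
# then join their parent_id values to the id field of the current core.
from urllib.parse import urlencode

params = urlencode({
    "q": "{!join from=parent_id to=id fromIndex=core2}fieldx:ABC",
    "fl": "id,name",
})
print(params)
```

Unlike the proposed handler, this join only filters the current core by matches in the other core; it does not embed the joined documents as nested objects in the response.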

RE: Joining Across Collections

2017-01-20 Thread Moenieb Davids
"_childDocuments_":[ { "core3_id":"zzz", "core_3_fieldx":"ABC", "core3_fieldy":"123", { "core2_fieldy":"1

RE: Joining Across Collections

2017-01-19 Thread Moenieb Davids
"_childDocuments_":[ { "core3_id":"zzz", "core_3_fieldx":"ABC", "core3_fieldy":"123", { "core2_fieldy":"123",

RE: Search for ISBN-like identifiers

2017-01-17 Thread Moenieb Davids
"core_2_fieldx":"ABC", "_childDocuments_":[ { "core3_id":"zzz", "core_3_fieldx":"ABC", "core3_fieldy":"123", { "core2_fiel

Missing Segment File

2017-01-15 Thread Moenieb Davids
Hi All, How does one resolve the missing segments issue? java.nio.file.NoSuchFileException: /pathxxx/data/index/segments_1bj It seems to occur only on large CSV imports via DIH.

RE: Help needed in breaking large index file into smaller ones

2017-01-09 Thread Moenieb Davids
s really works for lucene index files? Thanks, Manan Sheth. From: Moenieb Davids; Sent: Monday, January 9, 2017 7:36 PM; To: solr-user@lucene.apache.org; Subject: RE: Help needed in breaking larg

RE: Help needed in breaking large index file into smaller ones

2017-01-09 Thread Moenieb Davids
Hi, Try split on Linux or Unix: split -l 100 originalfile.csv This will split the file into chunks of 100 lines each; see the other options for different ways to split, e.g. by size. -Original Message- From: Narsimha Reddy CHALLA [mailto:chnredd...@gmail.com] Sent: 09 January 2017 12:12 PM To: solr-user@lucene.apache.
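One caveat with plain `split -l` on a CSV is that only the first chunk keeps the header row. A small Python sketch of the same idea that repeats the header in every chunk (file contents here are made up for illustration):

```python
# Split a list of CSV lines into fixed-size chunks, repeating the
# header line at the top of each chunk.
def split_csv(lines, chunk_size, keep_header=True):
    """Yield lists of lines, at most chunk_size data lines per chunk."""
    header, data = lines[0], lines[1:]
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        yield ([header] + chunk) if keep_header else chunk

rows = ["id|name", "1|a", "2|b", "3|c"]
chunks = list(split_csv(rows, 2))
print(chunks)
```

Each chunk can then be posted to Solr's CSV handler independently, which keeps individual requests small.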

OnError CSV upload

2017-01-09 Thread Moenieb Davids
Hi All, Background: I have a mainframe file that I want to upload, and the data is pipe delimited. Some of the records, however, have a few fields fewer than others within the same file, and when I try to import the file, Solr has an issue with the number of columns vs the number of values, which is
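A common workaround for the ragged-row problem is to pre-pad short rows so every record has the same field count before handing the file to Solr's CSV handler. A minimal sketch; the field count and sample row are invented for illustration:

```python
# Pad a pipe-delimited line with empty trailing fields so it always
# has exactly n_fields columns.
def pad_row(line, n_fields, sep="|"):
    """Return the line with empty fields appended up to n_fields."""
    parts = line.rstrip("\n").split(sep)
    parts += [""] * (n_fields - len(parts))
    return sep.join(parts)

print(pad_row("1|abc", 4))
print(pad_row("1|a|b|c", 4))
```

Running every line through this before upload gives the CSV handler a consistent column count, at the cost of one preprocessing pass over the file.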

LineEntityProcessor | Separator --- /update/csv | OnError

2017-01-05 Thread Moenieb Davids
Hi, Just wanted to know if anybody can assist with the following scenario: I have pipe-delimited mainframe file(s) that sometimes miss certain fields in a row, which obviously causes issues when I try the /update/csv handler. Scenario 1: The csv handler is quite fast, however, when it picks u