(10/11/11 1:57), bbarani wrote:
Hi,
I have a peculiar situation where we are trying to use Solr to index
multiple tables (there is no relation between these tables). We are trying
to use the Solr index instead of the source tables, and hence we are
trying to create the Solr index as
Just curious, do these tables have the same schema, like a set of shards would?
If not, how do you map them to the index?
Dennis Gearon
Signature Warning
It is always a good idea to learn from your own mistakes. It is usually a
better idea to learn from others' mistakes.
Thanks Robert,
We will try the termsIndexInterval as a workaround. I have also opened a JIRA
issue: https://issues.apache.org/jira/browse/SOLR-2290.
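For reference, a sketch of where the termIndexInterval workaround would go. This assumes the solrconfig.xml layout of that era (the indexDefaults section) and an illustrative value; larger intervals keep fewer terms in memory at the cost of slower term lookups:

```xml
<!-- solrconfig.xml: raise the term index interval (Lucene's default is 128)
     so that fewer terms are loaded into the in-memory term index -->
<indexDefaults>
  <termIndexInterval>1024</termIndexInterval>
</indexDefaults>
```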
Hope I found the right sections of the Lucene code. I'm just now in the
process of looking at the Solr IndexReaderFactory and SolrIndexWriter
I think it should work with any version of Solr, because it is URL-based (see
the config file).
One point of attention: it was successfully tested on Apache Tomcat v6 (it should
work on any other servlet container).
From: Ahmet Arslan iori...@yahoo.com
You can have multiple documents generated by the same data-config:
<dataConfig>
  <dataSource name="ds1" .../>
  <dataSource name="ds2" .../>
  <dataSource name="ds3" .../>
  <document>
    <entity blah blah rootEntity="false">
      <entity blah blah>  <!-- this is a document -->
        <entity sets unique id .../>
      </entity>
    </entity>
  </document>
</dataConfig>
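A concrete sketch of that layout, assuming two unrelated JDBC tables; the driver, connection URLs, table and column names here are invented for illustration. TemplateTransformer is used to prefix each id so documents from different tables cannot collide on the schema's uniqueKey:

```xml
<dataConfig>
  <!-- One data source per unrelated database/table (names are hypothetical) -->
  <dataSource name="ds1" driver="org.postgresql.Driver"
              url="jdbc:postgresql://localhost/db1" user="solr" password="secret"/>
  <dataSource name="ds2" driver="org.postgresql.Driver"
              url="jdbc:postgresql://localhost/db2" user="solr" password="secret"/>
  <document>
    <!-- Each top-level entity generates its own documents -->
    <entity name="products" dataSource="ds1" transformer="TemplateTransformer"
            query="SELECT id, name FROM products">
      <!-- Prefix the id so it stays unique across both tables -->
      <field column="uid" template="product-${products.id}"/>
      <field column="name" name="name"/>
    </entity>
    <entity name="authors" dataSource="ds2" transformer="TemplateTransformer"
            query="SELECT id, full_name FROM authors">
      <field column="uid" template="author-${authors.id}"/>
      <field column="full_name" name="name"/>
    </entity>
  </document>
</dataConfig>
```

This assumes the schema declares uid as the uniqueKey field.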
I have set up index replication (triggered on optimize). The problem I
am having is the old index files are not being deleted on the slave.
After each replication, I can see the old files still hanging around
as well as the files that have just been pulled. This causes the data
directory size to
And, a use case: Tika blows up on some files. But we still want other
data like file name etc. and an empty text field. So:
<entity rootEntity="false">
  <field ... sets unique id and file name etc. .../>
  <entity blah blah>  <!-- this is a document; use the Tika EmptyParser -->
    <field failed="true"/>
  </entity>
</entity>
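A sketch of that pattern with DIH's FileListEntityProcessor and TikaEntityProcessor; the baseDir, entity names, and field names are invented. The outer entity supplies the file metadata, so those fields survive even when Tika fails on the file body; onError on the inner entity keeps the import going:

```xml
<document>
  <!-- Outer entity walks the directory and yields file name/size/date,
       which are indexed even when Tika later fails on the contents -->
  <entity name="files" processor="FileListEntityProcessor"
          baseDir="/data/incoming" fileName=".*" recursive="true"
          rootEntity="false">
    <field column="file" name="filename"/>
    <!-- onError="continue" lets DIH proceed past a Tika failure,
         leaving the text field empty for that document -->
    <entity name="body" processor="TikaEntityProcessor"
            url="${files.fileAbsolutePath}" format="text" onError="continue">
      <field column="text" name="text"/>
    </entity>
  </entity>
</document>
```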
This could be a quirk of the native locking feature. What's the file
system? Can you fsck it?
If this error keeps happening, please file this. It should not happen.
Add the text above and also your solrconfigs if you can.
One thing you could try is to change from the native locking policy to
the
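The truncated suggestion above is to switch the lock type in solrconfig.xml. A minimal sketch, assuming the pre-4.x layout where lockType lives in the mainIndex section (valid values are native, simple, and single):

```xml
<!-- solrconfig.xml: switch from native OS-level locking
     to simple file-based locking -->
<mainIndex>
  <lockType>simple</lockType>
  ...
</mainIndex>
```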
I have a table that is broken up into many virtual shards. So basically I have
N identical tables:
Document1
Document2
...
Document36
Currently these tables all live in the same database, but in the future they
may be moved to different servers to scale out if the needs arise.
Is there any
You can have a file with 1,2,3 on separate lines. There is a
line-by-line file reader that can pull these as separate drivers.
Inside that entity the JDBC url has to be altered with the incoming
numbers. I don't know if this will work.
It also may work for single-threaded DIH but not during
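A sketch of the file-driven approach with DIH's LineEntityProcessor, assuming the 36 tables currently live in one database, so the shard number can be substituted into the query's table name rather than the JDBC URL itself; the driver, paths, and column names are invented for illustration:

```xml
<dataConfig>
  <dataSource name="db" driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost/shards" user="solr" password="secret"/>
  <dataSource name="lines" type="FileDataSource"/>
  <document>
    <!-- shards.txt contains one shard number per line: 1, 2, ... 36 -->
    <entity name="shard" processor="LineEntityProcessor"
            url="/path/to/shards.txt" dataSource="lines" rootEntity="false">
      <!-- LineEntityProcessor exposes each line as ${shard.rawLine} -->
      <entity name="doc" dataSource="db"
              query="SELECT id, body FROM Document${shard.rawLine}">
        <field column="id" name="id"/>
        <field column="body" name="body"/>
      </entity>
    </entity>
  </document>
</dataConfig>
```

If the tables later move to separate servers, each would need its own dataSource element instead, since the dataSource URL itself is not substituted this way.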
--- On Sat, 12/18/10, Lance Norskog goks...@gmail.com wrote: