Solr's schema is mostly flat (though subdocuments are now allowed).
So the complexity of an XML document needs to be mapped to the flat
layout (unless you use Lux, as mentioned).
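For example, a minimal sketch of what that mapping can look like: flatten a
nested XML document into the flat field/value pairs a Solr document expects.
The sample XML and the dotted field-naming scheme here are just illustrative
assumptions, not anything Solr prescribes.

    # Flatten nested XML elements into {field_name: [values]} pairs,
    # using dotted paths as field names (an assumed convention).
    import xml.etree.ElementTree as ET

    def flatten(elem, prefix=""):
        fields = {}
        name = f"{prefix}{elem.tag}"
        text = (elem.text or "").strip()
        if text:
            fields.setdefault(name, []).append(text)
        for child in elem:
            for k, v in flatten(child, prefix=f"{name}.").items():
                fields.setdefault(k, []).extend(v)
        return fields

    doc = ET.fromstring(
        "<book><title>Solr in Action</title>"
        "<authors><author>Trey</author><author>Timothy</author></authors></book>"
    )
    print(flatten(doc))
    # {'book.title': ['Solr in Action'],
    #  'book.authors.author': ['Trey', 'Timothy']}

Multi-valued fields (like the repeated author element above) map naturally,
but parent/child relationships are lost unless you use block join subdocuments.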
Given the limitation, the use-case for DTD/XSD mappers is not very
strong. I guess it would be useful for type mapping (
Hello all, I'm trying to reconcile what I'm seeing in the file system for a
Solr index with what the UI is reporting. Here's what I see in the UI
for the index:
https://s3-us-west-2.amazonaws.com/pa-darrell/ui.png
As shown, the index is 74.85 GB in size. However, here is what I see in t
Sorry, but you have to create the schema manually... though you could possibly
get by with Solr's schemaless mode to dynamically create the schema based on
the actual data values.
See:
https://cwiki.apache.org/confluence/display/solr/Schemaless+Mode
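For example, a minimal sketch, assuming a local Solr instance with a core named
"gettingstarted" running in schemaless mode (the core name and the sample
document are assumptions): you just post documents, and Solr guesses field
types from the first values it sees for each unknown field.

    # Index a document against a schemaless core; unknown fields
    # (title, price, in_stock) get their types guessed by Solr.
    import requests

    doc = {"id": "1", "title": "Schemaless test", "price": 9.99, "in_stock": True}

    resp = requests.post(
        "http://localhost:8983/solr/gettingstarted/update?commit=true",
        json=[doc],  # the JSON update handler accepts a list of documents
        timeout=10,
    )
    resp.raise_for_status()
    print(resp.json())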
-- Jack Krupansky
Lucene only supports 2^31-1 documents in an index, so Solr can only support
2^31-1 documents in a single shard.
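For what it's worth, here's a minimal sketch (assuming a local core named
"mycore", which is just a placeholder) that uses the Luke request handler to
see how close a shard's maxDoc is to that limit:

    # Read maxDoc from the Luke handler and compare it with Lucene's
    # hard per-index (i.e. per-shard) limit of 2^31 - 1 documents.
    import requests

    LUCENE_MAX_DOCS = 2**31 - 1  # 2,147,483,647

    stats = requests.get(
        "http://localhost:8983/solr/mycore/admin/luke",
        params={"numTerms": 0, "wt": "json"},
        timeout=10,
    ).json()

    max_doc = stats["index"]["maxDoc"]
    print(f"maxDoc={max_doc}, headroom={LUCENE_MAX_DOCS - max_doc}")

Note that maxDoc includes deleted-but-not-yet-merged documents, so it is the
number that actually counts against the limit.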
I think it's a bug that Lucene doesn't throw an exception when more than
that number of documents have been inserted. Instead, you get this error
when Solr tries to read such an over