Sorry for not responding earlier, I went ahead and created a ticket
here:
https://issues.apache.org/jira/browse/SOLR-7613
It does look somewhat trivial if you just update the current loading
mechanism as Chris describes; I can provide a patch for that if you want.
Though, if you want to go
Thanks Toke for the input.
I think the plan is to facet only on class_u1, class_u2 for queries from
user1, etc. So faceting would not happen on all fields on a single query.
But still.
I did not design the schema, just found out about the number of fields and
advised against it when they asked
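For reference, the per-user faceting plan amounts to building a request that names only that user's fields. A minimal sketch of assembling such a query string in shell (the field names class_u1, class_u2 follow the scheme described in the thread; the host and collection in the comment are illustrative, not from the thread):

```shell
# Build the facet parameters for one user's per-user fields.
# Field names (class_u1, class_u2) are illustrative, per the thread's scheme.
user_fields="class_u1 class_u2"
params="q=*:*&facet=true"
for f in $user_fields; do
  params="${params}&facet.field=${f}"
done
# The resulting query string would then be sent to Solr, e.g.:
#   curl "http://localhost:8983/solr/collection1/select?$params"
echo "$params"
```

Only the fields actually named in facet.field are faceted per request, which is the point being discussed: the per-request cost is bounded, but structures built for faceting can persist between requests.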
Faceting on very high cardinality fields can use up memory, no doubt
about that. I think the entire delete question was a red herring, but
you know that already ;)
So I think you can forget about the delete stuff. Although do note
that if you do re-index your old documents, the new version
Nothing's really changed in that area lately. Your co-worker is
perhaps confusing the statement that Solr has no a-priori limit on
the number of distinct fields that can be in a corpus with supporting
an infinite number of fields. Not having a built-in limit is much
different than supporting
xavi jmlucjav jmluc...@gmail.com wrote:
The reason for such a large number of fields:
- users create dynamically 'classes' of documents, say one user creates 10
classes on average
- for each 'class', the fields are created like this: unique_id_+fieldname
- there are potentially hundreds of
Anything more than a few hundred seems very suspicious.
Anything more than a few dozen or 50 or 75 seems suspicious as well.
The point should not be how crazy you can get with Solr, but that craziness
should be avoided altogether!
Solr's design is optimal for a large number of relatively small
xavi jmlucjav jmluc...@gmail.com wrote:
I think the plan is to facet only on class_u1, class_u2 for queries from
user1, etc. So faceting would not happen on all fields on a single query.
I understand that, but most of the created structures stay in memory between
calls (DocValues helps here).
The reason for such a large number of fields:
- users create dynamically 'classes' of documents, say one user creates 10
classes on average
- for each 'class', the fields are created like this: unique_id_+fieldname
- there are potentially hundreds of thousands of users.
There is faceting in each
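A scheme like unique_id_+fieldname can usually be declared with a single dynamic field rule instead of thousands of explicit field definitions, though that does not reduce the number of actual fields in the index. A schema.xml sketch — the field name pattern, type, and flags here are assumptions for illustration, not taken from the thread:

```xml
<!-- One rule matching e.g. class_u1, class_u2, ... rather than an explicit
     <field> per user. Type and indexed/stored flags are illustrative. -->
<dynamicField name="class_*" type="string" indexed="true" stored="true"/>
```

Note this only tidies the schema file; every distinct field name still creates real per-field structures in the Lucene index, which is what the cardinality concern is about.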
On Sat, May 30, 2015 at 11:15 PM, Toke Eskildsen t...@statsbiblioteket.dk
wrote:
xavi jmlucjav jmluc...@gmail.com wrote:
I think the plan is to facet only on class_u1, class_u2 for queries from
user1, etc. So faceting would not happen on all fields on a single query.
I understand that, but
I also ran into the same problem; could you tell me why? Thanks
--
View this message in context:
http://lucene.472066.n3.nabble.com/Storing-positions-and-offsets-vs-FieldType-IndexOptions-DOCS-AND-FREQS-AND-POSITIONS-AND-OFFSETS-tp4061354p4208875.html
Sent from the Solr - User mailing list archive
Hi All,
I am trying to build and compile Solr . I have been following the below
link .
https://wiki.apache.org/solr/HowToCompileSolr
I have taken the latest version of the code and have run the ant clean
compile command, followed by ant dist. Both steps were successful, but no war
was created (as
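For anyone following along, a sketch of those build steps (paths assume a lucene-solr source checkout of that era; note that from Solr 5.0 on, Solr is no longer shipped as a standalone .war, which may be why none appears — the exact targets depend on the checkout):

```shell
# From the root of a lucene-solr checkout, per the HowToCompileSolr wiki:
ant clean compile      # compile everything
cd solr
ant dist               # build artifacts land under solr/dist/
# Since Solr 5.0 the project no longer distributes a standalone .war;
# a runnable server tree is built instead (e.g. "ant server" in 5.x).
```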
Hi, What would be an optimal FS block size to use?
Using Solr 4.7.2, I have a RAID-5 of SSD drives currently configured with
a 128KB block size.
Can I expect better indexing/query time performance with a smaller block
size (say 8K)?
Considering my documents are almost always smaller than 8K.
I
Thank you Erick. I was thinking that it actually went through and
removed the index data; thank you for the clarification. What happened
was I had some bad data that created a lot of fields (some 8000). I was
getting some errors adding new fields where Solr could not talk to
ZooKeeper, and I
Please help me here
With Regards
Aman Tandon
On Sat, May 30, 2015 at 12:43 AM, Aman Tandon amantandon...@gmail.com
wrote:
Thanks Alex, yes, it is for my testing to understand the code/process flow,
actually.
Any other ideas?
With Regards
Aman Tandon
On Fri, May 29, 2015 at 12:48 PM,
Wow,
thanks both for the suggestions
Erik: good point for the uneven shard load
I'm not worried about the growth of a particular shard, in case I'd use
shard splitting and if necessary add a server to the cluster
but even if I manage to spread docs of typeA producer
Unsubscribe me
Quoting Erik from two days ago:
Please follow the instructions here:
http://lucene.apache.org/solr/resources.html. Be sure to use the exact same
e-mail you used to subscribe.
On May 30, 2015, at 6:07 AM, Lalit Kumar 4 lkum...@sapient.com wrote:
Please unsubscribe me as well
On May 30,
Please unsubscribe me as well
On May 30, 2015 15:23, Neha Jatav neha.ja...@gmail.com wrote:
Unsubscribe me
Hi Joseph,
On May 30, 2015, at 8:18 AM, Joseph Obernberger j...@lovehorsepower.com
wrote:
Thank you Erick. I was thinking that it actually went through and removed
the index data; thank you for the clarification.
I added more info to the Schema API page about this not being true. Here’s
Hi guys,
someone I work with has been advised that currently Solr can support an
'infinite' number of fields.
I thought there was a practical limitation of, say, thousands of fields (for
sure less than a million), or things can start to break (I think I
remember seeing memory issues reported on
On 5/30/2015 1:59 AM, Aniket Kumar wrote:
Hi All,
I am trying to build and compile Solr . I have been following the below
link .
https://wiki.apache.org/solr/HowToCompileSolr
I have taken the latest version of the code and have run the ant clean
compile command, followed by ant dist. Both
What I'm suggesting is that you have two fields, one for searching, one
for faceting.
You may find you can't use docValues for your field type, in which case
Solr will just use caches to improve faceting performance.
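The two-field suggestion above could look like this in schema.xml — a sketch only; the field names and types here are illustrative, not from the thread:

```xml
<!-- One analyzed field for searching, one string field with docValues
     for faceting; copyField keeps them in sync at index time.
     Names and types are illustrative. -->
<field name="title"       type="text_general" indexed="true"  stored="true"/>
<field name="title_facet" type="string"       indexed="false" stored="false"
       docValues="true"/>
<copyField source="title" dest="title_facet"/>
```

The facet field holds the raw, untokenized value, so facet counts come out as whole values rather than individual terms, and docValues keeps the per-field memory footprint on the heap down.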
Upayavira
On Sat, May 30, 2015, at 01:50 AM, Aman Tandon wrote:
Hi Upayavira,
On Sat, May 30, 2015, at 09:51 AM, Gili Nachum wrote:
Hi, What would be an optimal FS block size to use?
Using Solr 4.7.2, I have a RAID-5 of SSD drives currently configured
with
a 128KB block size.
Can I expect better indexing/query time performance with a smaller block
size (say 8K)?
On 5/30/2015 2:51 AM, Gili Nachum wrote:
Hi, What would be an optimal FS block size to use?
Using Solr 4.7.2, I have a RAID-5 of SSD drives currently configured with
a 128KB block size.
Can I expect better indexing/query time performance with a smaller block
size (say 8K)?
Considering my