We would enjoy this feature as well, if you'd like to create a JIRA ticket.
On Thu, Aug 28, 2014 at 4:21 PM, O. Olson wrote:
> I have hundreds of fields of the following form in my schema.xml:
>
> <field name="..." type="..." multiValued="true"/>
> <field name="..." type="..." multiValued="true"/>
> ...
>
> I also have a field 'text' that ...

> ...as well
> as having some security features. Therefore Accumulo is my choice for the
> database part, and for indexing and search I am going to use Solr. Would you
> please guide me through that?
>
>
>
> On Thu, Jul 24, 2014 at 1:28 AM, Joe Gresock wrote:
>
> > We store data ...
We store data in both Solr and Accumulo -- do you have more details about
what kind of data and indexing you want? Is there a reason you're thinking
of using both databases in particular?
On Wed, Jul 23, 2014 at 5:17 AM, Ali Nazemian wrote:
> Dear All,
> Hi,
> I was wondering whether there is anybody ...
Thanks Hoss, that's a good explanation. I don't have much experience with
the non-sugar parts of the API, so this was a good summary. I suppose I
can at least help with the client heap size this way.
On Wed, Jul 2, 2014 at 10:14 PM, Chris Hostetter
wrote:
>
> : Now that I think about it, though ...
Thanks,
Joe
On Sat, Jun 28, 2014 at 5:17 PM, Joe Gresock wrote:
> Yeah, I think that's what I'll have to do, Mikhail. I was just testing
> the waters to see if there was a way to do it with SolrJ.
>
>
> On Sat, Jun 28, 2014 at 4:11 PM, Mikhail Khludnev <mk...> wrote:
>
> ... http://wiki.apache.org/solr/UpdateXmlMessages by own optimized code?
>
>
> On Sat, Jun 28, 2014 at 3:13 AM, Joe Gresock wrote:
>
> > Is there a standard way to stream updates to Solr using SolrJ?
> > Specifically, we have some atomic updates for large field values ...
Is there a standard way to stream updates to Solr using SolrJ?
Specifically, we have some atomic updates for large field values (hundreds
of MB) we'd like to send. We're currently sending partial updates using
SolrInputDocument objects, but we'd love to be able to keep less on the
heap in our client.
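For concreteness, here is a stripped-down sketch of what we do today (field,
id, and core names are made up; SolrJ 4.x-era API). The pain point is that the
whole value has to sit on the client heap inside the SolrInputDocument before
anything is sent:

    import java.util.Collections;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.common.SolrInputDocument;

    public class AtomicUpdateSketch {
        public static void main(String[] args) throws Exception {
            HttpSolrServer server =
                new HttpSolrServer("http://localhost:8983/solr/collection1");
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "doc-1");
            // Atomic update: a modifier -> value map; "set" replaces the value.
            // For a hundreds-of-MB string, this pins the entire value in memory.
            doc.addField("big_field", Collections.singletonMap("set", hugeValue()));
            server.add(doc);
            server.commit();
            server.shutdown();
        }

        // Placeholder standing in for the real multi-hundred-MB value
        private static String hugeValue() {
            return "...";
        }
    }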
> ...memory requirements down.
>
> Small is better.
>
> -- Jack Krupansky
>
> -----Original Message----- From: Joe Gresock
> Sent: Monday, June 9, 2014 8:50 AM
> To: solr-user@lucene.apache.org
> Subject: Large disjunction query practices
>
>
> I'm wondering what the ...
I'm wondering what the best practice for large disjunct queries in Solr is.
A user wants to submit a query for several hundred thousand terms, like:
(term1 OR term2 OR ... term500,000)
I know it might be better to break this up into multiple queries that can
be merged on the user's end, but I'm wondering ...
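For illustration, one alternative to a giant boolean string (a sketch, not a
recommendation: it assumes all the terms target a single field, here a
hypothetical "keyword" field, and Solr 4.10+ where the terms query parser is
available). The {!terms} parser skips boolean query parsing entirely, and
sending the query as a POST keeps it out of the URL:

    import java.util.Arrays;
    import java.util.List;
    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.SolrRequest;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;

    public class BigDisjunctionSketch {
        public static void main(String[] args) throws Exception {
            // Stand-in for the real list of several hundred thousand terms
            List<String> terms = Arrays.asList("term1", "term2", "term3");
            // {!terms} treats the comma-separated values as a set lookup on one field
            SolrQuery q = new SolrQuery("{!terms f=keyword}" + String.join(",", terms));
            HttpSolrServer server =
                new HttpSolrServer("http://localhost:8983/solr/collection1");
            // POST keeps half a million terms out of the request URL
            System.out.println(
                server.query(q, SolrRequest.METHOD.POST).getResults().getNumFound());
            server.shutdown();
        }
    }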
> ...lead to heap space problems, but one thing you could
> play with is reducing the cache sizes on that node: if you had very large
> (in terms of numbers of documents) caches, and a lot of the documents were
> big, that could lead to heap problems. But this is all just ...
And the followup question would be: if some of these documents are
legitimately this large (they really do have that much text), is there a
good way to still allow that to be searchable and not explode our index?
These would be "text_en" type fields.
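One knob worth noting (a sketch with a hypothetical field name; it only
applies if the raw text never needs to be returned from Solr): indexing the
big text without storing it keeps it searchable while dropping the verbatim
stored copy from the index.

    <field name="big_body" type="text_en" indexed="true" stored="false"/>

The inverted index entries remain, so the field still matches queries, but it
can no longer be retrieved or highlighted from stored content.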
On Mon, Jun 2, 2014 at 6:09 AM, J... wrote:
...it will vary for each system, but assuming a heap of
10g, does anyone have past experience in limiting their field sizes?
Our caches are set to 128.
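For reference, the caches in question live in solrconfig.xml; ours look
roughly like this (classes are the stock ones, autowarm counts illustrative):

    <filterCache class="solr.FastLRUCache" size="128" initialSize="128" autowarmCount="0"/>
    <queryResultCache class="solr.LRUCache" size="128" initialSize="128" autowarmCount="0"/>
    <documentCache class="solr.LRUCache" size="128" initialSize="128" autowarmCount="0"/>

Note that "size" is an entry count, not bytes, which is why a few very large
documents or result sets can still blow the heap even with small caches.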
On Sun, Jun 1, 2014 at 8:32 AM, Joe Gresock wrote:
> These are some good ideas. The "huge document" idea could add up, since I ...
>
> ...your caches?
>
> Otis
> --
> Performance Monitoring * Log Analytics * Search Analytics
> Solr & Elasticsearch Support * http://sematext.com/
>
>
> On Sat, May 31, 2014 at 5:54 PM, Joe Gresock wrote:
>
> > Interesting thought about the routing. Our document ids are in ...
> ...NOT have it go to
> other shards, perhaps if you can isolate the problem queries that might
> shed some light on the problem.
>
>
> Best
> er...@baffled.com
>
>
> On Sat, May 31, 2014 at 8:33 AM, Joe Gresock wrote:
>
> > It has taken as little as 2 minutes ...
> ...in the log?
>
> If you bring up only a single node in that problematic shard, do you still
> see the problem?
>
> -- Jack Krupansky
>
> -----Original Message----- From: Joe Gresock
> Sent: Saturday, May 31, 2014 9:34 AM
> To: solr-user@lucene.apache.org
> Subject: ...
Hi folks,
I'm trying to figure out why one shard of an evenly-distributed 3-shard
cluster would suddenly start running out of heap space, after 9+ months of
stable performance. We're using the "!" delimiter in our ids to distribute
the documents, and indeed the disk sizes of our shards are very similar.
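For illustration, a minimal sketch of how the "!" ids behave (the prefix is
hypothetical): with the default compositeId router, Solr hashes the part
before the "!" to choose the shard, so every id sharing a prefix lands on the
same shard.

    import org.apache.solr.common.SolrInputDocument;

    public class RoutingSketch {
        public static void main(String[] args) {
            SolrInputDocument doc = new SolrInputDocument();
            // "customerA" is a made-up routing prefix; all ids with this prefix
            // are hashed to the same shard by the compositeId router.
            doc.addField("id", "customerA!doc42");
            System.out.println(doc.getFieldValue("id"));
        }
    }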