> > On 26 Sep 2016, at 20:39, Shawn Heisey wrote:
> >
> > On 9/26/2016 6:28 AM, xavi jmlucjav wrote:
> >> Yes, I had to change some fields, basically to use TrieIntField etc
> >> instead
> >> of the old IntField. I was assuming by using the IndexUpgrader to
Hi Shawn/Jan,
On Sun, Sep 25, 2016 at 6:18 PM, Shawn Heisey wrote:
> On 9/25/2016 4:24 AM, xavi jmlucjav wrote:
> > Everything went well, no errors when solr restarted, the collections
> shows
> > the right number of docs. But when I try to
Hi,
I have an existing 3.6 standalone installation. It has to be moved to
SolrCloud 6.1.0. Reindexing is not an option, so I did the following:
- Use IndexUpgrader to upgrade 3.6 -> 4.4 -> 5.5. I did not upgrade to 6.x,
as a 5.5 index should be readable by 6.x
- Install a SolrCloud 6.1 cluster
- modify sche
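The two-hop upgrade above can be sketched as the following helper, which builds the `java` command lines for `org.apache.lucene.index.IndexUpgrader`. Each hop must run with the Lucene jars of the target version; the jar names and index path here are placeholders, not taken from the thread.

```python
def upgrader_cmd(lucene_core_jar, backward_codecs_jar, index_dir):
    # Build the java invocation for one IndexUpgrader hop.
    # backward-codecs is a separate jar only from Lucene 5.x onward.
    cp = ":".join(j for j in (lucene_core_jar, backward_codecs_jar) if j)
    return ["java", "-cp", cp,
            "org.apache.lucene.index.IndexUpgrader",
            "-delete-prior-commits", index_dir]

hops = [
    # 3.6 -> 4.4: in 4.x the old-codec support still lives in lucene-core
    upgrader_cmd("lucene-core-4.4.0.jar", None, "/srv/solr/data/index"),
    # 4.4 -> 5.5: 5.x needs lucene-backward-codecs on the classpath
    upgrader_cmd("lucene-core-5.5.0.jar",
                 "lucene-backward-codecs-5.5.0.jar", "/srv/solr/data/index"),
]
```

Each command is run against the same index directory in sequence; `-delete-prior-commits` drops old commit points so only the upgraded segment format remains.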
parameter is always 0.
>
> Or your second query could even be just
> q=id:[last_id_returned_from_previous_query TO *]&sort=id
> asc&start=0&rows=1000
>
> Best,
> Erick
>
> On Mon, Jun 20, 2016 at 12:37 PM, xavi jmlucjav
> wrote:
> > Hi,
>
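Erick's id-range suggestion above is essentially keyset paging: instead of deep `start=` offsets, each query asks for ids greater than the last one seen. A minimal sketch, assuming `id` is the uniqueKey; note an exclusive lower bound (`{...`) avoids re-fetching the last doc of the previous page, which the inclusive `[...` form in the quoted query would return twice.

```python
def next_page_params(last_id=None, rows=1000):
    # First call matches everything; later calls start strictly after last_id.
    q = "id:[* TO *]" if last_id is None else f"id:{{{last_id} TO *]"
    return {"q": q, "sort": "id asc", "start": 0, "rows": rows}
```

The caller feeds the last id of each response back in until a page comes back short.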
Hi,
I need to index 800M docs into a new schema; they exist in an older Solr.
As all fields are stored, I thought I was very lucky, as I could:
- use wt=csv
- combined with cursorMark
to easily script something that would export/index in chunks of 1M docs
or something. CSV output being very e
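The wt=csv + cursorMark loop described above can be sketched like this; the fetch function stands in for the actual HTTP call, and the sort must include the uniqueKey (here `id`) for cursors to work. Field and size choices are examples, not from the thread.

```python
def export_params(cursor, rows=1000000):
    # One page of the CSV export; cursorMark="*" on the first request.
    return {"q": "*:*", "wt": "csv", "sort": "id asc",
            "rows": rows, "cursorMark": cursor}

def export_all(fetch):
    # fetch(params) -> (csv_chunk, nextCursorMark) from the Solr response.
    cursor = "*"
    while True:
        chunk, next_cursor = fetch(export_params(cursor))
        yield chunk
        if next_cursor == cursor:   # unchanged cursor means no more docs
            return
        cursor = next_cursor
```

Each yielded chunk can be written to its own file and re-indexed into the new collection independently.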
Hi,
I have been working with
AnalyzingInfixLookupFactory/BlendedInfixLookupFactory in 5.5.0, and I have
a number of questions/comments, hopefully I get some insight into this:
- The docs are not complete/up to date:
- The blenderType param does not accept the 'linear' value (it did in 5.3); I
commented it out
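For reference, a suggester definition of the kind being discussed might look like the sketch below. Field, analyzer, and dictionary names are examples; and the `position_linear` value is an assumption on my part, based on the blenderType values apparently having been renamed after 5.3, so treat it as something to verify against your release.

```xml
<!-- Hypothetical suggester config; names are illustrative only. -->
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">blended</str>
    <str name="lookupImpl">BlendedInfixLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <str name="field">title</str>
    <str name="weightField">popularity</str>
    <str name="suggestAnalyzerFieldType">text_general</str>
    <!-- 'linear' worked up to 5.3; later releases seem to expect
         position_linear / position_reciprocal instead -->
    <str name="blenderType">position_linear</str>
  </lst>
</searchComponent>
```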
In order to force an OOM, do this:
- Index a sizable number of docs with a normal -Xmx; if you already have 350k
docs indexed, that should be enough.
- Now stop Solr, decrease the heap (e.g. -Xmx15m), start it again, and run a
query with a facet on a field with very high cardinality, asking for all
facets. If
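The facet request used in that repro can be sketched as follows; the field name is a placeholder, and `facet.limit=-1` is what asks Solr for every term of the high-cardinality field.

```python
def oom_facet_params(field="user_id"):
    # Request all facet terms of one high-cardinality field, no docs.
    return {"q": "*:*", "rows": 0, "facet": "true",
            "facet.field": field,
            "facet.limit": -1,      # -1 = return every term
            "facet.mincount": 0}
```

Sent against a Solr started with a tiny heap (e.g. `bin/solr start -m 15m`), this is the kind of query that exhausts memory.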
2016 at 1:46 AM, Erick Erickson
wrote:
> Well, I'd imagine you could spawn threads and monitor/kill them as
> necessary, although that doesn't deal with OOM errors
>
> FWIW,
> Erick
>
> On Thu, Feb 11, 2016 at 3:08 PM, xavi jmlucjav wrote:
> > For sure,
, y, if your use case allows, then we now have
> that in Tika.
>
> I've been wanting to add a similar watchdog to tika-server ... any
> interest in that?
>
>
> -----Original Message-----
> From: xavi jmlucjav [mailto:jmluc...@gmail.com]
> Sent: Thursday, February
I have found that when you deal with large numbers of files of all sorts, in
the end you find stuff (PDFs are typically nasty) that will hang Tika. That
is even worse than a crash or an OOM.
We used Aperture instead of Tika because at the time it provided a watchdog
feature to kill what seemed like a h
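A minimal sketch of such a watchdog, assuming extraction runs as an external process (e.g. tika-app on one file): run it with a deadline and kill it if it hangs, so one bad PDF cannot stall the whole crawl. The command line is an example, not the thread's actual setup.

```python
import subprocess

def extract_with_watchdog(cmd, timeout_s):
    # Run the extractor as a child process; subprocess.run kills it
    # if it exceeds timeout_s and raises TimeoutExpired.
    try:
        proc = subprocess.run(cmd, capture_output=True, timeout=timeout_s)
        return proc.stdout
    except subprocess.TimeoutExpired:
        return None   # hung/slow file: skip it instead of hanging the job
```

Running extraction out-of-process like this also contains crashes and OOMs in the child, which an in-process thread watchdog cannot do.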
Mikhail, Yonik,
thanks for having a look. This was my bad all along... I forgot I was on
5.2.1 instead of 5.3.1 on this setup! It seems some things were not there
yet in 5.2.1; I just upgraded to 5.3.1 and my query works perfectly.
Although I do agree with Mikhail that the docs on this feature are a
Hi,
I am trying to get some faceting with the JSON Facet API on nested docs, but
I am having issues. Solr 5.3.1.
This query gets the bucket numbers OK:
curl http://shost:8983/solr/collection1/query -d 'q=*:*&rows=0&
json.facet={
yearly-salaries : {
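A hypothetical complete form of the truncated query above might look like the following, faceting over child docs via a blockChildren domain. All field names (`year_i`, `salary_f`, `type_s:parent`) are assumptions for illustration, not from the original message.

```python
import json

# Build the json.facet payload: salary stats bucketed per year,
# computed over child documents of each matching parent.
json_facet = {
    "yearly-salaries": {
        "type": "terms",
        "field": "year_i",
        "domain": {"blockChildren": "type_s:parent"},
        "facet": {"avg_salary": "avg(salary_f)"},
    }
}
params = {"q": "*:*", "rows": 0, "json.facet": json.dumps(json_facet)}
```

The `params` dict is what would be POSTed to `/query`, as in the curl command above.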
Hi,
While working with DIH, I tried schemaless mode and found out it does not
work if you are indexing with DIH. I could not find any issue or reference
to this on the mailing list, and I find it a bit surprising that nobody has
tried that combination so far. Has anybody tested this before?
I manage
Hi,
I have a setup with AnalyzingInfixLookupFactory, suggest.count works. But
if I just replace:
s/AnalyzingInfixLookupFactory/BlendedInfixLookupFactory
suggest.count is not respected anymore: all suggestions are returned, which
makes it virtually useless.
I am using RC4 that I believe is also bein
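For clarity, the request where suggest.count should cap the results can be sketched like this; the dictionary name and query text are examples.

```python
from urllib.parse import urlencode

def suggest_params(q, count=5, dictionary="blended"):
    # Query string for /suggest; suggest.count should cap the
    # number of suggestions returned.
    return urlencode({"suggest": "true", "suggest.dictionary": dictionary,
                      "suggest.q": q, "suggest.count": count})
```

With AnalyzingInfixLookupFactory this cap is honored; the report above is that swapping in BlendedInfixLookupFactory makes the same parameter a no-op.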
On Sat, May 30, 2015 at 11:15 PM, Toke Eskildsen
wrote:
> xavi jmlucjav wrote:
> > I think the plan is to facet only on class_u1, class_u2 for queries from
> > user1, etc. So faceting would not happen on all fields on a single query.
>
> I understand that, but most of t
for a second opinion. We did not get to
discuss a different schema, but if we get to this point I will take that
plan into consideration for sure.
xavi
On Sat, May 30, 2015 at 10:17 PM, Toke Eskildsen
wrote:
> xavi jmlucjav wrote:
> > They reason for such a large number of fields:
orting
> >
> > Whether Solr breaks with thousands and thousands of fields is pretty
> > dependent on what you _do_ with those fields. Simply doing keyword
> > searches isn't going to put the same memory pressure on as, say,
> > faceting on them all (even if in
Hi guys,
someone I work with has been advised that currently Solr can support an
'infinite' number of fields.
I thought there was a practical limitation of, say, thousands of fields (for
sure less than a million), or things can start to break (I think I
remember seeing memory issues reported on th