field?
I'm on Solr 8.6.1 and the link at
https://lucene.apache.org/solr/guide/8_6/schema-factory-definition-in-solrconfig.html#schema-factory-definition-in-solrconfig
doesn't offer much help.
Thanks
Steven
On Mon, Feb 15, 2021 at 11:09 AM Shawn Heisey wrote:
> On 2/15/2021 6:52 AM,
, 2021 at 7:17 PM Shawn Heisey wrote:
> On 2/14/2021 9:00 AM, Steven White wrote:
> > It looks like I'm misusing SolrJ API SolrInputDocument.addField() thus I
> > need clarification.
> >
> > Here is an example of what I have in my code:
> >
> > SolrInpu
Hi everyone,
It looks like I'm misusing the SolrJ API SolrInputDocument.addField(), so I
need clarification.
Here is an example of what I have in my code:
SolrInputDocument doc = new SolrInputDocument();
doc.addField("MyFieldOne", "some data");
doc.addField("MyFieldTwo", 100);
The abo
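One behavior worth noting with this API: calling addField() repeatedly with the same field name accumulates values rather than replacing them, and that only works if the field is declared multiValued in the schema. A minimal sketch (field names are placeholders):

```java
import org.apache.solr.common.SolrInputDocument;

public class AddFieldSketch {
    public static void main(String[] args) {
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("MyFieldOne", "some data");
        doc.addField("MyFieldTwo", 100);

        // Calling addField() again with the same name APPENDS a second value;
        // it does not overwrite. The schema field must be multiValued for
        // the update to be accepted at index time.
        doc.addField("MyFieldOne", "more data");
        System.out.println(doc.getFieldValues("MyFieldOne"));
    }
}
```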
Hi everyone,
Is there a SolrJ API that I can use to collect statistics data about Solr
(everything that I see on the dashboard if possible)?
I need to collect data about Solr instances, the same data that I
see on the dashboard such as swap memory, JVM memory, list of cores, info
about ea
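There is no dedicated SolrJ "metrics" class, but the Metrics API can be called through GenericSolrRequest. A sketch, assuming Solr 7+ (the base URL is a placeholder, and the group names come from the Metrics API docs):

```java
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.client.solrj.response.SimpleSolrResponse;
import org.apache.solr.common.params.ModifiableSolrParams;

public class MetricsExample {
    public static void main(String[] args) throws Exception {
        SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build();

        // Ask only for the JVM and core groups; omit "group" to get everything
        ModifiableSolrParams params = new ModifiableSolrParams();
        params.set("group", "jvm,core");

        GenericSolrRequest req =
                new GenericSolrRequest(SolrRequest.METHOD.GET, "/admin/metrics", params);
        SimpleSolrResponse rsp = req.process(client);
        System.out.println(rsp.getResponse());  // NamedList of metric groups
        client.close();
    }
}
```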
Hi everyone,
What is the default value for timeAllowed to make it behave as if it is not
set? Is it "-1" or some other number?
Rather than writing my code to include or not include timeAllowed in the
query parameters, I'd rather have it be part of my query all the time and only
change the value if
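For reference, the reference guide states that a timeAllowed value less than or equal to zero means no time restriction, so the parameter can always be sent and disabled with e.g. -1. A sketch:

```java
import org.apache.solr.client.solrj.SolrQuery;

public class TimeAllowedExample {
    public static void main(String[] args) {
        SolrQuery query = new SolrQuery("*:*");

        // Per the reference guide, values <= 0 disable the time limit,
        // so -1 behaves as if timeAllowed were not set at all.
        int limitMillis = -1;   // or e.g. 5000 to cap the search at 5 seconds
        query.setTimeAllowed(limitMillis);

        System.out.println(query.toString());
    }
}
```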
Hi Erick,
I'm on Solr 8.6.1. I did further debugging into this and just noticed that
my search is not working now either (this is after I changed the request
handler name from "select" to "select_cpsearch"). I have this very basic
test code which I think reveals the issue:
try
{
Hi everyone,
I'm using SolrJ to ping Solr. This used to work just fine until I had
to change the name of my search request handler from the default "select"
to "select_cpsearch", i.e. I now have this:
I looked this up and the suggested solution on the internet is that I need to
add a ping
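For reference, the commonly suggested fix is to register an explicit ping handler in solrconfig.xml, so the ping no longer depends on a handler named "/select". A sketch based on the stock example config; the qt invariant pointing at the renamed handler is an assumption to verify against your version:

```xml
<!-- solrconfig.xml: explicit ping handler -->
<requestHandler name="/admin/ping" class="solr.PingRequestHandler">
  <lst name="invariants">
    <str name="q">*:*</str>
    <!-- assumption: direct the ping's query at the renamed search handler -->
    <str name="qt">/select_cpsearch</str>
  </lst>
  <lst name="defaults">
    <str name="echoParams">all</str>
  </lst>
</requestHandler>
```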
iptUpdateProcessorFactory.html
> >>
> >> Just a word of warning that Stateless URP is using Javascript, which is
> >> getting a bit of a complicated story as underlying JVM is upgraded
> (Oracle
> >> dropped their javascript engine in JDK 14). So if one of t
e fields into one and
> delete the original fields. There's no point in having your index cluttered
> with unused fields, OTOH, it may not be worth the effort just to satisfy my
> sense of aesthetics 😉
>
> On Thu, Sep 17, 2020, 12:59 Steven White wrote:
>
> > Hi Eric,
&
Hi everyone,
I'm trying to figure out when and how I should handle failures that may
occur during indexing. In the sample code below, look at my comment and
let me know what state my index is in when things fail:
SolrClient solrClient = new HttpSolrClient.Builder(url).build();
solrClient.
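On the question of index state after a failure: documents sent with add() are not searchable (or durable in the usual sense) until a commit. A hedged sketch of defensive handling, with placeholder URL and field names:

```java
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;
import java.io.IOException;

public class IndexingFailureExample {
    public static void main(String[] args) throws Exception {
        SolrClient solrClient = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build();
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "doc-1");
        try {
            // If add() throws, the document was not accepted; nothing to undo.
            solrClient.add(doc);
            // If add() succeeded but commit() fails, the document sits in an
            // uncommitted state and is not yet searchable.
            solrClient.commit();
        } catch (SolrServerException | IOException e) {
            // Retrying with the same unique key is safe: re-adding a document
            // with an existing id overwrites it rather than duplicating it.
            System.err.println("Indexing failed, safe to retry: " + e.getMessage());
        } finally {
            solrClient.close();
        }
    }
}
```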
ases performance.
>
> Anyway, just a thought you might want to consider.
>
> Best,
> Erick
>
> > On Sep 16, 2020, at 9:31 PM, Steven White wrote:
> >
> > Hi everyone,
> >
> > I figured it out. It is as simple as creating a List and using
> >
Hi everyone,
I figured it out. It is as simple as creating a List and using
that as the value part for SolrInputDocument.addField() API.
Thanks,
Steven
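The pattern described above, as a minimal sketch (the field name is a placeholder and must be declared multiValued="true" in the schema):

```java
import org.apache.solr.common.SolrInputDocument;
import java.util.Arrays;
import java.util.List;

public class MultiValuedExample {
    public static void main(String[] args) {
        SolrInputDocument doc = new SolrInputDocument();

        // Passing a List as the value populates a multiValued field in one call
        List<String> values = Arrays.asList("alpha", "beta", "gamma");
        doc.addField("MyMultiField", values);

        System.out.println(doc.getFieldValues("MyMultiField"));
    }
}
```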
On Wed, Sep 16, 2020 at 9:13 PM Steven White wrote:
> Hi everyone,
>
> I want to avoid creating a source="OneFieldOfMany&q
Hi everyone,
I want to avoid creating a <copyField> in my schema (there will be over 1000 of
them and maybe more, so managing it will be a pain). Instead, I want to use the
SolrJ API to do what <copyField> does. Any example of how I can do this? If
there is an example online, that would be great.
Thanks in advance.
Ste
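Emulating copyField on the client side amounts to appending each source value to a catch-all field as the document is built. A sketch under those assumptions (field names, including _ALL_FIELDS_, are placeholders; the catch-all field should be multiValued in the schema):

```java
import org.apache.solr.common.SolrInputDocument;
import java.util.LinkedHashMap;
import java.util.Map;

public class ClientSideCopyField {
    public static void main(String[] args) {
        // Source data, e.g. one DB record's columns (illustrative values)
        Map<String, Object> record = new LinkedHashMap<>();
        record.put("Title", "Annual report");
        record.put("Body", "some text");

        SolrInputDocument doc = new SolrInputDocument();
        for (Map.Entry<String, Object> e : record.entrySet()) {
            doc.addField(e.getKey(), e.getValue());
            // emulate <copyField dest="_ALL_FIELDS_"> in client code
            doc.addField("_ALL_FIELDS_", e.getValue());
        }
        System.out.println(doc.getFieldValues("_ALL_FIELDS_"));
    }
}
```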
Hi everyone,
In Solr's schema, I have come across field types that use a different logic
for "index" than for "query". To be clear, I'm talking about this block:
Why would one not want to use the same logic for both and simply use:
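A common reason to split the two is query-time-only synonym expansion: expanding synonyms only at query time keeps the index smaller and lets the synonym file change without re-indexing. A sketch of such a field type (names and file names are illustrative):

```xml
<fieldType name="text_syn" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- applied at query time only -->
    <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" ignoreCase="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```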
IELDS_ field entirely. You can even boost individual fields
> differently
> by default.
>
> And, if you really want _ALL_FIELDS_, you may not need edismax.
>
> Best,
> Erick
>
> > On May 25, 2020, at 1:15 PM, Steven White wrote:
> >
> > Hi everyone,
> >
Hi everyone,
I index my data from the DB into their own fields. I then use copyField to
index the value of all the fields into _ALL_FIELDS_ that I created. In my
edismax handler, I use _ALL_FIELDS_ for “df”. Here is what my edismax config looks like:
explicit
edismax
*:*
AND
Never mind, I found the answer. WordDelimiterFilterFactory is
deprecated and is replaced by WordDelimiterGraphFilterFactory.
Steve
On Sat, May 9, 2020 at 5:22 PM Steven White wrote:
> Hi everyone,
>
> Why I cannot find the filter solr.WordDelimiterFilterFactory at
> https://lucen
Hi everyone,
Why can't I find the filter solr.WordDelimiterFilterFactory at
https://lucene.apache.org/solr/guide/8_5/index.html but it is at
https://cwiki.apache.org/confluence/display/SOLR/AnalyzersTokenizersTokenFilters
?
Thanks
Steve
Hi everyone,
There are multiple copies, each a bit different, of the
files solrconfig.xml and the various schema files. Should I be using
what's under \solr-8.5.1\server\solr\configsets\_default\conf as my
foundation to build on?
Thanks
Steve
>> http://observer.wunderwood.org/ (my blog)
> >>
> >>> On Apr 24, 2020, at 7:34 AM, Erick Erickson
> >> wrote:
> >>>
> >>> +1 to removing stopword filters.
> >>>
> >>>> On Apr 24, 2020, at 10:28 AM, Jan Høydahl
> >
t;>
> >>> 24. apr. 2020 kl. 14:44 skrev David Hastings <
> hastings.recurs...@gmail.com>:
> >>>
> >>> you should never use the stopword filter unless you have a very
> specific
> >>> purpose
> >>>
> >>> On Fri, Apr 24, 2020 a
Hi everyone,
What is, if any, the impact of stopwords on my search ranking quality?
Will my ranking improve if I do not index stopwords?
I'm trying to figure out if I should use the stopword filter or not.
Thanks in advance.
Steve
Hi Chris,
I was able to fix the issue by adding the line "false " to my request handler. Here is what
my request handler looks like:
explicit
edismax
*:*
100
true
CC_UNIQUE_FIELD,CC_FILE_PATH,score
CC_ALL_FIELDS_DATA
xml
false
> what do all of your request params (including defaults) look like?
>
> it's possible you are seeing the effects of edismax's "lowercaseOperators"
> param, which _should_ default to "false" in modern solr, but
> in very old versions it defaulted to "tr
Hi everyone,
My schema is set up to index all words (no stop-words such as "or", "and",
etc. are removed). My default operator is AND. But when I search for "one
or two" (without the quotes as this is not a phrase search) I'm getting
hits on documents that have either "one" or "two". It has the
Hi everyone,
I'm indexing my data into multiple Solr fields, such as A, B, C and I'm
also copying all the data of those fields into a master field such as X.
By default, my "qf" is set to X so anytime a user is searching they are
searching across the data that also exist in fields A, B and C.
In
Hi everyone,
I have 2 files like so:
FA has the letter "i" only 2 times, and the file size is 54,246 bytes
FB has the letter "i" 362 times, and the file size is 9,953 bytes.
When I search on the letter "i", FB is ranked lower, which confuses me
because I was under the impression the occurrences of the ter
an dump all the dynamic fields into a copyField
>
> stored="false" multiValued="true" />
>
>
> Then you can just set
> "qf":"CC_COMP_NAME_ALL"
>
>
> On 7/14/19, 10:42 AM, "Steven White" wrote:
>
>
Hi everyone,
In my schema, I have the following field:
When I index, I create dynamic fields and index into it like so:
doc.addField("CC_COMP_NAME_" + componentName.toUpperCase(),
ccAllFieldsDataValue);
In my query handler, I have this:
{"requestHandler":{"/select_hcl":{
"class":
Thanks David. But is there a SolrJ sample code on how to do this? I need
to see one, or at least the API, so I know how to make the call.
Steven
On Fri, Jul 12, 2019 at 9:42 AM David Hastings
wrote:
> just use a facet on the field should work yes?
>
> On Fri, Jul 12, 2019 at 9:39
Hi everyone,
One of my indexed field is as follows:
It holds the file extension of the files I'm indexing. That is, let us say
I indexed 10 million files; as a result of such indexing, the field
CC_FILE_EXT now holds each file's extension. In my case the unique file
extension list is a
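The facet call suggested in the replies, as a SolrJ sketch (URL and core name are placeholders; CC_FILE_EXT is the field from the message above):

```java
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.FacetField;
import org.apache.solr.client.solrj.response.QueryResponse;

public class FileExtFacet {
    public static void main(String[] args) throws Exception {
        SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build();

        SolrQuery query = new SolrQuery("*:*");
        query.setRows(0);               // only the facet counts are needed
        query.setFacet(true);
        query.addFacetField("CC_FILE_EXT");
        query.setFacetLimit(-1);        // -1 = return every distinct value

        QueryResponse rsp = client.query(query);
        FacetField ext = rsp.getFacetField("CC_FILE_EXT");
        for (FacetField.Count c : ext.getValues()) {
            System.out.println(c.getName() + " => " + c.getCount());
        }
        client.close();
    }
}
```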
Thanks Shawn! That was quick, easy and it works.
Steven
On Tue, Jul 9, 2019 at 6:57 PM Shawn Heisey wrote:
> On 7/9/2019 4:52 PM, Steven White wrote:
> > In this code sample that's part of my overall code, how do I tell Solr
> > dynamically / programmatically to use AND
Hi everyone,
In this code sample that's part of my overall code, how do I tell Solr
dynamically / programmatically to use AND or OR as the default operator?
SolrQuery query = new SolrQuery(queryString);
query.setRequestHandler("/select_test");
response = solrClient.query(query);
SolrDocumentList
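The fix Shawn describes amounts to setting q.op per request; a sketch built on the code above (handler name and query string are from the message, the rest are placeholders):

```java
import org.apache.solr.client.solrj.SolrQuery;

public class DefaultOperatorExample {
    public static void main(String[] args) {
        SolrQuery query = new SolrQuery("cat dog");
        query.setRequestHandler("/select_test");

        // q.op overrides the schema/handler default operator for this request
        query.set("q.op", "AND");   // or "OR"

        System.out.println(query.toString());
    }
}
```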
ht be interested in about “fq” clauses and dates in the
> filter cache:
>
> https://dzone.com/articles/solr-date-math-now-and-filter
>
> Best,
> Erick
>
> > On Jul 5, 2019, at 11:55 AM, Steven White wrote:
> >
> > I need both: point in time and range. In
i, Jul 5, 2019 at 6:51 PM Steven White wrote:
>
> > Thanks Mikhail. I will read those links and switch over to latest Solr.
> >
> > Just to be sure, my schema setup and the way I'm indexing the date data
> are
> > not the issue, right?
> >
> > Steven.
eFieldTest.java
>
> and check
>
> https://lucene.apache.org/solr/guide/8_0/working-with-dates.html#date-range-formatting
> (also see below) for sure.
> Also, I remember lack of strictness in 7,2.1 see
> https://issues.apache.org/jira/browse/LUCENE-8640
>
> On Fri, Jul 5, 2
Hi everyone,
I'm using Solr 7.2.1 but can upgrade if I must.
I setup my schema like so:
And indexed my data like so:
doc.addField("CC_FILE_DATETIME", "2019-02-05T12:04:00Z");;
When I try to search against this field, some searches are working, others
are not. Here are examples
Hi everyone,
Does Solr support federated / distributed search when the schemas of the 2
or more indexes are different?
I have 3 indexes and all three use different schema. However, the unique
key field name is the same across all 3 indexes. I need to search on all 3
indexes and return results as
Hi everyone,
I need to provide a search engine to search source code such as
Java, C++, Perl, Visual Basic, etc.
My questions are:
1) Are there existing open source source code parsers I can use that will
scan a source code and return it as elements such as comments, function
names, class
Hi everyone,
We currently support both Oracle and IBM Java to run Solr, and I'm tasked
with switching over to OpenJDK.
Does anyone use Solr, any version, with OpenJDK? If so, what has been your
experience? Also, what platforms have you used it on?
I run Solr on Windows, Linux, AIX and Solaris and on e
Hi everyone,
I'm about to start a project that requires indexing 36 million records
using Solr 7.2.1. Each record ranges from 500 KB to 0.25 MB, where the
average is 0.1 MB.
Has anyone indexed this number of records? What are the things I should
worry about? And out of curiosity, what is the lar
What about the case when you need to match case, such that the analyzer
does not use LowerCaseFilterFactory? Is there a solution for this?
Steve
On Tue, Mar 27, 2018 at 4:22 AM, RAUNAK AGRAWAL
wrote:
> Hi Peter,
>
> Yes, I am using the stopword file which has *or *in it. Thanks for pointing
Please ignore this. It was a user error. I was pointing to the wrong
analyzer in my app's cfg file.
Steve
On Mon, Mar 26, 2018 at 10:17 AM, Steven White wrote:
> Setting "sow=true" didn't make a difference.
>
> Here is what I'm using now: http://localhos
"query":{
"time":0.0},
"facet":{
"time":0.0},
"facet_module":{
"time":0.0},
"mlt":{
"time":0.0},
"highlight":{
"time"
Hi everyone,
I switched over from Solr 5.2.1 to 7.2.1; other than re-indexing my data, my
schema design remains the same.
The issue I see now is that I'm getting 0 hits on phrase searches. Why?
Here is the query I'm sending that gives me 0 hits:
http://localhost:8983/solr/ccfts/select_test?q=%22cat+do
trix SCM
> >
> > CEVA Logistics / 10751 Deerwood Park Blvd, Suite 200, Jacksonville, FL
> 32256 USA / www.cevalogistics.com
> > T 904.9281448 / F 904.928.1525 / daphne@cevalogistics.com
> >
> > Making business flow
> >
> > -Original Messa
Hi everyone,
There are some good write ups on the internet comparing the two and the one
thing that keeps coming up about Elasticsearch being superior to Solr is
its analytic capability. However, I cannot find what those analytic
capabilities are and why they cannot be done using Solr. Can some
Hi everyone,
I have a design problem that I'm not sure how best to solve, so I figured I'd
share it here and see what ideas others may have.
I have a DB that holds documents (over 1 million and growing). This is
known as the "Public" DB that holds documents visible to all of my end
users.
My applic
Hi everyone,
Is there a way to set up Solr so that the search commands I send it, and the
searches it does, behave similarly to the way "grep" works (i.e. regex)? If not,
how closely can Solr be set up to mimic "grep"?
Thanks in advance.
Steve
rd in
the log. This is bad from a security perspective. Should a security defect
be opened against Solr about this?
Steve
On Mon, Dec 5, 2016 at 10:45 AM, Shawn Heisey wrote:
> On 12/5/2016 8:10 AM, Steven White wrote:
> > Hi everyone,
> >
> > I'm password protecting Solr
Hi everyone,
I'm password protecting Solr using Jetty's realm.properties and noticed
that if the password has "@" Jetty throws an error and thus I cannot access
Solr:
java.lang.IllegalArgumentException: Illegal character in fragment at index
31:
SolrAdminUser:81#Mst#Demo@18
@localhost:8983/solr/d
(sorry if this is a second post; the first one I posted 1 hour ago has yet to
make it to the mailing list!!)
Hi everyone,
Currently, we are on Solr 5.2 and use 1 core and none of the cloud
features. We are planning to upgrade to Solr 6.2 and utilize SolrCloud not
because our data need to scale (si
Hi everyone,
Currently, we are on Solr 5.2 and use 1 core and none of the cloud
features. We are planning to upgrade to Solr 6.2 and utilize SolrCloud not
because our data need to scale (single core with no cloud is doing just
fine on our index of 2 million records and about 15 gb index size) but
Hi everyone,
I'm up to speed on how Solr can be set up to provide high
availability (if one Solr server goes down, the backup one takes over). My
question is how do I make my custom crawler play "nice" with Solr in
this environment.
Let us say I set up Solr with 3 servers so that if on
/do-all-stopword-queries-matter/
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/ (my blog)
>
>
> > On Aug 29, 2016, at 5:39 PM, Steven White wrote:
> >
> > Thanks Shawn. This is the best answer I have seen, much appreciat
Thanks Shawn. This is the best answer I have seen, much appreciated.
A follow-up question: I want to remove stop words from the list, but if I
do, then search quality will degrade (and index size will grow, though less of
an issue). For example, if I remove "a", then if someone searches for "For
a F
reCase="true"/>
>ignoreCase="true"/>
>
>
>protected="protwords.txt"/>
>
>
>
> Regards
> Srinivas Meenavalli
>
> -Original Message-
> From: Steven White [mailto:swhite4...@gmail.com
Hi everyone,
I'm curious, the current "default" stopword list, for English and other
languages, how was it determined? And for English, why is "I" not in the
stopword list?
Thanks in advance.
Steve
Hi everyone
Let's say I search for the word "Olympic" and I get a hit on 10 documents
that have similar content (let us assume the content is at least 80%
identical), how can I have Solr rank them so that the most recently
updated ones get ranked higher? Is this something I have to do at
Hi everyone,
In my environment, I have use cases where I need to fully re-index my
data. This happens because Solr's schema requires changes based on changes
made to my data source, the DB. For example, my DB schema may change so
that it now has a whole new set of fields added or removed (on reco
ts changed to two tokens, "the" and "the}".
>
> So you really have to look more closely at your analysis chain, that's
> pretty much where
> your problems appear to be.
>
> Best,
> Erick
>
> On Tue, Jul 5, 2016 at 4:30 PM, Steven White wrote:
f I click on any of
the listed words, I get a hit, but I get 0 hits when I click on "be".
Thanks.
Steve
On Tue, Jul 5, 2016 at 7:07 PM, Steven White wrote:
> Thanks for the quick reply Erick.
>
> Here is the analyzer I'm using:
>
>positionIncrementGap="100"
> and setting terms.prefix=the
>
> See:
> https://cwiki.apache.org/confluence/display/solr/The+Terms+Component
>
> Best,
> Erick
>
> On Tue, Jul 5, 2016 at 2:34 PM, Steven White wrote:
> > HI Everyone,
> >
> > I'm trying to understand why I get a hi
HI Everyone,
I'm trying to understand why I get a hit when I search for "the}" but not
when I search for "the" (searches are done without the quotes and "the" is
a stopword in my case).
Here is the debugQuery output using "the}":
"debug": {
"rawquerystring": "the}",
"querystring": "the}
use “{!field}[]” in fq, which I read somewhere (I cannot
find it now, even after many Google searches) is far faster and more efficient
than the traditional :[value]
Steve
On Wed, Jun 8, 2016 at 6:46 PM, Shawn Heisey wrote:
> On 6/8/2016 2:28 PM, Steven White wrote:
> > ?q=*&q.op=OR&
4:31 PM, Mikhail Khludnev wrote:
> C'mon Steve, filters fq=& are always intersected.
>
> On Wed, Jun 8, 2016 at 11:28 PM, Steven White
> wrote:
>
> > Hi everyone,
> >
> > I cannot make sense of this so I hope someone here can shed some light.
Hi everyone,
I cannot make sense of this so I hope someone here can shed some light.
The following gives me 0 hits (expected):
?q=*&q.op=OR&fq={!field+f=DateA+op=Intersects}[2020-01-01+TO+2030-01-01]
The following gives me hits (expected):
?q=*&q.op=OR&fq={!field+f=DateB+op=Intersects}[200
Hi everyone,
I'm using "solr.DateRangeField" data type to index my dates data and based
on [1] the format of the dates data is "YYYY-MM-DDThh:mm:ssZ".
In my case, I have no need to search on time, just dates. I started by
indexing my dates data as "2016-06-01" but Solr threw an exception. I the
d something?
Thanks.
Steve
On Thu, Jun 2, 2016 at 11:46 AM, Steven White wrote:
> Hi everyone,
>
> This is two part question about date in Solr.
>
> Question #1:
>
> My understanding is, in order for me to index date types, the date data
> must be formatted and i
Hi everyone,
This is two part question about date in Solr.
Question #1:
My understanding is, in order for me to index date types, the date data
must be formatted and indexed as such:
YYYY-MM-DDThh:mm:ssZ
What if I do not have the time part, should I be indexing it as such and
still get all
> in a lower JVM footprint, fewer GC problems etc.
>
> But the implication is, indeed, that you should use DocValues
> for field you intend to facet and/or sort etc on. If you only search
> it's just wasted space.
>
> Best,
> Erick
>
> On Fri, May 27, 2016 at 6:25
ng that can have performance
> > implications.
> >
> > But I'd certainly add docValues="true" to the definition no matter
> > which you decide on.
> >
> > Best,
> > Erick
> >
> > On Wed, May 25, 2016 at 9:29 AM, Steven White
> > w
Hi everyone,
I will be faceting on data of type integer and I'm wondering if there is any
difference in how I design my schema. I have no need to sort or use range
facet, given this, in terms of Lucene performance and index size, does it
make any difference if I use:
#1:
Or
#2:
(notice how I
Hi everyone,
I'm reading up on Solr's Parallel SQL. I see some good examples but not much
on how to set it up and what the limitations are. My reading on it is that
I can use Parallel SQL to send to Solr SQL syntax to search in Solr, but:
1) Does this mean all of SQL's query statements are support
r-user
> CC:
> Date: 2016/5/23 (週一) 21:14
> Subject: Re: How to use "fq"
>
>
> Try the {!terms} query parser. That should make it work well for you. Let
> us know how it does.
>
>Erik
>
> > On May 23, 2016, at 08:52, Steven White wrote:
> >
>
Hi everyone,
I'm trying to figure out what's the best way for me to use "fq" when the
list of items is large (up to 200, but I have a few cases with up to 1000).
My current usage is like so: &fq=category:(1 OR 2 OR 3 OR 4 ... 200)
When I tested with up to 1000, I hit the "too many boolean clauses"
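The {!terms} query parser suggested in the replies avoids the "too many boolean clauses" limit; the fq value is just the parser prefix plus a comma-separated list, so building it is a string join. A sketch (field name is a placeholder):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class TermsFqExample {
    public static void main(String[] args) {
        // 1000 category ids, as in the worst case described above
        List<String> ids = IntStream.rangeClosed(1, 1000)
                .mapToObj(Integer::toString)
                .collect(Collectors.toList());

        // {!terms} takes a comma-separated value list and is not subject
        // to the maxBooleanClauses limit that long OR'ed clauses hit
        String fq = "{!terms f=category}" + String.join(",", ids);
        System.out.println(fq.substring(0, 30) + "...");
    }
}
```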
Hi folks,
The code I'm maintaining uses the Solr Config API per
https://cwiki.apache.org/confluence/display/solr/Config+API to manage
request handlers. My environment has many request handlers, up to 100 in
a few extreme cases. When that's the case, it means I will issue 100
"delete-requesthandler" f
ell you what that one does.
>
> If you look for solr.StrField in the schema.xml file, you'll get some idea
> of how it's defined. The default setting is for it not to be analyzed.
>
> On Tue, May 3, 2016 at 10:16 AM, Steven White
> wrote:
>
> > Hi Everyon
Hi Everyone,
Are solr.StrField and solr.StringField the same thing?
Thanks in advance!
Steve
d be more time and
> space efficient. Here's some background:
>
> https://lucidworks.com/blog/2016/02/13/solrs-daterangefield-perform/
>
> Date types should be indexed as fully specified strings, as
>
> YYYY-MM-DDThh:mm:ssZ
>
> Best,
> Erick
>
> On Mon, Apr 11, 2
Hi everyone,
I need to index date data into Solr and then use this field for faceted
search. My question is this: the date data in my DB is stored in the
following format "2016-03-29 15:54:35.461":
1) What format I should be indexing this date + time stamp into Solr?
2) What Solr field type I shou
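Solr's date fields expect UTC ISO-8601 ("YYYY-MM-DDThh:mm:ssZ"). A sketch converting the DB format shown above, assuming the DB timestamp is already UTC (if it is in a local zone, convert it first):

```java
import java.time.LocalDateTime;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class DbDateToSolr {
    public static void main(String[] args) {
        String dbValue = "2016-03-29 15:54:35.461";  // the DB's format

        // Parse the DB format, then render it as UTC ISO-8601 for Solr
        // (seconds precision; the milliseconds are dropped here).
        DateTimeFormatter dbFormat = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS");
        LocalDateTime ldt = LocalDateTime.parse(dbValue, dbFormat);
        String solrValue = ldt.atOffset(ZoneOffset.UTC)
                              .format(DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss'Z'"));

        System.out.println(solrValue);  // 2016-03-29T15:54:35Z
    }
}
```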
This is all good stuff. Thank you all for your insight.
Steve
On Mon, Apr 4, 2016 at 6:15 PM, Yonik Seeley wrote:
> On Mon, Apr 4, 2016 at 6:06 PM, Chris Hostetter
> wrote:
> > :
> > : Not sure I understand... _version_ is time based and hence will give
> > : roughly the same accuracy as some
Hi everyone,
When I send Solr the query *:*, the result I get back is sorted by
Lucene's internal DocID, which goes from oldest to most recent (can someone
correct me if I have this wrong?) Given this, the most recently added / updated
document is at the bottom of the list. Is there a way to reverse
Yonik Seeley wrote:
> On Sat, Mar 12, 2016 at 11:00 AM, Steven White
> wrote:
> > Hi folks
> >
> > I need to search for terms in a field that will be AND'ed with user's
> real
> > search terms, such as:
> >
> > user-real-search-terms AND
Hi folks
I need to search for terms in a field that will be AND'ed with user's real
search terms, such as:
user-real-search-terms AND FooField:(a OR b OR c OR d OR e OR ...)
The list of terms in the field FooField can be as large as 1000 items, but
will average around 100.
The list of OR'ed
Got it.
Last question on this topic (maybe): wouldn't a commit at the very end take
too long on 1 billion items? Wouldn't a commit every, let's say, 10,000
items be more efficient?
Steve
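A middle ground often recommended instead of explicit client-side commits every N documents is commitWithin, which lets Solr batch commits itself. A sketch (URL, field names, and the 60-second window are placeholders):

```java
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;
import java.util.ArrayList;
import java.util.List;

public class CommitWithinExample {
    public static void main(String[] args) throws Exception {
        SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build();
        List<SolrInputDocument> batch = new ArrayList<>();
        for (int i = 0; i < 1000; i++) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "doc-" + i);
            batch.add(doc);
            if (batch.size() == 100) {
                // commitWithin = 60000 ms: Solr groups commits itself instead
                // of the client forcing one every N documents
                client.add(batch, 60000);
                batch.clear();
            }
        }
        if (!batch.isEmpty()) {
            client.add(batch, 60000);
        }
        client.commit();   // one final hard commit to make everything durable
        client.close();
    }
}
```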
On Thu, Mar 10, 2016 at 5:44 PM, Shawn Heisey wrote:
> On 3/10/2016 3:29 PM, Steve
9, 2016 at 8:32 PM, Shawn Heisey wrote:
> On 3/9/2016 6:10 PM, Steven White wrote:
> > I'm indexing about 1 billion records (each are small Solr doc, no more
> than
> > 20 bytes each). The logic is basically as follows:
> >
> > while (data-of-1-b
Hi folks,
I'm indexing about 1 billion records (each is a small Solr doc, no more than
20 bytes each). The logic is basically as follows:
while (data-of-1-billion) {
    read-1000-items from DB
    at-100-items send 100 items to Solr, i.e.:
        solrConnection.add(docs);
}
solrConn
Re-posting. Does anyone have any idea about this question? Thanks.
Steve
On Mon, Mar 7, 2016 at 5:15 PM, Steven White wrote:
> Hi folks,
>
> In Solr's solr-8983-console.log I see the following (about 50 in a span of
> 24 hours when index is on going):
>
> WARNING: Co
Hi folks,
In Solr's solr-8983-console.log I see the following (about 50 in a span of
24 hours when index is on going):
WARNING: Couldn't flush user prefs:
java.util.prefs.BackingStoreException: Couldn't get file lock.
What does it mean? Should I worry about it?
What about this one:
118
right?
Steve
On Sat, Mar 5, 2016 at 1:33 AM, Shawn Heisey wrote:
> On 3/4/2016 10:21 PM, Steven White wrote:
> > org.apache.solr.update.processor.LogUpdateProcessor; [test]
> > webapp=/solr path=/update params={wt=xml&version=2.2} {add=[5539783
> > (1527883353280217088)
HI folks,
I am analyzing a performance issue with Solr during indexing. My
simplified pseudo-code looks like so:
while (more-items) {
    for (int i = 0; i < 100; i++) {
        docs.add(doc);
    }
    UpdateResponse resp = solrConn.add(docs, 1); // <== yes, using "1" is bad
Hi,
Where can I learn more about the upcoming Solr 6.0? I understand the
release date cannot be known, but I hope the features and how it differs
from 5.x are known.
Thank you
Steve
the trick.
Each document I'm adding is unique so there is no deletion involved here at
all.
I'm testing this on Windows, so that may be a factor too (the OS is not
releasing file handles?!)
Steve
On Tue, Feb 16, 2016 at 11:57 AM, Shawn Heisey wrote:
> On 2/16/2016 9:37 AM, St
I found the issue: as soon as I restart Solr, the index size goes down.
My index and data size must have been at a borderline where some segments
were not released on my last document commit.
Steve
On Mon, Feb 15, 2016 at 11:09 PM, Shawn Heisey wrote:
> On 2/15/2016 1:12 PM, Steven Wh
I bet one will be
> 2x the other.
>
> Upayavira
>
> On Mon, Feb 15, 2016, at 08:12 PM, Steven White wrote:
> > Hi folks,
> >
> > I'm fixing code that I noticed to have a defect. My expectation was that
> > once I make the fix, the index size will be sma
Hi folks,
I'm fixing code that I noticed has a defect. My expectation was that
once I made the fix, the index size would be smaller, but instead I see it
growing.
Here is the stripped down version of the code to show the issue:
Buggy code #1:
for (String field : fieldsList)
{
doc.add
start:
>
> https://lucidworks.com/blog/2011/12/14/options-to-tune-documents-relevance-in-solr/
>
> Does that work?
>
> Best,
> Erick
>
> On Fri, Feb 12, 2016 at 8:30 AM, Steven White
> wrote:
> > Hi everyone,
> >
> > I'm trying to figure out if
Hi everyone,
I'm trying to figure out if this is possible, if so how do I do it.
I'm indexing records from my database. The Solr doc has 2 basic fields:
the ID and the Data field. I lump the data of each field from the record
into Solr's Data field. At search time, I search on this single fiel
-
> > > From: Erick Erickson [mailto:erickerick...@gmail.com]
> > > Sent: Tuesday, February 09, 2016 10:05 PM
> > > To: solr-user
> > > Subject: Re: How is Tika used with Solr
> > >
> > > My impulse would be to _not_ run Tika in it