On 5/10/2018 7:51 AM, Issei Nishigata wrote:
I create a field called employee_name, and use it as multiValued.
If "Mr.Smith", which is part of the value of the field, is changed to
"Mr.Brown", do I have to create 1 million delete queries
and update queries in the case where "Mr.Smith" appears in 1 m
On 5/10/2018 8:56 AM, Christopher Schultz wrote:
What version of OpenJDK did you happen to run?
I don't remember for sure, but it was probably version 8. All my current
Solr installs are on machines with Oracle Java.
My preference would be for Java 8, but on my Debian Wheezy install
only Ja
On 5/10/2018 7:51 AM, Issei Nishigata wrote:
I am designing a schema.
I calculated the number of necessary fields as a trial, and found that I
need more than 35000.
I do not use all these fields in 1 document.
I use 300 fields per document at maximum, and do not use the remaining 34700
fie
On 5/9/2018 2:37 PM, Piyush Kumar Nayak wrote:
> Same here. "sow" restores the old behavior.
This might be a bug. I'd like someone who has better understanding of
the low-level internals to comment before assuming that it's a bug,
though. Sounds like sow=false (default as of 7.0) might be causin
On 5/9/2018 12:56 PM, root23 wrote:
> Thanks for the explanation shawn. I will look at our autowarming time.
> Looking at your response i am thinking i might be doing few more things
> wrong
> 1. Does a MUST clause with any of the filter queries make any sense, or is
> it automatically implied?
> e.g i
On 5/9/2018 2:38 PM, Andy C wrote:
> Was not quite sure from reading the JIRA why the Zookeeper team felt the
> issue was so critical that they felt the need to pull the release from
> their mirrors.
If somebody upgrades their servers from an earlier 3.4.x release to
3.4.11, then 3.4.11 might be u
On 5/9/2018 1:51 PM, Andy C wrote:
> According to the 7.3 release notes I should be using Zookeeper 3.4.11 with
> Solr 7.3.
The release notes don't make any specific recommendation about the
version of ZK that you should use for your server. The information in
the release notes that mentions 3.4.
On 5/9/2018 1:25 PM, David Hastings wrote:
> https://pastebin.com/0QUseqrN
>
> here is mine for an example with the exact same behavior
Can you try the query in the Analysis tab in the admin UI on both
versions and see which step in the analysis chain is the point at which
the two diverge from ea
On 5/9/2018 1:25 PM, David Hastings wrote:
> https://pastebin.com/0QUseqrN
Can you provide the *full* schema for both versions?
Thanks,
Shawn
On 5/9/2018 12:39 PM, Piyush Kumar Nayak wrote:
> we have recently upgraded from Solr5 to Solr7. I'm running into a change of
> behavior that I cannot fathom.
> For the term "test3" Solr7 splits the numeric and alphabetical components and
> does a simple term search while Solr 5 did a phrase sear
On 5/8/2018 11:36 AM, Kojo wrote:
> If I tag the fq query and I query for a simple word, it works fine too. But
> if I query a multi-word value with a space in the middle, it breaks:
>
> {'q':'*:*', 'fl': '*',
> 'fq':'{!tag=city_colaboration_tag}city_colaboration:"College
> Station"', 'json.facet': '{city_cola
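A likely culprit with a multi-word value is quote escaping: the phrase itself contains double quotes, and those must be escaped when the filter is embedded in a JSON request body. A minimal sketch (assuming the JSON Request API rather than the exact params the poster used), building the body with `json.dumps` so the embedded quotes are escaped automatically:

```python
import json

# The phrase query contains double quotes, so building the body with
# json.dumps (rather than pasting the string by hand) guarantees the
# inner quotes around "College Station" are escaped as \".
body = json.dumps({
    "query": "*:*",
    "filter": ['{!tag=city_colaboration_tag}city_colaboration:"College Station"'],
    "facet": {"city_colaboration": {"type": "terms",
                                    "field": "city_colaboration"}},
})
print(body)
```

The facet definition here is an assumption for illustration; the tag and field name are taken from the message.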
On 5/8/2018 9:58 AM, root23 wrote:
> In the case of an frange query, how do we specify the MUST clause?
Looking at how frange works, I'm pretty sure that all queries with
frange are going to be effectively single-clause. So you don't need to
specify MUST -- it's implied.
> the reason we are using frang
On 5/8/2018 4:02 AM, Alfonso Noriega wrote:
I found Solr 5.5.4 exhibiting some unexpected behavior (at least unexpected
to me) when using the MUST and MUST NOT operators and parentheses for
filtering, and it would be great if someone could confirm whether this is
unexpected or not, and why.
Do you have a
On 5/8/2018 4:32 AM, Aji Viswanadhan wrote:
Did this issue happen due to the size of the index? Are there any
recommendations to prevent it happening in future? Please let me know.
I have no idea why it happened. Running out of disk space could cause
any number of problems. Program operation becomes unpredi
On 5/7/2018 3:50 PM, Atita Arora wrote:
> I noticed the same and hence overruled the idea to use it.
> Further , while exploring the V2 api (as we're currently in Solr 6.6 and
> will soon be on Solr 7.X) ,I came across the shards API which has
> "property.index.version": "1525453818563"
>
> Which i
On 5/7/2018 9:51 AM, manuj singh wrote:
> I am kind of confused about how the must clause (+) behaves with filter queries.
> e.g i have below query:
> q=*:*&fq=+{!frange cost=200 l=NOW-179DAYS u=NOW/DAY+1DAY incl=true
> incu=false}date
>
> So i am filtering documents which are less than 179 days old.
> So
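As a worked illustration (not from the thread) of what those frange bounds evaluate to, here is the filter's date math re-created in Python, under an assumed fixed "NOW":

```python
from datetime import datetime, timedelta

# Re-creating the two date-math bounds from the filter:
#   l = NOW-179DAYS   (inclusive, incl=true)
#   u = NOW/DAY+1DAY  (exclusive, incu=false): round NOW down to the start
#                     of the day, then add one day -> start of tomorrow.
now = datetime(2018, 5, 7, 15, 30)                 # assumed "NOW"
lower = now - timedelta(days=179)                  # NOW-179DAYS
upper = (now.replace(hour=0, minute=0, second=0, microsecond=0)
         + timedelta(days=1))                      # NOW/DAY+1DAY

# A document's date matches the filter when lower <= date < upper.
print(lower)   # 2017-11-09 15:30:00
print(upper)   # 2018-05-08 00:00:00
```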
On 5/7/2018 8:22 AM, Bernd Fehling wrote:
> thanks for asking, I figured it out this morning.
> If setting -Xloggc= the option -XX:+PrintGCTimeStamps will be set
> as default and can't be disabled. It's inside JAVA.
>
> Currently using Solr 6.4.2 with
> Java HotSpot(TM) 64-Bit Server VM (25.121-b13
On 5/7/2018 5:05 PM, Jay Potharaju wrote:
> There are some deletes by query. I have not had any issues with DBQ,
> currently have 5.3 running in production.
Here's the big problem with DBQ. Imagine this sequence of events with
these timestamps:
13:00:00: A commit for change visibility happens.
1
On 5/7/2018 8:45 AM, kumar gaurav wrote:
Hi Shawn
It is solr 7.3 .
On Sun, May 6, 2018 at 1:17 AM, Shawn Heisey wrote:
The error in what you shared is incomplete. Can you find any errors in
solr.log and provide the full error text for any of them that occurred
around the relevant timestamp
On 5/7/2018 8:09 AM, natejasper wrote:
I'm setting up Solr on an internal website for my company and I would like
to know if anyone can recommend an analytics tool so that I can see what
users are searching for. Does the log in Solr give me that information?
Unless the logging configuration is chan
On 5/7/2018 3:27 AM, Vincenzo D'Amore wrote:
So just to understand, why we have this behaviour? Is there anything, a
mail thread or a ticket I could read?
https://issues.apache.org/jira/browse/SOLR-10829?attachmentOrder=desc
Thanks,
Shawn
On 5/6/2018 3:09 PM, Atita Arora wrote:
I am working on developing a utility which lets one monitor the
indexPipeline Status.
The indexing job runs in two forms where either it -
1. Creates a new core OR
2. Runs the delta on existing core.
To put down to simplest form I look into the DB timesta
On 5/5/2018 1:02 PM, kumar gaurav wrote:
I am facing a possible analysis error when indexing "&" (ampersand) in
text_general fields. It works fine if Solr is running in single-node mode,
and also works fine in string fields. The exceptions seem to be coming from
the replicas, I believe.
Anybody please sug
On 5/4/2018 4:29 AM, Aji Viswanadhan wrote:
> Caused by: org.apache.lucene.index.IndexNotFoundException: no segments* file
> found in
> LockValidatingDirectoryWrapper(NRTCachingDirectory(MMapDirectory@D:\Solr8.2\solr-5.5.4\solr-5.5.4\server\solr\collection_web_index\data\index.20180428184955635
> l
On 5/3/2018 9:07 AM, Arturas Mazeika wrote:
> Short question:
>
> How can I systematically explore the solrj functionality/API?
As Erick said, there is not an extensive programming guide. The
javadocs for SolrJ classes are pretty decent, but figuring out precisely
what the response objects actual
On 5/3/2018 12:55 PM, Satya Marivada wrote:
> We have a solr (6.3.0) index which is being re-indexed every night, it
> takes about 6-7 hours for the indexing to complete. During the time of
> re-indexing, the index becomes flaky and would serve inconsistent count of
> documents 70,000 at times and
On 5/2/2018 6:23 PM, Erick Erickson wrote:
> Perhaps this is: SOLR-11660?
That definitely looks like the problem that Michael describes. And it
indicates that restarting Solr instances after restore is a workaround.
The issue also says something that might indicate that collection reload
after r
On 5/2/2018 3:52 PM, Michael B. Klein wrote:
> It works ALMOST perfectly. The restore operation reports success, and if I
> look at the UI, everything looks great in the Cloud graph view. All green,
> one leader and two other active instances per collection.
>
> But once we start updating, we run i
On 5/2/2018 11:45 AM, Patrick Recchia wrote:
> Is there any logging I can turn on to know when a commit happens and/or
> when a segment is flushed?
The normal INFO-level logging that Solr ships with will log all
commits. It probably doesn't log segment flushes unless they happen as
a result of a
On 5/2/2018 1:03 PM, Mike Konikoff wrote:
> Is there a way to configure the DataImportHandler to use bind variables for
> the entity queries? To improve database performance.
Can you clarify where these variables would come from and precisely what
you want to do?
From what I can tell, you're tal
On 5/2/2018 2:56 PM, Weffelmeyer, Stacie wrote:
> Question on faceting. We have a dynamicField that we want to facet
> on. Below is the field and the type of information that field generates.
>
>
>
> cid:image001.png@01D3E22D.DE028870
>
This image is not available. This mailing list will almos
On 5/2/2018 10:58 AM, Greenhorn Techie wrote:
> The current hardware profile for our production cluster is 20 nodes, each
> with 24cores and 256GB memory. Data being indexed is very structured in
> nature and is about 30 columns or so, out of which half of them are
> categorical with a defined list
On 5/1/2018 5:33 PM, Greenhorn Techie wrote:
> Wondering what are the considerations to be aware to arrive at an optimal
> heap size for Solr JVM? Though I did discuss this on the IRC, I am still
> unclear on how Solr uses the JVM heap space. Are there any pointers to
> understand this aspect bette
On 5/2/2018 4:54 AM, Patrick Recchia wrote:
> I'm seeing way too many commits on our solr cluster, and I don't know why.
Are you sure there are commits happening? Do you have logs actually
saying that a commit is occurring? The creation of a new segment does
not necessarily mean a commit happene
On 5/2/2018 4:40 AM, Markus Jelsma wrote:
> One of our collections, that is heavy with tons of TokenFilters using large
> dictionaries, has a lot of trouble dealing with collection reload. I removed
> all custom plugins from solrconfig, dumbed the schema down and removed all
> custom filters and
On 5/2/2018 3:13 AM, Mohan Cheema wrote:
> We are using Solr to index our data. The data contains £ symbol within the
> text and for currency. When data is exported from the source system data
> contains £ symbol, however, when the data is imported into the Solr £ symbol
> is converted to �.
>
>
On 5/1/2018 8:40 AM, THADC wrote:
> I get the following exception:
>
> *Exception writing document id FULL_36265 to the index; possible analysis
> error: Document contains at least one immense term in
> field="gridFacts_tsing" (whose UTF8 encoding is longer than the max length
> 32766), all of whic
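For context, the 32766-byte limit in that error is on a term's UTF-8 encoding, not its character count, so multi-byte characters hit it sooner. A minimal pre-index check (the helper name is made up for illustration):

```python
# Lucene rejects any single indexed term whose UTF-8 encoding exceeds
# 32766 bytes; multi-byte characters reach the limit at a lower
# character count than ASCII does.
MAX_TERM_BYTES = 32766

def is_immense(term: str) -> bool:          # hypothetical helper
    return len(term.encode("utf-8")) > MAX_TERM_BYTES

print(is_immense("x" * 32766))   # False: exactly at the limit
print(is_immense("x" * 32767))   # True: one byte over
print(is_immense("é" * 20000))   # True: 2 bytes per char = 40000 bytes
```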
On 4/30/2018 2:56 PM, Michael Joyner wrote:
> Based on experience, 2x head room is not always enough,
> sometimes not even 3x, if you are optimizing from many segments down
> to 1 segment in a single go.
In all situations a user is likely to encounter in the wild, having
enough extra disk
On 4/30/2018 12:03 PM, Monica Skidmore wrote:
> As we try to set up an external load balancer to go between two clusters,
> though, we still have some questions. We need a way to determine that a node
> is still 'alive' and should be in the load balancer, and we need a way to
> know that a new
On 4/30/2018 9:51 AM, Antony A wrote:
I am running two separate solr clouds. I have 8 shards in each with a total
of 300 million documents. Both the clouds are indexing the document from
the same source/configuration.
I am noticing there is a difference in the size of the collection between
them
On 4/30/2018 4:32 AM, THADC wrote:
First of all, I have a second (unrelated) question on this solr user group.
I hope it is ok to have more than one question being asked at the same time
against this group. Please let me know if not.
Anyway, I have a need to keep our existing solr version 4.7 in
On 4/27/2018 10:48 AM, THADC wrote:
I am new to ZooKeeper. I created my zoo.cfg and attempted starting an
instance (DOS shell). My command is:
.\bin\zkServer.cmd start zoo.cfg
and getting following error:
"C:\Users\thclotworthy\DevBase\zookeeper\zookeeper-3.4.10\bin\..\build\classes;C:\Users\thclot
On 4/26/2018 11:02 AM, THADC wrote:
ok, I am creating myConfigset from the _default configset. So, I am copying
ALL the files under _default/conf to myConfigset/conf. I am only modifying
the schema.xml, and then I will re-upload to ZK.
Question: Do I need to modify any of the other copied files
On 4/26/2018 10:30 AM, THADC wrote:
Shawn, thanks for the reply. The issue is that I need to modify the
schema.xml file to add my customizations. Are you saying I cannot manually
access the config set to modify the schema file? If not, how do I modify it?
Make the change in a copy of the config
On 4/26/2018 10:16 AM, THADC wrote:
I am pretty certain I created it because when I execute the request to list
all configsets:
http://localhost:8983/solr/admin/configs?action=LIST
, I see my configset in the response:
{
"responseHeader":{
"status":0,
"QTime":1},
"configSets":["_default",
"myS
On 4/26/2018 7:31 AM, Johannes Brucher wrote:
Maybe I have found a more accurate example constellation to reproduce
the error.
By default the .system-collection is created with 1 shard and 1 replica.
None of your screenshots have made it to the list. Attachments are
almost always stripped
On 4/25/2018 4:12 AM, rameshkjes wrote:
Actually I am trying to approach this problem from another way.
I am taking user input from the GUI, which is the directory of the dataset,
and saving that path in a properties file. Since I am using Maven, I am able
to access that path in my pom file using properties
On 4/25/2018 4:02 AM, Lee Carroll wrote:
*We don't recommend using solr-cell for production indexing.*
Ok. Are the reasons for:
Performance. I think we have rather modest index requirement (1000 a day...
on a busy day)
Security. The index workflow is, upload files to public facing server w
On 4/23/2018 11:56 PM, Papa Pappu wrote:
> I've written down my query over stack-overflow. Here is the link for that :
> https://stackoverflow.com/questions/49993681/preventing-solr-cache-flush-when-commiting
>
> In short, I am facing troubles maintaining my solr caches when commits
> happen and th
On 4/24/2018 10:26 AM, Lee Carroll wrote:
> Does the solr cell contrib give access to the files raw content along with
> the extracted metadata?
That's not usually the kind of information you want to have in a Solr
index. Most of the time, there will be an entry in the Solr index that
tells the
On 4/24/2018 1:53 PM, Markus Jelsma wrote:
> I don't see stack traces for most WARNs, for example the checksum
> warning on recovery (other thread), or the Trie* deprecations.
I just tried it on 7.3.0. Added a line to CoreContainer.java to log an
exception at warn when Solr is starting:
log
On 4/24/2018 12:36 PM, Markus Jelsma wrote:
> I should be more precise, i said the stack traces of WARN are not shown, only
> the messages are visible. The 'low disk space' line was hidden in the stack
> trace of the WARN, as you can see in the pasted example, thus invisible in
> the GUI with de
On 4/24/2018 9:46 AM, Markus Jelsma wrote:
> Disk space was WARN level. It seems only stack traces of ERROR level messages
> are visible via the GUI, and that is where the 'No space left' was hiding.
> Without logging in and inspecting the logs manually, you will never notice
> that message.
Th
On 4/24/2018 6:52 AM, Markus Jelsma wrote:
Forget about it, recovery got a java.io.IOException: No space left on device
but it wasn't clear until I inspected the real logs.
The logs in the web admin didn't show the disk space exception, even when I
expand the log line. Maybe that could be chang
On 4/24/2018 2:03 AM, msaunier wrote:
If I access the interface, I have a null pointer exception:
null:java.lang.NullPointerException
at
org.apache.solr.handler.RequestHandlerBase.getVersion(RequestHandlerBase.java:233)
The line of code where this exception occurred uses fundamenta
On 4/24/2018 6:30 AM, Chris Ulicny wrote:
I haven't worked with AWS, but recently we tried to move some of our solr
instances to a cloud in Google's Cloud offering, and it did not go well.
All of our problems ended up stemming from the fact that the I/O is
throttled. Any complicated enough query
On 4/24/2018 8:50 AM, Steven White wrote:
We currently support both Oracle and IBM Java to run Solr and I'm tasked to
switch over to OpenJDK.
Oracle Java is the preferred choice. OpenJDK should work very well,
as long as it's at least version 7. Recent Solr versions require Java
8, so that
On 4/23/2018 11:13 AM, Scott M. wrote:
I recently installed Solr 7.1 and configured it to work with Dovecot for
full-text searching. It works great but after about 2 days of indexing, I've
pressed the 'Optimize' button. At that point it had collected about 17 million
documents and it was takin
On 4/23/2018 8:30 AM, msaunier wrote:
I have added debug:
curl
"http://srv-formation-solr:8983/solr/arguments_test/test_dih?command=full-import&commit=true&debug=true"
500588true1DIH/indexation_events.xml
This is looking like a really nasty error that I cannot understand,
possibly caused by a
On 4/23/2018 6:12 AM, msaunier wrote:
I have a problem with DIH in SolrCloud. I don't understand why, so I need
your help.
Solr 6.6 in Cloud.
##
COMMAND:
curl http://srv-formation-solr:8983/solr/test_dih?command=full-import
RESULT:
Error 404 Not F
On 4/22/2018 6:27 PM, Kelly Rusk wrote:
Thanks for the assistance. The Master Server has a self-signed Cert with its
machine name, and the Slave has a self-signed Cert with its machine name.
They have identical configurations, and I created a keystore per server. Should
I import the self-signe
On 4/22/2018 4:40 PM, Kelly Rusk wrote:
I already have a key store/trust store and my settings are as follows:
set SOLR_SSL_KEY_STORE=etc/solr-ssl.keystore.jks
set SOLR_SSL_KEY_STORE_PASSWORD=secret
set SOLR_SSL_TRUST_STORE=etc/solr-ssl.keystore.jks
set SOLR_SSL_TRUST_STORE_PASSWORD=secret
REM R
On 4/21/2018 10:24 AM, kway wrote:
However, I can't get replication to work when using SSL/HTTPS. It throws IO
Communication errors as it can’t resolve the https connection to a localhost
certificate on the Master. The error is as follows:
Master at: https://mastercomputername:8983/solr/core_ind
On 4/20/2018 6:01 AM, rameshkjes wrote:
Using SolrJ, I am able to access the Solr core. But I still need to go to
the command prompt to execute commands for the Solr instance. Is there a way
to do that?
I saw that you entered the IRC channel previously and asked the same
question, but I got no response fr
On 4/20/2018 5:38 AM, Bernd Fehling wrote:
Thanks Alessandro for the info.
I am currently in the phase to find the right setup with shards,
nodes, replicas and so on.
I have decided to begin with 5 hosts and want to setup 1 collection with 5
shards.
And start with 2 replicas per shard.
But the
On 4/19/2018 12:32 AM, Bernd Fehling wrote:
Would be cool if that would be possible, to change the log level for solr.log
from the Admin UI. Imagine, a running system with problems, you can change
log level and get more logging info into solr.log without restarting the system
and overloading the
On 4/19/2018 6:28 AM, Bernd Fehling wrote:
How would you set up a SolrCloud and why?
shard1   shard2   shard3
 |r1|     |r1|     |r1|
On 4/18/2018 1:12 PM, Wendy2 wrote:
> debugQuery mode indicates that Solr dropped the "A." when parsing the
> query:
> "debug":{
>   "rawquerystring":"\"Ellington, A.\,
>   "querystring":"\"Ellington, A.\,
>
>   "parsedquery":"(+DisjunctionMaxQuery(((en
On 4/18/2018 8:03 AM, Bernd Fehling wrote:
I just tried to change the log level with Solr Admin UI but it
does not change any logging on my running SolrCloud.
It just shows the changes in the Admin UI and the commands in the
request log, but no changes in the level of logging.
Do I have to RELOA
On 4/18/2018 3:45 AM, Kapil Bhardwaj wrote:
After making changes i RELOADED the schema via terminal command and tried
to re-index the schema using solr core admin button.
You can't reindex by clicking a button. Unless it's the same button you
used to do the indexing the first time.
https://
On 4/17/2018 8:54 PM, Aristedes Maniatis wrote:
Is there any difference between using the tools supplied with Solr to
write configuration to Zookeeper or just writing directly to our
Zookeeper cluster?
We have tooling that makes it much easier to write directly to ZK
rather than having to use
On 4/17/2018 8:44 PM, Erick Erickson wrote:
The other possibility is that you have LuceneMatchVersion set to
5-something in solrconfig.xml.
It's my understanding that luceneMatchVersion does NOT affect index
format in any way, that about the only things that pay attention to this
value are a
On 4/17/2018 12:17 PM, Jay Potharaju wrote:
> After digging into the error a bit more ..I see that the error messages
> contain a call to lucenecodec54. I am using version solr 6.6.3. Any ideas
> why is lucene54 being referred here??
The 6.6 version uses index file formats that were last updated i
On 4/17/2018 8:15 AM, Kojo wrote:
> I am trying schemaless mode and it seems to work very nicely, and there is
> no overhead of writing a custom schema for each type of collection that we
> need to index.
> However we are facing a strange problem. Once we have created a collection
> and indexed data o
On 4/16/2018 7:32 PM, gadelkareem wrote:
I cannot complain cuz it actually worked well for me so far but..
I still do not understand if Solr already paginates the results from the
full import, why not do the same for the delta. It is almost the same query:
`select id from t where t.lastmod > ${s
On 4/17/2018 5:35 AM, Alessandro Benedetti wrote:
Apache Lucene/Solr is a big project; is there anywhere on the official
Apache Lucene/Solr website where each committer lists their modules of
interest/expertise?
No, there is no repository like that. Each committer knows what their
own expertise
On 3/6/2018 6:31 AM, sol...@seznam.cz wrote:
> I would like to use the Analytics component. I configured it by
> https://lucene.apache.org/solr/guide/7_2/analytics.html.
> When I try to send query to solr, exception is thrown.
>
> Reason: Server Error. Caused by: java.lang.IllegalAccessError: tried
On 4/15/2018 2:31 PM, Christopher Schultz wrote:
I'd usually call this a "date", but Solr's documentation says that a
"date" is what I would call a timestamp (including time zone).
That is correct. Lucene dates are accurate to the millisecond. They
don't actually handle timezones the way you
On 4/15/2018 2:24 PM, Christopher Schultz wrote:
No, it wouldn't have. It doesn't read any configuration files and
guesses its way through everything. Simply adding HTTPS support
required me to modify the script and manually-specify the URL. That's
why I went through the trouble of explaining so
On 4/15/2018 5:42 AM, Nikolay Khitrin wrote:
Given example is <tokenizer class="solr.SimplePatternSplitTokenizerFactory"
pattern="[ \t\r\n]+"/> but Lucene's RegExp constructor consumes raw
unicode characters instead of \t\r\n form, so correct configuration is
Looks like you're right about that exampl
On 4/15/2018 1:22 AM, Moshe Recanati | KMS wrote:
We’re using SolrCloud as part of our product solution for High
Availability.
During upgrade of a version we need to run full index build on our
Solr data.
What are you upgrading? If it's Solr, you should pause/stop indexing
while you do
On 4/13/2018 1:44 AM, neotorand wrote:
Let's say I have 5 different entities and they each have 10, 20, 30, 40 and
50 attributes (columns) to be indexed/stored.
Now if I store them in a single collection, are there any ways empty spaces
get created?
On the other hand, if I store heterogeneous data items in a
On 4/13/2018 5:07 PM, Tomás Fernández Löbbe wrote:
> Yes... Unfortunately there is no GET API :S Can you open a Jira? Patch
> should be trivial
My suggestion would be to return the list of properties for a collection
when a URL like this is used:
/solr/admin/collections?action=COLLECTIONPROP&name
On 4/13/2018 7:49 AM, Christopher Schultz wrote:
> $ SOLR_POST_OPTS="-Djavax.net.ssl.trustStore=/etc/solr/solr-client.p12
> -Djavax.net.ssl.trustStorePassword=whatevs
> -Djavax.net.ssl.trustStoreType=PKCS12" /usr/local/solr/bin/post -c
> new_core https://localhost:8983/solr/new_core
>
> [time passe
On 4/13/2018 11:34 AM, Jesus Olivan wrote:
> first of all, thanks for your answer.
>
> How you import simultaneously these 6 shards?
I'm not running in SolrCloud mode, so Solr doesn't know that each shard
is part of a larger index. What I'm doing would probably not work in
SolrCloud mode without
On 4/13/2018 11:03 AM, Jesus Olivan wrote:
> thanks for your answer. It happens that when we launched the full import,
> the process didn't finish (we waited for more than 60 hours last time, and
> we cancelled it, because this is not an acceptable time for us). There
> weren't any errors in the solr logfile, simpl
On 4/13/2018 10:11 AM, Jesus Olivan wrote:
> we're trying to launch a full import of approx. 375 million docs from a
> MySQL database to our solrcloud cluster. Until now, this full import
> process takes around 24-27 hours to finish due to a huge import query
> (several group bys, left joins, e
On 4/13/2018 2:54 AM, rameshkjes wrote:
I am using datasource as "FileDataSource", how can i grab that information
from url?
I'm pretty sure that you can provide pretty much ANY information in the
DIH config file from a URL parameter, using the ${dih.request.} syntax.
To answer your othe
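To illustrate that idea outside Solr (a sketch of the substitution concept, not Solr's actual implementation), a request parameter such as `baseDir=/data/files` on the full-import URL can be referenced in the config as `${dih.request.baseDir}`; the parameter name and path below are assumptions for illustration:

```python
import re

# Emulating DIH-style ${dih.request.*} placeholder substitution.
config_template = '<dataSource type="FileDataSource" basePath="${dih.request.baseDir}"/>'
request_params = {"baseDir": "/data/files"}   # e.g. &baseDir=/data/files on the URL

def substitute(template: str, params: dict) -> str:
    # Replace each ${dih.request.<name>} with the matching URL parameter.
    return re.sub(r"\$\{dih\.request\.(\w+)\}",
                  lambda m: params[m.group(1)], template)

print(substitute(config_template, request_params))
# <dataSource type="FileDataSource" basePath="/data/files"/>
```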
On 4/10/2018 9:14 AM, Erick Erickson wrote:
The very first thing I'd do is set up a simple SolrCloud setup and
give it a spin. Unless your indexing load is quite heavy, the added
work the NRT replicas have in SolrCloud isn't a problem so worrying
about that is premature optimization unless you ha
On 4/12/2018 9:48 PM, Antony A wrote:
Thank you. I was trying to create the collection using the API.
Unfortunately the API changes a bit between 6x to 7x.
I posted the API that I used to create the collection and subsequently when
trying to create cores for the same collection.
https://pastebi
On 4/12/2018 6:46 AM, Vincenzo D'Amore wrote:
Thanks Shawn, synonyms right now are just organized in categories with
different meanings. Thanks a lot for the response.
I think this behaviour should be clearly stated in the documentation. Can I
access the Solr guide and add a few notes on this?
Any
On 4/12/2018 4:57 AM, neotorand wrote:
I read from the link you shared that
"Shard cannot contain more than 2 billion documents since Lucene is using
integer for internal IDs."
In which Java class of the Solr implementation repository can this be found?
The 2 billion limit is a *hard* limit from L
On 4/12/2018 5:53 AM, girish.vignesh wrote:
Solr gives old data while faceting from old deleted or updated documents.
For example we are doing faceting on name. name changes frequently for our
application. When we index the document after changing the name we get both
old name and new name in th
On 4/12/2018 3:11 AM, Vincenzo D'Amore wrote:
Hi all, anyone could at least point me some good resource that explain how
to configure filters in fieldType building?
Just to understand whether there exists a document that explains the changes
introduced with SynonymGraphFilter, or in general what kinds of filters
On 4/12/2018 1:46 AM, LOPEZ-CORTES Mariano-ext wrote:
In our search application we have one facet filter (Status)
Each status value corresponds to multiple values in the Solr database
Example : Status : Initialized --> status in solr = 11I, 12I, 13I, 14I, ...
On status value click, search is
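One common way to handle this (a sketch, not necessarily what the poster settled on) is to expand the clicked facet label into an OR of its underlying codes when building the fq; the mapping below uses the codes from the message:

```python
# Hypothetical mapping from a display label to its underlying status codes.
STATUS_CODES = {
    "Initialized": ["11I", "12I", "13I", "14I"],
}

def status_fq(label: str) -> str:
    # Build e.g. status:(11I OR 12I OR 13I OR 14I) for use as an fq.
    codes = STATUS_CODES[label]
    return "status:(" + " OR ".join(codes) + ")"

print(status_fq("Initialized"))   # status:(11I OR 12I OR 13I OR 14I)
```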
On 4/11/2018 9:21 AM, rameshkjes wrote:
> I am doing configuration of solr with the xml and pdf dataset, it works
> perfect. But, I want to modify few things:
> Such as, we can see below, "baseDir" and "filePrefix" is being defined
> manually. I want this to be defined on the runtime.
The way I w
On 4/11/2018 10:52 AM, SOLR4189 wrote:
> How can I change a field value based on a specific condition during indexing?
>
> Indexed Doc in SOLR: { id:1, foo:A }
> Indexing Doc into SOLR: { id:1, foo: B }
>
> foo is single value field.
>
> Let's say I want to replace value of foo from A to B, if A > B, else do
>
On 4/11/2018 9:23 AM, Adam Harrison-Fuller wrote:
> In addition, here is the GC log leading up to the crash.
>
> https://www.dropbox.com/s/sq09d6hbss9b5ov/solr_gc_log_20180410_1009.zip?dl=0
I pulled that log into the http://gceasy.io website. This is a REALLY
nice way to look at GC logs. I do sti
On 4/11/2018 4:15 AM, neotorand wrote:
> I believe heterogeneous data can be indexed to the same collection and I can
> have multiple shards for the index to be partitioned. So what's the need for a
> second collection? Yes, when collection size grows I should look for more
> collections. What exactly that
On 4/11/2018 8:29 AM, Christopher Schultz wrote:
>> Unless you run Solr in cloud mode (which means using zookeeper), the
>> server cannot create the core directories itself. When running in
>> standalone mode, the core directory is created by the bin/solr program
>> doing the "create" -- which was