Hello Kamaljeet,
it's also mentioned on that page that Solr runs inside a Java servlet
container such as Tomcat, Jetty, or Resin - you have to install one of
those first. I don't know about Resin but Tomcat and Jetty have their
webapps directories right inside of them. Solr Home directory
Yes, I have erased the tlog in replica 2, and it appears that the first
replica's tlog was corrupted because of an ungraceful servlet shutdown.
There was no log for it unfortunately, nor did the zookeeper log record
anything about this. Is there a place I could check in the zookeeper what
Hi All,
I have a query regarding the use of wordDelimiterFilterFactory. My schema
definition for the text field is as follows
<fieldType name="text" class="solr.TextField"
positionIncrementGap="100">
<analyzer>
Hi,
Based on your WhitespaceTokenizerFactory and the
LowerCaseFilterFactory, the words actually indexed are:
speed, post, speedpost
You should get results for: q=Content:speedpost
So either remove the LowerCaseFilterFactory or add the
LowerCaseFilterFactory as a query-time analyzer as
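For reference, a sketch of what a field type with the same filter chain at both index and query time could look like in schema.xml (the attribute values and the catenateWords settings here are illustrative, not taken from the original schema):

```xml
<fieldType name="text" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" catenateWords="1"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" catenateWords="0"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

If only one <analyzer> (with no type attribute) is given, it is used for both index and query time.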
I don't understand the question. CloudSolrServer subclasses SolrServer
which has a
public QueryResponse query(SolrParams params)
Have you tried that?
Best
Erick
On Thu, Aug 15, 2013 at 4:01 AM, Furkan KAMACI furkankam...@gmail.com wrote:
Here is a conversation about it:
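A minimal SolrJ sketch of the query(SolrParams) call mentioned above (the zkHost address and collection name are placeholders; this assumes the Solr 4.x CloudSolrServer API and requires a running SolrCloud cluster, so it is illustrative only):

```java
import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.params.ModifiableSolrParams;

public class QueryExample {
    public static void main(String[] args) throws Exception {
        // zkHost is a placeholder; point it at your ZooKeeper ensemble
        CloudSolrServer server = new CloudSolrServer("localhost:2181");
        server.setDefaultCollection("collection1");

        ModifiableSolrParams params = new ModifiableSolrParams();
        params.set("q", "Content:speedpost");
        params.set("rows", 10);

        // query(SolrParams) is inherited from SolrServer
        QueryResponse rsp = server.query(params);
        System.out.println(rsp.getResults().getNumFound() + " docs found");
        server.shutdown();
    }
}
```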
Hi Aloke,
I am using the same analyzer for indexing as well as querying, so
LowerCaseFilterFactory should work for both, right?
--
View this message in context:
http://lucene.472066.n3.nabble.com/struggling-with-solr-WordDelimiterFilterFactory-tp4085021p4085025.html
Sent from the Solr - User
Hi,
Another Example I found is q=Content:wi-fi doesn't match for documents with
word wifi. I think it is not catenating the query keywords correctly
Way too high :). Hmmm, not much detail there...
bq: filterCache (size=30 initialSize=30
autowarmCount=5),
This is an OOM waiting to happen. Each filterCache entry is a key/value
pair. The key is the fq clause, but the value is a bitmap of all the docs in
the index, i.e.
Not that I know of. If you can index the docs cleverly, you
might be able to form a query that does the trick.
Pseudo-joins might also do the trick. Be aware that these
don't return data from the "from" document, only the "to"
doc. That's gibberish, but see:
http://wiki.apache.org/solr/Join
Best
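To illustrate the join syntax documented on that wiki page (the field names and query here are invented for the example):

```
q={!join from=parent_id to=id}color:blue
```

This matches documents on the "from" side (color:blue), but returns only the documents from the "to" side whose id matches a parent_id.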
Hi,
I have the following jar in jetty/lib/ext:
log4j-1.2.16.jar
slf4j-api-1.6.6.jar
slf4j-log4j12-1.6.6.jar
jcl-over-slf4j-1.6.6.jar
jul-to-slf4j-1.6.6.jar
do you?
Dmitry
On Thu, Aug 8, 2013 at 12:49 PM, Spadez james_will...@hotmail.com wrote:
Apparently this is the error:
2013-08-08
Have you made ANY changes to the analyzer since indexing the data?
Generally, you need to completely reindex your data after any changes to a
field type analyzer.
Otherwise, run the Solr Admin UI Analyzer web page and check the output for
both index and query.
Also, be aware that
Hi,
That's correct, the Analyzers will get applied at both Index and Query time.
In fact I do get results back for speedPost with this field definition.
Regards,
Aloke
On Fri, Aug 16, 2013 at 5:21 PM, vicky desai vicky.de...@germinait.com wrote:
Hi,
Another Example I found is q=Content:wi-fi
Hi,
We are migrating our solr from 3.5 to 4.4, but are stuck on the strategy to
migrate the index.
I read that we can point the new solr 4.4 to the data index from the
previous solr, i.e. 3.5. Is my understanding correct? If this is true, can
we change the schema in 4.4 solr? We have many
/ I read that we can point the new solr 4.4 to the data index from previous
solr i.e. 3.5/
Yes, you can do that. It would be even better if you run an optimize
post-migration; it will re-write the segments.
/If this is true, can we change the schema in 4.4 solr. We have many
un-stored fields
Is any other source trying to write into your index when you try to reload
it? If this was so, then I guess it would have locked up the index. Check
for a write.lock file in your index directory. You can remove that file
manually and then retry it.
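A minimal sketch of that check (the index path below is a placeholder for your actual data directory; removing the lock is only safe when you are sure no other process is writing to the index):

```shell
INDEX_DIR=/var/solr/collection1/data/index   # placeholder path

if [ -f "$INDEX_DIR/write.lock" ]; then
  echo "Stale lock found, removing it"
  rm "$INDEX_DIR/write.lock"
fi
```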
DIH is not at all necessary and yes, SolrJ can be used to add data; the XML
bit I am not too sure about, though.
Try:
http://wiki.apache.org/solr/UpdateXmlMessages
and
http://wiki.apache.org/solr/Solrj
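For example, the XML update message format described on the first page looks like this (the field names are illustrative):

```xml
<add>
  <doc>
    <field name="id">doc1</field>
    <field name="Content">speed post</field>
  </doc>
</add>
```

You can POST such a message to the /update handler, e.g. with curl and the Content-type:text/xml header.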
Hi all.
Using the example setup of solr-4.4.0, I was able to easily feed 23
million documents from ClueWeb09.
Then I tried to split the one shard into two. The size on disk is:
% du -sh collection1
118G    collection1
I started Solr with 8GB for the JVM:
java -Xmx8000m -DzkRun -DnumShards=2
I am very new to Solr. I am looking to index an xml file and search its
contents. Its structure resembles something like this
entry id=REACT_142474 acc=REACT_142474.5
name((1,6)-alpha-glucosyl)poly((1,4)-alpha-glucosyl)glycogenin =>
poly{(1,4)-alpha-glucosyl} glycogenin +
Hey there,
I'm testing a custom similarity which loads data from an external file
located in solr_home/core_name/conf/. I load data from the file into a Map
in the init method of the SimilarityFactory. I would like to reload that Map
every time a commit happens, or every X hours.
To do that I've
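For reference, one way to get a hook on commits is a newSearcher event listener in solrconfig.xml; a sketch, where the listener class is a hypothetical custom implementation, not an existing Solr class:

```xml
<!-- fires whenever a new searcher is opened, i.e. after each commit -->
<listener event="newSearcher" class="com.example.ReloadSimilarityDataListener"/>
```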
Thanks for the reply, no nothing else would be writing to the index, I'm sure
it is a solrconfig setting but not sure which.
On Fri, Aug 16, 2013 at 1:20 PM, Artem Karpenko [via Lucene]
ml-node+s472066n4084995...@n3.nabble.com wrote:
it's also mentioned on that page that Solr runs inside a Java servlet
container such as Tomcat, Jetty, or Resin - you have to install one of
those first.
Ok.
Can you please suggest me
Have you tried it with a smaller number of documents? I haven't been able
to successfully split a shard with 4.4.0 with even a handful of docs.
-Greg
On Fri, Aug 16, 2013 at 7:09 AM, Harald Kirsch harald.kir...@raytion.com wrote:
Hi all.
Using the example setup of solr-4.4.0, I was able to
Hi,
Slightly off topic, but just wondering if you've worked through the
tutorial: https://lucene.apache.org/solr/4_4_0/tutorial.html You can then
use the packaged Jetty servlet container while you get comfortable with
working with solr.
Best of luck
Brendan
On Fri, Aug 16, 2013 at 12:25 PM,
On 8/16/2013 6:43 AM, Kuchekar wrote:
If we do a CSV export from 3.5 solr and then import it into the 4.4
index, we get a problem with copy fields, i.e. the value in the copy field
is computed twice: once from the CSV import and once from solr's internal
computation.
Supplemental reply on
On 8/16/2013 10:14 AM, richardg wrote:
Thanks for the reply, no nothing else would be writing to the index, I'm sure
it is a solrconfig setting but not sure which.
Are you specifying the DirectoryFactory and/or lock type in your
solrconfig.xml, and if so, what are they set to? Is your index
Hello All,
Recently I used the stats component of solr. I can do a group-by and get
stats for each group with the following solr request:
http://localhost/solr/quan/select?q=*:*&stats=true&stats.field=income&rows=0&indent=true&stats.facet=township
In this case, solr will group by township and do stats on
Is there a way to find out if we have a zookeeper quorum? We can ping an
individual zookeeper and see if it is running, but it would be nice to
ping/query one URL and check whether we have a quorum.
-Original Message-
From: Shawn Heisey [mailto:s...@elyograg.org]
Sent: Friday, August 09, 2013
On Fri, Aug 16, 2013 at 10:22 PM, Brendan Grainger [via Lucene]
ml-node+s472066n4085100...@n3.nabble.com wrote:
You can then
use the packaged Jetty servlet container while you get comfortable with
working with solr.
Can I ask why jetty?
--
Kamaljeet Kaur
kamalkaur188.wordpress.com
Assuming you have downloaded solr into a dir called 'solr', then if you look
in the 'example' dir there is a bundled Jetty installation ready to roll for
testing etc. So to answer your question 'why jetty?': have you worked
through the tutorial?
On Fri, Aug 16, 2013 at 2:06 PM, Kamaljeet Kaur
On Fri, Aug 16, 2013 at 11:50 PM, Brendan Grainger [via Lucene]
ml-node+s472066n4085140...@n3.nabble.com wrote:
Have you worked
through the tutorial?
Yes, I'm working through it. But I'm not getting the significance of these
commands. I know it's to give a taste of solr with an example.
But
: I am very new to Solr. I am looking to index an xml file and search its
: contents. Its structure resembles something like this
...
: Is it essential to use the DIH to import this data into Solr? Isn't there
: any simpler way to accomplish the task? Can it be done through SolrJ as I am
On 8/16/2013 11:58 AM, Joshi, Shital wrote:
Is there a way to find if We have a zookeeper quorum? We can ping individual
zookeeper and see if it is running, but it would be nice to ping/query one URL
and check if we have a quorum.
This is a really good question, to which I do not have an
Hi All,
I have a big index of 256 GB. Right now it is on one physical box with 256 GB
of RAM. I am planning to virtualize it as 8 boxes of 32 GB RAM each. Will
MMap still work in this condition?
Vibhor Jaiswal
Hi,
Let's say I have this synonyms entry :
b c => ok
My configuration (index time) :
1. WhitespaceTokenizerFactory
2. WordDelimiterFilterFactory with catenateWords=0
3. SynonymFilterFactory
The input a/b c produces (one line per tokenizer/filter):
0:a/b, 1:c
0:a, 1:b, 2:c
0:a, 1:ok
So
bq: why does it replicate all the index instead of copying just the
newer formed segments
Because there's no guarantee that the segments are identical on the
nodes that make up a shard. The simplest way to conceptualize this
is to consider the autocommit settings on the servers. Let's say
the hard
You might be able to get info from the Zookeeper four letter words.
http://zookeeper.apache.org/doc/r3.1.2/zookeeperAdmin.html#sc_zkCommands
Here is a command to get the status for one of our Zookeeper hosts:
$ echo stat | nc zk-web02.test3.cloud.cheggnet.com 2181
wunder
On Aug 16, 2013, at
I've thought about it, and I have no time to really do a meta-search during
evaluation. What I need to do is to create a single core that contains
both of my data sets, and then describe the architecture that would be
required to do blended results, with liberal estimates.
From the perspective
On 8/16/2013 1:02 PM, vibhoreng04 wrote:
I have a big index of 256 GB .Right now it is on one physical box of 256 GB
RAM . I am planning to virtualize it to the size of 32 GB Ram*8
boxes.Whether the MMap will work regardless in this condition ?
As far as MMap goes, if the operating system you
Hi,
You can MMAP a size bigger than your memory without having any problem.
Part of your file will just not be loaded into RAM, because you don't
access it too often.
If you are short on memory, consider deactivating page Host IO Caching, as
it will only be redundant with your guest OS page
Good stuff.
Here is a more recent version of the same resource, as they have added a few
new commands in the recent releases of zookeeper:
http://zookeeper.apache.org/doc/r3.4.5/zookeeperAdmin.html#sc_zkCommands
From: Walter Underwood
On 8/16/2013 11:58 AM, Joshi, Shital wrote:
Is there a way to find if We have a zookeeper quorum? We can ping individual
zookeeper and see if it is running, but it would be nice to ping/query one URL
and check if we have a quorum.
I filed an issue on this:
The mntr command can give that info if you hit the leader of the zk quorum,
e.g. in the example for that command on the link you can see that it's a
5-member zk ensemble (zk_followers 4) and that all followers are synced
(zk_synced_followers 4).
You would obviously need to query for the zk leader.
Sorry, it looks like you can get the follower/leader status for each node
using just the mntr command - note the zk_server_state values:
echo mntr | nc fookeeper_follower 2181
zk_version 3.4.5-1392090, built on 09/30/2012 17:52 GMT
zk_avg_latency 0
zk_max_latency 45
zk_min_latency 0
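A sketch of checking ensemble health by parsing that mntr output (the sample below is canned output for illustration; in practice you would capture it from your own host):

```shell
# In practice: mntr_output=$(echo mntr | nc <zk-host> 2181)
mntr_output="zk_version 3.4.5-1392090
zk_server_state leader
zk_followers 4
zk_synced_followers 4"

state=$(printf '%s\n' "$mntr_output" | awk '$1 == "zk_server_state" {print $2}')
synced=$(printf '%s\n' "$mntr_output" | awk '$1 == "zk_synced_followers" {print $2}')

if [ "$state" = "leader" ] && [ "$synced" -ge 2 ]; then
  echo "quorum looks healthy"
else
  echo "check your ensemble"
fi
```

zk_synced_followers only appears on the leader, which is why you need to find the leader first.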
Okay, it's hot off the e-presses: my updated book Solr 4.x Deep Dive, Early
Access Release #5 is now available for purchase and download as an e-book
for $9.99 on Lulu.com at:
http://www.lulu.com/shop/jack-krupansky/solr-4x-deep-dive-early-access-release-1/ebook/product-21120181.html
(That