Hello again,
The current situation: after setting the two options so that the cores are
not loaded on startup, and with ramBufferSizeMB=32, Tomcat is stable and
responsive; threads reach a maximum of 60.
Browsing and storing are fast. I should note that I have many cores, each
with a small number of documents.
Unfort
Thanks Shawn for your suggestion.
Earlier, write permission was set only on the *data* directory. Per your
suggestion, after granting write access to the *data/index* folder and all
the files under it, the core gets loaded.
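For anyone hitting the same LockObtainFailedException, the fix amounts to something like the following sketch. /data/index is the path from the error quoted below in this thread; the solr user mentioned in the comment is an assumption about your setup. It is shown on a scratch directory so it runs anywhere:

```shell
INDEX_DIR="$(mktemp -d)/index"     # stand-in for the real /data/index
mkdir -p "$INDEX_DIR"
touch "$INDEX_DIR/write.lock"
chmod -R a-w "$INDEX_DIR"          # simulate the broken state: no write access
# The fix: give the owning user write access to the directory tree and the
# files in it. On the real server you may also need
#   chown -R solr:solr "$INDEX_DIR"
# if the Solr process runs as a different user (an assumption).
chmod -R u+rwX "$INDEX_DIR"
test -w "$INDEX_DIR/write.lock" && echo writable
```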
Regards,
Modassar
On Tue, Apr 15, 2014 at 10:12 AM, Shawn Heisey wrote:
On 4/14/2014 10:25 PM, Modassar Ather wrote:
> Caused by: org.apache.lucene.store.LockObtainFailedException: Lock obtain
> timed out: NativeFSLock@/data/index/write.lock:
> java.io.FileNotFoundException: /data/index/write.lock (Permission
> denied)
This sounds like the user (the one that's running
Hi,
The index is a fresh one (the older index was deleted) and there are no
soft commits on it.
There are other logs available, a part of which is as follows:
Apr 11, 2014 9:34:49 AM org.apache.solr.common.SolrException log
SEVERE: null:org.apache.solr.common.SolrException: Unable to create core:
corena
The use case I keep thinking about is Flume/Morphline replacing
DataImportHandler. So, when I saw morphline shipped with Solr, I tried to
understand whether it is a step towards that.
As it is, I am still not sure I understand why those jars are shipped
with Solr, if it is not actually integrating in
_which_ SolrCache objects? filterCache? result cache? documentCache?
The result cache is about "average size of a query" + ("window size" *
sizeof(int)) for each entry.
The filter cache is about "average size of a filter query" + maxdoc/8.
The document cache is about "average size of the stored fields in bytes" *
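The arithmetic above, as a runnable sketch (all input numbers are invented examples, not measurements from any real index):

```python
# Back-of-envelope cache sizing per the formulas above.
def filter_cache_entry_bytes(avg_fq_bytes: int, maxdoc: int) -> int:
    # each entry: the filter-query string plus a bitset of maxdoc bits
    return avg_fq_bytes + maxdoc // 8

def result_cache_entry_bytes(avg_q_bytes: int, window_size: int) -> int:
    # each entry: the query string plus window_size doc ids (4-byte ints)
    return avg_q_bytes + window_size * 4

maxdoc = 10_000_000
per_filter = filter_cache_entry_bytes(avg_fq_bytes=100, maxdoc=maxdoc)
per_result = result_cache_entry_bytes(avg_q_bytes=100, window_size=50)
print(per_filter)  # 1250100 -- the bitset dominates on a 10M-doc index
print(per_result)  # 300
```

The point the numbers make: on a large index, the filterCache is dominated by the maxdoc/8 bitset per entry, so its RAM use scales with index size times cache size.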
So are you sending the MP3 files to Solr? That's actually generally a
bad practice, it places the load for analyzing all the files on Solr.
Yes, SolrCell makes this possible, and it's great for small data sets.
What I'd actually recommend is that you parse the files on a SolrJ
client using Tika and
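The suggestion above is SolrJ + Tika in Java; as a language-neutral sketch of the same shape in Python, where extract_text() is a stub standing in for the Tika parse step, and the file names and URL in the comment are placeholders:

```python
import json

def extract_text(path: str) -> dict:
    # Stand-in for the Tika parse step (AutoDetectParser in the SolrJ
    # approach): return extracted text/metadata, not raw MP3 bytes.
    return {"id": path, "text": "extracted metadata or transcript"}

def build_update_payload(paths: list) -> str:
    # The client does the heavy parsing; Solr only receives plain
    # documents ready to index.
    return json.dumps([extract_text(p) for p in paths])

payload = build_update_payload(["song1.mp3", "song2.mp3"])
# POST payload to e.g. http://localhost:8983/solr/update/json (placeholder)
print(json.loads(payload)[0]["id"])  # song1.mp3
```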
Hi Mike,
Glad I was able to help. Good note about the PoolingReuseStrategy, I
did not think of that either.
Is there a blog post or a GitHub repository coming with more details
on that? Sounds like something others may benefit from as well.
Regards,
Alex.
P.s. If you don't have your own blog
I lost the original thread; sorry for the new / repeated topic, but I
thought I would follow up to let y'all know that I ended up implementing
Alex's idea of using an UpdateRequestProcessor to apply different analysis
to different fields when doing something analogous to copyFields.
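For anyone searching the archives later, a minimal sketch of how such a processor gets wired into solrconfig.xml. The chain name and the com.example factory class are hypothetical; only the solr.* factories are stock:

```xml
<!-- solrconfig.xml: hypothetical chain and custom-factory names -->
<updateRequestProcessorChain name="copy-with-analysis">
  <processor class="com.example.CopyFieldAnalysisProcessorFactory"/>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>

<requestHandler name="/update" class="solr.UpdateRequestHandler">
  <lst name="defaults">
    <str name="update.chain">copy-with-analysis</str>
  </lst>
</requestHandler>
```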
On 4/14/2014 12:56 PM, Ramkumar R. Aiyengar wrote:
> ant compile / ant -f solr dist / ant test certainly work, I use them with a
> git working copy. You trying something else?
> On 14 Apr 2014 19:36, "Jeff Wartes" wrote:
>
>> I vastly prefer git, but last I checked, (admittedly, some time ago) you
Hi;
It should work with a git clone. I've never faced an issue with it (I have
used a git clone for a long time). What kind of problem do you get?
Thanks;
Furkan KAMACI
On 14 Apr 2014 21:56, "Ramkumar R. Aiyengar" wrote:
> ant compile / ant -f solr dist / ant test certainly work, I use them wi
We'd like to graph the approximate RAM size of our SolrCache instances. Our
first attempt at doing this was to use the Lucene RamUsageEstimator [1].
Unfortunately, this appears to give a bogus result. Every instance of
FastLRUCache was judged to have the same exact size, down to the byte. I
assume
Some update:
I removed the auto warm configurations for the various caches and reduced
the cache sizes. I then issued a call to delete a day's worth of data (800K
documents).
There was no out-of-memory error this time, but some of the nodes went
into recovery mode. Was able to catch some logs this tim
ant compile / ant -f solr dist / ant test certainly work, I use them with a
git working copy. You trying something else?
On 14 Apr 2014 19:36, "Jeff Wartes" wrote:
> I vastly prefer git, but last I checked, (admittedly, some time ago) you
> couldn't build the project from the git clone. Some of t
: we tried another commands to delete the document ID:
:
: 1> For Deletion:
:
: curl http://localhost:8983/solr/update -H 'Content-type:application/json' -d
: '
: [
Your use of square brackets here is triggering the syntax sugar that
lets you add documents as objects w/o needing the "add" k
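For reference, the explicit delete forms, which avoid that square-bracket sugar entirely (doc123 is a placeholder id; the curl shape in the comment matches the one quoted above):

```python
import json

# No top-level [...] array, so the "bare objects = add" sugar never kicks in.
delete_by_id = json.dumps({"delete": {"id": "doc123"}})
delete_by_query = json.dumps({"delete": {"query": "id:doc123"}})

# Sent the same way as in the thread, e.g.:
#   curl http://localhost:8983/solr/update -H 'Content-type:application/json' \
#        -d '{"delete": {"id": "doc123"}}'
print(delete_by_id)
```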
I vastly prefer git, but last I checked (admittedly, some time ago), you
couldn't build the project from the git clone. Some of the build scripts
assumed that svn commands would work.
On 4/12/14, 3:56 PM, "Furkan KAMACI" wrote:
>Hi Amon;
>
>There has been a conversation about it at dev list:
>h
Yes, that is our approach. We did try deleting a day's worth of data at a
time, and that resulted in OOM as well.
Thanks
Vinay
On 14 April 2014 00:27, Furkan KAMACI wrote:
> Hi;
>
> I mean you can divide the range (i.e. one week at each delete instead of
> one month) and try to check whether y
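The batching described above can be sketched like this (the "timestamp" field name and the dates are invented examples):

```python
from datetime import date, timedelta

def weekly_ranges(start: date, end: date, days: int = 7):
    """Split [start, end) into sub-ranges of at most `days` days."""
    cur = start
    while cur < end:
        nxt = min(cur + timedelta(days=days), end)
        yield cur, nxt
        cur = nxt

# One delete-by-query per week instead of one huge month-wide delete.
queries = [
    f"timestamp:[{a.isoformat()}T00:00:00Z TO {b.isoformat()}T00:00:00Z}}"
    for a, b in weekly_ranges(date(2014, 3, 1), date(2014, 4, 1))
]
print(len(queries))  # 5 chunks: four full weeks plus the remaining 3 days
```

The exclusive upper bound (`}`) keeps adjacent chunks from deleting the same boundary instant twice.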
On 4/14/2014 7:43 AM, Shawn Heisey wrote:
> When results are being determined for the response, it sounds like the
> schema is NOT consulted -- I think the code simply reads what's in the
> Lucene index and applies the "fl" parameter to decide which fields are
> returned. The index doesn't change w
On 4/14/2014 7:52 AM, sachin.jain wrote:
> It makes sense now, but I am a little surprised that solr does not convert
> the object into a json form, I have to use a google library to do that.
Currently Solr treats a Map as a request for an Atomic Update. The key
must be add, inc, or set ... if
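A small sketch of the distinction (field names are examples): a nested map is read as an atomic-update instruction, so to store an arbitrary map you must serialize it yourself first, which is presumably what the Google library is doing for the original poster:

```python
import json

# A nested map is an atomic-update instruction in Solr 4.x, so the inner
# key must be add, set, or inc.
atomic = {
    "id": "doc1",
    "price": {"set": 99},     # replace the value
    "tags": {"add": "sale"},  # append to a multivalued field
    "views": {"inc": 1},      # increment a numeric field
}
# To store an arbitrary map instead, serialize it into a string field:
profile_field = json.dumps({"city": "Pune", "zip": "411001"})
print(json.dumps([atomic]))
```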
Thanks Eric
It makes sense now, but I am a little surprised that Solr does not convert
the object into JSON form; I have to use a Google library to do that.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Can-not-save-maps-in-solr-tp4130890p4131023.html
Sent from the So
On 4/14/2014 5:50 AM, Gurfan wrote:
> We have a setup of SolrCloud 4.6. The fields Stored value is true.
> Now I want to delete a field from indexed document. Is there any way from
> which we can delete the field??
> Field which we are trying to delete(extracted from schema.xml):
>
> omitNorms="
Currently all Solr morphline use cases I’m aware of run in processes outside of
the Solr JVM, e.g. in Flume, in MapReduce, in HBase Lily Indexer, etc. These
ingestion processes generate Solr documents for Solr updates. Running in
external processes is done to improve scalability, reliability, fl
Hi All,
I have implemented a sponsored search where I have to elevate a particular
document for a specific query text.
To achieve this I have made the following changes (Solr version: 4.7.1):
1) Changes in solrConfig.xml
string
elevate.xml
explicit
elevator
2)adde
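In case the stripped snippet above is hard to read, the stock QueryElevationComponent wiring from the Solr 4.x example solrconfig.xml (adapt names and paths to your setup) looks like:

```xml
<searchComponent name="elevator" class="solr.QueryElevationComponent">
  <str name="queryFieldType">string</str>
  <str name="config-file">elevate.xml</str>
</searchComponent>

<requestHandler name="/elevate" class="solr.SearchHandler" startup="lazy">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
  </lst>
  <arr name="last-components">
    <str>elevator</str>
  </arr>
</requestHandler>
```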
Hi,
I have updated my Solr instance from 4.5.1 to 4.7.1.
Now my Solr query is failing some tests.
Query: q=*:*&fq=(title:((T&E)))?debug=true
Before the update:
rawquerystring: *:*
querystring: *:*
parsedquery: MatchAllDocsQuery(*:*)
parsedquery_toString: *:*
QParser: LuceneQParser
filter_queries: (title:((T&E)))
parsed_filter_queries: +title:t&e +title:t +title:e
...
After the update:
rawquerystring: *:*
querystring: *:*
parsedquery: Mat
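One thing worth ruling out first (an assumption on my part, independent of the 4.5.1 to 4.7.1 change): a literal & inside a parameter value must be percent-encoded, or the servlet container splits the parameter at the &. In Python terms:

```python
from urllib.parse import urlencode

# The & in (title:((T&E))) must become %26, or "E)))" is parsed as a
# separate (bogus) parameter.
params = {"q": "*:*", "fq": "(title:((T&E)))", "debug": "true"}
encoded = urlencode(params)
print(encoded)  # the fq value contains T%26E, not a raw &
```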
"Aliases are meant for read operations can refer to one or more real
collections".
So should I go with the approach of creating a collection for per day's
data and aliasing a collection with all these collection names?
So instead of trying to route the documents to a shard should I send to a
spec
Hi,
We have a SolrCloud 4.6 setup. The field's stored value is true.
Now I want to delete a field from the indexed documents. Is there any way
we can delete the field?
Field which we are trying to delete (extracted from schema.xml):
We commented out this field's (SField2) entry from the schema.
Hello,
I saw that 4.7.1 has morphline and hadoop contribution libraries, but
I can't figure out the degree to which they are useful to _Solr_
users. I found one hadoop example in the readme that does some sort of
injection into Solr. Is that the only use case supported?
I thought that maybe there is
Would collection aliasing be a relevant feature here (a different
approach):
http://blog.cloudera.com/blog/2013/10/collection-aliasing-near-real-time-search-for-really-big-data/
Regards,
Alex.
Personal website: http://www.outerthoughts.com/
Current project: http://www.solr-start.com/ - Acceler
Thanks Dmitry,
Yes, I'm on a *nix OS. Yes, a *soft-link* was mentioned by one of my
sys-admin friends as well. :-)
I was just trying to find out from the community whether there would be a
way from Solr itself.
Thanks for the suggestion. Probably a soft-link is the way to go now :-).
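For the archives, the soft-link amounts to something like this (paths are placeholders; shown in a scratch directory so it is runnable anywhere):

```shell
# Point a core's data directory at a shared/alternate location without
# moving it (do this while Solr is stopped).
BASE="$(mktemp -d)"                            # stand-in for the real paths
mkdir -p "$BASE/shared_data/index"
ln -s "$BASE/shared_data" "$BASE/core1_data"   # core's dataDir -> shared location
readlink "$BASE/core1_data"                    # prints the shared path
```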
--
View this message in c
Hello Experts,
I want to index my documents in a way that all documents for a day are
stored in a single shard.
I am planning to have shards for each day e.g. shard1_01_01_2010,
shard1_02_01_2010 ...
And while hashing, the documents of 01/01/2010 should go to
shard1_01_01_2010.
This way I can qu
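A tiny sketch of the naming scheme described above, mirroring the shard1_DD_MM_YYYY pattern from the post (the alias idea discussed elsewhere in this thread is noted in a comment):

```python
from datetime import date

def shard_for(day: date, prefix: str = "shard1") -> str:
    # Mirrors the shard1_DD_MM_YYYY naming from the post.
    return f"{prefix}_{day.strftime('%d_%m_%Y')}"

print(shard_for(date(2010, 1, 1)))  # shard1_01_01_2010
# With one collection (or one implicit-router shard) per day, a read
# alias can then cover the whole set for querying.
```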
Hi;
I mean you can divide the range (e.g. one week per delete instead of one
month) and try to check whether you still get an OOM or not.
Thanks;
Furkan KAMACI
2014-04-14 7:09 GMT+03:00 Vinay Pothnis :
> Aman,
> Yes - Will do!
>
> Furkan,
> How do you mean by 'bulk delete'?
>
> -Thanks
> V