Hi
I have a Lucene index which contains a size field. The size field
contains a long value indexed using NumberTools.longToString(longvalue). I
want Solr to run range queries against this index.
To do so, I added
<field name="size" type="slong" indexed="true" stored="true"/> in the
schema.xml file.
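For a range-searchable long field, the declaration in schema.xml would look something like the following (a minimal sketch; the field name `size` matches the one above, and the rest of the schema is assumed to come from Solr's example config):

```xml
<!-- sortable long: encoded so that lexicographic order matches numeric
     order, which is what makes range queries like size:[1000 TO 5000] work -->
<field name="size" type="slong" indexed="true" stored="true"/>
```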
Hi,
I indexed a document containing a field with field type long. I also
indexed a document with the same field, with field type long, using my
application. I indexed that long field with
NumberTools.longToString(longvalue).
I opened both indexes in Luke. Luke is showing
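What Luke displays for such a field is the encoded form: NumberTools.longToString produces a string whose lexicographic order matches the numeric order of the original longs, which is what makes range queries work. A minimal self-contained sketch of the same idea (an illustration only, not Lucene's actual implementation, which uses radix 36 and an offset to handle negative values):

```java
public class SortableLongDemo {
    // Encode a non-negative long as a fixed-width, zero-padded string so
    // that String ordering matches numeric ordering. Lucene's NumberTools
    // additionally shifts by Long.MIN_VALUE to support negative values.
    static String encode(long value) {
        if (value < 0) {
            throw new IllegalArgumentException("sketch handles non-negative values only");
        }
        return String.format("%019d", value); // Long.MAX_VALUE has 19 digits
    }

    public static void main(String[] args) {
        // Plain decimal strings would sort "10" before "9"; padded ones do not.
        String a = encode(9L);
        String b = encode(10L);
        System.out.println(a);                     // 0000000000000000009
        System.out.println(a.compareTo(b) < 0);    // numeric order is preserved
    }
}
```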
Hello,
How Solr indexed your field depends on how you configured that field in Solr's
schema.xml. That would be the first place to look. While looking at that
file, search it for the word long and you will see how you can configure Solr
to handle fields with values of type long.
Otis
--
Hi Jawahar,
I see you already did configure your long field using the slong type. But have a
look at the comments:
<!-- Numeric field types that manipulate the value into
a string value that isn't human-readable in its internal form,
but with a lexicographic ordering the same as the numeric ordering -->
Hi,
I have already checked its configuration. It is using slong to index long
values. The field types, i.e. sint, slong, sdouble, sfloat, are the types which
make range queries possible. But I want to know whether it is using
NumberTools to index numeric values or something else. Because I
I'm sorry about this guys, but I'm having the strangest path errors
with replication..
SEVERE: java.io.IOException: Cannot run program snapshooter (in
directory /opt/solr/bin): java.io.IOException: error=2, No such
file or directory
but.. cd /opt/solr/bin and the snapshooter file is
I'd like to thank everyone that created and helped bring us Solr.
Newspad is working awesomely.
http://www.newspad.com/
And sorting in 1.2.0 is going to be such a bonus!
Thanks!
Jed
: values. The Field types ie. sint, slong, sdouble, sfloat are the types which
if you look higher up in your schema, you'll see a fieldtypes section
where slong is defined and mapped to some classname; that class will
contain all of the code showing how your data is being indexed.
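for reference, that mapping in the example schema looks something like this (quoting from memory, so treat the exact attributes as approximate for your version):

```xml
<fieldtype name="slong" class="solr.SortableLongField"
           sortMissingLast="true" omitNorms="true"/>
```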
more than
Also...
[EMAIL PROTECTED]:/home/mruno]$ sudo /opt/solr/bin/snapinstaller -M
search1.zappos.com -d /opt/solr/data -S /opt/solr/logs -u tomcat5 -v
scripts.conf: line 15: : command not found
scripts.conf: line 15: : command not found
started by mruno
command: /opt/solr/bin/snapinstaller -M
On 7/18/07, Otis Gospodnetic [EMAIL PROTECTED] wrote:
Why? Too small of a Java heap. :)
Increase the size of the Java heap and lower the maxBufferedDocs number in
solrconfig.xml and then try again.
If it only happens after a lot of docs, it's probably not
maxBufferedDocs, but when a big
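For what it's worth, maxBufferedDocs lives in the indexDefaults section of solrconfig.xml; a fragment for orientation (the value shown is illustrative, not a recommendation):

```xml
<indexDefaults>
  <!-- flush buffered docs to disk after this many; lower = less heap used -->
  <maxBufferedDocs>1000</maxBufferedDocs>
</indexDefaults>
```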
Hi,
I'm currently working on an application which is living in a
clustered server environment. There is a hardware-based load balancer, and
each node in the cluster has a separate install of Solr. The
application code and files are on an NFS mount, along with the
solr/home. The first node has
No need to restart when there has been an index change...
just send a commit message and Solr will open a new searcher.
If there have been schema changes, Solr will need to be restarted.
-Yonik
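That commit message is just an HTTP POST of `<commit/>` to the update handler. A minimal sketch, assuming a stock single-node install (the URL and port are the defaults, and the postCommit helper is illustrative, not Solr API):

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class SolrCommit {
    // The update message Solr expects; POSTing it makes Solr open a new searcher.
    static String commitMessage() {
        return "<commit/>";
    }

    // Sketch of the POST itself; not called from main so the demo runs offline.
    static int postCommit(String updateUrl) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(updateUrl).openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "text/xml");
        OutputStream out = conn.getOutputStream();
        out.write(commitMessage().getBytes("UTF-8"));
        out.close();
        return conn.getResponseCode();
    }

    public static void main(String[] args) {
        // Default Solr 1.x update URL; adjust host/port for your install.
        System.out.println("POST " + commitMessage()
                + " to http://localhost:8983/solr/update");
    }
}
```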
On 7/18/07, Matt Mitchell [EMAIL PROTECTED] wrote:
Hi,
I'm currently working on an application
Hi all,
I just started working with Solr. I also tried the tutorial
and was able to understand the concept. However, when I tried looking
into SolrServlet, it is deprecated... (I'm going to look into the Javadoc as
well) but I'm just looking for the quick answer ... which
Also...
tomcat5  17003  0.0  0.0  3084  1456  ?  S  10:32  0:00 /bin/bash /opt/solr/bin/snapshooter -u tomcat5 -d /opt/solr/data
tomcat5  17008  0.0  0.0  3084  1456  ?  S  10:32  0:00 /bin/bash /opt/solr/bin/snapshooter -u tomcat5 -d /opt/solr/data
tomcat5  17013  0.0
It seems that as soon as I get a commit, snapshooter goes wild.
I have 1107 running instances of snapshooter right now..
++
| Matthew Runo
| Zappos Development
| [EMAIL PROTECTED]
| 702-943-7833
On 18-Jul-07, at 11:47 AM, Yonik Seeley wrote:
On 7/18/07, Otis Gospodnetic [EMAIL PROTECTED] wrote:
Why? Too small of a Java heap. :)
Increase the size of the Java heap and lower the maxBufferedDocs
number in solrconfig.xml and then try again.
If it only happens after a lot of docs, it's
On your slave, you can run snappuller to get the latest snapshot from
master(generated by snapshooter), then run snapinstaller to notify solr
to use the updated index.
-Original Message-
From: Matt Mitchell [mailto:[EMAIL PROTECTED]
Sent: Wednesday, July 18, 2007 12:12 PM
To:
On 18-Jul-07, at 2:58 PM, Yonik Seeley wrote:
On 7/18/07, Mike Klaas [EMAIL PROTECTED] wrote:
Could happen when doDeleting the pending docs too. James: try
sending commit every 500k docs or so.
Hmmm, right... some of the memory usage will be related to the treemap
keeping track of deleted
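The "commit every 500k docs" pattern is just a counter inside the indexing loop; a sketch (the docsPerCommit value and the sendCommit hook are placeholders, not Solr API):

```java
public class BatchCommitter {
    private final int docsPerCommit;
    private int pending = 0;
    private int commits = 0;

    BatchCommitter(int docsPerCommit) {
        this.docsPerCommit = docsPerCommit;
    }

    // Call once per document added; fires a commit every docsPerCommit docs.
    void addDoc() {
        pending++;
        if (pending >= docsPerCommit) {
            sendCommit();
            pending = 0;
        }
    }

    // Placeholder: a real client would POST <commit/> to Solr here.
    void sendCommit() {
        commits++;
    }

    int commitCount() {
        return commits;
    }

    public static void main(String[] args) {
        BatchCommitter bc = new BatchCommitter(500000);
        for (int i = 0; i < 17000000; i++) {
            bc.addDoc();
        }
        System.out.println(bc.commitCount()); // 34 commits for 17M docs at 500k each
    }
}
```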
Correction: I indexed 17M docs, not 1.7M, so the OutOfMemory happened when it
had finished indexing ~11.3M docs.
It is a new index.
I think this may be the reason:
On 7/18/07, Otis Gospodnetic [EMAIL PROTECTED] wrote:
Why? Too small of a Java heap. :)
Increase the size of the Java heap and lower the