I have an XML file with structure:
<documents>
<doc>...</doc>
<doc>...</doc>
.
.
</documents>
It is present on disk at some location, let's say C:\documents.xml
Q.1. Using SolrJ, can I index all docs in this file directly, or do I have
to convert each document to a SolrInputDocument by parsing?
environment variables do not work in solr.xml
dataDir must not be specified like this:
<dataDir>${solr.data.dir:%SOLR_DATA%}</dataDir>
it should be like:
<dataDir>${SOLR_DATA}</dataDir>
the part after the colon is the default value
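To illustrate the point about the colon, a solrconfig.xml sketch (the fallback path here is illustrative, not from the thread):

```xml
<!-- JVM system property with a fallback default after the colon:
     uses -Dsolr.data.dir=... if set, otherwise ./solr/data -->
<dataDir>${solr.data.dir:./solr/data}</dataDir>

<!-- system property with no default: startup fails with
     "No system property or default" if -DSOLR_DATA=... is not set -->
<dataDir>${SOLR_DATA}</dataDir>
```

Note these are JVM system properties (passed with -D), not OS environment variables, which is why the %SOLR_DATA% form does not work.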
On Wed, Mar 11, 2009 at 1:53 PM, con convo...@gmail.com wrote:
Hi
String xml = null;//load the file to the xml string
DirectXmlRequest up = new DirectXmlRequest( "/update", xml );
solrServer.request( up );
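The DirectXmlRequest route posts the file contents unchanged, so it only works if the XML is already in Solr's add format; otherwise each <doc> has to be parsed into a SolrInputDocument first. A minimal sketch of that parsing step using only the JDK (it assumes the add-xml layout where each <doc> holds <field name="...">value</field> elements; the returned maps would then be copied into SolrInputDocuments):

```java
import java.io.ByteArrayInputStream;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class DocParser {
    // Parse a <documents><doc>...</doc>...</documents> string into one
    // field map per <doc>. Each map would then be copied into a
    // SolrInputDocument (addField per entry) before SolrServer.add(...).
    public static List<Map<String, String>> parse(String xml) throws Exception {
        org.w3c.dom.Document dom = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        List<Map<String, String>> docs = new ArrayList<Map<String, String>>();
        NodeList docNodes = dom.getElementsByTagName("doc");
        for (int i = 0; i < docNodes.getLength(); i++) {
            Map<String, String> fields = new LinkedHashMap<String, String>();
            NodeList children = docNodes.item(i).getChildNodes();
            for (int j = 0; j < children.getLength(); j++) {
                Node c = children.item(j);
                if (c.getNodeType() == Node.ELEMENT_NODE) {
                    // Solr add-xml carries values as <field name="...">value</field>
                    Element e = (Element) c;
                    fields.put(e.getAttribute("name"), e.getTextContent());
                }
            }
            docs.add(fields);
        }
        return docs;
    }
}
```

If the file is already valid add-xml, skipping this entirely and sending it via DirectXmlRequest as above is the simpler option.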
On Wed, Mar 11, 2009 at 2:19 PM, Ashish P ashish.ping...@gmail.com wrote:
I have an XML file with structure:
<documents>
<doc>...</doc>
<doc>...</doc>
.
Good morning,
I reviewed a Solr Patch-742, which corrects an issue with the data import
process properly ingesting/committing (solr add xml) documents with dynamic
fields.
Is this fix available for Solr 1.3 or is there a known work around?
Cheers,
Wesley
Thanks Noble for the quick reply,
But still it is not working
I changed the data directory accordingly,
<dataDir>${SOLR_DATA}</dataDir>
But this is not working and is giving the following error:
SEVERE: Error in solrconfig.xml:org.apache.solr.common.SolrException: No
system property or default
I added <lockType>single</lockType> in indexDefaults; that made the earlier
error go away, but now I am getting the following error:
Mar 11, 2009 6:12:56 PM org.apache.solr.common.SolrException log
SEVERE: java.io.IOException: Cannot overwrite:
C:\dw-solr\solr\data\index\_1o.fdt
at
Hey there,
I have noticed that once snapshots are deleted, Tomcat keeps holding
references to them. Due to this, disk space is never set free until Tomcat
is restarted. I have realized that doing lsof | grep 'tomcat', the results
look like:
...
java 22015 tomcat 614r REG 253,0 1149569723
On Wed, Mar 11, 2009 at 2:28 PM, Wesley Small wesley.sm...@mtvstaff.comwrote:
Good morning,
I reviewed a Solr Patch-742, which corrects an issue with the data import
process properly ingesting/committing (solr add xml) documents with dynamic
fields.
Is this fix available for Solr 1.3 or is
On Wed, Mar 11, 2009 at 2:55 PM, Shalin Shekhar Mangar
shalinman...@gmail.com wrote:
On Wed, Mar 11, 2009 at 2:28 PM, Wesley Small
wesley.sm...@mtvstaff.comwrote:
Good morning,
I reviewed a Solr Patch-742, which corrects an issue with the data import
process properly ingesting/committing
On Wed, Mar 11, 2009 at 2:47 PM, Marc Sturlese marc.sturl...@gmail.comwrote:
How can I make tomcat free the snapshots. Or even better... why is it
happening?
Is this on the rsync replication or the new java replication in trunk?
--
Regards,
Shalin Shekhar Mangar.
With Solr Cell (ExtractingRequestHandler), which is now built into
trunk and thus an eventual Solr 1.4 release, indexing a directory of
text (or even Word, PDF, etc.) files is mostly 'out of the box'.
It still requires scripting an iteration over all files and sending
them. Here's an
I am using the scripts of Collection Distribution.
The problem is just happening in the master, not in the slaves.
Do you have any clue? I have been fighting this for a few days now...
Thanks in advance
Shalin Shekhar Mangar wrote:
On Wed, Mar 11, 2009 at 2:47 PM, Marc Sturlese
On Mar 11, 2009, at 5:14 AM, con wrote:
But still it is not working
I changed the data directory accordingly,
<dataDir>${SOLR_DATA}</dataDir>
But this is not working and is giving the following error:
SEVERE: Error in
solrconfig.xml:org.apache.solr.common.SolrException: No
system property or
Erm... can it be that you have several processes running on it?
I would start with a cleanup of it all and attempt a simple Solr
addition before the XML.
paul
On 11 March 2009, at 10:16, Ashish P wrote:
I added <lockType>single</lockType> in indexDefaults that made the
error
before go away but now
Thanks Erik
That did the trick for data directory.
I gave -DSOLR_DATA=%SOLR_DATA% in the jboss run.bat and then I am
using this variable in solrconfig.xml. This works fine.
But how can I redirect Solr to a separate lib directory that is outside of
the solr.home?
Is this possible in Solr?
Thanks for the feedback Shalin. I will investigate the backport of this 1.4
fix into 1.3. Do you know of any other subsequent patches related to the
data import and dynamic fields that I should also locate and backport as
well? I just ask in case you happen to have this information handy.
I am
On Mar 11, 2009, at 6:07 AM, con wrote:
But how can I redirect Solr to a separate lib directory that is
outside of
the solr.home
Is this possible in solr 1.3
I don't believe it is possible (but please correct me if I'm wrong).
From SolrResourceLoader:
log.info("Solr home set to '
I attempted a backport of Patch-742 on Solr-1.3. You can see the results
below with Hunk failures.
Is there a specific method to obtain a list of patches that occurred
specific to the data import functionality prior to PATCH-742? I suppose I
would need to ensure that these specific data
I guess you can take the trunk and comment out the contents of
SolrWriter#rollback() and it should work with Solr1.3
On Wed, Mar 11, 2009 at 3:37 PM, Wesley Small wesley.sm...@mtvstaff.com wrote:
Thanks for the feedback Shalin. I will investigate the backport of this 1.4
fix into 1.3. Do
On Wed, Mar 11, 2009 at 4:01 PM, Noble Paul നോബിള് नोब्ळ्
noble.p...@gmail.com wrote:
I guess you can take the trunk and comment out the contents of
SolrWriter#rollback() and it should work with Solr1.3
I agree. Rollback is the only feature which depends on enhancements in
Solr/Lucene
Hmmm, was my mail so weird or my question so stupid ... or is
there simply no one with an answer? Not even a hint? :(
Tobias Dittrich wrote:
Hi all,
I know there are a lot of topics about compound word search already but
I haven't found anything for my specific problem yet. So if this is
Is there anyone who has any idea to solve this issue?
Please give your thoughts.
Regards,
Praveen
PKJ wrote:
Hi Eric,
Thanks for your response.
Yes you are right! I am trying to place POJOs into Solr directly and this is
working fine.
I want to search them based on the object properties,
Solr really isn't organized for tree structures of data. I think you
might do better using a database with a tree structure.
pojo would be a table of pojo's serialized out. And the parent_id
could point to another structure that builds the tree. Can you flesh
out your use case more of
On Mar 11, 2009, at 8:47 AM, Eric Pugh wrote:
Solr really isn't organized for tree structures of data. I think
you might do better using a database with a tree structure.
That's not a very fair statement. Sure, documents in Solr/Lucene are
simply composed of a flat list of fields, but
Ryan,
If we index the documents using CommonsHttpSolrServer and search using
the same, we get the updated results.
That means we can search the latest added document as well, even if it is
not committed to the file system.
So it looks like there is some kind of cache that is used by both index
Solr could still work for you Praveen.
Consider a schema with a field named parentPath that is not tokenized. It
stores the path to the folder containing the current document but does not have
the document's name in it. In your example, this would be
/Repository/Folder1/. The document's name
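A schema.xml sketch of that idea (the field name is the one suggested above; the type definition follows Solr's stock string type and is illustrative):

```xml
<!-- untokenized, so the whole folder path is a single term -->
<fieldType name="string" class="solr.StrField" sortMissingLast="true"/>
<field name="parentPath" type="string" indexed="true" stored="true"/>
```

A query like parentPath:"/Repository/Folder1/" would then return only the documents sitting directly in that folder.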
Good thinking David! :)
Or even a tokenized field... tokenized by path segments, so the
document we're talking about could have a path field with the terms
/Repository and /Repository/Folder1 - that way the queries can be made
simpler (and perhaps faster).
Erik
On Mar 11,
Is it possible to get the search results from the spell corrected word in a
single Solr search query? Like, I search for the word globl and the
correct spelling is global. The query should return results matching
the word global. Would appreciate any ideas.
Thanks.
Karthik
On Wed, Mar 11, 2009 at 7:00 PM, Narayanan, Karthikeyan
karthikeyan.naraya...@gs.com wrote:
Is it possible to get the search results from the spell corrected word in a
single solr search query? Like, I search for the word globl and the
correct spelling is global. The query should return
On Wed, Mar 11, 2009 at 3:07 PM, Marc Sturlese marc.sturl...@gmail.comwrote:
I am using the scripts of Collection Distribution.
The problem is just happening in the master, not in the slaves.
Do you have any clue? I have been fighting this for a few days now...
Thanks in advance
No
Shalin,
Thanks for info...
Thanks.
Karthik
-Original Message-
From: Shalin Shekhar Mangar [mailto:shalinman...@gmail.com]
Sent: Wednesday, March 11, 2009 9:33 AM
To: solr-user@lucene.apache.org
Subject: Re: Solr search
On Wed, Mar 11, 2009 at 6:37 PM, Kulkarni, Ajit Kamalakar
ajkulka...@ptc.com wrote:
If we index the documents using CommonsHttpSolrServer and search using
the same, we get the updated results.
That means we can search the latest added document as well, even if it is
not committed to the file
Thanks for your response Eric.
Actually the scenario is to use the Solr search server as a hierarchical
storage system as well, where we store objects/files in a hierarchy and do
searches on that - something like a DB (for files) + Solr.
Here I am evaluating Solr with a knowledge management system.
Pretty good idea David.
Yeah, I understand that Solr was not built for such purposes. It is a good
search server.
Will incorporate your idea to store the path as a property and search.
Thanks again. Please post if you see any improvements for this requirement
in near future.
Regards,
Praveen
I like the path idea, however, it does not allow you to perform
tantamount joins on elements in the path. I am working on an idea that
would effectively enable you to perform RDF style queries:
FIND a WHERE a.f1 300 AND a.b.f1 3 AND a.c.name == 'foo'
f1 is a field on 'a' where 'b' and 'c'
Thanks for your response Eric.
I appreciate your idea to have the path as a property/field itself. I am
pretty new to Solr and Lucene.
Could you please post some good pointers for learning more on Solr?
Erik Hatcher wrote:
On Mar 11, 2009, at 8:47 AM, Eric Pugh wrote:
Solr really isn't
Hey Shalin,
I am using XFS file system with Debian Linux version 2.6.26-1-amd64.
Tomcat 5.5 server and java 1.6
As I always optimize the index before doing a snapshot (so the hard links
will have the size of the whole index every time they are taken), I will try
to modify snapshooter to create
Thanks for the responses guys!
I looked around the wiki for an example of using DataImportHandler to
iterate over a list of files and read the content into a field and didn't
find anything. I agree it would be useful!
Erik Hatcher wrote:
Using Solr Cell (ExtractingRequestHandler) which is
Hi,
1)
Can I give by default defaultSearchField with multiple field values,
like
<defaultSearchField>text, Tag, Category</defaultSearchField>
Or should I use
<copyField source="Tag" dest="text"/>
<copyField source="Category" dest="text"/>
Thanks,
Kalidoss.m,
On Mar 11, 2009, at 11:14 AM, Kalidoss MM wrote:
1)
Can i give by default defaultSearchField with multiple field values as
like
<defaultSearchField>text, Tag, Category</defaultSearchField>
No, but
Or should i use
<copyField source="Tag" dest="text"/>
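A schema.xml sketch of the copyField route (it assumes a catch-all field named text already exists in the schema, as in the stock example schema):

```xml
<!-- fold both fields into the catch-all field at index time -->
<copyField source="Tag" dest="text"/>
<copyField source="Category" dest="text"/>
<!-- then search the single combined field by default -->
<defaultSearchField>text</defaultSearchField>
```

defaultSearchField takes exactly one field name, which is why the comma-separated form above cannot work.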
On Wed, Mar 11, 2009 at 8:32 PM, KennyN kenneth.n...@sparta.com wrote:
Thanks for the responses guys!
I looked around the wiki for an example of using DataImportHandler to
iterate over a list of files and read the content into a field and didn't
find anything. I agree it would be useful!
On Mar 11, 2009, at 10:39 AM, PKJ wrote:
Could you please post some good pointers for learning more on Solr?
The Solr wiki is quite rich with details. The official Solr tutorial
is a nice quick start, and we expanded a bit on this with our article
and screencast here:
Thanks for the pointers Erik.
Erik Hatcher wrote:
On Mar 11, 2009, at 10:39 AM, PKJ wrote:
Could you please post some good pointers for learning more on Solr?
The Solr wiki is quite rich with details. The official Solr tutorial
is a nice quick start, and we expanded a bit on this
On Wed, Mar 11, 2009 at 5:17 AM, Marc Sturlese marc.sturl...@gmail.com wrote:
I have noticed that once snapshots are deleted, tomcat keeps holding
references to them. Due to this, disk space is never set free until Tomcat
is restarted. I have realized that doing lsof | grep 'tomcat', the
Hey Yonik,
I have realized that the problem is not happening because of the snapshots.
I mean, I do an index and optimize it, modify the database, do a
delta-import and optimize again... After that I do an lsof of the tomcat
user and I see the old index still held by tomcat...
...
I'd suggest, as someone else mentioned, just doing a full clean up of
the index. Sounds like you might have kill -9'd or stopped the process
manually while indexing (that would be the only reason for a left over lock).
- Jon
On Mar 11, 2009, at 5:16 AM, Ashish P wrote:
I added
Are you using the replication feature by any chance?
- Jon
On Mar 10, 2009, at 2:28 PM, Matthew Runo wrote:
We're currently using 1.4 in production right now, using a recent
nightly. It's working fine for us.
Thanks for your time!
Matthew Runo
Software Engineer, Zappos.com
I'm attempting to set up the 1.3 replication feature in Windows via
Cygwin. The 1.4 version looks a little simpler which is why I was
prodding the list about the 1.4 release date.
-Original Message-
From:
solr-user-return-19484-laurent.vauthrin=disney@lucene.apache.org
Yes, we are using the Java replication feature to send our index and
configuration files from our master server to 4 slaves.
Thanks for your time!
Matthew Runo
Software Engineer, Zappos.com
mr...@zappos.com - 702-943-7833
On Mar 11, 2009, at 9:29 AM, Jon Baer wrote:
Are you using the
Sorry, I missed this. We have the same problem.
None of our customers use query syntax, so I have considered making a
full-text query parser. Use the analyzer chain, then convert the result
into a big OR query, then pass it to the rest of Dismax. Shingles and
synonyms should work at query time
: Just as you have an xslt response writer to convert Solr xml response to
: make it compatible with any application, on the input side do you have an
: xslt module that will parse xml documents to solr format before posting them
: to solr indexer. I have gone through dataimporthandler, but it
: In my case, my query of id_s_i_s_nm:(om_B00114162K*) returned nothing
: but query id_s_i_s_nm:om_B00114162K* returned the right result.
:
: What's the difference between using () or not.
parens are used for grouping -- when used after a field name like that,
they mean that you want all of
+1 for this (as it would be an added bonus to do something based on
the log events) ... so in this case if you have that transformer does
it mean it will get events before and after the import? Correct me if
I'm wrong, there are currently (1.4) preImportDeleteQuery and
postImportDeleteQuery
: Hmmm was my mail so weird or my question so stupid ... or is there simply
: noone with an answer? Not even a hint? :(
patience my friend, i've got a backlog of ~500 Lucene related messages in
my INBOX, and i was just reading your original email when this reply came
in.
In general this is
I'm hoping to use Solr version 1.4 but in the meantime I'm trying to get
replication to work in version 1.3. I'm running Tomcat as a Windows
service and have Cygwin installed. I'm trying to get the snapshooter
script to run with the following in my solrconfig.xml:
listener
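The hook being described is Solr's postCommit event listener; the stock 1.3 solrconfig.xml ships a commented-out example of it. A sketch (the paths are illustrative, and under Cygwin the exe must resolve to something the shell can actually run):

```xml
<listener event="postCommit" class="solr.RunExecutableListener">
  <str name="exe">solr/bin/snapshooter</str>
  <str name="dir">.</str>
  <bool name="wait">true</bool>
</listener>
```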
: I think I have detected a bug in admin solr screen. I am using multicore
: with various cores. When I click a core in the admin page and after click
: INFO the info that appears (class, cache info..) it's always of the same
: core (the last one in solrconfig.xml).
: I don't know if there's
SOLR-1062 just logs the details at an entity level.
suggest ways to log other events.
On Wed, Mar 11, 2009 at 10:51 PM, Jon Baer jonb...@gmail.com wrote:
+1 for this (as it would be an added bonus to do something based on the log
events) ... so in this case if you have that transformer does it
we should revive the thread on releasing 1.4 a bit earlier. let us
start trimming down the list of unresolved issues.
On Tue, Mar 10, 2009 at 11:37 PM, Otis Gospodnetic
otis_gospodne...@yahoo.com wrote:
I've been working with the mid-April assumption.
Otis
--
Sematext --
On Tue, Mar 10, 2009 at 12:17 PM, CIF Search cifsea...@gmail.com wrote:
Just as you have an xslt response writer to convert Solr xml response to
make it compatible with any application, on the input side do you have an
xslt module that will parse xml documents to solr format before posting them
On Wed, Mar 11, 2009 at 12:23 PM, Marc Sturlese marc.sturl...@gmail.com wrote:
I have checked the log and it is closing an indexWriter and registering a
new searcher but can't see that the older one is closed:
On a quiet system, you should see the original searcher closed right
after the new
SOLR-1064 ... resolved.
: : I think I have detected a bug in admin solr screen. I am using multicore
: : with various cores. When I click a core in the admin page and after click
: : INFO the info that appears (class, cache info..) it's always of the same
: : core (the last one in
: The question is how would/could I add the final option Sus scrofa domestica
: to the list of synonyms for swine would any of these work or am I totally off
: base here?
:
: 1) swine => hogs,pigs,porcine,Sus scrofa domestica
: 2) swine => hogs,pigs,porcine,Sus\ scrofa\ domestica
swine =>
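For reference, a synonyms.txt sketch of the explicit mapping form (commas separate the alternatives, so the multi-word entry needs no backslash escaping; whether the multi-word phrase actually matches at query time depends on how the SynonymFilter is configured, and query-time multi-word expansion has known limitations):

```
# the term on the left expands to all alternatives on the right
swine => hogs, pigs, porcine, sus scrofa domestica
```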
There are many spatial solutions out there
- R-tree
- Quad-Tree
- SRID with positional proximity like geohash
- Voronoi diagrams
etc..
All have their pros and cons, as do Cartesian grids.
Feel free to contribute; the more there are, the more solutions can be
applied to different problems. I use
Hoss,
Thanks, I am the OP on this question.
I figured it out by just trying it out, and what you're saying seems to
be correct.
Thanks
Chris Hostetter wrote:
: The question is how would/could I add the final option Sus scrofa domestica
: to the list of synonyms for swine would any of these work
Howdy all,
I'm running into a problem where for a search that returns a relatively high
number of results (a few hundred), I'm running into an OutOfMemoryError. I
tried killing all faceting and also not specifying a sort, in hopes that
that would help, but no luck so far. I've read on this list
Yes, I coded a transformer to deal with the data from a MySQL table before
indexing it with DataImportHandler.
I am actually using a nightly build from the middle of January with all the
concurrency bugs of DataImportHandler fixed.
After lots of tracing I think the problem could be in the commit void of
First off: you can't sort on a field where any doc has more than one token
-- that's why sorting on a TextField doesn't work unless you use something
like the KeywordTokenizer.
Second...
: I found out that the reason the strings are not getting sorted is because
: there is no way to pass the
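A common way around the sorting limitation described above, sketched in schema.xml terms (the names follow the alphaOnlySort pattern in the stock example schema and are illustrative here): copy the tokenized field into a single-token companion field and sort on that.

```xml
<!-- single-token type: every document contributes exactly one term -->
<fieldType name="alphaOnlySort" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
<field name="title_sort" type="alphaOnlySort" indexed="true" stored="false"/>
<copyField source="title" dest="title_sort"/>
```

Searches still run against the tokenized title field; only the sort parameter uses title_sort.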
: Subject: indexing multiple schemas Vs extending existing schema
: In-Reply-To: 746bb9e20903091644g6959224am9445c26c76532...@mail.gmail.com
http://people.apache.org/~hossman/#threadhijack
Thread Hijacking on Mailing Lists
When starting a new discussion on a mailing list, please do not reply
: Subject: Solr 1.3; Data Import w/ Dynamic Fields
: In-Reply-To: 5e76b0ad0903110150h3e75bb68pd3603b8da4261...@mail.gmail.com
http://people.apache.org/~hossman/#threadhijack
Thread Hijacking on Mailing Lists
When starting a new discussion on a mailing list, please do not reply to
an existing
Adding ' + jars[j].toString() + ' to Solr classloader
: But how can I redirect Solr to a separate lib directory that is outside of
: the solr.home
:
: Is this possible in solr 1.3
:
: I don't believe it is possible (but please correct
On Wed, Mar 11, 2009 at 6:20 PM, outoftime m...@patch.com wrote:
I've read on this list and elsewhere
that sorting by fields has well-defined memory implications - is that true
for relevance sorting as well?
Relevancy sorting has pretty much no memory overhead.
Note that if you use function
: If the problem is not there, the other thing that comes to my mind is
: lucene2.9-dev... maybe there's a problem closing indexWriter?... obviously
: it's just a thought.
you never answered yonik's question about whether you see any Closing
Searcher messages in your log, also it's useful to know
Yes cleaning up works...
But I am not sure how to avoid this happening again.
-Ashish
jonbaer wrote:
I'd suggest, as someone else mentioned, just doing a full clean up of
the index. Sounds like you might have kill -9'd or stopped the process
manually while indexing (would be the only reason for
I'm attempting to write a Solr query that ensures that if one field has a
particular value, then another field also has a particular value. I've
arrived at this syntax, but it doesn't seem to work correctly.
((myField:superneat AND myOtherField:somethingElse) OR NOT myField:superneat)
either
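One thing worth checking here (a general Lucene/Solr point, not something stated in this thread): a purely negative clause matches nothing on its own, so the NOT branch usually has to be anchored to a match-all query. A sketch of the reworked query:

```
((myField:superneat AND myOtherField:somethingElse) OR (*:* -myField:superneat))
```

The *:* gives the negative clause a concrete document set to subtract from, which is what bare NOT lacks.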