The tag and the rest of the configuration are missing from your data-config file.
On Tue, Jan 6, 2009 at 12:50 PM, The Flight Captain
wrote:
>
> I am having trouble setting up an Oracle datasource. Can anyone help me
> connect to the datasource?
>
> My solrconfig.xml:
>
> ...
> class="org.apache.solr.hand
I am having trouble setting up an Oracle datasource. Can anyone help me
connect to the datasource?
My solrconfig.xml:
...
data-config.xml
...
My data-config.xml
I have placed the oracle driver on the classpath of JBoss.
I am getting the following errors i
The driver can be put directly into the WEB-INF/lib of the Solr web
app, or it can be put into the ${solr.home}/lib dir.
Or, if something is really screwed up, you can try the old-fashioned way
of putting your driver jar into JAVA_HOME/lib/ext.
--Noble
On Tue, Jan 6, 2009 at 7:05 AM, Performance wrote
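A quick sanity check for either driver question: if a class-for-name lookup fails when run with the same classpath as Solr, the jar is not where the web app can see it. A minimal sketch; the Oracle class name is the usual one from the ojdbc jar, so substitute your vendor's class (e.g. for DB2) as needed.

    public class DriverCheck {
        public static void main(String[] args) {
            try {
                // Assumed driver class name; use your vendor's actual class.
                Class.forName("oracle.jdbc.driver.OracleDriver");
                System.out.println("JDBC driver found on the classpath");
            } catch (ClassNotFoundException e) {
                System.out.println("JDBC driver missing: " + e);
            }
        }
    }

Note that a standalone run only proves the jar is on your shell classpath; inside the Solr web app the relevant classloader is the one that scans WEB-INF/lib and ${solr.home}/lib.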
Hi
Thanks for the concern, Grant
My issue is resolved.
The problem was that the spell checker was not working after changing the accuracy
or the number of suggestions in the solrconfig.xml file.
The solution is:
We have to add "&spellcheck.build=true" to the command so that it regenerates the
spellcheck index every time we change those settings.
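For SolrJ users, a minimal sketch of the same request; the server URL and query strings are assumptions, and spellcheck.build=true is the parameter form quoted later in this thread.

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

    public class SpellcheckBuild {
        public static void main(String[] args) throws Exception {
            CommonsHttpSolrServer server =
                new CommonsHttpSolrServer("http://localhost:8983/solr");
            SolrQuery q = new SolrQuery("*:*");
            q.set("spellcheck", true);
            q.set("spellcheck.q", "flavor");
            q.set("spellcheck.build", true); // rebuild after config changes
            server.query(q);
        }
    }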
I have been following this tutorial but I can't seem to get past an error
related to not being able to load the DB2 driver. The user has all the
right config to load the JDBC driver and Squirrel works fine. Do I need to
update any path within Solr?
muxa wrote:
>
> Looked through the tutorial
Can you describe what "not working" means? You're not getting
suggestions, or you're getting exceptions? Is there any error in your log?
If you add &debugQuery=true to your query, does it show that the
Spell component was run? (I think it should)
Do your logs show the Spell Checker being i
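A minimal SolrJ sketch of that check; the URL and query term are assumptions, and whether the debug section lists the spellcheck component is exactly the open question above.

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

    public class DebugQueryCheck {
        public static void main(String[] args) throws Exception {
            CommonsHttpSolrServer server =
                new CommonsHttpSolrServer("http://localhost:8983/solr");
            SolrQuery q = new SolrQuery("flavor");
            q.set("spellcheck", true);
            q.set("debugQuery", true); // inspect the response's debug section
            System.out.println(server.query(q));
        }
    }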
On Jan 5, 2009, at 3:59 PM, Yevgeniy Belman wrote:
Through trial and error I got this line to narrow the facet:
query.add("fq", "manu:apple");
but what does this line do? I didn't notice any result difference.
query.addFacetQuery("manu:apple");
addFacetQuery is equivalent to using &facet.query,
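A hedged SolrJ sketch of the difference (server URL assumed): fq narrows the documents returned, while facet.query leaves the result set alone and only adds a count to the response, which is why no difference showed up in the results.

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class FqVsFacetQuery {
        public static void main(String[] args) throws Exception {
            CommonsHttpSolrServer server =
                new CommonsHttpSolrServer("http://localhost:8983/solr");
            SolrQuery q = new SolrQuery("ipod");
            q.addFilterQuery("manu:apple"); // fq: restricts the docs returned
            q.setFacet(true);
            q.addFacetQuery("manu:apple");  // facet.query: only reports a count
            QueryResponse resp = server.query(q);
            System.out.println("docs: " + resp.getResults().getNumFound());
            System.out.println("facet counts: " + resp.getFacetQuery());
        }
    }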
I am playing with solrj trying to run through a few scenarios one of which
is to "drill down" into a facet. Here, my facet is "manu". I want to narrow
the search by requesting anything that matches "ipod", and falls into an
"apple" manufacturer facet. I am new to Solr/Lucene, my appologies for basi
The problem with that approach is that, unlike in databases, a commit is an
expensive operation in Lucene right now. It is not very practical to commit
per document, so log replication offers very little.
On Tue, Jan 6, 2009 at 12:07 AM, Jacob Singh wrote:
> Has there been a discussion anywhe
2009/1/5 Grant Ingersoll
> I haven't fully thought it through, but I was thinking that, in the create
> code in the Factory (where it returns the new TokenFilter), you would
> simply check to see if the file is new, and if it is, reload it and recreate
> the SynonymMap, accounting for threading
Hi.
I'm confused by the comment on the UUID type. It says:
/**
 * Generates a UUID if val is either null, empty or "NEW".
 *
 * Otherwise it behaves much like a StrField but checks that the value given
 * is indeed a valid UUID.
 *
 * @param val The value of the field
 * @see org.ap
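Taking that javadoc at face value, indexing the literal value "NEW" should make the field generate a fresh UUID. A minimal SolrJ sketch; the field names and server URL are assumptions.

    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
    import org.apache.solr.common.SolrInputDocument;

    public class UuidNewExample {
        public static void main(String[] args) throws Exception {
            CommonsHttpSolrServer server =
                new CommonsHttpSolrServer("http://localhost:8983/solr");
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "NEW"); // per the javadoc, a UUID is generated
            doc.addField("name", "example");
            server.add(doc);
            server.commit();
        }
    }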
Has there been a discussion anywhere about a "binary log" style
replication scheme (a la MySQL)? Wherein every write request goes to
the master, and the slaves read in a queue of the requests and
update themselves one record at a time instead of wholesale? Or is
this just not worth the devel
The default IndexDeletionPolicy keeps only the last commit
(KeepOnlyLastCommitDeletionPolicy). Files belonging to older commits
are removed. If the files are needed longer for replication, they are
leased. The lease is extended 10 secs at a time. Once all the slaves
have copied, the lease is n
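For context, a rough sketch of the keep-only-last-commit behavior against the Lucene 3.x IndexDeletionPolicy interface; exact signatures vary across Lucene versions, and the replication lease logic described above is not shown.

    import java.util.List;
    import org.apache.lucene.index.IndexCommit;
    import org.apache.lucene.index.IndexDeletionPolicy;

    public class KeepOnlyLastCommitSketch implements IndexDeletionPolicy {
        public void onInit(List<? extends IndexCommit> commits) {
            onCommit(commits);
        }
        public void onCommit(List<? extends IndexCommit> commits) {
            // Commits are ordered oldest first; keep only the newest one.
            for (int i = 0; i < commits.size() - 1; i++) {
                commits.get(i).delete();
            }
        }
    }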
Hi,
I did this. The only option I've found is to use Matt's attached solution.
I suggest just using MultiCore/CoreAdmin though.
Best,
Jacob
On Mon, Jan 5, 2009 at 8:47 AM, gwk wrote:
> Hello,
>
>
> I'm trying to get multiple instances of Solr running with Jetty as per
> the instructions on ht
Hello,
I'm trying to get multiple instances of Solr running with Jetty as per
the instructions on http://wiki.apache.org/solr/SolrJetty; however, I've
run into a snag. According to the page, you set the solr/home parameter
as follows:
    <env-entry>
       <env-entry-name>solr/home</env-entry-name>
       <env-entry-value>*My Solr Home Dir*</env-entry-value>
       <env-entry-type>java.lang.String</env-entry-type>
    </env-entry>
However, as MattKangas m
To get the unique brand names, you are wandering into the facet query
territory that I mentioned.
You could consider a separate index, and that will probably provide the best
performance. Especially if you are hitting it on a per-keystroke basis to
update that auto-complete box. Creating a sep
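A minimal SolrJ sketch of pulling the unique brand names via faceting, as suggested above; the server URL and the manu field are assumptions carried over from the earlier examples.

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
    import org.apache.solr.client.solrj.response.FacetField;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class UniqueBrands {
        public static void main(String[] args) throws Exception {
            CommonsHttpSolrServer server =
                new CommonsHttpSolrServer("http://localhost:8983/solr");
            SolrQuery q = new SolrQuery("*:*");
            q.setRows(0);       // only the facet counts, no documents
            q.setFacet(true);
            q.addFacetField("manu");
            QueryResponse resp = server.query(q);
            for (FacetField.Count c : resp.getFacetField("manu").getValues()) {
                System.out.println(c.getName() + " (" + c.getCount() + ")");
            }
        }
    }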
On Sun, Jan 4, 2009 at 9:47 PM, Mark Miller wrote:
> Hey Brian, I didn't catch what OS you are using on EC2 by the way. I
> thought most UNIX OS's were using memory overcommit - A quick search brings
> up Linux, AIX, and HP-UX, and maybe even OSX?
>
> What are you running over there? EC2, so Linu
Hi
Thanks for your response.
Please find the attached.
1) schema.xml and solrconfig.xml
In the solrconfig.xml file, we are changing the parts below ...
PART 1: [solrconfig.xml excerpt; the XML tags were stripped by the archive, leaving only the values: false, false, 10, explicit, 0.01, statusNa...]
Can you share your configuration, or at least the relevant pieces?
-Grant
On Jan 5, 2009, at 9:24 AM, Navdeep wrote:
Hi all
we are facing an issue with the spell checker on the Solr server. We are
changing the attributes of the solrconfig.xml file given below:
1) Accuracy
2) Number of Suggestions
we
Hi all
we are facing an issue with the spell checker on the Solr server. We are changing
the attributes of the solrconfig.xml file given below:
1) Accuracy
2) Number of Suggestions
We are rebuilding the Solr indexes using "spellcheck.build=true":
URL used for POST_SOLR_URL=
"select?q=*:*&spellcheck.q=flavro
Noble Paul നോബിള് नोब्ळ् wrote:
* SolrReplication does not create snapshots, so you have less cleanup
to do. The script-based replication results in more disk space
consumption (especially if you do frequent commits).
Doesn't SolrReplication effectively take a snapshot by using a custom
Inde
On Jan 5, 2009, at 8:25 AM, Kalidoss MM wrote:
Is it possible to issue the commit to the Solr server from that Java
code itself?
Of course... using CommonsHttpSolrServer#commit
Erik
I have tried the same by issuing the command in a terminal
(/solr/bin/./commit) and it worked.
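A minimal SolrJ sketch of the same commit; the URL is an assumption, and CommonsHttpSolrServer#commit is the method Erik names above.

    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

    public class CommitExample {
        public static void main(String[] args) throws Exception {
            CommonsHttpSolrServer server =
                new CommonsHttpSolrServer("http://localhost:8983/solr");
            server.commit(); // makes pending adds/deletes visible to searchers
        }
    }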
Is it possible to issue the commit to the Solr server from that Java code
itself?
I have tried the same by issuing the command in a terminal
(/solr/bin/./commit) and it worked.
Please let me know: is it possible to do the same in the Java code itself?
kalidoss.m,
On Mon, Jan 5, 2009 at 6:18 PM,
Thanks, I will have a look at my JdbcDataSource. Anyway, it's weird because
using the 1.3 release I don't have that problem...
Shalin Shekhar Mangar wrote:
>
> Yes, initially I figured that we are accidentally re-using a closed data
> source. But Noble has pinned it right. I guess you can try look
I haven't fully thought it through, but I was thinking that, in the
create code in the Factory (where it returns the new TokenFilter),
you would simply check to see if the file is new, and if it is, reload
it and recreate the SynonymMap, accounting for threading issues, of
course, and poss
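A rough sketch of that idea, not the actual Solr factory code: the file name is assumed, the loadSynonymMap helper is hypothetical, and Object stands in for SynonymMap to keep the sketch self-contained.

    import java.io.File;

    public class ReloadingSynonymsSketch {
        private final File synonymsFile = new File("synonyms.txt");
        private long loadedAt = -1L;
        private Object synonymMap; // stands in for Solr's SynonymMap

        // Called from the factory's create() path before building the filter.
        public synchronized Object currentMap() {
            long mtime = synonymsFile.lastModified();
            if (mtime != loadedAt) {         // file is new or has changed
                synonymMap = loadSynonymMap(synonymsFile);
                loadedAt = mtime;
            }
            return synonymMap;
        }

        private Object loadSynonymMap(File f) {
            // Hypothetical: parse the file and build a fresh SynonymMap.
            return new Object();
        }
    }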
On Jan 5, 2009, at 7:33 AM, Kalidoss MM wrote:
We have created a Java EmbeddedSolrServer client. I am able to
add, delete, and update the Solr content. At the same time, I am not able to
search the updated content from the running Solr client (Jetty) web
interface.
My requirement is: All
Yes, initially I figured that we are accidentally re-using a closed data
source. But Noble has pinned it right. I guess you can try looking into your
JDBC driver's documentation for a setting which increases the connection
alive-ness.
On Mon, Jan 5, 2009 at 5:29 PM, Noble Paul നോബിള് नोब्ळ् <
nob
Hi,
We have created a Java EmbeddedSolrServer client. I am able to
add, delete, and update the Solr content. At the same time, I am not able to
search the updated content from the running Solr client (Jetty) web
interface.
My requirement is that all searches need to happen from/by the running web
S
Thanks,
kalidoss.m,
I guess the indexing of a doc is taking too long (maybe because of
the de-dup patch) and the resultset gets closed automatically (timed
out).
--Noble
On Mon, Jan 5, 2009 at 5:14 PM, Marc Sturlese wrote:
>
> Doing this fix I get the same error :(
>
> I am going to try to set up the last nightly b
Doing this fix I get the same error :(
I am going to try to set up the last nightly build... let's see if I have
better luck.
The thing is it stops indexing at around doc number 150,000... and gives me
that MySQL exception error... Without the DeDuplication patch I can index 2
million docs without prob
Yes, I meant the 05/01/2008 build. The fix is a one-line change.
Add the following as the last line of DataConfig.Entity.clearCache():
dataSrc = null;
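For orientation, a hedged sketch of where that line lands; everything except the final assignment is a placeholder, since only the one-line fix is quoted above.

    class EntitySketch {
        Object dataSrc; // stands in for the entity's data source reference

        void clearCache() {
            // ... existing cache-clearing work ...
            dataSrc = null; // the fix: drop the stale (possibly closed)
                            // data source so a fresh one is created next use
        }
    }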
On Mon, Jan 5, 2009 at 4:22 PM, Marc Sturlese wrote:
>
> Shalin, you mean I should test the 05/01/2008 nightly? Maybe it works with
> this one? If the
2009/1/3 Grant Ingersoll
>
> On Jan 2, 2009, at 10:25 AM, Alexander Ramos Jardim wrote:
>
> Grant,
>>
>
>
>> 2. SynonymTokenFilterFactory parses "synonyms.txt" and creates the
>> SynonymTokenFilter instance. If I want the SynonymTokenFilter to reload the
>> synonyms.txt file from time to tim
Shalin, you mean I should test the 05/01/2008 nightly? Maybe it works with
this one? If the fix you did is not really big, can you tell me where in the
source it is and what it is for? (I have been debugging and tracing the
dataimporthandler source a lot and I would like to know what the improvement
is ab
Yeah, it looks like it, but... if I don't use the DeDuplication patch everything
works perfectly. I can create my indexes using full-import and delta-import
without problems. The JdbcDataSource of the nightly is pretty similar to the
1.3 release...
The DeDuplication patch doesn't touch the dataimporthandl
Marc, I've just committed a fix for what may have caused the bug. Can you use
svn trunk (or the next nightly build) and confirm?
On Mon, Jan 5, 2009 at 3:10 PM, Noble Paul നോബിള് नोब्ळ् <
noble.p...@gmail.com> wrote:
> looks like a bug w/ DIH with the recent fixes.
> --Noble
>
> On Mon, Jan 5, 2009
looks like a bug w/ DIH with the recent fixes.
--Noble
On Mon, Jan 5, 2009 at 2:36 PM, Marc Sturlese wrote:
>
> Hey there,
> I was using the Deduplication patch with the Solr 1.3 release and everything was
> working perfectly. Now I upgraded to a nightly build (20th December) to be
> able to use new
Hello,
I'm trying to resolve a problem for which I've seen several posts, but I have
not found any suitable answer.
I need to filter my results according to complex rights management, such that
it can't be part of a field or something like this. So let's say that the only
way to know if a user has ac
Hey there,
I was using the Deduplication patch with the Solr 1.3 release and everything was
working perfectly. Now I upgraded to a nightly build (20th December) to be
able to use the new facet algorithm and other stuff, and DeDuplication is not
working any more. I have followed exactly the same steps to ap