Thanks for replying Erick!
I executed the command jar -tf jarfile.jar and my classes were indeed there.
Regarding the package, I had this line in my Java file:
package org.apache.lucene.analysis.mr;
I removed this line, but it still was not working.
Then I tried creating a small project
Hi,
With big BLOB objects encoded in Base64, has anyone tried a query performance
test with huge data (~M documents, ~500GB) to compare 2 methods:
- store these Base64 BLOB objects directly in the index as a String field
- store the BLOBs separately in a DB, with only a URL field in the index
The second option is better. Storing big BLOB data in the index will increase
the index size and create performance issues.
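For the second option, the index would only hold a pointer to the externally stored BLOB. A minimal schema.xml sketch along those lines (the field name is invented for illustration):

```xml
<!-- Store only a reference to the BLOB, which lives in the DB -->
<field name="blob_url" type="string" indexed="false" stored="true"/>
```

At query time the application resolves blob_url against the DB, so the index stays small.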
-
Grijesh
--
View this message in context:
http://lucene.472066.n3.nabble.com/Inner-index-stock-big-BLOB-or-separate-in-a-DB-tp2166964p2167129.html
Sent from the Solr - User mailing list
Hi Pankaj and to whom it may concern - The crux and the cure have been
identified. This is caused by the Jasper deployer missing from Geronimo
2.1.6; it was available in earlier Geronimo versions.
Just download the following components and manually copy them to the relevant
folders. Done :)
Hi,
I am new to the carrot2 clustering tool. Can anyone guide me on this tool
and how it can be integrated with Solr or Lucene?
Thanks!
Seeking your guidance.
Hi,
when using WordDelimiterFilterFactory in the fieldType definition and
setting termVectors="true" termPositions="true" termOffsets="true" on
the field, Solr gives me the following response for the query request
?q=warmwasserspeicher&version=2.2&indent=on&hl=true
<lst name="highlighting">
  <lst name="id-1">
How about reading the wiki:
http://wiki.apache.org/solr/ClusteringComponent
On Thursday 30 December 2010 13:21:19 Isha Garg wrote:
Hi,
I am new to the carrot2 clustering tool. Can anyone guide me on this tool
and how it can be integrated with Solr or Lucene?
Thanks!
Seeking for your
Hi Erick!
Here is my DIH configuration:
<dataConfig>
  <dataSource name="jdbc" driver="org.postgresql.Driver"
    url="jdbc:postgresql://${dataimporter.request.dbHost}:${dataimporter.request.dbPort}/${dataimporter.request.dbName}"
    user="${dataimporter.request.dbUser}"
What does jar -tf <your jar file here> show you? Are the actual classes in
your jar what you expect?
You're still saying it doesn't work, without providing details that let us
help.
Imagine we're asking you for help. Does your message give enough info to
suggest much?
Best
Erick
On Wed, Dec 29, 2010 at 11:51 PM,
WARNING: DIH isn't my strong suit, I generally prefer doing things
in SolrJ. Mostly I asked for clarification so someone #else# who
actually knows DIH details could chime in...
That said, I'm a bit confused. As I understand it, you shouldn't
be UPDATEing anything in DIH, it's a select where
Erick:
Thanks for the quick response.
I can't use the timestamp for the DIH delta, so I need a custom
field that I update once for each delta-import; that is why
I need to execute an UPDATE in the deltaQuery.
Cheers!
Juan M.
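For context, a delta-import entity along these lines might look like the following sketch in data-config.xml (table, column, and flag names are invented; the ${dataimporter.delta.id} variable is the documented DIH placeholder):

```xml
<entity name="item" pk="id"
        query="SELECT * FROM items"
        deltaQuery="SELECT id FROM items WHERE needs_indexing = true"
        deltaImportQuery="SELECT * FROM items WHERE id = '${dataimporter.delta.id}'"/>
```

The custom-flag approach would then need the UPDATE that clears needs_indexing to actually commit, which is the part that seems to be failing here.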
On Thu, Dec 30, 2010 at 10:07 AM, Erick Erickson
Hi List,
I got a little issue with sorting a FacetQuery.
Currently I am doing something like that in SolrJ:
SolrQuery q = new SolrQuery(myQuery);
q.setFacetQuery("names:thomas"); // want to see the count of thomas's documents
q.setFacetPrefix("short", "th");
I don't know any better example, but the
We have tried all lock types: simple, single, and native. But nothing worked. I
have upgraded to Solr 1.4, and when I used the replication system of 1.4,
it worked fine. Not sure why the scripts are not able to replicate the
index on Linux, but Java-based replication is working.
Thanks for the
Set facet.limit to -1 (globally or for that field). That will return all
the facets, in lexicographical order.
Stephen Duncan Jr
www.stephenduncanjr.com
On Thu, Dec 30, 2010 at 9:04 AM, Em mailformailingli...@yahoo.de wrote:
Hi List,
I got a little issue with sorting a FacetQuery.
No
http://wiki.apache.org/solr/SimpleFacetParameters#facet.sort
On Thursday 30 December 2010 15:42:14 Stephen Duncan Jr wrote:
Set facet.limit to -1 (globally or for that field). That will return all
the facets, in lexicographical order.
Stephen Duncan Jr
www.stephenduncanjr.com
On
Markus is right, it will return them by count.
I think my question could be more general:
How can I set limit, sort, etc. for a Facet Query?
This may sound silly, but are you sure the user you're using has
permissions to do the updates you want? Not sure about Postgres, but I
think some JDBC drivers require that the connection be defined as read-write;
maybe you should try adding readOnly="false" to your JDBC definition.
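If the driver honors it, the readOnly attribute goes on the DIH dataSource element; a sketch (connection details are placeholders):

```xml
<dataSource name="jdbc" driver="org.postgresql.Driver"
            url="jdbc:postgresql://localhost:5432/mydb"
            user="solr" password="secret"
            readOnly="false"/>
```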
Ephraim Ofir
-Original
The SpellCheckComponent in v1.4 does not use fq. All it does is take the
keywords out of the q (or spellcheck.q) parameter and check them against the
entire dictionary. If any keyword is not in the dictionary, it gives you a
list of alternatives. The collate function then takes the query and
Hi all.
I have designed a synchronizer that goes out to various databases,
extracts some data, does some processing, and then uses the
StreamingUpdateSolrServer to send the records to a Solr index. When
everything is up, it works just fine.
Now I'm trying to account for problems, like if
At the end of Markus' link is facet.sort=false, which will return them in
lexicographical order (sometimes called index order).
Best
Erick
On Thu, Dec 30, 2010 at 10:26 AM, Em mailformailingli...@yahoo.de wrote:
Markus is right, it will return them by count.
I think my question could be more
Hi Ephraim! Thanks for the answer!
Actually the user has permissions to make UPDATE queries.
I changed the configuration to
<dataSource name="jdbc" driver="org.postgresql.Driver"
Hi Travis!
I am executing a function in the DB that has two queries: an UPDATE and
a SELECT. I am getting the SELECT results OK, but the UPDATE has
no effect, so it seems that that approach is not working.
Cheers!
Juan M.
On Thu, Dec 30, 2010 at 11:26 AM, Travis Low t...@4centurion.com
Does your function get_deltaimport_items perform the update first and then the
select? Does it make a difference if you change the order? Did you try omitting
the TRANSACTION_SERIALIZABLE part?
Ephraim Ofir
-Original Message-
From: Juan Manuel Alvarez [mailto:naici...@gmail.com]
Sent:
Hi Ephraim! Thanks again for taking the time to help me. Really appreciated =o)
The UPDATE was before the SELECT, but putting it after leads to the
same result, with or without the TRANSACTION_SERIALIZABLE.
Cheers!
Juan M.
2010/12/30 Ephraim Ofir ephra...@icq.com:
Does your function
Yes, I understood.
But what if I DON'T want to return ALL facet fields in index order, but only
2 of 5?
When faceting on fields I could just specify it, but how can I do so with a
FacetQuery without sorting *all* facets the same way?
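As far as I know, the per-field override syntax applies to facet.field parameters (not to facet.query). A request along these lines (field names are invented) sorts and limits just one field while leaving the others alone:

```
facet=true&facet.field=category&facet.field=author
&f.category.facet.sort=false&f.category.facet.limit=-1
```

Here facet.sort=false requests index (lexicographical) order in the 1.4-era syntax, and facet.limit=-1 removes the cap, but only for the category field.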
Using Lucid's Solr 1.4 distribution, if I index my email inbox and then
search it by passing in different email expressions, I notice that I get
different results based on whether the '@' character is included, even
though the character is present in every email address in the field I'm
My current solution is to use the ping() function -- which doesn't run in a
thread -- to test the connection before trying to send the data to the Solr
index. It isn't elegant, but it works.
If anyone has a better idea, I'd like to hear it.
-- Chris
On Thu, Dec 30, 2010 at 11:10 AM,
What steps have you taken to figure out whether the
contents of your index are what you think? I suspect
that the fields you're indexing aren't being
analyzed/tokenized quite the way you expect either at
query time or index time (or maybe both!).
Take a look at the admin/analysis page for the
Basically, just what you've suggested. I did the field/query analysis piece
with verbose output. Not entirely sure how to interpret the results, of
course. Currently reading anything I can find on that.
Thanks
Erick Erickson wrote:
What steps have you taken to figure out whether the
Hi. I am using solrj and it has been working fine. I now have a requirement
to add more parameters. So many that I get a max URI exceeded error. Is
there any way, using SolrQuery, to do an HTTP POST so I don't have these issues?
don
Hi Don,
you could give the HTTP method to be used as a second argument to the
QueryRequest constructor:
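A sketch of that suggestion (assuming a SolrServer instance named server is already set up; not tested against a live Solr here):

```java
SolrQuery query = new SolrQuery("some very long query ...");
// POST puts the parameters in the request body instead of the URI,
// avoiding the maximum-URI-length limit.
QueryRequest req = new QueryRequest(query, SolrRequest.METHOD.POST);
QueryResponse rsp = req.process(server);
```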
: : NonFic/Science, how do I turn that into 0/NonFic
:
: : 1/NonFic/Science using the DIH?
:
: I don't have any specific suggestions for you -- i've never
...
: Thanks Chris.
:
: What did you use to generate those encodings if not DIH?
I've used this general approach several
solr/admin/analysis.jsp uses the Luke handler. You can browse facets and fields.
On Wed, Dec 29, 2010 at 7:46 PM, Ahmet Arslan iori...@yahoo.com wrote:
If I understand you correctly, for an INT dynamic field
called *_int2
filled with a field called my_number_int2 during data
import
in a
Another way is to create a requestHandler entry point in
solrconfig.xml that includes lots of parameters in the defaults
section. This way your URLs only have things that change.
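For example, a requestHandler in solrconfig.xml with the invariant parameters moved into its defaults section (the handler name and parameter values here are just placeholders):

```xml
<requestHandler name="/mysearch" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="rows">20</str>
    <str name="fl">id,title,score</str>
    <str name="hl">true</str>
  </lst>
</requestHandler>
```

A request to /mysearch then only needs the parameters that actually vary, such as q.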
On Thu, Dec 30, 2010 at 3:12 PM, Sascha SZOTT sz...@gmx.de wrote:
Hi Don,
you could give the HTTP method to be used
Hi,
I'm getting this exception when I have 2 cores as masters. Seems like one of
the cores obtains a lock (file) and then the other tries to obtain the same
one. However, the first one is not deleted.
How do I fix this?
Dec 30, 2010 4:34:48 PM org.apache.solr.handler.ReplicationHandler
When my Solr guru gets back, we'll redo the schema and see what happens, thanks!
Dennis Gearon
Signature Warning
It is always a good idea to learn from your own mistakes. It is usually a
better
idea to learn from others’ mistakes, so you do not have to make them yourself.
This will not work. At all.
You can only have one Solr core instance changing an index.
On Thu, Dec 30, 2010 at 4:38 PM, Tri Nguyen tringuye...@yahoo.com wrote:
Hi,
I'm getting this exception when I have 2 cores as masters. Seems like one of
the cores obtains a lock (file) and then the
Hi,
I am trying to make sure that when I search for text, regardless of
what that text is, I get an exact match. I'm *still* getting some
issues, and this last mile is becoming very painful. The Solr field
I'm setting this up on is pasted below my explanation. I
appreciate any
You can gain a lot of insight into this kind of thing with the
admin/analysis page. Often the issue is that your tokenizing/
filtering isn't doing quite what you think. Try turning on the
debug checkboxes on that page and seeing what tokens are
generated at index and query time.
In particular,
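For exact matching, one common approach is a field type that keeps the whole value as a single token; a sketch (the type name is invented):

```xml
<fieldType name="text_exact" class="solr.TextField">
  <analyzer>
    <!-- KeywordTokenizer emits the entire input as one token -->
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

Checking such a field on the admin/analysis page should show exactly one token per value at both index and query time.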
When using DIH, my delta imports appear to finish quickly, i.e. it says
"Indexing completed. Added/Updated: 95491 documents. Deleted 11148
documents." in a relatively short amount of time (~30 mins).
However, the importMessage says "A command is still running..." for a
really long time (~60 mins).