It appears the issue is with an encrypted file. Are these files encrypted?
If yes, you need to decrypt them first.
Caused by: javax.crypto.BadPaddingException: RSA private key operation
failed
Best,
Ben
On Tue, Sep 1, 2020, 10:51 PM yaswanth kumar wrote:
> Can someone please help me
Can you send solr logs?
Best,
Ben
On Sun, Aug 9, 2020, 9:55 AM Rashmi Jain wrote:
> Hello Team,
>
> I am Rashmi Jain. I implemented Solr on one of our sites,
> bookswagon.com<https://www.bookswagon.com/>. For the last 2-3 months we have been
> facing a strange issue: Solr
Hope to hear back from you soon.
Best,
Ben
2020-07-21 13:21:02.786 INFO (main) [ ] o.e.j.u.log Logging initialized
@4907ms to org.eclipse.jetty.util.log.Slf4jLog
2020-07-21 13:21:03.004 WARN (main) [ ] o.e.j.s.AbstractConnector Ignoring
deprecated socket close linger time
2020-07-21 13:21
Before I submit a new bug, I should ask you folks if this is my error.
I started a local SolrCloud instance with two nodes and two replicas per
node. I created one empty collection on each node.
I tried to use the ping method in Solrj to verify my connected client.
When I try to use it, it throws
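For anyone comparing notes, here is a minimal SolrJ ping sketch (assuming a recent SolrJ; the ZooKeeper address and collection name are placeholders). One common trip-up: with CloudSolrClient, ping() fails unless a default collection is set, because it targets /admin/ping on a concrete collection.

import java.util.List;
import java.util.Optional;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.response.SolrPingResponse;

public class PingCheck {
    public static void main(String[] args) throws Exception {
        // Embedded-ZK address from a default "bin/solr -e cloud" start; adjust as needed.
        try (CloudSolrClient client = new CloudSolrClient.Builder(
                List.of("localhost:9983"), Optional.empty()).build()) {
            client.setDefaultCollection("mycollection"); // required before ping()
            SolrPingResponse rsp = client.ping();
            System.out.println("ping status: " + rsp.getStatus());
        }
    }
}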
Daniel Carrasco wrote
> Hello,
>
> I'm investigating an 8-node Solr 7.2.1 cluster because we have a lot of
> problems, like when a node fails to import from a DB (maybe it freezes), the
> entire cluster goes down, and others like the leader won't change even when it
> is down (all nodes detect that it is
Regards,
Ben
On 03.08.2016 at 14:57, Joel Bernstein wrote:
Also the TermsComponent now can export the docFreq for a list of terms and
the numDocs for the index. This can be used as a general purpose mechanism
for scoring facets with a callback.
https://issues.apache.org/jira/browse/SOLR-9243
Joel
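If it helps anyone experimenting with this, here is a hedged SolrJ sketch of what a terms.list request per SOLR-9243 might look like (handler path, field, and terms are illustrative; check the JIRA for the exact parameters):

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.response.QueryResponse;

SolrQuery q = new SolrQuery();
q.setRequestHandler("/terms");
q.set("terms", "true");
q.set("terms.fl", "text");           // field to read term stats from
q.set("terms.list", "solr,facets");  // explicit list of terms -> docFreq for each
QueryResponse rsp = client.query(q); // 'client' is any SolrClient
System.out.println(rsp.getResponse().get("terms"));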
Thanks,
Ben Earley
I have not mentioned before that index requests are always routed to a specific
machine.
Is there a way to avoid connectivity from the node to all other nodes?
> From: adi...@hotmail.com
> To: solr-user@lucene.apache.org
> Subject: check If I am Still Leader
> Date: Thu, 16 Apr 2015 16:08:15 +
Hi,
I am using Solr 4.10.0 with Tomcat and embedded ZooKeeper.
I use SolrCloud in my system.
Each shard machine tries to reach/connect to the other cluster machines in order to
index the document; it just checks if it is still the leader.
I don't use replication, so why does it have to check who is
Hello
I am playing with Solr 5 right now, to see if its cloud features can replace
what we have with Solr 3.6, and I have some questions, some newbie and
some not so newbie.
Background: the documents we are putting in Solr have a date field. The
majority of our searches are restricted to documents
on and the
problem disappeared.
The httpcomponents jars, which are dependencies of SolrJ, were at the 4.2.x
version; I upgraded to httpclient-4.3.1, httpcore-4.3 and httpmime-4.3.1.
I ran the replication a few times now and no problem at all; it is now
working as expected.
It seems that the upgrade
ation
will work ?
Thank you again.
Shalom
On Wed, Oct 30, 2013 at 10:00 PM, Shawn Heisey wrote:
> On 10/30/2013 1:49 PM, Shalom Ben-Zvi Kazaz wrote:
>
>> we are continuously getting this exception during replication from
>> master to slave. our index size is 9.27 G and we
We are continuously getting this exception during replication from
master to slave. Our index size is 9.27 GB and we are trying to replicate
a slave from scratch.
It's a different file each time; sometimes we get to 60% replication
before it fails and sometimes only 10%. We never managed a successfu
Hello,
I have text and text_ja fields, where text uses the English analyzer and
text_ja the Japanese one; I index both with copyField from other fields.
I'm trying to search both fields using edismax and the qf parameter, but I
see strange behaviour from edismax. I wonder if someone can give me a
hint as to what's
Hi,
We have a customer that needs support for both English and Japanese; a
document can be in either language and we have no indication of the
language of a document. I know I can construct a schema with both
English and Japanese fields and index them with copyField. I also know
I can detect t
Hello list
In one of our searches, which uses Result Grouping, we need to
filter results to only groups that have more than one document in the
group, or more specifically to groups that have two documents.
Is it possible in some way?
Thank you
Hi
You can give soft-commit a try.
More details available here http://wiki.apache.org/solr/NearRealtimeSearch
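A minimal SolrJ sketch of the soft-commit route (assuming Solr/SolrJ 4.x or later; 'client' and the document are placeholders). A soft commit makes new documents searchable without the full cost of a hard commit:

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.common.SolrInputDocument;

SolrInputDocument doc = new SolrInputDocument();
doc.addField("id", "doc-1");
client.add(doc);                     // 'client' is any SolrClient
client.commit(false, false, true);   // waitFlush=false, waitSearcher=false, softCommit=true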
-Original Message-
From: 李威 [mailto:li...@antvision.cn]
Sent: Thursday, 2 May 2013 12:02 PM
To: solr-user
Cc: 李景泽; 罗佳
Subject: How to deal with cache for facet search when inde
Hi Hoss
Thanks for the reply.
Unfortunately we have other customized similarity classes, and I don't know how
to disable them and still make the query work.
I will try to attach more information once I work out how to simplify the issue.
Thanks
Ben
From
Hi
We met a weird problem in our project when sorting by score in Solr 4.0: the
document with the biggest score is not at the top. The debug explanations from
Solr look like this,
First Document
1.8412635 = (MATCH) sum of:
2675.7964 = (MATCH) sum of:
0.0 = (MATCH) sum of:
0.0 = (MATCH) max of:
Hi Yonik
I will give the latest 4.0 release a try.
Thanks anyway.
Cheers
Ben
From: ysee...@gmail.com [ysee...@gmail.com] on behalf of Yonik Seeley
[yo...@lucidworks.com]
Sent: Tuesday, November 13, 2012 2:04 PM
To: solr-user@lucene.apache.org
Subject
napshot before the alpha release. Could that be the problem? we have some
customized parsers so it will take quite some time to upgrade.
Ben
From: ysee...@gmail.com [ysee...@gmail.com] on behalf of Yonik Seeley
[yo...@lucidworks.com]
Sent: Tuesday, Novembe
More information: the problem only happens when I have both sort-by-function and
grouping in the query.
From: Kuai, Ben [ben.k...@sensis.com.au]
Sent: Monday, November 12, 2012 2:12 PM
To: solr-user@lucene.apache.org
Subject: sort by function error
Hi
I am
But, check out things like httplib2 and urllib2.
-Original Message-
From: Spadez [mailto:james_will...@hotmail.com]
Sent: Thursday, June 07, 2012 2:09 PM
To: solr-user@lucene.apache.org
Subject: RE: Help! Confused about using Jquery for the Search query - Want to
ditch it
Thank you, that
As far as I know, it is the only way to do this. Look around a bit, Python (or
PHP, or C, etc., etc.) is able to act as an HTTP client...in fact, that is the
most common way that web services are consumed. But, we are definitely beyond
the scope of the Solr list at this point.
-Original Mes
to
ditch it
Hi Ben,
Thank you for the reply. So, if I don't want to use JavaScript and I want the
entire page to reload each time, is it being done like this?
1. User submits form via GET
2. Solr server queried via GET
3. Solr server completes query
4. Solr server returns XML output
5. XML
I'm new to Solr... but this is more of a web programming question... so I can get
in on this :).
Your only option to get the data from Solr sans-JavaScript is to use Python
to pull the results BEFORE the client loads the page.
So, if you are asking if you can get AJAX-like results (an already l
Hello,
When I have seen this it usually means the SOLR you are trying to connect to is
not available.
Do you have it installed on:
http://localhost:8080/solr
Try opening that address in your browser. If you're running the example Solr
using the embedded Jetty you won't be on 8080 :D
Hope that
out waiting for
https://issues.apache.org/jira/browse/SOLR-2366
Thanks
Ben
tool. I'm checking on the
autocommit handler.
Has anyone seen anything similar?
Thanks
Ben
up in Java.
Thanks
Ben
-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: 13 April 2012 13:28
To: solr-user@lucene.apache.org
Subject: Re: Solr data export to CSV File
Does this help?
http://wiki.apache.org/solr/CSVResponseWriter
Best
Erick
On Fri, Apr 13, 2012
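A hedged Java sketch of the CSV export route from the wiki page above: the CSVResponseWriter is selected with wt=csv on a normal select request, so the export is just an HTTP GET streamed to a file (host, core name, and field list are placeholders):

import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class CsvExport {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8983/solr/mycore/select"
                + "?q=*:*&wt=csv&fl=id,name&rows=1000000");
        try (InputStream in = url.openStream()) {
            Files.copy(in, Paths.get("export.csv"), StandardCopyOption.REPLACE_EXISTING);
        }
    }
}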
That's great information.
Thanks for all the help and guidance, it's been invaluable.
Thanks
Ben
-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: 26 March 2012 12:21
To: solr-user@lucene.apache.org
Subject: Re: Simple Slave Replication Question
It&
then work across the whole index?
Thanks
Ben
-Original Message-
From: Tomás Fernández Löbbe [mailto:tomasflo...@gmail.com]
Sent: 23 March 2012 15:10
To: solr-user@lucene.apache.org
Subject: Re: Simple Slave Replication Question
Also, what happens if, instead of adding the 40K docs yo
minutes from the slave. When it kicks in I see a
new version of the index and then it copies the full 5GB index.
Thanks
Ben
-Original Message-
From: Tomás Fernández Löbbe [mailto:tomasflo...@gmail.com]
Sent: 23 March 2012 14:29
To: solr-user@lucene.apache.org
Subject: Re: Simple Slave
move around 150G/hour
when hooking up a new slave to the master.
/Martin
On Fri, Mar 23, 2012 at 12:33 PM, Ben McCarthy <
ben.mccar...@tradermedia.co.uk> wrote:
> Hello,
>
> I'm looking at the replication from a master to a number of slaves. I
> have configured it and it app
massive 200GB indexes,
does it not take a while to bring the slaves in line with the master?
Thanks
Ben
I run the query for the delta on the DB and I get back the expected 100
stock IDs.
Any help would be appreciated.
Thanks
Ben
Hello Solr users.
My organization is working on a Solr implementation with multiple cores. I
want to prepare us for the day when we'll need to make a change to our
schema.xml and roll that change into our production environment.
I believe we'll need to perform the following steps:
# delete all o
Thanks a lot Mark.
Since my SolrCloud code was old I tried downloading and building the
newest code from here:
https://svn.apache.org/repos/asf/lucene/dev/trunk/
I am using Tomcat 6.
I manually created the sc sub-directory in my ZooKeeper ensemble
file-system.
I used this connection string to my ZK ens
Hi!
I am using SolrCloud with a ZooKeeper ensemble of 3.
I noticed that SolrCloud stores information directly under the root dir in the
ZooKeeper file system:
/config /live_nodes /collections
In my setup ZooKeeper is also used by other modules, so I would like SolrCloud
to store everything under /s
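For what it's worth, this is what the ZooKeeper chroot feature is for: append a path to the zkHost string (e.g. zk1:2181,zk2:2181,zk3:2181/solr), create that znode first (depending on the version), and Solr keeps everything under it. A hedged sketch with a recent SolrJ (hosts and path are placeholders):

import java.util.List;
import java.util.Optional;
import org.apache.solr.client.solrj.impl.CloudSolrClient;

CloudSolrClient client = new CloudSolrClient.Builder(
        List.of("zk1:2181", "zk2:2181", "zk3:2181"),
        Optional.of("/solr"))  // chroot: all Solr znodes live under /solr
    .build();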
&moreParams...}
In just a few minutes, I have racked up 10MB of log in my dev environment. Any
ideas for a sane way of handling these messages? I imagine it's slowing down
Solr as well.
Thanks
-Ben
after mika (i without the macron) but before miki (also without
the macron), or about Welsh, where the digraphs (ch, dd, etc.) are
treated as single letters, or about Ojibwe, where the apostrophe ' is a
letter which sorts between h and i.
How do non-English languages typically handle this?
-Ben
Thanks both for your replies.
Eric,
Yep, I use the Analysis page extensively, but what I was directly looking
for was whether all or only the last line of values given by the analysis
page were eventually indexed.
I think we've concluded it's only the last line.
Cheers,
Ben
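A hedged Lucene sketch to confirm this from code: what gets indexed is whatever the token stream emits after the final filter in the chain, so dumping the stream shows exactly the "last line" of the Analysis page (the analyzer here is a stand-in for your field's chain):

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

Analyzer analyzer = new StandardAnalyzer(); // stand-in for the field's analyzer chain
try (TokenStream ts = analyzer.tokenStream("myfield", "Some Text To Analyze")) {
    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    ts.reset();
    while (ts.incrementToken()) {
        System.out.println(term.toString()); // these terms are what the index sees
    }
    ts.end();
}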
On W
)
are indexed?
Is every value that is produced from each char filter, tokenizer, and filter
indexed?
Or is only the final value after completing the whole chain indexed?
Cheers,
Ben
Use admin/analysis.jsp to see which filter is removing it.
Configure a field type appropriate to what you want to index.
On Mon, Apr 4, 2011 at 9:55 AM, mechravi25 wrote:
> Hi,
> Has anyone indexed data with the Trade Mark symbol?? ...when I tried to
> index, the data appears as below.
>
> Data:
I can't remember where I read it, but I think MappingCharFilterFactory is
preferred.
There is an example in the example schema.
From this, I get:
org.apache.solr.analysis.MappingCharFilterFactory
{mapping=mapping-ISOLatin1Accent.txt}
|text|despues|
On Tue, Apr 5, 2011 at 5:06 PM, Nemani, Raj
document? I understand that it is possible to do an MLT
query using free text, but I want to utilize structured data.
Thanks,
Ben
--
Ben Anhalt
ben.anh...@gmail.com
I speak Esperanto.
Hi folks,
Is there any way to know the size *in bytes* occupied by a cache (filter
cache, doc cache ...)? I don't find such information within the stats page.
Regards
--
Mehdi BEN HAJ ABBES
> processor="FileListEntityProcessor" fileName=".*xml" recursive="true"
Shouldn't this be fileName="*.xml"?
Ben
On Oct 22, 2010, at 10:52 PM, pghorp...@ucla.edu wrote:
> processor="FileListEntityProcesso
uring the load, then the record must have been deleted from
the database.
Hope this helps.
Ben
On Wed, Oct 20, 2010 at 8:05 PM, Erick Erickson wrote:
> << and
> do a complete re-indexing each week also we want to delete the orphan solr
> documents (for which the data is not prese
would
be happy with finding a decent place to try to add it. I'm not sure if there
is a clean place for it.
Ben
On Oct 20, 2010, at 8:36 PM, Erick Erickson wrote:
> It seems to me that multiple cores are along the lines you
> need, a single instance of Solr that can search
y be also sharding a small sub-set of
them.
Thanks in advance,
Ben
Hi, I am using SolrCloud, which uses an ensemble of 3 ZooKeeper instances.
I am performing survivability tests:
taking one of the ZooKeeper instances down, I would expect the client to use a
different ZooKeeper server instance.
But as you can see in the logs attached below,
depending on which insta
Hi, I am running a ZooKeeper ensemble of 3 instances
and established a SolrCloud to work with it (2 masters, 2 slaves).
On each master machine I have 2 shards (4 shards in total).
On one of the masters I keep noticing ZooKeeper-related exceptions which I
can't understand.
One appears to be
Yatir Ben Shlomo
Outbrain Engineering
yat...@outbrain.com<mailto:yat...@outbrain.com>
tel: +972-73-223912
fax: +972-9-8350055
www.outbrain.com<http://www.outbrain.com/>
Hi
I am using SolrCloud.
Suppose I have a total of 4 machines dedicated to Solr.
I want to have 2 machines as replicas (slaves) and 2 as masters,
but I want to work with 8 logical cores rather than 2,
i.e. each master (and each slave) will have 4 cores on it.
The reason is that I can optimize the cores on
Further to the earlier note re Lucandra: I note that Cassandra, which
Lucandra backs onto, is 'eventually consistent', so given your real-time
requirements you may want to review this in the first instance,
if Lucandra is of interest.
On 21 May 2010, at 06:12, Walter Underwood wrote:
Solr
You may wish to look at Lucandra: http://github.com/tjake/Lucandra
On 21 May 2010, at 06:12, Walter Underwood wrote:
Solr is a very good engine, but it is not real-time. You can turn
off the caches and reduce the delays, but it is fundamentally not
real-time.
I work at MarkLogic, and we h
It could be that you should be providing an implementation of
"SortComparatorSource".
I have missed the earlier part of this thread; I assume you're trying to
implement some form of custom search?
B
dontthinktwice wrote:
Marc Sturlese wrote:
I have been able to create my custom field. The
ted is confusing the
hell out of me too!
Thanks
Ben
Yonik Seeley wrote:
On Thu, Jul 2, 2009 at 4:24 PM, Candide Kemmler wrote:
I have a simple question regarding the DocSlice class. I'm trying to use the (very
handy) set operations on DocSlices and I'm rather confused by the way it
behaves
ts
queries or something.
Has anyone built their Solr index using Lucene, and how did you handle
stemmed fields in Lucene so that Solr worked properly with them?
Cheers,
Ben
er each
individual value when looking for matches, meaning the simple query
syntax can be made adequate to do what's needed.
Many thanks Uwe.
B
Uwe Klosa wrote:
2009/7/1 Ben
I'm not quite sure I understand exactly what you mean.
The string I'm processing could have many ten
're saying, you're saying that I should
leave whitespaces between the individual parts of the string, pass in
the string into a "multiValued" field and have SOLR internally treat
each "word" as an individual entity?
Thanks for your help with this...
Ben
Uwe Klosa
Is there a way in the Schema to specify that the comma should be used to
split the values up?
e.g. Can I specify my "vector" field as multivalue and also specify some
sort of tokeniser to automatically split on commas?
Ben
Uwe Klosa wrote:
You should split the strings at the comm
ing on the commas so that I can
apply a normal wildcard query and SOLR applies it to each
individually?*** That would solve all my problems :
e.g.
The string is internally represented in lucene/solr as
A1_B1_C1_D1
A2_B2_C2_D2
A3_B3_C3_D3
where it tries to match the wildcard query on each in turn?
Thanks for you help, I'm deeply confused about this at the moment...
Ben
ng also doesn't work :
Cannot parse 'vector:_\*[\^_\]\*_[\^_\]\*_[\^_\]\*': Encountered "]" at
line 1, column 15.
Was expecting one of:
"TO" ...
...
...
Ben wrote:
Ben wrote:
The exception SOLR raises is :
org.apache.l
Ben wrote:
The exception SOLR raises is :
org.apache.lucene.queryParser.ParseException: Cannot parse
'vector:_*[^_]*_[^_]*_[^_]*': Encountered "]" at line 1, column 12.
Was expecting one of:
"TO" ...
...
...
Ben wrote:
Passing in a RegularExpr
The exception SOLR raises is :
org.apache.lucene.queryParser.ParseException: Cannot parse
'vector:_*[^_]*_[^_]*_[^_]*': Encountered "]" at line 1, column 12.
Was expecting one of:
"TO" ...
...
...
Ben wrote:
Passing in a RegularExpression lik
somebody please advise how to handle character exclusion in such
searches?
Any help or pointers are much appreciated!
Thanks
Ben
mplying with Analysis? If
that were the case, I'd not need to worry about character exclusion.
Sorry if that's a bit fuzzy... it's hard trying to explain enough to be
useful, but not so much that it turns into an essay!!!
Thanks,
Ben
The solution I'm using is to form a ve
substrings it matched of the queried facet, rather than the whole string?
I hope somebody can help :)
Thanks,
Ben
Hello,
I wish to send an MLT request to Solr and filter the results by a list of values
for a specific field. The problem is that sometimes the list can include
thousands of values and it's impossible to send such a GET request.
Sending this request as POST didn't work well... Is POST supporte
would work
without any qf. I manually added a qf to the query with the
application solrconfig and got a result. Off to debug the application
side!
Thank you very much for the help!
Ben
On Thu, Mar 26, 2009 at 3:08 PM, Otis Gospodnetic
wrote:
>
> Standard searches your default field (specif
0
name
regex
So there's no particular mention of any fields from schema.xml in
dismax, but the standard works without that.
Thanks for the responses,
Ben
On Thu, Mar 26, 2009 at 2:11 PM, Matt Mitchell wrote:
> Do you have qf set? Just last week I had a problem
module, beta 6. The problem occurs in the admin interface for solr,
though, not just in the end application.
And...that's it? I don't know what else to say or offer other than
dismax doesn't work, and I'm not sure where else to go to
troubleshoot. Any ideas?
Ben
Hi Solr users,
Is there a method of retrieving a field range, i.e. the min and max
values of that field's term enum?
For example I would like to know the first and last date entry of N
documents.
Regards,
-Ben
ik
Seeley
Sent: Tuesday, October 07, 2008 1:10 PM
To: solr-user@lucene.apache.org
Subject: Re: *Very* slow Commit after upgrading to solr 1.3
On Tue, Oct 7, 2008 at 6:32 AM, Ben Shlomo, Yatir
<[EMAIL PROTECTED]> wrote:
> The problem is solved, see below.
> Since the performance is so
: Saturday, October 04, 2008 6:07 PM
To: solr-user@lucene.apache.org
Subject: Re: *Very* slow Commit after upgrading to solr 1.3
Ben, see also
http://www.nabble.com/Commit-in-solr-1.3-can-take-up-to-5-minutes-td1980
2781.html#a19802781
What type of physical drive is this and what interface is
as to check?
Thanks.
Here is part of my solrConfig file:
false
1000
1000
2147483647
1
1000
1
false
1000
1000
2147483647
1
true
Yatir Ben-shlomo | eBay, Inc. | Classificati
ch will be missing 1 doc. 10 mil each on 3 machines, a *:*
search will be missing 30. Not a big deal, but could be a concern for
some with picky, look-at-everything customers.
- Mark
Ben Shlomo, Yatir wrote:
> Hi!
>
> I am already using solr 1.2 and happy with it.
>
> In a new pro
Hi!
I am already using solr 1.2 and happy with it.
In a new project with a very tight deadline (10 development days from
today) I need to set up a more ambitious system in terms of scale.
Here is the spec:
* I need to index about 60,000,000
documents
* E
Shalin, Thanks a lot. I'll do that.
On Tue, Mar 18, 2008 at 11:13 AM, Shalin Shekhar Mangar <
[EMAIL PROTECTED]> wrote:
> Hi Ben,
>
> If I had to do this, I would start by adding a custom
> javax.servlet.Filter into Solr. It should work fine since all you're
> d
though, it would be ideal; Solr query syntax is far superior to that
of XYZ).
Basically I need to replace : with =, + with /, and = with : in the query
syntax.
Thank you.
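A hedged sketch of the servlet Filter idea Shalin suggested, rewriting the incoming q parameter before Solr parses it (the class name and mapping are illustrative). Note the char-by-char mapping: because ':' and '=' swap with each other, two chained replace() calls would clobber each other.

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletRequestWrapper;

public class QuerySyntaxFilter implements Filter {
    @Override
    public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequestWrapper wrapped =
                new HttpServletRequestWrapper((HttpServletRequest) req) {
            @Override
            public String getParameter(String name) {
                String v = super.getParameter(name);
                return "q".equals(name) && v != null ? translate(v) : v;
            }
        };
        chain.doFilter(wrapped, resp);
    }

    // ':' and '=' swap, so map char-by-char instead of chained replace()
    private static String translate(String q) {
        StringBuilder sb = new StringBuilder(q.length());
        for (char c : q.toCharArray()) {
            switch (c) {
                case '=': sb.append(':'); break;
                case ':': sb.append('='); break;
                case '/': sb.append('+'); break;
                default:  sb.append(c);
            }
        }
        return sb.toString();
    }

    @Override public void init(FilterConfig cfg) {}
    @Override public void destroy() {}
}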
On Tue, Mar 18, 2008 at 9:50 AM, Shalin Shekhar Mangar <
[EMAIL PROTECTED]> wrote:
> Hi Ben,
>
> It would b
though, it would be ideal; Solr query syntax is far superior to that
of XYZ, but impractical).
On Tue, Mar 18, 2008 at 9:50 AM, Shalin Shekhar Mangar <
[EMAIL PROTECTED]> wrote:
> Hi Ben,
>
> It would be nice if you can tell us your use-case so that we can be
> more helpful.
>
>
Hi solr users,
I need to change the query format for Solr a little bit. How can I
accomplish this? I don't want to modify the underlying Lucene query
specification, but just the way I query the index through the GET HTTP
method in Solr.
Thanks a lot for your help.
Ben
Why does the web admin append "core=null" to all the requests?
e.g. admin/get-file.jsp?core=null&file=schema.xml
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf
> Of Yonik Seeley
> Sent: Monday, 17 December 2007 4:44 PM
> To: solr-user@lucene.apache.org
> Subject: Re: retrieve lucene "doc id"
>
> On Dec 16, 2007 11:40 PM, Ben I
How do I retrieve the Lucene "doc id" in a query?
-Ben
Sorry - this should have been posted on the Lucene user list.
...the solution is to use the Lucene PerFieldAnalyzerWrapper, add the
field with the KeywordAnalyzer, and then pass the PerFieldAnalyzerWrapper to
the QueryParser.
-Ben
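In current Lucene the wrapper takes a map instead of addAnalyzer(); a hedged sketch of the same idea (field name and query are placeholders):

import java.util.Map;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.core.KeywordAnalyzer;
import org.apache.lucene.analysis.miscellaneous.PerFieldAnalyzerWrapper;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.Query;

Analyzer perField = new PerFieldAnalyzerWrapper(
        new StandardAnalyzer(),
        Map.of("my_field", new KeywordAnalyzer())); // my_field kept as one token
QueryParser parser = new QueryParser("defaultField", perField);
Query q = parser.parse("my_field:\"the value\"");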
> -Original Message-
> From: Ben Incani [mailto:
to perform a phrase query such as my_field:(the
value) or my_field:"the value", which don't work?
So is there a way to prevent tokenisation of a field using the
StandardAnalyzer, without implementing your own TokenizerFactory?
Regards
Ben
Did you try to add a backslash to escape the "-" in Geckoplp4-M
(Geckoplp4\-M)
-Original Message-
From: Kevin Lewandowski [mailto:[EMAIL PROTECTED]
Sent: Friday, October 12, 2007 9:40 PM
To: solr-user@lucene.apache.org
Subject: solr not finding all results
I've found an odd situation wh
Hi!
I know I can delete multiple docs with the following:
mediaId:(6720 OR 6721 OR )
My question is: can I do something like this?
languageId:123 AND manufacturer:456
(It does not work for me, and I didn't forget to commit.)
How can I do it? With a copyField?
languageIdmanufacturer:12345
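For reference, the SolrJ equivalents of the two forms above (whether the boolean AND form is honoured depends on the Solr version; 'client' is any SolrClient):

// 'client' is any SolrClient
client.deleteByQuery("mediaId:(6720 OR 6721)");              // the form that works
client.deleteByQuery("languageId:123 AND manufacturer:456"); // the form being asked about
client.commit();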
tions
(in catalina.bat)
yatir
____
From: Ben Shlomo, Yatir [mailto:[EMAIL PROTECTED]
Sent: Monday, August 20, 2007 6:40 PM
To: solr-user@lucene.apache.org
Subject: problem with quering solr after indexing UTF-8 encoded CSV files
Hi!
I have utf-8 encoded data in
Hi!
I have UTF-8 encoded data inside a CSV file (actually it's a tab-separated file
- attached).
I can index it with no apparent errors.
I did not forget to set this in my Tomcat configuration.
When I query a document using the UTF-8 text I get zero matches:
rtitioning by domain.
-Yonik
On 8/9/07, Ben Shlomo, Yatir <[EMAIL PROTECTED]> wrote:
> Hi!
>
> say I have 300 csv files that I need to index.
>
> Each one holds millions of lines (each line is a few fields separated
by
> commas)
>
> Each csv file represents a different d
Hi!
Say I have 300 CSV files that I need to index.
Each one holds millions of lines (each line is a few fields separated by
commas).
Each CSV file represents a different domain of data (e.g. file1 is
computers, file2 is flowers, etc.).
There is no indication of the domain ID in the data insid
'text'?
Is this merely a documentation issue or have I missed something here...
Regards,
Ben
-solr.
Or would this require a code change?
Regards
-Ben
te?
>
>
>
> -Hoss
>
No - no advanced use of XML has been implemented.
One of the fields in the add request would contain the original binary
document encoded in base64; this would then preferably be decoded to
binary and placed into a Lucene binary field, which would need to be
defined in Solr.
Thanks
Ben
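A hedged Lucene sketch of that flow with current APIs (field names and the payload are placeholders): decode the base64 field from the add request and store the bytes in a stored-only binary field.

import java.util.Base64;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.StoredField;

String base64Payload = "aGVsbG8="; // stand-in for the base64 field from the add request
byte[] raw = Base64.getDecoder().decode(base64Payload);
Document doc = new Document();
doc.add(new StoredField("original_document", raw)); // stored verbatim, not indexed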
org.apache.solr.util.XML
http://issues.apache.org/jira/browse/SOLR-20
So far I have been storing binary data in the Lucene index. I realise
this is not an optimal solution, but so far I have not found a Java
container system to manage documents. Can anyone recommend one?
Regards,
Ben