Hi there,
my index is created from XML files that are downloaded on the fly.
This also includes downloading a mapping file that is used to resolve IDs in
the main file (root entity) and map them onto names.
The basic functionality works - the supplier_name is set for each document.
However, the
>
> Interesting I don't recall a bug like that being fixed.
> Anyway, glad it works for you now!
> -Yonik
Then it’s probably because it’s Christmas time! :-)
;:"Kontaktlinsen > Torische Linsen",
"count":75168,
"cat_sum":3752665.0497611803},
Thanks,
Chantal
> On 15.12.2016 at 16:00, Chantal Ackermann wrote:
>
> Hi Yonik,
>
> are you certain that nesting a function works as documen
Hi Yonik,
are you certain that nesting a function works as documented on
http://yonik.com/solr-subfacets/?
top_authors:{
type: terms,
field: author,
limit: 7,
sort: "revenue desc",
facet:{
revenue: "sum(sales)"
}
}
I’m getting th
Hi Yonik,
here is an update on what I’ve tried so far, unfortunately without any more
luck.
The field directive is (should have included this when asking the question):
/query?
json.facet={
num_pop:{query: "popularity[* TO *]"},
all_pop: "sum(popularity)",
shop_cat: {type:terms, field:shop
Hi,
@Lance - thanks, it's a pleasure to give something back to the community. Even
if it is comparatively small. :-)
@Paul - it's definitely not 15 min but rather 2 min. Actually, the testing part
of this setup is very regular compared to other Maven projects. The copying of
the WAR file and re
> application?
> (e.g. with 100 documents)
>
> thanks in advance
>
> Paul
>
>
> On 14 March 2013, at 09:29, Chantal Ackermann wrote:
>
>> Hi all,
>>
>>
>> this is not a question. I just wanted to announce that I've written a blog
>&
Hi all,
this is not a question. I just wanted to announce that I've written a blog post
on how to set up Maven for packaging and automatic testing of a SOLR index
configuration.
http://blog.it-agenten.com/2013/03/integration-testing-your-solr-index-with-maven/
Feedback or comments appreciated
Hi Johannes,
in production, Solr is best run as a backend service to your actual web application:
Client (Browser) <---> Web App <---> Solr Server
Very much like a database. The processes are implemented in your Web App, and
when they require results from Solr for whatever reason they simply query i
Drop the logging.properties file into the solr.war at WEB-INF/classes.
See here:
http://lucidworks.lucidimagination.com/display/solr/Configuring+Logging
Section "Tomcat Logging Settings"
Cheers,
Chantal
On 27.08.2012 at 16:43, Nicholas Ding wrote:
> Hello,
>
> I've deployed Solr 4 on Tomca
>
> I don't see that you have anything in the DIH that tells what columns from
> the query go into which fields in the index. You need something like
>
That is not completely true. If the columns have the same names as the fields,
the mapping is redundant. Nevertheless, it might b
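For illustration, an explicit DIH mapping would look something like this
(the entity query, column, and field names are placeholders):
<entity name="item" query="select ID, TITLE from ITEM">
  <field column="ID" name="id"/>
  <field column="TITLE" name="title"/>
</entity>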
> Are you committing? You have to commit for them to be actually added….
If DIH says it did not add any documents ("added 0 documents") committing won't
help.
Likely, there is a problem with the mapping between DIH and the schema so that
none of the fields make it into the index. We would need
Hi Lance,
does this do what you want?
http://maven.apache.org/plugins/maven-assembly-plugin/descriptor-refs.html#jar-with-dependencies
It's Maven, but that would be an advantage, I'd say… ;-)
Chantal
On 05.08.2012 at 01:25, Lance Norskog wrote:
> Has anybody tried packaging the contrib distrib
Hi Roy,
the example URL is correct if your core is available under that name
(configured in solr.xml) and has started without errors. I think I observed
that it makes a difference whether there is a trailing slash or not (but that
was a while ago, so maybe that has changed).
If you can reach th
Hi Elisabeth,
try adding the same tokenizer chain for "query", as well, or simply remove the
type="index" from the analyzer element.
Your chain is analyzing the input of the indexer and removing diacritics and
lowercasing. With your current setup, the input to the search is not analyzed
likewi
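A minimal sketch of such a shared analyzer in schema.xml (the exact
tokenizer and filter classes are assumptions based on the description):
<fieldType name="text_folded" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- removes diacritics -->
    <filter class="solr.ASCIIFoldingFilterFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
Without separate type="index"/type="query" analyzers, the same chain is
applied at index and at query time.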
Hi Kalyan,
that is because SolrJ uses "javabin" as its format, which has class version
numbers in the serialized objects that do not match. Set the format to XML
("wt" parameter) and it will work (maybe JSON would work, as well).
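For example (host, port, and handler are assumptions):
http://localhost:8983/solr/select?q=*:*&wt=xml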
Chantal
On 31.07.2012 at 20:50, Manepalli, Kalyan wrote:
> Hi all,
>
Hi Rodrigo,
as I understand it, you know where the index is located.
If you can also find out where the configuration files are - if they are simply
accessible on the FS - you could start a regular SOLR server that simply uses
that config directory as SOLR_HOME (it will automatically use the co
Hi Rodrigo,
the data will only show in SOLR if the index is built after the data has been
committed to the database it reads the data from.
If the data does not show up in the index there could be several reasons why
that is:
a) different database
b) permissions prevent that the data is visible
You're welcome :-)
C
> condition you can apply to the copyField directive, it means that this logic has
> to be implemented by the process that populates the Solr core. Is this
> assumption correct?
>
> That's kind of bad, because I'd like to have this kind of "rules" in the Solr
>
Hi Daniel,
depending on how you decide on the list of ids, in the first place, you could
also create a new index (core) and populate it with DIH which would select only
documents from your main index (core) in this range of ids. When updating you
could try a delta import.
Of course, this is on
Hi,
use two fields:
1. KeywordTokenizer (= single token) with ngram minsize=1 and maxsize=2 for
inputs of length < 3,
2. the other one tokenized as appropriate with minsize=3 and longer for all
longer inputs
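A sketch of the first field type in schema.xml (the type name and the
lowercase filter are assumptions):
<fieldType name="text_ngram_short" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.NGramFilterFactory" minGramSize="1" maxGramSize="2"/>
  </analyzer>
</fieldType>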
Cheers,
Chantal
On 26.07.2012 at 09:05, Finotti Simone wrote:
> Hi Ahmet,
> busine
Hi Daniel,
index the id into a field of type tint or tlong and use a range query
(http://wiki.apache.org/solr/SolrQuerySyntax?highlight=%28rangequery%29):
fq=id:[200 TO 2000]
If you want to exclude certain ids it might be wiser to simply add an exclusion
query in addition to the range query in
> Suppose I have a product with a title='kMix Espresso maker'. If I tokenize
> this and put the result in product_tokens I should get
> '[kMix][Espresso][maker]'.
>
> If now I try to search with facet.field='product_tokens' and
> facet.prefix='espresso' I should get only 'espresso' while I want '
Hi Ugo,
You can use facet.prefix on a tokenized field instead of a String field.
Example:
facet.prefix on "product" will only return hits that match the start of the
single token stored in that field.
As "product_tokens" contains the value of "product" tokenized in a fashion that
suites you,
Here are the working solutions for:
3.6.1 (or lower probably)
via ScriptTransformer in data-config.xml:
function prepareData(row) {
var cols = new java.util.ArrayList();
cols.add("spent_hours");
Hi,
I haven't been following from the beginning but am still curious: is the war
file on a shared fs?
See also:
http://www.mail-archive.com/users@tomcat.apache.org/msg79555.html
http://stackoverflow.com/questions/5493931/java-lang-illegalargumentexception-invalid-or-unreadable-war-file-error-in-
Hi Hoss,
> Did you perhaps forget to include RunUpdateProcessorFactory at the end?
What is that? ;-)
I had copied the config from http://wiki.apache.org/solr/UpdateRequestProcessor
but removed the lines I thought I did not need. :-(
I've changed my configuration, and this is now WORKING (4.0-AL
Hi there,
sorry for the length - it is mostly (really) log output. The basic issue is
reflected in the subject: DIH runs fine, but even with an extra optimize on top
(which should not be necessary given my DIH config) the index remains empty.
(I have changed from 3.6.1 to 4.0-ALPHA because of H
Hi Hoss,
thank you for the quick response and the explanations!
> My suggestion would be to modify the XPath expression you are using to
> pull data out of your original XML files and ignore ""
>
I don't think this is possible. That would include text() in the XPath which is
not handled by t
Hi all,
I'm trying to index float values that are not required; input is an XML file. I
have problems avoiding the NumberFormatException (NFE).
I'm using SOLR 3.6.
Index input:
- XML using DataImportHandler with XPathProcessor
Data:
Optional, Float, CDATA like: 2.0 or
Original Problem:
Empty values would cause a
Hi Henri,
you have not provided very much information, so, here comes a guess:
try ${bdte1} instead of $bdte1 - maybe Velocity resolves $bdte and
concatenates "1" instead of trying the longer name as a variable, first.
Chantal
On Wed, 2012-03-28 at 12:04 +0200, henri.gour...@laposte.net wrote:
Hi,
I put all those jars into SOLR_HOME/lib. I do not specify them in
solrconfig.xml explicitly, and they are all found all right.
Would that be an option for you?
Chantal
On Thu, 2012-03-15 at 17:43 +0100, ViruS wrote:
> Hello,
>
> I just now try to switch from 3.4.0 to 3.5.0 ... i make ne
Hi Alp,
if you have not changed how SOLR logs in general, you should find the
log output in the regular server logfile. For Tomcat you can find this
in TOMCAT_HOME/catalina.out (or search for that name).
If there is a problem with your schema, SOLR should be complaining about
it during applicatio
You can download Oracle's Java (which was Sun's) from Oracle directly.
You will have to create an account with them. You can use the same
account for reading the java forum and downloading other software like
their famous DB.
Simply download it. JDK6 is still a binary installer, as were all Sun packages before.
Sorry to have misunderstood.
It seems the new Relevance Functions in Solr 4.0 might help - unless you
need to use an official release.
http://wiki.apache.org/solr/FunctionQuery#Relevance_Functions
On Thu, 2012-02-23 at 13:04 +0100, rks_lucene wrote:
> Dear Chantal,
>
> Thanks for your reply, b
Hi Ritesh,
you could add another field that contains the size of the list in the
AREFS field. This way you'd simply sort by that field in descending
order.
Should you update AREFS dynamically, you'd have to update the field with
the size, as well, of course.
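For example, assuming the new integer field is called arefs_count:
sort=arefs_count desc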
Chantal
On Thu, 2012-02-23 at 11:27
Make sure your Tomcat instances are each started with a max heap size
such that the total adds up to a lot less than the complete RAM of your
system.
Frequent Garbage collection means that your applications request more
RAM but your Java VM has no more resources, so it requires the Garbage
Collector t
If your script turns out too complex to maintain, and you are developing
in Java, anyway, you could extend EntityProcessor and handle the data in
a custom way. I've done that to transform a datamart like data structure
back into a row based one.
Basically you override the method that gets the data
I've done something like that by calculating the hours during indexing
time (in the script part of the DIH config using java.util.Calendar
which gives you all those field values without effort). I've also
extracted information on which weekday it is (using the integer
constants of Calendar).
If you
Hi,
I've got these errors when my client used a different SolrJ version from
the SOLR server it connected to:
SERVER 3.5 responding ---> CLIENT some other version
You haven't provided any information on your client, though.
Chantal
On Wed, 2012-02-15 at 13:09 +0100, mechravi25 wro
> >
> > does anyone of the maillinglist users use solr as an API to avoid database
> > queries? [...]
>
> Like in a... cache?
>
> Why not use a cache then? (memcached, for example, but there are more).
>
Good point. A cache only uses lookup by one kind of cache key while SOLR
provides lookup b
Hi,
you would not want to include the unique ID and similar stuff, though?
No idea whether it would impact the number of hits but it would most
probably influence the scoring if nothing else.
E.g. if you compare by certain fields, I would expect that a score of
1.0 indicates a match on all of tho
Hi Bráulio,
I don't know about HunspellStemFilterFactory in particular, but concerning
accents:
There are several accent filters that will remove accents from your
tokens. If the Hunspell filter factory requires the accents, then simply
add the accent filters after Hunspell in your index and query fil
On Thu, 2012-02-09 at 23:45 +0100, alessio crisantemi wrote:
> hi all,
> I would like to index in Solr my PDF files which are included in my directory c:\myfile\
>
> so, I add on my solr/conf directory the file data-config.xml like the
> following:
>
>
>
>
>
> *0*
"""
DIH hasn't even retrieved any dat
n lib/)
>
> Do I also need to add ... and where?
>
> Thanks.
> Alex.
>
> -----Original Message-----
> From: Chantal Ackermann
> To: solr-user
> Sent: Fri, Jan 13, 2012 1:52 am
> Subject: Re: can solr automatically search for diff
Hi Dipti,
just to make sure: are you aware of
http://wiki.apache.org/solr/DisMaxQParserPlugin
This will handle the user input in a very conventional and user-friendly
way. You just have to specify on which fields you want it to search.
With the 'mm' parameter you have a powerful option to speci
Hi wunder,
for us, it works with internal dots when specifying the properties in
$SOLR_HOME/[core]/conf/solrcore.properties:
like this:
db.url=xxx
db.user=yyy
db.passwd=zzz
$SOLR_HOME/[core]/conf/data-config.xml:
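presumably with a dataSource element along these lines (the driver class
is a placeholder):
<dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver"
            url="${db.url}" user="${db.user}" password="${db.passwd}"/>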
Cheers,
Chantal
On Sat, 2012-01-21 at 01:01 +0100, Walter Underwood wrote:
>
Hi Alex,
for me, ICUFoldingFilterFactory works very well. It does lowercasing and
removes diacritics (that is the term for umlauts and accents on letters -
punctuation means commas, periods etc.). It will work for any
language, not only German. And it will also handle apostrophes as in
"C'est
Thanks, Erick! That sounds great. I really do have to upgrade.
Chantal
On Sun, 2012-01-01 at 16:42 +0100, Erick Erickson wrote:
> Chantal:
>
> bq: The problem with the wildcard searches is that the input is not
> analyzed.
>
> As of 3.6/4.0, this is no longer entirely true. Some analysis is
>
The problem with the wildcard searches is that the input is not
analyzed. For English, this might not be such a problem (except if you
expect case insensitive search). But then again, you don't get that with
LIKE, either. Ngrams bring that and more.
What I think is often forgotten when comparing '
Hi Ahmed,
if you have a multi core setup, you could change the file
programmatically (e.g. via XML parser), copy the new file to the
existing one (programmatically, of course), then reload the core.
I haven't reloaded the core programmatically, yet, but that should be
doable via SolrJ. Or - if y
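For reference, a core reload can also be triggered over HTTP via the
CoreAdmin API - a sketch assuming the default admin path and a core named
core0:
http://localhost:8983/solr/admin/cores?action=RELOAD&core=core0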
Hi Shawn,
maybe the requests that fail have a certain pattern - for example that
they are longer than all the others.
Chantal
Never would have thought that MS could help me earn such honours...
;D
On Tue, 2011-12-20 at 12:57 +0100, PeterKerk wrote:
> Chantal...you are the queen! :p
> That was it, I downgraded to 6.27 and now it works again...thank god!
>
>
> --
> View this message in context:
> http://lucene.472066.n3
You could also create a single index and use a field "user" to filter
results for only a single user. This would also allow for statistics
over the complete base.
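For example, with a filter query per user (field and value are assumptions):
fq=user:user42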
Chantal
On Tue, 2011-12-20 at 12:43 +0100, graham wrote:
> Hi,
>
> I'm a complete newbie and currently at the stage of wondering w
Hi Shawn,
the exception indicates that the connection was lost. I'm sure you
figured that out for yourself.
Questions:
- is that specific server instance really running? That is, can you
reach it via browser?
- If yes: how is your connection pool configured and how do you
initialize it? More spec
DIH does not simply fail. Without more information, we can only
guess.
As you're using MS SQL Server, maybe you ran into this?
http://blogs.msdn.com/b/jdbcteam/archive/2011/11/07/supported-java-versions-november-2011.aspx
Would be a problem caused by certain java versions.
Have you turned the
Hi Peter,
the most probable cause is that your database query returns no results.
Have you run the query that DIH is using directly on your database?
In the output you can see that DIH has fetched 0 rows from the DB. Maybe
your query contains a restriction that suddenly had this effect - like a
r
Hi Ben,
what I understand from your post is:
Advertiser (1) <-> (*) Advert
(one-to-many where there can be 50,000 per single Advertiser)
Your index entity is based on Advert which means that there can be
50,000 documents in the index that need to be changed if a field of an
Advertiser is updated
Hi Torsten,
some more information would help us to help you:
- does calling /apps/solrslave/admin/ return the Admin Homepage?
- what is the path to your SOLR_HOME
- where in the filesystem are solrconfig.xml and schema.xml (even if
this sounds redundant, maybe they are just misplaced)
- their read
Hi Yavar,
my experience with similar problems was that there was something wrong
with the database connection or the database.
Chantal
On Wed, 2011-11-23 at 11:57 +0100, Husain, Yavar wrote:
> I am using Solr 1.4.1 on Windows/MS SQL Server and am using DIH for importing
> data. Indexing and al
In solrconfig.xml, set the xsltCacheLifetimeSeconds property of the
XSLTResponseWriter to the desired value, inside its queryResponseWriter
element (in this example, 6000 seconds):
<int name="xsltCacheLifetimeSeconds">6000</int>
On Mon, 2011-11-14 at 15:31 +0100, vrpar...@gmail.com wrote:
> Hello All,
>
> i am using xslt to transform solr xml response, when made search;getting
Hi Victor,
your wages hopefully cost more than disk space does, nowadays?
I don't want to spoil the fun in thinking of new challenges when it
comes to SOLR, but from a project management point of view I would buy
some more storage and get it done with copyfield and two requesthandlers
that c
Hi Jeremy,
The xsl files go into the subdirectory /xslt/ (you have to create that)
in the /conf/ directory of the core that should return the transformed
results.
So, if you have a core /myCore/ that you want to return transformed
results you need to put the example.xsl into:
$SOLR_HOME/myCore
source
core does not include any empty fields because it seems not to be
possible to handle that in XSLT (at least from what I found out
searching the internet).
Chantal
On Mon, 2011-10-10 at 16:07 +0200, Gora Mohanty wrote:
> On Mon, Oct 10, 2011 at 4:40 PM, Chantal Ackermann
> wrote:
> [...]
Hi there,
I have been using cores to build up new cores (because of various
reasons). (I am not using SOLR as data storage, the cores are re-indexed
frequently.)
This solution works for releases 1.4 and 3 as it does not use the
SolrEntityProcessor.
To load data from another SOLR core and populat
Hi Stefan,
I'm using Firefox 3.6.20 and Chromium 12.0.742.112 (90304) Ubuntu 10.10.
The "undefined" appears with both of them.
Chantal
On Wed, 2011-08-24 at 14:09 +0200, Stefan Matheis wrote:
> Hi Chantal,
>
> On Wed, Aug 24, 2011 at 1:43 PM, Chantal Ackermann
r output, that could not be mapped
> :/ i just checked this with the example schema .. so there may be
> cases which are not correct.
>
> Regards
> Stefan
>
> On Wed, Aug 24, 2011 at 10:48 AM, Chantal Ackermann
> wrote:
> > Hi all,
> >
> > the Schema Browser
Hi all,
the Schema Browser in the SOLR Admin shows me the following information:
"""
Field: title
Field Type: string
Properties: Indexed, Stored, Multivalued, Omit Norms, undefined, Sort
Missing Last
Schema: Indexed, Stored, Multivalued, Omit Norms, undefined, Sort
Missing Last
Index: Indexe
Hi Michael,
have you considered the DataImportHandler?
You could use the the LineEntityProcessor to create fields per line and
then copyField to collect everything for the AllData field.
http://wiki.apache.org/solr/DataImportHandler#LineEntityProcessor
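A sketch of the DIH side (file path and field names are placeholders):
<dataSource type="FileDataSource" name="file"/>
<entity name="lines" processor="LineEntityProcessor"
        url="/path/to/input.txt" dataSource="file">
  <field column="rawLine" name="line"/>
</entity>
plus, in schema.xml:
<copyField source="line" dest="AllData"/>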
Chantal
On Tue, 2011-08-23 at 12:28 +02
Hi g,
ok, I understand your problem, now. (Sorry for answering that late.)
I don't think PlainTextEntityProcessor can help you. It does not take a
regex. LineEntityProcessor does but your record elements probably do not
come on their own line each and you wouldn't want to depend on that,
anyway.
Hi g,
have a look at the PlainTextEntityProcessor:
http://wiki.apache.org/solr/DataImportHandler#PlainTextEntityProcessor
you will have to call the URL twice that way, but I don't think you can
get the complete document (the root element with all structure) via
xpath - so the XPathEntityProcesso
> Please tick one of the options below with an [X]:
>
> [ ] I always use the JDK logging as bundled in solr.war, that's perfect
> [X] I sometimes use log4j or another framework and am happy with
> re-packaging solr.war
Actually: not so happy, because our operations team has to repackage it.
B
Hi all,
does anyone have a successful setup (=pom.xml) that specifies the
Hudson snapshot repository :
https://builds.apache.org/hudson/job/Lucene-Solr-Maven-3.x/lastStableBuild/artifact/maven_artifacts
(or that for trunk)
and entries for any solr snapshot artifacts which are then found by
Mave
ra Mohanty wrote:
> On Thu, Mar 10, 2011 at 8:42 PM, Chantal Ackermann
> wrote:
> [...]
> > Is this supposed to work at all? I haven't found anything so far on the
> > net but I could have used the wrong keywords for searching, of course.
> >
> > As answer t
ty? Could you paste your dataimport & the relevant part of the
> logging-output?
>
> Regards
> Stefan
>
> On Thu, Mar 10, 2011 at 4:12 PM, Chantal Ackermann
> wrote:
> > Dear all,
> >
> > in DIH, is it possible to have two sibling entities where:
> &
Dear all,
in DIH, is it possible to have two sibling entities where:
- the first one is the root entity that creates the documents by
iterating over a table that has one row per document.
- the second one is executed after the completion of the first entity
iteration, and it provides more data th
SOLR-2020 addresses upgrading to HttpComponents (from HttpClient). I
have had no time to work more on it, yet, though. I also don't have that
much experience with the new version, so any help is much appreciated.
Cheers,
Chantal
On Tue, 2010-12-07 at 18:35 +0100, Yonik Seeley wrote:
> On Tue, Dec
Hi Jörg,
you could use the DataImportHandler's XPathEntityProcessor. There you
can specify for each Solr field the XPath at which its value is stored
in the original file (your first example snippet).
The value of field "FIEL_ITEMS_DATEINAME" for example would have the
XPath //fie...@name='DATEIN
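A sketch of such an entity (forEach, URL, and names are assumptions):
<entity name="file" processor="XPathEntityProcessor" dataSource="file"
        url="/path/to/input.xml" forEach="/fields">
  <field column="FIELD_ITEMS_DATEINAME" xpath="/fields/field[@name='DATEINAME']"/>
</entity>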
99 might have what you want.
>
> https://issues.apache.org/jira/browse/SOLR-1499
>
> James Dyer
> E-Commerce Systems
> Ingram Content Group
> (615) 213-4311
>
>
> -----Original Message-----
> From: Chantal Ackermann [mailto:chantal.ackerm...@btelligent.de]
>
Dear all,
my use case is:
Creating an index using DIH where the sub-entity is querying another
SOLR index for more fields.
As there is a very convenient attribute "useSolrAddSchema" that would
spare me listing all the fields I want to add from the other index, I'm
looking for a way to get the sea
Hi Allistair,
On Wed, 2010-09-29 at 15:37 +0200, Allistair Crossley wrote:
> Hello list,
>
> I am implementing a directory using Solr. The user is able to search with a
> free-text query or 2 filters (provided as pick-lists) for country. A
> directory entry only has one country.
>
> I am usin
> entry for auto-suggest per document, do let me know. Cause I haven't
> been able to come with one.
>
> Jonathan
>
> Chantal Ackermann wrote:
> > What works very good for me:
> >
> > 1.) Keep the tokenized field (KeywordTokenizerFilter,
> > WordDelimit
On Wed, 2010-09-22 at 20:14 +0200, Arunkumar Ayyavu wrote:
> Thanks for the responses. Now, I included the EdgeNGramFilter. But, I get
> the following results when I search for "canon pixma".
> Canon PIXMA MP500 All-In-One Photo Printer
> Canon PowerShot SD500
>
> As you can guess, I'm not expecti
What works very well for me:
1.) Keep the tokenized field (KeywordTokenizerFilter,
WordDelimiterFilter) (like you described you had)
2.) create an additional field that uses the String type with the
same content (use copyField to fill either; see the sketch below)
3.) use facet.prefix instead of terms.prefix fo
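A sketch of steps 2.) and 3.) in schema.xml and at query time (field
names and the prefix are assumptions):
<field name="sugg_tokens" type="text" indexed="true" stored="true"/>
<field name="sugg_exact" type="string" indexed="true" stored="false"/>
<copyField source="sugg_tokens" dest="sugg_exact"/>
facet=true&facet.field=sugg_exact&facet.prefix=cano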
hi Stefan
> users can send privates messages, the selection of recipients is done via
> auto-complete. therefore we need to restrict the possible results based on
> the users confirmed contacts - but i have absolutely no idea how to do that
> :/
Add all confirmed contacts to the index, and use it
Hi Ravi,
with dismax, use the parameter "q.alt" which expects standard lucene
syntax (instead of "q"). If "q.alt" is present in the query, "q" is not
required. Add the parameter "qt=dismax".
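For example (the handler name and query are assumptions):
select?qt=dismax&q.alt=category:books&rows=10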
Chantal
On Thu, 2010-09-16 at 06:22 +0200, Ravi Kiran wrote:
> Hello Mr.Rochkind,
>
Hi Andre,
changing the entity in your index from donor to gift changes of course
the scope of your search results. I found it helpful to re-think such
change from that "other" side (the result side).
If the users of your search application look for individual gifts, in
the end, then changing the i
Hi Shaun,
you create the SolrServer for multicore by just adding the core name to the
URL. You don't need to add anything with SolrQuery.
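// note (an assumption about the base URL): URL(URL, String) resolution
// replaces the last path segment unless solrBaseUrl ends with a trailing "/"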
URL url = new URL(new URL(solrBaseUrl), coreName);
CommonsHttpSolrServer server = new CommonsHttpSolrServer(url);
Concerning the "default" core thing - I wouldn'
Hi Ahmed,
fields that are empty do not impact the index. It's different from a
database.
I have text fields for different languages and per document there is
always only one of the languages set (the text fields for the other
languages are empty/not set). It works all very well and fast.
I wonder
This is probably what you want?
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.LengthFilterFactory
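For example, inside an analyzer chain (the length limits are assumptions):
<filter class="solr.LengthFilterFactory" min="1" max="50"/>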
On Thu, 2010-07-29 at 15:44 +0200, Paul Dlug wrote:
> Is there a filter available that will remove large tokens from the
> token stream? Ideally something configurable to a chara
Hi Gora,
your suggestion is good.
Two thoughts:
1. if both of the tables you are joining are in the same database under
the same user you might want to check why the join is so slow. Maybe you
just need to add an index on a column that is used in your WHERE
clauses. Joins should not be slow.
2.
at
works for my ScriptTransformer.)
Are you sure that you cannot change the SOLR results at query time
according to your needs? Maybe you should ask for that, first (ask for X
instead of Y...).
Cheers,
Chantal
>
> Thanks for sharing ideas.
>
> - Mitch
>
You could use org.apache.solr.handler.JsonLoader.
That one uses org.apache.noggit.JSONParser internally.
I've used the JacksonParser with Spring.
http://json.org/ lists parsers for different programming languages.
Cheers,
Chantal
On Wed, 2010-07-28 at 15:08 +0200, MitchK wrote:
> Hello ,
>
> S
Hi Lance!
On Wed, 2010-07-28 at 02:31 +0200, Lance Norskog wrote:
> Should this go into the trunk, or does it only solve problems unique
> to your use case?
The solution is generic but is an extension of XPathEntityProcessor
because I didn't want to touch the solr.war. This way I can deploy the
e
make sure to set stored="true" on every field you expect to be returned
in your results for later display.
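For example (field name and type are placeholders):
<field name="title" type="text" indexed="true" stored="true"/>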
Chantal
Hi Mitch,
thanks for the code. Currently, I've got a different solution running
but it's always good to have examples.
> > I've realized
> > that I have to throw an exception and add the onError attribute to the
> > entity to make that work.
> >
> I am curious:
> Can you show how to make a meth
Hi Mitch,
> New idea:
> Create a method which returns the query-string:
>
> returnString(theVIP)
> {
>     if ( theVIP != null && theVIP != "" )
>     {
>         return "a query-string to find the vip"
>     }
>     else
>     {
>         return "SELECT 1" // you nee
ps?
>
> Hopefully, the variable resolver is able to resolve something like
> ${dih.functions.getReplacementIfNeeded(prog.vip).
>
> Kind regards,
> - Mitch
>
>
>
> Chantal Ackermann wrote:
> >
> > Hi,
> >
> > my use case is the following:
>
Hi,
IMHO you can do this with date range queries and (date) facets.
The DateMathParser will allow you to normalize dates to minutes/hours/days.
If you hit a limit there, then just add a field with an integer for
either minute/hour/day. This way you'll lose the month information - which
is sometimes what
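A sketch with classic date faceting (field name and ranges are made up;
note the "+" in the gap must be URL-encoded as %2B):
facet=true&facet.date=created&facet.date.start=NOW/DAY-30DAYS&facet.date.end=NOW/DAY&facet.date.gap=%2B1DAY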