On 23 June 2013 01:31, Mysurf Mail stammail...@gmail.com wrote:
I am trying to model my DB using this example from the Solr wiki:
http://wiki.apache.org/solr/DataImportHandler#Full_Import_Example
I have a table called item and a table called features with
id, featureName, description.
Here is the updated
On Sat, Jun 22, 2013, at 08:41 PM, Lance Norskog wrote:
Accumulo is a BigTable/Cassandra style distributed database. It is now
an Apache Incubator project. In the README we find this gem:
Synchronize your accumulo conf directory across the cluster. As a
precaution against mis-configured
Thanks for your comment.
What I need is to model it so that I can connect the featureName
with the feature description of the.
Currently, if an item has 3 features I get two lists, each three elements long.
But then I need to correlate them.
On Sun, Jun 23, 2013 at 9:25 AM, Gora Mohanty
Any ideas that could help with this?
2013/6/22 Erick Erickson erickerick...@gmail.com
Unfortunately, from here I need to leave it to people who know
the highlighting code
Erick
On Wed, Jun 19, 2013 at 8:40 PM, Floyd Wu floyd...@gmail.com wrote:
Hi Erick,
multivalue is my typo, thanks for
Hi Isaac,
ComplexPhrase-4.2.1.zip should work with Solr 4.2.1. The zipball contains a
ReadMe.txt file with instructions.
You could try it with higher Solr versions too. If it does not work, please
let us know.
https://issues.apache.org/jira/secure/attachment/12579832/ComplexPhrase-4.2.1.zip
When in doubt... flatten.
Seriously.
Multivalued fields are an extremely useful feature of Solr, but they MUST be
used in moderation and with careful discipline. They simply are not the kind
of 'get out of (data modeling) jail free' and 'pass (data modeling) Go'
card that too many people
When we have a user query like keyword1 OR keyword2, we can find the count
of each keyword using the following params.
q= keyword1 OR keyword2
facet.query=keyword1
facet.query=keyword2
facet=true
How do we do a date range facet that will return results for each keyword
faceted by date range ?
Just do separate faceted query requests:
q= keyword1
facet.range=date_field_name
...
facet=true
q= keyword2
facet.range=date_field_name
...
facet=true
Where the ... means fill in the additional facet.range.xxx parameters
(start, end, gap, etc.)
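A minimal sketch, using only the Python standard library, of assembling the two separate per-keyword requests described above. The endpoint URL and the facet.range.start/end/gap values are placeholder assumptions, not taken from the thread:

```python
from urllib.parse import urlencode

def range_facet_params(keyword, field="date_field_name"):
    """Build the query string for one per-keyword date-range facet request."""
    params = [
        ("q", keyword),
        ("facet", "true"),
        ("facet.range", field),
        # the additional facet.range.xxx parameters mentioned above
        # (illustrative values only):
        ("facet.range.start", "NOW/YEAR-1YEAR"),
        ("facet.range.end", "NOW"),
        ("facet.range.gap", "+1MONTH"),
    ]
    return urlencode(params)

# one request per keyword, issued separately
for kw in ("keyword1", "keyword2"):
    url = "http://localhost:8983/solr/select?" + range_facet_params(kw)
    print(url)
```

Each keyword gets its own request, so the range counts come back already separated per keyword.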
-- Jack Krupansky
-Original Message-
That's exactly how we are doing it now. However, we need to offer the search
over slow networks, hence I was wondering if there's a way to reduce server
round-trips.
On Sun, Jun 23, 2013 at 7:14 PM, Jack Krupansky j...@basetechnology.com wrote:
Just do separate faceted query requests:
q= keyword1
Is there a way to write this query using pivots? I will try it out and post here.
I'd appreciate it if someone could point to a way.
On Sun, Jun 23, 2013 at 7:53 PM, Sourajit Basak sourajit.ba...@gmail.com wrote:
That's exactly how we are doing it now. However, we need to offer the search
over slow networks,
If your keywords are the value in some other field, then, yes, you can use
facet pivots:
facet.pivot=keyword_field,date_field
(See the example in the book! Or on the wiki.)
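For intuition, facet.pivot=keyword_field,date_field returns nested counts grouped first by keyword, then by date. A rough illustration of the counts it computes, using made-up documents (the field names match the pivot above, everything else is invented):

```python
from collections import Counter

# made-up documents standing in for an index
docs = [
    {"keyword_field": "keyword1", "date_field": "2013-05"},
    {"keyword_field": "keyword1", "date_field": "2013-06"},
    {"keyword_field": "keyword1", "date_field": "2013-06"},
    {"keyword_field": "keyword2", "date_field": "2013-06"},
]

# the pivot facet effectively counts (keyword, date) pairs
pivot = Counter((d["keyword_field"], d["date_field"]) for d in docs)
```

The real response nests these counts under each keyword value, but the arithmetic is the same.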
-- Jack Krupansky
-Original Message-
From: Sourajit Basak
Sent: Sunday, June 23, 2013 10:29 AM
To:
Do the requests in parallel (separate threads) and then the performance
won't be impacted significantly.
-- Jack Krupansky
-Original Message-
From: Sourajit Basak
Sent: Sunday, June 23, 2013 10:23 AM
To: solr-user@lucene.apache.org
Subject: Re: edismax: date range facet with queries
I want to create a search engine for my computer. My doubt is, can I crawl my
G:/ or any drive on my network to search for any string in any file (any type
of file like XML, .log, .properties) using Solr? If yes, please guide me. I
went through the tutorials given on the Solr site but could not find them
We are using edismax; the keywords can be in any of the 'qf' fields
specified. Assume 'qf' to be a single fieldA; then the following doesn't
seem to make sense.
q=keyword1 OR keyword2
facet=true
facet.pivot=fieldA,date_field
The purpose is to display the count of the matches of keyword1
On 23 June 2013 01:27, Sourabh107 sourabh.jain@gmail.com wrote:
I want to create a search engine for my computer. My doubt is, can I crawl my
G:/ or any drive on my network to search for any string in any file (any type
of file like XML, .log, .properties) using Solr? If yes, please guide me. I
To be clear, the Data Import Handler is certainly capable of indexing
directly from the file system that Solr is running on. DIH is not just for
databases.
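For example, a data-config.xml along these lines walks a directory tree and indexes the plain-text content of each file. This is a sketch only: the baseDir, the file-name pattern, and the id/text field names are assumptions that must match your schema.

```xml
<dataConfig>
  <dataSource type="FileDataSource" encoding="UTF-8"/>
  <document>
    <entity name="files" processor="FileListEntityProcessor"
            baseDir="/path/to/docs" fileName=".*\.(xml|log|properties)"
            recursive="true" rootEntity="false" dataSource="null">
      <field column="fileAbsolutePath" name="id"/>
      <entity name="content" processor="PlainTextEntityProcessor"
              url="${files.fileAbsolutePath}">
        <field column="plainText" name="text"/>
      </entity>
    </entity>
  </document>
</dataConfig>
```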
Sorry, but I haven't written the DIH section of my book yet! Maybe I'll try
to do that by the end of the summer, if there is enough
You don't even need threads, async HTTP works just fine.
1. Send request A
2. Send request B
3. Wait for response A
4. Wait for response B
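A toy asyncio sketch of that ordering; fetch() just sleeps to stand in for a real HTTP call, and all names here are made up:

```python
import asyncio

async def fetch(name, delay):
    await asyncio.sleep(delay)  # stands in for network I/O
    return "response " + name

async def main():
    # steps 1 and 2: send both requests without waiting
    task_a = asyncio.create_task(fetch("A", 0.02))
    task_b = asyncio.create_task(fetch("B", 0.01))
    # steps 3 and 4: wait for each response; total time is roughly
    # max(delays), not the sum, since the requests overlap
    return await task_a, await task_b

a, b = asyncio.run(main())
```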
wunder
On Jun 23, 2013, at 7:41 AM, Jack Krupansky wrote:
Do the requests in parallel (separate threads) and then the performance won't
be impacted
Asif:
Thanks, this is great info and may add to the priority of making this
configurable.
I raised a JIRA, see: https://issues.apache.org/jira/browse/SOLR-4956
and feel free to add anything you'd like or correct anything I didn't get
right.
Best
Erick
On Sat, Jun 22, 2013 at 10:16 PM, Asif
Can somebody help with this one, please?
On Fri, Jun 21, 2013 at 10:36 PM, Joe Zhang smartag...@gmail.com wrote:
A quite standard configuration of Nutch seems to automatically map url
to id. Two questions:
- Where is such a mapping defined? I can't find it anywhere in
nutch-site.xml or
Thanks!
1. shards.tolerant=true works; shouldn't this parameter be the default?
2. Regarding ZK: yes, it should be outside the Solr nodes, and I am
evaluating what difference it makes.
3. Regarding usecase: Daily queries will be about 100k to 200k, not much.
The total data to be indexed is about
I've just taken a peek at the src for DocTransformers. They get given a
TransformContext. That context contains the query and a few other bits
and pieces.
If it contained the response, DocTransformers would be able to do output
restructuring. The best example is hit highlighting. If you did:
Add the passthrough dynamic field to your Solr schema, and then see what
fields get passed through to Solr from Nutch. Then, add the missing fields
to your Solr schema and remove the passthrough.
<dynamicField name="*" type="string" indexed="true" stored="true"
multiValued="true"/>
Or, add Solr
Ahmet, it looks great!
Can you tell us why this code hasn't been committed into the lucene+solr trunk?
On Sun, Jun 23, 2013 at 2:28 PM, Ahmet Arslan iori...@yahoo.com wrote:
Hi Isaac,
ComplexPhrase-4.2.1.zip should work with Solr 4.2.1. The zipball contains a
ReadMe.txt file with instructions.
Hi,
The zip version is meant to be consumed as a Solr plugin. This one uses a
modified version of o.a.l.queryparser.complexPhrase.ComplexPhraseQueryParser
(the modifications enable 1) configuration for ordered versus unordered
phrase queries, and 2) fielded queries). These modifications should be
Update: I tested it and it looks fine now.
Thanks a lot for your help,
Sven
On Fri, Jun 21, 2013 at 3:39 PM, Sven Stark sven.st...@m-square.com.au wrote:
I think you're onto it. Our schema.xml had it
<field name="_version_" type="string" indexed="true" stored="true"
multiValued="false"/>
I'll
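For reference, the stock Solr 4 example schema defines _version_ as a long rather than a string, which is presumably the fix being applied here:

```xml
<field name="_version_" type="long" indexed="true" stored="true"/>
```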
On 23 June 2013 14:07, Mysurf Mail stammail...@gmail.com wrote:
Thanks for your comment.
What I need is to model it so that I can connect the featureName
with the feature description of the.
Currently, if an item has 3 features I get two lists, each three elements long.
But then I need to
Hi,
Can I write the query this way?
I need to append *** to a returned field; is this possible?
account_number has the last 4 characters
account_name has the full name of the account
I need to append *** in front of the account number. Instead of handling it
in the front end, I thought to handle it in Solr.
How can we
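The message is cut off, but if the goal is simply to prefix the stored last-4 value with a mask before display, a tiny client-side sketch (the function name is made up):

```python
def mask_account(last4: str) -> str:
    """Prefix the stored last-4 account characters with a *** mask."""
    return "***" + last4

masked = mask_account("1234")
```

Doing this inside Solr itself, rather than in the client, would take something like a custom DocTransformer.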
Hi,
Some more unsolicited feedback since my last experience setting up Solr…
I am concerned that having a duplicate copy of a large part of my database up
on the internet at a guessable location, available for the world to see, is
probably not such a good idea. So I went to look up the various