On 1/17/07, Eivind Hasle Amundsen <[EMAIL PROTECTED]> wrote:
What I am really talking about, is this: There is a growing market for
simple search solutions that can work out of the box, and that can still
be customized. Something that:
- organizations can use on their network, out of the box
- o
Apache Wiki wrote:
* move website
* checkout in new location (from the new svn location too)
Note that you can update the .htaccess file in
/www/incubator.apache.org/solr to redirect the old site to the new site.
http://svn.apache.org/repos/asf/incubator/public/trunk/site-publish/.ht
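The .htaccess rule itself isn't shown above; as a sketch, a mod_alias redirect of roughly this shape would send old incubator URLs to the new site (the target URL is assumed to be Solr's post-graduation home, not quoted from the message):

```apache
# Hypothetical sketch only: permanently redirect the old incubator
# pages to the new Solr site. Real paths are in the links above.
Redirect permanent /solr http://lucene.apache.org/solr/
```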
(...) any enterprise interested
in having a serious search solution (i.e. buy FAST, Autonomy or do
open source lucene) will want a custom solution (...) then
let an integrator/consultancy-firm/IT department do the actual
implementation. So
a search distribution as pointed out is somewhat meani
Solr's source in subversion has moved within the ASF repository to
https://svn.apache.org/repos/asf/lucene/solr/
(Thanks Doug!)
The easiest way to change your working directories is to use "svn switch".
For example, if you have the "trunk" of solr checked out, cd to that
directory and execute
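The command itself is cut off in the archive; assuming an existing working copy of the old incubator trunk, it would presumably be along these lines (new URL from the announcement above):

```sh
cd solr-trunk   # your existing trunk working copy
svn switch https://svn.apache.org/repos/asf/lucene/solr/trunk
```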
I have a requirement wherein the documents that are retrieved based on the
similarity computation
are bucketed and resorted based on user score.
An example -
Let us say a search returns the following data set -
Doc ID   Lucene score   User score
1000     1000           125
1000     900
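A minimal sketch of that bucket-then-resort idea, assuming score buckets of a fixed width; every name here (ScoredDoc, BucketRerank) is invented for illustration and is not a Solr or Lucene API:

```java
import java.util.*;

// Hedged sketch of "bucket and resort by user score". Illustrative
// names only; not a Solr/Lucene API.
class ScoredDoc {
    final int docId;
    final float luceneScore;
    final int userScore;

    ScoredDoc(int docId, float luceneScore, int userScore) {
        this.docId = docId;
        this.luceneScore = luceneScore;
        this.userScore = userScore;
    }
}

class BucketRerank {
    // Group hits into Lucene-score buckets of the given width, then
    // order by bucket (best first) and by user score within a bucket.
    static List<ScoredDoc> rerank(List<ScoredDoc> hits, float bucketWidth) {
        List<ScoredDoc> out = new ArrayList<>(hits);
        out.sort(Comparator
            .comparingInt((ScoredDoc d) -> (int) (d.luceneScore / bucketWidth))
            .reversed()
            .thenComparing(
                Comparator.comparingInt((ScoredDoc d) -> d.userScore).reversed()));
        return out;
    }
}
```

With a bucket width of 100, two hits scoring 950 and 980 land in the same bucket, so the one with the higher user score surfaces first.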
Apache Wiki wrote:
* have everyone update their subversion working directories (remember to
update SVN paths in IDEs too, etc)
Note that 'svn switch' makes this easy.
Doug
[
https://issues.apache.org/jira/browse/SOLR-104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12465563
]
Ryan McKinley commented on SOLR-104:
removed getRequestParser() from Handler interface.
using ':' in the URL to sp
[
https://issues.apache.org/jira/browse/SOLR-104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan McKinley updated SOLR-104:
---
Attachment: DispatchFilter.patch
> Update Plugins
> --
>
> Key: SOLR-104
>
Sorry for the "flame", but I've used Spring on 2 large projects and it
worked out great. You should check out some of the GUIs that help manage
the XML configuration files, if configuration is the reason your team
thought it was a nightmare (we broke ours up to help).
Jeryl Cook
On 1/17/07, Chris Hostetter <[EMAIL PROTECTED]> wrote:
: I'm not sure i understand preProcess( ) and what it gets us.
it gets us the ability for a RequestParser to pull out the raw
InputStream from the HTTP POST body, and make it available to the
RequestHandler as a ContentStream and/or it can wait until the servlet
has parsed the URL t
On 1/17/07, Chris Hostetter <[EMAIL PROTECTED]> wrote:
: OK, here's the TODO list I can think of.
i added this as a new section on the TaskList (like we did for the first
release) so it can evolve as people think of other things that need done
(or do things on the list)
Hopefully it won't tak
-Hoss
[
https://issues.apache.org/jira/browse/SOLR-93?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12465539
]
Bertrand Delacretaz commented on SOLR-93:
-
I tried that on minotaur as well...and it works for me (apart from th
[
https://issues.apache.org/jira/browse/SOLR-104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12465538
]
Ryan McKinley commented on SOLR-104:
I attached 'DispatchFilter.patch' This extracts some stuff from my previous
I also think it is too early to move to 1.6. Only Sun has released their
1.6 JVM.
Bill
On 1/17/07, Bertrand Delacretaz <[EMAIL PROTECTED]> wrote:
On 1/17/07, Thorsten Scherler <[EMAIL PROTECTED]> wrote:
> ...Should I use 1.6 for a patch or above mentioned libs?...
IMHO moving to 1.6 is way
OK, here's the TODO list I can think of.
I'll start with figuring out how to do the svn move.
-Yonik
x Graduate
x lucene, solr, incubator status website update (news section)
- update incubator status page (move solr to graduated projects,
remove from project.xml, etc)
- move svn
- svn permission
I'm not sure i understand preProcess( ) and what it gets us.
I like the model that
1. The URL path selects the RequestHandler
2. RequestParser = RequestHandler.getRequestParser() (typically from
its default params)
3. SolrRequest = RequestParser.parse( HttpServletRequest )
4. handler.handleRe
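That four-step model could be sketched roughly as follows. The interface names echo the discussion, but every signature here is an assumption rather than Solr's actual plugin API, and a plain parameter map stands in for HttpServletRequest to keep the sketch self-contained:

```java
import java.util.*;

// Illustrative sketch of the four-step dispatch model quoted above.
// Signatures are assumptions, not Solr's real API.
interface SolrRequest {
    Map<String, String> params();
}

interface RequestParser {
    // Step 3: turn the raw HTTP request into a SolrRequest.
    SolrRequest parse(Map<String, String> rawHttpParams);
}

interface RequestHandler {
    RequestParser getRequestParser();       // step 2
    String handleRequest(SolrRequest req);  // step 4
}

class Dispatcher {
    private final Map<String, RequestHandler> handlersByPath = new HashMap<>();

    void register(String path, RequestHandler handler) {
        handlersByPath.put(path, handler);
    }

    // Lookup is assumed to succeed for a registered path.
    String dispatch(String path, Map<String, String> rawParams) {
        RequestHandler handler = handlersByPath.get(path);   // 1. URL path selects handler
        RequestParser parser = handler.getRequestParser();   // 2. handler picks its parser
        SolrRequest req = parser.parse(rawParams);           // 3. parser builds the request
        return handler.handleRequest(req);                   // 4. handler handles it
    }
}
```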
Actually, i have to amend that ... it occurred to me in my sleep last night
that calling HttpServletRequest.getInputStream() wasn't safe unless we
*know* the RequestParser wants it, and will close it if it's non-null, so
the API for preProcess would need to look more like this...
interface Po
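The interface itself is truncated above, so the following is only a guess at the contract being described, with invented names: preProcess is offered the raw POST body before URL parsing, and a non-null return signals that the parser claimed the stream (and that it must eventually be closed).

```java
import java.io.*;

// Speculative reading of the amended preProcess contract; names are
// illustrative, not the real (truncated) interface.
interface ContentStream {
    InputStream getStream() throws IOException;
}

interface RequestPreProcessor {
    // Return a ContentStream over the body if this parser wants the raw
    // stream, or null so the servlet can parse the URL params instead.
    ContentStream preProcess(InputStream rawPostBody) throws IOException;
}
```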
(...) the point being
that once they've got you using a monolithic application, it's a lot
harder to stop using the whole thing all at once, than it would be for you
to stop using 1 of N mini-applications they provide.
Well, FAST is composed of many small, modular products that can be
replaced
On 1/17/07, Yonik Seeley <[EMAIL PROTECTED]> wrote:
Solr has just graduated from the Incubator, and has been accepted as a
Lucene sub-project!...
Congratulations, and big thanks to all involved!
...I have a feeling we're just getting started :-)..
Yes, and the mailing list stats at
http://p
On Wed, 2007-01-17 at 10:07 -0500, Yonik Seeley wrote:
> Solr has just graduated from the Incubator, and has been accepted as a
> Lucene sub-project!
> Thanks to all the Lucene and Solr users, contributors, and developers
> who helped make this happen!
>
Yeah congrats to the whole community and e
Solr has just graduated from the Incubator, and has been accepted as a
Lucene sub-project!
Thanks to all the Lucene and Solr users, contributors, and developers
who helped make this happen!
I have a feeling we're just getting started :-)
-Yonik
At 11:48 PM -0800 1/16/07, Chris Hostetter wrote:
>yeah ... once we have a RequestHandler doing that work, and populating a
>SolrQueryResponse with its result info, it
>would probably be pretty trivial to make an extremely bare-bones
>LegacyUpdateOutputWriter that only expected that simple amount o
Ryan McKinley wrote:
In addition, consider the case where you want to index a SVN
repository. Yes, this could be done in SolrRequestParser that logs in
and returns the files as a stream iterator. But this seems like more
'work' than the RequestParser is supposed to do. Not to mention you
woul
Chris Hostetter wrote:
i'm totally on board now ... the RequestParser decides where the streams
come from if any (post body, file upload, local file, remote url, etc...);
the RequestHandler decides what it wants to do with those streams, and has
a library of DocumentProcessors it can pick from t
On 1/17/07, Chris Hostetter <[EMAIL PROTECTED]> wrote:
...To put it another way: it's a lot easier for people to put reusable
components with clean APIs together in interesting ways, than it is for
people to extract reusable components with clean APIs from a monolithic
application
Very muc
: > 2) "contrib" code that runs as its own process to crawl documents and
: > send them to a Solr server. (maybe it parses them, or maybe it relies on
: > the next item...)
:
: Do you know FAST? It uses a step-by-step approach ("pipeline") in which
: all of these tasks are done. Much of it is tune
talking about the URL structure made me realize that the Servlet should
dictate the URL structure and the param parsing, but it should do it after
giving the RequestParser a crack at any streams it wants (actually i think
that may be a direct quote from JJ ... can't remember now) ... *BUT* the
Requ
data and wrote it out in the current update response format .. so the
current SolrUpdateServlet could be completely replaced with a simple URL
mapping...
/update --> /select?qt=xmlupdate&wt=legacyxmlupdate
Using the filter method above, it could (and i think should) be mapped to:
/update
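The alias could be sketched as a tiny rewrite step; the qt/wt values come straight from the message, but the helper class itself is hypothetical:

```java
import java.util.*;

// Sketch of treating the legacy /update URL as an alias for /select
// with fixed qt/wt params. Helper name is invented for illustration.
class LegacyUpdateMapping {
    // Returns the effective (path, params) after applying the
    // /update -> /select?qt=xmlupdate&wt=legacyxmlupdate alias.
    static Map.Entry<String, Map<String, String>> rewrite(
            String path, Map<String, String> params) {
        if (!"/update".equals(path)) {
            return Map.entry(path, params);
        }
        Map<String, String> p = new LinkedHashMap<>(params);
        p.put("qt", "xmlupdate");
        p.put("wt", "legacyxmlupdate");
        return Map.entry("/select", p);
    }
}
```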