[jira] Commented: (SOLR-258) Date based Facets

2007-07-12 Thread Pieter Berkel (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12512372
 ] 

Pieter Berkel commented on SOLR-258:


I've just tried this patch and the results are impressive!

I agree with Ryan regarding the naming of 'pre', 'post' and 'inner'; using 
simple, concrete words will make it easier for developers to understand the 
basic concepts.  At first I was a little confused about how the 'gap' parameter 
was used; perhaps a name like 'interval' would be more indicative of its purpose?

While on the topic of gaps / intervals, I can imagine a case where one might 
want facet counts over non-linear intervals, for instance obtaining results 
for: "Last 7 days", "Last 30 days", "Last 90 days", "Last 6 months".  
Obviously you can achieve this by setting facet.date.gap=+1DAY and then 
post-processing the results, but a much more elegant solution would be to allow 
"facet.date.gap" (or another suitably named param) to accept a 
(comma-delimited) set of explicit partition dates:

facet.date.start=NOW-6MONTHS/DAY
facet.date.end=NOW/DAY
facet.date.gap=NOW-90DAYS/DAY,NOW-30DAYS/DAY,NOW-7DAYS/DAY

It would then be trivial to calculate facet counts for the ranges specified 
above.
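The partition-boundary idea above could be sketched as follows. This is a hypothetical helper, not Solr code; the class and method names are illustrative, and the Instant arguments stand in for boundaries already resolved from date math like NOW-90DAYS/DAY:

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

public class DateRanges {
    // Build "lower TO upper" range labels from start/end plus interior
    // partition boundaries (assumed sorted ascending and strictly inside
    // the (start, end) interval).
    public static List<String> toRanges(Instant start, Instant end,
                                        List<Instant> boundaries) {
        List<Instant> edges = new ArrayList<>();
        edges.add(start);
        edges.addAll(boundaries);
        edges.add(end);
        List<String> ranges = new ArrayList<>();
        for (int i = 0; i + 1 < edges.size(); i++) {
            ranges.add(edges.get(i) + " TO " + edges.get(i + 1));
        }
        return ranges;
    }
}
```

With the three boundaries from the example, this yields the four buckets "Last 6 months to 90 days", "90 to 30 days", "30 to 7 days", and "last 7 days".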

It would be useful to make the 'start' and 'end' parameters optional.  If not 
specified, 'start' should default to the earliest stored date value, and 'end' 
should default to the latest stored date value (assuming that's possible).  
It should probably return a 400 if 'gap' is not set.

My personal opinion is that 'end' should be a hard limit: the last gap should 
never go past 'end'.  Given that the facet label is always generated from the 
lower value in the range, I don't think truncating the last 'gap' will cause 
problems; however, it may be helpful to return the actual date value for "end" 
if it was specified as an offset of NOW.

What might be a problem is when both the start and end dates are specified as 
offsets of NOW: the value of NOW may not be constant for both values.  In one 
of my tests, I set:

facet.date.start=NOW-12MONTHS
facet.date.end=NOW
facet.date.gap=+1MONTH

With some extra debugging output, I can see that the value of NOW is usually 
the same for both:

2006-07-13T06:06:07.397
2007-07-13T06:06:07.397

However occasionally there is a difference:

2006-07-13T05:48:23.014
2007-07-13T05:48:23.015

This difference alters the number of gaps calculated (+1 when the NOW values 
differ for start & end).  Not sure how this could be fixed, but as you mentioned 
above, it will probably involve changing "ft.toExternal(ft.toInternal(...))".
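One way to avoid this drift would be to capture NOW exactly once per request and evaluate every date-math expression against that single instant. The sketch below uses a toy resolver for a tiny subset of date math (it is not Solr's actual DateMathParser, and the 365-day month arithmetic is a crude stand-in) purely to show the pinning pattern:

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class PinnedNow {
    // Toy resolver for a tiny subset of date math, evaluated against a
    // caller-supplied 'now' rather than reading the clock per expression.
    public static Instant resolve(String expr, Instant now) {
        if (expr.equals("NOW")) return now;
        if (expr.equals("NOW-12MONTHS"))
            return now.minus(365, ChronoUnit.DAYS);  // crude stand-in
        throw new IllegalArgumentException("unsupported: " + expr);
    }

    // Because both endpoints share the exact same NOW, the span (and hence
    // the computed number of gaps) is identical on every evaluation.
    public static long spanDays(Instant now) {
        return ChronoUnit.DAYS.between(resolve("NOW-12MONTHS", now),
                                       resolve("NOW", now));
    }
}
```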

Thanks again for creating this useful addition, I'll try to test it a bit more 
and see if I can find anything else.


> Date based Facets
> -
>
> Key: SOLR-258
> URL: https://issues.apache.org/jira/browse/SOLR-258
> Project: Solr
>  Issue Type: New Feature
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: date_facets.patch, date_facets.patch, date_facets.patch, 
> date_facets.patch, date_facets.patch
>
>
> 1) Allow clients to express concepts like...
> * "give me facet counts per day for every day this month."
> * "give me facet counts per hour for every hour of today."
> * "give me facet counts per hour for every hour of a specific day."
> * "give me facet counts per hour for every hour of a specific day and 
> give me facet counts for the 
>number of matches before that day, or after that day." 
> 2) Return all data in a way that makes it easy to use to build filter queries 
> on those date ranges.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-269) UpdateRequestProcessorFactory - process requests before submitting them

2007-07-12 Thread Ryan McKinley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12512356
 ] 

Ryan McKinley commented on SOLR-269:


> the RequestHandler has final say over what Processor gets used

absolutely!  The question is just what to do in the default /update case.  I'm 
inclined to have the request say what processor to use.  With 'invariants', that 
can be fixed to a single implementation, and it will let people configure 
processors without a custom handler.

How do you all feel about the basic structure?  I like the structure, but am 
not sure how 'public' to make the configuration and implementation.  While it 
would be nice to keep the base stuff package-protected, then we can't have 
external configuration, and external classes could not reuse the other bits of 
the chain (defeating the 'chain' advantages).

I have a pending deadline that depends on input processing and SOLR-139 
modifiable documents -- it would be great to work from a lightly patched trunk 
rather than a heavily patched one ;)

> UpdateRequestProcessorFactory - process requests before submitting them
> ---
>
> Key: SOLR-269
> URL: https://issues.apache.org/jira/browse/SOLR-269
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ryan McKinley
>Assignee: Ryan McKinley
> Fix For: 1.3
>
> Attachments: SOLR-269-UpdateRequestProcessorFactory.patch, 
> SOLR-269-UpdateRequestProcessorFactory.patch, 
> SOLR-269-UpdateRequestProcessorFactory.patch, UpdateProcessor.patch
>
>
> A simple UpdateRequestProcessor was added to a bloated SOLR-133 commit. 
> An UpdateRequestProcessor lets clients plug in logic after a document has 
> been parsed and before it has been 'updated' with the index.  This is a good 
> place to add custom logic for:
>  * transforming the document fields
>  * fine grained authorization (can user X updated document Y?)
>  * allow update, but not delete (by query?)
>
>   <str name="update.processor.class">org.apache.solr.handler.UpdateRequestProcessor</str>
>
>   ... (optionally pass in arguments to the factory init method) ...
>
>
> http://www.nabble.com/Re%3A-svn-commit%3A-r547495---in--lucene-solr-trunk%3A-example-solr-conf-solrconfig.xml-src-java-org-apache-solr-handler-StaxUpdateRequestHandler.java-src-java-org-apache-solr-handler-UpdateRequestProcessor.jav-tf3950072.html#a11206583

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-258) Date based Facets

2007-07-12 Thread Ryan McKinley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12512353
 ] 

Ryan McKinley commented on SOLR-258:


This looks great Hoss.  Thanks!

The facet param interface looks reasonable.  Structurally, I would like to see 
'component'-based params split into their own file - FacetParams should be 
similar to HighlightParams.  It seems funny to munge the get/set field bit with 
the expanding list of things we may get or set.  If we implement FacetParams as 
an interface (like HighlightParams), the deprecated class 
o.a.s.request.SolrParams could implement FacetParams.

One thing to note on FacetDateOther.get( String ): if you put in an invalid 
string, you will get an IllegalArgumentException or NullPointerException - not 
a 400 response code.  Perhaps something like:

  public enum FacetDateOther {
    PRE, POST, INNER, ALL, NONE;
    public String toString() { return super.toString().toLowerCase(); }
    public static FacetDateOther get(String label) {
      try {
        return valueOf(label.toUpperCase());
      }
      catch (Exception ex) {
        throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
            label + " is not a valid type of 'other' date facet information", ex);
      }
    }
  }

Personally, I like the sound of "before", "after" and "between" better than 
"pre", "post" and "inner".  before/after seem to sit nicely with the other 
parameters, 'start' and 'end'.


> Date based Facets
> -
>
> Key: SOLR-258
> URL: https://issues.apache.org/jira/browse/SOLR-258
> Project: Solr
>  Issue Type: New Feature
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: date_facets.patch, date_facets.patch, date_facets.patch, 
> date_facets.patch, date_facets.patch
>
>
> 1) Allow clients to express concepts like...
> * "give me facet counts per day for every day this month."
> * "give me facet counts per hour for every hour of today."
> * "give me facet counts per hour for every hour of a specific day."
> * "give me facet counts per hour for every hour of a specific day and 
> give me facet counts for the 
>number of matches before that day, or after that day." 
> 2) Return all data in a way that makes it easy to use to build filter queries 
> on those date ranges.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-240) java.io.IOException: Lock obtain timed out: SimpleFSLock

2007-07-12 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12512332
 ] 

Yonik Seeley commented on SOLR-240:
---

> SingleInstanceLockFactory 
or even better, a subclass or other implementation, such as a 
SingleInstanceWarnLockFactory or SingleInstanceCoordinatedLockFactory, that 
logs a failure if obtain() is called for a lock that is already locked.
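The suggested behavior could look roughly like the following. This is a standalone illustration of the idea only, not the actual Lucene LockFactory/Lock API, and the class name is hypothetical:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class WarnLockTable {
    // Names of locks currently held within this single JVM instance.
    private final Set<String> held = Collections.synchronizedSet(new HashSet<>());
    public int warnings = 0;

    // Returns true if the lock was acquired; logs a warning and returns
    // false when obtain() is called for a lock that is already held,
    // surfacing the concurrency bug instead of timing out silently.
    public boolean obtain(String name) {
        if (!held.add(name)) {
            warnings++;
            System.err.println("WARN: lock '" + name + "' obtained while already held");
            return false;
        }
        return true;
    }

    public void release(String name) {
        held.remove(name);
    }
}
```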

> java.io.IOException: Lock obtain timed out: SimpleFSLock
> 
>
> Key: SOLR-240
> URL: https://issues.apache.org/jira/browse/SOLR-240
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 1.2
> Environment: windows xp
>Reporter: Will Johnson
> Attachments: IndexWriter.patch, IndexWriter2.patch, 
> IndexWriter2.patch, stacktrace.txt, ThrashIndex.java
>
>
> when running the soon to be attached sample application against solr it will 
> eventually die.  this same error has happened on both windows and rh4 linux.  
> the app is just submitting docs with an id in batches of 10, performing a 
> commit then repeating over and over again.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-240) java.io.IOException: Lock obtain timed out: SimpleFSLock

2007-07-12 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12512330
 ] 

Yonik Seeley commented on SOLR-240:
---

> i set the "example" default to be NoLockFactory 

How about SingleInstanceLockFactory to aid in catching concurrency bugs?

> java.io.IOException: Lock obtain timed out: SimpleFSLock
> 
>
> Key: SOLR-240
> URL: https://issues.apache.org/jira/browse/SOLR-240
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 1.2
> Environment: windows xp
>Reporter: Will Johnson
> Attachments: IndexWriter.patch, IndexWriter2.patch, 
> IndexWriter2.patch, stacktrace.txt, ThrashIndex.java
>
>
> when running the soon to be attached sample application against solr it will 
> eventually die.  this same error has happened on both windows and rh4 linux.  
> the app is just submitting docs with an id in batches of 10, performing a 
> commit then repeating over and over again.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-240) java.io.IOException: Lock obtain timed out: SimpleFSLock

2007-07-12 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-240:
--

Attachment: IndexWriter2.patch

This is a variation on Will's IndexWriter2.patch that replaces the 
"useNativeLocks" boolean config option with a string config option allowing 
people to pick any of the 4 built-in Lucene lock factories.

(I'd been meaning to try to write a "LockFactoryFactory" to allow people to 
specify any arbitrary LockFactory impl as a plugin, but it seemed like overkill 
-- having Will's useNativeLocks option didn't preclude adding something like 
that later, but recent comments reminded me that for the majority of Solr 
users, the "NoLockFactory" would actually be perfectly fine since Solr only 
ever opens one IndexWriter at a time.)

So this patch provides a little bit more flexibility than the previous one, 
without going whole-hog to a FactoryFactory/plugin model.

It should be noted that I left the hardcoded default in the code as 
SimpleFSLockFactory, but I set the "example" default to be NoLockFactory with a 
comment that it should be fine for any Solr user not modifying the index 
externally to Solr.

comments?


> java.io.IOException: Lock obtain timed out: SimpleFSLock
> 
>
> Key: SOLR-240
> URL: https://issues.apache.org/jira/browse/SOLR-240
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 1.2
> Environment: windows xp
>Reporter: Will Johnson
> Attachments: IndexWriter.patch, IndexWriter2.patch, 
> IndexWriter2.patch, stacktrace.txt, ThrashIndex.java
>
>
> when running the soon to be attached sample application against solr it will 
> eventually die.  this same error has happened on both windows and rh4 linux.  
> the app is just submitting docs with an id in batches of 10, performing a 
> commit then repeating over and over again.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (SOLR-264) Support 'random' sort order

2007-07-12 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-264.
---

   Resolution: Fixed
Fix Version/s: 1.3

this was all committed a little while ago and seems to be working.


> Support 'random' sort order
> ---
>
> Key: SOLR-264
> URL: https://issues.apache.org/jira/browse/SOLR-264
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ryan McKinley
>Priority: Minor
> Fix For: 1.3
>
> Attachments: RandomSortField.java, RandomSortField.java, 
> RandomSortField.java, RandomSortField.java, RandomSortField.java, 
> SOLR-264-RandomSortField-2.patch, SOLR-264-RandomSortField-2.patch, 
> SOLR-264-RandomSortOrder.patch, SOLR-264-RandomSortOrder.patch, 
> SOLR-264-RandomSortOrder.patch
>
>
> Support querying for random documents:
>   http://localhost:8983/solr/select/?q=*:*&fl=sku&sort=random%20desc

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Assigned: (SOLR-258) Date based Facets

2007-07-12 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reassigned SOLR-258:
-

Assignee: Hoss Man

I'd like to commit this in the next few days, barring any objections.

in particular, feedback on the API (ie: query params) would be good ... the 
internals can always be cleaned up later if people don't like them, but the 
query args should be sanity checked before people start using them.

> Date based Facets
> -
>
> Key: SOLR-258
> URL: https://issues.apache.org/jira/browse/SOLR-258
> Project: Solr
>  Issue Type: New Feature
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: date_facets.patch, date_facets.patch, date_facets.patch, 
> date_facets.patch, date_facets.patch
>
>
> 1) Allow clients to express concepts like...
> * "give me facet counts per day for every day this month."
> * "give me facet counts per hour for every hour of today."
> * "give me facet counts per hour for every hour of a specific day."
> * "give me facet counts per hour for every hour of a specific day and 
> give me facet counts for the 
>number of matches before that day, or after that day." 
> 2) Return all data in a way that makes it easy to use to build filter queries 
> on those date ranges.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-258) Date based Facets

2007-07-12 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-258:
--

Attachment: date_facets.patch

The patch now includes unit tests, as well as a fix for a bug I discovered in 
the pre/inner/post logic after writing the tests.

> Date based Facets
> -
>
> Key: SOLR-258
> URL: https://issues.apache.org/jira/browse/SOLR-258
> Project: Solr
>  Issue Type: New Feature
>Reporter: Hoss Man
> Attachments: date_facets.patch, date_facets.patch, date_facets.patch, 
> date_facets.patch, date_facets.patch
>
>
> 1) Allow clients to express concepts like...
> * "give me facet counts per day for every day this month."
> * "give me facet counts per hour for every hour of today."
> * "give me facet counts per hour for every hour of a specific day."
> * "give me facet counts per hour for every hour of a specific day and 
> give me facet counts for the 
>number of matches before that day, or after that day." 
> 2) Return all data in a way that makes it easy to use to build filter queries 
> on those date ranges.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (SOLR-298) NGramTokenFilter missing in trunk

2007-07-12 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-298.
---

Resolution: Invalid

[EMAIL PROTECTED]:~/svn/solr$ jar tf 
lib/lucene-analyzers-2007-05-20_00-04-53.jar | grep -i ngram
org/apache/lucene/analysis/ngram/
org/apache/lucene/analysis/ngram/EdgeNGramTokenFilter$Side.class
org/apache/lucene/analysis/ngram/EdgeNGramTokenFilter.class
org/apache/lucene/analysis/ngram/EdgeNGramTokenizer$Side.class
org/apache/lucene/analysis/ngram/EdgeNGramTokenizer.class
org/apache/lucene/analysis/ngram/NGramTokenFilter.class
org/apache/lucene/analysis/ngram/NGramTokenizer.class


> NGramTokenFilter missing in trunk
> -
>
> Key: SOLR-298
> URL: https://issues.apache.org/jira/browse/SOLR-298
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Reporter: Thomas Peuss
>Priority: Minor
>
> In one of the patches for SOLR-81 are Ngram TokenFilters. Only the Tokenizers 
> seem to have made it into Subversion (trunk). What happened to them?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (SOLR-299) Audit use of backticks in solr.py

2007-07-12 Thread Mike Klaas (JIRA)
Audit use of backticks in solr.py
-

 Key: SOLR-299
 URL: https://issues.apache.org/jira/browse/SOLR-299
 Project: Solr
  Issue Type: Bug
  Components: clients - python
Affects Versions: 1.2
Reporter: Mike Klaas
Assignee: Mike Klaas
 Fix For: 1.3


Backticks are often the wrong thing to do (they return a "debugging" 
representation of a variable).

This may be superseded by the new python client, but should be fixed in the 
meantime.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: [jira] Commented: (SOLR-215) Multiple Solr Cores

2007-07-12 Thread Yonik Seeley

On 7/11/07, Henrib <[EMAIL PROTECTED]> wrote:

You're absolutely right. I went too fast.
We can access the http request parameters from the servlet filter though, so
this "quick fix" might be possible.
Henri


I'm not sure if that would handle params that are part of a multi-part
upload (which uses a 3rd-party lib, since that's not supported by servlet
containers AFAIK).

-Yonik



Yonik Seeley wrote:
>
> On 7/11/07, Henrib <[EMAIL PROTECTED]> wrote:
>> Passing the core name as an Http request parameter to use an existing
>> core
>> is easy.
>> In the patched SolrServlet, replacing the first few lines of doGet with
>> the
>> code that follows should do the trick.
>
> Not quite that easy, given that we are using a filter now
> (SolrDispatchFilter),
> which uses SolrRequestParsers to get the parameters (which currently
> requires the core), and can do other things such as handle a binary
> post body in addition to parsing params from the URL.  It looks like
> some of that code should be refactored so that the core is not needed,
> and then the core parameter could be retrieved from the resulting
> SolrParams.
>
> IMO SolrServlet, being the older (original) implementation need not
> support multiple cores.
>
> -Yonik
>
>





Re: Embedded Solr with Java 1.4.x

2007-07-12 Thread Yonik Seeley

On 7/12/07, Jery Cook <[EMAIL PROTECTED]> wrote:

http://pharaohofkush.blogspot.com/
I need to make Solr work with Java 1.4; the organization I work for has not
approved Java 1.5 for the network... Before I download the source code and
see if this is possible, what do you guys think the level of effort will be?


1) push your organization to get into the 21st century ;-)
2) start with some of the tools available that can convert 1.5 classes to 1.4

If neither (1) nor (2) works, the effort level would probably be substantial.

-Yonik


[jira] Commented: (SOLR-294) scripts fail to log elapsed time on Solaris

2007-07-12 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12512085
 ] 

Bill Au commented on SOLR-294:
--

By the way, I have lost my access to Solaris.  While I did test the code 
snippet the patch is using back when I still had access, I am now not able 
to test the patch itself on Solaris.  It would be good if someone using Solaris 
could check this out.

> scripts fail to log elapsed time on Solaris
> ---
>
> Key: SOLR-294
> URL: https://issues.apache.org/jira/browse/SOLR-294
> Project: Solr
>  Issue Type: Bug
>  Components: replication
> Environment: Solaris
>Reporter: Bill Au
>Assignee: Bill Au
>Priority: Minor
> Attachments: solr-294.patch
>
>
> The code in the scripts to determine the elapsed time does not work on 
> Solaris because the date command there does not support the %s output format.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-294) scripts fail to log elapsed time on Solaris

2007-07-12 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au updated SOLR-294:
-

Attachment: solr-294.patch

Here is the patch, which uses

perl -e "print time;"

to get the timestamp in seconds on Solaris.

I have added a new function in scripts-util to set the start time and updated 
all the scripts to use it.  This way the code to get the timestamp and to 
calculate the elapsed time is centralized in scripts-util.  We will just have 
to update a single source file if we need to change it to support additional OSes.

> scripts fail to log elapsed time on Solaris
> ---
>
> Key: SOLR-294
> URL: https://issues.apache.org/jira/browse/SOLR-294
> Project: Solr
>  Issue Type: Bug
>  Components: replication
> Environment: Solaris
>Reporter: Bill Au
>Assignee: Bill Au
>Priority: Minor
> Attachments: solr-294.patch
>
>
> The code in the scripts to determine the elapsed time does not work on 
> Solaris because the date command there does not support the %s output format.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (SOLR-213) snapshooter link creation doesn't work on OS X

2007-07-12 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au resolved SOLR-213.
--

Resolution: Duplicate

I have committed the patch for SOLR-282 which will take care of this problem 
for both OS X and Solaris.

> snapshooter link creation doesn't work on OS X
> --
>
> Key: SOLR-213
> URL: https://issues.apache.org/jira/browse/SOLR-213
> Project: Solr
>  Issue Type: Bug
>  Components: replication
> Environment: OS X 10.4.9
>Reporter: Grant Ingersoll
>Assignee: Bill Au
>Priority: Minor
>
> The snapshooter script fails on the cp -l command when trying to create the 
> hard links for a snapshot.  Should be able to use ln instead.  Also should 
> look into if there are other issues on OS X.
> There may be some relation to https://issues.apache.org/jira/browse/SOLR-93

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (SOLR-282) snapshooter does not work under solaris

2007-07-12 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au resolved SOLR-282.
--

Resolution: Fixed

I have committed the patch.

> snapshooter does not work under solaris 
> 
>
> Key: SOLR-282
> URL: https://issues.apache.org/jira/browse/SOLR-282
> Project: Solr
>  Issue Type: Bug
>  Components: replication
>Affects Versions: 1.2
> Environment: solaris
>Reporter: Xuesong Luo
>Assignee: Bill Au
> Attachments: solr-282-solaris-and-osx.patch, solr-282.patch
>
>
> http://www.mail-archive.com/[EMAIL PROTECTED]/msg04761.html
> solr is able to find snapshooter but didn't generate any snapshot files 
> after the index is updated. I checked the
> log; everything looks fine. Then I ran snapshooter from the command line. It 
> failed because Solaris doesn't support the 
> -l option for the cp command. I changed the command "cp -lr dir1 dir2" to:
> mkdir dir2
> ln dir1/* dir2
> It seems to work. Otis suggested creating an issue so that Bill Au & Co. can 
> fix this problem. 
> Please note: several other commands under solr/bin also have this problem. 
> You can use grep "cp -lr" to find all of them 
> and make similar changes.
> I'm also curious why there is no error logged when solr fails to run 
> snapshooter. Shouldn't solr log an error message?
> Thanks
> Xuesong

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-215) Multiple Solr Cores

2007-07-12 Thread Otis Gospodnetic (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12512038
 ] 

Otis Gospodnetic commented on SOLR-215:
---

Just a quick comment - the .zip version of the patch is really a gzipped file:

$ wget --quiet 
https://issues.apache.org/jira/secure/attachment/12361583/solr-215.patch.zip
$ file solr-215.patch.zip
solr-215.patch.zip: gzip compressed data, was "solr-215.patch", from Unix


> Multiple Solr Cores
> ---
>
> Key: SOLR-215
> URL: https://issues.apache.org/jira/browse/SOLR-215
> Project: Solr
>  Issue Type: Improvement
>Reporter: Henri Biestro
>Priority: Minor
> Attachments: solr-215.patch, solr-215.patch, solr-215.patch, 
> solr-215.patch.zip, solr-215.patch.zip, solr-215.patch.zip, 
> solr-trunk-533775.patch, solr-trunk-538091.patch, solr-trunk-542847-1.patch, 
> solr-trunk-542847.patch, solr-trunk-src.patch
>
>
> WHAT:
> As of 1.2, Solr only instantiates one SolrCore, which handles one Lucene index.
> This patch is intended to allow multiple cores in Solr, which also brings 
> multiple-index capability.
> The patch file to grab is solr-215.patch.zip (see MISC section below).
> WHY:
> The current Solr practical wisdom is that one schema - thus one index - is 
> most likely to accommodate your indexing needs, using a filter to segregate 
> documents if needed. If you really need multiple indexes, deploy multiple web 
> applications.
> There are some use cases, however, where having multiple indexes or multiple 
> cores through Solr itself may make sense.
> Multiple cores:
> Deployment issues within some organizations where IT will resist deploying 
> multiple web applications.
> Seamless schema update where you can create a new core and switch to it 
> without starting/stopping servers.
> Embedding Solr in your own application (instead of 'raw' Lucene) and 
> functionally need to segregate schemas & collections.
> Multiple indexes:
> Multiple-language collections where each document exists in different 
> languages, analysis being language-dependent.
> Having document types that have nothing (or very little) in common with 
> respect to their schema, their lifetime/update frequencies or even collection 
> sizes.
> HOW:
> The best analogy is to consider that instead of deploying multiple 
> web-application, you can have one web-application that hosts more than one 
> Solr core. The patch does not change any of the core logic (nor the core 
> code); each core is configured & behaves exactly as the one core in 1.2; the 
> various caches are per-core & so is the info-bean-registry.
> What the patch does is replace the SolrCore singleton by a collection of 
> cores; all the code modifications are driven by the removal of the different 
> singletons (the config, the schema & the core).
> Each core is 'named', and a static map (keyed by name) makes it easy to manage 
> them.
> You declare one servlet filter mapping per core you want to expose in the 
> web.xml; this allows easy access to each core through a different url. 
> USAGE (example web deployment, patch installed):
> Step0
> java -Durl='http://localhost:8983/solr/core0/update' -jar post.jar solr.xml 
> monitor.xml
> Will index the 2 documents in solr.xml & monitor.xml
> Step1:
> http://localhost:8983/solr/core0/admin/stats.jsp
> Will produce the statistics page from the admin servlet on core0 index; 2 
> documents
> Step2:
> http://localhost:8983/solr/core1/admin/stats.jsp
> Will produce the statistics page from the admin servlet on core1 index; no 
> documents
> Step3:
> java -Durl='http://localhost:8983/solr/core0/update' -jar post.jar ipod*.xml
> java -Durl='http://localhost:8983/solr/core1/update' -jar post.jar mon*.xml
> Adds the ipod*.xml to index of core0 and the mon*.xml to the index of core1;
> running queries from the admin interface, you can verify indexes have 
> different content. 
> USAGE (Java code):
> //create a configuration
> SolrConfig config = new SolrConfig("solrconfig.xml");
> //create a schema
> IndexSchema schema = new IndexSchema(config, "schema0.xml");
> //create a core from the 2 other.
> SolrCore core = new SolrCore("core0", "/path/to/index", config, schema);
> //Accessing a core:
> SolrCore core = SolrCore.getCore("core0"); 
> PATCH MODIFICATIONS DETAILS (per package):
> org.apache.solr.core:
> The heaviest modifications are in SolrCore & SolrConfig.
> SolrCore is the most obvious modification; instead of a singleton, there is a 
> static map of cores keyed by name, plus assorted methods. To retain some 
> compatibility, the 'null'-named core replaces the singleton for the relevant 
> methods, for instance SolrCore.getCore(). One small constraint on core names 
> is that they can't contain '/' or '\', avoiding potential url & file path 
> problems.
> SolrConfig (& SolrIndexConfig) are now used to persist all configur

[jira] Created: (SOLR-298) NGramTokenFilter missing in trunk

2007-07-12 Thread Thomas Peuss (JIRA)
NGramTokenFilter missing in trunk
-

 Key: SOLR-298
 URL: https://issues.apache.org/jira/browse/SOLR-298
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Thomas Peuss
Priority: Minor


In one of the patches for SOLR-81 are Ngram TokenFilters. Only the Tokenizers 
seem to have made it into Subversion (trunk). What happened to them?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-81) Add Query Spellchecker functionality

2007-07-12 Thread Thomas Peuss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-81?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12511989
 ] 

Thomas Peuss commented on SOLR-81:
--

Hello Otis!

What happened to the TokenFilters included in the patch? They are in the patch 
but in trunk I don't see them.

CU
Thomas

> Add Query Spellchecker functionality
> 
>
> Key: SOLR-81
> URL: https://issues.apache.org/jira/browse/SOLR-81
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Reporter: Otis Gospodnetic
>Priority: Minor
> Attachments: hoss.spell.patch, SOLR-81-edgengram-ngram.patch, 
> SOLR-81-ngram-schema.patch, SOLR-81-ngram.patch, SOLR-81-ngram.patch, 
> SOLR-81-ngram.patch, SOLR-81-ngram.patch, SOLR-81-spellchecker.patch, 
> SOLR-81-spellchecker.patch, SOLR-81-spellchecker.patch
>
>
> Use the simple approach of n-gramming outside of Solr and indexing n-gram 
> documents.  For example:
> 
> lettuce
> let
> let ett ttu tuc uce
> uce
> lett
> lett ettu ttuc tuce
> tuce
> 
> See:
> http://www.mail-archive.com/[EMAIL PROTECTED]/msg01254.html
> Java clients: SOLR-20 (add delete commit optimize), SOLR-30 (search)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.