Re: Turn off caching

2011-02-10 Thread Stijn Vanhoorelbeke
Hi,

You can comment out all sections in solrconfig.xml pointing to a cache.
However, there is a cache deep in Lucene - the FieldCache - that can't be
commented out. This cache will always come into play.

If I need to do such things, I restart the whole tomcat6 server to flush ALL
caches.
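For reference, the cache sections in question sit inside the query section of solrconfig.xml; the names and sizes below are the stock example values, not taken from this thread, so match them against your own config before commenting out:

```xml
<!-- In solrconfig.xml, inside <query> ... </query>: comment these out -->
<!--
<filterCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="128"/>
<queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="32"/>
<documentCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
-->
```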

2011/2/11 Li Li 

> do you mean queryResultCache? you can comment out the related section in
> solrconfig.xml
> see http://wiki.apache.org/solr/SolrCaching
>
> 2011/2/8 Isan Fulia :
> > Hi,
> > My solrConfig file looks like
> >
> > <config>
> >   <requestDispatcher>
> >     <requestParsers multipartUploadLimitInKB="2048" />
> >   </requestDispatcher>
> >   <requestHandler name="standard" class="solr.SearchHandler" default="true" />
> >   <requestHandler name="/admin/" class="org.apache.solr.handler.admin.AdminHandlers" />
> >   <queryResponseWriter name="xslt" class="org.apache.solr.request.XSLTResponseWriter" />
> >   <admin>
> >     <defaultQuery>*:*</defaultQuery>
> >   </admin>
> > </config>
> >
> >
> > Every time I fire the same query so as to compare the results for
> > different configurations, the query result time is reduced because of
> > caching.
> > So I want to turn off the caching or clear the cache before I fire the
> > same query.
> > Does anyone know how to do it?
> >
> >
> > --
> > Thanks & Regards,
> > Isan Fulia.
> >
>


Re: SolrCloud (ZooKeeper)

2011-02-10 Thread Stijn Vanhoorelbeke
So,

The only way we now have to integrate ZooKeeper is by using
'-DzkHost=url:port_of_ZooKeeper' when we start up a Solr instance?

+ I've noticed that when a solr instance goes down, the node becomes inactive
in ZooKeeper - but the node is maintained in the list of nodes. How can you
remove a solr instance from the list of hosts?


2011/2/10 Yonik Seeley 

> On Thu, Feb 10, 2011 at 5:00 PM, Stijn Vanhoorelbeke
>  wrote:
> > I've completed the quick&dirty tutorials of SolrCloud ( see
> > http://wiki.apache.org/solr/SolrCloud ).
> > The whole concept of SolrCloud and ZooKeeper looks indeed very promising.
> >
> > I also found some info about a 'ZooKeeperComponent' - from this component
> > it should be possible to configure ZooKeeper directly from solrconfig.xml
> > ( see http://wiki.apache.org/solr/ZooKeeperIntegration ).
>
> That's not part of what has been committed to trunk.
> Also, if one wants to get solrconfig from zookeeper, having the
> zookeeper config in solrconfig is a bit chicken and eggish ;-)
>
> -Yonik
> http://lucidimagination.com
>


Re: Turn off caching

2011-02-10 Thread Li Li
do you mean queryResultCache? you can comment out the related section in
solrconfig.xml
see http://wiki.apache.org/solr/SolrCaching

2011/2/8 Isan Fulia :
> Hi,
> My solrConfig file looks like
>
> <config>
>   <requestDispatcher>
>     <requestParsers multipartUploadLimitInKB="2048" />
>   </requestDispatcher>
>   <requestHandler name="standard" class="solr.SearchHandler" default="true" />
>   <requestHandler name="/admin/" class="org.apache.solr.handler.admin.AdminHandlers" />
>   <queryResponseWriter name="xslt" class="org.apache.solr.request.XSLTResponseWriter" />
>   <admin>
>     <defaultQuery>*:*</defaultQuery>
>   </admin>
> </config>
>
>
> Every time I fire the same query so as to compare the results for different
> configurations, the query result time is reduced because of
> caching.
> So I want to turn off the caching or clear the cache before I fire the same
> query.
> Does anyone know how to do it?
>
>
> --
> Thanks & Regards,
> Isan Fulia.
>


Re: solr render biased search result

2011-02-10 Thread William Bell
I am not sure I understand your question.

But you can boost the result based on one value over another value.

Look at bf

http://wiki.apache.org/solr/SolrRelevancyFAQ#How_can_I_change_the_score_of_a_document_based_on_the_.2Avalue.2A_of_a_field_.28say.2C_.22popularity.22.29
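For illustration, a dismax request using bf might look like the following. The field name release_year and the recip weights are hypothetical, not from the wiki page - recip(x,m,a,b) computes a/(m*x+b), so titles whose year is closer to 1955 score higher:

```shell
# Sketch of a bf (boost function) request; substitute your schema's fields.
base='http://localhost:8983/solr/select'
params='q=comedy&defType=dismax&bf=recip(abs(sub(release_year,1955)),1,10,10)'
printf '%s?%s\n' "$base" "$params"
```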


On Wed, Feb 9, 2011 at 12:44 PM, cyang2010  wrote:
>
> Hi,
>
> I am asked whether solr can render biased search results?  For example, for
> this search (query all movie titles in the Comedy genre), for a user who
> indicates a preference for 1950's movies, should solr rank the 1950's movies
> with a higher score (top of the list)?  Or if the user is a kid, should the
> result rank G/PG rated movies at the top of the list, and all the R rated
> movies at the bottom?
>
> I know that solr can boost the score based on a match on a particular field.
> But it can't favor one value over another in the same field.  Is that right?
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/solr-render-biased-search-result-tp2461155p2461155.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>


Hits when using group=true

2011-02-10 Thread Bill Bell
It would be good if someone added hits= to the log when group=true.

We are using this parameter and have built a really cool SOLR log analyzer
(that I am pushing to release to open source).
But it is not as effective if we cannot get group=true to output hits= in
the log - since 90% of our queries are group=true...

There is a ticket in SOLR for this under SOLR-2337. Can someone help me
identify what would be required to get this to work?

Bill
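To make the gap concrete: the log lines below are a sketch modeled on Solr's standard request logging, not copied from a real log. Grouped queries carry no hits= value, which is exactly what such an analyzer trips over:

```shell
# Hypothetical request-log sample; note the group=true line has no hits=.
cat > /tmp/solr_sample.log <<'EOF'
INFO: [] webapp=/solr path=/select params={q=foo&group=true} status=0 QTime=3
INFO: [] webapp=/solr path=/select params={q=bar} hits=7 status=0 QTime=1
EOF

# Count grouped queries for which no hits= value can be recovered:
grep 'group=true' /tmp/solr_sample.log | grep -cv 'hits='   # prints 1
```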





Re: QTime Solr Query

2011-02-10 Thread didier deshommes
On Thu, Feb 10, 2011 at 4:08 PM, Stijn Vanhoorelbeke
 wrote:
> Hi,
>
> I've done some stress testing on my solr system ( running in the ec2 cloud ).
> From what I've noticed during the tests, the QTime drops to just 1 or 2 ms (
> on an index of ~2 million documents ).
>
> My first thought pointed me to the different Solr caches; so I've disabled
> all of them. Yet QTime stays low.
> Then the Lucene internal FieldCache came into sight. This cache is hidden
> deep inside Lucene and is not configurable through Solr.
>
> To cope with this I thought I would lower the memory allocated to Solr -
> that way a smaller cache is forced.
> But yet QTime stays low.

When stress-testing Solr, I usually flush the OS cache also. This is
the command to do it on linux:

# sync; echo 3 > /proc/sys/vm/drop_caches

didier
>
> Can Solr be so fast as to answer queries in just 1 or 2 ms - even if I only
> allocate 100 MB to Solr?
>


Re: running optimize on master

2011-02-10 Thread Erick Erickson
Optimize will do just what you suggest, although there's a
parameter whose name escapes me controlling how many
segments the index is reduced to, so this is configurable.

It's also possible, but kind of unlikely, that the original indexing
process would produce only one segment. You could tell which was
the case easily enough by just looking at your index
directory during the build. My bet would be on the optimize step.

Best
Erick

On Thu, Feb 10, 2011 at 8:10 PM, Tri Nguyen  wrote:
> Does optimize merge all segments into 1 segment on the master after the build?
>
> Or after the build, there's only 1 segment.
>
> thanks,
>
> Tri
>
>
>
>
> 
> From: Erick Erickson 
> To: solr-user@lucene.apache.org
> Sent: Thu, February 10, 2011 5:08:44 PM
> Subject: Re: running optimize on master
>
> Optimizing isn't necessary in your scenario, as you don't delete
> documents and rebuild the whole thing each time anyway.
>
> As for faster searches, this has largely been made obsolete
> by recent changes in how indexes are built in the first place. Especially
> as you can build your index in an hour, it's likely not big enough to
> benefit from optimizing even under the old scenario.
>
> So, unless you have some evidence that your queries are performing
> poorly, I would just leave the optimize step off.
>
> Best
> Erick
>
>
> On Thu, Feb 10, 2011 at 7:09 PM, Tri Nguyen  wrote:
>> Hi,
>>
>> I've read running optimize is similar to running defrag on a hard disk.
>> Deleted docs are removed and segments are reorganized for faster searching.
>>
>> I have a couple questions.
>>
>> Is optimize necessary if I never delete documents?  I build the index every
>> hour but we don't delete in between builds.
>>
>> Secondly, what kind of reorganizing of segments is done to make searches
>> faster?
>>
>> Thanks,
>>
>> Tri
>


Re: running optimize on master

2011-02-10 Thread Tri Nguyen
Does optimize merge all segments into 1 segment on the master after the build?

Or after the build, there's only 1 segment.

thanks,

Tri





From: Erick Erickson 
To: solr-user@lucene.apache.org
Sent: Thu, February 10, 2011 5:08:44 PM
Subject: Re: running optimize on master

Optimizing isn't necessary in your scenario, as you don't delete
documents and rebuild the whole thing each time anyway.

As for faster searches, this has largely been made obsolete
by recent changes in how indexes are built in the first place. Especially
as you can build your index in an hour, it's likely not big enough to
benefit from optimizing even under the old scenario.

So, unless you have some evidence that your queries are performing
poorly, I would just leave the optimize step off.

Best
Erick


On Thu, Feb 10, 2011 at 7:09 PM, Tri Nguyen  wrote:
> Hi,
>
> I've read running optimize is similar to running defrag on a hard disk.
> Deleted docs are removed and segments are reorganized for faster searching.
>
> I have a couple questions.
>
> Is optimize necessary if I never delete documents?  I build the index every
> hour but we don't delete in between builds.
>
> Secondly, what kind of reorganizing of segments is done to make searches
> faster?
>
> Thanks,
>
> Tri


Re: running optimize on master

2011-02-10 Thread Erick Erickson
Optimizing isn't necessary in your scenario, as you don't delete
documents and rebuild the whole thing each time anyway.

As for faster searches, this has largely been made obsolete
by recent changes in how indexes are built in the first place. Especially
as you can build your index in an hour, it's likely not big enough to
benefit from optimizing even under the old scenario.

So, unless you have some evidence that your queries are performing
poorly, I would just leave the optimize step off.

Best
Erick


On Thu, Feb 10, 2011 at 7:09 PM, Tri Nguyen  wrote:
> Hi,
>
> I've read running optimize is similar to running defrag on a hard disk.
> Deleted docs are removed and segments are reorganized for faster searching.
>
> I have a couple questions.
>
> Is optimize necessary if I never delete documents?  I build the index every
> hour but we don't delete in between builds.
>
> Secondly, what kind of reorganizing of segments is done to make searches
> faster?
>
> Thanks,
>
> Tri


Re: QTime Solr Query

2011-02-10 Thread Erick Erickson
Let's see what the queries are. If you're searching for single
terms that don't match many docs that's one thing. If you're looking
at many terms that match many documents, I'd expect larger numbers.

Unless you're hitting the document cache and not searching at all

Best
Erick

On Thu, Feb 10, 2011 at 5:08 PM, Stijn Vanhoorelbeke
 wrote:
> Hi,
>
> I've done some stress testing on my solr system ( running in the ec2 cloud ).
> From what I've noticed during the tests, the QTime drops to just 1 or 2 ms (
> on an index of ~2 million documents ).
>
> My first thought pointed me to the different Solr caches; so I've disabled
> all of them. Yet QTime stays low.
> Then the Lucene internal FieldCache came into sight. This cache is hidden
> deep inside Lucene and is not configurable through Solr.
>
> To cope with this I thought I would lower the memory allocated to Solr -
> that way a smaller cache is forced.
> But yet QTime stays low.
>
> Can Solr be so fast as to answer queries in just 1 or 2 ms - even if I only
> allocate 100 MB to Solr?
>


running optimize on master

2011-02-10 Thread Tri Nguyen
Hi,

I've read running optimize is similar to running defrag on a hard disk.
Deleted docs are removed and segments are reorganized for faster searching.

I have a couple questions.

Is optimize necessary if I never delete documents?  I build the index every
hour but we don't delete in between builds.

Secondly, what kind of reorganizing of segments is done to make searches faster?

Thanks,

Tri

Re: edismax with windows path input?

2011-02-10 Thread Yonik Seeley
On Thu, Feb 10, 2011 at 5:51 PM, Ryan McKinley  wrote:
>>
>> foo_s:foo\-bar
>> is a valid lucene query (with only a dash between the foo and the
>> bar), and presumably it should be treated the same in edismax.
>> Treating it as foo_s:foo\\-bar (a backslash and a dash between foo and
>> bar) might cause more problems than it's worth?
>>
>
> I don't think we should escape anything that has a valid field name.
> If "foo_s" is a field, then foo_s:foo\-bar should be used as is.
>
> If "foo_s" is not a field, I would want the whole thing escaped to:
> foo_s\:foo\\-bar before getting passed to the rest of the dismax mojo.
>
> Does that make sense?

Maybe in some scenarios, but probably not in others?

The clause  bar\-baz  will be equiv to field:"bar-baz"
Hence it seems like the clause foo:bar\-baz should be equiv to
field:"foo:bar-baz" (assuming foo is not a field)

-Yonik
http://lucidimagination.com


Re: edismax with windows path input?

2011-02-10 Thread Ryan McKinley
>
> foo_s:foo\-bar
> is a valid lucene query (with only a dash between the foo and the
> bar), and presumably it should be treated the same in edismax.
> Treating it as foo_s:foo\\-bar (a backslash and a dash between foo and
> bar) might cause more problems than it's worth?
>

I don't think we should escape anything that has a valid field name.
If "foo_s" is a field, then foo_s:foo\-bar should be used as is.

If "foo_s" is not a field, I would want the whole thing escaped to:
foo_s\:foo\\-bar before getting passed to the rest of the dismax mojo.

Does that make sense?

marking edismax as experimental for 3.1 makes sense!

ryan


Re: SolrCloud (ZooKeeper)

2011-02-10 Thread Yonik Seeley
On Thu, Feb 10, 2011 at 5:00 PM, Stijn Vanhoorelbeke
 wrote:
> I've completed the quick&dirty tutorials of SolrCloud ( see
> http://wiki.apache.org/solr/SolrCloud ).
> The whole concept of SolrCloud and ZooKeeper looks indeed very promising.
>
> I also found some info about a 'ZooKeeperComponent' - from this component it
> should be possible to configure ZooKeeper directly from solrconfig.xml (
> see http://wiki.apache.org/solr/ZooKeeperIntegration ).

That's not part of what has been committed to trunk.
Also, if one wants to get solrconfig from zookeeper, having the
zookeeper config in solrconfig is a bit chicken and eggish ;-)

-Yonik
http://lucidimagination.com


Re: edismax with windows path input?

2011-02-10 Thread Chris Hostetter

: "essentially that FOO:BAR and FOO\:BAR would be equivalent if FOO is
: not the name of a real field according to the IndexSchema"
: 
: That part is true, but doesn't say anything about escaping.  And for
: some unknown reason, this no longer works.

that's the only part i was referring to.


-Hoss


Re: Unable to build the SolrCloud branch - SOLR-1873

2011-02-10 Thread Stijn Vanhoorelbeke
Hi,

I've followed the guide & it worked perfectly for me.
( I had to execute ant compile - not ant example - but it's not likely that
was your problem ).

2011/1/2 siddharth 

>
> I seemed to have figured out the problem. I think it was an issue with the
> JAVA_HOME being set. The build was failing while compiling the module solrj
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Unable-to-build-the-SolrCloud-branch-SOLR-1873-tp2180635p2180800.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>


Re: edismax with windows path input?

2011-02-10 Thread Yonik Seeley
On Thu, Feb 10, 2011 at 3:05 PM, Ryan McKinley  wrote:
> ah -- that makes sense.
>
> Yonik... looks like you were assigned to it last week -- should I take
> a look, or do you already have something in the works?

I got busy on other things, and don't have anything in the works.
I think edismax should probably just be marked as experimental for 3.1.

-Yonik
http://lucidimagination.com


Re: edismax with windows path input?

2011-02-10 Thread Yonik Seeley
On Thu, Feb 10, 2011 at 2:52 PM, Chris Hostetter
 wrote:
>
> : extending edismax.  Perhaps when F: does not match a given field, it
> : could auto escape the rest of the word?
>
> that's actually what yonik initially said it was supposed to do

Hmmm, not really.
"essentially that FOO:BAR and FOO\:BAR would be equivalent if FOO is
not the name of a real field according to the IndexSchema"

That part is true, but doesn't say anything about escaping.  And for
some unknown reason, this no longer works.

foo_s:foo\-bar
is a valid lucene query (with only a dash between the foo and the
bar), and presumably it should be treated the same in edismax.
Treating it as foo_s:foo\\-bar (a backslash and a dash between foo and
bar) might cause more problems than it's worth?

-Yonik
http://lucidimagination.com


Monitor the QTime.

2011-02-10 Thread Stijn Vanhoorelbeke
Hi,

Is it possible to monitor the QTime of the queries?
I know I could enable logging - but then all of my requests are logged,
making big & nasty logs.

I just want to log the QTime periodically, let's say once every minute.
Is this possible using Solr, or can this be set up in Tomcat somehow?
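One low-tech option for periodic sampling is a cron'd curl that extracts QTime from the response header of a cheap query. The select URL and XML shape below follow Solr's standard response format, but treat this as a sketch - the canned response stands in for a live server so the parsing step is visible:

```shell
# Sketch: sample QTime once via a cheap query and print it.
sample_qtime() {
  # In production: response=$(curl -s 'http://localhost:8080/solr/select?q=*:*&rows=0')
  # Canned response here so the parsing works without a running server:
  response='<response><lst name="responseHeader"><int name="status">0</int><int name="QTime">2</int></lst></response>'
  printf '%s\n' "$response" | sed -n 's/.*name="QTime">\([0-9]*\)<.*/\1/p'
}

# From cron, once a minute:
#   * * * * * echo "$(date -u) $(sample_qtime)" >> /var/log/solr-qtime.log
sample_qtime   # prints 2
```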


QTime Solr Query

2011-02-10 Thread Stijn Vanhoorelbeke
Hi,

I've done some stress testing on my solr system ( running in the ec2 cloud ).
From what I've noticed during the tests, the QTime drops to just 1 or 2 ms (
on an index of ~2 million documents ).

My first thought pointed me to the different Solr caches; so I've disabled
all of them. Yet QTime stays low.
Then the Lucene internal FieldCache came into sight. This cache is hidden
deep inside Lucene and is not configurable through Solr.

To cope with this I thought I would lower the memory allocated to Solr -
that way a smaller cache is forced.
But yet QTime stays low.

Can Solr be so fast as to answer queries in just 1 or 2 ms - even if I only
allocate 100 MB to Solr?


SolrCloud (ZooKeeper)

2011-02-10 Thread Stijn Vanhoorelbeke
Hi,

I've completed the quick&dirty tutorials of SolrCloud ( see
http://wiki.apache.org/solr/SolrCloud ).
The whole concept of SolrCloud and ZooKeeper looks indeed very promising.

I also found some info about a 'ZooKeeperComponent' - from this component it
should be possible to configure ZooKeeper directly from solrconfig.xml (
see http://wiki.apache.org/solr/ZooKeeperIntegration ).

Is this functionality already implemented (or is it something for the future)?
If so, can you please point me to a good guide/tutorial, because I can't find
much on the regular web. Or else send me your real working
ZooKeeperComponent layout & how you configured ZooKeeper.

Thanks for helping me,


Re: edismax with windows path input?

2011-02-10 Thread Ryan McKinley
ah -- that makes sense.

Yonik... looks like you were assigned to it last week -- should I take
a look, or do you already have something in the works?


On Thu, Feb 10, 2011 at 2:52 PM, Chris Hostetter
 wrote:
>
> : extending edismax.  Perhaps when F: does not match a given field, it
> : could auto escape the rest of the word?
>
> that's actually what yonik initially said it was supposed to do, but when i
> tried to add a param to let you control which fields would be supported
> using the ":" syntax i discovered it didn't work but couldn't figure out
> why ... details are in the SOLR-1553 comments
>
>
> -Hoss
>


Re: rejected email

2011-02-10 Thread Erick Erickson
Hmmm, never noticed that link before, thanks!
Which shows you how much I can ignore what's perfectly
obvious ...


Works like a champ.

Erick

On Thu, Feb 10, 2011 at 2:05 PM, Shane Perry  wrote:
> I tried posting from gmail this morning and had it rejected.  When I
> resent as plaintext, it went through.
>
> On Thu, Feb 10, 2011 at 11:51 AM, Erick Erickson
>  wrote:
>> Anyone else having problems with the Solr users list suddenly deciding
>> everything you send is spam? For the last couple of days I've had this
>> happening from gmail, and as far as I know I haven't changed anything that
>> would give my mails a different "spam score" which is being exceeded
>> according to the bounced message...
>>
>> Thanks,
>> Erick
>>
>


Re: Writing on master while replicating to slave

2011-02-10 Thread Erick Erickson
Just for the first part: There's no problem here, the write lock
is to keep simultaneous *writes* from occurring; the slave reading
the index doesn't enter into it. Note that in Solr, when segments
are created in an index, they are write-once. So basically what
happens when a slave replicates is that the current segments that have
had commits against them are replicated (they won't be written to
after a commit) and Solr merrily continues writing to new segments.
So the slave gets its snapshot of the index as of the last commit that
happened before it polled the master.

The trunk has something called NRT (near real time) that attempts to
decrease the latency, but I don't know how that plays with replication.

Best
Erick
> On Thu, Feb 10, 2011 at 10:33 AM, Shane Perry  wrote:
>>
>> Hi,
>>
>> When a slave is replicating from the master instance, it appears a
>> write lock is created. Will this lock cause issues with writing to the
>> master while the replication is occurring or does SOLR have some
>> queuing that occurs to prevent the actual write until the replication
>> is complete?  I've been looking around but can't seem to find anything
>> definitive.
>>
>> My application's data is user centric and as a result the application
>> does a lot of updates and commits.  Additionally, we want to provide
>> near real-time searching and so replication would have to occur
>> aggressively.  Does anybody have any strategies for handling such an
>> application which they would be willing to share?
>>
>> Thanks,
>>
>> Shane
>


Re: edismax with windows path input?

2011-02-10 Thread Chris Hostetter

: extending edismax.  Perhaps when F: does not match a given field, it
: could auto escape the rest of the word?

that's actually what yonik initially said it was supposed to do, but when i
tried to add a param to let you control which fields would be supported
using the ":" syntax i discovered it didn't work but couldn't figure out
why ... details are in the SOLR-1553 comments


-Hoss


edismax with windows path input?

2011-02-10 Thread Ryan McKinley
I am using the edismax query parser -- its awesome!  works well for
standard dismax type queries, and allows explicit fields when
necessary.

I have hit a snag when people enter something that looks like a windows path:

 F:\path\to\a\file

this gets parsed as:
F:\path\to\a\file
F:\path\to\a\file
+()

Putting it in quotes makes the not-quite right query:
"F:\path\to\a\file"
"F:\path\to\a\file"

+DisjunctionMaxQuery((path:f:pathtoafile^4.0 | name:"f (pathtoafile
fpathtoafile)"^7.0)~0.01)


+(path_path:f:pathtoafile^4.0 | name:"f (pathtoafile fpathtoafile)"^7.0)~0.01


Telling people to escape the query:
q=F\:\\path\\to\\a\\file
is unrealistic, but gives the proper parsed query:

+DisjunctionMaxQuery((path_path:f:/path/to/a/file^4.0 | name:"f path
to a (file fpathtoafile)"^7.0)~0.01)

Any ideas on how to support this?  I could try looking for things like
paths in the app, and then modify the query, or maybe look at
extending edismax.  Perhaps when F: does not match a given field, it
could auto escape the rest of the word?

thanks
ryan
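The escaping shown above (q=F\:\\path\\to\\a\\file) can at least be automated client-side while the parser question is open. A sketch in shell - the special-character list here is deliberately minimal (backslash, then colon); Lucene's query syntax defines more specials ( + - && || ! ( ) etc. ) that a real client would also handle:

```shell
# Escape backslashes first, then colons, so a raw Windows path
# survives the query parser.
escape_query() {
  printf '%s' "$1" | sed -e 's/\\/\\\\/g' -e 's/:/\\:/g'
}

escape_query 'F:\path\to\a\file'   # -> F\:\\path\\to\\a\\file
```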


Re: SOLR-792 (hierarchical faceting) issue when only 1 document should be present in the pivot

2011-02-10 Thread Adeel Qureshi
I have had the same problem .. my facet pivots were returning results
something like

Cat-A (3)
 Item X
 Item Y

only 2 items instead of 3

or even
Cat-B (2)
 no items

zero items instead of 2

so the parent-level count didn't match the returned child pivots ..
but once I set facet.pivot.mincount = 0 .. it works fine .. is this
a bug or the desired behavior ..


On Wed, Nov 24, 2010 at 7:49 AM, Nicolas Peeters wrote:

> Hi Solr-Users,
>
> I realized that I can get the behaviour that I expect if I set
> facet.pivot.mincount to 0. However, I'm still puzzled why this needs to
> be 0 and not 1. There's one occurrence for this document, isn't there?
> With this value at 0, the printout of the pivot looks like this (where you
> clearly see (1) for "Value_that_cant_be_matched"):
>
> PIVOT: level1_loc_s,level2_loc_s,level3_loc_s
> level1_loc_s=Greater London (8)
>  level2_loc_s=London (5)
>level3_loc_s=Mayfair (3)
>level3_loc_s=Hammersmith (2)
>  level2_loc_s=Greenwich (3)
>level3_loc_s=Greenwich Centre (2)
> level3_loc_s=Value_that_cant_be_matched (1)
> level1_loc_s=Groot Amsterdam (5)
>  level2_loc_s=Amsterdam (3)
>level3_loc_s=Jordaan (2)
> level3_loc_s=Centrum (1)
>   level2_loc_s=Amstelveen (2)
>level3_loc_s=Centrum (2)
>
> Any expert advice on why this is the case is more than welcome!
>
> Best regards,
>
> Nicolas
>
> On Wed, Nov 24, 2010 at 2:27 PM, Nicolas Peeters wrote:
>
> > Hi Solr Community,
> >
> > I've been experimenting with Solr 4.0 (trunk) in order to test the
> > SOLR-792 feature. I have written a test that shows what I'm trying to ask.
> > Basically, I'm creating a hierarchy of area/city/neighbourhood. The
> > problem that I see is that documents that have only 1 item in a particular
> > hierarchy (e.g. Greater London/Greenwich/Centre (which I've called
> > "Value_that_cant_be_matched" in this example...)) are not found by
> > the pivot facet. If I add a second one, then it works. I'm puzzled why
> > this is the case.
> >
> > This is the result of the System.out that prints out the pivot facet
> > fields hierarchy (see line 86):
> >
> > PIVOT: level1_loc_s,level2_loc_s,level3_loc_s
> > level1_loc_s=Greater London (8)
> >   level2_loc_s=London (5)
> > level3_loc_s=Mayfair (3)
> > level3_loc_s=Hammersmith (2)
> >   level2_loc_s=Greenwich (3)
> > level3_loc_s=Greenwich Centre (2)
> >  //--> why isn't there a
> > "level3_loc_s=Value_that_cant_be_matched (1)" here?
> > level1_loc_s=Groot Amsterdam (5)
> >   level2_loc_s=Amsterdam (3)
> > level3_loc_s=Jordaan (2)
> >   level2_loc_s=Amstelveen (2)
> > level3_loc_s=Centrum (2)
> >
> >
> > How can I make sure that Solr finds the single document in the tree
> > when I facet on this "location" hierarchy?
> >
> > Thank you very much for your help.
> >
> > Nicolas
> >
> > import java.io.IOException;
> > import java.net.MalformedURLException;
> > import java.util.ArrayList;
> > import java.util.List;
> > import java.util.Map;
> >
> > import org.apache.solr.client.solrj.SolrQuery;
> > import org.apache.solr.client.solrj.SolrServerException;
> > import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
> > import org.apache.solr.client.solrj.response.PivotField;
> > import org.apache.solr.client.solrj.response.QueryResponse;
> > import org.apache.solr.common.SolrInputDocument;
> > import org.apache.solr.common.util.NamedList;
> > import org.junit.Assert;
> > import org.junit.Before;
> > import org.junit.Test;
> >
> > /**
> >  * This is a test for hiearchical faceting based on SOLR-792 (I basically
> > just checkout the trunk of Solr-4.0).
> >  *
> >  * Unit test that shows the particular behaviour that I'm experiencing.
> >  * I would have expected that the doc (see line 95) with level3_loc_s
> > "Value_that_cant_be_matched" would appear in the pivot. It seems that you
> > actually need at least 2!
> >  *
> >  * @author npeeters
> >  */
> > public class HierarchicalPivotTest {
> >
> > CommonsHttpSolrServer server;
> >
> > @Before
> > public void setup() throws MalformedURLException {
> > // the instance can be reused
> > this.server = new CommonsHttpSolrServer(
> > "http://localhost:8983/solr");
> > this.server.setSoTimeout(500); // socket read timeout
> > this.server.setConnectionTimeout(100);
> > this.server.setDefaultMaxConnectionsPerHost(100);
> > this.server.setMaxTotalConnections(100);
> > this.server.setFollowRedirects(false); // defaults to false
> > // allowCompression defaults to false.
> > }
> >
> > protected List<SolrInputDocument> createHierarchicalOrgData() {
> > int id = 1;
> > List<SolrInputDocument> docs = new ArrayList<SolrInputDocument>();
> > docs.add(makeTestDoc("id", id++, "name", "Organization " + id,
> > "level1_loc_s", "Groot Amsterdam", "level2_loc_s", "Amsterdam",
> > "level3_loc_s", "Centrum"));
> > docs.add(makeTestDoc("id", id++, "name

Re: rejected email

2011-02-10 Thread Shane Perry
I tried posting from gmail this morning and had it rejected.  When I
resent as plaintext, it went through.

On Thu, Feb 10, 2011 at 11:51 AM, Erick Erickson
 wrote:
> Anyone else having problems with the Solr users list suddenly deciding
> everything you send is spam? For the last couple of days I've had this
> happening from gmail, and as far as I know I haven't changed anything that
> would give my mails a different "spam score" which is being exceeded
> according to the bounced message...
>
> Thanks,
> Erick
>


rejected email

2011-02-10 Thread Erick Erickson
Anyone else having problems with the Solr users list suddenly deciding
everything you send is spam? For the last couple of days I've had this
happening from gmail, and as far as I know I haven't changed anything that
would give my mails a different "spam score" which is being exceeded
according to the bounced message...

Thanks,
Erick


RE: Restart SolR and not Tomcat

2011-02-10 Thread Brad Dewar

I don't know:  either way works for me via cURL.

I can only say double check your typing (make sure you're passing the 
user/password you think you are), and double check server.xml.

Oh, the tomcat roles were tightened up a bit in tomcat 7.  If you're using 
tomcat 7 (especially if you've upgraded to it recently), you might have to add 
a little more detail to server.xml to make that user/password combination work 
from different sources. (i.e. add the manager-gui role or add the manager-text 
role to your tomcat user as appropriate).  Read and understand the tomcat docs 
before doing that.

Brad





-Original Message-
From: Paul Libbrecht [mailto:p...@hoplahup.net] 
Sent: February-10-11 2:08 PM
To: solr-user@lucene.apache.org
Subject: Re: Restart SolR and not Tomcat

Exactly Jenny,

*you are not authorized*

means the request cannot be authorized to execute.
It means some call failed with a security error.

manager/html/reload -> for browsers by humans
manager/reload      -> for curl

(at least that's my experience)

paul


On 10 Feb 2011, at 17:32, Jenny Arduini wrote:

> If I execute this command in shell:
> curl -u : http://localhost:8080/manager/html/reload?path=/solr
> I get this result:
> 401 Unauthorized
> 
>You are not authorized to view this page. If you have not changed
>any configuration files, please examine the file
> conf/tomcat-users.xml in your installation. That
>file must contain the credentials to let you use this webapp.
> 
> 
>For example, to add the manager-gui role to a user named
> tomcat with a password of s3cret, add the following to the
>config file listed above.
> 
>   <role rolename="manager-gui"/>
>   <user username="tomcat" password="s3cret" roles="manager-gui"/>
> 
>Note that for Tomcat 7 onwards, the roles required to use the manager
>application were changed from the single manager role to the
>following four roles. You will need to assign the role(s) required for
>the functionality you wish to access.
> 
> 
> manager-gui - allows access to the HTML GUI and the status
>  pages
> manager-script - allows access to the text interface and the
>  status pages
> manager-jmx - allows access to the JMX proxy and the status
>  pages
> manager-status - allows access to the status pages only
> 
> 
>The HTML interface is protected against CSRF but the text and JMX 
> interfaces
>are not. To maintain the CSRF protection:
> 
> 
> users with the manager-gui role should not be granted either
>the manager-script or manager-jmx roles.
> if the text or jmx interfaces are accessed through a browser (e.g. for
> testing since these interfaces are intended for tools not humans) then
> the browser must be closed afterwards to terminate the session.
> 
> 
>For more information - please see the
> Manager App HOW-TO.
> 
> 
> 
> 
> 
> What am I doing wrong?
> But if I open the manager in a browser everything is fine, and I can reload the 
> application without problems.
> 
> Jenny Arduini
> I.T.&T. S.r.l.
> Strada degli Angariari, 25
> 47891 Falciano
> Repubblica di San Marino
> Tel 0549 941183
> Fax 0549 974280
> email: jardu...@ittweb.net
> http://www.ittweb.net
> 
> 
> Il 10/02/2011 17.11, Wilkes, Chris ha scritto:
>> Her URL has "/text/" in it for some reason, replace that with "html" like 
>> Paul has:
>>  curl -u : 
>> http://localhost:8080/manager/html/reload?path=/solr
>> Alternatively if you have JMX access get the mbean with
>>  domain: Catalina
>>  name: //localhost/solr
>>  j2eeType: WebModule
>>  J2EEServer: none
>>  J2EEApplication: none
>>  beanClass: org.apache.tomcat.util.modeler.BaseModelMBean
>> and call "reload" on it.
>> 
>> Chris
>> 
>> On Feb 10, 2011, at 7:45 AM, Paul Libbrecht wrote:
>> 
>>> Jenny,
>>> 
>>> look inside the documentation of the manager application, I'm guessing you 
>>> haven't activated the cross context and privileges in the server.xml to get 
>>> this running.
>>> 
>>> Or does it work with HTML in a browser?
>>> 
>>> http://localhost:8080/manager/html
>>> 
>>> paul
>>> 
>>> 
>>> Le 10 févr. 2011 à 16:07, Jenny Arduini a écrit :
>>> 
 Hello everybody,
 I use Solr with Tomcat, and I have this problem:
 I need to restart Solr without restarting Tomcat, and I need to do this 
 operation from the shell.
 I try to do this with the following command, but it doesn't give a result:
 curl -u : 
 http://localhost:8080/manager/text/reload?path=/solr
 How can I do this?
 
 -- 
 Jenny Arduini
 I.T.&T. S.r.l.
 Strada degli Angariari, 25
 47891 Falciano

Re: Restart SolR and not Tomcat

2011-02-10 Thread Paul Libbrecht
Exactly Jenny,

*you are not authorized*

means the request cannot be authorized to execute.
Means some calls failed with a security error.

manager/html/reload -> for browsers by humans
manager/reload -> for curl

(at least that's my experience)

paul


Le 10 févr. 2011 à 17:32, Jenny Arduini a écrit :

> If I execute this command in the shell:
> curl -u : http://localhost:8080/manager/html/reload?path=/solr
> I get this result:
> 
>  "http://www.w3.org/TR/html4/strict.dtd";>
> 
> 
> 401 Unauthorized
> 
> 
> 
> 
> 
> 401 Unauthorized
> 
>You are not authorized to view this page. If you have not changed
>any configuration files, please examine the file
> conf/tomcat-users.xml in your installation. That
>file must contain the credentials to let you use this webapp.
> 
> 
>For example, to add the manager-gui role to a user named
> tomcat with a password of s3cret, add the following to the
>config file listed above.
> 
> 
> 
> 
> 
> 
>Note that for Tomcat 7 onwards, the roles required to use the manager
>application were changed from the single manager role to the
>following four roles. You will need to assign the role(s) required for
>the functionality you wish to access.
> 
> 
> manager-gui - allows access to the HTML GUI and the status
>  pages
> manager-script - allows access to the text interface and the
>  status pages
> manager-jmx - allows access to the JMX proxy and the status
>  pages
> manager-status - allows access to the status pages only
> 
> 
>The HTML interface is protected against CSRF but the text and JMX 
> interfaces
>are not. To maintain the CSRF protection:
> 
> 
> users with the manager-gui role should not be granted either
>the manager-script or manager-jmx roles.
> if the text or jmx interfaces are accessed through a browser (e.g. for
> testing since these interfaces are intended for tools not humans) then
> the browser must be closed afterwards to terminate the session.
> 
> 
>For more information - please see the
> Manager App HOW-TO.
> 
> 
> 
> 
> 
> What am I doing wrong?
> But if I open the manager in a browser everything is fine, and I can reload the 
> application without problems.
> 
> Jenny Arduini
> I.T.&T. S.r.l.
> Strada degli Angariari, 25
> 47891 Falciano
> Repubblica di San Marino
> Tel 0549 941183
> Fax 0549 974280
> email: jardu...@ittweb.net
> http://www.ittweb.net
> 
> 
> Il 10/02/2011 17.11, Wilkes, Chris ha scritto:
>> Her URL has "/text/" in it for some reason, replace that with "html" like 
>> Paul has:
>>  curl -u : 
>> http://localhost:8080/manager/html/reload?path=/solr
>> Alternatively if you have JMX access get the mbean with
>>  domain: Catalina
>>  name: //localhost/solr
>>  j2eeType: WebModule
>>  J2EEServer: none
>>  J2EEApplication: none
>>  beanClass: org.apache.tomcat.util.modeler.BaseModelMBean
>> and call "reload" on it.
>> 
>> Chris
>> 
>> On Feb 10, 2011, at 7:45 AM, Paul Libbrecht wrote:
>> 
>>> Jenny,
>>> 
>>> look inside the documentation of the manager application, I'm guessing you 
>>> haven't activated the cross context and privileges in the server.xml to get 
>>> this running.
>>> 
>>> Or does it work with HTML in a browser?
>>> 
>>> http://localhost:8080/manager/html
>>> 
>>> paul
>>> 
>>> 
>>> Le 10 févr. 2011 à 16:07, Jenny Arduini a écrit :
>>> 
 Hello everybody,
 I use Solr with Tomcat, and I have this problem:
 I need to restart Solr without restarting Tomcat, and I need to do this 
 operation from the shell.
 I try to do this with the following command, but it doesn't give a result:
 curl -u : 
 http://localhost:8080/manager/text/reload?path=/solr
 How can I do this?
 
 -- 
 Jenny Arduini
 I.T.&T. S.r.l.
 Strada degli Angariari, 25
 47891 Falciano
 Repubblica di San Marino
 Tel 0549 941183
 Fax 0549 974280
 email: jardu...@ittweb.net
 http://www.ittweb.net
 
>>> 
>> 



Re: SolrCloud Feedback

2011-02-10 Thread thorsten

Hi Mark, hi all,

I just got a customer request to conduct an analysis on the state of
SolrCloud. 

He wants to see SolrCloud as part of the next Solr 1.5 release and is willing
to sponsor our dev time to close outstanding bugs and open issues that may
prevent the inclusion of SolrCloud in the next release. I need to give him a
listing of issues and an estimate of how long it will take us to fix them.

I did
https://issues.apache.org/jira/secure/IssueNavigator.jspa?reset=true&jqlQuery=project+%3D+SOLR+AND+(summary+~+cloud+OR+description+~+cloud+OR+comment+~+cloud)+AND+resolution+%3D+Unresolved
which returns 8 bugs. Do you consider this a comprehensive list of open
issues, or are there some important ones missing from this list?

I read http://wiki.apache.org/solr/SolrCloud and it is talking about a
branch of its own however when I review
https://issues.apache.org/jira/browse/SOLR-1873 I get the impression that
the work is already merged back into trunk, right?

So which is best to start testing: the branch or trunk?

TIA for any information

salu2
-- 
Thorsten Scherler 
codeBusters S.L. - web based systems

http://www.codebusters.es/
-- 
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-Feedback-tp2290048p2467091.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Restart SolR and not Tomcat

2011-02-10 Thread Jenny Arduini

If I execute this command in the shell:
curl -u : 
http://localhost:8080/manager/html/reload?path=/solr

I get this result:

"http://www.w3.org/TR/html4/strict.dtd";>



401 Unauthorized





401 Unauthorized

You are not authorized to view this page. If you have not changed
any configuration files, please examine the file
conf/tomcat-users.xml in your installation. That
file must contain the credentials to let you use this webapp.


For example, to add the manager-gui role to a user named
tomcat with a password of s3cret, add the following to the
config file listed above.






Note that for Tomcat 7 onwards, the roles required to use the manager
application were changed from the single manager role to the
following four roles. You will need to assign the role(s) required for
the functionality you wish to access.


manager-gui - allows access to the HTML GUI and the status
  pages
manager-script - allows access to the text interface and the
  status pages
manager-jmx - allows access to the JMX proxy and the status
  pages
manager-status - allows access to the status pages only


The HTML interface is protected against CSRF but the text and JMX 
interfaces

are not. To maintain the CSRF protection:


users with the manager-gui role should not be granted either
the manager-script or manager-jmx roles.
if the text or jmx interfaces are accessed through a browser (e.g. for
 testing since these interfaces are intended for tools not 
humans) then
 the browser must be closed afterwards to terminate the 
session.



For more information - please see the
Manager App HOW-TO.





What am I doing wrong?
But if I open the manager in a browser everything is fine, and I can reload the 
application without problems.


Jenny Arduini
I.T.&T. S.r.l.
Strada degli Angariari, 25
47891 Falciano
Repubblica di San Marino
Tel 0549 941183
Fax 0549 974280
email: jardu...@ittweb.net
http://www.ittweb.net


Il 10/02/2011 17.11, Wilkes, Chris ha scritto:
Her URL has "/text/" in it for some reason, replace that with "html" 
like Paul has:
  curl -u : 
http://localhost:8080/manager/html/reload?path=/solr

Alternatively if you have JMX access get the mbean with
  domain: Catalina
  name: //localhost/solr
  j2eeType: WebModule
  J2EEServer: none
  J2EEApplication: none
  beanClass: org.apache.tomcat.util.modeler.BaseModelMBean
and call "reload" on it.

Chris

On Feb 10, 2011, at 7:45 AM, Paul Libbrecht wrote:


Jenny,

look inside the documentation of the manager application, I'm 
guessing you haven't activated the cross context and privileges in 
the server.xml to get this running.


Or does it work with HTML in a browser?

 http://localhost:8080/manager/html

paul


Le 10 févr. 2011 à 16:07, Jenny Arduini a écrit :


Hello everybody,
I use Solr with Tomcat, and I have this problem:
I need to restart Solr without restarting Tomcat, and I need to do this 
operation from the shell.

I try to do this with the following command, but it doesn't give a result:
curl -u : 
http://localhost:8080/manager/text/reload?path=/solr

How can I do this?

--
Jenny Arduini
I.T.&T. S.r.l.
Strada degli Angariari, 25
47891 Falciano
Repubblica di San Marino
Tel 0549 941183
Fax 0549 974280
email: jardu...@ittweb.net
http://www.ittweb.net







Re: Restart SolR and not Tomcat

2011-02-10 Thread Wilkes, Chris
Her URL has "/text/" in it for some reason, replace that with "html"  
like Paul has:

  curl -u : http://localhost:8080/manager/html/reload?path=/solr
Alternatively if you have JMX access get the mbean with
  domain: Catalina
  name: //localhost/solr
  j2eeType: WebModule
  J2EEServer: none
  J2EEApplication: none
  beanClass: org.apache.tomcat.util.modeler.BaseModelMBean
and call "reload" on it.

Chris

On Feb 10, 2011, at 7:45 AM, Paul Libbrecht wrote:


Jenny,

look inside the documentation of the manager application, I'm  
guessing you haven't activated the cross context and privileges in  
the server.xml to get this running.


Or does it work with HTML in a browser?

 http://localhost:8080/manager/html

paul


Le 10 févr. 2011 à 16:07, Jenny Arduini a écrit :


Hello everybody,
I use Solr with Tomcat, and I have this problem:
I need to restart Solr without restarting Tomcat, and I need to do this  
operation from the shell.
I try to do this with the following command, but it doesn't give a  
result:

curl -u : http://localhost:8080/manager/text/reload?path=/solr
How can I do this?

--
Jenny Arduini
I.T.&T. S.r.l.
Strada degli Angariari, 25
47891 Falciano
Repubblica di San Marino
Tel 0549 941183
Fax 0549 974280
email: jardu...@ittweb.net
http://www.ittweb.net







Re: Sharing a schema with multiple cores

2011-02-10 Thread Marc SCHNEIDER
Ok I found the solution:
First of all, schema is an attribute of the core tag, so it becomes:


Also make sure the conf directory is in your classpath or relative to the path
from where you are launching Solr.
It is NOT relative to the solr.xml path.
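
Putting the pieces together, the working solr.xml presumably looks something like
this (a sketch reconstructed around the "test"/"prod" cores from the original
question, since the archive stripped the actual XML):

```xml
<solr persistent="true">
  <cores adminPath="/admin/cores" shareSchema="true">
    <!-- schema is a core attribute; its path is resolved against the classpath
         or the launch directory, NOT against the solr.xml location -->
    <core name="test" instanceDir="test" schema="conf/schema.xml"/>
    <core name="prod" instanceDir="prod" schema="conf/schema.xml"/>
  </cores>
</solr>
```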

Marc.

On Thu, Feb 10, 2011 at 2:48 PM, Marc SCHNEIDER
wrote:

> Hi,
>
> I'm using Solr 1.4.1 and trying to share a schema among two cores.
> Here is what I did :
>
> solr.xml :
> 
>  shareSchema="true">
> 
>  conf/schema.xml
> 
> 
>  conf/schema.xml
> 
> 
> 
>
> Then in my solr.home (where solr.xml lives) I created a conf directory and
> put schema.xml inside.
>
> But it doesn't work and Solr is complaining about missing schema.xml (Can't
> find resource 'schema.xml').
> If I put schema.xml into test/conf and prod/conf it does work...
>
> So what am I missing here?
>
> Thanks,
> Marc.
>


Re: Restart SolR and not Tomcat

2011-02-10 Thread Paul Libbrecht
Jenny,

look inside the documentation of the manager application, I'm guessing you 
haven't activated the cross context and privileges in the server.xml to get 
this running.

Or does it work with HTML in a browser?

  http://localhost:8080/manager/html

paul


Le 10 févr. 2011 à 16:07, Jenny Arduini a écrit :

> Hello everybody,
> I use Solr with Tomcat, and I have this problem:
> I need to restart Solr without restarting Tomcat, and I need to do this operation 
> from the shell.
> I try to do this with the following command, but it doesn't give a result:
> curl -u : http://localhost:8080/manager/text/reload?path=/solr
> How can I do this?
> 
> -- 
> Jenny Arduini
> I.T.&T. S.r.l.
> Strada degli Angariari, 25
> 47891 Falciano
> Repubblica di San Marino
> Tel 0549 941183
> Fax 0549 974280
> email: jardu...@ittweb.net
> http://www.ittweb.net
> 



Writing on master while replicating to slave

2011-02-10 Thread Shane Perry
Hi,

When a slave is replicating from the master instance, it appears a
write lock is created. Will this lock cause issues with writing to the
master while the replication is occurring or does SOLR have some
queuing that occurs to prevent the actual write until the replication
is complete?  I've been looking around but can't seem to find anything
definitive.

My application's data is user centric and as a result the application
does a lot of updates and commits.  Additionally, we want to provide
near real-time searching and so replication would have to occur
aggressively.  Does anybody have any strategies for handling such an
application which they would be willing to share?

Thanks,

Shane


Re: Wikipedia table of contents.

2011-02-10 Thread Markus Jelsma
Yes but it's not very useful:
http://wiki.apache.org/solr/TitleIndex


On Thursday 10 February 2011 16:14:40 Dennis Gearon wrote:
> Is there a detailed, perhaps alphabetical & hierarchical table of
> contents for all the wiki pages on the Solr site? Sent
> from Yahoo! Mail on Android

-- 
Markus Jelsma - CTO - Openindex
http://www.linkedin.com/in/markus17
050-8536620 / 06-50258350


Wikipedia table of contents.

2011-02-10 Thread Dennis Gearon
Is there a detailed, perhaps alphabetical & hierarchical table of 
contents for all the wiki pages on the Solr site?
Sent from Yahoo! Mail on Android


Re: SolrCloud Feedback

2011-02-10 Thread Jan Høydahl
Hi,

I have so far just tested the examples and got an N by M cluster running. My 
feedback:

a) First of all, a major update of the SolrCloud Wiki is needed, to clearly 
state what is in which version, what are current improvement plans and get rid 
of outdated stuff. That said I think there are many good ideas there.

b) The "collection" terminology is too much confused with "core", and should 
probably be made more distinct. I just tried to configure two cores on the same 
Solr instance into the same collection, and that worked fine, both as distinct 
shards and as same shard (replica). The wiki examples give the impression that 
"collection1" in localhost:8983/solr/collection1/select?distrib=true is some 
magic collection identifier, but what it really does is doing the query on the 
*core* named "collection1", looking up what collection that core is part of and 
distributing the query to all shards in that collection.

c) ZK is not designed to store large files. While the files in conf are 
normally well below the 1M limit ZK imposes, we should perhaps consider using a 
lightweight distributed object or k/v store for holding the /CONFIGS and let ZK 
store a reference only.

d) How are admins supposed to update configs in ZK? Install their favourite ZK 
editor?

e) We should perhaps not be so afraid to make ZK a requirement for Solr in v4. 
Ideally you should interact with a 1-node Solr in the same manner as you do 
with a 100-node Solr. An example is the Admin GUI where the "schema" and 
"solrconfig" links assume local file. This requires decent tool support to make 
ZK interaction intuitive, such as "import" and "export" commands.

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

On 19. jan. 2011, at 21.07, Mark Miller wrote:

> Hello Users,
> 
> About a little over a year ago, a few of us started working on what we called 
> SolrCloud.
> 
> This initial bit of work was really a combination of laying some base work - 
> figuring out how to integrate ZooKeeper with Solr in a limited way, dealing 
> with some infrastructure - and picking off some low hanging search side fruit.
> 
> The next step is the indexing side. And we plan on starting to tackle that 
> sometime soon.
> 
> But first - could you help with some feedback? Some people are using our 
> SolrCloud start - I have seen evidence of it ;) Some, even in production.
> 
> I would love to have your help in targeting what we now try and improve. Any 
> suggestions or feedback? If you have sent this before, I/others likely missed 
> it - send it again!
> 
> I know anyone that has used SolrCloud has some feedback. I know it because 
> I've used it too ;) It's too complicated to setup still. There are still 
> plenty of pain points. We accepted some compromise trying to fit into what 
> Solr was, and not wanting to dig in too far before feeling things out and 
> letting users try things out a bit. Thinking that we might be able to adjust 
> Solr to be more in favor of SolrCloud as we go, what is the ideal state of 
> the work we have currently done?
> 
> If anyone using SolrCloud helps with the feedback, I'll help with the coding 
> effort.
> 
> - Mark Miller
> -- lucidimagination.com



Restart SolR and not Tomcat

2011-02-10 Thread Jenny Arduini

Hello everybody,
I use Solr with Tomcat, and I have this problem:
I need to restart Solr without restarting Tomcat, and I need to do this 
operation from the shell.

I try to do this with the following command, but it doesn't give a result:
curl -u : 
http://localhost:8080/manager/text/reload?path=/solr

How can I do this?

--
Jenny Arduini
I.T.&T. S.r.l.
Strada degli Angariari, 25
47891 Falciano
Repubblica di San Marino
Tel 0549 941183
Fax 0549 974280
email: jardu...@ittweb.net
http://www.ittweb.net



solr.xml isn't loaded from classpath?

2011-02-10 Thread Nicholas Swarr
(may have double posted...apologies if it is)

It seems like when "solr home" is absent, Solr makes an attempt to look in a few 
other places to load its configuration.  It will try to look for solrconfig.xml 
on the classpath as well.  It doesn't seem like it makes any attempt to find 
solr.xml though.  Why is that?  Read below for the larger narrative...

The gory details:

Having this configuration discovery makes things really convenient for creating 
custom Solr web applications where you can throw all of Solr's config in your 
resources, create a war, deploy it to Tomcat and it happily loads.  No setting 
of environment variables or setup required.  Something like this,

/someapp/src/main/resources
   |-solrconfig.xml
   |-schema.xml
   |-etc.

The same approach is outlined here:
http://netsuke.wordpress.com/2010/06/24/launching-solr-from-maven-for-rapid-development/

We're creating a multicore installation and have created a folder structure 
which no longer has a solrconfig.xml at the top level of the resources.

/someapp/src/main/resources
   |-solr.xml
   |-core1
  |-solrconfig.xml
  |-schema.xml
  |-etc.
   |-core2
  |-solrconfig.xml
  |-schema.xml
  |-etc.

And when you try to run this, Solr can't find what it needs to start up.  To 
fix this, we manually deployed the configuration on the web server and set the 
solr/home environment variable on the web app's config within Tomcat.  Not 
ideal and it makes automation awkward.

Ultimately, I want a completely packaged war for a multicore instance I can 
drop anywhere without additional setup.  Is this possible?  Am I approaching 
this wrong?
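
One avenue (a sketch, not an answer given in this thread): the solr/home lookup is
a JNDI env-entry, so a war can carry it in its own WEB-INF/web.xml instead of
relying on a per-deployment container setting:

```xml
<!-- WEB-INF/web.xml — a sketch; the path value is a placeholder -->
<env-entry>
  <env-entry-name>solr/home</env-entry-name>
  <env-entry-type>java.lang.String</env-entry-type>
  <env-entry-value>/path/to/solr/home</env-entry-value>
</env-entry>
```

This still points at a filesystem path, so it makes the war self-describing but
not fully self-contained.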






Re: SolrCloud Feedback

2011-02-10 Thread Mark Miller

On Jan 20, 2011, at 12:49 AM, Grijesh wrote:

> 
> Hi Mark,
> 
> I was just working on SolrCloud for my R&D and I got a question in my Mind.
> Since in SolrCloud the configuration files are being shared on all Cloud
> instances and If I have different configuration files for different cores
> then how can I manage it by my Zookeeper managed SolrCloud.
> 
> -
> Thanx:
> Grijesh



You can create as many configuration sets as you want - then you just set on 
the collection zk node which set of config files should be used.

On the Cloud wiki you will see:

-Dbootstrap_confdir=./solr/conf -Dcollection.configName=myconf

That will upload the config files in ./solr/conf to a config set named myconf. 
When there is only one config set, that is what will be used - but when there 
is more than one, you can set which config set to use on the collection node.
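
For instance, bootstrapping two config sets could look roughly like this (the core
paths and set names are placeholders; the flags are the ones from the SolrCloud
wiki, so treat this as a sketch):

```sh
# Upload each core's conf directory as its own named config set in ZooKeeper
java -Dbootstrap_confdir=./solr/coreA/conf -Dcollection.configName=confA \
     -DzkHost=localhost:2181 -jar start.jar
java -Dbootstrap_confdir=./solr/coreB/conf -Dcollection.configName=confB \
     -DzkHost=localhost:2181 -jar start.jar
```

After that, each collection's ZooKeeper node can be pointed at confA or confB.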

- Mark Miller
lucidimagination.com






Sharing a schema with multiple cores

2011-02-10 Thread Marc SCHNEIDER
Hi,

I'm using Solr 1.4.1 and trying to share a schema among two cores.
Here is what I did :

solr.xml :



 conf/schema.xml


 conf/schema.xml




Then in my solr.home (where solr.xml lives) I created a conf directory and
put schema.xml inside.

But it doesn't work and Solr is complaining about missing schema.xml (Can't
find resource 'schema.xml').
If I put schema.xml into test/conf and prod/conf it does work...

So what am I missing here?

Thanks,
Marc.


Re: Solr Cloud - shard replication

2011-02-10 Thread Jan Høydahl
Hi,

SolrCloud does not currently handle the indexing side at all. So you'll need to 
set up replication to tell Solr that node B should be a replica of node A.

http://wiki.apache.org/solr/SolrReplication

After you do this, you can push a document to node A, wait a minute to let it 
replicate to node B, and then do a distributed (load balanced) query:

http://nodeA:8983/solr/collection1/select?q=foo+bar&distrib=true

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com
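
The replication setup Jan refers to boils down to a pair of requestHandler
entries in solrconfig.xml (a sketch; the master URL and poll interval are
placeholders):

```xml
<!-- On the master (node A) -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <str name="confFiles">schema.xml,stopwords.txt</str>
  </lst>
</requestHandler>

<!-- On the slave (node B) -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://nodeA:8983/solr/replication</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>
```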

On 10. feb. 2011, at 11.32, Hayden Stainsby wrote:

> Hi all,
> 
> I'm attempting to set up a simple Solr Cloud, right now almost directly from 
> the tutorial at: http://wiki.apache.org/solr/SolrCloud
> 
> I'm attempting to set up a simple 1 shard cloud, across two servers. I'm not 
> sure I understand the architecture behind this, but what I'm after is two 
> copies of a single shard, so that if a single server goes down I will still 
> have a full index available.
> 
> I have two servers set up with 1 shard between them, however when I load data 
> (from exampledocs) into one of the servers and then run a distributed search, 
> when the server I load data into handles the query I get a result, but if the 
> other server handles it I get nothing. All queries are run using the URL for 
> one server, but I can see from the CLI output which server is actually 
> handling the request.
> 
> I'm working on the latest nightly build 
> (apache-solr-4.0-2011-02-10_08-27-09), although I've tried a couple of 
> different nightly builds over the last week or so with the same effect.
> 
> My solr/solr.xml contains:
> 
>  
>
>  
> 
> 
> If someone could point out what I'm doing wrong or where I need to look to 
> correct my configuration that'd be fantastic.
> 
> Thanks in advance.
> 
> --
> Hayden
> 
> 
> 
> 
> #!/usr/bin/perl
> chop($_=<>);@s=split/ /;foreach$m(@s){if($m=='*'){$z=pop@t;$x=
> pop@t;$a=eval"$x$m$z";push@t,$a;}else{push@t,$m;}}print"$a\n";
> 
> 



Re: Tomcat6 and Log4j

2011-02-10 Thread Xavier SCHEPLER
Yes thanks. This works fine :

log4j.rootLogger=INFO, SOLR
log4j.appender.SOLR=org.apache.log4j.DailyRollingFileAppender
log4j.appender.SOLR.file=/home/quetelet_bdq/logs/bdq.log
log4j.appender.SOLR.datePattern='.'yyyy-MM-dd
log4j.appender.SOLR.layout=org.apache.log4j.PatternLayout
log4j.appender.SOLR.layout.conversionPattern=%d %p [%c{3}] - [%t] - %X{ip}: %m%n


--
All emails sent from the Sciences Po mail system must comply with the
conditions of use. To consult them, visit
http://www.ressources-numeriques.sciences-po.fr/confidentialite_courriel.htm


Re: Tomcat6 and Log4j

2011-02-10 Thread Markus Jelsma
Oh, and for sharing purposes; we use a configuration like this one. It'll have 
an info and error log and stores them next to Tomcat's own logs in 
/var/log/tomat on Debian systems (or whatever catalina.base is on other 
distros).

log4j.rootLogger=DEBUG, info, error
 
log4j.appender.info=org.apache.log4j.DailyRollingFileAppender
log4j.appender.info.Threshold=INFO
log4j.appender.info.MaxFileSize=500KB
log4j.appender.info.MaxBackupIndex=20
log4j.appender.info.Append=true
log4j.appender.info.File=${catalina.base}/logs/info.log
log4j.appender.info.DatePattern='.'yyyy-MM-dd'.log'
log4j.appender.info.layout=org.apache.log4j.PatternLayout
log4j.appender.info.layout.ConversionPattern=%d %p [%c{3}] - [%t] - %X{ip}: 
%m%n
 
 
log4j.appender.error=org.apache.log4j.DailyRollingFileAppender
log4j.appender.error.Threshold=ERROR
log4j.appender.error.File=${catalina.base}/logs/error.log
log4j.appender.error.DatePattern='.'yyyy-MM-dd'.log'
log4j.appender.error.layout=org.apache.log4j.PatternLayout
log4j.appender.error.layout.ConversionPattern=%d %p [%c{3}] - [%t] - %X{ip}: 
%m%n
log4j.appender.error.MaxFileSize=500KB
log4j.appender.error.MaxBackupIndex=20



On Thursday 10 February 2011 12:51:13 Markus Jelsma wrote:
> Oh, now looking at your log4j.properties, I believe it's wrong. You
> declared INFO as rootLogger but you use SOLR.
> 
> -log4j.rootLogger=INFO
> +log4j.rootLogger=SOLR
> 
> try again
> 
> On Thursday 10 February 2011 09:41:29 Xavier Schepler wrote:
> > Hi,
> > 
> > I added “slf4j-log4j12-1.5.5.jar” and “log4j-1.2.15.jar” to
> > $CATALINA_HOME/webapps/solr/WEB-INF/lib ,
> > then deleted the library “slf4j-jdk14-1.5.5.jar” from
> > $CATALINA_HOME/webapps/solr/WEB-INF/lib,
> > then created a directory $CATALINA_HOME/webapps/solr/WEB-INF/classes.
> > and created $CATALINA_HOME/webapps/solr/WEB-INF/classes/log4j.properties
> > with the following contents :
> > 
> > log4j.rootLogger=INFO
> > log4j.appender.SOLR.logfile=org.apache.log4j.DailyRollingFileAppender
> > log4j.appender.SOLR.logfile.file=/home/quetelet_bdq/logs/bdq.log
> > log4j.appender.SOLR.logfile.DatePattern='.'yyyy-MM-dd
> > log4j.appender.SOLR.logfile.layout=org.apache.log4j.PatternLayout
> > log4j.appender.SOLR.logfile.layout.conversionPattern=%d %p [%c{3}] -
> > [%t] - %X{ip}: %m%n
> > log4j.appender.SOLR.logfile = true
> > 
> > I restarted solr and I got the following message in the catalina.out log
> > :
> > 
> > log4j:WARN No appenders could be found for logger
> > (org.apache.solr.core.SolrResourceLoader).
> > log4j:WARN Please initialize the log4j system properly.
> > log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
> > more info.
> > 
> > What is told on this page is that this error occurs what the
> > log4j.properties isn't found.
> > 
> > Could someone help me to have it working ?
> > 
> > Thanks in advance,
> > 
> > Xavier

-- 
Markus Jelsma - CTO - Openindex
http://www.linkedin.com/in/markus17
050-8536620 / 06-50258350


Re: Tomcat6 and Log4j

2011-02-10 Thread Markus Jelsma
Oh, now looking at your log4j.properties, I believe it's wrong. You declared 
INFO as rootLogger but you use SOLR. 

-log4j.rootLogger=INFO
+log4j.rootLogger=SOLR

try again




On Thursday 10 February 2011 09:41:29 Xavier Schepler wrote:
> Hi,
> 
> I added “slf4j-log4j12-1.5.5.jar” and “log4j-1.2.15.jar” to
> $CATALINA_HOME/webapps/solr/WEB-INF/lib ,
> then deleted the library “slf4j-jdk14-1.5.5.jar” from
> $CATALINA_HOME/webapps/solr/WEB-INF/lib,
> then created a directory $CATALINA_HOME/webapps/solr/WEB-INF/classes.
> and created $CATALINA_HOME/webapps/solr/WEB-INF/classes/log4j.properties
> with the following contents :
> 
> log4j.rootLogger=INFO
> log4j.appender.SOLR.logfile=org.apache.log4j.DailyRollingFileAppender
> log4j.appender.SOLR.logfile.file=/home/quetelet_bdq/logs/bdq.log
> log4j.appender.SOLR.logfile.DatePattern='.'yyyy-MM-dd
> log4j.appender.SOLR.logfile.layout=org.apache.log4j.PatternLayout
> log4j.appender.SOLR.logfile.layout.conversionPattern=%d %p [%c{3}] -
> [%t] - %X{ip}: %m%n
> log4j.appender.SOLR.logfile = true
> 
> I restarted solr and I got the following message in the catalina.out log :
> 
> log4j:WARN No appenders could be found for logger
> (org.apache.solr.core.SolrResourceLoader).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
> more info.
> 
> What is told on this page is that this error occurs what the
> log4j.properties isn't found.
> 
> Could someone help me to have it working ?
> 
> Thanks in advance,
> 
> Xavier

-- 
Markus Jelsma - CTO - Openindex
http://www.linkedin.com/in/markus17
050-8536620 / 06-50258350


Re: Tomcat6 and Log4j

2011-02-10 Thread Xavier SCHEPLER
I added it to /etc/default/tomcat6.
What happened is that the same error message appeared twice in 
/var/log/tomcat6/catalina.out.
Like the same file was loaded twice.




Re: Tomcat6 and Log4j

2011-02-10 Thread Markus Jelsma
Add it to the CATALINA_OPTS, on Debian systems you could edit 
/etc/default/tomcat
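
On a Debian-style layout that amounts to something like the following (the
properties file path is a placeholder):

```sh
# /etc/default/tomcat6 — append the system property to the JVM options
JAVA_OPTS="$JAVA_OPTS -Dlog4j.configuration=file:///var/solr/log4j.properties"
```

Note that log4j.configuration is read as a URL, hence the file:// prefix.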

On Thursday 10 February 2011 12:27:59 Xavier SCHEPLER wrote:
 -Dlog4j.configuration=$CATALINA_HOME/webapps/solr/WEB-INF/classes/log4j.properties

-- 
Markus Jelsma - CTO - Openindex
http://www.linkedin.com/in/markus17
050-8536620 / 06-50258350


Re: Tomcat6 and Log4j

2011-02-10 Thread Xavier SCHEPLER
Thanks for your response.
How could I do that?




> 
> From: Jan Høydahl 
> Sent: Thu Feb 10 11:01:15 CET 2011
> To: 
> Subject: Re: Tomcat6 and Log4j
> 
> 
> Have you tried to start Tomcat with 
> -Dlog4j.configuration=$CATALINA_HOME/webapps/solr/WEB-INF/classes/log4j.properties
> 
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
> 
> On 10. feb. 2011, at 09.41, Xavier Schepler wrote:
> 
> > Hi,
> > 
> > I added “slf4j-log4j12-1.5.5.jar” and “log4j-1.2.15.jar” to 
> > $CATALINA_HOME/webapps/solr/WEB-INF/lib ,
> > then deleted the library “slf4j-jdk14-1.5.5.jar” from 
> > $CATALINA_HOME/webapps/solr/WEB-INF/lib,
> > then created a directory $CATALINA_HOME/webapps/solr/WEB-INF/classes.
> > and created $CATALINA_HOME/webapps/solr/WEB-INF/classes/log4j.properties 
> > with the following contents :
> > 
> > log4j.rootLogger=INFO
> > log4j.appender.SOLR.logfile=org.apache.log4j.DailyRollingFileAppender
> > log4j.appender.SOLR.logfile.file=/home/quetelet_bdq/logs/bdq.log
> > log4j.appender.SOLR.logfile.DatePattern='.'yyyy-MM-dd
> > log4j.appender.SOLR.logfile.layout=org.apache.log4j.PatternLayout
> > log4j.appender.SOLR.logfile.layout.conversionPattern=%d %p [%c{3}] - [%t] - 
> > %X{ip}: %m%n
> > log4j.appender.SOLR.logfile = true
> > 
> > I restarted solr and I got the following message in the catalina.out log :
> > 
> > log4j:WARN No appenders could be found for logger 
> > (org.apache.solr.core.SolrResourceLoader).
> > log4j:WARN Please initialize the log4j system properly.
> > log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for 
> > more info.
> > 
> > What that page says is that this error occurs when the 
> > log4j.properties file isn't found.
> > 
> > Could someone help me to have it working ?
> > 
> > Thanks in advance,
> > 
> > Xavier
> 




Replication and newSearcher registerd > poll interval

2011-02-10 Thread dan sutton
Hi,

If the replication window is too small to allow a new searcher to warm
and close the current searcher before the new one needs to be in
place, then the slaves continuously have a high load, and potentially
an OOM error. We've noticed this in our environment, where we have
several facets on large multivalued fields.

I was wondering what the list thought about modifying the replication
process to skip polls (though warning in the logs) when there is a
searcher in the process of warming? Otherwise, as in our case, it brings
the slave to its knees. Our workaround was to extend the poll interval,
though that's not ideal.

Cheers,
Dan
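
The workaround mentioned above, extending the poll interval, is configured on the slave's replication handler in solrconfig.xml. A sketch (the master URL and interval are example values; pollInterval uses HH:mm:ss):

```xml
<!-- Slave-side replication config in solrconfig.xml. -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master-host:8983/solr/replication</str>
    <!-- Poll the master every five minutes instead of the default. -->
    <str name="pollInterval">00:05:00</str>
  </lst>
</requestHandler>
```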


Solr Cloud - shard replication

2011-02-10 Thread Hayden Stainsby
Hi all,

I'm attempting to set up a simple Solr Cloud, right now almost directly from 
the tutorial at: http://wiki.apache.org/solr/SolrCloud

I'm attempting to set up a simple one-shard cloud across two servers. I'm not 
sure I understand the architecture behind this, but what I'm after is two 
copies of a single shard, so that if a single server goes down I will still 
have a full index available.

I have two servers set up with one shard between them. However, when I load 
data (from exampledocs) into one of the servers and then run a distributed 
search, I get a result when the server I loaded the data into handles the 
query, but nothing when the other server handles it. All queries are run 
using the URL for one server, but I can see from the CLI output which server 
is actually handling the request.

I'm working on the latest nightly build (apache-solr-4.0-2011-02-10_08-27-09), 
although I've tried a couple of different nightly builds over the last week or 
so with the same effect.

My solr/solr.xml contains:

  

  

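
The solr.xml markup above was stripped by the list archive. For a one-shard, two-server setup of this era, each server's solr.xml would typically name the same shard; a hypothetical sketch (names assumed, not the poster's actual config):

```xml
<!-- Hypothetical solr.xml for one replica; the second server uses the
     same shard name so both hosts serve copies of shard1. -->
<solr persistent="false">
  <cores adminPath="/admin/cores" defaultCoreName="collection1">
    <core name="collection1" instanceDir="." shard="shard1" />
  </cores>
</solr>
```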

If someone could point out what I'm doing wrong or where I need to look to 
correct my configuration that'd be fantastic.

Thanks in advance.

--
Hayden




#!/usr/bin/perl
chop($_=<>);@s=split/ /;foreach$m(@s){if($m=='*'){$z=pop@t;$x=
pop@t;$a=eval"$x$m$z";push@t,$a;}else{push@t,$m;}}print"$a\n";




Re: Tomcat6 and Log4j

2011-02-10 Thread Jan Høydahl
Have you tried to start Tomcat with 
-Dlog4j.configuration=$CATALINA_HOME/webapps/solr/WEB-INF/classes/log4j.properties

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

On 10. feb. 2011, at 09.41, Xavier Schepler wrote:

> Hi,
> 
> I added “slf4j-log4j12-1.5.5.jar” and “log4j-1.2.15.jar” to 
> $CATALINA_HOME/webapps/solr/WEB-INF/lib ,
> then deleted the library “slf4j-jdk14-1.5.5.jar” from 
> $CATALINA_HOME/webapps/solr/WEB-INF/lib,
> then created a directory $CATALINA_HOME/webapps/solr/WEB-INF/classes.
> and created $CATALINA_HOME/webapps/solr/WEB-INF/classes/log4j.properties with 
> the following contents :
> 
> log4j.rootLogger=INFO
> log4j.appender.SOLR.logfile=org.apache.log4j.DailyRollingFileAppender
> log4j.appender.SOLR.logfile.file=/home/quetelet_bdq/logs/bdq.log
> log4j.appender.SOLR.logfile.DatePattern='.'yyyy-MM-dd
> log4j.appender.SOLR.logfile.layout=org.apache.log4j.PatternLayout
> log4j.appender.SOLR.logfile.layout.conversionPattern=%d %p [%c{3}] - [%t] - 
> %X{ip}: %m%n
> log4j.appender.SOLR.logfile = true
> 
> I restarted solr and I got the following message in the catalina.out log :
> 
> log4j:WARN No appenders could be found for logger 
> (org.apache.solr.core.SolrResourceLoader).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> 
> What that page says is that this error occurs when the log4j.properties 
> file isn't found.
> 
> Could someone help me to have it working ?
> 
> Thanks in advance,
> 
> Xavier



Re: solr render biased search result

2011-02-10 Thread Jan Høydahl
You can do a lot with function queries.
Only you know what the domain-specific requirements are, so you should write 
application-layer code that modifies the Solr query based on the profile of 
the user searching.

Example for the 1950 movie lover you could do:

q=goo bar&defType=dismax&bf=map(movieyear,1950,1959,1,0)^1000.0

Or you could use a recip(abs(sub(movieyear,1955)),10,100,10)^10.0 to give more 
boost to a movie the closer to 1955 it is.

However, sometimes it is easier to pre-calculate some field values in the index 
and filter/boost on those.

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

On 9. feb. 2011, at 20.44, cyang2010 wrote:

> 
> Hi,
> 
> I am asked whether solr can render biased search results. For example, for
> this search (query all movie titles in the Comedy genre), for a user who
> indicates a preference for 1950's movies, would solr render the 1950's movies
> with a higher score (top of the list)? Or, if the user is a kid, would the
> result render G/PG rated movies at the top of the list, and all the R rated
> movies at the bottom?
> 
> I know that solr can boost the score based on a match on a particular field.
> But it can't favor one value over another in the same field. Is that
> right?
> -- 
> View this message in context: 
> http://lucene.472066.n3.nabble.com/solr-render-biased-search-result-tp2461155p2461155.html
> Sent from the Solr - User mailing list archive at Nabble.com.



Tomcat6 and Log4j

2011-02-10 Thread Xavier Schepler

Hi,

I added “slf4j-log4j12-1.5.5.jar” and “log4j-1.2.15.jar” to 
$CATALINA_HOME/webapps/solr/WEB-INF/lib ,
then deleted the library “slf4j-jdk14-1.5.5.jar” from 
$CATALINA_HOME/webapps/solr/WEB-INF/lib,

then created a directory $CATALINA_HOME/webapps/solr/WEB-INF/classes.
and created $CATALINA_HOME/webapps/solr/WEB-INF/classes/log4j.properties 
with the following contents :


log4j.rootLogger=INFO
log4j.appender.SOLR.logfile=org.apache.log4j.DailyRollingFileAppender
log4j.appender.SOLR.logfile.file=/home/quetelet_bdq/logs/bdq.log
log4j.appender.SOLR.logfile.DatePattern='.'yyyy-MM-dd
log4j.appender.SOLR.logfile.layout=org.apache.log4j.PatternLayout
log4j.appender.SOLR.logfile.layout.conversionPattern=%d %p [%c{3}] - 
[%t] - %X{ip}: %m%n

log4j.appender.SOLR.logfile = true

I restarted solr and I got the following message in the catalina.out log :

log4j:WARN No appenders could be found for logger 
(org.apache.solr.core.SolrResourceLoader).

log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for 
more info.


What that page says is that this error occurs when the 
log4j.properties file isn't found.


Could someone help me to have it working ?

Thanks in advance,

Xavier
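
For reference, log4j's "No appenders could be found" warning usually means the root logger has no appender attached. A corrected sketch of the properties above, with the appender named on the rootLogger line and the class set on the SOLR key itself rather than on a sub-key (the log path follows the original):

```properties
# Attach the SOLR appender to the root logger.
log4j.rootLogger=INFO, SOLR
# Define the appender class on the appender key itself.
log4j.appender.SOLR=org.apache.log4j.DailyRollingFileAppender
log4j.appender.SOLR.File=/home/quetelet_bdq/logs/bdq.log
log4j.appender.SOLR.DatePattern='.'yyyy-MM-dd
log4j.appender.SOLR.layout=org.apache.log4j.PatternLayout
log4j.appender.SOLR.layout.ConversionPattern=%d %p [%c{3}] - [%t] - %X{ip}: %m%n
```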


Re: DataImportHandler: regex debugging

2011-02-10 Thread Stefan Matheis
Jon,

looks like you're (just) missing transformer="RegexTransformer" in
your entity definition, as documented here:
http://wiki.apache.org/solr/DataImportHandler#RegexTransformer

Regards
Stefan
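
A minimal sketch of a fixed entity definition; the column names and query here are illustrative, not the actual values from the question:

```xml
<!-- transformer="RegexTransformer" enables the regex/sourceColName
     attributes on the field elements below. -->
<entity name="boxshots" transformer="RegexTransformer"
        query="SELECT ... FROM games g ...">
  <!-- Capture everything up to the first comma of the source column. -->
  <field column="boxshot_url_single" sourceColName="boxshot_url"
         regex="^([^,]*)" />
</entity>
```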

On Wed, Feb 9, 2011 at 9:16 PM, Jon Drukman  wrote:
> I am trying to use the regex transformer but it's not returning anything.
> Either my regex is wrong, or I've done something else wrong in the setup of 
> the
> entity.  Is there any way to debug this?  Making a change and waiting 7 
> minutes
> to reindex the entity sucks.
>
>   query="SELECT GROUP_CONCAT(i.url, ',') boxshot_url,
>  GROUP_CONCAT(i2.url, ',') boxshot_url_small FROM games g
>         left join image_sizes i ON g.box_image_id = i.id AND i.size_type = 39
>         left join image_sizes i2 on g.box_image_id = i2.id AND i2.size_type = 
> 40
>         WHERE g.game_seo_title = '${game.game_seo_title}'
>         GROUP BY g.game_seo_title">
>    
>     sourceColName="boxshot_url_small" />
> 
>
> This returns columns that are either null, or have some comma-separated 
> strings.
> I want the bit up to the first comma, if it exists.
>
> Ideally I could have it log the query and the input/output
> of the field statements.
>
>