but choose a stopped or non-stopped field if quotes are present when your application builds the query
Regards
David Stuart
M +44(0) 778 854 2157
T +44(0) 845 519 5465
www.axistwelve.com
Axis12 Ltd | The Ivories | 6/18 Northampton Street, London | N1 2HY | UK
AXIS12 - Enterprise Web Solutions
of the RELOAD action but
that was ignored.
Any ideas? If not I will create a patch for RELOAD to support this
functionality.
Regards,
David Stuart
in the data directory, it will be reloaded for each IndexReader.
Which is the elevate.xml. So it looks like I will go down the custom coding route.
Regards,
David Stuart
is it fixed here? I could also go down the route of extending the
SolrCore create function to accept additional params and move the file into the
defined data directory.
Ideas?
Thanks for your help
David Stuart
This is good timing; I am/was just about to embark on a spike, if anyone is keen to
help out.
On 30 Nov 2010, at 00:37, Mark wrote:
The DataSource subclass route is what I will probably be interested in. Are
there any working examples of this already out there?
On 11/29/10 12:32 PM, Aaron Morton wrote:
If you are using Solr Multicore http://wiki.apache.org/solr/CoreAdmin you can
issue a Reload command
http://localhost:8983/solr/admin/cores?action=RELOAD&core=core0
On 26 Oct 2010, at 11:09, Swapnonil Mukherjee wrote:
Hi Everybody,
If I change my schema.xml to, do I have to restart Solr? Is
Two things: one, are your DB columns uppercase, as this would affect the output?
Second, what does your db-data-config.xml look like?
Regards,
Dave
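For context, a minimal db-data-config.xml sketch (driver, table, and column names are hypothetical); note that DIH column names are case-sensitive, so uppercase DB columns need either matching case or an explicit field mapping:

```xml
<dataConfig>
  <dataSource driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost/mydb"
              user="solr" password="secret"/>
  <document>
    <entity name="item" query="SELECT ID, TITLE FROM items">
      <!-- map uppercase DB columns onto lowercase schema fields -->
      <field column="ID" name="id"/>
      <field column="TITLE" name="title"/>
    </entity>
  </document>
</dataConfig>
```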
On 30 Sep 2010, at 03:01, harrysmith wrote:
Looking for some clarification on DIH to make sure I am interpreting this
correctly.
I have a wide DB
.
Is there an easy way to deal with schema.xml?
Thanks.
From: David Stuart david.stu...@progressivealliance.co.uk
To: solr-user@lucene.apache.org
Sent: Mon, July 26, 2010 1:46:58 PM
Subject: Re: How to Combine Drupal solrconfig.xml with Nutch solrconfig.xml?
Hi Savannah,
I have just
http://drupal.org/project/nutch
Regards,
David Stuart
On 26 Jul 2010, at 21:37, Savannah Beckett wrote:
I am using Drupal ApacheSolr module to integrate solr with drupal. I already
integrated solr with nutch. I already moved nutch's solrconfig.xml and
schema.xml to solr's example
On 3 Jun 2010, at 02:58, Dennis Gearon gear...@sbcglobal.net wrote:
When adding data continuously, that data is available after
committing and is indexed, right?
Yes
If so, how often does reindexing do some good?
You should only need to reindex if the data changes or you change your
On 3 Jun 2010, at 02:51, Dennis Gearon gear...@sbcglobal.net wrote:
Well, I hope to have around 5 million datasets/documents within 1
year, so this is good info. BUT if I DO have that many, then the
market I am aiming at will end up giving me 100 times more than that
within 2 years.
Are
On 3 Jun 2010, at 03:51, Blargy zman...@hotmail.com wrote:
Would dumping the databases to a local file help at all?
I would suspect not, especially with the size of your data. But it would
be good to know how long that takes, i.e. creating a SQL script that
just pulls that data out.
How long does it take to do a grab of all the data via SQL? I found
that by denormalizing the data into a lookup table I was able to
index about 300k rows of similar data size with DIH regex splitting on
some fields in about 8 mins. I know it's not quite the scale, but with
batching...
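As an illustration of that kind of DIH regex split, a hypothetical Python sketch of what RegexTransformer's splitBy does to one denormalized column (the column name and delimiter are assumptions):

```python
import re

# One denormalized row, as DIH would hand it to a transformer:
# several child values packed into a single delimited column.
row = {"id": "42", "tags": "news|sport|travel"}

# Equivalent of RegexTransformer's splitBy: break one column into many values.
tags = re.split(r"\|", row["tags"])

print(tags)  # ['news', 'sport', 'travel']
```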
in very large it would reduce the overhead of running a
multicore solution especially in indexing etc
David Stuart
On 28 May 2010, at 18:12, Moazzam Khan moazz...@gmail.com wrote:
Thanks for all your answers guys. Requests and consultants have a many
to many relationship so I can't store
Hey,
In OS X you should be able to patch in the same way as on Linux: patch -p
[level] < name_of_patch.patch. You can do this from the shell,
including on the Mac.
David Stuart
On 11 May 2010, at 17:15, Jonty Rhods jonty.rh...@gmail.com wrote:
hi all,
I am very new to solr.
Now I required
Hi jonty,
In the root directory of the src run
patch -p0 < name_of_patch.patch
David Stuart
On 11 May 2010, at 17:50, Jonty Rhods jonty.rh...@gmail.com wrote:
hi David,
thanks for the quick reply.
Please give me the full command so I can patch. What is the meaning of
[level]?
As I write I had
Using a multicore setup should do the trick, see http://wiki.apache.org/solr/CoreAdmin,
specifically the swap option.
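For reference, a hedged sketch of the swap call, following the same CoreAdmin URL pattern (the core names here are hypothetical):

```
http://localhost:8983/solr/admin/cores?action=SWAP&core=core0&other=core1
```

After the swap, queries against core0 hit the index that was previously serving core1, so you can rebuild offline and swap the fresh index in atomically.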
Cheers
David Stuart
On 19 Mar 2010, at 10:18, muneeb muneeba...@hotmail.com wrote:
Hi,
I have indexed almost 7 million articles on two separate cores, each
with their own
Hi,
I think your needs would be better met by Distributed Search http://wiki.apache.org/solr/DistributedSearch
which allows shards to live on different servers and will search
across all of those shards when a query comes in. There are a few patches
which will hopefully be available in the
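For illustration, a distributed query passes a shards parameter listing the hosts to fan out to (the hosts and query below are hypothetical):

```
http://localhost:8983/solr/select?shards=host1:8983/solr,host2:8983/solr&q=solr
```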
Hey All,
Can anyone tell me what the attribute name is for defining a default value in
the field tag of the RSS data import handler??
Basically I want to do something like
<field column="type" value="external_source" commonField="true"/>
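For what it's worth, one hedged approach uses DIH's TemplateTransformer, whose template attribute fills a constant into a column (the entity name, URL, and xpath here are hypothetical):

```xml
<entity name="feed" processor="XPathEntityProcessor"
        url="http://example.com/feed.rss" forEach="/rss/channel/item"
        transformer="TemplateTransformer">
  <!-- TemplateTransformer writes the literal template string into the column -->
  <field column="type" template="external_source"/>
  <field column="title" xpath="/rss/channel/item/title"/>
</entity>
```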
Any Ideas?
Regards,
Dave
The MoreLikeThisHandler allows external text to be streamed to it; see
http://wiki.apache.org/solr/MoreLikeThisHandler#Using_ContentStreams. This
feature is quite good if you have a lot of text and start hitting the character
limit in the url.
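A hedged sketch of avoiding that limit by POSTing the text as a ContentStream rather than a URL parameter (the handler path, field names, and file are assumptions, and body streaming must be enabled in solrconfig.xml):

```shell
# POST the external text as the request body instead of packing it into the URL
curl -H 'Content-Type: text/plain' --data-binary @document.txt \
  'http://localhost:8983/solr/mlt?mlt.fl=title,body&mlt.mintf=1'
```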
Regards,
Dave
On 22 Jan 2010, at 05:24, Otis
Hi,
The Drupal Solr Module will work with both Solr 1.3 and 1.4
I currently have client installations using both these versions with Drupal
(verison 5 and 6 )
Regards,
Dave
On 14 Jan 2010, at 23:08, Otis Gospodnetic wrote:
You may want to ask on Drupal's mailing lists. I hear about Drupal
Hi,
The answer is: it depends ;)
If your 10 tables represent one entity, e.g. a person, their address, etc.,
then the one-document-per-entity approach works.
But if your 10 tables each represent a series of entities that you want
to surface in your search results separately, then make a document for
each (i.e. it
The returned XML result tag has a numFound attribute that will report
0 if nothing matches your search criteria.
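A minimal sketch of checking that attribute for zero-result logging, assuming the default XML response writer:

```python
import xml.etree.ElementTree as ET

# A Solr XML response; numFound sits on the <result> element.
response = """<response>
  <lst name="responseHeader"><int name="status">0</int></lst>
  <result name="response" numFound="0" start="0"/>
</response>"""

def is_zero_result(xml_body):
    """Return True when the <result> element reports numFound == 0."""
    root = ET.fromstring(xml_body)
    return int(root.find("result").get("numFound")) == 0

if is_zero_result(response):
    # a real application would log the query string here
    print("zero-result search")
```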
David
On 15 Dec 2009, at 08:16, Roland Villemoes r...@alpha-solutions.dk
wrote:
Hi
Question: How do you log zero-result searches?
It is quite important from a business
Yeah, I tried it as well; it doesn't seem to implement xpointer properly,
so you can't add multiple fields or field types.
David
On 28 Nov 2009, at 18:49, Peter Wolanin peter.wola...@acquia.com
wrote:
Follow-up: it seems the schema parser doesn't barf if you use
xinclude with a single
Hi
This is a PHP problem; you need to increase your per-thread memory
limit in your php.ini. The field name is memory_limit.
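For example, in php.ini (the value shown is only an illustration; pick a limit that fits your application):

```ini
; maximum memory a single PHP script may allocate
memory_limit = 128M
```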
Regards
David
On 11 Nov 2009, at 07:56, Jörg Agatz joerg.ag...@googlemail.com
wrote:
Hello,
I have a problem with the memory size, but I don't know how I can
You should be OK with the revision option below. Look for the
highest revision number in the list of files in the patch, as
Subversion increments revision numbers on a repo basis, not a file basis,
so the highest number will represent the current state of all the
files when the patch was
Hi,
I am pushing data to Solr from two different sources: Nutch and a CMS.
I have a data clash in that in Nutch a copyField is required to push
the url field to the id field, as it is used as the primary lookup in
the Nutch-Solr integration update. The other CMS also uses the url
field
Hi,
I am trying to get xincludes with xpointer working in schema.xml as
per this closed issue request https://issues.apache.org/jira/browse/SOLR-1167.
To make our upgrade path easier I want to be able to include extra
custom fields in the schema, and am including an extra set of fields
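A sketch of the kind of include being attempted (the file name and xpointer expression are hypothetical; as noted elsewhere in the thread, xpointer support in the bundled parser is limited, so pulling in multiple elements this way may not work):

```xml
<schema name="example" version="1.2">
  <fields>
    <!-- pull extra custom field definitions in from a separate file -->
    <xi:include xmlns:xi="http://www.w3.org/2001/XInclude"
                href="custom-fields.xml"
                xpointer="xpointer(/fields/*)"/>
    <field name="id" type="string" indexed="true" stored="true"/>
  </fields>
</schema>
```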