Also, you have enable=${solr.clustering.enabled:false} in there. Are
you setting solr.clustering.enabled=true anywhere? I'd just remove that
bit, you clearly want it enabled.
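For comparison, the two forms look roughly like this in solrconfig.xml (a sketch based on the stock example config; your component declaration may differ):

```xml
<!-- gated on a system property that defaults to false -->
<searchComponent name="clustering"
                 enable="${solr.clustering.enabled:false}"
                 class="solr.clustering.ClusteringComponent"/>

<!-- always enabled -->
<searchComponent name="clustering"
                 class="solr.clustering.ClusteringComponent"/>
```

With the first form you'd have to start Solr with -Dsolr.clustering.enabled=true; the second just works.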
Upayavira
On Thu, Jan 10, 2013, at 04:23 PM, obi240 wrote:
I recently started working with the clustering plugin on solr
Heh, the it depends answer :-)
Thanks for the clarification.
Upayavira
On Thu, Jan 10, 2013, at 05:01 PM, Mark Miller wrote:
I think it really depends - if you are going for very fast visibility,
you're going to spend a bunch of time warming, and then just throw it out
before it even gets much
, or three,
etc as demand increases - and you can put them all behind an elastic
load balancer, and scale easily.
You may have multiple cores on your Solr system, but note that servers
have multiple CPUs, so two simultaneous replication requests needn't be
a disaster.
Upayavira
On Thu, Jan 10
Thx, that makes a lot of sense.
Upayavira
On Thu, Jan 10, 2013, at 09:37 PM, Mark Miller wrote:
I actually think the example NRT setting of one second is probably lower
than it should be.
When you think about most NRT cases, do you really need 1 second
visibility? You normally could easily
It seems to me like you want to use result grouping by hotel. You'll
have to add up the tariffs for each hotel, but that isn't hard.
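A sketch of such a grouped query (the field names hotel_id_s and tariff_f are hypothetical):

```text
q=city:london&group=true&group.field=hotel_id_s&fl=hotel_id_s,tariff_f&rows=10
```

Each group then holds the matching documents for one hotel; summing the tariff values per group is a small loop on the client side.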
Upayavira
On Wed, Jan 9, 2013, at 06:08 AM, Harshvardhan Ojha wrote:
Hi Alex,
Thanks for your reply.
I saw prices based on daterange using multipoints
Try q=name:(ian paisley)&q.op=AND
Does that work better for you?
It would also match Ian James Paisley, but not Ian Jackson.
Upayavira
On Wed, Jan 9, 2013, at 01:30 PM, Michael Jones wrote:
Hi,
My schema file is here http://pastebin.com/ArY7xVUJ
Query (name:'ian paisley') returns ~ 3000
not indexing or committing to the core at the time you do this.
Upayavira
On Wed, Jan 9, 2013, at 01:48 PM, marotosg wrote:
Hi,
Is it possible to restore an old backup without shutting down Solr?
Regards,
Sergio
with
the CoreAdminHandler, I reckon.
Upayavira
On Wed, Jan 9, 2013, at 09:27 PM, Paul Jungwirth wrote:
Yes, I agree about making sure the backups actually work, whatever the
approach. Thanks for your reply and all you've contributed to the
Solr/Lucene community. The Lucene in Action book has been a huge help to
me
Have you uploaded a patch to JIRA???
Upayavira
On Tue, Jan 8, 2013, at 07:57 PM, jmozah wrote:
Hmm. Fixed it.
Did similar thing as SOLR-247 for distributed search.
Basically modified the FacetInfo method of the FacetComponent.java to
make it work.. :-)
./zahoor
On 08-Jan-2013, at 9
Can you explain why you want to implement a different sort first? There
may be other ways of achieving the same thing.
Upayavira
On Sun, Jan 6, 2013, at 01:32 AM, andy wrote:
Hi,
Maybe this is an old thread or maybe it's different with previous one.
I want to custom solr sort and pass
things down.
Upayavira
On Mon, Jan 7, 2013, at 05:18 PM, Uwe Reh wrote:
Hi Robi,
thank you for the contribution. It's exciting to read that your index
isn't contaminated by the number of fields. I can't exclude other
mistakes, but my first experience with extensive use of dynamic fields
think of any reasons
why using SolrCloud would require a change to your indexing (assuming
you are currently indexing to a single node).
But really, the best way to find out is to try it.
Upayavira
On Sun, Jan 6, 2013, at 01:08 AM, Jorge Luis Betancourt Gonzalez wrote:
So, from my php app point
difficulty via the
XML/HTTP interface.
Your mileage may vary, but for that particular app, that is what it
took.
Note, 4.0 can work in a 3.x way (old style replication, etc). You don't
need to use SolrCloud etc when using 4.0.
Upayavira
On Sat, Jan 5, 2013, at 08:20 AM, Jorge Luis Betancourt
to be out by the time you go into production, which will likely iron
out many niggles found in 4.0.
Upayavira
On Fri, Jan 4, 2013, at 12:07 PM, Dikchant Sahi wrote:
As someone in the forum correctly said: if all Solr releases were
evolutionary, Solr 4.0 is revolutionary. It has lots of improvements over
significant, development effort.
Upayavira
On Fri, Jan 4, 2013, at 11:39 AM, Oleg Ruchovets wrote:
Hi ,
I am very new to solr.
We want to use solr search capabilities thru multiple data sources
We have couple data sources in house and couple of data sources
externally.
We can index all
Using your terminology, I'd say core is a physical solr term, and index
is a physical lucene term. A collection or a shard is a logical solr
term.
Upayavira
On Fri, Jan 4, 2013, at 06:28 PM, darren wrote:
My understanding is core is a logical solr term. Index is a physical
lucene term. A solr
to get scores to be roughly the same between Solr
and your API, which might be prohibitively difficult.
Others might have alternative ideas.
Upayavira
On Fri, Jan 4, 2013, at 04:37 PM, Oleg Ruchovets wrote:
Ok , thank you for the answer.
Maybe you can point me to documentation or any other
I agree with the 'more mature' analysis, but surely you can use 4.0 in a
3.x style without greater difficulty, no?
Upayavira
On Fri, Jan 4, 2013, at 07:35 PM, Otis Gospodnetic wrote:
Hi,
If you don't need to shard your index and don't need NRT search Solr 3.x
is
much simpler to operate
DIH won't make any real difference, I'd say. The work to write terms to
your index still happens in either case.
Upayavira
On Fri, Jan 4, 2013, at 11:25 PM, Marcin Rzewucki wrote:
Thanks. I guess you're right - it's normal behaviour. Are there some
guidelines how to use ramBufferSizeMB or only
understand,
anyway.
Upayavira
On Wed, Jan 2, 2013, at 06:10 PM, Lance Norskog wrote:
Indexes will not work. I have not heard of an index upgrader. If you run
your 3.6 and new 4.0 Solr at the same time, you can upload all the data
with a DataImportHandler script using the SolrEntityProcessor
I have permission to provide an export. Right now I'm thinking of it
being a one off dump, without the user dir. If someone wants to research
how to make moin automate it, I at least promise to listen.
Upayavira
On Tue, Jan 1, 2013, at 08:10 AM, Alexandre Rafalovitch wrote:
That's why I think
together, and to do so in a way that is performant, and that's
likely to be a challenge. And that is a Java-.Net thing, not a
specifically Solr thing.
Upayavira
On Tue, Jan 1, 2013, at 09:12 PM, dafna wrote:
The code that I don't want to rewrite is the analyzers.
I have written many analyzers.
I
as is achieved by a Field of type string.
Upayavira
On Mon, Dec 31, 2012, at 01:08 PM, Tomás Fernández Löbbe wrote:
It can't be *really* case independent. You could lowercase everything,
but
you'd see the facet value in lowercase too. If you really need to search
in
lowercase and display
.
Upayavira
On Mon, Dec 31, 2012, at 12:55 AM, Jason wrote:
Hi, Erick
I didn't configure anything for index backup.
My ReplicationHandler configuration is below.
Other setting in solrconfig.xml is almost default.
Is there a deletion policy for replication?
I know maxNumberOfBackups parameter
active connections to solr hosts,
which is not a common scenario for load balances as I understand it.
Upayavira
On Fri, Dec 28, 2012, at 04:39 PM, Marcin Rzewucki wrote:
Hi,
Does Solr need connection to all of hosts in ZK ensemble or only to one
of
them at a time ? I wonder if it is possible
audit if those cores in your short
term super-collections, but the net result is some names you can use to
address various subsets of your total content.
Upayavira (whose kids are still asleep, so no excitement yet...)
On Tue, Dec 25, 2012, at 06:49 AM, Otis Gospodnetic wrote:
Hi,
Right
You make the text field stored=false in your schema, then reindex.
Then it won't show in search results.
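A minimal sketch of such a schema entry (field and type names are illustrative):

```xml
<field name="text" type="text_general" indexed="true" stored="false" multiValued="true"/>
```

indexed="true" keeps the field searchable; stored="false" just keeps its contents out of the response.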
Upayavira
On Sun, Dec 23, 2012, at 09:27 AM, uwe72 wrote:
your query-time fl parameter.
means don't return this field?
because we have many many fields, so probably now i use
:(tomorrow morning)
Upayavira
On Fri, Dec 21, 2012, at 09:37 AM, xpow wrote:
Hi,
I have a case where i need to find the results of a query that search for
a
exact phrase + other words, for example, if i have a keyword like:
how about tomorrow morning
the results that should be fetched should
Let me see how easy it is to patch this.
Upayavira
On Fri, Dec 21, 2012, at 12:56 PM, Erick Erickson wrote:
Nuke 'em or something. This is one of the most annoying errors I see.
I'm
not a new user, but every time that flashes by I have to look to see what
happened and think Oh, it's
</subdoc>
</str>
<str name="xml2_s">
<subdoc id="id_1">
<data>testing</data>
</subdoc>
</str>
So both forms have worked for me.
Upayavira
On Fri, Dec 21, 2012, at 02:44 PM, Modou DIA wrote:
I am working with an xml format named EAD (Encoded Archival
Description
it is logs, distributed IDF has no real bearing.
Upayavira
, and given
files in a lucene index never change (they are only ever deleted or
replaced), this works as a good copy technique for backing up.
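The hard-link trick can be sketched in a few lines (paths and file names here are throwaway illustrations; Solr's own snapshot scripts do essentially this):

```python
import os
import tempfile

def snapshot(index_dir, backup_dir):
    """Clone an index directory with hard links instead of copying bytes.

    Because Lucene segment files are write-once (only ever deleted or
    replaced, never modified), the linked copies stay valid even as the
    live index changes.
    """
    os.makedirs(backup_dir, exist_ok=True)
    for name in os.listdir(index_dir):
        os.link(os.path.join(index_dir, name), os.path.join(backup_dir, name))

# Demonstrate with a fake index directory:
base = tempfile.mkdtemp()
idx = os.path.join(base, "index")
os.makedirs(idx)
with open(os.path.join(idx, "segments_1"), "w") as f:
    f.write("fake segment data")

snap = os.path.join(base, "snapshot")
snapshot(idx, snap)

# Both names now point at the same inode - no data was copied.
same = os.path.samefile(os.path.join(idx, "segments_1"),
                        os.path.join(snap, "segments_1"))
```

The snapshot costs almost no disk space or I/O at creation time; space is only consumed later as the live index deletes segments the snapshot still links to.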
Upayavira
On Thu, Dec 20, 2012, at 10:34 AM, Markus Jelsma wrote:
You can use the replication handler to fetch a complete snapshot of the
index over HTTP
I cannot see how SolrJ and the admin UI would return different results.
Could you run exactly the same query on both and show what you get here?
Upayavira
On Thu, Dec 20, 2012, at 06:17 AM, Joe wrote:
I'm using SOLR 4 for an application, where I need to search the index
soon
after inserting
Which strikes me as the right way to go.
Upayavira
On Thu, Dec 20, 2012, at 12:30 PM, AlexeyK wrote:
Implemented it with http://wiki.apache.org/solr/DocTransformers.
, in which case I guess you could monitor your
logs for the last commit, and do your backup a 10 seconds after that.
Upayavira
On Thu, Dec 20, 2012, at 12:44 PM, Andy D'Arcy Jewell wrote:
On 20/12/12 11:58, Upayavira wrote:
I've never used it, but the replication handler has an option:
http
Personally I have never given it any attention, so I suspect it doesn't
matter much.
Upayavira
On Thu, Dec 20, 2012, at 05:08 AM, Alexandre Rafalovitch wrote:
Hello,
In the schema.xml, we have a name attribute on the root note. The
documentation says it is for display purpose only
That's neat, but wouldn't that run on every commit? How would you use it
to, say, back up once a day?
Upayavira
On Thu, Dec 20, 2012, at 01:57 PM, Markus Jelsma wrote:
You can use the postCommit event in updateHandler to execute a task.
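In solrconfig.xml that looks roughly like this (the script name and dir are placeholders):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <listener event="postCommit" class="solr.RunExecutableListener">
    <str name="exe">snapshooter</str>
    <str name="dir">solr/bin</str>
    <bool name="wait">true</bool>
  </listener>
</updateHandler>
```

Note it fires on every commit, so a once-a-day backup would still need external scheduling (cron or similar) rather than this listener alone.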
-Original message-
From:Upayavira u
Are you sure a commit didn't happen between? Also, a background merge
might have happened.
As to using a backup, you are right, just stop solr, put the snapshot
into index/data, and restart.
Upayavira
On Thu, Dec 20, 2012, at 05:16 PM, Andy D'Arcy Jewell wrote:
On 20/12/12 13:38, Upayavira
You're saying that there's no chance to catch it in the middle of
writing the segments file?
Having said that, the segments file is pretty small, so the chance would
be pretty slim.
Upayavira
On Thu, Dec 20, 2012, at 06:45 PM, Lance Norskog wrote:
To be clear: 1) is fine. Lucene index updates
Right, you can store it, but you can't search on it that way, and you
certainly can't do complex searches that take the XML structure into
account (e.g. xpath queries).
Upayavira
On Thu, Dec 20, 2012, at 10:22 PM, Alexandre Rafalovitch wrote:
What happens if you just supply it as CDATA
Or stop including those files. They were an option in the old admin UI
that 'had' to be propagated through to the new one. But does anyone
actually use them?
Upayavira
On Tue, Dec 18, 2012, at 07:19 AM, Shawn Heisey wrote:
On 12/17/2012 11:55 PM, Alexandre Rafalovitch wrote:
Again, thinking
they are found. Solr (or Lucene rather), once a document has
been written, will never rewrite it, so it seems perfectly reasonable to
expect stored multi-valued fields to return in the same order.
When it comes to indexed terms, that's a different matter.
Upayavira
On Tue, Dec 18, 2012, at 07:59 AM
(exception angst for admins)
than benefit.
Upayavira
On Tue, Dec 18, 2012, at 09:34 AM, Alexandre Rafalovitch wrote:
The new UI interface does use them (at least some of them) and puts them
into their own section. Unfortunately, the CSS reset does something funny
to their styles (e.g. H1 looks like
anyhow.
Upayavira
On Tue, Dec 18, 2012, at 06:03 AM, Dixline wrote:
Hi,
I've deleted a document using
http://localhost:8983/solr/update?stream.body=
deletequeryskills_s:Perl/query/delete and the committed the
delete
also. Again if search using q=perl i'm able to see the same document
Yay, someone actually using it...
Are you using 4.0? In 4.0 it has been merged into a single
UpdateRequestHandler now. As I understand it, if you post XML to
http://localhost:8983/solr/update and provide a tr parameter, it will do
the same thing as the XsltUpdateRequestHandler did.
Upayavira
to create a facet that shows which of them matched.
If you want combinations, you should do that at display time.
Upayavira
On Mon, Dec 17, 2012, at 07:07 AM, veena rani wrote:
Hi,
If searched for three words in query.
Eg:
Sony, Samsung,LG.
i should get faceted result with the count
Note with 4.0 you don't need to build a spellcheck index. Spellchecking
can happen from your main index (unless you are providing your own
dictionary).
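A sketch of the relevant solrconfig.xml piece (the field name is illustrative; DirectSolrSpellChecker reads terms straight from the main index, so no separate spellcheck index build is needed):

```xml
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">default</str>
    <str name="field">text</str>
    <str name="classname">solr.DirectSolrSpellChecker</str>
  </lst>
</searchComponent>
```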
Upayavira
On Mon, Dec 17, 2012, at 12:36 PM, Artyom wrote:
Thank you, Tomás.
This wiki
http://wiki.apache.org/solr/UpdateXmlMessages
No, not that I am aware of, you would have to do it in your application.
Upayavira
On Sun, Dec 16, 2012, at 12:55 AM, Jorge Luis Betancourt Gonzalez wrote:
Exist any similar approach that I could use in solr 3.6.1 or should I add
this logic to my application?
- Mensaje original
The default sort by score will only trigger if there is in fact a score
- some searches result in a constant score, meaning documents come back
in index order.
Upayavira
On Fri, Dec 14, 2012, at 02:13 PM, Bill Au wrote:
If your exact search returns more than one result, then by default
: fantasy},
ISBN_s: { set : 0-380-97365-0}
remove_s : { set : null } }
]'
/* example stolen from Yonik's ApacheCon talk */
Upayavira
On Sat, Dec 15, 2012, at 01:34 AM, Jorge Luis Betancourt Gonzalez wrote:
Hi all:
I'm trying to build a query suggestion system using solr (also used
Nope, it is a Solr 4.0 thing. In order for it to work, you need to store
every field, as what it does behind the scenes is retrieve the stored
fields, rebuild the document, and then post the whole document back.
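A complete atomic-update request body can be sketched like this (field names and the id are illustrative; "set" replaces a value and setting a field to null removes it):

```python
import json

# Build the JSON body you'd POST to /update in Solr 4.
update = [{
    "id": "book-1",
    "cat_s": {"set": "fantasy"},
    "ISBN_s": {"set": "0-380-97365-0"},
    "remove_s": {"set": None},  # serialises as null, which deletes the field
}]
body = json.dumps(update)

# Round-trip to confirm the structure survives serialisation.
parsed = json.loads(body)
```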
Upayavira
On Sat, Dec 15, 2012, at 04:52 PM, Jorge Luis Betancourt Gonzalez wrote
out there, and that would simply be
prohibitive.
Upayavira
On Fri, Dec 14, 2012, at 05:52 AM, Dikchant Sahi wrote:
Yes, we have an uniqueId defined but merge adds two documents with the
same
id. As per my understanding this is how Solr behaves. Correct me if I am
wrong.
On Fri, Dec 14, 2012
Can you access your Solr server via a browser? I bet it is something
simple like a URL being wrong.
Upayavira
On Fri, Dec 14, 2012, at 09:37 AM, Romita Saha wrote:
Hi, I am using the Solr-PHP client. I am not able to ping Solr. Is there any
change that I need to make in Solr config files so
a templating language and
is no way at fault. E.g. I remember a really effective use of XSLT to
render RSS direct out of Solr - worked exceptionally well.
Upayavira
On Fri, Dec 14, 2012, at 05:15 PM, Erik Hatcher wrote:
There's absolutely nothing inherently wrong with using Velocity with lean
in
$CATALINA_HOME/conf/Catalina/localhost/solr-example.xml, you should
replace 'solr-example' with whatever you name your webapps.
Upayavira
On Wed, Dec 12, 2012, at 10:44 AM, Daniel Exner wrote:
Hi,
Gian Maria Ricci - aka Alkampfer wrote:
Hi to everyone, I've a solr3.6 server up and running
You can only search against terms that are stored in your index. If you
have applied index time synonyms, you can't remove them at query time.
You can, however, use copyField to clone an incoming field to another
field that doesn't use synonyms, and search against that field instead.
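A sketch of that schema arrangement (all names here are hypothetical):

```xml
<field name="body" type="text_with_synonyms" indexed="true" stored="true"/>
<field name="body_nosyn" type="text_no_synonyms" indexed="true" stored="false"/>
<copyField source="body" dest="body_nosyn"/>
```

Queries that must not see synonym expansion then go against body_nosyn instead of body.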
Upayavira
running a slave and its master
on the same hardware.
Upayavira
On Tue, Dec 11, 2012, at 01:10 PM, suri wrote:
Hi,
We are planning to setup multiple slaves to handle search loads. Some of
these slaves will be on the same physical machine. Instead of each slave
doing the replication
, more like a conventional search results, or at least as much
like that as possible when you really don't know much about what data
you're getting back.
To put my money where my mouth is, I've uploaded a patch to JIRA[1] with
a first pass at what I mean.
Thoughts/comments welcome.
Upayavira
[1] https
more expensive. Try it on google, they won't let you go beyond
around 900 pages or such (or is it 900 results?)
Upayavira
On Sat, Dec 8, 2012, at 01:10 AM, Petersen, Robert wrote:
Hi guys,
Sometimes we get a bot crawling our search function on our retail web
site. The ebay crawler loves
issues already mentioned there's the MVC side -
you have a model and a view, but no controller, thus it becomes hard to
build anything useful very quickly.
I'd happily hack disclaimers into place if considered useful.
Upayavira
On Tue, Dec 4, 2012, at 01:21 PM, Jack Krupansky wrote:
let's also
One small question - did you re-index in-between? The index structure
will be different for each.
Upayavira
On Tue, Dec 4, 2012, at 02:30 PM, Aaron Daubman wrote:
Greetings,
I'm finally updating an old instance and in testing, discovered that
using
the recommended TrieField instead
only.
Upayavira
On Tue, Dec 4, 2012, at 02:37 PM, Jack Krupansky wrote:
Or, maybe integrate /browse with the Solr Admin UI and give it a graphic
treatment that screams that it is a development tool and not designed to
be
a model for an app UI.
And, I still think it would be good
But it would be a lot harder than either splitting them out into
separate docs, or writing code to re-index docs when one of their
'next-event' dates passes, with a new single valued 'next-event' field.
Less efficient, but easier to write/manage.
Upayavira
On Tue, Dec 4, 2012, at 07:35 PM, Chris
But there's value in having something packaged within Solr itself, for
demo purposes.
That would, I suspect, make it Java (like it or not!). And that would
probably not make it very state-of-the art, unless it used jquery, with
a very lightweight java portion, which would be possible.
Upayavira
not doing it at all
and see if your index actually reaches a point where it is needed.
Upayavira
On Wed, Dec 5, 2012, at 12:31 AM, Otis Gospodnetic wrote:
Hi,
You should search the ML archives for : optimize wunder Erick Otis :)
Is WAS really AWS? If so, if these are new EC2 instances you
You can't. I guess you could extract those numbers from your text and
index them into a separate numeric field.
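Such an extraction step might look like this before indexing (a sketch; the blurb text and the idea of a separate multi-valued integer field are illustrative):

```python
import re

# Hypothetical pre-indexing step: pull the integers out of a free-text
# description so they can be sent to a separate numeric field.
blurb = "Opened in 1889, the tower is 324 metres tall."
numbers = [int(n) for n in re.findall(r"\d+", blurb)]
```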
Upayavira
On Sun, Dec 2, 2012, at 07:08 AM, jend wrote:
Hi,
Im building a solr install which has a blurb of data in a field
description.
In that field there are sentences
, showing exactly what the query was parsed into.
I bet you they are both parsed to more or less the same thing, and thus
no real impact on query time.
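Assuming the truncated reply refers to Solr's debug output, you can check this yourself by adding debugQuery=true and comparing the parsedquery entries for the two query forms:

```text
q=field:value1&debugQuery=true
```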
Upayavira
On Wed, Nov 28, 2012, at 10:54 AM, Charra, Johannes wrote:
Hi all,
Is there any reason to prefer a query
field:value1
doc4: book
doc5: java
doc9: book
doc77: book
With that structure, you'll have in your index exactly what Solr
expects, and will be able to take advantage of the inbuilt ranking
capabilities of Lucene and Solr.
Upayavira
On Wed, Nov 28, 2012, at 10:15 AM, Floyd Wu wrote:
Hi there,
If I have
You may want to change your tokenisation anyhow, as a search for
'universidad' will not match your term 'universidad,'
But you are on the right track - to improve suggestions, improve what is
in your index.
Upayavira
On Mon, Nov 26, 2012, at 07:54 PM, Jorge Luis Betancourt Gonzalez wrote:
Hi
your Zookeeper nodes far less often
than you'd need to do it with Solr.
Upayavira
On Mon, Nov 19, 2012, at 09:39 PM, Marcin Rzewucki wrote:
OK, got it. Thanks.
On 19 November 2012 15:00, Mark Miller markrmil...@gmail.com wrote:
Nodes stop accepting updates if they cannot talk to Zookeeper
In fact, you shouldn't need OR:
id:(123 456 789)
will default to OR.
Upayavira
On Mon, Nov 19, 2012, at 10:45 PM, Shawn Heisey wrote:
On 11/19/2012 1:49 PM, Dotan Cohen wrote:
On Mon, Nov 19, 2012 at 10:27 PM, Otis Gospodnetic
otis.gospodne...@gmail.com wrote:
Hi,
How about id1
maintenance interfaces, etc, all stuff that you'd expect from a
servlet container).
Upayavira
On Sat, Nov 17, 2012, at 03:04 PM, Erick Erickson wrote:
1 Well, it loads the local conf directory up to zookeeper so new nodes
can
fetch the configuration and store it locally.
2 No, you have to upload
Er, it can't. What are you seeing that seems wrong?
Upayavira
On Fri, Nov 16, 2012, at 10:13 AM, Reik Schatz wrote:
This might be a silly question but if I search *.* in the admin tool, how
can it show me the full document including all the fields that are set to
stored=false or that don't
for each core on different disks, which can give
some performance benefit.
Just some thoughts.
Upayavira
On Thu, Nov 15, 2012, at 11:04 PM, Buttler, David wrote:
Hi,
I have a question about the optimal way to distribute solr indexes across
a cloud. I have a small number of collections (less than 10
be maxing out memory by having multiple warming
searchers.
Upayavira
On Thu, Nov 15, 2012, at 03:43 PM, richardg wrote:
Here is our setup:
Solr 4.0
Master replicates to three slaves after optimize
We have a problem were every so often after replication the CPU load on
the
Slave servers maxes
.
Upayavira
On Wed, Nov 14, 2012, at 09:17 AM, Peter Kirk wrote:
Hi
Thanks for the reply. It is strange, because when I index to a field
defined like:
<dynamicField name="*_string" type="string" indexed="true" stored="true" />
Then the results
to run this first query on
every commit, meaning your users will always get faster queries.
Upayavira
On Tue, Nov 13, 2012, at 11:16 AM, Aeroox Aeroox wrote:
Thanks Yonik.
Should I consider sharding in this case ( actually I have one big index
with replication) ? Or create 2 index (one
sort=score desc, priority desc
Won't that do it?
Upayavira
On Mon, Oct 15, 2012, at 09:14 AM, Sandip Agarwal wrote:
Hi,
I have many documents indexed into Solr. I am now facing a requirement
where the search results should be returned sorted based on their scores.
In the *case of non
this was mentioned earlier in this
thread.
Upayavira
On Mon, Oct 8, 2012, at 04:33 PM, Radim Kolar wrote:
Do it as it is done in cassandra database. Adding new node and
redistributing data can be done in live system without problem it looks
like this:
every cassandra node has key range
it with wt=xslt&tr=.xsl
That way you wouldn't need to modify Solr at all.
Also, look in Solr 4.0, which has calculated fields. Not sure if there's
the scope to find the document position as a function query though.
Upayavira
On Mon, Oct 8, 2012, at 05:02 AM, deniz wrote:
well basically i was about
for you.
Upayavira
On Mon, Oct 8, 2012, at 01:24 AM, Jorge Luis Betancourt Gonzalez wrote:
Hi!
I was wondering if there are any built-in mechanism that allow me to
store the queries made to a solr server inside the index itself. I know
that the suggester module exist, but as far as I know
the /browse
interface, that might help you also.
Upayavira
On Mon, Oct 8, 2012, at 08:54 AM, deniz wrote:
Could the xslt processor be useful for a json response too? Because I will be using
using
the response not for browser but for some other jars..
-
Zeki ama calismiyor... Calissa yapar... ("Clever, but it doesn't work... it would, if it ran.")
the lines of shard3:{}, then upload it again, what would happen
then?
My theory is that the next host you start up would become the first node
of shard3. Worth a try (unless someone more knowledgeable tells us
otherwise!)
Upayavira
On Mon, Oct 8, 2012, at 01:35 AM, Radim Kolar wrote:
i am reading
the customer facing ones back,
they would just pull their indexes from your remote replicas, and you'd
be good to go once more.
Upayavira
On Thu, Sep 20, 2012, at 10:30 PM, jimtronic wrote:
I'm thinking about catastrophic failure and recovery. If, for some
reason,
the cluster should go down
Solr places constraints upon what you can do with your lucene index
(e.g. You must conform to a schema). If your Lucene index cannot be
mapped to a schema, then it cannot be used within Solr.
Upayavira
On Tue, Jul 24, 2012, at 11:05 PM, spredd1208 wrote:
Is there a best practice to copy
Skip the asterisk and analyse your search terms as an ngram, maybe an
edge-ngram, and then it'll match.
You'd be querying for:
A
AB
AB-
AB-C
AB-CD
AB-CD-
etc...
Any of those terms would match your terms.
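A field type along these lines would produce those index terms (a sketch; the type name and gram sizes are illustrative):

```xml
<fieldType name="text_edge" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.EdgeNGramFilterFactory" minGramSize="1" maxGramSize="20"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

Indexing "AB-CD" through this yields A, AB, AB-, AB-C, AB-CD, so a plain prefix query matches without any wildcard.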
Upayavira
On Fri, Jun 29, 2012, at 06:35 PM, Kissue Kissue wrote:
Hi,
I Want to know
never change, they're
only ever deleted and new ones added. Thus, using hard links to clone an
index works well.
Upayavira
On Wed, Jun 27, 2012, at 02:30 PM, garlandkr wrote:
How can I get a snapshot of the index in SOLR 3.x?
I am currently taking EBS (Amazon) snapshots of the volume where
How many numbers? 0-9? Or every number under the sun?
You could achieve a limited number by using synonyms, 0 is a synonym for
nought and zero, etc.
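The synonyms.txt entries would look something like this (a sketch; it only works for a bounded set of numbers you enumerate up front):

```text
0,zero,nought
1,one
2,two
```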
Upayavira
On Wed, Jun 27, 2012, at 05:22 PM, Alireza Salimi wrote:
Hi,
I was wondering if there's a built in solution in Solr so that you can
, filter queries, etc).
Upayavira
When you make queries, you do
On Tue, Jun 26, 2012, at 07:13 AM, subhendu.acha...@rbs.com wrote:
Hi
I am running Solr 3.5 server with a master-slave and repeater-slave
architecture from last one year on linux machines. Recently found out
that my slaves (both
still risk involved in using an unreleased
product, you'll have increased your chances of stability.
Still hoping someone has answers to my original questions...
Upayavira
On Fri, 10 Jun 2011 10:55 +0530, Mohammad Shariq
shariqn...@gmail.com wrote:
I am also planning to move to SolrCloud
a different
master (or can I delegate the decision as to which host/core is its
master to zookeeper?)
Thanks for any pointers.
Upayavira
---
Enterprise Search Consultant at Sourcesense UK,
Making Sense of Open Source
don't know if this is what you mean: you can add 'score' to the fl field
list, and it will show you the score for each item.
Upayavira
On Thu, 02 Jun 2011 11:30 -0700, arian487 akarb...@tagged.com wrote:
Basically I don't want the hits and the scores at the same time. I want
to
get a list
servers
without a lot of manual labor.
I'm likely to try playing with moving cores between hosts soon. In
theory it shouldn't be hard. We'll see what the practice is like!
Upayavira
---
Enterprise Search Consultant at Sourcesense UK,
Making Sense of Open Source
.
Yep, I'm expecting it to require some changes to both the
CoreAdminHandler and the ReplicationHandler.
Probably the ReplicationHandler would need a 'one-off' replication
command. And some way to delete the core when it has been transferred.
Upayavira
On Wed, Jun 1, 2011 at 4:14 AM, Upayavira u
On Wed, 01 Jun 2011 11:47 -0400, Jonathan Rochkind rochk...@jhu.edu
wrote:
On 6/1/2011 11:26 AM, Upayavira wrote:
Probably the ReplicationHandler would need a 'one-off' replication
command...
It's got one already, if you mean a command you can issue to a slave to
tell it to pull
your firewall, and your search functionality is available
outside.
Upayavira
On Mon, 09 May 2011 14:57 -0400, Brian Lamb
brian.l...@journalexperts.com wrote:
Hi all,
Is it possible to set up solr so that it will only execute dataimport
commands if they come from localhost?
Right now, my
This is not Solr crashing, per se, it is your JVM. I personally haven't
generally had much success debugging these kinds of failure - see
whether it happens again, and if it does, try updating your
JVM/switching to another/etc.
Anyone have better advice?
Upayavira
On Mon, 04 Apr 2011 11:59
package based upon Solr) does offer
security features.
Upayavira
---
Enterprise Search Consultant at Sourcesense UK,
Making Sense of Open Source
What query are you doing?
Try q=*:*
Also, what does /solr/admin/stats.jsp report for number of docs?
Upayavira
On Mon, 28 Mar 2011 04:28 -0700, Merlin Morgenstern
merli...@fastmail.fm wrote:
Hi there,
I am new to solr and have just installed it on a suse box with mysql
backend
There's options in solr.xml that point to lib dirs. Make sure you get
them right.
Upayavira
On Thu, 24 Mar 2011 23:28 +0100, Markus Jelsma
markus.jel...@openindex.io wrote:
I believe it's example/solr/lib where it looks for shared libs in
multicore.
But, each core can have its own lib dir