Hi All,
How can I schedule multiple Solr dataimport handlers from PHP?
I have multiple dataimport handlers in solrconfig.xml.
I want to write a PHP script that will run the imports one by one by
requesting URLs like the one below...
http://localhost:8080/solr/core_sql/dataimport1?command=full-import
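Not a PHP answer, but here is a sketch in Python of the trigger-and-poll flow (core and handler names are taken from your URL; everything else is an assumption). DIH's full-import command returns immediately, so the script polls ?command=status and waits for the handler to report idle before starting the next import. The same logic ports directly to PHP with curl.

```python
import time
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

BASE = "http://localhost:8080/solr/core_sql"  # assumption: your core URL

def handler_url(handler, params):
    """Build a URL for one dataimport handler, e.g. /dataimport1?command=full-import."""
    return "%s/%s?%s" % (BASE, handler, urllib.parse.urlencode(params))

def is_idle(status_xml):
    """True if a DIH status response reports the handler as idle."""
    root = ET.fromstring(status_xml)
    for node in root.iter("str"):
        if node.get("name") == "status":
            return node.text == "idle"
    return False

def run_imports(handlers, poll_seconds=5):
    """Kick off full-import on each handler in turn, waiting for each to finish."""
    for h in handlers:
        urllib.request.urlopen(handler_url(h, {"command": "full-import"}))
        while True:
            with urllib.request.urlopen(handler_url(h, {"command": "status"})) as resp:
                if is_idle(resp.read()):
                    break
            time.sleep(poll_seconds)

# run_imports(["dataimport1", "dataimport2"])
```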
Hi Mikhail,
Thanks for the reply.
I think coord works at the document level, I was thinking of having
something that worked at a field level, against a 'principle/primary'
field.
I'm using edismax with tie=1 (a.k.a. Disjunction Sum) and several fields,
but docs with greater query overlap on the
I'm afraid I'm not completely clear about your scenario. Let me say how
I understand what you're saying, and what I've done in the past.
Firstly, I take it you are using Solr 3.x (from your reference to a
'shards' parameter).
Secondly, you refer to a 'stripe' as one set of nodes, one for each
I'm not really getting your point. If you are saying that tf/idf dominates
coord, I think it's possible to eliminate them with a custom similarity.
On Wed, Jan 30, 2013 at 2:06 PM, Daniel Rosher rosh...@gmail.com wrote:
Hi Mikhail,
Thanks for the reply.
I think coord works at the document level, I
I´ve noticed checking Cloud admin UI that sometimes one of the nodes
appears with no Cloud information (even reloading cluster state .json).
However, a while later the whole Cloud status information appears again. It
looks like it disconnects and re-connects itself.
Quite strange, guys...
If you are in the San. Fran. area next Wednesday, Feb. 06, LucidWorks and I
will be hosting a Lucene/Solr hack night. To reserve a spot or learn more, see
http://www.meetup.com/SFBay-Lucene-Solr-Meetup/
Bring your laptop, your code, etc. and we'll hack on Lucene/Solr for a few
hours.
Hello,
I would like to start Solr with the following configuration:
Replication between master and slave configured but not enabled.
Regards
Hello,
I'm testing Solr 4.1, but I've run into some problems with
DataImportHandler's new propertyWriter tag.
I'm trying to use variable expansion in the `filename` field when using
SimplePropertiesWriter.
Here are the relevant parts of my configuration:
conf/solrconfig.xml
Hi,
There are a lot of posts which talk about hardening the /admin handler with
user credentials etc.
On the other hand, the replication handler wouldn't work if /admin/cores is
also hardened.
Considering this fact, how could I allow secure external access to the admin
interface AND allow proper
Hi, we will be using Solr on development, test and prod platforms. Is
there a way to dynamically create the datasource so that the URL,
password and user id are passed in, or can I point it to a properties file
that has this info? Thanks
Hi
I have developed a function that must communicate with a webservice, and
this function must execute after each commit.
My doubt:
Is it possible to get the records that have been updated in the Solr index?
My function must send information about added, updated and deleted records
from the Solr index to
Hi
I am using the 4.1 release and I see a problem when I set the response type to JSON
in the UI.
I am using Safari 6.0.2 and I see a SyntaxError: JSON Parse error:
Unrecognized token ''.
app.js line 465. When I debug more, I see the response is still coming in XML
format.
Is anyone else
The stack is
format_json -- app.js (465)
json -- query.js (59)
complete - query.js (77)
fire -- require.js (3099)
fireWith -- require.js (3217)
done -- require.js (9469)
callback -- require.js (10235)
./zahoor
On 30-Jan-2013, at 6:43 PM, J Mohamed Zahoor zah...@indix.com wrote:
Hi
Iam
Before worrying about anything else, try doing a full cache clean. My
(Chrome) browser was caching Solr 4.0 resources for an unreasonably long
period of time until I completely disabled its cache (in dev tools) and
tried a full reload.
Or try a browser you did not use before.
Regards,
Alex.
Upayavira,
Thank you for your response. I'm sorry my post is perhaps not clear... I am
relatively new to Solr and I'm not sure I'm using the correct nomenclature.
We did encounter the issue of one shard in the stripe going down while all other
shards continued to receive requests... and return
Hi Alex,
Cleared cache - problem persists.
Disabled cache - problem persists.
This was in Safari though.
./zahoor
On 30-Jan-2013, at 6:55 PM, Alexandre Rafalovitch arafa...@gmail.com wrote:
Before worrying about anything else, try doing a full cache clean. My
(Chrome) browser was caching
Stored fields are now compressed in 4.1. There are other efficiencies too
in 4.0 that will also result in smaller indexes, but the compressed
stored fields are the most significant.
Upayavira
On Wed, Jan 30, 2013, at 01:59 PM, anarchos78 wrote:
Hello,
I am using Solr 3.6.1 and I am very
I haven't got anything to back this up, but I'd say there's no issue
pointing your load balancer to all your nodes. When you do a distributed
query, the work required of the distributed part is relatively small -
it pushes the request to all the shard nodes, then does the job of
merging the
There are probably any number of changes between 3.x and 4.x to account for
query differences. This includes bug fixes and in some cases new bugs, in
areas such as the query parsers and various filters. The first step is to
isolate a couple of examples of both false positive queries and false
On 1/30/2013 7:28 AM, Upayavira wrote:
Stored fields are now compressed in 4.1. There's other efficiencies too
in 4.0 that will also result in smaller indexes, but the compressed
stored fields is the most significant.
The compressed stored fields explains your smaller index. As to why you
Hi All,
I'm facing an issue with relevancy calculation by the dismax query parser.
The boost factor applied does not work as expected in certain cases when
the keyword is generic, and by generic I mean the keyword appears
many times in the document as well as in the index.
I have parser
On 1/30/2013 6:45 AM, Lee, Peter wrote:
Upayavira,
Thank you for your response. I'm sorry my post is perhaps not clear...I am
relatively new to solr and I'm not sure I'm using the correct nomenclature.
We did encounter the issue of one shard in the stripe going down and all other
shards
This is a bug. Can you paste what you've said here into a new JIRA issue?
https://issues.apache.org/jira/browse/SOLR
James Dyer
Ingram Content Group
(615) 213-4311
-Original Message-
From: Jonas Birgander [mailto:jonas.birgan...@prisjakt.nu]
Sent: Wednesday, January 30, 2013 4:54 AM
Start by expressing the specific semantics of those queries in strict
boolean form. I mean, what exactly do you mean by in, and location1,
location 2, and location1, loc2 and loc3? Is the latter an AND or an OR?
Or at least fully express those two queries, unambiguously in plain English.
From a performance point of view, I can't imagine it mattering. In our setup,
we have a dedicated Solr server that is not a shard that takes incoming
requests (we call it the coordinator). This server is very lightweight and
practically has no load at all.
My gut feeling is that having a
Hi Elizabeth,
I haven't tried this, but given this entry:
http://wiki.apache.org/solr/DataImportHandler#Adding_datasource_in_solrconfig.xml
You should be able to parameterize the arguments in solrconfig.xml
with environment variables and then set them in solr.xml or at runtime
using command
Sorry, email sent too quickly. Here's the second url:
http://wiki.apache.org/solr/SolrConfigXml?highlight=%28solrconfig%5C.xml%29#System_property_substitution
Michael Della Bitta
Appinions
18 East 41st Street, 2nd Floor
New York, NY 10017-6271
Hi,
I am looking for a way to get the top terms for a query result.
Faceting does not work since counts are measured as documents containing a term
and not as the overall count of a term in all found documents:
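One possible workaround (a sketch, not a built-in Solr feature): request term vectors for the matching documents, e.g. via the TermVectorComponent, and aggregate the per-document term frequencies on the client side. Assuming you have already parsed each document's term-to-tf map out of the response, the aggregation is just:

```python
from collections import Counter

def top_terms(per_doc_tfs, n=10):
    """Sum per-document term frequencies and return the n most frequent terms.

    per_doc_tfs: list of {term: tf} dicts, one per matching document.
    """
    totals = Counter()
    for tfs in per_doc_tfs:
        totals.update(tfs)
    return totals.most_common(n)

# Example: faceting would count "solr" as 2 (two documents contain it),
# but the overall term count across the result set is 5.
docs = [{"solr": 4, "lucene": 1}, {"solr": 1, "search": 2}]
# top_terms(docs, 2) -> [("solr", 5), ("search", 2)]
```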
Hi Sandeep,
Quick answer: the boost that you define in your requestHandler is not the
only thing taken into account to calculate the score of each document. There
are other factors that contribute to the score calculation. You can take a look
at http://wiki.apache.org/solr/SolrRelevancyFAQ. Also, you can
Thank you for the reply, issue created at
https://issues.apache.org/jira/browse/SOLR-4386.
Regards,
Jonas Birgander
On 2013-01-30 16:26, Dyer, James wrote:
This is a bug. Can you paste what you've said here into a new JIRA issue?
https://issues.apache.org/jira/browse/SOLR
James Dyer
Thanks Felipe, yes I have seen that and my requirement somewhere falls for
On 30 January 2013 15:53, Felipe Lahti fla...@thoughtworks.com wrote:
Hi Sandeep,
Quick answer is that not only the boost that you define in your
requestHandler is taken to calculate the score of each document. There
(Sorry for the incomplete reply in my previous mail; I didn't know Ctrl+F sends
an email in Gmail.. ;-))
Thanks Felipe, yes I have seen that and my requirement falls for
How can I make exact-case matches score higher?
Example: a query of Penguin should score documents containing Penguin
higher than
This was discussed last week, with two different solutions:
http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201301.mbox/browser
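One of the usual solutions (a hedged sketch; the field and type names here are made up, not from the archived thread): index the text twice, once case-sensitively and once lowercased, then boost the case-preserving field in edismax so exact-case matches score higher:

```xml
<!-- schema.xml: a copy of the text field whose analyzer preserves case -->
<field name="text" type="text_general" indexed="true" stored="true"/>
<field name="text_exact" type="text_exact_case" indexed="true" stored="false"/>
<copyField source="text" dest="text_exact"/>

<!-- request handler defaults: weight the case-preserving field higher -->
<str name="defType">edismax</str>
<str name="qf">text_exact^2 text</str>
```

A query for Penguin then matches both fields for documents containing Penguin, but only the lowercased field for documents containing penguin, so the exact-case documents score higher.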
In general, you can set a Java property, like -Ddbpass=fred, then use it in
the config files as ${dbpass}.
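For the DIH datasource case, that looks roughly like the following, per the wiki's "Adding datasource in solrconfig.xml" pattern (property names and driver are illustrative assumptions):

```xml
<!-- solrconfig.xml: values filled in from -D system properties at startup,
     e.g. -Ddburl=jdbc:mysql://dev-db/solr -Ddbuser=solr -Ddbpass=fred -->
<requestHandler name="/dataimport"
                class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">data-config.xml</str>
    <lst name="datasource">
      <str name="driver">com.mysql.jdbc.Driver</str>
      <str name="url">${dburl}</str>
      <str name="user">${dbuser}</str>
      <str name="password">${dbpass}</str>
    </lst>
  </lst>
</requestHandler>
```

Each environment (dev, test, prod) then passes its own -D values at startup and the config files stay identical.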
wunder
On Jan 30, 2013, at 3:37 AM,
Aren't you concerned about having a single point of failure with this setup?
On Wed, Jan 30, 2013 at 10:38 AM, Michael Ryan mr...@moreover.com wrote:
From a performance point of view, I can't imagine it mattering. In our
setup, we have a dedicated Solr server that is not a shard that takes
Hi Jamel,
You can start solr slaves with them pointed at a master and then turn off
replication in the admin replication page.
Hope that helps,
-Robi
Robert (Robi) Petersen
Senior Software Engineer
Search Department
-Original Message-
From: Jamel ESSOUSSI
As stated by Robi, you can do it through the admin UI:
- disable replication on the master through the admin UI, or
- disable polling on the slave through the admin UI. Disabling polling on
the slaves is very handy if you are doing stuff on the master that requires
a master restart.
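The same toggles are also exposed over HTTP by the ReplicationHandler, which is handy for scripting; a small sketch of building the command URLs (host and core names are made up):

```python
import urllib.parse

def replication_cmd(solr_core_url, command):
    """Build a ReplicationHandler command URL, e.g. command=disablereplication."""
    return "%s/replication?%s" % (solr_core_url.rstrip("/"),
                                  urllib.parse.urlencode({"command": command}))

MASTER = "http://master:8983/solr/core1"  # assumption: your master core URL
SLAVE = "http://slave:8983/solr/core1"    # assumption: your slave core URL

# Stop the master from serving index versions to slaves:
disable_master = replication_cmd(MASTER, "disablereplication")
# Or stop a slave from polling the master (useful around master restarts):
disable_poll = replication_cmd(SLAVE, "disablepoll")
# Re-enable later with "enablereplication" / "enablepoll".
```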
Thanks.
The admin user interface and admin/cores are two very different things - they
just happen to share admin in the URL.
It doesn't make any sense to secure admin/cores unless you are also going to
secure all the other Solr APIs.
- Mark
On Jan 30, 2013, at 5:55 AM, AlexeyK lex.kudi...@gmail.com
Is it possible to issue a command through the collections API that will
assign a new config set (already stored in zookeeper) to an existing
collection?
Related - because such changes would require a reload, is there a RELOAD
action on the collection API that finds all the cores for that
Hi,
We currently have Solr 3.5 and it is working well. With the features and fixes
available in 4.1, we decided to upgrade.
We started some tests with Solr 4.1 on JBoss 7.1. Everything looked good at first:
we ran indexing and executed some queries. We restarted the servers before
performing a load test and
Do you have any customized settings for useColdSearcher, warming queries,
or customized settings for the old/deprecated indexDefaults or
mainIndex? Try using the settings from the latest solrconfig.xml and then
customize from there. Or at least see how they are different.
-- Jack Krupansky
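For reference, the settings Jack mentions look roughly like this in a 4.x solrconfig.xml (a sketch; the values here are illustrative, not recommendations):

```xml
<!-- solrconfig.xml -->
<query>
  <useColdSearcher>false</useColdSearcher>
  <maxWarmingSearchers>2</maxWarmingSearchers>
  <!-- warming queries run against each new searcher before it serves traffic -->
  <listener event="newSearcher" class="solr.QuerySenderListener">
    <arr name="queries">
      <lst><str name="q">*:*</str><str name="rows">10</str></lst>
    </arr>
  </listener>
</query>
```

Comparing these against your carried-over 3.5 settings is a quick way to spot customizations that no longer apply in 4.x.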
On Jan 30, 2013, at 12:14 PM, Shawn Heisey s...@elyograg.org wrote:
Is it possible to issue a command through the collections API that will
assign a new config set (already stored in zookeeper) to an existing
collection?
No, not currently. We are talking about such things here: SOLR-4193.
Let me see if I understood your problem:
From your first e-mail, I think you are worried about the order of documents
returned from Solr. Is that correct? If yes, as I said before, it's not
only the boosting that influences the order of returned documents. There's
term frequency, IDF (inverse document
On 1/30/2013 10:31 AM, Jack Krupansky wrote:
Do you have any customized settings for useColdSearcher, warming
queries, or customized settings for the old/deprecated indexDefaults
or mainIndex? Try using the settings from the latest solrconfig.xml
and then customize from there. Or at least see
thanks Jack,
I did take the latest solrconfig.xml file.
The only change I made to the file is for using MMapDirectory:
<directoryFactory name="DirectoryFactory"
                  class="${solr.directoryFactory:solr.MMapDirectoryFactory}"/>
Apart from that I increased the cache size for
I have pasted it below and it is slightly different from the dismax
configuration I have mentioned above, as I was playing with all sorts of
boost values; however it looks more like below:
<str name="c208c2ca-4270-27b8-e040-a8c00409063a">
2675.7844 = (MATCH) sum of: 2675.7844 = (MATCH) max plus 0.01
Thanks for the additional information Shawn,
I am testing 4.1 on a single machine, single core, so no cloud. I did change
NRTCachingDirectoryFactory to MMapDirectoryFactory, and after indexing all
the documents we do a hard commit explicitly from our publisher client.
I was able to run queries to verify my
Shawn,
I believe your point is valid... if you see below, my tlog.* file size is
huge. But shouldn't that be cleared if I am not using soft commit and do an
explicit hard commit?
After deleting this I was able to get my server up. Thanks for the
information/help. Also please let me know how to
On 1/30/2013 11:21 AM, adityab wrote:
Shawn,
i believe your point is valid ... if you see below my tlog.* file size is
huge. Bu shouldn't that be cleared if i am not using soft commit and do an
explicit hard commit?
After deleting this i was able to get my server up. Thanks for the
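On keeping the tlog under control: a commonly recommended 4.x setup (a sketch; the interval value is illustrative) is to enable hard autoCommit with openSearcher=false, so the transaction log is truncated regularly without affecting which searcher serves queries:

```xml
<!-- solrconfig.xml -->
<updateHandler class="solr.DirectUpdateHandler2">
  <updateLog>
    <str name="dir">${solr.ulog.dir:}</str>
  </updateLog>
  <autoCommit>
    <maxTime>15000</maxTime>           <!-- hard commit every 15 seconds -->
    <openSearcher>false</openSearcher> <!-- flush segments, don't open a searcher -->
  </autoCommit>
</updateHandler>
```

Without any hard commits between explicit ones, the tlog keeps growing and is replayed in full at startup, which matches the "server stops at Opening Searcher" symptom described here.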
thanks Shawn,
We use a Master-Slave architecture in Prod and are planning to continue with it
even with 4.1.
Our indexing usually happens on the Master, and we add about 10K docs every 2hrs
and then perform a commit.
Our full re-index happens only when we have a schema change. So we don't use auto
commit.
Is there a way to
On 1/30/2013 11:48 AM, adityab wrote:
thanks Shawn,
We use Master-Slave architecture in Prod and planning to continue even with
4.1.
Our indexing usually happens on Master. and we about 10K docs every 2hrs and
then perform commit.
Our full re-index is only when we have schema change. So we dont
thanks Shawn,
I will try both approaches.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Server-stops-at-Opening-Searcher-in-4-1-tp4037458p4037499.html
Sent from the Solr - User mailing list archive at Nabble.com.
: I think coord works at the document level, I was thinking of having
: something that worked at a field level, against a 'principle/primary'
: field.
I'm not sure what you mean by works at the document level ... coord is
used by the BooleanQuery scoring mechanism to define how scores should be
If you compare the first and last document scores, you will see that the
last one matches more fields than the first one. So you may be thinking: why?
The first doc only matches the contributions field while the last matches a
bunch of fields, so if you want it to behave more like (str
Hi Erik.
You may want to have a look at:
http://www.cominvent.com/2012/01/25/super-flexible-autocomplete-with-solr/
Arcadius.
On 30 January 2013 17:09, Erik Holstad erikhols...@gmail.com wrote:
Hey!
Been playing around with Suggester and things are working just fine,
but I have a use
Thanks Arcadius!
Looks very promising and will have a look at it.
On Wed, Jan 30, 2013 at 3:00 PM, Arcadius Ahouansou arcad...@menelic.comwrote:
Hi Erik.
You way want to have a look at:
http://www.cominvent.com/2012/01/25/super-flexible-autocomplete-with-solr/
Arcadius.
On 30 January
Please feel free to just edit the Wiki yourself, all you have to do is
create a login
On Wed, Jan 23, 2013 at 9:04 AM, eShard zim...@yahoo.com wrote:
Thanks,
That worked.
So the documentation needs to be fixed in a few places (the solr wiki and
the default solrconfig.xml in Solr 4.0
Pretty much. The queryResultCache is pretty inexpensive. But be a bit
careful; it's tempting to increase it greatly, but that only buys you
performance if your users actually ask for subsequent pages
reasonably often.
Best
Erick
On Tue, Jan 29, 2013 at 1:38 PM, Isaac Hebsh
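For reference, the cache being discussed is configured in solrconfig.xml along these lines (sizes here are illustrative defaults, not tuning advice):

```xml
<!-- solrconfig.xml -->
<query>
  <queryResultCache class="solr.LRUCache"
                    size="512"
                    initialSize="512"
                    autowarmCount="0"/>
  <!-- how many ordered doc ids to cache per query entry; paging within
       this window is served from the cache -->
  <queryResultWindowSize>20</queryResultWindowSize>
</query>
```

Raising queryResultWindowSize rather than the cache size is often the cheaper lever when users do page forward, since each cached entry then covers several pages.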
On 1/30/2013 6:04 PM, Petersen, Robert wrote:
Hi
Just a quick question: for a single-valued int field in Solr 3.6.1, how much
more space is used if the field is stored vs. indexed and not stored?
Here is the index file format reference for the two files that make up
stored fields in the 3.6
On 1/30/2013 6:24 PM, Shawn Heisey wrote:
If I had to guess about the extra space required for storing an int
field, I would say it's in the neighborhood of 20 bytes per document,
perhaps less. I am also interested in a definitive answer.
The answer is very likely less than 20 bytes per doc.
As long as Core Admin is accessible via HTTP and allows manipulating Solr
cores, it should be secured, regardless of the configured path. The difference
between securing Admin vs. securing other handlers is that other handlers
are accessed by specific application server(s), and therefore may be
You can of course check suggestions, but then you should remove
<str name="spellcheck.dictionary">wordbreak</str>
from your handler, because its purpose is to find cases when the user types
spaces wrongly (e.g., solrrocks, sol rrocks, so lr)