Replying to my own question: ids is used in the ResponseBuilder's internal
mapping structure, which is used for sorting and reordering the document list
before it is shown to the end user. Simply put, it stores unique field values
from documents which are to be shown to the user, and each mapping of these
Hi,
Please read the following page about solr and lots of cores:
http://wiki.apache.org/solr/LotsOfCores.
Cheers,
Maj
On 5 December 2012 07:01, Otis Gospodnetic otis.gospodne...@gmail.com wrote:
Hi,
It depends on your resources. The other day somebody mentioned having
5 cores.
Otis
http://lucene.472066.n3.nabble.com/file/n4024423/5-12-2012_9-26-09_PM.jpg
Hi, trying to play with polygon searches and have noticed this issue. It
seems when you cross a line you invalidate the polygon (sorry, I have no idea
about the technical term) and Solr cannot handle it.
As far as my
Hey Guys,
Thanks a lot for your input!
But my interpretation of the next start time is that it was dependent on
the value of NOW when the query was executed (i.e. some of the indexed
values may be in the past), in which case that approach wouldn't work.
If the query was always a NOW query,
Hi,
I have found the solution to my own question. I need to change the qf
parameter in the solrconfig file.
Thanks and regards,
Romita
From: Romita Saha romita.s...@sg.panasonic.com
To: solr-user@lucene.apache.org,
Date: 12/05/2012 06:20 PM
Subject:Change searching field
Hi everyone,
I've got a problem where I have docs with a source_id field, and there can be
many docs from each source. Searches will typically return docs from many
sources. I want to restrict the number of docs from each source in results, so
there will be no more than (say) 3 docs from
@ Walter, the daily optimization was introduced as we saw a decrease in the
performance for searches that happen during the peak hours - when loads of
updates take place on the index. The load testing proved slightly
successful on optimized indexes. As a matter of fact, the merge factor was
I have followed this instruction:
http://wiki.apache.org/lucene-java/HowtoConfigureIntelliJ
1. Downloaded lucene_solr_4_0 branch
2. ant idea
3. opened this project in IDEA
4. clicked Build/Make project menu
I got message: Compilation completed with 111 errors and 9 warnings
I was wondering, with my current setup I have an ID field which is a number.
If I wanted to then change it so my ID field was actually a mix of numbers
and letters (to do with the backend system), would this cause any sort of
problem?
I would never do any kind of sorting by ID on my search page,
That would be fine.
Otis
--
SOLR Performance Monitoring - http://sematext.com/spm
On Dec 5, 2012 7:14 AM, Spadez james_will...@hotmail.com wrote:
I was wondering, with my current setup I have an ID field which is a
number.
If I wanted to then change it so my ID field was actually a mix of
Hi,
We're suddenly seeing a shard called `properties` in the cloud graph page when
testing today's trunk with a clean Zookeeper data directory. Any idea where it
comes from? We have not changed the solr.xml on any node.
Thanks
IntelliJ IDEA is not so intelligent with Solr: to fix this problem I've
dragged these modules into IDEA's artifact (the parent module is wrong):
analysis-common
analysis-extras
analysis-uima
clustering
codecs
codecs-resources
dataimporthandler
dataimporthandler-extras
lucene-core
Update:
I did a full restart of the solr cloud setup, stopped all the instances,
cleared down zookeeper and started them up individually. I then removed the
index from one of the replicas, restarted solr and it replicated ok. So I'm
wondering whether this is something that happens over a period
No, your Solr unique key field should ALWAYS be of type string. In some
cases you can get away with other types, but eventually you may run into
some Solr feature which requires that the unique key field be a string, so
it is better to avoid the potential problems from the start. You can put
Hi James,
Just to let you know, I've just completed the PoC and it worked great!
Thanks.
What I still find difficult is how to implement a 'guided' navigation
with Solr. That is one of the strengths of Endeca, and with Solr you have to
create this yourself. What are your thoughts on that and
Is there a way to turn on support for Unicode characters in version 1.4.1? The
strange thing is that my coworker and I are supposed to have the same
configuration, yet on her machine, there seems to be Unicode support enabled.
For example, if I use the SOLR admin to do a search for the a
Are there any positive experiences with the suggester on Solr 4.0 with cloud?
I downloaded solr4, followed http://wiki.apache.org/solr/Suggester but I
don't get suggestion items
Any ideas?
-
Complicare è facile, semplificare é difficile.
Complicated is easy, simple is hard.
quote:
On 12/5/2012 7:31 AM, Nguyen, Vincent (CDC/OD/OADS) (CTR) wrote:
Is there a way to turn on support for Unicode characters in version 1.4.1? The
strange thing is that my coworker and I are supposed to have the same
configuration, yet on her machine, there seems to be Unicode support enabled.
Hi Artyom,
The lucene_solr_4_0 branch IntelliJ setup works for me.
Sounds like Ivy isn't succeeding in downloading dependencies.
'ant idea' calls 'ant resolve', which uses Apache Ivy to download binary
dependencies.
Can you post output from running 'ant resolve' at the top level?
Steve
On
Hi Artyom,
I don't use IntelliJ artifacts - I just edit/compile/test.
I can include this stuff in the IntelliJ configuration if you'll help me. Can
you share screenshots of what you're talking about, and/or IntelliJ config
files?
Steve
On Dec 5, 2012, at 8:24 AM, Artyom ice...@mail.ru
Nobody?
I just found nested queries can be used
(http://mullingmethodology.blogspot.cz/2012/03/adventures-with-solr-join.html).
But I don't like this solution, it is too complicated and not very readable
...
So is there any way to use JOIN with SolrJ? Any ideas? :)
Hi All,
Is the server hosting nightly builds of Solr down?
https://builds.apache.org/job/Solr-Artifacts-4.x/lastSuccessfulBuild/artifact/solr/package/
If anyone knows an alternate link to download the nightly build please let
me know.
--Shreejay
Maarten,
Glad to hear that your DIH experiment worked well for you.
To implement something like Endeca's guided navigation, see
http://wiki.apache.org/solr/SolrFacetingOverview . If you need to implement
multi-level faceting, see http://wiki.apache.org/solr/HierarchicalFaceting
(but
Thanks a lot Shawn! You were right, I didn't think to look into Tomcat. I
enabled UTF-8 in Tomcat and everything works now. Thanks
Vincent Vu Nguyen
-Original Message-
From: Shawn Heisey [mailto:s...@elyograg.org]
Sent: Wednesday, December 05, 2012 10:12 AM
To:
This is really not the use-case faceting is designed for, I don't think
there's really any good way to speed this case up. What is the higher-level
issue you're trying to solve? Perhaps there's a better way to do it.
I'm not sure why you think altering the cache settings would help, they're
In addition to Mark's comment, be sure you aren't starving the memory for
the OS by over-allocating to the JVM.
FWIW,
Erick
On Tue, Dec 4, 2012 at 2:25 AM, John Nielsen j...@mcb.dk wrote:
Success!
I tried adding -XX:+UseConcMarkSweepGC to java to make it GC earlier. We
haven't seen any
Probably what you're seeing is that as segments are merged, deleted
documents are purged.
As to how the deleted docs got there in the first place, were you using an
index that had been populated before?
Best
Erick
On Tue, Dec 4, 2012 at 5:06 PM, Shawn Heisey elyog...@elyograg.org wrote:
On
Hi All,
I have a Solrcloud instance with 6 million documents. We are using the
ngroups feature in a few places and I am aware that this is still a JIRA
issue with work in progress (and some patches).
Apart from using the patch here
https://issues.apache.org/jira/browse/SOLR-2592 , and
Hi,
I am looking to import entries to my SOLR server by using the DIH,
connecting to an external PostgreSQL server using the JDBC driver. I will
be importing about 50,000 entries each time.
Is connecting to an external SQL server for my data unreliable or risky, or
is it instead perfectly
So I'm creating a request handler plugin where I want to fetch some values
from the schema. I make it SchemaAware but the inform method is never
called. I assume that there is some way of registering the instance as
aware (and I have seen and used this before, although that information
escapes me
Sorry, not that I know of. This is a disturbing design issue. I mean, I
understand the value of the feature, but resolving it is not currently
feasible, other than by extremely slow brute force - but, maybe that
option should be pursued anyway for situations where zippy performance is
not
On 12/5/2012 9:19 AM, Erick Erickson wrote:
Probably what you're seeing is that as segments are merged, deleted
documents are purged.
As to how the deleted docs got there in the first place, were you using an
index that had been populated before?
After sleeping on it, I also realized that it
Solr runs in a container and the container controls the port. So, you need
to tell the container which port to use.
For example,
java -Djetty.port=8180 -jar start.jar
-- Jack Krupansky
-Original Message-
From: Bill Au
Sent: Wednesday, December 05, 2012 10:30 AM
To:
Hi Roman,
From the Solr web service or admin interface you receive the correct result.
This is the correct join query syntax:
{!join from=parent to=id}(name:John AND age:17)
You can add this directly via the query.setQuery() method.
-
Complicare è facile, semplificare é difficile.
Complicated
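To make the local-param syntax above concrete, here is a minimal sketch of building that join query and URL-encoding it for a plain HTTP request. The host and collection name are assumptions, not from the thread:

```python
from urllib.parse import urlencode

def join_query(from_field, to_field, subquery):
    # Build a Solr join query using the {!join} local-param syntax.
    return "{!join from=%s to=%s}%s" % (from_field, to_field, subquery)

q = join_query("parent", "id", "(name:John AND age:17)")

# With SolrJ this string goes straight into query.setQuery(); over plain HTTP
# it must be URL-encoded. Host and collection below are assumptions.
url = ("http://localhost:8983/solr/collection1/select?"
       + urlencode({"q": q, "wt": "json"}))
```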
On 12/05/2012 02:09 AM, Mark Miller wrote:
On Dec 4, 2012, at 4:57 AM, Andre Bois-Crettezandre.b...@kelkoo.com wrote:
* what can we do to help progress on SOLR-3866 ? Maybe use case
scenarios, detailing desired behavior? Constraints on what cores or
collections are allowed to SWAP, ie. same
Sorry to bombard you - final update of the day...
One thing that I have noticed is that we have a lot of connections between
the solr boxes with the connection set to CLOSE_WAIT and they hang around
for ages.
-Original Message-
From: Annette Newton [mailto:annette.new...@servicetick.com]
If you do grouping on source_id, it should be enough to request 3 times
more documents than you need, then reorder and drop the bottom.
Is a 3x overhead acceptable?
On 12/05/2012 12:04 PM, Tom Mortimer wrote:
Hi everyone,
I've got a problem where I have docs with a source_id field, and
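The over-fetch-then-drop idea suggested above can be sketched client-side. Solr's own result grouping (group=true, group.field=source_id, group.limit=3) does this server-side; this sketch only illustrates the reorder-and-drop step, with hypothetical doc dicts:

```python
from collections import defaultdict

def cap_per_source(docs, source_field="source_id", cap=3, rows=10):
    # docs: score-ordered results, over-fetched ~3x the rows actually wanted.
    # Keep at most `cap` docs per source, preserving score order, then trim.
    per_source = defaultdict(int)
    kept = []
    for doc in docs:
        if per_source[doc[source_field]] < cap:
            per_source[doc[source_field]] += 1
            kept.append(doc)
        if len(kept) == rows:
            break
    return kept

# Hypothetical over-fetched results, best score first, two sources:
results = [{"id": i, "source_id": i % 2} for i in range(12)]
top = cap_per_source(results, cap=3, rows=10)
```

With two sources and a cap of 3, only the first six docs survive, so a 3x over-fetch can still come up short of `rows` in pathological cases.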
Not sure, but maybe you are running out of file descriptors?
On each solr instance, look at the dashboard admin page, there is a
bar with File Descriptor Count.
However if this was the case, I would expect to see lots of errors in
the solr logs...
André
On 12/05/2012 06:41 PM, Annette Newton
Hi,
I'm running some join query, let's say it looks as follows: {!join
from=some_id to=another_id}(a_id:55 AND some_type_id:3). When I run it on
single instance of SOLR I got the correct result, but when I'm running it on
the sharded system (2 shards with replica for each shard (total index
Hi Amit/Shanu,
You can create the solr document for only the updated record and index it
to ensure only the updated record gets indexed.
You need not rebuild indexes from scratch for every record update.
Thanks,
Sandeep
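The advice above works because adding a document whose uniqueKey matches an existing one overwrites it. A minimal sketch of building the JSON update payload for just the changed record; the host, collection, and field names are assumptions:

```python
import json

def update_payload(doc):
    # Solr's JSON update body: a list of documents to add. A document whose
    # uniqueKey matches an existing one replaces it, so only the changed
    # record needs to be sent, never the whole index.
    return json.dumps([doc])

payload = update_payload({"id": "doc-42", "title": "Updated title"})
# POST to http://<host>:8983/solr/<collection>/update?commit=true
# with Content-Type: application/json.
```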
Hi Team,
We are looking into using the SIREn plugin with Solr 4.0. I am just
confirming if this plugin works with Solr 4.0.
Also if someone has some good documentation on this, it will be of great
help.
Thanks,
Balaji
Correct, but it sounds like a great feature to request - the ability to
request stats for a function query.
-- Jack Krupansky
-Original Message-
From: Edward Garrett
Sent: Wednesday, December 05, 2012 10:56 AM
To: solr-user@lucene.apache.org
Subject: not possible to apply
: So I'm creating a request handler plugin where I want to fetch some values
: from the schema. I make it SchemaAware but the inform method is never
: called. I assume that there is some way of registering the instance as
: aware (and I have seen and used this before, although that information
:
I am using tomcat. In my tomcat start script I have tried setting system
properties with both
-Djetty.port=8080
and
-DhostPort=8080
but neither changed the host port for SolrCloud. It still uses the default
8983.
Bill
On Wed, Dec 5, 2012 at 12:11 PM, Jack Krupansky
It is set in solr.xml, but solr.xml has a syntax that allows you to set values
by system properties.
By default solr.xml is setup so that the jetty.port system property should set
the hostPort. I'm sure that works in general, so I'm not sure why it's not
working for you.
Can you provide your
Be aware that you still have to setup tomcat to run Solr on the right port -
and you also have to provide the port to Solr on startup. With jetty we do both
with -Djetty.port - with Tomcat you have to setup Tomcat to run on the right
port *and* tell Solr what that port is. By default that means
Looks like it… hopefully it comes back soon.
- Mark
On Dec 5, 2012, at 7:52 AM, shreejay shreej...@gmail.com wrote:
Hi All,
Is the server hosting nightly builds of Solr down?
https://builds.apache.org/job/Solr-Artifacts-4.x/lastSuccessfulBuild/artifact/solr/package/
If anyone knows
See the custom hashing issue - the UI has to be updated to ignore this.
Unfortunately, it seems that clients have to be hard-coded to realize
`properties` is not a shard unless we add another nested layer.
Should be 100% harmless.
- Mark
On Dec 5, 2012, at 5:05 AM, Markus Jelsma
What Solr version - beta, alpha, 4.0 final, 4X or 5X?
- Mark
On Dec 5, 2012, at 4:21 PM, Sudhakar Maddineni maddineni...@gmail.com wrote:
Hi,
We are uploading solr documents to the index in batches using 30 threads
and using ThreadPoolExecutor, LinkedBlockingQueue with max limit set to
using solr version - 4.0 final.
Thx, Sudhakar.
On Wed, Dec 5, 2012 at 5:26 PM, Mark Miller markrmil...@gmail.com wrote:
What Solr version - beta, alpha, 4.0 final, 4X or 5X?
- Mark
On Dec 5, 2012, at 4:21 PM, Sudhakar Maddineni maddineni...@gmail.com
wrote:
Hi,
We are uploading solr
It kind of looks like the URLs SolrCloud is using are not accessible. When you
go to the admin page and the cloud tab, can you access the urls it shows for
each shard? That is, if you click on of the links or copy and paste the address
into a web browser, does it work?
You may have to
: (3) A third possibility I thought of was to add a field for every day of
: the year to each document that contains the next-start date for that
: particular day: next_start_20121212_dt etc. Then I could order by the
: dynamic field. But as only some of my events are recurring and few of those
:
Hey Mark,
Yes, I am able to access all of the nodes under each shard from solrcloud
admin UI.
- *It kind of looks like the urls solrcloud is using are not accessible.
When you go to the admin page and the cloud tab, can you access the urls it
shows for each shard? That is, if you click
The waiting logging had to happen on restart unless it's some kind of bug.
Beyond that, something is off, but I have no clue why - it seems your
clusterstate.json is not up to date at all.
Have you tried restarting the cluster then? Does that help at all?
Do you see any exceptions around
We are using SolrCloud and trying to configure it for testing purposes. We
are seeing that the average query time increases if we have more than
one node in the SolrCloud cluster. We have a single-shard, 12 GB index.
Example: 1 node, average query time ~28 msec, load 140 queries/second. 3
Did you try restarting that node? Have you seen a successful recovery before?
What exact version are you using?
Can you share any related info in the logs for that node?
- Mark
On Dec 5, 2012, at 6:48 PM, Nathaniel Domingo niel.domi...@gmail.com wrote:
Hi,
I'm very new to solr, less than
yes, i tried restarting the node twice already and both times it just got
stuck in recovering. one node also had some problems a few days ago,
after a restart, it eventually moved from recovering to active after an
hour. i'm using solr 4.0.0.
Thanks
On Thu, Dec 6, 2012 at 11:03 AM, Mark
Okay - logs from that node would help a lot then (or just the parts around when
it's trying to recover).
- Mark
On Dec 5, 2012, at 7:11 PM, Nathaniel Domingo niel.domi...@gmail.com wrote:
yes, i tried restarting the node twice already and both times it just got
stuck in recovering. one
Yup, it should say config set, not collections.
A config set is all the config files for a single collection -
schema.xml, solrconfig.xml and related config files. You can name the
set of them so that multiple collections can use the same 'config
set'.
If you don't use a bootstrap option,
Right, solrhome is not required for upconfig, just for the bootstrap cmd.
You can also just upload modified files, but the tool doesn't really
let you do it in a fine grained way. But there are lots of zookeeper
tools you can use to do this if you wanted.
- Mark
On Thu, Nov 22, 2012 at 10:45
attached is a log relevant to the recovery.
Thanks
On Thu, Dec 6, 2012 at 11:23 AM, Mark Miller markrmil...@gmail.com wrote:
Okay - logs from that node would help a lot then (or just the parts around
when it's trying to recover).
- Mark
On Dec 5, 2012, at 7:11 PM, Nathaniel Domingo
I think the list strips most attachments or something - can you try something
like pastebin.com?
Thanks,
mark
On Dec 5, 2012, at 7:46 PM, Nathaniel Domingo niel.domi...@gmail.com wrote:
attached is a log relevant to the recovery.
Thanks
On Thu, Dec 6, 2012 at 11:23 AM, Mark Miller
here's a link to a portion of the log in pastebin.
http://pastebin.com/UDBMDdMv
Thanks
On Thu, Dec 6, 2012 at 11:53 AM, Mark Miller markrmil...@gmail.com wrote:
I think the list strips most attachments or something - can you try
something like pastebin.com?
Thanks,
mark
On Dec 5,
We have a longstanding issue with failed to respond errors in Solr when our
coordinator is querying our Solr shards.
To elaborate further... we're using the built-in distributed capabilities of
Solr 3.6, and using Jetty as our server. Occasionally, we will have a query
fail due to an error
This is just the standard scatter-gather distributed search Solr has been using
since around 1.4.
There is some overhead to that, but generally not much. I've measured it at
around 30-50ms for 100 machines, each with 10 million docs, a few years ago.
So…that doesn't help you much…but FYI…
-
Looks like the connection to ZooKeeper is flapping. So as it tries to recover
it keeps losing the connection to ZooKeeper and then trying again, and I don't
have enough of the log to tell, but that probably just repeats and repeats.
I guess the network is probably not so fast and/or the load
On 5 December 2012 22:12, Spadez james_will...@hotmail.com wrote:
Hi,
I am looking to import entries to my SOLR server by using the DIH,
connecting to an external PostgreSQL server using the JDBC driver. I will
be importing about 50,000 entries each time.
Unless you have a lot of data in
As you add nodes, the average response time of the slowest node will likely
increase. For example, consider an extreme case where you have something like 1
million nodes - you're practically guaranteed that one of them is going to be
doing something like a stop-the-world garbage collection. So
Yep, after restarting, the cluster came back to normal state. We will run a
couple more tests and see if we can reproduce this issue.
Btw, I am attaching the server logs before that 'INFO: *Waiting until we
see more replicas*' message. From the logs, we can see that the leader election
process started
Thanks Sandeep,
How can it be done when using a database, since the database has all the
records: old, new and updated.
On Wed, Dec 5, 2012 at 11:47 PM, Sandeep Mestry sanmes...@gmail.com wrote:
Hi Amit/Shanu,
You can create the solr document for only the updated record and index it
to ensure only
On 6 December 2012 11:13, Amit Jha shanuu@gmail.com wrote:
Thanks Sandeep,
How can it done when using a database because database has all the records
old, new and updated.
You need to do a delta-import:
http://wiki.apache.org/solr/DataImportHandler#Using_delta-import_command
Regards,
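The delta-import linked above is triggered through the DIH request handler with a `command` parameter. A small sketch of building that request URL; the handler path and host are assumptions (they depend on your solrconfig):

```python
from urllib.parse import urlencode

def dih_command_url(base_url, command="delta-import"):
    # delta-import indexes only rows changed since the last run, which DIH
    # tracks in dataimport.properties; clean=false keeps existing docs.
    params = {"command": command, "clean": "false", "commit": "true"}
    return base_url.rstrip("/") + "/dataimport?" + urlencode(params)

url = dih_command_url("http://localhost:8983/solr/collection1")
```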
On 12/5/2012 9:42 AM, Spadez wrote:
I am looking to import entries to my SOLR server by using the DIH,
connecting to an external PostgreSQL server using the JDBC driver. I will
be importing about 50,000 entries each time.
Is connecting to an external SQL server for my data unreliable or risky,
Very interesting!
I've seen references to NRTCachingDirectory, MMapDirectory, FSDirectory,
RamDirectory and NIOFSDirectory, and that's just what I can remember. I have
tried to search for more information about these, but I'm not having much
luck.
Is there a place where I can read up on these?
I'm not sure I understand why this is important. Too much memory would just
be unused.
This is what the heap looks now:
Heap Configuration:
MinHeapFreeRatio = 40
MaxHeapFreeRatio = 70
MaxHeapSize = 17179869184 (16384.0MB)
NewSize = 21757952 (20.75MB)
MaxNewSize
See the screenshots:
solr_idea1: adding an IDEA tomcat artifact
solr_idea2: adding an IDEA facet
solr_idea3: placing modules into the artifact (drag modules from the Available
Elements to output root) and the created facet
Wednesday, December 5, 2012, 7:28 from sarowe [via Lucene]