Hi Anshum,
I'm using Solr 4.4. Is there a problem with using a replicationFactor of 2?
On Thu, Sep 12, 2013 at 11:20 AM, Anshum Gupta ans...@anshumgupta.net wrote:
Prasi, a replicationFactor of 2 is what you want. However, as of the
current releases, this is not persisted.
On Thu, Sep 12,
Thanks, guys. Now I know a little more about DocValues and realize that
they will do the job wrt FieldCache.
Regards, Per Steffensen
On 9/12/13 3:11 AM, Otis Gospodnetic wrote:
Per, check zee Wiki, there is a page describing docvalues. We used them
successfully in a solr for analytics
Thanks Erick!
Yeah, I think the next step will be CloudSolrServer with the SOLR-4816
patch. I think that is a very, very useful patch by the way. SOLR-5232
seems promising as well.
I see your point on the more-shards idea, this is obviously a
global/instance-level lock. If I really had to,
I'm trying to index a view in an Oracle database, and have come across some
strange behaviour: all the VARCHAR2 fields are being returned as empty
strings; this also applies to a datetime field converted to a string via
TO_CHAR, and the url field built by concatenating two constant strings and
a
Hi
SolrCloud 4.0: 6 machines, quadcore, 8GB ram, 1T disk, one Solr-node on
each, one collection across the 6 nodes, 4 shards per node
Storing/indexing from 100 threads on external machines, each thread one
doc at a time, full speed (they always have a new doc to store/index)
See attached
Hi solr users
I want to create a core with node_name through the API
CloudSolrServer.query(SolrParams params).
For example:
ModifiableSolrParams params = new ModifiableSolrParams();
params.set("qt", "/admin/cores");
params.set("action", "CREATE");
params.set("name",
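The same CREATE call can also be expressed as a plain CoreAdmin HTTP request. A minimal sketch of building that URL (the host, core name, and collection name here are hypothetical; only action, name, and collection are shown, not the full parameter set):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class CoreAdminUrl {
    // Build the CoreAdmin CREATE URL equivalent to the ModifiableSolrParams
    // call above (names are made up for illustration).
    static String createCoreUrl(String host, String coreName, String collection) {
        return "http://" + host + "/solr/admin/cores?action=CREATE"
                + "&name=" + URLEncoder.encode(coreName, StandardCharsets.UTF_8)
                + "&collection=" + URLEncoder.encode(collection, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(createCoreUrl("localhost:8983", "mycore_shard1", "mycollection"));
    }
}
```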
No Jetty, and yes, for Tomcat I've seen a couple of answers
On 12. Sep 2013, at 3:12 AM, Otis Gospodnetic wrote:
Using tomcat by any chance? The ML archive has the solution. May be on
Wiki, too.
Otis
Solr ElasticSearch Support
http://sematext.com/
On Sep 11, 2013 8:56 AM, Andreas Owen
Hi Aditya,
You need to start another 6 instances (9 instances in total) to
achieve this. The first 3 instances, as you mention, are already
assigned to the 3 shards. The next 3 will become their replicas,
followed by the next 3 as the next replicas.
You could create two copies each of the
Can you specify what you mean by 'problem'? I don't think there should
be any issues with that.
Hope this is what you followed in your attempt so far:
http://wiki.apache.org/solr/SolrCloud#Example_B:_Simple_two_shard_cluster_with_shard_replicas
On Thu, Sep 12, 2013 at 11:31 AM, Prasi S
Followup: I just tried modifying the select with
select CAST('APPLICATION' as varchar2(100)) as sourceid, ...
and that caused the sourceid field to be empty. CASTing to char(100) gave
me the expected value ('APPLICATION', right-padded to 100 characters).
Meanwhile, google gave me this:
Could it have something to do with the fact that the meta encoding tag is
iso-8859-1 but the HTTP header says utf-8, and Firefox interprets it as utf-8?
On 12. Sep 2013, at 8:36 AM, Andreas Owen wrote:
No Jetty, and yes, for Tomcat I've seen a couple of answers
On 12. Sep 2013, at 3:12 AM, Otis Gospodnetic
This is probably a bug with Oracle thin JDBC driver. Google found a
similar issue:
http://stackoverflow.com/questions/4168494/resultset-getstring-on-varchar2-column-returns-empty-string
I don't think this is specific to DataImportHandler.
On Thu, Sep 12, 2013 at 12:43 PM, Raymond Wiker
Maybe the fact that we are never ever going to delete or update
documents, can be used for something. If we delete we will delete entire
collections.
Regards, Per Steffensen
On 9/12/13 8:25 AM, Per Steffensen wrote:
Hi
SolrCloud 4.0: 6 machines, quadcore, 8GB ram, 1T disk, one Solr-node
on
Seems like the attachments didn't make it through to this mailing list
https://dl.dropboxusercontent.com/u/25718039/doccount.png
https://dl.dropboxusercontent.com/u/25718039/iowait.png
On 9/12/13 8:25 AM, Per Steffensen wrote:
Hi
SolrCloud 4.0: 6 machines, quadcore, 8GB ram, 1T disk, one
Hi
I tried to reindex the solr. I get the regular expression problem. The
steps I followed are
I started the java -jar start.jar
http://localhost:8983/solr/update?stream.body=<delete><query>*:*</query></delete>
http://localhost:8983/solr/update?stream.body=<commit/>
I stopped the solr server
I changed
Hi,
My Question is related to OpenNLP Integration with SOLR.
I have successfully applied OpenNLP LUCENE-2899-x.patch to latest solr
branch checkout from here:
http://svn.apache.org/repos/asf/lucene/dev/branches/branch_4x
And I am also able to compile the source code and generate all related
hi all.
I am trying SolrCloud on my server. The server is a virtual machine.
I have followed the SolrCloud wiki http://wiki.apache.org/solr/SolrCloud .
When I run SolrCloud, it fails. But if I try on my local machine, it runs
successfully. Why does Solr behave differently on the server and locally?
My
Fewer client threads updating makes sense, and going to 1 core also seems
like it might help. But it's all a crap-shoot unless the underlying cause
gets fixed up. Both would improve things, but you'll still hit the problem
sometime, probably when doing a demo for your boss ;).
Adrien has branched
Per:
One thing I'll be curious about. From my reading of DocValues, it uses
little or no heap. But it _will_ use memory from the OS if I followed
Simon's slides correctly. So I wonder if you'll hit swapping issues...
Which are better than OOMs, certainly...
Thanks,
Erick
On Thu, Sep 12, 2013
Hi,
I am also seeing this issue when the search query is something like "how
are you?" (quotes for clarity).
The query parser splits it into the tokens below:
+text:whats +text:your +text:raashee?
However, when I remove the ? from the search query "how are you", I get the
results.
Is ? a special
You must specify maxShardsPerNode=3 for this to happen. By default,
maxShardsPerNode is 1, so only one shard is created per node.
On Thu, Sep 12, 2013 at 3:19 AM, Aditya Sakhuja
aditya.sakh...@gmail.com wrote:
Hi -
I am trying to set the 3 shards and 3 replicas for my solrcloud
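The constraint behind Shalin's answer is arithmetic: the cluster needs enough node slots for numShards × replicationFactor cores, and each node offers maxShardsPerNode slots. A small sketch of that calculation (helper name and exact rounding are mine, not Solr's API):

```java
public class ShardMath {
    // Minimum nodes required so every shard replica gets a slot.
    // ceil(numShards * replicationFactor / maxShardsPerNode)
    static int minNodes(int numShards, int replicationFactor, int maxShardsPerNode) {
        int totalCores = numShards * replicationFactor;
        return (totalCores + maxShardsPerNode - 1) / maxShardsPerNode;
    }

    public static void main(String[] args) {
        // 3 shards x 3 replicas fit on 3 nodes only if maxShardsPerNode=3:
        System.out.println(minNodes(3, 3, 3)); // 3
        // With the default maxShardsPerNode=1 you would need 9 nodes:
        System.out.println(minNodes(3, 3, 1)); // 9
    }
}
```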
That sounds reasonable. I've done some more digging, and found that the
database instance in this case is an _OLD_ version of Oracle: 9.2.0.8.0. I
also tried using the OCI driver (version 12), which refuses to even talk to
this database.
I have three other databases running on more recent
Thanks. It'd be great if you can update this thread if you ever find a
workaround. We will document it on the DataImportHandlerFaq wiki page.
http://wiki.apache.org/solr/DataImportHandlerFaq
On Thu, Sep 12, 2013 at 4:56 PM, Raymond Wiker rwi...@gmail.com wrote:
That sounds reasonable. I've done
Yes, thanks.
Actually, some months back I made a PoC of a FieldCache that could expand
beyond the heap. Basically, imagine a FieldCache with room for
unlimited data-arrays that just goes to memory-mapped files behind the
scenes when there is no more room on heap. I never finished
it, and it might
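The spill-to-disk idea Per describes can be sketched with java.nio memory mapping: the value array lives in a mapped file, so the OS page cache holds the data outside the JVM heap. A toy illustration only, not the actual PoC:

```java
import java.io.IOException;
import java.nio.LongBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappedLongArray {
    // Store n longs in a memory-mapped file: the data is paged in and out
    // by the OS on demand and does not count against the JVM heap.
    static long writeThenRead(int n, int index) throws IOException {
        Path file = Files.createTempFile("fieldcache", ".bin");
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            LongBuffer values = ch.map(FileChannel.MapMode.READ_WRITE,
                    0, (long) n * Long.BYTES).asLongBuffer();
            for (int i = 0; i < n; i++) values.put(i, i * 2L);
            return values.get(index);
        } finally {
            Files.deleteIfExists(file);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(writeThenRead(1000, 21)); // 42
    }
}
```

This is also why, as Erick notes below, such off-heap data uses OS memory rather than heap, trading OOMs for possible swapping.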
Question mark and asterisk are wildcard characters, so if you want them to
be treated as punctuation, either enclose the terms in quotes or escape the
characters.
Wildcard characters suppress the execution of some token filters if they are
not able to cope with wildcards.
-- Jack Krupansky
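Jack's advice can be sketched as a tiny escaping helper, modeled loosely on SolrJ's ClientUtils.escapeQueryChars (the exact special-character set below is an assumption and may differ by Solr version):

```java
public class QueryEscaper {
    // Characters the Lucene query parser treats specially; prefixing them
    // with a backslash makes the parser treat them as literal punctuation.
    private static final String SPECIALS = "\\+-!():^[]\"{}~*?|&;/";

    public static String escape(String s) {
        StringBuilder sb = new StringBuilder();
        for (char c : s.toCharArray()) {
            if (SPECIALS.indexOf(c) >= 0) sb.append('\\');
            sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(escape("how are you?")); // how are you\?
    }
}
```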
On Thu, 2013-09-12 at 14:48 +0200, Per Steffensen wrote:
Actually some months back I made PoC of a FieldCache that could expand
beyond the heap. Basically imagine a FieldCache with room for
unlimited data-arrays, that just behind the scenes goes to
memory-mapped files when there is no more
On 9/12/13 3:28 PM, Toke Eskildsen wrote:
On Thu, 2013-09-12 at 14:48 +0200, Per Steffensen wrote:
Actually some months back I made PoC of a FieldCache that could expand
beyond the heap. Basically imagine a FieldCache with room for
unlimited data-arrays, that just behind the scenes goes to
Hi,
I got a small issue here: my facet settings are returning counts for empty
values, i.e. when the actual field was empty.
Here are the facet settings:
<str name="facet.sort">count</str>
<str name="facet.limit">6</str>
<str name="facet.mincount">1</str>
<str name="facet.missing">false</str>
and this is the part of the
My problem is solved. My server's default Java version was 1.5. I upgraded
the Java version.
2013/9/12 cihat güzel c.guzel@gmail.com
hi all.
I am trying solr cloud on my server. The server is a virtual machine.
I have followed solr cloude wiki http://wiki.apache.org/solr/SolrCloud
.
When I
On 9/12/2013 2:14 AM, Per Steffensen wrote:
Starting from an empty collection. Things are fine wrt
storing/indexing speed for the first two-three hours (100M docs per
hour), then speed goes down dramatically, to an, for us, unacceptable
level (max 10M per hour). At the same time as speed goes
Neoman,
Make sure that solr08-prod (or the elected leader at any time) isn't doing a
stop-the-world garbage collection that takes long enough that the zookeeper
connection times out. I've seen that in my cluster when I didn't have parallel
GC enabled and my zkClientTimeout in solr.xml was too
Thanks, Greg. Currently we have 60 seconds (we reduced it recently). I may
have to reduce it again. Can you please share your timeout value?
Neoman,
I've got ours set at 45 seconds:
<int name="zkClientTimeout">${zkClientTimeout:45000}</int>
-Original Message-
From: neoman [mailto:harira...@gmail.com]
Sent: Thursday, September 12, 2013 9:33 AM
To: solr-user@lucene.apache.org
Subject: Re: Solr cloud shard goes down after
Exception in shard1 (solr01-prod) primary
09/12/13
13:56:46:635|http-bio-8080-exec-66|ERROR|apache.solr.servlet.SolrDispatchFilter|null:ClientAbortException:
java.net.SocketException: Broken pipe
at
org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:406)
Hi Jack,
On Sep 11, 2013, at 5:34pm, Jack Krupansky wrote:
Do a copyField to another field, with a limit of 8 characters, and then use
that other field.
Thanks - I should have included a few more details in my original question.
The issue is that I've got an index with 200M records, of
On 9/12/2013 7:54 AM, Raheel Hasan wrote:
I got a small issue here: my facet settings are returning counts for empty
values, i.e. when the actual field was empty.
Here are the facet settings:
<str name="facet.sort">count</str>
<str name="facet.limit">6</str>
<str name="facet.mincount">1</str>
<str
OK, so I got the idea... I will pull 7 fields instead and remove the empty
one...
But there must be some setting in the facet configuration to
ignore a certain value if we want to
On Thu, Sep 12, 2013 at 7:44 PM, Shawn Heisey s...@elyograg.org wrote:
On 9/12/2013 7:54 AM,
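As a stopgap, the empty bucket can also be dropped client-side after the facet response arrives. A minimal sketch (the field values and counts are made up; assumes facet values have been read into an ordered map):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FacetFilter {
    // Remove the empty-string bucket from facet counts, keeping order.
    public static Map<String, Integer> dropEmpty(Map<String, Integer> counts) {
        Map<String, Integer> out = new LinkedHashMap<>(counts);
        out.remove("");
        return out;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        counts.put("books", 10);
        counts.put("", 3);
        counts.put("music", 7);
        System.out.println(dropEmpty(counts).size()); // 2
    }
}
```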
Slow down, back up, and now tell us what problem (if any!) you are really
trying to solve. Don't leap to a proposed solution before you clearly state
the problem to be solved.
First, why do you think there is any problem at all?
Or, what are you really trying to achieve?
-- Jack Krupansky
It was the HTTP header; as soon as I forced an iso-8859-1 header, it worked.
On 12. Sep 2013, at 9:44 AM, Andreas Owen wrote:
Could it have something to do with the fact that the meta encoding tag is
iso-8859-1 but the HTTP header says utf-8, and Firefox interprets it as utf-8?
On 12. Sep 2013, at 8:36 AM,
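Why the header matters can be seen by decoding the same bytes both ways: a client that trusts a wrong utf-8 header mangles any non-ASCII iso-8859-1 byte. A toy illustration ("zürich" is just a sample string):

```java
import java.nio.charset.StandardCharsets;

public class CharsetDemo {
    // Encode as ISO-8859-1, then decode as if it were UTF-8 — which is
    // what a browser does when the HTTP header claims the wrong charset.
    static String decodeAsUtf8(String original) {
        byte[] latin1Bytes = original.getBytes(StandardCharsets.ISO_8859_1);
        return new String(latin1Bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // The single 0xFC byte for 'ü' is invalid UTF-8, so the round
        // trip through the wrong charset does not give the string back.
        System.out.println(decodeAsUtf8("zürich").equals("zürich")); // false
    }
}
```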
Hi Jack,
Sorry, I was not clear earlier. What I'm trying to achieve is :
I want to know when a document is committed (hard commit). In my case there
can be a large time lapse (an hour or more) between the time you index a
document and the time you issue a commit. Now, I want to know exactly when
a
Lol, at breaking during a demo - always the way it is! :) I agree, we are
just tip-toeing around the issue, but waiting for 4.5 is definitely an
option if we get by for now in testing; patched Solr versions seem to
make people uneasy sometimes :).
Seeing there seems to be some danger to SOLR-5216
Right, I don't see SOLR-5232 making 4.5 unfortunately. It could perhaps make a
4.5.1 - it does resolve a critical issue - but 4.5 is in motion and SOLR-5232
is not quite ready - we need some testing.
- Mark
On Sep 12, 2013, at 2:12 PM, Erick Erickson erickerick...@gmail.com wrote:
My take on
Sorry, but all you've done is reshuffle your previous statements but without
telling us about the actual problem that you are trying to solve!
Repeating myself: You, the application developer can send a hard commit any
time you want to assure that documents are searchable. Maybe not every
On Sep 12, 2013, at 20:55 , phanichaitanya pvempaty@gmail.com wrote:
Apologies again. But here is another try :
I want to make sure that documents that are indexed are committed in, say,
an hour. I agree that passing commitWithin params and the like will make
sure of that based on the
On 9/12/2013 12:55 PM, phanichaitanya wrote:
I want to make sure that documents that are indexed are committed in, say,
an hour. I agree that passing commitWithin params and the like will make
sure of that based on the time configurations we set. But, I want to make
sure that the document is
So, now I want to know when that document becomes searchable, or when it is
committed. I have the following scenario:
1) Indexing starts at say 9:00 AM - with the above additions to the
schema.xml I'll know the indexed time of each document I send to Solr via
the update handler. Say 9:01, 9:02 and
My take on it is this, assuming I'm reading this right:
1. SOLR-5216 - probably not going anywhere; 5232 will take care of it.
2. SOLR-5232 - expected to fix the underlying issue no matter whether
you're using CloudSolrServer from SolrJ or sending lots of updates from
lots of clients.
3. SOLR-4816 -
On 9/12/2013 11:17 AM, Andreas Owen wrote:
it was the http-header, as soon as i force a iso-8859-1 header it worked
Glad you found a workaround!
If you are in a situation where you cannot control the header of the
request or modify the content itself to include charset information, or
there's
That makes sense, thanks Erick and Mark for you help! :)
I'll see if I can find a place to assist with the testing of SOLR-5232.
Cheers,
Tim
On 12 September 2013 11:16, Mark Miller markrmil...@gmail.com wrote:
Right, I don't see SOLR-5232 making 4.5 unfortunately. It could perhaps
make a
maxAnalyzedChars did it! I wasn't setting that param, and I'm working with
some very long documents. I also made the hl.fl param formatting change that
you suggested, Aloke.
Thanks again!
- Eric
On Sep 11, 2013, at 3:10 AM, Eric O'Hanlon elo2...@columbia.edu wrote:
Thank you, Aloke and
On 9/12/2013 11:04 AM, phanichaitanya wrote:
So, now I want to know when that document becomes searchable or when it is
committed. I've the following scenario:
1) Indexing starts at say 9:00 AM - with the above additions to the
schema.xml I'll know the indexed time of each document I send to
Yes, the document will be searchable after it is committed.
Although you can also do auto commits and commitWithin which do not
guarantee immediate visibility of index changes, you can do a hard commit
any time you want to make a document searchable.
-- Jack Krupansky
-Original
I'd like to know when a document is committed in Solr vs. the indexed time.
For indexed time, I can add a field as: <field name="indexed_time"
type="date" default="NOW" indexed="true" stored="true" />.
If I have say, 10 million docs indexed and I want to know the actual commit
time of the document which
Hi Prabu,
It's difficult to tell what's going wrong without the full exception stack
trace, including what the exception is.
If you can provide the specific input that triggers the exception, that might
also help.
Steve
On Sep 12, 2013, at 4:14 AM, prabu palanisamy pr...@serendio.com wrote:
Hi,
I just have this issue that came out of nowhere.
Everything was fine until, all of a sudden, the browser can't connect to
this Solr instance.
Here is the solr log:
INFO - 2013-09-12 20:07:58.142; org.eclipse.jetty.server.Server;
jetty-8.1.8.v20121106
INFO - 2013-09-12 20:07:58.179;
While attempting to upgrade from Solr 4.3.0 to Solr 4.4.0 I ran into
this exception:
java.lang.IllegalArgumentException: enablePositionIncrements=false is
not supported anymore as of Lucene 4.4 as it can create broken token
streams
which led me to
Solr admin exposes time of last commit. You can use that.
Otis
Solr ElasticSearch Support
http://sematext.com/
On Sep 12, 2013 3:22 PM, phanichaitanya pvempaty@gmail.com wrote:
Apologies again. But here is another try :
I want to make sure that documents that are indexed are committed in
I'm trying to get the score by using a custom boost and also get the distance. I
found David's code* to get it using Intersects, which I want to replace by
{!geofilt} or geodist()
*David's code: https://issues.apache.org/jira/browse/SOLR-4255
He told me geodist() will be available again for this