You have to add two new Java options to your Glassfish config (for example,
if you use the standard keystore and truststore):
asadmin create-jvm-options -- -Djavax.net.ssl.keyStorePassword=changeit
asadmin create-jvm-options -- -Djavax.net.ssl.trustStorePassword=changeit
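If you would rather not touch the domain config, the same two properties can in principle also be set programmatically, as long as that happens before the first SSL connection is created. A minimal sketch (not Glassfish-specific; the password values just mirror the defaults above):

```java
public class SslProps {
    public static void main(String[] args) {
        // Equivalent of the -D JVM options above; must run before the
        // first SSL handshake, which is when the JVM reads these properties.
        System.setProperty("javax.net.ssl.keyStorePassword", "changeit");
        System.setProperty("javax.net.ssl.trustStorePassword", "changeit");
        System.out.println(System.getProperty("javax.net.ssl.keyStorePassword"));
        // prints "changeit"
    }
}
```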
/Uwe
On 10 April 2013
Thanks for this!
Now I have another problem. I tried to give the XML file the right format, so
I made this
<?xml version="1.0" encoding="UTF-8"?>
<add><doc>
<field name="id">455HHS-2232</field>
<field name="title">T0072-00031-DOWNLOAD - Blatt 12v</field>
<field name="format">application/pdf</field>
Hi Joel,
I followed your steps; the cores and collection get created, but there
is no leader elected, so I cannot query the collection...
Am I missing something?
Kind Regards
Alexander
On 2013-04-09 10:21, A.Eibner wrote:
Hi,
thanks for your fast answer.
You don't use the Collection API
Thanks Hoss, those are some really useful clarifications. Since what I'm
working on is currently at POC stage I'll go with the system properties and
will refactor them out as I move towards having a standalone ZooKeeper
ensemble.
Thanks again.
Edd
On 10 April 2013 01:41, Chris Hostetter
Greetings Solrians
This is just a reminder that Solrstrap is a thing, and that it might help
you out with your Solr project.
http://fergiemcdowall.github.io/solrstrap/
Solrstrap is wondering which new features it needs. Solrstrap would like to
hear your suggestions. Feel free to post here or
Just for information: the problem occurs when I try to add the fields
created, last_modified, and issued (all three of type date)
and the field rightsholder.
Maybe that is helpful!
--
View this message in context:
Thx, I'll try this approach.
Quoting Alexandre Rafalovitch arafa...@gmail.com:
Have you looked at edismax and the 'qf' fields parameter? It allows you to
define the fields to search. Also, you can define those parameters in
solrconfig.xml and not have to send them down the wire.
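For illustration, here is a rough sketch of what such a request looks like when qf is sent over the wire (the field names and boosts are made up; in practice you would move them into the edismax defaults in solrconfig.xml, per the advice above):

```java
import java.net.URLEncoder;

public class EdismaxUrl {
    public static void main(String[] args) throws Exception {
        // Hypothetical fields and boosts; adapt these to your schema.
        String qf = "title^2 description author";
        String url = "http://localhost:8983/solr/select"
                + "?defType=edismax"
                + "&q=" + URLEncoder.encode("free form words", "UTF-8")
                + "&qf=" + URLEncoder.encode(qf, "UTF-8");
        System.out.println(url);
    }
}
```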
Finally,
On Wed, Apr 10, 2013 at 10:35 AM, Max Bo maximilian.brod...@gmail.com wrote:
Just for information: the problem occurs when I try to add the fields
created, last_modified, and issued (all three of type date)
and the field rightsholder.
Maybe that is helpful!
From the example
Thank you.
I changed it and now it works.
But is there any possibility to make the given timestamp acceptable for
solr?
The number of documents found is returned in a field called numFound in
the response.
If you do use SolrJ you will likely have a QueryResponse qr and can just
do a qr.setNumFound().
If you do not use SolrJ, try adding e.g. wt=json to your search query
to get the response in JSON. Find the
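As a rough illustration of where numFound sits in a wt=json response, here is a sketch that pulls it out of a canned response string with a regex. This is only a smoke-test shortcut; a real client should use a proper JSON parser (or SolrJ, where it is QueryResponse#getResults().getNumFound(), as noted later in this thread):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class NumFound {
    public static long numFound(String json) {
        // Quick-and-dirty extraction; fine for illustration only.
        Matcher m = Pattern.compile("\"numFound\":(\\d+)").matcher(json);
        return m.find() ? Long.parseLong(m.group(1)) : -1;
    }

    public static void main(String[] args) {
        // Trimmed-down wt=json response body for illustration.
        String body = "{\"responseHeader\":{\"status\":0},"
                + "\"response\":{\"numFound\":42,\"start\":0,\"docs\":[]}}";
        System.out.println(numFound(body)); // prints 42
    }
}
```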
We need to make Solr Search like
Success Failure
Working 50%
but Solr query parser eliminates all special characters from search.
my search query is as mentioned below
http://localhost:8080/solr/core/select?q=%22Success%20%26%20Failure%22&hl=on&hl.snippets=99&debugQuery=on
below is debugQuery
On 4/10/13 12:17 PM, Per Steffensen wrote:
The number of documents found is returned in a field called numFound
in the response.
If you do use SolrJ you will likely have a QueryResponse qr and can
just do a qr.setNumFound().
qr.getResults().getNumFound() :-)
If you do not use SolrJ try to
Thank you for your explanations, this will help me to figure out my system.
2013/4/10 Shawn Heisey s...@elyograg.org
On 4/9/2013 9:12 PM, Furkan KAMACI wrote:
I am sorry but you said:
*you need enough free RAM for the OS to cache the maximum amount of disk
space all your indexes will
I'm sure the best way for me to solve this issue myself is to ask it
publicly, so...
If I have two {!join} queries that select a collection of documents
each, how do I create a filter query that combines their results?
If I do fq={!join} {!join...} it only considers the first.
From what I
Solr assumes you are using UTC. It is your job to do a conversion.
If you want Solr to do it, you could use an UpdateProcessor to do it,
either using RegExp, or perhaps a ScriptUpdateProcessor.
In fact, if you're comfortable with XSLT, you can make Solr accept your
old format of XML by posting
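If the conversion is done outside Solr instead, a sketch of reformatting a local timestamp into the ISO 8601 UTC form Solr's date fields expect might look like this (the incoming pattern and the Europe/Berlin source zone are assumptions; adjust them to your data):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class ToSolrDate {
    public static String toUtc(String local) throws ParseException {
        // Hypothetical source format and zone; change to match your input.
        SimpleDateFormat in = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        in.setTimeZone(TimeZone.getTimeZone("Europe/Berlin"));
        // Solr date fields expect ISO 8601 in UTC, e.g. 2013-04-10T10:00:00Z
        SimpleDateFormat out = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
        out.setTimeZone(TimeZone.getTimeZone("UTC"));
        Date d = in.parse(local);
        return out.format(d);
    }

    public static void main(String[] args) throws ParseException {
        // CEST is UTC+2 in April, so noon local becomes 10:00Z.
        System.out.println(toUtc("2013-04-10 12:00:00")); // 2013-04-10T10:00:00Z
    }
}
```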
Hi guys,
I have some problems with Solr replication and can see some
unexpected behavior.
It would be nice to get some answers on where I am wrong, or what the best
way to solve the problem is.
I have a replication master-slave. http://192.168.2.204:8080/solr/ is
master and
On Wed, Apr 10, 2013, at 12:22 PM, Upayavira wrote:
I'm sure the best way for me to solve this issue myself is to ask it
publicly, so...
If I have two {!join} queries that select a collection of documents
each, how do I create a filter query that combines their results?
If I do
Hi all,
according to this ticket: https://issues.apache.org/jira/browse/SOLR-2998
Are there any plans to fix this bug? Is there another way of using the FVH
while still getting proper results (without concatenation)?
--
Karol Sikora
+48 781 493 788
Laboratorium EE
ul. Mokotowska 46A/23 | 00-543
Switch the field types from the standard tokenizer to the white space
tokenizer and don't use the word delimiter filter.
Or, you can sometimes add custom character mapping tables to some filters
and indicate that your desired special characters should be mapped to type
ALPHA.
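For illustration, a field type along those lines might look like the sketch below (the type name is made up; the factories are standard Solr ones):

```xml
<fieldType name="text_ws" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- no WordDelimiterFilterFactory, so tokens like "&" survive intact -->
  </analyzer>
</fieldType>
```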
-- Jack
Jack - I apologize for my ignorance here, but when you keep emphasizing 'new'
- does that mean that there is ANOTHER version of this tool than the one
that is built into solr-4.2.1?
And on the encoding issue - I thought PDF was platform-agnostic? Or is the
problem on my Windows system - i.e. that
Yes, there is the version that comes with Solr 3.x.
I'm not aware of an encoding issue.
-- Jack Krupansky
-Original Message-
From: sdspieg
Sent: Wednesday, April 10, 2013 8:11 AM
To: solr-user@lucene.apache.org
Subject: Re: Pushing a whole set of pdf-files to solr
Jack - I apologize
On 4/9/2013 7:03 PM, Furkan KAMACI wrote:
These are really good metrics for me:
You say that RAM size should be at least index size, and it is
better to have a RAM size twice the index size (because of worst
case scenario).
On the other hand let's assume that I have a RAM size that
You're mixing up disk and RAM requirements when you talk
about having twice the disk size. Solr does _NOT_ require
twice the index size of RAM to optimize, it requires twice
the size on _DISK_.
In terms of RAM requirements, you need to create an index,
run realistic queries at the installation
Hi Uwe,
Thanks for your response. As I mentioned in my email, I would prefer the
application to not have access to the keystore.
Do you know if there is a way of specifying a different HttpClient
implementation (e.g. DefaultHttpClient rather than
SystemDefaultHttpClient) ?
Have you tried to create a HttpSolrServer with this constructor:
HttpSolrServer (http://lucene.apache.org/solr/4_2_0/solr-solrj/org/apache/solr/client/solrj/impl/HttpSolrServer.html#HttpSolrServer(java.lang.String,
Hello,
I am building a Search Interface in front of Solr. I am using facets and
other approaches to do fielded restrictions (via fq queries). I am also
providing a free-form search field to the user.
I would like that free-form field to search against eDisMax rules (multiple
source fields,
I'm using this sample query to group the result set by category:
q=test&group=true&group.field=category
This works as expected and I get this sample response:
"response":
{"numFound":1,"start":0,"docs":[
{
...
}
{"numFound":6,"start":0,"docs":[
{
...
}
{"numFound":3,"start":0,"docs":[
{
...
}
Correct, except the worst case maximum for disk space is three times. --wunder
On Apr 10, 2013, at 6:04 AM, Erick Erickson wrote:
You're mixing up disk and RAM requirements when you talk
about having twice the disk size. Solr does _NOT_ require
twice the index size of RAM to optimize, it
Ok,
We figured it out:
The cert wasn't in the trusted CA keystore. I know we put it in there
earlier; I don't know why it was missing.
But we added it in again and everything works as before.
Thanks,
Hello,
I tried updating our solrcloud from 4.0.0 to 4.1.0.
So I set up a cloud on my local machine with a standalone zookeeper
(3.4.5), 3 collections and 6 Solr servers (4.0.0).
I added some documents via SolrJ, and stopped the servers. After that I
restarted the nodes with the newer version
Hi,
I run multiple Solr indexes in a single Tomcat (1 webapp per index). All
the indexes are Solr 3.5 and I have upgraded a few of them to Solr 4.1
(about half of them).
The JVM behavior is now radically different and doesn't seem to make
sense. I was using ConcMarkSweepGC. I am now trying the G1
Either preprocess the query in your application layer, add a query
preprocessor custom search component, or propose some additional options to
Solr to disable certain features like local params nested queries, etc.
Oh, with fielded search, maybe you can just set uf (user fields) to empty.
I
On Wed, Apr 10, 2013 at 11:59 AM, Jack Krupansky j...@basetechnology.com wrote:
Oh, with fielded search, maybe you can just set uf (user fields) to
empty. I haven't checked if that restricts qf as well.
I just tested and UF seems to affect FQ (Filter Query). So, that would have
been a cool
In our application we are using Solr 4.1.
And we want to filter results by score relevance.
I had the idea to use statistic data (i.e. standard deviation, mean) for
the score field.
Is there any workaround using ...stats=true&stats.field=score...?
Thanks in advance
Hi,
When we try to access the Solr (3.6) admin page, sometimes it does not take
us to the right page; instead it shows the message below.
Directory: /solr/admin/
Parent Directory
Replication 4096 bytes Mar 25 2013 9:34:06 AM
When we click on parent Directory it displays the below
Is it possible to have a Solr cloud in a master/slave configuration with
another solr server where the cloud is the slave?
--
To *know* is one thing, and to know for certain *that* we know is another.
--William James
Hi Marc,
Why such a big heap? Do you really need it? You disabled all caches,
so the JVM really shouldn't need much memory. Have you tried with
-Xmx20g or even -Xmx8g? Aha, survivor is getting to 100% so you kept
increasing -Xmx?
Have you tried just not using any of these:
-XX:+UseG1GC
Can you post your clusterstate.json?
After you spin up the initial core, it will automatically become leader for
that shard.
On Wed, Apr 10, 2013 at 3:43 AM, A.Eibner a_eib...@yahoo.de wrote:
Hi Joel,
I followed your steps, the cores and collection get created, but there is
no leader
On 4/10/2013 9:48 AM, Marc Des Garets wrote:
The JVM behavior is now radically different and doesn't seem to make
sense. I was using ConcMarkSweepGC. I am now trying the G1 collector.
The perm gen went from 410Mb to 600Mb.
The eden space usage is a lot bigger and the survivor space usage is
Hey guys,
This feels like a silly question already, here goes:
In SolrCloud it doesn't seem obvious to me where one can grab stats
regarding caches for a given core using an http call (JSON/XML). Those
values are available in the web-based app, but I am looking for a http call
that would return
Hey Tim
SolrCloud-Mode or not does not really matter for this fact .. in 4.x (and afaik
as well in 3.x) you can find the stats here:
http://host:port/solr/admin/mbeans?stats=true in xml or json (setting the
responsewriter with wt=json) - as you like
HTH
Stefan
On Wednesday, April 10, 2013
It's under /admin/mbeans.
Alan Woodward
www.flax.co.uk
On 10 Apr 2013, at 20:53, Tim Vaillancourt wrote:
Hey guys,
This feels like a silly question already, here goes:
In SolrCloud it doesn't seem obvious to me where one can grab stats
regarding caches for a given core using an http
There we go, Thanks Stefan!
You're right, 3.x has this as well, I guess I missed it. I'll add this to
the docs for SolrCaching.
Cheers!
Tim
On 10 April 2013 13:19, Stefan Matheis matheis.ste...@gmail.com wrote:
Hey Tim
SolrCloud-Mode or not does not really matter for this fact .. in 4.x
To complete my "as well in 3.x" phrase - what I wanted to say is: it was
already there in the times of 3.x - but because there was stats.jsp .. you know
:)
On Wednesday, April 10, 2013 at 10:19 PM, Stefan Matheis wrote:
Hey Tim
SolrCloud-Mode or not does not really matter for this fact .. in
Hi,
here is the clusterstate.json (from ZooKeeper) after creating the core:
{"storage":{
  "shards":{"shard1":{
    "range":"8000-7fff",
    "state":"active",
    "replicas":{"app02:9985_solr_storage-core":{
      "shard":"shard1",
      "state":"down",
      "core":"storage-core",
Hi,
I have a nullable TextField with the following field type (field
name="fun_group"):

<fieldType name="lowercase_sort_missing_first" class="solr.TextField"
    sortMissingFirst="true" positionIncrementGap="100">
  <analyzer><tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter
Are you sure you want to facet on a text field??? That will facet on the
individual terms, which isn't usually very useful.
Usually, people want to facet on full phrases or entire strings, so they do
a schema copyField from the text field to a string field and then facet on
the string field.
Ah... I see now that you are using the keyword tokenizer that should
preserve the phrase structure of the text.
You haven't detailed the exception stack trace.
What are the numbers in terms of number of values and average length of each
value?
-- Jack Krupansky
-Original Message-
Number of values fun_group on shard 6 = 48000
Max length of fun_group is 20 chars
If I run the facet on just shard 6 it doesn't error out, no matter the
facet.limit. Also, this query returns results only from shard 6, since
my_id:4024 belongs to shard 6.
If the NPE was in SolrDispatchFilter, it could relate to some limit on the
HTTP request or response size.
Again, we need the full stack trace, and the Solr release.
-- Jack Krupansky
-Original Message-
From: coolpriya5
Sent: Wednesday, April 10, 2013 7:06 PM
To:
Solr version is 3.4. As for the stack trace, I tried setting the logger
level to FINEST on the Solr admin logging page and it still doesn't print
the stack trace. All I get are one-liners:
2013-04-10 17:09:59,889 [http--18] ERROR [Marker: ]
org.apache.solr.core.SolrCore :
Try URL encoding it and/or escaping the
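As a sketch of the URL-encoding part (the query string is just an example): special characters such as & must be percent-encoded so they reach the query parser instead of being eaten as URL parameter separators.

```java
import java.net.URLEncoder;

public class EscapeQuery {
    public static void main(String[] args) throws Exception {
        // Percent-encode the raw query before appending it as q=...
        String q = URLEncoder.encode("\"Success & Failure\"", "UTF-8");
        System.out.println("q=" + q); // q=%22Success+%26+Failure%22
    }
}
```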
On Tue, Apr 9, 2013 at 2:32 AM, Rohan Thakur rohan.i...@gmail.com wrote:
hi all,
one thing I wanted to clarify: for every other query I get correct
suggestions, but in these 2 cases I am not getting what the
suggestions are supposed to be:
1) I have
I think a lot of your e-mail failed to make it through various filters,
can you try sending in a simpler format?
Best
Erick
On Tue, Apr 9, 2013 at 8:19 AM, neha yadav nehayadav...@gmail.com wrote:
I am trying to modify the results of the Solr output. Basically I need to
change the ranking of the
Large facet.limit values cause a very large amount of form data to be sent to
the shards, though I'm not sure why this would cause a NullPointerException.
Perhaps the web server you are using is truncating the data instead of
returning a form too large error, which is somehow causing an NPE.
I'm using tomcat. Also in such a case, why wouldn't the same error occur when
I run the same query on shard 6 alone? Is this a limitation of distributed
search?
Shard 6 is the only shard that has data for this query.
Yes, this is a distributed search thing. In a distributed search, it will first
make a somewhat normal facet request to all of the shards, get back the facet
values, then make a second request in order to get the full counts of the facet
values - this second request contains a list of facet
Root-caused the issue to a code bug / contract violation in SnapPuller in
Solr 4.2.1 (impacts trunk as well) and fixed it by patching the SnapPuller
locally.
The fetchfilelist API expects indexversion to be specified as a param.
So the call to the master should be of the form:
On 10 April 2013 22:03, lexus a...@scalepoint.com wrote:
In our application we are using Solr 4.1.
And we want to filter results by score relevance.
I had the idea to use statistic data (i.e. standard deviation, mean) for
the score field.
Is there any workaround using