boost parent fields BlockJoinQuery

2016-04-11 Thread michael solomon
Hi,
I'm using the BlockJoin Query Parser to return the parent of the relevant
child, i.e.:
{!parent which="is_parent:true" score=max}(child_field:bla)

Is it possible to boost the parent? Something like:

{!parent which="is_parent:true" score=max}(child_field:bla)
parent_field:"bla bla"^10
Thanks,
Michael


Re: Facet heatmaps: cluster coordinates based on average position of docs

2016-04-11 Thread Reth RM
Can you please be a bit more specific about what type of query you are
making and what other values you are expecting, with an example?

If you know of a specific JIRA issue for the use case, then you can write
comments there.


On Mon, Apr 11, 2016 at 5:54 PM, Anton K.  wrote:

> Anyone?
>
> Or how can i contact with facet heatmaps creator?
>
> 2016-04-07 18:42 GMT+03:00 Anton K. :
>
> > I am working with the new Solr feature: facet heatmaps. It works great; I
> > create clusters on my map with counts. When a user clicks on a cluster I
> > zoom in to that area and might show more clusters or documents (based on
> > the current zoom level).
> >
> > But all my cluster icons (I use round ones, see screenshot below) are
> > placed straight in the center of the cluster's rectangles:
> >
> > https://dl.dropboxusercontent.com/u/1999619/images/map_grid3.png
> >
> > Some clusters can end up in the sea, and so on. It also feels unnatural
> > in my case to have icons placed so orderly on the world map.
> >
> > I want to place cluster icons at average coordinates based on the
> > coordinates of all the docs inside each cluster. Is there any way to
> > achieve this? I am trying to use the stats component with facet heatmaps
> > but it isn't implemented yet.
> >
>


Re: Cache problem

2016-04-11 Thread Reth RM
As per the Solr admin dashboard's memory report, the Solr JVM is not using
more than 20 GB of memory, whereas physical memory is almost full. I'd set
Xms = Xmx = 16 GB and let the operating system use the rest. Regarding
caches: the filter cache hit ratio looks good, so it should not be a
concern. And AFAIK, the document cache effectively relies on the OS cache.
Overall, I'd reduce the memory allocated to the JVM as said above and try.
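
(For reference, assuming the stock start scripts: the heap can be pinned with
e.g. "bin/solr start -m 16g", or by setting SOLR_HEAP in solr.in.sh.)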




On Mon, Apr 11, 2016 at 7:40 PM,  wrote:

> You do need to optimize to get rid of the deleted docs probably...
>
> That is a lot of deleted docs
>
> Bill Bell
> Sent from mobile
>
>
> > On Apr 11, 2016, at 7:39 AM, Bastien Latard - MDPI AG
>  wrote:
> >
> > Dear Solr experts :),
> >
> > I read this very interesting post 'Understanding and tuning your Solr
> caches' !
> > This is the only good document that I was able to find after searching
> for 1 day!
> >
> > I have been using Solr for 2 years without knowing in detail what it was
> > caching (because I did not need to understand it before).
> > I had to take a look since I needed to restart my Tomcat regularly in
> > order to keep performance acceptable...
> >
> > But I now have 2 questions:
> > 1) How can I tell how much RAM my Solr is actually using (especially for
> > caching)?
> > 2) Could you have a quick look at the following images and tell me if
> > I'm doing something wrong?
> >
> > Note: my index contains 66 million articles with several stored text
> > fields.
> >
> > My Solr contains several cores (~80 GB altogether), but almost only the
> > one below is used.
> >
> > I have the feeling that a lot of data is always kept in RAM... and it
> > keeps getting bigger all the time...
> >
> > 
> > 
> >
> > (after restart)
> > $ sudo tail -f /var/log/tomcat7/catalina.out | grep GC
> > 
> > [...] after a few minutes
> > 
> >
> > Here are some images, that can show you some stats about my Solr
> performances...
> > 
> > 
> > 
> >
> > 
> >
> > Kind regards,
> > Bastien Latard
> >
> >
>


Re: Solr Sharding Strategy

2016-04-11 Thread Toke Eskildsen
On Tue, 2016-04-12 at 05:57 +, Bhaumik Joshi wrote:

> //Insert Document
> UpdateResponse resp = cloudServer.add(doc, 1000);
> 
Don't insert documents one at a time, if it can be avoided:
https://lucidworks.com/blog/2015/10/05/really-batch-updates-solr-2/
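
A minimal sketch of the batched alternative (assumptions: a SolrClient named
cloudServer as in the quoted code, its 1000 ms commitWithin, and a batch size
of 1000, which is just a starting point to tune):

import java.util.ArrayList;
import java.util.List;
import org.apache.solr.common.SolrInputDocument;

List<SolrInputDocument> batch = new ArrayList<>();
for (SolrInputDocument doc : docs) {  // docs: however you produce documents
    batch.add(doc);
    if (batch.size() >= 1000) {
        cloudServer.add(batch, 1000); // one request for the whole batch
        batch.clear();
    }
}
if (!batch.isEmpty()) {
    cloudServer.add(batch, 1000);     // flush the remainder
}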


Try pausing the indexing fully when you do your query test, to check how
big the impact of indexing is.

When you run your query performance test, are the queries issued
sequentially or in parallel?


- Toke Eskildsen, State and University Library, Denmark




Re: Solrj API for Managed Resources

2016-04-11 Thread Reth RM
I think it's best to use the available APIs. Here is the list of APIs for
managing synonyms and stop words:

https://cwiki.apache.org/confluence/display/solr/Managed+Resources

And this blog post with details
https://lucidworks.com/blog/2014/03/31/introducing-solrs-restmanager-and-managed-stop-words-and-synonyms/
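
A minimal sketch (untested, names assumed) of driving that REST API from
plain Java, since SolrJ has no dedicated convenience class for managed
resources; the core name "mycore" and resource name "english" are examples:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class AddManagedSynonyms {
    public static void main(String[] args) throws Exception {
        // PUT a JSON map of term -> synonyms to the managed synonyms endpoint
        URL url = new URL(
            "http://localhost:8983/solr/mycore/schema/analysis/synonyms/english");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        String body = "{\"mad\":[\"angry\",\"upset\"]}";
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode());
        // note: the core must be reloaded before new entries take effect
    }
}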



On Tue, Apr 12, 2016 at 4:39 AM, iambest  wrote:

> Is there a solrj API to add synonyms or stop words using the Managed
> Resources API? I have to programmatically add them, what is the best way?
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Solrj-API-for-Managed-Resources-tp4269454.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>


Re: Specify relative path to current core conf folder when it's originally relative to solr home

2016-04-11 Thread Reth RM
I think there are some root paths defined in the solr.sh file in the bin
directory. You can pick the root directory variable from there and use it.
For example, in solrconfig.xml there is a value such as
"${solr.install.dir:../../../..}". I think solr.install.dir is the root
path, and its definition is set in solr.sh. I'm not sure, but it's worth a
try.
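
(Worth noting, though not mentioned above: Solr also defines implicit
per-core properties that can be used in solrconfig.xml, such as
${solr.core.name} and ${solr.core.instanceDir}, so something like
fileDir="${solr.core.instanceDir}conf/myfiledir" may be closer to what is
being asked for here; verify the exact property names for your version.)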




On Tue, Apr 12, 2016 at 9:34 AM, scott.chu  wrote:

> I've got a custom tokenizer. When configuring it, there's an attribute
> 'fileDir', whose value is a path relative to solr home. But I wish it could
> be relative to the current core. Is there some out-of-the-box system
> variable, say {current_core}, that I can use in the value? For example,
>
> solr home = /solr5/server/solr
> In the current core's solrconfig.xml, I can specify
> 
> 
> <tokenizer class="..." fileDir="myfiledir"/>
> 
> 
>
> so it will refer to /solr5/server/solr/myfiledir.
>
> But I want to put myfiledir under the current core's conf folder. I wish
> there were something such as:
> ...
> <tokenizer class="..." fileDir="{current_core}/conf/myfiledir"/>
> ...
>
> Is it possible?
>


Re: Solr Sharding Strategy

2016-04-11 Thread Bhaumik Joshi
Please note that all caches were disabled in the mentioned test.


With 2 shards: intended queries and updates = 10 per sec; actual queries per
sec = 3.3; actual updates per sec = 10; so over 302 queries the average query
time is 2192 ms.

With 1 shard: intended queries and updates = 10 per sec; actual queries per
sec = 9.7; actual updates per sec = 10.3; so over 302 queries the average
query time is 83 ms.

We do a soft commit when we insert/update documents.

// Insert document -- the second argument is commitWithin, in ms
UpdateResponse resp = cloudServer.add(doc, 1000);
if (resp.getStatus() == 0)
{
    success = true;
}

// Update documents -- same 1 s commitWithin, set on the request
UpdateRequest req = new UpdateRequest();
req.setCommitWithin(1000);
req.add(docs);
UpdateResponse resp = req.process(cloudServer);
if (resp.getStatus() == 0)
{
    success = true;
}

Here are the commit settings in solrconfig.xml.


<autoCommit>
  <maxTime>60</maxTime>
  <maxDocs>2</maxDocs>
  <openSearcher>false</openSearcher>
</autoCommit>

<autoSoftCommit>
  <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
</autoSoftCommit>




Thanks & Regards,

Bhaumik Joshi


From: Daniel Collins 
Sent: Monday, April 11, 2016 8:12 AM
To: solr-user@lucene.apache.org
Subject: Re: Solr Sharding Strategy

I'd also ask about your indexing times, what QTime do you see for indexing
(in both scenarios), and what commit times are you using (which Toke
already asked).

Not entirely sure how to read your table, but looking at the indexing side
of things, with 2 shards, there is inherently more work to do, so you would
expect indexing latency to increase (we have to index in 1 shard, and then
index in the 2nd shard, so logically it's twice the workload).

Your table suggests you managed 10 updates per second, but you never
managed 25 updates per second with either 1 shard or 2 shards.  Though the
numbers don't quite add up, you managed 13.9 updates per sec on 1 shard, and
21.9 updates per sec on 2 shards.  That suggests to me that in the single
shard case, your searches are causing your indexing to throttle; maybe the
resourcing is favoring searches and so the indexing threads aren't getting
a look in...  Whereas in the 2 shard case, it seems clear (as Toke said)
that search isn't really hitting the index much; not sure where the
bottleneck is, but it's not on the index, which is why your indexing load
can get more requests through.

On 11 April 2016 at 15:36, Toke Eskildsen  wrote:

> On Mon, 2016-04-11 at 11:23 +, Bhaumik Joshi wrote:
> > We are using solr 5.2.0 and we have Index-heavy (100 index updates per
> > sec) and Query-heavy (100 queries per sec) scenario.
>
> > Index stats: 10 million documents and 16 GB index size
>
> > Which sharding strategy is best suited in above scenario?
>
> Sharding reduces query throughput and can improve query latency as well
> as indexing speed. For small indexes, the overhead of sharding is likely
> to worsen query latency. So as always, it depends.
>
> Qualified guess: Don't use multiple shards, but consider using replicas.
>
> > Please share reference resources which states detailed comparison of
> > single shard over multi shard if any.
>
> Sorry, could not find the one I had in mind.
> >
> > Meanwhile we did some tests with SolrMeter (Standalone java tool for
> > stress tests with Solr) for single shard and two shards.
> >
> > Index stats of test solr cloud: 0.7 million documents and 1 GB index
> > size.
> >
> > As observed in test average query time with 2 shards is much higher
> > than single shard.
>
> Makes sense: Your shards are so small that the actual time spent on the
> queries is very low. So relatively, the overhead of distributed (aka
> multi-shard) searching is high, negating any search-gain you got by
> sharding. I would not have expected the performance drop-off to be that
> large (factor 20-60) though.
>
> Your query speed is unusually low for an index of your size, which leads
> me to believe that your indexing is slowing everything down. This is
> often due to too frequent commits and/or too many warm-up queries.
>
> There is a bit about it at
> https://wiki.apache.org/solr/SolrPerformanceFactors
>
>
> - Toke Eskildsen, State and University Library, Denmark
>
>
>
>


Specify relative path to current core conf folder when it's originally relative to solr home

2016-04-11 Thread scott.chu
I've got a custom tokenizer. When configuring it, there's an attribute
'fileDir', whose value is a path relative to solr home. But I wish it could
be relative to the current core. Is there some out-of-the-box system
variable, say {current_core}, that I can use in the value? For example,

solr home = /solr5/server/solr
In the current core's solrconfig.xml, I can specify


<tokenizer class="..." fileDir="myfiledir"/>



so it will refer to /solr5/server/solr/myfiledir.

But I want to put myfiledir under the current core's conf folder. I wish
there were something such as:
...
<tokenizer class="..." fileDir="{current_core}/conf/myfiledir"/>
...

Is it possible?


Re: SolrCloud Config file

2016-04-11 Thread Erick Erickson
Do note by the way that as of Solr 5.5, the bin/solr script has an
option for uploading and downloading configsets. Try typing

bin/solr zk -help
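
For example, uploading a configset might look like the following (the flags
may differ slightly by version, so check the -help output):

bin/solr zk -upconfig -z localhost:9983 -n topic -d server/solr/configsets/sample_techproducts_configs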

Best,
Erick

On Mon, Apr 11, 2016 at 6:30 PM, Shawn Heisey  wrote:
> On 4/11/2016 6:40 PM, Sam Xia wrote:
>> Where is the path of the topic collection's zookeeper config file? Here
>> is what the wiki says (see below). But I was not able to find
>> configs/topic anywhere in the installation folder.
>
> The /configs/topic path is *inside the zookeeper database*.  It is not a
> path on the filesystem at all.  Zookeeper is a separate Apache project
> that Solr happens to use when running in cloud mode.
>
> http://zookeeper.apache.org/
>
>> "The create command will upload a copy of the data_driven_schema_configs
>> configuration directory to ZooKeeper under /configs/mycollection. Refer to
>> the Solr Start Script Reference
>> > ce> page for more details about the create command for creating
>> collections.”
>>
>> Here is the command that I ran to verify zookeeper is on port 8983. BTW,
>> I did not modify anything and the Solr is a clean install, so I do not know
>> why Python is used in the script. It looks to me like the config folder
>> was not created by the first command, so when you try to update it, it
>> gets an IO error.
>>
>> ./solr status
>>
>> Found 2 Solr nodes:
>>
>> Solr process 30976 running on port 7574
>> {
>>   "solr_home":"/locm/solr-6.0.0/example/cloud/node2/solr",
>>   "version":"6.0.0 48c80f91b8e5cd9b3a9b48e6184bd53e7619e7e3 - nknize -
>> 2016-04-01 14:41:49",
>>   "startTime":"2016-04-11T23:42:59.513Z",
>>   "uptime":"0 days, 0 hours, 51 minutes, 43 seconds",
>>   "memory":"93.2 MB (%19) of 490.7 MB",
>>   "cloud":{
>> "ZooKeeper":"localhost:9983",
>> "liveNodes":"2",
>> "collections":"2"}}
>>
>>
>> Solr process 30791 running on port 8983
>> {
>>   "solr_home":"/locm/solr-6.0.0/example/cloud/node1/solr",
>>   "version":"6.0.0 48c80f91b8e5cd9b3a9b48e6184bd53e7619e7e3 - nknize -
>> 2016-04-01 14:41:49",
>>   "startTime":"2016-04-11T23:42:54.041Z",
>>   "uptime":"0 days, 0 hours, 51 minutes, 49 seconds",
>>   "memory":"78.9 MB (%16.1) of 490.7 MB",
>>   "cloud":{
>> "ZooKeeper":"localhost:9983",
>> "liveNodes":"2",
>> "collections":"2"}}
>
> 8983 is a *Solr* port.  The default embedded zookeeper port is the first
> Solr port in the cloud example plus 1000, so it usually ends up being 9983.
>
>> If you run the following steps, you would be able to reproduce the issue
>> every time.
>>
>> Step 1) bin/solr start -e cloud -noprompt
>> Step 2) bin/solr create -c topic -d sample_techproducts_configs
>> Step 3) ./zkcli.sh -cmd upconfig -zkhost localhost:9983 -confname topic
>> -solrhome /locm/solr-5.5.0/ -confdir
>> /locm/solr-5.5.0/server/solr/configsets/sample_techproducts_configs/conf
>
> The "-solrhome" option is not something you need.  I have no idea what
> it will do, but it is not one of the options for upconfig.
>
> I tried this (on Windows) and I'm getting a different problem on the
> upconfig command trying to connect to zookeeper:
>
> https://www.dropbox.com/s/c65zmkhd0le6mzv/upconfig-error.png?dl=0
>
> Trying again on Linux, I had zero problems with the commands you used,
> changing only minor details for the upconfig command (things are in a
> different place, and I didn't use the unnecessary -solrhome option):
>
> https://www.dropbox.com/s/edoa07anmkkep0l/xia-recreate1.png?dl=0
> https://www.dropbox.com/s/ad5ukuvfvlgwq0z/xia-recreate2.png?dl=0
> https://www.dropbox.com/s/ay1u3jjuwy5t52s/xia-recreate3.png?dl=0
>
> Your stated commands indicate 5.5.0, but the JSON status information
> above and the paths they contain indicate that it is 6.0.0 that is
> responding.  I will have to try 6.0.0 later.
>
> If nothing has changed, then "get-pip.py" would not be there.  There
> isn't a configset named "topic_configs_ori" included with Solr, not even
> in the 6.0.0 version.  This came from somewhere besides the Solr website.
>
> Thanks,
> Shawn
>


Re: SolrCloud Config file

2016-04-11 Thread Shawn Heisey
On 4/11/2016 6:40 PM, Sam Xia wrote:
> Where is the path of the topic collection's zookeeper config file? Here
> is what the wiki says (see below). But I was not able to find
> configs/topic anywhere in the installation folder.

The /configs/topic path is *inside the zookeeper database*.  It is not a
path on the filesystem at all.  Zookeeper is a separate Apache project
that Solr happens to use when running in cloud mode.

http://zookeeper.apache.org/

> "The create command will upload a copy of the data_driven_schema_configs 
> configuration directory to ZooKeeper under /configs/mycollection. Refer to 
> the Solr Start Script Reference 
>  ce> page for more details about the create command for creating 
> collections.”
>
> Here is the command that I ran to verify zookeeper is on port 8983. BTW,
> I did not modify anything and the Solr is a clean install, so I do not know
> why Python is used in the script. It looks to me like the config folder
> was not created by the first command, so when you try to update it, it
> gets an IO error.
>
> ./solr status
>
> Found 2 Solr nodes: 
>
> Solr process 30976 running on port 7574
> {
>   "solr_home":"/locm/solr-6.0.0/example/cloud/node2/solr",
>   "version":"6.0.0 48c80f91b8e5cd9b3a9b48e6184bd53e7619e7e3 - nknize - 
> 2016-04-01 14:41:49",
>   "startTime":"2016-04-11T23:42:59.513Z",
>   "uptime":"0 days, 0 hours, 51 minutes, 43 seconds",
>   "memory":"93.2 MB (%19) of 490.7 MB",
>   "cloud":{
> "ZooKeeper":"localhost:9983",
> "liveNodes":"2",
> "collections":"2"}}
>
>
> Solr process 30791 running on port 8983
> {
>   "solr_home":"/locm/solr-6.0.0/example/cloud/node1/solr",
>   "version":"6.0.0 48c80f91b8e5cd9b3a9b48e6184bd53e7619e7e3 - nknize - 
> 2016-04-01 14:41:49",
>   "startTime":"2016-04-11T23:42:54.041Z",
>   "uptime":"0 days, 0 hours, 51 minutes, 49 seconds",
>   "memory":"78.9 MB (%16.1) of 490.7 MB",
>   "cloud":{
> "ZooKeeper":"localhost:9983",
> "liveNodes":"2",
> "collections":"2"}}

8983 is a *Solr* port.  The default embedded zookeeper port is the first
Solr port in the cloud example plus 1000, so it usually ends up being 9983.

> If you run the following steps, you would be able to reproduce the issue 
> every time.
>
> Step 1) bin/solr start -e cloud -noprompt
> Step 2) bin/solr create -c topic -d sample_techproducts_configs
> Step 3) ./zkcli.sh -cmd upconfig -zkhost localhost:9983 -confname topic 
> -solrhome /locm/solr-5.5.0/ -confdir 
> /locm/solr-5.5.0/server/solr/configsets/sample_techproducts_configs/conf

The "-solrhome" option is not something you need.  I have no idea what
it will do, but it is not one of the options for upconfig.

I tried this (on Windows) and I'm getting a different problem on the
upconfig command trying to connect to zookeeper:

https://www.dropbox.com/s/c65zmkhd0le6mzv/upconfig-error.png?dl=0

Trying again on Linux, I had zero problems with the commands you used,
changing only minor details for the upconfig command (things are in a
different place, and I didn't use the unnecessary -solrhome option):

https://www.dropbox.com/s/edoa07anmkkep0l/xia-recreate1.png?dl=0
https://www.dropbox.com/s/ad5ukuvfvlgwq0z/xia-recreate2.png?dl=0
https://www.dropbox.com/s/ay1u3jjuwy5t52s/xia-recreate3.png?dl=0

Your stated commands indicate 5.5.0, but the JSON status information
above and the paths they contain indicate that it is 6.0.0 that is
responding.  I will have to try 6.0.0 later.

If nothing has changed, then "get-pip.py" would not be there.  There
isn't a configset named "topic_configs_ori" included with Solr, not even
in the 6.0.0 version.  This came from somewhere besides the Solr website.

Thanks,
Shawn



Re: Indexing date data for facet search

2016-04-11 Thread Erick Erickson
You have two options for dates in this scenario: "tdate" or "dateRange".
Probably use "dateRange" in this case; it should be more time- and
space-efficient. Here's some background:

https://lucidworks.com/blog/2016/02/13/solrs-daterangefield-perform/

Date types should be indexed as fully specified strings, as

YYYY-MM-DDThh:mm:ssZ
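
A minimal sketch (an illustration, not from the thread) of converting the
DB format "2016-03-29 15:54:35.461" into that canonical form, assuming the
DB timestamps are in UTC:

import java.text.SimpleDateFormat;
import java.util.TimeZone;

public class ToSolrDate {
    public static void main(String[] args) throws Exception {
        SimpleDateFormat db = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS");
        SimpleDateFormat solr = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
        db.setTimeZone(TimeZone.getTimeZone("UTC"));
        solr.setTimeZone(TimeZone.getTimeZone("UTC"));
        // prints 2016-03-29T15:54:35Z (Solr also accepts fractional seconds)
        System.out.println(solr.format(db.parse("2016-03-29 15:54:35.461")));
    }
}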

Best,
Erick

On Mon, Apr 11, 2016 at 3:03 PM, Steven White  wrote:
> Hi everyone,
>
> I need to index date data into Solr and then use this field for facet
> search.  My question is this: the date data in my DB is stored in the
> following format "2016-03-29 15:54:35.461":
>
> 1) What format should I be indexing this date + timestamp into Solr?
> 2) What Solr field type should I be using?  Is it "date"?
> 3) How do I handle various time zones and locales?
> 4) Can I insert multi-value date data into the single "date" facet field
> and still use this field for facet search?
> 5) Based on my need, will all the Date Math per [1] on date facets still
> work? I'm confused here because of my need for (3).
>
> To elaborate on (4) some more.  The need here is this: in my DB, there is
> more than one column with date data.  I will be indexing them all into a
> single multi-valued Solr field of type Date that I will then use for facets.
> Is this possible?
>
> I guess this is a two-part question for date facets: a) how do I properly
> index, and b) how do I properly search?
>
> As always, any insight is greatly appreciated.
>
> Steve
>
> [1] https://cwiki.apache.org/confluence/display/solr/Working+with+Dates


Re: Limiting regex queries

2016-04-11 Thread Erick Erickson
There's some ability to time-limit queries (the timeAllowed
request parameter) so they stop after a specified time. That
does not do any cost analysis ahead of time though.

Periodically there's some interest in a way to
short-circuit "expensive" queries through some
kind of query plan, but nothing committed yet.

Yeah, basically the underlying query has to enumerate
all the terms between the two values and create
a huge OR clause (it's more efficient than that, but
that's the conceptual task). Don't know of a good
automatic way to do that.

Best,
Erick

On Sun, Apr 10, 2016 at 2:38 PM, Michael Harkins  wrote:
> Well, the original architecture is out of my hands, but when someone
> sends in a query like that where the range is a large number, my system
> basically shuts down and the CPU spikes with a large increase in
> memory usage. The queries are for strings. The query itself was an
> accident, but I want to be able to prevent an accident from bringing
> down the index.
>
>
>> On Apr 10, 2016, at 12:34 PM, Erick Erickson  wrote:
>>
>> OK, why is this a problem? This smells like an XY problem,
>> you want to take some specific action, but it's not at all
>> clear what the problem is. There might be other ways
>> of doing this.
>>
>> If you're allowing regexes on numeric fields, using real
>> number fields (trie) and using range queries is a much
>> better way to go.
>>
>> Best,
>> Erick
>>
>>> On Sun, Apr 10, 2016 at 9:28 AM, Michael Harkins  wrote:
>>> Hey all,
>>>
>>> I am using lucene and solr version 4.2, and was wondering what would
>>> be the best way to not allow regex queries with very large numbers.
>>> Something like blah{1234567} or blah{1234, 123445678}


Re: Solr 6 - AbstractSolrTestCase Error Unable to build KeyStore from file: null

2016-04-11 Thread Chris Hostetter

https://issues.apache.org/jira/browse/SOLR-8970
https://issues.apache.org/jira/browse/SOLR-8971

: Date: Mon, 11 Apr 2016 20:35:22 -0400
: From: Joe Lawson 
: Reply-To: solr-user@lucene.apache.org
: To: solr-user@lucene.apache.org
: Subject: Re: Solr 6 - AbstractSolrTestCase Error Unable to build KeyStore from
:  file: null
: 
: Thanks for the insight. I figured that it was something like that and
: perhaps I had thread contention on a resource that wasn't really thread
: safe.
: 
: I'll give your suggestions a shot tomorrow.
: 
: Regards,
: 
: Joe Lawson
: On Apr 11, 2016 8:24 PM, "Chris Hostetter"  wrote:
: 
: >
: > : I'm upgrading a plugin and use the AbstractSolrTestCase for tests. My
: > tests
: > : work fine in 5.X but when I upgraded to 6.X the tests sometimes throw an
: > : error during initialization. Basically it says,
: > : "org.apache.solr.common.SolrException: Error instantiating
: > : shardHandlerFactory class
: > : [org.apache.solr.handler.component.HttpShardHandlerFactory]: Unable to
: > : build KeyStore from file: null"
: >
: > Ugh.  and of course there are no other details to troubleshoot that
: > because the stupid error handling doesn't wrap the original exception --
: > it just throws it away.
: >
: > I'm pretty sure the problem you are seeing (unfortunately manifested in
: > a really confusing way) is that SolrTestCaseJ4 (and AbstractSolrTestCase
: > which subclasses it) has randomized the use of SSL for a while, but at
: > some point it also started randomizing the use of client auth -- but this
: > randomization happens very infrequently.
: >
: > (for details, check out the SSLTestConfig and its usage in
: > SolrTestCaseJ4)
: >
: > The bottom line is, in order for the (randomized) clientAuth stuff to
: > work, SolrTestCaseJ4 assumes it can find an
: > "../etc/test/solrtest.keystore" realtive to ExternalPaths.SERVER_HOME.
: >
: > If you don't have that in your test setup, bad things happen.
: >
: > I believe the quickest way for you to resolve this failure in your own
: > usage of AbstractSolrTestCase is to just add the @SuppressSSL annotation to
: > your tests -- assuming you don't care about randomly testing your plugin
: > with SSL authentication (for 99.999% of solr plugins, whether solr is being
: > used over http or https shouldn't matter for test purposes)
: >
: > If you do want to include randomized SSL testing, then you need to make
: > sure that when/how you run your tests, ExternalPaths.SERVER_HOME
: > resolves to the correct place, and "../etc/test/solrtest.keystore"
: > resolves to a real file solr can use as the keystore.
: >
: > I'll file some Jiras to try and improve the error handling in these
: > situations.
: >
: >
: >
: > -Hoss
: > http://www.lucidworks.com/
: >
: 

-Hoss
http://www.lucidworks.com/


Re: SolrCloud Config file

2016-04-11 Thread Sam Xia
Thanks Shawn.

Where is the path of the topic collection's zookeeper config file? Here is
what the wiki says (see below). But I was not able to find configs/topic
anywhere in the installation folder.

"The create command will upload a copy of the data_driven_schema_configs 
configuration directory to ZooKeeper under /configs/mycollection. Refer to 
the Solr Start Script Reference 
 page for more details about the create command for creating 
collections.”



Here is the command that I ran to verify zookeeper is on port 8983. BTW,
I did not modify anything and the Solr is a clean install, so I do not know
why Python is used in the script. It looks to me like the config folder
was not created by the first command, so when you try to update it, it
gets an IO error.

./solr status

Found 2 Solr nodes: 

Solr process 30976 running on port 7574
{
  "solr_home":"/locm/solr-6.0.0/example/cloud/node2/solr",
  "version":"6.0.0 48c80f91b8e5cd9b3a9b48e6184bd53e7619e7e3 - nknize - 
2016-04-01 14:41:49",
  "startTime":"2016-04-11T23:42:59.513Z",
  "uptime":"0 days, 0 hours, 51 minutes, 43 seconds",
  "memory":"93.2 MB (%19) of 490.7 MB",
  "cloud":{
"ZooKeeper":"localhost:9983",
"liveNodes":"2",
"collections":"2"}}


Solr process 30791 running on port 8983
{
  "solr_home":"/locm/solr-6.0.0/example/cloud/node1/solr",
  "version":"6.0.0 48c80f91b8e5cd9b3a9b48e6184bd53e7619e7e3 - nknize - 
2016-04-01 14:41:49",
  "startTime":"2016-04-11T23:42:54.041Z",
  "uptime":"0 days, 0 hours, 51 minutes, 49 seconds",
  "memory":"78.9 MB (%16.1) of 490.7 MB",
  "cloud":{
"ZooKeeper":"localhost:9983",
"liveNodes":"2",
"collections":"2"}}



If you run the following steps, you would be able to reproduce the issue 
every time.

Step 1) bin/solr start -e cloud -noprompt
Step 2) bin/solr create -c topic -d sample_techproducts_configs
Step 3) ./zkcli.sh -cmd upconfig -zkhost localhost:9983 -confname topic 
-solrhome /locm/solr-5.5.0/ -confdir 
/locm/solr-5.5.0/server/solr/configsets/sample_techproducts_configs/conf








On 4/11/16, 5:29 PM, "Shawn Heisey"  wrote:

>On 4/11/2016 4:59 PM, Sam Xia wrote:
>> Solr is installed in /locm/solr-5.5.0/ folder
>>
>> 1) First I create a topic collection with the following command:
>>
>> bin/solr create -c topic -d topic_configs_ori
>>
>> But there is no folder named topic in
>> /locm/solr-5.5.0/server/solr/configsets/topic after the above command.
>
>This command does not change anything in configsets.  Since you are in
>cloud mode, it will copy that configset from the indicated directory
>(topic_configs_ori) to zookeeper, to a config named "topic" -- assuming
>that this config does not already exist in zookeeper.  If the named
>config already exists in zookeeper, then it will be used as-is, and not
>updated.  When not in cloud mode, it behaves a little differently, but
>still would not create anything in configsets.
>
>> I got the following error:
>>
>> ./zkcli.sh -cmd upconfig -zkhost localhost:9983 -confname topic 
>>-solrhome /locm/solr-5.5.0/ -confdir 
>>/locm/solr-5.5.0/server/solr/configsets/topic_configs_ori/conf
>>
>> Exception in thread "main" java.io.IOException: Error uploading file 
>>/locm/solr-5.5.0/server/solr/configsets/topic_configs_ori/conf/get-pip.py
>> to zookeeper path /configs/topic/get-pip.py
>
>
>
>> Caused by: 
>>org.apache.zookeeper.KeeperException$ConnectionLossException: 
>>KeeperErrorCode = ConnectionLoss for /configs/topic/get-pip.py
>
>The stacktrace from the "caused by" exception indicates that the zkcli
>command is trying to create the "/configs/topic/get-pip.py" path in the
>zookeeper database and is having a problem connecting to zookeeper.  Are
>you positive that "localhost:9983" is the correct connection string, and
>that there is an active zookeeper server listening on that port?  FYI:
>The embedded zookeeper server should not be used in production.
>
>Side issue:  I'm curious why you have a python script in your config. 
>Nothing explicitly wrong with that, it's just an odd thing to feed to a
>Java program like Solr.
>
>Thanks,
>Shawn
>


Re: Solr 6 - AbstractSolrTestCase Error Unable to build KeyStore from file: null

2016-04-11 Thread Joe Lawson
Thanks for the insight. I figured that it was something like that and
perhaps I had thread contention on a resource that wasn't really thread
safe.

I'll give your suggestions a shot tomorrow.

Regards,

Joe Lawson
On Apr 11, 2016 8:24 PM, "Chris Hostetter"  wrote:

>
> : I'm upgrading a plugin and use the AbstractSolrTestCase for tests. My
> tests
> : work fine in 5.X but when I upgraded to 6.X the tests sometimes throw an
> : error during initialization. Basically it says,
> : "org.apache.solr.common.SolrException: Error instantiating
> : shardHandlerFactory class
> : [org.apache.solr.handler.component.HttpShardHandlerFactory]: Unable to
> : build KeyStore from file: null"
>
> Ugh.  and of course there are no other details to troubleshoot that
> because the stupid error handling doesn't wrap the original exception --
> it just throws it away.
>
> I'm pretty sure the problem you are seeing (unfortunately manifested in
> a really confusing way) is that SolrTestCaseJ4 (and AbstractSolrTestCase
> which subclasses it) has randomized the use of SSL for a while, but at
> some point it also started randomizing the use of client auth -- but this
> randomization happens very infrequently.
>
> (for details, check out the SSLTestConfig and its usage in
> SolrTestCaseJ4)
>
> The bottom line is, in order for the (randomized) clientAuth stuff to
> work, SolrTestCaseJ4 assumes it can find an
> "../etc/test/solrtest.keystore" realtive to ExternalPaths.SERVER_HOME.
>
> If you don't have that in your test setup, bad things happen.
>
> I believe the quickest way for you to resolve this failure in your own
> usage of AbstractSolrTestCase is to just add the @SuppressSSL annotation to
> your tests -- assuming you don't care about randomly testing your plugin
> with SSL authentication (for 99.999% of solr plugins, whether solr is being
> used over http or https shouldn't matter for test purposes)
>
> If you do want to include randomized SSL testing, then you need to make
> sure that when/how you run your tests, ExternalPaths.SERVER_HOME
> resolves to the correct place, and "../etc/test/solrtest.keystore"
> resolves to a real file solr can use as the keystore.
>
> I'll file some Jiras to try and improve the error handling in these
> situations.
>
>
>
> -Hoss
> http://www.lucidworks.com/
>


Re: SolrCloud Config file

2016-04-11 Thread Shawn Heisey
On 4/11/2016 4:59 PM, Sam Xia wrote:
> Solr is installed in /locm/solr-5.5.0/ folder
>
> 1) First I create a topic collection with the following command:
>
> bin/solr create -c topic -d topic_configs_ori
>
> But there is no folder named topic in
> /locm/solr-5.5.0/server/solr/configsets/topic after the above command.

This command does not change anything in configsets.  Since you are in
cloud mode, it will copy that configset from the indicated directory
(topic_configs_ori) to zookeeper, to a config named "topic" -- assuming
that this config does not already exist in zookeeper.  If the named
config already exists in zookeeper, then it will be used as-is, and not
updated.  When not in cloud mode, it behaves a little differently, but
still would not create anything in configsets.

> I got the following error:
>
> ./zkcli.sh -cmd upconfig -zkhost localhost:9983 -confname topic -solrhome 
> /locm/solr-5.5.0/ -confdir 
> /locm/solr-5.5.0/server/solr/configsets/topic_configs_ori/conf
>
> Exception in thread "main" java.io.IOException: Error uploading file 
> /locm/solr-5.5.0/server/solr/configsets/topic_configs_ori/conf/get-pip.py to 
> zookeeper path /configs/topic/get-pip.py



> Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: 
> KeeperErrorCode = ConnectionLoss for /configs/topic/get-pip.py

The stacktrace from the "caused by" exception indicates that the zkcli
command is trying to create the "/configs/topic/get-pip.py" path in the
zookeeper database and is having a problem connecting to zookeeper.  Are
you positive that "localhost:9983" is the correct connection string, and
that there is an active zookeeper server listening on that port?  FYI:
The embedded zookeeper server should not be used in production.

Side issue:  I'm curious why you have a python script in your config. 
Nothing explicitly wrong with that, it's just an odd thing to feed to a
Java program like Solr.

Thanks,
Shawn



Re: Solr 6 - AbstractSolrTestCase Error Unable to build KeyStore from file: null

2016-04-11 Thread Chris Hostetter

: I'm upgrading a plugin and use the AbstractSolrTestCase for tests. My tests
: work fine in 5.X but when I upgraded to 6.X the tests sometimes throw an
: error during initialization. Basically it says,
: "org.apache.solr.common.SolrException: Error instantiating
: shardHandlerFactory class
: [org.apache.solr.handler.component.HttpShardHandlerFactory]: Unable to
: build KeyStore from file: null"

Ugh.  and of course there are no other details to troubleshoot that 
because the stupid error handling doesn't wrap the original exception -- 
it just throws it away.

I'm pretty sure the problem you are seeing (unfortunately manifested in 
a really confusing way) is that SolrTestCaseJ4 (and AbstractSolrTestCase 
which subclasses it) has randomized the use of SSL for a while, but at 
some point it also started randomizing the use of client auth -- but this 
randomization happens very infrequently.

(for details, check out the SSLTestConfig and its usage in
SolrTestCaseJ4)

The bottom line is, in order for the (randomized) clientAuth stuff to 
work, SolrTestCaseJ4 assumes it can find an 
"../etc/test/solrtest.keystore" realtive to ExternalPaths.SERVER_HOME.

If you don't have that in your test setup, bad things happen.

I believe the quickest way for you to resolve this failure in your own 
usage of AbstractSolrTestCase is to just add the @SuppressSSL annotation to
your tests -- assuming you don't care about randomly testing your plugin 
with SSL authentication (for 99.999% of solr plugins, whether solr is being
used over http or https shouldn't matter for test purposes)
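
A minimal sketch of that opt-out (class and config names here are made up,
not from this thread):

import org.apache.solr.SolrTestCaseJ4;
import org.junit.BeforeClass;
import org.junit.Test;

@SolrTestCaseJ4.SuppressSSL
public class MyPluginTest extends SolrTestCaseJ4 {
    @BeforeClass
    public static void beforeTests() throws Exception {
        initCore("solrconfig.xml", "schema.xml"); // files under the test config dir
    }

    @Test
    public void testSomething() throws Exception {
        assertU(adoc("id", "1"));
        assertU(commit());
        assertQ(req("q", "id:1"), "//result[@numFound='1']");
    }
}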

If you do want to include randomized SSL testing, then you need to make 
sure that when/how you run your tests, ExternalPaths.SERVER_HOME
resolves to the correct place, and "../etc/test/solrtest.keystore" 
resolves to a real file solr can use as the keystore.

I'll file some Jiras to try and improve the error handling in these
situations.



-Hoss
http://www.lucidworks.com/


Re: SolrCloud Config file

2016-04-11 Thread Sam Xia
I tried Solr 6.0 and saw the same issue. Please help. Thanks





On 4/11/16, 3:59 PM, "Sam Xia"  wrote:

>Hi,
>
>I installed Solr 5.5 on my test server but was having an issue updating the 
>solrconfig.xml.
>
>Solr is installed in /locm/solr-5.5.0/ folder
>
>1) First I create a topic collection with the following command:
>
>bin/solr create -c topic -d topic_configs_ori
>
>But there is no folder named topic in
>/locm/solr-5.5.0/server/solr/configsets/topic after the above command.
>
>The issue is that when I check the configuration file in Solr admin, the 
>correct solrconfig.xml is not updated to the one in 
>/locm/solr-5.5.0/server/solr/configsets/topic_configs_ori. Actually it
>looks to me like the default config files are being used.
>
>2) Then I run the following command to try to update
>
>./zkcli.sh -cmd upconfig -zkhost localhost:9983 -confname topic -solrhome 
>/locm/solr-5.5.0/ -confdir 
>/locm/solr-5.5.0/server/solr/configsets/topic_configs_ori/conf
>
>I got the following error:
>
>./zkcli.sh -cmd upconfig -zkhost localhost:9983 -confname topic -solrhome 
>/locm/solr-5.5.0/ -confdir 
>/locm/solr-5.5.0/server/solr/configsets/topic_configs_ori/conf
>
>Exception in thread "main" java.io.IOException: Error uploading file 
>/locm/solr-5.5.0/server/solr/configsets/topic_configs_ori/conf/get-pip.py 
>to zookeeper path /configs/topic/get-pip.py
>
>at 
>org.apache.solr.common.cloud.ZkConfigManager$1.visitFile(ZkConfigManager.j
>ava:69)
>
>at 
>org.apache.solr.common.cloud.ZkConfigManager$1.visitFile(ZkConfigManager.j
>ava:59)
>
>at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:135)
>
>at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:199)
>
>at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:69)
>
>at java.nio.file.Files.walkFileTree(Files.java:2602)
>
>at java.nio.file.Files.walkFileTree(Files.java:2635)
>
>at 
>org.apache.solr.common.cloud.ZkConfigManager.uploadToZK(ZkConfigManager.ja
>va:59)
>
>at 
>org.apache.solr.common.cloud.ZkConfigManager.uploadConfigDir(ZkConfigManag
>er.java:121)
>
>at org.apache.solr.cloud.ZkCLI.main(ZkCLI.java:222)
>
>Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: 
>KeeperErrorCode = ConnectionLoss for /configs/topic/get-pip.py
>
>at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>
>at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>
>at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
>
>at 
>org.apache.solr.common.cloud.SolrZkClient$10.execute(SolrZkClient.java:501
>)
>
>at 
>org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.ja
>va:60)
>
>at 
>org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:498)
>
>at 
>org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:408)
>
>at 
>org.apache.solr.common.cloud.ZkConfigManager$1.visitFile(ZkConfigManager.j
>ava:67)
>
>... 9 more
>
>Please help me as this seems to be very basic. But I followed the 
>document in:
>https://cwiki.apache.org/confluence/display/solr/Using+ZooKeeper+to+Manage
>+Configuration+Files
>
>Is this a bug or am I missing anything? Thanks
>
>
>


Solrj API for Managed Resources

2016-04-11 Thread iambest
Is there a solrj API to add synonyms or stop words using the Managed
Resources API? I have to programmatically add them, what is the best way?



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solrj-API-for-Managed-Resources-tp4269454.html
Sent from the Solr - User mailing list archive at Nabble.com.


SolrCloud Config file

2016-04-11 Thread Sam Xia
Hi,

I installed Solr 5.5 on my test server but was having an issue updating the
solrconfig.xml.

Solr is installed in /locm/solr-5.5.0/ folder

1) First I create a topic collection with the following command:

bin/solr create -c topic -d topic_configs_ori

But there is no folder named topic in
/locm/solr-5.5.0/server/solr/configsets/topic after the above command.

The issue is that when I check the configuration file in Solr admin, the 
correct solrconfig.xml is not updated to the one in 
/locm/solr-5.5.0/server/solr/configsets/topic_configs_ori. Actually it looks
to me like the default config files are being used.

2) Then I run the following command to try to update

./zkcli.sh -cmd upconfig -zkhost localhost:9983 -confname topic -solrhome 
/locm/solr-5.5.0/ -confdir 
/locm/solr-5.5.0/server/solr/configsets/topic_configs_ori/conf

I got the following error:

./zkcli.sh -cmd upconfig -zkhost localhost:9983 -confname topic -solrhome 
/locm/solr-5.5.0/ -confdir 
/locm/solr-5.5.0/server/solr/configsets/topic_configs_ori/conf

Exception in thread "main" java.io.IOException: Error uploading file 
/locm/solr-5.5.0/server/solr/configsets/topic_configs_ori/conf/get-pip.py to 
zookeeper path /configs/topic/get-pip.py

at 
org.apache.solr.common.cloud.ZkConfigManager$1.visitFile(ZkConfigManager.java:69)

at 
org.apache.solr.common.cloud.ZkConfigManager$1.visitFile(ZkConfigManager.java:59)

at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:135)

at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:199)

at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:69)

at java.nio.file.Files.walkFileTree(Files.java:2602)

at java.nio.file.Files.walkFileTree(Files.java:2635)

at 
org.apache.solr.common.cloud.ZkConfigManager.uploadToZK(ZkConfigManager.java:59)

at 
org.apache.solr.common.cloud.ZkConfigManager.uploadConfigDir(ZkConfigManager.java:121)

at org.apache.solr.cloud.ZkCLI.main(ZkCLI.java:222)

Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: 
KeeperErrorCode = ConnectionLoss for /configs/topic/get-pip.py

at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)

at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)

at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)

at org.apache.solr.common.cloud.SolrZkClient$10.execute(SolrZkClient.java:501)

at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)

at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:498)

at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:408)

at 
org.apache.solr.common.cloud.ZkConfigManager$1.visitFile(ZkConfigManager.java:67)

... 9 more

Please help me as this seems to be very basic. But I followed the document in:
https://cwiki.apache.org/confluence/display/solr/Using+ZooKeeper+to+Manage+Configuration+Files

Is this a bug or am I missing anything? Thanks





Indexing date data for facet search

2016-04-11 Thread Steven White
Hi everyone,

I need to index date data into Solr and then use this field for facet
search.  My question is this: the date data in my DB is stored in the
following format "2016-03-29 15:54:35.461":

1) What format should I be indexing this date + timestamp into Solr?
2) What Solr field type should I be using?  Is it "date"?
3) How do I handle various time zones and locales?
4) Can I insert multi-value date data into the single "date" facet field
and still use this field for facet search?
5) Based on my need, will all the Date Math per [1] on date facets still
work? I'm confused here because of my need for (3).

To elaborate on (4) some more.  The need here is this: in my DB, there is
more than one column with date data.  I will be indexing them all into a
single multi-valued Solr field of type Date that I will then use for facets.
Is this possible?

I guess this is a two-part question for date facets: a) how do I properly
index, and b) how do I properly search?

As always, any insight is greatly appreciated.

Steve

[1] https://cwiki.apache.org/confluence/display/solr/Working+with+Dates


Re: How to set multivalued false, using SolrJ

2016-04-11 Thread Chris Hostetter

:  Can you do me a favour? I use SolrJ to index, but all of my
:  fields end up multivalued. How can I set my fields to not be
:  multivalued? Can you tell me how to set this using SolrJ?

If you are using a "Managed Schema" (which was explicitly configured in 
most Solr 5.x exampleconfigs, and is now the implicit default in Solr 6) 
you can use the Schema API to make these changes.  There is also a 
"SchemaRequest" convinience class for this if you are a SolrJ user...

https://cwiki.apache.org/confluence/display/solr/Schema+API
https://lucene.apache.org/solr/5_5_0/solr-solrj/org/apache/solr/client/solrj/request/schema/SchemaRequest.html

SolrClient client = ...;
SchemaRequest req = new SchemaRequest.ReplaceField(...);
...
req.process(client)
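
A slightly more concrete sketch (the field name, type, and collection are
made-up examples):

import java.util.LinkedHashMap;
import java.util.Map;
import org.apache.solr.client.solrj.request.schema.SchemaRequest;

Map<String, Object> attrs = new LinkedHashMap<>();
attrs.put("name", "title");
attrs.put("type", "string");
attrs.put("multiValued", false);
SchemaRequest.Update replace = new SchemaRequest.ReplaceField(attrs);
replace.process(client, "mycollection"); // client is the SolrClient from above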




-Hoss
http://www.lucidworks.com/


Re: Range filters: inclusive?

2016-04-11 Thread Robert Brown

It's a string field, ean...

http://paste.scsys.co.uk/510132



On 04/11/2016 06:00 PM, Yonik Seeley wrote:

On Mon, Apr 11, 2016 at 12:52 PM, Robert Brown  wrote:

Hi,

When I perform a range query of ['' TO *] to filter out docs where a
particular field has a value, this does what I want, but I thought using the
square brackets was inclusive, so empty-string values should actually be
included?

They should be.  Are you saying that zero length values are not
included by the range query above?

-Yonik




Re: Range filters: inclusive?

2016-04-11 Thread Chris Hostetter
: > When I perform a range query of ['' TO *] to filter out docs where a
: > particular field has a value, this does what I want, but I thought using the
: > square brackets was inclusive, so empty-string values should actually be
: > included?
: 
: They should be.  Are you saying that zero length values are not
: included by the range query above?

Oh ... maybe I misread the question ... are you saying that when you
add a document you explicitly include the empty string as a field value,
but later when you search for ['' TO *] those documents do not get
returned?

what exactly is the field type you are using, and what update processors 
do you have configured?

If you are using a StrField (w/o any special processors) then the literal
value "" should exist as a term -- but if you are using a TextField w/some
analyzer then the analyzer may be throwing that input away.

Likewise there are update processors that do this explicitly: 

https://lucene.apache.org/solr/5_5_0/solr-core/org/apache/solr/update/processor/RemoveBlankFieldUpdateProcessorFactory.html
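
(For reference, a minimal sketch of wiring that processor into
solrconfig.xml; the chain name is made up:

<updateRequestProcessorChain name="removeblanks">
  <processor class="solr.RemoveBlankFieldUpdateProcessorFactory"/>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>

if such a chain is active for your updates, empty-string values are stripped
before indexing, which would explain them not matching later.)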

-Hoss
http://www.lucidworks.com/


Re: Range filters: inclusive?

2016-04-11 Thread Chris Hostetter

: When I perform a range query of ['' TO *] to filter out docs where a
: particular field has a value, this does what I want, but I thought using the
: square brackets was inclusive, so empty-string values should actually be
: included?

I'm not sure I understand your question ... if you are dealing with
something like a StrField, then the empty string (ie: a 0-byte-long string:
"") is in fact a real term.  You are inclusively including that term in
what you match on.

That is different from matching docs that do not have any values at all
-- ie: they do not contain a single term.
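
(For what it's worth, the usual idiom for "docs that have at least one value
in the field" is an open-ended range such as fq=ean:[* TO *], and its
negation -ean:[* TO *] for docs with no value at all; "ean" is just the
field name mentioned elsewhere in this thread.)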



-Hoss
http://www.lucidworks.com/


Re: Range filters: inclusive?

2016-04-11 Thread Yonik Seeley
On Mon, Apr 11, 2016 at 12:52 PM, Robert Brown  wrote:
> Hi,
>
> When I perform a range query of ['' TO *] to filter out docs where a
> particular field has a value, this does what I want, but I thought using the
> square brackets was inclusive, so empty-string values should actually be
> included?

They should be.  Are you saying that zero length values are not
included by the range query above?

-Yonik


Range filters: inclusive?

2016-04-11 Thread Robert Brown

Hi,

When I perform a range query of ['' TO *] to filter out docs where a 
particular field has a value, this does what I want, but I thought using 
the square brackets was inclusive, so empty-string values should 
actually be included?


The JSON I post to Solr has empty values, not null/undefined.

Am I missing something or is this a feature?

Thanks,
Rob




Re: EmbeddedSolr for unit tests in Solr 6

2016-04-11 Thread Joe Lawson
Check for example tests here too:
https://github.com/apache/lucene-solr/tree/master/solr/core/src/test/org/apache/solr

On Mon, Apr 11, 2016 at 12:24 PM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:

> Please use MiniSolrCloudCluster instead of EmbeddedSolrServer for
> unit/integration tests.
>
> On Mon, Apr 11, 2016 at 2:26 PM, Rohana Rajapakse <
> rohana.rajapa...@gossinteractive.com> wrote:
>
> > Thanks Shawn,
> >
> > I am now pointing solrHomeFolder to  lucene-solr-master\solr\server\solr
> > which contains the correct solr.xml file.
> > Tried the following two ways to create an EmbeddedSolrServer:
> >
> >
> > 1. CoreContainer corecon =
> > CoreContainer.createAndLoad(Paths.get(solrHomeFolder));
> >corecon.load();
> >SolrClient svr = new EmbeddedSolrServer(corecon, corename);
> >
> >
> > 2.   SolrClient svr = new EmbeddedSolrServer(Paths.get(solrHomeFolder),
> > corename);
> >
> >
> > They both throw the same exception (java.lang.NoClassDefFoundError:
> > Could not initialize class org.apache.solr.servlet.SolrRequestParsers).
> > org.apache.solr.servlet.SolrRequestParsers class is present in the
> > solr-core-7.0.0-SNAPSHOT.jar and this jar is present in the WEB-INF\lib
> > folder (in solr server) and also included as a dependency jar in the
> > pom.xml of the test project.
> >
> > Here is the full stack trace of the exception:
> >
> > java.lang.NoClassDefFoundError: Could not initialize class
> > org.apache.solr.servlet.SolrRequestParsers
> > at
> >
> org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.(EmbeddedSolrServer.java:112)
> > at
> >
> org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.(EmbeddedSolrServer.java:70)
> > at
> >
> com.gossinteractive.solr.DocPhraseUpdateProcessorTest.createEmbeddedSolrServer(DocPhraseUpdateProcessorTest.java:141)
> > at
> >
> com.gossinteractive.solr.DocPhraseUpdateProcessorTest.setUp(DocPhraseUpdateProcessorTest.java:99)
> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > at
> >
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> > at
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > at java.lang.reflect.Method.invoke(Method.java:497)
> > at
> >
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
> > at
> >
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> > at
> >
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
> > at
> >
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27)
> > at
> >
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
> > at
> >
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:73)
> > at
> >
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:46)
> > at
> > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
> > at
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
> > at
> org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
> > at
> >
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> > at
> >
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
> > at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
> > at
> >
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
> > at
> >
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
> > at
> >
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
> > at
> >
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
> > at
> >
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
> > at
> >
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> >
> >
> > I have debugged this a bit and found that this exception is thrown on the
> > following line in EmbeddedServer.class
> >
> > _parser = new SolrRequestParsers(null);
> >
> > Also, the coreContainer object has no cores at this point.
> >
> >
> > Wonder if I should update my code from master (it is now about two weeks
> > old).
> >
> > Thanks for any help.
> >
> > Rohana
> >
> >
> > -Original Message-
> > From: Shawn Heisey [mailto:apa...@elyograg.org]
> > Sent: 08 April 2016 16:46
> > To: solr-user@lucene.apache.org
> > Subject: Re: EmbeddedSolr for unit tests in Solr 6
> >
> > On 4/8/2016 7:51 AM, Rohana Rajapakse wrote:
> > > Thanks. I know it exists, but don't know how to use it.
> > >
> > > I am trying to use EmbeddedSolrServer(Path solrHome, String
>

Solr 6 - AbstractSolrTestCase Error Unable to build KeyStore from file: null

2016-04-11 Thread Joe Lawson
I'm upgrading a plugin and use the AbstractSolrTestCase for tests. My tests
work fine in 5.X but when I upgraded to 6.X the tests sometimes throw an
error during initialization. Basically it says,
"org.apache.solr.common.SolrException: Error instantiating
shardHandlerFactory class
[org.apache.solr.handler.component.HttpShardHandlerFactory]: Unable to
build KeyStore from file: null"

I don't really see any changes from 5 to 6 that cause this. Any clues? Here
is the code:
https://github.com/healthonnet/hon-lucene-synonyms/tree/solr-6.0.0

Thanks for the help,

Joe Lawson

Full Error:


NOTE: test params are: codec=Asserting(Lucene60): {}, docValues:{},
>> maxPointsInLeafNode=604, maxMBSortInHeap=5.184451165904283,
>> sim=ClassicSimilarity, locale=en, timezone=America/Blanc-Sablon
>
> NOTE: Linux 4.4.5-1-ARCH amd64/Oracle Corporation 1.8.0_77
>> (64-bit)/cpus=8,threads=1,free=215181912,total=358088704
>
> NOTE: All tests run in this JVM: [TestBaggedSynonyms,
>> TestConstructedPhrases]
>
> NOTE: reproduce with: ant test  -Dtestcase=TestConstructedPhrases
>> -Dtests.seed=48D5F3D29EAB417 -Dtests.locale=en
>> -Dtests.timezone=America/Blanc-Sablon -Dtests.asserts=true
>> -Dtests.file.encoding=UTF-8
>
>
>> org.apache.solr.common.SolrException: Error instantiating
>> shardHandlerFactory class
>> [org.apache.solr.handler.component.HttpShardHandlerFactory]: Unable to
>> build KeyStore from file: null
>
>
>> at __randomizedtesting.SeedInfo.seed([48D5F3D29EAB417]:0)
>
> at
>> org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:52)
>
> at org.apache.solr.core.CoreContainer.load(CoreContainer.java:404)
>
> at org.apache.solr.util.TestHarness.(TestHarness.java:164)
>
> at org.apache.solr.util.TestHarness.(TestHarness.java:127)
>
> at org.apache.solr.util.TestHarness.(TestHarness.java:133)
>
> at org.apache.solr.util.TestHarness.(TestHarness.java:96)
>
> at org.apache.solr.SolrTestCaseJ4.createCore(SolrTestCaseJ4.java:598)
>
> at org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:588)
>
> at org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:430)
>
> at org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:419)
>
> at
>> org.apache.solr.search.HonLuceneSynonymTestCase.beforeClass(HonLuceneSynonymTestCase.java:36)
>
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>
> at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>
> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>
> at java.lang.reflect.Method.invoke(Method.java:498)
>
> at
>> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
>
> at
>> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:811)
>
> at
>> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
>
> at
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>
> at
>> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
>
> at
>> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>
> at
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>
> at
>> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
>
> at
>> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
>
> at
>> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
>
> at
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>
> at
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>
> at
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>
> at
>> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
>
> at
>> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>
> at
>> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>
> at
>> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
>
> at
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>
> at
>> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
>
> at java.lang.Thread.run(Thread.java:745)
>
>
>>


Re: EmbeddedSolr for unit tests in Solr 6

2016-04-11 Thread Shalin Shekhar Mangar
Please use MiniSolrCloudCluster instead of EmbeddedSolrServer for
unit/integration tests.
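
A minimal sketch of what that can look like (assuming the Solr 6.x
solr-test-framework artifact is on the test classpath; the configset path and
the "conf"/"test" names are placeholders, and the exact CollectionAdminRequest
API varies slightly across 6.x releases):

    // start a 1-node SolrCloud cluster in a temporary directory
    MiniSolrCloudCluster cluster = new MiniSolrCloudCluster(
        1, Files.createTempDirectory("solr"), JettyConfig.builder().build());
    try {
        // upload a configset and create a collection that uses it
        cluster.uploadConfigSet(Paths.get("src/test/resources/conf"), "conf");
        CollectionAdminRequest.createCollection("test", "conf", 1, 1)
            .process(cluster.getSolrClient());
        cluster.getSolrClient().setDefaultCollection("test");
        // index and query through cluster.getSolrClient() as usual
    } finally {
        cluster.shutdown();
    }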

On Mon, Apr 11, 2016 at 2:26 PM, Rohana Rajapakse <
rohana.rajapa...@gossinteractive.com> wrote:

> Thanks Shawn,
>
> I am now pointing solrHomeFolder to  lucene-solr-master\solr\server\solr
> which contains the correct solr.xml file.
> Tried the following two ways to create an EmbeddedSolrServer:
>
>
> 1. CoreContainer corecon =
> CoreContainer.createAndLoad(Paths.get(solrHomeFolder));
>corecon.load();
>SolrClient svr = new EmbeddedSolrServer(corecon, corename);
>
>
> 2.   SolrClient svr = new EmbeddedSolrServer(Paths.get(solrHomeFolder),
> corename);
>
>
> They both throw the same exception (java.lang.NoClassDefFoundError:
> Could not initialize class org.apache.solr.servlet.SolrRequestParsers).
> org.apache.solr.servlet.SolrRequestParsers class is present in the
> solr-core-7.0.0-SNAPSHOT.jar and this jar is present in the WEB-INF\lib
> folder (in solr server) and also included as a dependency jar in the
> pom.xml of the test project.
>
> Here is the full stack trace of the exception:
>
> java.lang.NoClassDefFoundError: Could not initialize class
> org.apache.solr.servlet.SolrRequestParsers
> at
> org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.(EmbeddedSolrServer.java:112)
> at
> org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.(EmbeddedSolrServer.java:70)
> at
> com.gossinteractive.solr.DocPhraseUpdateProcessorTest.createEmbeddedSolrServer(DocPhraseUpdateProcessorTest.java:141)
> at
> com.gossinteractive.solr.DocPhraseUpdateProcessorTest.setUp(DocPhraseUpdateProcessorTest.java:99)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
> at
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
> at
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27)
> at
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
> at
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:73)
> at
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:46)
> at
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
> at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
> at
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> at
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
> at
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
> at
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
> at
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
> at
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
> at
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
> at
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
>
>
> I have debugged this a bit and found that this exception is thrown on the
> following line in EmbeddedServer.class
>
> _parser = new SolrRequestParsers(null);
>
> Also, the coreContainer object has no cores at this point.
>
>
> Wonder if I should update my code from master (it is now about two weeks
> old).
>
> Thanks for any help.
>
> Rohana
>
>
> -Original Message-
> From: Shawn Heisey [mailto:apa...@elyograg.org]
> Sent: 08 April 2016 16:46
> To: solr-user@lucene.apache.org
> Subject: Re: EmbeddedSolr for unit tests in Solr 6
>
> On 4/8/2016 7:51 AM, Rohana Rajapakse wrote:
> > Thanks. I know it exists, but don't know how to use it.
> >
> > I am trying to use EmbeddedSolrServer(Path solrHome, String
> > defaultCoreName)
> >
> > What should be the "solrHome"? Should it be the actual solr home (i.e.
> lucene-solr-master\solr\server\solr) in the solr server, or can it be any
> temporary folder?
> >
> > I create it with:  new EmbeddedSolrServer((new
> File("testdata/solr")).toPath(), "tmpcore");  and get the following
> Exception (I use solr-Solr-7.0.0):
> >
> > org.apache.solr.common.SolrException: Should not have found
> > solr/@persistent . Please upgrade your solr.xml

Problem with AbstractMethodError

2016-04-11 Thread João Gonçalo Guimarães Correia
Hi,

I'm trying to extract PDF data with the SolrNet client. My code is the following:

using (MemoryStream stream = new MemoryStream((byte[])dataReader["file_stream"]))
{
    // ISolrOperations<Document>: "Document" is a placeholder for the mapped document type
    var solr = ServiceLocator.Current.GetInstance<ISolrOperations<Document>>();
    ExtractParameters extract = new ExtractParameters(stream, "doc1", dataReader["nome_original"] + "")
    {
        ExtractOnly = true,
        ExtractFormat = ExtractFormat.Text/*,
        StreamType = "application/pdf"*/
    };
    var response = solr.Extract(extract);
    Debug.WriteLine("\n+++ " + response.Content);
}

However, I'm getting the error you can see below:

java.lang.RuntimeException: java.lang.AbstractMethodError

at 
org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:604)

at 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:473)

at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:225)

at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:183)

at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)

at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)

at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)

at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)

at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)

at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)

at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)

at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)

at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)

at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)

at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)

at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)

at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)

at org.eclipse.jetty.server.Server.handle(Server.java:499)

at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)

at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)

at 
org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)

at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)

at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)

at java.lang.Thread.run(Thread.java:745)

Caused by: java.lang.AbstractMethodError

at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:58)

at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)

at org.apache.solr.core.SolrCore.execute(SolrCore.java:2082)

at 
org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:670)

at 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:458)

... 22 more

Can anyone help me?

Thanks




Re: How to set multivalued false, using SolrJ

2016-04-11 Thread Shawn Heisey
On 4/11/2016 7:40 AM, Charles Sanders wrote:
> Multivalued fields are controlled by the schema. You need to define your 
> field in the schema file as 'not' a multivalue field. Here are a couple of 
> examples of field definitions, one multivalued, the other not. 
>
> <field name="myMultiField" type="string" indexed="true" stored="true" multiValued="true"/> 
> <field name="mySingleField" type="string" indexed="true" stored="true"/> 
>
> If you do not explicitly define your field, then solr will use default 
> definitions, which are probably storing the field as multivalued. 

Also, make sure that the schema version (a parameter within the schema
itself, NOT the Solr version) is high enough.  If this parameter is set
to 1.0 *all* fields are multivalued.  If you do not include the version
parameter at all, it will default to 1.0.  (Devs: should we change this
to default to whatever the current highest version is?  Defaulting to
*ancient* tech seems like a bad thing.)

Here's the comment near the top of the techproducts example schema from
Solr 5.5.0:

  <!-- version="x.y" is Solr's version number for the schema syntax and
       semantics. It should not normally be changed by applications.

       1.0: multiValued attribute did not exist, all fields are multiValued
            by nature
       1.1: multiValued attribute introduced, false by default
       1.2: omitTermFreqAndPositions attribute introduced, true by default
            except for text fields.
       1.3: removed optional field compress feature
       1.4: autoGeneratePhraseQueries attribute introduced to drive QueryParser
            behavior when a single string produces multiple tokens.  Defaults
            to off for version >= 1.4
       1.5: omitNorms defaults to true for primitive field types
            (int, float, boolean, string...)
       1.6: useDocValuesAsStored defaults to true.
  -->

Thanks,
Shawn



Re: Solr Sharding Strategy

2016-04-11 Thread Daniel Collins
I'd also ask about your indexing times: what QTime do you see for indexing
(in both scenarios), and what commit settings are you using (which Toke
already asked)?

Not entirely sure how to read your table, but looking at the indexing side
of things, with 2 shards there is inherently more work to do, so you would
expect indexing latency to increase (we have to index in 1 shard, and then
index in the 2nd shard, so logically it's twice the workload).

Your table suggests you intended 10 updates per second, but you never
managed 25 updates per second with either 1 shard or 2 shards.  The numbers
are odd, though: you managed 13.9 updates per sec on 1 shard, and 21.9
updates per sec on 2 shards.  That suggests to me that in the single-shard
case your searches are throttling your indexing; maybe the resourcing is
favoring searches, so the indexing threads aren't getting a look in...
Whereas in the 2-shard case, it seems clear (as Toke said) that search
isn't really hitting the index much. Not sure where the bottleneck is, but
it's not on the index, which is why your indexing load can get more
requests through.

On 11 April 2016 at 15:36, Toke Eskildsen  wrote:

> On Mon, 2016-04-11 at 11:23 +, Bhaumik Joshi wrote:
> > We are using solr 5.2.0 and we have Index-heavy (100 index updates per
> > sec) and Query-heavy (100 queries per sec) scenario.
>
> > Index stats: 10 million documents and 16 GB index size
>
> > Which sharding strategy is best suited in above scenario?
>
> Sharding reduces query throughput and can improve query latency as well
> as indexing speed. For small indexes, the overhead of sharding is likely
> to worsen query latency. So as always, it depends.
>
> Qualified guess: Don't use multiple shards, but consider using replicas.
>
> > Please share reference resources which states detailed comparison of
> > single shard over multi shard if any.
>
> Sorry, could not find the one I had in mind.
> >
> > Meanwhile we did some tests with SolrMeter (Standalone java tool for
> > stress tests with Solr) for single shard and two shards.
> >
> > Index stats of test solr cloud: 0.7 million documents and 1 GB index
> > size.
> >
> > As observed in test average query time with 2 shards is much higher
> > than single shard.
>
> Makes sense: Your shards are so small that the actual time spent on the
> queries is very low. So relatively, the overhead of distributed (aka
> multi-shard) searching is high, negating any search-gain you got by
> sharding. I would not have expected the performance drop-off to be that
> large (factor 20-60) though.
>
> Your query speed is unusually low for an index of your size, which leads
> me to believe that your indexing is slowing everything down. This is
> often due to too frequent commits and/or too many warm up queries.
>
> There is a bit about it at
> https://wiki.apache.org/solr/SolrPerformanceFactors
>
>
> - Toke Eskildsen, State and University Library, Denmark
>
>
>
>


Re: Solr Sharding Strategy

2016-04-11 Thread Toke Eskildsen
On Mon, 2016-04-11 at 11:23 +, Bhaumik Joshi wrote:
> We are using solr 5.2.0 and we have Index-heavy (100 index updates per
> sec) and Query-heavy (100 queries per sec) scenario.

> Index stats: 10 million documents and 16 GB index size

> Which sharding strategy is best suited in above scenario?

Sharding reduces query throughput and can improve query latency as well
as indexing speed. For small indexes, the overhead of sharding is likely
to worsen query latency. So as always, it depends.

Qualified guess: Don't use multiple shards, but consider using replicas.
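
For example, a replica can be added with the Collections API (collection,
shard and node names below are placeholders):

  http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=mycoll&shard=shard1&node=host2:8983_solr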

> Please share reference resources which states detailed comparison of
> single shard over multi shard if any.

Sorry, could not find the one I had in mind.
> 
> Meanwhile we did some tests with SolrMeter (Standalone java tool for
> stress tests with Solr) for single shard and two shards.
> 
> Index stats of test solr cloud: 0.7 million documents and 1 GB index
> size.
> 
> As observed in test average query time with 2 shards is much higher
> than single shard.

Makes sense: Your shards are so small that the actual time spent on the
queries is very low. So relatively, the overhead of distributed (aka
multi-shard) searching is high, negating any search-gain you got by
sharding. I would not have expected the performance drop-off to be that
large (factor 20-60) though.

Your query speed is unusually low for an index of your size, which leads
me to believe that your indexing is slowing everything down. This is
often due to too frequent commits and/or too many warm up queries.

There is a bit about it at 
https://wiki.apache.org/solr/SolrPerformanceFactors
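
As a reference point, a solrconfig.xml sketch with less aggressive commit
settings (the intervals are illustrative, not recommendations):

  <autoCommit>
    <maxTime>60000</maxTime>           <!-- hard commit every 60s, for durability -->
    <openSearcher>false</openSearcher> <!-- don't open a new searcher on hard commit -->
  </autoCommit>
  <autoSoftCommit>
    <maxTime>10000</maxTime>           <!-- visibility: new searcher at most every 10s -->
  </autoSoftCommit>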


- Toke Eskildsen, State and University Library, Denmark






Re: Soft commit does not affecting query performance

2016-04-11 Thread billnbell
Why do you think it would?

Bill Bell
Sent from mobile


> On Apr 11, 2016, at 7:48 AM, Bhaumik Joshi  wrote:
> 
> Hi All,
> 
> We are doing query performance test with different soft commit intervals. In 
> the test with 1sec of soft commit interval and 1min of soft commit interval 
> we didn't notice any improvement in query timings.
> 
> 
> 
> We did test with SolrMeter (Standalone java tool for stress tests with Solr) 
> for 1sec soft commit and 1min soft commit.
> 
> Index stats of test solr cloud: 0.7 million documents and 1 GB index size.
> 
> Solr cloud has 2 shard and each shard has one replica.
> 
> 
> 
> Please find below detailed test readings: (all timings are in milliseconds)
> 
> 
> Soft commit - 1sec
>
> Queries/sec  Updates/sec  Total Queries  Total Q time  Avg Q Time  Total Client time  Avg Client time
> 1            5            100            44340         443         48834              488
> 5            5            101            128914        1276        143239             1418
> 10           5            104            295325        2839        330931             3182
> 25           5            102            675319        6620        793874             7783
>
> Soft commit - 1min
>
> Queries/sec  Updates/sec  Total Queries  Total Q time  Avg Q Time  Total Client time  Avg Client time
> 1            5            100            44292         442         48569              485
> 5            5            105            131389        1251        147174             1401
> 10           5            102            299518        2936        337748             3311
> 25           5            108            742639        6876        865222             8011
>
> As theory suggests, soft commit frequency should affect query performance,
> but in my case it doesn't. Can you shed some light on this?
> Also please suggest if I am missing something here.
> 
> Regards,
> Bhaumik Joshi
> 


Soft commit does not affecting query performance

2016-04-11 Thread Bhaumik Joshi
Hi All,

We are doing query performance tests with different soft commit intervals. In
the tests with a 1sec soft commit interval and a 1min soft commit interval we
didn't notice any improvement in query timings.



We did test with SolrMeter (Standalone java tool for stress tests with Solr) 
for 1sec soft commit and 1min soft commit.

Index stats of test solr cloud: 0.7 million documents and 1 GB index size.

Solr cloud has 2 shard and each shard has one replica.



Please find below detailed test readings: (all timings are in milliseconds)


Soft commit - 1sec

Queries/sec  Updates/sec  Total Queries  Total Q time  Avg Q Time  Total Client time  Avg Client time
1            5            100            44340         443         48834              488
5            5            101            128914        1276        143239             1418
10           5            104            295325        2839        330931             3182
25           5            102            675319        6620        793874             7783

Soft commit - 1min

Queries/sec  Updates/sec  Total Queries  Total Q time  Avg Q Time  Total Client time  Avg Client time
1            5            100            44292         442         48569              485
5            5            105            131389        1251        147174             1401
10           5            102            299518        2936        337748             3311
25           5            108            742639        6876        865222             8011

As theory suggests, soft commit frequency should affect query performance, but
in my case it doesn't. Can you shed some light on this?
Also please suggest if I am missing something here.

Regards,
Bhaumik Joshi



Re: How to set multivalued false, using SolrJ

2016-04-11 Thread Charles Sanders
Hello, 
Multivalued fields are controlled by the schema. You need to define your field
in the schema file as 'not' a multivalued field. Here are a couple of examples
of field definitions, one multivalued, the other not:

<field name="myMultiField" type="string" indexed="true" stored="true" multiValued="true"/> 
<field name="mySingleField" type="string" indexed="true" stored="true"/> 

If you do not explicitly define your field, then solr will use default 
definitions, which are probably storing the field as multivalued. 
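
Since the question was about SolrJ: with a managed schema the field can also be
defined programmatically through the Schema API. A minimal sketch (the field
name and core URL are placeholders):

  SolrClient client = new HttpSolrClient("http://localhost:8983/solr/mycore");
  Map<String, Object> attrs = new LinkedHashMap<>();
  attrs.put("name", "mySingleField");
  attrs.put("type", "string");
  attrs.put("stored", true);
  attrs.put("multiValued", false);   // explicitly single-valued
  new SchemaRequest.AddField(attrs).process(client);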


Charles 





- Original Message -

From: "巩学超"  
To: solr-user@lucene.apache.org 
Sent: Monday, April 11, 2016 7:58:35 AM 
Subject: How to set multivalued false, using SolrJ 

Hello, 
Can you do me a favour? I use SolrJ to index, but all my fields end up
multivalued. How can I set my fields to not be multivalued? Can you tell me how
to set this up using SolrJ?




Solr Sharding Strategy

2016-04-11 Thread Bhaumik Joshi
Hi,



We are using Solr 5.2.0 and we have an index-heavy (100 index updates per sec)
and query-heavy (100 queries per sec) scenario.

Index stats: 10 million documents and 16 GB index size



Which sharding strategy is best suited in above scenario?

Please share reference resources, if any, that give a detailed comparison of a
single shard versus multiple shards.



Meanwhile we did some tests with SolrMeter (Standalone java tool for stress 
tests with Solr) for single shard and two shards.

Index stats of test solr cloud: 0.7 million documents and 1 GB index size.

As observed in the test, the average query time with 2 shards is much higher
than with a single shard.

Please find below detailed readings:
2 Shards

                              Run 1 (10 q/s)   Run 2 (25 q/s)
Intended queries per sec      10               25
Actual queries per min        198              168
Actual queries per sec        3.3              2.8
Intended updates per sec      10               25
Actual updates per min        600              1314
Actual updates per sec        10               21.9
Total Queries                 302              301
Total Q time (ms)             662176           2019735
Avg Q Time (ms)               2192             6710
Avg Q Time (sec)              2.192            6.71
Total Client time (ms)        756603           2370018
Avg Client time (ms)          2505             7873

1 Shard

                              Run 1 (10 q/s)   Run 2 (25 q/s)
Intended queries per sec      10               25
Actual queries per min        582              1026
Actual queries per sec        9.7              17.1
Intended updates per sec      10               25
Actual updates per min        618              834
Actual updates per sec        10.3             13.9
Total Queries                 302              306
Total Q time (ms)             25081            33366
Avg Q Time (ms)               83               109
Avg Q Time (sec)              0.083            0.109
Total Client time (ms)        55612            259392
Avg Client time (ms)          184              847


Note: Query returns 250 rows and matches 57880 documents




Thanks & Regards,


Bhaumik Joshi
Developer, Asite


Re: Facet heatmaps: cluster coordinates based on average position of docs

2016-04-11 Thread Anton K.
Anyone?

Or how can I contact the facet heatmaps creator?

2016-04-07 18:42 GMT+03:00 Anton K. :

> I am working with a new Solr feature: facet heatmaps. It works great; I
> create clusters on my map with counts. When a user clicks on a cluster I
> zoom in on that area and might show more clusters or documents (based on
> the current zoom level).
>
> But all my cluster icons (I use a round one, see the screenshot below) are
> placed straight in the center of the cluster's rectangle:
>
> https://dl.dropboxusercontent.com/u/1999619/images/map_grid3.png
>
> Some clusters can end up in the sea and so on. It also feels unnatural in
> my case to have icons placed so orderly on the world map.
>
> I want to place cluster icons at the average coordinates of all my docs
> inside the cluster. Is there any way to achieve this? I am trying to use
> the stats component with the facet heatmap, but it isn't implemented yet.
>


How to set multivalued false, using SolrJ

2016-04-11 Thread 巩学超
Hello,
 Can you do me a favour? I use SolrJ to index, but all my fields end up
multivalued. How can I set my fields to not be multivalued? Can you tell me how
to set this up using SolrJ?




Re: Solr JSON facet range out of memory exception

2016-04-11 Thread Toke Eskildsen
On Mon, 2016-04-11 at 13:31 +0430, Ali Nazemian wrote:
> http://10.102.1.5:8983/solr/edgeIndex/select?q=*%3A*&fq=stat_owner_id:122952&rows=0&wt=json&indent=true&facet=true&json.facet={result:{
>   type: range,
>   field: stat_date,
>   start: 146027158386,
>   end: 1460271583864,
>   gap: 1
> }}

(1460271583864-146027158386)/1 = 131424442 (132 million) buckets.

I do not know the internal JSON code well enough, but if it creates an
object for each of the 132 million buckets, I can understand why it
OOMs. Even if it doesn't, you could easily be looking at 50K buckets,
which does seem excessive.
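
With epoch-millis values, sizing the gap to the buckets you actually want keeps
the count sane; for example, one day of one-hour buckets (start/end here are
hypothetical day-aligned values):

  json.facet={result:{
    type: range,
    field: stat_date,
    start: 1460246400000,   /* 2016-04-10T00:00:00Z */
    end:   1460332800000,   /* 2016-04-11T00:00:00Z */
    gap:   3600000          /* one hour -> 24 buckets */
  }}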
 
- Toke Eskildsen, State and University Library, Denmark




Re: Solr JSON facet range out of memory exception

2016-04-11 Thread Ali Nazemian
Dear Yonik,
Hi,

The entire index has 50k documents, not just the faceted subset.  It is just a
test case right now! I used the JSON facet API; here is my query after encoding:

http://10.102.1.5:8983/solr/edgeIndex/select?q=*%3A*&fq=stat_owner_id:122952&rows=0&wt=json&indent=true&facet=true&json.facet={result:{
  type: range,
  field: stat_date,
  start: 146027158386,
  end: 1460271583864,
  gap: 1
}}

Sincerely,


On Sun, Apr 10, 2016 at 4:56 PM, Yonik Seeley  wrote:

> On Sun, Apr 10, 2016 at 3:47 AM, Ali Nazemian 
> wrote:
> > Dear all Solr users/developers,
> > Hi,
> > I am going to use a Solr JSON facet range on a date field which is stored
> > as long millis. Unfortunately I get a Java heap space exception no matter
> > how much memory is assigned to the Solr Java heap! I already tested that
> > with a 2g heap for a Solr core with 50k documents!!
>
> You mean the entire index is 50K documents? Or do you mean the subset
> of documents to be faceted?
> If you're getting an OOM with the former (with a 2G heap), it sounds
> like you've hit some sort of bug.
>
> What does your faceting command look like?
>
> -Yonik
>



-- 
A.Nazemian


RE: EmbeddedSolr for unit tests in Solr 6

2016-04-11 Thread Rohana Rajapakse
Thanks Shawn,

I am now pointing solrHomeFolder to  lucene-solr-master\solr\server\solr  which 
contains the correct solr.xml file.
Tried the following two ways to create an EmbeddedSolrServer:


1. CoreContainer corecon = 
CoreContainer.createAndLoad(Paths.get(solrHomeFolder));
   corecon.load();
   SolrClient svr = new EmbeddedSolrServer(corecon, corename);


2.   SolrClient svr = new EmbeddedSolrServer(Paths.get(solrHomeFolder), 
corename);


They both throw the same exception (java.lang.NoClassDefFoundError: Could not 
initialize class org.apache.solr.servlet.SolrRequestParsers).
org.apache.solr.servlet.SolrRequestParsers class is present in the 
solr-core-7.0.0-SNAPSHOT.jar and this jar is present in the WEB-INF\lib folder 
(in solr server) and also included as a dependency jar in the pom.xml of the 
test project.

Here is the full stack trace of the exception:

java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.solr.servlet.SolrRequestParsers
at 
org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.<init>(EmbeddedSolrServer.java:112)
at 
org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.<init>(EmbeddedSolrServer.java:70)
at 
com.gossinteractive.solr.DocPhraseUpdateProcessorTest.createEmbeddedSolrServer(DocPhraseUpdateProcessorTest.java:141)
at 
com.gossinteractive.solr.DocPhraseUpdateProcessorTest.setUp(DocPhraseUpdateProcessorTest.java:99)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:73)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:46)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
at 
org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
at 
org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)


I have debugged this a bit and found that this exception is thrown on the
following line in the EmbeddedSolrServer class:

_parser = new SolrRequestParsers(null);

Also, the coreContainer object has no cores at this point.


Wonder if I should update my code from master (it is now about two weeks old).

Thanks for any help.

Rohana


-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org] 
Sent: 08 April 2016 16:46
To: solr-user@lucene.apache.org
Subject: Re: EmbeddedSolr for unit tests in Solr 6

On 4/8/2016 7:51 AM, Rohana Rajapakse wrote:
> Thanks. I know it exists, but don't know how to use it.
>
> I am trying to use EmbeddedSolrServer(Path solrHome, String 
> defaultCoreName)
>
> What should be the "solrHome"? Should it be the actual solr home (i.e. 
> lucene-solr-master\solr\server\solr) in the solr server, or can it be any 
> temporary folder?
>
> I create it with:  new EmbeddedSolrServer((new 
> File("testdata/solr")).toPath(), "tmpcore");  and get the following Exception 
> (I use solr-Solr-7.0.0):
>
> org.apache.solr.common.SolrException: Should not have found 
> solr/@persistent . Please upgrade your solr.xml: 
> https://cwiki.apache.org/confluence/display/solr/Format+of+solr.xml
>   at 
> org.apache.solr.core.SolrXmlConfig.failIfFound(SolrXmlConfig.java:167)
>   at 
> org.apache.solr.core.SolrXmlConfig.checkForIllegalConfig(SolrXmlConfig.java:149)
>   at org.apache.solr.core.SolrXmlConfig.fromConfig(SolrXmlConfig.java:61)
>   at 
> org.apache

Re: Set Config API user properties with SolrJ

2016-04-11 Thread Georg Sorst
The issue is here:

org.apache.solr.handler.SolrConfigHandler.handleRequestBody()

This method will check the 'httpMethod' of the request. The
set-user-property call will only be evaluated if the method is POST.
Apparently, for non-HTTP requests this will never be true.
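
For comparison, over HTTP the same command is a plain POST to the /config
endpoint (the core name here is a placeholder):

  curl -X POST -H 'Content-type: application/json' \
       http://localhost:8983/solr/test/config \
       -d '{"set-user-property": {"key": "value"}}'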

I'll gladly write an issue / testcase / patch if someone can give me a
little help.

Georg Sorst  schrieb am So., 10. Apr. 2016 um
14:36 Uhr:

> Addendum: Apparently the code works fine with HttpSolrClient, but not with
> EmbeddedSolrServer (used in our tests).The most recent version I tested
> this was 5.5.0
>
> Georg Sorst  schrieb am So., 10. Apr. 2016 um
> 01:49 Uhr:
>
>> Hi,
>>
>> how can you set Config API values from SolrJ? Does anyone have an example
>> for this?
>>
>> Here's what I'm currently trying:
>>
>> /* Build the structure for the request */
>> Map<String, Object> parameters = new HashMap<String, Object>() {{
>>   put("key", "value");
>> }};
>> final NamedList<Object> requestParameters = new NamedList<>();
>> requestParameters.add("set-user-property", parameters);
>>
>> /* Build the JSON */
>> CharArr json = new CharArr();
>> new SchemaRequestJSONWriter(json).write(requestParameters);
>> ContentStreamBase.StringStream stringStream =
>>     new ContentStreamBase.StringStream(json.toString());
>> Collection<ContentStream> contentStreams = Collections.singletonList(stringStream);
>>
>> /* Send the request */
>> GenericSolrRequest request =
>>     new GenericSolrRequest(SolrRequest.METHOD.POST, "/config/overlay", null);
>> request.setContentStreams(contentStreams);
>> SimpleSolrResponse response =
>>     request.process(new HttpSolrClient("http://localhost:8983/solr/test"));
>>
>> The JSON is looking good, but it's doing... nothing. The response just
>> contains the default config-overlay contents (znodeVersion). Any idea why?
>>
>> Thanks!
>> Georg
-- 
Georg M. Sorst, CTO
FINDOLOGIC GmbH