Early Access Release #5 for Solr 4.x Deep Dive is now available for download on Lulu.com

2013-08-16 Thread Jack Krupansky
Okay, it's hot off the e-presses: my updated book Solr 4.x Deep Dive, Early 
Access Release #5 is now available for purchase and download as an e-book 
for $9.99 on Lulu.com at:


http://www.lulu.com/shop/jack-krupansky/solr-4x-deep-dive-early-access-release-1/ebook/product-21120181.html

(That link says "release-1", but it apparently correctly redirects to EAR 
#5.)


Summary of changes:

* Coverage of Real-time Get component

* Coverage of Terms Component

* Coverage of Term Vectors Component

* Coverage of Highlighting Component

* Round a decimal number. I added a JavaScript script for the 
StatelessScriptUpdate processor which takes an input field, a number of 
decimal digits (default is zero), an output field (defaults to replacing the 
input field), and an optional flag for whether the rounded decimal number 
should have its type changed to integer (default is to remain a decimal 
float). Handles multivalued fields.


* Append a field onto another field. This is just a use of the Clone and 
Concat update processors, using various delimiters. Also an example that 
uses the Ignore Field update processor to remove the source field after it 
has been appended.


* Map country code to continent code. This JavaScript script for the 
StatelessScriptUpdate processor can do the mapping in-place or output to 
another field. Option for case of output string (default is lower case). 
Handles multivalued fields. Unmappable input values are preserved as-is.


Total of 281 pages of additional content since EAR#4.

Please feel free to email or comment on my blog 
(http://basetechnology.blogspot.com/) for any questions or issues related to 
the book.


Thanks!

-- Jack Krupansky



RE: external zookeeper with SolrCloud

2013-08-16 Thread Boogie Shafer
sorry, it looks like you can get the follower/leader status for each node using 
just the "mntr" command; note the "zk_server_state" value below



echo mntr | nc  2181

zk_version  3.4.5-1392090, built on 09/30/2012 17:52 GMT
zk_avg_latency  0
zk_max_latency  45
zk_min_latency  0
zk_packets_received 1132824
zk_packets_sent 1132875
zk_num_alive_connections    4
zk_outstanding_requests 0
zk_server_state follower
zk_znode_count  218
zk_watch_count  12
zk_ephemerals_count 85
zk_approximate_data_size    546670
zk_open_file_descriptor_count   35
zk_max_file_descriptor_count    4096
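
if you want to automate this, here is a rough sketch (not from the original
thread) of driving the four-letter-word checks from Java; the ensemble host
names and the port are placeholders for your own:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.Socket;

public class ZkQuorumCheck {

    // send a ZooKeeper four-letter command (e.g. "mntr" or "srvr") and
    // return the raw multi-line response
    static String fourLetter(String host, int port, String cmd) throws Exception {
        Socket s = new Socket(host, port);
        try {
            s.getOutputStream().write(cmd.getBytes("US-ASCII"));
            s.getOutputStream().flush();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(s.getInputStream(), "US-ASCII"));
            StringBuilder sb = new StringBuilder();
            for (String line; (line = in.readLine()) != null; ) {
                sb.append(line).append('\n');
            }
            return sb.toString();
        } finally {
            s.close();
        }
    }

    public static void main(String[] args) throws Exception {
        String[] hosts = {"zk1", "zk2", "zk3"}; // placeholder ensemble members
        for (String host : hosts) {
            for (String line : fourLetter(host, 2181, "mntr").split("\n")) {
                // zk_server_state says follower or leader;
                // zk_synced_followers is only exposed by the leader
                if (line.startsWith("zk_server_state")
                        || line.startsWith("zk_synced_followers")) {
                    System.out.println(host + ": " + line);
                }
            }
        }
    }
}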



From: Boogie Shafer 
Sent: Friday, August 16, 2013 14:26
To: solr-user@lucene.apache.org
Subject: RE: external zookeeper with SolrCloud

the mntr command can give that info if you hit the leader of the zk quorum

e.g. in the example for that command on the link you can see that it's a 5 
member zk ensemble (zk_followers 4) and that all followers are synced 
(zk_synced_followers 4)

you would obviously need to query for the zk leader before you could get that 
data. the srvr command can tell you the status of a given zk (leader or 
follower)


$ echo mntr | nc localhost 2185

zk_version  3.4.0
zk_avg_latency  0
zk_max_latency  0
zk_min_latency  0
zk_packets_received 70
zk_packets_sent 69
zk_outstanding_requests 0
zk_server_state leader
zk_znode_count   4
zk_watch_count  0
zk_ephemerals_count 0
zk_approximate_data_size    27
zk_followers    4   - only exposed by the Leader
zk_synced_followers 4   - only exposed by the Leader
zk_pending_syncs    0   - only exposed by the Leader
zk_open_file_descriptor_count   23   - only available on Unix platforms
zk_max_file_descriptor_count    1024   - only available on Unix platforms


--- some examples of using srvr command

echo srvr | nc  2185
Zookeeper version: 3.4.5-1392090, built on 09/30/2012 17:52 GMT
Latency min/avg/max: 0/0/45
Received: 1132673
Sent: 1132724
Connections: 4
Outstanding: 0
Zxid: 0x600172e5a
Mode: follower
Node count: 218

echo srvr | nc  2181
Zookeeper version: 3.4.5-1392090, built on 09/30/2012 17:52 GMT
Latency min/avg/max: 0/0/880
Received: 21976696
Sent: 21988742
Connections: 17
Outstanding: 0
Zxid: 0x600172e66
Mode: leader
Node count: 218



From: Shawn Heisey 
Sent: Friday, August 16, 2013 14:13
To: solr-user@lucene.apache.org
Subject: Re: external zookeeper with SolrCloud

On 8/16/2013 11:58 AM, Joshi, Shital wrote:
> Is there a way to find if we have a zookeeper quorum? We can ping individual 
> zookeepers and see if they are running, but it would be nice to ping/query one 
> URL and check if we have a quorum.

I filed an issue on this:

https://issues.apache.org/jira/browse/SOLR-5169

Thanks,
Shawn





RE: external zookeeper with SolrCloud

2013-08-16 Thread Boogie Shafer
the mntr command can give that info if you hit the leader of the zk quorum

e.g. in the example for that command on the link you can see that it's a 5 
member zk ensemble (zk_followers 4) and that all followers are synced 
(zk_synced_followers 4)

you would obviously need to query for the zk leader before you could get that 
data. the srvr command can tell you the status of a given zk (leader or 
follower)


$ echo mntr | nc localhost 2185

zk_version  3.4.0
zk_avg_latency  0
zk_max_latency  0
zk_min_latency  0
zk_packets_received 70
zk_packets_sent 69
zk_outstanding_requests 0
zk_server_state leader
zk_znode_count   4
zk_watch_count  0
zk_ephemerals_count 0
zk_approximate_data_size    27
zk_followers    4   - only exposed by the Leader
zk_synced_followers 4   - only exposed by the Leader
zk_pending_syncs    0   - only exposed by the Leader
zk_open_file_descriptor_count   23   - only available on Unix platforms
zk_max_file_descriptor_count    1024   - only available on Unix platforms


--- some examples of using srvr command

echo srvr | nc  2185
Zookeeper version: 3.4.5-1392090, built on 09/30/2012 17:52 GMT
Latency min/avg/max: 0/0/45
Received: 1132673
Sent: 1132724
Connections: 4
Outstanding: 0
Zxid: 0x600172e5a
Mode: follower
Node count: 218

echo srvr | nc  2181
Zookeeper version: 3.4.5-1392090, built on 09/30/2012 17:52 GMT
Latency min/avg/max: 0/0/880
Received: 21976696
Sent: 21988742
Connections: 17
Outstanding: 0
Zxid: 0x600172e66
Mode: leader
Node count: 218



From: Shawn Heisey 
Sent: Friday, August 16, 2013 14:13
To: solr-user@lucene.apache.org
Subject: Re: external zookeeper with SolrCloud

On 8/16/2013 11:58 AM, Joshi, Shital wrote:
> Is there a way to find if we have a zookeeper quorum? We can ping individual 
> zookeepers and see if they are running, but it would be nice to ping/query one 
> URL and check if we have a quorum.

I filed an issue on this:

https://issues.apache.org/jira/browse/SOLR-5169

Thanks,
Shawn




Re: external zookeeper with SolrCloud

2013-08-16 Thread Shawn Heisey

On 8/16/2013 11:58 AM, Joshi, Shital wrote:

Is there a way to find if we have a zookeeper quorum? We can ping individual 
zookeepers and see if they are running, but it would be nice to ping/query one URL 
and check if we have a quorum.


I filed an issue on this:

https://issues.apache.org/jira/browse/SOLR-5169

Thanks,
Shawn



RE: external zookeeper with SolrCloud

2013-08-16 Thread Boogie Shafer
good stuff

here is a more recent version of the same resource as they have added a few new 
commands in the recent releases of zookeeper

http://zookeeper.apache.org/doc/r3.4.5/zookeeperAdmin.html#sc_zkCommands



From: Walter Underwood 
Sent: Friday, August 16, 2013 12:48
To: solr-user@lucene.apache.org
Subject: Re: external zookeeper with SolrCloud

You might be able to get info from the Zookeeper "four letter words".

http://zookeeper.apache.org/doc/r3.1.2/zookeeperAdmin.html#sc_zkCommands

Here is a command to get the status for one of our Zookeeper hosts:

$ echo stat | nc zk-web02.test3.cloud.cheggnet.com 2181

wunder

On Aug 16, 2013, at 12:01 PM, Shawn Heisey wrote:

> On 8/16/2013 11:58 AM, Joshi, Shital wrote:
>> Is there a way to find if we have a zookeeper quorum? We can ping individual 
>> zookeepers and see if they are running, but it would be nice to ping/query one 
>> URL and check if we have a quorum.
>
> This is a really good question, to which I do not have an answer.  If your 
> client code is Java, you could probably get this information out of 
> CloudSolrServer, with something like this:
>
> server.getZkStateReader().getZkClient().getSolrZooKeeper().getState();
>
> If the state is CONNECTED everything's probably fine.
>
> If anyone who's dealt with Zookeeper happens to know whether this would work, 
> I'd appreciate knowing.  For Solr, it is probably a good idea to expose 
> something via an admin handler with the current zookeeper quorum state.
>
> Thanks,
> Shawn
>



Re: Does MMap work on the Virtual Box?

2013-08-16 Thread Paul Masurel
Hi,

You can MMAP a size bigger than your memory without any problem.
Parts of the file that you don't access often will simply not be loaded
into RAM.

If you are short on memory, consider deactivating Host I/O Caching, as
it will only be redundant with your guest OS page cache.

Regards,

Paul



On Fri, Aug 16, 2013 at 10:26 PM, Shawn Heisey  wrote:

> On 8/16/2013 1:02 PM, vibhoreng04 wrote:
>
>> I have a big index of 256 GB. Right now it is on one physical box with 256 GB
>> of RAM. I am planning to virtualize it to 32 GB RAM * 8 boxes. Will MMap
>> still work in this configuration?
>>
>
> As far as MMap goes, if the operating system you are running is 64-bit,
> your Java is 64-bit, and the OS supports MMap (which almost every operating
> system does, including Linux and Windows), then you'd be fine.
>
> If you have the option of running Solr on bare metal vs. running on the
> same hardware in a virtualized environment, you should always choose the
> bare metal.
>
> I had a Solr installation with a sharded index.  When I first set it up, I
> used virtual machines, one Solr instance and shard per VM.  Half the VMs
> were running on one physical box, half on another.  For redundancy, I had a
> second pair of physical servers doing the same thing, each with VMs
> representing half the index.
>
> That same setup now runs on bare metal -- the exact same physical
> machines, in fact.  The index arrangement is nearly the same as before,
> except it uses multicore Solr, one instance per machine.
>
> Removing the virtualization layer helped performance quite a bit. Average
> QTimes went way down and it took less time to do a full index rebuild.
>
> Thanks,
> Shawn
>
>


-- 
__

 Masurel Paul
 e-mail: paul.masu...@gmail.com


Re: Does MMap work on the Virtual Box?

2013-08-16 Thread Shawn Heisey

On 8/16/2013 1:02 PM, vibhoreng04 wrote:

I have a big index of 256 GB. Right now it is on one physical box with 256 GB
of RAM. I am planning to virtualize it to 32 GB RAM * 8 boxes. Will MMap
still work in this configuration?


As far as MMap goes, if the operating system you are running is 64-bit, 
your Java is 64-bit, and the OS supports MMap (which almost every 
operating system does, including Linux and Windows), then you'd be fine.


If you have the option of running Solr on bare metal vs. running on the 
same hardware in a virtualized environment, you should always choose the 
bare metal.


I had a Solr installation with a sharded index.  When I first set it up, 
I used virtual machines, one Solr instance and shard per VM.  Half the 
VMs were running on one physical box, half on another.  For redundancy, 
I had a second pair of physical servers doing the same thing, each with 
VMs representing half the index.


That same setup now runs on bare metal -- the exact same physical 
machines, in fact.  The index arrangement is nearly the same as before, 
except it uses multicore Solr, one instance per machine.


Removing the virtualization layer helped performance quite a bit. 
Average QTimes went way down and it took less time to do a full index 
rebuild.


Thanks,
Shawn



More on topic of Meta-search/Federated Search with Solr

2013-08-16 Thread Dan Davis
I've thought about it, and I have no time to really do a meta-search during
evaluation.  What I need to do is to create a single core that contains
both of my data sets, and then describe the architecture that would be
required to do blended results, with liberal estimates.

From the perspective of evaluation, I need to understand whether any of the
solutions to better ranking in the absence of global IDF have been
explored. I suspect that one could retrieve a much larger than N set of
results from a set of shards, then re-score in some way that doesn't require
IDF, e.g. storing both results in the same priority queue and *re-scoring*
before *re-ranking*.

The other way to do this would be to have a custom SearchHandler that works
differently - it performs the query, retrieves all results deemed relevant by
another engine, adds them to the Lucene index, and then performs the query
again in the standard way.   This would be quite slow, but perhaps useful
as a way to evaluate my method.

I still welcome any suggestions on how such a SearchHandler could be
implemented.


Re: external zookeeper with SolrCloud

2013-08-16 Thread Walter Underwood
You might be able to get info from the Zookeeper "four letter words".

http://zookeeper.apache.org/doc/r3.1.2/zookeeperAdmin.html#sc_zkCommands

Here is a command to get the status for one of our Zookeeper hosts:

$ echo stat | nc zk-web02.test3.cloud.cheggnet.com 2181

wunder

On Aug 16, 2013, at 12:01 PM, Shawn Heisey wrote:

> On 8/16/2013 11:58 AM, Joshi, Shital wrote:
>> Is there a way to find if we have a zookeeper quorum? We can ping individual 
>> zookeepers and see if they are running, but it would be nice to ping/query one 
>> URL and check if we have a quorum.
> 
> This is a really good question, to which I do not have an answer.  If your 
> client code is Java, you could probably get this information out of 
> CloudSolrServer, with something like this:
> 
> server.getZkStateReader().getZkClient().getSolrZooKeeper().getState();
> 
> If the state is CONNECTED everything's probably fine.
> 
> If anyone who's dealt with Zookeeper happens to know whether this would work, 
> I'd appreciate knowing.  For Solr, it is probably a good idea to expose 
> something via an admin handler with the current zookeeper quorum state.
> 
> Thanks,
> Shawn
> 



Re: Wrong leader election leads to shard removal

2013-08-16 Thread Erick Erickson
bq: why does it replicate all the index instead of copying just the
newer formed segments

because there's no guarantee that the segments are identical on the
nodes that make up a shard. The simplest way to conceptualize this
is to consider the autocommit settings on the servers. Let's say
the hard commits (which close the current segment and open a new
one) are all set to 1 minute. The fact that the servers are starting
at different times means that the segments on one node will close at
different times than on another node.

And that doesn't even consider the complicated cases of possibly
having different segments merged depending on the start/stop
pattern on one of the nodes.

Best,
Erick


On Fri, Aug 16, 2013 at 5:25 AM, Ido Kissos  wrote:

> Yes, I have erased the tlog in replica 2, and it appears that the first
> replica's tlog was corrupted because of an ungraceful servlet shutdown.
> There was no log for it, unfortunately; the zookeeper log didn't record
> anything about this either. Is there a place I could check in zookeeper to
> see what exactly happened during this election?
>
> Partly connected - about the transient disk space that needs to be free for
> replication after a sync failure - why does it replicate all the index
> instead of copying just the newly formed segments? That would require much
> less space than a full copy, wouldn't it?
> Why not make the 100-doc tlog sync threshold configurable?
>


Multiple word synonym is not found because of an extra token between words

2013-08-16 Thread Jean-Marc Desprez
Hi,

Let's say I have this synonyms entry:
b c => ok

My configuration (index time):
1. WhitespaceTokenizerFactory
2. WordDelimiterFilterFactory with catenateWords="0"
3. SynonymFilterFactory

The input "a/b c" produces (one line per tokenizer/filter):
0:"a/b", 1:"c"
0:"a", 1:"b", 2:"c"
0:"a", 1:"ok"

So everything is ok. Now if I set catenateWords to "1", the same input
produces:
0:"a/b", 1:"c"
0:"a", 1:"b", 1:"ab", 2:"c"
0:"a", 1:"b", 1:"ab", 2:"c"

The synonym filter doesn't match the entry because of the extra token "ab"
between "b" and "c".
To my mind the synonym should be triggered when a token "b" and a token "c"
are separated by one position (which is still the case in the second
example).

Is there any way to make the second example work?

Jean-Marc


Does MMap work on the Virtual Box?

2013-08-16 Thread vibhoreng04
Hi All,

I have a big index of 256 GB. Right now it is on one physical box with 256 GB
of RAM. I am planning to virtualize it to 32 GB RAM * 8 boxes. Will MMap
still work in this configuration?

Vibhor Jaiswal



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Does-MMap-works-on-the-Virtual-Box-tp4085154.html


Re: external zookeeper with SolrCloud

2013-08-16 Thread Shawn Heisey

On 8/16/2013 11:58 AM, Joshi, Shital wrote:

Is there a way to find if we have a zookeeper quorum? We can ping individual 
zookeepers and see if they are running, but it would be nice to ping/query one URL 
and check if we have a quorum.


This is a really good question, to which I do not have an answer.  If 
your client code is Java, you could probably get this information out of 
CloudSolrServer, with something like this:


server.getZkStateReader().getZkClient().getSolrZooKeeper().getState();

If the state is CONNECTED everything's probably fine.

If anyone who's dealt with Zookeeper happens to know whether this would 
work, I'd appreciate knowing.  For Solr, it is probably a good idea to 
expose something via an admin handler with the current zookeeper quorum 
state.


Thanks,
Shawn
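
A minimal SolrJ sketch of the check above, assuming SolrJ 4.x and a
CloudSolrServer that has already connected to the cluster; whether CONNECTED
reliably reflects quorum loss is exactly the open question here:

import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.zookeeper.ZooKeeper;

public class QuorumProbe {
    // returns true if the client's ZooKeeper session looks healthy;
    // call only after the server has connected (e.g. after a first request)
    public static boolean looksConnected(CloudSolrServer server) {
        ZooKeeper.States state = server.getZkStateReader()
                                       .getZkClient()
                                       .getSolrZooKeeper()
                                       .getState();
        return state == ZooKeeper.States.CONNECTED;
    }
}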



Re: Indexing an XML file in Apache Solr

2013-08-16 Thread Chris Hostetter
: I am very new to Solr. I am looking to index an xml file and search its
: contents. Its structure resembles something like this
...
: Is it essential to use the DIH to import this data into Solr? Isn't there
: any simpler way to accomplish the task? Can it be done through SolrJ as I am

Ignore for a minute that your data happens to be in an XML file

 * you have some structured data
 * you want to put it in solr
 * you want to search it

In order to do these things, you have to understand (for yourself, but if 
you want help from others you have to be able to explain it to us as well) 
what that structure means, and how you want to be able to search it.

If you are familiar with relational databases, ask yourself: if I were 
putting my data into a table, what would my rows be? what would my 
columns be? what data types would I use for each column? what pieces of 
my data would I put into each column/row? 

You have to ask yourself the same types of questions when you use Solr to 
decide what you want your schema.xml to look like, and what you want to 
model as "documents" -- and depending on your answers, then you can decide 
how to index the data.

Do you have to use DIH to index an XML file?  Not at all.  

You do have to use *something* to pull the pieces of data you want out of 
your XML file (or out of your CSV file, or out of your relational 
database, etc...) to model them as "Documents" containing "Fields" that 
you can put into Solr.  You might find DIH useful for that, or you might 
also find the ExtractingRequestHandler useful for that, or you might just 
want to implement your own bit of code that pulls what you want out of 
your XML files and sends them to Solr as SolrInputDocuments (using SolrJ), 
or you might want to write a bit of python/ruby/perl/lua/haskell code that 
does the same thing and sends it to Solr as xml or json using the format 
Solr expects for indexing commands.

that's entirely up to you.


-Hoss
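
For illustration, a sketch of the "own bit of code" option using SolrJ; the
field names are hypothetical and would have to match your schema.xml, and
the values would come from however you choose to parse your XML (DOM, StAX,
etc.):

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class XmlIndexer {
    public static void main(String[] args) throws Exception {
        SolrServer solr = new HttpSolrServer("http://localhost:8983/solr");

        SolrInputDocument doc = new SolrInputDocument();
        // hypothetical fields; fill them from your own XML parsing
        doc.addField("id", "reaction-1");
        doc.addField("summation", "This event has been computationally inferred ...");
        doc.addField("species", "Saccharomyces cerevisiae");

        solr.add(doc);
        solr.commit();
    }
}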


Re: Where is the webapps directory of servlet container?

2013-08-16 Thread Kamaljeet Kaur
On Fri, Aug 16, 2013 at 11:50 PM, Brendan Grainger [via Lucene]
 wrote:
> Have you worked
> through the tutorial?


Yes, I'm working through it. But I'm not getting the significance of the
commands used. I know it's to give a taste of solr with an example.

But actually I don't know where the problem is, sorry for saying this,
but I don't understand how solr works :(
How do people manage to change the code and get it working? I am just
unable to understand all this :(
What should I do?

-- 
Kamaljeet Kaur

kamalkaur188.wordpress.com
facebook.com/kaur.188




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Where-is-the-webapps-directory-of-servlet-container-tp4084968p4085143.html

Re: Where is the webapps directory of servlet container?

2013-08-16 Thread Brendan Grainger
Assuming you have downloaded solr into a dir called 'solr', if you look
in the 'example' directory there is a bundled Jetty installation ready to
roll for testing etc. That's the answer to your question 'why jetty?'. Have
you worked through the tutorial?



On Fri, Aug 16, 2013 at 2:06 PM, Kamaljeet Kaur wrote:

> On Fri, Aug 16, 2013 at 10:22 PM, Brendan Grainger [via Lucene]
>  wrote:
> > ou can then
> > use the packaged Jetty servlet container while you get comfortable with
> > working with solr.
>
>
> Can I ask why jetty?
>
> --
> Kamaljeet Kaur
>
> kamalkaur188.wordpress.com
> facebook.com/kaur.188
>
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Where-is-the-webapps-directory-of-servlet-container-tp4084968p4085135.html
>



-- 
Brendan Grainger
www.kuripai.com


Re: Where is the webapps directory of servlet container?

2013-08-16 Thread Kamaljeet Kaur
On Fri, Aug 16, 2013 at 10:22 PM, Brendan Grainger [via Lucene]
 wrote:
> ou can then
> use the packaged Jetty servlet container while you get comfortable with
> working with solr.


Can I ask why jetty?

-- 
Kamaljeet Kaur

kamalkaur188.wordpress.com
facebook.com/kaur.188




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Where-is-the-webapps-directory-of-servlet-container-tp4084968p4085135.html

RE: external zookeeper with SolrCloud

2013-08-16 Thread Joshi, Shital
Is there a way to find if we have a zookeeper quorum? We can ping individual 
zookeepers and see if they are running, but it would be nice to ping/query one URL 
and check if we have a quorum. 

-Original Message-
From: Shawn Heisey [mailto:s...@elyograg.org] 
Sent: Friday, August 09, 2013 2:15 PM
To: solr-user@lucene.apache.org
Subject: Re: external zookeeper with SolrCloud

On 8/9/2013 11:15 AM, Joshi, Shital wrote:
> Same thing happens. It only works with N/2 + 1 zookeeper instances up.

Got it.

An update came in on the issue that I filed.  This behavior that you're 
seeing is currently by design.

Because this is expected behavior, I've changed the issue to improvement 
instead of a bug.  I don't know if it is something that will happen, but 
the request is in.

The workaround is fairly simple -- don't start or restart Solr nodes if 
you don't have zookeeper quorum.

Thank you for your diligent testing!

Shawn



Best way to version config files

2013-08-16 Thread SolrLover
Currently we use multiple stopwords.txt, protwords.txt, elevate.xml and
other solr-related config files. We use subversion to maintain various
versions of these files manually. I wanted to check with the forum about the
process others follow to preserve version history beyond just using a
version control tool.

Maybe I'm dreaming, but it would be great if there were a UI to edit the
files, connect to a version control repository and commit the changes there.
This way I would be able to view all versions of a file in the Solr admin
panel and also revert back to any version needed...



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Best-way-to-version-config-files-tp4085115.html


How can I show stats or results based on stats of a group?

2013-08-16 Thread zhangquan913
Hello All,

Recently I used the stats component of solr. I can do a "group by" and get stats
for each group with the following solr request:
http://localhost/solr/quan/select?q=*:*&stats=true&stats.field=income&rows=0&indent=true&stats.facet=township

In this case, solr will group by "township" and do stats on "income" of
families for each township. However, I want detailed info on the families,
rather than stats, when a group has fewer than 5 families. How can I get this
kind of mixed response from solr?

I need to do only one solr query. Don't tell me to get the response from the
previous request and then do queries again for the groups that have fewer than
5 families.

Thanks,
Quan



--
View this message in context: 
http://lucene.472066.n3.nabble.com/How-can-I-show-stats-or-results-based-on-stats-of-a-group-tp4085114.html


Re: Solr 4.3 and above core swap

2013-08-16 Thread Shawn Heisey

On 8/16/2013 10:14 AM, richardg wrote:

Thanks for the reply. No, nothing else would be writing to the index. I'm sure
it is a solrconfig setting, but not sure which.


Are you specifying the DirectoryFactory and/or lock type in your 
solrconfig.xml, and if so, what are they set to?  Is your index on a 
network filesystem, or an unusual filesystem type?


Thanks,
Shawn



Re: Migrating from solr 3.5 to 4.4

2013-08-16 Thread Shawn Heisey

On 8/16/2013 6:43 AM, Kuchekar wrote:

  If we do a csv export from 3.5 solr and then import it into the 4.4
index, we get a problem with copy fields, i.e. the value in the copy field
is computed twice: once from the csv import and once from solr's internal
computation.


Supplemental reply on this specific part:

Your copyField destinations should not be stored.  This is listed as a 
specific requirement for atomic updates:


http://wiki.apache.org/solr/Atomic_Updates#Stored_Values

Thanks,
Shawn



Re: Where is the webapps directory of servlet container?

2013-08-16 Thread Brendan Grainger
Hi,

Slightly off topic, but just wondering if you've worked through the
tutorial: https://lucene.apache.org/solr/4_4_0/tutorial.html You can then
use the packaged Jetty servlet container while you get comfortable with
working with solr.

Best of luck
Brendan



On Fri, Aug 16, 2013 at 12:25 PM, Kamaljeet Kaur wrote:

> On Fri, Aug 16, 2013 at 1:20 PM, Artem Karpenko [via Lucene]
>  wrote:
> > it's also mentioned on that page that "Solr runs inside a Java servlet
> > container such as Tomcat, Jetty, or Resin" - you have to install one of
> > those first.
>
>
> Ok.
> Can you please suggest me the servlet container to use with Django 1.4
> and solr version 4.4.0?
> And its version?
>
> --
> Kamaljeet Kaur
>
> kamalkaur188.wordpress.com
> facebook.com/kaur.188
>
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Where-is-the-webapps-directory-of-servlet-container-tp4084968p4085094.html




-- 
Brendan Grainger
www.kuripai.com


Re: Shard splitting at 23 million documents -> OOM

2013-08-16 Thread Greg Preston
Have you tried it with a smaller number of documents?  I haven't been able
to successfully split a shard with 4.4.0 with even a handful of docs.


-Greg


On Fri, Aug 16, 2013 at 7:09 AM, Harald Kirsch wrote:

> Hi all.
>
> Using the example setup of solr-4.4.0, I was able to easily feed 23
> million documents from ClueWeb09.
>
> Then I tried to split the one shard into two. The size on disk is:
>
> % du -sh collection1
> 118G    collection1
>
> I started Solr with 8GB for the JVM:
>
> java -Xmx8000m -DzkRun -DnumShards=2 
> -Dbootstrap_confdir=./solr/collection1/conf
> -Dcollection.configName=myconf -jar start.jar
>
> Then I asked for the split
>
> http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=collection1&shard=shard1
>
> After a while I got the OOM in the logs:
>
> 841168 [qtp614872954-17] ERROR org.apache.solr.servlet.SolrDispatchFilter
>  – null:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java
> heap space
>
> My question: is it to be expected that the split needs huge amounts of RAM
> or is there a chance that some configuration or procedure change could get
> me past this?
>
> Regards,
> Harald.
> --
> Harald Kirsch
> Raytion GmbH
> Kaiser-Friedrich-Ring 74
> 40547 Duesseldorf
> Fon +49-211-550266-0
> Fax +49-211-550266-19
> http://www.raytion.com
>


Re: Where is the webapps directory of servlet container?

2013-08-16 Thread Kamaljeet Kaur
On Fri, Aug 16, 2013 at 1:20 PM, Artem Karpenko [via Lucene]
 wrote:
> it's also mentioned on that page that "Solr runs inside a Java servlet
> container such as Tomcat, Jetty, or Resin" - you have to install one of
> those first.


Ok.
Can you please suggest me the servlet container to use with Django 1.4
and solr version 4.4.0?
And its version?

-- 
Kamaljeet Kaur

kamalkaur188.wordpress.com
facebook.com/kaur.188




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Where-is-the-webapps-directory-of-servlet-container-tp4084968p4085094.html

Re: Solr 4.3 and above core swap

2013-08-16 Thread richardg
Thanks for the reply, no nothing else would be writing to the index, I'm sure
it is a solrconfig setting but not sure which.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-4-3-and-above-core-swap-tp4084794p4085091.html


Listeners, cores and Similarity

2013-08-16 Thread Marc Sturlese
Hey there,
I'm testing a custom similarity which loads data from an external file
located in solr_home/core_name/conf/. I load data from the file into a Map
in the init method of the SimilarityFactory. I would like to reload that Map
every time a commit happens, or every X hours.
To do that I've thought of implementing a custom listener which populates a
custom cache (working as the Map) every time a new searcher is opened. The
problem is that from the SimilarityFactory or Similarity class I can't
access the Solr caches; I just have access to the SolrParams.
The only way I see to populate the Map outside the Similarity class is
to make it static, but I would like to avoid that.
Any advice?
Thanks in advance




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Listeners-cores-and-Similarity-tp4085083.html


Indexing an XML file in Apache Solr

2013-08-16 Thread Abhiroop
I am very new to Solr. I am looking to index an xml file and search its
contents. Its structure resembles something like this


((1,6)-alpha-glucosyl)poly((1,4)-alpha-glucosyl)glycogenin =>
poly{(1,4)-alpha-  glucosyl} glycogenin + alpha-D-glucose
This event has been computationally inferred from an event that
has been demonstrated in another species.The inference is based on the
homology mapping in Ensembl Compara. Briefly, reactions for which all
involved PhysicalEntities (in input, output and catalyst) have a mapped
orthologue/paralogue (for complexes at least 75% of components must have a
mapping) are inferred to the other species. High level events are also
inferred for these events to allow for easier navigation.More details and
caveats of the event inference in Reactome. For details on the Ensembl
Compara system see also: Gene orthology/paralogy prediction
method.

[remaining XML markup stripped by the mailing list archive]

Saccharomyces cerevisiae



Is it essential to use the DIH to import this data into Solr? Isn't there
any simpler way to accomplish the task? Can it be done through SolrJ? I am
fine with outputting the result to the console too. It would be really
helpful if someone could point me to some useful examples or resources on
this apart from the official documentation.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Indexing-an-XML-file-in-Apache-Solr-tp4085053.html


Shard splitting at 23 million documents -> OOM

2013-08-16 Thread Harald Kirsch

Hi all.

Using the example setup of solr-4.4.0, I was able to easily feed 23 
million documents from ClueWeb09.


Then I tried to split the one shard into two. The size on disk is:

% du -sh collection1
118G    collection1

I started Solr with 8GB for the JVM:

java -Xmx8000m -DzkRun -DnumShards=2 
-Dbootstrap_confdir=./solr/collection1/conf 
-Dcollection.configName=myconf -jar start.jar


Then I asked for the split

http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=collection1&shard=shard1

After a while I got the OOM in the logs:

841168 [qtp614872954-17] ERROR 
org.apache.solr.servlet.SolrDispatchFilter  – 
null:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space


My question: is it to be expected that the split needs huge amounts of 
RAM or is there a chance that some configuration or procedure change 
could get me past this?


Regards,
Harald.
--
Harald Kirsch
Raytion GmbH
Kaiser-Friedrich-Ring 74
40547 Duesseldorf
Fon +49-211-550266-0
Fax +49-211-550266-19
http://www.raytion.com


Re: Indexing an XML file in Apache Solr

2013-08-16 Thread tamanjit.bin...@yahoo.co.in
DIH is not at all necessary and yes, SolrJ can be used to add data; the XML
bit I am not too sure about, though.

Try:
http://wiki.apache.org/solr/UpdateXmlMessages
  
and
http://wiki.apache.org/solr/Solrj   



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Indexing-an-XML-file-in-Apache-Solr-tp4085053p4085054.html


Re: Solr 4.3 and above core swap

2013-08-16 Thread tamanjit.bin...@yahoo.co.in
Is any other source trying to write into your index when you try to reload
it? If this was so, then I guess it would have locked up the index. Check
for a write.lock file in your index directory. You can remove that file
manually and then retry it.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-4-3-and-above-core-swap-tp4084794p4085052.html


Re: Migrating from solr 3.5 to 4.4

2013-08-16 Thread tamanjit.bin...@yahoo.co.in
/ I read that we can point the new solr 4.4 to the data index from previous
solr i.e. 3.5/
Yes, you can do that. It would be even better if you would run an optimize
post-migration; it will re-write the segments.

/If this is true, can we change the schema in 4.4 solr. We have many
un-stored fields in solr 3.5 schema, which we would like to make stored, so
that we can take the advantage of atomic updates. /
Changes can be made to the schema, but reindexing would be required to make
unstored data stored.

/If we do a csv export from 3.5 solr and then import it into the 4.4 index, we
get a problem with copy fields, i.e. the value in the copy field is computed
twice: once from the csv import and once from solr's internal computation. /

Have you tried removing the copyField data from the csv? I think it should
work then.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Migrating-from-solr-3-5-to-4-4-tp4085049p4085051.html


Migrating from solr 3.5 to 4.4

2013-08-16 Thread Kuchekar
Hi,

We are migrating our solr from 3.5 to 4.4, but are stuck on the strategy to
migrate the index.

 I read that we can point the new solr 4.4 to the data index from
previous solr i.e. 3.5. Is my understanding correct? If this is true, can
we change the schema in 4.4 solr. We have many un-stored fields in solr 3.5
schema, which we would like to make stored, so that we can take the
advantage of atomic updates.

  If we cannot change the schema, what is the other best strategy
suggested to do the migration?

  If we do a csv export from 3.5 solr and then import it into the 4.4
index, we get a problem with copy fields, i.e. the value in the copy field
is computed twice: once from the csv import and once from solr's internal
computation.

Is there any other way by which we can migrate the index from 3.5 to 4.4
solr?

Looking forward for your reply.

Thanks.
Kuchekar, Nilesh


Re: struggling with solr.WordDelimiterFilterFactory

2013-08-16 Thread Aloke Ghoshal
Hi,

That's correct, the Analyzers will get applied at both Index & Query time.
In fact I do get results back for speedPost with this field definition.

Regards,
Aloke


On Fri, Aug 16, 2013 at 5:21 PM, vicky desai wrote:

> Hi,
>
> Another example I found: q=Content:wi-fi doesn't match documents with
> the word wifi. I think it is not catenating the query keywords correctly
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/struggling-with-solr-WordDelimiterFilterFactory-tp4085021p4085030.html
>


Re: struggling with solr.WordDelimiterFilterFactory

2013-08-16 Thread Jack Krupansky
Have you made ANY changes to the analyzer since indexing the data? 
Generally, you need to completely reindex your data after any changes to a 
field type analyzer.


Otherwise, use the Solr Admin UI analysis page and check the output for 
both index and query.


Also, be aware that preserveOriginal will preserve index-time punctuation 
such as trailing comma, period, or enclosing parentheses.


Also, what is your default query operator? You need to use q.op=OR when 
using WDF to generate multiple, non-phrase terms at query time.


Also add debugQuery=true to your request and see what the generated parsed 
query looks like.


-- Jack Krupansky

-Original Message- 
From: vicky desai

Sent: Friday, August 16, 2013 7:51 AM
To: solr-user@lucene.apache.org
Subject: Re: struggling with solr.WordDelimiterFilterFactory

Hi,

Another example I found: q=Content:wi-fi doesn't match documents with
the word wifi. I think it is not catenating the query keywords correctly



--
View this message in context: 
http://lucene.472066.n3.nabble.com/struggling-with-solr-WordDelimiterFilterFactory-tp4085021p4085030.html



Re: Problems installing Solr4 in Jetty9

2013-08-16 Thread Dmitry Kan
Hi,

I have the following jars in jetty/lib/ext:

log4j-1.2.16.jar
slf4j-api-1.6.6.jar
slf4j-log4j12-1.6.6.jar
jcl-over-slf4j-1.6.6.jar
jul-to-slf4j-1.6.6.jar

do you?

Dmitry


On Thu, Aug 8, 2013 at 12:49 PM, Spadez  wrote:

> Apparently this is the error:
>
> 2013-08-08 09:35:19.994:WARN:oejw.WebAppContext:main: Failed startup of
> context
> o.e.j.w.WebAppContext@64a20878
> {/solr,file:/tmp/jetty-0.0.0.0-8080-solr.war-_solr-any-/webapp/,STARTING}{/solr.war}
> org.apache.solr.common.SolrException: Could not find necessary SLF4j
> logging
> jars. If using Jetty, the SLF4j logging jars need to go in the jetty
> lib/ext
> directory. For other containers, the corresponding directory should be
> used.
> For more information, see: http://wiki.apache.org/solr/SolrLogging
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Problems-installing-Solr4-in-Jetty9-tp4083209p4083224.html
>


Re: is indirection possible?

2013-08-16 Thread Erick Erickson
Not that I know of. If you can index the docs cleverly, you
might be able to form a query that does the trick.

Pseudo-joins might also do the trick. Be aware that these
don't return data from the "from" document, only the "to"
doc. That's gibberish, but see:
http://wiki.apache.org/solr/Join

Best
Erick
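
For Sandy's example, a hedged sketch of what the pseudo-join might look
like, assuming fields named "id" and "pointer" as shown; the join clause
alone returns only Doc 2, so the original query is OR-ed in to return Doc 1
as well:

import org.apache.solr.client.solrj.SolrQuery;

public class JoinExample {
    public static SolrQuery docAndItsTarget() {
        // matches Doc 1 directly, plus Doc 2 via the join on pointer -> id
        return new SolrQuery(
            "id:444 OR _query_:\"{!join from=pointer to=id}id:444\"");
    }
}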


On Thu, Aug 15, 2013 at 8:26 PM, Sandy Mustard wrote:

> I am relatively new to the use of Solr.
>
> I have a set of documents with fields that contain the id of other
> documents.  Is it possible to specify a query that will return the related
> documents?
>
> Doc 1   id=444, name=First Document, pointer=777
> Doc 2   id=777, name=Next Document, pointer=555
> 
>
> I would like to query for Doc 1 and also get Doc 2 returned.
>
> Thanks,
> Sandy Mustard
>


Re: Large cache settings values - sanity check

2013-08-16 Thread Erick Erickson
Way too high :). Hmmm, not much detail there...

bq: filterCache (size="300000" initialSize="300000"
autowarmCount="50000"),

This is an OOM waiting to happen. Each filterCache entry is a key/value
pair. The key is the fq clause, but the value is a bitmap of all the docs in
the index, i.e. maxDoc/8 bytes.

So for a 32M doc corpus, each one is 4M. Your filter cache is
potentially 1.2T
if I've done my math right. The entries aren't allocated until an fq is
used, so at startup it's not very big.
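
Spelling the arithmetic out (a rough worst-case bound, assuming one full
bitset per cache entry):

public class FilterCacheMath {
    public static void main(String[] args) {
        long maxDoc = 32000000L;          // example corpus size from above
        long bytesPerEntry = maxDoc / 8;  // one bit per doc => ~4 MB per entry
        long entries = 300000L;           // the configured filterCache size
        System.out.println(bytesPerEntry * entries + " bytes, roughly 1.2 TB");
    }
}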

Then it gets worse. You say "most libraries would do only
one or two incremental commits a day". So the filter cache will gradually
accumulate until the next hard commit (openSearcher=true in 4.0, any
hard commit in 3.x), leading to unpredictable OOMs.

I claim you can create an algorithmic query generator that keeps adding
unique fq clauses and crash your app at will.

And when you _do_ do a commit, the most recent 50,000 fq clauses are
re-executed leading to what I suspect are very long startup times.

As I wrote in another context, evictions and hit ratios are the key
statistics. Drop these WAY back and monitor these to see what
the sizes _should_ be. If you have no evictions, it's probably too large. If
you have lots of evictions _and_ the hit ratio is small (< 75% or so) then
think of making it larger.

If you're doing date-based fq clauses, beware of NOW clauses, see:
http://searchhub.org/2012/02/23/date-math-now-and-filter-queries/
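
A small sketch of the pitfall that article describes, assuming SolrJ and a
hypothetical "added" date field: an un-rounded NOW makes every request a
unique filterCache entry, while rounding keeps the fq string (and so the
cache key) stable:

import org.apache.solr.client.solrj.SolrQuery;

public class DateFilters {
    public static SolrQuery cacheFriendly() {
        // NOW/DAY rounds to midnight, so this fq is reusable all day;
        // a bare NOW would change on every request and never hit the cache
        return new SolrQuery("*:*")
            .addFilterQuery("added:[NOW/DAY-1YEAR TO NOW/DAY]");
    }
}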


bq: queryResultCache (size="100000" initialSize="100000"
autowarmCount="50000")

Not as big an offender as the filterCache, but still. The usual case for
this cache is when a user pages. But the autowarmCount is huge. Again,
whenever
a new searcher is opened the last 50,000 queries will be re-executed before
the
searcher handles new queries.

The size here isn't as bad as the filterCache, but it's still far too large
IMO. The key is the query and the value is a couple of windows' worth of ints
(where window size is, say, 20). But the autowarmCount is a killer.

Again, look on the admin page for this cache and the hit ratio. It's
usually actually quite small, so this cache is often (but you need to
measure) not all that valuable.

I think the documentCache is also kind of big; the usual recommendation is
something like (max simultaneous queries) * (&rows parameter) as a start.

Best
Erick


On Thu, Aug 15, 2013 at 3:58 PM, Eoghan Ó Carragáin <
eoghan.ocarrag...@gmail.com> wrote:

> Hi,
>
> I’m involved in an open source project called Vufind which uses Solr to
> search across library catalogue records [1].
>
> The project uses what seem to be very high default cache settings in
> solrconfig.xml [2]:
>
>- filterCache (size="300000" initialSize="300000" autowarmCount="50000"),
>- queryResultCache (size="100000" initialSize="100000" autowarmCount="50000"),
>- documentCache (size="50000" initialSize="50000").
>
>
> These settings haven’t been reviewed since early in the project history (c.
> 2007) but came up in a recent discussion around out-of-memory issues and
> garbage collection.
>
> Of course decisions on cache configuration (along with jvm settings,
> sharding etc) vary depending on the instance (index size, query/sec etc),
> but I wanted to run these values past this list as a sanity check for what
> you’d consider good default settings given that most adopters of the
> software will not touch the defaults.
>
> Some characteristics of library data & Vufind’s schema [3] which may have a
> bearing on the issue:
>
>- quite a few facet fields & filtering (~ 12 facets configured by default)
>- high number of unique facet values (e.g. several hundred-thousands in a
>  facet field for authors or subjects)
>- most libraries would do only one or two incremental commits a day (which
>  may justify high auto-warming settings since the next commit isn’t for 24
>  hours)
>- sorting: relevance by default but other options configured by default
>  (title, author, callnumber, year, etc)
>- mostly, small sparse documents (MARC records containing title, author,
>  description etc but no full-text content)
>- quite a few stored fields, including a field which stores the full MARC
>  record for additional parsing by the application
>- average number of documents for most adopters probably somewhere between
>  500K and 2 million MARC records (Vufind has several adopters with up to 50m
>  full-text docs but these make considerable customisations to their Solr
>  setup)
>- query/sec will vary from library to library, but shouldn't be anything
>  too taxing for most adopters
>
>
> Do the current cache settings make sense in this context, or should we
> consider dropping back to the much lower values given in the Solr example
> and wiki?
>
> Many thanks
>
> Eoghan
>
>
> [1] vufind.org
>
> [2]
>
> https://github.com/vufind-org/vufind/blob/master/solr/biblio/c

Re: struggling with solr.WordDelimiterFilterFactory

2013-08-16 Thread vicky desai
Hi,

Another example I found: q=Content:wi-fi doesn't match documents with
the word wifi. I think it is not catenating the query keywords correctly



--
View this message in context: 
http://lucene.472066.n3.nabble.com/struggling-with-solr-WordDelimiterFilterFactory-tp4085021p4085030.html


Re: struggling with solr.WordDelimiterFilterFactory

2013-08-16 Thread vicky desai
Hi Aloke,

I am using the same analyzer for indexing as well as querying, so
LowerCaseFilterFactory should work for both, right?



--
View this message in context: 
http://lucene.472066.n3.nabble.com/struggling-with-solr-WordDelimiterFilterFactory-tp4085021p4085025.html


Re: Getter API for SolrCloud

2013-08-16 Thread Erick Erickson
I don't understand the question. CloudSolrServer subclasses SolrServer,
which has a
public QueryResponse query(SolrParams params)
method. Have you tried that?


Best
Erick
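
A minimal CloudSolrServer sketch along those lines, assuming SolrJ 4.x; the
zkHost string and collection name are placeholders:

import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.params.ModifiableSolrParams;

public class CloudQuery {
    public static void main(String[] args) throws Exception {
        CloudSolrServer server = new CloudSolrServer("zk1:2181,zk2:2181,zk3:2181");
        server.setDefaultCollection("collection1");

        ModifiableSolrParams params = new ModifiableSolrParams();
        params.set("q", "*:*");
        params.set("fl", "url lang");
        QueryResponse response = server.query(params);
        for (SolrDocument document : response.getResults()) {
            System.out.println(document.getFieldValue("url"));
        }
        server.shutdown();
    }
}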


On Thu, Aug 15, 2013 at 4:01 AM, Furkan KAMACI wrote:

> Here is a conversation about it:
>
> http://lucene.472066.n3.nabble.com/SolrCloud-with-Zookeeper-ensemble-in-production-environment-SEVERE-problems-td4047089.html
> However, the result of the conversation is not clear. Any ideas?
>
>
> 2013/8/15 Furkan KAMACI 
>
> > I've implemented an application that connects my UI and SolrCloud. I want
> > to write code that makes a search request to SolrCloud, and I will send the
> > result to my UI. I know that there are some examples about it, but I want a
> > fast and really good way to do it. One way I did it:
> >
> > ModifiableSolrParams params = new ModifiableSolrParams();
> > params.set("q", "*:*");
> > params.set("fl", "url lang");
> > params.set("sort", "url desc");
> > params.set("start", start);
> > QueryResponse response = lbHttpSolrServer.query(params);
> > for (SolrDocument document : response.getResults()) {
> > ...
> > }
> >
> > I want to use CloudSolrServer. Is there any example that is really fast
> > for getting data from SolrCloud? (I can get data in any format, e.g.
> > javabin. I will process it in my bridge application and send it to the UI)
> >
>


Re: struggling with solr.WordDelimiterFilterFactory

2013-08-16 Thread Aloke Ghoshal
Hi,

Based on your WhitespaceTokenizerFactory, and due to the
LowerCaseFilterFactory, the words actually indexed are:
speed, post, speedpost

You should get results for: q=Content:speedpost

So either remove the LowerCaseFilterFactory or add the
LowerCaseFilterFactory as a query-time Analyzer as well.

Regards,
Aloke




On Fri, Aug 16, 2013 at 4:53 PM, vicky desai wrote:

> Hi All,
>
> I have a query regarding the use of wordDelimiterFilterFactory.  My schema
> definition for the text field is as follows
>
> <fieldType name="text" class="solr.TextField" positionIncrementGap="100">
> <analyzer>
> <tokenizer class="solr.WhitespaceTokenizerFactory" />
> <filter class="solr.WordDelimiterFilterFactory" splitOnCaseChange="1"
> generateWordParts="1" generateNumberParts="1" catenateWords="1"
> catenateNumbers="1" catenateAll="1" preserveOriginal="1"/>
> <filter class="solr.LowerCaseFilterFactory" />
> </analyzer>
> </fieldType>
>
> <field name="Content" type="text" indexed="true" stored="true" multiValued="false"/>
>
> If I make the following query q=Content:speedPost
>
> then docs having Content *speed post* are matched, which is as expected, but
> docs having Content *speedpost* do not match.
>
> Can anybody please highlight if I am going incorrect somewhere
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/struggling-with-solr-WordDelimiterFilterFactory-tp4085021.html
>


struggling with solr.WordDelimiterFilterFactory

2013-08-16 Thread vicky desai
Hi All,

I have a query regarding the use of wordDelimiterFilterFactory.  My schema
definition for the text field is as follows

<fieldType name="text" class="solr.TextField" positionIncrementGap="100">
<analyzer>
<tokenizer class="solr.WhitespaceTokenizerFactory" />
<filter class="solr.WordDelimiterFilterFactory" splitOnCaseChange="1"
generateWordParts="1" generateNumberParts="1" catenateWords="1"
catenateNumbers="1" catenateAll="1" preserveOriginal="1"/>
<filter class="solr.LowerCaseFilterFactory" />
</analyzer>
</fieldType>

<field name="Content" type="text" indexed="true" stored="true" multiValued="false"/>

If I make the following query q=Content:speedPost

then docs having Content *speed post* are matched, which is as expected, but
docs having Content *speedpost* do not match.

Can anybody please point out if I am going wrong somewhere?



--
View this message in context: 
http://lucene.472066.n3.nabble.com/struggling-with-solr-WordDelimiterFilterFactory-tp4085021.html


Re: Wrong leader election leads to shard removal

2013-08-16 Thread Ido Kissos
Yes, I have erased the tlog in replica 2, and it appears that the first
replica's tlog was corrupted because of an ungraceful servlet shutdown.
There was no log for it, unfortunately; the zookeeper log didn't record
anything about this either. Is there a place I could check in zookeeper to
see what exactly happened during this election?

Partly connected - about the transient disk space that needs to be free for
replication after a sync failure - why does it replicate all the index
instead of copying just the newly formed segments? That would require much
less space than a full copy, wouldn't it?
Why not make the 100-doc tlog sync threshold configurable?


Re: Where is the webapps directory of servlet container?

2013-08-16 Thread Artem Karpenko

Hello Kamaljeet,

it's also mentioned on that page that "Solr runs inside a Java servlet 
container such as Tomcat, Jetty, or Resin" - you have to install one of 
those first. I don't know about Resin, but Tomcat and Jetty have their 
webapps directories right inside of them. The Solr Home directory from the 
distribution is "apache-solr-4.x.0/example/solr/". You just have to copy 
it anywhere you like and then point to this location by providing the 
system property solr.solr.home when starting the servlet container of 
your choice (also described later in the document).


Best,
Artem Karpenko.

On 16.08.2013 6:04, Kamaljeet Kaur wrote:

Hello,
They write in the reference guide, "Copy
the solr.war file from the Solr distribution to the webapps directory of
your servlet container."

I can't find the webapps directory of the Java servlet container. Please
help.

In the next step it's written: "Copy the Solr Home directory
apache-solr-4.x.0/example/solr/ from the distribution to your desired Solr
Home location."

Which solr home directory do they refer to? And which desired Solr Home
location? Please tell.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Where-is-the-webapps-directory-of-servlet-container-tp4084968.html