Can I be added to the Solr wiki contributors list?
Username: garysieling
Thanks
Gary
documents?
Best,
Gary
2016-06-15 19:23 GMT+02:00 Erick Erickson <erickerick...@gmail.com>:
> Simplest, though a bit risky is to manually edit the znode and
> correct the znode entry. There are various tools out there, including
> one that ships with Zookeeper (see the ZK documentati
could recover from this problem.
Best,
Gary
/>
</entity>
</entity>
</document>
</dataConfig>
So it's something related to BinFileDataSource and TikaEntityProcessor.
Thanks,
Gary.
On 26/02/2015 14:24, Gary Taylor wrote:
Alex,
That's great. Thanks for the pointers. I'll try and get more info on
this and file a JIRA issue.
Kind
Alex,
Same results on recursive=true / recursive=false.
I also tried importing plain text files instead of epub (still using
TikaEntityProcessor though) and get exactly the same result - i.e. all
files fetched, but only one document indexed in Solr.
With verbose output, I get a row for each
On 26/02/2015 14:16, Alexandre Rafalovitch wrote:
On 26 February 2015 at 08:32, Gary Taylor g...@inovem.com wrote:
Alex,
Same results on recursive=true / recursive
.
Thanks for any assistance / pointers.
Regards,
Gary
--
Gary Taylor | www.inovem.com | www.kahootz.com
INOVEM Ltd is registered in England and Wales No 4228932
Registered Office 1, Weston Court, Weston, Berkshire. RG20 8JE
kahootz.com is a trading name of INOVEM Ltd.
in the index in preparation for trying out the
search highlighting. Couldn't work out how to do that with post.jar
Thanks,
Gary
On 25/02/2015 17:09, Alexandre Rafalovitch wrote:
Try removing that first epub from the directory and rerunning. If you
now index 0 documents
Can anyone remove this spammer please?
On Tue, Jul 23, 2013 at 4:47 AM, wired...@yahoo.com wrote:
at the
code, it seems that I was wrong. Here's how to send a POST query:
response = server.query(query, METHOD.POST);
The import required for this is:
import org.apache.solr.client.solrj.SolrRequest.METHOD;
Gary, if you can avoid it, you should not be creating a new
HttpSolrServer object
names a different shard.
On Fri, Mar 22, 2013 at 3:39 PM, Gary Yngve gary.yn...@gmail.com wrote:
I have a situation we just discovered in solr4.2 where there are
previously cached results from a limited field list, and when querying for
the whole field list, it responds differently depending
(ZkStateReader.java:201)
at
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:526)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)
We ended up upgrading to solr4.2 and rebuilding the whole index from our
datastore.
-Gary
On Sat, Mar 16, 2013
field
list or the full field list.
We're releasing tonight, so is there a query param to selectively bypass
the cache, which I can use as a temp fix?
Thanks,
Gary
Cool, I'll need to try this. I could have sworn that it didn't work that
way in 4.0, but maybe my test was bunk.
-g
On Fri, Mar 15, 2013 at 9:41 PM, Mark Miller markrmil...@gmail.com wrote:
You can do this - just modify your starting Solr example to have no cores
in solr.xml. You won't be
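A hedged sketch of what that "no cores" starting point might look like in the Solr 4.x legacy solr.xml format (attribute values assumed for the example, not taken from this thread) — cores are then created afterwards via the CoreAdmin API:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<solr persistent="true">
  <!-- empty cores list: create cores later via the CoreAdmin API -->
  <cores adminPath="/admin/cores"/>
</solr>
```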
Sorry, should have specified. 4.1
On Fri, Mar 15, 2013 at 4:33 PM, Mark Miller markrmil...@gmail.com wrote:
What Solr version? 4.0, 4.1 4.2?
- Mark
On Mar 15, 2013, at 7:19 PM, Gary Yngve gary.yn...@gmail.com wrote:
my solr cloud has been running fine for weeks, but about a week ago
Also, looking at overseer_elect, everything looks fine. node is valid and
live.
On Fri, Mar 15, 2013 at 4:47 PM, Gary Yngve gary.yn...@gmail.com wrote:
Sorry, should have specified. 4.1
On Fri, Mar 15, 2013 at 4:33 PM, Mark Miller markrmil...@gmail.com wrote:
What Solr version? 4.0
thread running perhaps? Or just post the results?
To recover, you should be able to just restart the Overseer node and have
someone else take over - they should pick up processing the queue.
Any logs you might be able to share could be useful too.
- Mark
On Mar 15, 2013, at 7:51 PM, Gary
it doesn't appear to be a shard1 vs shard11 issue... 60% of my followers
are red now in the solr cloud graph.. trying to figure out what that
means...
On Fri, Mar 15, 2013 at 6:48 PM, Gary Yngve gary.yn...@gmail.com wrote:
I restarted the overseer node and another took over, queues are empty
i think those followers are red from trying to forward requests to the
overseer while it was being restarted. i guess i'll see if they become
green over time. or i guess i can restart them one at a time..
On Fri, Mar 15, 2013 at 6:53 PM, Gary Yngve gary.yn...@gmail.com wrote:
it doesn't
at 7:14 PM, Mark Miller markrmil...@gmail.com wrote:
On Mar 15, 2013, at 10:04 PM, Gary Yngve gary.yn...@gmail.com wrote:
i think those followers are red from trying to forward requests to the
overseer while it was being restarted. i guess i'll see if they become
green over time. or i
the param in solr.xml should be shard, not shardId. i tripped over this
too.
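To illustrate the fix described above, a hypothetical core entry in the legacy solr.xml format (core and collection names invented for the example):

```xml
<!-- note the attribute is "shard", not "shardId" -->
<core name="collection1_shard1" instanceDir="collection1_shard1"
      collection="collection1" shard="shard1"/>
```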
-g
On Mon, Jan 14, 2013 at 7:01 AM, starbuck thomas.ma...@fiz-karlsruhe.de wrote:
Hi all,
I am trying to realize a solr cloud cluster with 2 collections and 4 shards
each with 2 replicates hosted by 4 solr
independently of each other.
Thanks,
Gary
antecedents :))
-g
On Mon, Jan 14, 2013 at 6:27 PM, Gary Yngve gary.yn...@gmail.com wrote:
Posting this
<?xml version="1.0" encoding="UTF-8"?>
<add>
  <doc>
    <field name="nickname_s" update="set">blah</field>
    <field name="tags_ss" update="add">qux</field>
    <field name="tags_ss" update="add">quux</field>
    <field name="id">foo</field>
  </doc>
</add>
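An atomic-update payload like this can be generated rather than hand-written; a minimal sketch using only the Python standard library (field names taken from the example above, posting to Solr omitted):

```python
import xml.etree.ElementTree as ET

# Build an atomic-update document: set one field, append to a multivalued field.
add = ET.Element("add")
doc = ET.SubElement(add, "doc")

def field(name, value, update=None):
    """Append a <field> element; 'update' selects the atomic-update mode."""
    attrs = {"name": name}
    if update is not None:
        attrs["update"] = update
    el = ET.SubElement(doc, "field", attrs)
    el.text = value

field("id", "foo")                  # unique key: selects the document to update
field("nickname_s", "blah", "set")  # replace the existing value
field("tags_ss", "qux", "add")      # append to a multivalued field
field("tags_ss", "quux", "add")

payload = ET.tostring(add, encoding="unicode")
print(payload)
```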
fine with all
Groovy versions. Can't imagine what the root cause might be -- Groovy
implements jsr223 differently in later versions? I suppose to find out I could
compile Solr with my jdk but time to march on. ;)
Gary
-Original Message-
From: Erick Erickson [mailto:erickerick
errors. Thanks in advance for any tips.
Gary
Hi, there
In order to keep a DocID vs UID map, we added payload to a solr core. The
search on UID is very fast but we get a problem with adding/deleting docs.
Every time we commit an adding/deleting action, solr/lucene will take up to 30
seconds to complete. Without payload, a same action
/mods/v3/mods-3-4.xsd"
      version="3.4">
  <titleInfo>
    <title>Malus domestica: Arnold</title>
  </titleInfo>
</mods>
then xpath=//titleInfo/title works just fine. Can anyone confirm that this
is the case and, if so, recommend a solution?
Thanks
Gary
Gary Moore
Technical Lead
LCA Digital Commons
Hi
I have a scenario that I am not sure how to write the query for.
Here is the scenario - have an employee record with multi value for project,
started date, end date.
looks something like
John Smith web site bug fix 2010-01-01 2010-01-03
Java Development 2011-09-01
2011-09-15
Thanks in advance
Gary
On Thu, Sep 15, 2011 at 3:33 PM, Jonathan Rochkind rochk...@jhu.edu wrote:
You didn't tell us what your schema looks like, what fields with what types
are involved.
But similar to how you'd do it in your database
an application backend,
e.g. a PHP application running on port 80 connects to Solr on port 8983.
Gary
-Original Message-
From: nagarjuna [mailto:nagarjuna.avul...@gmail.com]
Sent: Wednesday, September 07, 2011 7:41 AM
To: solr-user@lucene.apache.org
Subject: how to run solr in apache server?
Hi
Hah, I knew it was something simple. :) Thanks.
Gary
-Original Message-
From: ysee...@gmail.com [mailto:ysee...@gmail.com] On Behalf Of Yonik Seeley
Sent: Sunday, August 28, 2011 12:50 PM
To: solr-user@lucene.apache.org
Subject: Re: commas in synonyms.txt are not escaping
Turns out
not doing but am a bit stumped at the moment and would appreciate any
tips.
Thanks
Gary
Here you go -- I'm just hacking the text field at the moment. Thanks,
Gary
<fieldType name="text" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.SynonymFilterFactory" synonyms
Thanks, Yonik.
Gary
-Original Message-
From: ysee...@gmail.com [mailto:ysee...@gmail.com] On Behalf Of Yonik Seeley
Sent: Friday, August 26, 2011 11:25 AM
To: solr-user@lucene.apache.org
Subject: Re: commas in synonyms.txt are not escaping
On Fri, Aug 26, 2011 at 11:16 AM, Yonik Seeley
Alexi,
Yes but no difference. This is apparently an issue introduced in 3.*. Thanks
for your help.
-Gary
-Original Message-
From: Alexei Martchenko [mailto:ale...@superdownloads.com.br]
Sent: Friday, August 26, 2011 10:45 AM
To: solr-user@lucene.apache.org
Subject: Re: commas
where the match is.
Hope that helps.
Kind regards,
Gary.
On 09/06/2011 03:00, Naveen Gupta wrote:
Hi Gary
It started working .. though i did not test for Zip files, but for rar
files, it is working fine ..
only thing what i wanted to do is to index the metadata (text mapped to
content
Naveen,
For indexing Zip files with Tika, take a look at the following thread :
http://lucene.472066.n3.nabble.com/Extracting-contents-of-zipped-files-with-Tika-and-Solr-1-4-1-td2327933.html
I got it to work with the 3.1 source and a couple of patches.
Hope this helps.
Regards,
Gary.
On 08
(from ExtractingDocumentLoader.java) I was running the correct
code anyway.
However, I'm very pleased that it's working now - I get the full
contents of the zipped files indexed and not just the file names.
Thank you again for your assistance, and the patch!
Kind regards,
Gary.
On 21/05
grateful.
Thanks and kind regards,
Gary.
On 11/04/2011 11:12, Gary Taylor wrote:
Jayendra,
Thanks for the info - been keeping an eye on this list in case this
topic cropped up again. It's currently a background task for me, so
I'll try and take a look at the patches and re-test soon.
Joey
connect!
-Gary
http://www.linkedin.com/in/garyyngve
with it. I've not yet moved to Solr 3.1 but it's on my to-do
list, as is testing out the patches referenced by Jayendra. I'll post
my findings on this thread - if you manage to test the patches before
me, let me know how you get on.
Thanks and kind regards,
Gary.
On 11/04/2011 05:02, Jayendra
As an example, I run this in the same directory as the msword1.doc file:
curl "http://localhost:8983/solr/core0/update/extract?literal.docid=74&literal.type=5" \
  -F "file=@msword1.doc"
The type literal is just part of my schema.
Gary.
On 03/03/2011 11:45, Ken Foskey wrote:
On Thu, 2011-03-03
to help
me work out why it's only returning the file names and not the file
contents when parsing a ZIP file?
Thanks and kind regards,
Gary.
On 25/01/2011 16:48, Jayendra Patil wrote:
Hi Gary,
The latest Solr Trunk was able to extract and index the contents of the zip
file using
contents, and doesn't even index the file names!
Is there a version of Tika that works with the Solr 1.4.1 released
distribution which does index the contents of the zipped files?
Thanks and kind regards,
Gary
and
HTMLStripStandardTokenizerFactory deprecated. To strip HTML tags,
HTMLStripCharFilter can be used with an arbitrary Tokenizer. (koji)
Unfortunately, I can't seem to get that to work correctly. Does anyone
have an example fieldType stanza (for schema.xml) for stripping out HTML ?
Thanks and kind regards,
Gary.
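For anyone hitting the same question: a minimal sketch of such a stanza, assuming the post-deprecation factory names (HTMLStripCharFilterFactory as a charFilter in front of an ordinary tokenizer) — adjust class names to your Solr version:

```xml
<fieldType name="text_html" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <!-- strip HTML/XML markup before tokenization -->
    <charFilter class="solr.HTMLStripCharFilterFactory"/>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```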
On 25/01
the filenames and
contents. Should I be able to index the contents of files stored in a
zip by using extract ?
Thanks and kind regards,
Gary.
On 25/01/2011 15:32, Gary Taylor wrote:
Thanks Erlend.
Not used SVN before, but have managed to download and build latest
trunk code.
Now I'm getting
anyone else seen this before or have an idea on how to surmount it?
I'm not quite ready to file a Jira issue on it yet, as I'm hoping it's user
error.
Thanks,
Gary
Sorry, false alarm. Had a bad merge and had a stray library linking to an
older version of another library. Works now.
-Gary
On Sat, Nov 27, 2010 at 4:17 PM, Gary Yngve gary.yn...@gmail.com wrote:
logs grep SEVERE solr.err.log
SEVERE: org.apache.solr.common.SolrException: Error loading
this to extend ExtractingRequestHandler to
allow multiple binary files and thus specify our own RequestHandler, or
would using the SolrJ interface directly be a better bet, or am I
missing something fundamental?
Thanks and regards,
Gary.
Jayendra,
Brilliant! A very simple solution. Thank you for your help.
Kind regards,
Gary
On 17 Nov 2010 22:09, Jayendra Patil <jayendra.patil@gmail.com>
wrote:
The way we implemented the same scenario is zipping all the attachments into
a single zip file which can be passed
configuration, thus your
copyField should be defined as a type that is configured with the
SynonymFilterFactory, just like
person_name.
You can find some guidance here:
http://bibwild.wordpress.com/2010/04/14/solr-stop-wordsdismax-gotcha/
Gary
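The advice above could be sketched with a hypothetical pair of fields (field and type names invented for the example; the destination type is assumed to include SynonymFilterFactory in its analyzer chain):

```xml
<!-- hypothetical: a synonym-aware copy of person_name -->
<field name="person_name" type="text_general" indexed="true" stored="true"/>
<field name="person_name_syn" type="text_synonyms" indexed="true" stored="false"/>
<copyField source="person_name" dest="person_name_syn"/>
```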
Hi Eric
I catch the NPE in the NonAdjacentDocumentCollapser class and now it does
return the data field collapsed.
However, I cannot promise how accurate or correct this fix is because I have not
had a lot of time to study all the code.
It would be best if some of the experts could give us a
http://www.webtide.com/choose/jetty.jsp
- Original Message -
From: Steve Radhouani r.steve@gmail.com
To: solr-user@lucene.apache.org
Sent: Tuesday, 16 February, 2010 12:38:04 PM
Subject: Tomcat vs Jetty: A Comparative Analysis?
Hi there,
Is there any analysis out
It works excellently in Tomcat 6. The toughest thing I had to deal with is
discovering that the environment variable in web.xml for solr/home is
essential. If you skip that step, it won't come up.
<env-entry>
  <env-entry-name>solr/home</env-entry-name>
/html/solr" override="true" />
</Context>
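For anyone setting this up, the solr/home entry in web.xml commonly takes this shape (the value is a placeholder path):

```xml
<env-entry>
  <env-entry-name>solr/home</env-entry-name>
  <env-entry-value>/path/to/solr/home</env-entry-value>
  <env-entry-type>java.lang.String</env-entry-type>
</env-entry>
```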
I am using the example configs (unmodified).
Thanks again
Gary
Gary Browne
Development Programmer
Library IT Services
University of Sydney
Australia
ph: 61-2-9351 5946
-Original Message-
From: Chris Hostetter [mailto:[EMAIL PROTECTED]
Sent: Tuesday
:
The requested resource (/solr/select/) is not available
I have other apps running under tomcat okay, seems like it can't find
the lib .jars or can't access the classes within them?
Stuck...
Cheers
Gary
? (I've attached the trace for
reference)
Thanks again
Gary
May 14, 2007 1:17:34 PM org.apache.solr.core.SolrException log
SEVERE: java.lang.NullPointerException
on the tutorial, but post.jar cannot be found...
Where is it? Is there a path variable I need to set up somewhere?
Any help greatly appreciated.
Regards,
Gary