Hi
My domain model is made of users that have access to projects which
are composed of items. I'm hoping to use Solr and would like to make
sure that searches only return results for items that users have
access to.
I've looked over some of the older posts on this mailing list about
access
iorixxx wrote:
<queryParser name="complexphrase"
             class="org.apache.solr.search.ComplexPhraseQParserPlugin">
  <bool name="inOrder">false</bool>
</queryParser>
I added this change to SOLR-1604, can you test it and give us feedback?
Many thanks. I'll test this quite soon and let you know.
processor="FileListEntityProcessor" fileName=".*xml" recursive="true"
Shouldn't this be fileName="*.xml"?
Ben
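Worth noting: FileListEntityProcessor's fileName attribute is interpreted as a regular expression, not a shell glob, so ".*xml" looks intentional. A small sketch with Python's re module to show the difference (DIH itself uses Java regexes, but the semantics are the same here):

```python
import re

# fileName is a regex: ".*" matches any prefix, "xml" the literal
# suffix, so ".*xml" selects every file ending in "xml".
pattern = re.compile(r".*xml")

print(bool(pattern.fullmatch("record1.xml")))  # True
print(bool(pattern.fullmatch("notes.txt")))    # False

# A glob-style "*.xml" is not even a valid regex: the "*" has no
# preceding atom to repeat, so compiling it fails.
try:
    re.compile(r"*.xml")
except re.error:
    print("'*.xml' is rejected as a regex")
```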
On Oct 22, 2010, at 10:52 PM, pghorp...@ucla.edu wrote:
<dataConfig>
  <dataSource name="myfilereader" type="FileDataSource"/>
  <document>
    <entity name="f" rootEntity="false" dataSource="null"
On Oct 20, 2010, at 12:14 PM, Pradeep Singh wrote:
Thanks for your response Grant.
I already have the bounding box based implementation in place. And on a
document base of around 350K it is super fast.
What about a document base of millions of documents? While a tier-based
approach will
What I know is: define your field in the schema.xml file, and build a
database_conf.xml file which contains the configuration for your database.
Finally, you should define the DataImportHandler in the solrconfig.xml file.
I put a sample of what you should do in the first post in this topic; you can
check it, as far as I know.
I found these files, but I can't find any useful info inside them; what I
found is a GET command in an HTTP request.
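The setup steps described above can be sketched as config fragments (the file name database_conf.xml and all of the connection details, table names, and fields below are placeholders for illustration, not settings from this thread):

```xml
<!-- solrconfig.xml: register the DataImportHandler -->
<requestHandler name="/dataimport"
                class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">database_conf.xml</str>
  </lst>
</requestHandler>

<!-- database_conf.xml: point DIH at the database and pick a query -->
<dataConfig>
  <dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost/mydb" user="dbuser" password="secret"/>
  <document>
    <entity name="item" query="SELECT id, name FROM item"/>
  </document>
</dataConfig>
```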
--
View this message in context:
http://lucene.472066.n3.nabble.com/Import-From-MYSQL-database-tp1738753p1756778.html
Sent from the Solr - User mailing list archive at Nabble.com.
Here are all the files: http://rghost.net/3016862
1) StandardAnalyzer.java, StandardTokenizer.java - patched files from
lucene-2.9.3
2) I patch these files and build lucene by typing "ant"
3) I replace lucene-core-2.9.3.jar in solr/lib/ with my
lucene-core-2.9.3-dev.jar that I'd just compiled
4) then
Unfortunately it's not online yet, but is there anything I can clarify in
more detail?
Thanks!
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-Javascript-JSON-not-optimized-for-SEO-tp1751641p1758054.html
Did you delete the folder Jetty_0_0_0_0_8983_solr.war_** under
apache-solr-1.4.1\example\work?
--- On Sat, 10/23/10, Sergey Bartunov sbos@gmail.com wrote:
From: Sergey Bartunov sbos@gmail.com
Subject: Re: How to index long words with StandardTokenizerFactory?
To:
Hi Paul,
Regardless of how you implement it, I would recommend you use filter queries
for the permissions check rather than making it part of the main query.
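A sketch of that suggestion (the field name project_id and the helper below are made-up placeholders, not anything from this thread): put the permission constraint in fq, so it does not affect relevance scoring and gets cached separately by Solr's filterCache.

```python
from urllib.parse import urlencode

def build_search_params(user_query, allowed_project_ids):
    # The main query (q) scores on relevance only; the permission
    # check goes into a filter query (fq), which restricts the result
    # set without influencing document scores.
    fq = "project_id:(%s)" % " OR ".join(str(i) for i in allowed_project_ids)
    return urlencode({"q": user_query, "fq": fq})

print(build_search_params("solr tutorial", [3, 7, 12]))
```

The application would append these parameters to the select URL; because fq clauses are cached independently, the same ACL filter is reused across many different q values.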
On Sat, Oct 23, 2010 at 4:03 AM, Paul Carey paul.p.ca...@gmail.com wrote:
Hi
My domain model is made of users that have access to
Yes. I did. Won't help.
On 23 October 2010 17:45, Ahmet Arslan iori...@yahoo.com wrote:
Did you delete the folder Jetty_0_0_0_0_8983_solr.war_** under
apache-solr-1.4.1\example\work?
--- On Sat, 10/23/10, Sergey Bartunov sbos@gmail.com wrote:
From: Sergey Bartunov sbos@gmail.com
I think you should replace your new lucene-core-2.9.3-dev.jar in
\apache-solr-1.4.1\lib and then create a new solr.war under
\apache-solr-1.4.1\dist. And copy this new solr.war to
solr/example/webapps/solr.war
--- On Sat, 10/23/10, Sergey Bartunov sbos@gmail.com wrote:
From: Sergey
On Fri, Oct 22, 2010 at 12:07 PM, Sergey Bartunov sbos@gmail.com wrote:
I'm trying to force solr to index words whose length is more than 255
If the field is not a text field, Solr's default analyzer is used, which
currently limits the token to 256 bytes.
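To illustrate the effect being debugged here (a toy stand-in, not Lucene's actual analyzer code): with a maximum token length in force, an over-long "word" never reaches the index at all.

```python
DEFAULT_MAX_TOKEN_LENGTH = 255  # Lucene's historical default

def analyze(text, max_token_length=DEFAULT_MAX_TOKEN_LENGTH):
    # Toy whitespace analyzer: tokens longer than the limit are
    # discarded, so they can never be matched by a later query.
    return [t for t in text.split() if len(t) <= max_token_length]

long_word = "x" * 300
print(analyze("hello " + long_word))        # the 300-char token is gone
print(analyze("hello " + long_word, 1000))  # raising the limit keeps it
```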
Out of curiosity, what's your
Look at the schema.xml that I provided. I use my own text_block type,
which is derived from TextField, and I force the use of
StandardTokenizerFactory via the tokenizer tag.
If I use the StrField type there are no problems with indexing big values.
The problem is in the tokenizer.
On 23 October 2010 18:55,
This is exactly what I did. Look:
3) I replace lucene-core-2.9.3.jar in solr/lib/ with my
lucene-core-2.9.3-dev.jar that I'd just compiled
4) then I do "ant compile" and "ant dist" in the solr folder
5) after that I recompile solr/example/webapps/solr.war
On 23 October 2010 18:53, Ahmet Arslan
On Fri, Oct 22, 2010 at 11:52 PM, pghorp...@ucla.edu wrote:
<dataConfig>
  <dataSource name="myfilereader" type="FileDataSource"/>
  <document>
    <entity name="f" rootEntity="false" dataSource="null"
            processor="FileListEntityProcessor" fileName=".*xml" recursive="true"
            baseDir="C:\data\sample_records\mods\starr"
Two things will lessen the Solr administrative load:
1/ Follow the examples of databases and *nix OSs. Give each user their own
group, or set up groups that don't have regular users as OWNERS, but can have
users assigned to the group to give them particular permissions. I.e. roles,
like
Why use filter queries?
Wouldn't reducing the set headed into the filters by putting it in the main
query be faster? (A question to learn from, since I do NOT know :-)
Dennis Gearon
Signature Warning
It is always a good idea to learn from your own mistakes. It is usually a
better
Forgot to add,
3/ The external, application code selects the GROUPS that the user has
permission to read (Solr will only serve up what is to be read?) then search on
those groups.
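A sketch of that flow (the user-to-group mapping and the group field name are assumptions for illustration): the application resolves the user's groups outside Solr, then restricts the search to documents tagged with those groups.

```python
# Hypothetical user -> groups mapping maintained by the application,
# not by Solr; Solr only sees the resulting filter clause.
USER_GROUPS = {
    "alice": ["staff", "project_x"],
    "bob": ["project_y"],
}

def acl_filter(username):
    # Build an fq clause restricting results to any group the user
    # belongs to; unknown users get a clause that matches nothing.
    groups = USER_GROUPS.get(username, [])
    if not groups:
        return "group:__none__"
    return "group:(%s)" % " OR ".join(groups)

print(acl_filter("alice"))  # group:(staff OR project_x)
```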
Ah, I should have read more carefully...
I remember this being discussed on the dev list, and I thought there might
be
a Jira attached but I sure can't find it.
If you're willing to work on it, you might hop over to the solr dev list and
start
a discussion, maybe ask for a place to start. I'm
Why do you want to? Basically, the caches are there to improve
#searching#. To search something, you must index it. Retrieving
it is usually a rare enough operation that caching is irrelevant.
This smells like an XY problem, see:
http://people.apache.org/~hossman/#xyproblem
If this seems like
Oops, I am sorry. I thought that solr/lib referred to solrhome/lib.
I just tested this and it seems that you have successfully increased the max
token length. You can verify this via the analysis.jsp page.
Despite analysis.jsp's output, though, it seems that some other mechanism is
preventing this huge token
In general, the behavior is not predictable when sorting on a tokenized
field, which text is. What would it mean to sort on a field with "erick" and
"Moazzam" as tokens in a single document? Should it be in the e's or the m's?
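A toy illustration of why this is ill-defined (not Lucene internals): once the field is tokenized, the document has two candidate sort keys, and case handling changes the answer again.

```python
doc_tokens = ["erick", "Moazzam"]  # one document, two indexed tokens

# Which token represents the document in a sort? Either choice is
# defensible, so the result hinges on implementation detail:
print(min(doc_tokens))                      # uppercase sorts first in ASCII
print(min(t.lower() for t in doc_tokens))   # lowercasing flips the winner
```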
That said, you probably want to watch out for case
Best
Erick
Hi Darren,
Usually patches are written for the latest trunk branch at the time.
I've just updated the patch. Try it for the current trunk if you prefer.
Koji
--
http://www.rondhuit.com/en/
(10/10/22 19:10), Darren Govoni wrote:
Hi Koji,
I tried to apply your patch to the 1.4.0 tagged
Pushing ACL logic outside Solr sounds like a prudent choice indeed, as in my
opinion all of the business rules/conceptual logic should reside only within
the application code boundaries. This way your domain will be easier to model
and your code easier to read, understand and maintain.
More information on Filter
Answering my own question:
The pf feature only kicks in with a multi-term q param. In my case I used a
field tokenized by KeywordTokenizer, hence pf never kicked in.
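A sketch of that behaviour (simplified, not the actual edismax code): phrase-boost fields (pf) only apply when the analyzed q yields more than one term, and KeywordTokenizer emits the whole input as a single token, so pf never triggers.

```python
def keyword_tokenize(q):
    # KeywordTokenizer: the entire input becomes one token.
    return [q]

def whitespace_tokenize(q):
    return q.split()

def pf_applies(q, tokenize):
    # Phrase boosting needs at least two terms to form a phrase;
    # with a single term there is nothing to boost.
    return len(tokenize(q)) > 1

print(pf_applies("red shoes", whitespace_tokenize))  # True
print(pf_applies("red shoes", keyword_tokenize))     # False
```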
--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com
On 14. okt. 2010, at 13.29, Jan Høydahl / Cominvent
Hi All,
I think using filter queries is a good option to consider, for the following
reasons:
* The filter query does not affect the score of the items in the result set.
If the ACL logic is part of the main query, it could influence the scores of
the items in the result set.
* Using