: I disabled the account by assigning a dummy email and gave it a random
: password.
:
: I was not able to unassign the issues, as most issues were Closed,
: where no modifications can be done anymore. Reopening and changing
Uwe: it may be too late (depending on whether you remember the dummy
: Is it possible to change it? If not, what is the policy here? To open a
: new issue and close the old one?
...
: In this case, that would mean either closing this issue and opening a new one,
: or taking the discussion to the mailing list where subject headers may be
: modified as the
: No, no, no, Lucene still has no need for maven or ivy for dependency
: management.
: We can just hack around all issues with ant scripts.
it doesn't really matter if it's ant scripts, or ivy declarations, or
maven pom entries -- the point is the same.
We can't distribute the jars, but we can
: I was wondering yesterday why aren't the required libs checked in to SVN? We
Licensing issues.
we can't redistribute them (but we can provide the build.xml code to fetch
them)
-Hoss
-
To unsubscribe, e-mail:
: In addition to what Shai mentioned, I wanted to say that there are
: other oddities about how the contrib tests run in ant. For example,
: I'm not sure why we create the junitfailed.flag files (I think it has
: something to do with detecting top-level that a single contrib
: failed).
Correct
: build and nicely gets all dependencies to Lucene and Tika whenever I build
: or release, no problem there and certainly no need to have it merged into
: Lucene's svn!
The key distinction is that Solr is already in Lucene's svn -- The
question is how to reorg things in a way that makes it easier
: with, if it didn't happen on the lists, it didn't happen. It's the same as
+1
But as the IRC channel gets used more and more, it would *also* be nice if
there was an archive of the IRC channel so that there is a place to go
look to understand the back story behind an idea once it's
: prime-time as the new solr trunk! Lucene and Solr need to move to a
: common trunk for a host of reasons, including single patches that can
: cover both, shared tags and branches, and shared test code w/o a test
: jar.
Without a clearer picture of how people envision development overhead
: Subject: [DISCUSS] Do away with Contrib Committers and make core committers
+1
-Hoss
: nice of the wiki software to change every single line!
this type of thing seems to happen anytime you edit in GUI mode for the
first time since the MoinMoin upgrade a few months back -- it's normalizing
all the whitespace.
-Hoss
: I prefer to see tags used for what it is, a place to park an actually
: released; it shouldn't be used for testing or its content changed
: dynamically.
I have no opinion about the rest of this thread (changing the back compat
testing to use a specific revision on the previous release branch)
: Why do I see \java\tags\lucene_*_back_compat_tests_2009*\ directories (well
: over 100 so far) when I SVN update?
Are you saying you have http://svn.apache.org/repos/asf/lucene/java/
checked out in its entirety?
That seems ... problematic. New tags/branches could be created at any time
--
: https://issues.apache.org/jira/browse/LUCENENET-331). This begs the
: question, if Lucene.Net takes just this one patch, then Lucene.Net 2.9.1 is
: now 2.9.1.1 (which I personally don't like to see happening as I prefer to
: see a 1-to-1 release match).
As a general comment on this topic: I
: I configured hudson to simply run the hudson.sh from the nightly checkout.
+1
-Hoss
: What to do now, any votes on adding the missing maven artifacts for
: fast-vector-highlighter to 2.9.1 and 3.0.0 on the apache maven repository?
It's not even clear to me that anything special needs to be done before
publishing those jars to maven. 2.9.1 and 3.0.0 were already voted on and
: I signed up for a login, and voted for this issue. If others did the same,
: that might help.
if you read the comments in the issue, there's really nothing that can be
fixed in Jira to make this work better -- jira already puts an
In-Reply-To header on all of the messages so that mail
: Hudson says, the lucene node is dead, so builds are stuck since 2 days. Does
: anybody knows more?
Uwe: i didn't find any evidence that you opened an infra bug about this, so
i went ahead and created one...
https://issues.apache.org/jira/browse/INFRA-2351
-Hoss
: putting too many irons in the fire, especially non-critical ones. I don't
: see a way to assign it to myself, either I'm missing something or I'm just
: underprivileged G, so if someone would go ahead and assign it to me I'll
: work on it post 3.0.
Jira's ACLs prevent issues from being assigned
: I think the other tests do not catch it because the error only happens
: if the docID is over 8192 (the chunk size that BooleanScorer uses).
: Most of our tests work on smaller sets of docs.
I don't have time to try this out right now, but i wonder if just
modifying the QueryUtils wrap*
: - <property name="javac.source" value="1.4"/>
: - <property name="javac.target" value="1.4"/>
: + <property name="javac.source" value="1.5"/>
: + <property name="javac.target" value="1.5"/>
Isn't that one of the signs of the apocalypse?
-Hoss
: However, there may be something with the fact that Lucene's Analyzers
: automatically close the reader when it's done analyzing. I think this
: encourages people not to explicitly close them, and creates the potential of
: having open fd's if an exception is thrown in the middle of the analysis
: So in 2.9, the Reader is correctly closed, if the TokenStream chain is
: correctly set up, passing all close() calls to the delegate.
Thanks for digging into that Uwe.
So Daniel: The ball is in your court here: what analyzer /
tokenizer+tokenfilters is your app using in the cases where you
: That is my opinion, too. Closing the readers should be done by the caller in
I don't disagree with either of you, but...
: a finally block and not automatically by the IW. I only wanted to confirm,
: that the behaviour of 2.9 did not change. Closing readers two times is not a
...i wanted to
: Thanks Mark for the pointer, I thought somehow that lucene closed them as a
: convenience, I don't know if it did that in previous releases (aka 2.4.1) but
: I'll close them myself from now on.
FWIW: As far as i know, Lucene has always closed the Reader for you when
calling
: - db/bdb fails to compile with 1.4 because of a ClassFormatError in one of
: the bundled libs, so this contrib is in reality 1.5 only.
there's not much we can do about that, no one can blame us if the
dependency requires 1.5
: - Tests of contrib/misc use String.contains(), which is 1.5 only.
: http://people.apache.org/~markrmiller/staging-area/lucene2.9/
+1
-Hoss
: And I done it. Then I noticed this:
:
: http://wiki.apache.org/lucene-java/TopLevelProject
That's about the TLP site (http://lucene.apache.org/); anything in a
subdirectory is handled by the individual project site directories.
according to HowToUpdateTheWebsite, both the versioned
: They are there just not replicated or shown in mirrors?
: http://www.apache.org/dist/lucene/java/
:
:
: It's pretty odd they don't go out to the mirrors - I mean, what's the
: point? Users can't use them to verify anything anyway if they don't have
: them. Anyone know anything about
: but it says the tests only ran for 12 minutes, so it took a day to compile?
The JUnit report on total testing time is just the sum of the timing
reported for each test, and as the testIndexWriter report notes...
: <duration>0.0030</duration>
...
: <errorDetails>Forked Java
: md5sum generates a hash line like this:
: a21f40c4f4fb1c54903e761caf43e1d7 *lucene-2.9.0.tar.gz
:
: Then when you do a check, it knows what file to check against.
:
: The Maven artifacts just list the hash though. So it seems proper to
: remove the second part and just put the hash?
Some
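For what it's worth, the two checksum formats share the same hex digest; the `md5sum` layout merely appends ` *filename` so `md5sum -c` knows which file to verify. A stdlib-only sketch of producing the bare digest (using the RFC 1321 "abc" test vector rather than a real release artifact):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Computes the bare hex MD5 digest -- the part both formats share.
// The "md5sum" format just appends " *<filename>" after this string.
public class Md5Hex {
    public static String md5(byte[] data) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("MD5");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(data)) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // In practice the input would be the release artifact's bytes
        // (e.g. lucene-2.9.0.tar.gz); "abc" is just a known test vector.
        System.out.println(md5("abc".getBytes("US-ASCII")));
    }
}
```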
: Could a git branch make things easier for mega-features like this?
why not just start a subversion branch?
:
: Further steps towards flexible indexing
: ---
:
: Key: LUCENE-1458
: URL:
: Subject: NumericRange Field and LuceneUtils?
: References: 9ac0c6aa090932s69804fa5vbf5590ea6181e...@mail.gmail.com
: In-Reply-To: 9ac0c6aa090932s69804fa5vbf5590ea6181e...@mail.gmail.com
http://people.apache.org/~hossman/#threadhijack
Thread Hijacking on Mailing Lists
When starting a
: which I assume is in seconds. So the great bulk of the ant test
: seems to be spent in various ant housecleaning tasks, trying to verify
: that everything is indeed built, and/or looking for test classes that
: might match the name ShingleFilterTest.
Bear in mind, each contrib is built/tested
http://people.apache.org/~hossman/#java-dev
Please Use java-u...@lucene Not java-...@lucene
Your question is better suited for the java-u...@lucene mailing list ...
not the java-...@lucene list. java-dev is for discussing development of
the internals of the Lucene Java library ... it is *not*
: My question is, I would prefer to track SVN commits to keep track of
: changes, vs. what I'm doing now. This will allow us to stay weeks
: behind a Java release vs. months or years as it is now. However, while
: I'm subscribed to SVN's commits mailing list, I'm not getting all those
:
: releases 2.9. Robert raised a question if we should mark smartcn as
: experimental so that we can change interfaces and public methods etc.
: during the refactoring. Would that make sense for 2.9 or is there no
: such thing as a back compat policy for modules like that.
: Thanks for the help finishing up the javadoc cleanup Hoss - we almost
: have a clean javadoc run - which is fantastic, because I didn't think it
: was going to be possible. I think its just this and 1863 and the run is
: clean.
you obviously haven't tried ant javadocs -Djavadoc.access=private
: you obviously haven't tried ant javadocs -Djavadoc.access=private lately
: ... i'm working on cleaning that up at the moment.
: tried it? I'm not even aware of it. Not mentioned in the release todo.
yeah ... it's admittedly esoteric, but it helps surface bugs in docs on
private level
: i'm thinking we should change the nightly build to set
: -Djavadoc.access=private so we at least expose more errors earlier.
: (assuming we also setup the hudson to report stats on javadoc
: warnings ... i've seen it in other instances but don't know if it requires
: a special plugin)
pulling a crap doc from the release seems sound to me.
alternately: couldn't we just replace it with the output from the
contrib/benchmarker on some of the bigger tests (the full wikipedia ones)
comparing 2.4 with 2.9 ?
then just make it a pre-release TODO item for the future: update that
: Prob want to run it on decent hardware as well (eg maybe I shouldn't do
: it with my 5200 rpm laptop drives).
as long as both are run on the same hardware, and the page lists the
hardware, it's the relative numbers that matter the most.
-Hoss
I noticed that the Release TODO recommends running ant rat-sources to
look for possible errors ... but the rat-sources target is set up to only
analyze the src/java directory -- not any of the other source files
included in the release (contrib, tests, demo, etc...) let alone the full
release
: reason why I did only src/java. I agree we should have it cover all
: sources.
Hmmm... rat is a memory hog, but the rat ant task is ridiculous (probably
because it only supports being passed filesets containing actual files
to analyze, i can't figure out a way to just give it a directory
: from the commandline i'm seeing about what you're seeing, from the ant
correction .. even calling RAT directly (via ant's java) contrib takes a
few minutes -- but it doesn't chew up RAM (it was the uncompressed dist
artifacts that were really fast on the command line i think)
: I wonder
: How much RAM is it taking for you? I've got it scanning
I didn't look into it that hard.
: demo/test/src/contrib and it takes 6 seconds - the mem does appear to
: pop to like 160MB from 70 real quick - what are you seeing for RAM reqs?
are you running from the commandline, or from ant? if
: This prompts the question (in my mind anyway): should source releases include
: third-party binary jars?
if i remember correctly, the historical argument has been that this way
the source release contains everything you need to compile the source.
except that if i remember correctly (and i'm
i notice this file has the full licensing info for ICU...
contrib/collation/lib/ICU-LICENSE.txt
...but isn't there also supposed to be at least a one line mention of
this in the top level NOTICE.txt file?
-Hoss
can someone explain this to me...
http://svn.apache.org/viewvc/lucene/java/trunk/contrib/snowball/LICENSE.txt?view=co
http://svn.apache.org/viewvc/lucene/java/trunk/contrib/snowball/SNOWBALL-LICENSE.txt?view=co
...that first one seems like a (very old) mistake.
-Hoss
: FWIW, committers can get Hudson accounts. See
are you sure about that? I never understood the reason, but the wiki has
always said...
if you are a member of an ASF PMC, get in touch and we'll set you up with
an account.
: http://wiki.apache.org/general/Hudson. Committers can also get
: There is a discussion about this at:
:
:http://issues.apache.org/jira/browse/LUCENE-740
Hmmm... ok. even with that in mind, I don't understand why we need
./contrib/snowball/LICENSE.txt -- all of (lucene) source code is already
covered by ./LICENSE.txt right?
-Hoss
: I'm curious if there is a meetup this year @ ApacheCon US similar to
: the one at ApacheCon Europe earlier this year?
There's one on the schedule for tuesday night...
http://wiki.apache.org/apachecon/ApacheMeetupsUs09
I've updated the Lucene wiki page about apachecon (originally created
for
: Grant does the cutover to hudson.zones still invoke the nightly.sh? I
: thought it did? (But then looking at the console output from the
: build, I can't correlate it..).
nightly.sh is not run, there's a complicated set of shell commands
configured in hudson that gets run instead. (why it's
As a general rule: if the javadoc command generates a warning, it's a
pretty good indication that the resulting javadocs aren't going to look
the way you expect. (there may be lots of places where the javadocs look
wrong and no warning is logged -- but the reverse is almost never true)
The
: I don't know why Entry has int type and String locale, either. I
: agree it'd be cleaner for FieldSortedHitQueue to store these on its
: own, privately.
:
: Note that FieldSortedHitQueue is deprecated in favor of
: FieldValueHitQueue, and that FieldValueHitQueue doesn't cache
: comparators
Hey everybody, over in LUCENE-1749 i'm trying to make sanity checking of
the FieldCache possible, and i'm banging my head into a few walls, and
hoping people can help me fill in the gaps about how sorting w/FieldCache
is *supposed* to work.
For starters: i was getting confused why some
: I wonder: if we run an svn commit . tags/lucene_2_4.../src whether
: svn will do this as a single transaction? Because . (the trunk
: checkout) and tags/lucene_2_4... are two separate svn checkouts. (I
: haven't tested). If it does, then I think this approach is cleanest?
you can't have an
[
https://issues.apache.org/jira/browse/LUCENE-1749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12738721#action_12738721
]
Chris Hostetter commented on LUCENE-1749:
-
: I've got one more draft here
: changes to just go per reader for each doc - and a couple other unrelated
: tiny tweaks.
FWIW: now that this issue has uncovered a few genuine bugs in code (as
opposed to just tests being odd) it would probably be better to track
those bugs and their patches in separate issues that can be
: I didn't realize the nightly build runs the tests twice (with w/o
: clover); I agree, running only with clover seems fine?
i'm not caught up on this issue, but i happen to notice this comment in
email.
the reason the tests are run twice is because in between the two runs we
package up the
: In the insanity check, when you drop into the sequential subreaders - I
: think its got to be recursive - you might have a multi at the top with
: other subs, or any combo thereof. I can add to next patch.
i don't have the code in front of me, but i thought i was adding the sub
readers to
: SortField.equals() and hashCode() contain a hint:
:
: /** Returns true if <code>o</code> is equal to this. If a
:* {@link SortComparatorSource} (deprecated) or {@link
:* FieldCache.Parser} was provided, it must properly
:* implement equals (unless a singleton is always
: We prob want a javadoc warning of some kind too though right? It's not
: immediately obvious that when you switch to using remote, you better
: have implemented some form of equals/hashcode or you will have a memory
: leak.
Hmmm, now i'm confused.
Uwe's comment in the issue said This is
: Is the assistance restricted to people presenting and committers?
nope...
http://www.apache.org/travel/index.html
-Hoss
: LUCENE-1749 FieldCache introspection API Unassigned 16/Jul/09
:
: You have time to work on this Hoss?
i'd have more time if there weren't so many darn solr-user questions that
no one else answers.
The meat of the patch (adding an API to inspect the cache) could be
committed as is today --
: OK, I agree this makes sense and would be good for major features.
:
: Btw: For the new TokenStream API I wrote in the original patch (JIRA-1422) a
: quite elaborate section in the package.html of the analysis package.
Yeah ... whenever javadocs make sense, they're probably better than the wiki
(Please remain calm, this is just a request for clarification/summation)
As I slowly catch up on the 9000+ Lucene related emails that I accumulated
during my 2 month hiatus, I notice several rather large threads (i think
totally ~400 messages) on the subject of our back compat policy (where
: Done. Thanks for testing!
I hate to be a buzz kill, but all this really does is replace the outdated
javadoc generated index.html file with a new one that points at the
subdirs we've created ... I don't see how this solves the root problem:
Hudson doesn't delete the old files
On Tue, 9 Jun 2009, Vico Marziale wrote:
: highly-multicore processors to speed computer forensics tools. For the
: moment I am trying to figure out what the most common performance bottleneck
: inside of Lucene itself is. I will then take a crack at porting some (small)
: portion of Lucene to
: The javadocs state clearly it must be Map<String,String>. Plus, the
: type checking is in fact enforced (you hit an exception if you violate
: it), dynamically (like Python).
:
: And then I was thinking with 1.5 (3.0 -- huh, neat how it's exactly
: 2X) we'd statically type it (change Map to
: We have a number of sources that don't have eol-style set to native...
This should also serve as a reminder for all committers to make sure they
have sane auto-prop configs for their svn client when svn adding files
-- SVN doesn't have any way to configure these on the server side, so
: But then when you retrieve your metadata it's converted to String - String.
Correct ... the documentation should make it clear that what gets
persisted is a String, but the method of giving the String to the API is
by passing an Object that will be toString()ed.
(Aside: it would be really
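A plain-Java stand-in can make that behavior concrete (the `Metadata` class below is hypothetical, not the actual API under discussion): any Object goes in, but only its toString() form is stored and retrieved:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for the metadata API under discussion: callers
// may hand in any Object, but only its toString() form is persisted.
public class Metadata {
    private final Map<String, String> stored = new HashMap<String, String>();

    public void put(String key, Object value) {
        stored.put(key, value.toString()); // value is toString()ed on the way in
    }

    public String get(String key) {
        return stored.get(key); // ...so it always comes back as a String
    }

    public static void main(String[] args) {
        Metadata m = new Metadata();
        m.put("maxDoc", Integer.valueOf(42));
        System.out.println(m.get("maxDoc")); // a String, not an Integer
    }
}
```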
: If the user serializes object, opens the index on another machine where
: different versions of these classes are installed and he did not use
: serialVersionId to create a version info in index. As long as you only
: serialize standard Java classes like String, HashMap,... you will have no
:
: I'm back to getting duplicate emails. Every email sent on LUCENE-1708 was
: sent to my email, and java-dev. So this really looks like it's a JIRA
: project setting, since I only get these duplicates on issues I open. Am I
: the only one?
That's the way Jira works by default... it sends an
: We had some discussions about it, the easiest is, to set the bootclasspath
: in the javac/ task to an older rt.jar during compilation. Because this
: needs updates for e.g. Hudson (rt.jar missing) we said, that the one, who
: releases the final version should simply check this before on the
:
: Then during build we can package up certain combinations. I think
: there should be sub-kitchen-sink jars by area, eg a jar that contains
: all analyzers/tokenstreams/filters, all queries/filters, etc.
Or just make it trivial to get all jars that fit a given profile w/o
actually merging
: We've been doing this using just one source tree (like in Lucene), and
: instead ensuring the separation using the build system. We did not, like you
I think you are misunderstanding my previous comment ... Lucene-Java does
not currently have one source tree in the sense that someone else
: If there are any serious moves to reorganize things, we should at least
: consider the benefits of maven.
+1
we can certainly do a lot to improve things just by refactoring stuff from
core into contrib, and improving the visibility of contribs and
documentation about contribs -- but if we're
After stirring things up, and then being off-list for ~10 days, I'm in an
interesting position coming back to this thread and seeing the discussion
*after* it essentially ended, with a lot of semi-consensus but no clear
sense of hard and fast resolution or plan of action.
FWIW, here are the
: Every now and again, someone emails me off list asking to be removed from the
: list and I always forward them to Erik, b/c I know he is a moderator.
: However, I was wondering who else is besides Erik, since, AIUI, there needs to
: be at least 3 in ASF-land, right?
:
: So, if you're a list
: My vote for contrib would depend on the state of the code - if it passes all
: the tests and is truly back compat, and is not crazy slower, I don't see why
: we don't move it in right away depending on confidence levels. That would
: ensure use and attention that contrib often misses. The old
(resending msg from earlier today during @apache mail outage -- i didn't
get a copy from the list, so i'm assuming no one did)
-- Forwarded message --
Date: Fri, 20 Mar 2009 15:29:13 -0700 (PDT)
: TopDocCollector's (TDC) implementation of collect() seems a bit problematic
:
(resending msg from earlier today during @apache mail outage -- i didn't
get a copy from the list, so i'm assuming no one did)
: Date: Fri, 20 Mar 2009 15:30:27 -0700 (PDT)
:
: http://people.apache.org/~hossman/#java-dev
: Please Use java-u...@lucene Not java-...@lucene
:
: Your question is
(resending msg from earlier today during @apache mail outage -- i didn't
get a copy from the list, so i'm assuming no one did)
: Date: Fri, 20 Mar 2009 15:30:59 -0700 (PDT)
:
: http://people.apache.org/~hossman/#java-dev
: Please Use java-u...@lucene Not java-...@lucene
:
: Your question is
(resending msg from earlier today during @apache mail outage -- i didn't
get a copy from the list, so i'm assuming no one did)
: Date: Fri, 20 Mar 2009 16:51:05 -0700 (PDT)
:
: : I think we should move TrieRange* into core before 2.9?
:
: -0
:
: I think we should try to move more things
: What I would LOVE is if I could do it in a standard Lucene search like I
: mentioned earlier.
: Hit.doc[0].getHitTokenList() :confused:
: Something like this...
The Query/Scorer APIs don't provide any mechanism for information like
that to be conveyed back up the call chain -- mainly because
: I can implement the functionality just using the data tables from the Unicode
: Consortium, including http://www.unicode.org/reports/tr39, but there's still
: the issue of the Unicode data license and its compatibility with Apache 2.0.
:
: Does anybody know whether
: TrieRange fields is needed), I again thought about the issue. Maybe we could
: change FieldCache to only put the very first term from a field of the
: document into the cache, enabling sorting against this field. If possible,
: this would be very nice and in my opinion better than the idea
: but i need the result by the word place in the sentence like this:
:
: bbb text 4 , text 2 bbb text , text 1 ok ok ok bbb ..
1) SpanFirstQuery should work, it scores higher the closer the nested
query is to the start -- just use a really high limit, if you are only
dealing with
: Subject: Jukka's not on Who We Are yet
:
: Jukka's not on http://lucene.apache.org/java/docs/whoweare.html
That list is specifically the Lucene-Java committers. Jukka is listed on
the PMC list...
http://lucene.apache.org/who.html
-Hoss
: I don't know how others feel, but I'd personally like to stop the
: practice of making more Analyzer classes whenever a new TokenFilter is
: added.
+1
-Hoss
: I'm OK with LIA2 on the front page - as Erik suggests it does help lend
: credibility to a project.
+1 to more visibility to books focused on lucene on official www site
pages (not just the wiki)
+1 to prominent display via a section on the main page like wicket
currently has, with
: Also in the future please post your questions to java-dev@lucene.apache.org
I believe jason meant to type java-u...@lucene...
http://people.apache.org/~hossman/#java-dev
Please Use java-u...@lucene Not java-...@lucene
Your question is better suited for the java-u...@lucene mailing list ...
: By allowing Random to randomly seed itself, we effectively test a much
: much larger space, ie every time we all run the test, it's different. We can
: potentially cast a much larger net than a fixed seed.
i guess i'm just in favor of less randomness and more iterations.
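One middle ground between the two positions (a sketch, not something proposed in the thread) is to randomize the seed per run but log it, so any failure can be replayed deterministically:

```java
import java.util.Arrays;
import java.util.Random;

// Sketch of seed-logging: each run explores a fresh seed, but the seed is
// printed up front, so the exact failing run can be reproduced later.
public class SeededTest {
    public static long pickSeed() {
        long seed = System.nanoTime(); // fresh per run
        System.err.println("test seed=" + seed + " (reuse to reproduce a failure)");
        return seed;
    }

    public static int[] randomInts(long seed, int n) {
        Random r = new Random(seed);
        int[] out = new int[n];
        for (int i = 0; i < n; i++) {
            out[i] = r.nextInt();
        }
        return out;
    }

    public static void main(String[] args) {
        long seed = pickSeed();
        // Same seed => identical sequence, which is what makes replay work.
        System.out.println(Arrays.equals(
            randomInts(seed, 10), randomInts(seed, 10)));
    }
}
```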
: Fixing the bug is
: I think, the outdated docs should be removed from the server to also
: disappear from search engines.
:
: +1
that may be easier said than done.
Each build is done in a clean workspace, and then a config option in
hudson tells it what to copy to the main javadoc URL...
: Wiki is updated w/ the info. Basically, it runs nightly. If you want it done
: more often, I can change it.
doesn't matter to me ... just wasn't sure if there was a problem since i
didn't know when to expect it. it all looks fine.
-Hoss
According to this doc...
http://wiki.apache.org/lucene-java/HowToUpdateTheWebsite
...Grant's crontab is used to update /www/lucene.apache.org/java/docs
from...
http://svn.apache.org/repos/asf/lucene/java/site/docs
...but the wiki page isn't very explicit about how often that cron script
Catching up on my holiday email, I don't think there were any replies to
this question yet.
The low level file formats used by Lucene are an area I don't have
time/expertise to follow carefully, but if i remember correctly the
consensus is/was to move more towards pure (byte[] data, int
: Has anyone explored ways to have ant test take advantage of concurrency?
: Since each JUnit test source (TestXXX.java) is independent, this should be
: possible.
: I'd love to have ant test test-tag run faster on an N-core machine.
I've seen some attempts at a generalized solution to this in
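At the JVM level the core idea is simple: independent test classes can be farmed out to a fixed thread pool. The sketch below uses plain `java.util.concurrent` with made-up task names; wiring this into ant's `<junit>` forked VMs is the actual hard part:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: run independent "test" tasks on an N-thread pool, collecting
// results in submission order (invokeAll preserves input order).
public class ParallelRunner {
    public static List<String> runAll(List<Callable<String>> tests, int threads)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<String> results = new ArrayList<String>();
            for (Future<String> f : pool.invokeAll(tests)) {
                results.add(f.get()); // rethrows a task's exception, like a test failure
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        List<Callable<String>> tests = new ArrayList<Callable<String>>();
        // Task names are made up; each stands in for one TestXXX class.
        for (final String name : new String[] {"TestFoo", "TestBar", "TestBaz"}) {
            tests.add(new Callable<String>() {
                public String call() { return name + ": OK"; }
            });
        }
        System.out.println(runAll(tests, Runtime.getRuntime().availableProcessors()));
    }
}
```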
I'm happy to announce that in recognition of his efforts in moving
forward with creating a spatial searching contrib (and his ongoing
experience as both a Solr committer and PMC member) The PMC has voted
to make Ryan McKinley a Lucene-Java Contrib and Documentation committer.
Congrats Ryan,
: 1) Use a modified SpanNearQuery. If we assume that country + phone will always
: be one token, we can rely on the fact that the positions of 'au' and '5678' in
: Fred's document will be different.
:
:SpanQuery q1 = new SpanTermQuery(new Term(addresscountry, au));
:SpanQuery q2 = new