: I disabled the account by assigning a dummy e-mail address and giving it a
: random password.
:
: I was not able to unassign the issues, as most issues were "Closed",
: where no modifications can be done anymore. Reopening and changing
Uwe: it may be too late (depending on whether you remember the dummy
: > Is it possible to change it? If not, what is the policy here? To open a
: > new issue and close the old one?
...
: In this case, that would mean either closing this issue and opening a new one,
: or taking the discussion to the mailing list where subject headers may be
: modified as th
: No, no, no, Lucene still has no need for maven or ivy for dependency
: management.
: We can just hack around all issues with ant scripts.
it doesn't really matter if it's ant scripts, or ivy declarations, or
maven pom entries -- the point is the same.
We can't distribute the jars, but we can d
: I was wondering yesterday why the required libs aren't checked in to SVN? We
Licensing issues.
we can't redistribute them (but we can provide the build.xml code to fetch
them)
-Hoss
: In addition to what Shai mentioned, I wanted to say that there are
: other oddities about how the contrib tests run in ant. For example,
: I'm not sure why we create the junitfailed.flag files (I think it has
: something to do with detecting at the top level that a single contrib
: failed).
Correct ..
: build and nicely gets all dependencies to Lucene and Tika whenever I build
: or release, no problem there and certainly no need to have it merged into
: Lucene's svn!
The key distinction is that Solr is already in "Lucene's svn" -- The
question is how to reorg things in a way that makes it easier
: with, "if it didn't happen on the lists, it didn't happen". It's the same as
+1
But as the IRC channel gets used more and more, it would *also* be nice if
there was an archive of the IRC channel so that there is a place to go
look to understand the back story behind an idea once it's synthesi
: prime-time as the new solr trunk! Lucene and Solr need to move to a
: common trunk for a host of reasons, including single patches that can
: cover both, shared tags and branches, and shared test code w/o a test
: jar.
Without a clearer picture of how people envision development "overhead"
wor
: Subject: [DISCUSS] Do away with Contrib Committers and make core committers
+1
-Hoss
: nice of the wiki software to change every single line!
this type of thing seems to happen anytime you edit in GUI mode for the
first time since the MoinMoin upgrade a few months back -- it's normalizing
all the whitespace.
-Hoss
: I prefer to see "tags" used for what they are: a place to park what was
: actually released; it shouldn't be used for testing or have its content
: changed dynamically.
I have no opinion about the rest of this thread (changing the back compat
testing to use a specific revision on the previous release branch
: https://issues.apache.org/jira/browse/LUCENENET-331). This begs the
: question, if Lucene.Net takes just this one patch, then Lucene.Net 2.9.1 is
: now 2.9.1.1 (which I personally don't like to see happening as I prefer to
: see a 1-to-1 release match).
As a general comment on this topic: I wo
: Why do I see \java\tags\lucene_*_back_compat_tests_2009*\ directories (well
: over 100 so far) when I SVN update?
Are you saying you have "http://svn.apache.org/repos/asf/lucene/java/"
checked out in its entirety?
That seems ... problematic. New tags/branches could be created at anytime
: I configured hudson to simply run the hudson.sh from the nightly checkout.
+1
-Hoss
: What to do now, any votes on adding the missing maven artifacts for
: fast-vector-highlighter to 2.9.1 and 3.0.0 on the apache maven repository?
It's not even clear to me that anything special needs to be done before
publishing those jars to maven. 2.9.1 and 3.0.0 were already voted on and
: Hudson says the lucene node is dead, so builds have been stuck for 2 days. Does
: anybody know more?
Uwe: i didn't find any evidence that you opened an infra bug about this, so
i went ahead and created one...
https://issues.apache.org/jira/browse/INFRA-2351
-Hoss
: I signed up for a login, and voted for this issue. If others did the same,
: that might help.
if you read the comments in the issue, there's really nothing that can be
fixed in Jira to make this work better -- jira already puts an
In-Reply-To header on all of the messages so that mail clients
: putting too many irons in the fire, especially non-critical ones. I don't
: see a way to assign it to myself, either I'm missing something or I'm just
: underprivileged, so if someone would go ahead and assign it to me I'll
: work on it post 3.0.
Jira's ACLs prevent issues from being assigned t
: I think the other tests do not catch it because the error only happens
: if the docID is over 8192 (the chunk size that BooleanScorer uses).
: Most of our tests work on smaller sets of docs.
I don't have time to try this out right now, but i wonder if just
modifying the QueryUtils wrap* fun
Can someone smarter than me review the patch in LUCENE-1974...
https://issues.apache.org/jira/browse/LUCENE-1974
...on the surface this seems to suggest a pretty serious error somewhere
in the low level scoring code when a BooleanQuery is involved.
(If this really is a bug, and not just me
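A rough sketch of how one might exercise this kind of problem, assuming the 2.9-era API (the field name, doc count, and query here are made up for illustration): index well past BooleanScorer's 8192-doc chunk and check that a simple disjunction still counts every doc.

    import org.apache.lucene.analysis.WhitespaceAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.BooleanClause;
    import org.apache.lucene.search.BooleanQuery;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.TermQuery;
    import org.apache.lucene.search.TopDocs;
    import org.apache.lucene.store.RAMDirectory;

    public class BooleanScorerChunkRepro {
      public static void main(String[] args) throws Exception {
        RAMDirectory dir = new RAMDirectory();
        IndexWriter w = new IndexWriter(dir, new WhitespaceAnalyzer(),
                                        IndexWriter.MaxFieldLength.UNLIMITED);
        // push docIDs well past the 8192 chunk boundary
        for (int i = 0; i < 20000; i++) {
          Document doc = new Document();
          doc.add(new Field("f", "a b", Field.Store.NO, Field.Index.ANALYZED));
          w.addDocument(doc);
        }
        w.close();

        // a disjunction of two SHOULD clauses
        BooleanQuery bq = new BooleanQuery();
        bq.add(new TermQuery(new Term("f", "a")), BooleanClause.Occur.SHOULD);
        bq.add(new TermQuery(new Term("f", "b")), BooleanClause.Occur.SHOULD);

        IndexSearcher searcher = new IndexSearcher(dir, true);
        TopDocs td = searcher.search(bq, 10);
        System.out.println("totalHits=" + td.totalHits + " (expected 20000)");
        searcher.close();
      }
    }

Whether the search actually takes the BooleanScorer path (rather than BooleanScorer2) depends on the clause mix and the collector, so treat this only as a starting point, not as the LUCENE-1974 test itself.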
: However, there may be something to the fact that Lucene's Analyzers
: automatically close the reader when they're done analyzing. I think this
: encourages people not to explicitly close them, and creates the potential of
: having open fd's if an exception is thrown in the middle of the analysis or
: > -
: > -
: > +
: > +
Isn't that one of the signs of the apocalypse?
-Hoss
: That is my opinion, too. Closing the readers should be done by the caller in
I don't disagree with either of you, but...
: a finally block and not automatically by the IW. I only wanted to confirm
: that the behaviour of 2.9 did not change. Closing readers two times is not a
...i wanted to t
: So in 2.9, the Reader is correctly closed, if the TokenStream chain is
: correctly set up, passing all close() calls to the delegate.
Thanks for digging into that Uwe.
So Daniel: The ball is in your court here: what analyzer /
tokenizer+tokenfilters is your app using in the cases where you se
: Thanks Mark for the pointer, I thought somehow that lucene closed them as a
: convenience, I don't know if it did that in previous releases (aka 2.4.1) but
: I'll close them myself from now on.
FWIW: As far as i know, Lucene has always closed the Reader for you when
calling addDocument/updateD
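For anyone following along, a minimal sketch of the "close it yourself" pattern being recommended here (the field name and file handling are invented for illustration); double-closing a FileReader is harmless, so this is safe even if Lucene also closes it:

    import java.io.FileReader;
    import java.io.Reader;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.index.IndexWriter;

    public class ExplicitReaderClose {
      // index one file, closing the Reader ourselves instead of relying on
      // the analyzer/IndexWriter to do it -- an exception mid-analysis can
      // no longer leak the file descriptor
      static void addFile(IndexWriter writer, String path) throws Exception {
        Reader reader = new FileReader(path);
        try {
          Document doc = new Document();
          doc.add(new Field("body", reader));   // tokenized field fed from the Reader
          writer.addDocument(doc);
        } finally {
          reader.close();
        }
      }
    }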
: > They are there just not replicated or shown in mirrors?
: > http://www.apache.org/dist/lucene/java/
: >
:
: It's pretty odd they don't go out to the mirrors - I mean, what's the
: point? Users can't use them to verify anything anyway if they don't have
: them. Anyone know anything about
: And I did it. Then I noticed this:
:
: http://wiki.apache.org/lucene-java/TopLevelProject
That's about the TLP site (http://lucene.apache.org/); anything in a
subdirectory is handled by the individual project site directories.
according to HowToUpdateTheWebsite, both the versioned & unversion
: http://people.apache.org/~markrmiller/staging-area/lucene2.9/
+1
-Hoss
: - db/bdb fails to compile with 1.4 because of a ClassFormatError in one of
: the bundled libs, so this contrib is in reality 1.5 only.
there's not much we can do about that; no one can blame us if the
dependency requires 1.5
: - Tests of contrib/misc use String.contains(), which is 1.5 only.
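For reference, the 1.4-safe spelling is indexOf (String.contains(CharSequence) only arrived in Java 5); a trivial illustration:

    public class ContainsOn14 {
      public static void main(String[] args) {
        String s = "org.apache.lucene.misc";
        // compiles only on Java 5+:
        System.out.println(s.contains("lucene"));
        // equivalent, and fine on Java 1.4:
        System.out.println(s.indexOf("lucene") >= 0);
      }
    }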
: but it says the tests only ran for 12 minutes, so it took a day to compile?
The JUnit report on total testing time is just the sum of the timing
reported for each test, and as the TestIndexWriter report notes...
: > 0.0030
...
: > Forked Java VM exited abnormally. Please
: md5sum generates a hash line like this:
: a21f40c4f4fb1c54903e761caf43e1d7 *lucene-2.9.0.tar.gz
:
: Then when you do a check, it knows what file to check against.
:
: The Maven artifacts just list the hash though. So it seems proper to
: remove the second part and just put the hash?
Some back
: Subject: NumericRange Field and LuceneUtils?
: References: <9ac0c6aa090932s69804fa5vbf5590ea6181e...@mail.gmail.com>
: In-Reply-To: <9ac0c6aa090932s69804fa5vbf5590ea6181e...@mail.gmail.com>
http://people.apache.org/~hossman/#threadhijack
Thread Hijacking on Mailing Lists
When starting
: Could a git branch make things easier for mega-features like this?
why not just start a subversion branch?
:
: > Further steps towards flexible indexing
: > ---
: >
: > Key: LUCENE-1458
: > URL: https://issues.apache.org/jira
: which I assume is in seconds. So the great bulk of the "ant test"
: seems to be spent in various ant housecleaning tasks, trying to verify
: that everything is indeed built, and/or looking for test classes that
: might match the name "ShingleFilterTest".
Bear in mind, each contrib is built/test
http://people.apache.org/~hossman/#java-dev
Please Use "java-u...@lucene" Not "java-...@lucene"
Your question is better suited for the java-u...@lucene mailing list ...
not the java-...@lucene list. java-dev is for discussing development of
the internals of the Lucene Java library ... it is *not*
: My question is, I would prefer to track SVN commits to keep track of
: changes, vs. what I'm doing now. This will allow us to stay weeks
: behind a Java release vs. months or years as it is now. However, while
: I'm subscribed to SVN's commits mailing list, I'm not getting all those
: comm
: There is a discussion about this at:
:
:http://issues.apache.org/jira/browse/LUCENE-740
Hmmm... ok. even with that in mind, I don't understand why we need
./contrib/snowball/LICENSE.txt -- all of the (lucene) source code is already
covered by ./LICENSE.txt, right?
-Hoss
: FWIW, committers can get Hudson accounts. See
are you sure about that? I never understood the reason, but the wiki has
always said...
"if you are a member of an ASF PMC, get in touch and we'll set you up with
an account."
: http://wiki.apache.org/general/Hudson. Committers can also get L
can someone explain this to me...
http://svn.apache.org/viewvc/lucene/java/trunk/contrib/snowball/LICENSE.txt?view=co
http://svn.apache.org/viewvc/lucene/java/trunk/contrib/snowball/SNOWBALL-LICENSE.txt?view=co
...that first one seems like a (very old) mistake.
-Hoss
i notice this file has the full licensing info for ICU...
contrib/collation/lib/ICU-LICENSE.txt
...but isn't there also supposed to be at least a one-line mention of
this in the top level NOTICE.txt file?
-Hoss
: This prompts the question (in my mind anyway): should source releases include
: third-party binary jars?
if i remember correctly, the historical argument has been that this way
the source release contains everything you need to compile the source.
except that if i remember correctly (and i'm v
: > from the commandline i'm seeing about what you're seeing, from the ant
correction ... even calling RAT directly (via ant's ) contrib takes a
few minutes -- but it doesn't chew up RAM (it was the uncompressed dist
artifacts that were really fast on the command line i think)
: I wonder if yo
: How much RAM is it taking for you? I've got it scanning
I didn't look into it that hard.
: demo/test/src/contrib and it takes 6 seconds - the mem does appear to
: pop to like 160MB from 70 real quick - what are you seeing for RAM reqs?
are you running from the commandline, or from ant? if yo
: reason why I did only src/java. I agree we should have it cover all
: sources.
Hmmm... rat is a memory hog, but the rat ant task is ridiculous (probably
because it only supports being passed filesets containing actual files
to analyze; i can't figure out a way to just give it a directory (
I noticed that the Release TODO recommends running "ant rat-sources" to
look for possible errors ... but the rat-sources target is set up to only
analyze the src/java directory -- not any of the other source files
included in the release (contrib, tests, demo, etc...) let alone the full
release a
: Prob want to run it on decent hardware as well (eg maybe I shouldn't do
: it with my 5200 rpm laptop drives).
as long as both are run on the same hardware, and the page lists the
hardware, it's the relative numbers that matter the most.
-Hoss
pulling a crap doc from the release seems sound to me.
alternately: couldn't we just replace it with the output from the
contrib/benchmarker on some of the bigger tests (the full wikipedia ones)
comparing 2.4 with 2.9 ?
then just make it a pre-release TODO item for the future: update that page
: I still have the same thought though - why not? Unless it takes a lot
: longer to parse, why hide bad JavaDoc? We may maintain public JavaDoc
: for users, but we maintain private JavaDoc for developers as well.
if we default it to private, the release will wind up advertising all of
the privat
: True enough - I don't think it's super important for release that the
: private javadocs are 100% valid. But it's nice if it is regardless :)
FWIW: i wasn't trying to suggest that it was, but it helps surface things
like LUCENE-1864 which can be really confusing when you start looking at
long
: > i'm thinking we should change the nightly build to set
: > -Djavadoc.access=private so we at least expose more errors earlier.
: > (assuming we also setup the hudson to report stats on javadoc
: > warnings ... i've seen it in other instances but don't know if it requires
: > a special plug
: > you obviously haven't tried "ant javadocs -Djavadoc.access=private" lately
: > ... i'm working on cleaning that up at the moment.
: tried it? I'm not even aware of it. Not mentioned in the release todo.
yeah ... it's admittedly esoteric, but it helps surface bugs in docs on
private level m
: Thanks for the help finishing up the javadoc cleanup Hoss - we almost
: have a clean javadoc run - which is fantastic, because I didn't think it
: was going to be possible. I think it's just this and 1863 and the run is
: clean.
you obviously haven't tried "ant javadocs -Djavadoc.access=private"
: releases > 2.9. Robert raised the question of whether we should mark smartcn
: as experimental so that we can change interfaces and public methods etc.
: during the refactoring. Would that make sense for 2.9, or is there no
: such thing as a back compat policy for modules like that?
http://wiki.apache.org
: I'm curious if there is a meetup this year @ ApacheCon US similar to
: the one at ApacheCon Europe earlier this year?
There's one on the schedule for tuesday night...
http://wiki.apache.org/apachecon/ApacheMeetupsUs09
I've updated the Lucene wiki page about apachecon (originally created
for p
: Grant does the cutover to hudson.zones still invoke the nightly.sh? I
: thought it did? (But then looking at the console output from the
: build, I can't "correlate" it..).
nightly.sh is not run; there's a complicated set of shell commands
configured in hudson that gets run instead. (why it'
As a general rule: if the javadoc command generates a warning, it's a
pretty good indication that the resulting javadocs aren't going to look
the way you expect. (there may be lots of places where the javadocs look
wrong and no warning is logged -- but the reverse is almost never true)
The o
: Hoss Man uses Chris Hostetter in Changes? Weak. I'll update it before
: committing.
blame Hatcher, he started it...
http://svn.apache.org/viewvc/lucene/java/trunk/CHANGES.txt?r1=150654&r2=150658
Once i became a committer, I just followed the only rule of CHANGES.txt:
"Maint
: I don't know why Entry has "int type" and "String locale", either. I
: agree it'd be cleaner for FieldSortedHitQueue to store these on its
: own, privately.
:
: Note that FieldSortedHitQueue is deprecated in favor of
: FieldValueHitQueue, and that FieldValueHitQueue doesn't cache
: comparators
Hey everybody, over in LUCENE-1749 i'm trying to make sanity checking of
the FieldCache possible, and i'm banging my head into a few walls, and
hoping people can help me fill in the gaps about how sorting w/FieldCache
is *supposed* to work.
For starters: i was getting confused why some debugg
: I wonder: if we run an "svn commit . tags/lucene_2_4.../src" whether
: svn will do this as a single transaction? Because "." (the trunk
: checkout) and tags/lucene_2_4... are two separate svn checkouts. (I
: haven't tested). If it does, then I think this approach is cleanest?
you can't have
[ https://issues.apache.org/jira/browse/LUCENE-1749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12738721#action_12738721 ]
Chris Hostetter commented on LUCENE-1749:
-
: I've got one more draft
: changes to just go per reader for each doc - and a couple other unrelated
: tiny tweaks.
FWIW: now that this issue has uncovered a few genuine "bugs" in code (as
opposed to just tests being odd) it would probably be better to track
those bugs and their patches in separate issues that can be
: In the insanity check, when you drop into the sequential subreaders - I
: think it's got to be recursive - you might have a multi at the top with
: other subs, or any combo thereof. I can add to next patch.
i don't have the code in front of me, but i thought i was adding the sub
readers to th
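The recursion being described might look something like this (a sketch only, not the LUCENE-1749 patch; it assumes the 2.9-era IndexReader.getSequentialSubReaders(), which returns null for an atomic reader):

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.lucene.index.IndexReader;

    public class SubReaderUtil {
      // recursively flatten a reader into its atomic leaves; a sub-reader may
      // itself be a Multi*Reader with its own sub-readers
      public static void gatherLeaves(IndexReader r, List<IndexReader> out) {
        IndexReader[] subs = r.getSequentialSubReaders();
        if (subs == null) {
          out.add(r);
        } else {
          for (int i = 0; i < subs.length; i++) {
            gatherLeaves(subs[i], out);
          }
        }
      }
    }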
: I didn't realize the nightly build runs the tests twice (with & w/o
: clover); I agree, running only with clover seems fine?
i'm not caught up on this issue, but i happen to notice this comment in
email.
the reason the tests are run twice is because in between the two runs we
package up the
: SortField.equals() and hashCode() contain a hint:
:
: /** Returns true if o is equal to this. If a
: * {@link SortComparatorSource} (deprecated) or {@link
: * FieldCache.Parser} was provided, it must properly
: * implement equals (unless a singleton is always used). */
:
:
: We prob want a javadoc warning of some kind too though right? Its not
: immediately obvious that when you switch to using remote, you better
: have implemented some form of equals/hashcode or you will have a memory
: leak.
Hmmm, now i'm confused.
Uwe's comment in the issue said "This is note
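To make the javadoc hint concrete, a hypothetical parser kept as a singleton (the class name and parsing logic are invented for illustration) -- every SortField built with it then maps to the same cache entry instead of piling up new ones:

    import org.apache.lucene.search.FieldCache;

    public class HexIntParser implements FieldCache.IntParser {
      public static final HexIntParser INSTANCE = new HexIntParser();
      private HexIntParser() {}

      // parse field values stored as hex strings
      public int parseInt(String value) {
        return Integer.parseInt(value, 16);
      }
    }

If a singleton isn't practical, overriding equals()/hashCode() achieves the same thing, which is exactly what the quoted javadoc is warning about.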
: LUCENE-1749 FieldCache introspection API Unassigned 16/Jul/09
:
: You have time to work on this Hoss?
i'd have more time if there weren't so many darn solr-user questions that
no one else answers.
The meat of the patch (adding an API to inspect the cache) could be
committed as-is today --
: Is the assistance restricted to people presenting and committers?
nope...
http://www.apache.org/travel/index.html
-Hoss
: OK, I agree this makes sense and would be good for major features.
:
: Btw: For the new TokenStream API I wrote in the original patch (JIRA-1422) a
: quite elaborate section in the package.html of the analysis package.
Yeah ... whenever javadocs make sense, they're probably better than wiki
d
: Couldn't we just update the description of the Jira issue itself so that it
: reflects the current state of the patch? Often the initial description of a
: Jira issue is never updated after the issue is created, even though the patch
: and goals changed as discussions happened. I think that would
: Done. Thanks for testing!
I hate to be a buzz kill, but all this really does is replace the outdated
javadoc generated index.html file with a new one that points at the
subdirs we've created ... I don't see how this solves the root problem:
Hudson doesn't delete the old files
https://h
(Please remain calm, this is just a request for clarification/summation)
As I slowly catch up on the 9000+ Lucene-related emails that I accumulated
during my 2 month hiatus, I notice several rather large threads (i think
~400 messages in total) on the subject of our back compat policy (where
i
On Tue, 9 Jun 2009, Vico Marziale wrote:
: highly-multicore processors to speed computer forensics tools. For the
: moment I am trying to figure out what the most common performance bottleneck
: inside of Lucene itself is. I will then take a crack at porting some (small)
: portion of Lucene to CUD
: I'm back to getting duplicate emails. Every email sent on LUCENE-1708 was
: sent to my email, and java-dev. So this really looks like it's a JIRA
: project setting, since I only get these duplicates on issues I open. Am I
: the only one?
That's the way Jira works by default... it sends an emai
: If the user serializes an object, opens the index on another machine where
: different versions of these classes are installed, and did not use
: serialVersionUID to create version info in the index. As long as you only
: serialize standard Java classes like String, HashMap,... you will have no
: pr
: But then when you retrieve your metadata it's converted to String -> String.
Correct ... the documentation should make it clear that what gets
persisted is a String, but the method of giving the String to the API is
by passing an Object that will be toString()ed.
(Aside: it would be really
: We have a number of sources that don't have eol-style set to "native"...
This should also serve as a reminder for all committers to make sure they
have sane auto-prop configs for their svn client when "svn add"ing files
-- SVN doesn't have any way to configure these on the server side, so
yo
: The javadocs state clearly it must be Map. Plus, the
: type checking is in fact enforced (you hit an exception if you violate
: it), dynamically (like Python).
:
: And then I was thinking with 1.5 (3.0 -- huh, neat how it's exactly
: 2X) we'd statically type it (change Map to Map).
the other o
: We had some discussions about it; the easiest is to set the bootclasspath
: in the task to an older rt.jar during compilation. Because this
: needs updates for e.g. Hudson (rt.jar missing), we said that the one who
: releases the final version should simply check this before on the
: compilat
: If there are any serious moves to reorganize things, we should at least
: consider the benefits of maven.
+1
we can certainly do a lot to improve things just by refactoring stuff from
core into contrib, and improving the visibility of contribs and
documentation about contribs -- but if we're
: We've been doing this using just one source tree (like in Lucene), and
: instead ensuring the separation using the build system. We did not, like you
I think you are misunderstanding my previous comment ... Lucene-Java does
not currently have one source tree in the sense that someone else
su
: Then during build we can package up certain combinations. I think
: there should be sub-kitchen-sink jars by area, eg a jar that contains
: all analyzers/tokenstreams/filters, all queries/filters, etc.
Or just make it trivial to get all jars that fit a given profile w/o
actually merging those
After stirring things up, and then being off-list for ~10 days, I'm in an
interesting position coming back to this thread and seeing the discussion
*after* it essentially ended, with a lot of semi-consensus but no clear
sense of hard and fast resolution or plan of action.
FWIW, here are the not
: Every now and again, someone emails me off list asking to be removed from the
: list and I always forward them to Erik, b/c I know he is a moderator.
: However, I was wondering who else is besides Erik, since, AIUI, there needs to
: be at least 3 in ASF-land, right?
:
: So, if you're a list mod
(resending msg from earlier today during @apache mail outage -- i didn't
get a copy from the list, so i'm assuming no one did)
: Date: Fri, 20 Mar 2009 16:51:05 -0700 (PDT)
:
: : I think we should move TrieRange* into core before 2.9?
:
: -0
:
: I think we should try to move more things *out*
(resending msg from earlier today during @apache mail outage -- i didn't
get a copy from the list, so i'm assuming no one did)
: Date: Fri, 20 Mar 2009 15:30:59 -0700 (PDT)
:
: http://people.apache.org/~hossman/#java-dev
: Please Use "java-u...@lucene" Not "java-...@lucene"
:
: Your question i
(resending msg from earlier today during @apache mail outage -- i didn't
get a copy from the list, so i'm assuming no one did)
: Date: Fri, 20 Mar 2009 15:30:27 -0700 (PDT)
:
: http://people.apache.org/~hossman/#java-dev
: Please Use "java-u...@lucene" Not "java-...@lucene"
:
: Your question i
(resending msg from earlier today during @apache mail outage -- i didn't
get a copy from the list, so i'm assuming no one did)
-- Forwarded message --
Date: Fri, 20 Mar 2009 15:29:13 -0700 (PDT)
: TopDocCollector's (TDC) implementation of collect() seems a bit problematic
: to
: My vote for contrib would depend on the state of the code - if it passes all
: the tests and is truly back compat, and is not crazy slower, I don't see why
: we don't move it in right away depending on confidence levels. That would
: ensure use and attention that contrib often misses. The old pa
: TrieRange fields is needed), I again thought about the issue. Maybe we could
: change FieldCache to only put the very first term from a field of the
: document into the cache, enabling sorting against this field. If possible,
: this would be very nice and in my opinion better than the idea propos
: I can implement the functionality just using the data tables from the Unicode
: Consortium, including http://www.unicode.org/reports/tr39, but there's still
: the issue of the Unicode data license and its compatibility with Apache 2.0.
:
: Does anybody know whether http://www.unicode.org/copyri
: What I would LOVE is if I could do it in a standard Lucene search like I
: mentioned earlier.
: Hit.doc[0].getHitTokenList() :confused:
: Something like this...
The Query/Scorer APIs don't provide any mechanism for information like
that to be conveyed back up the call chain -- mainly because
: but i need the result by the word place in the sentence like this:
:
: "bbb text 4...". , "text 2 bbb text " , "text 1 ok ok ok bbb" ..
1) SpanFirstQuery should work; it scores higher the closer the nested
query is to the start -- just use a really high limit. if you are only
dealing with
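A minimal sketch of suggestion (1), with the field name, term, and the 1000-position limit all invented for illustration:

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.spans.SpanFirstQuery;
    import org.apache.lucene.search.spans.SpanTermQuery;

    public class EarlyTermQuery {
      // matches "bbb" only within the first 1000 positions of the field;
      // per the note above, matches nearer the start tend to score higher
      public static SpanFirstQuery build() {
        return new SpanFirstQuery(new SpanTermQuery(new Term("body", "bbb")), 1000);
      }
    }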
: Subject: Jukka's not on "Who We Are" yet
:
: Jukka's not on http://lucene.apache.org/java/docs/whoweare.html
That list is specifically the Lucene-Java committers. Jukka is listed on
the PMC list...
http://lucene.apache.org/who.html
-Hoss
: Also in the future please post your questions to java-dev@lucene.apache.org
I believe jason meant to type java-u...@lucene...
http://people.apache.org/~hossman/#java-dev
Please Use "java-u...@lucene" Not "java-...@lucene"
Your question is better suited for the java-u...@lucene mailing list ...
: I'm OK with LIA2 on the front page - as Erik suggests it does help lend
: credibility to a project.
+1 to more visibility for books focused on lucene on "official" www site
pages (not just the wiki)
+1 to prominent display via a section on the main page like wicket
currently has, with lin
: I don't know how others feel, but I'd personally like to stop the
: practice of making more Analyzer classes whenever a new TokenFilter is
: added.
+1
-Hoss
: Make tests using java.util.Random reproducible on failure
Whoa ... i make an off-the-cuff comment and forget about it, and 10 hours
later Uwe and Michael have made it a reality.
+1.
PS: "It would be really nice if i had several million dollars, because
that way i could
: By allowing Random to randomly seed itself, we effectively test a much
: much larger space, ie every time we all run the test, it's different. We can
: potentially cast a much larger net than a fixed seed.
i guess i'm just in favor of less randomness and more iterations.
: Fixing the bug is t
: It's not repeatable, which is fine (because the test has randomness, which we
: should leave in there).
Side note: while i agree that tests with randomness (ie: do lots of
iterations over randomly selected data) are good to help find weird edge
cases you might not otherwise think to explicitl
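One way to get both randomness and repeatability is the usual log-the-seed trick; this is only a sketch of the idea, not the change Uwe and Michael committed, and the "tests.seed" property name is made up:

    import java.util.Random;

    public class SeededRandomExample {
      public static void main(String[] args) {
        // pick a fresh seed unless one was supplied to reproduce a failure,
        // e.g. -Dtests.seed=12345
        long seed = Long.getLong("tests.seed", new Random().nextLong());
        System.out.println("random seed: " + seed);
        Random random = new Random(seed);

        // drive all randomized choices from this one Random instance
        int numDocs = 1 + random.nextInt(10000);
        System.out.println("indexing " + numDocs + " random docs ...");
      }
    }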