Re: [scala-functional] Need help with calling and returning value from anon function in scala

2019-05-16 Thread Jed Wesley-Smith
as far as I know, only Perl can do that:

https://www.eventbrite.com/e/temporally-quaquaversal-virtual-nanomachine-programming-in-multiple-topologically-connected-quantum-tickets-3425548909?aff=estw#

On Tue, 14 May 2019 at 11:39, Vlad Patryshev  wrote:

> I'm just curious if we could apply this approach to other problems. Like
> "I want this code decrypted right away, not 1000 years from now. Can Scala
> do that, instead of making me wait?"
>
> Thanks,
> -Vlad
>
>
> On Sat, May 11, 2019 at 4:49 PM tushar pandit 
> wrote:
>
>> Thanks Steven. Can I, in any way, modify the code to not have a Future? I
>> mean I need the response immediately and not in the future, and using await
>> is not an option here as I don't know the actual duration each request will
>> take.
>>
>> On Tuesday, May 7, 2019 at 10:00:41 PM UTC-5, Steven Parkes wrote:
>>>
>>> The issue here is that the `onComplete` call is async (assuming
>>> https://github.com/daggerrz/druid-scala-client is the library you're
>>> using)
>>>
>>> It looks like `DruidClient#apply` returns a `Future`:
>>> https://github.com/daggerrz/druid-scala-client/blob/master/src/main/scala/com/tapad/druid/client/DruidClient.scala
>>>
>>> In this case, `onComplete` gets called at some point in the future
>>> (possibly/probably after `fetchFromDruid` has returned.) The result of
>>> `onComplete` on a `Future` is `Unit`: the `onComplete` is synchronous but
>>> the callback is async.
>>>
>>> If you want your method to wait until the Druid call returns, you want
>>> to use `Future#transform` (
>>> https://www.scala-lang.org/api/current/scala/concurrent/Future.html)
>>> and `Await.result` (
>>> https://www.scala-lang.org/api/current/scala/concurrent/Await$.html)
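For instance, a minimal sketch of the blocking approach described above (the
queryDruid stand-in and the 30-second timeout are illustrative, not from the
thread):

  import scala.concurrent.{Await, Future}
  import scala.concurrent.ExecutionContext.Implicits.global
  import scala.concurrent.duration._

  // Stand-in for the real Druid call: anything that yields a Future.
  def queryDruid(): Future[String] = Future { "druid response" }

  // Blocks the calling thread until the Future completes, or throws a
  // TimeoutException if the timeout elapses first.
  def fetchFromDruidBlocking(): String =
    Await.result(queryDruid(), 30.seconds)

  // Usually preferable: return the Future itself and let the caller
  // map/flatMap over it instead of blocking.
  def fetchFromDruidAsync(): Future[String] = queryDruid()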
>>>
>>> On Tue, May 7, 2019 at 7:27 PM Jed Wesley-Smith 
>>> wrote:
>>>
>>>> Scala is an expression oriented language, so the last thing you refer
>>>> to is returned.
>>>>
>>>> def foo: Int = 3
>>>>
>>>> returns 3. There is no need to use the return keyword, indeed its use
>>>> can complicate matters as you can create non-local returns, which I suspect
>>>> is what is confusing you in your example.
>>>>
>>>> There are some very good books that explain how Scala works, as well as
>>>> some good online tutorials. The book "Functional Programming in Scala" is
>>>> an excellent book introducing the various concepts of FP and how to use
>>>> them effectively. It does help to have a basic general knowledge of the
>>>> language though, so may not be the ideal introduction.
>>>>
>>>> For further questions about how the basics of the language work, there
>>>> are many links on the https://www.scala-lang.org/community/ page.
>>>>
>>>> On Wed, 8 May 2019 at 12:19, tushar pandit 
>>>> wrote:
>>>>
>>>>> now I need to return* this object
>>>>>
>>>>> On Tuesday, May 7, 2019 at 9:18:26 PM UTC-5, tushar pandit wrote:
>>>>>>
>>>>>> Yeah. I am sorry to put it in a confusing manner. Yeah I am working
>>>>>> with Druid in Scala, but my question is about Scala code itself. So
>>>>>> consider I get some data from some data store (in my case it is Druid), I
>>>>>> get the dataset in an object and now I need to pass this object to the
>>>>>> calling function. I am unable to do this.
>>>>>>
>>>>>> To simplify the code:
>>>>>>
>>>>>> def main(args: Array[String]): Unit = {
>>>>>>   val res = fetchData()
>>>>>>
>>>>>>   // res comes as null here
>>>>>> }
>>>>>>
>>>>>> def fetchData():Any {
>>>>>>
>>>>>>   client(query).onComplete {
>>>>>> case Success(resp) =>
>>>>>>
>>>>>>   *return resp*
>>>>>> case Failure(ex) =>
>>>>>>   ex.printStackTrace()
>>>>>>
>>>>>>   return null
>>>>>>   }
>>>>>> }
>>>>>>
>>>>>>
>>>>>> So my "res" in "main" method does not hold any results after the call
>>>>>> is done. But when I try to print the "resp" in "success"

Re: [scala-functional] Need help with calling and returning value from anon function in scala

2019-05-07 Thread Jed Wesley-Smith
Scala is an expression oriented language, so the last thing you refer to is
returned.

def foo: Int = 3

returns 3. There is no need to use the return keyword, indeed its use can
complicate matters as you can create non-local returns, which I suspect is
what is confusing you in your example.
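A small sketch of both points (illustrative only, not from the original mail):

  def foo: Int = {
    val x = 1
    x + 2            // the last expression, 3, is the result; no `return` needed
  }

  // A `return` inside an anonymous function is a non-local return: it is
  // implemented as a control-flow exception that exits the enclosing method,
  // so in a callback that runs after the method has already returned it
  // cannot hand a value back to the caller.
  def firstPositive(xs: List[Int]): Option[Int] =
    xs.find(_ > 0)   // prefer expression style over `return` inside lambdas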

There are some very good books that explain how Scala works, as well as
some good online tutorials. The book "Functional Programming in Scala" is
an excellent book introducing the various concepts of FP and how to use
them effectively. It does help to have a basic general knowledge of the
language though, so may not be the ideal introduction.

For further questions about how the basics of the language work, there are
many links on the https://www.scala-lang.org/community/ page.

On Wed, 8 May 2019 at 12:19, tushar pandit 
wrote:

> now I need to return* this object
>
> On Tuesday, May 7, 2019 at 9:18:26 PM UTC-5, tushar pandit wrote:
>>
>> Yeah. I am sorry to put it in a confusing manner. Yeah I am working with
>> Druid in Scala, but my question is about Scala code itself. So consider I
>> get some data from some data store (in my case it is Druid), I get the
>> dataset in an object and now I need to pass this object to the calling
>> function. I am unable to do this.
>>
>> To simplify the code:
>>
>> def main(args: Array[String]): Unit = {
>>   val res = fetchData()
>>
>>   // res comes as null here
>> }
>>
>> def fetchData():Any {
>>
>>   client(query).onComplete {
>> case Success(resp) =>
>>
>>   *return resp*
>> case Failure(ex) =>
>>   ex.printStackTrace()
>>
>>   return null
>>   }
>> }
>>
>>
>> So my "res" in "main" method does not hold any results after the call is
>> done. But when I try to print the "resp" in "success" it prints all the
>> fetched data.
>>
>> On Tuesday, May 7, 2019 at 9:13:41 PM UTC-5, jed.wesleysmith wrote:
>>>
>>> Tushar, this seems to be an issue with whatever Druid is. This mailing
>>> list is for questions about Functional Programming in Scala, both in
>>> general and more specifically the book with that title. If you ask your
>>> question in a more relevant forum somebody may be able to provide you with
>>> an answer.
>>>
>>> On Wed, 8 May 2019 at 12:06, tushar pandit  wrote:
>>>
 Hi,

 I am trying to call function fetchFromDruid

 def main(args: Array[String]): Unit = {
   val res = fetchFromDruid()

   // res comes as null here
 }

 def fetchFromDruid(): GroupByResponse {
   implicit val executionContext = ExecutionContext.Implicits.global
   val client = DruidClient("http://localhost:8082")

   val query = GroupByQuery(
 source = "wikipedia",
 interval = new Interval(new DateTime().minusMonths(60), new 
 DateTime()),
 dimensions = Seq("countryName"),
 descending = true,
 granularity = Granularity.All,
 aggregate = Seq(
   DSL.count("row_count")
 ),
 postAggregate = Seq(
 ),
 limit = Some(100)
   )

   client(query).onComplete {
 case Success(resp) =>
   resp.data.foreach { row =>
 println(row)
   }

   *return resp*
   println("none")
 //System.exit(0)
 case Failure(ex) =>
   ex.printStackTrace()

   return null
 //System.exit(0)
   }
 }


 But somehow I am not able to return the response to the caller, i.e. the
 main function. What could be the issue?


 Thanks,

 Tushar

 --
 You received this message because you are subscribed to the Google
 Groups "scala-functional" group.
 To unsubscribe from this group and stop receiving emails from it, send
 an email to scala-fu...@googlegroups.com.
 To view this discussion on the web, visit
 https://groups.google.com/d/msgid/scala-functional/8bc53b5e-729d-497d-89bd-ab2568674cc7%40googlegroups.com
 
 .
 For more options, visit https://groups.google.com/d/optout.

>>> --
> You received this message because you are subscribed to the Google Groups
> "scala-functional" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to scala-functional+unsubscr...@googlegroups.com.
> To view this discussion on the web, visit
> https://groups.google.com/d/msgid/scala-functional/c8a0dac0-5163-4de9-a389-e63767b4b4f6%40googlegroups.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"scala-functional" group.
To unsubscribe from this group and stop receiving emails from it, send an 

Re: [scala-functional] Need help with calling and returning value from a function in scala

2019-05-07 Thread Jed Wesley-Smith
Tushar, this is a duplicate question. Please see the other one for a
response, and please try not to double post if possible, thanks!

On Wed, 8 May 2019 at 12:06, tushar pandit 
wrote:

> Hi,
>
> I am trying to call function fetchFromDruid
>
> def main(args: Array[String]): Unit = {
>   val res = fetchFromDruid()
>
>   // res comes as null here
> }
>
> def fetchFromDruid(): GroupByResponse {
>   implicit val executionContext = ExecutionContext.Implicits.global
>   val client = DruidClient("http://localhost:8082")
>
>   val query = GroupByQuery(
> source = "wikipedia",
> interval = new Interval(new DateTime().minusMonths(60), new DateTime()),
> dimensions = Seq("countryName"),
> descending = true,
> granularity = Granularity.All,
> aggregate = Seq(
>   DSL.count("row_count")
> ),
> postAggregate = Seq(
> ),
> limit = Some(100)
>   )
>
>   client(query).onComplete {
> case Success(resp) =>
>   resp.data.foreach { row =>
> println(row)
>   }
>
>   *return resp*
>   println("none")
> //System.exit(0)
> case Failure(ex) =>
>   ex.printStackTrace()
>
>   return null
> //System.exit(0)
>   }
> }
>
>
> But somehow I am not able to return the response to the caller, i.e. the main
> function. What could be the issue?
>
>
> Thanks,
>
> Tushar
>
> --
> You received this message because you are subscribed to the Google Groups
> "scala-functional" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to scala-functional+unsubscr...@googlegroups.com.
> To view this discussion on the web, visit
> https://groups.google.com/d/msgid/scala-functional/e420e90d-a7f4-45d9-b87e-cbaf2e31da58%40googlegroups.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"scala-functional" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to scala-functional+unsubscr...@googlegroups.com.
To view this discussion on the web, visit 
https://groups.google.com/d/msgid/scala-functional/CAN3nywAL8bGPAMxaPpdToJX9TLhCEFQpspJPHF6MzbUDr8dBOg%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: [scala-functional] Need help with calling and returning value from anon function in scala

2019-05-07 Thread Jed Wesley-Smith
Tushar, this seems to be an issue with whatever Druid is. This mailing list
is for questions about Functional Programming in Scala, both in general and
more specifically the book with that title. If you ask your question in a
more relevant forum somebody may be able to provide you with an answer.

On Wed, 8 May 2019 at 12:06, tushar pandit 
wrote:

> Hi,
>
> I am trying to call function fetchFromDruid
>
> def main(args: Array[String]): Unit = {
>   val res = fetchFromDruid()
>
>   // res comes as null here
> }
>
> def fetchFromDruid(): GroupByResponse {
>   implicit val executionContext = ExecutionContext.Implicits.global
>   val client = DruidClient("http://localhost:8082")
>
>   val query = GroupByQuery(
> source = "wikipedia",
> interval = new Interval(new DateTime().minusMonths(60), new DateTime()),
> dimensions = Seq("countryName"),
> descending = true,
> granularity = Granularity.All,
> aggregate = Seq(
>   DSL.count("row_count")
> ),
> postAggregate = Seq(
> ),
> limit = Some(100)
>   )
>
>   client(query).onComplete {
> case Success(resp) =>
>   resp.data.foreach { row =>
> println(row)
>   }
>
>   *return resp*
>   println("none")
> //System.exit(0)
> case Failure(ex) =>
>   ex.printStackTrace()
>
>   return null
> //System.exit(0)
>   }
> }
>
>
> But somehow I am not able to return the response to the caller, i.e. the main
> function. What could be the issue?
>
>
> Thanks,
>
> Tushar
>
> --
> You received this message because you are subscribed to the Google Groups
> "scala-functional" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to scala-functional+unsubscr...@googlegroups.com.
> To view this discussion on the web, visit
> https://groups.google.com/d/msgid/scala-functional/8bc53b5e-729d-497d-89bd-ab2568674cc7%40googlegroups.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"scala-functional" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to scala-functional+unsubscr...@googlegroups.com.
To view this discussion on the web, visit 
https://groups.google.com/d/msgid/scala-functional/CAN3nywAbk_%3DMsmsY0gOQc80WQFL5keW--hengy7rB5y9O8SKoQ%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


[jira] [Commented] (KAFKA-2260) Allow specifying expected offset on produce

2017-12-13 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290245#comment-16290245
 ] 

Jed Wesley-Smith commented on KAFKA-2260:
-

This would be a valuable feature, currently we need something else to 
coordinate monotonic writes (for an event source), and Kafka cannot support 
this directly.

> Allow specifying expected offset on produce
> ---
>
> Key: KAFKA-2260
> URL: https://issues.apache.org/jira/browse/KAFKA-2260
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ben Kirwin
>Priority: Minor
> Attachments: KAFKA-2260.patch, expected-offsets.patch
>
>
> I'd like to propose a change that adds a simple CAS-like mechanism to the 
> Kafka producer. This update has a small footprint, but enables a bunch of 
> interesting uses in stream processing or as a commit log for process state.
> h4. Proposed Change
> In short:
> - Allow the user to attach a specific offset to each message produced.
> - The server assigns offsets to messages in the usual way. However, if the 
> expected offset doesn't match the actual offset, the server should fail the 
> produce request instead of completing the write.
> This is a form of optimistic concurrency control, like the ubiquitous 
> check-and-set -- but instead of checking the current value of some state, it 
> checks the current offset of the log.
> h4. Motivation
> Much like check-and-set, this feature is only useful when there's very low 
> contention. Happily, when Kafka is used as a commit log or as a 
> stream-processing transport, it's common to have just one producer (or a 
> small number) for a given partition -- and in many of these cases, predicting 
> offsets turns out to be quite useful.
> - We get the same benefits as the 'idempotent producer' proposal: a producer 
> can retry a write indefinitely and be sure that at most one of those attempts 
> will succeed; and if two producers accidentally write to the end of the 
> partition at once, we can be certain that at least one of them will fail.
> - It's possible to 'bulk load' Kafka this way -- you can write a list of n 
> messages consecutively to a partition, even if the list is much larger than 
> the buffer size or the producer has to be restarted.
> - If a process is using Kafka as a commit log -- reading from a partition to 
> bootstrap, then writing any updates to that same partition -- it can be sure 
> that it's seen all of the messages in that partition at the moment it does 
> its first (successful) write.
> There's a bunch of other similar use-cases here, but they all have roughly 
> the same flavour.
> h4. Implementation
> The major advantage of this proposal over other suggested transaction / 
> idempotency mechanisms is its minimality: it gives the 'obvious' meaning to a 
> currently-unused field, adds no new APIs, and requires very little new code 
> or additional work from the server.
> - Produced messages already carry an offset field, which is currently ignored 
> by the server. This field could be used for the 'expected offset', with a 
> sigil value for the current behaviour. (-1 is a natural choice, since it's 
> already used to mean 'next available offset'.)
> - We'd need a new error and error code for a 'CAS failure'.
> - The server assigns offsets to produced messages in 
> {{ByteBufferMessageSet.validateMessagesAndAssignOffsets}}. After this 
> change, this method would assign offsets in the same way -- but if they
> don't match the offset in the message, we'd return an error instead of 
> completing the write.
> - To avoid breaking existing clients, this behaviour would need to live 
> behind some config flag. (Possibly global, but probably more useful 
> per-topic?)
> I understand all this is unsolicited and possibly strange: happy to answer 
> questions, and if this seems interesting, I'd be glad to flesh this out into 
> a full KIP or patch. (And apologies if this is the wrong venue for this sort 
> of thing!)
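A rough sketch of the semantics being proposed, using an in-memory stand-in
rather than Kafka itself (the CasLog class and its produce signature are made
up for illustration; nothing here is Kafka API):

  // Hypothetical in-memory "partition" enforcing the proposed rule: a produce
  // succeeds only when the expected offset equals the next offset to be assigned.
  final class CasLog {
    private var entries = Vector.empty[String]

    // Right(assignedOffset) on success, Left(actualNextOffset) on a CAS failure.
    def produce(value: String, expectedOffset: Long): Either[Long, Long] = synchronized {
      val next = entries.size.toLong
      if (expectedOffset != next) Left(next)
      else { entries :+= value; Right(next) }
    }
  }

  val log = new CasLog
  println(log.produce("event-0", expectedOffset = 0L)) // Right(0)
  println(log.produce("event-1", expectedOffset = 0L)) // Left(1): offset 0 was already taken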



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [scalaz] scalaz 7.2.8 released

2016-12-04 Thread Jed Wesley-Smith
\o/

thanks Kenji.

On 4 December 2016 at 00:20, Kenji Yoshida <6b656e6...@gmail.com> wrote:

> Hi everyone.
>
> scalaz 7.2.8 released!
>
> "org.scalaz" %% "scalaz-core" % "7.2.8"
>
> for Scala binary versions 2.10, 2.11 and 2.12.
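For reference, the usual way to pull this into an sbt build (standard sbt
syntax, not part of the announcement):

  libraryDependencies += "org.scalaz" %% "scalaz-core" % "7.2.8"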
>
> Thanks to all contributors.
> This is the second maintenance release for the 7.2.x series.
> It's a drop-in replacement for 7.2.0 to 7.2.7 which is fully binary backwards 
> compatible (tested with MiMa).
>
> There are some new features and changes. see 
> https://github.com/scalaz/scalaz/wiki/7.2.8
>
> Cheers.
> Kenji (@xuwei-k )
>
> --
> You received this message because you are subscribed to the Google Groups
> "scalaz" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to scalaz+unsubscr...@googlegroups.com.
> To post to this group, send email to scalaz@googlegroups.com.
> Visit this group at https://groups.google.com/group/scalaz.
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"scalaz" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to scalaz+unsubscr...@googlegroups.com.
To post to this group, send email to scalaz@googlegroups.com.
Visit this group at https://groups.google.com/group/scalaz.
For more options, visit https://groups.google.com/d/optout.


[jira] [Commented] (LOG4J2-741) Reinstate the package attribute for discovering custom plugins

2014-08-22 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LOG4J2-741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14107836#comment-14107836
 ] 

Jed Wesley-Smith commented on LOG4J2-741:
-

@ianbarfield OOI, do you specify a packages attribute in the log4j config? I 
am not a contributor, nor an expert on this, but I would think that if it is 
set to something small (or not set at all) it shouldn't do much scanning at 
all.

I also wonder if it should scan at all if the attribute is not present, but 
that's just thinking aloud, I don't actually know how it works atm.

 Reinstate the package attribute for discovering custom plugins
 --

 Key: LOG4J2-741
 URL: https://issues.apache.org/jira/browse/LOG4J2-741
 Project: Log4j 2
  Issue Type: Improvement
  Components: Core
Affects Versions: 2.0-rc2, 2.0
Reporter: Remko Popma
Assignee: Remko Popma
Priority: Blocker
 Fix For: 2.0.1

 Attachments: LOG4J2-741-patch.txt


 Several people reported problems with their custom plugins no longer being 
 recognized by log4j2. See LOG4J2-673 and [this StackOverflow 
 question|http://stackoverflow.com/questions/24918810/log4j2-configuration-will-not-load-custom-pattern-converter].
 Plugins created before the annotation processor was added to log4j2 (all 
 plugins created with 2.0-rc1 and earlier) may not have a 
 {{META-INF/org/apache/logging/log4j/core/config/plugins/Log4j2Plugins.dat}} 
 file.
 Previously plugins without this metadata file could still be found if the 
 user specified their custom plugin package(s) in the {{packages}} attribute 
 of the {{Configuration}} element in their log4j2.xml configuration file.
 However, since 2.0-rc2, the {{packages}} configuration attribute was 
 disabled; users may still specify a value, but log4j2 will no longer use this 
 value to try to load custom plugins. This causes problems for custom plugins 
 built before the annotation processor was added to log4j2, as well as custom 
 plugins that are built in an environment where the annotation processor does 
 not work (for example, most IDEs require some setting changes to enable 
 annotation processing).
 This Jira ticket is to reactivate the packages configuration attribute. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: log4j-dev-unsubscr...@logging.apache.org
For additional commands, e-mail: log4j-dev-h...@logging.apache.org



[jira] [Commented] (LOG4J2-741) Reinstate the package attribute for discovering custom plugins

2014-07-24 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LOG4J2-741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074016#comment-14074016
 ] 

Jed Wesley-Smith commented on LOG4J2-741:
-

I strongly consider the removal of this feature in a late RC a serious bug, and 
a violation of the RC process. I wasted a fair amount of time trying to 
diagnose why things that worked previously were no longer working, with no 
information anywhere that the {documented configuration 
element|http://logging.apache.org/log4j/2.x/manual/configuration.html#ConfigurationSyntax}
 packages actually doesn't do anything.

Eventually I discovered this had been replaced as the javac annotation 
processor can be used to create an obscure file in the jar. In our case we do 
not use javac, we are writing our plugins in Scala. We (and others in other JVM 
languages) need a way to load our plugins that isn't tied to a specific 
compiler.

By the way {the 
documentation|http://logging.apache.org/log4j/2.x/manual/plugins.html} still 
refers to the {{PluginManager}} that no longer works.

 Reinstate the package attribute for discovering custom plugins
 --

 Key: LOG4J2-741
 URL: https://issues.apache.org/jira/browse/LOG4J2-741
 Project: Log4j 2
  Issue Type: Improvement
  Components: Core
Affects Versions: 2.0, 2.0-rc2
Reporter: Remko Popma
Assignee: Matt Sicker
 Fix For: 2.0.1

 Attachments: LOG4J2-741-patch.txt


 Several people reported problems with their custom plugins no longer being 
 recognized by log4j2. See LOG4J2-673 and [this StackOverflow 
 question|http://stackoverflow.com/questions/24918810/log4j2-configuration-will-not-load-custom-pattern-converter].
 Plugins created before the annotation processor was added to log4j2 (all 
 plugins created with 2.0-rc1 and earlier) may not have a 
 {{META-INF/org/apache/logging/log4j/core/config/plugins/Log4j2Plugins.dat}} 
 file.
 Previously plugins without this metadata file could still be found if the 
 user specified their custom plugin package(s) in the {{packages}} attribute 
 of the {{Configuration}} element in their log4j2.xml configuration file.
 However, since 2.0-rc2, the {{packages}} configuration attribute was 
 disabled; users may still specify a value, but log4j2 will no longer use this 
 value to try to load custom plugins. This causes problems for custom plugins 
 built before the annotation processor was added to log4j2, as well as custom 
 plugins that are built in an environment where the annotation processor does 
 not work (for example, most IDEs require some setting changes to enable 
 annotation processing).
 This Jira ticket is to reactivate the packages configuration attribute. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: log4j-dev-unsubscr...@logging.apache.org
For additional commands, e-mail: log4j-dev-h...@logging.apache.org



[jira] [Commented] (LOG4J2-673) plugin preloading fails in shaded jar files

2014-07-24 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LOG4J2-673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074022#comment-14074022
 ] 

Jed Wesley-Smith commented on LOG4J2-673:
-

Note that alternate language (like Scala) plugins have a similar problem due to 
the javac annotation processor not being available.

 plugin preloading fails in shaded jar files
 ---

 Key: LOG4J2-673
 URL: https://issues.apache.org/jira/browse/LOG4J2-673
 Project: Log4j 2
  Issue Type: Bug
  Components: Core
Affects Versions: 2.0-rc2
Reporter: Mck SembWever
Priority: Critical
  Labels: annotations, compiler, plugins
 Fix For: 2.0.1

 Attachments: 
 0002-LOG4J2-673-plugin-preloading-fails-in-shaded-jar-fil.patch, 
 0005-LOG4J2-673-plugin-preloading-fails-in-shaded-jar-fil.patch


 Support for plugin preloading through the standard 
 javax.annotation.processing tool was adding in LOG4J2-595
 But the plugin processor always creates and stores the processed Plugin 
 annotated classes into the same file. This works fine when the classpath 
 consists of individual jar files, but fails when shaded jar files are used.
 A tested fix exists at 
 https://github.com/finn-no/logging-log4j2/tree/bugfix/LOG4J2-673
 There's also a github pull request and a manual diff attached. (I can clean 
 up anything not used afterwards)
 The fix saves the dat file in a location under META-INF that matches the 
 shared package all the processed plugins are found under.
 The package attribute in the config file is then used so that multiple dat 
 files can be loaded at runtime.
 This means that the package attribute is no longer deprecated.
 This has been tested against 
 https://github.com/finn-no/log4j2-logstash-jsonevent-layout



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: log4j-dev-unsubscr...@logging.apache.org
For additional commands, e-mail: log4j-dev-h...@logging.apache.org



[jira] [Comment Edited] (LOG4J2-741) Reinstate the package attribute for discovering custom plugins

2014-07-24 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LOG4J2-741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074016#comment-14074016
 ] 

Jed Wesley-Smith edited comment on LOG4J2-741 at 7/25/14 4:15 AM:
--

I strongly consider the removal of this feature in a late RC a serious bug, and 
a violation of the RC process. I wasted a fair amount of time trying to 
diagnose why things that worked previously were no longer working, with no 
information anywhere that the [documented configuration 
element|http://logging.apache.org/log4j/2.x/manual/configuration.html#ConfigurationSyntax]
 packages actually doesn't do anything.

Eventually I discovered this had been replaced as the javac annotation 
processor can be used to create an obscure file in the jar. In our case we do 
not use javac, we are writing our plugins in Scala. We (and others in other JVM 
languages) need a way to load our plugins that isn't tied to a specific 
compiler.

By the way [the 
documentation|http://logging.apache.org/log4j/2.x/manual/plugins.html] still 
refers to the {{PluginManager}} that no longer works.


was (Author: jedws):
I strongly consider the removal of this feature in a late RC a serious bug, and 
a violation of the RC process. I wasted a fair amount of time trying to 
diagnose why things that worked previously were no longer working, with no 
information anywhere that the {documented configuration 
element|http://logging.apache.org/log4j/2.x/manual/configuration.html#ConfigurationSyntax}
 packages actually doesn't do anything.

Eventually I discovered this had been replaced as the javac annotation 
processor can be used to create an obscure file in the jar. In our case we do 
not use javac, we are writing our plugins in Scala. We (and others in other JVM 
languages) need a way to load our plugins that isn't tied to a specific 
compiler.

By the way {the 
documentation|http://logging.apache.org/log4j/2.x/manual/plugins.html} still 
refers to the {{PluginManager}} that no longer works.

 Reinstate the package attribute for discovering custom plugins
 --

 Key: LOG4J2-741
 URL: https://issues.apache.org/jira/browse/LOG4J2-741
 Project: Log4j 2
  Issue Type: Improvement
  Components: Core
Affects Versions: 2.0, 2.0-rc2
Reporter: Remko Popma
Assignee: Matt Sicker
 Fix For: 2.0.1

 Attachments: LOG4J2-741-patch.txt


 Several people reported problems with their custom plugins no longer being 
 recognized by log4j2. See LOG4J2-673 and [this StackOverflow 
 question|http://stackoverflow.com/questions/24918810/log4j2-configuration-will-not-load-custom-pattern-converter].
 Plugins created before the annotation processor was added to log4j2 (all 
 plugins created with 2.0-rc1 and earlier) may not have a 
 {{META-INF/org/apache/logging/log4j/core/config/plugins/Log4j2Plugins.dat}} 
 file.
 Previously plugins without this metadata file could still be found if the 
 user specified their custom plugin package(s) in the {{packages}} attribute 
 of the {{Configuration}} element in their log4j2.xml configuration file.
 However, since 2.0-rc2, the {{packages}} configuration attribute was 
 disabled; users may still specify a value, but log4j2 will no longer use this 
 value to try to load custom plugins. This causes problems for custom plugins 
 built before the annotation processor was added to log4j2, as well as custom 
 plugins that are built in an environment where the annotation processor does 
 not work (for example, most IDEs require some setting changes to enable 
 annotation processing).
 This Jira ticket is to reactivate the packages configuration attribute. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: log4j-dev-unsubscr...@logging.apache.org
For additional commands, e-mail: log4j-dev-h...@logging.apache.org



[jira] [Commented] (LOG4J2-741) Reinstate the package attribute for discovering custom plugins

2014-07-24 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LOG4J2-741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074072#comment-14074072
 ] 

Jed Wesley-Smith commented on LOG4J2-741:
-

I don't know how you'd ask javac to process .class files

You're surprised that javac plugins don't work outside of javac?

 Reinstate the package attribute for discovering custom plugins
 --

 Key: LOG4J2-741
 URL: https://issues.apache.org/jira/browse/LOG4J2-741
 Project: Log4j 2
  Issue Type: Improvement
  Components: Core
Affects Versions: 2.0, 2.0-rc2
Reporter: Remko Popma
Assignee: Matt Sicker
 Fix For: 2.0.1

 Attachments: LOG4J2-741-patch.txt


 Several people reported problems with their custom plugins no longer being 
 recognized by log4j2. See LOG4J2-673 and [this StackOverflow 
 question|http://stackoverflow.com/questions/24918810/log4j2-configuration-will-not-load-custom-pattern-converter].
 Plugins created before the annotation processor was added to log4j2 (all 
 plugins created with 2.0-rc1 and earlier) may not have a 
 {{META-INF/org/apache/logging/log4j/core/config/plugins/Log4j2Plugins.dat}} 
 file.
 Previously plugins without this metadata file could still be found if the 
 user specified their custom plugin package(s) in the {{packages}} attribute 
 of the {{Configuration}} element in their log4j2.xml configuration file.
 However, since 2.0-rc2, the {{packages}} configuration attribute was 
 disabled; users may still specify a value, but log4j2 will no longer use this 
 value to try to load custom plugins. This causes problems for custom plugins 
 built before the annotation processor was added to log4j2, as well as custom 
 plugins that are built in an environment where the annotation processor does 
 not work (for example, most IDEs require some setting changes to enable 
 annotation processing).
 This Jira ticket is to reactivate the packages configuration attribute. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: log4j-dev-unsubscr...@logging.apache.org
For additional commands, e-mail: log4j-dev-h...@logging.apache.org



Re: RFR: 8015317: Optional.filter, map, and flatMap

2013-07-15 Thread Jed Wesley-Smith
I did supply a test that you can use to try it.

What we are talking about is whether type Box<Parent> is substitutable by
Box<Child> in the contravariant position. Intuitively we think we only need
Box<? extends Parent> because we only care about the type parameter, but
the type – as you point out – is actually different. Box<Parent> is not
inherited by Box<Child>.

Specifically, if we have a consuming Box, and we replace it with a Box of a
more specific type parameter, we could attempt to feed the more general type
into it, i.e. a Box<Child> isn't going to appreciate having a Parent fed to
it. This is why covariance and mutable containers don't mix well, and why
Java's covariant arrays are problematic.

In this situation we have an immutable container, and we can substitute the
type of our container with one of a more specific type, as it will only
ever supply a value – and a value of Child will suffice as a Parent. So,
for this case we need a Box that is substitutable, and therefore we need to
add the covariance to our box.

? extends Box is simply adding covariance to our Box type.

For a much better explanation than I can give about this, see this
excellent post describing generics in Scala, which – apart from having
declaration-site variance and using [A] in place of <A> – generally follows
the same pattern:

http://termsandtruthconditions.herokuapp.com/blog/2012/12/29/covariance-and-contravariance-in-scala/
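A tiny Scala sketch of the declaration-site version of the same idea
(illustrative, not taken from the linked post):

  class Parent
  class Child extends Parent

  // Covariant, immutable box: Box[Child] is a subtype of Box[Parent].
  final case class Box[+A](value: A)

  def readParent(b: Box[Parent]): Parent = b.value

  val boxed: Box[Child] = Box(new Child)
  readParent(boxed)   // compiles only because of the `+A` (covariance) annotation

  // With an invariant `class InvBox[A]`, InvBox[Child] would not be accepted
  // where InvBox[Parent] is expected – the position Java is in without wildcards.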

cheers,
jed.


On 14 July 2013 04:49, Henry Jen henry@oracle.com wrote:

  I think the type you are talking about here is Optional<? extends U> instead
  of ? extends Optional<U>.

  IIRC, Optional<? extends U> is not a subtype of Optional<U>, just like any
  other Collection class. List<Child> is not a List<Parent>.

 Cheers,
 Henry


 On Jul 13, 2013, at 3:15 AM, Jed Wesley-Smith j...@wesleysmith.io wrote:

  The ? extends Optional is unnecessary in flatMap as Optional is final.

 interestingly enough, it actually is.

 try the following test:

  class OptionalTest {
    class Parent {};

    class Child extends Parent {};

    @Test public void covariantReturn() {
      Optional<Parent> some = some(new Parent());
      Function<Parent, Optional<Child>> f = new Function<Parent, Optional<Child>>() {
        @Override public Optional<Child> apply(Parent p) {
          return some(new Child());
        }
      };
      Optional<Parent> mapped = some.<Parent>flatMap(f);
      assertThat(mapped.get(), notNullValue());
    }
  }

 adapted from the fugue test suite:


 https://bitbucket.org/atlassian/fugue/src/96a65067fb7aaf1edae1bffa07167a5865cbebec/src/test/java/com/atlassian/fugue/OptionTest.java#cl-155

  The point to remember is that Optional<Child> is a type and as such is
  actually a subtype of Optional<Parent> – and therefore requires a
  covariant return.

 cheers,
 jed.




 On 13 July 2013 04:15, Mike Duigou mike.dui...@oracle.com wrote:

 The ? extends Optional is unnecessary in flatMap as Optional is final.
 Otherwise this looks good.

 Mike

 On Jul 5 2013, at 14:37 , Henry Jen wrote:

  Hi,
 
  Please review the webrev at
 
  http://cr.openjdk.java.net/~henryjen/ccc/8015317.0/webrev/
 
  Which adds the following methods to Optional,

  public static <T> Optional<T> ofNullable(T value) {}
  public Optional<T> filter(Predicate<? super T> predicate) {}
  public <U> Optional<U> map(Function<? super T, ? extends U> mapper) {}
  public <U> Optional<U> flatMap(Function<? super T, ? extends Optional<U>>
  mapper) {}
 
  Also included is some cleanup on javadoc.
 
  Cheers,
  Henry







Re: RFR: 8015317: Optional.filter, map, and flatMap

2013-07-15 Thread Jed Wesley-Smith
I'm not entirely sure that is a problem, have a look at the following:

https://gist.github.com/jedws/5993596#file-variancetest-java

it is only the one with a covariance annotation on the parameter that fails…


On 15 July 2013 12:52, Zhong Yu zhong.j...@gmail.com wrote:

 Another example of a possibly missing wildcard:


 http://gee.cs.oswego.edu/dl/jsr166/dist/docs/java/util/concurrent/CompletionStage.html#thenCompose%28java.util.function.Function%29

thenCompose(Function<? super T, ? extends CompletionStage<U>> fn)

 should be

thenCompose(Function<? super T, ? extends CompletionStage<? extends U>>
 fn)

 The problem is probably wide spread, and we need a tool to find these
 mistakes.

 Zhong Yu


 On Sun, Jul 14, 2013 at 8:04 AM, Jed Wesley-Smith j...@wesleysmith.io
 wrote:
  (accidentally didn't post to the list)
 
  You probably know that the example provided is not completed ported to
  work with our Optional implementation,
 
  It should be, for the example I wrote an Optional that is final and
 should
  be otherwise identical. It should certainly be fairly easy for any
  committer to try. If you can make it work without the ? extends Optional
  I'd love an explanation of how.
 
  and fugue works around the type system with Option as abstract class.
 
  As I've tried to explain, this isn't about the implementation of the
  container class, but how covariance works with a parameterised class.
 
  We originally had the non-, but in a discussion with Brian he alerted us
 to
  the fact that the signature was wrong. We hastily fixed it:
 
 
 https://bitbucket.org/atlassian/fugue/commits/9eca663326a5baeb8f23974732ec585d5627a05c
 
  To further demonstrate, I give you a minimal example of a final Optional
  implementation that does not compile for this test:
 
  https://gist.github.com/jedws/5993596#file-gistfile1-java-L57
 
  cheers,
  jed.
 
 
 
  On 14 July 2013 15:02, Henry Jen henry@oracle.com wrote:
 
   I think I understand what you are saying. However, unless we make
  Optional not final, the extends part just doesn't matter.
 
  You probably know that the example provided is not completed ported to
  work with our Optional implementation, and fugue works around the type
  system with Option as abstract class.
 
  Cheers,
  Henry
 
   On Jul 13, 2013, at 4:35 PM, Jed Wesley-Smith j...@wesleysmith.io
 j...@wesleysmith.iowrote:
 
   I did supply a test that you can use to try it.
 
   What we are talking about is whether type BoxParent is substitutable
  by BoxChild in the contravariant position. Intuitively we think we
 only
  need Box? extends Parent because we only care about the type
 parameter,
  but the type – as you point out – is actually different. BoxParent is
 not
  inherited by BoxChild.
 
   Specifically, if we have a consuming Box, and we replace it with a Box
  of a more specific type parameter, we could attempt feed the more
 general
  type into it, ie. a BoxChild isn't going to appreciate having Parent
 fed
  to it. This is why covariance and mutable containers don't mix well, and
  why Java's covariant arrays are problematic.
 
   In this situation we have an immutable container, and we can substitute
  the type of our container with one of a more specific type, as it will
 only
  ever supply a value – and a value of Child will suffice as a Parent. So,
  for this case we need a Box that is substitutable, and therefore we
 need to
  add the covariance to our box.
 
   ? extends Box is simply adding covariance to our Box type.
 
   For a much better explanation than I can give about this, see this
  excellent post describing generics in Scala, which – apart from have
  declaration-site variance and using [A] in place of A – generally
 follow
  the same pattern:
 
 
 
 http://termsandtruthconditions.herokuapp.com/blog/2012/12/29/covariance-and-contravariance-in-scala/
 
   cheers,
  jed.
 
 
  On 14 July 2013 04:49, Henry Jen henry@oracle.com wrote:
 
  I think the type you talking about here is Optional? extends U
 instead
  of ? extends OptionalU.
 
   IIRC, Optional? extends U is not a subtype of OptionalU, just like
  any other Collection class. ListChild is not a ListParent.
 
   Cheers,
  Henry
 
 
   On Jul 13, 2013, at 3:15 AM, Jed Wesley-Smith j...@wesleysmith.io
  wrote:
 
The ? extends Optional is unnecessary in flatMap as Optional is
 final.
 
   interestingly enough, it actually is.
 
   try the following test:
 
   class OptionalTest {
class Parent {};
 
 class Child extends Parent {};
 
 @Test public void covariantReturn() {
  OptionalParent some = some(new Parent());
  FunctionParent, OptionalChild f = new FunctionParent,
  OptionalChild() {
@Override public OptionalChild apply(Parent p) {
  return some(new Child());
}
  };
  OptionalParent mapped = some.Parent flatMap(f);
  assertThat(mapped.get(), notNullValue());
}
  }
 
   adapted from the fugue test suite:
 
 
 
 https://bitbucket.org/atlassian

Re: RFR: 8015317: Optional.filter, map, and flatMap

2013-07-15 Thread Jed Wesley-Smith
(accidentally didn't post to the list)

  You probably know that the example provided is not completely ported to
work with our Optional implementation,

It should be, for the example I wrote an Optional that is final and should
be otherwise identical. It should certainly be fairly easy for any
committer to try. If you can make it work without the ? extends Optional
I'd love an explanation of how.

 and fugue works around the type system with Option as abstract class.

As I've tried to explain, this isn't about the implementation of the
container class, but how covariance works with a parameterised class.

We originally had the non-, but in a discussion with Brian he alerted us to
the fact that the signature was wrong. We hastily fixed it:

https://bitbucket.org/atlassian/fugue/commits/9eca663326a5baeb8f23974732ec585d5627a05c

To further demonstrate, I give you a minimal example of a final Optional
implementation that does not compile for this test:

https://gist.github.com/jedws/5993596#file-gistfile1-java-L57

cheers,
jed.



On 14 July 2013 15:02, Henry Jen henry@oracle.com wrote:

  I think I understand what you are saying. However, unless we make
 Optional not final, the extends part just doesn't matter.

  You probably know that the example provided is not completely ported to
 work with our Optional implementation, and fugue works around the type
 system with Option as abstract class.

 Cheers,
 Henry

  On Jul 13, 2013, at 4:35 PM, Jed Wesley-Smith 
 j...@wesleysmith.ioj...@wesleysmith.iowrote:

  I did supply a test that you can use to try it.

  What we are talking about is whether type BoxParent is substitutable
 by BoxChild in the contravariant position. Intuitively we think we only
 need Box? extends Parent because we only care about the type parameter,
 but the type – as you point out – is actually different. BoxParent is not
 inherited by BoxChild.

  Specifically, if we have a consuming Box, and we replace it with a Box
 of a more specific type parameter, we could attempt feed the more general
 type into it, ie. a BoxChild isn't going to appreciate having Parent fed
 to it. This is why covariance and mutable containers don't mix well, and
 why Java's covariant arrays are problematic.

  In this situation we have an immutable container, and we can substitute
 the type of our container with one of a more specific type, as it will only
 ever supply a value – and a value of Child will suffice as a Parent. So,
 for this case we need a Box that is substitutable, and therefore we need to
 add the covariance to our box.

  ? extends Box is simply adding covariance to our Box type.

  For a much better explanation than I can give about this, see this
 excellent post describing generics in Scala, which – apart from have
 declaration-site variance and using [A] in place of A – generally follow
 the same pattern:


 http://termsandtruthconditions.herokuapp.com/blog/2012/12/29/covariance-and-contravariance-in-scala/

  cheers,
 jed.


 On 14 July 2013 04:49, Henry Jen henry@oracle.com wrote:

 I think the type you talking about here is Optional? extends U instead
 of ? extends OptionalU.

  IIRC, Optional? extends U is not a subtype of OptionalU, just like
 any other Collection class. ListChild is not a ListParent.

  Cheers,
 Henry


  On Jul 13, 2013, at 3:15 AM, Jed Wesley-Smith j...@wesleysmith.io
 wrote:

   The ? extends Optional is unnecessary in flatMap as Optional is final.

  interestingly enough, it actually is.

  try the following test:

  class OptionalTest {
   class Parent {};

class Child extends Parent {};

@Test public void covariantReturn() {
 OptionalParent some = some(new Parent());
 FunctionParent, OptionalChild f = new FunctionParent,
 OptionalChild() {
   @Override public OptionalChild apply(Parent p) {
 return some(new Child());
   }
 };
 OptionalParent mapped = some.Parent flatMap(f);
 assertThat(mapped.get(), notNullValue());
   }
 }

  adapted from the fugue test suite:


 https://bitbucket.org/atlassian/fugue/src/96a65067fb7aaf1edae1bffa07167a5865cbebec/src/test/java/com/atlassian/fugue/OptionTest.java#cl-155

  The point to remember is that OptionalChild is a type and as such is
 actually a subtype of OptionalParent –  and therefore requires a
 covariant return.

  cheers,
 jed.




 On 13 July 2013 04:15, Mike Duigou mike.dui...@oracle.com wrote:

 The ? extends Optional is unnecessary in flatMap as Optional is final.
 Otherwise this looks good.

 Mike

 On Jul 5 2013, at 14:37 , Henry Jen wrote:

  Hi,
 
  Please review the webrev at
 
  http://cr.openjdk.java.net/~henryjen/ccc/8015317.0/webrev/
 
  Which adds following method to Optional,
 
  public static <T> Optional<T> ofNullable(T value) {}
  public Optional<T> filter(Predicate<? super T> predicate) {}
  public <U> Optional<U> map(Function<? super T, ? extends U> mapper) {}
  public <U> Optional<U> flatMap(Function<? super T, ? extends
 Optional<U>>
  mapper) {}
 
  Also included is some

Re: RFR: 8015317: Optional.filter, map, and flatMap

2013-07-15 Thread Jed Wesley-Smith
 The ? extends Optional is unnecessary in flatMap as Optional is final.

interestingly enough, it actually is.

try the following test:

class OptionalTest {
  class Parent {};

  class Child extends Parent {};

  @Test public void covariantReturn() {
    Optional<Parent> some = some(new Parent());
    Function<Parent, Optional<Child>> f = new Function<Parent, Optional<Child>>() {
      @Override public Optional<Child> apply(Parent p) {
        return some(new Child());
      }
    };
    Optional<Parent> mapped = some.<Parent>flatMap(f);
    assertThat(mapped.get(), notNullValue());
  }
}

adapted from the fugue test suite:

https://bitbucket.org/atlassian/fugue/src/96a65067fb7aaf1edae1bffa07167a5865cbebec/src/test/java/com/atlassian/fugue/OptionTest.java#cl-155

The point to remember is that Optional<Child> is a type and as such is
actually a subtype of Optional<Parent> – and therefore requires a
covariant return.

cheers,
jed.




On 13 July 2013 04:15, Mike Duigou mike.dui...@oracle.com wrote:

 The ? extends Optional is unnecessary in flatMap as Optional is final.
 Otherwise this looks good.

 Mike

 On Jul 5 2013, at 14:37 , Henry Jen wrote:

  Hi,
 
  Please review the webrev at
 
  http://cr.openjdk.java.net/~henryjen/ccc/8015317.0/webrev/
 
  Which adds the following methods to Optional,

  public static <T> Optional<T> ofNullable(T value) {}
  public Optional<T> filter(Predicate<? super T> predicate) {}
  public <U> Optional<U> map(Function<? super T, ? extends U> mapper) {}
  public <U> Optional<U> flatMap(Function<? super T, ? extends Optional<U>>
  mapper) {}
 
  Also included is some cleanup on javadoc.
 
  Cheers,
  Henry





Re: RFR: 8015317: Optional.filter, map, and flatMap

2013-07-15 Thread Jed Wesley-Smith
ignore me, you do actually need both ? extends on the type constructor and
the inner type – dunno what I was thinking.


On 15 July 2013 13:02, Jed Wesley-Smith j...@wesleysmith.io wrote:

 I'm not entirely sure that is a problem, have a look at the following:

 https://gist.github.com/jedws/5993596#file-variancetest-java

 it is only the one with a covariance annotation on the parameter that
 fails…


 On 15 July 2013 12:52, Zhong Yu zhong.j...@gmail.com wrote:

  Another example of a possibly missing wildcard:


 http://gee.cs.oswego.edu/dl/jsr166/dist/docs/java/util/concurrent/CompletionStage.html#thenCompose%28java.util.function.Function%29

thenCompose(Function<? super T, ? extends CompletionStage<U>> fn)

 should be

thenCompose(Function<? super T, ? extends CompletionStage<? extends U>>
 fn)

 The problem is probably wide spread, and we need a tool to find these
 mistakes.

 Zhong Yu


 On Sun, Jul 14, 2013 at 8:04 AM, Jed Wesley-Smith j...@wesleysmith.io
 wrote:
  (accidentally didn't post to the list)
 
  You probably know that the example provided is not completed ported to
  work with our Optional implementation,
 
  It should be, for the example I wrote an Optional that is final and
 should
  be otherwise identical. It should certainly be fairly easy for any
  committer to try. If you can make it work without the ? extends Optional
  I'd love an explanation of how.
 
  and fugue works around the type system with Option as abstract class.
 
  As I've tried to explain, this isn't about the implementation of the
  container class, but how covariance works with a parameterised class.
 
  We originally had the non-, but in a discussion with Brian he alerted
 us to
  the fact that the signature was wrong. We hastily fixed it:
 
 
 https://bitbucket.org/atlassian/fugue/commits/9eca663326a5baeb8f23974732ec585d5627a05c
 
  To further demonstrate, I give you a minimal example of a final Optional
  implementation that does not compile for this test:
 
  https://gist.github.com/jedws/5993596#file-gistfile1-java-L57
 
  cheers,
  jed.
 
 
 
  On 14 July 2013 15:02, Henry Jen henry@oracle.com wrote:
 
   I think I understand what you are saying. However, unless we make
  Optional not final, the extends part just doesn't matter.
 
  You probably know that the example provided is not completed ported to
  work with our Optional implementation, and fugue works around the type
  system with Option as abstract class.
 
  Cheers,
  Henry
 
   On Jul 13, 2013, at 4:35 PM, Jed Wesley-Smith j...@wesleysmith.io
 j...@wesleysmith.iowrote:
 
   I did supply a test that you can use to try it.
 
   What we are talking about is whether type BoxParent is substitutable
  by BoxChild in the contravariant position. Intuitively we think we
 only
  need Box? extends Parent because we only care about the type
 parameter,
  but the type – as you point out – is actually different. BoxParent
 is not
  inherited by BoxChild.
 
   Specifically, if we have a consuming Box, and we replace it with a Box
  of a more specific type parameter, we could attempt feed the more
 general
  type into it, ie. a BoxChild isn't going to appreciate having Parent
 fed
  to it. This is why covariance and mutable containers don't mix well,
 and
  why Java's covariant arrays are problematic.
 
   In this situation we have an immutable container, and we can
 substitute
  the type of our container with one of a more specific type, as it will
 only
  ever supply a value – and a value of Child will suffice as a Parent.
 So,
  for this case we need a Box that is substitutable, and therefore we
 need to
  add the covariance to our box.
 
   ? extends Box is simply adding covariance to our Box type.
 
   For a much better explanation than I can give about this, see this
  excellent post describing generics in Scala, which – apart from have
  declaration-site variance and using [A] in place of A – generally
 follow
  the same pattern:
 
 
 
 http://termsandtruthconditions.herokuapp.com/blog/2012/12/29/covariance-and-contravariance-in-scala/
 
   cheers,
  jed.
 
 
  On 14 July 2013 04:49, Henry Jen henry@oracle.com wrote:
 
  I think the type you talking about here is Optional? extends U
 instead
  of ? extends OptionalU.
 
   IIRC, Optional? extends U is not a subtype of OptionalU, just
 like
  any other Collection class. ListChild is not a ListParent.
 
   Cheers,
  Henry
 
 
   On Jul 13, 2013, at 3:15 AM, Jed Wesley-Smith j...@wesleysmith.io
  wrote:
 
The ? extends Optional is unnecessary in flatMap as Optional is
 final.
 
   interestingly enough, it actually is.
 
   try the following test:
 
   class OptionalTest {
class Parent {};
 
 class Child extends Parent {};
 
 @Test public void covariantReturn() {
  OptionalParent some = some(new Parent());
  FunctionParent, OptionalChild f = new FunctionParent,
  OptionalChild() {
@Override public OptionalChild apply(Parent p) {
  return some(new Child

[jira] [Commented] (LOG4J2-169) LogManager.getLogger doesn't work

2013-03-27 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LOG4J2-169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13615907#comment-13615907
 ] 

Jed Wesley-Smith commented on LOG4J2-169:
-

Anything in particular holding up a beta5 release?

It has been a few months since the last one and this particular problem is 
seriously slowing our test runs as they must be executed sequentially to avoid 
tripping this up.

 LogManager.getLogger doesn't work
 -

 Key: LOG4J2-169
 URL: https://issues.apache.org/jira/browse/LOG4J2-169
 Project: Log4j 2
  Issue Type: Bug
  Components: Core
Affects Versions: 2.0-beta4
Reporter: Jed Wesley-Smith
Assignee: Ralph Goers
Priority: Critical
  Labels: thread-safety
 Fix For: 2.0-beta5


 We randomly get the following:
 java.util.ConcurrentModificationException
   at 
 org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:377)
   at 
 org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:361)
   at 
 org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:266)
   at 
 org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:134)
   at 
 org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:75)
   at 
 org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:30)
   at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:165)
   at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:174)
   at …
  factories is defined as:
  private static List<ConfigurationFactory> factories = new
  ArrayList<ConfigurationFactory>();
  The simple fix is to use a java.util.concurrent.CopyOnWriteArrayList:
  private static final List<ConfigurationFactory> factories = new
  CopyOnWriteArrayList<ConfigurationFactory>();
 https://svn.apache.org/repos/asf/logging/log4j/log4j2/trunk/core/src/main/java/org/apache/logging/log4j/core/config/ConfigurationFactory.java
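A small Scala illustration of the difference (assumes Scala 2.12+ for the
function-to-Runnable conversion; this is not the log4j code):

  import java.util.concurrent.CopyOnWriteArrayList

  val factories = new CopyOnWriteArrayList[String]()
  factories.add("XmlConfigurationFactory")

  // Another thread appending while we iterate: safe, because iteration works
  // on an immutable snapshot. With a plain java.util.ArrayList the same
  // interleaving can throw ConcurrentModificationException.
  val writer = new Thread(() => (1 to 1000).foreach(i => factories.add(s"factory-$i")))
  writer.start()
  factories.forEach(f => println(f))
  writer.join()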

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: log4j-dev-unsubscr...@logging.apache.org
For additional commands, e-mail: log4j-dev-h...@logging.apache.org



Re: RFR : JDK-8001642 : Add OptionalT, OptionalDouble, OptionalInt, OptionalLong

2013-03-06 Thread Jed Wesley-Smith
There is no need for an Option container to show how nested collections may be 
misused; you could just as easily show an example of a List<List<List<T>>> that 
is isomorphic to a flattened Iterable<T>. I'd simply point to the utility of 
the monadic bind or flatMap function!

There are several reasons why Option types provide serious usability 
improvements over null, notwithstanding annotation support. The first is that 
the type-system prevents misuse (modulo unsafe API), and the second is that it 
allows operations that produce optional results to be sequenced easily without 
if checks. For more information see previous posts on Optional forming a Monad.
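A short Scala sketch of that sequencing (made-up lookup functions, purely
illustrative):

  def lookupUser(id: Int): Option[String] =
    if (id == 42) Some("jed") else None

  def lookupEmail(user: String): Option[String] =
    if (user == "jed") Some("jed@example.com") else None

  // flatMap sequences the two optional results with no null or isDefined checks.
  val email: Option[String] = lookupUser(42).flatMap(lookupEmail)

  // equivalent for-comprehension, i.e. monadic bind under the hood
  val email2: Option[String] = for {
    u <- lookupUser(42)
    e <- lookupEmail(u)
  } yield e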

cheers,
jed.

On 06/03/2013, at 10:58 PM, Remi Forax fo...@univ-mlv.fr wrote:

 On 03/06/2013 11:54 AM, Jed Wesley-Smith wrote:
 Really, this is a lot of fuss over nothing.
 
 There is actually no fundamental difference between Scala's Option, Guava's 
 Optional, Fugue's Option, Java's Optional and Haskell's Maybe – they are 
 modelling the same thing, the possibility of a value not being present.
 
 The fact that there may be minor differences in api or semantics around 
 whether null is a legal value are minor in the scheme of things (and yes, 
 null is a pretty stupid legal value of a Some IMHO).
 
 Stephen's example is ludicrous, why have a list of optional values? You'd 
 flatten down into just a list – and an optional list only makes sense if the 
 enclosed list is guaranteed to be non-empty, otherwise you just return an 
 empty list!
 
 People like shooting their own feet.
 http://cs.calstatela.edu/wiki/index.php/Courses/CS_460/Fall_2012/Week_8/gamePlay.combat.BattleAnalysis
  
 
 
 If we are going to use potential straw-men as arguments we can stall all 
 progress. Please concentrate on the important matters, let's disavow null as 
 a valid value and save us all a billion dollars
 
 Also Scala Option is not the only way to solve the null problem.
 The JSR308 annotation @Nullable/@NonNull are recognized by Eclipse and 
 IntelliJ at least.
 
 .
 
 cheers,
 jed.
 
 cheers,
 Rémi
 
 
 On 06/03/2013, at 8:47 PM, Remi Forax fo...@univ-mlv.fr wrote:
 
 Ok, let be nuclear on this,
 There is no good reason to introduce Optional<T> in java.util.
 
 It doesn't work like Google's Guava Optional despite having the same
 name, it doesn't work like Scala's Option despite having a similar name,
 moreover the lambda pipeline faces a similar issue with the design of
 collectors (see stream.collect()) but solves that similar problem with a
 different design, so the design of Optional is not even consistent with
 the rest of the stream API.
 
 So why do we want something like Optional? We want it to be able to
 represent the fact that, as Mike states, a returned result can have no
 value; for example, Collections.emptyList().stream().findFirst() should
 'return' no value.
 
 As Stephen Colebourne said, Optional is a bad name because Scala uses
 Option [1], which can be used in the same context, as the result of a
 filter/map etc., but Option in Scala is a way to mask null. Given the name
 proximity, people will start to use Optional like Option in Scala and we
 will see methods returning things like Optional<List<Optional<String>>>.
 
 Google's Guava, which is a popular library, defines a class named
 Optional, but allows storing null, unlike the current proposed
 implementation; this will generate a lot of confusion and frustration.
 
 In fact, we don't need Optional at all, because we don't need to return
 a value that can represent a value or no value;
 the idea is that methods like findFirst should take a lambda as a
 parameter, letting the user decide what value should be returned by
 findFirst if there is a value and if there is no value.
 So instead of
   stream.findFirst().orElse(null)
 you will write
   stream.findFirst(orNull)
 with orNull() defined like this:
   public static <T> Optionalizer orNull() {
     return (isPresent, element) -> isPresent? element: null;
   }
 
 The whole design is explained here [2] and is similar to the way
 Collectors are defined [3];
 it's basically the lambda way of thinking: instead of creating an object
 representing the different states resulting from a call to findFirst,
 findFirst takes a lambda as a parameter which is fed with the states of a
 call.
 
 cheers,
 Rémi
 
 [1] http://www.scala-lang.org/api/current/index.html#scala.Option
 [2]
 http://mail.openjdk.java.net/pipermail/lambda-libs-spec-observers/2013-February/001470.html
 [3]
 http://hg.openjdk.java.net/lambda/lambda/jdk/file/tip/src/share/classes/java/util/stream/Collectors.java
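
The Optionalizer shape is only hinted at above, so here is a hedged sketch of the idea as 
described, with assumed names and signatures (this is not the actual proposal's code nor any 
JDK API): findFirst takes a handler that is told whether a value was present.

// Hypothetical sketch of the "pass a lambda to findFirst" idea described above.
// Optionalizer and findFirst here are assumed shapes, not real APIs.
interface Optionalizer<T, R> {
    R apply(boolean isPresent, T element);
}

final class FindFirstSketch {
    static <T, R> R findFirst(Iterable<T> values, Optionalizer<T, R> handler) {
        for (T value : values) {
            return handler.apply(true, value);   // a value is present
        }
        return handler.apply(false, null);       // no value
    }

    static <T> Optionalizer<T, T> orNull() {
        return (isPresent, element) -> isPresent ? element : null;
    }
}

With that shape, findFirst(values, orNull()) plays the role of findFirst().orElse(null) 
without an intermediate Optional object.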
 
 
 On 03/04/2013 09:29 PM, Mike Duigou wrote:
 Hello All;
 
 This patch introduces Optional container objects to be used by the lambda 
 streams libraries for returning results.
 
 The reference Optional type, as defined, intentionally does not allow null 
 values. null may be used with the Optional.orElse() method.
 
 All of the Optional types define hashCode() and equals implementations. 
 Use of Optional types in collections should be generally discouraged but having 
 useful equals() and hashCode() is ever so convenient.

Re: RFR : JDK-8001642 : Add Optional<T>, OptionalDouble, OptionalInt, OptionalLong

2013-03-06 Thread Jed Wesley-Smith
Really, this is a lot of fuss over nothing.

There is actually no fundamental difference between Scala's Option, Guava's 
Optional, Fugue's Option, Java's Optional and Haskell's Maybe – they are 
modelling the same thing, the possibility of a value not being present.

The fact that there may be minor differences in api or semantics around whether 
null is a legal value are minor in the scheme of things (and yes, null is a 
pretty stupid legal value of a Some IMHO).

Stephen's example is ludicrous, why have a list of optional values? You'd 
flatten down into just a list – and an optional list only makes sense if the 
enclosed list is guaranteed to be non-empty, otherwise you just return an empty 
list!

If we are going to use potential straw-men as arguments we can stall all 
progress. Please concentrate on the important matters, let's disavow null as a 
valid value and save us all a billion dollars.

cheers,
jed.

On 06/03/2013, at 8:47 PM, Remi Forax fo...@univ-mlv.fr wrote:

 Ok, let be nuclear on this,
 There is no good reason to introduce Optional<T> in java.util.
 
 It doesn't work like Google's Guava Optional despite having the same 
 name, it doesn't work like Scala's Option despite having a similar name, 
 moreover the lambda pipeline faces a similar issue with the design of 
 collectors (see stream.collect()) but solves that similar problem with a 
 different design, so the design of Optional is not even consistent with 
 the rest of the stream API.
 
 So why do we want something like Optional? We want it to be able to 
 represent the fact that, as Mike states, a returned result can have no 
 value; for example, Collections.emptyList().stream().findFirst() should 
 'return' no value.
 
 As Stephen Colebourne said, Optional is a bad name because Scala uses 
 Option [1], which can be used in the same context, as the result of a 
 filter/map etc., but Option in Scala is a way to mask null. Given the name 
 proximity, people will start to use Optional like Option in Scala and we 
 will see methods returning things like Optional<List<Optional<String>>>.
 
 Google's Guava, which is a popular library, defines a class named 
 Optional, but allows storing null, unlike the current proposed 
 implementation; this will generate a lot of confusion and frustration.
 
 In fact, we don't need Optional at all, because we don't need to return 
 a value that can represent a value or no value;
 the idea is that methods like findFirst should take a lambda as a 
 parameter, letting the user decide what value should be returned by 
 findFirst if there is a value and if there is no value.
 So instead of
   stream.findFirst().orElse(null)
 you will write
   stream.findFirst(orNull)
 with orNull() defined like this:
   public static <T> Optionalizer orNull() {
     return (isPresent, element) -> isPresent? element: null;
   }
 
 The whole design is explained here [2] and is similar to the way 
 Collectors are defined [3];
 it's basically the lambda way of thinking: instead of creating an object 
 representing the different states resulting from a call to findFirst,
 findFirst takes a lambda as a parameter which is fed with the states of a 
 call.
 
 cheers,
 Rémi
 
 [1] http://www.scala-lang.org/api/current/index.html#scala.Option
 [2] 
 http://mail.openjdk.java.net/pipermail/lambda-libs-spec-observers/2013-February/001470.html
 [3] 
 http://hg.openjdk.java.net/lambda/lambda/jdk/file/tip/src/share/classes/java/util/stream/Collectors.java
 
 
 On 03/04/2013 09:29 PM, Mike Duigou wrote:
 Hello All;
 
 This patch introduces Optional container objects to be used by the lambda 
 streams libraries for returning results.
 
 The reference Optional type, as defined, intentionally does not allow null 
 values. null may be used with the Optional.orElse() method.
 
 All of the Optional types define hashCode() and equals implementations. Use 
 of Optional types in collections should be generally discouraged but having 
 useful equals() and hashCode() is ever so convenient.
 
 http://cr.openjdk.java.net/~mduigou/JDK-8001642/0/webrev/
 
 Mike
 
 
 


[jira] [Commented] (LOG4J2-169) LogManager.getLogger doesn't work

2013-02-26 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LOG4J2-169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13587518#comment-13587518
 ] 

Jed Wesley-Smith commented on LOG4J2-169:
-

 Using a CopyOnWriteArrayList seems like a brute force solution that would generate 
 a lot of garbage when many loggers are added. My server at work uses 100's of 
 loggers.

100s is not very many at all. Also, we are talking about ConfigurationFactory 
objects, not Loggers. This technique (copy-on-write) is tried and true for this 
kind of read-mostly configuration data.

 The alternative would be to synchronize in just the right places, which is 
 harder to get right. 

Well no, it is easy actually, but you don't want to. Synchronizing or locking 
means you must synchronize readers as well as writers – which is an unnecessary 
performance hit. CopyOnWriteArrayList does the right thing for you. It is 
extremely simple to implement.
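
A small hedged sketch of the read-mostly trade-off being described (illustrative names): with 
a plain synchronized list both readers and writers must take the same lock, whereas 
copy-on-write reads iterate a snapshot with no lock at all and only the rare writes pay for 
the array copy.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

final class ReadMostlySketch {
    // With a plain list, *both* readers and writers must hold the same lock.
    private final List<String> locked = Collections.synchronizedList(new ArrayList<>());

    int totalLengthLocked() {
        synchronized (locked) {              // readers pay for the lock on every call
            int total = 0;
            for (String s : locked) total += s.length();
            return total;
        }
    }

    // With copy-on-write, reads iterate an immutable snapshot with no lock;
    // only the (rare) writes pay the cost of copying the backing array.
    private final List<String> cow = new CopyOnWriteArrayList<>();

    int totalLengthCow() {
        int total = 0;
        for (String s : cow) total += s.length();
        return total;
    }
}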



 LogManager.getLogger doesn't work
 -

 Key: LOG4J2-169
 URL: https://issues.apache.org/jira/browse/LOG4J2-169
 Project: Log4j 2
  Issue Type: Bug
  Components: Core
Affects Versions: 2.0-beta4
Reporter: Jed Wesley-Smith
Priority: Critical
  Labels: thread-safety

 We randomly get the following:
 java.util.ConcurrentModificationException
   at 
 org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:377)
   at 
 org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:361)
   at 
 org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:266)
   at 
 org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:134)
   at 
 org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:75)
   at 
 org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:30)
   at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:165)
   at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:174)
   at …
 factories is defined as:
 private static List<ConfigurationFactory> factories = new 
 ArrayList<ConfigurationFactory>();
 The simple fix is to use a java.util.concurrent.CopyOnWriteArrayList:
 private static final List<ConfigurationFactory> factories = new 
 CopyOnWriteArrayList<ConfigurationFactory>();
 https://svn.apache.org/repos/asf/logging/log4j/log4j2/trunk/core/src/main/java/org/apache/logging/log4j/core/config/ConfigurationFactory.java

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: log4j-dev-unsubscr...@logging.apache.org
For additional commands, e-mail: log4j-dev-h...@logging.apache.org



[jira] [Commented] (LOG4J2-169) LogManager.getLogger doesn't work

2013-02-26 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LOG4J2-169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13587526#comment-13587526
 ] 

Jed Wesley-Smith commented on LOG4J2-169:
-

 I looked at this code last night and it struck me that I don't think 
 getInstance should be adding factory Class instances to the list every time 
 it is called. There are a couple of choices in fixing that which would also 
 solve the ConcurrentModificationException.

That is truly bizarre.

 LogManager.getLogger doesn't work
 -

 Key: LOG4J2-169
 URL: https://issues.apache.org/jira/browse/LOG4J2-169
 Project: Log4j 2
  Issue Type: Bug
  Components: Core
Affects Versions: 2.0-beta4
Reporter: Jed Wesley-Smith
Priority: Critical
  Labels: thread-safety

 We randomly get the following:
 java.util.ConcurrentModificationException
   at 
 org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:377)
   at 
 org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:361)
   at 
 org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:266)
   at 
 org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:134)
   at 
 org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:75)
   at 
 org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:30)
   at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:165)
   at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:174)
   at …
 factories is defined as:
 private static List<ConfigurationFactory> factories = new 
 ArrayList<ConfigurationFactory>();
 The simple fix is to use a java.util.concurrent.CopyOnWriteArrayList:
 private static final List<ConfigurationFactory> factories = new 
 CopyOnWriteArrayList<ConfigurationFactory>();
 https://svn.apache.org/repos/asf/logging/log4j/log4j2/trunk/core/src/main/java/org/apache/logging/log4j/core/config/ConfigurationFactory.java

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: log4j-dev-unsubscr...@logging.apache.org
For additional commands, e-mail: log4j-dev-h...@logging.apache.org



[jira] [Commented] (LOG4J2-169) LogManager.getLogger doesn't work

2013-02-26 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LOG4J2-169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13587561#comment-13587561
 ] 

Jed Wesley-Smith commented on LOG4J2-169:
-

 What is bizarre?

Creating a new Configuration each time getInstance is called. I hadn't noticed 
that, I was only looking at the direct usages of the list.

 LogManager.getLogger doesn't work
 -

 Key: LOG4J2-169
 URL: https://issues.apache.org/jira/browse/LOG4J2-169
 Project: Log4j 2
  Issue Type: Bug
  Components: Core
Affects Versions: 2.0-beta4
Reporter: Jed Wesley-Smith
Priority: Critical
  Labels: thread-safety

 We randomly get the following:
 java.util.ConcurrentModificationException
   at 
 org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:377)
   at 
 org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:361)
   at 
 org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:266)
   at 
 org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:134)
   at 
 org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:75)
   at 
 org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:30)
   at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:165)
   at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:174)
   at …
 factories is defined as:
 private static List<ConfigurationFactory> factories = new 
 ArrayList<ConfigurationFactory>();
 The simple fix is to use a java.util.concurrent.CopyOnWriteArrayList:
 private static final List<ConfigurationFactory> factories = new 
 CopyOnWriteArrayList<ConfigurationFactory>();
 https://svn.apache.org/repos/asf/logging/log4j/log4j2/trunk/core/src/main/java/org/apache/logging/log4j/core/config/ConfigurationFactory.java

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: log4j-dev-unsubscr...@logging.apache.org
For additional commands, e-mail: log4j-dev-h...@logging.apache.org



[jira] [Updated] (LOG4J2-169) ConfigurationFactory is not thread-safe

2013-02-25 Thread Jed Wesley-Smith (JIRA)

 [ 
https://issues.apache.org/jira/browse/LOG4J2-169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jed Wesley-Smith updated LOG4J2-169:


Priority: Critical  (was: Major)

 ConfigurationFactory is not thread-safe
 ---

 Key: LOG4J2-169
 URL: https://issues.apache.org/jira/browse/LOG4J2-169
 Project: Log4j 2
  Issue Type: Bug
  Components: Core
Affects Versions: 2.0-beta4
Reporter: Jed Wesley-Smith
Priority: Critical
  Labels: thread-safety

 We randomly get the following:
 java.util.ConcurrentModificationException
   at 
 org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:377)
   at 
 org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:361)
   at 
 org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:266)
   at 
 org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:134)
   at 
 org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:75)
   at 
 org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:30)
   at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:165)
   at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:174)
   at …
 factories is defined as:
 private static List<ConfigurationFactory> factories = new 
 ArrayList<ConfigurationFactory>();
 The simple fix is to use a java.util.concurrent.CopyOnWriteArrayList:
 private static final List<ConfigurationFactory> factories = new 
 CopyOnWriteArrayList<ConfigurationFactory>();
 https://svn.apache.org/repos/asf/logging/log4j/log4j2/trunk/core/src/main/java/org/apache/logging/log4j/core/config/ConfigurationFactory.java

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: log4j-dev-unsubscr...@logging.apache.org
For additional commands, e-mail: log4j-dev-h...@logging.apache.org



[jira] [Updated] (LOG4J2-169) LogManager.getLogger doesn't work

2013-02-25 Thread Jed Wesley-Smith (JIRA)

 [ 
https://issues.apache.org/jira/browse/LOG4J2-169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jed Wesley-Smith updated LOG4J2-169:


Summary: LogManager.getLogger doesn't work  (was: ConfigurationFactory is 
not thread-safe)

 LogManager.getLogger doesn't work
 -

 Key: LOG4J2-169
 URL: https://issues.apache.org/jira/browse/LOG4J2-169
 Project: Log4j 2
  Issue Type: Bug
  Components: Core
Affects Versions: 2.0-beta4
Reporter: Jed Wesley-Smith
Priority: Critical
  Labels: thread-safety

 We randomly get the following:
 java.util.ConcurrentModificationException
   at 
 org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:377)
   at 
 org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:361)
   at 
 org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:266)
   at 
 org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:134)
   at 
 org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:75)
   at 
 org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:30)
   at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:165)
   at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:174)
   at …
 factories is defined as:
 private static List<ConfigurationFactory> factories = new 
 ArrayList<ConfigurationFactory>();
 The simple fix is to use a java.util.concurrent.CopyOnWriteArrayList:
 private static final List<ConfigurationFactory> factories = new 
 CopyOnWriteArrayList<ConfigurationFactory>();
 https://svn.apache.org/repos/asf/logging/log4j/log4j2/trunk/core/src/main/java/org/apache/logging/log4j/core/config/ConfigurationFactory.java

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: log4j-dev-unsubscr...@logging.apache.org
For additional commands, e-mail: log4j-dev-h...@logging.apache.org



[jira] [Created] (LOG4J2-169) ConfigurationFactory is not thread-safe

2013-02-20 Thread Jed Wesley-Smith (JIRA)
Jed Wesley-Smith created LOG4J2-169:
---

 Summary: ConfigurationFactory is not thread-safe
 Key: LOG4J2-169
 URL: https://issues.apache.org/jira/browse/LOG4J2-169
 Project: Log4j 2
  Issue Type: Bug
  Components: Core
Affects Versions: 2.0-beta4
Reporter: Jed Wesley-Smith


We randomly get the following:

{noformat}
java.util.ConcurrentModificationException
at 
org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:377)
at 
org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:361)
at 
org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:266)
at 
org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:134)
at 
org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:75)
at 
org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:30)
at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:165)
at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:174)
at …
{noformat}

factories is defined as:

{code}
private static List<ConfigurationFactory> factories = new 
ArrayList<ConfigurationFactory>();
{code}

The simple fix is to use a java.util.concurrent.CopyOnWriteArrayList:

{code}
private static final List<ConfigurationFactory> factories = new 
CopyOnWriteArrayList<ConfigurationFactory>();
{code}

https://svn.apache.org/repos/asf/logging/log4j/log4j2/trunk/core/src/main/java/org/apache/logging/log4j/core/config/ConfigurationFactory.java

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: log4j-dev-unsubscr...@logging.apache.org
For additional commands, e-mail: log4j-dev-h...@logging.apache.org



[jira] [Updated] (LOG4J2-169) ConfigurationFactory is not thread-safe

2013-02-20 Thread Jed Wesley-Smith (JIRA)

 [ 
https://issues.apache.org/jira/browse/LOG4J2-169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jed Wesley-Smith updated LOG4J2-169:


 Labels: thread-safety  (was: )
Description: 
We randomly get the following:

java.util.ConcurrentModificationException
at 
org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:377)
at 
org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:361)
at 
org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:266)
at 
org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:134)
at 
org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:75)
at 
org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:30)
at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:165)
at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:174)
at …

factories is defined as:

private static List<ConfigurationFactory> factories = new 
ArrayList<ConfigurationFactory>();

The simple fix is to use a java.util.concurrent.CopyOnWriteArrayList:

private static final List<ConfigurationFactory> factories = new 
CopyOnWriteArrayList<ConfigurationFactory>();

https://svn.apache.org/repos/asf/logging/log4j/log4j2/trunk/core/src/main/java/org/apache/logging/log4j/core/config/ConfigurationFactory.java

  was:
We randomly get the following:

{noformat}
java.util.ConcurrentModificationException
at 
org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:377)
at 
org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:361)
at 
org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:266)
at 
org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:134)
at 
org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:75)
at 
org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:30)
at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:165)
at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:174)
at …
{noformat}

factories is defined as:

{code}
private static List<ConfigurationFactory> factories = new 
ArrayList<ConfigurationFactory>();
{code}

The simple fix is to use a java.util.concurrent.CopyOnWriteArrayList:

{code}
private static final List<ConfigurationFactory> factories = new 
CopyOnWriteArrayList<ConfigurationFactory>();
{code}

https://svn.apache.org/repos/asf/logging/log4j/log4j2/trunk/core/src/main/java/org/apache/logging/log4j/core/config/ConfigurationFactory.java


 ConfigurationFactory is not thread-safe
 ---

 Key: LOG4J2-169
 URL: https://issues.apache.org/jira/browse/LOG4J2-169
 Project: Log4j 2
  Issue Type: Bug
  Components: Core
Affects Versions: 2.0-beta4
Reporter: Jed Wesley-Smith
  Labels: thread-safety

 We randomly get the following:
 java.util.ConcurrentModificationException
   at 
 org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:377)
   at 
 org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:361)
   at 
 org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:266)
   at 
 org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:134)
   at 
 org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:75)
   at 
 org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:30)
   at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:165)
   at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:174)
   at …
 factories is defined as:
 private static List<ConfigurationFactory> factories = new 
 ArrayList<ConfigurationFactory>();
 The simple fix is to use a java.util.concurrent.CopyOnWriteArrayList:
 private static final List<ConfigurationFactory> factories = new 
 CopyOnWriteArrayList<ConfigurationFactory>();
 https://svn.apache.org/repos/asf/logging/log4j/log4j2/trunk/core/src/main/java/org/apache/logging/log4j/core/config/ConfigurationFactory.java

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e

Re: [The Java Posse] Re: java.util.Optional<T> (Java 8)

2012-11-02 Thread Jed Wesley-Smith
On Friday, 2 November 2012 07:02:38 UTC+11, fabrizio.giudici wrote:

 … I'm a little worried about a possible proliferation of multiple, incompatible 
 libraries. I'd be interested in knowing why Google rejected the patches submitted 
 by Atlassian (I interpret their comment as the fact that they tried to provide 
 patches...). 


We – and others – have tried to submit patches to Guava several times and 
were basically met with a stone-wall, normally various excuses about some 
internal thing or other, and then Kevin came out and basically said[1] they 
were never going to take any submissions from outside, so please stop 
trying.

Fugue[2] was written as it became increasingly obvious that any sane 
functional library code would be deliberately subverted as the Guava team 
don't like it [3] and have no intention of supporting it except 
accidentally.

BTW Fugue isn't our only functional library code, we have recently added a 
Promise[4] to atlassian-util-concurrent[5] that gives a much saner 
interface to the whacky ListenableFuture stuff.

I'm somewhat disheartened by this whole thread though, I was really hoping 
java8 might bring some sanity to functional programming in Java. There 
seems though to be much investment from some parts of the community in 
making sure it doesn't happen, and no good reason for it. I have spoken 
with Brian Goetz, he's a great guy but he alone cannot do all of lambda as 
well as implement a really good standard FP lib. Hopefully they'll get more 
resources so they can round things out, but there'd be a lot more to it than just 
having map/flatMap on Option (think immutable collections for instance).

[1] https://plus.google.com/113026104107031516488/posts/ZRdtjTL1MpM
[2] https://bitbucket.org/atlassian/fugue/
[3] http://code.google.com/p/guava-libraries/wiki/FunctionalExplained
[4] 
https://bitbucket.org/atlassian/atlassian-util-concurrent/src/master/src/main/java/com/atlassian/util/concurrent/Promise.java
[5] https://bitbucket.org/atlassian/atlassian-util-concurrent/

-- 
You received this message because you are subscribed to the Google Groups Java 
Posse group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/javaposse/-/5mlsWD6FnyUJ.
To post to this group, send email to javaposse@googlegroups.com.
To unsubscribe from this group, send email to 
javaposse+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/javaposse?hl=en.



Re: Http client API

2012-08-08 Thread Jed Wesley-Smith
Michael McMahon michael.x.mcmahon@... writes:

 A new revision of the Http client API planned for jdk 8 can be viewed
 at the following link
 
 http://cr.openjdk.java.net/~michaelm/httpclient/v0.3/
 
 We would like to review the api on this mailing list.
 So, all comments are welcome.

Can you separate the domain objects (in particular HttpClient, HttpRequest)
and their set-up (all the mutators) into separate concerns (Builders perhaps, 
see Guava for instance)?

It'd be nice to have this all thread-safe by default; creating an API 
that isn't thread-safe seems less than ideal.

cheers,
Jed Wesley-Smith
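
A hedged sketch of the separation being suggested, with entirely assumed names (this is not 
the proposed jdk8 API): an immutable request object whose set-up lives in a dedicated 
builder, in the style of Guava's builders.

// Sketch only: illustrates separating the immutable domain object from its
// mutable set-up. All names are assumed for the example.
final class SimpleHttpRequest {
    private final String method;
    private final String uri;

    private SimpleHttpRequest(Builder builder) {
        this.method = builder.method;
        this.uri = builder.uri;
    }

    String method() { return method; }
    String uri() { return uri; }

    // All mutators live here; the built request is immutable and safe to share across threads.
    static final class Builder {
        private String method = "GET";
        private String uri;

        Builder method(String method) { this.method = method; return this; }
        Builder uri(String uri) { this.uri = uri; return this; }

        SimpleHttpRequest build() { return new SimpleHttpRequest(this); }
    }
}

Usage would look like new SimpleHttpRequest.Builder().uri("http://example.com/").build(), 
with the resulting object free of any mutators.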




[jira] Created: (WW-3457) Significant performance improvement for freemarker TagModel

2010-06-09 Thread Jed Wesley-Smith (JIRA)
Significant performance improvement for freemarker TagModel
---

 Key: WW-3457
 URL: https://issues.apache.org/jira/browse/WW-3457
 Project: Struts 2
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Jed Wesley-Smith


The class org.apache.struts2.views.freemarker.tags.TagModel creates a new 
freemarker.template.DefaultObjectWrapper for each call to 
unwrapParameters(Map). The problem is that that class is quite expensive to 
construct as the super-class does a quite bit of introspection to fill up its 
caches. Not only is this a lot of work, it tends to hit the Introspector class 
pretty hard, which internally does a lot of sychronising. In our soak 
performance tests this was a significant cause of blocking in our application.

The DefaultObjectWrapper (or more properly the 
freemarker.ext.beans.BeansWrapper class as that is actually all the TagModel is 
using) is actually meant to be a cache that prevents exactly this kind of 
slowness, so constructing a new one every time is counter-productive.

I don't have the latest source handy so I can't provide a patch, but the change 
is to simply replace the line:

DefaultObjectWrapper objectWrapper = new DefaultObjectWrapper();

with:

BeansWrapper objectWrapper = BeansWrapper.getDefaultInstance();

in the second line of TagModel.unwrapParameters(Map params)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (WW-3457) Significant performance improvement for freemarker TagModel

2010-06-09 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/WW-3457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12877332#action_12877332
 ] 

Jed Wesley-Smith commented on WW-3457:
--

 Did you profile that ... ?

Yes, we saw this as one of the major causes of blocking in the multi-threaded runs 
and a significant hotspot in the single-threaded profiled runs.

 Solved, thanks!

Awesome!

 Significant performance improvement for freemarker TagModel
 ---

 Key: WW-3457
 URL: https://issues.apache.org/jira/browse/WW-3457
 Project: Struts 2
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Jed Wesley-Smith
Assignee: Lukasz Lenart
 Fix For: 2.2.0


 The class org.apache.struts2.views.freemarker.tags.TagModel creates a new 
 freemarker.template.DefaultObjectWrapper for each call to 
 unwrapParameters(Map). The problem is that that class is quite expensive to 
 construct as the super-class does quite a bit of introspection to fill up its 
 caches. Not only is this a lot of work, it tends to hit the Introspector 
 class pretty hard, which internally does a lot of synchronising. In our soak 
 performance tests this was a significant cause of blocking in our application.
 The DefaultObjectWrapper (or more properly the 
 freemarker.ext.beans.BeansWrapper class as that is actually all the TagModel 
 is using) is actually meant to be a cache that prevents exactly this kind of 
 slowness, so constructing a new one every time is counter-productive.
 I don't have the latest source handy so I can't provide a patch, but the 
 change is to simply replace the line:
   DefaultObjectWrapper objectWrapper = new DefaultObjectWrapper();
 with:
 BeansWrapper objectWrapper = BeansWrapper.getDefaultInstance();
 in the second line of TagModel.unwrapParameters(Map params)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (FELIX-2332) Lots of contention on ExtensionManager.openConnection(URL)

2010-05-19 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/FELIX-2332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12869447#action_12869447
 ] 

Jed Wesley-Smith commented on FELIX-2332:
-

Any update?

The m_extensions_cache is still being written to with a blank cache during the 
remove operation.

 Lots of contention on ExtensionManager.openConnection(URL)
 --

 Key: FELIX-2332
 URL: https://issues.apache.org/jira/browse/FELIX-2332
 Project: Felix
  Issue Type: Bug
  Components: Framework
Affects Versions: framework-2.0.5
Reporter: Jed Wesley-Smith
Assignee: Karl Pauls
 Fix For: framework-3.0.0

 Attachments: ExtensionManager.java.patch


 This method is synchronized, apparently to protect the iteration through 
 the m_extensions list. We have seen significant blocking in our applications 
 as this lock encompasses the call to URL.openConnection as well.
 As this list is rarely changed, a copy-on-write structure would be more 
 appropriate, but at the very least, only holding the lock during the 
 iteration would be far preferable.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Reopened: (FELIX-2332) Lots of contention on ExtensionManager.openConnection(URL)

2010-05-12 Thread Jed Wesley-Smith (JIRA)

 [ 
https://issues.apache.org/jira/browse/FELIX-2332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jed Wesley-Smith reopened FELIX-2332:
-


Sorry Karl, one more thing. In _removeExtensions(Object source) you still write 
a blank m_extensionCache (line 526 in r943407). If you remove that line all 
should be good.

 Lots of contention on ExtensionManager.openConnection(URL)
 --

 Key: FELIX-2332
 URL: https://issues.apache.org/jira/browse/FELIX-2332
 Project: Felix
  Issue Type: Bug
  Components: Framework
Affects Versions: framework-2.0.5
Reporter: Jed Wesley-Smith
Assignee: Karl Pauls
 Fix For: framework-3.0.0

 Attachments: ExtensionManager.java.patch


 This method is synchronized, apparently to protect the iteration through 
 the m_extensions list. We have seen significant blocking in our applications 
 as this lock encompasses the call to URL.openConnection as well.
 As this list is rarely changed, a copy-on-write structure would be more 
 appropriate, but at the very least, only holding the lock during the 
 iteration would be far preferable.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Reopened: (FELIX-2332) Lots of contention on ExtensionManager.openConnection(URL)

2010-05-11 Thread Jed Wesley-Smith (JIRA)

 [ 
https://issues.apache.org/jira/browse/FELIX-2332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jed Wesley-Smith reopened FELIX-2332:
-


 Lots of contention on ExtensionManager.openConnection(URL)
 --

 Key: FELIX-2332
 URL: https://issues.apache.org/jira/browse/FELIX-2332
 Project: Felix
  Issue Type: Bug
  Components: Framework
Affects Versions: framework-2.0.5
Reporter: Jed Wesley-Smith
Assignee: Karl Pauls
 Fix For: framework-3.0.0

 Attachments: ExtensionManager.java.patch


 This method is synchronized, apparently to protect the iteration through 
 the m_extensions list. We have seen significant blocking in our applications 
 as this lock encompasses the call to URL.openConnection as well.
 As this list is rarely changed, a copy-on-write structure would be more 
 appropriate, but at the very least, only holding the lock during the 
 iteration would be far preferable.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (FELIX-2332) Lots of contention on ExtensionManager.openConnection(URL)

2010-05-11 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/FELIX-2332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12866464#action_12866464
 ] 

Jed Wesley-Smith commented on FELIX-2332:
-

Karl, there are a couple of things I am concerned about with your commit.

Firstly there is partial visibility of changes, specifically in 
_removeExtensions(Object). It now completely clears the m_extensionsCache and 
then calls _add(String, Bundle) to add the old ones back in one by one. As the 
openConnection(URL) method is no longer synchronised it can see the 
m_extensionsCache in a partially updated state. It would be better to write the 
m_extensionsCache at the end of the method once m_extensions is completely 
updated. This would also remove any intermediate array creation in this case. 
Obviously addExtension(Object source, Bundle extension) would also need to 
explicitly write the m_extensionsCache as well. This partial visibility issue 
could lead to some spurious and very hard to reproduce IOExceptions.

Second is a small point, the rewritten openConnection method uses Java5 syntax 
(foreach loop), but the m_extensionsCache write in _add uses the old 1.4 
explicit cast of the array, unnecessary in Java5. I don't know if Felix3 is 
Java5 or not, or, if it is, whether there is a policy for using Java5 features.
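
A hedged sketch of the "write the cache once at the end" pattern being suggested; the field 
and method names are illustrative, not Felix's actual code.

import java.util.ArrayList;
import java.util.List;

final class ExtensionCacheSketch {
    private final List<String> extensions = new ArrayList<>();   // guarded by 'this'
    private volatile String[] extensionsCache = new String[0];   // read without locking

    // Readers only ever see a fully built snapshot.
    boolean contains(String name) {
        for (String ext : extensionsCache) {
            if (ext.equals(name)) return true;
        }
        return false;
    }

    synchronized void add(String name) {
        extensions.add(name);
        extensionsCache = extensions.toArray(new String[0]);      // publish complete state
    }

    synchronized void removeAll(List<String> toRemove) {
        extensions.removeAll(toRemove);
        // Publish once, at the end, instead of clearing and re-adding one by one,
        // so concurrent readers never observe a partially rebuilt cache.
        extensionsCache = extensions.toArray(new String[0]);
    }
}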

 Lots of contention on ExtensionManager.openConnection(URL)
 --

 Key: FELIX-2332
 URL: https://issues.apache.org/jira/browse/FELIX-2332
 Project: Felix
  Issue Type: Bug
  Components: Framework
Affects Versions: framework-2.0.5
Reporter: Jed Wesley-Smith
Assignee: Karl Pauls
 Fix For: framework-3.0.0

 Attachments: ExtensionManager.java.patch


 This method is synchronized, apparently to protect the iteration through 
 the m_extensions list. We have seen significant blocking in our applications 
 as this lock encompasses the call to URL.openConnection as well.
 As this list is rarely changed, a copy-on-write structure would be more 
 appropriate, but at the very least, only holding the lock during the 
 iteration would be far preferable.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (FELIX-2332) Lots of contention on ExtensionManager.openConnection(URL)

2010-05-10 Thread Jed Wesley-Smith (JIRA)
Lots of contention on ExtensionManager.openConnection(URL)
--

 Key: FELIX-2332
 URL: https://issues.apache.org/jira/browse/FELIX-2332
 Project: Felix
  Issue Type: Bug
  Components: Framework
Reporter: Jed Wesley-Smith


This method is synchronized, apparently to protect the iteration through the 
m_extensions list. We have seen significant blocking in our applications as 
this lock encompasses the call to URL.openConnection as well.

As this list is rarely changed, a copy-on-write structure would be more 
appropriate, but at the very least, only holding the lock during the 
iteration would be far preferable.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (FELIX-2332) Lots of contention on ExtensionManager.openConnection(URL)

2010-05-10 Thread Jed Wesley-Smith (JIRA)

 [ 
https://issues.apache.org/jira/browse/FELIX-2332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jed Wesley-Smith updated FELIX-2332:


Attachment: ExtensionManager.java.patch

attached is a simple patch that reduces the scope of the sync block to the 
iteration.
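
For context, a hedged sketch of what reducing the sync block to the iteration looks like 
(illustrative names; the real change is in the attached patch): take a snapshot of the list 
under the lock, then do the potentially slow URL.openConnection call with no lock held.

import java.io.IOException;
import java.net.URL;
import java.net.URLConnection;
import java.util.ArrayList;
import java.util.List;

final class NarrowLockSketch {
    private final List<String> extensions = new ArrayList<>();   // guarded by 'this'

    URLConnection openConnection(URL url) throws IOException {
        List<String> snapshot;
        synchronized (this) {
            // Only the iteration/copy is protected; the list is small, so this is cheap.
            snapshot = new ArrayList<>(extensions);
        }
        for (String ext : snapshot) {
            // ... decide, based on ext, whether a special handler applies ...
            if (url.getPath().endsWith(ext)) {
                break;
            }
        }
        // The potentially slow call happens with no lock held, so other threads
        // are not blocked behind network or file I/O.
        return url.openConnection();
    }
}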

 Lots of contention on ExtensionManager.openConnection(URL)
 --

 Key: FELIX-2332
 URL: https://issues.apache.org/jira/browse/FELIX-2332
 Project: Felix
  Issue Type: Bug
  Components: Framework
Reporter: Jed Wesley-Smith
 Attachments: ExtensionManager.java.patch


 This method is synchronized, apparently to protect the iteration through 
 the m_extensions list. We have seen significant blocking in our applications 
 as this lock encompasses the call to URL.openConnection as well.
 As this list is rarely changed, a copy-on-write structure would be more 
 appropriate, but at the very least, only holding the lock during the 
 iteration would be far preferable.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (FELIX-1746) Eliminate contention on ServiceRegistry.getServiceReferences(String, Filter)

2009-10-14 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/FELIX-1746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12765814#action_12765814
 ] 

Jed Wesley-Smith commented on FELIX-1746:
-

Karl, there are a couple of things that I think are not complete in your patch. 
Firstly, the register and unregister methods are still synchronizing on (this) 
rather than the m_serviceRegsMap, which increases the chance for that lock to 
become contended. Uncontended synchronization is very cheap these days (in 
Java6 it boils down to a single CAS per lock acquire for unbiased locking, or 
less for biased locking if the current thread has already established 
ownership of the lock), but contended synchronization can be extremely 
expensive, and basically gets more and more so as the number of cores and 
threads go up. At a minimum, it is important to use the most specific lock 
possible.

Question: you state that HotSpot will conflate the two subsequent lock acquires 
into one, can you provide a reference? I have not been able to find one. It is 
just as easy to force the matter and surround the two calls in a 
synchronized(m_serviceRegsMap) block.

You also state above that it is difficult to reason about performance with the 
COW map solution. This is not actually the case. The COWMap will perform a map 
copy under lock during a write, and then modify the copied map. Reads never 
lock, they simply read a volatile reference to the underlying effectively 
immutable map. The amount of work done in the copy is not a whole lot more than 
is done in your read method where the values() collection is copied to an 
array. Your map writes are locked as well but must lock while modifying the 
value array to ensure atomic updates. Therefore the COWMap disadvantage for 
writes is minimal, while reads completely eliminate locking (and therefore the 
chance of blocking), which also minimises the number of necessary lock 
acquisitions. We have a lot of experience performance testing these structures 
and the map has to be pretty large and the ratio of writes to reads very 
significant before the write copy under lock becomes significant. For a lot of 
usages the COWMap significantly outperforms the java.util.ConcurrentHashMap 
(and obviously blows SynchronizedMap out of the water).

You state that Felix in general locks correctly, and I too am very glad that it 
does - but this correctness is often implemented by very broad locking. While 
this is technically correct, it has the effect of serializing access to a lot 
of these classes. It does not take too much load for this to become a major 
performance bottleneck.

All that being said, if we can get a slightly modified (as above) commit in for 
now and revisit later that seems an acceptable compromise. We will use our 
patch on our fork for now and mark it as diverged. Please add the link to the 
created issue.

Lastly, it seems some calls to getReference() have been protected by a catch 
(IllegalStateException) but not others. Is there a reason for this? Should it 
be documented? Why is this change in this changeset at all anyway? Shouldn't 
there be a separate issue and commit for this?
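
For readers unfamiliar with the structure under discussion, a minimal hedged sketch of a 
copy-on-write map, far simpler than the tested implementation in the attached patch or in 
atlassian-util-concurrent: writes copy the backing map under a lock and publish the copy 
through a volatile field, while reads never lock.

import java.util.HashMap;
import java.util.Map;

final class CopyOnWriteMapSketch<K, V> {
    private volatile Map<K, V> delegate = new HashMap<>();

    // Reads are lock-free: they dereference the volatile field and read an
    // effectively immutable map that is never mutated after publication.
    V get(K key) {
        return delegate.get(key);
    }

    // Writes copy under a lock, mutate the copy, then publish it.
    synchronized V put(K key, V value) {
        Map<K, V> copy = new HashMap<>(delegate);
        V previous = copy.put(key, value);
        delegate = copy;
        return previous;
    }

    synchronized V remove(K key) {
        Map<K, V> copy = new HashMap<>(delegate);
        V previous = copy.remove(key);
        delegate = copy;
        return previous;
    }
}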

 Eliminate contention on ServiceRegistry.getServiceReferences(String, Filter)
 

 Key: FELIX-1746
 URL: https://issues.apache.org/jira/browse/FELIX-1746
 Project: Felix
  Issue Type: Improvement
  Components: Framework
Affects Versions: felix-2.0.0
Reporter: Jed Wesley-Smith
Assignee: Karl Pauls
 Attachments: blocked-threads.gif.jpg, FELIX-1746-alt.patch, 
 FELIX-1746-alt2.patch, FELIX-1746.patch


 Performance testing has shown that there is significant contention on the 
 ServiceRegistry object's monitor during startup. This is caused by Spring DM 
 making lots of calls to the synchronized method 
 ServiceRegistry.getServiceReferences(String, Filter). This method is 
 synchronized in order to protect the m_serviceRegsMap HashMap, but the method 
 does a lot more work than simply accessing the map.
 Propose changing the ServiceRegistry to use a thread-safe Map implementation 
 that does not require external synchronization, in particular a 
 CopyOnWriteMap. I will add a patch that includes a CopyOnWriteMap 
 implementation.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (FELIX-1746) Eliminate contention on ServiceRegistry.getServiceReferences(String, Filter)

2009-10-14 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/FELIX-1746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12765817#action_12765817
 ] 

Jed Wesley-Smith commented on FELIX-1746:
-

 Question, you state that HotSpot will conflate the two subsequent lock 
 acquires into one, can you provide a reference? I have not been able to find 
 one...

Don't worry, found it: 
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6245809

 Eliminate contention on ServiceRegistry.getServiceReferences(String, Filter)
 

 Key: FELIX-1746
 URL: https://issues.apache.org/jira/browse/FELIX-1746
 Project: Felix
  Issue Type: Improvement
  Components: Framework
Affects Versions: felix-2.0.0
Reporter: Jed Wesley-Smith
Assignee: Karl Pauls
 Attachments: blocked-threads.gif.jpg, FELIX-1746-alt.patch, 
 FELIX-1746-alt2.patch, FELIX-1746.patch


 Performance testing has shown that there is significant contention on the 
 ServiceRegistry object's monitor during startup. This is caused by Spring DM 
 making lots of calls to the synchronized method 
 ServiceRegistry.getServiceReferences(String, Filter). This method is 
 synchronized in order to protect the m_serviceRegsMap HashMap, but the method 
 does a lot more work than simply accessing the map.
 Propose changing the ServiceRegistry to use a thread-safe Map implementation 
 that does not require external synchronization, in particular a 
 CopyOnWriteMap. I will add a patch that includes a CopyOnWriteMap 
 implementation.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (FELIX-1746) Eliminate contention on ServiceRegistry.getServiceReferences(String, Filter)

2009-10-13 Thread Jed Wesley-Smith (JIRA)

 [ 
https://issues.apache.org/jira/browse/FELIX-1746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jed Wesley-Smith updated FELIX-1746:


Summary: Eliminate contention on 
ServiceRegistry.getServiceReferences(String, Filter)  (was: Reduce contention 
on ServiceRegistry.getServiceReferences(String, Filter))

 Eliminate contention on ServiceRegistry.getServiceReferences(String, Filter)
 

 Key: FELIX-1746
 URL: https://issues.apache.org/jira/browse/FELIX-1746
 Project: Felix
  Issue Type: Improvement
  Components: Framework
Affects Versions: felix-2.0.0
Reporter: Jed Wesley-Smith
Assignee: Karl Pauls
 Attachments: blocked-threads.gif.jpg, FELIX-1746-alt.patch, 
 FELIX-1746-alt2.patch, FELIX-1746.patch


 Performance testing has shown that there is significant contention on the 
 ServiceRegistry object's monitor during startup. This is caused by Spring DM 
 making lots of calls to the synchronized method 
 ServiceRegistry.getServiceReferences(String, Filter). This method is 
 synchronized in order to protect the m_serviceRegsMap HashMap, but the method 
 does a lot more work than simply accessing the map.
 Propose changing the ServiceRegistry to use a thread-safe Map implementation 
 that does not require external synchronization, in particular a 
 CopyOnWriteMap. I will add a patch that includes a CopyOnWriteMap 
 implementation.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (FELIX-1746) Reduce contention on ServiceRegistry.getServiceReferences(String, Filter)

2009-10-13 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/FELIX-1746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12765374#action_12765374
 ] 

Jed Wesley-Smith commented on FELIX-1746:
-

Karl, your patch reduces the contention somewhat, as it now only synchronizes 
on the map.get() (rather than holding the ServiceRegistry lock for the entire 
read operation) and the map.put() cases using the map's monitor, but it still 
requires the lock to be acquired on read. We have found that the read case is 
called very often, while write is comparatively rare.

My patch removes all synchronisation during read, eliminating most cases of 
lock contention. It was also extremely careful not to introduce any 
semantic changes to the ServiceRegistry. You also still use the ServiceRegistry 
monitor to co-ordinate updates to the map, which blocks concurrent calls to 
ServiceRegistry.getUsingBundles(ServiceReference), 
ServiceRegistry.getServicesInUse(Bundle) and other parts of the class that 
explicitly synchronize on (this).

Lastly, it seems to me that the edge-case change you made in alt2 (deal with 
IllegalStateException when dereferencing) is orthogonal to this change and the 
change would need to be made regardless of this particular problem.

 Reduce contention on ServiceRegistry.getServiceReferences(String, Filter)
 -

 Key: FELIX-1746
 URL: https://issues.apache.org/jira/browse/FELIX-1746
 Project: Felix
  Issue Type: Improvement
  Components: Framework
Affects Versions: felix-2.0.0
Reporter: Jed Wesley-Smith
Assignee: Karl Pauls
 Attachments: blocked-threads.gif.jpg, FELIX-1746-alt.patch, 
 FELIX-1746-alt2.patch, FELIX-1746.patch


 Performance testing has shown that there is significant contention on the 
 ServiceRegistry object's monitor during startup. This is caused by Spring DM 
 making lots of calls to the synchronized method 
 ServiceRegistry.getServiceReferences(String, Filter). This method is 
 synchronized in order to protect the m_serviceRegsMap HashMap, but the method 
 does a lot more work than simply accessing the map.
 Propose changing the ServiceRegistry to use a thread-safe Map implementation 
 that does not require external synchronization, in particular a 
 CopyOnWriteMap. I will add a patch that includes a CopyOnWriteMap 
 implementation.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (FELIX-1746) Reduce contention on ServiceRegistry.getServiceReferences(String, Filter)

2009-10-12 Thread Jed Wesley-Smith (JIRA)
Reduce contention on ServiceRegistry.getServiceReferences(String, Filter)
-

 Key: FELIX-1746
 URL: https://issues.apache.org/jira/browse/FELIX-1746
 Project: Felix
  Issue Type: Improvement
  Components: Framework
Affects Versions: felix-2.0.0
Reporter: Jed Wesley-Smith


Performance testing has shown that there is significant contention on the 
ServiceRegistry object's monitor during startup. This is caused by Spring DM 
making lots of calls to the synchronized method 
ServiceRegistry.getServiceReferences(String, Filter). This method is 
synchronized in order to protect the m_serviceRegsMap HashMap, but the method 
does a lot more work than simply accessing the map.

Propose changing the ServiceRegistry to use a thread-safe Map implementation 
that does not require external synchronization, in particular a CopyOnWriteMap. 
I will add a patch that includes a CopyOnWriteMap implementation.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (FELIX-1746) Reduce contention on ServiceRegistry.getServiceReferences(String, Filter)

2009-10-12 Thread Jed Wesley-Smith (JIRA)

 [ 
https://issues.apache.org/jira/browse/FELIX-1746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jed Wesley-Smith updated FELIX-1746:


Attachment: blocked-threads.gif.jpg

jprofiler screenshot attached

 Reduce contention on ServiceRegistry.getServiceReferences(String, Filter)
 -

 Key: FELIX-1746
 URL: https://issues.apache.org/jira/browse/FELIX-1746
 Project: Felix
  Issue Type: Improvement
  Components: Framework
Affects Versions: felix-2.0.0
Reporter: Jed Wesley-Smith
 Attachments: blocked-threads.gif.jpg


 Performance testing has shown that there is significant contention on the 
 ServiceRegistry object's monitor during startup. This is caused by Spring DM 
 making lots of calls to the synchronized method 
 ServiceRegistry.getServiceReferences(String, Filter). This method is 
 synchronized in order to protect the m_serviceRegsMap HashMap, but the method 
 does a lot more work than simply accessing the map.
 Propose changing the ServiceRegistry to use a thread-safe Map implementation 
 that does not require external synchronization, in particular a 
 CopyOnWriteMap. I will add a patch that includes a CopyOnWriteMap 
 implementation.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (FELIX-1746) Reduce contention on ServiceRegistry.getServiceReferences(String, Filter)

2009-10-12 Thread Jed Wesley-Smith (JIRA)

 [ 
https://issues.apache.org/jira/browse/FELIX-1746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jed Wesley-Smith updated FELIX-1746:


Attachment: FELIX-1746.patch

Attached is a patch that adds a fully functional and tested CopyOnWriteMap 
implementation and a copy of the Java5 ConcurrentMap interface which it 
implements. This is then used in the ServiceRegistry to provide lock free read 
access to the m_serviceRegsMap.

The CopyOnWriteMap has a full test suite and is based on the implementation in 
the atlassian-util-concurrent library, but backported to Java 1.3.

 Reduce contention on ServiceRegistry.getServiceReferences(String, Filter)
 -

 Key: FELIX-1746
 URL: https://issues.apache.org/jira/browse/FELIX-1746
 Project: Felix
  Issue Type: Improvement
  Components: Framework
Affects Versions: felix-2.0.0
Reporter: Jed Wesley-Smith
 Attachments: blocked-threads.gif.jpg, FELIX-1746.patch


 Performance testing has shown that there is significant contention on the 
 ServiceRegistry object's monitor during startup. This is caused by Spring DM 
 making lots of calls to the synchronized method 
 ServiceRegistry.getServiceReferences(String, Filter). This method is 
 synchronized in order to protect the m_serviceRegsMap HashMap, but the method 
 does a lot more work than simply accessing the map.
 Propose changing the ServiceRegistry to use a thread-safe Map implementation 
 that does not require external synchronization, in particular a 
 CopyOnWriteMap. I will add a patch that includes a CopyOnWriteMap 
 implementation.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SHINDIG-1132) ClassLoader memory leak caused by XmlUtil ThreadLocal

2009-07-29 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/SHINDIG-1132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12736934#action_12736934
 ] 

Jed Wesley-Smith commented on SHINDIG-1132:
---

Unfortunately, the problem is that the instance inside the ThreadLocal contains 
a strong reference to the XmlUtil class. Making it a new instance each time or 
using the Guice ThreadLocal does not change this at all and will not fix the 
problem.

The simplest fix would be to set the error handler to null after using it 
(example doing this in the parse method):

Index: XmlUtil.java
===
--- XmlUtil.java (revision 33230)
+++ XmlUtil.java (working copy)
@@ -282,23 +282,29 @@
* Attempts to parse the input xml into a single element.
* @param xml
* @return The document object
-   * @throws XmlException if a parse error occured.
+   * @throws XmlException if a parse error occurred.
*/
   public static Element parse(String xml) throws XmlException {
 try {
-  DocumentBuilder builder = getBuilder();
-  InputSource is = new InputSource(new StringReader(xml.trim()));
-  Element element = builder.parse(is).getDocumentElement();
-  return element;
-} catch (SAXParseException e) {
-  throw new XmlException(
-  e.getMessage() + " At: (" + e.getLineNumber() + ',' + e.getColumnNumber() + ')', e);
-} catch (SAXException e) {
-  throw new XmlException(e);
+  final DocumentBuilder builder = getBuilder();
+  try {
+InputSource is = new InputSource(new StringReader(xml.trim()));
+Element element = builder.parse(is).getDocumentElement();
+return element;
+  } catch (SAXParseException e) {
+throw new XmlException(
+e.getMessage() + " At: (" + e.getLineNumber() + ',' + e.getColumnNumber() + ')', e);
+  } catch (SAXException e) {
+throw new XmlException(e);
+  } catch (IOException e) {
+throw new XmlException(e);
+  } finally {
+if (builder != null) {
+  builder.setErrorHandler(null);
+}
+  }
 } catch (ParserConfigurationException e) {
   throw new XmlException(e);
-} catch (IOException e) {
-  throw new XmlException(e);
 }
   }
 }

 ClassLoader memory leak caused by XmlUtil ThreadLocal
 -

 Key: SHINDIG-1132
 URL: https://issues.apache.org/jira/browse/SHINDIG-1132
 Project: Shindig
  Issue Type: Bug
  Components: Java
Affects Versions: 1.0
 Environment: When trying to unload the ClassLoader that loaded 
 Shindig, for instance in an OSGi environment
Reporter: Jed Wesley-Smith

 The class org.apache.shindig.common.xml.XmlUtil caches a 
 javax.xml.parsers.DocumentBuilder in the ThreadLocal reusableBuilder 
 variable. These instances are created with the static ErrorHandler instance 
 which creates the strong reference to the XmlUtil class that prevents the 
 ClassLoader from being reclaimed.
 Currently the only way to turn off this behaviour is for 
 DocumentBuilder.reset() to throw an exception.
 We need a way to turn off this caching. Maybe the caching aspect could be 
 injected via Guice?
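
To make the reference chain concrete, a hedged sketch (names simplified, not the actual Shindig source): the thread keeps the builder, the builder keeps the handler, and the handler's class was loaded by the application ClassLoader, so that loader can never be collected while the thread lives.

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;
import org.xml.sax.ErrorHandler;
import org.xml.sax.helpers.DefaultHandler;

final class XmlUtilLeakSketch {
  // anonymous subclass, so its Class object pins the application ClassLoader
  private static final ErrorHandler HANDLER = new DefaultHandler() { };

  // Thread -> ThreadLocal value -> builder -> HANDLER -> HANDLER's Class -> ClassLoader
  private static final ThreadLocal<DocumentBuilder> BUILDER = new ThreadLocal<DocumentBuilder>() {
    protected DocumentBuilder initialValue() {
      try {
        DocumentBuilder b = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        b.setErrorHandler(HANDLER); // the reference the proposed fix clears after each use
        return b;
      } catch (ParserConfigurationException e) {
        throw new RuntimeException(e);
      }
    }
  };
}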

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (LUCENE-1609) Eliminate synchronization contention on initial index reading in TermInfosReader ensureIndexIsRead

2009-06-03 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-1609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12716118#action_12716118
 ] 

Jed Wesley-Smith commented on LUCENE-1609:
--

We get hit by this too. We'd love to see a fix and we'd agree that up-front 
initialisation would work for us.

AFAICT there are a number of other potential subtle concurrency issues with 
{{TermInfosReader}}:

# lack of {{final}} on fields - a number of fields ({{directory}}, {{segment}}, 
{{fieldInfos}}, {{origEnum}}, {{enumerators}} etc.) are never written to after 
construction and should be declared {{final}} for better publication semantics
# unsafe publication of {{indexDivisor}} and {{totalIndexInterval}} these 
fields are not written to under lock and in a worst-case could be unstable 
under use.
# {{close()}} calls {{enumerators.set(null)}} which only clears the value for 
the calling thread.

Making the {{TermInfosReader}} more immutable would address some of these 
issues.
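
To illustrate points 1 and 2 concretely (a hedged sketch with field names borrowed from the comment, nothing else taken from the Lucene source):

final class ReaderStateSketch {
  private final String segment;        // never reassigned: final gives safe publication
  private volatile int indexDivisor;   // mutated after construction: needs volatile or a lock

  ReaderStateSketch(String segment, int indexDivisor) {
    this.segment = segment;
    this.indexDivisor = indexDivisor;
  }

  String segment() { return segment; }
  int indexDivisor() { return indexDivisor; }
  void indexDivisor(int value) { indexDivisor = value; }
}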

As far as the root problem goes, uncontended synchronisation is generally _very 
fast_, but significantly slows down once a lock becomes contended. The kind of 
pattern employed here (do something quite expensive but only once) is not an 
ideal use of synchronisation as it commonly leads to a contended lock, which 
remains a slow lock well after it is required\*. That being said, it isn't easy 
to do correctly and performantly under 1.4. 

\* An alternative approach is something like this 
[LazyReference|http://labs.atlassian.com/source/browse/CONCURRENT/trunk/src/main/java/com/atlassian/util/concurrent/LazyReference.java?r=2242]
 class, although this kind of thing really requires Java5 for full value.
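
For the Java5 case, roughly the shape of that idea (a sketch only, not the atlassian-util-concurrent code): the first caller runs the expensive initialisation, concurrent callers block only until that single run finishes, and every later call returns immediately.

import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;

abstract class Lazy<T> {
  private final FutureTask<T> task = new FutureTask<T>(new Callable<T>() {
    public T call() throws Exception {
      return create(); // the expensive, once-only work
    }
  });

  protected abstract T create();

  public T get() {
    task.run(); // a no-op for every caller after the first
    try {
      return task.get();
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }
}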

 Eliminate synchronization contention on initial index reading in 
 TermInfosReader ensureIndexIsRead 
 ---

 Key: LUCENE-1609
 URL: https://issues.apache.org/jira/browse/LUCENE-1609
 Project: Lucene - Java
  Issue Type: Improvement
  Components: Index
Affects Versions: 2.9
 Environment: Solr 
 Tomcat 5.5
 Ubuntu 2.6.20-17-generic
 Intel(R) Pentium(R) 4 CPU 2.80GHz, 2Gb RAM
Reporter: Dan Rosher
 Fix For: 2.9

 Attachments: LUCENE-1609.patch, LUCENE-1609.patch


 synchronized method ensureIndexIsRead in TermInfosReader causes contention 
 under heavy load
 Simple to reproduce: e.g. Under Solr, with all caches turned off, do a simple 
 range search e.g. id:[0 TO 99] on even a small index (in my case 28K 
 docs) and under a load/stress test application, and later, examining the 
 Thread dump (kill -3) , many threads are blocked on 'waiting for monitor 
 entry' to this method.
 Rather than using Double-Checked Locking which is known to have issues, this 
 implementation uses a state pattern, where only one thread can move the 
 object from IndexNotRead state to IndexRead, and in doing so alters the 
 object's behavior, i.e. once the index is loaded, the index no longer needs a 
 synchronized method. 
 In my particular test, this increased throughput at least 30 times.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: java-dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-dev-h...@lucene.apache.org



[jira] Created: (FELIX-1170) MemoryLeak when stopping and restarting Felix

2009-05-21 Thread Jed Wesley-Smith (JIRA)
MemoryLeak when stopping and restarting Felix
-

 Key: FELIX-1170
 URL: https://issues.apache.org/jira/browse/FELIX-1170
 Project: Felix
  Issue Type: Bug
  Components: Framework
Affects Versions: felix-1.2.1

 Environment: Atlassian JIRA
Reporter: Jed Wesley-Smith


There is a memory leak caused by a strong reference from the 
BundleProtectionDomain to a bundle and Felix.

The problem is that a URLClassLoader gets its AccessControlContext from the 
stack - AccessController.getContext() calls 
AccessController.getStackAccessControlContext() which is basically arbitrary at 
the time.

In our case we have a ServletFilter plugin that is being loaded by Felix. When 
a JasperLoader (a URLClassLoader) is created to load a JSP it inherits the 
BundleProtectionDomain as part of its AccessControlContext. If we later shut 
down Felix, it cannot be GC'd due to this reference.

For our purposes we have tested making the m_felix and m_bundle weak references 
and have verified that it does indeed fix the problem.
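
A rough sketch of the shape of that change (class and field names as described above, everything else simplified and assumed):

import java.lang.ref.WeakReference;

class BundleProtectionDomainSketch {
  // weak, so a captured AccessControlContext no longer keeps Felix or the
  // bundle strongly reachable after the framework is stopped
  private final WeakReference<Object> m_felix;
  private final WeakReference<Object> m_bundle;

  BundleProtectionDomainSketch(Object felix, Object bundle) {
    m_felix = new WeakReference<Object>(felix);
    m_bundle = new WeakReference<Object>(bundle);
  }

  Object getBundle() {
    return m_bundle.get(); // may be null once the referent has been collected
  }
}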

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (FELIX-1170) MemoryLeak when stopping and restarting Felix

2009-05-21 Thread Jed Wesley-Smith (JIRA)

 [ 
https://issues.apache.org/jira/browse/FELIX-1170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jed Wesley-Smith updated FELIX-1170:


Attachment: BundleProtectionDomain.java.FELIX-1170.patch

patch attached.

Note that the hashcode and toString are now constructed up front. 

 MemoryLeak when stopping and restarting Felix
 -

 Key: FELIX-1170
 URL: https://issues.apache.org/jira/browse/FELIX-1170
 Project: Felix
  Issue Type: Bug
  Components: Framework
Affects Versions: felix-1.2.1

 Environment: Atlassian JIRA
Reporter: Jed Wesley-Smith
 Attachments: BundleProtectionDomain.java.FELIX-1170.patch


 There is a memory leak caused by a strong reference from the 
 BundleProtectionDomain to a bundle and Felix.
 The problem is that a URLClassLoader gets its AccessControlContext from the 
 stack - AccessController.getContext() calls 
 AccessController.getStackAccessControlContext() which is basically arbitrary 
 at the time.
 In our case we have a ServletFilter plugin that is being loaded by Felix. 
 When a JasperLoader (a URLClassLoader) is created to load a JSP it inherits 
 the BundleProtectionDomain as part of its AccessControlContext. If we later 
 shut down Felix, it cannot be GC'd due to this reference.
 For our purposes we have tested making the m_felix and m_bundle weak 
 references and have verified that it does indeed fix the problem.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (FELIX-1170) MemoryLeak when stopping and restarting Felix

2009-05-21 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/FELIX-1170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12711926#action_12711926
 ] 

Jed Wesley-Smith commented on FELIX-1170:
-

The [Atlassian Plugins ticket|https://studio.atlassian.com/browse/PLUG-388]

 MemoryLeak when stopping and restarting Felix
 -

 Key: FELIX-1170
 URL: https://issues.apache.org/jira/browse/FELIX-1170
 Project: Felix
  Issue Type: Bug
  Components: Framework
Affects Versions: felix-1.2.1

 Environment: Atlassian JIRA
Reporter: Jed Wesley-Smith
 Attachments: BundleProtectionDomain.java.FELIX-1170.patch


 There is a memory leak caused by a strong reference from the 
 BundleProtectionDomain to a bundle and Felix.
 The problem is that a URLClassLoader gets its AccessControlContext from the 
 stack - AccessController.getContext() calls 
 AccessController.getStackAccessControlContext() which is basically arbitrary 
 at the time.
 In our case we have a ServletFilter plugin that is being loaded by Felix. 
 When a JasperLoader (a URLClassLoader) is created to load a JSP it inherits 
 the BundleProtectionDomain as part of its AccessControlContext. If we later 
 shut down Felix, it cannot be GC'd due to this reference.
 For our purposes we have tested making the m_felix and m_bundle weak 
 references and have verified that it does indeed fix the problem.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Issue Comment Edited: (FELIX-1170) MemoryLeak when stopping and restarting Felix

2009-05-21 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/FELIX-1170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12711926#action_12711926
 ] 

Jed Wesley-Smith edited comment on FELIX-1170 at 5/21/09 7:22 PM:
--

The Atlassian Plugins ticket: https://studio.atlassian.com/browse/PLUG-388

  was (Author: jedws):
The [Atlassian Plugins ticket|https://studio.atlassian.com/browse/PLUG-388]
  
 MemoryLeak when stopping and restarting Felix
 -

 Key: FELIX-1170
 URL: https://issues.apache.org/jira/browse/FELIX-1170
 Project: Felix
  Issue Type: Bug
  Components: Framework
Affects Versions: felix-1.2.1

 Environment: Atlassian JIRA
Reporter: Jed Wesley-Smith
 Attachments: BundleProtectionDomain.java.FELIX-1170.patch


 There is a memory leak caused by a strong reference from the 
 BundleProtectionDomain to a bundle and Felix.
 The problem is that a URLClassLoader gets its AccessControlContext from the 
 stack - AccessController.getContext() calls 
 AccessController.getStackAccessControlContext() which is basically arbitrary 
 at the time.
 In our case we have a ServletFilter plugin that is being loaded by Felix. 
 When a JasperLoader (a URLClassLoader) is created to load a JSP it inherits 
 the BundleProtectionDomain as part of its AccessControlContext. If we later 
 shut down Felix, it cannot be GC'd due to this reference.
 For our purposes we have tested making the m_felix and m_bundle weak 
 references and have verified that it does indeed fix the problem.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



memory leaks

2009-04-14 Thread Jed Wesley-Smith

all,

we are having some very difficult to track down memory leak issues that 
_very tentatively_ appear to be triggered by the felix framework and our 
usage of it.*


we have spent a long time tracking down all trackable paths to GC roots 
and have eliminated them all but still the garbage persists.


we are wondering if anyone has seen anything similar and 
may have some advice.


cheers,
Jed Wesley-Smith
JIRA team @ Atlassian

* http://jira.atlassian.com/browse/JRA-16932

-
To unsubscribe, e-mail: users-unsubscr...@felix.apache.org
For additional commands, e-mail: users-h...@felix.apache.org



Re: IllegalStateEx thrown when calling close

2008-10-30 Thread Jed Wesley-Smith

Thanks Mike!

Michael McCandless wrote:


OK I'll add that (what IW does on setting an OOME) to the javadocs.

Mike

Jed Wesley-Smith wrote:


Mike,

regarding this paragraph:

To workaround this, on catching an OOME on any of IndexWriter's
methods, you should 1) forcibly remove the write lock
(IndexWriter.unlock static method) and then 2) not call any methods on
the old writer.  Even if the old writer has concurrent merges running,
they will refuse to commit on seeing that an OOM had occurred.

I'm not sure that an IndexWriter is particularly useful once its 
hitOOM flag is set to true, whether autocommit is true or not. Once 
it is true you can't do anything with it (it reverts to its last 
commit point and stays there) and need to discard it. I am suggesting 
that this could be documented as it is not immediately obvious 
without coming across it and debugging it. That being said, the VM is 
probably not that useful once OOMEs are flying around anyway :-)


cheers,
jed.

Michael McCandless wrote:


Jed Wesley-Smith wrote:

Yeah, I saw the change to flush(). Trying to work out the correct 
strategy for our IndexWriter handling now. We probably should not 
be using autocommit for our writers.


autoCommit=true is deprecated as of 2.4.0, and will go away when we 
finally get to 3.0, so I think switching to false, and possibly 
changing your app to periodically commit() if you were relying on 
those semantics, is a good step forward.


It was brought up by others that the OutOfMemoryError handling 
requirements are a fairly strong part of the contract now - but 
aren't documented. Do you think the last paragraph below should be 
incorporated into the class JavaDoc?


Well, that paragraph is a workaround for the issue you hit, which 
only applies when autoCommit is true, so going forward (or, if you 
use autoCommit=false) you should simply close the IndexWriter.



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: IllegalStateEx thrown when calling close

2008-10-30 Thread Jed Wesley-Smith
ahh, yes, sorry, the ability to read is occasionally handy... [wipes egg 
off forehead]


cheers,
jed.

Michael McCandless wrote:


Actually, yes in 2.3.2: IndexReader.unlock has existed for a long time.

In 2.4.0, we moved this to IndexWriter.unlock.

Mike

Jed Wesley-Smith wrote:


not in 2.3.2 though.

cheers,
jed.

Michael McCandless wrote:


Or you can use IndexReader.unlock.

Mike

Jed Wesley-Smith wrote:


Michael McCandless wrote:


To workaround this, on catching an OOME on any of IndexWriter's
methods, you should 1) forcibly remove the write lock
(IndexWriter.unlock static method)


IndexWriter.unlock(*) is 2.4 only.

Use the following instead:

 directory.makeLock(IndexWriter.WRITE_LOCK_NAME).release();



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: IllegalStateEx thrown when calling close

2008-10-29 Thread Jed Wesley-Smith

Michael McCandless wrote:


To workaround this, on catching an OOME on any of IndexWriter's
methods, you should 1) forcibly remove the write lock
(IndexWriter.unlock static method) 


IndexWriter.unlock(*) is 2.4 only.

Use the following instead:

   directory.makeLock(IndexWriter.WRITE_LOCK_NAME).release();
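
(For illustration, the whole recovery path might look roughly like this, with directory, analyzer and doc assumed to be in scope and the 2.3-era API in use:)

void indexOne(Directory directory, Analyzer analyzer, Document doc) throws IOException {
    IndexWriter writer = new IndexWriter(directory, analyzer, false);
    try {
        writer.addDocument(doc);
        writer.close();
    } catch (OutOfMemoryError oom) {
        // don't call anything further on the old writer; just drop the write
        // lock so a fresh IndexWriter can be opened, then let the error propagate
        directory.makeLock(IndexWriter.WRITE_LOCK_NAME).release();
        throw oom;
    }
}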

cheers,
jed.

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: IllegalStateEx thrown when calling close

2008-10-29 Thread Jed Wesley-Smith

not in 2.3.2 though.

cheers,
jed.

Michael McCandless wrote:


Or you can use IndexReader.unlock.

Mike

Jed Wesley-Smith wrote:


Michael McCandless wrote:


To workaround this, on catching an OOME on any of IndexWriter's
methods, you should 1) forcibly remove the write lock
(IndexWriter.unlock static method)


IndexWriter.unlock(*) is 2.4 only.

Use the following instead:

  directory.makeLock(IndexWriter.WRITE_LOCK_NAME).release();



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: IllegalStateEx thrown when calling close

2008-10-29 Thread Jed Wesley-Smith

Mike,

regarding this paragraph:

To workaround this, on catching an OOME on any of IndexWriter's
methods, you should 1) forcibly remove the write lock
(IndexWriter.unlock static method) and then 2) not call any methods on
the old writer.  Even if the old writer has concurrent merges running,
they will refuse to commit on seeing that an OOM had occurred.

I'm not sure that an IndexWriter is particularly useful once its hitOOM 
flag is set to true, whether autocommit is true or not. Once it is true 
you can't do anything with it (it reverts to its last commit point and 
stays there) and need to discard it. I am suggesting that this could be 
documented as it is not immediately obvious without coming across it and 
debugging it. That being said, the VM is probably not that useful once 
OOMEs are flying around anyway :-)


cheers,
jed.

Michael McCandless wrote:


Jed Wesley-Smith wrote:

Yeah, I saw the change to flush(). Trying to work out the correct 
strategy for our IndexWriter handling now. We probably should not be 
using autocommit for our writers.


autoCommit=true is deprecated as of 2.4.0, and will go away when we 
finally get to 3.0, so I think switching to false, and possibly 
changing your app to periodically commit() if you were relying on 
those semantics, is a good step forward.


It was brought up by others that the OutOfMemoryError handling 
requirements are a fairly strong part of the contract now - but 
aren't documented. Do you think the last paragraph below should be 
incorporated into the class JavaDoc?


Well, that paragraph is a workaround for the issue you hit, which only 
applies when autoCommit is true, so going forward (or, if you use 
autoCommit=false) you should simply close the IndexWriter.




-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[jira] Commented: (LUCENE-1429) close() throws incorrect IllegalStateEx after IndexWriter hit an OOME when autoCommit is true

2008-10-28 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-1429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12643360#action_12643360
 ] 

Jed Wesley-Smith commented on LUCENE-1429:
--

Thanks Michael, I'll try and work out the best policy for the client code that 
should notice OOME and react appropriately.

 close() throws incorrect IllegalStateEx after IndexWriter hit an OOME when 
 autoCommit is true
 -

 Key: LUCENE-1429
 URL: https://issues.apache.org/jira/browse/LUCENE-1429
 Project: Lucene - Java
  Issue Type: Bug
Affects Versions: 2.3, 2.3.1, 2.3.2, 2.4
Reporter: Michael McCandless
Assignee: Michael McCandless
Priority: Minor
 Fix For: 2.9


 Spinoff from 
 http://www.nabble.com/IllegalStateEx-thrown-when-calling-close-to20201825.html
 When IndexWriter hits an OOME, it records this and then if close() is
 called it calls rollback() instead.  This is a defensive measure, in
 case the OOME corrupted the internal buffered state (added/deleted
 docs).
 But there's a bug: if you opened IndexWriter with autoCommit true,
 close() then incorrectly throws an IllegalStatException.
 This fix is simple: allow rollback to be called even if autoCommit is
 true, internally during close.  (External calls to rollback with
 autoCommmit true is still not allowed).

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



IllegalStateEx thrown when calling close

2008-10-28 Thread Jed Wesley-Smith

All,

We have seen the following stacktrace in production with Lucene 2.3.2:

java.lang.IllegalStateException: abort() can only be called when 
IndexWriter was opened with autoCommit=false

   at org.apache.lucene.index.IndexWriter.abort(IndexWriter.java:2009)
   at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1175)
   at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1154)

This is caused by some IndexWriter method catching an OutOfMemoryError 
previously and then aborting the close.


My question is twofold. Firstly, does it make any sense for this to 
happen (it feels like a bug, shouldn't close not call abort if 
autoCommit=true?). Secondly, is there anything I can do to recover 
(should I call flush()?), or can I just ignore it if autoCommit is true?


Following is a test that reproduces the problem:

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.RAMDirectory;
import org.junit.Assert;
import org.junit.Test;

import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import java.util.concurrent.atomic.AtomicBoolean;

public class TestIndexWriter {

   @Test public void testOutOfMemoryErrorCausesCloseToFail() throws 
Exception {

   final AtomicBoolean throwFirst = new AtomicBoolean(true);
   final IndexWriter writer = new IndexWriter(new RAMDirectory(), 
new StandardAnalyzer()) {

   @Override public void message(final String message) {
   if (message.startsWith("now flush at close") && throwFirst.getAndSet(false)) {

   throw new OutOfMemoryError(message);
   }
   }
   };
   // need to set an info stream so message is called
   writer.setInfoStream(new PrintStream(new 
ByteArrayOutputStream())); //or better, use NullOS from commons-io

   try {
   writer.close();
   Assert.fail("OutOfMemoryError expected");
   }
   catch (final OutOfMemoryError expected) {}

   // throws IllegalStateEx
   writer.close();
   }
}

thanks for any help,
jed.

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: IllegalStateEx thrown when calling close

2008-10-28 Thread Jed Wesley-Smith

Michael,

https://issues.apache.org/jira/browse/LUCENE-1429

Thanks mate. I'll try and work out the client handling policy of the 
IndexWriter calls. I see that flush now aborts the transaction as well...


cheers,
jed.

Michael McCandless wrote:


Woops, you're right: this is a bug.  I'll open an issue, fold in your
nice test case & fix it.  Thanks Jed!

On hitting OOM, IndexWriter marks that its internal state (buffered
documents, deletions) may be corrupt and so it rollsback to the last
commit instead of flushing a new segment.

To workaround this, on catching an OOME on any of IndexWriter's
methods, you should 1) forcibly remove the write lock
(IndexWriter.unlock static method) and then 2) not call any methods on
the old writer.  Even if the old writer has concurrent merges running,
they will refuse to commit on seeing that an OOM had occurred.

Mike



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: IllegalStateEx thrown when calling close

2008-10-28 Thread Jed Wesley-Smith
Yeah, I saw the change to flush(). Trying to work out the correct 
strategy for our IndexWriter handling now. We probably should not be 
using autocommit for our writers.


It was brought up by others that the OutOfMemoryError handling 
requirements are a fairly strong part of the contract now - but aren't 
documented. Do you think the last paragraph below should be incorporated 
into the class JavaDoc?


cheers,
jed.

Michael McCandless wrote:


Sorry I forgot to follow up with the issue, but yup that's the one.

I did also fix IW to disallow flush after it has seen an OOME.

Mike

Jed Wesley-Smith wrote:


Michael,

https://issues.apache.org/jira/browse/LUCENE-1429

Thanks mate. I'll try and work out the client handling policy of the 
IndexWriter calls. I see that flush now aborts the transaction as 
well...


cheers,
jed.

Michael McCandless wrote:


Woops, you're right: this is a bug.  I'll open an issue, fold in your
nice test case & fix it.  Thanks Jed!

On hitting OOM, IndexWriter marks that its internal state (buffered
documents, deletions) may be corrupt and so it rollsback to the last
commit instead of flushing a new segment.

To workaround this, on catching an OOME on any of IndexWriter's
methods, you should 1) forcibly remove the write lock
(IndexWriter.unlock static method) and then 2) not call any methods on
the old writer.  Even if the old writer has concurrent merges running,
they will refuse to commit on seeing that an OOM had occurred.



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[jira] Commented: (LUCENE-1282) Sun hotspot compiler bug in 1.6.0_04/05 affects Lucene

2008-07-11 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-1282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12613018#action_12613018
 ] 

Jed Wesley-Smith commented on LUCENE-1282:
--

Sun has posted their evaluation on the bug above and accepted it as High 
priority.

 Sun hotspot compiler bug in 1.6.0_04/05 affects Lucene
 --

 Key: LUCENE-1282
 URL: https://issues.apache.org/jira/browse/LUCENE-1282
 Project: Lucene - Java
  Issue Type: Bug
  Components: Index
Affects Versions: 2.3, 2.3.1
Reporter: Michael McCandless
Assignee: Michael McCandless
Priority: Minor
 Fix For: 2.4

 Attachments: corrupt_merge_out15.txt, crashtest, crashtest.log, 
 hs_err_pid27359.log


 This is not a Lucene bug.  It's an as-yet not fully characterized Sun
 JRE bug, as best I can tell.  I'm opening this to gather all things we
 know, and to work around it in Lucene if possible, and maybe open an
 issue with Sun if we can reduce it to a compact test case.
 It's hit at least 3 users:
   
 http://mail-archives.apache.org/mod_mbox/lucene-java-user/200803.mbox/[EMAIL 
 PROTECTED]
   
 http://mail-archives.apache.org/mod_mbox/lucene-solr-user/200804.mbox/[EMAIL 
 PROTECTED]
   
 http://mail-archives.apache.org/mod_mbox/lucene-java-user/200805.mbox/[EMAIL 
 PROTECTED]
 It's specific to at least JRE 1.6.0_04 and 1.6.0_05, that affects
 Lucene.  Whereas 1.6.0_03 works OK and it's unknown whether 1.6.0_06
 shows it.
 The bug affects bulk merging of stored fields.  When it strikes, the
 segment produced by a merge is corrupt because its fdx file (stored
 fields index file) is missing one document.  After iterating many
 times with the first user that hit this, adding diagnostics & 
 assertions, it seems that a call to fieldsWriter.addDocument some
 either fails to run entirely, or, fails to invoke its call to
 indexStream.writeLong.  It's as if when hotspot compiles a method,
 there's some sort of race condition in cutting over to the compiled
 code whereby a single method call fails to be invoked (speculation).
 Unfortunately, this corruption is silent when it occurs and only later
 detected when a merge tries to merge the bad segment, or an
 IndexReader tries to open it.  Here's a typical merge exception:
 {code}
 Exception in thread "Thread-10" 
 org.apache.lucene.index.MergePolicy$MergeException: 
 org.apache.lucene.index.CorruptIndexException:
 doc counts differ for segment _3gh: fieldsReader shows 15999 but 
 segmentInfo shows 16000
 at 
 org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:271)
 Caused by: org.apache.lucene.index.CorruptIndexException: doc counts differ 
 for segment _3gh: fieldsReader shows 15999 but segmentInfo shows 16000
 at 
 org.apache.lucene.index.SegmentReader.initialize(SegmentReader.java:313)
 at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:262)
 at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:221)
 at 
 org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3099)
 at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:2834)
 at 
 org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:240)
 {code}
 and here's a typical exception hit when opening a searcher:
 {code}
 org.apache.lucene.index.CorruptIndexException: doc counts differ for segment 
 _kk: fieldsReader shows 72670 but segmentInfo shows 72671
 at 
 org.apache.lucene.index.SegmentReader.initialize(SegmentReader.java:313)
 at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:262)
 at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:230)
 at 
 org.apache.lucene.index.DirectoryIndexReader$1.doBody(DirectoryIndexReader.java:73)
 at 
 org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:636)
 at 
 org.apache.lucene.index.DirectoryIndexReader.open(DirectoryIndexReader.java:63)
 at org.apache.lucene.index.IndexReader.open(IndexReader.java:209)
 at org.apache.lucene.index.IndexReader.open(IndexReader.java:173)
 at 
 org.apache.lucene.search.IndexSearcher.<init>(IndexSearcher.java:48)
 {code}
 Sometimes, adding -Xbatch (forces up front compilation) or -Xint
 (disables compilation) to the java command line works around the
 issue.
 Here are some of the OS's we've seen the failure on:
 {code}
 SuSE 10.0
 Linux phoebe 2.6.13-15-smp #1 SMP Tue Sep 13 14:56:15 UTC 2005 x86_64 
 x86_64 x86_64 GNU/Linux 
 SuSE 8.2
 Linux phobos 2.4.20-64GB-SMP #1 SMP Mon Mar 17 17:56:03 UTC 2003 i686 
 unknown unknown GNU/Linux 
 Red Hat Enterprise Linux Server release 5.1 (Tikanga)
 Linux lab8.betech.virginia.edu 2.6.18-53.1.14.el5 #1

[jira] Commented: (LUCENE-140) docs out of order

2007-01-10 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12463781
 ] 

Jed Wesley-Smith commented on LUCENE-140:
-

Michael, Doron, you guys are legends!

Indeed the problem is using only the IndexWriter with create true to recreate 
the directory. Creating a new Directory with create true does fix the problem. 
The javadoc for this constructor is fairly explicit that it should recreate the 
index for you (no caveat), so I would consider that a bug, but - given that 
head fixes it - not one that requires any action.

Thanks guys for the prompt attention, excellent and thorough analysis.

 docs out of order
 -

 Key: LUCENE-140
 URL: https://issues.apache.org/jira/browse/LUCENE-140
 Project: Lucene - Java
  Issue Type: Bug
  Components: Index
Affects Versions: unspecified
 Environment: Operating System: Linux
 Platform: PC
Reporter: legez
 Assigned To: Michael McCandless
 Attachments: bug23650.txt, corrupted.part1.rar, corrupted.part2.rar, 
 indexing-failure.log, LUCENE-140-2007-01-09-instrumentation.patch


 Hello,
   I can not find out, why (and what) it is happening all the time. I got an
 exception:
 java.lang.IllegalStateException: docs out of order
 at
 org.apache.lucene.index.SegmentMerger.appendPostings(SegmentMerger.java:219)
 at
 org.apache.lucene.index.SegmentMerger.mergeTermInfo(SegmentMerger.java:191)
 at
 org.apache.lucene.index.SegmentMerger.mergeTermInfos(SegmentMerger.java:172)
 at 
 org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:135)
 at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:88)
 at 
 org.apache.lucene.index.IndexWriter.mergeSegments(IndexWriter.java:341)
 at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:250)
 at Optimize.main(Optimize.java:29)
 It happens either in 1.2 and 1.3rc1 (anyway what happened to it? I can not 
 find
 it neither in download nor in version list in this form). Everything seems 
 OK. I
 can search through index, but I can not optimize it. Even worse after this
 exception every time I add new documents and close IndexWriter new segments is
 created! I think it has all documents added before, because of its size.
 My index is quite big: 500.000 docs, about 5gb of index directory.
 It is _repeatable_. I drop index, reindex everything. Afterwards I add a few
 docs, try to optimize and receive above exception.
 My documents' structure is:
   static Document indexIt(String id_strony, Reader reader, String 
 data_wydania,
 String id_wydania, String id_gazety, String data_wstawienia)
 {
 Document doc = new Document();
 doc.add(Field.Keyword("id", id_strony));
 doc.add(Field.Keyword("data_wydania", data_wydania));
 doc.add(Field.Keyword("id_wydania", id_wydania));
 doc.add(Field.Text("id_gazety", id_gazety));
 doc.add(Field.Keyword("data_wstawienia", data_wstawienia));
 doc.add(Field.Text("tresc", reader));
 return doc;
 }
 Sincerely,
 legez

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
https://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[jira] Updated: (LUCENE-140) docs out of order

2007-01-09 Thread Jed Wesley-Smith (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jed Wesley-Smith updated LUCENE-140:


Attachment: indexing-failure.log

 docs out of order
 -

 Key: LUCENE-140
 URL: https://issues.apache.org/jira/browse/LUCENE-140
 Project: Lucene - Java
  Issue Type: Bug
  Components: Index
Affects Versions: unspecified
 Environment: Operating System: Linux
 Platform: PC
Reporter: legez
 Assigned To: Michael McCandless
 Attachments: bug23650.txt, corrupted.part1.rar, corrupted.part2.rar, 
 indexing-failure.log, LUCENE-140-2007-01-09-instrumentation.patch


 Hello,
   I can not find out, why (and what) it is happening all the time. I got an
 exception:
 java.lang.IllegalStateException: docs out of order
 at
 org.apache.lucene.index.SegmentMerger.appendPostings(SegmentMerger.java:219)
 at
 org.apache.lucene.index.SegmentMerger.mergeTermInfo(SegmentMerger.java:191)
 at
 org.apache.lucene.index.SegmentMerger.mergeTermInfos(SegmentMerger.java:172)
 at 
 org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:135)
 at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:88)
 at 
 org.apache.lucene.index.IndexWriter.mergeSegments(IndexWriter.java:341)
 at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:250)
 at Optimize.main(Optimize.java:29)
 It happens either in 1.2 and 1.3rc1 (anyway what happened to it? I can not 
 find
 it neither in download nor in version list in this form). Everything seems 
 OK. I
 can search through index, but I can not optimize it. Even worse after this
 exception every time I add new documents and close IndexWriter new segments is
 created! I think it has all documents added before, because of its size.
 My index is quite big: 500.000 docs, about 5gb of index directory.
 It is _repeatable_. I drop index, reindex everything. Afterwards I add a few
 docs, try to optimize and receive above exception.
 My documents' structure is:
   static Document indexIt(String id_strony, Reader reader, String 
 data_wydania,
 String id_wydania, String id_gazety, String data_wstawienia)
 {
 Document doc = new Document();
 doc.add(Field.Keyword("id", id_strony));
 doc.add(Field.Keyword("data_wydania", data_wydania));
 doc.add(Field.Keyword("id_wydania", id_wydania));
 doc.add(Field.Text("id_gazety", id_gazety));
 doc.add(Field.Keyword("data_wstawienia", data_wstawienia));
 doc.add(Field.Text("tresc", reader));
 return doc;
 }
 Sincerely,
 legez

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
https://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[jira] Commented: (LUCENE-140) docs out of order

2007-01-09 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12463440
 ] 

Jed Wesley-Smith commented on LUCENE-140:
-

Hi Michael,

Thanks for the patch, applied and recreated. Attached is the log.

To be explicit, we are recreating the index via the IndexWriter ctor with the 
create flag set and then completely rebuilding the index. We are not completely 
 deleting the entire directory. There ARE old index files (_*.cfs & _*.del) in 
the directory with updated timestamps that are months old. If I completely 
recreate the directory the problem does go away. This is a fairly trivial 
fix, but we are still investigating as we want to know if this is indeed the 
problem, how we have come to make it prevalent, and what the root cause is.

Thanks for all the help everyone.
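
(For anyone following along, a rough sketch of the difference using the 1.9-era API, with path and analyzer assumed:)

// what we were doing: create=true on the writer rewrites the segments file,
// but stale _*.cfs / _*.del files can be left sitting in the directory
IndexWriter writer = new IndexWriter(path, new StandardAnalyzer(), true);

// what makes the problem go away for us: open the Directory itself with
// create=true so the old index files are cleared before the writer is built
Directory dir = FSDirectory.getDirectory(path, true);
IndexWriter fresh = new IndexWriter(dir, new StandardAnalyzer(), true);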

 docs out of order
 -

 Key: LUCENE-140
 URL: https://issues.apache.org/jira/browse/LUCENE-140
 Project: Lucene - Java
  Issue Type: Bug
  Components: Index
Affects Versions: unspecified
 Environment: Operating System: Linux
 Platform: PC
Reporter: legez
 Assigned To: Michael McCandless
 Attachments: bug23650.txt, corrupted.part1.rar, corrupted.part2.rar, 
 indexing-failure.log, LUCENE-140-2007-01-09-instrumentation.patch


 Hello,
   I can not find out, why (and what) it is happening all the time. I got an
 exception:
 java.lang.IllegalStateException: docs out of order
 at
 org.apache.lucene.index.SegmentMerger.appendPostings(SegmentMerger.java:219)
 at
 org.apache.lucene.index.SegmentMerger.mergeTermInfo(SegmentMerger.java:191)
 at
 org.apache.lucene.index.SegmentMerger.mergeTermInfos(SegmentMerger.java:172)
 at 
 org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:135)
 at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:88)
 at 
 org.apache.lucene.index.IndexWriter.mergeSegments(IndexWriter.java:341)
 at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:250)
 at Optimize.main(Optimize.java:29)
 It happens either in 1.2 and 1.3rc1 (anyway what happened to it? I can not 
 find
 it neither in download nor in version list in this form). Everything seems 
 OK. I
 can search through index, but I can not optimize it. Even worse after this
 exception every time I add new documents and close IndexWriter new segments is
 created! I think it has all documents added before, because of its size.
 My index is quite big: 500.000 docs, about 5gb of index directory.
 It is _repeatable_. I drop index, reindex everything. Afterwards I add a few
 docs, try to optimize and receive above exception.
 My documents' structure is:
   static Document indexIt(String id_strony, Reader reader, String 
 data_wydania,
 String id_wydania, String id_gazety, String data_wstawienia)
 {
 Document doc = new Document();
 doc.add(Field.Keyword("id", id_strony));
 doc.add(Field.Keyword("data_wydania", data_wydania));
 doc.add(Field.Keyword("id_wydania", id_wydania));
 doc.add(Field.Text("id_gazety", id_gazety));
 doc.add(Field.Keyword("data_wstawienia", data_wstawienia));
 doc.add(Field.Text("tresc", reader));
 return doc;
 }
 Sincerely,
 legez

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
https://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[jira] Commented: (LUCENE-140) docs out of order

2007-01-09 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12463470
 ] 

Jed Wesley-Smith commented on LUCENE-140:
-

BTW. We have looked at all the open files referenced by the VM when the 
indexing errors occur, and there does not seem to be any reference to the old 
index segment files, so I am not sure how those files are influencing this 
problem.

 docs out of order
 -

 Key: LUCENE-140
 URL: https://issues.apache.org/jira/browse/LUCENE-140
 Project: Lucene - Java
  Issue Type: Bug
  Components: Index
Affects Versions: unspecified
 Environment: Operating System: Linux
 Platform: PC
Reporter: legez
 Assigned To: Michael McCandless
 Attachments: bug23650.txt, corrupted.part1.rar, corrupted.part2.rar, 
 indexing-failure.log, LUCENE-140-2007-01-09-instrumentation.patch


 Hello,
   I can not find out, why (and what) it is happening all the time. I got an
 exception:
 java.lang.IllegalStateException: docs out of order
 at
 org.apache.lucene.index.SegmentMerger.appendPostings(SegmentMerger.java:219)
 at
 org.apache.lucene.index.SegmentMerger.mergeTermInfo(SegmentMerger.java:191)
 at
 org.apache.lucene.index.SegmentMerger.mergeTermInfos(SegmentMerger.java:172)
 at 
 org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:135)
 at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:88)
 at 
 org.apache.lucene.index.IndexWriter.mergeSegments(IndexWriter.java:341)
 at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:250)
 at Optimize.main(Optimize.java:29)
 It happens either in 1.2 and 1.3rc1 (anyway what happened to it? I can not 
 find
 it neither in download nor in version list in this form). Everything seems 
 OK. I
 can search through index, but I can not optimize it. Even worse after this
 exception every time I add new documents and close IndexWriter new segments is
 created! I think it has all documents added before, because of its size.
 My index is quite big: 500.000 docs, about 5gb of index directory.
 It is _repeatable_. I drop index, reindex everything. Afterwards I add a few
 docs, try to optimize and receive above exception.
 My documents' structure is:
   static Document indexIt(String id_strony, Reader reader, String 
 data_wydania,
 String id_wydania, String id_gazety, String data_wstawienia)
 {
 Document doc = new Document();
 doc.add(Field.Keyword("id", id_strony));
 doc.add(Field.Keyword("data_wydania", data_wydania));
 doc.add(Field.Keyword("id_wydania", id_wydania));
 doc.add(Field.Text("id_gazety", id_gazety));
 doc.add(Field.Keyword("data_wstawienia", data_wstawienia));
 doc.add(Field.Text("tresc", reader));
 return doc;
 }
 Sincerely,
 legez

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
https://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[jira] Commented: (LUCENE-140) docs out of order

2007-01-08 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12463202
 ] 

Jed Wesley-Smith commented on LUCENE-140:
-

Hi Michael,

This is awesome, I have prepared a patched 1.9.1: 
http://jira.atlassian.com/secure/attachment/19390/lucene-core-1.9.1-atlassian-patched-2007-01-09.jar

Unfortunately we don't have a repeatable test for this so we will have to 
distribute to afflicted customers and - well, pray I guess. We have been seeing 
this sporadically in our main JIRA instance http://jira.atlassian.com so we 
will hopefully not observe it now.

We do only use the deleteDocuments(Term) method, so we are not sure whether 
this will truly fix our problem, but we note that that method calls 
deleteDocument(int) based on the TermDocs returned for the Term - and maybe 
they can be incorrect???

Out of interest, apart from changing from 1.4.3 to 1.9.1, in the JIRA 3.7 
release we changed our default merge factor to 4 from 10. We hadn't seen this 
problem before, and suddenly we have had a reasonable number of occurrences. 

 docs out of order
 -

 Key: LUCENE-140
 URL: https://issues.apache.org/jira/browse/LUCENE-140
 Project: Lucene - Java
  Issue Type: Bug
  Components: Index
Affects Versions: unspecified
 Environment: Operating System: Linux
 Platform: PC
Reporter: legez
 Assigned To: Michael McCandless
 Attachments: bug23650.txt, corrupted.part1.rar, corrupted.part2.rar


 Hello,
   I can not find out, why (and what) it is happening all the time. I got an
 exception:
 java.lang.IllegalStateException: docs out of order
 at
 org.apache.lucene.index.SegmentMerger.appendPostings(SegmentMerger.java:219)
 at
 org.apache.lucene.index.SegmentMerger.mergeTermInfo(SegmentMerger.java:191)
 at
 org.apache.lucene.index.SegmentMerger.mergeTermInfos(SegmentMerger.java:172)
 at 
 org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:135)
 at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:88)
 at 
 org.apache.lucene.index.IndexWriter.mergeSegments(IndexWriter.java:341)
 at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:250)
 at Optimize.main(Optimize.java:29)
 It happens either in 1.2 and 1.3rc1 (anyway what happened to it? I can not 
 find
 it neither in download nor in version list in this form). Everything seems 
 OK. I
 can search through index, but I can not optimize it. Even worse after this
 exception every time I add new documents and close IndexWriter new segments is
 created! I think it has all documents added before, because of its size.
 My index is quite big: 500.000 docs, about 5gb of index directory.
 It is _repeatable_. I drop index, reindex everything. Afterwards I add a few
 docs, try to optimize and receive above exception.
 My documents' structure is:
   static Document indexIt(String id_strony, Reader reader, String 
 data_wydania,
 String id_wydania, String id_gazety, String data_wstawienia)
 {
 Document doc = new Document();
 doc.add(Field.Keyword(id, id_strony ));
 doc.add(Field.Keyword(data_wydania, data_wydania));
 doc.add(Field.Keyword(id_wydania, id_wydania));
 doc.add(Field.Text(id_gazety, id_gazety));
 doc.add(Field.Keyword(data_wstawienia, data_wstawienia));
 doc.add(Field.Text(tresc, reader));
 return doc;
 }
 Sincerely,
 legez

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
https://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[jira] Commented: (LUCENE-140) docs out of order

2007-01-08 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12463203
 ] 

Jed Wesley-Smith commented on LUCENE-140:
-

Alas, this doesn't appear to be the problem. We are still getting it, but we do 
at least have a little more info. We added the doc and lastDoc to the 
IllegalArgEx and we are getting very strange numbers:

java.lang.IllegalStateException: docs out of order (-1764 < 0)
at 
org.apache.lucene.index.SegmentMerger.appendPostings([Lorg/apache/lucene/index/SegmentMergeInfo;I)I(SegmentMerger.java:335)
at 
org.apache.lucene.index.SegmentMerger.mergeTermInfo([Lorg/apache/lucene/index/SegmentMergeInfo;I)V(SegmentMerger.java:298)
at 
org.apache.lucene.index.SegmentMerger.mergeTermInfos()V(SegmentMerger.java:272) 
at 
org.apache.lucene.index.SegmentMerger.mergeTerms()V(SegmentMerger.java:236)
at org.apache.lucene.index.SegmentMerger.merge()I(SegmentMerger.java:89)
at 
org.apache.lucene.index.IndexWriter.mergeSegments(II)V(IndexWriter.java:681)
at 
org.apache.lucene.index.IndexWriter.mergeSegments(I)V(IndexWriter.java:658)
at 
org.apache.lucene.index.IndexWriter.maybeMergeSegments()V(IndexWriter.java:646)
at 
org.apache.lucene.index.IndexWriter.addDocument(Lorg/apache/lucene/document/Document;Lorg/apache/lucene/analysis/Analyzer;)V(IndexWriter.java:453)
 
at 
org.apache.lucene.index.IndexWriter.addDocument(Lorg/apache/lucene/document/Document;)V(IndexWriter.java:436)

where doc = -1764 and lastDoc is zero

 docs out of order
 -

 Key: LUCENE-140
 URL: https://issues.apache.org/jira/browse/LUCENE-140
 Project: Lucene - Java
  Issue Type: Bug
  Components: Index
Affects Versions: unspecified
 Environment: Operating System: Linux
 Platform: PC
Reporter: legez
 Assigned To: Michael McCandless
 Attachments: bug23650.txt, corrupted.part1.rar, corrupted.part2.rar


 Hello,
   I can not find out, why (and what) it is happening all the time. I got an
 exception:
 java.lang.IllegalStateException: docs out of order
 at
 org.apache.lucene.index.SegmentMerger.appendPostings(SegmentMerger.java:219)
 at
 org.apache.lucene.index.SegmentMerger.mergeTermInfo(SegmentMerger.java:191)
 at
 org.apache.lucene.index.SegmentMerger.mergeTermInfos(SegmentMerger.java:172)
 at 
 org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:135)
 at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:88)
 at 
 org.apache.lucene.index.IndexWriter.mergeSegments(IndexWriter.java:341)
 at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:250)
 at Optimize.main(Optimize.java:29)
 It happens either in 1.2 and 1.3rc1 (anyway what happened to it? I can not 
 find
 it neither in download nor in version list in this form). Everything seems 
 OK. I
 can search through index, but I can not optimize it. Even worse after this
 exception every time I add new documents and close IndexWriter new segments is
 created! I think it has all documents added before, because of its size.
 My index is quite big: 500.000 docs, about 5gb of index directory.
 It is _repeatable_. I drop index, reindex everything. Afterwards I add a few
 docs, try to optimize and receive above exception.
 My documents' structure is:
   static Document indexIt(String id_strony, Reader reader, String 
 data_wydania,
 String id_wydania, String id_gazety, String data_wstawienia)
 {
 Document doc = new Document();
 doc.add(Field.Keyword(id, id_strony ));
 doc.add(Field.Keyword(data_wydania, data_wydania));
 doc.add(Field.Keyword(id_wydania, id_wydania));
 doc.add(Field.Text(id_gazety, id_gazety));
 doc.add(Field.Keyword(data_wstawienia, data_wstawienia));
 doc.add(Field.Text(tresc, reader));
 return doc;
 }
 Sincerely,
 legez

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
https://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[jira] Commented: (LUCENE-140) docs out of order

2007-01-07 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12462949
 ] 

Jed Wesley-Smith commented on LUCENE-140:
-

We have now seen this in a number of customer sites since upgrading JIRA to use 
Lucene 1.9.1. The JIRA report is here: 
http://jira.atlassian.com/browse/JRA-11861

We only seem to have seen it since the upgrade from 1.4.3 to 1.9.1, we hadn't 
seen it before then.

This is now a major issue for us, it is hitting a number of our customers. I am 
trying to generate a repeatable test for it as a matter of urgency.

As a follow-up we sometimes see the old ArrayIndexOutOfBoundsEx in 
BitVector.get() (BitVector.java:63)

will post more if I find something worth sharing.

 docs out of order
 -

 Key: LUCENE-140
 URL: https://issues.apache.org/jira/browse/LUCENE-140
 Project: Lucene - Java
  Issue Type: Bug
  Components: Index
Affects Versions: unspecified
 Environment: Operating System: Linux
 Platform: PC
Reporter: legez
 Assigned To: Lucene Developers
 Attachments: bug23650.txt, corrupted.part1.rar, corrupted.part2.rar


 Hello,
   I can not find out, why (and what) it is happening all the time. I got an
 exception:
 java.lang.IllegalStateException: docs out of order
 at
 org.apache.lucene.index.SegmentMerger.appendPostings(SegmentMerger.java:219)
 at
 org.apache.lucene.index.SegmentMerger.mergeTermInfo(SegmentMerger.java:191)
 at
 org.apache.lucene.index.SegmentMerger.mergeTermInfos(SegmentMerger.java:172)
 at 
 org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:135)
 at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:88)
 at 
 org.apache.lucene.index.IndexWriter.mergeSegments(IndexWriter.java:341)
 at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:250)
 at Optimize.main(Optimize.java:29)
 It happens either in 1.2 and 1.3rc1 (anyway what happened to it? I can not 
 find
 it neither in download nor in version list in this form). Everything seems 
 OK. I
 can search through index, but I can not optimize it. Even worse after this
 exception every time I add new documents and close IndexWriter new segments is
 created! I think it has all documents added before, because of its size.
 My index is quite big: 500.000 docs, about 5gb of index directory.
 It is _repeatable_. I drop index, reindex everything. Afterwards I add a few
 docs, try to optimize and receive above exception.
 My documents' structure is:
   static Document indexIt(String id_strony, Reader reader, String 
 data_wydania,
 String id_wydania, String id_gazety, String data_wstawienia)
 {
 Document doc = new Document();
 doc.add(Field.Keyword("id", id_strony));
 doc.add(Field.Keyword("data_wydania", data_wydania));
 doc.add(Field.Keyword("id_wydania", id_wydania));
 doc.add(Field.Text("id_gazety", id_gazety));
 doc.add(Field.Keyword("data_wstawienia", data_wstawienia));
 doc.add(Field.Text("tresc", reader));
 return doc;
 }
 Sincerely,
 legez

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
https://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[jira] Commented: (LUCENE-140) docs out of order

2007-01-07 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12462949
 ] 

Jed Wesley-Smith commented on LUCENE-140:
-

We have now seen this in a number of customer sites since upgrading JIRA to use 
Lucene 1.9.1. The JIRA report is here: 
http://jira.atlassian.com/browse/JRA-11861

We only seem to have seen it since the upgrade from 1.4.3 to 1.9.1, we hadn't 
seen it before then.

This is now a major issue for us, it is hitting a number of our customers. I am 
trying to generate a repeatable test for it as a matter of urgency.

As a follow-up we sometimes see the old ArrayIndexOutOfBoundsEx in 
BitVector.get() (BitVector.java:63)

will post more if I find something worth sharing.

 docs out of order
 -

 Key: LUCENE-140
 URL: https://issues.apache.org/jira/browse/LUCENE-140
 Project: Lucene - Java
  Issue Type: Bug
  Components: Index
Affects Versions: unspecified
 Environment: Operating System: Linux
 Platform: PC
Reporter: legez
 Assigned To: Lucene Developers
 Attachments: bug23650.txt, corrupted.part1.rar, corrupted.part2.rar


 Hello,
   I can not find out, why (and what) it is happening all the time. I got an
 exception:
 java.lang.IllegalStateException: docs out of order
 at
 org.apache.lucene.index.SegmentMerger.appendPostings(SegmentMerger.java:219)
 at
 org.apache.lucene.index.SegmentMerger.mergeTermInfo(SegmentMerger.java:191)
 at
 org.apache.lucene.index.SegmentMerger.mergeTermInfos(SegmentMerger.java:172)
 at 
 org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:135)
 at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:88)
 at 
 org.apache.lucene.index.IndexWriter.mergeSegments(IndexWriter.java:341)
 at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:250)
 at Optimize.main(Optimize.java:29)
 It happens in both 1.2 and 1.3rc1 (by the way, what happened to that release? I
 can no longer find it either in the downloads or in the version list in this form).
 Everything seems OK: I can search the index, but I cannot optimize it. Even worse,
 after this exception, every time I add new documents and close the IndexWriter a
 new segment is created! I think it contains all the documents added before,
 judging by its size.
 My index is quite big: 500,000 docs, about 5 GB in the index directory.
 It is _repeatable_: I drop the index and reindex everything. Afterwards I add a
 few docs, try to optimize, and receive the above exception.
 My documents' structure is:
   static Document indexIt(String id_strony, Reader reader, String data_wydania,
       String id_wydania, String id_gazety, String data_wstawienia)
   {
     Document doc = new Document();
     doc.add(Field.Keyword("id", id_strony));
     doc.add(Field.Keyword("data_wydania", data_wydania));
     doc.add(Field.Keyword("id_wydania", id_wydania));
     doc.add(Field.Text("id_gazety", id_gazety));
     doc.add(Field.Keyword("data_wstawienia", data_wstawienia));
     doc.add(Field.Text("tresc", reader));
     return doc;
   }
 Sincerely,
 legez

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
https://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[jira] Commented: (LUCENE-140) docs out of order

2007-01-07 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12462950
 ] 

Jed Wesley-Smith commented on LUCENE-140:
-

We also see an ArrayIndexOutOfBoundsException in the SegmentReader.isDeleted() method:

java.lang.ArrayIndexOutOfBoundsException
at org.apache.lucene.index.SegmentReader.isDeleted(I)Z(Optimized Method)
at org.apache.lucene.index.SegmentMerger.mergeFields()I(Optimized 
Method)
at org.apache.lucene.index.SegmentMerger.merge()I(Optimized Method)
at 
org.apache.lucene.index.IndexWriter.mergeSegments(II)V(IndexWriter.java:681)

 docs out of order
 -

 Key: LUCENE-140
 URL: https://issues.apache.org/jira/browse/LUCENE-140
 Project: Lucene - Java
  Issue Type: Bug
  Components: Index
Affects Versions: unspecified
 Environment: Operating System: Linux
 Platform: PC
Reporter: legez
 Assigned To: Lucene Developers
 Attachments: bug23650.txt, corrupted.part1.rar, corrupted.part2.rar


 Hello,
   I cannot figure out why (and what) is happening all the time. I got an
 exception:
 java.lang.IllegalStateException: docs out of order
 at
 org.apache.lucene.index.SegmentMerger.appendPostings(SegmentMerger.java:219)
 at
 org.apache.lucene.index.SegmentMerger.mergeTermInfo(SegmentMerger.java:191)
 at
 org.apache.lucene.index.SegmentMerger.mergeTermInfos(SegmentMerger.java:172)
 at 
 org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:135)
 at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:88)
 at 
 org.apache.lucene.index.IndexWriter.mergeSegments(IndexWriter.java:341)
 at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:250)
 at Optimize.main(Optimize.java:29)
 It happens in both 1.2 and 1.3rc1 (by the way, what happened to that release? I
 can no longer find it either in the downloads or in the version list in this form).
 Everything seems OK: I can search the index, but I cannot optimize it. Even worse,
 after this exception, every time I add new documents and close the IndexWriter a
 new segment is created! I think it contains all the documents added before,
 judging by its size.
 My index is quite big: 500,000 docs, about 5 GB in the index directory.
 It is _repeatable_: I drop the index and reindex everything. Afterwards I add a
 few docs, try to optimize, and receive the above exception.
 My documents' structure is:
   static Document indexIt(String id_strony, Reader reader, String data_wydania,
       String id_wydania, String id_gazety, String data_wstawienia)
   {
     Document doc = new Document();
     doc.add(Field.Keyword("id", id_strony));
     doc.add(Field.Keyword("data_wydania", data_wydania));
     doc.add(Field.Keyword("id_wydania", id_wydania));
     doc.add(Field.Text("id_gazety", id_gazety));
     doc.add(Field.Keyword("data_wstawienia", data_wstawienia));
     doc.add(Field.Text("tresc", reader));
     return doc;
   }
 Sincerely,
 legez

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
https://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[jira] Commented: (LUCENE-748) Exception during IndexWriter.close() prevents release of the write.lock

2006-12-18 Thread Jed Wesley-Smith (JIRA)
[ 
http://issues.apache.org/jira/browse/LUCENE-748?page=comments#action_12459489 ] 

Jed Wesley-Smith commented on LUCENE-748:
-

I guess, particularly in light of LUCENE-702, that this behavior is OK - and the 
IndexReader.unlock(dir) is a good suggestion. My real problem was that the write 
lock is only removed when the finalize() method eventually runs.

For me, then, the suggestion would be to document the exceptional behavior of the 
close() method (i.e. an exception means that changes haven't been written and the 
write lock is still held) and link to the IndexReader.unlock(Directory) method.
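
To make the suggestion concrete, a minimal sketch of the recovery path being 
proposed (the index location is hypothetical and the calls are as I understand the 
1.9/2.x API, so treat this as illustrative rather than definitive):

import java.io.IOException;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class CloseWithUnlock
{
    public static void closeSafely(Directory dir, IndexWriter writer) throws IOException
    {
        try
        {
            writer.close();
        }
        catch (IOException e)
        {
            // close() failed inside flushRamSegments(): the buffered changes were
            // not written and this writer still holds the write.lock.
            if (IndexReader.isLocked(dir))
            {
                IndexReader.unlock(dir); // forcibly release the stale write.lock
            }
            throw e;
        }
    }

    public static void main(String[] args) throws IOException
    {
        Directory dir = FSDirectory.getDirectory("/tmp/example-index", true); // hypothetical location
        IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(), true);
        closeSafely(dir, writer);
    }
}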

 Exception during IndexWriter.close() prevents release of the write.lock
 ---

 Key: LUCENE-748
 URL: http://issues.apache.org/jira/browse/LUCENE-748
 Project: Lucene - Java
  Issue Type: Bug
Affects Versions: 1.9
 Environment: Lucene 1.4 through 2.1 HEAD (as of 2006-12-14)
Reporter: Jed Wesley-Smith

 After encountering a case of index corruption (see 
 http://issues.apache.org/jira/browse/LUCENE-140), we found that when the close() 
 method encounters an exception in the flushRamSegments() method, the index 
 write.lock is not released (i.e. the writer is not really closed).
 The write lock is only released when the IndexWriter is GC'd and finalize() is 
 called.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[jira] Commented: (LUCENE-748) Exception during IndexWriter.close() prevents release of the write.lock

2006-12-18 Thread Jed Wesley-Smith (JIRA)
[ 
http://issues.apache.org/jira/browse/LUCENE-748?page=comments#action_12459502 ] 

Jed Wesley-Smith commented on LUCENE-748:
-

Awesome, thanks!

 Exception during IndexWriter.close() prevents release of the write.lock
 ---

 Key: LUCENE-748
 URL: http://issues.apache.org/jira/browse/LUCENE-748
 Project: Lucene - Java
  Issue Type: Bug
Affects Versions: 1.9
 Environment: Lucene 1.4 through 2.1 HEAD (as of 2006-12-14)
Reporter: Jed Wesley-Smith
 Assigned To: Michael McCandless

 After encountering a case of index corruption (see 
 http://issues.apache.org/jira/browse/LUCENE-140), we found that when the close() 
 method encounters an exception in the flushRamSegments() method, the index 
 write.lock is not released (i.e. the writer is not really closed).
 The write lock is only released when the IndexWriter is GC'd and finalize() is 
 called.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[jira] Commented: (LUCENE-140) docs out of order

2006-12-14 Thread Jed Wesley-Smith (JIRA)
[ 
http://issues.apache.org/jira/browse/LUCENE-140?page=comments#action_12458669 ] 

Jed Wesley-Smith commented on LUCENE-140:
-

We have seen this one as well. We don't have the same usage as above; we only 
ever delete documents with IndexReader.deleteDocuments(Term).

We are using Lucene 1.9.1.
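
Concretely, the cycle looks roughly like this (the field names and key term are 
hypothetical - a sketch against the 1.9.1 API rather than our production code):

import java.io.IOException;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.store.Directory;

public class UpdateCycle
{
    public static void update(Directory dir, String id, Document doc) throws IOException
    {
        // remove any existing copy of the document by its key term
        IndexReader reader = IndexReader.open(dir);
        try
        {
            reader.deleteDocuments(new Term("id", id));
        }
        finally
        {
            reader.close();
        }

        // re-add the new version; the merges triggered by addDocument() and by
        // close() are where the two stack traces below are thrown
        IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(), false);
        try
        {
            writer.addDocument(doc);
        }
        finally
        {
            writer.close();
        }
    }
}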

It occurs in two places, inside IndexWriter.addDocument():

java.lang.IllegalStateException: docs out of order
at 
org.apache.lucene.index.SegmentMerger.appendPostings([Lorg/apache/lucene/index/SegmentMergeInfo;I)I(Optimized
 Method)
at 
org.apache.lucene.index.SegmentMerger.mergeTermInfo([Lorg/apache/lucene/index/SegmentMergeInfo;I)V(Optimized
 Method)
at org.apache.lucene.index.SegmentMerger.mergeTermInfos()V(Optimized 
Method)
at org.apache.lucene.index.SegmentMerger.mergeTerms()V(Optimized Method)
at org.apache.lucene.index.SegmentMerger.merge()I(Optimized Method)
at 
org.apache.lucene.index.IndexWriter.mergeSegments(II)V(IndexWriter.java:681)
at 
org.apache.lucene.index.IndexWriter.mergeSegments(I)V(IndexWriter.java:658)
at 
org.apache.lucene.index.IndexWriter.maybeMergeSegments()V(IndexWriter.java:646)
at 
org.apache.lucene.index.IndexWriter.addDocument(Lorg/apache/lucene/document/Document;Lorg/apache/lucene/analysis/Analyzer;)V(IndexWriter.java:453)
at 
org.apache.lucene.index.IndexWriter.addDocument(Lorg/apache/lucene/document/Document;)V(IndexWriter.java:436)

and inside IndexWriter.close():

java.lang.IllegalStateException: docs out of order
at 
org.apache.lucene.index.SegmentMerger.appendPostings([Lorg/apache/lucene/index/SegmentMergeInfo;I)I(Optimized
 Method)
at 
org.apache.lucene.index.SegmentMerger.mergeTermInfo([Lorg/apache/lucene/index/SegmentMergeInfo;I)V(Optimized
 Method)
at org.apache.lucene.index.SegmentMerger.mergeTermInfos()V(Optimized 
Method)
at org.apache.lucene.index.SegmentMerger.mergeTerms()V(Optimized Method)
at org.apache.lucene.index.SegmentMerger.merge()I(Optimized Method)
at 
org.apache.lucene.index.IndexWriter.mergeSegments(II)V(IndexWriter.java:681)
at 
org.apache.lucene.index.IndexWriter.mergeSegments(I)V(IndexWriter.java:658)
at 
org.apache.lucene.index.IndexWriter.flushRamSegments()V(IndexWriter.java:628)
at org.apache.lucene.index.IndexWriter.close()V(IndexWriter.java:375)

The second one exposes a problem in the close() method: the index write.lock is 
not released when an exception is thrown in close(), causing subsequent attempts 
to open an IndexWriter to fail.

 docs out of order
 -

 Key: LUCENE-140
 URL: http://issues.apache.org/jira/browse/LUCENE-140
 Project: Lucene - Java
  Issue Type: Bug
  Components: Index
Affects Versions: unspecified
 Environment: Operating System: Linux
 Platform: PC
Reporter: legez
 Assigned To: Lucene Developers
 Attachments: bug23650.txt, corrupted.part1.rar, corrupted.part2.rar


 Hello,
   I cannot figure out why (and what) is happening all the time. I got an
 exception:
 java.lang.IllegalStateException: docs out of order
 at
 org.apache.lucene.index.SegmentMerger.appendPostings(SegmentMerger.java:219)
 at
 org.apache.lucene.index.SegmentMerger.mergeTermInfo(SegmentMerger.java:191)
 at
 org.apache.lucene.index.SegmentMerger.mergeTermInfos(SegmentMerger.java:172)
 at 
 org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:135)
 at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:88)
 at 
 org.apache.lucene.index.IndexWriter.mergeSegments(IndexWriter.java:341)
 at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:250)
 at Optimize.main(Optimize.java:29)
 It happens in both 1.2 and 1.3rc1 (by the way, what happened to that release? I
 can no longer find it either in the downloads or in the version list in this form).
 Everything seems OK: I can search the index, but I cannot optimize it. Even worse,
 after this exception, every time I add new documents and close the IndexWriter a
 new segment is created! I think it contains all the documents added before,
 judging by its size.
 My index is quite big: 500,000 docs, about 5 GB in the index directory.
 It is _repeatable_: I drop the index and reindex everything. Afterwards I add a
 few docs, try to optimize, and receive the above exception.
 My documents' structure is:
   static Document indexIt(String id_strony, Reader reader, String data_wydania,
       String id_wydania, String id_gazety, String data_wstawienia)
   {
     Document doc = new Document();
     doc.add(Field.Keyword("id", id_strony));
     doc.add(Field.Keyword("data_wydania", data_wydania));
     doc.add(Field.Keyword("id_wydania", id_wydania));
     doc.add(Field.Text("id_gazety", id_gazety));
     doc.add(Field.Keyword("data_wstawienia", data_wstawienia

[jira] Created: (LUCENE-748) Exception during IndexWriter.close() prevents release of the write.lock

2006-12-14 Thread Jed Wesley-Smith (JIRA)
Exception during IndexWriter.close() prevents release of the write.lock
---

 Key: LUCENE-748
 URL: http://issues.apache.org/jira/browse/LUCENE-748
 Project: Lucene - Java
  Issue Type: Bug
Affects Versions: 1.9
 Environment: Lucene 1.4 through 2.1 HEAD (as of 2006-12-14)
Reporter: Jed Wesley-Smith


After encountering a case of index corruption (see 
http://issues.apache.org/jira/browse/LUCENE-140), we found that when the close() 
method encounters an exception in the flushRamSegments() method, the index 
write.lock is not released (i.e. the writer is not really closed).

The write lock is only released when the IndexWriter is GC'd and finalize() is 
called.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[jira] Commented: (LUCENE-681) org.apache.lucene.document.Field is Serializable but doesn't have default constructor

2006-12-10 Thread Jed Wesley-Smith (JIRA)
[ 
http://issues.apache.org/jira/browse/LUCENE-681?page=comments#action_12457253 ] 

Jed Wesley-Smith commented on LUCENE-681:
-

worksforme

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

import org.apache.lucene.document.Field;

public class SerializationTest
{
    public static void main(String[] args) throws Exception
    {
        Field field = new Field("name", "value", Field.Store.YES,
                Field.Index.TOKENIZED);
        System.out.println(field);
        final Object field2 = new SerializationTest().serialize(field);
        System.out.println(field2);
        System.out.println(field == field2); // false: a distinct, deserialized copy
    }

    Object serialize(Object input) throws IOException, ClassNotFoundException
    {
        ByteArrayOutputStream outBytes = new ByteArrayOutputStream();
        ObjectOutputStream outObjects = new ObjectOutputStream(outBytes);
        outObjects.writeObject(input);
        outObjects.flush(); // make sure buffered object data reaches the byte array

        ByteArrayInputStream inBytes = new ByteArrayInputStream(outBytes.toByteArray());
        ObjectInputStream inObjects = new ObjectInputStream(inBytes);
        return inObjects.readObject();
    }
}

It's a final class, dude; what does it need a default constructor for?

Consider closing.

 org.apache.lucene.document.Field is Serializable but doesn't have default 
 constructor
 -

 Key: LUCENE-681
 URL: http://issues.apache.org/jira/browse/LUCENE-681
 Project: Lucene - Java
  Issue Type: Bug
  Components: Other
Affects Versions: 1.9, 2.0.0, 2.1, 2.0.1
 Environment: doesn't depend on environment
Reporter: Elijah Epifanov
Priority: Critical

 When I try to pass a Document over the network or do anything involving 
 serialization/deserialization I get an exception.
 The following patch should help (Field.java):

   public Field () {
   }

   private void writeObject (java.io.ObjectOutputStream out)
       throws IOException {
     out.defaultWriteObject ();
   }

   private void readObject (java.io.ObjectInputStream in)
       throws IOException, ClassNotFoundException {
     in.defaultReadObject ();
     if (name == null) {
       throw new NullPointerException ("name cannot be null");
     }
     this.name = name.intern (); // field names are interned
   }

 Maybe other classes do not conform to serialization requirements either...

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



FSDirectory.close()

2006-10-30 Thread Jed Wesley-Smith

All,

Just a quick question regarding the need to call Directory.close() 
(actually on an FSDirectory) and whether it is really necessary. As far 
as I can tell, the only implication of not calling it is that the refCount is 
not decremented and therefore the FSDirectory will persist for the life of 
the VM. This is not a problem for us, as the directory never changes and 
would stick around anyway.


The way we use the directory is to create IndexReaders and 
IndexWriters from it that are closed directly after use (apart from the Reader 
used for a Searcher, which is closed only when mutative operations are 
made to the index).


At the moment, Directory.close() is NEVER called, not even during 
application shutdown. This doesn't appear to cause any problems, but we 
are just wondering if anybody has seen any issues with not calling it.
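
For concreteness, a rough sketch of this usage pattern (the path and field 
handling are hypothetical, and this is just an outline of what we do rather than 
the actual code):

import java.io.IOException;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class SharedDirectory
{
    // one long-lived Directory for the life of the VM; close() is never called on it
    private final Directory dir;
    private IndexSearcher searcher;

    public SharedDirectory(String path) throws IOException
    {
        dir = FSDirectory.getDirectory(path, false);
        searcher = new IndexSearcher(dir);
    }

    public synchronized void add(Document doc) throws IOException
    {
        // short-lived writer, closed directly after use
        IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(), false);
        try
        {
            writer.addDocument(doc);
        }
        finally
        {
            writer.close();
        }

        // the searcher's reader is reopened only after a mutative operation
        searcher.close();
        searcher = new IndexSearcher(dir);
    }

    public synchronized IndexSearcher getSearcher()
    {
        return searcher;
    }
}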


--
cheers,
- jed.


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Possible exceptions using IndexReader IndexWriter

2006-09-17 Thread Jed Wesley-Smith

all,

 We're just wondering if anyone has seen any exceptions when using the 
 IndexWriter.addDocument(...) or IndexReader.deleteDocuments(Term term) 
 methods, apart from catastrophic IOExceptions (disk full/failed, etc.).


 Is it possible, for instance, that we may be able to create a document 
 that causes an exception when written? It doesn't seem to be possible, 
 and we've never seen it happen, but we just wanted to check with everyone 
 to see if anyone knows of such things.
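
 For illustration only, one hypothetical way a document itself could make 
 addDocument(...) throw is a Reader-backed field whose Reader fails part-way 
 through tokenization - a sketch, not something we have actually observed:

import java.io.IOException;
import java.io.Reader;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.FSDirectory;

public class FailingReaderExample
{
    public static void main(String[] args) throws IOException
    {
        // a Reader that fails while the field content is being read
        Reader failing = new Reader()
        {
            public int read(char[] buf, int off, int len) throws IOException
            {
                throw new IOException("simulated failure while reading field content");
            }

            public void close()
            {
            }
        };

        Document doc = new Document();
        doc.add(Field.Text("body", failing)); // hypothetical field name

        IndexWriter writer = new IndexWriter(
                FSDirectory.getDirectory("/tmp/example-index", true), // hypothetical path
                new StandardAnalyzer(), true);
        try
        {
            // the analyzer consumes the Reader here, so the IOException propagates
            writer.addDocument(doc);
        }
        finally
        {
            writer.close();
        }
    }
}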


--
cheers,
- jed.


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Upgrade from 1.4.3 to 1.9.1. Any problems with using existing index files?

2006-08-25 Thread Jed Wesley-Smith

Hello all,

We are upgrading from Lucene 1.4.3 to 1.9.1, and have many customers 
with large existing index files. In our testing we have reused large 
indexes created with 1.4.3 under 1.9.1 without incident. We have looked 
through the changelog and the code and can't see any reason there should 
be any problems doing so.


So, we're just wondering, has anyone had any problems, or is there 
anything we need to look out for?


--
cheers,
- jed.


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]