I have no problem using someone else's logging infrastructure.

The only sort-of requirement comes from the fact that I've always hated the 
overhead of logging: to create a good log message you end up doing a bunch of 
work, then you pass it all to the logger, which says "not at the log level 
where that is needed" and throws it all away.

The reason for the logging macro is to lower that overhead. Imagine a call like

    log(SomeLevel, formatStringExpr, arg1Expr, arg2Expr, ....)

where those "...Expr" things are in fact expressions, perhaps with some cost to 
look up the offending values, etc. They may access lazy vals that have to be 
computed, for example.

You really want this to behave as if this were what was written:

    if (SomeLevel >= LoggingLevel)
      log(formatStringExpr, arg1Expr, arg2Expr, ....)

So that none of the cost of computing the arg expressions is incurred unless 
you are at a log level where they are needed.

That's all the macro does: it just hoists the if test above the evaluation of 
all those expressions.
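
To make that concrete, here is a minimal sketch of the effect (not Daffodil's 
actual macro or API; LogLevel, Logger, and expensiveLookup are made-up names). 
It uses by-name parameters instead of a macro, but it defers the argument 
expressions in the same way:

    // Sketch only: illustrates "hoist the level test above argument evaluation"
    // using by-name parameters rather than a macro. These names are hypothetical
    // stand-ins, not Daffodil's real logging API.
    object LogLevel extends Enumeration {
      val Debug, Info, Warning, Error = Value
    }

    class Logger(var loggingLevel: LogLevel.Value) {
      // `format` and `args` are by-name parameters, so none of the argument
      // expressions are evaluated unless the level test passes.
      def log(level: LogLevel.Value, format: => String, args: => Seq[Any]): Unit = {
        if (level >= loggingLevel) {
          println(format.format(args: _*))
        }
      }
    }

    object LazyLoggingExample {
      def expensiveLookup(): String = {
        println("expensive computation ran!") // never prints in the call below
        "some offending state"
      }

      def main(cmdLineArgs: Array[String]): Unit = {
        val logger = new Logger(LogLevel.Warning)
        // Debug is below the Warning threshold, so expensiveLookup() never runs.
        logger.log(LogLevel.Debug, "parser state was %s", Seq(expensiveLookup()))
      }
    }

The macro version keeps the ordinary varargs call syntax and inlines the if 
test at each call site, but the cost profile is the same: a call at a disabled 
level costs only the level comparison.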

We can certainly still do that even if the underlying logger is one of the 
conventional ones popular in the Java world.
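
For instance (again just a sketch, with slf4j standing in as one hypothetical 
choice of backend, not a decision about which library to adopt), the wrapper 
can delegate the level test to the underlying logger and keep the argument 
expressions by-name:

    // Sketch only: the same lazy-argument idea layered over slf4j, used here
    // purely as an example of a conventional Java-world logging backend.
    import org.slf4j.{Logger, LoggerFactory}

    object LazyLog {
      private val underlying: Logger = LoggerFactory.getLogger("daffodil")

      // Argument expressions are by-name, so they are evaluated only if the
      // backend reports that debug logging is enabled.
      def debug(format: => String, args: => Seq[Any]): Unit = {
        if (underlying.isDebugEnabled) {
          underlying.debug(format.format(args: _*))
        }
      }
    }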


________________________________
From: Steve Lawrence <slawre...@apache.org>
Sent: Wednesday, April 28, 2021 8:22 AM
To: dev@daffodil.apache.org <dev@daffodil.apache.org>
Subject: Re: flakey windows CI build? Or real issue?

Maybe we should consider dropping our own logging implementation and use
some existing logging library. Other people have put a lot more time and
thought into logging than we have. And I don't think Daffodil has any
special logging requirements that other loggers don't already have.

Thoughts?


On 4/27/21 7:28 PM, Beckerle, Mike wrote:
> Logging is highly suspect for race conditions, in my view.
>
> This whole design is completely non-thread-safe and just doesn't make sense. 
> I think "with Logging" was just copied as a pattern from place to place.
>
> I just created https://issues.apache.org/jira/browse/DAFFODIL-2510 for this 
> issue.
> ________________________________
> From: Beckerle, Mike <mbecke...@owlcyberdefense.com>
> Sent: Tuesday, April 27, 2021 3:28 PM
> To: dev@daffodil.apache.org <dev@daffodil.apache.org>
> Subject: Re: flakey windows CI build? Or real issue?
>
> This one line:
>
> [error] Test org.apache.daffodil.example.TestScalaAPI.testScalaAPI2 failed: 
> expected:<0> but was:<1>, took 0.307 sec
>
> For that test to fail an assertEquals, but only on one platform, and not 
> reproducibly, is very disconcerting.
>
> The test has exactly 3 assertEquals that compare against 0.
>
>   @Test
>   def testScalaAPI2(): Unit = {
>     val lw = new LogWriterForSAPITest()
>
>     Daffodil.setLogWriter(lw)
>     Daffodil.setLoggingLevel(LogLevel.Info)
>
>     ...
>
>     val res = dp.parse(input, outputter)
>
>    ...
>     assertEquals(0, lw.errors.size)
>     assertEquals(0, lw.warnings.size)
>     assertEquals(0, lw.others.size)
>
>     // reset the global logging state
>     Daffodil.setLogWriter(new ConsoleLogWriter())
>     Daffodil.setLoggingLevel(LogLevel.Info)
>   }
>
> So this test is failing sporadically because of something being written to 
> the logWriter (lw) that wasn't before.
>
> ________________________________
> From: Interrante, John A (GE Research, US) <john.interra...@ge.com>
> Sent: Tuesday, April 27, 2021 2:47 PM
> To: dev@daffodil.apache.org <dev@daffodil.apache.org>
> Subject: flakey windows CI build? Or real issue?
>
> Once you drill down into and expand the "Run Unit Tests" log, GitHub lets you 
> search that log with a magnifying lens icon and input search text box above 
> the log.  Searching for "failed:" makes it easier to find the specific 
> failures.  I found one failure and three warnings:
>
> [error] Test org.apache.daffodil.example.TestScalaAPI.testScalaAPI2 failed: 
> expected:<0> but was:<1>, took 0.307 sec
>
> [warn] Test assumption in test 
> org.apache.daffodil.usertests.TestSepTests.test_sep_ssp_never_1 failed: 
> org.junit.AssumptionViolatedException: (Implementation: daffodil) Test 
> 'test_sep_ssp_never_1' not compatible with implementation., took 0.033 sec
> [warn] Test assumption in test 
> org.apache.daffodil.usertests.TestSepTests.test_sep_ssp_never_3 failed: 
> org.junit.AssumptionViolatedException: (Implementation: daffodil) Test 
> 'test_sep_ssp_never_3' not compatible with implementation., took 0.005 sec
> [warn] Test assumption in test 
> org.apache.daffodil.usertests.TestSepTests.test_sep_ssp_never_4 failed: 
> org.junit.AssumptionViolatedException: (Implementation: daffodil) Test 
> 'test_sep_ssp_never_4' not compatible with implementation., took 0.003 sec
>
> Your previous run failed in the Windows Java 11 build's Compile step with a 
> http 504 error when sbt was trying to fetch artifacts:
>
> [error] 
> lmcoursier.internal.shaded.coursier.error.FetchError$DownloadingArtifacts: 
> Error fetching artifacts:
> [error] 
> https://repo.scala-sbt.org/scalasbt/sbt-plugin-releases/com.typesafe.sbt/sbt-native-packager/scala_2.12/sbt_1.0/1.8.1/jars/sbt-native-packager.jar:
>  download error: Caught java.io.IOException: Server returned HTTP response 
> code: 504 for URL: 
> https://repo.scala-sbt.org/scalasbt/sbt-plugin-releases/com.typesafe.sbt/sbt-native-packager/scala_2.12/sbt_1.0/1.8.1/jars/sbt-native-packager.jar
>  (Server returned HTTP response code: 504 for URL: 
> https://repo.scala-sbt.org/scalasbt/sbt-plugin-releases/com.typesafe.sbt/sbt-native-packager/scala_2.12/sbt_1.0/1.8.1/jars/sbt-native-packager.jar)
>  while downloading 
> https://repo.scala-sbt.org/scalasbt/sbt-plugin-releases/com.typesafe.sbt/sbt-native-packager/scala_2.12/sbt_1.0/1.8.1/jars/sbt-native-packager.jar
>
> That error probably is just a flaky network or server problem.
>
> John
>
> -----Original Message-----
> From: Steve Lawrence <slawre...@apache.org>
> Sent: Tuesday, April 27, 2021 2:17 PM
> To: dev@daffodil.apache.org
> Subject: EXT: Re: flakey windows CI build? Or real issue?
>
> I haven't seen test failures in a while; the only thing I've noticed is 
> GitHub Actions just stalling with no output.
>
> In the linked PR, I see the error:
>
> [error] Test org.apache.daffodil.example.TestScalaAPI.testScalaAPI2
> failed: expected:<0> but was:<1>, took 0.307 sec
>
> I wonder if these isAtEnd changes have introduced a race condition, or made 
> an existing race condition more likely to get hit?
>
> On 4/27/21 2:13 PM, Beckerle, Mike wrote:
>> My PR keeps failing to build on Windows. E.g., this failed the Windows
>> Java 8 build:
>> https://github.com/mbeckerle/daffodil/actions/runs/789865909
>> <https://github.com/mbeckerle/daffodil/actions/runs/789865909>
>>
>> Earlier today it failed the Windows Java 11 build.
>>
>> The errors were different. The earlier one was in daffodil-io; in the
>> checks linked above it's in daffodil-sapi.
>>
>> In neither case is there an [error] identifying the specific failing test,
>> only a summary at the end indicating there were failures in that module.
>>
>> Is any of this expected behavior? Of late I've mostly seen all 6 standard CI
>> checks passing on others' PRs.
>>
>>
>> Mike Beckerle | Principal Engineer
>>
>> mbecke...@owlcyberdefense.com
>>
>> P +1-781-330-0412
>>
>
>
