Re: ApacheCon CFP

2014-02-18 Thread Nicholas Williams
Unfortunately, they have wait-listed ALL FOUR of our presentations. It's rather 
maddening how they kept emailing us saying "your project isn't represented" and 
"not enough people have submitted CFPs," and now they've decided that our 
project won't be represented. 

Nick

Sent from my iPhone, so please forgive brief replies and frequent typos

 On Feb 2, 2014, at 1:00, Gary Gregory garydgreg...@gmail.com wrote:
 
 On Sat, Feb 1, 2014 at 9:40 PM, Remko Popma remko.po...@gmail.com wrote:
 Well done guys!
 
 +1
 
 G 
 
 On Sunday, February 2, 2014, Nick Williams nicho...@nicholaswilliams.net 
 wrote:
 Matt and I coordinated off-list today and got our presentation proposals 
 submitted to ApacheCon. Assuming they're all accepted, Log4j will have four 
 presentations representing it. Go us!
 
 Nick
 
 
 On Jan 31, 2014, at 11:25 PM, Nick Williams wrote:
 
  Matt, sorry for the delay. I've had a bad cold today. Ick. Anyway, my 
  proposals are below. I haven't submitted them yet. Haven't seen your 
  proposals yet--can you get them to me ASAP? If you decide you're 
  uncomfortable committing to two presentations, I'm prepared to also take 
  on Extending Log4j 2: Writing Custom Appenders, Filters, and Layouts 
  and leave you to deal with just the first/intro one. I've been using the 
  following guidelines from the ApacheCon website:
 
   • Choose a submission type (Presentation, Panel, BoFs, Tutorial)
   • Choose the category for your proposal (Developer, Operations, 
  Business/Legal, Wildcard)
   • Provide a biography, including your previous speaking experience 
  (900 characters maximum).
   • Provide us with an abstract about what you will be presenting at 
  the event (900 characters maximum).
   • Describe who the audience is and what you expect them to gain 
  from your presentation (900 characters maximum).
   • Tell us how the content of your presentation will help better the 
  Apache and open source ecosystem. (900 characters maximum).
   • Select the experience level (Beginner, Intermediate, Advanced, 
  Any).
   • List any technical requirements that you have for your 
  presentation over and above the standard projector, screen and wireless 
  Internet.
 
  ---
  Title: Log4j 2 in Web Applications: A Deeper Look at Effective Java EE 
  Logging
  Experience: Intermediate
 
  Abstract: The newly released Log4j 2 includes much greater support for Java 
  EE web applications than previous versions, with proper initialization and 
  deinitialization of the framework tied to the application lifecycle. The 
  Servlet and JSP specifications have changed significantly in the 12 years 
  since Log4j 1.2 was first released. Some of those changes make logging 
  easier, and some of them make it harder. In this presentation you will 
  learn how to properly configure Log4j in a web application, what to do 
  when the container is using Log4j, how to log within your JSPs using the 
  Log4j tag library, and what to do when handling requests asynchronously.
 
  Audience Gain: The audience will gain a better understanding of the 
  lifecycle and class loader hierarchy of Java EE web applications and how 
  they affect the lifecycle and configuration of Log4j. They'll take a look 
  at some of the different ways to initialize and configure Log4j and learn 
  when each approach is appropriate and--more importantly--when it's not. 
  They'll explore some of the pitfalls of asynchronous request handling and 
  learn about the important tools that Log4j provides to help and the steps 
  they must take to keep logging working. Finally, they'll see that logging 
  in JSPs is easy, too, and doesn't require a single line of Java code.
 
  Benefit: This is one in a series of (hopefully) four presentations on Log4j 
  led by the Apache Logging community. These presentations will benefit the 
  community by providing exposure for the new version of Log4j, explaining 
  its benefits and strengths over other frameworks, and encouraging Log4j 
  users to improve the framework and contribute those improvements back to 
  the community. I am submitting two presentations and Matt Sicker is 
  submitting the other two. For the most part their order doesn't matter, but 
  Matt's "An Intro to Log4j 2.0: A New Generation of Apache Logging" should 
  happen earlier on the schedule than the other three.
  ---
 
  ---
  Title: Logging to Relational and NoSQL Databases with Log4j 2
  Experience: Intermediate
 
  Abstract: The newly-released Log4j 2 contains a number of different 
  appenders to help you deliver log events to the storage device you 
  desire. Among those are the JDBCAppender, JPAAppender, and NoSQLAppender, 
  allowing you to store your log events in essentially any database you can 
  imagine. While very powerful, these appenders require more knowledge and 
  care to configure than standard file appenders with the PatternLayout. In 
  this presentation you will learn 

Re: ApacheCon CFP

2014-02-18 Thread Nicholas Williams
I have just been told that a couple of our presentations were mis-marked. 
Please stand by... :-)

Nick

Sent from my iPhone, so please forgive brief replies and frequent typos


Re: [VOTE] Log4j 2.0-rc1 RC2

2014-02-13 Thread Nicholas Williams
Apparently I don't have karma to commit anything on dist, because I can't 
commit my keys OR the rc1 release artifacts. I'm told it's forbidden. Can y'all 
provide me (even temporary) karma to do that? Or does Infra have to get 
involved?

Nick

Sent from my iPhone, so please forgive brief replies and frequent typos

 On Feb 13, 2014, at 3:39, Christian Grobmeier grobme...@gmail.com wrote:
 
 On 12 Feb 2014, at 20:01, Nick Williams wrote:
 What matters is what is in https://www.apache.org/dist/logging/KEYS
 
 If someone will provide me instructions, I can put my keys there.
 
 Instructions are at the top of the file:
 https://www.apache.org/dist/logging/KEYS
 
 also, following through the links I posted earlier, you get to 
 https://www.apache.org/dev/release-signing.html
 
 I've read those pages until I was blue in the face :-) but neither helps me 
 at all. Exporting my keys is *easy*. I'm looking for instructions to edit 
 the KEYS file. I don't know where it is or whether I should edit it via SVN 
 or SSH or what.
 
 
 oh haha :) Sorry for misunderstanding!
 
 Please check the file out from:
 https://dist.apache.org/repos/dist/release/logging/
 
 Unfortunately you'll checkout the old releases as well.
 You *should* have karma to commit your key to the file directly.
 
 From there your KEY will be spread to the mirrors (including the link above).
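For reference, Christian's steps can be sketched as shell commands. The key ID is a placeholder, and the exact append format is the one described at the top of the KEYS file itself:

```shell
# Check out the release area (this also pulls the old releases, as noted above).
svn checkout https://dist.apache.org/repos/dist/release/logging/ logging-dist
cd logging-dist

# Append your public key to the KEYS file; YOUR_KEY_ID is a placeholder.
(gpg --list-sigs YOUR_KEY_ID && gpg --armor --export YOUR_KEY_ID) >> KEYS

# Commit the change; from there it propagates to the mirrors.
svn commit -m "Add release manager key" KEYS
```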
 
 Cheers
 
 
 Nick
 
 Gary
 
 Cheers
 
 
 Nick
 
 If the RM's key is not there, the files cannot be verified.
 
 I am guessing you added Nick's key to your keystore?
 
 Gary
 
 
 On 2/12/14, Gary Gregory garydgreg...@gmail.com wrote:
 On Wed, Feb 12, 2014 at 11:44 AM, Nick Williams 
 nicho...@nicholaswilliams.net wrote:
 
 
 On Feb 12, 2014, at 10:29 AM, Nick Williams wrote:
 
 
 On Feb 12, 2014, at 10:15 AM, Gary Gregory wrote:
 
 Nick,
 
 You've got to add your key to the project KEYS file, in this case the
 Log4j project's KEYS file as referenced from
 https://logging.apache.org/log4j/2.x/download.html
 
 
 Okay. The ASF tech folks never told me that. How do I edit that file?
 
 
 Interesting: You don't have a key in that file. Additionally, all the
 keys
 in that file are expired.
 
 
 Well, now's a good time to find all this out! ;)
 
 Gary
 
 
 
 Verifying sigs and hashes is a step in the voting process AFAIK.
 
 
 Agreed. My question was, "Does this vote need to be canceled?" followed
 by, "Does that mean someone on the PMC needs to change their vote from +1
 to -1?" because we already have the necessary votes to release.
 
 If you follow the links from my previous messages, you'll find all the
 information you need about signing, keys, using PGP/GPG and so on.
 
 
 Okay. My problem was I couldn't find any information anywhere (including
 in the links you sent me) that tells me how to determine whether a PGP key
 is RSA or DSA and what its strength is. Through some deductive reasoning, I
 THINK when you see 1024R or 2048R it means 1024-bit RSA or 2048-bit RSA,
 respectively, and likewise 1024D and 2048D mean 1024-bit DSA and
 2048-bit DSA, respectively. IF I'm correct about this, Christian is still
 using a 1024-bit DSA key.
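As a sanity check on the 1024R/2048R reading: gpg itself can show a key's algorithm and size. These are standard gpg invocations, not ASF-specific; the output format differs between gpg 1.x and 2.x:

```shell
# gpg 1.x prints e.g. "pub   2048R/DEADBEEF" (R = RSA, D = DSA);
# gpg 2.x prints e.g. "pub   rsa2048" or "pub   dsa1024" instead.
gpg --list-keys --keyid-format long

# Machine-readable listing: in each "pub" record, field 3 is the key
# length in bits and field 4 is the algorithm number (1 = RSA, 17 = DSA).
gpg --list-keys --with-colons
```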
 
 In the meantime, I'll generate a new key.
 
 Nick
 
 
 Gary
 
 
 On Wed, Feb 12, 2014 at 10:57 AM, Nick Williams 
 nicho...@nicholaswilliams.net wrote:
 
 I'm guessing the public key wasn't found because you didn't import it.
 
 I don't know why I would have generated a DSA key. That doesn't make any
 sense. Unfortunately, I can't even figure out how to VIEW the contents
 of
 my own GPG public key to see what's in it. All I've been able to find is
 how to list my keys and view their fingerprints, so I can't see whether
 any
 of them are RSA or DSA or what strength they are. Anyone have any
 suggestions?
 
 What I DO know is that, before I could become a committer, the ASF tech
 people required me to generate a key and upload it to a public site. I
 uploaded it to http://pgp.mit.edu/. They went and looked at the key and
 told me that it looked good and that I had been approved for
 committership.
 I assumed that meant the key was sufficient for ASF code signing
 purposes.
 I would think they would have told me that it was DSA and not strong
 enough. :-[
 
 On the other hand, it's possible these instructions have changed in the
 last year and I just wasn't aware of it.
 
 Does this vote need to be canceled? Technically speaking, we already
 have
 3 PMC votes, so I THINK that means a PMC member who has already voted +1
 would HAVE to change their vote to -1, but I may misunderstand that
 rule.
 
 Nick
 
 On Feb 12, 2014, at 9:01 AM, Gary Gregory wrote:
 
 Has anyone verified the signatures of all the files?
 
 I am guessing not, the first one I try fails:
 
 gpg --verify log4j-1.2-api-2.0-rc1.jar.asc
 gpg: Signature made 02/09/14 14:09:30 using DSA key ID ED446286
 gpg: Can't check signature: public key not found
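For what it's worth, the "public key not found" failure above only means the signer's key isn't in the local keyring; importing it first lets the verification proceed (the key ID and file names are the ones from the transcript above):

```shell
# Fetch the signer's public key from a public keyserver, then re-verify.
gpg --keyserver pgp.mit.edu --recv-keys ED446286
gpg --verify log4j-1.2-api-2.0-rc1.jar.asc log4j-1.2-api-2.0-rc1.jar
```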
 
 Also, we are NOT supposed to use DSA keys per
 https://www.apache.org/dev/release-signing.html
 
 Under Important: 

Re: [VOTE] Log4j 2.0-rc1 RC2

2014-02-10 Thread Nicholas Williams
I think it's very clearly a yes. The legal page says code with the MIT license 
can be included in ASF projects. 

However, I have a suggestion for the next release that will make this whole 
discussion moot: let's use the CDN instead of including the jQuery source code 
in source control. That way we're not hosting the jQuery files, and we don't 
have to worry about RAT reports, NOTICE files, or license info in the POM. 

As for RC1, I don't see any reason it can't proceed. This isn't a 
regression: the jQuery files have been in source control for 14 months without 
incident or complaint.

Nick

Sent from my iPhone, so please forgive brief replies and frequent typos

 On Feb 10, 2014, at 20:20, Gary Gregory garydgreg...@gmail.com wrote:
 
 This should be documented clearly in our build (in this case, in the POM).
 
 Is the jQuery license compatible with ours? 
 
 If you read https://www.apache.org/legal/resolved.html#category-a as a yes, 
 then the files RAT complains about can be excluded from the report. If you 
 read it as a no, then we cannot include jQuery.
 
 It reads like a yes to me.
 
 The next question is: do we need to add jQuery to our NOTICE file?
 
 Gary
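If the exclude-from-the-report route is taken, a minimal sketch of the exclusion, assuming the build runs the apache-rat-plugin from the POM (the paths are the two files RAT flagged):

```xml
<plugin>
  <groupId>org.apache.rat</groupId>
  <artifactId>apache-rat-plugin</artifactId>
  <configuration>
    <excludes>
      <!-- MIT-licensed jQuery, shipped only for the generated site -->
      <exclude>src/site/resources/js/jquery.js</exclude>
      <exclude>src/site/resources/js/jquery.min.js</exclude>
    </excludes>
  </configuration>
</plugin>
```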
 
 
 
 
 
 On Mon, Feb 10, 2014 at 8:46 PM, Remko Popma remko.po...@gmail.com wrote:
 I agree with Nick.
 Didn't we discuss this before, for the beta-9 release (and came to the same 
 conclusion)? 
 
 
 On Tuesday, February 11, 2014, Nick Williams 
 nicho...@nicholaswilliams.net wrote:
 I'm not sure what our policy is, either, but there's nothing we can do 
 about it. We can't modify the license header of those files; that would be 
 in violation of the license under which jQuery is made available. And 
 ceasing to use jQuery would make the site not very good anymore.
 
 Either way, since this is JS for the site and not source code for Log4j, 
 and since those files have been there for a very long time, I certainly 
 don't think it should hold up this release.
 
 Nick
 
 On Feb 10, 2014, at 4:12 PM, Gary Gregory wrote:
 
 RAT complains:
 
 ***
 
 Unapproved licenses:
 
   src/site/resources/js/jquery.js
   src/site/resources/js/jquery.min.js
 
 ***
 I'm not sure what our policy is for this kind of issue.
 
 Gary
 
 
 
 On Mon, Feb 10, 2014 at 2:56 PM, Nick Williams 
 nicho...@nicholaswilliams.net wrote:
 
 On Feb 9, 2014, at 1:56 PM, Nick Williams wrote:
 
 This is a vote to release Log4j 2.0-rc1, the twelfth release of Log4j 
 2.0.
 
 snip /
 
 Please test and cast your votes.
 [x] +1, release the artifacts
 
 [ ] -1, don't release because...
 
 Nick
 
 
 
 -- 
 E-Mail: garydgreg...@gmail.com | ggreg...@apache.org 
 Java Persistence with Hibernate, Second Edition
 JUnit in Action, Second Edition
 Spring Batch in Action
 Blog: http://garygregory.wordpress.com 
 Home: http://garygregory.com/
 Tweet! http://twitter.com/GaryGregory
 
 
 


Re: Author tags in xdocs

2014-02-08 Thread Nicholas Williams
Agreed. 

Nick

Sent from my iPhone, so please forgive brief replies and frequent typos

 On Feb 8, 2014, at 9:05, Remko Popma remko.po...@gmail.com wrote:
 
 Fine with me also. 
 Remko
 
 On Saturday, February 8, 2014, Gary Gregory garydgreg...@gmail.com wrote:
 Fine with me.
 
 Gary
 
 
  Original message 
 From: Christian Grobmeier 
 Date:02/08/2014 09:45 (GMT-05:00) 
 To: Log4J Developers List 
 Subject: Author tags in xdocs 
 
 Hello,
 
 we have a lot of 
 
 <author email="...">...</author>
 
 tags in our docs. Thinking about the board recommendation to remove
 author tags from the source code, I would like to bring up the 
 discussion of removing author tags from the docs too.
 
 I am not very opinionated on this change; I just want to bring it
 up as I was looking at the docs today.
 
 Cheers
 Christian
 
 ---
 http://www.grobmeier.de
 The Zen Programmer: http://bit.ly/12lC6DL
 @grobmeier
 GPG: 0xA5CC90DB
 
 -
 To unsubscribe, e-mail: log4j-dev-unsubscr...@logging.apache.org
 For additional commands, e-mail: log4j-dev-h...@logging.apache.org


Re: Using Custom Levels with a Custom/Wrapper Interface

2014-01-26 Thread Nicholas Williams
Yes, I was saying that. But, unless I'm misunderstanding, Scott doesn't want 
the user to even have to write the interface. He wants them to just configure 
it and have the interface become available magically. I was pointing out that 
there's a disconnect between when the configuration is used (runtime) and when 
the user needs the interface (compile time). 

Unless we provide a code-generation tool for the user to run from the command 
line or from Ant/Maven/Gradle, they're going to have to write the interface 
themselves.

Nick

Sent from my iPhone, so please forgive brief replies and frequent typos

 On Jan 26, 2014, at 22:49, Remko Popma remko.po...@gmail.com wrote:
 
 Nick, I thought that you meant that users would provide their own interface, 
 like this:
 public interface MyLogger extends Logger {
     @LoggingLevel(name = "DIAG")
     void diag(String message);
     // optional other methods
 }
 
 That way, this interface exists at compile time. 
 
 On Monday, January 27, 2014, Nicholas Williams 
 nicho...@nicholaswilliams.net wrote:
 Scott, invokedynamic and javassist...those are all /runtime/ things. The 
 user needs Logger#notice to be available at compile time. Those are not 
 compatible.
 
 Nick
 
 Sent from my iPhone, so please forgive brief replies and frequent typos
 
  On Jan 26, 2014, at 22:37, Scott Deboy scott.de...@gmail.com wrote:
 
  Yes, I would like to declare in the config:
 
  Level: NOTICE, value: 232
 
  And in Java code be able to use logger.notice(some message).
 
  But I think that'd require invokedynamic..which would probably
  require..javassist/ASM?
 
  I'd be ok with anything that's really close to that :)
 
  Scott
 
 
  On 1/26/14, Ralph Goers ralph.go...@dslextreme.com wrote:
  Scott would like users to add a level definition to the logging
  configuration and have everything else happen auto-magically.  That would
  happen at run-time which is a bit late since the methods need to be
  available at compile time.
 
  I believe Scott said he would be fine if users had to do
 
  logger.log(SomeClass.SomeLevel, message);
 
  but even that requires SomeClass to be available at compile time.
 
  So what Scott says he would like and what Nick is proposing are two
  different things.
 
  Ralph
 
 
 
  On Jan 26, 2014, at 8:09 PM, Remko Popma remko.po...@gmail.com wrote:
 
  I actually thought that Nick's idea was the answer to that: users create 
  a
  custom interface, something like this:
 
  public interface MyLogger extends Logger {
      @LoggingLevel(name = "DIAG")
      void diag(String message);
      // optional other methods
  }
 
  They get an instance of this interface by calling:
  LogManager.getCustomLogger(MyLogger.class);
 
  LogManager has access to the processed configuration. The config has
  <Levels><Level name="DIAG" intValue="450"/></Levels> elements. During configuration
  processing, the custom Level instances are created and registered, so on
  the first call to LogManager.getCustomLogger(MyLogger.class), the 
  MyLogger
  instance is created and cached. Also, at this point the annotations are
  parsed to see what Level instance the MyLogger implementation will pass 
  to
  the Logger.log(Level, String) method when the diag method is called.
 
  What is still open in this scenario is how the instance is created. 
  Proxy?
  Or generate source & compile? Or use a byte code library?
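The first of those three options can be illustrated with a `java.lang.reflect.Proxy` sketch. The `MyLogger`, `@LoggingLevel`, and `getCustomLogger` names are taken from the discussion; everything else here (the stand-in `Logger` interface, the `SINK` list, the string-based level) is hypothetical scaffolding, not Log4j API:

```java
import java.lang.annotation.*;
import java.lang.reflect.*;
import java.util.*;

public class CustomLoggerSketch {
    // Hypothetical annotation, as proposed in the thread.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface LoggingLevel { String name(); }

    // Stand-in for the real Logger API; only what the sketch needs.
    public interface Logger { void log(String level, String message); }

    public interface MyLogger extends Logger {
        @LoggingLevel(name = "DIAG")
        void diag(String message);
    }

    // Captures what the backing logger received, so the sketch is testable.
    public static final List<String> SINK = new ArrayList<>();

    // Sketch of LogManager.getCustomLogger: a dynamic proxy that routes
    // @LoggingLevel-annotated methods to log(level, message) at runtime.
    @SuppressWarnings("unchecked")
    public static <T extends Logger> T getCustomLogger(Class<T> iface) {
        Logger backing = (level, message) -> SINK.add(level + ": " + message);
        return (T) Proxy.newProxyInstance(iface.getClassLoader(),
            new Class<?>[] { iface },
            (proxy, method, args) -> {
                LoggingLevel ann = method.getAnnotation(LoggingLevel.class);
                if (ann != null) {                 // custom-level method, e.g. diag()
                    backing.log(ann.name(), (String) args[0]);
                } else {                           // plain log(level, message) call
                    backing.log((String) args[0], (String) args[1]);
                }
                return null;
            });
    }

    public static void main(String[] args) {
        MyLogger logger = getCustomLogger(MyLogger.class);
        logger.diag("connection pool exhausted");
        System.out.println(SINK.get(0)); // DIAG: connection pool exhausted
    }
}
```

A real implementation would also have to handle `toString`/`equals`/`hashCode` on the proxy and cache one instance per interface, but the proxy route avoids both source generation and a byte code library.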
 
  On Monday, January 27, 2014, Ralph Goers ralph.go...@dslextreme.com
  wrote:
  I am going to have to echo what Nick said.  If you can think of a way to
  make
 
  logger.log(SomeClass.SomeLevel, hello world);
 
  work without actually creating SomeClass then please share!
 
  Ralph
 
  On Jan 26, 2014, at 7:45 PM, Nick Williams 
  nicho...@nicholaswilliams.net
  wrote:
 
  It would not be possible to do this strictly through configuration
  because the user needs a compiled interface to code against. Where is
  that compiled interface to come from?
 
  Nick
 
  On Jan 26, 2014, at 9:40 PM, Scott Deboy wrote:
 
  If there is a way to support this strictly through configuration that
  would be ideal.
 
  I'm trying to find a way to remove my request for additional built in
  levels but through configuration instead of adding them ourselves.
 
  Scott
  Scott
 
  On Jan 26, 2014 7:38 PM, Nick Williams 


Re: Enums and Custom Levels

2014-01-25 Thread Nicholas Williams
No, of course, everyone seems to agree that custom levels should be permitted. 
But I never heard agreement on whether we were going the extensible enum route 
or the Level-as-interface route. The camp still seemed to disagree on that.

Nick

Sent from my iPhone, so please forgive brief replies and frequent typos

 On Jan 25, 2014, at 11:20, Ralph Goers ralph.go...@dslextreme.com wrote:
 
 I have not heard anyone disagree with allowing custom Levels.  The 
 disagreement I am hearing is over adding new pre-defined levels.
 
 Ralph
 
 On Jan 25, 2014, at 7:29 AM, Nick Williams nicho...@nicholaswilliams.net 
 wrote:
 
 I may have missed something. Did we decide on an approach? Last I heard, the 
 camp was still split: Some wanted to go with my extensible enum, others 
 wanted to change Level to an interface and make a Levels enum.
 
 So I'm a bit confused. Which implementation are you working on?
 
 Nick
 
 On Jan 25, 2014, at 7:08 AM, Ralph Goers wrote:
 
 I am working on the implementation of custom levels now.  I should have it 
 done today.
 
 Ralph
 
 On Jan 24, 2014, at 7:07 PM, Remko Popma remko.po...@gmail.com wrote:
 
 What is the best way to make progress on the custom levels implementation?
 
 Do we re-open LOG4J-41 or start a fresh Jira ticket? For implementation 
 ideas, do we attach files to Jira, or create a branch?
 
 Remko
 
 On Saturday, January 25, 2014, Gary Gregory garydgreg...@gmail.com 
 wrote:
 On Fri, Jan 24, 2014 at 11:48 AM, Remko Popma remko.po...@gmail.com 
 wrote:
 Gary,
 
 The hard-coded levels were proposed because it seemed that the 
 extensible enum idea raised by Nick was not going to be accepted.
 My original position was that Markers could fulfill the requirement but 
 Nick and yourself made it clear that this was not satisfactory.
 
 With extensible enums and markers off the table it seemed that the 
 hard-coded levels was the only alternative, and discussion ensued about 
 what these levels should be called and what strength they should have.
 
 During this discussion, several people, including me, repeatedly 
 expressed strong reservations about adding pre-defined levels, but by 
 this time I think people were thinking there was no alternative.
 
  It looked like we were getting stuck, with half the group moving in one 
  direction ("add pre-defined levels!") and the other half wanting to move 
  in another direction ("don't add pre-defined levels!"). I asked that we 
  re-review our assumptions and try to reach a solution that would 
  satisfy all users. 
 
 We then decided to explore the option of using extensible enums again. 
 This is still ongoing, but I haven't seen anyone arguing against this 
 idea since we started this thread.
 
 Hard-coded levels and the extensible enum are different solutions to the 
 same problem.
 
 Hello All:
 
  Absolutely not. See my DEFCON example. 
  Talking about an extensible enum is mixing design and implementation; 
  we are talking about 'custom' and/or 'extensible' levels.
 Custom/Extensible levels can be designed to serve one or all of:
 
 - Allow inserting custom levels between built-in levels.
 - Allow for domain specific levels outside of the concept of built-in 
 levels, the DEFCON example.
 - Should the custom levels themselves be extensible?
 
 Gary
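To make the debated "extensible enum" concrete: the pattern is a class whose built-in constants live in a shared registry that callers can also add to, so Gary's DEFCON levels slot in beside the built-ins. The class name `ExtLevel`, the `forName` factory, and the integer strengths are all hypothetical, a sketch of the pattern rather than any proposed Log4j API:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of the "extensible enum" pattern: enum-like constants plus a
// registry that accepts caller-defined levels at any time.
public class ExtLevel implements Comparable<ExtLevel> {
    private static final ConcurrentMap<String, ExtLevel> REGISTRY =
            new ConcurrentHashMap<>();

    // Built-in levels, registered exactly like enum constants would be.
    public static final ExtLevel ERROR = forName("ERROR", 200);
    public static final ExtLevel INFO  = forName("INFO", 400);

    private final String name;
    private final int intLevel;

    protected ExtLevel(String name, int intLevel) {
        this.name = name;
        this.intLevel = intLevel;
    }

    // Creates the level on first use and returns the cached instance
    // afterwards, so DEFCON1..DEFCON5 are "discovered" automatically and
    // == comparisons behave like enum constants. A repeated name keeps
    // its original strength.
    public static ExtLevel forName(String name, int intLevel) {
        return REGISTRY.computeIfAbsent(name, n -> new ExtLevel(n, intLevel));
    }

    public String name() { return name; }
    public int intLevel() { return intLevel; }

    @Override public int compareTo(ExtLevel other) {
        return Integer.compare(intLevel, other.intLevel);
    }
    @Override public String toString() { return name; }
}
```

With this shape, `ExtLevel.forName("DEFCON1", 50)` defines a domain-specific level without touching the built-ins, which is the property both sides of the thread seem to want; the open question is whether `Level` itself should be this class or an interface.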
  
 The extensible enum solution satisfies all of us who are opposed to 
 adding pre-defined levels, while also satisfying the original requirement 
 raised by Nick and yourself. Frankly I don't understand why you would 
 still want the pre-defined levels.
 
 Remko
 
 
 
 On Sat, Jan 25, 2014 at 12:53 AM, Gary Gregory garydgreg...@gmail.com 
 wrote:
 On Thu, Jan 23, 2014 at 10:45 PM, Remko Popma remko.po...@gmail.com 
 wrote:
 Gary, 
 
 I think that's a very cool idea!
 Much more flexible, powerful and elegant than pre-defined levels could 
 ever be. 
 
 As I wrote: I am discussing custom levels here with the understanding 
 that this is a separate topic from what the built-in levels are.
 
 I'm not sure why you want to make the features mutually exclusive. (Some) 
 others agree that these are different features.
 
 I see two topics:
 
  - What are the default levels for a 21st-century logging framework? Do we 
  simply blindly copy Log4j 1? Or do we look at frameworks from different 
  languages and platforms for inspiration?
  - How (not if; I think we all agree) should we allow for custom levels?
 
 Gary
 
 It definitely makes sense to design the extensible enum with this 
 potential usage in mind. 
 
 Remko
 
 
 On Friday, January 24, 2014, Gary Gregory garydgreg...@gmail.com wrote:
 I am discussing custom levels here with the understanding that this is a 
 separate topic from what the built-in levels are. Here is how I convinced 
 myself that custom levels are a “good thing”.
 
  No matter which built-in levels exist, I may want custom levels. For 
  example, I want my app to use the following levels: DEFCON1, DEFCON2, 
  DEFCON3, DEFCON4, and DEFCON5. This might be for one part of my app 

Re: Enums and Custom Levels

2014-01-25 Thread Nicholas Williams
I actually do object, I think. It sounds like a significantly more convoluted 
approach than the extensible enum. With the extensible enum, new levels are 
immediately discovered, serialization works automatically, and extenders don't 
have to do any extra work in the constructor. Why are we making this so 
difficult?

Nick

Sent from my iPhone, so please forgive brief replies and frequent typos

 On Jan 25, 2014, at 15:18, Ralph Goers ralph.go...@dslextreme.com wrote:
 
 Here is what I am implementing:
 
 1. Level is now an interface. This allows the vast majority of existing code 
 to continue to work. 
 2. The current Level enum has been renamed to StdLevel. It implements the 
 Level interface.
 3. A new class named Levels is in the spi package of the API. It contains a 
 ConcurrentMap containing all the registered Levels as well as the static 
 methods that were previously part of the Level enum.
 
 For the most part the conversion to this has been pretty easy.  The most 
 frustrating part was that I had to move the toLevel methods from what was the 
 Level enum to the Levels class as static methods are not allowed in 
 interfaces until Java 8. This meant I had to modify several classes to use 
 Levels.toLevel instead of Level.toLevel.  In addition, a few classes were 
 using the valueOf enum method. Those were converted to use Levels.getLevel.
 
 The few places where Level is actually used as an enum were also pretty easy 
 to handle, as in those cases the custom levels need to be converted to a 
 StdLevel and then that enum is used.
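The three numbered points above can be sketched in a few lines. Only the shape (Level as interface, the old enum renamed StdLevel, a Levels registry in the spi package holding the former static methods) is from Ralph's description; the method bodies, integer values, and fallback behavior here are illustrative assumptions:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Minimal sketch of the design described above.
public class LevelsSketch {
    // 1. Level becomes an interface.
    public interface Level { String name(); int intLevel(); }

    // 2. The former Level enum, renamed StdLevel, implements the interface.
    //    (Enum's final name() satisfies Level.name().)
    public enum StdLevel implements Level {
        ERROR(200), WARN(300), INFO(400), DEBUG(500);
        private final int intLevel;
        StdLevel(int intLevel) { this.intLevel = intLevel; }
        public int intLevel() { return intLevel; }
    }

    // 3. A registry of all registered Levels, carrying the static helpers
    //    that used to live on the enum (toLevel, getLevel).
    public static final class Levels {
        private static final ConcurrentMap<String, Level> REGISTRY =
                new ConcurrentHashMap<>();
        static {
            for (StdLevel l : StdLevel.values()) REGISTRY.put(l.name(), l);
        }
        public static void register(Level level) {
            REGISTRY.putIfAbsent(level.name(), level);
        }
        public static Level getLevel(String name) { return REGISTRY.get(name); }
        // Falls back to a default instead of throwing, matching how the
        // enum's toLevel was used by callers.
        public static Level toLevel(String name, Level defaultLevel) {
            Level l = name == null ? null : REGISTRY.get(name.toUpperCase());
            return l != null ? l : defaultLevel;
        }
    }
}
```

This also shows why the conversion touched so many call sites: `Level.toLevel(...)` and `Level.valueOf(...)` become `Levels.toLevel(...)` and `Levels.getLevel(...)`, since interfaces cannot carry static methods before Java 8.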
 
 Unless anyone objects I plan on committing this later today once I finish it 
 and create some tests and documentation.
 
 Ralph
 
 
 
 On Jan 25, 2014, at 12:49 PM, Nicholas Williams 
 nicho...@nicholaswilliams.net wrote:
 
 No, of course, everyone seems to agree that custom levels should be 
 permitted. But I never heard agreement on whether we were going the 
 extensible enum route or the Level-as-interface route. The camp still seemed 
 to disagree on that.
 
 Nick
 
 Sent from my iPhone, so please forgive brief replies and frequent typos
 
 On Jan 25, 2014, at 11:20, Ralph Goers ralph.go...@dslextreme.com wrote:
 
 I have not heard anyone disagree with allowing custom Levels.  The 
 disagreement I am hearing is over adding new pre-defined levels.
 
 Ralph
 
 On Jan 25, 2014, at 7:29 AM, Nick Williams nicho...@nicholaswilliams.net 
 wrote:
 
 I may have missed something. Did we decide on an approach? Last I heard, 
 the camp was still split: Some wanted to go with my extensible enum, 
 others wanted to change Level to an interface and make a Levels enum.
 
 So I'm a bit confused. Which implementation are you working on?
 
 Nick
 
 On Jan 25, 2014, at 7:08 AM, Ralph Goers wrote:
 
 I am working on the implementation of custom levels now.  I should have 
 it done today.
 
 Ralph
 
 On Jan 24, 2014, at 7:07 PM, Remko Popma remko.po...@gmail.com wrote:
 
 What is the best way to make progress on the custom levels 
 implementation?
 
 Do we re-open LOG4J-41 or start a fresh Jira ticket? For implementation 
 ideas, do we attach files to Jira, or create a branch?
 
 Remko
 
 On Saturday, January 25, 2014, Gary Gregory garydgreg...@gmail.com 
 wrote:
 On Fri, Jan 24, 2014 at 11:48 AM, Remko Popma remko.po...@gmail.com 
 wrote:
 Gary,
 
 The hard-coded levels were proposed because it seemed that the 
 extensible enum idea raised by Nick was not going to be accepted.
 My original position was that Markers could fulfill the requirement 
 but Nick and yourself made it clear that this was not satisfactory.
 
 With extensible enums and markers off the table it seemed that the 
 hard-coded levels was the only alternative, and discussion ensued 
 about what these levels should be called and what strength they should 
 have.
 
 During this discussion, several people, including me, repeatedly 
 expressed strong reservations about adding pre-defined levels, but by 
 this time I think people were thinking there was no alternative.
 
 It looked like we were getting stuck, with half the group moving in 
 one direction (add pre-defined levels!) and the other half wanting 
 to move in another direction (don't add pre-defined levels!). I 
asked that we re-review our assumptions and try to reach a solution that 
would satisfy all users. 
 
 We then decided to explore the option of using extensible enums again. 
 This is still ongoing, but I haven't seen anyone arguing against this 
 idea since we started this thread.
 
 Hard-coded levels and the extensible enum are different solutions to 
 the same problem.
 
 Hello All:
 
 Absolutely not. See my DEFCON example. 
Talking about an extensible enum is mixing design and implementation; 
we are talking about 'custom' and/or 'extensible' levels.
 Custom/Extensible levels can be designed to serve one or all of:
 
 - Allow inserting custom levels between built-in levels.
- Allow for domain-specific levels outside of the concept of built-in 
levels.

Re: Enums and Custom Levels

2014-01-25 Thread Nicholas Williams
Ralph, if you're getting compile errors with that code, A) there's a 
copy-paste/transposition error, or B) there's something wrong with your 
(non-standard?) compiler. Given:

abstract class A { ... }

This is perfectly legal in Java 5+:

A a = new A() { };

That's an anonymous inner class extending A. I think that should clear up 
several of your problems. If your compiler won't compile that, something is 
very wrong.

Removing the ordinal would make it less enum-y. The synchronization could, 
however, be improved. A reentrant read-write lock seems like a better approach 
to me. 
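The read-write-lock idea floated above could look roughly like this. This is my own illustration of the pattern, not the actual patch: lookups share a read lock, while registration takes the exclusive write lock, so concurrent lookups never block each other.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of a level registry guarded by a ReentrantReadWriteLock.
// Class and method names are illustrative.
public class LevelRegistry {
    private static final Map<String, Integer> LEVELS = new HashMap<>();
    private static final ReentrantReadWriteLock LOCK = new ReentrantReadWriteLock();

    // Registration is rare: take the exclusive write lock.
    public static void register(String name, int intLevel) {
        LOCK.writeLock().lock();
        try {
            if (LEVELS.containsKey(name)) {
                throw new IllegalArgumentException("Duplicate Level [" + name + "].");
            }
            LEVELS.put(name, intLevel);
        } finally {
            LOCK.writeLock().unlock();
        }
    }

    // Lookups are frequent: the shared read lock lets them proceed in parallel.
    public static Integer lookup(String name) {
        LOCK.readLock().lock();
        try {
            return LEVELS.get(name);
        } finally {
            LOCK.readLock().unlock();
        }
    }
}
```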

Nick

Sent from my iPhone, so please forgive brief replies and frequent typos

 On Jan 25, 2014, at 22:49, Ralph Goers ralph.go...@dslextreme.com wrote:
 
 As I am working on this I just want to point out a number of issues with the 
 code below:
 
 1. The class is abstract. The static block is doing a bunch of new Level() 
 invocations which obviously generate compile errors on an abstract class.  I 
 had to make it be a non-abstract class.
 2. As I pointed out before there is no way to access the “standard” levels as 
 an enum. I have addressed that.
 3. Although the constructor is synchronized access to the Map is not. Trying 
 to get from the map while a Level is being added will result in a 
 ConcurrentModificationException. I am using a ConcurrentMap instead.
 3. The constructor requires synchronization because it is modifying both the 
 map and the ordinal. However, since this isn’t an enum the ordinal value is 
 of dubious value. Removing that would allow the removal of the 
 synchronization in the constructor. I am considering that but I haven’t done 
 it yet.
 4. Your example of creating the extension shows doing a new Level(). This 
 doesn’t work because a) the class is abstract and b) the constructor is 
 protected. I am leaving the constructor protected so extension will require 
 doing new ExtendedLevel(name, value) and creating a constructor. Not 
 requiring that means applications can do a new Level() anywhere and I am 
 opposed to allowing that.
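Points 3 and 4 above might be sketched like this, assuming a hypothetical CustomLevel/ExtendedLevel pair; the class and method names are illustrative, not the committed code:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch: a ConcurrentMap avoids the ConcurrentModificationException
// risk of an unsynchronized map, putIfAbsent makes duplicate detection
// atomic, and the protected constructor forces applications to subclass
// (new ExtendedLevel(...)) rather than call new Level() anywhere.
public class CustomLevel {
    private static final ConcurrentMap<String, CustomLevel> REGISTRY =
            new ConcurrentHashMap<>();

    private final String name;
    private final int intLevel;

    protected CustomLevel(String name, int intLevel) {
        this.name = name;
        this.intLevel = intLevel;
        // Atomic register-or-fail; discarded if a duplicate exists.
        if (REGISTRY.putIfAbsent(name.toUpperCase(), this) != null) {
            throw new IllegalArgumentException("Duplicate Level [" + name + "].");
        }
    }

    public static CustomLevel toLevel(String name) {
        return REGISTRY.get(name.toUpperCase());
    }

    public String name() { return name; }
    public int intLevel() { return intLevel; }
}

// An application-defined level, as described above: extension requires
// a subclass with its own constructor.
class ExtendedLevel extends CustomLevel {
    static final ExtendedLevel DIAG = new ExtendedLevel("DIAG", 450);

    protected ExtendedLevel(String name, int intLevel) {
        super(name, intLevel);
    }
}
```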
 
 Ralph
 
 On Jan 23, 2014, at 12:42 AM, Nick Williams nicho...@nicholaswilliams.net 
 wrote:
 
 Okay, I finally got a minute to read all of these emails, and...
 
 EVERYBODY FREEZE!
 
 What if I could get you an extensible enum that required no interface 
 changes and no binary-incompatible changes at all? Sound too good to be 
 true? I proposed this months ago (LOG4J2-41) and it got shot down multiple 
 times, but as of now I've heard THREE people say "extensible enum" in this 
 thread, so here it is, an extensible enum:
 
  public abstract class Level implements Comparable<Level>, Serializable {
    public static final Level OFF;
    public static final Level FATAL;
    public static final Level ERROR;
    public static final Level WARN;
    public static final Level INFO;
    public static final Level DEBUG;
    public static final Level TRACE;
    public static final Level ALL;
  
  
    private static final long serialVersionUID = 0L;
    private static final Hashtable<String, Level> map;
    private static final TreeMap<Integer, Level> values;
    private static final Object constructorLock;
  
  
    static {
      // static variables must be constructed in a certain order
      constructorLock = new Object();
      map = new Hashtable<String, Level>();
      values = new TreeMap<Integer, Level>();
      OFF = new Level("OFF", 0) {};
      FATAL = new Level("FATAL", 100) {};
      ERROR = new Level("ERROR", 200) {};
      WARN = new Level("WARN", 300) {};
      INFO = new Level("INFO", 400) {};
      DEBUG = new Level("DEBUG", 500) {};
      TRACE = new Level("TRACE", 600) {};
      ALL = new Level("ALL", Integer.MAX_VALUE) {};
    }
  
  
    private static int ordinals;
  
  
    private final String name;
    private final int intLevel;
    private final int ordinal;
  
  
    protected Level(String name, int intLevel) {
      if (name == null || name.length() == 0)
        throw new IllegalArgumentException("Illegal null Level constant");
      if (intLevel < 0)
        throw new IllegalArgumentException("Illegal Level int less than zero.");
      synchronized (Level.constructorLock) {
        if (Level.map.containsKey(name.toUpperCase()))
          throw new IllegalArgumentException("Duplicate Level constant [" + name + "].");
        if (Level.values.containsKey(intLevel))
          throw new IllegalArgumentException("Duplicate Level int [" + intLevel + "].");
        this.name = name;
        this.intLevel = intLevel;
        this.ordinal = Level.ordinals++;
        Level.map.put(name.toUpperCase(), this);
        Level.values.put(intLevel, this);
      }
    }
  
  
    public int intLevel() {
      return this.intLevel;
    }
  
  
    public boolean isAtLeastAsSpecificAs(final Level level) {
      return this.intLevel <= level.intLevel;
    }
  
  
    public boolean isAtLeastAsSpecificAs(final int level) {
      return this.intLevel <= level;
    }
  
  
    public boolean 

Re: Web Issues, Logging Levels, and GA

2014-01-18 Thread Nicholas Williams
To be clear, here's how I see it (assuming we adopted all levels proposed):

FATAL > ERROR > WARN > CONFIG > INFO > VERBOSE > DEBUG > FINE > TRACE.

CONFIG would map to INFO for slf4j. VERBOSE and FINE would both map to DEBUG.
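The mapping described above can be sketched as a plain function. The level names come from the email; the function itself and any bridging detail beyond what the email states are illustrative assumptions:

```java
// Sketch of the proposed bridge mapping: CONFIG reports as INFO,
// VERBOSE and FINE both report as DEBUG, everything else passes
// through unchanged. Purely illustrative, not the slf4j binding.
public class Slf4jMapping {
    public static String toSlf4jLevel(String proposedLevel) {
        switch (proposedLevel) {
            case "CONFIG":
                return "INFO";
            case "VERBOSE":
            case "FINE":
                return "DEBUG";
            default:
                // FATAL, ERROR, WARN, INFO, DEBUG, TRACE pass through.
                return proposedLevel;
        }
    }
}
```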

My motivation for FINE was similar to your motivation for VERBOSE: DEBUG isn't 
quite enough. In retrospect, I agree more with you that something is needed 
more on the INFO side of DEBUG rather than the TRACE side. That would allow 
DEBUG to be used for what it's really meant for. So I'm fine with VERBOSE 
instead.

My reason for putting CONFIG between INFO and WARN is simple: I ALWAYS want to 
see config-related messages when the application starts, but I don't always 
want to see INFO messages after it starts. And if something re-configures while 
the application is running, I want to see that, too. I've developed the habit 
of logging startup messages as WARNings, which I don't like doing. 

Hope that helps some. 

Nick

Sent from my iPhone from the Las Vegas airport, so please forgive brief replies 
and frequent typos

 On Jan 18, 2014, at 11:21, Ralph Goers ralph.go...@dslextreme.com wrote:
 
 STEP?  No clue what that means.
 
 Gary, if you want to implement VERBOSE between INFO and DEBUG I’m OK with 
 that, but what will that map to in SLF4J, etc.  DEBUG?
 
 And yes, something on the web site should document our recommended usage for 
 levels and markers.
 
 Ralph
 
 
 On Jan 18, 2014, at 10:53 AM, Gary Gregory garydgreg...@gmail.com wrote:
 
 Ah, my view of VERBOSE is that it is _more_ information, hence INFO > 
 VERBOSE > DEBUG; while it sounds like Ralph sees it as more DEBUG data. 
 
 For me DEBUG data is going to be already verbose, even more than 'verbose'.
 
 What is interesting (to me) is that DEBUG is often misused based on this 
 basic mix: debug messages can be for users *and/or* for developers, there is 
 no distinction in the audience. 
 
 For example, as a user, I want to get data to help me debug my configuration 
 and my process. As a developer, I want to debug the code. These can be two 
 very different sets of data. 
 
 But we do not have DEBUG_USER and DEBUG_DEV levels. I would see INFO next to 
 VERBOSE as useful to users. Then DEBUG and TRACE useful for developers. Each 
 app can have its convention of course, but it would be nice to have the 
 distinction available through levels for developers to use.
 
 I see TRACE as method entry and exit type of logging, *very* *low* level 
 stuff. 
 
 We could also have both (ducking for projectiles):
 
 INFO
 VERBOSE
 DEBUG
 STEP
 TRACE
 
 Gary
 
 
 On Sat, Jan 18, 2014 at 12:47 PM, Ralph Goers ralph.go...@dslextreme.com 
 wrote:
 Oops. I just noticed you proposed that VERBOSE be between INFO and DEBUG. 
 Now that I don’t understand. My experience is that VERBOSE is usually more 
 detailed than debug messages, not less.  
 
 Ralph
 
 On Jan 18, 2014, at 9:44 AM, Ralph Goers ralph.go...@dslextreme.com 
 wrote:
 
 I understand the need for CONFIG.  However it isn’t clear to me whether it 
 belongs between INFO and WARN or DEBUG and INFO.  That is because it 
 typically would be used to log configuration during startup.  That doesn’t 
 necessarily imply that you would then want to see all INFO messages as 
 well.  Due to that, it would make more sense to me to make a CONFIG marker.
 
 I don’t really understand the point of FINE or FINER.  
 
 On the other hand, VERBOSE does make a bit more sense, but I’m struggling 
 with how that is any different than TRACE.  I guess the idea is that TRACE 
 is for control flow (entry, exit) and VERBOSE is for more detailed debug 
 messages?  I suppose I can go along with that argument, but again one 
 could just as easily create a VERBOSE marker and attach it to either TRACE 
 or DEBUG.  I guess I wouldn’t object if VERBOSE was added as a Level but 
 I’m not really convinced it is necessary either.
 
 Ralph
 
 
 
 On Jan 18, 2014, at 7:08 AM, Remko Popma remko.po...@gmail.com wrote:
 
 I've always liked Ralph's argument that Markers give users much more 
 flexibility than any predefined Levels. 
 I would prefer to stick to the log4j/slf4j level names.
 
 
 On Sat, Jan 18, 2014 at 10:32 PM, Gary Gregory garydgreg...@gmail.com 
 wrote:
 Interesting, I have been wanting a VERBOSE level between INFO and DEBUG.
 
 See 
 http://mail-archives.apache.org/mod_mbox/logging-log4j-dev/201310.mbox/%3CCACZkXPxNwYbn__CbXUqFhC7e3Q=kee94j+udhe8+6jiubcz...@mail.gmail.com%3E
 
 You'll have to dig a little in that ref to find my proposal, sorry I'm 
 on my phone ATM.
 
 It sounds like we see logging configuration messages differently though. 
 I do not like the name CONFIG because it does not sound like a level to 
 me. Otoh, many command lines have a verbose AND a debug switch. So it 
 makes sense to me to have corresponding levels. 
 
 Gary
 
 
  Original message 
 From: Nick Williams 
 Date:01/17/2014 23:50 (GMT-05:00) 
 To: Log4J Developers List 
 Subject: Web Issues, Logging Levels, and GA 

Re: Question about Log4jServletFilter in core.

2014-01-18 Thread Nicholas Williams
Yes. Next weekend I plan on adding a Servlet context parameter that allows the 
user to disable starting Log4j automatically. That should allow us to keep 
everything in one JAR while supporting both sides of the argument. 

Nick

Sent from my iPhone, so please forgive brief replies and frequent typos

 On Jan 18, 2014, at 10:54, Gary Gregory garydgreg...@gmail.com wrote:
 
 On Sat, Jan 18, 2014 at 12:35 PM, Ralph Goers ralph.go...@dslextreme.com 
 wrote:
 I’ve always had reservations about the servlet 3.0 automatic configuration 
 because if the log4j jar is present it can’t be disabled or be modified by 
 the end user. We’ve had some issues with Spring initialization and now 
 LOG4J2-452 reinforces that.  I would propose that if we want to keep it that 
 we move the minimum amount required into its own jar so that users have a 
 choice as to whether it is automatically initialized.
 
 Am I the only one who feels this way?  Frankly, this and one other issue I 
 plan to work on this weekend are the only things I see as blockers for a GA 
 release.
 
 For me, the fewer jars, the better. Can't this be configured somehow without 
 having to do more jar juggling?
 
 Gary
  
 
 Ralph
 
 On Jan 17, 2014, at 8:25 PM, Nick Williams nicho...@nicholaswilliams.net 
 wrote:
 
 Filter initialization is one of the last things to happen in web app 
 startup. The ServletContainerInitializer sets the threads logger context so 
 that web app startup procedures can use it. The filter's init() method 
 clears it near the end of startup so that it doesn't bleed into another web 
 app.
 
 Then, on web apps shutdown, destruction of filters is one of the first 
 things to happen. The filter's destroy() sets the logger context so that 
 the web app shutdown procedures can use it.
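The lifecycle described above boils down to a ThreadLocal hand-off. The sketch below strips away the Servlet API and uses illustrative method names, not the Log4j web module's actual classes:

```java
// Sketch of the startup/shutdown hand-off Nick describes: the
// container initializer sets a per-thread "logger context" for startup
// code, the filter's init() clears it near the end of startup so it
// does not bleed into another web app, and the filter's destroy()
// restores it first thing at shutdown. Names are illustrative.
public class ContextLifecycleSketch {
    static final ThreadLocal<String> LOGGER_CONTEXT = new ThreadLocal<>();

    // ServletContainerInitializer: make the context available to all
    // startup code running on this thread.
    static void onContainerStartup(String webAppName) {
        LOGGER_CONTEXT.set(webAppName);
    }

    // Filter.init() runs near the end of startup: clear the context.
    static void onFilterInit() {
        LOGGER_CONTEXT.remove();
    }

    // Filter.destroy() runs first at shutdown: restore the context so
    // shutdown procedures can still log against the right configuration.
    static void onFilterDestroy(String webAppName) {
        LOGGER_CONTEXT.set(webAppName);
    }
}
```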
 
 Nick
 
 On Jan 17, 2014, at 10:17 PM, Matt Sicker wrote:
 
 Now I'm not sure if I'm interpreting this correctly, but init() clears the 
 current thread's logger context, and destroy() sets it. What's up with 
 this? Especially since it just gets set and cleared in the doFilter() bit.
 
 -- 
 Matt Sicker boa...@gmail.com
 
 
 
 -- 
 E-Mail: garydgreg...@gmail.com | ggreg...@apache.org 
 Java Persistence with Hibernate, Second Edition
 JUnit in Action, Second Edition
 Spring Batch in Action
 Blog: http://garygregory.wordpress.com 
 Home: http://garygregory.com/
 Tweet! http://twitter.com/GaryGregory


Re: Web Issues, Logging Levels, and GA

2014-01-18 Thread Nicholas Williams
I explained in the email why CONFIG > INFO. Not sure I can explain it any 
better. :-/

To repeat in case you didn't see it:

 My reason for putting CONFIG between INFO and WARN is simple: I ALWAYS want 
 to see config-related messages when the application starts, but I don't 
 always want to see INFO messages after it starts. And if something 
 re-configures while the application is running, I want to see that, too. 
 I've developed the habit of logging startup messages as WARNings, which I 
 don't like doing. 

Nick

Sent from my iPhone from a Southwest jet, so please forgive brief replies and 
frequent typos

 On Jan 18, 2014, at 12:27, Gary Gregory garydgreg...@gmail.com wrote:
 
 My reason for putting CONFIG between INFO and WARN is simple: I ALWAYS want 
 to see config-related messages when the application starts, but I don't 
 always want to see INFO messages after it starts. And if something 
 re-configures while the application is running, I want to see that, too. I've 
 developed the habit of logging startup messages as WARNings, which I don't 
 like doing. 

-
To unsubscribe, e-mail: log4j-dev-unsubscr...@logging.apache.org
For additional commands, e-mail: log4j-dev-h...@logging.apache.org



Re: Question about Log4jServletFilter in core.

2014-01-18 Thread Nicholas Williams
Unfortunately, Log4j simply has no way to guarantee intercepting when an 
asynchronous request wakes up. If the code dispatches the async context to 
another Servlet or to a JSP, it'll trigger the Log4j filter again. If the code 
just writes to the output stream, there's no way to know that happened.

A filter isn't the perfect solution, but it's really our only choice. It 
takes care of _most_ situations—the vast majority of situations, I would argue. 
Developers who use async contexts will have to go to extra effort to make sure 
the logging context is set up. I don't see any way to avoid that. Removing the 
filter would make life more difficult for those devs using non-asynchronous 
requests. 

On the up side, this issue only affects devs using async requests, and even 
in THOSE situations it only really makes an impact with non-standard 
configurations. Typical out-of-the-box configurations don't really NEED the 
logging context set up on each request. 

N

Sent from my iPhone from LAX baggage claim, so please forgive brief replies and 
frequent typos

 On Jan 18, 2014, at 13:56, Matt Sicker boa...@gmail.com wrote:
 
 Guys, I've been reading a little bit about OSGi lately, and that seems like 
 the perfect use case when combined with servlet 3.0. The thing is, making 
 minimal JARs is a lot like making bundles.
 
 The issue I see with async servlets, though, is how to manage the thread 
 local logger context when an async servlet can have multiple threads. The 
 most trivial way to make the proper logger context available _somewhere_ is 
 using request attributes or the servlet context attributes (which is already 
 being used to hold the initializer which holds the logger context anyway). 
 That's the thing, though. With multiple threads in a single web app instance, 
 it's hard to manage state for all those threads without being higher up in 
 the food chain. I don't think implementing this as a filter is the best way 
 to go for servlet 3.0.
 
 
 On 18 January 2014 15:19, Ralph Goers ralph.go...@dslextreme.com wrote:
 I was hoping to start the GA release sooner than that. 
 
 If the servlet context initializer is disabled then the listener should 
 still be allowed.
 
 Ralph
 
 On Jan 18, 2014, at 11:38 AM, Nicholas Williams 
 nicho...@nicholaswilliams.net wrote:
 
 Yes. Next weekend I plan on adding a Servlet context parameter that allows 
 the user to disable starting Log4j automatically. That should allow us to 
 keep everything in one JAR while supporting both sides of the argument. 
 
 Nick
 
 Sent from my iPhone, so please forgive brief replies and frequent typos
 
 On Jan 18, 2014, at 10:54, Gary Gregory garydgreg...@gmail.com wrote:
 
 On Sat, Jan 18, 2014 at 12:35 PM, Ralph Goers 
 ralph.go...@dslextreme.com wrote:
 I’ve always had reservations about the servlet 3.0 automatic 
 configuration because if the log4j jar is present it can’t be disabled or 
 be modified by the end user. We’ve had some issues with Spring 
 initialization and now LOG4J2-452 reinforces that.  I would propose that 
 if we want to keep it that we move the minimum amount required into its 
 own jar so that users have a choice as to whether it is automatically 
 initialized.
 
 Am I the only one who feels this way?  Frankly, this and one other issue 
 I plan to work on this weekend are the only things I see as blockers for 
 a GA release.
 
 For me, the fewer jars, the better. Can't this be configured somehow 
 without having to do more jar juggling?
 
 Gary
  
 
 Ralph
 
 On Jan 17, 2014, at 8:25 PM, Nick Williams 
 nicho...@nicholaswilliams.net wrote:
 
 Filter initialization is one of the last things to happen in web app 
 startup. The ServletContainerInitializer sets the threads logger context 
 so that web app startup procedures can use it. The filter's init() 
 method clears it near the end of startup so that it doesn't bleed into 
 another web app.
 
 Then, on web apps shutdown, destruction of filters is one of the first 
 things to happen. The filter's destroy() sets the logger context so that 
 the web app shutdown procedures can use it.
 
 Nick
 
 On Jan 17, 2014, at 10:17 PM, Matt Sicker wrote:
 
 Now I'm not sure if I'm interpreting this correctly, but init() clears 
 the current thread's logger context, and destroy() sets it. What's up 
 with this? Especially since it just gets set and cleared in the 
 doFilter() bit.
 
 -- 
 Matt Sicker boa...@gmail.com
 
 
 
 -- 
 E-Mail: garydgreg...@gmail.com | ggreg...@apache.org 
 Java Persistence with Hibernate, Second Edition
 JUnit in Action, Second Edition
 Spring Batch in Action
 Blog: http://garygregory.wordpress.com 
 Home: http://garygregory.com/
 Tweet! http://twitter.com/GaryGregory
 
 
 
 -- 
 Matt Sicker boa...@gmail.com


Re: Web Issues, Logging Levels, and GA

2014-01-18 Thread Nicholas Williams
I prefer to avoid markers whenever possible. Unlike levels, markers require 
some amount of configuration to get them to log/not log when desired. They 
don't just work.

N

Sent from my iPhone from LAX baggage claim, so please forgive brief replies and 
frequent typos

 On Jan 18, 2014, at 14:01, Matt Sicker boa...@gmail.com wrote:
 
 Markers all around! No logging levels, just allow markers to have ordinals or 
 bit-flags to allow more flexible filtering.
 
 Sorry, nothing useful to add beyond wild speculations.
 
 
 On 18 January 2014 15:15, Ralph Goers ralph.go...@dslextreme.com wrote:
 Actually, here is how I would prefer it.  Let’s see if it makes sense to 
 anyone else.
 
 FATAL - Hopefully, almost never logged because the system is crashing.
 ERROR - Something affecting the usability of the system occurred.
 WARN - Something not nice, but probably recoverable occurred. May lead to 
 errors later.
 INFO - Something of general interest, but not necessarily significant.
 DIAG or DIAGNOSTIC - Events that can be used by operations or users to 
 diagnose problems in the system.
 DEBUG - Used by developers for internal debugging.
 VERBOSE - Used to log minute details of the system.  As its dictionary 
 definition implies this is extremely chatty.
 TRACE - Adds tracing of method entry and exit, possibly object creation and 
 initialization.
 
 I believe these should be enough for anybody.  I still think CONFIG is a 
 Marker at the INFO level. The advantage of being a Marker is that it can be 
 enabled regardless of its level and enabling it doesn’t imply enabling other 
 levels.
 
 Ralph
 
 
 On Jan 18, 2014, at 1:03 PM, Gary Gregory garydgreg...@gmail.com wrote:
 
 On Sat, Jan 18, 2014 at 2:21 PM, Ralph Goers ralph.go...@dslextreme.com 
 wrote:
 STEP?  No clue what that means.
 
 Gary, if you want to implement VERBOSE between INFO and DEBUG I’m OK with 
 that, but what will that map to in SLF4J, etc.  DEBUG?
 
 Sounds OK, I can see it as debug data, but for users, instead of developers.
 
 Gary 
 
 And yes, something on the web site should document our recommended usage 
 for levels and markers.
 
 Ralph
 
 
 
 On Jan 18, 2014, at 10:53 AM, Gary Gregory garydgreg...@gmail.com wrote:
 
  Ah, my view of VERBOSE is that it is _more_ information, hence INFO > 
  VERBOSE > DEBUG; while it sounds like Ralph sees it as more DEBUG data. 
 
 For me DEBUG data is going to be already verbose, even more than 
 'verbose'.
 
 What is interesting (to me) is that DEBUG is often misused based on this 
 basic mix: debug messages can be for users *and/or* for developers, there 
 is no distinction in the audience. 
 
 For example, as a user, I want to get data to help me debug my 
 configuration and my process. As a developer, I want to debug the code. 
  These can be two very different sets of data. 
 
 But we do not have DEBUG_USER and DEBUG_DEV levels. I would see INFO next 
 to VERBOSE as useful to users. Then DEBUG and TRACE useful for 
 developers. Each app can have its convention of course, but it would be 
 nice to have the distinction available through levels for developers to 
 use.
 
 I see TRACE as method entry and exit type of logging, *very* *low* level 
 stuff. 
 
 We could also have both (ducking for projectiles):
 
 INFO
 VERBOSE
 DEBUG
 STEP
 TRACE
 
 Gary
 
 
 On Sat, Jan 18, 2014 at 12:47 PM, Ralph Goers 
 ralph.go...@dslextreme.com wrote:
 Oops. I just noticed you proposed that VERBOSE be between INFO and 
 DEBUG. Now that I don’t understand. My experience is that VERBOSE is 
 usually more detailed than debug messages, not less.  
 
 Ralph
 
 On Jan 18, 2014, at 9:44 AM, Ralph Goers ralph.go...@dslextreme.com 
 wrote:
 
 I understand the need for CONFIG.  However it isn’t clear to me whether 
 it belongs between INFO and WARN or DEBUG and INFO.  That is because it 
 typically would be used to log configuration during startup.  That 
 doesn’t necessarily imply that you would then want to see all INFO 
 messages as well.  Due to that, it would make more sense to me to make 
 a CONFIG marker.
 
 I don’t really understand the point of FINE or FINER.  
 
 On the other hand, VERBOSE does make a bit more sense, but I’m 
 struggling with how that is any different than TRACE.  I guess the idea 
 is that TRACE is for control flow (entry, exit) and VERBOSE is for more 
 detailed debug messages?  I suppose I can go along with that argument, 
 but again one could just as easily create a VERBOSE marker and attach 
 it to either TRACE or DEBUG.  I guess I wouldn’t object if VERBOSE was 
 added as a Level but I’m not really convinced it is necessary either.
 
 Ralph
 
 
 
 On Jan 18, 2014, at 7:08 AM, Remko Popma remko.po...@gmail.com wrote:
 
 I've always liked Ralph's argument that Markers give users much more 
 flexibility than any predefined Levels. 
 I would prefer to stick to the log4j/slf4j level names.
 
 
 On Sat, Jan 18, 2014 at 10:32 PM, Gary Gregory 
 garydgreg...@gmail.com wrote:
 Interesting, I have been 

Logo Contest on Homepage?

2013-08-17 Thread Nicholas Williams
Umm, guys? Nobody ever put the logo contest on the homepage. No wonder
we only have one submission so far!

I don't have access to do this that I know of. Someone needs to update it ASAP!

Nick

Sent from my iPhone, so please forgive brief replies and frequent typos


smime.p7s
Description: S/MIME cryptographic signature

-
To unsubscribe, e-mail: log4j-dev-unsubscr...@logging.apache.org
For additional commands, e-mail: log4j-dev-h...@logging.apache.org

Re: log4j-dev mail erratic

2013-07-20 Thread Nicholas Williams
Emails have been very snappy for me. Perhaps your provider is marking some
as spam, or delaying some? Then again, you're using Yahoo! mail. I can't
encourage you enough to use someone else.

I will note: a lot of these messages used to be delivered to spam for me
(Gmail). So I created a filter for "to:log4j-dev@logging.apache.org" and told
it to put the emails in a folder and never mark them as spam. I've never had a
problem since, and it keeps my email organized.

Nick

Sent from my iPhone, so please forgive brief replies and frequent typos

On Jul 20, 2013, at 1:37, Remko Popma rem...@yahoo.com wrote:

Recently email from the Log4J Developers List has been acting strange: I
receive messages out of order, sometimes many hours after they have been
sent. I also see replies to emails that I never received... Has anyone else
experienced something similar or is it just me?

Remko


smime.p7s
Description: S/MIME cryptographic signature

-
To unsubscribe, e-mail: log4j-dev-unsubscr...@logging.apache.org
For additional commands, e-mail: log4j-dev-h...@logging.apache.org

LOG4J2-291 and error handling

2013-07-16 Thread Nicholas Williams
If it helps, check out LOG4J2-291. I may be handling errors
incorrectly in the JDBC/JPA/NoSQL appenders. If an exception is thrown
within the *Manager write methods, should I just let that exception
propagate (wrapping it if it's a checked exception that I can't
throw)? Should I wrap all exceptions in some standard exception
(LoggingException or something similar)? Right now I'm catching
exceptions and logging them using the SimpleLogger. I may need to
change that behavior.
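One of the options raised above, wrapping checked exceptions from the Manager write methods in a standard unchecked LoggingException, might look roughly like this. The class and method names are hypothetical sketches, not the resolution of LOG4J2-291:

```java
// Hypothetical: a single unchecked exception type so callers of a
// Manager's write method see one exception regardless of backend.
public class LoggingException extends RuntimeException {
    public LoggingException(String message, Throwable cause) {
        super(message, cause);
    }
}

// Illustrative stand-in for a JDBC/JPA/NoSQL manager: checked
// exceptions are caught and rethrown wrapped, rather than logged
// and swallowed via SimpleLogger.
class DatabaseManagerSketch {
    public void write(String event) {
        try {
            doWrite(event); // may throw a checked exception
        } catch (Exception e) {
            throw new LoggingException("Failed to write log event", e);
        }
    }

    private void doWrite(String event) throws java.sql.SQLException {
        // stand-in for the real database write
        if (event == null) {
            throw new java.sql.SQLException("no event");
        }
    }
}
```

The trade-off is the one the email hints at: propagating a wrapped exception surfaces failures to the caller, while logging via SimpleLogger keeps appender errors from disrupting the application.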

Nick

-
To unsubscribe, e-mail: log4j-dev-unsubscr...@logging.apache.org
For additional commands, e-mail: log4j-dev-h...@logging.apache.org



Re: [PROPOSAL] for the text of an email and front page of the site

2013-05-11 Thread Nicholas Williams
Sounds good to me, for the most part. But perhaps a sooner date? I think it
would be nice if the new logo was in place by then. It seems (to me,
anyway) that we are very close to general availability.

Nick

Sent from my iPhone, so please forgive brief replies and frequent typos

On May 11, 2013, at 20:39, Gary Gregory garydgreg...@gmail.com wrote:

[PROPOSAL for the text of an email and front page of the site]

Log4j 2 logo contest

Log4j 2 will be a major new version of Log4j. To mark this occasion, Log4j
2 will use a new logo. We would like to open participation to the community
to create this new logo.

Attach your entries to the Jira http://...

Each person can submit more than one logo.

Submissions will be accepted until [I pulled this one out of a hat] June
30, 2013. The PMC will then call for a vote on the user and dev mailing
lists.

You must submit your entries under the Apache License version 2.0.

The Log4j team.
---

Thoughts?

Gary

-- 
E-Mail: garydgreg...@gmail.com | ggreg...@apache.org
Java Persistence with Hibernate, Second Editionhttp://www.manning.com/bauer3/
JUnit in Action, Second Edition http://www.manning.com/tahchiev/
Spring Batch in Action http://www.manning.com/templier/
Blog: http://garygregory.wordpress.com
Home: http://garygregory.com/
Tweet! http://twitter.com/GaryGregory


smime.p7s
Description: S/MIME cryptographic signature

-
To unsubscribe, e-mail: log4j-dev-unsubscr...@logging.apache.org
For additional commands, e-mail: log4j-dev-h...@logging.apache.org

Re: [VOTE] Log4j 2.0-beta6 rc2

2013-05-09 Thread Nicholas Williams
:-/ fair enough. But I would definitely like to put the rush on beta7,
like you said. Can someone figure out this bug and go ahead and commit
the change so that if the release is canceled for one reason or
another the bug will already be fixed?

Nick

Sent from my iPhone, so please forgive brief replies and frequent typos

On May 9, 2013, at 0:47, Ralph Goers ralph.go...@dslextreme.com wrote:

 How it works is that
 a) a release cannot be vetoed.
 b) a release must have 3 binding +1 votes (which this does already unless 
 Ivan or Gary were to change their vote)
 c) the release manager (in this case me) takes the issues found into 
 consideration and determines whether a bug is serious enough to warrant 
 redoing a release.

 In this case given that a) this bug apparently is not new, b) it is not a 
 defect in the release contents (license headers, etc) c) this is a beta 
 release and d) it is generally better to release often than to postpone I am 
 not inclined to cancel the release due to this.  That said, just like we had 
 a short release cycle between beta5 and beta6 I see no reason not to figure 
 out what the fix is and release beta7 in a week if need be.

 Ralph

 On May 8, 2013, at 9:24 PM, Remko Popma wrote:

 Nick, good job on finding the issue.

 Just curious, how does it work with new bugs found during the vote?
 Are they always show stoppers or only if the new version introduces a bug 
 that did not exist in the old version?

 Sent from my iPhone

 On 2013/05/09, at 13:06, Nick Williams nicho...@nicholaswilliams.net wrote:

 Well, I hate to do this, but I've gotta -1 (non-binding, of course). beta5 
 and beta6 are both unusable with Spring Framework (I didn't go back any 
 further). Any time an error gets logged through log4j-jcl bridge using 
 Spring, the error below appears in the Tomcat log, masking the error that 
 Spring was trying to log and making it very difficult to figure out what 
 happened. I've also included my configuration file below the stack trace. 
 The root error is happening on Tomcat 6 due to a Spring bug, and that root 
 problem is unimportant. The important problem is the Log4j error that masks 
 it.

 I've created LOG4J2-245 regarding this issue.

 SEVERE: Exception sending context initialized event to listener instance of 
 class org.springframework.web.context.ContextLoaderListener
 java.util.EmptyStackException
  at java.util.Stack.peek(Stack.java:102)
  at 
 org.apache.logging.log4j.core.impl.ThrowableProxy.resolvePackageData(ThrowableProxy.java:339)
  at 
 org.apache.logging.log4j.core.impl.ThrowableProxy.&lt;init&gt;(ThrowableProxy.java:71)
  at 
 org.apache.logging.log4j.core.impl.Log4jLogEvent.&lt;init&gt;(Log4jLogEvent.java:110)
  at 
 org.apache.logging.log4j.core.impl.Log4jLogEvent.&lt;init&gt;(Log4jLogEvent.java:81)
  at 
 org.apache.logging.log4j.core.config.LoggerConfig.createEvent(LoggerConfig.java:423)
  at 
 org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:344)
  at org.apache.logging.log4j.core.Logger.log(Logger.java:110)
  at 
 org.apache.logging.log4j.spi.AbstractLoggerWrapper.log(AbstractLoggerWrapper.java:55)
  at 
 org.apache.logging.log4j.spi.AbstractLogger.error(AbstractLogger.java:539)
  at 
 org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:319)
  at 
 org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:112)
  at 
 org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4765)
  at 
 org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5210)
  at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
  at 
 org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:726)
  at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:702)
  at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:698)
  at org.apache.catalina.startup.HostConfig.manageApp(HostConfig.java:1491)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:491)
  at 
 org.apache.tomcat.util.modeler.BaseModelMBean.invoke(BaseModelMBean.java:300)
  at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
  at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:792)
  at 
 org.apache.catalina.mbeans.MBeanFactory.createStandardContext(MBeanFactory.java:468)
  at 
 org.apache.catalina.mbeans.MBeanFactory.createStandardContext(MBeanFactory.java:415)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at 

Re: Appenders + Managers + Reconfiguration

2013-04-29 Thread Nicholas Williams
Oh! That makes sense! So I should include the connection information
in my FileManager's name so that a new one gets created if the
connection information changes. Right?
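A rough sketch of that idea (simplified, with hypothetical names; not the actual Log4j internals): the manager cache is keyed by name, so embedding the file name or connection information in the name means a changed configuration misses the cache and gets a fresh manager.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch: managers are cached under a name that embeds their
// configuration (file name, host:port, JDBC URL, ...). Same name means the
// existing manager is reused; changed connection info means a new manager.
public class ManagerRegistry {
    private static final Map<String, Object> MANAGERS = new HashMap<>();

    // Return the cached manager for this name, creating one on first use.
    public static synchronized Object getManager(String name) {
        return MANAGERS.computeIfAbsent(name, n -> new Object());
    }

    public static void main(String[] args) {
        Object a = getManager("fileManager:logs/app.log");
        Object b = getManager("fileManager:logs/app.log");   // unchanged config
        Object c = getManager("fileManager:logs/other.log"); // file name changed
        System.out.println(a == b); // true: manager reused
        System.out.println(a == c); // false: new manager created
    }
}
```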

Nick

Sent from my iPhone, so please forgive brief replies and frequent typos

On Apr 29, 2013, at 8:30, Ralph Goers rgo...@apache.org wrote:

 The FileManager includes the file name in its name. Likewise the 
 SocketManager would include the host and port. So if the file name is changed 
 the manager will be stopped.

 Sent from my iPad

 On Apr 28, 2013, at 10:53 PM, Nick Williams nicho...@nicholaswilliams.net 
 wrote:

 Okay. So consider this scenario: Someone has the FileAppender configured to 
 write to file X. They change the configuration to write to file Y. Since the 
 FileManager already exists writing to file X, the FileManager won't be 
 stopped. Its use count will increase to 2 and the newly-configured 
 FileAppender will use it as well. The old FileAppender will stop and the 
 FileManager's use count will decrease back to 1, but it's being used by the 
 new FileAppender. The newly configured FileAppender is still writing to file 
 X, even though the configuration says File Y, because the FileManager hasn't 
 been reloaded.

 Do I understand that incorrectly? Am I missing something?

 Nick

 On Apr 29, 2013, at 12:47 AM, Ralph Goers wrote:

 When a reconfiguration takes place a new configuration is first created. 
 All the Filters, Appenders, etc named in the configuration will be created. 
  However, when the Appender calls getManager() it may find that one already 
 exists with the name it is searching for. If so, that manager's use count 
 will be incremented and the new Appender will reference it.  Then the old 
 configuration is stopped. During this time Appenders will call the release 
 method on the manager and the manager's use count will be decremented.  A 
 manager doesn't actually stop until its use count is decremented to 0.  
 This means database connections, sockets, streams, etc will all continue on 
 into the new Appenders.  If the new configuration no longer has a 
 particular appender then the use count on that manager will decrement to 
 zero and it will be stopped.

 Ralph
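The reference-counted lifecycle Ralph describes can be sketched like this (a simplified, hypothetical model, not the actual Log4j classes): acquiring a manager increments its use count, releasing decrements it, and the manager only really stops when the count reaches zero.

```java
// Sketch of a use-counted manager: getManager()/acquire() bumps the count,
// release() drops it, and the underlying resources (connections, sockets,
// streams) are only closed when no appender references the manager anymore.
public class CountedManager {
    private int useCount;
    private boolean stopped;

    public synchronized CountedManager acquire() {
        useCount++;
        return this;
    }

    public synchronized void release() {
        if (--useCount <= 0) {
            stopped = true; // here a real manager would close its resources
        }
    }

    public synchronized boolean isStopped() {
        return stopped;
    }

    public static void main(String[] args) {
        CountedManager m = new CountedManager().acquire(); // old appender
        m.acquire();  // new configuration's appender found the same manager
        m.release();  // old configuration stops, releasing its reference
        System.out.println(m.isStopped()); // false: still used by new appender
        m.release();  // appender removed from configuration entirely
        System.out.println(m.isStopped()); // true: use count hit zero
    }
}
```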

 On Apr 28, 2013, at 9:52 PM, Nick Williams wrote:

 Guys,

 I'm almost ready to submit a patch for JDBC/JPA/MongoDB/CouchDB, but I'd 
 like a little bit of instruction on something. I want to make sure I'm 
 understanding things correctly, because I'm getting ... strange ... 
 results when reloading configuration in between unit tests.

 So, if you have an Appender that has a Manager, and that appender has 
 written events, and LoggerContext.reconfigure() is called, what is done 
 with the existing Appender and Manager instances? Basically, can someone 
 explain this whole lifecycle? I want to make sure I'm using this pattern 
 right.

 Thanks,

 Nick


 -
 To unsubscribe, e-mail: log4j-dev-unsubscr...@logging.apache.org
 For additional commands, e-mail: log4j-dev-h...@logging.apache.org


Database Appenders

2013-04-25 Thread Nicholas Williams
First, a quick question: do we anticipate the next version to be beta6 or
rc1? Just curious.

I'm currently working on cleaning up compiler warnings throughout the
project and should have that completed soon.

I want to go ahead and get the conversation started about database
appenders. I'd like to see two appenders:

- A JdbcAppender that is capable of logging to any RDBMS for which there is
a JDBC driver.
- A MongoAppender that is capable of logging to a MongoDB database.

The JdbcAppender and MongoAppender would, as far as I can tell, need
properties for mapping all of the possible logging event properties to
table columns (or Mongo equivalent). I don't really see any other way to
accomplish that. We could use layout patterns from the PatternLayout to
achieve this: <column name="columnName" pattern="PatternLayout equivalent pattern" />

I imagine the JdbcAppender having mutually exclusive properties for JDBC
URL/username/password, DataSource JNDI URL, and class.staticFactoryMethod
for obtaining a DataSource.
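Spelled out as configuration, the proposal above might look something like this (element names, attributes, and defaults are all hypothetical at this point; only the PatternLayout conversion patterns are real):

```xml
<JdbcAppender name="database"
              jdbcUrl="jdbc:postgresql://localhost/logs"
              userName="logger" password="secret">
  <!-- one element per table column, each filled from a PatternLayout pattern -->
  <Column name="EVENT_DATE" pattern="%d{yyyy-MM-dd HH:mm:ss.SSS}" />
  <Column name="LEVEL"      pattern="%level" />
  <Column name="LOGGER"     pattern="%logger" />
  <Column name="MESSAGE"    pattern="%message" />
</JdbcAppender>
```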

The MongoAppender would similarly have mutually exclusive properties for
connection information and class.staticFactoryMethod for obtaining a Mongo
instance.

I'd like to take a stab at these after I complete fixing compiler warnings,
and wanted to start getting feedback/ideas and also see if anyone has use
cases for other NoSQL appenders.


Re: Database Appenders

2013-04-25 Thread Nicholas Williams
On Thu, Apr 25, 2013 at 9:51 AM, Gary Gregory garydgreg...@gmail.com wrote:

 On Thu, Apr 25, 2013 at 10:39 AM, Nicholas Williams 
 nicho...@nicholaswilliams.net wrote:

 First, a quick question: do we anticipate the next version to be beta6 or
 rc1? Just curious.


 As long as we are adding features, I'd like to keep rolling Betas. But it
 would also be OK to release 2.0 and add appenders later.

 I tried porting our app to 2.0 a couple of weeks ago but ran into lots of
 issues, so I'll need to take another stab at it in a couple of weeks again.
 We rely on a lot of 1.0 guts so I'll have to think about that some more...


 I'm currently working on cleaning up compiler warnings throughout the
 project and should have that completed soon.


 Great!



 I want to go ahead and get the conversation started about database
 appenders. I'd like to see two appenders:

 - A JdbcAppender that is capable of logging to any RDBMS for which there
 is a JDBC driver.
 - A MongoAppender that is capable of logging to a MongoDB database.


 We should not need a MongoDB appender if there is a JDBC driver for it:
 docs.mongodb.org/ecosystem/drivers/java/


The MongoDB driver is not a JDBC driver. You cannot have real JDBC
drivers that work with NoSQL databases. You can have wrapper drivers that
translate SQL to NoSQL actions, but that would be very inefficient. The
MongoDB driver is really just an API that you must use directly.
Therefore, there must be a separate appender for it.




 The JdbcAppender and MongoAppender would, as far as I can tell, need
 properties for mapping all of the possible logging event properties to
 table columns (or Mongo equivalent). I don't really see any other way to
 accomplish that. We could use layout patterns from the PatternLayout to
 achieve this: <column name="columnName" pattern="PatternLayout equivalent pattern" />


 You can look at Log4J 1 for inspiration. Keep it simple for a start. I
 think version 1 just lets you specify a SQL INSERT instead of using some
 XML for mapping.


I'd like to avoid simply specifying an INSERT if possible. Logging events
that contained user input could result in SQL injection. I will certainly
use Log4j 1 for inspiration.
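The injection risk is easy to demonstrate (an illustrative sketch, not anyone's actual appender code): if the logged message is concatenated into the INSERT, a hostile message rewrites the statement, whereas binding it as a PreparedStatement parameter keeps it inert.

```java
// Why a raw, user-supplied INSERT string is risky: string concatenation lets
// logged user input escape the SQL literal and inject its own commands.
public class InjectionDemo {
    static String naiveInsert(String message) {
        // the message text becomes part of the SQL itself
        return "INSERT INTO LOG (MESSAGE) VALUES ('" + message + "')";
    }

    public static void main(String[] args) {
        String hostile = "x'); DROP TABLE LOG; --";
        String sql = naiveInsert(hostile);
        System.out.println(sql);
        // The hostile input escaped the string literal and injected a command:
        System.out.println(sql.contains("DROP TABLE")); // true
        // A parameterized statement would instead use
        //   "INSERT INTO LOG (MESSAGE) VALUES (?)"
        // and bind the message with PreparedStatement.setString(1, message),
        // so it is never parsed as SQL.
    }
}
```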





 I imagine the JdbcAppender having mutually exclusive properties for JDBC
 URL/username/password, DataSource JNDI URL, and class.staticFactoryMethod
 for obtaining a DataSource.


 Keep it simple for the first cut ;)



 The MongoAppender would similarly have mutually exclusive properties for
 connection information and class.staticFactoryMethod for obtaining a Mongo
 instance.

 I'd like to take a stab at these after I complete fixing compiler
 warnings, and wanted to start getting feedback/ideas and also see if anyone
 has use cases for other NoSQL appenders.


 Search the ML for my note on NoSQL, it looks like there is a JDBC-like API
 for NoSQL DBs.

 Gary

 --
 E-Mail: garydgreg...@gmail.com | ggreg...@apache.org
 Java Persistence with Hibernate, Second 
 Editionhttp://www.manning.com/bauer3/
 JUnit in Action, Second Edition http://www.manning.com/tahchiev/
 Spring Batch in Action http://www.manning.com/templier/
 Blog: http://garygregory.wordpress.com
 Home: http://garygregory.com/
 Tweet! http://twitter.com/GaryGregory



Re: Database Appenders

2013-04-25 Thread Nicholas Williams
On Thu, Apr 25, 2013 at 10:01 AM, Remko Popma rem...@yahoo.com wrote:

 I think this is the link Gary is talking about: (from the wiki)
 Build a NoSQL Appender, maybe with 
 AppScalehttp://wiki.apache.org/logging/AppScale
 : http://appscale.cs.ucsb.edu/datastores.html Inspiration came from the
 log4j1 appender for redis: https://github.com/pavlobaron/log4j2redis


Unless I'm misunderstanding something (very possible), AppScale seems like
a lot of overhead and overkill. However, it should certainly be possible to
create a NoSqlAppender that has a type property where you specify the
database type (Mongo, Redis, etc.).

One thing we need to keep in mind is dependencies: appending to Mongo will
require the org.mongodb:mongo-java-driver artifact (and its dependencies),
Redis will require the redis.clients:jedis artifact (and its dependencies),
etc. So, as far as I see it, we have two options:

A) Have one NoSqlAppender that initially supports MongoDB and can easily be
made to support other NoSQL databases, put it in the log4j-core artifact,
and mark all dependencies as OPTIONAL or PROVIDED (thoughts? opinions?) so
as not to cause using projects to download unnecessary dependencies.

B) Put NoSQL appenders in separate artifacts so that dependencies don't
have to be optional/provided, but as a result each NoSQL will have to have
its own appender and each will result in a separate log4j artifact.

I REALLY don't like option B, but optional/provided dependencies also
require careful, explicit documentation and, my bet, will end up generating lots of
mailing-list emails asking "Why am I getting a NoClassDefFoundError?"
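For option A, marking the drivers optional in the log4j-core POM would look roughly like this (a Maven sketch; the artifact coordinates are the ones named above, the versions are placeholders):

```xml
<dependency>
  <groupId>org.mongodb</groupId>
  <artifactId>mongo-java-driver</artifactId>
  <version><!-- current release --></version>
  <optional>true</optional>
</dependency>
<dependency>
  <groupId>redis.clients</groupId>
  <artifactId>jedis</artifactId>
  <version><!-- current release --></version>
  <optional>true</optional>
</dependency>
```

With `<optional>true</optional>`, consumers of log4j-core do not inherit the driver transitively; anyone using the Mongo or Redis appender must declare the driver themselves, which is exactly the documentation burden discussed above.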

Agree with Gary on keeping things simple. Also agree every new feature
 needs to be in beta for a while to shake out bugs etc. I don't have an
 opinion on whether to include jdbc appenders in the first 2.0 release or
 add them later.

 You know, I was actually thinking to write a tutorial for custom plugin
 developers, called (How to) Write Your Own Darn JdbcAppender! :-)

 Sent from my iPhone

 On 2013/04/25, at 23:51, Gary Gregory garydgreg...@gmail.com wrote:

 On Thu, Apr 25, 2013 at 10:39 AM, Nicholas Williams 
 nicho...@nicholaswilliams.net wrote:

 First, a quick question: do we anticipate the next version to be beta6 or
 rc1? Just curious.


 As long as we are adding features, I'd like to keep rolling Betas. But it
 would also be OK to release 2.0 and add appenders later.

 I tried porting our app to 2.0 a couple of weeks ago but ran into lots of
 issues, so I'll need to take another stab at it in a couple of weeks again.
 We rely on a lot of 1.0 guts so I'll have to think about that some more...


 I'm currently working on cleaning up compiler warnings throughout the
 project and should have that completed soon.


 Great!



 I want to go ahead and get the conversation started about database
 appenders. I'd like to see two appenders:

 - A JdbcAppender that is capable of logging to any RDBMS for which there
 is a JDBC driver.
 - A MongoAppender that is capable of logging to a MongoDB database.


 We should not need a MongoDB appender if there is a JDBC driver for it:
 docs.mongodb.org/ecosystem/drivers/java/



 The JdbcAppender and MongoAppender would, as far as I can tell, need
 properties for mapping all of the possible logging event properties to
 table columns (or Mongo equivalent). I don't really see any other way to
 accomplish that. We could use layout patterns from the PatternLayout to
 achieve this: <column name="columnName" pattern="PatternLayout equivalent pattern" />


 You can look at Log4J 1 for inspiration. Keep it simple for a start. I
 think version 1 just lets you specify a SQL INSERT instead of using some
 XML for mapping.



 I imagine the JdbcAppender having mutually exclusive properties for JDBC
 URL/username/password, DataSource JNDI URL, and class.staticFactoryMethod
 for obtaining a DataSource.


 Keep it simple for the first cut ;)



 The MongoAppender would similarly have mutually exclusive properties for
 connection information and class.staticFactoryMethod for obtaining a Mongo
 instance.

 I'd like to take a stab at these after I complete fixing compiler
 warnings, and wanted to start getting feedback/ideas and also see if anyone
 has use cases for other NoSQL appenders.


 Search the ML for my note on NoSQL, it looks like there is a JDBC-like API
 for NoSQL DBs.

 Gary

 --
 E-Mail: garydgreg...@gmail.com | ggreg...@apache.org
 Java Persistence with Hibernate, Second 
 Editionhttp://www.manning.com/bauer3/
 JUnit in Action, Second Edition http://www.manning.com/tahchiev/
 Spring Batch in Action http://www.manning.com/templier/
 Blog: http://garygregory.wordpress.com
 Home: http://garygregory.com/
 Tweet! http://twitter.com/GaryGregory




Re: Database Appenders

2013-04-25 Thread Nicholas Williams
On Thu, Apr 25, 2013 at 10:43 AM, Gary Gregory garydgreg...@gmail.com wrote:

 On Thu, Apr 25, 2013 at 11:37 AM, Nicholas Williams 
 nicho...@nicholaswilliams.net wrote:


 On Thu, Apr 25, 2013 at 10:01 AM, Remko Popma rem...@yahoo.com wrote:

 I think this is the link Gary is talking about: (from the wiki)
 Build a NoSQL Appender, maybe with 
 AppScalehttp://wiki.apache.org/logging/AppScale
 : http://appscale.cs.ucsb.edu/datastores.html Inspiration came from the
 log4j1 appender for redis: https://github.com/pavlobaron/log4j2redis


 Unless I'm misunderstanding something (very possible), AppScale seems
 like a lot of overhead and overkill. However, it should certainly be
 possible to create a NoSqlAppender that has a type property where you
 specify the database type (Mongo, Redis, etc.).

 One thing we need to keep in mind is dependencies: appending to Mongo
 will require the org.mongodb:mongo-java-driver artifact (and its
 dependencies), Redis will require the redis.clients:jedis artifact (and its
 dependencies), etc. So, as far as I see it, we have two options:

 A) Have one NoSqlAppender that initially supports MongoDB and can easily
 be made to support other NoSQL databases, put it in the log4j-core
 artifact, and mark all dependencies as OPTIONAL or PROVIDED (thoughts?
 opinions?) so as not to cause using projects to download unnecessary
 dependencies.

 B) Put NoSQL appenders in separate artifacts so that dependencies don't
 have to be optional/provided, but as a result each NoSQL will have to have
 its own appender and each will result in a separate log4j artifact.

 I REALLY don't like option B, but optional/provided dependencies also
 require careful, explicit documentation and, my bet, will end up generating lots of
 mailing-list emails asking "Why am I getting a NoClassDefFoundError?"


 That's not an issue IMO since you only get the error when the NoSQL
 appender class would be loaded. If you use plain log4j, you'd never see the
 error.

 If we do a NoSQL appender, it seems to me that each will have its own
 idiosyncrasies and configuration tweaks, so a single generic class is not
 going to be pretty. So I would do:

 - an AppScale appender, or an equivalent JDBC-like, vendor-neutral API
 - if need be, our own, with an abstract NoSQL appender and vendor-specific
 subclasses.


I may not understand what AppScale is exactly. It looks like just a cloud
service. Is that correct?



 Gary


 Agree with Gary on keeping things simple. Also agree every new feature
 needs to be in beta for a while to shake out bugs etc. I don't have an
 opinion on whether to include jdbc appenders in the first 2.0 release or
 add them later.

 You know, I was actually thinking to write a tutorial for custom plugin
 developers, called (How to) Write Your Own Darn JdbcAppender! :-)

 Sent from my iPhone

 On 2013/04/25, at 23:51, Gary Gregory garydgreg...@gmail.com wrote:

 On Thu, Apr 25, 2013 at 10:39 AM, Nicholas Williams 
 nicho...@nicholaswilliams.net wrote:

 First, a quick question: do we anticipate the next version to be beta6
 or rc1? Just curious.


 As long as we are adding features, I'd like to keep rolling Betas. But
 it would also be OK to release 2.0 and add appenders later.

 I tried porting our app to 2.0 a couple of weeks ago but ran into lots
 of issues, so I'll need to take another stab at it in a couple of weeks
 again. We rely on a lot of 1.0 guts so I'll have to think about that some
 more...


 I'm currently working on cleaning up compiler warnings throughout the
 project and should have that completed soon.


 Great!



 I want to go ahead and get the conversation started about database
 appenders. I'd like to see two appenders:

 - A JdbcAppender that is capable of logging to any RDBMS for which
 there is a JDBC driver.
 - A MongoAppender that is capable of logging to a MongoDB database.


 We should not need a MongoDB appender if there is a JDBC driver for it:
 docs.mongodb.org/ecosystem/drivers/java/



 The JdbcAppender and MongoAppender would, as far as I can tell, need
 properties for mapping all of the possible logging event properties to
 table columns (or Mongo equivalent). I don't really see any other way to
 accomplish that. We could use layout patterns from the PatternLayout to
 achieve this: <column name="columnName" pattern="PatternLayout equivalent pattern" />


 You can look at Log4J 1 for inspiration. Keep it simple for a start. I
 think version 1 just lets you specify a SQL INSERT instead of using some
 XML for mapping.



 I imagine the JdbcAppender having mutually exclusive properties for
 JDBC URL/username/password, DataSource JNDI URL, and
 class.staticFactoryMethod for obtaining a DataSource.


 Keep it simple for the first cut ;)



 The MongoAppender would similarly have mutually exclusive properties
 for connection information and class.staticFactoryMethod for obtaining a
 Mongo instance.

 I'd like to take a stab at these after I complete fixing compiler
 warnings, and wanted to start getting feedback

Re: Database Appenders

2013-04-25 Thread Nicholas Williams
On Thu, Apr 25, 2013 at 11:06 AM, Gary Gregory garydgreg...@gmail.com wrote:

 I thought it included a vendor neutral NoSQL API in its stack someplace.


From what I can tell, it includes a vendor-neutral way to connect to its
cloud-based application engine. But you must either connect to a hosted
cloud application engine, or you must download and install the cloud
application engine on your own machine(s) and then connect to it. There
doesn't appear to be an actual, extractable database API that can be used
to connect directly to generic NoSQL databases.

But I could be wrong. I'm not finding a whole lot of useful information on
their website.



 Gary

 On Apr 25, 2013, at 11:47, Nicholas Williams 
 nicho...@nicholaswilliams.net wrote:


 On Thu, Apr 25, 2013 at 10:43 AM, Gary Gregory garydgreg...@gmail.com wrote:

 On Thu, Apr 25, 2013 at 11:37 AM, Nicholas Williams 
 nicho...@nicholaswilliams.net wrote:


 On Thu, Apr 25, 2013 at 10:01 AM, Remko Popma rem...@yahoo.com wrote:

 I think this is the link Gary is talking about: (from the wiki)
 Build a NoSQL Appender, maybe with 
 AppScalehttp://wiki.apache.org/logging/AppScale
 : http://appscale.cs.ucsb.edu/datastores.html Inspiration came from
 the log4j1 appender for redis:
 https://github.com/pavlobaron/log4j2redis


 Unless I'm misunderstanding something (very possible), AppScale seems
 like a lot of overhead and overkill. However, it should certainly be
 possible to create a NoSqlAppender that has a type property where you
 specify the database type (Mongo, Redis, etc.).

 One thing we need to keep in mind is dependencies: appending to Mongo
 will require the org.mongodb:mongo-java-driver artifact (and its
 dependencies), Redis will require the redis.clients:jedis artifact (and its
 dependencies), etc. So, as far as I see it, we have two options:

 A) Have one NoSqlAppender that initially supports MongoDB and can easily
 be made to support other NoSQL databases, put it in the log4j-core
 artifact, and mark all dependencies as OPTIONAL or PROVIDED (thoughts?
 opinions?) so as not to cause using projects to download unnecessary
 dependencies.

 B) Put NoSQL appenders in separate artifacts so that dependencies don't
 have to be optional/provided, but as a result each NoSQL will have to have
 its own appender and each will result in a separate log4j artifact.

 I REALLY don't like option B, but optional/provided dependencies also
 require careful, explicit documentation and, my bet, will end up generating lots of
 mailing-list emails asking "Why am I getting a NoClassDefFoundError?"


 That's not an issue IMO since you only get the error when the NoSQL
 appender class would be loaded. If you use plain log4j, you'd never see the
 error.

 If we do a NoSQL appender, it seems to me that each will have its own
 idiosyncrasies and configuration tweaks, so a single generic class is not
 going to be pretty. So I would do:

 - an AppScale appender, or an equivalent JDBC-like, vendor-neutral API
 - if need be, our own, with an abstract NoSQL appender and vendor-specific
 subclasses.


 I may not understand what AppScale is exactly. It looks like just a cloud
 service. Is that correct?



 Gary


 Agree with Gary on keeping things simple. Also agree every new feature
 needs to be in beta for a while to shake out bugs etc. I don't have an
 opinion on whether to include jdbc appenders in the first 2.0 release or
 add them later.

 You know, I was actually thinking to write a tutorial for custom plugin
 developers, called (How to) Write Your Own Darn JdbcAppender! :-)

 Sent from my iPhone

 On 2013/04/25, at 23:51, Gary Gregory garydgreg...@gmail.com wrote:

 On Thu, Apr 25, 2013 at 10:39 AM, Nicholas Williams 
 nicho...@nicholaswilliams.net wrote:

 First, a quick question: do we anticipate the next version to be beta6
 or rc1? Just curious.


 As long as we are adding features, I'd like to keep rolling Betas. But
 it would also be OK to release 2.0 and add appenders later.

 I tried porting our app to 2.0 a couple of weeks ago but ran into lots
 of issues, so I'll need to take another stab at it in a couple of weeks
 again. We rely on a lot of 1.0 guts so I'll have to think about that some
 more...


 I'm currently working on cleaning up compiler warnings throughout the
 project and should have that completed soon.


 Great!



 I want to go ahead and get the conversation started about database
 appenders. I'd like to see two appenders:

 - A JdbcAppender that is capable of logging to any RDBMS for which
 there is a JDBC driver.
 - A MongoAppender that is capable of logging to a MongoDB database.


 We should not need a MongoDB appender if there is a JDBC driver for it:
 docs.mongodb.org/ecosystem/drivers/java/



 The JdbcAppender and MongoAppender would, as far as I can tell, need
 properties for mapping all of the possible logging event properties to
 table columns (or Mongo equivalent). I don't really see any other way to
 accomplish that. We could use layout patterns from the PatternLayout

Re: Database Appenders

2013-04-25 Thread Nicholas Williams
On Thu, Apr 25, 2013 at 11:15 AM, Nicholas Williams 
nicho...@nicholaswilliams.net wrote:


 On Thu, Apr 25, 2013 at 11:06 AM, Gary Gregory garydgreg...@gmail.com wrote:

 I thought it included a vendor neutral NoSQL API in its stack someplace.


 From what I can tell, it includes a vendor-neutral way to connect to its
 cloud-based application engine. But you must either connect to a hosted
 cloud application engine, or you must download and install the cloud
 application engine on your own machine(s) and then connect to it. There
 doesn't appear to be an actual, extractable database API that can be used
 to connect directly to generic NoSQL databases.

 But I could be wrong. I'm not finding a whole lot of useful information on
 their website.


From their GitHub repository:

AppScale is a platform that allows users to deploy and host their own
Google App Engine applications. It executes automatically over Amazon EC2,
Rackspace, Eucalyptus, CloudStack, OpenStack, as well as Xen, VirtualBox,
VMWare, and KVM. It has been developed and is maintained by AppScale
Systems, Inc., in Santa Barbara, CA.

It supports the Python, Java, and Go Google App Engine platforms.






 Gary

 On Apr 25, 2013, at 11:47, Nicholas Williams 
 nicho...@nicholaswilliams.net wrote:


 On Thu, Apr 25, 2013 at 10:43 AM, Gary Gregory garydgreg...@gmail.com wrote:

 On Thu, Apr 25, 2013 at 11:37 AM, Nicholas Williams 
 nicho...@nicholaswilliams.net wrote:


 On Thu, Apr 25, 2013 at 10:01 AM, Remko Popma rem...@yahoo.com wrote:

 I think this is the link Gary is talking about: (from the wiki)
 Build a NoSQL Appender, maybe with 
 AppScalehttp://wiki.apache.org/logging/AppScale
 : http://appscale.cs.ucsb.edu/datastores.html Inspiration came from
 the log4j1 appender for redis:
 https://github.com/pavlobaron/log4j2redis


 Unless I'm misunderstanding something (very possible), AppScale seems
 like a lot of overhead and overkill. However, it should certainly be
 possible to create a NoSqlAppender that has a type property where you
 specify the database type (Mongo, Redis, etc.).

 One thing we need to keep in mind is dependencies: appending to Mongo
 will require the org.mongodb:mongo-java-driver artifact (and its
 dependencies), Redis will require the redis.clients:jedis artifact (and its
 dependencies), etc. So, as far as I see it, we have two options:

 A) Have one NoSqlAppender that initially supports MongoDB and can
 easily be made to support other NoSQL databases, put it in the log4j-core
 artifact, and mark all dependencies as OPTIONAL or PROVIDED (thoughts?
 opinions?) so as not to cause using projects to download unnecessary
 dependencies.

 B) Put NoSQL appenders in separate artifacts so that dependencies don't
 have to be optional/provided, but as a result each NoSQL will have to have
 its own appender and each will result in a separate log4j artifact.

 I REALLY don't like option B, but optional/provided dependencies also
 require careful, explicit documentation and, my bet, will end up generating lots of
 mailing-list emails asking "Why am I getting a NoClassDefFoundError?"


 That's not an issue IMO since you only get the error when the NoSQL
 appender class would be loaded. If you use plain log4j, you'd never see the
 error.

 If we do a NoSQL appender, it seems to me that each will have its own
 idiosyncrasies and configuration tweaks, so a single generic class is not
 going to be pretty. So I would do:

 - an AppScale appender, or an equivalent JDBC-like, vendor-neutral API
 - if need be, our own, with an abstract NoSQL appender and vendor-specific
 subclasses.


 I may not understand what AppScale is exactly. It looks like just a cloud
 service. Is that correct?



 Gary


 Agree with Gary on keeping things simple. Also agree every new feature
 needs to be in beta for a while to shake out bugs etc. I don't have an
 opinion on whether to include jdbc appenders in the first 2.0 release or
 add them later.

 You know, I was actually thinking to write a tutorial for custom
 plugin developers, called (How to) Write Your Own Darn JdbcAppender! :-)

 Sent from my iPhone

 On 2013/04/25, at 23:51, Gary Gregory garydgreg...@gmail.com wrote:

 On Thu, Apr 25, 2013 at 10:39 AM, Nicholas Williams 
 nicho...@nicholaswilliams.net wrote:

 First, a quick question: do we anticipate the next version to be
 beta6 or rc1? Just curious.


 As long as we are adding features, I'd like to keep rolling Betas. But
 it would also be OK to release 2.0 and add appenders later.

 I tried porting our app to 2.0 a couple of weeks ago but ran into lots
 of issues, so I'll need to take another stab at it in a couple of weeks
 again. We rely on a lot of 1.0 guts so I'll have to think about that some
 more...


 I'm currently working on cleaning up compiler warnings throughout the
 project and should have that completed soon.


 Great!



 I want to go ahead and get the conversation started about database
 appenders. I'd like to see two appenders:

 - A JdbcAppender that is capable

Re: Database Appenders

2013-04-25 Thread Nicholas Williams
That's certainly a possibility. I think a JpaAppender might be overkill for
some applications, and perhaps the user doesn't want another dependency to
log to the database. I think a JpaAppender would be supplementary to a
JdbcAppender, not a replacement for it.


On Thu, Apr 25, 2013 at 1:11 PM, Ralph Goers ralph.go...@dslextreme.com wrote:

 Rather than a JDBC appender I was hoping for a more generic appender that
 could use JPA or something else.


 On Apr 25, 2013, at 7:39 AM, Nicholas Williams wrote:

  First, a quick question: do we anticipate the next version to be beta6
 or rc1? Just curious.
 
  I'm currently working on cleaning up compiler warnings throughout the
 project and should have that completed soon.
 
  I want to go ahead and get the conversation started about database
 appenders. I'd like to see two appenders:
 
  - A JdbcAppender that is capable of logging to any RDBMS for which there
 is a JDBC driver.
  - A MongoAppender that is capable of logging to a MongoDB database.
 
  The JdbcAppender and MongoAppender would, as far as I can tell, need
 properties for mapping all of the possible logging event properties to
 table columns (or Mongo equivalent). I don't really see any other way to
 accomplish that. We could use layout patterns from the PatternLayout to
 achieve this: <column name="columnName" pattern="PatternLayout-equivalent pattern" />
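The column-to-pattern mapping described above might look something like this in a configuration file. This is purely a sketch; the element and attribute names are illustrative, not a final syntax:

```
<!-- Hypothetical JdbcAppender configuration sketch; names are illustrative. -->
<JDBC name="databaseAppender" tableName="LOG_EVENTS">
    <Column name="EVENT_DATE" pattern="%d{yyyy-MM-dd HH:mm:ss}" />
    <Column name="LEVEL" pattern="%level" />
    <Column name="LOGGER" pattern="%logger" />
    <Column name="MESSAGE" pattern="%message" />
</JDBC>
```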
 
  I imagine the JdbcAppender having mutually exclusive properties for JDBC
 URL/username/password, DataSource JNDI URL, and class.staticFactoryMethod
 for obtaining a DataSource.
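The "mutually exclusive properties" validation described above could be sketched roughly as follows. Class and method names here are illustrative, not the actual Log4j API:

```java
// Sketch of validating mutually exclusive connection settings for a
// hypothetical JdbcAppender: exactly one source of a connection is allowed.
final class ConnectionSettings {
    private final String jdbcUrl;       // direct JDBC URL/username/password style
    private final String jndiName;      // DataSource looked up via JNDI
    private final String factoryMethod; // class.staticFactoryMethod style

    ConnectionSettings(String jdbcUrl, String jndiName, String factoryMethod) {
        int provided = 0;
        if (jdbcUrl != null) provided++;
        if (jndiName != null) provided++;
        if (factoryMethod != null) provided++;
        if (provided != 1) {
            throw new IllegalArgumentException(
                "Exactly one of jdbcUrl, jndiName, or factoryMethod must be set; got "
                    + provided);
        }
        this.jdbcUrl = jdbcUrl;
        this.jndiName = jndiName;
        this.factoryMethod = factoryMethod;
    }

    String describe() {
        if (jdbcUrl != null) return "direct JDBC: " + jdbcUrl;
        if (jndiName != null) return "JNDI DataSource: " + jndiName;
        return "factory method: " + factoryMethod;
    }
}
```

The same fail-fast check would apply to the MongoAppender's connection-info-versus-factory-method choice.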
 
  The MongoAppender would similarly have mutually exclusive properties for
 connection information and class.staticFactoryMethod for obtaining a Mongo
 instance.
 
  I'd like to take a stab at these after I complete fixing compiler
 warnings, and wanted to start getting feedback/ideas and also see if anyone
 has use cases for other NoSQL appenders.


 -
 To unsubscribe, e-mail: log4j-dev-unsubscr...@logging.apache.org
 For additional commands, e-mail: log4j-dev-h...@logging.apache.org




Re: Database Appenders

2013-04-25 Thread Nicholas Williams
Interesting idea. I'll look into the JSONLayout as a possible solution.
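The JSONLayout idea could be sketched roughly like this: render the event's fields to a JSON string, then hand that document to the driver. This is a toy illustration with hypothetical names; no real Mongo driver or the actual JSONLayout is used here, and the JSON rendering does no escaping:

```java
// Sketch: render a log event's fields to a JSON string, which a Mongo-style
// driver could then parse and insert as a document. All names are illustrative.
import java.util.LinkedHashMap;
import java.util.Map;

final class JsonEventSketch {
    /** Naive JSON rendering of string-valued event fields (no quote escaping). */
    static String toJson(Map<String, String> eventFields) {
        StringBuilder json = new StringBuilder("{");
        boolean first = true;
        for (Map.Entry<String, String> e : eventFields.entrySet()) {
            if (!first) json.append(',');
            json.append('"').append(e.getKey()).append("\":\"")
                .append(e.getValue()).append('"');
            first = false;
        }
        return json.append('}').toString();
    }

    public static void main(String[] args) {
        Map<String, String> event = new LinkedHashMap<>();
        event.put("level", "INFO");
        event.put("message", "startup complete");
        // A real appender would hand this JSON to the database driver rather
        // than printing it.
        System.out.println(toJson(event));
        // prints {"level":"INFO","message":"startup complete"}
    }
}
```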


On Thu, Apr 25, 2013 at 1:13 PM, Ralph Goers ralph.go...@dslextreme.com wrote:

 I would think that for something like MongoDB you would want to use a
 JSONLayout and then just insert that.  I can tell you that for Cassandra it
 is considerably more complicated.

 Ralph




 On Apr 25, 2013, at 9:25 AM, Nicholas Williams wrote:


 On Thu, Apr 25, 2013 at 11:17 AM, Nicholas Williams 
 nicho...@nicholaswilliams.net wrote:


 On Thu, Apr 25, 2013 at 11:15 AM, Nicholas Williams 
 nicho...@nicholaswilliams.net wrote:


 On Thu, Apr 25, 2013 at 11:06 AM, Gary Gregory 
 garydgreg...@gmail.com wrote:

 I thought it included a vendor-neutral NoSQL API in its stack
 someplace.


 From what I can tell, it includes a vendor-neutral way to connect to its
 cloud-based application engine. But you must either connect to a hosted
 cloud application engine, or you must download and install the cloud
 application engine on your own machine(s) and then connect to it. There
 doesn't appear to be an actual, extractable database API that can be used
 to connect directly to generic NoSQL databases.

 But I could be wrong. I'm not finding a whole lot of useful information
 on their website.


 From their GitHub repository:

 AppScale is a platform that allows users to deploy and host their own
 Google App Engine applications. It executes automatically over Amazon EC2,
 Rackspace, Eucalyptus, CloudStack, OpenStack, as well as Xen, VirtualBox,
 VMWare, and KVM. It has been developed and is maintained by AppScale
 Systems, Inc., in Santa Barbara, CA.

 It supports the Python, Java, and Go Google App Engine platforms.


 Doing some Googling, there doesn't seem to be a vendor-neutral Java API
 for NoSQL databases at all--which doesn't surprise me. But if you think
 about it, even if there were, we wouldn't need something that large and (for
 us) cumbersome. All we need is a straightforward API for inserting simple
 POJOs. I think it would be more efficient/easier to simply create a small
 two-interface API and implement it for each supported NoSQL database.
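The two-interface API proposed above could be sketched roughly as follows. The interface and method names are hypothetical (not the final Log4j API), and the in-memory implementation merely stands in for a Mongo- or Redis-backed one:

```java
// Illustrative sketch of a minimal two-interface NoSQL abstraction:
// one interface to build a document, one to write it.
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

interface NoSqlObject {
    void set(String field, Object value); // build up one document/record
    Map<String, Object> asMap();          // vendor driver converts from here
}

interface NoSqlConnection {
    NoSqlObject createObject();           // vendor-specific document factory
    void insert(NoSqlObject object);      // vendor-specific write
    void close();
}

// Trivial in-memory implementation standing in for a real vendor driver.
final class InMemoryConnection implements NoSqlConnection {
    final List<Map<String, Object>> store = new ArrayList<>();

    @Override public NoSqlObject createObject() {
        final Map<String, Object> doc = new LinkedHashMap<>();
        return new NoSqlObject() {
            @Override public void set(String field, Object value) { doc.put(field, value); }
            @Override public Map<String, Object> asMap() { return doc; }
        };
    }
    @Override public void insert(NoSqlObject object) { store.add(object.asMap()); }
    @Override public void close() { store.clear(); }
}
```

A shared abstract appender could then depend only on the two interfaces, with each vendor module supplying its own implementation.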







 Gary

 On Apr 25, 2013, at 11:47, Nicholas Williams 
 nicho...@nicholaswilliams.net wrote:


 On Thu, Apr 25, 2013 at 10:43 AM, Gary Gregory 
 garydgreg...@gmail.com wrote:

 On Thu, Apr 25, 2013 at 11:37 AM, Nicholas Williams 
 nicho...@nicholaswilliams.net wrote:


 On Thu, Apr 25, 2013 at 10:01 AM, Remko Popma rem...@yahoo.com wrote:

 I think this is the link Gary is talking about (from the wiki): Build a
 NoSQL Appender, maybe with AppScale (http://wiki.apache.org/logging/AppScale):
 http://appscale.cs.ucsb.edu/datastores.html. Inspiration came from the
 log4j1 appender for redis: https://github.com/pavlobaron/log4j2redis


 Unless I'm misunderstanding something (very possible), AppScale seems
 like a lot of overhead and overkill. However, it should certainly be
 possible to create a NoSqlAppender that has a type property where you
 specify the database type (Mongo, Redis, etc.).

 One thing we need to keep in mind is dependencies: appending to Mongo
 will require the org.mongodb:mongo-java-driver artifact (and its
 dependencies), Redis will require the redis.clients:jedis artifact (and 
 its
 dependencies), etc. So, as far as I see it, we have two options:

 A) Have one NoSqlAppender that initially supports MongoDB and can
 easily be made to support other NoSQL databases, put it in the log4j-core
 artifact, and mark all of its database-driver dependencies as OPTIONAL or
 PROVIDED (thoughts? opinions?) so as not to force consuming projects to
 download unnecessary dependencies.

 B) Put NoSQL appenders in separate artifacts so that dependencies
 don't have to be optional/provided, but as a result each NoSQL database
 will have to have its own appender and each will result in a separate
 log4j artifact.

 I REALLY don't like option B, but optional/provided dependencies also
 require careful, explicit documentation and, my bet, will end up with lots
 of mailing list emails asking "Why am I getting a NoClassDefFoundError?"
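Option A's optional-dependency approach could be sketched as a POM fragment like the one below. The coordinates are real Maven artifacts mentioned in this thread, but the version is illustrative:

```
<!-- Hypothetical log4j-core pom.xml fragment: the Mongo driver is available
     at compile time but is NOT pulled in transitively by users of log4j-core. -->
<dependency>
    <groupId>org.mongodb</groupId>
    <artifactId>mongo-java-driver</artifactId>
    <version>2.11.1</version>
    <optional>true</optional>
</dependency>
```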


 That's not an issue IMO since you only get the error when the NoSQL
 appender class would be loaded. If you use plain log4j, you'd never see 
 the
 error.

 If we do a NoSQL appender, it seems to me that each will have its own
 idiosyncrasies and configuration tweaks, so a single generic class is not
 going to be pretty. So I would do:

 - an AppScale appender or an equivalent JDBC-like vendor-neutral API
 - if need be, our own: a NoSQL abstract appender with vendor-specific
 subclasses.


 I may not understand what AppScale is exactly. It looks like just a
 cloud service. Is that correct?



 Gary


 Agree with Gary on keeping things simple. Also agree every new
 feature needs to be in beta for a while to shake out bugs etc. I don't have
 an opinion on whether to include JDBC appenders in the first 2.0 release or
 add them later.

 You know, I was actually thinking of writing a tutorial for custom
 plugin developers, called (How