Re: How to reload a programmed log4j2 configuration in java

2017-03-17 Thread ckl67
I got it:

import java.nio.file.Paths;

import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.core.Appender;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.config.Configuration;
import org.apache.logging.log4j.core.config.LoggerConfig;

public class Log4j2Init {

    private String logPath;

    // ---
    //  CONSTRUCTOR
    // ---
    /**
     * Initialize Log4j2.
     * Configures the log path which will be used in "log4j2.xml".
     * A common error is caused by a static logger: if you log before the
     * System.setProperty() call below, the "logpath.name" variable will be
     * UNDEFINED when the configuration file is read. So
     *     private static final Logger logger = LogManager.getLogger(test.class.getName());
     * is forbidden here!
     */
    public Log4j2Init() {
        logPath = Paths.get(getUserAppDirectory() + "/Pac-Tool").toString();
        System.setProperty("logpath.name", logPath);
    }

    // ---
    //  TEST
    // ---
    public static void main(String[] args) {

        Log4j2Init log4j2Init = new Log4j2Init();
        System.out.println("Path of the log file: " + log4j2Init.getLogPath());

        // Create the logger (only after the system property has been set)
        Logger logger = LogManager.getLogger(Log4j2Init.class.getName());

        @SuppressWarnings("resource")
        LoggerContext ctx = (LoggerContext) LogManager.getContext(false);
        Configuration config = ctx.getConfiguration();
        LoggerConfig loggerConfig = config.getLoggerConfig(LogManager.ROOT_LOGGER_NAME);

        // Read the appenders
        System.out.println("Appenders declared in .xml: " + loggerConfig.getAppenderRefs());
        System.out.println("Appenders used in logger: " + loggerConfig.getAppenders());

        // Apply the level specified in log4j2.xml
        System.out.println("Log level (default in .xml) = " + loggerConfig.getLevel());
        logger.error("This is Logger 1 Error");
        logger.info("This is Logger 1 Info");
        logger.debug("This is Logger 1 Debug");
        logger.trace("This is Logger 1 Trace");

        // Remove the console appender + set a new log level
        loggerConfig.removeAppender("Console");
        loggerConfig.setLevel(Level.TRACE);
        ctx.updateLoggers();

        System.out.println("Log level = " + loggerConfig.getLevel());
        System.out.println("Appenders used in logger: " + loggerConfig.getAppenders());

        logger.error("This is Logger 2 Error");
        logger.info("This is Logger 2 Info");
        logger.debug("This is Logger 2 Debug");
        logger.trace("This is Logger 2 Trace");

        // Add the console appender back
        Appender appender = config.getAppender("Console");
        loggerConfig.addAppender(appender, Level.TRACE, null);
        ctx.updateLoggers();

        System.out.println("Log level = " + loggerConfig.getLevel());
        System.out.println("Appenders used in logger: " + loggerConfig.getAppenders());

        logger.error("This is Logger 3 Error");
        logger.info("This is Logger 3 Info");
        logger.debug("This is Logger 3 Debug");
        logger.trace("This is Logger 3 Trace");
    }

    /**
     * Returns the per-user application data directory for the current OS.
     * @return the platform-specific application data directory
     */
    private String getUserAppDirectory() {
        String workingDirectory;
        String os = System.getProperty("os.name").toUpperCase();
        if (os.contains("WIN")) {
            // On Windows it is simply the location of the "AppData" folder
            workingDirectory = System.getenv("AppData");
        } else {
            // Otherwise we assume Linux or Mac
            workingDirectory = System.getProperty("user.home");
            if (os.contains("MAC")) {
                // On a Mac we are not done: per-user data lives under "Application Support"
                workingDirectory += "/Library/Application Support";
            }
        }
        return workingDirectory;
    }

    public String getLogPath() {
        return logPath;
    }
}
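For reference, a minimal `log4j2.xml` that consumes the `logpath.name` system property set above might look like this. This is only a sketch: the appender name "Console" is taken from the code above, but the file name `pac-tool.log` and the layout patterns are assumptions.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
  <Appenders>
    <!-- Console appender: this is the one removed and re-added by main() -->
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
    </Console>
    <!-- File appender: ${sys:logpath.name} resolves the property set in the constructor -->
    <File name="File" fileName="${sys:logpath.name}/pac-tool.log">
      <PatternLayout pattern="%d %-5level %logger{36} - %msg%n"/>
    </File>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Console"/>
      <AppenderRef ref="File"/>
    </Root>
  </Loggers>
</Configuration>
```

Because `${sys:logpath.name}` is resolved when the configuration is first loaded, the property must be set before any logger is obtained, which is exactly why the static logger field is forbidden in this class.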




--
View this message in context: 
http://apache-logging.6191.n7.nabble.com/How-to-reload-a-programmed-log4j2-configuratio

Re: log4j2 issue

2017-03-17 Thread Mikael Ståldal
Have you tried to set blocking="false" on the AsyncAppender you have around
KafkaAppender?

Have you tried using the system properties log4j2.AsyncQueueFullPolicy and
log4j2.DiscardThreshold?
https://logging.apache.org/log4j/2.x/manual/configuration.html#log4j2.AsyncQueueFullPolicy
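To illustrate the first suggestion, a non-blocking AsyncAppender wrapping the KafkaAppender could be configured roughly like this (a sketch only; the appender names, topic, file name, and Kafka properties are assumptions, not taken from the original poster's configuration):

```xml
<Appenders>
  <Kafka name="Kafka" topic="app-logs">
    <PatternLayout pattern="%m"/>
    <Property name="bootstrap.servers">localhost:9092</Property>
    <!-- Bound how long a send may block when the cluster is down -->
    <Property name="request.timeout.ms">5000</Property>
  </Kafka>
  <File name="File" fileName="app.log">
    <PatternLayout pattern="%d %-5level %logger - %msg%n"/>
  </File>
  <!-- blocking="false": when the internal queue is full, events are no longer
       enqueued synchronously, so the application thread does not stall -->
  <Async name="AsyncKafka" blocking="false">
    <AppenderRef ref="Kafka"/>
  </Async>
</Appenders>
```

Alternatively, as the linked manual page describes, the discarding behavior can be enabled globally with the system properties mentioned above, e.g. `-Dlog4j2.AsyncQueueFullPolicy=Discard -Dlog4j2.DiscardThreshold=INFO`.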

On Tue, Mar 14, 2017 at 1:00 PM, Yang Rui  wrote:

> Hi,
>
> I am Rui from China.
>
> We use both a KafkaAppender (with an AsyncAppender wrapper)
> and a FileAppender of log4j2, version 2.6.2, in our application.
>
> Here is the scenario: when the Kafka cluster goes down and stops
> serving, the application slows down and waits for the given timeout
> (request.timeout.ms) before finally responding (once the bufferSize
> of the AsyncKafka appender is reached).
>
> I am wondering if there is any solution so that the
> FileAppender can always work normally, without any performance impact
> from the KafkaAppender. In other words, the KafkaAppender should
> "DISCARD" logs while the Kafka cluster is down, and the application
> can still output logs via the FileAppender.
>
>
> Thanks,
> Rui
>
>
>
>
> -
> To unsubscribe, e-mail: log4j-user-unsubscr...@logging.apache.org
> For additional commands, e-mail: log4j-user-h...@logging.apache.org
>



-- 

*Mikael Ståldal*
Senior software developer

*Magine TV*
mikael.stal...@magine.com
Grev Turegatan 3  | 114 46 Stockholm, Sweden  |   www.magine.com

Privileged and/or Confidential Information may be contained in this
message. If you are not the addressee indicated in this message
(or responsible for delivery of the message to such a person), you may not
copy or deliver this message to anyone. In such case,
you should destroy this message and kindly notify the sender by reply
email.


Re: log4j2 issue

2017-03-17 Thread Matt Sicker
If you don't care about old log messages that haven't been published yet
between times of Kafka availability, then yeah, discarding old messages
like that is an interesting workaround.

On 17 March 2017 at 08:58, Mikael Ståldal  wrote:

> Have you tried to set blocking="false" on the AsyncAppender you have around
> KafkaAppender?
>
> Have you tried using the system properties log4j2.AsyncQueueFullPolicy and
> log4j2.DiscardThreshold?
> https://logging.apache.org/log4j/2.x/manual/configuration.html#log4j2.AsyncQueueFullPolicy
>



-- 
Matt Sicker