[
https://issues.apache.org/jira/browse/ACCUMULO-407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13211197#comment-13211197
]
Josh Elser commented on ACCUMULO-407:
-------------------------------------
All nodes already need to have a valid local Accumulo installation (which
includes the conf directory), so no, I don't see a point in having it read from
HDFS currently. There could be some benefit from exposing something in the
monitor, though.
I'd like to see this rolled into the upcoming 1.4.0 release, so I didn't want
to make a big change. I would assume that you already have some sort of tool
installed on the cluster to copy a new log4j file out (e.g. pscp, pdcp). The
same goes for making a read-only conf directory writable to do the copy.
All that being said, I don't think anyone would object to you writing something
to procedurally alter the log level either. Please have at it.
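For reference, procedurally altering a log level with log4j 1.2 is a one-liner against the Logger API. A minimal sketch, assuming log4j 1.2 is on the classpath; the logger name here is illustrative, not a specific Accumulo logger:

```java
import org.apache.log4j.Level;
import org.apache.log4j.LogManager;
import org.apache.log4j.Logger;

public class LogLevelTool {
    public static void main(String[] args) {
        // Illustrative logger name; pick the hierarchy you want to inspect.
        Logger tserverLog = LogManager.getLogger("org.apache.accumulo.server.tabletserver");

        // getLevel() returns null if the level is inherited from a parent.
        Level previous = tserverLog.getLevel();

        // Raise to DEBUG at runtime, without touching log4j.properties.
        tserverLog.setLevel(Level.DEBUG);

        // ... investigate the long-running query ...

        // Restore; setLevel(null) reverts to the inherited level.
        tserverLog.setLevel(previous);
    }
}
```

The awkward part is not the API call but reaching the right JVM: each tablet server has its own Logger hierarchy, so such a tool would need an RPC or JMX hook into every process.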
> Look into on the fly log4j configuration
> ----------------------------------------
>
> Key: ACCUMULO-407
> URL: https://issues.apache.org/jira/browse/ACCUMULO-407
> Project: Accumulo
> Issue Type: Improvement
> Affects Versions: 1.4.0
> Reporter: John Vines
> Assignee: Josh Elser
> Fix For: 1.4.0
>
> Attachments: ACCUMULO-407-auto-reload-log4j.patch
>
>
> For long-running systems, logs may not need to be kept at the debug level
> 24/7. But there may be times when a single long query needs to be looked
> into without cycling the entire system. We think it may be possible to make
> log4j configurable on the fly, so let's start by looking into how difficult
> it will be.
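The auto-reload approach in the attached patch presumably relies on log4j 1.2's built-in file watching. A minimal sketch, assuming log4j 1.2 on the classpath; the config path and delay are illustrative:

```java
import org.apache.log4j.PropertyConfigurator;

public class WatchConfig {
    public static void main(String[] args) {
        // Spawns a daemon thread that re-reads the properties file every
        // 60 seconds; edits (e.g. a logger flipped to DEBUG) take effect
        // without restarting the process.
        PropertyConfigurator.configureAndWatch("conf/log4j.properties", 60000L);
    }
}
```

With this in place, enabling debug for one subsystem is just a config edit pushed out with pscp/pdcp, and another edit reverts it, so the cluster never has to be cycled.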
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira