STORM-976.

Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/962d57be
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/962d57be
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/962d57be

Branch: refs/heads/master
Commit: 962d57be14d81c0f9b485182f6dd73bcd84dd50d
Parents: e7088c7
Author: YvonneIronberg <yvonne.ironb...@gmail.com>
Authored: Fri Aug 14 16:06:28 2015 -0700
Committer: YvonneIronberg <yvonne.ironb...@gmail.com>
Committed: Fri Aug 14 16:06:28 2015 -0700

----------------------------------------------------------------------
 SECURITY.md               |  2 +-
 bin/storm-config.cmd      | 14 +++++++-------
 docs/documentation/FAQ.md |  7 +++++--
 3 files changed, 13 insertions(+), 10 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/962d57be/SECURITY.md
----------------------------------------------------------------------
diff --git a/SECURITY.md b/SECURITY.md
index c231547..c406ce2 100644
--- a/SECURITY.md
+++ b/SECURITY.md
@@ -441,7 +441,7 @@ The Logviewer daemon now is also responsible for cleaning up old log files for d
 
 | YAML Setting | Description |
 |--------------|-------------------------------------|
-| logviewer.cleanup.age.mins | How old (by last modification time) must a worker's log be before that log is considered for clean-up. (Living workers' logs are never cleaned up by the logviewer: Their logs are rolled via logback.) |
+| logviewer.cleanup.age.mins | How old (by last modification time) must a worker's log be before that log is considered for clean-up. (Living workers' logs are never cleaned up by the logviewer: Their logs are rolled via log4j2.) |
 | logviewer.cleanup.interval.secs | Interval of time in seconds that the logviewer cleans up worker logs. |
 
 
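For reference, these two settings live in storm.yaml. A minimal excerpt is below; the values shown are illustrative, not part of this commit:

    logviewer.cleanup.age.mins: 10080     # consider worker logs older than 7 days for clean-up
    logviewer.cleanup.interval.secs: 600  # run the logviewer clean-up pass every 10 minutes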

http://git-wip-us.apache.org/repos/asf/storm/blob/962d57be/bin/storm-config.cmd
----------------------------------------------------------------------
diff --git a/bin/storm-config.cmd b/bin/storm-config.cmd
index c0906e7..b839160 100644
--- a/bin/storm-config.cmd
+++ b/bin/storm-config.cmd
@@ -91,26 +91,26 @@ if not defined STORM_LOG_DIR (
 FOR /F "delims=" %%i in (%CMD_TEMP_FILE%) do (
        FOR /F "tokens=1,* delims= " %%a in ("%%i") do (
                if %%a == VALUE: (
-                       set STORM_LOGBACK_CONFIGURATION_DIR=%%b
+                       set STORM_LOG4J2_CONFIGURATION_DIR=%%b
                        del /F %CMD_TEMP_FILE%)
                )
        )
 )              
 
 @rem
-@rem if STORM_LOGBACK_CONFIGURATION_DIR was defined, also set STORM_LOGBACK_CONFIGURATION_FILE
+@rem if STORM_LOG4J2_CONFIGURATION_DIR was defined, also set STORM_LOG4J2_CONFIGURATION_FILE
 @rem
 
-if not %STORM_LOGBACK_CONFIGURATION_DIR% == nil (
-       set STORM_LOGBACK_CONFIGURATION_FILE=%STORM_LOGBACK_CONFIGURATION_DIR%\cluster.xml
+if not %STORM_LOG4J2_CONFIGURATION_DIR% == nil (
+       set STORM_LOG4J2_CONFIGURATION_FILE=%STORM_LOG4J2_CONFIGURATION_DIR%\cluster.xml
 ) 
 
 @rem
 @rem otherwise, fall back to default
 @rem
 
-if not defined STORM_LOGBACK_CONFIGURATION_FILE (
-  set STORM_LOGBACK_CONFIGURATION_FILE=%STORM_HOME%\log4j2\cluster.xml
+if not defined STORM_LOG4J2_CONFIGURATION_FILE (
+  set STORM_LOG4J2_CONFIGURATION_FILE=%STORM_HOME%\log4j2\cluster.xml
 )
 
 "%JAVA%" -client -Dstorm.options= -Dstorm.conf.file= -cp "%CLASSPATH%" backtype.storm.command.config_value java.library.path > %CMD_TEMP_FILE%
@@ -126,7 +126,7 @@ FOR /F "delims=" %%i in (%CMD_TEMP_FILE%) do (
 
 :storm_opts
 set STORM_OPTS=-Dstorm.options= -Dstorm.home=%STORM_HOME% -Djava.library.path=%JAVA_LIBRARY_PATH%;%JAVA_HOME%\bin;%JAVA_HOME%\lib;%JAVA_HOME%\jre\bin;%JAVA_HOME%\jre\lib
- set STORM_OPTS=%STORM_OPTS% -Dlog4j.configurationFile=%STORM_LOGBACK_CONFIGURATION_FILE%
+ set STORM_OPTS=%STORM_OPTS% -Dlog4j.configurationFile=%STORM_LOG4J2_CONFIGURATION_FILE%
  set STORM_OPTS=%STORM_OPTS% -Dstorm.log.dir=%STORM_LOG_DIR%
  del /F %CMD_TEMP_FILE%
 
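As context for the -Dlog4j.configurationFile flag set above: Log4j2 reads that system property at startup to locate its configuration file. A minimal sketch, not part of this commit (the class name is hypothetical):

    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;

    public class Log4j2ConfigCheck {
        public static void main(String[] args) {
            // storm-config.cmd passes -Dlog4j.configurationFile=...\cluster.xml
            System.out.println("log4j.configurationFile = "
                    + System.getProperty("log4j.configurationFile"));
            // Log4j2 initializes from that file on first logger access.
            Logger log = LogManager.getLogger(Log4j2ConfigCheck.class);
            log.info("logging configured");
        }
    }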

http://git-wip-us.apache.org/repos/asf/storm/blob/962d57be/docs/documentation/FAQ.md
----------------------------------------------------------------------
diff --git a/docs/documentation/FAQ.md b/docs/documentation/FAQ.md
index b292b2f..a183753 100644
--- a/docs/documentation/FAQ.md
+++ b/docs/documentation/FAQ.md
@@ -28,7 +28,10 @@ documentation: true
 
 ### Halp! I cannot see:
 
-* **my logs** Logs by default go to $STORM_HOME/logs. Check that you have write permissions to that directory. They are configured in the logback/cluster.xml (0.9) and log4j/*.properties in earlier versions.
+* **my logs** Logs by default go to $STORM_HOME/logs. Check that you have write permissions to that directory. They are configured in
+    * log4j2/{cluster, worker}.xml (0.11);
+    * logback/cluster.xml (0.9 - 0.10);
+    * log4j/*.properties in earlier versions (< 0.9).
 * **final JVM settings** Add the `-XX:+PrintFlagsFinal` command-line option in the childopts (see the conf file)
 * **final Java system properties** Add `Properties props = System.getProperties(); props.list(System.out);` near where you build your topology (a sketch follows after this list).
 
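To illustrate the system-properties tip above, a minimal sketch; the class name and the bare TopologyBuilder are placeholders, not from this FAQ:

    import java.util.Properties;
    import backtype.storm.topology.TopologyBuilder;

    public class DumpProps {
        public static void main(String[] args) {
            TopologyBuilder builder = new TopologyBuilder();
            // ... declare spouts and bolts here ...

            // Print the final Java system properties the JVM is running with.
            Properties props = System.getProperties();
            props.list(System.out);
        }
    }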
@@ -120,4 +123,4 @@ You cannot know that all events are collected -- this is an epistemological chal
 * Set a time limit using domain knowledge
 * Introduce a _punctuation_: a record known to come after all records in the given time bucket. Trident uses this scheme to know when a batch is complete. If you for instance receive records from a set of sensors, each in order for that sensor, then once every sensor has sent you a 3:02:xx or later timestamp, you know you can commit that bucket (see the first sketch after this list).
 * When possible, make your process incremental: each value that comes in makes the answer more and more true. A Trident ReducerAggregator is an operator that takes a prior result and a set of new records and returns a new result. This lets the result be cached and serialized to a datastore; if a server drops offline for a day and then comes back with a full day's worth of data in a rush, the old results will be calmly retrieved and updated (see the second sketch after this list).
-* Lambda architecture: Record all events into an archival store (S3, HBase, HDFS) on receipt. in the fast layer, once the time window is clear, process the bucket to get an actionable answer, and ignore everything older than the time window. Periodically run a global aggregation to calculate a "correct" answer.
\ No newline at end of file
+* Lambda architecture: Record all events into an archival store (S3, HBase, HDFS) on receipt. In the fast layer, once the time window is clear, process the bucket to get an actionable answer, and ignore everything older than the time window. Periodically run a global aggregation to calculate a "correct" answer.
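
A minimal sketch of the punctuation idea from the list above, assuming in-order timestamps per sensor and a known sensor set (all names here are hypothetical):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Set;

    public class PunctuationTracker {
        private final Set<String> expectedSensors;          // every sensor we expect to hear from
        private final Map<String, Long> latestSeen = new HashMap<>();

        public PunctuationTracker(Set<String> expectedSensors) {
            this.expectedSensors = expectedSensors;
        }

        // Record an arrival; records are assumed in timestamp order per sensor.
        public void observe(String sensorId, long timestampMillis) {
            latestSeen.merge(sensorId, timestampMillis, Math::max);
        }

        // The bucket ending at bucketEndMillis is safe to commit once every
        // sensor has sent a record stamped at or after the bucket's end.
        public boolean canCommit(long bucketEndMillis) {
            for (String sensor : expectedSensors) {
                Long latest = latestSeen.get(sensor);
                if (latest == null || latest < bucketEndMillis) {
                    return false;
                }
            }
            return true;
        }
    }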
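And a sketch of the incremental pattern: Trident's ReducerAggregator interface takes a prior result plus one new tuple at a time, so partial results can be persisted and resumed. A simple count, assuming the 0.9/0.10-era storm.trident packages:

    import storm.trident.operation.ReducerAggregator;
    import storm.trident.tuple.TridentTuple;

    // Counts tuples incrementally: init() seeds the result, reduce() folds in
    // each new tuple, and the running Long can be serialized to a datastore.
    public class CountReducer implements ReducerAggregator<Long> {
        @Override
        public Long init() {
            return 0L;
        }

        @Override
        public Long reduce(Long curr, TridentTuple tuple) {
            return curr + 1;
        }
    }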
