[ 
https://issues.apache.org/jira/browse/SAMZA-310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14222819#comment-14222819
 ] 

Yan Fang commented on SAMZA-310:
--------------------------------

Thank you, Martin.

I made changes according to Martin's comments in the RB. The RB is the same as before: 
https://reviews.apache.org/r/28035/

{quote}
AM logs appeared in Kafka, except that the first 35 or so lines of log were 
missing. Do you know why that would be? Is there any way we can capture logs 
right from the start (buffering them if necessary if the producer is not yet 
connected)?
{quote}

I looked into it a little. All the missing logs are the ones written during 
activateOptions(). The tricky part is that the appender's "protected void 
append(LoggingEvent event)" is not invoked during "activateOptions()", even 
though we log inside it. Similarly, we miss the logs written while the 
SystemProducers are being initialized.

As discussed, if we cannot figure out a better way, this seems acceptable, 
because the logs are not lost as long as users also specify other log 
appenders.
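
For reference, Martin's buffering suggestion could look roughly like the sketch 
below (the class is hypothetical, not the appender in the RB): events that reach 
append() before the SystemProducer is connected are queued and replayed once it 
is up. It would not recover the activateOptions() gap above, because those 
events never reach append() at all.

{code:java}
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.spi.LoggingEvent;

// Illustrative sketch only: queue events that arrive before the producer is
// ready, then replay them once it is connected.
public class BufferingStreamAppenderSketch extends AppenderSkeleton {
  private final Queue<LoggingEvent> pending = new ConcurrentLinkedQueue<LoggingEvent>();
  private volatile boolean producerReady = false;

  @Override
  public void activateOptions() {
    // Producer setup would happen here (omitted). Note that events logged
    // inside this method never reach append(), which is the gap described above.
    producerReady = true;
    flushPending();
  }

  @Override
  protected void append(LoggingEvent event) {
    if (!producerReady) {
      // Too early to send: hold the event instead of dropping it.
      pending.add(event);
      return;
    }
    // Drain anything buffered first so the rough ordering is kept.
    flushPending();
    send(event);
  }

  private void flushPending() {
    LoggingEvent buffered;
    while ((buffered = pending.poll()) != null) {
      send(buffered);
    }
  }

  private void send(LoggingEvent event) {
    // Hand the event to the SystemProducer here (omitted in this sketch).
  }

  @Override
  public void close() {
  }

  @Override
  public boolean requiresLayout() {
    return false;
  }
}
{code}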

{quote}
When I looked at the output of kafka-console-consumer, the lines of the AM logs 
appeared in a different order from how they appeared in the regular log file. 
Any idea why? I thought using the container name as message key should make all 
the messages go to the same partition, and thus preserve their order.
{quote}

Could you check it again now? I do not see the same problem any more. I only 
see the AM missing some logs, due to the activateOptions() issue in (1) above.
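
For anyone reading along, the ordering expectation rests on key-based 
partitioning: with the container name as the message key, all of a container's 
log messages hash to one partition, and a single partition is consumed in 
append order. A self-contained toy illustration of that mapping (the keys and 
the partitioner below are made up, not the real ones):

{code:java}
// Self-contained illustration (not Kafka's or Samza's actual partitioner) of
// why a fixed message key keeps one container's logs in order: the same key
// always maps to the same partition, and a single partition preserves append
// order for its consumers.
public class KeyToPartitionSketch {
  static int partitionFor(String key, int numPartitions) {
    // Hypothetical hash-partitioner stand-in.
    return Math.abs(key.hashCode() % numPartitions);
  }

  public static void main(String[] args) {
    int partitions = 8;
    String amContainer = "samza-application-master";   // hypothetical key
    String taskContainer = "samza-container-0";        // hypothetical key

    // Every message keyed by the same container name lands in one partition,
    // so that container's logs stay ordered; different containers may interleave.
    System.out.println(partitionFor(amContainer, partitions));
    System.out.println(partitionFor(amContainer, partitions));   // same partition
    System.out.println(partitionFor(taskContainer, partitions)); // possibly another
  }
}
{code}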

{quote}
Logs from the other container (non-AM) did not appear in Kafka. The following 
error appeared on the container's stdout:
{quote}

Fixed this.

> Publish container logs to a SystemStream
> ----------------------------------------
>
>                 Key: SAMZA-310
>                 URL: https://issues.apache.org/jira/browse/SAMZA-310
>             Project: Samza
>          Issue Type: New Feature
>          Components: container
>    Affects Versions: 0.7.0
>            Reporter: Martin Kleppmann
>            Assignee: Yan Fang
>             Fix For: 0.9.0
>
>         Attachments: SAMZA-310.1.patch, SAMZA-310.2.patch, SAMZA-310.patch
>
>
> At the moment, it's a bit awkward to get to a Samza job's logs: assuming 
> you're running on YARN, you have to navigate around the YARN web interface, 
> and you can only see one container's logs at a time.
>
> Given that Samza is all about streams, it would make sense for the logs 
> generated by Samza jobs to also be sent to a stream. There, they could be 
> indexed with [Kibana|http://www.elasticsearch.org/overview/kibana/], consumed 
> by an exception-tracking system, etc.
>
> Notes:
> - The serde for encoding logs into a suitable wire format should be 
> pluggable. There can be a default implementation that uses JSON, analogous to 
> MetricsSnapshotSerdeFactory for metrics, but organisations that already have 
> a standardised in-house encoding for logs should be able to use it.
> - Should this be at the level of Slf4j or Log4j? Currently the log 
> configuration for YARN jobs uses Log4j, which has the advantage that any 
> frameworks/libraries that use Log4j but not Slf4j appear in the logs. 
> However, Samza itself currently only depends on Slf4j. If we tie this feature 
> to Log4j, it would somewhat defeat the purpose of using Slf4j.
> - Do we need to consider partitioning? Perhaps we can use the container name 
> as partitioning key, so that the ordering of logs from each container is 
> preserved.
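
On the pluggable-serde note in the description above, a minimal stand-in for 
what a default JSON encoding could look like (the interface and class names 
below are invented for illustration and are not Samza's Serde/SerdeFactory API):

{code:java}
import java.nio.charset.StandardCharsets;

// Illustrative stand-ins only; not Samza's actual serde interfaces.
interface LogSerde {
  byte[] toBytes(LogEntry entry);
}

// Minimal log record for the sketch.
class LogEntry {
  final long timestamp;
  final String level;
  final String loggerName;
  final String message;

  LogEntry(long timestamp, String level, String loggerName, String message) {
    this.timestamp = timestamp;
    this.level = level;
    this.loggerName = loggerName;
    this.message = message;
  }
}

// A default JSON encoding, in the spirit of the metrics snapshot serde.
class JsonLogSerde implements LogSerde {
  @Override
  public byte[] toBytes(LogEntry e) {
    // Hand-rolled JSON (no escaping) to keep the sketch dependency-free;
    // a real implementation would use a JSON library.
    String json = String.format(
        "{\"timestamp\":%d,\"level\":\"%s\",\"logger\":\"%s\",\"message\":\"%s\"}",
        e.timestamp, e.level, e.loggerName, e.message);
    return json.getBytes(StandardCharsets.UTF_8);
  }
}
{code}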



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
