Re: Review Request: FLUME-1425: Create Standalone Spooling Client

2012-08-03 Thread Patrick Wendell

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/6377/
---

(Updated Aug. 4, 2012, 6:59 a.m.)


Review request for Flume.


Description (updated)
---

This patch extends the existing avro client to support a file-based spooling 
mechanism. See the in-line documentation for precise details, but the basic 
idea is that a user can have a spool directory where files are deposited for 
ingestion into Flume. Once ingested, the files are renamed to mark them as 
completed, and the implementation guarantees at-least-once delivery semantics 
similar to those achieved within Flume itself, even across failures and 
restarts of the JVM running the code.

I feel vaguely uneasy about building this as part of the standalone avro 
client rather than as its own source. An alternative would be to build this as 
a proper source (in fact, there are some ad-hoc transaction semantics used 
here which would really be a better fit for a source). I'm interested in 
hearing feedback on that as well. The benefit of having this in the avro 
client is that you don't need the Flume runner scripts, which are not Windows 
compatible.
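For readers skimming the patch, here is a minimal sketch of the spooling loop 
the description outlines. It is illustrative only - the names (COMPLETED_SUFFIX, 
send) are hypothetical stand-ins, not identifiers from the patch; the real 
logic lives in SpoolingFileLineReader.java.

{code}
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;

// Hypothetical sketch of the spooling behavior described above.
public class SpoolSketch {
  private static final String COMPLETED_SUFFIX = ".COMPLETED"; // assumed marker

  public static void drain(File spoolDir) throws IOException {
    File[] files = spoolDir.listFiles();
    if (files == null) {
      return;
    }
    for (File f : files) {
      if (f.getName().endsWith(COMPLETED_SUFFIX)) {
        continue; // already ingested on a previous pass
      }
      BufferedReader reader = new BufferedReader(new FileReader(f));
      try {
        String line;
        while ((line = reader.readLine()) != null) {
          send(line); // hand each line to the Avro client as an event
        }
      } finally {
        reader.close();
      }
      // Rename only after every line has been delivered: a crash before
      // this point re-reads the whole file, giving at-least-once delivery.
      if (!f.renameTo(new File(f.getPath() + COMPLETED_SUFFIX))) {
        throw new IOException("Unable to mark " + f + " as completed");
      }
    }
  }

  private static void send(String line) {
    System.out.println(line); // stand-in for the Avro RPC call
  }
}
{code}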


This addresses bug FLUME-1425.
https://issues.apache.org/jira/browse/FLUME-1425


Diffs
-

  flume-ng-core/src/main/java/org/apache/flume/client/avro/AvroCLIClient.java 
4a5ecae 
  
flume-ng-core/src/main/java/org/apache/flume/client/avro/BufferedLineReader.java
 PRE-CREATION 
  flume-ng-core/src/main/java/org/apache/flume/client/avro/LineReader.java 
PRE-CREATION 
  
flume-ng-core/src/main/java/org/apache/flume/client/avro/SpoolingFileLineReader.java
 PRE-CREATION 
  
flume-ng-core/src/test/java/org/apache/flume/client/avro/TestBufferedLineReader.java
 PRE-CREATION 
  
flume-ng-core/src/test/java/org/apache/flume/client/avro/TestSpoolingFileLineReader.java
 PRE-CREATION 

Diff: https://reviews.apache.org/r/6377/diff/


Testing
---

Extensive unit tests, and I also built and played with this using a stub Flume 
agent. On the JIRA I have a configuration file for an agent that will print 
Avro events to the command line - that's helpful when testing this.


Thanks,

Patrick Wendell



[jira] [Commented] (FLUME-1417) File Channel checkpoint can be bad leading to the channel being unable to start.

2012-08-03 Thread Hari Shreedharan (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-1417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428575#comment-13428575
 ] 

Hari Shreedharan commented on FLUME-1417:
-

As of now, it seems like if all events are taken from a particular file, we 
remove the fileID counter in the FlumeEventQueue. I am making this a bit more 
defensive by updating the FlumeEventQueue class to remove the fileID counter 
only when a commit is recorded successfully. 
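A sketch of that defensive bookkeeping - hypothetical names, since the real 
change is inside FlumeEventQueue and differs in detail:

{code}
import java.util.HashMap;
import java.util.Map;

// Illustrative only: decrement on take, but drop the per-file counter
// only once the take is actually committed, so a rolled-back take can
// never remove a file's counter prematurely.
class FileIdCounters {
  private final Map<Integer, Integer> refCounts = new HashMap<Integer, Integer>();

  synchronized void onPut(int fileId) {
    Integer count = refCounts.get(fileId);
    refCounts.put(fileId, count == null ? 1 : count + 1);
  }

  synchronized void onTake(int fileId) {
    Integer count = refCounts.get(fileId);
    if (count != null) {
      refCounts.put(fileId, count - 1); // still tracked until commit
    }
  }

  synchronized void onCommit(int fileId) {
    Integer count = refCounts.get(fileId);
    if (count != null && count <= 0) {
      refCounts.remove(fileId); // removal happens only on commit
    }
  }
}
{code}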

> File Channel checkpoint can be bad leading to the channel being unable to 
> start.
> 
>
> Key: FLUME-1417
> URL: https://issues.apache.org/jira/browse/FLUME-1417
> Project: Flume
>  Issue Type: Bug
>Reporter: Hari Shreedharan
>Assignee: Hari Shreedharan
>
>  ERROR file.Log: Failed to initialize Log on [channel=file-channel]
> java.lang.NullPointerException
>   at org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:739)
>   at org.apache.flume.channel.file.Log.replay(Log.java:261)
>   at org.apache.flume.channel.file.FileChannel.start(FileChannel.java:228)
>   at 
> org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:228)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>   at 
> java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:619)
> ERROR file.FileChannel: Failed to start the file channel 
> [channel=file-channel]
> java.lang.NullPointerException
>   at org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:739)
>   at org.apache.flume.channel.file.Log.replay(Log.java:261)
>   at org.apache.flume.channel.file.FileChannel.start(FileChannel.java:228)
>   at 
> org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:228)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>   at 
> java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:619)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Review Request: FLUME-1425: Create Standalone Spooling Client

2012-08-03 Thread Patrick Wendell

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/6377/
---

Review request for Flume.


Description
---

This patch extends the existing avro client to support a file-based spooling 
mechanism. See the in-line documentation for precise details, but the basic 
idea is that a user can have a spool directory where files are deposited for 
ingestion into Flume. Once ingested, the files are renamed to mark them as 
completed, and the implementation guarantees at-least-once delivery semantics 
similar to those achieved within Flume itself, even across failures and 
restarts of the JVM running the code.

I feel vaguely uneasy about building this as part of the standalone avro 
client rather than as its own source. An alternative would be to build this as 
a proper source (in fact, there are some ad-hoc transaction semantics used 
here which would really be a better fit for a source). I'm interested in 
hearing feedback on that as well. The benefit of having this in the avro 
client is that you don't need the Flume runner scripts, which are not Windows 
compatible.


This addresses bug FLUME-1425.
https://issues.apache.org/jira/browse/FLUME-1425


Diffs
-

  flume-ng-core/src/main/java/org/apache/flume/client/avro/AvroCLIClient.java 
4a5ecae 
  
flume-ng-core/src/main/java/org/apache/flume/client/avro/BufferedLineReader.java
 PRE-CREATION 
  flume-ng-core/src/main/java/org/apache/flume/client/avro/LineReader.java 
PRE-CREATION 
  
flume-ng-core/src/main/java/org/apache/flume/client/avro/SpoolingFileLineReader.java
 PRE-CREATION 
  
flume-ng-core/src/test/java/org/apache/flume/client/avro/TestBufferedLineReader.java
 PRE-CREATION 
  
flume-ng-core/src/test/java/org/apache/flume/client/avro/TestSpoolingFileLineReader.java
 PRE-CREATION 

Diff: https://reviews.apache.org/r/6377/diff/


Testing
---

Extensive unit tests, and I also built and played with this using a stub Flume 
agent. On the JIRA I have a configuration file for an agent that will print 
Avro events to the command line - that's helpful when testing this.


Thanks,

Patrick Wendell



[jira] [Updated] (FLUME-1425) Create Standalone Spooling Client

2012-08-03 Thread Patrick Wendell (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-1425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrick Wendell updated FLUME-1425:
---

Attachment: FLUME-1425.avro-conf-file.txt
FLUME-1425.patch.v1.txt

This is an implementation of a spooling client which extends the existing avro 
client. See Review Board for more color.

> Create Standalone Spooling Client
> -
>
> Key: FLUME-1425
> URL: https://issues.apache.org/jira/browse/FLUME-1425
> Project: Flume
>  Issue Type: Improvement
>Reporter: Patrick Wendell
>Assignee: Patrick Wendell
> Attachments: FLUME-1425.avro-conf-file.txt, FLUME-1425.patch.v1.txt
>
>
> The proposal is to create a small executable client which reads logs from a 
> spooling directory and sends them to a flume sink, then performs cleanup on 
> the directory (either by deleting or moving the logs). It would make the 
> following assumptions:
> - Files placed in the directory are uniquely named
> - Files placed in the directory are immutable
> The problem this is trying to solve is that there is currently no way to do 
> guaranteed event delivery across flume agent restarts when the data is being 
> collected through an asynchronous source (and not directly from the client 
> API). Say, for instance, you are using an exec("tail -F") source. If the agent 
> restarts due to error or intentionally, tail may pick up at a new location 
> and you lose the intermediate data.
> At the same time, there are users who want at-least-once semantics, and 
> expect those to apply as soon as the data is written to disk from the initial 
> logger process (e.g. apache logs), not just once it has reached a flume 
> agent. This idea would bridge that gap, assuming the user is able to copy 
> immutable logs to a spooling directory through a cron script or something.
> The basic internal logic of such a client would be as follows:
> - Scan the directory for files
> - Choose a file and read it through, while sending events to an agent
> - Close the file and delete it (or rename, or otherwise mark completed)
> That's about it. We could add sync-points to make recovery more efficient in 
> the case of failure.
> A key question is whether this should be implemented as a standalone client 
> or as a source. My instinct is actually to do this as a source, but there 
> could be some benefit to not requiring an entire agent in order to run this, 
> specifically that it would become platform independent and you could stick it 
> on Windows machines. Others I have talked to have also sided with a standalone 
> executable.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (FLUME-1424) File Channel should support encryption

2012-08-03 Thread Ralph Goers (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-1424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428543#comment-13428543
 ] 

Ralph Goers commented on FLUME-1424:


Aren't the put records the only ones with data that needs encrypting?

> File Channel should support encryption
> --
>
> Key: FLUME-1424
> URL: https://issues.apache.org/jira/browse/FLUME-1424
> Project: Flume
>  Issue Type: Bug
>Reporter: Arvind Prabhakar
>Assignee: Arvind Prabhakar
>
> When persisting the data to disk, the File Channel should allow some form of 
> encryption to ensure safety of data.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (FLUME-1227) Introduce some sort of SpillableChannel

2012-08-03 Thread Denny Ye (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-1227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428531#comment-13428531
 ] 

Denny Ye commented on FLUME-1227:
-

That's great and useful when Flume cannot reach HDFS or another destination. 
It's also the same concept as Scribe's 'primary store' and 'secondary store'. 
Looking forward to any implementation.

> Introduce some sort of SpillableChannel
> ---
>
> Key: FLUME-1227
> URL: https://issues.apache.org/jira/browse/FLUME-1227
> Project: Flume
>  Issue Type: New Feature
>  Components: Channel
>Reporter: Jarek Jarcec Cecho
>Assignee: Jarek Jarcec Cecho
>
> I would like to introduce a new channel that would behave similarly to 
> scribe (https://github.com/facebook/scribe). It would be something between 
> the memory and file channels. Input events would be saved directly to memory 
> (only) and would be served from there. In case the memory fills up, we would 
> spill the events to a file.
> Let me describe the use case behind this request. We have plenty of frontend 
> servers that are generating events. We want to send all events to just a 
> limited number of machines from where we would send the data to HDFS (some 
> sort of staging layer). The reason for this second layer is our need to 
> decouple event aggregation and front end code onto separate machines. Using 
> a memory channel is fully sufficient, as we can survive the loss of some 
> portion of the events. However, in order to sustain maintenance windows or 
> networking issues, we would end up with a lot of memory assigned to those 
> "staging" machines. The referenced "scribe" deals with this problem by 
> implementing the following logic - events are saved in memory, similarly to 
> our MemoryChannel. However, in case the memory gets full (because of 
> maintenance, networking issues, ...), it will spill data to disk, where it 
> will sit until everything starts working again.
> I would like to introduce a channel that would implement similar logic. Its 
> durability guarantees would be the same as MemoryChannel's - in case someone 
> pulls the power cord, this channel would lose data. Based on the discussion 
> in FLUME-1201, I would propose to keep the implementation completely 
> independent of any other channel's internal code.
> Jarcec

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (FLUME-1424) File Channel should support encryption

2012-08-03 Thread Arvind Prabhakar (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-1424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428457#comment-13428457
 ] 

Arvind Prabhakar commented on FLUME-1424:
-

@Ralph - this is definitely one way to address this requirement. The advantage 
(and perhaps a disadvantage at the same time) of this approach is that it will 
only incorporate encryption for the put records. 

Another way to do this is to implement encryption at the LogFile.Writer/Reader 
level where the byte buffers are serialized between transaction boundaries. 
This approach will have a higher performance penalty but would encrypt every 
file channel record regardless of type.
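As a rough illustration of the second approach - not code from Flume - 
encrypting the serialized record bytes just before they reach the log file 
could use the standard javax.crypto API:

{code}
import java.security.GeneralSecurityException;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Sketch: encrypt every serialized record regardless of record type,
// mirroring the LogFile.Writer/Reader-level idea described above.
public class RecordCrypto {
  private final SecretKey key;

  public RecordCrypto(SecretKey key) {
    this.key = key;
  }

  public byte[] encrypt(byte[] serializedRecord) throws GeneralSecurityException {
    Cipher cipher = Cipher.getInstance("AES"); // cipher choice is arbitrary here
    cipher.init(Cipher.ENCRYPT_MODE, key);
    return cipher.doFinal(serializedRecord);   // these bytes go to the log file
  }

  public byte[] decrypt(byte[] onDiskRecord) throws GeneralSecurityException {
    Cipher cipher = Cipher.getInstance("AES");
    cipher.init(Cipher.DECRYPT_MODE, key);
    return cipher.doFinal(onDiskRecord);       // read path in the reader
  }

  public static void main(String[] args) throws GeneralSecurityException {
    SecretKey k = KeyGenerator.getInstance("AES").generateKey();
    RecordCrypto rc = new RecordCrypto(k);
    byte[] roundTrip = rc.decrypt(rc.encrypt("record-bytes".getBytes()));
    System.out.println(new String(roundTrip));
  }
}
{code}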


> File Channel should support encryption
> --
>
> Key: FLUME-1424
> URL: https://issues.apache.org/jira/browse/FLUME-1424
> Project: Flume
>  Issue Type: Bug
>Reporter: Arvind Prabhakar
>Assignee: Arvind Prabhakar
>
> When persisting the data to disk, the File Channel should allow some form of 
> encryption to ensure safety of data.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (FLUME-1422) Fix "BarSource" Class Signature in Flume Developer Guide

2012-08-03 Thread Jarek Jarcec Cecho (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-1422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jarek Jarcec Cecho resolved FLUME-1422.
---

   Resolution: Fixed
Fix Version/s: v1.3.0

> Fix "BarSource" Class Signature in Flume Developer Guide
> 
>
> Key: FLUME-1422
> URL: https://issues.apache.org/jira/browse/FLUME-1422
> Project: Flume
>  Issue Type: Bug
>Reporter: Patrick Wendell
>Assignee: Patrick Wendell
> Fix For: v1.3.0
>
> Attachments: FLUME-1422.patch.v1.txt
>
>
> The class signature should be:
> public class BarSource extends AbstractSource implements Configurable, 
> PollableSource
> (where PollableSource is org.apache.flume.PollableSource)
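For reference, the corrected signature filled out into a compilable skeleton 
(the method bodies are placeholders, not the developer guide's example code):

{code}
import org.apache.flume.Context;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.PollableSource;
import org.apache.flume.conf.Configurable;
import org.apache.flume.source.AbstractSource;

public class BarSource extends AbstractSource implements Configurable, PollableSource {

  @Override
  public void configure(Context context) {
    // read this source's properties from the agent configuration
  }

  @Override
  public Status process() throws EventDeliveryException {
    // poll the external system, wrap the data in Events, put to the channel
    return Status.READY;
  }
}
{code}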

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (FLUME-1422) Fix "BarSource" Class Signature in Flume Developer Guide

2012-08-03 Thread Jarek Jarcec Cecho (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-1422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428447#comment-13428447
 ] 

Jarek Jarcec Cecho commented on FLUME-1422:
---

Commit be6c13bdd3384d04001b234b2bb5d1e1e28aec52

Thank you for your contribution, Patrick!

Jarcec

> Fix "BarSource" Class Signature in Flume Developer Guide
> 
>
> Key: FLUME-1422
> URL: https://issues.apache.org/jira/browse/FLUME-1422
> Project: Flume
>  Issue Type: Bug
>Reporter: Patrick Wendell
>Assignee: Patrick Wendell
> Attachments: FLUME-1422.patch.v1.txt
>
>
> The class signature should be:
> public class BarSource extends AbstractSource implements Configurable, 
> PollableSource
> (where PollableSource is org.apache.flume.PollableSource)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (FLUME-1422) Fix "BarSource" Class Signature in Flume Developer Guide

2012-08-03 Thread Jarek Jarcec Cecho (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-1422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428444#comment-13428444
 ] 

Jarek Jarcec Cecho commented on FLUME-1422:
---

I'm bypassing Review Board as this is a simple and straightforward change, and 
giving +1 here.

Jarcec

> Fix "BarSource" Class Signature in Flume Developer Guide
> 
>
> Key: FLUME-1422
> URL: https://issues.apache.org/jira/browse/FLUME-1422
> Project: Flume
>  Issue Type: Bug
>Reporter: Patrick Wendell
>Assignee: Patrick Wendell
> Attachments: FLUME-1422.patch.v1.txt
>
>
> The class signature should be:
> public class BarSource extends AbstractSource implements Configurable, 
> PollableSource
> (where PollableSource is org.apache.flume.PollableSource)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (FLUME-1417) File Channel checkpoint can be bad leading to the channel being unable to start.

2012-08-03 Thread Hari Shreedharan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-1417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Shreedharan reassigned FLUME-1417:
---

Assignee: Hari Shreedharan

> File Channel checkpoint can be bad leading to the channel being unable to 
> start.
> 
>
> Key: FLUME-1417
> URL: https://issues.apache.org/jira/browse/FLUME-1417
> Project: Flume
>  Issue Type: Bug
>Reporter: Hari Shreedharan
>Assignee: Hari Shreedharan
>
>  ERROR file.Log: Failed to initialize Log on [channel=file-channel]
> java.lang.NullPointerException
>   at org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:739)
>   at org.apache.flume.channel.file.Log.replay(Log.java:261)
>   at org.apache.flume.channel.file.FileChannel.start(FileChannel.java:228)
>   at 
> org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:228)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>   at 
> java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:619)
> ERROR file.FileChannel: Failed to start the file channel 
> [channel=file-channel]
> java.lang.NullPointerException
>   at org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:739)
>   at org.apache.flume.channel.file.Log.replay(Log.java:261)
>   at org.apache.flume.channel.file.FileChannel.start(FileChannel.java:228)
>   at 
> org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:228)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>   at 
> java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:619)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (FLUME-1417) File Channel checkpoint can be bad leading to the channel being unable to start.

2012-08-03 Thread Hari Shreedharan (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-1417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428445#comment-13428445
 ] 

Hari Shreedharan commented on FLUME-1417:
-

That will definitely help. Either way, I will submit a patch for this one soon.

> File Channel checkpoint can be bad leading to the channel being unable to 
> start.
> 
>
> Key: FLUME-1417
> URL: https://issues.apache.org/jira/browse/FLUME-1417
> Project: Flume
>  Issue Type: Bug
>Reporter: Hari Shreedharan
>Assignee: Hari Shreedharan
>
>  ERROR file.Log: Failed to initialize Log on [channel=file-channel]
> java.lang.NullPointerException
>   at org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:739)
>   at org.apache.flume.channel.file.Log.replay(Log.java:261)
>   at org.apache.flume.channel.file.FileChannel.start(FileChannel.java:228)
>   at 
> org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:228)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>   at 
> java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:619)
> ERROR file.FileChannel: Failed to start the file channel 
> [channel=file-channel]
> java.lang.NullPointerException
>   at org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:739)
>   at org.apache.flume.channel.file.Log.replay(Log.java:261)
>   at org.apache.flume.channel.file.FileChannel.start(FileChannel.java:228)
>   at 
> org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:228)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>   at 
> java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:619)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (FLUME-1417) File Channel checkpoint can be bad leading to the channel being unable to start.

2012-08-03 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-1417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428442#comment-13428442
 ] 

Brock Noland commented on FLUME-1417:
-

OK that makes sense. FLUME-1426 is for speeding up log replay without a 
checkpoint.

> File Channel checkpoint can be bad leading to the channel being unable to 
> start.
> 
>
> Key: FLUME-1417
> URL: https://issues.apache.org/jira/browse/FLUME-1417
> Project: Flume
>  Issue Type: Bug
>Reporter: Hari Shreedharan
>
>  ERROR file.Log: Failed to initialize Log on [channel=file-channel]
> java.lang.NullPointerException
>   at org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:739)
>   at org.apache.flume.channel.file.Log.replay(Log.java:261)
>   at org.apache.flume.channel.file.FileChannel.start(FileChannel.java:228)
>   at 
> org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:228)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>   at 
> java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:619)
> ERROR file.FileChannel: Failed to start the file channel 
> [channel=file-channel]
> java.lang.NullPointerException
>   at org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:739)
>   at org.apache.flume.channel.file.Log.replay(Log.java:261)
>   at org.apache.flume.channel.file.FileChannel.start(FileChannel.java:228)
>   at 
> org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:228)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>   at 
> java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:619)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (FLUME-1417) File Channel checkpoint can be bad leading to the channel being unable to start.

2012-08-03 Thread Hari Shreedharan (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-1417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428433#comment-13428433
 ] 

Hari Shreedharan commented on FLUME-1417:
-

Brock, 

Unfortunately I don't have the log - but here is a brief explanation of the 
issue:

A) Checkpoint 1 occurs - files 1, 2 and 3 are marked active in the checkpoint.
B) File 1 now has all events taken and committed and its refcount down to 0, 
which means it is no longer relevant and is removed from the queue.
C) The worker removes file 1 due to (B).
D) The system shuts down - the user kills it, the machine dies, anything - 
Flume is shut down.
E) Flume is restarted and the file channel tries to replay from the 
checkpoint, but can't find file 1 because the worker deleted it - it throws an 
exception and keeps the channel closed. The channel is unable to start and 
events are blocked at the previous hop.

The workaround is to delete the checkpoint, after which all events from all 
logs are replayed - but this takes very long and is not always an option.

My proposal is that the worker must not delete files unless a checkpoint has 
happened. 
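A sketch of that proposal, with hypothetical names (the actual fix touches the 
Log worker and FlumeEventQueue):

{code}
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Illustrative only: queue inactive log files and physically delete them
// after the next checkpoint is written, so a replay from the checkpoint
// never references a file that has already been removed.
class LogFileReaper {
  private final List<File> pendingDelete = new ArrayList<File>();

  // Called when a file's refcount drops to zero (step B above).
  synchronized void markInactive(File logFile) {
    pendingDelete.add(logFile); // do NOT delete yet
  }

  // Called only after a checkpoint has been recorded successfully.
  synchronized void onCheckpointWritten() {
    for (File f : pendingDelete) {
      f.delete(); // safe: the new checkpoint no longer references f
    }
    pendingDelete.clear();
  }
}
{code}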

> File Channel checkpoint can be bad leading to the channel being unable to 
> start.
> 
>
> Key: FLUME-1417
> URL: https://issues.apache.org/jira/browse/FLUME-1417
> Project: Flume
>  Issue Type: Bug
>Reporter: Hari Shreedharan
>
>  ERROR file.Log: Failed to initialize Log on [channel=file-channel]
> java.lang.NullPointerException
>   at org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:739)
>   at org.apache.flume.channel.file.Log.replay(Log.java:261)
>   at org.apache.flume.channel.file.FileChannel.start(FileChannel.java:228)
>   at 
> org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:228)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>   at 
> java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:619)
> ERROR file.FileChannel: Failed to start the file channel 
> [channel=file-channel]
> java.lang.NullPointerException
>   at org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:739)
>   at org.apache.flume.channel.file.Log.replay(Log.java:261)
>   at org.apache.flume.channel.file.FileChannel.start(FileChannel.java:228)
>   at 
> org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:228)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>   at 
> java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:619)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (FLUME-1424) File Channel should support encryption

2012-08-03 Thread Ralph Goers (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-1424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428406#comment-13428406
 ] 

Ralph Goers commented on FLUME-1424:


In looking at the FileChannel, it has a class named FlumeEvent. Could this be 
handled by making the FlumeEvent implementation pluggable (i.e. use a factory)? 
An EncryptedFlumeEvent could then perform the encryption/decryption as needed.

> File Channel should support encryption
> --
>
> Key: FLUME-1424
> URL: https://issues.apache.org/jira/browse/FLUME-1424
> Project: Flume
>  Issue Type: Bug
>Reporter: Arvind Prabhakar
>Assignee: Arvind Prabhakar
>
> When persisting the data to disk, the File Channel should allow some form of 
> encryption to ensure safety of data.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (FLUME-1425) Create Standalone Spooling Client

2012-08-03 Thread Patrick Wendell (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-1425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428389#comment-13428389
 ] 

Patrick Wendell commented on FLUME-1425:


One story would be to add this as an option to the AvroCLIClient. Right now you 
can either read from stdin or a single file; a natural extension would be to 
watch a directory and read from files dropped into that directory. 

> Create Standalone Spooling Client
> -
>
> Key: FLUME-1425
> URL: https://issues.apache.org/jira/browse/FLUME-1425
> Project: Flume
>  Issue Type: Improvement
>Reporter: Patrick Wendell
>Assignee: Patrick Wendell
>
> The proposal is to create a small executable client which reads logs from a 
> spooling directory and sends them to a flume sink, then performs cleanup on 
> the directory (either by deleting or moving the logs). It would make the 
> following assumptions:
> - Files placed in the directory are uniquely named
> - Files placed in the directory are immutable
> The problem this is trying to solve is that there is currently no way to do 
> guaranteed event delivery across flume agent restarts when the data is being 
> collected through an asynchronous source (and not directly from the client 
> API). Say, for instance, you are using an exec("tail -F") source. If the agent 
> restarts due to error or intentionally, tail may pick up at a new location 
> and you lose the intermediate data.
> At the same time, there are users who want at-least-once semantics, and 
> expect those to apply as soon as the data is written to disk from the initial 
> logger process (e.g. apache logs), not just once it has reached a flume 
> agent. This idea would bridge that gap, assuming the user is able to copy 
> immutable logs to a spooling directory through a cron script or something.
> The basic internal logic of such a client would be as follows:
> - Scan the directory for files
> - Choose a file and read it through, while sending events to an agent
> - Close the file and delete it (or rename, or otherwise mark completed)
> That's about it. We could add sync-points to make recovery more efficient in 
> the case of failure.
> A key question is whether this should be implemented as a standalone client 
> or as a source. My instinct is actually to do this as a source, but there 
> could be some benefit to not requiring an entire agent in order to run this, 
> specifically that it would become platform independent and you could stick it 
> on Windows machines. Others I have talked to have also sided with a standalone 
> executable.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (FLUME-1427) Syslog Facility calculation is wrong

2012-08-03 Thread Patrick Wendell (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-1427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428364#comment-13428364
 ] 

Patrick Wendell commented on FLUME-1427:


Hey Bhaskar - thanks for the contribution! 

As a heads up, there is a common naming scheme for JIRA patches:

e.g. FLUME-1427.patch.v1

Just a note, hopefully the first of many contributions :)

> Syslog Facility calculation is wrong
> 
>
> Key: FLUME-1427
> URL: https://issues.apache.org/jira/browse/FLUME-1427
> Project: Flume
>  Issue Type: Bug
>  Components: Sinks+Sources
>Affects Versions: v1.2.0, v1.3.0
>Reporter: Bhaskar Karambelkar
>  Labels: syslog
> Fix For: v1.2.0, v1.3.0
>
> Attachments: syslog.patch
>
>
> As per the Syslog RFC, priority = (facility * 8) + severity. Given this 
> logic, the code to calculate facility and severity from priority should be:
> severity = priority % 8 ;
> facility = (priority - severity) / 8 ;
> But in SyslogUtils's buildEvent method
> facility = priority - severity
> i.e. the / 8 is missing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (FLUME-1427) Syslog Facility calculation is wrong

2012-08-03 Thread Bhaskar Karambelkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-1427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bhaskar Karambelkar updated FLUME-1427:
---

Attachment: syslog.patch

Patch to correctly calculate facility.

> Syslog Facility calculation is wrong
> 
>
> Key: FLUME-1427
> URL: https://issues.apache.org/jira/browse/FLUME-1427
> Project: Flume
>  Issue Type: Bug
>  Components: Sinks+Sources
>Affects Versions: v1.2.0, v1.3.0
>Reporter: Bhaskar Karambelkar
>  Labels: syslog
> Fix For: v1.2.0, v1.3.0
>
> Attachments: syslog.patch
>
>
> As per the Syslog RFC, priority = (facility * 8) + severity. Given this 
> logic, the code to calculate facility and severity from priority should be:
> severity = priority % 8 ;
> facility = (priority - severity) / 8 ;
> But in SyslogUtils's buildEvent method
> facility = priority - severity
> i.e. the / 8 is missing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (FLUME-1427) Syslog Facility calculation is wrong

2012-08-03 Thread Bhaskar Karambelkar (JIRA)
Bhaskar Karambelkar created FLUME-1427:
--

 Summary: Syslog Facility calculation is wrong
 Key: FLUME-1427
 URL: https://issues.apache.org/jira/browse/FLUME-1427
 Project: Flume
  Issue Type: Bug
  Components: Sinks+Sources
Affects Versions: v1.2.0, v1.3.0
Reporter: Bhaskar Karambelkar


As per the Syslog RFC, priority = (facility * 8) + severity. Given this logic, 
the code to calculate facility and severity from priority should be:

severity = priority % 8 ;
facility = (priority - severity) / 8 ;

But in SyslogUtils's buildEvent method
facility = priority - severity

i.e. the / 8 is missing.
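In code form, the corrected math is simply (standalone illustration, not the 
patch itself):

{code}
// Example: priority 165 decodes to facility 20 (local4), severity 5 (notice).
public class SyslogPriority {
  public static void main(String[] args) {
    int priority = 165;
    int severity = priority % 8;              // 165 % 8 = 5
    int facility = (priority - severity) / 8; // 160 / 8 = 20
    System.out.println("facility=" + facility + " severity=" + severity);
  }
}
{code}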

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (FLUME-1426) FileChannel FlumeEventQueue.remove could be optimized

2012-08-03 Thread Brock Noland (JIRA)
Brock Noland created FLUME-1426:
---

 Summary: FileChannel FlumeEventQueue.remove could be optimized
 Key: FLUME-1426
 URL: https://issues.apache.org/jira/browse/FLUME-1426
 Project: Flume
  Issue Type: Bug
  Components: Channel
Affects Versions: v1.2.0
Reporter: Brock Noland


FlumeEventQueue.remove iterates from the beginning to the end of the queue. 
It's rarely used, but when it is used, it's very slow.
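One conceivable way to avoid the full scan - purely illustrative, not the 
eventual FLUME-1426 fix - is lazy removal with a tombstone set, which makes 
remove O(1) and defers the work to poll:

{code}
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Queue;
import java.util.Set;

// Removed pointers go into a tombstone set and are skipped on poll,
// instead of scanning the queue from beginning to end on every remove.
class LazyRemoveQueue {
  private final Queue<Long> queue = new ArrayDeque<Long>();
  private final Set<Long> removed = new HashSet<Long>();

  void add(long pointer) {
    queue.add(pointer);
  }

  void remove(long pointer) {
    removed.add(pointer); // O(1)
  }

  Long poll() {
    Long head;
    while ((head = queue.poll()) != null) {
      if (removed.remove(head)) {
        continue; // skip entries that were logically removed
      }
      return head;
    }
    return null;
  }
}
{code}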

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (FLUME-1350) HDFS file handle not closed properly when date bucketing

2012-08-03 Thread Yongcheng Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-1350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongcheng Li updated FLUME-1350:


Attachment: HDFSEventSink.java.patch

This is my patch for FLUME-1350. I've tested it on my Hadoop system and it 
worked.

> HDFS file handle not closed properly when date bucketing 
> -
>
> Key: FLUME-1350
> URL: https://issues.apache.org/jira/browse/FLUME-1350
> Project: Flume
>  Issue Type: Bug
>  Components: Sinks+Sources
>Affects Versions: v1.1.0
>Reporter: Robert Mroczkowski
> Attachments: HDFSEventSink.java.patch
>
>
> With configuration:
> agent.sinks.hdfs-cafe-access.type = hdfs
> agent.sinks.hdfs-cafe-access.hdfs.path =  
> hdfs://nga/nga/apache/access/%y-%m-%d/
> agent.sinks.hdfs-cafe-access.hdfs.fileType = DataStream
> agent.sinks.hdfs-cafe-access.hdfs.filePrefix = cafe_access
> agent.sinks.hdfs-cafe-access.hdfs.rollInterval = 21600
> agent.sinks.hdfs-cafe-access.hdfs.rollSize = 10485760
> agent.sinks.hdfs-cafe-access.hdfs.rollCount = 0
> agent.sinks.hdfs-cafe-access.hdfs.txnEventMax = 1000
> agent.sinks.hdfs-cafe-access.hdfs.batchSize = 1000
> #agent.sinks.hdfs-cafe-access.hdfs.codeC = snappy
> agent.sinks.hdfs-cafe-access.hdfs.hdfs.maxOpenFiles = 5000
> agent.sinks.hdfs-cafe-access.channel = memo-1
> When a new directory is created, the previous file handle remains open. The 
> rollInterval setting is applied only to files in the current date bucket. 
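One way to picture the problem and a possible fix - hypothetical structure, 
not the actual HDFSEventSink or patch code: track open writers by their 
resolved bucket path, and close writers left over from older buckets when a 
new one appears.

{code}
import java.io.Closeable;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

class BucketTracker {
  private final Map<String, Closeable> openWriters = new HashMap<String, Closeable>();

  synchronized Closeable writerFor(String resolvedPath, WriterFactory factory)
      throws IOException {
    Closeable writer = openWriters.get(resolvedPath);
    if (writer == null) {
      // A new bucket (e.g. a new %y-%m-%d directory) appeared:
      // close handles left over from previous buckets first.
      for (Closeable old : openWriters.values()) {
        old.close();
      }
      openWriters.clear();
      writer = factory.create(resolvedPath);
      openWriters.put(resolvedPath, writer);
    }
    return writer;
  }

  interface WriterFactory {
    Closeable create(String path) throws IOException;
  }
}
{code}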

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: Review Request: Low throughput of FileChannel

2012-08-03 Thread Hari Shreedharan

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/6329/#review9833
---


I have to agree with the others here. Removing the force() call is not a good 
way to improve performance. To quote someone I know: "Correctness before 
performance." Removing the force kills the durability semantics of the channel, 
which is what most people use it for.

- Hari Shreedharan
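To make the durability point concrete, here is a schematic of the commit path 
in question (not the actual LogFile code): without the force() at commit time, 
acknowledged events may still sit in the OS page cache and be lost if the 
machine crashes.

{code}
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class DurableCommit {
  public static void commit(RandomAccessFile logFile, ByteBuffer record)
      throws IOException {
    FileChannel channel = logFile.getChannel();
    while (record.hasRemaining()) {
      channel.write(record);
    }
    // Flush data (not metadata) to disk before acknowledging the commit;
    // removing this trades durability for throughput.
    channel.force(false);
  }
}
{code}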


On Aug. 3, 2012, 9:39 a.m., Denny Ye wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/6329/
> ---
> 
> (Updated Aug. 3, 2012, 9:39 a.m.)
> 
> 
> Review request for Flume, Hari Shreedharan and Patrick Wendell.
> 
> 
> Description
> ---
> 
> Here is the description of the code changes:
> 1. Remove the 'FileChannel.force(false)'. Each commit from a Source will 
> invoke this 'force' method, which is too heavy when large amounts of data 
> come in. Each 'force' call consumes 50-500ms while it confirms that data is 
> stored on disk. Normally, the OS will flush data from the kernel buffer to 
> disk asynchronously with ms-level latency, so forcing on each commit 
> operation may be useless. Certainly, data loss may occur on a server crash 
> (not a process crash), but server crashes are infrequent.
> 2. Do not pre-allocate disk space. The disk doesn't need the pre-allocation.
> 3. Use 'RandomAccessFile.write()' to replace 'FileChannel.write()'. Both in 
> my test results and at the instruction level, the former is better than the 
> latter.
> 
> Here I posted three changes; additionally, I would like to use a thread-level 
> cached DirectByteBuffer to replace the on-heap ByteBuffer.allocate() (reusing 
> off-heap memory to reduce the time spent copying from heap to kernel). I will 
> test this change in the next phase.
> 
> After tuning, throughput increased from 5MB/s to 30MB/s.
> 
> 
> This addresses bug FLUME-1423.
> https://issues.apache.org/jira/browse/FLUME-1423
> 
> 
> Diffs
> -
> 
>   
> trunk/flume-ng-channels/flume-file-channel/src/main/java/org/apache/flume/channel/file/LogFile.java
>  1363210 
> 
> Diff: https://reviews.apache.org/r/6329/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Denny Ye
> 
>



[jira] [Commented] (FLUME-1227) Introduce some sort of SpillableChannel

2012-08-03 Thread Patrick Wendell (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-1227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428246#comment-13428246
 ] 

Patrick Wendell commented on FLUME-1227:


This seems like a very sensible proposal and I think it makes sense to spill to 
disk when memory is full.

We are very likely to see bursty rate behavior for various sinks (e.g. the HDFS 
sink) and it makes sense to have more elastic buffering happening in 
intermediate agent nodes. I would imagine several deployments would want to use 
this if it were included.

I almost think disk spilling should just be a feature of the existing memory 
channel. We could have an option for how much to spill to disk if memory is 
saturated - with a default of 0. If you choose some larger amount, it would use 
a bounded amount of disk space. Could go either way on this last point, but 
overall I think it is a great idea.
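A toy illustration of the put-side behavior under discussion - not an actual 
channel implementation; the capacity and file name are arbitrary choices:

{code}
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Serve events from memory; only when the in-memory queue is full,
// append them to a disk file until the backlog drains.
public class SpillSketch {
  private final BlockingQueue<byte[]> memory =
      new ArrayBlockingQueue<byte[]>(10000);
  private final File spillFile = new File("spill.dat");

  public void put(byte[] event) throws IOException {
    if (!memory.offer(event)) { // memory saturated?
      FileOutputStream out = new FileOutputStream(spillFile, true);
      try {
        out.write(event); // spill to disk instead of dropping or blocking
      } finally {
        out.close();
      }
    }
  }
}
{code}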

> Introduce some sort of SpillableChannel
> ---
>
> Key: FLUME-1227
> URL: https://issues.apache.org/jira/browse/FLUME-1227
> Project: Flume
>  Issue Type: New Feature
>  Components: Channel
>Reporter: Jarek Jarcec Cecho
>Assignee: Jarek Jarcec Cecho
>
> I would like to introduce a new channel that would behave similarly to 
> scribe (https://github.com/facebook/scribe). It would be something between 
> the memory and file channels. Input events would be saved directly to memory 
> (only) and would be served from there. In case the memory fills up, we would 
> spill the events to a file.
> Let me describe the use case behind this request. We have plenty of frontend 
> servers that are generating events. We want to send all events to just a 
> limited number of machines from where we would send the data to HDFS (some 
> sort of staging layer). The reason for this second layer is our need to 
> decouple event aggregation and front end code onto separate machines. Using 
> a memory channel is fully sufficient, as we can survive the loss of some 
> portion of the events. However, in order to sustain maintenance windows or 
> networking issues, we would end up with a lot of memory assigned to those 
> "staging" machines. The referenced "scribe" deals with this problem by 
> implementing the following logic - events are saved in memory, similarly to 
> our MemoryChannel. However, in case the memory gets full (because of 
> maintenance, networking issues, ...), it will spill data to disk, where it 
> will sit until everything starts working again.
> I would like to introduce a channel that would implement similar logic. Its 
> durability guarantees would be the same as MemoryChannel's - in case someone 
> pulls the power cord, this channel would lose data. Based on the discussion 
> in FLUME-1201, I would propose to keep the implementation completely 
> independent of any other channel's internal code.
> Jarcec

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: Review Request: Low throughput of FileChannel

2012-08-03 Thread Jarek Cecho

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/6329/#review9828
---


I have a possibly unrelated question - aren't you trying to achieve something 
like the SpillableChannel described in FLUME-1227?

- Jarek Cecho


On Aug. 3, 2012, 9:39 a.m., Denny Ye wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/6329/
> ---
> 
> (Updated Aug. 3, 2012, 9:39 a.m.)
> 
> 
> Review request for Flume, Hari Shreedharan and Patrick Wendell.
> 
> 
> Description
> ---
> 
> Here is the description of the code changes:
> 1. Remove the 'FileChannel.force(false)'. Each commit from a Source will 
> invoke this 'force' method, which is too heavy when large amounts of data 
> come in. Each 'force' call consumes 50-500ms while it confirms that data is 
> stored on disk. Normally, the OS will flush data from the kernel buffer to 
> disk asynchronously with ms-level latency, so forcing on each commit 
> operation may be useless. Certainly, data loss may occur on a server crash 
> (not a process crash), but server crashes are infrequent.
> 2. Do not pre-allocate disk space. The disk doesn't need the pre-allocation.
> 3. Use 'RandomAccessFile.write()' to replace 'FileChannel.write()'. Both in 
> my test results and at the instruction level, the former is better than the 
> latter.
> 
> Here I posted three changes; additionally, I would like to use a thread-level 
> cached DirectByteBuffer to replace the on-heap ByteBuffer.allocate() (reusing 
> off-heap memory to reduce the time spent copying from heap to kernel). I will 
> test this change in the next phase.
> 
> After tuning, throughput increased from 5MB/s to 30MB/s.
> 
> 
> This addresses bug FLUME-1423.
> https://issues.apache.org/jira/browse/FLUME-1423
> 
> 
> Diffs
> -
> 
>   
> trunk/flume-ng-channels/flume-file-channel/src/main/java/org/apache/flume/channel/file/LogFile.java
>  1363210 
> 
> Diff: https://reviews.apache.org/r/6329/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Denny Ye
> 
>



[jira] [Updated] (FLUME-1420) Exception should be thrown if we cannot instantiate an EventSerializer

2012-08-03 Thread Patrick Wendell (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-1420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrick Wendell updated FLUME-1420:
---

Attachment: FLUME-1420.patch.v2.txt

Oh I see. This one catches all four locations.

> Exception should be thrown if we cannot instantiate an EventSerializer
> -
>
> Key: FLUME-1420
> URL: https://issues.apache.org/jira/browse/FLUME-1420
> Project: Flume
>  Issue Type: Bug
>  Components: Sinks+Sources
>Affects Versions: v1.2.0
>Reporter: Brock Noland
>Assignee: Patrick Wendell
> Attachments: FLUME-1420.patch.v1.txt, FLUME-1420.patch.v2.txt
>
>
> Currently EventSerializerFactory returns null if it cannot instantiate the 
> class. Then the caller NPEs because they don't expect null. If we cannot 
> satisfy the caller, we should throw an exception.
> {noformat}
> 2012-08-02 16:38:26,489 ERROR serialization.EventSerializerFactory: Unable to 
> instantiate Builder from 
> org.apache.flume.serialization.BodyTextEventSerializer
> 2012-08-02 16:38:26,490 WARN hdfs.HDFSEventSink: HDFS IO error
> java.io.IOException: java.lang.NullPointerException
> at 
> org.apache.flume.sink.hdfs.BucketWriter.doOpen(BucketWriter.java:202)
> at 
> org.apache.flume.sink.hdfs.BucketWriter.access$000(BucketWriter.java:48)
> at 
> org.apache.flume.sink.hdfs.BucketWriter$1.run(BucketWriter.java:155)
> at 
> org.apache.flume.sink.hdfs.BucketWriter$1.run(BucketWriter.java:152)
> at 
> org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:125)
> at org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:152)
> at 
> org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:307)
> at 
> org.apache.flume.sink.hdfs.HDFSEventSink$1.call(HDFSEventSink.java:717)
> at 
> org.apache.flume.sink.hdfs.HDFSEventSink$1.call(HDFSEventSink.java:714)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.flume.sink.hdfs.HDFSDataStream.open(HDFSDataStream.java:75)
> at 
> org.apache.flume.sink.hdfs.BucketWriter.doOpen(BucketWriter.java:188)
> ... 13 more
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (FLUME-1422) Fix "BarSource" Class Signature in Flume Developer Guide

2012-08-03 Thread Patrick Wendell (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-1422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrick Wendell updated FLUME-1422:
---

Attachment: FLUME-1422.patch.v1.txt

Fixing doc code and also one typo.

> Fix "BarSource" Class Signature in Flume Developer Guide
> 
>
> Key: FLUME-1422
> URL: https://issues.apache.org/jira/browse/FLUME-1422
> Project: Flume
>  Issue Type: Bug
>Reporter: Patrick Wendell
>Assignee: Patrick Wendell
> Attachments: FLUME-1422.patch.v1.txt
>
>
> The class signature should be:
> public class BarSource extends AbstractSource implements Configurable, 
> PollableSource
> (where PollableSource is org.apache.flume.PollableSource)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (FLUME-1420) Exception should be thrown if we cannot instantiate an EventSerializer

2012-08-03 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-1420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428226#comment-13428226
 ] 

Brock Noland commented on FLUME-1420:
-

There are actually 4 places where that method can return null. Do you see this 
section:

{code}

// build the builder
EventSerializer.Builder builder;
try {
  builder = builderClass.newInstance();
} catch (InstantiationException ex) {
  logger.error("Cannot instantiate builder: " + serializerType, ex);
  return null;
} catch (IllegalAccessException ex) {
  logger.error("Cannot instantiate builder: " + serializerType, ex);
  return null;
}

return builder.build(context, out);
{code}

> Exception should be thrown if we cannot instantiate an EventSerializer
> -
>
> Key: FLUME-1420
> URL: https://issues.apache.org/jira/browse/FLUME-1420
> Project: Flume
>  Issue Type: Bug
>  Components: Sinks+Sources
>Affects Versions: v1.2.0
>Reporter: Brock Noland
>Assignee: Patrick Wendell
> Attachments: FLUME-1420.patch.v1.txt
>
>
> Currently EventSerializerFactory returns null if it cannot instantiate the 
> class. The caller then hits a NullPointerException because it does not 
> expect null. If we cannot satisfy the caller, we should throw an exception.
> {noformat}
> 2012-08-02 16:38:26,489 ERROR serialization.EventSerializerFactory: Unable to 
> instantiate Builder from 
> org.apache.flume.serialization.BodyTextEventSerializer
> 2012-08-02 16:38:26,490 WARN hdfs.HDFSEventSink: HDFS IO error
> java.io.IOException: java.lang.NullPointerException
> at 
> org.apache.flume.sink.hdfs.BucketWriter.doOpen(BucketWriter.java:202)
> at 
> org.apache.flume.sink.hdfs.BucketWriter.access$000(BucketWriter.java:48)
> at 
> org.apache.flume.sink.hdfs.BucketWriter$1.run(BucketWriter.java:155)
> at 
> org.apache.flume.sink.hdfs.BucketWriter$1.run(BucketWriter.java:152)
> at 
> org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:125)
> at org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:152)
> at 
> org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:307)
> at 
> org.apache.flume.sink.hdfs.HDFSEventSink$1.call(HDFSEventSink.java:717)
> at 
> org.apache.flume.sink.hdfs.HDFSEventSink$1.call(HDFSEventSink.java:714)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.flume.sink.hdfs.HDFSDataStream.open(HDFSDataStream.java:75)
> at 
> org.apache.flume.sink.hdfs.BucketWriter.doOpen(BucketWriter.java:188)
> ... 13 more
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (FLUME-1420) Exception should be thrown if we cannot instantiate an EventSerializer

2012-08-03 Thread Patrick Wendell (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-1420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428224#comment-13428224
 ] 

Patrick Wendell commented on FLUME-1420:


Both of the "return null" statements are removed in this diff - am I missing 
something?

> Exception should be thrown if we cannot instantiate an EventSerializer
> -
>
> Key: FLUME-1420
> URL: https://issues.apache.org/jira/browse/FLUME-1420
> Project: Flume
>  Issue Type: Bug
>  Components: Sinks+Sources
>Affects Versions: v1.2.0
>Reporter: Brock Noland
>Assignee: Patrick Wendell
> Attachments: FLUME-1420.patch.v1.txt
>
>
> Currently EventSerializerFactory returns null if it cannot instantiate the 
> class. The caller then hits a NullPointerException because it does not 
> expect null. If we cannot satisfy the caller, we should throw an exception.
> {noformat}
> 2012-08-02 16:38:26,489 ERROR serialization.EventSerializerFactory: Unable to 
> instantiate Builder from 
> org.apache.flume.serialization.BodyTextEventSerializer
> 2012-08-02 16:38:26,490 WARN hdfs.HDFSEventSink: HDFS IO error
> java.io.IOException: java.lang.NullPointerException
> at 
> org.apache.flume.sink.hdfs.BucketWriter.doOpen(BucketWriter.java:202)
> at 
> org.apache.flume.sink.hdfs.BucketWriter.access$000(BucketWriter.java:48)
> at 
> org.apache.flume.sink.hdfs.BucketWriter$1.run(BucketWriter.java:155)
> at 
> org.apache.flume.sink.hdfs.BucketWriter$1.run(BucketWriter.java:152)
> at 
> org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:125)
> at org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:152)
> at 
> org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:307)
> at 
> org.apache.flume.sink.hdfs.HDFSEventSink$1.call(HDFSEventSink.java:717)
> at 
> org.apache.flume.sink.hdfs.HDFSEventSink$1.call(HDFSEventSink.java:714)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.flume.sink.hdfs.HDFSDataStream.open(HDFSDataStream.java:75)
> at 
> org.apache.flume.sink.hdfs.BucketWriter.doOpen(BucketWriter.java:188)
> ... 13 more
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (FLUME-1420) Exception should be thrown if we cannot instantiate an EventSerializer

2012-08-03 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-1420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428220#comment-13428220
 ] 

Brock Noland commented on FLUME-1420:
-

Patch looks good, but it looks like we still return null in the "build the 
builder" section?

> Exception should be thrown if we cannot instantiate an EventSerializer
> -
>
> Key: FLUME-1420
> URL: https://issues.apache.org/jira/browse/FLUME-1420
> Project: Flume
>  Issue Type: Bug
>  Components: Sinks+Sources
>Affects Versions: v1.2.0
>Reporter: Brock Noland
>Assignee: Patrick Wendell
> Attachments: FLUME-1420.patch.v1.txt
>
>
> Currently EventSerializerFactory returns null if it cannot instantiate the 
> class. The caller then hits a NullPointerException because it does not 
> expect null. If we cannot satisfy the caller, we should throw an exception.
> {noformat}
> 2012-08-02 16:38:26,489 ERROR serialization.EventSerializerFactory: Unable to 
> instantiate Builder from 
> org.apache.flume.serialization.BodyTextEventSerializer
> 2012-08-02 16:38:26,490 WARN hdfs.HDFSEventSink: HDFS IO error
> java.io.IOException: java.lang.NullPointerException
> at 
> org.apache.flume.sink.hdfs.BucketWriter.doOpen(BucketWriter.java:202)
> at 
> org.apache.flume.sink.hdfs.BucketWriter.access$000(BucketWriter.java:48)
> at 
> org.apache.flume.sink.hdfs.BucketWriter$1.run(BucketWriter.java:155)
> at 
> org.apache.flume.sink.hdfs.BucketWriter$1.run(BucketWriter.java:152)
> at 
> org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:125)
> at org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:152)
> at 
> org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:307)
> at 
> org.apache.flume.sink.hdfs.HDFSEventSink$1.call(HDFSEventSink.java:717)
> at 
> org.apache.flume.sink.hdfs.HDFSEventSink$1.call(HDFSEventSink.java:714)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.flume.sink.hdfs.HDFSDataStream.open(HDFSDataStream.java:75)
> at 
> org.apache.flume.sink.hdfs.BucketWriter.doOpen(BucketWriter.java:188)
> ... 13 more
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (FLUME-1420) Exception should be thrown if we cannot instantiate an EventSerializer

2012-08-03 Thread Patrick Wendell (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-1420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrick Wendell updated FLUME-1420:
---

Attachment: FLUME-1420.patch.v1.txt

This patch throws a FlumeException if the serializer can't be instantiated or 
its class can't be found. This is appropriate since Flume can make no progress 
in these situations.
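
A sketch of the approach (illustrative, not the exact patch): the "build the
builder" section quoted earlier would fail fast instead of returning null.

{code}
EventSerializer.Builder builder;
try {
  builder = builderClass.newInstance();
} catch (InstantiationException ex) {
  throw new FlumeException("Cannot instantiate builder: " + serializerType, ex);
} catch (IllegalAccessException ex) {
  throw new FlumeException("Cannot instantiate builder: " + serializerType, ex);
}

return builder.build(context, out);
{code}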

> Exception should be thrown if we cannot instantiate an EventSerializer
> -
>
> Key: FLUME-1420
> URL: https://issues.apache.org/jira/browse/FLUME-1420
> Project: Flume
>  Issue Type: Bug
>  Components: Sinks+Sources
>Affects Versions: v1.2.0
>Reporter: Brock Noland
>Assignee: Patrick Wendell
> Attachments: FLUME-1420.patch.v1.txt
>
>
> Currently EventSerializerFactory returns null if it cannot instantiate the 
> class. The caller then hits a NullPointerException because it does not 
> expect null. If we cannot satisfy the caller, we should throw an exception.
> {noformat}
> 2012-08-02 16:38:26,489 ERROR serialization.EventSerializerFactory: Unable to 
> instantiate Builder from 
> org.apache.flume.serialization.BodyTextEventSerializer
> 2012-08-02 16:38:26,490 WARN hdfs.HDFSEventSink: HDFS IO error
> java.io.IOException: java.lang.NullPointerException
> at 
> org.apache.flume.sink.hdfs.BucketWriter.doOpen(BucketWriter.java:202)
> at 
> org.apache.flume.sink.hdfs.BucketWriter.access$000(BucketWriter.java:48)
> at 
> org.apache.flume.sink.hdfs.BucketWriter$1.run(BucketWriter.java:155)
> at 
> org.apache.flume.sink.hdfs.BucketWriter$1.run(BucketWriter.java:152)
> at 
> org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:125)
> at org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:152)
> at 
> org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:307)
> at 
> org.apache.flume.sink.hdfs.HDFSEventSink$1.call(HDFSEventSink.java:717)
> at 
> org.apache.flume.sink.hdfs.HDFSEventSink$1.call(HDFSEventSink.java:714)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.flume.sink.hdfs.HDFSDataStream.open(HDFSDataStream.java:75)
> at 
> org.apache.flume.sink.hdfs.BucketWriter.doOpen(BucketWriter.java:188)
> ... 13 more
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (FLUME-1425) Create Standalone Spooling Client

2012-08-03 Thread Patrick Wendell (JIRA)
Patrick Wendell created FLUME-1425:
--

 Summary: Create Standalone Spooling Client
 Key: FLUME-1425
 URL: https://issues.apache.org/jira/browse/FLUME-1425
 Project: Flume
  Issue Type: Improvement
Reporter: Patrick Wendell
Assignee: Patrick Wendell


The proposal is to create a small executable client which reads logs from a 
spooling directory and sends them to a flume sink, then performs cleanup on the 
directory (either by deleting or moving the logs). It would make the following 
assumptions:

- Files placed in the directory are uniquely named
- Files placed in the directory are immutable

The problem this is trying to solve is that there is currently no way to do 
guaranteed event delivery across flume agent restarts when the data is being 
collected through an asynchronous source (and not directly from the client 
API). Say, for instance, you are using an exec("tail -F") source. If the agent 
restarts, whether due to an error or intentionally, tail may pick up at a new 
location and you lose the intermediate data.

At the same time, there are users who want at-least-once semantics, and expect 
those to apply as soon as the data is written to disk from the initial logger 
process (e.g. apache logs), not just once it has reached a flume agent. This 
idea would bridge that gap, assuming the user is able to copy immutable logs to 
a spooling directory through a cron script or something.

The basic internal logic of such a client would be as follows:

- Scan the directory for files
- Choose a file and read through it, while sending events to an agent
- Close the file and delete it (or rename, or otherwise mark completed)

That's about it; a rough sketch of the loop follows below. We could add 
sync-points to make recovery more efficient in the case of failure.
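
As an illustration only (the class, interface, and the ".COMPLETED" suffix are
invented for this sketch, not part of the proposal):

{code}
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;

public class SpoolingClientSketch {

  /** Stand-in for whatever sends events to the flume agent. */
  public interface EventSender {
    void send(String line) throws IOException;
  }

  public static void drainOnce(File spoolDir, EventSender sender)
      throws IOException {
    File[] files = spoolDir.listFiles();            // 1. scan the directory
    if (files == null) {
      return;
    }
    for (File file : files) {
      if (file.getName().endsWith(".COMPLETED")) {
        continue;                                   // already ingested
      }
      BufferedReader reader = new BufferedReader(new FileReader(file));
      try {
        String line;
        while ((line = reader.readLine()) != null) {
          sender.send(line);                        // 2. read through, sending events
        }
      } finally {
        reader.close();
      }
      // 3. mark completed only after every line was delivered (at-least-once)
      file.renameTo(new File(file.getPath() + ".COMPLETED"));
    }
  }
}
{code}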

A key question is whether this should be implemented as a standalone client or 
as a source. My instinct is actually to do this as a source, but there could be 
some benefit to not requiring an entire agent in order to run this, 
specifically that it would become platform independent and you could stick it 
on Windows machines. Others I have talked to have also sided with a standalone 
executable.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




customizing logging messages pattern

2012-08-03 Thread JP
Thanks Hari,

As you suggested, it is working. I'm able to see the human-readable format.

The following is the output:

2UTF81344001730307com.cisco.flume.FlumeTestSample info
message
3UTF81344001730432com.cisco.flume.FlumeTestSample warn
message
4UTF81344001730463com.cisco.flume.FlumeTestSample error
message
5UTF81344001730479com.cisco.flume.FlumeTestSample fatal
message


I want the following logging pattern:

*%d \=\=\=\= thread\: %t \=\=\=\= %-8p>%n%c.%M() \=> %x (line\:
%L)%n\t%m%n%n*

*What changes are required to get this pattern to work?*

Thanks
JP
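
One way to get there, along the lines of the custom serializer Hari suggests in
the quoted reply below (an illustrative sketch only; the class name is invented,
and it simply prints each event's headers, where the log4j appender stores the
level and timestamp, followed by the raw body):

import java.io.IOException;
import java.io.OutputStream;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.serialization.EventSerializer;

public class HeaderAndBodyTextSerializer implements EventSerializer {

  private final OutputStream out;

  private HeaderAndBodyTextSerializer(OutputStream out) {
    this.out = out;
  }

  @Override public void afterCreate() throws IOException { }
  @Override public void afterReopen() throws IOException { }

  @Override
  public void write(Event event) throws IOException {
    // headers first (log level, timestamp, logger name), then the message body
    out.write(event.getHeaders().toString().getBytes("UTF-8"));
    out.write(' ');
    out.write(event.getBody());
    out.write('\n');
  }

  @Override public void flush() throws IOException { }
  @Override public void beforeClose() throws IOException { }
  @Override public boolean supportsReopen() { return true; }

  public static class Builder implements EventSerializer.Builder {
    @Override
    public EventSerializer build(Context context, OutputStream out) {
      return new HeaderAndBodyTextSerializer(out);
    }
  }
}

The sink would then reference the builder, e.g.
agent2.sinks.loggerSink.serializer = HeaderAndBodyTextSerializer$Builder
(using the fully qualified class name if the class lives in a package).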


On Mon, Jul 30, 2012 at 11:52 PM, Hari Shreedharan <
hshreedha...@cloudera.com> wrote:

>  You are using the AvroEventSerializer. This formats the event into Avro
> format specified by 
> org.apache.flume.serialization.FlumeEventAvroEventSerializer,
> which is why it looks like garbage, while it is not. Your app should be
> written to read and understand the Avro format. If you need it to be human
> readable, you will need to write your own serializer, perhaps by extending
> the BodyTextEventSerializer.
>
> Thanks
> Hari
>
> --
> Hari Shreedharan
>
> On Monday, July 30, 2012 at 9:34 AM, JP wrote:
>
> Thanks Hari,
>
> I made a little progress.
>
> But I'm getting garbage values.
>
> this is my configurations:
>
> *flume-conf.properties*
> ---
> agent2.sources = seqGenSrc
> agent2.channels = memoryChannel
> agent2.sinks = loggerSink
>
> agent2.sources.seqGenSrc.type = avro
> agent2.sources.seqGenSrc.bind=localhost
> agent2.sources.seqGenSrc.port=41414
>
> agent2.channels.memoryChannel.type = memory
> agent2.channels.memoryChannel.capacity = 100
> agent2.channels.memoryChannel.transactionCapacity = 100
> agent2.channels.memoryChannel.keep-alive = 30
>
> agent2.sources.seqGenSrc.channels = memoryChannel
>
> agent2.sinks.loggerSink.type = hdfs
> agent2.sinks.loggerSink.hdfs.path = hdfs://ip:portno/data/CspcLogs
> agent2.sinks.loggerSink.hdfs.fileType = DataStream
> agent2.sinks.loggerSink.channel = memoryChannel
> agent2.sinks.loggerSink.serializer = avro_event
> agent2.sinks.loggerSink.serializer.compressionCodec = snappy
> agent2.sinks.loggerSink.serializer.syncIntervalBytes = 2048000
> agent2.channels.memoryChannel.type = memory
>
>
> log4j.properties
>
> --
> log4j.rootLogger=INFO, CA, flume
>
> log4j.appender.CA=org.apache.log4j.ConsoleAppender
>
> log4j.appender.CA.layout=org.apache.log4j.PatternLayout
> log4j.appender.CA.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n
>
> log4j.appender.flume = org.apache.flume.clients.log4jappender.Log4jAppender
> log4j.appender.flume.Hostname = localhost
> log4j.appender.flume.Port = 41414
>
>
> and my output:
> 
> Obj avro.codec null avro.schema�
> {"type":"record","name":"Event","fields":[{"name":"headers","type":{"type":"map","values":"string"}},{"name":"body","type":"bytes"}]}�|
> ��(r5��q ��nl � 8flume.client.log4j.log.level
> 4Fflume.client.log4j.message.encoding
> UTF88flume.client.log4j.timestamp
> 1343665387977 error message| ��(r5��q ��nl � 8flume.client.log4j.log.level
> 5Fflume.client.log4j.message.encoding
> UTF88flume.client.log4j.timestamp
> 1343665387993 fatal message| ��(r5��q ��nl � 8flume.client.log4j.log.level
> 2Fflume.client.log4j.message.encoding
> UTF88flume.client.log4j.timestamp
>
>
> Please let me know if I'm on the wrong path.
>
> Please suggest how to get a custom logging pattern (for example, like in log4j).
>
>
> Thanks
> JP
>
> On Sun, Jul 29, 2012 at 10:04 AM, Hari Shreedharan <
> hshreedha...@cloudera.com> wrote:
>
>  + user@
>
> Thamatam,
>
> The Log4J appender adds the date, log level and logger name to the flume
> event headers and the text of the log event to the flume event body. The
> reason the log level and time are missing is that these are in the headers
> and the text serializer does not serialize the headers.
>
> To write to a file or HDFS, please use a Serializer together with the
> RollingFileSink or HDFSEventSink. Please take a look at the plain text
> serializer or Avro serializer to understand this better.
>
> Thanks,
> Hari
>
> --
> Hari Shreedharan
>
> On Saturday, July 28, 2012 at 5:47 PM, thamatam Jayaprakash wrote:
>
> Hi Hari,
>
>
> Actually I'm unable to send this mail to the user and dev groups, so I'm
> mailing you directly.
>
> Could you please point out where I'm going wrong.
> *Please suggest which log appender to use for a custom logging
> pattern.*
>
> I'm working on Flume 1.1.0 and 1.2.0. We are not able to set the log pattern;
> we are using the log4jappender
> log4j.appender.flume=org.apache.flume.clients.log4jappender.Log4jAppender
>
> but we are getting plain text
>
> *For example, if I log the following messages:*
>
> 17:42:55,928  INFO SimpleJdbcServlet:69 - doGet of SimpleJdbcServlet
> ended...
> 17:43:03,489  INFO HelloServlet:29 - Hello

Re: Review Request: Low throughput of FileChannel

2012-08-03 Thread Patrick Wendell

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/6329/#review9827
---


I'll echo the concern from others that this abandons the semantics of the channel.

How much of the improvement is related to change (1), i.e. not sync()'ing at 
transaction boundaries? It would be great to get the throughput improvement 
broken down by change; a sketch of one way to measure that follows.
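
For example, a standalone probe along these lines would isolate the per-commit
fsync cost (an illustrative sketch, not part of the patch; the buffer size and
commit count are arbitrary):

import java.io.File;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class ForceCostProbe {
  public static void main(String[] args) throws Exception {
    File f = File.createTempFile("force-probe", ".dat");
    f.deleteOnExit();
    RandomAccessFile raf = new RandomAccessFile(f, "rw");
    FileChannel channel = raf.getChannel();
    ByteBuffer buf = ByteBuffer.wrap(new byte[4096]);
    int commits = 100;
    long start = System.nanoTime();
    for (int i = 0; i < commits; i++) {
      buf.rewind();
      channel.write(buf);
      channel.force(false); // the per-commit fsync that change (1) removes
    }
    long avgMicros = (System.nanoTime() - start) / commits / 1000;
    System.out.println("avg write+force latency: " + avgMicros + " us");
    raf.close();
  }
}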

- Patrick Wendell


On Aug. 3, 2012, 9:39 a.m., Denny Ye wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/6329/
> ---
> 
> (Updated Aug. 3, 2012, 9:39 a.m.)
> 
> 
> Review request for Flume, Hari Shreedharan and Patrick Wendell.
> 
> 
> Description
> ---
> 
> Here is the description of the code changes:
> 1. Remove 'FileChannel.force(false)'. Each commit from a Source invokes this 
> 'force' method, which is too heavy when large amounts of data arrive: each 
> 'force' call takes 50-500ms to confirm that the data is stored on disk. 
> Normally the OS flushes data from the kernel buffer to disk asynchronously 
> with millisecond-level latency, so forcing on every commit may be useless. 
> Certainly, data loss may occur on a server crash (though not a process 
> crash), but server crashes are infrequent.
> 2. Do not pre-allocate disk space. The disk doesn't need the pre-allocation.
> 3. Use 'RandomAccessFile.write()' to replace 'FileChannel.write()'. Both in 
> my test results and at the instruction level, the former is better than the 
> latter.
> 
> Beyond these three changes, I would like to use a thread-level cached 
> DirectByteBuffer to replace the in-heap ByteBuffer.allocate() (reusing 
> off-heap memory to reduce the time spent copying from heap to kernel). I 
> will test this change in the next phase.
> 
> After tuning, throughput increased from 5MB/s to 30MB/s.
> 
> 
> This addresses bug FLUME-1423.
> https://issues.apache.org/jira/browse/FLUME-1423
> 
> 
> Diffs
> -
> 
>   
> trunk/flume-ng-channels/flume-file-channel/src/main/java/org/apache/flume/channel/file/LogFile.java
>  1363210 
> 
> Diff: https://reviews.apache.org/r/6329/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Denny Ye
> 
>



Re: Sending avro data from other languages

2012-08-03 Thread Arvind Prabhakar
Another alternative to consider for cross-platform/language support would
be protocol buffers. That has relatively better tooling and integration
than other similar systems and is used by other projects as well.

Regards,
Arvind Prabhakar

On Thu, Aug 2, 2012 at 7:01 AM, Brock Noland  wrote:

> I cannot answer what made us move to Avro. However, I prefer Avro because
> you don't have to build the thrift compiler and you aren't required to do
> code generation.
>
> On Wed, Aug 1, 2012 at 11:06 PM, Juhani Connolly <
> juhani_conno...@cyberagent.co.jp> wrote:
>
> > It looks to me like this was because of the transceiver I was using.
> >
> > Unfortunately it seems like avro doesn't have a python implementation of a
> > transceiver that fits the format expected by netty/avro (in fact it only
> > has one transceiver... HTTPTransceiver).
> >
> > To address this, I'm thinking of putting together a thrift source (the
> > legacy source doesn't seem to be usable, as it returns nothing and lacks
> > batching). Does this seem like a reasonable solution to making it possible
> > to send data to flume from other languages (and allowing backoff on
> > failure)? Historically, what made us move away from thrift to avro?
> >
> >
> > On 07/30/2012 05:34 PM, Juhani Connolly wrote:
> >
> >> I'm playing around with making a standalone tail client in python (so
> >> that I can access inode data) that tracks position in a file and then
> >> sends it across avro to an avro sink.
> >>
> >> However I'm having issues with the avro part of this and wondering if
> >> anyone more familiar with it could help.
> >>
> >> I took the flume.avdl file and converted it using "java -jar
> >> ~/Downloads/avro-tools-1.6.3.jar idl flume.avdl flume.avpr"
> >>
> >> I then ran it through a simple test program to see if it's sending the
> >> data correctly. It sends from the python client fine, but the sink end
> >> OOMs, presumably because the wire format is wrong:
> >>
> >> 2012-07-30 17:22:57,565 INFO ipc.NettyServer: [id: 0x5fc6e818, /
> >> 172.22.114.32:55671 => /172.28.19.112:41414] OPEN
> >> 2012-07-30 17:22:57,565 INFO ipc.NettyServer: [id: 0x5fc6e818, /
> >> 172.22.114.32:55671 => /172.28.19.112:41414] BOUND: /172.28.19.112:41414
> >> 2012-07-30 17:22:57,565 INFO ipc.NettyServer: [id: 0x5fc6e818, /
> >> 172.22.114.32:55671 => /172.28.19.112:41414] CONNECTED: /172.22.114.32:55671
> >> 2012-07-30 17:22:57,646 WARN ipc.NettyServer: Unexpected exception from
> >> downstream.
> >> java.lang.OutOfMemoryError: Java heap space
> >> at java.util.ArrayList.<init>(ArrayList.java:112)
> >> at org.apache.avro.ipc.NettyTransportCodec$NettyFrameDecoder.decodePackHeader(NettyTransportCodec.java:154)
> >> at org.apache.avro.ipc.NettyTransportCodec$NettyFrameDecoder.decode(NettyTransportCodec.java:131)
> >> at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:282)
> >> at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:216)
> >> at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:274)
> >> at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:261)
> >> at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:351)
> >> at org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:282)
> >> at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:202)
> >> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> >> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> >> at java.lang.Thread.run(Thread.java:619)
> >> 2012-07-30 17:22:57,647 INFO ipc.NettyServer: [id: 0x5fc6e818, /
> >> 172.22.114.32:55671 :> /172.28.19.112:41414] DISCONNECTED
> >> 2012-07-30 17:22:57,647 INFO ipc.NettyServer: [id: 0x5fc6e818, /
> >> 172.22.114.32:55671 :> /172.28.19.112:41414] UNBOUND
> >> 2012-07-30 17:22:57,647 INFO ipc.NettyServer: [id: 0x5fc6e818, /
> >> 172.22.114.32:55671 :> /172.28.19.112:41414] CLOSED
> >>
> >> I've dumped the test program and its output
> >>
> >> http://pastebin.com/1DtXZyTu
> >> http://pastebin.com/T9kaqKHY
> >>
> >>
> >
>
>
> --
> Apache MRUnit - Unit testing MapReduce -
> http://incubator.apache.org/mrunit/
>


[jira] [Commented] (FLUME-1382) Flume adopt message from existing local Scribe

2012-08-03 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-1382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428069#comment-13428069
 ] 

Brock Noland commented on FLUME-1382:
-

Never mind, I see the RB discussion.

> Flume adopt message from existing local Scribe
> --
>
> Key: FLUME-1382
> URL: https://issues.apache.org/jira/browse/FLUME-1382
> Project: Flume
>  Issue Type: Improvement
>  Components: Sinks+Sources
>Affects Versions: v1.2.0
>Reporter: Denny Ye
>Assignee: Denny Ye
>Priority: Minor
>  Labels: scribe, thrift
> Fix For: v1.3.0
>
> Attachments: FLUME-1382-doc.patch, FLUME-1382.patch
>
>
> Currently, we are using Scribe in our data ingest system. The central Scribe 
> is hard to maintain and upgrade. Thus, we would like to replace the central 
> Scribe with Flume and accept messages from the existing, numerous local 
> Scribe instances. This can be treated as a legacy component.
> We have written a ScribeSource that uses more efficient Thrift code, without 
> deserializing.  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (FLUME-1382) Flume adopt message from existing local Scribe

2012-08-03 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-1382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13428068#comment-13428068
 ] 

Brock Noland commented on FLUME-1382:
-

Hi,

One question: this means scribe "agents" (or whatever they are called in 
scribe) would be able to write to a Flume Source? 

> Flume adopt message from existing local Scribe
> --
>
> Key: FLUME-1382
> URL: https://issues.apache.org/jira/browse/FLUME-1382
> Project: Flume
>  Issue Type: Improvement
>  Components: Sinks+Sources
>Affects Versions: v1.2.0
>Reporter: Denny Ye
>Assignee: Denny Ye
>Priority: Minor
>  Labels: scribe, thrift
> Fix For: v1.3.0
>
> Attachments: FLUME-1382-doc.patch, FLUME-1382.patch
>
>
> Currently, we are using Scribe in our data ingest system. The central Scribe 
> is hard to maintain and upgrade. Thus, we would like to replace the central 
> Scribe with Flume and accept messages from the existing, numerous local 
> Scribe instances. This can be treated as a legacy component.
> We have written a ScribeSource that uses more efficient Thrift code, without 
> deserializing.  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: Sending avro data from other languages

2012-08-03 Thread Brock Noland
Yeah, I agree. FWIW, I am hoping that in a few weeks I will have a little more
spare time; I was planning on writing the Avro patches to ensure that
languages such as Python, C#, etc. can write messages to Flume.

On Fri, Aug 3, 2012 at 1:30 AM, Juhani Connolly <
juhani_conno...@cyberagent.co.jp> wrote:

> On paper it certainly seems like a good solution, it's just unfortunate
> that some "supported" languages can't actually interface with it. I
> understand that thrift can be quite a nuisance to deal with at times.
>
>
> On 08/02/2012 11:01 PM, Brock Noland wrote:
>
>> I cannot answer what made us move to Avro. However, I prefer Avro because
>> you don't have to build the thrift compiler and you aren't required to do
>> code generation.
>>
>> On Wed, Aug 1, 2012 at 11:06 PM, Juhani Connolly <
>> juhani_conno...@cyberagent.co.**jp >
>> wrote:
>>
>>  It looks to me like this was because of the transceiver I was using.
>>>
>>> Unfortunately it seems like avro doesn't have a python implementation of
>>> a
>>> transceiver that fits the format expected by netty/avro(in fact it only
>>> has
>>> one transceiver... HTTPTransceiver).
>>>
>>> To address this, I'm thinking of putting together a thrift source(the
>>> legacy source doesn't seem to be usable as it returns nothing, and lacks
>>> batching). Does this seem like a reasonable solution to making it
>>> possible
>>> to send data to flume from other languages(and allowing backoff on
>>> failure?). Historically, what made us move away from thrift to avro?
>>>
>>>
>>> On 07/30/2012 05:34 PM, Juhani Connolly wrote:
>>>
>>>> I'm playing around with making a standalone tail client in python (so
>>>> that I can access inode data) that tracks position in a file and then
>>>> sends it across avro to an avro sink.
>>>>
>>>> However I'm having issues with the avro part of this and wondering if
>>>> anyone more familiar with it could help.
>>>>
>>>> I took the flume.avdl file and converted it using "java -jar
>>>> ~/Downloads/avro-tools-1.6.3.jar idl flume.avdl flume.avpr"
>>>>
>>>> I then ran it through a simple test program to see if it's sending the
>>>> data correctly. It sends from the python client fine, but the sink end
>>>> OOMs, presumably because the wire format is wrong:
>>>>
>>>> 2012-07-30 17:22:57,565 INFO ipc.NettyServer: [id: 0x5fc6e818, /
>>>> 172.22.114.32:55671 => /172.28.19.112:41414] OPEN
>>>> 2012-07-30 17:22:57,565 INFO ipc.NettyServer: [id: 0x5fc6e818, /
>>>> 172.22.114.32:55671 => /172.28.19.112:41414] BOUND: /172.28.19.112:41414
>>>> 2012-07-30 17:22:57,565 INFO ipc.NettyServer: [id: 0x5fc6e818, /
>>>> 172.22.114.32:55671 => /172.28.19.112:41414] CONNECTED: /172.22.114.32:55671
>>>> 2012-07-30 17:22:57,646 WARN ipc.NettyServer: Unexpected exception from
>>>> downstream.
>>>> java.lang.OutOfMemoryError: Java heap space
>>>> at java.util.ArrayList.<init>(ArrayList.java:112)
>>>> at org.apache.avro.ipc.NettyTransportCodec$NettyFrameDecoder.decodePackHeader(NettyTransportCodec.java:154)
>>>> at org.apache.avro.ipc.NettyTransportCodec$NettyFrameDecoder.decode(NettyTransportCodec.java:131)
>>>> at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:282)
>>>> at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:216)
>>>> at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:274)
>>>> at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:261)
>>>> at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:351)
>>>> at org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:282)
>>>> at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:202)
>>>> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>> at java.lang.Thread.run(Thread.java:619)
>>>> 2012-07-30 17:22:57,647 INFO ipc.NettyServer: [id: 0x5fc6e818, /
>>>> 172.22.114.32:55671 :> /172.28.19.112:41414] DISCONNECTED
>>>> 2012-07-30 17:22:57,647 INFO ipc.NettyServer: [id: 0x5fc6e818, /
>>>> 172.22.114.32:55671 :> /172.28.19.112:41414] UNBOUND
>>>> 2012-07-30 17:22:57,647 INFO ipc.NettyServer: [id: 0x5fc6e818, /
>>>> 172.22.114.32:55671 :> /172.28.19.112:41414] CLOSED
>>>>
>>>> I've dumped the test program and its output
>>>>
>>>> http://pastebin.com/1DtXZyTu
>>>> http://pastebin.com/T9kaqKHY



>>
>


-- 
Apache MRUnit - Unit testing MapReduce - http://incubator.apache.org/mrunit/


Re: Review Request: Low throughput of FileChannel

2012-08-03 Thread Brock Noland

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/6329/#review9825
---


1) This would completely eliminate the guarantee FileChannel makes. It sounds 
to me like you want a disk spooling channel which does not make strict 
durability guarantees. This is not what FileChannel does today. If you want 
faster throughput, increase the batch size.
2) This is to eliminate corruption when we reach a full disk. Also, writing 
over existing data is faster than writing the first time. I think we could 
improve this: we should allocate the length but not the data itself (see the 
sketch below). This eliminates the metadata update when we write. Then we 
could just stop writing to disk when it's nearly full.
3) Have you done tests that show it's faster? Either way it should lead to a 
write system call.
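
A sketch of the "allocate the length but not the data" idea in (2), with
logFile and maxFileSize as placeholders (illustrative only, not a patch):

RandomAccessFile raf = new RandomAccessFile(logFile, "rw");
// Extends the file to its final size without writing any bytes, so later
// appends land in already-reserved space and avoid the file-length metadata
// update on every write.
raf.setLength(maxFileSize);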

- Brock Noland


On Aug. 3, 2012, 9:39 a.m., Denny Ye wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/6329/
> ---
> 
> (Updated Aug. 3, 2012, 9:39 a.m.)
> 
> 
> Review request for Flume, Hari Shreedharan and Patrick Wendell.
> 
> 
> Description
> ---
> 
> Here is the description of the code changes:
> 1. Remove 'FileChannel.force(false)'. Each commit from a Source invokes this 
> 'force' method, which is too heavy when large amounts of data arrive: each 
> 'force' call takes 50-500ms to confirm that the data is stored on disk. 
> Normally the OS flushes data from the kernel buffer to disk asynchronously 
> with millisecond-level latency, so forcing on every commit may be useless. 
> Certainly, data loss may occur on a server crash (though not a process 
> crash), but server crashes are infrequent.
> 2. Do not pre-allocate disk space. The disk doesn't need the pre-allocation.
> 3. Use 'RandomAccessFile.write()' to replace 'FileChannel.write()'. Both in 
> my test results and at the instruction level, the former is better than the 
> latter.
> 
> Beyond these three changes, I would like to use a thread-level cached 
> DirectByteBuffer to replace the in-heap ByteBuffer.allocate() (reusing 
> off-heap memory to reduce the time spent copying from heap to kernel). I 
> will test this change in the next phase.
> 
> After tuning, throughput increased from 5MB/s to 30MB/s.
> 
> 
> This addresses bug FLUME-1423.
> https://issues.apache.org/jira/browse/FLUME-1423
> 
> 
> Diffs
> -
> 
>   
> trunk/flume-ng-channels/flume-file-channel/src/main/java/org/apache/flume/channel/file/LogFile.java
>  1363210 
> 
> Diff: https://reviews.apache.org/r/6329/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Denny Ye
> 
>



Re: Review Request: Low throughput of FileChannel

2012-08-03 Thread Denny Ye


> On Aug. 3, 2012, 10:16 a.m., Mubarak Seyed wrote:
> > Thanks Denny. IMO this patch defeats the purpose of a 'durable' file 
> > channel. A reason a user might choose the file-channel over the 
> > memory-channel is to avoid data loss, and a JVM/server crash can happen 
> > for various reasons such as a JVM bug or hardware problems; it is very 
> > hard to predict the frequency of crashes. I would try 
> > ByteBuffer.allocateDirect(), but there may be a chance of OOM, as it 
> > depends on the finalizer running [1], and we need to manually clean the 
> > DirectByteBuffer using reflection.
> > 
> > [1] 
> > http://stackoverflow.com/questions/1854398/how-to-garbage-collect-a-direct-buffer-java

Hi Mubarak, if we remove the 'force' method, throughput can increase from 
5MB/s to 20MB/s, so it really impacts system performance. And if we do use the 
'force' method at each commit, can it actually guarantee 100% delivery?  


- Denny


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/6329/#review9819
---


On Aug. 3, 2012, 9:39 a.m., Denny Ye wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/6329/
> ---
> 
> (Updated Aug. 3, 2012, 9:39 a.m.)
> 
> 
> Review request for Flume, Hari Shreedharan and Patrick Wendell.
> 
> 
> Description
> ---
> 
> Here is the description of the code changes:
> 1. Remove 'FileChannel.force(false)'. Each commit from a Source invokes this 
> 'force' method, which is too heavy when large amounts of data arrive: each 
> 'force' call takes 50-500ms to confirm that the data is stored on disk. 
> Normally the OS flushes data from the kernel buffer to disk asynchronously 
> with millisecond-level latency, so forcing on every commit may be useless. 
> Certainly, data loss may occur on a server crash (though not a process 
> crash), but server crashes are infrequent.
> 2. Do not pre-allocate disk space. The disk doesn't need the pre-allocation.
> 3. Use 'RandomAccessFile.write()' to replace 'FileChannel.write()'. Both in 
> my test results and at the instruction level, the former is better than the 
> latter.
> 
> Beyond these three changes, I would like to use a thread-level cached 
> DirectByteBuffer to replace the in-heap ByteBuffer.allocate() (reusing 
> off-heap memory to reduce the time spent copying from heap to kernel). I 
> will test this change in the next phase.
> 
> After tuning, throughput increased from 5MB/s to 30MB/s.
> 
> 
> This addresses bug FLUME-1423.
> https://issues.apache.org/jira/browse/FLUME-1423
> 
> 
> Diffs
> -
> 
>   
> trunk/flume-ng-channels/flume-file-channel/src/main/java/org/apache/flume/channel/file/LogFile.java
>  1363210 
> 
> Diff: https://reviews.apache.org/r/6329/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Denny Ye
> 
>



Re: Review Request: Low throughput of FileChannel

2012-08-03 Thread Juhani Connolly


> On Aug. 3, 2012, 9:29 a.m., Mike Percy wrote:
> > Wow you have been busy, that is really great! One problem with this change 
> > however is that FileChannel is no longer guaranteed to be durable. Many 
> > users cannot accept that limitation, especially after a Flume 1.2.0 release 
> > with a durable File Channel. Why not just use MemoryChannel if you don't 
> > need crash durability?
> 
> Denny Ye wrote:
> Flume doesn't guarantee 100% delivery. The difference between using 
> 'force' and not using it comes down to a process crash versus a server 
> crash, and a process crash is far more frequent than a server crash. Even 
> if we use the 'force' method on every transaction commit, we still cannot 
> guarantee reliable delivery. MemoryChannel may cause too many full GCs; 
> I'm tracking a GC issue now. Maybe I will use a DirectByteBuffer to reduce 
> the number of events in the heap.

Flume delivery guarantees are dependent on the channel used. To date, 
FileChannel has been pushed as a durable/lossless channel. Perhaps we should 
have separate durable and performant file channels.

What scenario is it that you believe will result in data loss with the current 
model?


- Juhani


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/6329/#review9814
---


On Aug. 3, 2012, 9:39 a.m., Denny Ye wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/6329/
> ---
> 
> (Updated Aug. 3, 2012, 9:39 a.m.)
> 
> 
> Review request for Flume, Hari Shreedharan and Patrick Wendell.
> 
> 
> Description
> ---
> 
> Here is the description of the code changes:
> 1. Remove 'FileChannel.force(false)'. Each commit from a Source invokes this 
> 'force' method, which is too heavy when large amounts of data arrive: each 
> 'force' call takes 50-500ms to confirm that the data is stored on disk. 
> Normally the OS flushes data from the kernel buffer to disk asynchronously 
> with millisecond-level latency, so forcing on every commit may be useless. 
> Certainly, data loss may occur on a server crash (though not a process 
> crash), but server crashes are infrequent.
> 2. Do not pre-allocate disk space. The disk doesn't need the pre-allocation.
> 3. Use 'RandomAccessFile.write()' to replace 'FileChannel.write()'. Both in 
> my test results and at the instruction level, the former is better than the 
> latter.
> 
> Beyond these three changes, I would like to use a thread-level cached 
> DirectByteBuffer to replace the in-heap ByteBuffer.allocate() (reusing 
> off-heap memory to reduce the time spent copying from heap to kernel). I 
> will test this change in the next phase.
> 
> After tuning, throughput increased from 5MB/s to 30MB/s.
> 
> 
> This addresses bug FLUME-1423.
> https://issues.apache.org/jira/browse/FLUME-1423
> 
> 
> Diffs
> -
> 
>   
> trunk/flume-ng-channels/flume-file-channel/src/main/java/org/apache/flume/channel/file/LogFile.java
>  1363210 
> 
> Diff: https://reviews.apache.org/r/6329/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Denny Ye
> 
>



Re: Review Request: Improvement for Log4j configuration

2012-08-03 Thread Denny Ye

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/6316/
---

(Updated Aug. 3, 2012, 10:18 a.m.)


Review request for Flume and Hari Shreedharan.


Description
---

Updated log4j.properties:
1. Add a 'logs' folder. It's better to aggregate logs into a unified folder.
2. Roll the log file daily using DailyRollingFileAppender.
3. A more useful log pattern: add thread and code-line information.
4. Set the Hadoop log level to INFO. At DEBUG, the Hadoop log takes too many 
lines.


This addresses bug FLUME-1418.
https://issues.apache.org/jira/browse/FLUME-1418


Diffs
-

  trunk/conf/log4j.properties 1363210 

Diff: https://reviews.apache.org/r/6316/diff/


Testing
---

That's OK in my environment


Thanks,

Denny Ye



Re: Review Request: Improvement for Log4j configuration

2012-08-03 Thread Denny Ye


> On Aug. 2, 2012, 10:03 a.m., Mike Percy wrote:
> > The INFO Hadoop setting is certainly needed. Also, including the thread 
> > name is great. However I have heard that DailyRollingFileAppender is 
> > unreliable and can cause data loss at roll time. What's the reason behind 
> > that change?
> 
> Denny Ye wrote:
> Thanks Mike, I'm familiar with DatedFileAppender from my project and 
> DailyRollingFileAppender from Hadoop. I think it's good for troubleshooting, 
> letting me and QA track issues on a specific date. A log name suffixed with 
> 'yyyy-MM-dd' is better than 'flume-{number}.log'. Another tip from my 
> experience: we have tools that clean up expired log files automatically, 
> and they verify the file suffix.
> 
> Also, regarding your concern about data loss in DailyRollingFileAppender: 
> I found a Hadoop bug, https://issues.apache.org/jira/browse/HADOOP-8149, 
> with the same problem, and they changed DailyRollingFileAppender back to 
> RollingFileAppender. I don't agree with them completely; it's hard to find 
> the log for a specific date.
> 
> Maybe the DatedFileAppender is another choice. Do you think so?
> 
> Mike Percy wrote:
> I've never used DatedFileAppender personally. I believe this is the 
> original site: http://minaret.biz/tips/datedFileAppender.html and someone 
> apparently made some modifications to it: 
> http://sourceforge.net/p/log4j-dfa/home/Home/
> 
> Do you know of anyone using it in production? If not, it seems a bit 
> hasty to just make it the default for all users of Flume...

We used DatedFileAppender in production for two years; it has always been stable.
Configuration example from the log4j.xml in my project:
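
(A reconstructed sketch: the attribute names follow the DatedFileAppender
documentation, and the values are assumed to produce the file name shown below.)

<appender name="flume" class="biz.minaret.log4j.DatedFileAppender">
  <param name="Directory" value="logs" />
  <param name="Prefix" value="x." />
  <param name="Suffix" value=".log" />
  <layout class="org.apache.log4j.PatternLayout">
    <param name="ConversionPattern" value="%d [%t] %-5p %c - %m%n" />
  </layout>
</appender>
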
The log file will be: x.2012-08-03.log


- Denny


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/6316/#review9745
---


On Aug. 2, 2012, 7 a.m., Denny Ye wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/6316/
> ---
> 
> (Updated Aug. 2, 2012, 7 a.m.)
> 
> 
> Review request for Flume and Hari Shreedharan.
> 
> 
> Description
> ---
> 
> Updated log4j.properties:
> 1. Add a 'logs' folder. It's better to aggregate logs into a unified folder.
> 2. Roll the log file daily using DailyRollingFileAppender.
> 3. A more useful log pattern: add thread and code-line information.
> 4. Set the Hadoop log level to INFO. At DEBUG, the Hadoop log takes too 
> many lines.
> 
> 
> This addresses bug FLUME-1418.
> https://issues.apache.org/jira/browse/FLUME-1418
> 
> 
> Diffs
> -
> 
>   trunk/conf/log4j.properties 1363210 
> 
> Diff: https://reviews.apache.org/r/6316/diff/
> 
> 
> Testing
> ---
> 
> That's OK in my environment
> 
> 
> Thanks,
> 
> Denny Ye
> 
>



Re: Review Request: Low throughput of FileChannel

2012-08-03 Thread Mubarak Seyed

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/6329/#review9819
---


Thanks Denny. IMO this patch defeats the purpose of a 'durable' file channel. A 
reason a user might choose the file-channel over the memory-channel is to avoid 
data loss, and a JVM/server crash can happen for various reasons such as a JVM 
bug or hardware problems; it is very hard to predict the frequency of crashes. 
I would try ByteBuffer.allocateDirect(), but there may be a chance of OOM, as 
it depends on the finalizer running [1], and we need to manually clean the 
DirectByteBuffer using reflection.

[1] 
http://stackoverflow.com/questions/1854398/how-to-garbage-collect-a-direct-buffer-java
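
The reflection cleanup mentioned above looks roughly like this on current
(pre-Java-9) JVMs - an illustrative sketch only, since sun.misc.Cleaner is
internal JDK API and the class name here is invented:

import java.lang.reflect.Method;
import java.nio.ByteBuffer;

public final class DirectBufferCleaner {
  /** Frees a direct buffer eagerly instead of waiting for the finalizer. */
  public static void clean(ByteBuffer buffer) {
    if (!buffer.isDirect()) {
      return;
    }
    try {
      Method cleanerMethod = buffer.getClass().getMethod("cleaner");
      cleanerMethod.setAccessible(true); // DirectByteBuffer is package-private
      Object cleaner = cleanerMethod.invoke(buffer);
      Method cleanMethod = cleaner.getClass().getMethod("clean");
      cleanMethod.invoke(cleaner);
    } catch (Exception e) {
      // Internal API unavailable: fall back to GC/finalizer reclamation.
    }
  }
}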

- Mubarak Seyed


On Aug. 3, 2012, 9:39 a.m., Denny Ye wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/6329/
> ---
> 
> (Updated Aug. 3, 2012, 9:39 a.m.)
> 
> 
> Review request for Flume, Hari Shreedharan and Patrick Wendell.
> 
> 
> Description
> ---
> 
> Here is the description of the code changes:
> 1. Remove 'FileChannel.force(false)'. Each commit from a Source invokes this 
> 'force' method, which is too heavy when large amounts of data arrive: each 
> 'force' call takes 50-500ms to confirm that the data is stored on disk. 
> Normally the OS flushes data from the kernel buffer to disk asynchronously 
> with millisecond-level latency, so forcing on every commit may be useless. 
> Certainly, data loss may occur on a server crash (though not a process 
> crash), but server crashes are infrequent.
> 2. Do not pre-allocate disk space. The disk doesn't need the pre-allocation.
> 3. Use 'RandomAccessFile.write()' to replace 'FileChannel.write()'. Both in 
> my test results and at the instruction level, the former is better than the 
> latter.
> 
> Beyond these three changes, I would like to use a thread-level cached 
> DirectByteBuffer to replace the in-heap ByteBuffer.allocate() (reusing 
> off-heap memory to reduce the time spent copying from heap to kernel). I 
> will test this change in the next phase.
> 
> After tuning, throughput increased from 5MB/s to 30MB/s.
> 
> 
> This addresses bug FLUME-1423.
> https://issues.apache.org/jira/browse/FLUME-1423
> 
> 
> Diffs
> -
> 
>   
> trunk/flume-ng-channels/flume-file-channel/src/main/java/org/apache/flume/channel/file/LogFile.java
>  1363210 
> 
> Diff: https://reviews.apache.org/r/6329/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Denny Ye
> 
>



Re: Review Request: Improvement for Log4j configuration

2012-08-03 Thread Mike Percy


> On Aug. 2, 2012, 10:03 a.m., Mike Percy wrote:
> > The INFO Hadoop setting is certainly needed. Also, including the thread 
> > name is great. However I have heard that DailyRollingFileAppender is 
> > unreliable and can cause data loss at roll time. What's the reason behind 
> > that change?
> 
> Denny Ye wrote:
> Thanks Mike, I'm familiar with DatedFileAppender from my project and 
> DailyRollingFileAppender from Hadoop. I think it's good for troubleshooting, 
> letting me and QA track issues on a specific date. A log name suffixed with 
> 'yyyy-MM-dd' is better than 'flume-{number}.log'. Another tip from my 
> experience: we have tools that clean up expired log files automatically, 
> and they verify the file suffix.
> 
> Also, regarding your concern about data loss in DailyRollingFileAppender: 
> I found a Hadoop bug, https://issues.apache.org/jira/browse/HADOOP-8149, 
> with the same problem, and they changed DailyRollingFileAppender back to 
> RollingFileAppender. I don't agree with them completely; it's hard to find 
> the log for a specific date.
> 
> Maybe the DatedFileAppender is another choice. Do you think so?

I've never used DatedFileAppender personally. I believe this is the original 
site: http://minaret.biz/tips/datedFileAppender.html and someone apparently 
made some modifications to it: http://sourceforge.net/p/log4j-dfa/home/Home/

Do you know of anyone using it in production? If not, it seems a bit hasty to 
just make it the default for all users of Flume...


- Mike


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/6316/#review9745
---


On Aug. 2, 2012, 7 a.m., Denny Ye wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/6316/
> ---
> 
> (Updated Aug. 2, 2012, 7 a.m.)
> 
> 
> Review request for Flume and Hari Shreedharan.
> 
> 
> Description
> ---
> 
> Updated log4j.properties:
> 1. Add a 'logs' folder. It's better to aggregate logs into a unified folder.
> 2. Roll the log file daily using DailyRollingFileAppender.
> 3. A more useful log pattern: add thread and code-line information.
> 4. Set the Hadoop log level to INFO. At DEBUG, the Hadoop log takes too 
> many lines.
> 
> 
> This addresses bug FLUME-1418.
> https://issues.apache.org/jira/browse/FLUME-1418
> 
> 
> Diffs
> -
> 
>   trunk/conf/log4j.properties 1363210 
> 
> Diff: https://reviews.apache.org/r/6316/diff/
> 
> 
> Testing
> ---
> 
> That's OK in my environment
> 
> 
> Thanks,
> 
> Denny Ye
> 
>



Re: Review Request: Flume adopt message from existing local Scribe

2012-08-03 Thread Juhani Connolly


> On Aug. 3, 2012, 3:45 a.m., Hari Shreedharan wrote:
> > I am not sure exactly how scribe works. From what you explained, it seems 
> > like you need to write a client of some sort on the scribe side, or scribe 
> > can somehow be configured to send the messages to a specified host/port? 
> > 
> > Anyway, what I was asking was if you could add a section to the Flume User 
> > Guide on how to set up the ScribeSource on the Flume side, and also how to 
> > set up scribe to write to this source. You can find the Flume User Guide 
> > here: flume-ng-doc/sphinx/FlumeUserGuide.rst. If you could add an example 
> > of how to configure both scribe and flume so that you can dump events to 
> > flume, that would be great.
> > 
> > I'd like to commit this if I am able to test basic functionality, and 
> > improve and fix issues as they are noticed, since committing this will not 
> > affect the working of the other components in the system.
> > 
> > So if you can help me set it up - configuring this and configuring scribe 
> > to write to this - it would be great.
> 
> Denny Ye wrote:
> Hari, I posted the ScribeSource guide patch to bug 
> https://issues.apache.org/jira/browse/FLUME-1382 (two patches). Please 
> review it. Scribe is an existing ingest system from Facebook; its 
> configuration format is defined by Facebook.

While this uses the scribe protocol, there is nothing scribe-specific about it 
other than that. Anything that can send thrift messages using the protocol will 
work. We have a program that we have been using to feed data to scribe over 
thrift until now, and it works fine with this source as well; a sketch of the 
agent-side configuration follows.
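
For anyone trying it out, the agent-side wiring would look along these lines
(an illustrative sketch; the property names and the conventional scribe port
1463 are assumptions - check the FLUME-1382 doc patch for the authoritative
names):

agent.sources = scribeSrc
agent.channels = memCh
agent.sources.scribeSrc.type = org.apache.flume.source.scribe.ScribeSource
agent.sources.scribeSrc.port = 1463
agent.sources.scribeSrc.channels = memCh
agent.channels.memCh.type = memory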


- Juhani


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/6089/#review9582
---


On July 30, 2012, 2:49 a.m., Denny Ye wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/6089/
> ---
> 
> (Updated July 30, 2012, 2:49 a.m.)
> 
> 
> Review request for Flume and Hari Shreedharan.
> 
> 
> Description
> ---
> 
> There may be others like me who want to replace the central Scribe with 
> Flume while keeping the existing ingest system, with minimal changes for 
> application users.
> Here is the ScribeSource, placed in the legacy folder, without deserializing. 
> 
> 
> This addresses bug FLUME-1382.
> https://issues.apache.org/jira/browse/FLUME-1382
> 
> 
> Diffs
> -
> 
>   trunk/flume-ng-dist/pom.xml 1363210 
>   trunk/flume-ng-sources/flume-scribe-source/pom.xml PRE-CREATION 
>   
> trunk/flume-ng-sources/flume-scribe-source/src/main/java/org/apache/flume/source/scribe/LogEntry.java
>  PRE-CREATION 
>   
> trunk/flume-ng-sources/flume-scribe-source/src/main/java/org/apache/flume/source/scribe/ResultCode.java
>  PRE-CREATION 
>   
> trunk/flume-ng-sources/flume-scribe-source/src/main/java/org/apache/flume/source/scribe/Scribe.java
>  PRE-CREATION 
>   
> trunk/flume-ng-sources/flume-scribe-source/src/main/java/org/apache/flume/source/scribe/ScribeSource.java
>  PRE-CREATION 
>   trunk/flume-ng-sources/pom.xml PRE-CREATION 
>   trunk/pom.xml 1363210 
> 
> Diff: https://reviews.apache.org/r/6089/diff/
> 
> 
> Testing
> ---
> 
> I have already used the ScribeSource in a local environment and tested it 
> over the past week. It works with the existing local Scribe interface.
> 
> 
> Thanks,
> 
> Denny Ye
> 
>



Re: Review Request: Low throughput of FileChannel

2012-08-03 Thread Denny Ye


> On Aug. 3, 2012, 9:29 a.m., Mike Percy wrote:
> > Wow you have been busy, that is really great! One problem with this change 
> > however is that FileChannel is no longer guaranteed to be durable. Many 
> > users cannot accept that limitation, especially after a Flume 1.2.0 release 
> > with a durable File Channel. Why not just use MemoryChannel if you don't 
> > need crash durability?

Flume doesn't guarantee 100% delivery. The difference between using 'force' and 
not using it comes down to a process crash versus a server crash, and a process 
crash is far more frequent than a server crash. Even if we use the 'force' 
method on every transaction commit, we still cannot guarantee reliable 
delivery. MemoryChannel may cause too many full GCs; I'm tracking a GC issue 
now. Maybe I will use a DirectByteBuffer to reduce the number of events in the 
heap.


- Denny


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/6329/#review9814
---


On Aug. 3, 2012, 9:39 a.m., Denny Ye wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/6329/
> ---
> 
> (Updated Aug. 3, 2012, 9:39 a.m.)
> 
> 
> Review request for Flume, Hari Shreedharan and Patrick Wendell.
> 
> 
> Description
> ---
> 
> Here is the description of the code changes:
> 1. Remove 'FileChannel.force(false)'. Each commit from a Source invokes this 
> 'force' method, which is too heavy when large amounts of data arrive: each 
> 'force' call takes 50-500ms to confirm that the data is stored on disk. 
> Normally the OS flushes data from the kernel buffer to disk asynchronously 
> with millisecond-level latency, so forcing on every commit may be useless. 
> Certainly, data loss may occur on a server crash (though not a process 
> crash), but server crashes are infrequent.
> 2. Do not pre-allocate disk space. The disk doesn't need the pre-allocation.
> 3. Use 'RandomAccessFile.write()' to replace 'FileChannel.write()'. Both in 
> my test results and at the instruction level, the former is better than the 
> latter.
> 
> Beyond these three changes, I would like to use a thread-level cached 
> DirectByteBuffer to replace the in-heap ByteBuffer.allocate() (reusing 
> off-heap memory to reduce the time spent copying from heap to kernel). I 
> will test this change in the next phase.
> 
> After tuning, throughput increased from 5MB/s to 30MB/s.
> 
> 
> This addresses bug FLUME-1423.
> https://issues.apache.org/jira/browse/FLUME-1423
> 
> 
> Diffs
> -
> 
>   
> trunk/flume-ng-channels/flume-file-channel/src/main/java/org/apache/flume/channel/file/LogFile.java
>  1363210 
> 
> Diff: https://reviews.apache.org/r/6329/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Denny Ye
> 
>



Re: Review Request: Low throughput of FileChannel

2012-08-03 Thread Denny Ye

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/6329/
---

(Updated Aug. 3, 2012, 9:39 a.m.)


Review request for Flume, Hari Shreedharan and Patrick Wendell.


Changes
---

Description updated


Description (updated)
---

Here is a description of the code changes:
1. Remove 'FileChannel.force(false)'. Each commit from a Source invokes this 
'force' method, which is too heavy when large amounts of data come in: each 
'force' call consumes 50-500ms while it confirms the data has been stored to 
disk. Normally the OS flushes data from the kernel buffer to disk 
asynchronously with millisecond-level latency, so forcing on every commit is 
largely wasted work. Data loss can then occur only on a server crash, not a 
process crash, and server crashes are infrequent.
2. Do not pre-allocate disk space; the pre-allocation is unnecessary.
3. Use 'RandomAccessFile.write()' to replace 'FileChannel.write()'. Both in 
my test results and at the low-level instruction level, the former is better 
than the latter.

Beyond these three changes, I would like to use a thread-level cached 
DirectByteBuffer to replace the in-heap ByteBuffer.allocate() (reusing 
off-heap memory to reduce the time spent copying from heap to kernel). I will 
test that change in the next phase.

After tuning, throughput increased from 5MB/s to 30MB/s.
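
To make the write-path change concrete, here is a minimal sketch of what 
changes 1 and 3 amount to (class and method names are mine for illustration; 
the real change lives in LogFile.java):

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.nio.ByteBuffer;

    // Sketch only: the old path wrote through the NIO channel and forced
    // every commit to disk; the new path writes through RandomAccessFile
    // and lets the OS flush the kernel buffer asynchronously.
    public class CommitPathSketch {
      private final RandomAccessFile file;

      public CommitPathSketch(RandomAccessFile file) {
        this.file = file;
      }

      // Old path: each commit blocks 50-500ms inside force() until the
      // data is confirmed on disk.
      public void commitWithForce(ByteBuffer buf) throws IOException {
        file.getChannel().write(buf);
        file.getChannel().force(false);
      }

      // New path: returns as soon as the bytes reach the kernel buffer;
      // data is lost only if the server (not just the process) crashes
      // before the OS flushes.
      public void commitWithoutForce(byte[] buf) throws IOException {
        file.write(buf);
      }
    }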


This addresses bug FLUME-1423.
https://issues.apache.org/jira/browse/FLUME-1423


Diffs
-

  
trunk/flume-ng-channels/flume-file-channel/src/main/java/org/apache/flume/channel/file/LogFile.java
 1363210 

Diff: https://reviews.apache.org/r/6329/diff/


Testing
---


Thanks,

Denny Ye



Re: Review Request: Low throughput of FileChannel

2012-08-03 Thread Mike Percy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/6329/#review9814
---


Wow, you have been busy; that is really great! One problem with this change, 
however, is that FileChannel is no longer guaranteed to be durable. Many 
users cannot accept that limitation, especially after a Flume 1.2.0 release 
with a durable File Channel. Why not just use MemoryChannel if you don't need 
crash durability?

- Mike Percy


On Aug. 3, 2012, 6:50 a.m., Denny Ye wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/6329/
> ---
> 
> (Updated Aug. 3, 2012, 6:50 a.m.)
> 
> 
> Review request for Flume, Hari Shreedharan and Patrick Wendell.
> 
> 
> Description
> ---
> 
> Here is a description of the code changes:
> 1. Remove 'FileChannel.force(false)'. Each commit from a Source invokes this 
> 'force' method, which is too heavy when large amounts of data come in: each 
> 'force' call consumes 50-500ms while it confirms the data has been stored to 
> disk. Normally the OS flushes data from the kernel buffer to disk 
> asynchronously with millisecond-level latency, so forcing on every commit is 
> largely wasted work. Data loss can then occur only on a server crash, not a 
> process crash, and server crashes are infrequent.
> 2. Do not pre-allocate disk space; the pre-allocation is unnecessary.
> 3. Use 'RandomAccessFile.write()' to replace 'FileChannel.write()'. Both in 
> my test results and at the low-level instruction level, the former is better 
> than the latter.
> 
> Beyond these three changes, I would like to use a thread-level cached 
> DirectByteBuffer to replace the in-heap ByteBuffer.allocate() (reusing 
> off-heap memory to reduce the time spent copying from heap to kernel). I 
> will test that change in the next phase.
> 
> 
> This addresses bug FLUME-1423.
> https://issues.apache.org/jira/browse/FLUME-1423
> 
> 
> Diffs
> -
> 
>   
> trunk/flume-ng-channels/flume-file-channel/src/main/java/org/apache/flume/channel/file/LogFile.java
>  1363210 
> 
> Diff: https://reviews.apache.org/r/6329/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Denny Ye
> 
>



Re: Review Request: Flume adopt message from existing local Scribe

2012-08-03 Thread Denny Ye


> On Aug. 3, 2012, 3:45 a.m., Hari Shreedharan wrote:
> > I am not sure exactly how scribe works. From what you explained, it seems 
> > like you need to write a client of some sort on the scribe side, or scribe 
> > can somehow be configured to send the messages to a specified host/port? 
> > 
> > Anyway, what I was asking was if you could add a section to the Flume User 
> > Guide on how to set up the ScribeSource on the Flume side, and also how to 
> > set up scribe to write to this source. You can find the Flume User Guide 
> > here: flume-ng-doc/sphinx/FlumeUserGuide.rst. If you could add an example 
> > of how to configure both scribe and flume so that you can dump events to 
> > flume, that would be great.
> > 
> > I'd like to commit this if I am able to test basic functionality, and 
> > improve and fix issues as they are noticed, since committing this will not 
> > affect the working of other components in the system.
> > 
> > So if you can help me set it up - configuring this and configuring scribe 
> > to write to this - it would be great.

Hari, I posted the ScribeSource guide patch (two patches) to bug 
https://issues.apache.org/jira/browse/FLUME-1382. Please review it. Scribe is 
an existing ingest system from Facebook, and its configuration format is 
defined by Facebook.
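
Until the doc patch is committed, a minimal agent configuration for trying 
the source might look like this (the 'port' property name and value are my 
assumption here; the doc patch on FLUME-1382 is authoritative):

    # Sketch of an agent that accepts traffic from a local Scribe
    agent.sources = scribe-src
    agent.channels = mem-ch
    agent.sinks = logger-sink

    agent.sources.scribe-src.type = org.apache.flume.source.scribe.ScribeSource
    agent.sources.scribe-src.port = 1499
    agent.sources.scribe-src.channels = mem-ch

    agent.channels.mem-ch.type = memory

    agent.sinks.logger-sink.type = logger
    agent.sinks.logger-sink.channel = mem-ch

On the Scribe side, the local Scribe's network store would then be pointed at 
the agent's host and the same port.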


- Denny


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/6089/#review9582
---


On July 30, 2012, 2:49 a.m., Denny Ye wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/6089/
> ---
> 
> (Updated July 30, 2012, 2:49 a.m.)
> 
> 
> Review request for Flume and Hari Shreedharan.
> 
> 
> Description
> ---
> 
> There may be others like me who want to replace a central Scribe with 
> Flume while keeping the existing ingest system, making the change smooth 
> for application users.
> Here is the ScribeSource, placed in the legacy folder, which passes 
> messages through without deserializing them.
> 
> 
> This addresses bug FLUME-1382.
> https://issues.apache.org/jira/browse/FLUME-1382
> 
> 
> Diffs
> -
> 
>   trunk/flume-ng-dist/pom.xml 1363210 
>   trunk/flume-ng-sources/flume-scribe-source/pom.xml PRE-CREATION 
>   
> trunk/flume-ng-sources/flume-scribe-source/src/main/java/org/apache/flume/source/scribe/LogEntry.java
>  PRE-CREATION 
>   
> trunk/flume-ng-sources/flume-scribe-source/src/main/java/org/apache/flume/source/scribe/ResultCode.java
>  PRE-CREATION 
>   
> trunk/flume-ng-sources/flume-scribe-source/src/main/java/org/apache/flume/source/scribe/Scribe.java
>  PRE-CREATION 
>   
> trunk/flume-ng-sources/flume-scribe-source/src/main/java/org/apache/flume/source/scribe/ScribeSource.java
>  PRE-CREATION 
>   trunk/flume-ng-sources/pom.xml PRE-CREATION 
>   trunk/pom.xml 1363210 
> 
> Diff: https://reviews.apache.org/r/6089/diff/
> 
> 
> Testing
> ---
> 
> I have already used ScribeSource in a local environment and tested it over 
> the past week. It works with the existing local Scribe interface.
> 
> 
> Thanks,
> 
> Denny Ye
> 
>



[jira] [Updated] (FLUME-1382) Flume adopt message from existing local Scribe

2012-08-03 Thread Denny Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-1382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denny Ye updated FLUME-1382:


Attachment: FLUME-1382-doc.patch

> Flume adopt message from existing local Scribe
> --
>
> Key: FLUME-1382
> URL: https://issues.apache.org/jira/browse/FLUME-1382
> Project: Flume
>  Issue Type: Improvement
>  Components: Sinks+Sources
>Affects Versions: v1.2.0
>Reporter: Denny Ye
>Assignee: Denny Ye
>Priority: Minor
>  Labels: scribe, thrift
> Fix For: v1.3.0
>
> Attachments: FLUME-1382-doc.patch, FLUME-1382.patch
>
>
> Currently, we are using Scribe in our data ingest system. The central 
> Scribe is hard to maintain and upgrade, so we would like to replace it with 
> Flume and accept messages from the many existing local Scribe instances. 
> This can be treated as a legacy component.
> We have implemented ScribeSource with more efficient Thrift code that 
> avoids deserializing messages.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: Review Request: Flume adopt message from existing local Scribe

2012-08-03 Thread Juhani Connolly

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/6089/#review9810
---


Hi, this is great... I was thinking I would need to make a new Thrift source 
for Flume, but this is exactly what we need.

I'm testing it out now, and it seems to work fine.

At a glance, the code looks fine from what I can see, but it seems like the 
Thrift-generated code is included in the patch? I'll have a proper look at it 
asap (it might not be until after the weekend, though) if no one else has.

- Juhani Connolly


On July 30, 2012, 2:49 a.m., Denny Ye wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/6089/
> ---
> 
> (Updated July 30, 2012, 2:49 a.m.)
> 
> 
> Review request for Flume and Hari Shreedharan.
> 
> 
> Description
> ---
> 
> There may be others like me who want to replace a central Scribe with 
> Flume while keeping the existing ingest system, making the change smooth 
> for application users.
> Here is the ScribeSource, placed in the legacy folder, which passes 
> messages through without deserializing them.
> 
> 
> This addresses bug FLUME-1382.
> https://issues.apache.org/jira/browse/FLUME-1382
> 
> 
> Diffs
> -
> 
>   trunk/flume-ng-dist/pom.xml 1363210 
>   trunk/flume-ng-sources/flume-scribe-source/pom.xml PRE-CREATION 
>   
> trunk/flume-ng-sources/flume-scribe-source/src/main/java/org/apache/flume/source/scribe/LogEntry.java
>  PRE-CREATION 
>   
> trunk/flume-ng-sources/flume-scribe-source/src/main/java/org/apache/flume/source/scribe/ResultCode.java
>  PRE-CREATION 
>   
> trunk/flume-ng-sources/flume-scribe-source/src/main/java/org/apache/flume/source/scribe/Scribe.java
>  PRE-CREATION 
>   
> trunk/flume-ng-sources/flume-scribe-source/src/main/java/org/apache/flume/source/scribe/ScribeSource.java
>  PRE-CREATION 
>   trunk/flume-ng-sources/pom.xml PRE-CREATION 
>   trunk/pom.xml 1363210 
> 
> Diff: https://reviews.apache.org/r/6089/diff/
> 
> 
> Testing
> ---
> 
> I have already used ScribeSource in a local environment and tested it over 
> the past week. It works with the existing local Scribe interface.
> 
> 
> Thanks,
> 
> Denny Ye
> 
>