Sounds like a valid bug. I am curious though... is there a real use scenario you are facing in production?
On Mon, Oct 14, 2013 at 7:39 PM, Suhas Satish suhas.sat...@gmail.com wrote:
In summary, although the flume-agent JVM doesn't exit, once an HDFS IO exception occurs due to deleting a
Hi Abhijeet,
Sorry for the late response; I missed this email.
You may want to look into this alternative: set up a local Flume agent to pick up the locally generated syslogs, then configure this agent with failover sinks (Avro/Thrift) to talk to the agents that you are concerned may crash.
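For illustration, a minimal sketch of such a setup, assuming a syslog TCP source and two hypothetical downstream collectors (the agent name, hostnames, and ports are placeholders, not from this thread):

# Local agent with a failover sink group over two Avro sinks.
agent1.sources = syslogSrc
agent1.channels = fileCh
agent1.sinks = avroSink1 avroSink2
agent1.sinkgroups = failoverGroup

# Pick up locally generated syslog traffic (port is a placeholder).
agent1.sources.syslogSrc.type = syslogtcp
agent1.sources.syslogSrc.host = 0.0.0.0
agent1.sources.syslogSrc.port = 5140
agent1.sources.syslogSrc.channels = fileCh

agent1.channels.fileCh.type = file

# One Avro sink per downstream agent that might crash.
agent1.sinks.avroSink1.type = avro
agent1.sinks.avroSink1.hostname = collector1
agent1.sinks.avroSink1.port = 4141
agent1.sinks.avroSink1.channel = fileCh

agent1.sinks.avroSink2.type = avro
agent1.sinks.avroSink2.hostname = collector2
agent1.sinks.avroSink2.port = 4141
agent1.sinks.avroSink2.channel = fileCh

# Failover processor: the higher-priority sink is used until it
# fails, then traffic falls back to the lower-priority one.
agent1.sinkgroups.failoverGroup.sinks = avroSink1 avroSink2
agent1.sinkgroups.failoverGroup.processor.type = failover
agent1.sinkgroups.failoverGroup.processor.priority.avroSink1 = 10
agent1.sinkgroups.failoverGroup.processor.priority.avroSink2 = 5
agent1.sinkgroups.failoverGroup.processor.maxpenalty = 10000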
Recently we switched over from Memory Channel to File Channel, as Memory Channel has some GC issues. Occasionally in File Channel I see this exception:

org.apache.flume.ChannelException: Put queue for FileBackedTransaction of capacity 5000 full, consider committing more frequently, increasing
What source are you using? It looks like the source is writing 5K events in one transaction.
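For reference, a hedged sketch of the File Channel settings that exception points at; the put-queue capacity comes from the channel's transactionCapacity, so it needs to be at least as large as the biggest batch a source commits in one transaction (the paths and numbers below are illustrative placeholders, not from this thread):

# Illustrative only: keep transactionCapacity >= the largest source batch,
# and capacity >= transactionCapacity.
agent1.channels.fileCh.type = file
agent1.channels.fileCh.checkpointDir = /var/flume/checkpoint
agent1.channels.fileCh.dataDirs = /var/flume/data
agent1.channels.fileCh.capacity = 1000000
agent1.channels.fileCh.transactionCapacity = 10000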
Thanks,
Hari
On Tuesday, October 15, 2013 at 12:24 PM, Bhaskar V. Karambelkar wrote:
The source is an Avro Source, which gets events fed to it by a custom JVM application using the Flume client SDK.
So, referring to the client SDK: if the batchSize property has been set to 1,000, but I pass, say, 10,000 events in the client.appendBatch(List<Event>) call, what happens?
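For context, a minimal sketch of the call in question using the Flume client SDK; the host, port, and event payloads are placeholders, and the batch size of 1,000 is passed via RpcClientFactory.getDefaultInstance:

import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;

public class BatchSizeSketch {
    public static void main(String[] args) throws EventDeliveryException {
        // Placeholder host/port; client-side batchSize hint of 1,000.
        RpcClient client = RpcClientFactory.getDefaultInstance("localhost", 4141, 1000);
        try {
            // Build a list ten times larger than the configured batchSize.
            List<Event> events = new ArrayList<>();
            for (int i = 0; i < 10000; i++) {
                events.add(EventBuilder.withBody("event-" + i, StandardCharsets.UTF_8));
            }
            // The question in the thread: is this split into batchSize-sized
            // RPC calls, or sent downstream as one oversized batch?
            client.appendBatch(events);
        } finally {
            client.close();
        }
    }
}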
On Tue, Oct 15, 2013 at 3:54 PM,