
Hi

While I am following this discussion with interest, I am trying to understand 
the architectural benefit of a Spark sink.
Is there any feature in Flume that makes it more suitable for ingesting stream 
data than Spark Streaming, such that we should chain them? For example, does it 
improve the durability or reliability of the source?
Or is it a more tactical choice based on connector availability or the like?
To me, Flume is an important component for ingesting streams directly into HDFS 
or Hive, i.e. it plays on the batch side of the lambda architecture pattern.
On 20 Nov 2016 22:30, "Mich Talebzadeh" <mich.talebza...@gmail.com[1]> wrote:


Hi Ian,


Has this been resolved?


How about sending the data to Flume, then to Kafka, and streaming from Kafka into Spark?
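
If you go that route, the Kafka-to-Spark leg could look roughly like the sketch below. This is only an illustration, assuming the Kafka 0.8 direct API that ships with Spark 1.6 and placeholder broker/topic names:

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

import kafka.serializer.StringDecoder;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;
import scala.Tuple2;

public class KafkaToSpark {
    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("KafkaToSpark");
        JavaStreamingContext ssc = new JavaStreamingContext(conf, new Duration(10000));

        // Placeholder broker list and topic name
        Map<String, String> kafkaParams = new HashMap<String, String>();
        kafkaParams.put("metadata.broker.list", "broker1:9092");
        Set<String> topics = Collections.singleton("flume-events");

        // Direct (receiver-less) stream: Spark tracks the Kafka offsets itself
        JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                ssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                kafkaParams, topics);

        // Keep only the message payloads
        JavaDStream<String> payloads = stream.map(new Function<Tuple2<String, String>, String>() {
            @Override
            public String call(Tuple2<String, String> record) {
                return record._2();
            }
        });

        payloads.print();
        ssc.start();
        ssc.awaitTermination();
    }
}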


Thanks


Dr Mich Talebzadeh

LinkedIn: 
https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw[2]

http://talebzadehmich.wordpress.com[3]


*Disclaimer:* Use it at your own risk. Any and all responsibility for any loss, 
damage or destruction of data or any other property which may arise from 
relying on this email's technical content is explicitly disclaimed. The author 
will in no case be liable for any monetary damages arising from such loss, 
damage or destruction. 
  


On 13 July 2016 at 11:13, Ian Brooks <i.bro...@sensewhere.com[4]> wrote:


Hi,
 
I'm currently trying to implement a prototype Spark application that gets data 
from Flume and processes it. I'm using the pull based method mentioned in 
https://spark.apache.org/docs/1.6.1/streaming-flume-integration.html[5] 
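
For context, the Flume agent side uses the SparkSink described in that guide; a minimal sketch of the relevant agent configuration (hostname, port and channel name are placeholders) would be something like:

agent.sinks = spark
agent.sinks.spark.type = org.apache.spark.streaming.flume.sink.SparkSink
agent.sinks.spark.hostname = <hostname the sink listens on>
agent.sinks.spark.port = <port the Spark receiver polls>
agent.sinks.spark.channel = memoryChannel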
 
This is initially working fine for getting data from Flume; however, the Spark 
client doesn't appear to be letting Flume know that the data has been received, 
so Flume doesn't remove it from the batch. 
 
After 100 requests, Flume stops allowing any new data and logs:
 
08 Jul 2016 14:59:00,265 WARN  [Spark Sink Processor Thread - 5] 
(org.apache.spark.streaming.flume.sink.Logging$class.logWarning:80)  - Error 
while processing transaction. 

 
My code to pull the data from Flume is:
 
SparkConf sparkConf = new SparkConf(true).setAppName("SLAMSpark");
Duration batchInterval = new Duration(10000);
final String checkpointDir = "/tmp/";

final JavaStreamingContext ssc = new JavaStreamingContext(sparkConf, batchInterval);
ssc.checkpoint(checkpointDir);
JavaReceiverInputDStream<SparkFlumeEvent> flumeStream =
        FlumeUtils.createPollingStream(ssc, host, port);

// Transform each Flume Avro event into a processable format
JavaDStream<String> transformedEvents = flumeStream.map(new Function<SparkFlumeEvent, String>() {
    @Override
    public String call(SparkFlumeEvent flumeEvent) throws Exception {
        String flumeEventStr = flumeEvent.event().toString();
        Gson gson = new GsonBuilder().create();
        avroData avroData = gson.fromJson(flumeEventStr, avroData.class);
        HashMap<String, String> body = avroData.getBody();
        String data = body.get("bytes");
        return data;
    }
});

...

ssc.start();
ssc.awaitTermination();
ssc.close();
}
 
Is there something specific I should be doing to let the Flume server know the 
batch has been received and processed?


*Ian Brooks*
 




*Ian Brooks*
Lead Cloud Systems Engineer

Mobile: +44 7900987187
UK Office: +44 131 629 5155
US Office: +1 650 943 2403
Skype: ijbrooks

E-mail: i.bro...@sensewhere.com[6] 
Web: www.sensewhere.com[7] 

*sensewhere Ltd*. 4th Floor, 108 Princes Street, Edinburgh EH2 3AA.
Company Number: SC357036
*sensewhere USA* 800 West El Camino Real, Suite 180, Mountain View, California, 
94040
*sensewhere China* Room748, 7/F, Tower A, SCC, No.88 Haide 1st Avenue, Nanshan 
District, Shenzhen 51806

      

--------
[1] mailto:mich.talebza...@gmail.com
[2] 
https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
[3] http://talebzadehmich.wordpress.com
[4] mailto:i.bro...@sensewhere.com
[5] https://spark.apache.org/docs/1.6.1/streaming-flume-integration.html
[6] mailto:i.bro...@sensewhere.com
[7] http://www.sensewhere.com/
