On another note, you might want to try Flume first if you are just at the
exploration phase. The advantage of Flume (which uses a push model) is that
you do not need to write any additional program in order to sink your data
to a target system. I am not quite sure how well Flume works with Spark
Streaming, although in theory it should.
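
If it helps, below is a minimal sketch of the push-based Flume receiver in
PySpark. This assumes Spark 2.x (which still ships the spark-streaming-flume
integration) and a Flume agent whose Avro sink is configured to push to the
address used here; the host and port are placeholders:

    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext
    from pyspark.streaming.flume import FlumeUtils

    sc = SparkContext(appName="FlumeSensorStream")
    ssc = StreamingContext(sc, batchDuration=5)

    # Flume's Avro sink pushes events to this host:port, where Spark
    # runs a receiver. The address must match the Flume agent config.
    stream = FlumeUtils.createStream(ssc, "localhost", 4545)

    # Each event arrives as a (headers, body) pair; print the bodies.
    stream.map(lambda event: event[1]).pprint()

    ssc.start()
    ssc.awaitTermination()

You would submit this with the Flume assembly on the classpath, e.g.
--packages org.apache.spark:spark-streaming-flume-assembly_2.11:<version>.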

On the other hand, Kafka and its integration with Spark Structured
Streaming are documented here:
https://docs.databricks.com/spark/latest/structured-streaming/kafka.html
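
From that page, a minimal PySpark sketch looks roughly like the following.
The broker address, topic name, and JSON schema are placeholders, and the
job has to be submitted with the Kafka source package (e.g.
--packages org.apache.spark:spark-sql-kafka-0-10_2.11:<version>):

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import from_json, col
    from pyspark.sql.types import (StructType, StructField,
                                   StringType, DoubleType)

    spark = SparkSession.builder.appName("KafkaSensorStream").getOrCreate()

    # Placeholder schema for the JSON payload each sensor produces.
    schema = StructType([
        StructField("sensor_id", StringType()),
        StructField("value", DoubleType()),
    ])

    # All sensors publish to one topic, so a single streaming query
    # receives data from every source.
    raw = (spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092")
        .option("subscribe", "sensors")
        .load())

    # Kafka delivers the payload as bytes in the "value" column.
    parsed = (raw
        .select(from_json(col("value").cast("string"), schema).alias("data"))
        .select("data.*"))

    query = parsed.writeStream.format("console").start()
    query.awaitTermination()

Since every sensor can publish to the same topic, this also gives a natural
answer to the question below: one streaming query reads from all sources at
once.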


Regards,
Gourav Sengupta

On Sat, Apr 15, 2017 at 9:12 PM, tencas <diego...@gmail.com> wrote:

> Hi everybody,
>
>  I am using Apache Spark Streaming with a TCP connector to receive data.
> I have a Python application that connects to a sensor, creates a TCP
> server that waits for a connection from Apache Spark, and then sends JSON
> data through this socket.
>
> How can I join many independent sensor sources so that they all send data
> to the same receiver on Apache Spark?
>
> Thanks.
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Join-streams-Apache-Spark-tp28603.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> ---------------------------------------------------------------------
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>
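Regarding the original question of joining many independent sensor sources
with the plain socket receiver: one common approach is to open one DStream
per sensor and union them into a single stream. A rough sketch, with
placeholder host/port pairs:

    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext

    sc = SparkContext(appName="MultiSensorStream")
    ssc = StreamingContext(sc, batchDuration=5)

    # Placeholder (host, port) pairs, one per sensor TCP server.
    sensor_endpoints = [("sensor-host-1", 9999), ("sensor-host-2", 9998)]

    # One socket receiver per sensor; union merges them into one DStream.
    streams = [ssc.socketTextStream(host, port)
               for host, port in sensor_endpoints]
    merged = ssc.union(*streams)

    merged.pprint()

    ssc.start()
    ssc.awaitTermination()

Note that each socket receiver permanently occupies one executor core, so
the application needs at least one core per sensor plus one for the
processing itself.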
