Github user pwendell commented on a diff in the pull request:

    https://github.com/apache/spark/pull/300#discussion_r11721584
  
    --- Diff: streaming/src/main/scala/org/apache/spark/streaming/receiver/NetworkReceiver.scala ---
    @@ -0,0 +1,209 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.spark.streaming.receiver
    +
    +import java.nio.ByteBuffer
    +
    +import scala.collection.mutable.ArrayBuffer
    +
    +import org.apache.spark.storage.StorageLevel
    +
    +/**
    + * Abstract class of a receiver that can be run on worker nodes to receive external data. A
    + * custom receiver can be defined by defining the functions onStart() and onStop(). onStart()
    + * should define the setup steps necessary to start receiving data,
    + * and onStop() should define the cleanup steps necessary to stop receiving data. A custom
    + * receiver would look something like this.
    + *
    + * class MyReceiver(storageLevel) extends NetworkReceiver[String](storageLevel) {
    + *   def onStart() {
    + *     // Setup stuff (start threads, open sockets, etc.) to start receiving data.
    + *     // Must start new thread to receive data, as onStart() must be non-blocking.
    + *
    + *     // Call store(...) in those threads to store received data into Spark's memory.
    + *
    + *     // Call stop(...), restart() or reportError(...) on any thread based on how
    + *     // different errors should be handled.
    + *
    + *     // See corresponding method documentation for more details.
    + *   }
    + *
    + *   def onStop() {
    + *     // Cleanup stuff (stop threads, close sockets, etc.) to stop receiving data.
    + *   }
    + * }
    + */
    +abstract class NetworkReceiver[T](val storageLevel: StorageLevel) extends Serializable {
    +
    +  /**
    +   * This method is called by the system when the receiver is started. This function
    +   * must initialize all resources (threads, buffers, etc.) necessary for receiving data.
    +   * This function must be non-blocking, so receiving the data must occur on a different
    +   * thread. Received data can be stored with Spark by calling `store(data)`.
    +   *
    +   * If there are errors in threads started here, then the following options can be taken:
    +   * (i) `reportError(...)` can be called to report the error to the driver.
    +   * The receiving of data will continue uninterrupted.
    +   * (ii) `stop(...)` can be called to stop receiving data. This will call `onStop()` to
    +   * clean up all resources allocated (threads, buffers, etc.) during `onStart()`.
    +   * (iii) `restart(...)` can be called to restart the receiver. This will call `onStop()`
    +   * immediately, and then `onStart()` after a delay.
    +   */
    +  def onStart()
    +
    +  /**
    +   * This method is called by the system when the receiver is stopped. All resources
    +   * (threads, buffers, etc.) set up in `onStart()` must be cleaned up in this method.
    +   */
    +  def onStop()
    +
    +  /** Override this to specify a preferred location (hostname). */
    +  def preferredLocation: Option[String] = None
    +
    +  /** Store a single item of received data to Spark's memory. */
    +  def store(dataItem: T) {
    +    executor.pushSingle(dataItem)
    +  }
    +
    +  /** Store a sequence of received data into Spark's memory. */
    +  def store(dataBuffer: ArrayBuffer[T]) {
    +    executor.pushArrayBuffer(dataBuffer, None, None)
    +  }
    +
    +  /**
    +   * Store a sequence of received data into Spark's memory.
    +   * The metadata will be associated with this block of data
    +   * so that it can be used in the corresponding InputDStream.
    +   */
    +  def store(dataBuffer: ArrayBuffer[T], metadata: Any) {
    --- End diff --
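
    For context, a receiver using the `store(dataBuffer, metadata)` overload at this diff anchor might look roughly like the sketch below. This is hypothetical only: `BatchingReceiver`, `pollBatch()` and the metadata map are invented for illustration and are not part of the PR.

    ```scala
    import scala.collection.mutable.ArrayBuffer

    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.receiver.NetworkReceiver

    // Hypothetical sketch only: pollBatch() and the metadata map are invented to
    // show how the overload above might be called; they are not part of this PR.
    class BatchingReceiver(storageLevel: StorageLevel)
      extends NetworkReceiver[String](storageLevel) {

      def onStart() {
        // onStart() must not block, so receive on a separate thread.
        new Thread("batching-receiver") {
          override def run() {
            while (true) {
              val batch: ArrayBuffer[String] = pollBatch()
              // The metadata travels with this block and is visible to the
              // corresponding InputDStream.
              store(batch, Map("recordCount" -> batch.size))
            }
          }
        }.start()
      }

      def onStop() {
        // Close whatever pollBatch() reads from.
      }

      // Placeholder standing in for a real source read.
      private def pollBatch(): ArrayBuffer[String] = ArrayBuffer("event")
    }
    ```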
    
    There is a class of ingestion sources, like Flume, that allow you to receive data before fully "acknowledging" it, in order to support transactional semantics.

    I'm not sure the current API here really supports using those, because it's not clear to the receiver implementer when the underlying blocks get replicated.

    I think it would be good to expose either (a) some kind of `flush` operation, where you can force the block generator to create blocks for all outstanding pushed objects, or (b) a way to do a bulk write that guarantees the iterator is fully pushed into a block.

    If you have those, then you would be able to support the failure of a receiver in a nice way for sources like Flume.
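
    Purely as an illustration of option (a), a block generator with an explicit `flush()` could give the receiver the ordering guarantee it needs before acknowledging the source. Nothing below is from the PR; all of the names are invented:

    ```scala
    import scala.collection.mutable.ArrayBuffer

    // Hypothetical sketch of option (a): a block generator whose flush() forces all
    // outstanding pushed items into a block before returning. None of these names
    // exist in the PR; the point is only the ordering guarantee a receiver needs
    // before acknowledging a transactional source such as Flume.
    class FlushableBlockGenerator[T](pushBlock: Seq[T] => Unit) {
      private val buffer = new ArrayBuffer[T]

      def push(item: T): Unit = synchronized { buffer += item }

      // Once flush() returns, everything pushed so far has been handed off as a
      // block, so the caller can safely acknowledge the upstream transaction.
      def flush(): Unit = synchronized {
        if (buffer.nonEmpty) {
          pushBlock(buffer.toList)
          buffer.clear()
        }
      }
    }

    object FlushExample extends App {
      val generator = new FlushableBlockGenerator[String](
        block => println(s"stored block of ${block.size} events"))
      Seq("event-1", "event-2", "event-3").foreach(generator.push)
      generator.flush()  // acknowledge the source transaction only after this returns
    }
    ```

    Option (b) amounts to the same guarantee expressed as a single call: the bulk write returns only once the iterator's contents are in a block, so the acknowledgement can follow immediately.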

