[ https://issues.apache.org/jira/browse/SPARK-17812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15573459#comment-15573459 ]
Michael Armbrust edited comment on SPARK-17812 at 10/13/16 10:53 PM:
---------------------------------------------------------------------

+1 to the suggested ways of subscribing, and for using "assign" as a familiar name. I would probably leave it with a single option like this:

{code}
.option("startingOffsets", "earliest" | "latest" | """{"topicFoo": {"0": 1234, "1": 4567}}""")
{code}

where you can give -1 or -2 (again following Kafka) for specific partitions. {{startingTime}} could be added when we support time indexes.

> More granular control of starting offsets (assign)
> --------------------------------------------------
>
>                 Key: SPARK-17812
>                 URL: https://issues.apache.org/jira/browse/SPARK-17812
>             Project: Spark
>          Issue Type: Sub-task
>          Components: SQL
>            Reporter: Michael Armbrust
>
> Right now you can only run a Streaming Query starting from either the
> earliest or latest offsets available at the moment the query is started.
> Sometimes this is a lot of data. It would be nice to be able to do the
> following:
> - seek to user-specified offsets for manually specified topic-partitions

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
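A minimal sketch of how the proposed JSON-based {{startingOffsets}} option could be built and passed to the Kafka source. This assumes the option shape discussed in the comment above; the topic name, partition numbers, and broker address are hypothetical, and the {{readStream}} call is shown only as commented-out usage since it requires a running Spark session and Kafka cluster:

```python
import json

# Per-partition starting offsets as proposed in this ticket.
# Following Kafka's convention, -1 means "latest" and -2 means
# "earliest" for an individual partition.
starting_offsets = json.dumps({"topicFoo": {"0": 1234, "1": 4567, "2": -2}})

# Hypothetical usage with the Structured Streaming Kafka source:
# df = (spark.readStream
#       .format("kafka")
#       .option("kafka.bootstrap.servers", "host:9092")
#       .option("subscribe", "topicFoo")
#       .option("startingOffsets", starting_offsets)
#       .load())

print(starting_offsets)
```

Serializing through {{json.dumps}} avoids hand-writing the quoted string literal (and the easy-to-make comma/colon mistakes that come with it).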