+1
Adam Kunicki
StreamSets | Field Engineer
mobile: 415.890.DATA (3282) | linkedin <http://www.adamkunicki.com>
On Thu, Jun 16, 2016 at 1:56 PM, Craig Swift wrote:
You can use SSL certificate hostname verification as a rudimentary form of
authentication instead of Kerberos. The two can also be used together or
independently.
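For illustration, a minimal sketch of a 0.9 consumer configured for SSL with hostname verification turned on (the client checks that the broker certificate matches the host it connected to). The broker address, topic, and truststore path/password are placeholders, not values from this thread:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class SslConsumerExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9093");   // SSL listener port (assumed)
            props.put("group.id", "ssl-demo");
            props.put("key.deserializer",
                      "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                      "org.apache.kafka.common.serialization.StringDeserializer");

            // SSL transport plus server certificate validation
            props.put("security.protocol", "SSL");
            props.put("ssl.truststore.location", "/path/to/truststore.jks");
            props.put("ssl.truststore.password", "changeit");

            // Turns on hostname verification: the broker certificate's CN/SAN
            // must match the host the client connected to.
            props.put("ssl.endpoint.identification.algorithm", "HTTPS");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("test"));
                ConsumerRecords<String, String> records = consumer.poll(1000);
                System.out.println("fetched " + records.count() + " records over SSL");
            }
        }
    }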
On Mon, Mar 21, 2016 at 8:53 AM -0700, "christopher palm"
wrote:
Hi All,
Does Kafka support SSL authentication and ACL authorization without Kerberos?
"test1234" cleartext in the file?
> Like some encryption?
>
--
Adam Kunicki
StreamSets | Field Engineer
mobile: 415.890.DATA (3282) | linkedin <http://www.adamkunicki.com>
Has anyone built the Spark Streaming Kafka receiver for Kafka 0.9?
Asking since the Spark master branch is still using 0.8.2.1.
--
Adam Kunicki
StreamSets | Field Engineer
mobile: 415.890.DATA (3282) | linkedin <http://www.adamkunicki.com>
> Does Kafka Connect currently provide the tools to enable exactly-once publication
> behaviour? Is this a planned enhancement to Kafka Connect? Is there already
> some technique that people are using effectively to get exactly-once?
>
> Andrew Schofield
--
Adam Kunicki
StreamSets | Field Engineer
mobile: 415.890.DATA (3282) | linkedin <http://www.adamkunicki.com>
> > ssl.truststore.location = /opt/kafka_2.11-0.9.0.0/config/ssl/truststore.jks
> > log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
> > default.replication.factor = 1
> > metrics.sample.w
> 3. Kafka Hadoop Loader
> 4. Camus -> Gobblin
>
> Flume, though, can run into small-file problems when your data is
> partitioned and some partitions generate data only sporadically.
>
> What are some best practices and options to write data from Kafka to HDFS?
>
> Thanks,
> R P
>
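On the small-file point above: whatever tool is used, the usual mitigation is to buffer records and roll a file only once a size or age threshold is reached. A minimal sketch along those lines, assuming a 0.9 consumer and using a plain local file as a stand-in for the HDFS write; the topic, thresholds, and output path are invented for illustration:

    import java.io.FileWriter;
    import java.io.IOException;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class BatchingSink {
        public static void main(String[] args) throws IOException {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");
            props.put("group.id", "hdfs-batcher");
            props.put("enable.auto.commit", "false");
            props.put("key.deserializer",
                      "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                      "org.apache.kafka.common.serialization.StringDeserializer");

            long maxBatchChars = 64L * 1024 * 1024;   // roll after ~64M chars (rough proxy for bytes)
            long maxBatchAgeMs = 10 * 60 * 1000L;     // or after 10 minutes, whichever comes first

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("events"));
                StringBuilder buffer = new StringBuilder();
                long batchStart = System.currentTimeMillis();

                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(1000);
                    for (ConsumerRecord<String, String> record : records) {
                        buffer.append(record.value()).append('\n');
                    }
                    boolean bigEnough = buffer.length() >= maxBatchChars;
                    boolean oldEnough = System.currentTimeMillis() - batchStart >= maxBatchAgeMs;
                    if (buffer.length() > 0 && (bigEnough || oldEnough)) {
                        // Stand-in for an HDFS write; a real sink would use the
                        // Hadoop FileSystem API and a partitioned output path.
                        try (FileWriter out = new FileWriter("/tmp/batch-" + batchStart + ".txt")) {
                            out.write(buffer.toString());
                        }
                        consumer.commitSync();   // commit only after the batch is durable
                        buffer.setLength(0);
                        batchStart = System.currentTimeMillis();
                    }
                }
            }
        }
    }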
eeper and I understand that this is not
> the way to go!
>
> Any clues?
>
> Thanks and Regards,
> Joe
>
--
Adam Kunicki
StreamSets | Field Engineer
mobile: 415.890.DATA (3282) | linkedin <http://www.adamkunicki.com>
On Mon, Feb 1, 2016 at 9:55 PM, Gwen Shapira wrote:
> This is the second time I've seen this complaint, so we could probably make
> the API docs clearer.
>
> Adam, feel like submitting a JIRA?
>
> On Mon, Feb 1, 2016 at 3:34 PM, Adam Kunicki wrote:
>
> > Thanks, a
> If you write code to commit offsets
> manually, it can be a gotcha.
>
> -Dana
>
> On Mon, Feb 1, 2016 at 1:35 PM, Adam Kunicki wrote:
>
> > Hi,
> >
> > I've been noticing that a restarted consumer in 0.9 will start consuming
> > from the last committed offse
Hi,
I've been noticing that a restarted consumer in 0.9 will start consuming
from the last committed offset (inclusive). This means that any restarted
consumer will get the last read (and committed) message, causing a duplicate
each time the consumer is restarted from the same position if there have
been no new messages in the meantime.
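The convention the new consumer expects is that the committed value is the offset of the next record to read, i.e. last processed offset + 1, which is why committing the last-read offset itself re-delivers that record on restart. A minimal sketch of a manual commit that follows that convention; the broker address, topic, and group id are placeholders:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    public class CommitNextOffset {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");
            props.put("group.id", "manual-commit-demo");
            props.put("enable.auto.commit", "false");
            props.put("key.deserializer",
                      "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                      "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("test"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(1000);
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("%d: %s%n", record.offset(), record.value());
                        // Commit record.offset() + 1: the committed value is the
                        // offset of the next record to consume, so a restart
                        // resumes after this record instead of re-reading it.
                        consumer.commitSync(Collections.singletonMap(
                                new TopicPartition(record.topic(), record.partition()),
                                new OffsetAndMetadata(record.offset() + 1)));
                    }
                }
            }
        }
    }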