Small typo correction: I meant headers at the end of this paragraph, not keys
(sorry, long week already).


corrected:


"
Second, I would suggest we do not add an additional section (again, I would be
a little -1 here) into the record specifically for this. The whole point of
headers being added is that additional bits such as this would layer on top of
headers, e.g. the AES or other data that needs to travel with the record
should be set into headers.
"

On 7 May 2020 at 8:47, Michael André Pearce <michael.andre.pea...@icloud.com> 
wrote:


Hi 


I have just spotted this.


I would be a little -1 on encrypting headers: these are NOT safe to encrypt.
The whole original reason for headers was to carry non-sensitive transport or
other meta-information, very akin to TCP headers, which likewise are not
encrypted. These should remain unencrypted so that tools simply bridging
messages between brokers/systems can rely on headers, without needing to peek
inside the business payload (or decrypt it).
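To illustrate the bridging point above: a sketch of a tool that routes records using only their (unencrypted) headers, never touching the payload. The header name and record shape here are illustrative assumptions, not Kafka APIs.

```python
def route(record, routes):
    """Pick a destination topic from the record's headers alone;
    the payload stays opaque (and possibly encrypted)."""
    headers = dict(record["headers"])
    destination = headers.get(b"x-destination", b"").decode()
    return routes.get(destination, "dead-letter")

record = {
    "headers": [(b"x-destination", b"billing")],
    "payload": b"\x8f\x02\x41",  # opaque bytes, never inspected
}
routes = {"billing": "billing-topic", "audit": "audit-topic"}
```

A bridge built this way keeps working even when every payload byte is ciphertext, which is exactly why the headers themselves must stay in the clear.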


Second, I would suggest we do not add an additional section (again, I would be
a little -1 here) into the record specifically for this. The whole point of
headers being added is that additional bits such as this would layer on top of
headers, e.g. the AES or other data that needs to travel with the record
should be set into keys.
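As the paragraph above suggests, encryption parameters can travel with the record as headers while only the payload is encrypted. A minimal Python sketch, assuming hypothetical header names (x-enc-key-id, x-enc-iv) and using a hash-derived XOR keystream purely as a stand-in for AES:

```python
import hashlib
import os

def toy_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    """Stand-in for AES: XOR against a hash-derived keystream.
    NOT secure; it only illustrates where the metadata goes."""
    stream = b""
    counter = 0
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(key + iv + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(p ^ s for p, s in zip(plaintext, stream))

def build_record(key_id: str, key: bytes, plaintext: bytes):
    iv = os.urandom(16)
    return {
        "headers": [
            (b"x-enc-key-id", key_id.encode()),  # which key encrypted this record
            (b"x-enc-iv", iv),                   # per-record IV travels in the clear
        ],
        "payload": toy_encrypt(key, iv, plaintext),
    }
```

Because XOR is its own inverse here, a consumer reads the key id and IV from the headers and calls toy_encrypt again on the payload to recover the plaintext.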


Please see the original KIP-82, but more importantly the case and uses that
headers were added for.


https://cwiki.apache.org/confluence/display/KAFKA/KIP-82+-+Add+Record+Headers
https://cwiki.apache.org/confluence/display/KAFKA/A+Case+for+Kafka+Headers


Best
Mike



On 1 May 2020 at 23:18, Sönke Liebau <soenke.lie...@opencore.com.INVALID> wrote:


Hi Tom,

thanks for taking a look!

Regarding your questions, I've answered below, but will also add more
detail to the KIP around these questions.

1. The functionality in this first phase could indeed be achieved with custom
serializers that would then need to wrap the actual serializer to be used.
However, looking forward I intend to add functionality that allows encryption
to be configured broker-side via topic-level configs, and to investigate
encrypting entire batches of messages for performance. Both of those things
would require us to move past doing this in a serializer, so I think we should
take that plunge now to avoid unnecessary refactoring later on.
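For reference, the wrapping-serializer approach mentioned above could look roughly like this sketch, in Python for brevity (the class names and the encrypt callable are hypothetical, not part of any Kafka API):

```python
import json

class EncryptingSerializer:
    """Wraps another serializer and encrypts its output.
    Mirrors the shape of a Kafka Serializer; the encrypt
    callable is a placeholder for real crypto."""
    def __init__(self, inner, encrypt):
        self.inner = inner        # the actual serializer being wrapped
        self.encrypt = encrypt    # callable: (topic, bytes) -> bytes

    def serialize(self, topic, value):
        # Serialize first, then encrypt the resulting bytes.
        return self.encrypt(topic, self.inner.serialize(topic, value))

class JsonSerializer:
    def serialize(self, topic, value):
        return json.dumps(value).encode()

# Trivial stand-in "encryption": reverse the bytes.
wrapped = EncryptingSerializer(JsonSerializer(), lambda topic, data: data[::-1])
```

The drawback is visible in the shape itself: everything happens client-side per record, so broker-side topic configs or batch-level encryption cannot be expressed this way.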

2. Absolutely! I am currently working on a very (very) rough implementation
to kind of prove the principle. I'll add those to the KIP as soon as I
think they are in a somewhat final form.
There are a lot of design details missing from the KIP, I didn't want to go
all the way just for people to hate what I designed and have to start over
;)

3. Yes. I plan to create a LocalKeystoreKeyManager (name tbd) as part of
this KIP that allows configuring keys per topic pattern and will read the
keys from a local file. This will provide encryption, but users would have
to manually sync keystores across consumer and producer systems. Proper key
management with rollover and retrieval from central vaults would come in a
later phase.
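A sketch of what such a LocalKeystoreKeyManager could look like, assuming the pattern-to-key mapping has already been read from the local file (the class shape is a guess based on the description above, not a committed interface):

```python
import fnmatch

class LocalKeystoreKeyManager:
    """Resolves an encryption key by matching the topic against
    configured patterns. In the KIP the mapping would be loaded
    from a local keystore file; here it is passed in directly."""
    def __init__(self, pattern_to_key):
        # Insertion order doubles as pattern precedence.
        self.pattern_to_key = pattern_to_key

    def key_for_topic(self, topic):
        for pattern, key in self.pattern_to_key.items():
            if fnmatch.fnmatch(topic, pattern):
                return key
        return None  # no key configured: topic stays unencrypted

manager = LocalKeystoreKeyManager({"payments.*": b"key-A", "audit": b"key-B"})
```

With this split, swapping the local-file backend for a central vault later only changes how pattern_to_key is populated, not how producers and consumers look up keys.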

4. I'm not 100% sure I follow your meaning here tbh. But I think the
question may be academic in this first instance, as compression happens at
batch level, so we can't encrypt at the record level after that. If we want
to stick with encrypting individual records, that would have to happen
pre-compression, unless I am mistaken about the internals here.
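The ordering described above can be sketched as a small pipeline: each record is encrypted individually first, and only then is the batch compressed (zlib stands in for Kafka's batch compression; the XOR "cipher" is a placeholder):

```python
import zlib

def toy_encrypt(record: bytes) -> bytes:
    # Placeholder for real per-record encryption (XOR is NOT secure).
    return bytes(b ^ 0x5A for b in record)

def produce_batch(records):
    """Encrypt each record first, then compress the whole batch."""
    encrypted = [toy_encrypt(r) for r in records]
    framed = b"".join(len(r).to_bytes(4, "big") + r for r in encrypted)
    return zlib.compress(framed)

def consume_batch(blob):
    """Reverse the pipeline: decompress the batch, then decrypt records."""
    framed = zlib.decompress(blob)
    records, pos = [], 0
    while pos < len(framed):
        size = int.from_bytes(framed[pos:pos + 4], "big")
        pos += 4
        records.append(toy_encrypt(framed[pos:pos + size]))  # XOR is its own inverse
        pos += size
    return records
```

Running the two functions back to back round-trips the records; the point of the sketch is simply that record-level encryption has to sit on the plaintext side of the batch-level compression step.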

Best regards,
Sönke


On Fri, 1 May 2020 at 18:19, Tom Bentley <tbent...@redhat.com> wrote:


Hi Sönke,


I never looked at the original version, but what you describe in the new
version makes sense to me.


Here are a few things which sprang to mind while I was reading:


1. It wasn't immediately obvious why this can't be achieved using custom
serializers and deserializers.
2. It would be useful to fully define the Java interfaces you're talking
about.
3. Would a KeyManager implementation be provided?
4. About compression + encryption: my understanding is that CRIME used a
chosen-plaintext attack. AFAICS, using compression would potentially allow a
known-plaintext attack, which is a weaker way of attacking a cipher. Even
without compression in the picture, known-plaintext attacks would be possible,
for example if the attacker knew the key was JSON-encoded.


Kind regards,


Tom


On Wed, Apr 29, 2020 at 12:32 AM Sönke Liebau
<soenke.lie...@opencore.com.invalid> wrote:


All,

I've asked for comments on this KIP in the past, but since I didn't really get
any feedback I've decided to reduce the initial scope of the KIP a bit and try
again.

I have reworked the KIP to provide a limited but useful set of features for
this initial KIP and laid out a very rough roadmap of what I'd envision this
looking like in a final version.

I am aware that the KIP is currently light on implementation details, but
would like to get some feedback on the general approach before fully
speccing everything.

The KIP can be found at


https://cwiki.apache.org/confluence/display/KAFKA/KIP-317%3A+Add+end-to-end+data+encryption+functionality+to+Apache+Kafka


I would very much appreciate any feedback!

Best regards,
Sönke





--
Sönke Liebau
Partner
Tel. +49 179 7940878
OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel - Germany
