Hi all,

Many thanks to everyone who reviewed this draft, all of the reviews have been 
really helpful in improving the document. As you may have seen, we uploaded a 
new version to Datatracker this morning which we hope addresses the comments 
made in the reviews.

Roman: Particular thanks for your comprehensive review, we hope we've addressed 
all your comments in our new version of the document. We've put our responses 
to your individual comments in-line below (apologies in advance for any 
unpleasant formatting).

Many thanks,
Andy

-----Original Message-----
From: Roman Danyliw via Datatracker <nore...@ietf.org> 
Sent: 19 January 2023 03:46
To: The IESG <i...@ietf.org>
Cc: draft-ietf-opsec-indicators-of-comprom...@ietf.org; opsec-cha...@ietf.org; 
opsec@ietf.org; furr...@gmail.com; furr...@gmail.com
Subject: Roman Danyliw's No Objection on 
draft-ietf-opsec-indicators-of-compromise-03: (with COMMENT)

Roman Danyliw has entered the following ballot position for
draft-ietf-opsec-indicators-of-compromise-03: No Objection

----------------------------------------------------------------------
COMMENT:
----------------------------------------------------------------------

Thank you to Kathleen Moriarty for the SECDIR review.

** Abstract
.  It
   highlights the need for IoCs to be detectable in implementations of
   Internet protocols, tools, and technologies - both for the IoCs'
   initial discovery and their use in detection - and provides a
   foundation for new approaches to operational challenges in network
   security.

What "new approaches" are being suggested?  It wasn't clear from the body of the
text.

>> No new approaches are being suggested; that's not the intention of the 
>> abstract or the document. The meaning is: this document provides a 
>> foundation of knowledge, to allow others to experiment and explore new 
>> approaches, which would meet the operational challenges in network security.

** Section 1.
   intrusion set (a
   collection of indicators for a specific attack)

This definition is not consistent with the use of the term as I know it.  In my
experience an intrusion set is set of activity attributed to an actor.  It may
entail multiple campaigns by a threat actor, and consist of many attacks, TTPs
and intrusions.  APT33 is an example of an intrusion set.

>> Thanks for the catch. This is clearly not right and has been updated to 
>> "intrusion set (a set of malicious activity and behaviours attributed to one 
>> threat actor)".

** Section 1.  Editorial. s/amount intelligence practitioners/cyber
intelligence practitioners/

>> Changed for clarification

** Section 2.  Editorial.
   used in malware strains to
   generate domain names periodically.  Adversaries may use DGAs to
   dynamically identify a destination for C2 traffic, rather than
   relying on a list of static IP addresses or domains that can be
   blocked more easily.

-- Isn't the key idea that these domain names are algorithmically generated on
a periodic basis?
-- Don't adversaries compute, rather than identify, the C2 destination?
-- Be clearer on the value proposition of dynamic generation vs hard-coded IPs.

NEW
used in malware strains to periodically generate domain names algorithmically. 
This malware uses a DGA to compute a destination for C2 traffic, rather than
relying on a pre-assigned list of static IP addresses or domains that can be
blocked more easily if extracted from the malware.

>> Updated to use your preferred wording. Both definitions convey the key 
>> concepts i.e. that domain names are generated via algorithm - however, this 
>> is somewhat self-explanatory from the acronym, and including 'periodically' 
>> captures the periodic basis. The concept that blocking motivates use of DGAs 
>> is well-included in both versions.
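For readers less familiar with DGAs, the periodic, algorithmic generation discussed above can be sketched in a few lines. This is our own toy illustration, not taken from the draft; the hashing scheme and the `.example` suffix are illustrative assumptions (real malware families use far more varied schemes):

```python
import hashlib
from datetime import date

def toy_dga(day: date, count: int = 3) -> list[str]:
    """Hypothetical DGA: derive candidate C2 domains from the date.

    Illustrates why the domains change periodically and so cannot all
    be blocked from a static, hard-coded list.
    """
    domains = []
    for i in range(count):
        seed = f"{day.isoformat()}-{i}".encode()
        digest = hashlib.sha256(seed).hexdigest()
        domains.append(digest[:12] + ".example")
    return domains

# The same date always yields the same candidates, so defenders who
# recover the algorithm can pre-compute and block upcoming domains,
# while a static blocklist built yesterday misses today's set.
assert toy_dga(date(2023, 1, 19)) == toy_dga(date(2023, 1, 19))
assert toy_dga(date(2023, 1, 19)) != toy_dga(date(2023, 1, 20))
```

This also shows why blocking motivates DGA use: each day's domains only need to resolve briefly, whereas a fixed IP or domain list can be extracted and blocked once.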

** Section 2.  Kill chains need not be restricted to the seven phases defined
in the original Lockheed model.

>> Agreed; removed the number to future-proof the text and remove bias toward 
>> any particular kill chain.

** Section 3.2.1  Editorial.
   IoCs are often discovered initially through manual investigation or
   automated analysis.

Aren't manual and automated the only two options?  Perhaps s/IoCs are often
discovered/IoCs are discovered/

>> Apart from sharing, but that's probably not 'discovering' in the sense it's 
>> intended here. Changed.

** Section 3.2.1.
   They can be discovered in a range of sources,
   including in networks and at endpoints

What is "in networks" in this context?  Does it mean by monitoring the network?

>> It's places where IoCs exist, so perhaps "on the wire" reads better here? 
>> Updated.

** Section 3.2.1.
   Identifying a particular protocol run related to an
   attack
What is a "protocol run"? Is that a given session of a given protocol?

>> An exchange, or sequence of exchanged messages, in a protocol. It's more 
>> wordy but hopefully clearer now - changed to "Identifying a particular 
>> exchange (or sequence of exchanged messages) related to an attack is of 
>> limited benefit if indicators cannot be extracted and subsequently 
>> associated with a later related exchange of messages or artefacts in the 
>> same, or in a different, protocol."

** Section 3.2.1

   Identifying a particular protocol run related to an
   attack is of limited benefit if indicators cannot be extracted and
   subsequently associated with a later related run of the same, or a
   different, protocol.

-- Is this text assuming that the indicators to identify the flow need to come
from the network?  Couldn't one have reverse engineered a malware sample and
used that as the basis of the IoC to watch for?

>> This text assumed that wherever the indicator is initially discovered, it is 
>> often later found/detected in protocols. The emphasis is more on the 'and 
>> subsequently associated' portion of the text. Rephrased to include "or, once 
>> they are extracted, cannot be": "Identifying a particular exchange (or 
>> sequence of exchanged messages) related to an attack is of limited benefit 
>> if indicators cannot be extracted, or, once they are extracted, cannot be 
>> subsequently associated with a later related exchange of messages or 
>> artefacts in the same, or in a different, protocol."

-- Wouldn't there be some residual value in identifying known attack traffic as
a one-off, if nothing more than to timestamp the activity of the threat actor?

>> Yes; that's why we went with "limited benefit", and not "no benefit".

** Section 3.2.3.  In addition to ISACs, the term ISAO is also used (at least
in the US)
OLD
   often
   dubbed Information Sharing and Analysis Centres (ISACs)
NEW
   often
   dubbed Information Sharing and Analysis Centres (ISACs) or Information
   Sharing and Analysis Organizations (ISAOs)

>> Added; thanks for the US perspective.

** Section 3.2.3.  s/intel feeds/intelligence feeds/

>> Added

** Section 3.2.3. s/international Computer Emergency Response Teams
(CERTs)/internal Computer Security Incident Response Teams (CSIRTs)/

>> We did intend to refer to international CERTs here, but we could add a 
>> reference to internal CSIRTs if that would be useful.

** Section 3.2.3
   Whomever
   they are, sharers commonly indicate the extent to which receivers may
   further distribute IoCs using the Traffic Light Protocol [TLP].

Perhaps weaken the claim that TLP is the common way to pass redistribution
guidance, unless there is a strong citation to support it.

>> Have weakened the claim, as suggested, to TLP being an example of a sharing 
>> framework.

** Section 3.2.4
   For IoCs to provide defence-in-depth (see Section 6.1), which is one
   of their key strengths, and so cope with different points of failure,
   they should be deployed in controls monitoring networks and endpoints
   through solutions that have sufficient privilege to act on them.

I'm having trouble unpacking this sentence.

>> It is quite unwieldy - reworded.

-- Even with the text in Section 6.1, I don't follow how IoCs provide defense
in depth.  It's the underlying technology/controls performing mitigation that
provide this defense.

>> Defence-in-depth meaning to account for and defend against different 
>> failures at different parts of any system. Combining different types of IoCs 
>> provides this - as does use of IoCs across the protocol stack, as well as 
>> across the security controls on a network. A broad range of IoCs reinforces 
>> the defence-in-depth provided by the deployment and vice versa. Added some 
>> text to this effect.

-- what is a "controls monitoring networks"?
>> By controls monitoring networks, we meant security appliances monitoring 
>> network traffic, but this phrase has been reworded.

-- could more be said about the reference "solutions"
>> Expanding into solutions would begin to greatly increase the scope of the 
>> document, so we would prefer not to comment on that unless it's necessary 
>> for the document to be useful.

** Section 3.2.4
   While IoCs may be manually assessed after
   discovery or receipt, significant advantage may be gained by
   automatically ingesting, processing, assessing, and deploying IoCs
   from logs or intel feeds to the appropriate security controls.

True in certain cases.  Section 3.2.2. appropriately warned that IoCs are of
different quality and that one might need to ascribe different confidence to
them.  Recommend propagating or citing that caution.

>> Added some text to reiterate that caution about the varying quality of IoCs.

** Section 3.2.4.

   IoCs can be particularly effective when deployed in security controls
   with the broadest impact.

-- Could this principle be further explained?  What I got from the subsequent
text was that a managed configuration by a vendor (instead of the end-user) is
particularly effective.

>> The message about vendors is the key one here, but we have expanded a little 
>> to show what this means in the context of a specific enterprise network with 
>> various security controls.

-- It would be useful to explicitly say the obvious which is that "IoC can be
particularly effective _at mitigating malicious activity_"

>> The suggested text has been added.

** Section 3.2.5.

   Security controls with deployed IoCs monitor their relevant control
   space and trigger a generic or specific reaction upon detection of
   the IoC in monitored logs.

Is it just "logs" being monitored by security controls?  Couldn't a network
tap/interface be used too?

>> Indeed it could, added text to expand this from just logs to network 
>> interfaces.
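To make the detection step concrete, the matching a security control performs can be sketched as a lookup of observed values, whether taken from logs or from a network interface, against the deployed IoC set. This is our own minimal illustration, not text from the draft, and all names and values in it are hypothetical:

```python
# Minimal sketch of a security control matching events against
# deployed IoCs. All names and values are illustrative assumptions.
DEPLOYED_IOCS = {
    "domain": {"bad.example.com"},
    "sha256": {
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
    },
}

def check_event(event: dict) -> list[str]:
    """Return an alert string for each IoC type that matches the event.

    The same check applies whether the event came from a monitored log
    or from traffic observed on a network interface.
    """
    alerts = []
    for ioc_type, values in DEPLOYED_IOCS.items():
        observed = event.get(ioc_type)
        if observed in values:
            alerts.append(f"IoC match ({ioc_type}): {observed}")
    return alerts

# A DNS log entry and a file-hash event use the same matching logic;
# a generic or specific reaction would be triggered on a non-empty result.
assert check_event({"domain": "bad.example.com"})
assert check_event({"domain": "good.example.org"}) == []
```

The point of the sketch is only that IoC detection is a clear-cut set membership test, which is what makes it cheap to run across many control points.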

** Section 4.1.1.  Editorial. This section has significant similarity with
Section 6.1.  Consider if this related material can be integrated or
streamlined.

>> These sections do cover similar ground, but coming at it from different 
>> angles. The aim for 6.1 is to motivate why, based on defence-in-depth 
>> principles, IoCs are valuable as part of the wider approach, while 4.1.1 is 
>> starting from IoCs and showing why they permit defence-in-depth principles. 
>> We felt covering this from both angles was useful, but are open to 
>> suggestions on how it could be more so.

** Section 4.1.1.  Editorial.

   Anti-Virus (AV) and Endpoint Detection and
   Response (EDR) products deploy IoCs via catalogues or libraries to
   all supported client endpoints

Is it "all supported client endpoints" or "client endpoints"?  What does "all"
add?

>> This is to convey that the AV/EDR solution will be able to *uniformly* 
>> protect all client endpoints that are kept up-to-date and covered by the 
>> AV/EDR solution; however, we have removed the "all" in case it confuses matters.

** Section 4.1.1.

   Some types of IoC may be present
   across all those controls while others may be deployed only in
   certain layers.

What is a layer?  Is that layer in a protocol stack or a "defense in depth"
layer?

>> A layer here is a "defence in depth" layer, have clarified what is meant.

** Section 4.1.1.  I don't understand how the two examples in this section
illuminate the thesis of the opening paragraph that almost all modern cyber
defense tools rely on indicators.

>> The thesis of the opening paragraph is more that IoCs provide multiple 
>> layers of defence, and that is why they are used in modern cyber defence 
>> tools. The thesis that the examples are supporting is the support IoCs 
>> provide to defence-in-depth rather than the ubiquity of IoCs in modern 
>> tools. Would more clarity on the thesis be useful here?

** Section 4.1.1.  What is "estate-wide patching"?  Is that the same as
"enterprise-wide"?

>> In some cases, yes. However, we don't think that's a helpful change - for 
>> instance, govt estate spans many departments (and aren't enterprises), and 
>> estate is changing beyond traditional 'enterprise-owned' or 
>> 'enterprise-located' terms. We find that IT estate is a common phrase, so 
>> estate-wide seems understandable enough.

** Section 4.1.2.  With respect, the thesis of this section is rather
simplistic and fails to capture the complexity and expertise required to field
IoCs.  No argument that a small manufacturer may be a target.  However, there
is a degree of expertise and time required to be able to load and curate these
IoCs.  In particular, I am challenged by the following sentence, "IoCs are
inexpensive, scalable, and easy to deploy, making their use particularly
beneficial for smaller entities ..."  My experience is that small business even
struggle with these activities.

IMO, the thesis (mentioned later in the text) should be that the development of
IoCs can be left to better-resourced organizations.  Organizations without the
ability to do so could still benefit from the shared threat intelligence.  My
experience is that small businesses even struggle with these activities.

>> The barrier of entry is getting lower. Once an organisation has the data in, 
>> threat feeds are sometimes free with SIEM tool purchases, and require simply 
>> a 'switch on', or OOTB detections or playbooks to be deployed. We agree that 
>> it's not zero-expertise, or zero-cost, but that's not what this text says. 
>> It says that threat matches can be made by small organisations, without 
>> needing to conduct the threat research themselves.
>> This is part of the thesis of the paragraph as intended - that smaller 
>> organisations are in a position to more easily benefit from the threat 
>> intelligence shared by larger organisations with more resources if they 
>> receive IoC feeds. Small organisations may struggle with deploying IoCs, but 
>> it is a much simpler task than finding threat indicators themselves, and 
>> simpler than deploying more complex controls such as the machine learning 
>> based controls discussed in the paragraph. We've added some text to 
>> acknowledge that even just deploying IoCs will require some expertise; 
>> hopefully that adds sufficient nuance to the section?

Additionally:
   One reason for this is that use of IoCs does not require the same
   intensive training as needed for more subjective controls, such as
   those based on manual analysis of machine learning events which
   require further manual analysis to verify if malicious.

-- what are "subjective controls"?  The provided example of a "machine learning
event" is the output of such a system?

>> Yes, an IoC match is clear cut whereas suspicious activity requires more 
>> nuance, and you're right that the example text doesn't quite make sense. 
>> We've reworded to make this a little clearer.

** Section 4.1.4.  This section has high overlap with Section 3.2.3.

-- Can they be streamlined?

>> 3.2.3 is the "how" you do it and 4.1.4 is the "why" you would. The titles are 
>> similar, but 3.2.3 covers sharing as a part of the IoC lifecycle and 
>> methods, and 4.1.4 is "why they are easy to share".

-- Can the standards for shared indicators be made consistent?

>> Good idea, have consolidated the references into one section.

-- (author conflict of interest) Consider if you want to list IETF's own
indicator sharing format, RFC7970/RFC8727

>> We have added a reference.

** Section 4.1.4

   Quick and easy sharing of IoCs gives blanket coverage for
   organisations and allows widespread mitigation in a timely fashion -
   they can be shared with systems administrators, from small to large
   organisations and from large teams to single individuals, allowing
   them all to implement defences on their networks.

Isn't this text conveying the same idea as was said in the section right before
it (Section 4.1.3)?

>> Both of those sections are focused on wide benefits and sharing, but are 
>> aiming to make different points. The first section is aiming to draw out the 
>> wide benefits of deploying IoCs within an organisation, and the wide reach 
>> one IoC deployment can have, while the second is more focused on how easy it 
>> is to share between organisations. We've made some changes to try to make 
>> the points a little more distinct.

** Section 4.1.5.  Isn't the thesis of automatic deployment of indicators
already stated in Section 3.2.4?

>> It is, but it's covered more briefly in a different context. The first 
>> paragraph discusses considerations relating to the deployment part of the 
>> IoC lifecycle, whereas this paragraph is focused more on the time savings 
>> from automation.

** Section 4.1.5

   While it is still necessary to invest effort both to enable efficient
   IoC deployment, and to eliminate false positives when widely
   deploying IoCs, the cost and effort involved can be far smaller than
   the work entailed in reliably manually updating all endpoint and
   network devices.

What is the false positive being referenced here?  Is it false positive matches
against the IoC?  If so, how is that related to manually updated endpoints?

>> The effort to remove a false positive (i.e. removing an IoC that also 
>> matches on non-malicious traffic) when it has been deployed is still usually 
>> less effort than manually deploying across an estate on individual endpoints 
>> - and so, on balance, automation and the coverage it brings is considered a 
>> benefit that outweighs the risk of false positives being automatically 
>> deployed.
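One common way to keep that false-positive risk manageable, sketched here under assumed names as our own illustration rather than anything in the draft, is to screen an incoming IoC feed against a local allowlist of known-benign values before automatic deployment:

```python
# Sketch: screen an incoming IoC feed against a local allowlist so
# known-benign values are held for analyst review rather than being
# auto-deployed. All names and values are illustrative assumptions.
ALLOWLIST = {"update.vendor.example", "cdn.internal.example"}

def screen_feed(feed: list[str]) -> tuple[list[str], list[str]]:
    """Split a feed into (auto-deployable, held-for-review) IoCs."""
    deploy, review = [], []
    for ioc in feed:
        (review if ioc in ALLOWLIST else deploy).append(ioc)
    return deploy, review

deploy, review = screen_feed(["evil.example.net", "update.vendor.example"])
assert deploy == ["evil.example.net"]
assert review == ["update.vendor.example"]
```

This keeps the balance described above: automation provides the coverage, while the small residual cost is reviewing (or later removing) the few indicators that would also match non-malicious traffic.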

** Section 4.1.7.  No disagreement on the need for context.  However, I'm
confused about how this text is an "opportunity" and the new material it is
adding.  In my experience with the classes of organizations named as
distributing IoCs in Section 3.2.3. (i.e., ISACs, ISAO, CSIRTS, national cyber
centers), context is "table stakes" for sharing.  How does a receiving party
know how to act on the IoC otherwise?

>> The importance of context isn't covered as explicitly elsewhere in the draft 
>> which is why we wanted to cover it here. As described in 5.3, sometimes this 
>> (helpful) context is removed for privacy reasons. Additionally, once an IoC 
>> is brought into an enterprise, analysts need the context to action it - 
>> different to giving the context in those exchanges. This is important 
>> (knowing how to act on an IoC), e.g. imagine a setup that simply 
>> blocks/flags any connection associated with IoCs received on a feed - it's 
>> good that in practice more context is shared, but as this is an 
>> informational document, and an introduction to IoCs for some readers, we 
>> wanted to justify why the context is important. As regards the opportunity 
>> inherent in this, we have added some text to bring out how IoCs make this 
>> attribution easy.

** Section 5.1.1

   Malicious IP addresses and domain names can also be
   changed between campaigns, but this happens less frequently due to
   the greater pain of managing infrastructure compared to altering
   files, and so IP addresses and domain names provide a less fragile
   detection capability.

Please soften this claim or cite a reference.  How often an infrastructure
changes between campaigns can vary widely between threat actors.

>> It can vary, but we think it's quite intuitive that adding a white space or 
>> comment and recompiling code is less effort than migrating infrastructure 
>> and registering domains. Still, we have softened this claim as there is no 
>> citable reference for this.

** Section 5.1.2
   To be used in attack defence, IoCs must first be discovered through
   proactive hunting or reactive investigation.

Couldn't they also be shared with an organization too?

>> They can, but that isn't 'discovery', as described earlier. We're covering 
>> discovery separately to sharing, as there are different resource and 
>> validity points to both. Here discovery is the research of first identifying 
>> the IoC.

** Section 5.3.

   Self-censoring by sharers appears more prevalent and more extensive
   when sharing IoCs into groups with more members, into groups with a
   broader range of perceived member expertise (particularly the further
   the lower bound extends below the sharer's perceived own expertise),
   and into groups that do not maintain strong intermember trust.

Is there a citable basis for these assertions?

>> We think there is value in documenting the common wisdom and lived 
>> experience of these authors (and many who have reviewed it and agree). To 
>> make clear it's not taken from an existing paper or place on the internet, 
>> we have added 'in our experience', and a reference for some aspects.

** Section 5.3.

   Research
   opportunities exist to determine how IoC sharing groups' requirements
   for trust and members' interaction strategies vary and whether
   sharing can be optimised or incentivised, such as by using game
   theoretic approaches.

IMO, this seems asymmetric to call out.  In almost every section there would be
the opportunity for research.

>> Noted, have removed the research suggestion here.

** Section 5.4.

   The adoption of automation can also enable faster and easier
   correlation of IoC detections across log sources, time, and space.

-- Does "log sources" also mean network monitoring?
-- what is "space" in this context? Is it the same part of the network?

>> We've called out network monitoring in addition to log sources, and 
>> clarified that space means the physical location and geographies.

** Section 6.1.  The new "best practice" in this section isn't clear. 
"Defense-in-Depth" has been previously mentioned.

>> Noted, renamed the section to clarify that it is covering defence in depth 
>> more thoroughly.

** Section 6.1.  Editorial.

   If an attack happens, then you hope an endpoint solution will pick it
   up.

Consider less colloquial language.

>> We've updated this to use more formal language.

** Section 6.1.  It isn't clear to me how the example of NCSC's PDNS service
demonstrated defense in depth.  What I took from it was a successful, managed
security offering.  Where was the "depth"?

>> PDNS forms one layer of a defence in depth solution where domain names are 
>> the IoCs. This shows how IoCs can form an entire layer of such a solution; 
>> not using IoCs can mean an entire layer of defence removed. We've added some 
>> text to draw this out a little.

** Section 6.1.

  but if the IoC is on PDNS, a consistent defence is
   maintained. This offers protection, regardless of whether the
   context is a BYOD environment

In a BYOD environment, why is consistent defense ensured?  There is no
assurance that the device will be using PDNS.

>> There is no guarantee that PDNS will be used on a BYOD, but we were 
>> referring to a consistent defence in the face of new threats - i.e. even if 
>> a BYOD is not updated, PDNS will be, accounting for different failures at 
>> different points of the system. We've added some clarification but can make 
>> more changes if the point is still unclear.

** Section 6.2.  It seems odd to nest the Security Considerations under best
practices, especially since it is recommending speculative, not-yet-performed
research.  Additionally, per the "privacy-preserving" research, the privacy
concerns noted in Section 5.3 don't seem clear enough to action.

>> Good point, we have moved this into its own section and removed the 
>> reference to speculative research.



_______________________________________________
OPSEC mailing list
OPSEC@ietf.org
https://www.ietf.org/mailman/listinfo/opsec
