On 7/13/25 00:36, Robert J. Hansen wrote:
[...]
>> Persistence and evading detection are a tradeoff for the attacker.

> They are not. Often, evading detection assists in persistence by
> reducing the amount of evidence that might trigger a sysadmin's attention.

> If you mean to say persistence always increases the signature that must
> be concealed, there I'd mostly agree.

That is the tradeoff I was referring to:  tampering with the system to enable or improve persistence leaves artifacts for the sysadmin to discover.

>> Further, Mallory may not have the goals you assume.  If Mallory is a
>> professional, making the attack look like a low-skill smash and grab
>> could, for example, be a strategy to avoid raising alarm at the targets that they *are* targets.

> I don't understand. Executing a successful high-profile exfiltration
> from a target which is sure to be spotted by the sysadmin ... avoids
> letting the sysadmin know they've been targeted?

> Targeted *by whom*, maybe. Changing TTP for misattribution purposes is
> definitely a thing. With that change, I agree.

That is more-or-less what I was saying:  if it looks like an opportunistic attack, the target is less likely to expect a repeat performance.  While any half-competent sysadmin is going to treat the incident as a wake-up call to tighten security, getting management to agree without evidence of an ongoing threat can be difficult.

Therefore, Professional Mallory can benefit from making a targeted attack *look* like an opportunistic attack.

>> Logically, the box is most likely to have about the same data, except that a second copy of the same haul is valueless to Mallory.

> Why should this be the default assumption? This is a computer, not a
> cuneiform tablet: data is added and removed constantly. If right now it
> has valuable data, it's at least as possible that in the future it will
> continue to receive valuable data.

Not entirely the same, but the question I was driving at is "how much additional value will Mallory expect from hitting the same target again in six months?"  Now, if the cost of coming back in six months is basically nil, Mallory's cost-benefit calculation is skewed accordingly, and you can expect a return visit even if new data is unlikely.
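That cost-benefit point can be made concrete with a toy expected-value calculation (every number here is made up purely for illustration; none come from real incident data):

```python
def expected_return_value(p_new_data, value_of_new_haul, cost_of_reentry,
                          p_detection, cost_if_caught):
    """Mallory's expected net gain from revisiting a target.

    All parameters are hypothetical: the chance the box holds new data
    worth stealing, the value of that haul, the effort of getting back
    in, and the risk-weighted cost of being noticed.
    """
    expected_gain = p_new_data * value_of_new_haul
    expected_cost = cost_of_reentry + p_detection * cost_if_caught
    return expected_gain - expected_cost

# If re-entry is basically free, even a small chance of new data makes
# the return visit worthwhile:
cheap = expected_return_value(p_new_data=0.1, value_of_new_haul=100,
                              cost_of_reentry=0, p_detection=0.05,
                              cost_if_caught=20)   # positive

# Whereas a costly fresh intrusion needs a likely payoff:
costly = expected_return_value(p_new_data=0.1, value_of_new_haul=100,
                               cost_of_reentry=50, p_detection=0.05,
                               cost_if_caught=20)  # negative
```

The only point of the sketch is the asymmetry: driving `cost_of_reentry` to zero flips the sign of the calculation even when `p_new_data` stays small.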

>> In short, there is no amount of persistence that can save Mallory's access once the target becomes aware of it and gets serious about kicking Mallory out.

> Salt Typhoon would appear to be a counterexample.

As far as I can tell, Salt Typhoon reliably gets kicked out; they just keep finding new targets and/or pulling off new intrusions. (I consider "backdoor found; backdoor removed" to be "Mallory kicked out" although that does *not* preclude Mallory from finding another way in... perhaps the same way that backdoor had been planted...)

Salt Typhoon/GhostEmperor is also an example of why I would not expect logs to exist in the first place.  While they *do* escalate (and how), the remote filesystem access is provided by their malware instead of using system facilities that would keep logs. Malware executing in an unprivileged context could do the same with the user's files.
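To illustrate that last point: nothing privileged, and nothing that writes a transfer log, is needed for code running as the user to enumerate the user's files. A minimal sketch (the directory and filename pattern are made-up stand-ins, not anything from the Salt Typhoon reporting):

```python
import fnmatch
import os

def harvest(root, pattern="*.gpg"):
    """Collect paths matching `pattern` under `root`, using only
    ordinary unprivileged file APIs -- no sshd, no SFTP, no syslog."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in fnmatch.filter(filenames, pattern):
            hits.append(os.path.join(dirpath, name))
    return sorted(hits)
```

The point is not that this is sophisticated; it is that none of these calls pass through a service whose logs a sysadmin could later review.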

>> If Mallory expects to get back in just as easily six months from now, why leave something that an attentive admin might notice?

> Great question (zero sarcasm). My answer would be, "in the example you
> give, the access is already a persistent access, so the persistency
> objective is already done for Mallory by the negligence of the sysadmin."

Or reluctance by management to allocate needed resources---apparently that server had been getting hit repeatedly since before I worked there.  I was explicitly told to *not* bother tracing how the IRC bot had gotten in, just remove it and consider the box secured.  8-(

>> Clients keep SFTP logs?  Are you assuming that Mallory steals the user's password and then connects to sshd on the user's box to make off with the user's files?

> Remote access, yes.

I have been talking about attacks on clients, not servers, because GnuPG is most typically run on client nodes.  Indeed, as far as I know, the PGP security model is focused on clients---the boxes physically used by the users. Servers are categorically untrusted in PGP.

> In today's environment, you have to work really hard to even have a
> meaningful network perimeter. I'm unconvinced it makes much sense
> nowadays to talk about clean client/server distinctions at the machine
> level. At the app level, maybe.

I agree that there is a serious problem with loose default configurations.  In particular, a box used with keyboard and local display probably should not be running sshd, but it seems that most distributions enable sshd by default.

In other words, on my "client" boxes, there are no SFTP logs because there is no SFTP listener.  Mallory cannot exploit a service that is not running.  :-D
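A quick way to verify the "no listener" claim from the box itself is to try connecting to the port locally; a minimal sketch, with port 22 as the usual sshd/SFTP port:

```python
import socket

def port_is_listening(port, host="127.0.0.1", timeout=0.5):
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# On a client box with no sshd running, this should come back False.
print(port_is_listening(22))
```

(This only checks TCP listeners on the loopback interface; a tool like `ss` or `netstat` on the box gives the full picture.)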


-- Jacob



_______________________________________________
Gnupg-devel mailing list
[email protected]
https://lists.gnupg.org/mailman/listinfo/gnupg-devel
