The described pixel attack works in the digital domain (i.e., by modifying pre-captured images); in other words, the attacker must have access to the digital pipeline. For real-time applications such access is rarely available, and certainly not to ordinary people on the street.
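To illustrate why this is purely a digital-domain operation: applying a one-pixel perturbation is trivial once you can touch the image array; the hard part, which the paper solves with differential evolution, is searching for which pixel and colour flips the classifier. A minimal sketch, assuming a numpy image in height-width-channel uint8 layout (the `apply_one_pixel` helper and the example coordinates are hypothetical, not from the paper's code):

```python
import numpy as np

def apply_one_pixel(image, x, y, rgb):
    """Return a copy of `image` with the single pixel at (x, y) set to `rgb`.

    This performs only the perturbation itself; the one-pixel attack
    (Su et al., 2017) uses differential evolution to search for a tuple
    (x, y, r, g, b) that changes the classifier's prediction.
    """
    perturbed = image.copy()
    perturbed[y, x] = rgb  # numpy images are indexed (row, col) = (y, x)
    return perturbed

# Example: a dummy 32x32 RGB image (CIFAR-10 sized), one pixel changed.
image = np.zeros((32, 32, 3), dtype=np.uint8)
adv = apply_one_pixel(image, x=5, y=7, rgb=(255, 0, 0))
changed = np.argwhere((adv != image).any(axis=-1))
print(len(changed))  # exactly one pixel differs
```

The point being: every step here assumes read/write access to the raw pixel buffer before it reaches the model, which a camera pointed at a street scene does not give an outside attacker.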

However, there are analog countermeasures:

https://arxiv.org/pdf/1602.04504.pdf
https://cvdazzle.com/
https://io9.gizmodo.com/how-fashion-can-be-used-to-thwart-facial-recognition-te-1495648863

I wonder if this will become the hoodie of the 21st century.



"CV Dazzle explores how fashion can be used as camouflage from face-detection technology, the first step in automated face recognition.

The name is derived from a type of World War I naval camouflage called Dazzle, which used cubist-inspired designs to break apart the visual continuity of a battleship and conceal its orientation and size. Likewise, CV Dazzle uses avant-garde hairstyling and makeup designs to break apart the continuity of a face. Since facial-recognition algorithms rely on the identification and spatial relationship of key facial features, like symmetry and tonal contours, one can block detection by creating an “anti-face.”"



https://arxiv.org/abs/1710.08864

One pixel attack for fooling deep neural networks

Jiawei Su, Danilo Vasconcellos Vargas, Sakurai Kouichi (Submitted on 24 Oct 2017)

#  distributed via <nettime>: no commercial use without permission
#  <nettime>  is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: http://mx.kein.org/mailman/listinfo/nettime-l
#  archive: http://www.nettime.org contact: nett...@kein.org
#  @nettime_bot tweets mail w/ sender unless #ANON is in Subject:
