  • 09-2 Do you have a defense technique in place against model evasion attacks?
    Determine applicability: Consider this question when developing and implementing AI services in the public sector where high-stakes outcomes are possible, such as harm to human life or property, infringement of equity, or decisions on selecting welfare recipients, and determine whether the requirement has been satisfied.

    • Model evasion attacks trick an AI model by minimally modifying its input data. The image domain is particularly vulnerable to adversarial attacks, since a slight change to an image is imperceptible to humans.
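
    The point above can be sketched concretely. Below is a minimal, illustrative gradient-sign (FGSM-style) evasion attack against a hypothetical linear classifier; the weights, input, and epsilon are toy values chosen for demonstration, not a real deployed model.

    ```python
    # Hypothetical tiny linear classifier: score = w . x, predict +1 if score > 0.
    # FGSM-style attack: x_adv = x + eps * sign(d loss / d x).
    # For a linear score with true label y in {-1, +1}, the gradient sign
    # w.r.t. x is -y * sign(w), so each feature is nudged against the true class.

    def predict(w, x):
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1

    def fgsm_attack(w, x, y, eps):
        # Shift every feature by eps in the loss-increasing direction.
        sign = lambda v: 1 if v > 0 else -1 if v < 0 else 0
        return [xi + eps * (-y) * sign(wi) for wi, xi in zip(w, x)]

    w = [0.5, -0.3, 0.2]   # toy model weights (assumed, for illustration)
    x = [0.4, 0.1, 0.1]    # clean input, correctly classified as +1
    y = predict(w, x)      # +1

    x_adv = fgsm_attack(w, x, y, eps=0.2)
    # A perturbation of only 0.2 per feature flips the prediction to -1.
    ```

    Even this toy example shows the core risk: a perturbation small relative to the input's range is enough to change the model's decision.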

    • Studies have mitigated attacks against text-processing AI models through adversarial training, and against speech recognition AI algorithms through input preprocessing such as downsampling, local smoothing, and quantization. These are among the techniques to consider for mitigating model evasion attacks.
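
    The three preprocessing defenses named above can be sketched for a 1-D audio signal. This is an illustrative sketch assuming sample values in [-1, 1]; the function names and parameters are chosen here for demonstration, not taken from any specific library. Each transform discards the fine-grained detail that adversarial perturbations tend to occupy.

    ```python
    def downsample(signal, factor):
        # Keep every `factor`-th sample, removing high-frequency content.
        return signal[::factor]

    def local_smoothing(signal, k=3):
        # Median filter over a sliding window of odd size k, using
        # reflect padding at the edges.
        half = k // 2
        padded = signal[:half][::-1] + signal + signal[-half:][::-1]
        return [sorted(padded[i:i + k])[half] for i in range(len(signal))]

    def quantize(signal, levels=256):
        # Snap each sample in [-1, 1] to one of `levels` evenly spaced values.
        step = 2.0 / (levels - 1)
        return [round((s + 1.0) / step) * step - 1.0 for s in signal]

    audio = [0.0, 0.5, -0.5, 0.25]          # toy signal for illustration
    cleaned = quantize(local_smoothing(downsample(audio, 1)), levels=256)
    ```

    In practice these transforms are applied to the input before inference, trading a small amount of benign accuracy for robustness against small-amplitude perturbations.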