  • 09-2 Do you have a defense technique in place against model evasion attacks?
    Determine applicability: Consider this question when developing AI diagnosis services in which fraudulent manipulation is expected, such as in medical insurance claims, drug clinical trial subject selection, and/or emergency patient triage, and determine whether the requirement has been satisfied.

    • A model evasion attack is a type of adversarial attack: a technique that degrades the performance of an AI model by tampering with input data in a way so minimal that it is imperceptible to humans (see the sketch below). Anticipated scenarios in medical services include adversarial attacks aimed at fraudulent reimbursement in the fee-for-service system, the generation of adversarial patient data to manipulate the selection of clinical trial participants, and adversarial attacks aimed at changing the order in which emergency patients are examined [43].
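
      The following is a minimal sketch of such an attack, assuming a PyTorch image classifier; it implements the Fast Gradient Sign Method (FGSM), a standard one-step evasion technique. The names `fgsm_attack`, `model`, `image`, `label`, and `epsilon` are illustrative placeholders, not part of the source document.

```python
# Hypothetical model evasion (FGSM) sketch, assuming a PyTorch image
# classifier. `model`, `image`, and `label` stand in for an actual
# diagnostic model and one of its inputs.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """One-step Fast Gradient Sign Method perturbation."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Move each pixel slightly in the direction that increases the loss;
    # epsilon bounds the change so it remains imperceptible to humans.
    adv_image = image + epsilon * image.grad.sign()
    return adv_image.clamp(0.0, 1.0).detach()  # keep pixel values valid
```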

    • AI that processes images such as X-ray, MRI, and ultrasound images is particularly susceptible to model evasion attacks, so techniques such as adversarial training, gradient masking/defensive distillation, and feature squeezing can be considered to mitigate this (see the sketch after this bullet).
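
      As an illustration of the first of these defenses, the sketch below shows one epoch of adversarial training, reusing the hypothetical `fgsm_attack` from the earlier example; `model`, `loader`, and `optimizer` are again placeholder names under the same PyTorch assumption.

```python
# Hypothetical adversarial-training sketch (PyTorch), reusing the
# fgsm_attack function above. Each batch is augmented with adversarial
# versions of its inputs so the model learns to resist small perturbations.
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, epsilon=0.01):
    model.train()
    for images, labels in loader:
        # Craft adversarial examples against the current model state.
        adv_images = fgsm_attack(model, images, labels, epsilon)
        optimizer.zero_grad()
        # Average the loss over clean and adversarial inputs so accuracy
        # on unmodified data is preserved while robustness improves.
        loss = (F.cross_entropy(model(images), labels)
                + F.cross_entropy(model(adv_images), labels)) / 2
        loss.backward()
        optimizer.step()
```

      Feature squeezing, by contrast, is typically applied at inference time: the input is "squeezed" (e.g., its color bit depth is reduced), and an input whose prediction changes noticeably after squeezing is flagged as potentially adversarial.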