05-2 Have you devised measures to defend against data-oriented attacks?
Determine applicability: Consider this question if data poisoning or evasion attacks, through tampering or other misconduct, are expected during the collection, construction, or storage of training data.
• An AI service under development or operation can be exposed to attacks that intentionally corrupt the training data or minimally perturb input data so that the model produces outputs different from those expected, causing harm. Countermeasures should therefore be reviewed and implemented during this process.
• There are not yet many documented cases of adversarial attacks on data targeting medical devices and software, but their theoretical possibility has been raised repeatedly. Research on data-oriented attacks and defense techniques is ongoing, and defensive measures should be established by tracking recent research trends.
• Various studies have also discussed how adversarial attacks on chest X-ray images, ocular fundus images, and skin images could theoretically be abused to fraudulently claim health insurance or to shorten clinical trials. Among the many poisoning and evasion attacks possible against medical data, caution is warranted regarding HopSkipJump, the Fast Gradient Method, Carlini & Wagner, Crafting Decision Tree, and Zeroth Order Optimization, all of which are theoretically feasible [16].
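To illustrate the kind of evasion attack named above, the following is a minimal sketch of the Fast Gradient (Sign) Method against a toy logistic-regression classifier. The model, weights, and inputs are all hypothetical examples, not from the source; real attacks target deep networks (e.g. chest X-ray classifiers) and are typically run with dedicated tooling rather than hand-written gradients.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Fast Gradient Sign Method against a binary logistic-regression model.

    For cross-entropy loss, the gradient of the loss w.r.t. the input x
    is (p - y) * w, where p is the predicted probability of class 1.
    The attack nudges x by eps in the direction that increases the loss.
    """
    p = sigmoid(w @ x + b)          # predicted probability of class 1
    grad_x = (p - y) * w            # d(loss)/dx for binary cross-entropy
    return x + eps * np.sign(grad_x)

# Toy classifier and input (illustrative values only)
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, 0.2])            # score w @ x + b = 0.8 > 0: class 1
x_adv = fgsm_perturb(x, y=1, w=w, b=b, eps=0.5)
# The small signed perturbation flips the model's decision:
# score w @ x_adv + b is now negative, so x_adv is classified as class 0
```

The same principle, perturbing each input feature by a small amount in the direction of the loss gradient, underlies attacks on medical imaging models, where perturbations can be imperceptible to clinicians while changing the model's output.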