Adversarial
Adversarial examples in different norms
In ZOO, seen earlier, the adversarial examples we create are measured in the $L_2$ norm;
JSMA creates $L_0$ examples;
and EAD creates $L_1$ examples.
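As a quick reminder of what these norms measure on the perturbation (using $\delta = x' - x$; these are the standard definitions):

$$\|\delta\|_0 = \#\{i : \delta_i \neq 0\}, \qquad \|\delta\|_1 = \sum_i |\delta_i|, \qquad \|\delta\|_2 = \Big(\sum_i \delta_i^2\Big)^{1/2}, \qquad \|\delta\|_\infty = \max_i |\delta_i|$$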
Adversarial examples in more domains
Use in CNN+RNN models:
Use in speech recognition
Use in NLP: sentiment analysis, fake-news detection, spam filtering.
Improving ZOO -> AutoZOOM
- (Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks)
It makes two main changes:
It uses an updated gradient-estimation method:
The old one (ZOO): coordinate-wise finite-difference estimates, one pair of queries per input dimension.
The new one (AutoZOOM): a scaled random-vector estimate with scaling factor $b \in (1, d)$, where $d$ is the number of input dimensions.
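Roughly, the two estimators can be written as follows (a sketch; $f$ is the attack loss, and $h$, $\beta$, $e_i$, $u$ are placeholder notation rather than symbols taken from the original figures):

$$\hat{g}_i \approx \frac{f(x + h e_i) - f(x - h e_i)}{2h} \quad \text{(old: one coordinate direction } e_i \text{ at a time)}$$

$$\hat{g} \approx b \cdot \frac{f(x + \beta u) - f(x)}{\beta}\, u \quad \text{(new: a random unit vector } u \text{, optionally averaged over several draws)}$$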
Success rate over iterations (figure).
It uses dimension reduction:
Here are the two methods (figure):
Left: autoencoder
Right: bilinear resizing
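A minimal sketch of the dimension-reduction idea under the bilinear option: the perturbation is optimized in a small space and upscaled to the input size before being added to the image. This is not the paper's implementation; `model_loss`, `upscale`, and all parameter values below are placeholders.

```python
# Sketch: zeroth-order attack in a reduced space, upscaled with bilinear interpolation.
import numpy as np
from scipy.ndimage import zoom  # order=1 gives (bi)linear interpolation


def upscale(delta_small, out_hw):
    """Bilinearly upscale an (h, w, c) perturbation to (H, W, c)."""
    h, w, _ = delta_small.shape
    H, W = out_hw
    return zoom(delta_small, (H / h, W / w, 1), order=1)


def estimate_gradient(loss_fn, delta_small, beta=1e-2, q=4, b=1.0):
    """Random-vector gradient estimate in the reduced space.

    b is the tunable scaling factor from the notes above (anywhere from 1
    up to the reduced dimension); q random directions are averaged.
    """
    base = loss_fn(delta_small)
    g = np.zeros_like(delta_small)
    for _ in range(q):
        u = np.random.randn(*delta_small.shape)
        u /= np.linalg.norm(u)                       # unit-length direction
        g += b * (loss_fn(delta_small + beta * u) - base) / beta * u
    return g / q


# Usage sketch: perturb a 299x299 image while only optimizing 32x32x3 values.
x = np.random.rand(299, 299, 3)                      # stand-in for the input image
model_loss = lambda img: float(np.sum(img ** 2))     # stand-in for the attack loss
loss_fn = lambda d: model_loss(x + upscale(d, (299, 299)))

delta_small = np.zeros((32, 32, 3))
for _ in range(10):                                  # a few zeroth-order descent steps
    delta_small -= 0.01 * estimate_gradient(loss_fn, delta_small)
x_adv = np.clip(x + upscale(delta_small, (299, 299)), 0.0, 1.0)
```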
STRUCTURED ADVERSARIAL ATTACK
ADMM is used to solve the resulting subproblems.
In the example figure, ‘ostrich’ is the original label and ‘unicycle’ is the misclassified label.
The image is divided into blocks, a group-sparsity loss is applied to those blocks, and the resulting optimization problem is handled with ADMM; the equations after this processing are not listed here because they are rather long.
The link is here.
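To make the group-sparsity idea concrete, here is a minimal illustration of the regularizer only (my own sketch with non-overlapping blocks, not the paper's overlapping-group ADMM formulation): the perturbation is split into blocks and the penalty sums the $L_2$ norm of each block, which pushes whole blocks to exactly zero so the perturbation concentrates on a few image regions.

```python
# Sketch of a group-sparsity (group-lasso) term over image blocks.
import numpy as np


def group_sparsity(delta, sp=8):
    """Sum of per-block L2 norms of an (H, W, C) perturbation."""
    H, W, _ = delta.shape
    total = 0.0
    for i in range(0, H, sp):
        for j in range(0, W, sp):
            total += np.linalg.norm(delta[i:i + sp, j:j + sp, :])
    return total


# Example: a perturbation confined to one block scores much lower than the
# same total L2 energy spread over the whole image.
delta_local = np.zeros((32, 32, 3))
delta_local[:8, :8, :] = 0.1
delta_spread = np.full((32, 32, 3), 0.1 * np.sqrt(8 * 8 * 3 / (32 * 32 * 3)))
print(group_sparsity(delta_local), group_sparsity(delta_spread))
```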