Machine learning model attack and defense: a walkthrough of the mnist_tutorial_tf.py example in cleverhans

Overview

cleverhans is a library for attacking and defending machine learning models; it ships implementations of many attack and defense techniques.

This post walks through the mnist_tutorial_tf.py example that comes with the library.

The tutorial does the following:

  1. Builds a model in TensorFlow and trains it on MNIST.
  2. Generates adversarial examples with FGSM, the Fast Gradient Sign Method (the formula is recalled just below).
  3. Uses adversarial training to make a model more robust to those adversarial examples.
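
For reference, FGSM (from the paper linked in the code's docstring, https://arxiv.org/abs/1412.6572) perturbs an input x by a single step in the direction of the sign of the loss gradient:

  x_adv = x + eps * sign(∇x J(θ, x, y))

where J is the training loss, θ the model parameters, y the true label, and eps the perturbation budget (this tutorial uses eps = 0.3 on pixel values scaled to [0, 1]).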

The full code is below. Note that it must be run in an environment where cleverhans is installed (typically pip install cleverhans; this tutorial targets the TF1-based releases of the library):

"""
This tutorial shows how to generate adversarial examples using FGSM
and train a model using adversarial training with TensorFlow.
It is very similar to mnist_tutorial_keras_tf.py, which does the same
thing but with a dependence on keras.
The original paper can be found at:
https://arxiv.org/abs/1412.6572
"""
# pylint: disable=missing-docstring
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

import logging
import numpy as np
import tensorflow as tf

from cleverhans.compat import flags
from cleverhans.loss import CrossEntropy
from cleverhans.dataset import MNIST
from cleverhans.utils_tf import model_eval
from cleverhans.train import train
from cleverhans.attacks import FastGradientMethod
from cleverhans.utils import AccuracyReport, set_log_level
from cleverhans.model_zoo.basic_cnn import ModelBasicCNN

FLAGS = flags.FLAGS

NB_EPOCHS = 6
BATCH_SIZE = 128
LEARNING_RATE = 0.001
CLEAN_TRAIN = True
BACKPROP_THROUGH_ATTACK = False
NB_FILTERS = 64


def mnist_tutorial(train_start=0, train_end=60000, test_start=0,
                   test_end=10000, nb_epochs=NB_EPOCHS, batch_size=BATCH_SIZE,
                   learning_rate=LEARNING_RATE,
                   clean_train=CLEAN_TRAIN,
                   testing=False,
                   backprop_through_attack=BACKPROP_THROUGH_ATTACK,
                   nb_filters=NB_FILTERS, num_threads=None,
                   label_smoothing=0.1):
  """
  MNIST cleverhans tutorial
  :param train_start: index of first training set example
  :param train_end: index of last training set example
  :param test_start: index of first test set example
  :param test_end: index of last test set example
  :param nb_epochs: number of epochs to train model
  :param batch_size: size of training batches
  :param learning_rate: learning rate for training
  :param clean_train: perform normal training on clean examples only
                      before performing adversarial training.
  :param testing: if true, complete an AccuracyReport for unit tests
                  to verify that performance is adequate
  :param backprop_through_attack: If True, backprop through adversarial
                                  example construction process during
                                  adversarial training.
  :param label_smoothing: float, amount of label smoothing for cross entropy
  :return: an AccuracyReport object
  """

  # Object used to keep track of (and return) key accuracies
  report = AccuracyReport()

  # Set TF random seed to improve reproducibility
  tf.set_random_seed(1234)

  # Set logging level to see debug information
  set_log_level(logging.DEBUG)

  # Create TF session
  if num_threads:
    config_args = dict(intra_op_parallelism_threads=num_threads)
  else:
    config_args = {}
  sess = tf.Session(config=tf.ConfigProto(**config_args))

  # Get MNIST data
  mnist = MNIST(train_start=train_start, train_end=train_end,
                test_start=test_start, test_end=test_end)
  x_train, y_train = mnist.get_set('train')
  x_test, y_test = mnist.get_set('test')

  # Use Image Parameters
  img_rows, img_cols, nchannels = x_train.shape[1:4]
  nb_classes = y_train.shape[1]

  print(img_rows, img_cols, nchannels, nb_classes)  # added by the author; prints: 28 28 1 10

  # Define input TF placeholder
  x = tf.placeholder(tf.float32, shape=(None, img_rows, img_cols,
                                        nchannels))
  y = tf.placeholder(tf.float32, shape=(None, nb_classes))

  # Train an MNIST model
  train_params = {
      'nb_epochs': nb_epochs,
      'batch_size': batch_size,
      'learning_rate': learning_rate
  }
  eval_params = {'batch_size': batch_size}
  fgsm_params = {
      'eps': 0.3,
      'clip_min': 0.,
      'clip_max': 1.
  }
  rng = np.random.RandomState([2017, 8, 30])

  def do_eval(preds, x_set, y_set, report_key, is_adv=None):
    acc = model_eval(sess, x, y, preds, x_set, y_set, args=eval_params)
    setattr(report, report_key, acc)
    if is_adv is None:
      report_text = None
    elif is_adv:
      report_text = 'adversarial'
    else:
      report_text = 'legitimate'
    if report_text:
      print('Test accuracy on %s examples: %0.4f' % (report_text, acc))

  if clean_train:
    model = ModelBasicCNN('model1', nb_classes, nb_filters)
    preds = model.get_logits(x)
    loss = CrossEntropy(model, smoothing=label_smoothing)

    def evaluate():
      do_eval(preds, x_test, y_test, 'clean_train_clean_eval', False)

    train(sess, loss, x_train, y_train, evaluate=evaluate,
          args=train_params, rng=rng, var_list=model.get_params())

    # Calculate training error
    if testing:
      do_eval(preds, x_train, y_train, 'train_clean_train_clean_eval')

    # Initialize the Fast Gradient Sign Method (FGSM) attack object and
    # graph
    fgsm = FastGradientMethod(model, sess=sess)
    adv_x = fgsm.generate(x, **fgsm_params)
    preds_adv = model.get_logits(adv_x)

    # Evaluate the accuracy of the MNIST model on adversarial examples
    do_eval(preds_adv, x_test, y_test, 'clean_train_adv_eval', True)

    # Calculate training error
    if testing:
      do_eval(preds_adv, x_train, y_train, 'train_clean_train_adv_eval')

    print('Repeating the process, using adversarial training')

  # Create a new model and train it to be robust to FastGradientMethod
  model2 = ModelBasicCNN('model2', nb_classes, nb_filters)
  fgsm2 = FastGradientMethod(model2, sess=sess)

  def attack(x):
    return fgsm2.generate(x, **fgsm_params)

  loss2 = CrossEntropy(model2, smoothing=label_smoothing, attack=attack)
  preds2 = model2.get_logits(x)
  adv_x2 = attack(x)

  if not backprop_through_attack:
    # For the fgsm attack used in this tutorial, the attack has zero
    # gradient so enabling this flag does not change the gradient.
    # For some other attacks, enabling this flag increases the cost of
    # training, but gives the defender the ability to anticipate how
    # the attacker will change their strategy in response to updates to
    # the defender's parameters.
    adv_x2 = tf.stop_gradient(adv_x2)
  preds2_adv = model2.get_logits(adv_x2)

  def evaluate2():
    # Accuracy of adversarially trained model on legitimate test inputs
    do_eval(preds2, x_test, y_test, 'adv_train_clean_eval', False)
    # Accuracy of the adversarially trained model on adversarial examples
    do_eval(preds2_adv, x_test, y_test, 'adv_train_adv_eval', True)

  # Perform and evaluate adversarial training
  train(sess, loss2, x_train, y_train, evaluate=evaluate2,
        args=train_params, rng=rng, var_list=model2.get_params())

  # Calculate training errors
  if testing:
    do_eval(preds2, x_train, y_train, 'train_adv_train_clean_eval')
    do_eval(preds2_adv, x_train, y_train, 'train_adv_train_adv_eval')

  return report


def main(argv=None):
  """
  Run the tutorial using command line flags.
  """
  from cleverhans_tutorials import check_installation
  check_installation(__file__)

  mnist_tutorial(nb_epochs=FLAGS.nb_epochs, batch_size=FLAGS.batch_size,
                 learning_rate=FLAGS.learning_rate,
                 clean_train=FLAGS.clean_train,
                 backprop_through_attack=FLAGS.backprop_through_attack,
                 nb_filters=FLAGS.nb_filters)


if __name__ == '__main__':
  flags.DEFINE_integer('nb_filters', NB_FILTERS,
                       'Model size multiplier')
  flags.DEFINE_integer('nb_epochs', NB_EPOCHS,
                       'Number of epochs to train model')
  flags.DEFINE_integer('batch_size', BATCH_SIZE,
                       'Size of training batches')
  flags.DEFINE_float('learning_rate', LEARNING_RATE,
                     'Learning rate for training')
  flags.DEFINE_bool('clean_train', CLEAN_TRAIN, 'Train on clean examples')
  flags.DEFINE_bool('backprop_through_attack', BACKPROP_THROUGH_ATTACK,
                    ('If True, backprop through adversarial example '
                     'construction process during adversarial training'))

  tf.app.run()
  

The main steps of the implementation are:

  1. Import MNIST, ModelBasicCNN, and the other functions and classes from cleverhans, along with required libraries such as numpy and TensorFlow.
  2. After the installation check, main() enters the mnist_tutorial function.
  3. Load the MNIST dataset (a quick shape check is sketched right after the snippet):
  mnist = MNIST(train_start=train_start, train_end=train_end,
                test_start=test_start, test_end=test_end)
  x_train, y_train = mnist.get_set('train')
  x_test, y_test = mnist.get_set('test')
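
  As a quick sanity check of what get_set() returns, the shapes below follow from standard MNIST plus the dimensions printed by the code above (a sketch; pixel values are floats in [0, 1], consistent with clip_min/clip_max further down):

    print(x_train.shape, y_train.shape)  # (60000, 28, 28, 1) (60000, 10)
    print(x_test.shape, y_test.shape)    # (10000, 28, 28, 1) (10000, 10)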
  4. Define the parameters for training, evaluation, and the FGSM attack (a hand-rolled equivalent of the attack is sketched after the snippet):
  train_params = {
      'nb_epochs': nb_epochs,
      'batch_size': batch_size,
      'learning_rate': learning_rate
  }
  eval_params = {'batch_size': batch_size}
  fgsm_params = {
      'eps': 0.3,
      'clip_min': 0.,
      'clip_max': 1.
  }
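
  Here eps is the per-pixel perturbation budget and clip_min/clip_max keep the perturbed input a valid image. A hand-rolled FGSM step with these parameters might look like the following (a sketch of the idea, not cleverhans' internals; some_model is a hypothetical function mapping images to logits):

    import tensorflow as tf

    x = tf.placeholder(tf.float32, shape=(None, 28, 28, 1))
    y = tf.placeholder(tf.float32, shape=(None, 10))
    logits = some_model(x)  # hypothetical stand-in for any classifier
    loss = tf.losses.softmax_cross_entropy(onehot_labels=y, logits=logits)
    grad, = tf.gradients(loss, x)  # gradient of the loss w.r.t. the input
    adv_x = tf.clip_by_value(x + 0.3 * tf.sign(grad),  # step of size eps
                             0.0, 1.0)                 # clip to [clip_min, clip_max]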
  5. Build and train the classification model (a rough sketch of a comparable CNN follows the snippet):
    model = ModelBasicCNN('model1', nb_classes, nb_filters)
    preds = model.get_logits(x)
    loss = CrossEntropy(model, smoothing=label_smoothing)
    train(sess, loss, x_train, y_train, evaluate=evaluate,
          args=train_params, rng=rng, var_list=model.get_params())
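
  ModelBasicCNN comes from cleverhans' model zoo. As a rough mental model only (an assumption about the architecture; see cleverhans/model_zoo/basic_cnn.py for the real definition), a small MNIST CNN in plain TF1 could look like:

    def simple_cnn_logits(x, nb_filters=64, nb_classes=10):
      # a few strided conv layers followed by a linear readout
      h = tf.layers.conv2d(x, nb_filters, 8, strides=2, padding='same',
                           activation=tf.nn.relu)
      h = tf.layers.conv2d(h, 2 * nb_filters, 6, strides=2, padding='valid',
                           activation=tf.nn.relu)
      h = tf.layers.conv2d(h, 2 * nb_filters, 5, strides=1, padding='valid',
                           activation=tf.nn.relu)
      return tf.layers.dense(tf.layers.flatten(h), nb_classes)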
  6. Evaluate the model's accuracy on adversarial examples generated by FGSM (an un-batched manual equivalent is sketched below):
    fgsm = FastGradientMethod(model, sess=sess)
    adv_x = fgsm.generate(x, **fgsm_params)
    preds_adv = model.get_logits(adv_x)
    do_eval(preds_adv, x_test, y_test, 'clean_train_adv_eval', True)
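
  model_eval takes care of batching internally; a manual, un-batched equivalent of the evaluation above would be (a sketch that works for 10,000 MNIST images, though real code should batch):

    logits_np = sess.run(preds_adv, feed_dict={x: x_test})  # FGSM + forward pass
    acc = float((logits_np.argmax(axis=1) == y_test.argmax(axis=1)).mean())
    print('Test accuracy on adversarial examples: %0.4f' % acc)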
  7. Build the model used for adversarial training. Passing attack=attack to CrossEntropy folds the attack into the training loss (a conceptual sketch follows the snippet):
  model2 = ModelBasicCNN('model2', nb_classes, nb_filters)
  fgsm2 = FastGradientMethod(model2, sess=sess)
  loss2 = CrossEntropy(model2, smoothing=label_smoothing, attack=attack)
  preds2 = model2.get_logits(x)
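
  Conceptually, the attack= argument makes the loss a mix of clean and adversarial cross-entropy, roughly as below (a sketch assuming equal weighting of the two terms; see cleverhans.loss.CrossEntropy for the exact definition):

    xent = tf.losses.softmax_cross_entropy
    adv_training_loss = 0.5 * xent(y, model2.get_logits(x)) + \
                        0.5 * xent(y, model2.get_logits(attack(x)))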
  8. Generate adversarial examples with FGSM and keep feeding them into training (a stripped-down training loop is sketched below):
  adv_x2 = attack(x)
  preds2_adv = model2.get_logits(adv_x2)
  train(sess, loss2, x_train, y_train, evaluate=evaluate2,
        args=train_params, rng=rng, var_list=model2.get_params())
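
  cleverhans' train() drives the session loop; a stripped-down equivalent would be roughly the following (a sketch assuming an Adam optimizer, which matches the learning_rate in train_params; iterate_minibatches is a hypothetical helper yielding shuffled (batch_x, batch_y) pairs):

    optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
    train_op = optimizer.minimize(adv_training_loss)  # combined loss sketched above
    sess.run(tf.global_variables_initializer())
    for epoch in range(6):
      for batch_x, batch_y in iterate_minibatches(x_train, y_train, 128):  # hypothetical helper
        sess.run(train_op, feed_dict={x: batch_x, y: batch_y})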
  9. The output looks roughly as follows:
[INFO 2019-03-24 05:56:02,687 cleverhans] Epoch 0 took 4.41518497467041 seconds
Test accuracy on legitimate examples: 0.9883
[INFO 2019-03-24 05:56:04,867 cleverhans] Epoch 1 took 1.9361844062805176 seconds
Test accuracy on legitimate examples: 0.9906
[INFO 2019-03-24 05:56:07,059 cleverhans] Epoch 2 took 1.9638454914093018 seconds
Test accuracy on legitimate examples: 0.9924
[INFO 2019-03-24 05:56:09,293 cleverhans] Epoch 3 took 1.9894905090332031 seconds
Test accuracy on legitimate examples: 0.9923
[INFO 2019-03-24 05:56:11,457 cleverhans] Epoch 4 took 1.9178931713104248 seconds
Test accuracy on legitimate examples: 0.9924
[INFO 2019-03-24 05:56:13,640 cleverhans] Epoch 5 took 1.9871423244476318 seconds
Test accuracy on legitimate examples: 0.9931

[INFO 2019-03-24 05:56:19,537 cleverhans] Epoch 0 took 4.100754499435425 seconds
Test accuracy on legitimate examples: 0.9773
Test accuracy on adversarial examples: 0.8373
[INFO 2019-03-24 05:56:24,225 cleverhans] Epoch 1 took 3.9488070011138916 seconds
Test accuracy on legitimate examples: 0.9868
Test accuracy on adversarial examples: 0.8790
[INFO 2019-03-24 05:56:28,689 cleverhans] Epoch 2 took 3.899299383163452 seconds
Test accuracy on legitimate examples: 0.9890
Test accuracy on adversarial examples: 0.9029
[INFO 2019-03-24 05:56:33,173 cleverhans] Epoch 3 took 3.9175209999084473 seconds
Test accuracy on legitimate examples: 0.9889
Test accuracy on adversarial examples: 0.9143
[INFO 2019-03-24 05:56:37,661 cleverhans] Epoch 4 took 3.9291722774505615 seconds
Test accuracy on legitimate examples: 0.9896
Test accuracy on adversarial examples: 0.9219
[INFO 2019-03-24 05:56:42,165 cleverhans] Epoch 5 took 3.9305431842803955 seconds
Test accuracy on legitimate examples: 0.9906
Test accuracy on adversarial examples: 0.9393

Process finished with exit code 0
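
The first block of output is the clean-trained model1, whose accuracy on legitimate test examples climbs to 0.9931 over six epochs. The second block is the adversarially trained model2: its accuracy on FGSM adversarial examples rises from 0.8373 to 0.9393, while its accuracy on legitimate examples ends at 0.9906, only slightly below the clean-trained model.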
