TensorFlow 2.0 Learning (7): Cats vs. Dogs, Part 2: Training and Saving Models

Overview

Table of Contents

  • 1. Building the Network
  • 2. Reading the tfrecord Data
  • 3. Two Ways to Train and Save the Model
    • 3.1 Feeding data directly with model.fit and saving the model with model.save
    • 3.2 Training with the tf.GradientTape gradient tape and saving the model
  • 4. Predicting a Single Image and Outputting Probabilities


Those familiar with TF 1.x often have a soft spot for Record data, so how do you train on record data in TF 2.0?
The previous post, TensorFlow 2.0 Learning (6): Cats vs. Dogs, Part 1: Creating and Reading record Data, already covered how to generate and read tfRecord data. This post covers training on the record data and two ways of saving the model.


1. Building the Network

Take ResNet50 as an example. The usual approach is to drop the network's final layer and use the rest as a feature-extraction sub-network for the new task, then add a fully connected classification layer sized to your own number of classes. This way you can learn the new task quickly and efficiently on top of the pretrained one.

Download link for the ResNet50 pretrained weights:
https://github.com/keras-team/keras-applications/releases/download/resnet/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5

import tensorflow as tf

# Load ResNet50 without its classification head, initialized from the local weight file
resnet50 = tf.keras.applications.ResNet50(
    weights="./resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5",
    include_top=False)
resnet50.summary()
x = tf.random.normal([8, 224, 224, 3])  # a random batch of 8 images of size 224x224x3
out = resnet50(x)
print(out.shape)

First, load the ResNet50 network from the Keras applications model zoo.
Then generate a random batch of image data, feed it into the loaded network, and check the shape of the output.
The output shape is (8, 7, 7, 2048).

Following the VGG approach, you would Flatten the output above and then attach fully connected layers. Here, instead of going straight to a fully connected layer, global average pooling is used to reduce the dimensionality, followed by a single fully connected layer with softmax as the activation function.
The complete code for the network part is as follows:

resnet50 = tf.keras.applications.ResNet50(
    weights="./resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5",
    include_top=False)
resnet50.summary()
x = tf.random.normal([8, 224, 224, 3])
global_average_layer = tf.keras.layers.GlobalAveragePooling2D()  # (7, 7, 2048) -> (2048,)
fc = tf.keras.layers.Dense(2, activation="softmax")  # 2 classes: cat and dog
model = tf.keras.Sequential([resnet50, global_average_layer, fc])

out = model(x)
print(out.shape)  # (8, 2)
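
Since the point of loading pretrained weights is fast adaptation, you may also want to freeze the backbone so that only the new head gets optimized. A small optional sketch (freezing is my addition, not part of the original code):

resnet50.trainable = False  # freeze the pretrained feature extractor
model.summary()             # trainable parameters now come only from the Dense head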

2. Reading the tfrecord Data

batch_size = 16
train_record_path = "./train.record"
test_record_path = "./test.record"
# Calling this gives us a Dataset (tf.data.Dataset); loosely speaking, it holds all the Examples we wrote earlier.
train_dataset = tf.data.TFRecordDataset(train_record_path)
test_dataset = tf.data.TFRecordDataset(test_record_path)

# Define a parsing function
feature_description = {
    'image/filename': tf.io.FixedLenFeature([], tf.string),
    'image/class': tf.io.FixedLenFeature([], tf.int64),
    'image/encoded': tf.io.FixedLenFeature([], tf.string)
}

def parse_example(serialized_example):
    feature_dict = tf.io.parse_single_example(serialized_example, feature_description)
    image = tf.io.decode_jpeg(feature_dict['image/encoded'])  # decode the JPEG image
    image = tf.image.resize_with_crop_or_pad(image, 224, 224)  # crop or pad to 224x224
    image = tf.reshape(image, [224, 224, 3])
    image = tf.cast(image, tf.float32)  # note: pixel values stay in [0, 255]; no normalization here
    return image, feature_dict['image/class']

train_dataset = train_dataset.map(parse_example)
test_dataset = test_dataset.map(parse_example)

train_dataset = train_dataset.repeat().shuffle(2000).batch(batch_size).prefetch(3)
test_dataset = test_dataset.repeat().shuffle(2000).batch(batch_size).prefetch(3)
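
Before training, it is worth pulling one batch to confirm the pipeline works. A quick sanity check (assuming the record files from the previous article are in place):

for images, labels in train_dataset.take(1):
    print(images.shape, labels.shape)  # expected: (16, 224, 224, 3) (16,)
    print(labels.numpy())              # the 0/1 class ids written into the records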

3. Two Ways to Train and Save the Model

3.1 Feeding data directly with model.fit and saving the model with model.save

This method inherits the strengths of Keras: training is fast, and the saved h5 model contains both the network structure and the weight parameters. It is the training style strongly recommended in 2.0.
The complete code is as follows:

import tensorflow as tf

learning_rate = 0.001
batch_size = 16

train_record_path = "./train.record"
test_record_path = "./test.record"
# Calling this gives us a Dataset (tf.data.Dataset); loosely speaking, it holds all the Examples we wrote earlier.
train_dataset = tf.data.TFRecordDataset(train_record_path)
test_dataset = tf.data.TFRecordDataset(test_record_path)

# Define a parsing function
feature_description = {
    'image/filename': tf.io.FixedLenFeature([], tf.string),
    'image/class': tf.io.FixedLenFeature([], tf.int64),
    'image/encoded': tf.io.FixedLenFeature([], tf.string)
}

def parse_example(serialized_example):
    feature_dict = tf.io.parse_single_example(serialized_example, feature_description)
    image = tf.io.decode_jpeg(feature_dict['image/encoded'])  # decode the JPEG image
    image = tf.image.resize_with_crop_or_pad(image, 224, 224)
    image = tf.reshape(image, [224, 224, 3])
    image = tf.cast(image, tf.float32)
    return image, feature_dict['image/class']

train_dataset = train_dataset.map(parse_example)
test_dataset = test_dataset.map(parse_example)

train_dataset = train_dataset.repeat().shuffle(2000).batch(batch_size).prefetch(3)
test_dataset = test_dataset.repeat().shuffle(2000).batch(batch_size).prefetch(3)

ResNet50 = tf.keras.applications.ResNet50(weights=None, include_top=False)
global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
fc = tf.keras.layers.Dense(2, activation="softmax")  # change 2 to your own number of classes
model = tf.keras.Sequential([ResNet50, global_average_layer, fc])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(),
              metrics=["accuracy"])
model.summary()
# shuffling is already handled by the dataset pipeline above
model.fit(train_dataset, epochs=5, validation_data=test_dataset,
          steps_per_epoch=1000, validation_steps=1)
model.save("./resnet.h5")
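
Because the h5 file stores both the architecture and the weights, it can be loaded back in a single call. A minimal sketch of reloading and evaluating it (the step count of 10 is just for illustration):

restored = tf.keras.models.load_model("./resnet.h5")
loss, acc = restored.evaluate(test_dataset, steps=10)  # evaluate on 10 batches
print("restored accuracy: {:.2%}".format(acc))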

3.2 Training with the tf.GradientTape gradient tape and saving the model

Credit: this follows a great GitHub project; I am just the porter.
https://github.com/YunYang1994/TensorFlow2.0-Examples
Advantages: eager, dynamic-graph execution, so the model can be saved at any point. Disadvantages: training is slower and the code is more involved.
The complete code is as follows:

import os
import tensorflow as tf

# Hyperparameters
total_num = 25000
learning_rate = 0.001
test_steps = 1000        # number of test batches to run after each epoch
EPOCHS = 10
batch_size = 16
display_step = 10
training_step = int(total_num / batch_size)   # batches per epoch

train_record_path = "./train.record"
test_record_path = "./test.record"
# Calling this gives us a Dataset (tf.data.Dataset); loosely speaking, it holds all the Examples we wrote earlier.
train_dataset = tf.data.TFRecordDataset(train_record_path)
test_dataset = tf.data.TFRecordDataset(test_record_path)

# Define a parsing function
feature_description = {
    'image/filename': tf.io.FixedLenFeature([], tf.string),
    'image/class': tf.io.FixedLenFeature([], tf.int64),
    'image/encoded': tf.io.FixedLenFeature([], tf.string)
}

def parse_example(serialized_example):
    feature_dict = tf.io.parse_single_example(serialized_example, feature_description)
    image = tf.io.decode_jpeg(feature_dict['image/encoded'])  # decode the JPEG image
    image = tf.image.resize_with_crop_or_pad(image, 224, 224)
    image = tf.reshape(image, [224, 224, 3])
    image = tf.cast(image, tf.float32)
    return image, feature_dict['image/class']

train_dataset = train_dataset.map(parse_example)
test_dataset = test_dataset.map(parse_example)

# repeat() makes both datasets infinite, so the loops below must bound them with take()
train_dataset = train_dataset.repeat().shuffle(5000).batch(batch_size).prefetch(3)
test_dataset = test_dataset.repeat().shuffle(5000).batch(batch_size, drop_remainder=True)

ResNet50 = tf.keras.applications.ResNet50(weights=None, include_top=False)
global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
fc = tf.keras.layers.Dense(2, activation="softmax")
model = tf.keras.Sequential([ResNet50, global_average_layer, fc])

# Choose an optimizer and loss function for training
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()

# Select metrics to measure the loss and the accuracy of the model
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')

test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')

optimizer = tf.keras.optimizers.Adam(learning_rate)

# Tip: decorating train_step with @tf.function compiles it into a graph and
# noticeably speeds up training; it is left in eager mode here to match the original.
def train_step(images, labels):
    with tf.GradientTape() as tape:
        predictions = model(images, training=True)
        loss = loss_object(labels, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    train_loss(loss)
    train_accuracy(labels, predictions)

def test_step(images, labels):
    predictions = model(images)
    t_loss = loss_object(labels, predictions)
    test_loss(t_loss)
    test_accuracy(labels, predictions)

print("train..")
for epoch in range(EPOCHS):
    # Reset the metrics at the start of each epoch so they do not accumulate across epochs
    train_loss.reset_states()
    train_accuracy.reset_states()
    test_loss.reset_states()
    test_accuracy.reset_states()

    for step, (batch_x, batch_y) in enumerate(train_dataset.take(training_step), 1):
        train_step(batch_x, batch_y)
        if step % display_step == 0:
            template = '=> train: step {}, Loss: {:.4}, Accuracy: {:.2%}'
            print(template.format(step,
                                  train_loss.result(),
                                  train_accuracy.result()))

    for step, (batch_x, batch_y) in enumerate(test_dataset.take(test_steps), 1):
        test_step(batch_x, batch_y)

    template = '=> Epoch {}, Test Loss: {:.4}, Test Accuracy: {:.2%}'
    print(template.format(epoch + 1,
                          test_loss.result(),
                          test_accuracy.result()))

    # Save a checkpoint (optimizer state + model weights) after every epoch
    root = tf.train.Checkpoint(optimizer=optimizer, model=model)
    saved_folder = "./ckpt2Model"
    if not os.path.exists(saved_folder):
        os.mkdir(saved_folder)
    checkpoint_prefix = (saved_folder + "/epoch_%i") % (epoch + 1)
    root.save(checkpoint_prefix)
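
A checkpoint saved this way only holds variable values, so to use it you rebuild the same model and restore into it. A minimal sketch (assuming the same model and optimizer construction as above):

root = tf.train.Checkpoint(optimizer=optimizer, model=model)
latest = tf.train.latest_checkpoint("./ckpt2Model")  # path of the newest checkpoint
root.restore(latest)
print("restored from:", latest)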

4. Predicting a Single Image and Outputting Probabilities

Once you have a model, the natural next step is to test it on a single image to see how it does; after all, that is the most intuitive check.

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""                  
*  * *** *  * *  *      
*  *  *   **  *  *             
****  *   **  *  *                 
*  *  *   **  *  *         
*  * **  *  * ****  

@File     :DogVsCat/inference_one_img.py  
@Date     :2020/12/4 6:06 PM  
@Require  :   
@Author   :hjxu2016, https://blog.csdn.net/hjxu2016/
@Function :Cats vs. Dogs, testing a single image
"""
import tensorflow as tf
import cv2
import numpy as np

# Allow GPU memory to grow on demand instead of grabbing it all at once
from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession
config = ConfigProto()
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)

model_path = "./resnet.h5"
model = tf.keras.models.load_model(model_path)
img = cv2.imread("./cat.0.jpg")
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # OpenCV reads BGR; convert to RGB to match training
img = cv2.resize(img, (224, 224))  # note: training cropped/padded to 224x224 rather than resizing
img = np.array(img, np.float32)    # float32 in [0, 255], same as the training pipeline
x = np.expand_dims(img, 0)         # add a batch dimension: (1, 224, 224, 3)
y = model.predict(x)
print("predict: ", y)
