TensorFlow Learning Notes (5): TensorBoard

Overview

Contents:

1. MNIST: pushing test accuracy above 98% (the video reaches 98.3%)

2. TensorBoard: the network graph

3. TensorBoard: runtime data (inspect values while the network runs and tune parameters from the plots to improve the network)

4. TensorBoard: visualization

 

1. MNIST: pushing test accuracy above 98% (the video reaches 98.3%)

# -*- coding:utf-8 -*-
# author: aihan time: 2019/1/22

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/",one_hot=True)

# Size of each mini-batch
batch_size = 100
# Number of batches per epoch
n_batch = mnist.train.num_examples // batch_size

x = tf.placeholder(tf.float32,[None,784])
y = tf.placeholder(tf.float32,[None,10])
keep_prob = tf.placeholder(tf.float32)  # dropout keep probability
# Learning rate as a variable so it can be decayed during training
lr = tf.Variable(0.001,dtype=tf.float32)

# Build the network: 784 -> 500 -> 300 -> 10
W1 = tf.Variable(tf.truncated_normal([784,500],stddev=0.1))
b1 = tf.Variable(tf.zeros([500])+0.1)
L1 = tf.nn.tanh(tf.matmul(x,W1)+b1)  # tanh activation
L1_drop = tf.nn.dropout(L1,keep_prob)
# Hidden layer
W2 = tf.Variable(tf.truncated_normal([500,300],stddev=0.1))
b2 = tf.Variable(tf.zeros([300])+0.1)
L2 = tf.nn.tanh(tf.matmul(L1_drop,W2)+b2)  # tanh activation
L2_drop = tf.nn.dropout(L2,keep_prob)

# Output layer
W3 = tf.Variable(tf.truncated_normal([300,10],stddev=0.1))
b3 = tf.Variable(tf.zeros([10])+0.1)
logits = tf.matmul(L2_drop,W3)+b3
prediction = tf.nn.softmax(logits)

# Earlier single-layer version, for comparison:
# W = tf.Variable(tf.zeros([784,10]))
# b = tf.Variable(tf.zeros([10]))
# prediction = tf.nn.softmax(tf.matmul(x,W)+b)

# Cross-entropy cost; softmax_cross_entropy_with_logits expects raw logits,
# so pass `logits` rather than the already-softmaxed `prediction`
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y,logits=logits))
# Train with Adam
train_step = tf.train.AdamOptimizer(lr).minimize(loss)

# Variable initializer
init = tf.global_variables_initializer()

# Compare predictions with labels; results are stored in a list of booleans
correct_prediction = tf.equal(tf.argmax(y,1),tf.argmax(prediction,1))  # argmax returns the index of the largest value along a dimension
# Cast the booleans to floats and average them to get the accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32))

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(51):
        # Decay the learning rate by 5% each epoch
        sess.run(tf.assign(lr,0.001 * (0.95 ** epoch)))
        for batch in range(n_batch):
            batch_xs,batch_ys = mnist.train.next_batch(batch_size)
            sess.run(train_step,feed_dict={x:batch_xs,y:batch_ys,keep_prob:1.0})
            # keep_prob=1.0 keeps every neuron active (dropout effectively off)

        learning_rate = sess.run(lr)
        acc = sess.run(accuracy,feed_dict={x:mnist.test.images,y:mnist.test.labels,keep_prob:1.0})
        print("Iter"+str(epoch)+",Testing Accuracy"+str(acc)+",Learning Rate"+str(learning_rate))

Output:

Iter0,Testing Accuracy0.9769,Learning Rate0.001
Iter1,Testing Accuracy0.9792,Learning Rate0.00095
Iter2,Testing Accuracy0.9778,Learning Rate0.0009025
Iter3,Testing Accuracy0.9782,Learning Rate0.000857375
Iter4,Testing Accuracy0.979,Learning Rate0.00081450626
Iter5,Testing Accuracy0.9798,Learning Rate0.0007737809
Iter6,Testing Accuracy0.98,Learning Rate0.0007350919
Iter7,Testing Accuracy0.9796,Learning Rate0.0006983373
Iter8,Testing Accuracy0.9801,Learning Rate0.0006634204
Iter9,Testing Accuracy0.9794,Learning Rate0.0006302494
Iter10,Testing Accuracy0.981,Learning Rate0.0005987369
Iter11,Testing Accuracy0.9789,Learning Rate0.0005688001
Iter12,Testing Accuracy0.9805,Learning Rate0.0005403601
Iter13,Testing Accuracy0.9798,Learning Rate0.0005133421
Iter14,Testing Accuracy0.9794,Learning Rate0.000487675
Iter15,Testing Accuracy0.9792,Learning Rate0.00046329122
Iter16,Testing Accuracy0.981,Learning Rate0.00044012666
Iter17,Testing Accuracy0.9813,Learning Rate0.00041812033
Iter18,Testing Accuracy0.981,Learning Rate0.00039721432
Iter19,Testing Accuracy0.9805,Learning Rate0.0003773536
Iter20,Testing Accuracy0.98,Learning Rate0.00035848594
Iter21,Testing Accuracy0.9799,Learning Rate0.00034056162
Iter22,Testing Accuracy0.9799,Learning Rate0.00032353355
Iter23,Testing Accuracy0.9799,Learning Rate0.00030735688
Iter24,Testing Accuracy0.9796,Learning Rate0.000291989
Iter25,Testing Accuracy0.9797,Learning Rate0.00027738957
Iter26,Testing Accuracy0.9795,Learning Rate0.0002635201
Iter27,Testing Accuracy0.9793,Learning Rate0.00025034408
Iter28,Testing Accuracy0.9792,Learning Rate0.00023782688
Iter29,Testing Accuracy0.9791,Learning Rate0.00022593554
Iter30,Testing Accuracy0.9791,Learning Rate0.00021463877
Iter31,Testing Accuracy0.9791,Learning Rate0.00020390682
Iter32,Testing Accuracy0.979,Learning Rate0.00019371149
Iter33,Testing Accuracy0.9793,Learning Rate0.0001840259
Iter34,Testing Accuracy0.9792,Learning Rate0.00017482461
Iter35,Testing Accuracy0.9792,Learning Rate0.00016608338
Iter36,Testing Accuracy0.9792,Learning Rate0.00015777921
Iter37,Testing Accuracy0.9792,Learning Rate0.00014989026
Iter38,Testing Accuracy0.9792,Learning Rate0.00014239574
Iter39,Testing Accuracy0.9795,Learning Rate0.00013527596
Iter40,Testing Accuracy0.9793,Learning Rate0.00012851215
Iter41,Testing Accuracy0.9795,Learning Rate0.00012208655
Iter42,Testing Accuracy0.9794,Learning Rate0.00011598222
Iter43,Testing Accuracy0.9795,Learning Rate0.00011018311
Iter44,Testing Accuracy0.9796,Learning Rate0.000104673956
Iter45,Testing Accuracy0.9795,Learning Rate9.944026e-05
Iter46,Testing Accuracy0.9795,Learning Rate9.446825e-05
Iter47,Testing Accuracy0.9796,Learning Rate8.974483e-05
Iter48,Testing Accuracy0.9795,Learning Rate8.525759e-05
Iter49,Testing Accuracy0.9795,Learning Rate8.099471e-05
Iter50,Testing Accuracy0.9795,Learning Rate7.6944976e-05

Process finished with exit code 0
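
The manual tf.assign decay above can also be written with TF 1.x's built-in schedule. A minimal sketch of an alternative, not what these notes use; `global_step` is introduced here for illustration:

# Equivalent 0.95-per-epoch decay using tf.train.exponential_decay.
# With staircase=True and decay_steps=n_batch, the rate drops by a factor
# of 0.95 once per epoch, matching the tf.assign version above.
global_step = tf.Variable(0, trainable=False)
lr = tf.train.exponential_decay(learning_rate=0.001,
                                global_step=global_step,
                                decay_steps=n_batch,
                                decay_rate=0.95,
                                staircase=True)
train_step = tf.train.AdamOptimizer(lr).minimize(loss, global_step=global_step)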
 

2. TensorBoard: the network graph

(1) Drawing the input part of the graph

# -*- coding:utf-8 -*-
# author: aihan time: 2019/1/22

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/",one_hot=True)

# Size of each mini-batch
batch_size = 100
# Number of batches per epoch
n_batch = mnist.train.num_examples // batch_size

# Name scope for the input placeholders
with tf.name_scope('input'):
    x = tf.placeholder(tf.float32,[None,784],name="x-input")
    y = tf.placeholder(tf.float32,[None,10],name="y-input")

# A simple one-layer network
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
prediction = tf.nn.softmax(tf.matmul(x,W)+b)

# Quadratic (mean squared error) cost
loss = tf.reduce_mean(tf.square(y - prediction))
# Gradient descent
train_step = tf.train.GradientDescentOptimizer(0.2).minimize(loss)

# Variable initializer
init = tf.global_variables_initializer()

# Compare predictions with labels; results are stored in a list of booleans
correct_prediction = tf.equal(tf.argmax(y,1),tf.argmax(prediction,1))  # argmax returns the index of the largest value
# Accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32))

with tf.Session() as sess:
    sess.run(init)
    # Write the graph definition so TensorBoard can draw it
    writer = tf.summary.FileWriter('logs/',sess.graph)
    for epoch in range(1):
        for batch in range(n_batch):
            batch_xs,batch_ys = mnist.train.next_batch(batch_size)
            sess.run(train_step,feed_dict={x:batch_xs,y:batch_ys})

        acc = sess.run(accuracy,feed_dict={x:mnist.test.images,y:mnist.test.labels})
        print("Iter"+str(epoch)+",Testing Accuracy"+str(acc))

Running the script creates a logs folder.

Open a command prompt (cmd), change to the directory that contains logs, and start TensorBoard with: tensorboard --logdir=logs

Copy the URL TensorBoard prints (typically http://localhost:6006) and open it in a browser to see the GRAPHS tab.

Note: if TensorBoard fails to start with an error at this point, the cause is usually an environment/PATH configuration problem. See https://blog.csdn.net/zhylhy520/article/details/80760816 and https://blog.csdn.net/Pursue_MyHeart/article/details/81226283 for fixes.

 

(2) Showing the layer subgraph (change the network-construction code as follows):

with tf.name_scope('layer'):
    # A simple one-layer network
    with tf.name_scope('weights'):
        W = tf.Variable(tf.zeros([784,10]),name='W')
    with tf.name_scope('biases'):
        b = tf.Variable(tf.zeros([10]),name='b')
    with tf.name_scope("wx_plus_b"):
        wx_plus_b = tf.matmul(x,W)+b
    with tf.name_scope("softmax"):
        prediction = tf.nn.softmax(wx_plus_b)

The resulting layer subgraph then appears in TensorBoard's graph view.
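
To see what these scopes actually do to node names, a quick check (a sketch; the printed names follow from the scopes above):

# Inside name scopes, tf.Variable and op names are prefixed with the scope
# path, which is what groups the nodes in TensorBoard's graph view.
print(W.name)           # layer/weights/W:0
print(b.name)           # layer/biases/b:0
print(wx_plus_b.name)   # layer/wx_plus_b/add:0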

 

(3) The final graph. (The remaining name_scope additions are omitted here; they appear in the full listing in section 3 below.)

3. TensorBoard: network runtime data (inspecting values while the network runs)

Only a few changes to the previous program are needed:

1) Define a parameter-summary helper

2) Apply it to the tensors whose changes you want to analyze

3) Merge all the summaries

# -*- coding:utf-8 -*-
# author: aihan time: 2019/1/23

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/",one_hot=True)

# Size of each mini-batch
batch_size = 100
# Number of batches per epoch
n_batch = mnist.train.num_examples // batch_size

# Summary helper: record statistics of a tensor
def variable_summaries(var):
    with tf.name_scope('summaries'):
        mean = tf.reduce_mean(var)
        tf.summary.scalar('mean',mean)  # mean
        with tf.name_scope('stddev'):
            stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
        tf.summary.scalar('stddev',stddev)   # standard deviation
        tf.summary.scalar('max',tf.reduce_max(var))
        tf.summary.scalar('min',tf.reduce_min(var))
        tf.summary.histogram('histogram',var)   # histogram of the values

# Name scope for the input placeholders
with tf.name_scope('input'):
    x = tf.placeholder(tf.float32,[None,784],name="x-input")
    y = tf.placeholder(tf.float32,[None,10],name="y-input")

with tf.name_scope('layer'):
    # A simple one-layer network
    with tf.name_scope('weights'):
        W = tf.Variable(tf.zeros([784,10]),name='W')
        variable_summaries(W)
    with tf.name_scope('biases'):
        b = tf.Variable(tf.zeros([10]),name='b')
        variable_summaries(b)
    with tf.name_scope("wx_plus_b"):
        wx_plus_b = tf.matmul(x,W)+b
    with tf.name_scope("softmax"):
        prediction = tf.nn.softmax(wx_plus_b)

with tf.name_scope("loss"):
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y,logits=prediction))
    tf.summary.scalar('loss',loss)
with tf.name_scope("train"):
    #使用梯度下降法
    train_step = tf.train.GradientDescentOptimizer(0.2).minimize(loss)

# Variable initializer
init = tf.global_variables_initializer()

with tf.name_scope("accuracy"):
    with tf.name_scope("correct_prediction"):
        # Compare predictions with labels; results are stored in a list of booleans
        correct_prediction = tf.equal(tf.argmax(y,1),tf.argmax(prediction,1))  # argmax returns the index of the largest value
    with tf.name_scope("accuracy"):
        # Accuracy
        accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32))
        tf.summary.scalar('accuracy', accuracy)

# Merge all the summaries into a single op
merged = tf.summary.merge_all()

with tf.Session() as sess:
    sess.run(init)
    writer = tf.summary.FileWriter('logs/',sess.graph)
    for epoch in range(51):
        for batch in range(n_batch):
            batch_xs,batch_ys = mnist.train.next_batch(batch_size)
            summary,_ = sess.run([merged,train_step],feed_dict={x:batch_xs,y:batch_ys})  # collect summaries while training

        writer.add_summary(summary,epoch)  # one summary point per epoch
        acc = sess.run(accuracy,feed_dict={x:mnist.test.images,y:mnist.test.labels})
        print("Iter"+str(epoch)+",Testing Accuracy"+str(acc))

Output:

Iter0,Testing Accuracy0.9288
Iter1,Testing Accuracy0.9297
Iter2,Testing Accuracy0.9304
Iter3,Testing Accuracy0.9301
Iter4,Testing Accuracy0.9296
Iter5,Testing Accuracy0.9302
Iter6,Testing Accuracy0.93
Iter7,Testing Accuracy0.9299
Iter8,Testing Accuracy0.93
Iter9,Testing Accuracy0.9295
Iter10,Testing Accuracy0.9295
Iter11,Testing Accuracy0.9292
Iter12,Testing Accuracy0.9295
Iter13,Testing Accuracy0.9293
Iter14,Testing Accuracy0.9292
Iter15,Testing Accuracy0.9295
Iter16,Testing Accuracy0.9294
Iter17,Testing Accuracy0.9289
Iter18,Testing Accuracy0.9285
Iter19,Testing Accuracy0.929
Iter20,Testing Accuracy0.9288
Iter21,Testing Accuracy0.9285
Iter22,Testing Accuracy0.9286
Iter23,Testing Accuracy0.9286
Iter24,Testing Accuracy0.9281
Iter25,Testing Accuracy0.9284
Iter26,Testing Accuracy0.9281
Iter27,Testing Accuracy0.9284
Iter28,Testing Accuracy0.9284
Iter29,Testing Accuracy0.9284
Iter30,Testing Accuracy0.9285
Iter31,Testing Accuracy0.9287
Iter32,Testing Accuracy0.9285
Iter33,Testing Accuracy0.9287
Iter34,Testing Accuracy0.9289
Iter35,Testing Accuracy0.9291
Iter36,Testing Accuracy0.9291
Iter37,Testing Accuracy0.9289
Iter38,Testing Accuracy0.929
Iter39,Testing Accuracy0.929
Iter40,Testing Accuracy0.9288
Iter41,Testing Accuracy0.929
Iter42,Testing Accuracy0.929
Iter43,Testing Accuracy0.9291
Iter44,Testing Accuracy0.929
Iter45,Testing Accuracy0.9288
Iter46,Testing Accuracy0.9291
Iter47,Testing Accuracy0.9292
Iter48,Testing Accuracy0.9293
Iter49,Testing Accuracy0.9291
Iter50,Testing Accuracy0.929

Process finished with exit code 0
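
To compare training and test curves in the same plots, a common variant (a sketch, not part of the original listing) replaces the single FileWriter with two, one per data split:

# Two writers let TensorBoard overlay train and test runs of the same scalars.
train_writer = tf.summary.FileWriter('logs/train', sess.graph)
test_writer = tf.summary.FileWriter('logs/test')
for epoch in range(51):
    for batch in range(n_batch):
        batch_xs, batch_ys = mnist.train.next_batch(batch_size)
        summary, _ = sess.run([merged, train_step],
                              feed_dict={x: batch_xs, y: batch_ys})
    train_writer.add_summary(summary, epoch)   # last training batch of the epoch
    test_summary = sess.run(merged, feed_dict={x: mnist.test.images,
                                               y: mnist.test.labels})
    test_writer.add_summary(test_summary, epoch)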

TensorBoard then plots the logged values (accuracy, loss, and the weight/bias statistics) across the 51 epochs in the SCALARS and HISTOGRAMS tabs.

If the loss curve oscillates violently, a likely cause is that the learning rate is set too high.
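
One way to investigate (a sketch; the 0.05 value is only an illustrative smaller rate, not from the original notes) is to log the learning rate alongside the loss and retry with a smaller value:

# Hypothetical tweak: a smaller, logged learning rate to damp loss oscillation.
lr = tf.Variable(0.05, dtype=tf.float32, trainable=False)
tf.summary.scalar('learning_rate', lr)  # appears in the SCALARS tab
with tf.name_scope("train"):
    train_step = tf.train.GradientDescentOptimizer(lr).minimize(loss)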

4. TensorBoard: visualization
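
A minimal sketch of embedding visualization with TF 1.x's projector plugin, reusing mnist, sess, and writer from the listing in section 3 (the metadata file and the variable name are illustrative assumptions, not from the original notes):

from tensorflow.contrib.tensorboard.plugins import projector

# Use the first 1000 test images as the tensor to project
embedding_var = tf.Variable(mnist.test.images[:1000], name='embedding')
sess.run(embedding_var.initializer)

config = projector.ProjectorConfig()
emb = config.embeddings.add()
emb.tensor_name = embedding_var.name
emb.metadata_path = 'metadata.tsv'  # assumed file: one label per line
projector.visualize_embeddings(writer, config)

# The projector reads the tensor from a checkpoint in the log directory
saver = tf.train.Saver([embedding_var])
saver.save(sess, 'logs/model.ckpt')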

 
