I'm 大气小蝴蝶, a blogger at 靠谱客. This article covers Keras experiment 1: setting up the Keras framework. I'm sharing it here in the hope that it makes a useful reference.

Experiment overview

This walkthrough follows the official Keras Chinese documentation.

Setting up the Keras framework

Testing Keras on the MNIST dataset

Three commands are all it takes:

```shell
$ git clone https://github.com/fchollet/keras.git
$ cd keras/examples/
$ python mnist_mlp.py
```
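For context, `mnist_mlp.py` trains a small fully connected network (two 512-unit hidden layers with dropout, a 10-way softmax output) on flattened 28×28 MNIST images. The sketch below reproduces just the model definition matching the summary in the log; it assumes a current TensorFlow install and uses `tf.keras` rather than the standalone Keras the original run used, and it omits the data loading and 20-epoch training loop that the real script performs.

```python
# Sketch of the architecture defined in examples/mnist_mlp.py
# (assumption: TensorFlow 2.x with the bundled tf.keras API).
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(784,)),            # 28 * 28 pixels, flattened
    layers.Dense(512, activation="relu"),
    layers.Dropout(0.2),                  # dropout adds no parameters
    layers.Dense(512, activation="relu"),
    layers.Dropout(0.2),
    layers.Dense(10, activation="softmax"),  # one unit per digit class
])
model.compile(loss="categorical_crossentropy",
              optimizer="rmsprop",
              metrics=["accuracy"])
model.summary()  # 669,706 parameters, matching the execution log
```

After this, the real script calls `model.fit(...)` on the 60,000 training images and `model.evaluate(...)` on the 10,000 test images.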

Execution log

```
root@astron:/asdata# git clone https://github.com/fchollet/keras.git
Cloning into 'keras'...
remote: Counting objects: 23274, done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 23274 (delta 1), reused 0 (delta 0), pack-reused 23267
Receiving objects: 100% (23274/23274), 8.84 MiB | 1.12 MiB/s, done.
Resolving deltas: 100% (16738/16738), done.
Checking connectivity... done.
root@astron:/asdata# cd keras/
root@astron:/asdata/keras# ls
CONTRIBUTING.md  docs  ISSUE_TEMPLATE.md  LICENSE  pytest.ini  setup.cfg  tests
docker  examples  keras  MANIFEST.in  README.md  setup.py
root@astron:/asdata/keras# cd examples/
root@astron:/asdata/keras/examples# ls
addition_rnn.py               imdb_bidirectional_lstm.py  mnist_hierarchical_rnn.py  neural_doodle.py
antirectifier.py              imdb_cnn_lstm.py            mnist_irnn.py              neural_style_transfer.py
babi_memnn.py                 imdb_cnn.py                 mnist_mlp.py               pretrained_word_embeddings.py
babi_rnn.py                   imdb_fasttext.py            mnist_net2net.py           README.md
cifar10_cnn.py                imdb_lstm.py                mnist_siamese_graph.py     reuters_mlp.py
conv_filter_visualization.py  lstm_benchmark.py           mnist_sklearn_wrapper.py   reuters_mlp_relu_vs_selu.py
conv_lstm.py                  lstm_text_generation.py     mnist_swwae.py             stateful_lstm.py
deep_dream.py                 mnist_acgan.py              mnist_tfrecord.py          variational_autoencoder_deconv.py
image_ocr.py                  mnist_cnn.py                mnist_transfer_cnn.py      variational_autoencoder.py
root@astron:/asdata/keras/examples# python mnist_mlp.py
Using TensorFlow backend.
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally
60000 train samples
10000 test samples
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_1 (Dense)              (None, 512)               401920
_________________________________________________________________
dropout_1 (Dropout)          (None, 512)               0
_________________________________________________________________
dense_2 (Dense)              (None, 512)               262656
_________________________________________________________________
dropout_2 (Dropout)          (None, 512)               0
_________________________________________________________________
dense_3 (Dense)              (None, 10)                5130
=================================================================
Total params: 669,706.0
Trainable params: 669,706.0
Non-trainable params: 0.0
_________________________________________________________________
Train on 60000 samples, validate on 10000 samples
Epoch 1/20
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:910] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: Tesla M60
major: 5 minor: 2 memoryClockRate (GHz) 1.1775
pciBusID 0000:00:15.0
Total memory: 7.93GiB
Free memory: 7.86GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0:   Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Tesla M60, pci bus id: 0000:00:15.0)
60000/60000 [==============================] - 3s - loss: 0.2442 - acc: 0.9246 - val_loss: 0.0994 - val_acc: 0.9692
Epoch 2/20
60000/60000 [==============================] - 1s - loss: 0.1041 - acc: 0.9684 - val_loss: 0.0818 - val_acc: 0.9751
Epoch 3/20
60000/60000 [==============================] - 1s - loss: 0.0751 - acc: 0.9766 - val_loss: 0.0821 - val_acc: 0.9762
Epoch 4/20
60000/60000 [==============================] - 1s - loss: 0.0596 - acc: 0.9821 - val_loss: 0.0688 - val_acc: 0.9809
Epoch 5/20
60000/60000 [==============================] - 1s - loss: 0.0501 - acc: 0.9845 - val_loss: 0.0789 - val_acc: 0.9801
Epoch 6/20
60000/60000 [==============================] - 1s - loss: 0.0429 - acc: 0.9874 - val_loss: 0.0918 - val_acc: 0.9796
Epoch 7/20
60000/60000 [==============================] - 1s - loss: 0.0367 - acc: 0.9889 - val_loss: 0.0879 - val_acc: 0.9803
Epoch 8/20
60000/60000 [==============================] - 1s - loss: 0.0336 - acc: 0.9900 - val_loss: 0.0799 - val_acc: 0.9828
Epoch 9/20
60000/60000 [==============================] - 1s - loss: 0.0324 - acc: 0.9907 - val_loss: 0.0896 - val_acc: 0.9826
Epoch 10/20
60000/60000 [==============================] - 1s - loss: 0.0285 - acc: 0.9913 - val_loss: 0.0860 - val_acc: 0.9829
Epoch 11/20
60000/60000 [==============================] - 1s - loss: 0.0265 - acc: 0.9923 - val_loss: 0.0994 - val_acc: 0.9822
Epoch 12/20
60000/60000 [==============================] - 1s - loss: 0.0237 - acc: 0.9933 - val_loss: 0.1013 - val_acc: 0.9844
Epoch 13/20
60000/60000 [==============================] - 1s - loss: 0.0226 - acc: 0.9933 - val_loss: 0.1026 - val_acc: 0.9818
Epoch 14/20
60000/60000 [==============================] - 1s - loss: 0.0229 - acc: 0.9938 - val_loss: 0.1056 - val_acc: 0.9830
Epoch 15/20
60000/60000 [==============================] - 1s - loss: 0.0219 - acc: 0.9942 - val_loss: 0.0991 - val_acc: 0.9825
Epoch 16/20
60000/60000 [==============================] - 1s - loss: 0.0210 - acc: 0.9943 - val_loss: 0.1119 - val_acc: 0.9827
Epoch 17/20
60000/60000 [==============================] - 1s - loss: 0.0206 - acc: 0.9948 - val_loss: 0.1041 - val_acc: 0.9837
Epoch 18/20
60000/60000 [==============================] - 1s - loss: 0.0206 - acc: 0.9947 - val_loss: 0.1147 - val_acc: 0.9836
Epoch 19/20
60000/60000 [==============================] - 1s - loss: 0.0179 - acc: 0.9953 - val_loss: 0.1231 - val_acc: 0.9807
Epoch 20/20
60000/60000 [==============================] - 1s - loss: 0.0174 - acc: 0.9955 - val_loss: 0.1126 - val_acc: 0.9823
Test loss: 0.112580380508
Test accuracy: 0.9823
```
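The `Param #` column in the summary can be checked by hand: a `Dense` layer with `n_in` inputs and `n_out` units holds `n_in * n_out` weights plus one bias per unit, while `Dropout` layers add no parameters. A quick sanity check in plain Python:

```python
# Verify the parameter counts reported in the model summary above.
def dense_params(n_in, n_out):
    """A Dense layer has n_in * n_out weights plus n_out biases."""
    return n_in * n_out + n_out

p1 = dense_params(784, 512)  # dense_1: 28 * 28 = 784 flattened pixels in
p2 = dense_params(512, 512)  # dense_2
p3 = dense_params(512, 10)   # dense_3: one output per digit class

print(p1, p2, p3, p1 + p2 + p3)  # 401920 262656 5130 669706
```

The three layer counts and the total (669,706) match the summary line for line, which is a handy way to confirm a model was wired up the way you intended.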

Wrapping up

That's everything 大气小蝴蝶 has collected on this Keras framework-setup experiment. For more Keras-related content, see the other articles on 靠谱客.

The text and images above were contributed by readers or collected from the web for study and reference; copyright remains with the original authors.