Building a Simple CNN with Sequential and the Functional API
Overview
Contents
- Sequential
- Using the Functional API
Sequential
Sequential supports stacking layers in a straightforward way: it builds the network by simply placing each new layer on top of the current stack. For details, see the official documentation.
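The same "place each new layer on top of the stack" idea can also be written incrementally with add(). A tiny sketch (the layer sizes below are arbitrary, chosen purely for illustration):

import tensorflow as tf
import tensorflow.keras.layers as tfl

model = tf.keras.Sequential()
model.add(tf.keras.Input(shape=(8,)))          # declare the input shape once
model.add(tfl.Dense(16, activation='relu'))    # each add() places a layer on top of the stack
model.add(tfl.Dense(1, activation='sigmoid'))  # the last layer added ends up at the top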
import tensorflow as tf
import tensorflow.keras.layers as tfl

# GRADED FUNCTION: happyModel
def happyModel():
    """
    Implements the forward propagation for the binary classification model:
    ZEROPAD2D -> CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> FLATTEN -> DENSE

    Note that for simplicity and grading purposes, you'll hard-code all the values
    such as the stride and kernel (filter) sizes.
    Normally, functions should take these values as function parameters.

    Arguments:
    None

    Returns:
    model -- TF Keras model (object containing the information for the entire training process)
    """
    model = tf.keras.Sequential([
        # ZeroPadding2D with padding 3, input shape of 64 x 64 x 3
        tfl.ZeroPadding2D(padding=3, input_shape=(64, 64, 3)),
        # Conv2D with 32 7x7 filters and stride of 1
        tfl.Conv2D(filters=32, kernel_size=(7, 7), strides=(1, 1)),
        # BatchNormalization for axis 3 (the channel axis)
        tfl.BatchNormalization(axis=3),
        # ReLU
        tfl.ReLU(),
        # Max Pooling 2D with default parameters
        tfl.MaxPool2D(),
        # Flatten layer
        tfl.Flatten(),
        # Dense layer with 1 unit for output & 'sigmoid' activation
        tfl.Dense(units=1, activation='sigmoid')
    ])
    # the layers are simply stacked in order
    return model
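Before the fit and evaluate calls below can run, the model has to be instantiated and compiled. A minimal sketch (the Adam optimizer and accuracy metric are assumptions; X_train, Y_train, X_test, Y_test are assumed to be the loaded 64x64x3 images and their binary labels):

happy_model = happyModel()
# a single sigmoid output unit -> binary cross-entropy loss
happy_model.compile(optimizer='adam',
                    loss='binary_crossentropy',
                    metrics=['accuracy'])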
happy_model.fit(X_train, Y_train, epochs=10, batch_size=16)
happy_model.evaluate(X_test, Y_test)
Using the Functional API
The Functional API can build more flexible network architectures; a functional model takes one or more inputs and produces one or more outputs.
Using it can be thought of as building a computation graph, and within that graph we can still use the layer classes that Keras provides.
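Because a functional model is just such a graph of tensors, it can also wire together more than one input, something Sequential cannot express. A toy sketch (the layer sizes and input names here are invented purely for illustration):

import tensorflow as tf
import tensorflow.keras.layers as tfl

image_in = tf.keras.Input(shape=(64, 64, 3), name='image')
meta_in = tf.keras.Input(shape=(10,), name='metadata')

x = tfl.Conv2D(filters=8, kernel_size=(3, 3), activation='relu')(image_in)
x = tfl.GlobalAveragePooling2D()(x)
x = tfl.Concatenate()([x, meta_in])            # merge the two branches of the graph
out = tfl.Dense(units=1, activation='sigmoid')(x)

two_input_model = tf.keras.Model(inputs=[image_in, meta_in], outputs=out)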
# GRADED FUNCTION: convolutional_model
def convolutional_model(input_shape):
    """
    Implements the forward propagation for the model:
    CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> DENSE

    Note that for simplicity and grading purposes, you'll hard-code some values
    such as the stride and kernel (filter) sizes.
    Normally, functions should take these values as function parameters.

    Arguments:
    input_shape -- shape of the input images, e.g. (64, 64, 3)

    Returns:
    model -- TF Keras model (object containing the information for the entire training process)
    """
    input_img = tf.keras.Input(shape=input_shape)
    # CONV2D: 8 filters 4x4, stride of 1, padding 'SAME'
    Z1 = tfl.Conv2D(filters=8, kernel_size=(4, 4), strides=(1, 1), padding='same')(input_img)
    # RELU
    A1 = tfl.ReLU()(Z1)
    # MAXPOOL: window 8x8, stride 8, padding 'SAME'
    P1 = tfl.MaxPool2D(pool_size=(8, 8), strides=(8, 8), padding='same')(A1)
    # CONV2D: 16 filters 2x2, stride 1, padding 'SAME'
    Z2 = tfl.Conv2D(filters=16, kernel_size=(2, 2), strides=(1, 1), padding='same')(P1)
    # RELU
    A2 = tfl.ReLU()(Z2)
    # MAXPOOL: window 4x4, stride 4, padding 'SAME'
    P2 = tfl.MaxPool2D(pool_size=(4, 4), strides=(4, 4), padding='same')(A2)
    # FLATTEN
    F = tfl.Flatten()(P2)
    # DENSE: 6 neurons in the output layer, with 'softmax' activation
    outputs = tfl.Dense(units=6, activation='softmax')(F)
    model = tf.keras.Model(inputs=input_img, outputs=outputs)
    return model
At the end, we instantiate the model with tf.keras.Model, passing it the graph's input and output tensors.
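The history object used in the plotting code below comes from training this functional model. A minimal sketch (the optimizer, loss, epoch count, and the train_dataset / test_dataset variables are assumptions; they stand for prepared tf.data pipelines of 64x64x3 images with one-hot labels over 6 classes):

conv_model = convolutional_model((64, 64, 3))
# 6-way softmax output -> categorical cross-entropy loss
conv_model.compile(optimizer='adam',
                   loss='categorical_crossentropy',
                   metrics=['accuracy'])
history = conv_model.fit(train_dataset, epochs=100, validation_data=test_dataset)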
Plotting the loss and accuracy curves
# history.history is a dictionary; each entry (e.g. "loss", "val_accuracy") holds
# one value per epoch that the model was trained on.
import pandas as pd

df_loss_acc = pd.DataFrame(history.history)

df_loss = df_loss_acc[['loss', 'val_loss']]
df_loss.rename(columns={'loss': 'train', 'val_loss': 'validation'}, inplace=True)

df_acc = df_loss_acc[['accuracy', 'val_accuracy']]
df_acc.rename(columns={'accuracy': 'train', 'val_accuracy': 'validation'}, inplace=True)

df_loss.plot(title='Model loss', figsize=(12, 8)).set(xlabel='Epoch', ylabel='Loss')
df_acc.plot(title='Model Accuracy', figsize=(12, 8)).set(xlabel='Epoch', ylabel='Accuracy')
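pandas' plot() draws with matplotlib under the hood, so when this script runs outside a notebook you may need to render the figures explicitly:

import matplotlib.pyplot as plt

plt.show()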