TF2-Tips: Customizing model.fit

Overview

Official examples

The official Keras docs walk through this in detail: Customizing what happens in fit()

Basics

import numpy as np
import tensorflow as tf
from tensorflow import keras


class CustomModel(keras.Model):
    def train_step(self, data):
        # Unpack the data. Its structure depends on your model and
        # on what you pass to `fit()`.
        x, y = data  # `data` is exactly what you pass to `model.fit()`

        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # Forward pass
            # Compute the loss value
            # (the loss function is configured in `compile()`)
            loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)

        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)
        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))
        # Update metrics (includes the metric that tracks the loss)
        self.compiled_metrics.update_state(y, y_pred)
        # Return a dict mapping metric names to current value
        return {m.name: m.result() for m in self.metrics}

# Construct and compile an instance of CustomModel
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Just use `fit` as usual
x = np.random.random((1000, 32))
y = np.random.random((1000, 1))
model.fit(x, y, epochs=3)

  • CustomModel subclasses keras.Model and overrides the train_step method
  • self.compiled_loss is the loss you configured in model.compile
  • self.compiled_metrics wraps the metrics you configured in model.compile
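
As the comment at the top of train_step says, the structure of `data` mirrors whatever you pass to fit(). For instance, a tf.data.Dataset is passed through batch by batch; a small sketch reusing the x, y, and model from above:

# Each element the dataset yields, here an (x_batch, y_batch) tuple, is
# exactly what train_step receives as `data`.
dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(32)
model.fit(dataset, epochs=3)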

Defining your own loss inside train_step:

loss_tracker = keras.metrics.Mean(name="loss")
mae_metric = keras.metrics.MeanAbsoluteError(name="mae")


class CustomModel(keras.Model):
    def train_step(self, data):
        x, y = data

        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # Forward pass
            # Compute our own loss
            loss = keras.losses.mean_squared_error(y, y_pred)

        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)

        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))

        # Compute our own metrics
        loss_tracker.update_state(loss)
        mae_metric.update_state(y, y_pred)
        return {"loss": loss_tracker.result(), "mae": mae_metric.result()}

    @property
    def metrics(self):
        # We list our `Metric` objects here so that `reset_states()` can be
        # called automatically at the start of each epoch
        # or at the start of `evaluate()`.
        # If you don't implement this property, you have to call
        # `reset_states()` yourself at the time of your choosing.
        return [loss_tracker, mae_metric]


# Construct an instance of CustomModel
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)

# We don't pass a loss or metrics here.
model.compile(optimizer="adam")

# Just use `fit` as usual -- you can use callbacks, etc.
x = np.random.random((1000, 32))
y = np.random.random((1000, 1))
model.fit(x, y, epochs=5)

  • loss_tracker exposes two key methods:
    • update_state: feed it each batch's loss
    • result: the current running mean of the loss
  • the @property-decorated metrics method:
    • lets Keras call reset_states() automatically at the start of each epoch (and of evaluate())
    • if you drop it, the loss shown during training is the running mean since training began, not the per-epoch mean
  • Note: in this setup, model.compile no longer takes a loss argument
  • Pitfall: on TF 2.0 and 2.1, fit() fails with "ValueError: The model cannot be compiled because it has no loss to optimize." TF 2.2 and above are fine.
  • Reference: AI学习笔记--Tensorflow自定义
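
A quick demonstration of the tracker behavior described above; this is just standard keras.metrics.Mean usage:

m = keras.metrics.Mean(name="demo")
m.update_state(1.0)
m.update_state(3.0)
print(m.result().numpy())  # 2.0: mean of every value since the last reset
m.reset_states()           # what Keras triggers through the `metrics` property
print(m.result().numpy())  # 0.0 again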

Class weight & sample weight

class CustomModel(keras.Model):
    def train_step(self, data):
        # Unpack the data. Its structure depends on your model and
        # on what you pass to `fit()`.
        if len(data) == 3:
            x, y, sample_weight = data
        else:
            sample_weight = None
            x, y = data

        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # Forward pass
            # Compute the loss value.
            # The loss function is configured in `compile()`.
            loss = self.compiled_loss(
                y,
                y_pred,
                sample_weight=sample_weight,
                regularization_losses=self.losses,
            )

        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)

        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))

        # Update the metrics.
        # Metrics are configured in `compile()`.
        self.compiled_metrics.update_state(y, y_pred, sample_weight=sample_weight)

        # Return a dict mapping metric names to current value.
        # Note that it will include the loss (tracked in self.metrics).
        return {m.name: m.result() for m in self.metrics}


# Construct and compile an instance of CustomModel
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# You can now use sample_weight argument
x = np.random.random((1000, 32))
y = np.random.random((1000, 1))
sw = np.random.random((1000, 1))
model.fit(x, y, sample_weight=sw, epochs=3)
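
class_weight rides on the same mechanism: Keras converts it to per-sample weights before batches reach train_step, so the code above needs no changes. A small sketch; the 0/1 integer labels are made up for illustration:

# class_weight keys are class indices, so integer labels are needed here.
y_int = np.random.randint(0, 2, size=(1000, 1))
model.fit(x, y_int, class_weight={0: 1.0, 1: 2.0}, epochs=3)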

Idea

Self-supervised tasks come without labels, so the loss has to be designed by hand, which makes a custom train_step a natural fit. Taking contrastive learning as an example:

  • the x passed to model.fit(x, y) can be a pair of positive examples with y left as None; train_step then receives the tuple (x,)
  • design a compute_loss function that operates on one batch
  • the call method also has to be written by hand: it takes token ids and segment ids and returns embeddings
  • inside train_step, invoke call and compute_loss, and pass the loss to loss_tracker.update_state (a sketch of this recipe follows the list)
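
Below is a minimal sketch of that recipe, assuming dict inputs keyed "view_a"/"view_b", a toy dense encoder, and an in-batch-negatives loss with a temperature-scaled softmax. All the names here (ContrastiveModel, contrastive_loss, the view keys) are mine for illustration, not from Keras or any SimCSE codebase:

import numpy as np
import tensorflow as tf
from tensorflow import keras


class ContrastiveModel(keras.Model):
    def __init__(self, encoder, temperature=0.05, **kwargs):
        super().__init__(**kwargs)
        self.encoder = encoder          # shared encoder for both views
        self.temperature = temperature  # softmax temperature for the logits
        self.loss_tracker = keras.metrics.Mean(name="loss")

    def call(self, inputs, training=False):
        # `inputs` holds two views of the same batch; a text model would take
        # token ids and segment ids here and return sentence embeddings.
        z_a = self.encoder(inputs["view_a"], training=training)
        z_b = self.encoder(inputs["view_b"], training=training)
        return z_a, z_b

    def contrastive_loss(self, z_a, z_b):
        # In-batch negatives: row i of z_a should match only row i of z_b,
        # i.e. the diagonal of the cosine-similarity matrix.
        z_a = tf.math.l2_normalize(z_a, axis=1)
        z_b = tf.math.l2_normalize(z_b, axis=1)
        logits = tf.matmul(z_a, z_b, transpose_b=True) / self.temperature
        labels = tf.range(tf.shape(logits)[0])
        return tf.reduce_mean(
            keras.losses.sparse_categorical_crossentropy(
                labels, logits, from_logits=True
            )
        )

    def train_step(self, data):
        # fit() called without y hands train_step the inputs only.
        x = data[0] if isinstance(data, tuple) else data
        with tf.GradientTape() as tape:
            z_a, z_b = self(x, training=True)
            loss = self.contrastive_loss(z_a, z_b)
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        self.loss_tracker.update_state(loss)
        return {"loss": self.loss_tracker.result()}

    @property
    def metrics(self):
        return [self.loss_tracker]


encoder = keras.Sequential(
    [keras.layers.Dense(64, activation="relu"), keras.layers.Dense(32)]
)
model = ContrastiveModel(encoder)
model.compile(optimizer="adam")  # no loss argument: train_step supplies it (TF 2.2+)

# Two noisy "views" of the same 1000 examples stand in for augmented pairs.
x1 = np.random.random((1000, 16)).astype("float32")
x2 = (x1 + 0.01 * np.random.standard_normal((1000, 16))).astype("float32")
model.fit({"view_a": x1, "view_b": x2}, epochs=3, batch_size=32)

Keeping the tracker as a model attribute, rather than a module-level global as in the earlier example, avoids state leaking between model instances.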

Keras has an official notebook on a CLIP-style approach, Natural language image search with a Dual Encoder; the design of its DualEncoder class is worth a read.
When I find time I'll write a Keras implementation of SimCSE along these lines. Feel free to follow.
