I am 追寻蓝天, a blogger at 靠谱客. This article, which I collected during recent development work, mainly introduces MATLAB's train function. I found it quite good and am sharing it here in the hope that it makes a useful reference.

Overview

network/train
 train Train a neural network.
 
   [NET,TR] = train(NET,X,T) takes a network NET, input data X
   and target data T and returns the network after training it, and
   a training record TR.
 
   [NET,TR] = train(NET,X) takes only input data, in cases where
   the network's training function is unsupervised (i.e. does not require
   target data).
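 
   For example, a self-organizing map trains on input data alone (a minimal
   sketch; selforgmap and simplecluster_dataset are standard toolbox
   functions, and the 8x8 grid size is arbitrary):
 
      x = simplecluster_dataset;         % inputs only, no targets needed
      net = selforgmap([8 8]);           % SOM uses unsupervised training
      net = train(net,x);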
 
   [NET,TR] = train(NET,X,T,Xi,Ai,EW) takes additional optional
   arguments suitable for training dynamic networks and training with
   error weights.  Xi and Ai are the initial input and layer delays states
   respectively and EW defines error weights used to indicate
   the relative importance of each target value.
 
   train calls the network training function NET.trainFcn with the
   parameters NET.trainParam to perform training.  Training functions
   may also be called directly.
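 
   For example, common parameters such as the epoch limit and performance
   goal can be adjusted through NET.trainParam before training (a minimal
   sketch; the exact fields available depend on NET.trainFcn):
 
      [x,t] = simplefit_dataset;
      net = feedforwardnet(10);
      net.trainParam.epochs = 500;       % maximum number of epochs
      net.trainParam.goal = 1e-5;        % stop early if goal is reached
      [net,tr] = train(net,x,t);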
 
   train arguments can have two formats: matrices, for static
   problems and networks with single inputs and outputs, and cell arrays
   for multiple timesteps and networks with multiple inputs and outputs.
 
   The matrix format is as follows:
     X  - RxQ matrix
     Y  - UxQ matrix.
   Where:
     Q  = number of samples
     R  = number of elements in the network's input
     U  = number of elements in the network's output
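 
   For instance, the simplefit_dataset used in the examples below has one
   input element and one output element, so both X and T are 1xQ row
   vectors with one column per sample:
 
      [x,t] = simplefit_dataset;
      size(x)                            % 1xQ, R = 1
      size(t)                            % 1xQ, matching U = 1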
 
   The cell array format is most general:
     X  - NixTS cell array, each element X{i,ts} is an RixQ matrix.
     Xi - NixID cell array, each element Xi{i,k} is an RixQ matrix.
     Ai - NlxLD cell array, each element Ai{i,k} is an SixQ matrix.
      Y  - NoxTS cell array, each element Y{i,ts} is a UixQ matrix.
     Xf - NixID cell array, each element Xf{i,k} is an RixQ matrix.
     Af - NlxLD cell array, each element Af{i,k} is an SixQ matrix.
   Where:
     TS = number of time steps
     Ni = NET.numInputs
      Nl = NET.numLayers
     No = NET.numOutputs
     ID = NET.numInputDelays
     LD = NET.numLayerDelays
     Ri = NET.inputs{i}.size
     Si = NET.layers{i}.size
     Ui = NET.outputs{i}.size
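 
   For instance, these dimensions can be inspected on data prepared with
   preparets for the NARX network used in the dynamic example further
   below (a sketch for illustration only):
 
      [X,T] = simplenarx_dataset;
      net = narxnet(1:2,1:2,10);
      [Xs,Xi,Ai,Ts] = preparets(net,X,{},T);
      size(Xs)                           % NixTS
      size(Xi)                           % NixID
      size(Ai)                           % NlxLD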
 
   The error weights EW can be 1, indicating all targets are equally
   important.  It can also be either a 1xQ vector defining relative sample
   importances, a 1xTS cell array of scalar values defining relative
   timestep importances, an Nox1 cell array of scalar values defining
   relative network output importances, or in general an NoxTS cell array
   of UixQ matrices (the same size as T) defining every target element's
   relative importance.
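 
   Here, for example, is a minimal sketch of training with a 1xQ vector of
   sample weights (the linearly increasing weights are illustrative only;
   Xi and Ai are passed as empty cell arrays for a static network):
 
      [x,t] = simplefit_dataset;
      net = feedforwardnet(10);
      ew = linspace(0.1,1,size(t,2));    % later samples weighted more
      net = train(net,x,t,{},{},ew);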
 
   The training record TR is a structure whose fields depend on the network
   training function (NET.trainFcn). It may include fields such as:
     * Training, data division, and performance functions and parameters
     * Data division indices for training, validation and test sets
     * Data division masks for training, validation and test sets
     * Number of epochs (num_epochs) and the best epoch (best_epoch).
     * A list of training state names (states).
     * Fields for each state name recording its value throughout training
     * Performances of the best network (best_perf, best_vperf, best_tperf)
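 
   For example, the record can be inspected after training (a sketch;
   best_epoch and perf are among the fields recorded by the default
   backpropagation training functions):
 
      [x,t] = simplefit_dataset;
      [net,tr] = train(feedforwardnet(10),x,t);
      tr.best_epoch                      % epoch with the best performance
      plot(tr.perf)                      % training performance per epoch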
 
   Here a static feedforward network is created, trained on some data, then
   simulated using SIM and network notation.
 
     [x,t] = simplefit_dataset;
     net = feedforwardnet(10);
     net = train(net,x,t);
     y1 = sim(net,x)
     y2 = net(x)
 
   Here a dynamic NARX network is created, trained, and simulated on
   time series data.
 
    [X,T] = simplenarx_dataset;
    net = narxnet(1:2,1:2,10);
    view(net)
    [Xs,Xi,Ai,Ts] = preparets(net,X,{},T);
    net = train(net,Xs,Ts,Xi,Ai);
    Y = net(Xs,Xi,Ai)
 
   Training with Parallel Computing
 
   Parallel Computing Toolbox allows Neural Network Toolbox to train
   networks faster and on larger datasets than can fit on one PC.
 
   (Parallel and GPU training are currently supported for backpropagation
   training only, i.e. not for Self-Organizing Maps.)
 
   Here training automatically happens across MATLAB parallel workers.
 
     parpool
     [X,T] = vinyl_dataset;
     net = feedforwardnet(140,'trainscg');
     net = train(net,X,T,'UseParallel','yes');
     Y = net(X,'UseParallel','yes');
 
   Use Composite values to distribute the data manually, and get back
   the results as a Composite value.  If the data is loaded as it is
   distributed, then while each piece of the dataset must fit in RAM, the
   entire dataset is limited only by the total RAM of all the workers.
   Use the function configure to prepare a network for training
   with parallel data.
 
     net = feedforwardnet(140,'trainscg');
     net = configure(net,X,T);
     Xc = Composite;
     Tc = Composite;
     for i=1:numel(Xc)
        Xc{i} = X+rand(size(X))*0.1; % (Use real data
        Tc{i} = T+rand(size(T))*0.1; %  instead of random data)
     end
     net = train(net,Xc,Tc);
     Yc = net(Xc);
     Y = cat(2,Yc{:});
 
   Networks can be trained using the current GPU device, if it is
   supported by the Parallel Computing Toolbox. This is efficient for
   large static problems or dynamic problems with many series.
 
     net = feedforwardnet(140,'trainscg');
     net = train(net,X,T,'UseGPU','yes');
     Y = net(X,'UseGPU','yes');
 
   If a network is static (no delays) and has a single input and output,
   then training can be done with data already converted to gpuArray form,
   if the network is configured with MATLAB data first.
 
     net = feedforwardnet(140,'trainscg');
     net = configure(net,X,T);
     Xgpu = gpuArray(X);
     Tgpu = gpuArray(T);
     net = train(net,Xgpu,Tgpu);
     Ygpu = net(Xgpu);
     Y = gather(Ygpu);
 
   To run in parallel, with workers associated with unique GPUs taking
   advantage of that hardware, while the rest of the workers use CPUs:
 
     net = feedforwardnet(140,'trainscg');
     net = train(net,X,T,'UseParallel','yes','UseGPU','yes');
     Y = net(X,'UseParallel','yes','UseGPU','yes');
 
   Using only workers with unique GPUs may result in higher speed, as CPU
   workers may not be able to keep up.
 
     net = feedforwardnet(140,'trainscg');
     net = train(net,X,T,'UseParallel','yes','UseGPU','only');
     Y = net(X,'UseParallel','yes','UseGPU','only');
 
   Use the 'ShowResources' option to verify the computing resources used.
 
     net = train(...,'ShowResources','yes');
 
   Training Safely with Checkpoint Files
 
   The optional parameter CheckpointFile allows you to specify a file to periodically save
   intermediate values of the neural network and training record during training.  This protects
   training results from power failures, computer lock-ups, Ctrl-C, or any other event that
   halts the training process before train returns normally.
 
   CheckpointFile can be set to the empty string to disable checkpoint saves (the default value),
   to a filename to save to the current working directory, or a file path.
 
   The optional parameter CheckpointDelay limits how often saves happen.  It has a default
   value of 60 which means that checkpoint saves will not happen more than once a minute.
   Limiting the frequency of checkpoints keeps the amount of time saving checkpoints low
   compared to the time spent in calculations, using time efficiently.  Set CheckpointDelay
   to 0 if you want checkpoint saves to occur every epoch.
 
   For example, here a network is trained with checkpoints saved no more than
   once every two minutes.
 
     [x,t] = vinyl_dataset;
     net = fitnet([60 30]);
     net = train(net,x,t,'CheckpointFile','MyCheckpoint','CheckpointDelay',120);
 
   If a computer failure happens, the latest network can be recovered and used to continue
   training from the point of failure. The checkpoint file includes a structure variable
   'checkpoint' which includes the network, training record, filename, time and number.
 
     [x,t] = vinyl_dataset;
     load MyCheckpoint
     net = checkpoint.net;
     net = train(net,x,t,'CheckpointFile','MyCheckpoint');
 
   Another use for this feature is to be able to stop a parallel training session (using the
   UseParallel parameter described above) even though the Neural Network Training Tool
   is not available during parallel training.  Set a CheckpointFile, use Ctrl-C to stop
   training any time, then load your checkpoint file to get the network and training record.
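 
   For example (a sketch combining the two parameters; 'MyCheckpoint' is the
   same illustrative filename as above):
 
      net = train(net,x,t,'UseParallel','yes','CheckpointFile','MyCheckpoint');
      % ... press Ctrl-C once training has started, then recover:
      load MyCheckpoint
      net = checkpoint.net;     % the checkpoint structure also holds the
                                % training record, time and number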

Conclusion

That is everything 追寻蓝天 collected and organized on MATLAB's train function. I hope this article helps you solve the development problems you run into with it.

This content was contributed by users or collected from the web, and is provided for learning and reference; copyright belongs to the original authors.