2018-07-07
keras, learn
Notes from the official tutorial, recorded here for later reference.
The code below was run with keras 2.2.4 and tensorflow 1.11.0.
You can do batch training using model.train_on_batch(x, y) and model.test_on_batch(x, y). See the models documentation. Alternatively, you can write a generator that yields batches of training data and use the method model.fit_generator(data_generator, steps_per_epoch, epochs). You can see batch training in action in our CIFAR10 example.
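A minimal sketch of the generator approach (x_train, y_train and the batch size are placeholders, and model is assumed to be built and compiled already):
import numpy as np

def data_generator(x, y, batch_size=32):
    # loop forever, yielding one (inputs, targets) batch at a time,
    # which is the contract fit_generator expects
    n = len(x)
    while True:
        for i in range(0, n, batch_size):
            yield x[i:i + batch_size], y[i:i + batch_size]

model.fit_generator(data_generator(x_train, y_train, batch_size=32),
                    steps_per_epoch=len(x_train) // 32,
                    epochs=10)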
You can use an EarlyStopping callback:
from keras.callbacks import EarlyStopping
early_stopping = EarlyStopping(monitor='val_loss', patience=2)
model.fit(x, y, validation_split=0.2, callbacks=[early_stopping])
Find out more in the callbacks documentation.
If you set the validation_split argument in model.fit to e.g. 0.1, then the validation data used will be the last 10% of the data. If you set it to 0.25, it will be the last 25% of the data, etc. Note that the data isn't shuffled before extracting the validation split, so the validation set is literally just the last x% of samples in the input you passed. The same validation set is used for all epochs (within the same call to fit).
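Concretely, validation_split=0.25 is equivalent to slicing the arrays yourself (a sketch; x and y are assumed to be numpy arrays):
split_at = int(len(x) * (1 - 0.25))          # index where the validation tail starts
x_train, x_val = x[:split_at], x[split_at:]  # validation is literally the last 25%
y_train, y_val = y[:split_at], y[split_at:]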
If the shuffle argument in model.fit is set to True (which is the default), the training data will be randomly shuffled at each epoch. Validation data is never shuffled.
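For ordered data such as time series, you may want to disable this behavior; a minimal sketch (x and y are placeholder arrays):
# keep batches in their original (e.g. temporal) order between epochs
model.fit(x, y, epochs=10, shuffle=False)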
The model.fit method returns a History callback, which has a history attribute containing the lists of successive losses and other metrics.
hist = model.fit(x, y, validation_split=0.2)
print(hist.history)
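For example, the recorded values can be plotted directly with matplotlib (a sketch continuing from the fit call above, assuming validation metrics were recorded):
import matplotlib.pyplot as plt
plt.plot(hist.history['loss'], label='loss')
plt.plot(hist.history['val_loss'], label='val_loss')
plt.legend()
plt.show()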
To "freeze" a layer means to exclude it from training, i.e. its weights will never be updated. This is useful in the context of fine-tuning a model, or using fixed embeddings for a text input.
You can pass a trainable argument (boolean) to a layer constructor to set a layer to be non-trainable:
frozen_layer = Dense(32, trainable=False)
Additionally, you can set the trainable property of a layer to True or False after instantiation. For this to take effect, you will need to call compile() on your model after modifying the trainable property. Here's an example:
from keras.layers import Input, Dense
from keras.models import Model

x = Input(shape=(32,))
layer = Dense(32)
layer.trainable = False
y = layer(x)
frozen_model = Model(x, y)
# in the model below, the weights of `layer` will not be updated during training
frozen_model.compile(optimizer='rmsprop', loss='mse')
layer.trainable = True
trainable_model = Model(x, y)
# with this model the weights of the layer will be updated during training
# (which will also affect the above model since it uses the same layer instance)
trainable_model.compile(optimizer='rmsprop', loss='mse')
frozen_model.fit(data, labels) # this does NOT update the weights of `layer`
trainable_model.fit(data, labels) # this updates the weights of `layer`
Making an RNN stateful means that the states for the samples of each batch will be reused as initial states for the samples in the next batch.
When using stateful RNNs, it is therefore assumed that:
- all batches have the same number of samples
- if x1 and x2 are successive batches of samples, then x2[i] is the follow-up sequence to x1[i], for every i
To use statefulness in RNNs, you need to:
- specify the batch size used, by passing a batch_size argument to the first layer in your model, e.g. batch_size=32 for a 32-samples batch of sequences of 10 timesteps with 16 features per timestep
- set stateful=True in your RNN layer(s)
- specify shuffle=False when calling fit()
To reset the states accumulated:
- call model.reset_states() to reset the states of all layers in the model
- call layer.reset_states() to reset the states of a specific stateful RNN layer
Example:
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

x = np.random.random((32, 21, 16))  # stand-in input data of shape (32, 21, 16)
# we will feed it to our model in sequences of length 10
model = Sequential()
model.add(LSTM(32, input_shape=(10, 16), batch_size=32, stateful=True))
model.add(Dense(16, activation='softmax'))
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
# we train the network to predict the 11th timestep given the first 10:
model.train_on_batch(x[:, :10, :], np.reshape(x[:, 10, :], (32, 16)))
# the state of the network has changed. We can feed the follow-up sequences:
model.train_on_batch(x[:, 10:20, :], np.reshape(x[:, 20, :], (32, 16)))
# let's reset the states of the LSTM layer:
model.reset_states()
# another way to do it in this case:
model.layers[0].reset_states()
Note that the methods predict, fit, train_on_batch, predict_classes, etc. will all update the states of the stateful layers in a model. This allows you to perform not only stateful training, but also stateful prediction.
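For instance, stateful prediction over consecutive windows could look like this (a sketch reusing the stateful model above):
model.reset_states()
# the LSTM state carries over from one predict call to the next,
# so the second call continues the sequences where the first left off
pred_1 = model.predict(x[:, :10, :], batch_size=32)
pred_2 = model.predict(x[:, 10:20, :], batch_size=32)
model.reset_states()  # clear the states before feeding unrelated data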
You can remove the last added layer in a Sequential model by calling .pop():
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(32, activation='relu', input_dim=784))
model.add(Dense(32, activation='relu'))
print(len(model.layers)) # "2"
model.pop()
print(len(model.layers)) # "1"
Code and pre-trained weights are available for the following image classification models:
Xception, VGG16, VGG19, ResNet, ResNet v2, ResNeXt, Inception v3, Inception-ResNet v2, MobileNet v1, MobileNet v2, DenseNet, NASNet
They can be imported from the module keras.applications:
from keras.applications.xception import Xception
from keras.applications.vgg16 import VGG16
from keras.applications.vgg19 import VGG19
from keras.applications.resnet import ResNet50
from keras.applications.resnet import ResNet101
from keras.applications.resnet import ResNet152
from keras.applications.resnet_v2 import ResNet50V2
from keras.applications.resnet_v2 import ResNet101V2
from keras.applications.resnet_v2 import ResNet152V2
from keras.applications.resnext import ResNeXt50
from keras.applications.resnext import ResNeXt101
from keras.applications.inception_v3 import InceptionV3
from keras.applications.inception_resnet_v2 import InceptionResNetV2
from keras.applications.mobilenet import MobileNet
from keras.applications.mobilenet_v2 import MobileNetV2
from keras.applications.densenet import DenseNet121
from keras.applications.densenet import DenseNet169
from keras.applications.densenet import DenseNet201
from keras.applications.nasnet import NASNetLarge
from keras.applications.nasnet import NASNetMobile
model = VGG16(weights='imagenet', include_top=True)
For a few simple usage examples, see the documentation for the Applications module.
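For illustration, a typical classification call with VGG16 might look like the following sketch ('elephant.jpg' is a placeholder path):
import numpy as np
from keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions
from keras.preprocessing import image

model = VGG16(weights='imagenet', include_top=True)
img = image.load_img('elephant.jpg', target_size=(224, 224))  # placeholder image file
batch = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
preds = model.predict(batch)
print(decode_predictions(preds, top=3)[0])  # [(class_id, class_name, probability), ...]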