2018-07-06
keras, learn
Notes on the official Keras tutorial, recorded here for easy reference.
The code below was run with keras 2.2.4 and tensorflow 1.11.0.
model.layers is a flattened list of the layers comprising the model.
model.inputs is the list of input tensors of the model.
model.outputs is the list of output tensors of the model.
model.summary() prints a summary representation of your model. Shortcut for utils.print_summary.
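As a quick sketch of these four, applied to a toy two-layer Sequential model (the layer sizes and the 20-dimensional input are arbitrary choices for illustration):

from keras.models import Sequential
from keras.layers import Dense

toy = Sequential([
    Dense(32, activation='relu', input_shape=(20,)),
    Dense(10, activation='softmax'),
])

print(toy.layers)   # list containing the two Dense layer objects
print(toy.inputs)   # list with the single input placeholder tensor
print(toy.outputs)  # list with the single softmax output tensor
toy.summary()       # prints layer names, output shapes and parameter counts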
model.get_config() returns a dictionary containing the configuration of the model. The model can be reinstantiated from its config via:
config = model.get_config()
model = Model.from_config(config)
# or, for Sequential:
model = Sequential.from_config(config)
model.get_weights() returns a list of all weight tensors in the model, as Numpy arrays.
model.set_weights(weights) sets the values of the weights of the model, from a list of Numpy arrays. The arrays in the list should have the same shape as those returned by get_weights().
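Continuing with the toy model from the sketch above, the weights round-trip through plain Numpy arrays, which is a simple way to copy weights between two models with identical architectures:

weights = toy.get_weights()
print([w.shape for w in weights])   # [(20, 32), (32,), (32, 10), (10,)]: kernel and bias per Dense layer

# Any list of arrays passed to set_weights must match those shapes exactly,
# e.g. when copying weights into a second model with the same architecture:
toy_copy = Sequential([
    Dense(32, activation='relu', input_shape=(20,)),
    Dense(10, activation='softmax'),
])
toy_copy.set_weights(weights)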
model.to_json() returns a representation of the model as a JSON string. Note that the representation does not include the weights, only the architecture. You can reinstantiate the same model (with reinitialized weights) from the JSON string via:
from keras.models import model_from_json
json_string = model.to_json()
model = model_from_json(json_string)
model.to_yaml() returns a representation of the model as a YAML string. Note that the representation does not include the weights, only the architecture. You can reinstantiate the same model (with reinitialized weights) from the YAML string via:
from keras.models import model_from_yaml
yaml_string = model.to_yaml()
model = model_from_yaml(yaml_string)
model.save_weights(filepath) saves the weights of the model as an HDF5 file.
model.load_weights(filepath, by_name=False) loads the weights of the model from an HDF5 file (created by save_weights). By default, the architecture is expected to be unchanged. To load weights into a different architecture (with some layers in common), use by_name=True to load only those layers with the same name.
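A sketch of the save/load round trip on the same toy model; the file name is arbitrary, and the commented-out line shows the by_name variant (other_model is a hypothetical second model sharing some identically named layers):

toy.save_weights('toy_weights.h5')   # requires h5py

# Same architecture: the default positional loading is enough.
toy.load_weights('toy_weights.h5')

# Different architecture sharing some layers: give those layers identical
# names in both models and load only them by name.
# other_model.load_weights('toy_weights.h5', by_name=True)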
Besides the Sequential and functional APIs, a model can also be built by subclassing keras.Model and defining its forward pass in call():

import keras

class SimpleMLP(keras.Model):

    def __init__(self, use_bn=False, use_dp=False, num_classes=10):
        super(SimpleMLP, self).__init__(name='mlp')
        self.use_bn = use_bn
        self.use_dp = use_dp
        self.num_classes = num_classes
        # Layers are created in __init__ ...
        self.dense1 = keras.layers.Dense(32, activation='relu')
        self.dense2 = keras.layers.Dense(num_classes, activation='softmax')
        if self.use_dp:
            self.dp = keras.layers.Dropout(0.5)
        if self.use_bn:
            self.bn = keras.layers.BatchNormalization(axis=-1)

    def call(self, inputs):
        # ... and composed into the forward pass in call().
        x = self.dense1(inputs)
        if self.use_dp:
            x = self.dp(x)
        if self.use_bn:
            x = self.bn(x)
        return self.dense2(x)

model = SimpleMLP()
model.compile(...)
model.fit(...)
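As a minimal sketch of those last two elided calls, reusing the SimpleMLP class defined above with randomly generated stand-in data; the optimizer, loss, epochs, and batch size are illustrative assumptions, not part of the original notes:

import numpy as np

# Stand-in data: 1000 samples with 20 features, labels one-hot encoded over 10 classes.
x_train = np.random.random((1000, 20))
y_train = keras.utils.to_categorical(np.random.randint(10, size=(1000, 1)), num_classes=10)

model = SimpleMLP()
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, batch_size=32)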
All Keras layers share a number of common methods:
layer.get_weights(): returns the weights of the layer as a list of Numpy arrays.
layer.set_weights(weights): sets the weights of the layer from a list of Numpy arrays (with the same shapes as the output of get_weights).
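A minimal sketch with a single Dense layer: calling it on an Input tensor builds its kernel and bias, whose shapes then constrain what set_weights will accept (the 16/32 sizes are arbitrary):

import numpy as np
from keras.layers import Dense, Input

inputs = Input(shape=(16,))
layer = Dense(32)
outputs = layer(inputs)   # calling the layer creates its weights

kernel, bias = layer.get_weights()   # kernel shape (16, 32), bias shape (32,)
layer.set_weights([np.zeros((16, 32)), np.zeros((32,))])   # shapes must match get_weights()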
layer.get_config(): returns a dictionary containing the configuration of the layer. The layer can be reinstantiated from its config via:
layer = Dense(32)
config = layer.get_config()
reconstructed_layer = Dense.from_config(config)
Or:
from keras import layers
config = layer.get_config()
layer = layers.deserialize({'class_name': layer.__class__.__name__,
                            'config': config})
If a layer has a single node (i.e. if it isn't a shared layer), you can get its input tensor, output tensor, input shape and output shape via:
layer.input
layer.output
layer.input_shape
layer.output_shape
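Using the Dense layer built in the earlier sketch, which has been called exactly once, these attributes are unambiguous:

print(layer.input)         # the Input tensor the layer was called on
print(layer.output)        # the tensor the layer returned
print(layer.input_shape)   # (None, 16)
print(layer.output_shape)  # (None, 32)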
If the layer has multiple nodes (see the concept of layer nodes and shared layers), you can use the following methods:
layer.get_input_at(node_index)
layer.get_output_at(node_index)
layer.get_input_shape_at(node_index)
layer.get_output_shape_at(node_index)
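A sketch of a shared layer with two nodes, where the plain attributes above would be ambiguous and the per-node accessors pick out a specific call (the shapes are arbitrary):

from keras.layers import Dense, Input

shared = Dense(8)
a = Input(shape=(4,))
b = Input(shape=(4,))
shared_a = shared(a)   # creates node 0
shared_b = shared(b)   # creates node 1

print(shared.get_output_at(0))        # tensor computed from input a
print(shared.get_output_at(1))        # tensor computed from input b
print(shared.get_input_shape_at(0))   # (None, 4)
print(shared.get_output_shape_at(1))  # (None, 8)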