[Transcribed] Official Keras Tutorial - 30 Seconds to Keras

2018-07-01 keras, learn

The official tutorial, transcribed here for quick reference.

The code below was run with keras 2.2.4 and tensorflow 1.11.0.

In [1]:
# The core data structure of Keras is a model, a way to organize layers. 
# The simplest type of model is the `Sequential` model, a linear stack of layers. 
# For more complex architectures, you should use the Keras functional API, 
# which allows you to build arbitrary graphs of layers.
from keras.models import Sequential

model = Sequential()
Using TensorFlow backend.
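The comment above mentions the functional API for more complex architectures. As a minimal sketch (not part of the original notebook run), the same two-layer network built below could be written with it like this:

# Functional-API version of the same two-layer network (sketch, not run here)
from keras.layers import Input, Dense
from keras.models import Model

inputs = Input(shape=(784,))                 # flattened 28x28 MNIST image
x = Dense(64, activation='relu')(inputs)
outputs = Dense(10, activation='softmax')(x)
functional_model = Model(inputs=inputs, outputs=outputs)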
In [2]:
# Stacking layers is as easy as .add()
from keras.layers import Dense

model.add(Dense(units=64, activation='relu', input_dim=784))
model.add(Dense(units=10, activation='softmax'))
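To double-check the stacked layers and their parameter counts, model.summary() prints a layer table; a quick sketch (the shapes assume the 784-dim input above):

# Inspect the stacked layers and parameter counts (sketch, not in the original run)
model.summary()
# Expected: Dense (None, 64) with 784*64 + 64 = 50240 params,
#           Dense (None, 10) with 64*10 + 10 = 650 params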
In [3]:
# Once your model looks good, configure its learning process with .compile()
model.compile(loss='categorical_crossentropy',
              optimizer='sgd',
              metrics=['accuracy'])
In [4]:
# If you need to, you can further configure your optimizer. 
# A core principle of Keras is to make things reasonably simple, 
# while allowing the user to be fully in control when they need to.
import keras
model.compile(loss=keras.losses.categorical_crossentropy,
              metrics=['accuracy'],
              optimizer=keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True))
In [5]:
from keras import backend as K
from keras.datasets import mnist

batch_size = 128
num_classes = 10
epochs = 12

# input image dimensions
img_rows, img_cols = 28, 28

# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# (CNN-style reshape from the Keras MNIST example, kept here for reference;
#  it preserves the 28x28 image shape and adds a channel axis.)
# if K.image_data_format() == 'channels_first':
#     x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
#     x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
#     input_shape = (1, img_rows, img_cols)
# else:
#     x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
#     x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
#     input_shape = (img_rows, img_cols, 1)

# The Dense model above expects flat 784-dim vectors, so flatten each image
x_train = x_train.reshape(x_train.shape[0], -1)
x_test = x_test.reshape(x_test.shape[0], -1)

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
x_train shape: (60000, 784)
60000 train samples
10000 test samples
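to_categorical simply turns integer labels into one-hot rows; a tiny illustration with made-up labels (not from the notebook):

# What to_categorical does, on a toy label vector (illustration only)
print(keras.utils.to_categorical([0, 2, 1], num_classes=3))
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]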
In [6]:
# You can now iterate on your training data in batches
# x_train and y_train are NumPy arrays, just like in the scikit-learn API.
model.fit(x_train, y_train, epochs=5, batch_size=32)
Epoch 1/5
60000/60000 [==============================] - 2s 38us/step - loss: 0.3186 - acc: 0.9093
Epoch 2/5
60000/60000 [==============================] - 2s 35us/step - loss: 0.1614 - acc: 0.9532
Epoch 3/5
60000/60000 [==============================] - 2s 38us/step - loss: 0.1196 - acc: 0.9656
Epoch 4/5
60000/60000 [==============================] - 2s 40us/step - loss: 0.0966 - acc: 0.9715
Epoch 5/5
60000/60000 [==============================] - 2s 35us/step - loss: 0.0816 - acc: 0.9756
Out[6]:
<keras.callbacks.History at 0xb2236b908>
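fit() returns the History object shown above; its history dict stores the per-epoch metrics, and passing validation_split also tracks held-out performance. A sketch (the 0.1 split is an arbitrary choice, not from the tutorial):

# Keep the History object and monitor a held-out split (sketch)
history = model.fit(x_train, y_train, epochs=5, batch_size=32,
                    validation_split=0.1)   # hold out 10% of the training data
print(history.history['loss'])      # per-epoch training loss
print(history.history['val_acc'])   # per-epoch validation accuracy (Keras 2.2 key)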
In [7]:
# Alternatively, you can feed batches to your model manually:
model.train_on_batch(x_train[:1000], y_train[:1000])
Out[7]:
[0.08204601, 0.977]
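train_on_batch performs a single gradient update on one batch and returns [loss, acc] as shown above, so a full training pass can be written by hand; a sketch with an arbitrary batch size and a single epoch:

# Manual mini-batch training loop built on train_on_batch (sketch)
batch_size = 128
for start in range(0, len(x_train), batch_size):
    batch_loss, batch_acc = model.train_on_batch(x_train[start:start + batch_size],
                                                 y_train[start:start + batch_size])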
In [8]:
# Evaluate your performance in one line
loss_and_metrics = model.evaluate(x_test, y_test, batch_size=128)
10000/10000 [==============================] - 0s 10us/step
In [9]:
loss_and_metrics
Out[9]:
[0.09461714485567063, 0.9713]
In [10]:
model.metrics_names
Out[10]:
['loss', 'acc']
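evaluate() returns its values in the same order as model.metrics_names, so the two can be zipped together; a small sketch:

# Pair each metric name with its value from evaluate() (sketch)
for name, value in zip(model.metrics_names, loss_and_metrics):
    print(name, value)
# loss 0.0946...
# acc 0.9713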
In [11]:
# Or generate predictions on new data
# (predict returns per-class probabilities, one softmax row per sample)
classes = model.predict(x_test, batch_size=128)
In [12]:
classes
Out[12]:
array([[1.6202371e-06, 1.5546624e-08, 1.1017072e-04, ..., 9.9580610e-01,
        4.2797988e-06, 4.4569868e-05],
       [6.6172972e-05, 1.8846485e-04, 9.9947387e-01, ..., 2.8578787e-11,
        3.1016646e-06, 6.5854850e-09],
       [6.9172788e-06, 9.9318802e-01, 4.2946776e-04, ..., 2.4977161e-03,
        1.6268161e-03, 2.9882387e-04],
       ...,
       [9.2756247e-10, 1.3719182e-09, 9.9988107e-08, ..., 4.9581774e-07,
        9.4726216e-05, 2.1990945e-04],
       [1.9457395e-06, 6.2094955e-06, 5.0209401e-08, ..., 3.6674285e-07,
        1.0093354e-03, 3.5336922e-07],
       [1.8916376e-07, 2.5019442e-10, 6.6131170e-06, ..., 7.1778048e-12,
        4.0408770e-09, 1.0079466e-10]], dtype=float32)
In [14]:
model.predict(x_test[:1])
Out[14]:
array([[1.6202340e-06, 1.5546593e-08, 1.1017050e-04, 4.0293336e-03,
        5.1589550e-09, 3.7974635e-06, 1.0487852e-09, 9.9580610e-01,
        4.2797906e-06, 4.4569824e-05]], dtype=float32)
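Since predict() returns one softmax row of per-class probabilities per sample, the predicted label is the argmax of each row; a sketch comparing against the one-hot test labels:

# Turn softmax probabilities into class labels and check test accuracy (sketch)
import numpy as np
predicted_labels = np.argmax(classes, axis=1)    # classes from In [11]
true_labels = np.argmax(y_test, axis=1)          # undo the one-hot encoding
print((predicted_labels == true_labels).mean())  # should be close to the 0.9713 above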