Transfer Learning with Keras and VGG
In this example, four short but comprehensive sub-examples are presented:
- Loading weights from the pre-trained models that ship with the Keras library
- Stacking another network on top of any of VGG's layers for training
- Inserting a layer in the middle of other layers
- Tips and general rules of thumb for fine-tuning and transfer learning with VGG
Loading pre-trained weights
Pre-trained ImageNet models, including VGG-16 and VGG-19, are available in Keras. VGG-16 will be used in this example. For more information, see the Keras Applications documentation.
from keras import applications
# This will load the whole VGG16 network, including the top Dense layers.
# Note: by specifying the shape of top layers, input tensor shape is forced
# to be (224, 224, 3), therefore you can use it only on 224x224 images.
vgg_model = applications.VGG16(weights='imagenet', include_top=True)
# If you are only interested in the convolutional filters, note that by not
# specifying the shape of top layers, the input tensor shape is (None, None, 3),
# so you can use them for any size of images.
vgg_model = applications.VGG16(weights='imagenet', include_top=False)
# If you want to specify input tensor
from keras.layers import Input
input_tensor = Input(shape=(160, 160, 3))
vgg_model = applications.VGG16(weights='imagenet',
                               include_top=False,
                               input_tensor=input_tensor)
# To see the models' architecture and layer names, run the following
vgg_model.summary()
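As a quick sanity check of the headless model, the sketch below runs a blank image through the convolutional base and inspects the feature-map shape. Note that `weights=None` is used here only to skip the large ImageNet download; the output shapes are identical with `weights='imagenet'`.

```python
import numpy as np
from keras import applications

# Convolutional base only; weights=None skips the ImageNet download,
# which is enough for checking shapes (use weights='imagenet' for real work).
model = applications.VGG16(weights=None, include_top=False,
                           input_shape=(160, 160, 3))

# VGG16 has five 2x2 max-pooling layers, so spatial dimensions shrink by
# a factor of 2**5 = 32: 160 -> 5. The last conv block has 512 filters.
features = model.predict(np.zeros((1, 160, 160, 3), dtype='float32'))
print(features.shape)  # (1, 5, 5, 512)
```

This factor-of-32 downsampling is also why `include_top=False` is required for non-224x224 inputs: the Dense top layers are tied to the 7x7x512 feature map that a 224x224 image produces.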
Creating a new network with bottom layers taken from VGG
Assume that for some specific task, on images of size (160, 160, 3), you want to use the pre-trained bottom layers of VGG, up to the layer named block2_pool.
vgg_model = applications.VGG16(weights='imagenet',
                               include_top=False,
                               input_shape=(160, 160, 3))
# Creating dictionary that maps layer names to the layers
layer_dict = dict([(layer.name, layer) for layer in vgg_model.layers])
# Getting output tensor of the last VGG layer that we want to include
x = layer_dict['block2_pool'].output
# Stacking a new simple convolutional network on top of it
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
x = Conv2D(filters=64, kernel_size=(3, 3), activation='relu')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Flatten()(x)
x = Dense(256, activation='relu')(x)
x = Dropout(0.5)(x)
x = Dense(10, activation='softmax')(x)
# Creating new model. Please note that this is NOT a Sequential() model.
from keras.models import Model
custom_model = Model(inputs=vgg_model.input, outputs=x)
# Make sure that the pre-trained bottom layers are not trainable
for layer in custom_model.layers[:7]:
    layer.trainable = False
# Do not forget to compile it
custom_model.compile(loss='categorical_crossentropy',
                     optimizer='rmsprop',
                     metrics=['accuracy'])
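Freezing by a fixed slice such as layers[:7] is brittle: the index silently breaks if the architecture changes. A hypothetical helper (not part of Keras) that freezes everything up to a named layer is sketched below on a small toy model standing in for the VGG-plus-head combination:

```python
import keras
from keras import layers

def freeze_up_to(model, layer_name):
    """Mark all layers up to and including `layer_name` as non-trainable."""
    passed = False
    for layer in model.layers:
        layer.trainable = passed
        if layer.name == layer_name:
            passed = True

# Toy model standing in for VGG bottom layers + a custom head
inputs = keras.Input(shape=(8,))
x = layers.Dense(4, name='bottom')(inputs)
x = layers.Dense(4, name='middle')(x)
outputs = layers.Dense(2, name='head')(x)
model = keras.Model(inputs, outputs)

freeze_up_to(model, 'middle')
# 'bottom' and 'middle' are now frozen; 'head' still trains
```

With VGG, the same call would be `freeze_up_to(custom_model, 'block2_pool')`, which keeps working no matter how many layers precede the named one.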
Removing multiple layers and inserting a new one in the middle
Assume you need to speed up VGG16 by replacing block1_conv1 and block1_conv2 with a single convolutional layer, in such a way that the pre-trained weights of the remaining layers are preserved. The idea is to disassemble the whole network into separate layers and then assemble it back. Here is the code specifically for this task:
vgg_model = applications.VGG16(include_top=True, weights='imagenet')
# Disassemble layers
layers = [l for l in vgg_model.layers]
# Defining a new convolutional layer.
# Important: the number of filters should be the same!
# Note: the receptive field of two stacked 3x3 convolutions is 5x5.
new_conv = Conv2D(filters=64,
                  kernel_size=(5, 5),
                  name='new_conv',
                  padding='same')(layers[0].output)
# Now stack everything back
# Note: If you are going to fine tune the model, do not forget to
# mark other layers as un-trainable
x = new_conv
for i in range(3, len(layers)):
    layers[i].trainable = False
    x = layers[i](x)
# Final touch
result_model = Model(inputs=layers[0].input, outputs=x)
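The 5x5 kernel size chosen above follows from the standard receptive-field recursion for stacked convolutions. A minimal sketch of that arithmetic (assuming stride 1, as in VGG's conv layers):

```python
def receptive_field(kernel_sizes, strides=None):
    """Receptive field of a stack of conv layers, measured on the input."""
    if strides is None:
        strides = [1] * len(kernel_sizes)  # VGG convs all use stride 1
    rf, jump = 1, 1
    for k, s in zip(kernel_sizes, strides):
        rf += (k - 1) * jump  # each layer widens the field by (k-1) steps
        jump *= s             # strides accumulate multiplicatively
    return rf

print(receptive_field([3, 3]))     # two 3x3 convs -> 5
print(receptive_field([3, 3, 3]))  # three 3x3 convs -> 7
```

So a single 5x5 convolution covers the same input patch as the two 3x3 convolutions it replaces, though the new layer's weights are randomly initialized and must be trained, while the reassembled layers keep their pre-trained weights.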