CIFAR-10 Dataset
The CIFAR-10 dataset consists of 60000 32×32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.
The 10 classes, together with a few sample images of each, are shown in a figure that is not reproduced here.
The main files involved:
File | Purpose |
---|---|
cifar10_input.py | Reads the local binary files |
cifar10_input_test.py | Tests the input pipeline |
cifar10.py | Builds the CIFAR-10 model |
cifar10_train.py | Trains the model on a CPU or GPU |
cifar10_eval.py | Evaluates the predictive performance of the model |
cifar10_multi_gpu_train.py | Trains the model on multiple GPUs |
cifar10_input.py
cifar10_input.py reads the CIFAR-10 binary files.
cifar10_input.py has four main parts:
- read_cifar10(): reads the binary CIFAR-10 data
- _generate_image_and_label_batch(): builds the queue of [images, labels]
- distorted_inputs(): reads and augments the data to build the inputs for training
- inputs(): preprocesses the images to build the inputs for evaluation
In detail:
- read_cifar10()
In the CIFAR-10 binary files, the first byte of each record is the image label, a number from 0 to 9; the next 3072 bytes are the pixel values. Since every image is stored with a fixed number of bytes, read_cifar10(filename_queue) uses tf.FixedLengthRecordReader to read fixed-length records from the files.
Of the 3072 (3 × 1024) pixel bytes, the R, G and B channels take 1024 bytes each, stored in row-major order.
Each binary file contains 10000 such 3073-byte records with no delimiters between them, so every file is 30730000 bytes long. The files have no header or footer, which is why the header_bytes and footer_bytes arguments of tf.FixedLengthRecordReader() in read_cifar10(filename_queue) are left at their default value of 0. The sketch below illustrates this record layout.
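As a quick illustration of that layout, here is a minimal NumPy sketch (not part of the tutorial code; the helper name decode_record is made up for this example) that decodes a single record from a data_batch_*.bin file:

```python
import numpy as np

RECORD_BYTES = 1 + 32 * 32 * 3  # 1 label byte + 3072 pixel bytes = 3073 bytes per record

def decode_record(path, index=0):
    """Return (label, image) for the record at position `index` in a CIFAR-10 batch file."""
    with open(path, 'rb') as f:
        f.seek(index * RECORD_BYTES)
        raw = np.frombuffer(f.read(RECORD_BYTES), dtype=np.uint8)
    label = int(raw[0])  # first byte: class label 0-9
    # Remaining 3072 bytes: R plane, G plane, B plane, each 1024 bytes in row-major order.
    # Reshape to [depth, height, width] and transpose to [height, width, depth],
    # mirroring what read_cifar10() does with tf.strided_slice and tf.transpose.
    image = raw[1:].reshape(3, 32, 32).transpose(1, 2, 0)
    return label, image
```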
- _generate_image_and_label_batch()
The function uses 16 independent threads that are scheduled on one queue; each call dequeues a batch_size worth of [images, labels] examples. It is called from both distorted_inputs() and inputs() to build the input queues.
- distorted_inputs()
distorted_inputs() builds the inputs for training. After reading the image data, it applies the following operations in order:
Randomly crop a 24×24 patch of the image
Randomly flip the image horizontally
Randomly adjust the image brightness
Randomly adjust the image contrast
Standardize: subtract the mean and divide by the standard deviation, linearly scaling the image to zero mean and unit norm
These distortions increase the number of training samples, i.e. they implement data augmentation. _generate_image_and_label_batch() is then called to build the queue of images and labels.
# Randomly crop a [height, width] section of the image
distorted_image = tf.random_crop(reshaped_image, [height, width, 3])
# Randomly flip the image horizontally
distorted_image = tf.image.random_flip_left_right(distorted_image)
# Randomly adjust the image brightness and contrast (these operations are not commutative)
distorted_image = tf.image.random_brightness(distorted_image, max_delta=63)
distorted_image = tf.image.random_contrast(distorted_image, lower=0.2, upper=1.8)
# Subtract the mean and divide by the standard deviation, linearly scaling to zero mean and unit norm: whitening/standardization
float_image = tf.image.per_image_standardization(distorted_image)
- inputs()
inputs() builds the inputs for evaluation. It preprocesses the images with the following operations:
Crop the central 24×24 region of the image
Subtract the mean and divide by the standard deviation, so the data has zero mean and unit variance
It then calls _generate_image_and_label_batch() to build the queue of images and labels.
# Image processing for evaluation
# Crop the central [height, width] region of the image; this central crop is used for evaluation
resized_image = tf.image.resize_image_with_crop_or_pad(reshaped_image, height, width)
# Subtract the mean and divide by the standard deviation, so the data has zero mean and unit variance
float_image = tf.image.per_image_standardization(resized_image)
cifar10.py
cifar10.py builds the CIFAR-10 model.
cifar10.py has three main parts:
- Model inputs: distorted_inputs(), inputs()
- Model training: loss(), _add_loss_summaries(), train(), etc.
- Model prediction: inference(), etc.
In detail:
- Model inputs
distorted_inputs() builds the inputs for CIFAR-10 training by calling distorted_inputs() in cifar10_input.py; inputs() builds the inputs for CIFAR-10 evaluation by calling inputs() in cifar10_input.py.
- Model training
loss() adds the L2 losses of all trainable variables to the cross-entropy loss. When computing the cross entropy between logits and labels, tf.nn.sparse_softmax_cross_entropy_with_logits() encodes the labels internally, so loss() can be fed plain integer labels directly.
That is, with tf.nn.softmax_cross_entropy_with_logits() the labels had to be encoded first, usually as one-hot vectors, e.g. the labels [0, 1, 2] are encoded as [1 0 0], [0 1 0], [0 0 1]; tf.nn.sparse_softmax_cross_entropy_with_logits() performs this conversion internally and therefore accepts single integer labels, as the sketch below shows.
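The following minimal sketch (a toy batch, not from the tutorial code) compares the two calls; both produce the same per-example cross entropy, but only the dense version requires one-hot labels:

```python
import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1],
                      [0.5, 2.5, 0.3]])
labels = tf.constant([0, 1])                  # plain integer class labels
one_hot_labels = tf.one_hot(labels, depth=3)  # [[1, 0, 0], [0, 1, 0]]

# Sparse version: takes integer labels directly
sparse_xent = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)
# Dense version: needs one-hot encoded labels
dense_xent = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_labels, logits=logits)

with tf.Session() as sess:
    print(sess.run(sparse_xent))  # same values...
    print(sess.run(dense_xent))   # ...as this line
```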
def loss(logits, labels):
    labels = tf.cast(labels, tf.int64)
    # Cross entropy between logits and labels
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=labels, logits=logits, name='cross_entropy_per_example')
    # Mean cross-entropy loss over the whole batch
    cross_entropy_mean = tf.reduce_mean(cross_entropy, name='cross_entropy')
    # Add the term to a collection
    tf.add_to_collection('losses', cross_entropy_mean)
    # The total loss is the cross-entropy loss plus all weight-decay terms (L2 losses)
    return tf.add_n(tf.get_collection('losses'), name='total_loss')
_add_loss_summaries() computes the individual losses and the total loss, and applies an exponential moving average to the individual losses.
train() trains the CIFAR-10 model with an exponentially decaying learning rate and a moving average of the losses. It also maintains a moving average of the trainable parameters, which improves the model's performance during evaluation.
# Track the moving averages of all trainable variables
variable_averages = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step)
variables_averages_op = variable_averages.apply(tf.trainable_variables())
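For intuition, here is a small pure-Python sketch (no TensorFlow, using the constants defined in cifar10.py and the default batch_size of 128) of the staircase schedule that tf.train.exponential_decay(..., staircase=True) produces inside train():

```python
INITIAL_LEARNING_RATE = 0.1
LEARNING_RATE_DECAY_FACTOR = 0.1
NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN = 50000
NUM_EPOCHS_PER_DECAY = 350.0
batch_size = 128

num_batches_per_epoch = NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN / float(batch_size)  # 390.625
decay_steps = int(num_batches_per_epoch * NUM_EPOCHS_PER_DECAY)               # 136718

def learning_rate(global_step):
    # staircase=True: the exponent only increases once every decay_steps steps
    return INITIAL_LEARNING_RATE * LEARNING_RATE_DECAY_FACTOR ** (global_step // decay_steps)

for step in (0, 100000, 200000, 400000):
    print(step, learning_rate(step))  # 0.1, 0.1, 0.01, 0.001
```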
- Model prediction
The CIFAR-10 model built by inference() consists of the following layers, in order:
Convolutional layer 1 (convolution)
Pooling layer (max pooling)
LRN layer (local response normalization: amplifies large responses and suppresses small ones to improve generalization)
Convolutional layer 2 (convolution)
LRN layer (local response normalization: amplifies large responses and suppresses small ones to improve generalization)
Pooling layer (max pooling)
Fully connected layer 3 (with an L2 regularization term to reduce overfitting)
Fully connected layer 4 (with an L2 regularization term to reduce overfitting)
Linear layer ((WX+b), a linear transformation that outputs the logits)
No softmax is applied in the linear layer, because tf.nn.sparse_softmax_cross_entropy_with_logits in loss() accepts unscaled logits and performs the softmax internally for efficiency.
# Linear layer (WX+b)
with tf.variable_scope('softmax_linear') as scope:
    weights = _variable_with_weight_decay('weights', [192, NUM_CLASSES], stddev=1/192.0, wd=None)
    # biases initialized to 0
    biases = _variable_on_cpu('biases', [NUM_CLASSES], tf.constant_initializer(0.0))
    # (WX+b), a linear transformation that outputs the logits
    softmax_linear = tf.add(tf.matmul(local4, weights), biases, name=scope.name)
    # Summaries
    _activation_summary(softmax_linear)
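To make the architecture concrete, the following sketch (my own trace, assuming the default batch_size of 128 and the 24×24 crop produced by cifar10_input.py) lists the tensor shape after each stage of inference():

```python
# 'SAME' padding keeps the spatial size of the convolutions;
# each 3x3 max pool with stride 2 halves it (rounding up).
shapes = [
    ('images',         (128, 24, 24, 3)),
    ('conv1',          (128, 24, 24, 64)),  # 5x5x3 kernel, 64 filters, stride 1
    ('pool1 / norm1',  (128, 12, 12, 64)),
    ('conv2 / norm2',  (128, 12, 12, 64)),  # 5x5x64 kernel, 64 filters
    ('pool2',          (128, 6, 6, 64)),
    ('reshape',        (128, 6 * 6 * 64)),  # 2304 features per image
    ('local3',         (128, 384)),
    ('local4',         (128, 192)),
    ('softmax_linear', (128, 10)),          # one logit per class
]
for name, shape in shapes:
    print('%-16s %s' % (name, shape))
```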
The structure of the model can be inspected in TensorBoard (the graph screenshot is not reproduced here).
cifar10_eval.py
cifar10_eval.py evaluates the predictive performance of the CIFAR-10 model.
cifar10_eval.py has two main parts:
- eval_once(): runs a single evaluation pass
- evaluate(): evaluates the CIFAR-10 model
cifar10_train.py periodically saves all of the model's parameters in checkpoint files, but it never evaluates the model itself. The eval_once() function in cifar10_eval.py uses these checkpoint files to measure the predictive performance on a separate data set.
The evaluate() function in cifar10_eval.py rebuilds the model with the inference() function from cifar10.py and then tests it on the evaluation set (10000 images).
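The bookkeeping behind the reported precision @ 1 is simple; the sketch below (NumPy only, with random logits standing in for the values that sess.run would return) mirrors the counting done in eval_once():

```python
import math
import numpy as np

num_examples, batch_size = 10000, 128
num_iter = int(math.ceil(num_examples / float(batch_size)))  # 79 batches
total_sample_count = num_iter * batch_size                   # 10112 examples evaluated

def in_top_1(logits, labels):
    # NumPy stand-in for tf.nn.in_top_k(logits, labels, 1)
    return np.argmax(logits, axis=1) == labels

true_count = 0
for _ in range(num_iter):
    logits = np.random.randn(batch_size, 10)            # placeholder for sess.run(...)
    labels = np.random.randint(0, 10, size=batch_size)
    true_count += np.sum(in_top_1(logits, labels))

print('precision @ 1 = %.3f' % (float(true_count) / total_sample_count))
```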
Code listings with comments
cifar10.py, with comments:
#!/usr/bin/python
# coding:utf-8
# Build the CIFAR-10 model
# pylint: disable=missing-docstring
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
import re
import sys
import tarfile

from six.moves import urllib
import tensorflow as tf

import cifar10_input

FLAGS = tf.app.flags.FLAGS

# Basic model parameters
tf.app.flags.DEFINE_integer('batch_size', 128,
                            """Number of images to process in a batch.""")
tf.app.flags.DEFINE_string('data_dir', '/home/w/mycode/data/cifar10_data',
                           """Path to the CIFAR-10 data directory.""")
tf.app.flags.DEFINE_boolean('use_fp16', False,  # half-precision floating point
                            """Train the model using fp16.""")

# Global constants describing the CIFAR-10 data set
IMAGE_SIZE = cifar10_input.IMAGE_SIZE
NUM_CLASSES = cifar10_input.NUM_CLASSES
NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN = cifar10_input.NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN
NUM_EXAMPLES_PER_EPOCH_FOR_EVAL = cifar10_input.NUM_EXAMPLES_PER_EPOCH_FOR_EVAL

# Constants describing the training process
MOVING_AVERAGE_DECAY = 0.9999     # decay rate for the moving averages
NUM_EPOCHS_PER_DECAY = 350.0      # epochs after which the learning rate decays
LEARNING_RATE_DECAY_FACTOR = 0.1  # learning rate decay factor
INITIAL_LEARNING_RATE = 0.1       # initial learning rate

# If the model is trained on multiple GPUs, all op names are prefixed with tower_name
# to tell the towers apart; this prefix is removed from the summary names when
# visualizing the model
TOWER_NAME = 'tower'
DATA_URL = 'https://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz'
def _activation_summary(x):
# 若多个GPU训练,则从名称中删除'tower_[0-9]/',利于TensorBoard显示
tensor_name = re.sub('%s_[0-9]*/' % TOWER_NAME, '', x.op.name)
# 提供激活直方图的summary
tf.summary.histogram(tensor_name + '/activations', x)
# 衡量激活稀疏性的summary
tf.summary.scalar(tensor_name + '/sparsity', tf.nn.zero_fraction(x))
# 创建存储在CPU内存上的变量(变量的名称,整数列表,变量的初始化程序)
def _variable_on_cpu(name, shape, initializer):
with tf.device('/cpu:0'):
dtype = tf.float16 if FLAGS.use_fp16 else tf.float32
var = tf.get_variable(name, shape, initializer=initializer, dtype=dtype)
return var
# 创建一个权重衰减的初始化变量(变量的名称,整数列表,截断高斯的标准差,加L2Loss权重衰减)
# 变量用截断正态分布初始化的.只有指定时才添加权重衰减
def _variable_with_weight_decay(name, shape, stddev, wd):
dtype = tf.float16 if FLAGS.use_fp16 else tf.float32
# 用截断正态分布进行初始化
var = _variable_on_cpu(name, shape, tf.truncated_normal_initializer(stddev=stddev,dtype=dtype))
if wd is not None:
# wd用于向losses添加L2正则化,防止过拟合,提高泛化能力
weight_decay = tf.multiply(tf.nn.l2_loss(var), wd, name='weight_loss')
# 把变量放入一个集合
tf.add_to_collection('losses', weight_decay)
return var
# -------------------------- Model inputs ----------------------------
# Inputs for training
# Returns: images: [batch_size, IMAGE_SIZE, IMAGE_SIZE, 3]; labels: [batch_size]
def distorted_inputs():
    if not FLAGS.data_dir:
        raise ValueError('Please supply a data_dir')
    data_dir = os.path.join(FLAGS.data_dir, 'cifar-10-batches-bin')
    # Read and augment the data
    images, labels = cifar10_input.distorted_inputs(data_dir=data_dir, batch_size=FLAGS.batch_size)
    if FLAGS.use_fp16:
        images = tf.cast(images, tf.float16)
        labels = tf.cast(labels, tf.float16)
    return images, labels

# Inputs for evaluation
# Returns: images: [batch_size, IMAGE_SIZE, IMAGE_SIZE, 3]; labels: [batch_size]
def inputs(eval_data):
    if not FLAGS.data_dir:
        raise ValueError('Please supply a data_dir')
    data_dir = os.path.join(FLAGS.data_dir, 'cifar-10-batches-bin')
    # Image preprocessing and input
    images, labels = cifar10_input.inputs(eval_data=eval_data, data_dir=data_dir, batch_size=FLAGS.batch_size)
    if FLAGS.use_fp16:
        images = tf.cast(images, tf.float16)
        labels = tf.cast(labels, tf.float16)
    return images, labels
# ---------------------------------------------------------------------
# -------------------------- Model prediction ------------------------
# Build the CIFAR-10 model
# All variables are instantiated with tf.get_variable() instead of tf.Variable() so that
# they can be shared across multiple GPU training runs.
# If the model only ran on a single GPU, this could be simplified by replacing all
# instances of tf.get_variable() with tf.Variable().
def inference(images):
    # Convolutional layer 1
    with tf.variable_scope('conv1') as scope:
        # No L2 weight decay on the weights
        kernel = _variable_with_weight_decay('weights', shape=[5, 5, 3, 64], stddev=5e-2, wd=None)
        # Convolution
        conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME')
        # biases initialized to 0
        biases = _variable_on_cpu('biases', [64], tf.constant_initializer(0.0))
        pre_activation = tf.nn.bias_add(conv, biases)
        # The output of convolutional layer 1 is passed through a ReLU
        conv1 = tf.nn.relu(pre_activation, name=scope.name)
        # Summaries
        _activation_summary(conv1)
    # Pooling layer 1
    pool1 = tf.nn.max_pool(conv1, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool1')
    # LRN layer 1. Local response normalization: amplifies large responses and suppresses small ones to improve generalization
    norm1 = tf.nn.lrn(pool1, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75, name='norm1')
    # Convolutional layer 2
    with tf.variable_scope('conv2') as scope:
        # No L2 weight decay on the weights
        kernel = _variable_with_weight_decay('weights', shape=[5, 5, 64, 64], stddev=5e-2, wd=None)
        conv = tf.nn.conv2d(norm1, kernel, [1, 1, 1, 1], padding='SAME')
        # biases initialized to 0.1
        biases = _variable_on_cpu('biases', [64], tf.constant_initializer(0.1))
        pre_activation = tf.nn.bias_add(conv, biases)
        # The output of convolutional layer 2 is passed through a ReLU
        conv2 = tf.nn.relu(pre_activation, name=scope.name)
        # Summaries
        _activation_summary(conv2)
    # LRN layer 2
    norm2 = tf.nn.lrn(conv2, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75, name='norm2')
    # Pooling layer 2
    pool2 = tf.nn.max_pool(norm2, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool2')
    # Fully connected layer 3
    with tf.variable_scope('local3') as scope:
        # Flatten each example into a one-dimensional vector
        reshape = tf.reshape(pool2, [FLAGS.batch_size, -1])
        # Number of features
        dim = reshape.get_shape()[1].value
        # Add an L2 regularization term to reduce overfitting
        weights = _variable_with_weight_decay('weights', shape=[dim, 384], stddev=0.04, wd=0.004)
        # biases initialized to 0.1
        biases = _variable_on_cpu('biases', [384], tf.constant_initializer(0.1))
        # ReLU activation
        local3 = tf.nn.relu(tf.matmul(reshape, weights) + biases, name=scope.name)
        _activation_summary(local3)
    # Fully connected layer 4
    with tf.variable_scope('local4') as scope:
        # Add an L2 regularization term to reduce overfitting
        weights = _variable_with_weight_decay('weights', shape=[384, 192], stddev=0.04, wd=0.004)
        # biases initialized to 0.1
        biases = _variable_on_cpu('biases', [192], tf.constant_initializer(0.1))
        # ReLU activation
        local4 = tf.nn.relu(tf.matmul(local3, weights) + biases, name=scope.name)
        _activation_summary(local4)
    # Linear layer
    # (WX+b). No softmax here, because tf.nn.sparse_softmax_cross_entropy_with_logits
    # accepts unscaled logits and performs the softmax internally for efficiency
    with tf.variable_scope('softmax_linear') as scope:
        weights = _variable_with_weight_decay('weights', [192, NUM_CLASSES], stddev=1/192.0, wd=None)
        # biases initialized to 0
        biases = _variable_on_cpu('biases', [NUM_CLASSES], tf.constant_initializer(0.0))
        # (WX+b), a linear transformation that outputs the logits
        softmax_linear = tf.add(tf.matmul(local4, weights), biases, name=scope.name)
        # Summaries
        _activation_summary(softmax_linear)
    return softmax_linear
# ---------------------------------------------------------------------
# -------------------------- Model training --------------------------
# Add the L2 losses of all trainable variables to the cross-entropy loss
def loss(logits, labels):
    labels = tf.cast(labels, tf.int64)
    # Cross entropy between logits and labels
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=labels, logits=logits, name='cross_entropy_per_example')
    # Mean cross-entropy loss over the whole batch
    cross_entropy_mean = tf.reduce_mean(cross_entropy, name='cross_entropy')
    # Add the term to a collection
    tf.add_to_collection('losses', cross_entropy_mean)
    # The total loss is the cross-entropy loss plus all weight-decay terms (L2 losses)
    return tf.add_n(tf.get_collection('losses'), name='total_loss')

# Add summaries for the losses; compute moving averages of the individual losses and of the total loss
def _add_loss_summaries(total_loss):
    # Exponential moving average
    loss_averages = tf.train.ExponentialMovingAverage(0.9, name='avg')
    losses = tf.get_collection('losses')
    # Apply the exponential moving average to the individual losses
    loss_averages_op = loss_averages.apply(losses + [total_loss])
    # Scalar summaries for the individual losses and the total loss
    for l in losses + [total_loss]:
        # Name each raw loss '(raw)' and name its moving average with the original loss name
        tf.summary.scalar(l.op.name + ' (raw)', l)
        tf.summary.scalar(l.op.name, loss_averages.average(l))
    return loss_averages_op

# Train the CIFAR-10 model
# Create an optimizer, apply it to all trainable variables, and add moving averages
# for all trainable variables (total loss, number of training steps)
def train(total_loss, global_step):
    # Variables that affect the learning rate
    num_batches_per_epoch = NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN / FLAGS.batch_size
    decay_steps = int(num_batches_per_epoch * NUM_EPOCHS_PER_DECAY)
    # Exponentially decayed learning rate
    lr = tf.train.exponential_decay(INITIAL_LEARNING_RATE, global_step, decay_steps,
                                    LEARNING_RATE_DECAY_FACTOR, staircase=True)
    tf.summary.scalar('learning_rate', lr)
    # Moving average of the total loss
    loss_averages_op = _add_loss_summaries(total_loss)
    # Compute the gradients
    with tf.control_dependencies([loss_averages_op]):
        opt = tf.train.GradientDescentOptimizer(lr)
        grads = opt.compute_gradients(total_loss)
    # Apply the gradients
    apply_gradient_op = opt.apply_gradients(grads, global_step=global_step)
    # Histograms for the trainable variables
    for var in tf.trainable_variables():
        tf.summary.histogram(var.op.name, var)
    # Histograms for the gradients
    for grad, var in grads:
        if grad is not None:
            tf.summary.histogram(var.op.name + '/gradients', grad)
    # Track the moving averages of all trainable variables
    variable_averages = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step)
    variables_averages_op = variable_averages.apply(tf.trainable_variables())
    # Group everything into a single training op on the default graph
    with tf.control_dependencies([apply_gradient_op, variables_averages_op]):
        train_op = tf.no_op(name='train')
    return train_op
# ---------------------------------------------------------------------
# Download and extract the data set
def maybe_download_and_extract():
    dest_directory = FLAGS.data_dir
    if not os.path.exists(dest_directory):
        os.makedirs(dest_directory)
    filename = DATA_URL.split('/')[-1]
    filepath = os.path.join(dest_directory, filename)
    if not os.path.exists(filepath):
        def _progress(count, block_size, total_size):
            sys.stdout.write('\r>> Downloading %s %.1f%%' % (filename,
                float(count * block_size) / float(total_size) * 100.0))
            sys.stdout.flush()
        filepath, _ = urllib.request.urlretrieve(DATA_URL, filepath, _progress)
        print()
        statinfo = os.stat(filepath)
        print('Successfully downloaded', filename, statinfo.st_size, 'bytes.')
    extracted_dir_path = os.path.join(dest_directory, 'cifar-10-batches-bin')
    if not os.path.exists(extracted_dir_path):
        tarfile.open(filepath, 'r:gz').extractall(dest_directory)
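cifar10_train.py itself is not reproduced in this article. As a rough idea of how the functions above fit together, here is a minimal, hedged training-loop sketch; it is not the official cifar10_train.py, and names such as max_steps and the checkpoint path are assumptions made for illustration only:

```python
import tensorflow as tf

import cifar10

def train_sketch(max_steps=1000, checkpoint_path='/tmp/cifar10_train/model.ckpt'):
    cifar10.maybe_download_and_extract()
    with tf.Graph().as_default():
        global_step = tf.train.get_or_create_global_step()
        # Queue-based input pipeline, model, loss and the combined training op,
        # all built from the functions defined in cifar10.py above
        images, labels = cifar10.distorted_inputs()
        logits = cifar10.inference(images)
        total_loss = cifar10.loss(logits, labels)
        train_op = cifar10.train(total_loss, global_step)

        saver = tf.train.Saver()
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            # The input functions enqueue examples via queue runners, which must be started
            coord = tf.train.Coordinator()
            threads = tf.train.start_queue_runners(sess=sess, coord=coord)
            try:
                for step in range(max_steps):
                    _, loss_value = sess.run([train_op, total_loss])
                    if step % 100 == 0:
                        print('step %d, loss = %.3f' % (step, loss_value))
                        # Periodically save a checkpoint for cifar10_eval.py to pick up
                        saver.save(sess, checkpoint_path, global_step=global_step)
            finally:
                coord.request_stop()
                coord.join(threads)

if __name__ == '__main__':
    train_sketch()
```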
cifar10_input.py, with comments:
#!/usr/bin/python
# coding:utf-8
# Read the local CIFAR-10 binary files
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os

from six.moves import xrange
import tensorflow as tf

# Size of the processed images, different from the 32 x 32 size of the original CIFAR images
IMAGE_SIZE = 24

# Global constants
NUM_CLASSES = 10
# Number of training examples
NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN = 50000
# Number of evaluation examples
NUM_EXAMPLES_PER_EPOCH_FOR_EVAL = 10000

# Read the binary CIFAR-10 data (filename_queue: queue of file names to read)
def read_cifar10(filename_queue):
    class CIFAR10Record(object):
        pass
    result = CIFAR10Record()
    # Dimensions of the images in the CIFAR-10 data set
    label_bytes = 1  # 2 for CIFAR-100
    result.height = 32
    result.width = 32
    result.depth = 3
    image_bytes = result.height * result.width * result.depth
    # Every record consists of a label followed by the image, with a fixed number of bytes
    record_bytes = label_bytes + image_bytes
    # Read fixed-length records from the files (every image is stored with a fixed number of bytes)
    reader = tf.FixedLengthRecordReader(record_bytes=record_bytes)
    # Return the next record (key, value pair) produced by the reader
    result.key, value = reader.read(filename_queue)
    # Convert the string to a vector of uint8
    record_bytes = tf.decode_raw(value, tf.uint8)
    # Convert the label from uint8 to int32
    result.label = tf.cast(tf.strided_slice(record_bytes, [0], [label_bytes]), tf.int32)
    # The bytes after the label represent the image; reshape from [depth*height*width] to [depth, height, width]
    depth_major = tf.reshape(
        tf.strided_slice(record_bytes, [label_bytes], [label_bytes + image_bytes]),
        [result.depth, result.height, result.width])
    # Transpose from [depth, height, width] to [height, width, depth]
    result.uint8image = tf.transpose(depth_major, [1, 2, 0])
    return result
# Build a queue of [images, labels]
def _generate_image_and_label_batch(image, label, min_queue_examples, batch_size, shuffle):
    # Use 16 independent threads, scheduled on one queue;
    # each call dequeues a batch_size worth of [images, labels] examples
    num_preprocess_threads = 16
    # Whether to shuffle the queue
    if shuffle:
        # images: 4-D tensor [batch_size, height, width, 3]; labels: 1-D tensor of size [batch_size]
        # Dequeue examples from the queue in shuffled order
        images, label_batch = tf.train.shuffle_batch(
            [image, label],
            batch_size=batch_size,  # number of images per batch
            num_threads=num_preprocess_threads,  # number of threads enqueuing tensor_list
            capacity=min_queue_examples + 3 * batch_size,  # maximum number of elements in the queue
            min_after_dequeue=min_queue_examples)  # minimum number of examples kept in the queue that feeds the batches
    else:
        # Dequeue examples from the queue in order
        images, label_batch = tf.train.batch(
            [image, label],
            batch_size=batch_size,  # size of the new batches pulled from the queue
            num_threads=num_preprocess_threads,  # number of threads enqueuing the tensors
            capacity=min_queue_examples + 3 * batch_size)  # maximum number of elements in the queue
    # Display the training images in TensorBoard
    tf.summary.image('images', images)
    return images, tf.reshape(label_batch, [batch_size])
# Read and augment the data to build inputs for training (path to the CIFAR-10 data, number of images per batch)
# Returns images: [batch_size, IMAGE_SIZE, IMAGE_SIZE, 3]; labels: [batch_size]
def distorted_inputs(data_dir, batch_size):
    # Paths of the 5 binary training files
    filenames = [os.path.join(data_dir, 'data_batch_%d.bin' % i) for i in xrange(1, 6)]
    for f in filenames:
        if not tf.gfile.Exists(f):
            raise ValueError('Failed to find file: ' + f)
    # Create a queue of file names
    filename_queue = tf.train.string_input_producer(filenames)
    with tf.name_scope('data_augmentation'):
        # Read examples from the files in the filename queue
        read_input = read_cifar10(filename_queue)
        # Convert the tensor type
        reshaped_image = tf.cast(read_input.uint8image, tf.float32)
        height = IMAGE_SIZE
        width = IMAGE_SIZE
        # Randomly crop a [height, width] section of the image
        distorted_image = tf.random_crop(reshaped_image, [height, width, 3])
        # Randomly flip the image horizontally
        distorted_image = tf.image.random_flip_left_right(distorted_image)
        # Randomly adjust the image brightness and contrast (not commutative)
        distorted_image = tf.image.random_brightness(distorted_image, max_delta=63)
        distorted_image = tf.image.random_contrast(distorted_image, lower=0.2, upper=1.8)
        # Subtract the mean and divide by the standard deviation, linearly scaling
        # to zero mean and unit norm: whitening/standardization
        float_image = tf.image.per_image_standardization(distorted_image)
        # Set the shapes of the tensors
        float_image.set_shape([height, width, 3])
        read_input.label.set_shape([1])
        # Ensure that the random shuffling has good mixing properties
        min_fraction_of_examples_in_queue = 0.4
        min_queue_examples = int(NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN * min_fraction_of_examples_in_queue)
        print('Filling queue with %d CIFAR images before starting to train. '
              'This will take a few minutes.' % min_queue_examples)
    # Build the queue of images and labels
    return _generate_image_and_label_batch(float_image, read_input.label, min_queue_examples, batch_size, shuffle=True)
# Preprocess the images and build inputs for CIFAR evaluation
# Inputs: (whether to use the training or the eval data set, path to the CIFAR-10 data, number of images per batch)
# Outputs: images: [batch_size, IMAGE_SIZE, IMAGE_SIZE, 3]; labels: [batch_size]
def inputs(eval_data, data_dir, batch_size):
    if not eval_data:
        filenames = [os.path.join(data_dir, 'data_batch_%d.bin' % i) for i in xrange(1, 6)]
        num_examples_per_epoch = NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN
    else:
        filenames = [os.path.join(data_dir, 'test_batch.bin')]
        num_examples_per_epoch = NUM_EXAMPLES_PER_EPOCH_FOR_EVAL
    for f in filenames:
        if not tf.gfile.Exists(f):
            raise ValueError('Failed to find file: ' + f)
    with tf.name_scope('input'):
        # Create a queue that produces the file names to read
        filename_queue = tf.train.string_input_producer(filenames)
        # Read examples from the files in the filename queue
        read_input = read_cifar10(filename_queue)
        reshaped_image = tf.cast(read_input.uint8image, tf.float32)
        height = IMAGE_SIZE
        width = IMAGE_SIZE
        # Image processing for evaluation
        # Crop the central [height, width] region of the image; this central crop is used for evaluation
        resized_image = tf.image.resize_image_with_crop_or_pad(reshaped_image, height, width)
        # Subtract the mean and divide by the standard deviation, so the data has zero mean and unit variance
        float_image = tf.image.per_image_standardization(resized_image)
        # Set the shapes of the tensors
        float_image.set_shape([height, width, 3])
        read_input.label.set_shape([1])
        # Ensure that the random shuffling has good mixing properties
        min_fraction_of_examples_in_queue = 0.4
        min_queue_examples = int(num_examples_per_epoch * min_fraction_of_examples_in_queue)
    # Generate a batch of images and labels by building an example queue
    return _generate_image_and_label_batch(float_image, read_input.label, min_queue_examples, batch_size, shuffle=False)
cifar10_eval.py, with comments:
#!/usr/bin/python
# coding:utf-8
# Evaluate the predictive performance of the CIFAR-10 model
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from datetime import datetime
import math
import time

import numpy as np
import tensorflow as tf

import cifar10

FLAGS = tf.app.flags.FLAGS
tf.app.flags.DEFINE_string('eval_dir', '/tmp/cifar10_eval',
                           """Directory where to write event logs.""")
tf.app.flags.DEFINE_string('eval_data', 'test',
                           """Either 'test' or 'train_eval'.""")
tf.app.flags.DEFINE_string('checkpoint_dir', '/tmp/cifar10_train',
                           """Directory where to read model checkpoints.""")
tf.app.flags.DEFINE_integer('eval_interval_secs', 60 * 5,
                            """How often to run the eval.""")
tf.app.flags.DEFINE_integer('num_examples', 10000,
                            """Number of examples to run.""")
tf.app.flags.DEFINE_boolean('run_once', False,
                            """Whether to run eval only once.""")

# Run a single evaluation pass
def eval_once(saver, summary_writer, top_k_op, summary_op):
    with tf.Session() as sess:
        # The checkpoint file records the saved models, so the most recent one can be located
        ckpt = tf.train.get_checkpoint_state(FLAGS.checkpoint_dir)
        if ckpt and ckpt.model_checkpoint_path:
            # Restore from the checkpoint
            saver.restore(sess, ckpt.model_checkpoint_path)
            # Assuming model_checkpoint_path looks like /my-favorite-path/cifar10_train/model.ckpt-0,
            # extract global_step from it
            global_step = ckpt.model_checkpoint_path.split('/')[-1].split('-')[-1]
        else:
            print('No checkpoint file found')
            return
        # Start the queue coordinator
        coord = tf.train.Coordinator()
        try:
            threads = []
            for qr in tf.get_collection(tf.GraphKeys.QUEUE_RUNNERS):
                threads.extend(qr.create_threads(sess, coord=coord, daemon=True, start=True))
            num_iter = int(math.ceil(FLAGS.num_examples / FLAGS.batch_size))
            # Count the number of correct predictions
            true_count = 0
            total_sample_count = num_iter * FLAGS.batch_size
            step = 0
            # Check whether a stop has been requested
            while step < num_iter and not coord.should_stop():
                predictions = sess.run([top_k_op])
                true_count += np.sum(predictions)
                step += 1
            # Compute precision @ 1
            precision = true_count / total_sample_count
            print('%s: precision @ 1 = %.3f' % (datetime.now(), precision))
            summary = tf.Summary()
            summary.ParseFromString(sess.run(summary_op))
            summary.value.add(tag='Precision @ 1', simple_value=precision)
            summary_writer.add_summary(summary, global_step)
        except Exception as e:  # pylint: disable=broad-except
            coord.request_stop(e)
        # Request the threads to stop
        coord.request_stop()
        # Wait for the threads to finish
        coord.join(threads, stop_grace_period_secs=10)

# Evaluate CIFAR-10
def evaluate():
    with tf.Graph().as_default() as g:
        # Get the CIFAR-10 images and labels
        eval_data = FLAGS.eval_data == 'test'
        images, labels = cifar10.inputs(eval_data=eval_data)
        # Build a graph that computes the logits predictions from the inference model
        logits = cifar10.inference(images)
        # Compute the predictions
        top_k_op = tf.nn.in_top_k(logits, labels, 1)
        # Restore the moving averages of the learned variables for eval
        variable_averages = tf.train.ExponentialMovingAverage(cifar10.MOVING_AVERAGE_DECAY)
        variables_to_restore = variable_averages.variables_to_restore()
        # Create a saver object used to restore the parameters from the checkpoint files
        saver = tf.train.Saver(variables_to_restore)
        # Build the summary operation from the TF collection of summaries
        summary_op = tf.summary.merge_all()
        # Write the Summary protocol buffers to event files
        summary_writer = tf.summary.FileWriter(FLAGS.eval_dir, g)
        while True:
            eval_once(saver, summary_writer, top_k_op, summary_op)
            if FLAGS.run_once:
                break
            time.sleep(FLAGS.eval_interval_secs)

# pylint: disable=unused-argument
def main(argv=None):
    cifar10.maybe_download_and_extract()
    if tf.gfile.Exists(FLAGS.eval_dir):
        tf.gfile.DeleteRecursively(FLAGS.eval_dir)
    tf.gfile.MakeDirs(FLAGS.eval_dir)
    evaluate()

if __name__ == '__main__':
    tf.app.run()