Exploring Learning-Rate Techniques to Improve Model Performance in Keras | Training Tricks


The learning rate is a hyperparameter that controls how much the model's weights are adjusted in response to the estimated error each time they are updated. Choosing a learning rate is challenging: a value that is too small can make training very slow or even stall, while a value that is too large can cause the model to converge too quickly to a suboptimal set of weights, or make training unstable.
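To make the role of this hyperparameter concrete, here is a schematic single gradient-descent step (a minimal sketch; the weight and gradient values are made up):

import numpy as np

# One plain SGD step: the learning rate scales how far the weights move
# against the estimated gradient of the loss. Values are illustrative.
w = np.array([0.5, -0.3])     # current weights
grad = np.array([0.2, -0.1])  # estimated gradient of the loss w.r.t. w
lr = 0.01                     # the learning rate discussed in this article
w = w - lr * grad             # too small: slow progress; too large: overshoot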

Transfer Learning

Transfer learning applies a trained machine-learning model to a different but related task. It is especially effective in deep learning, where neural networks are built from stacked layers. In computer vision tasks in particular, the early layers of these networks tend to learn simple features, such as edges and gradients.

This is a mature approach that has been shown to produce better results on computer vision tasks. Most pretrained models (ResNet, VGG, Inception, etc.) are trained on ImageNet, and depending on how similar your task's data is to ImageNet, the pretrained weights need to change more or less.

In the fast.ai course, Jeremy Howard explores different learning-rate strategies for transfer learning to improve models in terms of both speed and accuracy.

  1. Differential learning rates

The motivation for differential learning comes from the observation that, when fine-tuning a pretrained model, the layers closer to the input are more likely to have learned simple, general features. We therefore want to leave their weights largely unchanged, and modify the deeper layers more aggressively to adapt them to the target task/data.

"Differential learning rates" means using different learning rates in different parts of the network: a low learning rate for the initial layers, increasing gradually for the later layers.

[Figure: example CNN with differential learning rates applied across layer groups]

Implementing differential learning rates in Keras

To implement differential learning rates in Keras, we need to modify the optimizer source code. Taking the Adam optimizer as an example, the Keras implementation of Adam is as follows:

from keras import backend as K
from keras.legacy import interfaces
from keras.optimizers import Optimizer


class Adam(Optimizer):
    """Adam optimizer.

    Default parameters follow those provided in the original paper.

    # Arguments
        lr: float >= 0. Learning rate.
        beta_1: float, 0 < beta < 1. Generally close to 1.
        beta_2: float, 0 < beta < 1. Generally close to 1.
        epsilon: float >= 0. Fuzz factor. If `None`, defaults to `K.epsilon()`.
        decay: float >= 0. Learning rate decay over each update.
        amsgrad: boolean. Whether to apply the AMSGrad variant of this
            algorithm from the paper "On the Convergence of Adam and Beyond".
    """

    def __init__(self, lr=0.001, beta_1=0.9, beta_2=0.999,
                 epsilon=None, decay=0., amsgrad=False, **kwargs):
        super(Adam, self).__init__(**kwargs)
        with K.name_scope(self.__class__.__name__):
            self.iterations = K.variable(0, dtype='int64', name='iterations')
            self.lr = K.variable(lr, name='lr')
            self.beta_1 = K.variable(beta_1, name='beta_1')
            self.beta_2 = K.variable(beta_2, name='beta_2')
            self.decay = K.variable(decay, name='decay')
        if epsilon is None:
            epsilon = K.epsilon()
        self.epsilon = epsilon
        self.initial_decay = decay
        self.amsgrad = amsgrad

    @interfaces.legacy_get_updates_support
    def get_updates(self, loss, params):
        grads = self.get_gradients(loss, params)
        self.updates = [K.update_add(self.iterations, 1)]

        lr = self.lr
        if self.initial_decay > 0:
            lr = lr * (1. / (1. + self.decay * K.cast(self.iterations,
                                                      K.dtype(self.decay))))

        t = K.cast(self.iterations, K.floatx()) + 1
        lr_t = lr * (K.sqrt(1. - K.pow(self.beta_2, t)) /
                     (1. - K.pow(self.beta_1, t)))

        ms = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
        vs = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
        if self.amsgrad:
            vhats = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
        else:
            vhats = [K.zeros(1) for _ in params]
        self.weights = [self.iterations] + ms + vs + vhats

        for p, g, m, v, vhat in zip(params, grads, ms, vs, vhats):
            m_t = (self.beta_1 * m) + (1. - self.beta_1) * g
            v_t = (self.beta_2 * v) + (1. - self.beta_2) * K.square(g)
            if self.amsgrad:
                vhat_t = K.maximum(vhat, v_t)
                p_t = p - lr_t * m_t / (K.sqrt(vhat_t) + self.epsilon)
                self.updates.append(K.update(vhat, vhat_t))
            else:
                p_t = p - lr_t * m_t / (K.sqrt(v_t) + self.epsilon)

            self.updates.append(K.update(m, m_t))
            self.updates.append(K.update(v, v_t))
            new_p = p_t

            # Apply constraints.
            if getattr(p, 'constraint', None) is not None:
                new_p = p.constraint(new_p)

            self.updates.append(K.update(p, new_p))
        return self.updates

    def get_config(self):
        config = {'lr': float(K.get_value(self.lr)),
                  'beta_1': float(K.get_value(self.beta_1)),
                  'beta_2': float(K.get_value(self.beta_2)),
                  'decay': float(K.get_value(self.decay)),
                  'epsilon': self.epsilon,
                  'amsgrad': self.amsgrad}
        base_config = super(Adam, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))

We modify the source code above to include the following:

  1. Split layers: split_1 and split_2 are the layers at which the first and second splits occur, respectively.

  2. Modify the lr parameter to hold a learning-rate schedule: a list of three learning rates, one for each of the three stages of the differential-learning structure.

When updating each layer's learning rate, the original code iterates over all parameters and assigns them the same learning rate. We change this so that different layers are given different learning rates.


 
import keras
from keras import backend as K
from keras import optimizers


class Adam_dlr(optimizers.Optimizer):
    """Adam optimizer with differential learning rates.

    Default parameters follow those provided in the original paper.

    # Arguments
        split_1: split layer 1
        split_2: split layer 2
        lr: list of floats >= 0. Learning rates for
            [early layers, middle layers, final layers].
        beta_1: float, 0 < beta < 1. Generally close to 1.
        beta_2: float, 0 < beta < 1. Generally close to 1.
        epsilon: float >= 0. Fuzz factor. If `None`, defaults to `K.epsilon()`.
        decay: float >= 0. Learning rate decay over each update.
        amsgrad: boolean. Whether to apply the AMSGrad variant of this
            algorithm from the paper "On the Convergence of Adam and Beyond".
    """

    def __init__(self, split_1, split_2, lr=[1e-7, 1e-4, 1e-2],
                 beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.,
                 amsgrad=False, **kwargs):
        super(Adam_dlr, self).__init__(**kwargs)
        with K.name_scope(self.__class__.__name__):
            self.iterations = K.variable(0, dtype='int64', name='iterations')
            self.lr = K.variable(lr, name='lr')
            self.beta_1 = K.variable(beta_1, name='beta_1')
            self.beta_2 = K.variable(beta_2, name='beta_2')
            self.decay = K.variable(decay, name='decay')
            # Extracting the names of the split layers' first weight tensors
            self.split_1 = split_1.weights[0].name
            self.split_2 = split_2.weights[0].name
        if epsilon is None:
            epsilon = K.epsilon()
        self.epsilon = epsilon
        self.initial_decay = decay
        self.amsgrad = amsgrad

    @keras.optimizers.interfaces.legacy_get_updates_support
    def get_updates(self, loss, params):
        grads = self.get_gradients(loss, params)
        self.updates = [K.update_add(self.iterations, 1)]

        lr = self.lr
        if self.initial_decay > 0:
            lr = lr * (1. / (1. + self.decay * K.cast(self.iterations,
                                                      K.dtype(self.decay))))

        t = K.cast(self.iterations, K.floatx()) + 1
        lr_t = lr * (K.sqrt(1. - K.pow(self.beta_2, t)) /
                     (1. - K.pow(self.beta_1, t)))

        ms = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
        vs = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
        if self.amsgrad:
            vhats = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
        else:
            vhats = [K.zeros(1) for _ in params]
        self.weights = [self.iterations] + ms + vs + vhats

        # Setting the lr of the initial layer group
        lr_grp = lr_t[0]
        for p, g, m, v, vhat in zip(params, grads, ms, vs, vhats):
            # Updating lr when a split layer is encountered
            if p.name == self.split_1:
                lr_grp = lr_t[1]
            if p.name == self.split_2:
                lr_grp = lr_t[2]

            m_t = (self.beta_1 * m) + (1. - self.beta_1) * g
            v_t = (self.beta_2 * v) + (1. - self.beta_2) * K.square(g)
            if self.amsgrad:
                vhat_t = K.maximum(vhat, v_t)
                p_t = p - lr_grp * m_t / (K.sqrt(vhat_t) + self.epsilon)  # use the group's learning rate
                self.updates.append(K.update(vhat, vhat_t))
            else:
                p_t = p - lr_grp * m_t / (K.sqrt(v_t) + self.epsilon)

            self.updates.append(K.update(m, m_t))
            self.updates.append(K.update(v, v_t))
            new_p = p_t

            # Apply constraints.
            if getattr(p, 'constraint', None) is not None:
                new_p = p.constraint(new_p)

            self.updates.append(K.update(p, new_p))
        return self.updates

    def get_config(self):
        config = {'lr': (K.get_value(self.lr)),
                  'beta_1': float(K.get_value(self.beta_1)),
                  'beta_2': float(K.get_value(self.beta_2)),
                  'decay': float(K.get_value(self.decay)),
                  'epsilon': self.epsilon,
                  'amsgrad': self.amsgrad}
        base_config = super(Adam_dlr, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))
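As a usage sketch (the backbone, layer names, and split points below are our own illustrative choices, not part of the original post), here is how Adam_dlr might be wired into a fine-tuning setup with three learning-rate groups:

# Hypothetical fine-tuning setup with three learning-rate groups.
from keras.applications import VGG16
from keras.layers import Dense, Flatten
from keras.models import Model

base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
x = Flatten()(base.output)
out = Dense(10, activation='softmax', name='head')(x)
model = Model(base.input, out)

opt = Adam_dlr(split_1=model.get_layer('block4_conv1'),  # middle group starts here
               split_2=model.get_layer('head'),          # final group starts here
               lr=[1e-7, 1e-4, 1e-2])
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])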
  2. Stochastic Gradient Descent with Warm Restarts (SGDR)
Ideally, with stochastic gradient descent (SGD) the network should get closer to the global minimum of the loss with every batch. It therefore makes sense to lower the learning rate as training progresses, so that the algorithm does not overshoot the minimum and settles as close to it as possible. With cosine annealing, we lower the learning rate by following a cosine function (a small sketch of the formula follows the figure below).

[Figure: the learning rate gradually lowered over the first 200 iterations]
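As a quick sketch of this schedule (the helper name is ours; it mirrors the setRate() computation in the callback code further below): within a cycle of length T, iteration t gets learning rate max_lr / 2 * (cos(pi * t / T) + 1).

import numpy as np

def cosine_anneal(t, T, max_lr):
    """Cosine annealing: decays from max_lr (t = 0) to 0 (t = T)."""
    return max_lr / 2 * (np.cos(np.pi * t / T) + 1)

print(cosine_anneal(0, 200, 0.1))    # 0.1  (start of cycle)
print(cosine_anneal(100, 200, 0.1))  # 0.05 (midway)
print(cosine_anneal(200, 200, 0.1))  # ~0.0 (end of cycle)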

SGDR is a recent variant of learning-rate annealing introduced by Loshchilov & Hutter in their paper "SGDR: Stochastic Gradient Descent with Warm Restarts" (https://arxiv.org/abs/1608.03983). In this technique, the learning rate is periodically reset to its maximum. Below is an example of cosine annealing with three evenly spaced learning-rate restarts.

[Figure: the learning rate reset to its maximum every 100 iterations]

The rationale for suddenly raising the learning rate is that doing so keeps gradient descent from getting stuck in a local minimum, letting it "hop out" on its way toward the global minimum.

Each descent of the learning rate to its minimum (every 100 iterations in the figure above) is called a cycle. The authors also suggest lengthening each successive cycle by a constant factor, as illustrated in the figure and short sketch below.

[Figure: each cycle twice as long as the previous one]
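A short illustration of how the cycle length grows (the numbers are ours): with cycle_mult = 2 and an initial cycle of 100 iterations, successive cycles last 100, 200, 400, ... iterations.

# Cycle lengths when each cycle is cycle_mult times longer than the last.
cycle_len, cycle_mult = 100, 2
lengths = []
for _ in range(4):
    lengths.append(cycle_len)
    cycle_len *= cycle_mult
print(lengths)  # [100, 200, 400, 800]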

Implementing SGDR in Keras

Using Keras callbacks, we can update the learning rate according to a specific formula. For a reference implementation of cyclical learning rates, see this repository (https://github.com/bckenstler/CLR).


 
import numpy as np
import matplotlib.pyplot as plt
from keras import backend as K
from keras.callbacks import Callback


class LR_Updater(Callback):
    '''This callback logs the learning rate at every iteration (batch).
    It is not meant to be used directly as a callback, but to be extended
    by other callbacks that implement setRate(), e.g. LR_Cycle.
    '''

    def __init__(self, iterations):
        '''
        iterations = dataset size / batch size
            (number of iterations in one pass through the training set)
        '''
        self.epoch_iterations = iterations
        self.trn_iterations = 0.
        self.history = {}

    def on_train_begin(self, logs={}):
        self.trn_iterations = 0.
        logs = logs or {}

    def on_batch_end(self, batch, logs=None):
        logs = logs or {}
        self.trn_iterations += 1
        K.set_value(self.model.optimizer.lr, self.setRate())
        self.history.setdefault('lr', []).append(K.get_value(self.model.optimizer.lr))
        self.history.setdefault('iterations', []).append(self.trn_iterations)
        for k, v in logs.items():
            self.history.setdefault(k, []).append(v)

    def plot_lr(self):
        plt.xlabel("iterations")
        plt.ylabel("learning rate")
        plt.plot(self.history['iterations'], self.history['lr'])

    def plot(self, n_skip=10):
        plt.xlabel("learning rate (log scale)")
        plt.ylabel("loss")
        plt.plot(self.history['lr'], self.history['loss'])
        plt.xscale('log')


class LR_Cycle(LR_Updater):
    '''This callback implements cyclical learning rates (SGDR).
    It is based on this PyTorch implementation
    https://github.com/fastai/fastai/blob/master/fastai
    and adapted from this Keras implementation
    https://github.com/bckenstler/CLR
    '''

    def __init__(self, iterations, cycle_mult=1):
        '''
        iterations = number of iterations in one annealing cycle
            (e.g. dataset size / batch size for a one-epoch cycle)
        cycle_mult = multiplies the cycle length by this factor after every
            cycle (e.g. cycle_mult = 2 doubles the length of each new cycle)
        '''
        self.min_lr = 0
        self.cycle_mult = cycle_mult
        self.cycle_iterations = 0.
        super().__init__(iterations)

    def setRate(self):
        self.cycle_iterations += 1
        if self.cycle_iterations == self.epoch_iterations:
            # Cycle finished: restart, and stretch the next cycle.
            self.cycle_iterations = 0.
            self.epoch_iterations *= self.cycle_mult
        cos_out = np.cos(np.pi * (self.cycle_iterations) / self.epoch_iterations) + 1
        return self.max_lr / 2 * cos_out

    def on_train_begin(self, logs={}):
        super().on_train_begin(logs={})  # changed to {} to fix plots after going from one lr to multiple lrs
        self.cycle_iterations = 0.
        self.max_lr = K.get_value(self.model.optimizer.lr)
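A minimal usage sketch (the model, training data, and batch size below are placeholders; `iterations` is set so the first annealing cycle spans one epoch):

# Hypothetical usage: anneal over one epoch, doubling each subsequent cycle.
batch_size = 32
iterations_per_epoch = len(x_train) // batch_size  # x_train is a placeholder

sgdr = LR_Cycle(iterations=iterations_per_epoch, cycle_mult=2)
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.fit(x_train, y_train, batch_size=batch_size, epochs=7, callbacks=[sgdr])
sgdr.plot_lr()  # inspect the learning-rate schedule that was applied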

See the GitHub repository for the complete differential-learning and SGDR code. It also contains a test file that exercises these techniques on a sample dataset.

 
