medsegpy.solver

medsegpy.solver.build

medsegpy.solver.build.build_optimizer(config: medsegpy.config.Config)[source]

Build optimizer from config.

Currently supports Adam or AdamAccumulate optimizers.

Parameters: config (Config) – A config to read parameters from.
Returns: A Keras-compatible optimizer.
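Usage (a minimal sketch; `cfg` is assumed to be a populated medsegpy Config and `model` an already-built Keras model, neither of which is created by this function):
>>> from medsegpy.solver.build import build_optimizer
>>> optimizer = build_optimizer(cfg)
>>> model.compile(optimizer=optimizer, loss="categorical_crossentropy")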
medsegpy.solver.build.build_lr_scheduler(config: medsegpy.config.Config) → tensorflow.python.keras.callbacks.Callback[source]

Build learning rate scheduler.

Supports “StepDecay” and “ReduceLROnPlateau”.

Parameters: config (Config) – A config to read parameters from.
Returns: keras.callbacks.LearningRateScheduler or keras.callbacks.ReduceLROnPlateau, depending on the configured scheduler.
Usage:
>>> callbacks = []  # list of callbacks to be used with `fit_generator`
>>> scheduler = build_lr_scheduler(...)
>>> callbacks.append(scheduler)
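The resulting callback list is then passed to training (a sketch only; `model` and `train_gen` are placeholders, not objects provided by this module):
>>> model.fit_generator(train_gen, epochs=100, callbacks=callbacks)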

medsegpy.solver.lr_scheduler

Learning rate schedulers.

Usage:
>>> callbacks = []  # list of callbacks to be used with `fit_generator`
>>> scheduler = step_decay(...)
>>> callbacks.append(keras.callbacks.LearningRateScheduler(scheduler))
medsegpy.solver.lr_scheduler.step_decay(initial_lr, min_lr, drop_factor, drop_rate)[source]

The learning rate drops by a factor of drop_factor every drop_rate epochs.

For legacy reasons, the first drop occurs after drop_rate - 1 epochs. For example, if drop_rate = 3, the first decay occurs after 2 epochs. Subsequently, the learning rate drops every 3 epochs.

Parameters:
  • initial_lr – initial learning rate (default = 1e-4)
  • min_lr – minimum learning rate (default = None)
  • drop_factor – factor by which the learning rate drops (default = 0.8)
  • drop_rate – number of epochs between learning rate drops (default = 1.0)
Returns: func – To be used with :class:`keras.callbacks.LearningRateScheduler`.
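The rule above corresponds to the following schedule (an illustrative reconstruction of the documented behavior, not the source implementation; make_step_decay is a hypothetical stand-in for step_decay):

import math

def make_step_decay(initial_lr=1e-4, min_lr=None, drop_factor=0.8, drop_rate=1.0):
    def schedule(epoch):
        # First drop after `drop_rate - 1` epochs, then every `drop_rate` epochs.
        lr = initial_lr * drop_factor ** math.floor((epoch + 1) / drop_rate)
        # Clamp to the configured floor, if one is given.
        if min_lr is not None:
            lr = max(lr, min_lr)
        return lr

    return schedule

The returned schedule function plugs directly into keras.callbacks.LearningRateScheduler, as shown in the usage snippet at the top of this module.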

medsegpy.solver.optimizer

Adapted from https://github.com/keras-team/keras/issues/3556#issuecomment-440638517

class medsegpy.solver.optimizer.AdamAccumulate(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False, accum_iters=1, **kwargs)[source]

Adam optimizer that accumulates gradients over accum_iters iterations before applying a weight update, emulating training with a larger effective batch size.
get_updates(loss, params)[source]
get_config()[source]
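A usage sketch, assuming the class follows the standard keras.optimizers.Optimizer interface (`model` and the loss are placeholders, not defined by this module):
>>> from medsegpy.solver.optimizer import AdamAccumulate
>>> optimizer = AdamAccumulate(lr=1e-4, accum_iters=4)  # apply an update every 4 batches
>>> model.compile(optimizer=optimizer, loss="categorical_crossentropy")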