c3.libraries.algorithms

Collection of (optimization) algorithms. All entries share a common signature with optional arguments.

Module Contents

c3.libraries.algorithms.algo_reg_deco(func)[source]

Decorator for making a registry of functions.
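
A registry decorator of this kind typically stores each function in a module-level dict keyed by the function's name, so that algorithms can be selected by string. A minimal sketch, assuming the registry is a plain dict (the name algorithms is an assumption):

    algorithms = dict()  # assumed module-level registry

    def algo_reg_deco(func):
        """Register the decorated function under its name."""
        algorithms[str(func.__name__)] = func
        return func

    @algo_reg_deco
    def my_algo(x_init, fun=None, fun_grad=None, grad_lookup=None, options={}):
        return fun(x_init)

    # algorithms["my_algo"] now resolves to my_algo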

c3.libraries.algorithms.single_eval(x_init, fun=None, fun_grad=None, grad_lookup=None, options={})[source]

Return the function value at a given point.

Parameters
  • x_init (float) – Initial point

  • fun (callable) – Goal function

  • fun_grad (callable) – Function that computes the gradient of the goal function

  • grad_lookup (callable) – Lookup a previously computed gradient

  • options (dict) – Algorithm specific options
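
A minimal usage sketch, assuming a goal function that maps the parameter vector to a scalar (the quadratic goal below is purely illustrative):

    import numpy as np
    from c3.libraries.algorithms import single_eval

    def goal(x):
        # illustrative goal: squared distance from the origin
        return float(np.sum(np.asarray(x) ** 2))

    # evaluate the goal once at the initial point
    single_eval(np.array([0.5, -0.3]), fun=goal)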

c3.libraries.algorithms.grid2D(x_init, fun=None, fun_grad=None, grad_lookup=None, options={})[source]

Two-dimensional scan of the function values around the initial point.

Parameters
  • x_init (float) – Initial point

  • fun (callable) – Goal function

  • fun_grad (callable) – Function that computes the gradient of the goal function

  • grad_lookup (callable) – Lookup a previously computed gradient

  • options (dict) –

    Options include:

    points : int
      The number of samples.

    bounds : list
      Range of the scan for both dimensions.
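
An illustrative options dict, assuming points is the sample count and bounds holds one [min, max] pair per dimension (reusing the goal function from the single_eval sketch; exact semantics should be checked against the source):

    options = {
        "points": 25,                          # number of samples
        "bounds": [[-1.0, 1.0], [-2.0, 2.0]],  # scan range for both dimensions
    }
    grid2D(np.array([0.0, 0.0]), fun=goal, options=options)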

c3.libraries.algorithms.sweep(x_init, fun=None, fun_grad=None, grad_lookup=None, options={})[source]

One-dimensional scan of the function values around the initial point.

Parameters
  • x_init (float) – Initial point

  • fun (callable) – Goal function

  • fun_grad (callable) – Function that computes the gradient of the goal function

  • grad_lookup (callable) – Lookup a previously computed gradient

  • options (dict) –

    Options include:

    points : int
      The number of samples.

    bounds : list
      Range of the scan.
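
Analogous to grid2D, but in one dimension (again reusing the illustrative goal). The nesting of bounds below is an assumption:

    options = {
        "points": 50,             # number of samples
        "bounds": [[-1.0, 1.0]],  # scan range (assumed one [min, max] pair)
    }
    sweep(np.array([0.0]), fun=goal, options=options)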

c3.libraries.algorithms.adaptive_scan(x_init, fun=None, fun_grad=None, grad_lookup=None, options={})[source]

One-dimensional scan of the function values around the initial point, using adaptive sampling.

Parameters
  • x_init (float) – Initial point

  • fun (callable) – Goal function

  • fun_grad (callable) – Function that computes the gradient of the goal function

  • grad_lookup (callable) – Lookup a previously computed gradient

  • options (dict) –

    Options include:

    accuracy_goal : float
      Targeted accuracy for the sampling algorithm.

    probe_list : list
      Points to definitely include in the sampling.

    init_point : boolean
      Include the initial point in the sampling.
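
A sketch of the options dict using the keys documented above; all values are illustrative:

    options = {
        "accuracy_goal": 0.01,       # targeted accuracy for the sampler
        "probe_list": [0.25, 0.75],  # points to definitely include
        "init_point": True,          # include the initial point
    }
    adaptive_scan(np.array([0.0]), fun=goal, options=options)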

c3.libraries.algorithms.tf_sgd(x_init: numpy.ndarray, fun: Callable = None, fun_grad: Callable = None, grad_lookup: Callable = None, options: dict = {}) → scipy.optimize.OptimizeResult[source]

Optimize using TensorFlow Stochastic Gradient Descent with Momentum. See: https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/SGD

Parameters
  • x_init (np.ndarray) – starting value of parameter(s)

  • fun (Callable, optional) – function to minimize, by default None

  • fun_grad (Callable, optional) – gradient of function to minimize, by default None

  • grad_lookup (Callable, optional) – lookup stored gradients, by default None

  • options (dict, optional) – optional parameters for optimizer, by default {}

Returns

SciPy OptimizeResult type object with final parameters

Return type

OptimizeResult
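
The TensorFlow-based optimizers (tf_sgd, tf_adam, tf_rmsprop, tf_adadelta) share this calling convention. A minimal sketch with an illustrative goal and analytic gradient; any optimizer-specific keys accepted in options should be taken from the source:

    import numpy as np
    from c3.libraries.algorithms import tf_sgd

    def goal(x):
        return float(np.sum(np.asarray(x) ** 2))

    def goal_grad(x):
        # analytic gradient of the quadratic toy goal
        return 2.0 * np.asarray(x)

    result = tf_sgd(np.array([0.5, -0.5]), fun=goal, fun_grad=goal_grad)
    print(result.x)  # final parameters (OptimizeResult field)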

c3.libraries.algorithms.tf_adam(x_init: numpy.ndarray, fun: Callable = None, fun_grad: Callable = None, grad_lookup: Callable = None, options: dict = {}) → scipy.optimize.OptimizeResult[source]

Optimize using TensorFlow ADAM. See: https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Adam

Parameters
  • x_init (np.ndarray) – starting value of parameter(s)

  • fun (Callable, optional) – function to minimize, by default None

  • fun_grad (Callable, optional) – gradient of function to minimize, by default None

  • grad_lookup (Callable, optional) – lookup stored gradients, by default None

  • options (dict, optional) – optional parameters for optimizer, by default {}

Returns

SciPy OptimizeResult type object with final parameters

Return type

OptimizeResult

c3.libraries.algorithms.tf_rmsprop(x_init: numpy.ndarray, fun: Callable = None, fun_grad: Callable = None, grad_lookup: Callable = None, options: dict = {}) → scipy.optimize.OptimizeResult[source]

Optimize using TensorFlow RMSProp. See: https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/RMSprop

Parameters
  • x_init (np.ndarray) – starting value of parameter(s)

  • fun (Callable, optional) – function to minimize, by default None

  • fun_grad (Callable, optional) – gradient of function to minimize, by default None

  • grad_lookup (Callable, optional) – lookup stored gradients, by default None

  • options (dict, optional) – optional parameters for optimizer, by default {}

Returns

SciPy OptimizeResult type object with final parameters

Return type

OptimizeResult

c3.libraries.algorithms.tf_adadelta(x_init: numpy.ndarray, fun: Callable = None, fun_grad: Callable = None, grad_lookup: Callable = None, options: dict = {}) → scipy.optimize.OptimizeResult[source]

Optimize using TensorFlow Adadelta. See: https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Adadelta

Parameters
  • x_init (np.ndarray) – starting value of parameter(s)

  • fun (Callable, optional) – function to minimize, by default None

  • fun_grad (Callable, optional) – gradient of function to minimize, by default None

  • grad_lookup (Callable, optional) – lookup stored gradients, by default None

  • options (dict, optional) – optional parameters for optimizer, by default {}

Returns

SciPy OptimizeResult type object with final parameters

Return type

OptimizeResult

c3.libraries.algorithms.lbfgs(x_init, fun=None, fun_grad=None, grad_lookup=None, options={})[source]

Wrapper for the scipy.optimize.minimize implementation of L-BFGS. See also:

https://docs.scipy.org/doc/scipy/reference/optimize.minimize-lbfgsb.html

Parameters
  • x_init (float) – Initial point

  • fun (callable) – Goal function

  • fun_grad (callable) – Function that computes the gradient of the goal function

  • grad_lookup (callable) – Lookup a previously computed gradient

  • options (dict) – Options of scipy.optimize.minimize

Returns

Scipy result object.

Return type

Result
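
Since options is forwarded to scipy.optimize.minimize, standard L-BFGS-B options such as maxiter and disp apply. A hedged sketch, reusing the illustrative goal and goal_grad functions from above:

    options = {"maxiter": 200, "disp": True}  # standard scipy L-BFGS-B options
    result = lbfgs(np.array([0.5, -0.5]), fun=goal, fun_grad=goal_grad, options=options)
    print(result.x)  # optimized parameters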

c3.libraries.algorithms.lbfgs_grad_free(x_init, fun=None, fun_grad=None, grad_lookup=None, options={})[source]

Wrapper for the scipy.optimize.minimize implementation of L-BFGS. The gradient is not supplied here; the algorithm approximates it on its own (via finite differences).

See also:

https://docs.scipy.org/doc/scipy/reference/optimize.minimize-lbfgsb.html

Parameters
  • x_init (float) – Initial point

  • fun (callable) – Goal function

  • fun_grad (callable) – Function that computes the gradient of the goal function

  • grad_lookup (callable) – Lookup a previously computed gradient

  • options (dict) – Options of scipy.optimize.minimize

Returns

Scipy result object.

Return type

Result

c3.libraries.algorithms.cmaes(x_init, fun=None, fun_grad=None, grad_lookup=None, options={})[source]

Wrapper for the pycma implementation of CMA-ES. See also:

http://cma.gforge.inria.fr/apidocs-pycma/

Parameters
  • x_init (float) – Initial point.

  • fun (callable) – Goal function.

  • fun_grad (callable) – Function that computes the gradient of the goal function.

  • grad_lookup (callable) – Lookup a previously computed gradient.

  • options (dict) –

    Options of pycma and the following custom options:

    noise : float
      Artificial noise added to a function evaluation.

    init_point : boolean
      Force the use of the initial point in the first generation.

    spread : float
      Adjust the parameter spread of the first generation cloud.

    stop_at_convergence : int
      Custom stopping condition. Stop if the cloud has been shrinking for this number of generations.

    stop_at_sigma : float
      Custom stopping condition. Stop if the cloud has shrunk to this standard deviation.

Returns

Parameters of the best point.

Return type

np.ndarray
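
A sketch combining the custom options documented above with a standard pycma option (popsize); all values are illustrative:

    options = {
        "spread": 0.1,              # initial parameter spread of the cloud
        "init_point": True,         # seed the first generation with x_init
        "stop_at_convergence": 10,  # custom stopping condition
        "popsize": 20,              # standard pycma option
    }
    best_x = cmaes(np.array([0.5, -0.5]), fun=goal, options=options)  # returns np.ndarray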

c3.libraries.algorithms.cma_pre_lbfgs(x_init, fun=None, fun_grad=None, grad_lookup=None, options={})[source]

Performs a CMA-ES optimization and feeds the result into L-BFGS for further refinement.

c3.libraries.algorithms.gcmaes(x_init, fun=None, fun_grad=None, grad_lookup=None, options={})[source]

EXPERIMENTAL: CMA-ES variant in which every point in the cloud is optimized with L-BFGS, and the resulting cloud and results are used for the CMA update.