
skopt.optimizer module

from .base import base_minimize
from .dummy import dummy_minimize
from .forest import forest_minimize
from .gbrt import gbrt_minimize
from .gp import gp_minimize
from .optimizer import Optimizer


__all__ = [
    "base_minimize", "dummy_minimize",
    "forest_minimize", "gbrt_minimize", "gp_minimize",
    "Optimizer"
]
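
The exported names cover both the one-shot minimize functions below and the ask/tell Optimizer class they are built on. For orientation, here is a minimal sketch of the ask/tell loop; the 1-D quadratic objective and the "GP" estimator string are illustrative choices, not requirements of the module.

from skopt.optimizer import Optimizer

# Minimal ask/tell sketch; objective and settings are illustrative.
opt = Optimizer([(-5.0, 5.0)], base_estimator="GP",
                acq_optimizer="sampling", random_state=0)

for _ in range(20):
    x = opt.ask()              # next candidate suggested by the surrogate
    y = (x[0] - 2.0) ** 2      # evaluate the objective at the candidate
    result = opt.tell(x, y)    # report the observation back

print(result.x, result.fun)   # best point and value found so far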

Functions

def base_minimize(func, dimensions, base_estimator, n_calls=100, n_random_starts=10, acq_func='EI', acq_optimizer='lbfgs', x0=None, y0=None, random_state=None, verbose=False, callback=None, n_points=10000, n_restarts_optimizer=5, xi=0.01, kappa=1.96, n_jobs=1)

Parameters

  • func [callable]: Function to minimize. Should take an array of parameters and return the function value.

  • dimensions [list, shape=(n_dims,)]: List of search space dimensions. Each search dimension can be defined either as

    • a (lower_bound, upper_bound) tuple (for Real or Integer dimensions),
    • a (lower_bound, upper_bound, "prior") tuple (for Real dimensions),
    • as a list of categories (for Categorical dimensions), or
    • an instance of a Dimension object (Real, Integer or Categorical).

    NOTE: The upper and lower bounds are inclusive for Integer dimensions.

  • base_estimator [sklearn regressor]: Should inherit from sklearn.base.RegressorMixin. In addition, should have an optional return_std argument, which returns std(Y | x) along with E[Y | x].

  • n_calls [int, default=100]: Maximum number of calls to func.

  • n_random_starts [int, default=10]: Number of evaluations of func with random points before approximating it with base_estimator.

  • acq_func [string, default="EI"]: Function to minimize over the posterior distribution. Can be one of

    • "LCB" for lower confidence bound,
    • "EI" for negative expected improvement,
    • "PI" for negative probability of improvement,
    • "EIps" for negated expected improvement per second to take into account the function compute time. Then, the objective function is assumed to return two values, the first being the objective value and the second being the time taken in seconds.
    • "PIps" for negated probability of improvement per second. The return type of the objective function is assumed to be similar to that of "EIps".
  • acq_optimizer [string, "sampling" or "lbfgs", default="lbfgs"]: Method to minimize the acquisition function. The fit model is updated with the optimal value obtained by optimizing acq_func with acq_optimizer.

    • If set to "sampling", then acq_func is optimized by computing acq_func at n_points randomly sampled points and the smallest value found is used.
    • If set to "lbfgs", then
      • The n_restarts_optimizer points at which the acquisition function value is lowest are taken as start points.
      • "lbfgs" is run for 20 iterations with these points as initial points to find local minima.
      • The best of these local minima is used to update the prior.
  • x0 [list, list of lists or None]: Initial input points.

    • If it is a list of lists, use it as a list of input points.
    • If it is a list, use it as a single initial input point.
    • If it is None, no initial input points are used.
  • y0 [list, scalar or None]: Evaluation of initial input points.

    • If it is a list, then it corresponds to evaluations of the function at each element of x0: the i-th element of y0 corresponds to the function evaluated at the i-th element of x0.
    • If it is a scalar, then it corresponds to the evaluation of the function at x0.
    • If it is None and x0 is provided, then the function is evaluated at each element of x0.
  • random_state [int, RandomState instance, or None (default)]: Set random state to something other than None for reproducible results.

  • verbose [boolean, default=False]: Control the verbosity. It is advised to set the verbosity to True for long optimization runs.

  • callback [callable, list of callables, optional]: If callable, then callback(res) is called after each call to func. If a list of callables, then each callable in the list is called.

  • n_points [int, default=10000]: If acq_optimizer is set to "sampling", then acq_func is optimized by computing acq_func at n_points randomly sampled points.

  • n_restarts_optimizer [int, default=5]: The number of restarts of the optimizer when acq_optimizer is "lbfgs".

  • xi [float, default=0.01]: Controls how much improvement one wants over the previous best values. Used when the acquisition is either "EI" or "PI".

  • kappa [float, default=1.96]: Controls how much of the variance in the predicted values should be taken into account. If set to be very high, then we are favouring exploration over exploitation and vice versa. Used when the acquisition is "LCB".

  • n_jobs [int, default=1]: Number of cores to run in parallel while running the lbfgs optimizations over the acquisition function. Valid only when acq_optimizer is set to "lbfgs". Defaults to 1 core. If n_jobs=-1, then the number of jobs is set to the number of cores.

Returns

  • res [OptimizeResult, scipy object]: The optimization result returned as an OptimizeResult object. Important attributes are:

    • x [list]: location of the minimum.
    • fun [float]: function value at the minimum.
    • models: surrogate models used for each iteration.
    • x_iters [list of lists]: location of function evaluation for each iteration.
    • func_vals [array]: function value for each iteration.
    • space [Space]: the optimization space.
    • specs [dict]: the call specifications.
    • rng [RandomState instance]: State of the random state at the end of minimization.

    For more details related to the OptimizeResult object, refer to http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.OptimizeResult.html
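
For example, here is a minimal sketch of calling base_minimize with a scikit-learn estimator whose predict supports return_std (a GaussianProcessRegressor qualifies); the objective and settings are illustrative.

from sklearn.gaussian_process import GaussianProcessRegressor
from skopt.optimizer import base_minimize

def f(x):
    return (x[0] - 0.5) ** 2   # illustrative 1-D objective

# GaussianProcessRegressor.predict accepts return_std=True, so it can
# serve as base_estimator; "sampling" avoids needing gradients.
res = base_minimize(f, [(-1.0, 1.0)],
                    base_estimator=GaussianProcessRegressor(),
                    n_calls=20, n_random_starts=5, acq_func="EI",
                    acq_optimizer="sampling", random_state=0)
print(res.x, res.fun)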

def base_minimize(func, dimensions, base_estimator,
                  n_calls=100, n_random_starts=10,
                  acq_func="EI", acq_optimizer="lbfgs",
                  x0=None, y0=None, random_state=None, verbose=False,
                  callback=None, n_points=10000, n_restarts_optimizer=5,
                  xi=0.01, kappa=1.96, n_jobs=1):
    """
    Parameters
    ----------
    * `func` [callable]:
        Function to minimize. Should take an array of parameters and
        return the function value.

    * `dimensions` [list, shape=(n_dims,)]:
        List of search space dimensions.
        Each search dimension can be defined either as

        - a `(lower_bound, upper_bound)` tuple (for `Real` or `Integer`
          dimensions),
        - a `(lower_bound, upper_bound, "prior")` tuple (for `Real`
          dimensions),
        - as a list of categories (for `Categorical` dimensions), or
        - an instance of a `Dimension` object (`Real`, `Integer` or
          `Categorical`).

         NOTE: The upper and lower bounds are inclusive for `Integer`
         dimensions.

    * `base_estimator` [sklearn regressor]:
        Should inherit from `sklearn.base.RegressorMixin`.
        In addition, should have an optional `return_std` argument,
        which returns `std(Y | x)` along with `E[Y | x]`.

    * `n_calls` [int, default=100]:
        Maximum number of calls to `func`.

    * `n_random_starts` [int, default=10]:
        Number of evaluations of `func` with random points before
        approximating it with `base_estimator`.

    * `acq_func` [string, default=`"EI"`]:
        Function to minimize over the posterior distribution. Can be one of

        - `"LCB"` for lower confidence bound,
        - `"EI"` for negative expected improvement,
        - `"PI"` for negative probability of improvement,
        - `"EIps"` for negated expected improvement per second to take into
          account the function compute time. Then, the objective function is
          assumed to return two values, the first being the objective value
          and the second being the time taken in seconds.
        - `"PIps"` for negated probability of improvement per second. The
          return type of the objective function is assumed to be similar to
          that of `"EIps"`.

    * `acq_optimizer` [string, `"sampling"` or `"lbfgs"`, default=`"lbfgs"`]:
        Method to minimize the acquisition function. The fit model
        is updated with the optimal value obtained by optimizing `acq_func`
        with `acq_optimizer`.

        - If set to `"sampling"`, then `acq_func` is optimized by computing
          `acq_func` at `n_points` randomly sampled points and the smallest
          value found is used.
        - If set to `"lbfgs"`, then
              - The `n_restarts_optimizer` points at which the acquisition
                function value is lowest are taken as start points.
              - `"lbfgs"` is run for 20 iterations with these points as
                initial points to find local minima.
              - The best of these local minima is used to update the prior.

    * `x0` [list, list of lists or `None`]:
        Initial input points.

        - If it is a list of lists, use it as a list of input points.
        - If it is a list, use it as a single initial input point.
        - If it is `None`, no initial input points are used.

    * `y0` [list, scalar or `None`]:
        Evaluation of initial input points.

        - If it is a list, then it corresponds to evaluations of the function
          at each element of `x0`: the i-th element of `y0` corresponds
          to the function evaluated at the i-th element of `x0`.
        - If it is a scalar, then it corresponds to the evaluation of the
          function at `x0`.
        - If it is None and `x0` is provided, then the function is evaluated
          at each element of `x0`.

    * `random_state` [int, RandomState instance, or None (default)]:
        Set random state to something other than None for reproducible
        results.

    * `verbose` [boolean, default=False]:
        Control the verbosity. It is advised to set the verbosity to True
        for long optimization runs.

    * `callback` [callable, list of callables, optional]:
        If callable, then `callback(res)` is called after each call to `func`.
        If a list of callables, then each callable in the list is called.

    * `n_points` [int, default=10000]:
        If `acq_optimizer` is set to `"sampling"`, then `acq_func` is
        optimized by computing `acq_func` at `n_points` randomly sampled
        points.

    * `n_restarts_optimizer` [int, default=5]:
        The number of restarts of the optimizer when `acq_optimizer`
        is `"lbfgs"`.

    * `xi` [float, default=0.01]:
        Controls how much improvement one wants over the previous best
        values. Used when the acquisition is either `"EI"` or `"PI"`.

    * `kappa` [float, default=1.96]:
        Controls how much of the variance in the predicted values should be
        taken into account. If set to be very high, then we are favouring
        exploration over exploitation and vice versa.
        Used when the acquisition is `"LCB"`.

    * `n_jobs` [int, default=1]:
        Number of cores to run in parallel while running the lbfgs
        optimizations over the acquisition function. Valid only when
        `acq_optimizer` is set to `"lbfgs"`.
        Defaults to 1 core. If `n_jobs=-1`, then the number of jobs is set
        to the number of cores.

    Returns
    -------
    * `res` [`OptimizeResult`, scipy object]:
        The optimization result returned as an OptimizeResult object.
        Important attributes are:

        - `x` [list]: location of the minimum.
        - `fun` [float]: function value at the minimum.
        - `models`: surrogate models used for each iteration.
        - `x_iters` [list of lists]: location of function evaluation for each
           iteration.
        - `func_vals` [array]: function value for each iteration.
        - `space` [Space]: the optimization space.
        - `specs` [dict]: the call specifications.
        - `rng` [RandomState instance]: State of the random state
           at the end of minimization.

        For more details related to the OptimizeResult object, refer to
        http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.OptimizeResult.html
    """
    specs = {"args": copy.copy(inspect.currentframe().f_locals),
             "function": inspect.currentframe().f_code.co_name}

    acq_optimizer_kwargs = {
        "n_points": n_points, "n_restarts_optimizer": n_restarts_optimizer,
        "n_jobs": n_jobs}
    acq_func_kwargs = {"xi": xi, "kappa": kappa}

    # Initialize with provided points (x0 and y0) and/or random points
    if x0 is None:
        x0 = []
    elif not isinstance(x0[0], (list, tuple)):
        x0 = [x0]

    if not isinstance(x0, list):
        raise ValueError("`x0` should be a list, but got %s" % type(x0))

    if n_random_starts == 0 and not x0:
        raise ValueError("Either set `n_random_starts` > 0,"
                         " or provide `x0`")

    if isinstance(y0, Iterable):
        y0 = list(y0)
    elif isinstance(y0, numbers.Number):
        y0 = [y0]

    # is the budget for calling `func` large enough?
    required_calls = n_random_starts + (len(x0) if not y0 else 0)
    if n_calls < required_calls:
        raise ValueError(
            "Expected `n_calls` >= %d, got %d" % (required_calls, n_calls))

    # Number of points the user wants to evaluate before it makes sense to
    # fit a surrogate model
    n_initial_points = n_random_starts + len(x0)
    optimizer = Optimizer(dimensions, base_estimator,
                          n_initial_points=n_initial_points,
                          acq_func=acq_func, acq_optimizer=acq_optimizer,
                          random_state=random_state,
                          acq_optimizer_kwargs=acq_optimizer_kwargs,
                          acq_func_kwargs=acq_func_kwargs)

    assert all(isinstance(p, Iterable) for p in x0)

    if not all(len(p) == optimizer.space.n_dims for p in x0):
        raise RuntimeError("Optimization space (%s) and initial points in x0 "
                           "use inconsistent dimensions." % optimizer.space)

    callbacks = check_callback(callback)
    if verbose:
        callbacks.append(VerboseCallback(
            n_init=len(x0) if not y0 else 0,
            n_random=n_random_starts,
            n_total=n_calls))

    # setting the scope for these variables
    result = None

    # User suggested points at which to evaluate the objective first
    if x0 and y0 is None:
        y0 = list(map(func, x0))
        n_calls -= len(y0)

    # Pass user suggested initialisation points to the optimizer
    if x0:
        if not (isinstance(y0, Iterable) or isinstance(y0, numbers.Number)):
            raise ValueError(
                "`y0` should be an iterable or a scalar, got %s" % type(y0))

        if len(x0) != len(y0):
            raise ValueError("`x0` and `y0` should have the same length")


        result = optimizer.tell(x0, y0)
        result.specs = specs

        if eval_callbacks(callbacks, result):
            return result

    # Bayesian optimization loop
    for n in range(n_calls):
        next_x = optimizer.ask()

        next_y = func(next_x)
        result = optimizer.tell(next_x, next_y)
        result.specs = specs

        if eval_callbacks(callbacks, result):
            break

    return result

def dummy_minimize(func, dimensions, n_calls=100, x0=None, y0=None, random_state=None, verbose=False, callback=None)

Random search by uniform sampling within the given bounds.

Parameters

  • func [callable]: Function to minimize. Should take an array of parameters and return the function value.

  • dimensions [list, shape=(n_dims,)]: List of search space dimensions. Each search dimension can be defined either as

    • a (lower_bound, upper_bound) tuple (for Real or Integer dimensions),
    • a (lower_bound, upper_bound, prior) tuple (for Real dimensions),
    • as a list of categories (for Categorical dimensions), or
    • an instance of a Dimension object (Real, Integer or Categorical).
  • n_calls [int, default=100]: Number of calls to func to find the minimum.

  • x0 [list, list of lists or None]: Initial input points.

    • If it is a list of lists, use it as a list of input points.
    • If it is a list, use it as a single initial input point.
    • If it is None, no initial input points are used.
  • y0 [list, scalar or None]: Evaluation of initial input points.

    • If it is a list, then it corresponds to evaluations of the function at each element of x0: the i-th element of y0 corresponds to the function evaluated at the i-th element of x0.
    • If it is a scalar, then it corresponds to the evaluation of the function at x0.
    • If it is None and x0 is provided, then the function is evaluated at each element of x0.
  • random_state [int, RandomState instance, or None (default)]: Set random state to something other than None for reproducible results.

  • verbose [boolean, default=False]: Control the verbosity. It is advised to set the verbosity to True for long optimization runs.

  • callback [callable, list of callables, optional]: If callable, then callback(res) is called after each call to func. If a list of callables, then each callable in the list is called.

Returns

  • res [OptimizeResult, scipy object]: The optimization result returned as an OptimizeResult object. Important attributes are:

    • x [list]: location of the minimum.
    • fun [float]: function value at the minimum.
    • x_iters [list of lists]: location of function evaluation for each iteration.
    • func_vals [array]: function value for each iteration.
    • space [Space]: the optimisation space.
    • specs [dict]: the call specifications.
    • rng [RandomState instance]: State of the random state at the end of minimization.

    For more details related to the OptimizeResult object, refer to http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.OptimizeResult.html
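
A minimal sketch of a random-search run, warm-started from a single known point via x0; the objective and bounds are illustrative.

from skopt.optimizer import dummy_minimize

def f(x):
    return abs(x[0] - 3.0)     # illustrative 1-D objective

# x0 is a plain list, so it is treated as one initial input point;
# the remaining 19 calls are uniform random samples of the space.
res = dummy_minimize(f, [(0.0, 10.0)], n_calls=20, x0=[2.5],
                     random_state=0)
print(res.x, res.fun)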

def dummy_minimize(func, dimensions, n_calls=100, x0=None, y0=None,
                   random_state=None, verbose=False, callback=None):
    """Random search by uniform sampling within the given bounds.

    Parameters
    ----------
    * `func` [callable]:
        Function to minimize. Should take an array of parameters and
        return the function value.

    * `dimensions` [list, shape=(n_dims,)]:
        List of search space dimensions.
        Each search dimension can be defined either as

        - a `(lower_bound, upper_bound)` tuple (for `Real` or `Integer`
          dimensions),
        - a `(lower_bound, upper_bound, prior)` tuple (for `Real`
          dimensions),
        - as a list of categories (for `Categorical` dimensions), or
        - an instance of a `Dimension` object (`Real`, `Integer` or
          `Categorical`).

    * `n_calls` [int, default=100]:
        Number of calls to `func` to find the minimum.

    * `x0` [list, list of lists or `None`]:
        Initial input points.

        - If it is a list of lists, use it as a list of input points.
        - If it is a list, use it as a single initial input point.
        - If it is `None`, no initial input points are used.

    * `y0` [list, scalar or `None`]:
        Evaluation of initial input points.

        - If it is a list, then it corresponds to evaluations of the function
          at each element of `x0`: the i-th element of `y0` corresponds
          to the function evaluated at the i-th element of `x0`.
        - If it is a scalar, then it corresponds to the evaluation of the
          function at `x0`.
        - If it is None and `x0` is provided, then the function is evaluated
          at each element of `x0`.

    * `random_state` [int, RandomState instance, or None (default)]:
        Set random state to something other than None for reproducible
        results.

    * `verbose` [boolean, default=False]:
        Control the verbosity. It is advised to set the verbosity to True
        for long optimization runs.

    * `callback` [callable, list of callables, optional]:
        If callable, then `callback(res)` is called after each call to `func`.
        If a list of callables, then each callable in the list is called.

    Returns
    -------
    * `res` [`OptimizeResult`, scipy object]:
        The optimization result returned as an OptimizeResult object.
        Important attributes are:

        - `x` [list]: location of the minimum.
        - `fun` [float]: function value at the minimum.
        - `x_iters` [list of lists]: location of function evaluation for each
           iteration.
        - `func_vals` [array]: function value for each iteration.
        - `space` [Space]: the optimisation space.
        - `specs` [dict]: the call specifications.
        - `rng` [RandomState instance]: State of the random state
           at the end of minimization.

        For more details related to the OptimizeResult object, refer to
        http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.OptimizeResult.html
    """
    # all our calls want random suggestions, except if we need to evaluate
    # some initial points
    if x0 is not None and y0 is None:
        n_random_calls = n_calls - len(x0)
    else:
        n_random_calls = n_calls

    return base_minimize(func, dimensions, base_estimator="dummy",
                         # explicitly set optimizer to sampling as "dummy"
                         # minimizer does not provide gradients.
                         acq_optimizer="sampling",
                         n_calls=n_calls, n_random_starts=n_random_calls,
                         x0=x0, y0=y0, random_state=random_state,
                         verbose=verbose,
                         callback=callback)

def forest_minimize(func, dimensions, base_estimator='ET', n_calls=100, n_random_starts=10, acq_func='EI', x0=None, y0=None, random_state=None, verbose=False, callback=None, n_points=10000, xi=0.01, kappa=1.96, n_jobs=1)

Sequential optimisation using decision trees.

A tree-based regression model is used to model the expensive-to-evaluate function func. The model is improved by sequentially evaluating the expensive function at the next best point, thereby finding the minimum of func with as few evaluations as possible.

The total budget of n_calls evaluations is spent as follows. If x0 is provided but not y0, then the elements of x0 are first evaluated, followed by n_random_starts evaluations. Finally, n_calls - len(x0) - n_random_starts evaluations are made guided by the surrogate model. If x0 and y0 are both provided, then n_random_starts evaluations are first made and n_calls - n_random_starts subsequent evaluations are made guided by the surrogate model.
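
As a concrete check of this accounting (the numbers are illustrative):

# With x0 given but y0 not, the n_calls budget splits three ways:
n_calls, len_x0, n_random_starts = 30, 4, 10
evals_at_x0 = len_x0                                   # 4 initial points
evals_random = n_random_starts                         # 10 random points
evals_guided = n_calls - len_x0 - n_random_starts      # 16 model-guided
assert evals_at_x0 + evals_random + evals_guided == n_calls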

Parameters

  • func [callable]: Function to minimize. Should take an array of parameters and return the function value.

  • dimensions [list, shape=(n_dims,)]: List of search space dimensions. Each search dimension can be defined either as

    • a (lower_bound, upper_bound) tuple (for Real or Integer dimensions),
    • a (lower_bound, upper_bound, prior) tuple (for Real dimensions),
    • as a list of categories (for Categorical dimensions), or
    • an instance of a Dimension object (Real, Integer or Categorical).

    NOTE: The upper and lower bounds are inclusive for Integer dimensions.

  • base_estimator [string or Regressor, default="ET"]: The regressor to use as the surrogate model. Can be one of

    • "RF" for random forest regressor
    • "ET" for extra trees regressor
    • instance of regressor with support for return_std in its predict method

    The predefined models are initialized with good defaults. If you want to adjust the model parameters, pass your own instance of a regressor which returns the mean and standard deviation when making predictions.

  • n_calls [int, default=100]: Number of calls to func.

  • n_random_starts [int, default=10]: Number of evaluations of func with random points before approximating it with base_estimator.

  • acq_func [string, default="EI"]: Function to minimize over the forest posterior. Can be one of

    • "LCB" for lower confidence bound.
    • "EI" for negative expected improvement.
    • "PI" for negative probability of improvement.
    • "EIps" for negated expected improvement per second to take into account the function compute time. Then, the objective function is assumed to return two values, the first being the objective value and the second being the time taken in seconds.
    • "PIps" for negated probability of improvement per second. The return type of the objective function is assumed to be similar to that of "EIps".
  • x0 [list, list of lists or None]: Initial input points.

    • If it is a list of lists, use it as a list of input points.
    • If it is a list, use it as a single initial input point.
    • If it is None, no initial input points are used.
  • y0 [list, scalar or None]: Evaluation of initial input points.

    • If it is a list, then it corresponds to evaluations of the function at each element of x0: the i-th element of y0 corresponds to the function evaluated at the i-th element of x0.
    • If it is a scalar, then it corresponds to the evaluation of the function at x0.
    • If it is None and x0 is provided, then the function is evaluated at each element of x0.
  • random_state [int, RandomState instance, or None (default)]: Set random state to something other than None for reproducible results.

  • verbose [boolean, default=False]: Control the verbosity. It is advised to set the verbosity to True for long optimization runs.

  • callback [callable, optional]: If provided, then callback(res) is called after each call to func.

  • n_points [int, default=10000]: Number of points to sample when minimizing the acquisition function.

  • xi [float, default=0.01]: Controls how much improvement one wants over the previous best values. Used when the acquisition is either "EI" or "PI".

  • kappa [float, default=1.96]: Controls how much of the variance in the predicted values should be taken into account. If set to be very high, then we are favouring exploration over exploitation and vice versa. Used when the acquisition is "LCB".

  • n_jobs [int, default=1]: The number of jobs to run in parallel for fit and predict. If -1, then the number of jobs is set to the number of cores.

Returns

  • res [OptimizeResult, scipy object]: The optimization result returned as an OptimizeResult object. Important attributes are:

    • x [list]: location of the minimum.
    • fun [float]: function value at the minimum.
    • models: surrogate models used for each iteration.
    • x_iters [list of lists]: location of function evaluation for each iteration.
    • func_vals [array]: function value for each iteration.
    • space [Space]: the optimization space.
    • specs [dict]: the call specifications.

    For more details related to the OptimizeResult object, refer to http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.OptimizeResult.html
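
A minimal sketch of a forest_minimize call mixing a Real and a Categorical dimension; all values are illustrative.

from skopt.optimizer import forest_minimize

def f(x):
    # x[0] is a float, x[1] one of the categories below.
    return x[0] ** 2 + (0.0 if x[1] == "a" else 1.0)

res = forest_minimize(f, [(-2.0, 2.0), ["a", "b", "c"]],
                      base_estimator="RF", n_calls=25, random_state=0)
print(res.x, res.fun)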

def forest_minimize(func, dimensions, base_estimator="ET", n_calls=100,
                    n_random_starts=10, acq_func="EI",
                    x0=None, y0=None, random_state=None, verbose=False,
                    callback=None, n_points=10000, xi=0.01, kappa=1.96,
                    n_jobs=1):
    """Sequential optimisation using decision trees.

    A tree-based regression model is used to model the expensive-to-evaluate
    function `func`. The model is improved by sequentially evaluating
    the expensive function at the next best point, thereby finding the
    minimum of `func` with as few evaluations as possible.

    The total budget of `n_calls` evaluations is spent as follows.
    If `x0` is provided but not `y0`, then the elements of `x0`
    are first evaluated, followed by `n_random_starts` evaluations.
    Finally, `n_calls - len(x0) - n_random_starts` evaluations are
    made guided by the surrogate model. If `x0` and `y0` are both
    provided, then `n_random_starts` evaluations are first made and
    `n_calls - n_random_starts` subsequent evaluations are made
    guided by the surrogate model.

    Parameters
    ----------
    * `func` [callable]:
        Function to minimize. Should take an array of parameters and
        return the function value.

    * `dimensions` [list, shape=(n_dims,)]:
        List of search space dimensions.
        Each search dimension can be defined either as

        - a `(lower_bound, upper_bound)` tuple (for `Real` or `Integer`
          dimensions),
        - a `(lower_bound, upper_bound, prior)` tuple (for `Real`
          dimensions),
        - as a list of categories (for `Categorical` dimensions), or
        - an instance of a `Dimension` object (`Real`, `Integer` or
          `Categorical`).

         NOTE: The upper and lower bounds are inclusive for `Integer`
         dimensions.

    * `base_estimator` [string or `Regressor`, default=`"ET"`]:
        The regressor to use as the surrogate model. Can be one of

        - `"RF"` for random forest regressor
        - `"ET"` for extra trees regressor
        - instance of regressor with support for `return_std` in its predict
          method

        The predefined models are initialized with good defaults. If you
        want to adjust the model parameters, pass your own instance of
        a regressor which returns the mean and standard deviation when
        making predictions.

    * `n_calls` [int, default=100]:
        Number of calls to `func`.

    * `n_random_starts` [int, default=10]:
        Number of evaluations of `func` with random points before
        approximating it with `base_estimator`.

    * `acq_func` [string, default=`"EI"`]:
        Function to minimize over the forest posterior. Can be one of

        - `"LCB"` for lower confidence bound.
        - `"EI"` for negative expected improvement.
        - `"PI"` for negative probability of improvement.
        - `"EIps"` for negated expected improvement per second to take into
          account the function compute time. Then, the objective function is
          assumed to return two values, the first being the objective value
          and the second being the time taken in seconds.
        - `"PIps"` for negated probability of improvement per second. The
          return type of the objective function is assumed to be similar to
          that of `"EIps"`.

    * `x0` [list, list of lists or `None`]:
        Initial input points.

        - If it is a list of lists, use it as a list of input points.
        - If it is a list, use it as a single initial input point.
        - If it is `None`, no initial input points are used.

    * `y0` [list, scalar or `None`]:
        Evaluation of initial input points.

        - If it is a list, then it corresponds to evaluations of the function
          at each element of `x0`: the i-th element of `y0` corresponds
          to the function evaluated at the i-th element of `x0`.
        - If it is a scalar, then it corresponds to the evaluation of the
          function at `x0`.
        - If it is None and `x0` is provided, then the function is evaluated
          at each element of `x0`.

    * `random_state` [int, RandomState instance, or None (default)]:
        Set random state to something other than None for reproducible
        results.

    * `verbose` [boolean, default=False]:
        Control the verbosity. It is advised to set the verbosity to True
        for long optimization runs.

    * `callback` [callable, optional]:
        If provided, then `callback(res)` is called after each call to `func`.

    * `n_points` [int, default=10000]:
        Number of points to sample when minimizing the acquisition function.

    * `xi` [float, default=0.01]:
        Controls how much improvement one wants over the previous best
        values. Used when the acquisition is either `"EI"` or `"PI"`.

    * `kappa` [float, default=1.96]:
        Controls how much of the variance in the predicted values should be
        taken into account. If set to be very high, then we are favouring
        exploration over exploitation and vice versa.
        Used when the acquisition is `"LCB"`.

    * `n_jobs` [int, default=1]:
        The number of jobs to run in parallel for `fit` and `predict`.
        If -1, then the number of jobs is set to the number of cores.

    Returns
    -------
    * `res` [`OptimizeResult`, scipy object]:
        The optimization result returned as an OptimizeResult object.
        Important attributes are:

        - `x` [list]: location of the minimum.
        - `fun` [float]: function value at the minimum.
        - `models`: surrogate models used for each iteration.
        - `x_iters` [list of lists]: location of function evaluation for each
           iteration.
        - `func_vals` [array]: function value for each iteration.
        - `space` [Space]: the optimization space.
        - `specs` [dict]: the call specifications.

        For more details related to the OptimizeResult object, refer to
        http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.OptimizeResult.html
    """
    return base_minimize(func, dimensions, base_estimator,
                         n_calls=n_calls, n_points=n_points,
                         n_random_starts=n_random_starts,
                         x0=x0, y0=y0, random_state=random_state,
                         acq_func=acq_func,
                         xi=xi, kappa=kappa, verbose=verbose,
                         callback=callback, acq_optimizer="sampling")

def gbrt_minimize(func, dimensions, base_estimator=None, n_calls=100, n_random_starts=10, acq_func='EI', acq_optimizer='auto', x0=None, y0=None, random_state=None, verbose=False, callback=None, n_points=10000, xi=0.01, kappa=1.96, n_jobs=1)

Sequential optimization using gradient boosted trees.

Gradient boosted regression trees are used to model the (very) expensive-to-evaluate function func. The model is improved by sequentially evaluating the expensive function at the next best point, thereby finding the minimum of func with as few evaluations as possible.

The total budget of n_calls evaluations is spent as follows. If x0 is provided but not y0, then the elements of x0 are first evaluated, followed by n_random_starts evaluations. Finally, n_calls - len(x0) - n_random_starts evaluations are made guided by the surrogate model. If x0 and y0 are both provided, then n_random_starts evaluations are first made and n_calls - n_random_starts subsequent evaluations are made guided by the surrogate model.

Parameters

  • func [callable]: Function to minimize. Should take an array of parameters and return the function value.

  • dimensions [list, shape=(n_dims,)]: List of search space dimensions. Each search dimension can be defined either as

    • a (lower_bound, upper_bound) tuple (for Real or Integer dimensions),
    • a (lower_bound, upper_bound, "prior") tuple (for Real dimensions),
    • as a list of categories (for Categorical dimensions), or
    • an instance of a Dimension object (Real, Integer or Categorical).
  • base_estimator [GradientBoostingQuantileRegressor]: The regressor to use as the surrogate model.

  • n_calls [int, default=100]: Number of calls to func.

  • n_random_starts [int, default=10]: Number of evaluations of func with random points before approximating it with base_estimator.

  • acq_func [string, default="EI"]: Function to minimize over the forest posterior. Can be one of

    • "LCB" for lower confidence bound.
    • "EI" for negative expected improvement.
    • "PI" for negative probability of improvement.
    • "EIps" for negated expected improvement per second to take into account the function compute time. Then, the objective function is assumed to return two values, the first being the objective value and the second being the time taken.
    • "PIps" for negated probability of improvement per second.
  • x0 [list, list of lists or None]: Initial input points.

    • If it is a list of lists, use it as a list of input points.
    • If it is a list, use it as a single initial input point.
    • If it is None, no initial input points are used.
  • y0 [list, scalar or None]: Evaluation of initial input points.

    • If it is a list, then it corresponds to evaluations of the function at each element of x0: the i-th element of y0 corresponds to the function evaluated at the i-th element of x0.
    • If it is a scalar, then it corresponds to the evaluation of the function at x0.
    • If it is None and x0 is provided, then the function is evaluated at each element of x0.
  • random_state [int, RandomState instance, or None (default)]: Set random state to something other than None for reproducible results.

  • verbose [boolean, default=False]: Control the verbosity. It is advised to set the verbosity to True for long optimization runs.

  • callback [callable, optional]: If provided, then callback(res) is called after each call to func.

  • n_points [int, default=10000]: Number of points to sample when minimizing the acquisition function.

  • xi [float, default=0.01]: Controls how much improvement one wants over the previous best values. Used when the acquisition is either "EI" or "PI".

  • kappa [float, default=1.96]: Controls how much of the variance in the predicted values should be taken into account. If set to be very high, then we are favouring exploration over exploitation and vice versa. Used when the acquisition is "LCB".

  • n_jobs [int, default=1]: The number of jobs to run in parallel for fit and predict. If -1, then the number of jobs is set to the number of cores.

Returns

  • res [OptimizeResult, scipy object]: The optimization result returned as an OptimizeResult object. Important attributes are:

    • x [list]: location of the minimum.
    • fun [float]: function value at the minimum.
    • models: surrogate models used for each iteration.
    • x_iters [list of lists]: location of function evaluation for each iteration.
    • func_vals [array]: function value for each iteration.
    • space [Space]: the optimization space.
    • specs [dict]: the call specifications.
    • rng [RandomState instance]: State of the random state at the end of minimization.

    For more details related to the OptimizeResult object, refer to http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.OptimizeResult.html
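
A minimal sketch of a gbrt_minimize call on a 2-D quadratic; the objective and bounds are illustrative.

from skopt.optimizer import gbrt_minimize

def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

res = gbrt_minimize(f, [(-5.0, 5.0), (-5.0, 5.0)],
                    n_calls=30, random_state=0)
print(res.x, res.fun)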

def gbrt_minimize(func, dimensions, base_estimator=None,
                  n_calls=100, n_random_starts=10,
                  acq_func="EI", acq_optimizer="auto",
                  x0=None, y0=None, random_state=None, verbose=False,
                  callback=None, n_points=10000, xi=0.01, kappa=1.96,
                  n_jobs=1):
    """Sequential optimization using gradient boosted trees.

    Gradient boosted regression trees are used to model the (very)
    expensive-to-evaluate function `func`. The model is improved
    by sequentially evaluating the expensive function at the next
    best point, thereby finding the minimum of `func` with as
    few evaluations as possible.

    The total budget of `n_calls` evaluations is spent as follows.
    If `x0` is provided but not `y0`, then the elements of `x0`
    are first evaluated, followed by `n_random_starts` evaluations.
    Finally, `n_calls - len(x0) - n_random_starts` evaluations are
    made guided by the surrogate model. If `x0` and `y0` are both
    provided, then `n_random_starts` evaluations are first made and
    `n_calls - n_random_starts` subsequent evaluations are made
    guided by the surrogate model.

    Parameters
    ----------
    * `func` [callable]:
        Function to minimize. Should take an array of parameters and
        return the function value.

    * `dimensions` [list, shape=(n_dims,)]:
        List of search space dimensions.
        Each search dimension can be defined either as

        - a `(lower_bound, upper_bound)` tuple (for `Real` or `Integer`
          dimensions),
        - a `(lower_bound, upper_bound, "prior")` tuple (for `Real`
          dimensions),
        - as a list of categories (for `Categorical` dimensions), or
        - an instance of a `Dimension` object (`Real`, `Integer` or
          `Categorical`).

    * `base_estimator` [`GradientBoostingQuantileRegressor`]:
        The regressor to use as the surrogate model.

    * `n_calls` [int, default=100]:
        Number of calls to `func`.

    * `n_random_starts` [int, default=10]:
        Number of evaluations of `func` with random points before
        approximating it with `base_estimator`.

    * `acq_func` [string, default=`"EI"`]:
        Function to minimize over the forest posterior. Can be one of

        - `"LCB"` for lower confidence bound.
        - `"EI"` for negative expected improvement.
        - `"PI"` for negative probability of improvement.
        - ``"EIps"`` for negated expected improvement per second to take into
          account the function compute time. Then, the objective function is
          assumed to return two values, the first being the objective value and
          the second being the time taken.
        - `"PIps"` for negated probability of improvement per second.

    * `x0` [list, list of lists or `None`]:
        Initial input points.

        - If it is a list of lists, use it as a list of input points.
        - If it is a list, use it as a single initial input point.
        - If it is `None`, no initial input points are used.

    * `y0` [list, scalar or `None`]:
        Evaluation of initial input points.

        - If it is a list, then it corresponds to evaluations of the function
          at each element of `x0`: the i-th element of `y0` corresponds
          to the function evaluated at the i-th element of `x0`.
        - If it is a scalar, then it corresponds to the evaluation of the
          function at `x0`.
        - If it is None and `x0` is provided, then the function is evaluated
          at each element of `x0`.

    * `random_state` [int, RandomState instance, or None (default)]:
        Set random state to something other than None for reproducible
        results.

    * `verbose` [boolean, default=False]:
        Control the verbosity. It is advised to set the verbosity to True
        for long optimization runs.

    * `callback` [callable, optional]:
        If provided, then `callback(res)` is called after each call to `func`.

    * `n_points` [int, default=10000]:
        Number of points to sample when minimizing the acquisition function.

    * `xi` [float, default=0.01]:
        Controls how much improvement one wants over the previous best
        values. Used when the acquisition is either `"EI"` or `"PI"`.

    * `kappa` [float, default=1.96]:
        Controls how much of the variance in the predicted values should be
        taken into account. If set to be very high, then we are favouring
        exploration over exploitation and vice versa.
        Used when the acquisition is `"LCB"`.

    * `n_jobs` [int, default=1]:
        The number of jobs to run in parallel for `fit` and `predict`.
        If -1, then the number of jobs is set to the number of cores.

    Returns
    -------
    * `res` [`OptimizeResult`, scipy object]:
        The optimization result returned as an OptimizeResult object.
        Important attributes are:

        - `x` [list]: location of the minimum.
        - `fun` [float]: function value at the minimum.
        - `models`: surrogate models used for each iteration.
        - `x_iters` [list of lists]: location of function evaluation for each
           iteration.
        - `func_vals` [array]: function value for each iteration.
        - `space` [Space]: the optimization space.
        - `specs` [dict]: the call specifications.
        - `rng` [RandomState instance]: State of the random state
           at the end of minimization.

        For more details related to the OptimizeResult object, refer to
        http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.OptimizeResult.html
    """
    # Check params
    rng = check_random_state(random_state)

    if base_estimator is None:
        base_estimator = cook_estimator("GBRT", random_state=rng,
                                        n_jobs=n_jobs)
    return base_minimize(func, dimensions, base_estimator,
                         n_calls=n_calls, n_points=n_points,
                         n_random_starts=n_random_starts,
                         x0=x0, y0=y0, random_state=random_state, xi=xi,
                         kappa=kappa, acq_func=acq_func, verbose=verbose,
                         callback=callback, acq_optimizer="sampling")

def gp_minimize(func, dimensions, base_estimator=None, n_calls=100, n_random_starts=10, acq_func='gp_hedge', acq_optimizer='auto', x0=None, y0=None, random_state=None, verbose=False, callback=None, n_points=10000, n_restarts_optimizer=5, xi=0.01, kappa=1.96, noise='gaussian', n_jobs=1)

Bayesian optimization using Gaussian Processes.

If every function evaluation is expensive, for instance when the parameters are the hyperparameters of a neural network and the function evaluation is the mean cross-validation score across ten folds, optimizing the hyperparameters by standard optimization routines would take forever!

The idea is to approximate the function using a Gaussian process. In other words, the function values are assumed to follow a multivariate Gaussian. The covariance of the function values is given by a GP kernel between the parameters. A smart choice for the next parameter to evaluate can then be made by optimizing the acquisition function over the Gaussian prior, which is much quicker to evaluate.

The total budget of n_calls evaluations is spent as follows. If x0 is provided but not y0, then the elements of x0 are first evaluated, followed by n_random_starts evaluations. Finally, n_calls - len(x0) - n_random_starts evaluations are made guided by the surrogate model. If x0 and y0 are both provided, then n_random_starts evaluations are first made and n_calls - n_random_starts subsequent evaluations are made guided by the surrogate model.

Parameters

  • func [callable]: Function to minimize. Should take an array of parameters and return the function value.

  • dimensions [list, shape=(n_dims,)]: List of search space dimensions. Each search dimension can be defined either as

    • a (lower_bound, upper_bound) tuple (for Real or Integer dimensions),
    • a (lower_bound, upper_bound, "prior") tuple (for Real dimensions),
    • as a list of categories (for Categorical dimensions), or
    • an instance of a Dimension object (Real, Integer or Categorical).

    NOTE: The upper and lower bounds are inclusive for Integer dimensions.

  • base_estimator [a Gaussian process estimator]: The Gaussian process estimator to use for optimization. By default, a Matern kernel is used with the following hyperparameters tuned.

    • All the length scales of the Matern kernel.
    • The covariance amplitude that each element is multiplied with.
    • Noise that is added to the Matern kernel. The noise is assumed to be iid Gaussian.
  • n_calls [int, default=100]: Number of calls to func.

  • n_random_starts [int, default=10]: Number of evaluations of func with random points before approximating it with base_estimator.

  • acq_func [string, default="gp_hedge"]: Function to minimize over the Gaussian prior. Can be one of

    • "LCB" for lower confidence bound.
    • "EI" for negative expected improvement.
    • "PI" for negative probability of improvement.
    • "gp_hedge" Probabilistically choose one of the above three acquisition functions at every iteration. The weightage given to these gains can be set by \eta through acq_func_kwargs.
      • The gains g_i are initialized to zero.
      • At every iteration,
        • Each acquisition function is optimised independently to propose a candidate point X_i.
        • Out of all these candidate points, the next point X_best is chosen by softmax(\eta g_i).
        • After fitting the surrogate model with (X_best, y_best), the gains are updated such that g_i -= \mu(X_i).
    • "EIps" for negated expected improvement per second to take into account the function compute time. Then, the objective function is assumed to return two values, the first being the objective value and the second being the time taken in seconds.
    • "PIps" for negated probability of improvement per second. The return type of the objective function is assumed to be similar to that of `"EIps
  • acq_optimizer [string, "sampling", "lbfgs" or "auto", default="auto"]: Method to minimize the acquisition function. The fit model is updated with the optimal value obtained by optimizing acq_func with acq_optimizer.

    The acq_func is computed at n_points sampled randomly.

    • If set to "auto", then acq_optimizer is configured on the basis of the space searched over. If the space is Categorical then this is set to be "sampling"`.
    • If set to "sampling", then the point among these n_points where the acq_func is minimum is the next candidate minimum.
    • If set to "lbfgs", then
      • The n_restarts_optimizer no. of points which the acquisition function is least are taken as start points.
      • "lbfgs" is run for 20 iterations with these points as initial points to find local minima.
      • The optimal of these local minima is used to update the prior.
  • x0 [list, list of lists or None]: Initial input points.

    • If it is a list of lists, use it as a list of input points.
    • If it is a list, use it as a single initial input point.
    • If it is None, no initial input points are used.
  • y0 [list, scalar or None]: Evaluation of initial input points.

    • If it is a list, then it corresponds to evaluations of the function at each element of x0: the i-th element of y0 corresponds to the function evaluated at the i-th element of x0.
    • If it is a scalar, then it corresponds to the evaluation of the function at x0.
    • If it is None and x0 is provided, then the function is evaluated at each element of x0.
  • random_state [int, RandomState instance, or None (default)]: Set random state to something other than None for reproducible results.

  • verbose [boolean, default=False]: Control the verbosity. It is advised to set the verbosity to True for long optimization runs.

  • callback [callable, list of callables, optional]: If callable, then callback(res) is called after each call to func. If a list of callables, then each callable in the list is called.

  • n_points [int, default=10000]: Number of points to sample to determine the next "best" point. Has no effect if acq_optimizer is set to "lbfgs".

  • n_restarts_optimizer [int, default=5]: The number of restarts of the optimizer when acq_optimizer is "lbfgs".

  • kappa [float, default=1.96]: Controls how much of the variance in the predicted values should be taken into account. If set to be very high, then we are favouring exploration over exploitation and vice versa. Used when the acquisition is "LCB".

  • xi [float, default=0.01]: Controls how much improvement one wants over the previous best values. Used when the acquisition is either "EI" or "PI".

  • noise [float, default="gaussian"]:

    • Use noise="gaussian" if the objective returns noisy observations. The noise of each observation is assumed to be iid with mean zero and a fixed variance.
    • If the variance is known before-hand, this can be set directly to the variance of the noise.
    • Set this to a value close to zero (1e-10) if the function is noise-free. Setting to zero might cause stability issues.
  • n_jobs [int, default=1]: Number of cores to run in parallel while running the lbfgs optimizations over the acquisition function. Valid only when acq_optimizer is set to "lbfgs". Defaults to 1 core. If n_jobs=-1, then the number of jobs is set to the number of cores.
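
The "gp_hedge" selection rule referenced above can be sketched as follows; the gains, \eta, and the three stand-in candidates are illustrative, not skopt's internal variables.

import numpy as np

rng = np.random.default_rng(0)
eta = 1.0
gains = np.zeros(3)            # one gain per acquisition function

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for iteration in range(5):
    # Pretend each acquisition function proposed a candidate X_i, and that
    # the surrogate predicts mean mu(X_i) there (random stand-ins here).
    mu = rng.normal(size=3)
    # The next point is the candidate drawn with probability softmax(eta*g).
    chosen = rng.choice(3, p=softmax(eta * gains))
    print("iteration", iteration, "-> evaluating candidate", chosen)
    # After the surrogate is refit, every gain is updated: g_i -= mu(X_i).
    gains -= mu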

Returns

  • res [OptimizeResult, scipy object]: The optimization result returned as an OptimizeResult object. Important attributes are:

    • x [list]: location of the minimum.
    • fun [float]: function value at the minimum.
    • models: surrogate models used for each iteration.
    • x_iters [list of lists]: location of function evaluation for each iteration.
    • func_vals [array]: function value for each iteration.
    • space [Space]: the optimization space.
    • specs [dict]: the call specifications.
    • rng [RandomState instance]: State of the random state at the end of minimization.

    For more details related to the OptimizeResult object, refer to http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.OptimizeResult.html
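
A minimal sketch of a gp_minimize call; note the noise setting for a noise-free objective (all values are illustrative).

from skopt.optimizer import gp_minimize

def f(x):
    return (x[0] - 2.0) ** 2   # deterministic, noise-free objective

# For noise-free objectives, prefer a tiny noise value over exactly zero.
res = gp_minimize(f, [(-5.0, 5.0)], n_calls=25, noise=1e-10,
                  random_state=0)
print(res.x, res.fun)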

def gp_minimize(func, dimensions, base_estimator=None,
                n_calls=100, n_random_starts=10,
                acq_func="gp_hedge", acq_optimizer="auto", x0=None, y0=None,
                random_state=None, verbose=False, callback=None,
                n_points=10000, n_restarts_optimizer=5, xi=0.01, kappa=1.96,
                noise="gaussian", n_jobs=1):
    """Bayesian optimization using Gaussian Processes.

    If every function evaluation is expensive, for instance
    when the parameters are the hyperparameters of a neural network
    and the function evaluation is the mean cross-validation score across
    ten folds, optimizing the hyperparameters by standard optimization
    routines would take forever!

    The idea is to approximate the function using a Gaussian process.
    In other words, the function values are assumed to follow a multivariate
    Gaussian. The covariance of the function values is given by a
    GP kernel between the parameters. A smart choice for the next
    parameter to evaluate can then be made by optimizing the acquisition
    function over the Gaussian prior, which is much quicker to evaluate.

    The total budget of `n_calls` evaluations is spent as follows.
    If `x0` is provided but not `y0`, then the elements of `x0`
    are first evaluated, followed by `n_random_starts` evaluations.
    Finally, `n_calls - len(x0) - n_random_starts` evaluations are
    made guided by the surrogate model. If `x0` and `y0` are both
    provided, then `n_random_starts` evaluations are first made and
    `n_calls - n_random_starts` subsequent evaluations are made
    guided by the surrogate model.

    Parameters
    ----------
    * `func` [callable]:
        Function to minimize. Should take an array of parameters and
        return the function value.

    * `dimensions` [list, shape=(n_dims,)]:
        List of search space dimensions.
        Each search dimension can be defined either as

        - a `(lower_bound, upper_bound)` tuple (for `Real` or `Integer`
          dimensions),
        - a `(lower_bound, upper_bound, "prior")` tuple (for `Real`
          dimensions),
        - as a list of categories (for `Categorical` dimensions), or
        - an instance of a `Dimension` object (`Real`, `Integer` or
          `Categorical`).

         NOTE: The upper and lower bounds are inclusive for `Integer`
         dimensions.

    * `base_estimator` [a Gaussian process estimator]:
        The Gaussian process estimator to use for optimization.
        By default, a Matern kernel is used with the following
        hyperparameters tuned.
        - All the length scales of the Matern kernel.
        - The covariance amplitude that each element is multiplied with.
        - Noise that is added to the Matern kernel. The noise is assumed
          to be iid Gaussian.

    * `n_calls` [int, default=100]:
        Number of calls to `func`.

    * `n_random_starts` [int, default=10]:
        Number of evaluations of `func` with random points before
        approximating it with `base_estimator`.

    * `acq_func` [string, default=`"gp_hedge"`]:
        Function to minimize over the Gaussian prior. Can be one of

        - `"LCB"` for lower confidence bound.
        - `"EI"` for negative expected improvement.
        - `"PI"` for negative probability of improvement.
        - `"gp_hedge"` Probabilistically choose one of the above three
          acquisition functions at every iteration. The weight
          given to these gains can be set by `\eta` through `acq_func_kwargs`.
            - The gains `g_i` are initialized to zero.
            - At every iteration,
                - Each acquisition function is optimised independently to
                  propose a candidate point `X_i`.
                - Out of all these candidate points, the next point `X_best`
                  is chosen by `softmax(\eta g_i)`.
                - After fitting the surrogate model with `(X_best, y_best)`,
                  the gains are updated such that `g_i -= \mu(X_i)`.
        - `"EIps"` for negated expected improvement per second to take into
          account the function compute time. Then, the objective function is
          assumed to return two values, the first being the objective value and
          the second being the time taken in seconds.
        - `"PIps"` for negated probability of improvement per second. The
          return type of the objective function is assumed to be similar to
          that of `"EIps

    * `acq_optimizer` [string, `"sampling"` or `"lbfgs"`, default=`"lbfgs"`]:
        Method to minimize the acquisition function. The fit model
        is updated with the optimal value obtained by optimizing `acq_func`
        with `acq_optimizer`.

        The `acq_func` is computed at `n_points` randomly sampled points.

        - If set to `"auto"`, then `acq_optimizer` is configured on the
          basis of the space searched over.
          If the space is Categorical then this is set to `"sampling"`.
        - If set to `"sampling"`, then the point among these `n_points`
          where the `acq_func` is minimum is the next candidate minimum.
        - If set to `"lbfgs"`, then
              - The `n_restarts_optimizer` points at which the acquisition
                function is lowest are taken as start points.
              - `"lbfgs"` is run for 20 iterations with these points as initial
                points to find local minima.
              - The best of these local minima is used to update the prior.

    * `x0` [list, list of lists or `None`]:
        Initial input points.

        - If it is a list of lists, use it as a list of input points.
        - If it is a list, use it as a single initial input point.
        - If it is `None`, no initial input points are used.

    * `y0` [list, scalar or `None`]
        Evaluation of initial input points.

        - If it is a list, then it corresponds to evaluations of the function
          at each element of `x0` : the i-th element of `y0` corresponds
          to the function evaluated at the i-th element of `x0`.
        - If it is a scalar, then it corresponds to the evaluation of the
          function at `x0`.
        - If it is None and `x0` is provided, then the function is evaluated
          at each element of `x0`.

    * `random_state` [int, RandomState instance, or None (default)]:
        Set random state to something other than None for reproducible
        results.

    * `verbose` [boolean, default=False]:
        Control the verbosity. It is advised to set the verbosity to True
        for long optimization runs.

    * `callback` [callable, list of callables, optional]
        If callable then `callback(res)` is called after each call to `func`.
        If list of callables, then each callable in the list is called.

    * `n_points` [int, default=10000]:
        Number of points to sample to determine the next "best" point.
        When `acq_optimizer` is set to `"lbfgs"`, these samples are only
        used to choose the start points for the restarts.

    * `n_restarts_optimizer` [int, default=5]:
        The number of restarts of the optimizer when `acq_optimizer`
        is `"lbfgs"`.

    * `kappa` [float, default=1.96]:
        Controls how much of the variance in the predicted values should be
        taken into account. If set to be very high, then we are favouring
        exploration over exploitation and vice versa.
        Used when the acquisition is `"LCB"`.

    * `xi` [float, default=0.01]:
        Controls how much improvement one wants over the previous best
        values. Used when the acquisition is either `"EI"` or `"PI"`.

    * `noise` [float, default="gaussian"]:
        - Use noise="gaussian" if the objective returns noisy observations.
          The noise of each observation is assumed to be iid with
          mean zero and a fixed variance.
        - If the variance is known beforehand, this can be set directly
          to the variance of the noise.
        - Set this to a value close to zero (1e-10) if the function is
          noise-free. Setting it to exactly zero might cause stability issues.

    * `n_jobs` [int, default=1]
        Number of cores to run in parallel while running the lbfgs
        optimizations over the acquisition function. Valid only
        when `acq_optimizer` is set to `"lbfgs"`.
        Defaults to 1 core. If `n_jobs=-1`, then the number of jobs is set
        to the number of cores.

    Returns
    -------
    * `res` [`OptimizeResult`, scipy object]:
        The optimization result returned as an OptimizeResult object.
        Important attributes are:

        - `x` [list]: location of the minimum.
        - `fun` [float]: function value at the minimum.
        - `models`: surrogate models used for each iteration.
        - `x_iters` [list of lists]: location of function evaluation for each
           iteration.
        - `func_vals` [array]: function value for each iteration.
        - `space` [Space]: the optimization space.
        - `specs` [dict]: the call specifications.
        - `rng` [RandomState instance]: State of the random state
           at the end of minimization.

        For more details related to the OptimizeResult object, refer to
        http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.OptimizeResult.html
    """
    # Check params
    rng = check_random_state(random_state)
    space = normalize_dimensions(dimensions)
    # Cook a default GP estimator (Matern kernel) for the normalized space.
    base_estimator = cook_estimator(
        "GP", space=space, random_state=rng.randint(0, np.iinfo(np.int32).max),
        noise=noise)

    return base_minimize(
        func, space, base_estimator=base_estimator,
        acq_func=acq_func,
        xi=xi, kappa=kappa, acq_optimizer=acq_optimizer, n_calls=n_calls,
        n_points=n_points, n_random_starts=n_random_starts,
        n_restarts_optimizer=n_restarts_optimizer,
        x0=x0, y0=y0, random_state=rng, verbose=verbose,
        callback=callback, n_jobs=n_jobs)
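
As a usage sketch (not from the library docs; the quadratic objective and the bounds below are illustrative assumptions), a minimal gp_minimize call looks like this:

from skopt import gp_minimize

# Hypothetical objective: a 1-D quadratic with its minimum at x = 2.
def f(x):
    return (x[0] - 2.0) ** 2

# One Real dimension on [-5, 5]; a small budget keeps the sketch fast.
res = gp_minimize(f, [(-5.0, 5.0)], n_calls=15, n_random_starts=5,
                  random_state=0)
print(res.x, res.fun)  # location and value of the best point found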

Classes

class Optimizer

Run a Bayesian optimisation loop.

An Optimizer represents the steps of a Bayesian optimisation loop. To use it you need to provide your own loop mechanism. The various optimisers provided by skopt use this class under the hood.

Use this class directly if you want to control the iterations of your Bayesian optimisation loop, as in the sketch below.
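
A minimal ask/tell sketch, assuming a made-up 1-D objective (nothing here beyond Optimizer itself comes from the library):

import numpy as np
from skopt import Optimizer

opt = Optimizer([(-2.0, 2.0)], base_estimator="GP", n_initial_points=5,
                acq_func="EI", random_state=1)

# Drive the loop yourself: ask for a point, evaluate it, tell the result.
for _ in range(10):
    x = opt.ask()                        # next point to evaluate
    y = np.sin(3 * x[0]) + x[0] ** 2     # hypothetical objective
    opt.tell(x, y)

print(min(opt.yi))  # best observed objective value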

Parameters

  • dimensions [list, shape=(n_dims,)]: List of search space dimensions. Each search dimension can be defined either as

    • a (lower_bound, upper_bound) tuple (for Real or Integer dimensions),
    • a (lower_bound, upper_bound, "prior") tuple (for Real dimensions),
    • as a list of categories (for Categorical dimensions), or
    • an instance of a Dimension object (Real, Integer or Categorical).
  • base_estimator ["GP", "RF", "ET", "GBRT" or sklearn regressor, default="GP"]: Should inherit from sklearn.base.RegressorMixin. In addition, the predict method should have an optional return_std argument, which returns std(Y | x) along with E[Y | x]. If base_estimator is one of ["GP", "RF", "ET", "GBRT"], a default surrogate model of the corresponding type is used, matching what the minimize functions use.

  • n_random_starts [int, default=10]: DEPRECATED, use n_initial_points instead.

  • n_initial_points [int, default=10]: Number of evaluations of func with initialization points before approximating it with base_estimator. Points provided as x0 count as initialization points. If len(x0) < n_initial_points additional points are sampled at random.

  • acq_func [string, default="gp_hedge"]: Function to minimize over the posterior distribution. Can be either

    • "LCB" for lower confidence bound.
    • "EI" for negative expected improvement.
    • "PI" for negative probability of improvement.
    • "gp_hedge" Probabilistically choose one of the above three acquisition functions at every iteration (see the numeric sketch after this parameter list).
      • The gains g_i are initialized to zero.
      • At every iteration,
        • Each acquisition function is optimised independently to propose a candidate point X_i.
        • Out of all these candidate points, the next point X_best is chosen by $softmax(\eta g_i)$.
        • After fitting the surrogate model with (X_best, y_best), the gains are updated such that $g_i -= \mu(X_i)$.
    • "EIps" for negated expected improvement per second to take into account the function compute time. Then, the objective function is assumed to return two values, the first being the objective value and the second being the time taken in seconds.
    • "PIps" for negated probability of improvement per second. The return type of the objective function is assumed to be similar to that of "EIps".
  • acq_optimizer [string, "sampling" or "lbfgs", default="auto"]: Method to minimize the acquisition function. The fit model is updated with the optimal value obtained by optimizing acq_func with acq_optimizer.

    • If set to "auto", then acq_optimizer is configured on the basis of the base_estimator and the space searched over. If the space is Categorical or if the estimator provided is based on tree models, then this is set to "sampling".
    • If set to "sampling", then acq_func is optimized by computing acq_func at n_points randomly sampled points.
    • If set to "lbfgs", then acq_func is optimized by
      • Sampling n_restarts_optimizer points randomly.
      • "lbfgs" is run for 20 iterations with these points as initial points to find local minima.
      • The best of these local minima is used to update the prior.
  • random_state [int, RandomState instance, or None (default)]: Set random state to something other than None for reproducible results.

  • acq_func_kwargs [dict]: Additional arguments to be passed to the acquisition function.

  • acq_optimizer_kwargs [dict]: Additional arguments to be passed to the acquisition optimizer.
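
To make the gp_hedge bookkeeping concrete, here is a small standalone numeric sketch of the softmax selection step; the gain values are made up, and the computation mirrors what Optimizer.tell() does internally:

import numpy as np

eta = 1.0
gains = np.array([0.0, -0.5, 0.3])  # hypothetical g_i for EI, LCB, PI

# Numerically stable softmax over eta * g_i.
logits = gains - np.max(gains)
exp_logits = np.exp(eta * logits)
probs = exp_logits / np.sum(exp_logits)

# Draw which acquisition function's candidate becomes X_best.
rng = np.random.RandomState(0)
choice = np.argmax(rng.multinomial(1, probs))
print(probs, choice)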

Attributes

  • Xi [list]: Points at which objective has been evaluated.
  • yi [list]: Values of objective at corresponding points in Xi.
  • models [list]: Regression models used to fit observations and compute acquisition function.
  • space An instance of skopt.space.Space. Stores parameter search space used to sample points, bounds, and type of parameters.
class Optimizer(object):
    """Run a Bayesian optimisation loop.

    An `Optimizer` represents the steps of a Bayesian optimisation loop. To
    use it you need to provide your own loop mechanism. The various
    optimisers provided by `skopt` use this class under the hood.

    Use this class directly if you want to control the iterations of your
    Bayesian optimisation loop.

    Parameters
    ----------
    * `dimensions` [list, shape=(n_dims,)]:
        List of search space dimensions.
        Each search dimension can be defined either as

        - a `(lower_bound, upper_bound)` tuple (for `Real` or `Integer`
          dimensions),
        - a `(lower_bound, upper_bound, "prior")` tuple (for `Real`
          dimensions),
        - as a list of categories (for `Categorical` dimensions), or
        - an instance of a `Dimension` object (`Real`, `Integer` or
          `Categorical`).

    * `base_estimator` ["GP", "RF", "ET", "GBRT" or sklearn regressor, default="GP"]:
        Should inherit from `sklearn.base.RegressorMixin`.
        In addition, the `predict` method should have an optional `return_std`
        argument, which returns `std(Y | x)` along with `E[Y | x]`.
        If base_estimator is one of ["GP", "RF", "ET", "GBRT"], a default
        surrogate model of the corresponding type is used, matching what
        the minimize functions use.

    * `n_random_starts` [int, default=10]:
        DEPRECATED, use `n_initial_points` instead.

    * `n_initial_points` [int, default=10]:
        Number of evaluations of `func` with initialization points
        before approximating it with `base_estimator`. Points provided as
        `x0` count as initialization points. If len(x0) < n_initial_points
        additional points are sampled at random.

    * `acq_func` [string, default=`"gp_hedge"`]:
        Function to minimize over the posterior distribution. Can be either

        - `"LCB"` for lower confidence bound.
        - `"EI"` for negative expected improvement.
        - `"PI"` for negative probability of improvement.
        - `"gp_hedge"` Probabilistically choose one of the above three
          acquisition functions at every iteration.
            - The gains `g_i` are initialized to zero.
            - At every iteration,
                - Each acquisition function is optimised independently to
                  propose a candidate point `X_i`.
                - Out of all these candidate points, the next point `X_best` is
                  chosen by $softmax(\eta g_i)$.
                - After fitting the surrogate model with `(X_best, y_best)`,
                  the gains are updated such that $g_i -= \mu(X_i)$.
        - `"EIps"` for negated expected improvement per second to take into
          account the function compute time. Then, the objective function is
          assumed to return two values, the first being the objective value and
          the second being the time taken in seconds.
        - `"PIps"` for negated probability of improvement per second. The
          return type of the objective function is assumed to be similar to
          that of `"EIps"`.

    * `acq_optimizer` [string, `"sampling"` or `"lbfgs"`, default=`"auto"`]:
        Method to minimize the acquisition function. The fit model
        is updated with the optimal value obtained by optimizing `acq_func`
        with `acq_optimizer`.

        - If set to `"auto"`, then `acq_optimizer` is configured on the
          basis of the base_estimator and the space searched over.
          If the space is Categorical or if the estimator provided is based
          on tree models, then this is set to `"sampling"`.
        - If set to `"sampling"`, then `acq_func` is optimized by computing
          `acq_func` at `n_points` randomly sampled points.
        - If set to `"lbfgs"`, then `acq_func` is optimized by
              - Sampling `n_restarts_optimizer` points randomly.
              - `"lbfgs"` is run for 20 iterations with these points as initial
                points to find local minima.
              - The best of these local minima is used to update the prior.

    * `random_state` [int, RandomState instance, or None (default)]:
        Set random state to something other than None for reproducible
        results.

    * `acq_func_kwargs` [dict]:
        Additional arguments to be passed to the acquisition function.

    * `acq_optimizer_kwargs` [dict]:
        Additional arguments to be passed to the acquisition optimizer.


    Attributes
    ----------
    * `Xi` [list]:
        Points at which objective has been evaluated.
    * `yi` [list]:
        Values of objective at corresponding points in `Xi`.
    * `models` [list]:
        Regression models used to fit observations and compute acquisition
        function.
    * `space`
        An instance of `skopt.space.Space`. Stores parameter search space used
        to sample points, bounds, and type of parameters.

    """
    def __init__(self, dimensions, base_estimator="gp",
                 n_random_starts=None, n_initial_points=10,
                 acq_func="gp_hedge",
                 acq_optimizer="auto",
                 random_state=None, acq_func_kwargs=None,
                 acq_optimizer_kwargs=None):
        # Arguments that are just stored, not checked
        self.acq_func = acq_func
        self.rng = check_random_state(random_state)
        self.acq_func_kwargs = acq_func_kwargs

        allowed_acq_funcs = ["gp_hedge", "EI", "LCB", "PI", "EIps", "PIps"]
        if self.acq_func not in allowed_acq_funcs:
            raise ValueError("expected acq_func to be in %s, got %s" %
                             (",".join(allowed_acq_funcs), self.acq_func))
        if self.acq_func == "gp_hedge":
            self.cand_acq_funcs_ = ["EI", "LCB", "PI"]
            self.gains_ = np.zeros(3)
        else:
            self.cand_acq_funcs_ = [self.acq_func]

        if acq_func_kwargs is None:
            acq_func_kwargs = dict()
        self.eta = acq_func_kwargs.get("eta", 1.0)

        if acq_optimizer_kwargs is None:
            acq_optimizer_kwargs = dict()

        self.n_points = acq_optimizer_kwargs.get("n_points", 10000)
        self.n_restarts_optimizer = acq_optimizer_kwargs.get(
            "n_restarts_optimizer", 5)
        n_jobs = acq_optimizer_kwargs.get("n_jobs", 1)
        self.acq_optimizer_kwargs = acq_optimizer_kwargs

        if n_random_starts is not None:
            warnings.warn(("n_random_starts will be removed in favour of "
                           "n_initial_points."),
                          DeprecationWarning)
            n_initial_points = n_random_starts

        self._check_arguments(base_estimator, n_initial_points, acq_optimizer,
                              dimensions)

        if isinstance(self.base_estimator_, GaussianProcessRegressor):
            dimensions = normalize_dimensions(dimensions)

        self.space = Space(dimensions)
        self.models = []
        self.Xi = []
        self.yi = []

        self._cat_inds = []
        self._non_cat_inds = []
        for ind, dim in enumerate(self.space.dimensions):
            if isinstance(dim, Categorical):
                self._cat_inds.append(ind)
            else:
                self._non_cat_inds.append(ind)

        self.n_jobs = n_jobs

        # The cache of responses of `ask` method for n_points not None.
        # This ensures that multiple calls to `ask` with n_points set
        # return same sets of points.
        # The cache is reset to {} at every call to `tell`.
        self.cache_ = {}

    def _check_arguments(self, base_estimator, n_initial_points,
                         acq_optimizer, dimensions):
        """Check arguments for sanity."""

        if isinstance(base_estimator, str):
            base_estimator = cook_estimator(
                base_estimator, space=dimensions,
                random_state=self.rng.randint(0, np.iinfo(np.int32).max))

        if not is_regressor(base_estimator) and base_estimator is not None:
            raise ValueError(
                "%s has to be a regressor." % base_estimator)

        if "ps" in self.acq_func:
            self.base_estimator_ = MultiOutputRegressor(base_estimator)
        else:
            self.base_estimator_ = base_estimator

        if n_initial_points < 0:
            raise ValueError(
                "Expected `n_initial_points` >= 0, got %d" % n_initial_points)
        self._n_initial_points = n_initial_points
        self.n_initial_points_ = n_initial_points

        if acq_optimizer == "auto":
            if has_gradients(self.base_estimator_):
                acq_optimizer = "lbfgs"
            else:
                acq_optimizer = "sampling"

        if acq_optimizer not in ["lbfgs", "sampling"]:
            raise ValueError("Expected acq_optimizer to be 'lbfgs' or "
                             "'sampling', got {0}".format(acq_optimizer))

        if (not has_gradients(self.base_estimator_) and
            acq_optimizer != "sampling"):
            raise ValueError("The regressor {0} should run with "
                             "acq_optimizer"
                             "='sampling'.".format(type(base_estimator)))

        self.acq_optimizer = acq_optimizer

    def copy(self, random_state=None):
        """Create a shallow copy of an instance of the optimizer.

        Parameters
        ----------
        * `random_state` [int, RandomState instance, or None (default)]:
            Set the random state of the copy.
        """

        optimizer = Optimizer(
            dimensions=self.space.dimensions,
            base_estimator=self.base_estimator_,
            n_initial_points=self.n_initial_points_,
            acq_func=self.acq_func,
            acq_optimizer=self.acq_optimizer,
            acq_func_kwargs=self.acq_func_kwargs,
            acq_optimizer_kwargs=self.acq_optimizer_kwargs,
            random_state=random_state,
        )

        if hasattr(self, "gains_"):
            optimizer.gains_ = np.copy(self.gains_)

        if self.Xi:
            optimizer.tell(self.Xi, self.yi)

        return optimizer

    def ask(self, n_points=None, strategy="cl_min"):
        """Query point or multiple points at which objective should be evaluated.

        * `n_points` [int or None, default=None]:
            Number of points returned by the ask method.
            If the value is None, a single point to evaluate is returned.
            Otherwise a list of points to evaluate is returned of size
            n_points. This is useful if you can evaluate your objective in
            parallel, and thus obtain more objective function evaluations per
            unit of time.

        * `strategy` [string, default=`"cl_min"`]:
            Method to use to sample multiple points (see also `n_points`
            description). This parameter is ignored if n_points = None.
            Supported options are `"cl_min"`, `"cl_mean"` or `"cl_max"`.

            - If set to `"cl_min"`, then the constant liar strategy is used
               with the lie objective value being the minimum of observed
               objective values. `"cl_mean"` and `"cl_max"` use the mean and
               max of the values respectively. For details on this strategy
               see:

               https://hal.archives-ouvertes.fr/hal-00732512/document

               With this strategy a copy of the optimizer is created; the
               copy is asked for a point, told some fake objective (the lie)
               for it, asked for the next point, told the lie again, and so
               on. The type of lie defines the different flavours of `cl_x`
               strategies.

        """
        if n_points is None:
            return self._ask()

        supported_strategies = ["cl_min", "cl_mean", "cl_max"]

        if not (isinstance(n_points, int) and n_points > 0):
            raise ValueError(
                "n_points should be int > 0, got " + str(n_points)
            )

        if strategy not in supported_strategies:
            raise ValueError(
                "Expected parallel_strategy to be one of " +
                str(supported_strategies) + ", " + "got %s" % strategy
            )

        # Caching the result with n_points not None. If some new parameters
        # are provided to the ask, the cache_ is not used.
        if (n_points, strategy) in self.cache_:
            return self.cache_[(n_points, strategy)]

        # Copy of the optimizer is made in order to manage the
        # deletion of points with "lie" objective (the copy of
        # the optimizer is simply discarded)
        opt = self.copy()

        X = []
        for i in range(n_points):
            x = opt.ask()
            X.append(x)
            if strategy == "cl_min":
                y_lie = np.min(opt.yi) if opt.yi else 0.0  # CL-min lie
            elif strategy == "cl_mean":
                y_lie = np.mean(opt.yi) if opt.yi else 0.0  # CL-mean lie
            else:
                y_lie = np.max(opt.yi) if opt.yi else 0.0  # CL-max lie
            opt.tell(x, y_lie)  # lie to the optimizer

        self.cache_ = {(n_points, strategy): X}  # cache_ the result

        return X

    def _ask(self):
        """Suggest next point at which to evaluate the objective.

        Return a random point until at least `n_initial_points`
        observations have been `tell`ed; after that `base_estimator` is used
        to determine the next point.
        """
        if self._n_initial_points > 0 or self.base_estimator_ is None:
            # this will not make a copy of `self.rng` and hence keep advancing
            # our random state.
            return self.space.rvs(random_state=self.rng)[0]

        else:
            if not self.models:
                raise RuntimeError("Random evaluations exhausted and no "
                                   "model has been fit.")

            next_x = self._next_x
            min_delta_x = min([self.space.distance(next_x, xi)
                               for xi in self.Xi])
            if abs(min_delta_x) <= 1e-8:
                warnings.warn("The objective has been evaluated "
                              "at this point before.")

            # return point computed from last call to tell()
            return next_x

    def tell(self, x, y, fit=True):
        """Record an observation (or several) of the objective function.

        Provide values of the objective function at points suggested by `ask()`
        or other points. By default a new model will be fit to all
        observations. The new model is used to suggest the next point at
        which to evaluate the objective. This point can be retrieved by calling
        `ask()`.

        To add observations without fitting a new model set `fit` to False.

        To add multiple observations in a batch pass a list-of-lists for `x`
        and a list of scalars for `y`.

        Parameters
        ----------
        * `x` [list or list-of-lists]:
            Point at which objective was evaluated.

        * `y` [scalar or list]:
            Value of objective at `x`.

        * `fit` [bool, default=True]
            Fit a model to observed evaluations of the objective. A model will
            only be fitted after `n_initial_points` points have been told to
            the optimizer irrespective of the value of `fit`.
        """
        check_x_in_space(x, self.space)

        if "ps" in self.acq_func:
            if is_2Dlistlike(x):
                if np.ndim(y) == 2 and np.shape(y)[1] == 2:
                    y = [[val, log(t)] for (val, t) in y]
                    self.Xi.extend(x)
                    self.yi.extend(y)
                else:
                    raise TypeError("expcted y to be a list of (func_val, t)")
                self._n_initial_points -= len(y)
            elif is_listlike(x):
                if np.ndim(y) == 1 and len(y) == 2:
                    y = list(y)
                    y[1] = log(y[1])
                    self.Xi.append(x)
                    self.yi.append(y)
                else:
                    raise TypeError("expected y to be (func_val, t)")
                self._n_initial_points -= 1

        # if y isn't a scalar it means we have been handed a batch of points
        elif is_listlike(y) and is_2Dlistlike(x):
            self.Xi.extend(x)
            self.yi.extend(y)
            self._n_initial_points -= len(y)

        elif is_listlike(x):
            if isinstance(y, Number):
                self.Xi.append(x)
                self.yi.append(y)
                self._n_initial_points -= 1
            else:
                raise ValueError("`func` should return a scalar")

        else:
            raise ValueError("Type of arguments `x` (%s) and `y` (%s) "
                             "not compatible." % (type(x), type(y)))

        # optimizer learned something new - discard cache
        self.cache_ = {}

        # after being "told" n_initial_points we switch from sampling
        # random points to using a surrogate model
        if (fit and self._n_initial_points <= 0 and
           self.base_estimator_ is not None):
            transformed_bounds = np.array(self.space.transformed_bounds)
            est = clone(self.base_estimator_)

            with warnings.catch_warnings():
                warnings.simplefilter("ignore")
                est.fit(self.space.transform(self.Xi), self.yi)

            if hasattr(self, "next_xs_") and self.acq_func == "gp_hedge":
                self.gains_ -= est.predict(np.vstack(self.next_xs_))
            self.models.append(est)

            # even with BFGS as optimizer we want to sample a large number
            # of points and then pick the best ones as starting points
            X = self.space.transform(self.space.rvs(
                n_samples=self.n_points, random_state=self.rng))

            self.next_xs_ = []
            for cand_acq_func in self.cand_acq_funcs_:
                values = _gaussian_acquisition(
                    X=X, model=est, y_opt=np.min(self.yi),
                    acq_func=cand_acq_func,
                    acq_func_kwargs=self.acq_func_kwargs)
                # Find the minimum of the acquisition function by randomly
                # sampling points from the space
                if self.acq_optimizer == "sampling":
                    next_x = X[np.argmin(values)]

                # Use BFGS to find the minimum of the acquisition function, the
                # minimization starts from `n_restarts_optimizer` different
                # points and the best minimum is used
                elif self.acq_optimizer == "lbfgs":
                    x0 = X[np.argsort(values)[:self.n_restarts_optimizer]]

                    with warnings.catch_warnings():
                        warnings.simplefilter("ignore")
                        results = Parallel(n_jobs=self.n_jobs)(
                            delayed(fmin_l_bfgs_b)(
                                gaussian_acquisition_1D, x,
                                args=(est, np.min(self.yi), cand_acq_func,
                                      self.acq_func_kwargs),
                                bounds=self.space.transformed_bounds,
                                approx_grad=False,
                                maxiter=20)
                            for x in x0)

                    cand_xs = np.array([r[0] for r in results])
                    cand_acqs = np.array([r[1] for r in results])
                    next_x = cand_xs[np.argmin(cand_acqs)]

                # lbfgs should handle this but just in case there are
                # precision errors.
                if not self.space.is_categorical:
                    next_x = np.clip(
                        next_x, transformed_bounds[:, 0],
                        transformed_bounds[:, 1])
                self.next_xs_.append(next_x)

            if self.acq_func == "gp_hedge":
                logits = np.array(self.gains_)
                logits -= np.max(logits)
                exp_logits = np.exp(self.eta * logits)
                probs = exp_logits / np.sum(exp_logits)
                next_x = self.next_xs_[np.argmax(self.rng.multinomial(1,
                                                                      probs))]
            else:
                next_x = self.next_xs_[0]

            # note the need for [0] at the end
            self._next_x = self.space.inverse_transform(
                next_x.reshape((1, -1)))[0]

        # Pack results
        return create_result(self.Xi, self.yi, self.space, self.rng,
                             models=self.models)

    def run(self, func, n_iter=1):
        """Execute ask() + tell() `n_iter` times"""
        for _ in range(n_iter):
            x = self.ask()
            self.tell(x, func(x))

        return create_result(self.Xi, self.yi, self.space, self.rng,
                             models=self.models)

Methods

def __init__(

self, dimensions, base_estimator='gp', n_random_starts=None, n_initial_points=10, acq_func='gp_hedge', acq_optimizer='auto', random_state=None, acq_func_kwargs=None, acq_optimizer_kwargs=None)

Initialize self. See help(type(self)) for accurate signature.

def __init__(self, dimensions, base_estimator="gp",
             n_random_starts=None, n_initial_points=10,
             acq_func="gp_hedge",
             acq_optimizer="auto",
             random_state=None, acq_func_kwargs=None,
             acq_optimizer_kwargs=None):
    # Arguments that are just stored, not checked
    self.acq_func = acq_func
    self.rng = check_random_state(random_state)
    self.acq_func_kwargs = acq_func_kwargs
    allowed_acq_funcs = ["gp_hedge", "EI", "LCB", "PI", "EIps", "PIps"]
    if self.acq_func not in allowed_acq_funcs:
        raise ValueError("expected acq_func to be in %s, got %s" %
                         (",".join(allowed_acq_funcs), self.acq_func))
    if self.acq_func == "gp_hedge":
        self.cand_acq_funcs_ = ["EI", "LCB", "PI"]
        self.gains_ = np.zeros(3)
    else:
        self.cand_acq_funcs_ = [self.acq_func]
    if acq_func_kwargs is None:
        acq_func_kwargs = dict()
    self.eta = acq_func_kwargs.get("eta", 1.0)
    if acq_optimizer_kwargs is None:
        acq_optimizer_kwargs = dict()
    self.n_points = acq_optimizer_kwargs.get("n_points", 10000)
    self.n_restarts_optimizer = acq_optimizer_kwargs.get(
        "n_restarts_optimizer", 5)
    n_jobs = acq_optimizer_kwargs.get("n_jobs", 1)
    self.acq_optimizer_kwargs = acq_optimizer_kwargs
    if n_random_starts is not None:
        warnings.warn(("n_random_starts will be removed in favour of "
                       "n_initial_points."),
                      DeprecationWarning)
        n_initial_points = n_random_starts
    self._check_arguments(base_estimator, n_initial_points, acq_optimizer,
                          dimensions)
    if isinstance(self.base_estimator_, GaussianProcessRegressor):
        dimensions = normalize_dimensions(dimensions)
    self.space = Space(dimensions)
    self.models = []
    self.Xi = []
    self.yi = []
    self._cat_inds = []
    self._non_cat_inds = []
    for ind, dim in enumerate(self.space.dimensions):
        if isinstance(dim, Categorical):
            self._cat_inds.append(ind)
        else:
            self._non_cat_inds.append(ind)
    self.n_jobs = n_jobs
    # The cache of responses of `ask` method for n_points not None.
    # This ensures that multiple calls to `ask` with n_points set
    # return same sets of points.
    # The cache is reset to {} at every call to `tell`.
    self.cache_ = {}

def ask(

self, n_points=None, strategy='cl_min')

Query point or multiple points at which objective should be evaluated.

  • n_points [int or None, default=None]: Number of points returned by the ask method. If the value is None, a single point to evaluate is returned. Otherwise a list of points to evaluate is returned of size n_points. This is useful if you can evaluate your objective in parallel, and thus obtain more objective function evaluations per unit of time.

  • strategy [string, default="cl_min"]: Method to use to sample multiple points (see also n_points description). This parameter is ignored if n_points = None. Supported options are "cl_min", "cl_mean" or "cl_max".

    • If set to "cl_min", then the constant liar strategy is used with the lie objective value being the minimum of observed objective values. "cl_mean" and "cl_max" use the mean and max of the values respectively. For details on this strategy see:

    https://hal.archives-ouvertes.fr/hal-00732512/document

    With this strategy a copy of the optimizer is created; the copy is asked for a point, told some fake objective (the lie) for it, asked for the next point, told the lie again, and so on. The type of lie defines the different flavours of cl_x strategies.
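
A hedged sketch of batched suggestions (the dimension and the objective are illustrative assumptions):

from skopt import Optimizer

opt = Optimizer([(0.0, 1.0)], n_initial_points=3, random_state=0)

# Ask for 4 points at once using the constant-liar strategy, evaluate
# them (in parallel if you can), then tell all results back in one batch.
X = opt.ask(n_points=4, strategy="cl_min")
Y = [(x[0] - 0.3) ** 2 for x in X]  # hypothetical objective
opt.tell(X, Y)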

def ask(self, n_points=None, strategy="cl_min"):
    """Query point or multiple points at which objective should be evaluated.
    * `n_points` [int or None, default=None]:
        Number of points returned by the ask method.
        If the value is None, a single point to evaluate is returned.
        Otherwise a list of points to evaluate is returned of size
        n_points. This is useful if you can evaluate your objective in
        parallel, and thus obtain more objective function evaluations per
        unit of time.
    * `strategy` [string, default=`"cl_min"`]:
        Method to use to sample multiple points (see also `n_points`
        description). This parameter is ignored if n_points = None.
        Supported options are `"cl_min"`, `"cl_mean"` or `"cl_max"`.
        - If set to `"cl_min"`, then the constant liar strategy is used
           with the lie objective value being the minimum of observed
           objective values. `"cl_mean"` and `"cl_max"` use the mean and
           max of the values respectively. For details on this strategy see:
           https://hal.archives-ouvertes.fr/hal-00732512/document
           With this strategy a copy of the optimizer is created; the copy
           is asked for a point, told some fake objective (the lie) for it,
           asked for the next point, told the lie again, and so on. The
           type of lie defines the different flavours of `cl_x` strategies.
    """
    if n_points is None:
        return self._ask()
    supported_strategies = ["cl_min", "cl_mean", "cl_max"]
    if not (isinstance(n_points, int) and n_points > 0):
        raise ValueError(
            "n_points should be int > 0, got " + str(n_points)
        )
    if strategy not in supported_strategies:
        raise ValueError(
            "Expected parallel_strategy to be one of " +
            str(supported_strategies) + ", " + "got %s" % strategy
        )
    # Caching the result with n_points not None. If some new parameters
    # are provided to the ask, the cache_ is not used.
    if (n_points, strategy) in self.cache_:
        return self.cache_[(n_points, strategy)]
    # Copy of the optimizer is made in order to manage the
    # deletion of points with "lie" objective (the copy of
    # the optimizer is simply discarded)
    opt = self.copy()
    X = []
    for i in range(n_points):
        x = opt.ask()
        X.append(x)
        if strategy == "cl_min":
            y_lie = np.min(opt.yi) if opt.yi else 0.0  # CL-min lie
        elif strategy == "cl_mean":
            y_lie = np.mean(opt.yi) if opt.yi else 0.0  # CL-mean lie
        else:
            y_lie = np.max(opt.yi) if opt.yi else 0.0  # CL-max lie
        opt.tell(x, y_lie)  # lie to the optimizer
    self.cache_ = {(n_points, strategy): X}  # cache_ the result
    return X

def copy(

self, random_state=None)

Create a shallow copy of an instance of the optimizer.

Parameters

  • random_state [int, RandomState instance, or None (default)]: Set the random state of the copy.
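
A short sketch of branching a run, assuming two illustrative observations; the copy replays the told points and receives its own random state:

from skopt import Optimizer

opt = Optimizer([(0.0, 1.0)], n_initial_points=2, random_state=0)
opt.tell([[0.2], [0.8]], [0.5, 0.1])

# Branch the run: the copy is told the same observations.
branch = opt.copy(random_state=42)
print(branch.Xi == opt.Xi)  # True
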
def copy(self, random_state=None):
    """Create a shallow copy of an instance of the optimizer.
    Parameters
    ----------
    * `random_state` [int, RandomState instance, or None (default)]:
        Set the random state of the copy.
    """
    optimizer = Optimizer(
        dimensions=self.space.dimensions,
        base_estimator=self.base_estimator_,
        n_initial_points=self.n_initial_points_,
        acq_func=self.acq_func,
        acq_optimizer=self.acq_optimizer,
        acq_func_kwargs=self.acq_func_kwargs,
        acq_optimizer_kwargs=self.acq_optimizer_kwargs,
        random_state=random_state,
    )
    if hasattr(self, "gains_"):
        optimizer.gains_ = np.copy(self.gains_)
    if self.Xi:
        optimizer.tell(self.Xi, self.yi)
    return optimizer

def run(

self, func, n_iter=1)

Execute ask() + tell() n_iter times

def run(self, func, n_iter=1):
    """Execute ask() + tell() `n_iter` times"""
    for _ in range(n_iter):
        x = self.ask()
        self.tell(x, func(x))
    return create_result(self.Xi, self.yi, self.space, self.rng,
                         models=self.models)
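
For completeness, a one-liner sketch using run; the lambda objective is an illustrative assumption:

from skopt import Optimizer

opt = Optimizer([(-1.0, 1.0)], random_state=0)
res = opt.run(lambda x: x[0] ** 2, n_iter=12)  # ask/tell handled internally
print(res.x, res.fun)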

def tell(

self, x, y, fit=True)

Record an observation (or several) of the objective function.

Provide values of the objective function at points suggested by ask() or other points. By default a new model will be fit to all observations. The new model is used to suggest the next point at which to evaluate the objective. This point can be retrieved by calling ask().

To add observations without fitting a new model set fit to False.

To add multiple observations in a batch pass a list-of-lists for x and a list of scalars for y.

Parameters

  • x [list or list-of-lists]: Point at which objective was evaluated.

  • y [scalar or list]: Value of objective at x.

  • fit [bool, default=True] Fit a model to observed evaluations of the objective. A model will only be fitted after n_initial_points points have been told to the optimizer irrespective of the value of fit.
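
A small sketch of single and batch observations (points and values are made up):

from skopt import Optimizer

opt = Optimizer([(0.0, 10.0)], n_initial_points=3)

# Single observation: a point (list) and a scalar value.
opt.tell([2.5], 1.7)

# Batch: a list-of-lists for x and a list of scalars for y.
opt.tell([[1.0], [9.0]], [4.2, 0.3])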

def tell(self, x, y, fit=True):
    """Record an observation (or several) of the objective function.
    Provide values of the objective function at points suggested by `ask()`
    or other points. By default a new model will be fit to all
    observations. The new model is used to suggest the next point at
    which to evaluate the objective. This point can be retrieved by calling
    `ask()`.
    To add observations without fitting a new model set `fit` to False.
    To add multiple observations in a batch pass a list-of-lists for `x`
    and a list of scalars for `y`.
    Parameters
    ----------
    * `x` [list or list-of-lists]:
        Point at which objective was evaluated.
    * `y` [scalar or list]:
        Value of objective at `x`.
    * `fit` [bool, default=True]
        Fit a model to observed evaluations of the objective. A model will
        only be fitted after `n_initial_points` points have been told to
        the optimizer irrespective of the value of `fit`.
    """
    check_x_in_space(x, self.space)
    if "ps" in self.acq_func:
        if is_2Dlistlike(x):
            if np.ndim(y) == 2 and np.shape(y)[1] == 2:
                y = [[val, log(t)] for (val, t) in y]
                self.Xi.extend(x)
                self.yi.extend(y)
            else:
                raise TypeError("expcted y to be a list of (func_val, t)")
            self._n_initial_points -= len(y)
        elif is_listlike(x):
            if np.ndim(y) == 1 and len(y) == 2:
                y = list(y)
                y[1] = log(y[1])
                self.Xi.append(x)
                self.yi.append(y)
            else:
                raise TypeError("expected y to be (func_val, t)")
            self._n_initial_points -= 1
    # if y isn't a scalar it means we have been handed a batch of points
    elif is_listlike(y) and is_2Dlistlike(x):
        self.Xi.extend(x)
        self.yi.extend(y)
        self._n_initial_points -= len(y)
    elif is_listlike(x):
        if isinstance(y, Number):
            self.Xi.append(x)
            self.yi.append(y)
            self._n_initial_points -= 1
        else:
            raise ValueError("`func` should return a scalar")
    else:
        raise ValueError("Type of arguments `x` (%s) and `y` (%s) "
                         "not compatible." % (type(x), type(y)))
    # optimizer learned something new - discard cache
    self.cache_ = {}
    # after being "told" n_initial_points we switch from sampling
    # random points to using a surrogate model
    if (fit and self._n_initial_points <= 0 and
       self.base_estimator_ is not None):
        transformed_bounds = np.array(self.space.transformed_bounds)
        est = clone(self.base_estimator_)
        with warnings.catch_warnings():
            warnings.simplefilter("ignore")
            est.fit(self.space.transform(self.Xi), self.yi)
        if hasattr(self, "next_xs_") and self.acq_func == "gp_hedge":
            self.gains_ -= est.predict(np.vstack(self.next_xs_))
        self.models.append(est)
        # even with BFGS as optimizer we want to sample a large number
        # of points and then pick the best ones as starting points
        X = self.space.transform(self.space.rvs(
            n_samples=self.n_points, random_state=self.rng))
        self.next_xs_ = []
        for cand_acq_func in self.cand_acq_funcs_:
            values = _gaussian_acquisition(
                X=X, model=est, y_opt=np.min(self.yi),
                acq_func=cand_acq_func,
                acq_func_kwargs=self.acq_func_kwargs)
            # Find the minimum of the acquisition function by randomly
            # sampling points from the space
            if self.acq_optimizer == "sampling":
                next_x = X[np.argmin(values)]
            # Use BFGS to find the minimum of the acquisition function, the
            # minimization starts from `n_restarts_optimizer` different
            # points and the best minimum is used
            elif self.acq_optimizer == "lbfgs":
                x0 = X[np.argsort(values)[:self.n_restarts_optimizer]]
                with warnings.catch_warnings():
                    warnings.simplefilter("ignore")
                    results = Parallel(n_jobs=self.n_jobs)(
                        delayed(fmin_l_bfgs_b)(
                            gaussian_acquisition_1D, x,
                            args=(est, np.min(self.yi), cand_acq_func,
                                  self.acq_func_kwargs),
                            bounds=self.space.transformed_bounds,
                            approx_grad=False,
                            maxiter=20)
                        for x in x0)
                cand_xs = np.array([r[0] for r in results])
                cand_acqs = np.array([r[1] for r in results])
                next_x = cand_xs[np.argmin(cand_acqs)]
            # lbfgs should handle this but just in case there are
            # precision errors.
            if not self.space.is_categorical:
                next_x = np.clip(
                    next_x, transformed_bounds[:, 0],
                    transformed_bounds[:, 1])
            self.next_xs_.append(next_x)
        if self.acq_func == "gp_hedge":
            logits = np.array(self.gains_)
            logits -= np.max(logits)
            exp_logits = np.exp(self.eta * logits)
            probs = exp_logits / np.sum(exp_logits)
            next_x = self.next_xs_[np.argmax(self.rng.multinomial(1,
                                                                  probs))]
        else:
            next_x = self.next_xs_[0]
        # note the need for [0] at the end
        self._next_x = self.space.inverse_transform(
            next_x.reshape((1, -1)))[0]
    # Pack results
    return create_result(self.Xi, self.yi, self.space, self.rng,
                         models=self.models)

Instance variables

var Xi

var acq_func

var acq_func_kwargs

var acq_optimizer_kwargs

var cache_

var eta

var models

var n_jobs

var n_points

var n_restarts_optimizer

var rng

var space

var yi