skopt.optimizer.base_minimize
skopt.optimizer.base_minimize(func, dimensions, base_estimator, n_calls=100, n_random_starts=10, acq_func='EI', acq_optimizer='lbfgs', x0=None, y0=None, random_state=None, verbose=False, callback=None, n_points=10000, n_restarts_optimizer=5, xi=0.01, kappa=1.96, n_jobs=1, model_queue_size=None)

Base optimizer class.

func (callable) – Function to minimize. Should take a single list of parameters and return the objective value. If you have a search space where all dimensions have names, then you can use skopt.utils.use_named_args as a decorator on your objective function, in order to call it directly with the named arguments. See use_named_args for an example, and the sketch just below.
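For instance, a minimal sketch of such a named-arguments objective. The search space and the toy objective are invented here for illustration; only use_named_args and the Real/Integer dimension classes come from the library:

```python
from skopt.space import Integer, Real
from skopt.utils import use_named_args

# Named dimensions, so the objective can be written with keyword arguments.
dimensions = [
    Real(-5.0, 5.0, name="x"),
    Integer(1, 10, name="n"),
]

@use_named_args(dimensions)
def objective(x, n):
    # base_minimize passes a plain list of values; the decorator
    # maps that list onto the named arguments declared above.
    return x ** 2 + n

# The decorated function still accepts a single list of parameters:
print(objective([0.5, 3]))  # -> 3.25
```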
Parameters
dimensions (list, shape (n_dims,)) – List of search space dimensions. Each search dimension can be defined either as

- a (lower_bound, upper_bound) tuple (for Real or Integer dimensions),
- a (lower_bound, upper_bound, "prior") tuple (for Real dimensions),
- a list of categories (for Categorical dimensions), or
- an instance of a Dimension object (Real, Integer or Categorical).

NOTE: The upper and lower bounds are inclusive for Integer dimensions. A concrete search space is shown in the sketch after this parameter list.

base_estimator (sklearn regressor) – Should inherit from sklearn.base.RegressorMixin. In addition, should have an optional return_std argument, which returns std(Y | x) along with E[Y | x].

n_calls (int, default=100) – Maximum number of calls to func. The objective function will always be evaluated this number of times; the various options for supplying initialization points do not affect this value.

n_random_starts (int, default=10) – Number of evaluations of func with random points before approximating it with base_estimator.

acq_func (string, default="EI") – Function to minimize over the posterior distribution. Can be either

- "LCB" for lower confidence bound,
- "EI" for negative expected improvement,
- "PI" for negative probability of improvement,
- "EIps" for negated expected improvement per second to take into account the function compute time. Then, the objective function is assumed to return two values, the first being the objective value and the second being the time taken in seconds.
- "PIps" for negated probability of improvement per second. The return type of the objective function is assumed to be similar to that of "EIps".
acq_optimizer (string, "sampling" or "lbfgs", default="lbfgs") – Method to minimize the acquisition function. The fit model is updated with the optimal value obtained by optimizing acq_func with acq_optimizer.

- If set to "sampling", then acq_func is optimized by computing acq_func at n_points randomly sampled points and the smallest value found is used.
- If set to "lbfgs", then the n_restarts_optimizer points at which the acquisition function is smallest are taken as start points. "lbfgs" is run for 20 iterations with these points as initial points to find local minima. The best of these local minima is used to update the prior.
x0 (list, list of lists or None) – Initial input points.

- If it is a list of lists, use it as a list of input points. If no corresponding outputs y0 are supplied, then len(x0) of the total calls to the objective function will be spent evaluating the points in x0. If the corresponding outputs are provided, then they will be used together with evaluated points during a run of the algorithm to construct a surrogate.
- If it is a list, use it as a single initial input point. The algorithm will spend 1 call to evaluate the initial point, if the outputs are not provided.
- If it is None, no initial input points are used.
y0 (list, scalar or None) – Objective values at initial input points.

- If it is a list, then it corresponds to evaluations of the function at each element of x0: the i-th element of y0 corresponds to the function evaluated at the i-th element of x0.
- If it is a scalar, then it corresponds to the evaluation of the function at x0.
- If it is None and x0 is provided, then the function is evaluated at each element of x0.
random_state (int, RandomState instance, or None (default)) – Set random state to something other than None for reproducible results.
verbose (boolean, default=False) – Control the verbosity. It is advised to set the verbosity to True for long optimization runs.
callback (callable, list of callables, optional) – If callable, then callback(res) is called after each call to func. If a list of callables, then each callable in the list is called. A sketch appears at the end of this page.

n_points (int, default=10000) – If acq_optimizer is set to "sampling", then acq_func is optimized by computing acq_func at n_points randomly sampled points.

n_restarts_optimizer (int, default=5) – The number of restarts of the optimizer when acq_optimizer is "lbfgs".

xi (float, default=0.01) – Controls how much improvement one wants over the previous best values. Used when the acquisition is either "EI" or "PI".

kappa (float, default=1.96) – Controls how much of the variance in the predicted values should be taken into account. If set to be very high, then we are favouring exploration over exploitation and vice versa. Used when the acquisition is "LCB".

n_jobs (int, default=1) – Number of cores to run in parallel while running the lbfgs optimizations over the acquisition function. Valid only when acq_optimizer is set to "lbfgs". Defaults to 1 core. If n_jobs=-1, then the number of jobs is set to the number of cores.

model_queue_size (int or None, default=None) – Keeps the list of models only as long as the argument given. In the case of None, the list has no capped length.
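A minimal sketch tying these parameters together. It assumes scikit-optimize and scikit-learn are installed; the objective, search space, and initial point are invented for illustration, sklearn's GaussianProcessRegressor stands in for any regressor exposing return_std, and acq_optimizer="sampling" is chosen because it places no extra requirements on the estimator:

```python
from sklearn.gaussian_process import GaussianProcessRegressor

from skopt.optimizer import base_minimize
from skopt.space import Integer, Real


def objective(params):
    # Takes a single list of parameter values, returns the objective value.
    x, n = params
    return (x - 0.3) ** 2 + 0.1 * (n - 4) ** 2


dimensions = [
    Real(-1.0, 1.0, name="x"),  # a (lower_bound, upper_bound) tuple would also work
    Integer(1, 10, name="n"),   # bounds are inclusive for Integer dimensions
]

res = base_minimize(
    func=objective,
    dimensions=dimensions,
    base_estimator=GaussianProcessRegressor(),  # exposes return_std in predict
    n_calls=30,
    n_random_starts=10,
    acq_func="EI",
    acq_optimizer="sampling",  # places no gradient requirements on the estimator
    x0=[[0.0, 5]],             # one initial input point, evaluated before the random ones
    random_state=0,
)

print(res.x, res.fun)
```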
Returns

res (OptimizeResult, scipy object) – The optimization result returned as an OptimizeResult object. Important attributes are:

- x [list]: location of the minimum.
- fun [float]: function value at the minimum.
- models: surrogate models used for each iteration.
- x_iters [list of lists]: location of function evaluation for each iteration.
- func_vals [array]: function value for each iteration.
- space [Space]: the optimization space.
- specs [dict]: the call specifications.
- rng [RandomState instance]: State of the random state at the end of minimization.

For more details related to the OptimizeResult object, refer to http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.OptimizeResult.html
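And a hedged sketch of the callback hook together with the attributes of the returned OptimizeResult. The one-dimensional objective and the log_progress callback are placeholders, and sklearn's GaussianProcessRegressor with acq_optimizer="sampling" is assumed as in the sketch above:

```python
from sklearn.gaussian_process import GaussianProcessRegressor

from skopt.optimizer import base_minimize
from skopt.space import Real


def objective(params):
    # Placeholder objective over a single Real dimension.
    return (params[0] - 0.5) ** 2


def log_progress(result):
    # Called after every evaluation of the objective.
    print("call %d, best so far: %.5f" % (len(result.func_vals), result.fun))


res = base_minimize(
    objective,
    dimensions=[Real(0.0, 1.0)],
    base_estimator=GaussianProcessRegressor(),
    n_calls=15,
    acq_optimizer="sampling",
    callback=log_progress,
    random_state=1,
)

print(res.x)          # [list] location of the minimum
print(res.fun)        # [float] function value at the minimum
print(res.x_iters)    # [list of lists] point evaluated at each iteration
print(res.func_vals)  # [array] function value at each iteration
print(res.space)      # [Space] the optimization space
```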