`GPyOpt.core.task.SingleObjective`: the objective function should take 2-dimensional numpy arrays as inputs and return them as outputs. Each row should contain a location (in the case of the inputs) or a function evaluation (in the case of the outputs).
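For example, a valid objective under this convention might look like the following (a minimal sketch; the quadratic form is just a placeholder):

```python
import numpy as np

def f(X):
    # X has shape (n, d): one candidate location per row.
    X = np.atleast_2d(X)
    # Return shape (n, 1): one function evaluation per row.
    return np.sum(np.square(X), axis=1, keepdims=True)
```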
It seems that in `GPyOpt.methods.ModularBayesianOptimization(model, feasible_region, objective, acquisition, evaluator, initial_design)`, only `evaluator` is actually used for the BO calculation, while `acquisition` is only used for printing information or plotting the acquisition function.
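For reference, this is roughly how the modular pieces are assembled (a sketch following GPyOpt's modular-BO tutorial; the domain and the choice of EI are placeholder choices):

```python
import numpy as np
import GPyOpt

def f(X):
    # Example objective: (n, d) in, (n, 1) out.
    return np.sum(np.square(np.atleast_2d(X)), axis=1, keepdims=True)

space = GPyOpt.Design_space(space=[
    {'name': 'x1', 'type': 'continuous', 'domain': (-2, 2)},
    {'name': 'x2', 'type': 'continuous', 'domain': (-2, 2)}])
objective = GPyOpt.core.task.SingleObjective(f)
model = GPyOpt.models.GPModel(exact_feval=True, verbose=False)
acq_optimizer = GPyOpt.optimization.AcquisitionOptimizer(space)
initial_design = GPyOpt.experiment_design.initial_design('random', space, 5)
acquisition = GPyOpt.acquisitions.AcquisitionEI(model, space, optimizer=acq_optimizer)
evaluator = GPyOpt.core.evaluators.Sequential(acquisition)

bo = GPyOpt.methods.ModularBayesianOptimization(
    model, space, objective, acquisition, evaluator, initial_design)
bo.run_optimization(max_iter=10)
```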
In this notebook we are going to see how to use GPyOpt to solve optimization problems in which certain variables are fixed during the optimization phase. These are called context variables. For details see:
Krause, A. & Ong, C. S. Contextual Gaussian process bandit optimization. Advances in Neural Information Processing Systems (NIPS), 2011, 2447–2455.
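As a quick reminder of the interface (a sketch; `bopt` and the variable names are placeholders), the fixed variables are passed through the `context` argument:

```python
# `bopt` is assumed to be an already-constructed
# GPyOpt.methods.BayesianOptimization over variables named
# 'var_1' and 'var_2' (hypothetical names).
# Fix var_2 at 0.3; only the remaining variables are searched over
# when the acquisition function is optimized.
bopt.run_optimization(max_iter=10, context={'var_2': 0.3})
```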
- `acquisitions/`: EI(_MCMC), ES, LCB(_MCMC), LP, MPI(_MCMC)
- `core/`:
  - `evaluators/`: the evaluator of the function. There are mainly two base classes: `EvaluatorBase` and `SamplingBasedBatchEvaluator`; `SamplingBasedBatchEvaluator` provides an actual batch implementation. `GPyOpt.core.evaluators.sequential.Sequential` does not implement batch calculation, since it only needs to compute the next sample one step at a time.
  - `task/`:
    - `cost.py`: the cost of evaluating the function. By default no cost is used.
    - `objective.py`
    - `space.py`
    - `variables.py`
  - `bo`: runner of the Bayesian optimization loop. This class wraps the optimization loop around the different handlers. `model_update_interval` could directly replace the several flags I patched in myself; I need to check whether its implementation matches the logic I need.
  - `errors`:
- `experiment_design/`: seems to target optimization over discrete variables?
- `interface/`: what is this?
- `methods/`:
  - `bayesian_optimization.py`, `class BayesianOptimization(BO)`: main class to initialize a Bayesian Optimization method. How does it differ from `BO`? Sets `self.modular_optimization = False`.
  - `modular_bayesian_optimization.py`, `class ModularBayesianOptimization(BO)`: modular Bayesian optimization. This class wraps the optimization loop around the different handlers. Sets `self.initial_iter = True` and `self.modular_optimization = True`; everything else is just a direct initialization of `BO`.
- `models/`:
  - `GPModel(BOModel)`, `InputWarpedGPModel(GPModel)`: models based on GPy.
  - `RFModel(BOModel)`, `WarpedGPModel(BOModel)` (the latter is unreliable): based on `sklearn.ensemble.RandomForestRegressor`.
- `objective_examples/`
- `optimization/`:
  - `optimizer.py`: wrappers around several different local optimizers, plus `OptimizationWithContext`, which is only used here.
  - `acquisition_optimizer.py`: general class for acquisition optimizers defined on domains with a mix of discrete, continuous, and bandit variables: `class AcquisitionOptimizer(object)`. `class ContextManager(object)`: allows fixing certain variables while optimizing the AF, i.e. what GPyOpt calls context variables.
  - `anchor_points_generator.py`: there are several different generators: `lbfgs`, `DIRECT`, `CMA`.
- `plotting/`
- `testing/`
- `util`: much of what lives here looks unstable. `mcmc_sampler.py`, `duplicate_manager.py`
Remaining questions:

- `self.modular_optimization` seems to appear only in `BayesianOptimization` and `ModularBayesianOptimization`; searching the codebase turned up no other place that uses these two attributes.
- When a `ContextManager` and constraints are present at the same time, the indexing seems to change. Check `optimization.optimizer.OptimizationWithContext()` for the implementation details.
- `anchor_points`?
- How does `DuplicateManager` actually take effect? Does sequential BO need it?
- `AcquisitionLCB`, `AcquisitionOptimizer`, and `ModularBayesianOptimization` all require a `space`; can these conflict with one another?
`ContextManager` records the indices and values of the fixed variables, and then, via `optimization.optimizer.OptimizationWithContext()`, strips the fixed part out of the original objective function's variables; what is actually fed into the AF optimizer contains only the non-context variables. The full input vector can be recovered with `ContextManager._expand_vector()`.
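Conceptually it works like the sketch below (my own minimal re-implementation for illustration, not GPyOpt's code; it only mirrors the role of `_expand_vector`, assuming 4 input dimensions where dimensions 1 and 3 are fixed as context):

```python
import numpy as np

context_idx = np.array([1, 3])       # hypothetical fixed dimensions
context_values = np.array([0.5, -1.0])
noncontext_idx = np.array([0, 2])

def expand_vector(x_nc):
    # Rebuild the full input vector by inserting the fixed context
    # values at their recorded indices (what _expand_vector does).
    x_nc = np.atleast_2d(x_nc)
    x_full = np.zeros((x_nc.shape[0], 4))
    x_full[:, noncontext_idx] = x_nc
    x_full[:, context_idx] = context_values
    return x_full

def f_noncontext(x_nc, f):
    # What OptimizationWithContext effectively hands to the AF optimizer:
    # a function of the non-context variables only.
    return f(expand_vector(x_nc))
```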
The `constraints` parameter set when `GPyOpt.Design_space` is first initialized is passed, together with the `space` parameter, into `GPyOpt.optimization.AcquisitionOptimizer`. It is actually invoked in `GPyOpt.experiment_design.random_design.RandomDesign.get_samples_with_constraints()`, and the configured constraints are ultimately enforced in `GPyOpt.core.task.space.Design_space.indicator_constraints()` (the only such place, according to my search results). At that point `x` is already the vector transformed by `ContextManager`, i.e. with the context variables removed (the key step happens when `GPyOpt.optimization.anchor_points_generator.AnchorPointsGenerator.get()` redefines the internal `space` variable, lines 23–24), so the indices used inside the constraints should refer to this `x`, not to the original GP input variables.
(Why is it designed this way?)
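For reference, this is how constraints enter `Design_space` (a sketch following GPyOpt's documented constraint format; a constraint expression is feasible where it evaluates to <= 0):

```python
import numpy as np
import GPyOpt

space = GPyOpt.Design_space(
    space=[{'name': 'x1', 'type': 'continuous', 'domain': (0, 1)},
           {'name': 'x2', 'type': 'continuous', 'domain': (0, 1)}],
    constraints=[{'name': 'c1', 'constraint': 'x[:,0] + x[:,1] - 1'}])

# indicator_constraints() is where the expressions are finally evaluated:
# it returns 1 for feasible rows and 0 for infeasible ones.
print(space.indicator_constraints(np.array([[0.2, 0.3], [0.8, 0.9]])))
```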
(Why use `RandomDesign` here? Shouldn't the AF be maximized? How are the two related?)
[I am currently using `ObjectiveAnchorPointsGenerator`, which by default uses a random initial design when generating anchor points. In the actual computation it randomly generates 1000 (the default) points in the design space without constraints, then uses the constraints to filter out the infeasible ones, looping until more than 1000 points have been collected. From these 1000 points it finally selects 5 (the default) anchor points, which serve as the initial values for the subsequent local optimization; the minimum reached from these 5 starting points is taken as the final result of optimizing the AF, i.e. the final `self.suggested_sample`.]
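A sketch of this anchor-point scheme (my own re-implementation of the procedure described above; names and signatures are mine, not GPyOpt's; `af` is the already-negated acquisition to be minimized):

```python
import numpy as np
from scipy.optimize import minimize

def suggest_minimum(af, bounds, indicator, n_samples=1000, n_anchors=5, seed=0):
    """Rejection-sample the box until n_samples feasible points are collected,
    keep the n_anchors points with the lowest AF values, run a local optimizer
    from each, and return the best result."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    feasible = np.empty((0, len(bounds)))
    while feasible.shape[0] < n_samples:
        cand = rng.uniform(lo, hi, size=(n_samples, len(bounds)))
        feasible = np.vstack([feasible, cand[indicator(cand)]])
    # Anchor points: the best candidates under the acquisition.
    anchors = feasible[np.argsort(af(feasible).ravel())[:n_anchors]]
    # Multi-start local optimization from each anchor.
    runs = [minimize(lambda x: af(np.atleast_2d(x)).item(), a, bounds=bounds)
            for a in anchors]
    best = min(runs, key=lambda r: r.fun)
    return np.atleast_2d(best.x), best.fun
```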
If `max_iter` is greater than 1, calling `bo.run_optimization()` can also be made to skip updating the GP, but then there is no way to update the context variables at each step; a possible workaround is sketched below.
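A sketch of that workaround, assuming `bopt` is a `BayesianOptimization` instance and the context schedule is known in advance (the variable name is hypothetical):

```python
# Run one BO step at a time so the context value can change between steps;
# X and Y accumulate inside bopt across the repeated calls.
for ctx in [{'var_2': 0.1}, {'var_2': 0.2}, {'var_2': 0.3}]:
    bopt.run_optimization(max_iter=1, context=ctx)
```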
In the end, everything calls `BO.run_optimization()` to do the computation. `run_optimization` does not return any value; all computed values are stored in the corresponding attributes of `BO`:
| attribute | notes |
| --- | --- |
| `self.max_iter`, `self.max_time` | Only bound the BO iteration timer; they do not cut GP training short. If GP training runs past `max_time`, it still continues to completion. |
| `self.Y_best` | Only used for plotting; otherwise useless. |
| `self.X`, `self.Y` | Initialized from the input data; each new `self.Y_new` is appended as the loop runs. |
| `self.fx_opt` | The minimum of `self.Y`. Note this is an objective function value, so if the objective itself is stochastic, this value is stochastic too and differs from the value predicted directly by the GP. |
| `self.x_opt` | The location corresponding to the minimum of `self.Y`. |
| `self.model_update_interval` | Controls how many new samples are collected before the GP is updated. How do we avoid sampling the same place every time? |
| `self.de_duplication` | Can this be combined with `model_update_interval` to prevent sampling the same place? |
| `self.suggested_sample` | The next batch of points, or a single point if the batch size is 1, computed by `self._compute_next_evaluations()`. |
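A usage sketch of the attributes above (assuming `bo` is any constructed `BO` instance):

```python
bo.run_optimization(max_iter=10)
print(bo.x_opt, bo.fx_opt)    # best location and its observed objective value
X_all, Y_all = bo.X, bo.Y     # every evaluated location and evaluation so far
x_next = bo.suggested_sample  # point(s) the acquisition proposed last
```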
Verified: the AF formula is as shown in GPyOpt Tutorial: 2.2 Acquisition Function, but the actual code negates it and performs a minimization. For `AcquisitionLCB`: `Sequential().compute_batch()` in `core/evaluators/sequential.py` calls `self.acquisition.optimize()`, which in turn calls `self.optimizer.optimize(f=self.acquisition_function)`; `AcquisitionBase().acquisition_function()` is where the tutorial's AF expression is negated, so the final problem becomes a minimization.
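A minimal sketch of the sign convention (my own illustration of the relationship, not GPyOpt's code; `mean` and `std` stand in for the GP posterior at a point):

```python
def lcb_acquisition(mean, std, exploration_weight=2.0):
    # The AF as written in the tutorial, meant to be maximized:
    # alpha(x) = -mu(x) + k * sigma(x)
    return -mean + exploration_weight * std

def acquisition_function(mean, std, exploration_weight=2.0):
    # What the optimizer actually receives: the negated AF, so minimizing
    # it is equivalent to maximizing the tutorial's expression, i.e. it
    # minimizes mu(x) - k * sigma(x), the GP lower confidence bound.
    return -lcb_acquisition(mean, std, exploration_weight)
```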
The GPy documentation website is pretty much useless IMO: it is all incomplete API descriptions with no insight into the overall structure.
The tutorials don't cover the full capability of the package, and most of them are old.
However, the `__init__.py` in each module contains many helpful and insightful descriptions and introductions.
Judging from `models/__init__.py`, classes for the different models are organized into separate files. Many files contain only a single class, but the class name differs from the filename.
https://github.com/SheffieldML/GPy/blob/devel/GPy/core/__init__.py
`optimize_restarts`: perform random restarts of the model, and set the model to the best seen solution.
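A minimal sketch of using it on a GPy regression model (the data here is synthetic):

```python
import numpy as np
import GPy

X = np.random.uniform(-3.0, 3.0, (20, 1))
Y = np.sin(X) + np.random.randn(20, 1) * 0.05

model = GPy.models.GPRegression(X, Y)
# Restart hyperparameter optimization 10 times from random initializations
# and keep the best solution found.
model.optimize_restarts(num_restarts=10, verbose=False)
```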