Learn GPyOpt


GPyOpt.core.task.SingleObjective: the objective function should take 2-dimensional numpy arrays as both inputs and outputs. Each row contains a location (for the inputs) or a function evaluation (for the outputs).
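A minimal sketch of an objective in this form; the quadratic function is just an illustrative placeholder:

    import numpy as np
    import GPyOpt

    def f(x):
        # x has shape (n_points, n_dims); return shape (n_points, 1),
        # one function evaluation per row.
        return np.sum(np.square(x), axis=1, keepdims=True)

    objective = GPyOpt.core.task.SingleObjective(f)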

It seems that in GPyOpt.methods.ModularBayesianOptimization(model, feasible_region, objective, acquisition, evaluator, initial_design), only evaluator is actually used for the BO calculation, while acquisition is only used for printing information or plotting the acquisition function.
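A sketch of assembling the modular pipeline following that signature; the 1-d objective and space definition here are hypothetical placeholders:

    import GPyOpt

    # Hypothetical 1-d objective and design space for illustration.
    f = lambda x: (x ** 2).sum(axis=1, keepdims=True)
    space = GPyOpt.Design_space(space=[{'name': 'var_1', 'type': 'continuous', 'domain': (-1, 1)}])

    objective = GPyOpt.core.task.SingleObjective(f)
    model = GPyOpt.models.GPModel(optimize_restarts=5, verbose=False)
    acq_optimizer = GPyOpt.optimization.AcquisitionOptimizer(space)
    initial_design = GPyOpt.experiment_design.initial_design('random', space, 5)
    acquisition = GPyOpt.acquisitions.AcquisitionEI(model, space, optimizer=acq_optimizer)
    evaluator = GPyOpt.core.evaluators.Sequential(acquisition)

    bo = GPyOpt.methods.ModularBayesianOptimization(model, space, objective,
                                                    acquisition, evaluator, initial_design)
    bo.run_optimization(max_iter=10)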

https://github.com/SheffieldML/GPyOpt/blob/e3f31301a11c382789c6de1066bf0437b4f1660b/manual/GPyOpt_context.ipynb

In this notebook we are going to see how to use GPyOpt to solve optimization problems in which certain variables are fixed during the optimization phase. These are called context variables. For details see:
Krause, A. & Ong, C. S. Contextual Gaussian process bandit optimization. Advances in Neural Information Processing Systems (NIPS), 2011, 2447–2455
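A sketch of using context variables through the high-level interface; the variable names and domains are illustrative:

    import GPyOpt

    # Hypothetical 2-d problem; var_1 will be held fixed (a context variable).
    domain = [{'name': 'var_1', 'type': 'continuous', 'domain': (0, 1)},
              {'name': 'var_2', 'type': 'continuous', 'domain': (0, 1)}]
    f = lambda x: (x ** 2).sum(axis=1, keepdims=True)

    bo = GPyOpt.methods.BayesianOptimization(f=f, domain=domain)
    # Only var_2 is optimized during these iterations; var_1 stays at 0.3.
    bo.run_optimization(max_iter=5, context={'var_1': 0.3})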

Notes on the Code Structure

  • acquisitions/: EI(_MCMC), ES, LCB(_MCMC), LP, MPI(_MCMC)
  • core/:
    • evaluators/: the evaluator of the function; two main kinds: EvaluatorBase and SamplingBasedBatchEvaluator
      • Batch calculation here is only implemented for SamplingBasedBatchEvaluator.
      • GPyOpt.core.evaluators.sequential.Sequential does not implement batch calculation, since it only needs to compute the next sample one step at a time.
    • task/:
      • cost.py: the cost of evaluating the function. By default no cost is used.
      • objective.py
      • space.py:
      • variables.py:
    • bo: Runner of the Bayesian optimization loop. This class wraps the optimization loop around the different handlers.
      • This is the most important class.
      • A parameter useful for my current work: model_update_interval. It can directly replace the few flags I patched in myself; I need to check whether its implementation matches the logic I need (see the sketch after this list).
    • errors:
  • experiment_design/: seemingly for optimization over discrete variables?
  • interface/: what is this?
  • methods/:
    • bayesian_optimization.py, class BayesianOptimization(BO): Main class to initialize a Bayesian Optimization method.
      • How does this differ from calling BO directly myself?
      • This is actually meant to consolidate all functionality into a single unified API that is used directly as an optimizer; self.modular_optimization = False.
    • modular_bayesian_optimization.py, class ModularBayesianOptimization(BO): Modular Bayesian optimization. This class wraps the optimization loop around the different handlers.
      • Internally, apart from adding the two attributes self.initial_iter = True and self.modular_optimization = True, it simply initializes BO directly.
      • Where are these two attributes used? (Nowhere, as far as I can find.)
  • models/:
    • GPModel(BOModel), InputWarpedGPModel(GPModel): models based on GPy
    • RFModel(BOModel), WarpedGPModel(BOModel) (the latter is unreliable): based on sklearn.ensemble.RandomForestRegressor
  • objective_examples/
  • optimization/:
    • optimizer.py: wrappers around several different local optimizers
      • OptimizationWithContext is defined here and used only here.
    • acquisition_optimizer.py: General class for acquisition optimizers defined on domains with a mix of discrete, continuous, and bandit variables
      • class AcquisitionOptimizer(object)
      • class ContextManager(object): allows fixing variables while optimizing the AF, i.e., GPyOpt's so-called context variables.
        • How exactly is this implemented? It seems to affect the indexing.
    • anchor_points_generator.py: several different generators: lbfgs, DIRECT, CMA
  • plotting/
  • testing/
  • util: much of what is here looks unstable
    • mcmc_sampler.py
    • duplicate_manager.py
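A sketch of what model_update_interval looks like in use, assuming the high-level BayesianOptimization interface; the objective and domain are hypothetical:

    import GPyOpt

    # Hypothetical objective and 1-d domain.
    my_objective = lambda x: (x ** 2).sum(axis=1, keepdims=True)
    domain = [{'name': 'var_1', 'type': 'continuous', 'domain': (-1, 1)}]

    # Re-fit the GP hyperparameters only every 5 new samples instead of every step.
    bo = GPyOpt.methods.BayesianOptimization(f=my_objective, domain=domain,
                                             model_update_interval=5)
    bo.run_optimization(max_iter=20)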

Questions

  1. self.modular_optimization seems to appear only in BayesianOptimization and ModularBayesianOptimization; a search turned up no other place where these two attributes are used.
  2. When ContextManager and constraints are present at the same time, the indexing seems to change. Check optimization.optimizer.OptimizationWithContext() for the implementation details.
  3. What are the so-called anchor_points?
  4. How does DuplicateManager take effect? Is it needed in sequential BO?
  5. AcquisitionLCB, AcquisitionOptimizer, and ModularBayesianOptimization all require a space argument; do these conflict?
    • See the notes below: in fact, the constraints only take effect in Design_space.indicator_constraints().

Detailed Notes

ContextManager records the indices and values of the fixed variables; optimization.optimizer.OptimizationWithContext() then strips the fixed part out of the original objective function's input, so what is actually fed to the AF optimizer contains only the non-context variables. The full input vector can be recovered via ContextManager._expand_vector().
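A sketch of that behavior, assuming a 3-d continuous space; the variable names are hypothetical:

    import numpy as np
    from GPyOpt.core.task.space import Design_space
    from GPyOpt.optimization.acquisition_optimizer import ContextManager

    # Hypothetical 3-d continuous design space.
    space = Design_space([{'name': 'var_%d' % i, 'type': 'continuous', 'domain': (0, 1)}
                          for i in (1, 2, 3)])
    cm = ContextManager(space, context={'var_2': 0.7})

    # The AF optimizer only sees the two non-context dimensions.
    x_nocontext = np.array([[0.1, 0.9]])        # values for var_1 and var_3
    x_full = cm._expand_vector(x_nocontext)     # -> array([[0.1, 0.7, 0.9]])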

The constraints argument set when initially constructing GPyOpt.Design_space travels with the space argument into GPyOpt.optimization.AcquisitionOptimizer. It is actually invoked in GPyOpt.experiment_design.random_design.RandomDesign.get_samples_with_constraints(), and the configured constraints are ultimately evaluated in GPyOpt.core.task.space.Design_space.indicator_constraints() (the only call site, per my search). At that point x has already been transformed by the ContextManager and no longer contains the context variables (the key step happens where GPyOpt.optimization.anchor_points_generator.AnchorPointsGenerator.get() redefines its internal space variable, lines 23–24), so the indices in the constraints should refer to this x, not to the original GP input variables.
(Why is it designed this way?)

(Why is RandomDesign used here? Shouldn't the AF be maximized? How are the two related?)
[I am currently using ObjectiveAnchorPointsGenerator, which by default uses a random initial design to generate anchor points. In the actual computation it randomly draws 1000 (default) points in the design space without constraints, then uses the constraints to discard the points that violate them, looping until more than 1000 points have accumulated. From these it picks 5 (default) anchor points as the initial values for the subsequent local optimization, and the smallest minimum reached from these 5 starting points is taken as the final result of optimizing the AF, i.e., the final self.suggested_sample.]
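An illustrative sketch of this procedure (not GPyOpt's actual code); sample_random, satisfies_constraints, and local_opt are hypothetical helpers standing in for the random design, the indicator constraints, and the wrapped local optimizer:

    import numpy as np

    def suggest_sample(acq, sample_random, satisfies_constraints, local_opt,
                       num_samples=1000, num_anchor=5):
        # Keep drawing random points until enough of them pass the constraints.
        candidates = np.empty((0, sample_random(1).shape[1]))
        while candidates.shape[0] < num_samples:
            batch = sample_random(num_samples)
            candidates = np.vstack([candidates, batch[satisfies_constraints(batch)]])

        # Rank candidates by acquisition value; keep the best as anchor points.
        scores = acq(candidates).ravel()
        anchors = candidates[np.argsort(scores)[:num_anchor]]

        # Run a local optimizer from each anchor; return the best minimum found.
        results = [local_opt(acq, x0) for x0 in anchors]   # each -> (x_min, f_min)
        return min(results, key=lambda r: r[1])[0]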

If max_iter is greater than 1, calling bo.run_optimization() can also be made to skip updating the GP, but there is no way to update the context variables at each step.

In the end BO.run_optimization() is always what does the computation. run_optimization returns nothing; all computed values are stored in the corresponding attributes of BO.

  • self.max_iter, self.max_time only bound the timing of the BO iterations; they do not cut GP training short. If GP training runs past max_time, it still continues until it finishes.
  • self.Y_best is used for plotting only; otherwise not useful.
  • self.X, self.Y are initialized from the input data and keep appending each new self.Y_new.
  • self.fx_opt is the minimum of self.Y. Note that this is an objective function value, so if the objective itself is stochastic, this value is stochastic too and differs from the value computed directly from the GP.
  • self.x_opt is the location corresponding to the minimum of self.Y.
  • self.model_update_interval controls after how many new samples the GP is updated. How do we avoid repeatedly sampling the same place?
  • self.de_duplication: can this be combined with model_update_interval to prevent sampling the same place?
  • self.suggested_sample: the next batch of points, or a single point if the batch size is 1, computed by self._compute_next_evaluations().
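A sketch of reading these attributes after a run, assuming bo is a BO instance like the ones sketched above:

    bo.run_optimization(max_iter=20)

    print(bo.x_opt, bo.fx_opt)      # best location and objective value seen so far
    print(bo.X.shape, bo.Y.shape)   # all evaluated locations/values, one row per sample
    print(bo.suggested_sample)      # the next point(s) proposed by the AF optimization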

Different Acquisition Functions

Verified: the AF formulas are as shown in GPyOpt Tutorial: 2.2 Acquisition Function, but in the actual code they are negated and then minimized:

  • The tutorial describes UCB, but the class in the code is actually named AcquisitionLCB.
  • The expressions in both places are exactly the same and are written as a maximization.
  • Ultimately, in the code:
    • Sequential().compute_batch() in core/evaluators/sequential.py calls self.acquisition.optimize()
    • which in turn hands the optimizer the function actually to be minimized: self.optimizer.optimize(f=self.acquisition_function)
    • AcquisitionBase().acquisition_function() negates the AF expression from the tutorial, so the problem finally becomes a minimization.
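A minimal sketch of the sign convention (not the library's exact code), assuming a model exposing predict(x) -> (mean, std); kappa stands in for the exploration weight:

    def lcb_acquisition(model, x, kappa=2.0):
        # The AF as written in the tutorial: the quantity BO wants to MAXIMIZE.
        m, s = model.predict(x)
        return -m + kappa * s

    def acquisition_function(model, x, kappa=2.0):
        # What is actually handed to the optimizer: the negated AF,
        # so maximizing the AF becomes a minimization problem.
        return -lcb_acquisition(model, x, kappa)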

Learn GPy

GPy's documentation website is pretty much useless IMO: it is all incomplete API descriptions, with no insight into the overall structure.

The tutorials don't cover the full capability of the package, and most of them are old.

However, the __init__.py in each module contains many helpful and insightful descriptions and introductions.

Judging from models/__init__.py, the classes for different models are organized into separate files. Many files contain only a single class, and the class name differs from the filename.

https://github.com/SheffieldML/GPy/blob/devel/GPy/core/__init__.py

optimize_restarts: Perform random restarts of the model, and set the model to the best seen solution.
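A sketch of optimize_restarts on a toy regression, with made-up data:

    import numpy as np
    import GPy

    # Toy 1-d regression data.
    X = np.random.rand(20, 1)
    Y = np.sin(6 * X) + 0.1 * np.random.randn(20, 1)

    m = GPy.models.GPRegression(X, Y)
    # Restart hyperparameter optimization from several random initializations
    # and keep the best solution found.
    m.optimize_restarts(num_restarts=10, verbose=False)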

Development Branch

Learn Paramz
