AbstractBayesOpt Tutorial: 1D Bayesian Optimisation
Setup
Loading the necessary packages.
using AbstractBayesOpt
using AbstractGPs
using ForwardDiff
using Plots
Define the objective function
We will optimise a simple 1D function: $f(x) = (x-2)^2 + \sin(3x)$
f(x) = (x - 2)^2 + sin(3x)
d = 1
domain = ContinuousDomain([0.0], [5.0])
Standard GPs
We'll use a standard Gaussian Process surrogate with a Matérn 5/2 kernel, adding a small jitter term of $10^{-12}$ for numerical stability.
noise_var = 1e-12
surrogate = StandardGP(Matern52Kernel(), noise_var)
StandardGP{Float64}(AbstractGPs.GP{AbstractGPs.ZeroMean{Float64}, KernelFunctions.ScaledKernel{KernelFunctions.TransformedKernel{KernelFunctions.Matern52Kernel{Distances.Euclidean}, KernelFunctions.ScaleTransform{Float64}}, Float64}}(AbstractGPs.ZeroMean{Float64}(), Matern 5/2 Kernel (metric = Distances.Euclidean(0.0))
- Scale Transform (s = 1.0)
- σ² = 1.0), 1.0e-12, nothing)
Generate uniform random samples x_train and evaluate the function at these points to get y_train.
n_train = 5
x_train = first.([
domain.lower .+ (domain.upper .- domain.lower) .* rand(d) for _ in 1:n_train
])
y_train = f.(x_train)
5-element Vector{Float64}:
1.2995813599341142
0.5179286028794444
0.9192097626323938
1.3961489787170538
1.2427322328731445
Choose an acquisition function
We'll use the Expected Improvement acquisition function with an exploration parameter ξ = 0.0.
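For context, the standard closed form of Expected Improvement for minimisation, given posterior mean $\mu(x)$, posterior standard deviation $\sigma(x)$, and incumbent best value $f^*$, is

$$\mathrm{EI}(x) = \left(f^* - \mu(x) - \xi\right)\Phi(z) + \sigma(x)\,\phi(z),
\qquad z = \frac{f^* - \mu(x) - \xi}{\sigma(x)},$$

where $\Phi$ and $\phi$ are the standard normal CDF and PDF. With $\xi = 0$ this is the classical EI criterion; larger $\xi$ biases the search towards exploration.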
ξ = 0.0
acq = ExpectedImprovement(ξ, minimum(y_train))
ExpectedImprovement{Float64}(0.0, 0.5179286028794444)
Set up the Bayesian Optimisation structure
We use BOStruct to bundle all components needed for the optimisation. Here, we set the number of iterations to 30 and the actual noise level to 0.0 (since our function is noiseless). We then call the optimize function to run the Bayesian optimisation.
bo_struct = BOStruct(
f,
acq,
surrogate,
domain,
x_train,
y_train,
30, # number of iterations
0.0, # Actual noise level (0.0 for noiseless)
)
@info "Starting Bayesian Optimisation..."
result, acq_list, standard_params = AbstractBayesOpt.optimize(
bo_struct; standardize="mean_only"
);
[ Info: Starting Bayesian Optimisation...
[ Info: Standardization choice: mean_only
[ Info: Standardization parameters: μ=1.07512018740723, σ=1.0
[ Info: Optimizing GP hyperparameters at iteration 1...
[ Info: New parameters: ℓ=[0.22516947030220216], variance =[0.13277368687269117]
[ Info: Iteration #1, current min val: 0.5179286028794444
[ Info: Acquisition optimized, new candidate point: 2.1447239811274255
[ Info: Iteration #2, current min val: 0.17135864763501524
[ Info: Acquisition optimized, new candidate point: 2.070038265929898
[ Info: Iteration #3, current min val: -0.06810014383195093
[ Info: Acquisition optimized, new candidate point: 2.009076208943551
[ Info: Iteration #4, current min val: -0.25308865995155355
[ Info: Acquisition optimized, new candidate point: 1.95616658274229
[ Info: Iteration #5, current min val: -0.40098079185534324
[ Info: Acquisition optimized, new candidate point: 1.9084767601010815
[ Info: Iteration #6, current min val: -0.5209062925606387
[ Info: Acquisition optimized, new candidate point: 1.865351543238708
[ Info: Iteration #7, current min val: -0.6161914044709941
[ Info: Acquisition optimized, new candidate point: 1.8260540202000524
[ Info: Iteration #8, current min val: -0.690589551381142
[ Info: Acquisition optimized, new candidate point: 1.790155792149061
[ Info: Iteration #9, current min val: -0.7471343484135283
[ Info: Acquisition optimized, new candidate point: 1.7571539643347265
[ Info: Iteration #10, current min val: -0.7887730058212771
[ Info: Acquisition optimized, new candidate point: 1.7263620238904762
[ Info: Optimizing GP hyperparameters at iteration 11...
[ Info: New parameters: ℓ=[1.5939169131914783], variance =[4.109412748428666]
[ Info: Iteration #11, current min val: -0.8181815224177434
[ Info: Acquisition optimized, new candidate point: 5.486350529347019e-12
[ Info: Iteration #12, current min val: -0.8181815224177434
[ Info: Acquisition optimized, new candidate point: 1.6392624967795932
[ Info: Iteration #13, current min val: -0.848848237009985
[ Info: Acquisition optimized, new candidate point: 4.999999999993152
[ Info: Iteration #14, current min val: -0.848848237009985
[ Info: Acquisition optimized, new candidate point: 1.6491994637112544
[ Info: Iteration #15, current min val: -0.8494045439247468
[ Info: Acquisition optimized, new candidate point: 1.6499495420948302
[ Info: Iteration #16, current min val: -0.8494045439247468
[ Info: Acquisition optimized, new candidate point: 1.6491609417992426
[ Info: Iteration #17, current min val: -0.8494045439247468
[ Info: Acquisition optimized, new candidate point: 1.6494385152435431
[ Info: Iteration #18, current min val: -0.8494048250643402
[ Info: Acquisition optimized, new candidate point: 1.12525
[ Info: Iteration #19, current min val: -0.8494048250643402
[ Info: Acquisition optimized, new candidate point: 1.6500129640528018
[ Info: Iteration #20, current min val: -0.8494048250643402
[ Info: Acquisition optimized, new candidate point: 1.6487535664194268
[ Info: Optimizing GP hyperparameters at iteration 21...
[ Info: New parameters: ℓ=[3.3581327760365767], variance =[107.64704172007815]
[ Info: Iteration #21, current min val: -0.8494048250643402
[ Info: Acquisition optimized, new candidate point: 1.65075
[ Info: Iteration #22, current min val: -0.8494048250643402
[ Info: Acquisition optimized, new candidate point: 1.50625
[ Info: Iteration #23, current min val: -0.8494048250643402
[ Info: Acquisition optimized, new candidate point: 4.23375
[ Info: Iteration #24, current min val: -0.8494048250643402
[ Info: Acquisition optimized, new candidate point: 1.64775
[ Info: Iteration #25, current min val: -0.8494048250643402
[ Info: Acquisition optimized, new candidate point: 0.52925
[ Info: Iteration #26, current min val: -0.8494048250643402
[ Info: Acquisition optimized, new candidate point: 1.6512499999999999
[ Info: Iteration #27, current min val: -0.8494048250643402
[ Info: Acquisition optimized, new candidate point: 1.6517500000000003
[ Info: Iteration #28, current min val: -0.8494048250643402
[ Info: Acquisition optimized, new candidate point: 1.5987500000000001
[ Info: Iteration #29, current min val: -0.8494048250643402
[ Info: Acquisition optimized, new candidate point: 1.6637499999999998
[ Info: Iteration #30, current min val: -0.8494048250643402
[ Info: Acquisition optimized, new candidate point: 1.6467500000000002
Results
The result is stored in result. We can print the best found input and its corresponding function value.
Optimal point: 1.6494385152435431
Optimal value: -0.8494048250643402
Plotting of running minimum over iterations
The running minimum is the best function value found up to each iteration.
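As a sketch, the running minimum can be computed with Base Julia's `accumulate`; the `ys` vector here is a hypothetical stand-in for the observed function values in evaluation order:

```julia
# Hypothetical observed values, in the order they were evaluated
ys = [1.30, 0.52, 0.92, -0.25, 0.10]

# running_min[i] is the smallest value seen up to evaluation i
running_min = accumulate(min, ys)   # [1.30, 0.52, 0.52, -0.25, -0.25]
```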
Gradient-enhanced GPs
Now, let's see how to use gradient information to improve the optimisation. We'll use the same function but now also provide its gradient. We define a new surrogate model that can handle gradient information, specifically a GradientGP.
grad_surrogate = GradientGP(ApproxMatern52Kernel(), d + 1, noise_var)
ξ = 0.0
acq = ExpectedImprovement(ξ, minimum(y_train))
∂f(x) = ForwardDiff.derivative(f, x)
f_∂f(x) = [f(x); ∂f(x)];
Generate value and gradients at random samples
y_train_grad = f_∂f.(x_train)
5-element Vector{Vector{Float64}}:
[1.2995813599341142, -0.7061930870502873]
[0.5179286028794444, 3.1755096265129925]
[0.9192097626323938, 2.690325882093927]
[1.3961489787170538, 1.7301132677036684]
[1.2427322328731445, 0.39211734597424464]
Set up the Bayesian Optimisation structure
bo_struct_grad = BOStruct(
f_∂f,
acq,
grad_surrogate,
domain,
x_train,
y_train_grad,
10, # number of iterations
0.0, # Actual noise level (0.0 for noiseless)
)
result_grad, acq_list_grad, standard_params_grad = AbstractBayesOpt.optimize(
bo_struct_grad; standardize="mean_only"
);
[ Info: Starting Bayesian Optimisation...
[ Info: Standardization choice: mean_only
[ Info: Standardization parameters: μ=[1.07512018740723, 0.0], σ=[1.0, 1.0]
[ Info: Optimizing GP hyperparameters at iteration 1...
[ Info: New parameters: ℓ=[1.8322836677992136], variance =[10.029464330564549]
[ Info: Iteration #1, current min val: 0.5179286028794444
[ Info: Acquisition optimized, new candidate point: 0.865128194222639
[ Info: Iteration #2, current min val: 0.5179286028794444
[ Info: Acquisition optimized, new candidate point: 1.6818325674918924
[ Info: Iteration #3, current min val: -0.8437998989760168
[ Info: Acquisition optimized, new candidate point: 1.6503849829241644
[ Info: Iteration #4, current min val: -0.8493999075747939
[ Info: Acquisition optimized, new candidate point: 4.999999999991542
[ Info: Iteration #5, current min val: -0.8493999075747939
[ Info: Acquisition optimized, new candidate point: 1.6494305816519332
[ Info: Iteration #6, current min val: -0.8494048255906168
[ Info: Acquisition optimized, new candidate point: 0.00025
[ Info: Iteration #7, current min val: -0.8494048255906168
[ Info: Acquisition optimized, new candidate point: 1.50825
[ Info: Iteration #8, current min val: -0.8494048255906168
[ Info: Acquisition optimized, new candidate point: 1.83225
[ Info: Iteration #9, current min val: -0.8494048255906168
[ Info: Acquisition optimized, new candidate point: 1.6247500000000001
[ Info: Iteration #10, current min val: -0.8494048255906168
[ Info: Acquisition optimized, new candidate point: 4.12875
Results
The result is stored in result_grad. We can print the best found input and its corresponding function value.
Optimal point (GradBO): 1.6494305816519332
Optimal value (GradBO): -0.8494048255906168
Plotting of running minimum over iterations
The running minimum is the best function value found up to each iteration. Since each evaluation provides both a function value and a 1D gradient, we duplicate the running minimum values to reflect the number of function evaluations.
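A sketch of this bookkeeping, with `ys` a hypothetical stand-in for the observed function values: each call to `f_∂f` returns `d + 1 = 2` scalars (value plus derivative), so each running-minimum entry is repeated twice to put both curves on a common per-observation axis:

```julia
ys = [0.52, 1.10, -0.84, -0.85]          # hypothetical function values (first entry of each f_∂f output)
running_min = accumulate(min, ys)        # best value after each evaluation
per_obs = repeat(running_min, inner=2)   # one entry per scalar observation (value + derivative)
```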
Plotting the surrogate model
We can visualise the surrogate model's mean and uncertainty along with the true function and the evaluated points.
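A minimal Plots.jl sketch of such a figure, assuming the `plot_domain`, `post_mean`, and `post_var` variables computed in the preparation code that follows, might look like:

```julia
# Sketch only: relies on plot_domain, post_mean, post_var from the block below
plot(plot_domain, f.(plot_domain); label="f(x)", color=:black)
plot!(plot_domain, post_mean; ribbon=2 .* sqrt.(post_var), label="GP mean ± 2σ")
scatter!(x_train, y_train; label="initial points")
```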
plot_domain = collect(domain.lower[1]:0.01:domain.upper[1])
plot_x = map(x -> [x], plot_domain)
plot_x = prep_input(grad_surrogate, plot_x)
post_mean, post_var = unstandardized_mean_and_var(
result_grad.model, plot_x, standard_params_grad
)
post_mean = reshape(post_mean, :, d + 1)[:, 1]
post_var = reshape(post_var, :, d + 1)[:, 1]
post_var[post_var .< 0] .= 0 # clamp small negative variances caused by numerical error
This page was generated using Literate.jl.