Unconstrained Optimization
ucminfcpp solves unconstrained
nonlinear optimization problems of the form:
minimise f(x), x ∈ ℝⁿ
The underlying algorithm is a quasi-Newton method with BFGS updating
of the inverse Hessian and a soft line search with an adaptive
trust-region radius. No constraints on x are imposed; if
you need bounds, consider wrapping the problem with a barrier or penalty
function.
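As a minimal sketch of the penalty idea, a lower bound x > 0 can be enforced with a logarithmic barrier. The objective and the barrier weight mu below are illustrative choices, not part of the ucminfcpp API; note that finite-difference gradients can misbehave very close to the boundary, so the barrier term should keep iterates strictly feasible.

f <- function(x) sum((x - 2)^2)          # illustrative unconstrained objective
barrier_f <- function(x, mu = 1e-4) {
  if (any(x <= 0)) return(Inf)           # reject infeasible points outright
  f(x) - mu * sum(log(x))                # barrier grows as x approaches 0
}
# Start from a strictly feasible point
res <- ucminfcpp::ucminf(c(1, 1), barrier_f)
# res$par should be close to c(2, 2), where the bound is inactive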
Example — Himmelblau’s Function
Himmelblau’s function has four local minima. Starting from different initial points illustrates how the optimizer converges to the nearest minimum.
himmelblau <- function(x) {
(x[1]^2 + x[2] - 11)^2 + (x[1] + x[2]^2 - 7)^2
}
# Four different starting points → four different minima
starts <- list(c(3, 2), c(-2.805, 3.131), c(-3.779, -3.283), c(3.584, -1.848))
for (s in starts) {
res <- ucminfcpp::ucminf(s, himmelblau)
cat(sprintf("start = (%.2f, %.2f) -> par = (%.4f, %.4f), f = %.2e\n",
s[1], s[2], res$par[1], res$par[2], res$value))
}
#> start = (3.00, 2.00) -> par = (3.0000, 2.0000), f = 0.00e+00
#> start = (-2.81, 3.13) -> par = (-2.8051, 3.1313), f = 5.97e-11
#> start = (-3.78, -3.28) -> par = (-3.7793, -3.2832), f = 1.26e-10
#> start = (3.58, -1.85) -> par = (3.5844, -1.8481), f = 7.80e-12
Flexibility in Multiple Dimensions
ucminfcpp handles objective functions of arbitrary
dimension n. The only requirement is that the function
accepts a numeric vector of length n and returns a
scalar.
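For instance, a sketch with a 50-dimensional separable quadratic (the dimension and the quadratic form are arbitrary choices for illustration):

n <- 50
quad <- function(x) sum((x - seq_len(n))^2)   # minimum at x = (1, 2, ..., n)
res <- ucminfcpp::ucminf(rep(0, n), quad)
max(abs(res$par - seq_len(n)))                # should be near zero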
Handling Edge Cases
Supplying an Analytical Gradient
Providing an exact gradient via the gr argument typically speeds
up convergence and improves accuracy, because the finite-difference
approximation of the gradient is skipped.
banana <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
banana_grad <- function(x) {
c(-400 * x[1] * (x[2] - x[1]^2) - 2 * (1 - x[1]),
200 * (x[2] - x[1]^2))
}
res_g <- ucminfcpp::ucminf(c(-1.2, 1), banana, gr = banana_grad)
cat("par:", res_g$par, "\n")
#> par: 1 1
cat("convergence:", res_g$convergence, "\n")
#> convergence: 1
Already-at-Minimum Starting Point
Starting exactly at the minimum should return immediately without degrading the result.
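A minimal sketch of this case, reusing the Rosenbrock function from the previous section (its minimum is at (1, 1) with value 0):

banana <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
res0 <- ucminfcpp::ucminf(c(1, 1), banana)  # start exactly at the minimum
res0$par    # stays at (1, 1)
res0$value  # 0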
Controlling the Optimizer
All tuning parameters of the original ucminf are
supported:
| Parameter | Default | Description |
|---|---|---|
| control$maxeval | 500 | Maximum number of function evaluations |
| control$trace | 0 | Verbosity level (0 = silent) |
| control$eps | sqrt(.Machine$double.eps) | Gradient convergence tolerance |
| control$stepmax | 1 | Maximum step length |
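A sketch of passing these parameters, assuming the control list is supplied via a control argument as in the original ucminf interface (the specific tolerance values are illustrative):

banana <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
res_c <- ucminfcpp::ucminf(c(-1.2, 1), banana,
                           control = list(maxeval = 200,  # cap evaluations
                                          trace   = 0,    # run silently
                                          eps     = 1e-10))
res_c$par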