Simple Case:
```python
import optimtool as oo
import sympy as sp

x1, x2, x3, x4 = sp.symbols("x1 x2 x3 x4")  # declare symbolic variables
f = (x1 - 1)**2 + (x2 - 1)**2 + (x3 - 1)**2 + (x1**2 + x2**2 + x3**2 + x4**2 - 0.25)**2
oo.unconstrain.gradient_descent.barzilar_borwein(f, [x1, x2, x3, x4], (1, 2, 3, 4))  # funcs, args, x_0
```
Bugs Fixed:
- update `_kernel` to provide more concise and comprehensive kernel selectors: `kernel`, `linear_search`, `nonmonotonic_search`.
- add a variable to set the unconstrained break precision for constrained optimization methods, such as `penalty_quadratic`.
```python
import optimtool.constrain as oc
import sympy as sp

x1, x2 = sp.symbols("x1 x2")
f = (x1 - 2)**2 + (x2 - 1)**2
c1 = x1 - x2 - 1
c2 = 0.25*x1**2 - x2 - 1
oc.mixequal.penalty_L1(f, (x1, x2), c1, c2, (1.5, 0.5), epsk=1e-4)  # use `epsk` to set the break epsilon of `kernel`
```
Introduction to hybrid (will be uploaded to optimtool in v2.5.0):
```python
delta = x_0 - tk * gradient  # gradient = f(x).jacobian; g(x) is not differentiable
```
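The gradient step above can be made concrete with sympy's `jacobian`, matching the `f(x).jacobian` note; a minimal sketch where the choice of `f`, `x_0`, and `tk` are illustrative assumptions:

```python
import numpy as np
import sympy as sp

x1, x2 = sp.symbols("x1 x2")
f = (x1 - 2)**2 + (x2 - 1)**2                 # smooth part of the objective (example choice)
grad = sp.Matrix([f]).jacobian([x1, x2])      # symbolic gradient via jacobian
grad_f = sp.lambdify((x1, x2), grad, "numpy")

tk = 0.1                                      # step size (assumed)
x_0 = np.array([0.0, 0.0])
delta = x_0 - tk * np.asarray(grad_f(*x_0)).ravel()  # gradient step on the smooth part
```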
Proximity operators and iteration:
```python
x_0 = np.sign(delta) * np.maximum(np.abs(delta) - tk, 0)  # g(x) = ||x||_1
```
```python
norm = np.linalg.norm(delta)
x_0 = (1 - tk / norm) * delta if norm > tk else 0  # g(x) = ||x||_2
```
```python
x_0 = (delta + np.sqrt(delta**2 + 4 * tk)) / 2  # g(x) = -\sum_{i=1}^{n} ln(x_i), n = len(x_0)
```
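Combining the gradient step with the L1 proximity operator gives a complete proximal-gradient (ISTA) loop. A self-contained sketch on a small least-squares problem; `A`, `b`, `lam`, the step size, and the iteration count are illustrative assumptions, not optimtool's API:

```python
import numpy as np

# min_x 0.5 * ||A @ x - b||^2 + lam * ||x||_1, solved by proximal gradient (ISTA)
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
b = A @ x_true
lam = 0.1
tk = 1.0 / np.linalg.norm(A, 2) ** 2  # step 1/L, L = Lipschitz constant of the smooth gradient

x_0 = np.zeros(5)
for _ in range(500):
    delta = x_0 - tk * A.T @ (A @ x_0 - b)                          # gradient step (smooth part)
    x_0 = np.sign(delta) * np.maximum(np.abs(delta) - tk * lam, 0)  # L1 proximity operator
```

With a moderate `lam` the recovered `x_0` lands close to `x_true`, with the usual small shrinkage bias from the L1 term.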
References:
- [tests](https://github.com/linjing-lab/optimtool/tree/v1.8/tests)
- [examples/doc](https://github.com/linjing-lab/optimtool/tree/v1.8/examples/doc)