Nonsmooth optimization is useful when the loss includes an absolute-value term, since the objective is then not differentiable everywhere.
In sparse learning, L1 regularization leads to a nonsmooth optimization problem that is often solved with proximal gradient methods.
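As a minimal sketch of the proximal gradient idea applied to an L1-regularized least-squares (lasso) objective: the method alternates a gradient step on the smooth term with the proximal operator of the L1 penalty, which is the soft-thresholding map. The problem setup (random `A`, sparse `x_true`, the `lam` value, and the function names) is illustrative, not taken from any particular library.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: shrinks each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iters=500):
    # Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    # Step size 1/L, where L = ||A||_2^2 is the Lipschitz constant of the
    # gradient of the smooth term.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)                  # gradient of the smooth term
        x = soft_threshold(x - grad / L, lam / L)  # prox step handles the L1 term
    return x

# Illustrative sparse-recovery setup (hypothetical data).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]                     # sparse ground truth
b = A @ x_true
x_hat = ista(A, b, lam=0.1)
# Soft-thresholding drives most coefficients exactly to zero.
```

The key design point is that the nonsmooth L1 term never needs a gradient: its proximal operator has the closed form above, so each iteration costs little more than a plain gradient step.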