Yazar "Nocedal, Jorge" seçeneğine göre listele
Now showing 1 - 2 of 2
Item: A family of second-order methods for convex ℓ1-regularized optimization (Springer Heidelberg, 2016) Byrd, Richard H.; Chin, Gillian M.; Nocedal, Jorge; Oztoprak, Figen
This paper is concerned with the minimization of an objective that is the sum of a convex function f and an ℓ1 regularization term. Our interest is in active-set methods that incorporate second-order information about the function f to accelerate convergence. We describe a semismooth Newton framework that can be used to generate a variety of second-order methods, including block active-set methods, orthant-based methods, and a second-order iterative soft-thresholding method. The paper proposes a new active-set method that performs multiple changes in the active manifold estimate at every iteration and employs a mechanism for correcting these estimates when needed. This corrective mechanism is also evaluated in an orthant-based method. Numerical tests comparing the performance of three active-set methods are presented.

Item: An inexact successive quadratic approximation method for L-1 regularized optimization (Springer Heidelberg, 2016) Byrd, Richard H.; Nocedal, Jorge; Oztoprak, Figen
We study a Newton-like method for the minimization of an objective function that is the sum of a smooth function and an ℓ1 regularization term. This method, which is sometimes referred to in the literature as a proximal Newton method, computes a step by minimizing a piecewise quadratic model of the objective function. In order to make this approach efficient in practice, it is imperative to perform this inner minimization inexactly. In this paper, we give inexactness conditions that guarantee global convergence and that can be used to control the local rate of convergence of the iteration. Our inexactness conditions are based on a semi-smooth function that represents a (continuous) measure of the optimality conditions of the problem and that embodies the soft-thresholding iteration. We give careful consideration to the algorithm employed for the inner minimization and report numerical results on two test sets originating in machine learning.
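Both abstracts build on the soft-thresholding operator, and the first item names a second-order iterative soft-thresholding method. As background, here is a minimal Python sketch of soft-thresholding (the proximal map of the ℓ1 norm) and one first-order iterative soft-thresholding (ISTA) step; the function names and the toy least-squares problem are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||.||_1:
    # S_tau(x)_i = sign(x_i) * max(|x_i| - tau, 0).
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista_step(x, grad_f, step, lam):
    # One iterative soft-thresholding step for min f(x) + lam * ||x||_1:
    # a gradient step on f followed by the l1 proximal map.
    return soft_threshold(x - step * grad_f(x), step * lam)

# Hypothetical toy usage: f(x) = 0.5 * ||A x - b||^2 on random data.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
b = rng.standard_normal(20)
grad_f = lambda x: A.T @ (A @ x - b)
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of grad f
x = np.zeros(50)
for _ in range(200):
    x = ista_step(x, grad_f, step, lam=0.1)
print("nonzeros:", np.count_nonzero(x))
```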
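The second item describes a proximal Newton method whose piecewise quadratic subproblem is solved inexactly. Below is a minimal sketch of one such step, assuming the subproblem is approximately minimized by a fixed, small number of proximal-gradient iterations on the model; that fixed cap is an illustrative stand-in for the paper's adaptive inexactness conditions, which are based on a semi-smooth optimality measure.

```python
import numpy as np

def soft_threshold(z, tau):
    # Proximal operator of tau * ||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def inexact_prox_newton_step(x, grad, H, lam, inner_iters=10):
    # Approximately minimize the piecewise quadratic model
    #   q(z) = grad'(z - x) + 0.5 (z - x)' H (z - x) + lam * ||z||_1
    # with a fixed number of proximal-gradient iterations on q.
    # (Illustrative stand-in for the paper's inexactness conditions.)
    L = np.linalg.norm(H, 2)             # step length 1/L from model curvature
    z = x.copy()
    for _ in range(inner_iters):
        model_grad = grad + H @ (z - x)  # gradient of the smooth part of q
        z = soft_threshold(z - model_grad / L, lam / L)
    return z

# Hypothetical toy usage on f(x) = 0.5 * ||A x - b||^2 with exact Hessian A'A.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
x = np.zeros(10)
for _ in range(5):
    grad = A.T @ (A @ x - b)
    x = inexact_prox_newton_step(x, grad, A.T @ A, lam=0.5)
print("nonzeros after 5 outer steps:", np.count_nonzero(x))
```

Using a fixed inner iteration count keeps the sketch simple; the papers instead terminate the inner solve adaptively so that global convergence and the local rate can be controlled.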