A framework for parallel second order incremental optimization algorithms for solving partially separable problems

dc.authoridKaya, Kamer/0000-0001-8678-5467|Koptagel, Hazal/0000-0002-6234-4656|Kuru, Nurdan/0000-0001-8184-7809|Birbil, Ilker/0000-0001-7472-7032
dc.authorwosidKaya, Kamer/J-8972-2015
dc.authorwosidOztoprak, Figen/ABH-1969-2021
dc.authorwosidKaya, Kamer/Y-4319-2019
dc.authorwosidBirbil, Ilker/L-7121-2018
dc.contributor.authorKaya, Kamer
dc.contributor.authorOztoprak, Figen
dc.contributor.authorBirbil, S. Ilker
dc.contributor.authorCemgil, A. Taylan
dc.contributor.authorSimsekli, Umut
dc.contributor.authorKuru, Nurdan
dc.contributor.authorKoptagel, Hazal
dc.date.accessioned2024-07-18T20:40:39Z
dc.date.available2024-07-18T20:40:39Z
dc.date.issued2019
dc.departmentİstanbul Bilgi Üniversitesien_US
dc.description.abstractWe propose Hessian Approximated Multiple Subsets Iteration (HAMSI), a generic second order incremental algorithm for solving large-scale partially separable convex and nonconvex optimization problems. The algorithm is based on a local quadratic approximation and hence allows incorporating curvature information to speed up convergence. HAMSI is inherently parallel, and it scales nicely with the number of processors. We prove the convergence properties of our algorithm when the subset selection step is deterministic. Combined with techniques for effectively utilizing modern parallel computer architectures, we illustrate that a particular implementation of the proposed method based on L-BFGS updates converges more rapidly than a parallel gradient descent when both methods are used to solve large-scale matrix factorization problems. This performance gain comes only at the expense of using memory that scales linearly with the total size of the optimization variables. We conclude that HAMSI may be considered a viable alternative for many large-scale problems where first order methods based on variants of gradient descent are applicable.en_US
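The abstract describes incremental second order updates over the subsets of a partially separable objective f(x) = Σ_i f_i(x restricted to S_i). A minimal sketch of that idea is below; it is not the authors' HAMSI implementation (which uses L-BFGS updates), and the scalar curvature constant, function names, and the toy problem are illustrative assumptions only:

```python
import numpy as np

def incremental_quadratic_pass(x, subsets, grads, curvature=1.0):
    """One pass over the subsets of a partially separable objective.

    Each inner step minimizes a local quadratic model
        q_i(d) = g_i^T d + 0.5 * curvature * d^T d
    of the subset function f_i, so the update is d = -g_i / curvature.
    (HAMSI replaces the scalar `curvature` with an L-BFGS Hessian
    approximation.) Because every update touches only its own block of
    variables, subsets that share no variables can be processed in
    parallel, which is the source of the method's scalability.
    """
    x = x.copy()
    for idx, grad_fn in zip(subsets, grads):
        g = grad_fn(x[idx])        # gradient of f_i on its variable block
        x[idx] -= g / curvature    # minimizer of the local quadratic model
    return x

# Toy partially separable problem: f(x) = (x0 - 1)^2 + (x1 + 2)^2,
# split into two single-variable subsets with disjoint variables.
subsets = [np.array([0]), np.array([1])]
grads = [lambda b: 2.0 * (b - 1.0), lambda b: 2.0 * (b + 2.0)]
x = np.zeros(2)
for _ in range(5):
    x = incremental_quadratic_pass(x, subsets, grads, curvature=2.0)
```

With `curvature=2.0` (the exact second derivative of each quadratic term), each block update is an exact Newton step, so the iterate reaches the minimizer (1, -2) after a single pass.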
dc.description.sponsorshipScientific and Technological Research Council of Turkey (TUBITAK) [113M492]en_US
dc.description.sponsorshipThis work is supported by the Scientific and Technological Research Council of Turkey (TUBITAK) Grant No. 113M492.en_US
dc.identifier.doi10.1007/s10589-018-00057-7
dc.identifier.endpage705en_US
dc.identifier.issn0926-6003
dc.identifier.issn1573-2894
dc.identifier.issue3en_US
dc.identifier.scopus2-s2.0-85059648696en_US
dc.identifier.scopusqualityQ1en_US
dc.identifier.startpage675en_US
dc.identifier.urihttps://doi.org/10.1007/s10589-018-00057-7
dc.identifier.urihttps://hdl.handle.net/11411/7163
dc.identifier.volume72en_US
dc.identifier.wosWOS:000463792400006en_US
dc.identifier.wosqualityQ1en_US
dc.indekslendigikaynakWeb of Scienceen_US
dc.indekslendigikaynakScopusen_US
dc.language.isoenen_US
dc.publisherSpringeren_US
dc.relation.ispartofComputational Optimization and Applicationsen_US
dc.relation.publicationcategoryMakale - Uluslararası Hakemli Dergi - Kurum Öğretim Elemanıen_US
dc.rightsinfo:eu-repo/semantics/openAccessen_US
dc.subjectLarge-Scale Unconstrained Optimizationen_US
dc.subjectSecond Order Informationen_US
dc.subjectShared-Memory Parallel Implementationen_US
dc.subjectBalanced Coloringen_US
dc.subjectBalanced Stratificationen_US
dc.subjectMatrix Factorizationen_US
dc.titleA framework for parallel second order incremental optimization algorithms for solving partially separable problems
dc.typeArticle
