With rapid advances in sensing, communication, and storage technologies, distributed data acquisition is now ubiquitous in many areas of the engineering, biological, and social sciences. For example, the large-scale deployment of advanced metering systems in smart grids enables real-time collection of huge amounts of distributed data (voltages, phases, etc.), the understanding of which is critical to improving the overall performance of future power systems. Other examples of distributed data generation include high-resolution video from networks of surveillance systems, interactions on social networks, and environmental data from sensor networks.

Timely and effective processing of such large amounts of distributed, and possibly corrupted and/or online, data requires not only novel data processing techniques but also a deep understanding of the underlying network properties of the physical system that generates the data, e.g., the network topology, the processing capability of each distributed node, and the nature of the data. These sophisticated characteristics bring new challenges to the design and analysis of distributed learning and optimization algorithms. This symposium aims to bring together researchers and experts in the fields of signal processing, machine learning, control, optimization, network science, and cyber-physical systems to address these emerging challenges. Emphasis will be given to the theory and application of distributed signal processing and cyber-physical systems, as well as advanced distributed control and optimization techniques.
Machine learning with big data often involves large optimization models. For distributed optimization over a cluster of machines, frequent communication and synchronization of all model parameters (optimization variables) can be very costly. A promising solution is to use parameter servers to store different subsets of the model parameters and update them asynchronously at different machines using local datasets. In this talk, we focus on distributed optimization of large linear models with convex loss functions, and propose a family of randomized primal-dual block coordinate algorithms that are especially suitable for asynchronous distributed implementation with parameter servers. In particular, we work with the saddle-point formulation of such problems, which allows simultaneous data and model partitioning, and exploit its structure by doubly stochastic coordinate optimization with variance reduction (DSCOVR). Compared with other first-order distributed algorithms, we show that DSCOVR may require less overall computation and communication, and less or no synchronization. We discuss the implementation details of the DSCOVR algorithms and present numerical experiments on an industrial distributed computing system. This is joint work with Adam Wei Yu, Qihang Lin and Weizhu Chen.
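To make the saddle-point idea concrete, the sketch below implements a simplified doubly stochastic primal-dual block coordinate method for ridge regression, one of the linear models with convex losses mentioned in the abstract. Each iteration samples one data block and one parameter block and applies closed-form prox updates to the corresponding dual and primal coordinates. This is an illustrative toy, not the speaker's method: it omits the variance-reduction component of DSCOVR and the asynchronous parameter-server machinery, and all names and step sizes are assumptions for this example.

```python
import numpy as np

# Bilinear saddle-point formulation of ridge regression:
#   min_w max_a (1/n) * (a^T X w - sum_i phi_i*(a_i)) + (lam/2) * ||w||^2
# with loss phi_i(z) = 0.5*(z - y_i)^2, whose conjugate is
# phi_i*(a) = 0.5*a^2 + a*y_i.  (Simplified sketch; no variance reduction.)

rng = np.random.default_rng(0)
n, d, lam = 4, 3, 0.1
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

w = np.zeros(d)           # primal variables (model parameters)
alpha = np.zeros(n)       # dual variables, one per data point
u = X.T @ alpha / n       # maintained running average X^T alpha / n
sigma, tau = 0.1, 0.1     # dual / primal step sizes (assumed values)

for _ in range(200_000):
    i = rng.integers(n)   # sample a data block (dual coordinate)
    k = rng.integers(d)   # sample a model block (primal coordinate)
    # dual prox step on alpha_i (closed form for the quadratic conjugate)
    a_new = (alpha[i] + sigma * (X[i] @ w - y[i])) / (1.0 + sigma)
    u += X[i] * (a_new - alpha[i]) / n   # keep u = X^T alpha / n exact
    alpha[i] = a_new
    # primal prox step on w_k (closed form for the ridge regularizer)
    w[k] = (w[k] - tau * u[k]) / (1.0 + tau * lam)

# sanity check against the closed-form ridge solution
w_star = np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)
print(np.linalg.norm(w - w_star))
```

In a distributed implementation, the blocks of `w` and `alpha` would live on parameter servers and workers respectively, so each sampled update touches only one data block and one model block, which is what makes asynchronous execution natural.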
Dr. Lin Xiao is a principal researcher at Microsoft Research in Redmond, Washington. He obtained his PhD in Aeronautics and Astronautics from Stanford University in 2004, and spent two years as a postdoctoral fellow at the California Institute of Technology before joining Microsoft. His current research interests include large-scale optimization, machine learning, randomized algorithms, and parallel and distributed computing.
|Wednesday, November 28|
|09:40 - 10:40||DL-DLN.1: Lin Xiao: "Randomized Primal-Dual Algorithms for Asynchronous Distributed Optimization"|
|11:00 - 12:30||DLN-L.1: Distributed Learning & Optimization: Algorithms|
|14:00 - 15:30||DLN-L.2: Distributed Learning & Optimization: Applications I|
|15:50 - 17:20||DLN-L.3: Distributed Learning & Optimization: Applications II|
Submissions are welcome on topics including:
Prospective authors are invited to submit full-length papers (up to 4 pages for technical content including figures and possible references, with one additional optional 5th page containing only references) and extended abstracts (up to 2 pages, for paper-less industry presentations and Ongoing Work presentations). Manuscripts should be original (not submitted/published anywhere else) and written in accordance with the standard IEEE double-column paper template. Accepted full-length papers will be indexed on IEEE Xplore. Accepted abstracts will not be indexed in IEEE Xplore; however, the abstracts and/or the presentations will be included in the IEEE SPS SigPort. Accepted papers and abstracts will be scheduled in lecture and poster sessions.
|Paper Submission Deadline|
|Review Results Announced||September 7, 2018|
|Camera-Ready Papers Due||September 24, 2018|
|Hotel Room Reservation Deadline||November 5, 2018|