GlobalSIP 2018:

Design, Implementation and Optimization of Deep Learning for Wireless Communications

[Download the PDF Call for Papers]

Motivated by its success in fields such as computer vision, speech recognition, and bioinformatics, researchers are now considering deep learning for wireless communications. Preliminary results in channel estimation and baseband processing have shown that deep learning can help characterize the wireless channel and produce results comparable to, and in some cases superior to, classical approaches. Although such initiatives have begun, their design, implementation, and optimization remain in their infancy. This symposium aims to bring together experts from the design and implementation, computer science, and communications communities and to provide a forum on the challenges and solutions of deep learning for wireless communications, with special interest in design and implementation.

Distinguished Symposium Talk


Zhongfeng Wang

Nanjing University

VLSI Optimizations for Deep Neural Networks

In this talk, I will first give a brief introduction to the basics of VLSI optimization for signal processing systems as well as the basics of deep learning. I will then discuss a few new methods for model compression of deep neural networks. Thereafter, I will focus on efficient VLSI designs for deep convolutional and recurrent neural networks.
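As a concrete illustration of the model-compression theme, a minimal sketch of magnitude-based weight pruning, one common compression technique (shown here for illustration only; the talk does not necessarily cover this specific method):

```python
import numpy as np

def prune_by_magnitude(weights, sparsity):
    """Zero out the given fraction of smallest-magnitude weights.

    Illustrative magnitude-based pruning: keeps the largest weights
    and sets the rest to zero, reducing model size/compute.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)          # number of weights to prune
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

w = np.array([[0.9, -0.05],
              [0.02, -0.7]])
pruned = prune_by_magnitude(w, 0.5)  # prune half of the weights
```

In practice the pruned network is usually fine-tuned afterwards to recover accuracy, and the resulting sparsity is what efficient VLSI designs exploit.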

Dr. Zhongfeng Wang received his B.S. and M.S. degrees from Tsinghua University and his Ph.D. degree from the University of Minnesota, Minneapolis, in 2000. He is currently a Distinguished Professor at Nanjing University, China. Previously, he worked for Broadcom Corporation, California, from 2007 to 2016 as an Associate Technical Director. Before that, he worked for Oregon State University and National Semiconductor Corporation. Dr. Wang is a world-recognized expert on low-power, high-speed VLSI design for signal processing systems. He has published over 170 technical papers and received two best paper awards from the IEEE Circuits and Systems (CAS) Society. He has edited one book, “VLSI”, and filed over forty U.S. patent applications and disclosures. He was the first person in the research community to have five papers ranked among the top 20 most downloaded manuscripts in IEEE Trans. on VLSI Systems. He has served as Associate Editor for IEEE Trans. on CAS-I, CAS-II, and VLSI Systems for multiple terms, and in 2013 he served on the Best Paper Award selection committee of the IEEE CAS Society. He has also contributed significantly to industrial standards: to date, his technical proposals have been adopted by more than fifteen international networking standards. In 2015, he was elevated to IEEE Fellow for contributions to VLSI design and implementation of FEC coding. His current research interests are in low-power, high-speed VLSI design for digital communications and deep learning.


Yiyu Shi

University of Notre Dame

Scaling of Deep Neural Networks for Edge Inference in Internet-of-Many-Things

Deep neural networks have demonstrated amazing potential across a wide range of applications, from autonomous cars to precision medicine. A clear trend in deep neural networks is the exponential growth of network size and the associated increase in computational complexity and memory consumption. On the other hand, when neural networks are deployed in a network with many things, the inference is usually done on the edge, under a limited area and power budget, in order to reduce communication cost and to provide enhanced security and reliability. In this talk, we analyze recent data and show that there are increasing gaps between the computational complexity and energy efficiency required by data scientists and the hardware capacity made available by hardware architects. We will then discuss various architecture and algorithm innovations that could help to bridge these gaps, with a special focus on network quantization and its theoretical implications and bounds. Finally, we will theoretically demonstrate the universal approximability of quantized neural networks and characterize the loss of expressive power induced by quantization, a missing piece in the literature.
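As a concrete illustration of the quantization theme (a generic uniform scheme for illustration, not the specific scheme analyzed in the talk), a minimal sketch of symmetric uniform weight quantization:

```python
import numpy as np

def quantize_uniform(x, num_bits):
    """Symmetric uniform quantization of a weight tensor to num_bits.

    Illustrative only: maps values onto evenly spaced integer levels
    in [-(2**(num_bits-1)-1), 2**(num_bits-1)-1], then rescales back
    to floats (so the quantization error is easy to inspect).
    """
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax        # one step of the quantizer
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale

w = np.array([0.5, -1.0, 0.24])
wq = quantize_uniform(w, 4)  # 4-bit: 15 levels, step = 1/7 here
```

The fewer the bits, the coarser the grid and the larger the worst-case error per weight, which is exactly the trade-off between expressive power and hardware cost that quantization bounds try to capture.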

Dr. Yiyu Shi is currently an associate professor in the Department of Computer Science and Engineering at the University of Notre Dame and the director of the Sustainable Computing Lab (SCL). He received his B.S. degree (with honors) in Electronic Engineering from Tsinghua University, Beijing, China, in 2005, and his M.S. and Ph.D. degrees in Electrical Engineering from the University of California, Los Angeles, in 2007 and 2009, respectively. His current research interests include hardware intelligence and three-dimensional integration. In recognition of his research, many of his papers have been nominated for Best Paper Awards at top conferences. He is also the recipient of the IBM Invention Achievement Award, the Japan Society for the Promotion of Science (JSPS) Faculty Invitation Fellowship, the Humboldt Research Fellowship, the IEEE St. Louis Section Outstanding Educator Award, the Academy of Science (St. Louis) Innovation Award, the Missouri S&T Faculty Excellence Award, the NSF CAREER Award, the IEEE Region 5 Outstanding Individual Achievement Award, and the Air Force Summer Faculty Fellowship. He has served on the technical program committees of many international conferences, including DAC, ICCAD, DATE, ISPD, ASPDAC, and ICCD. He is an executive committee member of ACM SIGDA, a member of the IEEE CEDA Publicity Committee, deputy editor-in-chief of the IEEE VLSI CAS Newsletter, and an associate editor of IEEE TCAD, ACM JETC, VLSI Integration, the IEEE TCCCPS Newsletter, and the ACM SIGDA Newsletter. He is also the chair of the 2018 DAC System Design Contest on Machine Learning on Embedded Platforms.


Thursday, November 29
08:30 - 09:30
Plenary PLEN-3: Zhongfeng Wang: "VLSI Optimizations for Deep Neural Networks"
09:40 - 10:40
DL DL-DLW.1: Yiyu Shi: "Scaling of Deep Neural Networks for Edge Inference in Internet-of-Many-Things"
11:00 - 12:30
DLW-L.1: Design and Implementation of Deep Learning for Wireless Communications
14:00 - 15:30
DLW-L.2: Deep-Learning-Based Signal Processing for Wireless Communications
15:50 - 17:20
DLW-L.3: Deep-Learning-Based Network Optimization for Wireless Communications

Organizing Committee

General Chairs

Technical Program Chairs

Submissions are welcome on topics including:

Paper Submission

Prospective authors are invited to submit full-length papers (up to 4 pages of technical content, including figures and any references, with one additional optional 5th page containing only references) and extended abstracts (up to 2 pages, for paper-less industry presentations and Ongoing Work presentations). Manuscripts should be original (not submitted or published elsewhere) and written in accordance with the standard IEEE double-column paper template. Accepted full-length papers will be indexed on IEEE Xplore. Accepted abstracts will not be indexed in IEEE Xplore; however, the abstracts and/or the presentations will be included in the IEEE SPS SigPort. Accepted papers and abstracts will be scheduled in lecture and poster sessions.

Important Dates

Paper Submission Deadline: June 29, 2018 (extended from June 17, 2018)
Review Results Announced: September 7, 2018
Camera-Ready Papers Due: September 24, 2018
Hotel Room Reservation Deadline: November 5, 2018