Short Courses at ICASSP 2023 in Collaboration with SPS Education Board

The IEEE Signal Processing Society (IEEE-SPS) Education Board is planning an inaugural education activity in the form of short courses at ICASSP 2023. These education-oriented short courses will offer Professional Development Hours (PDHs) and Continuing Education Units (CEUs) certificates to those who complete each course. Given that students, academics, and industry researchers and practitioners worldwide have a broad diversity of interests and areas of experience, the IEEE-SPS goal is to develop meaningful ways of offering beneficial and relevant courses in support of our membership's educational needs.

Four courses have been selected by the SPS Education Board and the ICASSP committee. The courses will be conducted in person during the main ICASSP conference. The total duration of each course is 10 hours.

Participants can attend either live or remotely.

Short Courses at ICASSP 2023

Dates and Time: Tue 10am-12pm, Wed-Fri 9am-12pm, Executive Room Alpha

Course Abstract: 

Gradient descent (GD) is a well-known first-order optimization method that uses the gradient of the loss function, along with a step size (or learning rate), to iteratively update the solution. When the loss (cost) function depends on a dataset of large cardinality, as is typical in deep learning (DL), GD becomes impractical.
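As a point of reference, a minimal NumPy sketch of the GD iteration might look like the following (the function names are illustrative, not taken from the course materials):

    import numpy as np

    def gradient_descent(grad_f, x0, lr=0.1, n_iters=100):
        # Vanilla GD: repeat x <- x - lr * grad_f(x).
        x = np.asarray(x0, dtype=float)
        for _ in range(n_iters):
            x = x - lr * grad_f(x)
        return x

    # Example: minimize f(x) = ||x||^2, whose gradient is 2x;
    # the iterates contract toward the origin.
    x_min = gradient_descent(lambda x: 2.0 * x, [3.0, -4.0])

Note that each iteration requires the full gradient, which for a DL loss means a pass over the entire training set.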

In this scenario, stochastic GD (SGD), which uses a noisy gradient approximation (computed over a random fraction of the dataset), has become crucial. There exist several variants of and improvements over "vanilla" SGD, such as RMSprop, Adagrad, Adadelta, Adam, and Nadam, which are usually provided as black boxes by most DL libraries (TensorFlow, PyTorch, MXNet, etc.).
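Concretely, a single SGD step replaces the full gradient with a minibatch estimate. A hedged NumPy sketch (again illustrative, not from the course materials):

    import numpy as np

    def sgd_step(w, grad_fn, X, y, lr=1e-2, batch_size=32, rng=None):
        # One SGD step: estimate the gradient on a random minibatch,
        # then take an ordinary GD step with that noisy estimate.
        if rng is None:
            rng = np.random.default_rng()
        idx = rng.choice(len(X), size=batch_size, replace=False)
        return w - lr * grad_fn(w, X[idx], y[idx])

The variants listed above differ mainly in how they adapt the step size per coordinate and accumulate gradient history.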

The primary objective of this course is to combine the essential theoretical aspects of SGD and its variants with hands-on experience programming them in Python from scratch (i.e., without relying on DL libraries such as TensorFlow, PyTorch, or MXNet). Participants will implement SGD along with the RMSprop, Adagrad, Adadelta, Adam, and Nadam algorithms and test their performance on the MNIST and CIFAR-10 datasets using shallow networks (consisting of up to two ReLU layers and a softmax last layer).
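To give a flavor of what implementing a variant from scratch entails, here is a minimal NumPy sketch of Adam with bias-corrected first and second moment estimates; it follows the standard published update rule (Kingma & Ba, 2015) and is an illustration, not the course's reference implementation:

    import numpy as np

    class Adam:
        # Minimal from-scratch Adam optimizer.
        def __init__(self, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
            self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps
            self.m = self.v = None
            self.t = 0

        def step(self, w, g):
            # w: current parameters, g: gradient of the loss at w.
            if self.m is None:
                self.m, self.v = np.zeros_like(w), np.zeros_like(w)
            self.t += 1
            self.m = self.beta1 * self.m + (1 - self.beta1) * g
            self.v = self.beta2 * self.v + (1 - self.beta2) * g**2
            m_hat = self.m / (1 - self.beta1**self.t)  # bias correction
            v_hat = self.v / (1 - self.beta2**self.t)
            return w - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)

RMSprop, Adagrad, Adadelta, and Nadam follow the same pattern with different moment-accumulation and update rules.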

Syllabus and pre-reading details: 

https://hands-on-sgd.readthedocs.io

Presenter:

Paul Rodriguez, Pontifical Catholic University of Peru

Biography:

Paul Rodriguez received the B.Sc. degree in electrical engineering from the Pontificia Universidad Católica del Perú (PUCP), Lima, Peru, in 1997, and the M.Sc. and Ph.D. degrees in electrical engineering from the University of New Mexico, U.S., in 2003 and 2005, respectively. He spent two years (2005-2007) as a postdoctoral researcher at Los Alamos National Laboratory and is currently a Full Professor with the Department of Electrical Engineering at PUCP.

His research interests include AM-FM models, parallel algorithms, adaptive signal decompositions, and optimization algorithms for inverse problems in signal and image processing, such as total variation, basis pursuit, principal component pursuit (a.k.a. robust PCA), convolutional sparse representations, and extreme learning machines.

Dates and Time: Tue-Thu 2-5pm, Fri 2-4pm, Executive Room Alpha

Course Abstract:  

Over the past two decades, our signal processing community has witnessed explosive developments in low-dimensional models for high-dimensional data, which have revolutionized many applications in engineering and science. In the meantime, the community is transitioning to embrace the power of modern machine learning, especially deep learning, which brings unprecedented new challenges in terms of modeling and interpretability. This short course provides a timely tutorial that uniquely bridges fundamental mathematical models from signal processing to contemporary topics in nonconvex optimization and deep learning through low-dimensional models.

This short course will show (i) how these low-dimensional models and principles provide a valuable lens for formulating problems and understanding the behavior of methods, and (ii) how ideas from nonconvexity and deep learning help make these core models practical for real-world problems with nonlinear data and observation models, measurement nonidealities, etc. The course will start by introducing fundamental linear low-dimensional models (e.g., basic sparse and low-rank models) with motivating engineering applications, followed by a suite of scalable and efficient optimization methods. Based on these developments, we will introduce nonlinear low-dimensional models for several fundamental learning and inverse problems, followed by their guaranteed correctness and efficient nonconvex optimization. Building upon these results, we will discuss strong conceptual, algorithmic, and theoretical connections between low-dimensional structures and deep models, providing new perspectives for understanding state-of-the-art deep models, as well as new principles for designing deep networks that learn low-dimensional structures, with both clear interpretability and practical benefits.
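As a small, self-contained taste of the sparse models the course starts from, here is a hedged NumPy sketch of ISTA (iterative soft-thresholding) for the lasso problem min_x (1/2)||Ax - b||^2 + lam*||x||_1; it is illustrative only and not drawn from the course materials:

    import numpy as np

    def soft_threshold(x, tau):
        # Proximal operator of tau * ||.||_1.
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    def ista(A, b, lam, n_iters=200):
        # ISTA: gradient step on (1/2)||Ax - b||^2, then soft-threshold.
        L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iters):
            x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
        return x

Methods of this kind, together with their accelerated and nonconvex cousins, are representative of the scalable optimization methods the abstract refers to.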

Syllabus and pre-reading details:

The targeted core audience of this course consists of senior undergraduate and entry-level graduate students and young researchers in Electrical Engineering and Computer Science (EECS), especially in the areas of data science, signal processing, optimization, machine learning, and applications.

Research experience in high-dimensional data is not required to take this course. Students only need to be familiar with basic linear algebra and programming. This course will equip students with systematic and rigorous training in concepts and methods of high-dimensional geometry, statistics, and optimization. Through a very diverse and rich set of applications and (programming) exercises, the course also coaches students on how to correctly use such concepts and methods to model real-world data and solve real-world engineering and scientific problems.

Related course websites from previous years:

https://highdimdata-lowdimmodels-tutorial.github.io/

https://pages.github.berkeley.edu/UCB-EECS208/course_site/

https://slowdnn-workshop.github.io/tutorials/

 

Presenters:  

Qing Qu, University of Michigan

Sam Buchanan, Toyota Technological Institute at Chicago

Yi Ma, UC Berkeley

Atlas Zhangyang Wang, University of Texas at Austin

John Wright, Columbia University

Yuqian Zhang, Rutgers University

Zhihui Zhu, Ohio State University

 

Biographies:

 

Sam Buchanan is a research assistant professor at the Toyota Technological Institute at Chicago (TTIC). He obtained his Ph.D. in Electrical Engineering from Columbia University in 2022, advised by John Wright. His research interests include the theoretical analysis of deep neural networks, particularly in connection with high-dimensional data with low-dimensional structure, and associated applications in vision. He received the 2017 NDSEG Fellowship and the Eli Jury Award from Columbia University.

 

Yi Ma is a professor in the EECS department at UC Berkeley. He received the David Marr Best Paper Award at ICCV 1999, an honorable mention for the Longuet-Higgins Best Paper Award at ECCV 2004, the Sang Uk Lee Best Student Paper Award at ACCV 2009, and the second prize of the Best Paper Award of the IMA Journal on Information and Inference in 2015. He received the NSF CAREER Award in 2003 and the Young Investigator Program (YIP) Award from the Office of Naval Research (ONR) in 2005. He has given over thirty keynote or plenary talks at international conferences and workshops. He has been an IEEE Fellow since 2013, an ACM Fellow since 2017, and a SIAM Fellow since 2020.

 

 

Qing Qu is an assistant professor in the EECS department at the University of Michigan. Prior to that, he was a Moore-Sloan Data Science Fellow at the Center for Data Science, New York University, from 2018 to 2020. He received his Ph.D. in Electrical Engineering from Columbia University in Oct. 2018, his B.Eng. from Tsinghua University in Jul. 2011, and his M.Sc. from Johns Hopkins University in Dec. 2012, the latter two in Electrical and Computer Engineering. He interned at the U.S. Army Research Laboratory in 2012 and at Microsoft Research in 2016. His research interests lie at the intersection of the foundations of data science, machine learning, numerical optimization, and signal/image processing, with a focus on developing efficient nonconvex methods and global optimality guarantees for solving representation learning and nonlinear inverse problems in engineering and imaging sciences. He received the Best Student Paper Award at SPARS'15 (with Ju Sun and John Wright), a Microsoft Ph.D. Fellowship in machine learning, and the NSF CAREER Award in 2022.

 

Atlas Zhangyang Wang is currently the Jack Kilby/Texas Instruments Endowed Assistant Professor in the Department of Electrical and Computer Engineering at The University of Texas at Austin. He received his Ph.D. degree in ECE from UIUC in 2016, advised by Professor Thomas S. Huang. He has broad research interests spanning the theory and application aspects of machine learning (ML). Most recently, he studies efficient ML / learning with sparsity, robust & trustworthy ML, AutoML / learning to optimize (L2O), and graph ML, as well as their applications in computer vision and interdisciplinary science.

 

 

John Wright is an associate professor in the EE department at Columbia University and a member of Columbia's Data Science Institute. He received his Ph.D. in EE from the University of Illinois at Urbana-Champaign in 2009, advised by Prof. Yi Ma, and was with Microsoft Research from 2009 to 2011. His research is in the area of high-dimensional signal and data analysis, optimization, and computer vision. His work has received a number of awards and honors, including the 2009 Lemelson-Illinois Prize for Innovation for his work on face recognition, the 2009 UIUC Martin Award for Excellence in Graduate Research, a 2008-2010 Microsoft Research Fellowship, the Best Paper Award at the Conference on Learning Theory (COLT) in 2012, and the 2015 PAMI TC Young Researcher Award.

 

Yuqian Zhang is an assistant professor in the ECE department at Rutgers University. She was previously a postdoctoral scholar with the TRIPODS Center for Data Science at Cornell University. She obtained her Ph.D. and M.S. in Electrical Engineering from Columbia University, advised by Prof. John Wright, and her B.S. in Information Engineering from Xi'an Jiaotong University. Her research leverages physical models in data-driven computation and convex and nonconvex optimization to solve problems in machine learning, computer vision, and signal processing.

 

 

Zhihui Zhu is currently an Assistant Professor with the Department of Computer Science and Engineering at the Ohio State University. He was an Assistant Professor in the Department of Electrical and Computer Engineering at the University of Denver from 2020 to 2022 and a Postdoctoral Fellow with the Mathematical Institute for Data Science, Johns Hopkins University, from 2018 to 2019. He received his Ph.D. degree in electrical engineering in 2017 from the Colorado School of Mines, where his research was recognized with a Graduate Research Award.

Dates and Time: Tue 10am-12pm, Wed-Fri 9am-12pm, Executive Room Gamma

Course Abstract:  

Graph Neural Networks (GNNs) have emerged as the tool of choice for machine learning on graphs and are rapidly growing into the next deep learning frontier. Indeed, just as the 2010s were the decade of convolutional neural networks (CNNs) applied to learning with images and time signals, the 2020s are shaping up to be the decade of GNNs for learning on graphs. This is the right time for practitioners and researchers to learn about GNNs and their use in machine learning on graphs.

In this course, we present GNNs as generalizations of CNNs based on the generalization of convolutions in time and space to convolutions on graphs. The main focus of the course is to teach students how to formulate and solve machine learning problems with GNNs. We place emphasis on showing how the use of a convolutional architecture enables scalability to high-dimensional problems. We also explore three fundamental properties of GNNs: Equivariance to label permutations, stability to deformations, and transferability across dimensions.
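To make "convolutions on graphs" concrete: a graph convolution can be written as a polynomial in a graph shift operator S (e.g., an adjacency or Laplacian matrix), y = sum_k h_k S^k x. A minimal NumPy sketch, with illustrative names that are not taken from the course materials:

    import numpy as np

    def graph_filter(S, x, h):
        # Graph convolution: y = sum_k h[k] * (S^k x), k = 0..len(h)-1.
        z = np.asarray(x, dtype=float)
        y = np.zeros_like(z)
        for hk in h:
            y += hk * z  # accumulate h_k * S^k x
            z = S @ z    # apply the graph shift once more
        return y

    def gnn_layer(S, x, h):
        # One simple GNN layer: a graph convolution followed by a ReLU.
        return np.maximum(graph_filter(S, x, h), 0.0)

When S is the cyclic time-shift matrix, graph_filter reduces to an ordinary convolution, which is precisely the generalization the course develops.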

This course is modeled on a regular course offered at the University of Pennsylvania (https://gnn.seas.upenn.edu).

 

Syllabus and pre-reading details:

https://gnn.seas.upenn.edu/icassp-2023/

Presenters:  

Alejandro Ribeiro, University of Pennsylvania

Charilaos I. Kanatsoulis, University of Pennsylvania

Navid NaderiAlizadeh, University of Pennsylvania

Alejandro Parada-Mayorga, University of Pennsylvania

Luana Ruiz, MIT

Biographies:

 

Alejandro Ribeiro (aribeiro@seas.upenn.edu) received the B.Sc. degree in electrical engineering from the Universidad de la República Oriental del Uruguay, Montevideo, Uruguay, in 1998, and the M.Sc. and Ph.D. degrees in electrical engineering from the University of Minnesota, Minneapolis, MN, USA, in 2005 and 2007, respectively. He joined the University of Pennsylvania, Philadelphia, PA, USA, in 2008, where he is currently a Professor of Electrical and Systems Engineering. His research interests include applications of statistical signal processing to the study of networks and networked phenomena, structured representations of networked data structures, graph signal processing, network optimization, robot teams, and networked control. Papers coauthored by Dr. Ribeiro received the 2022 IEEE Signal Processing Society Best Paper Award, the 2022 IEEE Brain Initiative Student Paper Award, the 2021 Cambridge Ring Publication of the Year Award, the 2020 IEEE Signal Processing Society Young Author Best Paper Award, the 2014 O. Hugo Schuck Best Paper Award, and paper awards at EUSIPCO 2021, ICASSP 2020, EUSIPCO 2019, CDC 2017, SSP Workshop 2016, SAM Workshop 2016, Asilomar SSC Conference 2015, ACC 2013, ICASSP 2006, and ICASSP 2005. His teaching has been recognized with the 2017 Lindback Award for distinguished teaching and the 2012 S. Reid Warren, Jr. Award, presented by Penn's undergraduate student body for outstanding teaching.

 

Charilaos I. Kanatsoulis (kanac@seas.upenn.edu) is a postdoctoral researcher in the Department of Electrical and Systems Engineering at the University of Pennsylvania. He received his Diploma in electrical and computer engineering from the National Technical University of Athens, Greece, in 2014, and his Ph.D. degree in electrical and computer engineering from the University of Minnesota, Twin Cities, in 2020. His research studies the interplay between machine learning and signal processing. He is particularly interested in principled convolutional and graph neural network design, tensor and multi-view analysis, representation learning and explainable artificial intelligence.

 

Navid NaderiAlizadeh (nnaderi@seas.upenn.edu) is a Postdoctoral Researcher in the Department of Electrical and Systems Engineering at the University of Pennsylvania. He received the B.S. degree in electrical engineering from Sharif University of Technology, Tehran, Iran, in 2011, the M.S. degree in electrical and computer engineering from Cornell University, Ithaca, NY, USA, in 2014, and the Ph.D. degree in electrical engineering from the University of Southern California, Los Angeles, CA, USA, in 2016. Navid spent more than four years as a Research Scientist at Intel Labs and HRL Laboratories. His research interests span machine learning, signal processing, and information theory, and their applications to resource allocation in wireless networks. In addition to serving as a TPC member of several IEEE conferences, Navid has served as an Associate Editor for the IEEE Journal on Selected Areas in Communications and as a Guest Editor for the IEEE IoT Magazine.

 

Alejandro Parada-Mayorga (alejopm@seas.upenn.edu) is a postdoctoral researcher in the Department of Electrical and Systems Engineering at the University of Pennsylvania. He received his B.Sc. and M.Sc. degrees in electrical engineering from the Universidad Industrial de Santander, Colombia, in 2009 and 2012, respectively, and his Ph.D. degree in electrical engineering from the University of Delaware, Newark, DE, in 2019. His research focuses on the foundations of information processing and learning using generalized convolutional signal processing. His research interests include algebraic signal processing, applied representation theory of algebras, geometric deep learning, applied category theory, graph neural networks, and topological signal processing.

 

Luana Ruiz (ruizl@mit.edu) is a FODSI and METEOR postdoctoral fellow at MIT. She received the Ph.D. degree in electrical engineering from the University of Pennsylvania in 2022, and the M.Eng. and B.Eng. double degree in electrical engineering from the École Supérieure d'Électricité, France, and the University of São Paulo, Brazil, in 2017. Luana's work focuses on large-scale graph information processing and graph neural network architectures. She was awarded an Eiffel Excellence scholarship from the French Ministry for Europe and Foreign Affairs between 2013 and 2015; was named an iREDEFINE fellow in 2019, an MIT EECS Rising Star in 2021, a Simons Research Fellow in 2022, and a METEOR fellow in 2023; and received best student paper awards at the 27th and 29th European Signal Processing Conferences. She currently serves as a member of the MLSP TC.

Dates and Time: Tue-Thu 2-5pm, Fri 2-4pm, Executive Room Gamma

Course Abstract:  

This course presents a novel data analytics perspective for dealing with signals and data supported on graphs. Such data occur in many application domains: traditional physics-based signals like time series, images, or video; data traveling on telecom networks; gene networks and chemical networks; and data arising in the social network, marketing, corporate, financial, and health care domains.

Graph Signal Processing (GSP) extends traditional Digital Signal Processing (DSP) to data supported on graphs. Contrary to what is commonly believed in the SP community, GSP is not simply the application of DSP approaches to arbitrary graphs. This course builds GSP from first principles and presents recent foundational results in GSP as an intuitive, direct extension of DSP concepts. Using this approach, the course introduces a canonical model: a new way to design and understand GSP concepts. It sheds new light on DSP interpretations and assumptions commonly taken for granted, providing valuable new perspectives on both DSP and GSP. Finally, the course considers geometric learning, combining GSP and deep learning (DL), in particular graph neural networks (GNNs).
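To illustrate the flavor of the DSP-to-GSP extension, here is a hedged NumPy sketch of the graph Fourier transform in the common adjacency-based formulation: diagonalize the graph shift operator S = V diag(lam) V^-1 and project the signal onto the eigenbasis. The names are illustrative, not taken from the course materials:

    import numpy as np

    def graph_fourier_transform(S, x):
        # Eigendecompose the graph shift operator: S = V diag(lam) V^{-1}.
        # The GFT of x is x_hat = V^{-1} x (use np.linalg.eigh if S is symmetric).
        lam, V = np.linalg.eig(S)
        x_hat = np.linalg.solve(V, x)
        return lam, x_hat

When S is the cyclic shift matrix of classical DSP, the eigenvectors are the DFT exponentials and the GFT reduces to the ordinary DFT.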

This course is designed to be self-contained (with no required prerequisites), offering something for newcomers to GSP and GSP veterans alike.

Syllabus and pre-reading details:

(Prerequisite technical knowledge) 

Although the course will be self-contained, attendees will benefit from a working knowledge of basic linear algebra and from having taken the equivalent of an undergraduate signal processing course.

Presenters:  

José M. F. Moura, Carnegie Mellon University

John Shi, Carnegie Mellon University

Biographies:

 

José M. F. Moura, moura@ece.cmu.edu, is the Philip L. and Marsha Dowd University Professor at Carnegie Mellon University. He holds a Licenciatura in Electrical Engineering from Instituto Superior Técnico (IST) and M.Sc., E.E., and D.Sc. degrees from MIT. He was a visiting Professor at MIT (1984-86, 1999-00, and 2006-07) and at NYU (2010-14). His interests are in statistical, algebraic, and graph signal processing. He has published extensively and holds 17 patents; the technology of two of them (co-inventor Alek Kavcic) is found in over 3 billion disk drives in over 60% of all computers sold worldwide in the last 20 years, and was the subject of the then-largest IP settlement in the ICT area, US $750 million, between CMU and a semiconductor company. He is a Fellow of the IEEE, the AAAS, and the National Academy of Inventors, and a member of the Academy of Sciences of Portugal and of the US National Academy of Engineering. He holds doctor honoris causa degrees from the University of Strathclyde (UK) and the Universidade de Lisboa (Portugal). He has received several awards, including what is now called the IEEE Signal Processing Society (SPS) Claude Shannon-Harry Nyquist Technical Achievement Award, the IEEE SPS Norbert Wiener Society Award, and the 2023 IEEE Kilby Medal. He received the Great Cross of Infante D. Henrique, bestowed on him by the President of the Republic of Portugal.

At CMU, José M. F. Moura introduced a graduate course on "Network Science" in 2012, which he taught over a period of five years, and a graduate course on "Graph Signal Processing and Geometric Learning" in 2020, which he has so far taught twice.

 

 

John Shi (jshi3@andrew.cmu.edu) holds undergraduate degrees in Computer Engineering and Applied Mathematics from the University of Maryland (UMD) and a Ph.D. in Electrical and Computer Engineering from Carnegie Mellon University (CMU), where he is currently a postdoctoral researcher. His Ph.D. thesis is titled "A Dual Domain Approach to Graph Signal Processing," and topics from this research will be included in this short course.

John Shi has significant teaching experience with both undergraduate and graduate students. As an undergraduate at the University of Maryland (2013-2017), he was a teaching assistant (TA) seven times, teaching C programming, Calculus, and Digital Systems. In addition, he was a Ron Strauss Teaching Assistant in 2017, a prestigious position offered to only seven undergraduate students, which allows them to teach one year of Calculus, a role typically reserved for Mathematics Ph.D. students at UMD. At CMU, he was a TA four times, teaching the undergraduate Signals and Systems course and the Mathematical Foundations for Electrical Engineers course. He taught the Special Topics in Signal Processing: Graph Signal Processing and Geometric Learning course at CMU in Fall 2022 as a co-instructor with Dr. Moura.
