Plenary Speakers
- Andrea Goldsmith, Dean of Engineering and Applied Science and the Arthur LeGrand Doty Professor of Electrical Engineering at Princeton University
- Richard G. Baraniuk, C. Sidney Burrus Professor of Electrical and Computer Engineering at Rice University and the Founding Director of OpenStax
- Michael I. Jordan, Pehong Chen Distinguished Professor in the Department of Electrical Engineering and Computer Science and the Department of Statistics at the University of California, Berkeley
- Christos Harilaos Papadimitriou, Donovan Family Professor of Computer Science at Columbia University
Andrea Goldsmith
Title:
Disrupting NextG
Date:
June 06 – 09:45 AM – 10:45 AM
Abstract:
As 5G takes to the airwaves, we now turn our imagination to the next generation of wireless technology. The promise of this technology has created an international race to innovate, with significant investment by government as well as industry. And much innovation is needed, as 6G aspires not only to support significantly higher data rates than 5G but also to improve reliability and provide excellent coverage indoors and out, including for underserved areas. New architectures, including edge computing, must be designed to make resource allocation drastically more efficient while also reducing latency for real-time control. Breakthrough energy-efficient architectures, algorithms, and hardware will be needed so that wireless devices can be powered by tiny batteries, energy harvesting, or over-the-air power transfer. And signal processing will play an outsized role in the underlying technologies for NextG as well as in the “killer apps” that will drive its deployment and success. This talk will describe what the wireless future might look like, along with some of the innovations and breakthroughs required to realize this vision.
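The energy-efficiency challenge the abstract mentions has a crisp information-theoretic backdrop. The short Python sketch below (an illustrative textbook calculation, not material from the talk) evaluates the classic AWGN tradeoff Eb/N0 = (2^eta - 1)/eta, showing how a device can trade spectral efficiency for energy per bit but can never go below the ln(2) floor.

```python
import numpy as np

# Minimum energy per bit on the AWGN channel at spectral efficiency eta
# (bit/s/Hz): the capacity relation eta = log2(1 + eta * Eb/N0)
# rearranges to Eb/N0 = (2**eta - 1) / eta.
for eta in [0.01, 0.1, 0.5, 1.0, 2.0, 4.0]:
    ebno_db = 10 * np.log10((2.0**eta - 1.0) / eta)
    print(f"eta = {eta:4.2f} bit/s/Hz -> minimum Eb/N0 = {ebno_db:6.2f} dB")

# As eta -> 0 the requirement approaches ln(2), about -1.59 dB: low-power
# devices can trade rate for energy, but no scheme can beat this floor.
```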
Andrea Goldsmith
Dean of Engineering and Applied Science and the Arthur LeGrand Doty Professor of Electrical Engineering at Princeton University
Richard G. Baraniuk
Title:
The Local Geometry of Deep Learning
Date:
June 07 – 09:45 AM – 10:45 AM
Abstract:
We study the geometry of deep learning through the lens of approximation theory via splines. The enabling insight is that a large class of deep networks can be written as a composition of continuous piecewise affine (CPA) spline operators, which provides a powerful portal through which to interpret and analyze their inner workings. Our particular focus is the local geometry of the spline partition of the network’s input space, which opens up new avenues to study how deep networks organize signals in a hierarchical, multiscale fashion. Applications include the analysis of the deep network optimization landscape, explaining batch normalization, and debiasing pre-trained generative networks.
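To make the CPA-spline view concrete, here is a minimal Python sketch (an illustration, not code from the talk): for a small ReLU network it reads off the activation pattern at an input x and composes the resulting affine pieces, recovering the local affine map (A, c) that the network computes exactly throughout x's spline region.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny ReLU MLP; within each spline region (one fixed ReLU on/off
# pattern) the whole network is exactly affine.
dims = [4, 8, 8, 2]  # hypothetical layer widths for the demo
Ws = [rng.normal(size=(dims[i + 1], dims[i])) for i in range(3)]
bs = [rng.normal(size=dims[i + 1]) for i in range(3)]

def forward(x):
    h = x
    for W, b in zip(Ws[:-1], bs[:-1]):
        h = np.maximum(W @ h + b, 0.0)  # ReLU: a CPA spline nonlinearity
    return Ws[-1] @ h + bs[-1]

def local_affine(x):
    """Return (A, c) with f(x') = A @ x' + c for every x' in the same
    spline region (same activation pattern) as x."""
    A, c, h = np.eye(dims[0]), np.zeros(dims[0]), x
    for W, b in zip(Ws[:-1], bs[:-1]):
        pre = W @ h + b                 # pre-activations at x
        mask = (pre > 0).astype(float)  # the region's activation "code"
        A = (W * mask[:, None]) @ A     # compose diag(mask) @ W with (A, c)
        c = mask * (W @ c + b)
        h = np.maximum(pre, 0.0)
    return Ws[-1] @ A, Ws[-1] @ c + bs[-1]

x = rng.normal(size=dims[0])
A, c = local_affine(x)
print(np.allclose(forward(x), A @ x + c))        # True: exact, not a linearization
print(np.allclose(forward(x + 1e-4), A @ (x + 1e-4) + c))  # True for nearby inputs too
```

The map (A, c) is exact on the whole region, not a first-order approximation, which is what makes the spline partition a useful object for analyzing how the network organizes its input space.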
Richard G. Baraniuk
C. Sidney Burrus Professor of Electrical and Computer Engineering at Rice University and the Founding Director of OpenStax
Michael I. Jordan
Title:
An Alternative View on AI: Collaborative Learning, Incentives, and Social Welfare
Date:
June 08 – 09:45 AM – 10:45 AM
Abstract:
Artificial intelligence (AI) has focused on a paradigm in which intelligence inheres in a single, autonomous agent. Social issues are entirely secondary in this paradigm. When AI systems are deployed in social contexts, however, the overall design of such systems is often naive: a centralized entity provides services to passive agents and reaps the rewards. Such a paradigm need not be the dominant paradigm for information technology. In a broader framing, agents are active, they are cooperative, and they wish to obtain value from their participation in learning-based systems. Agents may supply data and other resources to the system only if it is in their interest to do so. Critically, intelligence inheres as much in the overall system as it does in individual agents, be they humans or computers. This is a perspective familiar in the social sciences, and a first goal in this line of work is to bring economics into contact with the computing and data sciences. The long-term goal is twofold: to provide a broader conceptual foundation for emerging real-world AI systems, and to upend received wisdom in the computational, economic, and inferential disciplines.
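As a toy illustration of the incentive perspective (a hypothetical model, not one from the talk), the Python sketch below simulates agents who participate in a collaborative learning system only when the value they derive exceeds their private cost of contributing data, iterating best responses until participation stabilizes.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
cost = rng.uniform(0.0, 1.0, size=n)  # hypothetical private cost of contributing data

def benefit(k):
    """Stylized value a participant gets from a system trained on k agents'
    data; diminishing returns in the number of contributors."""
    return 2.0 * (1.0 - 1.0 / (1.0 + k))

# Iterate best responses: agent i participates iff the value of the system,
# counting its own contribution, exceeds its cost, given the others' choices.
joined = np.zeros(n, dtype=bool)
for _ in range(100):
    prev = joined.copy()
    for i in range(n):
        k_others = int(joined.sum()) - int(joined[i])
        joined[i] = benefit(k_others + 1) > cost[i]
    if np.array_equal(joined, prev):  # a stable (equilibrium) participation set
        break

k = int(joined.sum())
welfare = k * benefit(k) - cost[joined].sum()
print(f"{k} of {n} agents participate; net social welfare = {welfare:.2f}")
```

The point of the toy model is the framing, not the numbers: participation, and hence the data the system learns from, is an equilibrium outcome of individual incentives rather than a given.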
Michael I. Jordan
Pehong Chen Distinguished Professor in the Department of Electrical Engineering and Computer Science and the Department of Statistics at the University of California, Berkeley
He gave the Inaugural IMS Grace Wahba Lecture in 2022, the IMS Neyman Lecture in 2011, and an IMS Medallion Lecture in 2004.
Christos Harilaos Papadimitriou
Title:
How does the brain create language?
Date:
June 09 – 09:45 AM – 10:45 AM
Abstract:
There is little doubt that cognitive phenomena are the result of neural activity. However, progress toward an overarching computational theory of how exactly this happens has been slow. I will discuss a simplified mathematical model of the brain involving brain areas, spiking neurons, random synapses, local inhibition, Hebbian plasticity, and long-range interneurons. Emergent behaviors of the resulting dynamical system, established both analytically and through simulations, include assemblies of neurons and universal computation. By simulating neural systems in this model at a scale of tens of millions of neurons, we can emulate certain high-level cognitive phenomena such as sequence memorization, few-shot learning of classification tasks, planning in the blocks world, and parsing of natural language. I will describe current work aimed at creating, within this framework, a neuromorphic language organ: a neural tabula rasa which, on input consisting of a modest amount of grounded language, is capable of language acquisition, including lexicon, syntax, semantics, comprehension, and generation.
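For readers curious about the mechanics, the following Python sketch implements a stripped-down version of this style of model (an illustration under simplifying assumptions, not the speaker's code): a random synaptic graph, k-winners-take-all firing standing in for local inhibition, and multiplicative Hebbian weight updates, under which a stable assembly of neurons emerges in response to a fixed stimulus.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, p, beta = 1000, 50, 0.05, 0.10  # neurons/area, cap size, synapse prob., plasticity

# Random synapses: stimulus area -> target area, plus recurrent synapses
# within the target area. W[i, j] is the weight from neuron j to neuron i.
W_stim = (rng.random((n, n)) < p).astype(float)
W_rec = (rng.random((n, n)) < p).astype(float)

stimulus = rng.choice(n, size=k, replace=False)  # a fixed set of firing input neurons

def cap(total_input):
    """Local inhibition modeled as k-winners-take-all: only the k neurons
    receiving the most synaptic input fire."""
    return np.argsort(total_input)[-k:]

prev = np.array([], dtype=int)
for t in range(12):
    total = W_stim[:, stimulus].sum(axis=1)
    if prev.size:
        total += W_rec[:, prev].sum(axis=1)
    winners = cap(total)
    # Hebbian plasticity: synapses from neurons that just fired into the
    # new winners are multiplicatively strengthened.
    W_stim[np.ix_(winners, stimulus)] *= 1.0 + beta
    if prev.size:
        W_rec[np.ix_(winners, prev)] *= 1.0 + beta
    overlap = len(np.intersect1d(winners, prev)) / k
    print(f"step {t:2d}: overlap with previous winners = {overlap:.2f}")
    prev = winners
```

Running this, the overlap between successive winner sets climbs toward 1.0 within a few steps: the firing pattern stops wandering and a fixed assembly forms, a small-scale analogue of the assembly formation the abstract describes.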