
EngX: The Digital Sensory System videos

Three Stanford engineers present research on technologies that mimic human capabilities such as seeing, touching and learning.

SEE

Fei-Fei Li, Associate Professor of Computer Science

"A Quest for Visual Intelligence in Computers"

More than half of the human brain is involved in visual processing. The remarkable human visual system evolved over hundreds of millions of years, yet computer vision is one of the youngest disciplines of Artificial Intelligence (AI). The central problem of computer vision is to turn the millions of pixels of a single image into interpretable and actionable concepts, so that computers can understand pictures as well as humans do. Such technology will have a fundamental impact on almost every aspect of our daily lives and on society as a whole, in spheres that range from digital health and medicine to autonomous driving to national security. In this talk, Professor Li will provide an overview of computer vision and its history, and share some of her recent work on enabling large-scale object recognition.


TOUCH

Allison Okamura, Associate Professor of Mechanical Engineering

"Haptics: Engineering Touch"

The sense of touch is essential for humans to control their bodies and interact with the surrounding world. Yet there are many scenarios in which the sense of touch is typically lost: when a surgeon teleoperates a robot to perform minimally invasive surgery, when an amputee uses a prosthetic arm, and when a student performs virtual laboratory exercises in an online class. Haptic technology combines robotics, design, psychology and neuroscience to artificially generate touch sensations in humans. Professor Okamura will describe how haptic technology works and how it is being applied to improve human health, education and quality of life.


LEARN

Christopher Manning, Professor of Computer Science and Linguistics

"Texts are Knowledge"

Both people and computers now have access to virtually all of the world’s knowledge. For humans, this access is marvelous. Unfortunately, computers still have trouble comprehending this gift they have been given. How can we get computers to understand and use all this knowledge? Should the goal be to formalize this knowledge into more structured forms, or to better appreciate the flexibility and power of human language as a knowledge representation? How can computers make use of context for pragmatic interpretation, as humans do? Professor Manning will talk about how his lab has been building statistical models of language to extract both facts and nuances from human language communication.