2022/10/19 13h – Modeling Brain Circuitry from Images

Seminar

Prof. Pascal Fua

EPFL, Switzerland
_"Modeling Brain Circuitry from Images"_
Abstract: Electron microscopes (EM) can now provide the nanometer resolution needed to image synapses, and therefore connections, while light microscopes (LM) operate at the micrometer resolution required to model the 3D structure of the dendritic network. Since both the arborescence and the connections are integral parts of the brain's wiring diagram, combining these two modalities is critically important. In this talk, I will therefore present our approach to building the dendritic arborescence from LM images, to segmenting intra-neuronal structures from EM images, and to registering the resulting models. I will also discuss our recent work on building neural representations using functional data.
Pascal Fua joined EPFL (Swiss Federal Institute of Technology) in 1996, where he is a Professor in the School of Computer and Communication Sciences and head of the Computer Vision Lab. Before that, he worked at SRI International and at INRIA Sophia-Antipolis as a Computer Scientist. His research interests include shape modeling and motion recovery from images, analysis of microscopy images, and Augmented Reality. He has (co)authored over 300 publications in refereed journals and conferences. He has received several ERC grants and has co-founded three spinoff companies.


Date: 19-10-2022

Time: 13:00

Location: Room A.7, Ada Byron building

Abstract

The world is becoming unprecedentedly connected thanks to emerging media and cloud-based technologies. The holy grail of the metaverse requires recreating a remotely shared world as a digital twin of the physical planet. In this world, humans are probably the most complex mechanical, physical, and biological systems. Unlike computers, it is remarkably challenging to model and engineer how humans perceive and react in a virtual environment. By leveraging computational advances such as machine learning and biometric sensors, this talk will share some recent research on altering and optimizing human visual and behavioral perception toward creating the ultimate metaverse.

Bio

Qi Sun is an assistant professor at New York University, Tandon School of Engineering (jointly appointed in the Department of Computer Science and Engineering and the Center for Urban Science and Progress). Before joining NYU, he was a research scientist at Adobe Research and a research intern at NVIDIA Research. He received his Ph.D. from Stony Brook University. His research interests lie in computer graphics, VR/AR, vision science, machine learning, and human-computer interaction. He is a recipient of the IEEE Virtual Reality Best Dissertation Award.