Towards Artificial Systems: What Can We Learn from Human Perception?


Published on Sep 28, 2011 · 3492 views

Recent progress in learning algorithms and sensor hardware has led to rapid advances in artificial systems. However, their performance continues to fall short of the efficiency and plasticity of human perception.

Chapter list

Towards Artificial Systems: What can we learn from Human Perception? 00:00
My goal for today 00:23
Talk Outline 01:27
Research Philosophy 03:20
Talk Outline: Diagnostic features 03:48
What can we learn from human face recognition? 03:57
Thatcher Illusion (Thompson, 1980) 05:07
How good are we at recognizing familiar faces? 06:39
How bad are we with unfamiliar faces? 07:38
Caveat ... 08:43
British Version by Ian Thornton 09:45
Our approach to studying recognition 10:12
Organization of Facial Representations in a High-dimensional Face Space 12:03
Subjective Perceptual Modeling 13:00
Task-dependent eye movements 13:53
Diagnostic features vary according to task and observer 14:48
What are the image-based features? 16:03
Image-based material editing (1) 18:08
Image-based material editing (3) 19:03
Google Streetview Privacy 19:43
Short summary: Diagnostic Features 20:48
Talk Outline: Dynamic information 21:29
The binding problem of view-based recognition 21:36
The role of time in object learning 23:02
MPI Face Database 25:14
Semantic motion coding 25:42
Semantic facial animation pipeline 26:31
Study based on animation 27:06
Virtual Mirror 27:46
Max Planck ‘Virtual Mirror’ for closed-loop facial expression studies 28:23
Short summary: Spatio-temporal Representations 29:16
Talk Outline: Active perception 29:39
Active Object Exploration 30:11
Human perception is ‘forced’ to be active 32:32
Gaze-assisted interfaces 33:32
From single to multiple gaze shifts 35:47
Image-based properties are not sufficient to predict gaze behavior 37:20
Motor biases and utility constrain gaze behavior 38:05
Short summary: Active Perception 38:47
Talk Outline: Human-in-the-loop 39:20
The Human: a complex cybernetic system 39:29
A new tool (toy) for the investigation of multi-sensory closed-loop control 40:21
Cyberneum 42:58
Isolating sources of non-visual information during locomotor tasks 43:21
Flight Control 46:17
First Steps to Helicopter Simulation 47:49
The CyberMotion Simulator 48:33
From basic research to applied research 49:41
Heli-Trainer 50:09
Today’s Flight Simulators 50:27
CyberMotion Simulator with seventh axis and stereo projection 51:45
Future Developments 52:12
SUPRA – Upset recovery training 52:50
The next generation of multisensory games 55:18
Haptic feedback 56:40
Haptic Control for Airplanes and Teleoperation of Remotely Piloted Vehicles 57:48
Visual/Haptic control of a team of flying robots 58:06
Multi-sensory control of Unmanned Aerial Vehicles (UAVs) 58:28
Teleoperation of Unmanned Aerial Vehicles 59:16
Conclusion and further questions 01:00:30
Thank you very much for your attention 01:01:11