CSER - 20 February 2015 - Minds Like Ours: An Approach To AI Risk
Duration: 1 hour 29 mins
About this item
Description: |
Professor Murray Shanahan (Imperial College London) will give this talk as part of a series of seminars by the Centre for the Study of Existential Risk.
Writers who speculate about the future of artificial intelligence (AI) and its attendant risks often caution against anthropomorphism, the tendency to ascribe human-like characteristics to something non-human. An AI that is engineered from first principles will attain its goals in ways that would be hard to predict, and therefore hard to control, especially if it is able to modify and improve on its own design. However, this is not the only route to human-level AI. An alternative is to deliberately set out to make the AI not only human-level but also human-like. The most obvious way to do this is to base the architecture of the AI on that of the human brain. But this path has its own difficulties, many pertaining to the issue of consciousness. Do we really want to create an artefact that is not only capable of empathy, but also capable of suffering? |
---|
Created: | 2015-03-02 08:30 |
---|---|
Collection: |
The Centre for the Study of Existential Risk |
Publisher: | University of Cambridge |
Copyright: | Glenn Jobson |
Language: | eng (English) |
Distribution: | World (downloadable) |
Keywords: | CRASSH; CSER; Murray Shanahan; |
Explicit content: | No |
Aspect Ratio: | 16:9 |
Screencast: | No |
Bumper: | UCS Default |
Trailer: | UCS Default |
Available Formats
Format | Quality | Bitrate | Size | |||
---|---|---|---|---|---|---|
MPEG-4 Video | 1280x720 | 2.99 Mbits/sec | 1.95 GB | View | Download | |
MPEG-4 Video | 640x360 | 1.93 Mbits/sec | 1.26 GB | View | Download | |
WebM | 1280x720 | 2.76 Mbits/sec | 1.82 GB | View | Download | |
WebM | 640x360 | 664.85 kbits/sec | 438.26 MB | View | Download | |
iPod Video | 480x270 | 499.29 kbits/sec | 325.47 MB | View | Download | |
MP3 | 44100 Hz | 249.96 kbits/sec | 164.77 MB | Listen | Download | |
Auto * | (Allows browser to choose a format it supports) |