Several of CSER’s advisors, including co-founder Martin Rees, responded to edge.org’s annual question earlier this month. Stuart Russell, Max Tegmark, Nick Bostrom, George Church, Alison Gopnik, Murray Shanahan, and Lord Rees offered a range of opinions on the prospects, dangers, and possibilities of machine intelligence.
George Church considered how our current understanding of human rights might apply to artificial intelligences. Stuart Russell confronted the difficulty of aligning the values of possible machine intelligences with those of humans, suggesting that a system capable of studying humans directly could also draw on the extensive existing research into human motivations and values to understand their actions.
Other responses included that of Martin Rees, who writes that we should look far further into the future than is customary, suggesting that over a sufficiently long time frame the replacement of biological brains by machines as the world’s dominant intelligences should be regarded as inevitable. Max Tegmark responded to a number of arguments commonly deployed by those sceptical of the value of research into AI safety, and Alison Gopnik outlined the dramatic strides that AI researchers must still make to create machines with the general cognitive abilities of even small children.
Read the full responses at edge.org.