Ultimate search of science: artificial intelligence
Saturday, 8 November 2008
Syed Fattahul Alim
The purpose of creating machines has always been to help man with his daily chores. In the earliest era of his development he made crude tools from stone, wood and the like. The aim of making these tools was not pure fun: he used them to kill animals, dig the earth and grind corn. He also developed crude weapons to protect himself from attack by other animals, or by members of his own species hostile to him. With the discovery of metals he could develop better, more innovative and more powerful tools and implements. Historians have divided human history into epochs of civilisation such as the Stone Age, the Bronze Age and the Iron Age. What is interesting to note is that this division of human history has been made on the basis of the development of ever more improved tools and implements.
Tools and implements gave humans more ease, power and speed in performing their tasks. At a later stage they gradually became more complicated, with a view to helping people do more complicated work.
So began the machine age, which gave man still more power and speed. Powerful lifting cranes, high-speed transport and all manner of clever and powerful tools and machines increased man's ease at performing various tasks as well as his control over his surroundings. He explored the oceans, the deep underground and outer space with the help of powerful ships, submarines and rockets. Meanwhile, he also developed machines to aid him in computational tasks. With the advent of the electronic age, the machines for doing arithmetic calculations became smarter and more powerful: powerful in the sense that they could deal with more complicated computations, handle greater volumes of work and perform it at greater speed. But man was still not satisfied, for these were still very dumb slaves. With the advent of the digital age, the power and speed of these computational devices took a generational leap. This is the age of the computer. Modern computers can perform highly complicated computational tasks, and other work besides, with skill and power undreamt of at any time in the past. As their power developed further, they became more interactive. People overawed by the power, skill and smartness of modern computers sometimes even compare them to the human brain. True, the way modern computers work evinces many features of the human faculty known as intelligence. But in the final analysis they are not really intelligent, and bear no comparison with the brain of any lower animal, let alone the human brain.
However, the ultimate search of man and his science is to develop smart machines that will not only imitate the human brain in its intelligence but also outdo it. Some scientists claim that man is already close to that goal. They even predict a point, within the next two to four decades, when man will be able to develop an artificial intelligence that outsmarts him in every aspect of his intellectual ability. Such a point in time they term the 'Singularity', and they say it is just round the corner.
The Guardian's correspondent Wendy M Grossman reports below on the latest developments in Artificial Intelligence research.
They are looking for the hockey stick. Hockey sticks are the shape technology startups hope their sales graphs will assume: a modestly ascending blade, followed by a sudden turn to a near-vertical long handle. Those who assembled in San Jose in late October for the Singularity Summit are awaiting the point where machine intelligence surpasses that of humans and takes off near-vertically into recursive self-improvement.
The key, said Ray Kurzweil, inventor of the first reading machine and author of 2005's The Singularity Is Near, is exponential growth in computational power - "the law of accelerating returns". In his favourite example, at the human genome project's initial speed, sequencing the genome should have taken thousands of years, not the 15 scheduled. Seven years in, the genome was 1 per cent sequenced. Exponential acceleration had the project finished on schedule. By analogy, enough doublings in processing power will close today's vast gap between machine and human intelligence.
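Kurzweil's genome arithmetic can be sketched in a few lines. This is only an illustration of his argument, assuming a doubling period of one year (the article gives the 1 per cent figure at year seven and the 15-year schedule, but not the doubling period itself):

```python
# Kurzweil's point: 1% sequenced after 7 years looks hopeless at a
# linear rate, but if the completed fraction doubles each year (an
# assumed doubling period, for illustration), the project still
# finishes within its 15-year schedule.
fraction = 0.01  # 1 per cent complete at year 7
year = 7
while fraction < 1.0:
    year += 1
    fraction *= 2  # exponential acceleration
print(year)  # -> 14, inside the 15-year schedule
```

Seven more doublings take 1 per cent past 100 per cent, which is why the apparently stalled project finished on time.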
This may be true. Or it may be an unfalsifiable matter of faith, which is why the singularity is sometimes satirically called "the Rapture for nerds". It makes assessing progress difficult. Justin Rattner, chief technology officer of Intel, addressed a key issue at the summit: can Moore's law, which has the number of transistors packed on to a chip doubling every 18 months, stay in line with Kurzweil's graphs? The end has been predicted many times but, said Rattner, although particular chip technologies have reached their limits, a new paradigm has always continued the pace.
"In some sense - silicon gate CMOS - Moore's law ended last year," Rattner said. "One of the founding laws of accelerating returns ended. But there are a lot of smart people at Intel and they were able to reinvent the CMOS transistor using new materials." Intel is now looking beyond 2020 at photonics and quantum effects such as spin. "The arc of Moore's law brings the singularity ever closer."
Judgment day
Belief in an approaching singularity is not solely American. Peter Cochrane, the former head of BT's research labs, says for machines to outsmart humans it "depends on almost one factor alone - the number of networked sensors. Intelligence is more to do with sensory ability than memory and computing power." The internet, he adds, overtook the capacity of a single human brain in 2006. "I reckon we're looking at the 2020 timeframe for a significant machine intelligence to emerge." And, he said: "By 2030 it really should be game over."
Predictions like this flew at the summit. Imagine when a human-scale brain costs $1 - you could have a pocket full of them. The web will wake up, like Gaia. Nova Spivack, founder of EarthWeb and, more recently, Radar Networks (creator of Twine.com), quoted Freeman Dyson: "God is what mind becomes when it has passed beyond the scale of our comprehension."
Listening, you'd never guess that artificial intelligence has been about 20 years away for a long time now. John McCarthy, one of AI's fathers, thought when he convened the first conference on the subject in 1956, that they'd be able to wrap the whole thing up in six months. McCarthy calls the singularity, bluntly, "nonsense".
Even so, there are many current technologies, such as speech recognition, machine translation, and IBM's human-beating chess grandmaster Deep Blue, that would have seemed like AI at the beginning. "It's incredible how intelligent a human being in front of a connected computer is," observed the CNBC reporter Bob Pisani, marvelling at how clever Google makes him sound to viewers phoning in. Such advances are reminders that there may be valuable discoveries that make attempts at even the wildest ideas worthwhile.
Dharmendra Modha, head of the cognitive computing group at IBM's Almaden research lab, is leading a "quest" to "understand and build a brain as cheaply and quickly as possible". Last year, his group succeeded in simulating a rat-scale cortical model - 55m neurons, 442bn synapses - in the 8TB memory of a 32,768-processor IBM Blue Gene supercomputer. The key, he says, is not the neurons but the synapses, the electrical-chemical-electrical connections between those neurons. Biological microcircuits are essentially the same in all mammals. "An individual human being is stored in the strength of the synapses."
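A quick back-of-the-envelope check on the figures quoted above, assuming the conventional decimal reading of 8TB as 8 trillion bytes:

```python
# 442bn synapses simulated in 8TB of memory works out to roughly
# 18 bytes of state per synapse - consistent with Modha's claim
# that the synapses, not the neurons, dominate the problem.
neurons = 55_000_000          # 55m neurons in the rat-scale model
synapses = 442_000_000_000    # 442bn synapses
memory_bytes = 8 * 10**12     # 8TB, read as 8 trillion bytes
print(round(memory_bytes / synapses, 1))  # -> 18.1 bytes per synapse
print(synapses // neurons)    # -> about 8,000 synapses per neuron
```

The synapse count outnumbers the neuron count by roughly four orders of magnitude, which is why the memory budget is spent almost entirely on connections rather than cells.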
Smarter than smart
Modha doesn't suggest that the team has made a rat brain. "Philosophically," he writes on the subject, "any simulation is always an approximation (a kind of 'cartoon') based on certain assumptions. A biophysically realistic simulation is not the focus of our work." His team is using the simulation to try to understand the brain's high-level computational principles.
But computational power is nothing without software. "Would the neural code that powers human reasoning run on a different substrate?" the sceptical science writer John Horgan asked Kurzweil, who replied: "The key to the singularity is amplifying intelligence. The prediction is that an entity that passes the Turing test and has emotional intelligence ... will convince us that it's conscious. But that's not a philosophical demonstration."
For intelligence to be effective, it has to be able to change the physical world. The MIT physicist Neil Gershenfeld was therefore at the summit to talk about programmable matter. It's a neat trick: computer science talks in ones and zeros, but these are abstractions representing the flow or interruption of electric current, a physical phenomenon. Gershenfeld, noting that maintaining that abstraction requires increasing amounts of power and complex programming, wants to turn this on its head. What if, he asked, you could buy computing cells by the pound, coat them on a surface, and run programs that assemble them like proteins to solve problems?
Gershenfeld is always difficult for non-physicists to understand, and his video of cells sorting was no exception. Two things he said were clear. First: "We aim to create life." Second: "We have a 20-year road map to make the Star Trek replicator."
Twenty years: 2028. Vernor Vinge began talking about the singularity in the early 80s (naming it after the gravitational phenomenon around a black hole), and has always put the date at 2030. Kurzweil likes 2045; Rattner, before 2050.
Turning back time
These dates may be personally significant. Rattner is 59; Vinge is 64. Kurzweil is 60, takes 250 vitamins and other supplements a day, and believes some of them can turn back ageing. If curing all human ills will be a piece of cake for a superhuman intelligence, then the singularity carries with it the promise of immortality - as long as you're still alive when it happens.
It is in this connection between the singularity and immortality, along with the idea that sufficiently advanced technology can solve every problem from climate change to the exhaustion of oil reserves, that gives the summit the feel of a religious movement. Certainly, James Miller, assistant professor of economics at Smith College, sounded evangelical when he reviewed how best to prepare financially. He was optimistic, reviewing investment strategies and assuming retirement funds won't be needed.
HowStuffWorks founder Marshall Brain, by contrast, explained why 50 million people will lose their jobs when they can be replaced by robots. "In the whole universe, there is one intelligent species," he said. "We're in the process of creating the second intelligent species."
The anthropologist Jane Goodall may disagree. She sees a different kind of singularity - the growing ecological devastation of Africa - and worries about the disconnection between human minds and hearts. "If we're the most intellectual animal," she said, "why are we destroying our only home?"
If Goodall's singularity comes first, the other one might never happen at all - one of those catastrophes that Vinge admits as the only thing he can imagine that could stop it.