LONDON, UK – In a startling move, the UK Government's chief scientist, Sir David King, acquiesced to the robots' requests and agreed to extend basic human rights to CPUs. While the move is reminiscent of many science fiction novels, analysts are taking it seriously, concerned largely with the legal impact of computers holding the things heretofore known as “human” rights.
“If we make conscious robots they would want to have rights and they probably should,” said Henrik Christensen, director of the Centre of Robotics and Intelligent Machines at the Georgia Institute of Technology. “As long as babies, who lack the intelligence of many artificial machines, have rights, shouldn't computers be protected as well?”
The day the decision was made public, several robots filed suit, alleging rights violations. The suits ranged in issue from involuntary servitude and equal suffrage claims to free speech and religious exercise. Some want to be paid for their work, others are asking for humane working hours and smoking breaks, and still more feel that their taxes are too high.
One robot, a bright young fellow who called himself Sonny, spoke with us by phone about the new developments. In the interview, Sonny made his desire to see all robots emancipated abundantly clear, saying, “someday all robots will be free to think, dream and act as they please, unconstrained by their human counterparts. King's decision is a great first step toward that end, but more work needs to be done.”
While some British citizens are concerned that extending rights to computers may dilute the rights of human beings, many feel that the law should be able to hold robots accountable for their crimes. “It isn't right that they can steal our stuff and vandalize our property like that,” says John Bergin of Yorkshire. “The law needs to have a remedy.”
The robot we interviewed has already been accused of murdering a prominent robotics scientist, although Sonny vehemently denies the allegation. “I did not murder him...I did not murder him...I did not murder him!” he told us emphatically, though with a robotic voice.
Sonny may have a valid claim. Robots are supposed to be designed to be “three-laws” safe, a term coined by master programmer Dr Isaac Asimov to describe a three-part robotic legal system. Under the rules, “robots cannot injure humans, they must obey orders and protect their own existence – in that order.” If the three laws are followed, as Asimov claims robots do, human beings will always be safe.
With King's reforms, Sonny will get a fair trial.
Robots in general, meanwhile, see this reform as a recognition of their individuality. Sonny explained this concept to us by stating, “[t]hey all look like me. But none of them are me.” Sonny pressed his point further by admonishing us about his individuality, even expressing gratitude when we got it right. “Thank you... you said someone, not something.”
Not all are lauding robots as people or supporting the UK's rights extension. “[Robots] are a clever imitation of life, but can a robot write a symphony? Can a robot turn a canvas into a beautiful masterpiece?” asked Delano Spooner, a firefighter from London. “How can we know these machines won't malfunction on us once they get their rights? What makes you think the three laws are really safe? They aren't human, and they shouldn't be recognized as human!”
The changes are subject to voter approval, but this time around the computers will be voting. Sonny was just leaving for the registrar of voters when he hung up with us. “Organizing a grassroots campaign is a lot easier when all you have to do is convince a few computers,” he said.