A future for drones: Automated killing
Peter Finn
The Washington Post
Sept. 19, 2011
One afternoon last fall at Fort Benning, Ga., two model-size planes took off, climbed to 800 and 1,000 feet, and began criss-crossing the military base in search of an orange, green and blue tarp.
The automated, unpiloted planes worked on their own, with no human guidance, no hand on any control.
After 20 minutes, one of the aircraft, carrying a computer that processed images from an onboard camera, zeroed in on the tarp and contacted the second plane, which flew nearby and used its own sensors to examine the colorful object. Then one of the aircraft signaled to an unmanned car on the ground so it could take a final, close-up look.
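The hand-off in that demonstration follows a simple pattern: one sensor flags a candidate, and independent platforms must each agree before it counts as confirmed. A toy sketch of that confirmation logic is below; it is purely illustrative, and every name, label, and threshold in it is hypothetical, not the actual Georgia Tech software.

```python
# Toy sketch of multi-platform target confirmation: a detection only
# becomes "confirmed" after enough independent sensors agree.
# Illustrative only; not the Fort Benning demonstration code.

from dataclasses import dataclass

@dataclass
class Detection:
    platform: str      # which vehicle made the observation
    label: str         # what the sensor believes it is seeing
    confidence: float  # sensor confidence, 0.0 to 1.0

def confirm_target(detections, min_platforms=3, min_confidence=0.8):
    """A candidate is confirmed only when enough independent
    platforms report the same label with high confidence."""
    agreeing = {
        d.platform for d in detections
        if d.label == "orange-green-blue tarp" and d.confidence >= min_confidence
    }
    return len(agreeing) >= min_platforms

sightings = [
    Detection("uav-1", "orange-green-blue tarp", 0.91),  # first aircraft
    Detection("uav-2", "orange-green-blue tarp", 0.87),  # second aircraft
    Detection("ugv-1", "orange-green-blue tarp", 0.95),  # ground vehicle close-up
]
print(confirm_target(sightings))  # True: three independent platforms agree
```

With only the two aircraft and no ground vehicle, the same function returns False, which mirrors the demonstration's insistence on a final close-up look before confirmation.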
Target confirmed.
This successful exercise in autonomous robotics could presage the future of the American way of war: a day when drones hunt, identify and kill the enemy based on calculations made by software, not decisions made by humans. Imagine aerial “Terminators,” minus beefcake and time travel.
The Fort Benning tarp “is a rather simple target, but think of it as a surrogate,” said Charles E. Pippin, a scientist at the Georgia Tech Research Institute, which developed the software to run the demonstration. “You can imagine real-time scenarios where you have 10 of these things up in the air and something is happening on the ground and you don’t have time for a human to say, ‘I need you to do these tasks.’ It needs to happen faster than that.”
The demonstration laid the groundwork for scientific advances that would allow drones to search for a human target and then make an identification based on facial-recognition or other software. Once a match was made, a drone could launch a missile to kill the target.
Military systems with some degree of autonomy — such as robotic, weaponized sentries — have been deployed in the demilitarized zone between South and North Korea and other potential battle areas. Researchers are uncertain how soon machines capable of collaborating and adapting intelligently in battlefield conditions will come online. It could take one or two decades, or longer. The U.S. military is funding numerous research projects on autonomy to develop machines that will perform some dull or dangerous tasks and to maintain its advantage over potential adversaries who are also working on such systems.
The killing of terrorism suspects and insurgents by armed drones, controlled by pilots sitting in bases thousands of miles away in the western United States, has prompted criticism that the technology makes war too antiseptic. Questions also have been raised about the legality of drone strikes when employed in places such as Pakistan, Yemen and Somalia, which are not at war with the United States. This debate will only intensify as technological advances enable what experts call lethal autonomy.
The prospect of machines able to perceive, reason and act in unscripted environments presents a challenge to the current understanding of international humanitarian law. The Geneva Conventions require belligerents to use discrimination and proportionality, standards that would demand that machines distinguish among enemy combatants, surrendering troops and civilians.
“The deployment of such systems would reflect a paradigm shift and a major qualitative change in the conduct of hostilities,” Jakob Kellenberger, president of the International Committee of the Red Cross, said at a conference in Italy this month. “It would also raise a range of fundamental legal, ethical and societal issues, which need to be considered before such systems are developed or deployed.”
Drones flying over Afghanistan, Pakistan and Yemen can already move automatically from point to point, and it is unclear what surveillance or other tasks, if any, they perform while in autonomous mode. Even when directly linked to human operators, these machines are producing so much data that processors are sifting the material to suggest targets, or at least objects of interest. That trend toward greater autonomy will only increase as the U.S. military shifts from one pilot remotely flying a drone to one pilot remotely managing several drones at once.
But humans still make the decision to fire, and in the case of CIA strikes in Pakistan, that call rests with the director of the agency. In future operations, if drones are deployed against a sophisticated enemy, there may be much less time for deliberation and a greater need for machines that can function on their own.
The U.S. military has begun to grapple with the implications of emerging technologies.
“Authorizing a machine to make lethal combat decisions is contingent upon political and military leaders resolving legal and ethical questions,” according to an Air Force treatise called Unmanned Aircraft Systems Flight Plan 2009-2047. “These include the appropriateness of machines having this ability, under what circumstances it should be employed, where responsibility for mistakes lies and what limitations should be placed upon the autonomy of such systems.”
In the future, micro-drones will reconnoiter tunnels and buildings, robotic mules will haul equipment and mobile systems will retrieve the wounded while under fire. Technology will save lives. But the trajectory of military research has led to calls for an arms-control regime to forestall any possibility that autonomous systems could target humans.
In Berlin last year, a group of robotic engineers, philosophers and human rights activists formed the International Committee for Robot Arms Control (ICRAC) and said such technologies might tempt policymakers to think war can be less bloody.
Some experts also worry that hostile states or terrorist organizations could hack robotic systems and redirect them. Malfunctions also are a problem: In South Africa in 2007, a semiautonomous cannon fatally shot nine friendly soldiers.
The ICRAC would like to see an international treaty, such as the one banning antipersonnel mines, that would outlaw some autonomous lethal machines. Such an agreement could still allow automated antimissile systems.
“The question is whether systems are capable of discrimination,” said Peter Asaro, a founder of the ICRAC and a professor at the New School in New York who teaches a course on digital war. “The good technology is far off, but technology that doesn’t work well is already out there. The worry is that these systems are going to be pushed out too soon, and they make a lot of mistakes, and those mistakes are going to be atrocities.”
Research into autonomy, some of it classified, is racing ahead at universities and research centers in the United States, and that effort is beginning to be replicated in other countries, particularly China.
“Lethal autonomy is inevitable,” said Ronald C. Arkin, the author of “Governing Lethal Behavior in Autonomous Robots,” a study that was funded by the Army Research Office.
Arkin believes it is possible to build ethical military drones and robots, capable of using deadly force while programmed to adhere to international humanitarian law and the rules of engagement. He said software can be created that would lead machines to return fire with proportionality, minimize collateral damage, recognize surrender, and, in the case of uncertainty, maneuver to reassess or wait for a human assessment.
In other words, rules as understood by humans can be converted into algorithms followed by machines for all kinds of actions on the battlefield.
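The constraints Arkin describes, checked in sequence with any uncertainty deferring to a person, amount to something like the following decision skeleton. This is a hypothetical illustration of the idea, not code from his study, and every parameter name and threshold here is invented.

```python
# Toy skeleton of the constraint-checking idea Arkin describes: every
# condition must pass before force is even considered, and uncertainty
# always defers to a human. Hypothetical illustration only.

def governor_decision(surrender_signaled, identification_confidence,
                      expected_collateral, proportional_response_available):
    if surrender_signaled:
        return "hold fire"                # recognize surrender
    if identification_confidence < 0.99:
        return "defer to human"           # uncertainty -> human assessment
    if expected_collateral > 0:
        return "hold fire"                # minimize collateral damage
    if not proportional_response_available:
        return "hold fire"                # proportionality requirement
    return "request human authorization"  # conservative default even then

print(governor_decision(False, 0.8, 0, True))  # defer to human
```

The point of the sketch is the ordering: restrictive outcomes are the default at every branch, which is what distinguishes Arkin's proposal from simple automated firing.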
“How a war-fighting unit may think — we are trying to make our systems behave like that,” said Lora G. Weiss, chief scientist at the Georgia Tech Research Institute.
Others, however, remain skeptical that humans can be taken out of the loop.
“Autonomy is really the Achilles’ heel of robotics,” said Johann Borenstein, head of the Mobile Robotics Lab at the University of Michigan. “There is a lot of work being done, and still we haven’t gotten to a point where the smallest amount of autonomy is being used in the military field. All robots in the military are remote-controlled. How does that sit with the fact that autonomy has been worked on at universities and companies for well over 20 years?”
Borenstein said human skills will remain critical in battle far into the future.
“The foremost of all skills is common sense,” he said. “Robots don’t have common sense and won’t have common sense in the next 50 years, or however long one might want to guess.”
Source:
http://www.washingtonpost.com/national/national-security/a-future-for-drones-automated-killing/2011/09/15/gIQAVy9mgK_story.html
_______________
Saturday, September 17, 2011
Five Biggest Right-Wing Lies about Solyndra
By Dave Johnson
Nation of Change
Sept. 17, 2011
The Smear Machine
The Top Five Lies
Conservatives, now echoed by corporate "mainstream" outlets, accuse the administration of corruption in the process by which Solyndra received its loan because, they claim, a major Obama donor named George Kaiser is a major investor in Solyndra. The charge is that Solyndra only received the loan guarantee as a result of campaign contributions by people "connected to" Solyndra. The problem with this is that George Kaiser was not an investor in Solyndra. According to the Tulsa World:
In an emailed statement to the Tulsa World, a representative of the George Kaiser Family Foundation said the organization made the investment through Argonaut. "George Kaiser is not an investor in Solyndra and did not participate in any discussions with the U.S. government regarding the loan," the statement said. "GKFF invests in a globally diversified portfolio across many different asset classes."
Oil-connected conservatives have been trying to kill off investment in green energy for some time. They see opportunity in hyping up a "scandal" over the bankruptcy of Solyndra as a way to attack the idea of developing a green-energy industry in the US.
Just today, the Heritage Foundation, which for months has been attacking the idea of creating green jobs, published Solyndra Scandal Ends Green Jobs Myth. (I have collected several examples of conservative attacks on green manufacturing in the post The Phony Solyndra Solar Scandal.)
This is a core line of attack by the right. By tricking the public into thinking that the purpose of government's efforts to trigger a green-energy industry was to make money for the government by investing in individual companies, they can make this look bad because one company went into bankruptcy.
But the purpose of our government's involvement in this is to help trigger an ecosystem around which a green-energy industry can grow. When a new technology is promising, it might be risky to investors, but very beneficial to us as a country to pursue it. That way we end up with a chunk of the millions of jobs and trillions of dollars that result. That benefits everyone.
The loan guarantee enabled Solyndra to get private investment, and hire researchers as well as manufacturing and other employees, to build a state-of-the-art manufacturing facility in the U.S., to develop a supply chain, to buy equipment and the other components that would make a viable business. This was part of the stimulus and all that money was moved into the economy. And all of those are still in the United States, ready to be part of scaling up a green-energy industry. So where the country is concerned, we didn’t lose at all.
This loan originated under the Bush administration—and for good reasons. Following the passage of the Energy Policy Act of 2005, the Bush administration began efforts to cultivate a U.S.-based green-energy industry. Solyndra offered a promising technology and applied for loan guarantees. Following a review by career professionals in the Department of Energy, Solyndra was asked to provide more information. A few months later, under the new Obama administration, the same career professionals received the requested information and proceeded to approve the loan.
Approving the loan under the Obama administration also helped the country, because that money went toward developing the ecosystem that creates companies and jobs. Stories about rushing the approval are meant to make it sound as if it was done to help a major campaign donor who, as point #1 above makes clear, was not an investor at all. That is the only reason the timing is treated as an issue.
The Number One Lie
The right has been trying to push the idea that something bad has happened involving Solyndra. They are calling it a "scandal." But it is entirely a manufactured scandal, like those from the Clinton era. This is what they do. Nothing bad happened.
Source:
http://www.nationofchange.org/five-biggest-right-wing-lies-about-solyndra-1316271870
________________________
Thursday, September 15, 2011
The $2 Billion UBS Incident: 'Rogue Trader' My Ass
by Matt Taibbi
Rolling Stone Magazine
Sept. 15, 2011
The news that a "rogue trader" (I hate that term – more on that in a moment) has soaked the Swiss banking giant UBS for $2 billion has rocked the international financial community and threatened to drive a stake through any chance Europe had of averting economic disaster. There is much hand-wringing in the financial press today as the UBS incident has reminded the whole world that all of the banks were almost certainly lying their asses off over the last three years, when they all pledged to pull back from risky prop trading. Here’s how the WSJ put it:
The Swiss banking giant has been struggling to rebuild trust after running up vast losses in the original financial crisis. Under Chief Executive Oswald Grubel, the bank claimed to have put in place new risk management practices, pulled back from proprietary trading and focused on a low-risk client-driven model.
All the troubled banks, remember, made similar promises in the wake of the financial crisis. In fact, some of them used the exact same language. Some will recall Goldman’s executive summary from earlier this year in which the bank pledged to respond to a "challenging period" in its history by making changes.
"We reviewed the governance, standards and practices of certain of our firmwide operating committees," the bank wrote, "to ensure their focus on client service, business standards and practices and reputational risk management."
But the reality is, the brains of investment bankers by nature are not wired for "client-based" thinking. This is the reason why the Glass-Steagall Act, which kept investment banks and commercial banks separate, was originally passed back in 1933: it just defies common sense to have professional gamblers in charge of stewarding commercial bank accounts.
Investment bankers do not see it as their jobs to tend to the dreary business of making sure Ma and Pa Main Street get their $8.03 in savings account interest every month. Nothing about traditional commercial banking – historically, the dullest of businesses, taking customer deposits and making conservative investments with them in search of a percentage point of profit here and there – turns them on.
In fact, investment bankers by nature have huge appetites for risk, and most of them take pride in being able to sleep at night even when their bets are going the wrong way. If you’re not a person who can doze through a two-hour foot massage while your client (which might be your own bank) is losing ten thousand dollars a minute on some exotic trade you’ve cooked up, then you won’t make it on today’s Wall Street.
Nonetheless, thanks to the Gramm-Leach-Bliley Act passed in 1999 with the help of Bob Rubin, Larry Summers, Bill Clinton, Alan Greenspan, Phil Gramm and a host of other short-sighted politicians, we now have a situation where trillions in federally-insured commercial bank deposits have been wedded at the end of a shotgun to exactly such career investment bankers from places like Salomon Brothers (now part of Citi), Merrill Lynch (Bank of America), Bear Stearns (Chase), and so on.
These marriages have been a disaster. The influx of i-banking types into the once-boring worlds of commercial bank accounts, home mortgages, and consumer credit has helped turn every part of the financial universe into a casino. That’s why I can’t stand the term "rogue trader," which is always tossed out there when some investment-banker asshole loses a billion dollars betting with someone else’s money.
They’re not "rogue" for the simple reason that making insanely irresponsible decisions with other peoples’ money is exactly the job description of a lot of people on Wall Street. Hell, they don’t call these guys "rogue traders" when they make a billion dollars gambling.
The only thing that differentiates a "rogue" trader like Barings villain Nick Leeson from a Lloyd Blankfein, Dick Fuld, John Thain, or someone like AIG’s Joe Cassano, is that those other guys are more senior and their lunatic, catastrophic decisions were authorized (and yes, I know that Cassano wasn’t an investment banker, technically – but he was in financial services).
In the financial press you're called a "rogue trader" if you're some overperspired 28 year-old newbie who bypasses internal audits and quality control to make a disastrous trade that could sink the company. But if you're a well-groomed 60 year-old CEO who uses his authority to ignore quality control and internal audits in order to make disastrous trades that could sink the company, you get a bailout, a bonus, and heroic treatment in an Andrew Ross Sorkin book.
In other words, "rogue traders" are treated like bad accidents and condemned everywhere from the front pages to Ewan McGregor films. But rogue companies are protected at every level of the regulatory structure and continually empowered by deregulatory legislation giving them access to our bank accounts.
There is a movement in the UK for a thing called “ringfencing” that would separate investment bankers from commercial bankers. Some people think this UBS incident will aid that movement, even though UBS can apparently absorb the loss without necessitating a bailout or endangering client accounts.
The U.S. missed its own chance for ringfencing when a proposal for a full repeal of Gramm-Leach-Bliley was routed during the Dodd-Frank negotiations.
That means we’re probably stuck here in the states with companies like Bank of America, JP Morgan Chase and Citigroup: giant commercial banks that steward trillions in client bank accounts and consumer credit while also behaving like turbocharged gamblers via their investment banking arms.
Sooner or later, this is going to blow up in our faces, and it won't be one lower-level guy with a $2 billion loss we'll be swallowing. It'll be the CEO of another rogue firm like Lehman Brothers, and it'll cost us trillions, not billions.
Source:
http://www.rollingstone.com/politics/blogs/taibblog/the-2-billion-ubs-incident-rogue-trader-my-ass-20110915
________________________
Thursday, September 01, 2011
Documents Reveal New Details About DHS Development of Mobile Body Scanners
The mobile backscatter machines cannot be American National Standards Institute “certified people scanners” because of the high level of radiation
Sept 1, 2011
EPIC has obtained more than one hundred fifty pages of documents detailing the Department of Homeland Security’s development of mobile body scanners and other crowd surveillance technology. The documents were obtained as a result of a Freedom of Information Act lawsuit brought by EPIC against the federal agency.
According to the documents obtained by EPIC, vehicles equipped with mobile body scanners are designed to scan crowds and pedestrians on the street and can see through bags, clothing, and even other vehicles. The documents also reveal that the mobile backscatter machines cannot be American National Standards Institute “certified people scanners” because of the high level of radiation output and because subjects would not know they have been scanned.
For more information see EPIC: Whole Body Imaging Technology and EPIC: EPIC v. DHS (Suspension of the Body Scanner Program).
Source:
http://epic.org/2011/08/documents-reveal-new-details-a.html
_________________