In March 2000, Goldcorp, a Canadian gold mining company, went public with a problem that had plagued it for years: company engineers believed a 55,000-acre property held a large amount of gold, but they couldn't find it. So the company put all its relevant geological data online and offered a $500,000 prize to whoever most accurately predicted where the gold lay. With the public's input, Goldcorp located more than $3 billion in previously undiscovered gold on its holdings.
Goldcorp's approach was one of the first, and most spectacularly successful, examples of crowdsourcing. Governments and businesses alike are increasingly turning to open calls for public input for help in solving difficult, time-intensive problems. Crowdsourcing drove overnight sensations such as YouTube and Wikipedia and is fast working its way into all sectors of corporate life, offering solutions for tasks as varied as filling in data sets, predicting the actions of governments, and solving complex protein-folding problems in biochemistry.
Now the intelligence community is looking into crowdsourcing with the help of two George Mason University professors. The question: Can crowdsourcing enhance national security?
In a project that is part competition and part research study, Charles Twardy and Kathryn Laskey are assembling an online team of more than 500 forecasters who make educated guesses about a series of world events, from disease outbreaks to agricultural trends to political developments.
They are competing with four other teams led by professors at several universities. Each differs in its approach, but all are studying how crowdsourcing can be used.
At stake is grant money provided by the Intelligence Advanced Research Projects Activity, part of the Office of the Director of National Intelligence, which heads the nation's intelligence community. IARPA spokeswoman Cherreka Montgomery said the project's goal is to develop methods that refine and improve crowdsourcing in ways useful to intelligence analysts.
"It's all about strengthening the capabilities of our intelligence analysts," Montgomery said. George Mason received a $2.2 million grant from IARPA to conduct the study. If the team remains in the competition for the full four years - weaker teams are at risk of being discontinued - the grant will be increased to $8.2 million.
And if analysts can use crowdsourcing to better gauge the likelihood of seemingly unpredictable world events, they can help policymakers prepare and develop smarter responses. In a hypothetical example, a crowd-powered prediction about the outbreak of popular uprisings in the Middle East could shape the dossier handed to decision-makers at the highest levels.
The program at George Mason is called DAGGRE, short for Decomposition-based Aggregation. The researchers have used blog postings, Twitter and other means to get the word out about their project to potential participants. No specialized background is required, though a college degree is preferred.
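The name points at the mechanics: hard questions are broken into simpler conditional ones (decomposition), and many individual probability estimates are merged into a single crowd forecast (aggregation). The Python sketch below is a toy illustration of that idea, not DAGGRE's actual algorithm, which isn't described here; it assumes an unweighted average (a linear opinion pool) over hypothetical crowd estimates, recombined with the law of total probability.

```python
from statistics import mean

def aggregate(estimates):
    """Unweighted linear opinion pool: average the crowd's probability
    estimates. Purely illustrative; DAGGRE's real aggregation method
    is more sophisticated than a plain average."""
    return mean(estimates)

# Decomposition: rather than asking "Will there be an outbreak?" outright,
# pose simpler conditional questions and recombine the answers with the
# law of total probability:
#   P(outbreak) = P(outbreak | detected) * P(detected)
#               + P(outbreak | not detected) * P(not detected)
# All numbers below are hypothetical crowd estimates.
p_detected = aggregate([0.30, 0.25, 0.40])           # P(virus detected)
p_outbreak_given_yes = aggregate([0.60, 0.70])       # P(outbreak | detected)
p_outbreak_given_no = aggregate([0.05, 0.10, 0.05])  # P(outbreak | not detected)

p_outbreak = (p_outbreak_given_yes * p_detected +
              p_outbreak_given_no * (1 - p_detected))
print(f"Combined crowd forecast: P(outbreak) = {p_outbreak:.2f}")
```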
Military and intelligence researchers have long studied ways to improve the ability to predict the future. In 2003, the Defense Advanced Research Projects Agency launched research to see whether a terrorist attack could be predicted by allowing speculative trading in a financial market, in which people would make money on a futures contract if they bet on a terrorist attack occurring within a designated time frame. The theory was that a spike in the market could serve as a trip wire signaling that an attack was under way. But some found the idea ghoulish, others objected to the notion that a terrorist could profit by carrying out an attack, and the research was halted.
Laskey said George Mason's research is similar in some ways to the DARPA project, but nobody in the GMU project will profit from accurate predictions. Accurate forecasters are instead rewarded through a point system, and a leaderboard of sorts lets participants measure their success. Some can also choose to receive a small stipend for their time, but it is not tied to how they answer questions.
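The article doesn't spell out how the points are computed, but probability forecasts are conventionally graded with a proper scoring rule such as the Brier score, which rewards forecasts that are both correct and confident. A minimal sketch of that convention, with hypothetical participants and outcomes:

```python
def brier_score(forecast, outcome):
    """Squared error between a probability forecast and what actually
    happened (outcome is 1 if the event occurred, 0 if not). Lower is
    better; an uninformative 50/50 guess always scores 0.25."""
    return (forecast - outcome) ** 2

# Hypothetical data: (participant, forecast probability, actual outcome).
forecasts = [
    ("alice", 0.90, 1),
    ("bob",   0.40, 1),
    ("carol", 0.10, 0),
    ("alice", 0.20, 0),
]

# Leaderboard: rank participants by average Brier score, best first.
scores = {}
for name, prob, outcome in forecasts:
    scores.setdefault(name, []).append(brier_score(prob, outcome))

for avg, name in sorted((sum(s) / len(s), name) for name, s in scores.items()):
    print(f"{name}: {avg:.3f}")
```

A proper scoring rule like this makes honesty the best policy: a participant minimizes the expected score only by reporting the probability he or she actually believes.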
Participants come from all walks of life. While Twardy said he would love to have, say, agronomists on his team to help forecast European policies and responses to mad cow disease and the cattle trade, the overriding principle is that people from varied backgrounds can contribute to the crowd's collective wisdom, so participation is not restricted by field of expertise.
"At some level, you cannot predict the future," Twardy said. "But you can do a lot better than just asking an expert."
But when it comes to gathering intelligence, is that true?
Avoiding analyst bias
Crowdsourcing would help avoid analyst bias, a significant problem in the field. It could also help governments rapidly address time-sensitive problems that traditionally would go unsolved, as the relief effort in Haiti after the 2010 earthquake showed. Rescue efforts were coordinated at first using standard top-down methods, but after realizing that rescuers were not locating survivors quickly enough, officials went public with a request for help.
Thousands of volunteers from around the world combed through tweets, social networks and text messages to relay the real-time locations of people needing help or rescue. Officials estimate that crowdsourcing the rescue operation allowed emergency responders to analyze data ten times faster than otherwise possible and to reduce analytical bias in the process.
However, crowdsourcing has limited application in a national security context. Many traditional intelligence questions, involving state secrets about weapons of mass destruction, nuclear stockpiles, or double agents, simply cannot be scrutinized by the general public. Regardless of technological advances, governments will always have secrets that require careful protection.
Crowdsourcing also suffers from a motivation problem sometimes described as a "tragedy of the commons." People often participate in such projects because of the attention they receive from other group members; researchers studying YouTube have found that the more attention contributors get for their posts, the more likely they are to keep posting. Twardy noted the George Mason study has already drawn more than 500 participants, but only about half are actively participating. The study continues to recruit as participants drop out over its four-year course.
But solid intelligence analysis should be based on reality, not peer response. Intelligence analysts may need to present conclusions that are widely unpopular. Crowdsourcing, while beneficial for perspective and open source analysis, does not change the analyst's independent job of protecting the national security of the United States.
Crowdsourcing is like any other analytical tool: it has its uses, but it must be applied with circumspection to ensure the protection of our nation's secrets and to guard against turning national security into a popularity poll.
Micah Walters is a government major at Patrick Henry College.
The Associated Press contributed to this report.
Castella, Tom. "Should we trust the wisdom of crowds?" BBC News, 5 July 2010. http://news.bbc.co.uk/2/hi/uk_news/magazine/8788780.stm
Heinzelman, Jessica, and Carol Waters. "Crowdsourcing Crisis Information in Disaster-Affected Haiti." United States Institute of Peace, Special Report 252, October 2010.
Huberman, Bernardo A., et al. "Crowdsourcing, attention and productivity." Journal of Information Science 35, no. 6 (2009): 758-765.
Marsden, Paul. "Contagious Crowdsourcing." Contagious Magazine, no. 18.
Nystrom, Michael. "Goldcorp, Wikinomics and Changing the World." Bull not Bull 2, no. 1, 19 January 2007. http://www.bullnotbull.com/archive/wikinomics.html