- One of the key figures urging Trump to use Google's data to identify potential mass shooters is a longtime friend of the president and a former NBC executive.
- Analysts argue that using big tech algorithms to predict who might commit a mass shooting risks flagging large numbers of innocent people.
- The White House has declined to answer questions about whether Trump is considering enlisting big tech companies to help identify potential mass shooters.
One of the chief proponents urging the Trump administration to use big tech companies to collect data on users who could potentially commit violent acts is a former NBC executive and long-time friend of President Donald Trump.
Reports show former NBC chairman Bob Wright, a long-time friend of the president, has briefed Trump officials on a proposal to create an agency called the Health Advanced Research Projects Agency, or HARPA, to design inventive ways of using data to prevent violent incidents.
Ivanka Trump asked people pushing for the new agency if it could prevent mass shootings, one person familiar with the conversations, who spoke on the condition of anonymity because of the sensitive details, told The Washington Post on Sept. 9. Her questions came after the shootings in El Paso, Texas, and Dayton, Ohio.
HARPA would develop “breakthrough technologies with high specificity and sensitivity for early diagnosis of neuropsychiatric violence,” according to a copy of the proposal. “A multi-modality solution, along with real-time data analytics, is needed to achieve such an accurate diagnosis.”
The document notes that such data collection would rely on new forms of technology, including Apple Watches, Amazon Echo and Google Home devices. Geoffrey Ling, the lead scientific adviser on HARPA, told reporters in August that the plan would require enormous amounts of data and "scientific rigor."
Wright did not respond to the Daily Caller News Foundation's request for comment made through his charity, Autism Speaks. The White House has also repeatedly declined to comment on this story.
Some analysts worry HARPA sets a potentially dangerous precedent.
“I would love if some new technology suddenly came along that would help us identify violent risk, but there’s so many things about this idea of predicting violence that doesn’t make sense,” Marisa Randazzo, former chief research psychologist for the U.S. Secret Service, told reporters.
Such a program would probably flag tens or even hundreds of thousands more possible suspects than actual shooters, Randazzo noted, adding that there is a high likelihood of false positives. It would be difficult to determine which flagged people were genuinely at risk of acting violently and which were merely ordinary citizens, she added.
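Randazzo's objection is an instance of the base-rate problem: when the behavior being predicted is extremely rare, even a highly accurate screening tool flags far more innocent people than real threats. A minimal sketch with purely hypothetical numbers (none of these figures come from the article or the HARPA proposal):

```python
def expected_flags(population, actual_threats, sensitivity, specificity):
    """Return (correctly flagged, falsely flagged) counts for a screener.

    sensitivity: fraction of actual threats the tool catches.
    specificity: fraction of innocent people the tool correctly clears.
    """
    innocents = population - actual_threats
    correctly_flagged = actual_threats * sensitivity
    falsely_flagged = innocents * (1 - specificity)
    return correctly_flagged, falsely_flagged

# Hypothetical: 250 million adults, 50 actual would-be shooters in a year,
# and a screening tool that is 99% sensitive and 99% specific.
hits, false_alarms = expected_flags(250_000_000, 50, 0.99, 0.99)
print(f"correctly flagged: {hits:,.0f}")       # ~50 people
print(f"falsely flagged: {false_alarms:,.0f}")  # ~2.5 million people
```

Even at an assumed 99 percent accuracy on both measures, nearly every person the hypothetical tool flags would be innocent, which is the "tens, or hundreds of thousands" dynamic Randazzo describes.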
Conservatives say a similar false-positive problem plagues them on social media platforms.
Facebook, for instance, has faced criticism for censoring conservatives, though some Democrats have also criticized the Silicon Valley giant. Democratic Massachusetts Sen. Elizabeth Warren, for one, has advocated breaking up what she considers Facebook's and Amazon's monopolies.
Conservatives, meanwhile, have hammered the company in recent years over censorship concerns. Some tech analysts argue that apparent instances of Facebook censorship are actually examples of the big tech company's algorithms producing false positives.
Emily Williams, a data scientist and researcher based in California, believes Facebook's algorithm likely has about a 70 percent success rate, meaning that roughly 30 percent of the time moderators are removing conservatives who share provocative but not prohibited content. In short, the company's algorithms cannot reliably distinguish legitimate conservative content from white nationalist content.