An ethical case for government technology contracts

The following was taken from a term paper that I wrote for UGBA 107: The Social, Political, and Ethical Environment of Business. The opinions below were written in an academic context and may or may not reflect my actual opinions on the subject matter.

To view the paper as a PDF, click here.

Introduction

As the development of artificial intelligence and cloud computing has accelerated over the past decade, the United States Government has taken an interest in using these cutting-edge technologies for purposes involving national security and defense. Recently, employees of major technology firms – Google, Amazon, and Microsoft – have been pushing their companies to put ethics before profit when it comes to working on contracts with military or defense motivations. There exists a strong business case to take these government contracts: they often bring in hundreds of millions of dollars of stable revenue for corporations. Different corporations, however, have taken unique stances on the issue. While Amazon engages frequently with defense contractors, providing everything from cloud computing services to facial recognition technology, Google has taken an opposing stance in a set of guiding AI principles published by the company. Technology corporations are ethically justified in accepting government contracts relating to national defense, border security, and government surveillance; these contracts spur unparalleled technological innovation, are politically advantageous for corporations, and provide a platform for private companies to contribute to public efforts to prepare for national and international crises.

Background

Technology companies provide a wide variety of services to the Department of Defense (DoD) and the Department of Homeland Security (DHS). An analysis of requisitions from 2015-2019 reveals millions of dollars of purchases spanning computers, hosting services, enterprise software, and similar products from nearly every large tech firm. However, the degree of involvement in producing military-grade technology and weapons differs. Consulting-focused technology firms, such as Palantir, explicitly create custom software products for government agencies, while involvement from more traditional tech companies often centers on tailoring existing AI or cloud computing products to work in military contexts. Some corporations, like Google, are tasked with providing research-driven innovations for military contexts. Others, including Apple, exclusively provide hardware products, whereas firms like Microsoft and Oracle focus on server hosting services. Involvement in military contracts may be direct or indirect; whereas Palantir directly consults with ICE, for instance, Amazon provides the cloud infrastructure that underlies Palantir’s products. Nearly every large technology firm plays a role in this web of interdependence; thus, an ethical analysis must take into account indirect influences and involvements as well as direct ones.

The Innovation Argument

Government investments in technology through defense contracts often spur unparalleled innovation followed by rapid periods of growth; the Space Race of the 1960s remains one of the strongest historical examples of this to date. The Defense Advanced Research Projects Agency’s (DARPA) packet-switching initiative of 1969, known as ARPANET, highlights the benefits of developing cutting-edge technology under the purview of the United States military. A comparison to the modern-day case of Project Maven, a multi-million dollar Pentagon AI contract, provides a strong ethical justification for the now-terminated contract.

DARPA was founded in response to the Soviet Union’s successful launch of the Sputnik satellite; soon after, the agency was tasked with improving a command-and-control system for missile communications. In this process, the agency, working closely with academic researchers at MIT and UCLA, architected an early packet-switching framework – a technology that prefigured the vast infrastructure powering the modern-day internet. DARPA subsequently awarded a multi-million dollar contract to Bolt Beranek and Newman (BBN) to industrialize the design and build the routers that would interlink this network of computers.
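As a rough conceptual sketch (and not a description of ARPANET’s actual protocols), the core idea behind packet switching can be illustrated in a few lines of Python: a message is broken into small, individually numbered packets that can travel independently through the network and be reassembled at the destination, even if they arrive out of order.

```python
# Conceptual illustration only – not ARPANET's real protocol stack.
import random
from dataclasses import dataclass

@dataclass
class Packet:
    seq: int        # position of this chunk within the original message
    payload: bytes  # the chunk of data carried by this packet

def packetize(message: bytes, size: int = 8) -> list:
    """Split a message into fixed-size, sequence-numbered packets."""
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [Packet(seq=i, payload=chunk) for i, chunk in enumerate(chunks)]

def reassemble(packets: list) -> bytes:
    """Restore the original message regardless of arrival order."""
    return b"".join(p.payload for p in sorted(packets, key=lambda p: p.seq))

packets = packetize(b"Packets may take different routes across the network.")
random.shuffle(packets)  # simulate out-of-order arrival
assert reassemble(packets) == b"Packets may take different routes across the network."
```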

Project Maven, announced in 2017 by the DoD, was one of the Pentagon’s first moves into the territory of using AI for military purposes. In a press report, the DoD described the project as focusing primarily on “computer vision – an aspect of machine learning and deep learning that autonomously extracts objects of interest from moving or still imagery.” Google was awarded the contract in 2017; when the company’s involvement became public, thousands of Google employees protested the decision to pursue the development of military technology. In June 2018, despite the potential for the contract to grow from $9 million into a $250 million engagement, Google executives announced that they would not renew it.
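To make “autonomously extracting objects of interest from imagery” concrete, the sketch below applies a generic, publicly available object-detection model to a single image using the open-source torchvision library. It is purely illustrative: the image file street.jpg is a hypothetical placeholder, and nothing here reflects the systems actually built under Project Maven.

```python
# Illustrative only: a generic pretrained object detector applied to one image.
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

# Hypothetical input image; any RGB photo would do.
image = convert_image_dtype(read_image("street.jpg"), torch.float)

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")  # pretrained on the public COCO dataset
model.eval()

with torch.no_grad():
    prediction = model([image])[0]  # bounding boxes, class labels, confidence scores

# Report only reasonably confident detections.
for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
    if score > 0.8:
        print(f"class={label.item()} score={score:.2f} box={box.tolist()}")
```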

Technology motivated by warfare does not necessarily translate into direct harm to human life; the use of such technology, moreover, is far from bounded to a single application. Motivated by the desire of the President of the United States to control nuclear weapons more effectively, ARPANET has been credited by both technologists and historians with laying the groundwork for a vast network of interconnected computers. Moreover, BBN – a private corporation – played an essential role in the development and implementation of this architecture across the US, indicating the capability of private industry to industrialize advancements in technology for widespread use. Both of these assertions reveal a particular weakness in a deontological analysis of this scenario: focusing purely on motivation, as a deontologist might, could result in a stance firmly against any form of military engagement on ethical grounds, disregarding any future benefit encapsulated by the development of new innovations. Conversely, a utilitarian approach would strongly support both BBN and Google: by taking its contract, BBN provided the technology that enabled the implementation of the internet, which uncontroversially improved life for billions; by developing computer vision algorithms through Project Maven, Google could be building general-purpose neural networks capable of advancing fields with far larger impacts than military technology alone.

Moreover, military contracts are often framed by the United States government within a broader goal of influencing and sponsoring scientific research to strengthen the United States as a whole. J.C.R. Licklider, who directed DARPA’s Information Processing Techniques Office, noted the strong parallels between military and research innovation: “many of the problems will be essentially as important, in the research context as in the military context.” While computational research occurs in academia and in industry, the involvement of the DoD provides a unique advantage: the ability to provide a seemingly unlimited amount of funding – potentially $250 million, in Google’s case – to sponsor projects that may be far out of reach and out of scope for profit-driven companies and financially strained universities. The DoD is a critical component of the trifecta of government, industry, and academia that drives scientific innovation and progress in the United States; thus, there exists a strong ethical justification, based on potential future societal good, for Google to pursue Project Maven and the advancement of AI technology.

The Political Argument

Corporate executives should not have to justify licensing technology to the government; rather, the government should be held accountable by the general public through democratic means. The conflation of politics with business leads to precarious ethical positions involving partisanship, corruption, and inequity – outcomes that directly conflict with basic corporate social responsibility principles. Palantir’s involvement with U.S. Immigration and Customs Enforcement (ICE) provides insight into the complications of weighing business decisions against political ones, highlighting the ethical argument for maintaining a firm, apolitical stance.

Palantir, a technology firm founded in 2004, conducts a majority of its business through government contracts. In 2017, Palantir faced widespread public criticism following reports that the company was involved in building the Investigative Case Management (ICM) and FALCON systems for ICE – two data management and case tracking systems that critics asserted made Palantir guilty of “providing the engine for Donald Trump’s Deportation Machine”. In August 2019, Palantir announced that it had renewed its contract with ICE; earlier that month, the Trump Administration had come under fire for family separation policies. The company faced even stronger disapproval from members of the progressive tech community, disapproval that in some cases inhibited Palantir’s recruiting efforts on university campuses.

For Palantir, there exists a strong business case to continue contracting with federal agencies, including ICE; the company was founded on the fundamental premise of addressing complex governmental needs. The ethical case, however, is more precarious. In an article in the Columbia Law Review, NYU research professor Kate Crawford raises concerns regarding the systems Palantir provides to ICE: “few publicly available documents note how constitutional accountability is allocated in each system, especially within joint public-private endeavors.” Crawford, among others, questions Palantir’s responsibility in supporting an administration that endorses the separation of migrant parents from their children. The phrasing of this argument is essential: the assertion is a political one, not an ethical one. Rather than the technology itself being called into question, what is at stake is the company’s affiliation with the administration that directs the technology’s use. In these circumstances, ethical lines are blurred by political ones; choosing a side in the debate over border policy forces Palantir into taking a political stance, a stance that may change depending on the leaders of the federal government. Analyzed through the lens of the Friedman approach to corporate social responsibility, Palantir’s sole obligation is to its shareholders; accepting a contract with ICE falls under the purview of increasing profits, whereas regulating the ethics of this decision falls under the responsibility of the government.

The interconnectedness of the technology industry further complicates the ethics behind fulfilling government contracts. Palantir, for example, uses Amazon Web Services for hosting; in July 2019, more than 500 Amazon employees demanded that Amazon cut ties with Palantir. The supply chain of internet companies is a vast interconnected web of enterprise software, a network that generates billions of dollars of revenue for the global economy. Politically influenced decisions made at a corporate level have far further-reaching externalities than the simple termination of a contract, externalities that can span job losses and lost revenue across the industry. In this context, there exists a clear ethical argument to justify the ICE contract: it defers intensely political issues like border security to democratically enforced ethical decision-making while also strictly upholding the shareholder-focused approach to CSR.

The Social Argument

In times of crisis, Americans turn to private corporations to use whatever tools are necessary to prevent the loss of human life. While this role has traditionally fallen to industrial corporations, in modern-day crises technology companies often serve at the forefront of the collective response. A comparison of the controversial Joint Enterprise Defense Infrastructure (JEDI) Pentagon cloud computing contract of 2018 with the tech industry’s response to the COVID-19 pandemic provides ethical justification for the JEDI project. Large technology firms, under the premise of Dayton’s views on CSR, have an ethical obligation to help maintain the safety and security of all American citizens, and maintaining critical government infrastructure is one aspect of this mission.

JEDI is a ten-year contract focused on bringing cloud computing to the DoD. A New York Times article notes the importance of the contract to the Pentagon, citing its aging infrastructure. While Amazon, Oracle, and Google all pursued the contract, in October 2019 the Pentagon awarded it to Microsoft. A year earlier, Microsoft employees had published an open letter to company executives urging them to reconsider bidding on JEDI: “Many Microsoft employees don’t believe that what we build should be used for waging war.”

COVID-19 has exposed a significantly inadequate medical infrastructure for data tracking and monitoring in America. In response to the pandemic, America’s largest technology companies – notably, Google and Apple – are stepping in and rapidly creating digital infrastructure spanning unemployment application software, COVID-19 screening services, and Bluetooth-based contact-tracing technology. In a joint statement, Google and Apple note the urgency of the pandemic as justification for their drastic response: “There has never been a more important moment to work together to solve one of the world’s most pressing problems.”
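The central privacy-preserving idea behind Bluetooth contact tracing can be sketched in a few lines. The toy model below is a loose illustration of the general approach – devices broadcast rotating random identifiers and later match them locally against identifiers published by users who test positive – and is not the actual Apple/Google Exposure Notification specification.

```python
# Toy sketch of decentralized Bluetooth contact tracing (not the real protocol).
import secrets

class Device:
    def __init__(self) -> None:
        self.broadcast_history = []  # identifiers this device has broadcast
        self.heard = set()           # identifiers heard from nearby devices

    def new_identifier(self) -> bytes:
        """Generate a fresh random identifier, unlinkable to the user's identity."""
        rpi = secrets.token_bytes(16)
        self.broadcast_history.append(rpi)
        return rpi

    def observe(self, identifier: bytes) -> None:
        """Record an identifier received over Bluetooth from a nearby device."""
        self.heard.add(identifier)

    def check_exposure(self, published_positive_ids: set) -> bool:
        """Match locally against identifiers published by diagnosed users."""
        return not self.heard.isdisjoint(published_positive_ids)

alice, bob = Device(), Device()
bob.observe(alice.new_identifier())       # Alice and Bob spend time near each other
positives = set(alice.broadcast_history)  # Alice later reports a positive test
print(bob.check_exposure(positives))      # True: Bob learns of a possible exposure
```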

In the case of COVID-19, technology firms use the principles of utilitarianism – serving the greater good – to justify stepping in to fill a void left by the aging medical infrastructure that powers America’s health care system. However, when it comes to national defense and security, these firms face difficulties assuring their stakeholders that the technologies being developed rest on ethical foundations. The very utilitarian principles used to justify the COVID-19 response could be used to justify upgrading the infrastructure protecting over 300 million American citizens. On the matter of “building technology to wage war,” a deontological approach could also be used to support the JEDI contract: considering that the DoD’s mission statement is to “provide the military forces needed to deter war and to protect the security of our country,” providing cloud computing services to achieve this goal could be aligned with the DoD’s motivation of maintaining peace, rather than building weaponry to wage war.

If the United States were to enter a war-centric crisis, technology companies could find themselves in a position where stepping in and building more direct military technology would be billed as, in the words of Google and Apple, an “important moment to work together to solve one of the [US’s] most pressing problems.” During periods of normalcy, working with the government to build preventative technologies for use in times of crisis could reduce the need for more drastic steps in the midst of a crisis, while also leaving the country better positioned to respond when one arrives. On this basis, technology firms are not only justified in accepting government contracts such as JEDI; they have an ethical and moral obligation to assist with projects that support the upkeep of the American government’s critical infrastructure.

Counter Argument

Critics of tech companies’ involvement in military and defense initiatives argue that by providing their services to the government, these companies violate their fundamental principles. Concerns raised by employees have an ethical foundation rooted in deontology; many in tech are strictly opposed to building technologies that could be used to achieve controversial governmental initiatives.

ARPANET and its parallels to Google’s AI initiatives indicate a necessity to focus on a utilitarian perspective of technology development, rather than a deontological one. While the development and advancement of technology may be originally motivated by defense purposes, these technologies can have much more widespread effects than simply providing a platform for “violent application.” AI technology developed under the purview of DoD applied to healthcare, for example, could have the potential to save millions of lives in the future.

Critics of Palantir claim that the company has a responsibility to stand up against the inhumane policies and practices of ICE. Consider, however, the case in which Palantir – and other technology corporations – begin making a number of politically influenced decisions; these firms’ only legal obligation is to their shareholders, so the majority of American stakeholders who would be affected by such decisions would not be represented in the decision-making behind them. Alternatively, placing this ethical responsibility on the government empowers the individuals who may be affected by it; American citizens have the power to elect or remove officials from office and consequently contribute to the ethics behind technology policy. If Palantir opts out of building and maintaining critical database technology for ICE, it effectively exercises the power to actively shape immigration law – a power often justified on the basis of ethical and moral principles. However, exercising this power may be unethical in itself; the board of directors of Palantir is far from a democratically elected body and is hardly representative of the population of the United States of America. Ultimately, regulation over the technology that Palantir develops falls under the purview of the legislative branch of government, a body that is democratically elected and administered in a fair and just manner.

Conclusion

When designing software from a security perspective, software engineers often refer to Kerckhoffs’ principle – the principle that a system should remain secure even if an adversary knows everything about it except the secret key. Similarly, technology should be built with the assumption that third parties and governments will have access to it in the future, where legislation acts as the key that unlocks this technology for use. For companies, taking an ethical stance against a contract rarely stops the government from achieving its ultimate goals; after Google declined to renew its Project Maven contract over employee concerns, the Pentagon soon announced that Palantir had accepted the project. Even if no private companies were willing to support the AI efforts of the DoD, the department would still have access to the AI tools that Google and Microsoft sell to corporations across America. Only legislation stands in the way of using the technologies created by private companies for military purposes. Even if their motivation is to generate advertising revenue using advanced AI techniques, tech firms should consider the consequences of these technologies being used for purposes that violate their ethical standards – well before making them available for public use.
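As a small illustration of the principle (a sketch using the open-source `cryptography` package, not any particular government system): modern encryption libraries publish their algorithms in full, and confidentiality rests entirely on keeping the key secret.

```python
# Kerckhoffs' principle in practice: the Fernet scheme is publicly documented,
# yet ciphertexts stay confidential as long as the key alone remains secret.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # the only secret in the system
cipher = Fernet(key)

token = cipher.encrypt(b"assume the adversary knows everything but the key")
print(cipher.decrypt(token))  # only a holder of `key` can recover the plaintext
```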


Sources

  1. “Our Principles,” n.d. https://ai.google/principles/.
  2. “EZSearch.” fpds.gov. Accessed April 11, 2020. https://www.fpds.gov/.
  3. Lukasik, S. (2010). Why the ARPANET was built. IEEE Annals of the History of Computing, 33(3), 4-21.
  4. “Project Maven to Deploy Computer Algorithms to War Zone by Year’s End.” U.S. Department of Defense. Accessed April 11, 2020. https://www.defense.gov/Explore/News/Article/Article/1254719/project-maven-to-deploy-computer-algorithms-to-war-zone-by-years-end/.
  5. Wakabayashi, Daisuke, and Scott Shane. “Google Will Not Renew Pentagon Contract That Upset Employees.” The New York Times. The New York Times, June 1, 2018. https://www.nytimes.com/2018/06/01/technology/google-pentagon-project-maven.html.
  6. Woodman, Spencer. “Palantir Provides the Engine for Donald Trump’s Deportation Machine.” The Intercept, March 2, 2017. https://theintercept.com/2017/03/02/palantir-provides-the-engine-for-donald-trumps-deportation-machine/.
  7. Casey, Alexandra, and Angelina Wang. “Under Pressure, Palantir Cancels UC Berkeley Information Session.” The Daily Californian, September 26, 2019. https://www.dailycal.org/2019/09/24/under-pressure-palantir-cancels-uc-berkeley-information-session/.
  8. Crawford, K., & Schultz, J. (2019). AI SYSTEMS AS STATE ACTORS. Columbia Law Review, 119(7), 1941-1972.
  9. Sandler, Rachel. “Internal Email: Amazon Faces Pressure From More Than 500 Employees To Cut Ties With Palantir For Working With ICE.” Forbes. Forbes Magazine, July 16, 2019. https://www.forbes.com/sites/rachelsandler/2019/07/11/internal-email-amazon-faces-pressure-from-more-than-500-employees-to-cut-ties-with-palantir-for-working-with-ice/#121a87887539.
  10. Conger, Kate, David E. Sanger, and Scott Shane. “Microsoft Wins Pentagon’s $10 Billion JEDI Contract, Thwarting Amazon.” The New York Times. The New York Times, October 25, 2019. https://www.nytimes.com/2019/10/25/technology/dod-jedi-contract.html.
  11. Microsoft Employees. “An Open Letter to Microsoft: Don’t Bid on the US Military’s Project JEDI.” Medium. Medium, October 16, 2018. https://medium.com/s/story/an-open-letter-to-microsoft-dont-bid-on-the-us-military-s-project-jedi-7279338b7132.
  12. “COVID-19 Map.” Johns Hopkins Coronavirus Resource Center. Accessed April 11, 2020. https://coronavirus.jhu.edu/map.html.
  13. Elias, Jenn. “Google Creates Online Unemployment Application with State of New York.” CNBC. CNBC, April 9, 2020. https://www.cnbc.com/2020/04/09/google-creates-online-unemployment-application-with-state-of-new-york.html.
  14. “Apple Releases New COVID-19 App and Website Based on CDC Guidance.” Apple Newsroom, March 27, 2020. https://www.apple.com/newsroom/2020/03/apple-releases-new-covid-19-app-and-website-based-on-CDC-guidance/.
  15. Mickle, Tripp, and Rob Copeland. “Apple, Google Partner on Ambitious Project to Alert Users to Coronavirus Contact.” The Wall Street Journal. Dow Jones & Company, April 10, 2020. https://www.wsj.com/articles/apple-google-partner-on-coronavirus-contact-tracing-technology-11586540203.
  16. United States Department of Defense. Accessed April 11, 2020. https://archive.defense.gov/about/.
  17. Ongweso Jr., Edward. “Palantir’s CEO Finally Admits to Helping ICE Deport Undocumented Immigrants.” Vice, January 24, 2020. https://www.vice.com/en_us/article/pkeg99/palantirs-ceo-finally-admits-to-helping-ice-deport-undocumented-immigrants.
  18. Petitcolas, Fabien A. P. “Kerckhoffs’ Principle.” SpringerLink. Springer, Boston, MA. https://link.springer.com/referenceworkentry/10.1007/978-1-4419-5906-5_487.
  19. “Palantir Wins New Pentagon Deal With $111 Million From the Army.” Bloomberg, December 14, 2019. Accessed April 11, 2020. https://www.bloomberg.com/news/articles/2019-12-14/palantir-wins-new-pentagon-deal-with-111-million-from-the-army.