I. Introduction
Artificial Intelligence (AI) has undoubtedly caused another technological revolution. Many day-to-day applications, such as credit card approvals, financial decisions, and online trading, are increasingly handled by AI. We might not realise it, but AI is becoming “omnipresent” in our daily interactions. Whether it is scrolling through our feeds on social media, shopping for products online or being connected to automated phone operators - which sometimes truly test the patience of the caller - there is a more or less intelligent algorithm working behind the scenes to make our experience seemingly better and more efficient.[i] Outer space, historically one of the most technologically advanced domains, did not remain untouched by AI for long. Due to its suitability for the outer space environment, AI is increasingly being used in space applications.
However, the integration of AI into space applications raises some important legal questions, particularly concerning which State is responsible for potential damage caused by the AI. This short article investigates the interaction of AI with the responsibility provisions of the Outer Space Treaty and intends to provide a step towards answering the question of responsibility for the development of AI software. It explores whether the use of AI, and the corresponding development of an AI space industry, could give rise to the responsibility of the State of the software developer.
II. Narrow AI: A Brief Overview
Before we can delve deeper into the legal analysis, it seems prudent to discuss what this article means when speaking of narrow AI. This explanation serves two main purposes: (1) to familiarise the reader with what the present research means when it speaks of AI, and (2) to clearly limit the scope of this article. Narrow AI refers to AI that is programmed to solve problems of a similar nature and is thus narrow in its application. General AI, in contrast, is highly advanced, autonomous, can solve problems in any domain, is human-like in its behaviour, and is often built on machine learning.[ii]
For our purposes, this author will adopt Jacob Turner’s definition of AI as “the Ability of a Non-Natural Entity to Make Choices by an Evaluative Process”.[iii] Since all AI currently used in space applications is narrow AI, we will limit our inquiry to this specific definition of AI rather than look at more advanced general AI and the problems that such a type could cause in space applications.
III. Narrow AI, Space Law, and State Responsibility
In simple terms, State responsibility relates to the answerability of subjects of international law, ordinarily States, for their actions or omissions which lead to a breach of an international obligation, when that breach is attributable to the State.[iv] Attribution generally arises for the actions of State organs, of persons or entities equipped with governmental authority, or of persons or entities acting under the instruction or control of the State. Provided that none of these circumstances is present, a State is generally not responsible for the actions of private entities or persons.[v]
The Outer Space Treaty changes this formulation: in the context of space activities, it specifically addresses responsibility in Article VI. What makes this treaty provision unique is that it holds States directly responsible for the actions of non-governmental entities (i.e., private entities and individuals) in outer space. As long as the activities can be considered “national activities in outer space”, State Parties to the treaty accept direct international responsibility, and a separate inquiry into attribution as an action on behalf of the State is not necessary.[vi]
With the increased use of AI in space applications it seems prudent to consider whether the State where the software was developed can be held responsible under Article VI of the Outer Space Treaty. For responsibility to arise, the software development needs to be considered a national activity in outer space.
IV. National Activities in Outer Space
The interpretation of the term “national activities in outer space” is therefore an important aspect of our legal analysis. Interestingly, the treaty does not define this term, which leaves room for interpretation and has caused academic debate. To find a suitable interpretation, it is essential, in our view, to take into consideration the broader context and objectives of the Outer Space Treaty.
Authors seem to agree that any activity, even if taking place on Earth, should be considered an activity in outer space so long as it is primarily and intentionally directed toward the outer space environment.[vii]
Michael Gerhard, for example, holds that “any ‘activity’ in outer space is subject to Article VI.”[viii] Bin Cheng supports this view and likewise does not limit responsibility to specific activities.[ix]
The term “national” adds an additional layer of complexity and has been debated. Generally, two views seem to emerge: one links national activities to domestic law, that is, municipal law determines what qualifies as “national,” while other authors favour an interpretation grounded in the Outer Space Treaty itself rather than in domestic legislation.[x] The latter view is preferable, as responsibility is a question of international law, not national legislation.
As mentioned previously, Article VI of the Outer Space Treaty sets up a unique responsibility framework, making States directly responsible for the actions of non-governmental actors involved in space activities. This express departure from the customary rules of State responsibility under international law is evidence of the commitment of the State Parties to cast a wide net ensuring international responsibility for space activities, regardless of whether they involve a governmental or non-governmental entity.
Perhaps it was most clearly enunciated by Judge Manfred Lachs when he emphasized that “[t]his [Article VI] is intended to ensure that any outer space activity, no matter by whom conducted, shall be carried on in accordance with the relevant rules of international law, and to bring the consequences of such activity within its ambit”.[xi]
V. State Responsibility for Software Development in the Context of AI
In light of the above discussion, and applying the definition of national activities in outer space to the realm of AI development, I am of the view that the State of the software developer can and should be held responsible under Article VI of the Outer Space Treaty. The development of specialized AI intended for use in outer space should qualify as a national activity in outer space, as software specifically targeted at space applications is arguably primarily intended for the outer space environment.
Further, the “national” character is established through the jurisdiction and control that the State exercises over the software developer on its territory. Only the State in which the software developer is situated is in a position to effectively authorise and provide continuing supervision of the software developer, who, for the purposes of the Outer Space Treaty, is a non-governmental entity.[xii] In other words, only that State would be able to adhere to the authorisation and supervision obligations set out within the framework of Article VI. Thus, the State of the software developer, having jurisdiction and control over the entity developing the AI, assumes direct responsibility under Article VI to comply with the Outer Space Treaty and public international law.[xiii]
As this author has argued earlier, and wants to underline here again, a national activity in outer space does not require physical occurrence in outer space. Instead, it includes any activity which is primarily directed towards the outer space environment. This interpretation also makes sense in light of the evolving nature of space activities, in which technological advancements often go beyond traditional boundaries.
States will therefore have to ensure that adequate legislation is in place to indemnify themselves in instances where AI developed in their territory gives rise to State responsibility.
VI. Conclusion
As AI continues to be used in space activities, some preliminary considerations are necessary to address pre-emptively some emerging challenges. This short article argued that the State of the software developer could be considered a responsible State under Article VI of the Outer Space Treaty. The development of AI software intended for use in space applications can fall within the scope of the term “national activities in outer space” and make the State in which the software developer is located responsible for any issues arising from the use of the AI. Moreover, this State would also be under the obligation to authorise and provide continuing supervision.
The space law community will inevitably have to deal with the intersection of AI and space law. Just as in the past, space law, indeed international law, will continue to apply to new technologies. As the field of AI progresses and is more widely used in and adapted to space applications, continued legal analysis based on the lex lata will be necessary to address new challenges and maintain the integrity of international law. In our view, States have not yet put adequate national legislation in place to satisfy the continuing supervision and authorisation requirement of Article VI of the Outer Space Treaty. Many proposals targeting the regulation of AI are still in the drafting stage, and there has not been any meaningful effort to regulate AI intended for use in space applications.[xiv]
This article, this author hopes, serves as a stepping stone in this ongoing discourse and offers a preliminary legal perspective on the State of the software developer as a responsible State.
Mr. Stefan-Michael Wedenig*, DCL candidate, Executive Director, Institute of Air and Space Law, ³ÉÈËVRÊÓÆµ.
This commentary represents the personal views of the author.
* Stefan-Michael Wedenig is a DCL candidate at ³ÉÈËVRÊÓÆµ and the Executive Director of the Institute of Air and Space Law and its Centre for Research in Air and Space Law. His research focuses on the responsibility and liability of States for space applications utilising AI and examines whether, how and to what extent the current legal regime pertaining to the responsibility and liability of States can accommodate the emergence of AI.
[i] At least that is what is often claimed by the industry.
[ii] See e.g. Abdullah A. Abonamah, Muhammad Usman Tariq & Samar Shilbayeh, “On the Commoditization of Artificial Intelligence” (2021) 12 Frontiers in Psychology at 2; MaryAnne M Gobble, “The Road to Artificial General Intelligence” (2019) 62:3 Research-Technology Management 55 at 55; John Page, Michael Bain & Faqihza Mukhlish, “The Risks of Low Level Narrow Artificial Intelligence” (Paper delivered at the 2018 IEEE International Conference on Intelligence and Safety for Robotics (ISR), held in Shenyang, 15 November 2018). Narrow AI is sometimes also referred to as weak AI.
[iii] Jacob Turner, Robot Rules: Regulating Artificial Intelligence (London: Palgrave Macmillan, 2019) at 16.
[iv] Hugh M Kindred, Phillip Martin Saunders & Jutta Brunnée, eds, International Law, Chiefly as Interpreted and Applied in Canada, 7th ed (Toronto: Emond Montgomery Publications, 2006) c 10.
[v] Unless the State was under a due diligence obligation to prevent a certain conduct. However, in that case it is not the actions of the private actors that are imputed to the State, but rather its own omission to prevent the conduct. See also ibid.
[vi] Ram S Jakhu & Steven Freeland, The Relationship between the Outer Space Treaty and Customary International Law (Guadalajara, 2016).
[vii] For domestic law determining what is considered national, see Henri Wassenbergh, “International Space Law: A Turn of the Tide” (1997) 22:6 Air & Space L 334 at 335; for a definition found in international law, see Stephan Hobe & Kuan-Wei Chen, “Legal Status of Outer Space and Celestial Bodies” in Ram S Jakhu & Paul Stephen Dempsey, eds, Routledge Handbook of Space Law (New York: Routledge, 2017) at 37.
[viii] Michael Gerhard, “Article VI” in Stephan Hobe et al, eds, Cologne Commentary on Space Law (CoCoSL) (Köln: Heymanns, 2009).
[ix] Bin Cheng, “Article VI of the 1967 Space Treaty Revisited: ‘International Responsibility’, ‘National Activities’, and ‘the Appropriate State’” 26:1 J Space L 7.
[x] Wassenbergh, supra note 7; Bin Cheng, “The Commercial Development of Space: The Need for New Treaties” (1990) 19:1 J Space L at 36ff; Hanneke Louise van Traa-Engelman, ed, “Problems of State Responsibility in International Space Law” (1984) 26 139.
[xi] Manfred Lachs, The law of outer space: an experience in contemporary law-making, Tanja L. Masson-Zwaan & Stephan Hobe, eds (Leiden鈥; Boston: Martinus Nijhoff Publishers, 2010) at 114 [Emphasis added].
[xii] These issues deserve particular scrutiny in future when we consider that engineering teams working on the same software could be located in different countries, which could lead to multiple responsible States under Article VII of the Outer Space Treaty. It would go far beyond the scope of this article to devote more attention to these issues, and we shall be content to have mentioned them here only briefly.
[xiii] It will be interesting to see how States will legislate nationally to adhere to the requirements of Article VI of the Outer Space Treaty. At the time of writing, there is no meaningful AI-specific legislation through which States would exercise regulatory authority over software developers up-front.
[xiv] This does not mean that efforts in this regard are not already underway. The European Union, for example, is actively working on such regulation. More information can be found .