It is said that when merchants arrived in the port of Alexandria in antiquity, their manuscripts would be seized, taken to the city’s famous library, and copied by scribes, who would confiscate the original and graciously give the copy to the merchant.
Something of that mercenary spirit is still alive in the software developers behind the wildly successful new generative artificial intelligence (AI) programs that are rewriting the digital economy. The functionality of ChatGPT and its competitors is built on collections of text and other data that, some allege, have not been properly paid for. A major lawsuit from authors accusing OpenAI of systematically violating copyright to build the corpus on which programs like ChatGPT are based is only the start of a new round of litigation and regulation that will try to place limits on what is and is not permissible in AI.
But two problems complicate matters. The first is that, even more than for earlier digital innovations like the search engine, there are major first-mover advantages and economies of scale that make AI ripe for natural monopolies. An earlier era of antitrust suits against software makers like Microsoft, generally ending in weak settlements, did little to establish general principles for the digital economy about where to draw the line between successful innovation and anti-competitive behaviour.
The second problem is that AI has quite obvious national security applications, and if there are monopoly rents to be had, each government would prefer, for security as well as economic reasons, that its own companies hold the dominant market position. Because of the high fixed costs of entry and increasing returns to scale, as well as the national security nexus, established players in the United States and China have the upper hand. Given the volatile geopolitical situation and the splintering world economy, the new digital frontier has become an arena of contest between the two largest economies in the world, and that entails major risks for smaller economies, particularly in Asia.
New technologies often make existing rules obsolete, but not the values upon which they are based. The rapid spread of AI into every corner of the global economy demands new international economic rules, but they should be based on principles that have proven themselves, like international openness and transparency.
Given the centrality of the United States and China in the AI economy, there is an important role for Asian economic cooperation to play in driving the adoption of new rules of engagement for AI that address legitimate national security concerns without disadvantaging smaller economies. This explains Singapore’s proactivity in this sphere.
In this week’s lead article, excerpted from the latest East Asia Forum Quarterly, Jacob Taylor explores some of the potential features that a comprehensive system of AI governance might have. He argues that regional cooperation is needed to counter governments’ tendency to localise data and to ensure the free, well-regulated flow of data across national borders. This will help to lower the barriers to entry for new, smaller players in the region. There must also be a concerted effort, through effective financing and regulatory assistance, to build capacity in communities that have been excluded from the emerging digital economy in Asia. Any attempt to devise new rules to govern AI will, of course, come up against the unwillingness of Washington and Beijing to cede any advantage to their geopolitical rival.
The EAF Editorial Board is located in the Crawford School of Public Policy, College of Asia and the Pacific, The Australian National University.
Source: East Asia Forum