Can Mena adopt AI at the speed and scale it is recommended to?
Aliah Yacoub is an AI philosopher and Farah Ghazal is an AI social scientist at Egypt-based techQualia
Emerging technologies share, in essence, an indubitable transformative property. Artificial Intelligence (AI) is no different: it is hard to conceive of an area of the human experience that it cannot touch and transform. To ensure that AI would bring about the world of advancement and achievement prophesied by its early creators, and that it would ultimately be deployed in the service of humanity, a ‘global’ call for its regulation emerged.
Examples of AI in need of regulation include OpenAI’s conversational chatbot, ChatGPT. After it gripped the public imagination and made controversial headlines as the holy grail of AI, it became clear that regulation of the technology was imperative. In a recent interview, OpenAI’s own CTO, Mira Murati, agreed that the platform has to be governed in alignment with human values and that policymakers in affected governments have to get involved.
However, creating adequate regulatory frameworks at the pace at which the technology advances, or, more importantly, at the same pace as each other, has proven challenging for countries.
This disproportionality in capability is not lost on policymakers and stakeholders. In a foreword to the published compendium ‘United Nations Activities on Artificial Intelligence’, for example, they “warn against the capability gap between developed and less-developed countries”.
This warning by the 30 UN agencies and bodies signatory to the compendium comes as part of a dominant narrative on the advent of AI in the global South that is often marked by themes of linear development. Often originating from the West and exported elsewhere, these narratives of progress we refer to can be exemplified by reports of international organisations such as the World Economic Forum and the World Bank and their affiliates. The common thread is their tendency to enthusiastically advocate for the ‘arrival’ of AI in the global South, prophesying about the anticipated gains for the economy, and warning (no one in particular) that we ‘must not fall behind’.
These Western narratives are often accompanied by recommendations to implement certain measures to ensure the global South is adequately prepared for AI development. Such measures generally include some combination of strengthening infrastructure, enabling human capital, and formulating relevant regulatory mechanisms.
However, said recommendations – and the bulk of the narrative – tend to dismiss certain obstacles characteristic of governance in the region. These obstacles are the central focus of this article, in which we are going to identify structural governance issues which contain within them interrelated political, economic, and social dimensions.
In what follows, we employ Egypt as a case study to show that this pervasive call to blindly imitate the Western model for AI development is often void of nuanced assessments of the structures that inform the global South’s (in)capacity to implement such measures. We attempt to identify the specific constellations of overlooked economic, political, and social structures that impact and complicate the prevailing linear narrative surrounding AI progress in the region.
Obstacles pertaining to governance in Egypt
Particularly since the beginning of the Covid-19 pandemic, which accelerated the adoption of many tech-enabled services, Egypt has seen a rise in the number of laws and regulations pertaining to the tech sector. While that indicates a serious commitment to regulating this fast-paced transformation, following up with effective implementation has been a challenge. As legal expert Mahmoud Shafik notes in an article cataloguing all recent tech regulations in the country, “The complexities surrounding the drafting, lobbying, and ratification of laws and their executive regulations often result in a lag between the creation and implementation of these regulations”. Below, we look at other possible reasons for regulation lags.
Regulatory structure
The Egyptian regulatory structure is characterised first and foremost by what is considered ‘weak’ enforcement. We identify two main axes on which it stands (or wobbles):
- Implementation gaps: literature on regulation in Egypt identifies a critical gap between the ‘number and quality of legal reforms on the one hand, and the actual enforcement of these reforms on the other’.
- Centralisation: a related issue, Egypt’s over-centralisation is a long-standing characteristic of governance in the country, and it naturally affects the enforcement of laws across a number of sectors, including service provision, health, and education.

Both of these features of Egypt’s governance affect its capacity to adopt AI, yet they are often overlooked by the mainstream narratives we identify in this article, which continue to recommend the formulation of AI-related laws as a prerequisite for that adoption.
Political and socio-economic structures
Political structures which complicate the adoption of AI in Egypt include digital authoritarianism. By this we refer to how even when laws are successfully formulated and implemented, they often aim to control rather than protect against the harms of AI. One example of this is Egypt’s only data protection law, which was passed in 2020 and exempts national security authorities from the obligation to protect users’ personal data.
One part of this pitfall can be attributed to the lack of democratic decision-making and stakeholder-involvement in the legislative process. Another, related part is the general political climate in the country, which can discourage participation in digital innovation and expression. Ultimately, digital authoritarianism calls into question the shape AI implementation will take in Egypt, but remains overlooked by mainstream voices championing said implementation.
Socio-economic structures include certain developmental concerns. One example is inadequate digital infrastructure, which affects general access to technology and the overall internet penetration rate. For example, the number of social media users in Egypt in 2022 was equivalent to only 48.9 per cent of the total population.
These considerations also have gendered dimensions, with men generally enjoying higher rates of access to technology than women. One example of this disparity is relatively recent: Covid-19 stipends of EGP 500 (then equivalent to $33) dedicated to informal workers, announced by the government, required online applications. A drop-down box requiring applicants to specify their occupation included only typically ‘male’ informal jobs, such as plumbing, construction work, and welding, which meant the majority of women were prevented from applying. This speaks to a pervasive gender bias in social policy-making intertwined with technology, offering us a glimpse of the future of uncritical AI implementation championed by mainstream narratives.
Decolonial Implications
Although far from exhaustive, the above-mentioned challenges point towards the obstacles hindering the development and regulation of AI in Egypt. But where do we go from here?
The goal of this critical review is not to deter regional AI development. Rather, our intent is to help restore a much-needed balance between promise and reality: the promise of linear development of AI in the region and the reality of governance restrictions and implementation capabilities.
In so doing, it becomes increasingly clear that we should be firmly adopting a decolonial lens to operationalise AI development in Mena. Taking inspiration from the newly published AI Manyfesto, the approach rejects the “Western-normative language of ‘ethical’ AI and suggestions of ‘inclusivity’ that do not destabilise current patterns of domination and address power asymmetries” and calls for an approach which recognises how “the social and the technical are interwoven, and technologies have immaterial as well as material impacts over specific gendered, racialised bodies and territories”.
While the call for ‘decolonial AI’ is not particularly new, its discourse is relatively young, mostly in English, and often ungrounded in real-time analyses by actors from the global South that it references. This not only makes the movement inaccessible to most in Egypt, but also incapable of informing any real policy or business practices. So, while our critique operates within a decolonial framework, it merely aims to bridge this gap by explicating exactly why localising AI – including its production, regulation, and conversation surrounding it – is crucial.
In other words, regulation of AI here will have to take up a character of its own, one that takes into account the obstacles characteristic of governance in the region, as opposed to swiftly and blindly imitating Western models of regulation. Only then can we realistically and efficiently adopt and regulate AI at the scale and speed we are ‘recommended’ to.