Should we care about Philosophy of AI in the Mena region?
Aliah Yacoub is an AI philosopher and head of techQualia at Egypt-based Synapse Analytics, a data science and AI company
The artificial intelligence (AI) race between the global powers has countries everywhere scrambling to develop AI applications. A quick glance at magazine headlines, popular culture, and even peer-reviewed academic literature reveals grand predictions about AI and the eventual winner of its race. But is that race something to be celebrated or feared? And where does the Middle East and North Africa (Mena) region stand?
AI-induced panic or peace?
Today, algorithms, deep learning and AI have emerged as unparalleled forces of power and have made their way into the everyday world, from the seemingly trivial, like setting your alarm and automating your workplace, all the way to fighting the pandemic or future wars. The question of whether that is to be celebrated has polarised the general public.
Some fear the unchecked global rise of machines in our societies, pointing to a multitude of cautionary tales of unconstrained algorithm use: general examples include the flawed evaluation of immigration applicants and “future crime” predictions, while specific applications include the targeting and tracking of Uyghurs through China’s facial recognition technology. Others believe that algorithmic intelligence could offer us a brave new world of advancement and achievement: it could ‘free’ us from exploitative waged labour, predict and mitigate natural disasters, discover curative treatments, and so on.
In that sense, it is clear that the answer lies in how AI is deployed – whether in the service of humanity or as a new tool for oppression and ultimate doom. Understanding the ethics, science, application and implementation of machine learning, artificial intelligence and algorithms within a broader societal setting becomes a must when we realise just how much they dictate our daily livelihoods. Thus, what draws the line between AI-induced panic and a peaceful panacea is interdisciplinary knowledge, and it seems our well-rounded knowledge of AI is lagging in the region.
The knowledge gap
The Middle East will not be left behind in the AI race – that much is guaranteed. A PwC study showed that the Middle East is expected to accrue 2 per cent of the total global benefits of AI in 2030, equivalent to $320 billion, and that the annual growth in AI’s contribution is expected to range between 20 and 34 per cent per year across the region, with the fastest growth in the UAE followed by Saudi Arabia and Egypt. That said, if we do not start carving out and holding real, physical space for interdisciplinary knowledge about AI, we will be left in the trenches.
How can we hope to leverage AI solutions to streamline operations across industries, and thereby compete in the global AI race, when we do not understand what AI really is – its workings and its implications?
As an AI philosopher, I find the world of tech fascinating, and the startup scene in Egypt seems to house countless others who share the same sentiment. Yet a quick glance at the general public shows that most people are blatantly unaware of what AI really is, despite constantly asserting that it is transforming the world around them. Business owners are no different: they strive to automate their business processes, but their lack of knowledge makes AI deployment and adoption quite a struggle.
This gap has far-reaching implications. Beyond causing us to fall behind in the strictly economic and political sense, it also fuels the AI-induced panic discussed earlier, which in turn exacerbates our unwillingness, and our inability, to properly adopt and manage AI.
Thus, it becomes a necessity to explain, inform, and critique the workings and implications of AI in the rigorous, philosophically analytical way it so desperately requires, so that people can rein in the grand, usually unrealistic predictions about how AI is ‘transforming the world’ and replace them with grounded knowledge about the actual future possibilities of AI and how to utilise them.
Understanding Philosophical AI
A great way to navigate the variety of issues surrounding AI is to take a step back and explore AI theoretically.
In its strictest analytical form, AI is not just intimately bound up with philosophy; AI simply is philosophy. According to philosopher and cognitive scientist Daniel Dennett, AI should be considered “a most abstract inquiry into the possibility of intelligence or knowledge”. In this view, researchers in AI and philosophy (particularly ‘epistemology’, the study of the nature, origin, and limits of human knowledge) ask the same questions: how is human knowledge possible? Is it possible to artificially replicate and recreate it in machines? Is the human brain essentially a computer, or is there more at play? Can a machine have mental states and ‘qualia’?
This kind of research, called Philosophical Artificial Intelligence, is primarily concerned with understanding what it is that AI is trying to mimic: is it that mysterious thing called ‘consciousness’, total human cognition, or particular aspects of intelligence? This seemingly minor difference in terminology matters because it leads us into a semantic minefield: the human mind is still a massive enigma to scientists, with the term ‘consciousness’ shrouded in layers of mystery; ‘intelligence’ is considered embodied and socially embedded; and ‘cognition’ strikes most as reductive.
More importantly, though, the terms refer to attempts at different types of ‘AI’. Since AI is a constantly evolving field, what the term denotes is also gradually changing. Investopedia defines AI as the “simulation of human intelligence in machines that are programmed to think and act like humans”. Oxford, on the other hand, defines it as “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.” Why is this difference important?
If the purpose of AI is the latter – that is, performing particular tasks that formerly required human intelligence – then what should be studied are the components, or minimal requirements, of human intelligence. But if the purpose is to engineer machines that think and act like humans, then we are addressing total human cognition, or that other ambiguous term: consciousness. This usually refers to the attempt to produce artificial general intelligence (AGI) – also called strong or deep AI – wherein machines are capable of understanding or learning any intellectual task that a human being can. AGI is undoubtedly a more ambitious project than the development of task-specific AI.
This kind of AGI, the kind associated with creating sentient machines, is the subject of anxiety for most: will the machines take over? Serious reflection on this question centres on what is known as ‘The Singularity’: the point in the future when AI will not only exceed human intelligence, but when machines will also, immediately thereafter, make themselves rapidly smarter, reaching a superhuman level of intelligence that, according to Vernor Vinge, “stuck as we are in the mud of our limited mentation, we can’t fathom.”
Are we ready for superintelligent AI in the region? Is anyone? Are we going to be at the mercy of those who develop it – or are they merely slaves to the machines they created? What happens to the region’s labour-abundant, resource-poor countries in that future era of technological servitude? We would have to analyse issues related to data, the legislative environment, infrastructure, and human resources across the entire region.
If we adopt the school of thought that holds consciousness to be immaterial (not produced by physical properties), then AGI is impossible. Similarly, if we view human cognition as embodied – that is, with the body instrumental to our perception – then disembodied networks can never demonstrate the same kind of human intelligence.
The theories are many and varied, and although the subject (presented in such a general way) is mostly speculative, it pushes us to reconsider every instance of algorithmic intelligence – a task of undeniable importance today.