Four challenges that need to be addressed on the path to an AI future
Artificial intelligence and machine learning might be all the rage today, but depending on who (and where) you ask, you will receive quite different answers to the question, “How do you feel about AI and automation?” Your respondent will either pontificate about the hopes for a better technological future with AI or share their economic anxieties over the job displacement automation may bring. In this short piece, I want to highlight what I feel are the four major challenges that technologists and society at large will need to overcome if we wish to truly embrace an AI future.
By HK Lim
The economic anxiety over the job losses threatened by AI and automation is very real and should not be dismissed as an overreaction. While technological progress can be neither reversed nor halted, inequality and the shortcomings of the global financial system have also painted us into a corner. Ironically, we urgently need the economic boost from these new technologies to overcome the lackluster productivity growth that has eroded the economic prospects of the majority. If technologists fail to use this technology in a manner that benefits as many people across society as possible, public resentment of automation will almost certainly rise. If AI is going to fulfill its true economic potential, governments, businesses and the people will need to pay as much attention to the social and economic challenges AI will bring as to the technical ones. Let us briefly dive into the four main challenges to an AI future.
I. Trust, the black box and incorporating social intelligence
The first challenge on the path to wider acceptance and adoption of AI across society is the issue of trust and the inherently black-box nature of our currently most powerful approaches to machine learning. Just as many aspects of human behavior are difficult (if not impossible) to explain in detail, it may ultimately not be possible for us to explain everything that AI does. This poses a challenge to the conventional notions of trust we are so familiar with in human society. In deep learning with neural networks, for example, many intermediate (hidden) layers of nodes sit between input and output, making it a real challenge to explain how we get our outputs from our inputs. With one layer of nodes feeding into the next through learned weights, there is no easy, intuitive explanation of how a particular input triggers a particular output.
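To make the black-box point concrete, here is a minimal sketch of such a network in Python with NumPy. The layer sizes and random weights are illustrative assumptions rather than any particular model; in a trained network the weights would be learned from data, but they would be just as opaque to inspection.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny network with two hidden layers and randomly initialized weights.
W1 = rng.normal(size=(4, 8))   # input (4 features) -> hidden layer 1
W2 = rng.normal(size=(8, 8))   # hidden layer 1 -> hidden layer 2
W3 = rng.normal(size=(8, 1))   # hidden layer 2 -> output

def relu(z):
    # Elementwise nonlinearity applied between layers.
    return np.maximum(0.0, z)

def forward(x):
    h1 = relu(x @ W1)   # 8 intermediate activations
    h2 = relu(h1 @ W2)  # 8 more, feeding off the first layer
    return h2 @ W3      # single output score

x = np.array([0.2, -1.3, 0.7, 0.5])
print(forward(x))
```

The output is just a composition of matrix products and nonlinearities, yet no individual weight or hidden activation carries an intuitive, human-readable meaning on its own, which is precisely what makes tracing an output back to its inputs so hard.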
Once we utilize AI and machine learning in applications that involve judgment and decision making, we confront this trust issue head on. How does one trust the recommended judgments and decisions of a machine learning algorithm when we cannot fully trace how it arrives at each output? Just as human intelligence may only partly be amenable to rational explanation (with the remainder instinctual, subconscious and even impervious to detailed explanation), at some point we may simply have to trust AI’s judgment, or do without it if we cannot.
The other aspect of the machine learning black box that technologists will need to confront relates to social intelligence. Judgments made by AI systems will need to be designed to fit our established social norms, just as our society functions on an implicit contract of expected behavior. This matters if we plan to delegate judgments and decision-making to AI systems on our behalf and hope that the outcomes are consistent with what we would have reached ourselves. [There is the additional question of whether and how to correct for the inadvertent biases that might be introduced when our AI applications are trained on actual human data that may already incorporate our own human biases. This, too, needs to be carefully addressed.]
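On the bias question, one modest starting point is simply to measure disparities in a model’s outputs. The sketch below is purely illustrative, using hypothetical binary decisions and a two-group attribute that I have made up for the example; it computes a basic demographic parity gap, one of several common fairness diagnostics.

```python
import numpy as np

def demographic_parity_gap(predictions, group):
    """Difference in positive-outcome rates between two groups.

    predictions: array of 0/1 model decisions (e.g., loan approvals)
    group:       array of 0/1 group membership labels
    """
    rate_a = predictions[group == 0].mean()
    rate_b = predictions[group == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical decisions for 8 applicants, 4 from each group.
preds = np.array([1, 1, 0, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(f"Demographic parity gap: {demographic_parity_gap(preds, group):.2f}")
# 0.50 here: group 0 is approved at a 75% rate, group 1 at 25%.
```

A large gap does not by itself prove unfairness, and deciding whether and how to correct for it is exactly the kind of judgment about social norms discussed above.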
II. Managing disruptions to traditional job functions
It has often been argued that while technological progress inevitably leads to massive employment shifts, the economy will still grow as new jobs are created. This is, unfortunately, a far too sanguine perspective on AI and automation’s impact on jobs today, and it sidesteps the question of who should bear greater responsibility for such shifts. AI and automation are widely expected to create considerable disruptions to both routine and non-routine job functions, so managing this transition will be crucial if AI’s reception and overall impact on society are to remain positive. An oft-quoted 2016 OECD study (Arntz, Gregory and Zierahn, 2016) estimates that up to 9% of existing jobs are at high risk of automation, while a 2013 study from the Oxford Martin School (Frey and Osborne, 2013) places the figure at 47% of US jobs.
While this transition is already well underway, it is becoming increasingly clear that policy makers have not fully appreciated the actual scale of the impending disruption. Close dialogue and coordination between the various economic stakeholders (governments, businesses, citizens and technologists) is therefore urgently needed to ensure the responsible transitioning of employees, the creation of accessible retraining pathways for all, and an appropriate retooling of our educational system for an AI future. To this end, governments must adopt suitable technology, labor and educational policies, in consultation with industry, to address the plight of workers who are displaced by technology and often ill prepared for the new jobs created.
These new policies must go beyond the current cursory provision of skills retraining funds for vocational “retooling” and instead embody a genuine long-term commitment to a responsible AI transition, with as little adverse social impact through long-term job losses and technological unemployment as possible. One key element still missing from the dialogue is a wider acknowledgement that, under current labor policies, those now facing job displacement are also those most likely to struggle to take up the new jobs that will emerge. As a consequence, these groups of workers will likely face the greatest economic distress in the coming years. What we really need are policies that carefully manage this transition, provide workers displaced by AI a genuinely navigable pathway into economically viable new jobs, and ensure that governments and businesses work together to achieve this.
III. Ensuring AI and automation do not exacerbate income inequality
There are also growing concerns that AI could exacerbate income inequality and worsen geographic economic disparities through large-scale job displacement. It has also been observed that innovation intensity seems to correlate with growth in inequality. As we transition to an AI future, we need to examine how best to ensure that the benefits of technological innovation are shared broadly across all of society. The prevailing viewpoint has been that while technological change destroys jobs, it also creates new and better ones. Evidence now suggests that even as technology destroys some jobs and creates new and better ones, it is creating fewer jobs overall. This jobs shortage then becomes a major issue for policymakers concerned with unemployment and the labor market.
The biggest worry with the impending AI transition is that if the owners of technology (and automation) are the sole beneficiaries of technology-facilitated innovation, economic power will become further concentrated in the hands of the few. This comes down to how the benefits of AI and automation are shared in the labor market. Such a concentration would be particularly detrimental to a society already in desperate need of a productivity miracle to reverse the deteriorating economic opportunities facing the majority. If AI’s benefits are not shared more widely, greater resentment and pushback within society are foreseeable, and this could further undermine the very fabric of society. There are no easy solutions to these concerns, and we will likely require some commitment mechanism to ensure that ownership of technological innovations, as well as the benefits accruing from the resulting enhanced productivity, is shared with all of society.
IV. Ensuring that we invest in AI’s abundant possibilities while keeping people engaged
While the progress that AI promises could greatly improve how we live, there is also the risk that society falls short in embracing and investing in its abundant possibilities. If AI is to achieve its full potential, we will need to pay as much attention to the broad social and employment challenges of the transition to an AI future as we do to the technical challenges. While this might prove difficult, governments should also aim to encourage innovation in a form that increases the employability of workers. When governments decide which research to fund and businesses decide which technologies to deploy, they are in fact influencing jobs and the distribution of income. It is no easy task to design a practical mechanism for selecting technologies that tilt the odds in favor of a future in which more people have better jobs.
Conclusions
At the heart of all these considerations is the recognition that technology is not value-neutral. At the same time, in the words of the late Anthony Atkinson, “[b]ut the trajectory of technological progress is not inevitable, rather it depends on the choices of governments, consumers and businesses as they decide which technologies get researched and commercialized and how they are used.” The real challenge at the heart of all this is how to keep people engaged in meaningful work if AI can do most things better than almost everyone. This becomes even more important in the longer term, when we must confront the very real possibility of a future without work. We might indeed need a policy solution involving some form of universal basic income, or we might not. Regardless, only by genuinely finding new ways for people to be part of this future can we truly alleviate the growing economic anxiety over an AI future. To do so effectively will require technologists, governments, businesses and members of society to collaboratively envision and design a future where people remain at the heart of society.
References
1. The Executive Office of the President (2016), “Artificial Intelligence, Automation and the Economy”, December 2016.
2. Piketty, T. (2013), “Capital in the Twenty-First Century”.
3. Arntz, M., Gregory, T. and Zierahn, U. (2016), “The Risk of Automation for Jobs in OECD Countries: A Comparative Analysis”, OECD Social, Employment and Migration Working Papers, No. 189, OECD Publishing, Paris.
4. Frey, C. B. and Osborne, M. A. (2013), “The Future of Employment: How Susceptible Are Jobs to Computerisation?”, Oxford Martin School Working Paper.