Should we be panicking?

There’s a lot of attention in the media at the moment on Artificial Intelligence (AI), and how it could run out of control, cause all sorts of problems, or even lead to something as serious as the death of humanity. This point of view has been backed up by several well-known commentators such as Elon Musk.

However, should we be panicking and start taking action now to control AI?

Or is everything getting a little overblown? Is AI not a real threat, or at least no worse than any other new technology implemented over the past 200 years?

So what is AI?

Before diving into the detail, it is probably worthwhile to take a small step back and define what AI is.

If you search the internet, you will find many different definitions of AI.

But for this short item, I suggest we define AI as “developing technology (machines, computers, etc.) to try and replicate human intelligence”.

This intelligence can be delivered by various methods such as natural language processing, machine learning and other advanced technologies.

One other key point to note is that AI often relies on a vast amount of data to define its rules.

How has AI helped society?

AI has certainly delivered some real benefits to society, even in its current relative infancy.

It has given us the ability to analyse vast amounts of (often unstructured and constantly changing) data to provide useful insights and allow complex issues to be assessed.

This in turn allows improved decision-making and automation.

This in turn provides benefits to society such as better weather predictions, automated customer services, personalised shopping, fraud and crime prevention, self-driving cars, and many others.

So why are people worried about AI?

There is a real concern that if AI is left unchecked, it will cause irreversible and damaging changes to society.

AI needs a large amount of data to learn and make decisions. AI engines are therefore constantly tracking and gathering data (for example, tracking people’s movements via phone records, CCTV, internet history, shopping habits, etc.). This creates several issues.

  • Firstly, it gives the perception that ‘Big Brother is watching you’, which makes people uncomfortable and could also be an infringement of their civil rights.
  • Secondly, there is a general concern about how this data is used, especially if it is used to change or manipulate people’s behaviour. While this is not always a bad thing (such as encouraging people to eat healthily or stop smoking), it can be used in harmful ways. This can be evidenced by political parties using AI to sway how people vote, or by ‘fooling’ people into buying products they do not necessarily need.
  • Thirdly, there is a concern about how the data is stored and shared. For example, is it kept forever, or sold to others to use? While governments have reacted by creating legislation (such as the EU’s GDPR), these laws are still relatively new, have not been seriously tested in court, and are hard to implement and explain.
  • Finally, AI’s rules are often based on existing data, so if there are any biases or gaps in this data, they could be replicated and exaggerated by AI models, reinforcing issues, bias and inequality across society.

There are also concerns about job losses. One thing technology has taught us over the years is that any new technology can wipe out entire industries quickly (the printing press, for example). It could be argued that as some jobs are lost, new jobs are created, although this is not much comfort if you are one of those people who have lost their job. Also, this job replacement process could take years, if not decades, to happen.

Thirdly (and probably my main concern) is the general fear of handing over more and more control to technology without sufficient checks in place to control it. For example, a large amount of financial trading is performed by AI engines, which have caused issues such as the flash crash in 2010. Also, older readers will remember the 1983 film WarGames, in which the US government handed over control of its nuclear arsenal to a computer that nearly started World War 3.

  • AI does not have any ‘human touch’ or gut feeling; it cannot step back and ask itself “Is this correct?” or “I think something is wrong”. AI will just follow its rules regardless of the consequences, which could have major implications.
  • Technology has a long history of disastrous projects and implementations. For example, many systems have been incorrectly designed and do not meet user requirements, and even correctly designed systems often do not work properly and contain many bugs. Therefore, if we hand more control to technology (with real societal impacts), it is essential that (a) the technology works and (b) there is sufficient support to manage it on a day-to-day basis.

Finally, taking this point to an extreme, as AI becomes more and more powerful and self-learning improves dramatically, AI could take humans out of the process completely, with dire consequences. While I think we are many decades away from this, it is still a worry for society.

The bottom line is that if these challenges are not managed or controlled now, the situation will get worse, which will (a) impact society in a negative manner and/or (b) require a massive and costly effort to address in the future.

So what needs to be done?

Any actions to control AI can be looked at from two points of view: a ‘bottom-up’ approach from the masses, and a ‘top-down’ approach from governments, legislators, policymakers and regulators.

Regarding the bottom-up approach:

  • Society can make massive changes if they work together. 
  • Therefore, if people push back or rebel against AI because they are unhappy about it, do not understand it or are worried about it, they will create a lot of noise in the traditional media, on social media and among well-known commentators, which will force AI providers to change their approach and make society more comfortable.
  • One could argue this bottom-up approach is the most powerful way of controlling AI.

Regarding the top-down approach:

  • Governments, legislators, policymakers and regulators need to implement clear and robust laws to ensure that there are controls in place to manage AI.
  • This could cover areas such as ensuring that AI algorithms work and provide benefit to society, that data is not biased, and that processes are in place to support their day-to-day running.
  • Similar to financial and health-and-safety regulations, there should be named individuals at organisations who take personal responsibility in the event of issues. In the event of law-breaking, both the offending organisations and the named individuals would then be liable for prosecution, fines and, if serious enough, prison.
  • It would also be advisable to ensure that any algorithms are audited independently to confirm compliance with these new laws. This is not dissimilar to having external auditors for financial statements and/or health-and-safety compliance.
  • However, the challenge is that we need all countries around the world to do this, which will be hard because every country has its own agenda. One option is to introduce AI import licences, whereby one country could block AI developed in another country unless it meets local laws. (This is similar to industries such as cars, electronics, food and financial services, where importers must meet local rules before being allowed to distribute their products and services.)

Finally, education on AI is required for individuals, organisations, governments and policymakers. This education needs to cover what AI is, its benefits, its risks, and any legislation required to support it. Without this education, people tend to fill the void with bad news, which makes it impossible to make rational decisions.