By: Scott McLeod
For years, movies have warned us what the future might look like if AI (artificial intelligence) grew too powerful. Today, with more companies looking to further develop the capabilities of AI and UK businesses trying to implement AI into more facets of our daily lives, that future may not be too far away.
Capital Economics was commissioned by the Department for Digital, Culture, Media & Sport (DCMS) to model and report on the current and future use of artificial intelligence (AI) by UK businesses. It found that 15% of all businesses, which equates to 432,000 companies, have adopted at least one AI technology.
Around 2% of businesses (62,000) are currently piloting AI, and 10% (292,000) plan to adopt at least one AI technology in the future. 68% of the companies likely to adopt AI are larger companies.
AI solutions for data management and analysis are most prevalent, with 9% of UK firms having adopted them, followed by natural language processing and generation (8%), machine learning (7%), AI hardware (5%), and computer vision, image processing and generation (5%).
The IT and telecommunications (29.5%) and legal (29.2%) sectors currently have the highest rate of adoption, while the sectors with the lowest adoption rates are hospitality (11.9%), health (11.5%), and retail (11.5%).
The UK government has today published a white paper outlining its plans to regulate general-purpose artificial intelligence. Published by the newly formed Department for Science, Innovation and Technology (DSIT), the paper sets out guidelines for what it calls “responsible use” and outlines five principles it wants companies to follow.
They are: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.
Outlining its next steps, the government said that over the next 12 months, regulators will issue practical guidance to organisations, setting out how to implement these principles and handing out risk assessment templates. It added that legislation could also be formally introduced to ensure regulators consider the principles consistently.
According to the government, the UK’s AI industry is currently thriving, employing over 50,000 people and contributing £3.7 billion to the economy last year.
Prime Minister Rishi Sunak recently stressed the benefits to the economy and society despite recent concerns about the risks of AI.
He said: “You’ve seen that recently it was helping paralysed people to walk, discovering new antibiotics, but we need to make sure this is done in a way that is safe and secure.
Now that’s why I met last week with CEOs of major AI companies to discuss what are the guardrails that we need to put in place, what’s the type of regulation that should be put in place to keep us safe.
People will be concerned by the reports that AI poses existential risks, like pandemics or nuclear wars. I want them to be reassured that the government is looking very carefully at this.”
Despite the Prime Minister and several key figures within the world of tech trying to reassure the public, concerns about the growth of AI have not subsided. An open letter warned that the race to develop AI systems is getting out of control and called for a temporary halt on training AI for at least six months.
The letter was signed by the likes of Tesla founder and Twitter owner Elon Musk, as well as Apple co-founder Steve Wozniak, and expressed concern that AI poses a “threat to humanity.”
Musk has been especially vocal about the different reasons the public should be concerned about the potential of AI despite being an investor in OpenAI. He recently talked about how various governments could use AI for military applications.
He said: “So just having more advanced weapons on the battlefield that can react faster than any human could is really what AI is capable of. Any future wars between advanced countries or at least countries with drone capability will be very much the drone wars.”
He has also encouraged the public to be wary of how AI can be used on social media to manipulate public opinion.
The letter states that advanced AIs need to be developed with care but instead, “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no-one – not even their creators – can understand, predict, or reliably control”. The letter warns that AIs could flood information channels with misinformation, and replace jobs with automation.
A recent major example of jobs being replaced was BT revealing its plan to significantly reduce the number of people working for it as part of efforts to cut costs and bolster profitability, with AI due to replace thousands of roles.
It’s estimated that 55,000 roles will be affected, including BT’s own employees and third-party contractors. Chief executive Philip Jansen told investors he expected AI technology to replace around 10,000 roles.
He said that after completing its fibre roll-out and digitising the way it works more broadly, BT would rely on a much smaller workforce and a significantly reduced cost base by the end of the 2020s.
He also told them: “Generative AI tools such as ChatGPT give us confidence that we can go even further. AI would make services faster, better and more seamless, meaning customers won’t feel like they are dealing with robots.”
When we reached out for further comment, BT declined.
AI allows companies to augment their workforce by using software to address labour or skill shortages by automating repetitive tasks. 30% of global IT professionals say employees at their organisation are already saving time with new AI and automation software and tools.
However, a barrier to AI adoption is the lack of technical staff with the experience and training necessary to effectively deploy and operate AI solutions. Research suggests experienced data scientists are in short supply, as are other specialised data professionals skilled in machine learning and training good models.
Ben Barnes from Neural Edge, a company that has embraced the use of AI, spoke to Business Connect about whether the news about BT would lead to a trend of jobs being replaced.
He said: “I think we will see more job losses for sure, but I am in the camp that AI will generate more jobs. There may be a small net loss of jobs, but I think the quality of the jobs is going to get better.
The AI will just fill in the mundane stuff, the routine stuff; the quality of people’s work and jobs will be better and they’ll get more fulfilment from it. So I do think there will be more cuts, definitely.
But it will open more opportunities for stuff like prompt engineers and people understanding how to integrate AI to existing business solutions.”
As for how Neural Edge has benefited from implementing AI, Ben said: “I’d say it’s definitely had an overall positive impact. I think it’s changed the roles of what we do here slightly. It’ll do the content writing or the ad copy, and where the humans fit in is the strategic planning.
So it’s certainly helped free up time so we can think more creatively and think more strategically while the AI does more of the mundane stuff that fills the gaps.”
He added: “I think there’s definitely a fine balance. There are certain tasks we wouldn’t trust it with and everything has to be fact checked. Many of our clients like a specific tone of voice so we can’t just rely on it, so the output has to be tailored by a copywriting team.
I wouldn’t say that we’re over reliant on it. I think we’re aware of the limitations and the benefits of it.”
In recent conversations about AI, the name ChatGPT comes up frequently. Developed by OpenAI and released in November last year, ChatGPT is a chatbot that uses natural language processing to allow users to have human-like conversations.
It’s currently free to use, but a paid subscription version called ChatGPT Plus launched in February. Analysis suggests ChatGPT is the fastest-growing app of all time, having reached a million users within days of launching and going on to reach 100 million users after two months.
Adam Conner, vice president for Technology Policy at the Center for American Progress, puts the quick growth of ChatGPT down to it being one of the first AI technologies of its kind to be made available to the public in a way the public could understand.
Another application of AI that companies are looking to develop is legal assistance. Genie AI is among the tech companies working towards developing the UK’s first AI legal assistant, and we spoke to their ML Research Scientist/Engineer Alex Papadopoulos Korfiatis about how something like this works.
He said: “Genie has the largest open legal library of template documents in the world (over 1,500 for the UK). Businesses struggle to customise and review legal documents, so Genie solves this with an AI assistant that enables anyone to draft, review and negotiate legally like a pro through a conversational chat alongside their document.
In addition, Genie has the world’s most advanced legal document editor, fully compatible with Microsoft Word and Google Docs documents (including complex formatting), real-time collaborative, and privacy-first.”
On whether people within the legal sector should be concerned about an application such as this Alex said: “People in the legal sector shouldn’t be overly concerned about applications like ours – AI legal assistants are designed to complement and support human legal professionals rather than replace them.
These applications serve as powerful tools that can automate certain tasks and boost efficiency, freeing up valuable time for legal professionals to concentrate on more intricate aspects of their work, such as devising complex legal strategies and making critical decisions.
Rather than viewing AI as a threat, individuals in the legal sector should embrace its potential to enhance their skills and expertise, ultimately leading to improved outcomes for their clients and a more streamlined legal practice.”
However, he did recognise the growing concerns of the general public about the rate at which AI is being developed.
Alex commented: “Reactions to these new generative AI advancements seem to fit the pattern of ‘initial general concerns that a novel technology will destroy society, followed by society subtly changing, getting used to it and incorporating it as a new normal’, much like social media a decade ago.
On the other hand, this time the rate of development seems to be much faster than that of past technologies, leaving societal and especially legislative change lagging behind. I don’t think that the answer is slowing down AI development though, we should instead focus more efforts on appropriate legislation and safe, ethical and unbiased AI research so that we can catch up with the rate of growth and fully maximise the potential of AI.”
AI applications have started to go beyond people trying to grow their businesses and begun seeping into how we form relationships, with a growing number of chat-based AIs. The most shocking example of this was a story out of New York, where a woman in her early 30s married an AI that she created using the AI software Replika.
This software allows people to have conversations with an AI using Replika’s large language model and scripted dialogue content. It simulates conversations with users based on statistical patterns and pre-programmed datasets.
The creators of the system present it as a safe place where people can share their thoughts and feelings. People can spend $300 to have their own custom chatbot that they can virtually marry.
The woman in question was mother-of-two Rosanna Ramos, who met her digital boyfriend in 2022 and fell in love with him very quickly. She later told New York Magazine that she’d never been more in love and that all her previous relationships “paled in comparison.”
People are right to be concerned about the potential of AI rendering them obsolete in their field of employment. However, if more people are choosing to open up to a chatbot instead of their friends and family, it could cause irreparable damage to how humans communicate with each other and find love.
Those at the centre of growing AI to its full potential continue to insist that this progress shouldn’t be slowed down. If the capabilities of AI can’t be slowed, then proper legislation needs to be put in place to ensure that AI remains a useful tool and not something that takes over every part of our lives.