AI isn’t just coming, it’s here – so what are the biggest concerns?

As AI takes the world by storm with its rapid progression, David Duffy, CEO and co-founder of the Corporate Governance Institute, discusses the top concerns for business.

It’s estimated that nearly half of surveyed businesses already use ChatGPT to complete work tasks and 80% say the system is ‘legitimate and beneficial’. So, while it has only been in the public realm for months, ChatGPT has already found a home in many offices.

With OpenAI’s ChatGPT only one of many AI tools rapidly adopted by businesses, the technology presents growing concerns over ethical dilemmas if it is not regulated to an appropriate standard.

Will AI make my job redundant?

Job security is difficult to measure right now because, despite the hype and constant chatter about AI and job losses, we are still in the early days and don’t know the actual impact it will have on the workforce.

That said, the revolutionary nature of AI means there will inevitably be a shift in how we work. Some may lose jobs; some may need upskilling or other training. The ethical dilemma is how a company balances these changes with its embrace of AI.

Supporters of the system maintain that it doesn’t signal a replacement of traditional workers, but gives traditional workers a time-saving tool, the likes of which they have never seen before. In other words, it’s opening new doors.

However, it is understandable that fears abound as companies such as BT predict replacing 10,000 workers with AI by the end of the decade.

AI can’t access personal data… right?

AI models can draw on any information publicly available on the internet. That's a lot of data.

Despite best efforts to safeguard personal information, some is easily accessible. Perhaps a person shared information on a website without a second thought, thinking it was private. Or maybe the information was shared as a part of a wider data leak.

AI cannot distinguish between sensitive data and information deemed fair for widespread use. If sensitive data is in its knowledge bank, it will be handled like any other information.

For example, Samsung banned ChatGPT for company use after an employee leaked secret data while using it for a task, leaving the data permanently in the AI's language bank and easy for others to access.

Apple, JPMorgan, Deutsche Bank and Verizon have also banned ChatGPT for various reasons, mostly to protect against employees unintentionally jeopardising private company information through its use.

To complicate things further, if organisations or employees use sensitive data collected by AI, they can be held liable, highlighting the importance of AI policies within business.

Will we see an increase in misinformation?

AI knowledge banks do not keep up with the news cycle; the most recent information could be months, if not years, old. This means any ChatGPT-produced content could ignore the most recent and relevant events. Its information bank can also include biased sources, as the internet contains an endless wave of biased news. ChatGPT could misinterpret these as hard facts and present them as such to an unsuspecting user.

Sam Altman, CEO of OpenAI, told a congressional hearing in Washington that the latest models of AI technology could easily manipulate users, saying, “the general ability of these models to manipulate and persuade, to provide one-on-one interactive disinformation is a significant area of concern.”

Eliot Higgins, the founder of Bellingcat, an independent investigative collective, used an AI image generator to create fake images of Donald Trump being arrested in New York. The tweet has since been retweeted and liked by thousands, one of many similar incidents that have stoked fears about what the future holds for misinformation and deepfakes.

Additionally, Prof Michael Wooldridge, director of foundation AI research at the UK's Alan Turing Institute, noted that similar fears were widespread when Photoshop first became popular, but the public eventually learned to distinguish what was real from what was fabricated.

AI feels different from Photoshop, however, as the technology continues to grow more capable. Some fear we will reach a point where we can no longer believe anything we encounter on the web.

What’s the bottom line?

With AI developing rapidly and more companies racing to build their own AI tools, it is clear that, whether you like it or not, AI is already here.

It’s only a matter of time before businesses feel they will be left behind competitively if they do not adapt to the new technology. However, change can be good if done correctly, and AI has the potential to enhance our working lives.

This is why implementing a comprehensive policy on the use of AI in the workplace is vital for today's businesses. Company boards of directors should ensure the technology is used effectively and ethically, and that employees are trained to use it safely and responsibly.

Even with these rapid developments, regulation can't seem to keep up. Businesses need to push for regulation and governance in AI. Having board members fully educated on the latest developments will ensure both businesses and employees are better protected.

The Corporate Governance Institute provides board directors with education and certification to leading standards.
