
Businesses should train staff on safe use of AI

Wednesday 21 June 2023

The productivity benefits of artificial intelligence are huge, but companies should quickly train staff to effectively and ethically use the technology to keep pace and remain competitive, according to a local digital expert.

Patrick Cunningham, Director and Founder at Indulge, said generative AI, which exploded into the public domain late last year, just “creates things at a very simple level” but the effect has been a “democratisation of access” to advanced and specialised writing abilities.

But he warned the world is currently in an AI “bubble”, with hundreds of new tools being released each week – most of which inevitably won’t survive against the big players in the market, such as Google, Microsoft, and OpenAI.

However, the capabilities now available to general users are “insane”, he added, noting that we are “heading in a direction to generate a new season of Game of Thrones” without requiring an expansive cast and crew and months’ worth of production.

Guernsey businesses should pay attention, Mr Cunningham said: staff will already be using the technology but might be unaware of potential risks, competitors will improve their productivity, suppliers will become more persuasive, and hackers will become more effective, particularly because “very soon” these tools will appear in existing digital productivity tools, like the Microsoft Office and Google suites.

Speaking to Guernsey’s Chamber of Commerce on Monday, he demonstrated OpenAI’s ChatGPT by getting it to produce a short story about a unicorn, a draft indemnity clause, and a letter of complaint to the planning service about pylons at La Grande Mare.

He also generated images using the program Midjourney, including Boris Johnson at a party, a user interface for a sports e-commerce site, and packaging for a new line of lagers.

None of this comes without risk, he added, as the technology itself is limited, displays bias, and can be prone to “hallucination” and disinformation.

Pictured: Some of the tools businesses should be applying right now, according to Mr Cunningham.

Mr Cunningham said this story began in the 1950s, when theoretical computer scientist Alan Turing drew up the famous Turing test. To pass, a machine must exhibit intelligent behaviour so human-like that the human user cannot tell it apart from a person.

Major developments stemmed from that, such as Eliza in 1965 – one of the world’s first chatbots – which worked by simply reflecting the user’s prompts back as questions, creating the illusion that it understood what was being said.

It captivated even those involved in its design, who were willing to provide it with personal information and formed an emotional attachment to it, believing it could understand their feelings. It showed that social engineering could help a machine pass the Turing test.

In 1997, Deep Blue beat the world’s top chess player under tournament conditions. Twenty years later, AlphaGo defeated human players at Go, a game which Mr Cunningham said has billions more possibilities and complexities than chess. The algorithm taught itself previously undiscovered strategies and remains the number one player in the world.

OpenAI inadvertently developed advanced large language models in 2018 after a single coder attempted to create an algorithm to predict the next word of Amazon reviews. It was then realised that far more information could be predicted and categorised in this way, and the model was trained on a huge cache of information, from books to websites.

In late 2022 the company released version 3.5 of its model to the public as ChatGPT, and it has since produced a more advanced version.

Pictured: Matt Thornton of Cortex, the Chamber’s new digital lead, introduced the event by saying the advent of large language models is reflective of what the world has “come to expect from search engines”.

Local businesses should draw up AI policies at corporate governance level, alongside existing IT policies, and ensure staff are educated on how to use the technology, how to write effective prompts, and what the policies require, according to Mr Cunningham.

But he said monitoring and improvements would need to be done more regularly due to the pace of change in the sector.

Even so, the productivity improvements would be impossible to ignore, as would the change in day-to-day applications. He added that between ChatGPT and Perplexity – a similar program, but one which can access live web pages – he no longer uses Google.

Risks

Mr Cunningham said personal data should never be input into these programs, to ensure security and privacy are upheld.

All users should also be aware of the potential for inaccuracies and misinformation in generated content. He said the free version of ChatGPT – 3.5 – is “very prone to hallucinations” and cited a court case in which a lawyer asked the chatbot for precedents involving aircraft crashes, and it fabricated several examples.

Responses will naturally contain bias, he said, as the algorithms were trained on a “huge corpus of information”, all of which “has bias built into it for all sorts of reasons”.

Copyright also remains a grey area, particularly for image generation and artwork, and it is yet to be legally resolved.

Firms should also be wary of becoming over-dependent on the technology, Mr Cunningham added.

READ MORE…

FOCUS: Guernsey Institute taking proactive approach to AI

AI low-risk in the short term – Economic Development
