4 Things To Know About AI

Many businesses use AI to enhance what they do, but as with all tools, it's important to understand both what AI is and what AI isn't in order to make the best use of it. You've probably heard the saying that when the only tool you have is a hammer, everything looks like a nail. Users new to AI are in danger of falling into the same fallacy. In this opinion piece we explain four things we think every AI novice should understand about their new favourite helper.

AI is here to stay

The demonstrated usefulness of AI in so many fields, in such a short time, has been remarkable. AI has been hailed as the fourth industrial revolution, yet some view its growing adoption with worry, fearing it will replace jobs or become 'self-aware' in the ways depicted in popular culture, such as the Matrix or Skynet. Similar fears were entertained as the Internet became popular, too. But, like the Internet, we expect AI to develop as a useful tool, adding to all the others.

Multi-award-winning business lawyer and author Suzanne Dibble wrote, "AI will not replace humans, but humans with AI expertise will replace humans without AI expertise." We tend to agree with this assessment. Historically, the new technologies that have stuck around are those that delivered a genuine benefit, adding to our productivity and standard of living. Naturally, humans who adopt beneficial technologies have an advantage over those who do not. In that regard we don't expect AI to be any different to other inventions, in becoming part of everyday life for the things it actually improves.

AI is not actually intelligent (not in the way you might think)

In spite of its name, AI is not really intelligent in the way sentient beings such as humans are. Perhaps the emphasis should be on the 'artificial' rather than the 'intelligence', because in this regard AI is a simulator.

Geoffrey Hinton, recent Nobel Prize winner for his work on AI, said, "It will be comparable with the Industrial Revolution, but instead of exceeding people in physical strength, it will exceed people in intellectual ability." He's probably right, and the same is true of lots of machines. Vehicles can move more quickly and carry more 'stuff' than humans, and robots can perform certain tasks more quickly and repetitively than we can. The same applies to intellectual tasks: my pocket calculator can perform sums faster than I can in my head, computer software can analyse data more quickly than I can manually, and an Internet search engine can find documents more quickly than I ever could. But none of them is intelligent in the way humans are.

Large Language Model AI

Perhaps the most common type of AI people work with is a Large Language Model (LLM), which can process and generate human-like language. It uses complex algorithms to analyse vast amounts of text data, learn patterns and relationships, and produce responses or text based on input prompts. Put very simply, an LLM calculates which combination of words is most likely to be the correct output for your input, and although improving all the time, that is still what it is doing.
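To illustrate the "most likely next word" idea in miniature, here is a hypothetical toy sketch. This is emphatically not how a real LLM is built (real models use neural networks with billions of parameters), but it shows the core principle: learn from training text which word tends to follow which, then predict accordingly.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word most often follows each word
# in a tiny training text, then predict a continuation by picking the
# most frequent follower. Real LLMs are vastly more sophisticated, but
# the underlying idea of "predict the likely next token" is the same.
training_text = (
    "the cat sat on the mat the cat sat by the door "
    "the cat ate the fish the dog sat on the rug"
)

follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("cat"))  # "sat" — seen twice after "cat", vs "ate" once
```

Note that the model above answers from statistics, not understanding: it will confidently predict "sat" after "cat" regardless of whether sitting is the right answer in context, which is a miniature version of the trustworthiness issue discussed below.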

As processors of text, LLMs are efficient simulators of language processing, and can be quite useful for this type of task. But a language-processing simulator is not a database of all human knowledge, nor is it a search engine in the usual sense. Many people mistake it for those, and come unstuck.

The apparent confident eloquence of LLM output can be seductive, leading us to believe there really is an intelligence at the other end of the connection. But it really is just a simulator, a highly complex simulator, but a simulator nonetheless. It's important to know this so we aren't lulled into a false sense of security about the trustworthiness of its output.

AI makes mistakes

In spite of their usefulness, it's essential to keep in mind that AI can make mistakes in its output, and to that extent can't be fully trusted. A cursory reading of the terms and conditions of AI services may make apparent where service providers think the weak points are. Not all AIs are subject to error in the same way, however, so it's important to be aware of any known issues, and of the terms and conditions, of the AI technologies you choose to work with.

As we have discussed, LLM AI responses are generated from language patterns and associations learned from their training data. Because of this, several factors can contribute to AI mistakes.

Why AI can sometimes make mistakes

  1. Training data: LLMs are trained on vast amounts of human-written text; if that training text contains biases or factual or conceptual errors, those biases and errors may be repeated in AI output.
  2. Prompt construction: Factual or conceptual errors in the prompt, biases, or deliberately leading questions can and do influence the content of AI output.
  3. Hallucination: AI 'hallucination' is when an AI generates output that is not based on any factual information in its training text, resulting in well-written fiction.
  4. The AI may not alert you when it makes things up: Because technically all AI responses are 'made up' from calculated predictions, and it's only a simulation, it may not inform you when it's writing fiction. Some AIs may let you 'tick a box', or use other technology, to instruct the model to tell you when it doesn't know something rather than inventing an answer, but not all AIs offer this.

But it seems so convincing...

Because of LLM AI's apparent skill with language, it can be easy to believe that all of its output is factual, but there's no guarantee of that. As LLMs continue to be developed, the simulations may become ever more accurate, the technologies for sticking to facts may improve, and the probability of a correct output may increase. However, if you notice errors from AI on topics you do know about, why would you trust its replies on subjects you don't know about? With that in mind, users may be wise to exercise caution about relying on AI to fill gaps in their knowledge. Where AIs become available on specialist subjects, it's worth doing some due diligence to learn about the service, and what guarantees or limitations are placed on the offering.

As the human in the loop, it is you who will be liable for what happens next to whatever output the AI provides. There have been several real-life cases where trusting AI output has been a major legal problem, such as the experience of this Canadian lawyer, or the case of the Air Canada chatbot.

Your use of AI could be subject to legal requirements

This is a huge topic all by itself, and not being lawyers we won't try to give advice here. But it's worth pointing out that just in the areas of privacy law, data protection and the GDPR, there are several points worth considering before randomly signing up to an AI provider and 'doing stuff'.

GDPR and AI

  1. Data collection: If you use an AI chatbot, virtual assistant or any other AI-powered service that collects personal data, this would fall under the GDPR's data collection and processing requirements.
  2. Profiling and decision-making: If you use an AI system to make decisions about a person based on their data, this might be considered profiling, which is also regulated under the GDPR.
  3. Data sharing and transfers: If an individual uses an AI service that shares or transfers their personal data across borders (e.g., when using a cloud-based AI platform), GDPR rules on international data transfers may apply.
  4. Consent: If you're sharing and processing personal data with AI, you may need to obtain explicit consent for this activity from the individuals affected.

Check your AI provider

It's always worth referring to the terms of service and privacy policies of any AI-powered platform you use, so you can understand what data it collects, how that data is processed, who it might be shared with, and how you can delete it, in line with your own legal obligations.

We're not providing legal advice here; we're not qualified to do so. The legal implications of AI are a specialist subject in their own right, and well worth getting sorted out properly with professional legal expertise.

Having said all that...

Having said all that, AI is developing all the time, and not all AIs work in the same way. The general flaws described above may not apply to every AI technology by the time you read this.

We said at the beginning that humans with AI expertise will have an advantage over those without because AI is, in all probability, here to stay. In spite of the potential legal, ethical and technical pitfalls AI comes with, it also offers many advantages, so don't necessarily be put off from using it.

To avoid the 'everything looks like a nail' problem, it may be wise to learn what AI is good at that you find helpful, and what it's not good at, before deciding how to use it. With this due diligence done, and the regulatory and legal implications considered, you will be better informed about how you might strategically incorporate AI into your processes in ways that add value.

DIY, DWY or DFY?

At DigitalArena we want you to succeed online and this can be achieved through our design and marketing expertise in several ways.

To find out more, please call us on 01530 452276 or email support@digitalarena.co.uk.
