Artificial Intelligence, the basics

Last updated: 23 November 2023

See updates at the end of the post.

Artificial Intelligence (AI) is slowly but inexorably creeping into every aspect of our lives. Not a day goes by without new stories in the press about some new use of AI, e.g. bringing Roman emperors back to life or the incredible capabilities of GPT-3 (since superseded by GPT-4), software from OpenAI, a company co-founded by none other than Elon Musk of Tesla Inc. With the arrival of the AI-based chatbot ChatGPT at the end of November 2022, and of other chatbots since, it seems unlikely that any industry will be able to escape the inclusion of AI in its core offerings, because not doing so will inevitably mean being left behind.

Why is it worthwhile making the effort to learn the basics about AI?

It is worthwhile making the effort to learn at least the basics so that you can engage meaningfully with colleagues and other teams in your company/organisation, e.g. when discussing the suitability, design and/or implementation of AI-based software in different areas of work. It also means that you can assess information on AI with a critical eye rather than just accepting whatever is presented to you.

The importance of learning the basics about AI for those working in regulatory affairs

For those working in regulatory affairs in the pharmaceutical or medical device industries, it is not inconceivable that you may find yourself trying to get to grips with new regulations/guidelines etc. which concern AI, and this makes it all the more important to learn the basics. Not taking the time to learn them could quite easily mean getting left out of important conversations concerning AI in your company/organisation/profession, or even missing out on employment or promotion opportunities. If you haven’t already done so, engage with your IT team and ask them to provide some basic education on AI.

Are the basics about AI easy to learn?

As I started to learn the basics, I realised that many of the supposedly simple-to-understand articles and clips are in fact quite difficult to follow for someone with no background in the subject. Determined not to give up, I scoured the internet for articles and clips that are easy, or at least reasonably easy, to understand.

Below, I have attempted to present some of the information that I found on the internet that will hopefully help you to start your journey in getting to grips with the basics. This includes definitions and links to some hopefully easy-to-understand articles and video clips. The information provided is by no means exhaustive. Some patience and perseverance are required to learn the basics, but it is very achievable.

Towards the end of the post, I have included a section on some of the uses of AI in the pharmaceutical industry.

What is Artificial Intelligence (AI)?

Depending on where you look, you will come across any number of definitions of AI. It seems reasonable to say that there is no universally accepted definition. None of the available definitions can be considered perfect or all encompassing. It is worthwhile reading different definitions as each one adds more to the overall understanding.

Because we have to make a start somewhere, the definition below, provided on the website ScienceDaily, seems to be a reasonable one:

The modern definition of artificial intelligence (or AI) is “the study and design of intelligent agents” where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success.

John McCarthy, widely recognised as the father of artificial intelligence, coined the term in 1956 and defined it as “the science and engineering of making intelligent machines.”

A simpler and more easily understandable definition is provided by The Cambridge Dictionary, which defines it as: The study of how to make computers that have some of the qualities of the human mind, for example the ability to understand language, recognise pictures, solve problems and learn.
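
To make the “intelligent agent” part of the first definition a little more concrete, here is a minimal, purely illustrative Python sketch of a thermostat-style agent: it perceives its environment (a temperature reading) and takes whichever action best serves its goal (a comfortable room). All names, readings and thresholds are made up for illustration.

```python
# A toy "intelligent agent": perceive the environment, then act
# to maximise the chance of achieving its goal (a comfortable room).
# All names and values below are hypothetical, for illustration only.

def perceive(sensor_readings, step):
    """Read the current room temperature (the agent's 'environment')."""
    return sensor_readings[step]

def decide(temperature, target=21.0):
    """Choose the action most likely to move the room towards the target."""
    if temperature < target - 1:
        return "heat on"
    if temperature > target + 1:
        return "heat off"
    return "do nothing"

readings = [18.5, 19.2, 20.4, 21.1, 22.3]  # made-up sensor data
for step in range(len(readings)):
    temp = perceive(readings, step)
    print(f"{temp:>5.1f} C -> {decide(temp)}")
```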

Categories of AI

In general, AI falls into one of the following three categories:

i) Artificial Narrow Intelligence (ANI)

ANI, also referred to as weak AI or narrow AI, is what we have available currently. Narrow AI is goal-oriented and is designed to perform specific tasks. Examples of ANI include the following:

  • A smart speaker such as Amazon Echo or Google Nest responding to your spoken requests.
  • Netflix presenting you with a choice of films which it thinks you might like, based on the films that you have previously watched.
  • YouTube presenting you with a choice of video clips to view next, based on your previous viewing history.
  • The ability of systems on a production line in a glass-making factory to spot a crack in the glass.
  • Your mobile phone messaging app suggesting a number of short responses (for you to select one) to reply to a message that you have received.

ii) Artificial General Intelligence (AGI)

AGI is also referred to as strong AI or deep AI. Conceptually, it refers to a machine with general intelligence that mimics human intelligence and/or behaviour, with the ability to learn and apply its intelligence to solve any problem. AGI has not been achieved thus far.

iii) Artificial Superintelligence (ASI)

ASI describes a point (some time in the future) at which machines become self-aware and surpass human intelligence and ability to the extent that they can outthink and outdo humans at everything by a long stretch.

You can read more about these three categories of AI on the website codebots.com.

Some of the terminology of AI that you will likely come across is provided below. Definitions are provided wherever possible as well as links to video clips in order to facilitate understanding.

Definitions of machine learning, deep learning and data science

To start you off, here is an easy-to-follow clip from Andrew Ng defining the terms machine learning, deep learning and data science (which, unlike the other two, is not AI). If you decide to watch only one clip from all of the links provided in this post, this would be the one to watch.

What is Machine learning?

Below is a definition from Wikipedia:

Machine learning (ML) is the study of computer algorithms that can improve automatically through experience and by the use of data. It is seen as a part of artificial intelligence. Machine learning algorithms build a model based on sample data, known as “training data”, in order to make predictions or decisions without being explicitly programmed to do so. Machine learning algorithms are used in a wide variety of applications, such as in medicine, email filtering, speech recognition, and computer vision, where it is difficult or unfeasible to develop conventional algorithms to perform the needed tasks.

The outcome of a machine learning project is normally software, examples of which are provided below.

Some examples of machine learning in daily life include the following:

  • Google Maps enabling you to reduce commuting time by selecting the fastest route between two places.
  • The ability of car ride-hailing apps such as Uber to set the price of a ride, or to reduce pick-up time by working out the optimum way in which to pick up passengers.
  • The ability of email clients, e.g. Gmail, to classify emails into groups such as Primary, Promotions and Social, or to automatically filter out spam upon receipt (a short illustrative sketch follows this list).
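
As a deliberately tiny illustration of that last example, here is a sketch using the scikit-learn library: a handful of made-up messages serve as “training data”, and the resulting model classifies a message it has never seen. The messages and labels are invented for illustration; a real spam filter would be trained on vastly more data.

```python
# A toy spam filter: the model learns from labelled examples
# ("training data") rather than being explicitly programmed.
# Requires scikit-learn; all messages below are made up.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Win a free prize now", "Claim your free money today",
    "Meeting moved to 3pm", "Minutes from yesterday's meeting",
]
labels = ["spam", "spam", "not spam", "not spam"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)  # learn word patterns from the examples

print(model.predict(["Free prize waiting for you"]))  # -> ['spam']
```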

To help you to understand machine learning a little better, watch this clip from the Royal Society and this clip by Yufeng Guo on YouTube. In a further clip, Yufeng explains the 7 steps of machine learning by working through an example. Here is another very good clip from Andrew Ng on the workflow of a machine learning project.

As you watch the above clips, you will encounter the following terms:

  • Supervised learning
  • Unsupervised learning
  • Semi-supervised learning
  • Reinforcement learning

These are different types of learning algorithm, and they differ in their approach, the type of data they take as input and output, and the type of task or problem that they are intended to solve.
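
To illustrate the difference between the first two, here is a small sketch using scikit-learn: the supervised model is given the answers (labels) to learn from, while the unsupervised one must find structure in the same data on its own. The data points are invented for illustration.

```python
# Supervised vs unsupervised learning on the same tiny dataset.
# Requires scikit-learn and NumPy; the data is made up.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

X = np.array([[1.0, 1.1], [1.2, 0.9], [8.0, 8.2], [7.9, 8.1]])

# Supervised: we supply the answers (labels) during training.
labels = ["low", "low", "high", "high"]
clf = KNeighborsClassifier(n_neighbors=1).fit(X, labels)
print(clf.predict([[1.1, 1.0]]))   # -> ['low']

# Unsupervised: no labels; the algorithm finds the groups itself.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)                  # e.g. [0 0 1 1]
```
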
To improve your understanding further, you can read the articles or watch the clips at the following links. Each one adds a little more to the overall picture:

  • Wikipedia page for an explanation on all of these types of learning algorithms
  • Article on guru99.com to better understand supervised and unsupervised learning
  • This clip by Simplilearn should also help you to understand more about supervised and unsupervised learning. Be warned, though, that it is a short but fairly detailed clip and the presenter whizzes through the content, so you may need to watch it several times in order to fully understand the concepts.
  • Article on springboard.com to understand supervised and unsupervised learning.

To get a clearer understanding about some of the capabilities of machine learning, visit the Microsoft Lobe website.

What is Deep learning?

Deep learning is also known as neural networks or artificial neural networks. The terms neural network and deep learning are used almost synonymously; deep learning is the more modern of the two. Deep learning is a subset of machine learning.

In spite of searching the world wide web, I have not been able to find a written definition that is easy to understand. This clip from Andrew Ng includes a good definition of deep learning, roughly halfway through the clip. You might also wish to watch this clip by Simplilearn on deep learning, and this one on neural networks, in order to improve your understanding.
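
For the curious, here is a minimal sketch (in Python with NumPy) of a tiny neural network being trained. It learns the classic XOR function, which a single layer cannot represent, and it is only a toy; but it shows the essential loop of deep learning: make a prediction, measure the error, and nudge the network’s weights to reduce it.

```python
# A tiny neural network (one hidden layer) trained on XOR.
# Pure NumPy, for illustration only; real deep learning uses
# frameworks such as TensorFlow or PyTorch and far bigger models.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    hidden = sigmoid(X @ W1 + b1)        # forward pass
    out = sigmoid(hidden @ W2 + b2)
    # backpropagation: gradients of the squared error
    d_out = (out - y) * out * (1 - out)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hid;      b1 -= 0.5 * d_hid.sum(axis=0)

print(out.round(2))  # close to [[0], [1], [1], [0]] after training
```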

What is Natural Language Processing (NLP)?

A simple definition is provided on the website becominghuman.ai as follows:

Natural Language Processing, usually shortened as NLP, is a branch of artificial intelligence that deals with the interaction between computers and humans using the natural language.

The ultimate objective of NLP is to read, decipher, understand, and make sense of the human languages in a manner that is valuable. Most NLP techniques rely on machine learning to derive meaning from human languages.
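
As a very small, illustrative taste of what this involves, here is a Python sketch of one of NLP’s most basic building blocks: splitting text into tokens and counting them (a “bag of words”). The sentence is invented; real NLP systems go far beyond this, but most begin by turning text into something a computer can count.

```python
# A first step in many NLP pipelines: tokenise text and count words.
# Pure Python, for illustration only; the sentence is made up.
from collections import Counter
import re

text = "The patient reported mild headache. The headache resolved quickly."

tokens = re.findall(r"[a-z']+", text.lower())  # crude tokenisation
counts = Counter(tokens)

print(tokens[:5])             # ['the', 'patient', 'reported', 'mild', 'headache']
print(counts.most_common(2))  # [('the', 2), ('headache', 2)]
```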

To get a slightly deeper understanding of NLP, read this article on the website enterpriseprojects.com or watch this clip on YouTube.

What is computer vision?

According to IBM, computer vision is a field of artificial intelligence (AI) that enables computers and systems to derive meaningful information from digital images, videos and other visual inputs — and take actions or make recommendations based on that information. If AI enables computers to think, computer vision enables them to see, observe and understand.
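
As a toy illustration of “deriving information from an image and making a recommendation”, here is a sketch using the Pillow imaging library: it measures the average brightness of an image and flags dark ones. The filename is a placeholder, and real computer vision systems use far more sophisticated techniques such as convolutional neural networks.

```python
# A toy "computer vision" step: derive a simple measurement from
# an image and turn it into a recommendation.
# Requires the Pillow library; "sample.jpg" is a placeholder path.
from PIL import Image, ImageStat

img = Image.open("sample.jpg").convert("L")   # load as greyscale
brightness = ImageStat.Stat(img).mean[0]      # average pixel value, 0-255

if brightness < 60:
    print(f"Brightness {brightness:.0f}: image looks dark, consider retaking")
else:
    print(f"Brightness {brightness:.0f}: image looks fine")
```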

What is Data science?

First of all, it is important to point out that data science is a separate field from artificial intelligence (AI). The reason for mentioning it here is that people often use the terms AI and data science interchangeably, even though it is not correct to do so. However, as you will note in one of the definitions below, AI can be involved in a data science project.

A very simple definition from the website builtin.com defines it as:
The extraction of actionable insights from raw data.

A slightly more detailed definition from Wikipedia is:
Data science is an inter-disciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from structured and unstructured data. Data science is related to data mining, machine learning and big data.

To add some more perspective to the definitions above, here is another definition from the website datarobot.com:

Data science is the field of study that combines domain expertise, programming skills, and knowledge of mathematics and statistics to extract meaningful insights from data. Data science practitioners apply machine learning algorithms to numbers, text, images, video, audio, and more to produce artificial intelligence (AI) systems to perform tasks that ordinarily require human intelligence. In turn, these systems generate insights which analysts and business users can translate into tangible business value.

In its simplest form, the outcome of a data science project is normally a report which provides/suggests actionable insights.
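
To make “extracting actionable insights from raw data” concrete, here is a small sketch using the pandas library with an invented dataset: a few rows of raw sales figures are summarised into the kind of one-line finding that could go straight into a report.

```python
# A miniature data-science step: raw data in, actionable insight out.
# Requires pandas; the figures below are entirely made up.
import pandas as pd

raw = pd.DataFrame({
    "region":  ["North", "North", "South", "South", "South"],
    "product": ["A", "B", "A", "B", "B"],
    "sales":   [120, 80, 200, 310, 290],
})

summary = raw.groupby("region")["sales"].sum()
print(summary)
# A one-line "insight" for the report:
print(f"Top region: {summary.idxmax()} ({summary.max()} units)")
```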

Here is a clip from Andrew Ng on the steps of a data science project.

What is generative AI?

According to Google, generative AI, or generative artificial intelligence, refers to the use of AI to create new content, like text, images, music, audio, and videos.

Generative AI is powered by foundation models (large AI models) that can multi-task and perform out-of-the-box tasks, including summarization, Q&A, classification, and more. Plus, with minimal training required, foundation models can be adapted for targeted use cases with very little sample data. 

Generative AI works by using a machine learning (ML) model to learn the patterns and relationships in a dataset of human-created content. It then uses the learned patterns to generate new content.
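
As a drastically simplified illustration of “learning patterns from human-created content and then generating new content”, here is a toy Markov-chain text generator in pure Python. This is not how modern generative AI models work internally (they use very large neural networks), but it captures the same basic idea: learn which words tend to follow which, then sample new sequences. The “training” text is made up.

```python
# A toy text generator: learn word-to-word patterns from a small
# corpus, then generate new text by sampling those patterns.
# Pure Python, for illustration only.
import random
from collections import defaultdict

corpus = ("the cat sat on the mat the dog sat on the rug "
          "the cat chased the dog around the mat").split()

# Learn the patterns: which words follow each word, and how often.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

# Generate new content by sampling the learned patterns.
random.seed(1)
word, output = "the", ["the"]
for _ in range(10):
    word = random.choice(follows[word])
    output.append(word)
print(" ".join(output))
```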

Generative AI can process vast amounts of content, creating insights and answers in the form of text, images and other user-friendly formats. Generative AI can be used (amongst other things) to:

  • Improve customer interactions through enhanced chat and search experiences 
  • Explore vast amounts of unstructured data through conversational interfaces and summarizations 
  • Assist with repetitive tasks like replying to requests for proposals (RFPs), localizing marketing content in five languages, and checking customer contracts for compliance

You can view this clip from Google to understand more about generative AI.

In this article from Synthesia, you can read about 50 Useful Generative AI Examples in 2023.

Hopefully, you will have found this post useful in helping you to understand a little more about AI, and will use it as a springboard to learn more about the subject and about its current and future applications in the pharmaceutical and/or medical device industries. Be aware that the terminology of AI is not used or defined/described consistently, so depending on where you look, don’t be surprised if you find differences, both big and small, in the way that concepts/terms are explained.

Further information/resources
  1. Elements of AI, a free basic course from Finland.
  2. Reflection paper on the use of artificial intelligence in the lifecycle of medicines – European Medicines Agency (EMA) – July 2023
  3. Artificial Intelligence and Machine Learning (AI/ML) for Drug Development – US FDA May 2023
  4. FDA Releases Two Discussion Papers to Spur Conversation about Artificial Intelligence and Machine Learning in Drug Development and Manufacturing – May 2023
  5. Artificial intelligence in medicine regulation – European Medicines Agency August 2021
  6. Focus Area: Artificial Intelligence – US FDA June 2022
  7. Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan – US FDA December 2021
Updates
  • 23 Nov 2023: Added section on Generative AI.
  • 3 Sept 2023: Entire post updated to improve readability. Additional useful links provided. Further information/resources section added.