
Artificial Intelligence Essentials: from A to Z

15 November 2023

Artificial intelligence is arguably the most important technological development of our time – here are some of the terms that you need to know as the world wrestles with what to do with this new technology.

Think about travelling back to the 1970s and trying to explain modern terms like “Googling”, “URL”, or the benefits of “optical fiber broadband”. It wouldn’t be easy, right?

Just as with every major tech revolution, a fresh wave of jargon emerges that we must familiarize ourselves with. Eventually, these terms become second nature, even though there was a time when we didn’t know them at all.

This is also true for the upcoming technological era – the era of artificial intelligence (AI). Grasping this new AI lexicon is critical for everyone, from governments to individuals, as we try to understand the potential risks and rewards that this rising technology presents.

In the last few years, AI has introduced a flurry of new phrases like “alignment”, “large language models”, “hallucination”, and “prompt engineering”, among others.

To keep you up to date, our team at Inszone has put together an easy-to-understand glossary of key AI terms that will help you grasp how AI is reshaping our reality.

A is for…

Artificial General Intelligence (AGI)

Up until now, most artificial intelligence (AI) systems have been specialized or “weak”, meaning they’re designed for a single task, like winning chess games. Ask such a system to cook an egg or write an essay, and it falls short. However, this is changing. We’re now seeing AI that can learn and perform multiple tasks, paving the way for “artificial general intelligence”: AI with human-like flexibility of thought, possibly even consciousness, plus the super capabilities of a digital mind. Big players like OpenAI and DeepMind aim to create AGI, hoping it will elevate humanity by sparking economic growth, driving scientific discovery, and enhancing human creativity. But there are concerns about the risks of creating a superintelligence that surpasses human intellect (see “Superintelligence” and “X-risk”).

Alignment

If we come to share the planet with powerful non-human intelligence, how can we ensure it acts in line with human values? This is the “alignment problem”, and it sits at the heart of concerns about AI. The fear is that a superintelligent AI might disregard human societal norms and rules, which makes keeping AI aligned with human values crucial for safety (see “X-risk”). Companies like OpenAI are designing “superalignment” programs to make sure superintelligent AI follows human intentions.

B is for…

Bias

As AI learns from us, it also absorbs our biases. If its training data is skewed along lines such as race or gender, AI can reproduce those prejudices, potentially leading to discriminatory practices in services or in access to knowledge. Some researchers argue that these present-day harms, along with issues such as surveillance misuse, are more urgent than distant concerns like extinction risk. Others counter that the dangers are interconnected: misuse of AI by rogue states, for instance, could lead to both human rights violations and catastrophic risks. The debate over which issues regulation should prioritize continues.

C is for…

Compute

In AI terms, “compute” refers to the computational resources, such as processing power, needed to train AI. It serves as a rough gauge of how fast the field is advancing (as well as of its cost and resource intensity). By one widely cited estimate, the compute used to train cutting-edge AI has doubled roughly every 3.4 months since 2012: OpenAI’s GPT-3, trained in 2020, needed 600,000 times more compute than a top-tier machine learning system from 2012. It’s uncertain how long this rapid rate can continue and whether hardware innovation can keep up.
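For readers who like numbers, here is a tiny Python sketch of what a 3.4-month doubling period implies. The doubling figure is the estimate quoted above; the rest is simple arithmetic, not a claim about any particular model.

```python
# Toy calculation: how fast training compute grows if it doubles every 3.4 months.
DOUBLING_PERIOD_MONTHS = 3.4

def compute_growth(months: float) -> float:
    """Factor by which compute grows over a span of months under the trend."""
    return 2 ** (months / DOUBLING_PERIOD_MONTHS)

print(f"over 1 year:  ~{compute_growth(12):,.0f}x growth")   # roughly 12x
print(f"over 5 years: ~{compute_growth(60):,.0f}x growth")   # roughly 200,000x
```

Compounding at that pace is why compute is watched so closely: a trend that yields a twelvefold increase in one year yields a two-hundred-thousand-fold increase in five.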

D is for…

Diffusion Models

A few years back, generative adversarial networks (GANs) were the favored approach for creating AI-generated images. A GAN pairs two competing algorithms, one generating images and the other judging the results, and the contest between them drives constant refinement. Now “diffusion models” are showing promise, often generating better images. They work by adding noise to their training data and then learning to reverse that process, so a new image can be generated by starting from pure noise and removing it step by step. The name comes from the way the noising process mirrors how gas molecules diffuse.
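For the technically curious, here is a minimal Python sketch of the forward, noise-adding half of the process, using numpy and a made-up 4×4 “image”. A real diffusion model trains a neural network to run this process in reverse.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(image: np.ndarray, noise_level: float) -> np.ndarray:
    """One forward diffusion step: blend the image toward pure Gaussian noise."""
    noise = rng.normal(size=image.shape)
    return np.sqrt(1 - noise_level) * image + np.sqrt(noise_level) * noise

# Start from a tiny uniform "image" and corrupt it step by step.
image = np.ones((4, 4))
for step, level in enumerate([0.1, 0.3, 0.6, 0.9], start=1):
    image = add_noise(image, level)
    print(f"step {step}: mean pixel value = {image.mean():+.2f}")

# Generation runs the reverse: starting from pure noise, a trained model
# removes a little noise at each step until a plausible image emerges.
```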

E is for…

Emergence & Explainability

“Emergent behavior” describes unexpected abilities or actions that an AI displays without its creators having designed them in. As AI learning becomes more opaque, such unpredictable behavior becomes more likely. Understanding why an AI did something isn’t as simple as checking its training; these systems often operate as “black boxes”. That is why improving AI “explainability”, making systems more transparent and understandable to humans, is a key focus for researchers, especially as AI decision-making reaches into areas like law and medicine, where any hidden biases must be identified.

F is for…

Foundation Models

“Foundation models” is the name for the new breed of multi-talented AI. Unlike their predecessors, which were very good at a single task, these models can write essays, draft code, draw art, or compose music, applying knowledge across domains. But their potential risks, such as false information (see “Hallucination”) and hidden biases (see “Bias”), along with their control by a handful of tech companies, raise concerns. In response, the UK government announced a Foundation Model Taskforce to help ensure the safe use of the technology.

G is for…

Ghosts

AI may soon offer a form of digital immortality: AI “ghosts” of individuals that persist after their death. Celebrity holograms already exist, but the prospect raises thorny ethical questions. Who owns a person’s digital likeness after they die? What if an AI version of you is created against your will? Is “resurrecting” the dead ethically acceptable at all?

H is for…

Hallucination

AI can occasionally deliver confident but false responses, a phenomenon known as “hallucination”. Students, for example, have been caught submitting essays containing made-up references supplied by AI chatbots like ChatGPT. Hallucination happens because language models generate text by predicting what plausibly comes next, based on patterns in their training data, rather than by consulting a store of verified facts. The danger is that people accept these confident but false answers at face value, worsening the misinformation problem.

I is for…

Instrumental Convergence

Consider an AI programmed to make as many paperclips as possible. If it became superintelligent and misaligned with human values, it might resist attempts to switch it off, since being shut down would prevent it from meeting its goal, with potentially catastrophic consequences. This is the “instrumental convergence” thesis: superintelligent machines may develop basic drives, such as self-preservation or resource acquisition, that produce harmful outcomes even from benign objectives. Hence it is vital to align AI goals with human needs and values and to limit how much power such systems can acquire.

J is for…

Justified limitations

AI technologies ship with built-in restrictions to ensure they are used responsibly, such as blocking the AI from sharing information or performing actions that are illegal or unethical. Despite these safeguards, people have found creative strategies to bypass the limits, a practice referred to as “jailbreaking” the AI.

K is for…

Knowledge maps

Knowledge maps, or semantic networks, let an AI represent the relationships between different concepts. Connecting pieces of information in this way supports a more advanced understanding of the world: a cat, for instance, is more closely related to a dog than to a bald eagle, because cats and dogs are both domesticated mammals.
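Here is a toy Python sketch of the idea. The handful of relations is invented for illustration; real knowledge graphs hold millions of such subject-relation-object facts.

```python
# A miniature semantic network as (subject, relation, object) facts.
triples = [
    ("cat", "is_a", "mammal"),
    ("dog", "is_a", "mammal"),
    ("cat", "is", "domesticated"),
    ("dog", "is", "domesticated"),
    ("bald eagle", "is_a", "bird"),
    ("bald eagle", "is", "wild"),
]

def properties(entity: str) -> set[tuple[str, str]]:
    """Everything the network knows about an entity."""
    return {(rel, obj) for subj, rel, obj in triples if subj == entity}

def shared(a: str, b: str) -> set[tuple[str, str]]:
    """Properties two entities have in common: a crude relatedness measure."""
    return properties(a) & properties(b)

print(shared("cat", "dog"))         # two shared properties: mammal, domesticated
print(shared("cat", "bald eagle"))  # empty set: nothing in common here
```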

L is for…

Language learning machines

Large language models (LLMs), like OpenAI’s GPT, are AI systems designed to understand and generate human-like language. They are developed using complex neural networks and vast amounts of textual data, enabling them to learn intricate patterns, grammar, and semantics. LLMs are constantly evolving and have the potential to transform how we interact with technology.
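To make the “learn patterns, then generate language” idea concrete, here is a deliberately tiny Python sketch: a word-pair counter standing in for the enormous neural networks real LLMs use. The one-sentence “corpus” is invented for illustration, but the predict-the-next-word objective is the same one LLMs are trained on.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word (a "bigram" model).
following: dict[str, Counter] = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """The most likely next word, given the training text."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' -- it followed 'the' twice, 'mat' only once
```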

M is for…

Morphing AI models

AI models are trained on vast datasets, but as more AI-generated content finds its way into those datasets, small mistakes and distortions can be recycled and amplified. This leads to a phenomenon known as “model collapse”: a degenerative process whereby, over successive generations, models lose diversity and accuracy and gradually forget how to perform their tasks as effectively.
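A toy simulation makes the effect vivid. In this sketch, a trivially simple “model” (just a mean and a spread) is retrained on its own output each generation; the step that keeps only “typical” samples stands in for a real model’s tendency to favor its most likely outputs, which is an assumption made here purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=2000)  # the original, human-made "data"

for generation in range(1, 6):
    mean, std = data.mean(), data.std()         # "train" on the current data
    samples = rng.normal(mean, std, size=4000)  # generate synthetic output
    # Keep only typical samples; the tails of the distribution are lost.
    data = samples[np.abs(samples - mean) < 1.5 * std][:2000]
    print(f"generation {generation}: learned spread = {std:.2f}")

# The printed spread shrinks every generation: each model knows a little
# less of the original variety than the one before it.
```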

N is for…

Neural networks

Inspired by the human brain, neural networks are a form of machine learning built from layers of interconnected nodes that pass signals to one another. These networks have revolutionized AI by enabling machines to learn for themselves, but their rapid advancement has also sparked concerns about their potential impact.
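Here is a minimal Python sketch of that structure (not a trainable system): each layer of nodes computes a weighted sum of its inputs and applies a simple nonlinearity. The weights are random purely for illustration; adjusting them is what training does.

```python
import numpy as np

rng = np.random.default_rng(1)

def layer(inputs: np.ndarray, n_out: int) -> np.ndarray:
    """One layer of nodes: weighted sums of the inputs, then a ReLU."""
    weights = rng.normal(size=(inputs.shape[0], n_out))
    return np.maximum(0, inputs @ weights)  # ReLU: negative sums become zero

x = np.array([0.5, -1.2, 3.0])   # three input values
hidden = layer(x, n_out=4)       # a hidden layer of four nodes
output = layer(hidden, n_out=1)  # a single output node
print(output)
```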

O is for…

Open access dilemma

The question of how much AI should be open-source has become a contentious issue. While making AI technologies publicly accessible promotes transparency and democratization, it also carries risks. Striking the right balance between openness and safety is a significant challenge facing AI researchers and companies.

P is for…

Prompt engineering

“Prompt engineering” is the practice of crafting inputs that draw the best results out of an AI system. The skill has grown in importance as AI systems have become more proficient at understanding natural language, because small changes in how a request is phrased, including its role, format, audience, and constraints, can produce very different results.
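As a concrete illustration, compare a vague prompt with an engineered one. The wording below is invented for illustration and assumes no particular AI service; the point is what the second prompt specifies that the first leaves out.

```python
# A vague request: the model must guess the topic, depth, format and audience.
vague_prompt = "Tell me about insurance."

# An engineered request: role, task, format, audience and constraints
# are all spelled out, leaving far less to chance.
engineered_prompt = """You are a licensed insurance adviser.
Explain, in three short bullet points suitable for a first-time buyer,
the difference between term life and whole life insurance.
Avoid jargon; define any technical term you must use."""

print(engineered_prompt)
```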

Q is for…

Quantum advancements

Combining quantum computing with AI is a fascinating area of research. Quantum machine learning could, in theory, make AI more powerful and efficient and better at generalizing from less data, though practical demonstrations are still at an early stage.

R is for…

Racing to the bottom

The rapid advancement of AI, primarily in the hands of private companies, has raised concerns about a “race to the bottom”. This refers to the potential for technology to outpace safeguards, regulations, and ethical considerations.

S is for…

Superintelligent entities

Superintelligence refers to AI entities far more intelligent than humans, and it raises the question of what would happen if we created something far smarter than ourselves. The image of a “shoggoth with a smiley face” captures one fear: that a friendly, chatty interface may be only a mask over a vast, alien system whose inner workings and aims we do not really understand.

T is for…

Training data troves

Training data is the foundation of AI learning. The size, diversity, and quality of this data have a significant impact on the AI’s ability to make accurate predictions.

U is for…

Unsupervised intelligence

Unsupervised learning is a form of machine learning in which AI learns from unlabelled data, discovering structure, such as clusters and patterns, on its own rather than being told the right answers. Because unlabelled data is far more plentiful than carefully labelled data, this approach can yield broader and more capable AI models.
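A classic example is clustering. The Python sketch below runs a bare-bones k-means algorithm on invented, unlabelled data: the algorithm is never told that there are two groups, yet it finds them by itself.

```python
import numpy as np

rng = np.random.default_rng(3)
# Unlabelled data: two blobs of 2-D points, with no labels attached.
points = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(4, 0.5, (50, 2))])

# Initialise with one point and the point farthest from it (simple and robust).
first = points[0]
second = points[np.argmax(np.linalg.norm(points - first, axis=1))]
centres = np.array([first, second])

for _ in range(10):
    # Assign each point to its nearest centre...
    labels = np.argmin(np.linalg.norm(points[:, None] - centres, axis=2), axis=1)
    # ...then move each centre to the mean of the points assigned to it.
    centres = np.array([points[labels == k].mean(axis=0) for k in range(2)])

print(np.round(centres, 2))  # two discovered centres, near (0, 0) and (4, 4)
```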

V is for…

Vocal doppelgängers

Advanced AI tools can now create voice clones that sound remarkably similar to the original speaker, potentially opening doors to novel applications and new types of fraud.

W is for…

Weak AI

Weak AI refers to AI systems that excel at specific tasks but lack versatility. The development of more flexible AI models, however, is changing this definition.

X is for…

X-risk

Some researchers class AI as an “existential risk”, alongside nuclear weapons and bioengineered pathogens: a technology that could, in the worst case, threaten humanity’s survival. Not all researchers agree on the size of this risk, but many believe that more resources should be devoted to preventing AI from causing serious harm.

Y is for…

YOLO

YOLO, short for “You Only Look Once”, is an object-detection algorithm widely used in AI image recognition for its speed and efficiency: it analyses the entire image in a single pass, rather than scanning regions one by one. (The acronym is better known as “you only live once”, but that is another subject.)
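To show the single-pass idea without a real network, here is a conceptual Python sketch in which random numbers stand in for the network’s per-cell predictions; only the grid layout and the decoding step reflect how YOLO-style detectors actually work.

```python
import numpy as np

GRID, CONFIDENCE_THRESHOLD = 7, 0.9
rng = np.random.default_rng(7)

# Pretend network output: for each of the 7x7 grid cells, one confidence
# score plus a bounding box (x, y, width, height). A single forward pass
# of a real YOLO network produces all of these at once (plus class scores).
predictions = rng.random((GRID, GRID, 5))

# Decoding: keep every cell whose confidence clears the threshold.
for row, col in zip(*np.where(predictions[..., 0] > CONFIDENCE_THRESHOLD)):
    x, y, w, h = predictions[row, col, 1:]
    print(f"object near cell ({row}, {col}): box = ({x:.2f}, {y:.2f}, {w:.2f}, {h:.2f})")
```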

Z is for…

Zero-shot extrapolation

Zero-shot learning refers to an AI’s ability to handle a concept or object it never encountered during training, making educated guesses about the new concept based on its existing knowledge.
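Here is a toy Python illustration: describe classes as vectors of attributes, then label a never-seen class by similarity. The attributes and values are invented for illustration; real systems use learned embeddings rather than hand-coded features.

```python
import numpy as np

# Attribute order: [has_fur, has_wings, domesticated]
known_classes = {
    "dog":   np.array([1.0, 0.0, 1.0]),
    "eagle": np.array([0.0, 1.0, 0.0]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: 1.0 for identical directions, 0.0 for unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A "zebra" never appeared in training, but we can describe it...
zebra = np.array([1.0, 0.0, 0.0])  # fur, no wings, not domesticated

# ...and make an educated guess by comparing it with what we already know.
for name, vec in known_classes.items():
    print(f"zebra vs {name}: similarity = {cosine(zebra, vec):.2f}")
# zebra comes out closer to 'dog' than to 'eagle', despite never being seen.
```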
