Artificial Intelligence (AI) Taxonomy

April 25, 2025 (IG10077)

Summary

Information as of April 25, 2025. Prepared by Laurie Harris, Analyst in Science and Technology Policy; Nora Wells, Analyst in Health Policy; and Juan Pablo Madrid, Visual Information Specialist.

Experts generally describe AI as the broad concept of machine-based systems that can do tasks commonly thought to require human intelligence, like making predictions and recommendations, translating languages, or generating text, images, audio, and video. The term has evolved as research and applications of AI technologies have advanced, leading to the development of new terminology. As Congress works to develop and enact legislation related to AI technologies, questions frequently arise around what terms to define, how to define them, and how they are interrelated. This infographic describes key AI terms and illustrates how they are related to one another. (Note that while this represents a synthesis of ideas from many AI experts and stakeholders, the definitions and intersections of these terms are evolving and still under debate. There are not universally agreed-upon bright lines or hierarchies among terms.) For example, there has been some debate over whether expert systems should still be considered AI, as they cannot adapt to unexpected inputs or variables outside the rules they were programmed with (i.e., they lack a learning component).

Artificial Intelligence (AI) is a broad term referring to algorithms and techniques that aim to give computer systems the ability to learn new concepts or tasks and solve complex problems in a manner that mimics human intelligence. The concept of AI and AI systems can encompass a range of technologies, methodologies, and application areas, such as natural language processing, facial recognition, and robotics.

Expert Systems, an early approach to making machines that mimic human intelligence, are algorithms encoded with expert knowledge but lacking a learning component. In these rules-based systems, programmers solve a problem, then program routines and rules that the system uses to respond to new inputs.
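The rules-based approach described above can be illustrated with a short sketch. The scenario and rules below are hypothetical examples, not drawn from any particular system: a programmer encodes advice rules in advance, and the system simply matches inputs against them, with no learning component.

```python
# A toy rules-based "expert system": every rule is written by a programmer
# ahead of time. The weather-advice scenario is purely illustrative.
RULES = [
    (lambda facts: facts.get("raining"), "Take an umbrella."),
    (lambda facts: facts.get("temperature_f", 70) < 32, "Wear a heavy coat."),
    (lambda facts: True, "No special preparation needed."),  # default rule
]

def advise(facts):
    """Return the first piece of advice whose rule matches the input facts.

    The system responds only to conditions its programmers anticipated;
    anything unexpected falls through to the default rule.
    """
    for condition, advice in RULES:
        if condition(facts):
            return advice
```

Because the rules are fixed, improving the system means a human rewriting them; contrast this with the machine learning sketch further below, where behavior improves from data.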

Robotics refers to the design, construction, and use of machines (robots) to replicate, or assist with, human actions. Robotics is one area where AI can be applied in physical applications, sometimes called robotic learning, to help robots learn and improve their performance through self-exploration or guidance from human operators.

Machine Learning (ML), often referred to as a subfield of AI, uses algorithms to enable systems to identify and learn from patterns or relationships in data without being explicitly programmed. The performance of these systems can improve as they learn from more data to then make predictions or decisions on new, unseen data.
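The idea of learning a pattern from data, rather than programming it explicitly, can be shown with a minimal sketch. The data, learning rate, and iteration count below are illustrative choices: the model starts with no knowledge of the relationship and repeatedly adjusts a single parameter to reduce its prediction error.

```python
# Minimal machine learning sketch: fit a line y = w * x to example data
# by gradient descent. The pattern (here, y = 2x) is never written into
# the program; it is learned from the training examples.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # underlying pattern: y = 2x

w = 0.0                    # model parameter, initially a guess
for _ in range(200):       # repeated passes over the training data
    # gradient of the mean squared prediction error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= 0.01 * grad       # nudge w in the direction that reduces error

prediction = w * 5.0       # predict on a new, unseen input
```

After training, `w` is close to 2, so the model generalizes to inputs it never saw, which is the "predictions or decisions on new, unseen data" described above.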

Deep Learning (DL), a subset of ML, uses neural network (NN) techniques to process and analyze large-scale, complex data, including text, images, and audio. NNs were originally inspired by how layers of neurons in the human brain signal each other, with the artificial neurons grouped into layers of interconnected nodes (i.e., computational units). NNs "learn" by adjusting the connections between nodes, tuning the strength of each connection so that the system's outputs better match the correct outputs for the training data; this is the general process of AI model training. A DL system usually has numerous layers that together can contain thousands or millions of processing nodes. Given the size and complexity of most NNs built for AI model training, the terms DL and NN are often used interchangeably.
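The training process described above, adjusting connections between layered nodes so outputs move toward the correct answers, can be sketched at toy scale. This is pure Python with illustrative sizes (two inputs, three hidden nodes, one output) and a made-up task; real DL systems have many more layers and nodes and use specialized software and hardware.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    """Squash a node's weighted input into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# Toy task (XOR): output 1 only when exactly one input is 1. A single
# layer cannot capture this pattern, which is why a hidden layer helps.
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]

H = 3                                               # hidden nodes
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    """Pass an input through the hidden layer, then the output node."""
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(H)]
    o = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, o

def loss():
    """Total squared error over the training data."""
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

loss_before = loss()
lr = 0.5
for _ in range(5000):
    for x, t in data:
        h, o = forward(x)
        d_o = (o - t) * o * (1 - o)          # error signal at the output
        for j in range(H):
            d_h = d_o * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * d_o * h[j]         # adjust hidden-to-output links
            for i in range(2):
                w1[j][i] -= lr * d_h * x[i]  # adjust input-to-hidden links
            b1[j] -= lr * d_h
        b2 -= lr * d_o
loss_after = loss()
```

Training here is nothing but the repeated tuning of connection weights; the loss after training is lower than before, which is the "learning" the paragraph describes.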

Natural Language Processing (NLP) refers to the use of rules-based or ML approaches to understand the structure and meaning of written or spoken human language. DL, GenAI, and LLMs have been leveraged for NLP applications such as improving language translation and powering chatbots, thus spanning multiple AI terms and approaches.
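A rules-based NLP approach, the older of the two approaches mentioned above, can be sketched briefly. The tokenizer and classification rule below are hypothetical illustrations: a programmer hand-writes the linguistic pattern, whereas an ML-based NLP system would learn such patterns from data.

```python
import re

# Hand-written linguistic knowledge: English question words.
QUESTION_WORDS = {"who", "what", "when", "where", "why", "how"}

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def is_question(text):
    """Rule: a sentence is a question if it ends with '?' or begins
    with a question word. The rule is programmed, not learned."""
    tokens = tokenize(text)
    return text.strip().endswith("?") or (bool(tokens) and tokens[0] in QUESTION_WORDS)
```

Hand-written rules like this break down on the variety of real language, which is one reason modern NLP leans on ML and DL instead.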

Generative AI (GenAI) refers to AI systems that can generate content—such as written material, audio, images, or computer code—from prompts using advanced techniques such as NNs that help the underlying models better understand how data elements influence and depend on one another.
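The core idea of generating new content from learned relationships between data elements can be shown with a deliberately simple stand-in. The sketch below uses a bigram Markov chain, not the NN-based techniques the paragraph describes, and a made-up ten-word corpus; it only illustrates the pattern of learning statistics from data and then sampling new text from a prompt.

```python
import random

random.seed(1)

# Tiny illustrative corpus; real generative models train on vast datasets.
corpus = "the cat sat on the mat the cat ate the rat".split()

# "Training": record which words follow each word in the corpus.
model = {}
for prev, nxt in zip(corpus, corpus[1:]):
    model.setdefault(prev, []).append(nxt)

def generate(prompt, length=5):
    """Generate text by repeatedly sampling a plausible next word."""
    out = [prompt]
    for _ in range(length):
        out.append(random.choice(model.get(out[-1], corpus)))
    return " ".join(out)
```

The output is new word sequences not copied verbatim from the corpus, which is the generative step; NN-based GenAI replaces the simple bigram table with models that capture far richer dependencies among data elements.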

General-Purpose AI Models, often referred to as foundation models (FMs), are GenAI models trained on large amounts of diverse types of data that can be fine-tuned for a wide range of downstream tasks. Large language models (LLMs) are one category of FMs designed to learn from and generate text for applications such as language translation, question answering, and computer code generation.