Glossary of AI Terms 📖

A

Accountability (in AI): The principle that AI systems should have clear lines of responsibility. If an AI makes a mistake or causes harm, there should be a way to determine who is responsible (the developers, the users, etc.).

AI Agent: A software system that uses AI to perform tasks autonomously. Unlike a typical program, an agent can perceive its environment, make decisions, and take actions to achieve specific goals.
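
A minimal sketch of that perceive-decide-act loop, using a toy thermostat "agent" whose environment, goal, and rules are invented purely for illustration (real agents call tools, APIs, or models instead of nudging a number):

```python
# Toy agent: perceives a temperature, decides on an action, acts to reach a goal.
def perceive(environment):
    return environment["temperature"]

def decide(temperature, target=21.0):
    if temperature < target - 0.5:
        return "heat"
    if temperature > target + 0.5:
        return "cool"
    return "idle"

def act(environment, action):
    if action == "heat":
        environment["temperature"] += 1.0
    elif action == "cool":
        environment["temperature"] -= 1.0

environment = {"temperature": 17.0}
for step in range(10):
    action = decide(perceive(environment))
    act(environment, action)
    print(step, action, environment["temperature"])
```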

AI Bias: When an AI system makes unfair or discriminatory decisions due to flaws in the data it was trained on. This can lead to the AI perpetuating or even amplifying existing societal biases.

API (Application Programming Interface): A set of rules and protocols that allows different software applications to communicate and exchange data with each other. In the context of AI, an API allows other programs to “talk” to an AI model.

API Endpoint: A specific URL (web address) where a software application can access the functionality of an API. In the context of Local AI, this is the “phone number” that other applications use to call your AI.
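
For example, a local model server exposes an endpoint on your own machine. The sketch below assumes an Ollama-style server at http://localhost:11434 (see Localhost) with a model named "llama3" already downloaded; the /api/generate path and payload follow Ollama's convention, and other local servers use different ones:

```python
import json
import urllib.request

# Assumed endpoint of a local model server (e.g., Ollama); adjust for your setup.
url = "http://localhost:11434/api/generate"
payload = {
    "model": "llama3",
    "prompt": "Explain what an API endpoint is in one sentence.",
    "stream": False,
}

request = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    result = json.loads(response.read())

print(result.get("response", result))  # the model's completion text
```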

Artificial General Intelligence (AGI): A hypothetical form of AI that possesses human-level intelligence across a wide range of tasks, including reasoning, learning, and creativity. AGI does not currently exist.

Artificial Narrow Intelligence (ANI): Also known as “weak AI.” This is the type of AI that exists today. It is designed to perform specific tasks, like image recognition or language translation, and does not possess general intelligence.

Augmentation: In AI, this refers to using AI to enhance human capabilities, not replace them. For example, using AI to help a doctor make a diagnosis or a writer brainstorm ideas.

Automation: Using AI to perform tasks automatically, without direct human intervention. This can range from simple tasks like sorting emails to complex processes like running a factory.

C

Climate Modeling: Using AI to analyze vast amounts of climate data and create simulations to predict future climate scenarios. This helps scientists understand and address climate change.

Completion: The output generated by a large language model in response to a prompt. It is the “completion” of the user’s initial thought or instruction.

CPU (Central Processing Unit): The “brain” of a computer. While CPUs are good at general-purpose computing, they are not as efficient as GPUs for the parallel processing required by AI models.

D

Data Privacy: Protecting personal information from unauthorized access or use. AI systems that process personal data must be designed with strong privacy safeguards.

Data Sovereignty: The concept that individuals or organizations have ultimate control over their own data, including where it is stored and how it is used. Local AI is a key technology for achieving data sovereignty.

Diffusion Model: A type of generative AI model that creates images, videos, or audio by gradually removing noise from random data. Popular image generation models like DALL-E and Midjourney use diffusion models.

Drug Discovery: Using AI to accelerate the process of identifying and developing new drugs. AI can analyze vast amounts of biological data to predict which molecules are most likely to be effective.

F

Fine-Tuning: The process of taking a pre-trained AI model and further training it on a smaller, specialized dataset to make it an expert in a specific domain or style.
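
A toy PyTorch sketch of the idea: start from a model whose weights already exist (standing in for a pre-trained model) and continue training it on a small, specialized dataset. The model, data, and task here are invented; real fine-tuning starts from a genuinely pre-trained model and uses far more data:

```python
import torch
from torch import nn

# Stand-in for a "pre-trained" model; in practice you would load existing weights,
# e.g. model.load_state_dict(torch.load("pretrained.pt")).
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))

# A small, specialized dataset (invented): the narrow task the model should master.
x = torch.randn(64, 4)
y = x.sum(dim=1, keepdim=True)

# Fine-tuning = continuing training from the existing weights, usually at a small learning rate.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

print("final loss:", loss.item())
```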

G

GPU (Graphics Processing Unit): A specialized processor originally designed to accelerate the rendering of images for display. GPUs are highly efficient at the parallel processing required by AI models.

I

Inference: The process of using a trained AI model to make predictions or generate outputs based on new input data.

Institutional Knowledge: The collective knowledge and experience within an organization. AI can be used to capture, organize, and make this knowledge accessible to employees.

L

Large Language Model (LLM): A type of AI model trained on massive amounts of text data. LLMs can generate text, translate languages, write many kinds of creative content, and answer questions.

Latency: The delay between when a user provides input to an AI system and when the system generates a response. Low latency is crucial for a smooth and responsive user experience.

Local AI: Running AI models directly on your own computer or device, rather than relying on cloud-based servers. This offers significant benefits in terms of privacy, control, and cost.

Localhost: The standard hostname used to refer to the current computer. In the context of Local AI, this is often the address (e.g., http://localhost:11434) used to access the local API.

N

No-Code Platform: A software development platform that allows users to build applications and automate workflows without writing any code. These platforms often use visual interfaces and drag-and-drop tools.

O

Offline-First: A design principle for software applications that emphasizes functionality even without an internet connection. Local AI systems are inherently offline-first.

P

Parallel Processing: The ability of a computer to perform multiple calculations simultaneously. GPUs are much better at parallel processing than CPUs, which is why they are essential for running AI models.

Parameters (AI Model): The adjustable values within an AI model that determine its behavior. A model with more parameters generally has a larger and more intricate “web of knowledge.”

Parsing: The process of analyzing a string of text, either in natural language or in computer code, to determine its logical structure and components.
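
For example, parsing a model's JSON output turns a flat string into structured fields a program can use (the JSON shown is invented for illustration):

```python
import json

# Raw text, for example from a model asked to answer in JSON.
raw = '{"title": "Quarterly Report", "sentiment": "positive", "topics": ["sales", "hiring"]}'

# Parsing turns the string into a structured Python object.
parsed = json.loads(raw)
print(parsed["title"])      # -> Quarterly Report
print(parsed["topics"][0])  # -> sales
```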

Prompt: The input or instruction given to an AI model to generate a response. The quality of the prompt significantly affects the quality of the output.

Pruning (AI Model): A technique used to optimize AI models by removing redundant or unimportant connections (“neurons”) within the neural network, making the model leaner and more efficient.
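
A minimal NumPy illustration of magnitude pruning: connections whose weights are close to zero contribute little, so they are set to zero, leaving a sparser network. Real pruning tools operate on whole models and usually retrain afterwards; the matrix here is random:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4))

# Magnitude pruning: zero out the connections with the smallest absolute values.
threshold = np.quantile(np.abs(weights), 0.5)   # prune roughly the weakest 50%
pruned = np.where(np.abs(weights) < threshold, 0.0, weights)

print("non-zero before:", np.count_nonzero(weights))
print("non-zero after: ", np.count_nonzero(pruned))
```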

Q

Quantization: A technique used to optimize AI models by reducing the precision of the numbers (parameters) within the model, making it smaller and faster to run.
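
A simplified NumPy sketch of the idea: map 32-bit floating-point weights onto 8-bit integers with a scale factor, cutting memory use by roughly 4x at the cost of a little precision. Production schemes (4-bit, grouped quantization, etc.) are more sophisticated:

```python
import numpy as np

weights = np.random.default_rng(0).normal(size=1000).astype(np.float32)

# Symmetric 8-bit quantization: scale floats into the int8 range, then round.
scale = np.abs(weights).max() / 127.0
quantized = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

# De-quantize to use the weights again; the small rounding error is the precision lost.
restored = quantized.astype(np.float32) * scale

print("memory (float32):", weights.nbytes, "bytes")
print("memory (int8):   ", quantized.nbytes, "bytes")
print("max error:", np.abs(weights - restored).max())
```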

R

Retrieval-Augmented Generation (RAG): A technique that improves the accuracy and reliability of AI models by allowing them to access and incorporate information from external knowledge sources.
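
A minimal sketch of the retrieve-then-generate pattern in plain Python: score a handful of documents against the question with simple word overlap, then prepend the best match to the prompt before sending it to a model. The documents and question are invented; real systems use vector embeddings and a proper search index:

```python
# A tiny "knowledge source" the model would not otherwise know about.
documents = [
    "The office Wi-Fi password is rotated on the first Monday of each month.",
    "Expense reports must be submitted within 30 days of purchase.",
    "The support hotline operates Monday to Friday, 9am to 5pm CET.",
]

question = "When must expense reports be submitted?"

def overlap_score(doc: str, query: str) -> int:
    # Naive retrieval: count shared lowercase words (real RAG uses embeddings).
    return len(set(doc.lower().split()) & set(query.lower().split()))

best_doc = max(documents, key=lambda d: overlap_score(d, question))

# Augment the prompt with the retrieved context before sending it to the model.
prompt = f"Answer using only this context:\n{best_doc}\n\nQuestion: {question}"
print(prompt)
```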

S

Structured Data: Data that is organized in a predefined format, such as a table or a database. AI can be used to analyze and extract insights from structured data.

T

Token: The basic unit of text that an AI model processes. Words are often broken down into smaller sub-word units called tokens.

Tokenizer: A component of an AI model that breaks down text into tokens. The tokenizer determines how efficiently the model can process language.
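
A toy illustration of sub-word tokenization: a greedy longest-match tokenizer over a tiny invented vocabulary. Real tokenizers (BPE, SentencePiece, etc.) learn their vocabularies from data, but the principle of splitting unfamiliar words into known pieces is the same:

```python
# Tiny invented vocabulary: common words stay whole, rare words split into pieces.
vocab = {"the", "token", "izer", "splits", "un", "usual", "words", " "}

def tokenize(text, vocab):
    tokens, i = [], 0
    while i < len(text):
        # Greedy longest match: take the longest vocabulary entry starting at position i.
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            tokens.append(text[i])  # unknown character: fall back to a single char
            i += 1
    return tokens

print(tokenize("the tokenizer splits unusual words", vocab))
# -> ['the', ' ', 'token', 'izer', ' ', 'splits', ' ', 'un', 'usual', ' ', 'words']
```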

Transparency (in AI): The degree to which the inner workings and decision-making processes of an AI system are understandable to humans. Transparent AI systems are easier to trust and debug.

U

Unstructured Data: Data that does not have a predefined format, such as text documents, audio recordings, or images. AI is often used to extract meaning from unstructured data.

V

VRAM (Video Random-Access Memory): The dedicated memory on a GPU. The amount of VRAM directly determines the size and complexity of the AI models that a GPU can run.
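
A rough back-of-the-envelope calculation of how VRAM relates to model size (weights only; the real footprint also includes activations and the context cache):

```python
# Approximate memory needed just to hold a model's weights.
def weight_memory_gb(parameters, bytes_per_parameter):
    return parameters * bytes_per_parameter / 1e9

params_7b = 7e9
print("7B model, 16-bit:", weight_memory_gb(params_7b, 2.0), "GB")   # ~14 GB
print("7B model, 4-bit: ", weight_memory_gb(params_7b, 0.5), "GB")   # ~3.5 GB
```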
