What Is Artificial Intelligence (AI)?

Artificial intelligence (AI) is a multidisciplinary field of computer science focused on designing systems that simulate human cognitive functions, enabling machines to perform tasks that would otherwise require human intelligence. AI encompasses several subfields:

  • Machine learning, which uses algorithms and statistical models to learn from data
  • Natural language processing
  • Computer vision
  • Robotics
  • Expert systems

AI seeks to develop intelligent agents that perceive, reason, learn, plan, and act independently or in collaboration with humans, transforming diverse industries and shaping the future of technology.

Artificial Intelligence Explained

Artificial intelligence (AI) is a rapidly evolving field encompassing the techniques, algorithms, and applications used to create intelligent agents capable of mimicking human cognitive abilities such as learning, reasoning, planning, perceiving, and understanding natural language. Though AI has only recently become mainstream, its applications are everywhere. We encounter them in virtual assistants, chatbots, image classification, facial recognition, object recognition, speech recognition, machine translation, and robotic perception.

As a field of study, AI encompasses areas such as machine learning, natural language processing, computer vision, robotics, and expert systems.

Machine Learning

At the core of AI is machine learning, a subset that leverages algorithms and statistical models to enable systems to learn from and adapt to data inputs without explicit programming. Techniques like supervised, unsupervised, and reinforcement learning enable machines to identify patterns, make predictions, and optimize decision-making based on data.

  • Supervised Learning: This involves training an algorithm on a labeled dataset, meaning each input data point is paired with an output label. Supervised learning algorithms learn a mapping from inputs to outputs, ideal for applications like spam detection or image recognition (a minimal sketch follows this list).
  • Unsupervised Learning: In contrast to supervised learning, unsupervised learning algorithms are given no labels, relying instead on the intrinsic structure of the data to draw insights. It’s used for clustering, association, and dimensionality reduction tasks.
  • Semi-supervised Learning: This approach combines a small amount of labeled data with a large amount of unlabeled data, which can enhance learning efficiency and accuracy.
  • Reinforcement Learning: Here, an agent learns by interacting with an environment, receiving rewards or penalties and gradually refining a policy that maximizes cumulative reward.
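
To ground the supervised case, here is a minimal sketch using scikit-learn (an assumed dependency; any comparable library would do). The two-feature "spam" dataset is invented purely for illustration:

```python
# A minimal supervised-learning sketch: learn a mapping from labeled examples.
# Toy features: [message_length, num_links]; labels: 1 = spam, 0 = not spam.
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X = [[120, 5], [40, 0], [300, 9], [55, 1], [250, 7], [30, 0], [180, 4], [60, 1]]
y = [1, 0, 1, 0, 1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)                      # learn input -> label mapping
print("accuracy:", model.score(X_test, y_test))  # evaluate on held-out data
```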

Natural Language Processing (NLP)

Natural language processing (NLP) equips AI systems with the ability to understand, interpret, generate, and interact with human languages. NLP techniques facilitate tasks like sentiment analysis, language translation, and chatbot development.
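
As a toy illustration of one of those tasks, the sketch below builds a tiny bag-of-words sentiment classifier with scikit-learn (an assumed dependency); the example sentences and labels are invented:

```python
# Sentiment analysis in miniature: bag-of-words features + naive Bayes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["great product, works well", "terrible, waste of money",
         "love it, highly recommend", "awful experience, very slow"]
labels = ["positive", "negative", "positive", "negative"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(texts, labels)                        # word counts drive the classifier
print(clf.predict(["works great, love it"]))  # -> ['positive']
```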

Computer Vision

Computer vision focuses on enabling machines to perceive, recognize, and interpret visual information from the surrounding environment. This discipline involves object recognition, facial recognition, and scene understanding, which are critical for applications such as autonomous vehicles and surveillance systems.

Robotics

Robotics integrates AI with mechanical, electrical, and control engineering to design, build, and program robots capable of performing complex tasks autonomously or semi-autonomously. Robots can range from industrial manipulators to humanoid assistants, leveraging AI for navigation, manipulation, and interaction with humans and their environment.

Expert Systems

Expert systems, a branch of AI, involve the development of rule-based systems that emulate human expertise in specific domains. Expert systems are used to provide recommendations, diagnoses, or decision support based on a set of predefined rules and a knowledge base.

The Interdependence of AI Techniques

Machine learning, deep learning, and natural language processing have become increasingly intertwined, with each subfield complementing the others to create more sophisticated AI systems.

For example, deep learning techniques, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), have been applied to computer vision and NLP tasks, leading to state-of-the-art performance in image classification and machine translation. Similarly, transformer architectures have revolutionized natural language processing by significantly improving the performance of tasks such as machine translation, information extraction, sentiment analysis, and question answering. The combination of probabilistic methods, such as Bayesian networks and expectation-maximization algorithms, with machine learning approaches has provided powerful tools for handling uncertainty and making data-driven decisions.

The fusion of reinforcement learning, computer vision, and control algorithms enables robots to learn complex behaviors, navigate dynamic environments, and interact with objects. Expert systems showcase the interdependence of AI techniques through the integration of knowledge representation, inference engines, and machine learning.

By combining these components, expert systems can reason, learn, and adapt to new information, making them valuable tools for decision-making in various domains.

Revolutionizing Industries

AI has made significant strides in various domains, transforming industries and the way we live, work, and interact.

Healthcare

AI has made remarkable advancements in healthcare, enabling early disease detection, personalized treatment plans, and improved patient outcomes. Deep learning algorithms, particularly convolutional neural networks (CNNs), have been instrumental in enhancing medical imaging analysis for diagnosing diseases such as cancer and Alzheimer's.

Natural language processing techniques have empowered the extraction of vital information from electronic health records and scientific literature, streamlining medical research and decision-making. Additionally, AI-driven drug discovery platforms have accelerated the development of new pharmaceuticals, reducing the time and cost of bringing life-saving medications to market.

Finance

The financial sector has harnessed AI to optimize trading strategies, detect fraud, manage risk, and improve customer service. Most of us have experienced streamlined support or received personalized financial advice from AI-driven chatbots and virtual assistants.

Machine learning algorithms, such as support vector machines and decision trees, enable automated trading systems to analyze vast quantities of data and execute trades with precision and speed. AI-powered fraud detection systems leverage anomaly detection and pattern recognition techniques to identify suspicious activities, enhancing security and mitigating loss.

Transportation

AI has transformed the transportation industry through the development of autonomous vehicles, traffic management systems, and route optimization algorithms. Machine learning techniques, computer vision, and sensor fusion enable self-driving cars to perceive and navigate complex environments, promising to reduce accidents and improve traffic flow.

AI-driven traffic management systems analyze real-time traffic data and predict congestion patterns, optimizing traffic signal timings and reducing commute times. Route optimization algorithms, powered by AI, help logistics companies and delivery services minimize fuel consumption and improve efficiency.

Education

AI has the potential to revolutionize education through personalized learning, intelligent tutoring systems, and automated grading. Machine learning algorithms analyze students' learning patterns, preferences, and progress, tailoring educational content to optimize learning outcomes. Intelligent tutoring systems provide individualized feedback, guidance, and support, bridging the gap between students and instructors. AI-driven grading systems can assess essays and other complex assignments, saving time for educators and providing students with timely, consistent feedback.

Manufacturing

AI has been instrumental in modernizing manufacturing processes, enhancing productivity, and reducing waste. Machine learning algorithms enable predictive maintenance, identifying potential equipment failures before they occur and reducing downtime. Computer vision systems, powered by deep learning, facilitate automated quality control, ensuring the accuracy and consistency of manufactured products. AI-driven supply chain optimization platforms analyze demand forecasts, inventory levels, and production schedules, streamlining operations and minimizing costs.

Entertainment and Media

AI has reshaped the entertainment and media landscape by enabling content personalization, recommendation systems, and creative applications. Machine learning algorithms analyze user preferences, behavior, and demographics to curate personalized content and recommendations, enhancing user engagement and satisfaction. Generative AI techniques, such as generative adversarial networks (GANs) and transformer architectures, have empowered the creation of novel art, music, and storytelling experiences, expanding the boundaries of human creativity.

Challenges and Opportunities in AI Research

Despite the significant progress made in AI, several challenges remain. One of the main challenges is developing AI systems that can exhibit general intelligence (i.e., the ability to learn and reason across a wide range of tasks and domains). Current AI systems are often specialized for specific tasks, and transfer learning techniques are still in their infancy. Moreover, the development of AI systems that can explain their reasoning and decisions, a crucial requirement for many applications, remains an open problem.

Ethical Deployment of AI Systems

Another challenge is ensuring the ethical and safe deployment of AI systems. Issues such as data privacy, algorithmic bias, and the impact of AI on employment have raised concerns among researchers, policymakers, and the public. These concerns highlight the importance of incorporating ethical and safety considerations in AI research and development.

AI-Powered Threats to Cloud Security

AI introduces several challenges to cloud security, with some of the most pressing issues arising from adversarial attacks, data privacy concerns, model complexity, AI-based cyberthreats, and resource consumption attacks.

Adversarial Attacks

AI systems, particularly deep learning models, are vulnerable to adversarial examples, which are inputs crafted to deceive the model into producing incorrect outputs. In a cloud environment, attackers can exploit these vulnerabilities to compromise AI services, leading to incorrect predictions, unauthorized access, or data manipulation.

Data Privacy and Confidentiality

Data privacy and confidentiality pose another challenge, as AI models often require massive amounts of data for training, which may include sensitive user information. Storing and processing this data in the cloud raises privacy concerns, as unauthorized access or data breaches can result in the exposure of sensitive information. Additionally, AI models can inadvertently leak confidential data through model inversion or membership inference attacks.

Model Complexity and Interpretability

The complexity of AI models, particularly deep learning and ensemble methods, challenges cloud security, as their lack of interpretability makes it difficult to assess security properties and identify vulnerabilities. This, in turn, hinders the detection and mitigation of potential attacks on AI services.

AI-Based Cyberthreats

Attackers can leverage AI techniques to develop more sophisticated cyberthreats, such as intelligent malware and automated vulnerability exploitation. These AI-enhanced attacks can be harder to detect and defend against in a cloud environment, posing significant challenges to traditional security measures.

Resource Consumption Attacks

AI models, particularly deep learning, require substantial computational resources for training and inference. Attackers can exploit this by launching resource consumption attacks, such as denial of service (DoS) or distributed denial of service (DDoS), targeting AI services in the cloud and causing performance degradation or service disruption.

To address these challenges, cloud security strategies must adopt a holistic approach that encompasses robust AI model design, secure data management practices, and advanced threat detection and mitigation techniques. This includes the development of secure AI frameworks, privacy-preserving data processing methods, and the continuous monitoring and assessment of AI services in the cloud.

Using AI to Defend the Cloud

AI can greatly enhance cloud security by strengthening threat detection, incident response, and data protection, helping maintain the confidentiality, integrity, and availability of cloud services while addressing the evolving challenges of the cloud security landscape.

By utilizing machine learning algorithms to analyze data generated in the cloud, AI can improve threat detection and identify patterns and anomalies that may indicate security threats. AI-driven security tools are capable of detecting unusual user behavior, network traffic, or system events and flagging them for further investigation. Real-time identification of threats, such as malware, data breaches, or unauthorized access, can substantially reduce the potential damage caused by these attacks.
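
As a rough sketch of this idea (not a production design), the example below trains scikit-learn's IsolationForest on invented per-account telemetry and flags points that deviate from the learned baseline; the feature names and values are assumptions:

```python
# Unsupervised anomaly detection over toy cloud telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [requests_per_minute, failed_logins] for one account (invented).
rng = np.random.default_rng(0)
normal = rng.normal(loc=[50, 1], scale=[5, 1], size=(200, 2))
suspicious = np.array([[400, 30], [5, 25]])  # burst traffic; credential stuffing

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(detector.predict(suspicious))  # -1 marks points flagged for investigation
```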

In addition to threat detection, AI can streamline and automate incident response, minimizing the need for human intervention. Cloud security systems that leverage AI algorithms can automatically take corrective actions, such as isolating affected systems, blocking malicious IP addresses, or revoking compromised credentials. Automating incident response not only reduces response time but also mitigates the risk of human error, enhancing cloud security posture.

AI can also reinforce data privacy and confidentiality by employing privacy-preserving data processing techniques, such as differential privacy, homomorphic encryption, and secure multiparty computation. These methods allow AI models to learn from encrypted or anonymized data, ensuring sensitive information remains protected while still benefiting from AI-driven insights.

AI contributes to system resilience by continuously monitoring and adapting to the evolving threat landscape. AI-driven security solutions can learn from past incidents and adjust their behavior, updating detection models as needed. This adaptability enables cloud security systems to proactively defend against emerging threats and adjust to the changing tactics of malicious actors.

Artificial Intelligence Security Posture Management (AiSPM)

The growing complexity of threats, advances in AI technology, and changes in the IT landscape have given rise to AiSPM. As AI continues to evolve and mature, its role in managing and improving security posture is likely to become even more significant.

AiSPM — or artificial intelligence security posture management — refers to the application of artificial intelligence techniques to manage and improve the security posture of an organization's IT infrastructure. The approach involves using AI algorithms to analyze, monitor, and respond to potential security threats, vulnerabilities, and risks in real time.

Key Components of AiSPM

  • Anomaly detection: AI algorithms can analyze large quantities of data, such as logs or network traffic, to detect unusual patterns and behaviors that may indicate security threats.
  • Vulnerability management: AI can help organizations identify and prioritize vulnerabilities in their IT infrastructure, enabling them to take proactive measures to remediate risks.
  • Incident response automation: AI can streamline the incident response process, automatically taking corrective actions when a security threat is detected, reducing response time, and mitigating the risk of human error.
  • Risk assessment: AI can help organizations assess and quantify their cybersecurity risks, enabling them to make data-driven decisions about their security strategy and resource allocation.
  • Continuous monitoring and adaptation: AI-driven security solutions can learn from incidents and adapt their behavior to defend against emerging threats and changing tactics of malicious actors.

The Future of AI

As AI continues to advance, we can expect to see more sophisticated applications and systems that leverage the full potential of machine learning, deep learning, natural language processing, computer vision, and robotics. Researchers are working toward developing AI systems that can learn and reason like humans, leading to more general and adaptable intelligence. The integration of AI techniques and the development of systems that can address ethical and safety concerns will play a critical role in ensuring the responsible and beneficial deployment of AI across various domains.

Machine Learning FAQs

AI is a broader field encompassing the development of computer systems capable of performing tasks that typically require human intelligence, such as understanding natural language, recognizing patterns, and making decisions. Machine learning is a subfield of AI that specifically focuses on creating algorithms that learn from data, enabling computers to improve their performance on tasks without explicit programming.

Hallucination in AI refers to the generation of outputs by a machine learning model that aren’t grounded in the input data or factual information. This phenomenon often occurs when a model, such as an LLM, creates plausible-sounding but incorrect or nonsensical responses.

Confabulation in the context of AI and LLMs refers to the generation of incorrect or nonsensical outputs by a machine learning model. This usually occurs when the model is presented with input data that it hasn’t encountered during training, leading to the creation of inaccurate or implausible responses. It’s particularly important to address confabulation in AI systems to ensure their reliability and trustworthiness, as well as to maintain the security of the model and the data it processes.

Inference is the process by which a trained machine learning model, such as an LLM, generates predictions or outputs based on input data. During inference, the model applies the learned patterns and representations to make decisions or produce responses for the given input.

NLP is an area of AI that focuses on the interaction between humans and computers using natural language. NLP techniques include speech recognition, speech synthesis, machine translation, information extraction, information retrieval, and question answering.

Computer vision is a subfield of AI that deals with the automatic extraction of information from images and videos. Techniques in computer vision include image classification, facial recognition, object recognition, and robotic perception.

Robotics is a branch of AI that focuses on the development of intelligent agents capable of perceiving and interacting with the physical world. Robotic perception is a subfield of robotics that deals with the acquisition and processing of sensory data from the environment.

Expert systems are AI programs designed to mimic the decision-making abilities of a human expert in a specific domain. These systems use knowledge representation and reasoning techniques to provide advice, make decisions, or solve complex problems.

AI ethics and safety are concerned with the ethical and societal implications of AI systems, including issues related to data privacy, algorithmic bias, and the impact of AI on employment.

Supervised learning is a machine learning approach where models are trained using labeled data, with input-output pairs provided as examples. The model learns to map inputs to the correct outputs by minimizing the difference between its predictions and the actual labels. In the context of AI and LLMs, supervised learning is often used for tasks such as classification, regression, and sequence prediction.

Examples of supervised learning algorithms used in data mining include decision trees, support vector machines, and neural networks, which can be applied to a broad range of applications, such as customer churn prediction or credit risk assessment.

Ensuring the quality and integrity of the training data and managing access to sensitive information are crucial to maintain the security and trustworthiness of supervised learning models.

Unsupervised learning is a machine learning approach where models learn from data without explicit labels, discovering patterns and structures within the data itself. Common unsupervised learning techniques include clustering, where data points are grouped based on similarity, and dimensionality reduction, where high-dimensional data is transformed into lower-dimensional representations.

In the context of AI and LLMs, unsupervised learning can be used to uncover hidden patterns or relationships in data, providing valuable insights and improving model performance.

Unsupervised learning techniques, such as clustering and association rule mining, play a vital role in exploratory data analysis and the identification of meaningful groupings or relationships in data. Examples include the k-means algorithm for clustering and the Apriori algorithm for association rule mining, which allow for the discovery of previously unknown patterns or associations within datasets.

Semi-supervised learning is a machine learning paradigm that combines the use of labeled and unlabeled data during the training process. While supervised learning relies solely on labeled data and unsupervised learning employs only unlabeled data, semi-supervised learning leverages the strengths of both approaches to improve model performance.

The primary motivation behind semi-supervised learning is that labeled data is often scarce and expensive to obtain, while large quantities of unlabeled data are more readily available. By incorporating the unlabeled data, semi-supervised learning algorithms can extract additional insights and patterns, refining the model's decision boundaries and leading to better generalization on unseen data.

Common techniques used in semi-supervised learning include self-training, co-training, and graph-based methods, which enable the model to iteratively learn from both labeled and unlabeled data.
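
A minimal self-training sketch using scikit-learn's SelfTrainingClassifier; per the library's convention, unlabeled points carry the label -1, and the one-dimensional data here is invented:

```python
# Self-training: a base classifier pseudo-labels its most confident predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X = np.array([[0.0], [0.2], [0.9], [1.1], [0.1], [1.0], [0.15], [0.95]])
y = np.array([0, 0, 1, 1, -1, -1, -1, -1])  # -1 = unlabeled

model = SelfTrainingClassifier(LogisticRegression()).fit(X, y)
print(model.predict([[0.05], [1.05]]))  # -> [0 1]
```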

Reinforcement learning is a machine learning paradigm in which an agent learns to make decisions by interacting with an environment, receiving feedback in the form of rewards or penalties. The agent's objective is to maximize cumulative rewards over time by exploring different actions, building a policy that dictates the best action to take in each situation.

Reinforcement learning can be applied to natural language processing tasks where an agent must learn to generate optimal responses or make choices based on user input.
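
The toy tabular Q-learning loop below shows that reward-driven dynamic in miniature; the one-dimensional "corridor" environment and the hyperparameters are invented for illustration:

```python
# Q-learning on a 5-state corridor: the agent learns to walk right to the goal.
import random

n_states, goal = 5, 4
alpha, gamma, eps = 0.5, 0.9, 0.2          # learning rate, discount, exploration
Q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0 = left, 1 = right

random.seed(0)
for _ in range(500):                       # episodes
    s = 0
    while s != goal:
        # epsilon-greedy: usually exploit the best-known action, sometimes explore
        a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda i: Q[s][i])
        s2 = max(0, s - 1) if a == 0 else min(goal, s + 1)
        r = 1.0 if s2 == goal else 0.0     # reward only at the goal
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])  # Q-learning update
        s = s2

print([max((0, 1), key=lambda i: Q[s][i]) for s in range(goal)])  # -> [1, 1, 1, 1]
```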

Deep learning is a subfield of machine learning that focuses on artificial neural networks with multiple layers, allowing for the automatic extraction of complex patterns and features from large amounts of data. These networks, often referred to as deep neural networks, can learn hierarchical representations, enabling them to tackle a wide range of tasks, such as image recognition, natural language processing, and speech recognition.

In the realm of AI and LLMs, deep learning helps to create more accurate and efficient models by leveraging data and computational resources available in the cloud.

General intelligence, often referred to as artificial general intelligence (AGI), describes the capability of an AI system to understand, learn, and perform tasks across a wide range of domains with human-like proficiency. Unlike narrow AI systems, which excel in specific tasks, AGI can adapt to new situations, reason abstractly, and transfer knowledge from one domain to another. Achieving AGI remains a long-term goal in AI research, as current systems primarily focus on specialized tasks and lack the broad cognitive abilities demonstrated by humans.

Swarm intelligence is a subfield of AI that studies the collective behavior of decentralized, self-organized systems, such as particle swarm optimization and ant colony optimization. These techniques are used for search and optimization tasks in AI.

Formal logic is a mathematical framework used for knowledge representation and reasoning in AI. It includes propositional logic, predicate logic, logical inference, fuzzy logic, and nonmonotonic logics.

Probabilistic methods are techniques that deal with uncertainty in AI. Bayesian networks, conditional probability tables, expectation-maximization algorithms, and decision networks are examples of probabilistic methods used in AI.

Decision theory is a mathematical framework for modeling and analyzing decision-making processes. Mechanism design is a subfield of game theory that focuses on designing systems to achieve desired outcomes in strategic interactions among agents.

Bayesian networks, also known as belief networks or Bayes nets, are probabilistic graphical models representing a set of variables and their conditional dependencies using directed acyclic graphs (DAGs). Each node in the graph corresponds to a random variable, while the edges represent the probabilistic dependencies between them.

By encoding the joint probability distribution, Bayesian networks facilitate efficient reasoning and inference under uncertainty. They are widely used in various domains, including artificial intelligence, machine learning, medical diagnosis, risk analysis, and natural language processing. The networks support tasks such as anomaly detection, classification, and decision-making by updating probabilities based on observed evidence, following Bayes' theorem.
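
As a worked example of the Bayes'-theorem update such networks perform at each node, consider a diagnostic test; the sensitivity, false-positive rate, and prevalence figures below are invented:

```python
# Posterior probability of disease given a positive test, via Bayes' theorem.
p_d = 0.01      # prior: P(disease)
p_pos_d = 0.99  # P(positive | disease)
p_pos_nd = 0.05 # P(positive | no disease)

p_pos = p_pos_d * p_d + p_pos_nd * (1 - p_d)     # law of total probability
p_d_pos = p_pos_d * p_d / p_pos                  # Bayes' theorem
print(f"P(disease | positive) = {p_d_pos:.3f}")  # ~0.167, despite a 99%-sensitive test
```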

Transformer architecture is an advanced deep learning model designed for NLP tasks, such as translation and text summarization. It uses self-attention mechanisms to process input sequences in parallel, rather than sequentially, as in traditional recurrent neural networks (RNNs) or long short-term memory (LSTM) networks. The architecture comprises an encoder and a decoder, each consisting of multiple identical layers with multihead attention and feed-forward sublayers.

Transformers have achieved state-of-the-art performance in various NLP benchmarks, serving as the foundation for models like BERT, GPT, and T5.
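
The numpy sketch below implements the scaled dot-product attention at the heart of the architecture; shapes are toy, and multihead projections and masking are deliberately omitted:

```python
# Scaled dot-product self-attention: each token mixes in all other tokens,
# weighted by the similarity between its query and their keys.
import numpy as np

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])       # pairwise similarities
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)            # softmax over keys
    return w @ V                                  # weighted mix of values

x = np.random.default_rng(0).normal(size=(4, 8))  # 4 tokens, 8-dim embeddings
print(attention(x, x, x).shape)                   # -> (4, 8)
```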

Recurrent neural networks (RNNs) are a class of neural networks designed to process sequential data, such as time series or natural language. Unlike feedforward networks, RNNs incorporate feedback connections, allowing them to maintain an internal state or memory of previous inputs. This structure enables RNNs to capture temporal dependencies and learn patterns within sequences.

RNNs, however, can struggle with long-term dependencies because of issues like vanishing or exploding gradients. To address this, variants such as long short-term memory (LSTM) and gated recurrent units (GRUs) have been developed, offering improved performance in tasks like language modeling, speech recognition, and machine translation.
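
A minimal numpy sketch of the recurrence that gives RNNs their memory; the weights here are random rather than trained:

```python
# The RNN recurrence: the hidden state h carries information forward in time.
import numpy as np

rng = np.random.default_rng(0)
W_x = rng.normal(size=(3, 4)) * 0.1   # input-to-hidden weights
W_h = rng.normal(size=(4, 4)) * 0.1   # hidden-to-hidden (feedback) weights
h = np.zeros(4)                       # internal state

for x_t in rng.normal(size=(5, 3)):   # a sequence of 5 three-dim inputs
    h = np.tanh(x_t @ W_x + h @ W_h)  # new state depends on input AND old state

print(h)                              # final state summarizes the whole sequence
```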

Convolutional neural networks (CNNs) are a type of deep learning architecture specifically designed for processing grid-like data, such as images. CNNs consist of layers that perform convolution operations, pooling, and activation functions, enabling the network to learn hierarchical representations of the input data. By using local connectivity and shared weights, CNNs can efficiently detect patterns and features within images, making them highly effective in tasks like image classification, object detection, and semantic segmentation.
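
To show the convolution operation itself, here is a hand-rolled 2-D convolution in numpy; the fixed edge-detection kernel stands in for the filters a CNN would learn from data:

```python
# Slide a 3x3 kernel over an image, summing elementwise products at each spot.
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

image = np.zeros((6, 6)); image[:, 3:] = 1.0             # dark left, bright right
kernel = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]])  # vertical-edge detector
print(conv2d(image, kernel))  # strong responses along the vertical edge
```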

Generative adversarial networks (GANs) are a type of deep learning model that consists of two neural networks, a generator and a discriminator, trained simultaneously in a competitive setting. The generator creates synthetic data samples, while the discriminator evaluates the authenticity of both real and generated samples. The generator aims to produce realistic samples that can deceive the discriminator, while the discriminator strives to accurately distinguish between real and fake data.

Through this adversarial process, GANs can generate high-quality, realistic data, making them valuable in applications such as image synthesis, data augmentation, and style transfer.

Knowledge representation is a fundamental aspect of artificial intelligence that deals with encoding, organizing, and manipulating knowledge in a manner that enables reasoning and decision-making. Knowledge representation techniques include formal logic, semantic networks, frames, ontologies, and probabilistic graphical models. These methods allow AI systems to represent, store, and retrieve information, as well as infer new knowledge based on existing data. Knowledge representation is crucial for expert systems, natural language understanding, and planning applications.

Reasoning, in the context of AI, refers to the ability of a system to infer new knowledge or derive logical consequences from existing knowledge. AI reasoning techniques include deductive reasoning, inductive reasoning, and abductive reasoning. These approaches are used in various AI applications such as knowledge-based systems, natural language understanding, and automated theorem proving.

In artificial intelligence, planning involves developing algorithms and techniques that enable AI agents to create and execute plans in complex, dynamic, and uncertain environments. Classical planning, hierarchical task network (HTN) planning, and probabilistic planning are some common approaches used in AI planning systems. Applications of AI planning include robotics, logistics, and autonomous vehicles.

In artificial intelligence, perception refers to the development of algorithms and systems that enable machines to process and interpret sensory information, such as images, audio, and tactile data. Computer vision, speech recognition, and sensor fusion are examples of AI perception techniques used to analyze and make sense of the data. AI perception is critical for applications like robotics, autonomous vehicles, and human-computer interaction.

Markov decision processes (MDPs) are mathematical models used for decision-making in situations where an agent interacts with an environment and makes decisions based on probabilities. MDPs are characterized by a set of states, actions, transition probabilities, and rewards. The agent chooses actions to maximize the cumulative reward over time while considering the probabilistic nature of state transitions. MDPs are widely used in reinforcement learning, robotics, and control systems to model decision-making problems under uncertainty.

Markov decision processes (MDPs) and hidden Markov models (HMMs) are both mathematical frameworks that model systems with probabilistic state transitions. While MDPs focus on decision-making and incorporate rewards, HMMs primarily deal with the analysis of sequences generated by systems with hidden states. In an HMM, the observed data is a probabilistic function of the hidden states, and the goal is to infer the most likely sequence of hidden states given the observed data. HMMs are widely used in speech recognition, bioinformatics, and finance for tasks such as pattern recognition and sequence prediction.

The Turing Test, proposed by Alan Turing in 1950, is an evaluation method for determining whether a machine exhibits human-like intelligence. In this test, a human judge engages in a natural language conversation with a machine and another human, without knowing which is which. If the judge can’t reliably distinguish between the machine and human responses, the machine is considered to have passed the test, demonstrating a level of intelligence comparable to human cognition. In the context of AI and LLMs, the Turing Test serves as a benchmark for measuring the ability of an AI system to understand and generate human-like language.

The k-means algorithm is an unsupervised machine learning technique used for clustering data points based on their similarity. Given a set of data points and a predefined number of clusters (k), the algorithm aims to partition the data into k distinct groups, minimizing the within-cluster variance. The process begins by randomly selecting k initial centroids, followed by iteratively assigning data points to the nearest centroid and recalculating the centroids based on the mean of the assigned points. The algorithm converges when the centroids' positions stabilize or a predefined stopping criterion is met.

K-means is widely used for exploratory data analysis, anomaly detection, and image segmentation due to its simplicity, efficiency, and ease of implementation.
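
A short k-means run on invented two-dimensional points, with scikit-learn handling the assign-then-recompute loop described above:

```python
# Cluster two synthetic blobs and recover their centers.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (50, 2)),    # blob around (0, 0)
               rng.normal(5, 0.5, (50, 2))])   # blob around (5, 5)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.cluster_centers_)                     # close to (0, 0) and (5, 5)
print(km.labels_[:3], km.labels_[-3:])         # cluster assignment per point
```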

The Apriori algorithm is an unsupervised machine learning method used for association rule mining, primarily in the context of market basket analysis. The goal of the algorithm is to identify frequent itemsets and derive association rules that indicate relationships between items in large transactional databases.

Apriori operates on the principle of downward closure, which states that if an itemset is frequent, all its subsets must also be frequent. The algorithm proceeds in a breadth-first manner, iteratively generating candidate itemsets and pruning infrequent ones based on a minimum support threshold. Once frequent itemsets are identified, association rules are derived using a minimum confidence constraint.

The Apriori algorithm has widespread applications in retail, marketing, and recommendation systems, helping businesses uncover valuable insights and devise effective strategies.
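
The sketch below hand-rolls the first passes of this idea on invented transactions, counting itemset supports and pruning infrequent candidates; a full Apriori implementation would iterate to ever-larger itemsets:

```python
# Find frequent 1- and 2-itemsets using a minimum support threshold.
from collections import Counter
from itertools import combinations

transactions = [{"bread", "milk"}, {"bread", "butter"},
                {"milk", "butter", "bread"}, {"milk", "eggs"}]
min_support = 0.5  # itemset must appear in at least half of the transactions

def frequent(candidates):
    counts = Counter(c for c in candidates for t in transactions if c <= t)
    return {c for c, n in counts.items() if n / len(transactions) >= min_support}

items = {frozenset([i]) for t in transactions for i in t}
f1 = frequent(items)                                    # frequent single items
f2 = frequent({a | b for a, b in combinations(f1, 2)})  # candidate pairs from f1
print(f1, f2, sep="\n")
```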

Five popular machine learning algorithms include:

  • Linear Regression: A simple algorithm for predicting continuous numerical values based on the relationship between input features and output values (a minimal sketch follows this list).
  • Logistic Regression: A classification algorithm used to predict binary outcomes, such as whether a customer will make a purchase or not.
  • Decision Trees: A graphical model that recursively splits data into subsets based on feature values, enabling classification or regression tasks.
  • Support Vector Machines (SVM): A classification algorithm that finds the optimal boundary (or hyperplane) separating data points of different classes, maximizing the margin between them.
  • Neural Networks: A versatile algorithm inspired by the human brain, capable of learning complex patterns and representations, applicable to a wide range of tasks.
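
To make the first of these concrete, here is a closed-form ordinary-least-squares fit in numpy; the noisy line y ≈ 2x + 1 is synthetic data invented for the example:

```python
# Fit slope and intercept by solving the least-squares problem directly.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = 2 * x + 1 + rng.normal(0, 0.5, 50)        # ground truth: slope 2, intercept 1

X = np.column_stack([x, np.ones_like(x)])     # add a bias column
coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # minimize ||Xw - y||^2
print(f"slope ~ {coef[0]:.2f}, intercept ~ {coef[1]:.2f}")
```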