{% extends "layout.html" %}
{% block content %}
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Study Guide: Apriori Algorithm</title>
<!-- MathJax for rendering mathematical formulas -->
<script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
<style>
/* General Body Styles */
body {
background-color: #ffffff; /* White background */
color: #000000; /* Black text */
font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Helvetica, Arial, sans-serif;
font-weight: normal;
line-height: 1.8;
margin: 0;
padding: 20px;
}
/* Container for centering content */
.container {
max-width: 800px;
margin: 0 auto;
padding: 20px;
}
/* Headings */
h1, h2, h3 {
color: #000000;
border: none;
font-weight: bold;
}
h1 {
text-align: center;
border-bottom: 3px solid #000;
padding-bottom: 10px;
margin-bottom: 30px;
font-size: 2.5em;
}
h2 {
font-size: 1.8em;
margin-top: 40px;
border-bottom: 1px solid #ddd;
padding-bottom: 8px;
}
h3 {
font-size: 1.3em;
margin-top: 25px;
}
/* Key terms wrapped in <strong> are extra bold */
strong {
font-weight: 900;
}
/* Paragraphs and List Items with a line below */
p, li {
font-size: 1.1em;
border-bottom: 1px solid #e0e0e0; /* Light gray line below each item */
padding-bottom: 10px; /* Space between text and the line */
margin-bottom: 10px; /* Space below the line */
}
/* Remove bottom border from the last item in a list for cleaner look */
li:last-child {
border-bottom: none;
}
/* Ordered lists */
ol {
list-style-type: decimal;
padding-left: 20px;
}
ol li {
padding-left: 10px;
}
/* Unordered Lists */
ul {
list-style-type: none;
padding-left: 0;
}
ul li::before {
content: "•";
color: #000;
font-weight: bold;
display: inline-block;
width: 1em;
margin-left: 0;
}
/* Code block styling */
pre {
background-color: #f4f4f4;
border: 1px solid #ddd;
border-radius: 5px;
padding: 15px;
white-space: pre-wrap;
word-wrap: break-word;
font-family: "Courier New", Courier, monospace;
font-size: 0.95em;
font-weight: normal;
color: #333;
border-bottom: none;
}
/* Apriori Specific Styling */
.story-apriori {
background-color: #f0faf5;
border-left: 4px solid #198754; /* Green accent for Apriori */
margin: 15px 0;
padding: 10px 15px;
font-style: italic;
color: #555;
font-weight: normal;
border-bottom: none;
}
.story-apriori p, .story-apriori li {
border-bottom: none;
}
.example-apriori {
background-color: #e9f7f1;
padding: 15px;
margin: 15px 0;
border-radius: 5px;
border-left: 4px solid #20c997; /* Lighter Green accent for Apriori */
}
.example-apriori p, .example-apriori li {
border-bottom: none !important;
}
/* Quiz Styling */
.quiz-section {
background-color: #fafafa;
border: 1px solid #ddd;
border-radius: 5px;
padding: 20px;
margin-top: 30px;
}
.quiz-answers {
background-color: #e9f7f1;
padding: 15px;
margin-top: 15px;
border-radius: 5px;
}
/* Table Styling */
table {
width: 100%;
border-collapse: collapse;
margin: 25px 0;
}
th, td {
border: 1px solid #ddd;
padding: 12px;
text-align: left;
}
th {
background-color: #f2f2f2;
font-weight: bold;
}
/* --- Mobile Responsive Styles --- */
@media (max-width: 768px) {
body, .container {
padding: 10px;
}
h1 { font-size: 2em; }
h2 { font-size: 1.5em; }
h3 { font-size: 1.2em; }
p, li { font-size: 1em; }
pre { font-size: 0.85em; }
table, th, td { font-size: 0.9em; }
}
</style>
</head>
<body>
<div class="container">
<h1>🛒 Study Guide: The Apriori Algorithm</h1>
<!-- button -->
<div>
<!-- Practice link styled as a 3D "Tap Me!" button. The Tailwind utility
classes below are assumed to be loaded by layout.html. The hard shadow
creates the 3D effect; the active state removes it and shifts the button
down by the same 8px so it looks pressed. -->
<a
href="/apriori-three"
target="_blank"
rel="noopener"
onclick="playSound()"
class="cursor-pointer inline-block relative bg-blue-500 text-white font-bold py-4 px-8 rounded-xl text-2xl transition-all duration-150 shadow-[0_8px_0_rgb(29,78,216)] active:shadow-none active:translate-y-[8px]">
Tap Me!
</a>
</div>
<script>
// Plays the click sound when the button is tapped. This relies on an
// <audio id="clickSound"> element being provided elsewhere (e.g., by
// layout.html); if it is missing, the guard below fails silently.
function playSound() {
  const audio = document.getElementById("clickSound");
  if (audio) {
    audio.currentTime = 0; // restart the sound on rapid repeat clicks
    audio.play().catch(e => console.log("Audio play failed:", e));
  }
}
</script>
<!-- button -->
<h2>🔹 Core Concepts</h2>
<div class="story-apriori">
<p><strong>Story-style intuition: The Supermarket Detective</strong></p>
<p>Imagine you're a detective hired by a supermarket. Your mission is to analyze thousands of shopping receipts (transactions) to find hidden patterns. You soon notice a classic pattern: "Customers who buy bread also tend to buy butter." This is a valuable clue! The store can place bread and butter closer together to increase sales. The <strong>Apriori Algorithm</strong> is the systematic method this detective uses to sift through all the receipts and find these "frequently bought together" item combinations and turn them into powerful rules. This whole process is called <strong>Market Basket Analysis</strong>.</p>
</div>
<p>The <strong>Apriori Algorithm</strong> is a classic algorithm used for <strong>association rule mining</strong>. Its main goal is to find relationships and patterns between items in large transactional datasets. It generates rules in the format "If A, then B," helping businesses understand customer behavior and make smarter decisions.</p>
<h2>🔹 Key Definitions</h2>
<p>To be a good supermarket detective, you need to know the lingo. The three most important metrics are Support, Confidence, and Lift; a short code sketch after the definitions below recomputes each one.</p>
<div class="example-apriori">
<p><strong>Example Scenario:</strong> Let's say we have 100 shopping receipts.</p>
<ul>
<li>80 receipts contain {Bread}.</li>
<li>70 receipts contain {Butter}.</li>
<li>60 receipts contain both {Bread, Butter}.</li>
</ul>
</div>
<ul>
<li>
<strong>Support:</strong> The popularity of an itemset. It's the fraction of total transactions that contain that itemset.
<p><strong>Example:</strong> The support for {Bread, Butter} is 60/100 = 0.6 or 60%. This tells us that 60% of all shoppers bought bread and butter together. High support means the itemset is frequent.</p>
</li>
<li>
<strong>Confidence:</strong> The reliability of a rule. For a rule {Bread} => {Butter}, it's the probability of finding butter in a basket that already has bread.
<p>$$ \text{Confidence}(X \Rightarrow Y) = \frac{\text{Support}(X \cup Y)}{\text{Support}(X)} $$</p>
<p><strong>Example:</strong> Confidence({Bread} => {Butter}) = Support({Bread, Butter}) / Support({Bread}) = 60 / 80 = 0.75 or 75%. This means that 75% of customers who bought bread also bought butter. High confidence makes the rule strong.</p>
</li>
<li>
<strong>Lift:</strong> The strength of a rule compared to random chance. It tells you how much more likely customers are to buy Y when they buy X.
<p>$$ \text{Lift}(X \Rightarrow Y) = \frac{\text{Confidence}(X \Rightarrow Y)}{\text{Support}(Y)} $$</p>
<p><strong>Example:</strong> Lift({Bread} => {Butter}) = Confidence({Bread} => {Butter}) / Support({Butter}) = 0.75 / 0.7 ≈ 1.07.
<br>• Lift &gt; 1: positive correlation (buying bread increases the likelihood of buying butter).
<br>• Lift = 1: no correlation (the items are bought independently).
<br>• Lift &lt; 1: negative correlation (buying one makes the other less likely).</p>
</li>
</ul>
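<p>These formulas are easy to verify in code. Below is a minimal sketch (plain Python, no libraries; the counts are the ones assumed in the example box above) that recomputes all three metrics:</p>
<pre><code>
# Receipt counts taken from the example scenario above.
n_transactions = 100
n_bread = 80    # receipts containing {Bread}
n_butter = 70   # receipts containing {Butter}
n_both = 60     # receipts containing both {Bread, Butter}

# Support: fraction of all transactions that contain the itemset.
support_bread = n_bread / n_transactions    # 0.80
support_butter = n_butter / n_transactions  # 0.70
support_both = n_both / n_transactions      # 0.60

# Confidence({Bread} => {Butter}) = Support({Bread, Butter}) / Support({Bread})
confidence = support_both / support_bread   # 0.75

# Lift = Confidence / Support({Butter}); above 1 means buying bread
# makes butter more likely than chance alone.
lift = confidence / support_butter          # ~1.07

print(f"Support    = {support_both:.2f}")
print(f"Confidence = {confidence:.2f}")
print(f"Lift       = {lift:.2f}")
</code></pre>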
<h2>🔹 The Apriori Principle</h2>
<div class="story-apriori">
<p><strong>The Detective's Golden Rule:</strong> Our detective quickly realizes a simple but powerful truth: if customers rarely buy {Milk}, then they will <em>definitely</em> rarely buy the combination {Milk, Bread, Eggs}. Why waste time checking the records for a combination containing an already unpopular item? This is the Apriori Principle.</p>
</div>
<p>The principle states: <strong>"All non-empty subsets of a frequent itemset must also be frequent."</strong> Read the other way around, any itemset that contains an infrequent subset must itself be infrequent. This is the core idea that makes the Apriori algorithm efficient: it lets the algorithm "prune" the search space, eliminating a huge number of candidate itemsets without ever counting them. If {Milk} is infrequent, any larger itemset containing {Milk} is guaranteed to be infrequent and can be ignored.</p>
<h2>🔹 Algorithm Steps</h2>
<p>The algorithm works iteratively, building up larger and larger frequent itemsets level by level; a minimal code sketch of this loop follows the list.</p>
<ol>
<li><strong>Set a Minimum Support Threshold:</strong> The detective decides they only care about itemsets that appear in at least, say, 50% of receipts.</li>
<li><strong>Find Frequent 1-Itemsets (L1):</strong> Scan all receipts and find every individual item that meets the minimum support. These are your "frequent items."</li>
<li><strong>Generate and Prune (Iterate):</strong>
<ul>
<li><strong>Join:</strong> Take the frequent itemsets from the previous step (\(L_{k-1}\)) and combine them to create candidate k-itemsets (\(C_k\)). E.g., combine {Bread} and {Butter} to make {Bread, Butter}.</li>
<li><strong>Prune:</strong> This is where the Apriori Principle comes in. Check every candidate: if any of its subsets is not in the frequent list (\(L_{k-1}\)), discard it immediately.</li>
<li><strong>Scan:</strong> For the remaining candidates, scan the database to count their support. Keep only those that meet the minimum support threshold. This new list is \(L_k\).</li>
</ul>
</li>
<li><strong>Repeat Step 3</strong> until no new frequent itemsets can be found.</li>
<li><strong>Generate Rules:</strong> Once you have all frequent itemsets, generate association rules (like {Bread} => {Butter}) from them that meet a minimum confidence threshold.</li>
</ol>
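<p>To make the join/prune/scan loop concrete, here is a minimal pure-Python sketch of the whole iteration on a tiny hand-made set of receipts. It is only an illustration of the principle, not a production implementation (a library version using <code>mlxtend</code> appears later in this guide):</p>
<pre><code>
from itertools import combinations

# Five toy receipts; each transaction is a set of items.
transactions = [{'Bread', 'Butter', 'Milk'},
                {'Bread', 'Butter'},
                {'Bread', 'Milk'},
                {'Butter', 'Milk'},
                {'Bread', 'Butter', 'Eggs'}]
min_support = 0.6  # keep itemsets appearing in at least 60% of receipts

def support(itemset):
    """Fraction of transactions that contain every item in the itemset."""
    return sum(t >= itemset for t in transactions) / len(transactions)

# Step 2: frequent 1-itemsets (L1).
items = {item for t in transactions for item in t}
L = [frozenset([i]) for i in items if support(frozenset([i])) >= min_support]

# Step 3: join, prune, scan; repeat until nothing new is frequent.
k = 2
while L:
    print(f"Frequent {k - 1}-itemsets:", [set(s) for s in L])
    frequent_prev = set(L)
    # Join: union pairs of frequent (k-1)-itemsets into k-item candidates.
    candidates = {a | b for a in L for b in L if len(a | b) == k}
    # Prune (Apriori Principle): a candidate survives only if every
    # (k-1)-subset of it is already known to be frequent.
    candidates = {c for c in candidates
                  if all(frozenset(sub) in frequent_prev
                         for sub in combinations(c, k - 1))}
    # Scan: count support for the survivors and keep the frequent ones.
    L = [c for c in candidates if support(c) >= min_support]
    k += 1
</code></pre>
<p>On these five receipts, the sketch finds {Bread}, {Butter}, and {Milk} as frequent 1-itemsets, and only {Bread, Butter} survives to level 2.</p>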
<h2>🔹 Strengths &amp; Weaknesses</h2>
<h3>Advantages:</h3>
<ul>
<li>✅ <strong>Simple and Intuitive:</strong> The logic is easy to understand and explain.</li>
<li>✅ <strong>Guaranteed to Find All Rules:</strong> It is a complete algorithm that will find all frequent itemsets and rules if they exist.</li>
</ul>
<h3>Disadvantages:</h3>
<ul>
<li>❌ <strong>Computationally Expensive:</strong> It requires multiple scans of the entire database, which can be very slow for large datasets.</li>
<li>❌ <strong>Many Candidate Itemsets:</strong> It can generate a huge number of candidate itemsets, especially in early passes, which consumes a lot of memory.</li>
<li>❌ <strong>Requires Tuning:</strong> Finding the right <code>min_support</code> and <code>min_confidence</code> can be tricky and requires trial and error.</li>
</ul>
<h2>🔹 Python Implementation (Beginner Example with <code>mlxtend</code>)</h2>
<div class="story-apriori">
<p>Here, we'll be a supermarket detective with a small set of receipts. We need to prepare our data in a one-hot encoded format, where each row is a transaction and each column is an item. Then we'll use the <code>apriori</code> function to find frequent itemsets and <code>association_rules</code> to find the strong relationships.</p>
</div>
<pre><code>
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules
from mlxtend.preprocessing import TransactionEncoder

# --- 1. Create a Sample Dataset ---
# This represents 5 shopping receipts.
dataset = [['Milk', 'Onion', 'Nutmeg', 'Kidney Beans', 'Eggs', 'Yogurt'],
           ['Dill', 'Onion', 'Nutmeg', 'Kidney Beans', 'Eggs', 'Yogurt'],
           ['Milk', 'Apple', 'Kidney Beans', 'Eggs'],
           ['Milk', 'Unicorn', 'Corn', 'Kidney Beans', 'Yogurt'],
           ['Corn', 'Onion', 'Onion', 'Kidney Beans', 'Ice cream', 'Eggs']]
# (A duplicate item in a receipt is fine; the encoder simply marks it True.)

# --- 2. Prepare Data in One-Hot Encoded Format ---
# mlxtend's apriori needs the data as a DataFrame of True/False values.
te = TransactionEncoder()
te_ary = te.fit(dataset).transform(dataset)
df = pd.DataFrame(te_ary, columns=te.columns_)

# --- 3. Find Frequent Itemsets with Apriori ---
# min_support=0.6 keeps only itemsets that appear in at least
# 60% of the transactions (3 out of 5).
frequent_itemsets = apriori(df, min_support=0.6, use_colnames=True)
print("--- Frequent Itemsets (Support >= 60%) ---")
print(frequent_itemsets)

# --- 4. Generate Association Rules ---
# Keep only rules with a confidence of at least 70%.
rules = association_rules(frequent_itemsets, metric="confidence", min_threshold=0.7)

# Sort the rules by "lift" to see the strongest relationships first.
sorted_rules = rules.sort_values(by='lift', ascending=False)
print("\n--- Strong Association Rules (Confidence >= 70%) ---")
print(sorted_rules[['antecedents', 'consequents', 'support', 'confidence', 'lift']])
</code></pre>
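<p>On this toy dataset, the output should show that {Kidney Beans} appears in every receipt (support 1.0) and that {Onion} => {Eggs} comes out with confidence 1.0 and lift 1.25: all three onion buyers also bought eggs, while eggs alone have support 0.8. Rules that predict {Kidney Beans}, by contrast, have a lift of exactly 1.0, because an item bought by everyone is "predicted" equally well by anything; this previews the "use lift" advice below.</p>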
<h2>🔹 Best Practices</h2>
<ul>
<li><strong>Data Formatting:</strong> Ensure your data is properly formatted (one-hot encoded) before applying the algorithm.</li>
<li><strong>Setting Thresholds:</strong> Start with a higher <code>min_support</code> and gradually lower it. If it's too low on a large dataset, you might run out of memory.</li>
<li><strong>Use Lift:</strong> Don't just rely on confidence. A rule might have high confidence just because the consequent is a very popular item. Lift tells you if the rule is truly meaningful.</li>
<li><strong>Consider Alternatives:</strong> For very large datasets, algorithms like FP-Growth are often much faster than Apriori because they don't require candidate generation; see the sketch after this list.</li>
</ul>
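<p>As a quick illustration of that last tip: <code>mlxtend</code> also ships an FP-Growth implementation with the same interface as <code>apriori</code>, so switching algorithms is a one-line change. The sketch below assumes <code>df</code> is the one-hot DataFrame built in the implementation section above:</p>
<pre><code>
from mlxtend.frequent_patterns import fpgrowth, association_rules

# Drop-in replacement for apriori(): same input DataFrame, same output
# format, but it mines an FP-tree instead of generating candidate
# itemsets, so it usually scales much better on large datasets.
frequent_itemsets = fpgrowth(df, min_support=0.6, use_colnames=True)
rules = association_rules(frequent_itemsets, metric="confidence", min_threshold=0.7)
print(frequent_itemsets)
</code></pre>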
<div class="quiz-section">
<h2>📝 Quick Quiz: Test Your Knowledge</h2>
<ol>
<li><strong>What is the Apriori Principle, and why is it important?</strong></li>
<li><strong>If Support({A}) = 30%, Support({B}) = 40%, and Support({A, B}) = 20%, what is the Confidence of the rule {A} => {B}?</strong></li>
<li><strong>A rule {Diapers} => {Beer} has a Lift of 3.0. What does this mean in plain English?</strong></li>
<li><strong>What is the main performance bottleneck of the Apriori algorithm?</strong></li>
</ol>
<div class="quiz-answers">
<h3>Answers</h3>
<p><strong>1.</strong> The Apriori Principle states that all subsets of a frequent itemset must also be frequent. It's important because it allows the algorithm to prune a massive number of candidate itemsets early on, making the process much more efficient.</p>
<p><strong>2.</strong> Confidence({A} => {B}) = Support({A, B}) / Support({A}) = 20% / 30% ≈ 66.7%.</p>
<p><strong>3.</strong> A Lift of 3.0 means that customers who buy diapers are 3 times more likely to buy beer than a randomly chosen customer. This indicates a strong positive association.</p>
<p><strong>4.</strong> The main bottleneck is the candidate generation step. In each pass, it can create a very large number of potential itemsets that need to be checked against the entire database, which is slow and memory-intensive.</p>
</div>
</div>
<h2>🔹 Key Terminology Explained (Apriori)</h2>
<div class="story-apriori">
<p><strong>The Story: Decoding the Supermarket Detective's Notebook</strong></p>
</div>
<ul>
<li>
<strong>Itemset:</strong>
<br>
<strong>What it is:</strong> A collection of one or more items purchased in a transaction.
<br>
<strong>Story Example:</strong> {Bread, Butter} is a 2-itemset. {Milk} is a 1-itemset. A single shopping receipt can contain many different itemsets.
</li>
<li>
<strong>Association Rule:</strong>
<br>
<strong>What it is:</strong> An "if-then" statement showing the relationship between two itemsets.
<br>
<strong>Story Example:</strong> {Bread} => {Butter} is an <strong>association rule</strong>. The "if" part ({Bread}) is called the <strong>antecedent</strong>, and the "then" part ({Butter}) is called the <strong>consequent</strong>.
</li>
<li>
<strong>Pruning:</strong>
<br>
<strong>What it is:</strong> The process of discarding candidate itemsets that are guaranteed to be infrequent without actually counting their occurrences in the database.
<br>
<strong>Story Example:</strong> This is the detective's efficiency trick. Knowing that {Caviar} is rare (infrequent), they immediately <strong>prune</strong> every combination containing it, never bothering to check {Caviar, Bread} or {Caviar, Milk}, which saves a huge amount of time.
</li>
<li>
<strong>One-Hot Encoding:</strong>
<br>
<strong>What it is:</strong> A way of preparing transactional data for the algorithm. It creates a table where each row is a transaction, each column is an item, and the cells are True/False (or 1/0) indicating if the item was in that transaction.
<br>
<strong>Story Example:</strong> It's like turning each receipt into a checklist. For a receipt with bread and milk, the "Bread" and "Milk" columns would be checked (True), while all other columns (Butter, Eggs, etc.) would be unchecked (False).
</li>
</ul>
</div>
</body>
</html>
{% endblock %}