Introduction: The Model Context Protocol (MCP) is an open, modular protocol designed to standardize how applications provide structured context to Large Language Models (LLMs). Much like USB-C unifies device connectivity across different hardware, MCP creates a unified interface for AI applications to interact with models, tools, and data sources in a clean, scalable, and interoperable manner. It simplifies the orchestration of context-aware workflows while enabling flexibility and consistency across diverse AI-driven use cases.
Why Choose MCP?
The Model Context Protocol (MCP) acts as a powerful abstraction layer that significantly augments the capabilities of generative AI systems. By defining structured interactions between clients, servers, tools, and data sources, MCP enables the development of intelligent, adaptive, and scalable AI-driven applications.
This system empowers users to generate professional PowerPoint presentations using a modular architecture built on the Model Context Protocol (MCP). It seamlessly integrates multiple components into a cohesive workflow that prioritizes flexibility, reusability, and performance. The key technologies and features include:
- ppt_mcp_server.py: FastMCP server logic + tool registry
- ppt_mcp_generator.py: Core PowerPoint generation logic
- ppt_client.py: Streamlit UI client
"""
Copyright (c) 2025 AI Leader X (aileaderx.com). All Rights Reserved.
This software is the property of AI Leader X. Unauthorized copying, distribution,
or modification of this software, via any medium, is strictly prohibited without
prior written permission. For inquiries, visit https://aileaderx.com
"""
# =============================
# PPT MCP Server Implementation
# =============================
# ---- Import necessary libraries ----
from mcp.server.fastmcp import FastMCP
from starlette.applications import Starlette
from starlette.requests import Request
from starlette.routing import Route, Mount
from mcp.server.sse import SseServerTransport
import uvicorn
import logging
from langchain_ollama.chat_models import ChatOllama
import re
from pptx import Presentation
from pptx.util import Inches, Pt
from io import BytesIO
import base64
# ---- Configure logging ----
logging.basicConfig(
    format="%(asctime)s - %(levelname)s - %(message)s",
    level=logging.INFO
)
# ---- Initialize FastMCP server with a name ----
mcp = FastMCP("ppt-server")
# ---- LLM model setup using Ollama and DeepSeek ----
llm = ChatOllama(
    model="deepseek-r1:8b",
    base_url="http://127.0.0.1:11434"
)
# =============================================
# Utility: Clean raw markdown TOC from the LLM
# =============================================
def clean_toc_md(raw_toc):
    # Remove <think>...</think> reasoning blocks emitted by DeepSeek-R1
    cleaned_text = re.sub(r'<think>.*?</think>', '', raw_toc, flags=re.DOTALL | re.IGNORECASE)
    # Extract the TOC starting from the first numbered point (like "1.")
    lines = cleaned_text.splitlines()
    start_index = 0
    for i, line in enumerate(lines):
        if line.strip().startswith("1."):
            start_index = i
            break
    cleaned_lines = lines[start_index:]
    return "\n".join(cleaned_lines)
# ===============================================
# Tool: Generate TOC using LLM based on a topic
# ===============================================
@mcp.tool()
async def generate_toc(topic: str) -> str:
    """Generate table of contents for a given topic"""
    try:
        prompt = (
            f"Generate text contents in PPT style for a PowerPoint presentation on: {topic}. "
            "Output only content relevant to the topic along with titles."
        )
        response = llm.invoke(prompt)
        return clean_toc_md(response.content)
    except Exception as e:
        logging.error(f"Error generating TOC: {str(e)}")
        raise
# =====================================================
# Utility: Generate PowerPoint bytes from topic and TOC
# =====================================================
def create_ppt_bytes(topic, toc_md):
    ppt = Presentation()
    # Set slide size to 16:9
    ppt.slide_width = Inches(13.33)
    ppt.slide_height = Inches(7.5)

    # Title Slide
    title_slide_layout = ppt.slide_layouts[0]
    slide = ppt.slides.add_slide(title_slide_layout)
    slide.shapes.title.text = topic

    # Process TOC lines into multiple slides
    toc_lines = toc_md.splitlines()
    lines_per_slide = 8
    num_slides = (len(toc_lines) + lines_per_slide - 1) // lines_per_slide
    for i in range(num_slides):
        slide_layout = ppt.slide_layouts[1]
        slide = ppt.slides.add_slide(slide_layout)
        slide.shapes.title.text = f"Title-{i}"  # Placeholder title
        start = i * lines_per_slide
        end = min((i + 1) * lines_per_slide, len(toc_lines))
        content_text = "\n".join(toc_lines[start:end])
        placeholder = slide.placeholders[1]
        placeholder.text = content_text
        # Attempt to auto-size text
        try:
            from pptx.enum.text import MSO_AUTO_SIZE
            placeholder.text_frame.auto_size = MSO_AUTO_SIZE.TEXT_TO_FIT_SHAPE
        except Exception:
            pass
        # Set font size for each line
        for paragraph in placeholder.text_frame.paragraphs:
            for run in paragraph.runs:
                run.font.size = Pt(26)

    # Save to memory buffer
    buffer = BytesIO()
    ppt.save(buffer)
    return buffer.getvalue()
# ===================================================
# Tool: Create base64-encoded PPTX from topic and TOC
# ===================================================
@mcp.tool()
async def create_ppt(topic: str, toc_md: str, ppt_name: str) -> dict:
    """Generate a PowerPoint file from the given topic and TOC"""
    try:
        ppt_bytes = create_ppt_bytes(topic, toc_md)
        return {
            "filename": f"{ppt_name}.pptx",
            "content": base64.b64encode(ppt_bytes).decode("utf-8")
        }
    except Exception as e:
        logging.error(f"Error generating PPT: {str(e)}")
        raise
# ==========================================================
# Setup Starlette app with SSE endpoint for FastMCP clients
# ==========================================================
def create_starlette_app():
    # Create SSE transport channel
    transport = SseServerTransport("/messages/")

    # SSE connection handler
    async def handle_sse(request: Request):
        client_ip = request.client.host if request.client else "unknown"
        logging.info(f"New PPT connection from {client_ip}")
        # Handle bi-directional communication using MCP server logic
        async with transport.connect_sse(
            request.scope, request.receive, request._send
        ) as (read_stream, write_stream):
            try:
                await mcp._mcp_server.run(
                    read_stream,
                    write_stream,
                    mcp._mcp_server.create_initialization_options()
                )
            except Exception as e:
                logging.error(f"Connection error: {str(e)}")
                raise

    # Mount routes and SSE handler
    return Starlette(
        routes=[
            Route("/sse", endpoint=handle_sse),
            Mount("/messages/", app=transport.handle_post_message),
        ]
    )
# ======================
# Run the MCP PPT Server
# ======================
if __name__ == "__main__":
    print("🟢 Starting PPT MCP Server on port 8888...")
    app = create_starlette_app()
    uvicorn.run(
        app,
        host="0.0.0.0",
        port=8888,
        log_config=None
    )
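Before wiring the server to a client, the clean_toc_md helper can be exercised on its own. The sample response below is made up to mimic a DeepSeek-R1 reply, which wraps its chain-of-thought in a think block before the actual answer:

```python
import re

def clean_toc_md(raw_toc: str) -> str:
    # Strip the <think>...</think> reasoning block emitted by DeepSeek-R1
    cleaned = re.sub(r"<think>.*?</think>", "", raw_toc, flags=re.DOTALL | re.IGNORECASE)
    # Keep everything from the first numbered TOC entry onward
    lines = cleaned.splitlines()
    start = 0
    for i, line in enumerate(lines):
        if line.strip().startswith("1."):
            start = i
            break
    return "\n".join(lines[start:])

raw = "<think>planning the outline...</think>\nHere is the TOC:\n1. Introduction\n2. Key Concepts"
print(clean_toc_md(raw))  # "1. Introduction\n2. Key Concepts"
```

Note that if the model emits no line starting with "1.", the function falls back to returning the whole cleaned text, since start stays at 0.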
"""
Copyright (c) 2025 AI Leader X (aileaderx.com). All Rights Reserved.
This software is the property of AI Leader X. Unauthorized copying, distribution,
or modification of this software, via any medium, is strictly prohibited without
prior written permission. For inquiries, visit https://aileaderx.com
"""
from pptx import Presentation # Main class to create PowerPoint presentations
from pptx.util import Inches, Pt # Utilities for defining slide dimensions and font sizes
from io import BytesIO # Used to handle in-memory byte streams (no need to save to disk)
class PPTGenerator:
    @staticmethod
    def create_ppt_bytes(topic, toc_md):
        # Create a new empty PowerPoint presentation
        ppt = Presentation()
        # Set the slide dimensions to 16:9 widescreen format
        ppt.slide_width = Inches(13.33)
        ppt.slide_height = Inches(7.5)

        # -----------------------------------------------
        # Add the Title Slide as the first slide in the deck
        # -----------------------------------------------
        title_slide_layout = ppt.slide_layouts[0]         # Use the default title slide layout
        slide = ppt.slides.add_slide(title_slide_layout)  # Add a slide with the title layout
        slide.shapes.title.text = topic                   # Set the title of the slide as the topic

        # -----------------------------------------------
        # Process the TOC markdown to create content slides
        # -----------------------------------------------
        # Split the TOC markdown string into a list of lines
        toc_lines = toc_md.splitlines()
        # Define how many TOC lines to display per slide
        lines_per_slide = 8
        # Calculate the total number of slides needed (ceiling division)
        num_slides = (len(toc_lines) + lines_per_slide - 1) // lines_per_slide

        # Loop through and create each content slide
        for i in range(num_slides):
            slide_layout = ppt.slide_layouts[1]         # Use "Title and Content" layout for content slides
            slide = ppt.slides.add_slide(slide_layout)  # Add a new slide with that layout
            # Assign a placeholder title to each slide (this can be improved to reflect actual content)
            slide.shapes.title.text = f"Title-{i}"
            # Determine the slice of TOC lines for this slide
            start = i * lines_per_slide
            end = min((i + 1) * lines_per_slide, len(toc_lines))
            content_text = "\n".join(toc_lines[start:end])  # Join lines with newline characters
            # Fill the content placeholder with the extracted TOC lines
            placeholder = slide.placeholders[1]  # Usually the second placeholder is the content area
            placeholder.text = content_text

            # -----------------------------------------------
            # Optional: Auto-size the text to fit the placeholder
            # -----------------------------------------------
            try:
                from pptx.enum.text import MSO_AUTO_SIZE
                placeholder.text_frame.auto_size = MSO_AUTO_SIZE.TEXT_TO_FIT_SHAPE
            except Exception:
                # If auto-sizing fails (e.g., older python-pptx version), just ignore it
                pass

            # -----------------------------------------------
            # Set the font size for all text in the content box
            # -----------------------------------------------
            for paragraph in placeholder.text_frame.paragraphs:
                for run in paragraph.runs:
                    run.font.size = Pt(26)  # Set font size to 26pt for readability

        # -----------------------------------------------
        # Save the presentation to an in-memory byte stream
        # -----------------------------------------------
        buffer = BytesIO()  # Create a byte stream buffer
        ppt.save(buffer)    # Save the PowerPoint into the buffer
        # Return the raw byte content of the PowerPoint file
        return buffer.getvalue()
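The slide-count arithmetic above is a ceiling division: any partial chunk of TOC lines still needs its own slide. It can be checked in isolation without python-pptx:

```python
def num_slides(n_lines: int, lines_per_slide: int = 8) -> int:
    # Ceiling division: (n + k - 1) // k rounds up instead of down
    return (n_lines + lines_per_slide - 1) // lines_per_slide

print(num_slides(16))  # 16 lines fit exactly on 2 slides
print(num_slides(17))  # one extra line forces a 3rd slide
print(num_slides(0))   # an empty TOC produces no content slides
```

This is why an 8-line TOC yields one content slide while a 9-line TOC yields two.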
"""
Copyright (c) 2025 AI Leader X (aileaderx.com). All Rights Reserved.
This software is the property of AI Leader X. Unauthorized copying, distribution,
or modification of this software, via any medium, is strictly prohibited without
prior written permission. For inquiries, visit https://aileaderx.com
"""
# Required imports
import streamlit as st # Streamlit for web interface
from mcp import ClientSession # MCP client session to interact with MCP server
from mcp.client.sse import sse_client # SSE client for streaming communication
import asyncio # For running asynchronous calls
from contextlib import AsyncExitStack # To manage async resource cleanup
import base64 # For decoding base64-encoded PPT data
import json # For parsing JSON responses
# -------------------------------------
# PPTClient class: Handles interaction with the MCP server
# -------------------------------------
class PPTClient:
    def __init__(self, server_url):
        self.server_url = server_url        # MCP SSE server URL
        self.session = None                 # Will hold the ClientSession
        self._streams_context = None        # SSE stream context (set in connect)
        self._session_context = None        # ClientSession context (set in connect)
        self.exit_stack = AsyncExitStack()  # For proper async cleanup

    # Connect to the MCP SSE server and initialize the session
    async def connect(self):
        try:
            # Start the stream context to establish the SSE connection
            self._streams_context = sse_client(url=self.server_url)
            streams = await self._streams_context.__aenter__()
            # Create and initialize an MCP session
            self._session_context = ClientSession(*streams)
            self.session = await self._session_context.__aenter__()
            await self.session.initialize()
            return True
        except Exception as e:
            st.error(f"Connection failed: {str(e)}")
            return False

    # Call the "generate_toc" tool on the MCP server
    async def generate_toc(self, topic):
        response = await self.session.call_tool("generate_toc", {"topic": topic})
        if response.content:
            return response.content[0].text  # Return the markdown TOC string
        return None

    # Call the "create_ppt" tool with the topic, content, and filename
    async def create_ppt(self, topic, toc_md, ppt_name):
        response = await self.session.call_tool("create_ppt", {
            "topic": topic,
            "toc_md": toc_md,
            "ppt_name": ppt_name
        })
        if response.content:
            data = json.loads(response.content[0].text)
            # Decode the base64 file content and return filename and bytes
            return data["filename"], base64.b64decode(data["content"])
        return None, None

    # Properly close the client session and stream context
    async def disconnect(self):
        if self._session_context:
            await self._session_context.__aexit__(None, None, None)
        if self._streams_context:
            await self._streams_context.__aexit__(None, None, None)
# -------------------------------------
# Helper function to only generate TOC content
# -------------------------------------
async def generate_content(topic, server_url):
    client = PPTClient(server_url)
    if not await client.connect():
        return None
    toc_md = await client.generate_toc(topic)
    await client.disconnect()
    return toc_md
# -------------------------------------
# Helper function to generate PPT using topic and TOC
# -------------------------------------
async def generate_presentation(topic, toc_md, ppt_name, server_url):
    client = PPTClient(server_url)
    if not await client.connect():
        return None, None
    filename, ppt_bytes = await client.create_ppt(topic, toc_md, ppt_name)
    await client.disconnect()
    return filename, ppt_bytes
# -------------------------------------
# Streamlit UI Section
# -------------------------------------
# Web app title and subtitle
st.title("AI-Generated PowerPoint Creator")
st.subheader("Generate professional presentations using MCP framework")
# Input field for the MCP server URL
server_url = st.text_input("MCP Server URL", "http://localhost:8888/sse")
# Input field to type the presentation topic
topic = st.text_area("Enter Topic:", height=100)
# Button to generate the slide content (TOC markdown)
if st.button("Generate Content"):
    if not topic.strip():
        st.error("Please enter a topic")
    else:
        toc_md = asyncio.run(generate_content(topic, server_url))
        if toc_md:
            st.session_state.toc_md = toc_md  # Store TOC in session state
            st.success("Content generated!")
        else:
            st.error("Failed to generate content")
# Show editable TOC content if already generated
if "toc_md" in st.session_state:
    edited_toc = st.text_area("Edit Content:", st.session_state.toc_md, height=200)
    st.session_state.toc_md = edited_toc  # Allow the user to edit before generating the PPT
# Input field to specify the output PowerPoint filename
ppt_name = st.text_input("PPT Name:", "MyPresentation")
# Button to generate and download the presentation
if st.button("Generate Presentation"):
    if "toc_md" not in st.session_state or not st.session_state.toc_md.strip():
        st.error("Please generate content first")
    else:
        filename, ppt_bytes = asyncio.run(generate_presentation(
            topic, st.session_state.toc_md, ppt_name, server_url
        ))
        if filename and ppt_bytes:
            st.success("Presentation ready!")
            # Display a download button for the generated PPT file
            st.download_button(
                label=f"Download {filename}",
                data=ppt_bytes,
                file_name=filename,
                mime="application/vnd.openxmlformats-officedocument.presentationml.presentation"
            )
        else:
            st.error("Failed to generate presentation")
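Because MCP tool results travel as text, create_ppt returns the binary PPTX encoded as base64 inside a JSON payload, and the client decodes it before offering the download. The round trip is lossless, which a small stdlib-only sketch (with made-up placeholder bytes) can demonstrate:

```python
import base64
import json

# Stand-in for real PPTX bytes; actual .pptx files are ZIP archives ("PK" magic bytes)
fake_pptx_bytes = b"PK\x03\x04 fake zip payload"

# Server side: wrap the binary as base64 text inside JSON
payload = json.dumps({
    "filename": "MyPresentation.pptx",
    "content": base64.b64encode(fake_pptx_bytes).decode("utf-8"),
})

# Client side: parse the JSON and decode back to the original bytes
data = json.loads(payload)
restored = base64.b64decode(data["content"])
print(restored == fake_pptx_bytes)  # True: the transport round trip is lossless
```

Base64 inflates the payload by roughly a third, which is an acceptable cost for keeping the protocol text-based.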
# Core server/client framework
mcp>=1.0.0          # MCP Python SDK (provides mcp.server.fastmcp and mcp.client.sse)
fastapi>=0.100.0
uvicorn>=0.22.0
starlette>=0.32.0
# PowerPoint generation
python-pptx>=0.6.21
# LLM and LangChain integrations
langchain>=0.1.14
langchain-community>=0.0.30
langchain-ollama>=0.0.5
# Streamlit for client UI
streamlit>=1.32.0
# SSE and async client-server support
httpx>=0.27.0
aiohttp>=3.9.0
sse-starlette>=1.3.3
# General utilities
requests>=2.31.0
# base64 (used for encoding/decoding binary files) is part of the standard library
# Regular expressions (built-in, no need to install separately)
# re
# For packaging and I/O
aiofiles>=23.2.1
# Optional: for better logging and dev experience
rich>=13.7.0
To install all dependencies:
D:\ppt_generator>conda create -n mcp-sse python=3.13 -y
D:\ppt_generator>conda activate mcp-sse
(mcp-sse) D:\ppt_generator>pip install -r requirements.txt
(mcp-sse) D:\ppt_generator>python ppt_mcp_server.py
🟢 Starting PPT MCP Server on port 8888...
2025-04-24 22:12:21,216 - INFO - Started server process [59852]
2025-04-24 22:12:21,217 - INFO - Waiting for application startup.
2025-04-24 22:12:21,218 - INFO - Application startup complete.
2025-04-24 22:12:21,218 - INFO - Uvicorn running on http://0.0.0.0:8888 (Press CTRL+C to quit)
(mcp-sse) D:\ppt_generator>streamlit run ppt_client.py
Access the UI in your browser at http://localhost:8501 (or http://<your-local-ip>:8501 from another machine).
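Before launching the Streamlit client, it can help to verify the MCP server from the previous step is actually reachable. A quick stdlib-only probe (not part of the project files) simply attempts a TCP connection to the server port:

```python
import socket

def server_is_up(host: str = "localhost", port: int = 8888, timeout: float = 2.0) -> bool:
    # Cheap reachability check: can we open a TCP connection to the server port?
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Prints True once ppt_mcp_server.py is listening on port 8888
print(server_is_up())
```

If this prints False, the Streamlit client's "Connection failed" error almost always means the server process is not running or the URL/port is wrong.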
This MCP-based PowerPoint Generator showcases the power of the Model Context Protocol (MCP) in orchestrating and managing generative AI workflows in a modular and scalable manner. By structuring the application into well-defined components—such as tools, resources, prompts, and a FastAPI-based server—the system ensures clean separation of concerns and easy extensibility.
MCP serves as a robust backbone for generative AI systems by enabling context-driven interactions between clients and models. In this application, it manages communication between the Streamlit-based UI (ppt_client.py) and the backend service (ppt_mcp_server.py), which coordinates LLM content generation and PowerPoint creation. The result is a highly configurable AI assistant capable of generating full presentations with minimal user input.
The benefits of using MCP in generative applications include:
- Clean separation of concerns between the UI client, the server, and the generation logic
- Reusable tools (generate_toc, create_ppt) that any MCP-compatible client can call
- Scalable, streaming client-server communication over SSE
- Consistency and interoperability across diverse AI-driven use cases

Future enhancements may include:
- Deriving meaningful slide titles from the generated content instead of the Title-{i} placeholders
- Adding images and richer layouts to the generated slides
- Supporting additional LLM backends beyond the local Ollama/DeepSeek setup
Overall, this project not only demonstrates a practical implementation of MCP in a generative AI setting but also lays the groundwork for more advanced and domain-specific applications. It is a step forward in building intelligent, modular systems that can evolve with the rapid pace of AI development.