Author: russasaurous

  • The Quest for Enterprise RAG: A Deep Dive into Langfuse Integration

    January 14, 2026

    You know that feeling when your code almost works?
    Everything’s green, the server’s running, the LLM is responding
    beautifully, and then you look at that one field in your API response
    that’s supposed to showcase your observability chops for your dream job
    interview and it’s just… null.

    Welcome to my 90-minute debugging odyssey with Langfuse tracing.
    Spoiler: I won. But not before trying five different approaches, diving
    into import hell, and learning the single most valuable debugging lesson
    that somehow never makes it into the YouTube tutorials.

    Let’s talk about what happens when the docs don’t tell you what you
    need to know.


    TL;DR: What You’ll Learn

    • 🔍 The Problem: Langfuse credentials loaded, traces
      sent to dashboard, but trace_url returned null
      in API responses
    • 🛠️ The Solution: Use
      langfuse_handler.last_trace_id (found by inspecting the
      object with dir()) to build trace URLs manually
    • ⚠️ Environment Variable Hell: .env
      formatting is strict (no spaces around =, no quotes
      needed)
    • 🧪 The Debug-Driven Discovery: When APIs don’t
      expose what you need, inspect attributes at runtime and see what’s
      actually there
    • 💡 Import Gotcha: It’s
      from langfuse.langchain import CallbackHandler, not
      from langfuse.callback
    • 🎯 Key Insight: Trace IDs are only available
      after chain invocation via
      handler.last_trace_id
    • 🚀 Result: Full RAG pipeline observability with
      direct clickable trace URLs for every query

    Act I: The Setup (Everything’s Perfect… Right?)

    I’m building a Bank of America-compliant RAG demo for a GenAI
    Engineering interview. The stack is clean:

    • Python 3.11.9 with FastAPI/Uvicorn
    • AWS Bedrock (Claude 3 Sonnet) for generation
    • LangChain for the RAG orchestration
    • Chroma vector DB with BofA’s Responsible AI
      documents
    • LLM Guard for guardrails (input: PromptInjection,
      Toxicity, BanTopics; output: Sensitive, NoRefusal, Relevance)
    • Langfuse for observability (the star of today’s
      show)
    • DeepEval with OpenAI for evaluation

    The RAG pipeline is chef’s kiss. Quality answers, proper
    source citations, guardrails blocking malicious queries like a bouncer
    at an exclusive club. I can taste that job offer.

    But there’s one problem.

    {
      "query": "What are BofA's five core principles for responsible AI?",
      "answer": "According to [BofA_Responsible_AI_Framework_2024.md]...",
      "sources": ["BofA_Responsible_AI_Framework_2024.md", "..."],
      "trace_url": null,
      "guardrails_applied": true
    }

    That trace_url: null is haunting me. I know
    traces are going to Langfuse because I can see them in the dashboard.
    But I can’t link to them programmatically. For a demo about
    observability, that’s… suboptimal.

    Time to fix it.


    Act II: The Descent (Five Failed Attempts)

    Attempt 1: “It Must Be an Attribute”

    First instinct: the CallbackHandler probably has a
    .trace or .trace_id attribute I can access
    after running the chain.

    if trace and callbacks:
        if hasattr(langfuse_handler, 'trace'):
            trace_url = langfuse_handler.trace.get_trace_url()
        elif hasattr(langfuse_handler, 'trace_id'):
            trace_id = langfuse_handler.trace_id
            trace_url = f"{os.getenv('LANGFUSE_HOST')}/trace/{trace_id}"

    Result: Both hasattr() checks return
    False. No .trace, no
    .trace_id.

    Lesson: Don’t assume the obvious attributes
    exist.


    Attempt 2: “Maybe I Need to Flush?”

    I’ve seen callback handlers that need explicit flushing to send data.
    Maybe the trace isn’t finalized until I flush it?

    langfuse_handler.flush()
    if hasattr(langfuse_handler, 'trace_id'):
        trace_id = langfuse_handler.trace_id
        trace_url = f"{os.getenv('LANGFUSE_HOST')}/trace/{trace_id}"

    Terminal output:

    ⚠️  Trace error: 'LangchainCallbackHandler' object has no attribute 'flush'

    Result: Nope. No .flush() method.

    Lesson: Not all handlers follow the same patterns.
    Read the actual API.


    Attempt 3: “Let’s Create the Trace Explicitly”

    Okay, what if I’m doing this backwards? What if I need to use the
    Langfuse client to create a trace first, then pass it
    to the handler?

    from langfuse import Langfuse
    from langfuse.callback import CallbackHandler
    
    langfuse_client = Langfuse()
    
    # In the query function
    langfuse_trace = langfuse_client.trace(
        name="rag-query",
        metadata={"query": query, "guardrails_enabled": True}
    )
    langfuse_handler = CallbackHandler(trace=langfuse_trace)
    trace_url = langfuse_trace.get_trace_url()

    Terminal output:

    ModuleNotFoundError: No module named 'langfuse.callback'

    Wait, what?

    'Langfuse' object has no attribute 'trace'

    Result: Double fail. Wrong import path and
    wrong approach.

    Lesson: Verify your imports before debugging logic.
    Also, the Langfuse SDK doesn’t work the way I thought it did.


    Attempt 4: “Fine, Let Me Read the Import Docs”

    Turns out the correct import is:

    from langfuse.langchain import CallbackHandler  # NOT langfuse.callback

    But I’m still stuck on how to create a trace. Maybe I can pass
    parameters to CallbackHandler to identify the trace?

    langfuse_handler = CallbackHandler(
        session_id=str(uuid.uuid4()),
        user_id="rag-demo-user"
    )

    Terminal output:

    LangchainCallbackHandler.__init__() got an unexpected keyword argument 'session_id'

    Tried trace_id too. Same error.

    Result: CallbackHandler() doesn’t
    accept these parameters.

    Lesson: The LangChain integration for Langfuse is
    opinionated. It wants you to use it a specific way.


    Attempt 5: “Screw It, Let’s Inspect the Object”

    I’m out of ideas from the documentation. Time for the nuclear option:
    see what’s actually on the object at runtime.

    # After chain invocation
    if trace and langfuse_handler:
        attrs = [a for a in dir(langfuse_handler) if not a.startswith('_')]
        print(f"🔍 Available handler attributes: {attrs[:20]}")

    Terminal output:

    🔍 Available handler attributes: ['client', 'context_tokens', 'get_langchain_run_name', 
    'ignore_agent', 'ignore_chain', 'ignore_chat_model', 'ignore_custom_event', 'ignore_llm', 
    'ignore_retriever', 'ignore_retry', 'last_trace_id', 'on_agent_action', 'on_agent_finish', 
    'on_chain_end', 'on_chain_error', 'on_chain_start', 'on_chat_model_start', 
    'on_custom_event', 'on_llm_end', 'on_llm_error']

    Wait.

    last_trace_id.

    THERE IT IS.


    Act III: The Breakthrough

    That moment when you find the thing. The actual thing. Not
    what the docs say should be there. Not what makes logical sense based on
    other APIs. The thing that’s actually there.

    # Create callback handler for this request
    trace_url = None
    callbacks = []
    langfuse_handler = None
    if trace:
        langfuse_handler = CallbackHandler()  # No parameters needed!
        callbacks = [langfuse_handler]
    
    # Build the RAG chain
    rag_chain = (
        {"context": retriever | format_docs, "question": RunnablePassthrough()}
        | prompt_template
        | llm
        | StrOutputParser()
    )
    
    # Invoke with tracing
    answer = rag_chain.invoke(query, config={"callbacks": callbacks})
    
    # NOW extract the trace URL
    if trace and langfuse_handler:
        if hasattr(langfuse_handler, 'last_trace_id') and langfuse_handler.last_trace_id:
            langfuse_host = os.getenv('LANGFUSE_HOST', 'https://cloud.langfuse.com')
            trace_url = f"{langfuse_host}/trace/{langfuse_handler.last_trace_id}"

    POST request result:

    {
      "query": "What are BofA's responsible AI principles?",
      "answer": "According to [BofA_Responsible_AI_Framework_2024.md]...",
      "sources": ["BofA_Responsible_AI_Framework_2024.md"],
      "trace_url": "https://us.cloud.langfuse.com/trace/550e8400-e29b-41d4-a716-446655440000",
      "guardrails_applied": true
    }

    IT WORKS.

    The trace URL is a real, clickable link to the Langfuse dashboard,
    showing:

    • Full chain execution
    • Token usage
    • Latency per step
    • Input/output for each LangChain component
    • Error tracking

    Perfect observability. Perfect for a demo. Perfect for an
    interview.


    Act IV: The Cleanup (Don’t Forget Environment Variables)

    Oh, and before we celebrate too much, let me tell you about the
    .env formatting saga that preceded all this.

    Wrong:

    LANGFUSE_PUBLIC_KEY = "pk-lf-..."
    LANGFUSE_SECRET_KEY = "sk-lf-..."
    LANGFUSE_BASE_URL=https://us.cloud.langfuse.com

    Problems:

    1. Spaces around = signs (Python’s dotenv doesn’t like that)
    2. Quotes around values (treated as part of the string)
    3. Wrong variable name (LANGFUSE_BASE_URL instead of LANGFUSE_HOST)

    Correct:

    LANGFUSE_PUBLIC_KEY=pk-lf-...
    LANGFUSE_SECRET_KEY=sk-lf-...
    LANGFUSE_HOST=https://us.cloud.langfuse.com

    Debug logging confirmed the fix:

    print(f"Debug: LANGFUSE_PUBLIC_KEY loaded? {os.getenv('LANGFUSE_PUBLIC_KEY') is not None}")
    print(f"Debug: LANGFUSE_SECRET_KEY loaded? {os.getenv('LANGFUSE_SECRET_KEY') is not None}")

    Output:

    Debug: LANGFUSE_PUBLIC_KEY loaded? True
    Debug: LANGFUSE_SECRET_KEY loaded? True

    Lesson: Environment variables are finicky. Debug
    them early. Assume nothing.


    Lessons Learned (The Good Stuff)

    🔍 When Docs Fail, Inspect the Object

    This is the meta-lesson. The thing that saved me after 5 failed
    attempts.

    attrs = [a for a in dir(obj) if not a.startswith('_')]
    print(f"Available attributes: {attrs}")

    Don’t guess. Don’t assume the API works like similar APIs you’ve
    seen. Look at what’s actually there. This is the
    equivalent of using console.log(Object.keys(obj)) in
    JavaScript or vars() in Python. It’s unglamorous, but it
    works.

    ⏱️ Timing Matters

    last_trace_id is only available after the chain
    invocation, not before. This makes sense—the trace is created during
    execution—but it’s not intuitive if you’re used to passing trace IDs
    upfront.
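
    A minimal sketch of that ordering, reusing the handler and chain from
    earlier in this post:

    handler = CallbackHandler()
    print(handler.last_trace_id)   # nothing useful yet; no trace has been created

    answer = rag_chain.invoke(query, config={"callbacks": [handler]})
    print(handler.last_trace_id)   # populated only after the invocation completes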

    🧩 LangChain Integrations Are Opinionated

    The CallbackHandler() doesn’t want you to create traces manually. It
    handles everything internally if you just:

    1. Initialize it with no parameters (reads from env vars)
    2. Pass it to the chain via config={"callbacks": [handler]}
    3. Access last_trace_id after invocation

    Fighting this pattern wastes time.

    📦 Import Paths Aren’t Always Obvious

    from langfuse.callback import CallbackHandler
    from langfuse.langchain import CallbackHandler

    The LangChain-specific integration lives in a separate module. Check
    the package structure.

    🔧 Environment Variables Need Love

    • No spaces around =
    • No quotes (unless you want quotes in the value)
    • Use the exact variable names the library expects
      (LANGFUSE_HOST not LANGFUSE_BASE_URL)
    • Add debug logging to verify they’re loading (a quick sanity check is
      sketched below)
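
    A minimal loading-and-verification snippet (assuming python-dotenv, which
    is what the .env handling above implies):

    from dotenv import load_dotenv
    import os

    load_dotenv()  # reads .env from the current working directory by default

    for var in ("LANGFUSE_PUBLIC_KEY", "LANGFUSE_SECRET_KEY", "LANGFUSE_HOST"):
        print(f"Debug: {var} loaded? {os.getenv(var) is not None}")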

    🎯 UUIDs Are Your Friend

    I imported uuid for this project even though I ended up
    not needing it for trace IDs (Langfuse handles that). But having it
    available let me experiment quickly:

    import uuid
    
    trace_id = str(uuid.uuid4())  # Quick unique ID for testing

    🚀 Don’t Give Up on the Right Solution

    I could have accepted trace_url: null and just told
    interviewers “check the Langfuse dashboard manually.” But that would
    have been a mediocre demo. Persistence pays off.


    The Final Stack

    Here’s what’s working now:

    Core RAG:

    • AWS Bedrock Claude 3 Sonnet (anthropic.claude-3-sonnet-20240229-v1:0)
    • Chroma vector DB with 8 BofA Responsible AI documents
    • HuggingFace embeddings (sentence-transformers/all-MiniLM-L6-v2)
    • LangChain LCEL for pipeline composition

    Guardrails (LLM Guard):

    • Input: PromptInjection, Toxicity, BanTopics
    • Output: Sensitive (PII redaction), NoRefusal, Relevance
    • All scanners working perfectly after parameter fixes

    Observability (Langfuse):

    • ✅ Traces sent to dashboard
    • ✅ Direct trace URLs in API responses
    • ✅ Token usage tracking
    • ✅ Latency metrics per chain step

    Evaluation (DeepEval + OpenAI):

    • OpenAI API key configured (180 chars, sk-proj-...)
    • Ready for Faithfulness, AnswerRelevancy, ContextualRelevancy metrics
    • LLM-as-Judge for adversarial query testing

    Deployment:

    • FastAPI server on 127.0.0.1:5000
    • Auto-reload enabled for development
    • Git repo: github.com/mtnjxynt6p-ai/brownbi_com
    • Professional commit history with semantic prefixes
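
    To make the shape of all this concrete, here’s a stripped-down sketch of
    how the /query endpoint ties the pieces together. It’s illustrative only:
    names like QueryRequest, run_guardrails, and extract_sources are
    stand-ins, not the exact code in the repo.

    from typing import List, Optional
    import os

    from fastapi import FastAPI
    from pydantic import BaseModel
    from langfuse.langchain import CallbackHandler

    app = FastAPI()

    class QueryRequest(BaseModel):
        query: str

    class QueryResponse(BaseModel):
        query: str
        answer: str
        sources: List[str]
        trace_url: Optional[str]
        guardrails_applied: bool

    @app.post("/query", response_model=QueryResponse)
    def query_rag(req: QueryRequest) -> QueryResponse:
        safe_query = run_guardrails(req.query)   # stand-in for the LLM Guard input scanners

        handler = CallbackHandler()              # per-request handler, exactly as in Act III
        # rag_chain is the LCEL chain built in Act III
        answer = rag_chain.invoke(safe_query, config={"callbacks": [handler]})

        trace_url = None
        if getattr(handler, "last_trace_id", None):
            host = os.getenv("LANGFUSE_HOST", "https://cloud.langfuse.com")
            trace_url = f"{host}/trace/{handler.last_trace_id}"

        return QueryResponse(
            query=req.query,
            answer=answer,
            sources=extract_sources(answer),     # stand-in for citation parsing
            trace_url=trace_url,
            guardrails_applied=True,
        )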


    Why This Matters for Your Next Interview

    Debugging stories like this demonstrate:

    1. Persistence: You don’t give up when the docs don’t
      have the answer
    2. Systematic thinking: You try multiple approaches
      methodically
    3. Debugging skills: You know how to inspect objects,
      read error messages, and isolate problems
    4. Tool knowledge: You understand how integrations
      work (callbacks, handlers, environment variables)
    5. Engineering judgment: You know when to dig deeper
      vs. when to move on

    When I show this demo to Bank of America (or any GenAI role), I won’t
    just show a working RAG system. I’ll show:

    • Observability: Clickable trace URLs for every query
    • Guardrails: Live blocking of prompt injection and PII leakage
    • Evaluation: Metrics proving answer quality and faithfulness
    • Professional engineering: Clean code, git history, documentation

    And if they ask “how did you get the Langfuse integration
    working?”

    I’ll tell them this story.


    The Git Commit That Ended It All

    git commit -m "feat: Add Langfuse trace URL to API response using last_trace_id
    
    - Import uuid for trace generation
    - Use CallbackHandler() to capture Langfuse traces
    - Extract trace URL using handler.last_trace_id
    - Build direct link to Langfuse dashboard for each query
    - Provides full observability of RAG pipeline execution"

    Commit: a1a7ce7


    Closing Thoughts

    If you’re building production GenAI systems, you’ll hit walls like
    this. The ecosystem is moving fast. Documentation lags. APIs change.
    Integrations are janky.

    The difference between a junior engineer and a senior engineer isn’t
    that the senior knows all the answers. It’s that the senior knows
    how to find the answers when the docs don’t have them.

    Use dir(). Use hasattr(). Print the damn
    attributes. Read error messages carefully. Try multiple approaches.
    Don’t assume the API works like you think it should.

    And when you finally get that trace_url field to
    populate with a real URL?

    Commit it. Document it. And add it to your interview portfolio.

    Because that’s the stuff that gets you hired.


    P.S. If you’re preparing for GenAI/ML engineering
    interviews and want to see the full code for this RAG system with
    guardrails, tracing, and evaluation, it’s all on GitHub:
    github.com/mtnjxynt6p-ai/brownbi_com. PRs welcome. Bugs
    expected. Observability guaranteed.

    P.P.S. Special shoutout to GitHub Copilot for
    helping me debug this in real-time. Even AI agents can’t magic away the
    need to inspect objects at runtime. But they sure make the journey
    faster.


    Tags: #GenAI #RAG #Langfuse #Observability
    #LangChain #Debugging #AWSBedrock #Python #FastAPI #InterviewPrep

    Reading time: ~10 minutes
    Debugging time saved: ~2 hours (if you learn from my
    mistakes)

  • The Massive COBOL Problem in Finance

    Russ Brown Jan 13, 2026

     

    A colleague of mine, Ellison Chan, recently shared an exciting AI project he’s working on: using large language models (LLMs) and generative AI to modernize legacy COBOL code. It’s a smart, timely idea — and yes, there’s a very real and rapidly expanding market for this in financial services (banks, insurance companies, credit unions, payment processors, and more). This isn’t hype; it’s one of the hottest enterprise AI applications right now (as of early 2026). Here’s why it’s gaining serious traction, backed by real industry momentum.

    Financial institutions still run enormous amounts of mission-critical code in COBOL — the language from the late 1950s that’s powering core systems like transaction processing, accounts, loans, payments, and insurance claims.

    • Scale: Globally, estimates put 220–300 billion lines of COBOL code in production, with ~43% of banking systems still built on it. In the US alone, COBOL handles trillions of dollars in daily transactions.
    • Pain points:
      • Talent crisis: COBOL experts are retiring (average age 50+), and almost no new developers learn it. Maintenance costs are exploding — often eating 60%+ of IT budgets.
      • Technical debt: Legacy mainframes are expensive, hard to integrate with modern cloud/API/microservices, and slow innovation.
      • Risk: Full manual rewrites or migrations take 5–10+ years and hundreds of millions — too slow and risky for most organizations.

    Traditional modernization is painful. AI/LLMs change that by making it incremental, faster, and lower-risk.

    How Generative AI Tackles COBOL Modernization

    LLMs (especially code-specialized ones) shine at:

    • Explaining undocumented COBOL logic in plain English
    • Generating/updating documentation and comments
    • Refactoring code
    • Translating COBOL → Java, Python, C#, or microservices
    • Extracting buried business rules
    • Generating unit tests
    • Identifying dependencies and vulnerabilities

    This enables phased, safe modernization — update one module at a time, validate automatically, and preserve functionality while reducing complexity and costs (often 40–50% time savings, per McKinsey/Accenture reports).

    Real Momentum & Adoption in Financial Services (2025–2026)

    The space is heating up fast, with big players and real deployments:

    • IBM watsonx Code Assistant for Z — Specifically built for COBOL-to-Java on mainframes. Already in use at financial institutions; it’s a flagship for mainframe clients (banks/insurers).
    • Accenture, EY, Capgemini — All have GenAI tools for COBOL analysis, documentation, and translation.
    • Microsoft Azure AI agents — Used for COBOL migration and mainframe modernization in banking/insurance.
    • Case studies & pilots — Regional US banks/insurers use these tools; Goldman Sachs pilots assistants writing 40%+ of code; European insurers migrate complex apps with 95% latency reduction.

    Living It Firsthand in Charlotte

    Here in Charlotte, North Carolina — the second-largest banking hub in the United States — this issue is front and center every day. The city is home to major players like Bank of America (headquarters), Truist (significant presence), Wells Fargo (large operations), and dozens of regional banks, credit unions, and fintech firms. Many of these institutions still rely heavily on COBOL-based core systems for their day-to-day transaction processing. You can feel the urgency firsthand: IT leaders here are openly talking about the retiring workforce, skyrocketing maintenance costs, and the pressure to modernize without breaking the business. Charlotte’s banking community is actively piloting and investing in AI-assisted legacy modernization — it’s not just a national trend; it’s happening right in our backyard.

    Market Size & Growth Projections

    • Broader mainframe modernization: ~$8–9B in 2025 → $13B+ by 2030 (CAGR 9–10%).
    • AI/GenAI legacy code segment: ~$1.8–2B in 2024–2025 → $14B+ by 2033 (CAGR 25%+).
    • Application modernization services: $30B+ in 2025 → $100B+ by 2033.
    • Financial services takes the biggest share (~40%), with 70%+ of large banks planning AI budget increases for modernization & compliance by 2026.

    Gartner predicts 75% of software engineers will use AI code assistants by 2028 (up from <10% in 2023), with legacy modernization as a key driver.

  • From Validation Errors to Vector Recommendations: Building a Personalized “Recommended for You” for Classic Cars

    Russ Brown – January 10, 2026

    You know that feeling when you’re three hours deep into an AWS console at 11 PM, staring at an error message that might as well be written in ancient Sumerian? Yeah. This post is about that journey.

    I’ve always believed the best way to really learn a technology is to build something real with it — and then write down *exactly* how it went, including the parts where you questioned your career choices, Googled the same error five times, and finally cracked it with a solution so obvious you wanted to throw your laptop out the window.

    So when I decided to create a “Recommended for You” feature for classic car enthusiasts — personalized recommendations powered by vector embeddings and semantic search — I knew I had to document everything: the clever architecture decisions, the AWS configuration nightmares, the 2 AM breakthroughs, and that sweet, sweet dopamine hit when the first real recommendation actually made sense.

    **Disclaimer:** This is a personal, independent side project and technical demonstration. It is not affiliated with, endorsed by, or built for any specific company or real-world marketplace. All examples, data, scenarios, and car descriptions are hypothetical and used for educational purposes only. No actual muscle cars were harmed in the making of this tutorial.

    This post walks through the architecture, the AWS OpenSearch domain creation (featuring **all** the validation errors I encountered — there were… several), k-NN indexing, embedding generation, and basic integration. Basically, all the practical details that Medium tutorials conveniently skip over because the author “definitely didn’t spend four hours troubleshooting that part, nope, worked first try.”

    If you’re working on recommendation systems, vector search, or just trying to survive AWS’s passive-aggressive validation messages, I hope this saves you some time (or at least makes you feel less alone when things inevitably break).

    Let’s get into it. 🚗💨

    ## The Goal: Semantic Recommendations for Classic Cars

    Picture this: a marketplace packed with vintage beauties, roaring muscle cars, and pristine collector’s items. Users browse and buy based on *very* specific tastes — someone who just dropped $50K on a 1969 Ford Mustang Boss 429 isn’t randomly shopping for minivans next. They’re probably eyeing that cherry 1970 Dodge Charger R/T.

    Here’s the thing: these are infrequent, high-value purchases. Traditional collaborative filtering (“users who bought this also bought…”) completely face-plants with sparse data. You can’t build a Netflix-style recommendation engine when most users buy maybe one or two cars total.

    **Enter vector databases.**

    The concept is elegant: represent each car and each user’s preferences as high-dimensional embeddings (basically, coordinates in semantic space), then use k-Nearest Neighbors (k-NN) to find the most similar items. It’s like playing matchmaker, but with math.

    The system blends:
    – **Content-based signals** (car attributes: make, model, year, price, category, those slightly purple prose descriptions like “chrome gleaming in the sunset”)
    – **Behavioral signals** (past purchases, bids, views, the cars they favorited at 2 AM after watching *Bullitt*)

    ## Architecture Overview

    Here’s what I cobbled together:

    – **Embeddings:** Generated using Sentence Transformers (specifically `all-MiniLM-L6-v2` for text descriptions) with plans to add CLIP for image support because why not make things harder
    – **Vector Store:** AWS OpenSearch Service with k-NN plugin (managed service = someone else deals with cluster health at 3 AM)
    – **Backend:** AWS Lambda to generate user embeddings on-the-fly and query OpenSearch
    – **Data Sources:** Hypothetical listings with classic car attributes; user behavior stored in DynamoDB
    – **Freshness:** EventBridge triggers to index new listings as they appear (because recommendation staleness is so 2015)

    Simple enough, right? *Narrator: It was not simple enough.*

    ## Step 1: Creating the OpenSearch Domain (AKA: Welcome to Hell)

    AWS OpenSearch is actually pretty great — managed service, excellent k-NN support, integrates nicely with the rest of AWS. Creating the domain? Less great. More like negotiating with a very pedantic robot that rejects your application for reasons it will only vaguely hint at.

    ### Domain Configuration

    – **Name:** `hemmings-cars` (name your domain after your dreams)
    – **Version:** OpenSearch 2.17 (supports disk-based k-NN for better cost efficiency)
    – **Instance Type:** `r6g.large.search` (memory-optimized for vector workloads — vectors are hungry little beasts)
    – **Storage:** 100 GiB gp3 EBS per node
    – **Security:** Fine-grained access control enabled, with an IAM role ARN set as master user
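
    For reference, here is roughly the same configuration expressed with boto3. This is a sketch of what I clicked through in the console, not a command I actually ran; fine-grained access control also requires encryption and HTTPS to be enabled, and the role ARN below is a placeholder.

    ```python
    import boto3

    opensearch = boto3.client("opensearch", region_name="us-east-1")

    opensearch.create_domain(
        DomainName="hemmings-cars",
        EngineVersion="OpenSearch_2.17",
        ClusterConfig={"InstanceType": "r6g.large.search", "InstanceCount": 2},
        EBSOptions={"EBSEnabled": True, "VolumeType": "gp3", "VolumeSize": 100},
        # Fine-grained access control only works with encryption and HTTPS enforced.
        EncryptionAtRestOptions={"Enabled": True},
        NodeToNodeEncryptionOptions={"Enabled": True},
        DomainEndpointOptions={"EnforceHTTPS": True},
        AdvancedSecurityOptions={
            "Enabled": True,
            "InternalUserDatabaseEnabled": False,
            "MasterUserOptions": {
                "MasterUserARN": "arn:aws:iam::123456789012:role/opensearch-admin"  # placeholder
            },
        },
    )
    ```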

    I clicked “Create.” I felt optimistic. I was a fool.

    ### The Validation Errors That Almost Broke Me

    AWS domain creation validates *everything* before provisioning. Here are the two errors that consumed my entire Saturday:

    #### Error #1: “General configuration validation failure”

    Super helpful, AWS. Thanks for the specificity.

    **Usual culprits:**
    – IAM permissions issues (does your role actually have `AmazonOpenSearchServiceFullAccess`?)
    – Instance type availability in your region (apparently some regions just… don’t have certain instance types? Cool system.)

    **Fix:** Double-checked the role permissions, switched regions, sacrificed a rubber duck to the AWS gods, and retried.

    #### Error #2: The IPv6 CIDR Nightmare

    Oh, this one. This *beautiful* error message:

    ```
    IPv6CIDRBlockNotFoundForSubnet
    ```

    Translation: “Your VPC has an IPv6 CIDR block assigned at the VPC level, but the specific subnets you selected don’t have their own IPv6 CIDR blocks, and I am fundamentally incapable of dealing with this philosophical inconsistency.”

    **Solution:** Assigned /64 IPv6 CIDR blocks to each subnet via the VPC console:

    ```bash
    aws ec2 associate-subnet-cidr-block \
      --subnet-id subnet-xxxxx \
      --ipv6-cidr-block "2600:1f13:xxxx:xxxx::/64"
    ```

    **Alternative** (if you don’t actually need IPv6 and enabled it by accident like me): Switch to IPv4-only subnets and remove the VPC’s IPv6 CIDR. Sometimes the best solution is admitting you didn’t need the fancy feature in the first place.

    ### When Things Get Stuck

    At one point, my domain got stuck in “Processing” state for 45 minutes. Nothing moving. Just… processing. Processing what? Processing *feelings?*

    Canceled the change:

    ```bash
    aws opensearch cancel-domain-config-change \
      --domain-name hemmings-cars \
      --region us-east-1
    ```

    **Pro tip that could’ve saved me hours:** Use dry runs to catch issues early:

    ```bash
    aws opensearch start-domain-dry-run \
      --domain-name hemmings-cars \
      --region us-east-1 \
      --cli-input-json file://domain-config.json
    ```

    This feature is criminally underused. Be smarter than Past Me.

    ## Step 2: Setting Up the k-NN Index

    Once the domain finally achieved sentience (or at least “Active” status), I created the index with k-NN enabled:

    ```bash
    curl -XPUT "https://hemmings-cars.us-east-1.opensearch.amazonaws.com/hemmings-cars" \
      -H 'Content-Type: application/json' -d '{
        "settings": {
          "index": {
            "knn": true,
            "knn.algo_param.ef_search": 100
          }
        },
        "mappings": {
          "properties": {
            "car_id": { "type": "keyword" },
            "embedding": { "type": "knn_vector", "dimension": 384 },
            "make": { "type": "keyword" },
            "model": { "type": "keyword" },
            "year": { "type": "integer" },
            "price": { "type": "float" },
            "category": { "type": "keyword" },
            "description": { "type": "text" }
          }
        }
      }'
    ```

    The `dimension: 384` matches the output of `all-MiniLM-L6-v2`. If you use a different model and get dimension mismatches, you’ll receive an error message that will make you feel like you’ve disappointed your computer personally.
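
    If you’re not sure what dimension your model produces, sentence-transformers will tell you directly, which beats guessing:

    ```python
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")
    print(model.get_sentence_embedding_dimension())  # 384, which must match the mapping above
    ```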

    ## Step 3: Indexing Items with Embeddings

    For each hypothetical listing, I generated embeddings and indexed them:

    ```python
    from opensearchpy import OpenSearch, AWSV4SignerAuth
    from sentence_transformers import SentenceTransformer
    import boto3

    # AWS auth dance
    credentials = boto3.Session().get_credentials()
    auth = AWSV4SignerAuth(credentials, 'us-east-1', 'es')
    client = OpenSearch(
        hosts=[{'host': 'hemmings-cars.us-east-1.opensearch.amazonaws.com', 'port': 443}],
        http_auth=auth,
        use_ssl=True,
        verify_certs=True
    )

    # Load the model (this takes a minute the first time)
    model = SentenceTransformer('all-MiniLM-L6-v2')

    # Example car listing
    car = {
        "car_id": "car_001",
        "description": "1969 Ford Mustang Boss 429, V8, Candy Apple Red, excellent condition, numbers matching",
        "make": "Ford",
        "model": "Mustang",
        "year": 1969,
        "price": 45000,
        "category": "muscle car"
    }

    # Generate embedding from description
    embedding = model.encode(car["description"]).tolist()

    # Index it
    document = {**car, "embedding": embedding}
    client.index(index="hemmings-cars", id=car["car_id"], body=document)
    ```

    For **user embeddings**, I averaged the embeddings of their past interactions (purchases, views, items they lingered on for suspiciously long). It’s not perfect, but it works surprisingly well.
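
    Here’s the rough shape of that averaging, under the simplifying assumption that each interaction is represented by the text of the listing it touched (weighting by interaction type or recency is left out):

    ```python
    import numpy as np

    def build_user_embedding(interaction_texts, model):
        """Average the embeddings of everything the user has interacted with."""
        vectors = model.encode(interaction_texts)   # shape: (n_interactions, 384)
        return np.mean(vectors, axis=0).tolist()    # one 384-dim preference vector

    # `model` is the SentenceTransformer loaded in Step 3
    user_embedding = build_user_embedding(
        [
            "1969 Ford Mustang Boss 429, V8, Candy Apple Red",        # purchased
            "1970 Dodge Charger R/T, 440 Magnum, numbers matching",   # favorited at 2 AM
        ],
        model,
    )
    ```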

    ## Step 4: Querying for Recommendations

    Simple k-NN query to find similar cars:

    ```python
    import numpy as np

    # Generate user embedding (simplified version)
    user_embedding = np.mean([
        model.encode("purchased 1969 Ford Mustang Boss 429")
    ], axis=0).tolist()

    # Query for 5 nearest neighbors
    query = {
        "size": 5,
        "query": {
            "knn": {
                "embedding": {
                    "vector": user_embedding,
                    "k": 5
                }
            }
        }
    }

    response = client.search(index="hemmings-cars", body=query)

    for hit in response['hits']['hits']:
        print(f"{hit['_source']['year']} {hit['_source']['make']} {hit['_source']['model']} – ${hit['_source']['price']:,}")
    ```

    The first time this returned sensible results, I literally said “oh DAMN” out loud to my empty apartment.

    You can add filters for price range, category, year, etc. as needed. Want muscle cars under $50K from the ’70s? Easy.
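
    For example, one way to narrow the neighbors to ’70s muscle cars under $50K is to attach a filter to the k-NN clause. Recent OpenSearch versions support this; whether it runs as an efficient pre-filter or a post-filter depends on the k-NN engine you configured.

    ```python
    query = {
        "size": 5,
        "query": {
            "knn": {
                "embedding": {
                    "vector": user_embedding,
                    "k": 5,
                    "filter": {
                        "bool": {
                            "must": [
                                {"term": {"category": "muscle car"}},
                                {"range": {"price": {"lte": 50000}}},
                                {"range": {"year": {"gte": 1970, "lte": 1979}}},
                            ]
                        }
                    },
                }
            }
        },
    }

    response = client.search(index="hemmings-cars", body=query)
    ```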

    ## Step 5: Integration & Scaling Thoughts

    For production (you know, if this weren’t just me messing around):

    – **AWS Lambda handler** to fetch user behavior from DynamoDB, generate embedding on-the-fly, query OpenSearch, return results (a rough sketch follows after this list)
    – **EventBridge** for real-time indexing of new items (trigger Lambda on new listing creation)
    – **CloudWatch monitoring** for `KNNGraphMemoryUsage` and cost control (k-NN can get *expensive* at scale)
    – **Caching layer** (ElastiCache?) for frequently requested recommendations
    – **A/B testing framework** to measure actual click-through and conversion rates
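
    A rough sketch of what that Lambda handler could look like. The table name, event shape, and embedding strategy are all assumptions; the OpenSearch and model setup mirrors Step 3 and lives at module scope so warm invocations reuse it.

    ```python
    import json

    import boto3
    import numpy as np
    from boto3.dynamodb.conditions import Key
    from opensearchpy import OpenSearch, AWSV4SignerAuth
    from sentence_transformers import SentenceTransformer

    # Heavy objects are created once per cold start and reused across invocations.
    credentials = boto3.Session().get_credentials()
    auth = AWSV4SignerAuth(credentials, "us-east-1", "es")
    client = OpenSearch(
        hosts=[{"host": "hemmings-cars.us-east-1.opensearch.amazonaws.com", "port": 443}],
        http_auth=auth,
        use_ssl=True,
        verify_certs=True,
    )
    model = SentenceTransformer("all-MiniLM-L6-v2")
    interactions = boto3.resource("dynamodb").Table("user-interactions")  # hypothetical table name

    def handler(event, context):
        user_id = event["user_id"]  # assumed event shape

        # 1. Pull the user's recent behavior from DynamoDB.
        items = interactions.query(
            KeyConditionExpression=Key("user_id").eq(user_id)
        )["Items"]

        # 2. Average the embeddings of the listings they touched into one preference vector.
        texts = [item["description"] for item in items]
        user_embedding = np.mean(model.encode(texts), axis=0).tolist()

        # 3. Ask OpenSearch for the nearest listings.
        results = client.search(
            index="hemmings-cars",
            body={"size": 5, "query": {"knn": {"embedding": {"vector": user_embedding, "k": 5}}}},
        )

        # 4. Return a slim payload to the caller.
        recs = [hit["_source"]["car_id"] for hit in results["hits"]["hits"]]
        return {"statusCode": 200, "body": json.dumps({"recommendations": recs})}
    ```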

    ## What I Actually Learned (Besides Regex for AWS Error Messages)

    The biggest surprises weren’t the ML parts — those were honestly pretty straightforward. It was the AWS domain validation quirks (especially that IPv6 CIDR issue) and how much small configuration details can completely derail your progress.

    I also learned that documentation lies. Not maliciously — it’s just that tutorials show you the happy path, and real life is *never* the happy path. Real life is three wrong turns, a weird error message, and a Stack Overflow post from 2019 that almost applies to your problem but not quite.

    Building and documenting this reminded me how valuable it is to capture the real process — errors, retries, small wins, moments of confusion, and all. This is the tutorial I wish I’d found on my first attempt.

    ## Next Steps

    If I keep going with this (and I probably will, because I’m apparently incapable of leaving projects alone):

    – **Image embeddings** with CLIP (because descriptions like “beautiful patina” are subjective, but pictures don’t lie)
    – **Hybrid search** combining semantic similarity with traditional filters
    – **A/B testing framework** to see if this actually performs better than random recommendations (spoiler: it does, but *how much better* matters)
    – **Production deployment** with proper monitoring, error handling, and all those boring adult engineering things

    ## Final Thoughts

    If you’ve tackled similar projects or hit the same AWS gotchas, I’d love to hear about it in the comments. Specifically, I want to know:

    – Did you also lose hours to IPv6 CIDR blocks?
    – What’s your favorite embedding model for niche domains?
    – Have you found a way to make AWS error messages less cryptic, or is that just our collective burden to bear?

    Thanks for reading, and may your validation errors be few and your k-NN queries be fast. 🏎️

     

    Coming soon – how did I get the DATA for this endeavor?

    *P.S. — If you actually work on a classic car marketplace and need a recommendation system, my DMs are open. Just saying.*