Case Studies Productivity

Building a Custom AI LLM for an Affiliate Marketing Company

Discover how GetDevDone fine-tuned Llama 3 for an established affiliate marketing agency.


One of our long-term clients, an established affiliate marketing agency, sought to bring advanced custom AI LLM capabilities in-house and approached us with a potential POC idea. Our team at GetDevDone immediately sprang into action.

The goal was to streamline internal workflows, automate content generation, and gain full control over sensitive data. To date, we've built a solid foundation that covers most of the client's needs, and we continue to improve the project.

  • Period of cooperation: October 2024 – ongoing
  • Expertise: Marketing, Advertising, Digital Marketing 
  • Headquarters: Boston, Massachusetts, United States

Project highlights

  • Custom LLM Hosting: Fine-tuned Llama 3.1 8B on AWS SageMaker for internal use by AP clients.
  • RAG-Based Insight Generation: Integrated external data with Retrieval-Augmented Generation for smarter output.
  • Workflow Automation: Leveraged AWS Lambda and SQS for auto-triggered training and deployment.
  • Non-technical UX Focus: Implemented OpenWebUI for a user-friendly interaction layer with mini GPTs.

Business challenge

Our client found that public AI tools fell short due to data privacy risks and limited customization. Without automation, repetitive tasks like legal review, data summarization, and PowerPoint generation slowed the agency’s internal processes.

They needed a custom solution to regain full control over sensitive data and customizations, along with easy onboarding for their non-technical staff, who required a simple way to access AI capabilities.

Technology solution

The client approached GetDevDone with a potential POC project focusing on hosting their own AI Large Language Model (LLM). Their requirements included:

  • Hosting an AI LLM internally to maintain control over data and customization.
  • Leveraging external data to enhance the model’s insights using Retrieval-Augmented Generation (RAG) techniques.
  • Keeping the interface user-friendly, considering that most of their team members aren’t technical.

Our Approach

1. LLM Hosting & Fine-Tuning on AWS

We deployed and fine-tuned Meta’s Llama 3.1 8B model using AWS SageMaker to allow AP to host their own large language model. Each client instance has its own tailored model for personalized context windows and interaction clarity.

The fine-tuning process was automated via:

  • SQS-triggered Lambda functions to pull datasets from S3 and initiate SageMaker training.
  • CloudWatch monitoring to detect job completion and trigger model deployment to AWS Bedrock.
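The automation described above can be sketched as a small Lambda handler. All identifiers here (bucket paths, role ARN, instance type, message fields) are illustrative assumptions, not the client's actual configuration; the point is the pattern of translating an SQS message into a SageMaker training job request.

```python
# Sketch of an SQS-triggered Lambda that starts a SageMaker fine-tuning job.
# All names (message fields, instance type, S3 URIs) are illustrative
# placeholders, not the client's real setup.
import json


def build_training_job(message: dict) -> dict:
    """Translate an SQS message body into a SageMaker CreateTrainingJob request."""
    return {
        "TrainingJobName": f"llama31-ft-{message['client_id']}-{message['run_id']}",
        "AlgorithmSpecification": {
            "TrainingImage": message["training_image"],  # e.g. a fine-tuning container URI
            "TrainingInputMode": "File",
        },
        "RoleArn": message["role_arn"],
        "InputDataConfig": [{
            "ChannelName": "train",
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": message["dataset_s3_uri"],  # dataset previously uploaded to S3
            }},
        }],
        "OutputDataConfig": {"S3OutputPath": message["output_s3_uri"]},
        "ResourceConfig": {
            "InstanceType": "ml.g5.2xlarge",  # illustrative GPU instance
            "InstanceCount": 1,
            "VolumeSizeInGB": 100,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 6 * 3600},
    }


def handler(event, context):
    """Lambda entry point: one SQS record per requested fine-tuning run."""
    # boto3 is imported lazily so the config builder above can be unit-tested
    # without AWS credentials.
    import boto3

    sagemaker = boto3.client("sagemaker")
    for record in event["Records"]:
        job = build_training_job(json.loads(record["body"]))
        sagemaker.create_training_job(**job)
```

Keeping the request-building logic in a pure function separate from the AWS calls makes the mapping easy to test in isolation.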

2. RAG Integration & Prompt Engineering

To deliver precise and context-rich outputs, we implemented Retrieval-Augmented Generation. This allowed the model to pull data from external sources and adapt responses accordingly—ideal for contract analysis, financial summaries, and content automation.
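A minimal sketch of the RAG pattern described here: retrieve the most relevant snippets, then prepend them to the prompt. The keyword-overlap scoring below is a deliberate simplification for illustration; a production setup like this one would typically use vector embeddings and a vector store instead.

```python
# Illustrative RAG flow: retrieve relevant snippets, then build an augmented
# prompt. Scoring by naive word overlap stands in for embedding similarity.

def retrieve(query: str, documents: list, k: int = 2) -> list:
    """Rank documents by word overlap with the query (illustrative only)."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(query: str, documents: list) -> str:
    """Assemble the augmented prompt: retrieved context plus the question."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, documents))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )
```

The model then answers from the supplied context rather than from its training data alone, which is what makes this approach suitable for contract analysis and financial summaries.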

3. OpenWebUI for Mini GPTs and Workflow Automation

We installed OpenWebUI, making it easy for non-technical users to engage with the AI.

Key features include:

  • Pre-built mini GPTs for specific tasks (e.g., financial insight extraction, legal flagging).
  • Pipelines that trigger custom scripts or automate AI interactions within larger workflows.
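As an example of the second bullet, a pipeline can pre-process prompts before they reach the model. The sketch below follows the general shape of the OpenWebUI Pipelines plugin interface (a `Pipeline` class exposing a `pipe()` method); the exact interface may vary by version, and the watch-list and flagging behavior are invented for illustration, not the client's actual rules.

```python
# Sketch of an OpenWebUI-style pipeline that flags legal-risk phrases in a
# user message before it is sent to the model. The phrase list and flagging
# format are hypothetical.


class Pipeline:
    def __init__(self):
        self.name = "Legal Flagging (demo)"
        # Hypothetical watch-list; a real deployment would load this from config.
        self.flagged_phrases = ["indemnification", "liquidated damages"]

    def pipe(self, user_message: str, model_id: str,
             messages: list, body: dict) -> str:
        """Annotate the outgoing message with any flagged phrases found."""
        hits = [phrase for phrase in self.flagged_phrases
                if phrase in user_message.lower()]
        if hits:
            return f"[legal-review flag: {', '.join(hits)}]\n{user_message}"
        return user_message
```

Because pipelines run server-side, non-technical users get this behavior automatically through the chat interface without any extra steps.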

Tech Stack

Meta Llama 3.1, AWS SageMaker, AWS Bedrock, AWS SQS, Lambda, CloudWatch, OpenWebUI with Pipelines, RAG, custom prompt templates, mini GPTs.

Business outcome

While the project is still evolving, our client has already seen substantial benefits:

  • Data Security & Ownership: A self-hosted AI setup ensures complete control over sensitive client data and model behavior.
  • Time Savings Across Teams: Teams now generate custom emails, analyze reports, and summarize legal documents in minutes—not hours.
  • UX for Everyone: OpenWebUI empowers non-technical users to interact with advanced AI in a simple and intuitive way.
  • Scalable AI Foundation: Each client-specific model is ready to scale as new workflows are identified, paving the way for even deeper automation and personalization.

What’s Next?

Having built a solid foundation that covers the majority of the client’s needs, we continue to develop and refine the system and roll out more advanced features to maximize the business value of the implemented solution.