RecceLabs LLM-powered Marketing Dashboard

A containerized microservices platform enabling non-technical stakeholders to perform complex marketing analytics via natural language and high-accuracy time-series forecasting.

Full Stack · Next.js · TypeScript · Flask · Python · MongoDB · Docker · Nginx · LLM · Prophet · Optuna · DeepSeek · TailwindCSS · Microservices

Problem

Marketing managers relied on static CSVs and aggregated monthly data, making granular return-on-ad-spend (ROAS) calculation impossible. Traditional forecasting models such as ARIMA and XGBoost yielded high error rates (MAPE > 40%), failing to provide the predictive depth needed for proactive budget allocation.
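For reference, MAPE (mean absolute percentage error) — the accuracy metric quoted throughout — is the average of the absolute forecast errors relative to the actual values. A minimal sketch:

```python
import numpy as np

def mape(actual, predicted) -> float:
    """Mean Absolute Percentage Error, in percent:
    mean of |actual - predicted| / |actual| * 100."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs((actual - predicted) / actual)) * 100)

# Forecasts that are each off by 40% of the actual value -> MAPE ≈ 40
score = mape([100, 200, 400], [140, 120, 560])
```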

Solution

Developed a containerized architecture utilizing Meta's Prophet for time-series forecasting and a multi-LLM pipeline (Llama 3, Llama 4 Maverick, and DeepSeek-R1) to automate report generation. The system features automated hyperparameter tuning via Optuna and an Nginx-routed microservices backend to handle concurrent analytical workloads.

Achievements

  • Achieved ≤ 15% MAPE for revenue forecasting, significantly outperforming ARIMA (42.3%) and XGBoost (49.8%)
  • 91.7% success rate in query classification across description, report, and chart tasks
  • Sub-second classification latency (0.71s) for real-time user query processing
  • Optimized 25 engineered features down to 7 key predictors using VIF to eliminate multicollinearity and improve model stability
RecceLab Dashboard Main Interface

Executive Context: The Evolution of Marketing Intelligence

The contemporary marketing ecosystem generates an unprecedented volume of data across fragmented channels, creating a significant analytical burden. We are shifting from manual, CSV-reliant reporting—often plagued by high analytical latency and static BI tools—to automated, AI-driven prescriptive analytics.

RecceLabs transforms fragmented data into a high-concurrency "Decision-Support Agent." However, AI-driven prescriptive analytics are mathematically fragile. The sophisticated models used to generate strategic narratives are highly susceptible to "Garbage-In, Garbage-Out" (GIGO) dynamics. Consequently, an algorithmic audit layer is an absolute prerequisite for ensuring the LLM pipeline operates on a foundation of statistical truth.

Algorithmic Foundations of the Automated Data Auditor

Without rigorous preprocessing, anomalies can cause catastrophic hallucinations in predictive models, particularly decomposition-based Bayesian models like Meta's Prophet. For instance, if the auditor fails to catch a 20x lead surge, the spike distorts the additive decomposition, forces spurious changepoints, and inflates the uncertainty intervals.
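One way such an audit could work — assuming a robust rolling-median/MAD rule, since the write-up does not specify the detector — is to flag any point that deviates from its local median by more than a few robust standard deviations:

```python
import numpy as np
import pandas as pd

def flag_spikes(series: pd.Series, window: int = 14, z: float = 5.0) -> pd.Series:
    """Flag points whose deviation from the rolling median exceeds z robust sigmas.
    Uses MAD (median absolute deviation), which a single spike cannot corrupt."""
    med = series.rolling(window, center=True, min_periods=1).median()
    mad = (series - med).abs().rolling(window, center=True, min_periods=1).median()
    robust_sigma = 1.4826 * mad.replace(0, np.nan)  # MAD -> sigma for normal data
    return ((series - med).abs() / robust_sigma) > z

# Example: a 20x lead surge on one day of an otherwise stable series
rng = np.random.default_rng(0)
leads = pd.Series(
    100 + rng.normal(0, 5, 60),
    index=pd.date_range("2024-01-01", periods=60),
)
leads.iloc[30] = 2000.0  # the 20x spike the auditor must catch
mask = flag_spikes(leads)
```

Flagged points can then be winsorized or excluded before the series reaches Prophet, so the spike never reaches the additive decomposition.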

Architectural Justification: Microservices & NoSQL Superiority

The engineering stack (Next.js, Flask, MongoDB, Alibaba Cloud) balances high-concurrency demands with sub-second analytical latency.
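The Nginx routing layer described above could be expressed along these lines — service names, ports, and paths are illustrative, not the production configuration:

```nginx
# Hypothetical sketch: route each analytical workload to its own Flask service
upstream forecast_api { server forecast:5000; }  # Prophet/Optuna service
upstream llm_api      { server llm:5001; }       # multi-LLM report pipeline

server {
    listen 80;

    location /api/forecast/ { proxy_pass http://forecast_api; }
    location /api/llm/      { proxy_pass http://llm_api; }

    # Everything else falls through to the Next.js frontend
    location / { proxy_pass http://frontend:3000; }
}
```

Splitting forecasting and LLM traffic into separate upstreams lets the slower, bursty LLM workloads scale independently of the latency-sensitive analytics endpoints.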

RecceLab Login Interface
User Management Interface

The 4-Stage Modular LLM Orchestration Pipeline

To prevent "Prompt Bloat"—which increases costs and latency—RecceLabs utilizes a modular four-stage pipeline:
LLM-powered Report Builder
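The write-up does not name the four stages, so the following is one plausible decomposition — classify, fetch context, generate, format — with a keyword router standing in for the fast classifier LLM. The point of the modularity is that each stage carries only the prompt material it needs, which is what keeps costs and latency down:

```python
def classify(query: str) -> str:
    """Stage 1: map the query to a task type (description / report / chart)
    with a small, cheap prompt. Keyword routing stands in for the LLM call."""
    q = query.lower()
    if "chart" in q or "plot" in q:
        return "chart"
    if "report" in q:
        return "report"
    return "description"

def fetch_context(task: str, query: str) -> dict:
    """Stage 2: pull only the metrics this task needs, keeping prompts small."""
    return {"task": task, "metrics": ["revenue", "roas"], "query": query}

def generate(ctx: dict) -> str:
    """Stage 3: task-specific prompt template -> LLM completion (stubbed here)."""
    return f"[{ctx['task']}] answer using {', '.join(ctx['metrics'])}"

def format_response(task: str, draft: str) -> dict:
    """Stage 4: shape the output (prose vs. chart spec) for the frontend."""
    return {"type": task, "body": draft}

def run_pipeline(query: str) -> dict:
    task = classify(query)
    return format_response(task, generate(fetch_context(task, query)))
```

Because each stage is independent, the classifier can run on a fast model while generation uses a heavier one — consistent with the sub-second (0.71s) classification latency reported above.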

Marketing Science: Non-Linear Modeling & Predictive Excellence

Modern marketing requires moving beyond linear assumptions to capture the non-linear realities of consumer behavior. Primary forecasting is handled by Meta's Prophet, optimized via Optuna to maintain a target MAPE ≤ 15%.

y(t) = g(t) + s(t) + h(t) + ε_t

where g(t) is the piecewise trend, s(t) the periodic seasonality, h(t) the holiday effects, and ε_t the residual error.

Revenue and Ad Spend Analysis
Channel Contribution Heatmap

Business Value & Strategic Roadmap

This technical efficiency transforms the dashboard into a true conversational decision-support agent, enabling Morning "Pulse" Checks, Proactive Alerting, and Self-Service Exploration.