Manav Pandey

AI Engineer · Arizona · U.S. Citizen

I build production AI systems — from agentic frameworks serving thousands of developers to fine-tuned open-weight models running in production. I’m drawn to the boundary where engineering meets research, especially energy-based approaches to reasoning.

Currently a Senior AI Research Engineer at American Express and an M.S. student at Georgia Tech. My side research explores whether energy-based models can offer a principled alternative to autoregressive reasoning — Enso is my first proof of concept.

If I had to describe myself

01. Python Engineer.

Production Python is my foundation — every system I’ve shipped started here, from conversational AI to agentic frameworks to internal developer tools.

02. ML Engineer.

I’ve fine-tuned, quantized, and deployed open-weight models in production — the kind of work where inference latency and model quality both matter.

At Lightsource I was the sole ML engineer: vLLM serving across multiple GPUs, dynamic LoRA adapters, DPO and PPO fine-tuning of Mistral and Mixtral, INT4-FP8 quantization for production inference. At Amex I built the internal model routing and tooling layer that sits between developers and LLMs. The craft is making models work reliably at scale, not just on a notebook.

03. Researcher.

I’m pursuing a research question: can energy-based models learn to reason through optimization rather than token prediction?

Enso is my first attempt at an answer: a 36M-parameter model that solves Sudoku via Langevin dynamics in a learned energy landscape. I reproduce results before I trust them; Enso started as a replication of Kona 1.0 and became something new. I’m now studying ML formally at Georgia Tech while continuing to build in this direction.

Experience

American Express — Enterprise Data & AI R&D

Senior AI Research Engineer

Sep 2024 – Present
  • Built and shipped American Express’s first agentic use case in production (PR Agent) using LangGraph and MCP.
  • Lead engineer on internal GenAI frameworks serving 4,500+ developers — building Amex-native equivalents of LangChain, LangGraph, OpenAI-compatible model routing, and MCP tooling.
  • Core GenAI educator at Amex, leading technical sessions distilling cutting-edge research (MLA, Muon Optimizer, Context Engineering) for 500+ engineers.
  • Python Guild AI Lead — drove best-practices architectural decision records for ML/AI and merged the official Amex Python Skill into the internal AI skills repository.
  • Filed 4 patents with 28 additional filings underway across self-supervised learning and agentic AI.
Lightsource

Machine Learning Engineer — Research

Feb 2024 – Sep 2024
  • Sole ML Engineer — deployed open-weight GenAI models into production using vLLM across multiple GPUs with dynamic LoRAs.
  • Fine-tuned Mistral 7B and Mixtral 8x7B for multilingual content generation using DPO and PPO with synthetic preference data.
  • Optimized inference with INT4-FP8 quantization; implemented spherical interpolation model merging for improved multilingual performance.
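As an illustration of the model-merging technique above, here is a minimal, self-contained sketch of spherical linear interpolation (SLERP) applied to two flattened weight vectors. The function name and toy vectors are my own for illustration, not Lightsource code; real merging applies this per-tensor across two checkpoints.

```python
import math

def slerp(w_a, w_b, t, eps=1e-8):
    """Spherically interpolate between two weight vectors at fraction t."""
    dot = sum(a * b for a, b in zip(w_a, w_b))
    norm_a = math.sqrt(sum(a * a for a in w_a))
    norm_b = math.sqrt(sum(b * b for b in w_b))
    # Angle between the two vectors, clamped for numerical safety
    cos_theta = max(-1.0, min(1.0, dot / (norm_a * norm_b + eps)))
    theta = math.acos(cos_theta)
    if theta < eps:
        # Nearly parallel vectors: fall back to linear interpolation
        return [(1 - t) * a + t * b for a, b in zip(w_a, w_b)]
    k_a = math.sin((1 - t) * theta) / math.sin(theta)
    k_b = math.sin(t * theta) / math.sin(theta)
    return [k_a * a + k_b * b for a, b in zip(w_a, w_b)]

merged = slerp([1.0, 0.0], [0.0, 1.0], t=0.5)
print([round(v, 3) for v in merged])  # [0.707, 0.707]
```

Unlike linear averaging, SLERP preserves the norm of the interpolated weights when the endpoints lie on the same sphere, which is the usual motivation for using it when merging fine-tuned checkpoints.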
Curiouser

Director of AI

Dec 2023 – Sep 2024
  • Led engineering for a conversational AI platform. Built LoRA dynamic adapter selection, chain-of-thought tool calling, and autonomous knowledge graph creation for persistent user understanding.
American Express

Software Engineer

Aug 2022 – Feb 2024
  • Built production conversational AI using BERT models achieving 90% accuracy across 1M+ monthly interactions.
  • Implemented APM monitoring across 100+ microservices using Splunk and OpenTelemetry.
Texas A&M University

ML Research Assistant

Oct 2021 – May 2022
  • Enhanced SVM accuracy from 79% to 94% through kernel optimization for sentiment classification and cross-border threat detection.

Projects

Enso

Energy-Based Model for Constraint Satisfaction

A personal research project: a 36.5M-parameter model combining JEPA-style joint embedding with energy-based inference, solving hard Sudoku through Langevin dynamics in latent space — exploring whether energy-based optimization can substitute for autoregressive generation.

  • 96.6% puzzle accuracy — exceeding Kona 1.0’s open-source benchmark of 96.2%
  • Forward pass achieves 95.6%; Langevin dynamics adds +1.0% through test-time compute scaling
  • Uses mechanistic interpretability to analyze energy-based model reasoning

Dialogue Tree Search

MCTS-Inspired Synthetic RL Data Generation

A parallel beam search system that treats conversation trajectories as a search tree, using Monte Carlo rollouts to explore diverse dialogue paths. Generates synthetic preference datasets for training tool-using agents via GRPO and PPO with Elo-based scoring.

  • MCTS-inspired parallel beam search over conversation trajectories
  • Produces preference data for RL fine-tuning of tool-using agents
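A minimal sketch of Elo-based scoring for ranking pairs of dialogue trajectories. The K-factor of 32 and initial rating of 1000 are standard Elo defaults assumed here for illustration; the project's actual scoring parameters may differ.

```python
def elo_update(rating_a, rating_b, score_a, k=32):
    """Update two ratings after a pairwise comparison.

    score_a is 1.0 if trajectory A is preferred, 0.0 if B is, 0.5 for a tie.
    """
    # Expected score of A under the logistic Elo model
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1 - score_a) - (1 - expected_a))
    return new_a, new_b

# Two candidate trajectories start at 1000; a judge prefers trajectory A.
r_a, r_b = elo_update(1000, 1000, score_a=1.0)
print(round(r_a), round(r_b))  # 1016 984
```

Running many such pairwise comparisons over rollouts from the search tree yields a ranking from which preference pairs can be extracted for DPO- or GRPO-style training.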

Education

Georgia Institute of Technology

M.S. Computer Science (Machine Learning)

2025 – Present

Texas A&M University

B.S. Computing

2018 – 2022

Awards & Recognition

American Express Inventor Award

2025

Recognized for 4 filed patents (28 additional underway) spanning agentic AI, applied ML infrastructure, and self-supervised learning.

Anthropic Bug Bounty Program

Contributed to AI safety through Anthropic’s bug bounty program, identifying vulnerabilities in foundation model behavior.

Technical Skills

ML & AI

PyTorch · Transformers · RLHF / DPO / PPO / GRPO · vLLM · LoRA / QLoRA Fine-Tuning · Model Quantization (INT4 / FP8 / AWQ / GGUF) · Distributed Training · TensorRT

Python & Backend

Python 3.10+ · Pydantic · FastAPI · async / await · Celery · MCP Servers · Docker · Kubernetes · AWS SageMaker

Agentic AI & LLM Infrastructure

LangGraph · LangChain · MCP · Multi-Agent Orchestration · Tool-Use Optimization · RAG · Knowledge Graphs

Research Interests

Energy-Based Models · JEPA · Langevin Dynamics · Self-Supervised Learning · VICReg · Representation Analysis