Rocket Rebates
Search local rebate programs, generate detailed reports with AI, and assemble admin-reviewed submission packages — end to end.
- Role: Solo Engineer
- When: 2026
- Status: No longer live
- Node.js
- NestJS
- PostgreSQL
- AWS RDS
- AWS S3
- OpenAI
- ETL
- Full-text search
Overview
Rocket Rebates was a platform for finding and applying to local rebate programs — search rebates in your area, create a project for the ones you wanted to pursue, generate a detailed report on each program with AI, and assemble an admin-reviewed submission package you could file. OpenAI sat in the middle of the workflow: identifying the documents a program required, drafting reports, and shaping the submission output.
The product is no longer live. I built it solo at Bizdesire — the data layer, the ETL, the APIs, the AI workflows, the document handling, the admin tooling, and the front-end that sat on top of them.
How the product worked
- Search — anyone could search local rebate programs without an account.
- Account + project — when a user found programs they wanted to pursue, they signed up and created a project to track them.
- Deep search — inside a project, search got more detailed: program-specific filters, eligibility metadata, and ranked results.
- AI report — for any program in the project, the platform generated a detailed report with OpenAI — what the program is, what it requires, what it pays out.
- Submission package — OpenAI also identified the documents the program required and helped draft the submission report itself. Generated and uploaded documents landed in S3.
- Admin review — an admin reviewed the package in the dashboard. If everything looked right, they approved it.
- Submit — once approved, the user could file the submission externally.
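The document-detection step above was job-shaped: feed the model a program record, get back a structured checklist, and validate it before anything reaches the submission package. A minimal sketch of that validation boundary — the type names and `parseChecklist` helper are illustrative, not the production API:

```typescript
// Illustrative types; the real schema lived in the NestJS service.
interface DocumentChecklist {
  programId: string;
  documents: { name: string; required: boolean }[];
}

// The model is asked for strict JSON; the raw response is parsed and
// validated here before the submission-package assembler ever sees it.
function parseChecklist(programId: string, raw: string): DocumentChecklist {
  const parsed = JSON.parse(raw);
  if (!Array.isArray(parsed.documents)) {
    throw new Error("checklist response missing a documents array");
  }
  const documents = parsed.documents.map((d: { name: unknown; required: unknown }) => ({
    name: String(d.name),
    required: Boolean(d.required),
  }));
  return { programId, documents };
}
```

Keeping a hard parse/validate boundary like this is what makes an LLM call safe to treat as a pipeline step rather than free-form output.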
What I built
- PostgreSQL schema on AWS RDS. Designed the relational schema for the rebate catalog — programs, eligibility rules, geographies, payouts — plus the application-side tables (users, projects, project–program links, generated reports, document references, review status).
- ETL pipelines for third-party feeds. Ingested rebate programs from outside data sources, normalised inconsistent fields, and kept the catalog fresh without double-writing on partial re-runs.
- NestJS REST APIs. Modular service exposing the search, project, report, document, and review endpoints. Query plans tuned for the access patterns the front-end actually used — pagination and filters designed around the indexes, not bolted on after.
- Full-text search layer. Powered both the public discovery flow and the deeper in-project search, with structured filters layered on top.
- OpenAI integration for reports + document detection. Two distinct AI workflows: (1) generate the detailed program report from structured program data, and (2) infer the document checklist a given program required, so the submission package could be assembled with the right pieces.
- S3 document storage. Generated reports and user-uploaded supporting documents stored in S3 with signed URLs for in-app viewing. Admins could review the full package without files leaving the bucket.
- Admin review workflow. Status machine with explicit gates — draft → ready for review → approved → submittable — so users couldn't file something an admin hadn't signed off on.
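The review status machine can be sketched as an explicit transition map — the states come straight from the description (draft → ready for review → approved → submittable), but the names and transition table here are illustrative, not the production code:

```typescript
type Status = "draft" | "ready_for_review" | "approved" | "submittable";

// Every legal move is listed; anything else throws. The admin gate is the
// only path from ready_for_review to approved.
const transitions: Record<Status, Status[]> = {
  draft: ["ready_for_review"],
  ready_for_review: ["approved", "draft"], // admin approves or sends back
  approved: ["submittable"],
  submittable: [],
};

function advance(current: Status, next: Status): Status {
  if (!transitions[current].includes(next)) {
    throw new Error(`illegal transition: ${current} -> ${next}`);
  }
  return next;
}
```

Encoding the gates as data rather than scattered `if` checks is what lets the server reject a premature submission no matter what the UI does.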
Key decisions
- Hybrid search: relational + full-text. Filters that cared about structure (geography, payout range, eligibility) ran against indexed columns; keyword discovery ran through the full-text layer. Neither storage paid for the other's query patterns.
- AI as a workflow step, not a chat surface. OpenAI calls were jobs: take this program record, produce that report; take this program, produce that document checklist. Outputs were structured and reviewed, not free-form prose dropped into the UI.
- S3 with signed URLs over a self-hosted file store. Free durability and CDN delivery; admin review reads documents directly from the bucket via short-lived signed links.
- Idempotent ingestion. Content-hashing inbound rebate records so partial pipeline reruns didn't duplicate programs or silently drop updates.
- Hard approval gate before submission. A submission package couldn't go out without admin approval — enforced in the status machine itself, not just at the UI layer.
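The idempotent-ingestion decision hinges on a stable content hash over each normalised inbound record: a partial re-run that produces the same record yields the same hash, so an upsert keyed on it is a no-op. A minimal sketch, assuming a hypothetical `InboundProgram` shape:

```typescript
import { createHash } from "crypto";

// Illustrative inbound shape; the real feed had many more fields.
interface InboundProgram {
  externalId: string;
  name: string;
  payoutCents: number;
  region: string;
}

function contentHash(p: InboundProgram): string {
  // Passing the sorted key list as JSON.stringify's replacer array fixes
  // the property order, so field ordering in the feed never changes the hash.
  const canonical = JSON.stringify(p, Object.keys(p).sort());
  return createHash("sha256").update(canonical).digest("hex");
}
```

With the hash stored alongside each catalog row, the pipeline can skip unchanged records and detect real updates without double-writing.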
Role
Solo engineer on the project — owned the data model, the ETL, the search and project APIs, the OpenAI workflows, the submission/review pipeline, and the admin and end-user front-ends that worked against them.