Jobber: An Agentic Job Search Platform

Project Summary

Jobber is a multi-stage, AI-agent-driven platform that automates the full lifecycle of a professional job search — from harvesting listings out of email alerts, through intelligent triage and application packaging, to human-in-the-loop curation via a custom web console. The system is composed of three tightly integrated subsystems, each purpose-built to eliminate manual overhead while preserving human judgment where it matters most.

I. Mail-Fetcher — Automated Listing Pipeline

The first stage solves a pervasive problem in any active job search: the sheer volume of email alerts from LinkedIn, Glassdoor, Dice, Indeed, and other platforms. Mail-Fetcher replaces traditional scraping and brittle parsing with an AI agent that reads raw email content through the msgvault MCP service, interprets the widely varying formats of job alert emails, and normalizes each listing into a consistent eight-field JSON schema covering sender, date, subject, title, company, location, salary, and description.
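The eight-field schema can be sketched as a small Go struct. The JSON key names below are assumptions for illustration; the actual schema emitted by the agent may use different keys:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Listing mirrors the eight-field schema the agent emits for each alert.
// Field and key names are illustrative, not the project's actual schema.
type Listing struct {
	Sender      string `json:"sender"`
	Date        string `json:"date"`
	Subject     string `json:"subject"`
	Title       string `json:"title"`
	Company     string `json:"company"`
	Location    string `json:"location"`
	Salary      string `json:"salary"`
	Description string `json:"description"`
}

// parseListing decodes one normalized listing produced by the agent.
func parseListing(data []byte) (Listing, error) {
	var l Listing
	err := json.Unmarshal(data, &l)
	return l, err
}

func main() {
	raw := []byte(`{"sender":"jobs-noreply@example.com","date":"2026-01-15",
"subject":"New jobs for you","title":"Senior Data Engineer",
"company":"Acme Corp","location":"Austin, TX","salary":"$150k-$180k",
"description":"Design and operate data pipelines."}`)
	l, err := parseListing(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(l.Title, "@", l.Company)
}
```

Keeping the schema flat and string-typed at this boundary lets the agent stay format-agnostic; typed normalization happens downstream in deterministic Go code.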

A Temporal schedule triggers this fetch cycle every twenty minutes, keeping the local database current without manual intervention. The agent backend is pluggable — configurable between Warp Oz cloud agents and Claude running locally via CLI — and each invocation is wrapped with a twelve-minute timeout for reliability. On the ingestion side, deterministic Go code scans the output directory, normalizes location and salary fields with regex-based parsers, computes SHA-256 hashes for deduplication, and upserts records into PostgreSQL. Every file is tracked in an ingestion log with row-level counters, and processed files are moved to an archive directory. The entire pipeline is idempotent and safe to run at any frequency.
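The deduplication and normalization steps can be sketched as below. The fields hashed into the dedupe key and the exact salary regex are assumptions in the spirit of the deterministic parsers described above, not the project's actual code:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"regexp"
	"strings"
)

// dedupeKey computes a SHA-256 content hash used to skip listings that have
// already been ingested. Which fields feed the hash is an assumption here.
func dedupeKey(sender, subject, title, company string) string {
	h := sha256.Sum256([]byte(strings.Join(
		[]string{sender, subject, title, company}, "\x1f")))
	return hex.EncodeToString(h[:])
}

// salaryRe is a rough regex-based normalizer for ranges like
// "$120,000 - $150,000" or "$120k-$150k" (illustrative only).
var salaryRe = regexp.MustCompile(`\$?([\d,]+)(?:k|K)?\s*[-–]\s*\$?([\d,]+)(?:k|K)?`)

// normalizeSalary extracts the low and high ends of a salary range,
// stripping thousands separators.
func normalizeSalary(raw string) (lo, hi string, ok bool) {
	m := salaryRe.FindStringSubmatch(raw)
	if m == nil {
		return "", "", false
	}
	return strings.ReplaceAll(m[1], ",", ""), strings.ReplaceAll(m[2], ",", ""), true
}

func main() {
	fmt.Println(dedupeKey("a@b.com", "alert", "Data Engineer", "Acme")[:12])
	lo, hi, ok := normalizeSalary("$120,000 - $150,000")
	fmt.Println(lo, hi, ok)
}
```

Because the dedupe key is a pure function of listing content, re-running ingestion over the same files produces the same keys, which is what makes the upsert-based pipeline idempotent.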

II. 2026-1Q — Agentic Application Triage

The second stage moves beyond collection into decision-making. When a job listing enters the triage inbox as a markdown file, an AI agent executes a structured five-step evaluation framework:

1. Stage the listing for analysis.
2. Parse required skills and recruiter geography, including area code lookup for commute assessment.
3. Cross-reference against stored CV material to generate a tailored one-page resume and cover letter, converted to Word via pandoc.
4. Produce a candid match rating with gap analysis.
5. Issue a decisive verdict — GO, HOLD, LOW BALL, or NO GO — with explicit references to the prior evaluation steps.
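The four-way verdict can be modeled as a small Go type. The constant values mirror the labels in the text; the report-parsing helper is a hypothetical sketch of how a downstream consumer might read the agent's free-form output:

```go
package main

import (
	"fmt"
	"strings"
)

// Verdict is the final disposition the triage agent issues for a listing.
type Verdict string

const (
	VerdictGo      Verdict = "GO"
	VerdictHold    Verdict = "HOLD"
	VerdictLowBall Verdict = "LOW BALL"
	VerdictNoGo    Verdict = "NO GO"
)

// ParseVerdict extracts a verdict from report text, checking the most
// specific labels first so that "NO GO" is never misread as "GO".
func ParseVerdict(report string) (Verdict, bool) {
	upper := strings.ToUpper(report)
	for _, v := range []Verdict{VerdictNoGo, VerdictLowBall, VerdictHold, VerdictGo} {
		if strings.Contains(upper, string(v)) {
			return v, true
		}
	}
	return "", false
}

func main() {
	v, ok := ParseVerdict("Final verdict: NO GO, required-skills gap too wide")
	fmt.Println(v, ok)
}
```

A substring scan like this is deliberately forgiving of agent phrasing; a stricter consumer would require the verdict on a dedicated report line.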

Two prompt variants — standard and thirsty — control qualification strictness. The standard template enforces tight alignment with target roles in Data Engineering, Database Design, Business Intelligence, and Data Management. The thirsty variant relaxes these constraints, allowing overqualified matches and broadening scope to include Data Analysis. Each agent run produces a timestamped status report, creating a persistent audit trail. Over Q1 2026, the pipeline triaged approximately thirty listings across all disposition categories, with over thirty documented agent decisions.
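The difference between the two variants amounts to a wider qualification gate. A minimal sketch, with the role lists paraphrased from the text and the function name invented for illustration:

```go
package main

import "fmt"

// standardRoles are the target roles the standard prompt enforces.
var standardRoles = map[string]bool{
	"Data Engineering":      true,
	"Database Design":       true,
	"Business Intelligence": true,
	"Data Management":       true,
}

// roleInScope reports whether a role passes the variant's qualification
// gate: the thirsty variant additionally admits Data Analysis.
func roleInScope(role string, thirsty bool) bool {
	if standardRoles[role] {
		return true
	}
	return thirsty && role == "Data Analysis"
}

func main() {
	fmt.Println(roleInScope("Data Analysis", false)) // standard rejects
	fmt.Println(roleInScope("Data Analysis", true))  // thirsty accepts
}
```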

III. Curator Console — Human-in-the-Loop Review

The third stage provides the editorial interface where automated output meets human judgment. Curator Console is a server-rendered Go web application that connects to the same PostgreSQL backend populated by Mail-Fetcher. It exposes five primary views: a dashboard with aggregate statistics and rating distribution charts, a paginated and filterable job explorer, a detailed single-listing view with tabs for review, tagging, and full-text storage, a color-coded tag registry, and a settings panel for database configuration and theme selection.

The frontend follows a bespoke design system called “The Precise Curator,” an editorial aesthetic inspired by high-end print layouts. It employs a dark surface hierarchy built on navy-tinted tones, combines the Manrope, Inter, and Space Grotesk typefaces, and enforces a strict no-border rule in favor of tonal surface stepping and negative space. The application carries test coverage across template parsing, render error handling, edge cases, and database CRUD operations.

Architecture and Technology Stack

The platform is built primarily in Go, with shell scripts orchestrating agent dispatch. Temporal provides durable workflow scheduling and execution guarantees. PostgreSQL (on a local network host via pggold) serves as the persistence layer, with credentials managed through HashiCorp Vault and environment configuration handled by direnv. The AI agent layer communicates with email storage through the msgvault MCP service and supports both Warp Oz cloud agents and the Claude CLI as interchangeable backends. Document generation uses pandoc for format conversion. The Go codebase follows a clean internal package layout with net/http and html/template for the web tier — no heavy frameworks. Infrastructure configuration, secrets, and environment variables flow through .envrc files, and all projects maintain .context/ directories for agent memory and status reporting.

Taken together, Jobber demonstrates a practical, production-grade application of agentic AI — not as a novelty, but as load-bearing infrastructure in a real workflow where reliability, auditability, and human oversight are non-negotiable.