Internal Tooling · DevX · Technical Owner · Production

Multi-Venue CRM
Integration Platform

Audited, redesigned, and extended a broken AWS-backed CRM integration system across three business venues, fixing a live data contamination bug, shipping a new venue integration with a polling architecture, and cleaning up stale infrastructure. Owned from discovery through go-live, including vendor coordination, stakeholder blockers, and infrastructure cleanup.

3 · Venues integrated into a single HubSpot pipeline
4 · Stale Lambda functions identified and decommissioned
5 min · Polling interval: contacts sync automatically, no manual export
Role
Developer Co-op · Technical Owner
Timeline
Aug 2025 – Present
Organization
Boston-based hospitality group
Stack
AWS Lambda · EventBridge · HubSpot API · Tripleseat API
Integration Architecture · Per-venue isolation
[Architecture diagram: an EventBridge Scheduler fires every 5 minutes and invokes one isolated polling Lambda per venue. venue-a-sync and venue-b-tripleseat-sync (LIVE, deployed) push flagged HubSpot contacts to their own Tripleseat accounts; venue-c-[company]-sync is blocked awaiting API access from the [company] platform. Each Lambda carries its own env vars (TS_BASE_URI, TS_PUBLIC_KEY, HUBSPOT_TOKEN); no shared config, no cross-venue state.]
Context

Inherited a broken integration. Audited it before touching it.

The company operates multiple hospitality venues. Each had its own contact management needs, and the business wanted a unified HubSpot CRM as the center of gravity for all of them.

When I arrived, a partial integration existed between HubSpot and the first venue's Tripleseat CRM. It was partly broken, partly stale, and actively contaminating another venue's data. My job was to understand it, fix it, and extend it, while managing vendor relationships and stakeholder coordination with no other technical person on the team.

I started with a full audit before writing a single line of new code.

What I Found

Four problems before I wrote a line of code.

01
Live data contamination bug
The first venue's existing webhook integration was routing HubSpot contacts to the wrong Tripleseat account: a stale webhook was hitting a decommissioned endpoint. Real customer data was being silently misrouted.
02
Four stale Lambda functions in production
Four [company]-prod-* Lambda functions were sitting in the AWS account, along with two SQS FIFO queues and an API Gateway resource, none of which were connected to anything active. Every one was a maintenance liability and a source of confusion.
03
No path to a second integration without rewriting
The original integration was built for a single venue. Adding a second venue would have required duplicating a significant amount of logic with no isolation between venues: a shared config structure waiting to cause another contamination event.
04
Third venue blocked on unofficial API
The third venue runs on a [company] management platform with no native HubSpot integration and no official API for smaller operators. The only path forward involved reverse-engineered code held by an external developer.
Architecture Decision

Webhooks out. Polling in.

The most consequential decision I made was to abandon the webhook approach for new venue integrations and replace it with a polling Lambda on an EventBridge schedule.

Why polling beats webhooks here
HubSpot's webhook tier didn't support reliable subscription to the specific contact property changes we needed. Polling sidesteps this entirely: a Lambda runs every 5 minutes, queries HubSpot for contacts where a venue-specific sync flag is true, pushes them to the correct Tripleseat account, and resets the flag. It's simpler to test, simpler to debug, and easier to extend: adding a fourth venue means duplicating the Lambda and swapping three env vars, not rewriting logic.
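The query each run issues can be sketched as a pure payload builder against HubSpot's CRM v3 contacts search endpoint (a sketch: the helper name and the exact property list are illustrative, not the production code):

```python
def build_sync_search_payload(flag_property: str, limit: int = 100) -> dict:
    """Build the HubSpot CRM v3 search body that selects contacts
    whose venue-specific sync flag is currently set to true."""
    return {
        "filterGroups": [{
            "filters": [{
                "propertyName": flag_property,
                "operator": "EQ",
                "value": "true",
            }]
        }],
        # Pull only the fields the Tripleseat lead needs, plus the flag itself.
        "properties": ["email", "firstname", "lastname", "phone", flag_property],
        "limit": limit,
    }

payload = build_sync_search_payload("tripleseat_sync__venue_b")
```

Keeping the payload construction pure makes the filter logic unit-testable without touching the network.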
Per-venue isolation via environment variables
Each venue integration runs as its own Lambda with its own environment variables: HUBSPOT_TOKEN, TRIPLESEAT_PUBLIC_KEY, TRIPLESEAT_BASE_URI. No shared config, no cross-venue state. Isolation enforced at the deployment level.
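The isolation above can be sketched as one self-contained config set per Lambda (all values below are placeholders; in production each set lives only in that Lambda's environment, never in shared code):

```python
import os

# Each venue's Lambda carries exactly these three variables. The two sets
# share a schema but no values, so a misroute requires a deploy-time mistake,
# not a runtime one. Tokens, keys, and URIs are placeholders.
VENUE_A_ENV = {
    "HUBSPOT_TOKEN": "token-a",
    "TRIPLESEAT_PUBLIC_KEY": "key-a",
    "TRIPLESEAT_BASE_URI": "https://api.tripleseat.example/venue-a",
}
VENUE_B_ENV = {
    "HUBSPOT_TOKEN": "token-b",
    "TRIPLESEAT_PUBLIC_KEY": "key-b",
    "TRIPLESEAT_BASE_URI": "https://api.tripleseat.example/venue-b",
}

def load_config() -> dict:
    """Read this Lambda's own environment; there is no lookup path
    into any other venue's configuration."""
    keys = ("HUBSPOT_TOKEN", "TRIPLESEAT_PUBLIC_KEY", "TRIPLESEAT_BASE_URI")
    return {k: os.environ[k] for k in keys}
```

Same schema, zero shared values: that is the whole isolation contract.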
Property naming as an architectural decision
The original integration had a single HubSpot property called "Tripleseat Sync." With multiple venues, this was a data collision waiting to happen. Renamed it to "Tripleseat Sync - Venue A" and created a separate "Tripleseat Sync - Venue B" property. Each Lambda only reads its own flag, isolation enforced at the data model level, not just in code.
Code: Polling Lambda Core

How the sync loop works.

Every 5 minutes, the Lambda queries HubSpot for contacts with the sync flag set, pushes them to Tripleseat, and resets the flag. The pattern is identical across venues; only the env vars change.

[company].py · core polling loop Python 3.13
def lambda_handler(event, context):
    # Pull contacts where venue sync flag = True
    contacts = get_hubspot_contacts(
        token=os.environ["HUBSPOT_TOKEN"],
        filter_property="tripleseat_sync__[company]",
        filter_value="true",
    )
    for contact in contacts:
        # Push contact to this venue's Tripleseat account
        create_tripleseat_lead(
            contact,
            base_uri=os.environ["TRIPLESEAT_BASE_URI"],
            public_key=os.environ["TRIPLESEAT_PUBLIC_KEY"],
        )
        # Reset flag so contact isn't synced again next run
        reset_sync_flag(contact["id"], os.environ["HUBSPOT_TOKEN"])
    return {"synced": len(contacts)}

Adding a new venue requires one thing: a new Lambda with three different environment variables. The sync logic itself doesn't change. That extensibility was a design constraint, not an accident.
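The flag reset at the end of each run can be sketched as the request body HubSpot's v3 contact-update endpoint expects (a sketch: `reset_sync_flag` appears in the loop above, but this builder and its property name are illustrative):

```python
def build_flag_reset_patch(flag_property: str) -> dict:
    """Body for PATCH /crm/v3/objects/contacts/{id}: set the venue's
    sync flag back to "false" so the contact isn't re-synced next run."""
    return {"properties": {flag_property: "false"}}

patch = build_flag_reset_patch("tripleseat_sync__venue_b")
```

Resetting the flag is what makes each 5-minute run idempotent: a contact is picked up once per flag-set, not once per poll.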

Execution Timeline

What happened, in order.

Week 1
Audit. Mapped every Lambda, SQS queue, API Gateway route, and HubSpot workflow in the existing system. Identified the contamination bug, confirmed what was live vs. stale, documented the full picture before touching anything.
Week 1–2
Scoping. Ran discovery to define the second venue's scope, map the Venue C blocker, and confirm Venue D was on hold. Produced a written scope document so there were no ambiguities.
Week 2–3
Build. Created venue-b-tripleseat-sync Lambda with polling architecture. Created new HubSpot contact property. Configured EventBridge schedule (5-minute interval). Set up OAuth credentials in the second venue's Tripleseat account.
Week 3
Test + go-live. Set the sync flag on a test contact. Confirmed the lead appeared in the second venue's Tripleseat within 5 minutes. Confirmed the flag reset to false. Declared the integration live. It has run without manual intervention since.
Week 3–4
Infrastructure cleanup. Deleted the four stale [company]-prod-* Lambda functions. Deleted the two stale SQS FIFO queues. Removed the stale API Gateway resource. Redeployed to production. Reduced infrastructure surface area significantly.
Ongoing
[company] platform scoping. Confirmed the platform won't issue official API tokens to smaller operators. Documented the path forward using an external developer's reverse-engineered API. Sent scoping documentation to leads with the exact field mapping needed: name, email, phone, boat brand. Blocker documented, owner assigned.
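The 5-minute trigger used throughout this timeline can be sketched as the parameters an EventBridge Scheduler schedule would take (a configuration sketch for the boto3 `scheduler` client's `create_schedule` call; names and ARNs are placeholders, not the production values):

```python
# Placeholder parameters for scheduler.create_schedule(**schedule_params).
# The account ID, role, and function ARN below are illustrative.
schedule_params = {
    "Name": "venue-b-tripleseat-sync-every-5min",
    "ScheduleExpression": "rate(5 minutes)",
    "FlexibleTimeWindow": {"Mode": "OFF"},  # fire exactly on schedule
    "Target": {
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:venue-b-tripleseat-sync",
        "RoleArn": "arn:aws:iam::123456789012:role/scheduler-invoke-role",
    },
}
```

Per-venue isolation extends to the trigger: each venue's Lambda gets its own schedule, so pausing one venue's sync never touches another's.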
Stakeholder & Vendor Coordination

The technical work was 60% of the job.

Most of the blockers on this project weren't technical; they were access, coordination, and decision-making problems. Managing them was as much a part of my ownership as writing the Lambda code.

A vendor support ticket
Hit a 401 permission error on the Venue A Tripleseat API. Confirmed the Lambda code was correct by isolating the API call. Opened a vendor support ticket, diagnosed the OAuth vs. public key distinction, and resolved it without engineering escalation.
Duplicate lead remediation
Discovered a significant number of duplicate leads in the second venue's Tripleseat from the contamination period. Drafted a written options memo for the lead with two remediation paths: a mass-delete via Tripleseat (paid vendor fee) or manual deletion. Documented the tradeoffs clearly so the business decision could be made without requiring my involvement in the next step.
[company] platform API dependency management
The [company] platform doesn't offer official API access to smaller operators. The only path forward involves code an external developer had written against the platform's internal REST API. Managed the dependency by sending a clear scoping email to both the lead and the external developer, confirming exactly what was needed, and documenting the blocker with an explicit owner and next step so it wouldn't stall future progress.
What I'd Do Differently

Three takeaways.

01
Audit before you build
The contamination bug and the stale infrastructure were both discoverable in a first-pass audit. I found them because I mapped everything before touching anything. The cost of skipping that step would have been building new integrations on top of a broken foundation, and discovering it later in production.
02
Name things for the next person, not yourself
Renaming the HubSpot property from "Tripleseat Sync" to "Tripleseat Sync - Venue A" was a five-minute change that prevents an entire category of future bugs. Clear, scoped naming is an architectural decision, not a style preference.
03
Document blockers explicitly, with owners
The [company] platform integration and the stale webhook deletion were both blocked on third-party access. I couldn't unblock them unilaterally, but I could document the blocker, identify the owner, and write the next step clearly enough that progress didn't depend on me being in the room. That's what prevents things from going stale.