CineMatch

A Hybrid LLM and Relational Approach for Personalized Movie Recommendation

CS 5614 - Database Management Systems · Virginia Tech · Spring 2026

CineMatch system overview

About

CineMatch is a hybrid movie recommendation system where each component handles the part of the problem it is best suited for. A Go backend serves a REST API with JWT auth, an LLM converts natural-language questions into safe SQL, Apache AGE runs graph traversals for related-title discovery, and a React/TypeScript frontend renders the experience. PostgreSQL 16 with PostGIS manages all structured storage, and an enrichment trigger pipeline auto-queues new movies for LLM-generated mood/theme tagging.

The system collects structured preference data at signup so recommendations are personalized from the first session, addressing the cold-start problem. Mood is treated as a real feature in the user profile, updated through feedback.


Stack

Go 1.26 · Chi Router · PostgreSQL 16 · Apache AGE · PostGIS · pgxpool · OpenRouter / Llama 3.3 · React / TypeScript · Vite · ECharts · Docker · Cloudflared · Nginx · JWT + bcrypt

Codebase Snapshot

Numbers below are pulled from the tokei CLI tool.

Area Files Code Lines
Go 41 4,471
Frontend (TS/TSX/CSS/HTML) 22 4,128
SQL 3 229
Shell scripts 6 1,001
Total project 98 13,276

Real API Surface

Routes below are pulled from internal/infra/http/server/server.go. The current API surface has 20 HTTP endpoints total (19 under /api + 1 health route).

Public

Protected (JWT required)


Project Status Matrix

This matrix separates what is live in backend routes, what currently runs on mock data in the UI, and what is wired in the frontend but not yet exposed by the backend.

Area Capability Status Notes
API GET /health Live Health check route is registered and returns 200
API Auth + refresh rotation Live /api/auth/register|signup|login|refresh|logout
API Movie catalog + search + details + crew Live /api/movies* routes are active
API Onboarding save endpoint Live POST /api/users/onboard behind JWT middleware
API Chat NL -> SQL -> query execution Live /api/chat with prompt + SQL guard
API Graph recommendations Live /api/recommendations/graph + graph-related movie endpoint
API Feedback + not interested loop Planned Will write both relational and graph edges in one transaction
Frontend Home / Search / Movie / Chat routes Live Connected to realApi and live backend routes
Frontend 5-step onboarding signup flow Live Uses atomic signup endpoint /api/auth/signup
Frontend Dashboard analytics page UI Demo Dashboard page is currently bound to mockApi
LLM Schema-grounded system prompt Live schema_prompt.go defines full SQL contract + graph mapping rules
LLM SQL safety guard Live SELECT-only, blocked DML/DDL keywords, required LIMIT
LLM Retry + multi-model fallback chain Planned Primary retries x5, then 4 fallback models
Data 14-table relational schema + enums Live schema-01.sql with domain-specific enums and constraints
Data Movie enrichment trigger queue Live trg_movie_enrich inserts into enrichment_queue on new movies
Data Apache AGE graph support Live AGE loaded in DB connections and graph build script available
Ops CLI workflows (app, dl, migrate, psql) Live run.sh + Makefile commands wired
Ops Graceful shutdown Live Signal handling with 10s server shutdown timeout
Ops Migration utility Live Supports reset|drop|create|indexes|status
Page Mermaid rendering bootstrap Live main.js added; diagrams initialized client-side
API Gap /api/analytics/overview Planned Frontend client references it, backend route is not registered
API Gap /api/search/nl + /api/analytics/nl Planned Frontend client includes calls, backend currently uses /api/chat
API Gap /location ingest endpoint Planned Frontend sends geolocation, backend route not present yet

Runtime Operations

API startup and shutdown behavior is explicit in cmd/cenimatch and internal/container.

Lifecycle Step Behavior Code Path
Boot Loads environment, parses typed config, constructs container, wires DB, services, and HTTP server cmd/cenimatch/main.go, internal/container/container.go
Serve Starts HTTP server in a goroutine and keeps main process on signal wait App.Start(), App.Run()
Shutdown trigger Listens for SIGINT and SIGTERM signal.Notify(...)
Shutdown window Graceful HTTP shutdown with 10-second timeout, then DB pool close container.Shutdown()
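The shutdown window above can be sketched with the standard library alone. This compresses the real signal-driven flow (where App.Run() blocks on SIGINT/SIGTERM before shutting down) into an immediate shutdown; runAndStop is a hypothetical name for illustration.

```go
package main

import (
	"context"
	"fmt"
	"net"
	"net/http"
	"time"
)

// runAndStop mirrors the lifecycle table: serve in a goroutine, then
// perform a graceful shutdown bounded by a 10-second timeout. The real
// app waits on SIGINT/SIGTERM before calling Shutdown.
func runAndStop() error {
	srv := &http.Server{Handler: http.NotFoundHandler()}
	ln, err := net.Listen("tcp", "127.0.0.1:0") // ephemeral port for the sketch
	if err != nil {
		return err
	}
	served := make(chan error, 1)
	go func() { served <- srv.Serve(ln) }()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil { // waits for in-flight requests
		return err
	}
	if err := <-served; err != http.ErrServerClosed {
		return err
	}
	return nil
}

func main() {
	if err := runAndStop(); err != nil {
		panic(err)
	}
	fmt.Println("graceful shutdown complete")
}
```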

Connection Bootstrap

Each new DB connection is prepared for both relational SQL and AGE graph calls before request handling starts.

Step Purpose Implementation
Parse URL + init pool config Standardized pgxpool setup from DATABASE_URL database.NewConnection()
Register enum types Registers tag_source, crew_role, mood_type in type map AfterConnect hook
Activate AGE context Runs LOAD 'age' and sets search_path = ag_catalog, "$user", public AfterConnect hook
Health gate Pool Ping must succeed before container starts serving pool.Ping(...)

Configuration Modes

Config loading has separate behavior for dev and production to speed local work while enforcing stricter prod settings.

Setting Dev Mode Production Mode
BCRYPT_COST Default 10 Required from env
JWT_EXPIRATION Default 24h Required from env
REFRESH_TOKEN_EXPIRATION Default 30 days Required from env
Core infra vars DATABASE_URL, PORT, JWT_SECRET, JWT_ISSUER required Same required set
CORS_ALLOWED_ORIGINS Optional CSV, merged with local defaults Optional CSV, appended to default allowed origins
OPENROUTER_API_KEY Optional Optional

CLI Tooling

The project ships with executable workflows for app runtime, downloads, and migration operations.

Command What It Does Entry Point
./run.sh app Builds and runs API server binary cmd/cenimatch
./run.sh dl ... Builds and runs downloader with source filters, worker count, and output directory cmd/download
./run.sh migrate reset|drop|create|indexes|status Runs migration utility commands cmd/migrate
./run.sh migrate seed Runs seed pipeline script migration/seed.sh
make db / make db-stop Starts or stops database stack with Docker Compose Makefile
./run.sh psql Opens interactive psql session inside running DB container run.sh

Migration Safety Notes

Migration behavior is catalog-driven and explicit about destructive operations.

Behavior Details Source
Drop sequencing Drops tables first, then enums, then sequences using Postgres catalogs internal/migrator/migrator.go
Sync pauses Includes short waits between drop phases to avoid catalog timing issues time.Sleep(500ms)
Schema split Tables from schema-01.sql, indexes from schema-02-indexes.sql CreateTables(), CreateIndexes()
Status introspection Shows connection, table count, and table list migrate status
Command timeouts create/indexes/reset use 30m context; lighter commands use 30s timeoutForCommand()

Downloader Reliability

The download pipeline is generic and source-agnostic, with built-in protection for common ingestion failures.

Capability Implementation Detail
Pluggable auth strategies NoAuth, BearerToken, BasicAuth, APIKey via shared AuthMethod interface
Controlled concurrency Goroutines with worker semaphore to cap parallel downloads
Archive handling Supports direct file downloads, .gz extraction, and zip extraction
Zip-slip protection Validates extracted path stays under target directory before writing
Resume-friendly behavior Skips already downloaded/extracted files where possible
Source selection UX --list and source alias expansion (imdb, tmdb, netflix, etc.)
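The zip-slip check in the table resolves each archive entry against the target directory and rejects anything that escapes it. This is a minimal stdlib sketch of that validation; function and variable names are illustrative, not the repo's.

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// safeExtractPath rejects archive entries that would escape destDir
// (the classic zip-slip attack) before any file is written.
func safeExtractPath(destDir, name string) (string, error) {
	p := filepath.Join(destDir, name) // Join also cleans ".." segments
	if !strings.HasPrefix(p, filepath.Clean(destDir)+string(filepath.Separator)) {
		return "", fmt.Errorf("illegal path in archive: %s", name)
	}
	return p, nil
}

func main() {
	fmt.Println(safeExtractPath("/tmp/out", "imdb/title.basics.tsv"))
	fmt.Println(safeExtractPath("/tmp/out", "../../etc/passwd"))
}
```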

Security Internals

Access and refresh tokens are handled as separate concerns with rotation and server-side revocation checks.

Control Current Behavior Implementation
Access tokens HS256 JWT with sub, username, iss, iat, exp internal/infra/security/jwt.go
Refresh format uuid:hex_secret; database stores sha256(secret), not raw secret internal/infra/security/refresh.go
Rotation + revoke Refresh revokes previous token before issuing new pair internal/service/auth.go
Metadata capture Refresh tokens can store request IP and User-Agent tokenMeta() + StoreRefreshToken()
Auth middleware Accepts cookie or bearer token, validates JWT, injects user context internal/infra/http/middleware/auth.go
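The uuid:hex_secret scheme above can be sketched with the standard library. The "uuid" half here is random hex purely for illustration (the real code uses a UUID library); the key property shown is that only sha256(secret) is ever stored.

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// newRefreshToken returns the client-held token and the hash the
// database stores. A stolen database row cannot be replayed because
// the raw secret never touches disk.
func newRefreshToken() (token, storedHash string, err error) {
	buf := make([]byte, 48)
	if _, err = rand.Read(buf); err != nil {
		return "", "", err
	}
	id := hex.EncodeToString(buf[:16])     // token identifier (UUID stand-in)
	secret := hex.EncodeToString(buf[16:]) // client-held secret
	sum := sha256.Sum256([]byte(secret))
	return id + ":" + secret, hex.EncodeToString(sum[:]), nil
}

// verify recomputes the hash from the presented token and compares it
// to the stored value.
func verify(token, storedHash string) bool {
	parts := strings.SplitN(token, ":", 2)
	if len(parts) != 2 {
		return false
	}
	sum := sha256.Sum256([]byte(parts[1]))
	return hex.EncodeToString(sum[:]) == storedHash
}

func main() {
	tok, hash, _ := newRefreshToken()
	fmt.Println("verifies:", verify(tok, hash))
	fmt.Println("tampered:", verify(tok+"x", hash))
}
```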

API Envelope Contract

API responses follow a consistent envelope structure so frontend parsing stays uniform across endpoints.

{
  "success": true,
  "data": { "...": "..." }
}

{
  "success": false,
  "error": { "code": "INVALID_REQUEST" }
}

Frontend / Backend Parity

This table tracks calls referenced in frontend API clients against routes currently registered in backend server setup.

Route Frontend Usage Backend Route Status Notes
/api/chat Used in Chat page Live Core NL query path in production frontend
/api/analytics/overview Referenced in realApi Planned Dashboard currently uses mockApi
/api/search/nl Referenced in Search page client Planned Backend currently exposes /api/chat for NL SQL
/api/analytics/nl Referenced in dashboard client helpers Planned No server route registration yet
/location Location send call in Home page client Planned No server route registration yet

Page Reliability Notes


System Architecture

flowchart LR
    Browser(["Browser, React/TS"])
    subgraph BACKEND["Go Backend - Chi Router"]
        direction TB
        MW["Middleware\nLogger, CORS, JWT, RealIP"]
        HANDLERS["Handlers\nAuth, Movies, Chat\nOnboarding, Feedback, Graph"]
        SERVICES["Services\nAuth, Chat\nOnboarding, Feedback"]
        REPOS["Repositories\nUser Repo, Movie Repo\nFeedback Repo"]
    end
    subgraph DATA["Data Layer"]
        direction TB
        PG[("PostgreSQL 16\n14 tables, 3 enums\nGIN, PostGIS")]
        AGE[("Apache AGE\nmovie_graph\nCypher queries")]
        QUEUE["Enrichment Queue\nTrigger pipeline"]
    end
    LLM["OpenRouter LLM\nLlama 3.3 70B + fallbacks"]
    Browser --> MW --> HANDLERS --> SERVICES --> REPOS
    REPOS --> PG
    REPOS --> AGE
    SERVICES -- "text-to-SQL" --> LLM
    PG -- "trg_movie_enrich" --> QUEUE

Backend Layer Diagram

The Go backend follows a strict layered architecture: handlers only decode/encode HTTP, services own all business logic, and repositories own SQL. Security interfaces (bcrypt, JWT, refresh tokens) are injected via ports.

Technical Excellence

flowchart LR
    Client(["Client"])
    subgraph HTTP["HTTP Layer - Chi"]
        direction TB
        MW["Middleware\nlogger, recoverer\nCORS, JWT"]
        H_AUTH["Auth Handler"]
        H_MOV["Movies Handler"]
        H_CHAT["Chat Handler"]
        H_ON["Onboarding Handler"]
        H_FB["Feedback Handler"]
        H_REC["Graph Reco Handler"]
    end
    subgraph SVC["Service Layer"]
        direction TB
        S_AUTH["auth.go"]
        S_CHAT["chat.go"]
        S_ON["onboarding.go"]
        S_FB["feedback.go"]
    end
    subgraph SEC["Security"]
        direction TB
        BCRYPT["BcryptHasher"]
        JWT_NODE["JWT HS256\n+ refresh rotation"]
    end
    subgraph REPO["Repository Layer"]
        direction TB
        R_USER["user_repo.go"]
        R_MOV["movies_repo.go\nCTE + lateral joins"]
        R_GRAPH["Graph queries\ncypher()"]
    end
    subgraph DB["PostgreSQL 16"]
        DBM["DB Manager\npgxpool + WithTx"]
        PG[("Relational\nJSONB, GIN")]
        AGE2[("Apache AGE")]
    end
    Client --> MW --> H_AUTH & H_MOV & H_CHAT & H_ON & H_FB & H_REC
    H_AUTH --> S_AUTH
    H_MOV --> S_CHAT
    H_CHAT --> S_CHAT
    H_ON --> S_ON
    H_FB --> S_FB
    H_REC --> S_FB
    S_AUTH --> SEC & R_USER
    S_CHAT --> R_MOV
    S_ON --> R_USER
    S_FB --> R_MOV & R_GRAPH
    R_USER & R_MOV & R_GRAPH --> DBM
    DBM --> PG & AGE2

Atomic Write Flows

Multi-step writes are wrapped in a shared transaction helper so profile, feedback, and graph state stay consistent.

Flow What Happens in One Transaction Failure Behavior
POST /api/auth/signup Create user, preference JSONB, mood profile, graph user node, liked/disliked graph edges Any failure rolls back all writes
POST /api/users/onboard Update preferences + mood profile, clear old graph edges, rebuild WATCHED/RATED edges No partial profile or graph drift
POST /api/feedback Upsert watch history + user feedback, then sync graph WATCHED/RATED edges Relational and graph signals stay aligned
POST /api/feedback/not-interested Upsert not_interested relational flag and update graph edge attributes Preference exclusion is applied consistently

Database Schema

14 tables across four domains: movie catalog, user identity, interaction tracking, and infrastructure. Movies are keyed on tmdb_id, users on UUID. Three PostgreSQL enums enforce type safety: tag_source, crew_role, mood_type. PostGIS GEOGRAPHY columns on feedback and locations enable spatial queries.


Movie Catalog Domain

erDiagram
    movies ||--o{ movie_genres : "has"
    movies ||--o{ movie_tags : "tagged"
    movies ||--o{ movie_crew : "credits"
    movies ||--o{ enrichment_queue : "triggers"
    genres ||--o{ movie_genres : "categorizes"
    persons ||--o{ movie_crew : "performs"
    movies {
        int tmdb_id PK
        varchar imdb_id
        text title
        int release_year
        int runtime_min
        float vote_avg
        bigint budget
        bigint revenue
        boolean enriched
    }
    genres {
        serial id PK
        varchar name UK
    }
    movie_genres {
        int movie_id FK
        int genre_id FK
    }
    persons {
        varchar imdb_id PK
        text primary_name
        int birth_year
    }
    movie_crew {
        serial id PK
        int movie_id FK
        varchar person_id FK
        crew_role role
        text character
    }
    movie_tags {
        serial id PK
        int movie_id FK
        varchar tag_key
        varchar tag_value
        float confidence
        tag_source source
    }
    enrichment_queue {
        serial id PK
        int movie_id FK
        varchar status
        int attempts
    }

User & Interaction Domain

erDiagram
    users ||--o| user_preferences : "has"
    users ||--o| user_mood_profile : "has"
    users ||--o{ watch_history : "watches"
    users ||--o{ user_feedback : "rates"
    users ||--o{ recommendations : "receives"
    users ||--o{ refresh_tokens : "authenticates"
    users ||--o{ activity_events : "generates"
    users ||--o{ user_locations : "located"
    users {
        uuid id PK
        varchar username UK
        text email UK
        text password_hash
    }
    user_preferences {
        uuid user_id PK
        jsonb genre_weights
        int runtime_pref
        int decade_low
        int decade_high
    }
    user_mood_profile {
        uuid user_id PK
        int_arr liked
        int_arr disliked
        jsonb attributes
    }
    watch_history {
        serial id PK
        uuid user_id FK
        int movie_id FK
        boolean completed
    }
    user_feedback {
        serial id PK
        uuid user_id FK
        int movie_id FK
        float rating
        geography location
    }
    user_locations {
        serial id PK
        uuid user_id FK
        geography location
        text city
    }
    recommendations {
        serial id PK
        uuid user_id FK
        int movie_id FK
        float score
        text explanation
    }
    activity_events {
        serial id PK
        uuid user_id FK
        int movie_id FK
        text city_state
    }
    refresh_tokens {
        uuid id PK
        uuid user_id FK
        varchar token_hash UK
    }

LLM Text-to-SQL Pipeline

Users type natural-language questions. The backend wraps each question with a schema prompt, sends it to OpenRouter (Llama 3.3 70B with multi-model fallback), validates the returned SQL through a strict guard, then executes it read-only against PostgreSQL.


Runtime Safety Contract

Layer Rule Implementation
Prompt contract Model must output one raw SELECT, no markdown, max LIMIT 50, and UNSAFE: for disallowed asks internal/llm/schema_prompt.go
Post-generation guard Strip code fences, block DML/DDL keywords, require SELECT, require LIMIT internal/llm/query_guard.go
Model resiliency Retry primary model 5 times, then rotate through 4 free fallback models internal/llm/openrouter.go
Execution boundary LLM call timeout 20s, SQL execution timeout 10s, chat history trimmed to last 20 messages internal/service/chat.go
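The post-generation guard is mostly string hygiene, so it can be sketched directly. This is a minimal stand-in for query_guard.go: the keyword list is illustrative, and the naive substring match here would also flag column names like created_at, which a token-aware real guard would handle better.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// guardSQL strips markdown fences, requires a SELECT, blocks mutating
// keywords, and requires a LIMIT clause, per the contract above.
func guardSQL(raw string) (string, error) {
	s := strings.TrimSpace(raw)
	s = strings.TrimPrefix(s, "```sql")
	s = strings.TrimPrefix(s, "```")
	s = strings.TrimSuffix(s, "```")
	s = strings.TrimSpace(s)

	upper := strings.ToUpper(s)
	if !strings.HasPrefix(upper, "SELECT") {
		return "", errors.New("only SELECT statements are allowed")
	}
	for _, kw := range []string{"INSERT", "UPDATE", "DELETE", "DROP", "ALTER", "TRUNCATE", "GRANT"} {
		if strings.Contains(upper, kw) { // naive substring check for the sketch
			return "", fmt.Errorf("blocked keyword: %s", kw)
		}
	}
	if !strings.Contains(upper, "LIMIT") {
		return "", errors.New("query must include a LIMIT")
	}
	return s, nil
}

func main() {
	ok, err := guardSQL("```sql\nSELECT title FROM movies LIMIT 10\n```")
	fmt.Println(ok, err)
	_, err = guardSQL("DROP TABLE movies")
	fmt.Println(err)
}
```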

System Prompt Design

The prompt in internal/llm/schema_prompt.go is a full SQL contract, not a short instruction. It includes a table-by-table schema reference, graph-to-SQL mapping rules, and strict output constraints so the model returns one executable PostgreSQL SELECT.

flowchart LR
    NL(["User question"])
    subgraph STAGE1["API & Chat Service"]
        direction TB
        VAL["Validate\ntrim to 20 msgs"]
        TO_LLM["LLM call\n20s timeout"]
        VAL --> TO_LLM
    end
    subgraph STAGE2["OpenRouter LLM"]
        direction TB
        PROMPT["Schema prompt\ngrounding"]
        PRIMARY["Llama-3.3-70B\nretry x5"]
        FB["Fallback pool\nHermes, Qwen\nGemma, Nemotron"]
        PROMPT --> PRIMARY
        PRIMARY -- "exhausted" --> FB
    end
    subgraph STAGE3["SQL Guard"]
        direction TB
        CLEAN["Strip fences\nnormalize"]
        CHKNODE["SELECT-only\nblock DML\nenforce LIMIT 50"]
        CLEAN --> CHKNODE
    end
    subgraph STAGE4["Execution & Format"]
        direction TB
        TO_SQL["SQL exec\n10s timeout"]
        PG2[("PostgreSQL")]
        RTYPE{"Result typing"}
        MOVIE["Movie grid"]
        GENERIC["Data table"]
        TO_SQL --> PG2
        PG2 --> RTYPE
        RTYPE -- "tmdb_id present" --> MOVIE
        RTYPE -- "other cols" --> GENERIC
    end
    OUT(["Response\n+ generated SQL"])
    NL --> VAL
    TO_LLM --> PROMPT
    PRIMARY --> CLEAN
    FB --> CLEAN
    CHKNODE --> TO_SQL
    MOVIE --> OUT
    GENERIC --> OUT

Authentication Flow

JWT access tokens (HS256, configurable expiry) plus refresh-token rotation. The refresh token format is uuid:hex_secret — only sha256(secret) is stored. Old tokens are revoked on rotation, preventing replay.

sequenceDiagram
    participant C as Client
    participant H as Auth Handler
    participant S as Auth Service
    participant R as User Repo
    participant DB as PostgreSQL
    C->>H: POST /auth/register
    H->>S: Register(name, email, pass)
    S->>S: bcrypt.Hash(pass)
    S->>R: CreateUser (tx)
    R->>DB: INSERT users + preferences + mood_profile
    S->>S: Generate JWT + Refresh
    S-->>C: access_token, refresh_token
    C->>H: POST /auth/login
    H->>S: Login(email, pass)
    S->>R: GetByEmail
    R->>DB: SELECT users
    S->>S: bcrypt.Compare
    S->>S: Generate JWT + Refresh
    S-->>C: access_token, refresh_token
    C->>H: POST /auth/refresh
    H->>S: Refresh(old_token)
    S->>S: Parse uuid:secret
    S->>R: Validate sha256(secret)
    S->>R: Revoke old, insert new
    S-->>C: new_access, new_refresh

Apache AGE Graph Model

The movie_graph is built from relational tables and queried via Cypher through ag_catalog.cypher(). It supports genre-based recommendations, collaborative filtering, actor/director traversals, and explainability paths.


Recommendation Query Recipes

Recipe Cypher Pattern Where It Appears
Connected by director (Movie)<-[:DIRECTED]-(Person)-[:DIRECTED]->(Movie) GET /api/movies/{id}/graph-related
Connected by cast (Movie)<-[:ACTED_IN]-(Person)-[:ACTED_IN]->(Movie) GET /api/movies/{id}/graph-related
Theme overlap (Movie)-[:IN_GENRE]->(Genre)<-[:IN_GENRE]-(Movie) with overlap ranking GET /api/movies/{id}/graph-related
Collaborative filtering Users with overlapping high ratings, then unseen highly-rated titles from similar users GET /api/recommendations/graph
flowchart LR
    U((User)) -- "WATCHED" --> M((Movie))
    U -- "RATED >= 4.0" --> M
    M -- "IN_GENRE" --> G((Genre))
    P((Person)) -- "ACTED_IN" --> M
    P -- "DIRECTED" --> M
    subgraph QUERIES["Graph Query Patterns"]
        direction TB
        Q1["Unseen movies from\nfavorite genres"]
        Q2["Collaborative filtering\noverlap >= 2 users"]
        Q3["Same actor/director\ntraversals"]
        Q4["Explainability paths\ngenre-based reasoning"]
    end
    M -.-> QUERIES
    U -.-> QUERIES

Data Ingestion Pipeline

CineMatch uses a concurrent Go pipeline to ingest, clean, and normalize movie data from 7 sources including IMDb, TMDB, and Kaggle.


flowchart LR
    subgraph SOURCES["7 Data Sources"]
        IMDB["IMDb TSVs\nbasics, ratings\ncrew, principals"]
        TMDB["TMDB CSV\n5K metadata"]
        KAGGLE["Kaggle sets\nNetflix, Movies"]
    end
    subgraph DL["Downloader CLI"]
        AUTH["Auth adapters\nNoAuth, Bearer\nBasic, APIKey"]
        CONC["Concurrent\ngoroutines\n+ semaphore"]
    end
    subgraph CLEANBOX["Data Quality"]
        FILTER["Explicit content\nregex filter\n43K rows removed"]
        ADULT["TMDB adult\nflag check"]
    end
    subgraph LOAD["Seed Pipeline"]
        STAGE["Staging tables\nbulk COPY"]
        CTE["Loading CTEs\npersons, crew\ngenres, movies"]
        IDX["Index migration\nGIN trigram\njoin indexes"]
    end
    PG3[("PostgreSQL\n100K+ movies")]
    IMDB & TMDB & KAGGLE --> AUTH --> CONC
    CONC --> FILTER & ADULT
    FILTER & ADULT --> STAGE --> CTE --> IDX --> PG3

Deployment Architecture

Two Docker Compose files: docker-compose.yml runs the database (custom PG16 image with AGE + PostGIS), docker-compose.deploy.yml runs the Go backend, React frontend (Nginx), and a Cloudflared tunnel for zero-trust HTTPS access.

flowchart LR
    INET(["Internet"])
    subgraph CF["Cloudflare Zero Trust"]
        TUNNEL["cloudflared tunnel"]
    end
    subgraph DPLY["docker-compose.deploy.yml"]
        FE["cenimatch-frontend\nNginx :80\nReact/TS build"]
        BE["cenimatch-backend\nGo binary :8080"]
    end
    subgraph DBSTACK["docker-compose.yml"]
        DB2[("cenimatch-db\nPG16 + AGE + PostGIS\npgdata volume")]
    end
    INET -- "HTTPS" --> TUNNEL
    TUNNEL -- "app.domain" --> FE
    TUNNEL -- "api.domain" --> BE
    BE -- "pgx pool" --> DB2
    FE -- "VITE_API_URL" --> BE

Features

Potential Next Additions


Completed


Project Structure

flowchart TB
    ROOT["cenimatch/"]
    subgraph CMD["cmd/ - Entry Points"]
        direction TB
        C1["cenimatch/main.go\nApp struct, signal handling\ngraceful shutdown"]
        C2["download/main.go\nflags: -o -s -w --list\nsource alias expansion"]
        C3["migrate/main.go\nsubcommands: reset, drop\ncreate, seed, status"]
    end
    subgraph INT["internal/"]
        direction TB
        CFG["config/\nenv.go - walk-up .env loader\nconfig.go - typed struct\nrequired/optional/duration helpers"]
        CNT["container/\ncontainer.go - New wires:\nconfig > pgxpool > DBManager > Server"]
        DOM["domain/\nerrors.go - ErrorCode enum\n10 codes: INTERNAL thru REFRESH_INVALID"]
        PRT["ports/\ndb.go - DBManager interface\nsecurity.go - Hasher, JWT\nRefreshTokenGenerator"]
        DD["dd/\ndownloader.go - AuthMethod interface\n4 impls, goroutine + semaphore\ndata.go - 7 Source definitions"]
        LLM2["llm/\nopenrouter.go - retry x5 + 4 fallbacks\nquery_guard.go - SELECT-only filter\nschema_prompt.go - grounding context"]
        MIG["migrator/\nmigrator.go - DropAllTables\nenum + sequence cleanup\nCreateTables, SeedData"]
        SRV["service/\nauth.go - register, login, refresh, logout\nchat.go - LLM orchestration + SQL exec\nonboarding.go - 5-step profile build\nfeedback.go - star ratings, dismiss"]
        subgraph INFRA["infra/"]
            direction TB
            DBPKG["database/\ndatabase.go - pgxpool wrapper\nAfterConnect type registration\ndb_manager.go - WithTx, WithTxOptions\nauto-rollback on error/panic"]
            HTTPPKG["http/\nserver.go - Chi mux setup\nhandlers/ - auth, movies, chat\nonboarding, feedback, graph, health\nmiddleware/ - cors, auth\nutils/ - JSON envelope, status mapping"]
            SECPKG["security/\nhasher.go - bcrypt\njwt.go - HS256 access tokens\nrefresh.go - uuid:secret format\nsha256 storage, rotation"]
            REPOPKG["repository/\nmovies_repo.go - CTE + lateral joins\nuser_repo.go - CRUD + tx support\ngraph queries - cypher via AGE"]
        end
    end
    subgraph UI["ui/src/ - React / TypeScript"]
        direction TB
        PAGES["pages/\nHome, MovieDetail, Chat\nSearch, Onboarding 5 steps\nDashboard, Login, Register"]
        COMPS["components/\nNavbar, MovieCard, MovieGrid\nMoodSelector, ChatMessage\nDataTable, CastSection"]
        UIAPI["api/\nrealApi.ts - axios instance\nauth, movies, chat, onboarding\nmappers.ts - DTO to domain"]
    end
    subgraph MG["migration/"]
        direction TB
        S01["schema-01.sql\n14 tables, 3 enums\nenrichment trigger"]
        S02["schema-02-indexes.sql\nGIN trigram, join indexes\n68s to under 1s queries"]
        SEED["seed.sh\nbulk COPY into staging\nCTE loading pipeline"]
        GRAPH["build-graph.sh\nAGE graph construction\nMovie, Genre, Person vertices"]
    end
    subgraph DEPLOY["deployment/"]
        direction TB
        DC1["docker-compose.yml\nPG16 + AGE + PostGIS\npgdata volume"]
        DC2["docker-compose.deploy.yml\nGo backend, React/Nginx\nCloudflared tunnel"]
        MK["Makefile + run.sh\nbuild, db, migrate\ndl, app targets"]
    end
    ROOT --> CMD & INT & UI & MG & DEPLOY
    C1 --> CNT
    CNT --> DBPKG & HTTPPKG
    HTTPPKG --> SRV
    SRV --> REPOPKG & SECPKG & LLM2
    REPOPKG --> DBPKG
    PAGES --> UIAPI

Team

Abdulrahman Alamoudi
Gary Young
Jack Lyons (Team Leader)
Yazeed Alharthi
Yordanos Tessema

Responsibilities

Yazeed Alharthi DB Design & Data Pipeline, Backend Dev + API & LLM-SQL Integration
Jack Lyons (Team Leader) Backend Dev, Writing
Gary Young Recommendation Logic & Spatial Analysis
Yordanos Tessema React/TS Frontend, ECharts & UX
Abdulrahman Alamoudi Apache AGE Graph, Data Pipeline, Spatial Queries (PostGIS), Deployment