Chrima.
A budget tool built the way I actually spend.
Upload your Bank of America CSV, watch the transactions auto-categorize, and get a monthly read on where your money went. Clean Recharts visualizations, budget rules that work, and no third party scraping your bank credentials.
Straightforward.
Every budget app wanted my bank password.
I tried Mint, then YNAB, then Copilot, then Monarch. Every one wanted me to hand over my Bank of America credentials so Plaid could scrape them. BofA blocks half those integrations every few months and tells me to call support. The category rules were always close but never right, and I couldn't fix them without fighting someone else's UI.
Meanwhile my real workflow is embarrassingly simple: log into BoA once a month, download the CSV, stare at it. Chrima is that workflow with a categorizer, some charts, and a database that lives on my own machine.
Budgeting is a trust problem, not a math problem.
I asked six friends how they track spending. Three used a spreadsheet, two used a paid app they were half-ignoring, one had given up. Only one had ever trusted an app with their bank login for more than a year.
"I don't want a bot logging into my bank. I just want to know where the money went." // User interview · grad student · Seattle
"The categorizer was wrong on half my transactions and I couldn't fix it in bulk." // User interview · former Mint user
So: no OAuth scraping, no third-party middleman, and the categorizer has to learn from corrections fast. A two-click fix on one transaction should retag the next fifty.
CSV in, dashboard out.
Single-user, self-hosted. The React frontend uploads the CSV to a FastAPI endpoint, which parses and dedupes on merchant + amount + date, runs the categorizer (rules plus a learned-override table), and persists the results through SQLAlchemy. The frontend reads back through a small JSON API and renders Recharts. No Plaid, no OAuth, no outside services in the hot path.
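The dedupe step is the only subtle part of that pipeline. A minimal sketch of the composite key, assuming illustrative column names (`Date`, `Description`, `Amount`) rather than the actual export schema:

```python
import csv
import io

def dedupe_rows(csv_text, seen=None):
    """Parse a CSV export and drop rows already ingested.

    The dedupe key is (merchant, amount, date) -- the same composite
    key the pipeline uses. Column names here are assumptions.
    """
    seen = set() if seen is None else seen
    fresh = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        key = (row["Description"].strip().lower(),
               row["Amount"],
               row["Date"])
        if key in seen:
            continue
        seen.add(key)
        fresh.append(row)
    return fresh

sample = (
    "Date,Description,Amount\n"
    "01/05/2025,TRADER JOE'S #123,-42.17\n"
    "01/05/2025,TRADER JOE'S #123,-42.17\n"  # duplicate row
    "01/06/2025,SHELL OIL,-38.00\n"
)
rows = dedupe_rows(sample)
```

Passing the same `seen` set across monthly uploads is what makes re-uploading an overlapping export safe.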
Three calls I'd defend.
Plaid is a two-week integration, a recurring bill, and an ongoing support burden when BoA changes an endpoint. CSV is a one-evening parser that keeps working forever. The tradeoff is that the user does one manual step a month, which every research interview told me was acceptable — often preferred.
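The "one-evening parser" really is about this small. A sketch under assumed column names and an assumed MM/DD/YYYY date format (not a documented BofA schema):

```python
import csv
import io
from datetime import datetime
from decimal import Decimal

def parse_export(csv_text):
    """Turn a bank CSV export into typed records.

    Column names and date format are assumptions about the export;
    Decimal avoids float rounding on money.
    """
    txns = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        txns.append({
            "date": datetime.strptime(row["Date"], "%m/%d/%Y").date(),
            "merchant": row["Description"].strip(),
            "amount": Decimal(row["Amount"].replace(",", "")),
        })
    return txns

sample = (
    "Date,Description,Amount\n"
    "01/05/2025,TRADER JOE'S #123,-42.17\n"
)
txns = parse_export(sample)
```

When the export format drifts, the fix is a column rename here, not a support ticket.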
An LLM categorizer sounds great until you realize it's slow, non-deterministic, and expensive for a workload that boils down to "Trader Joe's is groceries." A rules file covers 90% of my transactions; a small overrides table handles the rest and learns from my corrections. If I ever need fuzziness I can layer an embedding lookup on top.
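The rules-plus-overrides split can be sketched in a few lines. The patterns and the merchant-normalization heuristic below are illustrative, not the repo's actual rules:

```python
import re

# Rules file: first matching pattern wins. Patterns are illustrative.
RULES = [
    (re.compile(r"TRADER JOE", re.I), "groceries"),
    (re.compile(r"SHELL|CHEVRON", re.I), "gas"),
    (re.compile(r"NETFLIX|SPOTIFY", re.I), "subscriptions"),
]

# Learned overrides: normalized merchant -> category, written
# whenever the user corrects a transaction by hand.
overrides = {}

def normalize(merchant):
    # Strip trailing store numbers so one correction retags
    # every location of the same merchant.
    return re.sub(r"\s*#?\d+$", "", merchant).strip().upper()

def categorize(merchant):
    key = normalize(merchant)
    if key in overrides:          # corrections beat rules
        return overrides[key]
    for pattern, category in RULES:
        if pattern.search(merchant):
            return category
    return "uncategorized"

def correct(merchant, category):
    """One manual fix; applies to every future match."""
    overrides[normalize(merchant)] = category
```

This is also the mechanism behind "a two-click fix retags the next fifty": correcting `STARBUCKS #99` writes one override that catches `STARBUCKS #12` too.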
SQLite and Postgres: SQLAlchemy 2.0 lets the same code target both. On my laptop it's a single file I can back up with git; on a home server it's Postgres with proper concurrency. No ORM rewrite, no migration path to explain in the README.
I use it every Sunday.
Chrima is in progress and private. I'm the primary user. It's caught two subscriptions I'd forgotten to cancel and told me, correctly, that coffee shops are a bigger line item than gas. Numbers below are from my own usage over the last few months.
What I'd do differently.
I'd build the dedupe layer on top of a content hash, not on a composite key. Right now "merchant + amount + date" misses edge cases like a refund that posts twice with the same date/amount but a different transaction id, and I've had to special-case those by hand. A stable hash over the raw CSV row would have been cleaner and would survive whatever BoA does to the export format next quarter.
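The content-hash version is barely more code than the composite key. A sketch; the sample rows and their transaction-id column are hypothetical:

```python
import hashlib

def row_hash(raw_line: str) -> str:
    """Stable transaction id: SHA-256 of the raw CSV line.

    Unlike the merchant + amount + date composite key, two postings
    that share date and amount but differ in any other field (such
    as a transaction id column) hash to different ids, and the hash
    doesn't care what columns the export happens to have.
    """
    return hashlib.sha256(raw_line.strip().encode("utf-8")).hexdigest()

# Same date and amount, different transaction id -> distinct rows.
a = row_hash("01/05/2025,REFUND ACME,42.17,TXN-001")
b = row_hash("01/05/2025,REFUND ACME,42.17,TXN-002")
```

The one caveat: if the bank emits two byte-identical rows for genuinely distinct transactions, the hash collapses them, so a tiebreaker (row position within the file) would still be needed.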
I'd also write the category rules as a small domain-specific language from the start instead of nested if statements. It's fine for me, but anyone else picking the repo up has to learn my assumptions about merchant strings, and a tiny rules file (regex, category, priority) would make that a read instead of an archaeology.
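That rules file doesn't need much machinery: a CSV of (pattern, category, priority) and a twenty-line loader covers it. The entries below are examples, not the actual rules:

```python
import csv
import io
import re

# rules.csv format: pattern,category,priority (higher priority wins).
# The catch-all ".*" at priority 0 is the fallback category.
RULES_CSV = """\
pattern,category,priority
TRADER JOE,groceries,10
WHOLE ?FOODS,groceries,10
SHELL|CHEVRON,gas,10
.*,uncategorized,0
"""

def load_rules(text):
    rows = list(csv.DictReader(io.StringIO(text)))
    rows.sort(key=lambda r: int(r["priority"]), reverse=True)
    return [(re.compile(r["pattern"], re.I), r["category"]) for r in rows]

def categorize(merchant, rules):
    for pattern, category in rules:
        if pattern.search(merchant):
            return category

rules = load_rules(RULES_CSV)
```

Reading that file tells a new contributor everything about merchant-string assumptions; reading nested if statements does not.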