I build machines
that think.


I'm Leon, a self-taught technologist obsessed with making AI run on hardware you can put on your desk. Not in a data center. Not in someone else's cloud. Yours.

I started MistAI because the hardest part of getting into local AI isn't the models: it's knowing what GPU to buy, what power supply won't fry your board, and whether that CPU has enough PCIe lanes. I've spent hundreds of hours researching, benchmarking, and building so you don't have to.

Before this, I was deep in web development, self-hosting, and homelab culture. The same DIY mindset that drives people to build their own servers is what drives MistAI: understand the hardware, own your stack, don't depend on anyone else's uptime.

How I got here.

A nonlinear path through tech that somehow makes sense in retrospect.

2019

The Spark

First exposure to machine learning through online courses. Built a basic image classifier and immediately wanted more compute power than my laptop could give.

2020

Homelab Days

Built my first homelab server. Fell down the rabbit hole of enterprise hardware: rackmount servers, ECC RAM, HBA cards, ZFS pools. Learned that hardware compatibility is an art, not a science.

2022

The GPT Era

Large language models changed everything. Started running LLMs locally and hit every possible bottleneck: VRAM limits, thermal throttling, power delivery issues. Started documenting solutions.

2023

MistAI Begins

Realized there was no good tool for planning an AI workstation build. Existing PC builders ignored GPU compute, VRAM, and model compatibility. Started building the first version of MistAI.

Now

Building in Public

MistAI is live and growing. Writing guides, curating hardware, and building compatibility checks so anyone can configure a workstation that actually runs the models they want.
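To give a flavor of what a compatibility check involves, here is a toy sketch in Python. The function name, the 20% runtime-overhead figure, and the example model sizes are illustrative assumptions, not MistAI's actual logic or measured values.

```python
# Toy sketch of a VRAM compatibility check.
# The 20% overhead figure is an illustrative assumption covering
# KV cache, activations, and framework buffers on top of raw weights.

def fits_in_vram(model_size_gb: float, gpu_vram_gb: float,
                 overhead_fraction: float = 0.2) -> bool:
    """Return True if the model plus estimated runtime overhead
    fits in the GPU's memory."""
    required = model_size_gb * (1 + overhead_fraction)
    return required <= gpu_vram_gb

# A 13B model quantized to 4-bit is roughly 8 GB of weights.
print(fits_in_vram(8.0, 24.0))   # fits on a 24 GB card
print(fits_in_vram(40.0, 24.0))  # a 70B-class model does not
```

A real check has to account for context length, quantization format, and multi-GPU splits, which is exactly why a dedicated tool is useful.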

What I believe.

AI shouldn't be locked behind API keys and usage caps. The future of computing is personal: running models on hardware you own, with data that never leaves your machine.

01

Own your compute

Cloud is convenient but local is sovereign. Every model you can run on your own hardware is one less dependency on someone else's infrastructure.

02

Hardware should be accessible

You shouldn't need a degree in electrical engineering to build a workstation. The knowledge should be free, clear, and honest.

03

Benchmarks over marketing

I don't care what the spec sheet says. I care about tokens per second, model load times, and actual VRAM usage under load.
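Tokens per second is just generated tokens divided by wall-clock time, but it's worth measuring yourself rather than trusting vendor numbers. A minimal Python sketch, where `generate_fn` stands in for any real inference call (it is a placeholder, not a specific library's API):

```python
import time

def tokens_per_second(generate_fn, n_tokens: int) -> float:
    """Time a generation call and report throughput in tokens/second.

    generate_fn is a placeholder for whatever actually produces
    n_tokens of output (llama.cpp, vLLM, etc.).
    """
    start = time.perf_counter()
    generate_fn(n_tokens)
    elapsed = time.perf_counter() - start
    return n_tokens / elapsed
```

In practice you'd warm the model up first and average several runs, since the first generation after load is dominated by one-time costs.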

"

The best workstation is the one you actually understand. Not the most expensive one.

, Leon

Daily drivers

  • GPU: RTX 4090 24GB
  • CPU: Ryzen 9 7950X
  • RAM: 128GB DDR5
  • OS: Ubuntu Server 24.04
  • Runtime: llama.cpp + vLLM
  • Editor: VS Code + Copilot

Let's build something.

Whether you're training your first model or scaling up to multi-GPU inference, MistAI is here to help you pick the right hardware for the job.