TensorRT Just Fixed Local Image Generation

For months now, running modern, heavy diffusion models locally has felt like trying to stuff a mattress into a compact car. You lower the batch size. You offload the text encoders to the CPU. You pray to the hardware gods, hit run, and watch the terminal spit out yet another `RuntimeError: CUDA out of memory`.