> Inference really doesn't need that much power. I live on solar and run decently large models on my own equipment.
I think they meant LLM *training*, which uses far more power than inference. And most people just use pre-trained models like ChatGPT, which are huge, so I'd bet their training power cost is very high compared to a small model trained locally on non-enterprise hardware.
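A rough back-of-envelope sketch of that gap, using purely illustrative numbers (the GPU counts, wattages, and durations below are assumptions, not measured figures for any real model):

```python
# Back-of-envelope: training energy vs. local inference energy.
# All numbers are illustrative assumptions, not measurements.

def energy_kwh(num_gpus: int, watts_per_gpu: float, hours: float) -> float:
    """Total energy in kWh for a GPU workload (power x time)."""
    return num_gpus * watts_per_gpu * hours / 1000.0

# Hypothetical large-model training run: 1,000 GPUs at 400 W for 30 days.
training = energy_kwh(num_gpus=1000, watts_per_gpu=400, hours=30 * 24)

# Hypothetical home inference: 1 GPU at 300 W, 2 hours a day for a year.
inference = energy_kwh(num_gpus=1, watts_per_gpu=300, hours=2 * 365)

print(f"training:  {training:,.0f} kWh")   # 288,000 kWh
print(f"inference: {inference:,.0f} kWh")  # 219 kWh
print(f"ratio:     {training / inference:,.0f}x")
```

Even with conservative assumptions, a one-time frontier-scale training run lands orders of magnitude above a year of personal inference, which is the asymmetry the comment is pointing at.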

