30 Oct 2025 18:13:58 [email protected]:

> i think they meant LLM training, which uses way more power than
> inference.  and most people just use pre-trained stuff like
> chatgpt/etc, which are huge models, so i bet their training power
> cost is very high compared to a small model trained locally on
> non-enterprise hardware

MIT has stated that each query to e.g. ChatGPT is like running a microwave for 
8 seconds (not sure at what wattage).
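For scale, assuming a typical 1000 W microwave (my assumption, not MIT's 
figure): 1000 W × 8 s = 8000 J, or about 2.2 Wh per query.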

This URL is one way to avoid AI Overviews, which I assume use much more 
energy than plain search results; I dislike them personally in any case.

https://www.google.com/search?q=%20&udm=14
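
For anyone scripting this, here is a rough sketch (Python; the function name 
is just mine) that builds such a URL for an arbitrary query. The udm=14 
parameter selects Google's "Web" tab, which shows plain results without an 
AI Overview:

    from urllib.parse import urlencode

    def plain_search_url(query: str) -> str:
        # udm=14 requests Google's "Web" results view (no AI Overview)
        return "https://www.google.com/search?" + urlencode({"q": query, "udm": 14})

    print(plain_search_url("llm training energy cost"))
    # https://www.google.com/search?q=llm+training+energy+cost&udm=14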
