Hi forest,

1 GiB of RAM is a practical floor for sustained Tor relay throughput. Even 
with CPU and bandwidth to spare, memory pressure alone can cap throughput and 
make multiple relays per host counterproductive. Your ~150 Mbps on well-tuned 
1 GiB EU guards matches our experience; in US East we typically see ~10–20 
Mbps for comparable guards.
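A quick way to see how much of a 1 GiB host is already lost to unswappable kernel memory is to look at SUnreclaim in /proc/meminfo. A minimal sketch (the numbers in the sample snapshot are made up for illustration, not from a real host):

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style 'Key: value kB' lines into a dict of kB values."""
    fields = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        parts = rest.split()
        if parts:
            fields[key.strip()] = int(parts[0])
    return fields

# Illustrative snapshot; on a live host, read open("/proc/meminfo").read() instead.
sample = """\
MemTotal:        1014712 kB
MemAvailable:     180200 kB
SUnreclaim:       112340 kB
AnonPages:        610024 kB
"""

info = parse_meminfo(sample)
# SUnreclaim (conntrack entries, socket structures, ...) cannot be swapped
# out even with zswap, so it caps how many relay processes the host can hold.
unswappable_pct = 100 * info["SUnreclaim"] / info["MemTotal"]
print(f"unswappable kernel slab: {unswappable_pct:.1f}% of RAM")
```

Watching that percentage climb as you add a second relay process gives an early warning before the pressure you describe brings the system down.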

At very low consensus weights, a relay’s direct traffic contribution is 
minimal, which reduces the marginal value of added path and failure-domain 
diversity (AS, geography, operator).

With the current measurement system, there is no reliable way to predict a 
maximum achievable consensus weight for a given AS or location in advance, 
especially where no relays already exist. Consensus weight depends on what 
bandwidth authorities observe over time, driven by traffic distribution, 
authority-to-relay path quality, and nearby competing capacity. Even locations 
with good EU connectivity can underperform if they are not on paths the 
authorities and relays exercise heavily. In practice, the ceiling is only 
discoverable empirically over weeks of stable uptime.
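For that empirical discovery, Onionoo (the Tor network status API at onionoo.torproject.org) exposes each relay's current consensus_weight and observed_bandwidth in its details documents, so you can log both over those weeks of uptime. A sketch parsing a hand-made sample in that shape (the fingerprint and all numbers below are placeholders, not real measurements):

```python
import json

# Hand-made sample shaped like an Onionoo "details" document, e.g. from
# https://onionoo.torproject.org/details?lookup=<FINGERPRINT>.
sample_response = json.loads("""
{
  "relays": [
    {
      "nickname": "exampleRelay",
      "fingerprint": "0000000000000000000000000000000000000000",
      "consensus_weight": 420,
      "observed_bandwidth": 2500000,
      "advertised_bandwidth": 12500000
    }
  ]
}
""")

for relay in sample_response["relays"]:
    # observed_bandwidth is in bytes/s; consensus_weight is the raw weight
    # assigned by the bandwidth authorities. Tracking both over weeks of
    # stable uptime is how the ceiling for a location shows itself.
    mbps = relay["observed_bandwidth"] * 8 / 1e6
    print(f'{relay["nickname"]}: weight={relay["consensus_weight"]}, '
          f'observed={mbps:.1f} Mbps')
```

A flat observed_bandwidth with a stalled consensus_weight over several weeks is the clearest signal that a location has hit its measurement ceiling.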

On the tor-dev side, two useful longer-term questions are whether there are 
plans to improve measurement fairness for underrepresented regions without 
encouraging gaming, and whether Arti is expected to materially improve relay 
memory efficiency, especially for low-memory environments.

Best,
Tor at 1AEO

On Monday, January 5th, 2026 at 1:15 AM, forest-relay-contact--- via tor-relays 
<[email protected]> wrote:

> Hello.
> 
> Tor at 1AEO wrote:
> 
> > Short answer: yes — adding more relays is a reasonable experiment if
> > the host has spare CPU, RAM, and bandwidth.
> 
> Unfortunately, while there is plenty of remaining bandwidth and the CPU
> is nowhere near saturated, memory is a limiting factor. Most of these
> low-performance relays have only 1 GiB RAM. When combined with zswap and
> a lighter kernel, my fastest 1 GiB relays can achieve about 150 Mbps.
> But running multiple relays adds non-swappable kernel memory (conntrack
> tables, socket structures, etc.), and the resulting memory pressure
> rapidly brings down the system.
> 
> Hopefully there will be improvements to bandwidth measurement techniques
> in the future for relays which do have good connectivity to Europe.
> Until then, I've just been adding I2P routers, which use ~25x more
> bandwidth than Tor on the slower relays (3 Mbps vs 120 Kbps) and use
> much less resident memory.
> 
> > The cost imbalance you’re seeing is expected today: operating relays
> > outside the EU materially improves network diversity, but it can be
> > significantly more expensive per unit of traffic than running relays
> > in EU-dense locations.
> 
> How does the improved diversity help if effectively no one is passing
> traffic through my relays?
> 
> Is there any way I can estimate the maximum consensus weight that I will
> be able to achieve with a particular AS in a particular location if no
> one else is already using it? Many of the poorly performing relays I
> have actually have decent connectivity to the EU, without any single AS
> or IXP being a bottleneck for all traffic.
> 
> Regards,
> forest

