Yeah, I thought it looked accurate. Attached is the full config for reference.
I’m still seeing issues where nginx frequently caches stats data from one of
the non-primary nodes even when I verify the primary node is responding when I
query it directly on its internal 10-net IP address. It’s pu
On 2/25/19 6:37 PM, Todd Fleisher wrote:
>> On Feb 23, 2019, at 8:35 PM, Jeremy T. Bouse
>> <jeremy.bo...@undergrid.net> wrote:
>>
>> I didn't have as many locations configured as you show in your example
>> but it looked like you were defining the map but I didn't see it being
>> used in an
I'd previously had only 2 instances, and if they weren't peering outside
and one went down, it seemed to cause problems. Since I was re-deploying
from scratch this time, I went with 3 backend secondary nodes, with the
primary node doing the peering outside my network. This way I can
take 1 node out and
I don’t know if Kristian chose that number based on actual SKS load, since it
can be hard to predict how much traffic the various servers in the pool may
receive at any given time. That being said, the rule of 3 is pretty standard in
operations to prevent a single point of failure from being rev
Thanks for the information everyone. A further question: I saw the advice of
a minimum of three servers. Anyone know how that was arrived at, or if there
is a recommendation on how many queries an individual SKS back-end can handle?
Jonathon
Jonathon Weiss
MIT/IS&T/
So I'll preface this with a caveat that I know a couple of the recipient
mail servers are having some issues with my DMARC/DKIM/SPF settings so I
don't know if everyone is receiving my posts.
I've updated my configuration on sks.undergrid.net using NGINX and
load-balancing 4 SKS nodes... Here are
I ended up with the following NGINX configuration...
in /etc/nginx/conf.d/upstream.conf:
upstream sks_secondary {
    server 127.0.0.1:11371    weight=5;
    server 172.16.20.52:11371 weight=10;
    server 172.16.20.53:11371 weight=10;
    server 172.16.20.54:11371 weight=10;
}
upstream sks_primary
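
For readers following along: upstream blocks like the ones above only take effect once a `location` block references them via `proxy_pass`. A minimal sketch of such wiring (the location split and server names are illustrative assumptions, not the poster's actual configuration):

```nginx
# Hypothetical sketch; the write/read split shown here is an assumption.
server {
    listen 11371;
    server_name sks.example.net;

    # Key submissions go to the primary node only.
    location /pks/add {
        proxy_pass http://sks_primary;
    }

    # Lookups are load-balanced across the secondary pool.
    location /pks/lookup {
        proxy_pass http://sks_secondary;
    }
}
```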
> On Feb 23, 2019, at 8:35 PM, Jeremy T. Bouse
> wrote:
> I didn't have as many locations configured as you show in your example but it
> looked like you were defining the map but I didn't see it being used in any
> of your location blocks unless I'm missing something. Shouldn't you be using
>
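
To illustrate the point being made about the `map`: an nginx `map` block only defines a variable, and it has no effect until that variable is referenced somewhere, such as in a `proxy_pass`. A minimal hedged sketch (the method-based routing and names are illustrative, not from the config under discussion):

```nginx
# Hypothetical sketch: route by request method via a map variable.
map $request_method $sks_backend {
    default  sks_secondary;   # reads can hit any secondary node
    POST     sks_primary;     # key submissions go to the primary
}

server {
    listen 11371;
    location /pks/ {
        # Without this reference, the map above is inert.
        proxy_pass http://$sks_backend;
    }
}
```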
Hi Todd,
The timing of this thread and your reply are ideal as I'm in the
process of working to fix my cluster that has been down for some time
due to system failure and lack of available time on my part to repair
it. Since I'm already in the process, I've taken the opportunity to revisit the setup itself.
On Sun, Feb 17, 2019 at 09:18:11AM -0800, Todd Fleisher wrote:
> The setup uses a caching NGINX server to reduce load on the backend nodes
> running SKS.
> His recommendation is to run at least 3 SKS instances in the backend (I’m
> running 4).
> Only one of the backend SKS nodes is configured to
Hi Jonathon,
I've previously spoken with Kristian about this off-list in an attempt to
improve the performance & resilience of my own server(s) pool(s), so let me
share his recommendations which I’ve been using with minimal issues.
The setup uses a caching NGINX server to reduce load on the back
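
A caching layer of the kind described can be approximated with nginx's `proxy_cache` machinery. The following is a hedged sketch only; the cache path, zone name, sizes, and TTLs are assumptions, not Kristian's actual values:

```nginx
# http-context: define an on-disk cache zone (all values illustrative).
proxy_cache_path /var/cache/nginx/sks levels=1:2 keys_zone=sks_cache:10m
                 max_size=1g inactive=60m;

server {
    listen 11371;
    location /pks/lookup {
        proxy_cache sks_cache;
        proxy_cache_valid 200 5m;   # cache successful lookups briefly
        # Serve stale answers if all backends are down or slow.
        proxy_cache_use_stale error timeout updating;
        proxy_pass http://sks_secondary;
    }
}
```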
Hello all,
I seem to recall that several key server operators are running in a
configuration with multiple SKS instances on a single machine, and others with
multiple machines running SKS.
Would anyone doing either of these things be willing to share their
configurations (especially sksconf