[Sks-devel] Unusual traffic for key 0x69D2EAD9 and 0xB33B4659

2019-01-12 Thread Shengjing Zhu
Hi,

While recovering my key server tonight, I found unusual traffic for
keys 0x69D2EAD9 and 0xB33B4659. It caused high load on my server
when it tried to sync with the network.

Requests counted over 2h:

    178 0xB33B4659
    186 0x69D2EAD9
    290 0x2016349F5BC6F49340FCCAF99F9169F4B33B4659
    336 0x1013D73FECAC918A0A25823986CE877469D2EAD9

Requests come from pool.sks-keyservers.net. Compared to the number of
servers behind the pool, I think these requests are quite unusual.
Does anyone know what is happening with these two keys?

-- 
Regards,
Shengjing Zhu

___
Sks-devel mailing list
Sks-devel@nongnu.org
https://lists.nongnu.org/mailman/listinfo/sks-devel


[Sks-devel] Unusual traffic for key 0x69D2EAD9 and 0xB33B4659

2019-03-20 Thread Andrew Nagy
All,

Looking to figure out a solution here. A maintainer of the Ubuntu key
server informed me about the discussion of the following keys, 0x69D2EAD9
and 0xB33B4659, here:
https://lists.nongnu.org/archive/html/sks-devel/2019-01/msg3.html

Unfortunately the email address modu...@freepbx.org is just a black hole
and so the email that was sent there from Brent Saner was lost forever.

I currently run the FreePBX project, which uses the GPG network to sign
modules. Unfortunately, due to:
https://bitbucket.org/skskeyserver/sks-keyserver/issues/57/anyone-can-make-any-pgp-key-unimportable

someone poisoned the master key that we use to sign all other keys. This
has caused issues on the SKS network for a while. However, since January
we've noticed more and more SKS servers are now just timing out and not
returning our requests for 0xB33B4659. I assume that is probably
because of the message thread from January.

The way the FreePBX software works is that it checks nightly against a
list of key servers to redownload 0x69D2EAD9 and 0xB33B4659 and re-verify.
However, it appears that for many of you the bandwidth this causes is much
too high. Internally we need to recreate our master key without the poison,
but I am afraid it will just as easily be re-poisoned. Also, even if we put
a new key out, you will notice traffic from those keys increase over time
as well, and we will be back to the bandwidth issue.

Perhaps we should be using GPG locally instead of through the GPG key
network. Let me know what you think.

Thank you


Re: [Sks-devel] Unusual traffic for key 0x69D2EAD9 and 0xB33B4659

2019-01-12 Thread brent s.
On 1/12/19 2:15 PM, Shengjing Zhu wrote:
> Hi,
> 
> While I rescued my key server back this night, I found the unusual
> traffic for key 0x69D2EAD9 and 0xB33B4659. It caused load to my server
> when it tried to sync up with the network.
> 
> Request counted in 2h:
> 
>178 0xB33B4659
> 186 0x69D2EAD9
> 290 0x2016349F5BC6F49340FCCAF99F9169F4B33B4659
> 336 0x1013D73FECAC918A0A25823986CE877469D2EAD9
> 
> Requests come from pool.sks-keyservers.net. Compare to the server
> number behind the pool,  I think these requests are quite unusual.
> Does anyone know what happens to these two keys?
> 

they're for FreePBX and have caused at least one other issue:

https://lists.gnu.org/archive/html/sks-devel/2018-07/msg00077.html

based on this:

https://www.dslreports.com/forum/r30661088-PBX-FreePBX-for-the-Raspberry-Pi~start=810

it would SEEM they're fetched as part of the FreePBX installation process,
but it's also possible that something in normal operation fetches the key
frequently.

i see three possible situations:

0.) a recent update was made to FreePBX that fetches the key, even if it
exists in the keyring or a key refresh is called (very likely)
1.) a random attack targeting you specifically is occurring and they just
randomly picked that key ID (a little likely, but not very)
2.) the key has been compromised and is being used as part of a botnet
for some purpose (extremely unlikely)

i'll see if i can find out from the freepbx source/the project devs.

will reply when i have further info.


meanwhile, can you let us know if those requests are all coming from the
same IP or allocation block?

-- 
brent saner
https://square-r00t.net/
GPG info: https://square-r00t.net/gpg-info





Re: [Sks-devel] Unusual traffic for key 0x69D2EAD9 and 0xB33B4659

2019-01-12 Thread Shengjing Zhu
Sorry for top-posting; I'm on my mobile phone.

Requests are coming from different networks, at least hundreds of IPs.

And it seems my server (pgp.ustc.edu.cn) is down again... I'll check it when
I get home. If it's caused by the two keys... I may blacklist them...

brent s. wrote on Sun, 13 Jan 2019 at 04:45:

> On 1/12/19 2:15 PM, Shengjing Zhu wrote:
> > Hi,
> >
> > While I rescued my key server back this night, I found the unusual
> > traffic for key 0x69D2EAD9 and 0xB33B4659. It caused load to my server
> > when it tried to sync up with the network.
> >
> > Request counted in 2h:
> >
> >178 0xB33B4659
> > 186 0x69D2EAD9
> > 290 0x2016349F5BC6F49340FCCAF99F9169F4B33B4659
> > 336 0x1013D73FECAC918A0A25823986CE877469D2EAD9
> >
> > Requests come from pool.sks-keyservers.net. Compare to the server
> > number behind the pool,  I think these requests are quite unusual.
> > Does anyone know what happens to these two keys?
> >
>
> they're for FreePBX and have caused at least one other issue:
>
> https://lists.gnu.org/archive/html/sks-devel/2018-07/msg00077.html
>
> based on this:
>
>
> https://www.dslreports.com/forum/r30661088-PBX-FreePBX-for-the-Raspberry-Pi~start=810
>
> it would SEEM they're part of the FreePBX installation process, but it's
> possible that something from normal operation even fetches the key
> operationally and frequently.
>
> i see three possible situations:
>
> 0.) a recent update was made to FreePBX that fetches the key, even if it
> exists in the keyring or a key refresh is called (very likely)
> 1.) a random attack targeting you specifically is ocurring and they just
> randomly picked that key ID (a little likely, but not very)
> 2.) the key has been compromised and is being used as part of a botnet
> for some purpose (extremely unlikely)
>
> i'll see if i can find out from the freepbx source/the project devs.
>
> will reply when i have further info.
>
>
> meanwhile, can you let us know if those requests are all coming from the
> same IP or allocation block?
>
> --
> brent saner
> https://square-r00t.net/
> GPG info: https://square-r00t.net/gpg-info
>
> ___
> Sks-devel mailing list
> Sks-devel@nongnu.org
> https://lists.nongnu.org/mailman/listinfo/sks-devel
>


Re: [Sks-devel] Unusual traffic for key 0x69D2EAD9 and 0xB33B4659

2019-01-12 Thread brent s.
On 1/13/19 12:15 AM, Shengjing Zhu wrote:
> Sorry for top replying. I'm using mobile phone.
> 
> Requests are coming from different network, at least hundreds IP.
> 
> And it seems my server(pgp.ustc.edu.cn ) is down
> again... I'll check it when I got home. If it's caused by the two keys..
> I may blacklist them...
> 

i've asked on FreePBX's channel and emailed the organization directly
(via their key's UID info) but have not yet gotten a response from
either. it IS the weekend, so it may be a bit...

meanwhile you may want to firewall off your HKP port(s) (recon port
should still be fine to keep open, but someone correct me if not) and
disable the forwarding for HKPS (if you have it).


-- 
brent saner
https://square-r00t.net/
GPG info: https://square-r00t.net/gpg-info





Re: [Sks-devel] Unusual traffic for key 0x69D2EAD9 and 0xB33B4659

2019-01-12 Thread Gabor Kiss
> Request counted in 2h:
> 
>178 0xB33B4659
> 186 0x69D2EAD9
> 290 0x2016349F5BC6F49340FCCAF99F9169F4B33B4659
> 336 0x1013D73FECAC918A0A25823986CE877469D2EAD9

I checked my logs. 15% of the recent 18k requests were related to these keys.
They belong to:

FreePBX Module Signing (This is the master key to sign FreePBX Modules) 

FreePBX Mirror 1 (Module Signing - 2014/2015) 

I guess there was some software upgrade on tens of thousands of Asterisk
nodes. Looks normal.

Gabor



Re: [Sks-devel] Unusual traffic for key 0x69D2EAD9 and 0xB33B4659

2019-01-12 Thread brent s.
On 1/13/19 1:49 AM, Gabor Kiss wrote:
>> Request counted in 2h:
>>
>>178 0xB33B4659
>> 186 0x69D2EAD9
>> 290 0x2016349F5BC6F49340FCCAF99F9169F4B33B4659
>> 336 0x1013D73FECAC918A0A25823986CE877469D2EAD9
> 
> I checked my logs. 15% of the recent 18k requests were related to these keys.
> They belong to:
> 
> FreePBX Module Signing (This is the master key to sign FreePBX Modules) 
> 
> FreePBX Mirror 1 (Module Signing - 2014/2015) 
> 
> I guess there was some software upgrade on ten thousands of Asterix nodes.
> Looks normal.
> 
> Gabor

last stable release was may 2018, so i'm not sure on that personally.
i'd expect a lot MORE if that were the case.[0] it's... a really popular
piece of software. i even used it when i managed some VoIP systems.

it's just at the amount where it's inordinately high, but low enough to
make me not think it was something like a new release.



[0] "With over 1 MILLION production systems worldwide and 20,000 new
systems installed monthly, ..."
https://www.freepbx.org/

-- 
brent saner
https://square-r00t.net/
GPG info: https://square-r00t.net/gpg-info





Re: [Sks-devel] Unusual traffic for key 0x69D2EAD9 and 0xB33B4659

2019-01-30 Thread Kristian Fiskerstrand
On 1/12/19 8:15 PM, Shengjing Zhu wrote:
>  I think these requests are quite unusual.
> Does anyone know what happens to these two keys?

Just to add a comment on this: adding a cache on the load balancer is
really a nice way to slow down hits on the underlying SKS nodes. I keep a
10-minute cache in nginx, which really makes life more pleasant.
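
A caching setup along these lines might look like the sketch below (the
cache path, zone name, and backend port are illustrative assumptions, not
anyone's actual configuration):

```nginx
# Hypothetical sketch: cache HKP lookup responses in front of a local SKS
# node so repeated fetches of the same key are served from the cache.
proxy_cache_path /var/cache/nginx/sks levels=1:2 keys_zone=sks_cache:10m
                 max_size=1g inactive=60m;

server {
    listen 11371;

    location /pks/lookup {
        proxy_cache       sks_cache;
        proxy_cache_valid 200 10m;           # serve cached hits for 10 minutes
        proxy_cache_key   "$request_uri";
        proxy_pass        http://127.0.0.1:11372;  # assumed local SKS backend
    }
}
```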

-- 

Kristian Fiskerstrand
Blog: https://blog.sumptuouscapital.com
Twitter: @krifisk

Public OpenPGP keyblock at hkp://pool.sks-keyservers.net
fpr:94CB AFDD 3034 5109 5618 35AA 0B7F 8B60 E3ED FAE3

"Action is the foundational key to all success"
(Pablo Picasso)





Re: [Sks-devel] Unusual traffic for key 0x69D2EAD9 and 0xB33B4659

2019-01-30 Thread Martin Dobrev
Hi,

My observations so far show that both keys generate 2+ TB/month traffic on
average across all my clustered nodes. I'm running nginx + Varnish in-memory
cache tuned to a 5-minute TTL, which gives plenty of CPU cycles for the
never-ending EventLoop alarm loops. The latter cause load-average spikes of
up to 10 with just 4 Docker containers running on a 12-core system.

Don't get me wrong. The throttling penalty is something I'd swallow as long
as we keep the network running.

Regards,
Martin

keyserver.dobrev.eu | pgp.dobrev.it

 Original message 
From: Kristian Fiskerstrand
Date: 30/01/2019 20:18 (GMT+00:00)
To: Shengjing Zhu, sks-devel@nongnu.org
Subject: Re: [Sks-devel] Unusual traffic for key 0x69D2EAD9 and 0xB33B4659

On 1/12/19 8:15 PM, Shengjing Zhu wrote:
> I think these requests are quite unusual.
> Does anyone know what happens to these two keys?

Just to add a comment on this, adding a cache on the load-balancer is
really a nice way to slow down hits on the underlying SKS nodes, I keep
cache for 10 minutes in nginx, which really makes life more pleasant.

--
Kristian Fiskerstrand
Blog: https://blog.sumptuouscapital.com
Twitter: @krifisk

Public OpenPGP keyblock at hkp://pool.sks-keyservers.net
fpr:94CB AFDD 3034 5109 5618 35AA 0B7F 8B60 E3ED FAE3

"Action is the foundational key to all success"
(Pablo Picasso)


Re: [Sks-devel] Unusual traffic for key 0x69D2EAD9 and 0xB33B4659

2019-02-04 Thread Rolf Wuerdemann

Hi,

Don't get me wrong, but within three days I've seen 450G of traffic,
99.9% of which can be attributed to sks. Extrapolated to 30 days this
means 4.5T (which is in good agreement with your 2+ TB per key for these
two poisoned keys).

With this amount of traffic, and the possibility of more such keys (thus
more traffic) appearing at any moment, I think it's only a question of
time until the network, with the current implementation, vanishes.
Traffic increased by roughly a factor of 300 (15G -> 4.5T) within twelve
months, while the number of nodes in the network decreased by at least a
factor of two over the same period.

So: where to go and how?

Just my 2ct,

   rowue

On 2019-01-30 22:09, Martin Dobrev wrote:

Hi,

My observations so far show that both keys generate  2+ TB/month
traffic on average for all my clustered nodes. I'm running nginx +
Varnish in-memory cache tuned at 5 minutes TTL which gives plenty of
CPU cycles for the never-ending EventLoop alarm loops. The latter
cause load-average spikes of up to 10 with just 4 Docker containers
running on a 12 core system.
Don't get me wrong. The throttling penalty is something I'd swallow-up
as long as we keep the network running.

Regards,
Martin

keyserver.dobrev.eu | pgp.dobrev.it

 Original message 
From: Kristian Fiskerstrand

Date: 30/01/2019 20:18 (GMT+00:00)
To: Shengjing Zhu , sks-devel@nongnu.org
Subject: Re: [Sks-devel] Unusual traffic for key 0x69D2EAD9 and
0xB33B4659

On 1/12/19 8:15 PM, Shengjing Zhu wrote:

 I think these requests are quite unusual.
Does anyone know what happens to these two keys?


Just to add a comment on this, adding a cache on the load-balancer is
really a nice way to slow down hits on the underlying SKS nodes, I
keep
cache for 10 minutes in nginx, which really makes life more pleasant.

--

Kristian Fiskerstrand
Blog: https://blog.sumptuouscapital.com
Twitter: @krifisk

Public OpenPGP keyblock at hkp://pool.sks-keyservers.net
fpr:94CB AFDD 3034 5109 5618 35AA 0B7F 8B60 E3ED FAE3

"Action is the foundational key to all success"
(Pablo Picasso)


--
Security is an illusion - Datasecurity twice
Rolf Würdemann  -  ro...@digitalis.org  -  DL9ROW
GnuPG fingerprint:EEDC BEA9 EFEA 54A9 E1A9  2D54 69CC 9F31 6C64 206A
xmpp: ro...@digitalis.org E1189573 6B4A150C A0C2BF5A 5553F865 0B9CBF7A
  ro...@jabber.ccc.de 64CBBB68 0A3514A4 026FC1E7 5328CE87 AEE2185F



Re: [Sks-devel] Unusual traffic for key 0x69D2EAD9 and 0xB33B4659

2019-02-04 Thread Martin Dobrev
Hi,

I've spent the last week trying to optimize the configuration as much as
possible. Following advice from a previous mail I've added:

command_timeout: 600
wserver_timeout: 30
max_recover: 150

to my sksconf, and it seems this fixed the majority of the EventLoop
failures. I've added DB_CONFIG files in the KDB/PTree folders to get rid
of the DB archive logs that were causing plenty of IO load too.

My clusters are now happily responding to queries and the load average is
below one. Traffic-wise things look better too, ~20GB/day.


Kind regards,
Martin Dobrev

P.S. Adding/changing DB_CONFIG might cause an error in the databases
that you can easily fix by running

db_recover -e -v -h /{KDB,PTree}
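
For reference, a DB_CONFIG along the lines commonly suggested for SKS
enables automatic removal of old transaction logs; the values here are
illustrative and should be tuned to the host's memory:

```
set_mp_mmapsize  268435456
set_cachesize    0 134217728 1
set_flags        DB_LOG_AUTOREMOVE
set_lg_regionmax 1048576
set_lg_max       104857600
```

DB_LOG_AUTOREMOVE tells Berkeley DB to delete log files as soon as they
are no longer needed, which is what stops the archive logs from piling up.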


On 04/02/2019 09:49, Rolf Wuerdemann wrote:
> Hi,
>
> Don't get me wrong, but within three days I've got 450G traffic
> which can be assigned to sks by 99.9%. Estimated to 30 days this
> means 4.5T (which is in good agreement of your 2+T/Key for these
> two poison keys).
>
> With this amount of traffic and the possibility to get
> more of this keys (thus more traffic) every moment, I think it's
> only a question of time until the network with the current
> implementation will vanish. Traffic increased roughly a factor of
> 300 (15G->4.5T) within twelve months, nodes within the network
> decreased by a factor of two at least for the same time.
>
> So: where to go and how?
>
> Just my 2ct,
>
>    rowue
>
> On 2019-01-30 22:09, Martin Dobrev wrote:
>> Hi,
>>
>> My observations so far show that both keys generate  2+ TB/month
>> traffic on average for all my clustered nodes. I'm running nginx +
>> Varnish in-memory cache tuned at 5 minutes TTL which gives plenty of
>> CPU cycles for the never-ending EventLoop alarm loops. The latter
>> cause load-average spikes of up to 10 with just 4 Docker containers
>> running on a 12 core system.
>> Don't get me wrong. The throttling penalty is something I'd swallow-up
>> as long as we keep the network running.
>>
>> Regards,
>> Martin
>>
>> keyserver.dobrev.eu | pgp.dobrev.it
>>
>> -------- Original message 
>> From: Kristian Fiskerstrand
>> 
>> Date: 30/01/2019 20:18 (GMT+00:00)
>> To: Shengjing Zhu , sks-devel@nongnu.org
>> Subject: Re: [Sks-devel] Unusual traffic for key 0x69D2EAD9 and
>> 0xB33B4659
>>
>> On 1/12/19 8:15 PM, Shengjing Zhu wrote:
>>>  I think these requests are quite unusual.
>>> Does anyone know what happens to these two keys?
>>
>> Just to add a comment on this, adding a cache on the load-balancer is
>> really a nice way to slow down hits on the underlying SKS nodes, I
>> keep
>> cache for 10 minutes in nginx, which really makes life more pleasant.
>>
>> -- 
>> 
>> Kristian Fiskerstrand
>> Blog: https://blog.sumptuouscapital.com
>> Twitter: @krifisk
>> 
>> Public OpenPGP keyblock at hkp://pool.sks-keyservers.net
>> fpr:94CB AFDD 3034 5109 5618 35AA 0B7F 8B60 E3ED FAE3
>> 
>> "Action is the foundational key to all success"
>> (Pablo Picasso)
>> ___
>> Sks-devel mailing list
>> Sks-devel@nongnu.org
>> https://lists.nongnu.org/mailman/listinfo/sks-devel
>




Re: [Sks-devel] Unusual traffic for key 0x69D2EAD9 and 0xB33B4659

2019-02-06 Thread Rolf Wuerdemann

With your suggestions:

load average below 1
Traffic: ~150G/day

Best,

   Rolf

On 2019-02-04 12:52, Martin Dobrev wrote:

Hi,

I've spent last week trying to optimize configuration as much as
possible. Following advise from a previous mail I've added:


command_timeout: 600
wserver_timeout: 30
max_recover: 150


to my sksconf and it seems this fixed majority of the EventLoop
failures. I've added DB_CONFIG in KDB/PTree folders to get rid of DB
archive logs that were causing plenty of IO load too.

My clusters are now happily responding to queries and load-average is
bellow one. Traffic wise things look better too, ~20GB/day.

Kind regards,
Martin Dobrev

P.S. Adding/changing DB_CONFIG might cause an error in the databases
that you can easily fix by running

db_recover -e -v -h /{KDB,PTree}

On 04/02/2019 09:49, Rolf Wuerdemann wrote:


Hi,

Don't get me wrong, but within three days I've got 450G traffic
which can be assigned to sks by 99.9%. Estimated to 30 days this
means 4.5T (which is in good agreement of your 2+T/Key for these
two poison keys).

With this amount of traffic and the possibility to get
more of this keys (thus more traffic) every moment, I think it's
only a question of time until the network with the current
implementation will vanish. Traffic increased roughly a factor of
300 (15G->4.5T) within twelve months, nodes within the network
decreased by a factor of two at least for the same time.

So: where to go and how?

Just my 2ct,

rowue

On 2019-01-30 22:09, Martin Dobrev wrote:
Hi,

My observations so far show that both keys generate  2+ TB/month
traffic on average for all my clustered nodes. I'm running nginx +
Varnish in-memory cache tuned at 5 minutes TTL which gives plenty of

CPU cycles for the never-ending EventLoop alarm loops. The latter
cause load-average spikes of up to 10 with just 4 Docker containers
running on a 12 core system.
Don't get me wrong. The throttling penalty is something I'd
swallow-up
as long as we keep the network running.

Regards,
Martin

keyserver.dobrev.eu | pgp.dobrev.it

 Original message 
From: Kristian Fiskerstrand

Date: 30/01/2019 20:18 (GMT+00:00)
To: Shengjing Zhu , sks-devel@nongnu.org
Subject: Re: [Sks-devel] Unusual traffic for key 0x69D2EAD9 and
0xB33B4659

On 1/12/19 8:15 PM, Shengjing Zhu wrote:
I think these requests are quite unusual.
Does anyone know what happens to these two keys?

Just to add a comment on this, adding a cache on the load-balancer
is
really a nice way to slow down hits on the underlying SKS nodes, I
keep
cache for 10 minutes in nginx, which really makes life more
pleasant.

--

Kristian Fiskerstrand
Blog: https://blog.sumptuouscapital.com
Twitter: @krifisk

Public OpenPGP keyblock at hkp://pool.sks-keyservers.net
fpr:94CB AFDD 3034 5109 5618 35AA 0B7F 8B60 E3ED FAE3

"Action is the foundational key to all success"
(Pablo Picasso)


--
Security is an illusion - Datasecurity twice
Rolf Würdemann  -  ro...@digitalis.org  -  DL9ROW
GnuPG fingerprint:EEDC BEA9 EFEA 54A9 E1A9  2D54 69CC 9F31 6C64 206A
xmpp: ro...@digitalis.org E1189573 6B4A150C A0C2BF5A 5553F865 0B9CBF7A
  ro...@jabber.ccc.de 64CBBB68 0A3514A4 026FC1E7 5328CE87 AEE2185F



Re: [Sks-devel] Unusual traffic for key 0x69D2EAD9 and 0xB33B4659

2019-02-06 Thread Todd Fleisher
I also applied these configuration options earlier today to all the
servers in one of my pools that was experiencing high IO load and repeated
SigAlarms:
command_timeout: 600
wserver_timeout: 30
max_recover: 150

And since then, everything has been quiet:

IO on the main node that gossips externally: https://i.imgur.com/ERgz0Xo.jpg

IO from another node in the same pool that gossips internally with the above
node: https://i.imgur.com/wsaxrJ5.jpg

Hopefully this can help other operators keep things in better shape for the 
time being.

-T


> On Feb 6, 2019, at 3:22 AM, Rolf Wuerdemann  wrote:
> 
> With your suggestions:
> 
> load average below 1
> Traffic: ~150G/day
> 
> Best,
> 
>   Rolf
> 
> On 2019-02-04 12:52, Martin Dobrev wrote:
>> Hi,
>> I've spent last week trying to optimize configuration as much as
>> possible. Following advise from a previous mail I've added:
>>> command_timeout: 600
>>> wserver_timeout: 30
>>> max_recover: 150
>> to my sksconf and it seems this fixed majority of the EventLoop
>> failures. I've added DB_CONFIG in KDB/PTree folders to get rid of DB
>> archive logs that were causing plenty of IO load too.
>> My clusters are now happily responding to queries and load-average is
>> bellow one. Traffic wise things look better too, ~20GB/day.
>> Kind regards,
>> Martin Dobrev
>> P.S. Adding/changing DB_CONFIG might cause an error in the databases
>> that you can easily fix by running
>> db_recover -e -v -h /{KDB,PTree}
>> On 04/02/2019 09:49, Rolf Wuerdemann wrote:
>>> Hi,
>>> Don't get me wrong, but within three days I've got 450G traffic
>>> which can be assigned to sks by 99.9%. Estimated to 30 days this
>>> means 4.5T (which is in good agreement of your 2+T/Key for these
>>> two poison keys).
>>> With this amount of traffic and the possibility to get
>>> more of this keys (thus more traffic) every moment, I think it's
>>> only a question of time until the network with the current
>>> implementation will vanish. Traffic increased roughly a factor of
>>> 300 (15G->4.5T) within twelve months, nodes within the network
>>> decreased by a factor of two at least for the same time.
>>> So: where to go and how?
>>> Just my 2ct,
>>> rowue
>>> On 2019-01-30 22:09, Martin Dobrev wrote:
>>> Hi,
>>> My observations so far show that both keys generate  2+ TB/month
>>> traffic on average for all my clustered nodes. I'm running nginx +
>>> Varnish in-memory cache tuned at 5 minutes TTL which gives plenty of
>>> CPU cycles for the never-ending EventLoop alarm loops. The latter
>>> cause load-average spikes of up to 10 with just 4 Docker containers
>>> running on a 12 core system.
>>> Don't get me wrong. The throttling penalty is something I'd
>>> swallow-up
>>> as long as we keep the network running.
>>> Regards,
>>> Martin
>>> keyserver.dobrev.eu | pgp.dobrev.it
>>>  Original message 
>>> From: Kristian Fiskerstrand
>>> 
>>> Date: 30/01/2019 20:18 (GMT+00:00)
>>> To: Shengjing Zhu , sks-devel@nongnu.org
>>> Subject: Re: [Sks-devel] Unusual traffic for key 0x69D2EAD9 and
>>> 0xB33B4659
>>> On 1/12/19 8:15 PM, Shengjing Zhu wrote:
>>> I think these requests are quite unusual.
>>> Does anyone know what happens to these two keys?
>>> Just to add a comment on this, adding a cache on the load-balancer
>>> is
>>> really a nice way to slow down hits on the underlying SKS nodes, I
>>> keep
>>> cache for 10 minutes in nginx, which really makes life more
>>> pleasant.
>>> --
>>> 
>>> Kristian Fiskerstrand
>>> Blog: https://blog.sumptuouscapital.com
>>> Twitter: @krifisk
>>> 
>>> Public OpenPGP keyblock at hkp://pool.sks-keyservers.net
>>> fpr:94CB AFDD 3034 5109 5618 35AA 0B7F 8B60 E3ED FAE3
>>> 
>>> "Action is the foundational key to all success"
>>> (Pablo Picasso)
>>> ___
>>> Sks-devel mailing list
>>> Sks-devel@nongnu.org
>>> https://lists.nongnu.org/mailman/listinfo/sks-devel
>> ___
>> Sks-devel mailing list
>> Sks-devel@nongnu.org
>> https://lists.nongnu.org/mailman/listinfo/sks-devel
> 
> --
> Security is an illusion - Datasecurity twice
> Rolf Würdemann  -  ro...@digitalis.org  -  DL9ROW
> GnuPG fingerprint:EEDC BEA9 EFEA 54A9 E1A9  2D54 69CC 9F31 6C64 206A
> xmpp: ro...@digitalis.org E1189573 6B4A150C A0C2BF5A 5553F865 0B9CBF7A
>  ro...@jabber.ccc.de 64CBBB68 0A3514A4 026FC1E7 5328CE87 AEE2185F
> 
> ___
> Sks-devel mailing list
> Sks-devel@nongnu.org
> https://lists.nongnu.org/mailman/listinfo/sks-devel





Re: [Sks-devel] Unusual traffic for key 0x69D2EAD9 and 0xB33B4659

2019-02-08 Thread Todd Fleisher
To follow up on this: after making the below changes, my main disk IO went
down, but my load average went up, memory usage went through the roof, and
swapping ensued. I increased the amount of memory assigned to each of my
main nodes (those that gossip with the outside world) and it seems to be
holding steady so far: https://imgur.com/a/b0S4Ui2

I believe the nodes may also have been having issues gossiping, as I saw
outbound network traffic flatline during the same time periods:
https://imgur.com/a/IEoLboM

-T

> On Feb 6, 2019, at 4:21 PM, Todd Fleisher  wrote:
> 
> Signed PGP part
> I also applied these configuration options earlier today to all the servers 
> in 1 of my pools that was experiencing high IO load and repeated SigAlarms:
> command_timeout: 600
> wserver_timeout: 30
> max_recover: 150
> 
> And since then, everything has been quiet:
> 
> IO on the main node that gossips externally: https://i.imgur.com/ERgz0Xo.jpg 
> <https://i.imgur.com/ERgz0Xo.jpg>
> 
> IO from another node in the same pool that gossips internally with the above 
> node: https://i.imgur.com/wsaxrJ5.jpg <https://i.imgur.com/wsaxrJ5.jpg>
> 
> Hopefully this can help other operators keep things in better shape for the 
> time being.
> 
> -T
> 
> 
>> On Feb 6, 2019, at 3:22 AM, Rolf Wuerdemann <ro...@digitalis.org> wrote:
>> 
>> With your suggestions:
>> 
>> load average below 1
>> Traffic: ~150G/day
>> 
>> Best,
>> 
>>   Rolf
>> 
>> On 2019-02-04 12:52, Martin Dobrev wrote:
>>> Hi,
>>> I've spent last week trying to optimize configuration as much as
>>> possible. Following advise from a previous mail I've added:
>>>> command_timeout: 600
>>>> wserver_timeout: 30
>>>> max_recover: 150
>>> to my sksconf and it seems this fixed majority of the EventLoop
>>> failures. I've added DB_CONFIG in KDB/PTree folders to get rid of DB
>>> archive logs that were causing plenty of IO load too.
>>> My clusters are now happily responding to queries and load-average is
>>> bellow one. Traffic wise things look better too, ~20GB/day.
>>> Kind regards,
>>> Martin Dobrev
>>> P.S. Adding/changing DB_CONFIG might cause an error in the databases
>>> that you can easily fix by running
>>> db_recover -e -v -h /{KDB,PTree}
>>> On 04/02/2019 09:49, Rolf Wuerdemann wrote:
>>>> Hi,
>>>> Don't get me wrong, but within three days I've got 450G traffic
>>>> which can be assigned to sks by 99.9%. Estimated to 30 days this
>>>> means 4.5T (which is in good agreement of your 2+T/Key for these
>>>> two poison keys).
>>>> With this amount of traffic and the possibility to get
>>>> more of this keys (thus more traffic) every moment, I think it's
>>>> only a question of time until the network with the current
>>>> implementation will vanish. Traffic increased roughly a factor of
>>>> 300 (15G->4.5T) within twelve months, nodes within the network
>>>> decreased by a factor of two at least for the same time.
>>>> So: where to go and how?
>>>> Just my 2ct,
>>>> rowue
>>>> On 2019-01-30 22:09, Martin Dobrev wrote:
>>>> Hi,
>>>> My observations so far show that both keys generate  2+ TB/month
>>>> traffic on average for all my clustered nodes. I'm running nginx +
>>>> Varnish in-memory cache tuned at 5 minutes TTL which gives plenty of
>>>> CPU cycles for the never-ending EventLoop alarm loops. The latter
>>>> cause load-average spikes of up to 10 with just 4 Docker containers
>>>> running on a 12 core system.
>>>> Don't get me wrong. The throttling penalty is something I'd
>>>> swallow-up
>>>> as long as we keep the network running.
>>>> Regards,
>>>> Martin
>>>> keyserver.dobrev.eu | pgp.dobrev.it
>>>>  Original message 
>>>> From: Kristian Fiskerstrand
>>>> <kristian.fiskerstr...@sumptuouscapital.com>
>>>> Date: 30/01/2019 20:18 (GMT+00:00)
>>>> To: Shengjing Zhu <zsj950...@gmail.com>, sks-devel@nongnu.org
>>>> Subject: Re: [Sks-devel] Unusual traffic for key 0x6

Re: [Sks-devel] Unusual traffic for key 0x69D2EAD9 and 0xB33B4659

2019-03-20 Thread Jeremy T. Bouse

On 3/20/2019 2:42 PM, Andrew Nagy wrote:
> All,
> 
> Looking to figure out a solution here. A Maintainer on the Ubuntu
> Key server informed me about discussion of the following keys
> 0x69D2EAD9 and 0xB33B4659 here:
> https://lists.nongnu.org/archive/html/sks-devel/2019-01/msg3.html
> 
> Unfortunately the email address modu...@freepbx.org is just a black
> hole and so the email that was sent there from Brent Saner was lost
> forever.
> 
> I currently run the FreePBX project which uses the GPG network to
> sign modules. Unfortunately due to:
> https://bitbucket.org/skskeyserver/sks-keyserver/issues/57/anyone-can-make-any-pgp-key-unimportable
> 
> Someone poisoned our master key that we use to sign all other keys.
> This has caused issues on the sks network for a while. However since
> January we've noticed more and more sks servers are now just timing
> out and not returning back our requests for 0xB33B4659. I assume
> that is probably because of the message thread from January.
> 
> The way FreePBX software works is that it checks nightly against a
> list of key servers to redownload 0x69D2EAD9 and 0xB33B4659 and
> re-verify. However it appears that for many of you the bandwidth
> this causes is much too high. Internally we need to recreate our
> master key without the poison but I am afraid it will just as easily
> be re-poisoned again. Also even if we put a new key out you will
> notice traffic increase from those keys over time as well and we
> will be back to the bandwidth issue.
> 
> Perhaps we should be using GPG locally instead of through the GPG
> key network. Let me know what you guys think,
> 
> Thank you

I don't speak for all SKS server operators, but I do have a
configuration block in my NGINX configuration that specifically
identifies those keys and simply returns a 444 status code.
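[Editorial note: Jeremy's actual configuration isn't shown in the thread. A
minimal sketch of what such an NGINX block could look like follows; the
`location` path, backend address, and the exact set of matched key IDs are
assumptions based on the HKP lookup URL format, not his real config.]

```nginx
# Hypothetical sketch: refuse HKP lookups for the two poisoned keys.
# 444 is NGINX's special "close the connection without a response" code.
location /pks/lookup {
    if ($args ~* "search=0x(69D2EAD9|B33B4659|2016349F5BC6F49340FCCAF99F9169F4B33B4659|1013D73FECAC918A0A25823986CE877469D2EAD9)") {
        return 444;
    }
    # All other lookups still reach the local SKS daemon (default HKP port).
    proxy_pass http://127.0.0.1:11371;
}
```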

I've been trying to get a handle on the instability of my server,
which has been running the CPU at 100% at times, so I enabled an
access log rule to identify the query strings and upstream times (but
not the requesting IP address). According to my status page, my server
spent only 61.924% of yesterday in the pool, roughly 14.86 hours;
during that same period it saw 12436 requests for those 2 keys, an
average of 836.86 requests per hour, or nearly 14 per minute. Most of
the time that wouldn't be a lot, but when the system is under load
that volume adds up and is non-trivial.

One option for the FreePBX team would be to self-publish their key
using WKD or PKA. With WKD you store the file on your own web server
where no one else can touch it (presumably, if set up properly), and
with PKA you publish it within DNS records. GPG can retrieve keys via
both methods without needing a keyserver.
-BEGIN PGP SIGNATURE-

iQGzBAEBCgAdFiEEakJ0F+CHS9VzhSFg6lYpTv4TPXUFAlySpigACgkQ6lYpTv4T
PXU3aAwAve2kSUqxiXkg74zoO+l0lL1sD1cPiok4i6i9+D+nre+g4awR/sJdcVal
yGmIgYJa4HpuXKrCBUD9oX+n4HDjjekJpHdbqYOUYI5mx6+T5YKPLtP0hUmhe4Jv
P4hSnX4tppsQADJo4Ms4txHkmHqPis3Khjnr92+nGG7Xq98tHgOmu67jjoNmTsAG
0ADZ1+Lkd9V7UTBMy7+jkbqrda7/v65+3YgYfyoSQIYg7DCRp6Rg+jwGyrzRD/Vh
ZRRu+1McPrw2dKMksRabZHm8efkyDdpkFldvR9+bDR7EC70axZ+zWb8iKTIr7qLY
huiu9x/wVvTxw3mpVCN1Ii5Vj9qe3SfuiVYtHws8vqmaoOf/h9QGLIK+KcPgf0sj
Y8jKz7obltc29RyyfqblDtEJXNOrKD5OpSLptOUpa/6sm+F6MRT0DQKEIsHq4BRR
nCi2aLldat0ojZwkFTqNzD+M6mCBSI3FgCPzGlSITVRiLDdatO/R+wmy1hAgT3Hv
k5GG4CBt
=en3K
-END PGP SIGNATURE-

___
Sks-devel mailing list
Sks-devel@nongnu.org
https://lists.nongnu.org/mailman/listinfo/sks-devel


Re: [Sks-devel] Unusual traffic for key 0x69D2EAD9 and 0xB33B4659

2019-03-21 Thread brent s.
On 3/20/19 4:44 PM, Jeremy T. Bouse wrote:
>   I don't speak for all SKS server operators, but I do have a
> configuration block in my NGINX configuration that specifically
> identifies those keys and simply returns a 444 status code.
> 
>   I've been trying to get a handle on the instability of my server,
> which has been running the CPU at 100% at times, so I enabled an
> access log rule to identify the query strings and upstream times (but
> not the requesting IP address). According to my status page, my server
> spent only 61.924% of yesterday in the pool, roughly 14.86 hours;
> during that same period it saw 12436 requests for those 2 keys, an
> average of 836.86 requests per hour, or nearly 14 per minute. Most of
> the time that wouldn't be a lot, but when the system is under load
> that volume adds up and is non-trivial.
> 
>   One option for the FreePBX team would be to self-publish their key
> using WKD or PKA. With WKD you store the file on your own web server
> where no one else can touch it (presumably, if set up properly), and
> with PKA you publish it within DNS records. GPG can retrieve keys via
> both methods without needing a keyserver.
> 


I'd second this, as it's the "most right" solution (he says, still not
having set up WKD/PKA for his own personal infra yet; I really need to
get on that...).

An alternative is to set up your own SKS that does not allow submissions
(I'd imagine you could just 403 the upload path/args except for a few
authorized addresses since HKP more or less is just HTTP), keep that as
your modules' key repository, and create a fresh master key that signs
those pubkeys, and publish that new master to the WoT/SKS pool. That
doesn't do a terribly good job of preventing something like this in the
future, of course, but does insulate you from the load caused by
installations fetching the keys; core could be distributed with the
master pubkey, even, and you'd then have a method of checking module
signatures right from the get-go.
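[Editorial note: the "403 the upload path" idea above can be sketched in
NGINX terms roughly as follows. The backend address, allowed source IP, and
paths are illustrative assumptions (HKP's submission endpoint is
conventionally /pks/add), not a tested FreePBX deployment.]

```nginx
# Hypothetical: front a private SKS instance and refuse key submissions
# from everyone except a trusted signing host.
location /pks/add {
    allow 203.0.113.10;   # e.g. your signing host (documentation address)
    deny  all;            # everyone else gets 403 Forbidden
    proxy_pass http://127.0.0.1:11371;
}
# Lookups remain open so installations can fetch the module keys.
location /pks/ {
    proxy_pass http://127.0.0.1:11371;
}
```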

You can also just host the master pubkey as an exported key somewhere,
and fetch that directly.

I think the massive load is mostly caused by the querying of the master
key against the SKS pool more than anything.

-- 
brent saner
https://square-r00t.net/
GPG info: https://square-r00t.net/gpg-info





Re: [Sks-devel] Unusual traffic for key 0x69D2EAD9 and 0xB33B4659

2019-03-24 Thread Shengjing Zhu
On Thu, Mar 21, 2019 at 2:42 AM Andrew Nagy wrote:
>
> All,
>
> Looking to figure out a solution here. A Maintainer on the Ubuntu Key server 
> informed me about discussion of the following keys 0x69D2EAD9 and 0xB33B4659 
> here: https://lists.nongnu.org/archive/html/sks-devel/2019-01/msg3.html
>
> Unfortunately the email address modu...@freepbx.org is just a black hole and 
> so the email that was sent there from Brent Saner was lost forever.
>
> I currently run the FreePBX project which uses the GPG network to sign 
> modules. Unfortunately due to: 
> https://bitbucket.org/skskeyserver/sks-keyserver/issues/57/anyone-can-make-any-pgp-key-unimportable
>
> Someone poisoned our master key that we use to sign all other keys. This has 
> caused issues on the sks network for a while. However since January we've 
> noticed more and more sks servers are now just timing out and not returning 
> back our requests for 0xB33B4659. I assume that is probably because of the 
> message thread from January.
>

Actually, I haven't blacklisted your keys. But these two keys are not
only pulled too frequently, they are also far too large. I assume it's
not your fault, but that someone has poisoned them.

0x1013D73FECAC918A0A25823986CE877469D2EAD9 as stored in the key server is 38 MB.

As for these requests getting no response, the key servers are probably just down :(

Please don't rely on the key server network for such a task; you can
use https://wiki.gnupg.org/WKD instead.
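[Editorial note: for reference, a WKD client derives the lookup URL from the
e-mail address itself: the local part is lowercased, SHA-1 hashed, and
z-base-32 encoded. A small illustrative sketch (direct method, using the
example address from the WKD draft rather than any real FreePBX address):]

```python
import hashlib

ZB32 = "ybndrfg8ejkmcpqxot1uwisza345h769"  # z-base-32 alphabet

def zbase32(data: bytes) -> str:
    """Encode bytes in z-base-32 (big-endian bit order, no padding)."""
    bits = value = 0
    out = []
    for byte in data:
        value = (value << 8) | byte
        bits += 8
        while bits >= 5:
            out.append(ZB32[(value >> (bits - 5)) & 31])
            bits -= 5
    if bits:  # flush a final partial group, padded with zero bits
        out.append(ZB32[(value << (5 - bits)) & 31])
    return "".join(out)

def wkd_direct_url(address: str) -> str:
    """Direct-method WKD URL for an e-mail address."""
    local, domain = address.split("@", 1)
    digest = hashlib.sha1(local.lower().encode()).digest()
    return (f"https://{domain.lower()}/.well-known/openpgpkey/hu/"
            f"{zbase32(digest)}?l={local}")

# Example address from the WKD draft:
print(wkd_direct_url("Joe.Doe@Example.ORG"))
```

A client (or `gpg --locate-keys` with WKD enabled) fetches the binary key
from that URL over plain HTTPS, so no keyserver is involved at all.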

-- 
Regards,
Shengjing Zhu
