Hi,
this issue has already been known for several months now - see [0], [1].
The keys used for this are very large (around 30-60 MB). Syncing them
takes some bandwidth, and indexing/writing them to disk consumes a lot
of CPU and I/O resources. If the addition to the database fails, the key
is added again
>
>
> On Fri, 16 Nov 2018 00:50:31 +0100
> Moritz Wirth wrote:
>
>> I asked to be allowed to share some more details, however the request
>> was to remove/prevent indexing of 2 keys stored on our keyservers -
>> including copies of ID's to verify the reques
d
>> the mode of operation dictate and stay this way.
>>
>> --
>>
>> Thanks,
>>
>> Fabian S.
>>
>> OpenPGP:
>>
>> 0x643082042DC83E6D94B86C405E3DAA18A1C22D8F
>>
>> On Thu, Nov 15, 2018 at 5:58 PM, Georg Faerber wrote:
>>
>
Hello,
keys.flanga.io will cease operation - we received a request to remove
some keys and since we are unable to do this, we will shutdown all
keyservers and erase all relevant databases immediately.
Best Regards,
Moritz
signature.asc
Description: OpenPGP digital signature
needs to be run as the hockeypuck user.
>
> You need to create an hkp DB and a hockeypuck role and grant
> ownership over the DB (at least that worked for me).
>
> Otherwise, it’s working and importing now. Thanks for your
> help. I’ll advis
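For reference, the database and role mentioned above could be created roughly like this (the names hkp and hockeypuck are taken from this thread; the exact privileges your setup needs may differ):

```sql
-- run as the postgres superuser, e.g. via: sudo -u postgres psql
CREATE ROLE hockeypuck LOGIN;
CREATE DATABASE hkp OWNER hockeypuck;
```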
You have to set the filters to get the reconciliation working with SKS:
[hockeypuck.conflux.recon]
reconAddr=":11370"
version="1.1.6"
filters=["yminsky.dedup", "yminsky.merge"]
For Postgresql:
[hockeypuck.openpgp.db]
driver="postgres-jsonb"
dsn="database=hkp host=/var/run/postgresql port=5432 ss
ular reason for that, or were you just attempting
> to protect yourself during the attack?
>
>
>
> On 17/07/18 13:17, Moritz Wirth wrote:
>> Hi,
>>
>> keys2.flanga.io and keys3.flanga.io will cease operation immediately,
>> given the latest problems.
>>
>>
Hi,
keys2.flanga.io and keys3.flanga.io will cease operation immediately,
given the latest problems.
keys.flanga.io will remain online as long as it runs stable and the
required disk space does not exceed my limits (database capacity has
almost tripled when switching to hockeypuck and is now abou
Hi Tom,
I spent the night on the keydump - keys.flanga.io is now also running
with hockeypuck (I did not test anything, to be honest ;)). I'll
see if it runs stably (not sure if it is pool-compatible) - version is
1.1.6.
A short write-up for installing this thing is already done - I can sen
le said "simple" and "resilient" rather than "with
> nearly optimal communication complexity", and the contents matched the
> title.
>
> The pool of engineers willing and able to get us out of this mess
> would be much larger.
>
> On Fri, Jul 13, 2018 at
FWIW, has anybody even started working on a fix for any of the bugs?
On 13.07.18 at 21:52, Robert J. Hansen wrote:
>> Sad but not surprised. Thanks for all your time and effort. It has been much
>> appreciated.
> Yes.
>
>> I am reluctant to declare defeat, but this calls for a tactical retreat
Are you sure that this is a problem with the CVE vulnerability and not
with a non-responding keyserver?
On 30.06.18 at 20:29, Eric Germann wrote:
> Thanks
>
> So I should download all the source from the git repo as it seems 1.1.6
> doesn’t have the fixes?
>
>> On Jun 30, 2018, at 13:55, C
I am afraid there is not much you can do about this right now - the pool
itself is very unstable and crashes multiple times per day.
I found over 8 key hashes which cause an event loop - this happens every
2-3 minutes, sometimes with the same key, sometimes with other keys.
Best regards,
Am 22.
We are running a normal SKS instance to keep up with the peering - the
instance is stopped every hour and a snapshot of the database is created
- the snapshot is then used in another VM for testing.
On 18.06.18 at 12:27, Andrew Gallagher wrote:
> On 18/06/18 11:11, Hendrik Visage wrote:
>>> On 1
m off
> once they hit a rate limit.
>
> Cheers!
> -Pete
>
> On 6/16/2018 5:47 PM, Moritz Wirth wrote:
>> Hi,
>>
>> seems like that is the "problem":
>>
>> https://bitbucket.org/skskeyserver/sks-keyserver/issues/60/denial-of-service-via-l
Hi,
seems like that is the "problem":
https://bitbucket.org/skskeyserver/sks-keyserver/issues/60/denial-of-service-via-large-uid-packets
https://bitbucket.org/skskeyserver/sks-keyserver/issues/57/anyone-can-make-any-pgp-key-unimportable
Best regards,
Moritz
On 17.06.18 at 02:18, Pete St
FWIW, you can set the DB_LOG_AUTOREMOVE flag for the database - the logs
should be removed automatically
[root@instance-4 ~]# cat /var/lib/sks/KDB/DB_CONFIG
set_flags DB_LOG_AUTOREMOVE
Best regards,
On 15.06.18 at 09:40, André Keller wrote:
> Hi,
>
> On 15.06.2018 05:54, Kiss Gabo
This is a pool containing only servers available using hkps. Regular A
and SRV records are included for port 443 servers, and a lookup
is performed for _pgpkey-https._tcp on the individual servers to
determine if a hkps enabled service is listening on another port. At
this point, however,
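As a sketch of what that lookup finds, a server advertising HKPS on a non-standard port would publish an SRV record roughly like this (hostname, TTL and port are illustrative, not from the thread):

```
_pgpkey-https._tcp.keys.example.net. 3600 IN SRV 10 1 8443 keys.example.net.
```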
Your Letsencrypt setup probably redirects port 80/port 11371 to 443; you can
solve that by adding another server section for port 11371 (and port 80) where
you handle the requests.
Traffic on port 11371 should remain unencrypted, so rewriting it to HTTPS is
not allowed.
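A minimal sketch of such an extra server section, assuming sks itself listens on 127.0.0.1:11372 and with an illustrative hostname (both are assumptions, not details from the thread):

```nginx
# Separate server block so HKP traffic on port 11371 stays plain HTTP
# instead of being caught by the HTTPS redirect.
server {
    listen 11371;
    server_name keys.example.net;

    location / {
        proxy_pass http://127.0.0.1:11372;  # assumed sks hkp_address/hkp_port
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```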
Sent from my iPhone
> Am 21.05.201
That does not help, because you still store European data, which is still
affected by the GDPR.
What about only accepting valid keys and removing all revoked or expired keys
from the database? If someone wants to have his data deleted he can revoke his
key and the revoked signature is synced over
iPhone
> On 29.04.2018 at 17:08, robots.txt fan wrote:
>
> Moritz Wirth wrote:
>> Given the fact that it is not possible to delete data from a keyserver
>
> Of course this is possible. You can delete a key by using the "sks drop "
> command. Now, if I understand it
Hi Fabian,
first of all, I am not a lawyer so you should not rely on my response as
it may be wrong :)
- The GDPR applies to all persons and companies who are located in the
EU, or who offer goods or services to, or monitor the behavior of, EU data
subjects - this means that all keyservers are affecte
Hi,
I am not completely sure how new keyservers are determined; one way
seems to be the peering list. If you advertise the same hostname on
multiple keyservers, only one node will be included (keys1.flanga.io
and keys2.flanga.io are both included in peering lists, but only
keys.flanga.io as loa
Re: [Sks-devel] Fwd: Re: Unde(r)served HKPS [was:
> Underserved areas?]
> Date: Sun, 14 Jan 2018 19:15:59 +0100
> From: Heiko Richter
> To: Moritz Wirth
>
>
>
>
> On 14.01.2018 at 14:18, Moritz Wirth wrote:
>> All in all, what do you want to do? Just
my 2 cents...
On 14.01.18 at 13:25, Heiko Richter wrote:
>
>
>
> On 14.01.2018 at 13:04, Moritz Wirth wrote:
>>
>> Certificate Revocation is broken in most browsers today so there is
>> no reliable way to revoke a certificate (especially if you do not use
>> OC
Certificate Revocation is broken in most browsers today so there is no
reliable way to revoke a certificate (especially if you do not use OCSP).
I don't think that it would be a big problem to get trusted certificates
for HKPS, however the trust problem stays the same and it comes with
other prob
I requested a certificate a few days ago, however only well known
keyservers receive a cert for HKPS (which is reasonable because the
certificates are valid for a year and there is no reliable way for
certificate revocation).
Another idea around the MITM problem - the client retrieves the current
Hi,
the PGP keyserver dump is about 100k keys out of date - I created a new
dump which you can use: https://cdn.fstatic.io/sksdump
My peering lines:
keys.flanga.io 11370 # Flanga SKS Peering Administrator
0xd015c49b2eceb8f1
keys2.flanga.io 11370 # Flanga SKS Peering Administrator
0xd015c49b2
Hi,
your keydump is behind the lower bound by about 92,000 keys;
you should probably use a different dump from another source.
Best regards,
Moritz
On 03.01.18 at 14:35, Теплов М.Ю. wrote:
> Hi,
>
> I am looking for peers for a new SKS keyserver installation.
>
> I am running SKS vers
Hi,
I would recommend that you use the new web interface only in the /stats
location and keep the default interface on /pks/lookup?op=stats -
otherwise it breaks all scripts that query/monitor keyservers.
Best regards,
Moritz
On 18.12.17 at 22:00, Webmaster IspFontela wrote:
> Hello all:
Fabian A. Santiago:
> December 6, 2017 2:59 PM, "Kristian Fiskerstrand"
>
> wrote:
>
>> On 12/06/2017 08:10 PM, Moritz Wirth wrote:
>>
>>> Can we delete the logfiles in the KDB/ directory (log.x) - are there
>>> other ways to save some
Hello everybody,
keys.flanga.io started running out of space and I want to clean up some
space. The database has been in use for a year and is now around 38 GB -
by comparison, the DB size on keys2.flanga.io is about 28 GB, and 21 GB on
keys3.flanga.io
Can we delete the logfiles in the KDB/ directory
Some recommendations:
- You must run a reverse proxy in front of your sks instance to be
included in the pool.
- The server contact should only be a PGP key ID, without your
email address (though I don't know if it can be included there).
- A simple key lookup on your server fails ( Error handling re
keyserver.
Peering line:
keys2.flanga.io 11370 # Moritz Wirth 0x4733BFB2C7AC4938
I am also looking for new peers with my first keyserver:
keys.flanga.io 11370 # Moritz Wirth 0x4733BFB2C7AC4938
For operational issues please contact me directly.
Best regards,
Moritz
Good morning everybody,
is it possible to load-balance SKS/nginx using multiple A records for the
hostname?
e.g.
keys.flanga.io. 0 IN A 1.2.3.4
keys.flanga.io. 0 IN A 2.3.4.5
Best regards,
Moritz
Good evening all,
due to internal changes, keyserver.corenetworking.de will move to
keys.flanga.io.
The new domain is already up and running, please adapt your membership
files to the new line:
keys.flanga.io 11370 # Moritz Wirth 0x4733BFB2C7AC4938
IPv4 and IPv6 addresses did not change, the
Hello,
Looks like your SKS recon is not available on port 11370 - the port appears
to be closed for some reason.
The second thing I noticed is that your web interface can't access your
keyserver - searching for a key is also not possible.
Best Regards,
Moritz
On 05.11.16 at 03:11, Carles Tubio
/05/2016 09:52 PM, Danny Horne wrote:
>> On 05/09/2016 11:49 am, Moritz Wirth wrote:
>>> Furthermore, I started using Snort, but I think it blocks the spider for
>>> the pool status. Is there an IP-Address which I can whitelist?
>> I think you'd also have to whitel
Hello,
keyserver.corenetworking.de moved to a new datacenter last night and I
decided to create a second instance for loadbalancing.
All requests (on ports 80, 443 and 11371) are handled by my nginx cluster
(corosync); load balancing works fine, but my server isn't shown as
load-balanced on the sks-status page.
I think that rate-limiting new or updated keys may be a good idea. My
keyserver currently receives about 50-150 keys every hour; limiting this
to something like 300 keys in 10 minutes may help.
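As an illustration of the policy suggested above (the class name and API are mine, not part of any keyserver software), a sliding-window limiter for 300 keys per 10 minutes could look like this:

```python
from collections import deque

class KeyRateLimiter:
    """Sliding-window rate limiter: allow at most `limit` key
    submissions per `window` seconds (300 per 600 s as suggested above)."""

    def __init__(self, limit=300, window=600):
        self.limit = limit
        self.window = window
        self.times = deque()  # timestamps of accepted submissions

    def allow(self, now):
        # Evict timestamps that have fallen out of the window.
        while self.times and now - self.times[0] >= self.window:
            self.times.popleft()
        if len(self.times) < self.limit:
            self.times.append(now)
            return True
        return False
```

A keyserver front end would call allow() with the current time for every incoming key and reject (or queue) submissions when it returns False.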
On 26.05.16 at 09:42, Pascal Levasseur wrote:
> The administrators of the SKS servers should be able
71 on IPv4, IPv6 will be
available later.
We've loaded a keydump from 2016-02-18.
We have 4190621 Keys loaded.
If you have any questions, please do not hesitate to contact us.
keyserver.corenetworking.de:11371
Thank you,
Moritz Wirth
PGP:0x44b1cafa8700570