
Re: PGP Keyserver Synchronization Protocol

In <19990624153938.C4137@tik.ee.ethz.ch>, on 06/24/99 
   at 03:39 PM, Marcel Waldvogel <mwa@tik.ee.ethz.ch> said:

>On Wed, Jun 23, 1999 at 05:41:51PM +0100, "teun, Tilburg University"
><Teun.Nijssen@kub.nl> wrote:
>> /Does it really matter if you do not know the internal packet format as long
>> /as you know where the packet ends? Hashing is simply mixing together a
>> /stream of octets and so I do not believe the 'format' makes much of a
>> /difference.
>>
>> depending on the order in which a server received signatures in the past, a 
>> key may look quite different on different servers, although with sorted
>> sigs it is the same.

>The bad thing is that merging the keys may not produce the same result,
>so that each time the key would be re-requested. E.g., there are many
>keys floating around that have been revoked on multiple occasions, i.e.
>the merged key would need to contain multiple revocation certificates, to
>provide the same checksum, which does not conform to RFC 2440.

Well, this was the point of sorting the key before calculating the hash: so
long as everyone uses the same sort order for the different packets in the
key, they will generate the same hash for the same key.
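The idea above can be sketched in a few lines. This is a minimal illustration, not RFC 2440 code: the sort order (raw octet order) and the hash algorithm (SHA-1) are assumptions chosen for the example; any order and digest the servers agree on would do.

```python
import hashlib

def canonical_key_hash(packets):
    """Hash a key's packets in a canonical order so that every server
    computes the same digest, regardless of the order in which the
    packets (signatures, user IDs, etc.) were received.

    `packets` is a list of raw packet byte strings.
    """
    digest = hashlib.sha1()
    # Sorting the raw octets gives one deterministic total order;
    # the specific order does not matter as long as all servers agree.
    for pkt in sorted(packets):
        digest.update(pkt)
    return digest.hexdigest()
```

Two servers that received the same packets in different orders then compute the same hash, so a match means "in sync" without comparing the keys byte-for-byte.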

I am not sure what you mean by "revoked on multiple occasions." Are you
saying that there are public keys on the servers that have different
revocation signatures for the same key on different servers? That does not
make sense. If it is true, how do the current servers address this when
they merge two keys that have different revocation sigs? I really don't
see any solution other than to include all the revocation signatures when
merging the keys. RFC 2440 may not have addressed this, but I doubt anyone
anticipated it.
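The merge rule being argued for here, keep the union of packets so no revocation signature is ever dropped, is easy to sketch. This is an illustrative model (packets as raw byte strings), not actual keyserver code:

```python
def merge_key_packets(local, remote):
    """Merge two versions of the same key, keeping the union of
    packets so that all revocation signatures survive the merge.

    `local` and `remote` are lists of raw packet byte strings.
    """
    merged = set(local) | set(remote)
    # Returning a sorted list keeps the merge order-independent:
    # merge(a, b) and merge(b, a) produce identical results, so both
    # servers end up with the same key and the same canonical hash.
    return sorted(merged)
```

Because the result is order-independent, two servers that merge in opposite directions converge on the same key, which is exactly what the hash comparison needs.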

>Also, I am not thrilled by the idea to exchange 28*600,000 bytes (8 bytes
>keyID+16..20 bytes of hash times the number of keys currently on my
>server) with a dozen or so sites (>200MB) every day or so just to find
>out whether I'm still in sync. This is not too far from just grabbing the
>entire keyring (some 650MB) from some other site.

I guess I should have clarified this better. It was not my intent for the
servers to conduct a synchronization on a daily basis. As far as I can
tell, the current mechanism of incrementals is working well for the
propagation of keys (though it does seem to use much more bandwidth than
needed). This was only intended to be a mechanism for servers to
periodically check with each other to make sure that they are in sync
(once a month seems like a good time frame).

Just grabbing the keyring from another site is only a partial solution, as
it only guarantees that you have the same keys that the other server has.
It does nothing for the other servers that you are "connected" to, nor
does it address any keys that your server may have that the other server
does not.

To provide the same level of synchronization as this method by pulling
entire keyrings, you are looking at downloading the entire keyring from 12
servers (7800MB), merging them into your database (a time-consuming
process), doing a keyring dump, and then uploading your keyring back to
those same 12 servers (7800MB), for a total of 15600MB!
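Plugging in the figures from this thread makes the comparison concrete (600,000 keys, 28 bytes per hash-table entry, a dozen peers, a 650MB keyring):

```python
keys = 600_000
entry_bytes = 28          # 8-byte keyID + 20-byte hash per key
servers = 12
keyring_mb = 650

# Monthly hash-table exchange: one table sent to each peer.
hash_exchange_mb = keys * entry_bytes * servers / 1_000_000   # ~201.6 MB

# Full-keyring approach: download from and re-upload to each peer.
full_keyring_mb = keyring_mb * servers * 2                    # 15600 MB
```

Roughly 200MB a month against 15600MB, a factor of nearly 80, before even counting the merge and dump time.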

There is no reason why a server could not generate a complete hash table
once a month, and then produce a "changes" hash table on a daily or weekly
basis that only contained the new/updated keys since the last complete
hash table was created.
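One way to sketch that "changes" table, assuming a hash table is simply a mapping from keyID to key hash (the function name and representation here are hypothetical):

```python
def changes_table(last_full_table, current_table):
    """Build a 'changes' hash table containing only the keys that are
    new or whose hash differs from the last complete table.

    Both arguments map an 8-byte keyID to that key's canonical hash.
    """
    return {key_id: key_hash
            for key_id, key_hash in current_table.items()
            if last_full_table.get(key_id) != key_hash}
```

A peer that already processed the monthly complete table then only needs the (much smaller) changes table to stay current between full exchanges.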

In a previous post I outlined a mechanism using key hashes for doing the
daily incrementals that would greatly reduce the bandwidth and processing
overhead of the current method.

William H. Geiger III  http://www.openpgp.net
Geiger Consulting    Cooking With Warp 4.0

Author of E-Secure - PGP Front End for MR/2 Ice
PGP & MR/2 the only way for secure e-mail.
OS/2 PGP 5.0 at: http://www.openpgp.net/pgp.html
Talk About PGP on IRC EFNet Channel: #pgp Nick: whgiii

Hi Jeff!! :)