Re: Agree with PRZs MDC suggestion
On Sun, 9 May 1999, Werner Koch wrote:
> After implementing the MDC by using the signature packets and
> verifying that Tom's and my implementation are interoperable, I
> started over and tried to implement my suggestion to use a
> special MDC packet instead of a signature packet.
> (1) new encrypted data packet
> (2) mdc packet with the encrypted hash over (1)
> The problem here is that (2) is encrypted with the same key as (1)
> and actually is part of (1). We must encrypt only the hash inside
> of (2), so that the parser can distinguish both packets. It is quite
> obvious that passing the decryption context from (1) to (2) is not
> very easy. What we would need for a good solution is a container
> packet which replaces the old encrypted data packet:
> Encrypted_Data -+- Data_Container_Packet -+- Onepass
>                 |                         +- Plaintext
>                 |                         +- Signature
>                 +- MDC_Packet
Not true. Just as you run the encryptor and currently get two or more
packets out for signatures, and thus you don't need to "pass a
decryption context" from the one-pass header to the plaintext to the
signature, you wouldn't need to do this for the MDC, any more than you
do if the MDC is not a separate packet.
In effect what we have now (with buffering or pipes between threads) is:
ciphertext | dearmor | pkdecrypt | symdecrypt | sigver | deliteral
There may be multiple sigvers, one per level of nesting (I use a stack
instead of spawning another layer).
What will happen is that this will insert something between the symdecrypt
and the sigver sections. It will function nearly the same as the sigver,
except we are creating a completely new and disjoint syntax for handling
the new packet.
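The pipeline described above can be sketched as chained generator stages; this is a hedged illustration (stage names follow the pipeline diagram, the rest is mine, not from any implementation), showing how an MDC check would slot in between symdecrypt and sigver just like another sigver stage:

```python
import hashlib

def symdecrypt(chunks):
    # Stand-in for the real symmetric-decryption stage.
    for c in chunks:
        yield c

def mdcver(chunks, expected):
    # Pass plaintext through, hashing as it streams; check at the end.
    h = hashlib.sha1()
    for c in chunks:
        h.update(c)
        yield c
    if h.digest() != expected:
        raise ValueError("MDC mismatch")

data = [b"hello ", b"world"]
mdc = hashlib.sha1(b"hello world").digest()
out = b"".join(mdcver(symdecrypt(iter(data)), mdc))
```

Each stage only sees the chunks flowing past it, which is the point: no "decryption context" has to be handed from one packet handler to the next.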
For me, it is really ugly to add a new layer when the existing one can
easily be modified to work and provide the same function.
Instead we fiddle with the definitions to make it incompatible with the
existing layers just enough that it must be a new layer, but realize that
all this new verbiage exists simply to force a new layer, and not
for any functional reason.
I asked what didn't work about extending "signature" packets to
"verification" packets (with the zero algorithm or something similar -
or even keeping the signature packet format with a different ID number
for MDC only packets).
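The reuse argued for here can be sketched as plain tag dispatch. The signature tag value below is the RFC 2440 one; the MDC tag and function names are purely hypothetical:

```python
SIG_TAG = 2    # signature packet tag (RFC 2440)
MDC_TAG = 19   # hypothetical tag for an MDC-only packet

def handle_packet(tag, body, verify_sig, verify_mdc):
    # Both packet kinds flow through the same control path; only the
    # leaf verification routine differs.
    if tag == SIG_TAG:
        return verify_sig(body)
    if tag == MDC_TAG:
        return verify_mdc(body)
    raise ValueError("unexpected packet tag")
```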
No one has yet told me why this is any less secure, slower, or harder than
tacking the 20 bytes onto the end of the plaintext.
How many lines of code did each method take? How many cpu-seconds?
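For comparison, the "hash at the end of the plaintext" scheme is tiny when the whole message is in memory; this minimal sketch assumes SHA-1 (20 bytes), and the function names are mine, not from any spec:

```python
import hashlib

def append_mdc(plaintext: bytes) -> bytes:
    # Tack the 20-byte hash onto the end of the plaintext.
    return plaintext + hashlib.sha1(plaintext).digest()

def strip_and_check_mdc(data: bytes) -> bytes:
    # Split off the trailing 20 bytes and verify them.
    body, tail = data[:-20], data[-20:]
    if hashlib.sha1(body).digest() != tail:
        raise ValueError("MDC mismatch")
    return body
```

The streaming case, where those 20 bytes must be held back, is where the cost appears.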
> The bottom line is that Phil's original suggestion to put the hash
> at the end of the encrypted message is the easiest way to address it.
The MDC is more easily added where it is already being generated through
existing control paths in the code, simply altering the output format
slightly. How can something that is already there be harder to add than
something completely new?
> I know that it is annoying to hold back the last 20 bytes, but compared
> to the hacked signature packets, I think it is okay to do this.
Altering one byte to add a definition to the signature packet is less of
a hack than holding back 20 bytes. Don't assume everyone is running on
big machines with an OS or API calls to handle this and CPU cycles to
spare; I don't. If you are going to do this, then at least force an EOF
and append the 20 bytes without any packet information (i.e. the new
stream packets would indicate an end where the message ends, but there
would be 20 bytes (or whatever) extra following, not encapsulated in any
packet).
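What "holding back the last 20 bytes" costs a streaming reader can be sketched as follows; the delivery of every byte is delayed until n more bytes have arrived. This is illustrative only; the chunk handling is mine, not from any implementation:

```python
def split_tail(chunks, n=20):
    # Yield plaintext pieces, always retaining the last n bytes seen,
    # so the trailing n bytes (the MDC) can be emitted separately.
    buf = b""
    for c in chunks:
        buf += c
        if len(buf) > n:
            yield ("data", buf[:-n])
            buf = buf[-n:]
    yield ("mdc", buf)  # the held-back trailing bytes

parts = list(split_tail(iter([b"abcde", b"fg"]), n=3))
```

Note the permanent n-byte buffer and the extra copying per chunk, which is exactly the overhead a memory-constrained implementation has to pay.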
It may be easy to add in your implementation; you may not be trying to
do stream I/O, and you may have something that handles errant packet
boundaries easily. That is by no means universal.
One of my implementations is for the Palm Pilot. Adding a layer (instead
of a few lines in an existing layer) is a really big deal there (exclusive
of adding algorithms). Holding back 20 bytes when I am already up against
the memory limitations is going to be very painful.
I have pointed out that we can do it without bloating the code. But it
seems everyone wants to bloat it. Assuming PRZ isn't doing another
"hit-and-run", what does HE think of the alternatives?
If this is going to require another layer, then all such algorithms using
this new method should be given a MAY level of compliance and not SHOULD
because of the larger impact on the existing code base.
Conversely, extending the existing structure is easy enough to warrant
being a SHOULD.