RE: Compression and partial length chunks
Thank you for your feedback; it has cleared up a lot of confusion. I have
a couple of follow-up comments:
If I must hold the whole source data in memory before I can compress and
then encrypt it, then I would always know beforehand the lengths of the
resulting packets (literal, compressed, and encrypted). So when exactly
does the need arise to use PBLs? Even when I am streaming data, I cannot
avoid the basic constraint that the entire source data must be in memory
before it can be processed. It is not the case, as you pointed out, that
I am feeding chunks of the source data to the compression and encryption
routines and then sending them out as they become available.
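To make sure I understand what chunk-feeding would mean, here is a
minimal Python sketch of the producer side, using zlib's streaming API
purely as a stand-in for the OpenPGP compression layer (the function
name and the chunk sizes are my own invention, not anything from the
spec):

```python
import zlib

def compress_in_chunks(chunks):
    """Feed source data to the compressor piece by piece,
    emitting compressed output as soon as it is available."""
    comp = zlib.compressobj()
    for chunk in chunks:
        out = comp.compress(chunk)
        if out:
            yield out
    # Flush whatever the compressor is still buffering.
    yield comp.flush()

# The full source never has to be resident in memory at once:
source = (b"x" * 1024 for _ in range(4))
compressed = b"".join(compress_in_chunks(source))
```

As I understand it, this is the situation where the sender cannot know
the final packet length up front, which is presumably where PBLs come
in.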
My basic problem is that I cannot simultaneously hold both the encrypted
and decrypted data in memory, because I will be dealing with large
documents of up to 10GB. So, if an encrypted document comes in and it
uses PBLs, I still need to coalesce the chunks before I decrypt and
decompress them. In other words, I need to hold the entire
encrypted/compressed data in memory as well as the decrypted output.
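Incidentally, to pin down terminology: my reading of how a partial body
length octet decodes (per RFC 2440, section 4.2.2.4; I mention it as my
understanding since I am new to this) is:

```python
def partial_length(first_octet):
    # A first length octet in the range 224..254 marks a partial
    # body chunk whose length is 1 << (octet & 0x1F) octets,
    # i.e. always a power of two between 1 and 2**30.
    assert 224 <= first_octet <= 254
    return 1 << (first_octet & 0x1F)

smallest = partial_length(224)  # 1 octet
largest = partial_length(254)   # 2**30 octets (1 GiB)
```

So each chunk of a PBL-encoded packet carries its own length header,
which is what suggests to me they could be processed one at a time.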
I am new to the interesting world of PGP. I think it would be useful, if
at all possible, to be able to deal with the chunks independently so as
to avoid physical memory limitations.
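To make that wish concrete, the receiving side I have in mind would
look roughly like the sketch below, again with zlib standing in for the
real OpenPGP layers (a real implementation would also have to handle
the packet framing and per-chunk decryption, which I have omitted):

```python
import zlib

def decompress_in_chunks(chunks):
    """Process incoming chunks one at a time; only the current
    chunk plus the decompressor state is resident in memory."""
    decomp = zlib.decompressobj()
    for chunk in chunks:
        out = decomp.decompress(chunk)
        if out:
            yield out
    tail = decomp.flush()
    if tail:
        yield tail

# Simulate a stream arriving in small pieces:
data = zlib.compress(b"hello" * 2000)
pieces = (data[i:i + 64] for i in range(0, len(data), 64))
result = b"".join(decompress_in_chunks(pieces))
```

This is the kind of chunk-independent processing I was hoping PBLs
would permit, without first coalescing everything.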
If you can provide additional feedback, I would be most grateful.