On Jun 8, 2012, at 5:28 PM, Eric Wong <normalperson / yhbt.net> wrote:

> I like Net::HTTP being able to inflate compressed responses.
>
> However, I think doing this by default is exploitable by an evil server.
> A server could compress a huge file of zeroes to trigger an
> out-of-memory conditions in existing code that uses Net::HTTP.

Net::HTTP#get already does this by default; this patch (and #6494) makes it the default for all requests.
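
For reference, here's roughly what the existing default already looks like (hostname and path are placeholders, nothing specific to the patch):

  require 'net/http'

  http = Net::HTTP.new('example.com', 80)
  # Net::HTTP#get already sends an Accept-Encoding header for gzip/deflate
  # and transparently inflates the body, so response.body is plain text
  # even when the server compressed it on the wire.
  response = http.get('/index.html')
  puts response.body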

If you aren't using the API for handling a compressed 100MB response (Net::HTTPResponse#read_body with a block), you probably can't handle a raw 100MB response either, so what is the difference besides the bandwidth cost to the server?
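
To illustrate the streaming case I mean (host, path, and output file are placeholders), roughly:

  require 'net/http'

  Net::HTTP.start('example.com', 80) do |http|
    # request_get with a block yields the response before the body is read
    http.request_get('/huge-file') do |response|
      # read_body with a block hands over each chunk as it arrives instead
      # of buffering the whole (possibly huge) body in memory
      response.read_body do |chunk|
        File.open('/tmp/huge-file', 'ab') { |f| f.write(chunk) }
      end
    end
  end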