Gisle,
Thank you for your response!
I understand that HTTP RFC 2068 suggests that CRLF be used across the
wire to fully comply, and I believe it might even be appropriate to
convert multipart/form-data fields (that happen to be text/*) to CRLF
format so that everything across the raw transport is standard. But
there is nothing in RFC 1738 that suggests you may inject extra
characters during the encoding process for
application/x-www-form-urlencoded data. And "%0A" is just three plain
ASCII characters with nothing special in them, so it should be safe
across the wire as-is; hence the violation I referred to in my report.
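To make the difference concrete, here is a minimal sketch (using
URI::Escape directly rather than LWP's own code path, with made-up
data) of what I mean by the old versus the new encoding:

    use strict;
    use warnings;
    use URI::Escape qw(uri_escape);

    my $binary = "line1\x0Aline2";       # payload containing a bare LF

    # Old behavior as I understand it: the LF octet is escaped as-is.
    print uri_escape($binary), "\n";     # line1%0Aline2

    # 6.03-style behavior as I understand it: LF is expanded to CRLF
    # before escaping, which changes the bytes the server receives.
    (my $texty = $binary) =~ s/\x0A/\x0D\x0A/g;
    print uri_escape($texty), "\n";      # line1%0D%0Aline2

For a text field that difference is harmless enough, but for binary
content it changes the payload itself.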
However, I have seen SOME real browsers, such as IE, force textarea
input to CRLF before encoding it, which I believe the 6.03 behavior is
meant to emulate. I'm not sure what, if anything, the RFCs say about
that, where the idea comes from, or whether it is even proper. If that
behavior is proper, then yes, my patch would break it.
Unfortunately, I'm not sure how to tell LWP::UserAgent or HTTP::Request
that my content is not text and should not be corrupted in this case.
I just know that everything worked fine for years posting binary
inputs, and after upgrading to 6.03 it is suddenly broken. I suppose I
could double URL-encode the data on the client end and double-decode it
on the server end (sketched below) to work around the encoding issue in
HTTP::Request, but that would require changes on both ends and doesn't
allow for a gradual migration path. I'm just not sure what the best
solution is.
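Roughly, the workaround I have in mind looks like this (just a sketch;
the field name, URL, and the CGI server side are placeholders for
whatever the real application uses):

    # --- client side ---
    use strict;
    use warnings;
    use LWP::UserAgent;
    use URI::Escape qw(uri_escape);

    my $binary = "raw\x0Abytes\x00here";   # example binary payload
    my $ua     = LWP::UserAgent->new;

    # Pre-escape so no raw CR/LF is left for the form encoder to rewrite.
    my $res = $ua->post('http://example.com/upload',
                        { blob => uri_escape($binary) });

    # --- server side (separate CGI script) ---
    use strict;
    use warnings;
    use CGI;
    use URI::Escape qw(uri_unescape);

    my $q    = CGI->new;
    my $blob = uri_unescape(scalar $q->param('blob'));  # undo the extra layer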
And which RFC are you claiming the OLD behavior does not comply with?
Or do you have any other suggestions?