Steffen Ullrich via RT <bug-IO-Socket-SSL@rt.cpan.org> wrote:
> <URL: https://rt.cpan.org/Ticket/Display.html?id=129463 >
>
> On Sun, 05 May 2019, 17:33:50, e@80x24.org wrote:
> > SSL_CTX_set_mode(3ssl) manpage states this can save around 34k
> > per idle connection.
> >
> > Contributed to OpenSSL by the Tor project, this feature has been
> > available since OpenSSL 1.0.0a and enabled by default in popular
> > projects such as: curl, nginx, Ruby, and (of course) tor.
> >
> > Patch is attached.
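(For reference, the change boils down to a one-line OpenSSL
call. Below is a minimal C sketch of the underlying API, with
a hypothetical make_ctx() helper and assuming OpenSSL 1.1.0+
for TLS_server_method(); the attached patch does the
equivalent through IO::Socket::SSL.)

    #include <openssl/ssl.h>

    /* Create a server context and allow OpenSSL to free the
     * ~34k of per-connection record buffers whenever a
     * connection goes idle, reallocating them on demand. */
    SSL_CTX *make_ctx(void)
    {
        SSL_CTX *ctx = SSL_CTX_new(TLS_server_method());

        if (ctx)
            SSL_CTX_set_mode(ctx, SSL_MODE_RELEASE_BUFFERS);
        return ctx;
    }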
Thanks for the quick response.
Interesting, I wasn't aware of CVE-2014-0198 specifically
(too many OpenSSL CVEs to keep track of), but it seemed to
only affect SSLv3 on ancient OpenSSL versions. I figure those
users have bigger problems, would've fallen over by now, and
probably aren't keeping IO::Socket::SSL up-to-date anyway :)
> Also, if this would be completely free of side effects then
> why would this option not be enabled by default in OpenSSL?
Good question; I would guess inertia.
> From my understanding of the code in OpenSSL this option
> should be useful only if nothing will be read/written for a
> long time but the SSL object is still allocated. If instead
> lots of reads and writes are done on the SSL object it will
> continuously free memory and allocate new memory, which
> actually might slow down the application. This is especially
> true if memory management itself has a large overhead: on
> some systems unused memory is actually returned to the
> system, which means that syscalls are involved which can
> slow everything down a lot.
In my experience, traffic and throughput on a single socket
are rarely (if ever) significant in terms of CPU usage.
Various malloc implementations have also improved in
throughput over the years, and they tend to avoid making
syscalls to release memory to the kernel.
So "a long time" could be as little as a few milliseconds,
which is already an eternity for buffers to sit idle.
What adds up is memory usage from multiple sockets, and being
able to reuse idle memory across different sockets means better
overall locality and fewer calls to mmap/brk to request memory
from the kernel.
Perhaps that memory reuse can increase the chance of a data
leak if there are other bugs in the code; but it may also
reduce that chance, since data is frequently clobbered rather
than sitting idle in buffers.
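If the alloc/free churn ever did show up in a profile, one
possible middle ground (an untested sketch with hypothetical
helper names, not something the patch does) would be toggling
the mode per connection instead of context-wide:

    #include <openssl/ssl.h>

    /* Hypothetical policy: busy connections keep their
     * buffers; once the event loop considers one idle, let
     * OpenSSL free them.  Note OpenSSL only releases a buffer
     * when it drains during a later read/write, not at the
     * moment the flag is set. */
    void conn_mark_idle(SSL *ssl)
    {
        SSL_set_mode(ssl, SSL_MODE_RELEASE_BUFFERS);
    }

    void conn_mark_busy(SSL *ssl)
    {
        SSL_clear_mode(ssl, SSL_MODE_RELEASE_BUFFERS);
    }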
> Do you have any real-world experience which suggests that this
> option should be enabled by default even though OpenSSL has it
> disabled by default? And which describes the actual use cases
> where this option is useful and where it is harmful?
I've used nginx for TLS for many years, and now run another
C10K server, yahns (which I wrote in Ruby), on my own sites.
I may replace yahns with another C10K HTTPS/NNTP+TLS server
written in Perl5.
I've yet to encounter a case on a public HTTPS server where
TLS traffic from a single socket caused performance problems.
Maybe it can happen with extremely high bandwidth and giant
files, but that doesn't describe most TLS users.
I'm not sure why OpenSSL does not enable it by default;
probably out of an abundance of caution, since it was "new"
at some point. But it is enabled in security-sensitive
projects such as Tor, which also has many concurrent
connections to manage.
Along the same lines, I'm glad you enabled
SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER by default. Ruby didn't
enable it until I added it, so I had to resort to a nasty
workaround in yahns.
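For anyone following along: by default, OpenSSL requires a
retried SSL_write() after SSL_ERROR_WANT_WRITE to pass the
exact same buffer address, failing with SSL_R_BAD_WRITE_RETRY
otherwise; the mode relaxes that so the pending data may live
at a new address on retry (e.g. after a GC or buffering layer
moves a string, as Ruby's can). A rough C sketch of the retry
loop this enables (hypothetical write_all() helper; polling
and error handling elided):

    #include <stdlib.h>
    #include <string.h>
    #include <openssl/ssl.h>

    /* With SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER set on the
     * context, each retry after SSL_ERROR_WANT_WRITE may
     * submit the pending data from a different address, as
     * long as the contents and length still match. */
    int write_all(SSL *ssl, const char *data, int len)
    {
        while (len > 0) {
            /* simulate a runtime that moves pending data:
             * every attempt uses a freshly allocated copy */
            char *copy = malloc((size_t)len);
            int n;

            if (!copy)
                return -1;
            memcpy(copy, data, (size_t)len);
            n = SSL_write(ssl, copy, len);
            free(copy);
            if (n > 0) {
                data += n;
                len -= n;
            } else if (SSL_get_error(ssl, n) !=
                       SSL_ERROR_WANT_WRITE) {
                return -1; /* real error */
            }
            /* else: poll for writability, then retry */
        }
        return 0;
    }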