Subject: wishlist: repeated saving
Date: Thu, 20 Jun 2013 12:20:37 +0100
To: bug-Devel-NYTProf [...] rt.cpan.org
From: Zefram <zefram [...] fysh.org>
Separating this out from the Apache2::SizeLimit issue...
It ought to be possible to save profiling data without stopping collection
of profiling data, and it ought to be possible to save more than once in
one process. Each save operation saves the data that has been collected
since the previous save. Each save probably needs to go to a separate
file; you'll need nytprofmerge to combine them into a single file for
report generation.
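The envisioned workflow might look like the following command-line sketch. The per-save file names are hypothetical (the naming scheme would be up to the implementation); nytprofmerge and nytprofhtml are the tools already shipped with Devel::NYTProf:

```shell
# Suppose each save wrote an incremental data file:
#   nytprof.out.1, nytprof.out.2, ...
# Merge the per-save files into a single data file,
# then generate an HTML report from the merged file.
nytprofmerge -o nytprof-merged.out nytprof.out.*
nytprofhtml -f nytprof-merged.out
```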
The purpose of multiple saving is to get useful profiling data out of a
process that may die unpredictably in a way that D:NYTP doesn't catch.
This comes up a lot with Apache, where things like Apache2::SizeLimit
terminate a worker process after some request has been handled, with no
simple way to predict which request will do it. With multiple saving,
you can set up a PerlCleanupHandler, running after each request, that
saves data without stopping profiling. This means you can always pick up
data covering every request processed so far, regardless of the current
state of child processes. You then lose very little if you fail to
trap child termination. Processes can be killed uncleanly and you've
still got profiling data from everything they did except what they were
working on at the time of killing.
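The cleanup-handler arrangement described above might be sketched as follows. DB::save_profile_data() here is a hypothetical name for the requested save-without-stopping call; it does not exist in D:NYTP today, and the module/file names are illustrative only:

```perl
# In httpd.conf (illustrative):
#   PerlModule My::NYTProfSaver
#   PerlCleanupHandler My::NYTProfSaver::handler
package My::NYTProfSaver;
use strict;
use warnings;
use Apache2::Const -compile => qw(OK);

my $save_count = 0;

sub handler {
    my $r = shift;
    # Hypothetical API: flush the profile data collected since the
    # previous save to a fresh per-save file, without stopping
    # collection. Runs after every request, so data for all requests
    # handled so far survives even an unclean child termination.
    DB::save_profile_data(sprintf "nytprof.out.%d.%d", $$, ++$save_count);
    return Apache2::Const::OK;
}

1;
```

Including the pid and a per-process counter in the file name keeps saves from different worker processes from clobbering each other.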
You pointed at the existing ability to start and stop data collection
repeatedly at runtime. That doesn't do what I'm after, because it
doesn't save the data collected so far. Currently, saving profile data
once means that no more data can be saved from that process.
-zefram