On Fri May 18 13:41:04 2018, RIBASUSHI wrote:
> Regarding the second portion I was referring to multiple corner cases of
> multiple nested txn_do blocks interspersed with multiple txn_scope_guard
> instantiations
BR is already doing quite a bit in terms of transaction_depth checks, so
I'm surprised there are even edge cases where this wouldn't work.
> plus the general ( not yet fully addressed ) problem of
> "trapped exception" within a txn.
I figured that would bubble up to the outermost BR layer, since the
internal txn loops would immediately short-circuit to running the $cref,
uneval'd and outside of the _run recursion.
> With that said - if you are willing to spend the considerable time
> necessary to "get it perfect" - I will try my best to find the time to
> gather whatever test cases I have across various notes, to give you a
> full starting point.
While I can't commit to that level of effort, as the tuits aren't my own,
I would be interested in seeing those kinds of test cases if you have them
on hand. Maybe they aren't so bad to solve. And if they exist in BR,
they probably also exist in DBIx::Connector::Retry, since the two share
design concepts.
> Sure thing - though I must disagree with "risk abusing private
> interfaces" part. You could simply start using BR today: as long as you
> are willing to write a test on your end and ideally have it somewhere on
> CPAN, so that I can catch regressions when doing revdeps testing.
>
> That is - if the sub-API works for you well *as-is*, then there is
> little risk for you using it, as any future changes will be carefully
> planned and announced.
Fair enough, and thanks for being amicable.
> Once I do open and document it - changes that are not strictly additions
> are simply no longer possible. Given how central this interface is: I am
> being extra cautious and conservative.
Understood, though it seems like you're kind of in that state already. Any
major change to the interface could be disruptive, despite whatever flaws
it may already have. Even if you go in the direction of subtracting or
changing things, it would probably come with compatibility layers to
support the old interface.
> This is just a gut feeling at this point, but both of the above
> approaches seem problematic in terms of "sharing state" perspective. I
> can not put my finger on it just yet, and it has been a while since I've
> been in this headspace. But I did try to do that when I designed BR
> originally and I had to back out of that design due to...<???> I would
> need to look through my reflogs to find out the exact details.
Possibly because of a permanent retry_handler coderef? Or are we talking
about a full BR object in a block_runner attribute?
> What is likely to be more "correct" instead is something like:
>
> txn_do( \%optional_params, sub { ... }, @optional_sub_args )
That does leave open the possibility of temporarily increasing retries or
changing handlers for a single method call. Although, I would expect a
need for a more permanent bit of settings, too.
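To illustrate, a call under that proposed signature might look like the
sketch below. The keys `retry_limit` and `retry_handler` are purely
illustrative placeholders, not a confirmed part of any BR or txn_do
interface:

```perl
# Sketch only: assumes the proposed txn_do( \%optional_params, $cref, @args )
# form from above. All hash keys here are hypothetical.
my @rows = $schema->txn_do(
    {
        retry_limit   => 5,    # hypothetical: bump retries for this call only
        retry_handler => sub { $_[0]->failed_attempt_count < 5 },
    },
    sub {
        my ($artist_name) = @_;
        return $schema->resultset('Artist')
                      ->search({ name => $artist_name })
                      ->all;
    },
    'Some Band Name',          # passed through as @optional_sub_args
);
```

The nice property of the leading hashref is exactly the one discussed: a
one-off override doesn't disturb whatever permanent settings live on the
storage/connector object.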
> However! This needs further design/planning from the other perspective
> of allowing for finalizers with proper composition ( "do X if and only
> if we managed to commit, and do Y if and only if we rolled back" ).
Now that sounds like feature scope creep :) But, since \%optional_params is
a hashref, it's infinitely expandable to whatever extra params are needed.
Although, if BR manages to commit, which is really the default goal,
the caller can just run the next operation post-txn_do. If BR rolls back,
the connection is probably not reliable, but a finalizer could still be
called in the rollback eval.
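Composed finalizers could then ride along in the same hashref. Again, the
`on_commit` / `on_rollback` keys below are made-up placeholders for the
sake of discussion, not anything BR currently supports:

```perl
# Hypothetical commit/rollback finalizers hanging off the same params hashref.
$schema->txn_do(
    {
        on_commit   => sub { warn "txn committed\n" },   # ran iff commit succeeded
        on_rollback => sub { warn "txn rolled back\n" }, # ran iff we rolled back
    },
    sub {
        $schema->resultset('Order')->create( \%new_order );
    },
);
```

Whether on_rollback should fire inside the rollback eval (where the
connection may already be unreliable) or after it is exactly the kind of
composition question raised above.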
> One last thing to address is that if you are on a tight timeline - you
> should *probably* go with just using BR as is. I am realistically
> another month away from attaining a large enough chunk of time to sit
> down and fix the ZR-reported performance regressions with what is
> currently in master and another couple months of ironing out random
> most-convoluted-context regressions. Again - I am not trying to dissuade
> you from participating in fixing this - it needs to be done and it will
> be done. I am simply managing walltime expectations up-front.
Okay, I think I'll go that route for now, but I'd still be open to
improving BR in the future. I agree that the problem solving is probably
95% implementation discussion and 5% code.