
This queue is for tickets about the Bencher-Scenario-Serializers CPAN distribution.

Report information
The Basics
Id: 111269
Status: resolved
Priority: 0/
Queue: Bencher-Scenario-Serializers

People
Owner: Nobody in particular
Requestors: bkb [...] cpan.org
Cc:
AdminCc:

Bug Information
Severity: (no value)
Broken in: (no value)
Fixed in: (no value)



Subject: Can't understand this table
This table is a bit confusing:

https://metacpan.org/pod/Bencher::Scenario::Serializers#SAMPLE-BENCHMARK-RESULTS

Maybe you could organize deserializing and serializing into two different tables?

I also think different data deserves different tables.
On 2016-01-18 02:05:48, BKB wrote:
> This table is a bit confusing:
>
> https://metacpan.org/pod/Bencher::Scenario::Serializers#SAMPLE-BENCHMARK-RESULTS
>
> Maybe you could organize deserializing and serializing into two different tables?
>
> I also think different data deserves different tables.
Maybe it would also help to simply group same datasets together:

+-----+--------------------------------------------------------------------------------+------+-----------+---------+---------+
| seq | name                                                                           | rate | time      | errors  | samples |
+-----+--------------------------------------------------------------------------------+------+-----------+---------+---------+
| 97  | {dataset=>"json:hash_int_1000",participant=>"JSON::Decode::Regexp::from_json"} | 6.4  | 1.6e+02ms | 0.00092 | 20      |
| 114 | {dataset=>"json:hash_int_1000",participant=>"Pegex::JSON"}                     | 15.6 | 64ms      | 5.2e-05 | 20      |
| 104 | {dataset=>"json:hash_int_1000",participant=>"JSON::Decode::Marpa::from_json"}  | 18   | 55.5ms    | 0.00013 | 21      |
| 21  | {dataset=>"json:hash_int_1000",participant=>"JSON::PP::decode_json"}           | 88.5 | 11.3ms    | 9.2e-06 | 20      |
| ... |
+-----+--------------------------------------------------------------------------------+------+-----------+---------+---------+
| 148 | {dataset=>"hash_int_1000",participant=>"YAML::Old::Dump"}                      | 23.5 | 42.6ms    | 5.2e-05 | 20      |
| 11  | {dataset=>"hash_int_1000",participant=>"JSON::PP::encode_json"}                | 225  | 4.44ms    | 2.2e-06 | 20      |
| 160 | {dataset=>"hash_int_1000",participant=>"YAML::Syck::Dump"}                     | 655  | 1.53ms    | 1.4e-06 | 24      |
| 172 | {dataset=>"hash_int_1000",participant=>"YAML::XS::Dump"}                       | 671  | 1.49ms    | 1.1e-06 | 20      |
| ... |
+-----+--------------------------------------------------------------------------------+------+-----------+---------+---------+
...
Once this is done, the "dataset" value could be moved to a subtitle, and "participant" would be the only value left here, so the table could look like this:

+-----+----------------------------------+------+-----------+---------+---------+
| seq | participant                      | rate | time      | errors  | samples |
+-----+----------------------------------+------+-----------+---------+---------+
| json:hash_int_1000                                                            |
+-----+----------------------------------+------+-----------+---------+---------+
| 97  | JSON::Decode::Regexp::from_json  | 6.4  | 1.6e+02ms | 0.00092 | 20      |
| 114 | Pegex::JSON                      | 15.6 | 64ms      | 5.2e-05 | 20      |
| 104 | JSON::Decode::Marpa::from_json   | 18   | 55.5ms    | 0.00013 | 21      |
| 21  | JSON::PP::decode_json            | 88.5 | 11.3ms    | 9.2e-06 | 20      |
| ... |
+-----+----------------------------------+------+-----------+---------+---------+
| hash_int_1000                                                                 |
+-----+----------------------------------+------+-----------+---------+---------+
...

I also don't know if the seq column is really needed here.
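The grouping suggested above can be sketched in a few lines of Perl: bucket the result rows by their "dataset" key, then print one small table per dataset with the dataset name as a subtitle. The row data below is invented for illustration and the column layout is simplified; Bencher's real result structure and formatting may differ.

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Hypothetical benchmark rows, shaped like the entries in the table above.
my @rows = (
    { dataset => "json:hash_int_1000", participant => "Pegex::JSON",           rate => 15.6 },
    { dataset => "json:hash_int_1000", participant => "JSON::PP::decode_json", rate => 88.5 },
    { dataset => "hash_int_1000",      participant => "YAML::Syck::Dump",      rate => 655  },
    { dataset => "hash_int_1000",      participant => "JSON::PP::encode_json", rate => 225  },
);

# Bucket the rows by dataset.
my %by_dataset;
push @{ $by_dataset{ $_->{dataset} } }, $_ for @rows;

# Print one mini-table per dataset, slowest participant first
# (the original output also lists lowest rate first).
for my $ds (sort keys %by_dataset) {
    print "== $ds ==\n";
    for my $row (sort { $a->{rate} <=> $b->{rate} } @{ $by_dataset{$ds} }) {
        printf "  %-30s %8.1f/s\n", $row->{participant}, $row->{rate};
    }
}
```

With the rows bucketed like this, dropping the seq column falls out naturally, since each sub-table carries its dataset in the subtitle instead of repeating it per row.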
I've split the serializer and deserializer participants into separate tables, and removed the 'seq' field by default. I haven't split the results for each dataset, though. Hope it's more readable now.