On 2016-01-18 02:05:48, BKB wrote:
Maybe it would also help to simply group rows from the same dataset together:
+-----+---------------------------------------------------------------------------------+------------+-------------+---------+---------+
| seq | name | rate | time | errors | samples |
+-----+---------------------------------------------------------------------------------+------------+-------------+---------+---------+
| 97 | {dataset=>"json:hash_int_1000",participant=>"JSON::Decode::Regexp::from_json"} | 6.4 | 1.6e+02ms | 0.00092 | 20 |
| 114 | {dataset=>"json:hash_int_1000",participant=>"Pegex::JSON"} | 15.6 | 64ms | 5.2e-05 | 20 |
| 104 | {dataset=>"json:hash_int_1000",participant=>"JSON::Decode::Marpa::from_json"} | 18 | 55.5ms | 0.00013 | 21 |
| 21 | {dataset=>"json:hash_int_1000",participant=>"JSON::PP::decode_json"} | 88.5 | 11.3ms | 9.2e-06 | 20 |
...
+-----+---------------------------------------------------------------------------------+------------+-------------+---------+---------+
| 148 | {dataset=>"hash_int_1000",participant=>"YAML::Old::Dump"} | 23.5 | 42.6ms | 5.2e-05 | 20 |
| 11 | {dataset=>"hash_int_1000",participant=>"JSON::PP::encode_json"} | 225 | 4.44ms | 2.2e-06 | 20 |
| 160 | {dataset=>"hash_int_1000",participant=>"YAML::Syck::Dump"} | 655 | 1.53ms | 1.4e-06 | 24 |
| 172 | {dataset=>"hash_int_1000",participant=>"YAML::XS::Dump"} | 671 | 1.49ms | 1.1e-06 | 20 |
...
+-----+---------------------------------------------------------------------------------+------------+-------------+---------+---------+
...
... and once this is done, the "dataset" value could be moved to a subtitle and "participant" would be the only value left in that column, so the table could look like this (a rough sketch of the idea follows the example):
+-----+----------------------------------+------------+-------------+---------+---------+
| seq | participant | rate | time | errors | samples |
+-----+----------------------------------+------------+-------------+---------+---------+
| json:hash_int_1000 |
+-----+----------------------------------+------------+-------------+---------+---------+
| 97 | JSON::Decode::Regexp::from_json | 6.4 | 1.6e+02ms | 0.00092 | 20 |
| 114 | Pegex::JSON | 15.6 | 64ms | 5.2e-05 | 20 |
| 104 | JSON::Decode::Marpa::from_json | 18 | 55.5ms | 0.00013 | 21 |
| 21 | JSON::PP::decode_json | 88.5 | 11.3ms | 9.2e-06 | 20 |
...
+-----+----------------------------------+------------+-------------+---------+---------+
| hash_int_1000 |
+-----+----------------------------------+------------+-------------+---------+---------+
...
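Roughly what I have in mind, as a quick plain-Perl sketch (not Bencher's actual code; the row data and column widths below are just invented from the example above):

    #!/usr/bin/env perl
    use strict;
    use warnings;

    # Hypothetical rows in the shape of the result table above; values are
    # copied from the example purely for illustration.
    my @rows = (
        { seq=>97,  dataset=>"json:hash_int_1000", participant=>"JSON::Decode::Regexp::from_json", rate=>6.4,  time=>"1.6e+02ms", errors=>"0.00092", samples=>20 },
        { seq=>114, dataset=>"json:hash_int_1000", participant=>"Pegex::JSON",                     rate=>15.6, time=>"64ms",      errors=>"5.2e-05", samples=>20 },
        { seq=>148, dataset=>"hash_int_1000",      participant=>"YAML::Old::Dump",                 rate=>23.5, time=>"42.6ms",    errors=>"5.2e-05", samples=>20 },
        { seq=>11,  dataset=>"hash_int_1000",      participant=>"JSON::PP::encode_json",           rate=>225,  time=>"4.44ms",    errors=>"2.2e-06", samples=>20 },
    );

    # Group rows by dataset (any ordering of the groups would do).
    my %by_dataset;
    push @{ $by_dataset{ $_->{dataset} } }, $_ for @rows;

    # Column widths; all cells left-aligned just to keep the sketch short.
    my @w    = (3, 32, 5, 9, 7, 7);
    my $sep  = '+' . join('+', map { '-' x ($_ + 2) } @w) . '+';
    my $fmt  = '|' . join('|', map { " %-${_}s " } @w) . "|\n";
    my $span = length($sep) - 4;   # width of a subtitle row spanning all columns

    print "$sep\n";
    printf $fmt, qw(seq participant rate time errors samples);
    for my $dataset (sort keys %by_dataset) {
        print "$sep\n";
        # The dataset name becomes a "subtitle" row spanning the table.
        printf "| %-${span}s |\n", $dataset;
        print "$sep\n";
        for my $row (sort { $a->{rate} <=> $b->{rate} } @{ $by_dataset{$dataset} }) {
            printf $fmt, @{$row}{qw(seq participant rate time errors samples)};
        }
    }
    print "$sep\n";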
I also don't know if the seq column is really needed here.